
enable more robust multiple dispatch with plum #415

Open · wants to merge 1 commit into base: master
Conversation

@seeM (Contributor) commented Jun 1, 2022

This PR is an experiment that builds fastcore.transform with the same interface (all tests passing) on top of plum instead of fastcore.dispatch. The ideal outcome of this would be a "yes/no" decision on whether to adopt plum.

It was mostly seamless except for:

  1. The complexity in _mk_plum_func. I don't think the library was designed for dynamically adding plum.Functions to classes after the class body is evaluated. The workaround isn't too hairy though.
  2. I couldn't find a good way to get plum's type conversion system to match the desired Transform interface, which is to automatically convert the result of a tfm method call to the same type as the input, so it's still relying on fastcore.dispatch.retain_type.
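For concreteness, the behaviour that `retain_type` provides (point 2) can be sketched in a few lines of plain Python. This is an illustrative stand-in, not fastcore's actual implementation:

```python
def retain_type(res, inp):
    "Recast `res` to the input's subclass type, e.g. so a `TensorImage` input yields a `TensorImage` output."
    # Only recast when the input is an instance of a strict subclass of the result's type.
    if type(inp) is not type(res) and isinstance(inp, type(res)):
        return type(inp)(res)
    return res

class MyStr(str): pass

retain_type('hello'.upper(), MyStr('x'))  # returns a MyStr instance
```

The point of the second issue above is that plum's conversion system keys conversions on annotated return types, whereas this behaviour depends on the runtime type of the first argument, so it doesn't map cleanly onto `add_conversion_method`.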

Other points worth noting:

  1. I noticed that there aren't many code comments in these parts of the repo. I've added some here to guide the review process, but happy to remove them if preferred.
  2. fastcore.dispatch's convention was to use tuples of types as unions; plum's is to use Union. E.g. (str,list) becomes Union[str,list].
  3. plum searches parent classes by default, which is great.
  4. I replaced the dynamic creation of tfm methods in _TfmMeta.__new__ with methods on Transform. I think this is simpler and I haven't found a downside yet.
  5. Dispatching classmethods, staticmethods, and normal methods requires a specific order (beartype/plum#35: "How to use @dispatch with @staticmethod inside a class?").
  6. plum does not work with future annotations (beartype/plum#41), though the author has said that it shouldn't be too difficult a fix.
  7. plum does not work with autoreload (beartype/plum#38: "FR: Make plum work with ipython's autoreload").
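The tuple-to-Union conversion in point 2 is mechanical, since `Union` accepts a tuple subscript. A minimal sketch (`tuple_to_union` is a hypothetical helper name, not fastcore's API):

```python
from typing import Union

def tuple_to_union(hints):
    "Map fastcore's tuple-of-types annotation convention onto plum's Union convention."
    return {k: Union[v] if isinstance(v, tuple) else v for k, v in hints.items()}

def f(x: (str, list), y: int): pass  # fastcore-style: the tuple means "str or list"

# Read __annotations__ directly; typing.get_type_hints would reject the tuple.
hints = tuple_to_union(f.__annotations__)
# hints['x'] == Union[str, list]
```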

It's also worth considering whether plum enables new interfaces that would've otherwise been very difficult. For example, it supports parametric classes and hooking into its type inference system. I haven't given this much thought yet.


@jph00 (Contributor) commented Jun 3, 2022

Many thanks for looking into this! I think the prototype looks encouraging.

Could you please take a look at the CI failures? The fastai integration tests are failing. If you clone the fastai repo and do an editable install of it, you can then run make test in that repo to run the tests directly.

I noticed that there aren't many code comments in these parts of the repo. I've added some here to guide the review process, but happy to remove them if preferred.

Feel free to add comments anywhere that the code wasn't that clear, such that future readers won't have to do the digging that you did.

fastcore.dispatch convention was to use tuples of types as unions, plum's is to use Union. E.g. (str,list) becomes Union[str,list].

We're moving to unions too, so that's fine. We always write them as str|list however, and use futures annotations.

I replaced dynamic creation of tfm methods from _TfmMeta.new with methods on Transform

Great.
I'd always prefer avoiding metaclasses where possible.

plum does not work with future annotations

That's a blocker, since we use them a lot. I'll set this PR to draft since we can't merge it until there's a release of plum with that fixed.

plum does not work with autoreload

That would be really nice to fix too.

@jph00 jph00 marked this pull request as draft June 3, 2022 19:16
@seeM (Contributor, Author) commented Jun 3, 2022

Thanks so much for the response!

I took a look at the integration test logs and it looks like it's the clash with future annotations 🥲

Will try to see how hard it'd be to fix that as well as the autoreload issue, and try to get in touch with plum's maintainer too.

@seeM (Contributor, Author) commented Jun 7, 2022

@jph00 a few updates on progress here:

All fastai tests are now passing locally 🚀! (Except 20a_distributed which fails with OSError: [Errno 24] Too many open files. I suspect this is unrelated but I'll ask for help on Discord.)

This is thanks to plum now supporting future annotations! 🎉 @wesselb has been very responsive and supportive throughout (@wesselb - just wanted to say thanks, but please feel free to unsubscribe if you like). See this issue for details. There was also a small bug with operator.{attr,item}getter which was fixed very quickly as well.

Next steps

Atm this PR requires changes to the fastai repo, specifically to handle the convention of tuple annotations representing unions. @jph00 do you think it's worth fastcore supporting the tuple convention for a smoother release flow - or is it better to do concurrent branches/PRs on fastcore and fastai? In any case, I'll make the PR on the fastai repo next.

After having worked a bit more with these libs, I think I can clean up the current implementation some more. I also have a few more test cases I want to add to fastcore from the plum future annotations thread - fastai uses Transforms and dispatch in a few very interesting ways! This branch currently also has a bunch of TODOs hanging around.

There is one outstanding issue: _TfmMeta manually sets Transform.__signature__ which breaks the ability to dispatch a partial(Transform). I have a few ideas here.

There is ongoing discussion about getting plum to work with autoreload here. I suspect it might not affect our use-cases much but I haven't really dug into it yet. (We manually define plum.Functions for the tricky Transform behaviours. This issue mostly affects the dispatch tables used by a global plum.Dispatcher on class functions.)

@PhilipVinc commented

Hi @seeM, I'm a plum contributor.

I'm chirping in because I saw this thread and noticed that you mentioned

I couldn't find a good way to get plum's type conversion system to match the desired Transform interface, which is to automatically convert the result of a tfm method call to the same type as the input, so it's still relying on fastcore.dispatch.retain_type.

Have you seen how return-type annotations work in plum? It seems to me they would solve this issue. See for example:

In [1]: import plum

In [2]: @plum.dispatch
   ...: def test(a: int) -> float:
   ...:     return 2*a
   ...:

In [3]: plum.add_conversion_method(type_from=int, type_to=float, f=float)

In [4]: test(1)
Out[4]: 2.0

@seeM (Contributor, Author) commented Jun 7, 2022

Hi @PhilipVinc, thanks for the response! I'm not at a PC atm so can't share any code examples just yet.

Yep, we’re using add_conversion_method atm. What I was referring to was that, if you create a fastcore.Transform from a function without a return annotation (a Transform can roughly be thought of as a plum.Function here), the result of the Transform call will be converted to the type of the first argument.

I don’t think there’s a neat way to configure plum’s type conversion system to achieve that.

I’m not even sure what’d be a good interface for that just yet 😄 perhaps to be able to provide a custom default convert function to a Dispatcher or Function? I’d like to give this some more thought. Does this make sense?

@PhilipVinc commented
Ah I see.
In Julia's language (which heavily inspired plum), what you are trying to express would be

fun(x::T, y, z)::T where {T}

Type parameters are in general unsupported by plum, but I would love to see them supported.
I think it would be quite a bit of work, however, to get them working. Probably using the recently introduced TypeVar machinery in typing, which is there for a similar necessity, would be a good starting point... But indeed, someone needs to put in the work for that.
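For reference, Python's typing module can already express that signature with a TypeVar, even though plum doesn't dispatch on type variables. A sketch of the declaration only:

```python
from typing import TypeVar

T = TypeVar('T')

# The Julia signature `fun(x::T, y, z)::T where {T}` reads like this in Python's
# type system; the annotation is declarative only, and plum does not (yet)
# dispatch on it or enforce the return conversion.
def fun(x: T, y, z) -> T:
    return x
```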

@seeM (Contributor, Author) commented Jun 8, 2022

Ahhh yes that’s exactly it - can’t believe I missed that! Python does indeed support type variables now. I’d be keen to poke around plum to see what it would take to support them, but agree that’s definitely a longer term project.

@wesselb commented Jun 9, 2022

Just briefly commenting that experimental support for autoreload has been included in the latest release v1.6 thanks to #44 by @PhilipVinc.

On the topic of type parameters, it would be super awesome to have those included. I'm currently time constrained because I'm submitting my dissertation in the next month (ish), but I've planned a major overhaul of Plum's internals for after that to fix some longstanding suboptimal design choices made early on. If at all possible, I'd be willing to look into supporting type parameters in a perhaps initially limited extent.

@jph00 (Contributor) commented Jun 9, 2022 via email

@seeM (Contributor, Author) commented Jun 11, 2022

Agreed, many thanks to @wesselb and @PhilipVinc for the amazing contributions 😄 it's been a privilege working with fastcore and plum!

@jph00 this is now a drop-in replacement supporting all existing functionality, and ready for the GitHub action to run and for review if that passes (fastcore and fastai tests pass locally). It is still of course a prototype, so keen to hear your thoughts

(Review threads on setup.py and fastcore/transform.py, now resolved.)
@jph00 (Contributor) commented Jun 13, 2022

OK running the CI now - nice job on getting to this point so quickly!

@seeM (Contributor, Author) commented Jun 14, 2022

@jph00 fce1adb fixes the Python 3.7 incompatibility that broke CI. Tests pass locally on Python 3.7.

@seeM seeM marked this pull request as ready for review June 16, 2022 13:30
@hamelsmu hamelsmu added the enhancement New feature or request label Jun 16, 2022
@hamelsmu hamelsmu requested a review from jph00 June 16, 2022 20:12
@hamelsmu hamelsmu changed the title prototype plum-dispatch for fastcore.transform enable more robust multiple dispatch with plum Jun 16, 2022
@hamelsmu (Contributor) commented Jun 16, 2022

I refactored the title and added a label to this PR so that, when/if it is merged, it will show up in the Change Log correctly.

@hamelsmu (Contributor) commented

@jph00, friendly reminder: this PR is ready for review.

@seeM (Contributor, Author) commented Jun 27, 2022

We chatted on Discord about extending this prototype to use plum everywhere, not just for Transform. I'm still working on that. I'll make this a draft PR again. Sorry for the confusion!

@seeM seeM marked this pull request as draft June 30, 2022 04:37
@seeM seeM force-pushed the refactor-plum-dispatch branch from fce1adb to aad948e Compare July 1, 2022 03:39
@seeM seeM force-pushed the refactor-plum-dispatch branch 2 times, most recently from 6df0281 to 2af8d1d Compare July 1, 2022 11:46
@seeM seeM marked this pull request as ready for review July 1, 2022 11:59
@seeM (Contributor, Author) commented Jul 1, 2022

@hamelsmu @jph00 this is ready for review. I suggest reviewing #424 and #425 first.

@hamelsmu (Contributor) commented Jul 1, 2022

@seeM there are some conflicts now, looks like you might want to merge master into this branch etc.

@seeM seeM force-pushed the refactor-plum-dispatch branch from 2af8d1d to 0c53871 Compare July 2, 2022 00:07
@seeM (Contributor, Author) commented Jul 2, 2022

@hamelsmu Done!

@jph00 (Contributor) commented Jul 2, 2022

Amazing job @seeM :D

So, now that you've been on this journey, what do you think? Are there improvements in using plum over our home-made approach? Are there downsides?

There's still quite a bit of code here, despite the heavy lifting being pushed out to plum. Are there changes we could or should make to our interface to take better advantage of plum, moving closer to their API rather than replicating our existing one?

Something to think about too: fastcore now has its first external dependency. Should we perhaps move this into a new project called fastdispatch?

@seeM (Contributor, Author) commented Jul 2, 2022

@jph00, TL;DR: Switching to plum would be a bet that multiple dispatch could lead to powerful use-cases in future. It wouldn't improve much today. Someone with experience using both fastai and multiple dispatch could better understand the likelihood of that bet paying off. Unfortunately I'm not yet in that position, but I'm happy to tinker. I'd love to hear your candid thoughts!

Overview

Pros:

  1. The biggest improvement is multiple rather than dual dispatch. Whether this is worthwhile depends on how much multiple dispatch could improve fastai. I'm not sure, since I don't have sufficiently deep experience with either fastai or multiple dispatch. Since fastai's current design is heavily class-based, this may also require a large design shift. See Wessel's lab ("generic interface for linear algebra backends") for a really neat example of what's possible.
  2. plum searches base classes by default, without having to use metaclasses as we do.

Cons:

  1. The biggest downside atm is that we still have lots of code here. I think we can bring a lot of these extensions into plum, ending up with ~20 lines here (see the next section).
  2. plum uses compiled C code, so we lose parts of the stack trace.
  3. There are constraints to dispatching over static- and class-methods. See: How to use @dispatch with @staticmethod inside a class? beartype/plum#35. This hasn't affected any fastai use-cases.

Details

Bringing these extensions into plum

Here's a rough sketch of the code we'd need if we brought most of our extensions into plum (excluding fastcore's casting functions):

# Assumes plum's `Function` and `Dispatcher` plus fastcore's `copy_func`,
# `type_hints` and `delegates` helpers are in scope.
from typing import Union

def _eval_annotations(f):
    "Evaluate future annotations before passing to plum, to support the backported union operator `|`"
    f = copy_func(f)
    for k, v in type_hints(f).items(): f.__annotations__[k] = Union[v] if isinstance(v, tuple) else v
    return f

class FastFunction(Function):
    @delegates(Function.dispatch)
    def dispatch(self, f=None, **kwargs): return super().dispatch(_eval_annotations(f), **kwargs)

class FastDispatcher(Dispatcher):
    function_cls = FastFunction
    @delegates(Dispatcher.__call__, but='method')
    def __call__(self, f, **kwargs): return super().__call__(_eval_annotations(f), **kwargs)

typedispatch = FastDispatcher()

We currently extend plum as follows:

  1. More concise repr(Function). There's already an issue: Make Printing of Types Friendlier beartype/plum#47.
  2. Evaluating future annotations before passing to plum to support backported | operator. Currently by overriding Dispatcher.__call__ and Function.dispatch - I think that's fine as is.
  3. Function.__getitem__ is a convenient wrapper of Function.invoke which was to avoid a breaking change for fastai's show_batch. It could instead use invoke (see the next section).
  4. Use a custom Function in Dispatcher (to support points 1-3). plum doesn't provide a nice hook for this so we have to bring in the entire Dispatcher._get_function and change one symbol.
  5. New feature: Dispatcher.to for late dispatching to class instance methods. I suspect this would be a useful feature to be brought into plum as well, and that there is a simpler way to implement it if I had a better understanding of plum's internals.

Using invoke instead of __getitem__

We could use Function.invoke instead of FastFunction.__getitem__. While the latter is convenient, I don't know if it's necessary over the more explicit former. This is currently only used by fastai's show_batch.

That is:

# This
show_batch[object]
# Becomes
show_batch.invoke(object, object, object)

The three objects are needed because the base show_batch is defined with three positional non-defaulting parameters, and plum dispatches over all of them.

We could also use keyword-only parameters to indicate which shouldn't be dispatched:

@typedispatch
def show_batch(x, y, *, samples, ctxs=None, max_n=9, **kwargs): pass

Then we can do:

show_batch.invoke(object, object)
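The rule behind this, that only positional, non-variadic parameters participate in dispatch, can be checked with a small stdlib sketch (`dispatch_params` is a hypothetical helper for illustration, not plum's API):

```python
import inspect

def dispatch_params(f):
    "Names of the parameters that would participate in dispatch: positional, non-variadic ones."
    return [n for n, p in inspect.signature(f).parameters.items()
            if p.kind in (p.POSITIONAL_ONLY, p.POSITIONAL_OR_KEYWORD)]

def show_batch(x, y, *, samples, ctxs=None, max_n=9, **kwargs): pass

dispatch_params(show_batch)  # ['x', 'y']
```

With the keyword-only signature, only x and y count, so invoke needs two types instead of three.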

@jph00 (Contributor) commented Jul 3, 2022

Wow fantastic explanation @seeM , thanks so much.

As a research lab, I think we should be focussing on things that allow us to do interesting research. So this change fits our mission.

Can you provide more detail re "plum uses compiled C code"? I don't see C code in the plum repo.

My suggested path is:

  • Move this PR to instead create new modules in this new repo: https://github.com/fastai/fastdispatch . I've made you an admin. You could use nbprocess_new to get it started, and then add a notebook to it with this functionality
  • Change fastcore's typedispatch to use invoke instead of __getitem__, and for now put a deprecation warning on __getitem__ and have it call invoke
  • In a case where there's no type annotation, could plum be changed to not require an explicit object in that position when calling invoke? That would maintain compatibility with fastcore typedispatch AFAICT, and IMO is a better dev UX
  • Move stuff from this new fastdispatch to plum for anything that @wesselb is OK with. For other stuff, leave it in fastdispatch
  • Once the previous step is done as best as we can, change fastai to use fastdispatch. If at this point there's almost no code in fastdispatch, move that code into fastai and delete the fastdispatch repo, adding a dep in fastai to plum (and otherwise we'll make fastai dep on fastdispatch)
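The deprecation shim in the second bullet could look roughly like the following self-contained sketch (a hypothetical stand-in class with a toy dispatch table, not plum's actual Function):

```python
import warnings

class InvokeShim:
    "Toy function object showing how `f[types]` could defer to `f.invoke(*types)` with a warning."
    def __init__(self, methods):
        self.methods = methods  # {(type, ...): callable}

    def invoke(self, *types):
        "Look up the implementation registered for exactly these types."
        return self.methods[types]

    def __getitem__(self, types):
        warnings.warn("`f[types]` is deprecated; use `f.invoke(*types)` instead",
                      DeprecationWarning, stacklevel=2)
        if not isinstance(types, tuple): types = (types,)
        return self.invoke(*types)

f = InvokeShim({(object,): lambda x: x * 2})
f.invoke(object)(3)  # 6; f[object](3) gives the same result plus a DeprecationWarning
```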

How does that all sound? And @wesselb how do you feel about the ideas for stuff that @seeM mentions above?

@seeM (Contributor, Author) commented Jul 4, 2022

@jph00, thanks for the kind words and the thoughtful suggestions!

plum.function is compiled with Cython.¹ I'm not sure how much of a performance boost this gives fastai, since it's a high-level library, or whether it's worth losing inspect support and stack frames. We might want to turn it off if possible.

I'm happy to move forward as you suggested. I have a few questions though:

  • I'm a bit confused about the dependency graph. Would both fastcore and fastai depend directly on fastdispatch? Since fastcore.transform depends on fastcore.dispatch, and fastai depends on both of those.
  • Is the idea to leave the existing dispatch implementation in fastcore for now?
  • Do you mean "change fastai's (rather than fastcore's) usage of typedispatch to use invoke instead of __getitem__"? The only current use-case of __getitem__ is in show_batch in fastai.
  • I'll need to give some more thought to your invoke suggestion. I'm finding it tricky to think through its ramifications. There was an interesting discussion about removing invoke from Julia. Jeff Bezanson (co-creator of Julia) argued that most uses of invoke could be better addressed by having another function with its own name - in our case, show_batch_default. I quite like that too. The proposal fell through though.

Footnotes

  ¹ plum.function is included as an extension in setup.py. setuptools checks whether Cython is installed and, if so, uses it to build extensions. Since Cython is also included as a build dependency in pyproject.toml, that will always be the case.

@jph00 (Contributor) commented Jul 4, 2022

Oh I see - it's not really using any of the typed features of cython, but just doing some AOT compilation with it. I don't think that's of any benefit to fastai, so if there's an easy way to turn it off, I think that's a bit better.

I'm a bit confused about the dependency graph. Would both fastcore and fastai depend directly on fastdispatch? Since fastcore.transform depends on fastcore.dispatch, and fastai depends on both of those.

fastcore.transform would be moved to fastai. That's where it used to be.

Is the idea to leave the existing dispatch implementation in fastcore for now?

Yes, and give it a deprecation warning.

Do you mean "change fastai's usage of typedispatch to use invoke instead of getitem"? The only current use-case of getitem is in show_batch in fastai.

Yes.

@jph00 (Contributor) commented Jul 4, 2022

BTW I wonder if it would be worth adding a "plum-purepython" package to pypi and conda -- one that doesn't use C extensions? That would be a bit easier to deal with IMO.

@seeM (Contributor, Author) commented Jul 5, 2022

@jph00 Sorry, I totally missed your reply!

Double-checking that my understanding is correct:

  • This PR's code moves to fastdispatch
  • As much of fastdispatch functionality as @wesselb is OK with is brought into plum
  • fastcore.transform moves to fastai, refactored to use fastdispatch
  • fastcore.dispatch remains, but raises a deprecation warning (perhaps pointing to fastdispatch)
  • fastcore would still have no external dependencies

I like the plum-purepython idea. I'll move that discussion to plum's issue tracker.

Shall we close this PR then?

@jph00 (Contributor) commented Oct 11, 2022 via email

@wesselb commented Oct 18, 2022

Hey @jph00! Just a quick clarification that all Cython compilation has since been removed from Plum entirely, since the gains were only minor. Plum is now a pure-Python package and doesn’t have Cython as a dependency anymore. It would, however, definitely be possible to reintroduce compilation whenever the user happens to have Cython installed.

@jph00 jph00 reopened this Oct 18, 2022
@jph00 (Contributor) commented Oct 18, 2022

Thanks @wesselb - I've reopened this now. We're currently focused on the latest fast.ai course, but we'll come back to this PR in the new year.

@wesselb commented Oct 23, 2022

Best of luck with running the latest fast.ai course!

Coming back to this in the new year is perfect timing. Plum should undergo some major improvements between now and then.

@wesselb commented Mar 28, 2023

Plum should undergo some major improvements between now and then.

I'm very happy to say that these improvements have now been merged, and that there's room for any additional features/improvement, should this PR require that.

@jph00 (Contributor) commented Mar 28, 2023

Do you have any thoughts @wesselb about ways we could improve this PR using the most recent plum?

@wesselb commented Mar 29, 2023

The most recent version of Plum actually doesn't bring new features, so the PR might not benefit from the new version in that way. The main change is that Plum now purely focusses on multiple dispatch by deferring all type-related stuff to Beartype. Consequently, Plum now supports all types supported by Beartype, which is a lot more than before and a lot more robust. (Beartype is Plum's only dependency, and Beartype has no dependencies.)

I believe there were some small things that @seeM suggested, like cleaner printing of functions. I'll have a think about what else could benefit this PR!

Labels: enhancement (New feature or request) · 5 participants