enable more robust multiple dispatch with plum #415
Many thanks for looking into this! I think the prototype looks encouraging. Could you please take a look at the CI failures? The fastai integration tests are failing. So if you clone the fastai repo and do an editable install of it, you can then run the fastai tests locally.

Feel free to add comments anywhere the code wasn't that clear, so that future readers won't have to do the digging that you did.
We're moving to unions too, so that's fine. We always write them as `a|b`.
Great.
That's a blocker, since we use them a lot. I'll set this PR to draft since we can't merge it until there's a release of plum with that fixed.
That would be really nice to fix too.
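For context on why future annotations clash with dispatch (an illustration of standard Python behaviour, not code from this thread): under PEP 563, `from __future__ import annotations` stores annotations as strings at runtime, so a dispatcher sees text rather than types unless it evaluates them:

```python
from __future__ import annotations  # PEP 563: annotations become strings at runtime

def f(x: int | str): ...

print(f.__annotations__)  # {'x': 'int | str'} - a string the dispatcher must evaluate
```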
Thanks so much for the response! I took a look at the integration test logs and it looks like it's the clash with future annotations 🥲 Will try to see how hard it'd be to fix that as well as the autoreload issue, and try to get in touch with plum's maintainer too.
@jph00 a few updates on progress here: All fastai tests are now passing locally 🚀 (except …). This is thanks to plum now supporting future annotations! 🎉 @wesselb has been very responsive and supportive throughout (@wesselb - just wanted to say thanks, but please feel free to unsubscribe if you like). See this issue for details. There was also a small bug with …

Next steps

Atm this PR requires changes to the fastai repo, specifically to handle the convention of tuple annotations representing unions (illustrated below). @jph00 do you think it's worth fastcore supporting the tuple convention for a smoother release flow - or is it better to do concurrent branches/PRs on fastcore and fastai? In any case, I'll make the PR on the fastai repo next.

After having worked a bit more with these libs, I think I can clean up the current implementation some more. I also have a few more test cases I want to add to fastcore from the plum future annotations thread - fastai uses …

There is one outstanding issue: … There is ongoing discussion about getting …
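To make the tuple-annotation convention concrete (an illustrative sketch; `encodes` is just an example name), fastcore treats a tuple annotation as a union of its types, which plum writes with `typing.Union`:

```python
from typing import Union

def encodes(x: (str, list)): ...       # fastcore convention: tuple = union of types

def encodes(x: Union[str, list]): ...  # plum's equivalent of the same signature

# Subscripting Union with a tuple expands it, which makes mechanical conversion easy:
assert Union[(str, list)] == Union[str, list]
```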
Hi @seeM I'm a plum contributor. I'm chiming in because I saw this thread and noticed that you mentioned …
Have you seen how return-type annotations work in plum? It seems to me they would solve this issue. See for example:

```python
In [1]: import plum

In [2]: @plum.dispatch
   ...: def test(a: int) -> float:
   ...:     return 2*a
   ...:

In [3]: plum.add_conversion_method(type_from=int, type_to=float, f=float)

In [4]: test(1)
Out[4]: 2.0
```
Hi @PhilipVinc, thanks for the response! I'm not at a PC atm so can't share any code examples just yet. Yep, we're using … I don't think there's a neat way to configure plum's type conversion system to achieve that (automatically converting the result to the same type as the input - see the sketch below). I'm not even sure what'd be a good interface for that just yet 😄 perhaps to be able to provide a custom default …
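For reference, a rough sketch of the behaviour being discussed, using `fastcore.dispatch.retain_type` (the `TitledStr` subclass is hypothetical, and the exact casting mechanics are assumed from this thread's description):

```python
from fastcore.dispatch import retain_type

class TitledStr(str): pass       # hypothetical str subclass carrying extra meaning

x = TitledStr("hello")
res = x.upper()                  # plain str: the subclass type is lost
res = retain_type(res, old=x)    # cast the result back to the input's type
assert type(res) is TitledStr
```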
Ah I see. In Julia that would use type parameters:

```julia
fun(x::T, y, z)::T where {T}
```

Type parameters are in general unsupported by plum.
Ahhh yes that's exactly it - can't believe I missed that! Python does indeed support type variables now (sketched below). I'd be keen to poke around plum to see what it would take to support them, but agree that's definitely a longer term project.
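As a quick illustration (my own sketch, not from the thread), the Julia signature above maps onto `typing.TypeVar`, which is the feature plum would need to dispatch on:

```python
from typing import TypeVar

T = TypeVar("T")  # a type variable linking the argument and return types

def fun(x: T, y: object, z: object) -> T:
    "Declare that the return type matches the type of `x`"
    return x
```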
Just briefly commenting that experimental support for autoreload has been included in the latest release. On the topic of type parameters, it would be super awesome to have those included. I'm currently time constrained because I'm submitting my dissertation in the next month (ish), but I've planned a major overhaul of Plum's internals for after that, to fix some longstanding suboptimal design choices made early on. If at all possible, I'd be willing to look into supporting type parameters, perhaps to an initially limited extent.
This is amazingly great progress, and so cool to see this wonderful collaboration bearing fruit!
Agreed, many thanks to @wesselb and @PhilipVinc for the amazing contributions 😄 it's been a privilege working with fastcore and plum! @jph00 this is now a drop-in replacement supporting all existing functionality, and ready for the GitHub action to run and for review if that passes (fastcore and fastai tests pass locally). It is still of course a prototype, so keen to hear your thoughts.
OK running the CI now - nice job on getting to this point so quickly!
I refactored the title to "`plum-dispatch` for `fastcore.transform`" and added the `plum` label to this PR so that when/if it is merged it will show up in the Change Log correctly.
@jph00 friendly reminder this PR is ready for review.
We chatted on Discord about extending this prototype to use plum everywhere, not just for Transform. I'm still working on that. I'll make this a draft PR again. Sorry for the confusion!
Force-pushed from fce1adb to aad948e.
Force-pushed from 6df0281 to 2af8d1d.
@seeM there are some conflicts now, looks like you might want to merge master into this branch etc.
Force-pushed from 2af8d1d to 0c53871.
@hamelsmu Done!
Amazing job @seeM :D So, now that you've been on this journey, what do you think? Are there improvements in using plum?

There's still quite a bit of code here, despite the heavy lifting being pushed out to plum.

Something to think about too: fastcore now has its first external dependency. Should we perhaps move this into a new project called fastdispatch?
@jph00, TL;DR: Switching to plum would be a bet that multiple dispatch could lead to powerful use-cases in future. It wouldn't improve much today. Someone with experience using both fastai and multiple dispatch could better understand the likelihood of that bet paying off. Unfortunately I'm not yet in that position, but I'm happy to tinker. I'd love to hear your candid thoughts!

Overview

Pros: …
Cons: …
Details

Bringing these extensions into plum

Here's a rough sketch of the code we'd need if we brought most of our extensions into plum (excluding fastcore's casting functions):

```python
# imports added for completeness (locations assumed)
from typing import Union
from plum import Dispatcher, Function
from fastcore.basics import copy_func, type_hints
from fastcore.meta import delegates

def _eval_annotations(f):
    "Evaluate future annotations before passing to plum to support backported union operator `|`"
    f = copy_func(f)
    for k,v in type_hints(f).items(): f.__annotations__[k] = Union[v] if isinstance(v, tuple) else v
    return f

class FastFunction(Function):
    @delegates(Function.dispatch)
    def dispatch(self, f=None, **kwargs): return super().dispatch(_eval_annotations(f), **kwargs)

class FastDispatcher(Dispatcher):
    function_cls = FastFunction
    @delegates(Dispatcher.__call__, but='method')
    def __call__(self, f, **kwargs): return super().__call__(_eval_annotations(f), **kwargs)

typedispatch = FastDispatcher()
```

We currently extend plum as follows:

Using …
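For illustration (my own sketch, not code from the PR), the `FastDispatcher` sketched above would be used like plum's own dispatcher, with tuple annotations accepted:

```python
@typedispatch
def join(a: (str, list), b): return "str-or-list impl"  # tuple becomes Union[str, list]

@typedispatch
def join(a: int, b): return "int impl"

join("x", 1)  # dispatches on the runtime type of `a`: selects the (str, list) method
join(0, 1)    # selects the int method
```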
Wow fantastic explanation @seeM, thanks so much. As a research lab, I think we should be focussing on things that allow us to do interesting research. So this change fits our mission. Can you provide more detail re "plum uses compiled C code"? I don't see C code in the plum repo.

My suggested path is: …

How does that all sound? And @wesselb how do you feel about the ideas for stuff that @seeM mentions above?
@jph00, thanks for the kind words and the thoughtful suggestions!

I'm happy to move forward as you suggested. I have a few questions though: …
Oh I see - it's not really using any of the typed features of cython, but just doing some AOT compilation with it. I don't think that's of any benefit to fastai, so if there's an easy way to turn it off, I think that's a bit better.
fastcore.transform would be moved to fastai. That's where it used to be.
Yes, and give it a deprecation warning.
Yes.
BTW I wonder if it would be worth adding a "plum-purepython" package to pypi and conda -- one that doesn't use C extensions? That would be a bit easier to deal with IMO.
@jph00 Sorry, I totally missed your reply! Double-checking that my understanding is correct: …

I like the … Shall we close this PR then?
That's all correct.
In addition, I think we should aim to eventually get rid of fastdispatch, if possible, by moving all the general stuff to plum, and everything else to fastai.
I thought of a better idea than creating a "plum-purepython". Instead, I think that cython should simply be removed from deps in plum's toml. That way, no dep on an installed compiler toolchain is required, but for people that have it, it'll be used. (Or at least, that's my understanding...)
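A rough sketch of that idea (purely illustrative; not plum's actual build code): a `setup.py` can compile with Cython when it happens to be importable and fall back to pure Python otherwise:

```python
from setuptools import setup

try:
    from Cython.Build import cythonize      # present only if the user installed Cython
    ext_modules = cythonize(["plum/*.py"])  # AOT-compile the modules when possible
except ImportError:
    ext_modules = []                        # pure-Python install otherwise

setup(ext_modules=ext_modules)
```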
Hey @jph00! Just a quick clarification that all Cython compilation has since been removed from Plum entirely, since the gains were only minor. Plum is now a pure-Python package and doesn't have Cython as a dependency anymore. It would, however, definitely be possible to reintroduce compilation whenever the user happens to have Cython installed.
Thanks @wesselb - I've reopened this now. We're currently focused on the latest fast.ai course, but we'll come back to this PR in the new year.
Best of luck with running the latest fast.ai course! Coming back to this in the new year is perfect timing. Plum should undergo some major improvements between now and then.
I'm very happy to say that these improvements have now been merged, and that there's room for any additional features/improvements, should this PR require that.
Do you have any thoughts @wesselb about ways we could improve this PR using the most recent plum?
The most recent version of Plum actually doesn't bring new features, so the PR might not benefit from the new version in that way. The main change is that Plum now purely focusses on multiple dispatch by deferring all type-related stuff to Beartype. Consequently, Plum now supports all types supported by Beartype, which is a lot more than before and a lot more robust. (Beartype is Plum's only dependency, and Beartype has no dependencies.) I believe there were some small things that @seeM suggested, like cleaner printing of functions. I'll have a think about what else could benefit this PR!
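As a hedged illustration of the broader type support described above (assuming a recent plum version; not an example from the thread), plum can now dispatch on parametric standard-library types:

```python
from plum import dispatch

@dispatch
def describe(x: list[int]): return "a list of ints"

@dispatch
def describe(x: list[str]): return "a list of strings"

print(describe([1, 2]))      # a list of ints
print(describe(["a", "b"]))  # a list of strings
```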
This PR is an experiment that builds `fastcore.transform` with the same interface (all tests passing) on top of `plum` instead of `fastcore.dispatch`. The ideal outcome of this would be a "yes/no" decision on whether to adopt `plum`.

It was mostly seamless except for:

- `_mk_plum_func`. I don't think the library was designed for dynamically adding `plum.Function`s to classes after the class body is evaluated. The workaround isn't too hairy though.
- I couldn't get `plum`'s type conversion system to match the desired `Transform` interface, which is to automatically convert the result of a tfm method call to the same type as the input, so it's still relying on `fastcore.dispatch.retain_type`.

Other points worth noting:

- The `fastcore.dispatch` convention was to use tuples of types as unions; `plum`'s is to use `Union`. E.g. `(str,list)` becomes `Union[str,list]`.
- `plum` searches parent classes by default, which is great.
- I replaced `_TfmMeta.__new__` with methods on `Transform`. I think this is simpler and haven't found a downside yet.
- Mixing `classmethod`s, `staticmethod`s, and normal methods requires a specific order (How to use @dispatch with @staticmethod inside a class? beartype/plum#35).
- `plum` does not work with future annotations (beartype/plum#41), though the author has said that it shouldn't be too difficult a fix.
- `plum` does not work with autoreload (FR: Make plum work with ipython's autoreload beartype/plum#38).

It's also worth considering whether `plum` enables new interfaces that would've otherwise been very difficult. For example, it supports parametric classes and hooking into its type inference system. I haven't given this much thought yet.
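As a loose sketch of the parametric-class support mentioned above (using `plum.parametric`; the `Vec` class and this usage are hypothetical):

```python
from plum import dispatch, parametric

@parametric
class Vec(list):
    "Hypothetical list subclass that carries a type parameter, e.g. Vec[int]"

@dispatch
def total(v: Vec[int]): return sum(v)

@dispatch
def total(v: Vec[str]): return ", ".join(v)

print(total(Vec[int]([1, 2, 3])))   # 6
print(total(Vec[str](["a", "b"])))  # a, b
```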