Performance issue #72
Comments
Please also check out Zygote.jl and Capstan.jl as possible alternatives. AutoGrad supports Knet, and its overhead is negligible for deep learning models, so there are no immediate plans for tape compilation. Memory is a big concern, though (GPU memory is very limited), so any suggestions for improving memory efficiency are welcome. I am planning to try some memoization to avoid creating records of repeated operations (e.g. indexing) when I get a chance.
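As a rough sketch of that memoization idea (illustrative only; `memoized_record` and the cache below are made-up names, not AutoGrad internals), a repeated operation on identical arguments could reuse one cached record instead of allocating a new one each time:

```julia
# Illustrative only: not AutoGrad's internal API. The cache is keyed on the
# operation and the identity of its arguments, so a repeated operation
# (e.g. indexing the same array at the same index) reuses a single record
# instead of creating a new one every time it appears on the tape.
const record_cache = Dict{Any,Any}()

function memoized_record(op, args...)
    key = (op, map(objectid, args)...)
    get!(record_cache, key) do
        op(args...)          # perform (and record) the operation only on a cache miss
    end
end

# Hypothetical usage: the second call returns the cached result of the first.
w  = rand(3)
r1 = memoized_record(getindex, w, 2)
r2 = memoized_record(getindex, w, 2)
```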
Thanks for the feedback! I am not really into ML or deep learning; I am more interested in AD and its applications in optimization. I looked into as many AD packages as possible, and it seems that the machine learning community has a lot of interest in AD. I was mainly looking for speed and nested AD, and it seems that nested AD isn't the most popular feature.
Hi!
I am starting to use AutoGrad and I have a question about its performance compared to ReverseDiff.jl.
I have a basic setup in which I compute the same gradient with both packages and benchmark the two. AutoGrad comes out much slower than ReverseDiff. I assume this is because ReverseDiff lets me precompile a tape, which makes it faster.
Is it possible to get a similar level of performance using AutoGrad?
Thanks!
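Since the original snippets and timings are not reproduced above, here is a minimal sketch of the two usage patterns being compared. The objective `f` and the input size are placeholders rather than the code from the issue; the point is that ReverseDiff reuses a pre-recorded, compiled tape, while AutoGrad rebuilds its tape on every call:

```julia
using AutoGrad, ReverseDiff, BenchmarkTools

# Placeholder objective and input; the actual function from the issue is not shown.
f(x) = sum(x .* x)
x = rand(1000)

# ReverseDiff: record the operations once, compile the tape, then reuse it on each call.
tape  = ReverseDiff.GradientTape(f, x)
ctape = ReverseDiff.compile(tape)
out   = similar(x)
@btime ReverseDiff.gradient!($out, $ctape, $x)

# AutoGrad: grad(f) returns a gradient function, but the tape (and its records)
# is rebuilt on every call, which adds per-call overhead and allocations.
g = AutoGrad.grad(f)
@btime $g($x)
```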