This PR makes Gradient Descent parallel using Threads.@spawn #179
base: master
Conversation
I made the extra tests in a separate PR. I locally combined the two, and the new tests pass on my laptop with …
Codecov Report — Attention: Patch coverage is …
@lkdvos I don't understand how to make Codecov happy. Whether the new lines are tested or not depends on MPSKit.Defaults, doesn't it? I understand that we might want to test both scenarios, but as far as I can tell that is not currently implemented in any tests. PS: I've compared my Codecov report with that of the existing parallel VUMPS implementation. The non-covered lines are identical; however, VUMPS has only 2 of these lines, whereas my gradient descent has many more.
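For reference, a minimal sketch of what testing both scenarios could look like, assuming `MPSKit.Defaults.set_scheduler!` is the relevant toggle (the exact name and the accepted values are assumptions, not confirmed API):

```julia
using Test
using MPSKit

# Run the same testset once per scheduler so that both the serial and the
# threaded code paths get covered. `set_scheduler!` and the symbols below
# are assumptions about the Defaults interface.
@testset "gradient descent (scheduler = $scheduler)" for scheduler in (:serial, :dynamic)
    MPSKit.Defaults.set_scheduler!(scheduler)
    # ... run the existing gradient-descent tests here ...
end
```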
The failing macOS test seems to be unrelated to the code I've added.
Merging so that I have the updated tests on this branch :)
Todo: parallelize all maps!
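A minimal sketch of the generic pattern this todo refers to, using only Base threading (the function name `threaded_map` is illustrative, not MPSKit code):

```julia
using Base.Threads: @spawn

# Spawn one task per map application, then fetch the results in the
# original order. This is only correct when the applications of `f`
# are independent of one another.
function threaded_map(f, xs)
    tasks = map(x -> @spawn(f(x)), xs)
    return map(fetch, tasks)
end
```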
It seems like this PR works and does parallelize the code. However, it reintroduces the out-of-memory crashes that Julia 1.10 had solved. I tried to get the cluster managers to update to 1.11 to see if this fixes the problem :) Apart from that, perhaps the issue might be solved through …
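One possible mitigation, sketched below as an assumption about the cause rather than what the PR implements: if the out-of-memory crashes come from too many tasks allocating simultaneously, the number of concurrently working tasks can be bounded with a `Base.Semaphore`, so peak memory scales with `max_concurrent` rather than with the number of work items:

```julia
using Base.Threads: @spawn

# Variant of the spawn-and-fetch pattern where at most `max_concurrent`
# tasks do work at the same time; the remaining tasks block on the
# semaphore until a slot frees up.
function bounded_map(f, xs; max_concurrent::Int = Threads.nthreads())
    sem = Base.Semaphore(max_concurrent)
    tasks = map(xs) do x
        @spawn begin
            Base.acquire(sem)
            try
                f(x)
            finally
                Base.release(sem)
            end
        end
    end
    return map(fetch, tasks)
end
```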
As the title suggests, this PR makes Gradient Descent parallel…
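A hedged sketch of the core idea, with illustrative placeholder names (`site_gradient` and `unitcell` are not MPSKit internals): the per-site gradient contributions are independent of one another, so each can be computed in its own task.

```julia
using Base.Threads: @spawn

# Compute each site's gradient contribution concurrently and collect
# the results in site order. Correct only because the contributions
# do not depend on each other.
function parallel_gradient(site_gradient, unitcell)
    tasks = [@spawn(site_gradient(site)) for site in unitcell]
    return map(fetch, tasks)
end
```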
Two remarks: