Suggestion: Make optimisers stateful #37

Open
ablaom opened this issue Jul 26, 2024 · 0 comments
ablaom commented Jul 26, 2024

Sometimes one wants to train for n iterations, inspect the results, and then run additional iterations if required. Alternatively, one may want to control iteration externally, using a package like IterationControl.jl, which provides more options for early stopping, callbacks, and so forth. Neither is possible at present, because optimiser "state" (such as the trust region radius Δ) is lost as soon as `solve!` returns, so calling `solve!` a second time means re-initializing that state, which is undesirable.
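To make the limitation concrete, here is a purely illustrative sketch; `problem`, `optimiser`, and the `maxiters` keyword are hypothetical stand-ins, not this package's actual API:

```julia
# Purely illustrative: `problem`, `optimiser` and `maxiters` are hypothetical names.
result = solve!(problem, optimiser; maxiters = 100)  # state (e.g. Δ) created internally

# Inspect `result`; suppose more iterations are needed. The only option today is
# to start over, discarding the state the first call had built up:
result = solve!(problem, optimiser; maxiters = 150)  # state re-initialized from scratch
```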

Perhaps optimisers could implement a pattern similar to that adopted in Optimisers.jl for gradient descent updates?
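For reference, a minimal sketch of that pattern (the toy parameters and gradient below are stand-ins, but `setup` and `update` are Optimisers.jl's actual entry points):

```julia
using Optimisers

rule  = Optimisers.Descent(0.01)       # a gradient-descent rule
model = (w = rand(3), b = rand(3))     # toy parameters as a NamedTuple
state = Optimisers.setup(rule, model)  # explicit optimiser state, returned to the caller

grad = (w = ones(3), b = ones(3))      # stand-in gradient matching the model's structure
state, model = Optimisers.update(state, model, grad)  # one step; updated state comes back

# Because the caller holds `state`, a later call resumes exactly where the
# previous one stopped; nothing is re-initialized between updates.
state, model = Optimisers.update(state, model, grad)
```

Applied here, `solve!` (or a non-mutating variant) could similarly return its state, so that a subsequent call can resume rather than restart.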
