Sometimes you want to train for `n` iterations, inspect the results, and then run additional iterations if required. Or one may want to control iteration externally, using a package like IterationControl.jl, which provides more options for early stopping, callbacks, and so forth. Neither is possible at present because optimiser "state" (such as the trust region radius `Δ`) is lost as soon as `solve!` returns. So, calling `solve!` a second time means re-initializing state, which is undesirable.
Perhaps optimisers could implement a pattern similar to that adopted in Optimisers.jl for gradient descent updates?
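For concreteness, here is a minimal sketch of the Optimisers.jl pattern alluded to above, followed by a hypothetical analogue for `solve!`. The commented `setup`/`solve!` signatures at the end are assumptions for illustration only, not an existing API:

```julia
using Optimisers

# The Optimisers.jl pattern: optimiser state is an explicit value, so
# iteration can be paused and resumed without losing anything.
model = (w = rand(3),)                                # toy "model" (a NamedTuple of arrays)
state = Optimisers.setup(Optimisers.Adam(), model)    # initialise state once
grad  = (w = ones(3),)                                # stand-in gradient
state, model = Optimisers.update(state, model, grad)  # one step; state survives
state, model = Optimisers.update(state, model, grad)  # ...and can be reused later

# A hypothetical analogue for solvers (names are illustrative only):
# state = setup(solver, problem)                      # initialise Δ, caches, etc.
# state, sol = solve!(state, problem; iterations = n)
# state, sol = solve!(state, problem; iterations = m) # resume; Δ is preserved
```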