A very common use case is that one wants to not only differentiate an objective, but also get some auxiliary output (intermediate results, the predictions of an ML model, data structures of a PDE solver, etc.).
The issue I see is that most backends want a single-output function. For instance, to get the gradient of `loss_fn` with ForwardDiff, I'd have to call `ForwardDiff.gradient(p -> loss_fn(p)[1], params)`, and then we lose the benefit of "side-effect computations".
Which backends can actually discard-but-return this `extra_data` without calling the function twice?
Fair point! I guess the only way to get `extra_data` out of a single call, without returning it directly, is indeed to extract it via a side effect (i.e. global state / save-to-file / ...). That does seem tricky to do in a nice way across all AD backends.
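To illustrate the side-effect workaround being discussed, here is a minimal, hypothetical sketch in plain Python (a toy finite-difference "backend" stands in for a real AD backend; all names are illustrative). The backend only accepts a scalar-output function, so the auxiliary output is smuggled out through mutable state:

```python
def grad_fd(f, x, eps=1e-6):
    """Toy 'backend': forward finite differences, scalar-output f only."""
    return [(f(x[:i] + [x[i] + eps] + x[i + 1:]) - f(x)) / eps
            for i in range(len(x))]

def loss_with_aux(params):
    predictions = [2.0 * p for p in params]          # pretend model output (the aux data)
    loss = sum((p - 1.0) ** 2 for p in predictions)  # scalar objective
    return loss, predictions

aux_cell = {}  # side channel: global state holding the aux output

def scalar_loss(params):
    loss, predictions = loss_with_aux(params)
    aux_cell["predictions"] = predictions  # side effect: stash the aux data
    return loss                            # the backend only ever sees the scalar

g = grad_fd(scalar_loss, [0.5, 1.0])
print(g, aux_cell["predictions"])
```

Note the fragility: the backend evaluates `scalar_loss` several times (including at perturbed inputs), so which evaluation's aux data survives in `aux_cell` depends on the backend's internal call order, which is exactly why this is tricky to do nicely in general.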
Idea in passing: maybe we could define a new type of Context which, unlike Cache, would guarantee that it is not overwritten, and allow returning auxiliary data
For example, in JAX there is the `has_aux` keyword option in `jax.value_and_grad`, which is actually the most common usage pattern of AD in JAX I have seen. The pattern looks like this (see e.g. the Flax docs for a full example in context). I typically use some hacky workarounds to achieve similar behavior in Julia, but maybe it is common enough to solve at the interface level?
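The code snippet referenced above did not survive; here is a minimal sketch of the `has_aux` pattern (toy model, names illustrative). With `has_aux=True`, the function returns a `(scalar_loss, aux)` pair, and `jax.value_and_grad` differentiates only the scalar while passing the aux output through untouched:

```python
import jax
import jax.numpy as jnp

def loss_fn(w, x, y):
    preds = w * x                      # auxiliary output: model predictions
    loss = jnp.mean((preds - y) ** 2)  # scalar objective to differentiate
    return loss, preds                 # (loss, aux)

x = jnp.array([1.0, 2.0])
y = jnp.array([2.0, 4.0])
# Differentiates w.r.t. the first argument; aux comes back alongside the value.
(loss, preds), grad = jax.value_and_grad(loss_fn, has_aux=True)(1.0, x, y)
print(loss, grad, preds)
```

The key point for the interface discussion: the function is called once, and the caller gets the value, the gradient, and the auxiliary data together.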