Callback Interface #10
For reference, here was @mauro3's suggestion:

```julia
callback_step(t1, t2, u1, u2) = ... # simplest callback, no dense output needed
callback_dense(u_dense::Callable) = ... # callback when there is dense output
callback_free_form(alg::Alg,...) # anything else, dispatch on the Algorithm could be useful
```

I suggested a DSL for the most advanced features. Let's find a middle ground that combines them.
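For instance, a minimal logging use of the simplest suggested form might look like this sketch (assuming the solver calls it with the step endpoints):

```julia
# Hypothetical use of the simplest signature: print the step size and state.
callback_step(t1, t2, u1, u2) = println("step of size ", t2 - t1, ", u = ", u2)
```
|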
Here are the necessary details on my callbacks. My standard callback looks like:

```julia
@inline function ODE_DEFAULT_CALLBACK(alg,f,t,u,k,tprev,uprev,kprev,ts,timeseries,ks,dtprev,dt,
                saveat,cursaveat,saveiter,iter,save_timeseries,timeseries_steps,uEltype,ksEltype,
                dense,kshortsize,issimple_dense,fsal,fsalfirst,cache,calck,T,Ts)
  @ode_savevalues
  reeval_fsal = false
  cursaveat,saveiter,dt,t,T,reeval_fsal
end
```

Essentially, it passes a bunch of stuff to run the saving routine. The saving routine implements the full common interface, so it is quite complex:

```julia
@def ode_savevalues begin
if !isempty(saveat) # Perform saveat
while cursaveat <= length(saveat) && saveat[cursaveat]<= t
if saveat[cursaveat]<t # If we already saved at the point, ignore it
saveiter += 1
curt = saveat[cursaveat]
ode_addsteps!(k,tprev,uprev,dt,alg,f)
Θ = (curt - tprev)/dt
val = ode_interpolant(Θ,dt,uprev,u,kprev,k,alg)
copyat_or_push!(ts,saveiter,curt)
copyat_or_push!(timeseries,saveiter,val)
end
cursaveat+=1
end
end
if save_timeseries && iter%timeseries_steps==0
saveiter += 1
copyat_or_push!(timeseries,saveiter,u)
copyat_or_push!(ts,saveiter,t)
if dense
copyat_or_push!(ks,saveiter,k)
end
end
if !issimple_dense
resize!(k,kshortsize)
end
end
```

The other thing to note is that the interpolations are decomposed, since there's no good way to handle the case where they hold immutable values other than putting them in a big mutable wrapper type, which would slow down the routine pretty considerably. So a lot of those values are passed for a reason. The saving is done in the callback routine because, as noted before, event handling has to hijack the saving routine in order to be both correct and fast. So simply removing it is not an option. I have tried to think about something like, "if the method signature is |
One user brought this up as "logging". So I guess some people are interested in printing intermediate values. Should the common interface just be enhanced for standard logging procedures?
If this were included, would there be a need for anything other than the more specific callbacks (for dealing with event handling)? |
@tshort might be interested in helping us figure this out. Maybe the answer could be that callbacks have to be algorithm-specific, but event interfaces and logging don't have to be. Here's what I've been doing. If you check the progress monitoring you'll see there's a new option:

If that's not needed, then the only other common thing which callbacks are used for is event handling. But we can make that simply have the options
For the functions, we should take something like:
). With full-featured monitoring and events as part of the common interface, the vast majority of callback usage is then done in the common interface. However, there will still be the option to pass a callback function. Opinions on this? |
I did find a problem which needed more than just the events and progress monitoring. However, this would easily be doable if we standardized just the beginning arguments to the callbacks. SciML/DifferentialEquations.jl#100 What about something like:

```julia
callback_free_form(alg::Alg,t,u,du,args...)
```

where the extra args can be algorithm-specific? |
I have found that being able to control the cache in callbacks is very necessary in many cases like hybrid dynamical systems, so I'd suggest it next: SciML/DifferentialEquations.jl#117

```julia
callback_free_form(alg::Alg,t,u,du,cache,args...)
```
|
Could you use a ParameterizedFunction-field for a cache instead? |
No, let me describe a little what the cache is. Take a look at RK4. The cache is found here:

```julia
cache = (u,tmp,k₁,k₂,k₃,k₄,kprev,uprev)
```

It's a collection which holds the pointers to every temporary array in the algorithm. Now, this is very non-standard, so it should be justified why it's there. The reason is that it allows you to "fully mutate". Two examples. The first example is the ODE which changes size: notice that, in order to have the right size on everything, you have to go through all of the cache arrays and resize every one of them in addition to the problem, which is why it's necessary here. The other example is a hybrid dynamical system where one makes a type in which the continuous variables are indexed and the discontinuous variables are just fields of the type. In order to fully update, so that the discontinuous changes are seen in all parts of the method, you have to mutate every cache variable as well. So it is very powerful, but maybe it should be a cache object to handle even more cases nicely via dispatch. Then
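To make the size-changing example concrete, here is a minimal sketch (the helper name and setup are assumed, not part of the proposal): every array in the cache tuple has to be resized along with the state, or the next step will use stale sizes.

```julia
# Minimal sketch: grow the system by one component from inside a callback.
# `cache` is the tuple of temporary arrays shown above; each one must be
# resized in sync with `u` (which is itself the first entry of the tuple).
function grow_system!(u, cache)
    for c in cache
        resize!(c, length(c) + 1)  # keep every temporary array in sync
    end
    u[end] = 1.0                   # initial value for the new component
end
```
|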
I see. Note that this cache is essentially the state variable in PR49's iteration setup. |
Something that came up in this thread was the possibility of multiple events: The idea is just to use an array of functions for |
**Intermediate Summary**

Since I just posted this on Discourse and want to see if I can get @tshort's opinion (I know you need events in DASKR.jl) and @ronisbr's opinion, here's a summary of where the proposal is at now. Events are split from callbacks. The event framework is:

The function signatures being:

```julia
event_f(t,u,du,event_cache,f)
apply_event!(t,u,du,event_cache,f,cache=nothing)
```

The cache is described in a previous comment:
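For illustration, a bouncing ball under these signatures might look like the following sketch (hypothetical usage, with the event triggered by a sign change of `event_f`'s return value):

```julia
# Sketch of the proposed event API (assumed semantics, not part of the spec).
# The event fires when the returned value crosses zero: here, when the ball
# hits the ground.
event_f(t, u, du, event_cache, f) = u[1]  # height above the ground

# When triggered, reverse the velocity with some damping.
function apply_event!(t, u, du, event_cache, f, cache=nothing)
    u[2] = -0.8 * u[2]
end
```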
Callbacks would have a generic part and an algorithm-specific part. The interface would be:

```julia
callback(alg::Alg,t,u,du,f,cache,args...)
```

This is simple enough to handle most cases where events won't apply, and if anything really specific is needed, the solvers can supply a bunch of

Two points of contention still:
|
Hi @ChrisRackauckas, The way you proposed the events seems to handle every single case I can think of in my satellite simulator.
Yes, this is necessary for me. There are two events such that, if one happens under a set of conditions, the other must not be executed. Hence, in

Unfortunately, I do not have the right expertise to comment on your 2 points. But I have one question about the |
If you have an event with |
Oh I see! If I need something like this, then I just can use events and everything will be fine 👍 |
Yes, the goal is for most things to be covered by events, since it's a high-level API and so it's easy to standardize among all of the different solvers. Callbacks inject almost arbitrary code, so you'll have to handle the different ways the different solvers interpolate, save, etc. Thus I am hoping the events API is broad enough to cover almost all use cases, with callbacks being the fallback if you have very specific needs. Actually, to make it slightly more generic, I propose one more argument to go with the events:
which is standard (two saves are necessary to handle a discontinuity). However, you may want to use an event to just make a change after each iteration. In that case, you can use the trivial event

Are both booleans needed, or just the one for the pre-event save? The confusing thing, though, is that we want to override this for chained events, because we never want two consecutive saves:
It would have to be up to the solvers to catch the double saving.
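As a sketch of the two proposed booleans (the flag names below are assumed, since the exact argument was elided above), a trivial always-firing event with both discontinuity saves disabled might look like:

```julia
# Hypothetical: an event that fires after every step just to make a small
# change, with neither the pre-event nor the post-event save enabled.
trivial_event_f(t, u, du, event_cache, f) = true   # always triggered

function apply_event!(t, u, du, event_cache, f, cache=nothing)
    u .= max.(u, 0)   # e.g. enforce positivity after each step
end

save_positions = (false, false)  # assumed name: (save before event, save after event)
```
|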
It would be useful to be able to modify f in the event handlers (or is this already possible?). I need to be able to adjust the system being solved at events too. |
Yes, this indeed sounds very useful. If we extended the signatures once more:

```julia
event_f(t,u,du,event_cache,funcs...)
apply_event!(t,u,du,event_cache,cache,funcs...)
```

(note I use `funcs...` because this catches all of the available functions. An ODE would only have one function here so you could replace that with f, but an SDE is defined by two functions, so you'd need f,g here, etc.). This would allow you to affect the fields of a function. This means you could adjust the parameters in a ParameterizedFunction, or build a "multi-f" like:

```julia
# Declare the type to overload (`type` rather than `immutable`, so that
# the state field can be mutated by the event)
type MyFunction
    state::Int
end
f = MyFunction(0)

# Make the overload: calling f(t,u,du) dispatches on the current state
function (p::MyFunction)(t,u,du)
    if p.state == 0
        # one set of ODEs
    elseif p.state == 1
        # another set of ODEs
    else
        # a third set of ODEs
    end
end
```

Then in `apply_event!(t,u,du,event_cache,cache,f)` (for an ODE) you could do `f.state = 2` to change the state (or parameters, etc.).

Does that sound good? Or is there a case this can't handle?

If the functions are in there, is there a need for the du? The reason why I ask is because most methods will not have evaluated this derivative yet, so it would incur an extra function evaluation (on any non-FSAL algorithm), and the user can explicitly calculate it themselves using f. Normally this won't be a problem because it can then be used in the next interval, but that's not the case if there's a discontinuity. If there's a discontinuity, then that first du calculation would have to be thrown away, so it would incur an extra function cost without a gain if du is not used in the event handling. However, this does hurt the FSAL case, where this du is already calculated and so it would be free. |
As for my opinion, I'll just have to try it out. Hopefully before the new year, but it might be early January.
|
Is this drafted in code somewhere? I'd like to look to see what I'd need to do to implement this in DASKR, then try to use it from Sims.
|
A "pre-draft" would be OrdinaryDiffEq.jl's callbacks/events, but I will update it to this proposal soon (lets say by the end of the week?). It's missing a few things and is in a macro form (so it's hard to map in its current form over to other packages), but it lets you generally explore what's possible with the framework. That said, I'll update OrdinaryDiffEq.jl, and I am solving the remaining issues as follows:
I'll wait and see if @BenLauwens has any comments and then get this up and running. |
I did not understand something (maybe it is already explained, but I was not able to infer it). For another application, I am using a simulator written entirely in Julia that sends TCP packets to be shown in a GUI coded in C++/Qt. As of now, I am using my own integrator, but I will be very happy if I can use this package. However, I have a very important requirement: I need to be able to change the

It works like this:

In order to use this package, I plan to use an event to send the TCP packets. However, the |
You beat me to it by just a few seconds! I actually just had this thought, albeit for very different reasons. My hybrid discrete+continuous simulations of biological organisms need this in order to properly do Gillespie SSA / tau-leaping. Doing this in the current OrdinaryDiffEq.jl callbacks can be done by changing the variable

So this new draft is missing two important things:
Let me explain a little bit about the current implementations to set the groundwork. The way that

```julia
@inbounds for T in tstops # Actually, internally I name it Ts, but it's essentially tstops
    while t < T
        # inner loop
    end
end
```

(Interop packages like Sundials have issues, more so with how to pass this information as

What this means is that adding new stopping locations to

```julia
@def ode_terminate begin
    T = t
    while length(tstops) > 1
        pop!(tstops)
    end
end
```

is how I made it work. So any ideas on a better way to handle this? If you test:

```julia
tstops = [20;40]
for i in eachindex(tstops)
    if tstops[i] == 40
        push!(tstops,60)
    end
    @show i
end
```

then this can't handle the growing `tstops`, so

```julia
@inbounds for i in eachindex(tstops) # Actually, internally I name it Ts, but it's essentially tstops
    while t < tstops[i]
        # inner loop
    end
end
```

doesn't work. The other thing we want to handle is adding values to

**Edit**

Here is a minimal example we want to solve. We want something that's kind of like this:

```julia
tstops = [20;40]
for i in eachindex(tstops)
    if tstops[i] == 20
        push!(tstops,70)
    end
    if tstops[i] == 40
        push!(tstops,60)
    end
    if tstops[i] == 70
        tstops[end] = 100
    end
    @show tstops[i]
end
```

but actually shows 20, 40, 60, 70, 100 (in that order). If we can get this to work, then the rest follows.

**Edit 2**

I also put it to SO to see what that produces.

**Edit 3**

Even with all of this side business of a more robust implementation for

```julia
function apply_event!(t,u,du,event_cache,cache,funcs...)
    @ode_terminate t tstops
end
```

for whatever way we find works as a good implementation? Or is there a good way to do this without resorting to a macro within the event, say some kind of boolean switch the user sets?
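For what it's worth, here is one sketch that produces the desired 20, 40, 60, 70, 100 ordering (an illustration only, not the adopted implementation; it replaces the `tstops[end] = 100` mutation with a push, and always processes the smallest not-yet-handled stop, so values added mid-iteration land in time order):

```julia
# Sketch: treat tstops as an unordered work list and repeatedly pick the
# smallest value greater than the last handled time, so stops pushed during
# the loop are still visited in increasing order.
function process_tstops!(tstops)
    t = -Inf
    while any(T -> T > t, tstops)
        T = minimum(filter(x -> x > t, tstops))  # next stop, even if just added
        if T == 20
            push!(tstops, 70)
        elseif T == 40
            push!(tstops, 60)
        elseif T == 70
            push!(tstops, 100)
        end
        @show T
        t = T
    end
end

process_tstops!([20.0, 40.0])  # shows 20, 40, 60, 70, 100
```
|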
Hey guys, I am very happy to announce a solution here! Here's what I got. It solves all of the standing issues, and I think it's safe to say it's "infinitely customizable", and (this is what took a long time) there are no performance regressions to implement it. Here's the gist that will be fleshed out more in the documentation. Everything is folded into a type: the integrator. The integrator at least has the fields:

Integrator must have:
Essentially, the https://github.com/JuliaDiffEq/DiffEqBase.jl/blob/master/src/integrator_interface.jl#L1

Most of what these do should be understood by the name. Once you have that, two things happen. First of all, putting an iterator interface on it is trivial (see the following for a sneak peek: ). This gives a first form of a "callback" by allowing you to solve step by step, and you can see some sample usage from the tests: https://github.com/JuliaDiffEq/OrdinaryDiffEq.jl/blob/master/test/iterator_tests.jl

The interactivity is cool and fun to play with, but having everything in this "god-type" is super powerful. It means that the rest of the interface can be built using dispatch on top of this type. This means that the same tools for writing the internal methods are now the same tools in the iterators and then, by extension, the same tools in the callbacks. (It also means that it's trivial to build a "multi-algorithm" which switches between stiff and nonstiff solvers.... 👍 you know where my next weekend is going to be)

So what are the callbacks? Well, using the iterator interface directly can be bad in some cases. For example, FSAL algorithms need to re-evaluate the beginning function only if

By making the callbacks very free, there's no reason to then distinguish them from event handling. Instead what we have are conditional callbacks. They are held in a type which is defined here: https://github.com/JuliaDiffEq/DiffEqBase.jl/blob/master/src/callbacks.jl It goes like this:
Here's how you map previous ideas to this. A "callback" is simply

Event handling is a continuous

So yay, those are now the same thing, and they don't need to be split, because everything about the method can be changed in

If you want to give this all a try, this issue should be really simple to implement: SciML/DifferentialEquations.jl#118 .

```julia
type AutoAbstol{T}
    cur_max::T
end

# Now make `affect!` for this: track the running maximum of the state and
# use it to scale the absolute tolerance.
function (p::AutoAbstol)(integrator)
    p.cur_max = max(p.cur_max, maximum(integrator.u))
    integrator.opts.abstol = p.cur_max * integrator.opts.reltol
end
```

Now you build the type from these, and that type could be passed into any solver. Lastly, these can then be composed. You're allowed to specify, in order, multiple conditional callbacks. So:

```julia
solve(prob,Tsit5(),callback=(c1,c2))
```

will first apply c1, and then c2, then hit the iterator part of the interface (if you did it in the iterator form), then go up to the top. This means that you can use the library for tolerance controls along with your own callbacks, and so this makes the ecosystem "infinitely extendable". Given all of this together, here are some things which are easy to do using the callbacks:
And I can keep going. Loads of credit go to @mauro3 since a lot of this stems from his overarching design ideas, which I thought were wrong at first, only to be proven wrong (I thought I couldn't get enough performance out of it. Turns out there are ways). Thanks! |
Hitting a few other points:
So in the end, I think this is a very easy to understand, highly extendable, and easy to document interface. Let me know if you think there are any changes that should be done. I am not set on things like names: they are very mutable. These things are on the master branches of OrdinaryDiffEq.jl and DiffEqBase.jl, so feel free to give suggestions. I think I'd like to tag DiffEqBase.jl by the end of the week / during the weekend, and then OrdinaryDiffEq.jl soon after, to get this all up and running (it works really well, like, not having issues at all!). If there's no complaints, then I'll close this issue when that's done. Then I'll get to implementing this onto Sundials.jl probably next month (or a good weekend). |
Well, the story here gets really tricky. In many cases you may not be solving the function given to you by the problem type. You can't mutate the problem type's function because it's strictly typed. Some examples of this are:
I think the way to handle this is through the docs. I actually already put places for this stuff: http://docs.juliadiffeq.org/latest/types/dae_types.html#Special-Solution-Fields-1 . Basically, the intro on solutions will say "here's the stuff every solution/integrator has, but there are some specifics related to the problem domain on the following pages". This way we add |
Note that I will probably be putting my focus on StochasticDiffEq.jl first (getting it to integrators and callbacks/events), then finish up DelayDiffEq.jl, release the Monte Carlo and differentiation tools libraries, and then head over to Sundials. So it might take a bit more than a week to get to that. Sorry about the wait. But let me know if that interface compromise above works. I think if we're solid on this then any future change is pretty simple (and mostly internal). |
I'm good with that, @ChrisRackauckas. |
Alright. So it sounds like there are no other objections to the general idea, and the first releases for this are going out soon. So I am closing this. Further discussions should be more focused on individual packages/PRs unless something comes up. |
Hi @ChrisRackauckas, I am having problems with this new interface. Can you please take a look at the following code: https://gist.github.com/ronisbr/a424303a1818e931cdb1a4688b0de1da I think I followed everything as you mentioned in the new documentation, but the event is not being applied correctly. Here is the figure I am getting: After |
Btw, this is the same scenario I described in SciML/DifferentialEquations.jl#117 |
Hi @ChrisRackauckas, I managed to make this code work only if I remove |
Oh yes, this is definitely a regression. It's due to implementation issues; maybe you'll have an idea for how to handle this, or maybe the spec needs to be slightly modified. The issue is that, with the new callbacks, to make them more robust I added the possibility to "ignore values near zero". So there's a tolerance when checking the callback condition so that it's not re-triggered right after a previous event. The relevant line is here: https://github.com/JuliaDiffEq/OrdinaryDiffEq.jl/blob/master/src/callbacks.jl#L15 Basically:

```julia
# Check for events through sign changes
if isapprox(previous_condition,0,rtol=callback.reltol,atol=callback.abstol)
    prev_sign = 0
else
    prev_sign = sign(previous_condition)
end

# Later on...

# Check for sign change for event
if prev_sign*sign(callback.condition(integrator.tprev+integrator.dt,integrator.u,integrator))<0
    # Oops, never triggered because `prev_sign` is zero!
end
```

If you're too close to zero, then it drops that from being a possibility. While this is good for rootfinding (it makes it a lot more robust), the moment you posted your result I could immediately see that it would be dropping out every event you actually wanted! The fix is to make a callback with zero tolerances, no rootfinding, and no interpolation points. However, that might be a little awkward for the user interface (basically exposing implementation issues), so I think it would be good to expose a

```julia
immutable DiscreteCallback{F1,F2} <: DECallback
    condition::F1
    affect!::F2
    save_positions::Tuple{Bool,Bool}
end
DiscreteCallback(condition,affect!;save_positions=(true,true)) =
    DiscreteCallback(condition,affect!,save_positions)
```

Now before, I was thinking of keeping callbacks the same type, but in the actual implementation it's impossible to avoid dynamic dispatch when checking/applying the type, because every single one has different functions and thus different callback parameters. In the Gitter channel we calculated the dynamic dispatch costs to be around 90 ns for this case (very low since they are very similar in memory layout, and the layout is known at compile time because the callbacks are stored as tuples). So I don't think that making a
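(As a usage sketch, with the condition signature assumed to match the continuous case above:)

```julia
# Hypothetical DiscreteCallback usage: a plain boolean condition checked at
# step endpoints, with no rootfinding or interpolation involved.
condition(t,u,integrator) = u[1] < 0            # fire whenever this holds after a step
affect!(integrator) = (integrator.u[1] = 0)     # clamp the state in place
cb = DiscreteCallback(condition, affect!; save_positions=(false,true))
```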
Is this a good idea? (Also, should the user specify them slightly differently?) |
Hi @ChrisRackauckas, thanks for the prompt answer. Well, maybe I did not understand everything, but since I pass to the callback whether or not I want rootfinding, why can't we use this variable to change how we handle the callbacks? |
Yes, it could just be an implementation issue, and just checking for zero if |
No, I don't think so. Actually, I think it is better (if it is not very difficult to implement). With the new discrete callback, the |
Alright. I think that @tshort would agree it's good to separate them, because Sundials and DASKR (and probably all interop packages) will only want to see the

The last thing is then what I mentioned above.

What do you think here? If every solver is going to need to parse and split the tuple, it at least makes sense implementation-wise to split it, but I wouldn't want to unnecessarily put a burden on the user because of implementation issues. Maybe what really needs to be done is that, instead of a tuple, one uses a

```julia
# instead of
callback = (cb1,cb2,cb3)
# put it in a type
callback = CallbackSet(cb1,cb2,cb3)
```

and that constructor can store the two different tuples of callbacks to make it easier for the solvers to grab the two different types?
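(A sketch of what that could do, with the continuous variant assumed to be named `ContinuousCallback` and the splitting written as a separate builder function for clarity; a real implementation would fold this into the constructor:)

```julia
# Hypothetical CallbackSet: split the user's callbacks into two tuples at
# construction, so each solver can grab only the kind it supports.
immutable CallbackSet{T1,T2}
    continuous_callbacks::T1
    discrete_callbacks::T2
end

function build_callback_set(cbs...)
    continuous = tuple(filter(c -> isa(c, ContinuousCallback), collect(cbs))...)
    discrete   = tuple(filter(c -> isa(c, DiscreteCallback),   collect(cbs))...)
    CallbackSet(continuous, discrete)
end
```
|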
Well, here I am not sure. I think it is OK to make things easier for the user, but it may lead to problems that are very difficult to debug. Like, what if the user confuses a discrete callback with a continuous one? |
The |
Ah ok! Then it is fine! |
Hey everyone, so I made a few changes to the spec to accommodate the latest feedback. Instead

The

The

Now for a few details:
https://github.com/JuliaDiffEq/DiffEqBase.jl/blob/master/src/callbacks.jl#L34 Because of this, their generation is actually type-stable/inferrable.
That said, it is self-contained (non-leaky, i.e. it won't affect

https://github.com/JuliaDiffEq/OrdinaryDiffEq.jl/blob/master/src/callbacks.jl#L112

This is something that can almost directly be mapped over to other packages. I think that's all of it. |
As for your code @ronisbr, here's a version for the updated callback interface: https://gist.github.com/ChrisRackauckas/48f782cf6671ba5eb118baca4cc667c4 I also showed chaining of two different callbacks using a

```julia
function affect!(integrator)
    integrator.u.f1 = 1.5
    integrator.cache.tmp.f1 = 1.5
end
```

Notice the cache variable. This is due to the same reasons as before. That said, given the integrator specs, there will be a

```julia
function affect!(integrator)
    for c in cache(integrator)
        c.f1 = 1.5
    end
end
```

And of course, any native solver package could give you an iterator over its cached variables to do the same (but this is something only Julia-native packages could hope to support). That said... I haven't gotten around to implementing all of the cache iterators yet because it will be tedious. I'll get to it (it means the size-changing example is also broken on master right now too). |
Hi @ChrisRackauckas! Wow, you are fast! Amazing work, congratulations! I will test this tonight and, if I can make everything work, then tomorrow I will update my simulator and compare it with the old version. I will let you know the results. |
Hi @ChrisRackauckas, I am having a problem: I tried to run your example, but I got this error:
Both |
Nevermind, I deleted |
I meant |
Hi @ChrisRackauckas, In this new version, is it possible to let the solver define the algorithm? If I remove the algorithm from the |
That will need the DifferentialEquations.jl tag to go through, which is last. |
@ChrisRackauckas just to let you know that I ported my simulator to the new version of OrdinaryDiffEq and the new callback interface. Everything is exactly the same: no performance drop, no change in the results. And now I don't need a global variable to pass the time to the

When we can select the absolute tolerance per state, I think this simulator will beat Simulink's execution time, which is fantastic! |
Good to hear! Sounds like the callbacks are working well then. I'll close this and the rest is package-specific implementation issues. |
@ronisbr, I expanded the cache iterator parts. There's one for just the

```julia
function affect!(integrator)
    for c in u_cache(integrator)
        c.f1 = 1.5
    end
end
```

and that should work with all functions for which it's implemented (everything but Rosenbrock and the implicit methods), and will gain support as time goes on. Again, this is on master and may not be released for a bit. |
Excellent @ChrisRackauckas! Good work :) |
This is to discuss the callback interface separate from the clutter of #5. In there we started discussing a little about coming up with a common callback interface, but didn't get very far.