At the moment, to maximize performance in dynamics, it helps to first run a short single-time-step `run_dynamics` call that takes care of the precompilation, followed by the full trajectory.
All production scripts currently do this separately. Why don't we "bake" this into `run_dynamics` so that, internally, it first does a single time step to ensure precompilation and then runs the proper dynamics?
No user should have to worry about this subtlety, which is related to DiffEq and Julia precompilation issues.
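Concretely, the baked-in version could look something like this sketch, where `_run_dynamics` stands in for the current implementation (the real signature and keyword names in the package will differ):

```julia
# Sketch only: `_run_dynamics` is a placeholder for the existing implementation.
function run_dynamics(sim, tspan, u0; dt=0.1, kwargs...)
    # Throwaway single-step run whose only purpose is to trigger compilation.
    _run_dynamics(sim, (first(tspan), first(tspan) + dt), u0; dt=dt, kwargs...)
    # The real trajectory now runs on already-compiled code.
    return _run_dynamics(sim, tspan, u0; dt=dt, kwargs...)
end
```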
After giving this some thought, I would like some input from the main users / devs (@reinimaurer1, @Nhertl, @Alexsp32, @mlarkin863). There are two options as to what to do in the code, but they come with different advantages and disadvantages.
I will use the following generic framework:
```julia
int(u, p, t) = call_our_dynamics_method(...)     # ODE right-hand side
prob = ODEProblem(int, u0, tspan, p)
sol = solve(prob, solver(); callback=callbacks)
```
Option 1.
We do a single call to the `int(u, p, t)` function before placing it into the problem (sketch below).
Advantage: this is the fastest warm-up and precompiles the majority of the code before the problem is even built.
Disadvantage: the problem construction and `solve` aren't precompiled (they are only called once, so no big deal). Callbacks are also not precompiled; this is where I need user input on how often callbacks are called (I have no intuition for this).
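A minimal sketch of Option 1, assuming `u0` and `p` are available before the problem is built:

```julia
int(u0, p, first(tspan))              # throwaway call compiles the RHS
prob = ODEProblem(int, u0, tspan, p)  # problem construction itself stays cold
sol = solve(prob, solver(); callback=callbacks)  # solve and callbacks compile here, on the real run
```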
Option 2.
We perform the whole workflow above with a tspan of, say, (0.0, 0.1). This should precompile at least some of the callbacks, though probably not many of them. Other than that, the advantages and disadvantages are flipped (sketch below).
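A minimal sketch of Option 2, reusing the same problem via `remake`:

```julia
# Warm-up: identical problem over a tiny span, so `solve` and the callbacks
# compile on the same code paths the real run will hit.
solve(remake(prob; tspan=(0.0, 0.1)), solver(); callback=callbacks)

# Real run, reusing the now-compiled methods.
sol = solve(prob, solver(); callback=callbacks)
```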
Also, I need to have a think and a look at how we should do this with Distributed.jl, as I'm not sure whether it distributes compiled functions automatically (i.e. everything is linked to one compiled Julia instance) or whether it creates N Julia instances, in which case we would have to precompile everything N times anyway.
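In case each worker does turn out to be a separate process that compiles independently, a per-worker warm-up might look roughly like this (the stand-in problem and `warm_up` are hypothetical):

```julia
using Distributed
addprocs(4)

@everywhere using DifferentialEquations

# Hypothetical warm-up: one tiny solve, run once per worker process.
@everywhere warm_up(prob, alg) = (solve(remake(prob; tspan=(0.0, 0.1)), alg); nothing)

prob = ODEProblem((u, p, t) -> -u, 1.0, (0.0, 1.0))  # stand-in problem

# Workers are separate Julia processes with their own JIT caches, so the
# warm-up has to be triggered on each of them.
for w in workers()
    remotecall_fetch(warm_up, w, prob, Tsit5())
end
```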
I use callbacks quite a bit; all of them run every time step, for wrapping positions around periodic boundary conditions and for desorption checking.
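For context, an every-step callback of that kind might look roughly like this (the box size, and `u` holding only positions, are assumptions):

```julia
using DifferentialEquations

const box_length = 10.0  # assumed cubic box side length

# Fires after every accepted step and wraps positions back into the box;
# assumes `integrator.u` holds positions only.
always(u, t, integrator) = true
wrap!(integrator) = (integrator.u .= mod.(integrator.u, box_length))

pbc_cb = DiscreteCallback(always, wrap!; save_positions = (false, false))
```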
To me, it makes sense to run the whole workflow for a single time step so that anything in the callbacks that can be precompiled will be. This also has the beneficial side effect of making most errors in output functions show up before a lot of computation has been done.
I've not looked at whether "precompiling" one simulation affects all distributed processes, but if we keep all `run_dynamics` args, including parallelisation, the same, then it shouldn't really matter, right?