Dymos performance question #1120
Unanswered
Asked by FluffyCodeMonster in Q&A
Replies: 2 comments, 5 replies
-
If it helps, I can also provide the iteration-by-iteration output from a solve in IPOPT.
-
Even with good gradients, it's possible that there are multiple paths that provide the same (or nearly the same) performance at that point. Nonlinear optimizers like SNOPT, IPOPT, and SLSQP typically struggle at these conjugate points because they're tuned to find a minimum that might not be well defined. Some things you can try:
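As a standalone toy illustration (not part of the thread, and not one of the suggested fixes): when an objective has a continuum of equally good minimisers, a gradient-based method drives the objective to zero, but *where* it lands depends on the start point, and the optimality conditions around the solution are degenerate. A minimal sketch in plain Python:

```python
# Toy objective minimised on the whole line x + y = 1, so there is a
# continuum of equally good solutions rather than one isolated minimum.

def f(x, y):
    return (x + y - 1.0) ** 2

def grad(x, y):
    g = 2.0 * (x + y - 1.0)
    return g, g  # gradient is identical in both coordinates

def descend(x, y, lr=0.1, steps=200):
    # Plain gradient descent; converges in objective value, but the
    # landing point depends entirely on the initial guess.
    for _ in range(steps):
        gx, gy = grad(x, y)
        x -= lr * gx
        y -= lr * gy
    return x, y

a = descend(0.0, 0.0)    # lands near (0.5, 0.5)
b = descend(3.0, -1.0)   # lands near (2.5, -1.5)
# Both points achieve (numerically) zero objective, yet they are far apart:
# a well-defined "the" minimum does not exist along that line.
```

In a collocation problem the analogue is a family of trajectories with near-identical cost, which matches the observed behaviour of the solution jittering in place while the dual error stalls.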
-
I'm experiencing some potential optimisation issues using Dymos (with IPOPT as the solver), and wanted to ask whether anything seems wrong with my setup. I'm concerned as my optimisation is taking a very long time compared to the Dymos examples, and I don't know if I'm doing something wrong which might be slowing it unnecessarily. I've tried to compare run times against systems of a similar size/complexity in other papers and it doesn't seem too far wrong (assuming they're set up well), but thought it would be a good idea to seek insight here.
The system I'm optimising has 12 states (plus other OpenMDAO states for output), with quite a lot of coupling in the differential equations, and two control inputs. The partial derivatives are calculated using Jax, and each iteration of the dynamics involves an interpolation, performed by the Interpax Python library. I've checked the partials with the check_partials method and the gradient errors look okay (the largest absolute error is 0.00012, and much smaller with complex step, although complex step only works if I disable the interpolation, so it's not a true representation). With IPOPT it can take hours to solve (usually with one grid refinement), and even then it often ends with an error message, or only 'solves to an acceptable level'. The outputs of some of my runs are shown in the attached spreadsheet; as can be seen, almost all of them exit with an IPOPT error.
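As an aside on why the complex-step check reports much smaller errors than the finite-difference one: central differences lose digits to subtractive cancellation, while the complex step does not, so a larger FD error does not by itself mean the analytic partials are wrong. A self-contained sketch (plain Python, no OpenMDAO or Dymos involved):

```python
# Compare a central finite difference against the complex-step derivative
# of a smooth test function whose analytic derivative we know exactly.
import cmath
import math

def f(x):
    return math.exp(math.sin(x))       # real-valued evaluation

def fc(z):
    return cmath.exp(cmath.sin(z))     # same function, complex-safe

x = 0.7
exact = math.cos(x) * math.exp(math.sin(x))  # analytic derivative

h = 1e-6
fd = (f(x + h) - f(x - h)) / (2 * h)         # central difference: subtraction
                                             # of nearly equal numbers
cs = fc(complex(x, 1e-30)).imag / 1e-30      # complex step: no subtraction,
                                             # accurate to machine precision

print("FD error:", abs(fd - exact))
print("CS error:", abs(cs - exact))
```

The FD error here sits many orders of magnitude above the CS error for the same correct derivative, which is consistent with an absolute check error of ~1e-4 still corresponding to healthy partials.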
The primal error often gets down to around 1e-13 or 1e-14, but the dual error struggles, getting stuck at about 1e-3 to 1e-4 at best. Animating the state discretisation nodes/collocation points shows that they do find a good solution quite quickly, but then stay put and jitter for the rest of the time. However, if the gradients are okay, I'm not sure what's happening. This might just be the normal behaviour of a system like this, but I don't really have a reference point for these kinds of optimisation problems.
For extra information, the Jacobian shape pre-grid-refinement is (6374, 5249); it takes ~99 s to compute the sparsity and ~18 s to compute the colouring. Post-refinement it's (9824, 6449); it takes ~156 s to compute the sparsity and ~46 s for the colouring. All of this is being done on a very powerful desktop computer, although the software as written doesn't utilise many of its cores. This is another peculiarity: on my personal desktop it uses all 16 of them, but that's perhaps to do with the Jax/XLA compatibility of the machine.
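For readers unfamiliar with why the colouring step is worth those seconds: columns of a sparse Jacobian that share no row can be perturbed simultaneously, so the number of derivative passes drops from the number of columns to the number of colours. A minimal greedy-colouring sketch (hypothetical, not the algorithm OpenMDAO actually uses):

```python
# Greedy column colouring of a sparse Jacobian: two columns get different
# colours if they have a nonzero in the same row; columns with the same
# colour can be differenced together in one pass.

def greedy_column_coloring(nonzeros, n_cols):
    rows_of = [set() for _ in range(n_cols)]
    for r, c in nonzeros:
        rows_of[c].add(r)
    colors = [0] * n_cols
    for c in range(n_cols):
        # Colours already taken by earlier columns that conflict with c.
        used = {colors[d] for d in range(c) if rows_of[c] & rows_of[d]}
        color = 0
        while color in used:
            color += 1
        colors[c] = color
    return colors

# Banded sparsity pattern: column j has nonzeros in rows j and j + 1,
# loosely mimicking the block structure of a collocation transcription.
n = 12
nz = [(j, j) for j in range(n)] + [(j + 1, j) for j in range(n)]
colors = greedy_column_coloring(nz, n)
print(max(colors) + 1, "colours instead of", n, "columns")  # 2 vs 12
```

For a Jacobian with thousands of columns this is the difference between thousands of Jacobian-vector passes per iteration and a handful, which is why paying the one-off sparsity/colouring cost up front is usually the right trade.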
I was wondering whether these solve times (also shown in the spreadsheet) look normal, whether it's standard to encounter errors like this, or whether I'm doing something fundamentally wrong. Many thanks for any assistance - I really appreciate the help.
Investigating solver performance.xlsx