Replies: 4 comments 1 reply
-
Thanks Mark. It's probably worth submitting a PR to pyoptsparse regarding the IPOPT issue. We're also working on a method for getting the multipliers out of optimizers that don't explicitly provide them. You're right, though: with those in hand we can work out the costates, do a better job of verifying whether the necessary optimality conditions are satisfied, and help with scaling/balancing. We should probably add a function to dymos.utils that takes a problem object and, for each trajectory and phase in the problem, finds the Lagrange multipliers that apply to the states and converts them to costates. As you said, that conversion will be transcription dependent, and there are references out there for how to do it. Ultimately, path and boundary constraints also have their own Lagrange multipliers that are used in satisfying the necessary conditions for optimality. It would be nice to eventually have those as well, but for now I think getting the costates is a good next step. Once we get that function working, we can add plotting of the costates.
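To make that concrete, here is a rough, self-contained sketch of the kind of per-state conversion such a utility could wrap. The function name is made up, and the specific factors (division by the quadrature weights, the ref-based unscaling) are my reading of the pseudospectral covector-mapping idea rather than a statement of what dymos will actually do; as noted above, the correct mapping is transcription dependent.

```python
# Rough sketch only: the name and the exact factors are illustrative.
import numpy as np

def defect_multipliers_to_costates(lam, w, state_ref=1.0, obj_ref=1.0):
    """Estimate costates for one state from its collocation-defect multipliers.

    lam       : multipliers on the state's defect constraints, one per node
    w         : quadrature weights of the collocation nodes
    state_ref : 'ref' used to scale this state in the transcription
    obj_ref   : 'ref' used to scale the objective
    """
    lam = np.asarray(lam, dtype=float)
    w = np.asarray(w, dtype=float)

    # Covector-mapping-style estimate: the NLP multiplier at each node is,
    # roughly, the costate weighted by the quadrature weight, so dividing by
    # the weight recovers a pointwise costate estimate.  Depending on how the
    # defects are formed, an additional factor involving the phase duration
    # may also be required.
    costate = lam / w

    # Undo optimizer-side scaling: if the state is divided by state_ref and
    # the objective by obj_ref, the physical costate is (obj_ref / state_ref)
    # times the scaled one.
    return costate * obj_ref / state_ref
```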
-
I went ahead and added this change to pyOptSparse, so hopefully it will be in the main code soon.
-
Thank you Rob, my plans of working on it last weekend got completely derailed!
-
I have a branch here with a costate-estimation example: https://github.com/robfalck/dymos/tree/costates

It uses the Fahroo & Ross paper from 2001 to recover the costates: https://arc.aiaa.org/doi/10.2514/2.4709

It requires the files from pyoptsparse pull request 415 above to work correctly. Since it relies on the Lagrange multipliers reported by the optimizer, it will currently only work with IPOPT and SNOPT. It matches the results from Example 1 very well.

This is currently a proof of concept and will need a lot of cleanup before making it into the code. This is an overlay that plots the dymos solution (in color) against the results from the Fahroo & Ross paper. The curves are just overlaid by eyeball, so it's not super precise, but I think it demonstrates that we're obtaining the costates successfully.
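For reference, the central relation I'm using from that paper (stated loosely, in my own notation) is the covector mapping for the Legendre pseudospectral method: the costate at each collocation node is the NLP multiplier on the corresponding defect divided by the quadrature weight at that node.

```latex
% Loose statement of the covector mapping from Fahroo & Ross (2001).
% \tilde{\lambda}_k : KKT multiplier on the k-th collocation defect
% w_k               : quadrature weight at node k
% \lambda(\tau_k)   : costate estimate at node k
\lambda(\tau_k) \approx \frac{\tilde{\lambda}_k}{w_k}, \qquad k = 0, 1, \dots, N
```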
-
Hi all,
When developing a new problem, it seems to happen fairly often that I get stuck in "scaling hell": spending a lot of time trying different values of "ref" or "scaler" for the states, constraints, and objective to get the optimizer to converge, and to do so without taking very small step sizes. It is a long and tedious process with no guarantee of success.
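For anyone landing here later, these are the kinds of knobs I mean. The fragment below assumes an existing dymos Phase named `phase`; the state/control names and values are made up purely for illustration, and the point is just how many places there are to tune by hand.

```python
# Illustrative fragment only: `phase` is an existing dymos Phase, and all
# names and scaling values here are invented for the sake of the example.
phase.add_state('h', units='m', rate_source='eom.h_dot',
                ref=10_000.0, defect_ref=10_000.0)
phase.add_state('v', units='m/s', rate_source='eom.v_dot',
                ref=100.0, defect_ref=100.0)
phase.add_control('alpha', units='deg', lower=-8.0, upper=8.0, ref=8.0)
phase.add_boundary_constraint('h', loc='final', equals=20_000.0, ref=20_000.0)
phase.add_objective('time', loc='final', ref=100.0)
```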
In the past, Rob Falck has referred me to the following paper about how to scale trajectory problems: Scaling and Balancing for High-Performance Computation of Optimal Controls.
I recently revisited it after reading another book by one of the authors, I. M. Ross, A Primer on Pontryagin's Principle in Optimal Control, and I can now better understand what the scaling and balancing paper is saying.
Essentially, the author is saying that in order to scale and balance a problem, you need to be able to see the costates, or Lagrange multipliers, associated with the constraints imposed by the equations of motion, and those imposed by the boundary/terminal conditions.
The paper also shows that simply scaling the states and constraints to be of order of magnitude 1 can work, but is probably not the best approach; for a lot of problems it doesn't seem to work at all. According to Ross, the best thing to do is to ensure that the states and costates are of the same order of magnitude, and that order of magnitude may not be 1.
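In my own notation (not the paper's), the transformation that makes this concrete is: if a state is divided by a designer unit X_x and the cost by X_J, then the corresponding costate picks up the ratio of those units.

```latex
% Notation mine: X_x scales a state, X_J scales the cost.
\bar{x} = \frac{x}{X_x}, \qquad \bar{J} = \frac{J}{X_J}
\quad\Longrightarrow\quad
\bar{\lambda} = \frac{X_x}{X_J}\,\lambda .
% "Balancing" = choosing X_x and X_J so that |\bar{x}| and |\bar{\lambda}|
% come out the same order of magnitude (which need not be 1).
```

So, for example, halving a state's "ref" doubles the scaled state but halves the scaled costate, moving the two in opposite directions.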
In order to find a better way to set up problems and get them working, I started looking into how I could get costate information. Both SNOPT and IPOPT compute the Lagrange multipliers.
I looked into what pyOptSparse does with the Lagrange multiplier data from IPOPT, and it turns out it simply doesn't return them. However, modifying one line of code in pyOptSparse's "pyIpopt.py" module allows you to access them.
They can then be found in the OpenMDAO problem, under p.driver.pyopt_solution.lambdastar (going from memory here, as I don't have the code in front of me).
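Something along these lines, with the big caveat that the attribute name is from memory and should be checked against the pyOptSparse source:

```python
# Sketch only: the lambdastar attribute name is quoted from memory above,
# so double-check it against the pyOptSparse source before relying on this.
import openmdao.api as om

p = om.Problem()
# ... build the dymos trajectory/phases on p.model as usual ...
p.driver = om.pyOptSparseDriver()
p.driver.options['optimizer'] = 'IPOPT'
p.setup()
# ... set initial guesses ...
p.run_driver()

sol = p.driver.pyopt_solution   # the pyOptSparse Solution object
lam = sol.lambdastar            # Lagrange multipliers (exact name to confirm)
print(lam)
```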
I would like to be able to translate these Lagrange multipliers into the appropriate costate values. From what I have read, there seem to be several more steps to go through before arriving at a useful quantity; the transcription used and the variable scalings all matter.
I am willing to do the work to implement this feature, but I need to learn a bit more about what to do. Could anyone point me to books and papers to read? I haven't been able to find a book that really gets into the nitty-gritty of pseudospectral optimal control. The Betts book doesn't seem to cover the transcriptions we see in Dymos, like Radau, Gauss-Lobatto, and now Birkhoff.
The end result I hope to achieve is to be able to see the costates plotted along with the states, and then use that information to scale the problem much more easily and get rid of a lot of trial-and-error time.