Pass sparse structure of constraints to NLPModels #183
@amontoison can you please point towards a worked-out example defining an NLPModel? According to the API, I guess that the items listed here are required (Jacobian / Hessian and associated products...)
Do you want a
Thanks @amontoison for the example. I suppose that building the whole thing manually (= sparse derivatives, using AD only where it's needed) gives the best result, but what is your advice?
@jbcaillau If you directly provide the sparsity pattern of the Jacobians / Hessians in an ADNLPModel, you avoid detecting it automatically. If you build the whole thing manually, the coloring / AD / decompression will also be replaced by your own functions. With AD we can evaluate multiple directional derivatives at the same time to compute the compressed Jacobian / Hessian.
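To make the compressed-Jacobian idea concrete, here is a minimal sketch in plain Julia: the constraint, its sparsity pattern and the coloring are made up for the example, and finite differences stand in for AD. Grouping structurally orthogonal columns lets one directional derivative recover several Jacobian columns at once.

```julia
using SparseArrays

# constraint with a banded Jacobian: row i depends on x[i] and x[i+1]
c(x) = [x[1]^2 + x[2], x[2]^2 + x[3], x[3]^2 + x[4]]
x = ones(4)

# sparsity pattern of the Jacobian: rows touched by each column
rows_of_col = [[1], [1, 2], [2, 3], [3]]

# columns 1 & 3 and columns 2 & 4 never share a row: two colors suffice
colors = [1, 2, 1, 2]

J = spzeros(3, 4)
h = 1e-8
for k in 1:2
    seed = Float64.(colors .== k)           # sum of all columns with color k
    dk = (c(x + h * seed) - c(x)) / h       # one directional derivative per color
    for j in findall(==(k), colors), i in rows_of_col[j]
        J[i, j] = dk[i]                     # decompression: the rows are disjoint
    end
end
```

Two directional derivatives are enough here instead of one per column; this is the kind of saving the coloring buys.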
thanks @amontoison, let's try the first option as a start. looking forward to JuliaSmoothOptimizers/ADNLPModels.jl#286
@amontoison actually, there is a worked-out example of building an NLPModel by hand in the MadNLP docs:
I already gave you a tutorial on that. In the same spirit as the MadNLP example, we have a few problems defined by hand to test our various AbstractNLPModels in the JSO ecosystem:
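In that spirit, here is a minimal hand-written model (the problem itself is made up for illustration; the overridden functions follow the NLPModels.jl API used by those hand-defined test problems). The sparse Jacobian and Hessian of the Lagrangian are given in coordinate format through the `*_structure!` / `*_coord!` pairs.

```julia
using NLPModels

# min (x₁ - 1)² + (x₂ - 2)²   s.t.   x₁² + x₂² = 1
struct MyNLP <: AbstractNLPModel{Float64, Vector{Float64}}
  meta::NLPModelMeta{Float64, Vector{Float64}}
  counters::Counters
end

function MyNLP()
  meta = NLPModelMeta(2; ncon = 1, x0 = [0.5; 0.5], lcon = [1.0], ucon = [1.0],
                      nnzj = 2, nnzh = 2, name = "mynlp")
  return MyNLP(meta, Counters())
end

NLPModels.obj(nlp::MyNLP, x) = (x[1] - 1)^2 + (x[2] - 2)^2

function NLPModels.grad!(nlp::MyNLP, x, gx)
  gx .= [2 * (x[1] - 1); 2 * (x[2] - 2)]
  return gx
end

function NLPModels.cons!(nlp::MyNLP, x, cx)
  cx[1] = x[1]^2 + x[2]^2
  return cx
end

# sparse Jacobian in coordinate (COO) format: structure once, values at each x
function NLPModels.jac_structure!(nlp::MyNLP, rows, cols)
  rows .= [1, 1]; cols .= [1, 2]
  return rows, cols
end

function NLPModels.jac_coord!(nlp::MyNLP, x, vals)
  vals .= [2 * x[1], 2 * x[2]]
  return vals
end

# Hessian of the Lagrangian (lower triangle): σ ∇²f + y₁ ∇²c₁, diagonal here
function NLPModels.hess_structure!(nlp::MyNLP, rows, cols)
  rows .= [1, 2]; cols .= [1, 2]
  return rows, cols
end

function NLPModels.hess_coord!(nlp::MyNLP, x, y, vals; obj_weight = 1.0)
  vals .= fill(2 * obj_weight + 2 * y[1], 2)
  return vals
end
```

Such a model can then be passed directly to any solver of the JSO ecosystem that accepts an AbstractNLPModel.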
@amontoison yes, exactly what we need 👍🏽. thanks again.
We can easily provide the sparsity pattern of the Jacobian / Hessian with release 0.8.5 of ADNLPModels.jl.
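As a sketch of what that can look like (the `jacobian_pattern` / `hessian_pattern` keyword names are an assumption about the 0.8.5 interface, so check the ADNLPModels.jl docs for the exact keywords), the patterns are passed as sparse Boolean matrices and bypass the automatic sparsity detection:

```julia
using ADNLPModels, SparseArrays

f(x) = (x[1] - 1)^2 + (x[2] - 2)^2
c(x) = [x[1]^2 + x[2]^2]
x0 = [0.5; 0.5]

# sparsity patterns known in advance (only the structure matters, not the values)
Jpat = sparse([1, 1], [1, 2], [true, true], 1, 2)   # 1×2 Jacobian, both entries nonzero
Hpat = sparse([1, 2], [1, 2], [true, true], 2, 2)   # diagonal Lagrangian Hessian

# keyword names assumed: they provide the patterns instead of detecting them
nlp = ADNLPModel(f, x0, c, [1.0], [1.0];
                 jacobian_pattern = Jpat, hessian_pattern = Hpat)
```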
nice, @amontoison. actually, this seems to be OK for our needs for direct transcription into an NLP. BTW, given that, what would be the benefit of using NLPModels (instead of ADNLPModels)? the expected improvement being on sparsity patterns (that we do know in advance), I guess that ADNLPModels is enough. am I missing sth?
@jbcaillau It depends on whether you have more information than the sparsity pattern alone, for instance repeated blocks. Suppose the Jacobian has the block structure

J = [
  J2 J3 0  0
  J1 J2 J3 0
  0  J1 J2 J3
  0  0  J1 J2
  0  0  0  J1
]

In this case, you can optimize computations by only calculating the blocks J1, J2, and J3 once and reusing them everywhere they appear in J.
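As a toy illustration of that reuse (block size and contents are placeholders), the full sparse Jacobian can be assembled from the three distinct blocks, each evaluated a single time:

```julia
using SparseArrays

nb = 2                                   # block size, assumption for illustration
J1, J2, J3 = rand(nb, nb), rand(nb, nb), rand(nb, nb)   # computed once

nbr, nbc = 5, 4                          # block rows / columns as in the structure above
J = spzeros(nbr * nb, nbc * nb)
blockrange(k) = (k - 1) * nb + 1 : k * nb

for i in 1:nbr, (offset, B) in ((-1, J1), (0, J2), (1, J3))
    j = i + offset                       # J1 below the block diagonal, J2 on it, J3 above
    1 <= j <= nbc || continue
    J[blockrange(i), blockrange(j)] = B  # each distinct block is reused, never recomputed
end
```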
🙏🏽 @amontoison perfectly clear, many thanks. having such common blocks is not expected a priori. the next step in the case of the direct approach (ocp -> nlp vs. using Pontrjagin then shooting or so) is rather exploiting SIMD features.
Hi. What about the dynamics and path constraints along the time steps? I thought they would be this kind of common block, did I miss something? Currently, the constraints evaluation looks like this (the double arguments and update parts are a specific optimization for the trapeze method and would not be present for another discretization scheme). Not included is the final part with the boundary and variable constraints.
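A generic sketch of trapezoidal defect constraints in that spirit (all names, the signature, and the layout are illustrative assumptions, not the package's actual code): the dynamics are evaluated once per grid node and reused for the next defect, which is the idea behind the double-argument / update optimization.

```julia
# Illustrative sketch only: trapezoidal defect constraints over the time grid,
# reusing the previous dynamics evaluation instead of recomputing it.
function trapeze_defects!(c, X, U, t, f)
    nx, N = size(X, 1), length(t) - 1
    fi = f(X[:, 1], U[:, 1], t[1])                     # dynamics at the first node
    for i in 1:N
        h = t[i + 1] - t[i]
        fip1 = f(X[:, i + 1], U[:, i + 1], t[i + 1])
        # defect: x_{i+1} - x_i - h/2 * (f_i + f_{i+1}) = 0
        c[(i - 1) * nx + 1:i * nx] .= X[:, i + 1] .- X[:, i] .- (h / 2) .* (fi .+ fip1)
        fi = fip1                                      # reuse for the next interval
    end
    return c                                           # path / boundary constraints omitted
end

# example usage with a toy dynamics ẋ = u (dimensions assumed)
f(x, u, t) = u
t = range(0, 1; length = 11)
X, U = rand(1, 11), rand(1, 11)
c = zeros(10)
trapeze_defects!(c, X, U, t, f)
```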
@PierreMartinon common generators (= functions to evaluate) for every time step, but different evaluations, so different blocks. Maybe a few common things in the particular case of the trapezoidal scheme (see your smart update - nice 👍🏽)? I might be missing sth, though.
See control-toolbox/CTBenchmarks.jl#17 (comment)