
Solve constrained optimization through augmented Lagrangian #16442

Open · 2 of 9 tasks
hongkai-dai opened this issue Jan 26, 2022 · 3 comments
Labels: component: mathematical program, type: feature request
hongkai-dai (Contributor) commented Jan 26, 2022

Drake does not currently implement the augmented Lagrangian method for optimization. We think this method can be very helpful for constrained optimization.

The tentative plan is to support the following features (a standard form of the augmented Lagrangian is sketched after this list):

  • Compute the augmented Lagrangian for a given mathematical program.
  • Solve the constrained optimization program using a black-box optimizer with the augmented Lagrangian.
  • Solve the constrained optimization using a gradient-based solver with the augmented Lagrangian.
    • Solve the smooth augmented Lagrangian with a first-order method (gradient descent, Adam, etc.).
    • Solve the smooth augmented Lagrangian with a second-order method (Newton, L-BFGS, etc.).
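
For reference, a standard form of the augmented Lagrangian for a program min_x f(x) subject to h(x) = 0 (equality constraints only, for brevity; inequality constraints need slack variables or a smooth penalty) is

$$\mathcal{L}_\rho(x, \lambda) = f(x) - \lambda^T h(x) + \frac{\rho}{2} \|h(x)\|^2,$$

where λ is the current multiplier estimate and ρ > 0 is the penalty weight. The outer loop alternates between minimizing L_ρ over x (the inner solve) and updating λ and ρ.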
hongkai-dai self-assigned this Jan 26, 2022
hongkai-dai added the component: mathematical program label Jan 26, 2022
RussTedrake (Contributor) commented

I also think that we should provide a method that constructs a new program as the augmented Lagrangian of the original problem (e.g., AddDecisionVariables for the original decision variables, taking the current values of the multipliers as an argument, and then AddCost for the augmented Lagrangian cost), no? I guess we also need to provide the updates for the multipliers. But that way all of the existing solvers could be applied to the augmented Lagrangian formulation?

(Perhaps this is contained in your "gradient-based solver" bullet...?)
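
For concreteness, here is a minimal sketch of that construction in pydrake, assuming equality constraints only; the toy cost and constraint and the helper name `make_al_program` are illustrative, not part of Drake's API:

```python
import numpy as np
from pydrake.solvers.mathematicalprogram import MathematicalProgram


def make_al_program(lambda_val, rho):
    """Builds an unconstrained program whose cost is the augmented
    Lagrangian L(x) = f(x) - lambda'h(x) + rho/2 |h(x)|^2 for the toy
    problem: min (x0-1)^2 + (x1-2)^2  s.t.  x0 + x1 - 1 = 0."""
    prog = MathematicalProgram()
    x = prog.NewContinuousVariables(2, "x")
    f = (x[0] - 1) ** 2 + (x[1] - 2) ** 2   # original cost
    h = np.array([x[0] + x[1] - 1])         # original equality constraint
    # lambda_val and rho enter as fixed numbers, not decision variables,
    # so the resulting program has a single cost and no constraints.
    prog.AddCost(f - lambda_val.dot(h) + rho / 2 * h.dot(h))
    return prog, x, h
```

Since the result is an ordinary unconstrained MathematicalProgram, any of the existing solvers can be applied to it.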

hongkai-dai (Contributor, Author) commented

What I planned is like this:

  1. Create a new solver, say AugmentedLagrangianSolver, which takes in a solver ID (like SNOPT/IPOPT/NLOPT).
  2. Inside this solver, when Solve() is called with a constrained MathematicalProgram, it first constructs a new MathematicalProgram corresponding to the augmented Lagrangian for the given Lagrangian multipliers and penalty term. The solver with the ID provided in step 1 then solves this unconstrained MathematicalProgram.
  3. It then updates the Lagrangian multipliers and the penalty term, and goes to step 2 again.

Does this make sense? One implementation is in Anzu: https://github.shared-services.aws.tri.global/robotics/anzu/blob/master/common/nevergrad_al.py.
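
A minimal sketch of that outer loop, reusing the hypothetical `make_al_program` helper from the earlier comment; the first-order multiplier update λ ← λ − ρ h(x*) and the fixed penalty growth factor below are one common choice, not the only one:

```python
import numpy as np
from pydrake.solvers.mathematicalprogram import Solve


def solve_with_augmented_lagrangian(lambda0, rho0, max_outer_iters=20, tol=1e-6):
    lambda_val = np.asarray(lambda0, dtype=float)
    rho = rho0
    x_star = None
    for _ in range(max_outer_iters):
        # Step 2: build and solve the unconstrained AL subproblem.
        prog, x, h = make_al_program(lambda_val, rho)
        result = Solve(prog)
        x_star = result.GetSolution(x)
        # Measure the constraint violation at the subproblem's solution.
        env = dict(zip(x, x_star))
        h_val = np.array([hi.Evaluate(env) for hi in h])
        if np.linalg.norm(h_val) < tol:
            break  # constraints (nearly) satisfied; stop iterating
        # Step 3: first-order multiplier update, then increase the penalty.
        lambda_val = lambda_val - rho * h_val
        rho *= 10.0
    return x_star, lambda_val
```

Here Solve() picks a default solver; in the proposed AugmentedLagrangianSolver, the solver ID from step 1 would choose the inner solver instead.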

RussTedrake (Contributor) commented

Yes, that sounds great. I just wanted to make sure I understood the plan. Thanks.
