
Augmented Lagrangian #731

Closed
ivborissov opened this issue Apr 5, 2024 · 5 comments

Comments

@ivborissov

Hi @Vaibhavdixit02 ,

Following up on this conversation https://discourse.julialang.org/t/global-constrained-nonlinear-optimization/111972/9, I'd be glad to contribute an Augmented Lagrangian implementation to Optimization.jl, and I wanted to ask how you envision this implementation.

These are the implementations I have found:

  1. https://github.com/JuliaSmoothOptimizers/Percival.jl (pure Julia; uses the :tron solver for the local optimization; could potentially be extended to support other JSO-compatible solvers)
  2. NLopt AUGLAG https://nlopt.readthedocs.io/en/latest/NLopt_Algorithms/#augmented-lagrangian-algorithm (supports NLopt local and global algorithms for the inner optimization; see the usage sketch after this list)
  3. https://github.com/JuliaNonconvex/NonconvexAugLagLab.jl (pure Julia; not sure about the status of the package)
  4. https://github.com/JuliaSmoothOptimizers/NCL.jl (pure Julia; supports the IPOPT and KNITRO solvers)
  5. https://github.com/pjssilva/NLPModelsAlgencan.jl (Julia interface to the Fortran-based Algencan optimizer)
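
For reference, driving option 2 from Julia via NLopt.jl would look roughly like the sketch below; the toy objective, constraint, and tolerances are my own illustrative choices, not tested code:

```julia
# Rough sketch of NLopt's AUGLAG from Julia: the outer algorithm handles the
# constraints, while any NLopt algorithm can serve as the inner local solver.
using NLopt

function obj(x, grad)                    # minimize (x1 - 1)^2 + (x2 - 2)^2
    if length(grad) > 0
        grad[1] = 2 * (x[1] - 1)
        grad[2] = 2 * (x[2] - 2)
    end
    return (x[1] - 1)^2 + (x[2] - 2)^2
end

function cons(x, grad)                   # inequality constraint c(x) <= 0
    if length(grad) > 0
        grad[1] = 1.0
        grad[2] = 1.0
    end
    return x[1] + x[2] - 2
end

opt = Opt(:LD_AUGLAG, 2)                 # augmented-Lagrangian wrapper
opt.min_objective = obj
inequality_constraint!(opt, cons, 1e-8)
opt.xtol_rel = 1e-6

local_opt = Opt(:LD_LBFGS, 2)            # the inner (local) optimizer
local_opt.xtol_rel = 1e-6
opt.local_optimizer = local_opt

minf, minx, ret = optimize(opt, [0.0, 0.0])
```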

One idea is to write an interface to one of the existing implementations, but almost all of them restrict the choice of local optimizer to package-specific algorithms (NLopt) or ecosystem standards (JSO). Implementing an Augmented Lagrangian at the Optimization.jl level has the advantage of supporting a very rich choice of local optimizers.
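
To make that idea concrete, here is a minimal sketch of what the outer loop could look like on top of the Optimization.jl API, assuming equality constraints c(x) = 0 only; the toy problem, the fixed penalty ρ, and the update schedule are all illustrative assumptions, not a proposed design:

```julia
# Minimal augmented-Lagrangian outer loop over the Optimization.jl API.
# Equality constraints only; the penalty ρ is kept fixed for simplicity.
using Optimization, OptimizationOptimJL, ForwardDiff

f(x) = (x[1] - 1)^2 + (x[2] - 2)^2   # toy objective
c(x) = [x[1] + x[2] - 2]             # toy equality constraint c(x) = 0

function auglag(f, c, x0; ρ = 10.0, outer_iters = 10)
    x, λ = copy(x0), zeros(length(c(x0)))
    for _ in 1:outer_iters
        # Inner subproblem: minimize L_ρ(x, λ) with any local solver.
        Lρ = (u, p) -> f(u) + λ' * c(u) + ρ / 2 * sum(abs2, c(u))
        optf = OptimizationFunction(Lρ, Optimization.AutoForwardDiff())
        x = solve(OptimizationProblem(optf, x), LBFGS()).u
        # First-order multiplier update; a real implementation would also
        # adapt ρ and stop on the constraint-violation norm.
        λ += ρ * c(x)
    end
    return x, λ
end

x_opt, λ_opt = auglag(f, c, [0.0, 0.0])
```

Since the inner subproblem is just an `OptimizationProblem`, `LBFGS()` here could be swapped for any solver the ecosystem wraps, which is exactly the appeal of doing this at the Optimization.jl level.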

@mohamed82008

https://github.com/JuliaNonconvex/NonconvexAugLagLab.jl

Please don't use this now. It's an experimental package.

@Vaibhavdixit02
Member

Vaibhavdixit02 commented Apr 5, 2024

Hi @ivborissov, thanks for the offer to help. This is something I am very interested in and have already started working on in #727. It will be generalized once the bundled implementation feels good enough; right now it's quite a naive implementation. Feel free to review the PR.

@Vaibhavdixit02
Member

@mohamed82008 that approach is quite interesting. Correct me if I am misunderstanding: your hypothesis is that, instead of the traditional iterative updates of the dual variables in the Lagrangian, you use another optimization solver to solve for the dual variables?
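
In other words (my notation, assuming equality constraints c(x) = 0 and augmented Lagrangian L_ρ), the contrast would be roughly the classical first-order update versus a dual maximization:

```math
\lambda_{k+1} = \lambda_k + \rho_k \, c(x_k)
\qquad \text{vs.} \qquad
\lambda_{k+1} \in \arg\max_{\lambda} \, \min_{x} \, L_{\rho_k}(x, \lambda)
```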

@mohamed82008

Yes, though it is very slow and not required for convergence. I was just messing around.

@ivborissov
Author

@Vaibhavdixit02 Cool! Thanks for the link; I had missed that you had already started the implementation.
