
Lack of convergence for extreme-scale problems #40

Open
cnpetra opened this issue May 14, 2018 · 0 comments

Behavior
The PIPS-NLP solver does not converge for some extreme-scale problems (thousands of scenarios and millions of variables/constraints). The issue manifests as either of the following:

  1. PIPS-NLP returns with the message "Problem may be locally infeasible. Feasibility Restoration Phase is needed [...]"
  2. a large number of iterations (>100) without much decrease in the "Inf_pr", "Inf_du", "Comp", and "lg(mu)" convergence metrics
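Symptom 2 can be checked mechanically once the iteration log has been parsed. A minimal sketch, assuming a list of "Inf_pr" values per iteration; the `is_stalled` helper and its window/threshold are illustrative, not part of PIPS-NLP:

```python
def is_stalled(inf_pr, window=100, min_rel_drop=0.5):
    """Flag a stall: over the last `window` iterations the primal
    infeasibility ("Inf_pr" column) failed to drop by min_rel_drop."""
    if len(inf_pr) < window:
        return False
    old, new = inf_pr[-window], inf_pr[-1]
    return new > (1.0 - min_rel_drop) * old

# A healthy run shrinks Inf_pr steadily; a stalled run barely moves.
healthy = [10.0 * 0.9 ** k for k in range(150)]
stalled = [10.0 - 1e-6 * k for k in range(150)]
print(is_stalled(healthy), is_stalled(stalled))  # False True
```

The same check applies to the other convergence metrics in the log.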

Cause
This issue almost always occurs when the problem is numerically ill-posed in some sense: either the constraint Jacobian is rank-deficient or the problem is badly scaled.
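The first failure mode is easy to reproduce on a toy Jacobian. A hedged NumPy sketch (the matrix is made up; PIPS-NLP works with distributed sparse blocks, not small dense arrays):

```python
import numpy as np

# Toy constraint Jacobian: row 1 is an exact multiple of row 0,
# e.g. the same constraint entered twice in different units.
J = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],   # 2 * row 0 -> redundant constraint
              [0.0, 1.0, 0.0]])

rank = np.linalg.matrix_rank(J)
print(rank)  # 2, i.e. fewer than the 3 constraint rows: rank-deficient
```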

While PIPS-NLP has mathematical mechanisms to cope with these issues, namely primal and dual regularization strategies, a couple of things can go wrong. Very often the inertia needed by these regularizations is inaccurate: the diagonal KKT blocks of the problem are severely ill-conditioned or even singular, and the linear solvers PIPS-NLP uses under the hood return incorrect inertias for these blocks. We stress that this is caused by the ill-conditioning of the problem; the linear solvers themselves are fine. In other cases, the linear solvers do not detect rank deficiency and solve singular linear systems, which results in "garbage" search directions inside PIPS (you will see "AugSysErr" > 1e-4 in PIPS-NLP's output and/or a very large ||d||).
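The inertia mechanism can be illustrated on a dense toy KKT matrix. For n variables and m equality constraints, the interior-point step is trusted when the KKT matrix has inertia (n, m, 0); primal regularization adds δI to the Hessian block until that inertia is obtained. A minimal sketch, with made-up matrices and δ; PIPS-NLP obtains the inertia from its sparse factorizations, not from dense eigenvalues:

```python
import numpy as np

def inertia(K, tol=1e-10):
    """(positive, negative, zero) eigenvalue counts of symmetric K."""
    ev = np.linalg.eigvalsh(K)
    pos = int(np.sum(ev > tol))
    neg = int(np.sum(ev < -tol))
    return pos, neg, len(ev) - pos - neg

H = np.array([[1.0, 0.0], [0.0, -1.0]])  # indefinite Hessian block, n = 2
A = np.array([[1.0, 1.0]])               # constraint Jacobian block, m = 1

def kkt(H, A):
    m = A.shape[0]
    return np.block([[H, A.T], [A, np.zeros((m, m))]])

print(inertia(kkt(H, A)))        # (1, 1, 1): singular KKT, wrong inertia
delta = 2.0                      # primal regularization: H + delta * I
print(inertia(kkt(H + delta * np.eye(2), A)))  # (2, 1, 0) as required
```

If the factorization reports the wrong inertia for such a block, the regularization loop is driven by bad information, which is exactly the failure described above.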

Fix
The ultimate fix is to improve the numerical formulation of the problem by ensuring full-rank Jacobians and/or good scaling. When this is not possible, we recommend a couple of workarounds/possible fixes within PIPS-NLP.
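Of the two formulation fixes, scaling is usually the cheaper one. A hedged sketch of simple row equilibration (the Jacobian is made up; in practice the scale factors would come from the model, e.g. by converting constraint units):

```python
import numpy as np

# Badly scaled Jacobian: the first constraint is stated in units
# roughly 1e6 times larger than the second.
J = np.array([[1e6, 2e6],
              [1.0, 3.0]])
print(np.linalg.cond(J))         # ~5e6

# Row equilibration: divide each constraint by its row norm
# (equivalent to rescaling the constraint and its multiplier).
r = np.linalg.norm(J, axis=1, keepdims=True)
J_scaled = J / r
print(np.linalg.cond(J_scaled))  # ~14: orders of magnitude better
```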

Fine tuning PIPS-NLP
Here are a few potential PIPS-based fixes for the issue.

  1. Switch between MA27 and MA57 (options 'SymLinearSolver 0' and 'SymLinearSolver 1', respectively).
  2. Use the 'DoIR_Aug 1' option (iterative refinement for the augmented linear system), in combination with 1.
  3. Use the 'DoIR_Full 1' option (iterative refinement for the entire linearized KKT system), in combination with 1.
  4. Provide a good initial (primal) starting point, for example by solving the problem with a small number of scenarios and "porting" the solution to the problematic instance with the larger number of scenarios. This strategy has proved quite robust for SC-ACOPF problems.
  5. Use inertia-free regularization: option 'dWd_test 1' or 'dWd_test 2'. These strategies are not as robust as inertia-based regularization, but by this point you are running out of options.
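What options 2 and 3 buy can be seen in a generic iterative-refinement example: solve in low precision, compute the residual in high precision, and solve again for a correction. A hedged NumPy sketch; a dense Hilbert system stands in for the augmented/KKT systems, and PIPS-NLP's 'DoIR_*' refinement runs inside its sparse solvers, not like this:

```python
import numpy as np

# Ill-conditioned 5x5 Hilbert matrix with a known solution.
n = 5
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
b = A @ x_true

# "Inner" solve in float32 loses several digits of accuracy.
A32 = A.astype(np.float32)
x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
err0 = np.linalg.norm(x - x_true)

# One refinement step: residual in float64, correction solved in float32.
r = b - A @ x
x += np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
err1 = np.linalg.norm(x - x_true)
print(err1 < err0)  # the refined solution is more accurate
```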

Relevant information about what's going on internally in PIPS during the numerical solve can be obtained by running with option 'prtLvl 2'.

These options should be provided in the pipsnlp.parameter file (place this file in the directory from which you run PIPS or StructJuMP).
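Putting the options above together, a pipsnlp.parameter file could look like the following. This is a sketch assuming the one-option-per-line 'name value' format suggested by the option strings above; keep only the lines you actually want to try:

```
SymLinearSolver 1
DoIR_Aug 1
prtLvl 2
```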

Still does not work?
Submit an issue describing your experience and attach PIPS-NLP's output obtained by running with option 'prtLvl 3', along with the files we need to run your problem.
