
Merge pull request #788 from SciML/newdocs
New tutorials and docs
Vaibhavdixit02 authored Aug 7, 2024
2 parents e987377 + 92e9273 commit 255a7bf
Showing 8 changed files with 269 additions and 115 deletions.
53 changes: 0 additions & 53 deletions .github/workflows/Downgrade.yml

This file was deleted.

2 changes: 2 additions & 0 deletions docs/Project.toml
@@ -35,6 +35,8 @@ OrdinaryDiffEq = "1dea7af3-3e70-54e6-95c3-0bf5283fa5ed"
ReverseDiff = "37e2e3b7-166d-5795-8a7a-e32c996b4267"
SciMLBase = "0bca4576-84f4-4d90-8ffe-ffa030f20462"
SciMLSensitivity = "1ed8b502-d754-442c-8d5d-10ac956f44a1"
SymbolicAnalysis = "4297ee4d-0239-47d8-ba5d-195ecdf594fe"
Symbolics = "0c5d862f-8b57-4792-8d23-62f2024744c7"
Tracker = "9f7883ad-71c0-57eb-9f7f-b5c9e6d3789c"
Zygote = "e88e6eb3-aa80-5325-afca-941959d7151f"

44 changes: 21 additions & 23 deletions docs/src/getting_started.md
@@ -1,41 +1,39 @@
# Getting Started with Optimization in Julia
# Getting Started with Optimization.jl

In this tutorial, we introduce the basics of Optimization.jl by showing
how to easily mix local optimizers from Optim.jl and global optimizers
from BlackBoxOptim.jl on the Rosenbrock equation. The simplest copy-pasteable
code to get started is the following:
how to easily mix local optimizers and global optimizers on the Rosenbrock equation.
The simplest copy-pasteable code using a quasi-Newton method (LBFGS) to solve the Rosenbrock problem is the following:

```@example intro
# Import the package and define the problem to optimize
using Optimization
using Optimization, Zygote
rosenbrock(u, p) = (p[1] - u[1])^2 + p[2] * (u[2] - u[1]^2)^2
u0 = zeros(2)
p = [1.0, 100.0]
prob = OptimizationProblem(rosenbrock, u0, p)
# Import a solver package and solve the optimization problem
using OptimizationOptimJL
sol = solve(prob, NelderMead())
optf = OptimizationFunction(rosenbrock, AutoZygote())
prob = OptimizationProblem(optf, u0, p)
# Import a different solver package and solve the optimization problem a different way
using OptimizationBBO
prob = OptimizationProblem(rosenbrock, u0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0])
sol = solve(prob, BBO_adaptive_de_rand_1_bin_radiuslimited())
sol = solve(prob, Optimization.LBFGS())
```
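The returned `sol` can be inspected directly; a minimal sketch, assuming the standard `u` (minimizer) and `objective` (final objective value) fields of the solution object:

```@example intro
sol.u, sol.objective
```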

Notice that Optimization.jl is the core glue package that holds all the common
pieces, but to solve the equations, we need to use a solver package. Here, OptimizationOptimJL
is for [Optim.jl](https://github.com/JuliaNLSolvers/Optim.jl) and OptimizationBBO is for
[BlackBoxOptim.jl](https://github.com/robertfeldt/BlackBoxOptim.jl).
## Import a different solver package and solve the problem

OptimizationOptimJL is a wrapper for [Optim.jl](https://github.com/JuliaNLSolvers/Optim.jl) and OptimizationBBO is a wrapper for [BlackBoxOptim.jl](https://github.com/robertfeldt/BlackBoxOptim.jl).

The output of the first optimization task (with the `NelderMead()` algorithm)
is given below:
First, let's use NelderMead, a derivative-free solver from Optim.jl:

```@example intro
optf = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff())
prob = OptimizationProblem(optf, u0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0])
sol = solve(prob, NelderMead())
using OptimizationOptimJL
sol = solve(prob, Optim.NelderMead())
```

BlackBoxOptim.jl offers derivative-free global optimization solvers that require the bounds to be set via `lb` and `ub` in the `OptimizationProblem`. Let's use the `BBO_adaptive_de_rand_1_bin_radiuslimited()` solver:

```@example intro
using OptimizationBBO
prob = OptimizationProblem(rosenbrock, u0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0])
sol = solve(prob, BBO_adaptive_de_rand_1_bin_radiuslimited())
```

The solution from the original solver can always be obtained via `original`:
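A minimal sketch, continuing the BlackBoxOptim example above, where the raw result object of the underlying package is retrieved from the solution:

```@example intro
sol.original
```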
149 changes: 124 additions & 25 deletions docs/src/index.md
@@ -29,8 +29,9 @@ Pkg.add("Optimization")

The packages relevant to the core functionality of Optimization.jl will be imported
accordingly and, in most cases, you do not have to worry about the manual
installation of dependencies. However, you will need to add the specific optimizer
packages.
installation of dependencies. [Optimization.jl](@ref) natively offers an LBFGS solver,
but for more solver choices (discussed below in Optimization Packages), you will need
to add the specific wrapper packages.
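For instance, a minimal sketch of adding and using one of the wrapper packages (here OptimizationOptimJL, which wraps Optim.jl; the problem setup mirrors the getting-started tutorial):

```julia
using Pkg
Pkg.add("OptimizationOptimJL")

using Optimization, OptimizationOptimJL

# Derivative-free NelderMead does not require an AD backend
rosenbrock(u, p) = (p[1] - u[1])^2 + p[2] * (u[2] - u[1]^2)^2
prob = OptimizationProblem(rosenbrock, zeros(2), [1.0, 100.0])
sol = solve(prob, NelderMead())
```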

## Contributing

@@ -48,29 +49,127 @@ packages.
+ On the [Julia Discourse forums](https://discourse.julialang.org)
+ See also [SciML Community page](https://sciml.ai/community/)

## Overview of the Optimizers

| Package | Local Gradient-Based | Local Hessian-Based | Local Derivative-Free | Box Constraints | Local Constrained | Global Unconstrained | Global Constrained |
|:----------------------- |:--------------------:|:-------------------:|:---------------------:|:---------------:|:-----------------:|:--------------------:|:--------------------:|
| BlackBoxOptim ||||||| ❌ ✅ |
| CMAEvolutionaryStrategy ||||||||
| Evolutionary ||||||| 🟡 |
| Flux ||||||||
| GCMAES ||||||||
| MathOptInterface ||||||| 🟡 |
| MultistartOptimization ||||||||
| Metaheuristics ||||||| 🟡 |
| NOMAD ||||||| 🟡 |
| NLopt ||||| 🟡 || 🟡 |
| Optim ||||||||
| PRIMA ||||||||
| QuadDIRECT ||||||||

✅ = supported

🟡 = supported in downstream library but not yet implemented in `Optimization`; PR to add this functionality are welcome

❌ = not supported
## Overview of the solver packages in alphabetical order

<details>
<summary><strong>BlackBoxOptim</strong></summary>
- **Global Methods**
- Zeroth order
- Unconstrained
- Box Constraints
</details>
<details>
<summary><strong>CMAEvolutionaryStrategy</strong></summary>
- **Global Methods**
- Zeroth order
- Unconstrained
- Box Constraints
</details>
<details>
<summary><strong>Evolutionary</strong></summary>
- **Global Methods**
- Zeroth order
- Unconstrained
- Box Constraints
- Non-linear Constraints
</details>
<details>
<summary><strong>GCMAES</strong></summary>
- **Global Methods**
- First order
- Box Constraints
- Unconstrained
</details>
<details>
<summary><strong>Manopt</strong></summary>
- **Local Methods**
- First order
- Second order
- Zeroth order
- Box Constraints
- Constrained 🟡
- **Global Methods**
- Zeroth order
- Unconstrained
</details>
<details>
<summary><strong>MathOptInterface</strong></summary>
- **Local Methods**
- First order
- Second order
- Box Constraints
- Constrained
- **Global Methods**
- First order
- Second order
- Constrained
</details>
<details>
<summary><strong>MultistartOptimization</strong></summary>
- **Global Methods**
- Zeroth order
- First order
- Second order
- Box Constraints
</details>
<details>
<summary><strong>Metaheuristics</strong></summary>
- **Global Methods**
- Zeroth order
- Unconstrained
- Box Constraints
</details>
<details>
<summary><strong>NOMAD</strong></summary>
- **Global Methods**
- Zeroth order
- Unconstrained
- Box Constraints
- Constrained 🟡
</details>
<details>
<summary><strong>NLopt</strong></summary>
- **Local Methods**
- First order
- Zeroth order
- Second order 🟡
- Box Constraints
- Local Constrained 🟡
- **Global Methods**
- Zeroth order
- First order
- Unconstrained
- Constrained 🟡
</details>
<details>
<summary><strong>Optim</strong></summary>
- **Local Methods**
- Zeroth order
- First order
- Second order
- Box Constraints
- Constrained
- **Global Methods**
- Zeroth order
- Unconstrained
- Box Constraints
</details>
<details>
<summary><strong>PRIMA</strong></summary>
- **Local Methods**
- Derivative-Free: ✅
- **Constraints**
- Box Constraints: ✅
- Local Constrained: ✅
</details>
<details>
<summary><strong>QuadDIRECT</strong></summary>
- **Constraints**
- Box Constraints: ✅
- **Global Methods**
- Unconstrained: ✅
</details>
🟡 = supported in downstream library but not yet implemented in `Optimization.jl`; PRs to add this functionality are welcome

## Citation

39 changes: 39 additions & 0 deletions docs/src/optimization_packages/optimization.md
@@ -0,0 +1,39 @@
# Optimization.jl

Some solvers are available directly in the Optimization.jl package, without the need to install any of the solver wrapper packages.

## Methods

`LBFGS`: The popular quasi-Newton method that leverages a limited-memory BFGS approximation of the inverse of the Hessian. It is implemented as a wrapper over the [L-BFGS-B](https://users.iems.northwestern.edu/%7Enocedal/lbfgsb.html) Fortran routine accessed from the [LBFGSB.jl](https://github.com/Gnimuc/LBFGSB.jl/) package, and it directly supports box constraints.

It can also handle arbitrary nonlinear constraints through an Augmented Lagrangian method with bound constraints, described in Section 17.4 of Numerical Optimization by Nocedal and Wright, thus serving as a general-purpose nonlinear optimization solver available directly in Optimization.jl.

## Examples

### Unconstrained Rosenbrock problem

```@example L-BFGS
using Optimization, Zygote
rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
x0 = zeros(2)
p = [1.0, 100.0]
optf = OptimizationFunction(rosenbrock, AutoZygote())
prob = Optimization.OptimizationProblem(optf, x0, p)
sol = solve(prob, Optimization.LBFGS())
```
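The Rosenbrock function with `p = [1.0, 100.0]` has its known minimum at `[1.0, 1.0]`, so the returned minimizer can be checked against it (a small sketch):

```@example L-BFGS
sol.u  # should be close to [1.0, 1.0]
```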

### With nonlinear and bounds constraints

```@example L-BFGS
function con2_c(res, x, p)
res .= [x[1]^2 + x[2]^2, (x[2] * sin(x[1]) + x[1]) - 5]
end
optf = OptimizationFunction(rosenbrock, AutoZygote(), cons = con2_c)
prob = OptimizationProblem(optf, x0, p, lcons = [1.0, -Inf],
ucons = [1.0, 0.0], lb = [-1.0, -1.0],
ub = [1.0, 1.0])
res = solve(prob, Optimization.LBFGS(), maxiters = 100)
```
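As a quick sanity check (a sketch; `cons_val` is just a scratch vector introduced here), the constraint function can be evaluated at the returned minimizer to confirm the `lcons`/`ucons` bounds are respected:

```@example L-BFGS
cons_val = zeros(2)
con2_c(cons_val, res.u, p)
cons_val  # cons_val[1] should be ≈ 1.0 and cons_val[2] ≤ 0.0
```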
49 changes: 49 additions & 0 deletions docs/src/tutorials/certification.md
@@ -0,0 +1,49 @@
# Using SymbolicAnalysis.jl for convexity certificates

In this tutorial, we will show how to obtain automatic convexity certification of an optimization problem using [SymbolicAnalysis.jl](https://github.com/Vaibhavdixit02/SymbolicAnalysis.jl).

This works with the `structural_analysis` keyword argument to `OptimizationProblem`. This tells the package to try to trace through the objective and constraints with symbolic variables (for more details, see the [Symbolics documentation](https://symbolics.juliasymbolics.org/stable/manual/functions/#function_registration)). This relies on the Disciplined Programming approach and hence necessitates the use of "atoms" from the SymbolicAnalysis.jl package.

We'll use a simple example to illustrate the convexity structure certification process.

```@example symanalysis
using SymbolicAnalysis, Zygote, LinearAlgebra, Optimization, OptimizationMOI
function f(x, p = nothing)
return exp(x[1]) + x[1]^2
end
optf = OptimizationFunction(f, Optimization.AutoForwardDiff())
prob = OptimizationProblem(optf, [0.4], structural_analysis = true)
sol = solve(prob, Optimization.LBFGS(), maxiters = 1000)
```

The result can be accessed via the `analysis_results` field of the solution's cache.

```@example symanalysis
sol.cache.analysis_results.objective
```

Relatedly, you can enable structural analysis for Riemannian optimization problems (currently supported only on the SPD manifold).

We'll look at the Riemannian center of mass of SPD matrices, which is known to be a geodesically convex problem on the SPD manifold.

```@example symanalysis
using Optimization, OptimizationManopt, Symbolics, Manifolds, Random, LinearAlgebra,
SymbolicAnalysis
M = SymmetricPositiveDefinite(5)
m = 100
σ = 0.005
q = Matrix{Float64}(LinearAlgebra.I(5)) .+ 2.0
data2 = [exp(M, q, σ * rand(M; vector_at = q)) for i in 1:m];
f(x, p = nothing) = sum(SymbolicAnalysis.distance(M, data2[i], x)^2 for i in 1:5)
optf = OptimizationFunction(f, Optimization.AutoZygote())
prob = OptimizationProblem(optf, data2[1]; manifold = M, structural_analysis = true)
opt = OptimizationManopt.GradientDescentOptimizer()
sol = solve(prob, opt, maxiters = 100)
```
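As in the Euclidean example above, the certification result can presumably be inspected through the solution cache; a sketch assuming the same `analysis_results` field is populated in the manifold case:

```@example symanalysis
sol.cache.analysis_results.objective
```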