Add advanced tutorials
Signed-off-by: ErikQQY <[email protected]>

Add packages in docs

Signed-off-by: ErikQQY <[email protected]>

Update docs/src/tutorials/advanced.md

Co-authored-by: Christopher Rackauckas <[email protected]>

Update docs/src/tutorials/advanced.md

Co-authored-by: Christopher Rackauckas <[email protected]>

Update docs/src/tutorials/advanced.md

Co-authored-by: Christopher Rackauckas <[email protected]>

Same environment for ill conditioned nlprob

Signed-off-by: ErikQQY <[email protected]>
ErikQQY authored and ChrisRackauckas committed Oct 6, 2023
1 parent 5c89c59 commit d1910b4
Showing 4 changed files with 289 additions and 1 deletion.
6 changes: 6 additions & 0 deletions docs/Project.toml
@@ -1,6 +1,8 @@
[deps]
AlgebraicMultigrid = "2169fc97-5a83-5252-b627-83903c6c433c"
BenchmarkTools = "6e4b80f9-dd63-53aa-95a3-0cdb28fa8baf"
Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
IncompleteLU = "40713840-3770-5561-ab4c-a76e7d0d7895"
LinearSolve = "7ed4a6bd-45f5-4d41-b270-4a48e9bafcae"
NonlinearSolve = "8913a72c-1f9b-4ce2-8d82-65094dcecaec"
NonlinearSolveMINPACK = "c100e077-885d-495a-a2ea-599e143bf69d"
@@ -9,10 +11,13 @@ SimpleNonlinearSolve = "727e6d20-b764-4bd8-a329-72de5adea6c7"
StaticArrays = "90137ffa-7385-5640-81b9-e52037218182"
SteadyStateDiffEq = "9672c7b4-1e72-59bd-8a11-6ac3964bc41f"
Sundials = "c3572dad-4567-51f8-b174-8c6c989267f4"
Symbolics = "0c5d862f-8b57-4792-8d23-62f2024744c7"

[compat]
AlgebraicMultigrid = "0.5"
BenchmarkTools = "1"
Documenter = "1"
IncompleteLU = "0.2"
LinearSolve = "2"
NonlinearSolve = "1, 2"
NonlinearSolveMINPACK = "0.1"
@@ -21,3 +26,4 @@ SimpleNonlinearSolve = "0.1.5"
StaticArrays = "1"
SteadyStateDiffEq = "1.10"
Sundials = "4.11"
Symbolics = "4, 5"
1 change: 1 addition & 0 deletions docs/pages.jl
@@ -2,6 +2,7 @@

pages = ["index.md",
    "Tutorials" => Any["tutorials/nonlinear.md",
        "tutorials/advanced.md",
        "tutorials/iterator_interface.md"],
    "Basics" => Any["basics/NonlinearProblem.md",
        "basics/NonlinearFunctions.md",
277 changes: 277 additions & 0 deletions docs/src/tutorials/advanced.md
@@ -0,0 +1,277 @@
# Solving Large Ill-Conditioned Nonlinear Systems with NonlinearSolve.jl

This tutorial covers the extra features of NonlinearSolve.jl needed for large systems. Solving ill-conditioned nonlinear systems requires specializing the linear solver on properties of the Jacobian in order to cut down on the ``\mathcal{O}(n^3)`` linear solve and the ``\mathcal{O}(n^2)`` back-solves. This tutorial explains this advanced usage of NonlinearSolve.jl by solving the steady-state stiff Brusselator partial differential equation (BRUSS).

## Definition of the Brusselator Equation

!!! note

    Feel free to skip this section: it simply defines the example problem.

The Brusselator PDE is defined as follows:

```math
\begin{align}
0 &= 1 + u^2v - 4.4u + \alpha(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}) + f(x, y, t)\\
0 &= 3.4u - u^2v + \alpha(\frac{\partial^2 v}{\partial x^2} + \frac{\partial^2 v}{\partial y^2})
\end{align}
```

where

```math
f(x, y, t) = \begin{cases}
5 & \quad \text{if } (x-0.3)^2+(y-0.6)^2 ≤ 0.1^2 \text{ and } t ≥ 1.1 \\
0 & \quad \text{else}
\end{cases}
```

and the initial conditions are

```math
\begin{align}
u(x, y, 0) &= 22\cdot (y(1-y))^{3/2} \\
v(x, y, 0) &= 27\cdot (x(1-x))^{3/2}
\end{align}
```

with the periodic boundary condition

```math
\begin{align}
u(x+1,y,t) &= u(x,y,t) \\
u(x,y+1,t) &= u(x,y,t)
\end{align}
```

To solve this steady-state PDE, we discretize it into a system of nonlinear equations
with the finite difference method. We discretize `u` and `v` into arrays of the values
at each grid point: `u[i,j] = u(i*dx,j*dy)` for some choice of `dx`/`dy`, and similarly
for `v`. The unknowns are then stored as `U[i,j,k] = [u v]`. The second-derivative
operator, the Laplacian, discretizes to a tridiagonal matrix with the stencil
`[1 -2 1]` and a `1` in the top-right and bottom-left corners for the periodic
boundary conditions. The nonlinear terms are then applied at each point in space
(they are broadcast). We use `dx = dy = 1/32`.
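
As a point of reference, here is a minimal sketch (not part of the tutorial's code) of the 1D periodic second-difference matrix described above; the 2D Laplacian used below applies the same `[1 -2 1]` pattern along each dimension.

```julia
# Illustration only: the 1D periodic [1 -2 1] stencil as a sparse matrix.
# Scale by 1/dx^2 when applying it to the discretized PDE.
using SparseArrays
n = 32
L1d = spdiagm(-1 => ones(n - 1), 0 => -2ones(n), 1 => ones(n - 1))
L1d[1, n] = 1  # periodic wrap-around: top-right corner
L1d[n, 1] = 1  # periodic wrap-around: bottom-left corner
```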

The resulting `NonlinearProblem` definition is:

```@example ill_conditioned_nlprob
using NonlinearSolve, LinearAlgebra, SparseArrays, LinearSolve
const N = 32
const xyd_brusselator = range(0, stop = 1, length = N)
brusselator_f(x, y) = (((x - 0.3)^2 + (y - 0.6)^2) <= 0.1^2) * 5.0
limit(a, N) = a == N + 1 ? 1 : a == 0 ? N : a
function brusselator_2d_loop(du, u, p)
    A, B, alpha, dx = p
    alpha = alpha / dx^2
    @inbounds for I in CartesianIndices((N, N))
        i, j = Tuple(I)
        x, y = xyd_brusselator[I[1]], xyd_brusselator[I[2]]
        ip1, im1, jp1, jm1 = limit(i + 1, N), limit(i - 1, N), limit(j + 1, N),
            limit(j - 1, N)
        du[i, j, 1] = alpha * (u[im1, j, 1] + u[ip1, j, 1] + u[i, jp1, 1] + u[i, jm1, 1] -
                               4u[i, j, 1]) +
                      B + u[i, j, 1]^2 * u[i, j, 2] - (A + 1) * u[i, j, 1] +
                      brusselator_f(x, y)
        du[i, j, 2] = alpha * (u[im1, j, 2] + u[ip1, j, 2] + u[i, jp1, 2] + u[i, jm1, 2] -
                               4u[i, j, 2]) +
                      A * u[i, j, 1] - u[i, j, 1]^2 * u[i, j, 2]
    end
end
p = (3.4, 1.0, 10.0, step(xyd_brusselator))
function init_brusselator_2d(xyd)
    N = length(xyd)
    u = zeros(N, N, 2)
    for I in CartesianIndices((N, N))
        x = xyd[I[1]]
        y = xyd[I[2]]
        u[I, 1] = 22 * (y * (1 - y))^(3 / 2)
        u[I, 2] = 27 * (x * (1 - x))^(3 / 2)
    end
    u
end
u0 = init_brusselator_2d(xyd_brusselator)
prob_brusselator_2d = NonlinearProblem(brusselator_2d_loop, u0, p)
```

## Choosing Jacobian Types

When we are solving this nonlinear problem, the Jacobian must be built at many
iterations, and this can be one of the most expensive steps. There are two pieces
that must be optimized in order to reach maximal efficiency when solving stiff
equations: the sparsity pattern and the construction of the Jacobian. The
construction is filling the matrix `J` with values, while the sparsity pattern
determines what form of `J` to use.

The sparsity pattern is given by a prototype matrix, the `jac_prototype`, which
will be copied to be used as `J`. The default is for `J` to be a `Matrix`,
i.e. a dense matrix. However, if you know the sparsity of your problem, then
you can pass a different matrix type. For example, a `SparseMatrixCSC` will
give a sparse matrix; a minimal sketch of declaring a `jac_prototype` is shown
after the list below. Other sparse and structured matrix types include:

- Bidiagonal
- Tridiagonal
- SymTridiagonal
- BandedMatrix ([BandedMatrices.jl](https://github.com/JuliaLinearAlgebra/BandedMatrices.jl))
- BlockBandedMatrix ([BlockBandedMatrices.jl](https://github.com/JuliaLinearAlgebra/BlockBandedMatrices.jl))
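
As a minimal sketch of the interface (the names `f_diag!`, `jp`, and `prob_diag` are hypothetical and chosen only for illustration), declaring a hand-built `jac_prototype` looks like this; the Brusselator example below detects its sparsity pattern automatically instead:

```julia
# Hedged sketch: a toy 2-equation system whose Jacobian is known to be diagonal.
using NonlinearSolve, SparseArrays
f_diag!(du, u, p) = (du[1] = u[1]^2 - p; du[2] = u[2]^3 - p; nothing)
jp = sparse([1, 2], [1, 2], ones(2))  # 2×2 diagonal sparsity pattern
ff_diag = NonlinearFunction(f_diag!; jac_prototype = jp)
prob_diag = NonlinearProblem(ff_diag, [1.0, 1.0], 2.0)
solve(prob_diag, NewtonRaphson())
```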

## Declaring a Sparse Jacobian with Automatic Sparsity Detection

Jacobian sparsity is declared by the `jac_prototype` argument in the `NonlinearFunction`.
Note that you should only do this if the sparsity is high, for example, if only 0.1%
of the matrix entries are nonzero; otherwise, the overhead of sparse matrices can be
higher than the gains from sparse differentiation!

One of the useful companion tools for NonlinearSolve.jl is
[Symbolics.jl](https://github.com/JuliaSymbolics/Symbolics.jl).
This allows for automatic declaration of Jacobian sparsity types. To see this
in action, we can give an example `du` and `u` and call `jacobian_sparsity`
on our function with the example arguments, and it will return a sparse matrix
with our pattern, which we can turn into our `jac_prototype`.

```@example ill_conditioned_nlprob
using Symbolics
du0 = copy(u0)
jac_sparsity = Symbolics.jacobian_sparsity((du, u) -> brusselator_2d_loop(du, u, p),
    du0, u0)
```
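
As a quick sanity check (not part of the original tutorial), one can confirm that the detected pattern is sparse enough to be worth declaring, per the note above, by looking at the fraction of structurally nonzero entries:

```julia
# Fraction of structurally nonzero entries in the detected pattern (smaller is sparser).
using SparseArrays
nnz(jac_sparsity) / length(jac_sparsity)
```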

Notice that Julia gives a nice print out of the sparsity pattern. That's neat, and
would be tedious to build by hand! Now we just pass it to the `NonlinearFunction`
as before:

```@example ill_conditioned_nlprob
ff = NonlinearFunction(brusselator_2d_loop; jac_prototype = float.(jac_sparsity))
```

Build the `NonlinearProblem`:

```@example ill_conditioned_nlprob
prob_brusselator_2d_sparse = NonlinearProblem(ff, u0, p)
```

Now let's see how the version with sparsity compares to the version without:

```@example ill_conditioned_nlprob
using BenchmarkTools # for @btime
@btime solve(prob_brusselator_2d, NewtonRaphson());
@btime solve(prob_brusselator_2d_sparse, NewtonRaphson());
@btime solve(prob_brusselator_2d_sparse, NewtonRaphson(linsolve = KLUFactorization()));
nothing # hide
```

Note that, depending on the properties of the sparsity pattern, one may want to try
alternative linear solvers such as `NewtonRaphson(linsolve = KLUFactorization())`
or `NewtonRaphson(linsolve = UMFPACKFactorization())`.
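
For instance, a sketch of the `UMFPACKFactorization` variant on the same problem (timings will vary by machine):

```julia
# Hedged sketch: UMFPACK can handle sparsity patterns with denser rows/columns better than KLU.
using BenchmarkTools, LinearSolve
@btime solve(prob_brusselator_2d_sparse, NewtonRaphson(linsolve = UMFPACKFactorization()));
```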

## Using Jacobian-Free Newton-Krylov

A completely different way to optimize the linear solvers for large sparse
matrices is to use a Krylov subspace method. To swap the linear solver out, we
use the `linsolve` keyword argument and choose the GMRES linear solver.

```@example ill_conditioned_nlprob
@btime solve(prob_brusselator_2d, NewtonRaphson(linsolve = KrylovJL_GMRES()));
nothing # hide
```

Notice that this acceleration does not require the definition of a sparsity
pattern, and can thus be an easier way to scale for large problems. For more
information on linear solver choices, see the [linear solver documentation](https://docs.sciml.ai/DiffEqDocs/stable/features/linear_nonlinear/#linear_nonlinear). `linsolve` choices are any valid [LinearSolve.jl](https://linearsolve.sciml.ai/dev/) solver.

!!! note

    Switching to a Krylov linear solver will automatically change the nonlinear problem solver
    into Jacobian-free mode, dramatically reducing the memory required. This can
    be overridden by adding `concrete_jac = true` to the algorithm.
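
As an illustrative sketch (not part of the original tutorial), keeping the Krylov solver while forcing a concrete sparse Jacobian would look like:

```julia
# Hedged sketch: keep GMRES but materialize the sparse Jacobian via `concrete_jac = true`.
using BenchmarkTools
@btime solve(prob_brusselator_2d_sparse,
    NewtonRaphson(linsolve = KrylovJL_GMRES(), concrete_jac = true));
```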

## Adding a Preconditioner

Any [LinearSolve.jl-compatible preconditioner](https://docs.sciml.ai/LinearSolve/stable/basics/Preconditioners/)
can be used as a preconditioner in the linear solver interface. To define
preconditioners, one must define a `precs` function compatible with the nonlinear
solvers which returns the left and right preconditioners, matrices which
approximate the inverse of `W`, the Jacobian-based matrix used in the Newton
linear solves. An example using [IncompleteLU.jl](https://github.com/haampie/IncompleteLU.jl)
is as follows:

```@example ill_conditioned_nlprob
using IncompleteLU
function incompletelu(W, du, u, p, t, newW, Plprev, Prprev, solverdata)
    if newW === nothing || newW
        # get the concrete sparse matrix form of W before factorizing
        Pl = ilu(convert(AbstractMatrix, W), τ = 50.0)
    else
        Pl = Plprev
    end
    Pl, nothing
end
@btime solve(prob_brusselator_2d_sparse,
    NewtonRaphson(linsolve = KrylovJL_GMRES(), precs = incompletelu,
        concrete_jac = true));
nothing # hide
```

Notice a few things about this preconditioner. This preconditioner uses the
sparse Jacobian, and thus we set `concrete_jac = true` to tell the algorithm to
generate the Jacobian (otherwise, a Jacobian-free algorithm is used with GMRES
by default). Then `newW = true` whenever a new `W` matrix is computed, and
`newW = nothing` during the startup phase of the solver. Thus, we do the check
`newW === nothing || newW`, and only at those points do we update the
preconditioner; otherwise, we just pass on the previous version.
We use `convert(AbstractMatrix, W)` to get the concrete `W` matrix (matching
`jac_prototype`, thus a `SparseMatrixCSC`), which we can use in the preconditioner's
definition. Then we use `IncompleteLU.ilu` on that sparse matrix to generate
the preconditioner. We return `Pl, nothing` to say that our preconditioner is a
left preconditioner and that there is no right preconditioning.

This method thus uses both the Krylov solver and the sparse Jacobian. Not only
that, it is faster than both of the previous implementations! IncompleteLU is
fussy in that it requires a well-tuned `τ` parameter. Another option is to use
[AlgebraicMultigrid.jl](https://github.com/JuliaLinearAlgebra/AlgebraicMultigrid.jl),
which is more automatic. The setup is very similar to before:

```@example ill_conditioned_nlprob
using AlgebraicMultigrid
function algebraicmultigrid(W, du, u, p, t, newW, Plprev, Prprev, solverdata)
    if newW === nothing || newW
        Pl = aspreconditioner(ruge_stuben(convert(AbstractMatrix, W)))
    else
        Pl = Plprev
    end
    Pl, nothing
end
@btime solve(prob_brusselator_2d_sparse,
    NewtonRaphson(linsolve = KrylovJL_GMRES(), precs = algebraicmultigrid,
        concrete_jac = true));
nothing # hide
```

or with a Jacobi smoother:

```@example ill_conditioned_nlprob
function algebraicmultigrid2(W, du, u, p, t, newW, Plprev, Prprev, solverdata)
    if newW === nothing || newW
        A = convert(AbstractMatrix, W)
        Pl = AlgebraicMultigrid.aspreconditioner(AlgebraicMultigrid.ruge_stuben(A,
            presmoother = AlgebraicMultigrid.Jacobi(rand(size(A, 1))),
            postsmoother = AlgebraicMultigrid.Jacobi(rand(size(A, 1)))))
    else
        Pl = Plprev
    end
    Pl, nothing
end
@btime solve(prob_brusselator_2d_sparse,
    NewtonRaphson(linsolve = KrylovJL_GMRES(), precs = algebraicmultigrid2,
        concrete_jac = true));
nothing # hide
```

For more information on the preconditioner interface, see the
[linear solver documentation](https://docs.sciml.ai/LinearSolve/stable/basics/Preconditioners/).
6 changes: 5 additions & 1 deletion test/basictests.jl
@@ -49,7 +49,11 @@ end
@test (@ballocated solve!($cache)) < 200
end

precs = [NonlinearSolve.DEFAULT_PRECS, :Random]

precs = [
NonlinearSolve.DEFAULT_PRECS,
(args...) -> (Diagonal(rand!(similar(u0))), nothing),
]

@testset "[IIP] u0: $(typeof(u0)) precs: $(_nameof(prec)) linsolve: $(_nameof(linsolve))" for u0 in ([
1.0, 1.0],), prec in precs, linsolve in (nothing, KrylovJL_GMRES())
