fix documentation examples #987

Merged
merged 2 commits on Jan 23, 2024

2 changes: 2 additions & 0 deletions docs/Project.toml
@@ -15,6 +15,7 @@ IterTools = "c8e1da08-722c-5040-9ed9-7db0dc04731e"
Lux = "b2108857-7c20-44ae-9111-449ecde12c47"
LuxCUDA = "d0bbae9a-e099-4d5b-a835-1c6931763bda"
Optimization = "7f7a1694-90dd-40f0-9382-eb1efda571ba"
OptimizationFlux = "253f991c-a7b2-45f8-8852-8b9a9df78a86"
OptimizationNLopt = "4e6fcdb7-1186-4e1f-a706-475e75c168bb"
OptimizationOptimJL = "36348300-93cb-4f02-beb5-3c3902f8871e"
OptimizationOptimisers = "42dfb2eb-d2b4-4451-abcd-913932933ac1"
@@ -26,6 +27,7 @@ RecursiveArrayTools = "731186ca-8d62-57ce-b412-fbd966d074cd"
ReverseDiff = "37e2e3b7-166d-5795-8a7a-e32c996b4267"
SciMLSensitivity = "1ed8b502-d754-442c-8d5d-10ac956f44a1"
SimpleChains = "de6bee2f-e2f4-4ec7-b6ed-219cc6f6e9e5"
StaticArrays = "90137ffa-7385-5640-81b9-e52037218182"
Statistics = "10745b16-79ce-11e8-11f9-7d13ad32a3b2"
StochasticDiffEq = "789caeaf-c7a9-5a7d-9973-96adeb23e2a0"
Tracker = "9f7883ad-71c0-57eb-9f7f-b5c9e6d3789c"
2 changes: 2 additions & 0 deletions docs/src/examples/dae/physical_constraints.md
@@ -9,6 +9,7 @@ zeros, then we have a constraint defined by the right-hand side. Using
terms must add to one. An example of this is as follows:

```@example dae
using SciMLSensitivity
using Lux, ComponentArrays, DiffEqFlux, Optimization, OptimizationNLopt,
OrdinaryDiffEq, Plots

@@ -73,6 +74,7 @@ result_stiff = Optimization.solve(optprob, NLopt.LD_LBFGS(), maxiters = 100)
### Load Packages

```@example dae2
using SciMLSensitivity
using Lux, ComponentArrays, DiffEqFlux, Optimization, OptimizationNLopt,
OrdinaryDiffEq, Plots

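For readers landing here from the diff: the constraint this example encodes is that the model's terms sum to one. A minimal sketch of that idea as a mass-matrix DAE (the classic Robertson setup; the numbers are illustrative and not part of this PR):

```julia
using OrdinaryDiffEq

function rober!(du, u, p, t)
    y1, y2, y3 = u
    k1, k2, k3 = p
    du[1] = -k1 * y1 + k3 * y2 * y3
    du[2] = k1 * y1 - k2 * y2^2 - k3 * y2 * y3
    du[3] = y1 + y2 + y3 - 1.0  # algebraic row: the terms must add to one
end

M = [1.0 0.0 0.0; 0.0 1.0 0.0; 0.0 0.0 0.0]  # singular mass matrix, so this is a DAE
f = ODEFunction(rober!, mass_matrix = M)
prob = ODEProblem(f, [1.0, 0.0, 0.0], (0.0, 1e5), [0.04, 3e7, 1e4])
sol = solve(prob, Rodas5())
```
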
2 changes: 2 additions & 0 deletions docs/src/examples/neural_ode/minibatch.md
@@ -1,6 +1,7 @@
# Training a Neural Ordinary Differential Equation with Mini-Batching

```@example
using SciMLSensitivity
using DifferentialEquations, Flux, Random, Plots
using IterTools: ncycle

@@ -95,6 +96,7 @@ When training a neural network, we need to find the gradient with respect to our
For this example, we will use a very simple ordinary differential equation, Newton's law of cooling. We can represent this in Julia like so.

```@example minibatch
using SciMLSensitivity
using DifferentialEquations, Flux, Random, Plots
using IterTools: ncycle

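For context on what the touched example does: it fits Newton's law of cooling with minibatches drawn via `IterTools.ncycle`. A toy sketch of that data pipeline (batch layout assumed, not the tutorial's exact loader):

```julia
using OrdinaryDiffEq
using IterTools: ncycle

newtons_cooling(u, p, t) = p[1] .* (p[2] .- u)  # dT/dt = k * (T_ambient - T)
prob = ODEProblem(newtons_cooling, [60.0], (0.0, 10.0), [0.5, 20.0])
sol = solve(prob, Tsit5(), saveat = 0.5)
ts, data = sol.t, Array(sol)

batchsize = 5
ranges = [i:min(i + batchsize - 1, length(ts)) for i in 1:batchsize:length(ts)]
batches = [(data[:, r], ts[r]) for r in ranges]
loader = ncycle(batches, 10)  # cycle through the minibatches for 10 epochs
```
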
2 changes: 1 addition & 1 deletion docs/src/examples/neural_ode/neural_gde.md
@@ -2,7 +2,7 @@

This tutorial has been adapted from [here](https://github.com/CarloLucibello/GraphNeuralNetworks.jl/blob/master/examples/neural_ode_cora.jl).

In this tutorial, we will use Graph Differential Equations (GDEs) to perform classification on the [CORA Dataset](https://relational.fit.cvut.cz/dataset/CORA). We shall be using the Graph Neural Networks primitives from the package [GraphNeuralNetworks](https://github.com/CarloLucibello/GraphNeuralNetworks.jl).
In this tutorial, we will use Graph Differential Equations (GDEs) to perform classification on the [CORA Dataset](https://paperswithcode.com/dataset/cora). We shall be using the Graph Neural Networks primitives from the package [GraphNeuralNetworks](https://github.com/CarloLucibello/GraphNeuralNetworks.jl).

```julia
# Load the packages
3 changes: 1 addition & 2 deletions docs/src/examples/ode/exogenous_input.md
@@ -40,6 +40,7 @@ In the following example, a discrete exogenous input signal `ex` is defined and
used as an input into the neural network of a neural ODE system.

```@example exogenous
using SciMLSensitivity
using OrdinaryDiffEq, Lux, ComponentArrays, DiffEqFlux, Optimization,
OptimizationPolyalgorithms, OptimizationOptimisers, Plots, Random

@@ -97,5 +98,3 @@ plot(tsteps, sol')
N = length(sol)
scatter!(tsteps, y[1:N])
```

![](https://aws1.discourse-cdn.com/business5/uploads/julialang/original/3X/f/3/f3c2727af36ac20e114fe3c9798e567cc9d22b9e.png)
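Besides the new import, this diff drops a hard-coded result image. The pattern the example itself teaches, feeding a discrete signal into an ODE right-hand side, sketched with an assumed interpolation helper (`DataInterpolations` is not part of this PR):

```julia
using OrdinaryDiffEq, DataInterpolations

tsteps = 0.0:0.1:10.0
ex = sin.(tsteps)                            # a recorded, discrete input signal
ex_of_t = ConstantInterpolation(ex, tsteps)  # make it queryable at any time t

rhs(u, p, t) = -p[1] .* u .+ p[2] .* ex_of_t(t)  # dynamics driven by the input
prob = ODEProblem(rhs, [0.0], (0.0, 10.0), [1.0, 1.0])
sol = solve(prob, Tsit5(), saveat = tsteps)
```
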
1 change: 1 addition & 0 deletions docs/src/examples/ode/second_order_adjoints.md
@@ -14,6 +14,7 @@ with Hessian-vector products (never forming the Hessian) for large parameter
optimizations.

```@example secondorderadjoints
using SciMLSensitivity
using Lux, ComponentArrays, DiffEqFlux, Optimization, OptimizationOptimisers,
OrdinaryDiffEq, Plots, Random, OptimizationOptimJL

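The example this import lands in is about second-order optimization that never materializes a Hessian. The core trick, a Hessian-vector product by forward-over-reverse differentiation, in a standalone sketch with a toy loss (not the tutorial's model):

```julia
using ForwardDiff, Zygote

loss(p) = sum(abs2, p .- 1.0)

# H(p) * v without forming H: differentiate the gradient along the direction v
hvp(f, p, v) = ForwardDiff.derivative(ε -> Zygote.gradient(f, p .+ ε .* v)[1], 0.0)

p = rand(3)
v = [1.0, 0.0, 0.0]
hvp(loss, p, v)  # the Hessian of this loss is 2I, so this returns 2v
```
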
1 change: 1 addition & 0 deletions docs/src/examples/ode/second_order_neural.md
@@ -21,6 +21,7 @@ neural network by the mass!)
An example of training a neural network on a second order ODE is as follows:

```@example secondorderneural
using SciMLSensitivity
using OrdinaryDiffEq, Lux, Optimization, OptimizationOptimisers, RecursiveArrayTools,
Random, ComponentArrays

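For orientation, the shape of the problem this example trains: a second-order ODE whose acceleration comes from a neural network, reduced by hand to a first-order system (architecture assumed, not the tutorial's):

```julia
using OrdinaryDiffEq, Lux, Random

rng = Random.default_rng()
nn = Chain(Dense(2 => 16, tanh), Dense(16 => 1))
ps, st = Lux.setup(rng, nn)

# State u = [x, v]; the network supplies the acceleration x'' = NN([x; v])
function second_order!(du, u, p, t)
    x, v = u[1:1], u[2:2]
    du[1:1] .= v
    du[2:2] .= first(nn(vcat(x, v), p, st))
end

prob = ODEProblem(second_order!, [1.0, 0.0], (0.0, 1.0), ps)
sol = solve(prob, Tsit5())
```
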
2 changes: 0 additions & 2 deletions docs/src/examples/optimal_control/optimal_control.md
@@ -126,5 +126,3 @@ cb(res3.u, l)
p = plot(solve(remake(prob, p = res3.u), Tsit5(), saveat = 0.01), ylim = (-6, 6), lw = 3)
plot!(p, ts, [first(first(ann([t], ComponentArray(res3.u, ax), st))) for t in ts], label = "u(t)", lw = 3)
```

![](https://user-images.githubusercontent.com/1814174/81859169-db65b280-9532-11ea-8394-dbb5efcd4036.png)
37 changes: 20 additions & 17 deletions docs/src/examples/pde/pde_constrained.md
@@ -4,6 +4,7 @@ This example uses a prediction model to optimize the one-dimensional Heat Equation
(Step-by-step description below)

```@example pde
using SciMLSensitivity
using DelimitedFiles, Plots
using OrdinaryDiffEq, Optimization, OptimizationPolyalgorithms, Zygote

@@ -17,7 +18,7 @@ u0 = exp.(-(x .- 3.0) .^ 2) # I.C

## Problem Parameters
p = [1.0, 1.0] # True solution parameters
xtrs = [dx, Nx] # Extra parameters
const xtrs = [dx, Nx] # Extra parameters
dt = 0.40 * dx^2 # CFL condition
t0, tMax = 0.0, 1000 * dt
tspan = (t0, tMax)
@@ -35,22 +36,24 @@ function d2dx(u, dx)
"""
2nd order Central difference for 2nd degree derivative
"""
return [[zero(eltype(u))];
(u[3:end] - 2.0 .* u[2:(end - 1)] + u[1:(end - 2)]) ./ (dx^2);
[zero(eltype(u))]]
return [zero(eltype(u));
(@view(u[3:end]) .- 2.0 .* @view(u[2:(end - 1)]) .+ @view(u[1:(end - 2)])) ./ (dx^2);
zero(eltype(u))]
end

## ODE description of the Physics:
function heat(u, p, t)
function heat(u, p, t, xtrs)
# Model parameters
a0, a1 = p
dx, Nx = xtrs #[1.0,3.0,0.125,100]
return 2.0 * a0 .* u + a1 .* d2dx(u, dx)
end
heat_closure(u, p, t) = heat(u, p, t, xtrs)

# Testing Solver on linear PDE
prob = ODEProblem(heat, u0, tspan, p)
prob = ODEProblem(heat_closure, u0, tspan, p)
sol = solve(prob, Tsit5(), dt = dt, saveat = t);
arr_sol = Array(sol)

plot(x, sol.u[1], lw = 3, label = "t0", size = (800, 500))
plot!(x, sol.u[end], lw = 3, ls = :dash, label = "tMax")
@@ -63,8 +66,7 @@ end
## Defining Loss function
function loss(θ)
pred = predict(θ)
l = predict(θ) - sol
return sum(abs2, l), pred # Mean squared error
return sum(abs2.(predict(θ) .- arr_sol)), pred # Mean squared error
end

l, pred = loss(ps)
@@ -101,6 +103,7 @@ res = Optimization.solve(optprob, PolyOpt(), callback = cb)
### Load Packages

```@example pde2
using SciMLSensitivity
using DelimitedFiles, Plots
using OrdinaryDiffEq, Optimization, OptimizationPolyalgorithms, Zygote
```
@@ -122,7 +125,7 @@ u0 = exp.(-(x .- 3.0) .^ 2) # I.C

## Problem Parameters
p = [1.0, 1.0] # True solution parameters
xtrs = [dx, Nx] # Extra parameters
const xtrs = [dx, Nx] # Extra parameters
dt = 0.40 * dx^2 # CFL condition
t0, tMax = 0.0, 1000 * dt
tspan = (t0, tMax)
@@ -159,9 +162,9 @@ function d2dx(u, dx)
"""
2nd order Central difference for 2nd degree derivative
"""
return [[zero(eltype(u))];
(u[3:end] - 2.0 .* u[2:(end - 1)] + u[1:(end - 2)]) ./ (dx^2);
[zero(eltype(u))]]
return [zero(eltype(u));
(@view(u[3:end]) .- 2.0 .* @view(u[2:(end - 1)]) .+ @view(u[1:(end - 2)])) ./ (dx^2);
zero(eltype(u))]
end
```

@@ -170,13 +173,13 @@ end
Next, we set up the equations that define our problem.

```@example pde2
## ODE description of the Physics:
function heat(u, p, t)
function heat(u, p, t, xtrs)
# Model parameters
a0, a1 = p
dx, Nx = xtrs #[1.0,3.0,0.125,100]
return 2.0 * a0 .* u + a1 .* d2dx(u, dx)
end
heat_closure(u, p, t) = heat(u, p, t, xtrs)
```

### Solve and Plot Ground Truth
@@ -186,8 +189,9 @@ will compare to further on.

```@example pde2
# Testing Solver on linear PDE
prob = ODEProblem(heat, u0, tspan, p)
prob = ODEProblem(heat_closure, u0, tspan, p)
sol = solve(prob, Tsit5(), dt = dt, saveat = t);
arr_sol = Array(sol)

plot(x, sol.u[1], lw = 3, label = "t0", size = (800, 500))
plot!(x, sol.u[end], lw = 3, ls = :dash, label = "tMax")
@@ -222,8 +226,7 @@ use the **mean squared error**.
## Defining Loss function
function loss(θ)
pred = predict(θ)
l = predict(θ) - sol
return sum(abs2, l), pred # Mean squared error
return sum(abs2.(predict(θ) .- arr_sol)), pred # Mean squared error
end

l, pred = loss(ps)
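The two fixes that repeat through this file, isolated in a toy sketch (names assumed): close extra arguments over a `const` global to recover the three-argument ODE signature, and diff predictions against a plain `Array` of the reference solution rather than the solution object:

```julia
using OrdinaryDiffEq

const extras = (0.5,)                        # like `const xtrs`: a type-stable global
rhs(u, p, t, extras) = -p[1] .* u .* extras[1]
rhs_closure(u, p, t) = rhs(u, p, t, extras)  # restores the f(u, p, t) signature

prob = ODEProblem(rhs_closure, [1.0], (0.0, 1.0), [2.0])
arr_ref = Array(solve(prob, Tsit5(), saveat = 0.1))  # a plain array, like `arr_sol`

loss(p) = sum(abs2, Array(solve(remake(prob, p = p), Tsit5(), saveat = 0.1)) .- arr_ref)
loss([1.5])
```
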
2 changes: 1 addition & 1 deletion docs/src/tutorials/data_parallel.md
@@ -86,7 +86,7 @@ The following is a full copy-paste example for the multithreading.
Distributed and GPU minibatching are described below.

```@example dataparallel
using OrdinaryDiffEq, Optimization, OptimizationOptimisers
using OrdinaryDiffEq, Optimization, OptimizationOptimisers, SciMLSensitivity
pa = [1.0]
u0 = [3.0]
θ = [u0; pa]
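For reference, the multithreaded pattern the touched example builds on, in minimal form (toy ODE assumed):

```julia
using OrdinaryDiffEq

prob = ODEProblem((u, p, t) -> p .* u, [1.0], (0.0, 1.0), [1.0])
params = [[0.5], [1.0], [1.5], [2.0]]  # one parameter set per trajectory

ensemble = EnsembleProblem(prob,
    prob_func = (prob, i, repeat) -> remake(prob, p = params[i]))
sols = solve(ensemble, Tsit5(), EnsembleThreads(); trajectories = length(params))
```
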
1 change: 1 addition & 0 deletions docs/src/tutorials/training_tips/divergence.md
@@ -28,6 +28,7 @@ end
A full example making use of this trick is:

```@example divergence
using SciMLSensitivity
using OrdinaryDiffEq, SciMLSensitivity, Optimization, OptimizationOptimisers,
OptimizationNLopt, Plots

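The "trick" the surrounding text refers to is guarding the loss against diverging solves. A minimal sketch with an assumed, deliberately unstable toy ODE:

```julia
using OrdinaryDiffEq
using SciMLBase: ReturnCode

prob = ODEProblem((u, p, t) -> p[1] .* u, [1.0], (0.0, 10.0), [1.0])

function guarded_loss(p)
    sol = solve(remake(prob, p = p), Tsit5(), verbose = false)
    sol.retcode == ReturnCode.Success || return Inf  # diverged: reject this candidate
    sum(abs2, Array(sol))
end

guarded_loss([200.0])  # u' = 200u overflows well before t = 10, so this returns Inf
```
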
2 changes: 2 additions & 0 deletions docs/src/tutorials/training_tips/local_minima.md
@@ -16,6 +16,7 @@ before, except with one small twist: we wish to find the neural ODE that fits
on `(0,5.0)`. Naively, we use the same training strategy as before:

```@example iterativefit
using SciMLSensitivity
using OrdinaryDiffEq,
ComponentArrays, SciMLSensitivity, Optimization, OptimizationOptimisers
using Lux, Plots, Random, Zygote
@@ -166,6 +167,7 @@ time span and (0, 10), any longer and more iterations will be required. Alternatively,
one could use a mix of (3) and (4), or break the trajectory into chunks and use just (4).

```@example resetic
using SciMLSensitivity
using OrdinaryDiffEq,
ComponentArrays, SciMLSensitivity, Optimization, OptimizationOptimisers
using Lux, Plots, Random, Zygote
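The iterative strategy described above, condensed to a runnable toy (exponential decay standing in for the neural ODE; all names are illustrative):

```julia
using OrdinaryDiffEq, Optimization, OptimizationOptimisers, SciMLSensitivity, Zygote

prob = ODEProblem((u, p, t) -> p .* u, [1.0], (0.0, 5.0), [-0.9])
data(ts) = exp.(-ts)'  # synthetic target generated by p = [-1.0]

function loss(p, tmax)
    ts = range(0.0, tmax, length = 20)
    sol = solve(remake(prob, p = p, tspan = (0.0, tmax)), Tsit5(), saveat = ts)
    sum(abs2, Array(sol) .- data(ts))
end

function fit(p, tmax)
    optf = OptimizationFunction((p, _) -> loss(p, tmax), Optimization.AutoZygote())
    Optimization.solve(OptimizationProblem(optf, p),
        OptimizationOptimisers.Adam(0.05), maxiters = 100).u
end

p1 = fit([-0.9], 1.25)  # short horizon first
p2 = fit(p1, 2.5)       # warm-start a longer window
p3 = fit(p2, 5.0)       # finish on the full span
```
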
1 change: 1 addition & 0 deletions docs/src/tutorials/training_tips/multiple_nn.md
@@ -7,6 +7,7 @@ this kind of study.
The following is a fully working demo on the Fitzhugh-Nagumo ODE:

```@example
using SciMLSensitivity
using Lux, DiffEqFlux, ComponentArrays, Optimization, OptimizationNLopt,
OptimizationOptimisers, OrdinaryDiffEq, Random

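The demo this import fixes trains several networks at once; the enabling pattern is packing every parameter set into one `ComponentArray` so a single optimizer vector feeds all of them. A minimal sketch (architectures assumed):

```julia
using Lux, ComponentArrays, Random

rng = Random.default_rng()
nn1 = Chain(Dense(1 => 8, tanh), Dense(8 => 1))
nn2 = Chain(Dense(1 => 8, tanh), Dense(8 => 1))
p1, st1 = Lux.setup(rng, nn1)
p2, st2 = Lux.setup(rng, nn2)

# One flat vector for the optimizer; named slices for each network
θ = ComponentArray(net1 = ComponentArray(p1), net2 = ComponentArray(p2))
y1 = first(nn1([0.5], θ.net1, st1))  # each network reads its own slice
y2 = first(nn2([0.5], θ.net2, st2))
```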