diff --git a/docs/make.jl b/docs/make.jl
index e63e6b55..549265b3 100644
--- a/docs/make.jl
+++ b/docs/make.jl
@@ -22,12 +22,12 @@ makedocs(
"Tutorials" => ["tutorial-basic-example.md",
"tutorial-basic-example-f.md",
"tutorial-double-integrator.md",
- "tutorial-lqr-basic.md",
"tutorial-goddard.md",
- #"tutorial-model.md",
- #"tutorial-solvers.md",
"tutorial-init.md",
+ "tutorial-lqr-basic.md",
"tutorial-plot.md",
+ #"tutorial-model.md",
+ #"tutorial-solvers.md",
#"tutorial-iss.md",
#"tutorial-flows.md",
#"tutorial-ct_repl.md",
diff --git a/docs/src/index.md b/docs/src/index.md
index fca386c8..339d96b9 100644
--- a/docs/src/index.md
+++ b/docs/src/index.md
@@ -41,8 +41,4 @@ and other constraints such as
\end{array}
```
-See our tutorials to get started solving optimal control problems:
-
-- [Basic example](@ref basic): a simple smooth optimal control problem solved by the direct method.
-- [Double integrator](@ref double-int): the classical double integrator, minimum time.
-- [Advanced example](@ref goddard): the [Goddard problem](https://en.wikipedia.org/w/index.php?title=Goddard_problem&oldid=1091155283) solved by direct and indirect methods.
+See our tutorials to get started solving optimal control problems.
diff --git a/docs/src/tutorial-basic-example-f.md b/docs/src/tutorial-basic-example-f.md
index 76c8b33a..4acea40f 100644
--- a/docs/src/tutorial-basic-example-f.md
+++ b/docs/src/tutorial-basic-example-f.md
@@ -7,10 +7,10 @@ We denote by $x = (x_1, x_2)$ the state of the wagon, that is its position $x_1$
```
-We assume that the mass is constant and unitary and that there is no friction. The dynamics is thus given by
+We assume that the mass is constant and unitary and that there is no friction. The dynamics we consider are given by
```math
- \dot x_1(t) = x_2(t), \quad \dot x_2(t) = u(t) \in \mathbb{R},
+ \dot x_1(t) = x_2(t), \quad \dot x_2(t) = u(t), \quad u(t) \in \mathbb{R},
```
which is simply the [double integrator](https://en.wikipedia.org/w/index.php?title=Double_integrator&oldid=1071399674) system.
@@ -43,21 +43,27 @@ using OptimalControl
Then, we can define the problem
```@example main
-ocp = Model()
+ocp = Model() # empty optimal control problem
-time!(ocp, [ 0, 1 ])
-state!(ocp, 2)
-control!(ocp, 1)
+time!(ocp, [ 0, 1 ]) # time interval
+state!(ocp, 2) # dimension of the state
+control!(ocp, 1) # dimension of the control
-constraint!(ocp, :initial, [ -1, 0 ])
-constraint!(ocp, :final, [ 0, 0 ])
+constraint!(ocp, :initial, [ -1, 0 ]) # initial condition
+constraint!(ocp, :final, [ 0, 0 ]) # final condition
-dynamics!(ocp, (x, u) -> [ x[2], u ])
+dynamics!(ocp, (x, u) -> [ x[2], u ]) # dynamics of the double integrator
-objective!(ocp, :lagrange, (x, u) -> 0.5u^2)
+objective!(ocp, :lagrange, (x, u) -> 0.5u^2) # cost in Lagrange form
nothing # hide
```
+!!! note "Nota bene"
+
+ There are two ways to define an optimal control problem:
+ - using functions, as in this example; see the [`Model` documentation](https://control-toolbox.org/docs/ctbase/stable/api-model.html) for more details.
+ - using an abstract formulation; you can compare both ways by taking a look at the abstract version of this [basic example](@ref basic).
+
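For comparison, here is a sketch of what the abstract formulation of this same problem might look like, assuming the `@def` syntax used in the other tutorials (the linked basic example is the authoritative version):

```julia
using OptimalControl

# sketch of the abstract formulation of the same problem,
# assuming the @def syntax shown in the other tutorials
@def ocp begin
    t ∈ [ 0, 1 ], time              # time interval
    x ∈ R², state                   # state of dimension 2
    u ∈ R, control                  # control of dimension 1
    x(0) == [ -1, 0 ]               # initial condition
    x(1) == [ 0, 0 ]                # final condition
    ẋ(t) == [ x₂(t), u(t) ]         # dynamics of the double integrator
    ∫( 0.5u(t)^2 ) → min            # cost in Lagrange form
end
```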
Solve it
```@example main
diff --git a/docs/src/tutorial-basic-example.md b/docs/src/tutorial-basic-example.md
index e0add4b0..4005a4a9 100644
--- a/docs/src/tutorial-basic-example.md
+++ b/docs/src/tutorial-basic-example.md
@@ -7,10 +7,10 @@ We denote by $x = (x_1, x_2)$ the state of the wagon, that is its position $x_1$
```
-We assume that the mass is constant and unitary and that there is no friction. The dynamics is thus given by
+We assume that the mass is constant and unitary and that there is no friction. The dynamics we consider are given by
```math
- \dot x_1(t) = x_2(t), \quad \dot x_2(t) = u(t) \in \mathbb{R},
+ \dot x_1(t) = x_2(t), \quad \dot x_2(t) = u(t), \quad u(t) \in \mathbb{R},
```
which is simply the [double integrator](https://en.wikipedia.org/w/index.php?title=Double_integrator&oldid=1071399674) system.
diff --git a/docs/src/tutorial-double-integrator.md b/docs/src/tutorial-double-integrator.md
index 379bbe36..af00580f 100644
--- a/docs/src/tutorial-double-integrator.md
+++ b/docs/src/tutorial-double-integrator.md
@@ -9,7 +9,7 @@ The problem consists in minimising the final time $t_f$ for the double integrato
and the limit conditions
```math
- x(0) = (1,2), \quad x(t_f) = (0,0)
+ x(0) = (1,2), \quad x(t_f) = (0,0).
```
This problem can be interpreted as a simple model for a wagon with constant mass moving along
@@ -48,13 +48,14 @@ Then, we can define the problem
end
nothing # hide
```
+
!!! note "Nota bene"
In order to ensure convergence of the direct solver, we have added the state constraints labelled (1) and (2):
-```math
-0 \leq q(t) \leq 5,\quad -2 \leq v(t) \leq 3,\quad t \in [ 0, tf ].
-```
+ ```math
+ 0 \leq q(t) \leq 5,\quad -2 \leq v(t) \leq 3,\quad t \in [ 0, t_f ].
+ ```
Solve it
diff --git a/docs/src/tutorial-goddard.md b/docs/src/tutorial-goddard.md
index 35661f3c..e1dd5cc6 100644
--- a/docs/src/tutorial-goddard.md
+++ b/docs/src/tutorial-goddard.md
@@ -1,4 +1,4 @@
-# [Advanced example](@id goddard)
+# [Goddard problem](@id goddard)
## Introduction
@@ -28,7 +28,7 @@ subject to the controlled dynamics
and subject to the control constraint $u(t) \in [0,1]$ and the state constraint
$v(t) \leq v_{\max}$. The initial state is fixed while only the final mass is prescribed.
-!!! note
+!!! note "Nota bene"
The Hamiltonian is affine with respect to the control, so singular arcs may occur,
as well as constrained arcs due to the path constraint on the velocity (see below).
@@ -179,7 +179,7 @@ as well as the associated multiplier for the *order one* state constraint on the
\mu(x, p) = H_{01}(x, p) / (F_1 \cdot g)(x).
```
-!!! note
+!!! note "Poisson bracket and Lie derivative"
The Poisson bracket $\{H,G\}$ is also given by the Lie derivative of $G$ along the
Hamiltonian vector field $X_H = (\nabla_p H, -\nabla_x H)$ of $H$, that is
diff --git a/docs/src/tutorial-init.md b/docs/src/tutorial-init.md
index 5736203e..00970acf 100644
--- a/docs/src/tutorial-init.md
+++ b/docs/src/tutorial-init.md
@@ -1,6 +1,10 @@
# Initial guess
-We define the following optimal control problem.
+```@meta
+CurrentModule = OptimalControl
+```
+
+We present in this tutorial the different ways to provide an initial guess when solving an optimal control problem with the [`solve`](@ref) command. For the illustrations, we define the following optimal control problem.
```@example main
using OptimalControl
@@ -39,7 +43,7 @@ Let us plot the solution of the optimal control problem.
plot(sol, size=(600, 450))
```
-To reduce the number of iterations and improve the convergence, we can give an initial guess to the solver. We define the following initial guess.
+To reduce the number of iterations and improve the convergence, we can give an initial guess to the solver. We define the following constant initial guess.
```@example main
# constant initial guess
@@ -57,8 +61,8 @@ We can also provide functions of time as initial guess for the state and the con
```@example main
# initial guess as functions of time
-x(t) = [ -0.2 * t, 0.1 * t ]
-u(t) = -0.2 * t
+x(t) = [ -0.2t, 0.1t ]
+u(t) = -0.2t
initial_guess = (state=x, control=u, variable=0.05)
# solve the optimal control problem with initial guess
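# Hypothetical sketch (in comments, since the exact keyword is an assumption):
# if `solve` accepts the guess through an `init` keyword, the call might read
#   sol = solve(ocp, init=initial_guess)
# check the `solve` documentation for the actual keyword name.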
diff --git a/docs/src/tutorial-lqr-basic.md b/docs/src/tutorial-lqr-basic.md
index 1239f877..a24220e5 100644
--- a/docs/src/tutorial-lqr-basic.md
+++ b/docs/src/tutorial-lqr-basic.md
@@ -1,6 +1,6 @@
# LQR example
-The energy and distance minimisation Linear Quadratic Problem (LQR) problem consists in minimising
+We consider the following Linear Quadratic Regulator (LQR) problem which consists in minimising
```math
\frac{1}{2} \int_{0}^{t_f} \left( x_1^2(t) + x_2^2(t) + u^2(t) \right) \, \mathrm{d}t
@@ -25,6 +25,7 @@ We define $A$ and $B$ as
B = \begin{pmatrix} 0 \\ 1 \\ \end{pmatrix}
```
+in order to get $\dot{x} = Ax + Bu$,
and we aim to solve this optimal control problem for different values of $t_f$.
First, we need to import the `OptimalControl.jl` package.
@@ -37,18 +38,20 @@ Then, we can define the problem parameterized by the final time `tf`.
```@example main
x0 = [ 0
1 ]
+
A = [ 0 1
-1 0 ]
+
B = [ 0
1 ]
-function LQRProblem(tf)
+function lqr(tf)
@def ocp begin
t ∈ [ 0, tf ], time
x ∈ R², state
u ∈ R, control
- x(0) == x0, initial_con
+ x(0) == x0
ẋ(t) == A * x(t) + B * u(t)
∫( 0.5(x₁(t)^2 + x₂(t)^2 + u(t)^2) ) → min
end
@@ -61,12 +64,12 @@ nothing # hide
We solve the problem for $t_f \in \{3, 5, 30\}$.
```@example main
-solutions = []
-tfspan = [3, 5, 30]
+solutions = [] # empty list of solutions
+tfs = [3, 5, 30]
-for tf ∈ tfspan
- sol = solve(LQRProblem(tf), display=false)
- push!(solutions, sol)
+for tf ∈ tfs
+ solution = solve(lqr(tf), display=false)
+ push!(solutions, solution)
end
nothing # hide
```
@@ -74,33 +77,18 @@ nothing # hide
We choose to plot the solutions considering a normalized time $s=(t-t_0)/(t_f-t_0)$.
We thus introduce the function `rescale` that rescales the time and redefine the state, costate and control variables.
-!!! tip
-
- Instead of defining the function `rescale`, you can consider $t_f$ as a parameter and define the following
- optimal control problem:
-
- ```julia
- @def ocp begin
- s ∈ [ 0, 1 ], time
- x ∈ R², state
- u ∈ R, control
- x(0) == x0, initial_con
- ẋ(s) == tf * ( A * x(s) + B * u(s) )
- ∫( 0.5(x₁(s)^2 + x₂(s)^2 + u(s)^2) ) → min
- end
- ```
-
```@example main
function rescale(sol)
- # integration times
- times = sol.times
+ # initial and final times
+ t0 = sol.times[1]
+ tf = sol.times[end]
# s is the rescaled time between 0 and 1
- t(s) = times[1] + s * (times[end] - times[1])
+ t(s) = t0 + s * ( tf - t0 )
# rescaled times
- sol.times = (times .- times[1]) ./ (times[end] .- times[1])
+ sol.times = ( sol.times .- t0 ) ./ ( tf - t0 )
# redefinition of the state, control and costate
x = sol.state
@@ -116,10 +104,25 @@ end
nothing # hide
```
-!!! note
+!!! note "Nota bene"
The `∘` operator is the composition operator. Hence, `x∘t` is the function `s -> x(t(s))`.
+!!! tip
+
+ Instead of defining the function `rescale`, you can consider $t_f$ as a parameter and define the following
+ optimal control problem:
+
+ ```julia
+ @def ocp begin
+ s ∈ [ 0, 1 ], time
+ x ∈ R², state
+ u ∈ R, control
+ x(0) == x0
+ ẋ(s) == tf * ( A * x(s) + B * u(s) )
+ ∫( 0.5(x₁(s)^2 + x₂(s)^2 + u(s)^2) ) → min
+ end
+ ```
Finally we choose to plot only the state and control variables.
@@ -134,12 +137,12 @@ end
# we plot only the state and control variables and we add the legend
px1 = plot(plt[1], legend=false, xlabel="s", ylabel="x₁")
-px2 = plot(plt[2], label=reshape(["tf = $tf" for tf ∈ tfspan],
- (1, length(tfspan))), xlabel="s", ylabel="x₂")
+px2 = plot(plt[2], label=reshape(["tf = $tf" for tf ∈ tfs],
+ (1, length(tfs))), xlabel="s", ylabel="x₂")
pu = plot(plt[5], legend=false, xlabel="s", ylabel="u")
plot(px1, px2, pu, layout=(1, 3), size=(800, 300), leftmargin=5mm, bottommargin=5mm)
```
-!!! note
+!!! note "Nota bene"
We can observe that $x(t_f)$ converges to the origin as $t_f$ increases.
diff --git a/docs/src/tutorial-plot.md b/docs/src/tutorial-plot.md
index 4b7db6a2..cea7d498 100644
--- a/docs/src/tutorial-plot.md
+++ b/docs/src/tutorial-plot.md
@@ -1,6 +1,6 @@
# Plot a solution
-In this tutorial we explain the different ways to plot a solution from an optimal control problem.
+In this tutorial we explain the different ways to plot a solution of an optimal control problem.
Let us start by importing the necessary package.
@@ -35,11 +35,11 @@ plot(sol)
As you can see, it produces a grid of subplots. The left column contains the state trajectories, the right column the costate trajectories, and at the bottom we have the control trajectory.
-Attributes from `Plots.jl` can be passed to the `plot` function:
+Attributes from [`Plots.jl`](https://docs.juliaplots.org) can be passed to the `plot` function:
-- In addition to `sol` you can pass attributes to the full `Plot`, see the [attribute plot documentation](https://docs.juliaplots.org/latest/generated/attributes_plot/) from `Plots.jl` for more details. For instance, you can specify the size of the figure.
-- You can also pass attributes to the subplots, see the [attribute subplot documentation](https://docs.juliaplots.org/latest/generated/attributes_subplot/) from `Plots.jl` for more details. However, it will affect all the subplots. For instance, you can specify the location of the legend.
-- In the same way, you can pass axis attributes to the subplots, see the [attribute axis documentation](https://docs.juliaplots.org/latest/generated/attributes_axis/) from `Plots.jl` for more details. It will also affect all the subplots. For instance, you can remove the grid.
+- In addition to `sol` you can pass attributes to the full `Plot`, see the [plot attributes documentation](https://docs.juliaplots.org/latest/generated/attributes_plot/) from `Plots.jl` for more details. For instance, you can specify the size of the figure.
+- You can also pass attributes to the subplots, see the [subplot attributes documentation](https://docs.juliaplots.org/latest/generated/attributes_subplot/) from `Plots.jl` for more details. However, this will affect all the subplots. For instance, you can specify the location of the legend.
+- In the same way, you can pass axis attributes to the subplots, see the [axis attributes documentation](https://docs.juliaplots.org/latest/generated/attributes_axis/) from `Plots.jl` for more details. This will also affect all the subplots. For instance, you can remove the grid.
```@example main
plot(sol, size=(700, 450), legend=:bottomright, grid=false)
@@ -84,11 +84,12 @@ sol2 = solve(ocp, display=false)
nothing # hide
```
-We first plot the solution of the first optimal control problem. Then, we plot the solution of the second optimal control problem on the same figure, but with dashed lines.
+We first plot the solution of the first optimal control problem; then we plot the solution of the second optimal control problem on the same figure, but with dashed lines.
```@example main
# first plot
plt = plot(sol, size=(700, 450))
+
# second plot
style = (linestyle=:dash, )
plot!(plt, sol2, state_style=style, costate_style=style, control_style=style)
@@ -118,6 +119,8 @@ u = sol.control
plot(t, norm∘u, label="‖u‖")
```
-!!! note
+!!! note "Nota bene"
- The `norm` function is from `LinearAlgebra.jl`. The `∘` operator is the composition operator. Hence, `norm∘u` is equivalent to `t -> norm(u(t))`. The `sol.state`, `sol.costate` and `sol.control` are functions that return the state, costate and control trajectories at a given time.
+ - The `norm` function is from `LinearAlgebra.jl`.
+ - The `∘` operator is the composition operator. Hence, `norm∘u` is the function `t -> norm(u(t))`.
+ - The `sol.state`, `sol.costate` and `sol.control` are functions that return the state, costate and control trajectories at a given time.
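
As a minimal, self-contained illustration of the composition operator, independent of any solution object:

```julia
using LinearAlgebra

# u is a function of time; norm∘u is the function t -> norm(u(t))
u(t) = [ cos(t), sin(t) ]
nu = norm ∘ u
nu(0.0)   # == 1.0, since u(t) is a unit vector for every t
```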