Merge pull request #156 from control-toolbox/doc
Doc
ocots authored Oct 5, 2023
2 parents 9aa3a2b + c7da462 commit 6abbd46
Showing 9 changed files with 84 additions and 71 deletions.
6 changes: 3 additions & 3 deletions docs/make.jl
Original file line number Diff line number Diff line change
@@ -22,12 +22,12 @@ makedocs(
"Tutorials" => ["tutorial-basic-example.md",
"tutorial-basic-example-f.md",
"tutorial-double-integrator.md",
"tutorial-lqr-basic.md",
"tutorial-goddard.md",
#"tutorial-model.md",
#"tutorial-solvers.md",
"tutorial-init.md",
"tutorial-lqr-basic.md",
"tutorial-plot.md",
#"tutorial-model.md",
#"tutorial-solvers.md",
#"tutorial-iss.md",
#"tutorial-flows.md",
#"tutorial-ct_repl.md",
6 changes: 1 addition & 5 deletions docs/src/index.md
@@ -41,8 +41,4 @@ and other constraints such as
\end{array}
```

See our tutorials to get started solving optimal control problems:

- [Basic example](@ref basic): a simple smooth optimal control problem solved by the direct method.
- [Double integrator](@ref double-int): the classical double integrator, minimum time.
- [Advanced example](@ref goddard): the [Goddard problem](https://en.wikipedia.org/w/index.php?title=Goddard_problem&oldid=1091155283) solved by direct and indirect methods.
See our tutorials to get started solving optimal control problems.
26 changes: 16 additions & 10 deletions docs/src/tutorial-basic-example-f.md
@@ -7,10 +7,10 @@ We denote by $x = (x_1, x_2)$ the state of the wagon, that is its position $x_1$
<img src="./assets/chariot.png" style="display: block; margin: 0 auto 20px auto;" width="300px">
```

We assume that the mass is constant and unitary and that there is no friction. The dynamics is thus given by
We assume that the mass is constant and unitary and that there is no friction. The dynamics we consider is given by

```math
\dot x_1(t) = x_2(t), \quad \dot x_2(t) = u(t) \in \mathbb{R},
\dot x_1(t) = x_2(t), \quad \dot x_2(t) = u(t), \quad u(t) \in \R,
```

which is simply the [double integrator](https://en.wikipedia.org/w/index.php?title=Double_integrator&oldid=1071399674) system.
@@ -43,21 +43,27 @@ using OptimalControl
Then, we can define the problem

```@example main
ocp = Model()
ocp = Model() # empty optimal control problem
time!(ocp, [ 0, 1 ])
state!(ocp, 2)
control!(ocp, 1)
time!(ocp, [ 0, 1 ]) # time interval
state!(ocp, 2) # dimension of the state
control!(ocp, 1) # dimension of the control
constraint!(ocp, :initial, [ -1, 0 ])
constraint!(ocp, :final, [ 0, 0 ])
constraint!(ocp, :initial, [ -1, 0 ]) # initial condition
constraint!(ocp, :final, [ 0, 0 ]) # final condition
dynamics!(ocp, (x, u) -> [ x[2], u ])
dynamics!(ocp, (x, u) -> [ x[2], u ]) # dynamics of the double integrator
objective!(ocp, :lagrange, (x, u) -> 0.5u^2)
objective!(ocp, :lagrange, (x, u) -> 0.5u^2) # cost in Lagrange form
nothing # hide
```

!!! note "Nota bene"

There are two ways to define an optimal control problem:
- using functions, as in this example; see the [`Model` documentation](https://control-toolbox.org/docs/ctbase/stable/api-model.html) for more details.
- using an abstract formulation. You can compare both ways by taking a look at the abstract version of this [basic example](@ref basic).
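
For comparison, here is a sketch of the same problem in the abstract formulation. This is only an illustration, assuming the `@def` macro syntax shown in the LQR tutorial of this documentation carries over unchanged; the abstract version of the [basic example](@ref basic) is the authoritative reference.

```julia
using OptimalControl

# sketch of the abstract formulation of the same double integrator problem
@def ocp begin
    t ∈ [ 0, 1 ], time          # time interval
    x ∈ R², state               # state of dimension 2
    u ∈ R, control              # scalar control
    x(0) == [ -1, 0 ]           # initial condition
    x(1) == [ 0, 0 ]            # final condition
    ẋ(t) == [ x₂(t), u(t) ]     # double integrator dynamics
    ∫( 0.5u(t)^2 ) → min        # cost in Lagrange form
end
```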

Solve it

```@example main
4 changes: 2 additions & 2 deletions docs/src/tutorial-basic-example.md
@@ -7,10 +7,10 @@ We denote by $x = (x_1, x_2)$ the state of the wagon, that is its position $x_1$
<img src="./assets/chariot.png" style="display: block; margin: 0 auto 20px auto;" width="300px">
```

We assume that the mass is constant and unitary and that there is no friction. The dynamics is thus given by
We assume that the mass is constant and unitary and that there is no friction. The dynamics we consider is given by

```math
\dot x_1(t) = x_2(t), \quad \dot x_2(t) = u(t) \in \mathbb{R},
\dot x_1(t) = x_2(t), \quad \dot x_2(t) = u(t), \quad u(t) \in \R,
```

which is simply the [double integrator](https://en.wikipedia.org/w/index.php?title=Double_integrator&oldid=1071399674) system.
9 changes: 5 additions & 4 deletions docs/src/tutorial-double-integrator.md
@@ -9,7 +9,7 @@ The problem consists in minimising the final time $t_f$ for the double integrator
and the limit conditions

```math
x(0) = (1,2), \quad x(t_f) = (0,0)
x(0) = (1,2), \quad x(t_f) = (0,0).
```

This problem can be interpreted as a simple model for a wagon with constant mass moving along
@@ -48,13 +48,14 @@ Then, we can define the problem
end
nothing # hide
```

!!! note "Nota bene"

In order to ensure convergence of the direct solver, we have added the state constraints labelled (1) and (2):

```math
0 \leq q(t) \leq 5,\quad -2 \leq v(t) \leq 3,\quad t \in [ 0, tf ].
```
```math
0 \leq q(t) \leq 5,\quad -2 \leq v(t) \leq 3,\quad t \in [ 0, t_f ].
```

Solve it

6 changes: 3 additions & 3 deletions docs/src/tutorial-goddard.md
@@ -1,4 +1,4 @@
# [Advanced example](@id goddard)
# [Goddard problem](@id goddard)

## Introduction

@@ -28,7 +28,7 @@ subject to the controlled dynamics
and subject to the control constraint $u(t) \in [0,1]$ and the state constraint
$v(t) \leq v_{\max}$. The initial state is fixed while only the final mass is prescribed.

!!! note
!!! note "Nota bene"

The Hamiltonian is affine with respect to the control, so singular arcs may occur,
as well as constrained arcs due to the path constraint on the velocity (see below).
@@ -179,7 +179,7 @@ as well as the associated multiplier for the *order one* state constraint on the
\mu(x, p) = H_{01}(x, p) / (F_1 \cdot g)(x).
```

!!! note
!!! note "Poisson bracket and Lie derivative"

The Poisson bracket $\{H,G\}$ is also given by the Lie derivative of $G$ along the
Hamiltonian vector field $X_H = (\nabla_p H, -\nabla_x H)$ of $H$, that is
12 changes: 8 additions & 4 deletions docs/src/tutorial-init.md
@@ -1,6 +1,10 @@
# Initial guess

We define the following optimal control problem.
```@meta
CurrentModule = OptimalControl
```

We present in this tutorial the different possibilities to provide an initial guess to solve an optimal control problem using the [`solve`](@ref) command. For the illustrations, we define the following optimal control problem.

```@example main
using OptimalControl
@@ -39,7 +43,7 @@ Let us plot the solution of the optimal control problem.
plot(sol, size=(600, 450))
```

To reduce the number of iterations and improve the convergence, we can give an initial guess to the solver. We define the following initial guess.
To reduce the number of iterations and improve the convergence, we can give an initial guess to the solver. We define the following constant initial guess.

```@example main
# constant initial guess
@@ -57,8 +61,8 @@ We can also provide functions of time as initial guess for the state and the control.

```@example main
# initial guess as functions of time
x(t) = [ -0.2 * t, 0.1 * t ]
u(t) = -0.2 * t
x(t) = [ -0.2t, 0.1t ]
u(t) = -0.2t
initial_guess = (state=x, control=u, variable=0.05)
# solve the optimal control problem with initial guess
67 changes: 35 additions & 32 deletions docs/src/tutorial-lqr-basic.md
@@ -1,6 +1,6 @@
# LQR example

The energy and distance minimisation Linear Quadratic Problem (LQR) problem consists in minimising
We consider the following Linear Quadratic Regulator (LQR) problem, which consists in minimising

```math
\frac{1}{2} \int_{0}^{t_f} \left( x_1^2(t) + x_2^2(t) + u^2(t) \right) \, \mathrm{d}t
@@ -25,6 +25,7 @@ We define $A$ and $B$ as
B = \begin{pmatrix} 0 \\ 1 \\ \end{pmatrix}
```

in order to get $\dot{x} = Ax + Bu$,
and we aim to solve this optimal control problem for different values of $t_f$.
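
For the record, with $A$ and $B$ as defined in the code below, the relation $\dot{x} = Ax + Bu$ written componentwise is

```math
\dot x_1(t) = x_2(t), \quad \dot x_2(t) = -x_1(t) + u(t).
```
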
First, we need to import the `OptimalControl.jl` package.

@@ -37,18 +38,20 @@ Then, we can define the problem parameterized by the final time `tf`.
```@example main
x0 = [ 0
1 ]
A = [ 0 1
-1 0 ]
B = [ 0
1 ]
function LQRProblem(tf)
function lqr(tf)
@def ocp begin
t ∈ [ 0, tf ], time
x ∈ R², state
u ∈ R, control
x(0) == x0, initial_con
x(0) == x0
ẋ(t) == A * x(t) + B * u(t)
∫( 0.5(x₁(t)^2 + x₂(t)^2 + u(t)^2) ) → min
end
@@ -61,46 +64,31 @@ nothing # hide
We solve the problem for $t_f \in \{3, 5, 30\}$.

```@example main
solutions = []
tfspan = [3, 5, 30]
solutions = [] # empty list of solutions
tfs = [3, 5, 30]
for tf ∈ tfspan
sol = solve(LQRProblem(tf), display=false)
push!(solutions, sol)
for tf ∈ tfs
solution = solve(lqr(tf), display=false)
push!(solutions, solution)
end
nothing # hide
```

We choose to plot the solutions considering a normalized time $s=(t-t_0)/(t_f-t_0)$.
We thus introduce the function `rescale` that rescales the time and redefines the state, costate and control variables.

!!! tip

Instead of defining the function `rescale`, you can consider $t_f$ as a parameter and define the following
optimal control problem:

```julia
@def ocp begin
s ∈ [ 0, 1 ], time
x ∈ R², state
u ∈ R, control
x(0) == x0, initial_con
ẋ(s) == tf * ( A * x(s) + B * u(s) )
∫( 0.5(x₁(s)^2 + x₂(s)^2 + u(s)^2) ) → min
end
```

```@example main
function rescale(sol)
# integration times
times = sol.times
# initial and final times
t0 = sol.times[1]
tf = sol.times[end]
# s is the rescaled time between 0 and 1
t(s) = times[1] + s * (times[end] - times[1])
t(s) = t0 + s * ( tf - t0 )
# rescaled times
sol.times = (times .- times[1]) ./ (times[end] .- times[1])
sol.times = ( sol.times .- t0 ) ./ ( tf - t0 )
# redefinition of the state, control and costate
x = sol.state
@@ -116,10 +104,25 @@ end
nothing # hide
```

!!! note
!!! note "Nota bene"

The `∘` operator is the composition operator. Hence, `x∘t` is the function `s -> x(t(s))`.
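
As a standalone illustration (plain Julia, independent of OptimalControl), composition with `∘` works as follows:

```julia
# ∘ is function composition from Julia Base: (f ∘ g)(s) == f(g(s))
f = s -> s^2
g = s -> s + 1
h = f ∘ g      # h is the function s -> (s + 1)^2
h(2)           # 9
```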

!!! tip

Instead of defining the function `rescale`, you can consider $t_f$ as a parameter and define the following
optimal control problem:

```julia
@def ocp begin
s ∈ [ 0, 1 ], time
x ∈ R², state
u ∈ R, control
x(0) == x0
ẋ(s) == tf * ( A * x(s) + B * u(s) )
∫( 0.5(x₁(s)^2 + x₂(s)^2 + u(s)^2) ) → min
end
```

Finally we choose to plot only the state and control variables.

@@ -134,12 +137,12 @@ end
# we plot only the state and control variables and we add the legend
px1 = plot(plt[1], legend=false, xlabel="s", ylabel="x₁")
px2 = plot(plt[2], label=reshape(["tf = $tf" for tf ∈ tfspan],
(1, length(tfspan))), xlabel="s", ylabel="x₂")
px2 = plot(plt[2], label=reshape(["tf = $tf" for tf ∈ tfs],
(1, length(tfs))), xlabel="s", ylabel="x₂")
pu = plot(plt[5], legend=false, xlabel="s", ylabel="u")
plot(px1, px2, pu, layout=(1, 3), size=(800, 300), leftmargin=5mm, bottommargin=5mm)
```

!!! note
!!! note "Nota bene"

We can observe that $x(t_f)$ converges to the origin as $t_f$ increases.
19 changes: 11 additions & 8 deletions docs/src/tutorial-plot.md
@@ -1,6 +1,6 @@
# Plot a solution

In this tutorial we explain the different ways to plot a solution from an optimal control problem.
In this tutorial we explain the different ways to plot a solution of an optimal control problem.

Let us start by importing the necessary package.

@@ -35,11 +35,11 @@ plot(sol)

As you can see, it produces a grid of subplots. The left column contains the state trajectories, the right column the costate trajectories, and at the bottom we have the control trajectory.

Attributes from `Plots.jl` can be passed to the `plot` function:
Attributes from [`Plots.jl`](https://docs.juliaplots.org) can be passed to the `plot` function:

- In addition to `sol` you can pass attributes to the full `Plot`, see the [attribute plot documentation](https://docs.juliaplots.org/latest/generated/attributes_plot/) from `Plots.jl` for more details. For instance, you can specify the size of the figure.
- You can also pass attributes to the subplots, see the [attribute subplot documentation](https://docs.juliaplots.org/latest/generated/attributes_subplot/) from `Plots.jl` for more details. However, it will affect all the subplots. For instance, you can specify the location of the legend.
- In the same way, you can pass axis attributes to the subplots, see the [attribute axis documentation](https://docs.juliaplots.org/latest/generated/attributes_axis/) from `Plots.jl` for more details. It will also affect all the subplots. For instance, you can remove the grid.
- In addition to `sol` you can pass attributes to the full `Plot`, see the [attributes plot documentation](https://docs.juliaplots.org/latest/generated/attributes_plot/) from `Plots.jl` for more details. For instance, you can specify the size of the figure.
- You can also pass attributes to the subplots, see the [attributes subplot documentation](https://docs.juliaplots.org/latest/generated/attributes_subplot/) from `Plots.jl` for more details. However, it will affect all the subplots. For instance, you can specify the location of the legend.
- In the same way, you can pass axis attributes to the subplots, see the [attributes axis documentation](https://docs.juliaplots.org/latest/generated/attributes_axis/) from `Plots.jl` for more details. It will also affect all the subplots. For instance, you can remove the grid.

```@example main
plot(sol, size=(700, 450), legend=:bottomright, grid=false)
@@ -84,11 +84,12 @@ sol2 = solve(ocp, display=false)
nothing # hide
```

We first plot the solution of the first optimal control problem. Then, we plot the solution of the second optimal control problem on the same figure, but with dashed lines.
We first plot the solution of the first optimal control problem; then, we plot the solution of the second optimal control problem on the same figure, but with dashed lines.

```@example main
# first plot
plt = plot(sol, size=(700, 450))
# second plot
style = (linestyle=:dash, )
plot!(plt, sol2, state_style=style, costate_style=style, control_style=style)
@@ -118,6 +119,8 @@ u = sol.control
plot(t, norm∘u, label="‖u‖")
```

!!! note
!!! note "Nota bene"

The `norm` function is from `LinearAlgebra.jl`. The `∘` operator is the composition operator. Hence, `norm∘u` is equivalent to `t -> norm(u(t))`. The `sol.state`, `sol.costate` and `sol.control` are functions that return the state, costate and control trajectories at a given time.
- The `norm` function is from `LinearAlgebra.jl`.
- The `∘` operator is the composition operator. Hence, `norm∘u` is the function `t -> norm(u(t))`.
- The `sol.state`, `sol.costate` and `sol.control` are functions that return the state, costate and control trajectories at a given time.
