More numerical considerations
SouthEndMusic committed Sep 27, 2023
1 parent 474ba2c commit a51264d
Showing 2 changed files with 19 additions and 12 deletions.
2 changes: 1 addition & 1 deletion docs/core/equations.qmd
@@ -58,7 +58,7 @@ In general a model has more nodes than states, so in the Julia core there is a d
## The Jacobian
-The Jacobian is a $n\times n$ matrix where $n$ is the number of states in the simulation. The computation of the Jacobian for the solver is hardcoded in the Julia core, which is faster and more accurate than using finite difference methods.
+The Jacobian is an $n\times n$ matrix where $n$ is the number of states in the simulation. The Jacobian is computed either using finite difference methods or automatic differentiation. For more details on the computation of the Jacobian and how it is used in the solvers see [numerical considerations](numerics.qmd).
The entries of the Jacobian $J$ are given by
29 changes: 18 additions & 11 deletions docs/core/numerics.qmd
@@ -18,7 +18,7 @@ In general $\mathbf{f}$ is a non-linear function in $\mathbf{u}$. These non-line
- `Basin` profiles;
- `TabulatedRatingCurve` Q(h) relations.
-The problem @eq-prob can be solved by various numerical time-integration methods. To do this the time interval $[t_0,t_\text{end}]$ is discretized into a finite number of timesteps $t_0 < t_1 < \ldots < t_N = t_\text{end}$ for which approximate solutions $\mathbf{w}_n \approx u(t_n)$ are computed. In general we do not assume fixed timesteps.
+The problem @eq-prob can be solved by various numerical time-integration methods. To do this the time interval $[t_0,t_\text{end}]$ is discretized into a finite number of time points $t_0 < t_1 < \ldots < t_N = t_\text{end}$ for which approximate solutions $\mathbf{w}_n \approx \mathbf{u}(t_n)$ are computed. In general we do not assume a fixed timestep (the interval between successive points in time).
# Example numerical methods
@@ -29,20 +29,20 @@ This section discusses 2 relatively simple numerical methods, mainly to demonstr
The simplest numerical method is Euler forward:
$$
\mathbf{w}_{n+1} = \mathbf{w}_n + (t_{n+1}-t_n)\mathbf{f}(\mathbf{w}_n, t_n).
-$$
+$$ {#eq-eulerforward}
Here $\mathbf{w}_{n+1}$ is given as a simple explicit function of $\mathbf{w}_n$.
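To make @eq-eulerforward concrete, below is a minimal sketch of a forward Euler loop for a generic right-hand side `f(u, t)`. This is an illustration only, not code from the Ribasim core; the function name and the decay example are made up.

```julia
# Minimal forward Euler integrator for du/dt = f(u, t) (illustration only).
function euler_forward(f, u0, tspan, dt)
    t, u = tspan[1], copy(u0)
    ts, us = [t], [copy(u)]
    while t < tspan[2]
        u .+= dt .* f(u, t)        # w_{n+1} = w_n + (t_{n+1} - t_n) f(w_n, t_n)
        t += dt
        push!(ts, t)
        push!(us, copy(u))
    end
    return ts, us
end

# Example: exponential decay du/dt = -u starting from u(0) = 1.
ts, us = euler_forward((u, t) -> -u, [1.0], (0.0, 1.0), 0.1)
```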
-## Eular backward
+## Euler backward
Euler backward is formulated as follows:
$$
\mathbf{w}_{n+1} = \mathbf{w}_n + (t_{n+1}-t_n)\mathbf{f}(\mathbf{w}_{n+1},t_{n+1}).
-$$
+$$ {#eq-eulerbackward}
Note that this is an implicit equation for $\mathbf{w}_{n+1}$, which is non-linear because of the non-linearity of $\mathbf{f}$.
-Generally there are 2 iterative methods used for finding solutions to non-linear equations like this:
+Generally one of the following iterative methods is used for finding solutions to non-linear equations like this:
- Picard iteration for fixed points. This method aims to approximate $\mathbf{w}_{n+1}$ as a fixed point of the function
$$
@@ -59,22 +59,27 @@
\mathbf{0} =& \mathbf{h}(\mathbf{w}_{n+1}^k) + \mathbf{J}(\mathbf{h})(\mathbf{w}_{n+1}^k)(\mathbf{w}_{n+1}^{k+1}-\mathbf{w}_{n+1}^k) \\
=& \mathbf{w}_n + (t_{n+1}-t_n)\mathbf{f}(\mathbf{w}_{n+1}^k,t_{n+1}) - \mathbf{w}_{n+1}^k \\ +&\left[(t_{n+1}-t_n)\mathbf{J}(\mathbf{f})(\mathbf{w}_{n+1}^k)-\mathbf{I}\right](\mathbf{w}_{n+1}^{k+1}-\mathbf{w}_{n+1}^k).
\end{align}
-$$
+$$ {#eq-newtoniter}
Note that this thus requires an evaluation of the Jacobian of $\mathbf{f}$ and solving a linear system per iteration.
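To illustrate how @eq-newtoniter is used, below is a sketch of a single backward Euler step solved with a hand-rolled Newton iteration, using `ForwardDiff.jl` for the Jacobian of $\mathbf{f}$. This is a simplified stand-in for what the implicit solvers in `DifferentialEquations.jl` do internally, not Ribasim code; the function name, tolerance and example are made up.

```julia
using LinearAlgebra, ForwardDiff

# One backward Euler step w_{n+1} = w_n + Δt f(w_{n+1}, t_{n+1}),
# solved with Newton iterations on h(w) = w_n + Δt f(w, t_{n+1}) - w.
function euler_backward_step(f, wn, tn, dt; maxiter = 20, tol = 1e-10)
    tnext = tn + dt
    w = copy(wn)                                                   # initial guess w⁰ = w_n
    for _ in 1:maxiter
        h = wn .+ dt .* f(w, tnext) .- w
        Jh = dt .* ForwardDiff.jacobian(u -> f(u, tnext), w) - I   # J(h) = Δt J(f) - I
        dw = -(Jh \ h)                                             # solve the linear system per iteration
        w .+= dw
        norm(dw) < tol && break
    end
    return w
end

# Example: linear decay f(u, t) = -u; one step of size 0.1.
w1 = euler_backward_step((u, t) -> -u, [1.0], 0.0, 0.1)
```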
# The advantage of implicit methods
***Implicit methods are generally more stable; is there an intuition for this?***
The implicit method @eq-eulerbackward is a coupled system of equations for $\mathbf{w}_{n+1}$, while the explicit method @eq-eulerforward is fully decoupled. This means that in a time integration step with an implicit method the basins exchange information with each other, while with an explicit method this does not happen. A consequence is that local events (e.g. a pump turning on) propagate slowly through the model with an explicit method but quickly with an implicit method, making implicit methods more stable.
# Jacobian computations
The iterations @eq-newtoniter require an evaluation of the Jacobian of $\mathbf{f}$. The Jacobian of `water_balance!` is discussed [here](equations.qmd#the-jacobian).
***Finite difference Jacobians probably don't work well together with `DiscreteCallback` control, because a jump in parameters can make the finite differences large. Or does the solver take care of that, the same way it does with `ContinuousCallback`?***
There are several ways to compute the Jacobian:
- Using [finite difference methods](https://en.wikipedia.org/wiki/Numerical_differentiation#Finite_differences); however, these are relatively expensive to compute and introduce truncation errors.
- Hardcoding analytical expressions for the partial derivatives; these have the advantage over finite difference methods that they are cheaper to compute and do not introduce truncation errors. However, deriving and coding these partial derivatives is complex and thus error-prone, and requires maintenance for every adjustment of $\mathbf{f}$.
- Using [automatic differentiation](https://juliadiff.org/ForwardDiff.jl/stable/dev/how_it_works/); this has the advantages of being fast to compute and not requiring much extra implementation beyond writing $\mathbf{f}$ in a way that is compatible with an automatic differentiation package, which in the case of Ribasim is `ForwardDiff.jl`. What remains is supplying the sparsity structure of the Jacobian of $\mathbf{f}$, to make the computation of the Jacobian efficient in terms of memory use and computation time.
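To illustrate the difference between the first and the last option, the sketch below compares a hand-rolled finite difference Jacobian with `ForwardDiff.jacobian` on a made-up right-hand side; the sparsity handling mentioned above is not shown.

```julia
using ForwardDiff

# Made-up non-linear right-hand side with two states (not water_balance!).
f(u) = [-sqrt(abs(u[1])) + 0.1 * u[2], -u[2]^2]

# Forward finite difference Jacobian: one extra evaluation of f per column,
# and the step size ε introduces a truncation error.
function fd_jacobian(f, u; ε = 1e-8)
    f0 = f(u)
    J = zeros(length(f0), length(u))
    for j in eachindex(u)
        up = copy(u)
        up[j] += ε
        J[:, j] = (f(up) - f0) ./ ε
    end
    return J
end

u = [2.0, 3.0]
J_fd = fd_jacobian(f, u)
J_ad = ForwardDiff.jacobian(f, u)   # exact up to floating point, no step size to tune
```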
# Continuity considerations
-The convergence of the Newton-Rhapson method can be [proven](https://en.wikipedia.org/wiki/Newton%27s_method#Proof_of_quadratic_convergence_for_Newton's_iterative_method) given certain properties of $\mathbf{f}$ around the initial guess and root to find. An important aspect is the smoothness of $\mathbf{f}$. The basin profiles and $Q(h)$ relations are given by interpolated data, and thus we have some control over the smoothness of these functions by the choice of interpolation method. This is discussed further below.
+The convergence of the Newton-Raphson method can be [proven](https://en.wikipedia.org/wiki/Newton%27s_method#Proof_of_quadratic_convergence_for_Newton's_iterative_method) given certain properties of $\mathbf{f}$ around the initial guess and the root. An important aspect is the smoothness of $\mathbf{f}$. The basin profiles and $Q(h)$ relations are given by interpolated data, and thus we have some control over the smoothness of these functions by the choice of interpolation method. This is discussed further below. The Manning resistance is not mentioned here since it is given by an analytical expression.
Control mechanisms can change parameters of $\mathbf{f}$ discontinuously, leading to discontinuities of $\mathbf{f}$. This however does not yield problems for the time integration methods in `DifferentialEquations.jl`, since the [callback mechanisms](https://docs.sciml.ai/DiffEqDocs/stable/features/callback_functions/) used to change these parameters make the solver take these discontinuities into account.
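As a small generic example of this mechanism (standard `DifferentialEquations.jl` usage, not Ribasim's actual control implementation; the equation and numbers are made up), the sketch below doubles a decay parameter at $t = 5$ through a `DiscreteCallback`, so that the solver places a step boundary exactly at the discontinuity instead of integrating across it.

```julia
using DifferentialEquations

# du/dt = -p[1] u, with p[1] changed discontinuously by a callback at t = 5.
rhs!(du, u, p, t) = (du[1] = -p[1] * u[1])

condition(u, t, integrator) = t == 5.0           # fires exactly at the tstop below
affect!(integrator) = (integrator.p[1] *= 2.0)   # the parameter jump
cb = DiscreteCallback(condition, affect!)

prob = ODEProblem(rhs!, [1.0], (0.0, 10.0), [0.3])
sol = solve(prob, QNDF(); callback = cb, tstops = [5.0])
```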
## Basin profiles
@@ -84,7 +89,7 @@ The basin profiles affect $\mathbf{f}$ in many ways, anywhere where a basin leve
## Q(h) relations
-`TabulatedRatingCurve` nodes contribute to $\mathbb{f}$ with terms of the following form:
+`TabulatedRatingCurve` nodes contribute to $\mathbf{f}$ with terms of the following form:
$$
Q(h(S))
@@ -93,3 +98,5 @@
where the continuity of this term is given by the least continuous of $Q$ and $h$.
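As a small illustration of this composition, with made-up tables and using `DataInterpolations.jl` purely as an example of piecewise linear interpolation:

```julia
using DataInterpolations

# Made-up tabulated data. Both relations are piecewise linear, so the composed
# term Q(h(S)) is continuous but has a discontinuous derivative at the break points.
S_table = [0.0, 100.0, 500.0, 1000.0]     # storage
h_table = [0.0, 0.5, 1.5, 2.0]            # level at those storages (basin profile)
h_of_S = LinearInterpolation(h_table, S_table)

h_knots = [0.0, 1.0, 2.0]
Q_knots = [0.0, 0.2, 1.0]                 # tabulated rating curve
Q_of_h = LinearInterpolation(Q_knots, h_knots)

Q_of_S(S) = Q_of_h(h_of_S(S))             # the term contributed to f
Q_of_S(250.0)
```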
## Reduction factors for soft landings
[Reduction factors](equations.qmd#the-reduction-factor) are introduced at several points in the definition of $\mathbf{f}$ to smooth out otherwise discontinuous transitions (e.g. the flow rate of a pump going to zero when the source basin dries out).
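A minimal sketch of the idea, using a simple linear ramp; the exact expression used by Ribasim is the one defined in [the equations documentation](equations.qmd#the-reduction-factor), not necessarily this one:

```julia
# Linear-ramp reduction factor: 0 at or below zero, 1 at or above the threshold,
# linear in between. Applied as a multiplicative factor, e.g. on a pump flow rate
# so that it goes to zero continuously as the source basin storage approaches zero.
reduction_factor(x, threshold) = clamp(x / threshold, 0.0, 1.0)

pump_capacity = 2.0                                       # m³/s (made up)
storage = 5.0                                             # m³ (made up)
flow = reduction_factor(storage, 10.0) * pump_capacity    # 1.0 m³/s
```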
