From 09ca37cca469645fb7a8f817dd958f7764a41f50 Mon Sep 17 00:00:00 2001 From: ArnoStrouwen Date: Sun, 8 Jan 2023 07:45:52 +0100 Subject: [PATCH] [skip ci] LanguageTool --- docs/src/comparisons/cppfortran.md | 32 ++++++++--------- docs/src/comparisons/matlab.md | 14 ++++---- docs/src/comparisons/python.md | 12 +++---- docs/src/comparisons/r.md | 8 ++--- docs/src/getting_started/find_root.md | 18 +++++----- .../src/getting_started/first_optimization.md | 20 +++++------ docs/src/getting_started/first_simulation.md | 27 +++++++------- docs/src/getting_started/fit_simulation.md | 26 +++++++------- docs/src/getting_started/getting_started.md | 2 +- docs/src/getting_started/installation.md | 6 ++-- docs/src/highlevels/array_libraries.md | 22 ++++++------ .../src/highlevels/developer_documentation.md | 16 ++++----- docs/src/highlevels/equation_solvers.md | 14 ++++---- docs/src/highlevels/function_approximation.md | 12 +++---- docs/src/highlevels/interfaces.md | 12 +++---- docs/src/highlevels/inverse_problems.md | 10 +++--- docs/src/highlevels/learning_resources.md | 4 +-- .../model_libraries_and_importers.md | 8 ++--- docs/src/highlevels/modeling_languages.md | 18 +++++----- docs/src/highlevels/numerical_utilities.md | 29 +++++++-------- docs/src/highlevels/parameter_analysis.md | 10 +++--- .../partial_differential_equation_solvers.md | 4 +-- docs/src/highlevels/symbolic_learning.md | 2 +- docs/src/highlevels/symbolic_tools.md | 6 ++-- .../highlevels/uncertainty_quantification.md | 12 +++---- docs/src/index.md | 2 +- docs/src/overview.md | 14 ++++---- docs/src/showcase/bayesian_neural_ode.md | 12 +++---- docs/src/showcase/brusselator.md | 10 +++--- docs/src/showcase/gpu_spde.md | 34 +++++++++--------- docs/src/showcase/massively_parallel_gpu.md | 12 +++---- docs/src/showcase/missing_physics.md | 30 ++++++++-------- docs/src/showcase/ode_types.md | 34 +++++++++--------- docs/src/showcase/pinngpu.md | 11 +++--- docs/src/showcase/symbolic_analysis.md | 36 ++++++++++--------- 35 files changed, 271 insertions(+), 268 deletions(-) diff --git a/docs/src/comparisons/cppfortran.md b/docs/src/comparisons/cppfortran.md index 9785d8afffa..55777cb8185 100644 --- a/docs/src/comparisons/cppfortran.md +++ b/docs/src/comparisons/cppfortran.md @@ -9,7 +9,7 @@ to help the transition. ## Why SciML? High-Level Workflow Reasons -If you're coming from "hardcore" C++/Fortran computing environments, some things to check +If you're coming from “hardcore” C++/Fortran computing environments, some things to check out with Julia's SciML are: * **Interactivity** - use the interactive REPL to easily investigate numerical details. @@ -21,7 +21,7 @@ out with Julia's SciML are: * **Symbolic modeling languages** - writing models by hand can leave a lot of performance on the table. Using high-level modeling tools like [ModelingToolkit](https://github.com/SciML/ModelingToolkit.jl) can automate symbolic - simplifications which + simplifications, which [improve the stability and performance of numerical solvers](https://www.youtube.com/watch?v=ZFoQihr3xLs). On complex models, even the best handwritten C++/Fortran code is orders of magnitude behind the code that symbolic tearing algorithms can achieve! @@ -29,7 +29,7 @@ out with Julia's SciML are: a silo. Arrays made for PETSc cannot easily be used in Trilinos, and converting Sundials NVector outputs to DataFrames for post-simulation data processing is a process itself. The Julia SciML environment embraces interoperability. 
Don't wait for SciML to do it: by
  using generic coding with JIT compilation, these connections create new optimized code on
  the fly and allow for a more expansive feature set than can ever be documented. Take
  [new high-precision number types from a package](https://github.com/JuliaArbTypes/ArbFloats.jl)
  and stick them into a nonlinear solver. Take
@@ -43,16 +43,16 @@ out with Julia's SciML are:
  one line of code. This gives you a way to incrementally adopt new features/methods
  while retaining the older pieces you know and trust.
* **Don't Start from Scratch** - SciML builds on the extensive
  [Base library of Julia](https://docs.julialang.org/en/v1/), and thus grows and improves
  with every update to the language. With hundreds of monthly contributors to SciML and
  hundreds of monthly contributors to Julia, SciML is one of the most actively developed
  open-source scientific computing ecosystems out there!
* **Easier High-Performance and Parallel Computing** - With Julia's ecosystem,
  [CUDA](https://github.com/JuliaGPU/CUDA.jl) will automatically install all of the required
  binaries, and `cu(A)*cu(B)` is then all that's required to GPU-accelerate large-scale
  linear algebra. [MPI](https://github.com/JuliaParallel/MPI.jl) is easy to install and use.
  [Distributed computing through password-less SSH](https://docs.julialang.org/en/v1/manual/distributed-computing/). [Multithreading](https://docs.julialang.org/en/v1/manual/multi-threading/)
  is automatic and baked into many libraries, with a specialized algorithm to ensure
  hierarchical usage does not oversubscribe threads. Basically, libraries give you a lot
  of parallelism for free, and doing the rest is a piece of cake.
* **Mix Scientific Computing with Machine Learning** - Want to [automate the discovery
@@ -68,7 +68,7 @@ solvers:

## Why SciML? Some Technical Details

Let's face the facts: in the [open benchmarks](https://benchmarks.sciml.ai/stable/), the
pure-Julia solvers tend to outperform the classic “best” C++ and Fortran solvers in almost
every example (with a few notable exceptions). But why? The answer is two-fold: Julia is
as fast as C++/Fortran, and the algorithms are what matter.

@@ -94,8 +94,8 @@
There are many ways in which Julia's algorithms achieve performance advantages. Some
highlights include:

* Julia is at the forefront of numerical methods research in many domains. This is highlighted
  in [the differential equation solver comparisons](https://www.stochasticlifestyle.com/comparison-differential-equation-solver-suites-matlab-r-julia-python-c-fortran/),
  where the Julia solvers were the first to incorporate “newer” optimized Runge-Kutta tableaus,
  around half a decade before other software. Since then, the literature has only continued
  to evolve, and only Julia's SciML keeps up.
At this point, many publications' first implementations are in
  [OrdinaryDiffEq.jl](https://github.com/SciML/OrdinaryDiffEq.jl) with
@@ -107,8 +107,8 @@ highlight include:
  However, in modern Julia, every function from `log` to `^` has been reimplemented in the
  Julia standard library to improve numerical correctness and performance. For example,
  [Pumas, the nonlinear mixed effects estimation system](https://www.biorxiv.org/content/10.1101/2020.11.28.402297v2)
  built on SciML, and
  [used by Moderna for the vaccine trials](https://www.youtube.com/watch?v=6wGSCD3cI9E),
  notes in its paper that approximations in such math libraries themselves gave a 2x performance
  improvement in even the simplest non-stiff ODE solvers over matching Fortran
  implementations. Pure Julia linear algebra tooling, like
@@ -123,11 +123,11 @@ highlight include:
  [sparsity patterns are automatically deduced from code and optimized on](https://openreview.net/pdf?id=rJlPdcY38B).
  [Nonlinear equations are symbolically-torn](https://www.youtube.com/watch?v=ZFoQihr3xLs),
  changing large nonlinear systems into sequential solving of much smaller systems
  and benefiting from an O(n^3) cost reduction. These can be orders-of-magnitude cost
  reductions which come for free, and unless you know every trick in the book, it will
  be difficult to match SciML's performance!
* Pervasive automatic differentiation mixed with compiler tricks wins battles. Many
  high-performance libraries in C++ and Fortran cannot assume that all of their code is
  compatible with automatic differentiation, and thus many internal performance tricks are
  not applied. For example,
  [ForwardDiff.jl's chunk seeding](https://github.com/JuliaDiff/ForwardDiff.jl/blob/master/docs/src/dev/how_it_works.md)
  allows a single call to `f` to generate multiple columns of a Jacobian. When mixed
  with [sparse coloring tools](https://github.com/JuliaDiff/SparseDiffTools.jl),
  entire Jacobians can be constructed with just a few `f` calls. Studies in applications
@@ -138,8 +138,8 @@

To really highlight how JIT compilation and automatic differentiation integration can
change algorithms, let's look at the problem of differentiating an ODE solver. As is
[derived and discussed in detail at a seminar with the American Statistical Association](https://www.youtube.com/watch?v=Xwh42RhB7O4),
there are many ways to implement well-known “adjoint” methods which are required for
performance. Each has different stability and performance trade-offs, and
[Julia's SciML is the only system to systematically offer all of the trade-off options](https://sensitivity.sciml.ai/stable/manual/differential_equation_sensitivities/).
In many cases, using analytical adjoints of a solver is not advised due to performance reasons, [with the @@ -153,7 +153,7 @@ showcased as the seeding methods in this plot: ![](https://i0.wp.com/www.stochasticlifestyle.com/wp-content/uploads/2022/10/Capture7.png?w=2091&ssl=1) -Unless one directly defines special "vjp" functions, this is how the Julia SciML methods +Unless one directly defines special “vjp” functions, this is how the Julia SciML methods achieve orders of magnitude performance advantages over CVODES's adjoints and PETSC's TS-adjoint. diff --git a/docs/src/comparisons/matlab.md b/docs/src/comparisons/matlab.md index 41b5e59d113..d340051c9e4 100644 --- a/docs/src/comparisons/matlab.md +++ b/docs/src/comparisons/matlab.md @@ -1,7 +1,7 @@ # [Getting Started with Julia's SciML for the MATLAB User](@id matlab) If you're a MATLAB user who has looked into Julia for some performance improvements, you -may have noticed that the standard library does not have all of the "batteries" included +may have noticed that the standard library does not have all of the “batteries” included with a base MATLAB installation. Where's the ODE solver? Where's `fmincon` and `fsolve`? Those scientific computing functionalities are the pieces provided by the Julia SciML ecosystem! @@ -15,13 +15,13 @@ ecosystem! grow as more complex algorithms are required. * **Julia is quick to learn from MATLAB** - Most ODE codes can be translated in a few minutes. If you need help, check out the - [QuantEcon MATLAB-Python-Julia Cheatsheet.](https://cheatsheets.quantecon.org/) + [QuantEcon MATLAB-Python-Julia Cheat Sheet.](https://cheatsheets.quantecon.org/) * **Package Management and Versioning** - [Julia's package manager](https://github.com/JuliaLang/Pkg.jl) takes care of dependency management, testing, and continuous delivery in order to make the installation and maintenance process smoother. For package users, this means it's easier to get packages with complex functionality in your hands. * **Free and Open Source** - If you want to know how things are being computed, just look - [at our Github organization](https://github.com/SciML). Lots of individuals use Julia's + [at our GitHub organization](https://github.com/SciML). Lots of individuals use Julia's SciML to research how the algorithms actually work because of how accessible and tweakable the ecosystem is! * **Composable Library Components** - In MATLAB environments, every package feels like @@ -37,7 +37,7 @@ ecosystem! binaries and `cu(A)*cu(B)` is then all that's required to GPU-accelerate large-scale linear algebra. [MPI](https://github.com/JuliaParallel/MPI.jl) is easy to install and use. [Distributed computing through password-less SSH](https://docs.julialang.org/en/v1/manual/distributed-computing/). [Multithreading](https://docs.julialang.org/en/v1/manual/multi-threading/) - is automatic and baked into a lot of libraries, with a specialized algorithm to ensure + is automatic and baked into many libraries, with a specialized algorithm to ensure hierarchical usage does not oversubscribe threads. Basically, libraries give you a lot of parallelism for free, and doing the rest is a piece of cake. * **Mix Scientific Computing with Machine Learning** - Want to [automate the discovery @@ -52,16 +52,16 @@ In this plot, `MATLAB` in orange represents MATLAB's most commonly used solvers: ## Need a case study? 
Check out [this talk from NASA Scientists getting a 15,000x acceleration by switching from
Simulink to Julia's ModelingToolkit!](https://www.youtube.com/watch?v=tQpqsmwlfY0)

## Need Help Translating from MATLAB to Julia?

The following resources can be particularly helpful when adopting Julia for SciML for the
first time:

* [QuantEcon MATLAB-Python-Julia Cheat Sheet](https://cheatsheets.quantecon.org/)
* [The Julia Manual's Noteworthy Differences from MATLAB page](https://docs.julialang.org/en/v1/manual/noteworthy-differences/#Noteworthy-differences-from-MATLAB)
* Double-check your results with [MATLABDiffEq.jl](https://github.com/SciML/MATLABDiffEq.jl)
  (automatically converts and runs ODE definitions with MATLAB's solvers)
* Use [MATLAB.jl](https://github.com/JuliaInterop/MATLAB.jl) to more incrementally move code
  to Julia.
diff --git a/docs/src/comparisons/python.md b/docs/src/comparisons/python.md
index f04ac82818e..d70d318a959 100644
--- a/docs/src/comparisons/python.md
+++ b/docs/src/comparisons/python.md
@@ -1,9 +1,9 @@
# [Getting Started with Julia's SciML for the Python User](@id python)

If you're a Python user who has looked into Julia, you're probably wondering what the
equivalent to SciPy is. And you found it: it's the SciML ecosystem! To a Python developer,
SciML is SciPy, but with the high-performance GPU capabilities of PyTorch and
neural network capabilities, all baked right in. With SciML, there is no “separate world”
of machine learning sublanguages: there is just one cohesive package ecosystem.

## Why SciML? High-Level Workflow Reasons
@@ -30,7 +30,7 @@ of machine learning sublanguages: there is just one cohesive package ecosystem.
  binaries and `cu(A)*cu(B)` is then all that's required to GPU-accelerate large-scale
  linear algebra. [MPI](https://github.com/JuliaParallel/MPI.jl) is easy to install and use.
  [Distributed computing through password-less SSH](https://docs.julialang.org/en/v1/manual/distributed-computing/). [Multithreading](https://docs.julialang.org/en/v1/manual/multi-threading/)
  is automatic and baked into many libraries, with a specialized algorithm to ensure
  hierarchical usage does not oversubscribe threads. Basically, libraries give you a lot
  of parallelism for free, and doing the rest is a piece of cake.
* **Mix Scientific Computing with Machine Learning** - Want to [automate the discovery
@@ -48,7 +48,7 @@ The following resources can be particularly helpful when adopting Julia for SciML for the
first time:

* [The Julia Manual's Noteworthy Differences from Python page](https://docs.julialang.org/en/v1/manual/noteworthy-differences/#Noteworthy-differences-from-Python)
* Double-check your results with [SciPyDiffEq.jl](https://github.com/SciML/SciPyDiffEq.jl)
  (automatically converts and runs ODE definitions with SciPy's solvers; see the sketch
  after this list)
* Use [PyCall.jl](https://github.com/JuliaPy/PyCall.jl) to more incrementally move code to
  Julia.
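To make that cross-checking workflow concrete, here is a minimal sketch, assuming the
packages above are installed and using a made-up one-equation ODE as the test model:

```julia
# Hypothetical cross-check: solve the same ODE natively and through SciPy.
using DifferentialEquations, SciPyDiffEq

f(u, p, t) = 1.01 * u                        # toy model: du/dt = 1.01u
prob = ODEProblem(f, 0.5, (0.0, 1.0))

sol_native = solve(prob, Tsit5())            # pure-Julia solver
sol_scipy = solve(prob, SciPyDiffEq.RK45())  # same problem through SciPy's RK45

# The end states should agree to solver tolerance.
abs(sol_native[end] - sol_scipy[end])
```

Because both solves share the same `ODEProblem` definition, validating a port this way
requires no duplicated model code.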
@@ -59,7 +59,7 @@ The following chart will help you get quickly acquainted with Julia's SciML Tool |Workflow Element|SciML-Supported Julia packages| | --- | --- | -|matplotlib|[Plots](https://docs.juliaplots.org/stable/), [Makie](https://docs.makie.org/stable/)| +|Matplotlib|[Plots](https://docs.juliaplots.org/stable/), [Makie](https://docs.makie.org/stable/)| |`scipy.special`|[SpecialFunctions](https://github.com/JuliaMath/SpecialFunctions.jl)| |`scipy.linalg.solve`|[LinearSolve](http://linearsolve.sciml.ai/dev/)| |`scipy.integrate`|[Integrals](https://integrals.sciml.ai/)| @@ -86,5 +86,5 @@ you use analytical adjoint definitions? You can, but there are tricks to mix aut differentiation into the adjoint definitions for a few orders of magnitude improvement too, as [explained in this blog post](https://www.stochasticlifestyle.com/direct-automatic-differentiation-of-solvers-vs-analytical-adjoints-which-is-better/). -This facts, along with many others, compose to algorithmic improvements with the +These facts, along with many others, compose to algorithmic improvements with the implementation improvements, which leads to orders of magnitude improvements! diff --git a/docs/src/comparisons/r.md b/docs/src/comparisons/r.md index d36070e2fad..155708d147d 100644 --- a/docs/src/comparisons/r.md +++ b/docs/src/comparisons/r.md @@ -21,8 +21,8 @@ is the ecosystem for doing this with Julia. the differential equation solver to use specialized hardware acceleration. * **A Global Harmonious Documentation for Scientific Computing** - R's documentation for scientific computing is scattered in a bunch of individual packages where the developers - do not talk to each other! This not only leads to documentation differences but also - "style" differences: one package uses `tol` while the other uses `atol`. With Julia's + do not talk to each other! This not only leads to documentation differences, but also + “style” differences: one package uses `tol` while the other uses `atol`. With Julia's SciML, the whole ecosystem is considered together, and inconsistencies are handled at the global level. The goal is to be working in one environment with one language. * **Easier High-Performance and Parallel Computing** - With Julia's ecosystem, @@ -30,7 +30,7 @@ is the ecosystem for doing this with Julia. binaries and `cu(A)*cu(B)` is then all that's required to GPU-accelerate large-scale linear algebra. [MPI](https://github.com/JuliaParallel/MPI.jl) is easy to install and use. [Distributed computing through password-less SSH](https://docs.julialang.org/en/v1/manual/distributed-computing/). [Multithreading](https://docs.julialang.org/en/v1/manual/multi-threading/) - is automatic and baked into a lot of libraries, with a specialized algorithm to ensure + is automatic and baked into many libraries, with a specialized algorithm to ensure hierarchical usage does not oversubscribe threads. Basically, libraries give you a lot of parallelism for free, and doing the rest is a piece of cake. * **Mix Scientific Computing with Machine Learning** - Want to [automate the discovery @@ -50,7 +50,7 @@ first time: * [The Julia Manual's Noteworthy Differences from R page](https://docs.julialang.org/en/v1/manual/noteworthy-differences/#Noteworthy-differences-from-R) * [Tutorials on Data Wrangling and Plotting in Julia (Sections 4 and 5)](http://tutorials.pumas.ai/) written for folks with a background in R. 
-* Double check your results with [deSolveDiffEq.jl](https://github.com/SciML/deSolveDiffEq.jl) +* Double-check your results with [deSolveDiffEq.jl](https://github.com/SciML/deSolveDiffEq.jl) (automatically converts and runs ODE definitions with R's deSolve solvers) * Use [RCall.jl](https://juliainterop.github.io/RCall.jl/stable/) to more incrementally move code to Julia. diff --git a/docs/src/getting_started/find_root.md b/docs/src/getting_started/find_root.md index e8d7989d9fa..1c05eae9d5c 100644 --- a/docs/src/getting_started/find_root.md +++ b/docs/src/getting_started/find_root.md @@ -59,11 +59,11 @@ sol = solve(prob,NewtonRaphson()) @show sol.u, prob.f(sol.u,prob.p) ``` -## Step by Step Solution +## Step-by-Step Solution ### Step 1: Import the Packages -To do this tutorial we will need a few components: +To do this tutorial, we will need a few components: * [ModelingToolkit.jl, our modeling environment](https://docs.sciml.ai/ModelingToolkit/stable/) * [NonlinearSolve.jl, the nonlinear system solvers](https://docs.sciml.ai/NonlinearSolve/stable/) @@ -106,9 +106,9 @@ This looks as follows: !!! note Note that in ModelingToolkit and Symbolics, `~` is used for equation equality. This is - separate from `=` which is the "assignment operator" in the Julia programming language. + separate from `=` which is the “assignment operator” in the Julia programming language. For example, `x = x + 1` is a valid assignment in a programming language, and it is - invalid for that to represent "equality", which is the reason why a separate operator + invalid for that to represent “equality”, which is why a separate operator is used! ```@example first_rootfind @@ -117,7 +117,7 @@ eqs = [0 ~ σ*(y-x), 0 ~ x*y - β*z] ``` -Finally we bring these pieces together, the equation along with its states and parameters, +Finally, we bring these pieces together, the equation along with its states and parameters, define our `NonlinearSystem`: ```@example first_rootfind @@ -140,7 +140,7 @@ blank override `[]`. This looks like: prob = NonlinearProblem(ns,[]) ``` -If for example we did want to change the initial condition of `x` +If we did want to change the initial condition of `x` to `2.0` and the parameter `σ` to `4.0`, we would do `[x => 2.0, σ => 4.0]`. This looks like: @@ -150,7 +150,7 @@ prob2 = NonlinearProblem(ns,[x => 2.0, σ => 4.0]) ### Step 4: Solve the Numerical Problem -Now we solve the nonlinear system. For this we choose a solver from the +Now we solve the nonlinear system. For this, we choose a solver from the [NonlinearSolve.jl's solver options.](https://docs.sciml.ai/NonlinearSolve/stable/solvers/NonlinearSystemSolvers/) We will choose `NewtonRaphson` as follows: @@ -168,11 +168,11 @@ see that by asking for its type: typeof(sol) ``` -From this we can see that it is an `NonlinearSolution`. We can see the documentation for +From this, we can see that it is an `NonlinearSolution`. We can see the documentation for how to use the `NonlinearSolution` by checking the [NonlinearSolve.jl solution type page.](https://docs.sciml.ai/NonlinearSolve/stable/basics/NonlinearSolution/) For example, the solution is stored as `.u`. -What is the solution to our nonlinear system and what is the final residual value? +What is the solution to our nonlinear system, and what is the final residual value? 
We can check it as follows: ```@example first_rootfind diff --git a/docs/src/getting_started/first_optimization.md b/docs/src/getting_started/first_optimization.md index 3dc90dbbd75..df404b3c40b 100644 --- a/docs/src/getting_started/first_optimization.md +++ b/docs/src/getting_started/first_optimization.md @@ -30,7 +30,7 @@ L(u,p) = (p_1 - u_1)^2 + p_2 * (u_2 - u_1)^2 What we want to do is find the values of ``u_1`` and ``u_2`` such that ``L`` achieves its minimum value possible. We will do this under a few constraints: -we want to find this optima within some bounded domain, i.e. ``u_i \in [-1,1]``. +we want to find this optimum within some bounded domain, i.e. ``u_i \in [-1,1]``. This should be done with the parameter values ``p_1 = 1.0`` and ``p_2 = 100.0``. What should ``u = [u_1,u_2]`` be to achieve this goal? Let's dive in! @@ -58,20 +58,20 @@ sol = solve(prob,NLopt.LD_LBFGS()) @show sol.u, L(sol.u,p) ``` -## Step by Step Solution +## Step-by-Step Solution ### Step 1: Import the packages -To do this tutorial we will need a few components: +To do this tutorial, we will need a few components: * [Optimization.jl](https://docs.sciml.ai/Optimization/stable/), the optimization interface. * [OptimizationNLopt.jl](https://docs.sciml.ai/Optimization/stable/optimization_packages/nlopt/), the optimizers we will use. Note that Optimization.jl is an interface for optimizers, and thus we always have to choose which optimizer we want to use. Here we choose to demonstrate `OptimizationNLopt` because -of its efficiency and versitility. But there are many other possible choices. Check out +of its efficiency and versatility. But there are many other possible choices. Check out the -[solver compatability chart](https://docs.sciml.ai/Optimization/stable/#Overview-of-the-Optimizers) +[solver compatibility chart](https://docs.sciml.ai/Optimization/stable/#Overview-of-the-Optimizers) for a quick overview of what optimizer packages offer. To start, let's add these packages [as demonstrated in the installation tutorial](@ref installation): @@ -100,7 +100,7 @@ L(u,p) = (p[1] - u[1])^2 + p[2] * (u[2] - u[1]^2)^2 Now we need to define our `OptimizationProblem`. If you need help remembering how to define the `OptimizationProblem`, you can always refer to the -[Optimization.jl problem definition page](https://docs.sciml.ai/Optimization/stable/API/optimization_problem/) +[Optimization.jl problem definition page](https://docs.sciml.ai/Optimization/stable/API/optimization_problem/). Thus what we need to define is an initial condition `u0` and our parameter vector `p`. We will make our initial condition have both values as zero, which is done by the Julia @@ -129,10 +129,10 @@ prob = OptimizationProblem(L, u0, p, lb = -1 * ones(2), ub = ones(2)) Now we solve the `OptimizationProblem` that we have defined. This is done by passing our `OptimizationProblem` along with a chosen solver to the `solve` command. At -the beginning we explained that we will use the `OptimizationNLopt` set of solvers, which +the beginning, we explained that we will use the `OptimizationNLopt` set of solvers, which are [documented in the OptimizationNLopt page](https://docs.sciml.ai/Optimization/stable/optimization_packages/nlopt/). -From here we are choosing the `NLopt.LD_LBFGS()` for its mixture of robustness and +From here, we are choosing the `NLopt.LD_LBFGS()` for its mixture of robustness and performance. 
To perform this solve, we do the following: ```@example first_opt @@ -149,11 +149,11 @@ see that by asking for its type: typeof(sol) ``` -From this we can see that it is an `OptimizationSolution`. We can see the documentation for +From this, we can see that it is an `OptimizationSolution`. We can see the documentation for how to use the `OptimizationSolution` by checking the [Optimization.jl solution type page](https://docs.sciml.ai/Optimization/stable/API/optimization_solution/). For example, the solution is stored as `.u`. What is the solution to our -optimization and what is the final loss value? We can check it as follows: +optimization, and what is the final loss value? We can check it as follows: ```@example first_opt # Analyze the solution diff --git a/docs/src/getting_started/first_simulation.md b/docs/src/getting_started/first_simulation.md index 42f58fe1f26..81ac46e4d90 100644 --- a/docs/src/getting_started/first_simulation.md +++ b/docs/src/getting_started/first_simulation.md @@ -1,6 +1,6 @@ # [Build and run your first simulation with Julia's SciML](@id first_sim) -In this tutorial we will build and run our first simulation with SciML! +In this tutorial, we will build and run our first simulation with SciML! !!! note @@ -87,11 +87,11 @@ p2 = plot(sol,idxs=z,title = "Total Animals") plot(p1,p2,layout=(2,1)) ``` -## Step by Step Solution +## Step-by-Step Solution ### Step 1: Install and Import the Required Packages -To do this tutorial we will need a few components: +To do this tutorial, we will need a few components: * [ModelingToolkit.jl, our modeling environment](https://docs.sciml.ai/ModelingToolkit/stable/) * [DifferentialEquations.jl, the differential equation solvers](https://docs.sciml.ai/DiffEqDocs/stable/) @@ -136,16 +136,17 @@ for parameters, where the default value is now the parameter value: that into the α symbol. That can make your code look a lot more like the mathematical expressions! -Next we define our set of differential equations. We need to first tell it what to -differentiate with respect to, here the independent variable `t`, do define the `Differential` -operator `D`. Then once we have the operator, we apply that into the equations. +Next, we define our set of differential equations. +To define the `Differential` operator `D`, we need to first tell it what to +differentiate with respect to, here the independent variable `t`, +Then, once we have the operator, we apply that into the equations. !!! note Note that in ModelingToolkit and Symbolics, `~` is used for equation equality. This is - separate from `=` which is the "assignment operator" in the Julia programming language. + separate from `=` which is the “assignment operator” in the Julia programming language. For example, `x = x + 1` is a valid assignment in a programming language, and it is - invalid for that to represent "equality", which is the reason why a separate operator + invalid for that to represent “equality”, which is why a separate operator is used! ```@example first_sim @@ -160,7 +161,7 @@ eqs = [ ] ``` -Notice that in the display it will automatically generate LaTeX. If one is interested in +Notice that in the display, it will automatically generate LaTeX. If one is interested in generating this LaTeX locally, one can simply do: ```julia @@ -191,10 +192,10 @@ simpsys = structural_simplify(sys) ``` Notice that what is returned is another `ODESystem`, but now with the simplified set of -equations. `z` has been turned into an "observable", i.e. a state that is not computed +equations. 
`z` has been turned into an “observable”, i.e. a state that is not computed
but can be constructed on-demand. This is one of the ways that SciML reaches its speed:
you can have 100,000 equations, but solve only 1,000 to then automatically reconstruct
the full set. Here, it's just 3 equations to 2, but as models get more complex, the
symbolic system will find ever more clever interactions!

Now that we have simplified our system, let's turn it into a numerical problem to
@@ -208,7 +209,7 @@ representation. We need to tell it the numerical details now:

In this case, we will use the default values for all our variables, so we will pass a
blank override `[]`. If, for example, we did want to change the initial condition of `x`
to `2.0` and `α` to `4.0`, we would do `[x => 2.0, α => 4.0]`. Then secondly, we pass a
tuple for the time span, `(0.0,10.0)` meaning start at `0.0` and end at `10.0`. This
looks like:
@@ -271,7 +272,7 @@ And tada, we have a full analysis of our ecosystem!

## Bonus Step: Emoji Variables

If you made it this far, then congrats, you get to learn a fun fact! Since Julia code can
use Unicode, emojis work for variable names. Here's the simulation using emojis of rabbits
and wolves to define the system:
diff --git a/docs/src/getting_started/fit_simulation.md b/docs/src/getting_started/fit_simulation.md
index a412d93eabf..12fe09be964 100644
--- a/docs/src/getting_started/fit_simulation.md
+++ b/docs/src/getting_started/fit_simulation.md
@@ -28,7 +28,7 @@ Along with the following general ecosystem packages:

Assume that we know that the dynamics of our system are given by the
[Lotka-Volterra dynamical system](https://en.wikipedia.org/wiki/Lotka%E2%80%93Volterra_equations):
Let $x(t)$ be the number of rabbits in the environment and $y(t)$ be the number of wolves.
This is the same dynamical system as [the first tutorial!](@ref first_sim)
The equation that defines the evolution of the species is given as follows:
@@ -41,7 +41,7 @@ The equation that defines the evolution of the species is given as follows:

where ``\alpha, \beta, \gamma, \delta`` are parameters. Starting from equal numbers of
rabbits and wolves, ``x(0) = 1`` and ``y(0) = 1``.

Now, in [the first tutorial](@ref first_sim), we assumed:

> Luckily, a local guide provided us with some parameters that seem to match the system!

@@ -107,11 +107,11 @@ result_ode = Optimization.solve(optprob, PolyOpt(),
                                maxiters = 200)
```

## Step-by-Step Solution

### Step 1: Install and Import the Required Packages

To do this tutorial, we will need a few components.
This is done using the Julia Pkg REPL:

```julia
]add DifferentialEquations, Optimization, OptimizationPolyalgorithms, SciMLSensitivity, ForwardDiff, Plots
```

```julia
using ForwardDiff, Plots
```

### Step 2: Generate the Training Data

In our example, we assumed that we had data representative of the solution with
``\alpha = 1.5``, ``\beta = 1.0``, ``\gamma = 3.0``, and ``\delta = 1.0``. Let's make that
training data. The way we can do that is by defining the ODE with those parameters and
simulating it. Unlike [the first tutorial](@ref first_sim), which used ModelingToolkit,
let's demonstrate using [DifferentialEquations.jl](https://docs.sciml.ai/DiffEqDocs/stable/)
to directly define the ODE for the numerical solvers.
@@ -179,9 +179,9 @@ data = Array(datasol)

For more details on using DifferentialEquations.jl, check out the
[getting started with DifferentialEquations.jl tutorial](https://docs.sciml.ai/DiffEqDocs/stable/getting_started/).

### Step 3: Set Up the Cost Function for Optimization

Now let's start the estimation process. First, let's define a loss function. For our loss function, we want to
take a set of parameters, create a new ODE which has everything the same except for the
changed parameters, solve this ODE with new parameters, and compare its predictions against the data. For this
parameter changing, there is a useful functionality in the
@@ -213,8 +213,8 @@ new parameters) as extra return arguments. We will explain why this extra return

### Step 4: Solve the Optimization Problem

This step will look very similar to [the first optimization tutorial](@ref first_opt), except now we have a new
cost function. Just like in that tutorial, we want to define a callback to monitor the solution process. However,
this time, our function returns two things. The callback syntax is always `(value being optimized, arguments of loss return)`
and thus this time the callback is given `(p, l, sol)`. See, returning the solution along with the loss as part of the
loss function is useful because we have access to it in the callback to do things like plot the current solution
against the data! Let's do that in the following way:
@@ -235,9 +235,9 @@ Thus every step of the optimization will show us the loss and a plot of how the
looks vs the data at our current parameters.

Now, just like [the first optimization tutorial](@ref first_opt), we set up our optimization
problem. To do this, we need to come up with a `pguess`, an initial condition for the parameters
which is our best guess of the true parameters. For this, we will use `pguess = [1.0, 1.2, 2.5, 1.2]`.
+Together, this looks like: ```@example odefit diff --git a/docs/src/getting_started/getting_started.md b/docs/src/getting_started/getting_started.md index 0fbd84e7e96..2770a53590d 100644 --- a/docs/src/getting_started/getting_started.md +++ b/docs/src/getting_started/getting_started.md @@ -7,7 +7,7 @@ Julia's SciML is: * SciPy or MATLAB's standard library but in Julia, but * Runs orders of magnitude faster, even outperforms C and Fortran libraries, and * Is fully compatible with machine learning and automatic differentiation, -* All while having an easy to use high level interactive development environment. +* All while having an easy-to-use high level interactive development environment. Interested? diff --git a/docs/src/getting_started/installation.md b/docs/src/getting_started/installation.md index 013a8ab05bb..d06f98564ac 100644 --- a/docs/src/getting_started/installation.md +++ b/docs/src/getting_started/installation.md @@ -32,8 +32,8 @@ Next, open Visual Studio Code and click Extensions. ![](https://user-images.githubusercontent.com/1814174/195773680-2226b2fc-5903-4eff-aae2-4d8689f16280.PNG) -Then, search for "Julia" in the search bar on the top of the extension tab, click on the -"Julia" extension, and click the install button on the tab that opens up. +Then, search for “Julia” in the search bar on the top of the extension tab, click on the +“Julia” extension, and click the install button on the tab that opens up. ![](https://user-images.githubusercontent.com/1814174/195773697-ede4edee-d479-46e8-acce-94a3ff884de8.PNG) @@ -59,7 +59,7 @@ Once again, if you got stuck in this installation process, ask for help SciML is [over 130 Julia packages](https://github.com/SciML). That's too much stuff to give someone in a single download! Thus instead, the SciML organization divides its functionality into composable modules that can be mixed and matched as required. Installing -SciML ecosystem functionality is equivalent to installation such packages. +SciML ecosystem functionality is equivalent to installation of such packages. For example, do you need the differential equation solver? Then install DifferentialEquations via the command: diff --git a/docs/src/highlevels/array_libraries.md b/docs/src/highlevels/array_libraries.md index 4e31fea157c..a6eaef051c2 100644 --- a/docs/src/highlevels/array_libraries.md +++ b/docs/src/highlevels/array_libraries.md @@ -2,7 +2,7 @@ ## RecursiveArrayTools.jl: Arrays of Arrays and Even Deeper -Sometimes when one is creating a model, basic array types are not enough for expressing +Sometimes, when one is creating a model, basic array types are not enough for expressing a complex concept. RecursiveArrayTools.jl gives many types, such as `VectorOfArray` and `ArrayPartition`, which allow for easily building nested array models in a way that conforms to the standard `AbstractArray` interface. While standard `Vector{Array{Float64,N}}` @@ -15,7 +15,7 @@ the timeseries solution types being `AbstractVectorOfArray`. ## LabelledArrays.jl: Named Variables in Arrays without Overhead -Sometimes you want to use a full domain-specific language like +Sometimes, you want to use a full domain-specific language like [ModelingToolkit](https://github.com/SciML/ModelingToolkit.jl). Other times, you wish arrays just had a slightly nicer syntax. Don't you wish you could write the Lorenz equations like: @@ -28,12 +28,12 @@ end ``` without losing any efficiency? [LabelledArrays.jl](https://github.com/SciML/LabelledArrays.jl) -provides the array types to do just that. 
All of the `.` accesses are resolved at compile time,
so it's a [zero-overhead interface](https://www.stochasticlifestyle.com/zero-cost-abstractions-in-julia-indexing-vectors-by-name-with-labelledarrays/).

!!! note

    We recommend using ComponentArrays.jl for any instance where nested accesses are required,
    or where the `.` accesses need to be views to subsets of the array.

## MultiScaleArrays.jl: Multiscale Modeling to Compose with Equation Solvers

@@ -42,7 +42,7 @@
How do you encode such real-world structures in a manner that is compatible with the
SciML equation solver libraries? [MultiScaleArrays.jl](https://github.com/SciML/MultiScaleArrays.jl) is
an answer. MultiScaleArrays.jl gives a highly flexible interface for defining multi-level types,
which generates a corresponding interface as an `AbstractArray`. MultiScaleArrays.jl's flexibility
includes the ease of resizing, allowing for models where the number of equations grows and shrinks
as agents (cells) in the model divide and die.

!!! note

    We recommend using ComponentArrays.jl instead in any instance where the resizing functionality
    is not used.

## Third-Party Libraries to Note

## ComponentArrays.jl: Arrays with Arbitrarily Nested Named Components

with zero-overhead? This is what [ComponentArrays.jl](https://github.com/jonniedie/ComponentArrays.jl)
provides, and as such it is one of the top recommendations of `AbstractArray` types to be used. Multi-level
definitions such as `x = ComponentArray(a=5, b=[(a=20., b=0), (a=33., b=0), (a=44., b=3)], c=c)` are
commonplace, and allow for accessing via `x.b.a` etc. without any performance loss. `ComponentArrays`
are fully compatible with the SciML equation solvers. They thus can be used as initial conditions. Here's a
demonstration of the Lorenz equation using ComponentArrays with Parameters.jl's `@unpack`:

```julia
lorenz_ic = ComponentArray(x=0.0, y=0.0, z=0.0)
lorenz_prob = ODEProblem(lorenz!, lorenz_ic, tspan, lorenz_p)
```

Is that beautiful? Yes, it is.

## StaticArrays.jl: Statically-Defined Arrays

but for larger equations they increase compile time with little to no benefit.

## CUDA.jl: NVIDIA CUDA-Based GPU Array Computations

[CUDA.jl](https://github.com/JuliaGPU/CUDA.jl) is the library for defining arrays which live
on NVIDIA GPUs (`CuArray`). SciML's libraries will respect the GPU-ness of the inputs, i.e.,
if the input arrays live on the GPU, then the operations will all take place on the GPU,
or else the libraries will error if they're unable to do so. Thus, using CUDA.jl's `CuArray` is
how one GPU-accelerates any computation with the SciML organization's libraries. Simply use
a `CuArray` as the initial condition to an ODE solve or as the initial guess for a nonlinear
solve, and the whole solve will recompile to take place on the GPU.
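As a minimal sketch of that workflow, assuming a CUDA-capable machine and using a made-up
linear model (any GPU-compatible array operations would do):

```julia
using OrdinaryDiffEq, CUDA

# Hypothetical model: du/dt = A*u with everything resident on the GPU.
A = cu(randn(Float32, 100, 100))  # move the operator to the GPU
u0 = cu(rand(Float32, 100))       # GPU-resident initial condition
f(u, p, t) = A * u                # matrix-vector products run on the device

prob = ODEProblem(f, u0, (0.0f0, 1.0f0))
sol = solve(prob, Tsit5())        # the solver specializes on the CuArray type
```

No solver-specific GPU code is needed; the specialization comes entirely from the array type
of the initial condition.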
## AMDGPU.jl: AMD-Based GPU Array Computations @@ -117,7 +117,7 @@ if the input arrays live on the GPU then the operations will all take place on t or else the libraries will error if it's unable to do so. Thus using AMDGPU.jl's `ROCArray` is how one GPU-accelerates any computation with the SciML organization's libraries. Simply use a `ROCArray` as the initial condition to an ODE solve or as the initial guess for a nonlinear -solve and the whole solve will recompile to take place on the GPU. +solve, and the whole solve will recompile to take place on the GPU. ## FillArrays.jl: Lazy Arrays diff --git a/docs/src/highlevels/developer_documentation.md b/docs/src/highlevels/developer_documentation.md index 3ebf2028c5b..67f6691d0b5 100644 --- a/docs/src/highlevels/developer_documentation.md +++ b/docs/src/highlevels/developer_documentation.md @@ -1,6 +1,6 @@ # Developer Documentation -For uniformity and clarity, the SciML Open Source Software Organization has many +For uniformity and clarity, the SciML Open-Source Software Organization has many well-defined rules and practices for its development. However, we stress one important principle: @@ -49,7 +49,7 @@ StochasticDiffEq.jl, and DelayDiffEq.jl. This section of the documentation descr internal systems of these packages and how they are used to quickly write efficient solvers. -# Third Party Libraries to Note +# Third-Party Libraries to Note ## Documenter.jl @@ -73,15 +73,15 @@ JuliaFormatter.format(pkgdir(DevedPackage)) which will reformat the code according to the SciML Style. -## Github Actions Continuous Integrations +## GitHub Actions Continuous Integrations The SciML Organization uses continuous integration testing to always ensure tests are passing when merging -pull requests. The organization uses the Github Actions supplied by [Julia Actions](https://github.com/julia-actions) +pull requests. The organization uses the GitHub Actions supplied by [Julia Actions](https://github.com/julia-actions) to accomplish this. Common continuous integration scripts are: - CI.yml, the standard CI script - Downstream.yml, used to specify packages for downstream testing. This will make packages which depend on the current - package also be tested to ensure that "non-breaking changes" do not actually break other packages. + package also be tested to ensure that “non-breaking changes” do not actually break other packages. - Documentation.yml, used to run the documentation automatic generation with Documenter.jl - FormatCheck.yml, used to check JuliaFormatter SciML Style compliance @@ -90,9 +90,9 @@ to accomplish this. Common continuous integration scripts are: [CompatHelper](https://github.com/JuliaRegistries/CompatHelper.jl) is used to automatically create pull requests whenever a dependent package is upper bounded. The results of CompatHelper PRs should be checked to ensure that the latest version of the dependencies are grabbed for the test process. After successful CompatHelper PRs, i.e. if the increase of the upper -bound did not cause a break to the tests, a new version tag should follow. It is setup by adding the CompatHelper.yml Github action. +bound did not cause a break to the tests, a new version tag should follow. It is set up by adding the CompatHelper.yml GitHub action. ## TagBot -[TagBot](https://github.com/JuliaRegistries/TagBot) automatically creates tags in the Github repository whenever a package -is registered to the Julia General repository. It is setup by adding the TagBot.yml Github action. 
+[TagBot](https://github.com/JuliaRegistries/TagBot) automatically creates tags in the GitHub repository whenever a package +is registered to the Julia General repository. It is set up by adding the TagBot.yml GitHub action. diff --git a/docs/src/highlevels/equation_solvers.md b/docs/src/highlevels/equation_solvers.md index 45061a14e06..b4d60ba668f 100644 --- a/docs/src/highlevels/equation_solvers.md +++ b/docs/src/highlevels/equation_solvers.md @@ -33,7 +33,7 @@ for solving `NonlinearProblem`s. It includes: - Fast non-allocating implementations on static arrays of common methods (Newton-Rhapson) - Bracketing methods (Bisection, Falsi) for methods with known upper and lower bounds (`IntervalNonlinearProblem`) - Wrappers to common other solvers (NLsolve.jl, MINPACK, KINSOL from Sundials) for trust - region methods, line search based approaches, etc. + region methods, line search-based approaches, etc. - Built over the LinearSolve.jl API for maximum flexibility and performance in the solving approach - Compatible with arbitrary AbstractArray and Number types, such as GPU-based arrays, uncertainty @@ -62,7 +62,7 @@ for solving `DEProblem`s. This includes: The well-optimized DifferentialEquations solvers benchmark as some of the fastest implementations of classic algorithms. It also includes algorithms from recent -research which routinely outperform the "standard" C/Fortran methods, and algorithms +research which routinely outperform the “standard” C/Fortran methods, and algorithms optimized for high-precision and HPC applications. Simultaneously, it wraps the classic C/Fortran methods, making it easy to switch over to them whenever necessary. Solving differential equations with different methods from @@ -137,7 +137,7 @@ including: - `RDirect`: A variant of Gillespie's Direct method that uses rejection to sample the next reaction. - *`DirectCR`*: The Composition-Rejection Direct method of Slepoy et al. For - large networks and linear chain-type networks it will often give better + large networks and linear chain-type networks, it will often give better performance than `Direct`. (Requires dependency graph, see below.) - `DirectFW`: the Gillespie Direct method SSA with `FunctionWrappers`. This aggregator uses a different internal storage format for collections of @@ -146,13 +146,13 @@ including: offer better performance and be preferred to `FRM`. - `FRMFW`: the Gillespie first reaction method SSA with `FunctionWrappers`. - *`NRM`*: The Gibson-Bruck Next Reaction Method. For some reaction network - structures this may offer better performance than `Direct` (for example, + structures, this may offer better performance than `Direct` (for example, large, linear chains of reactions). (Requires dependency graph, see below.) - *`RSSA`*: The Rejection SSA (RSSA) method of Thanh et al. With `RSSACR`, for - very large reaction networks it often offers the best performance of all + very large reaction networks, it often offers the best performance of all methods. (Requires dependency graph, see below.) - *`RSSACR`*: The Rejection SSA (RSSA) with Composition-Rejection method of - Thanh et al. With `RSSA`, for very large reaction networks it often offers the + Thanh et al. With `RSSA`, for very large reaction networks, it often offers the best performance of all methods. (Requires dependency graph, see below.) - *`SortingDirect`*: The Sorting Direct Method of McCollum et al. 
It will usually offer
  performance as good as `Direct`, and for some systems can offer dramatically better
  performance.

These methods can additionally be coupled to the differential equation solvers to handle
jump-diffusions and differential equations driven by Levy processes.

In addition, JumpProcesses's interfaces allow for solving with regular jump methods, such as
adaptive Tau-Leaping.

# Third-Party Libraries to Note

## JuMP.jl: Julia for Mathematical Programming

diff --git a/docs/src/highlevels/function_approximation.md b/docs/src/highlevels/function_approximation.md
index 2a3056a484b..d3577d08cd1 100644
--- a/docs/src/highlevels/function_approximation.md
+++ b/docs/src/highlevels/function_approximation.md
@@ -34,7 +34,7 @@
doing machine learning using reservoir computing techniques, such as with methods like Echo
State Networks (ESNs). Its reservoir computing methods make it well-suited for usage with
difficult equations like stiff dynamics, chaotic equations, and more.

# Third-Party Libraries to Note

## Flux.jl: the ML library that doesn't make you tensor

@@ -46,8 +46,8 @@ compatibility.

## Lux.jl: Explicitly Parameterized Neural Networks in Julia

[Lux.jl](https://github.com/avik-pal/Lux.jl) is a library for fully explicitly parameterized
neural networks. Thus, while alternative interfaces are required to use Flux with many equation
solvers (i.e. `Flux.destructure`), Lux.jl's explicit design marries effortlessly with the
SciML equation solver libraries. For this reason, SciML's libraries are also heavily tested with
Lux to ensure compatibility with neural network definitions from here.

## SimpleChains.jl: Fast Small-Scale Machine Learning

[SimpleChains.jl](https://github.com/PumasAI/SimpleChains.jl) is a library specialized for
small-scale machine learning. It uses non-allocating mutating forms to be highly efficient
for the cases where matrix multiplication kernels cannot overcome the common overheads
of machine learning libraries. Thus, for SciML cases with small neural networks (<100 node layers)
and non-batched usage (many/most use cases), SimpleChains.jl can be the fastest choice for the
neural network definitions.

@@ -92,12 +92,12 @@ MNIST data, MLDatasets is the quickest way to obtain it.

making writing common machine learning pipelines easier. This includes functionality for
(a short usage sketch follows the list):

- An extensible dataset interface (`numobs` and `getobs`).
- Data iteration and data loaders (`eachobs` and `DataLoader`).
- Lazy data views (`obsview`).
- Resampling procedures (`undersample` and `oversample`).
- Train/test splits (`splitobs`).
- Data partitioning and aggregation tools (`batch`, `unbatch`, `chunk`, `group_counts`, `group_indices`).
- Folds for cross-validation (`kfolds`, `leavepout`).
- Datasets lazy transformations (`mapobs`, `filterobs`, `groupobs`, `joinobs`, `shuffleobs`).
- Toy datasets for demonstration purposes.
- Other data handling utilities (`flatten`, `normalise`, `unsqueeze`, `stack`, `unstack`).
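Here is the promised sketch of a typical MLUtils.jl pipeline; the dataset is randomly
generated placeholder data, and the split ratio and batch size are arbitrary choices:

```julia
using MLUtils

# Hypothetical dataset: 100 observations of 10 features with scalar labels.
# MLUtils treats the last array dimension as the observation dimension.
X = rand(Float32, 10, 100)
y = rand(Float32, 100)

# Train/test split, then mini-batch iteration with shuffling.
(X_train, y_train), (X_test, y_test) = splitobs((X, y); at = 0.8)

for (xb, yb) in DataLoader((X_train, y_train); batchsize = 16, shuffle = true)
    # each xb is a 10×16 feature batch (the final batch may be smaller)
end
```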
diff --git a/docs/src/highlevels/interfaces.md b/docs/src/highlevels/interfaces.md
index 6ec7ef1f3d6..5f59ffdff4d 100644
--- a/docs/src/highlevels/interfaces.md
+++ b/docs/src/highlevels/interfaces.md
@@ -5,7 +5,7 @@
[SciMLBase.jl](https://github.com/SciML/SciMLBase.jl) defines the core interfaces
of the SciML libraries, such as the definitions of abstract types like `SciMLProblem`,
along with their instantiations like `ODEProblem`. While SciMLBase.jl is insufficient
to solve any equations, it holds all the equation definitions, and thus downstream
libraries which wish to allow for using SciML solvers without depending on any solvers
can directly depend on SciMLBase.jl.

@@ -24,9 +24,9 @@ SciML ecosystem.

## CommonSolve.jl: The Common Definition of Solve

[CommonSolve.jl](https://github.com/SciML/CommonSolve.jl) is the library that defines
the `solve`, `solve!`, and `init` interfaces which are used throughout all the SciML
equation solvers. It's defined as an extremely lightweight library so that other
ecosystems can build on the same `solve` definition without clashing with SciML
when both export.

## Static.jl: A Shared Interface for Static Compile-Time Computation

DifferentialEquations.jl ecosystem. It's not intended for non-developer users to
interact with directly; instead, it's used for the common functionality that keeps
implementations uniform across the solver libraries.

# Third-Party Libraries to Note

## ArrayInterface.jl: Extensions to the Julia AbstractArray Interface

like `findstructralnz` to get the structural non-zeros of arbitrary sparse and structured
matrices.

[Adapt.jl](https://github.com/JuliaGPU/Adapt.jl) makes it possible to write code that is generic
to the compute devices, i.e. code that works on both CPUs and GPUs. It defines the `adapt`
function, which acts like `convert(T, x)`, but without the restriction of returning a `T`. This allows you to
“convert” wrapper types like `Adjoint` to be GPU-compatible (for example) without throwing away the wrapper.

Example usage:

```julia
adapt(CuArray, ::Adjoint{Array})::Adjoint{CuArray}
```

compatible with FFT libraries without having an explicit dependency on a solver.

## GPUArrays.jl: Common Interface for GPU-Based Array Types

[GPUArrays.jl](https://github.com/JuliaGPU/GPUArrays.jl) defines the shared higher-level operations
for GPU-based array types like [CUDA.jl's CuArray](https://github.com/JuliaGPU/CUDA.jl) and
[AMDGPU.jl's ROCmArray](https://github.com/JuliaGPU/AMDGPU.jl). Packages in SciML use the
designation `x isa AbstractGPUArray` in order to find out if a user's operation is on the GPU
and specialize computations.
diff --git a/docs/src/highlevels/inverse_problems.md b/docs/src/highlevels/inverse_problems.md
index e46a5c3f2f8..19604d807c7 100644
--- a/docs/src/highlevels/inverse_problems.md
+++ b/docs/src/highlevels/inverse_problems.md
@@ -17,14 +17,14 @@ so that you can easily decide which one suits your needs best.
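Before the individual packages are described, here is a minimal sketch of the core operation
they all build on: differentiating an ODE solve with respect to its parameters. The
Lotka-Volterra model and the loss function here are illustrative placeholders:

```julia
using OrdinaryDiffEq, SciMLSensitivity, Zygote

function lotka!(du, u, p, t)
    du[1] = p[1] * u[1] - p[2] * u[1] * u[2]
    du[2] = -p[3] * u[2] + p[4] * u[1] * u[2]
end

p = [1.5, 1.0, 3.0, 1.0]
prob = ODEProblem(lotka!, [1.0, 1.0], (0.0, 10.0), p)

# Scalar loss of the solution; loading SciMLSensitivity supplies the adjoint
# rules that make this Zygote.gradient call through the solver efficient.
loss(p) = sum(abs2, Array(solve(prob, Tsit5(); p = p, saveat = 0.1)))
dp = Zygote.gradient(loss, p)[1]
```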
## SciMLSensitivity.jl: Local Sensitivity Analysis and Automatic Differentiation Support for Solvers

-SciMLSensitivity.jl is the system for local sensitivity analysis which all other inverse problem
+SciMLSensitivity.jl is the system for local sensitivity analysis, which all other inverse problem
 methods rely on. This package defines the interactions between the equation solvers
 and automatic differentiation, defining fast overloads for forward and adjoint (reverse)
 sensitivity analysis for fast gradient and Jacobian calculations with respect to model inputs.
 Its documentation covers how to use direct differentiation of equation solvers in conjunction
 with tools like Optimization.jl to perform model calibration of ODEs against data, PDE-constrained
 optimization, nonlinear optimal control analysis, and much more. As a lower-level tool, this library is very versatile, feature-rich,
-and high-performance, giving all of the tools required but not directly providing a higher level
+and high-performance, giving all the tools required but not directly providing a higher-level
 interface.

 !!! note

@@ -54,7 +54,7 @@ delay differential equations.

 ## DiffEqBayes.jl: Simplified Bayesian Estimation Interface

 As the name suggests, this package has been designed to provide the estimation
-of differential equations parameters by means of Bayesian methods. It works in
+of differential equation parameters by Bayesian methods. It works in
 conjunction with [Turing.jl](https://turing.ml/),
 [CmdStan.jl](https://github.com/StanJulia/CmdStan.jl),
 [DynamicHMC.jl](https://github.com/tpapp/DynamicHMC.jl), and
@@ -63,7 +63,7 @@ as flexible as direct usage of DiffEqFlux.jl or Turing.jl, DiffEqBayes.jl can be
 an approachable interface for those not familiar with Bayesian estimation,
 and provides a nice way to use Stan from pure Julia.

-# Third Party Tools of Note
+# Third-Party Tools of Note

 ## Turing.jl: A Flexible Probabilistic Programming Language for Bayesian Analysis

@@ -116,7 +116,7 @@ Zygote.jl is the AD engine associated with the Flux machine learning library. Ho
 limitations which limit its performance in equation solver contexts, such as an inability to handle
 mutation and introducing many small allocations and type-instabilities. For this reason, the SciML
 equation solvers define differentiation overloads using [ChainRules.jl](https://github.com/JuliaDiff/ChainRules.jl),
-meaning that the equation solvers tend to not use Zygote.jl internally even if the user code uses `Zygote.gradient`.
+meaning that the equation solvers tend not to use Zygote.jl internally even if the user code uses `Zygote.gradient`.
 In this manner, the speed and performance of more advanced techniques can be preserved while using the Julia standard.

 ## FiniteDiff.jl: Fast Finite Difference Approximations

diff --git a/docs/src/highlevels/learning_resources.md b/docs/src/highlevels/learning_resources.md
index 4669ecd91ff..104f4df59ea 100644
--- a/docs/src/highlevels/learning_resources.md
+++ b/docs/src/highlevels/learning_resources.md
@@ -3,7 +3,7 @@
 While the SciML documentation is made to be comprehensive, there will always
 be good alternative resources. The purpose of this section of the documentation
 is to highlight the alternative resources which can be helpful for learning how
-to use the SciML Open Source Software libraries.
+to use the SciML open-source software libraries.

 ## JuliaCon and SciMLCon Videos

@@ -38,4 +38,4 @@ classic SIR epidemic model.
- [Statistics with Julia](https://statisticswithjulia.org/)
 - [Statistical Rethinking with Julia](https://shmuma.github.io/rethinking-2ed-julia/)
 - [The Koopman Operator in Systems and Control](https://www.springer.com/gp/book/9783030357122)
-  - "All simulations have been performed in Julia, with additional Julia packages: LinearAlgebra.jl, Random.jl, Plots.jl, Lasso.jl, DifferentialEquations.jl"
+  - “All simulations have been performed in Julia, with additional Julia packages: LinearAlgebra.jl, Random.jl, Plots.jl, Lasso.jl, DifferentialEquations.jl”

diff --git a/docs/src/highlevels/model_libraries_and_importers.md b/docs/src/highlevels/model_libraries_and_importers.md
index 0e9efb86189..578c9b4ee2b 100644
--- a/docs/src/highlevels/model_libraries_and_importers.md
+++ b/docs/src/highlevels/model_libraries_and_importers.md
@@ -21,7 +21,7 @@ quickly building up complex differential equation models. It includes:

 * Callbacks for specialized output and saving procedures
 * Callbacks for enforcing domain constraints, positivity, and manifolds
-* Timed callbacks for periodic dosing, preseting of tstops, and more
+* Timed callbacks for periodic dosing, presetting of tstops, and more
 * Callbacks for determining and terminating at steady state
 * Callbacks for controlling stepsizes and enforcing CFL conditions
 * Callbacks for quantifying uncertainty with respect to numerical errors

 ## SBMLToolkit.jl: SBML Import

 [SBMLToolkit.jl](https://github.com/SciML/SBMLToolkit.jl) is a library for reading
 [SBML files](https://synonym.caltech.edu/#:~:text=What%20is%20SBML%3F,field%20of%20the%20life%20sciences.)
-into the standard formats for Catalyst.jl and ModelingToolkit.jl. There are more than one thousand biological
-models available in the the [BioModels Repository](https://www.ebi.ac.uk/biomodels/).
+into the standard formats for Catalyst.jl and ModelingToolkit.jl. There are well over one thousand biological
+models available in the [BioModels Repository](https://www.ebi.ac.uk/biomodels/).

 ## CellMLToolkit.jl: CellML Import

 [CellMLToolkit.jl](https://github.com/SciML/CellMLToolkit.jl) is a library for reading
 [CellML files](https://www.cellml.org/) into the standard formats for ModelingToolkit.jl.
-There are several hundred biological models available in the the
+There are several hundred biological models available in the
 [CellML Model Repository](https://models.cellml.org/cellml).

 ## ReactionNetworkImporters.jl: BioNetGen Import

diff --git a/docs/src/highlevels/modeling_languages.md b/docs/src/highlevels/modeling_languages.md
index ffe16b8a5e6..fcb4aa3f9d8 100644
--- a/docs/src/highlevels/modeling_languages.md
+++ b/docs/src/highlevels/modeling_languages.md
@@ -2,7 +2,7 @@

 While in theory one can build perfect code for all models from scratch, in practice
 many scientists and engineers need or want some help! The SciML modeling tools
-provide a higher level interface over the equation solvers which helps the translation
+provide a higher-level interface over the equation solvers, which helps the translation
 from good models to good simulations in a way that abstracts away the mathematical and
 computational details without giving up performance.
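To make that promise concrete before the individual packages are introduced, here is a minimal sketch of the kind of symbolic workflow these tools provide, assuming ModelingToolkit.jl and OrdinaryDiffEq.jl are installed; the Lotka-Volterra-style equations and parameter values are arbitrary illustrative choices:

```julia
using ModelingToolkit, OrdinaryDiffEq

@parameters a b
@variables t x(t) y(t)
D = Differential(t)

# The model is symbolic data, not hand-written solver code
eqs = [D(x) ~ a * x - b * x * y,
       D(y) ~ b * x * y - y]
@named sys = ODESystem(eqs, t)

# Symbolic simplification happens before any numerics are generated
simpsys = structural_simplify(sys)

prob = ODEProblem(simpsys, [x => 1.0, y => 1.0], (0.0, 10.0), [a => 1.5, b => 1.0])
sol = solve(prob, Tsit5())
```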
@@ -22,8 +22,8 @@ numerical stability, and automatically remove variables which it can show are re ModelingToolkit.jl is the base of the SciML symbolic modeling ecosystem, defining the `AbstractSystem` types, such as `ODESystem`, `SDESystem`, `OptimizationSystem`, `PDESystem`, and more, which are -then used by all of the other modeling tools. As such, when using other modeling tools like Catalyst.jl, -the reference for all of the things that can be done with the symbolic representation is simply +then used by all the other modeling tools. As such, when using other modeling tools like Catalyst.jl, +the reference for all the things that can be done with the symbolic representation is simply ModelingToolkit.jl. ## Catalyst.jl: Chemical Reaction Networks (CRN), Systems Biology, and Quantitative Systems Pharmacology (QSP) Modeling @@ -31,7 +31,7 @@ ModelingToolkit.jl. [Catalyst.jl](https://docs.sciml.ai/Catalyst/stable/) is a modeling interface for efficient simulation of mass action ODE, chemical Langevin SDE, and stochastic chemical kinetics jump process (i.e. chemical master equation) models for chemical reaction networks and population processes. It uses a -highly intuitive chemical reaction syntax interface, which generates all of the extra functionality +highly intuitive chemical reaction syntax interface, which generates all the extra functionality necessary for the fastest use with JumpProcesses.jl, DifferentialEquations.jl, and higher level SciML libraries. Its `ReactionSystem` type is a programmable extension of the ModelingToolkit `AbstractSystem` interface, meaning that complex reaction systems are represented symbolically, and then compiled to @@ -43,7 +43,7 @@ For an overview of the library, see ## NBodySimulator.jl: A differentiable simulator for N-body problems, including astrophysical and molecular dynamics -[NBodySimulator.jl](https://docs.sciml.ai/NBodySimulator/stable/) is differentiable simulator for N-body problems, +[NBodySimulator.jl](https://docs.sciml.ai/NBodySimulator/stable/) is a differentiable simulator for N-body problems, including astrophysical and molecular dynamics. It uses the DifferentialEquations.jl solvers, allowing for one to choose between a large variety of symplectic integration schemes. It implements many of the thermostats required for doing standard molecular dynamics approximations. @@ -58,9 +58,9 @@ Black-Scholes model. ![](https://user-images.githubusercontent.com/1814174/172001045-b9e35b8d-0d40-41af-b606-95b81bb1194d.png) -This image that went viral is actually runnable code from [ParameterizedFunctions.jl](https://docs.sciml.ai/ParameterizedFunctions/stable/). Define equations and models using a very simple high level syntax and let the code generation tools build symbolic fast Jacobian, gradient, etc. functions for you. +This image that went viral is actually runnable code from [ParameterizedFunctions.jl](https://docs.sciml.ai/ParameterizedFunctions/stable/). Define equations and models using a very simple high-level syntax and let the code generation tools build symbolic fast Jacobian, gradient, etc. functions for you. -# Third Party Tools of Note +# Third-Party Tools of Note ## MomentClosure.jl: Automated Generation of Moment Closure Equations @@ -79,7 +79,7 @@ making it a solid foundation for any agent-based model. ## Unitful.jl: A Julia package for physical units -Supports not only SI units but also any other unit system. +Supports not only SI units, but also any other unit system. 
[Unitful.jl](https://docs.sciml.ai/Unitful/stable/) has minimal run-time penalty of units. Includes facilities for dimensional analysis, and integrates easily with the usual mathematical operations and collections that are defined in Julia. @@ -100,7 +100,7 @@ allow for directly solving the weak form of the stochastic model. ## AlgebraicPetri.jl: Applied Category Theory of Modeling [AlgebraicPetri.jl](https://docs.sciml.ai/AlgebraicPetri/stable/) is a library for automating the intuitive -generation of dynamical models using a Category theory based approach. +generation of dynamical models using a Category theory-based approach. ## QuantumOptics.jl: Simulating quantum systems. diff --git a/docs/src/highlevels/numerical_utilities.md b/docs/src/highlevels/numerical_utilities.md index 18ff665aaf9..46afee5c742 100644 --- a/docs/src/highlevels/numerical_utilities.md +++ b/docs/src/highlevels/numerical_utilities.md @@ -8,7 +8,7 @@ around this to improve performance in scientific contexts, including: - Faster methods for (non-allocating) matrix exponentials via `exponential!` - Methods for computing matrix exponential that are generic to number types and arrays (i.e. GPUs) -- Methods for computing arnoldi iterations on Krylov subspaces +- Methods for computing Arnoldi iterations on Krylov subspaces - Direct computation of `exp(t*A)*v`, i.e. exponentiation of a matrix times a vector, without computing the matrix exponential - Direct computation of `ϕ_m(t*A)*v` operations, where `ϕ_0(z) = exp(z)` and `ϕ_(k+1)(z) = (ϕ_k(z) - 1) / z` @@ -27,7 +27,7 @@ low discrepancy Quasi-Monte Carlo samples, using methods like: * `LatticeRuleSample` for a randomly-shifted rank-1 lattice rule. * `LowDiscrepancySample(base)` where `base[i]` is the base in the ith direction. * `GoldenSample` for a Golden Ratio sequence. -* `KroneckerSample(alpha, s0)` for a Kronecker sequence, where alpha is an length-d vector of irrational numbers (often sqrt(d)) and s0 is a length-d seed vector (often 0). +* `KroneckerSample(alpha, s0)` for a Kronecker sequence, where alpha is a length-d vector of irrational numbers (often sqrt(d)) and s0 is a length-d seed vector (often 0). * `SectionSample(x0, sampler)` where `sampler` is any sampler above and `x0` is a vector of either `NaN` for a free dimension or some scalar for a constrained dimension. ## PoissonRandom.jl: Fast Poisson Random Number Generation @@ -49,11 +49,11 @@ such as ModelingToolkit.jl to allow for runtime code generation for improved per ## EllipsisNotation.jl: Implementation of Ellipsis Array Slicing [EllipsisNotation.jl](https://github.com/SciML/EllipsisNotation.jl) defines the ellipsis -array slicing notation for Julia. It uses `..` as a catch all for "all dimensions", allow for indexing -like `[..,1]` to mean "[:,:,:,1]` on four dimensional arrays, in a way that is generic to the number +array slicing notation for Julia. It uses `..` as a catch-all for “all dimensions”, allowing for indexing +like `[..,1]` to mean `[:,:,:,1]` on four dimensional arrays, in a way that is generic to the number of dimensions in the underlying array. -# Third Party Libraries to Note +# Third-Party Libraries to Note ## Distributions.jl: Representations of Probability Distributions @@ -86,10 +86,10 @@ accelerating handwritten PDE discretizations. ## Polyester.jl: Cheap Threads [Polyester.jl](https://github.com/JuliaSIMD/Polyester.jl) is a cheaper version of threads for -Julia which use a set pool of threads for lower overhead. 
Note that Polyester does not +Julia, which use a set pool of threads for lower overhead. Note that Polyester does not compose with the standard Julia composable threading infrastructure, and thus one must take -care to not compose two levels of Polyester as this will oversubscribe the computation and -lead to performance degradation. Many SciML solvers have options to use Polyseter for +care not to compose two levels of Polyester, as this will oversubscribe the computation and +lead to performance degradation. Many SciML solvers have options to use Polyester for threading to achieve the top performance. ## Tullio.jl: Fast Tensor Calculations and Einstein Notation @@ -101,7 +101,7 @@ automatic differentiation, GPUs, and more. ## ParallelStencil.jl: High-Level Code for Parallelized Stencil Computations [ParallelStencil.jl](https://github.com/omlins/ParallelStencil.jl) is a library for writing -high level code forparallelized stencil computations. It is +high-level code for parallelized stencil computations. It is [compatible with SciML equation solvers](https://github.com/omlins/ParallelStencil.jl/issues/29) and is thus a good way to generate GPU and distributed parallel model code. @@ -128,8 +128,8 @@ methods and regression techniques for handling noisy data. Its methods include: The argument choices are the same as the `BSplineInterpolation`, with the additional parameter `hf(x,y)) or call-overloaded types to do it @@ -298,7 +298,7 @@ f = MyFunction(MyA,AMx,DA) These last two ways enclose the pointer to our cache arrays locally but still present a function f(du,u,p,t) to the ODE solver. -Now since PDEs are large, many times we don't care about getting the whole timeseries. Using +Now, since PDEs are large, many times we don't care about getting the whole timeseries. Using the [output controls from DifferentialEquations.jl](http://diffeq.sciml.ai/latest/basics/common_solver_opts.html#Output-Control-1), we can make it only output the final timepoint. ```julia @@ -307,8 +307,8 @@ prob = ODEProblem(f,u0,(0.0,100.0)) @time sol = solve(prob,ROCK2(),progress=true,save_everystep=false,save_start=false); ``` -Around 0.4 seconds. Much better. Also, if you're using VS Code this'll give you a nice -progress bar so you can track how it's going. +Around 0.4 seconds. Much better. Also, if you're using VS Code, this'll give you a nice +progress bar, so you can track how it's going. ## Quick Note About Performance @@ -319,7 +319,7 @@ progress bar so you can track how it's going. If we wanted to use a more conventional implicit ODE solver, we would need to make use of the sparsity pattern. This is covered in [the advanced ODE tutorial](https://docs.sciml.ai/DiffEqDocs/stable/tutorials/advanced_ode_example/) - It turns out that ROCK2 is more efficient anyways (and doesn't require sparsity + It turns out that ROCK2 is more efficient anyway (and doesn't require sparsity handling), so we will keep this setup. ### Quick Summary: full PDE ODE Code @@ -388,9 +388,9 @@ plot(p1,p2,p3,layout=grid(3,1)) ## Making Use of GPU Parallelism -That was all using the CPU. How do we make turn on GPU parallelism with +That was all using the CPU. How do we turn on GPU parallelism with DifferentialEquations.jl? Well, you don't. DifferentialEquations.jl "doesn't have GPU bits". -So wait... can we not do GPU parallelism? No, this is the glory of type-genericness, +So, wait... can we not do GPU parallelism? No, this is the glory of type-genericness, especially in broadcasted operations. 
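As a tiny, self-contained illustration of that type-genericness (the array sizes and the nonlinearity below are made up for this sketch, and a CUDA-capable GPU is assumed):

```julia
using CUDA

# One generic broadcast function; the array type decides where it executes
reaction(A, B) = B .* A .^ 2 .- A .+ 1.0f0

A = rand(Float32, 100, 100)
B = rand(Float32, 100, 100)

reaction(A, B)                    # runs on the CPU
reaction(CuArray(A), CuArray(B))  # identical code, runs on the GPU
```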
To make things use the GPU, we simply use a CuArray from
[CUDA.jl](https://cuda.juliagpu.org/stable/). If instead of `zeros(N,M)` we used
`CuArray(zeros(N,M))`, then the array lives on the GPU. CuArray naturally overrides
@@ -491,9 +491,9 @@ exists by virtue of the Julia type system.
 require a choice of dt. This is left to the reader to try.)

 !!! note
-    This can take awhile to solve! An explicit Runge-Kutta algorithm isn't necessarily great
+    This can take a while to solve! An explicit Runge-Kutta algorithm isn't necessarily great
     here, though to use a stiff solver on a problem of this size requires once again smartly
     choosing sparse linear solvers. The high-order adaptive method is pretty much necessary
-    though since something like Euler-Maruyama is simply not stable enough to solve this at
+    though, since something like Euler-Maruyama is simply not stable enough to solve this at
     a reasonable dt. Also, the current algorithms are not so great at handling this problem.
     Good thing there's a publication coming along with some new stuff...

diff --git a/docs/src/showcase/massively_parallel_gpu.md b/docs/src/showcase/massively_parallel_gpu.md
index d2b4979b809..f1c4a1112b0 100644
--- a/docs/src/showcase/massively_parallel_gpu.md
+++ b/docs/src/showcase/massively_parallel_gpu.md
@@ -5,7 +5,7 @@

 Before we dive deeper, let us remark that there are two very different ways that one can
 accelerate an ODE solution with GPUs. There is one case where `u` is very big and `f`
 is very expensive but very structured, and you use GPUs to accelerate the computation
-of said `f`. The other use case is where `u` is very small but you want to solve the ODE
+of said `f`. The other use case is where `u` is very small, but you want to solve the ODE
 `f` over many different initial conditions (`u0`) or parameters `p`. In that case, you can
 use GPUs to parallelize over different parameters and initial conditions. In other words:

@@ -23,7 +23,7 @@ Let's say we wanted to quantify the uncertainty in the solution of a differentia
 One simple way to do this would be to run a Monte Carlo simulation of the same ODE,
 randomly jiggling around some parameters according to an uncertainty distribution.
 We could do that on a CPU, but that's not hip. What's hip are GPUs! GPUs have thousands
 of cores, so
-could we make each core of our GPU solve the same ODE but with different parameters?
+could we make each core of our GPU solve the same ODE, but with different parameters?
 The [ensembling tools of DiffEqGPU.jl](https://docs.sciml.ai/DiffEqGPU/stable/) solve exactly this
 issue, and today you will learn how to master the GPUniverse.

@@ -60,10 +60,10 @@ prob = ODEProblem{false}(lorenz, u0, tspan, p)

 Notice we use `SVector`s, i.e. StaticArrays, in order to define our arrays. This is
-important for later since the GPUs will want a fully non-allocating code to build a
+important for later, since the GPUs will want fully non-allocating code to build a
 kernel on.

-Now from this problem, we build an `EnsembleProblem` as per the DifferentialEquations.jl
+Now, from this problem, we build an `EnsembleProblem` as per the DifferentialEquations.jl
 specification. A `prob_func` jiggles the parameters and we solve 10_000 trajectories:

 ```@example diffeqgpu
@@ -80,13 +80,13 @@ Now uhh, we just change `EnsembleThreads()` to `EnsembleGPUArray()`

 sol = solve(monteprob,Tsit5(),EnsembleGPUArray(),trajectories=10_000,saveat=1.0f0)
 ```

-Or for a more efficient version, `EnsembleGPUKernel()`.
But that requires special solvers
+Or for a more efficient version, `EnsembleGPUKernel()`. But that requires special solvers,
 so we also change to `GPUTsit5()`.

 ```@example diffeqgpu
 sol = solve(monteprob, GPUTsit5(), EnsembleGPUKernel(), trajectories = 10_000)
 ```

-Okay so that was anticlimactic but that's the point: if it was harder then that then it
+Okay, so that was anticlimactic, but that's the point: if it were harder than that, it
 wouldn't be automatic! Now go check out
 [DiffEqGPU.jl's documentation for more details](https://docs.sciml.ai/DiffEqGPU/stable/);
 that's the end of our show.

diff --git a/docs/src/showcase/missing_physics.md b/docs/src/showcase/missing_physics.md
index d89d806c0ad..50c1b40b64c 100644
--- a/docs/src/showcase/missing_physics.md
+++ b/docs/src/showcase/missing_physics.md
@@ -1,11 +1,11 @@
 # [Automatically Discover Missing Physics by Embedding Machine Learning into Differential Equations](@id autocomplete)

-One of the most time consuming parts of modeling is building the model. How do you know
+One of the most time-consuming parts of modeling is building the model. How do you know
 when your model is correct? When you solve an inverse problem to calibrate your model
 to data, who you gonna call if there are no parameters that make the model fit the data?
 This is the problem that the Universal Differential Equation (UDE) approach solves: the
 ability to start from the model you have, and suggest minimal mechanistic extensions that
-would allow the model to fit the data. In this showcase we will show how to take a partially
+would allow the model to fit the data. In this showcase, we will show how to take a partially
 correct model and auto-complete it to find the missing physics.

 !!! note

@@ -54,7 +54,7 @@ And external libraries:

 !!! note

     The deep learning framework [Flux.jl](https://fluxml.ai/) could be used in place of Lux,
     though most tutorials in SciML generally prefer Lux.jl due to its explicit parameter
-    interface leading to nicer code. Both share the same internal implementations of core
+    interface, leading to nicer code. Both share the same internal implementations of core
     kernels, and thus have very similar feature support and performance.

 ```@example ude

@@ -159,7 +159,7 @@ prob_nn = ODEProblem(nn_dynamics!,Xₙ[:, 1], tspan, p)

 Notice that the most important part of this is that the neural network does not have
-hardcoded weights. The weights of the neural network are the parameters of the ODE system.
+hard-coded weights. The weights of the neural network are the parameters of the ODE system.
 This means that if we change the parameters of the ODE system, then we will have updated
 the internal neural networks to new weights. Keep that in mind for the next part.

@@ -199,7 +199,7 @@ function predict(θ, X = Xₙ[:,1], T = t)
 end

-Now for our loss function we solve the ODE at our new parameters and check its L2 loss
+Now, for our loss function, we solve the ODE at our new parameters and check its L2 loss
 against the dataset. Using our `predict` function, this looks like:

@@ -231,9 +231,9 @@ end

 Now we're ready to train! To run the training process, we will need to build an
 [`OptimizationProblem`](https://docs.sciml.ai/Optimization/stable/API/optimization_problem/).
-Because we have a lot of parameteres, we will use
+Because we have a lot of parameters, we will use
 [Zygote.jl](https://docs.sciml.ai/Zygote/stable/).
Optimization.jl makes the choice of
-automatic diffeerentiation easy just by specifying an `adtype` in the
+automatic differentiation easy, just by specifying an `adtype` in the
 [`OptimizationFunction` construction](https://docs.sciml.ai/Optimization/stable/API/optimization_function/).

 Knowing this, we can build our `OptimizationProblem` as follows:

@@ -270,7 +270,7 @@ println("Final training loss after $(length(losses)) iterations: $(losses[end])"
 p_trained = res2.u
 ```

-and bingo we have a trained UDE.
+and bingo, we have a trained UDE.

 ## Visualizing the Trained UDE

@@ -294,7 +294,7 @@ pl_trajectory = plot(ts, transpose(X̂), xlabel = "t", ylabel ="x(t), y(t)", col
 scatter!(solution.t, transpose(Xₙ), color = :black, label = ["Measurements" nothing])
 ```

-Lets see how well the unknown term has been approximated:
+Let's see how well the unknown term has been approximated:

 ```@example ude
 # Ideal unknown interactions of the predictor

@@ -328,7 +328,7 @@ more after the break!

 #### This part of the showcase is still a work in progress... shame on us. But be back in a jiffy and we'll have it done.

-Okay that was a quick break, and that's good because this next part is pretty cool. Let's
+Okay, that was a quick break, and that's good because this next part is pretty cool. Let's
 use [DataDrivenDiffEq.jl](https://docs.sciml.ai/DataDrivenDiffEq/stable/) to transform our
 trained neural network from machine learning mumbo jumbo into predictions of missing
 mechanistic equations. To do this, we first generate a symbolic basis that represents the

@@ -347,7 +347,7 @@ capability of the sparse regression, we will look at 3 cases:

   from the original noisy data? This is the approach in the literature known as sparse
   identification of nonlinear dynamics (SINDy). We will call this the full problem. This
   will assess whether this incorporation of prior information was helpful.
-* What if we trained the neural network using the ideal right hand side missing derivative
+* What if we trained the neural network using the ideal right-hand side missing derivative
  functions? This is the value computed in the plots above as `Ȳ`. This will tell us whether
  the symbolic discovery could work in ideal situations.
* Do the symbolic regression directly on the function `y = NN(x)`, i.e. the trained learned

@@ -356,21 +356,21 @@ capability of the sparse regression, we will look at 3 cases:

 To define the full problem, we need to define a `DataDrivenProblem` that has the time series
 of the solution `X`, the time points of the solution `t`, and the derivative
-at each time point of the solution (obtained by the ODE solution's interpolation. We can just use an interpolation to get the derivative:
+at each time point of the solution, obtained by the ODE solution's interpolation. We can just use an interpolation to get the derivative:

 ```@example ude
 full_problem = ContinuousDataDrivenProblem(Xₙ, t)
 ```

 Now for the other two symbolic regressions, we are regressing inputs/outputs of the missing
-terms and thus we directly define the datasets as the input/output mappings like:
+terms, and thus we directly define the datasets as the input/output mappings like:

 ```@example ude
 ideal_problem = DirectDataDrivenProblem(X̂, Ȳ)
 nn_problem = DirectDataDrivenProblem(X̂, Ŷ)
 ```

Let's solve the data-driven problems using sparse regression.
We will use the `ADMM`
method, which requires us to define a set of shrinking cutoff values `λ`, and we do this like:

```@example ude
@@ -426,7 +426,7 @@ for eqs in (full_eqs, ideal_eqs, nn_eqs)
 end
 ```

-Next, we want to predict with our model. To do so, we embedd the basis into a function like before:
+Next, we want to predict with our model. To do so, we embed the basis into a function like before:

 ```@example ude
 # Define the recovered, hybrid model

diff --git a/docs/src/showcase/ode_types.md b/docs/src/showcase/ode_types.md
index 069a597f009..f359fa45b54 100644
--- a/docs/src/showcase/ode_types.md
+++ b/docs/src/showcase/ode_types.md
@@ -9,8 +9,8 @@ few useful/interesting types that can be used:

 |--------------------------|----------------------------------------------------------------------------------------|-----------------------------------------------------------------|
 | BigFloat                 | Base Julia                                                                               | Higher precision solutions                                       |
 | ArbFloat                 | [ArbFloats.jl](https://github.com/JuliaArbTypes/ArbFloats.jl)                            | More efficient higher precision solutions                        |
-| Measurement              | [Measurements.jl](https://github.com/JuliaPhysics/Measurements.jl)                       | Uncertainty propogation                                          |
-| MonteCarloMeasurement    | [MonteCarloMeasurements.jl](https://github.com/baggepinnen/MonteCarloMeasurements.jl)    | Uncertainty propogation                                          |
+| Measurement              | [Measurements.jl](https://github.com/JuliaPhysics/Measurements.jl)                       | Uncertainty propagation                                          |
+| MonteCarloMeasurement    | [MonteCarloMeasurements.jl](https://github.com/baggepinnen/MonteCarloMeasurements.jl)    | Uncertainty propagation                                          |
 | Unitful                  | [Unitful.jl](https://painterqubits.github.io/Unitful.jl/stable/)                         | Unit-checked arithmetic                                          |
 | Quaternion               | [Quaternions.jl](https://juliageometry.github.io/Quaternions.jl/stable/)                 | Quaternions, duh.                                                |
 | Fun                      | [ApproxFun.jl](https://juliaapproximation.github.io/ApproxFun.jl/latest/)                | Representing PDEs as ODEs in function spaces                     |

@@ -34,7 +34,7 @@ initial condition.

 !!! note

     Support for this feature is restricted to the native algorithms of OrdinaryDiffEq.jl.
-    The other solvers such as Sundials.jl, and ODEInterface.jl are not compatible with some
+    The other solvers, such as Sundials.jl and ODEInterface.jl, are incompatible with some
     number systems.

 !!! warn

@@ -77,7 +77,7 @@ sol = solve(prob_ode_linear_big,Tsit5())
 ```

 Now let's send it into bizarre territory. Let's use rational values for everything.
-Let's start by making the time type `Rational`. Rationals are not compatible with adaptive
+Let's start by making the time type `Rational`. Rationals are incompatible with adaptive
 time stepping since they do not have an L2 norm (this can be worked around by defining
 `internalnorm`, but we will skip that in this tutorial). To account for this, let's turn
 off adaptivity as well. Thus, the following is a valid use of rational time (and parameter):

 ```@example odetypes
 prob = ODEProblem(f,1//2,(0//1,1//1),101//100);
 sol = solve(prob,RK4(),dt=1//2^(6),adaptive=false)
 ```

@@ -88,8 +88,8 @@ sol = solve(prob,RK4(),dt=1//2^(6),adaptive=false)

 Now let's change the state to use `Rational{BigInt}`. You will see that we will need to
-use the arbitrary-sized integers because... well... there's a reason people use floating
-point numbers with ODE solvers:
+use the arbitrary-sized integers because... well... there's a reason people use
+floating-point numbers with ODE solvers:

 ```@example odetypes
 prob = ODEProblem(f,BigInt(1)//BigInt(2),(0//1,1//1),101//100);
 sol = solve(prob,RK4(),dt=1//2^(6),adaptive=false)
 ```

@@ -102,7 +102,7 @@ Yeah...

 sol[end]
 ```

-That's one huge fraction! 0 floating point error ODE solve achieved.
+That's one huge fraction! 0 floating-point error ODE solve achieved.
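One caveat worth making explicit: “zero floating-point error” means the solver's arithmetic was exact, not that the answer equals the true ODE solution; fixed-step RK4 still has discretization error. As a small sanity-check sketch (this reuses `sol` from the block above and assumes `f` is the linear model `p*u` defined earlier in this tutorial, so the analytical solution is `u0*exp(p*t)`):

```julia
# Readable decimal form of the exact rational result
float(sol[end])

# Distance to the analytical solution u0*exp(p*t) at t = 1;
# what remains is pure RK4 discretization error, not rounding
abs(float(sol[end]) - big(1)/2 * exp(big(101)/100))
```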
## Unit Checked Arithmetic via Unitful.jl

@@ -115,7 +115,7 @@ DifferentialEquations.jl allows for one to use Unitful.jl to have unit-checked a
 natively in the solvers. Given the dispatch implementation of Unitful, this has little
 overhead because the unit checks occur at compile-time and not at runtime, and thus it does
 not have a runtime effect unless conversions are required (i.e. converting `cm` to `m`),
-which automatically adds a floating point operation for the multiplication.
+which automatically adds a floating-point operation for the multiplication.

 Let's see this in action.

@@ -160,7 +160,7 @@ throw an error for incorrect operations:

 Just like with other number systems, you can choose the units for your numbers by simply
 specifying the units of the initial condition and the timespan. For example, to solve the
-linear ODE where the variable has units of Newton's and `t` is in Seconds, we would use:
+linear ODE where the variable has units of Newtons and `t` is in seconds, we would use:

 ```@example odetypes
 using DifferentialEquations
@@ -177,7 +177,7 @@ ODE:

 \frac{dy}{dt} = f(t,y)
 ```

-we must have that `f` is a rate, i.e. `f` is a change in `y` per unit time. So we need to
+we must have that `f` is a rate, i.e. `f` is a change in `y` per unit time. So, we need to
 fix the units of `f` in our example to be `N/s`. Notice that we then do not receive an error if we do the
 following:

@@ -187,13 +187,13 @@ prob = ODEProblem(f,u0,(0.0u"s",1.0u"s"))
 sol = solve(prob,Tsit5())
 ```

-This gives a a normal solution object. Notice that the values are all with the correct units:
+This gives a normal solution object. Notice that the values are all with the correct units:

 ```@example odetypes
 print(sol[:])
 ```

-And when we plot the solution it automatically adds the units:
+And when we plot the solution, it automatically adds the units:

 ```@example odetypes
 using Plots
@@ -203,7 +203,7 @@ plot(sol,lw=3)

 # Measurements.jl: Numbers with Linear Uncertainty Propagation

-The result of a measurement should be given as a number with an attached uncertainties,
+The result of a measurement should be given as a number with an attached uncertainty,
 in addition to the physical unit, and all operations performed involving the result of the
 measurement should propagate the uncertainty, taking care of correlations between quantities.

@@ -215,7 +215,7 @@ out-of-the-box.

 Let's try to automate uncertainty propagation through number types on some classical
 physics examples!

-### Caveat about `Measurement` type
+### Warning about `Measurement` type

 Before going on with the tutorial, we must point out a subtlety of `Measurements.jl` that
 you should be aware of:

@@ -383,8 +383,8 @@ Tiny difference on the order of the chosen `1e-6` tolerance.

 ### Simple pendulum: Arbitrary amplitude

-Now that we know how to solve differential equations involving numbers with uncertainties
-we can solve the simple pendulum problem without any approximation. This time the
+Now that we know how to solve differential equations involving numbers with uncertainties,
+we can solve the simple pendulum problem without any approximation. This time, the
 differential equation to solve is the following:

 ```math

@@ -418,7 +418,7 @@ plot(sol.t, getindex.(sol.u, 2), label = "Numerical")

 ### Warning about Linear Uncertainty Propagation

-Measurements.jl uses linear uncertainty propagation which has an error associated with it.
+Measurements.jl uses linear uncertainty propagation, which has an error associated with it.
[MonteCarloMeasurements.jl has a page which showcases where this method can lead to incorrect uncertainty measurements](https://baggepinnen.github.io/MonteCarloMeasurements.jl/stable/comparison/). Thus for more nonlinear use cases, it's suggested that one uses one of the more powerful UQ methods, such as: diff --git a/docs/src/showcase/pinngpu.md b/docs/src/showcase/pinngpu.md index a0225dbc196..e2e2fe747f7 100644 --- a/docs/src/showcase/pinngpu.md +++ b/docs/src/showcase/pinngpu.md @@ -12,7 +12,7 @@ it's the heat equation so that's cool). To solve PDEs using neural networks, we will use the [NeuralPDE.jl package](https://neuralpde.sciml.ai/stable/). This package uses -ModelingToolkit's symbolic `PDESystem` as an input and it generates an +ModelingToolkit's symbolic `PDESystem` as an input, and it generates an [Optimization.jl](https://docs.sciml.ai/Optimization/stable/) `OptimizationProblem` which, when solved, gives the weights of the neural network that solve the PDE. In the end, our neural network `NN` satisfies the PDE equations and is thus the solution to the PDE! Thus @@ -55,7 +55,7 @@ with physics-informed neural networks. ## Step 2: Define the PDESystem -First let's use ModelingToolkit's `PDESystem` to represent the PDE. To do this, basically +First, let's use ModelingToolkit's `PDESystem` to represent the PDE. To do this, basically just copy-paste the PDE definition into Julia code. This looks like: ```@example pinn @@ -92,8 +92,7 @@ domains = [t ∈ Interval(t_min,t_max), !!! note We used the wildcard form of the variable definition `@variables u(..)` which then - requires that we all ways specify what the dependent variables of `u` are. The reason - for this is because in the boundary conditions we change from using `u(t,x,y)` to + requires that we always specify what the dependent variables of `u` are. This is because in the boundary conditions we change from using `u(t,x,y)` to more specific points and lines, like `u(t,x_max,y)`. ## Step 3: Define the Lux Neural Network @@ -144,7 +143,7 @@ end res = Optimization.solve(prob,Adam(0.01);callback = callback,maxiters=2500); ``` -We then use the `remake` function allows to rebuild the PDE problem to start a new +We then use the `remake` function to rebuild the PDE problem to start a new optimization at the optimized parameters, and continue with a lower learning rate: ```@example pinn @@ -154,7 +153,7 @@ res = Optimization.solve(prob,Adam(0.001);callback = callback,maxiters=2500); ## Step 7: Inspect the PINN's Solution -Finally we inspect the solution: +Finally, we inspect the solution: ```julia phi = discretization.phi diff --git a/docs/src/showcase/symbolic_analysis.md b/docs/src/showcase/symbolic_analysis.md index 88d320250e0..6467b8822ab 100644 --- a/docs/src/showcase/symbolic_analysis.md +++ b/docs/src/showcase/symbolic_analysis.md @@ -8,7 +8,7 @@ the SciML ecosystem gracefully mixes analytical symbolic computations with the n solver processes to accelerate solvers, give additional information (sparsity, identifiability), automatically fix numerical stability issues, and more. -In this showcase we will highlight two aspects of symbolic-numeric programming. +In this showcase, we will highlight two aspects of symbolic-numeric programming. 1. Automated index reduction of DAEs. While arbitrary [differential-algebraic equation systems can be written in DifferentialEquations.jl](https://docs.sciml.ai/DiffEqDocs/stable/tutorials/dae_example/), @@ -28,7 +28,7 @@ Let's dig into these two cases! 
# Automated Index Reduction of DAEs In many cases one may accidentally write down a DAE that is not easily solvable -by numerical methods. In this tutorial we will walk through an example of a +by numerical methods. In this tutorial, we will walk through an example of a pendulum which accidentally generates an index-3 DAE, and show how to use the `modelingtoolkitize` to correct the model definition before solving. @@ -66,7 +66,7 @@ plot(sol, vars=states(traced_sys)) ### Attempting to Solve the Equation -In this tutorial we will look at the pendulum system: +In this tutorial, we will look at the pendulum system: ```math \begin{aligned} @@ -115,21 +115,21 @@ Did you implement the DAE incorrectly? No. Is the solver broken? No. It turns out that this is a property of the DAE that we are attempting to solve. This kind of DAE is known as an index-3 DAE. For a complete discussion of DAE index, see [this article](http://www.scholarpedia.org/article/Differential-algebraic_equations). -Essentially the issue here is that we have 4 differential variables (``x``, ``v_x``, ``y``, ``v_y``) +Essentially, the issue here is that we have 4 differential variables (``x``, ``v_x``, ``y``, ``v_y``) and one algebraic variable ``T`` (which we can know because there is no `D(T)` term in the equations). An index-1 DAE always satisfies that the Jacobian of the algebraic equations is non-singular. Here, the first 4 equations are differential equations, with the last term the algebraic relationship. However, the partial derivative of `x^2 + y^2 - L^2` w.r.t. `T` is zero, and thus the -Jacobian of the algebraic equations is the zero matrix and thus it's singular. -This is a very quick way to see whether the DAE is index 1! +Jacobian of the algebraic equations is the zero matrix, and thus it's singular. +This is a quick way to see whether the DAE is index 1! The problem with higher order DAEs is that the matrices used in Newton solves are singular or close to singular when applied to such problems. Because of this fact, the nonlinear solvers (or Rosenbrock methods) break down, making them difficult to solve. The classic paper [DAEs are not ODEs](https://epubs.siam.org/doi/10.1137/0903023) goes into detail on this and shows that many methods are no longer convergent -when index is higher than one. So it's not necessarily the fault of the solver +when index is higher than one. So, it's not necessarily the fault of the solver or the implementation: this is known. But that's not a satisfying answer, so what do you do about it? @@ -150,9 +150,9 @@ v_y^\prime =& y T - g \\ \end{aligned} ``` -Note that this is mathematically-equivalent to the equation that we had before, +Note that this is mathematically equivalent to the equation that we had before, but the Jacobian w.r.t. `T` of the algebraic equation is no longer zero because -of the substitution. This means that if you wrote down this version of the model +of the substitution. This means that if you wrote down this version of the model, it will be index-1 and solve correctly! 
In fact, this is how DAE index is commonly defined: the number of differentiations it takes to transform the DAE into an ODE, where an ODE is an index-0 DAE by substituting out all of the @@ -181,14 +181,14 @@ plot(sol, vars=states(traced_sys)) ``` Note that plotting using `states(traced_sys)` is done so that any -variables which are symbolically eliminated, or any variable reorderings +variables which are symbolically eliminated, or any variable reordering done for enhanced parallelism/performance, still show up in the resulting plot and the plot is shown in the same order as the original numerical code. -Note that we can even go a little bit further. If we use the `ODAEProblem` +Note that we can even go a bit further. If we use the `ODAEProblem` constructor, we can remove the algebraic equations from the states of the -system and fully transform the index-3 DAE into an index-0 ODE which can +system and fully transform the index-3 DAE into an index-0 ODE, which can be solved via an explicit Runge-Kutta method: ```@example indexred @@ -215,7 +215,8 @@ using Pkg Pkg.add("StructuralIdentifiability") ``` -The package has a standalone data structure for ordinary differential equations but is also compatible with `ODESystem` type from `ModelingToolkit.jl`. +The package has a standalone data structure for ordinary differential equations, +but is also compatible with `ODESystem` type from `ModelingToolkit.jl`. ## Local Identifiability ### Input System @@ -260,7 +261,7 @@ de = ODESystem(eqs, t, name=:Biohydrogenation) ``` -After that we are ready to check the system for local identifiability: +After that, we are ready to check the system for local identifiability: ```julia # query local identifiability # we pass the ode-system @@ -305,7 +306,7 @@ $$\begin{cases} y(t) = x_1(t) \end{cases}$$ -We will run a global identifiability check on this enzyme dynamics[^3] model. We will use the default settings: the probability of correctness will be `p=0.99` and we are interested in identifiability of all possible parameters +We will run a global identifiability check on this enzyme dynamics[^3] model. We will use the default settings: the probability of correctness will be `p=0.99` and we are interested in identifiability of all possible parameters. Global identifiability needs information about local identifiability first, but the function we chose here will take care of that extra step for us. @@ -340,9 +341,10 @@ ode = ODESystem(eqs, t, name=:GoodwinOsc) # g => :nonidentifiable # delta => :globally ``` -We can see that only parameters `a, g` are unidentifiable and everything else can be uniquely recovered. +We can see that only parameters `a, g` are unidentifiable, and everything else can be uniquely recovered. -Let us consider the same system but with two inputs and we will try to find out identifiability with probability `0.9` for parameters `c` and `b`: +Let us consider the same system but with two inputs, +and we will find out identifiability with probability `0.9` for parameters `c` and `b`: ```julia using StructuralIdentifiability, ModelingToolkit