You don't need help if you're a Fortran guru. I'm just kidding, you're not a Lisp developer. If you're coming from C++ or Fortran, you may be familiar with high-performance computing environments similar to SciML, such as PETSc, Trilinos, or Sundials. The following are some points to help the transition.
Symbolic modeling languages - writing models by hand can leave a lot of performance on the table. Using high-level modeling tools like ModelingToolkit can automate symbolic simplifications, which improve the stability and performance of numerical solvers. On complex models, even the best handwritten C++/Fortran code is orders of magnitude behind the code that symbolic tearing algorithms can achieve!
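As a small illustration of the idea, here is a sketch of defining a model symbolically with ModelingToolkit and letting `structural_simplify` reduce it before handing it to a numerical solver. The model here is a toy example, and the exact API surface varies between ModelingToolkit versions, so treat this as a sketch rather than a canonical recipe:

```julia
# Sketch: symbolic model definition + simplification (toy system, illustrative API)
using ModelingToolkit, OrdinaryDiffEq

@variables t x(t) y(t)
@parameters k
D = Differential(t)
eqs = [D(x) ~ -k * x,
       y ~ 2x]                        # algebraic relation removed by simplification
@named sys = ODESystem(eqs, t)
simpsys = structural_simplify(sys)    # symbolic simplification/tearing pass
prob = ODEProblem(simpsys, [x => 1.0], (0.0, 1.0), [k => 0.5])
sol = solve(prob, Tsit5())
```

On this toy system the simplification only eliminates one observed variable, but on large models the same pass is what performs the tearing described above.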
Composable Library Components - In C++/Fortran environments, every package feels like a silo. Arrays made for PETSc cannot easily be used in Trilinos, and converting Sundials NVector outputs to DataFrames for post-simulation data processing is a project in itself. The Julia SciML environment embraces interoperability. Don't wait for SciML to do it: through generic coding with JIT compilation, these connections create new optimized code on the fly and allow for a more expansive feature set than can ever be documented. Take new high-precision number types from a package and stick them into a nonlinear solver. Take a package for Intel GPU arrays and stick it into the differential equation solver to use specialized hardware acceleration.
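For instance, the generic-typing claim can be sketched by letting extended-precision numbers flow through an ODE solver unchanged (the tolerances here are illustrative):

```julia
# Sketch: BigFloat initial conditions compose with a pure-Julia ODE solver
using OrdinaryDiffEq

f(u, p, t) = -u
prob = ODEProblem(f, BigFloat(1.0), (BigFloat(0.0), BigFloat(1.0)))
sol = solve(prob, Tsit5(), abstol = 1e-30, reltol = 1e-30)
sol[end]   # ≈ exp(-1) beyond Float64 precision
```

No special "BigFloat mode" exists in the solver; the JIT specializes the solver code on the number type it is given.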
Wrappers to the Libraries You Know and Trust - Moving to SciML does not have to be a quick transition. SciML has extensive wrappers to many widely-used classical solver environments such as SUNDIALS and Hairer's classic Fortran ODE solvers (dopri5, dop853, etc.). Using these wrapped solvers is painless and can be swapped in for the Julia versions with one line of code. This gives you a way to incrementally adopt new features/methods while retaining the older pieces you know and trust.
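The one-line swap might look like the following sketch, assuming the Sundials.jl and ODEInterfaceDiffEq.jl wrapper packages are installed:

```julia
# Sketch: swapping between a native solver and wrapped classic solvers
using OrdinaryDiffEq, Sundials, ODEInterfaceDiffEq

f(u, p, t) = 1.01 * u
prob = ODEProblem(f, 0.5, (0.0, 1.0))

sol_julia    = solve(prob, Tsit5())       # pure-Julia solver
sol_sundials = solve(prob, CVODE_BDF())   # wrapped SUNDIALS solver
sol_hairer   = solve(prob, dopri5())      # wrapped Hairer Fortran solver
```

Only the algorithm argument changes; the problem definition and solution interface stay the same.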
Don't Start from Scratch - SciML builds on the extensive Base library of Julia, and thus grows and improves with every update to the language. With hundreds of monthly contributors to SciML and hundreds of monthly contributors to Julia, SciML is one of the most actively developed open-source scientific computing ecosystems out there!
Easier High-Performance and Parallel Computing - With Julia's ecosystem, CUDA.jl will automatically install all of the required binaries, and cu(A)*cu(B) is then all that's required to GPU-accelerate large-scale linear algebra. MPI is easy to install and use. Distributed computing is set up through password-less SSH. Multithreading is automatic and baked into many libraries, with a specialized algorithm to ensure hierarchical usage does not oversubscribe threads. Basically, libraries give you a lot of parallelism for free, and doing the rest is a piece of cake.
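A minimal sketch of the GPU claim, assuming a CUDA-capable GPU and the CUDA.jl package are available:

```julia
# Sketch: GPU-accelerated linear algebra via CUDA.jl (requires an NVIDIA GPU)
using CUDA

A = rand(Float32, 1000, 1000)
B = rand(Float32, 1000, 1000)
cA = cu(A)     # copy to GPU memory
cB = cu(B)
C = cA * cB    # matrix multiply dispatches to the GPU
```

The same generic-dispatch mechanism is what lets `cu` arrays be passed into SciML solvers directly.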
Let's face the facts: in the open benchmarks, the pure-Julia solvers tend to outperform the classic “best” C++ and Fortran solvers in almost every example (with a few notable exceptions). But why?
The answer is two-fold: Julia is as fast as C++/Fortran, and the algorithms are what matter.
While Julia code looks high-level like Python or MATLAB, its performance is on par with C++ and Fortran. At a technical level, when Julia code is type-stable, i.e., the types returned from a function are deducible at compile time from the types that go into it, Julia can optimize the code as aggressively as C++ or Fortran by devirtualizing all dynamic behavior and compile-time optimizing the resulting quasi-static code. This is not an empirical statement; it's a provable type-theoretic result. The compiler applied to that quasi-static representation is LLVM, the same optimizing compiler used by clang and LFortran.
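A tiny sketch of what type-stability means in practice:

```julia
# Sketch: type-stable vs. type-unstable functions
stable(x::Float64)   = x > 0 ? x : 0.0   # always returns Float64
unstable(x::Float64) = x > 0 ? x : 0     # may return Float64 or Int

# In the REPL, `@code_warntype unstable(1.0)` highlights the problem:
# the inferred return type is Union{Float64, Int64}, forcing dynamic
# dispatch on every downstream use of the result.
```

The `stable` version compiles to the same kind of static machine code a C compiler would emit; the `unstable` version cannot.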
For more details on how Julia code is optimized and how to optimize your own Julia code, check out this chapter from the SciML Book.
There are many ways in which Julia's algorithms achieve performance advantages. Some facts to highlight include:
Julia is at the forefront of numerical methods research in many domains. This is highlighted in the differential equation solver comparisons, where the Julia solvers were the first to incorporate “newer” optimized Runge-Kutta tableaus, around half a decade before other software. Since then, the literature has only continued to evolve, and only Julia's SciML keeps up. At this point, many publications' first implementations appear in OrdinaryDiffEq.jl, with benchmark results run on the SciML Open Benchmarking platform!
Julia does not take low-level mathematical functions for granted. The common openlibm implementation of mathematical functions used in many open source projects is maintained by the Julia and SciML developers! However, in modern Julia, every function from log to ^ has been reimplemented in the Julia standard library to improve numerical correctness and performance. For example, Pumas, the nonlinear mixed effects estimation system built on SciML, and used by Moderna for the vaccine trials, notes in its paper that these math-library reimplementations alone gave a 2x performance improvement in even the simplest non-stiff ODE solvers over matching Fortran implementations. Pure Julia linear algebra tooling, like RecursiveFactorization.jl for LU-factorization, outperforms common LU-factorization implementations used in open-source projects like OpenBLAS by around 5x! This should not be surprising though, given that OpenBLAS was a prior MIT Julia Lab project!
Compilers are limited in the transformations they can perform because they lack high-level, context-dependent mathematical knowledge. Julia's SciML makes extensive use of customized symbolic compiler transformations to improve performance with context-based code optimizations. Things like sparsity patterns are automatically deduced from code and optimized on. Nonlinear equations are symbolically torn, changing large nonlinear systems into sequential solves of much smaller systems; because the underlying solve cost scales as O(n^3), shrinking the system size yields outsized savings. These can be orders-of-magnitude cost reductions which come for free, and unless you know every trick in the book, it will be difficult to match SciML's performance!
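Sparsity detection from ordinary code can be sketched as follows, using Symbolics.jl on a hypothetical tridiagonal stencil:

```julia
# Sketch: deducing a Jacobian sparsity pattern from plain Julia code
using Symbolics

function f!(du, u)
    du[1] = -2u[1] + u[2]
    for i in 2:(length(u) - 1)
        du[i] = u[i - 1] - 2u[i] + u[i + 1]   # tridiagonal coupling
    end
    du[end] = u[end - 1] - 2u[end]
end

# Returns a sparse Bool matrix marking the structurally nonzero entries
pattern = Symbolics.jacobian_sparsity(f!, zeros(10), zeros(10))
```

The detected pattern can then be handed to solvers so Jacobians are computed and factorized sparsely.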
To really highlight how JIT compilation and automatic differentiation integration can change algorithms, let's look at the problem of differentiating an ODE solver. As is derived and discussed in detail at a seminar with the American Statistical Association, there are many ways to implement the well-known “adjoint” methods which are required for performance. Each has different stability and performance trade-offs, and Julia's SciML is the only system to systematically offer all of the trade-off options. In many cases, using analytical adjoints of a solver is not advised due to performance reasons, with the trade-off described in detail here. Likewise, even when analytical adjoints are used, it turns out that for general nonlinear equations there is a trick which uses automatic differentiation in the construction of the analytical adjoint to improve its performance. As demonstrated in this publication, this can lead to about 2-3 orders of magnitude performance improvements. These AD-enhanced adjoints are showcased as the seeding methods in the accompanying benchmark plot.
Unless one directly defines special “vjp” functions, this is how the Julia SciML methods achieve orders-of-magnitude performance advantages over CVODES's adjoints and PETSc's TS-adjoint.
Moral of the story: there are many reasons to use automatic differentiation of a solver, and even if an analytical adjoint rule is used for some specific performance reason, that analytical expression can oftentimes itself be accelerated by orders of magnitude by embedding some form of automatic differentiation into it. This is just one algorithm of many which are optimized in this fashion.
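As a sketch of how these trade-offs surface in user code, SciMLSensitivity.jl exposes the adjoint choice through a `sensealg` keyword; the loss function here is purely illustrative:

```julia
# Sketch: selecting an AD-enhanced adjoint when differentiating an ODE solve
using OrdinaryDiffEq, SciMLSensitivity, Zygote

f(u, p, t) = p[1] * u
prob = ODEProblem(f, [1.0], (0.0, 1.0), [-0.5])

loss(p) = sum(solve(prob, Tsit5(), p = p,
                    sensealg = InterpolatingAdjoint(autojacvec = ZygoteVJP()),
                    saveat = 0.1))

grad = Zygote.gradient(loss, [-0.5])  # adjoint pass uses AD-generated vjps
```

Swapping `sensealg` (e.g. to a quadrature or checkpointed adjoint) changes the stability/performance trade-off without touching the model code.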
This document was generated with Documenter.jl version 1.1.1 on Wednesday 18 October 2023. Using Julia version 1.9.3.
If you're a MATLAB user who has looked into Julia for some performance improvements, you may have noticed that the standard library does not have all of the “batteries” included with a base MATLAB installation. Where's the ODE solver? Where's fmincon and fsolve? Those scientific computing functionalities are the pieces provided by the Julia SciML ecosystem!
Performance - The key reason people are moving from MATLAB to Julia's SciML in droves is performance. Even simple ODE solvers are much faster, demonstrating orders of magnitude performance improvements for differential equations, nonlinear solving, optimization, and more. And the performance advantages continue to grow as more complex algorithms are required.
Package Management and Versioning - Julia's package manager takes care of dependency management, testing, and continuous delivery in order to make the installation and maintenance process smoother. For package users, this means it's easier to get packages with complex functionality in your hands.
Free and Open Source - If you want to know how things are being computed, just look at our GitHub organization. Lots of individuals use Julia's SciML to research how the algorithms actually work because of how accessible and tweakable the ecosystem is!
Composable Library Components - In MATLAB environments, every package feels like a silo. Functions made for one File Exchange library cannot easily compose with another. With SciML's generic coding and JIT compilation, these connections create new optimized code on the fly and allow for a more expansive feature set than can ever be documented. Take new high-precision number types from a package and stick them into a nonlinear solver. Take a package for Intel GPU arrays and stick it into the differential equation solver to use specialized hardware acceleration.
Easier High-Performance and Parallel Computing - With Julia's ecosystem, CUDA.jl will automatically install all of the required binaries, and cu(A)*cu(B) is then all that's required to GPU-accelerate large-scale linear algebra. MPI is easy to install and use. Distributed computing is set up through password-less SSH. Multithreading is automatic and baked into many libraries, with a specialized algorithm to ensure hierarchical usage does not oversubscribe threads. Basically, libraries give you a lot of parallelism for free, and doing the rest is a piece of cake.
If you're a Python user who has looked into Julia, you're probably wondering what the equivalent to SciPy is. And you found it: it's the SciML ecosystem! To a Python developer, SciML is SciPy, but with the high-performance GPU capabilities of PyTorch and neural network capabilities all baked right in. With SciML, there is no “separate world” of machine learning sublanguages: there is just one cohesive package ecosystem.
Performance - The key reason people are moving from SciPy to Julia's SciML in droves is performance. Even simple ODE solvers are much faster, demonstrating orders of magnitude performance improvements for differential equations, nonlinear solving, optimization, and more. And the performance advantages continue to grow as more complex algorithms are required.
Package Management and Versioning - Julia's package manager takes care of dependency management, testing, and continuous delivery in order to make the installation and maintenance process smoother. For package users, this means it's easier to get packages with complex functionality in your hands.
Composable Library Components - In Python environments, every package feels like a silo. Functions made for one library cannot easily compose with another. With SciML's generic coding and JIT compilation, these connections create new optimized code on the fly and allow for a more expansive feature set than can ever be documented. Take new high-precision number types from a package and stick them into a nonlinear solver. Take a package for Intel GPU arrays and stick it into the differential equation solver to use specialized hardware acceleration.
Easier High-Performance and Parallel Computing - With Julia's ecosystem, CUDA.jl will automatically install all of the required binaries, and cu(A)*cu(B) is then all that's required to GPU-accelerate large-scale linear algebra. MPI is easy to install and use. Distributed computing is set up through password-less SSH. Multithreading is automatic and baked into many libraries, with a specialized algorithm to ensure hierarchical usage does not oversubscribe threads. Basically, libraries give you a lot of parallelism for free, and doing the rest is a piece of cake.
These facts, along with many others, compound the algorithmic improvements with the implementation improvements, which leads to orders-of-magnitude gains!
If you're an R user who has looked into Julia, you're probably wondering where all of the scientific computing packages are. How do I solve ODEs? Solve f(x)=0 for x? Etc. SciML is the ecosystem for doing this with Julia.
Performance - The key reason people are moving from R to Julia's SciML in droves is performance. Even simple ODE solvers are much faster, demonstrating orders of magnitude performance improvements for differential equations, nonlinear solving, optimization, and more. And the performance advantages continue to grow as more complex algorithms are required.
Composable Library Components - In R environments, every package feels like a silo. Functions made for one library cannot easily compose with another. With SciML's generic coding and JIT compilation, these connections create new optimized code on the fly and allow for a more expansive feature set than can ever be documented. Take new high-precision number types from a package and stick them into a nonlinear solver. Take a package for Intel GPU arrays and stick it into the differential equation solver to use specialized hardware acceleration.
A Global Harmonious Documentation for Scientific Computing - R's documentation for scientific computing is scattered in a bunch of individual packages where the developers do not talk to each other! This not only leads to documentation differences, but also “style” differences: one package uses tol while the other uses atol. With Julia's SciML, the whole ecosystem is considered together, and inconsistencies are handled at the global level. The goal is to be working in one environment with one language.
Easier High-Performance and Parallel Computing - With Julia's ecosystem, CUDA.jl will automatically install all of the required binaries, and cu(A)*cu(B) is then all that's required to GPU-accelerate large-scale linear algebra. MPI is easy to install and use. Distributed computing is set up through password-less SSH. Multithreading is automatic and baked into many libraries, with a specialized algorithm to ensure hierarchical usage does not oversubscribe threads. Basically, libraries give you a lot of parallelism for free, and doing the rest is a piece of cake.
Check out this R-Bloggers blog post on diffeqr, a package which uses ModelingToolkit to translate R code to Julia, and achieves 350x acceleration over R's popular deSolve ODE solver package. But when the solve is done purely in Julia, it achieves 2777x acceleration over deSolve!
From this, we can see that it is a NonlinearSolution. We can see the documentation for how to use a NonlinearSolution by checking the NonlinearSolve.jl solution type page. For example, the solution is stored as .u. What is the solution to our nonlinear system, and what is the final residual value? We can check it as follows:
# Analyze the solution
@show sol.u, sol.resid
([0.0, 0.0, 0.0], [0.0, 0.0, 0.0])
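For context, here is a self-contained sketch of a solve that produces such a NonlinearSolution; this illustrative system differs from the one whose output is shown above:

```julia
# Illustrative NonlinearSolve.jl usage (toy system, not the one from the text)
using NonlinearSolve

f(u, p) = u .* u .- p                 # solve u^2 = p componentwise
u0 = [1.0, 1.0, 1.0]                  # initial guess
prob = NonlinearProblem(f, u0, 2.0)
sol = solve(prob, NewtonRaphson())

sol.u      # the solution vector, ≈ sqrt(2) in each component
sol.resid  # the final residual, ≈ zero
```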