The Trixi framework is a collaborative scientific effort to provide open source tools for adaptive high-order numerical simulations of hyperbolic PDEs in Julia. Besides the core algorithms, the framework also includes mesh and visualization tools. Moreover, it includes utilities such as Julia wrappers of mature libraries written in other programming languages.
This page gives an overview of the different activities that, together, constitute the Trixi framework on GitHub.
Adaptive high-order numerical simulations of hyperbolic PDEs in Julia
Convert output files generated with Trixi.jl to VTK
Use Trixi.jl from C/C++/Fortran
Create troubled cell indicators for Trixi.jl using artificial neural networks
HOHQMesh.jl is a Julia wrapper for the HOHQMesh mesh generator, which allows one to produce curved quadrilateral and hexahedral meshes for high-order numerical simulations.
High Order Hex-Quad Mesh (HOHQMesh) package to automatically generate all-quadrilateral meshes with high order boundary information.
Smesh.jl is a Julia wrapper package for smesh, a simple Fortran package for generating and handling unstructured triangular and polygonal meshes.
A simple Fortran package for generating and handling unstructured triangular and polygonal meshes.
Particle-based multiphysics simulations in Julia
Efficient neighborhood search in point clouds with fixed search radius
P4est.jl is a lightweight Julia wrapper for the p4est C library.
KROME.jl is a lightweight Julia wrapper for KROME, a Fortran library for including chemistry and microphysics in astrophysics simulations.
Julia package for reading VTK XML files (maintained by the Trixi framework authors).
The following publications make use of Trixi.jl or one of the other packages listed above. Author names of Trixi.jl's main developers are in italics.
Doehring, Christmann, Schlottke-Lakemper, Gassner, Torrilhon, Fourth-Order Paired-Explicit Runge-Kutta Methods, 2024.
Ersing, Goldberg, Winters, Entropy stable hydrostatic reconstruction schemes for shallow water systems, 2024.
Glaubitz, Ranocha, Winters, Schlottke-Lakemper, Öffner, Gassner, Generalized upwind summation-by-parts operators and their application to nodal discontinuous Galerkin methods, 2024.
Bender, Öffner, Entropy-Conservative Discontinuous Galerkin Methods for the Shallow Water Equations with Uncertainty, 2024.
Oblapenko, Torrilhon, Entropy-conservative high-order methods for high-enthalpy flows, 2024.
Doehring, Schlottke-Lakemper, Gassner, Torrilhon, Multirate Time-Integration based on Dynamic ODE Partitioning through Adaptively Refined Meshes for Compressible Fluid Dynamics, 2024.
Babbar, Chandrashekar, Multi-Derivative Runge-Kutta Flux Reconstruction for Hyperbolic Conservation Laws, 2024.
Lampert, Ranocha, Structure-Preserving Numerical Methods for Two Nonlinear Systems of Dispersive Wave Equations, 2024.
Rueda-Ramírez, Sikstel, Gassner, An Entropy-Stable Discontinuous Galerkin Discretization of the Ideal Multi-Ion Magnetohydrodynamics System, 2024.
Doehring, Gassner, Torrilhon, Many-Stage Optimal Stabilized Runge-Kutta Methods for Hyperbolic Partial Differential Equations, 2024.
Babbar, Chandrashekar, Lax-Wendroff Flux Reconstruction for advection-diffusion equations with error-based time stepping, 2024.
Babbar, Chandrashekar, Lax-Wendroff Flux Reconstruction on adaptive curvilinear meshes with error based time stepping for hyperbolic conservation laws, 2024.
Babbar, Kenettinkara, Chandrashekar, Admissibility preserving subcell limiter for Lax-Wendroff flux reconstruction, 2023.
Ovadia, Oommen, Kahana, Peyvan, Turkel, Karniadakis, Real-time Inference and Extrapolation via a Diffusion-inspired Temporal Transformer Operator (DiTTO), 2023.
Ranocha, Winters, Schlottke-Lakemper, Öffner, Glaubitz, Gassner, High-order upwind summation-by-parts methods for nonlinear conservation laws, 2023.
Ranocha, Schütz, Multiderivative time integration methods preserving nonlinear functionals via relaxation, 2023.
Ranocha, Giesselmann, Stability of step size control based on a posteriori error estimates, 2023.
Chan, Shukla, Wu, Liu, Nalluri, High order entropy stable schemes for the quasi-one-dimensional shallow water and compressible Euler equations, 2023.
Ersing, Winters, An entropy stable discontinuous Galerkin method for the two-layer shallow water equations on curvilinear meshes, 2023.
Rueda-Ramírez, Bolm, Kuzmin, Gassner, Monolithic Convex Limiting for Legendre–Gauss–Lobatto Discontinuous Galerkin Spectral Element Methods, 2023.
Ranocha, A discontinuous Galerkin discretization of elliptic problems with improved convergence properties using summation by parts operators, 2023.
Ranocha, Winters, Castro, Dalcin, Schlottke-Lakemper, Gassner, Parsani, On error-based step size control for discontinuous Galerkin methods for compressible fluid dynamics, 2023.
Ranocha, Schlottke-Lakemper, Chan, Rueda-Ramírez, Winters, Hindenlang, Gassner, Efficient implementation of modern entropy stable and kinetic energy preserving discontinuous Galerkin methods for conservation laws, ACM Transactions on Mathematical Software, 2023.
Chan, Ranocha, Rueda-Ramírez, Gassner, Warburton, On the entropy projection and the robustness of high order entropy stable discontinuous Galerkin schemes for under-resolved flows, 2022.
Rueda-Ramírez, Pazner, Gassner, Subcell limiting strategies for discontinuous Galerkin spectral element methods, 2022.
Lukáčová-Medvid’ová, Öffner, Convergence of Discontinuous Galerkin Schemes for the Euler Equations via Dissipative Weak Solutions, 2022.
Ranocha, A Note on Numerical Fluxes Conserving Harten's Entropies for the Compressible Euler Equations, 2022.
Ranocha, Schlottke-Lakemper, Winters, Faulhaber, Chan, Gassner, Adaptive numerical simulations with Trixi.jl: A case study of Julia for scientific computing, JuliaCon Proceedings, 77, 2022.
Gassner, Svärd, Hindenlang, Stability Issues of Entropy-Stable and/or Split-form High-order Schemes, 2022.
Singh, Chandrashekar, On a linear stability issue of split form schemes for compressible flows, 2021.
Ranocha, Gassner, Preventing pressure oscillations does not fix local linear stability issues of entropy-based split-form high-order schemes, Communications on Applied Mathematics and Computation, 2021.
Schlottke-Lakemper, Winters, Ranocha, Gassner, A purely hyperbolic discontinuous Galerkin approach for self-gravitating gas dynamics, Journal of Computational Physics (442), 110467, 2021.
Non-intrusive Multirate Time-Integration for High-Order accurate Compressible Fluid Dynamics with Trixi.jl
Doehring
2nd July 2024, PDESoft 2024, Cambridge, UK
Challenges of sustainable research software engineering in Trixi.jl
Schlottke-Lakemper
27th October 2023, MBD Colloquium, Aachen, Germany
Julia for scientific high-performance computing: opportunities and challenges
Schlottke-Lakemper
6th October 2023, Ferrite.jl User & Developer Conference, Bochum, Germany
Scaling Trixi.jl to more than 10,000 cores using MPI
Schlottke-Lakemper, Ranocha
27th July 2023, JuliaCon 2023, Cambridge, US
Massively Parallel Computational Fluid Dynamics with Julia and Trixi.jl
Schlottke-Lakemper
28th June 2023, PASC Conference, Davos, Switzerland
Research Software Engineering for Sustainable Scientific Computing
Schlottke-Lakemper
30th January 2023, SSD Seminar Series, Aachen, Germany
Trixi.jl: High-Order Numerical Simulations of Conservation Laws in Julia
Schlottke-Lakemper
19th January 2023, SNuBIC Seminar
tutorials & notebooks
Robust and efficient high-performance computational fluid dynamics enabled by modern numerical methods and technologies
Ranocha
3rd November 2022, MUSEN Colloquium, TU Braunschweig, Germany
Reproducibility as a service: collaborative scientific computing with Julia
Schlottke-Lakemper, Ranocha
27th October 2022, MaRDI Workshop for Scientific Computing, Münster, Germany
From Mesh Generation to Adaptive Simulation: A Journey in Julia
Winters
27th July 2022, JuliaCon 2022
recorded talk on YouTube | presentation & code
Running Julia code in parallel with MPI: Lessons learned
Christmann, Neher, Schlottke-Lakemper
26th July 2022, Julia for HPC Minisymposium, JuliaCon 2022
recorded talk on YouTube | presentation
Extensible Computational Fluid Dynamics in Julia with Trixi.jl
Schlottke-Lakemper, Ranocha, Gassner
25th February 2022, SIAM Conference on Parallel Processing for Scientific Computing, Seattle, US
Research software development with Julia
Schlottke-Lakemper, Ranocha
27th September 2021, NFDI4Ing Conference 2021
Adaptive high-order numerical simulations with Trixi.jl
Schlottke-Lakemper, Ranocha
9th September 2021, CliMA Seminar, California Institute of Technology
Adaptive and extendable numerical simulations with Trixi.jl
Schlottke-Lakemper, Ranocha
30th July 2021, JuliaCon 2021
presentation & notebooks | recorded talk on YouTube
Trixi.jl: High-Order Numerical Simulations of Hyperbolic PDEs in Julia
Ranocha, Schlottke-Lakemper, Winters
14th July 2021, ICOSAHOM 2021
tutorials & notebooks
Introduction to Julia and Trixi, a numerical simulation framework for hyperbolic PDEs
Ranocha
27th April 2021, Applied Mathematics Seminar, University of Münster
presentation
Purely hyperbolic self-gravitating flow simulations in Julia
Schlottke-Lakemper, Winters, Ranocha, Gassner
15th March 2021, GAMM Annual Meeting 2021
Julia for adaptive high-order multi-physics simulations
Schlottke-Lakemper
27th January 2021, Numerical Analysis Seminar, Lund University
presentation & notebooks
Trixi.jl participated in the Google Summer of Code 2023, marking its initial steps towards running on GPUs. This project was mentored by Hendrik Ranocha and Michael Schlottke-Lakemper. Here you can find the report from our contributor Huiyu Xie.
Michael Schlottke-Lakemper (University of Augsburg, Germany), Gregor Gassner (University of Cologne, Germany), Hendrik Ranocha (University of Hamburg, Germany), Andrew Winters (Linköping University, Sweden), and Jesse Chan (Rice University, US) are the principal developers of Trixi.jl. David A. Kopriva (Florida State University, US) is the principal developer of HOHQMesh and HOHQMesh.jl. For a full list of authors, please check out the respective packages.
There are a number of ways to reach out to us:
Meet us on Slack
Create an issue in one of the repositories listed on this page
Get in touch with one of the Trixi Authors
This project has benefited from funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through the following grants:
Excellence Strategy EXC 2044-390685587, Mathematics Münster: Dynamics-Geometry-Structure.
Research unit FOR 5409 "Structure-Preserving Numerical Methods for Bulk- and Interface Coupling of Heterogeneous Models (SNuBIC)" (project number 463312734).
Individual grant no. 528753982.
This project has benefited from funding from the European Research Council through the ERC Starting Grant "An Exascale aware and Un-crashable Space-Time-Adaptive Discontinuous Spectral Element Solver for Non-Linear Conservation Laws" (Extreme), ERC grant agreement no. 714487.
This project has benefited from funding from Vetenskapsrådet (VR, Swedish Research Council), Sweden through the VR Starting Grant "Shallow water flows including sediment transport and morphodynamics", VR grant agreement 2020-03642 VR.
This project has benefited from funding from the United States National Science Foundation (NSF) under awards DMS-1719818 and DMS-1943186.
This project has benefited from funding from the German Federal Ministry of Education and Research (BMBF) through the project grant "Adaptive earth system modeling with significantly reduced computation time for exascale supercomputers (ADAPTEX)" (funding id: 16ME0668K).
This project has benefited from funding by the Daimler und Benz Stiftung (Daimler and Benz Foundation) through grant no. 32-10/22.
Trixi.jl is supported by NumFOCUS as an Affiliated Project.
GSoC 2023: GPU acceleration in Trixi.jl using CUDA.jl
Mentee: Huiyu Xie
Mentors: Hendrik Ranocha and Michael Schlottke-Lakemper
Project Link: https://github.com/huiyuxie/trixi_cuda
The goal of this GSoC project was to accelerate Trixi.jl using GPUs.
Project Overview
The project focused on enhancing the Trixi.jl numerical simulation framework, a prominent tool for solving hyperbolic conservation laws within the Julia programming language. The primary aim was to introduce GPU support through CUDA.jl (and thus CUDA) to accelerate the discretization processes used in solving partial differential equations (PDEs). This work was undertaken as part of the Google Summer of Code 2023 program, and the progress is summarized below:
GPU Implementation: The GPU implementations were prototyped using CUDA, starting with 1D equation kernels and gradually extending to more complex 2D and 3D equation kernels. These developments formed the backbone of the Discontinuous Galerkin Collocation Spectral Element Method (DGSEM) in the framework.
Performance Benchmarks: A series of benchmarks were conducted on the developed CUDA kernels to assess their efficiency. These benchmarks demonstrated substantial performance enhancements through a strategic integration of various factors including data transfer, kernel architecture, and method characteristics.
Acceleration Extension: The GPU support was not limited to basic kernels but was extended to more intricate methods within the framework. This included integration with the DG solver that allows meshes with simplex elements (DGMulti) and other summation-by-parts (SBP) schemes like Finite Differences (FD) and Continuous Galerkin Spectral Element Method (CGSEM) through the DGMulti solver.
Please note that the third step was planned but remains incomplete due to time constraints; it will be completed in the future if possible.
How to Setup
This project was entirely set up and tested on Amazon Web Services (AWS); the instance type chosen was p3.2xlarge. Here is the link to the specific information on both the CPU and GPU used for this project. Note that this project is reproducible by following the linked setup instructions on how to set up the environment. Also, for individuals without an Nvidia GPU who are interested in experimenting with CUDA, here is a link detailing how to set up a cloud GPU on AWS.
Key Highlights
An overview of the project repository can be found in its README.md file. Below is a detailed description of the highlights of this project.
Kernel Prototyping
Several function (kernel) naming rules were applied in the kernel prototyping process:
The functions for GPU kernel parallel computing must end with _kernel
The functions for calling the GPU kernels must begin with cuda_
These rules keep the whole structure of the GPU code consistent with the original CPU code; a minimal sketch illustrating them follows the list below. Also, the implementation essentially revolves around three points:
Using custom kernel implementations instead of direct array (and matrix) operations through the CuArray type
Avoiding the conditional statements (like if/else branches) of the original CPU code
Minimizing the number of GPU kernel calls within a given function (like cuda_volume_integral!)
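To make these conventions concrete, here is a minimal, self-contained sketch in the spirit of the rules above; the kernel name, wrapper name, and arguments (scale_kernel!, cuda_scale!, du, u, factor) are hypothetical illustrations rather than functions from the project repository. The device kernel ends with the _kernel suffix (plus ! since it mutates its first argument), and the host-side function that launches it starts with cuda_.

using CUDA

# Device kernel: name ends with `_kernel` (plus `!` for mutation).
function scale_kernel!(du, u, factor)
    i = (blockIdx().x - 1) * blockDim().x + threadIdx().x
    if i <= length(du)  # simple bounds guard
        @inbounds du[i] = factor * u[i]
    end
    return nothing
end

# Host-side launcher: name starts with `cuda_` and hides the launch details.
function cuda_scale!(du, u, factor)
    threads = 256
    blocks = cld(length(du), threads)
    @cuda threads=threads blocks=blocks scale_kernel!(du, u, factor)
    return nothing
end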
Based on these points, the work began with dg_1d.jl and then extended to dg_2d.jl and dg_3d.jl under the src/solvers/dgsem_tree directory. The prototyping mainly focused on the rhs! functions that are called in the semidiscretize process. In addition, it is worthwhile to mention some caveats encountered in this GPU prototyping process:
CUDA.jl does not support dynamic array access and the use of other dynamic types inside kernels (Issue #6, Issue #8, and Issue #11)
GPU parallel computing can run into race conditions (Issue #5)
The Float32 type can be promoted to the Float64 type in the GPU computing process (Issue #3 and PR #1604); a brief illustration follows this list
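As a generic illustration of this promotion pitfall (plain Julia, not code from the project): a Float64 literal silently promotes a Float32 operand, and inside a GPU kernel this forces unintended Float64 arithmetic.

julia> x = 1.0f0;  # a Float32 value

julia> typeof(2.0 * x)    # the Float64 literal 2.0 promotes the result
Float64

julia> typeof(2.0f0 * x)  # a Float32 literal keeps the computation in Float32
Float32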
Kernel Configuration
The GPU kernels were designed to be launched with an appropriate number of threads and blocks. The occupancy API CUDA.launch_configuration was used to create kernel configurator functions for 1D, 2D, and 3D kernels (i.e., configurator_1d, configurator_2d, and configurator_3d).
Specifically, in the kernel configurator functions, CUDA.launch_configuration first returns a suggested number of threads for the compiled but not yet launched kernel, and the number of blocks is then computed by dividing the corresponding array size by the number of threads.
Thus, when calling a GPU kernel, the kernel is first compiled and then launched with the configuration returned by the configurator. For example, it is common to see code like
sample_kernel = @cuda launch = false sample_kernel!(arg1, arg2, arg3)
sample_kernel(arg1, arg2, arg3; configurator_1d(sample_kernel, size_arr)...) # Similar for `configurator_2d` and `configurator_3d`
in kernel calling functions. In this way, the GPU kernels are configured and launched with the intention of achieving maximal occupancy (i.e., optimizing the utilization of GPU computing resources), potentially enhancing overall performance. However, it should be noted that the maximal occupancy cannot be reached in most cases.
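As a rough sketch of how such a configurator could be built on the occupancy API (not the project's actual implementation, whose details may differ; my_configurator_1d and its arguments are hypothetical names):

using CUDA

# Query the occupancy API for a suggested thread count, then derive the block count.
function my_configurator_1d(kernel, array)
    config = CUDA.launch_configuration(kernel.fun)  # suggested threads for this compiled kernel
    threads = min(length(array), config.threads)    # never launch more threads than elements
    blocks = cld(length(array), threads)            # enough blocks to cover the whole array
    return (threads = threads, blocks = blocks)
end

The returned named tuple can then be splatted into the keyword arguments of the compiled kernel, exactly as in the calling pattern shown above.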
Also, this method of kernel configuration may not be optimal, considering that it may not be applicable to different GPU versions. Given that

julia> attribute(device(), CUDA.DEVICE_ATTRIBUTE_MAX_GRID_DIM_X)
2147483647

julia> attribute(device(), CUDA.DEVICE_ATTRIBUTE_MAX_THREADS_PER_BLOCK)
1024
the kernel can be handled on the current GPU but may not be on other GPUs (as different GPUs report different attribute values, such as CUDA.DEVICE_ATTRIBUTE_MAX_GRID_DIM_X and CUDA.DEVICE_ATTRIBUTE_MAX_THREADS_PER_BLOCK). So it was suggested to introduce a stride loop for the current GPU kernels.
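For illustration, a stride-loop (grid-stride) variant of the hypothetical kernel sketched earlier could look like this; each thread then processes several array elements, so the launch no longer needs one thread per element and stays within the device limits queried above.

using CUDA

# Grid-stride loop: threads step through the array in increments of the total thread count.
function scale_kernel!(du, u, factor)
    index = (blockIdx().x - 1) * blockDim().x + threadIdx().x
    stride = gridDim().x * blockDim().x
    for i in index:stride:length(du)
        @inbounds du[i] = factor * u[i]
    end
    return nothing
end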
Kernel Optimization
Some work on kernel optimization has already been done during the kernel prototyping process, such as avoiding the use of conditional branches and minimizing kernel calls. But the general work on kernel optimization has not yet been introduced (so this part is somewhat related to the future work).
In summary, the kernel optimization should be based on kernel benchmarks and kernel profiling, and here are some factors that can be considered to improve performance:
Data Transfer: The data transfer from CPU to GPU (and back) mainly occurs in the rhs! function when calling semidiscretize. Since rhs! is called multiple times during time integration, it would be more efficient to complete the data transfer before calling the rhs! function (see the sketch after this list).
Stride Loop Tuning: The stride loop has not yet been introduced to the current GPU kernels. By applying stride loop tuning, the loop structure (i.e., stride length) can be modified to improve performance.
Multi-GPU/Multi-Thread: The performance can be further improved if multiple GPUs or multiple threads are used.
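To illustrate the data-transfer point from the list above (a hedged sketch; u0_cpu, u0_gpu, du_gpu, and rhs_gpu! are placeholders, not the project's actual API): the idea is to move the state to the GPU once, before time integration, so that the right-hand-side evaluation works on GPU arrays throughout instead of copying data on every call.

using CUDA

u0_cpu = rand(Float64, 1024)  # initial condition assembled on the CPU
u0_gpu = CuArray(u0_cpu)      # one-time host-to-device transfer before solving
du_gpu = similar(u0_gpu)      # GPU buffer reused by every right-hand-side evaluation

# A GPU-aware rhs_gpu!(du_gpu, u_gpu, p, t) would then launch CUDA kernels on these
# arrays directly, without CuArray(...)/Array(...) conversions inside the function.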
Performance Benchmarks
The performance benchmarks were conducted for both the CPU and the GPU with the Float64 and Float32 types, respectively. The example files elixir_advection_basic.jl, elixir_euler_ec.jl, and elixir_euler_source_terms.jl were chosen from tree_1d_dgsem, tree_2d_dgsem, and tree_3d_dgsem under the src/examples directory. These examples were chosen because they are consistent across the 1D, 2D, and 3D cases. Please note that all the examples have passed the accuracy tests, and you can check them using this link to examples.
The benchmark results were archived in another file; please use this link to benchmarks to check them. Also note that the benchmarks focused on the time integration part (i.e., on OrdinaryDiffEq.solve); see a benchmark example below:
# Run on CPU
@benchmark begin
sol_cpu = OrdinaryDiffEq.solve(ode_cpu, BS3(), adaptive=false, dt=0.01;
abstol=1.0e-6, reltol=1.0e-6, ode_default_options()...)
end

# Run on GPU
@benchmark begin
sol_gpu = OrdinaryDiffEq.solve(ode_gpu, BS3(), adaptive=false, dt=0.01;
abstol=1.0e-6, reltol=1.0e-6, ode_default_options()...)
end
From the benchmark results, the GPU did not perform better than the CPU in general (though there were some exceptions). Furthermore, the Memory estimate and allocs estimate statistics from the GPU runs are much larger than those from the CPU runs. This is probably due to the design that all data transfer happens in the rhs! function, so the memory cost of repeatedly transferring data between the CPU and GPU is extremely high.
In addition, the results indicate that the GPU performs better with 2D and 3D examples than with 1D examples. That is because GPUs are designed to handle a large number of parallel tasks, and 2D and 3D problems usually offer more parallelism compared to 1D problems. Essentially, the more data you can process simultaneously, the more efficiently you can utilize the GPU. 1D problems may not be complex enough to take full advantage of the GPU parallel processing capability.
Future Work
Future work is listed below, ordered from the most specific to the most general:
Complete the prototype for the remaining kernels (please refer to the Kernel to be Implemented section in the README.md file).
Update PR #1604 and get it merged into the repository
Optimize CUDA kernels to improve performance (especially data transfer, please refer to the kernel optimization part)
Prototype the GPU kernels for other DG solvers (for example, DGMulti)
Extend the single-GPU support to multi-GPU support (similarly, from single-thread to multi-thread)
Broaden compatibility to other GPU types beyond Nvidia (such as those from Apple, Intel, and AMD)
Acknowledgements
I would like to express my gratitude to Google, the Julia community, and my mentors (Hendrik Ranocha, Michael Schlottke-Lakemper, and Jesse Chan) for this enriching experience during the Google Summer of Code 2023 program. This opportunity to participate, enhance my skills, and contribute to the advancement of Julia has been both challenging and rewarding.
Special thanks go to my GSoC mentor Hendrik Ranocha (@ranocha) and to Tim Besard (@maleadt) from JuliaGPU (who is not my mentor), whose guidance and support throughout our regular discussions have been instrumental in answering my questions and overcoming hurdles. The Julia community is incredibly welcoming and supportive, and I am proud to have been a part of this endeavor.
I am filled with appreciation for this fantastic summer of learning and development, and I look forward to seeing the continued growth of Julia and the contributions of its vibrant community.