From fce908a0bae2e60769cca718060728ebc5721074 Mon Sep 17 00:00:00 2001 From: sloede Date: Mon, 8 Jul 2024 15:01:14 +0000 Subject: [PATCH] =?UTF-8?q?Deploying=20to=20gh-pages=20from=20=20@=204484b?= =?UTF-8?q?6cb5a3d2bdb3c241a2342ff551a5cba4f87=20=F0=9F=9A=80?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- 404.html | 2 +- Manifest.toml | 4 ++-- index.html | 2 +- .../gpu-acceleration-in-trixi-jl-using-cuda-jl/index.html | 2 +- package-lock.json | 6 +++--- sitemap.xml | 4 ++-- 6 files changed, 10 insertions(+), 10 deletions(-) diff --git a/404.html b/404.html index 717c0ec..fc4858c 100644 --- a/404.html +++ b/404.html @@ -1 +1 @@ - 404

404


The requested page was not found



Click here to go back to the homepage.
\ No newline at end of file + 404

404


The requested page was not found



Click here to go back to the homepage.
\ No newline at end of file diff --git a/Manifest.toml b/Manifest.toml index 81d326e..7046d3b 100644 --- a/Manifest.toml +++ b/Manifest.toml @@ -22,9 +22,9 @@ version = "0.7.5" [[ConcurrentUtilities]] deps = ["Serialization", "Sockets"] -git-tree-sha1 = "6cbbd4d241d7e6579ab354737f4dd95ca43946e1" +git-tree-sha1 = "ea32b83ca4fefa1768dc84e504cc0a94fb1ab8d1" uuid = "f0e56b4a-5159-44fe-b623-3e5288b988bb" -version = "2.4.1" +version = "2.4.2" [[Dates]] deps = ["Printf"] diff --git a/index.html b/index.html index 1561835..a703a6f 100644 --- a/index.html +++ b/index.html @@ -1 +1 @@ - Trixi Framework

Trixi Framework

The Trixi framework is a collaborative scientific effort to provide open source tools for adaptive high-order numerical simulations of hyperbolic PDEs in Julia. Besides the core algorithms, the framework also includes mesh and visualization tools. Moreover, it includes utilities such as Julia wrappers of mature libraries written in other programming languages.

This page gives an overview of the different activities that, together, constitute the Trixi framework on GitHub.

Adaptive high-order numerical simulations of hyperbolic PDEs

Mesh generation

Particle-based multiphysics simulations

Additional packages

Publications

The following publications make use of Trixi.jl or one of the other packages listed above. Author names of Trixi.jl's main developers are in italics.

2024

2023

2022

2021

Talks

2023

2022

2021

Outreach

Google Summer of Code 2023

Trixi.jl participated in Google Summer of Code 2023, taking its first steps towards running on GPUs. This project was mentored by Hendrik Ranocha and Michael Schlottke-Lakemper. Here you can find the report from our contributor Huiyu Xie.

Authors

Michael Schlottke-Lakemper (University of Augsburg, Germany), Gregor Gassner (University of Cologne, Germany), Hendrik Ranocha (University of Hamburg, Germany), Andrew Winters (Linköping University, Sweden), and Jesse Chan (Rice University, US) are the principal developers of Trixi.jl. David A. Kopriva (Florida State University, US) is the principal developer of HOHQMesh and HOHQMesh.jl. For a full list of authors, please check out the respective packages.

Get in touch!

There are a number of ways to reach out to us:

Acknowledgments

This project has benefited from funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through the following grants:

This project has benefited from funding from the European Research Council through the ERC Starting Grant "An Exascale aware and Un-crashable Space-Time-Adaptive Discontinuous Spectral Element Solver for Non-Linear Conservation Laws" (Extreme), ERC grant agreement no. 714487.

This project has benefited from funding from Vetenskapsrådet (VR, Swedish Research Council), Sweden through the VR Starting Grant "Shallow water flows including sediment transport and morphodynamics", VR grant agreement 2020-03642 VR.

This project has benefited from funding from the United States National Science Foundation (NSF) under awards DMS-1719818 and DMS-1943186.

This project has benefited from funding from the German Federal Ministry of Education and Research (BMBF) through the project grant "Adaptive earth system modeling with significantly reduced computation time for exascale supercomputers (ADAPTEX)" (funding id: 16ME0668K).

This project has benefited from funding by the Daimler und Benz Stiftung (Daimler and Benz Foundation) through grant no. 32-10/22.

Trixi.jl is supported by NumFOCUS as an Affiliated Project.

\ No newline at end of file + Trixi Framework

Trixi Framework

The Trixi framework is a collaborative scientific effort to provide open source tools for adaptive high-order numerical simulations of hyperbolic PDEs in Julia. Besides the core algorithms, the framework also includes mesh and visualization tools. Moreover, it includes utilities such as Julia wrappers of mature libraries written in other programming languages.

This page gives an overview of the different activities that, together, constitute the Trixi framework on GitHub.

Adaptive high-order numerical simulations of hyperbolic PDEs

Mesh generation

Particle-based multiphysics simulations

Additional packages

Publications

The following publications make use of Trixi.jl or one of the other packages listed above. Author names of Trixi.jl's main developers are in italics.

2024

2023

2022

2021

Talks

2024

2023

2022

2021

Outreach

Google Summer of Code 2023

Trixi.jl participated in Google Summer of Code 2023, taking its first steps towards running on GPUs. This project was mentored by Hendrik Ranocha and Michael Schlottke-Lakemper. Here you can find the report from our contributor Huiyu Xie.

Authors

Michael Schlottke-Lakemper (University of Augsburg, Germany), Gregor Gassner (University of Cologne, Germany), Hendrik Ranocha (University of Hamburg, Germany), Andrew Winters (Linköping University, Sweden), and Jesse Chan (Rice University, US) are the principal developers of Trixi.jl. David A. Kopriva (Florida State University, US) is the principal developer of HOHQMesh and HOHQMesh.jl. For a full list of authors, please check out the respective packages.

Get in touch!

There are a number of ways to reach out to us:

Acknowledgments

This project has benefited from funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through the following grants:

This project has benefited from funding from the European Research Council through the ERC Starting Grant "An Exascale aware and Un-crashable Space-Time-Adaptive Discontinuous Spectral Element Solver for Non-Linear Conservation Laws" (Extreme), ERC grant agreement no. 714487.

This project has benefited from funding from Vetenskapsrådet (VR, Swedish Research Council), Sweden through the VR Starting Grant "Shallow water flows including sediment transport and morphodynamics", VR grant agreement 2020-03642 VR.

This project has benefited from funding from the United States National Science Foundation (NSF) under awards DMS-1719818 and DMS-1943186.

This project has benefited from funding from the German Federal Ministry of Education and Research (BMBF) through the project grant "Adaptive earth system modeling with significantly reduced computation time for exascale supercomputers (ADAPTEX)" (funding id: 16ME0668K).

This project has benefited from funding by the Daimler und Benz Stiftung (Daimler and Benz Foundation) through grant no. 32-10/22.

Trixi.jl is supported by NumFOCUS as an Affiliated Project.

\ No newline at end of file diff --git a/outreach/gsoc/2023/gpu-acceleration-in-trixi-jl-using-cuda-jl/index.html b/outreach/gsoc/2023/gpu-acceleration-in-trixi-jl-using-cuda-jl/index.html index b25a72e..ee8a6bf 100644 --- a/outreach/gsoc/2023/gpu-acceleration-in-trixi-jl-using-cuda-jl/index.html +++ b/outreach/gsoc/2023/gpu-acceleration-in-trixi-jl-using-cuda-jl/index.html @@ -10,4 +10,4 @@ @benchmark begin sol_gpu = OrdinaryDiffEq.solve(ode_gpu, BS3(), adaptive=false, dt=0.01; abstol=1.0e-6, reltol=1.0e-6, ode_default_options()...) -end

The benchmark results show that, in general, the GPU did not outperform the CPU (though there were some exceptions). Furthermore, the memory and allocation estimates reported for the GPU are much larger than those for the CPU. This is most likely because all data transfer happens inside the rhs! function, so data is copied back and forth between the CPU and GPU on every evaluation, at a high memory cost.

In addition, the results indicate that the GPU performs better on 2D and 3D examples than on 1D examples. This is because GPUs are designed to handle a large number of parallel tasks, and 2D and 3D problems usually offer more parallelism than 1D problems. Essentially, the more data that can be processed simultaneously, the more efficiently the GPU is utilized; 1D problems may simply not be large enough to exploit its full parallel processing capability.

Future Work

Future work is listed below, ordered from most specific to most general:

  1. Resolve Issue #9 and Issue #11 (and any upcoming issues)

  2. Complete the prototypes for the remaining kernels (see the Kernel to be Implemented section in the README file)

  3. Update PR #1604 and get it merged into the repository

  4. Optimize the CUDA kernels to improve performance, especially data transfer (see the kernel optimization section)

  5. Prototype GPU kernels for other DG solvers (for example, DGMulti)

  6. Extend single-GPU support to multi-GPU support (analogous to moving from single-threaded to multi-threaded execution)

  7. Broaden compatibility to GPUs beyond NVIDIA's (such as those from Apple, Intel, and AMD)

Acknowledgements

I would like to express my gratitude to Google, the Julia community, and my mentors (Hendrik Ranocha, Michael Schlottke-Lakemper, and Jesse Chan) for this enriching experience during the Google Summer of Code 2023 program. This opportunity to participate, enhance my skills, and contribute to the advancement of Julia has been both challenging and rewarding.

Special thanks go to my GSoC mentor Hendrik Ranocha (@ranocha) and to Tim Besard (@maleadt) from JuliaGPU (though he was not my mentor), whose guidance and support throughout our regular discussions were instrumental in answering my questions and overcoming hurdles. The Julia community is incredibly welcoming and supportive, and I am proud to have been a part of this endeavor.

I am filled with appreciation for this fantastic summer of learning and development, and I look forward to seeing the continued growth of Julia and the contributions of its vibrant community.

\ No newline at end of file +end

The benchmark results show that, in general, the GPU did not outperform the CPU (though there were some exceptions). Furthermore, the memory and allocation estimates reported for the GPU are much larger than those for the CPU. This is most likely because all data transfer happens inside the rhs! function, so data is copied back and forth between the CPU and GPU on every evaluation, at a high memory cost.

In addition, the results indicate that the GPU performs better on 2D and 3D examples than on 1D examples. This is because GPUs are designed to handle a large number of parallel tasks, and 2D and 3D problems usually offer more parallelism than 1D problems. Essentially, the more data that can be processed simultaneously, the more efficiently the GPU is utilized; 1D problems may simply not be large enough to exploit its full parallel processing capability.
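The transfer-cost issue described above can be illustrated with a small, hypothetical CUDA.jl sketch (the function names and the trivial right-hand side are illustrative placeholders, not Trixi.jl's actual API; running it requires an NVIDIA GPU):

```julia
using CUDA

# Anti-pattern: transferring data to and from the device inside every RHS
# evaluation, as in the prototype's rhs! design. Each call pays two full copies.
function rhs_with_transfer!(du, u)
    u_gpu = CuArray(u)       # host -> device copy on every call
    du_gpu = 2.0f0 .* u_gpu  # placeholder for the actual DG kernels
    copyto!(du, du_gpu)      # device -> host copy on every call
    return du
end

# Preferred pattern: move the data once, keep it resident on the GPU, and only
# copy back when the host actually needs the result.
function integrate_on_device(u0, dt, nsteps)
    u_gpu = CuArray(u0)                 # single host -> device copy
    for _ in 1:nsteps
        u_gpu .+= dt .* (2.0f0 .* u_gpu)  # forward Euler step, entirely on the GPU
    end
    return Array(u_gpu)                 # single device -> host copy
end
```

With the data resident on the device, only two transfers occur regardless of the number of time steps, so the memory and allocation estimates reported by @benchmark should shrink accordingly.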

Future Work

Future work is listed below, ordered from most specific to most general:

  1. Resolve Issue #9 and Issue #11 (and any upcoming issues)

  2. Complete the prototypes for the remaining kernels (see the Kernel to be Implemented section in the README file)

  3. Update PR #1604 and get it merged into the repository

  4. Optimize the CUDA kernels to improve performance, especially data transfer (see the kernel optimization section)

  5. Prototype GPU kernels for other DG solvers (for example, DGMulti)

  6. Extend single-GPU support to multi-GPU support (analogous to moving from single-threaded to multi-threaded execution)

  7. Broaden compatibility to GPUs beyond NVIDIA's (such as those from Apple, Intel, and AMD)

Acknowledgements

I would like to express my gratitude to Google, the Julia community, and my mentors (Hendrik Ranocha, Michael Schlottke-Lakemper, and Jesse Chan) for this enriching experience during the Google Summer of Code 2023 program. This opportunity to participate, enhance my skills, and contribute to the advancement of Julia has been both challenging and rewarding.

Special thanks go to my GSoC mentor Hendrik Ranocha (@ranocha) and to Tim Besard (@maleadt) from JuliaGPU (though he was not my mentor), whose guidance and support throughout our regular discussions were instrumental in answering my questions and overcoming hurdles. The Julia community is incredibly welcoming and supportive, and I am proud to have been a part of this endeavor.

I am filled with appreciation for this fantastic summer of learning and development, and I look forward to seeing the continued growth of Julia and the contributions of its vibrant community.

\ No newline at end of file diff --git a/package-lock.json b/package-lock.json index f6c04c1..bb11ebc 100644 --- a/package-lock.json +++ b/package-lock.json @@ -3,9 +3,9 @@ "lockfileVersion": 1, "dependencies": { "highlight.js": { - "version": "11.9.0", - "resolved": "https://registry.npmjs.org/highlight.js/-/highlight.js-11.9.0.tgz", - "integrity": "sha512-fJ7cW7fQGCYAkgv4CPfwFHrfd/cLS4Hau96JuJ+ZTOWhjnhoeN1ub1tFmALm/+lW5z4WCAuAV9bm05AP0mS6Gw==" + "version": "11.10.0", + "resolved": "https://registry.npmjs.org/highlight.js/-/highlight.js-11.10.0.tgz", + "integrity": "sha512-SYVnVFswQER+zu1laSya563s+F8VDGt7o35d4utbamowvUNLLMovFqwCLSocpZTz3MgaSRA1IbqRWZv97dtErQ==" } } } diff --git a/sitemap.xml b/sitemap.xml index 091e3f6..2bcabb3 100644 --- a/sitemap.xml +++ b/sitemap.xml @@ -3,13 +3,13 @@ https://trixi-framework.github.io/outreach/gsoc/2023/gpu-acceleration-in-trixi-jl-using-cuda-jl/index.html - 2024-07-02 + 2024-07-08 monthly 0.5 https://trixi-framework.github.io/index.html - 2024-07-02 + 2024-07-08 monthly 0.5