From c92441d18df7f267d72dda0377b37aa6ad090cc9 Mon Sep 17 00:00:00 2001
From: Ludovic Raess
Date: Sun, 1 Dec 2024 11:01:13 +0100
Subject: [PATCH] Fixup

---
 slide-notebooks/l9_1-projects.jl              | 19 +++++---------
 slide-notebooks/notebooks/l1_1-admin.ipynb    |  6 ++---
 slide-notebooks/notebooks/l1_2-why-gpu.ipynb  |  6 ++---
 .../notebooks/l1_3-julia-intro.ipynb          |  6 ++---
 slide-notebooks/notebooks/l9_1-projects.ipynb | 26 +++++--------------
 website/_literate/l9_1-projects_web.jl        | 19 +++++---------
 6 files changed, 30 insertions(+), 52 deletions(-)

diff --git a/slide-notebooks/l9_1-projects.jl b/slide-notebooks/l9_1-projects.jl
index 0b83abfd..c072df88 100644
--- a/slide-notebooks/l9_1-projects.jl
+++ b/slide-notebooks/l9_1-projects.jl
@@ -66,7 +66,7 @@ import MPI
 #src #########################################################################
 #nb # %% A slide [markdown] {"slideshow": {"slide_type": "slide"}}
 md"""
-3. Also add global maximum computation using MPI reduction function
+3. Further, add a global maximum computation using an MPI reduction function, to be used instead of `maximum()`
 """
 max_g(A) = (max_l = maximum(A); MPI.Allreduce(max_l, MPI.MAX, MPI.COMM_WORLD))
@@ -112,12 +112,7 @@ end
 #src #########################################################################
 #nb # %% A slide [markdown] {"slideshow": {"slide_type": "slide"}}
 md"""
-7. Use the `max_g` function in the timestep `dt` definition (instead of `maximum`) as one now needs to gather the global maximum among all MPI processes.
-"""
-
-#nb # %% A slide [markdown] {"slideshow": {"slide_type": "fragment"}}
-md"""
-8. Moving to the time loop, add halo update function `update_halo!` after the kernel that computes the fluid fluxes. You can additionally wrap it in the `@hide_communication` block to enable communication/computation overlap (using `b_width` defined above)
+7. Moving to the time loop, add the halo update function `update_halo!` after the kernel that computes the fluid fluxes.
You can additionally wrap it in the `@hide_communication` block to enable communication/computation overlap (using `b_width` defined above)
 """
 @hide_communication b_width begin
     @parallel compute_Dflux!(qDx, qDy, qDz, Pf, T, k_ηf, _dx, _dy, _dz, αρg, _1_θ_dτ_D)
@@ -127,7 +122,7 @@ end
 #src #########################################################################
 #nb # %% A slide [markdown] {"slideshow": {"slide_type": "slide"}}
 md"""
-9. Apply a similar step to the temperature update, where you can also include boundary condition computation as following (⚠️ no other construct is currently allowed)
+8. Apply a similar step to the temperature update, where you can also include the boundary condition computation as follows (⚠️ no other construct is currently allowed)
 """
 @hide_communication b_width begin
     @parallel update_T!(T, qTx, qTy, qTz, dTdt, _dx, _dy, _dz, _1_dt_β_dτ_T)
@@ -139,7 +134,7 @@ end
 #src #########################################################################
 #nb # %% A slide [markdown] {"slideshow": {"slide_type": "slide"}}
 md"""
-10. Use now the `max_g` function instead of `maximum` to collect the global maximum among all local arrays spanning all MPI processes.
+9. Now use the `max_g` function instead of `maximum` to collect the global maximum among all local arrays spanning all MPI processes, both in the timestep `dt` definition and in the error calculation.
 """
 ## time step
 dt = if it == 1
@@ -151,12 +146,12 @@ end
 #src #########################################################################
 #nb # %% A slide [markdown] {"slideshow": {"slide_type": "slide"}}
 md"""
-11. Make sure all printing statements are only executed by `me==0` in order to avoid each MPI process to print to screen, and use `nx_g()` instead of local `nx` in the printed statements when assessing the iteration per number of grid points.
+10.
Make sure all printing statements are executed only by `me==0` to avoid every MPI process printing to screen, and use `nx_g()` instead of the local `nx` in the printed statements when assessing iterations per number of grid points.
 """

 #nb # %% A slide [markdown] {"slideshow": {"slide_type": "fragment"}}
 md"""
-12. Update the visualisation and output saving part
+11. Update the visualisation and output-saving part
 """
 ## visualisation
 if do_viz && (it % nvis == 0)
@@ -172,7 +167,7 @@ end
 #src #########################################################################
 #nb # %% A slide [markdown] {"slideshow": {"slide_type": "slide"}}
 md"""
-13. Finalise the global grid before returning from the main function
+12. Finalise the global grid before returning from the main function
 """
 finalize_global_grid()
 return
diff --git a/slide-notebooks/notebooks/l1_1-admin.ipynb b/slide-notebooks/notebooks/l1_1-admin.ipynb
index e42c416d..2b1cf179 100644
--- a/slide-notebooks/notebooks/l1_1-admin.ipynb
+++ b/slide-notebooks/notebooks/l1_1-admin.ipynb
@@ -187,11 +187,11 @@
   "file_extension": ".jl",
   "mimetype": "application/julia",
   "name": "julia",
-  "version": "1.10.5"
+  "version": "1.11.1"
  },
  "kernelspec": {
-  "name": "julia-1.10",
-  "display_name": "Julia 1.10.5",
+  "name": "julia-1.11",
+  "display_name": "Julia 1.11.1",
   "language": "julia"
  }
 },
diff --git a/slide-notebooks/notebooks/l1_2-why-gpu.ipynb b/slide-notebooks/notebooks/l1_2-why-gpu.ipynb
index 9e20a12a..e697ba16 100644
--- a/slide-notebooks/notebooks/l1_2-why-gpu.ipynb
+++ b/slide-notebooks/notebooks/l1_2-why-gpu.ipynb
@@ -331,11 +331,11 @@
   "file_extension": ".jl",
   "mimetype": "application/julia",
   "name": "julia",
-  "version": "1.10.5"
+  "version": "1.11.1"
  },
  "kernelspec": {
-  "name": "julia-1.10",
-  "display_name": "Julia 1.10.5",
+  "name": "julia-1.11",
+  "display_name": "Julia 1.11.1",
   "language": "julia"
  }
 },
diff --git a/slide-notebooks/notebooks/l1_3-julia-intro.ipynb
b/slide-notebooks/notebooks/l1_3-julia-intro.ipynb
index 5816ed03..3514af88 100644
--- a/slide-notebooks/notebooks/l1_3-julia-intro.ipynb
+++ b/slide-notebooks/notebooks/l1_3-julia-intro.ipynb
@@ -1533,11 +1533,11 @@
   "file_extension": ".jl",
   "mimetype": "application/julia",
   "name": "julia",
-  "version": "1.10.5"
+  "version": "1.11.1"
  },
  "kernelspec": {
-  "name": "julia-1.10",
-  "display_name": "Julia 1.10.5",
+  "name": "julia-1.11",
+  "display_name": "Julia 1.11.1",
   "language": "julia"
  }
 },
diff --git a/slide-notebooks/notebooks/l9_1-projects.ipynb b/slide-notebooks/notebooks/l9_1-projects.ipynb
index fd821037..171b1dfb 100644
--- a/slide-notebooks/notebooks/l9_1-projects.ipynb
+++ b/slide-notebooks/notebooks/l9_1-projects.ipynb
@@ -118,7 +118,7 @@
 {
  "cell_type": "markdown",
  "source": [
-  "3. Also add global maximum computation using MPI reduction function"
+  "3. Further, add a global maximum computation using an MPI reduction function, to be used instead of `maximum()`"
  ],
  "metadata": {
   "name": "A slide ",
   "slideshow": {
    "slide_type": "slide"
   }
  }
 },
@@ -220,7 +220,7 @@
 {
  "cell_type": "markdown",
  "source": [
-  "7. Use the `max_g` function in the timestep `dt` definition (instead of `maximum`) as one now needs to gather the global maximum among all MPI processes."
+  "7. Moving to the time loop, add the halo update function `update_halo!` after the kernel that computes the fluid fluxes. You can additionally wrap it in the `@hide_communication` block to enable communication/computation overlap (using `b_width` defined above)"
  ],
  "metadata": {
   "name": "A slide ",
   "slideshow": {
    "slide_type": "slide"
   }
  }
 },
@@ -229,18 +229,6 @@
   }
  }
 },
-{
- "cell_type": "markdown",
- "source": [
-  "8. Moving to the time loop, add halo update function `update_halo!` after the kernel that computes the fluid fluxes.
You can additionally wrap it in the `@hide_communication` block to enable communication/computation overlap (using `b_width` defined above)"
- ],
- "metadata": {
-  "name": "A slide ",
-  "slideshow": {
-   "slide_type": "fragment"
-  }
- }
-},
 {
  "outputs": [],
  "cell_type": "code",
@@ -256,7 +244,7 @@
 {
  "cell_type": "markdown",
  "source": [
-  "9. Apply a similar step to the temperature update, where you can also include boundary condition computation as following (⚠️ no other construct is currently allowed)"
+  "8. Apply a similar step to the temperature update, where you can also include the boundary condition computation as follows (⚠️ no other construct is currently allowed)"
  ],
  "metadata": {
   "name": "A slide ",
   "slideshow": {
    "slide_type": "slide"
   }
  }
 },
@@ -282,7 +270,7 @@
 {
  "cell_type": "markdown",
  "source": [
-  "10. Use now the `max_g` function instead of `maximum` to collect the global maximum among all local arrays spanning all MPI processes."
+  "9. Now use the `max_g` function instead of `maximum` to collect the global maximum among all local arrays spanning all MPI processes, both in the timestep `dt` definition and in the error calculation."
  ],
  "metadata": {
   "name": "A slide ",
   "slideshow": {
    "slide_type": "slide"
   }
  }
 },
@@ -308,7 +296,7 @@
 {
  "cell_type": "markdown",
  "source": [
-  "11. Make sure all printing statements are only executed by `me==0` in order to avoid each MPI process to print to screen, and use `nx_g()` instead of local `nx` in the printed statements when assessing the iteration per number of grid points."
+  "10. Make sure all printing statements are executed only by `me==0` to avoid every MPI process printing to screen, and use `nx_g()` instead of the local `nx` in the printed statements when assessing iterations per number of grid points."
  ],
  "metadata": {
   "name": "A slide ",
   "slideshow": {
    "slide_type": "slide"
   }
  }
 },
@@ -320,7 +308,7 @@
 {
  "cell_type": "markdown",
  "source": [
-  "12. Update the visualisation and output saving part"
+  "11.
Update the visualisation and output-saving part"
  ],
  "metadata": {
   "name": "A slide ",
   "slideshow": {
    "slide_type": "fragment"
   }
  }
 },
@@ -350,7 +338,7 @@
 {
  "cell_type": "markdown",
  "source": [
-  "13. Finalise the global grid before returning from the main function"
+  "12. Finalise the global grid before returning from the main function"
  ],
  "metadata": {
   "name": "A slide ",
diff --git a/website/_literate/l9_1-projects_web.jl b/website/_literate/l9_1-projects_web.jl
index a8d8353a..a689e18b 100644
--- a/website/_literate/l9_1-projects_web.jl
+++ b/website/_literate/l9_1-projects_web.jl
@@ -66,7 +66,7 @@ import MPI
 #src #########################################################################
 #nb # %% A slide [markdown] {"slideshow": {"slide_type": "slide"}}
 md"""
-3. Also add global maximum computation using MPI reduction function
+3. Further, add a global maximum computation using an MPI reduction function, to be used instead of `maximum()`
 """
 max_g(A) = (max_l = maximum(A); MPI.Allreduce(max_l, MPI.MAX, MPI.COMM_WORLD))
@@ -112,12 +112,7 @@ end
 #src #########################################################################
 #nb # %% A slide [markdown] {"slideshow": {"slide_type": "slide"}}
 md"""
-7. Use the `max_g` function in the timestep `dt` definition (instead of `maximum`) as one now needs to gather the global maximum among all MPI processes.
-"""
-
-#nb # %% A slide [markdown] {"slideshow": {"slide_type": "fragment"}}
-md"""
-8. Moving to the time loop, add halo update function `update_halo!` after the kernel that computes the fluid fluxes. You can additionally wrap it in the `@hide_communication` block to enable communication/computation overlap (using `b_width` defined above)
+7. Moving to the time loop, add the halo update function `update_halo!` after the kernel that computes the fluid fluxes.
You can additionally wrap it in the `@hide_communication` block to enable communication/computation overlap (using `b_width` defined above)
 """
 @hide_communication b_width begin
     @parallel compute_Dflux!(qDx, qDy, qDz, Pf, T, k_ηf, _dx, _dy, _dz, αρg, _1_θ_dτ_D)
@@ -127,7 +122,7 @@ end
 #src #########################################################################
 #nb # %% A slide [markdown] {"slideshow": {"slide_type": "slide"}}
 md"""
-9. Apply a similar step to the temperature update, where you can also include boundary condition computation as following (⚠️ no other construct is currently allowed)
+8. Apply a similar step to the temperature update, where you can also include the boundary condition computation as follows (⚠️ no other construct is currently allowed)
 """
 @hide_communication b_width begin
     @parallel update_T!(T, qTx, qTy, qTz, dTdt, _dx, _dy, _dz, _1_dt_β_dτ_T)
@@ -139,7 +134,7 @@ end
 #src #########################################################################
 #nb # %% A slide [markdown] {"slideshow": {"slide_type": "slide"}}
 md"""
-10. Use now the `max_g` function instead of `maximum` to collect the global maximum among all local arrays spanning all MPI processes.
+9. Now use the `max_g` function instead of `maximum` to collect the global maximum among all local arrays spanning all MPI processes, both in the timestep `dt` definition and in the error calculation.
 """
 ## time step
 dt = if it == 1
@@ -151,12 +146,12 @@ end
 #src #########################################################################
 #nb # %% A slide [markdown] {"slideshow": {"slide_type": "slide"}}
 md"""
-11. Make sure all printing statements are only executed by `me==0` in order to avoid each MPI process to print to screen, and use `nx_g()` instead of local `nx` in the printed statements when assessing the iteration per number of grid points.
+10.
Make sure all printing statements are executed only by `me==0` to avoid every MPI process printing to screen, and use `nx_g()` instead of the local `nx` in the printed statements when assessing iterations per number of grid points.
 """

 #nb # %% A slide [markdown] {"slideshow": {"slide_type": "fragment"}}
 md"""
-12. Update the visualisation and output saving part
+11. Update the visualisation and output-saving part
 """
 ## visualisation
 if do_viz && (it % nvis == 0)
@@ -172,7 +167,7 @@ end
 #src #########################################################################
 #nb # %% A slide [markdown] {"slideshow": {"slide_type": "slide"}}
 md"""
-13. Finalise the global grid before returning from the main function
+12. Finalise the global grid before returning from the main function
 """
 finalize_global_grid()
 return