Merge pull request #23
Fixup redundant content
luraess authored Dec 1, 2024
2 parents 7bd1a86 + c92441d commit c321aa8
Showing 6 changed files with 30 additions and 52 deletions.
19 changes: 7 additions & 12 deletions slide-notebooks/l9_1-projects.jl
@@ -66,7 +66,7 @@ import MPI
#src #########################################################################
#nb # %% A slide [markdown] {"slideshow": {"slide_type": "slide"}}
md"""
3. Also add global maximum computation using MPI reduction function
3. Further, add global maximum computation using MPI reduction function to be used instead of `maximum()`
"""
max_g(A) = (max_l = maximum(A); MPI.Allreduce(max_l, MPI.MAX, MPI.COMM_WORLD))
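As a standalone illustration (outside the course code), the reduction pattern behind `max_g` can be exercised on its own; the array contents below are made up, and something like `mpiexecjl -n 4 julia reduce_max.jl` would run it on four ranks:

using MPI
MPI.Init()
max_g(A) = (max_l = maximum(A); MPI.Allreduce(max_l, MPI.MAX, MPI.COMM_WORLD))
A_loc = rand(8, 8) .+ MPI.Comm_rank(MPI.COMM_WORLD)  ## each rank owns a different local block
## every rank computes its local maximum; Allreduce combines them so all ranks hold the global one
println("rank $(MPI.Comm_rank(MPI.COMM_WORLD)): max_g = $(max_g(A_loc))")
MPI.Finalize()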

@@ -112,12 +112,7 @@ end
#src #########################################################################
#nb # %% A slide [markdown] {"slideshow": {"slide_type": "slide"}}
md"""
7. Use the `max_g` function in the timestep `dt` definition (instead of `maximum`) as one now needs to gather the global maximum among all MPI processes.
"""

#nb # %% A slide [markdown] {"slideshow": {"slide_type": "fragment"}}
md"""
8. Moving to the time loop, add halo update function `update_halo!` after the kernel that computes the fluid fluxes. You can additionally wrap it in the `@hide_communication` block to enable communication/computation overlap (using `b_width` defined above)
7. Moving to the time loop, add halo update function `update_halo!` after the kernel that computes the fluid fluxes. You can additionally wrap it in the `@hide_communication` block to enable communication/computation overlap (using `b_width` defined above)
"""
@hide_communication b_width begin
@parallel compute_Dflux!(qDx, qDy, qDz, Pf, T, k_ηf, _dx, _dy, _dz, αρg, _1_θ_dτ_D)
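## The collapsed lines hide the end of this block; a minimal sketch of how it plausibly
## closes (the flux arrays are those passed to compute_Dflux! above; illustrative only):
    update_halo!(qDx, qDy, qDz)  ## halo exchange of the flux arrays, overlapped with the interior computation
end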
@@ -127,7 +122,7 @@ end
#src #########################################################################
#nb # %% A slide [markdown] {"slideshow": {"slide_type": "slide"}}
md"""
9. Apply a similar step to the temperature update, where you can also include boundary condition computation as following (⚠️ no other construct is currently allowed)
8. Apply a similar step to the temperature update, where you can also include boundary condition computation as following (⚠️ no other construct is currently allowed)
"""
@hide_communication b_width begin
@parallel update_T!(T, qTx, qTy, qTz, dTdt, _dx, _dy, _dz, _1_dt_β_dτ_T)
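## A sketch of how the remainder of the block could look; the boundary-condition kernels
## bc_x!/bc_y! and their launch ranges are assumptions, not lines from this diff:
    @parallel (1:size(T, 2), 1:size(T, 3)) bc_x!(T)  ## hypothetical boundary-condition kernels
    @parallel (1:size(T, 1), 1:size(T, 3)) bc_y!(T)
    update_halo!(T)                                  ## the halo exchange goes last inside the block
end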
@@ -139,7 +134,7 @@ end
#src #########################################################################
#nb # %% A slide [markdown] {"slideshow": {"slide_type": "slide"}}
md"""
10. Use now the `max_g` function instead of `maximum` to collect the global maximum among all local arrays spanning all MPI processes.
9. Use now the `max_g` function instead of `maximum` to collect the global maximum among all local arrays spanning all MPI processes. Use it in the timestep `dt` definition and in the error calculation (instead of `maximum`).
"""
## time step
dt = if it == 1
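## The collapsed continuation plausibly reads along the following lines; illustrative sketch only,
## with the physics parameters (ϕ, αρg, ΔT, k_ηf) and residual arrays (r_Pf, r_T) assumed from the course script:
    0.1 * min(dx, dy, dz) / (αρg * ΔT * k_ηf)
else
    min(5.0 * min(dx, dy, dz) / (αρg * ΔT * k_ηf),
        ϕ * min(dx / max_g(abs.(qDx)), dy / max_g(abs.(qDy)), dz / max_g(abs.(qDz))) / 3.1)
end
## ... and later, in the iteration loop, the residual check uses the global maximum as well
err_D = max_g(abs.(r_Pf)); err_T = max_g(abs.(r_T))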
@@ -151,12 +146,12 @@ end
#src #########################################################################
#nb # %% A slide [markdown] {"slideshow": {"slide_type": "slide"}}
md"""
11. Make sure all printing statements are only executed by `me==0` in order to avoid each MPI process to print to screen, and use `nx_g()` instead of local `nx` in the printed statements when assessing the iteration per number of grid points.
10. Make sure all printing statements are only executed by `me==0` in order to avoid each MPI process to print to screen, and use `nx_g()` instead of local `nx` in the printed statements when assessing the iteration per number of grid points.
"""

#nb # %% A slide [markdown] {"slideshow": {"slide_type": "fragment"}}
md"""
12. Update the visualisation and output saving part
11. Update the visualisation and output saving part
"""
## visualisation
if do_viz && (it % nvis == 0)
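## Below the fold, the gather-then-save pattern typically looks as sketched here; T_inn and T_v
## are placeholder names for the local interior and the gathered global array, not identifiers from this diff:
    T_inn .= Array(T)[2:end-1, 2:end-1, 2:end-1]  ## local interior, halo cells dropped
    gather!(T_inn, T_v)                           ## assemble the global field on the root process
    if me == 0
        ## write or plot T_v here; only the root process produces output
    end
end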
@@ -172,7 +167,7 @@ end
#src #########################################################################
#nb # %% A slide [markdown] {"slideshow": {"slide_type": "slide"}}
md"""
13. Finalise the global grid before returning from the main function
12. Finalise the global grid before returning from the main function
"""
finalize_global_grid()
return
6 changes: 3 additions & 3 deletions slide-notebooks/notebooks/l1_1-admin.ipynb
@@ -187,11 +187,11 @@
"file_extension": ".jl",
"mimetype": "application/julia",
"name": "julia",
"version": "1.10.5"
"version": "1.11.1"
},
"kernelspec": {
"name": "julia-1.10",
"display_name": "Julia 1.10.5",
"name": "julia-1.11",
"display_name": "Julia 1.11.1",
"language": "julia"
}
},
6 changes: 3 additions & 3 deletions slide-notebooks/notebooks/l1_2-why-gpu.ipynb
@@ -331,11 +331,11 @@
"file_extension": ".jl",
"mimetype": "application/julia",
"name": "julia",
"version": "1.10.5"
"version": "1.11.1"
},
"kernelspec": {
"name": "julia-1.10",
"display_name": "Julia 1.10.5",
"name": "julia-1.11",
"display_name": "Julia 1.11.1",
"language": "julia"
}
},
6 changes: 3 additions & 3 deletions slide-notebooks/notebooks/l1_3-julia-intro.ipynb
@@ -1533,11 +1533,11 @@
"file_extension": ".jl",
"mimetype": "application/julia",
"name": "julia",
"version": "1.10.5"
"version": "1.11.1"
},
"kernelspec": {
"name": "julia-1.10",
"display_name": "Julia 1.10.5",
"name": "julia-1.11",
"display_name": "Julia 1.11.1",
"language": "julia"
}
},
26 changes: 7 additions & 19 deletions slide-notebooks/notebooks/l9_1-projects.ipynb
@@ -118,7 +118,7 @@
{
"cell_type": "markdown",
"source": [
"3. Also add global maximum computation using MPI reduction function"
"3. Further, add global maximum computation using MPI reduction function to be used instead of `maximum()`"
],
"metadata": {
"name": "A slide ",
@@ -220,7 +220,7 @@
{
"cell_type": "markdown",
"source": [
"7. Use the `max_g` function in the timestep `dt` definition (instead of `maximum`) as one now needs to gather the global maximum among all MPI processes."
"7. Moving to the time loop, add halo update function `update_halo!` after the kernel that computes the fluid fluxes. You can additionally wrap it in the `@hide_communication` block to enable communication/computation overlap (using `b_width` defined above)"
],
"metadata": {
"name": "A slide ",
@@ -229,18 +229,6 @@
}
}
},
{
"cell_type": "markdown",
"source": [
"8. Moving to the time loop, add halo update function `update_halo!` after the kernel that computes the fluid fluxes. You can additionally wrap it in the `@hide_communication` block to enable communication/computation overlap (using `b_width` defined above)"
],
"metadata": {
"name": "A slide ",
"slideshow": {
"slide_type": "fragment"
}
}
},
{
"outputs": [],
"cell_type": "code",
@@ -256,7 +244,7 @@
{
"cell_type": "markdown",
"source": [
"9. Apply a similar step to the temperature update, where you can also include boundary condition computation as following (⚠️ no other construct is currently allowed)"
"8. Apply a similar step to the temperature update, where you can also include boundary condition computation as following (⚠️ no other construct is currently allowed)"
],
"metadata": {
"name": "A slide ",
@@ -282,7 +270,7 @@
{
"cell_type": "markdown",
"source": [
"10. Use now the `max_g` function instead of `maximum` to collect the global maximum among all local arrays spanning all MPI processes."
"9. Use now the `max_g` function instead of `maximum` to collect the global maximum among all local arrays spanning all MPI processes. Use it in the timestep `dt` definition and in the error calculation (instead of `maximum`)."
],
"metadata": {
"name": "A slide ",
@@ -308,7 +296,7 @@
{
"cell_type": "markdown",
"source": [
"11. Make sure all printing statements are only executed by `me==0` in order to avoid each MPI process to print to screen, and use `nx_g()` instead of local `nx` in the printed statements when assessing the iteration per number of grid points."
"10. Make sure all printing statements are only executed by `me==0` in order to avoid each MPI process to print to screen, and use `nx_g()` instead of local `nx` in the printed statements when assessing the iteration per number of grid points."
],
"metadata": {
"name": "A slide ",
@@ -320,7 +308,7 @@
{
"cell_type": "markdown",
"source": [
"12. Update the visualisation and output saving part"
"11. Update the visualisation and output saving part"
],
"metadata": {
"name": "A slide ",
@@ -350,7 +338,7 @@
{
"cell_type": "markdown",
"source": [
"13. Finalise the global grid before returning from the main function"
"12. Finalise the global grid before returning from the main function"
],
"metadata": {
"name": "A slide ",
19 changes: 7 additions & 12 deletions website/_literate/l9_1-projects_web.jl
@@ -66,7 +66,7 @@ import MPI
#src #########################################################################
#nb # %% A slide [markdown] {"slideshow": {"slide_type": "slide"}}
md"""
3. Also add global maximum computation using MPI reduction function
3. Further, add global maximum computation using MPI reduction function to be used instead of `maximum()`
"""
max_g(A) = (max_l = maximum(A); MPI.Allreduce(max_l, MPI.MAX, MPI.COMM_WORLD))

@@ -112,12 +112,7 @@ end
#src #########################################################################
#nb # %% A slide [markdown] {"slideshow": {"slide_type": "slide"}}
md"""
7. Use the `max_g` function in the timestep `dt` definition (instead of `maximum`) as one now needs to gather the global maximum among all MPI processes.
"""

#nb # %% A slide [markdown] {"slideshow": {"slide_type": "fragment"}}
md"""
8. Moving to the time loop, add halo update function `update_halo!` after the kernel that computes the fluid fluxes. You can additionally wrap it in the `@hide_communication` block to enable communication/computation overlap (using `b_width` defined above)
7. Moving to the time loop, add halo update function `update_halo!` after the kernel that computes the fluid fluxes. You can additionally wrap it in the `@hide_communication` block to enable communication/computation overlap (using `b_width` defined above)
"""
@hide_communication b_width begin
@parallel compute_Dflux!(qDx, qDy, qDz, Pf, T, k_ηf, _dx, _dy, _dz, αρg, _1_θ_dτ_D)
@@ -127,7 +122,7 @@ end
#src #########################################################################
#nb # %% A slide [markdown] {"slideshow": {"slide_type": "slide"}}
md"""
9. Apply a similar step to the temperature update, where you can also include boundary condition computation as following (⚠️ no other construct is currently allowed)
8. Apply a similar step to the temperature update, where you can also include boundary condition computation as following (⚠️ no other construct is currently allowed)
"""
@hide_communication b_width begin
@parallel update_T!(T, qTx, qTy, qTz, dTdt, _dx, _dy, _dz, _1_dt_β_dτ_T)
@@ -139,7 +134,7 @@ end
#src #########################################################################
#nb # %% A slide [markdown] {"slideshow": {"slide_type": "slide"}}
md"""
10. Use now the `max_g` function instead of `maximum` to collect the global maximum among all local arrays spanning all MPI processes.
9. Use now the `max_g` function instead of `maximum` to collect the global maximum among all local arrays spanning all MPI processes. Use it in the timestep `dt` definition and in the error calculation (instead of `maximum`).
"""
## time step
dt = if it == 1
@@ -151,12 +146,12 @@ end
#src #########################################################################
#nb # %% A slide [markdown] {"slideshow": {"slide_type": "slide"}}
md"""
11. Make sure all printing statements are only executed by `me==0` in order to avoid each MPI process to print to screen, and use `nx_g()` instead of local `nx` in the printed statements when assessing the iteration per number of grid points.
10. Make sure all printing statements are only executed by `me==0` in order to avoid each MPI process to print to screen, and use `nx_g()` instead of local `nx` in the printed statements when assessing the iteration per number of grid points.
"""

#nb # %% A slide [markdown] {"slideshow": {"slide_type": "fragment"}}
md"""
12. Update the visualisation and output saving part
11. Update the visualisation and output saving part
"""
## visualisation
if do_viz && (it % nvis == 0)
@@ -172,7 +167,7 @@ end
#src #########################################################################
#nb # %% A slide [markdown] {"slideshow": {"slide_type": "slide"}}
md"""
13. Finalise the global grid before returning from the main function
12. Finalise the global grid before returning from the main function
"""
finalize_global_grid()
return
