Proof of concept: TrixiMPIArray #1104

Draft · wants to merge 37 commits into main from hr/mpi_arrays

Changes from 1 commit
Commits (37)
8042a04
WIP: TrixiMPIArray
ranocha Mar 30, 2022
5da9e5c
update TODO notes
ranocha Mar 30, 2022
d09b4cc
use TrixiMPIArrays via allocate_coefficients
ranocha Mar 31, 2022
3434a58
do not dispatch on TrixiMPIArray for saving solution/restart files
ranocha Mar 31, 2022
8135d7f
WIP: experiment with global/local length settings
ranocha Mar 31, 2022
987407e
resize!
ranocha Apr 1, 2022
e7d3db3
Merge branch 'main' into hr/mpi_arrays
ranocha Apr 1, 2022
4e33567
add error-based step size control to tests
ranocha Apr 1, 2022
eb1d9b1
SIMD optimizations specialize also on TrixiMPIArrays
ranocha Apr 1, 2022
c2e0b86
replace some 1:length by eachindex
ranocha Apr 1, 2022
d58b1b5
local_copy for AMR
ranocha Apr 1, 2022
23d4520
specialize show
ranocha Apr 1, 2022
d8d85b7
clean up
ranocha Apr 1, 2022
4df8602
specialize view
ranocha Apr 1, 2022
3528a16
clean up
ranocha Apr 1, 2022
6f984c0
use global mpi_comm() again instead of mpi_comm(u)
ranocha Apr 1, 2022
efa08e7
dispatch on parallel mesh instead of TrixiMPIArray whenever possible
ranocha Apr 1, 2022
80f6d59
YAGNI mpi_rank, mpi_size
ranocha Apr 1, 2022
d28c888
use accessor function mpi_comm
ranocha Apr 1, 2022
5ba7f9e
update comment
ranocha Apr 1, 2022
01186f6
Merge branch 'hr/mpi_arrays' of github.com:trixi-framework/Trixi.jl i…
ranocha Apr 1, 2022
96e4a3d
fix efa08e7a76f1b823217c0c9981194510bee3caec
ranocha Apr 1, 2022
bce6cb7
get rid of local_copy
ranocha Apr 1, 2022
0af5ae7
Merge branch 'main' into hr/mpi_arrays
ranocha Apr 1, 2022
76ae70f
test P4estMesh in 2D and 3D with MPI and error-based step size control
ranocha Apr 1, 2022
82e480a
MPI tests with error-based step size control with reltol as rtol
ranocha Apr 3, 2022
5049bbb
specialize broadcasting
ranocha Apr 4, 2022
195e1e0
get rid of local_length
ranocha Apr 4, 2022
4de9a6f
more tests of TrixiMPIArrays
ranocha Apr 4, 2022
5277f86
print test names with error-based step size control
ranocha Apr 5, 2022
53fbfbd
export ode_norm, ode_unstable_check
ranocha Apr 5, 2022
b549d2a
more comments
ranocha Apr 5, 2022
8857b7a
fuse MPI reductions
ranocha Apr 5, 2022
923c6ad
clean-up
ranocha Apr 5, 2022
e38692b
mark ode_norm, ode_unstable_check as experimental
ranocha Apr 5, 2022
126d54d
Merge branch 'main' into hr/mpi_arrays
ranocha Apr 5, 2022
cdcf828
Merge branch 'main' into hr/mpi_arrays
ranocha Apr 5, 2022
clean up
ranocha committed Apr 1, 2022
commit 3528a16c22fbbe3f1fb1f6ffd772d36a96d823c2
15 changes: 9 additions & 6 deletions src/auxiliary/mpi_arrays.jl
@@ -66,8 +66,6 @@ end
 Trixi.mpi_comm(u::TrixiMPIArray) = u.mpi_comm
 Trixi.mpi_rank(u::TrixiMPIArray) = u.mpi_rank
 Trixi.mpi_nranks(u::TrixiMPIArray) = u.mpi_size
-# TODO: MPI. What about the following interface?
-# Trixi.mpi_isparallel(u::TrixiMPIArray) = u.mpi_isparallel


 # Implementation of the abstract array interface of Base
@@ -91,9 +89,13 @@ end
 Base.elsize(::Type{TrixiMPIArray{T, N, Parent}}) where {T, N, Parent} = elsize(Parent)


-# TODO: MPI. Do we need customized broadcasting? What about FastBroadcast.jl and
-# threaded execution with `@.. thread=true`?
-# See https://docs.julialang.org/en/v1/manual/interfaces/#man-interfaces-broadcasting
+# We probably do not need to customize broadcasting. First tests suggest that
+# this version also works with FastBroadcast.jl and threaded execution with
+# `@.. thread=true`, e.g., when calling a threaded RK algorithm of the form
+# `SSPRK43(thread=OrdinaryDiffEq.True())`.
+# See also
+# https://github.com/YingboMa/FastBroadcast.jl
+# https://docs.julialang.org/en/v1/manual/interfaces/#man-interfaces-broadcasting


 # Implementation of methods from ArrayInterface.jl for use with
@@ -182,7 +184,8 @@ end


 # Specialization of `view`. Without these, `view`s of arrays returned by
-# `wrap_array` with multiple conserved variables do not always work.
+# `wrap_array` with multiple conserved variables do not always work...
+# This may also be related to the use of a global `length`?
 Base.view(u::TrixiMPIArray, idx::Vararg{Any,N}) where {N} = view(parent(u), idx...)
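
To make the broadcasting comment in this commit more concrete: the claim is that a thin wrapper which merely forwards the basic AbstractArray interface to its parent already composes with FastBroadcast.jl, including threaded execution via `@.. thread=true` (and hence with threaded time integrators such as `SSPRK43(thread=OrdinaryDiffEq.True())`). Below is a minimal, self-contained sketch of that idea; the type `MyMPIArray`, its single field, and the toy data are illustrative placeholders, not the PR's actual `TrixiMPIArray` implementation.

using FastBroadcast  # provides the @.. macro

# Illustrative wrapper type; the real TrixiMPIArray carries additional MPI state.
struct MyMPIArray{T, N, Parent <: AbstractArray{T, N}} <: AbstractArray{T, N}
    u_local::Parent
end

# Forward the basic abstract array interface to the parent array.
Base.parent(u::MyMPIArray) = u.u_local
Base.size(u::MyMPIArray) = size(parent(u))
Base.getindex(u::MyMPIArray, i...) = getindex(parent(u), i...)
Base.setindex!(u::MyMPIArray, v, i...) = setindex!(parent(u), v, i...)
Base.IndexStyle(::Type{MyMPIArray{T, N, Parent}}) where {T, N, Parent} = IndexStyle(Parent)

# Analogous to the `view` specialization in this commit: delegate views to the parent.
Base.view(u::MyMPIArray, idx...) = view(parent(u), idx...)

# Both plain and threaded broadcasting work without a custom broadcast style:
u  = MyMPIArray(rand(1000))
du = MyMPIArray(zeros(1000))
@.. thread=true du = 2 * u + 1

Note that a later commit in this PR (5049bbb, "specialize broadcasting") does add broadcasting specializations, so this sketch only illustrates the behavior at the state of this commit.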