TreeMesh 2D simulation with MPI crashes when a rank has no boundaries #1870
Conversation
Review checklist

This checklist is meant to assist creators of PRs (to let them know what reviewers will typically look for) and reviewers (to guide them in a structured review process). Items do not need to be checked explicitly for a PR to be eligible for merging.

Purpose and scope
Code quality
Documentation
Testing
Performance
Verification

Created with ❤️ by the Trixi.jl community.
Codecov Report

All modified and coverable lines are covered by tests ✅

Additional details and impacted files

@@ Coverage Diff @@
##             main    #1870   +/-   ##
=======================================
  Coverage   96.30%   96.30%
=======================================
  Files         439      439
  Lines       35744    35745     +1
=======================================
+ Hits        34423    34424     +1
  Misses       1321     1321

Flags with carried forward coverage won't be shown.
View full report in Codecov by Sentry.
Thanks for debugging this!
src/meshes/serial_tree.jl (Outdated)

@@ -32,6 +32,7 @@ mutable struct SerialTree{NDIMS} <: AbstractTree{NDIMS}
     levels::Vector{Int}
     coordinates::Matrix{Float64}
     original_cell_ids::Vector{Int}
+    mpi_ranks::Vector{Int}
I don't like introducing MPI-parallel data structures into the plain serial code...
That seemed strange to me as well. It was necessary because ParallelTree is used during the simulation, whereas SerialTree is used when converting the output via Trixi2Vtk. So in order to get the MPI ranks in ParaView, I had to introduce this field here as well.
In general I find the mpi_ranks output very useful. Do you have an idea how to better implement it? Anyways, I will revert these changes here in this PR!
You could specialize only the MPI-parallel case without changing the serial infrastructure?
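For illustration, a minimal sketch of what such a specialization could look like inside Trixi.jl's own source, where ParallelTree and AbstractTree are in scope (the accessor name mpi_rank_of_cell and the field layout are hypothetical, not the actual Trixi.jl API):

```julia
# Sketch only: dispatch on the tree type so MPI rank information exists
# solely for the MPI-parallel tree; the names below are illustrative.

# ParallelTree stores the owning MPI rank of each cell (assumed field `mpi_ranks`).
mpi_rank_of_cell(tree::ParallelTree, cell_id::Integer) = tree.mpi_ranks[cell_id]

# Generic fallback for every other tree (e.g. the SerialTree used by Trixi2Vtk):
# in a serial context every cell belongs to rank 0, so no extra field is stored.
mpi_rank_of_cell(tree::AbstractTree, cell_id::Integer) = 0
```

The output/visualization path could then call such an accessor instead of reading a field directly, so the serial tree struct never needs to carry rank data.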
The branch was force-pushed from ca2169d to 738de75.
Thanks!
Example:
results in
Reason:
- reinitialize_containers! resizes the boundaries cache to size 0 on this rank (Trixi.jl/src/solvers/dgsem_tree/containers.jl, line 27 in f235619).
- init_boundaries! is called but returns early (Trixi.jl/src/solvers/dgsem_tree/containers_2d.jl, lines 422 to 425 in f235619), leaving cache.boundaries.n_boundaries_per_direction in a faulty state.

Here is a straightforward fix, though I am not sure if this is the best place.
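To make the failure mode concrete, here is a simplified, self-contained sketch (hypothetical types and function names, not the actual Trixi.jl containers and not the diff in this PR): an early return on a rank with zero boundaries leaves the previously stored counts untouched, whereas resetting them first keeps the cache consistent.

```julia
# Simplified illustration of the stale-state problem (not Trixi.jl code).
mutable struct BoundaryCache
    n_boundaries_per_direction::Vector{Int}  # boundary counts from the last init
end

# Mirrors the early-return pattern: with zero boundaries on this rank,
# the old counts are never overwritten and remain in a faulty state.
function reinit!(cache::BoundaryCache, counts::Vector{Int})
    sum(counts) == 0 && return cache
    cache.n_boundaries_per_direction .= counts
    return cache
end

# One possible guard (a sketch, not necessarily the fix chosen in this PR):
# reset the counts before the early return so they are always consistent.
function reinit_fixed!(cache::BoundaryCache, counts::Vector{Int})
    fill!(cache.n_boundaries_per_direction, 0)
    sum(counts) == 0 && return cache
    cache.n_boundaries_per_direction .= counts
    return cache
end
```

Calling reinit! with all-zero counts after an earlier nonzero init leaves the old counts in place; reinit_fixed! zeroes them first, so the rank without boundaries ends up with a consistent (empty) state.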
(The other changes are just for outputting the MPI rank of each cell. I can just delete them or move them to a separate PR.)