
Fix segfaults for MPCD with large particle counts under MPI #1897

Merged: 12 commits into trunk-major from fix-mpcd-large-snapshots on Nov 27, 2024

Conversation

@mphoward (Collaborator) commented Sep 26, 2024

Description

This PR refactors parts of the MPCD code that relied on generic serialized MPI routines to instead use custom MPI datatypes. Because the MPI count argument then refers to whole objects rather than individual bytes, the same signed-int count covers far more data. This is an attempt to fix segfaults reported when either initializing from or taking a snapshot with a large number of MPCD particles.

I replaced most uses of bcast with MPI_Bcast while I was at it since this is a relatively simple MPI call.

I also added exceptions when the serialized methods exceed the byte count that can be stored in a signed int.
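For illustration, the custom-datatype approach can be sketched roughly as follows (a simplified example, not the actual HOOMD-blue code; the ParticleData struct and its layout are assumed):

```cpp
// Simplified sketch: broadcast a vector of particle records using a custom MPI
// datatype so the int count passed to MPI_Bcast counts particles, not bytes.
#include <mpi.h>
#include <vector>

struct ParticleData {  // hypothetical layout, for illustration only
    double pos[3];
    double vel[3];
    unsigned int tag;
};

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    // Register the struct as one contiguous block of bytes. A full struct type
    // with explicit member offsets (MPI_Type_create_struct) would also work.
    MPI_Datatype particle_type;
    MPI_Type_contiguous(sizeof(ParticleData), MPI_BYTE, &particle_type);
    MPI_Type_commit(&particle_type);

    // All ranks allocate the same number of particles before the broadcast.
    std::vector<ParticleData> particles(100000000);

    // The count is the number of particles, so the INT_MAX limit is reached
    // roughly sizeof(ParticleData) times later than with a raw byte count.
    MPI_Bcast(particles.data(), static_cast<int>(particles.size()), particle_type, 0, MPI_COMM_WORLD);

    MPI_Type_free(&particle_type);
    MPI_Finalize();
    return 0;
}
```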

Motivation and context

Large numbers of particles appear to overflow the signed-int byte counts used by the serialized MPI methods.

Resolves #1895

How has this been tested?

Existing tests pass. I will confirm with the reporting user (or test on a cluster myself) that this fixes the segfaults they were observing.

Change log

* Fix segmentation faults initializing MPCD simulations with large numbers (~100+ million) of MPCD particles under MPI.

Checklist:

github-actions bot commented:

This pull request has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.

@github-actions github-actions bot added the stale (There has been no activity on this for some time.) label Oct 16, 2024
@mphoward mphoward removed the stale (There has been no activity on this for some time.) label Oct 17, 2024
@mphoward (Collaborator, Author) commented:

Not stale—haven't had time to finish this off.


github-actions bot commented Nov 6, 2024

This pull request has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.

@github-actions github-actions bot added and removed the stale (There has been no activity on this for some time.) label Nov 6, 2024
@mphoward (Collaborator, Author) commented:

@joaander I will rebase this onto trunk-major and finish this week.

@mphoward mphoward force-pushed the fix-mpcd-large-snapshots branch from bba7e0f to f23d6e0 Compare November 20, 2024 14:16
@joaander joaander changed the base branch from trunk-patch to trunk-major November 20, 2024 17:09
@mphoward mphoward marked this pull request as ready for review November 20, 2024 19:45
@mphoward (Collaborator, Author) commented:

@joaander this should be ready to take a look at! I updated the regular particle communicator to use these new data structures, but I didn't touch the use of bcast, scatter_v, etc. outside MPCD. I did add code to throw an exception if the serialization buffer sizes will overflow an int.
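As a rough sketch of the kind of guard described here (not the exact code added in the PR), the check might look like:

```cpp
// Sketch of an overflow guard: refuse to serialize when the byte count cannot
// be represented in the signed int that the MPI call expects.
#include <climits>
#include <cstddef>
#include <stdexcept>
#include <string>

int checkedCount(std::size_t num_bytes) {
    if (num_bytes > static_cast<std::size_t>(INT_MAX)) {
        throw std::runtime_error("Serialized MPI buffer of " + std::to_string(num_bytes)
                                 + " bytes exceeds INT_MAX and cannot be sent in one call");
    }
    return static_cast<int>(num_bytes);
}
```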


if (rank == root)
    {
    /*
     * First sort the particles into tag order. This is probably a slow step, but could be
     * sped up by sorting on each rank first then merging the sorted segments here.
@mphoward (Collaborator, Author) commented:

I can make this optimization in a subsequent PR, as well as packing the buffers on the GPU. I left it off for now in order to get this finalized.

@joaander (Member) commented:

OK, no problem. You might find the GatherTagOrder class helpful in implementing the optimization. I wrote it to solve a similar problem when writing GSD files and gathering forces.

hoomd-blue/hoomd/HOOMDMPI.h, lines 511 to 531 in f725ae3:

/// Helper class that gathers local values from ranks and orders them in ascending tag order.
/**
    To use:

    1) Call setLocalTagsSorted() to establish the sizes and orders of the arrays to be gathered.
    2) Call gatherArray() to combine the per-rank local arrays into a global array in ascending
       tag order.
    3) Repeat step 2 for as many arrays as needed (all must be in the same tag order).

    GatherTagOrder maintains internal cache variables. Reuse an existing GatherTagOrder instances
    to avoid costly memory reallocations.

    Future expansion: Provide an alternate entry point setLocalTagsUnsorted() that allows callers
    to provide arrays directly in index order.
**/
class GatherTagOrder
    {
    public:
    /// Construct GatherTagOrder
    /** \param mpi_comm MPI Communicator.
        \param root Rank to gather on.
    */

@mphoward (Collaborator, Author) commented:

Thanks! I remembered that you had written this and wanted to try it, but didn’t have time right now. I will keep this in mind when I come back to it later. Gathering these snapshots can be very slow.
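As a rough sketch of the per-rank sort and merge idea mentioned in the diff comment (illustration only; TaggedParticle and the pairwise merge strategy are assumptions, and the MPI gathering of the segments to the root is omitted):

```cpp
// Sketch: each rank sorts its local particles by tag, and the root merges the
// already-sorted segments instead of sorting the full global array at once.
#include <algorithm>
#include <iterator>
#include <vector>

struct TaggedParticle {  // hypothetical record, for illustration only
    unsigned int tag;
    double pos[3];
};

// Merge per-rank segments (each already sorted by tag) into one globally
// tag-ordered vector. A k-way heap merge scales better; pairwise std::merge
// keeps the sketch short.
std::vector<TaggedParticle> mergeSortedSegments(const std::vector<std::vector<TaggedParticle>>& segments) {
    auto by_tag = [](const TaggedParticle& a, const TaggedParticle& b) { return a.tag < b.tag; };

    std::vector<TaggedParticle> result;
    for (const auto& segment : segments) {
        std::vector<TaggedParticle> merged;
        merged.reserve(result.size() + segment.size());
        std::merge(result.begin(), result.end(), segment.begin(), segment.end(),
                   std::back_inserter(merged), by_tag);
        result = std::move(merged);
    }
    return result;
}
```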

@mphoward mphoward added the mpcd (MPCD component) and validate (Execute long running validation tests on pull requests) labels Nov 25, 2024
@joaander (Member) left a comment:

Thanks!

@mphoward (Collaborator, Author) commented:

FYI: While running one of the other branches that didn’t have this fix, I found that some MPI libraries internally check for and report the error we were unintentionally triggering (count too large). This supports that this is the right fix.

@joaander (Member) commented:

> FYI: While running one of the other branches that didn’t have this fix, I found that some MPI libraries internally check for and report the error we were unintentionally triggering (count too large). This supports that this is the right fix.

It is strange that the MPI API hasn't addressed this defect more aggressively. It has been 25 years since 64-bit processors became commonplace.

@mphoward (Collaborator, Author) commented:

It is indeed strange and frustrating. It should be addressed as part of the MPI-4 standard, but that will probably take a while to percolate through the implementations.

https://www.mpi-forum.org/docs/mpi-4.1/mpi41-report/node499.htm#Node499
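For reference, MPI-4 adds large-count "_c" variants of the communication calls that take an MPI_Count (a minimal sketch, assuming an MPI library with MPI 4.x support):

```cpp
// Sketch of the MPI-4 large-count interface: MPI_Count is a wide integer type,
// so counts above INT_MAX are legal without custom datatypes or chunking.
// Requires an MPI library implementing the MPI 4.x "_c" procedures.
#include <mpi.h>
#include <cstddef>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    std::vector<char> buffer(static_cast<std::size_t>(3) * 1024 * 1024 * 1024); // ~3 GiB
    MPI_Count count = static_cast<MPI_Count>(buffer.size());
    MPI_Bcast_c(buffer.data(), count, MPI_BYTE, 0, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}
```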

@joaander joaander merged commit 2bd0f6f into trunk-major Nov 27, 2024
34 checks passed
@joaander joaander deleted the fix-mpcd-large-snapshots branch November 27, 2024 14:04
Labels: mpcd (MPCD component), validate (Execute long running validation tests on pull requests)
Development: successfully merging this pull request may close issue #1895, "Segmentation fault for MPI operations on large data sizes".
2 participants