As discussed in issues #10 and #13, the `-multidir` flag of `mdrun`, which requires MPI-enabled GROMACS, can be used to run multiple replicas in a single call, but the overhead could be high due to longer GROMACS start-up times. However, in cases where the exchange frequency is low enough that the introduced overhead is affordable (e.g., in EEXE simulations for multiple serial mutations), enabling MPI-enabled GROMACS might still be useful.
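For reference, here is a minimal sketch of what launching one EEXE iteration with MPI-enabled GROMACS could look like. The directory names, rank count, and `-deffnm` value are assumptions for illustration, not the project's actual conventions; `-multidir` and `-deffnm` are real `mdrun` flags.

```python
import subprocess

# Hypothetical example: one mdrun call drives four replicas, one per
# directory listed after -multidir (a flag that requires an MPI-enabled
# build, i.e., gmx_mpi). Directory names and -deffnm are assumptions.
dirs = [f"sim_{i}" for i in range(4)]
subprocess.run(
    ["mpirun", "-np", "4", "gmx_mpi", "mdrun",
     "-multidir", *dirs, "-deffnm", "eexe"],
    check=True,
)
```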
To make our EEXE implementation work with both thread-MPI GROMACS and MPI-enabled GROMACS, we decided to have two CLIs for running EEXE simulations: the original CLI `run_EEXE`, which works with thread-MPI GROMACS, and the CLI `run_EEXE_mpi`, which we aim to develop here to work with MPI-enabled GROMACS. These two CLIs will be mostly the same, except that `run_EEXE_mpi` will not use `mpi4py`, to avoid nested MPI calls. Some functions in `ensemble_EXE.py` may need to be modified to work with both thread-MPI GROMACS and MPI-enabled GROMACS; where this is not possible, functions specific to MPI-enabled GROMACS will be added.
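Below is a hedged sketch of the intended contrast between the two code paths. The function names and directory layout are hypothetical; the key point is that the `run_EEXE_mpi` path never imports `mpi4py`, so the external `mpirun` launching a single `-multidir` run is the only MPI context, and no nested MPI calls occur.

```python
import subprocess

def run_iteration_thread_mpi(n_sim: int) -> None:
    """Hypothetical run_EEXE-style code path: each replica is an
    independent thread-MPI gmx process launched by one mpi4py rank,
    so Python-level MPI is safe here."""
    from mpi4py import MPI  # only imported on the thread-MPI code path
    rank = MPI.COMM_WORLD.Get_rank()
    if rank < n_sim:
        subprocess.run(
            ["gmx", "mdrun", "-deffnm", "eexe", "-ntomp", "2"],
            cwd=f"sim_{rank}",
            check=True,
        )

def run_iteration_mpi(n_sim: int) -> None:
    """Hypothetical run_EEXE_mpi-style code path: mpi4py is never
    imported; the external mpirun below is the only MPI context, and
    all replicas are handled by a single mdrun call via -multidir."""
    dirs = [f"sim_{i}" for i in range(n_sim)]
    subprocess.run(
        ["mpirun", "-np", str(n_sim), "gmx_mpi", "mdrun",
         "-multidir", *dirs, "-deffnm", "eexe"],
        check=True,
    )
```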
This issue is part of the work in the project EEXE for serial mutations.