How to train a parallel model? #65

Answered by YutackPark
Zhou-jiecheng asked this question in Q&A
You don't need any additional training steps to run parallel, scalable MD simulations.

  1. Deploy your model via `sevenn_get_model -p {path to checkpoint}`

  2. Run LAMMPS with mpirun after writing an appropriate input script. You can find the guideline here: https://github.com/MDIL-SNU/SevenNet

Check the 'for parallel model' section.

Note that parallel MD runs currently cannot compute virial pressure. Also, CUDA-aware OpenMPI is essential for optimal performance. The number of MPI processes should equal the number of GPUs you want to use.
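As a rough sketch, the two steps might look like the commands below. The checkpoint name, deployed-file names, and the `pair_style e3gnn/parallel` / `pair_coeff` syntax in the comments are assumptions based on the SevenNet README; check its 'for parallel model' section for the exact form.

```shell
# 1. Deploy the parallel model from a training checkpoint
#    (checkpoint name is hypothetical; -p selects the parallel model).
sevenn_get_model -p checkpoint_best.pth

# 2. Write a LAMMPS input script (e.g. in.lammps) referencing the
#    deployed model. Assumed snippet, per the SevenNet README:
#
#    units       metal
#    pair_style  e3gnn/parallel
#    pair_coeff  * * {number of deployed model files} {path to deployed models} {element list}

# Launch with one MPI process per GPU (e.g. 4 GPUs -> 4 processes),
# using a CUDA-aware OpenMPI build:
mpirun -np 4 lmp -in in.lammps
```

The `-np` value is what ties the run to the GPU count mentioned above: oversubscribing or undersubscribing GPUs degrades performance.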
