Can SevenNet also benefit from the recent NVIDIA cuEquivariance library? #126
-
I recently came across NVIDIA's cuEquivariance library. Their benchmarks show significant speedups for MACE, in both training and inference, when using the cuEquivariance math library compared to the base MACE implementation. The performance was demonstrated on MACE models, but since most modern GNN-based MLFF architectures share similar constructions, I believe SevenNet could gain computational benefits as well. I'd be interested in the SevenNet developers' perspective on potential performance improvements. The article references several related models.
-
Initial update in
-
Thanks for bringing this to my attention. The short answer is yes.
I don't think the same speed-up would carry over directly, though, especially for SevenNet-0. SevenNet-0 uses a different number of channels for each 'L' value (128x0e+64x1e+32x2e), unlike MACE-MP-0. This restricts the possible memory layouts, which may be critical to cuEquivariance's optimization performance. However, if we used the same number of channels for every L, I would expect a similar speed-up.
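To make the channel-layout point concrete, here is a minimal sketch (standard library only; the helper `multiplicities` is hypothetical, not part of SevenNet or e3nn) that parses e3nn-style irreps strings. SevenNet-0 uses a different channel count (multiplicity) per L, while a MACE-MP-0-style layout keeps one multiplicity across all L values, which permits a more regular memory layout for fused kernels:

```python
# Hypothetical helper (not from SevenNet/e3nn): parse an e3nn-style
# irreps string like "128x0e+64x1e+32x2e" into its channel counts.
def multiplicities(irreps: str) -> list[int]:
    """Return the multiplicity of each 'mul x L parity' term."""
    return [int(term.split("x")[0]) for term in irreps.split("+")]

sevennet0 = multiplicities("128x0e+64x1e+32x2e")    # SevenNet-0 layout
mace_like = multiplicities("128x0e+128x1e+128x2e")  # uniform layout

print(sevennet0)                 # [128, 64, 32]
print(len(set(sevennet0)) == 1)  # False: channels differ across L
print(len(set(mace_like)) == 1)  # True: one channel count for all L
```

The uniform case is the one that lends itself to the contiguous, regularly strided buffers that fused tensor-product kernels tend to favor.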
Yes. But looking at the library, it replaces most of the e3nn parts. Since sevenn.nn and sevenn.model_build are built on top of e3nn, it will take me some time to refactor SevenNet.
Thanks again for this great news!