Almost the same; the only difference is that se_atten uses a type embedding, while se_a uses different embedding nets for different atom types.
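To illustrate that difference, here is a toy numpy sketch (not DeePMD-kit code): the "embedding nets" are random linear maps standing in for trained networks, and names like `make_net` are purely illustrative. se_a keeps one net per neighbor type, while se_atten feeds a single shared net with the radial input concatenated to a per-type embedding vector.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_net(in_dim, out_dim=4):
    """Toy stand-in for an embedding net: a fixed random linear map."""
    W = rng.standard_normal((in_dim, out_dim))
    return lambda x: x @ W

n_types = 2
s = np.array([[0.7], [0.3]])           # s(r_ij) for two neighbors
neighbor_types = np.array([0, 1])      # type index of each neighbor

# se_a / se_e2_a style: a separate embedding net per neighbor type
nets_per_type = [make_net(1) for _ in range(n_types)]
g_se_a = np.vstack([nets_per_type[t](s[i])
                    for i, t in enumerate(neighbor_types)])

# se_atten style: one shared net; the type enters via a type embedding
type_embed = rng.standard_normal((n_types, 3))   # learned in practice
shared_net = make_net(1 + 3)
inputs = np.hstack([s, type_embed[neighbor_types]])
g_se_atten = shared_net(inputs)

print(g_se_a.shape, g_se_atten.shape)  # both (2, 4)
```

Either way each neighbor ends up with an embedding row; the shared-net route is what lets se_atten handle many atom types without multiplying the number of networks.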
Dear all,
I am trying to build a model using the attention descriptor (se_atten_v2), and I want to compress the model to speed up inference. However, as shown in the DeePMD-kit v2 paper, with the se_atten descriptor the embedding matrix is passed through a number of attention layers set by attn_layer. If attn_layer = 0, is the embedding matrix equal to the one obtained with "se_e2_a"?
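For context, a minimal descriptor section of the input file for this setup might look something like the fragment below. The key names follow the DeePMD-kit input schema for se_atten_v2; the values are illustrative only, and my understanding is that compressing an attention descriptor requires attn_layer set to 0.

```json
"descriptor": {
    "type": "se_atten_v2",
    "sel": 120,
    "rcut": 6.0,
    "rcut_smth": 0.5,
    "neuron": [25, 50, 100],
    "axis_neuron": 16,
    "attn": 128,
    "attn_layer": 0
}
```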