Hello!
I found an issue using InPlaceABNSync with a newer PyTorch version. I'm using torch 1.8.1 and inplace_abn 1.1.0.
The bug is that InPlaceABNSync does not sync across GPUs unless it is explicitly passed a distributed process group.
To clarify, if I instantiate it as:
InPlaceABNSync(64, activation="leaky_relu", activation_param=.01)
it behaves the same as InPlaceABN (without Sync) when I use 2 GPUs.
In contrast, if I instantiate it as:
InPlaceABNSync(64, activation="leaky_relu", activation_param=.01, group=distributed.group.WORLD)
it works correctly.
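For reference, here is a minimal sketch of the workaround I'm using for now (the helper name `make_sync_abn` is just for illustration, not part of the library):

```python
import torch.distributed as dist
from inplace_abn import InPlaceABNSync

# Hypothetical helper: build the layer with an explicit process group
# so batch statistics are synchronized across GPUs. This assumes
# dist.init_process_group(...) has already been called.
def make_sync_abn(num_features):
    # Without group=..., on torch 1.8.1 / inplace_abn 1.1.0, the layer
    # behaved like plain InPlaceABN (no cross-GPU sync) in my tests.
    return InPlaceABNSync(
        num_features,
        activation="leaky_relu",
        activation_param=0.01,
        group=dist.group.WORLD,  # explicit group -> syncing works
    )
```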
The repo and full code can be found here.
Hope it helps! 😄