Seeking advice on improving the reliability of communication. #624
Comments
error.mp4: Here's a video that demonstrates 2 failed bootstraps, followed by a successful one, on a 100% local DHT.
I have a similar problem. Honestly, this seems like something that can be fixed. Curious, @mryab, whether you have any hint where in the codebase it could be coming from.
I created a reproducible example here.
First, thanks for the work on Hivemind; it's a great library, and we have been using it extensively in https://github.com/PrimeIntellect-ai/OpenDiloco.
There are two main issues that we have encountered, and I am looking for tips / best practices on how to avoid them:
1. Peers don't always find each other during DHT initialization. For example, when starting 4 peers, two independent DHTs with 2 peers each were sometimes created instead, even though I passed the same `initial_peers` to all of them. Once they have all joined, there is rarely desync, at least at the DHT level. (A minimal bootstrap sketch follows after this list.)
2. Lost peers during `DecentralizedAverager.step()`. We randomly lost a peer during an `all_reduce` with a class that inherits from `DecentralizedAverager`, and there never seems to be an obvious reason why the peer left. (See the averager sketch below.)

Both of these issues happened relatively often even when running experiments locally (communicating over localhost), and it logically gets worse on poorly connected machines. I have the feeling they are linked, and that solving them would make decentralized training with hivemind more reliable.
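To make the first point concrete, here is a minimal sketch of the kind of bootstrap we do (illustrative only, not our exact code; it assumes the standard `hivemind.DHT` constructor and `get_visible_maddrs()`):

```python
import hivemind

# First peer: no initial_peers, it acts as the bootstrap node.
bootstrap = hivemind.DHT(start=True)
# Multiaddrs that every other peer will use as initial_peers.
initial_peers = bootstrap.get_visible_maddrs()

# Remaining peers: all of them receive the same initial_peers,
# yet occasionally they split into two disjoint 2-peer DHTs.
followers = [hivemind.DHT(initial_peers=initial_peers, start=True) for _ in range(3)]

for dht in followers:
    print(dht.get_visible_maddrs())
```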
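And for the second point, a sketch of the averaging step where the peer drop happens (again illustrative; the `prefix`, tensor shapes, and group size are placeholders, and in practice we use a subclass of `DecentralizedAverager`):

```python
import torch
import hivemind
from hivemind.averaging import DecentralizedAverager

dht = hivemind.DHT(initial_peers=initial_peers, start=True)

averager = DecentralizedAverager(
    averaged_tensors=[torch.zeros(16)],  # tensors shared with the group
    dht=dht,
    prefix="demo_run",                   # placeholder group prefix
    target_group_size=4,
    start=True,
)

# A peer sometimes drops out of the group during this call.
result = averager.step(timeout=60)
print(result)  # gathered group info on success, None if averaging failed
```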
My questions are:
Thanks in advance 🙏