Impact of peers that rotate their PeerIDs #31
Comments
+1 Concerning (3), IMO it isn't our problem: if a user decides to rotate their PeerID (assuming they do so deliberately), they cannot expect the content they provide to remain reachable. IMO (1) and (2) are legit concerns if peers are actually rotating their PeerID AND are acting as DHT servers.
Agreed. I'm just thinking that if this is due to a bug/misconfiguration of the hosts and they intend to publish their content on IPFS, but after a few minutes it's not findable, then they get a terrible experience from using IPFS. I.e., I'm thinking of a non-malicious case in (3) ;-)
I'm just wondering whether (3) is obvious to someone who is currently just running IPFS without a fairly good understanding of how the DHT works. A user might think that changing the node's ID (and restarting the node) should not stop the node from providing its content. Do we have anything in the docs to that effect?
I really doubt we have anything along those lines in the docs. But I would guess that if someone has the technical knowledge to change the node's ID, they would also have made an effort to understand how things work a little more deeply :)
Sure @yiannisbot, I just spawned a new Hoarder to see how it affects the PRL.
I'm wondering what the impact is of peers that join the IPFS DHT and rotate their PeerIDs excessively. We've seen in recent reports, e.g., the Week 5 Nebula Report, that there are 5 peers which rotate their PeerID 5,000 times each within the space of a week. This comes down to each of these peers having a new PeerID every couple of minutes. The number of rotating PeerIDs seen is roughly equal to the number of relatively stable nodes in the network (aka the network size). The routing table of DHT peers is updated every 10 minutes, so the impact of any single rotated PeerID likely doesn't stick around for longer than that, but given the excessive number of rotations, I feel this requires a second thought.
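As a quick sanity check on the "every couple of minutes" claim, the reported rotation count implies roughly one new PeerID every two minutes per peer (a back-of-the-envelope calculation, not data from the report itself):

```python
# Back-of-the-envelope check: 5,000 PeerID rotations per peer in one week
# implies roughly one rotation every ~2 minutes.
SECONDS_PER_WEEK = 7 * 24 * 60 * 60  # 604,800 s

rotations_per_week = 5000
interval_minutes = SECONDS_PER_WEEK / rotations_per_week / 60

print(f"one rotation every {interval_minutes:.1f} minutes")  # -> ~2.0 minutes
```

This is well inside the ~10-minute routing-table refresh window, so each such peer can cycle through several PeerIDs before stale entries are evicted.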
I can see three cases where this might have an impact (although there might be more):
The first case should be covered by the concurrency factor, although the large number of rotations might be causing issues. We could check the second case through the CID Hoarder - @cortze it's worth spinning up an experiment to cross-check what happens with previous results. Not sure what can be done for the third case :)
Thoughts on whether this is actually a problem or not:
It's worth checking whether those PeerIDs co-exist in parallel in the network, or whether when we see a new PeerID from the same IP address, the previous one(s) we've seen from the same IP address have disappeared. @dennis-tra do we know that already? Is there a way to check that from the Nebula logs?
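The co-existence question could be answered by grouping crawl sightings by IP address and checking whether consecutive PeerIDs' observation windows overlap. A minimal sketch follows; the record format `(first_seen, last_seen, ip, peer_id)` is an assumption for illustration, not Nebula's actual log schema:

```python
from collections import defaultdict

def overlapping_peer_ids(records):
    """Return, per IP, pairs of PeerIDs whose observation windows overlap.

    `records` is an iterable of (first_seen, last_seen, ip, peer_id) tuples;
    this schema is hypothetical, not Nebula's real output format.
    """
    by_ip = defaultdict(list)
    for first_seen, last_seen, ip, peer_id in records:
        by_ip[ip].append((first_seen, last_seen, peer_id))

    overlaps = defaultdict(list)
    for ip, sightings in by_ip.items():
        sightings.sort()  # order by first_seen
        for (s1, e1, p1), (s2, e2, p2) in zip(sightings, sightings[1:]):
            # The next PeerID appeared before the previous one disappeared:
            # the two identities co-existed on the same IP.
            if s2 <= e1 and p1 != p2:
                overlaps[ip].append((p1, p2))
    return dict(overlaps)

# Toy data: QmB shows up while QmA is still visible; QmC only after QmB is gone.
records = [
    (0, 100, "1.2.3.4", "QmA"),
    (50, 150, "1.2.3.4", "QmB"),
    (200, 300, "1.2.3.4", "QmC"),
]
print(overlapping_peer_ids(records))  # -> {'1.2.3.4': [('QmA', 'QmB')]}
```

This only compares consecutive sightings per IP, which is enough to distinguish "old identity vanishes, new one appears" from genuinely parallel PeerIDs behind one address; NATed setups with many nodes per IP would need extra care.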
Also, from @mcamou:
Extra thoughts more than welcome.