Values expiring with permanent=true #599
This also results in a big rush of inbound traffic, and one of the opendht threads goes to 100% CPU.
This looks like a catastrophic bug.
Thanks
Ah, I'm stopping and starting this development node. What triggers it? A node going offline?
Hmmm, I'll add a timer or something when I first see an expired value.
So, my sentrypeer node has been up since 16:10 today, and you can see that the event timestamps of the bad_actors were all created after that, so they shouldn't have expired. You can prove that because, in this screenshot, I don't process them, as they have the same node_id (uuid) as the currently running node. In this console screenshot you can see I'm saving them all permanently (https://github.com/SentryPeer/SentryPeer/blob/main/src/peer_to_peer_dht.c#L345). Then they all flood in again: you can see the flood of expires and then the flood of values for the key. Thanks.
This is a vanilla dhtnode from today's master branch, on the same box, bootstrapped to bootstrap.sentrypeer.org and up since 16:04, showing the same set of values coming in again. Then I see them expire again on my sentrypeer node, and they go round and round :-)
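For anyone wanting to reproduce this, here is a minimal sketch of watching the key with opendht's C++ `listen()` API, whose second callback argument flags expired values; the key name, bootstrap host, and port are taken from this thread, everything else is assumed:

```cpp
#include <opendht.h>
#include <chrono>
#include <iostream>
#include <thread>

int main() {
    dht::DhtRunner node;
    node.run(4222, dht::crypto::generateIdentity(), true);
    node.bootstrap("bootstrap.sentrypeer.org", "4222");

    // listen() reports values as they arrive and again when they expire;
    // the bool argument is true for expiry notifications.
    node.listen(dht::InfoHash::get("bad_actors"),
        [](const std::vector<std::shared_ptr<dht::Value>>& values, bool expired) {
            for (const auto& v : values)
                std::cout << (expired ? "expired " : "arrived ") << v->id << std::endl;
            return true; // keep the listen running
        });

    std::this_thread::sleep_for(std::chrono::minutes(30));
    node.join();
}
```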
I've stopped all my nodes now and restarted, but things are probably living on other DHT nodes that I'm not running, as port 4222 is open... let's see.
Thanks. How many values are on this key, approximately?
For the "permanent put" feature to work, the node would need to stay online for the duration of the value lifetime |
Yep, going by my screenshots, the same node was online the whole time.
No idea. Can I see that on |
Just done that and pasted into a text file. 210 values at the moment. |
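For reference, a rough sketch of gathering that count programmatically with a one-shot `get()` (again assuming the C++ API and the key name from this thread):

```cpp
#include <opendht.h>
#include <chrono>
#include <iostream>
#include <thread>

int main() {
    dht::DhtRunner node;
    node.run(4222, dht::crypto::generateIdentity(), true);
    node.bootstrap("bootstrap.sentrypeer.org", "4222");

    size_t count = 0;
    node.get(dht::InfoHash::get("bad_actors"),
        [&count](const std::vector<std::shared_ptr<dht::Value>>& values) {
            count += values.size();
            return true; // keep collecting until the get completes
        },
        [&count](bool /*ok*/) {
            std::cout << count << " values on this key" << std::endl;
        });

    std::this_thread::sleep_for(std::chrono::seconds(30));
    node.join();
}
```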
I updated dhtcnode so it now does something similar to the C++ node. Still investigating this issue. Some standalone code to reproduce the problem would be useful.
I made a few tests on my side and had no issue with |
Have there been changes since I reported this, then?
Could you please try with the new release? I could reproduce some issues when the value count was reaching the value limit per key (for a given node). At least some of these issues should now be solved. The limit has also been raised from 1024 to 64k.
Will do. I'll update Alpine and Homebrew too.
A middle-click-paste introduced a typo in the 2.4.1 release commit -_- |
Is 2.4.2 coming out today? Just done - Homebrew/homebrew-core#99672
Alpine submitted too.
The tag is there; just the GitHub release is not documented yet.
Still seeing this, but my bootstrap node is on 2.4.0 via Homebrew. The test node gets restarted often and is on 2.4.2. Need to get 2.4.3 out so I can test on that.
Hi @aberaud, any thoughts on this still? I'd really like to get this expiry issue sorted and a bandwidth limiter in place for UDP traffic in the lib. The values flood in from peers as per https://twitter.com/ghenry/status/1534525326155554817. Thanks,
The problem is that every sentrypeer DHT node puts every value on the same key (bad_actors). The DHT Kademlia design distributes the load over different keys. In case of significant load on a single key, best effort applies and there is no guarantee of obtaining all the values; instead, the load spreads to adjacent nodes, as nodes taking too much traffic will stop responding (for BitTorrent, this means a popular torrent won't flood the same set of nodes, and peers later exchange peer lists directly with PEX). The best way to handle this might be to distribute the values over multiple keys.
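To illustrate that last idea (purely a sketch, not a confirmed design: the shard count, key naming scheme, and selector are all made up here), the values could be spread over several derived keys, with readers querying every shard:

```cpp
#include <opendht.h>
#include <functional>
#include <string>

// Hypothetical sharding scheme: instead of a single "bad_actors" key,
// spread values over N derived keys so no one set of nodes takes all
// the traffic.
static const unsigned SHARDS = 16;

dht::InfoHash shard_key(const std::string& base, const std::string& selector) {
    // Any stable hash of a per-value field (e.g. the attacker IP) works.
    unsigned shard = std::hash<std::string>{}(selector) % SHARDS;
    return dht::InfoHash::get(base + ":" + std::to_string(shard));
}

void put_sharded(dht::DhtRunner& node, const std::string& selector, dht::Value&& v) {
    node.put(shard_key("bad_actors", selector), std::move(v),
             [](bool /*ok*/) {}, dht::time_point::max(), /*permanent=*/true);
}

void get_all_shards(dht::DhtRunner& node) {
    for (unsigned i = 0; i < SHARDS; i++)
        node.get(dht::InfoHash::get("bad_actors:" + std::to_string(i)),
            [](const std::vector<std::shared_ptr<dht::Value>>& values) {
                // handle each shard's values...
                return true;
            });
}
```

A stable per-value selector keeps each value on the same shard across puts, so permanent re-announces keep hitting the same key.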
On 2.4.0, should this be happening? I'm getting all my own puts back off the DHT too, which I skip, as I'm checking what put them there. Thanks.
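That skip is roughly this (sketched; `decode_node_id()` is a hypothetical stand-in for the real check, which lives in SentryPeer's C code):

```cpp
#include <opendht.h>
#include <string>

// Hypothetical helper: pretend the payload is just the originating
// node's uuid; SentryPeer embeds a node_id (uuid) in each stored event.
std::string decode_node_id(const dht::Value& v) {
    return std::string(v.data.begin(), v.data.end());
}

void watch_bad_actors(dht::DhtRunner& node, const std::string& my_node_id) {
    node.listen(dht::InfoHash::get("bad_actors"),
        [my_node_id](const std::vector<std::shared_ptr<dht::Value>>& values, bool expired) {
            for (const auto& v : values) {
                if (expired)
                    continue;                      // ignore expiry notifications here
                if (decode_node_id(*v) == my_node_id)
                    continue;                      // our own put coming back: skip it
                // process the remote bad actor event...
            }
            return true;
        });
}
```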