
Disconnect rlpx message following hello not snappy-compressed #7726

Open
arnetheduck opened this issue Nov 6, 2024 · 2 comments

@arnetheduck

In p2p/5, after the Hello message, all subsequent messages should be snappy-compressed, but when Nethermind has reached max peers it sends its Disconnect message uncompressed. This leads to a protocol violation in Nimbus, which causes that peer to be downscored.

From the spec:

All messages following Hello are compressed using the Snappy algorithm.

No other messages may be sent until a Hello is received.
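To make the compression rule concrete, here is a minimal sketch (my own helper, not code from either client) of raw-snappy encoding for a short payload, showing what a compliant post-Hello Disconnect would look like on the wire. It assumes, per my reading of the RLPx spec, that only the msg-data is compressed while the msg-id byte stays as-is:

```python
# Sketch: minimal raw-snappy encoding of a short payload, valid only for
# payloads of at most 60 bytes. The raw snappy format starts with a varint
# preamble (uncompressed length), then a literal element whose tag byte
# carries length-1 in its upper six bits, followed by the raw bytes.

def snappy_encode_literal(data: bytes) -> bytes:
    """Encode `data` as a single snappy literal element (len <= 60)."""
    assert 0 < len(data) <= 60
    preamble = bytes([len(data)])        # uncompressed length as one-byte varint
    tag = bytes([(len(data) - 1) << 2])  # literal element: low two tag bits are 00
    return preamble + tag + data

# The observed uncompressed Disconnect: msg-id 0x01 + RLP payload c104.
msg_id = b"\x01"
payload = bytes.fromhex("c104")          # RLP list [0x04] = TooManyPeers

compliant = msg_id + snappy_encode_literal(payload)
print(compliant.hex())                   # → 010204c104
```

So instead of the bare `01c104`, a spec-compliant sender would emit roughly `010204c104` (msg-id, then the snappy-wrapped payload).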

Here's a debug log:

2024-11-05 23:38:35.480+01:00 Peer sent us                               topics="eth p2p rlpx" remote=Node[91.134.22.6:30401] msgBody=01c104
2024-11-05 23:38:35.480+01:00 TODO Raising protocol breach               topics="eth p2p rlpx" remote=Node[91.134.22.6:30401] clientId=Nethermind/v1.26.0+0068729c/linux-x64/dotnet8.0.4 msg="Could not decompress snappy data"

This happens across multiple Nethermind versions. We can see the message being 01c104, which is effectively a TooManyPeers disconnect, but uncompressed. It follows the Hello message, from which the client id has been taken.
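For reference, the observed bytes decode cleanly as an uncompressed Disconnect. A small sketch (helper names are mine, not from either client) of that decode:

```python
# Sketch decoding the observed wire bytes 01c104, assuming no snappy
# compression was applied.

DISCONNECT_REASONS = {0x04: "TooManyPeers"}  # subset of the devp2p reason table

def decode_uncompressed_disconnect(data: bytes):
    msg_id = data[0]                  # 0x01 = Disconnect in the base p2p capability
    assert msg_id == 0x01
    # The payload c104 is an RLP list holding one small integer:
    # c1 = list of 1 byte of content, 04 = the disconnect reason.
    assert data[1] == 0xC1 and len(data) == 3
    reason = data[2]
    return msg_id, DISCONNECT_REASONS.get(reason, hex(reason))

print(decode_uncompressed_disconnect(bytes.fromhex("01c104")))
# → (1, 'TooManyPeers')
```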

@Marchhill
Contributor

I haven't been able to reproduce this issue; are you able to reproduce it consistently?

@arnetheduck
Author

arnetheduck commented Dec 6, 2024

Yes, quite consistently - here's another random session:

TRC 2024-12-06 10:16:45.602+01:00 Connecting to peer                         topics="eth p2p rlpx" remote=Node[65.185.57.164:30404]
TRC 2024-12-06 10:16:45.989+01:00 Sending message                            topics="eth p2p rlpx" remote=Node[65.185.57.164:30404] clientId= msgId=0 data=80f88505b8386e696d6275732d657468312f76302e312e302d36626531613933652f6c696e75782d616d6436342f4e696d2d322e302e31322f6e696d766dc6c5836574684480b840b2c35b96f0bbd8fe548ca580a85da31f0730db9ce88761a706644d9b24a16dc0d49ebc837712dc64c6e33d5581227e01f85f072e746828c3289925805c07351f payload=f88505b8386e696d6275732d657468312f76302e312e302d36626531613933652f6c696e75782d616d6436342f4e696d2d322e302e31322f6e696d766dc6c5836574684480b840b2c35b96f0bbd8fe548ca580a85da31f0730db9ce88761a706644d9b24a16dc0d49ebc837712dc64c6e33d5581227e01f85f072e746828c3289925805c07351f
TRC 2024-12-06 10:16:45.989+01:00 Received message                           topics="eth p2p rlpx" remote=Node[65.185.57.164:30404] clientId= data=80f89e05b24e65746865726d696e642f76312e32392e312b64666561353234302f6c696e75782d7836342f646f746e6574382e302e3130e4c58365746842c58365746843c58365746844ca886e6f64656461746101c684736e6170018276c4b8409910b605b87c21a4ef2ee266f57d4fbda33a87c6dc580bd76b7146b096e53d853e337022dbeba42350bd090fd7bb5bfac313e8df859906479d4e5790c7bfe534
TRC 2024-12-06 10:16:45.989+01:00 DevP2P handshake completed                 topics="eth p2p rlpx" clientId=Nethermind/v1.29.1+dfea5240/linux-x64/dotnet8.0.10 remote=Node[65.185.57.164:30404]
TRC 2024-12-06 10:16:45.989+01:00 >> [eth/68] Sending Status (0x00)          topics="eth68" peer=Node[65.185.57.164:30404] td=58750003716598352816469 bestHash=1dc6a51039da networkId=1 genesis=d4e567cb8fa3 forkHash=9f3d2254 forkNext=0
TRC 2024-12-06 10:16:45.989+01:00 Sending message                            topics="eth p2p rlpx" remote=Node[65.185.57.164:30404] clientId=Nethermind/v1.29.1+dfea5240/linux-x64/dotnet8.0.10 msgId=16 data=1058f057f85644018a0c70d815d562d3cfa955a01dc6a58e2da55f85cee5e4c33d7917d8b203f67d0866d6b6de4f265c5e1039daa0d4e56740f876aef8c010b86a40d5f56745a118d0906a34e69aec8c0db1cb8fa3c6849f3d225480 payload=f85644018a0c70d815d562d3cfa955a01dc6a58e2da55f85cee5e4c33d7917d8b203f67d0866d6b6de4f265c5e1039daa0d4e56740f876aef8c010b86a40d5f56745a118d0906a34e69aec8c0db1cb8fa3c6849f3d225480
TRC 2024-12-06 10:16:45.990+01:00 Received message                           topics="eth p2p rlpx" remote=Node[65.185.57.164:30404] clientId=Nethermind/v1.29.1+dfea5240/linux-x64/dotnet8.0.10 data=01c104
DBG 2024-12-06 10:16:45.990+01:00 Trying to decode disconnect uncompressed   topics="eth p2p rlpx" remote=Node[65.185.57.164:30404] clientId=Nethermind/v1.29.1+dfea5240/linux-x64/dotnet8.0.10 data=01c104
TRC 2024-12-06 10:16:45.990+01:00 disconnect message received                topics="eth p2p rlpx" reason=TooManyPeers peer=Node[65.185.57.164:30404]
DBG 2024-12-06 10:16:45.990+01:00 Disconnect finishing hello                 topics="eth p2p rlpx" remote=Node[65.185.57.164:30404] clientId=Nethermind/v1.29.1+dfea5240/linux-x64/dotnet8.0.10 err="Peer disconnected during handshake" reason=TooManyPeers

You can see how the devp2p Hello exchange completes, yet the disconnect message is uncompressed. As the new logging indicates, there is code that detects this condition. It keeps happening with Nethermind peers of several versions, but I haven't spotted any other clients doing this yet.

Notably, this only happens for "early disconnects" - it doesn't happen if we talk to the peer for a while and then get disconnected.
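The behaviour the "Trying to decode disconnect uncompressed" log line hints at can be sketched as follows: attempt snappy decompression first, and only if that fails check whether the bytes happen to be a bare uncompressed Disconnect before penalizing the peer. This is my own sketch, not either client's actual code; `snappy_decompress` here is a stand-in for a real snappy decoder (e.g. the python-snappy package).

```python
# Hedged sketch of a lenient receiver: after Hello, expect snappy-compressed
# msg-data, but tolerate a bare uncompressed Disconnect so the session can
# still learn the reason before dropping the peer.

class SnappyError(Exception):
    pass

def snappy_decompress(data: bytes) -> bytes:
    # Placeholder: a real implementation would decode the raw snappy format.
    # For this sketch it always rejects its input, mimicking what happens
    # when 01c104 arrives: its msg-data is not valid snappy data.
    raise SnappyError("could not decompress snappy data")

def handle_post_hello_message(frame_data: bytes) -> str:
    msg_id, msg_data = frame_data[0], frame_data[1:]
    try:
        payload = snappy_decompress(msg_data)
    except SnappyError:
        # Fallback: accept an uncompressed Disconnect (msg-id 0x01 carrying a
        # one-element RLP list), but flag the protocol breach.
        if msg_id == 0x01 and msg_data[:1] == b"\xc1":
            return f"disconnect reason={msg_data[1]:#x} (uncompressed, protocol breach)"
        raise
    return f"msg-id {msg_id} payload {payload.hex()}"

print(handle_post_hello_message(bytes.fromhex("01c104")))
# → disconnect reason=0x4 (uncompressed, protocol breach)
```

Whether downscoring is still warranted after a successful fallback decode is a policy question; the sketch only shows that the reason can be recovered.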
