
Fix networking memory leaks and file transfer issues #1578

Open
wants to merge 8 commits into base: nightly

Conversation

DasBabyPixel
Contributor

Motivation

Networking has quite a few issues. There are multiple memory leaks because listeners and query handlers do not release the packet content buffer.
Also, there is a shocking lack of error logging in the networking code, which makes otherwise obvious issues difficult to debug.

Modification

  • Added some sensible error logging
  • Fixed all the memory leaks I could find
  • Made the packet encoder/decoder a little stricter, so illegal data is caught early on
  • Made sure netty and cloudnet always use the same ByteBufferMemoryManager instance. The memory manager is stateless, so having two instances should be fine, but whyyy?
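The buffer-release fix can be sketched roughly like this. This is an illustrative stand-in, not CloudNet's actual API: `Buffer` here mimics Netty 5's `io.netty5.buffer.Buffer`, which must be closed explicitly because it may back native (off-heap) memory.

```java
// Minimal sketch of the listener leak and its fix, using a stand-in Buffer
// class that counts how many instances are still open.
public class ListenerLeakSketch {

  /** Stand-in buffer: tracks open instances instead of real off-heap memory. */
  static final class Buffer implements AutoCloseable {
    static int open = 0;
    Buffer() { open++; }
    int readInt() { return 42; } // pretend to decode packet content
    @Override public void close() { open--; }
  }

  /** Leaky variant: the packet content buffer is never released. */
  static int handleLeaky(Buffer content) {
    return content.readInt(); // buffer left open -> native memory leak
  }

  /** Fixed variant: try-with-resources guarantees close(), even on error. */
  static int handleFixed(Buffer content) {
    try (content) {
      return content.readInt();
    }
  }

  public static void main(String[] args) {
    handleLeaky(new Buffer());
    System.out.println("open after leaky handler: " + Buffer.open);
    handleFixed(new Buffer());
    System.out.println("open after fixed handler: " + Buffer.open);
  }
}
```

With the fix, only the deliberately leaked buffer from the first call remains open; the fixed handler leaves the count unchanged.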

Result

  • Netty detects no more leaks
  • Added an option (disabled by default) in launcher.cnl to configure the max buffer size
    -> With this option enabled, repeated file transfers work without an OOME (File transfer sometimes doesn't work #1572)
Other context

Fixes #1577
Fixes #1572

@0utplay
Member

0utplay commented Jan 26, 2025

As far as I can tell the buffers you are force closing now are getting garbage collected anyway. The provided stacktrace in the corresponding issue tells the same.

@DasBabyPixel
Contributor Author

As far as I can tell the buffers you are force closing now are getting garbage collected anyway. The provided stacktrace in the corresponding issue tells the same.

The buffers get GC'd, that is correct. However, netty uses native/off heap memory, which is not cleaned up properly. Every buffer allocated with netty must be freed explicitly, see netty/netty#11347:

Reference counting is gone.
After a buffer has been allocated, calling close on it will deallocate it.
It is then up to users and integrators to ensure that the life-times of buffers are managed correctly.

Those buffers are currently not freed, which is why I added the close calls.
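The quoted Netty 5 rule (no reference counting, close deallocates, single ownership) can be illustrated with a small sketch. `NativeChunk` is a stand-in for an off-heap allocation, not the real `io.netty5.buffer.Buffer` type:

```java
// Sketch of Netty 5 ownership semantics: one owner, explicit close, and
// use-after-close is an error. NativeChunk stands in for off-heap memory.
public class OwnershipSketch {

  static final class NativeChunk implements AutoCloseable {
    private boolean accessible = true;
    int read() {
      if (!accessible) throw new IllegalStateException("buffer already closed");
      return 7;
    }
    @Override public void close() { accessible = false; } // "deallocates" here
  }

  public static void main(String[] args) {
    NativeChunk buf = new NativeChunk();
    System.out.println("read: " + buf.read());
    buf.close(); // the single owner frees the memory explicitly
    try {
      buf.read();
    } catch (IllegalStateException e) {
      System.out.println("after close: " + e.getMessage());
    }
  }
}
```

The point of the model is that deallocation happens at the close call, deterministically, rather than at some later GC cycle.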

@derklaro
Member

derklaro commented Jan 26, 2025

This should be handled: we wrap the drop of the buffer in a CleanerDrop, which then delegates to our DirectBufferFreeDrop, which explicitly frees the off-heap memory used by the buffer.

@DasBabyPixel
Contributor Author

This should be handled: we wrap the drop of the buffer in a CleanerDrop, which then delegates to our DirectBufferFreeDrop, which explicitly frees the off-heap memory used by the buffer.

I mean... Turn on leak detection

var cloudnet.net.leak-detection-enabled true
var io.netty5.buffer.lifecycleTracingEnabled true

and check whether the current code works. I am currently running this PR; before I used it I saw leaks everywhere, and since I started using it I haven't seen a single leak from netty.

@derklaro
Member

derklaro commented Jan 26, 2025

Yes... because netty just tells you that the drop was garbage collected before close was called (handled by CleanerDrop as well)... That doesn't mean the native memory is still in use/reserved after garbage collection.

@DasBabyPixel
Contributor Author

Are you trying to tell me that explicitly clearing the buffer everywhere in the code except those 3 places is intended?



+ implicit releases
...

If we argue that the garbage collector cleans up anyway, why do we even bother with reference counting?
CleanerDrop is also netty-internal and subject to change. And its javadoc even says: ... This ensures that objects are dropped even if they leak.
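The "cleaner as safety net" pattern under discussion can be sketched with the JDK's own `java.lang.ref.Cleaner` (this is a generic illustration of the pattern CleanerDrop implements, not CloudNet's or Netty's actual code): an explicit close() frees the resource deterministically, and the cleaner frees it eventually if the owner forgot. Relying only on the second path means off-heap memory stays reserved until some future GC cycle, which is how repeated transfers can hit an OOME.

```java
import java.lang.ref.Cleaner;

// Sketch: deterministic release via close(), with a Cleaner as a GC-triggered
// backstop for buffers that leak. Only the explicit path is demonstrated here,
// because GC timing is nondeterministic.
public class CleanerSafetyNet {
  private static final Cleaner CLEANER = Cleaner.create();

  static final class TrackedBuffer implements AutoCloseable {
    // The cleaning action must not reference the tracked object, or it would
    // never become unreachable; so the state lives in its own object.
    static final class State implements Runnable {
      volatile boolean freed = false;
      @Override public void run() { freed = true; } // free native memory here
    }
    final State state = new State();
    private final Cleaner.Cleanable cleanable = CLEANER.register(this, state);
    @Override public void close() { cleanable.clean(); } // deterministic path
  }

  public static void main(String[] args) {
    TrackedBuffer buf = new TrackedBuffer();
    System.out.println("freed before close: " + buf.state.freed);
    buf.close(); // explicit release: memory is returned immediately
    System.out.println("freed after close: " + buf.state.freed);
  }
}
```

Cleaner.Cleanable.clean() runs the action at most once, so an explicit close and a later GC-triggered cleanup cannot double-free.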

@derklaro
Member

Idk what you're on about, but I'm just telling you that the buffers are released even if they leak (in response to your "Every buffer allocated with netty must be freed explicitly"), nothing more, nothing less 🤷

@DasBabyPixel
Contributor Author

Just because netty is smart enough to detect leaks when they happen doesn't mean the leaks are supposed to happen in the first place, which is what I am on about.
Tbh, why does this not bother you? To me it just seems like an oversight, and regarding the closing of buffers I expected something along the lines of "oh yeah, forgot to free those buffers, let's fix that".
