Worker thread exhaustion #34
Comments
Yeah, it might be time to switch to async and use tokio.
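For reference, a minimal sketch of what a tokio-based accept loop could look like: each connection becomes a lightweight task instead of occupying a pool thread. The socket path and the echo body are placeholders standing in for real varlink dispatch, not this crate's API.

```rust
// Hypothetical tokio sketch: one task per client instead of one pool thread
// per client, so a stalled or leaked connection cannot exhaust a fixed pool.
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use tokio::net::UnixListener;

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let listener = UnixListener::bind("/tmp/example.varlink")?;
    loop {
        let (mut stream, _addr) = listener.accept().await?;
        // The task ends as soon as the peer closes the socket.
        tokio::spawn(async move {
            let mut buf = [0u8; 4096];
            loop {
                match stream.read(&mut buf).await {
                    Ok(0) | Err(_) => break, // client disconnected
                    Ok(n) => {
                        // Echo back as a placeholder for real varlink handling.
                        if stream.write_all(&buf[..n]).await.is_err() {
                            break;
                        }
                    }
                }
            }
        });
    }
}
```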
Do you have a (small) reproducer?
I was just hitting this. It turns out I'd used […]. When there are no panics, this works as expected.
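For context on why panics matter here: on a fixed-size pool, a panic that unwinds out of a worker kills that thread for good, and once every worker has died, new work just queues up. A standalone sketch (not this crate's pool) of the failure mode and the usual `catch_unwind` guard:

```rust
// Standalone illustration: fixed worker threads pulling jobs from a channel.
// Without a guard, a panicking job unwinds out of the loop and that worker is
// gone; enough panics and nothing is left to service the queue.
use std::panic::{catch_unwind, AssertUnwindSafe};
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

type Job = Box<dyn FnOnce() + Send + 'static>;

fn main() {
    let (tx, rx) = mpsc::channel::<Job>();
    let rx = Arc::new(Mutex::new(rx));

    for id in 0..2 {
        let rx = Arc::clone(&rx);
        thread::spawn(move || loop {
            let job = match rx.lock().unwrap().recv() {
                Ok(job) => job,
                Err(_) => break, // sender dropped, shut down
            };
            // Wrapping the job keeps the worker alive across panics.
            if catch_unwind(AssertUnwindSafe(job)).is_err() {
                eprintln!("worker {id}: job panicked, continuing");
            }
        });
    }

    tx.send(Box::new(|| panic!("boom"))).unwrap();
    tx.send(Box::new(|| println!("still serviced"))).unwrap();
    thread::sleep(std::time::Duration::from_millis(100));
}
```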
I was a bit surprised to find a custom threadpool in this project. As Lennart said in his recent All Systems Go varlink talk, an advantage of varlink is that it doesn't force you into being a "service" handling multiple clients; you can use it to augment existing "CLI" tools with a linear control flow. Of course that is already possible today: one can ignore the listener and the threadpool. But honestly, I'd lean towards dropping it in the next semver bump; there are plenty of perfectly fine networking + threadpool stacks (especially in the async world), and for sync code there's e.g. rayon.
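For illustration, a rough std-only sketch of that linear, single-client shape. The actual varlink dispatch is left as a placeholder, since the exact crate API isn't shown in this thread; only the NUL-terminated JSON framing of the varlink protocol is assumed.

```rust
// Sketch of the "CLI tool" shape: accept exactly one client, handle its
// requests on the current thread, and return when the peer hangs up.
// No listener loop, no pool; service dispatch is a placeholder.
use std::io::{BufRead, BufReader, Write};
use std::os::unix::net::UnixListener;

fn main() -> std::io::Result<()> {
    let listener = UnixListener::bind("/tmp/example.varlink")?;
    let (stream, _addr) = listener.accept()?; // single client, linear flow
    let mut reader = BufReader::new(stream.try_clone()?);
    let mut writer = stream;

    let mut line = Vec::new();
    // varlink messages are NUL-terminated JSON objects on the socket.
    while reader.read_until(0, &mut line)? > 0 {
        // Placeholder: hand `line` (minus the trailing NUL) to whatever
        // service dispatch you use and write its NUL-terminated reply.
        writer.write_all(b"{}\0")?;
        line.clear();
    }
    Ok(()) // peer closed the socket; we simply fall off the end
}
```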
Well... OK, these two things are not the same, but they're close enough that we can dedup the threads, so I'm closing this in favor of #102.
When I started this project, async/await was not yet stable in Rust :)
I'm working on a project that uses varlink (cf. https://github.com/nyarly/lorri/blob/stream_events/src/daemon/rpc.rs).
It appears that clients connecting to the Monitor interface are leaking worker threads: even after those clients disconnect, the thread remains listening. Previously we used raw sockets, and closing the socket immediately released the thread.
Now I'm seeing sockets left open when the client program exits, and the worker threads sticking around, still trying to send new events to those sockets and hanging.
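For illustration, one way a push-style worker can notice a departed client rather than hanging is to set a write timeout and treat any write error as a disconnect. This is a std-only sketch of that idea, not the crate's internals; the event channel and socket pair are made up for the demo.

```rust
// Sketch: a push-style worker that stops (and can be reclaimed) as soon as
// the client is gone, instead of hanging on writes to a dead socket.
use std::io::Write;
use std::os::unix::net::UnixStream;
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

fn push_events(mut stream: UnixStream, events: mpsc::Receiver<String>) {
    // Don't block forever if the peer stops reading or vanished.
    let _ = stream.set_write_timeout(Some(Duration::from_secs(5)));
    for event in events {
        // NUL-terminated JSON, as on a varlink connection.
        if stream.write_all(format!("{event}\0").as_bytes()).is_err() {
            // Broken pipe or timeout: the client disconnected, so stop
            // pushing and let the thread finish instead of leaking it.
            break;
        }
    }
}

fn main() -> std::io::Result<()> {
    let (client, server) = UnixStream::pair()?;
    let (tx, rx) = mpsc::channel();
    let worker = thread::spawn(move || push_events(server, rx));

    tx.send(String::from("{\"event\":\"demo\"}")).unwrap();
    drop(client); // the client "exits"; further writes fail
    drop(tx);     // no more events either way
    worker.join().unwrap();
    Ok(())
}
```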