
Deadlocking with high number of sends #75

Open
rogeralsing opened this issue Nov 6, 2016 · 2 comments

Comments

@rogeralsing

for (int i = 0; i < 100000; i++)
{
    var data = Encoding.UTF8.GetBytes("hello");
    topic.Produce(data);
}

Running the above snippet a few times makes the client deadlock.
(It doesn't occur right away; I usually have to run it a few times.)

I'm not sure what is causing this, but once it starts deadlocking, if I stop and rerun with a much lower message count, it usually works.
So my impression is that, for some reason, the underlying infrastructure has some sort of issue ingesting large numbers of writes this fast.

It would be interesting to hear if anyone else gets the same issue.

@ah-
Owner

ah- commented Nov 6, 2016

There's a local send queue which buffers the messages you produce before actually transmitting them to the broker. This queue is limited to a certain size (see queue.buffering.max.messages and queue.buffering.max.kbytes in https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md).
Once that limit is hit it'll currently block on the Produce until there is space again. In the future you'll also be able to get an error back from Produce if the queue is full.
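For anyone who wants to raise those limits, here is a sketch of setting them at producer construction. The property names come from librdkafka's CONFIGURATION.md; treating Config as a string-keyed indexer and the Producer(config, brokerList) constructor are assumptions about this binding's API.

```csharp
// Sketch: raising the local send-queue limits before creating the producer.
// Property names are from librdkafka's CONFIGURATION.md; the Config indexer
// and Producer constructor shape are assumptions about this library.
var config = new Config();
config["queue.buffering.max.messages"] = "1000000"; // cap on queued message count
config["queue.buffering.max.kbytes"] = "4000000";   // cap on total queued size in KB

using (var producer = new Producer(config, "localhost:9092"))
using (var topic = producer.Topic("test"))
{
    topic.Produce(Encoding.UTF8.GetBytes("hello"));
}
```

A larger queue only delays the blocking, of course; if the broker can't keep up, the queue will still eventually fill.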

It shouldn't deadlock; I suspect what you're seeing happens because, for whatever reason, space isn't being freed in the queue. Messages leave the queue either after a timeout, which may be a long time, or because they've been sent successfully.
Could you check if any of your messages arrive? And how quickly?
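One way to check whether messages actually arrive, sketched under the assumption that this binding's Produce returns a Task<DeliveryReport> with Partition and Offset fields:

```csharp
// Sketch: await the delivery report for a single message to confirm it
// reached the broker. Assumes Produce returns Task<DeliveryReport>.
var report = await topic.Produce(Encoding.UTF8.GetBytes("hello"));
Console.WriteLine($"Delivered to partition {report.Partition} at offset {report.Offset}");
```

If the awaits never complete, nothing is draining the queue, which would explain the blocking you're seeing.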

@ah-
Owner

ah- commented Nov 9, 2016

Could you reproduce this? Also there's now a 0.9.2 build up, could you try with that?

Also there's a new parameter on Produce, if you call it with topic.Produce(data, blockIfQueueFull: false); it will throw an RdKafkaException with error _QUEUE_FULL if the queue is full instead of blocking.
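A sketch of using that parameter to back off instead of blocking; the retry loop is illustrative, and checking the exception's ErrorCode against ErrorCode._QUEUE_FULL is an assumption about the exception surface.

```csharp
// Sketch: produce without blocking, backing off briefly while the local
// queue is full. The ErrorCode check is an assumption about this binding.
var data = Encoding.UTF8.GetBytes("hello");
while (true)
{
    try
    {
        topic.Produce(data, blockIfQueueFull: false);
        break; // message was enqueued successfully
    }
    catch (RdKafkaException e) when (e.ErrorCode == ErrorCode._QUEUE_FULL)
    {
        Thread.Sleep(10); // give the background sender time to drain the queue
    }
}
```

This turns an invisible block inside Produce into an explicit, observable retry, which also makes it easier to log how often the queue fills.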
