// 'topic' here is an RdKafka Topic obtained from a Producer (setup omitted)
using System.Text;

for (int i = 0; i < 100000; i++)
{
    var data = Encoding.UTF8.GetBytes("hello");
    topic.Produce(data);
}
Running the above snippet a few times makes the client deadlock.
(It doesn't occur right away; I usually have to run it a few times.)
I'm not sure what is causing this, but once it starts deadlocking, if I stop and lower the count to a much smaller number, it usually works.
So my impression is that, for some reason, the underlying infrastructure has trouble ingesting a large number of writes too quickly.
Would be interesting to hear if anyone else gets the same issue.
There's a local send queue which buffers the messages you produce before actually transmitting them to the broker. This queue is limited to a certain size (see queue.buffering.max.messages and queue.buffering.max.kbytes in https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md).
Once that limit is hit it'll currently block on the Produce until there is space again. In the future you'll also be able to get an error back from Produce if the queue is full.
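Roughly, bumping those limits would look something like this (an untested sketch: the broker address and topic name are placeholders, and it assumes the Config class forwards arbitrary librdkafka settings via its string indexer and the Producer(config, brokerList) constructor form):

using System.Text;
using RdKafka;

class QueueConfigExample
{
    static void Main()
    {
        var config = new Config();
        // Example values for the librdkafka send-queue limits mentioned above
        config["queue.buffering.max.messages"] = "500000";
        config["queue.buffering.max.kbytes"] = "1048576";

        using (var producer = new Producer(config, "localhost:9092"))
        using (var topic = producer.Topic("testtopic"))
        {
            topic.Produce(Encoding.UTF8.GetBytes("hello"));
        }
    }
}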
It shouldn't deadlock; I suspect what you're seeing happens because, for whatever reason, space in the queue isn't being freed up. Messages leave the queue either after a timeout (which may be a long time) or once they've been sent successfully.
Could you check if any of your messages arrive? And how quickly?
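If it helps, one way to check delivery is to wait on the result of Produce, something like this (a rough sketch; it assumes the Produce overload that returns Task<DeliveryReport> and that DeliveryReport exposes Partition and Offset, with placeholder broker and topic names):

using System;
using System.Text;
using RdKafka;

class DeliveryCheckExample
{
    static void Main()
    {
        using (var producer = new Producer("localhost:9092"))
        using (var topic = producer.Topic("testtopic"))
        {
            // Wait for the delivery report to confirm the message reached the broker
            var report = topic.Produce(Encoding.UTF8.GetBytes("hello")).Result;
            Console.WriteLine($"Delivered to partition {report.Partition} at offset {report.Offset}");
        }
    }
}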
Could you reproduce this? Also there's now a 0.9.2 build up, could you try with that?
Also, there's a new parameter on Produce: if you call it as topic.Produce(data, blockIfQueueFull: false); it will throw an RdKafkaException with error _QUEUE_FULL if the queue is full, instead of blocking.
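Handling a full queue with that parameter could look like this (sketch only; the retry/backoff strategy is just an example, and the broker address and topic name are placeholders):

using System;
using System.Text;
using System.Threading;
using RdKafka;

class NonBlockingProduceExample
{
    static void Main()
    {
        using (var producer = new Producer("localhost:9092"))
        using (var topic = producer.Topic("testtopic"))
        {
            var data = Encoding.UTF8.GetBytes("hello");
            for (int i = 0; i < 100000; i++)
            {
                try
                {
                    // Throws instead of blocking when the local send queue is full
                    topic.Produce(data, blockIfQueueFull: false);
                }
                catch (RdKafkaException)
                {
                    // Queue full: back off briefly and retry the same message
                    Thread.Sleep(100);
                    i--;
                }
            }
        }
    }
}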