AsyncProducer produces messages in out-of-order when retries happen #2619
Comments
@dethi As far as I know, if you set `Net.MaxOpenRequests = 1`, the producer will wait for an acknowledgment (ACK) from the broker before sending the next message. This means that if a message fails to be sent (due to a transient error or a network issue), the producer will retry that message before sending any new messages. This mechanism keeps messages close to the order in which they were originally sent by the producer.

However, please note that there is still no strict guarantee of message order in this case, as certain scenarios like broker failures or network partitions can introduce potential complexities. Messages might still be reordered if a network issue causes a message to be delayed and another message is sent before the delayed message is successfully delivered.

So in summary, setting `Net.MaxOpenRequests = 1` helps preserve ordering but does not strictly guarantee it. And I'm very curious why the previous versions of Sarama work well only with `Net.MaxOpenRequests = 1`.

Please CMIIW @dnwe
@napallday the Kafka documentation seems to indicate that ordering is preserved when `max.in.flight.requests.per.connection` is set to 1, even when retries happen.

This is explained in @slaunay's post here:
@T30rix I see. It seems apache/kafka and Confluent Kafka have different descriptions here.

In apache/kafka:

In Confluent Kafka:
So what's the conclusion here? Is this issue considered a bug or not?
Thanks for finally marking this as a bug 👍
Is there anyone working on this bug?
Any movement on this? This is impacting us significantly.
It has been investigated on-and-off, using @dethi's sample as a functional producer test, but I don't yet have good news of a resolution to the issue.
Ok, thanks for the update @dnwe
Should be fixed by #2943 if someone can review |
Are there any updates regarding this bug? It is still impacting our work as well. |
> Are there any updates regarding this bug? It is still impacting our work as well.

Our service sets `Net.MaxOpenRequests = 1` as well.
Versions
Configuration
What configuration values are you using for Sarama and Kafka?
Logs & Reproduction Code
I have created a Gist with all the information required to reproduce (code, configuration, etc), including some example logs:
https://gist.github.com/dethi/406fb5562030d8d0fa76db18d95bbbbe
Problem Description
Expectation
When producing messages to the same partition, the AsyncProducer guarantees the ordering of the produced messages even when retries are required, as long as `config.Net.MaxOpenRequests = 1`. Idempotency shouldn't matter here.
Reality

Up until 1.31.0, the expectation holds. But the behaviour changed when request pipelining was introduced to the AsyncProducer: now, retries cause messages to be published out of order.

An easy way to see this is by enabling `config.Producer.Idempotent`. It will result in the AsyncProducer returning an error to the caller when retries happen (for instance when the partition leader disappears, i.e. on a broker disconnect). When idempotency is not enabled, Sarama will publish the messages successfully, but out of order.
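For reference, the setup described above can be sketched with sarama's `Config` fields (a sketch, not the full reproduction from the gist; the broker address and the retry count are placeholders):

```go
config := sarama.NewConfig()
config.Version = sarama.V0_11_0_0 // idempotent production needs Kafka >= 0.11
config.Net.MaxOpenRequests = 1    // at most one in-flight request per broker
config.Producer.RequiredAcks = sarama.WaitForAll
config.Producer.Retry.Max = 5     // retries are where the reordering shows up
config.Producer.Idempotent = true // surfaces the bug as an error instead of silent reordering

producer, err := sarama.NewAsyncProducer([]string{"localhost:9092"}, config)
if err != nil {
	log.Fatal(err)
}
defer producer.AsyncClose()
```

Note that sarama itself requires `Net.MaxOpenRequests = 1` and `Producer.RequiredAcks = WaitForAll` whenever `Producer.Idempotent` is enabled, so the idempotent case always runs in the configuration that is supposed to preserve ordering.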
Code Analysis / Flow of produced messages
(follow the links as you read each paragraph)
Code link: We think the issue comes from here. It used to be that we would not send another message/batch to the Kafka broker before we got back the answer from that broker and handed the answer to the goroutine that processes it. One of the key points here is that the goroutine that writes into the `bridge` channel is also the goroutine that reads from the responses channel, as we can see in `func (bp *brokerProducer) waitForSpace` or `func (bp *brokerProducer) run`, which means that we wouldn't send another message to Kafka before we received AND processed the answer for the previous message.

Code link: Now we use the new AsyncProduce function to send messages to Kafka brokers. The key point here is that it used to be that we could not call AsyncProduce (or Produce, to be exact) before the previous call to AsyncProduce/Produce returned (which would also give us the response to the request). Now the responses are processed asynchronously and sent back to us via the sendResponse callback. We will see in part 3 that once a message is sent to the broker and the goroutine that processes the response is scheduled, AsyncProduce returns and another call can be made even though we potentially have not received the answer to the previous request yet.
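The difference between the two paths can be condensed into a toy model (plain Go, no Kafka or sarama types; the counter simply tracks how many requests are outstanding at once). A synchronous call cannot overlap requests; a callback-based call returns before its response is handled, so requests pile up:

```go
package main

import "fmt"

func main() {
	outstanding, maxOutstanding := 0, 0
	track := func() {
		if outstanding > maxOutstanding {
			maxOutstanding = outstanding
		}
	}

	// Old model (Produce): the call returns only once the response has
	// been received, so a second request can never overlap the first.
	for i := 0; i < 3; i++ {
		outstanding++
		track()
		outstanding-- // response handled before Produce returns
	}
	fmt.Println("sync max outstanding:", maxOutstanding)

	// New model (AsyncProduce): the call returns once the request is
	// scheduled; responses are handled later via a callback.
	outstanding, maxOutstanding = 0, 0
	var callbacks []func()
	for i := 0; i < 3; i++ {
		outstanding++
		track()
		callbacks = append(callbacks, func() { outstanding-- })
	}
	for _, cb := range callbacks {
		cb() // responses processed only after all sends were issued
	}
	fmt.Println("async max outstanding:", maxOutstanding)
}
```

In the synchronous model at most one request is ever outstanding; in the callback model all three sends are issued before any response is processed, which is exactly the overlap the analysis above describes.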
Code link: `broker.AsyncProduce()` uses `sendInternal` to send a batch/message to a broker. `b.responses` is a buffered channel that is used to control how many "in flight" requests there currently are to that broker, so that a goroutine can't call AsyncProduce before we were able to schedule a run of the goroutine that will process the response for that request (see `func (b *Broker) responseReceiver()`). One of the issues here is that if we set `b.conf.Net.MaxOpenRequests = 1`, forcing `b.responses` to have a buffer of size 0, it seems that we can still have `b.conf.Net.MaxOpenRequests + 1` in-flight requests to the Kafka broker. Assuming MaxOpenRequests is equal to one, the 2nd call to AsyncProduce will block on `b.responses <- promise`, but there will still be 2 in-flight requests.

Code link: Once a response is received from Kafka, it gets forwarded to the `pending` channel, which we read from the same goroutine. The responses are then forwarded back to the `func (bp *brokerProducer) run()` goroutine, which is also the one that sends messages to the `bridge` channel.

Code link: Once a response is received in the `func (bp *brokerProducer) run()` goroutine, we end up calling the `retryMessages` function if, for instance, the broker we tried to send the batch/message to crashed or was unavailable for other reasons. In that case we retry the messages that triggered the error by calling `retryMessages` on line 1146 for `pSet.msgs`, which sends all the elements of the `pSet.msgs` batch back into the input channel of the asyncProducer manager. We then also send back all messages that were buffered in the brokerProducer (these messages would have been batched and sent to Kafka at some point via AsyncProduce), so that they are retried after the elements of `pSet.msgs`, because they were produced after the messages in `pSet.msgs`. The problem here is that this retry sequence does not take into account the messages that are currently in flight and for which we did not receive a response yet. Because of that, the ordering of the messages will change whenever we have to retry messages that were already in flight when the first error was received.

Surprisingly, part of the issue was already noted in the latest comment of the same PR:
Thanks to @T30rix for helping me with the debugging and writing the detailed flow.