AsyncProducer produces messages out of order when retries happen #2619

Open
dethi opened this issue Aug 22, 2023 · 14 comments
Labels
bug · needs-investigation (Issues that require followup from maintainers) · stale/exempt (Issues and pull requests that should never be closed as stale)

Comments

@dethi

dethi commented Aug 22, 2023

Versions
Sarama: >= v1.31.0
Kafka:  v3.5.0
Go:     1.20.7
Configuration

What configuration values are you using for Sarama and Kafka?

config.Version = sarama.V2_7_0_0
// Read success and errors
config.Producer.Return.Successes = true
config.Producer.Return.Errors = true
// Wait for Ack from all
config.Producer.RequiredAcks = sarama.WaitForAll
// Reduce internal buffer size to simplify the reproduction
config.ChannelBufferSize = 10
// Allow only one open request at a time
config.Net.MaxOpenRequests = 1
// Retry almost indefinitely
config.Producer.Retry.Max = 1_000_000_000
// Reduce the max size, to flush more often and reproduce easily
config.Producer.MaxMessageBytes = 100
// We are only gonna produce to one partition to simplify the reproduction
config.Producer.Partitioner = sarama.NewManualPartitioner
Logs & Reproduction Code

I have created a Gist with all the information required to reproduce (code, configuration, etc), including some example logs:
https://gist.github.com/dethi/406fb5562030d8d0fa76db18d95bbbbe
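
For readers who do not want to open the Gist, here is a minimal sketch of the producing side. It is not the Gist itself: the broker address, topic name, message count, and the github.com/IBM/sarama import path (github.com/Shopify/sarama on older releases) are placeholders.

package main

import (
	"fmt"
	"log"

	"github.com/IBM/sarama"
)

func main() {
	config := sarama.NewConfig()
	config.Version = sarama.V2_7_0_0
	config.Producer.Return.Successes = true
	config.Producer.Return.Errors = true
	config.Producer.RequiredAcks = sarama.WaitForAll
	config.ChannelBufferSize = 10
	config.Net.MaxOpenRequests = 1
	config.Producer.Retry.Max = 1_000_000_000
	config.Producer.MaxMessageBytes = 100
	config.Producer.Partitioner = sarama.NewManualPartitioner

	producer, err := sarama.NewAsyncProducer([]string{"localhost:9092"}, config)
	if err != nil {
		log.Fatal(err)
	}

	const total = 1000

	// Feed zero-padded, monotonically increasing sequence numbers to a single partition.
	go func() {
		for i := 0; i < total; i++ {
			producer.Input() <- &sarama.ProducerMessage{
				Topic:     "sarama-outoforder-bug",
				Partition: 0,
				Value:     sarama.StringEncoder(fmt.Sprintf("%06d", i)),
			}
		}
	}()

	// Wait until every message is either acknowledged or failed, then shut down.
	for acked := 0; acked < total; acked++ {
		select {
		case msg := <-producer.Successes():
			log.Printf("produced %s at offset %d", msg.Value, msg.Offset)
		case err := <-producer.Errors():
			log.Printf("produce error: %v", err)
		}
	}
	if err := producer.Close(); err != nil {
		log.Printf("close: %v", err)
	}
}

Retries can then be triggered while this loop is running, for example by disconnecting the broker that leads the partition, which is the scenario discussed below.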

Problem Description
Expectation

When producing messages to the same partition, the AsyncProducer guarantees the ordering of the produced messages even when retries are required, as long as config.Net.MaxOpenRequests = 1.

Idempotency shouldn't matter here.

Reality

Up until v1.31.0, the expectation held. But the behaviour changed when request pipelining was introduced to the AsyncProducer. Now, retries cause messages to be published out of order.

An easy way to see this is by enabling config.Producer.Idempotent. It will result in the AsyncProducer returning an error to the caller when retries happen (e.g. when the partition leader disappears because the broker disconnects):

kafka: Failed to produce message to topic sarama-outoforder-bug: kafka server: The broker received an out of order sequence number

When idempotency is not enabled, Sarama will publish the messages successfully, but out of order.
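
One way to see the reordering (a sketch, assuming the producer writes zero-padded increasing sequence numbers to the partition as in the snippet further above; broker address and topic name are again placeholders) is to read the partition back and check that the payloads never decrease:

package main

import (
	"log"

	"github.com/IBM/sarama"
)

func main() {
	consumer, err := sarama.NewConsumer([]string{"localhost:9092"}, sarama.NewConfig())
	if err != nil {
		log.Fatal(err)
	}
	defer consumer.Close()

	pc, err := consumer.ConsumePartition("sarama-outoforder-bug", 0, sarama.OffsetOldest)
	if err != nil {
		log.Fatal(err)
	}
	defer pc.Close()

	// Zero-padded sequence numbers compare correctly as strings, so any
	// decrease means a message was written to the log out of order.
	last := ""
	for msg := range pc.Messages() {
		seq := string(msg.Value)
		if seq < last {
			log.Printf("OUT OF ORDER: %s after %s (offset %d)", seq, last, msg.Offset)
		}
		last = seq
	}
}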

Code Analysis / Flow of produced messages

(follow the links as you read each paragraph)

  1. Code link: We think the issue originates here. It used to be that we would not send another message/batch to the Kafka broker before we got back the answer from that broker and handed it to the goroutine that processes that answer. A key point here is that the goroutine that writes into the bridge channel is also the goroutine that reads from the responses channel, as we can see in func (bp *brokerProducer) waitForSpace or func (bp *brokerProducer) run, which means we would not send another message to Kafka before we had received AND processed the answer for the previous one.

  2. Code link: Now we use the new AsyncProduce function to send messages to Kafka brokers. The key point is that it used to be impossible to call AsyncProduce (or Produce, to be exact) before the previous call to AsyncProduce/Produce returned (which would also give us the response of the request). Now the responses are processed asynchronously and sent back to us via the sendResponse callback. We will see in part 3 that once a message is sent to the broker and the goroutine that processes the response is scheduled, AsyncProduce returns and another call can be made even though we potentially have not received the answer to the previous request yet.

  3. Code link: broker.AsyncProduce() uses sendInternal to send a batch/message to a broker. b.responses is a buffered channel used to control how many "in flight" requests there currently are to that broker, so that a goroutine cannot call AsyncProduce before we were able to schedule a run of the goroutine that will process the response for that request (see func (b *Broker) responseReceiver()). One of the issues here is that if we set b.conf.Net.MaxOpenRequests = 1, forcing b.responses to have a buffer of size 0, we can still have b.conf.Net.MaxOpenRequests + 1 in-flight requests to the Kafka broker. Assuming MaxOpenRequests is equal to one, the 2nd call to AsyncProduce will indeed block on b.responses <- promise, but there will still be 2 in-flight requests (see the simplified sketch below the quoted comment).

  4. Code link: Once a response is received by the broker from Kafka, it gets forwarded to the pending channel, which we read from the same goroutine. The responses are then forwarded back to the func (bp *brokerProducer) run() goroutine, which is also the one sending messages to the bridge channel.

  5. Code link: Once a response is received in the func (bp *brokerProducer) run() goroutine, we end up calling this retryMessages function if, for instance, the broker we tried to send the batch/message to crashed or was unavailable for other reasons. In that case we retry the messages that triggered the error by calling retryMessages on line 1146 for pSet.msgs, which sends all elements of the pSet.msgs batch back into the input channel of the asyncProducer manager. We then also send back all messages that were buffered in the brokerProducer (these messages would have been batched and sent to Kafka at some point via AsyncProduce), so that they are retried after the elements of pSet.msgs, because they were produced after the messages in pSet.msgs. The problem is that this retry sequence does not take into account the messages that are currently in flight and for which we have not received a response yet. Because of that, the ordering of the messages changes whenever we have to retry messages that were already in flight when the first error came back.

Surprisingly, part of the issue was already noted in the latest comment of the same PR:

This means setting Net.MaxOpenRequests to 1 and using the AsyncProducer might result in up to 2 in-flight requests, leading to possible ordering issues or maybe an error with idempotency turn on against Kafka 0.11.0.x.
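
To make that accounting problem concrete, here is a toy model of point 3 above. It is not Sarama's actual code: a channel of capacity MaxOpenRequests - 1 whose slot is only claimed after the request bytes are already on the wire still allows MaxOpenRequests + 1 requests in flight.

package main

import "fmt"

func main() {
	const maxOpenRequests = 1

	// Mirrors b.responses: capacity MaxOpenRequests - 1, i.e. unbuffered here.
	responses := make(chan string, maxOpenRequests-1)
	secondOnWire := make(chan struct{})

	// The "AsyncProduce" side: write the request first, then claim a slot
	// on the responses channel for its promise.
	go func() {
		fmt.Println("batch-1 written to the wire")
		responses <- "batch-1" // slot claimed only AFTER the bytes are out

		fmt.Println("batch-2 written to the wire") // second request already in flight
		close(secondOnWire)
		responses <- "batch-2" // only now does the sender block
	}()

	// The response-processing side: by the time it handles the answer for
	// batch-1, batch-2 can already be on the wire, i.e. 2 in-flight requests
	// even though MaxOpenRequests = 1.
	first := <-responses
	<-secondOnWire
	fmt.Println("processing response for", first)
	fmt.Println("processing response for", <-responses)
}

If the response for batch-1 is an error, batch-1 gets retried while batch-2 (already on the wire) may succeed, which is exactly the reordering described in point 5.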

Thanks to @T30rix for helping me with the debugging and writing the detailed flow.

@dnwe dnwe added the needs-investigation Issues that require followup from maintainers label Aug 22, 2023
@napallday
Contributor

napallday commented Aug 24, 2023

@dethi As far as I know, if you set max.in.flight.requests.per.connection to 1 and don't enable the idempotent producer (enable.idempotence=false), the Kafka producer will only have one unacknowledged request in flight at a time, which effectively reduces the likelihood of message reordering during produce retries.

In this scenario, the producer will wait for an acknowledgment (ACK) from the broker before sending the next message. This means that if a message fails to be sent (due to a transient error or a network issue), the producer will retry sending that message before sending any new messages. This mechanism helps maintain a closer order of messages as they were originally sent by the producer.

However, please note that there is still no strict guarantee of message order in this case, as certain scenarios like broker failures or network partitions can introduce potential complexities. Messages might still be reordered if a network issue causes a message to be delayed and another message is sent before the delayed message is successfully delivered.

So in summary, while setting max.in.flight.requests.per.connection to 1 and not enabling idempotence can help reduce the chances of message reordering, it's not a foolproof guarantee, and enabling idempotence is the recommended approach if strict message order is a critical requirement for your use case.

And I'm very curious why the previous versions of Sarama work well only with config.Net.MaxOpenRequests = 1.

Please CMIIW @dnwe
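
For reference, a sketch of the idempotence settings recommended above. As far as I can tell, Sarama's config validation requires RequiredAcks = WaitForAll, Net.MaxOpenRequests = 1, Producer.Retry.Max >= 1 and a protocol version of at least 0.11 whenever Producer.Idempotent is enabled; the snippet only validates the configuration and needs no broker.

package main

import (
	"log"

	"github.com/IBM/sarama"
)

func main() {
	config := sarama.NewConfig()
	config.Version = sarama.V0_11_0_0                // idempotence needs Kafka >= 0.11
	config.Producer.Idempotent = true                // one producer ID, per-partition sequence numbers
	config.Producer.RequiredAcks = sarama.WaitForAll // required together with Idempotent
	config.Net.MaxOpenRequests = 1                   // required together with Idempotent
	config.Producer.Retry.Max = 5                    // must be >= 1; retried duplicates are dropped broker-side
	config.Producer.Return.Successes = true

	// Validate reports which of the above constraints (if any) are violated.
	if err := config.Validate(); err != nil {
		log.Fatal(err)
	}
	log.Println("idempotent producer config is valid")
}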

@T30rix

T30rix commented Aug 24, 2023

@napallday the kafka documentation seems to indicate that if max.in.flight.requests.per.connection is set to 1 then there is no risk of message reordering. If config.Net.MaxOpenRequests is the Sarama equivalent of max.in.flight.requests.per.connection, then this behavior is broken since #2094 was merged, because we can now have config.Net.MaxOpenRequests + 1 requests in flight.

And I'm very curious why the previous versions of Sarama work well only with config.Net.MaxOpenRequests = 1

This is explained in @slaunay 's post here:

I believe the brokerProducer goroutine was previously limited to a single in flight requests being synchronous (even though calling broker.Produce concurrently from multiple goroutines could already result in up to 2 in flight requests when Net.MaxOpenRequests = 1).

@napallday
Contributor

napallday commented Aug 24, 2023

@T30rix I see. It seems the apache/kafka and Confluent Kafka docs have different descriptions here.

In apache/kafka:

Allowing retries while setting enable.idempotence to false and max.in.flight.requests.per.connection to greater than 1 will potentially change the ordering of records because if two batches are sent to a single partition, and the first fails and is retried but the second succeeds, then the records in the second batch may appear first.

In Confluent Kafka:

Allowing retries while setting enable.idempotence to false and max.in.flight.requests.per.connection to 1 will potentially change the ordering of records because if two batches are sent to a single partition, and the first fails and is retried but the second succeeds, then the records in the second batch may appear first.

@tsuna

tsuna commented Sep 15, 2023

So what's the conclusion here? Is this issue considered a bug or not?

@github-actions github-actions bot added the stale Issues and pull requests without any recent activity label Jan 26, 2024
@github-actions github-actions bot removed the stale Issues and pull requests without any recent activity label Jan 26, 2024
@dnwe dnwe added the stale/exempt Issues and pull requests that should never be closed as stale label Jan 30, 2024
@dnwe dnwe added the bug label Mar 12, 2024
@tsuna

tsuna commented Mar 14, 2024

Thanks for finally marking this as a bug 👍

@tsuna

tsuna commented Mar 26, 2024

Is there anyone working on this bug?

@asg0451

asg0451 commented Jun 3, 2024

Any movement on this? This is impacting us significantly.

wk989898 added a commit to wk989898/tiflow that referenced this issue Jun 4, 2024
@dnwe
Collaborator

dnwe commented Jun 4, 2024

It has been investigated on-and-off, using @dethi's sample as a functional producer test, but I don't yet have good news of a resolution to the issue.

@asg0451

asg0451 commented Jun 5, 2024

Ok, thanks for the update @dnwe

@richardartoul
Contributor

Should be fixed by #2943 if someone can review

@barbiedrummer

Are there any updates regarding this bug? It is still impacting our work as well.

@3AceShowHand

Are there any updates regarding this bug? It is still impacting our work as well.

Our service sets config.Net.MaxOpenRequests = 1 and enable.idempotence=false, and we also hit the reordering issue.
