Record being picked up by multiple threads simultaneously #843

I am using the library v5.3.0, and I noticed that the same record is being picked up by different threads of the same pod at almost the same time. This causes problems during processing: I maintain a counter of processed records, and that count ends up higher than the total number of records because some records are counted twice.

Comments
That doesn't sound right at all - are the records failing and being retried? Otherwise, if the same record is actually picked up more than once for processing in a normal execution scenario, that would be a significant bug. Do you have more information, ideally a test / sample code that reproduces the behaviour?
Hi @rkolesnev,
The situation is like this: the logs show the same record being picked up by two different threads at almost the same time, and both threads belong to the same pod. We have not been able to reproduce it locally yet.
A couple of things that may help shed some light on this:
I've gone through the code and I do not see a possibility of duplicate processing of a message. Work retrieval is single-threaded (through the control loop thread) and uses an iterator plus an in-flight flag to guard against picking up the same messages. For the above scenario to happen, the same work container would need to be submitted more than once into that queue by the single thread that pulls work from the shards - there is no concurrent processing of work selection itself, so there are no race conditions on that code path. The only explanation I can think of is that somehow the task got produced to the topic more than once upstream - and it would have to be with a different key or on a different partition, as otherwise it would be blocked from being picked for work due to key-ordering mode.
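For readers unfamiliar with the design being described, the guard pattern can be pictured roughly like this. This is a minimal sketch of the idea, not parallel-consumer's actual internals; the class and field names (`WorkContainer`, `inFlight`, `selectWork`) are illustrative:

```java
// Illustrative sketch only - not parallel-consumer's real code.
// A single control-loop thread selects work from the shard; an in-flight
// flag prevents the same container from ever being handed out twice.
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class WorkContainer {
    final long offset;
    boolean inFlight = false; // only read/written by the control-loop thread

    WorkContainer(long offset) {
        this.offset = offset;
    }
}

class ControlLoop {
    private final Queue<WorkContainer> shard = new ArrayDeque<>();
    private final BlockingQueue<WorkContainer> workerQueue = new LinkedBlockingQueue<>();

    // Runs on a single thread, so work selection itself cannot race.
    void selectWork() {
        for (WorkContainer wc : shard) {
            if (!wc.inFlight) {
                wc.inFlight = true;      // guard: never submit the same container twice
                workerQueue.offer(wc);   // worker threads take from this queue
            }
        }
    }
}
```

Because only one thread ever flips `inFlight`, duplicate hand-out of the same container would require the flag check itself to be bypassed, which is why an upstream duplicate produce is the more plausible explanation.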
Hi @rkolesnev,
We have added detailed logging so that, in case it's reproduced again, we will have more data to pinpoint the issue.
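One way such diagnostic logging can be structured is to key off topic/partition/offset and record which thread handled each record, so a second sighting is caught immediately. This is a hypothetical sketch, not the logging the reporter actually added:

```java
// Hypothetical duplicate-detection logging - not the reporter's actual code.
// Records the first thread to process each record's coordinates and logs
// loudly if a second thread ever sees the same record.
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import org.apache.kafka.clients.consumer.ConsumerRecord;

class DuplicateDetector {
    private final ConcurrentMap<String, String> seen = new ConcurrentHashMap<>();

    void onRecord(ConsumerRecord<String, String> record) {
        String key = record.topic() + "-" + record.partition() + "@" + record.offset();
        String thread = Thread.currentThread().getName();
        String first = seen.putIfAbsent(key, thread);
        if (first != null) {
            // The same record reached two threads - capture both thread names.
            System.err.printf("DUPLICATE: %s first=%s now=%s%n", key, first, thread);
        }
    }
}
```

Logging both thread names along with the record coordinates would also distinguish a genuine duplicate pickup from an upstream duplicate produce, since the latter would show two distinct offsets.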