Parallel consumer does not close Kafka consumer if commit fails during close #597
We will investigate this in due time. Thanks

Any update on this?
@BartoszSta - Hmm - I am wondering if that commit-on-close failure was due to the same bug as #648?
@rkolesnev This happened on 0.5.2.5, so probably not. In any case, something could still go wrong during the close operation that prevents the commit from working (a network issue, etc.).
@BartoszSta - yeah - I was looking into it for a bit. It looks like under certain conditions the actual Kafka consumer gets stuck in a metadata update loop and cannot be closed - I am still trying to figure out whether the issue is in the Kafka consumer itself or in how I am closing it.
@rkolesnev It looks like this issue is no longer present in 0.5.3.1 (possibly earlier): exceptions from commitOffsetsThatAreReady() are now checked, and the consumer is closed afterwards.
When AbstractParallelEoSStreamProcessor.close(Duration timeout) is executed, it calls commitOffsetsThatAreReady(). If that operation fails, maybeCloseConsumer() is never reached, so the consumer is not closed.
Leaving the consumer unclosed means it remains in the consumer group for up to max.poll.interval.ms while no longer polling, and it can also prevent other consumers from joining the group.
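A minimal sketch of the close flow described above. The method names commitOffsetsThatAreReady and maybeCloseConsumer come from the issue text, but the surrounding class is a stand-in for illustration, not the library's actual code: the point is that without a try/finally, a commit failure skips the consumer close.

```java
// Illustrative stand-in for the close() flow discussed in the issue.
// commitOffsetsThatAreReady / maybeCloseConsumer mirror the names in the
// report; everything else here is a simplified mock, not library code.
public class CloseFlowSketch {
    static boolean consumerClosed = false;

    static void commitOffsetsThatAreReady() {
        // Simulate the final commit failing during close (e.g. network issue).
        throw new RuntimeException("commit failed during close");
    }

    static void maybeCloseConsumer() {
        consumerClosed = true;
    }

    // Reported behaviour: if the commit throws, the consumer close is skipped.
    static void closeBuggy() {
        commitOffsetsThatAreReady();
        maybeCloseConsumer();
    }

    // One possible fix: guarantee the consumer close with try/finally.
    static void closeFixed() {
        try {
            commitOffsetsThatAreReady();
        } finally {
            maybeCloseConsumer();
        }
    }

    public static void main(String[] args) {
        try { closeBuggy(); } catch (RuntimeException ignored) { }
        System.out.println("after buggy close: consumerClosed=" + consumerClosed);

        try { closeFixed(); } catch (RuntimeException ignored) { }
        System.out.println("after fixed close: consumerClosed=" + consumerClosed);
    }
}
```

With the buggy ordering the consumer is left open (and stays in the group until max.poll.interval.ms expires); with the try/finally variant it is closed even though the commit still throws.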
In my case the issue looked like this:
Of course, since I am the one providing the Kafka consumer to the parallel consumer, I can close it myself - which I will do if the parallel consumer's close fails - but I think this should be handled by the parallel consumer.
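The workaround described above can be sketched as a caller-side defensive close. This is only an illustration of the pattern: the interfaces below are stand-ins for the real KafkaConsumer and parallel consumer, since the application that supplied the consumer can always close it directly.

```java
// Caller-side workaround sketch: if the parallel consumer's close() throws
// (e.g. because the final commit fails), close the underlying consumer
// ourselves so it leaves the consumer group promptly. The Consumer interface
// here is an illustrative stand-in, not the real KafkaConsumer API.
public class DefensiveClose {
    interface Consumer {
        void close();
        boolean isClosed();
    }

    static void closeDefensively(AutoCloseable parallelConsumer, Consumer kafkaConsumer) {
        try {
            parallelConsumer.close(); // may throw if the commit fails during close
        } catch (Exception e) {
            // swallow here for the sketch; a real application would log this
        } finally {
            if (!kafkaConsumer.isClosed()) {
                kafkaConsumer.close(); // ensure the group membership is released
            }
        }
    }
}
```

The finally block makes the application's shutdown independent of whether the parallel consumer's own close path completed.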