KafkaHeaders.DELIVERY_ATTEMPT is not added for batch listeners #3407
Comments
If the whole batch fails, the retry behavior would be pretty much the same as if the first record in the batch had failed — is my assumption correct?
I'm not sure what you mean with a …
The logic around this header is not as simple as it sounds.
Seems like my analogy with BatchListenerFailedException is incorrect. So my suggestion would be to call … Currently …
That's correct. But the problem with the rest of the missing API is that we deal with the whole batch. If you have anything in mind, feel free to open a pull request and we will gladly review it.
I am not that familiar with the code; are there any contributors who could take care of it? Maybe the issue should be promoted to a feature request?
Sure! Anyone can take an issue for contribution.
@lm231290 We will keep this in the backlog for the time being. If there is an urgent need for anyone in the community, they can design a solution and contribute a PR. If not, we will take a look at this as time permits.
Hi @artembilan, @sobychacko! When you have time, please take a look 🙇♂️
Version 3.3.2
When using blocking retries for a batch listener, the attempt number does not appear in the KafkaHeaders.DELIVERY_ATTEMPT header, even when setDeliveryAttemptHeader(true) is set.
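For context, a minimal sketch of the kind of batch-listener setup where one would expect the header to be populated, using standard Spring Kafka APIs. The bean name, generic types, and the FixedBackOff values are illustrative, not taken from the issue:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.listener.DefaultErrorHandler;
import org.springframework.util.backoff.FixedBackOff;

@Configuration
public class KafkaBatchConfig {

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> batchFactory(
            ConsumerFactory<String, String> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        factory.setBatchListener(true);
        // Blocking retries: redeliver the failed batch after a 1s back-off, up to 3 times.
        factory.setCommonErrorHandler(new DefaultErrorHandler(new FixedBackOff(1000L, 3L)));
        // Expected to populate KafkaHeaders.DELIVERY_ATTEMPT on each record,
        // but per this issue it has no effect for batch listeners.
        factory.getContainerProperties().setDeliveryAttemptHeader(true);
        return factory;
    }
}
```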
The method KafkaMessageListenerContainer.ListenerConsumer.internalHeaders is called only in ConsumerRecord<K, V> checkEarlyIntercept(ConsumerRecord<K, V> recordArg), which is used for a regular listener, and is not called in ConsumerRecords<K, V> checkEarlyIntercept(ConsumerRecords<K, V> nextArg), which is used for a batch listener.
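To illustrate what the per-record path does and the batch path skips, here is a stdlib-only Java sketch. The classes below are simplified stand-ins, not Spring Kafka's real types; it stamps every record of a redelivered batch with a delivery-attempt header keyed on the first record's position, which matches the assumption above that a whole-batch failure behaves like a failure of the batch's first record:

```java
import java.nio.ByteBuffer;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class BatchDeliveryAttemptSketch {

    /** Simplified stand-in for a consumer record carrying headers. */
    static class Record {
        final String topic; final int partition; final long offset;
        final Map<String, byte[]> headers = new HashMap<>();
        Record(String topic, int partition, long offset) {
            this.topic = topic; this.partition = partition; this.offset = offset;
        }
    }

    /** Tracks attempt counts per batch, keyed by the first record's position. */
    static class DeliveryAttemptTracker {
        private final Map<String, Integer> attempts = new HashMap<>();

        /** Increment the attempt count and stamp every record in the batch. */
        void stampBatch(List<Record> batch) {
            if (batch.isEmpty()) {
                return;
            }
            Record first = batch.get(0);
            String key = first.topic + "-" + first.partition + "@" + first.offset;
            int attempt = attempts.merge(key, 1, Integer::sum);
            // The real header value is a 4-byte big-endian int.
            byte[] value = ByteBuffer.allocate(4).putInt(attempt).array();
            for (Record r : batch) {
                r.headers.put("kafka_deliveryAttempt", value);
            }
        }
    }

    static int readAttempt(Record r) {
        return ByteBuffer.wrap(r.headers.get("kafka_deliveryAttempt")).getInt();
    }

    public static void main(String[] args) {
        DeliveryAttemptTracker tracker = new DeliveryAttemptTracker();
        List<Record> batch = List.of(new Record("t", 0, 100), new Record("t", 0, 101));
        tracker.stampBatch(batch);   // first delivery
        tracker.stampBatch(batch);   // redelivery after a blocking retry
        System.out.println(readAttempt(batch.get(0)));
        System.out.println(readAttempt(batch.get(1)));
    }
}
```

After one redelivery, both records carry attempt number 2, since the whole batch is retried as a unit.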