From fe30984018f30b2e9f07a6d14e4b8f5160e07ae8 Mon Sep 17 00:00:00 2001 From: Wzy19930507 <1208931582@qq.com> Date: Sun, 18 Feb 2024 13:33:08 +0800 Subject: [PATCH 1/5] Supplement container-props.adoc ContainerProperties properties --- .../kafka/annotation-error-handling.adoc | 2 +- .../ROOT/pages/kafka/container-props.adoc | 45 +++++++++++++------ 2 files changed, 32 insertions(+), 15 deletions(-) diff --git a/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/annotation-error-handling.adoc b/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/annotation-error-handling.adoc index 52f9ec06cc..3a85647f42 100644 --- a/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/annotation-error-handling.adoc +++ b/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/annotation-error-handling.adoc @@ -425,7 +425,7 @@ To replace any `BatchErrorHandler` implementation, you should implement `handleB You should also implement `handleOtherException()` - to handle exceptions that occur outside the scope of record processing (e.g. consumer errors). [[after-rollback]] -== After-rollback Processor +== After Rollback Processor When using transactions, if the listener throws an exception (and an error handler, if present, throws an exception), the transaction is rolled back. By default, any unprocessed records (including the failed record) are re-fetched on the next poll. diff --git a/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/container-props.adoc b/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/container-props.adoc index b126974ebe..361de98955 100644 --- a/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/container-props.adoc +++ b/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/container-props.adoc @@ -30,7 +30,7 @@ See the JavaDocs for `ContainerProperties.AssignmentCommitOption` for more information about the available options. 
|[[asyncAcks]]<>
-|false
+|`false`
|Enable out-of-order commits (see xref:kafka/receiving-messages/ooo-commits.adoc[Manually Committing Offsets]); the consumer is paused and commits are deferred until gaps are filled.

|[[authExceptionRetryInterval]]<>
@@ -38,6 +38,10 @@ See the JavaDocs for `ContainerProperties.AssignmentCommitOption` for more infor
|When not null, a `Duration` to sleep between polls when an `AuthenticationException` or `AuthorizationException` is thrown by the Kafka client.
When null, such exceptions are considered fatal and the container will stop.

+|[[batchRecoverAfterRollback]]<>
+|`false`
+|Set to `true` to enable batch recovery; see xref:kafka/annotation-error-handling.adoc#after-rollback[After Rollback Processor].
+
|[[clientId]]<>
|(empty string)
|A prefix for the `client.id` consumer property.
@@ -57,10 +61,6 @@ Useful when the consumer code cannot determine that an `ErrorHandlingDeserialize
|`null`
|When present and `syncCommits` is `false` a callback invoked after the commit completes.

-|[[offsetAndMetadataProvider]]<>
-|`null`
-|A provider for `OffsetAndMetadata`; by default, the provider creates an offset and metadata with empty metadata. The provider gives a way to customize the metadata.
-
|[[commitLogLevel]]<>
|DEBUG
|The logging level for logs pertaining to committing offsets.
@@ -69,15 +69,15 @@ Useful when the consumer code cannot determine that an `ErrorHandlingDeserialize
|`null`
|A rebalance listener; see xref:kafka/receiving-messages/rebalance-listeners.adoc[Rebalancing Listeners].

-|[[consumerStartTimout]]<>
+|[[commitRetries]]<>
+|3
+|The number of retries for a `RetriableCommitFailedException` when `syncCommits` is set to `true`.
+The default is 3 (4 attempts in total).
+
+|[[consumerStartTimeout]]<>
|30s
|The time to wait for the consumer to start before logging an error; this might happen if, say, you use a task executor with insufficient threads.
-|[[consumerTaskExecutor]]<>
-|`SimpleAsyncTaskExecutor`
-|A task executor to run the consumer threads.
-The default executor creates threads named `-C-n`; with the `KafkaMessageListenerContainer`, the name is the bean name; with the `ConcurrentMessageListenerContainer` the name is the bean name suffixed with `-n` where n is incremented for each child container.
-
|[[deliveryAttemptHeader]]<>
|`false`
|See xref:kafka/annotation-error-handling.adoc#delivery-header[Delivery Attempts Header].
@@ -123,9 +123,14 @@ Also see `idleBeforeDataMultiplier`.
|None
|Used to override any arbitrary consumer properties configured on the consumer factory.

+|[[listenerTaskExecutor]]<>
+|`SimpleAsyncTaskExecutor`
+|A task executor to run the consumer threads.
+The default executor creates threads named `-C-n`; with the `KafkaMessageListenerContainer`, the name is the bean name; with the `ConcurrentMessageListenerContainer` the name is the bean name suffixed with `-n` where n is incremented for each child container.
+
|[[logContainerConfig]]<>
|`false`
-|Set to true to log at INFO level all container properties.
+|Set to `true` to log all container properties at INFO level.

|[[messageListener]]<>
|`null`
@@ -145,7 +150,7 @@ Also see `idleBeforeDataMultiplier`.

|[[missingTopicsFatal]]<>
|`false`
-|When true prevents the container from starting if the confifgured topic(s) are not present on the broker.
+|When `true`, prevents the container from starting if the configured topic(s) are not present on the broker.

|[[monitorInterval]]<>
|30s
@@ -157,9 +162,21 @@ See `noPollThreshold` and `pollTimeout`.
|Multiplied by `pollTimeOut` to determine whether to publish a `NonResponsiveConsumerEvent`.
See `monitorInterval`.

+|[[observationConvention]]<>
+|`null`
+|When set, adds dynamic tags to the timers and traces, based on information in the consumer records.
+
+|[[observationEnabled]]<>
+|`false`
+|Set to `true` to enable observation via Micrometer.
+ +|[[offsetAndMetadataProvider]]<> +|`null` +|A provider for `OffsetAndMetadata`; by default, the provider creates an offset and metadata with empty metadata. The provider gives a way to customize the metadata. + |[[onlyLogRecordMetadata]]<> |`false` -|Set to false to log the complete consumer record (in error, debug logs etc) instead of just `topic-partition@offset`. +|Set to `false` to log the complete consumer record (in error, debug logs etc.) instead of just `topic-partition@offset`. |[[pauseImmediate]]<> |`false` From 81a1679d9dca91e23a7f25d13f31e1836c229055 Mon Sep 17 00:00:00 2001 From: Wzy19930507 <1208931582@qq.com> Date: Sun, 18 Feb 2024 14:30:32 +0800 Subject: [PATCH 2/5] Remove properties for `AbstractListenerContainer` --- .../antora/modules/ROOT/pages/kafka/container-props.adoc | 8 -------- 1 file changed, 8 deletions(-) diff --git a/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/container-props.adoc b/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/container-props.adoc index 361de98955..8ef577d618 100644 --- a/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/container-props.adoc +++ b/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/container-props.adoc @@ -273,14 +273,6 @@ See xref:kafka/annotation-error-handling.adoc#error-handlers[Container Error Han |`ContainerProperties` |The container properties instance. -|[[errorHandler]]<> -|See desc. -|Deprecated - see `commonErrorHandler`. - -|[[genericErrorHandler]]<> -|See desc. -|Deprecated - see `commonErrorHandler`. - |[[groupId2]]<> |See desc. |The `containerProperties.groupId`, if present, otherwise the `group.id` property from the consumer factory. 
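As a companion to the `ContainerProperties` rows documented in the two patches above, a minimal configuration sketch (not part of the patch series; the bean name and `String` generics are arbitrary) might look like this:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.listener.ContainerProperties;

@Configuration
public class ListenerContainerConfig {

    @Bean
    ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            ConsumerFactory<String, String> consumerFactory) {

        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);

        ContainerProperties props = factory.getContainerProperties();
        props.setSyncCommits(true);         // commit offsets synchronously
        props.setCommitRetries(3);          // retries for RetriableCommitFailedException (the documented default)
        props.setLogContainerConfig(true);  // log all container properties at INFO on startup
        props.setMissingTopicsFatal(false); // do not fail startup when configured topics are absent
        return factory;
    }
}
```

Each setter corresponds to a row in the `container-props.adoc` table being edited above.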
From e0c3cbf81919a231a1777f751e2b89ca054bb422 Mon Sep 17 00:00:00 2001
From: Wzy19930507 <1208931582@qq.com>
Date: Mon, 19 Feb 2024 16:00:46 +0800
Subject: [PATCH 3/5] polish doc annotation-error-handling.adoc

---
 .../modules/ROOT/pages/kafka/annotation-error-handling.adoc | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/annotation-error-handling.adoc b/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/annotation-error-handling.adoc
index 3a85647f42..c83c2bbf0b 100644
--- a/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/annotation-error-handling.adoc
+++ b/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/annotation-error-handling.adoc
@@ -65,8 +65,7 @@ In either case, you should NOT perform any seeks on the consumer because the con
Starting with version 2.8, the legacy `ErrorHandler` and `BatchErrorHandler` interfaces have been superseded by a new `CommonErrorHandler`.
These error handlers can handle errors for both record and batch listeners, allowing a single listener container factory to create containers for both types of listener.
-`CommonErrorHandler` implementations to replace most legacy framework error handler implementations are provided and the legacy error handlers deprecated.
-The legacy interfaces are still supported by listener containers and listener container factories; they will be deprecated in a future release.
+`CommonErrorHandler` implementations are provided to replace most legacy framework error handler implementations.
See xref:kafka/annotation-error-handling.adoc#migrating-legacy-eh[Migrating Custom Legacy Error Handler Implementations to `CommonErrorHandler`] for information to migrate custom error handlers to `CommonErrorHandler`.
From 95b67de57a1ffd6422cda72a1ee2f189375526ad Mon Sep 17 00:00:00 2001 From: Wzy19930507 <1208931582@qq.com> Date: Sun, 18 Feb 2024 22:12:41 +0800 Subject: [PATCH 4/5] Polish micrometer.adoc modify a Micrometer Tracing link change https://micrometer.io/docs/tracing to https://docs.micrometer.io/tracing/reference/index.html --- .../main/antora/modules/ROOT/pages/kafka/micrometer.adoc | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/micrometer.adoc b/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/micrometer.adoc index cdfd498422..886cbb8f51 100644 --- a/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/micrometer.adoc +++ b/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/micrometer.adoc @@ -24,7 +24,7 @@ NOTE: With the concurrent container, timers are created for each thread and the [[monitoring-kafkatemplate-performance]] == Monitoring KafkaTemplate Performance -Starting with version 2.5, the template will automatically create and update Micrometer `Timer`+++s for send operations, if `Micrometer` is detected on the classpath, and a single `MeterRegistry` is present in the application context. +Starting with version 2.5, the template will automatically create and update Micrometer `Timer`+++s+++ for send operations, if `Micrometer` is detected on the classpath, and a single `MeterRegistry` is present in the application context. The timers can be disabled by setting the template's `micrometerEnabled` property to `false`. Two timers are maintained - one for successful calls to the listener and one for failures. @@ -95,7 +95,7 @@ Using Micrometer for observation is now supported, since version 3.0, for the `K Set `observationEnabled` to `true` on the `KafkaTemplate` and `ContainerProperties` to enable observation; this will disable xref:kafka/micrometer.adoc[Micrometer Timers] because the timers will now be managed with each observation. 
-Refer to https://micrometer.io/docs/tracing[Micrometer Tracing] for more information.
+Refer to https://docs.micrometer.io/tracing/reference/index.html[Micrometer Tracing] for more information.

To add tags to timers/traces, configure a custom `KafkaTemplateObservationConvention` or `KafkaListenerObservationConvention` to the template or listener container, respectively.

@@ -109,6 +109,6 @@ Starting with version 3.0.6, you can add dynamic tags to the timers and traces,
To do so, add a custom `KafkaListenerObservationConvention` and/or `KafkaTemplateObservationConvention` to the listener container properties or `KafkaTemplate` respectively.
The `record` property in both observation contexts contains the `ConsumerRecord` or `ProducerRecord` respectively.

-The sender and receiver contexts' `remoteServiceName` properties are set to the Kafka `clusterId` property; this is retrieved by a `KafkaAdmin`.
+The sender and receiver contexts' `remoteServiceName` properties are set to the Kafka `clusterId` property; this is retrieved by a `KafkaAdmin`.
If, for some reason - perhaps lack of admin permissions, you cannot retrieve the cluster id, starting with version 3.1, you can set a manual `clusterId` on the `KafkaAdmin` and inject it into `KafkaTemplate` s and listener containers.
When it is `null` (default), the admin will invoke the `describeCluster` admin operation to retrieve it from the broker.
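The custom-convention mechanism described in the patch above can be sketched as follows (illustrative only, not part of the patch; the class name and the `kafka.topic` tag key are arbitrary):

```java
import io.micrometer.common.KeyValues;
import org.springframework.kafka.support.micrometer.KafkaListenerObservationConvention;
import org.springframework.kafka.support.micrometer.KafkaRecordReceiverContext;

// Tags each listener observation with the topic of the incoming record.
public class TopicTagObservationConvention implements KafkaListenerObservationConvention {

    @Override
    public KeyValues getLowCardinalityKeyValues(KafkaRecordReceiverContext context) {
        // The receiver context's record property holds the incoming ConsumerRecord
        return KeyValues.of("kafka.topic", context.getRecord().topic());
    }
}
```

Such a convention would be registered on the container via the `observationConvention` container property documented in `container-props.adoc`.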
From a128a7cde11e63532b77f8455acacf7cc45613ff Mon Sep 17 00:00:00 2001
From: Wzy19930507 <1208931582@qq.com>
Date: Mon, 19 Feb 2024 10:38:02 +0800
Subject: [PATCH 5/5] fix pause-resume.adoc ref

---
 .../main/antora/modules/ROOT/pages/kafka/pause-resume.adoc | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/pause-resume.adoc b/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/pause-resume.adoc
index c06436ba35..58f55aa34d 100644
--- a/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/pause-resume.adoc
+++ b/spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/pause-resume.adoc
@@ -13,11 +13,11 @@ Starting with version 2.1.5, you can call `isPauseRequested()` to see if `pause(
However, the consumers might not have actually paused yet.
`isConsumerPaused()` returns true if all `Consumer` instances have actually paused.

-In addition (also since 2.1.5), `ConsumerPausedEvent` and `ConsumerResumedEvent` instances are published with the container as the `source` property and the `TopicPartition` instances involved in the `partitions` property.
+In addition (also since 2.1.5), `ConsumerPausedEvent` and `ConsumerResumedEvent` instances are published with the container as the `source` property and the `TopicPartition` instances involved in the `partitions` property.

Starting with version 2.9, a new container property `pauseImmediate`, when set to true, causes the pause to take effect after the current record is processed.
-By default, the pause takes effect when all of the records from the previous poll have been processed.
-See <>.
+By default, the pause takes effect when all the records from the previous poll have been processed.
+See xref:kafka/container-props.adoc#pauseImmediate[pauseImmediate].
The following simple Spring Boot application demonstrates by using the container registry to get a reference to a `@KafkaListener` method's container and pausing or resuming its consumers as well as receiving the corresponding events:
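For reference, the registry-based pause/resume flow discussed in the patch above can be sketched like this (a minimal illustration, not part of the patch series; the service name and listener ids are hypothetical):

```java
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.kafka.listener.MessageListenerContainer;
import org.springframework.stereotype.Component;

@Component
public class PauseResumeService {

    private final KafkaListenerEndpointRegistry registry;

    public PauseResumeService(KafkaListenerEndpointRegistry registry) {
        this.registry = registry;
    }

    public void pause(String listenerId) {
        MessageListenerContainer container = this.registry.getListenerContainer(listenerId);
        if (container != null) {
            container.pause(); // a ConsumerPausedEvent is published once the consumers actually pause
        }
    }

    public boolean isPaused(String listenerId) {
        MessageListenerContainer container = this.registry.getListenerContainer(listenerId);
        // isPauseRequested() may return true before isConsumerPaused() does
        return container != null && container.isConsumerPaused();
    }

    public void resume(String listenerId) {
        MessageListenerContainer container = this.registry.getListenerContainer(listenerId);
        if (container != null) {
            container.resume();
        }
    }
}
```

The listener id is the `id` attribute of the corresponding `@KafkaListener` annotation.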