Ref doc improvements (#3045)
* Supplement container-props.adoc ContainerProperties properties
* Remove properties for `AbstractListenerContainer`
* Polish annotation-error-handling.adoc
* Polish micrometer.adoc
* Update the Micrometer Tracing link from https://micrometer.io/docs/tracing to
  https://docs.micrometer.io/tracing/reference/index.html
* fix pause-resume.adoc ref
Wzy19930507 authored Feb 21, 2024
1 parent 81c5329 commit 4ce75ee
Showing 4 changed files with 39 additions and 31 deletions.
@@ -65,8 +65,7 @@ In either case, you should NOT perform any seeks on the consumer because the con

Starting with version 2.8, the legacy `ErrorHandler` and `BatchErrorHandler` interfaces have been superseded by a new `CommonErrorHandler`.
These error handlers can handle errors for both record and batch listeners, allowing a single listener container factory to create containers for both types of listener.
`CommonErrorHandler` implementations to replace most legacy framework error handler implementations are provided.

See xref:kafka/annotation-error-handling.adoc#migrating-legacy-eh[Migrating Custom Legacy Error Handler Implementations to `CommonErrorHandler`] for information to migrate custom error handlers to `CommonErrorHandler`.

@@ -425,7 +424,7 @@ To replace any `BatchErrorHandler` implementation, you should implement `handleB
You should also implement `handleOtherException()` to handle exceptions that occur outside the scope of record processing (for example, consumer errors).
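For illustration, a minimal sketch of wiring a `CommonErrorHandler` into a container factory; the bean name, recovery action, and back-off values here are illustrative assumptions, not part of the original text, and spring-kafka 3.x is assumed on the classpath:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.listener.DefaultErrorHandler;
import org.springframework.util.backoff.FixedBackOff;

public class ErrorHandlingConfig {

    @Bean
    ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            ConsumerFactory<String, String> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        // DefaultErrorHandler is a CommonErrorHandler implementation; here it
        // retries a failed record twice, 1 second apart, then calls the recoverer.
        factory.setCommonErrorHandler(new DefaultErrorHandler(
                (record, ex) -> System.err.println("Giving up on " + record + ": " + ex),
                new FixedBackOff(1000L, 2L)));
        return factory;
    }
}
```

The same handler instance serves both record and batch listeners, which is the point of the `CommonErrorHandler` abstraction.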

[[after-rollback]]
== After Rollback Processor

When using transactions, if the listener throws an exception (and an error handler, if present, throws an exception), the transaction is rolled back.
By default, any unprocessed records (including the failed record) are re-fetched on the next poll.
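A hedged sketch of configuring an after-rollback processor on a container factory; the recovery action and back-off values are illustrative assumptions:

```java
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.listener.DefaultAfterRollbackProcessor;
import org.springframework.util.backoff.FixedBackOff;

public class AfterRollbackConfig {

    void configure(ConcurrentKafkaListenerContainerFactory<String, String> factory) {
        // After the back-off is exhausted (2 re-deliveries here), the failed
        // record goes to the recoverer instead of being re-fetched indefinitely.
        factory.setAfterRollbackProcessor(new DefaultAfterRollbackProcessor<>(
                (record, ex) -> System.err.println("Recovered after rollback: " + record),
                new FixedBackOff(0L, 2L)));
    }
}
```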
@@ -30,14 +30,18 @@
See the JavaDocs for `ContainerProperties.AssignmentCommitOption` for more information about the available options.

|[[asyncAcks]]<<asyncAcks,`asyncAcks`>>
|`false`
|Enable out-of-order commits (see xref:kafka/receiving-messages/ooo-commits.adoc[Manually Committing Offsets]); the consumer is paused and commits are deferred until gaps are filled.

|[[authExceptionRetryInterval]]<<authExceptionRetryInterval,`authExceptionRetryInterval`>>
|`null`
|When not null, a `Duration` to sleep between polls when an `AuthenticationException` or `AuthorizationException` is thrown by the Kafka client.
When null, such exceptions are considered fatal and the container will stop.

|[[batchRecoverAfterRollback]]<<batchRecoverAfterRollback,`batchRecoverAfterRollback`>>
|`false`
|Set to `true` to enable batch recovery; see xref:kafka/annotation-error-handling.adoc#after-rollback[After Rollback Processor].

|[[clientId]]<<clientId,`clientId`>>
|(empty string)
|A prefix for the `client.id` consumer property.
@@ -57,10 +61,6 @@ Useful when the consumer code cannot determine that an `ErrorHandlingDeserialize
|`null`
|When present and `syncCommits` is `false`, a callback invoked after the commit completes.

|[[commitLogLevel]]<<commitLogLevel,`commitLogLevel`>>
|DEBUG
|The logging level for logs pertaining to committing offsets.
@@ -69,15 +69,15 @@ Useful when the consumer code cannot determine that an `ErrorHandlingDeserialize
|`null`
|A rebalance listener; see xref:kafka/receiving-messages/rebalance-listeners.adoc[Rebalancing Listeners].

|[[commitRetries]]<<commitRetries,`commitRetries`>>
|3
|The number of retries for a `RetriableCommitFailedException` when `syncCommits` is `true`.
Default 3 (4 attempts total).

|[[consumerStartTimeout]]<<consumerStartTimeout,`consumerStartTimeout`>>
|30s
|The time to wait for the consumer to start before logging an error; this might happen if, say, you use a task executor with insufficient threads.

|[[deliveryAttemptHeader]]<<deliveryAttemptHeader,`deliveryAttemptHeader`>>
|`false`
|See xref:kafka/annotation-error-handling.adoc#delivery-header[Delivery Attempts Header].
@@ -123,9 +123,14 @@ Also see `idleBeforeDataMultiplier`.
|None
|Used to override any arbitrary consumer properties configured on the consumer factory.

|[[listenerTaskExecutor]]<<listenerTaskExecutor,`listenerTaskExecutor`>>
|`SimpleAsyncTaskExecutor`
|A task executor to run the consumer threads.
The default executor creates threads named `<name>-C-n`; with the `KafkaMessageListenerContainer`, the name is the bean name; with the `ConcurrentMessageListenerContainer` the name is the bean name suffixed with `-n` where n is incremented for each child container.

|[[logContainerConfig]]<<logContainerConfig,`logContainerConfig`>>
|`false`
|Set to `true` to log at INFO level all container properties.

|[[messageListener]]<<messageListener,`messageListener`>>
|`null`
@@ -145,7 +150,7 @@ Also see `idleBeforeDataMultiplier`.

|[[missingTopicsFatal]]<<missingTopicsFatal,`missingTopicsFatal`>>
|`false`
|When `true`, prevents the container from starting if the configured topic(s) are not present on the broker.

|[[monitorInterval]]<<monitorInterval,`monitorInterval`>>
|30s
@@ -157,9 +162,21 @@ See `noPollThreshold` and `pollTimeout`.
|Multiplied by `pollTimeout` to determine whether to publish a `NonResponsiveConsumerEvent`.
See `monitorInterval`.

|[[observationConvention]]<<observationConvention,`observationConvention`>>
|`null`
|When set, adds dynamic tags to the timers and traces, based on information in the consumer records.

|[[observationEnabled]]<<observationEnabled,`observationEnabled`>>
|`false`
|Set to `true` to enable observation via Micrometer.

|[[offsetAndMetadataProvider]]<<offsetAndMetadataProvider,`offsetAndMetadataProvider`>>
|`null`
|A provider for `OffsetAndMetadata`; by default, the provider creates an offset and metadata with empty metadata. The provider gives a way to customize the metadata.

|[[onlyLogRecordMetadata]]<<onlyLogRecordMetadata,`onlyLogRecordMetadata`>>
|`false`
|Set to `false` to log the complete consumer record (in error, debug logs etc.) instead of just `topic-partition@offset`.

|[[pauseImmediate]]<<pauseImmediate,`pauseImmediate`>>
|`false`
@@ -256,14 +273,6 @@ See xref:kafka/annotation-error-handling.adoc#error-handlers[Container Error Han
|`ContainerProperties`
|The container properties instance.

|[[groupId2]]<<groupId2,`groupId`>>
|See desc.
|The `containerProperties.groupId`, if present, otherwise the `group.id` property from the consumer factory.
@@ -24,7 +24,7 @@ NOTE: With the concurrent container, timers are created for each thread and the
[[monitoring-kafkatemplate-performance]]
== Monitoring KafkaTemplate Performance

Starting with version 2.5, the template will automatically create and update Micrometer `Timer`+++s+++ for send operations, if `Micrometer` is detected on the classpath, and a single `MeterRegistry` is present in the application context.
The timers can be disabled by setting the template's `micrometerEnabled` property to `false`.
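As a sketch (assuming an existing `KafkaTemplate` bean; the tag values are illustrative), the timers can be disabled, or kept and given extra static tags:

```java
import java.util.Map;
import org.springframework.kafka.core.KafkaTemplate;

public class TemplateMetricsConfig {

    void customize(KafkaTemplate<String, String> template) {
        template.setMicrometerEnabled(false); // turn the send timers off entirely
        // or, keep them and add static tags ("orders" is an illustrative value):
        // template.setMicrometerTags(Map.of("app", "orders"));
    }
}
```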

Two timers are maintained - one for successful calls to the listener and one for failures.
@@ -95,7 +95,7 @@ Using Micrometer for observation is now supported, since version 3.0, for the `K

Set `observationEnabled` to `true` on the `KafkaTemplate` and `ContainerProperties` to enable observation; this will disable xref:kafka/micrometer.adoc[Micrometer Timers] because the timers will now be managed with each observation.

Refer to https://docs.micrometer.io/tracing/reference/index.html[Micrometer Tracing] for more information.

To add tags to timers/traces, configure a custom `KafkaTemplateObservationConvention` or `KafkaListenerObservationConvention` to the template or listener container, respectively.

@@ -109,6 +109,6 @@ Starting with version 3.0.6, you can add dynamic tags to the timers and traces,
To do so, add a custom `KafkaListenerObservationConvention` and/or `KafkaTemplateObservationConvention` to the listener container properties or `KafkaTemplate` respectively.
The `record` property in both observation contexts contains the `ConsumerRecord` or `ProducerRecord` respectively.
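A hedged sketch of such a convention for the listener side; the class name, tag name, and topic-to-alias mapping are illustrative assumptions (spring-kafka 3.x interfaces assumed):

```java
import io.micrometer.common.KeyValues;
import org.springframework.kafka.support.micrometer.KafkaListenerObservationConvention;
import org.springframework.kafka.support.micrometer.KafkaRecordReceiverContext;

// Sketch: adds one tag per observation, derived from the ConsumerRecord.
public class TopicAliasObservationConvention implements KafkaListenerObservationConvention {

    @Override
    public KeyValues getLowCardinalityKeyValues(KafkaRecordReceiverContext context) {
        // context.getRecord() exposes the ConsumerRecord; keep tag values
        // low-cardinality in real use to avoid metric explosion.
        return KeyValues.of("kafka.topic.alias", aliasFor(context.getRecord().topic()));
    }

    private String aliasFor(String topic) {
        return topic.startsWith("orders") ? "orders" : "other"; // illustrative mapping
    }
}
```

The convention would then be set on the listener container properties (the `observationConvention` container property) or, for the template side, a `KafkaTemplateObservationConvention` on the `KafkaTemplate`.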

The sender and receiver contexts' `remoteServiceName` properties are set to the Kafka `clusterId` property; this is retrieved by a `KafkaAdmin`.
If, for some reason (perhaps lack of admin permissions), you cannot retrieve the cluster id, starting with version 3.1, you can set a manual `clusterId` on the `KafkaAdmin` and inject it into `KafkaTemplate` s and listener containers.
When it is `null` (default), the admin will invoke the `describeCluster` admin operation to retrieve it from the broker.
@@ -13,11 +13,11 @@ Starting with version 2.1.5, you can call `isPauseRequested()` to see if `pause(
However, the consumers might not have actually paused yet.
`isConsumerPaused()` returns `true` if all `Consumer` instances have actually paused.

In addition (also since 2.1.5), `ConsumerPausedEvent` and `ConsumerResumedEvent` instances are published with the container as the `source` property and the `TopicPartition` instances involved in the `partitions` property.

Starting with version 2.9, a new container property `pauseImmediate`, when set to `true`, causes the pause to take effect after the current record is processed.
By default, the pause takes effect when all the records from the previous poll have been processed.
See xref:kafka/container-props.adoc#pauseImmediate[pauseImmediate].
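A minimal sketch of pausing and resuming via the endpoint registry; the `"myListener"` id is an illustrative `@KafkaListener` id, not from the original text:

```java
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.kafka.listener.MessageListenerContainer;

public class PauseResumeDemo {

    void pauseAndResume(KafkaListenerEndpointRegistry registry) {
        MessageListenerContainer container = registry.getListenerContainer("myListener");
        container.pause();                                     // request a pause
        boolean requested = container.isPauseRequested();      // true right away
        boolean paused = container.isConsumerPaused();         // true once consumers actually pause
        container.resume();
    }
}
```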

The following simple Spring Boot application demonstrates using the container registry to get a reference to a `@KafkaListener` method's container, pausing or resuming its consumers, and receiving the corresponding events:

