How to stop infinite client side errors #260
Comments
I also get these errors when I have messages coming in with an incompatible schema that I need to fix manually. Indeed, I also use the … In my case I get the following error, but I assume it's really the same as @yohei1126's:

This will literally never get resolved without manual intervention, so an infinite retry by default makes no sense here.
This option only works for a backend error or a quota-exceeded error.
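For illustration only (this is not the connector's actual retry code), a guard along these lines is roughly what "retry only on server-side problems" looks like with the google-cloud-bigquery client; the helper class and the specific reason strings are assumptions for the example:

```java
import com.google.cloud.bigquery.BigQueryError;
import com.google.cloud.bigquery.BigQueryException;

// Illustrative sketch, not the connector's real retry logic: only transient
// server-side failures are treated as retryable, which is why a client-side
// error such as an invalid or incompatible schema is never fixed by retrying.
public final class RetryGuard {

    private RetryGuard() {
    }

    static boolean isRetryable(BigQueryException e) {
        BigQueryError error = e.getError();
        if (error == null) {
            return false;
        }
        String reason = error.getReason();
        return "backendError".equals(reason) || "quotaExceeded".equals(reason);
    }
}
```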
Hi! We also get this kind of error when we have an incompatible schema, with …

We found that the connector writes to BigQuery using a separate thread pool, which means that errors encountered during those writes don't immediately cause the task to fail. Those errors aren't swallowed silently, however, and do cause the task to throw an exception from its flush method. Unfortunately, the framework does not fail tasks when they throw exceptions from their flush method (which is indirectly called as a result of invoking SinkTask::preCommit); instead, it logs the error and resets the consumer to the last successful offset.

We are thinking about waiting for task execution in … (a rough sketch of this idea follows below).
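A minimal sketch of what waiting for the asynchronous writes could look like, assuming a sink task that submits writes to its own thread pool. This is not the connector's actual code or the linked PR; the class and helper names are hypothetical. The point is that rethrowing the failure from put() lets the framework fail the task, whereas an exception from flush() is only logged while the consumer is rewound:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

import org.apache.kafka.connect.errors.ConnectException;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;

/**
 * Hypothetical sketch only: writes run on a background thread pool, and each call
 * to put() first waits for the previous batch of writes, rethrowing any failure so
 * the framework fails the task instead of silently resetting the consumer.
 */
public class ExampleAsyncSinkTask extends SinkTask {

    private final ExecutorService writeExecutor = Executors.newFixedThreadPool(4);
    private final List<Future<?>> pendingWrites = new ArrayList<>();

    @Override
    public void put(Collection<SinkRecord> records) {
        // Surface errors from the previous batch before accepting new records.
        awaitPendingWrites();
        for (SinkRecord record : records) {
            pendingWrites.add(writeExecutor.submit(() -> writeRecord(record)));
        }
    }

    private void awaitPendingWrites() {
        try {
            for (Future<?> write : pendingWrites) {
                write.get();
            }
        } catch (ExecutionException e) {
            // Thrown from put(), this fails the task; thrown from flush(), the
            // framework would only log it and reset the consumer offsets.
            throw new ConnectException("Asynchronous BigQuery write failed", e.getCause());
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new ConnectException("Interrupted while waiting for pending writes", e);
        } finally {
            pendingWrites.clear();
        }
    }

    private void writeRecord(SinkRecord record) {
        // Placeholder for the real BigQuery insert; a client-side error such as an
        // incompatible schema would throw here.
    }

    @Override
    public String version() {
        return "0.0.0";
    }

    @Override
    public void start(Map<String, String> props) {
    }

    @Override
    public void stop() {
        writeExecutor.shutdown();
    }
}
```

With a pattern like this, a persistent client-side error surfaces as a task failure rather than an endless log-and-rewind loop.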
Hey @Cyril-Engels, would catching the …?
I see @Cyril-Engels already has a PR that seems like a massive step in the right direction toward making this connector safe to run with topics whose schemas change frequently. Thanks for that. My team would be extremely happy if that makes it into the current master. Please let me know if I can help with the review process in any way.
Hi, I am using `kafka-connect-bigquery` to sync data from PostgreSQL to BigQuery, and I found that these errors repeatedly show up in my Kafka Connect logs. The fields in question are required on both the PostgreSQL and BigQuery tables. I did not change the PostgreSQL schema, so I suspect some messages were accidentally sent to the wrong topic, which would explain these errors. I am not able to stop the errors by setting `bigQueryRetry: 5`, since `bigQueryRetry` only covers backend errors, while the errors below are client-side errors. Could you advise how we can stop such an infinite stream of client-side errors?
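For reference, a hypothetical excerpt of a sink connector configuration with this retry setting; the connector class and option names are as commonly documented for the WePay connector, but exact names can vary by version, and all values here are placeholders rather than a real setup:

```properties
# Hypothetical excerpt of a connector config; names and values are placeholders.
name=bigquery-sink
connector.class=com.wepay.kafka.connect.bigquery.BigQuerySinkConnector
topics=postgres.public.my_table
project=my-gcp-project
# Retries apply to backend / quota-exceeded errors only, so they do not help
# with client-side errors such as a schema mismatch.
bigQueryRetry=5
bigQueryRetryWait=10000
```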