Using Error Policies

This page describes how to use error policies in Stream Reactor sink connectors.

In addition to the dead letter queues provided by Kafka Connect, the Stream Reactor sink connectors support Error Policies to handle failure scenarios.

The sinks have three error policies that determine how failed writes to the target system are handled. These policies let you control the sink's behaviour when it encounters an error while writing records to the target system. Since Kafka retains the records, subject to the topic's configured retention policy, the sink can ignore the error, fail the connector, or attempt redelivery.

| Name | Description | Default Value |
| --- | --- | --- |
| `[connector-prefix].error.policy` | Specifies the action to take if an error occurs while inserting the data. Three options are available: `NOOP` (the error is swallowed), `THROW` (the error is allowed to propagate), and `RETRY` (the Kafka message is redelivered up to the maximum number of times specified by the `[connector-prefix].max.retries` option). | `THROW` |
| `[connector-prefix].max.retries` | The maximum number of times a message is retried. Only valid when `[connector-prefix].error.policy` is set to `RETRY`. | `10` |
| `[connector-prefix].retry.interval` | The interval, in milliseconds, between retries. Only valid when `[connector-prefix].error.policy` is set to `RETRY`. | `60000` |
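As a sketch, a sink configuration enabling the retry policy might look like the following. The `connect.cassandra` prefix is a hypothetical example; substitute the prefix used by your connector.

```properties
# Hypothetical configuration fragment -- replace "connect.cassandra"
# with your connector's actual prefix.
connect.cassandra.error.policy=RETRY
connect.cassandra.max.retries=5
connect.cassandra.retry.interval=30000
```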

Throw

Any error in writing to the target system is propagated up and processing stops. This is the default behaviour.

Noop

Any error in writing to the target system is ignored and processing continues.

Keep in mind that this can lead to missed errors if you don't have adequate monitoring. Data is not lost, as it remains in Kafka subject to Kafka's retention policy. The sink currently does not distinguish between integrity constraint violations and other exceptions thrown by drivers or target systems.

Retry

Any error in writing to the target system causes the RetryIterable exception to be thrown. This makes the Kafka Connect framework pause and replay the message; offsets are not committed. For example, if the table is offline, the write fails and the message can be replayed. With the Retry policy, the issue can be fixed without stopping the sink.
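The retry behaviour above can be illustrated with a minimal sketch. This is not Stream Reactor source code; the class and method names are hypothetical, and the sketch only models the policy's decision logic: signal a retriable error (so the framework redelivers the batch without committing offsets) until the retry budget is exhausted, then let the original error propagate.

```python
class RetriableException(Exception):
    """Signals the framework to pause and redeliver the current batch."""


class RetryErrorPolicy:
    """Illustrative model of a RETRY error policy (hypothetical names)."""

    def __init__(self, max_retries: int = 10):
        self.max_retries = max_retries
        self.remaining = max_retries

    def handle(self, error: Exception) -> None:
        # While retries remain, convert the failure into a retriable
        # signal so the same records are delivered again.
        if self.remaining > 0:
            self.remaining -= 1
            raise RetriableException(f"retrying, {self.remaining} attempts left") from error
        # Budget exhausted: propagate the original error and fail the task.
        raise error

    def reset(self) -> None:
        # A successful write restores the full retry budget.
        self.remaining = self.max_retries
```

For example, with `max_retries=2` a persistent write failure is retried twice and then allowed to propagate, failing the connector.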


2024 © Lenses.io Ltd. Apache, Apache Kafka, Kafka and associated open source project names are trademarks of the Apache Software Foundation.