Error policies

In addition to the dead letter queues provided by Kafka Connect, the Stream Reactor sink connectors support Error Policies to handle failure scenarios.

The sinks support three error policies that determine how failed writes to the target system are handled. These policies control the behavior of the sink when it encounters an error while writing records. Because Kafka retains the records, subject to the configured retention policy of the topic, the sink can ignore the error, fail the connector, or attempt redelivery.

| Name | Description | Default Value |
|------|-------------|---------------|
| [connector-prefix].error.policy | Specifies the action to be taken if an error occurs while inserting the data. There are three available options: NOOP, the error is swallowed; THROW, the error is allowed to propagate; RETRY, the Kafka message is redelivered up to a maximum number of times specified by the [connector-prefix].max.retries option. | THROW |
| [connector-prefix].max.retries | The maximum number of times a message is retried. Only valid when [connector-prefix].error.policy is set to RETRY. | 10 |
| [connector-prefix].retry.interval | The interval, in milliseconds, between retries. Only valid when [connector-prefix].error.policy is set to RETRY. | 60000 |
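
The sketch below is an illustrative example of how these options appear in a connector configuration submitted to the Kafka Connect REST API. The connector class placeholder, connector name, topic, worker URL and the connect.cassandra option prefix are assumptions for illustration only; substitute the prefix and values used by the sink you are deploying.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RegisterSinkWithRetryPolicy {
    public static void main(String[] args) throws Exception {
        // Example connector configuration; the class, name, topic and option
        // prefix are placeholders to be replaced with your sink's values.
        String config = """
            {
              "name": "sink-with-retry-policy",
              "config": {
                "connector.class": "<sink connector class>",
                "topics": "orders",
                "connect.cassandra.error.policy": "RETRY",
                "connect.cassandra.max.retries": "10",
                "connect.cassandra.retry.interval": "60000"
              }
            }
            """;

        // Register the connector with a Connect worker (default REST port 8083).
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://localhost:8083/connectors"))
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(config))
            .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```

With the values shown, the sink attempts redelivery every 60 seconds for up to 10 attempts before the task fails.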

Throw 

Any error on write to the target system will be propagated up and processing is stopped. This is the default behavior.

Noop 

Any error on write to the target system is ignored and processing continues.

Retry 

Any error on write to the target system causes a RetriableException to be thrown. This causes the Kafka Connect framework to pause and replay the message; offsets are not committed. For example, if the target table is offline, writes fail and the messages are replayed once it is available again. With the Retry policy, the issue can be fixed without stopping the sink.
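
As a rough illustration of the mechanics described above, the sketch below shows how a sink task can apply a RETRY-style policy using the standard Kafka Connect hooks (RetriableException and SinkTaskContext.timeout). The class, field names and retry bookkeeping are assumptions for illustration and are not the Stream Reactor implementation.

```java
import java.util.Collection;

import org.apache.kafka.connect.errors.ConnectException;
import org.apache.kafka.connect.errors.RetriableException;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;

// Illustrative sketch only: a sink task applying a RETRY-style error policy.
public abstract class RetryPolicySinkTask extends SinkTask {

    // These would be initialised in start() from the connector configuration,
    // e.g. [connector-prefix].max.retries and [connector-prefix].retry.interval.
    protected int maxRetries = 10;
    protected long retryIntervalMs = 60_000L;

    private int remainingRetries = maxRetries;

    /** Writes a batch of records to the target system; hypothetical hook. */
    protected abstract void write(Collection<SinkRecord> records) throws Exception;

    @Override
    public void put(Collection<SinkRecord> records) {
        try {
            write(records);
            remainingRetries = maxRetries; // reset the budget after a successful write
        } catch (Exception e) {
            if (remainingRetries-- <= 0) {
                // Retries exhausted: fail the task. Offsets are not committed.
                throw new ConnectException("Write failed after " + maxRetries + " retries", e);
            }
            // Ask the framework to back off, then redeliver the same batch.
            context.timeout(retryIntervalMs);
            throw new RetriableException(e);
        }
    }
}
```

Because the offsets of the failed batch are never committed, the redelivered records are the same ones that originally failed, so no data is skipped while the target system is unavailable.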