Sinks
This page details the configuration options for the Stream Reactor Kafka Connect sink connectors.
Sink connectors read data from Kafka and write to an external system.
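A sink is configured with a connector class, the Kafka topic(s) to read from, and connection details for the target system. As a minimal sketch following the Stream Reactor MQTT sink's documented property names (the connector name, topic, and broker address below are hypothetical, and the exact class name should be checked against the connector's own page for your Stream Reactor version):

# Illustrative sink configuration; names and addresses are placeholders
name=mqtt-sink
connector.class=io.lenses.streamreactor.connect.mqtt.sink.MqttSinkConnector
# Kafka topic(s) the sink reads from
topics=orders
# Target MQTT broker and the Kafka-to-MQTT mapping (KCQL)
connect.mqtt.hosts=tcp://mqtt-broker:1883
connect.mqtt.kcql=INSERT INTO /orders SELECT * FROM orders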
Kafka topic retention policies determine how long a message is retained in a topic before it is deleted. If the retention period expires before the connector has processed the messages, for example because the connector was not running, the unprocessed data is deleted as per the retention policy. This can lead to significant data loss, since those messages are no longer available for the connector to sink to the target system.
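To reduce this risk, make sure the topic's retention comfortably exceeds the connector's worst-case downtime. As an illustrative sketch using standard Kafka tooling (the broker address and topic name are hypothetical):

# Show any retention override currently set on the topic
kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name orders --describe
# Raise retention to 7 days (604800000 ms)
kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name orders --alter --add-config retention.ms=604800000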
The data lake sink connectors natively support exactly-once delivery guarantees.
Field names in Kafka message headers or values may contain dots (.). To access these correctly, enclose the entire target in backticks (`) and each segment that is a field name in single quotes ('), for example: `_value.'field.name.with.dots'`
For field names with spaces or special characters, use a similar escaping strategy:
Field name with a space: `_value.'full name'`
Field name with special characters: `_value.'$special_characters!'`
This ensures the connector correctly extracts the intended fields and avoids parsing errors.
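As a sketch of where such escaped paths are used (the bucket, prefix, and topic names here are hypothetical), a data lake sink KCQL statement could partition its output by an escaped field:

INSERT INTO my-bucket:my-prefix
SELECT * FROM my-topic
PARTITIONBY `_value.'full name'`

Without the backticks and single quotes, the space in the field name would cause a parsing error, and dots inside a field name would be read as path separators.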