What's new?
This page details the release notes of the Lenses K2K replicator.
Version 0.4.0
Summary
At-least-once delivery: Fixed an issue that could lead to data loss when producers failed to reliably receive timely write acknowledgements from the broker (e.g. due to quota limits or network instability). K2K now includes mitigations to minimize this risk, and an explicit error is returned before data loss can occur (see the producer-tuning sketch after this list).
Dependency Packaging: All essential Google Managed Kafka dependencies are now included with K2K.
Enhanced Auto-topic Creation: Prevents failures when replicating from newer clusters to older versions, ensuring smooth topic creation despite configuration differences.
Metrics Exposure: Committed offsets are now exposed as metrics for integration with tools like Prometheus (see the scrape-config sketch after this list).
Connectivity Support: The connectivity troubleshooting module is now disabled by default.
License: The free license requirement has been removed.
Google Managed Kafka: When topic auto-creation is enabled and Google Managed Kafka is the target environment, failures might occur because Google Managed Kafka does not allow setting some properties, such as segment.bytes and local.retention.ms. It is therefore advised, for the time being, to provide the following values for topicCreation.replication.common.config:

topicCreation:
  replication:
    common:
      config:
        "segment.bytes": null
        "local.retention.ms": null
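For context on the acknowledgement issue above, write acknowledgements are governed by standard Kafka producer settings. Below is a minimal sketch of conservative producer tuning, assuming such stock Kafka properties are passed through via the target.kafka.producer section; these are standard Kafka producer options, not K2K-specific defaults.

target:
  kafka:
    producer:
      # Require acknowledgement from all in-sync replicas before a write succeeds.
      "acks": "all"
      # Fail with an explicit error instead of silently dropping records
      # that never receive a timely acknowledgement.
      "delivery.timeout.ms": "120000"
      # Avoid duplicate writes when unacknowledged sends are retried.
      "enable.idempotence": "true"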
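The committed-offset metrics can then be scraped like any other Prometheus target. A minimal sketch of a scrape configuration, where k2k-host:9102 is a hypothetical address; check your deployment for the actual metrics endpoint.

scrape_configs:
  - job_name: "k2k-replicator"
    static_configs:
      # Hypothetical host and port; substitute the address where K2K exposes metrics.
      - targets: ["k2k-host:9102"]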
Version 0.3.0
Summary
Centralized Schema Registry and Kafka connectivity configuration:
  - Schema Registry is now specified in source|target.registry.config|headers|cacheSize.
  - Kafka configuration is now specified in source|target.kafka.[common|consumer|admin].
Kafka connectivity validation is on by default and can be toggled off using features.checkKafkaConnectionOnStartup (see the sketch after this list).
com.google.cloud.google-cloud-managedkafka is now packaged with K2K.
Added fine-grained control for schema replication, configured through schemaMapping.topics.
Metrics messages and control messages are now all part of the same ControlMessage definition hierarchy.
Added support for injecting environment variables and/or file contents in the pipeline definition file: substitution expressions (e.g. ${env:base64:ENV_VARIABLE}) are replaced by their values (see the sketch after this list).
Performance improvements.
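A minimal sketch of the connectivity toggle and substitution expressions mentioned above, assuming a 0.3.0-layout pipeline definition file; the surrounding keys are illustrative only.

features:
  # Disable the Kafka connectivity check performed on startup (on by default).
  checkKafkaConnectionOnStartup: false
source:
  registry:
    config:
      # Replaced at load time with the raw value of the REGISTRY_URL
      # environment variable.
      "schema.registry.url": "${env:raw:REGISTRY_URL}"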
Migration
| Old path | New path | Notes |
| --- | --- | --- |
| .parameters | .configuration | renamed |
| .coordination.kafka.consumer | .target.kafka.consumer | moved |
| .coordination.kafka.charset | | removed |
| .coordination.kafka.commit.group | | removed; the source consumer's group.id is now used |
| .coordination.kafka.assignement | .coordination.assignement | kafka path segment removed |
| .coordination.kafka.commit | .coordination.commit | kafka path segment removed |
| .source.kafka.connection.servers | .source.kafka.common."bootstrap.servers" | moved |
| .source.kafka.consumer | .source.kafka.consumer | inherits all properties defined under source.kafka.common |
| .source.kafka.registry.supportedTypes | .schemaMapping.supportedTypes | source and target schema registries use the same values |
| .source.kafka.registry.url | .source.registry.config."schema.registry.url" | |
| .source.kafka.registry | .source.registry | |
| .target.kafka.connection.servers | .target.kafka.common."bootstrap.servers" | |
| .target.kafka.consumer | .target.kafka.consumer | inherits all properties defined under target.kafka.common |
| .target.kafka.producer | .target.kafka.producer | inherits all properties defined under target.kafka.common |
| .target.kafka.registry.supportedTypes | .schemaMapping.supportedTypes | source and target schema registries use the same values |
| .target.kafka.registry.url | .target.registry.config."schema.registry.url" | |
| .target.kafka.registry | .target.registry | |
| .features.offsetCommitOptimizePartition | .features.optimizeOffsetCommitPartition | |
Example:
name: "simple_pipeline"
source:
kafka:
common:
"bootstrap.servers": "localhost:9092"
"sasl.mechanism": "PLAIN"
"sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=userName password=pwd;"
"ssl.truststore.type": "JKS"
"ssl.truststore.location": "/etc/some/path"
"ssl.truststore.password": "some-password"
"security.protocol": "SASL_SSL"
consumer:
"group.id": "my-group"
registry:
config:
"schema.registry.url": "${env:raw:REGISTRY_URL}"
"schema.registry.ssl.truststore.type": "JSK"
"schema.registry.ssl.truststore.location": "/some/other/path"
"schema.registry.ssl.truststore.password": "some-sr-password"
target:
kafka:
common:
"bootstrap.servers": "localhost:9099"
"sasl.mechanism": "PLAIN"
"sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=userName password=pwd;"
"ssl.truststore.type": "JKS"
"ssl.truststore.location": "/etc/some/path"
"ssl.truststore.password": "some-password"
"security.protocol": "SASL_SSL"
registry:
config:
"schema.registry.url": "localhost:8081"
"schema.registry.ssl.truststore.type": "JSK"
"schema.registry.ssl.truststore.location": "/some/other/path"
"schema.registry.ssl.truststore.password": "some-sr-password"
replication:
- source:
name: source
topic: topic.*
- sink:
topic: source
name: sink-source-topic
partition: source
Version 0.1.0
Warning
Please note that from this version, a free license token is required. Ensure you obtain one to continue using the service without interruptions. To obtain a free license, send an email to [email protected]. You will receive an automatic reply with the license token information.
Features
Retain Original Message Timestamp
The replication process now includes a feature called keepRecordCreationTimestamp, which retains the original message timestamp. This feature is enabled by default (see the sketch below).
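A minimal sketch of disabling the feature, assuming it is exposed as a pipeline-level toggle; the exact placement in the configuration file is an assumption, not confirmed by this note.

# Hypothetical placement of the toggle. When false, the target broker
# assigns new timestamps on write instead of retaining the original
# record-creation timestamp.
keepRecordCreationTimestamp: false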
Fixes
Topic creation now waits for Kafka to finish the operation before proceeding.
Auto-created control topics now use "compact" as the cleanup.policy to help lower costs and decrease startup latency.