What's new?
This page details the release notes of the Lenses K2K replicator.
Version 1.0.0
The K2K data replicator is now available for use in production environments.
JVM Support
The software is now packaged to run on JVM 25, enabling the application to benefit from improvements made to object allocations in this JVM version.
Licensing
License restrictions have been introduced for the exactly-once delivery feature: it can now only be activated with a license.
Distributed Tracing
Trace headers are now stored as strings instead of binary format, improving compatibility with observability and monitoring tools.
Kafka Configuration
This release introduces a new configuration parameter, target.kafka.topicPropsExclusionList, which provides finer control over topic creation behavior. This parameter allows you to exclude specified topic configurations when the replicator creates topics on the target cluster.
The primary use case addresses compatibility issues with Kafka implementations that don't expose certain configurations as read-only in their metadata but reject them when included in topic creation requests.
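As an illustration, a minimal sketch of how the parameter might be set in the pipeline definition; the list shape and the excluded property names are assumptions (borrowed from the Google Managed Kafka notes under 0.4.0 below), not required values:

```yaml
target:
  kafka:
    # Hypothetical usage: skip these configurations when the replicator
    # creates topics on the target cluster.
    topicPropsExclusionList:
      - "segment.bytes"
      - "local.retention.ms"
```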
Version 0.5.0
Application failure reasons can now be written to a location configured via a command line argument.
Improved K2K startup time. Control topics are now read and processed faster at startup.
Addressed a bug that would cause the app to stop replicating data when the feature flag optimizeOffsetCommitPartition was disabled (see the sketch after this list).
Increased the default topic sync timeout. By default the app can now take up to 90s to read the whole control topic.
Improved error messages when the app takes too long to read the control topic(s).
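For reference, a minimal sketch of how such a feature flag is toggled in the pipeline definition file, assuming it sits under the features block as the 0.3.0 migration table below indicates:

```yaml
features:
  # Toggle for the offset commit partition optimization; the 0.5.0 fix
  # addressed replication stopping when this flag was disabled.
  optimizeOffsetCommitPartition: false
```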
Version 0.4.0
Summary
At-least-once data loss: Fixed an issue that could lead to data loss when producers failed to reliably receive timely write acknowledgements from the broker (e.g., due to quota limits or network instability). K2K now includes mitigations to minimize this risk, and an explicit error is returned before data loss can occur.
Dependency Packaging: All essential Google Managed Kafka dependencies are now bundled with K2K (see the Google Managed Kafka note below).
Enhanced Auto-topic Creation: Prevents failures when replicating from newer clusters to older ones, ensuring smooth topic creation despite configuration differences.
Metrics Exposure: Committed offsets are now exposed as metrics for integration with tools like Prometheus.
Connectivity Support: The connectivity troubleshooting module is now disabled by default.
License: The free license requirement has been removed.
Google Managed Kafka: This release ensures all essential Google Managed Kafka dependencies are included. When topic auto creation is enabled and Google Managed Kafka is the target environment, failures might occur because Google Managed Kafka does not allow setting some properties such as segment.bytes and local.retention.ms. It is therefore advised at the moment to provide the following config values for topicCreation.replication.common.config:

```yaml
topicCreation:
  replication:
    common:
      config:
        "segment.bytes": null
        "local.retention.ms": null
```
Version 0.3.0
Summary
- Centralized Schema Registry and Kafka connectivity configuration:
  - Schema Registry is now specified in source|target.registry.[config|headers|cacheSize].
  - Kafka configuration is now specified in source|target.kafka.[common|consumer|admin].
- Kafka connectivity validation (on by default, can be toggled off using features.checkKafkaConnectionOnStartup).
- com.google.cloud.google-cloud-managedkafka is now packaged with K2K.
- Added fine-grained control for schema replication, configured through schemaMapping.topics.
- Metrics messages and control messages are now all part of the same ControlMessage definition hierarchy.
- Added support for injection of environment variables and/or file contents in the pipeline definition file: expressions such as ${env:base64:ENV_VARIABLE} are substituted with their values (see the sketch after this list).
- Performance improvements.
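A minimal sketch of the substitution feature, assuming the raw and base64 variants shown elsewhere in these notes; the JAAS_CONFIG_B64 variable name is hypothetical:

```yaml
source:
  registry:
    config:
      # Replaced at startup with the raw value of the REGISTRY_URL environment variable
      "schema.registry.url": "${env:raw:REGISTRY_URL}"
  kafka:
    common:
      # Replaced with the base64-decoded contents of JAAS_CONFIG_B64 (hypothetical variable)
      "sasl.jaas.config": "${env:base64:JAAS_CONFIG_B64}"
```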
Migration
| Old path | New path | Notes |
| --- | --- | --- |
| .parameters | .configuration | renamed |
| .coordination.kafka.consumer | .target.kafka.consumer | moved |
| .coordination.kafka.charset | | removed |
| .coordination.kafka.commit.group | | removed; the source consumer's group.id is now used |
| .coordination.kafka.assignement | .coordination.assignement | removed kafka path segment |
| .coordination.kafka.commit | .coordination.commit | removed kafka path segment |
| .source.kafka.connection.servers | .source.kafka.common."bootstrap.servers" | moved |
| .source.kafka.consumer | .source.kafka.consumer | inherits all properties defined under source.kafka.common |
| .source.kafka.registry.supportedTypes | .schemaMapping.supportedTypes | source and target SR use the same values |
| .source.kafka.registry.url | .source.registry.config."schema.registry.url" | |
| .source.kafka.registry | .source.registry | |
| .target.kafka.connection.servers | .target.kafka.common."bootstrap.servers" | |
| .target.kafka.consumer | .target.kafka.consumer | inherits all properties defined under target.kafka.common |
| .target.kafka.producer | .target.kafka.producer | inherits all properties defined under target.kafka.common |
| .target.kafka.registry.supportedTypes | .schemaMapping.supportedTypes | source and target SR use the same values |
| .target.kafka.registry.url | .target.registry.config."schema.registry.url" | |
| .target.kafka.registry | .target.registry | |
| .features.offsetCommitOptimizePartition | .features.optimizeOffsetCommitPartition | renamed |
Example:
```yaml
name: "simple_pipeline"
source:
  kafka:
    common:
      "bootstrap.servers": "localhost:9092"
      "sasl.mechanism": "PLAIN"
      "sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=userName password=pwd;"
      "ssl.truststore.type": "JKS"
      "ssl.truststore.location": "/etc/some/path"
      "ssl.truststore.password": "some-password"
      "security.protocol": "SASL_SSL"
    consumer:
      "group.id": "my-group"
  registry:
    config:
      "schema.registry.url": "${env:raw:REGISTRY_URL}"
      "schema.registry.ssl.truststore.type": "JKS"
      "schema.registry.ssl.truststore.location": "/some/other/path"
      "schema.registry.ssl.truststore.password": "some-sr-password"
target:
  kafka:
    common:
      "bootstrap.servers": "localhost:9099"
      "sasl.mechanism": "PLAIN"
      "sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=userName password=pwd;"
      "ssl.truststore.type": "JKS"
      "ssl.truststore.location": "/etc/some/path"
      "ssl.truststore.password": "some-password"
      "security.protocol": "SASL_SSL"
  registry:
    config:
      "schema.registry.url": "localhost:8081"
      "schema.registry.ssl.truststore.type": "JKS"
      "schema.registry.ssl.truststore.location": "/some/other/path"
      "schema.registry.ssl.truststore.password": "some-sr-password"
replication:
  - source:
      name: source
      topic: topic.*
  - sink:
      topic: source
      name: sink-source-topic
      partition: source
```

Version 0.1.0
Warning
Please note that from this version, a free license token is required. Ensure you obtain one to continue using the service without interruptions. To obtain a free license, send an email to [email protected]. You will receive an automatic reply with the license token information.
Features
Retain Original Message Timestamp
The replication process now includes a feature called keepRecordCreationTimestamp, which retains the original message timestamp. This feature is enabled by default.
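A minimal sketch of turning the feature off, assuming the flag lives under the features block like the other feature flags in these notes (the exact path is an assumption):

```yaml
features:
  # Assumed location. Enabled by default: replicated records keep the
  # original message timestamp; set to false to disable.
  keepRecordCreationTimestamp: false
```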
Fixes
Ensure topic creation waits for Kafka to finish the action before proceeding.
Auto-created control topics will use "compact" as the cleanup.policy to help lower costs and decrease startup latency.