What's new?

This page details the release notes of the Lenses K2K replicator.

Version 1.1.0

Free version restriction

The free version now includes a restriction on the total number of topic-partitions that the replicator can process.

  • The sum of topic partitions across all replicated topics may not exceed 5.

  • Any combination of topics can be replicated as long as their combined partition count is less than or equal to this limit. For example, a 3-partition topic and a 2-partition topic can be replicated together, but adding any further topic would exceed the limit.

  • Topics with a partition count greater than 5 cannot be replicated without a valid license.

For licensing inquiries, please contact [email protected].

Schema Mapping feature

The Schema Mapping feature is now an enterprise feature and requires a valid license for use.

Improvements

  • Added validation of the Producer/Consumer configuration when exactly-once semantics are enabled (see the sketch after this list)

  • Reduced Docker image size by removing unnecessary deb files

  • Fixed an issue where an empty license caused the application to terminate without an error message
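
For context, exactly-once delivery in Kafka clients typically relies on an idempotent producer and a read-committed consumer. The snippet below is a minimal sketch of the kind of settings such validation concerns, assuming a YAML pipeline definition; the exact keys the replicator checks are not documented here, so treat it as illustrative only.

```yaml
# Illustrative only: typical Kafka client settings involved in exactly-once delivery.
source:
  kafka:
    consumer:
      isolation.level: read_committed   # consume only committed records
target:
  kafka:
    producer:
      enable.idempotence: true          # required for duplicate-free, transactional writes
      acks: all                         # wait for all in-sync replicas to acknowledge
```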

Version 1.0.0

The K2K data replicator is now available for use in production environments.

JVM Support

The software is now packaged to run on JVM 25, enabling the application to benefit from improvements made to object allocations in this JVM version.

Licensing

License restrictions have been implemented for the exactly-once delivery feature. This feature can only be activated through a license.

Distributed Tracing

Trace headers are now stored as strings instead of binary format, improving compatibility with observability and monitoring tools.

Kafka Configuration

This release introduces a new configuration parameter, target.kafka.topicPropsExclusionList, which provides finer control over topic creation behavior. This parameter allows you to exclude specified topic configurations when the replicator creates topics on the target cluster.

The primary use case addresses compatibility issues with Kafka implementations that don't expose certain configurations as read-only in their metadata but reject them when included in topic creation requests.
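
As a hedged illustration of how this parameter might be set (the exact value format is an assumption, and the listed properties are only examples of configs a managed Kafka service may reject):

```yaml
# Sketch: skip these topic configs when the replicator creates topics on the target cluster.
target:
  kafka:
    topicPropsExclusionList:
      - segment.bytes
      - local.retention.ms
```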

Version 0.5.0

  • Application failure reasons can now be written to a location configurable via a command-line argument.

  • Improved K2K startup time. Control topics are now read and processed faster at startup.

  • Addressed a bug that caused the app to stop replicating data when the feature flag optimizeOffsetCommitPartition was disabled

  • Increased the default topic sync timeout. By default, the app can now take up to 90s to read the whole control topic.

  • Improved error messages when the app takes too long to read the control topic(s).

Version 0.4.0

Summary

  • At-least-once data loss: Fixed an issue that could lead to data loss when producers failed to reliably receive timely write acknowledgements from the broker (e.g., due to quota limits or network instability). K2K now includes mitigations to minimize this risk, and an explicit error is returned before data loss can occur.

  • Dependency Packaging:

  • Enhanced Auto-topic Creation: Prevents failures when replicating from newer clusters to older ones, ensuring smooth topic creation despite configuration differences.

  • Metrics Exposure: Committed offsets are now exposed as metrics for integration with tools like Prometheus.

  • Connectivity Support: The connectivity troubleshooting module is now disabled by default

  • License: The free license requirement has been removed

  • Google Managed Kafka: This release ensures all essential Google Managed Kafka dependencies are included. When topic auto-creation is enabled and Google Managed Kafka is the target environment, failures might occur because Google Managed Kafka does not allow setting some properties, such as segment.bytes and local.retention.ms. It is therefore advised, for now, to provide suitable config values for these properties under topicCreation.replication.common.config.

Version 0.3.0

Summary

  • Centralized Schema registry and Kafka connectivity configuration

    • Schema Registry is now specified in source|target.registry.[config|headers|cacheSize]

    • Kafka configuration is now specified in source|target.kafka.[common|consumer|admin]

  • Kafka connectivity validation (on by default; can be toggled off using features.checkKafkaConnectionOnStartup, as shown in the sketch after this list)

  • com.google.cloud.google-cloud-managedkafka is now packaged with K2K.

  • Added fine-grained control for schema replication.

    • configured through schemaMapping.topics

  • Metrics messages and control messages are now all part of the same ControlMessage definition hierarchy.

  • Added support for injection of environment variables and/or file contents in the pipeline definition file.

    • The pipeline definition file supports substituting expressions (e.g. ${env:base64:ENV_VARIABLE}) with their values; see the sketch after this list.

  • Performance improvements.
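
A minimal sketch of two of the constructs mentioned above, assuming a YAML pipeline definition; the environment variable name is a placeholder:

```yaml
features:
  checkKafkaConnectionOnStartup: false                  # turn off the Kafka connectivity validation on startup
source:
  registry:
    config:
      schema.registry.url: ${env:base64:SOURCE_SR_URL}  # value substituted from a base64-encoded environment variable (placeholder name)
```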

Migration

| v0.2.0 | v0.3.0 | Change |
| --- | --- | --- |
| .parameters | .configuration | renamed |
| .coordination.kafka.consumer | .target.kafka.consumer | moved |
| .coordination.kafka.charset | | removed |
| .coordination.kafka.commit.group | | removed; the source consumer's group.id is now used |
| .coordination.kafka.assignement | .coordination.assignement | removed kafka path segment |
| .coordination.kafka.commit | .coordination.commit | removed kafka path segment |
| .source.kafka.connection.servers | .source.kafka.common."bootstrap.servers" | moved |
| .source.kafka.consumer | .source.kafka.consumer | inherits all properties defined under source.kafka.common |
| .source.kafka.registry.supportedTypes | .schemaMapping.supportedTypes | source and target SR use the same values |
| .source.kafka.registry.url | .source.registry.config."schema.registry.url" | |
| .source.kafka.registry | .source.registry | |
| .target.kafka.connection.servers | .target.kafka.common."bootstrap.servers" | moved |
| .target.kafka.consumer | .target.kafka.consumer | inherits all properties defined under target.kafka.common |
| .target.kafka.producer | .target.kafka.producer | inherits all properties defined under target.kafka.common |
| .target.kafka.registry.supportedTypes | .schemaMapping.supportedTypes | source and target SR use the same values |
| .target.kafka.registry.url | .target.registry.config."schema.registry.url" | |
| .target.kafka.registry | .target.registry | |
| .features.offsetCommitOptimizePartition | .features.optimizeOffsetCommitPartition | renamed |

Example:
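
As an illustration only, a pipeline definition using the renamed v0.3.0 keys from the table above might look like the sketch below; the YAML format and all values are placeholders, not recommendations.

```yaml
source:
  kafka:
    common:
      bootstrap.servers: "source-broker:9092"       # placeholder
    consumer:
      group.id: k2k-replicator                      # placeholder; also used for coordination commits
  registry:
    config:
      schema.registry.url: "http://source-sr:8081"  # placeholder
target:
  kafka:
    common:
      bootstrap.servers: "target-broker:9092"       # placeholder
    producer: {}                                    # inherits everything under target.kafka.common
  registry:
    config:
      schema.registry.url: "http://target-sr:8081"  # placeholder
schemaMapping:
  supportedTypes: [AVRO]                            # placeholder; shared by source and target registries
features:
  optimizeOffsetCommitPartition: true
```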

Version 0.1.0

Warning

Please note that from this version, a free license token is required. Ensure you obtain one to continue using the service without interruptions. To obtain a free license, send an email to [email protected]. You will receive an automatic reply with the license token information.

Features

Retain Original Message Timestamp

The replication process now includes a feature called keepRecordCreationTimestamp, which retains the original message timestamp. This feature is enabled by default.
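
If broker-assigned timestamps are preferred instead, the flag can presumably be turned off; the snippet below is a sketch only, and its placement in the configuration file is an assumption:

```yaml
features:
  keepRecordCreationTimestamp: false   # assumed location of the flag; disables retaining the original message timestamp
```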

Fixes

  • Ensure topic creation waits for Kafka to finish the action before proceeding.

  • Auto-created control topics will use "compact" as the cleanup.policy to help lower costs and decrease startup latency.
