# What's new?

## Version 1.2.0

**K2K Tooling**

The K2K Docker container now includes an additional binary: `k2k-tool`. This new tool allows resetting and visualizing the K2K consumer.

**Bidirectional Replication**

K2K now supports bidirectional record replication, enabled via the `bidirectionalReplication` flag. Advanced control over the headers used and the replication strategy is available through the `bidirectional` top-level configuration.
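As a hypothetical sketch of how this could look in a pipeline definition file (only the `bidirectionalReplication` flag and the `bidirectional` top-level section are named in these notes; the child keys and values below are illustrative assumptions, not documented schema):

```yaml
# Enable bidirectional record replication (flag named in these release notes).
bidirectionalReplication: true

# Advanced control section named in these release notes; the keys under it
# are assumptions shown for illustration only.
bidirectional:
  headers:
    origin: "k2k.origin"    # assumed header used to tag a record's origin cluster
  strategy: "header-based"  # assumed loop-prevention strategy name
```

Consult the configuration reference for the actual schema before relying on any of the child keys shown here.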

## Version 1.1.0

#### Free version restriction

The free version now includes a restriction on the total number of topic-partitions that the replicator can process.

* The maximum limit for the sum of topic partitions across all replicated topics is 5.
* Any combination of topics can be replicated as long as their combined partition count is less than or equal to this limit.
* Topics with a partition count greater than 5 cannot be replicated without a valid license.

For licensing inquiries, please contact <sales@lenses.io>.

#### Schema Mapping feature

The Schema Mapping feature is now an enterprise feature and requires a valid license for use.

#### Improvements

* Added validation for Producer/Consumer configuration when exactly-once semantics are enabled
* Reduced Docker image size by removing unnecessary deb files
* Fixed issue where empty license caused application termination without error message

## Version 1.0.0

The K2K data replicator is now available for use in production environments.

#### JVM Support

The software is now packaged to run on JVM 25, enabling the application to benefit from improvements made to object allocations in this JVM version.

#### Licensing

License restrictions have been implemented for the exactly-once delivery feature. This feature can only be activated through a license.

#### Distributed Tracing

Trace headers are now stored as strings instead of binary format, improving compatibility with observability and monitoring tools.

#### Kafka Configuration

This release introduces a new configuration parameter, `target.kafka.topicPropsExclusionList`, which provides finer control over topic creation behavior. This parameter allows you to exclude specified topic configurations when the replicator creates topics on the target cluster.

The primary use case addresses compatibility issues with Kafka implementations that don't expose certain configurations as read-only in their metadata but reject them when included in topic creation requests.
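For illustration, a configuration using this parameter might look like the following. The `target.kafka.topicPropsExclusionList` path is from this release; the list syntax and the specific excluded properties are assumptions (the two property names are ones this document elsewhere notes can be rejected by managed Kafka services):

```yaml
target:
  kafka:
    # Properties listed here are excluded from topic-creation requests
    # sent to the target cluster (list syntax assumed for illustration).
    topicPropsExclusionList:
      - "segment.bytes"
      - "local.retention.ms"
```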

## Version 0.5.0

* Application failure reasons can now be written to a configurable location, specified via a command-line argument.
* Improved K2K startup time. Control topics are now read and processed faster at startup.
* Addressed a bug that would cause the app to stop replicating data when the feature flag `optimizeOffsetCommitPartition` was disabled.
* Increased the default topic sync timeout. By default, the app can now take up to 90s to read the whole control topic.
* Improved error messages when the app takes too long to read the control topic(s).

## Version 0.4.0

### Summary

* **At-least-once data loss:** Fixed an issue that could lead to data loss when producers failed to reliably receive timely write acknowledgements from the broker (e.g., due to quota limits or network instability). K2K now includes mitigations to minimize this risk, and an explicit error is returned before data loss can occur.
* **Dependency Packaging**:
* **Enhanced Auto-topic Creation**: Prevent failures when replicating from newer clusters to older ones, ensuring smooth topic creation despite configuration differences.
* **Metrics Exposure**: Committed offsets are now exposed as metrics for integration with tools like Prometheus.
* **Connectivity Support**: The connectivity troubleshooting module is now disabled by default.
* **License**: The free license requirement has been removed.
* **Google Managed Kafka:** This release ensures all essential Google Managed Kafka dependencies are included.

  When topic auto-creation is enabled and Google Managed Kafka is the target environment, failures might occur because Google Managed Kafka does not allow setting some properties, such as `segment.bytes` and `local.retention.ms`. It is therefore advised, for now, to provide the following config values for `topicCreation.replication.common.config`:

  ```yaml
  topicCreation:
    replication:
      common:
        config:
          "segment.bytes": null
          "local.retention.ms": null
  ```

## Version 0.3.0

### Summary

* Centralized Schema registry and Kafka connectivity configuration
  * Schema registry is now specified in `source|target.registry.config|headers|cacheSize`
  * Kafka configuration is now specified in `source|target.kafka.[common|consumer|admin]`
* Kafka connectivity validation (on by default, can be toggled off using `features.checkKafkaConnectionOnStartup`)
* `com.google.cloud.google-cloud-managedkafka` is now packaged with K2K.
* Added fine-grained control for schema replication.
  * Configured through `schemaMapping.topics`
* Metrics messages and control messages are now all part of the same `ControlMessage` definition hierarchy.
* Added support for injecting environment variables and/or file contents into the pipeline definition file.
  * The pipeline definition file supports substituting expressions (e.g. `${env:base64:ENV_VARIABLE}`) with their values.
* Performance improvements.
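As a sketch of a few of the items above in one fragment (the top-level paths `features.checkKafkaConnectionOnStartup`, `schemaMapping.topics`, and `schemaMapping.supportedTypes` are from this release; the values and the shape of the `topics` entries are illustrative assumptions):

```yaml
features:
  # Kafka connectivity validation, on by default; set to false to skip it.
  checkKafkaConnectionOnStartup: true

schemaMapping:
  # Shared by source and target schema registries in v0.3.0.
  supportedTypes: ["AVRO"]
  # Fine-grained control over schema replication; the entry format below
  # (topic name patterns) is assumed for illustration.
  topics:
    - "orders.*"
```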

### Migration

<table><thead><tr><th width="308.58331298828125">v0.2.0</th><th width="304.00006103515625">v0.3.0</th><th>Change</th></tr></thead><tbody><tr><td><code>.parameters</code></td><td><code>.configuration</code></td><td>renamed</td></tr><tr><td><code>.coordination.kafka.consumer</code></td><td><code>.target.kafka.consumer</code></td><td>moved</td></tr><tr><td><code>.coordination.kafka.charset</code></td><td>removed</td><td></td></tr><tr><td><code>.coordination.kafka.commit.group</code></td><td>removed</td><td>source consumer's <code>group.id</code> is now used.</td></tr><tr><td><code>.coordination.kafka.assignement</code></td><td><code>.coordination.assignement</code></td><td>removed <code>kafka</code> path segment</td></tr><tr><td><code>.coordination.kafka.commit</code></td><td><code>.coordination.commit</code></td><td>removed <code>kafka</code> path segment</td></tr><tr><td><code>.source.kafka.connection.servers</code></td><td><code>.source.kafka.common."bootstrap.servers"</code></td><td>moved</td></tr><tr><td><code>.source.kafka.consumer</code></td><td><code>.source.kafka.consumer</code></td><td>inherits all properties defined under <code>source.kafka.common</code></td></tr><tr><td><code>.source.kafka.registry.supportedTypes</code></td><td><code>.schemaMapping.supportedTypes</code></td><td>source and target SR use the same values</td></tr><tr><td><code>.source.kafka.registry.url</code></td><td><code>.source.registry.config."schema.registry.url"</code></td><td>moved</td></tr><tr><td><code>.source.kafka.registry</code></td><td><code>.source.registry</code></td><td>moved</td></tr><tr><td><code>.target.kafka.connection.servers</code></td><td><code>.target.kafka.common."bootstrap.servers"</code></td><td>moved</td></tr><tr><td><code>.target.kafka.consumer</code></td><td><code>.target.kafka.consumer</code></td><td>inherits all properties defined under <code>target.kafka.common</code></td></tr><tr><td><code>.target.kafka.producer</code></td><td><code>.target.kafka.producer</code></td><td>inherits all properties defined under <code>target.kafka.common</code></td></tr><tr><td><code>.target.kafka.registry.supportedTypes</code></td><td><code>.schemaMapping.supportedTypes</code></td><td>source and target SR use the same values</td></tr><tr><td><code>.target.kafka.registry.url</code></td><td><code>.target.registry.config."schema.registry.url"</code></td><td>moved</td></tr><tr><td><code>.target.kafka.registry</code></td><td><code>.target.registry</code></td><td>moved</td></tr><tr><td><code>.features.offsetCommitOptimizePartition</code></td><td><code>.features.optimizeOffsetCommitPartition</code></td><td>renamed</td></tr></tbody></table>

**Example:**

```yaml
name: "simple_pipeline"
source:
  kafka:
    common:
      "bootstrap.servers": "localhost:9092"
      "sasl.mechanism": "PLAIN"
      "sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=userName password=pwd;"
      "ssl.truststore.type": "JKS"
      "ssl.truststore.location": "/etc/some/path"
      "ssl.truststore.password": "some-password"
      "security.protocol": "SASL_SSL"
    consumer:
      "group.id": "my-group"
  registry:
    config:
      "schema.registry.url": "${env:raw:REGISTRY_URL}"
      "schema.registry.ssl.truststore.type": "JKS"
      "schema.registry.ssl.truststore.location": "/some/other/path"
      "schema.registry.ssl.truststore.password": "some-sr-password"
target:
  kafka:
    common:
      "bootstrap.servers": "localhost:9099"
      "sasl.mechanism": "PLAIN"
      "sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=userName password=pwd;"
      "ssl.truststore.type": "JKS"
      "ssl.truststore.location": "/etc/some/path"
      "ssl.truststore.password": "some-password"
      "security.protocol": "SASL_SSL"
  registry:
    config:
      "schema.registry.url": "localhost:8081"
      "schema.registry.ssl.truststore.type": "JKS"
      "schema.registry.ssl.truststore.location": "/some/other/path"
      "schema.registry.ssl.truststore.password": "some-sr-password"
replication:
  - source:
      name: source
      topic: topic.*
  - sink:
      topic: source
      name: sink-source-topic
      partition: source
```

## Version 0.1.0

### Warning

Please note that from this version, a free license token is required. Ensure you obtain one to continue using the service without interruptions. To obtain a free license, send an email to `k2k@lenses.io`. You will receive an automatic reply with the license token information.

### Features

#### **Retain Original Message Timestamp**

The replication process now includes a feature called `keepRecordCreationTimestamp`, which retains the original message timestamp. This feature is enabled by default.
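Since the feature is on by default, only opting out requires configuration. Assuming the flag lives under the `features` block like the other feature flags mentioned in these notes (this placement is an assumption), disabling it might look like:

```yaml
features:
  # Disable to let the target cluster assign new timestamps on write,
  # instead of retaining each record's original creation timestamp.
  keepRecordCreationTimestamp: false
```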

### Fixes

* Ensured topic creation waits for Kafka to finish the action before proceeding.
* Auto-created control topics now use `compact` as the `cleanup.policy`, to help lower costs and decrease startup latency.
