# Setting up

{% hint style="success" %}
Found an issue? Feed it back to us at [Github](https://github.com/orgs/lensesio/discussions), on [Slack](https://www.launchpass.com/lensesio), [Ask Marios](https://ask.lenses.io/) or [email](mailto:info@lenses.io).
{% endhint %}

{% hint style="warning" %}
To run K2K, you must agree to the EULA and obtain a free license.

Accept the EULA by setting `license.acceptEula` to `true`.
{% endhint %}


{% stepper %}
{% step %}

### Set up the base directory

Create a directory to hold the K2K configuration and infrastructure definitions:

```bash
mkdir k2k-demo
cd k2k-demo
```

{% endstep %}

{% step %}

### Define the replication pipeline

Save the configuration below as `k2k-pipeline.yml`; it defines the K2K replication pipeline:

```yaml
name: "my-first-replication"
features:
  autoCreateControlTopics: enabled
  autoCreateTopics: enabled
license:
  token: "<your-license-token>"
  acceptEula: true
source:
  kafka:
    common:
      "bootstrap.servers": "kafka-source:9092"
    consumer:
      "group.id": "k2k.my-first-k2k"
target:
  kafka:
    common:
      "bootstrap.servers": "kafka-target:9092"
replication:
  - source:
      topic: ".*"
  - sink:
      topic: source
      partition: source
```
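The `".*"` pattern above replicates every topic on the source cluster. Assuming the `source.topic` field accepts any regular expression (as the catch-all pattern suggests), the same section can be narrowed to a subset of topics, for example:

```yaml
replication:
  - source:
      # hypothetical pattern: replicate only topics prefixed with "user-"
      topic: "user-.*"
  - sink:
      topic: source      # keep each source topic's name on the target
      partition: source  # keep each record's source partition
```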

{% endstep %}

{% step %}

### Initialize two Kafka clusters

To evaluate the replicator, you'll run two local Kafka clusters with Docker Compose. Save the following as `docker-compose.yml`:

```yaml
services:
  k2k:
    image: "lensesio/k2k:0.5.0"
    volumes:
      - ".:/pipelines"
    environment:
      OTEL_SERVICE_NAME: "k2k"
      OTEL_METRICS_EXPORTER: none
      OTEL_TRACES_EXPORTER: none
      OTEL_LOGS_EXPORTER: none
      #LENSES_K2K_ACCEPT_EULA: true
    command:
      - k2k
      - start
      - -f
      - /pipelines/k2k-pipeline.yml
      - -t
  kafka-source:
    image: "apache/kafka:3.8.0"
    environment:
      KAFKA_NODE_ID: 1
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_LISTENERS: INTERNAL://:9092,EXTERNAL://:9094,CONTROLLER://:9093
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka-source:9092,EXTERNAL://127.0.0.1:9094
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@localhost:9093
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_NUM_PARTITIONS: 3
    ports:
      - "9094:9094"
  kafka-target:
    image: "apache/kafka:3.8.0"
    environment:
      KAFKA_NODE_ID: 1
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_LISTENERS: INTERNAL://:9092,EXTERNAL://:9099,CONTROLLER://:9093
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka-target:9092,EXTERNAL://127.0.0.1:9099
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@localhost:9093
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_NUM_PARTITIONS: 3
    ports:
      - 9099:9099
```

To start the two Kafka instances, execute the following command:

```bash
docker compose up -d kafka-source kafka-target
```
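Before wiring in K2K, you can sanity-check that both brokers are up by listing topics through the same containers (the `apache/kafka` image ships the Kafka CLI tools under `/opt/kafka/bin`):

```shell
# each command should return promptly, printing an empty or default topic list
docker compose exec kafka-source \
  /opt/kafka/bin/kafka-topics.sh --list --bootstrap-server 127.0.0.1:9092
docker compose exec kafka-target \
  /opt/kafka/bin/kafka-topics.sh --list --bootstrap-server 127.0.0.1:9092
```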

{% endstep %}

{% step %}

### Create a topic to be replicated

Use the following command to create a topic named `user-topic` in the source cluster:

```bash
# create a topic named user-topic on the source cluster
docker compose exec kafka-source \
      /opt/kafka/bin/kafka-topics.sh \
      --create \
      --topic user-topic \
      --partitions 5 \
      --bootstrap-server 127.0.0.1:9092
```
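To confirm the topic was created with the expected layout, describe it with the same tool:

```shell
# should report user-topic with a PartitionCount of 5
docker compose exec kafka-source \
  /opt/kafka/bin/kafka-topics.sh \
  --describe \
  --topic user-topic \
  --bootstrap-server 127.0.0.1:9092
```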

{% endstep %}

{% step %}

### Start K2K

Run the following command to start the replicator, which begins copying topics and data from the source cluster to the target:

```bash
docker compose up -d k2k && docker compose logs -f k2k
```
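Once K2K is running, you can check that replication has kicked in by listing topics on the target cluster. After a short delay, `user-topic` should appear, along with any control topics K2K auto-creates (since `autoCreateControlTopics` is enabled in the pipeline):

```shell
# list the topics present on the target cluster
docker compose exec kafka-target \
  /opt/kafka/bin/kafka-topics.sh --list --bootstrap-server 127.0.0.1:9092
```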

{% endstep %}

{% step %}

### Add data to the source cluster

To watch replication in action, open two terminal windows and run the commands below, one per terminal. The first terminal consumes from the target cluster and will show records as soon as the second terminal produces them to the source cluster.

```bash
# terminal 1: read replicated topic data
docker compose exec kafka-target \
  /opt/kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server localhost:9099 \
  --topic user-topic \
  --property print.key=true \
  --property key.separator=, \
  --from-beginning
```

```bash
# terminal 2: add data to the source topic
docker compose exec kafka-source \
    /opt/kafka/bin/kafka-producer-perf-test.sh \
    --topic user-topic \
    --num-records 100 \
    --record-size 20 \
    --throughput -1 \
    --producer-props bootstrap.servers=localhost:9092
```
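After the producer finishes, you can spot-check that all 100 records arrived on the target by counting them; `--timeout-ms` makes the consumer exit once the topic stops delivering records:

```shell
# count replicated records on the target; should print 100 once replication catches up
docker compose exec kafka-target \
  /opt/kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server localhost:9099 \
  --topic user-topic \
  --from-beginning \
  --timeout-ms 10000 2>/dev/null | wc -l
```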

{% endstep %}
{% endstepper %}
