# Scaling K2K

{% hint style="warning" %}
To execute K2K, you must agree to the EULA and secure a free license.

Accept the EULA by setting `license.acceptEula` to `true`.
{% endhint %}
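
As a sketch of where that setting might live, assuming `license.acceptEula` is a top-level key in the K2K configuration:

```yaml
#sketch: the exact file and placement of this key may vary by K2K version
license:
  acceptEula: true
```

The Compose file below offers an equivalent switch via the commented-out `LENSES_K2K_ACCEPT_EULA` environment variable.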

Scale K2K by running additional instances of the same replication pipeline. Because every instance shares the pipeline's consumer `group.id`, Kafka's consumer group protocol distributes the source partitions across them, letting you handle larger workloads and increase throughput.

This tutorial assumes the following files exist (see [run-a-quick-example](https://docs.lenses.io/latest/k2k/tutorial/run-a-quick-example "mention") for more details):

{% tabs %}
{% tab title="k2k-pipeline.yml" %}

```yaml
name: "my-first-replication"
features:
  autoCreateControlTopics: enabled
  autoCreateTopics: enabled
source:
  kafka:
    common:
      "bootstrap.servers": "kafka-source:9092"
    consumer:
      "group.id": "k2k.my-first-k2k"
target:
  kafka:
    common:
      "bootstrap.servers": "kafka-target:9092"
replication:
  - source:
      topic: ".*"
  - sink:
      topic: source
      partition: source
```

{% endtab %}

{% tab title="docker-compose.yml" %}

```yaml
services:
  k2k:
    image: "lensesio/k2k:0.5.0"
    volumes:
      - ".:/pipelines"
    environment:
      OTEL_SERVICE_NAME: "k2k"
      OTEL_METRICS_EXPORTER: none
      OTEL_TRACES_EXPORTER: none
      OTEL_LOGS_EXPORTER: none
      #LENSES_K2K_ACCEPT_EULA: true
    command:
      - k2k
      - start
      - -f
      - /pipelines/k2k-pipeline.yml
      - -t
  kafka-source:
    image: "apache/kafka:3.8.0"
    environment:
      KAFKA_NODE_ID: 1
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_LISTENERS: INTERNAL://:9092,EXTERNAL://:9094,CONTROLLER://:9093
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka-source:9092,EXTERNAL://127.0.0.1:9094
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@localhost:9093
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_NUM_PARTITIONS: 3
    ports:
      - "9094:9094"
  kafka-target:
    image: "apache/kafka:3.8.0"
    environment:
      KAFKA_NODE_ID: 1
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_LISTENERS: INTERNAL://:9092,EXTERNAL://:9099,CONTROLLER://:9093
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka-target:9092,EXTERNAL://127.0.0.1:9099
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@localhost:9093
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_NUM_PARTITIONS: 3
    ports:
      - "9099:9099"
```

{% endtab %}
{% endtabs %}

To ensure a clean start, run the following command to remove any containers left over from earlier tutorials:

```bash
docker compose down
```

{% stepper %}
{% step %}

### Starting the Kafka clusters and K2K

Use the following command to start the Kafka clusters and the K2K replicator app in the background:

```bash
#start the kafka clusters and k2k
docker compose up -d
```
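
Before moving on, you can confirm that all three services are up:

```bash
#list the running services
docker compose ps
```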

{% endstep %}

{% step %}

### Creating topics and data

Run the following commands to create the `user-topic`, `transaction-topic`, `transfers-eu`, and `transfers-us` topics:

```bash
#create topic user-topic
docker compose exec kafka-source \
    /opt/kafka/bin/kafka-topics.sh \
    --create \
    --topic user-topic \
    --partitions 5 \
    --bootstrap-server 127.0.0.1:9092

#create topic transaction-topic
docker compose exec kafka-source \
    /opt/kafka/bin/kafka-topics.sh \
    --create \
    --topic transaction-topic \
    --partitions 5 \
    --bootstrap-server 127.0.0.1:9092

#create topic transfers-eu
docker compose exec kafka-source \
    /opt/kafka/bin/kafka-topics.sh \
    --create \
    --topic transfers-eu \
    --partitions 5 \
    --bootstrap-server 127.0.0.1:9092

#create topic transfers-us
docker compose exec kafka-source \
    /opt/kafka/bin/kafka-topics.sh \
    --create \
    --topic transfers-us \
    --partitions 5 \
    --bootstrap-server 127.0.0.1:9092
```
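
To verify that all four topics were created:

```bash
#list topics on the source cluster
docker compose exec kafka-source \
    /opt/kafka/bin/kafka-topics.sh \
    --list \
    --bootstrap-server 127.0.0.1:9092
```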

To insert test data, execute the following commands:

```bash
#write some data to user-topic
docker compose exec kafka-source \
    /opt/kafka/bin/kafka-producer-perf-test.sh \
    --topic user-topic \
    --num-records 100 \
    --record-size 20 \
    --throughput -1 \
    --producer-props bootstrap.servers=127.0.0.1:9092

#write some data to transaction-topic
docker compose exec kafka-source \
    /opt/kafka/bin/kafka-producer-perf-test.sh \
    --topic transaction-topic \
    --num-records 100 \
    --record-size 20 \
    --throughput -1 \
    --producer-props bootstrap.servers=127.0.0.1:9092

#write some data to transfers-eu
docker compose exec kafka-source \
    /opt/kafka/bin/kafka-producer-perf-test.sh \
    --topic transfers-eu \
    --num-records 100 \
    --record-size 20 \
    --throughput -1 \
    --producer-props bootstrap.servers=127.0.0.1:9092

#write some data to transfers-us
docker compose exec kafka-source \
    /opt/kafka/bin/kafka-producer-perf-test.sh \
    --topic transfers-us \
    --num-records 100 \
    --record-size 20 \
    --throughput -1 \
    --producer-props bootstrap.servers=127.0.0.1:9092
```
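
To spot-check that the data landed, read a few records back from one of the topics:

```bash
#read 5 records from user-topic, then exit
docker compose exec kafka-source \
    /opt/kafka/bin/kafka-console-consumer.sh \
    --topic user-topic \
    --from-beginning \
    --max-messages 5 \
    --bootstrap-server 127.0.0.1:9092
```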

{% endstep %}

{% step %}

### Running and scaling K2K

```bash
#start 5 K2K instances
docker compose up -d --scale k2k=5 k2k
```

The replication pipeline is now distributed across five K2K instances, with the source partitions shared between them.
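
To see how partitions are assigned across the instances, inspect the pipeline's consumer group on the source cluster. This is a sketch assuming K2K registers a standard consumer group; the group name `k2k.my-first-k2k` comes from the pipeline file above:

```bash
#describe the K2K consumer group and its partition assignments
docker compose exec kafka-source \
    /opt/kafka/bin/kafka-consumer-groups.sh \
    --describe \
    --group k2k.my-first-k2k \
    --bootstrap-server 127.0.0.1:9092
```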
{% endstep %}
{% endstepper %}
