Setting up

Create a set of containerized environments to learn how to use and configure K2K.

1. Set Up Base Directory

  • Create a directory to store K2K configuration and infrastructure definitions:

    mkdir k2k-demo
    cd k2k-demo
2. Define Replication Pipeline

Save the configuration below as k2k-pipeline.yml; this file defines the K2K replication pipeline.

name: "my-first-replication"
features:
  autoCreateControlTopics: enabled
  autoCreateTopics: enabled
license:
  token: "<your-license-token>"
  acceptEula: false
source:
  kafka:
    connection:
      servers: "kafka-source:9092"
    consumer:
      "group.id": "k2k.my-first-k2k"
target:
  kafka:
    connection:
      servers: "kafka-target:9092"
coordination:
  kafka:
    commit:
      group: "k2k.my-first-k2k"
replication:
  - source:
      topic: ".*"
    sink:
      topic: source
      partition: source
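In the replication rule above, the source `topic` field is a regular expression (`.*` matches every topic), while setting the sink `topic` and `partition` to `source` tells K2K to keep the original topic name and partitioning on the target. As an illustration only (this is not K2K's implementation), the topic-selection side of such a rule can be sketched in Python:

```python
import re

def plan_replication(source_topics, topic_pattern=".*"):
    """Illustrative sketch: map each source topic matching the pattern to an
    identically named sink topic, mirroring sink.topic = source."""
    pattern = re.compile(topic_pattern)
    return {t: t for t in source_topics if pattern.fullmatch(t)}

print(plan_replication(["user-topic", "orders"]))
# {'user-topic': 'user-topic', 'orders': 'orders'}
```

A narrower pattern such as `"user-.*"` would select only the matching subset of topics for replication.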
3. Initialize Two Kafka Clusters

To evaluate the replicator, run two local Kafka clusters with Docker Compose. Save the following as docker-compose.yml:

services:
  k2k:
    image: "lensesio/k2k:0.1.0"
    volumes:
      - ".:/pipelines"
    environment:
      OTEL_SERVICE_NAME: "k2k"
      OTEL_METRICS_EXPORTER: none
      OTEL_TRACES_EXPORTER: none
      OTEL_LOGS_EXPORTER: none
      #LENSES_K2K_ACCEPT_EULA: true
    command:
      - k2k
      - start
      - -f
      - /pipelines/k2k-pipeline.yml
      - -t
  kafka-source:
    image: "apache/kafka:3.8.0"
    environment:
      KAFKA_NODE_ID: 1
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_LISTENERS: INTERNAL://:9092,EXTERNAL://:9094,CONTROLLER://:9093
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka-source:9092,EXTERNAL://127.0.0.1:9094
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@localhost:9093
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_NUM_PARTITIONS: 3
    ports:
      - "9094:9094"
  kafka-target:
    image: "apache/kafka:3.8.0"
    environment:
      KAFKA_NODE_ID: 1
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_LISTENERS: INTERNAL://:9092,EXTERNAL://:9099,CONTROLLER://:9093
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka-target:9092,EXTERNAL://127.0.0.1:9099
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@localhost:9093
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_NUM_PARTITIONS: 3
    ports:
      - "9099:9099"
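Each broker in the compose file exposes three listeners: INTERNAL (broker-to-broker and K2K traffic on the compose network), EXTERNAL (published to the host on ports 9094 and 9099), and CONTROLLER (the KRaft quorum). A client is handed the address advertised for whichever listener it connected through. A small Python sketch (for illustration, not part of Kafka) of how an advertised-listeners string decomposes:

```python
def parse_listeners(advertised: str) -> dict:
    """Split a Kafka advertised.listeners-style string into {listener: (host, port)}."""
    out = {}
    for entry in advertised.split(","):
        name, addr = entry.split("://")
        host, port = addr.rsplit(":", 1)
        out[name] = (host, int(port))
    return out

print(parse_listeners("INTERNAL://kafka-source:9092,EXTERNAL://127.0.0.1:9094"))
# {'INTERNAL': ('kafka-source', 9092), 'EXTERNAL': ('127.0.0.1', 9094)}
```

This is why K2K, running inside the compose network, connects to kafka-source:9092 while tools on your host machine use 127.0.0.1:9094.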

To start the two Kafka instances, execute the following command:

docker compose up -d kafka-source kafka-target
4. Create a Topic to Be Replicated

Use the following command to create a topic named user-topic in the source cluster:

# create a topic named user-topic in the source cluster
docker compose exec kafka-source \
      /opt/kafka/bin/kafka-topics.sh \
      --create \
      --topic user-topic \
      --partitions 5 \
      --bootstrap-server 127.0.0.1:9092 
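Note that --partitions 5 overrides the brokers' KAFKA_NUM_PARTITIONS default of 3, and because the pipeline enables autoCreateTopics, K2K will create the matching topic on the target. For keyed records, Kafka's default partitioner hashes the key (using murmur2) modulo the partition count; the sketch below uses CRC32 as a deterministic stand-in purely to show that a given key always lands on the same partition:

```python
import zlib

def choose_partition(key: bytes, num_partitions: int = 5) -> int:
    """Illustrative stand-in for Kafka's default partitioner: Kafka uses
    murmur2, but CRC32 here shows the same deterministic hash-mod mapping."""
    return zlib.crc32(key) % num_partitions

p = choose_partition(b"user-42")
print(p, choose_partition(b"user-42") == p)  # same key -> same partition, always
```

Because the sink rule sets `partition: source`, K2K preserves this per-partition placement on the target rather than re-partitioning the records.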
5. Start K2K

Run the following command to start the replicator, which will begin replicating topics and data from one cluster to another:

docker compose up -d k2k && docker compose logs -f k2k
6. Add Data to the Source Cluster

To watch replication happen, open two terminal windows and run the commands below, one per terminal. Terminal 1 consumes from the target cluster and will print records as they are replicated; terminal 2 produces data into the source cluster.

# terminal 1: read replicated topic data
docker compose exec kafka-target \
  /opt/kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server localhost:9099 \
  --topic user-topic \
  --property print.key=true \
  --property key.separator=, \
  --from-beginning
# terminal 2: add data to the source topic
docker compose exec kafka-source \
    /opt/kafka/bin/kafka-producer-perf-test.sh \
    --topic user-topic \
    --num-records 100 \
    --record-size 20 \
    --throughput -1 \
    --producer-props bootstrap.servers=localhost:9092
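The perf-test producer sends 100 records of 20 random bytes each (about 2 KB in total) with no keys, so the consumer in terminal 1, which prints key and value separated by a comma, shows each record as null followed by the payload. A small Python sketch (illustration only) of that console-consumer output format:

```python
def format_record(key, value, separator=","):
    """Mimic kafka-console-consumer with print.key=true: a null key prints as
    the literal string 'null', followed by the separator and the value."""
    return f"{'null' if key is None else key}{separator}{value}"

print(format_record(None, "xxxxxxxxxxxxxxxxxxxx"))
# null,xxxxxxxxxxxxxxxxxxxx
```

If the replicated lines appear in terminal 1 shortly after terminal 2 finishes, the pipeline is working end to end.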
