Setting up

Create a set of containerized environments to learn how to use and configure K2K.

1. Set Up Base Directory

Create a directory to store K2K configuration and infrastructure definitions:

    mkdir k2k-demo
    cd k2k-demo
2. Define Replication Pipeline

Save the configuration below as k2k-pipeline.yml; this file defines the K2K replication pipeline.

3. Initialize Two Kafka Clusters

To evaluate the replicator, you'll need two local Kafka clusters running under Docker Compose. Begin by setting up a docker-compose.yml file:
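The sketch below is one possible starting point, not the exact file from the K2K docs: a minimal pair of single-node KRaft brokers using the official apache/kafka image, with listener ports chosen to match the commands in the later steps (9092 on the source, 9099 on the target). The k2k service itself is only stubbed as a comment, since its image and settings come from the K2K documentation.

```yaml
services:
  kafka-source:
    image: apache/kafka:3.7.0
    container_name: kafka-source
    environment:
      KAFKA_NODE_ID: 1
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092,CONTROLLER://0.0.0.0:9093
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka-source:9092
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@kafka-source:9093
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,CONTROLLER:PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1

  kafka-target:
    image: apache/kafka:3.7.0
    container_name: kafka-target
    environment:
      KAFKA_NODE_ID: 1
      KAFKA_PROCESS_ROLES: broker,controller
      # broker listener on 9099 so the consumer command in step 6 can reach it
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9099,CONTROLLER://0.0.0.0:9093
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka-target:9099
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@kafka-target:9093
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,CONTROLLER:PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1

  # k2k: add the replicator service here, using the image and settings
  # (including a mount for k2k-pipeline.yml) from the K2K documentation
```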

To start the two Kafka instances, execute the following command:

 docker compose up -d kafka-source kafka-target
4. Create a Topic to Be Replicated

Use the following command to create a topic named user-topic in the source cluster:

# creates a topic named user-topic on the source cluster
docker compose exec kafka-source \
    /opt/kafka/bin/kafka-topics.sh \
    --create \
    --topic user-topic \
    --partitions 5 \
    --bootstrap-server localhost:9092
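As an optional sanity check (not part of the original steps), you can list the topics on the source cluster to confirm the topic exists; this assumes the same apache/kafka container layout used by the commands above:

```shell
# list topics on the source cluster; user-topic should appear in the output
docker compose exec kafka-source \
    /opt/kafka/bin/kafka-topics.sh \
    --list \
    --bootstrap-server localhost:9092
```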
5. Start K2K

Run the following command to start the replicator, which will begin replicating topics and data from the source cluster to the target cluster:

docker compose up -d k2k && docker compose logs -f k2k
6. Add Data to the Source Cluster

To watch replication in action, open two terminal windows and run the commands below, one per terminal. The consumer in terminal 1 will show new data appearing in the target cluster's topic as terminal 2 produces records to the source.

# terminal 1: read replicated topic data from the target cluster
docker compose exec kafka-target \
  /opt/kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server localhost:9099 \
  --topic user-topic \
  --property print.key=true \
  --property key.separator=, \
  --from-beginning

# terminal 2: produce test data to the source topic
docker compose exec kafka-source \
  /opt/kafka/bin/kafka-producer-perf-test.sh \
  --topic user-topic \
  --num-records 100 \
  --record-size 20 \
  --throughput -1 \
  --producer-props bootstrap.servers=localhost:9092
