Helm
This page describes installing Standalone K2K in Kubernetes via Helm.
Prerequisites
Kubernetes 1.23+
Helm 3.8.0+
Available Kafka clusters (source and target)
Configure K2K
To configure Lenses K2K properly, you need to understand the parameter groups that the chart offers. Under the k2k parameter there are three key parameter groups used to set up K2K:
license
Configures EULA acceptance and the license token.
otelConfig
Defines the metrics, traces, and logs exporters.
replicationConfig
Defines the core K2K configuration file, which includes:
connection to the source and destination Kafka cluster / Schema Registry
replication semantics, replication options, and more
The following sections walk through configuring each of these groups in the same order.
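For orientation, a minimal values.yaml skeleton combining these groups could look like the following; all values here are placeholders, and each group is covered in detail in the sections below:
k2k:
  otelConfig:
    serviceName: "k2k"
    metricsExporter: "prometheus"
  replicationConfig:
    license:
      acceptEULA: true
      token: <license token>
    name: "my-k2k"
    # source, target, replication and the remaining sections are
    # described under Replication Configuration below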
Configure license
Before using K2K as a standalone application, you must agree to the End User License Agreement (EULA) and request a free license token by contacting [email protected]. Ensure this section is included in the replicationConfig YAML values:
k2k:
replicationConfig:
license:
acceptEULA: true
token: <license token>
Configure OTEL options
If you would like to monitor your K2K applications by exporting logs and metrics, configure the following block:
k2k:
otelConfig:
serviceName: "k2k"
metricsExporter: "prometheus"
tracesExporter: "none"
logsExporter: "none"
prometheusHost: "0.0.0.0"
prometheusPort: 9090
Note: The export functionality for logs and traces is currently unavailable.
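If you run the Prometheus Operator, one way to scrape this endpoint is with a PodMonitor. The sketch below is only an assumption-heavy starting point: the namespace, label selector, and container port name are not defined by this page and must be adjusted to your deployment.
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: lenses-k2k
  namespace: lenses-k2k                    # assumption: the namespace K2K is installed into
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: lenses-k2k   # assumption: adjust to the labels of your K2K pods
  podMetricsEndpoints:
    - path: /metrics
      port: metrics                        # assumption: the name of the container port serving 9090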
Replication Configuration
The configuration file is in YAML and has the following basic sections:
source: defines the source cluster details (required)
target: defines the target cluster details (required)
replication: defines the set of topics to replicate and how to replicate (required)
coordination: defines the settings for the coordinator, for example, the offsets (required)
features: defines the extra functionality, such as exactly once (optional)
errorHandling: defines how to handle errors (optional)
tracing: defines the open tracing components (optional)
The Helm definition of the replicationConfig parameter is an object:
k2k:
replicationConfig: {}
Therefore, all the YAML parameters that one can find in the configuration document above can be copied directly into this object.
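As an orientation only, a skeleton of the replicationConfig object following the sections listed above might look like this; the keys inside coordination, features, errorHandling, and tracing are omitted here, and complete source, target, and replication blocks are shown in the examples below:
k2k:
  replicationConfig:
    name: "my-k2k"            # a name for this replication flow
    source: {}                # source cluster details (required)
    target: {}                # target cluster details (required)
    replication: []           # topics to replicate and how (required)
    coordination: {}          # coordinator settings, e.g. offsets (required)
    features: {}              # optional extras such as exactly once
    errorHandling: {}         # optional error handling
    tracing: {}               # optional open tracing components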
Secrets can be created via the k2k.additionalEnv property and referenced in the following way (a complete additionalEnv example with a secretKeyRef is shown in the SASL/SCRAM example further down):
foo: ${env:string:MY_ENV}
bar: ${env:number:MY_ENV}
bar2: ${env:base64:MY_ENV}
bar3: ${file:MY_ENV}
Below is an example of a Kafka2Kafka replicationConfig that can be used:
k2k:
replicationConfig:
name: "k2k-demo-env"
features:
exactlyOnce: disabled
headerReplication: disabled
schemaMapping: disabled
optimizeOffsetCommitPartition: enabled
tracingHeaders: disabled
autoCreateControlTopics: enabled
autoCreateTopics: enabled
coordination:
assignment:
topic: "__k2k-app-eot-assignment"
commit:
topic: "__k2k-app-eot-consumer-offsets"
source:
kafka:
common:
"bootstrap.servers": "source-kafka:9092"
consumer:
"group.id": "k2k.eot"
target:
kafka:
common:
"bootstrap.servers": "target-kafka:9092"
replication:
- source:
topic:
- "topic1"
- "topic2"
- sink:
topic:
prefix: "k2k.eot."
partition: source
Example: replication between AWS MSK clusters using IAM authentication. The service account is annotated with an IAM role so that the MSK IAM callback handler can authenticate:
serviceAccount:
create: true
name: msk-serverless-sa
annotations:
eks.amazonaws.com/role-arn: arn:aws:iam::<AccountId>:role/MSKAccessRole
k2k:
replicationConfig:
name: "aws-k2k"
source:
kafka:
common:
bootstrap.servers: "boot.c1.kafka.eu-west-3.amazonaws.com:9098"
security.protocol: "SASL_SSL"
sasl.mechanism: "AWS_MSK_IAM"
sasl.jaas.config: "software.amazon.msk.auth.iam.IAMLoginModule required;"
sasl.client.callback.handler.class: "software.amazon.msk.auth.iam.IAMClientCallbackHandler"
consumer:
client.id: "demo-k2k"
group.id: "k2k.eot"
target:
kafka:
common:
bootstrap.servers: "boot.c2.kafka.eu-west-3.amazonaws.com:9098"
security.protocol: "SASL_SSL"
sasl.mechanism: "AWS_MSK_IAM"
sasl.jaas.config: "software.amazon.msk.auth.iam.IAMLoginModule required;"
sasl.client.callback.handler.class: "software.amazon.msk.auth.iam.IAMClientCallbackHandler"
replication:
- source:
topic: #required
- "mysource-topic-1"
- sink:
partition: source #required
topic:
prefix: "aws."
suffix: ".copy"
Example: SASL/SCRAM authentication with the JAAS configuration supplied from a Kubernetes Secret.
Prerequisites:
A Secret containing sasl-jaas.conf must be pre-created.
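A minimal sketch of such a Secret, using the kafka-jaas-secret name referenced in the example below and an illustrative SCRAM JAAS line:
apiVersion: v1
kind: Secret
metadata:
  name: kafka-jaas-secret
type: Opaque
stringData:
  # illustrative credentials; replace with your own SCRAM username and password
  sasl-jaas.conf: |
    org.apache.kafka.common.security.scram.ScramLoginModule required username="demo-user" password="demo-password";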
k2k:
additionalEnv:
- name: SASL_JAAS_CONFIG
valueFrom:
secretKeyRef:
name: kafka-jaas-secret
key: sasl-jaas.conf
replicationConfig:
name: "demo-k2k"
source:
kafka:
common:
bootstrap.servers: "kafka-us-dev-1.domain.io:9093"
security.protocol: "SASL_PLAINTEXT"
sasl.mechanism: "SCRAM-SHA-512"
sasl.jaas.config: ${env:string:SASL_JAAS_CONFIG}
consumer:
group.id: "demo-k2k-consumer"
client.id: "demo-k2k"
target:
kafka:
common:
security.protocol: "SASL_PLAINTEXT"
sasl.mechanism: "SCRAM-SHA-512"
sasl.jaas.config: ${env:string:SASL_JAAS_CONFIG}
bootstrap.servers: "kafka-us-dev-2.domain.io:9093"
producer:
client.id: "demo-k2k"
replication:
- source:
topic: #required
- "airline-customers"
- sink:
partition: source #required
topic:
prefix: "demo."
suffix: ".copy"
Example: mutual TLS (SSL) with PEM certificates mounted from Kubernetes Secrets.
Prerequisites:
A Secret containing caroot.pem (the CA certificate) must be pre-created;
A Secret containing all.pem (the client certificate plus private key) must be pre-created.
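A minimal sketch of these Secrets, using the names referenced by the volumes in the example below; the PEM contents are placeholders:
apiVersion: v1
kind: Secret
metadata:
  name: external-kafka-ca-cert
type: Opaque
stringData:
  caroot.pem: |
    -----BEGIN CERTIFICATE-----
    <CA certificate>
    -----END CERTIFICATE-----
---
apiVersion: v1
kind: Secret
metadata:
  name: external-kafka-certs-all
type: Opaque
stringData:
  all.pem: |
    -----BEGIN CERTIFICATE-----
    <client certificate>
    -----END CERTIFICATE-----
    -----BEGIN PRIVATE KEY-----
    <client private key>
    -----END PRIVATE KEY-----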
additionalVolumeMounts:
- name: external-kafka-ca-cert
mountPath: "/etc/cacert/caroot.pem"
subPath: "caroot.pem"
- name: external-kafka-certs-all
mountPath: "/etc/clientcert/all.pem"
subPath: "all.pem"
additionalVolumes:
- name: external-kafka-ca-cert
secret:
secretName: external-kafka-ca-cert
- name: external-kafka-certs-all
secret:
secretName: external-kafka-certs-all
k2k:
acceptEULA: true
otelConfig:
serviceName: "k2k"
metricsExporter: "prometheus"
tracesExporter: "none"
logsExporter: "none"
prometheusHost: "0.0.0.0"
prometheusPort: "9090"
replicationConfig:
name: "k2k-demo-env"
source:
kafka:
common:
"security.protocol": "SSL"
"ssl.truststore.type": "PEM"
"ssl.keystore.type": "PEM"
"ssl.truststore.location": "/etc/cacert/caroot.pem"
"ssl.keystore.location": "/etc/clientcert/all.pem"
"bootstrap.servers": "kafka-us-dev-1.domain.io:9093"
consumer:
"group.id": "k2k.eot"
target:
kafka:
common:
"security.protocol": "SSL"
"ssl.truststore.type": "PEM"
"ssl.keystore.type": "PEM"
"ssl.truststore.location": "/etc/cacert/caroot.pem"
"ssl.keystore.location": "/etc/clientcert/all.pem"
"bootstrap.servers": "kafka-us-dev-1.domain.io:9093"
replication:
- source:
topic:
- "airline-customers"
- "airline-customers-name"
- sink:
topic:
prefix: "k2k.eot."
partition: source
(Optional) Configure Service Accounts
By default, Lenses Kafka2Kafka uses the default Kubernetes service account, but you can choose to use a specific one.
If the user defines the following:
# serviceAccount is the service account to be used by Kafka2Kafka
serviceAccount:
create: true
annotations: {}
name: lenses-k2k
The chart will create a new service account in the defined namespace for Kafka2Kafka to use.
Add chart repository
First, add the Helm Chart repository using the Helm command line:
helm repo add lensesio https://helm.repo.lenses.io/
helm repo update
Install Kafka2Kafka
Then install Kafka2Kafka with your values file:
helm install lenses-k2k lensesio/lenses-k2k \
--values values.yaml \
--create-namespace --namespace lenses-k2k \
--version 0.0.10