4.2 Kafka Connect

Add one or more Kafka Connect clusters to manage connectors. For each cluster provide at least a unique name, the addresses of all the nodes, authentication settings if required, and its backing Kafka topics. Optionally provide metrics settings and an encryption key for running SQL Processors in it.
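As a minimal sketch of these requirements (the cluster name, host, and topic names below are placeholders), a cluster entry looks like this:

```
lenses.kafka.connect.clusters = [
  {
    name: "my-connect-cluster",            # unique name: alphanumeric characters and dashes
    urls: [
      { url: "http://CONNECT_HOST:8083" }  # at least one worker url, scheme included
    ],
    statuses: "connect-status",            # the backing Kafka topics of the Connect cluster
    configs: "connect-configs",
    offsets: "connect-offsets"
  }
]
```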

Users need explicit permission via their groups to see a Connect cluster. In order to view, create, or manage a connector, they further need permissions to the topics it accesses.

One Kafka Connect cluster with two workers:

The array of worker urls must not be empty.

lenses.kafka.connect.clusters = [
 {
   name: "changeDataCapture",
   urls: [
     { url:"http://CONNECT_HOST_1:8083" },
     { url:"http://CONNECT_HOST_2:8083" }
   ],
   statuses: "connect-status",
   configs : "connect-configs",
   offsets : "connect-offsets"
   # Uncomment and configure accordingly to make this Cluster eligible for deploying SQL processors
   # Check https://docs.lenses.io/4.2/configuration/sql/connect/
   # ,aes256.key: "0123456789abcdef0123456789abcdef"
 }
]

One Kafka Connect cluster with two workers using basic authentication:

For Aiven you must use the secure https protocol.
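For illustration, a worker entry for such a TLS-terminated service might look like the following (the hostname and port are placeholders for the values Aiven provides):

```
urls: [
  { url: "https://CONNECT_HOST_1:443" }
]
```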

lenses.kafka.connect.clusters = [
  {
    name: "changeDataCapture",
    username: "USERNAME",
    password: "PASSWORD",
    auth: "USER_INFO",
    urls: [
      # non-empty list of worker urls
    ],
    statuses: "connect-status",
    configs: "connect-configs",
    offsets: "connect-offsets"
    # Uncomment and configure accordingly to make this Cluster eligible for deploying SQL processors
    # Check https://docs.lenses.io/4.2/configuration/sql/connect/
    # ,aes256.key: "0123456789abcdef0123456789abcdef"
  }
]

One Kafka Connect cluster with two workers and JMX metrics.

When defining metrics objects, all entries should use the same value for the type field; otherwise, the type of the first metrics object is applied to all of them.

For JMX, the host part of the metrics.url key ("CONNECT_HOST_1" and "CONNECT_HOST_2" in the examples below) needs to match the host of the worker url itself.

lenses.kafka.connect.clusters = [
 {
   name: "changeDataCapture",
   urls: [
     {
       url: "http://CONNECT_HOST_1:8083",
       metrics: {
          type: "JMX",               # One of 'JMX', 'JOLOKIAP' (POST), 'JOLOKIAG' (GET)
          url: "CONNECT_HOST_1:9584" # JMX port is 9584; Scheme (http or https) is required when using JOLOKIAP and JOLOKIAG !
       }
     },
     {
       url: "http://CONNECT_HOST_2:8083",
       metrics: {
          type: "JMX",               # One of 'JMX', 'JOLOKIAP' (POST), 'JOLOKIAG' (GET)
          url: "CONNECT_HOST_2:9584" # JMX port is 9584; Scheme (http or https) is required when using JOLOKIAP and JOLOKIAG !
       }
     }
   ],
   statuses: "connect-status",
   configs : "connect-configs",
   offsets : "connect-offsets"
   # Uncomment and configure accordingly to make this Cluster eligible for deploying SQL processors
   # Check https://docs.lenses.io/4.2/configuration/sql/connect/
   # ,aes256.key: "0123456789abcdef0123456789abcdef"
 }
]

One Kafka Connect cluster with two workers and authenticated JMX metrics over SSL.

All metrics entries should have the same values for the following fields: type, ssl, user, and password. Otherwise, the first metrics object is treated as the valid one for those keys.

In principle, the host part of the metrics.url key ("CONNECT_HOST_1" and "CONNECT_HOST_2" in the examples below) should be the same as the worker url host itself.

lenses.kafka.connect.clusters = [
 {
   name: "changeDataCapture",
   urls: [
     {
       url:"http://CONNECT_HOST_1:8083",
       metrics: {           # Metrics section is (optional)
         ssl: true,         # SSL - ensure JMX/HTTP certificate is accepted by Lenses truststore
         user: "admin",     # JMX protected by user/pass
         password: "admin",
         type: "JMX",       # One of 'JMX', 'JOLOKIAP' (POST), 'JOLOKIAG' (GET)
         url: "CONNECT_HOST_1:9584" # JMX port is 9584; Scheme (http or https) is required when using JOLOKIAP and JOLOKIAG !
       }
     },
     {
       url:"http://CONNECT_HOST_2:8083",
       metrics: {           # Metrics section is (optional)
         ssl: true,         # SSL - ensure JMX/HTTP certificate is accepted by Lenses truststore
         user: "admin",     # JMX protected by user/pass
         password: "admin",
         type: "JMX",       # One of 'JMX', 'JOLOKIAP' (POST), 'JOLOKIAG' (GET)
         url: "CONNECT_HOST_2:9584" # JMX port is 9584; Scheme (http or https) is required when using JOLOKIAP and JOLOKIAG !
       }
     }
   ],
   statuses: "connect-status",
   configs: "connect-configs",
   offsets: "connect-offsets"
   # Uncomment and configure accordingly to make this Cluster eligible for deploying SQL processors
   # Check https://docs.lenses.io/4.2/configuration/sql/connect/
   # ,aes256.key: "0123456789abcdef0123456789abcdef"
 }
]

Kafka Connect SSL/TLS HTTP configuration.

All metrics entries should have the same values for these fields: type, ssl, user, and password. Otherwise, the first metrics object's settings are applied.

In principle, the host part of the metrics.url key ("CONNECT_HOST_1" and "CONNECT_HOST_2" in the examples below) should be the same as the worker url host itself.

lenses.kafka.connect.clusters = [
 {
   name: "changeDataCapture",
   urls: [
     {
       url:"http://CONNECT_HOST_1:8083",
       metrics: {           # Metrics section is (optional)
         ssl: true,         # SSL - ensure JMX/HTTP certificate is accepted by Lenses truststore
         user: "admin",     # JMX protected by user/pass
         password: "admin",
         type: "JOLOKIAP",       # One of 'JMX', 'JOLOKIAP' (POST), 'JOLOKIAG' (GET)
         url: "https://CONNECT_HOST_1:9584" # JMX port is 9584; Scheme (http or https) is required when using JOLOKIAP and JOLOKIAG !
       }
     }
   ],
   statuses: "connect-status",
   configs: "connect-configs",
   offsets: "connect-offsets"
   # Uncomment and configure accordingly to make this Cluster eligible for deploying SQL processors
   # Check https://docs.lenses.io/4.2/configuration/sql/connect/
   # ,aes256.key: "0123456789abcdef0123456789abcdef"
 }
]
lenses.kafka.connect.ssl.keystore.location   = "/path/to/keystore.jks"
lenses.kafka.connect.ssl.keystore.password   = "changeit"
lenses.kafka.connect.ssl.key.password        = "changeit"
lenses.kafka.connect.ssl.truststore.location = "/path/to/truststore.jks"
lenses.kafka.connect.ssl.truststore.password = "changeit"
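
If the workers' HTTPS certificate is not signed by a CA the JVM already trusts, it has to be imported into the truststore referenced above. A sketch using the JDK's keytool (the certificate file name is a placeholder; obtain the actual certificate from whoever operates the Connect cluster):

```
# Import the Connect workers' certificate into the truststore used by Lenses
keytool -importcert -noprompt \
  -alias connect-workers \
  -file connect-cert.pem \
  -keystore /path/to/truststore.jks \
  -storepass changeit
```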

Only Connect clusters in distributed mode are supported; workers running in standalone mode are not. Each Connect node URL should include the scheme (http or https).
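
A quick way to verify that a worker URL is reachable and serving the Kafka Connect REST API (the hostname below is a placeholder) is to query it directly:

```
# The root endpoint returns the worker's version, commit, and Kafka cluster id
curl -s http://CONNECT_HOST_1:8083/
# Lists the connectors deployed on the cluster
curl -s http://CONNECT_HOST_1:8083/connectors
```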

Metrics, if configured, are shown in each connector's page and in the topology view. If some workers are omitted, the metrics will be incomplete, since each worker only exports its own metrics.

The AES-256 key (aes256.key), along with the SQL connector installed in the Connect cluster, is required to run SQL Processors in the cluster. The key must be exactly 32 characters long and must match the key set in the Connect cluster. Learn more about SQL in Connect.

The name of the Connect cluster (lenses.kafka.connect.clusters[].name) may only contain alphanumeric characters ([A-Za-z0-9]) and dashes (-). Valid examples would be dev, Prod1, SQLCluster, Prod-1, SQL-Team-Awesome.

Restarting Lenses with a different lenses.kafka.connect.clusters setting will cause the following:

  1. If a cluster was removed, all groups lose their permissions for it.
  2. If a cluster was added, it will not be visible to any group until permissions are granted.

See configuration settings.