Configuration

Introduction

In this section, we explore the Lenses configuration: how it is laid out on disk, what options are available, which are mandatory, specific cases such as brokers that require authentication, and more. It is the best place to start if you are new to Lenses and tasked with setting it up. Even if you install Lenses via Docker or Helm, these same settings can be applied via environment variables and YAML files.

Lenses requires two configuration files, lenses.conf and security.conf:

  • lenses.conf
    Here we store most of the configuration options, such as the connection details for your brokers or the port Lenses uses. You have to create this file before Lenses can start. For the complete list of configuration options, please refer to Options Reference.
  • security.conf
    Here we configure the authentication module. For more information about authentication methods and authorization, refer to Security Configurations.

Lenses configuration files are in HOCON format.

Our Docker image and Helm charts create these files automatically on start by reading environment variables, ConfigMaps, and secrets.
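As an illustration of how option names map to environment variables, the common convention is to upper-case the option and replace dots with underscores. This is a sketch of the idea; verify the exact variable names against the Docker image documentation:

```
# lenses.port          -> LENSES_PORT
LENSES_PORT=9991
# lenses.kafka.brokers -> LENSES_KAFKA_BROKERS
LENSES_KAFKA_BROKERS="PLAINTEXT://host1:9092,PLAINTEXT://host2:9092"
```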

Quick Start

A typical example of lenses.conf for a Kafka cluster with Schema Registry, Kafka Connect and Zookeeper, without authentication to the brokers:

# Set the ip:port for Lenses to bind to
lenses.ip = 0.0.0.0
lenses.port = 9991

# License file allowing connecting to up to N brokers
lenses.license.file = "license.json"

# Directory where Lenses stores local storage. Currently Data Policies are stored here.
# If omitted it will create a directory named 'storage' under the current directory.
# Write access is needed as well as surviving between upgrades.
lenses.storage.directory = "/var/lib/lenses/storage"

# Set up infrastructure end-points

lenses.kafka.brokers = "PLAINTEXT://host1:9092,PLAINTEXT://host2:9092,PLAINTEXT://host3:9092"
#lenses.jmx.broker.port = BROKERS_JMX_PORT # required only if `lenses.zookeeper.hosts` is not provided

lenses.zookeeper.hosts = [
  { url:"host1:2181", jmx:"host1:9585" },
  { url:"host2:2181", jmx:"host2:9585" },
  { url:"host3:2181", jmx:"host3:9585" }
]
#lenses.zookeeper.chroot = "" # Optional in case a ZK chroot path is in use

lenses.schema.registry.urls = [
  { url:"http://host1:8081", jmx:"host1:9582" },
  { url:"http://host2:8081", jmx:"host2:9582" }
]

lenses.connect.clusters = [
  {
    name: "connect_cluster_X",
    statuses: "connect-statuses",
    configs: "connect-configs",
    offsets: "connect-offsets",
    urls: [
     { url: "http://host1:8083", jmx: "host1:9583" },
     { url: "http://host2:8083", jmx: "host2:9583"  }
    ]
  }
]

lenses.sql.execution.mode = "IN_PROC"
lenses.sql.state.dir = "/tmp/lenses-sql-kstream-state"

Also, a basic example of security.conf that adds an admin user. For the complete reference of available security options, check out Security Configurations.

# Security mode. Can be BASIC, LDAP, KERBEROS, CUSTOM_HTTP
lenses.security.mode = BASIC

# Security groups is a mandatory section for all security modes.
lenses.security.groups = [
  {
    name: "adminGroup",
    roles: ["Admin", "DataPolicyWrite", "DataPolicyRead", "TableStorageWrite", "AlertsWrite"]
  }
]

# Here you can set user accounts for the BASIC security mode.
lenses.security.users = [
  {
    username: "admin",
    password: "admin",
    displayname: "Lenses Admin",
    groups: ["adminGroup"]
  }
]

Basic Configuration

Let’s explore the most pertinent sections of Lenses configuration.

Host and Port

During startup, Lenses binds to all available network interfaces on port 9991. To adjust these to custom values, set the ip and port options.

# Set the ip:port for Lenses to bind to
#lenses.ip = 0.0.0.0
#lenses.port = 9991

License

With your Lenses subscription or trial, you received a license file. If you don’t have a license yet, contact us here. This license file (license.json for this guide) is necessary for the application to start. Once you have uploaded it to the server that runs Lenses, update the configuration to point at it. It is better to use an absolute file path, but a relative path from the directory you run Lenses from can work as well.

# License file allowing connecting to up to N brokers
lenses.license.file="license.json"

If you run Lenses under a specific user account, make sure that this account has permission to read the license file.

Kafka Brokers

Setting up access to the Kafka Brokers is very important. The simplest case is brokers that accept unauthenticated connections. In that case, only the lenses.kafka.brokers setting is required, which works like the bootstrap servers you would set for any Kafka client. Please include at least a few of your brokers in this list rather than just one, unless you have a single-broker installation.

lenses.kafka.brokers = "PLAINTEXT://host1:9092,PLAINTEXT://host2:9092"

Broker JMX

Lenses takes advantage of the JMX metrics the Kafka Brokers offer to monitor the health of your cluster and show metrics and information. Lenses can work without JMX access, but these features will then be disabled.

If Lenses is set up with access to Zookeeper, it can discover the JMX port of each broker automatically. If access to Zookeeper is unavailable, you can provide Lenses with the Brokers' JMX ports manually.

If all your brokers listen for JMX connections on the same port, set this:

lenses.jmx.broker.port = 9581

If the brokers listen on different JMX ports (a setup we advise against), you can pair Broker IDs and ports, like below:

lenses.jmx.brokers = [
  { id: "BROKER_1_ID", port:"JMX_1_PORT"},
  { id: "BROKER_2_ID", port:"JMX_2_PORT"},
  ...
]

Broker Authentication

Connecting to authenticated brokers is a bit more involved, but it works like any Kafka client. If you already have clients that use authentication, you'll have Lenses up and running in no time at all.

Kafka Brokers may be set up for authentication via the Simple Authentication and Security Layer (SASL), SSL/TLS, or both. SASL most commonly uses GSSAPI (Kerberos); however, recent versions of Kafka added more SASL mechanisms, such as SCRAM. Let's have a look at the various authentication scenarios.

SSL

If your Kafka cluster uses TLS certificates for authentication, set the broker protocol to SSL and then pass any keystore and truststore configurations to the consumer and producer settings by prefixing them with lenses.kafka.settings., for example:

lenses.kafka.brokers = "SSL://host1:9093,SSL://host2:9093"

lenses.kafka.settings.consumer.security.protocol        = SSL
lenses.kafka.settings.consumer.ssl.truststore.location  = /var/private/ssl/client.truststore.jks
lenses.kafka.settings.consumer.ssl.truststore.password  = test1234
lenses.kafka.settings.consumer.ssl.keystore.location    = /var/private/ssl/client.keystore.jks
lenses.kafka.settings.consumer.ssl.keystore.password    = test1234
lenses.kafka.settings.consumer.ssl.key.password         = test1234

lenses.kafka.settings.producer.security.protocol        = SSL
lenses.kafka.settings.producer.ssl.truststore.location  = /var/private/ssl/client.truststore.jks
lenses.kafka.settings.producer.ssl.truststore.password  = test1234
lenses.kafka.settings.producer.ssl.keystore.location    = /var/private/ssl/client.keystore.jks
lenses.kafka.settings.producer.ssl.keystore.password    = test1234
lenses.kafka.settings.producer.ssl.key.password         = test1234

If you use TLS certificates only for encryption of data on the wire, you can omit the keystore settings:

lenses.kafka.brokers = "SSL://host1:9093,SSL://host2:9093"

lenses.kafka.settings.consumer.security.protocol        = SSL
lenses.kafka.settings.consumer.ssl.truststore.location  = /var/private/ssl/client.truststore.jks
lenses.kafka.settings.consumer.ssl.truststore.password  = test1234

lenses.kafka.settings.producer.security.protocol        = SSL
lenses.kafka.settings.producer.ssl.truststore.location  = /var/private/ssl/client.truststore.jks
lenses.kafka.settings.producer.ssl.truststore.password  = test1234

If your brokers’ CA certificate is embedded in the system-wide truststore, you can omit the truststore settings.

SASL/GSSAPI

For Lenses to access Kafka in an environment set up with Kerberos (SASL) you need to provide a JAAS file as in the example below. If your Kafka cluster is set up with an authorizer (ACLs), it is advised to use the same principal as the brokers, so Lenses has superuser permissions.

Note

A system configured to work with Kerberos usually provides a system-wide Kerberos configuration file (krb5.conf) that points to the location of the KDC and includes other configuration options necessary to authenticate. If your system is missing this file, please contact your administrator.

KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/path/to/keytab-file"
  storeKey=true
  useTicketCache=false
  serviceName="kafka"
  principal="principal@MYREALM";
};

/*
  Optional section for authentication to zookeeper
  Please also remember to set lenses.zookeeper.security.enabled=true
*/
Client {
  com.sun.security.auth.module.Krb5LoginModule required
   useKeyTab=true
   keyTab="/path/to/keytab-file"
   storeKey=true
   useTicketCache=false
   principal="principal@MYREALM";
};

Once the JAAS file is in place, add it to LENSES_OPTS before starting Lenses:

export LENSES_OPTS="-Djava.security.auth.login.config=/opt/lenses/jaas.conf"

Last, set the security protocol in the Lenses configuration file:

lenses.kafka.brokers = "SASL_PLAINTEXT://host1:9094,SASL_PLAINTEXT://host2:9094"

lenses.kafka.settings.consumer.security.protocol=SASL_PLAINTEXT
lenses.kafka.settings.producer.security.protocol=SASL_PLAINTEXT

By default, the connection to Zookeeper remains unauthenticated. This only affects the Quota entries, which are written without any Zookeeper ACLs to protect them. The option lenses.zookeeper.security.enabled may be used to change this behavior, but in that case it is important to use the brokers' principal for Lenses. If Lenses is configured with a different principal, the brokers will not be able to manipulate the Quota entries and will fail to start. Please contact our support if you need help with this feature.

SASL_SSL

In this security protocol, Kafka uses a SASL method for authentication and TLS certificates for encryption of data on the wire. As such, the configuration is a combination of the SSL/TLS and SASL configurations.

Please provide Lenses with a JAAS file as described in the previous section and add it to LENSES_OPTS:

export LENSES_OPTS="-Djava.security.auth.login.config=/opt/lenses/jaas.conf"

Set Lenses to use SASL_SSL for its producer and consumer part. If your CA’s certificate isn’t part of the system-wide truststore, please provide Lenses with a truststore as well:

lenses.kafka.brokers = "SASL_SSL://host1:9096,SASL_SSL://host2:9096"

lenses.kafka.settings.consumer.security.protocol        = SASL_SSL
lenses.kafka.settings.consumer.ssl.truststore.location  = /var/private/ssl/client.truststore.jks
lenses.kafka.settings.consumer.ssl.truststore.password  = test1234

lenses.kafka.settings.producer.security.protocol        = SASL_SSL
lenses.kafka.settings.producer.ssl.truststore.location  = /var/private/ssl/client.truststore.jks
lenses.kafka.settings.producer.ssl.truststore.password  = test1234

SASL/SCRAM

For Lenses to access Kafka in an environment set up with SCRAM authentication (SASL/SCRAM), you need to provide Lenses with a JAAS file as in the example below. If Lenses is used with an ACL-enabled cluster, it is advised to use the same principal as the brokers, so it has superuser permissions.

KafkaClient {
  org.apache.kafka.common.security.scram.ScramLoginModule required
  username="[USERNAME]"
  password="[PASSWORD]";
};

Once the JAAS file is in place, add it to LENSES_OPTS before starting Lenses:

export LENSES_OPTS="-Djava.security.auth.login.config=/opt/lenses/jaas.conf"

Last, set the security protocol and mechanism in the Lenses configuration file:

lenses.kafka.brokers = "SASL_PLAINTEXT://host1:9092,SASL_PLAINTEXT://host2:9092"

lenses.kafka.settings.consumer.security.protocol=SASL_PLAINTEXT
lenses.kafka.settings.consumer.sasl.mechanism=SCRAM-SHA-256
lenses.kafka.settings.producer.security.protocol=SASL_PLAINTEXT
lenses.kafka.settings.producer.sasl.mechanism=SCRAM-SHA-256

An alternative to the jaas.conf file is to configure JAAS within the Lenses configuration (lenses.conf). The configuration format is HOCON; as such, multiline strings should be enclosed within triple quotes:

lenses.kafka.settings.consumer.sasl.jaas.config="""
  org.apache.kafka.common.security.scram.ScramLoginModule required
    username="[USERNAME]"
    password="[PASSWORD]";"""
lenses.kafka.settings.producer.sasl.jaas.config="""
  org.apache.kafka.common.security.scram.ScramLoginModule required
    username="[USERNAME]"
    password="[PASSWORD]";"""
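The inline sasl.jaas.config style works for the Kerberos module as well. The sketch below reuses the keytab, principal, and service name from the GSSAPI example earlier; the producer setting mirrors the consumer one:

```
lenses.kafka.settings.consumer.sasl.jaas.config="""
  com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="/path/to/keytab-file"
    storeKey=true
    useTicketCache=false
    serviceName="kafka"
    principal="principal@MYREALM";"""
```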

Zookeeper

Optionally, Lenses can use Zookeeper to manage quotas and auto-detect the brokers' JMX ports. Lenses also continuously monitors your Zookeeper installation and reports offline nodes. For installations where access to Zookeeper isn't possible, these features are disabled. To take advantage of the health monitoring feature, you have to provide a complete list of your Zookeeper nodes. JMX is optional; it is used in the Services screen to display information about your nodes, like the leader, node count, connections, and more. If your cluster is under a Zookeeper chroot, you must set this too.

lenses.zookeeper.hosts = [
  { url:"ZK_HOST_1:2181", jmx:"ZK_HOST_1:9585"},
  { url:"ZK_HOST_2:2181", jmx:"ZK_HOST_2:9585"},
  ...
]
#lenses.zookeeper.chroot = ""

If your zookeeper is protected with Kerberos, please also check zookeeper security.

Schema Registry

If you use the AVRO format to serialize records stored in Kafka, then most probably you use a Schema Registry implementation. The most common ones come from Confluent and Hortonworks. Lenses supports both.

In the simplest scenario, you only need to provide a list of your Schema Registry servers. Lenses also monitors the health of your nodes; for this check to work properly, a complete list of your Schema Registry servers is required. JMX is optional and used to display node details in the Services screen. You may also permit schema deletion.

lenses.schema.registry.urls = [
 { url:"http://SR_HOST_1:8081", jmx:"SR_HOST_1:9582"},
 { url:"http://SR_HOST_2:8081", jmx:"SR_HOST_2:9582"},
   ...
 ]
#lenses.schema.registry.delete = false

Note

It is necessary to add the scheme (http:// or https://) in front of the Schema Registry address.

For the Confluent implementation, we also need to know the topic that stores the schemas. If it's left at the default value (i.e. _schemas), then no action in Lenses is required.

lenses.schema.registry.topics = "_schemas"

If you use Hortonworks' Schema Registry, you should enable the appropriate mode:

lenses.schema.registry.mode = HORTONWORKS

Authentication

Internally, Lenses uses the AVRO serde classes provided by Confluent. As such, the authentication configuration reflects the options of these classes.

AVRO is used in two places in Lenses: the Schemas management screen, and the Lenses SQL engine, which powers the Topics screen when you browse your records, as well as the SQL and Processors screens where you can run complex queries and perform stream processing. Both need to be configured for authentication to the Schema Registry.

BASIC

For Basic authentication, please set these options:

lenses.schema.registry.auth = "USER_INFO"
lenses.schema.registry.username = "USERNAME"
lenses.schema.registry.password = "PASSWORD"

lenses.kafka.settings.producer.basic.auth.credentials.source = USER_INFO
lenses.kafka.settings.producer.basic.auth.user.info = "USERNAME:PASSWORD"

lenses.kafka.settings.consumer.basic.auth.credentials.source = USER_INFO
lenses.kafka.settings.consumer.basic.auth.user.info = "USERNAME:PASSWORD"

Warning

Please note that although you could add the BASIC authentication username and password in the Schema Registry URL, it is a bad idea, as Lenses displays this URL in a few places.

Kerberos

For Kerberos authentication, please set these options in lenses.conf:

# Enable Kerberos Authentication for schema registry
lenses.schema.registry.kerberos=true

# Define the schema registry principal
lenses.schema.registry.principal="HTTP/<YOUR-SCHEMA-REGISTRY-HOSTNAME>"

# Define the schema registry service name
lenses.schema.registry.service.name="registryclient@<KERBEROS-DOMAIN-REALM>"

# Define the key tab
lenses.schema.registry.keytab="/path/to/registryclient.keytab"

You also need to point the LENSES_OPTS variable at the Kerberos configuration:

export LENSES_OPTS="-Djava.security.krb5.conf=krb5.conf"

Kafka Connect

You can add your Kafka Connect clusters to Lenses, so you can manage your connectors (create, remove, update), monitor them (ephemeral metrics), detect issues (e.g. a failed task), and view them in the topology view. For this, Lenses needs a list of the Kafka Connect nodes (workers) and the topics they use for storing their configuration, status, and source offsets. Additionally, if you want to monitor your nodes and get alerts when a worker is offline, the list of workers should be exhaustive (include all your workers). JMX is optional and is used to provide additional information about your Connect clusters in the Services screen.

lenses.connect.clusters = [
  {
    name: "development",
    urls: [
            { url:"http://CONNECT_HOST_1:8083", jmx:"localhost:9583"},
            { url:"http://CONNECT_HOST_2:8083", jmx:"localhost:9583"},
            ...
    ],
    statuses: "connect-statuses",
    configs: "connect-configs",
    offsets: "connect-offsets"
  },
  ...
]

Note

The cluster name cannot contain dots (.), nor underscores (_).

Warning

If Lenses fails to find a Connect Cluster that is defined in lenses.conf during startup, it will exit immediately.

Authentication

If your Connect cluster requires authentication, additional configuration is required. Alternatively, you can protect your Connect cluster behind a firewall and let your users manage it only via Lenses.

BASIC

To configure BASIC authentication to Connect, add the connection details in the lenses.connect.clusters block:

lenses.connect.clusters = [
  {
    name: "development",
    urls: [
            { url:"http://CONNECT_HOST_1:8083", jmx:"localhost:9583"},
            { url:"http://CONNECT_HOST_2:8083", jmx:"localhost:9583"},
            ...
    ],
    statuses: "connect-statuses",
    configs: "connect-configs",
    offsets: "connect-offsets",
    auth: "USER_INFO",
    username: "USERNAME",
    password: "PASSWORD"
  },
  ...
]

Warning

Please note that although you could add the BASIC authentication username and password in the Connect URL, it is a bad idea, as Lenses displays this URL in a few places.

Lenses SQL Processors

Lenses SQL Processors are a great way to do stream processing using the Lenses SQL dialect.

IN_PROC

Out of the box, LSQL Processors work within Lenses, in what we call IN_PROC mode. It's a convenient setup to get a feel for the Processors' functionality without any hassle. The only thing you need to set up is a directory that Lenses can use to store some ephemeral data.

lenses.sql.state.dir = "/tmp/lenses-sql-kstream-state"

CONNECT

Connect mode lets you run LSQL Processors within your Kafka Connect cluster(s). An LSQL Connector is provided with your Lenses subscription (not part of the trial), which you have to add to your Connect cluster like any other connector. For more details on how to add the connector, see the LSQL Connector deployment.

Once you load the connector into one (or more) of your Connect clusters, Lenses can detect it automatically. Then you only have to set Lenses to use CONNECT mode and also set a directory which the connector (not Lenses) can use to write ephemeral data.

lenses.sql.execution.mode = CONNECT
lenses.sql.state.dir = "/tmp/lenses-sql-kstream-state"

KUBERNETES

Kubernetes mode is the most scalable one for Lenses SQL Processors. Just type your LSQL query and fire up as many pods as you like in your Kubernetes cluster. The first step to enable this mode is to add the Lenses Container Registry key to your Kubernetes cluster. For more information on how to do this, check how to set up LSQL in Kubernetes.

Once your cluster is ready, you only have to provide Lenses with a kubeconfig file so that it can access the cluster, and the service account to use.

lenses.sql.execution.mode = KUBERNETES
lenses.kubernetes.config.file = "/home/lenses/.kube/config"
lenses.kubernetes.service.account = "default"

AlertManager

Lenses continuously monitors your cluster and informs you of service degradation, consumer group lag, and various operational issues. These actionable events are considered alerts and are forwarded to your AlertManager (AM) installation, so they can be deduplicated, grouped, and routed accordingly. Check out the AlertManager integration for examples of how to configure your AlertManager for Lenses' alerts.

Note

Not all alerts are actionable and thus forwarded to AlertManager. AM expects an alert that can be raised and brought down once fixed. For example, an offline broker raises an alert, and once the broker comes back online, the alert is dismissed.

In the configuration file, you have to provide a list of your AlertManager endpoints. AM clusters work by having an application send each alert to all nodes; the nodes themselves make sure the alert is processed at least once.

lenses.alert.manager.endpoints = "http://ALERTMANAGER_HOST_1:9094,http://ALERTMANAGER_HOST_2:9094,..."

If you have more than one Lenses installation, it’s a good idea to set the AM source to something unique so that AlertManager can distinguish between your different installations. Also, it’s useful (but optional) to set the Lenses address in the AM generator URL so that you can navigate to the Lenses Web Interface through the alerts’ links. This URL should be the address your users use to access Lenses.

lenses.alert.manager.source = "Lenses"
lenses.alert.manager.generator.url = "http://LENSES_HOST:9991"

Grafana

If you’ve set up the Lenses Monitoring Suite, or have your own monitoring solution in place, you can set the Grafana address (or your own monitoring tool’s address) in Lenses, so you get a link to it from the web interface.

lenses.grafana = "http://GRAFANA_HOST:3000"

Advanced Configuration

Lenses Storage Topics

Lenses keeps a portion of its configuration and data inside Kafka topics. You can find these under the System Topics category. They store information about your cluster, metrics, auditing, processors, and more. When the application starts, it checks that they exist and creates them if needed. Although usually not needed, you can override the default names for these topics:

# topics created on start-up that Lenses uses to store state
lenses.topics.audits            = "_kafka_lenses_audits"
lenses.topics.cluster           = "_kafka_lenses_cluster"
lenses.topics.metrics           = "_kafka_lenses_metrics"
lenses.topics.profiles          = "_kafka_lenses_profiles"
lenses.topics.processors        = "_kafka_lenses_processors"
lenses.topics.alerts.storage    = "_kafka_lenses_alerts"
lenses.topics.lsql.storage      = "_kafka_lenses_lsql_storage"
lenses.topics.alerts.settings   = "_kafka_lenses_alerts_settings"
lenses.topics.metadata          = "_kafka_lenses_topics_metadata"
lenses.topics.external.topology = "__topology"
lenses.topics.external.metrics  = "__topology__metrics"

Warning

These topics are created and managed by Lenses automatically. Do not create them by hand as they may need compaction enabled or a certain number of partitions. If you are using ACLs, only allow Lenses to manage these topics.

ACLs

If your Kafka cluster is set up with an authorizer (ACLs), Lenses should have at least permission to manage and access its storage topics. Make sure to set the principal and host appropriately:

kafka-acls \
    --authorizer-properties zookeeper.connect=ZOOKEEPER_HOST:2181 \
    --add \
    --allow-principal User:Lenses \
    --allow-host lenses-host \
    --operation Read \
    --operation Write \
    --operation Alter \
    --topic topic
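On Kafka 2.0 or later, a prefixed resource pattern can cover all Lenses storage topics at once. This is a sketch using the default _kafka_lenses_ prefix; --operation All is broad, so narrow the operations if your security policy requires it:

```
kafka-acls \
    --authorizer-properties zookeeper.connect=ZOOKEEPER_HOST:2181 \
    --add \
    --allow-principal User:Lenses \
    --allow-host lenses-host \
    --operation All \
    --resource-pattern-type prefixed \
    --topic _kafka_lenses_
```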

Topology

The Topology screen offers a window to your data flows, a high-level view of how your data moves in and out of Kafka. Lenses builds the topology graph from your connectors, LSQL processors and applications that include our topology libraries.

To build the graph, some information is needed. The Lenses SQL processors (Kafka Streams applications written with LSQL) are always managed automatically, so you don't have to do anything. The same goes for the more than 45 Kafka Connect connectors we support out of the box. For any other connector, it's as simple as adding it to lenses.connectors.info:

lenses.connectors.info = [
  {
     class.name = "org.apache.kafka.connect.file.FileStreamSinkConnector"
     name = "File"
     instance = "file"
     sink = true
     extractor.class = "com.landoop.kafka.lenses.connect.SimpleTopicsExtractor"
     icon = "file.png"
     description = "Store Kafka data into files"
     author = "Apache Kafka"
  },
  ...
]

Your custom applications, on the other hand, need to embed our topology libraries. For more information about the topology setup, for both connectors and external applications, please have a look at the Topology Configuration.

Consumer Groups Lag

Lenses exposes the Kafka Consumer Groups lag via a Prometheus metrics endpoint within the application. The default Prometheus path is used (/metrics), so you can add the Lenses address as is to your Prometheus targets. No additional configuration is required.
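A minimal Prometheus scrape configuration for this endpoint might look like the following; the job name and target are illustrative:

```yaml
scrape_configs:
  - job_name: "lenses"
    metrics_path: /metrics   # the default, shown for clarity
    static_configs:
      - targets: ["LENSES_HOST:9991"]
```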

Slack Integration

Lenses can post alerts directly to Slack. We strongly advise using the AlertManager integration instead and posting alerts to Slack via AM, as alerts without deduplication can cause too much noise.

To integrate Lenses alerting with Slack, add an incoming webhook. Select the #channel where Lenses can post alerts and copy the Webhook URL:

lenses.alert.plugins.slack.enabled      = true
lenses.alert.plugins.slack.webhook.url  = "https://hooks.slack.com/services/SECRET/YYYYYYYYYYY/XXXXXXXX"
lenses.alert.plugins.slack.username     = "lenses"
lenses.alert.plugins.slack.channel      = "#alerts"

Kafka ACLs

You can manage your Kafka ACLs through Lenses. If you are running Kafka 1.0 or later, you don't have to set anything in the configuration file. If your brokers are configured with an authorizer, Lenses will allow you to see and manage ACLs. By default, this is done using the Kafka AdminClient protocol.

When using Kafka 0.11 or older, you have to switch to ACL management via Zookeeper. To do that, configure Lenses with access to Zookeeper and set the ACLs broker mode to false:

lenses.acls.broker.mode = false

Note

The ACL management functionality is tested with the default Kafka authorizer class.

Producer & Consumer

Lenses interacts with your Kafka Cluster via Kafka Consumers and Producers. There may be scenarios where the Consumer and/or the Producer need to be tweaked. The settings of each are kept separate; prefix any option described in the Kafka documentation for the new consumer with lenses.kafka.settings.consumer and for the producer with lenses.kafka.settings.producer. As an example:

lenses.kafka.settings.consumer.isolation.level = "read_committed"
lenses.kafka.settings.producer.acks = "all"

Warning

Changing the default settings of the Kafka client may lead to unexpected issues. Furthermore, some settings are set dynamically at runtime; for example, IN_PROC LSQL processors get their own group.id. We encourage you to visit our community or consult with us via one of the available channels if you need help tweaking Lenses.

System Topics

System topics are a convention used by Lenses to separate topics created by users from topics created by software, such as Lenses and Kafka Connect. Lenses shows system topics in a different tab in the Topics screen to reduce users' cognitive load.

The default setting includes the Lenses system topics, the LSQL processors' KStreams topics, consumer offsets, schemas, and transactions. You can add topics of your own as well, but it's advised to keep the default ones too, so system topics won't show up among your user topics. The setting takes prefixes; for example, the lsql_ item matches all topics starting with lsql_.

lenses.kafka.control.topics = [
  "connect-configs",
  "connect-offsets",
  "connect-status",
  "connect-statuses",
  "_schemas",
  "__consumer_offsets",
  "_kafka_lenses_",
  "lsql_",
  "__transaction_state"
]
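As an illustration of the prefix semantics above, the following Python sketch (a hypothetical helper, not Lenses code) classifies topic names the same way, using a simple startswith check:

```python
# Hypothetical helper illustrating the prefix convention of
# lenses.kafka.control.topics: an entry matches any topic whose
# name starts with it.

CONTROL_TOPIC_PREFIXES = [
    "connect-configs",
    "connect-offsets",
    "connect-status",
    "connect-statuses",
    "_schemas",
    "__consumer_offsets",
    "_kafka_lenses_",
    "lsql_",
    "__transaction_state",
]


def is_system_topic(topic: str) -> bool:
    """Return True when the topic matches any configured prefix."""
    return any(topic.startswith(prefix) for prefix in CONTROL_TOPIC_PREFIXES)


print(is_system_topic("lsql_agg_store_changelog"))  # True: matches "lsql_"
print(is_system_topic("_kafka_lenses_audits"))      # True: matches "_kafka_lenses_"
print(is_system_topic("orders"))                    # False: a user topic
```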

Tuning Lenses

Lenses comes tuned out of the box, but as every production setup is different, there are many advanced options to tweak the behavior of the software. These settings include the connections to the Kafka services and JMX ports, the web server and the WebSocket part of Lenses, the LSQL engine settings, the frequency of various update actions (like how often we update the consumers), and many more.

For a list of the advanced options, please check out the Options Reference Table and also have a look at the lenses.conf.sample file that comes with the Lenses archive or under the /opt/lenses/lenses.conf.sample path in our docker images.

Our recommendation is to install our software with the default settings and only go to the advanced section if you have a particular reason. Changing them without a good reason can lead to unexpected behavior.

We encourage you to visit our community or consult with us via one of the available channels if you need help or advice tweaking Lenses.

Runtime Configuration

Java Options

Lenses runs on the Java Virtual Machine (JVM). You can tune it like any JVM-based application, and we made sure to follow the same conventions you see throughout the Kafka ecosystem. This means there are five environment variables you may use: LENSES_OPTS, LENSES_HEAP_OPTS, LENSES_JMX_OPTS, LENSES_LOG4J_OPTS and LENSES_PERFORMANCE_OPTS. Let's see them in detail:

LENSES_OPTS
This variable should be used for generic settings, such as the Kerberos configuration (e.g. SASL/GSSAPI authentication to the Brokers). Please note that in our Docker image we add a Java agent to this option (in addition to your settings) to export Lenses metrics in Prometheus format.
LENSES_HEAP_OPTS
Here you can set options about the JVM heap. The default setting is -Xmx3g -Xms512m which sets the heap size between 512MB and 3GB. It will serve you well even for larger clusters. It is possible to set the upper limit (3GB) lower if needed. For our Lenses Box as an example, we set it at just 1.2GB. If you are using many LSQL processors in IN_PROC mode, or your cluster has more than 3000 partitions, you should increase it.
LENSES_JMX_OPTS
This variable can be used to tweak the JMX options that the JVM offers, such as if the JMX will allow remote access. Have a look at the Metrics Section for more information.
LENSES_LOG4J_OPTS
This variable can be used to tweak Lenses logging. Please note that Lenses uses the Logback library for logging. For more information about this, check the Logging section.
LENSES_PERFORMANCE_OPTS

Here you can tune the JVM. Our default settings should serve you well:

-server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+DisableExplicitGC -Djava.awt.headless=true
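For instance, to adjust the heap limits described under LENSES_HEAP_OPTS above; the values here are illustrative, so size them for your own cluster:

```
export LENSES_HEAP_OPTS="-Xms1g -Xmx6g"
```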

Logging

Lenses uses the Logback framework for logging. For its configuration, upon startup it looks for a file named logback.xml: first in the current directory (the directory you run Lenses from), then in /etc/lenses/, and last in the Lenses installation directory. The first one found (in the above order) is used, and it is printed in the startup logs so you know which Logback configuration file is in use. This is useful because the application constantly monitors this file for changes: you can edit it, and Lenses will reload it without needing a restart.

To use a file at a custom location, set the LENSES_LOG4J_OPTS environment variable as in the example:

export LENSES_LOG4J_OPTS="-Dlogback.configurationFile=file:mylogback.xml"

Lenses scans the log configuration file for changes every 30 seconds.

Inside the installation directory, there is also a logback-debug.xml file, where we set the default logging level to DEBUG. You can use it to quickly increase logging verbosity.
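For example, to switch to the debug configuration without editing any files, point LENSES_LOG4J_OPTS at it. The installation path below is hypothetical; use your own:

```shell
# Hypothetical installation path; adjust to where Lenses is installed.
export LENSES_LOG4J_OPTS="-Dlogback.configurationFile=file:/opt/lenses/logback-debug.xml"
```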

Tip

For convenience, Lenses offers a basic log viewer within the web interface. Once logged into Lenses, visit http://LENSES_HOST/lenses/#/logs to check it out.

Log Level

The default log level is set to INFO, except for some 3rd party classes we feel are too verbose at this level. You can use the logback-debug.xml configuration to quickly switch to DEBUG.

For fine-grained control you can edit the logback.xml file and adjust the global or per class log level.

The default logger levels are:

<logger name="com.landoop" level="INFO"/>
<logger name="io.lenses" level="INFO"/>
<logger name="akka" level="INFO"/>
<logger name="io.confluent.kafka.serializers.KafkaAvroDeserializerConfig" level="WARN"/>
<logger name="io.confluent.kafka.serializers.KafkaAvroSerializerConfig" level="WARN"/>
<logger name="org.apache.calcite" level="OFF"/>
<logger name="org.apache.kafka" level="WARN"/>
<logger name="org.apache.kafka.clients.admin.AdminClientConfig" level="ERROR"/>
<logger name="org.apache.kafka.clients.consumer.internals.AbstractCoordinator" level="WARN"/>
<logger name="org.apache.kafka.clients.consumer.ConsumerConfig" level="ERROR"/>
<logger name="org.apache.kafka.clients.producer.ProducerConfig" level="ERROR"/>
<logger name="org.apache.kafka.clients.NetworkClient" level="ERROR"/>
<logger name="org.apache.kafka.common.utils.AppInfoParser" level="ERROR"/>
<logger name="org.apache.zookeeper" level="WARN"/>
<logger name="org.reflections" level="WARN"/>
<logger name="org.I0Itec.zkclient" level="WARN"/>
<logger name="com.typesafe.sslconfig.ssl.DisabledComplainingHostnameVerifier" level="ERROR"/>
<root level="INFO">...</root>
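As an example of fine-grained control, to capture verbose network-level output from the Kafka client while troubleshooting connectivity, you could raise a single logger in logback.xml (this overrides the ERROR default shown above; revert it once the issue is resolved):

```xml
<!-- Temporarily verbose; revert to ERROR once done debugging -->
<logger name="org.apache.kafka.clients.NetworkClient" level="DEBUG"/>
```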

Log Format

All the log entries are written to the output using the following pattern: %d{ISO8601} %-5p [%c{2}:%L] %m%n. You can adjust this inside logback.xml to match your organization’s defaults.

Log Location

By default Lenses logs both to stdout and to files inside the directory it runs from, under logs/. This may also be configured inside logback.xml.

The stdout output can be integrated with any log collection infrastructure you may have in place and is useful with containers as well. It follows the Twelve-Factor App approach to logs.

On the other hand, the file logs are separated into three files: lenses.log, lenses-warn.log and metrics.log. The first contains all logs and matches the stdout output. The second contains only messages at level WARN and above. The third contains timing metrics for Lenses operations and can be useful for debugging. If you ever need to file a bug report, we may ask for any of these files (in whole or in part) to debug your issue. Lenses takes care of log rotation for these files.

Metrics

JMX Metrics

Lenses runs on the JVM; it is possible to expose a JMX endpoint or use a java agent of your choosing, such as Prometheus’ jmx_exporter or Jolokia’s agent, to monitor it. The JMX endpoint is controlled by the lenses.jmx.port option; leave it empty to disable JMX.

The most interesting information you can get from JMX is Lenses’ JVM usage (e.g. CPU, memory, GC) and the metrics of the Kafka clients Lenses uses internally.

It is often the case with JMX that you need to tune it further for remote access. As we have seen, this is done via the LENSES_JMX_OPTS environment variable. An example of how to configure it for remote access is below; if you use it verbatim, adjust the hostname to reflect your server’s hostname.

LENSES_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.local.only=false -Djava.rmi.server.hostname=[HOSTNAME]"

Prometheus’ Agent

The Lenses Monitoring Suite is a reference architecture based on Prometheus and Grafana. As part of your Lenses subscription, you get access to resources such as templates and dashboards to assist with your implementation.

In the monitoring context, Lenses is considered a Kafka Client like any of your Kafka applications. You can use the resources provided in the monitoring suite (jmx_exporter build and configuration) to enable the Prometheus metrics endpoint via the LENSES_OPTS environment variable:

export LENSES_OPTS="-javaagent:/path/to/jmx_exporter/fastdata_agent.jar=9102:/path/to/jmx_exporter/client.yml"

If you use your own jmx_exporter build and templates, the process is the same, just substitute our files for your own.

Our docker image (landoop/lenses) sets up the Prometheus endpoint automatically. You only have to expose port 9102 to access it.

Plugins

Lenses can be extended via user-provided classes. There are four categories you can extend:

  • Serde: custom serialization and deserialization classes so you can use all Lenses functionality with your own data formats (such as protobuf)
  • LDAP Group Filter: custom plugin to query your LDAP implementation for groups your users belong to if you do not use AD or the memberOf overlay of OpenLDAP
  • UDF for the SQL Table-based Engine: User Defined Functions (UDF) can extend the Lenses SQL Table Engine with new functions
  • Custom HTTP authentication: A class that can extract (and possibly verify) user information sent via headers, so your users can authenticate to Lenses via an authentication proxy / Single Sign-On solution

Location

Lenses searches for plugins under two directories:

  • The $LENSES_HOME/plugins/ directory, where $LENSES_HOME is the Lenses installation path
  • An optional directory set by the environment variable LENSES_PLUGINS_CLASSPATH_OPTS

On startup, these two directories and any first-level subdirectories of theirs are added to the Lenses classpath (the security plugins are required to be available during startup), and they are also monitored at runtime for new jar files.
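A minimal sketch of preparing an external plugin directory, assuming a hypothetical location (any directory readable by the Lenses process works):

```shell
# Hypothetical location; created under a temp dir here so the sketch is self-contained.
PLUGIN_ROOT="$(mktemp -d)/lenses-plugins"
mkdir -p "$PLUGIN_ROOT/security" "$PLUGIN_ROOT/serde" "$PLUGIN_ROOT/udf"
# Make Lenses pick the directory up on its next startup.
export LENSES_PLUGINS_CLASSPATH_OPTS="$PLUGIN_ROOT"
```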

While any layout or a single directory may work for you, a suggested layout for the plugin directory is this:

plugins/
├── security
├── serde
└── udf

Once populated with plugins, it could look like this:

plugins/
├── security
│   └── sso_header_decoder.jar
├── serde
│   ├── protobuf_actions.jar
│   └── protobuf_clients.jar
└── udf
    ├── eu_vat.jar
    ├── reverse_geocode.jar
    └── summer_sale_discount.jar

Tip

Lenses continuously monitors the plugin directories and their first-level subdirectories (those that existed at Lenses startup) for new plugins (jars).

Custom Serde

Custom serde (serializers and deserializers) can be used to extend Lenses with support for additional message formats. Lenses has built-in support for Avro, JSON, CSV, XML and more. If your data is in a format that isn’t supported out of the box, or one that requires compiled classes (such as protobuf), you can write and compile your own serde jars and add them to Lenses. For more information about custom serde check the LSQL section.

As mentioned, custom serde can be read from the plugins directories. Before Lenses 2.2, custom serde could be read from the locations below. These locations are still supported but will be deprecated in the future. If you use them, please switch to the plugins directories.

  • $LENSES_HOME/serde
  • $LENSES_SERDE_CLASSPATH_OPTS if set

The plugins (and serde) directories are continuously monitored for new jar files. Once a serde’s libraries are dropped in, the new format should be visible in Lenses within a few seconds.

Processors

To use custom serde with Lenses SQL Processors, the custom serde should also be added to the Lenses SQL execution engine. If the engine is set to IN_PROC, the default mode, no additional action is required. If it is set to CONNECT, the serde jars should be added to the connector directory, together with the default libraries. If it is set to KUBERNETES, a custom processor image should be created with the custom serde (see Custom Serdes for Kubernetes SQL).

To add custom serde to the Lenses docker see Lenses docker plugins.

Options List Reference

Config Description Required Type Default
lenses.ip
Bind HTTP at the given endpoint.
Used in conjunction with lenses.port
no string 0.0.0.0
lenses.port
The HTTP port the HTTP server listens
for connections: serves UI, Rest and WS APIs
no int 9991
lenses.jmx.port
The port to bind an JMX agent to
enable JVM monitoring
no int 9992
lenses.license.file
The full path to the license file
yes string license.json
lenses.secret.file
The full path to security.conf containing security
credentials read more
yes string security.conf
lenses.storage.directory
The full path to the directory where Lenses
stores some of its state
no string null
lenses.topics.audits
Topic to store system auditing
information: keep track of WHO did WHAT and WHEN.
When a topic, config, or connector is created, updated,
or deleted, an audit message is stored.
*We advise against changing the default
name or deleting the topic*
yes string _kafka_lenses_audits
lenses.topics.metrics
Topic to store stream processor
metrics. When your stateless stream processors are
running in Kubernetes or Kafka Connect, this
topic collects health checks and
performance metrics.
*We advise against changing the default
name or deleting the topic*
yes string _kafka_lenses_metrics
lenses.topics.cluster
Topic to store broker details.
Infrastructure information is used to determine
config changes, failures, and nodes added to or
removed from the cluster.
*We advise against changing the default
name or deleting the topic*
yes string _kafka_lenses_cluster
lenses.topics.profiles
Topic to store user preferences:
bookmark your most used topics, connectors, or
SQL processors.
*We advise against changing the default
name or deleting the topic*
yes string _kafka_lenses_profiles
lenses.topics.processors
Topic to store the SQL processors’ details.
*We advise against changing the default
name or deleting the topic*
yes string _kafka_lenses_processors
lenses.topics.alerts.storage
Topic to store the alerts raised.
*We advise against changing the default
name or deleting the topic*
yes string _kafka_lenses_alerts
lenses.topics.alerts.settings
Topic to store the alerts configurations.
*We advise against changing the default
name or deleting the topic*
yes string _kafka_lenses_alerts_settings
lenses.topics.lsql.storage
Topic to store all data access SQL queries:
know WHO accessed WHAT data and WHEN.
*We advise against changing the default
name or deleting the topic*
yes string _kafka_lenses_lsql_storage
lenses.topics.external.topology
Topic where external applications
publish their topology.
yes string __topology
lenses.topics.external.metrics
Topic where external applications
publish their topology metrics.
yes string __topology__metrics
lenses.kafka.brokers
A list of host/port pairs to
use for establishing the initial connection to the
Kafka cluster. Add just a few broker addresses
here and Lenses will bootstrap and discover the
full cluster membership (which may change dynamically).
This list should be in the form
"host1:port1,host2:port2,host3:port3"
yes string PLAINTEXT://localhost:9092
lenses.jmx.broker.port
Required when lenses.zookeeper.hosts
has not been set or cannot be set, which is usually the case with cloud
deployments. The value is the port at which to open a JMX connection to each broker.
Typically all your brokers use the same port for the JMX endpoint.
This will enable the best user experience in Lenses.
no int null
lenses.jmx.brokers
Used when lenses.zookeeper.hosts
has not been set or cannot be set, and not all brokers share
the same JMX port (if they do, use the lenses.jmx.broker.port
option instead). The value is a list of broker-id/JMX-port pairs to
use for establishing the JMX connection to each broker.
This will enable the best user experience in Lenses.
[{ id:"broker1 id", port:"broker1 JMX port" },
{ id:"broker2 id", port:"broker2 JMX port" },...]
yes string []
lenses.zookeeper.hosts
Provide all the available Zookeeper nodes details.
For every ZooKeeper node specify the
connection url (host:port) and if JMX is
enabled the JMX (host:port).
The configuration should be
[{url:"hostname1:port1", jmx:"hostname1:port2"}]
yes string
[{url: "localhost:2181", jmx: "localhost:11991"}]
lenses.zookeeper.chroot
You can add your znode (chroot) path if
you are using one. Please do not add
leading or trailing slashes. For example, if you use
the zookeeper chroot /kafka for
your Kafka cluster, set this value to kafka
no string
lenses.zookeeper.security.enabled
Enables secured connection to your Zookeeper.
The default value is false.
Please read about this setting before enabling it.
no boolean false
lenses.schema.registry.urls
Provide all available Schema Registry node details or list
the load balancer address if one is used. For every instance
specify the connection url and if
JMX is enabled the JMX (host:port)
yes string [{url:"http://localhost:8081", jmx:"localhost:10081"}]
lenses.schema.registry.kerberos
Set to true if the schema registry
is deployed with kerberos authentication
no boolean false
lenses.schema.registry.keytab
The location of the keytab if connecting
to a kerberized schema registry
no string null
lenses.schema.registry.principal
The service principal of the above keytab
no string null
lenses.connect.clusters
Provide all available Kafka Connect clusters.
For each cluster give a name, list the 3 backing topics
and provide workers connection details (host:port) and
JMX endpoints if enabled and on Kafka 1.0.0
no array
[{name: "dev", urls: [{url:"http://localhost:8083",
jmx:"localhost:11100"}], statuses: "connect-statuses",
configs: "connect-configs", offsets: "connect-offsets" }]
lenses.alert.manager.endpoints
Comma separated Alert Manager endpoints.
If provided, Lenses will push raised
alerts to the downstream notification gateway.
The configuration should be
"http://host1:port1"
no string  
lenses.alert.manager.source
How to identify the source of an Alert
in Alert Manager. Default is Lenses but you might
want to override to UAT for example
no string Lenses
lenses.alert.manager.generator.url
A unique URL identifying the creator of this alert.
Default is http://lenses but you might
want to override to http://<my_instance_url> for example
no string http://lenses
lenses.grafana
If using Grafana, provide the Url location.
The configuration should be
"http://grafana-host:port"
no string  
lenses.sql.max.bytes
Used when reading data from a Kafka topic.
This is the maximum data size in bytes to return
from an LSQL query. If the query brings more
data than this limit, any records received after
the limit is reached are discarded.
This can be overridden
in the LSQL query.
yes long 20971520 (20MB)
lenses.sql.max.time
Used when reading data from a
Kafka topic. This is the time in milliseconds the
query is allowed to run. If the time is exhausted,
it returns the records found so far.
This can be overridden in the
LSQL query.
yes int 3600000 (1h)
lenses.sql.sample.default
Number of messages to take in every
sampling attempt
no int 2
lenses.sql.sample.window
How frequently to sample a topic
for new messages when tailing it
no int 200
lenses.metrics.workers
Number of workers to distribute the load
of querying JMX endpoints and collecting metrics
no int 16
lenses.offset.workers
Number of workers to distribute the
load of querying topic offsets
no int 5
lenses.sql.execution.mode
The SQL execution mode, IN_PROC
or CONNECT or KUBERNETES
no string IN_PROC
lenses.sql.state.dir
Directory location to store the state
of KStreams. If using CONNECT mode, this folder
must already exist on each Kafka
Connect worker
no string logs/lenses-sql-kstream-state
lenses.sql.monitor.frequency
How frequently SQL processors
emit health check and performance metrics to
lenses.topics.metrics
no int 10000
lenses.kubernetes.processor.image.name
The docker/container repository url
and name of the Lenses SQL runner
no string eu.gcr.io/lenses-container-registry/lenses-sql-processor
lenses.kubernetes.processor.image.tag The Lenses SQL runner image tag no string 2.1
lenses.kubernetes.config.file The location of the kubectl config file no string /home/lenses/.kube/config
lenses.kubernetes.service.account
The service account to deploy with.
This account should be able to pull images
from lenses.kubernetes.processor.image.name
no string default
lenses.kubernetes.pull.policy
The pull policy for Kubernetes containers:
IfNotPresent or Always
no string IfNotPresent
lenses.kubernetes.runner.mem.limit The memory limit applied to the Container no string 768Mi
lenses.kubernetes.runner.mem.request The memory requested for the Container no string 512Mi
lenses.kubernetes.runner.java.opts Advanced JVM and GC memory tuning parameters no string
-Xms256m -Xmx512m
-XX:MaxPermSize=128m -XX:MaxNewSize=128m
-XX:+UseG1GC -XX:MaxGCPauseMillis=20
-XX:InitiatingHeapOccupancyPercent=35
-XX:+DisableExplicitGC -Djava.awt.headless=true
lenses.interval.summary
The interval (in msec) to check for new topics,
or topic config changes
no long 10000
lenses.interval.consumers
The interval (in msec) to read all
consumer info
no int 10000
lenses.interval.partitions.messages
The interval (in msec) to refresh
partitions info
no long 10000
lenses.interval.type.detection
The interval (in msec) to check the
topic payload type
no long 30000
lenses.interval.user.session.ms
The duration (in msec) that a
client session stays alive for.
no long 14400000 (4h)
lenses.interval.user.session.refresh
The interval (in msec) to check whether a
client session is idle and should be terminated.
no long 60000
lenses.interval.schema.registry.healthcheck
The interval (in msec) to check the
status of schema registry instances.
no long 30000
lenses.interval.topology.topics.metrics
The interval (in msec) to refresh the
topology status page.
no long 30000
lenses.interval.alert.manager.healthcheck
The interval (in msec) to check the
status of the Alert manager instances.
no long 5000
lenses.interval.alert.manager.publish
The interval (in msec) on which
unresolved alerts are published
to alert manager.
no long 30000
lenses.interval.topology.custom.app.metrics.discard.ms
The interval (in msec) after which
an already published metrics entry is considered stale.
Once this happens the record is discarded.
no long 120000
lenses.interval.jmx.refresh.zk
The interval (in msec) to get
Zookeeper JMX.
yes long 5000
lenses.interval.jmx.refresh.sr
The interval (in msec) to get
Schema Registry JMX.
yes long 5000
lenses.interval.jmx.refresh.broker The interval (in msec) to get Broker JMX. yes long 5000
lenses.interval.jmx.refresh.alert.manager
The interval (in msec) to get
Alert Manager JMX
yes long  
lenses.interval.jmx.refresh.connect The interval (in msec) to get Connect JMX yes long  
lenses.interval.jmx.refresh.brokers.in.zk
The interval (in msec) to refresh
the brokers from Zookeeper.
yes long 5000
lenses.kafka.ws.poll.ms
Max time (in msec) a consumer polls for
data on each request, on WS API request.
no int 1000
lenses.kafka.ws.buffer.size Max buffer size for WS consumer no int 10000
lenses.kafka.ws.max.poll.records
Specify the maximum number of records
returned in a single call to poll(). It will
impact how many records will be pushed at once
to the WS client.
no int 1000
lenses.kafka.ws.heartbeat.ms
The interval (in msec) to send messages to
the client to keep the TCP connection open.
no int 30000
lenses.access.control.allow.methods
Restrict the HTTP verbs allowed
to initiate a cross-origin HTTP request
no string GET,POST,PUT,DELETE,OPTIONS
lenses.access.control.allow.origin
Restrict to specific hosts cross-origin
HTTP requests.
no string
lenses.schema.registry.topics The backing topic where schemas are stored. no string _schemas
lenses.schema.registry.delete
Allows subjects to be deleted in
the Schema Registry. Default is disabled.
Requires schema-registry version 3.3.0 or later
no boolean false
lenses.allow.weak.SSL
Allow connecting to https:// services even
when self-signed certificates are used
no boolean false
lenses.telemetry.enable Enable or disable telemetry data collection no boolean true
lenses.curator.retries
The number of attempts to read the
broker metadata from Zookeeper.
no int 3
lenses.curator.initial.sleep.time.ms
The initial amount of time to wait between
retries to ZK.
no int 2000
lenses.zookeeper.max.session.ms
The max time (in msec) to wait for
the Zookeeper server to
reply for a request. The implementation requires that
the timeout be a minimum of 2 times the tickTime
(as set in the server configuration).
no int 10000
lenses.zookeeper.max.connection.ms
The duration (in msec) to wait for the Zookeeper client to
establish a new connection.
no int 10000
lenses.akka.request.timeout.ms
The maximum time (in msec) to wait for an
Akka Actor to reply.
no int 10000
lenses.kafka.control.topics List of Kafka topics to be marked as system topics no string
["connect-configs", "connect-offsets", "connect-status",
"connect-statuses", "_schemas", "__consumer_offsets",
"_kafka_lenses_", "lsql_", "__transaction_state",
"__topology", "__topology__metrics"]
lenses.alert.buffer.size
The number of most recently raised
alerts to keep in the cache.
no int 100
lenses.kafka.settings.consumer
Allow additional Kafka consumer settings
to be specified. When Lenses creates an instance
of KafkaConsumer class it will use these
properties during initialization.
no string {reconnect.backoff.ms = 1000, retry.backoff.ms = 1000}
lenses.kafka.settings.producer
Allow additional Kafka producer settings to
be specified. When Lenses creates an
instance of KafkaProducer
class it will use these properties during initialization.
no string {reconnect.backoff.ms = 1000, retry.backoff.ms = 1000}
lenses.kafka.settings.kstream
Allow additional Kafka KStreams settings
to be specified
no string  

The last three keys allow configuring the settings of Lenses’ internal consumers, producers, and kstreams. Example: lenses.kafka.settings.producer.compression.type = snappy
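For instance, a lenses.conf fragment tuning the internal clients might look like this (the specific values are illustrative, not recommendations; any valid Kafka client property can be passed this way):

```
# Illustrative values only
lenses.kafka.settings.consumer.max.partition.fetch.bytes = 1048576
lenses.kafka.settings.producer.compression.type = snappy
lenses.kafka.settings.producer.reconnect.backoff.ms = 2000
```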