Configuration

Introduction

In this section we explore the Lenses configuration: how it is laid out on disk, which options are available, which are mandatory, and specific cases such as brokers that require authentication. It is the best place to start if you are new to Lenses and have been assigned the task of setting it up. Even if you install Lenses via Docker or Helm, the same settings can be applied via environment variables and YAML files.

Lenses requires two configuration files, lenses.conf and security.conf:

  • lenses.conf
    Here we store most of the configuration options, such as the connection details for your brokers or the port Lenses uses. You have to create this file before Lenses can work. For the complete list of the configuration options please refer to Options Reference.
  • security.conf
    Here we configure the authentication module. For more information about authentication methods and authorization, refer to Security Configurations.

Our Docker image and Helm charts create these files automatically on start by reading environment variables, ConfigMaps, and secrets.

Configuration Format

The Lenses configuration format is HOCON, a superset of JSON and Java properties files. No prior experience with HOCON is required; the examples provided with the Lenses archive and throughout the documentation are all you need to set up the software. As in JSON, string values need to be quoted, while numbers, true, and false can remain unquoted. A string value that contains only alphanumeric characters may also be left unquoted.
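
To illustrate these quoting rules, here is a small sketch using options that appear later in this section:

# Strings that contain special characters (dots, slashes, colons) must be quoted
lenses.license.file = "/etc/lenses/license.json"
# Numbers and booleans can remain unquoted
lenses.port = 9991
lenses.schema.registry.delete = false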

For more information, please check the HOCON design document.

Quick Start

A typical example of lenses.conf for a Kafka cluster without authentication to the brokers looks like this:

# Set the ip:port for Lenses to bind to
lenses.ip = 0.0.0.0
lenses.port = 9991

# License file allowing connecting to up to N brokers
lenses.license.file = "/etc/lenses/license.json"

# Directory where Lenses stores its local data.
# If omitted, a directory named 'storage' is created under the current directory.
# Write access is needed and the directory must survive upgrades.
lenses.storage.directory = "/var/lib/lenses/storage"

# Set up infrastructure end-points

# The more brokers you can add here, the better
lenses.kafka.brokers = "PLAINTEXT://host1:9092,PLAINTEXT://host2:9092,PLAINTEXT://host3:9092"

# Broker JMX Port
# lenses.kafka.metrics.default.port = 9581


# Schema Registry options
# lenses.schema.registry.urls = [
#  {url: "http://host-1:8081", metrics:{url:"host-1:9582", type:"JMX"}},
#  {url: "http://host-2:8081", metrics:{url:"host-2:9582", type:"JMX"}}
# ]

# Connect options
# lenses.kafka.connect.clusters = [
#  {
#    name: "dev",
#    urls: [
#      {url:"http://host-1:8083", metrics:{url:"host-1:9584", type:"JMX"}},
#      {url:"http://host-2:8083", metrics:{url:"host-2:9584", type:"JMX"}}
#    ],
#    statuses: "connect-status",
#    configs: "connect-configs",
#    offsets: "connect-offsets"
#  }
# ]

# Processor Mode & State dir options
# lenses.sql.execution.mode = "IN_PROC"
# lenses.sql.state.dir = "/tmp/lenses-sql-kstream-state"

The next snippet gives an example of a basic security.conf that adds an admin user. For the complete reference of available security options, check out Security Configurations.

# Default user and password
lenses.security.user = "admin"
lenses.security.password = "admin"

Basic Configuration

Let’s explore the most pertinent sections of Lenses configuration.

Host and Port

During startup, Lenses binds to all available network interfaces on port 9991. To adjust these, set the ip and port options.

# Set the ip:port for Lenses to bind to
lenses.ip = 0.0.0.0
lenses.port = 9991

Enabling TLS

Lenses supports TLS termination to encrypt all HTTP connections. A Java keystore file with the private key and certificate pair is required to set up TLS. Optionally, you can tweak the protocols and ciphers offered to clients.

# Set the keystore location and passwords
lenses.ssl.keystore.location = "/path/to/keystore.jks"
lenses.ssl.keystore.password = "changeit"
lenses.ssl.key.password      = "changeit"

# Optionally you can tweak the TLS version, algorithm and ciphers
# If you skip them, the default values will be used
#lenses.ssl.enabled.protocols = "TLSv1.2"
#lenses.ssl.cipher.suites     = "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384"

The options lenses.ssl.keystore.location, lenses.ssl.keystore.password, and lenses.ssl.key.password are mandatory.

TLS Authentication

When enabling TLS, it is possible to require clients to perform mutual TLS authentication via client certificates. For the time being this type of authentication is independent of the authentication to Lenses. This means that clients will need to both authenticate via mutual TLS and via any of the available authentication methods for Lenses (basic, ldap, spnego, custom_http).

# Set the keystore location and passwords
lenses.ssl.keystore.location = "/path/to/keystore.jks"
lenses.ssl.keystore.password = "changeit"
lenses.ssl.key.password      = "changeit"

# Set a truststore, password, and enable client TLS auth
lenses.ssl.truststore.location = "/path/to/truststore.jks"
lenses.ssl.truststore.password = "changeit"
lenses.ssl.client.auth = true

License

With your Lenses subscription or trial, you receive a license file. If you don’t have a license yet, contact us here. This license file (license.json for this guide) is necessary for the application to start. Once you have uploaded it to the server that runs Lenses, update the configuration to point at it. It is better to use an absolute file path, but a relative path from the directory you run Lenses from can work as well.

# License file allowing connecting to up to N brokers
lenses.license.file="license.json"

If you run Lenses under a specific user account, make sure that this account has permission to read the license file.
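
For example, assuming Lenses runs under a (hypothetical) lenses service account, you could restrict the license file to that account:

chown lenses:lenses /etc/lenses/license.json
chmod 400 /etc/lenses/license.json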

Kafka Brokers

Setting up access to the Kafka Brokers is very important. The simplest case is when the brokers accept unauthenticated connections. In that case only the lenses.kafka.brokers setting is required; it is the same as the bootstrap servers you would set for any Kafka client. Please make sure to add at least a few of your brokers to this list and do not settle for just one, unless you have a single-broker installation.

lenses.kafka.brokers = "PLAINTEXT://host1:9092,PLAINTEXT://host2:9092"

Warning

It is important to set at least a few of your brokers here. If the brokers on this list are all down, then some parts of Lenses will fail to work properly.

Broker metrics

Lenses can take advantage of the Kafka Brokers' metrics to monitor the health of your cluster and show metrics and other information. Although not a hard requirement, allowing access to these metrics enables more functionality and a better experience in the web interface. Broker metrics may be exposed via JMX, optionally password protected, or via the Jolokia JMX-HTTP bridge using either the HTTP GET or POST mode.

In the most common cases, the metrics are exposed via JMX. If Lenses is set up with Zookeeper access, it will discover the Brokers' JMX ports automatically, without any extra configuration. If access to Zookeeper is restricted, a common occurrence with managed cloud instances, you can provide the Brokers' JMX ports manually via configuration. If all your brokers listen for JMX connections on the same port, set the default metrics port option.

lenses.kafka.metrics.default.port = 9581

If the brokers listen on different JMX ports (a setup we advise against), or if the brokers' JMX endpoint is protected, you can pair Broker IDs with hosts and ports, as below:

lenses.kafka.metrics = {
    ssl: true,         # Optional, please make sure the remote JMX certificate
                       # is accepted by the Lenses truststore
    user: "admin",     # Optional, the remote JMX user
    password: "admin", # Optional, the remote JMX password
    type: "JMX",
    port: [
      {id:BROKER_ID_1, port:9581, host:"host1"},
      {id:BROKER_ID_2, port:9581, host:"host2"}
    ]
}

In addition to JMX, Lenses supports reading broker metrics exposed via the Jolokia JMX-HTTP bridge. The Jolokia agent exposes the metrics via HTTP and it provides two sets of APIs, based on GET or POST requests. The lenses.kafka.metrics.type option may be set to JOLOKIAG for the GET-based API or to JOLOKIAP for the POST-based API.

lenses.kafka.metrics = {
    ssl: true,         # Optional, please make sure the remote JMX certificate
                       # is accepted by the Lenses truststore
    user: "admin",     # Optional, the Jolokia user if required
    password: "admin", # Optional, the Jolokia password if required
    type: "JOLOKIAP",  # 'JOLOKIAP' for the POST API, 'JOLOKIAG' for the GET API
    default.port: 19999
}

If the brokers export their metrics on different ports (for example, when a machine runs more than one Kafka Broker), then use lenses.kafka.metrics.port to define the mapping.

lenses.kafka.metrics = {
    ssl: true,         # Optional, please make sure the remote JMX certificate
                       # is accepted by the Lenses truststore
    user: "admin",     # Optional, the Jolokia user if required
    password: "admin", # Optional, the Jolokia password if required
    type: "JOLOKIAP",  # 'JOLOKIAP' for the POST API, 'JOLOKIAG' for the GET API
    port: [
      {id:BROKER_ID_1, port:9581, host:"host1"},
      {id:BROKER_ID_2, port:9581, host:"host2"}
    ]
}

A special usecase is Amazon’s Managed Kafka Service (MSK), which exposes some metrics over a Prometheus endpoint. This functionality requires Lenses 3.0.3 or later and can be configured as shown below.

lenses.kafka.metrics = {
    type: "AWS",
    port: [
      {id: <broker-id-1>,  url:"http://b-1.<broker.1.endpoint>:11001/metrics"},
      {id: <broker-id-2>,  url:"http://b-2.<broker.2.endpoint>:11001/metrics"},
    ]
}

Broker Authentication

Connecting to authenticated brokers is a bit more involved, but it works like any Kafka client. If you have clients that already use authentication, you'll have Lenses up and running in no time. Kafka Brokers may be set up for authentication via the Simple Authentication and Security Layer (SASL), SSL/TLS, or both. SASL most commonly uses GSSAPI (Kerberos); however, later versions of Kafka added more SASL mechanisms, such as SCRAM.

When configuring the Kafka client of Lenses, it is important to remember that there are three modules that require these settings: the main application’s consumer client, the main application’s producer client and the Lenses SQL in-Kubernetes processors’ Kafka client (both producer and consumer). If you do not use LSQL in Kubernetes, you can skip the related configuration sections.
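
As a rough sketch of how the same Kafka client option maps to the different prefixes (the sections below show real, complete examples):

# Main Lenses consumer and producer clients
lenses.kafka.settings.client.security.protocol = SASL_SSL

# Lenses SQL processors running in Kubernetes
lenses.kubernetes.processor.kafka.settings.security.protocol = SASL_SSL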

Let’s have a look the various authentication scenarios.

SSL

If your Kafka cluster uses TLS certificates for authentication, set the broker protocol to SSL and pass any keystore and truststore configuration to the consumer and producer settings by prefixing the configuration keys with lenses.kafka.settings.client.:

lenses.kafka.brokers = "SSL://host1:9093,SSL://host2:9093"

lenses.kafka.settings.client.security.protocol       = SSL
lenses.kafka.settings.client.ssl.truststore.location = "/var/private/ssl/client.truststore.jks"
lenses.kafka.settings.client.ssl.truststore.password = "changeit"
lenses.kafka.settings.client.ssl.keystore.location   = "/var/private/ssl/client.keystore.jks"
lenses.kafka.settings.client.ssl.keystore.password   = "changeit"
lenses.kafka.settings.client.ssl.key.password        = "changeit"

If you are using TLS certificates only for encryption of data on the wire, you can omit the keystore settings:

lenses.kafka.brokers = "SSL://host1:9093,SSL://host2:9093"

lenses.kafka.settings.client.security.protocol        = SSL
lenses.kafka.settings.client.ssl.truststore.location  = "/var/private/ssl/client.truststore.jks"
lenses.kafka.settings.client.ssl.truststore.password  = "changeit"

If your brokers’ CA certificate is embedded in the system-wide truststore, you can omit the truststore settings.

Important

If you use Lenses SQL processors in Kafka Connect, you have to make sure that your keystore and truststore files exist on the Connect worker nodes at the locations dictated by lenses.kafka.settings.client.ssl.truststore.location and lenses.kafka.settings.client.ssl.keystore.location.

SASL/GSSAPI

For Lenses to access Kafka in an environment set up with Kerberos (SASL), you need to provide a JAAS file as in the example below. If your Kafka cluster is set up with an authorizer (ACLs), it is advised to use the same principal as the brokers, so Lenses has superuser permissions.

KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/path/to/keytab-file"
  storeKey=true
  useTicketCache=false
  serviceName="kafka"
  principal="principal@MYREALM";
};

/*
  Optional section for authentication to zookeeper
  Please also remember to set lenses.zookeeper.security.enabled=true
*/
Client {
  com.sun.security.auth.module.Krb5LoginModule required
   useKeyTab=true
   keyTab="/path/to/keytab-file"
   storeKey=true
   useTicketCache=false
   principal="principal@MYREALM";
};

Once the JAAS file is in place, add it to LENSES_OPTS before starting Lenses:

export LENSES_OPTS="-Djava.security.auth.login.config=/opt/lenses/jaas.conf"

Lenses SQL processors need their own JAAS file. If you use the same keytab for both Lenses and the processors, you can copy your jaas.conf file and only replace the paths to the keytab. For the Kubernetes processors, the keytab is always mounted under /mnt/secrets/kafka/keytab:

KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/mnt/secrets/kafka/keytab"
  storeKey=true
  useTicketCache=false
  serviceName="kafka"
  principal="principal@MYREALM";
};

/*
  Optional section for authentication to zookeeper
  Please also remember to set lenses.zookeeper.security.enabled=true
*/
Client {
  com.sun.security.auth.module.Krb5LoginModule required
   useKeyTab=true
   keyTab="/mnt/secrets/kafka/keytab"
   storeKey=true
   useTicketCache=false
   principal="principal@MYREALM";
};

Last, set the security protocol and Kubernetes settings (if required) in the configuration file:

lenses.kafka.brokers = "SASL_PLAINTEXT://host1:9094,SASL_PLAINTEXT://host2:9094"

lenses.kafka.settings.client.security.protocol             = SASL_PLAINTEXT
lenses.kubernetes.processor.kafka.settings.security.protocol = SASL_PLAINTEXT

lenses.kubernetes.processor.jaas                  = "path/to/jaas-processors.conf"
lenses.kubernetes.processor.kafka.settings.keytab = "path/to/processor.keytab"
lenses.kubernetes.processor.krb5                  = "/etc/krb5.conf"

If you use Lenses SQL processors in Kafka Connect, then you have to configure your Connect workers with Kerberos as well. This will probably already be the case, but if not, add your JAAS file and keytab to the Connect worker nodes and export the Kerberos configuration in KAFKA_OPTS. Please note that you need to provide your JAAS file via KAFKA_OPTS and not as a Kafka Connect configuration:

KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/jaas.conf"

Note

A system configured to work with Kerberos usually provides a system-wide Kerberos configuration file (krb5.conf) that points to the location of the KDC and includes other configuration options necessary to authenticate. If your system is missing this file, please contact your administrator. If you can’t set the system-wide configuration, you can provide a custom krb5.conf via LENSES_OPTS:

export LENSES_OPTS="-Djava.security.krb5.conf=/path/to/krb5.conf"

By default, the connection to Zookeeper remains unauthenticated. This only affects the Quota entries, which are written without any Zookeeper ACLs to protect them. The option lenses.zookeeper.security.enabled may be used to change this behavior, but in that case it is important to use the brokers' principal for Lenses. If Lenses is configured with a different principal, the brokers will not be able to manipulate the Quota entries and will fail to start. Please contact our support if you need help with this feature.
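
If you decide to enable authenticated access to Zookeeper (using the brokers' principal, as discussed above), the switch is a single flag in lenses.conf:

lenses.zookeeper.security.enabled = true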

SASL_SSL

In this security protocol, Kafka uses a SASL method for authentication and TLS certificates for encryption of data on the wire. As such the configuration is a combination of the SSL/TLS and SASL configurations.

Please provide Lenses with a JAAS file as described in the previous section and add it to LENSES_OPTS:

export LENSES_OPTS="-Djava.security.auth.login.config=/opt/lenses/jaas.conf"

Set Lenses to use SASL_SSL for its producer and consumer part. If your CA’s certificate isn’t part of the system-wide truststore, please provide Lenses with a truststore as well:

lenses.kafka.brokers = "SASL_SSL://host1:9096,SASL_SSL://host2:9096"

lenses.kafka.settings.client.security.protocol       = SASL_SSL
lenses.kafka.settings.client.ssl.truststore.location = "/var/private/ssl/client.truststore.jks"
lenses.kafka.settings.client.ssl.truststore.password = "changeit"

For Lenses SQL processors in Kafka Connect, you will have to make sure the truststore is located at the same path as lenses.kafka.settings.client.ssl.truststore.location (unless you use the default truststore) and, of course, set up Kafka Connect with Kerberos.

SASL/SCRAM

For Lenses to access Kafka in an environment set up with SCRAM authentication (SASL/SCRAM), you need to provide Lenses with a JAAS file as in the example below. If Lenses is used with an ACL-enabled cluster, it is advised to use the same principal as the brokers, so it has superuser permissions.

KafkaClient {
  org.apache.kafka.common.security.scram.ScramLoginModule required
  username="[USERNAME]"
  password="[PASSWORD]";
};

Once the JAAS file is in place, add it to LENSES_OPTS before starting Lenses:

export LENSES_OPTS="-Djava.security.auth.login.config=/opt/lenses/jaas.conf"

Last, set the security protocol and mechanism in the configuration file:

lenses.kafka.brokers = "SASL_PLAINTEXT://host1:9092,SASL_PLAINTEXT://host2:9092"

lenses.kafka.settings.client.security.protocol=SASL_PLAINTEXT
lenses.kafka.settings.client.sasl.mechanism=SCRAM-SHA-256

An alternative to the jaas.conf file is to configure JAAS within the Lenses configuration (lenses.conf). Since the configuration format is HOCON, multiline strings should be enclosed within triple quotes:

lenses.kafka.settings.client.sasl.jaas.config="""
  org.apache.kafka.common.security.scram.ScramLoginModule required
    username="[USERNAME]"
    password="[PASSWORD]";"""

Please note that SASL/SCRAM is currently not officially supported for Lenses SQL processors in either Connect or Kubernetes mode, although it may work.

Zookeeper

Lenses can optionally use Zookeeper to auto-detect the brokers' JMX ports; it works normally without it. Zookeeper is required only if you want to manage quotas.

lenses.zookeeper.hosts = [
  {
    url: "ZK_HOST_1:2181"
  },
  {
    url: "ZK_HOST_2:2181"
  }
]

If your cluster is under a Zookeeper chroot, you must set that as well.

# The Kafka Brokers' Zookeeper chroot if used
lenses.zookeeper.chroot = ""

Optionally, you can enable JMX (or Jolokia) metrics, which are used to display node details in the Services screen. The configuration looks as follows:

lenses.zookeeper.hosts = [
  {
    url: "ZK_HOST_1:2181",
    metrics: {
      ssl: true,         # Optional, please make sure the remote JMX/HTTP
                         # certificate is accepted by the Lenses truststore
      user: "admin",     # Optional, the remote JMX/HTTP user
      password: "admin", # Optional, the remote JMX/HTTP password
      type: "JMX",       # One of 'JMX', 'JOLOKIAP' (POST), 'JOLOKIAG' (GET)
      url: "ZK_HOST_1:9585"
    }
  },
  {
    url: "ZK_HOST_2:2181",
    metrics: {
      ssl: true,         # Optional, please make sure the remote JMX/HTTP
                         # certificate is accepted by the Lenses truststore
      user: "admin",     # Optional, the remote JMX/HTTP user
      password: "admin", # Optional, the remote JMX/HTTP password
      type: "JMX",       # One of 'JMX', 'JOLOKIAP' (POST), 'JOLOKIAG' (GET)
      url: "ZK_HOST_2:9585"
    }
  }
]

If your Zookeeper is protected with Kerberos, please also check the Zookeeper security section.

Schema Registry

If you use the AVRO format to serialize records stored in Kafka, then most probably you use a Schema Registry implementation. The most common ones come from Confluent and HortonWorks. Lenses supports both.

Confluent

In the simplest scenario, you only need to provide a list of your Schema Registry servers. Lenses also monitors the health of these nodes; for this check to work properly, the list must include all of your Schema Registry servers.

lenses.schema.registry.urls = [
  {
    url: "http://SR_HOST_1:8081"
  },
  {
    url: "http://SR_HOST_1:8081"
  }
]

The Confluent Registry allows schema deletion, so it is possible to enable access to this functionality.

lenses.schema.registry.delete = false

Note

It is necessary to add the scheme (http:// or https://) in front of the Schema Registry address.

Optionally, you can enable JMX (or Jolokia) metrics, which are used to display node details in the Services screen. The configuration looks as follows:

lenses.schema.registry.urls = [
  {
    url: "http://SR_HOST_1:8081",
    metrics: {            # Optional section
       ssl: true,         # Optional, please make sure the remote JMX/HTTP
                          # certificate is accepted by the Lenses truststore
       user: "admin",     # Optional, the remote JMX/HTTP user
       password: "admin", # Optional, the remote JMX/HTTP password
       type: "JMX",       # One of 'JMX', 'JOLOKIAP' (POST), 'JOLOKIAG' (GET)
       url: "SR_HOST_1:9583"
    }
  },
  {
    url: "http://SR_HOST_1:8081",
     metrics: {           # Optional section
       ssl: true,         # Optional, please make the remote JMX/HTTP
                          # certificate is accepted by the Lenses truststore
       user: "admin",     # Optional, the remote JMX/HTTP user
       password: "admin", # Optional, the remote JMX/HTTP password
       type: "JMX",       # One of 'JMX', 'JOLOKIAP' (POST), 'JOLOKIAG' (GET)
       url: "SR_HOST_1:9583"
     }
  }
]

The Confluent Registry stores all schemas into a Kafka topic. Lenses consumes this topic to track changes. If the topic is left with the default name (i.e. _schemas), then no action in Lenses is required. Otherwise, please set the Schema Registry topic name.

lenses.schema.registry.topics = "_schemas"

Hortonworks

The HortonWorks Schema Registry is different in that it needs the full API path, it does not support monitoring, it does not offer schema deletion, and it uses a non-Kafka backend (such as an RDBMS), so Lenses cannot track live changes. Furthermore, not all serialization modes are compatible with Confluent's, hence the need to use the HortonWorks serde libraries.

To configure this Registry, enable the appropriate mode and provide the API path in full.

lenses.schema.registry.mode = HORTONWORKS

lenses.schema.registry.urls = [
  {url:"http://SR_HOST_1:9090/api/v1"},
  {url:"http://SR_HOST_1:9090/api/v1"}
]

Authentication

Depending on the Schema Registry mode, Lenses internally uses the AVRO serde classes provided by either Confluent or Hortonworks. As such, the authentication configuration reflects the options of these classes.

There are three places in Lenses where the AVRO configuration is used:

  • The Schemas management screen, where you can view and manage your schemas
  • The Lenses application, where it is used by the SQL engine for data browsing, in the in-process and Connect execution modes of the streaming SQL engine, and in the SQL Studio
  • The streaming SQL engine in Kubernetes, where you can run complex queries and perform stream processing

Each of these three modules needs to be configured for authentication to the Schema Registry. If you do not use Lenses SQL processors in Kubernetes, you may skip the corresponding settings.

BASIC

The Confluent Schema Registry offers support for Basic authentication. To use it, set these options in addition to the rest of your Registry-specific configuration.

lenses.schema.registry.auth = "USER_INFO"
lenses.schema.registry.username = "USERNAME"
lenses.schema.registry.password = "PASSWORD"

lenses.kafka.settings.client.basic.auth.credentials.source = USER_INFO
lenses.kafka.settings.client.basic.auth.user.info = "USERNAME:PASSWORD"

lenses.kubernetes.processor.kafka.settings.basic.auth.credentials.source = USER_INFO
lenses.kubernetes.processor.kafka.settings.basic.auth.user.info = "USERNAME:PASSWORD"
lenses.kubernetes.processor.schema.registry.settings.basic.auth.credentials.source = USER_INFO
lenses.kubernetes.processor.schema.registry.settings.basic.auth.user.info = "USERNAME:PASSWORD"

Warning

Please note that although you could add the BASIC authentication username and password to the Schema Registry URL, it is a bad idea, as Lenses displays this URL in a few places.

Kerberos

The HortonWorks Schema Registry offers support for Kerberos (SPNEGO) authentication. The setup is more involved than BASIC auth.

As with the brokers, a JAAS file is needed to set up Lenses for Kerberos. If you already have a JAAS file in place for connecting to the brokers, then instead of creating a new one, append the snippet below to your current file.

RegistryClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/path/to/keytab-file"
  storeKey=true
  useTicketCache=false
  principal="principal@MYREALM";
};

Once the JAAS file is in place, add it to LENSES_OPTS before starting Lenses:

export LENSES_OPTS="-Djava.security.auth.login.config=/opt/lenses/jaas.conf"

Lenses SQL processors, when running in Kubernetes, need their own JAAS file. If you use the same keytab for both Lenses and the processors, you can copy your jaas.conf file and only replace the paths to the keytab. For the Kubernetes processors, the Schema Registry keytab is always mounted under /mnt/secrets/registry/keytab:

RegistryClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/mnt/secrets/registry/keytab"
  storeKey=true
  useTicketCache=false
  principal="principal@MYREALM";
};

If you are running the SQL processors in Kafka Connect, then you have to configure your Connect workers with Kerberos as well. This will probably already be the case, but if not, add your JAAS file and keytab to the Connect worker nodes and export the Kerberos configuration in KAFKA_OPTS:

KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/jaas.conf"

Note

A system configured to work with Kerberos usually provides a system-wide Kerberos configuration file (krb5.conf) that points to the location of the KDC and includes other configuration options necessary to authenticate. If your system is missing this file, please contact your administrator. If you can’t set the system-wide configuration, you can provide a custom krb5.conf via LENSES_OPTS:

export LENSES_OPTS="-Djava.security.krb5.conf=/path/to/krb5.conf"

Once your JAAS files are ready, proceed to configure lenses.conf for access to the Kerberized Schema Registry.

# Enable Kerberos Authentication for schema registry
lenses.schema.registry.kerberos=true

# Define the Schema Registry principal. Usually principals for HTTP services
# are in the form 'HTTP/HOSTNAME@REALM'. The HW Registry client expects it
# written in the form 'http@HOSTNAME'.
lenses.schema.registry.principal="http@<REGISTRY-HOSTNAME>"

# Define the principal used by Lenses to access the registry
lenses.schema.registry.service.name="principal@MYREALM"

# Define the keytab
lenses.schema.registry.keytab="path/to/keytab"

# Options for Lenses SQL processors in Kubernetes. Please note that if you use
# SASL_PLAINTEXT or SASL_SSL for the Kafka Brokers, you have already set the
# first two options. You should merge the JAAS files, whilst the krb5.conf is
# a global configuration file.
lenses.kubernetes.processor.jaas="path/to/jaas.conf"
lenses.kubernetes.processor.krb5="/etc/krb5.conf"
lenses.kubernetes.processor.schema.registry.keytab="path/to/keytab"

TLS Client Authentication

Confluent’s Schema Registry supports authentication via TLS client certificates. The client library of Schema Registry may only be configured for TLS authentication via the JVM configuration [1].

This authentication mode is not officially supported by Lenses. The instructions below are provided for users who want to use it despite not being covered by their support contract.

To set up Lenses for TLS auth to the Schema Registry, you need to use the LENSES_OPTS variable to configure the JVM options:

export LENSES_OPTS="-Djavax.net.ssl.keyStore=/path/to/keystore/keystore.jks -Djavax.net.ssl.keyStorePassword=changeit -Djavax.net.ssl.trustStore=/path/to/truststore.jks -Djavax.net.ssl.trustStorePassword=changeit"

Please note that when setting up TLS options via JVM’s javax.net.ssl you must use the same password for both your keystore and the private key stored inside it. Javax does not support a separate option for the key password.

For Lenses SQL Processors, you will have to create a new docker image where you add all the needed files and set the LENSES_SQL_RUNNERS_OPTS environment variable. This setup has not been thoroughly tested. An example Dockerfile for a custom SQL Processor:

FROM gcr.io/lenses-container-registry/lenses-sql-processor:3.2

ADD keystore.jks /path/to/keystore.jks
ADD truststore.jks /path/to/truststore.jks
ENV LENSES_SQL_RUNNERS_OPTS="-Djavax.net.ssl.keyStore=/path/to/keystore.jks -Djavax.net.ssl.keyStorePassword=changeit -Djavax.net.ssl.trustStore=/path/to/truststore.jks -Djavax.net.ssl.trustStorePassword=changeit"

[1] Schema Registry's client library, starting with version 5.5.0, allows setting up TLS authentication via configuration options passed by the Kafka client. Once this version is deemed stable for production, it will be used in Lenses.

Kafka Connect

You can add your Kafka Connect clusters to Lenses so you can manage your connectors (create, remove, update), monitor them (ephemeral metrics), spot issues (e.g. a failed task), view them in the topology view, and, of course, scale Lenses SQL processors. To set this up, you need to provide a list of Kafka Connect nodes (workers) and the topics they use for storing their configuration, state, and source offsets. Additionally, if you want to monitor your nodes and get alerts when a worker is offline, the list of workers should be exhaustive (include all your workers).

lenses.kafka.connect.clusters = [
    {
      name: "dev",
      urls: [
        {
          url:"http://CONNECT_HOST_1:8083"
        },
        {
          url:"http://CONNECT_HOST_2:8083"
        }
      ],
      statuses: "connect-status",
      configs: "connect-configs",
      offsets: "connect-offsets"
    }
  ]

Note

The cluster name cannot contain dots (.) or dashes (-).

Warning

If Lenses fails to find a Connect Cluster that is defined in lenses.conf during startup, it will exit immediately.

Optionally, you can enable JMX (or Jolokia) metrics, which are used to provide additional information about your Connect clusters in the Services screen and per-connector metrics in the Connectors and Topology screens. The configuration looks as follows:

lenses.kafka.connect.clusters = [
    {
      name: "dev",
      urls: [
        {
          url:"http://CONNECT_HOST_1:8083",
          metrics: {           # Optional section
            ssl: true,         # Optional, please make sure the remote JMX/HTTP
                               # certificate is accepted by the Lenses truststore
            user: "admin",     # Optional, the remote JMX/HTTP user
            password: "admin", # Optional, the remote JMX/HTTP password
            type: "JMX",       # One of 'JMX', 'JOLOKIAP' (POST), 'JOLOKIAG' (GET)
            url: "CONNECT_HOST_1:9584"
          }
        },
        {
          url:"http://CONNECT_HOST_2:8083",
          metrics: {           # Optional section
            ssl: true,         # Optional, please make sure the remote JMX/HTTP
                               # certificate is accepted by the Lenses truststore
            user: "admin",     # Optional, the remote JMX/HTTP user
            password: "admin", # Optional, the remote JMX/HTTP password
            type: "JMX",       # One of 'JMX', 'JOLOKIAP' (POST), 'JOLOKIAG' (GET)
            url: "CONNECT_HOST_2:9584"
          }
        }
      ],
      statuses: "connect-status",
      configs: "connect-configs",
      offsets: "connect-offsets"
    }
  ]

Authentication

If your Connect cluster requires authentication, additional configuration is needed. Alternatively, you can protect your Connect cluster behind a firewall and let your users manage it only via Lenses.

BASIC

To configure BASIC authentication to the Connect worker nodes, add the credentials to the cluster's entry in lenses.kafka.connect.clusters:

lenses.kafka.connect.clusters = [
  {
    name: "dev",
    username: "USERNAME",
    password: "PASSWORD",
    auth: "USER_INFO",
    urls: [
      {
        url:"http://CONNECT_HOST_1:8083",
        metrics: {           # Optional section
          ssl: true,         # Optional, please make sure the remote JMX/HTTP
                             # certificate is accepted by the Lenses truststore
          user: "admin",     # Optional, the remote JMX/HTTP user
          password: "admin", # Optional, the remote JMX/HTTP password
          type: "JMX",       # One of 'JMX', 'JOLOKIAP' (POST), 'JOLOKIAG' (GET)
          url: "CONNECT_HOST_1:9584"
        }
      },
      {
        url:"http://CONNECT_HOST_2:8083",
        metrics: {           # Optional section
          ssl: true,         # Optional, please make sure the remote JMX/HTTP
                             # certificate is accepted by the Lenses truststore
          user: "admin",     # Optional, the remote JMX/HTTP user
          password: "admin", # Optional, the remote JMX/HTTP password
          type: "JMX",       # One of 'JMX', 'JOLOKIAP' (POST), 'JOLOKIAG' (GET)
          url: "CONNECT_HOST_2:9584"
        }
      }
    ],
    statuses: "connect-status",
    configs: "connect-configs",
    offsets: "connect-offsets"
  }
]

Warning

Please note that although you could add the BASIC authentication username and password to the Connect URL, it is a bad idea, as Lenses displays this URL in a few places.

TLS Client Authentication

To configure TLS Client Authentication to the Connect worker nodes, add the options below to lenses.conf. Please note that currently Lenses expects to be able to authenticate to all your Connect clusters (that require authentication) with the same TLS keystore and certificate.

lenses.kafka.connect.ssl.keystore.location   = "/path/to/keystore.jks"
lenses.kafka.connect.ssl.keystore.password   = "changeit"
lenses.kafka.connect.ssl.key.password        = "changeit"
lenses.kafka.connect.ssl.truststore.location = "/path/to/truststore.jks"
lenses.kafka.connect.ssl.truststore.password = "changeit"

Lenses Storage

Persistent data is stored by default under the storage/ directory inside the directory Lenses runs from. It is strongly advised to set explicitly where persistent data will be stored, make sure the Lenses process has permission to read and write files in this directory, and put an upgrade and backup policy in place.

To configure the storage directory, set this option:

lenses.storage.directory = "/path/to/persistent/data/directory"

Lenses SQL Processors

Lenses SQL Processors are a great way to do stream processing using the Lenses SQL dialect. While configuring Lenses for access to the Kafka Brokers and Schema Registry, we already saw part of the processors' configuration. Besides the Kafka client module of the processors, there are a few more settings to adjust.

IN_PROC

Out of the box, Lenses SQL Processors run in the same JVM process as Lenses. We call this mode IN_PROC. It is a convenient setup to get a feel for the Processors' functionality without any hassle. The only thing you need to set is a directory which Lenses can use to store some ephemeral data.

lenses.sql.state.dir = "/tmp/lenses-sql-kstream-state"

CONNECT

Connect mode lets you run Lenses SQL Processors within your Kafka Connect cluster(s). A Lenses SQL Connector is provided with your Lenses subscription (not part of the trial), which you have to add to your Connect cluster like any other connector. For more details on how to add the connector, see the Lenses SQL Connector deployment.

Once you load the connector to one —or more— of your Connect clusters, Lenses can automatically detect it. Then you only have to set Lenses to use CONNECT mode and also set a directory which the connector (not Lenses) can use to write ephemeral data.

lenses.sql.execution.mode = CONNECT
lenses.sql.state.dir = "/tmp/lenses-sql-kstream-state"

KUBERNETES

Kubernetes mode is the most scalable one for Lenses SQL Processors. Just type your streaming SQL query and fire up as many pods as you like in your Kubernetes cluster. The first step to enable this mode is to add the Lenses Container Registry key to your Kubernetes cluster. For more information on how to do this, check how to set up Lenses SQL in Kubernetes.

Once your cluster is ready, you only have to provide Lenses with a Kubernetes configuration file, so it can access the cluster, and the service account to use.

lenses.sql.execution.mode = KUBERNETES
lenses.kubernetes.config.file = "/home/lenses/.kube/config"
lenses.kubernetes.service.account = "default"

Alerting Plugins

Lenses continuously monitors your cluster and informs you about service degradation, consumer group lag, important events, and various operational issues. You can configure which notifications and alerts are enabled via the user interface. Once an event triggers an alert or notification, it can be forwarded to external systems via a plugin interface.

There are two alerting plugins that come by default: AlertManager and Slack. To implement your own plugin, see Alerts Integration With 3rd Party Providers.

The configuration of the alerting plugins is handled via the lenses.alert.plugins array, passing the plugin class and its configuration.

lenses.alert.plugins= [
  {
    class = "io.lenses.alerts.plugin.PluginA",
    config = {
      key1 = "value1",
      key2 = "value2",
      ...
    }
  },
  {
    class = "io.lenses.alerts.plugin.PluginB",
    config = {
      key1 = "value1",
      key2 = "value2",
      ...
    }
  },
  ...
]

See below for example configuration for the built-in plugins.

AlertManager

The AlertManager Plugin forwards actionable events (alerts) to an Alertmanager (AM) installation, so they can be deduplicated, grouped and routed accordingly. Check out Alertmanager Integration for examples on how to configure your AlertManager for Lenses’ alerts.

Note

Not all alerts are actionable and thus forwarded to AlertManager. AM expects an alert that can be raised and brought down once fixed. As an example, an offline broker would raise an alert, and once your broker came back online, the alert would be dismissed. On the other hand, a deleted topic is not actionable, in the sense that you cannot recreate the old topic with its data and offsets intact. Non-actionable events (notifications) have a severity level of INFO and are not forwarded to AM.

To set up AM, you have to provide a list of your AlertManager endpoints in the configuration file. The way AM clusters work is that an application sends the alert to all nodes, and the nodes themselves make sure the alert is processed at least once.

If you have more than one Lenses installation, it’s a good idea to set the AM source to something unique so that AlertManager can distinguish between your different installations. Also, it is useful (but optional) to set the Lenses address in the AM generator URL so that you can navigate to the Lenses Web Interface through the alerts’ links. This URL should be the address your users use to access Lenses.

Example:

lenses.alert.plugins = [
  {
    class = "io.lenses.alerts.plugin.am.AlertManagerPlugin",
    config = {
      endpoints = "http://ALERTMANAGER_HOST_1:9094,http://ALERTMANAGER_HOST_2:9094,http://ALERTMANAGER_HOST_3:9094",
      source = "Lenses Prod",
      generator-url = "http://LENSES_HOST:LENSES_PORT" # Optional
    }
  }
]

AlertManager Plugin Configuration Options Table

Config                 Description                                         Required  Type     Default
endpoints              Comma-separated list of Alert Manager endpoints     yes       string   n/a
source                 Lenses instance raising the alert                   yes       string   n/a
generator-url          A URL to identify the source                        yes       string   n/a
ssl                    If true it enables SSL                              no        boolean  false
publish-interval       The interval in milliseconds to send the raised     no        int      300000
                       alerts to Alert Manager
http-connect-timeout   The timeout in milliseconds until a connection      no        int      5000
                       is established
http-request-timeout   The timeout in milliseconds used when requesting    no        int      5000
                       a connection from the connection manager
http-socket-timeout    The socket timeout (SO_TIMEOUT) in milliseconds,    no        int      5000
                       i.e. the timeout for waiting for data

Slack

Lenses can post alerts directly to Slack. We strongly advise using the AlertManager instead and posting alerts to Slack via AM, as alerts without deduplication can cause too much noise. An example of such a setup can be found here.

To integrate Lenses alerting with Slack, add an incoming webhook. Select the channel where Lenses can post alerts and copy the webhook URL.

Example:

lenses.alert.plugins = [
  {
    class = "io.lenses.alerts.plugin.slack.SlackAlertsPlugin",
    config = {
      webhook-url = "https://hooks.slack.com/services/XXXXXXXX/YYYYYYYYY/ZZZZZZZZZZZZZZZZZZZZZZZZ",
      username = "Lenses",
      channel = "alerts"
    }
  }
]

Slack Plugin Configuration Options Table

Config       Description                                                 Required  Type
webhook-url  The Slack endpoint to send the alert to                     yes       string
username     The user name to appear in Slack as the sender              yes       string
channel      The name of the channel to send the alert to                yes       string
icon-url     A URL to an icon image to set for the sent Slack message    no        string

CloudWatch Events

Lenses can post alerts directly to CloudWatch Events, and you can set up your own CloudWatch rules to trigger actions when a high-severity alert is received.

Example:

lenses.alert.plugins= [
  {
    class = "io.lenses.alerts.plugin.cloudwatch.CloudWatchAlertsPlugin",
    config = {
      access-key = "<your-aws-access-key>",
      access-secret-key = "<your-aws-access-secret-key>",
      source = "lenses-prod"
    }
  },
]

CloudWatch Plugin Configuration Options Table

Config             Description                                                  Required  Type
access-key         The AWS access key for the IAM user to send events           yes       string
access-secret-key  The AWS access secret key for the IAM user to send events    yes       string
source             The source attribute of a CloudWatch event                   yes       string

Grafana

If you’ve set up the Lenses Monitoring Suite, or have your monitoring solution in place, you can set the Grafana address (or your own monitoring tool’s address) in Lenses, so you get a link to it from the web interface.

lenses.grafana = "http://GRAFANA_HOST:3000"

Advanced Configuration

Lenses Storage Topics

Lenses keeps a portion of its configuration and data inside Kafka topics. You can find these under the System Topics category. They store information about your cluster, metrics, auditing, processors, and more. When the application starts, it checks for their existence and creates them if needed. Although usually not necessary, you can override the default names for these topics:

# topics created on start-up that Lenses uses to store state
lenses.topics.processors        = "_kafka_lenses_processors"
lenses.topics.external.topology = "__topology"
lenses.topics.external.metrics  = "__topology__metrics"

Warning

These topics are created and managed by Lenses automatically. Do not create them by hand as they may need compaction enabled or a certain number of partitions. If you are using ACLs, only allow Lenses to manage these topics.

ACLs

If your Kafka cluster is set up with an authorizer (ACLs), Lenses should have at least permission to manage and access its storage topics. Make sure to set the principal and host appropriately:

kafka-acls \
    --authorizer-properties zookeeper.connect=ZOOKEEPER_HOST:2181 \
    --add \
    --allow-principal User:Lenses \
    --allow-host lenses-host \
    --operation Read \
    --operation Write \
    --operation Alter \
    --topic topic

Lenses also needs access to certain third-party system topics to work (an example ACL command for these follows the list):

__consumer_offsets
This is the internal Kafka topic where consumer offsets are stored. Lenses needs read access to this topic to track consumer lag.
_schemas
If you use Confluent's Schema Registry, then Lenses needs read access to the topic where the schemas are stored, to track changes in real time.
_connect-configs, _connect-offsets, _connect-status
If you use Kafka Connect, then Lenses needs read access to the topics that your Connect cluster(s) use to store their state, to provide richer information about the connector instances.
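
For example, following the kafka-acls pattern shown above (principal and host are placeholders), read access to the consumer offsets topic could be granted like this:

kafka-acls \
    --authorizer-properties zookeeper.connect=ZOOKEEPER_HOST:2181 \
    --add \
    --allow-principal User:Lenses \
    --allow-host lenses-host \
    --operation Read \
    --topic __consumer_offsets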

Topology

The Topology screen offers a window to your data flows, a high-level view of how your data moves in and out of Kafka. Lenses builds the topology graph from your connectors, SQL processors and applications that include our topology libraries.

To build the graph, some information is needed. The Lenses SQL processors (Kafka Streams applications written with Lenses SQL) are always managed automatically, so you don't have to do anything. The same goes for the more than 45 Kafka Connect connectors we support out of the box. For any other connector, it's as simple as adding it to lenses.connectors.info:

lenses.connectors.info = [
  {
     class.name = "org.apache.kafka.connect.file.FileStreamSinkConnector"
     name = "File"
     instance = "file"
     sink = true
     extractor.class = "io.lenses.config.kafka.connect.SimpleTopicsExtractor"
     icon = "file.png"
     description = "Store Kafka data into files"
     author = "Apache Kafka"
  },
  ...
]

Your custom applications, on the other hand, need to embed our topology libraries. For more information about the topology setup, for both connectors and external applications, please have a look at the Topology Configuration.

Consumer Groups Lag

Lenses exposes the Kafka Consumer Groups lag via a Prometheus metrics endpoint within the application. The default Prometheus path is used (/metrics), so you can add the Lenses address as is to your Prometheus targets. No additional configuration is required.
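
A quick way to verify the endpoint, assuming the default Lenses port of 9991, is a plain HTTP request:

curl http://LENSES_HOST:9991/metrics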

Kafka ACLs

You can manage your Kafka ACLs through Lenses. If you are running Kafka 1.0 or later you don’t have to set anything in the configuration file. If your brokers are configured with an authorizer, Lenses will allow you to see and manage ACLs.

When using Kafka 0.11 or older, you have to switch to ACL management via Zookeeper. To do that, Lenses should be configured with access to Zookeeper and the ACLs broker mode set to false:

lenses.acls.broker.mode = false

Note

The ACL management functionality is tested with the default Kafka authorizer class.

Producer & Consumer

Lenses interacts with your Kafka Cluster via Kafka Consumers and Producers. There may be scenarios where the Consumer and/or the Producer need to be tweaked. Prefix any option described in the Kafka documentation with lenses.kafka.settings.client. As an example:

lenses.kafka.settings.client.isolation.level = "read_committed"
lenses.kafka.settings.client.acks = "all"

The Lenses SQL processors, when used in Kubernetes, have separate Kafka client settings under the prefix lenses.kubernetes.processor.kafka.settings. These settings apply to both the consumer and the producer. As an example:

lenses.kubernetes.processor.kafka.settings.acks = "all"

Warning

Changing the default settings of the Kafka client may lead to unexpected issues. Furthermore, some settings are set dynamically at runtime; for example, IN_PROC SQL processors get their own group.id. We encourage you to visit our community or consult with us via one of the available channels if you need help tweaking Lenses.

System Topics

System topics are a convention used by Lenses to distinguish between topics created by users and topics created by software, such as Lenses and Kafka Connect. Lenses shows system topics in a different tab in the Topics screen to reduce the cognitive load on users.

The default setting includes Lenses system topics, SQL processors' KStreams topics, consumer offsets, schemas, and transactions. You can add topics of your own as well, but it is advised to keep the default ones too, so they do not end up among your user topics. The setting takes prefixes, so, for example, the lsql_ entry matches all topics starting with lsql_.

lenses.kafka.control.topics = [
  "connect-configs",
  "connect-offsets",
  "connect-status",
  "connect-statuses",
  "_schemas",
  "__consumer_offsets",
  "_kafka_lenses_",
  "lsql_",
  "__transaction_state"
]

Tuning Lenses

Lenses comes tuned out of the box, but as every production setup may be different, there are many advanced options to tweak the behavior of the software. These settings include the connections to the Kafka services and JMX ports, the web server and the web socket part of Lenses, the SQL engine settings, the frequency of various update actions —like how often we update the consumers— and many more.

For a list of the advanced options, please check out the Options Reference Table and also have a look at the lenses.conf.sample file that comes with the Lenses archive or under the /opt/lenses/lenses.conf.sample path in our docker images.

Our recommendation is to install our software with the default settings and only go to the advanced section if you have a particular reason. Changing them without a good reason can lead to unexpected behavior.

We encourage you to visit our community or consult with us via one of the available channels if you need help or advice tweaking Lenses.

Runtime Configuration

Java Options

Lenses runs on an embedded Java Virtual Machine (JVM). You can tune it like any JVM-based application, and we made sure to follow the same convention that you see throughout the Kafka ecosystem. This means you get five environment variables you may use: LENSES_OPTS, LENSES_HEAP_OPTS, LENSES_JMX_OPTS, LENSES_LOG4J_OPTS, and LENSES_PERFORMANCE_OPTS. Let's see them in detail:

LENSES_OPTS
This variable should be used for generic settings, such as the Kerberos configuration (e.g. SASL/GSSAPI auth to the Brokers). Please note that in our docker image we add a java agent to this option (in addition to your settings) to export Lenses metrics in Prometheus format.
LENSES_HEAP_OPTS
Here you can set options for the JVM heap. The default setting is -Xmx3g -Xms512m, which sets the heap size between 512MB and 3GB; it will serve you well even for larger clusters. It is possible to set the upper limit (3GB) lower if needed; for our Lenses Box, as an example, we set it at just 1.2GB. If you are using many Lenses SQL processors in IN_PROC mode, or your cluster has more than 3000 partitions, you should increase it (see the example after this list).
LENSES_JMX_OPTS
This variable can be used to tweak the JMX options that the JVM offers, such as allowing remote access. Have a look at the Metrics Section for more information.
LENSES_LOG4J_OPTS
This variable can be used to tweak Lenses logging. Please note that Lenses uses the Logback library for logging. For more information about this, check the Logging section.
LENSES_PERFORMANCE_OPTS

Here you can tune the JVM. Our default settings should serve you well:

-server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+DisableExplicitGC -Djava.awt.headless=true
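
As an example of adjusting one of these variables, you could raise the heap ceiling before starting Lenses (the values here are illustrative):

export LENSES_HEAP_OPTS="-Xmx6g -Xms1g"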

Logging

Lenses uses the Logback framework for logging. For its configuration, upon startup it looks for a file named logback.xml, first inside the current directory (the directory you run Lenses from), then at /etc/lenses/, and last in the Lenses installation directory. The first one found (in the above order) is used. Its path is also printed in the startup logs, so you know which logback configuration file is in use. This is useful because the application constantly monitors this file for changes, so you can edit it and Lenses will reload it without needing a restart.

To use a file at a custom location, set the LENSES_LOG4J_OPTS environment variable as in the example:

export LENSES_LOG4J_OPTS="-Dlogback.configurationFile=file:mylogback.xml"

Lenses scans the log configuration file for changes every 30 seconds.

Inside the installation directory, there is also a logback-debug.xml file, where we set the default logging level to DEBUG. You can use it to quickly increase the logging verbosity.

Tip

For convenience, Lenses offers a basic log viewer within the web interface. Once logged into Lenses, visit http://LENSES_HOST/lenses/#/logs to check it out.

Log Level

The default log level is set to INFO, except for some 3rd party classes we feel are too verbose at this level. You can use the logback-debug.xml configuration to quickly switch to DEBUG.

For fine-grained control, you can edit the logback.xml file and adjust the global or per class log level.

The default logger levels are:

<logger name="com.landoop" level="INFO"/>
<logger name="io.lenses" level="INFO"/>
<logger name="akka" level="INFO"/>
<logger name="io.confluent.kafka.serializers.KafkaAvroDeserializerConfig" level="WARN"/>
<logger name="io.confluent.kafka.serializers.KafkaAvroSerializerConfig" level="WARN"/>
<logger name="org.apache.calcite" level="OFF"/>
<logger name="org.apache.kafka" level="WARN"/>
<logger name="org.apache.kafka.clients.admin.AdminClientConfig" level="ERROR"/>
<logger name="org.apache.kafka.clients.consumer.internals.AbstractCoordinator" level="WARN"/>
<logger name="org.apache.kafka.clients.consumer.ConsumerConfig" level="ERROR"/>
<logger name="org.apache.kafka.clients.producer.ProducerConfig" level="ERROR"/>
<logger name="org.apache.kafka.clients.NetworkClient" level="ERROR"/>
<logger name="org.apache.kafka.common.utils.AppInfoParser" level="ERROR"/>
<logger name="org.apache.zookeeper" level="WARN"/>
<logger name="org.reflections" level="WARN"/>
<logger name="org.I0Itec.zkclient" level="WARN"/>
<logger name="com.typesafe.sslconfig.ssl.DisabledComplainingHostnameVerifier" level="ERROR"/>
<root level="INFO">...</root>

Log Format

All the log entries are written to the output using the following pattern: %d{ISO8601} %-5p [%c{2}:%L] %m%n. You can adjust it inside logback.xml to match your organization's defaults.

Log Location

By default Lenses logs both to stdout and to files inside the directory it runs from, under logs/. This may also be configured inside logback.xml.

The stdout output can be integrated with any log collection infrastructure you may have in place and is useful with containers as well. It follows the Twelve-Factor App approach to logs.

On the other hand, the file logs are separated into three files: lenses.log, lenses-warn.log, and metrics.log. The first contains all logs and is the same as the stdout output. The second contains only messages at level WARN and above. The third one contains timing metrics for Lenses operations and can be useful for debugging. If you ever need to file a bug report, we may ask for any of these files (in whole or in part) to be able to debug your issue. Lenses takes care of log rotation for these files.

Metrics

JMX Metrics

Lenses runs on the JVM; it is possible to expose a JMX endpoint or use a java agent of your choosing, such as Prometheus' jmx_exporter or Jolokia's agent, to monitor it. The JMX endpoint is managed by the lenses.jmx.port option. Leave it empty to disable JMX.
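
For example, to expose the JMX endpoint on a port of your choice (the port number here is an arbitrary pick):

lenses.jmx.port = 9586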

The most interesting information you can get from JMX is Lenses' JVM usage (e.g. CPU, memory, GC) and the metrics of the Kafka clients Lenses uses internally.

It is often the case with JMX that you need to tune it further for remote access. As we've seen, this is done via the LENSES_JMX_OPTS environment variable. An example of how you can configure it for remote access is below; if you use it verbatim, adjust the hostname to reflect your server's hostname.

LENSES_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.local.only=false -Djava.rmi.server.hostname=[HOSTNAME]"

Prometheus’ Agent

The Lenses Monitoring Suite is a reference architecture based on Prometheus and Grafana. As part of your Lenses subscription, you get access to resources such as templates and dashboards to assist in your implementation.

In the monitoring context, Lenses is considered a Kafka client, just like any of your Kafka applications. You can use the resources provided in the monitoring suite (jmx_exporter build and configuration) to enable the Prometheus metrics endpoint via the LENSES_OPTS environment variable:

export LENSES_OPTS="-javaagent:/path/to/jmx_exporter/fastdata_agent.jar=9102:/path/to/jmx_exporter/client.yml"

If you use your own jmx_exporter build and templates, the process is the same; just substitute your own files for ours.

Our docker image (lensesio/lenses) sets up the Prometheus endpoint automatically. You only have to expose port 9102 to access it.
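
As a quick sanity check once the agent is running, you can fetch the metrics over plain HTTP; localhost below is an assumption for a local installation:

# The jmx_exporter agent serves Prometheus-formatted metrics on the configured port
curl http://localhost:9102/metrics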

Directories

Lenses needs write access to certain directories. Also, it needs a temporary directory with execution permission.

Write Access

Unless configured otherwise, Lenses needs write access to the directory it runs from (WorkingDirectory in systemd) and to /tmp.

logs/
This directory stores two kinds of files: log files and the state of Lenses SQL processors (when in in-process mode). Both are safe to delete, although if you delete the processors’ state, Lenses (the KStreams framework more specifically) will need to rebuild it. To change the log files location, edit the logback.xml file inside the Lenses installation directory or copy it into the run directory and edit it there. To change the location for the processors’ state directory, use the lenses.sql.state.dir option.
storage/
In this directory Lenses stores configuration in an H2 database. To change this directory, use the lenses.storage.directory option. This directory needs to be backed up and must survive upgrades.
tmp/
Temporary files, such as JNI shared libraries, are stored here. If Lenses fails to start with an error like Failed to read data, try removing the directory /tmp/vlxjre. Code execution must be allowed in this directory, as the JNI libraries require it.
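
Putting the directory-related options together, a minimal lenses.conf sketch could look like the following; the paths are placeholders, adjust them to writable locations that survive upgrades:

# Placeholder paths: both locations must be writable by the Lenses process
lenses.storage.directory = "/var/lib/lenses/storage"
lenses.sql.state.dir = "/var/lib/lenses/sql-state"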

JNI libraries and Code Execution

Lenses and Kafka itself use two common Java libraries that take advantage of JNI: the Snappy library and the RocksDB library. JNI relies on native libraries (.so files in our case) that are extracted inside /tmp. Because native code is executed directly by the operating system, the location it is extracted to must permit code execution. In some enterprise setups, the /tmp directory is mounted with the noexec option, leading to problems.

Apart from the obvious solution of mounting /tmp without noexec, you can configure Lenses to use a different temp directory where code execution is allowed. For Snappy, the org.xerial.snappy.tempdir option controls the temp directory. For RocksDB, the temp directory of the JVM that runs Lenses needs to be adjusted via the java.io.tmpdir option.

LENSES_OPTS="-Dorg.xerial.snappy.tempdir=/path/to/exec/tmp -Djava.io.tmpdir=/path/to/exec/tmp"

Important

Please note that it is not just Lenses that uses Snappy; the Kafka Brokers and Kafka Connect workers use it as well. As such, you may need the same workaround for these services too.

Plugins

Lenses can be extended via user-provided classes. There are five categories you can extend:

  • Serde: custom serialization and deserialization classes so you can use all Lenses functionality with your data formats (such as protobuf)
  • LDAP Group Filter: custom plugin to query your LDAP implementation for groups your users belong to if you do not use AD or the memberOf overlay of OpenLDAP
  • UDF for the SQL Engine: User Defined Functions (UDF) can extend the Engine with new functions
  • Custom HTTP authentication: A class that can extract (and possibly verify) user information sent via headers, so your users can authenticate to Lenses via an authentication proxy / Single Sign-On solution
  • Alerts integration: A class that can forward alerts raised by Lenses to an external system.

Location

Lenses searches for plugins under two directories:

  • The $LENSES_HOME/plugins/ directory, where $LENSES_HOME is the Lenses installation path
  • An optional directory set by the environment variable LENSES_PLUGINS_CLASSPATH_OPTS

On startup, these two directories and any of their first-level subdirectories are added to the Lenses classpath (the security plugins must be available during startup), and Lenses then monitors them for new jar files.
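
For example, to point Lenses at an additional plugins directory via the environment variable (the path below is illustrative):

# Extra directory added to the classpath at startup and monitored for new jars afterwards
export LENSES_PLUGINS_CLASSPATH_OPTS="/opt/lenses/plugins"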

While any layout or a single directory may work for you, a suggested layout for the plugin directory is this:

plugins/
├── security
├── serde
├── udf
└── alerts

Which once populated with plugins, could look like this:

plugins/
├── security
│   └── sso_header_decoder.jar
├── serde
│   ├── protobuf_actions.jar
│   └── protobuf_clients.jar
├── udf
│   ├── eu_vat.jar
│   ├── reverse_geocode.jar
│   └── summer_sale_discount.jar
└── alerts
    └── email_sender.jar

Tip

Lenses continuously monitors the plugin directories and their first-level subdirectories (those that existed during Lenses startup) for new plugins (jars).

Custom Serde

Custom serde (serializers and deserializers) can be used to extend Lenses with support for additional message formats. Out of the box, you get built-in support for Avro, JSON, CSV, XML and more. If your data is in a format that isn’t supported out of the box, or one that requires code compiled in advance (such as protobuf), you can write and compile your own serde jars and add them to Lenses. For more information about custom serde, check the Lenses SQL section.

As mentioned, custom serde can be read from the plugins directories. Before Lenses 2.2, custom serde could be read from the locations below. These locations are still supported but will be deprecated in the future. If you use them, please switch to the plugins directories.

  • $LENSES_HOME/serde
  • $LENSES_SERDE_CLASSPATH_OPTS if set

The plugins (and serde) directories are continuously monitored for new jar files. Once a new library is dropped in, the new format should be available to use within a few seconds.
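
A minimal sketch of adding a new format, assuming a hypothetical my_format_serde.jar and the suggested plugins layout shown earlier:

# Copy the jar into the monitored serde subdirectory; Lenses picks it up within a few seconds
cp my_format_serde.jar $LENSES_HOME/plugins/serde/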

Processors

Lenses SQL Processors support custom serde as well. If the processor execution mode is set to IN_PROC, the default mode, no additional action is required. If it’s set to CONNECT, then the serde jars should be added to the connector directory, alongside the default libraries. If the mode is KUBERNETES, a custom processor image that includes the serde should be created.

To include custom serde in the Lenses docker image, please see Lenses docker plugins.

Options List Reference

Each entry below lists the configuration key (Config), followed by its Description; the final line of every entry gives, in order, whether the option is Required, its Type, and its Default value.
lenses.ip
Bind HTTP at the given endpoint.
Used in conjunction with lenses.port
no string 0.0.0.0
lenses.port
The HTTP port the HTTP server listens
for connections: serves UI, Rest and WS APIs
no int 9991
lenses.ssl.keystore.location
The full path to the keystore file
used to enable TLS on lenses port
no string null
lenses.ssl.keystore.password
Password to unlock the keystore
file
no String null
lenses.ssl.key.password
Password for the ssl certificate
used
no String null
lenses.ssl.enabled.protocols
Version of TLS protocol
that will be used
no string TLSv1.2
lenses.ssl.algorithm
X509 or PKIX algorithms used by TLS
termination
no string SunX509
lenses.ssl.cipher.suites
Comma separated list of ciphers
allowed for the TLS negotiation
no String null
lenses.jmx.port
The port to bind an JMX agent to
enable JVM monitoring
no int 9992
lenses.license.file
The full path to the license file
yes string license.json
lenses.secret.file
The full path to security.conf containing security
credentials read more
yes string security.conf
lenses.storage.directory
The full path to the directory where Lenses
stores some of its state
no string null
lenses.topics.processors
Topic to store the SQL processors details.
*We advise not to change the defaults
neither to delete the topic*
yes string _kafka_lenses_processors
lenses.topics.external.topology
Topic where external application
publish their topology.
yes string __topology
lenses.topics.external.metrics
Topic where external application
publish their topology metrics.
yes string __topology__metrics
lenses.kafka.brokers
A list of host/port pairs to
use for establishing the initial connection to the
Kafka cluster. Add just a few broker addresses
here and Lenses will bootstrap and discover the
full cluster membership (which may change dynamically).
This list should be in the form
"host1:port1,host2:port2,host3:port3"
yes string PLAINTEXT://localhost:9092
lenses.kafka.metrics.port
An array mapping the Kafka broker id to its metrics port.
yes array null
lenses.kafka.metrics.port[*].id
The Kafka broker identifier
(integer as defined in your Kafka broker configuration)
no int null
lenses.kafka.metrics.port[*].port
The port on the broker machine
to connect to in order to get the broker’s metrics.
no int null
lenses.kafka.metrics.port[*].host
The Kafka broker host name to use
for the given broker identifier.
no string null
lenses.kafka.metrics.default.port
Set this when all the Kafka brokers
use the same JMX/JOLOKIA port number.
When a machine runs more than one Kafka broker,
you need to use lenses.kafka.metrics.port[*]
to set the connection port.
no int null
lenses.kafka.metrics.type
Sets the metrics type. Available options are:
JMX, JOLOKIAG, or JOLOKIAP. JMX (Java Management Extensions) is the most common.
Jolokia supports two APIs: a GET-based and a POST-based one.
Use JOLOKIAG if the metrics are exposed via GET requests.
Use JOLOKIAP if the metrics are exposed via POST requests.
no string JMX
lenses.kafka.metrics.user
For secure connections, the setting specifies the
user name to use when connecting to JMX/JOLOKIA endpoints.
The same user is applied for all brokers connections
no string null
lenses.kafka.metrics.password
For secure connections, the setting specifies
the password to use when connecting to JMX/JOLOKIA endpoints.
The same value is used for all brokers connections.
no string null
lenses.kafka.metrics.https
Set this flag to true when the metrics are exposed
via JOLOKIA and the connection is using HTTPS protocol.
no bool false
lenses.kafka.metrics.ssl
This applies for JMX exposed metrics only. Set the value to true
when secure connection is required.
no bool false
lenses.kafka.connect.clusters
Defines the Kafka connect clusters
no array null
lenses.kafka.connect.clusters.name
The name for the connect cluster to recognize it by
no string null
lenses.kafka.connect.clusters.statuses
Comma separated topics which hold the Connect cluster status
no string null
lenses.kafka.connect.clusters.configs
Comma separated topics which hold the Connect cluster config
no string null
lenses.kafka.connect.clusters.offsets
Comma separated topics which hold the Connect cluster offsets
no string null
lenses.kafka.connect.clusters.username
if the connect endpoints are protected
by user/password this is the user to use
no string null
lenses.kafka.connect.clusters.password
if the connect endpoints are protected
by user/password this is the password to use
no string null
lenses.kafka.connect.clusters.auth
if the connect endpoints are protected
this is the protection mode
(URL, USER_INFO, SASL_INHERIT, NONE)
no string null
lenses.kafka.connect.clusters.urls
A list of all the workers endpoints
no array null
lenses.kafka.connect.clusters.urls.url
The connect worker endpoint
no string null
lenses.kafka.connect.clusters.urls.jmx
The JMX metrics port (old style, still supported)
no int null
lenses.kafka.connect.clusters.urls.metrics.url
The metrics connection endpoint
no string null
lenses.kafka.connect.clusters.urls.metrics.type
The metrics connection type (JMX or JOLOKIA)
no string null
lenses.kafka.connect.clusters.urls.metrics.user
if the metrics connection is protected by user/password
this is the user to use
no string null
lenses.kafka.connect.clusters.urls.metrics.password
if the metrics connection is protected by user/password
this is the password to use
no string null
lenses.kafka.connect.request.timeout
The maximum time (in milliseconds) to wait for Kafka Connect to reply
no int 10000
lenses.kafka.connect.ssl.keystore.location
The full path to the keystore file
used to enable TLS to Kafka Connect
no string null
lenses.kafka.connect.ssl.keystore.password
Password to unlock the keystore
file
no String null
lenses.kafka.connect.ssl.key.password
Password for the ssl certificate
used
no String null
lenses.kafka.connect.ssl.truststore.location
The full path to the truststore file
used to enable TLS to Kafka Connect
no string null
lenses.kafka.connect.ssl.truststore.password
Password to unlock the truststore
file
no String null
lenses.zookeeper.hosts
A list of all the zookeeper nodes
no array null
lenses.zookeeper.hosts.url
The Zookeeper node endpoint
no string null
lenses.zookeeper.hosts.url.jmx
The JMX metrics port (old style, still supported)
no string null
lenses.zookeeper.hosts.metrics.type
The Zookeeper node metrics type (JMX or JOLOKIA)
no string null
lenses.zookeeper.hosts.metrics.url
The Zookeeper node metrics endpoint
no string null
lenses.zookeeper.hosts.metrics.user
if the metrics connection is protected by user/password
this is the user to use
no string null
lenses.zookeeper.hosts.metrics.password
if the metrics connection is protected by user/password
this is the password to use
no string null
lenses.schema.registry.urls
A list of SR nodes
no array null
lenses.schema.registry.urls.url
The SR node endpoint
no string null
lenses.schema.registry.urls.metrics.url
The SR node metrics endpoint
no string null
lenses.schema.registry.urls.metrics.user
if the metrics connection is protected by user/password
this is the user to use
no string null
lenses.schema.registry.urls.metrics.password
If the metrics connection is protected by user/password
this sets the password to use
no string null
lenses.zookeeper.hosts
Provide all the available Zookeeper nodes details.
For every ZooKeeper node specify the
connection url (host:port) and the metrics endpoint.
The configuration should be
[{url:"hostname1:port1", metrics:{url:"URL", type:"JMX"}}]
yes string []
lenses.zookeeper.chroot
You can add your znode (chroot) path if
you are using it. Please do not add
leading or trailing slashes. For example if you use
the Zookeeper chroot /kafka for
your Kafka cluster, set this value to kafka
no string  
lenses.zookeeper.security.enabled
Enables secured connection to your Zookeeper.
The default value is false.
Please read about this setting before enabling it.
no boolean false
lenses.schema.registry.urls
Provide all available Schema Registry node details or list
the load balancer address if one is used. For every instance
specify the connection url and if
metrics are enabled endpoint
yes string []
lenses.schema.registry.kerberos
Set to true if the schema registry
is deployed with kerberos authentication
no boolean false
lenses.schema.registry.keytab
The location of the keytab file if
connecting to a kerberized schema registry
no string null
lenses.schema.registry.jaas
The location of the jaas file if
connecting to a kerberized schema registry
no string null
lenses.schema.registry.krb5
The location of the krb5 file if
connecting to a kerberized schema registry
no string null
lenses.schema.registry.principal
The service principal of the above keytab
no string null
lenses.schema.registry.service.name
The service name of the above keytab
no string null
lenses.schema.registry.auth
Specifies the authentication mode
for connecting to the schema registry endpoints.
Available values are: URL, USER_INFO, SASL_INHERIT or NONE
no string null
lenses.schema.registry.username
When a USER_INFO authentication
mode is used, this specifies the user name value
no string null
lenses.schema.registry.password
When a USER_INFO authentication
mode is used, this specifies the password value
no string null
lenses.schema.registry.settings.*
Prefixes all the schema registry
client configuration you might want to use.
For example for SASL_INHERIT you need to provide
lenses.schema.registry.settings.sasl.jaas.config=PATH
no string null
lenses.kafka.connect.clusters
Provide all available Kafka Connect clusters.
For each cluster give a name, list the 3 backing topics
and provide workers connection details (host:port) and
metrics endpoints if enabled (Kafka 1.0.0 or later)
no array []
lenses.alerts.plugins
Array of alert plugin definitions, for example:
"http://host1:port1"
no string  
lenses.alert.manager.source
How to identify the source of an Alert
in Alert Manager. Default is Lenses but you might
want to override to UAT for example
no string Lenses
lenses.alert.manager.generator.url
A unique URL identifying the creator of this alert.
Default is http://lenses but you might
want to override to http://<my_instance_url> for example
no string http://lenses
lenses.grafana
If using Grafana, provide the Url location.
The configuration should be
"http://grafana-host:port"
no string  
lenses.sql.settings.max.size
Used when reading data from a Kafka topic.
This is the maximum data size in bytes to return
from a Lenses SQL query. If the query is bringing more
data than this limit any records received after
the limit are discarded.
This can be overwritten
in the Lenses SQL query.
yes long 20971520 (20MB)
lenses.sql.settings.max.query.time
Used when reading data from a
Kafka topic. This is the time in milliseconds the
query will be allowed to run. If the time is exhausted
it returns the records found so far.
This can be overwritten in the
Lenses SQL query.
yes int 3600000 (1h)
lenses.sql.settings.max.idle.time
Used when reading data from a
Kafka topic. This is the time in milliseconds
to wait when reaching the end of the topic.
This can be overwritten in the
Lenses SQL query.
yes int 5000 (5 seconds)
lenses.sql.settings.skip.bad.records
Used when reading data from a
Kafka topic. If the flag is set to true,
the SQL engine will skip records which can
not be read. This can be overwritten in the
Lenses SQL query.
yes boolean true
lenses.sql.settings.format.timestamp
Used when reading data from a
Kafka topic. If the flag is set to true,
the Avro date and time fields are rendered
to the UI in a human readable format.
This can be overwritten in the
Lenses SQL query.
yes boolean true
lenses.sql.settings.live.aggs
Used when reading data from a
Kafka topic. If the flag is set to true,
it will enable running aggregate queries
on the table-based SQL engine.
This can be overwritten in the
Lenses SQL query.
yes boolean true
lenses.sql.sample.default
Number of messages to take in every
sampling attempt
no int 2
lenses.sql.sample.window
How frequently to sample a topic
for new messages when tailing it
no int 200
lenses.metrics.workers
Number of workers to distribute the load
of querying and collecting metrics
no int 16
lenses.offset.workers
Number of workers to distribute the
load of querying topic offsets
no int 5
lenses.sql.execution.mode
The SQL execution mode, IN_PROC
or CONNECT or KUBERNETES
no string IN_PROC
lenses.sql.state.dir
Directory location to store the state
of KStreams. If using CONNECT mode, this folder
must already exist on each Kafka
Connect worker
no string logs/lenses-sql-kstream-state
lenses.sql.monitor.frequency
How frequently SQL processors
emit health check and performance metrics
no int 10000
lenses.kubernetes.processor.image.name
The docker/container repository url
and name of the Lenses SQL runner
no string eu.gcr.io/lenses-container-registry/lenses-sql-processor
lenses.kubernetes.processor.image.tag The Lenses SQL runner image tag no string 3.0
lenses.kubernetes.config.file The location of the kubectl config file no string /home/lenses/.kube/config
lenses.kubernetes.pull.policy
The value sets the Kubernetes imagePullPolicy, which instructs
the Kubelet how to handle the Lenses SQL processor image.
no string always
lenses.kubernetes.watch.reconnect.limit
The number of times to attempt a reconnect while
establishing a watcher for Kubernetes events.
no long -1
lenses.kubernetes.processor.heap
The amount of memory
the underlying Java process will use
no string 1024M
lenses.kubernetes.processor.mem.request
The value controls how much memory the Pod container requests
no string 128M
lenses.kubernetes.processor.mem.limit
The value controls the Pod container memory limit
no string 1152M
lenses.kubernetes.processor.jaas
The path to Lenses SQL processor JAAS file if a different connection to the one provided to Lenses is required
no string null
lenses.kubernetes.processor.krb5
The path to Lenses SQL processor KRB5 file if a different connection to the one provided to Lenses is required
no string null
lenses.kubernetes.processor.kafka.settings
Prefix for all the Kafka configurations
required to run the Kafka Streams application
resulting from the Lenses SQL streaming code.
no string null
lenses.kubernetes.processor.kafka.protected.settings
An array of the keys
prefixed with lenses.kubernetes.processor.kafka.settings which
contain sensitive information. If for example
ssl.key.password is set in the settings,
then this value should be added as an item here.
no string null
lenses.kubernetes.processor.kafka.protected.file.settings
An array of the keys
prefixed with lenses.kubernetes.processor.kafka.settings which
are pointing to files. If for example
ssl.keystore.location is set in the settings,
then this value should be added as an item here.
no string null
lenses.kubernetes.processor.kafka.keytab
The path to Lenses SQL processor Keytab file if a different connection to the one provided to Lenses is required
no string null
lenses.kubernetes.processor.schema.registry.settings
Prefix all the Schema Registry configurations
required to connect to your instance.
no string null
lenses.kubernetes.processor.schema.registry.protected.settings
An array of the keys
prefixed with lenses.kubernetes.processor.schema.registry.settings which
contain sensitive information. If for example
ssl.key.password is set in the settings,
then this value should be added as an item here.
no string null
lenses.kubernetes.processor.schema.registry.protected.file.settings
An array of the keys
prefixed with lenses.kubernetes.processor.schema.registry.settings which
are pointing to files. If for example
ssl.keystore.location is set in the settings,
then this value should be added as an item here.
no string null
lenses.kubernetes.processor.schema.registry.keytab
The path to the Keytab required by a running Lenses SQL processor to connect to Schema Registry
no string null
lenses.kubernetes.service.account
The service account to deploy with.
This account should be able to pull images
from lenses.kubernetes.processor.image.name
no string default
lenses.kubernetes.pull.policy
The pull policy for Kubernetes containers:
IfNotPresent or Always
no string IfNotPresent
lenses.kubernetes.runner.mem.limit The memory limit applied to the Container no string 768Mi
lenses.kubernetes.runner.mem.request The memory requested for the Container no string 512Mi
lenses.kubernetes.runner.java.opts Advanced JVM and GC memory tuning parameters no string
-Xms256m -Xmx512m
-XX:MaxPermSize=128m -XX:MaxNewSize=128m
-XX:+UseG1GC -XX:MaxGCPauseMillis=20
-XX:InitiatingHeapOccupancyPercent=35
-XX:+DisableExplicitGC -Djava.awt.headless=true
lenses.interval.summary
The interval (in milliseconds) to check for new topics,
or topic config changes
no long 10000
lenses.interval.consumers
The interval (in milliseconds) to read all
consumer info
no int 10000
lenses.interval.partitions.messages
The interval (in milliseconds) to refresh
partitions info
no long 10000
lenses.interval.type.detection
The interval (in milliseconds) to check the
topic payload type
no long 30000
lenses.interval.user.session.ms
The duration (in milliseconds) that a
client session stays alive for.
no long 14400000 (4h)
lenses.interval.user.session.refresh
The interval (in milliseconds) to check whether a
client session is idle and should be terminated.
no long 60000
lenses.interval.schema.registry.healthcheck
The interval (in milliseconds) to check the
status of schema registry instances.
no long 30000
lenses.interval.topology.topics.metrics
The interval (in milliseconds) to refresh the
topology status page.
no long 30000
lenses.interval.alert.manager.healthcheck
The interval (in milliseconds) to check the
status of the Alert manager instances.
no long 5000
lenses.interval.alert.manager.publish
The interval (in milliseconds) on which
unresolved alerts are published
to alert manager.
no long 30000
lenses.interval.topology.custom.app.metrics.discard.ms
The interval (in milliseconds) after which
an already published metrics entry is considered stale.
Once this happens, the record is discarded.
no long 120000
lenses.interval.metrics.refresh.zk
The interval (in milliseconds) to get
Zookeeper metrics.
yes long 5000
lenses.interval.metrics.refresh.sr
The interval (in milliseconds) to get
Schema Registry metrics.
yes long 5000
lenses.interval.metrics.refresh.broker The interval (in milliseconds) to get Broker metrics. yes long 5000
lenses.interval.metrics.refresh.alert.manager
The interval (in milliseconds) to get
Alert Manager metrics
yes long  
lenses.interval.metrics.refresh.connect The interval (in milliseconds) to get Connect metrics. yes long  
lenses.interval.metrics.refresh.brokers.in.zk
The interval (in milliseconds) to refresh
the brokers from Zookeeper.
yes long 5000
lenses.kafka.ws.poll.ms
Max time (in milliseconds) a consumer polls for
data on each request, on WS API request.
no int 1000
lenses.kafka.ws.buffer.size Max buffer size for WS consumer no int 10000
lenses.kafka.ws.max.poll.records
Specify the maximum number of records
returned in a single call to poll(). It will
impact how many records will be pushed at once
to the WS client.
no int 1000
lenses.kafka.ws.heartbeat.ms
The interval (in milliseconds) to send messages to
the client to keep the TCP connection open.
no int 30000
lenses.access.control.allow.methods
Restrict the HTTP verbs allowed
to initiate a cross-origin HTTP request
no string GET,POST,PUT,DELETE,OPTIONS
lenses.access.control.allow.origin
Restrict cross-origin HTTP requests
to specific hosts.
no string
lenses.schema.registry.topics The backing topic where schemas are stored. no string _schemas
lenses.schema.registry.delete
Allows subjects to be deleted in
the Schema Registry. Default is disabled.
Requires schema-registry version 3.3.0 or later
no boolean false
lenses.allow.weak.ssl
Allow connecting to https:// services even
when self-signed certificates are used
no boolean false
lenses.telemetry.enable Enable or disable telemetry data collection no boolean true
lenses.curator.retries
The number of attempts to read the
broker metadata from Zookeeper.
no int 3
lenses.curator.initial.sleep.time.ms
The initial amount of time to wait between
retries to ZK.
no int 2000
lenses.zookeeper.max.session.ms
The max time (in milliseconds) to wait for
the Zookeeper server to
reply for a request. The implementation requires that
the timeout be a minimum of 2 times the tickTime
(as set in the server configuration).
no int 10000
lenses.zookeeper.max.connection.ms
The duration (in milliseconds) to wait for the Zookeeper client to
establish a new connection.
no int 10000
lenses.akka.request.timeout.ms
The maximum time (in milliseconds) to wait for an
Akka Actor to reply.
no int 10000
lenses.kafka.control.topics List of Kafka topics to be marked as system topics no string
["connect-configs", "connect-offsets", "connect-status",
"connect-statuses", "_schemas", "__consumer_offsets",
"_kafka_lenses_", "lsql_", "__transaction_state",
"__topology", "__topology__metrics"]
lenses.alert.buffer.size
The number of most recently raised
alerts to keep in the cache.
no int 100
lenses.kafka.settings.client
Allow additional Kafka consumer/producer settings
to be specified. When Lenses creates an instance
of KafkaConsumer/KafkaProducer class it will use these
properties during initialization.
no string {reconnect.backoff.ms = 1000, retry.backoff.ms = 1000}
lenses.kafka.settings.client.kstream
Allow additional Kafka KStreams settings
to be specified
no string  
lenses.api.response.cache.enable
Controls the HTTP headers for caching the responses
on the client side. When set to true it will push these headers
to the client: Cache-Control: no-cache, no-store, must-revalidate,
Pragma: no-cache, and Expires: -1.
no boolean false
lenses.api.services.endpoints.hide
Protects the infrastructure services endpoints
details from being returned to the client
no boolean true

The lenses.kafka.settings.client and lenses.kafka.settings.client.kstream keys allow configuring the settings of the consumers, producers and Kafka Streams instances Lenses uses internally. Example: lenses.kafka.settings.client.compression.type = snappy
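
As an illustration, a hedged lenses.conf sketch of tuning these internal clients; the property values shown are examples, not recommendations:

# Applied to the KafkaConsumer/KafkaProducer instances Lenses creates internally
lenses.kafka.settings.client.compression.type = snappy
lenses.kafka.settings.client.reconnect.backoff.ms = 1000

# Applied to the Kafka Streams instances used by in-process SQL processors
lenses.kafka.settings.client.kstream.cache.max.bytes.buffering = 10485760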