Configuration

Lenses configuration is driven by two configuration files:

  • lenses.conf
    Which contains runtime configuration options. You will need to edit this file and set up the connection details for the Kafka cluster before using the software. For the complete list of configuration options please refer to Options Reference.
  • security.conf
    Which contains the security-related configuration, so that sensitive data can be protected by administrators. For more information about enabling security options, authentication methods, and authorisation please refer to Security Configurations.

If you are running Lenses with Docker or Helm, you may also refer to the relevant sections on how to set the configuration values.

Quick Start

Lenses configuration files are in HOCON format.

Here is an example of a lenses.conf file:

# Set the ip:port for Lenses to bind to
lenses.ip = 0.0.0.0
lenses.port = 9991
#lenses.jmx.port = 9992

# License file allowing connecting to up to N brokers
lenses.license.file = "license.json"

# topics created on start-up that Lenses uses to store state
lenses.topics.audits = "_kafka_lenses_audits"
lenses.topics.metrics = "_kafka_lenses_metrics"
lenses.topics.cluster = "_kafka_lenses_cluster"
lenses.topics.profiles = "_kafka_lenses_profiles"
lenses.topics.processors = "_kafka_lenses_processors"
lenses.topics.alerts.storage = "_kafka_lenses_alerts"
lenses.topics.lsql.storage = "_kafka_lenses_lsql_storage"
lenses.topics.alerts.settings = "_kafka_lenses_alerts_settings"
lenses.topics.metadata = "_kafka_lenses_topics_metadata"
lenses.topics.external.topology = "__topology"
lenses.topics.external.metrics = "__topology__metrics"

# Set up infrastructure end-points
lenses.kafka.brokers        = "PLAINTEXT://host1:9092,PLAINTEXT://host2:9092,PLAINTEXT://host3:9092"
lenses.zookeeper.hosts      = [
  { url:"host1:2181", jmx:"host1:9010" },
  { url:"host2:2181", jmx:"host2:9010" },
  { url:"host3:2181", jmx:"host3:9010" }
]
lenses.zookeeper.chroot     = ""         # Optional in case a ZK chroot path is in use
lenses.jmx.broker.port = BROKERS_JMX_PORT         # required only if `lenses.zookeeper.hosts` is not provided
lenses.jmx.brokers = [                            # required only if `lenses.zookeeper.hosts` is not provided and
  { id:"broker1 id", jmx:"broker1 JMX port" },    # there are brokers which use different JMX ports, so that
  { id:"broker2 id", jmx:"broker2 JMX port" },    # `lenses.jmx.broker.port` alone is not enough.
  { id:"broker3 id", jmx:"broker3 JMX port" }     # (We do not advise such deployments)
]
lenses.schema.registry.urls = [
  { url:"http://host1:18081", jmx:"host1:19395" },
  { url:"http://host2:18081", jmx:"host2:19395" }
]
lenses.connect.clusters     = [
  { name: "connect_cluster_X", statuses: "connect-statuses", configs: "connect-configs", offsets: "connect-offsets",
    urls: [
     { url: "http://host1:8083", jmx: "host1:18083" },
     { url: "http://host2:8083", jmx: "host2:1880"  }
    ]
  }
]

License

In order to run, Lenses requires a license file, which can be obtained by contacting us. Once you have received your license, store it in a file (e.g. license.json) and update the configuration to point to it. Make sure the configuration value contains the full file path.

# License file allowing connecting to up to N brokers
lenses.license.file="license.json"

Host and Port

During startup, Lenses binds to the IP and port set in the configuration file. Use the lenses.ip and lenses.port configuration entries to set different values. By default Lenses binds to port 9991.

# Set the ip:port for Lenses to bind to
lenses.ip = 0.0.0.0
lenses.port = 9991

Kafka Endpoints

Lenses integrates with and monitors Kafka Brokers, Zookeepers, Schema Registries, and Kafka Connect Clusters. To configure your Kafka Cluster and the related services, you need to set the corresponding endpoints:

# Set up infrastructure end-points
lenses.kafka.brokers        = "PLAINTEXT://host1:9092,PLAINTEXT://host2:9092"
lenses.zookeeper.hosts      = [{url:"localhost:2181",jmx:"localhost:12181"}]
lenses.zookeeper.chroot     = "kafka"
lenses.schema.registry.urls = [{url:"http://localhost:8081",jmx:"localhost:18081"}]
lenses.connect.clusters     = [{name: "connectClusterA", statuses: "connect-statuses", configs: "connect-configs", offsets: "connect-offsets", urls:[{url: "http://localhost:8083", jmx:"localhost:18083"}] }]

Broker Authentication & Encryption

If your Kafka Brokers are set up for authentication via Simple Authentication and Security Layer (SASL) or SSL/TLS, you will need to configure Lenses accordingly. Please refer to Broker Authentication.

JMX endpoints

You may notice here that the JMX endpoints are also configured. JMX endpoints are optional for Lenses; however, they are highly recommended in order to have the monitoring features enabled. If you haven’t enabled JMX, follow the instructions here.

The Kafka Brokers’ JMX endpoints are picked up automatically from Zookeeper. However, in cloud deployments where Kafka is provided as a managed service, the Zookeeper connection tends not to be available for security reasons; in that case lenses.zookeeper.hosts and lenses.zookeeper.chroot are left unset. Lenses then cannot discover the Kafka brokers’ JMX endpoints, so to get the best user experience they need to be set manually. Here is how they can be set:

#Set the Kafka brokers JMX endpoints
lenses.jmx.brokers = [
  { id:"broker1 id", jmx:"broker1 JMX port" },
  { id:"broker2 id", jmx:"broker2 JMX port" },
  { id:"broker3 id", jmx:"broker3 JMX port" }
]

Kafka Connect clusters

Lenses allows multiple Connect clusters to be configured. For this reason each cluster must be given a name, which acts as an alias to refer to that cluster from within Lenses.

To configure Kafka Connect correctly you must provide the following, as sketched below:

  1. A name for the cluster
  2. The REST endpoint of each worker and, optionally, its JMX endpoint
  3. The Kafka Connect backing topics for statuses, configs, and offsets
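
Putting these together, a minimal sketch for a single Connect cluster (the cluster name, hostnames, and ports here are illustrative):

lenses.connect.clusters = [
  { name: "my_connect_cluster",
    statuses: "connect-statuses", configs: "connect-configs", offsets: "connect-offsets",
    urls: [
      { url: "http://worker1:8083", jmx: "worker1:18083" }
    ]
  }
]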

System Topics

Lenses is a stateless application and thus an excellent fit for containerised deployments. During startup it creates a few system topics for storing monitoring, auditing, cluster, user profile, and processor information. These topics are configured by the topics configuration block and you can optionally override them:

# topics created on start-up that Lenses uses to store state
lenses.topics.audits            = "_kafka_lenses_audits"
lenses.topics.cluster           = "_kafka_lenses_cluster"
lenses.topics.metrics           = "_kafka_lenses_metrics"
lenses.topics.profiles          = "_kafka_lenses_profiles"
lenses.topics.processors        = "_kafka_lenses_processors"
lenses.topics.alerts.storage    = "_kafka_lenses_alerts"
lenses.topics.lsql.storage      = "_kafka_lenses_lsql_storage"
lenses.topics.alerts.settings   = "_kafka_lenses_alerts_settings"
lenses.topics.metadata          = "_kafka_lenses_topics_metadata"
lenses.topics.external.topology = "__topology"
lenses.topics.external.metrics  = "__topology__metrics"

Warning

These topics are created and managed by Lenses automatically. If you are using ACLs, allow only Lenses to manage these topics.

If ACLs are already enabled on your Kafka cluster, set ACLs for the Lenses user and host on the Lenses system topics:

kafka-acls \
--authorizer-properties zookeeper.connect=my_zk:2181 \
--add \
--allow-principal User:Lenses \
--allow-host lenses-host \
--operation Read \
--operation Write \
--operation Alter \
--topic topic
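
Repeat the command for each Lenses system topic, replacing topic with the topic name (e.g. _kafka_lenses_audits).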

Broker Authentication

Kafka Brokers may be set up for authentication via Simple Authentication and Security Layer (SASL) or SSL/TLS. Lenses needs to be configured accordingly for each authentication scenario.

SSL Authentication and Encryption

If your Kafka cluster uses TLS certificates for authentication, set the broker protocol to SSL and then pass any keystore and truststore configurations to the consumer and producer settings by prefixing them with lenses.kafka.settings., such as:

lenses.kafka.settings.consumer.security.protocol        = SSL
lenses.kafka.settings.consumer.ssl.truststore.location  = /var/private/ssl/client.truststore.jks
lenses.kafka.settings.consumer.ssl.truststore.password  = test1234
lenses.kafka.settings.consumer.ssl.keystore.location    = /var/private/ssl/client.keystore.jks
lenses.kafka.settings.consumer.ssl.keystore.password    = test1234
lenses.kafka.settings.consumer.ssl.key.password         = test1234

lenses.kafka.settings.producer.security.protocol        = SSL
lenses.kafka.settings.producer.ssl.truststore.location  = /var/private/ssl/client.truststore.jks
lenses.kafka.settings.producer.ssl.truststore.password  = test1234
lenses.kafka.settings.producer.ssl.keystore.location    = /var/private/ssl/client.keystore.jks
lenses.kafka.settings.producer.ssl.keystore.password    = test1234
lenses.kafka.settings.producer.ssl.key.password         = test1234

If TLS certificates are only used for encryption of data on the wire, the keystore settings may be omitted:

lenses.kafka.settings.consumer.security.protocol        = SSL
lenses.kafka.settings.consumer.ssl.truststore.location  = /var/private/ssl/client.truststore.jks
lenses.kafka.settings.consumer.ssl.truststore.password  = test1234

lenses.kafka.settings.producer.security.protocol        = SSL
lenses.kafka.settings.producer.ssl.truststore.location  = /var/private/ssl/client.truststore.jks
lenses.kafka.settings.producer.ssl.truststore.password  = test1234

If your brokers’ CA certificate is included in the system-wide truststore, you can omit the truststore settings.
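
If it is not, you can import it into a dedicated truststore with keytool; a sketch assuming the CA certificate is saved as ca.pem (a hypothetical file name):

# Import the brokers' CA certificate into the truststore used above
keytool -importcert -alias kafka-ca -file ca.pem \
  -keystore /var/private/ssl/client.truststore.jks \
  -storepass test1234 -noprompt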

SASL/GSSAPI (Kerberos) Authentication

In order for Lenses to access Kafka in an environment set up with Kerberos (SASL), you need to provide Lenses with a JAAS file as in the example below. If Lenses is to be used with an ACL-enabled cluster, it is advised to use the same principal as the brokers, so that it has super user permissions.

Note

A system configured to work with Kerberos usually provides a system-wide Kerberos configuration file (krb5.conf) that points to the location of the KDC and includes other configuration options necessary to authenticate. If your system is missing this file, please contact your administrator.

KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/path/to/keytab-file"
  storeKey=true
  useTicketCache=false
  serviceName="kafka"
  principal="principal@MYREALM";
};

/*
  Optional section for authentication to zookeeper
  Please also remember to set lenses.zookeeper.security.enabled=true
*/
Client {
  com.sun.security.auth.module.Krb5LoginModule required
   useKeyTab=true
   keyTab="/path/to/keytab-file"
   storeKey=true
   useTicketCache=false
   principal="principal@MYREALM";
};

Once the JAAS file is in place, add it to LENSES_OPTS before starting Lenses:

export LENSES_OPTS="-Djava.security.auth.login.config=/opt/lenses/jaas.conf"

Finally, set the security protocol in the Lenses configuration file:

lenses.kafka.settings.consumer.security.protocol=SASL_PLAINTEXT
lenses.kafka.settings.producer.security.protocol=SASL_PLAINTEXT

By default, the connection to Zookeeper remains unauthenticated. This only affects the quota entries, which are written without any Zookeeper ACLs to protect them. The option lenses.zookeeper.security.enabled may be used to change this behaviour, but in that case it is very important to use the brokers’ principal for Lenses. If Lenses is configured with a different principal, the brokers will not be able to manipulate the quota entries and will fail to start. Please contact our support if you need help with this feature.

SASL_SSL Authentication and Encryption

In this security protocol, Kafka uses a SASL method for authentication and TLS certificates for encrypting data on the wire. As such, the configuration is a combination of the SSL/TLS and SASL configurations.

Please provide Lenses with a JAAS file as described in the previous section and add it to LENSES_OPTS:

export LENSES_OPTS="-Djava.security.auth.login.config=/opt/lenses/jaas.conf"

Set Lenses to use SASL_SSL for its producer and consumer parts. If your CA’s certificate isn’t part of the system-wide truststore, provide Lenses with a truststore as well:

lenses.kafka.settings.consumer.security.protocol        = SASL_SSL
lenses.kafka.settings.consumer.ssl.truststore.location  = /var/private/ssl/client.truststore.jks
lenses.kafka.settings.consumer.ssl.truststore.password  = test1234

lenses.kafka.settings.producer.security.protocol        = SASL_SSL
lenses.kafka.settings.producer.ssl.truststore.location  = /var/private/ssl/client.truststore.jks
lenses.kafka.settings.producer.ssl.truststore.password  = test1234

SASL/SCRAM Authentication

In order for Lenses to access Kafka in an environment set up with SCRAM authentication (SASL/SCRAM), you need to provide Lenses with a JAAS file as in the example below. If Lenses is to be used with an ACL-enabled cluster, it is advised to use the same principal as the brokers, so that it has super user permissions.

KafkaClient {
  org.apache.kafka.common.security.scram.ScramLoginModule required
  username="[USERNAME]"
  password="[PASSWORD]";
};

Once the JAAS file is in place, add it to LENSES_OPTS before starting Lenses:

export LENSES_OPTS="-Djava.security.auth.login.config=/opt/lenses/jaas.conf"

Finally, set the security protocol and mechanism in the Lenses configuration file:

lenses.kafka.settings.consumer.security.protocol=SASL_PLAINTEXT
lenses.kafka.settings.consumer.sasl.mechanism=SCRAM-SHA-256
lenses.kafka.settings.producer.security.protocol=SASL_PLAINTEXT
lenses.kafka.settings.producer.sasl.mechanism=SCRAM-SHA-256

An alternative to the jaas.conf file is to configure JAAS within the Lenses configuration file (lenses.conf). The configuration format is HOCON, so multiline strings should be enclosed within triple quotes:

lenses.kafka.settings.consumer.sasl.jaas.config="""
  org.apache.kafka.common.security.scram.ScramLoginModule required
    username="[USERNAME]"
    password="[PASSWORD]";"""
lenses.kafka.settings.producer.sasl.jaas.config="""
  org.apache.kafka.common.security.scram.ScramLoginModule required
    username="[USERNAME]"
    password="[PASSWORD]";"""

Kafka ACLs

Lenses can manage ACLs either via Zookeeper or via the brokers and the Kafka Admin protocol. The latter mode is mandatory for Kafka version 1.1 onwards. The default method is Zookeeper. To switch to the Kafka Admin method, set the following in your Lenses configuration file:

lenses.acls.broker.mode=true

Alert Routing

Lenses provides in-app alert notifications. To route the alerts to your preferred system, use one of the following options:

Alertmanager

This is the preferred way to route alerts to downstream gateways. Lenses can push alert notifications to the Alertmanager webhook. To enable this functionality, provide the Alertmanager endpoint(s) via the following setting:

lenses.alert.manager.endpoints="http://host1:port1,http://host2:port2"

Read more on the Lenses and Alertmanager Integration, and find out more about Alertmanager.

Slack

To integrate Lenses alerting with Slack, add an Incoming WebHook integration here. Select the #channel where Lenses will post alerts and copy the Webhook URL:

lenses.alert.plugins.slack.enabled      = true
lenses.alert.plugins.slack.webhook.url  = "https://hooks.slack.com/services/SECRET/YYYYYYYYYYY/XXXXXXXX"
lenses.alert.plugins.slack.username     = "lenses"
lenses.alert.plugins.slack.channel      = "#devops"

Java Options

The following environment variables control the Java configuration options when starting Lenses:

  • LENSES_HEAP_OPTS - The heap space settings; the default is -Xmx3g -Xms512m
  • LENSES_JMX_OPTS - JMX options to set
  • LENSES_LOG4J_OPTS - Logging options
  • LENSES_PERFORMANCE_OPTS - Any extra options; the default is -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+DisableExplicitGC -Djava.awt.headless=true
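
For example, to give Lenses a larger heap before starting it (the values here are illustrative, not recommendations):

# Override the default heap settings
export LENSES_HEAP_OPTS="-Xmx6g -Xms1g"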

Logging

Lenses uses Logback for logging. The logback.xml file is picked up from the installation folder. To point to a different location for the Logback configuration file, export LENSES_LOG4J_OPTS as shown below:

export LENSES_LOG4J_OPTS="-Dlogback.configurationFile=file:mylogback.xml"

Log levels

Logback supports hot reloading of changes to the logback.xml file. The default refresh period is 30 seconds, and this can be adjusted via the configuration element:

<configuration scan="true" scanPeriod="30 seconds" >
  ...
</configuration>

The default log level is INFO. To change the level of a logger, adjust its configuration entry, e.g. <logger name="akka" level="DEBUG"/>.

The default loggers are:

<logger name="akka" level="INFO"/>
<logger name="org.apache.zookeeper.ClientCnxn" level="ERROR"/>
<logger name="com.typesafe.sslconfig.ssl.DisabledComplainingHostnameVerifier" level="ERROR"/>
<logger name="org.apache.kafka.clients.consumer.ConsumerConfig" level="ERROR"/>
<logger name="org.apache.kafka.common.utils.AppInfoParser" level="WARN"/>
<logger name="org.apache.kafka.clients.consumer.internals.AbstractCoordinator" level="WARN"/>
<logger name="io.confluent.kafka.serializers.KafkaAvroDeserializerConfig" level="WARN"/>
<logger name="org.I0Itec.zkclient" level="WARN"/>
<logger name="org.apache.zookeeper" level="WARN"/>
<logger name="org.apache.calcite" level="OFF"/>

All log entries are written to the output using the following pattern: %d{ISO8601} %-5p [%c{2}:%L] %m%n.

Log location

All the logs Lenses produces can be found in the logs directory; however, we recommend following the Twelve-Factor App approach to logging and logging to stdout, especially when using a container orchestration engine such as Kubernetes. Leave log collection to agents such as Filebeat, Logstash, Fluentd, or Flume.
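
A console appender for logback.xml can be sketched as follows, reusing the log pattern shown above (the appender name is arbitrary):

<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
  <encoder>
    <pattern>%d{ISO8601} %-5p [%c{2}:%L] %m%n</pattern>
  </encoder>
</appender>
<root level="INFO">
  <appender-ref ref="STDOUT"/>
</root>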

SQL Processors

The Lenses SQL Engine allows users to browse and query topics, and also to build and execute Kafka Streams flows with a SQL-like syntax. Three execution modes are currently available: IN_PROC, CONNECT and KUBERNETES. The last two are available to Enterprise clients and offer fault-tolerant and performant execution of the Kafka Streams apps built via Lenses SQL.

To configure the execution mode, update the lenses.sql.execution.mode entry.
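
For example, to run the processors inside a Kafka Connect cluster (an Enterprise-only mode, as noted below):

lenses.sql.execution.mode = "CONNECT"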

See Configuring SQL Processors for more details.

Note

CONNECT and KUBERNETES Lenses SQL execution modes aren’t available in the trial version of Lenses. For more information please contact our sales department.

Custom Serde

Custom serdes (serializers and deserializers) can be used to extend Lenses with support for additional message formats. Once you have written and compiled your own serde jars (more information here), you should add them to Lenses.

Lenses can read custom serdes from two locations:

  • $LENSES_HOME/serde which is always monitored. $LENSES_HOME is the Lenses installation directory.
  • $LENSES_SERDE_CLASSPATH_OPTS which is monitored if set. This is an environment variable, optionally set by the administrator to a custom path where Lenses will look for new serdes. By default it is not set.

These directories are monitored constantly for new jar files. Once a serde’s libraries are dropped in, the new format should be visible in Lenses within a few seconds.
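
For example, assuming your serde is packaged as my-format-serde.jar (a hypothetical name):

# Drop the jar into the monitored directory; Lenses picks it up within a few seconds
cp my-format-serde.jar $LENSES_HOME/serde/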

In order to use a custom serde with Lenses SQL Processors, the serde should also be added to the Lenses SQL execution engine. If the execution mode is IN_PROC (the default), no additional action is required. If it is CONNECT, the serde jars should be added to the connector directory, together with the default libraries. If it is KUBERNETES, a custom processor image should be created with the custom serde (see Custom Serde for Kubernetes SQL).

In order to add a custom serde to the Lenses docker image, see Lenses docker custom serde.

Topology

When using Kafka Connect connectors and/or Lenses SQL, Lenses builds a graph of all the data flows, and users can interact with this graph via the topology screen. This provides a high-level view of how your data moves in and out of Kafka. LSQL processors (Kafka Streams applications written with LSQL) are managed automatically. Out of the box, Lenses supports over 45 Kafka Connect connectors. To enable a custom connector, or a connector not supported out of the box, set the lenses.connectors.info configuration entry.

Read how to configure topology nodes under the Topology Configuration Section.

Expose Lenses JMX

Lenses can expose its own JMX endpoint so that other systems can monitor it. To enable it, set the lenses.jmx.port option; to disable it, comment out the entry.
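
For example, using the default port from the Quick Start section:

# Expose Lenses' own JMX endpoint
lenses.jmx.port = 9992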

The Prometheus JMX exporter may also be used, which makes Lenses metrics available to Prometheus.

The JMX exporter configuration file and a jmx_exporter build are provided within the monitoring suite. The JMX exporter can run as a java agent, in which case it must be set via the LENSES_OPTS environment variable:

export LENSES_OPTS="-javaagent:/path/to/jmx_exporter/fastdata_agent.jar=9102:/path/to/jmx_exporter/client.yml"

In order to monitor Lenses from remote hosts, JMX remote access should be enabled as well:

export LENSES_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.rmi.port=[JMX_PORT]"

Grafana

If you are using our Operational Monitoring setup you may enable the Grafana dashboards link as follows:

# If using Grafana, set the url, e.g. http://grafana-host:port
lenses.grafana = ""

Prometheus

If you are using Prometheus, add http://lenses-host:port/metrics as a Prometheus target.
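
A minimal scrape-config sketch for prometheus.yml; the target port here assumes the JMX exporter agent from the previous section, listening on 9102:

scrape_configs:
  - job_name: 'lenses'
    static_configs:
      # Replace with your Lenses host and metrics port
      - targets: ['lenses-host:9102']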

Options List Reference

For each configuration option below, the description is followed by whether the option is required, its type, and its default value.

lenses.ip
  Bind HTTP at the given endpoint. Used in conjunction with lenses.port.
  Required: no | Type: string | Default: 0.0.0.0
lenses.port
  The HTTP port the server listens on for connections; serves the UI, REST and WS APIs.
  Required: no | Type: int | Default: 9991
lenses.jmx.port
  The port to bind a JMX agent to, enabling JVM monitoring.
  Required: no | Type: int | Default: 9992
lenses.license.file
  The full path to the license file.
  Required: yes | Type: string | Default: license.json
lenses.secret.file
  The full path to security.conf containing security credentials (read more).
  Required: yes | Type: string | Default: security.conf
lenses.topics.audits
  Topic to store system auditing information; keeps track of WHO did WHAT and WHEN. When a topic, config or connector is created, updated or deleted, an audit message is stored. We advise not to change the default nor to delete the topic.
  Required: yes | Type: string | Default: _kafka_lenses_audits
lenses.topics.metrics
  Topic to store stream processor metrics. When your stateless stream processors are running in Kubernetes or Kafka Connect, this topic collects health checks and performance metrics. We advise not to change the default nor to delete the topic.
  Required: yes | Type: string | Default: _kafka_lenses_metrics
lenses.topics.cluster
  Topic to store broker details. Infrastructure information is used to determine config changes, failures, and nodes added to or removed from a cluster. We advise not to change the default nor to delete the topic.
  Required: yes | Type: string | Default: _kafka_lenses_cluster
lenses.topics.profiles
  Topic to store user preferences; bookmark your most used topics, connectors or SQL processors. We advise not to change the default nor to delete the topic.
  Required: yes | Type: string | Default: _kafka_lenses_profiles
lenses.topics.processors
  Topic to store the SQL processor details. We advise not to change the default nor to delete the topic.
  Required: yes | Type: string | Default: _kafka_lenses_processors
lenses.topics.alerts.storage
  Topic to store the alerts raised. We advise not to change the default nor to delete the topic.
  Required: yes | Type: string | Default: _kafka_lenses_alerts
lenses.topics.alerts.settings
  Topic to store the alert configurations. We advise not to change the default nor to delete the topic.
  Required: yes | Type: string | Default: _kafka_lenses_alerts_settings
lenses.topics.lsql.storage
  Topic to store all data-access SQL queries; know WHO accessed WHAT data and WHEN. We advise not to change the default nor to delete the topic.
  Required: yes | Type: string | Default: _kafka_lenses_lsql_storage
lenses.topics.external.topology
  Topic where external applications publish their topology.
  Required: yes | Type: string | Default: __topology
lenses.topics.external.metrics
  Topic where external applications publish their topology metrics.
  Required: yes | Type: string | Default: __topology__metrics
lenses.kafka.brokers
  A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. Add just a few broker addresses here and Lenses will bootstrap and discover the full cluster membership (which may change dynamically). This list should be in the form "host1:port1,host2:port2,host3:port3".
  Required: yes | Type: string | Default: PLAINTEXT://localhost:9092
lenses.jmx.broker.port
  Required when lenses.zookeeper.hosts has not been set or cannot be set, which is usually the case with cloud deployments. The value is the port to open a JMX connection to each broker; typically all brokers use the same JMX port. This enables the best user experience in Lenses.
  Required: no | Type: int | Default: null
lenses.jmx.brokers
  Used when lenses.zookeeper.hosts has not been set or cannot be set, and not all brokers share the same JMX port (if they do, use lenses.jmx.broker.port). The value is a list of broker-id/JMX-port pairs to use for establishing the JMX connection to each broker, e.g. [{ id:"broker1 id", jmx:"broker1 JMX port" }, { id:"broker2 id", jmx:"broker2 JMX port" }, ...]. This enables the best user experience in Lenses.
  Required: yes | Type: string | Default: []
lenses.zookeeper.hosts
  The details of all available Zookeeper nodes. For every ZooKeeper node specify the connection url (host:port) and, if JMX is enabled, the JMX (host:port). The configuration should be [{url:"hostname1:port1", jmx:"hostname1:port2"}].
  Required: yes | Type: string | Default: [{url: "localhost:2181", jmx: "localhost:11991"}]
lenses.zookeeper.chroot
  The znode (chroot) path, if one is in use. Do not add leading or trailing slashes; for example, if your Kafka cluster uses the zookeeper chroot /kafka, set this value to kafka.
  Required: no | Type: string
lenses.zookeeper.security.enabled
  Enables a secured connection to your Zookeeper. Please read about this setting (see the Kerberos section above) before enabling it.
  Required: no | Type: boolean | Default: false
lenses.schema.registry.urls
  The details of all available Schema Registry nodes, or the load balancer address if one is used. For every instance specify the connection url and, if JMX is enabled, the JMX (host:port).
  Required: yes | Type: string | Default: [{url:"http://localhost:8081", jmx:"localhost:10081"}]
lenses.schema.registry.kerberos
  Set to true if the schema registry is deployed with kerberos authentication.
  Required: no | Type: boolean | Default: false
lenses.schema.registry.keytab
  The location of the keytab if connecting to a kerberized schema registry.
  Required: no | Type: string | Default: null
lenses.schema.registry.principal
  The service principal of the above keytab.
  Required: no | Type: string | Default: null
lenses.connect.clusters
  All available Kafka Connect clusters. For each cluster give a name, list the three backing topics and provide the workers' connection details (host:port), plus the JMX endpoints if enabled and on Kafka 1.0.0+.
  Required: no | Type: array | Default: [{name: "dev", urls: [{url:"http://localhost:8083", jmx:"localhost:11100"}], statuses: "connect-statuses", configs: "connect-configs", offsets: "connect-offsets"}]
lenses.alert.manager.endpoints
  Comma-separated Alertmanager endpoints. If provided, Lenses pushes raised alerts to the downstream notification gateway. The configuration should be "http://host1:port1".
  Required: no | Type: string
lenses.alert.manager.source
  How to identify the source of an alert in Alertmanager. The default is Lenses, but you might want to override it, to UAT for example.
  Required: no | Type: string | Default: Lenses
lenses.alert.manager.generator.url
  A unique URL identifying the creator of this alert. The default is http://lenses, but you might want to override it, to http://<my_instance_url> for example.
  Required: no | Type: string | Default: http://lenses
lenses.grafana
  If using Grafana, provide its URL. The configuration should be "http://grafana-host:port".
  Required: no | Type: string
lenses.sql.max.bytes
  Used when reading data from a Kafka topic; the maximum data size in bytes to return from an LSQL query. If the query returns more data than this limit, any records received after the limit are discarded. Can be overwritten in the LSQL query.
  Required: yes | Type: long | Default: 20971520 (20MB)
lenses.sql.max.time
  Used when reading data from a Kafka topic; the time in milliseconds the query is allowed to run. If the time is exhausted, the records found so far are returned. Can be overwritten in the LSQL query.
  Required: yes | Type: int | Default: 3600000 (1h)
lenses.sql.sample.default
  Number of messages to take in every sampling attempt.
  Required: no | Type: int | Default: 2
lenses.sql.sample.window
  How frequently to sample a topic for new messages when tailing it.
  Required: no | Type: int | Default: 200
lenses.metrics.workers
  Number of workers to distribute the load of querying JMX endpoints and collecting metrics.
  Required: no | Type: int | Default: 16
lenses.offset.workers
  Number of workers to distribute the load of querying topic offsets.
  Required: no | Type: int | Default: 5
lenses.sql.execution.mode
  The SQL execution mode: IN_PROC, CONNECT or KUBERNETES.
  Required: no | Type: string | Default: IN_PROC
lenses.sql.state.dir
  Directory location to store the state of KStreams. If using CONNECT mode, this folder must already exist on each Kafka Connect worker.
  Required: no | Type: string | Default: logs/lenses-sql-kstream-state
lenses.sql.monitor.frequency
  How frequently SQL processors emit healthcheck and performance metrics to lenses.topics.metrics.
  Required: no | Type: int | Default: 10000
lenses.kubernetes.image.name
  The docker/container repository url and name of the Lenses SQL runner.
  Required: no | Type: string | Default: eu.gcr.io/lenses-container-registry/lenses-sql-processor
lenses.kubernetes.image.tag
  The Lenses SQL runner image tag.
  Required: no | Type: string | Default: 2.1
lenses.kubernetes.config.file
  The location of the kubectl config file.
  Required: no | Type: string | Default: /home/lenses/.kube/config
lenses.kubernetes.service.account
  The service account to deploy with. This account should be able to pull images from lenses.kubernetes.image.name.
  Required: no | Type: string | Default: default
lenses.kubernetes.pull.policy
  The pull policy for Kubernetes containers: IfNotPresent or Always.
  Required: no | Type: string | Default: IfNotPresent
lenses.kubernetes.runner.mem.limit
  The memory limit applied to the container.
  Required: no | Type: string | Default: 768Mi
lenses.kubernetes.runner.mem.request
  The memory requested for the container.
  Required: no | Type: string | Default: 512Mi
lenses.kubernetes.runner.java.opts
  Advanced JVM and GC memory tuning parameters.
  Required: no | Type: string | Default: -Xms256m -Xmx512m -XX:MaxPermSize=128m -XX:MaxNewSize=128m -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+DisableExplicitGC -Djava.awt.headless=true
lenses.interval.summary
  The interval (in msec) to check for new topics or topic config changes.
  Required: no | Type: long | Default: 10000
lenses.interval.consumers
  The interval (in msec) to read all consumer info.
  Required: no | Type: int | Default: 10000
lenses.interval.partitions.messages
  The interval (in msec) to refresh partition info.
  Required: no | Type: long | Default: 10000
lenses.interval.type.detection
  The interval (in msec) to check the topic payload type.
  Required: no | Type: long | Default: 30000
lenses.interval.user.session.ms
  The duration (in msec) that a client session stays alive for.
  Required: no | Type: long | Default: 14400000 (4h)
lenses.interval.user.session.refresh
  The interval (in msec) to check whether a client session is idle and should be terminated.
  Required: no | Type: long | Default: 60000
lenses.interval.schema.registry.healthcheck
  The interval (in msec) to check the status of schema registry instances.
  Required: no | Type: long | Default: 30000
lenses.interval.topology.topics.metrics
  The interval (in msec) to refresh the topology status page.
  Required: no | Type: long | Default: 30000
lenses.interval.alert.manager.healthcheck
  The interval (in msec) to check the status of the Alertmanager instances.
  Required: no | Type: long | Default: 5000
lenses.interval.alert.manager.publish
  The interval (in msec) at which unresolved alerts are published to Alertmanager.
  Required: no | Type: long | Default: 30000
lenses.interval.jmx.refresh.zk
  The interval (in msec) to get Zookeeper JMX.
  Required: yes | Type: long | Default: 5000
lenses.interval.jmx.refresh.sr
  The interval (in msec) to get Schema Registry JMX.
  Required: yes | Type: long | Default: 5000
lenses.interval.jmx.refresh.broker
  The interval (in msec) to get Broker JMX.
  Required: yes | Type: long | Default: 5000
lenses.interval.jmx.refresh.alert.manager
  The interval (in msec) to get Alertmanager JMX.
  Required: yes | Type: long
lenses.interval.jmx.refresh.connect
  The interval (in msec) to get Connect JMX.
  Required: yes | Type: long
lenses.interval.jmx.refresh.brokers.in.zk
  The interval (in msec) to refresh the brokers from Zookeeper.
  Required: yes | Type: long | Default: 5000
lenses.kafka.ws.poll.ms
  Max time (in msec) a consumer polls for data on each WS API request.
  Required: no | Type: int | Default: 1000
lenses.kafka.ws.buffer.size
  Max buffer size for the WS consumer.
  Required: no | Type: int | Default: 10000
lenses.kafka.ws.max.poll.records
  The maximum number of records returned in a single call to poll(). It impacts how many records are pushed at once to the WS client.
  Required: no | Type: int | Default: 1000
lenses.kafka.ws.heartbeat.ms
  The interval (in msec) to send messages to the client to keep the TCP connection open.
  Required: no | Type: int | Default: 30000
lenses.access.control.allow.methods
  Restrict the HTTP verbs allowed to initiate a cross-origin HTTP request.
  Required: no | Type: string | Default: GET,POST,PUT,DELETE,OPTIONS
lenses.access.control.allow.origin
  Restrict cross-origin HTTP requests to specific hosts.
  Required: no | Type: string
lenses.schema.registry.topics
  The backing topic where schemas are stored.
  Required: no | Type: string | Default: _schemas
lenses.schema.registry.delete
  Allows subjects to be deleted in the Schema Registry; disabled by default. Requires schema-registry version 3.3.0 or later.
  Required: no | Type: boolean | Default: false
lenses.allow.weak.SSL
  Allow connecting to https:// services even when self-signed certificates are used.
  Required: no | Type: boolean | Default: false
lenses.telemetry.enable
  Enable or disable telemetry data collection.
  Required: no | Type: boolean | Default: true
lenses.curator.retries
  The number of attempts to read the broker metadata from Zookeeper.
  Required: no | Type: int | Default: 3
lenses.curator.initial.sleep.time.ms
  The initial amount of time to wait between retries to ZK.
  Required: no | Type: int | Default: 2000
lenses.zookeeper.max.session.ms
  The max time (in msec) to wait for the Zookeeper server to reply to a request. The implementation requires that the timeout be a minimum of 2 times the tickTime (as set in the server configuration).
  Required: no | Type: int | Default: 10000
lenses.zookeeper.max.connection.ms
  The duration (in msec) to wait for the Zookeeper client to establish a new connection.
  Required: no | Type: int | Default: 10000
lenses.akka.request.timeout.ms
  The maximum time (in msec) to wait for an Akka Actor to reply.
  Required: no | Type: int | Default: 10000
lenses.kafka.control.topics
  List of Kafka topics to be marked as system topics.
  Required: no | Type: string | Default: ["connect-configs", "connect-offsets", "connect-status", "connect-statuses", "_schemas", "__consumer_offsets", "_kafka_lenses_", "lsql_", "__transaction_state", "__topology", "__topology__metrics"]
lenses.alert.buffer.size
  The number of most recently raised alerts to keep in the cache.
  Required: no | Type: int | Default: 100
lenses.kafka.settings.consumer
  Allows additional Kafka consumer settings to be specified. When Lenses creates an instance of the KafkaConsumer class, it will use these properties during initialization.
  Required: no | Type: string | Default: {reconnect.backoff.ms = 1000, retry.backoff.ms = 1000}
lenses.kafka.settings.producer
  Allows additional Kafka producer settings to be specified. When Lenses creates an instance of the KafkaProducer class, it will use these properties during initialization.
  Required: no | Type: string | Default: {reconnect.backoff.ms = 1000, retry.backoff.ms = 1000}
lenses.kafka.settings.kstream
  Allows additional Kafka KStreams settings to be specified.
  Required: no | Type: string

The last three keys allow configuring the settings of Lenses’ internal consumers, producers, and KStreams instances. Example: lenses.kafka.settings.producer.compression.type = snappy

Example

# Set the ip:port for Lenses to bind to
lenses.ip                                   = 0.0.0.0
lenses.port                                 = 9991
#lenses.jmx.port                            = 9992

# License file allowing connecting to up to N brokers
lenses.license.file                         = "license.json"

# Lenses security configuration is managed in an external file
lenses.secret.file                          = "security.conf"

# Topics created on start-up that Lenses uses to store state
lenses.topics.audits                        = "_kafka_lenses_audits"
lenses.topics.metrics                       = "_kafka_lenses_metrics"
lenses.topics.cluster                       = "_kafka_lenses_cluster"
lenses.topics.profiles                      = "_kafka_lenses_profiles"
lenses.topics.processors                    = "_kafka_lenses_processors"
lenses.topics.alerts.storage                = "_kafka_lenses_alerts"
lenses.topics.lsql.storage                  = "_kafka_lenses_lsql_storage"
lenses.topics.alerts.settings               = "_kafka_lenses_alerts_settings"
lenses.topics.metadata                      = "_kafka_lenses_topics_metadata"
lenses.topics.external.topology             = "__topology"
lenses.topics.external.metrics              = "__topology__metrics"

# Set up infrastructure end-points
lenses.kafka.brokers                        = "PLAINTEXT://localhost:9092"
lenses.zookeeper.hosts                      = [{url:"localhost:2181", jmx:"localhost:11991"}]
lenses.zookeeper.chroot                     = ""

# Optional integrations
lenses.schema.registry.urls                 = [{url:"http://localhost:8081", jmx:"localhost:10081"}]
lenses.connect.clusters                     = [{name: "dev", urls: [{url:"http://localhost:8083", jmx:"localhost:11100"}], statuses: "connect-statuses", configs: "connect-configs", offsets: "connect-offsets" }]
lenses.alert.manager.endpoints              = "http://host1:port1,http://host2:port2"
lenses.grafana                              = "http://grafana-host:port"

# Set up Lenses SQL
lenses.sql.max.bytes                        = 20971520
lenses.sql.max.time                         = 3600000
lenses.sql.sample.default                   = 2         # Sample 2 messages every 200 msec
lenses.sql.sample.window                    = 200

# Set up Lenses workers
lenses.metrics.workers                      = 16
lenses.offset.workers                       = 5

# Set up Lenses SQL processing engine
lenses.sql.execution.mode                   = "IN_PROC" # "CONNECT" # "KUBERNETES"
lenses.sql.state.dir                        = "logs/lenses-sql-kstream-state"
lenses.sql.monitor.frequency                = 10000

# Kubernetes configuration
lenses.kubernetes.image.name                = "eu.gcr.io/lenses-container-registry/lenses-sql-processor"
lenses.kubernetes.image.tag                 = "2.1"
lenses.kubernetes.config.file               = "/home/lenses/.kube/config"
lenses.kubernetes.service.account           = "default"
lenses.kubernetes.pull.policy               = "IfNotPresent"
lenses.kubernetes.watch.reconnect.limit     = 10
lenses.kubernetes.runner.mem.limit          = "768Mi"
lenses.kubernetes.runner.mem.request        = "512Mi"
lenses.kubernetes.runner.java.opts          = "-Xms256m -Xmx512m -XX:MaxPermSize=128m -XX:MaxNewSize=128m -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+DisableExplicitGC -Djava.awt.headless=true"

# Schema Registry topics and whether to allow deleting schemas in schema registry
lenses.schema.registry.topics               = "_schemas"
lenses.schema.registry.delete               = false

# Lenses internal refresh (in msec)
lenses.interval.summary                     = 10000
lenses.interval.consumers                   = 10000
lenses.interval.partitions.messages         = 10000
lenses.interval.type.detection              = 30000
lenses.interval.user.session.ms             = 14400000
lenses.interval.user.session.refresh        = 60000
lenses.interval.schema.registry.healthcheck = 30000
lenses.interval.topology.topics.metrics     = 30000
lenses.interval.processor.metrics.buffer.ms = 30000
lenses.interval.alert.manager.healthcheck   = 5000
lenses.interval.alert.manager.publish       = 30000

# Lenses JMX internal refresh (in msec)
lenses.interval.jmx.refresh.zk              = 30000
lenses.interval.jmx.refresh.sr              = 30000
lenses.interval.jmx.refresh.broker          = 30000
lenses.interval.jmx.refresh.alert.manager   = 30000
lenses.interval.jmx.refresh.connect         = 30000
lenses.interval.jmx.refresh.brokers.in.zk   = 30000

# Lenses Web Socket API
lenses.kafka.ws.poll.ms                     = 1000
lenses.kafka.ws.buffer.size                 = 10000
lenses.kafka.ws.max.poll.records            = 1000
lenses.kafka.ws.heartbeat.ms                = 30000

# Set access control
lenses.access.control.allow.methods         = "GET,POST,PUT,DELETE,OPTIONS"
lenses.access.control.allow.origin          = "*"

# Whether to allow self-signed certificates and telemetry
lenses.allow.weak.SSL                       = true
lenses.telemetry.enable                     = true

# Zookeeper connections configs
lenses.curator.retries                      = 3
lenses.curator.initial.sleep.time.ms        = 2000
lenses.zookeeper.max.session.ms             = 10000
lenses.zookeeper.max.connection.ms          = 10000

lenses.akka.request.timeout.ms = 10000
lenses.kafka.control.topics = ["connect-configs", "connect-offsets", "connect-status", "connect-statuses", "_schemas", "__consumer_offsets", "_kafka_lenses_", "lsql_", "__transaction_state", "__topology", "__topology__metrics"]

# Set up Alerts and Integrations
lenses.alert.buffer.size                    = 100
lenses.alert.manager.source                 = "Lenses"
lenses.alert.manager.generator.url          = "http://lenses"  # A unique URL identifying the creator of this alert.

# We override the aggressive defaults. Don't go too low, as it will affect performance when the cluster is down
lenses.kafka.settings.consumer {
    reconnect.backoff.ms = 1000
    retry.backoff.ms = 1000
}

# We override the aggressive defaults. Don't go too low, as it will affect performance when the cluster is down
lenses.kafka.settings.producer {
    reconnect.backoff.ms = 1000
    retry.backoff.ms = 1000
}

lenses.kafka.settings.producer.compression.type=snappy