# Configuration Reference

{% hint style="success" %}
Set in **lenses.conf**
{% endhint %}

## Basics <a href="#basics" id="basics"></a>

Reference documentation of all configuration and authentication options:

<table data-full-width="true"><thead><tr><th width="256">Key</th><th width="369">Description</th><th width="164">Default</th><th width="155">Type</th><th>Required</th></tr></thead><tbody><tr><td>lenses.eula.accept</td><td>Accept the <a href="https://lenses.io/legals/eula">Lenses EULA</a></td><td>false</td><td>boolean</td><td>yes</td></tr><tr><td>lenses.ip</td><td>Bind HTTP at the given endpoint. Use in conjunction with <code>lenses.port</code></td><td>0.0.0.0</td><td>string</td><td>no</td></tr><tr><td>lenses.port</td><td>The HTTP port to listen on for API, UI and WS calls</td><td>9991</td><td>int</td><td>no</td></tr><tr><td>lenses.jmx.port</td><td>Bind the JMX port to enable monitoring Lenses</td><td></td><td>int</td><td>no</td></tr><tr><td>lenses.root.path</td><td>The path from which all the Lenses URLs are served</td><td></td><td>string</td><td>no</td></tr><tr><td>lenses.secret.file</td><td>The full path to <code>security.conf</code> for security credentials</td><td>security.conf</td><td>string</td><td>no</td></tr><tr><td>lenses.sql.execution.mode</td><td>Streaming SQL mode: <code>IN_PROC</code> (test mode) or <code>KUBERNETES</code> (prod mode)</td><td>IN_PROC</td><td>string</td><td>no</td></tr><tr><td>lenses.offset.workers</td><td>Number of workers to monitor topic offsets</td><td>5</td><td>int</td><td>no</td></tr><tr><td>lenses.kafka.control.topics</td><td>An array of topics to be treated as “system topics”</td><td>list</td><td>array</td><td>no</td></tr><tr><td>lenses.grafana</td><td>Your Grafana URL, e.g. http://grafanahost:port</td><td></td><td>string</td><td>no</td></tr><tr><td>lenses.api.response.cache.enable</td><td>If enabled, disables client caching of the Lenses API HTTP responses by adding these HTTP headers: <code>Cache-Control: no-cache, no-store, must-revalidate</code>, <code>Pragma: no-cache</code>, and <code>Expires: -1</code></td><td>false</td><td>boolean</td><td>no</td></tr><tr><td>lenses.workspace</td><td>Directory to write temp files. If write access is denied, Lenses will fall back to <code>/tmp</code></td><td>/run</td><td>string</td><td>no</td></tr><tr><td>lenses.connections.webhook.whitelist</td><td><p>Specifies a whitelist of allowed IP ranges and hostnames for webhook connections; only addresses matching the whitelist are permitted.</p><p>The value should be a list of strings, where each string can be:</p><ul><li>An IPv4 address (e.g., "192.168.1.10")</li><li>An IPv4 CIDR range (e.g., "192.168.1.0/24")</li><li>An IPv6 address (e.g., "2001:db8::1")</li><li>An IPv6 CIDR range (e.g., "2001:db8::/32")</li><li>A hostname pattern (e.g., "*.trusted.com", "localhost", "api.example.com")</li></ul></td><td></td><td>array</td><td>no</td></tr></tbody></table>
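
Putting a few of the options above together, a minimal `lenses.conf` might look like the following sketch (the Grafana hostname is a placeholder, not a value from this reference):

```
# Minimal sketch of lenses.conf; adjust values for your environment
lenses.eula.accept = true             # required
lenses.ip          = "0.0.0.0"        # default bind address
lenses.port        = 9991             # default HTTP port
lenses.secret.file = "security.conf"  # path to security credentials
lenses.grafana     = "http://grafana.example.com:3000"  # placeholder URL
```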

## Default system topics <a href="#default-system-topics" id="default-system-topics"></a>

System or control topics are created by services for their internal use. Below is the built-in list of patterns used to identify them:

* `_schemas`
* `__consumer_offsets`
* `_kafka_lenses_`
* `lsql_*`
* `lsql-*`
* `__transaction_state`
* `__topology`
* `__topology__metrics`
* `_confluent*`
* `*-KSTREAM-*`
* `*-TableSource-*`
* `*-changelog`
* `__amazon_msk*`

The wildcard (`*`) matches any sequence of characters, so a single entry can capture a set of topics rather than just one. When no wildcard is specified, Lenses matches the exact entry name.
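
As a sketch, custom application topics can be marked as system topics with `lenses.kafka.control.topics`; the `_myapp_*` pattern below is hypothetical, and you should verify against your Lenses version whether this list replaces or extends the built-in defaults:

```
lenses.kafka.control.topics = [
  "_schemas",
  "__consumer_offsets",
  "_myapp_*"   # hypothetical: wildcard captures all topics with this prefix
]
```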

## Security

### TLS <a href="#tls" id="tls"></a>

<table data-full-width="true"><thead><tr><th>Key</th><th>Description</th><th>Default</th></tr></thead><tbody><tr><td>lenses.access.control.allow.methods</td><td>HTTP verbs allowed in cross-origin HTTP requests</td><td><code>GET,POST,PUT,DELETE,OPTIONS</code></td></tr><tr><td>lenses.access.control.allow.origin</td><td>Allowed hosts for cross-origin HTTP requests</td><td><code>*</code></td></tr><tr><td>lenses.allow.weak.ssl</td><td>Allow <code>https://</code> with self-signed certificates</td><td><code>false</code></td></tr><tr><td>lenses.ssl.keystore.location</td><td>The full path to the keystore file used to enable TLS on the Lenses port</td><td></td></tr><tr><td>lenses.ssl.keystore.password</td><td>Password for the keystore file</td><td></td></tr><tr><td>lenses.ssl.key.password</td><td>Password for the SSL key within the keystore</td><td></td></tr><tr><td>lenses.ssl.enabled.protocols</td><td>Version of TLS protocol to use</td><td><code>TLSv1.2</code></td></tr><tr><td>lenses.ssl.algorithm</td><td>X509 or PKIX algorithm to use for TLS termination</td><td><code>SunX509</code></td></tr><tr><td>lenses.ssl.cipher.suites</td><td>Comma-separated list of ciphers allowed for TLS negotiation</td><td></td></tr></tbody></table>
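
For example, TLS on the Lenses port might be enabled with a snippet like the following (file paths and passwords are placeholders):

```
lenses.ssl.keystore.location = "/etc/lenses/keystore.jks"   # placeholder path
lenses.ssl.keystore.password = "changeit"                   # placeholder password
lenses.ssl.key.password      = "changeit"
lenses.ssl.enabled.protocols = "TLSv1.2"
```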

### Kerberos <a href="#kerberos" id="kerberos"></a>

<table data-full-width="true"><thead><tr><th width="401.33333333333337">Key</th><th>Description</th><th>Default</th></tr></thead><tbody><tr><td>lenses.security.kerberos.service.principal</td><td>The Kerberos principal for Lenses to use in the SPNEGO form: <code>HTTP/lenses.address@REALM.COM</code></td><td></td></tr><tr><td>lenses.security.kerberos.keytab</td><td>Path to Kerberos keytab with the service principal. It should not be password protected</td><td></td></tr><tr><td>lenses.security.kerberos.debug</td><td>Enable Java’s JAAS debugging information</td><td><code>false</code></td></tr></tbody></table>
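
A sketch of the Kerberos settings; the realm, hostname, and keytab path are placeholders:

```
lenses.security.kerberos.service.principal = "HTTP/lenses.example.com@EXAMPLE.COM"
lenses.security.kerberos.keytab            = "/etc/lenses/lenses.keytab"
lenses.security.kerberos.debug             = false
```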

## Persistent storage

***

### Common <a href="#common" id="common"></a>

<table data-full-width="true"><thead><tr><th>Key</th><th>Description</th><th width="97">Default</th><th width="75">Type</th><th>Required</th></tr></thead><tbody><tr><td>lenses.storage.hikaricp.[*]</td><td>Passes additional properties to the HikariCP connection pool</td><td></td><td></td><td>no</td></tr></tbody></table>

### Postgres <a href="#postgressql" id="postgressql"></a>

<table data-full-width="true"><thead><tr><th width="373">Key</th><th width="332">Description</th><th width="100">Default</th><th width="127">Type</th><th>Required</th></tr></thead><tbody><tr><td>lenses.storage.postgres.host</td><td>Host of PostgreSQL server for Lenses to use for persistence</td><td></td><td>string</td><td>no</td></tr><tr><td>lenses.storage.postgres.port</td><td>Port of PostgreSQL server for Lenses to use for persistence</td><td><code>5432</code></td><td>integer</td><td>no</td></tr><tr><td>lenses.storage.postgres.username</td><td>Username for PostgreSQL database user</td><td></td><td>string</td><td>no</td></tr><tr><td>lenses.storage.postgres.password</td><td>Password for PostgreSQL database user</td><td></td><td>string</td><td>no</td></tr><tr><td>lenses.storage.postgres.database</td><td>PostgreSQL database name for Lenses to use for persistence</td><td></td><td>string</td><td>no</td></tr><tr><td>lenses.storage.postgres.schema</td><td>PostgreSQL schema name for Lenses to use for persistence</td><td><code>"public"</code></td><td>string</td><td>no</td></tr><tr><td>lenses.storage.postgres.properties.[*]</td><td>To pass additional properties to PostgreSQL JDBC driver</td><td></td><td></td><td>no</td></tr></tbody></table>
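
A sketch of a PostgreSQL persistence setup; the host, credentials, and the `sslmode` JDBC property are placeholders for your own values:

```
lenses.storage.postgres.host     = "postgres.example.com"
lenses.storage.postgres.port     = 5432
lenses.storage.postgres.username = "lenses"
lenses.storage.postgres.password = "changeme"
lenses.storage.postgres.database = "lenses"
lenses.storage.postgres.schema   = "public"
# Additional properties are passed through to the JDBC driver:
lenses.storage.postgres.properties.sslmode = "require"
```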

### Microsoft SQL Server <a href="#microsoft-sql-server" id="microsoft-sql-server"></a>

{% hint style="success" %}
Set in **security.conf**
{% endhint %}

<table data-full-width="true"><thead><tr><th>Key</th><th>Description</th><th>Default</th><th>Type</th><th>Required</th></tr></thead><tbody><tr><td>lenses.storage.mssql.host</td><td>Specifies the hostname or IP address of the Microsoft SQL Server instance</td><td></td><td>string</td><td>yes</td></tr><tr><td>lenses.storage.mssql.port</td><td>Specifies the TCP port number that the Lenses application uses to connect to a Microsoft SQL Server database</td><td></td><td>int</td><td>yes</td></tr><tr><td>lenses.storage.mssql.schema</td><td>Specifies the database schema Lenses uses within Microsoft SQL Server</td><td></td><td>string</td><td>yes</td></tr><tr><td>lenses.storage.mssql.database</td><td>Specifies the Microsoft SQL Server database Lenses connects to</td><td></td><td>string</td><td>yes</td></tr><tr><td>lenses.storage.mssql.username</td><td>Specifies the username that the Lenses application uses to authenticate with the Microsoft SQL Server database</td><td></td><td>string</td><td>yes</td></tr><tr><td>lenses.storage.mssql.password</td><td>Specifies the password that the Lenses application uses to authenticate with the Microsoft SQL Server database</td><td></td><td>string</td><td>yes</td></tr><tr><td>lenses.storage.mssql.properties</td><td>Allows additional properties to be set for the Microsoft SQL Server JDBC driver</td><td></td><td></td><td>no</td></tr></tbody></table>


## Schema registries <a href="#schema-registries" id="schema-registries"></a>

If record schemas are managed centrally, the connectivity to the Schema Registry nodes is defined by a Lenses *Connection*.

There are two static config entries to enable/disable the deletion of schemas:

<table data-full-width="true"><thead><tr><th width="332.33333333333337">Key</th><th>Description</th><th>Type</th></tr></thead><tbody><tr><td>lenses.schema.registry.delete</td><td>Allow schemas to be deleted. Default is <code>false</code></td><td>boolean</td></tr><tr><td>lenses.schema.registry.cascade.delete</td><td>Deletes associated schemas when a topic is deleted. Default is <code>false</code></td><td>boolean</td></tr></tbody></table>
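
For instance, to opt in to schema deletion (both options default to `false`):

```
lenses.schema.registry.delete         = true
lenses.schema.registry.cascade.delete = true   # also delete schemas when their topic is deleted
```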

## Deployments

Options for specific deployment targets:

* Global options
* Kubernetes

### Global options <a href="#global-options" id="global-options"></a>

Common settings, independently of the underlying deployment target:

<table data-full-width="true"><thead><tr><th width="341.33333333333337">Key</th><th>Description</th><th>Default</th></tr></thead><tbody><tr><td>lenses.deployments.events.buffer.size</td><td>Buffer size for events coming from Deployment targets such as Kubernetes</td><td><code>10000</code></td></tr><tr><td>lenses.deployments.errors.buffer.size</td><td>Buffer size for errors happening on the communication between Lenses and the Deployment targets such as Kubernetes</td><td><code>1000</code></td></tr></tbody></table>

### Kubernetes <a href="#kubernetes" id="kubernetes"></a>

Kubernetes connectivity is optional. The minimum supported Kubernetes version is 0.11.10. All settings are strings.

<table data-full-width="true"><thead><tr><th width="386.33333333333337">Key</th><th width="359">Description</th><th>Default</th></tr></thead><tbody><tr><td>lenses.kubernetes.processor.image.name</td><td>The URL of the streaming SQL Docker image for Kubernetes</td><td><code>lensesioextra/sql-processor</code></td></tr><tr><td>lenses.kubernetes.processor.image.tag</td><td>The version/tag of the above container</td><td><code>5.2</code></td></tr><tr><td>lenses.kubernetes.config.file</td><td>The path for the <code>kubectl</code> config file</td><td><code>/home/lenses/.kube/config</code></td></tr><tr><td>lenses.kubernetes.pull.policy</td><td>Pull policy for Kubernetes containers: <code>IfNotPresent</code> or <code>Always</code></td><td><code>IfNotPresent</code></td></tr><tr><td>lenses.kubernetes.service.account</td><td>The service account for deployments. Will also pull the image</td><td><code>default</code></td></tr><tr><td>lenses.kubernetes.init.container.image.name</td><td>The docker/container repository URL and name of the Init Container image used to deploy applications to Kubernetes</td><td><code>lensesio/lenses-cli</code></td></tr><tr><td>lenses.kubernetes.init.container.image.tag</td><td>The tag of the Init Container image used to deploy applications to Kubernetes</td><td><code>5.2.0</code></td></tr><tr><td>lenses.kubernetes.watch.reconnect.limit</td><td>How many times to reconnect to the Kubernetes Watcher before considering the cluster unavailable</td><td><code>10</code></td></tr><tr><td>lenses.kubernetes.watch.reconnect.interval</td><td>How long to wait between Kubernetes Watcher reconnection attempts, expressed in milliseconds</td><td><code>5000</code></td></tr><tr><td>lenses.kubernetes.websocket.timeout</td><td>How long to wait for a Kubernetes Websocket response, expressed in milliseconds</td><td><code>15000</code></td></tr><tr><td>lenses.kubernetes.websocket.ping.interval</td><td>How often to ping the Kubernetes Websocket to check it’s alive, expressed in milliseconds</td><td><code>30000</code></td></tr><tr><td>lenses.kubernetes.pod.heap</td><td>The max amount of memory the underlying Java process will use</td><td><code>900M</code></td></tr><tr><td>lenses.kubernetes.pod.min.heap</td><td>The initial amount of memory the underlying Java process will allocate</td><td><code>128M</code></td></tr><tr><td>lenses.kubernetes.pod.mem.request</td><td>Controls how much memory the Pod container will request</td><td><code>128M</code></td></tr><tr><td>lenses.kubernetes.pod.mem.limit</td><td>Controls the Pod container memory limit</td><td><code>1152M</code></td></tr><tr><td>lenses.kubernetes.pod.cpu.request</td><td>Controls how much CPU the Pod container will request</td><td><code>null</code></td></tr><tr><td>lenses.kubernetes.pod.cpu.limit</td><td>Controls the Pod container CPU limit</td><td><code>null</code></td></tr><tr><td>lenses.kubernetes.namespaces</td><td>Object setting the list of Kubernetes namespaces that Lenses will see for each of the specified and configured clusters</td><td><code>null</code></td></tr><tr><td>lenses.kubernetes.pod.liveness.initial.delay</td><td>Amount of time Kubernetes will wait before checking the Processor’s health for the first time. It can be expressed as <code>30 second</code>, <code>2 minute</code> or <code>3 hour</code>; note the time unit is singular</td><td><code>60 second</code></td></tr><tr><td>lenses.kubernetes.config.reload.interval</td><td>Time interval to reload the Kubernetes configuration file, expressed in milliseconds</td><td><code>30000</code></td></tr></tbody></table>
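
A sketch combining some of these settings; the values shown are the documented defaults and can be tuned per environment:

```
lenses.kubernetes.config.file          = "/home/lenses/.kube/config"
lenses.kubernetes.processor.image.name = "lensesioextra/sql-processor"
lenses.kubernetes.processor.image.tag  = "5.2"
lenses.kubernetes.pull.policy          = "IfNotPresent"
lenses.kubernetes.pod.heap             = "900M"
lenses.kubernetes.pod.mem.limit        = "1152M"
```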

## SQL snapshot (Explore & Studio)

Optimization settings for SQL queries.

<table data-full-width="true"><thead><tr><th width="356">Key</th><th width="338">Description</th><th width="101">Type</th><th>Default</th></tr></thead><tbody><tr><td>lenses.sql.settings.max.size</td><td>Restricts the max bytes that a kafka sql query will return</td><td>long</td><td><code>20971520</code> (20MB)</td></tr><tr><td>lenses.sql.settings.max.query.time</td><td>Max time (in msec) that a sql query will run</td><td>int</td><td><code>3600000</code> (1h)</td></tr><tr><td>lenses.sql.settings.max.idle.time</td><td>Max time (in msec) for a query when it reaches the end of the topic</td><td>int</td><td><code>5000</code> (5 sec)</td></tr><tr><td>lenses.sql.settings.show.bad.records</td><td>By default show bad records when querying a kafka topic</td><td>boolean</td><td><code>true</code></td></tr><tr><td>lenses.sql.settings.format.timestamp</td><td>By default convert AVRO date to human readable format</td><td>boolean</td><td><code>true</code></td></tr><tr><td>lenses.sql.settings.live.aggs</td><td>By default allow aggregation queries on kafka data</td><td>boolean</td><td><code>true</code></td></tr><tr><td>lenses.sql.sample.default</td><td>Number of messages to sample when live tailing a kafka topic</td><td>int</td><td><code>2</code>/window</td></tr><tr><td>lenses.sql.sample.window</td><td>How frequently to sample messages when tailing a kafka topic</td><td>int</td><td><code>200</code> msec</td></tr><tr><td>lenses.sql.websocket.buffer</td><td>Buffer size for messages in a SQL query</td><td>int</td><td><code>10000</code></td></tr><tr><td>lenses.metrics.workers</td><td>Number of workers for parallelising SQL queries</td><td>int</td><td><code>16</code></td></tr><tr><td>lenses.kafka.ws.buffer.size</td><td>Buffer size for WebSocket consumer</td><td>int</td><td><code>10000</code></td></tr><tr><td>lenses.kafka.ws.max.poll.records</td><td>Max number of kafka messages to return in a single 
poll()</td><td>long</td><td><code>1000</code></td></tr><tr><td>lenses.sql.state.dir</td><td>Folder to store KStreams state.</td><td>string</td><td><code>logs/sql-kstream-state</code></td></tr><tr><td>lenses.sql.udf.packages</td><td>The list of allowed java packages for UDFs/UDAFs</td><td>array of strings</td><td><code>["io.lenses.sql.udf"]</code></td></tr><tr><td>lenses.sql.settings.max.concurrent.queries</td><td>The maximum number of concurrent queries the engines allows</td><td>int</td><td>100</td></tr><tr><td>lenses.sql.settings.max.concurrent.queries.per.user</td><td>The maximum number of concurrent queries a user can run</td><td>int</td><td>2</td></tr></tbody></table>
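
As an illustration, the snapshot limits can be tightened; the values below are arbitrary examples, not recommendations:

```
lenses.sql.settings.max.size       = 10485760   # cap query results at 10MB (default 20MB)
lenses.sql.settings.max.query.time = 900000     # 15 minutes (default 1h)
lenses.sql.settings.max.concurrent.queries.per.user = 5   # default 2
```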

## Lenses internal Kafka topics

Lenses requires these Kafka topics to be available; otherwise, it will try to create them. Either create the topics manually before Lenses runs, or grant Lenses the Kafka ACLs needed to create them:

<table data-full-width="true"><thead><tr><th width="277">Key</th><th width="159">Description</th><th width="96">Partition</th><th width="161">Replication</th><th width="178">Default name</th><th width="114">Compacted</th><th>Retention</th></tr></thead><tbody><tr><td>lenses.topics.external.topology</td><td>Topic for applications to publish their topology</td><td><code>1</code></td><td><code>3</code> (recommended)</td><td><code>__topology</code></td><td>yes</td><td>N/A</td></tr><tr><td>lenses.topics.external.metrics</td><td>Topic for external applications to publish their metrics</td><td><code>1</code></td><td><code>3</code> (recommended)</td><td><code>__topology__metrics</code></td><td>no</td><td>1 day</td></tr><tr><td>lenses.topics.metrics</td><td>Topic for SQL Processors to send their metrics</td><td><code>1</code></td><td><code>3</code> (recommended)</td><td><code>_kafka_lenses_metrics</code></td><td>no</td><td></td></tr></tbody></table>

To allow for fine-grained control over the replication factor of the three topics, the following settings are available:

<table data-full-width="true"><thead><tr><th width="425">Key</th><th width="493">Description</th><th>Default</th></tr></thead><tbody><tr><td>lenses.topics.replication.external.topology</td><td>Replication factor for the <code>lenses.topics.external.topology</code> topic</td><td>1</td></tr><tr><td>lenses.topics.replication.external.metrics</td><td>Replication factor for the <code>lenses.topics.external.metrics</code> topic</td><td>1</td></tr><tr><td>lenses.topics.replication.metrics</td><td>Replication factor for the <code>lenses.topics.metrics</code> topic</td><td>1</td></tr></tbody></table>

{% hint style="warning" %}
When configuring the replication factor for your deployment, it's essential to consider the requirements imposed by your cloud provider. Many cloud providers enforce a minimum replication factor to ensure data durability and high availability. For example, IBM Cloud mandates a minimum replication factor of 3. Therefore, it's crucial to set the replication factor for the Lenses internal topics to at least 3 when deploying Lenses on IBM Cloud.
{% endhint %}
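
For example, a deployment on a provider that mandates a replication factor of 3 would set:

```
lenses.topics.replication.external.topology = 3
lenses.topics.replication.external.metrics  = 3
lenses.topics.replication.metrics           = 3
```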

## Advanced

All time configuration options are in milliseconds.

<table data-full-width="true"><thead><tr><th width="487">Key</th><th width="239">Description</th><th width="103">Type</th><th>Default</th></tr></thead><tbody><tr><td>lenses.interval.summary</td><td>How often to refresh kafka topic list and configs</td><td>long</td><td><code>10000</code></td></tr><tr><td>lenses.interval.consumers.refresh.ms</td><td>How often to refresh kafka consumer group info</td><td>long</td><td><code>10000</code></td></tr><tr><td>lenses.interval.consumers.timeout.ms</td><td>How long to wait for kafka consumer group info to be retrieved</td><td>long</td><td><code>300000</code></td></tr><tr><td>lenses.interval.consumer.lag.window.ms</td><td>Sliding window size in milliseconds used to estimate consumer lag in time via linear regression</td><td>long</td><td><code>300000</code></td></tr><tr><td>lenses.interval.partitions.messages</td><td>How often to refresh kafka partition info</td><td>long</td><td><code>10000</code></td></tr><tr><td>lenses.interval.type.detection</td><td>How often to check kafka topic payload info</td><td>long</td><td><code>30000</code></td></tr><tr><td>lenses.interval.user.session.ms</td><td>How long a client-session stays alive if inactive (4 hours)</td><td>long</td><td><code>14400000</code></td></tr><tr><td>lenses.interval.user.session.refresh</td><td>How often to check for idle client sessions</td><td>long</td><td><code>60000</code></td></tr><tr><td>lenses.interval.topology.topics.metrics</td><td>How often to refresh topology info</td><td>long</td><td><code>30000</code></td></tr><tr><td>lenses.interval.schema.registry.healthcheck</td><td>How often to check the schema registries health</td><td>long</td><td><code>30000</code></td></tr><tr><td>lenses.interval.schema.registry.refresh.ms</td><td>How often to refresh schema registry data</td><td>long</td><td><code>30000</code></td></tr><tr><td>lenses.interval.metrics.refresh.zk</td><td>How often to refresh ZK 
metrics</td><td>long</td><td><code>5000</code></td></tr><tr><td>lenses.interval.metrics.refresh.sr</td><td>How often to refresh Schema Registry metrics</td><td>long</td><td><code>5000</code></td></tr><tr><td>lenses.interval.metrics.refresh.broker</td><td>How often to refresh Kafka Broker metrics</td><td>long</td><td><code>5000</code></td></tr><tr><td>lenses.interval.metrics.refresh.connect</td><td>How often to refresh Kafka Connect metrics</td><td>long</td><td><code>30000</code></td></tr><tr><td>lenses.interval.metrics.refresh.brokers.in.zk</td><td>How often to refresh the Kafka broker list from ZK</td><td>long</td><td><code>5000</code></td></tr><tr><td>lenses.interval.topology.timeout.ms</td><td>Time period after which a metric is considered stale</td><td>long</td><td><code>120000</code></td></tr><tr><td>lenses.interval.audit.data.cleanup</td><td>How often to clean up dataset view entries from the audit log</td><td>long</td><td><code>300000</code></td></tr><tr><td>lenses.audit.to.log.file</td><td>Path to a file to write audits to in JSON format</td><td>string</td><td></td></tr><tr><td>lenses.interval.jmxcache.refresh.ms</td><td>How often to refresh the JMX cache used in the Explore page</td><td>long</td><td><code>180000</code></td></tr><tr><td>lenses.interval.jmxcache.graceperiod.ms</td><td>How long to pause when a JMX connectivity error occurs</td><td>long</td><td><code>300000</code></td></tr><tr><td>lenses.interval.jmxcache.timeout.ms</td><td>How long to wait for a JMX response</td><td>long</td><td><code>500</code></td></tr><tr><td>lenses.interval.sql.udf</td><td>How often to look for new UDF/UDAF (user defined [aggregate] functions)</td><td>long</td><td><code>10000</code></td></tr><tr><td>lenses.kafka.consumers.batch.size</td><td>How many consumer groups to retrieve in a single request</td><td>int</td><td><code>500</code></td></tr><tr><td>lenses.kafka.consumers.offsets-for-timestamp-timeout-seconds</td><td>Timeout in seconds for Kafka offsetsForTimes() calls when
resetting consumer group offsets by timestamp</td><td>Int</td><td><code>30</code></td></tr><tr><td>lenses.kafka.ws.heartbeat.ms</td><td>How often to send heartbeat messages in TCP connection</td><td>long</td><td><code>30000</code></td></tr><tr><td>lenses.kafka.ws.poll.ms</td><td>Max time for kafka consumer data polling on WS APIs</td><td>long</td><td><code>10000</code></td></tr><tr><td>lenses.kubernetes.config.reload.interval</td><td>Time interval to reload the Kubernetes configuration file.</td><td>long</td><td><code>30000</code></td></tr><tr><td>lenses.kubernetes.watch.reconnect.limit</td><td>How many times to reconnect to Kubernetes Watcher before considering the cluster unavailable</td><td>long</td><td><code>10</code></td></tr><tr><td>lenses.kubernetes.watch.reconnect.interval</td><td>How often to wait between Kubernetes Watcher reconnection attempts</td><td>long</td><td><code>5000</code></td></tr><tr><td>lenses.kubernetes.websocket.timeout</td><td>How long to wait for a Kubernetes Websocket response</td><td>long</td><td><code>15000</code></td></tr><tr><td>lenses.kubernetes.websocket.ping.interval</td><td>How often to ping Kubernetes Websocket to check it’s alive</td><td>long</td><td><code>30000</code></td></tr><tr><td>lenses.akka.request.timeout.ms</td><td>Max time for a response in an Akka Actor</td><td>long</td><td><code>10000</code></td></tr><tr><td>lenses.sql.monitor.frequency</td><td>How often to emit healthcheck and performance metrics on Streaming SQL</td><td>long</td><td><code>10000</code></td></tr><tr><td>lenses.audit.data.access</td><td>Record dataset access as audit log entries</td><td>boolean</td><td><code>true</code></td></tr><tr><td>lenses.audit.data.max.records</td><td>How many dataset view entries to retain in the audit log. 
Set to <code>-1</code> to retain indefinitely</td><td>int</td><td><code>500000</code></td></tr><tr><td>lenses.explore.lucene.max.clause.count</td><td>Override Lucene’s maximum number of clauses permitted per BooleanQuery</td><td>int</td><td><code>1024</code></td></tr><tr><td>lenses.explore.queue.size</td><td>Optional setting to bound the Lenses internal queue used by the catalog subsystem. It needs to be a positive integer or it will be ignored</td><td>int</td><td>N/A</td></tr><tr><td>lenses.interval.kafka.connect.http.timeout.ms</td><td>How long to wait for a Kafka Connect response to be retrieved</td><td>int</td><td><code>10000</code></td></tr><tr><td>lenses.interval.kafka.connect.healthcheck</td><td>How often to check the Kafka Connect health</td><td>int</td><td><code>15000</code></td></tr><tr><td>lenses.interval.schema.registry.http.timeout.ms</td><td>How long to wait for a Schema Registry response to be retrieved</td><td>int</td><td><code>10000</code></td></tr><tr><td>lenses.interval.zookeeper.healthcheck</td><td>How often to check the Zookeeper health</td><td>int</td><td><code>15000</code></td></tr><tr><td>lenses.ui.topics.row.limit</td><td>The number of Kafka records to load automatically when exploring a topic</td><td>int</td><td><code>200</code></td></tr><tr><td>lenses.deployments.connect.failure.alert.check.interval</td><td>Time interval in seconds to check that the connector failure grace period has completed. Used by the Connect auto-restart failed connectors functionality. It needs to be a value between (1,600]</td><td>int</td><td><code>10</code></td></tr><tr><td>lenses.provisioning.path</td><td>Folder on the filesystem containing the provisioning data.
See [provisioning docs](link to provisioning docs) for further details</td><td>string</td><td></td></tr><tr><td>lenses.provisioning.interval</td><td>Time interval in seconds to check for changes on the provisioning resources</td><td>int</td><td></td></tr><tr><td>lenses.schema.registry.client.http.retryOnTooManyRequest</td><td>When enabled, Lenses will retry a request whenever the schema registry returns a <code>429 Too Many Requests</code></td><td>boolean</td><td><code>false</code></td></tr><tr><td>lenses.schema.registry.client.http.maxRetryAwait</td><td>Max amount of time to wait whenever a <code>429 Too Many Requests</code> is returned</td><td>duration</td><td><code>"2 seconds"</code></td></tr><tr><td>lenses.schema.registry.client.http.maxRetryCount</td><td>Max retry count whenever a <code>429 Too Many Requests</code> is returned</td><td>integer</td><td>2</td></tr><tr><td>lenses.schema.registry.client.http.rate.type</td><td>Specifies whether HTTP requests to the configured schema registry should be rate limited. Can be "session" or "unlimited"</td><td>"unlimited" | "session"</td><td><code>unlimited</code></td></tr><tr><td>lenses.schema.registry.client.http.rate.maxRequests</td><td>When the rate limiter is "session", determines the max number of requests allowed per window</td><td>integer</td><td>N/A</td></tr><tr><td>lenses.schema.registry.client.http.rate.window</td><td>When the rate limiter is "session", determines the duration of the window used</td><td>duration</td><td>N/A</td></tr><tr><td>lenses.schema.connect.client.http.retryOnTooManyRequest</td><td>Retry a request whenever a connect cluster returns a <code>429 Too Many Requests</code></td><td>boolean</td><td><code>false</code></td></tr><tr><td>lenses.schema.connect.client.http.maxRetryAwait</td><td>Max amount of time to wait whenever a <code>429 Too Many Requests</code> is returned</td><td>duration</td><td><code>2 seconds</code></td></tr><tr><td>lenses.schema.connect.client.http.maxRetryCount</td><td>Max retry count whenever a <code>429 Too Many Requests</code> is returned</td><td>integer</td><td>2</td></tr><tr><td>lenses.connect.client.http.rate.type</td><td>Specifies whether HTTP requests to the configured connect cluster should be rate limited. Can be "session" or "unlimited"</td><td>"unlimited" | "session"</td><td><code>unlimited</code></td></tr><tr><td>lenses.connect.client.http.rate.maxRequests</td><td>When the rate limiter is "session", determines the max number of requests allowed per window</td><td>integer</td><td>N/A</td></tr><tr><td>lenses.connect.client.http.rate.window</td><td>When the rate limiter is "session", determines the duration of the window used</td><td>duration</td><td>N/A</td></tr></tbody></table>
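
As a sketch, retries and session rate limiting for Schema Registry requests could be configured like this (the request and window values are illustrative, not recommendations):

```
lenses.schema.registry.client.http.retryOnTooManyRequest = true
lenses.schema.registry.client.http.maxRetryAwait         = "2 seconds"
lenses.schema.registry.client.http.maxRetryCount         = 2
lenses.schema.registry.client.http.rate.type             = "session"
lenses.schema.registry.client.http.rate.maxRequests      = 50          # illustrative
lenses.schema.registry.client.http.rate.window           = "1 second"  # illustrative
```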

## Connectors topology

Control how Lenses identifies your connectors in the Topology view. Catalogue your connector types, set their icons, and control how Lenses extracts the topics used by your connectors.

Lenses comes preconfigured for some of the popular connectors as well as the Stream Reactor connectors. If Lenses doesn’t automatically identify your connector type, use the `lenses.connectors.info` setting to register it.

Add a new HOCON object `{}` for every new connector in your `lenses.connectors.info` list:

```
  lenses.connectors.info = [
      {
        class.name = "The connector full classpath"
        name = "The name which will be presented in the UI"
        instance = "Details about the instance. Contains the connector configuration field which holds the information. If a database is involved it would be the DB connection details; if it is a file, the file path, etc."
        sink = true
        extractor.class = "The full classpath for the implementation knowing how to extract the Kafka topics involved. This is only required for a Source"
        icon = "file.png"
        description = "A description for the connector"
        author = "The connector author"
      }
  ]
```

This configuration allows the connector to work with the topology graph, and also have the RBAC rules applied to it.

### Source example <a href="#source-example" id="source-example"></a>

To extract topic information from the connector configuration, source connectors require extra configuration. The extractor class should be `io.lenses.config.kafka.connect.SimpleTopicsExtractor`, and it requires an additional `property` setting that names the field in the connector configuration which determines the topics data is sent to.

Here is an example for the file source:

```
  lenses.connectors.info = [
    {
      class.name = "org.apache.kafka.connect.file.FileStreamSource"
      name = "File"
      instance = "file"
      sink = false
      property = "topic"
      extractor.class = "io.lenses.config.kafka.connect.SimpleTopicsExtractor"
    }
  ]
```

### Sink example <a href="#sink-example" id="sink-example"></a>

Examples of a Splunk sink connector and a Debezium SQL Server connector:

```
  lenses.connectors.info = [
    {
      class.name = "com.splunk.kafka.connect.SplunkSinkConnector"
      name = "Splunk Sink",
      instance = "splunk.hec.uri"
      sink = true,
      extractor.class = "io.lenses.config.kafka.connect.SimpleTopicsExtractor"
      icon = "splunk.png",
      description = "Stores Kafka data in Splunk"
      docs = "https://github.com/splunk/kafka-connect-splunk",
      author = "Splunk"
    },
    {
      class.name = "io.debezium.connector.sqlserver.SqlServerConnector"
      name = "CDC MySQL"
      instance = "database.hostname"
      sink = false,
      property = "database.history.kafka.topic"
      extractor.class = "io.lenses.config.kafka.connect.SimpleTopicsExtractor"
      icon = "debezium.png"
      description = "CDC data from RDBMS into Kafka"
      docs = "//debezium.io/docs/connectors/mysql/",
      author = "Debezium"
    }
  ]
```

## External Applications

<table data-full-width="true"><thead><tr><th width="409">Key</th><th width="272">Description</th><th width="130">Default</th><th width="108">Type</th><th>Required</th></tr></thead><tbody><tr><td>apps.external.http.state.refresh.ms</td><td>When registering a runner for an external app, a health-check interval can be specified. If it is not, this default interval is used (value in milliseconds)</td><td><code>30000</code></td><td>int</td><td>no</td></tr><tr><td>apps.external.http.state.cache.expiration.ms</td><td>The last known state of the runner is stored in a cache. Entries are invalidated after the period defined by this key (value in milliseconds). This value should not be lower than <code>apps.external.http.state.refresh.ms</code></td><td><code>60000</code></td><td>int</td><td>no</td></tr></tbody></table>
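
For example, to slow down the default health checks (a sketch; remember the cache expiration must not be lower than the refresh interval):

```
apps.external.http.state.refresh.ms          = 60000    # check runners every minute
apps.external.http.state.cache.expiration.ms = 120000   # keep last known state twice as long
```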


