Changelog for Lenses 6.1.0
2025-10-24 (YYYY-MM-DD)
Agent image:
lensesio/lenses-agent:6.1
Helm charts for HQ and Agent: https://helm.repo.lenses.io/
Archive installation: https://archive.lenses.io/lenses/
Changelog details for Lenses 6.1.1
When developing a SQL processor, the application Docker image is fixed. Consequently, any updates for performance or security enhancements cannot be applied. In this release, a new API and user interface options were introduced to enable attachment of the desired Docker image.
Fixes an issue preventing default quotas for Users and Clients from being properly stored and applied.
Changelog for Lenses 6.1.1
2025-11-27 (YYYY-MM-DD)
Agent image:
lensesio/lenses-agent:6.1.1
Helm charts for HQ and Agent: https://helm.repo.lenses.io/
Archive installation: https://archive.lenses.io/lenses/
This page describes configuring basic authentication in Lenses.
Basic authentication is set in the config.yaml for HQ under the auth.users key, as an array of usernames and passwords.
Passwords need to be bcrypt hashes.
This ensures that the passwords are hashed and secure rather than stored in plaintext. For instance, instead of using "builder" directly, it should be hashed using bcrypt.
An example of a bcrypt-hashed password looks like this:
$2a$12$XQW..XQrtZXCvbQWertqQeFi/1KoQW4eNephNXTfHqtoW9Q4qih5G.
Always ensure that you replace plaintext passwords with their bcrypt counterparts to securely authenticate users.
You can use the Lenses CLI to create a bcrypt password:
lenses utils hash-password

auth:
  users:
    - username: bob
      password: $2a$12$XQW..XQrtZXCvbQWertqQeFi/1KoQW4eNephNXTfHqtoW9Q4qih5G
    - username: brain
      password: $2a$12$XQW..XQrtZXCvbQWertqQeFi/1KoQW4eNephNXTfHqtoW9Q4qih5G

This page details the release notes of Lenses.
Lenses 6.1 has introduced Kafka-to-Kafka replication, initially supporting AWS MSK to AWS MSK, including Express Brokers. Kafka Replicators can be deployed through the Lenses UI, featuring comprehensive lifecycle management and monitoring capabilities.
Kafka Connections let administrators establish links to Kafka using Kubernetes secrets or service accounts that handle connection credentials. Many organizations employ secret providers like AWS Secret Manager or Vault, automatically syncing them to Kubernetes secrets. This process ensures that Lenses or users deploying applications don't need to manage credentials manually.
A new environment creation flow has been added: Lenses Agents can now be configured directly from HQ, with a new in-product editor that lets you configure and test Kafka, Schema Registry, Kafka Connect and other connections.
The new APIs support a GitOps-style approach, allowing you to manage the connection state and files fully via the APIs or maintain them in version control.
The significantly enhanced tree view explorer sidebar in SQL Studio improves user experience when working with an extensive number of topics and environments. We've introduced easier search and navigation designed to minimize scrolling and maximize your daily productivity. Plus, you can now bookmark both topics and environments for quick access.
It is now possible to view a topic's schema directly in SQL Studio, including a powerful split view that allows for seamless comparison of schema versions.
This quick start guide will walk you through installing and starting Lenses using the Community Edition, an all-in-one Docker Compose setup that includes Kafka brokers.
This is a quick start for a local setup using the Lenses Community Edition. To connect to your Kafka clusters, see here.
By running the following command, including the ACCEPT_EULA, you are accepting the Lenses EULA agreement.
Run the following command:
The very first time you run this command it will take a bit longer as Docker has to download the images. Subsequent runs should take much less time.
To run this setup smoothly, your Docker settings must allocate at least 5GB of memory.
Once the images are pulled and containers started, you can log in by going to http://localhost:9991 or the IP of your Docker host.
Username: admin
Password: admin
The HQ binary does not have a default password; this one is configured by the Docker Compose scripts to secure your deployment.
CHANGE THE DEFAULT PASSWORD. You can see how here.
It may take a few seconds for the agent to fully boot and connect to HQ.
You will need an access code to use Community Edition. You will be asked to set it up the first time you log in; once applied, you won't need it again. Please see the self-guided walkthrough for details on setting it up.
The quick start uses a docker-compose file to:
Start Postgres; HQ and the Agent use Postgres as a backing store.
Start a single local Kafka Broker
Start a local Confluent Schema Registry
Start HQ and create an environment & agent key
Start Agent and connect to Kafka, the Schema Registry and HQ.
Changelog details for Lenses 6.1.0
Kafka Connections allow administrators to define connections to Kafka as Kubernetes secrets or service accounts that reference the credentials to connect with.
Most organisations already use secret providers such as AWS Secret Manager or Vault, and sync these to Kubernetes secrets automatically. This ensures Lenses or users deploying applications never need to deal with the credentials themselves.
See the documentation to get started.
Lenses Kafka-to-Kafka Replicator is now integrated into Lenses. You can configure and deploy replicators, with predefined Kafka Connections, to move data between AWS MSK IAM clusters.
More cluster authentication methods and providers coming soon!
See the documentation to get started.
You can now create an environment and configure the Agent provisioning directly from HQ. JSON schema support is added, providing syntax highlighting, auto completion and error reporting.
You can also view and edit the existing provisioning files of agents already connected.
SQL Studio is moving forward again, and now brings a more IDE style experience. An improved tree navigation provides:
Improved search functionality
Expanded topics nodes to browse schemas and consumer groups associated with topics
Context menu support to allow actions on topics
Bookmarking of favourite topics.
Enhanced the performance of the environments screen, making it more responsive and capable of handling larger datasets.
Service Type Configuration: Added configurable service.type parameter to allow customization of the Kubernetes service type (ClusterIP, NodePort, LoadBalancer, etc.)
New parameter in values.yaml: service.type (default: ClusterIP)
Updated service template to use the configurable value
Fix values.schema.json examples format for postgres params object
Changelog details for Lenses 6.1.1
The left panel with the explorer toggle has been removed.
Schema Registry entries can now be listed and the schema details viewed from the Studio.
Context menus are now available in both explorer and breadcrumbs.
When a new K2K app is created, it automatically sets the Docker image to the Kafka-to-Kafka (K2K) replicator version 1.1.0.
The IAM checks for the K2K application have been enhanced to include dependencies such as source and target environment Kafka connections, Kubernetes cluster, and namespace. This update ensures comprehensive permission validation across multiple axes.
New actions introduced:
ManageOffsets: Allows management of K2K application resources.
GetKafkaConnectionDetails: Retrieves details for Kafka connection resources.
Various fixes have been applied throughout the user interface to address glitches and inconsistencies, enhancing the in-app user experience.
This page describes installing Lenses HQ and Agent in Kubernetes via Helm.
Welcome to Lenses, Autonomy in data streaming.
Lenses has two components:
HQ is the control plane / central portal where end users interact with different environments (Kafka clusters). It provides a central place to explore data across many environments.
HQ is a single binary, installed on premise or in your cloud. From HQ you create environments which represent individual Kafka clusters and their supporting services. For each environment, you deploy an agent which connects to Kafka and back to HQ.
Environments
Lenses defines each Kafka Cluster and supporting services, such as Schema Registries and Kafka Connect Clusters, as an environment.
Each environment has an agent. Environments can also be assigned extra metadata such as tiers, domains and descriptions.
To explore and operate in an environment you need an agent. Agents are headless applications, deployed with connectivity to your Kafka cluster and its supporting services.
Agents only ever communicate with HQ, using an Agent key over a secure channel. You cannot, as a user, interact directly with them. End users are unaware of agents and see only environments.
Agents require:
Agent Key to establish a communication channel to HQ
Connectivity to a Kafka cluster and credentials to do so.
The agent acts as a proxy to read from or write to your Kafka cluster, execute queries, monitor for alerts, and manage SQL Processors and Kafka Connectors.
This page describes configuring Azure SSO for Lenses authentication.
Learn more here about
This page describes installing Lenses with Docker Image.
This page describes how to configure admin accounts in Lenses.
You can configure a list of the principals (users, service accounts) with root admin access. Any API operation performed by such principals is allowed. If not set, it will default to [].
Admin accounts are set in the config.yaml for HQ under the auth.administrators key, as an array of usernames.
To change the admin password, update the config.yaml in the auth.users section. Set the password of the admin users.
Passwords need to be bcrypt hashes.
You can use the Lenses CLI to create a bcrypt password. You can download the CLI. The executable for the Lenses 6 CLI is called "hq".
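For example:

hq utils hash-password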
This page describes how to configure Lenses.
lensesio/lenses-hq:6.1.1
lensesio/lenses-cli:6.1.1

Identifier (Entity ID)
Use the base url of the Lenses installation e.g. https://lenses-dev.example.com
Reply URL
Use the base url with the callback details e.g. https://lenses-dev.example.com/api/v2/auth/saml/callback?client_name=SAML2Client
Sign on URL
Use the base url
auth:
  administrators:
    - admin
    - [email protected]
    - [email protected]

hq utils hash-password

auth:
  administrators:
    - admin
    - [email protected]
    - [email protected]
  users:
    - username: admin
      password: $2a$12$XQW..XQrtZXCvbQWertqQeFi/1KoQW4eNephNXTfHqtoW9Q4qih5G

Overview
Introduction on how to connect your Kafka Cluster to Lenses.
Install
Learn how to configure and start HQ and an agent.


Deploy Lenses HQ
Learn how to deploy HQ via tarball.
Deploy Lenses Agent
Learn how to deploy Agent via tarball.


Admin Account
Learn how to configure admin accounts in Lenses.
Basic Authentication
Learn how to configure Lenses with Basic Auth.
SSO & SAML
Learn how to configure Lenses with SSO & SAML.
Changelog for Lenses 6.1.0
2025-10-24 (YYYY-MM-DD)
HQ image:
lensesio/lenses-hq:6.1.0
HQ CLI image:
lensesio/lenses-cli:6.1.0
Helm charts for HQ and Agent: https://lenses.jfrog.io/ui/native/helm-charts
Archive installation: https://archive.lenses.io/lenses/
This page gives an overview of SSO & SAML for authentication with Lenses.
Control of how users are created with SSO is determined by the SSO User Creation Mode. There are two modes:
Manual
SSO
With manual mode, only users that have been pre-created in HQ can log in.
With sso mode, users that do not already exist are created and logged in.
Control of how a user's group membership should be handled in relation to SSO is determined by the SSO Group Membership Mode. There are two modes:
Manual
SSO
With the manual mode, the information about the group membership returned from an Identity Provider will not be used and a user will only be a member of groups that were explicitly assigned to them in HQ.
With the sso mode, group information from Identity Provider (IdP) will be used. On login, a user's group membership is set to the groups listed in the IdP.
Groups that do not exist in HQ are ignored.
SAML configuration is defined in the config.yaml provided to HQ. For more information on the configuration options see here.
http:
  saml:
    metadata: |-

The following SSO / SAML providers are supported.
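A fuller sketch of the SAML section, adapted from the sample HQ config.yaml shown later in this document (hostnames and the metadata are illustrative placeholders):

auth:
  saml:
    enabled: true
    baseURL: https://lenses6.company.com
    entityID: https://lenses6.company.com
    metadata: |
      <md:EntityDescriptor xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata">
      </md:EntityDescriptor>
    userCreationMode: manual
    groupMembershipMode: manual
    groupAttributeKey: groups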
Creating a Keystore
Enable SAML single-sign on by creating a keystore.
SAML needs a keystore with a generated key-pair.
SAML uses the key-pair to encrypt its communication with the IdP.
Use the Java keytool to create one.
keytool \
-genkeypair \
-storetype pkcs12 \
-keystore lenses.p12 \
-storepass my_password \
-alias lenses \
-keypass my_password \
-keyalg RSA \
-keysize 2048 \
-validity 10000

storetype
The type of keystore (pkcs12 is industry standard, but jks is also supported)
keystore
The filename of the keystore
storepass
The password of the keystore
alias
The name of the key-pair
keypass
The password of the key-pair (must be the same as storepass for pkcs12 stores)
Changelog details for Lenses 6.1.0
The Agent now requires only a connection to HQ for startup. This connection can be configured through environment variables or by setting a provisioning file with HQ details. This streamlined process enhances the user experience for creating a new environment, allowing Lenses HQ to automatically push connection details to the Lenses agent.
Enhanced the logs outlining the HQ connectivity status.
Service Type Configuration: Added configurable service.type parameter to allow customization of the Kubernetes service type (ClusterIP, NodePort, LoadBalancer, etc.)
New parameter in values.yaml: service.type (default: ClusterIP)
Updated service template to use the configurable value
Add persistence.provisioning configuration with 50Mi default size, disabled by default
Create PVC template for provisioning data storage at /data/provisioning
Update deployment to mount provisioning volume when enabled
Add helper function for provisioning claim name generation
Add tests for provisioning volume
Switch .Values.persistence.existingClaim to .Values.persistence.log.existingClaim and .Values.persistence.provisioning.existingClaim, and fail the deployment if the old value is being used
Component Label Correction: Fixed component labels from lenses to lenses-agent for proper identification
Updated in deployment.yaml
Updated in service.yaml

This page describes configuring OneLogin SSO for Lenses authentication.
This page describes configuring Lenses to connect to Confluent Platform.
For Confluent Platform see
This page describes connecting Lenses to Apicurio.
Apicurio supports the following versions of Confluent's API:
Confluent Schema Registry API v6
Confluent Schema Registry API v7
Only one Schema Registry connection is allowed.
Name must be schema-registry.
See for support.
Environment variables are supported; escape the dollar sign
Set the schema registry URLs to include the compatibility endpoints, for example:
This page describes connecting Lenses to IBM Event Streams schema registry.
Requires an Enterprise subscription on IBM Event Streams; only hard delete is supported for IBM Event Streams.
To configure an application to use this compatibility API, specify the Schema Registry endpoint in the following format:
Use "token" as the username. Set the password as your API KEY from IBM Event streams
Only one Schema Registry connection is allowed.
Name must be schema-registry.
See for support.
Environment variables are supported; escape the dollar sign
There's no breaking change between 6.0 and 6.1.
sslKeystorePassword:
  value: "\${ENV_VAR_NAME}"

http://localhost:8080/apis/ccompat/v6

confluentSchemaRegistry:
  - name: schema-registry
    tags: ["tag1"]
    version: 1
    configuration:
      schemaRegistryUrls:
        value:
          - http://localhost:8080/apis/ccompat/v6

https://token:{$APIKEY}@{$HOST}/confluent

sslKeystorePassword:
  value: "\${ENV_VAR_NAME}"

confluentSchemaRegistry:
  - name: schema-registry
    tags: ["tag1"]
    version: 1
    configuration:
      schemaRegistryUrls:
        value:
          - https://token:{$APIKEY}@{$HOST}/confluent

Helm
Learn how to deploy Lenses in your Kubernetes cluster with Helm.
Docker
Learn how to deploy Lenses with Docker.
Linux
Learn how to deploy Lenses on Linux / VMs.
Overview
Learn about SSO & SAML for Lenses authentication.
Azure SSO
Configure Lenses with Azure SSO.
Google SSO
Configure Lenses with Google SSO.
Keycloak SSO
Configure Lenses with Keycloak SSO.
Okta SSO
Configure Lenses with Okta SSO.
OneLogin SSO
Configure Lenses with OneLogin SSO.
Generic SSO
Configure Lenses with a Generic SSO provider.
This page describes an overview of deploying Lenses against your Kafka clusters.
This guide walks you through manually deploying HQ and an Agent to connect to your Kafka clusters. Lenses acts as a Kafka client; it can connect to any provider exposing a Kafka-compatible API.
For more detailed guides on Helm, Docker, and Linux, see here.
To deploy Lenses against your environments, you need to:
To start HQ and an Agent, you have to accept the Lenses EULA.
For HQ, in the config.yaml set:
license:
  acceptEULA: true

Any version of Apache Kafka (2.0 or newer) on-premise and in the cloud. Supported providers include:
Confluent Platform & Cloud
AWS MSK & AWS MSK Serverless
Aiven
IBM Event Streams
Azure HDInsight & EventHubs
Any version of Confluent Schema Registry (5.5.0 or newer), APICurio (2.0 or newer) and AWS Glue.
Only needed if you want to use your own Postgres. The docker compose will start a local Postgres instance.
HQ and Agents can share the same instance, by either using a separate database or schema for HQ and each agent, depending on your networking needs.
Postgres server running version 9.6 or higher.
The recommended configuration is to create a dedicated login role and database for the HQ and each Agent, setting the HQ or Agent role as the database or schema owner. Both the agent and HQ need credentials; create a role for each.
# login as superuser and add Lenses role and database
psql -U postgres -d postgres <<EOF
CREATE ROLE lenses_agent WITH LOGIN PASSWORD 'changeme';
CREATE DATABASE lenses_agent OWNER lenses_agent;
CREATE ROLE lenses_hq WITH LOGIN PASSWORD 'changeme';
CREATE DATABASE lenses_hq OWNER lenses_hq;
EOF

Web sockets - You may need to adjust your load balancer to allow them. See here.
JMX connectivity - Connectivity to JMX is optional (not required) but recommended for additional/enhanced monitoring of the Kafka Brokers and Connect Workers. Secure JMX connections are supported, including JOLOKIA and Open Metrics (MSK).
To enable JMX for the Agent itself, see here.
The agent requires access to your Kafka cluster. If ACLs are enabled, you will need to allow the Agent access.
If you want to use SSO / SAML for authentication, you will need the metadata.xml file from your provider. See Authentication for more information.
This page describes deploying Lenses HQ via docker.
The HQ docker image can be configured via volume mounts for the configuration file.
docker run --name lenses-hq \
  --network panoptes \
  -p 8080:8080 \
  -v $(pwd)/config.yaml:/config.yaml \
  lensesio/lenses-hq:6-preview

The main prerequisites that must be fulfilled before the Lenses HQ container can be started are:
For demo purposes and testing the product you can use our community license:

license_key_2SFZ0BesCNu6NFv0-EOSIvY22ChSzNWXa5nSds2l4z3y7aBgRPKCVnaeMlS57hHNVboR2kKaQ8Mtv1LFt0MPBBACGhDT5If8PmTraUM5xXLz4MYv

The main configuration file that has to be configured before running the docker command is config.yaml.
A sample configuration file follows:
auth:
  administrators:
    - admin
    - [email protected]
  users:
    - username: admin
      # bcrypt("correcthorsebatterystaple").
      password: $2a$10$F66cb6ZhnJjGCZuxlvKP1e84eytTpT1MDJcpBblHaZgsqp1/Aa0LG
  sessionDuration: 24h
  saml:
    enabled: true
    baseURL: https://lenses6.company.com
    entityID: https://lenses6.company.com
    metadata: |
      <?xml version="1.0" encoding="UTF-8"?><md:EntityDescriptor
      xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata"
      </md:IDPSSODescriptor>
      </md:EntityDescriptor>
    userCreationMode: manual
    groupMembershipMode: manual
    uiRootURL: /
    groupAttributeKey: groups
    authnRequestSignature:
      enabled: false
http:
  address: :8080
  accessControlAllowOrigin:
    - https://lenses6.company.com
  accessControlAllowCredentials: false
  secureSessionCookies: false
agents:
  address: :10000
database:
  host: postgres-postgresql.postgres.svc.cluster.local:5432
  username: $(LENSESHQ_PG_USERNAME)
  password: $(LENSESHQ_PG_PASSWORD)
  schema:
  database: lenseshq
  TLS: false
license:
  key: license_key_
  acceptEULA: true
logger:
  mode: text
  level: debug
metrics:
  prometheusAddress: :9090
You can find more about the configuration options on the HQ configuration page.
After the successful configuration and installation of HQ, the next steps would be:
This page describes configuring Okta SSO for Lenses authentication.
Lenses is available directly in Okta’s Application catalog.
SAML configuration is set in HQ's config.yaml file. See here for more details.
This page describes connecting a Lenses Agent with HQ.
To be able to view and drill into your Kafka environment, you need to connect the agent to HQ. Create an environment in HQ and copy the Agent Key into the provisioning.yaml.
Only one HQ connection is allowed.
See JSON schema for support.
Environment variables are supported; escape the dollar sign
sslKeystorePassword:
  value: "\${ENV_VAR_NAME}"

lensesHq:
  - name: lenses-hq
    version: 1
    tags: ['hq']
    configuration:
      server:
        value: "\${LENSES_HQ_HOST}"
      port:
        value: 10000
      agentKey:
        value: "\${LENSES_HQ_AGENT_KEY}"
      sslEnabled:
        value: true
      sslTruststore:
        file: hq-truststore.jks
      sslTruststorePassword:
        value: "\${LENSES_HQ_AGENT_TRUSTSTORE_PWD}"

This page describes configuring Lenses to connect to Aiven.
Only one Kafka connection is allowed.
The name must be kafka.
See JSON schema for support.
Environment variables are supported; escape the dollar sign
sslKeystorePassword:
  value: "\${ENV_VAR_NAME}"

Set the following in the provisioning.yaml, replacing Service URI, username and password from your Aiven account.

kafka:
  - name: kafka
    version: 1
    tags: ['my-tag']
    configuration:
      kafkaBootstrapServers:
        value:
          - SASL_SSL://[Service URI]
      protocol:
        value: SASL_SSL
      saslMechanism:
        value: SCRAM-SHA-256
      saslJaasConfig:
        value: |
          org.apache.kafka.common.security.scram.ScramLoginModule required
          username="[your-username]"
          password="[your-password]";

This page describes configuring Lenses to connect to Confluent Cloud.
For Confluent Platform see Apache Kafka.
Only one Kafka connection is allowed.
The name must be kafka.
See JSON schema for support.
Environment variables are supported; escape the dollar sign
sslKeystorePassword:
  value: "\${ENV_VAR_NAME}"

Set the following in the provisioning.yaml:

kafka:
  - name: kafka
    version: 1
    tags: ['my-tag']
    configuration:
      kafkaBootstrapServers:
        value:
          - SASL_SSL://[YOUR_BOOTSTRAP_SERVER]
          - SASL_SSL://[YOUR_BOOTSTRAP_SERVER]
      protocol:
        value: SASL_SSL
      saslMechanism:
        value: PLAIN
      saslJaasConfig:
        value: |
          org.apache.kafka.common.security.plain.PlainLoginModule required
          username="[YOUR_API_KEY]"
          password="[YOUR_API_KEY_SECRET]";

This page describes the hardware and OS prerequisites for Lenses.
Run on any Linux server (review ulimits) or container technology (Docker/Kubernetes). For RHEL 6.x and CentOS 6.x use Docker.
Linux machines typically have a soft limit of 1024 open file descriptors. Check your current limit with the ulimit command:

ulimit -S -n # soft limit
ulimit -H -n # hard limit

As a super-user, increase the soft limit to 4096 with:

ulimit -S -n 4096

This page describes the JVM options for the Lenses Agent.
The Agent runs as a JVM app; you can tune runtime configurations via environment variables.
LENSES_OPTS
For generic settings, such as the global truststore. Note that the docker image is using this to plug in a prometheus java agent for monitoring Lenses
LENSES_HEAP_OPTS
JVM heap options. The default settings are -Xmx3g -Xms512m, which set the heap size between 512MB and 3GB. The upper limit is set to 1.2GB on the Box development docker image.
LENSES_JMX_OPTS
Tune the JMX options for the JVM, e.g. to allow remote access.
LENSES_LOG4J_OPTS
Override Agent logging configuration. Should only be used to set the logback configuration file, using the format -Dlogback.configurationFile=file:/path/to/logback.xml.
LENSES_PERFORMANCE_OPTS
JVM performance tuning. The default settings are -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=
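For example, to raise the heap ceiling via LENSES_HEAP_OPTS (illustrative values, not defaults):

export LENSES_HEAP_OPTS="-Xms512m -Xmx6g"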
This page describes configuring Lenses Agent logging.
All logs are emitted unbuffered as a stream of events to both stdout and to rotating files inside the directory logs/.
The logback.xml file is used to configure logging.
If customization is required, it is recommended to adapt the default configuration rather than write your own from scratch.
The file can be placed in any of the following directories:
the directory where the Agent is started from
/etc/lenses/
agent installation directory.
The first one found, in the above order, is used, but to override this and use a custom location, set the following environment variable:
export LENSES_LOG4J_OPTS="-Dlogback.configurationFile=file:/path/to/logback.xml"

The default configuration file is set up to hot-reload any changes every 30 seconds.
The default log level is set to INFO (apart from some very verbose classes).
All the log entries are written to the output using the following pattern:
%d{ISO8601} %-5p [%c{2}:%L] [%thread] %m%n

You can adjust this inside logback.xml to match your organization's defaults.
Inside logs/ you will find three files: lenses.log, lenses-warn.log and metrics.log. The first contains all logs and is the same as the stdout. The second contains only messages at level WARN and above. The third one contains timing metrics and can be useful for debugging.
The default configuration contains two cyclic buffer appenders: "CYCLIC-INFO" and "CYCLIC-METRICS". These appenders are required to expose the Agent logs within the Admin UI.
This page describes Environments in Lenses.
Environments are virtual containers for your streaming infrastructure, including your Kafka cluster, Schema Registries, and Kafka Connect clusters.
Each Environment has an Agent; the Agent communicates with HQ via an Agent Key generated at environment creation time.
Environments can be assigned tiers, domains and a description, and grouped accordingly.
Go to Environments in the left-hand side navigation, then select the New Environment button in the top right corner.
Enter the details for the agent. Once you have a key, you will be guided on how to run and start the docker, then configure the agent to connect to your environment.
Learn how to configure an agent here.
This page describes connecting the Lenses Agent to an AWS MSK cluster.
Only one Kafka connection is allowed.
The name must be kafka.
See JSON schema for support.
Environment variables are supported; escape the dollar sign
sslKeystorePassword:
  value: "\${ENV_VAR_NAME}"

It is recommended to install the Agent on an EC2 instance or with EKS in the same VPC as your MSK cluster. The Agent can be installed and preconfigured via the AWS Marketplace.
Edit the AWS MSK security group in the AWS Console and add the IP address of your Agent installation.
If you want to have the Agent collect JMX metrics you have to enable Open Monitoring on your MSK cluster. Follow the AWS guide here.
Depending on your MSK cluster, select the endpoint and protocol you want to connect with.
It is not recommended to use Plaintext for secure environments. For these environments use TLS or IAM.
When the Agent is running inside AWS and is connecting to an Amazon’s Managed Kafka (MSK) instance, IAM can be used for authentication.
kafka:
  - name: kafka
    version: 1
    tags: ["optional-tag"]
    configuration:
      kafkaBootstrapServers:
        value:
          - SASL_SSL://your.kafka.broker.0:9098
          - SASL_SSL://your.kafka.broker.1:9098
      protocol:
        value: SASL_SSL
      saslMechanism:
        value: AWS_MSK_IAM
      saslJaasConfig:
        value: software.amazon.msk.auth.iam.IAMLoginModule required;
      additionalProperties:
        value:
          sasl.client.callback.handler.class: "software.amazon.msk.auth.iam.IAMClientCallbackHandler"
      metricsType:
        value: AWS

A simple walk-through to introduce you to the Lenses 6 user interface.
Logging in to Community Edition
Exploring Lenses UI
Adding a new environment
Searching Topics and Schemas
Using SQL Studio
Drilling Down Into Environments
Adding a Data Policy
After you've run your docker compose command, you can access Lenses running locally at http://localhost:9991. CE will ask you to log in:
User: admin Pass: admin
The very first time you log in, Lenses will ask you to verify with your email. This is easy to set up; just click on the "Verify" button:
If you have done the verification before, you can enter your email address and the received access code from the "Already verified?" link:
The verify link will take you to the setup page on Lenses website, where you can enter your email address:
Click Send Link and Lenses will send you an email with a magic link to activate Lenses Community Edition. Be sure to check your junk folder if it doesn't arrive. In this email you will also find other important information - your personal access code and useful links, which will help you quickly get started with Lenses. Don't forget to bookmark and keep this email.
The very first time you log in to Lenses CE you will see our first-start help screen. There is a video to watch as well as links to these docs and our other resources.
Click on Let's Start to access the Lenses UI. The first view you'll see is the Environments view. This is where Lenses displays all of your connected Kafka Environments. This can include: the Kafka clusters themselves, Kafka Connect, Schema Registry, connectors, consumers, even the Kubernetes clusters it's all running in. Environments mean your entire Kafka ecosystem not just the clusters themselves. For our demo setup we only have one environment connected, but you can have up to two at no charge with Community Edition.
Click on the link below Environments view to switch to the topics view. Here you'll see all the topics in your connected Environments. We are currently logged in as Admin so we can see all the Topics in our Environments. If we were logging in with a more restricted role we might only see the Topics we have permission to view.
Use the bottom scroll bar to scroll to the right so you can see further information about each topic.
You can see what type of schema it uses, how many partitions it uses, and much more.
Adding a new environment will allow you to connect a second Kafka cluster to Lenses HQ.
Click on the button New environment in the top right corner and the new environment wizard will guide you through the process.
Fill in the environment name and give it a short description if you like. You can also select domain membership, if you use the Domain conventions, and the environment tier. When you are ready, click the "Create environment" button.
This will generate an Agent key. You need this to start the agent docker.
Copy the command to start the docker; this uses the environment variables to create a provisioning.yaml to connect to HQ.
It may take a minute for the Agent to start fully and connect.
Once the agent docker has started, it will connect and you can move on to the next step: select the type of Kafka and other services you need. This will update the YAML editor and highlight any errors. You can also type directly in the editor with JSON schema support, e.g. type "Kafka" to get the default snippets.
Once you have entered the details for your Kafka and have no validation errors you can test the configuration. This will push the configuration to the agent, where it will check validation and connectivity.
If valid, you can apply to the agent.
If the configuration fails, errors and the line number will be visible in the problem panel.
The topics view is fully searchable. So for example if we wanted to build a "Customer Location View" for our web page using Kafka data — we could search for the keyword longitude here and see which topics include location data. Let's do a search for "latitude" in the topics view and see what comes up:
Three topics appear to have data about latitude, but let's dive a bit deeper. Tick the "Search in Schema" tickbox to get Lenses to display the actual names of the keys in the schema.
This will surface the actual schema names that match your search.
Based on what we've discovered it seems like the nyc_yellow_taxi_trip_data might be useful for our theoretical project. Let's use Lenses to dive a bit deeper into that topic and view the actual data flowing through using SQL Studio. To get to SQL Studio from this view simply hover your mouse over the topic: nyc_yellow_taxi_trip_data. That will cause the interactive controls to appear. Click on the SQL shortcut when it pops up:
Clicking that button automatically opens up that topic in SQL Studio. You can now interact directly with the data flowing through that topic using SQL statements. Note when you first access SQL Studio it appears with both side "drawers" open. You can click on the drawer close icons on either side to make more room to work directly with your data and SQL.
Now you can go back and open those as needed later on, but this gives you the whole screen to view and work with your data. Toggle your view from Grid to List. Now you have your data in a more JSON style format. Expand out the JSON to view the individual key / value pairs. Across the top you'll see the metadata for each event: Partition, Offset, and Time Stamp. Below you can examine the key / value pairs. As you can see we've got plenty of longitude and latitude data to work with for our customer location visualization.
Now let's move on from data discovery to troubleshooting. Using the same taxi data topic we can troubleshoot a "live" problem. Several drivers are reporting errors with credit card transactions going through in the last 15 minutes. Let's use SQL Studio to examine taxi transactions in the last 15 minutes using a SQL search:
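SELECT VendorID, fare_amount, payment_type
FROM nyc_yellow_taxi_trip_data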
Copy that text and paste it into the SQL box in your SQL Studio. Then from the Time Range picker select the last 15 minutes to set your time frame and then hit run.
Next up let's clean up our view so the data is a bit easier to see. Go to the Columns button and get rid of the timestamp, partition, and offset columns. Now we just have our vendorID, fare_amount, and payment_type. Assuming payment_type = 1 means the customer paid cash and payment_type = 2 means card, scroll down and notice that both types of payments seem to be going through. Maybe the problem is with a particular driver. Let's filter our results on vendorID. Select the Filters button and create a filter to just show vendorID = 1.
Toggle the filter back and forth between vendorID = 1 and 2 and see that transactions of both types seem to be flowing through. So perhaps the drivers' reported problem is not here, maybe it's a wireless connectivity issue? We could check our wireless telecom topic to further troubleshoot this theoretical issue. We have a detailed guide to Lenses SQL and using SQL Studio in our docs here:
But for now let's move on to other Lenses features. Let's switch back to the Environments View. Hover your mouse over our environment and some controls should appear. Click on the big arrow that appears in order to drill down into the specifics for this environment.
Now we're in the details page of this specific Environment. We can quickly see the health of all the components of the Kafka ecosystem. We can use any of the specific views on the left side, or drill down more interactively from details that appear on the main dashboard. Take a moment to look around at all the stats and data presented on this page before we move on.
On the lefthand side switch to the Topics view and select the backblaze_smart topic. That will open up the Topic View. Here we can see examples of the data but can also view much more detailed information about the topic. Be sure to click on the button to close the right side drawer to free up some screen space. Take a moment to toggle through the different topics view as listed across the top but then come back to the Data view.
Coming back to the Data View you'll notice that we have the serial_number field displayed. This field is tied to registered owners and can be considered personally identifiable data. Luckily Lenses has the capability to block the view of this sensitive data. We need to set up a Data Policy to block this. Make a note of the name of the field we want to obscure: serial_number.
Click on the Policy view on the left hand side and click on New Policy. Then fill out the form:
Name: serial-number-blocker
Redaction: last 3 (this means we'll mark out everything but the last 3 digits in the number)
Category: private_info (note after you type this in you'll need to hit enter to make it stick)
Impact Type: medium
Affected Datasets: don't change
Add Fields: serial_number (you'll need to hit return here as well to make it stick)
Once you're done it should look like this:
Then click "Create New Policy"
Now you'll see your new policy in the list. You can go back to the topics page and click on backblaze_smart topic again and verify that the serial_number field has been obfuscated.
It should look like this:
Congrats, you've completed a basic introduction to Lenses 6. There's lots more to learn and features to use. Look for more tutorials coming soon.
This page describes configuring and starting Lenses HQ and Agent against your Kafka cluster.
This guide uses the Lenses docker-compose file. For non-dev installations and automation, see the section.
HQ is configured via one file, config.yaml. The docker-compose file loads the content of hq.config.yaml and mounts it as the HQ config.yaml file.
You only need to follow this step if you do not want to use the local Postgres instance started by the docker-compose file.
You must create a database and role in your Postgres instance for HQ to use. See .
Edit the docker-compose.yaml and set the credentials for your database in the hq.config.yaml section.
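For example:

hq.config.yaml:
  content: |
    # ACCEPT THE LENSES EULA
    license:
      acceptEULA: true
    database:
      host: postgres:5432
      username: [YOUR_POSTGRES_USERNAME]
      password: lenses
      database: hq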
Currently HQ supports:
Basic Authentication (default)
SAML
For this example, we will use basic authentication. For information on configuring other methods, see Authentication, and configure the hq.config.yaml key accordingly for SAML.
To start HQ, run the following docker command:
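docker-compose up hq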
You can now log in to HQ with admin/admin.
To create an environment in HQ:
Log in to HQ and create an environment: Environments -> New Environment.
At the end of the process, you will be shown an Agent Key. Copy that, keep it safe!
The environment will be disconnected until the Agent is up and configured with the key.
You can also manage environments using the CLI.
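➜ lenses environments
Manage Environments.

Usage:
  lenses environments [command]

Aliases:
  environments, e, env, envs

Available Commands:
  create      Creates a new environment.
  delete      Deletes an environment.
  get         Retrieves a single environment by name.
  list        Lists all environments
  metadata    Manages environment metadata.
  update      Updates an environment.
  watch       Watch live environment updates.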
The Agent is configured via two files:
lenses.conf - holds low-level options for the agent and the database connection. You can set this via the agent.lenses.conf key in the docker-compose file.
provisioning.yaml - holds the connection details to your Kafka cluster and supporting systems. You can set this via the agent.provisioning.yaml key in the docker-compose file.
You only need to follow this step if you do not want to use the local Postgres instance started by the docker-compose file.
You must create a database and role in your Postgres instance for the Agent to use. See .
Update the docker-compose file's agent.lenses.conf key for your Postgres instance.
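For example:

agent.lenses.conf:
  content: |
    lenses.storage.postgres.host=[YOUR_POSTGRES_INSTANCE]
    lenses.storage.postgres.port=[YOUR_POSTGRES_PORT]
    lenses.storage.postgres.database=agent
    lenses.storage.postgres.username=lenses
    lenses.storage.postgres.password=lenses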
You can connect the agent to HQ in two ways, all via provisioning:
Start the Agent docker with an AGENT_KEY via environment variables at minimum. You need to create an environment in HQ to get this key.
Or mount a provisioning file that contains the connection to HQ, recommended for TLS-enabled HQs.
You can still reference environment variables if you mount the file, e.g.:
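agentKey:
  value: ${LENSES_HQ_AGENT_KEY}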
First deploy HQ and create an environment, then with the AGENT KEY run:
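docker run \
  --name "xxx" \
  --network=lenses \
  --restart=unless-stopped \
  -e PROVISION_AGENT_KEY=YOUR_AGENT_KEY \
  -e PROVISION_HQ_URL=YOUR_LENSES_HQ_URL \
  lensesio/lenses-agent:latest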
This will start and connect to HQ but not to Kafka or other services. It will create a provisioning file in data/provisioning.
By default, the agent is configured to connect to Kafka on localhost. To change this, update the provisioning.yaml key. The information required here depends on how you want the Agent to authenticate against Kafka.
You can add connections in three ways:
Directly editing the provisioning file
The Lenses UI
APIs (which step 2 uses)
They all result in writing a provisioning file which the Agent picks up and loads.
You must manually add all the connections you want to the file and then mount it. To help you create a provisioning file you can use the JSON schema support. In your IDE, like VS Code, create a file called provisioning.yaml and add the following line at the top:
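# yaml-language-server: $schema=./agent/provisioning.schema-6.1.json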
Then start typing, for example k for Kafka and Kafka Connect, s for schema registry, or just ctrl+space to trigger the default templates.
Fill in the required fields; your editor should highlight issues for you.
Add lenses-agent.conf if you are overriding defaults like the embedded database.
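Then mount the files and start the agent:

docker run --name lenses-agent \
  -v $(pwd)/provisioning.yaml:/mnt/provision-secrets/provisioning.yaml \
  -v $(pwd)/lenses-agent.conf:/data/lenses-agent.conf \
  -e LENSES_PROVISIONING_PATH=/mnt/provision-secrets \
  lensesio/lenses-agent:latest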
See for examples of different authentication types for Kafka.
When you create an environment via the Lenses UI, you will be guided through the process to start the agent and configure the connections. The experience is similar to manually editing the provisioning file, but it uses the APIs to push down and test configurations.
You can also use the APIs directly. See .
This page describes installing the Lenses Agent via an archive on Linux.
To install the Agent from the archive you must:
Extract the archive
Configure the Agent
Start the Agent
Installation link
Links to the archives can be found here:
Extract the archive using the following command:
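tar -xvf lenses-agent-latest-linux64.tar.gz -C lenses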
Inside the extracted archive, you will find:
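lenses
├── lenses.conf ← edited and renamed from .sample
├── logback.xml
├── logback-debug.xml
├── bin/
├── lib/
├── licences/
├── logs/ ← created when you run Lenses
├── plugins/
├── storage/ ← created when you run Lenses
└── ui/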
To configure the agent's connection to Postgres and its provisioning file, see here in the configuration section.
Once the agent files are configured you can continue to start the agent.
You can connect the agent to HQ in two ways, all via provisioning:
Start the Agent with an AGENT_KEY via environment variables at minimum. You need to create an environment in HQ to get this key.
Or use a provisioning file that contains the connection to HQ, recommended for TLS-enabled HQs.
You can still reference environment variables if you use the file, e.g.:
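agentKey:
  value: ${LENSES_HQ_AGENT_KEY}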
By default the Agent will start with an embedded database. If you wish to use Postgres, recommended for production, see here. Database settings are set in lenses-agent.conf.
Start Lenses by running:
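bin/lenses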
or pass the location of the config file:
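bin/lenses lenses-agent.conf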
If you do not pass the location of lenses-agent.conf, the Agent will look for it inside the current (runtime) directory. If it does not exist, it will try its installation directory.
If the agent fails with an error message that security.conf does not exist, run the following command under the lenses directory:
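touch security.conf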
To stop Lenses, press CTRL+C.
Set the permissions of the lenses-agent.conf to be readable only by the lenses user.
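chmod 0600 /path/to/lenses-agent.conf
chown [lenses-user]:root /path/to/lenses-agent.conf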
The agent needs write access in 4-5 places in total:
[RUNTIME DIRECTORY]
When the Agent runs, it will create at least one directory under the directory it is run in:
[RUNTIME DIRECTORY]/logs
Where logs are stored
[RUNTIME DIRECTORY]/logs/sql-kstream-state
Where SQL processors (when in In Process mode) store state. To change the location for the processors' state directory, use the lenses.sql.state.dir option.
[RUNTIME DIRECTORY]/storage
Where the H2 embedded database is stored when PostgreSQL is not set. To change this directory, use the lenses.storage.directory option.
/run (Global directory for temporary data at runtime)
Used for temporary files. If Lenses does not have permission to use it, it will fall back to /tmp.
/tmp (Global temporary directory)
Used for temporary files (if access to /run fails), and JNI shared libraries.
Back-up this location for disaster recovery
The Agent and Kafka use two common Java libraries that take advantage of JNI and are extracted to /tmp.
You must either:
Mount /tmp without noexec
or set org.xerial.snappy.tempdir and java.io.tmpdir to a different location
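For example:

LENSES_OPTS="-Dorg.xerial.snappy.tempdir=/path/to/exec/tmp -Djava.io.tmpdir=/path/to/exec/tmp"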
If your server uses systemd as a Service Manager, you can use it to manage the Agent (start upon system boot, stop, restart). Below is a simple unit file that starts the Agent automatically on system boot.
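[Unit]
Description=Run Agent service

[Service]
Restart=always
User=[LENSES-USER]
Group=[LENSES-GROUP]
LimitNOFILE=4096
WorkingDirectory=/opt/lenses
#Environment=LENSES_LOG4J_OPTS="-Dlogback.configurationFile=file:/etc/lenses/logback.xml"
ExecStart=/opt/lenses/bin/lenses /etc/lenses/lenses-agent.conf

[Install]
WantedBy=multi-user.target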
The Agent uses the default trust store (cacerts) of the system’s JRE (Java Runtime) installation. The trust store is used to verify remote servers on TLS connections, such as Kafka Brokers with an SSL protocol, JMX over TLS, and more. Whilst for some types of connections (e.g. Kafka Brokers) a separate keystore can be provided at the connection’s configuration, for some other connections (JMX over TLS) we always rely on the system trust store.
It is possible to set up a global custom trust store via the LENSES_OPTS environment variable:
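export LENSES_OPTS="-Djavax.net.ssl.trustStore=/path/to/truststore.jks -Djavax.net.ssl.trustStorePassword=changeit"
bin/lenses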
Run on any Linux server (review ulimits) or container technology (Docker/Kubernetes). For RHEL 6.x and CentOS 6.x use Docker.
Linux machines typically have a soft limit of 1024 open file descriptors. Check your current limit with the ulimit command:
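ulimit -S -n # soft limit
ulimit -H -n # hard limit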
As a super-user, increase the soft limit to 4096 with:
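ulimit -S -n 4096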
Use 8GB RAM / 4 CPUs and 20GB of disk space.
This page describes an overview of the Lenses Agent configuration.
The Agent configuration is split between two files.
lenses-agent.conf
provisioning.yaml
lenses-agent.conf holds all the database connections and low-level options for the agent.
In the provisioning.yaml you define how to connect to your Kafka cluster, Schema Registries, Kafka Connect clusters and HQ. See here for more information.
The provisioning.yaml is watched by the Agent, so any changes made, if valid, are applied.
To help with creating a provisioning.yaml from your IDE you can use the provided JSON schema support. They are available in the following repo.
Add the following to the top of your YAML file
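# yaml-language-server: $schema=./agent/provisioning.schema-6.1.json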
You will then get auto-completion and validation. For example, type kafka at the start of a line to trigger the default snippets (templates) for Kafka.
For Schema Registry, type Schema; for Connect, type Connect; for Alerting, type Alerting, etc.
You require at minimum a lenses-hq connection and a kafka connection for the schema to be valid.
You do not need to use the default snippets, you can also use the auto completion for each connection type.
This page describes how to connect the Lenses Agent to your Kafka brokers.
The Lenses Agent can connect to any Kafka cluster or service exposing the Apache Kafka APIs and supporting the authentication methods offered by Apache Kafka.
This page describes connecting Lenses to an Azure HDInsight cluster.
Only one Kafka connection is allowed.
The name must be kafka.
See for support.
Environment variables are supported; escape the dollar sign
This page describes how to connect Lenses to IBM Event Streams.
Only one Kafka connection is allowed.
The name must be kafka.
See for support.
Environment variables are supported; escape the dollar sign
This page describes an overview of connecting a Lenses Agent with Schema Registries
Consider rate limiting if you have a high number of schemas.
Only one Schema Registry connection is allowed.
TLS and basic authentication are supported for connections to Schema Registries.
The Agent can collect Schema registry metrics via:
JMX
Jolokia
See
AVRO
PROTOBUF
JSON and XML formats are supported by Lenses but without a backing schema registry.
To enable the deletion of schemas in the UI, set the following in the lenses.conf file.
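## Enable schema deletion in the Lenses UI
## default: false
lenses.schema.registry.delete = true
## When a topic is deleted,
## automatically delete also its associated Schema Registry subjects
## default: false
lenses.schema.registry.cascade.delete = true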
IBM Event Streams supports hard deletes only
This page describes how to configure TLS for the Lenses Agent.
By default, the Agent does not provide TLS termination, but it can be enabled via a configuration option. TLS termination is recommended for enhanced security and is a prerequisite for integrating with SSO (Single Sign-On) via SAML 2.0.
TLS termination can be configured directly within the Agent or by using a TLS proxy or load balancer.
To use a non-default global truststore, set the path accordingly with the LENSES_OPTS variable.
To enable mutual TLS, set your keystore accordingly.
Rate limit the calls the Lenses Agent makes to Schema Registries and Connect Clusters.
To rate limit the calls the Agent makes to Schema Registries or Connect Clusters, set the following in the Agent configuration:
The exact values will depend on your setup, for example the number of schemas and how often new schemas are added, so some trial and error is required.
This page describes Identity & Access Management (IAM) in Lenses.
LENSES_OPTS=-Djavax.net.ssl.trustStore=/path/to/truststore

lenses.ssl.truststore.location = "/path/to/truststore.jks"
lenses.ssl.truststore.password = "changeit"

# To secure and encrypt all HTTPS connections to Lenses via TLS termination.
# Java Keystore location and passwords
lenses.ssl.client.auth = true
lenses.ssl.keystore.location = "/path/to/keystore.jks"
lenses.ssl.keystore.password = "changeit"
lenses.ssl.key.password = "changeit"
# You can also tweak the TLS version, algorithm and ciphers
#lenses.ssl.enabled.protocols = "TLSv1.2"
#lenses.ssl.cipher.suites = "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WIT

# schema registry
lenses.schema.registry.client.http.rate.type="sliding"
lenses.schema.registry.client.http.rate.maxRequests=200
lenses.schema.registry.client.http.rate.window="2 seconds"
# connect clusters
lenses.connect.client.http.rate.type="sliding"
lenses.connect.client.http.rate.maxRequests=200
lenses.connect.client.http.rate.window="2 seconds"

# Directory containing the provision.yaml files
lenses.provisioning.path=/my/dir

sslKeystorePassword:
  value: "\${ENV_VAR_NAME}"

kafka:
  - name: kafka
    version: 1
    tags: [my-tag]
    configuration:
      kafkaBootstrapServers:
        value:
          - PLAINTEXT://your.kafka.broker.0:9092
          - PLAINTEXT://your.kafka.broker.1:9092
      protocol:
        value: PLAINTEXT
      # all metrics properties are optional
      metricsPort:
        value: 9581
      metricsType:
        value: JMX
      metricsSsl:
        value: false

sslKeystorePassword:
  value: "\${ENV_VAR_NAME}"

kafka:
  - name: kafka
    version: 1
    tags: ['my-tag']
    configuration:
      kafkaBootstrapServers:
        value:
          - SASL_SSL://[YOUR_BOOTSTRAP_ENDPOINTS]
      protocol:
        value: SASL_SSL
      saslMechanism:
        value: PLAIN
      saslJaasConfig:
        value: |
          org.apache.kafka.common.security.plain.PlainLoginModule required
          username="token"
          password="[YOUR_API_KEY]";

Hardware & OS
Learn about the hardware & OS requirements for Linux archive installs.

JVM Options
Understand how to customize the Lenses JVM settings.

Logs
Understand and customize Lenses logging.
Overview
Learn about provisioning.
HQ
Learn how to connect the Agent to HQ.
Kafka
Learn how to connect the Agent to Kafka.
Schema Registries
Learn how to connect the Agent to Schema Registries.
Kafka Connect
Learn how to connect the Agent to Kafka Connect Clusters.
Zookeeper
Learn how to connect the Agent to Zookeeper.
AWS
Learn how to connect the Agent to AWS.
Alert & Auditing Integrations
Learn how to connect the Agent to Alert & Audit Integrations.
JMX
Learn how to connect the Agent to JMX for Kafka, Schema Registries, Kafka Connect and others.
Overview
Learn an overview of connecting the Lenses Agent to Schema Registries.
AWS Glue
Connect the Lenses Agent to your AWS Glue service for schema registry support.
Confluent
Connect the Lenses Agent to Confluent Schema Registry.
IBM Event Streams
Connect the Lenses Agent to IBM Event Streams Schema Registry
Apicurio
Connect the Lenses Agent to Apicurio.
Global Catalogue
Learn how to use the Global Catalogue.
Environment
Learn how to explore topics in an environment.


curl -L https://lenses.io/preview -o docker-compose.yml \
  && ACCEPT_EULA=true docker compose up -d --wait \
  && echo "Lenses.io is running on http://localhost:9991"

This page describes deploying a Lenses Agent via Docker.
The Agent docker image can be configured via environment variables or via volume mounts for the configuration files.
You can connect the agent to HQ in two ways, all via provisioning:
Start the Agent docker with an AGENT_KEY via environment variables at minimum. You need to create an environment in HQ to get this key.
Or mount a provisioning file that contains the connection to HQ, recommended for TLS-enabled HQs.
You can still reference environment variables if you mount the file, e.g.:
agentKey:
  value: ${LENSES_HQ_AGENT_KEY}

First deploy HQ and create an environment, then with the AGENT KEY run:

docker run \
  --name "xxx" \
  --network=lenses \
  --restart=unless-stopped \
  -e PROVISION_AGENT_KEY=YOUR_AGENT_KEY \
  -e PROVISION_HQ_URL=YOUR_LENSES_HQ_URL \
  lensesio/lenses-agent:latest

This will start and connect to HQ but not to Kafka or other services. It will create a provisioning file in data/provisioning.
docker run --name lenses-agent \
  -v $(pwd)/provisioning.yaml:/mnt/provision-secrets/provisioning.yaml \
  -e LENSES_PROVISIONING_PATH=/mnt/provision-secrets \
  lensesio/lenses-agent:6.0

Example provisioning files:
lensesHq:
- name: lenses-hq
version: 1
tags: ['hq']
configuration:
server:
value: [LENSES_HQ_URL]
port:
value: 10000
agentKey:
value: ${LENSES_HQ_AGENT_KEY}
sslEnabled:
value: true
sslTruststore:
file: "hq-truststore.jks"
sslTruststorePassword:
value: ${LENSES_HQ_AGENT_TRUSTSTORE_PWD}lensesHq:
- name: lenses-hq
version: 1
tags: ['hq']
configuration:
server:
value: [LENSES_HQ_URL]
port:
value: 10000
agentKey:
value: ${LENSES_HQ_AGENT_KEY}
sslEnabled:
value: false

By default, the Agent will start with an embedded database. If you wish to use Postgres, recommended for production, see here. Database settings are set in lenses-agent.conf.
Environment variables prefixed with LENSES_ are transformed into the corresponding configuration options. The environment variable name is converted to lowercase and underscores (_) are replaced with dots (.). For example, to set the option lenses.port, use the environment variable LENSES_PORT.
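As a minimal sketch of this mapping on the Agent docker (the values are illustrative):

docker run --name lenses-agent \
  -e LENSES_PORT=9991 \
  -e LENSES_STORAGE_POSTGRES_HOST=my-postgres-server \
  lensesio/lenses-agent:latest

Here LENSES_PORT maps to lenses.port and LENSES_STORAGE_POSTGRES_HOST maps to lenses.storage.postgres.host.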
Alternatively, the lenses-agent.conf can be mounted directly as
/mnt/settings/lenses-agent.conf
The Docker image exposes four volumes in total, where cache, logs, plugins, and persistent data are stored:
/data/storage
/data/plugins
/data/logs
/data/kafka-streams-state
Resides under /data/storage and is used to store persistent data, such as Data Policies. For this data to survive between Docker runs and/or Agent upgrades, the volume must be managed externally (persistent volume).
Resides under /data/plugins; this is where classes that extend the Agent may be added, such as custom Serdes, LDAP filters, UDFs for the Lenses SQL table engine, and custom_http implementations.
Resides under /data/logs; logs are stored here. The application also logs to stdout, so the log files aren't needed in most cases.
Resides under /data/kafka-streams-state, used when Lenses SQL is in IN_PROC configuration. In such a case, Lenses uses this scratch directory to cache Lenses SQL internal state. Whilst this directory can safely be removed, it can be beneficial to keep it around, so the Processors won’t have to rebuild their state during a restart.
By default, the Agent serves connections over plaintext (HTTP). It is possible to use TLS instead. The Docker image offers the ability to provide the content for extra files via secrets mounted as files or as environment variables. For SSL in particular, Docker supports SSL/TLS keys and certificates in Java Keystore (JKS) format.
This capability is optional, and users can mount such files under custom paths and configure lenses-agent.conf manually via environment variables, or lenses.append.conf.
There are two ways to use the File/Variable names of the table below.
Create a file with the appropriate filename as listed below and mount it under /mnt/settings, /mnt/secrets, or /run/secrets
Set them as environment variables.
All settings, except for passwords, can optionally be encoded in base64. The docker will detect such encoding automatically.
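For example, a sketch of base64-encoding a truststore into one of the variables listed below:

export FILECONTENT_JVM_SSL_TRUSTSTORE="$(base64 < /path/to/truststore.jks)"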
FILECONTENT_JVM_SSL_TRUSTSTORE
The SSL/TLS trust store to use as the global JVM trust store. Add to LENSES_OPTS the property javax.net.ssl.trustStore
FILECONTENT_JVM_SSL_TRUSTSTORE_PASSWORD
The trust store password. If set, the startup script will automatically add to LENSES_OPTS the property javax.net.ssl.trustStorePassword (base64 not supported)
FILECONTENT_LENSES_SSL_KEYSTORE
The SSL/TLS keystore to use for the TLS listener for the Agent
The docker does not require running as root. The default user is set to root for convenience and to verify upon start-up that all the directories and files have the correct permissions. The user drops to nobody and group nogroup (65534:65534) before starting the Agent.
If the image is started without root privileges, the agent will start successfully using the effective uid:gid applied. Ensure any volumes mounted (e.g., for the settings and data) have the correct permissions set.
This page describes configuring Keycloak SSO for Lenses authentication.
Go to Clients
Click Create
Fill in the details: see the table below.
Click Save
Client ID
Use the base.url of the Lenses installation e.g. https://lenses-dev.example.com
Client Protocol
Set it to saml
Client Saml Endpoint
This is the Lenses API point for Keycloak to call back. Set it to [BASE_URL]/api/v2/auth/saml/callback?client_name=SAML2Client. e.g. https://lenses-dev.example.com/api/v2/auth/saml/callback?client_name=SAML2Client
Change the settings on the client you just created to:
Name
Lenses
Description
(Optional) Add a description to your app.
SAML Signature Name
KEY_ID
Client Signature Required
OFF
Force POST Binding
ON
Front Channel Logout
OFF
Force Name ID Format
ON
Name ID Format
Root URL
Use the base.url of the Lenses installation e.g. https://lenses-dev.example.com
Valid Redirect URIs
Use the base.url of the Lenses installation e.g. https://lenses-dev.example.com
Configure Keycloak to communicate groups to Lenses. Head to the Mappers (under Client scope tab) section.
Click Create
Fill in the details: see table below.
Click Save
Name
Groups
Mapper Type
Group list
Group attribute name
groups (case-sensitive)
Single Group Attribute
ON
Full group path
OFF
SAML configuration is set in HQ's config.yaml file. See here for more details.
This page describes connecting Lenses to Azure EventHubs.
Azure EventHubs only support delete or compact as a topic cleanup policy.
Only one Kafka connection is allowed.
The name must be kafka.
See JSON schema for support.
Environment variables are supported; escape the dollar sign
sslKeystorePassword:
value: "\${ENV_VAR_NAME}"

Add a shared access policy
Navigate to your Event Hub resource and select Shared access policies in the Settings section.
Select + Add shared access policy, give it a name, and check all boxes for the permissions (Manage, Send, Listen).
Once the policy is created, obtain the Primary Connection String by clicking the policy and copying the connection string. The connection string will be used as a JAAS password to connect to Kafka.
The bootstrap broker is [YOUR_EVENT_HUBS_NAMESPACE].servicebus.windows.net:9093.
Set the following in the provisioning.yaml
Due to an Azure EventHubs limitation, the pricing tier for EventHub has to be at least Standard.
First, set the environment variable
export SASL_JAAS_CONFIG='org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="Endpoint=sb://[SB_URL]/;SharedAccessKeyName=[KEY_NAME];SharedAccessKey=[ACCESS_KEY]";'

kafka:
- name: kafka
version: 1
tags: [my-tag]
configuration:
kafkaBootstrapServers:
value:
- SASL_SSL://[YOUR_BOOTSTRAP_SERVER]
- SASL_SSL://[YOUR_BOOTSTRAP_SERVER]
saslJaasConfig:
value: '${SASL_JAAS_CONFIG}'
saslMechanism:
value: PLAIN
protocol:
value: SASL_SSL

connections:
kafka:
- name: Kafka
version: 1
tags: [my-tag]
configuration:
kafkaBootstrapServers:
value:
- SASL_SSL://[YOUR_BOOTSTRAP_SERVER]
- SASL_SSL://[YOUR_BOOTSTRAP_SERVER]
saslJaasConfig:
value: org.apache.kafka.common.security.plain.PlainLoginModule required username="\$ConnectionString" password="Endpoint=sb://[SB_URL]/;SharedAccessKeyName=[KEY_NAME];SharedAccessKey=[ACCESS_KEY]";
saslMechanism:
value: PLAIN
protocol:
value: SASL_SSL

This page describes connecting Lenses to AWS Glue.
The AWS Glue Schema Registry connection depends on an AWS connection.
Only one Schema Registry connection is allowed.
Name must be schema-registry.
See JSON schema for support.
sslKeystorePassword:
value: "\${ENV_VAR_NAME}"

These are examples of provisioning Lenses with an AWS connection named my-aws-connection and an AWS Glue Schema Registry that references it.
aws:
- name: my-aws-connection
tags: ["tag1"]
version: 1
configuration:
authMode:
value: Access Key
accessKeyId:
value: my-access-key-id
secretAccessKey:
value: my-secret-access-key
region:
value: eu-west-1
glueSchemaRegistry:
- name: schema-registry
tags: ["tag1"]
version: 1
configuration:
authMode:
reference: my-aws-connection
accessKeyId:
reference: my-aws-connection
secretAccessKey:
reference: my-aws-connection
glueRegistryArn:
value: arn:aws:glue:[region]:[account-id]:registry/[name]

aws:
- name: my-aws-connection
version: 1
tags: []
configuration:
region:
value: eu-north-1
authMode:
value: "Credentials Chain"
glueSchemaRegistry:
- name: schema-registry
version: 1
tags: []
templateName: SchemaRegistry
configuration:
authMode:
reference: my-aws-connection
glueRegistryArn:
value: arn:aws:glue:[region]:[account-id]:registry/[name]

aws:
- name: my-aws-connection
version: 1
tags: []
configuration:
region:
value: eu-north-1
authMode:
value: "Assume Role"
assumeRoleArn:
value: arn:aws:iam::[account-id]:role/[name]
assumeRoleSessionName:
value: [session-name]
glueSchemaRegistry:
- name: schema-registry
version: 1
tags: []
templateName: SchemaRegistry
configuration:
authMode:
reference: my-aws-connection
assumeRoleArn:
reference: my-aws-connection
assumeRoleSessionName:
reference: my-aws-connection
glueRegistryArn:
value: arn:aws:glue:[region]:[account-id]:registry/[name]

This page describes connecting Lenses to Confluent schema registries.
Only one Schema Registry connection is allowed.
Name must be schema-registry.
See JSON schema for support.
Environment variables are supported; escape the dollar sign
sslKeystorePassword:
value: "\${ENV_VAR_NAME}"

The URLs (nodes) should always have a scheme defined (http:// or https://).
confluentSchemaRegistry:
- name: schema-registry
tags: ["tag1"]
version: 1
configuration:
schemaRegistryUrls:
value:
- http://my-sr.host1:8081
- http://my-sr.host2:8081
## all metrics properties are optional
metricsPort:
value: 9581
metricsType:
value: JMX
metricsSsl:
value: false

For Basic Authentication, define username and password properties.
confluentSchemaRegistry:
- name: schema-registry
tags: ["tag1"]
version: 1
configuration:
schemaRegistryUrls:
value:
- http://my-sr.host1:8081
- http://my-sr.host2:8081
username:
value: my-username
password:
value: my-password

A custom truststore is needed when the Schema Registry is served over TLS (encryption-in-transit) and the Registry's certificate is not signed by a trusted CA.
confluentSchemaRegistry:
- name: schema-registry
tags: ["tag1"]
version: 1
configuration:
schemaRegistryUrls:
value:
- https://my-sr.host1:8081
- https://my-sr.host2:8081
sslTruststore:
file: schema-truststore.jks
sslTruststorePassword:
value: myPassword

A custom truststore might be necessary too (see above).
confluentSchemaRegistry:
- name: schema-registry
tags: ["tag1"]
version: 1
configuration:
schemaRegistryUrls:
value:
- https://my-sr.host1:8081
- https://my-sr.host2:8081
sslKeystore:
file: schema-keystore.jks
sslKeystorePassword:
value: myPassword

By default, Lenses will use soft delete for Schema Registry. To use hard delete, add the following property:
confluentSchemaRegistry:
- name: schema-registry
tags: ["tag1"]
version: 1
configuration:
schemaRegistryUrls:
value:
- http://my-sr.host1:8081
- http://my-sr.host2:8081
hardDelete:
value: true

This page describes adding a Zookeeper to the Lenses Agent.
Only one Zookeeper connection is allowed.
zookeeper:
- name: Zookeeper
version: 1
tags: ["tag1"]
configuration:
zookeeperUrls:
value:
- my-zookeeper-host-0:2181
- my-zookeeper-host-1:3181
- my-zookeeper-host-2:4181
# optional, a suffix to Zookeeper's connection string
zookeeperChrootPath:
value: "/mypath"
zookeeperSessionTimeout:
value: 10000 # in milliseconds
zookeeperConnectionTimeout:
value: 10000 # in milliseconds

Simple configuration with Zookeeper metrics read via JMX.
zookeeper:
- name: Zookeeper
version: 1
tags: ["tag1"]
configuration:
zookeeperUrls:
value:
- my-zookeeper-host-0:2181
- my-zookeeper-host-1:3181
- my-zookeeper-host-2:4181
# optional, a suffix to Zookeeper's connection string
zookeeperChrootPath:
value: "/mypath"
zookeeperSessionTimeout:
value: 10000 # in milliseconds
zookeeperConnectionTimeout:
value: 10000 # in milliseconds
# all metrics properties are optional
metricsPort:
value: 9581
metricsType:
value: JMX
metricsSsl:
value: false

With such a configuration, Lenses will use 3 Zookeeper nodes and will try to read their metrics from the following URLs (note the same port, 9581, used for all of them, as defined by the metricsPort property):
my-zookeeper-host-0:9581
my-zookeeper-host-1:9581
my-zookeeper-host-2:9581
Add a connection to AWS in the Lenses Agent.
The Agent uses an AWS connection in three places:
AWS IAM connection to MSK for Lenses itself
Connecting to AWS Glue
Alert channels to CloudWatch.
If the Agent is deployed on an EC2 instance or has access to AWS credentials via the default AWS credentials chain, those can be used instead.
See JSON schema for support.
Environment variables are supported; escape the dollar sign
sslKeystorePassword:
value: "\${ENV_VAR_NAME}"

Names must be non-empty strings containing only alphanumeric characters or dashes.
aws:
- name: my-aws-connection
version: 1
tags: [tag1, tag2]
configuration:
# Way to authenticate against AWS: Credentials Chain or Access Key
authMode:
value:
# Access key ID of an AWS IAM account.
accessKeyId:
value:
# Secret access key of an AWS IAM account.
secretAccessKey:
value:
# AWS region to connect to. If not provided, this is deferred to client
# configuration.
region:
value:
# Specifies the session token value that is required if you are using temporary
# security credentials that you retrieved directly from AWS STS operations.
sessionToken:
value:
# The Amazon Resource Name (ARN) of the IAM role to assume using AWS STS
assumeRoleArn:
value: arn:aws:iam::[account-id]:role/[name]
# An identifier for the assumed role session, used to uniquely distinguish
# sessions when assuming the same role multiple times
assumeRoleSessionName:
value: [session-name]

aws:
- name: my-aws-connection
tags: ["tag1"]
version: 1
configuration:
authMode:
value: Access Key
accessKeyId:
value: my-access-key-id
secretAccessKey:
value: my-secret-access-key
region:
value: eu-west-1

aws:
- name: my-aws-connection
version: 1
tags: []
configuration:
region:
value: eu-north-1
authMode:
value: "Credentials Chain"

aws:
- name: my-aws-connection
version: 1
tags: []
configuration:
region:
value: eu-north-1
authMode:
value: "Assume Role"
assumeRoleArn:
value: arn:aws:iam::[account-id]:role/[name]
assumeRoleSessionName:
value: [session-name]

This page describes the Kafka ACLs prerequisites for the Lenses Agent if ACLs are enabled on your Kafka clusters.
When your Kafka cluster is configured with an authorizer which enforces ACLs, the Agent will need a set of permissions to function correctly.
Common practice is to give the Agent superuser status or the complete list of available operations for all resources. The IAM model of Lenses can then be used to restrict the access level per user.
kafka-acls \
--bootstrap-server [broker.url:9092] --command-config [client.properties] \
--add \
--allow-principal [User:Lenses] \
--allow-host [lenses.host] \
--operation All \
--topic * \
--group * \
--delegation-token * \
--cluster

The Agent needs permission to manage and access its own internal Kafka topics:
__topology
__topology__metrics
kafka-acls \
--bootstrap-server [broker.url:9092] --command-config [client.properties] \
--add \
--allow-principal [User:Lenses] \
--allow-host [lenses.host] \
--operation All \
--topic [topic]

It also needs read and describe permissions for the consumer offsets and Kafka Connect topics, if enabled:
__consumer_offsets
connect-configs
connect-offsets
connect-status
kafka-acls \
--bootstrap-server [broker.url:9092] --command-config [client.properties] \
--add \
--allow-principal [User:Lenses] \
--allow-host [lenses.host] \
--operation Describe \
--operation DescribeConfigs \
--operation Read \
--topic [topic]

The same set of permissions is required for any topic the agent must have read access to.
kafka-acls \
--bootstrap-server [broker.url:9092] --command-config [client.properties] \
--add \
--allow-principal [User:Lenses] \
--allow-host [lenses.host] \
--operation Describe \
--operation DescribeConfigs \
--operation Read \
--topic *

Additional permissions are needed to produce to topics or manage them.
Permission to at least read and describe consumer groups is required to take advantage of the Consumer Groups' monitoring capabilities.
kafka-acls \
--bootstrap-server [broker.url:9092] --command-config [client.properties] \
--add \
--allow-principal [User:Lenses] \
--allow-host [lenses.host] \
--operation Describe \
--operation Read \
--group *

Additional permissions are needed to manage groups.
To manage ACLs, permission to the cluster is required:
kafka-acls \
--bootstrap-server [broker.url:9092] --command-config [client.properties] \
--add \
--allow-principal [User:Lenses] \
--allow-host [lenses.host] \
--operation Describe \
--operation DescribeConfigs \
--operation Alter \
--cluster

This page describes how to install plugins in the Lenses Agent.
The following implementations can be specified:
Serializers/Deserializers Plug in your serializer and deserializer to enable observability over any data format (e.g., protobuf / thrift)
Custom authentication Authenticate users on your proxy and inject permissions HTTP headers.
LDAP lookup Use multiple LDAP servers or your group mapping logic.
SQL UDFs User Defined Functions (UDF) that extend SQL and streaming SQL capabilities.
Once built, the jar files and any plugin dependencies should be added to the Agent and, in the case of Serializers and UDFs, to the SQL Processors if required.
On startup, the Agent loads plugins from the $LENSES_HOME/plugins/ directory and any location set in the environment variable LENSES_PLUGINS_CLASSPATH_OPTS. The Agent watches these locations, so dropping in a new plugin will hot-reload it. For the Agent docker (and Helm chart) use /data/plugins.
Any first-level directories under the paths mentioned above that are detected on startup will also be monitored for new files. During startup, the list of monitored locations is shown in the logs to help confirm the setup.
...
Initializing (pre-run) Lenses
Installation directory autodetected: /opt/lenses
Current directory: /data
Logback configuration file autodetected: logback.xml
These directories will be monitored for new jar files:
- /opt/lenses/plugins
- /data/plugins
- /opt/lenses/serde
Starting application
...

Whilst all jar files may be added to the same directory (e.g. /data/plugins), it is suggested to use a directory hierarchy to make management and maintenance easier.
An example hierarchy for a set of plugins:
├── security
│ └── sso_header_decoder.jar
├── serde
│ ├── protobuf_actions.jar
│ └── protobuf_clients.jar
└── udf
├── eu_vat.jar
├── reverse_geocode.jar
└── summer_sale_discount.jar

There are two ways to add custom plugins (UDFs and Serializers) to the SQL Processors: (1) by making a tar.gz archive available at an HTTP(s) address, or (2) by creating a custom docker image.
With this method, a tar archive, compressed with gzip, can be created that contains all plugin jars and their dependencies. Then this archive should be uploaded to a web server that the SQL Processors containers can access, and its address should be set with the option lenses.kubernetes.processor.extra.jars.url.
Step by step:
Create a tar.gz file that includes all required jars at its root:
tar -czf [FILENAME.tar.gz] -C /path/to/jars/ *

Upload to a web server, e.g. https://example.net/myfiles/FILENAME.tar.gz
Set
lenses.kubernetes.processor.extra.jars.url=https://example.net/myfiles/FILENAME.tar.gz

For the docker image, set the corresponding environment variable
LENSES_KUBERNETES_PROCESSOR_EXTRA_JARS_URL=https://example.net/myfiles/FILENAME.tar.gz

The SQL Processors inside Kubernetes use the docker image lensesio-extra/sql-processor. It is possible to build a custom image and add all the required jar files under the /plugins directory, then set the lenses.kubernetes.processor.image.name and lenses.kubernetes.processor.image.tag options to point to the custom image.
Step by step:
Create a Docker image using lensesio-extra/sql-processor:VERSION as a base and add all required jar files under /plugins:
FROM lensesio-extra/sql-processor:4.2
ADD jars/* /plugins

docker build -t example/sql-processor:4.2 .

Upload the docker image to a registry:
docker push example/sql-processor:4.2

Set
lenses.kubernetes.processor.image.name=example/sql-processor
lenses.kubernetes.processor.image.tag=4.2

For the docker image, set the corresponding environment variables
LENSES_KUBERNETES_PROCESSOR_IMAGE_NAME=example/sql-processor
LENSES_KUBERNETES_PROCESSOR_IMAGE_TAG=4.2

This page describes Users in Lenses.
Users can be created manually in Lenses. Users are of one of two types:
SSO, or
Basic Authentication
When creating a User, you can assign them group memberships.
Each user, once logged in, can update their Name, Profile Photo and set an email address.
To create a User, go to IAM->Users->New User; once created you can assign the user to a group.
You can also manage Users via the CLI and YAML, for integration in your CI/CD pipelines.
➜ lenses users
Usage:
lenses users [command]
Aliases:
users, usr, u
Available Commands:
create Creates a new user.
delete Deletes a user.
get Returns a specific user
get-current Returns the currently authenticated user
list Returns all users
metadata Manages user metadata.
set-groups Assigns the given user exactly to the provided groups, ensuring they are not part of any other groups.
update Updates a user.
update-profile Allows updating fields of the user profile.

This page describes IAM groups in Lenses.
Groups are a collection of users, service accounts and roles.
Users can be assigned to Groups in two ways:
Manual
Linked from the groups provided by your SSO provider
This behaviour can be toggled in the organizational settings of your profile. To control the default, set the following in the config.yaml for HQ.
users_group_membership_management_mode: [manual|sso]

Groups can be defined with the following metadata:
Colour
Description
Each group has a resource name that uniquely identifies it across an HQ installation.
To create a Group, go to IAM->Groups->New Group, create the group, assign members, service accounts and roles.
You can also manage Groups via the CLI and YAML, for integration in your CI/CD pipelines.
➜ lenses groups
Manage Groups.
Usage:
lenses groups [command]
Aliases:
groups, grp
Available Commands:
create Creates a new Group.
delete Deletes a group.
get Gets a group by its name.
list Lists all groups
metadata Manages group metadata.
update Updates a group.

This page describes Service Accounts in Lenses.
Service accounts are intended for programmatic access to Lenses.
Each service account has a key that is used to authenticate and identify the service account.
In addition you can set:
Description
Resource name - Must be unique across Lenses.
Key expiry
Regenerate the key
Key expiry can be 7, 30, 60, 90 days, 1 year or a custom expiration or no expiration at all.
To create a Service Account, go to IAM->Service Accounts->New Service Account; once created you can then assign service accounts to groups.
You can also manage Service Accounts via the CLI and YAML, for integration in your CI/CD pipelines.
➜ hq service-accounts
Manage ServiceAccounts.
Usage:
hq service-accounts [command]
Aliases:
service-accounts, sa
Available Commands:
create Creates a new ServiceAccount.
delete Deletes a ServiceAccount.
get Returns a specific ServiceAccount.
list Returns all ServiceAccounts.
metadata Manages service-account metadata.
renew-token Renews the service account's token. The current token is invalidated and a new one is generated. An optional expiration timestamp can be provided.
set-groups Assigns the given service account exactly to the provided groups, ensuring they are not part of any other groups.
update Updates a service account.

When interacting with Lenses via APIs, set the service account token in the Authorization header:
"Authorization": "Bearer sa_token"This page describes configuring Google SSO for Lenses authentication.
Given the base URL of the Lenses installation, e.g. https://lenses-dev.example.com, fill out the settings:
Add a mapping from the custom attribute for Lenses groups to the app attribute groups
From the newly added app details screen, select User access
Turn on the service
Download the Federation Metadata XML file with the Google IdP details.
This page describes an overview of Lenses Agent Provisioning.
Connections are defined in the provisioning.yaml file. The Agent will watch the file and resolve the desired state, applying connections defined in the file.
When deploying via Helm, the provisioning.yaml is part of the Agent values.yaml file.
The minimum configuration needed is a configuration for Lenses HQ. Once the connection is established you can use the Lenses APIs to configure and test the remaining connections, or at start up provide the full configuration.
The APIs will validate the schema and connectivity and, if valid, update the file used by the Agent, i.e. the file provided at start-up.
The file is the source of truth for connection management.
Connections are defined in the provisioning.yaml. This file is divided into components, each component representing a type of connection.
For each component, the following fields are mandatory:
Name - the free-form name of the connection
Version - set to 1
Configuration - a list of keys/values dependent on the component type.
To help you create a provisioning file you can use the JSON schema support. In your IDE, like VS Code, create a file called provisioning.yaml and add the following line at the top:
# yaml-language-server: $schema=https://raw.githubusercontent.com/lensesio/json-schemas/refs/heads/main/agent/provisioning.schema.json
Then start typing, for example k for Kafka and Kafka Connect, s for Schema Registry, or just ctrl+space to trigger the default templates.
Fill in the required fields, your editor should highlight issues for you.
The provisioning.yaml contains secrets. If you are deploying via Helm, the chart will use Kubernetes secrets.
Support is provided for referencing environment variables. This allows you to set secrets in your environment and resolve the value at runtime.
Escape the dollar sign
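For example:

sslKeystorePassword:
  value: "\${ENV_VAR_NAME}"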
Many connections need files, for example, to secure Kafka with SSL you will need a key store and optionally a trust store.
To reference a file in the provisioning.yaml, for example, given:
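configuration:
  protocol:
    value: SASL_SSL
  sslKeystore:
    file: "my-keystore.jks"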
a file called my-keystore.jks is expected in the same directory.
This page describes how to connect Lenses to an Amazon MSK Serverless cluster.
Only one Kafka connection is allowed.
The name must be kafka.
See JSON schema for support.
Environment variables are supported; escape the dollar sign
It is recommended to install the Agent on an EC2 instance or with EKS in the same VPC as your MSK Serverless cluster.
Enable communications between the Agent & the Amazon MSK Serverless cluster by opening the Amazon MSK Serverless cluster's security group in the AWS Console and add the IP address of your Agent installation.
To authenticate the Agent & access resources within our MSK Serverless cluster, we'll need to create an IAM policy and apply that to the resource (EC2, EKS cluster, etc) running the Agent service. here is an example IAM policy with sufficient permissions which you can associate with the relevant IAM role:
MSK Serverless IAM to be used after cluster creation. Update this IAM policy with the relevant ARN.
Click your MSK Serverless Cluster in the MSK console and select View Client Information page to check the bootstrap server endpoint.
To enable the creation of SQL Processors that create consumer groups, you need to add the following statement in your IAM policy:
Update the placeholders in the IAM policy based on the relevant MSK Serverless cluster ARN.
To integrate with the AWS Glue Schema Registry, you also need to add the following statement for the registries and schemas in your IAM policy:
Update the placeholders in the IAM policy based on the relevant MSK Serverless cluster ARN.
To integrate with the AWS Glue Schema Registry, you also need to modify the security policy for the registry and schemas, which results in additional functions within it:
More details about how IAM works with MSK Serverless can be found in the documentation:
When using the Agent with MSK Serverless:
The agent does not receive Prometheus-compatible metrics from the brokers because they are not exported outside of CloudWatch.
The agent does not configure quotas and ACLs because MSK Serverless does not allow this.
This page describes how to retrieve Lenses Agent JMX metrics.
The JMX endpoint is managed by the lenses.jmx.port option. To disable JMX, leave the option empty.
To enable monitoring of the Agent metrics:
To export via Prometheus exporter:
The Agent Docker image (lensesio/lenses) automatically sets up the Prometheus endpoint. You only have to expose the 9102 port to access it.
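For example, a minimal sketch of exposing the port when running the Agent container:

docker run -p 9102:9102 lensesio/lenses-agent:latest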
This is done in two parts: first, setting up the files that the JMX agent requires; second, the options we need to pass to the agent.
First let’s create a new folder called jmxremote
To enable basic auth JMX, first create two files:
jmxremote.access
jmxremote.password
The password file has the credentials that the JMX agent will check during client authentication.
The above code registers 2 users.
UserA:
username: admin
password: admin
UserB:
username: guest
password: admin
The access file has authorization information, like who is allowed to do what.
In the above code, we can see that the admin user can do read and write operations in JMX, while the guest user can only read the JMX content.
Now, to enable JMX with basic auth protection, we need to pass the following options to the environment of the JRE that runs the Java process whose JMX you want to protect.
Let’s assume this java process is Kafka.
Change the permissions on both files so that only the owner can edit and view them.
If you do not change the permissions to 0600 and to the user that will run the JRE process, the JMX agent will raise an error complaining that the process is not the owner of the files used for authentication and authorization.
Finally export the following options in the user’s env which will run Kafka.
First setup JMX with basic auth as shown in the Secure JMX: Basic Auth page.
To enable TLS Encryption/Authentication in JMX you need a jks keystore and truststore.
Please note that both the JKS truststore and keystore should have the same password.
The reason for this is that the javax.net.ssl class will use the password you pass for the keystore as the key password.
Let's assume this java process is Kafka and that you have installed keystore.jks and truststore.jks under /etc/certs.
Export the following options in the user’s env which will run Kafka.
LENSES_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.local.only=false -Djava.rmi.server.hostname=[HOSTNAME]"

export LENSES_OPTS="-javaagent:/path/to/jmx_exporter/fastdata_agent.jar=9102:/path/to/jmx_exporter/client.yml"

mkdir -vp /etc/jmxremote

cat /etc/jmxremote/jmxremote.password
admin admin
guest admin

cat /etc/jmxremote/jmxremote.access
admin readwrite
guest readonly

chmod -R 0600 /etc/jmxremote
chown -R <user-that-will-run-kafka-name>:<user-that-will-run-kafka-group> /etc/jmxremote/jmxremote.*

export BROKER_JMX_OPTS="-Dcom.sun.management.jmxremote=true \
-Dcom.sun.management.jmxremote.authenticate=true \
-Dcom.sun.management.jmxremote.ssl=false \
-Dcom.sun.management.jmxremote.local.only=false \
-Djava.rmi.server.hostname=10.15.3.1 \
-Dcom.sun.management.jmxremote.rmi.port=9581 \
-Dcom.sun.management.jmxremote.access.file=/etc/jmxremote/jmxremote.access \
-Dcom.sun.management.jmxremote.password.file=/etc/jmxremote/jmxremote.password \
-Dcom.sun.management.jmxremote.port=9581"

export BROKER_JMX_OPTS="-Dcom.sun.management.jmxremote=true \
-Dcom.sun.management.jmxremote.authenticate=true \
-Dcom.sun.management.jmxremote.ssl=true \
-Dcom.sun.management.jmxremote.local.only=false \
-Djava.rmi.server.hostname=10.15.3.1 \
-Dcom.sun.management.jmxremote.rmi.port=9581 \
-Dcom.sun.management.jmxremote.access.file=/etc/jmxremote.access \
-Dcom.sun.management.jmxremote.password.file=/etc/jmxremote.password \
-Dcom.sun.management.jmxremote.port=9581 \
-Djavax.net.ssl.keyStore=/etc/certs/kafka.jks \
-Djavax.net.ssl.keyStorePassword=somePassword \
-Djavax.net.ssl.trustStore=/etc/certs/truststore.jks \
-Djavax.net.ssl.trustStorePassword=somePassword \
-Dcom.sun.management.jmxremote.registry.ssl=true \
-Dcom.sun.management.jmxremote.ssl.need.client.auth=true"

ACS URL
Use the base url with the callback path e.g. https://lenses-dev.example.com/api/v2/auth/saml/callback?client_name=SAML2Client
Entity ID
Use the base url e.g. https://lenses-dev.example.com
Start URL
Leave empty
Signed Response
Leave unchecked
Name ID format
Leave as UNSPECIFIED
Name ID
Leave as Basic Information > Primary Email
sslKeystorePassword:
value: "\${ENV_VAR_NAME}"

{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"kafka-cluster:Connect",
"kafka-cluster:AlterCluster",
"kafka-cluster:DescribeCluster"
],
"Resource": "arn:aws:kafka:[region]:[aws_account_id]:cluster/[cluster_name]/[cluster_uuid]/*"
},
{
"Effect": "Allow",
"Action": [
"kafka-cluster:DescribeTopic",
"kafka-cluster:CreateTopic",
"kafka-cluster:WriteData",
"kafka-cluster:ReadData"
],
"Resource": "arn:aws:kafka:[region]:[aws_account_id]:topic/[cluster_name]/[cluster_uuid]/*"
},
{
"Effect": "Allow",
"Action": [
"kafka-cluster:AlterGroup",
"kafka-cluster:DescribeGroup"
],
"Resource": "arn:aws:kafka:[region]:[aws_account_id]:group/[cluster_name]/[cluster_uuid]/*"
}
]
}

kafka:
tags: ["optional-tag"]
name: kafka
configuration:
kafkaBootstrapServers:
value:
- SASL_SSL://your.kafka.broker.0:9098
- SASL_SSL://your.kafka.broker.1:9098
protocol:
value: SASL_SSL
saslMechanism:
value: AWS_MSK_IAM
saslJaasConfig:
value: software.amazon.msk.auth.iam.IAMLoginModule required;
additionalProperties:
value:
sasl.client.callback.handler.class: "software.amazon.msk.auth.iam.IAMClientCallbackHandler"

{
"Action": [
"kafka-cluster:*Topic*",
"kafka-cluster:WriteData",
"kafka-cluster:ReadData"
],
"Resource": "arn:aws:kafka:[region]:[aws_account_id]:cluster/[cluster_name]/[cluster_uuid]/*"
}

{
"Action": [
"kafka-cluster:*Group*"
],
"Resource": "arn:aws:kafka:[region]:[aws_account_id]:cluster/[cluster_name]/[cluster_uuid]/*"
}

{
"Action": [
"glue:DeregisterDataPreview",
"glue:ListRegistries",
"glue:CreateRegistry",
"glue:RegisterSchemaVersion",
"glue:GetRegistry",
"glue:UpdateRegistry",
"glue:ListSchemas",
"glue:DeleteRegistry",
"glue:GetSchema",
"glue:CreateSchema",
"glue:ListSchemaVersions",
"glue:GetSchemaVersion",
"glue:UpdateSchema",
"glue:DeleteSchemaVersions"
],
"Resource": [
"arn:aws:glue:[region]:[aws_account_id]:registry/*",
"arn:aws:glue:[region]:[aws_account_id]:schema/*"
]
}

✅ Provisioning (v2) is mandatory. MAKE SURE PROVISIONING CONTAINS ALL CONNECTIONS INCLUDING ALERT & AUDIT CHANNELS
✅ Main element managing all agents which are connecting to Kafka
❌ It no longer uses the ingress controller
✅ Supports only Postgres as a database
✅ All security elements are moved from security.conf to Lenses HQ
✅ Holder of license
✅ Remained the same in terms of functionalities; authentication and authorization were moved to HQ, along with new features.
✅ Single pane of glass for all engineers to check the whole Kafka ecosystem
❌ No Wizard / UI for making connections between Agent and any component of Kafka ecosystem
❌ Cannot work without HQ
✅ PostgresDB is recommended for Production systems (H2 embedded available as of v6.0.6 for non production deployments)
❌ No longer holder of license
SLA requires < 5 minutes downtime or follows blue-green deployment patterns.
Downtime windows are acceptable.
Connections in Lenses 4/5 have been set by Wizard mode.
Configuration is simple (already use Provision v2).
Resources available for parallel infrastructure.
Quick upgrade is priority.
Wants to spend more time exploring IAMs and new potential permissions for certain AD Groups.
Enterprise deployment with SQL processors, complex data policies, and multi-tenant configurations.

# yaml-language-server: $schema=./agent/provisioning.schema-6.1.json

sslKeystorePassword:
  value: "\${ENV_VAR_NAME}"

configuration:
protocol:
value: SASL_SSL
sslKeystore:
file: "my-keystore.jks"
kafka:
- name: Kafka
version: 1
tags: [my-tag]
configuration:
kafkaBootstrapServers:
value:
- PLAINTEXT://your.kafka.broker.0:9092
- PLAINTEXT://your.kafka.broker.1:9092
protocol:
value: PLAINTEXT
# all metrics properties are optional
metricsPort:
value: 9581
metricsType:
value: JMX
metricsSsl:
value: false

This page describes how to use the provisioning API.
The Lenses Provisioning System allows you to manage Lenses connections declaratively through YAML manifests. It provides a GitOps-friendly approach to managing your Lenses infrastructure, enabling version control, automated deployments, and consistent configuration across environments.
Declarative Configuration: Define your entire Lenses infrastructure in YAML
File Management: Upload and manage SSL certificates, keystores, and other binary files
Validation: Comprehensive validation with detailed error messages
Selective Updates: Update only specific connections without affecting others
File Preservation: Existing files are preserved when not explicitly replaced
Connectivity Testing: Optional connectivity validation for all connections
Files are uploaded as part of the multipart form data:
curl -X POST "https://lenses-server/api/v1/state/connections/upload" \
-H "Authorization: Bearer your-token" \
-F "provisioning=@provisioning.yaml" \
-F "keystore.jks=@keystore.jks" \
-F "truststore.jks=@truststore.jks"

When updating connections, existing files are preserved if not explicitly provided in the new request. This allows for selective updates without losing existing SSL certificates or other files.
File names must match the file names in the actual provisioning file.
Endpoint: POST /api/v1/state/connections/upload
Description: Uploads a complete provisioning manifest with files. This replaces the entire connection state.
Request: multipart/form-data
provisioning: YAML manifest file
Additional files: SSL certificates, keystores, etc.
Response: ProvisioningValidationResponse
POST /api/v1/state/connections/upload
Content-Type: multipart/form-data
--boundary
Content-Disposition: form-data; name="provisioning"; filename="provisioning.yaml"
Content-Type: text/plain
kafka:
- name: my-kafka
version: 1
tags: ["production"]
configuration:
kafkaBootstrapServers:
value: ["localhost:9092"]
protocol:
value: "PLAINTEXT"
--boundary
Content-Disposition: form-data; name="keystore.jks"; filename="keystore.jks"
Content-Type: application/octet-stream
[binary keystore content]
--boundary--

Endpoint: POST /api/v1/state/connections/validate/upload
Description: Validates a provisioning manifest without applying changes (dry-run).
Request: Same as upload endpoint
Response: ProvisioningValidationResponse
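For example, a dry-run validation mirroring the upload call above (a sketch; the host, token, and file name are illustrative):

curl -X POST "https://lenses-server/api/v1/state/connections/validate/upload" \
  -H "Authorization: Bearer your-token" \
  -F "provisioning=@provisioning.yaml"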
Endpoint: GET /api/v1/state/connections
Description: Retrieves the current provisioning.yaml file contents.
Response: Raw YAML content
This page describes adding a Kafka Connect Cluster to the Lenses Agent.
Lenses integrates with Kafka Connect Clusters to manage connectors.
The name of a Kafka Connect Connection may only contain alphanumeric characters ([A-Za-z0-9]) and dashes (-). Valid examples would be dev, Prod1, SQLCluster, Prod-1, SQL-Team-Awesome.
Multiple Kafka Connect clusters are supported.
If you are using Kafka Connect < 2.6 set the following to ensure you can see Connectors
lenses.features.connectors.topics.via.api.enabled=false
Consider Rate Limiting if you have a high number of connectors.
See JSON schema for support.
Environment variables are supported; escape the dollar sign
sslKeystorePassword:
value: "\${ENV_VAR_NAME}"

Names must be non-empty strings containing only alphanumeric characters or dashes.
The URLs (workers) should always have a scheme defined (http:// or https://).
connect:
- name: my-connect-cluster-name
version: 1
tags: ["tag1"]
configuration:
workers:
value:
- http://my-kc.worker1:8083
- http://my-kc.worker2:8083
metricsPort:
value: 9585
metricsType:
value: JMX

For Basic Authentication, define username and password properties.
connect:
- name: my-connect-cluster-name
tags: ["tag1"]
version: 1
configuration:
workers:
value:
- http://my-kc.worker1:8083
- http://my-kc.worker2:8083
username:
value: my-username
password:
value: my-password

A custom truststore is needed when the Kafka Connect workers are served over TLS (encryption-in-transit) and their certificates are not signed by a trusted CA.
connect:
- name: my-connect-cluster-name
tags: ["tag1"]
version: 1
configuration:
workers:
value:
- http://my-kc.worker1:8083
- http://my-kc.worker2:8083
sslTruststore:
file: /connect-truststore.jks
sslTruststorePassword:
value: myPassword

A custom truststore might be necessary too (see above).
connect:
- name: my-connect-cluster-name
tags: ["tag1"]
version: 1
configuration:
workers:
value:
- http://my-kc.worker1:8083
- http://my-kc.worker2:8083
sslKeystore:
file: connect-keystore.jks
sslKeystorePassword:
value: myPassword

If you have developed your own Connector or are not using a Lenses connector, you can still display the connector instances in the topology. To do this, Lenses needs to know the configuration option of the Connector that defines which topic the Connector reads from or writes to. This is set in the connectors.info parameter in the lenses.conf file.
connectors.info = [
{
class.name = "The connector full classpath"
name = "The name which will be presented in the UI"
instance = "Details about the instance. Contains the connector configuration field which holds the information. If a database is involved it would be the DB connection details, if it is a file it would be the file path, etc"
sink = true
extractor.class = "The full classpath for the implementation knowing how to extract the Kafka topics involved. This is only required for a Source"
icon = "file.png"
description = "A description for the connector"
author = "The connector author"
}
]

Connect the Lenses Agent to your alerting and auditing systems.
The Agent can send out alerts and audits events. Once you have configured alert and audit connections, you can create alert and audit channels to route events to them.
Names must be non-empty strings containing only alphanumeric characters or dashes.
datadog:
- name: my-datadog-connection
version: 1
tags: [tag1, tag2]
configuration:
# The Datadog site.
site:
value:
# The Datadog API key.
apiKey:
value:
# The Datadog application key.
applicationKey:
value:

See AWS connection.
pagerduty:
- name: my-pagerduty-connection
version: 1
tags: [tag1, tag2]
configuration:
# An Integration Key for PagerDuty's service with Events API v2 integration type.
integrationKey:
value:

slack:
- name: my-slack-connection
version: 1
tags: [tag1, tag2]
configuration:
# The Slack endpoint to send the alert to.
webhookUrl:
value:

alertManager:
- name: my-alertmanager-connection
version: 1
tags: [tag1, tag2]
configuration:
# Comma separated list of Alert Manager endpoints.
endpoints:
value:

webhook:
- name: my-webhook-alert-connection
version: 1
tags: [tag1, tag2]
configuration:
# The host name for the HTTP Event Collector API of the Splunk instance.
host:
value:
# The port number for the HTTP Event Collector API of the Splunk instance. (int)
port:
value:
# Set to true in order to set the URL scheme to https.
# Will otherwise default to http.
useHttps:
value:
# An array of (secret) strings to be passed over to alert channel plugins.
creds:
value:
-
-

webhook:
- name: my-webhook-audit-connection
version: 1
tags: [tag1, tag2]
configuration:
# The host name for the HTTP Event Collector API of the Splunk instance.
host:
value:
# The port number for the HTTP Event Collector API of the Splunk instance. (int)
port:
value:
# Set to true in order to set the URL scheme to https.
# Will otherwise default to http.
useHttps:
value:
# An array of (secret) strings to be passed over to alert channel plugins.
creds:
value:
-
-

splunk:
- name: my-splunk-connection
version: 1
tags: [tag1, tag2]
configuration:
# The host name for the HTTP Event Collector API of the Splunk instance.
host:
value:
# The port number for the HTTP Event Collector API of the Splunk instance. (int)
port:
value:
# Use TLS. Boolean, default false
useHttps:
value:
# This is not encouraged but is required for a Splunk Cloud Trial instance. Bool
insecure:
value:
# HTTP event collector authorization token. (string)
token:
value:

This page describes configuring the database connection for the Lenses Agent. There are two options for the backing storage: Postgres or Microsoft SQL Server.
Once you have created a role for the agent to use, you can then configure the Agent in the lenses.conf file:
lenses.storage.postgres.host="my-postgres-server"
lenses.storage.postgres.port=5432
lenses.storage.postgres.username="lenses_agent"
lenses.storage.postgres.database="lenses_agent"
lenses.storage.postgres.password="changeme"

Additional configurations for the PostgreSQL database connection can be passed under the lenses.storage.postgres.properties configuration prefix.
One Postgres server can be used for all agents by giving each agent a separate database or schema.
For the Agent, see lenses.storage.postgres.schema or lenses.storage.postgres.database.
The supported parameters can be found in the PostgreSQL documentation. For example:
# require SSL encryption with full host verification
lenses.storage.postgres.properties.ssl=true
lenses.storage.postgres.properties.sslmode="verify-full"
lenses.storage.postgres.properties.sslcert="/path/to/certs/lenses.crt.pem"
lenses.storage.postgres.properties.sslkey="/path/to/certs/lenses.key.pk8"
lenses.storage.postgres.properties.sslpassword="mypassword"
lenses.storage.postgres.properties.sslrootcert="/path/to/certs/CA.crt.pem"

# login as superuser and add Lenses role and database
psql -U postgres -d postgres <<EOF
CREATE ROLE lenses_agent WITH LOGIN PASSWORD 'changeme';
CREATE DATABASE lenses_agent OWNER lenses_agent;
EOF

To configure Lenses to use a Microsoft SQL Server database, add the following settings to your lenses.conf file. This example mirrors the structure of the PostgreSQL configuration above.
lenses.storage.mssql.host="my-mssql-server"
lenses.storage.mssql.port=1433
lenses.storage.mssql.database="lenses_db"
lenses.storage.mssql.schema="lenses_schema"
lenses.storage.mssql.username="lenses_user"
lenses.storage.mssql.password="changeme"

Before starting Lenses, you must create the database, schema, and login credentials on your Microsoft SQL Server instance. You can use a tool like SQL Server Management Studio (SSMS) or the sqlcmd command-line utility to execute these commands.
-- Create the database for Lenses
CREATE DATABASE lenses_db;
GO
-- Switch to the newly created database
USE lenses_db;
GO
-- Create a login (user) for Lenses to use
CREATE LOGIN lenses_user WITH PASSWORD = 'changeme';
GO
-- Create a database user linked to the login
CREATE USER lenses_user FOR LOGIN lenses_user;
GO
-- Create a schema for Lenses
CREATE SCHEMA lenses_schema;
GO
-- Grant the necessary permissions to the user on the schema
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE TABLE, ALTER ON SCHEMA::lenses_schema TO lenses_user;
GO

You can pass additional JDBC driver properties using the lenses.storage.mssql.properties prefix. This is useful for enabling features like connection encryption. The supported parameters can be found in the Microsoft JDBC Driver documentation.
For example, to enforce SSL encryption and validate the server certificate:
# Require SSL encryption
lenses.storage.mssql.properties.encrypt="true"
lenses.storage.mssql.properties.trustServerCertificate="false"
lenses.storage.mssql.properties.hostNameInCertificate="my-mssql-server.example.com"

The Agent uses the HikariCP library for high-performance database connection pooling.
The default settings should perform well but can be overridden via the lenses.storage.hikaricp configuration prefix. The supported parameters can be found in the HikariCP documentation.
For example:
# set maximumPoolSize to 25
lenses.storage.hikaricp.maximum.pool.size=25

LRNs (Lenses Resource Names) uniquely identify all resources that Lenses understands. Examples are a Lenses User, a Kafka topic or a Kafka-Connect connector.
Use an LRN to specify a resource across all of Lenses, unambiguously:
To add topic permissions for a team in IAM permissions.
To share a consumer-group reference with a colleague.
The top-level format has 3 parts called segments. A colon : separates them:
service is the namespace of the Lenses service that manages a set of resource types.
e.g. kafka for things like topics and consumer groups.
resource-type is the type of resources that are served by a service.
e.g. topic for a Kafka topic, consumer-group for a Kafka consumer group. They both belong to the kafka service.
resource-id is the unique name or path that identifies a resource. The resource ID is specific to a service and resource type. The resource ID can be:
a single resource name, e.g. :
[email protected] for a user resource name.
The full LRN would be iam:user:[email protected].
a nested resource path that contains slashes / e.g. :
dev-environment/kafka/my-topic for a kafka topic.
The full LRN would be kafka:topic:dev-environment/kafka/my-topic.
IAM user
Kafka topic
Kafka consumer group
Schema Registry schema
Kafka Connect connector
LRNs separate top-level segments with a colon : and resource path segments with a slash /.
A segment may have:
Alphanumeric characters: a-z, A-Z, 0-9
Hyphen symbols only: -
Use the wildcard asterisk * to express catch-all LRNs.
Use these examples to express multiple resources easily.
Avoid these examples because they are ambiguous. Lenses does not allow them.
service:resource-type:resource-id

iam:user:[email protected]
kafka:topic:dev-environment/kafka/my-topic
kafka:consumer-group:dev-environment/kafka/my-consumer-group
schemas:schema:dev-environment/schema-registry/my-topic-value
kafka-connect:connector:dev-environment/connect-cluster-1/my-s3-sink

*
*
Global wildcard.
Capture all the resources that Lenses manages.
"Everything"
service:*
kafka:*
Service-specific wildcard.
Capture all the resources for a service.
"All Kafka resources in all environments, i.e. topics, consumer groups, acls and quotas"
service:resource-type:*
kafka:topic:*
Resource-type-specific wildcard.
Capture all the resources for a type of resources of a service.
"All Kafka topics in all environments"
service:resource-type:parent/*/grandchild
kafka-connect:connector:dev-environment/*/my-s3-sink
Path segment wildcard.
Capture a part of the resource path.
"All connectors named 'my-s3-sink' in all Connect clusters under the environment 'dev-environment' "
service:resource-type:resourcePa*
kafka:topic:dev-environment/kafka/red-*
Trailing wildcard.
This wildcard is at the end of an LRN. It acts as a 'globstar' (**) and matches against the rest of the string.
Capture the resources that start with the given path prefix.
"All Kafka topics in the environment 'dev-environment' whose name starts with 'red-' "
service:resource-type:paren*/chil*/grandchil*
kafka-connect:connector:dev*/sinks*/s3*
Path suffix wildcard.
Capture resources where different path segments start with certain prefixes.
"All connectors in all environments that start with 'dev', within any Connect cluster that starts with 'sinks' and where the connector name starts with 's3' "
servic*:resource-type:resource-id
kafk*::dev-environment/
or
:topic:dev-environment/
No wildcards allowed at the service level. A service must be its full string.
Global wildcard *
service:resource-typ*:resource-id
kafka:topi*:dev-environment/*
No wildcards allowed at the resource-type level. A resource type must be its full string.
Service-specific wildcard service:*
No resource-id segments allowed in this case.
This page describes how to configure JMX metrics for Connections in Lenses.
All core services (Kafka, Schema Registry, Kafka Connect, Zookeeper) use the same set of properties for services’ monitoring.
The Agent will discover all the brokers by itself and will try to fetch metrics using metricsPort, metricsCustomUrlMappings and other properties (if specified).
See JSON schema for support.
Environment variables are supported; escape the dollar sign
sslKeystorePassword:
value: "\${ENV_VAR_NAME}"

[connection]
configuration:
metricsPort:
value: 9581
metricsType:
value: JMX
metricsSsl:
value: false
metricsUsername:
value: user
metricsPassword:
value: pass

The same port used for all brokers/workers/nodes. No SSL, no authentication.
kafka:
tags: []
name: kafka
configuration:
kafkaBootstrapServers:
- PLAINTEXT://my-kafka-host-0:9092
protocol:
value: PLAINTEXT
metricsPort:
value: 9585
metricsType:
value: JMX

kafka:
tags: []
name: kafka
configuration:
kafkaBootstrapServers:
- PLAINTEXT://my-kafka-host-0:9092
protocol:
value: PLAINTEXT
metricsPort:
value: 9585
metricsType:
value: JMX
metricsSsl:
value: true

kafka:
tags: []
name: kafka
configuration:
kafkaBootstrapServers:
- PLAINTEXT://my-kafka-host-0:9092
protocol:
value: PLAINTEXT
metricsPort:
value: 9581
metricsType:
value: JMX
metricsSsl:
value: false
metricsUsername:
value: user
metricsPassword:
value: pass

Such a configuration means that the Agent will try to connect using JMX with every pair of kafkaBootstrapServers.host:metricsPort, so following the example: my-kafka-host-0:9581.
For Jolokia the Agent supports two types of requests: GET (JOLOKIAG) and POST (JOLOKIAP).
The same port used for all brokers/workers/nodes. No SSL, no authentication.
kafka:
tags: []
name: kafka
configuration:
kafkaBootstrapServers:
- PLAINTEXT://my-kafka-host-0:9092
protocol:
value: PLAINTEXT
metricsPort:
value: 9585
# For GET method: JOLOKIAG
# For POST method: JOLOKIAP
metricsType:
value: JOLOKIAG
metricsSsl:
value: false
metricsHttpSuffix:
value: /jolokia/

Jolokia monitoring works on top of the HTTP protocol. To fetch metrics, the Agent has to perform either a GET or a POST request. The HTTP request timeout can be configured using the httpRequestTimeout property (a value in milliseconds). Its default value is 20 seconds.
httpRequestTimeout:
value: 30000

The default suffix for Jolokia endpoints is /jolokia/, so that is the value that should be provided. Sometimes the suffix can be different, so it can be customized using the metricsHttpSuffix field.
metricsHttpSuffix:
value: /custom/

Before enabling the collection of metrics in the Agent's provisioning configuration, make sure open monitoring with Prometheus is enabled on your MSK Provisioned cluster.
AWS has a predefined metrics configuration. The Agent hits the Prometheus endpoint using port 11001 for each broker. There is an option of customizing AWS metrics connection in Lenses by using metricsUsername, metricsPassword, metricsHttpTimeout, metricsHttpSuffix, metricsCustomUrlMappings, and metricsSsl properties. However, except for metricsHttpTimeout, the other settings will seldom be needed - AWS has its standard, which is unlikely to change. Customization can be achieved only by API or CLI.
kafka:
tags: [ "dev", "dev-2", "eu"]
configuration:
kafkaBootstrapServers:
value:
- SASL_SSL://my-broker-0:9098
- SASL_SSL://my-broker-1:9098
- SASL_SSL://my-broker-2:9098
protocol:
value: SASL_SSL
saslMechanism:
value: AWS_MSK_IAM
saslJaasConfig:
value: software.amazon.msk.auth.iam.IAMLoginModule required;
additionalProperties:
value:
sasl.client.callback.handler.class: "software.amazon.msk.auth.iam.IAMClientCallbackHandler"
metricsType:
value: AWS
metricsHttpTimeout: # optional, milliseconds
value: 20000

In some cases, the metricsHttpTimeout option may be required. Typically, this occurs when the OpenMetrics instance is undersized for the size of the MSK cluster, resulting in longer-than-usual metric retrieval times. Each Kafka partition adds a large number of metrics, so the OpenMetrics instance should ideally be sized to accommodate the number of partitions that the MSK will host.
Another common pitfall with MSK OpenMetrics is that there exists a global rate limit for each instance. If more than one service hits the OpenMetrics endpoint, the rate limit may be triggered, and the clients will receive an HTTP error code 429. To overcome this, you can set the lenses.interval.metrics.refresh.broker option in Lenses Agent. As an example, to make Lenses request metrics every minute, set the value to 60000 (milliseconds).
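For example, in lenses-agent.conf:

lenses.interval.metrics.refresh.broker=60000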
There is also a way to configure custom mapping for each broker (Kafka) / node (Schema Registry, Zookeeper) / worker (Kafka Connect).
Such a configuration means that the Agent will try to connect using JMX for:
my-kafka-host-0:9582 - because of metricsCustomUrlMappings
my-kafka-host-1:9581 - because of metricsPort and no entry in metricsCustomUrlMappings
kafka:
tags: ["optional-tag"]
name: kafka
configuration:
kafkaBootstrapServers:
- PLAINTEXT://my-kafka-host-0:9092
protocol:
value: PLAINTEXT
metricsPort:
value: 9581
metricsType:
value: JMX
metricsSsl:
value: false
metricsCustomUrlMappings:
value:
"my-kafka-host-0:9092": my-kafka-host-0:9582Kubernetes cluster and kubectl - you can use something like Minikube or Docker Desktop in Kubernetes mode if you'd like, but you will need to allocate at least 8 gigs of RAM and 6 CPUs
Helm.
Text editor.
Kafka cluster and a Postgres database (we provide setup instructions below if they're not already installed)
Kafka Connect and a schema registry (optional)
From a workstation with kubectl and Helm installed, add the Lenses Helm repository:
If you don't already have a Kafka cluster or Postgres installed you will need to add this repository as well:
Once you've added them, run the following command:
If you already have Postgres installed skip to the next section: Configuring Postgres
Create a namespace for Postgres
Create a PVC claim for Postgres
Save the above to a file called postgres-pvc.yaml and then run the following command:
Install Postgres using the Bitnami Helm chart.
Save the above text to a file called postgres-values.yaml. Then run the following command:
Verify that Postgres is up and running. It may take a minute or so to download and be fully ready.
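For example, using a standard kubectl check of the pods in the postgres-system namespace:
kubectl get pods -n postgres-system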
We need to create the databases in Postgres for Lenses to use.
Option 1: You will need to use a Postgres client to run the following commands.
Log in to your Postgres instance and run the following commands:
Option 2: Use a Kubernetes job to run the Postgres commands.
Lenses needs a database for LensesHQ and for Lenses Agent. This job will create one for each using the same Postgres instance.
Copy the above text to a file called lenses-db-init-job.yaml and then run the following command:
Wait a bit and then run
You should see
Now Postgres is set up and configured to work with Lenses.
If you already have a Kafka cluster installed skip to the Installing HQ section.
Create the kafka-cluster-values.yaml file for installation. We are using the "standard" storage class here. Depending on what K8s vendor you're using and where you are running it, your PVC setup will vary.
Create a namespace for Kafka
Install the Kafka cluster with the Bitnami Helm chart:
Give the Helm chart a few minutes to install then verify the installation:
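For example, list the pods in the kafka namespace:
kubectl get pods -n kafka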
Create lenses namespace
Install Lenses HQ with its Helm chart using the following lenseshq-values.yaml
Copy the above text to a file lenseshq-values.yaml and apply it with the following command:
You can verify that Lenses HQ is installed:
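For example, list the pods in the lenses namespace:
kubectl get pods -n lenses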
Accessing Lenses HQ:
In order to access Lenses HQ you will need to set up an ingress route using an ingress controller. The exact approach depends on how and where you are running Kubernetes.
We have provided here an example ingress configuration using Nginx:
Once you have successfully logged on to Lenses HQ you can start to set up your agent. See for login details.
Click on the Add New Environment button at the bottom of the main screen. Give your new environment a name (you can accept the other defaults for now) and click Create Environment.
Be sure to save your Agent Key from the screen that follows.
Now we can install the Lenses Agent using the agent_key. Here is the lenses-agent-values.yaml file:
Copy the above config to a file named lenses-agent-values.yaml.
NOTE: you must replace value: "agent_key_Insert_Your_Agent_Key_Here" with the actual Agent Key you saved in a previous step.
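As a sketch, assuming the key is exported in a hypothetical AGENT_KEY environment variable, a standard sed substitution can do the replacement in place:
sed -i "s|agent_key_Insert_Your_Agent_Key_Here|$AGENT_KEY|" lenses-agent-values.yaml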
Your lenses-agent-values.yaml should look like this:
Use the Lenses Agent Helm chart to install the Lenses Agent
Give Kubernetes time to install the Lenses Agent, then go back to the Lenses HQ UI and verify your Kafka cluster is connected. You can now use Lenses on your own cluster! Congrats!!
This page describes the install of Lenses HQ via an archive on Linux.
To install the HQ from the archive you must:
Extract the archive
Configure the HQ
Start the HQ
Installation link
Link to archives can be found here:
Extract the archive using the following command
Inside the extracted archive, you will find:
In order to properly configure HQ, one core component is necessary as a prerequisite:
Postgres database
To set up authentication, there are multiple methods available.
You can choose between:
password-based authentication, which requires users to provide a username and password;
and SAML/SSO (Single Sign-On) authentication, which allows users to authenticate through an external identity provider for a seamless and secure login experience.
Both password based and SAML / SSO authentication methods can be used alongside each other.
First to cover is the users property.
Users Property: The users property is defined as an array, where each entry includes a username and a password. The passwords are hashed using bcrypt for security purposes, ensuring that they are stored securely.
Second to cover is administrators. It defines the users which will have the highest level of permissions upon authentication to HQ.
Full auth configuration spec can be found .
Another part which has to be set in order to successfully run HQ is the http definition. As previously mentioned, this parameter defines everything around the HTTP endpoint of HQ itself and how users will interact with it.
Definition of HTTP object is as follows:
More about setting up TLS can be read . Full http configuration spec can be found .
Prerequisite:
Running Postgres instance;
Created database for HQ;
Username (and password) which has access to created database;
In order to successfully run HQ, storage within config.yaml has to be defined first.
Definition of storage object is as follows:
Full database configuration spec can be found .
If you have meticulously followed all the outlined steps, your config.yaml file should mirror the example provided below, fully configured and ready for deployment. This ensures your system is set up correctly with all necessary settings for authentication, database connection, and other configurations optimally defined.
Start Lenses by running:
or pass the location of the config file:
If you do not pass the location of the config file, the HQ will look for it inside the current (runtime) directory. If it does not exist, it will try its installation directory.
Once HQ starts, it will be listening on the address configured under the http definition.
To stop HQ, press CTRL+C.
If your server uses systemd as a service manager, you can use it to manage HQ (start upon system boot, stop, restart). Below is a simple unit file that starts HQ automatically on system boot.
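Assuming the unit file below is saved under an illustrative name such as /etc/systemd/system/lenses-hq.service, the standard systemd workflow would be:
sudo systemctl daemon-reload
sudo systemctl enable --now lenses-hq
sudo systemctl status lenses-hq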
After the successful configuration and installation of HQ, the next steps would be:
This page describes connecting the Lenses Agent to Apache Kafka.
A Kafka connection is required for the agent to start. You can connect to Kafka via:
Plaintext (no credentials, unencrypted)
SSL (no credentials, encrypted)
SASL Plaintext and SASL SSL
Only one Kafka connection is allowed.
The name must be kafka.
See for support.
Environment variables are supported; escape the dollar sign (for example, \${ENV_VAR_NAME}).
With PLAINTEXT, there's no encryption and no authentication when connecting to Kafka.
The only required fields are:
kafkaBootstrapServers - a list of bootstrap servers (brokers). It is recommended to add as many brokers (if available) as convenient to this list for fault tolerance.
protocol - depending on the protocol, other fields might be necessary (see examples for other protocols)
In the following example, JMX metrics for Kafka brokers are configured too, assuming that all brokers expose their JMX metrics using the same port (9581), without SSL and authentication.
With SSL, the connection to Kafka is encrypted. You can also use SSL and certificates to authenticate users against Kafka.
A truststore (with password) might need to be set explicitly if the global truststore of the Agent does not include the Certificate Authority (CA) of the brokers.
If TLS is used for authentication to the brokers in addition to encryption-in-transit, a key store (with passwords) is required.
There are two SASL-based protocols to access Kafka brokers: SASL_SSL and SASL_PLAINTEXT. Both require a SASL mechanism and JAAS configuration values. What differs is:
Whether the transport layer is encrypted (SSL);
The SASL mechanism for authentication (PLAIN, AWS_MSK_IAM, GSSAPI).
In addition to this, there might be a keytab file required, depending on the SASL mechanism (for example when using GSSAPI mechanism, most often used for Kerberos).
To use Kerberos authentication, a Kerberos Connection should be created beforehand.
When encryption-in-transit is used (with SASL_SSL), a trust store might need to be set explicitly if the global trust store of Lenses does not include the CA of the brokers.
Encrypted communication and basic username and password for authentication.
In order to use Kerberos authentication, a Kerberos Connection should be created beforehand.
No SSL encryption of communication; credentials are communicated to Kafka in clear text.
tar -xvf lenses-hq-linux-amd64-latest.tar.gz -C lenses-hq lenses-hq
├── lenses-hq
/api/v2/auth/saml/callback?client_name=SAML2Client
auth:
users:
- username: admin
password: $2a$10$F66cb6ZhnJjGCZuxlvKP1e84eytTpT1MDJcpBblHaZgsqp1/Aa0LG # bcrypt("correcthorsebatterystaple").
administrators:
- admin
- [email protected]
saml:
enabled: true
metadata: |-
<?xml version="1.0" encoding="UTF-8"?><md:EntityDescriptor>
...
...
</md:EntityDescriptor>
# Defines base URL of HQ for IdP redirects
baseURL: https://changeme.com # <--- Change this
# Defines globally unique identifier for the SAML entity
# — either the Service Provider (SP) or Identity Provider (IdP)
# It's often a URL, but it doesn't necessarily need to resolve to anything
entityID: https://example.com # <--- Change this
userCreationMode: sso
groupMembershipMode: sso
http:
address: :8080
accessControlAllowOrigin:
- https://example.com
accessControlAllowCredentials: false
secureSessionCookies: false
tls:
enabled: true
cert: |
-----BEGIN CERTIFICATE-----
MIIDXTCCAkWgAwIBAgIJALkNfT3d1N8tMA0GCSqGSIb3DQEBCwUAMEUxCzAJBgNV
BAYTAlVTMRYwFAYDVQQKEw1FeGFtcGxlIENlcnQwHhcNMjUwMzI2MDAwMDAwWhcN
MzUwMzIzMDAwMDAwWjBFMQswCQYDVQQGEwJVUzEWMBQGA1UEChMNZXhhbXBsZS5j
b20wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC5D3jXq5JnE9NnRJ8N
...
-----END CERTIFICATE-----
key: |
-----BEGIN PRIVATE KEY-----
MIIEvQIBADANBgkqhkiG9w0BAQEFAASC...
...
-----END PRIVATE KEY-----
http:
address: :8080
accessControlAllowOrigin:
- https://example.com
accessControlAllowCredentials: false
secureSessionCookies: false
tls:
enabled: false
agents:
address: :10000
tls:
enabled: true
cert: |
-----BEGIN CERTIFICATE-----
MIIDXTCCAkWgAwIBAgIJALkNfT3d1N8tMA0GCSqGSIb3DQEBCwUAMEUxCzAJBgNV
BAYTAlVTMRYwFAYDVQQKEw1FeGFtcGxlIENlcnQwHhcNMjUwMzI2MDAwMDAwWhcN
MzUwMzIzMDAwMDAwWjBFMQswCQYDVQQGEwJVUzEWMBQGA1UEChMNZXhhbXBsZS5j
b20wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC5D3jXq5JnE9NnRJ8N
...
-----END CERTIFICATE-----
key: |
-----BEGIN PRIVATE KEY-----
MIIEvQIBADANBgkqhkiG9w0BAQEFAASC...
...
-----END PRIVATE KEY-----
agents:
address: :10000
tls:
enabled: false
database:
host: postgres:5432
username: panoptes
password: password
database: panoptes
schema: insert-schema-here
# Params example - not required and it depends on your PG requirements
params:
sslmode: require
license_key_2SFZ0BesCNu6NFv0-EOSIvY22ChSzNWXa5nSds2l4z3y7aBgRPKCVnaeMlS57hHNVboR2kKaQ8Mtv1LFt0MPBBACGhDT5If8PmTraUM5xXLz4MYv
license:
key: license_key_*
acceptEULA: true
auth:
users:
- username: admin
password: $2a$10$F66cb6ZhnJjGCZuxlvKP1e84eytTpT1MDJcpBblHaZgsqp1/Aa0LG # bcrypt("correcthorsebatterystaple").
administrators:
- admin
- [email protected]
saml:
enabled: true
metadata: |-
<?xml version="1.0" encoding="UTF-8"?><md:EntityDescriptor>
...
...
</md:EntityDescriptor>
baseURL: https://example.com
entityID: https://example.com
userCreationMode: sso
groupMembershipMode: sso
http:
address: ":8080"
accessControlAllowOrigin:
- https://example.com
agents:
address: ":10000"
database:
host: postgres:5432
username: panoptes
password: password
database: panoptes
schema: insert-schema-here
params:
sslmode: require
license:
key: license_key_*
acceptEULA: true
logger:
mode: text
level: debug
./lenses-hq
./lenses-hq config.yaml
[Unit]
Description=Run HQ service
[Service]
Restart=always
User=[LENSES-USER]
Group=[LENSES-GROUP]
LimitNOFILE=4096
WorkingDirectory=/opt/lenses-hq
ExecStart=/opt/lenses-hq /etc/lenses-hq/config.yaml
[Install]
WantedBy=multi-user.target
sslKeystorePassword:
value: "\${ENV_VAR_NAME}"
kafka:
- name: kafka
version: 1
tags: [my-tag]
configuration:
kafkaBootstrapServers:
value:
- PLAINTEXT://your.kafka.broker.0:9092
- PLAINTEXT://your.kafka.broker.1:9092
protocol:
value: PLAINTEXT
# all metrics properties are optional
metricsPort:
value: 9581
metricsType:
value: JMX
metricsSsl:
value: false
kafka:
- name: kafka
version: 1
tags: [my-tag]
configuration:
kafkaBootstrapServers:
value:
- SSL://your.kafka.broker.0:9092
- SSL://your.kafka.broker.1:9092
protocol:
value: SSL
sslTruststore:
file: kafka-truststore.jks
sslTruststorePassword:
value: truststorePassword
sslKeystore:
file: kafka-keystore.jks
sslKeyPassword:
value: keyPassword
sslKeystorePassword:
value: keystorePassword
kafka:
- name: kafka
version: 1
tags: [my-tag]
configuration:
kafkaBootstrapServers:
value:
- SASL_SSL://your.kafka.broker.0:9092
- SASL_SSL://your.kafka.broker.1:9092
protocol:
value: SASL_SSL
sslTruststore:
file: kafka-truststore.jks
sslTruststorePassword:
value: truststorePassword
sslKeystore:
file: kafka-keystore.jks
sslKeyPassword:
value: keyPassword
sslKeystorePassword:
value: keystorePassword
saslMechanism:
value: PLAIN
saslJaasConfig:
value: |
org.apache.kafka.common.security.plain.PlainLoginModule required
username="your-username"
password="your-password"; kafka:
- name: kafka
version: 1
tags: [my-tag]
configuration:
kafkaBootstrapServers:
value:
- SASL_SSL://your.kafka.broker.0:9092
- SASL_SSL://your.kafka.broker.1:9092
protocol:
value: SASL_SSL
sslTruststore:
file: kafka-truststore.jks
sslTruststorePassword:
value: ${SSL_KEYSTORE_PASSWORD}
sslKeystore:
file: kafka-keystore.jks
sslKeyPassword:
value: ${SSL_KEYSTORE_PASSWORD}
sslKeystorePassword:
value: ${SSL_KEYSTORE_PASSWORD}
saslMechanism:
value: PLAIN
saslJaasConfig:
value: ${SASL_JAAS_CONFIG}
kafka:
- name: kafka
version: 1
tags: [my-tag]
configuration:
kafkaBootstrapServers:
value:
- SASL_SSL://your.kafka.broker.0:9092
- SASL_SSL://your.kafka.broker.1:9092
protocol:
value: SASL_SSL
sslTruststore:
file: kafka-truststore.jks
sslTruststorePassword:
value: truststorePassword
sslKeystore:
file: kafka-keystore.jks
sslKeyPassword:
value: keyPassword
sslKeystorePassword:
value: keystorePassword
saslMechanism:
value: GSSAPI
saslJaasConfig:
value: |
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
useTicketCache=false
serviceName=kafka
principal="[email protected]";
keytab:
file: /path/to/kafka-keytab.keytab
kafka:
- name: kafka
version: 1
tags: [my-tag]
configuration:
kafkaBootstrapServers:
value:
- SASL_PLAINTEXT://your.kafka.broker.0:9092
- SASL_PLAINTEXT://your.kafka.broker.1:9092
protocol:
value: SASL_PLAINTEXT
saslMechanism:
value: SCRAM-SHA-256
saslJaasConfig:
value: |
org.apache.kafka.common.security.scram.ScramLoginModule required
username="your-username"
password="your-password"; kafka:
- name: kafka
version: 1
tags: [my-tag]
configuration:
kafkaBootstrapServers:
value:
- SASL_PLAINTEXT://your.kafka.broker.0:9092
- SASL_PLAINTEXT://your.kafka.broker.1:9092
protocol:
value: SASL_PLAINTEXT
saslMechanism:
value: SCRAM-SHA-512
saslJaasConfig:
value: |
org.apache.kafka.common.security.scram.ScramLoginModule required
username="your-username"
password="your-password"; helm repo add lensesio https://helm.repo.lenses.io/helm repo add bitnami https://charts.bitnami.com/bitnamihelm repo updatekubectl create namespace postgres-system# postgres-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: postgres-data
namespace: postgres-system
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: standard
kubectl apply -f postgres-pvc.yaml
# postgres-values.yaml
global:
postgresql:
auth:
username: "admin"
password: "changeme"
postgresPassword: "changeme"
primary:
persistence:
existingClaim: "postgres-data"
auth:
database: postgres
username: admin
password: changeme
postgresPassword: changeme
enablePostgresUser: true
helm install postgres bitnami/postgresql \
--namespace postgres-system \
--values postgres-values.yaml
CREATE ROLE lenses_agent WITH LOGIN PASSWORD 'changeme';
CREATE DATABASE lenses_agent OWNER lenses_agent;
CREATE ROLE lenses_hq WITH LOGIN PASSWORD 'changeme';
CREATE DATABASE lenses_hq OWNER lenses_hq;
# lenses-db-init-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
name: lenses-db-init
namespace: postgres-system
spec:
template:
spec:
containers:
- name: db-init
image: postgres:14
command:
- /bin/bash
- -c
- |
echo "Waiting for PostgreSQL to be ready..."
until PGPASSWORD=changeme psql -h postgres-postgresql -U postgres -d postgres -c '\l' &> /dev/null; do
echo "PostgreSQL is unavailable - sleeping 2s"
sleep 2
done
echo "PostgreSQL is up - creating databases and roles"
PGPASSWORD=changeme psql -h postgres-postgresql -U postgres -d postgres <<EOF
CREATE ROLE lenses_agent WITH LOGIN PASSWORD 'changeme';
CREATE DATABASE lenses_agent OWNER lenses_agent;
CREATE ROLE lenses_hq WITH LOGIN PASSWORD 'changeme';
CREATE DATABASE lenses_hq OWNER lenses_hq;
EOF
echo "Database initialization completed!"
restartPolicy: OnFailure
backoffLimit: 5
kubectl apply -f lenses-db-init-job.yaml
kubectl get job -n postgres-system
# Kafka Bitnami Helm chart values for dev/testing with KRaft mode
## Global settings
global:
storageClass: "standard"
## Enable KRaft mode and disable Zookeeper
kraft:
enabled: true
controllerQuorumVoters: "0@kafka-controller-0.kafka-controller-headless.kafka.svc.cluster.local:9093"
# Disable Zookeeper since we're using KRaft
zookeeper:
enabled: false
## Controller configuration (for KRaft mode)
controller:
replicaCount: 1
persistence:
enabled: true
storageClass: "standard"
size: 2Gi
selector:
matchLabels:
app: kafka-controller
resources:
requests:
memory: "512Mi"
cpu: "250m"
limits:
memory: "1Gi"
cpu: "500m"
## Broker configuration
broker:
replicaCount: 1
persistence:
enabled: true
storageClass: "standard"
size: 2Gi
selector:
matchLabels:
app: kafka-broker
resources:
requests:
memory: "512Mi"
cpu: "250m"
limits:
memory: "1Gi"
cpu: "500m"
# Networking configuration for standalone K8s cluster
service:
type: ClusterIP
ports:
client: 9092
## External access configuration (if needed)
externalAccess:
enabled: false
service:
type: NodePort
nodePorts: [31090]
autoDiscovery:
enabled: false
# Listeners configuration for standalone cluster
listeners:
client:
name: PLAINTEXT
protocol: PLAINTEXT
containerPort: 9092
controller:
name: CONTROLLER
protocol: PLAINTEXT
containerPort: 9093
interbroker:
name: INTERNAL
protocol: PLAINTEXT
containerPort: 9094
# Disable authentication for simplicity in dev environment
auth:
clientProtocol: plaintext
interBrokerProtocol: plaintext
sasl:
enabled: false
jaas:
clientUsers: []
interBrokerUser: ""
tls:
enabled: false
zookeeper:
user: ""
password: ""
# Configuration suitable for development
configurationOverrides:
"offsets.topic.replication.factor": 1
"transaction.state.log.replication.factor": 1
"transaction.state.log.min.isr": 1
"log.retention.hours": 24
"num.partitions": 3
"security.inter.broker.protocol": PLAINTEXT
"sasl.enabled.mechanisms": ""
"sasl.mechanism.inter.broker.protocol": PLAINTEXT
"allow.everyone.if.no.acl.found": "true"
# Enable JMX metrics
metrics:
jmx:
enabled: true
containerPorts:
jmx: 5555
service:
ports:
jmx: 5555
kafka:
enabled: true
containerPorts:
metrics: 9308
service:
ports:
metrics: 9308
# Enable auto-creation of topics
allowAutoTopicCreation: true
kubectl create ns kafka
helm install my-kafka bitnami/kafka \
--namespace kafka \
--values kafka-cluster-values.yaml
kubectl create ns lenses
# lenseshq-values.yaml
resources:
requests:
cpu: 1
memory: 1Gi
limits:
cpu: 2
memory: 4Gi
image:
repository: lensesio/lenses-hq:6.0
pullPolicy: Always
rbacEnable: false
namespaceScope: true
# Lense HQ container port
restPort: 8080
# Lenses HQ service port, service targets restPort
servicePort: 80
servicePortName: lenses-hq
# serviceAccount is the Service account to be used by Lenses to deploy apps
serviceAccount:
create: false
name: default
# Lenses service
service:
enabled: true
type: ClusterIP
annotations: {}
lensesHq:
agents:
address: ":10000"
auth:
administrators:
- "admin"
users:
- username: admin
password: $2a$10$DPQYpxj4Y2iTWeuF1n.ItewXnbYXh5/E9lQwDJ/cI/.gBboW2Hodm # bcrypt("admin").
http:
address: ":8080"
accessControlAllowOrigin:
- "http://localhost:8080"
secureSessionCookies: false
# Storage property has to be properly filled with Postgres database information
storage:
postgres:
enabled: true
host: postgres-postgresql.postgres-system.svc.cluster.local
port: 5432
username: lenses_hq
database: lenses_hq
passwordSecret:
type: "createNew"
password: "changeme"
logger:
mode: "text"
level: "debug"
license:
referenceFromSecret: false
stringData: "license_key_2SFZ0BesCNu6NFv0-EOSIvY22ChSzNWXa5nSds2l4z3y7aBgRPKCVnaeMlS57hHNVboR2kKaQ8Mtv1LFt0MPBBACGhDT5If8PmTraUM5xXLz4MYv"
acceptEULA: true
helm install lenses-hq lensesio/lenses-hq \
--namespace lenses \
--values lenseshq-values.yaml
# lenses-hq-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: lenses-hq-ingress
namespace: lenses # Update this if LensesHQ is in a different namespace
annotations:
# For nginx ingress controller
nginx.ingress.kubernetes.io/rewrite-target: /
# If you need larger request bodies for API calls
nginx.ingress.kubernetes.io/proxy-body-size: "50m"
# Optional: enable CORS if needed
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/cors-allow-origin: "*"
spec:
ingressClassName: nginx
rules:
- host: lenses-hq.local # Change this to your desired hostname
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: lenses-hq
port:
number: 80
# Optional: expose the agents port if needed externally
- path: /agents
pathType: Prefix
backend:
service:
name: lenses-hq
port:
number: 10000
# lenses-agent-values.yaml
image:
repository: lensesio/lenses-agent
tag: 6.0.0
pullPolicy: IfNotPresent
lensesAgent:
# Postgres connection
storage:
postgres:
enabled: true
host: postgres-postgresql.postgres-system.svc.cluster.local
port: 5432
username: lenses_agent
password: changeme
database: lenses_agent
hq:
agentKey:
secret:
type: "createNew"
name: "agentKey"
value: "agent_key_Insert_Your_Agent_Key_Here"
sql:
processorImage: hub.docker.com/r/lensesioextra/sql-processor/
processorImageTag: latest
mode: KUBERNETES
heap: 1024M
minHeap: 128M
memLimit: 1152M
memRequest: 128M
livenessInitialDelay: 60 seconds
namespace: lenses
provision:
path: /mnt/provision-secrets
connections:
lensesHq:
- name: lenses-hq
version: 1
tags: ['hq']
configuration:
server:
value: lenses-hq.lenses.svc.cluster.local
port:
value: 10000
agentKey:
value: ${LENSESHQ_AGENT_KEY}
kafka:
# There can only be one Kafka cluster at a time
- name: kafka
version: 1
tags: ['staging', 'pseudo-data-only']
configuration:
kafkaBootstrapServers:
value:
- PLAINTEXT://my-kafka.kafka.svc.cluster.local:9092
protocol:
value: PLAINTEXT
helm install lenses-agent lensesio/lenses-agent \
--namespace lenses \
--values lenses-agent-values.yaml
--namespace lenses \
--values lenses-agent-values.yaml





This page describes the memory & cpu prerequisites for Lenses.
This documentation provides memory recommendations for Lenses.io, considering the number of Kafka topics, the number of schemas, and the complexity of these schemas (measured by the number of fields). Proper memory allocation ensures optimal performance and stability of Lenses.io in various environments.
Number of Topics: Kafka topics require memory for indexing, metadata, and state management.
Schemas and Their Complexity: The memory impact of schemas is influenced by both the number of schemas and the number of fields within each schema. Each schema field contributes to the creation of Lucene indexes, which affects memory usage.
For a basic setup with minimal topics and schemas:
Minimum Memory: 4 GB
Recommended Memory: 8 GB
This setup assumes:
Fewer than 100 topics
Fewer than 100 schemas
Small schemas with few fields (less than 10 fields per schema)
Memory requirements increase with the number of topics. Topics are used as the primary reference for memory scaling, with additional considerations for schemas.
| Number of Topics / Partitions | Recommended Memory |
| --- | --- |
| Up to 1,000 / 10,000 partitions | 12 GB |
| 1,001 to 10,000 / 100,000 partitions | 24 GB |
| 10,001 to 30,000 / 300,000 partitions | 64 GB |
Schemas have a significant impact on memory usage, particularly as the number of fields within each schema increases. The memory impact is determined by both the number of schemas and the complexity (number of fields) of these schemas.
| Schema Complexity | Number of Fields per Schema | Memory Addition |
| --- | --- | --- |
| Low to Moderate Complexity | Up to 50 fields | None |
| High Complexity | 51 - 100 fields | 1 GB for every 1,000 schemas |
| Very High Complexity | 100+ fields | 2 GB for every 1,000 schemas |
| Number of Topics | Number of Schemas | Number of Fields per Schema | Base Memory | Additional Memory | Total Recommended Memory |
| --- | --- | --- | --- | --- | --- |
| 1,000 | 1,000 | Up to 10 | 8 GB | None | 12 GB |
| 1,000 | 1,000 | 11 - 50 | 8 GB | None | 12 GB |
| 5,000 | 5,000 | Up to 10 | 12 GB | None | 16 GB |
| 5,000 | 5,000 | 11 - 50 | 12 GB | None | 16 GB |
| 10,000 | 10,000 | Up to 10 | 16 GB | None | 24 GB |
| 10,000 | 10,000 | 51 - 100 | 24 GB | 10 GB | 34 GB |
| 30,000 | 30,000 | Up to 10 | 64 GB | None | 64 GB |
| 30,000 | 30,000 | 51 - 100 | 64 GB | 30 GB | 94 GB |
To help illustrate how to apply these recommendations, here are some example configurations considering both topics and schema complexity:
Topics: 500
Schemas: 100 (average size 50 KB, 8 fields per schema)
Recommended Memory: 8 GB
Schema Complexity: Low → No additional memory needed.
Total Recommended Memory: 8 GB
Topics: 5,000
Schemas: 1,000 (average size 200 KB, 25 fields per schema)
Base Memory: 12 GB
Schema Complexity: Moderate → No additional memory needed.
Total Recommended Memory: 16 GB
Topics: 15,000
Schemas: 3,000 (average size 500 KB, 70 fields per schema)
Base Memory: 32 GB
Schema Complexity: High → Add 3 GB for schema complexity.
Total Recommended Memory: 35 GB
Topics: 30,000
Schemas: 5,000 (average size 300 KB, 30 fields per schema)
Base Memory: 64 GB
Schema Complexity: Moderate → Add 5 GB for schema complexity.
Total Recommended Memory: 69 GB
High Throughput: If your Kafka cluster is expected to handle high throughput, consider adding 20-30% more memory than the recommendations.
Complex Queries and Joins: If using Lenses.io for complex data queries and joins, consider increasing the memory allocation by 10-15% to accommodate the additional processing.
Monitoring and Adjustment: Regularly monitor memory usage and adjust based on actual load and performance.
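As a quick worked example of these adjustments: a deployment sized at 24 GB by the tables above that is also expected to handle high throughput would be provisioned with roughly 24 GB x 1.25 = 30 GB.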
Proper memory allocation is crucial for the performance and reliability of Lenses.io, especially in environments with a large number of topics and complex schemas. While topics provide a solid baseline for memory recommendations, the complexity of schemas—particularly the number of fields—can also significantly impact memory usage. Regular monitoring and adjustments are recommended to ensure that your Lenses.io setup remains performant as your Kafka environment scales.
This section provides example IAM policies for Lenses.
These are only some sample policies to help you build your own.
Full admin across all resources.
Allow full access for all services and resources beginning with blue.
Allow read only access for topics and schemas beginning with la.
Allow operators to restart connectors and list & get IAM resource only.
Explicitly deny access to environments with names starting with prod-.
Allow developers access to topics, schemas, sql processors, consumer groups, acls, quotas, connectors for us-dev.
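The policy bodies themselves are not reproduced here. As an illustrative sketch only, following the action/resource/effect specification described in the IAM pages (the environment resource-type in the second policy is assumed for illustration), the full-admin and prod- deny samples could look like:
name: admin-all
policy:
- action:
  - "*"
  resource:
  - "*"
  effect: allow
name: deny-prod-environments
policy:
- action:
  - "*"
  resource:
  - environments:environment:prod-*
  effect: deny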
Before upgrading, make a copy of all configuration files.
In case of Helm deployment that would be:
values.yaml
In case of Archive or Docker deployment that would be:
lenses.conf
It is strongly suggested to rename this file to lenses-agent.conf for the upgrade.
provisioning.yaml
There were multiple ways in which Lenses resources could be managed in the past:
Wizard
Provisioning v1
Provisioning v2
For Lenses (6) Agent, it is recommended that all connections (kafka, schema-registry, connect, ...) are kept inside the provisioning.yaml file in version 2. See Provisioning.
Differences of provisioning between version 1 and 2 can be seen below:
license:
fileRef:
inline: '{"source":"Landoop LTD","clientId":"Lenses Dev","details":"kafka-lenses","key":"eyJhbGciOiJBMTI4S1ciLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0.AqO6Ax-o-4T0WKFX7eCGFRu329wxplkZuWrGdhyncrhBfh9higjsZA.-uGCXWjULTzb7-3ROfsRhw.olmx6FR7FH7c2adHol0ipokHF6jOo6LTDtoFOSPWfqKxbA3yI-CUqlyo_-Obin7MSA4KqXBLpXOvP72EJhIYuyqkxUVGRoXHF0Oj2V6kzdDcmjJHbMB4VTxdE8YBAYbPXzEXdhq7lZy4fxHHhYxAsRATCtqf7t7TQCE0TWOiSHvLwyD7xMK2X47KiKbnNlNvqeVnnjLUMMd7vzA5dTft48wJm2D5HJNZ0mS32gTaiiExT5nqolToL0KYIOpRiT00MTQkGlBdagVigc-DZBPM0ZTP5wuLkwdk4XbfoQKaWC4qaYA6VpGgQg03Mo1W4ljlqRy0N4cPQ-l4Mi1XV9VK-825-zhyxzPrxef5Zct2nzVEJ9MbWy0-xuf6THX4q2X8zmz_KiHoA-hBWjebv_2R9479ldGj0h-vm9htVD59_6RBOGb0rT4XSS-4_CGYBZzv5PIPpLdnVbkr_qjsxCI0BO7tPKoyxXg2qh4YQbn3wn5MqsE9yR2BbRaso9MSPFlF8PxqR7A4qrKJjn_mPlcrR-XGf0ua2XfWCVe4ngcWpzssYHcJJD80APyZgzneIw2dSaO0enfFYUq6avqGSeoG7VC9zYACfUdofdlULH2azmptJ2Jzw3ggpLR7ZzZ9QrySXTUB2jkzrqiHyM9fqIXUVwAkAJMcBuwF5zY5B_ChA69Uj_-s-S1RITBbg5wtB3LuHyJGtTo4fuYY75F_OL9Cwp7gcpa5u0M_wWZlx70j_6jCb-ogvghALbHY8OPeWz_1-3bvJM9T_jmjKy0FDt6x8FJV1lgMMR0j1RiUeauUMsnd4TNUYAH50mFwtK5PU-Iq.U4LwNfOL2JB4vzBMvo3Hig"}'
connections:
kafka:
tags: []
templateName: Kafka
configurationObject:
kafkaBootstrapServers:
- PLAINTEXT:///localhost:9092
protocol: PLAINTEXT
# metrics
## JMX
metricsPort: 9581
metricsType: JMX
metricsSsl: false
## JOLOKIA
# metricsPort: 19581
# metricsType: JOLOKIAG # or JOLOKIAP
# metricsSsl: false
# metricsHttpSuffix: "/jolokia/"
zookeeper:
tags: []
templateName: Zookeeper
configurationObject:
zookeeperUrls:
- localhost:2181
zookeeperSessionTimeout: 10000
zookeeperConnectionTimeout: 10000
# metrics
## JMX
metricsPort: 9585
metricsType: JMX
metricsSsl: false
## JOLOKIA
# metricsPort: 19585
# metricsType: JOLOKIAG # or JOLOKIAP
# metricsSsl: false
# metricsHttpSuffix: "/jolokia/"
schema-registry:
templateName: SchemaRegistry
tags: [ ]
configurationObject:
schemaRegistryUrls:
- http://localhost:8081
additionalProperties: { }
# metrics
## JMX
metricsPort: 9582
metricsType: JMX
metricsSsl: false
## JOLOKIA
# metricsPort: 19582
# metricsType: JOLOKIAG # or JOLOKIAP
# metricsSsl: false
# metricsHttpSuffix: "/jolokia/"
connect-cluster-dev-1:
templateName: KafkaConnect
tags: []
configurationObject:
workers:
- http://localhost:8083
aes256Key: PasswordPasswordPasswordPassword
# metrics
## JMX
metricsPort: 9584
metricsType: JMX
metricsSsl: false
## JOLOKIA
# metricsPort: 19584
# metricsType: JOLOKIAG # or JOLOKIAP
# metricsSsl: false
# metricsHttpSuffix: "/jolokia/"
my-prometheus: {"configuration":[{"key":"endpoints","value":["https://am.acme.com"]}],"tags":["prometheus","monitoring","metrics"],"templateName":"PrometheusAlertmanager"}
lensesHq:
- configuration:
agentKey:
value: ${LENSESHQ_AGENT_KEY}
port:
value: 10000
server:
value: lenses-hq
name: lenses-hq
tags: ['hq']
version: 1
kafka:
- name: kafka
version: 1
tags: [ 'kafka', 'dev' ]
configuration:
metricsType:
value: JMX
metricsPort:
value: 9581
kafkaBootstrapServers:
value: [PLAINTEXT://demo-kafka:19092]
protocol:
value: PLAINTEXT
confluentSchemaRegistry:
- name: schema-registry
version: 1
tags: [ 'dev' ]
configuration:
schemaRegistryUrls:
value: [http://demo-kafka:8081]
metricsType:
value: JMX
metricsPort:
value: 9582
connect:
- name: dev
version: 1
tags: [ 'dev' ]
configuration:
workers:
value: [http://demo-kafka:8083]
aes256Key:
value: 0123456789abcdef0123456789abcdef
metricsType:
value: JMX
metricsPort:
value: 9584
More about other configuration options in provisioning.yaml -> Provisioning.
If you are curious about how to properly create the provisioning.yaml file, you can read more in How to convert Wizard Mode to Provisioning Mode.
In this step you'll ensure that Groups (with permissions) that exist in Lenses 5 will still have the same amount of permissions in Lenses 6 for newly created Environment (Agent).
Migration of:
data policies
alerts
sql processors
is not necessary if the Agent re-uses the same database as Lenses 5.
To execute this step, we have tooling that can help you: the Lenses Migration Tool.
Be aware that cloning of alerts is not yet available via the script above.
Once the script has run, you should be able to see new:
Groups and
Roles with their permissions inside the HQ screen.
These match the ones you have in your Lenses 5 instance and will enable users to see the new Environment once it is connected 👇
There are multiple deployment methods for the Agent; please choose one from Installation.
Two Lenses instances shouldn't be connecting to the same database, therefore the old Lenses 5 instance should be stopped.
Note that there is no rollback mechanism once the upgrade is initiated over the same database.
This type of upgrade is only possible when Postgres is used as datastore.
This page describes Roles in Lenses.
Lenses IAM is built around Roles. Roles contain policies, and each policy defines a set of actions a user is allowed to take.
Roles are then assigned to groups.
The Lenses policies are resource based. They are YAML-based documents attached to a resource.
Each policy has:
Action
Resource
Effect
The resource is the name of the resource. This is defined by the creator of the resource.
The action describes the action or verb that a user can perform. The format of the action is
[entity type]:action
For example, to list topics in Kafka:
policy:
- action:
- kafka:ListTopics
To restrict access to resources, for example to only list topics beginning with red, we can use the resource field.
Effect either allows or denies the action on the resource. If allow is not set, the action will be denied, and if any policy for a resource has a deny effect, the deny takes precedence.
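Putting action, resource, and effect together, a sketch of such a policy might look like this (the environment, cluster, and topic names are illustrative):
name: red-topics-list
policy:
- action:
  - kafka:ListTopics
  resource:
  - kafka:topic:my-env/my-cluster/red*
  effect: allow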
To create a Role, go to IAM->Roles->New Role.
You can also manage Roles via the CLI and YAML, for integration in your CI/CD pipelines.
➜ lenses roles
Manage Roles.
Usage:
lenses roles [command]
Available Commands:
create Creates a new role.
delete Deletes a role.
get Returns a specific role.
list Returns all roles.
metadata Manages role metadata.
update Updates a role.
This page describes how to configure the agent to deploy and manage SQL Processors for stream processing.
Lenses can be used to define & deploy stream processing applications that read from Kafka and write back to Kafka with SQL. They are based on the Kafka Streams framework and are known as SQL Processors.
SQL processing of real-time data can run in 2 modes:
SQL In-Process - the workload runs inside of the Lenses Agent.
SQL in Kubernetes - the workload runs & scales on your Kubernetes cluster.
Which mode the SQL Processors will run as should be defined within the lenses.conf before Lenses is started.
In this mode, SQL processors run as part of the Agent process, sharing resources, memory, and CPU time with the rest of the platform.
This mode of operation is meant to be used for development only.
As such, the agent will not allow the creation of more than 50 SQL Processors in In-Process mode, as this could negatively impact the platform's stability and performance.
For production, use the KUBERNETES mode for maximum flexibility and scalability.
Set the execution configuration to IN_PROC
Set the directory to store the internal state of the SQL Processors:
SQL processors use the same connection details that Agent uses to speak to Kafka and Schema Registry. The following properties are mounted, if present, on the file system for each processor:
Kafka
SSLTruststore
SSLKeystore
Schema Registry
SSL Keystore
SSL Truststore
The file structure created by applications is the following: /run/[lenses_installation_id]/applications/
Keep in mind Lenses requires an installation folder with write permissions. The following are tried:
/run
/tmp
Kubernetes can be used to deploy SQL Processors. To configure Kubernetes, set the mode to KUBERNETES and configure the location of the kubeconfig file.
When the Agent is deployed inside Kubernetes, the lenses.kubernetes.config.file configuration entry should be set to an empty string. The Kubernetes client will auto-configure from the pod it is deployed in.
The SQL Processor docker image is live in Dockerhub.
Custom serdes should be embedded in a new Lenses SQL processor Docker image.
To build a custom Docker image, create the following directory structure:
Copy your serde jar files under processor-docker/serde.
Create Dockerfile containing:
Build the Docker.
Once the image is deployed in your registry, please set Lenses to use it (lenses.conf):
Don't use the LPFP_ prefix.
Internally, Lenses prefixes all its properties with LPFP_.
Avoid passing custom environment variables starting with LPFP_ as it may cause the processors to fail.
To deploy Lenses Processors in Kubernetes the suggested way is to activate RBAC in Cluster level through Helm values.yaml:
If you want to limit the permissions Lenses has against your Kubernetes cluster, you can use Role/RoleBinding resources instead.
To achieve this you need to create a Role and a RoleBinding resource in the namespace you want the processors deployed to:
example for:
Lenses namespace = lenses-ns
Processor namespace = lenses-proc-ns
Finally you need to define in Lenses configuration which namespaces can Lenses access. To achieve this amend values.yaml to contain the following:
example:
# Set up Lenses SQL processing engine
lenses.sql.execution.mode = "IN_PROC"lenses.sql.state.dir = "/tmp/sql-kstream-state"lenses.sql.execution.mode = KUBERNETES
# kubernetes configuration
lenses.kubernetes.config.file = "/home/lenses/.kube/config"
lenses.kubernetes.service.account = "default"
#lenses.kubernetes.processor.image.name = "" # Only needed if you use a custom image
#lenses.kubernetes.processor.image.tag = "" # Only needed if you use a custom image
# Only needed if you want to tune the buffer size for incoming events from Kubernetes
#lenses.deployments.errors.buffer.size = 1000
# Only needed if you want to tune the buffer size for incoming errors from Kubernetes WS communication
#lenses.deployments.events.buffer.size = 10000
mkdir -p processor-docker/serde
FROM lensesioextra/sql-processor:4.2
ADD serde /opt/serde
ENV LENSES_SQL_RUNNERS_SERDE_CLASSPATH_OPTS=/opt/serde
cd processor-docker
docker build -t example/lsql-processor .
lenses.kubernetes.processor.image.name = "your/image-name"
lenses.kubernetes.processor.image.tag = "your-tag"
rbacEnable: true
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: [ROLE_NAME]
namespace: [PROCESSORS_NAMESPACE]
rules:
- apiGroups: [""]
resources:
- namespaces
- persistentvolumes
- persistentvolumeclaims
- pods/log
verbs:
- list
- watch
- get
- create
- apiGroups: ["", "extensions", "apps"]
resources:
- pods
- replicasets
- deployments
- ingresses
- secrets
- statefulsets
- services
verbs:
- list
- watch
- get
- update
- create
- delete
- patch
- apiGroups: [""]
resources:
- events
verbs:
- list
- watch
- get
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: [ROLE_BINDING_NAME]
namespace: [PROCESSOR_NAMESPACE]
subjects:
- kind: ServiceAccount
namespace: [LENSES_NAMESPACE]
name: [SERVICE_ACCOUNT_NAME]
roleRef:
kind: Role
name: [ROLE_NAME]
apiGroup: rbac.authorization.k8s.io
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: processor-role
namespace: lenses-proc-ns
rules:
- apiGroups: [""]
resources:
- namespaces
- persistentvolumes
- persistentvolumeclaims
- pods/log
verbs:
- list
- watch
- get
- create
- apiGroups: ["", "extensions", "apps"]
resources:
- pods
- replicasets
- deployments
- ingresses
- secrets
- statefulsets
- services
verbs:
- list
- watch
- get
- update
- create
- delete
- patch
- apiGroups: [""]
resources:
- events
verbs:
- list
- watch
- get
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: processor-role-binding
namespace: lenses-proc-ns
subjects:
- kind: ServiceAccount
namespace: lenses-ns
name: default
roleRef:
kind: Role
name: processor-role
apiGroup: rbac.authorization.k8s.io
lenses:
append:
conf: |
lenses.kubernetes.namespaces = {
incluster = [
"[PROCESSORS NAMESPACE]"
]
}
lenses:
append:
conf: |
lenses.kubernetes.namespaces = {
incluster = [
"lenses-processors"
]
} 
This page describes an overview of Lenses IAM (Identity & Access Management).
Principals (Users & Service accounts) receive their permissions based on their group membership.
Roles hold a set of policies, defining the permissions. Roles are assigned to groups.
Roles provide flexibility in how you want to provide access. You can create a very open policy or a very granular policy, for example, allowing operators and support engineers certain permissions to restart Connectors but denying actions that would allow them to view data or configuration options.
Roles are defined at the HQ level. This allows you to control access to actions at HQ and lower environment levels and to assign the same set of permissions across your whole Kafka landscape in a central place.
A policy has:
One or more actions;
One or more resource patterns that the actions apply to;
An effect: allow or deny.
name: [policy_name]
policy: #list of actions/resources/effect
- action:
resource:
effect: [allow|deny]
A policy is defined by a YAML specification.
Actions describe a set of actions. Concrete actions can match an Action Pattern. In this text, action and action patterns are used interchangeably.
Services describe the system entity that an action applies to. Services are:
environments
kafka
registry
schemas
kafka-connect
sql-streaming
kubernetes
applications
alerts
data-policies
governance
audit
iam
administration
Operation can contain a wildcard. If so, only at the end. See IAM Reference for the available operations per service.
Resources identify which resource, in a service, that the principal is allowed or denied, to perform the operation on.
If the service is provided, resource-type can be a wildcard.
The resource ID identifies the resource within the context of a service and a resource type.
A resource-id consists of one or more segments separated by a slash /. A segment can be a wildcard, or contain a wildcard as a suffix of a string. If a segment is a wildcard, then remaining segments do not need to be provided, and will be assumed to be wildcards as well.
Where LRN is the Lenses Resource Name
kafka:topic:my-env/* will be expanded to kafka:topic:my-env/*/*;
kafka:topic:my-env/my-cluster* is invalid because the Topic segment is missing, kafka:topic:my-env/my-cluster*/topic would be valid though;
*:topic:* is invalid, the service is not provided;
kaf*:* and kafka:top* are invalid, service and resource-type cannot contain wildcards;
kafka:*:foo is invalid, if the resource-type is a wildcard then resource-id cannot be set.
A principal (user or service account) can perform an action on a resource if:
In any of the roles it receives via group membership:
There is any matching Permission Statement that has an effect of allow;
And there is not any matching Permission Statement that has an effect of deny.
A Permission Statement matches an action plus resource, if:
The action matches any of the Permission Statement's Action Patterns, AND:
The resource matches any of the Permission Statement's Resource Patterns.
An Action matches an Action Pattern (AP) if:
The AP is a wildcard, OR:
The Action's service equals the AP's and the AP's operation string-matches the Action's operation.
A Resource matches a Resource Pattern (RP) if:
The RP is a wildcard, OR:
The Resource's service equals the RP's and the RP's resource-type is a wildcard, OR:
The Resource's service and type equal those of the RP and the resource-ids match. Resource-ids are matched by string-matching each individual segment. If the RP has a trailing wildcard segment, the remaining segments are ignored.
A string s matches p if:
They equal character by character.
If s or p has more non-wildcard characters than the other they don't match;
If p contains a * suffix, any remaining characters in s are ignored.
"lit"
"lit"
true
"lit"
"li"
false
"lit"
"litt"
false
"lit"
"oth"
false
"*"
"some"
true
"foo*"
"foo"
true
"foo*"
"foo-bar"
true
""
""
true
"x"
""
false
""
"x"
false
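To make these rules concrete, here is a small Python sketch of the string-matching function described above (an illustration of the rules, not Lenses code):
def matches(pattern: str, s: str) -> bool:
    # A trailing '*' in the pattern ignores any remaining characters of s
    if pattern.endswith("*"):
        return s.startswith(pattern[:-1])
    # Otherwise the strings must be equal character by character
    return s == pattern

# A few cases mirroring the table above:
assert matches("lit", "lit")
assert not matches("lit", "litt")
assert matches("*", "some")
assert matches("foo*", "foo-bar")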
Order of items in any collection is irrelevant during evaluation. Collections are considered sets rather than ordered lists. The following are equivalent:
Order of Resource Patterns does not matter
Order of Permission Statements does not matter
Order of Roles does not matter
Order of Groups does not matter
In the examples we're not too religious about strict JSON formatting.
Broad Allow + Specific Deny
Given:
policy:
- effect: allow
resource: kafka:topic:my-env/*/*
action: ReadKafkaData
- effect: deny
resource: kafka:topic:*/*/forbidden-topic
action: ReadKafkaData
A principal:
Can ReadKafkaData on kafka:topic:my-env/the-cluster/some-topic because it is allowed and not denied;
Cannot DeleteKafkaTopic on kafka:topic:my-env/the-cluster/some-topic because there is no allow;
Cannot ReadKafkaData on kafka:topic:my-env/the-cluster/forbidden-topic because while it is allowed the deny kicks in.
Given:
policy:
- effect: allow
resource: [*, kafka:topic:my-cluster/*]
action: ReadKafkaData
A principal:
Can ReadKafkaData on kafka:topic:someone-else-cluster/their-topic because the resource matches *.
Note that here the matching can be considered "most permissive".
Given:
policy:
- effect: allow
resource: [kafka:topic:my-cluster/my-topic-1, kafka:topic:my-cluster/my-topic-2]
action: ReadKafkaData
A principal:
Can ReadKafkaData on kafka:topic:my-cluster/my-topic-1 and kafka:topic:my-cluster/my-topic-2 because the resources match, but cannot ReadKafkaData on kafka:topic:my-cluster/my-topic-3.
Guide on how to migrate your Lenses 5 instance to Lenses 6 Agent
H2 is not recommended for production environments.
For any other purposes, it is highly recommended to use a PostgreSQL (preferred) or Microsoft SQL Server database. Multiple agents can use the same Postgres database, but in that case, you must ensure that each Agent uses a different schema.
Therefore, in preparation, you must ensure:
Postgres instance
Postgres schema per Agent
Postgres user and password per Agent
For non-production environments, you can rely on the embedded H2 database.
There were multiple ways in which Lenses resources could be managed in the past:
Wizard
Provisioning v1
Provisioning v2
For Lenses (6) Agent, it is recommended that all connections (kafka, schema-registry, connect,...) are kept inside of provisioning.yaml file in version 2.
Differences in provisioning between version 1 and 2 can be seen below:
More about other configuration options in provisioning.yaml ->
If you are curious about how to properly create the provisioning.yaml file, you can read more on
In case you have to migrate SQL Processors and Postgres is not set as a data store, please read first.
There are multiple deployment methods for the Agent. Please choose one from
It is perfectly safe to run your older installation of Lenses 4.x/5.x alongside an Agent that is connecting to the existing Kafka cluster.
Lenses 4/5 and the Agent behave as just another KafkaAdminClient connecting to your Kafka cluster, and therefore they can live next to each other.
Two Lenses instances shouldn't be connecting to the same database.
Migration of:
data policies
alerts
sql processors
is not necessary if the Agent re-uses the same database as Lenses 5.
Once you confirm that:
HQ works and users can log in via SAML/SSO
Agent is connected to HQ and you (admin) can inspect it
It is time to move towards migrating Lenses 4/5 groups to:
HQ Groups / Roles / Permissions;
Data policies
In order to do so, we have tooling that can help you .
Be aware, cloning of alerts is not available yet via script above.
In case a new database will be used for the Agent upgrade path, it is highly recommended to pick .
SQL Processors are stored within the Lenses 4/5 database. They will continue to work even if the Lenses instance is no longer running, but in order to preserve their configuration and allow the Agent to be aware of their whereabouts, the following step has to be done.
Old Postgres to new Postgres database
This requires multiple steps:
Stop Lenses 5 instance;
Backup Postgres database Lenses 5 was using;
Load the same database to new Postgres database;
Start Lenses Agent (v6).
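A sketch of the backup and load steps using standard PostgreSQL tooling (the host, database, and user names are illustrative and must match your setup):
# Back up the database Lenses 5 was using
pg_dump -h old-postgres -U lenses -d lenses -Fc -f lenses5.dump
# Load it into the new Postgres instance
pg_restore -h new-postgres -U lenses -d lenses --no-owner lenses5.dump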
Old H2 database to new H2 database
Old H2 database to new Postgres database
Only available if the upgrade to Lenses Agent v6 is being done from Lenses 5.5 onwards.
The process of migration looks as follows:
Create a database and user in PostgreSQL for Lenses 4/5 to use.
Make a backup
Take a backup of your current embedded database (its path is controlled via the setting lenses.storage.directory). Just copy the directory.
Prepare Lenses conf
Edit the Lenses configuration file (lenses.conf), adding the PostgreSQL settings, e.g.:
Restart Lenses. It will perform the migration automatically.
Once everything looks good, delete the directory containing the embedded database, and remove the key lenses.storage.directory from your lenses-agent.conf.
Perform Lenses Agent upgrade
lensesHq:
- configuration:
agentKey:
value: ${LENSESHQ_AGENT_KEY}
port:
value: 10000
server:
value: lenses-hq
name: lenses-hq
tags: ['hq']
version: 1
kafka:
- name: kafka
version: 1
tags: [ 'kafka', 'dev' ]
configuration:
metricsType:
value: JMX
metricsPort:
value: 9581
kafkaBootstrapServers:
value: [PLAINTEXT://demo-kafka:19092]
protocol:
value: PLAINTEXT
confluentSchemaRegistry:
- name: schema-registry
version: 1
tags: [ 'dev' ]
configuration:
schemaRegistryUrls:
value: [http://demo-kafka:8081]
metricsType:
value: JMX
metricsPort:
value: 9582
connect:
- name: dev
version: 1
tags: [ 'dev' ]
configuration:
workers:
value: [http://demo-kafka:8083]
aes256Key:
value: 0123456789abcdef0123456789abcdef
metricsType:
value: JMX
metricsPort:
value: 9584
# login as superuser and add Lenses role and database
psql -U postgres -d postgres <<EOF
CREATE ROLE lenses WITH LOGIN PASSWORD 'changeme';
CREATE DATABASE lenses OWNER lenses;
EOF
lenses.storage.postgres.password="changeme"
lenses.storage.postgres.host="my-postgres-server"
lenses.storage.postgres.port=5431 # optional, defaults to 5432
lenses.storage.postgres.username="lenses"
lenses.storage.postgres.database="lenses"
license:
fileRef:
inline: '{"source":"Landoop LTD","clientId":"Lenses Dev","details":"kafka-lenses","key":"eyJhbGciOiJBMTI4S1ciLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0.AqO6Ax-o-4T0WKFX7eCGFRu329wxplkZuWrGdhyncrhBfh9higjsZA.-uGCXWjULTzb7-3ROfsRhw.olmx6FR7FH7c2adHol0ipokHF6jOo6LTDtoFOSPWfqKxbA3yI-CUqlyo_-Obin7MSA4KqXBLpXOvP72EJhIYuyqkxUVGRoXHF0Oj2V6kzdDcmjJHbMB4VTxdE8YBAYbPXzEXdhq7lZy4fxHHhYxAsRATCtqf7t7TQCE0TWOiSHvLwyD7xMK2X47KiKbnNlNvqeVnnjLUMMd7vzA5dTft48wJm2D5HJNZ0mS32gTaiiExT5nqolToL0KYIOpRiT00MTQkGlBdagVigc-DZBPM0ZTP5wuLkwdk4XbfoQKaWC4qaYA6VpGgQg03Mo1W4ljlqRy0N4cPQ-l4Mi1XV9VK-825-zhyxzPrxef5Zct2nzVEJ9MbWy0-xuf6THX4q2X8zmz_KiHoA-hBWjebv_2R9479ldGj0h-vm9htVD59_6RBOGb0rT4XSS-4_CGYBZzv5PIPpLdnVbkr_qjsxCI0BO7tPKoyxXg2qh4YQbn3wn5MqsE9yR2BbRaso9MSPFlF8PxqR7A4qrKJjn_mPlcrR-XGf0ua2XfWCVe4ngcWpzssYHcJJD80APyZgzneIw2dSaO0enfFYUq6avqGSeoG7VC9zYACfUdofdlULH2azmptJ2Jzw3ggpLR7ZzZ9QrySXTUB2jkzrqiHyM9fqIXUVwAkAJMcBuwF5zY5B_ChA69Uj_-s-S1RITBbg5wtB3LuHyJGtTo4fuYY75F_OL9Cwp7gcpa5u0M_wWZlx70j_6jCb-ogvghALbHY8OPeWz_1-3bvJM9T_jmjKy0FDt6x8FJV1lgMMR0j1RiUeauUMsnd4TNUYAH50mFwtK5PU-Iq.U4LwNfOL2JB4vzBMvo3Hig"}'
connections:
kafka:
tags: []
templateName: Kafka
configurationObject:
kafkaBootstrapServers:
- PLAINTEXT:///localhost:9092
protocol: PLAINTEXT
# metrics
## JMX
metricsPort: 9581
metricsType: JMX
metricsSsl: false
## JOLOKIA
# metricsPort: 19581
# metricsType: JOLOKIAG # or JOLOKIAP
# metricsSsl: false
# metricsHttpSuffix: "/jolokia/"
zookeeper:
tags: []
templateName: Zookeeper
configurationObject:
zookeeperUrls:
- localhost:2181
zookeeperSessionTimeout: 10000
zookeeperConnectionTimeout: 10000
# metrics
## JMX
metricsPort: 9585
metricsType: JMX
metricsSsl: false
## JOLOKIA
# metricsPort: 19585
# metricsType: JOLOKIAG # or JOLOKIAP
# metricsSsl: false
# metricsHttpSuffix: "/jolokia/"
schema-registry:
templateName: SchemaRegistry
tags: [ ]
configurationObject:
schemaRegistryUrls:
- http://localhost:8081
additionalProperties: { }
# metrics
## JMX
metricsPort: 9582
metricsType: JMX
metricsSsl: false
## JOLOKIA
# metricsPort: 19582
# metricsType: JOLOKIAG # or JOLOKIAP
# metricsSsl: false
# metricsHttpSuffix: "/jolokia/"
connect-cluster-dev-1:
templateName: KafkaConnect
tags: []
configurationObject:
workers:
- http://localhost:8083
aes256Key: PasswordPasswordPasswordPassword
# metrics
## JMX
metricsPort: 9584
metricsType: JMX
metricsSsl: false
## JOLOKIA
# metricsPort: 19584
# metricsType: JOLOKIAG # or JOLOKIAP
# metricsSsl: false
# metricsHttpSuffix: "/jolokia/"
my-prometheus: {"configuration":[{"key":"endpoints","value":["https://am.acme.com"]}],"tags":["prometheus","monitoring","metrics"],"templateName":"PrometheusAlertmanager"}
This page describes installing Lenses Agent in Kubernetes via Helm.
Kubernetes 1.23+
Helm 3.8.0+
Available local Postgres database instance.
Follow these steps to configure your Postgres database for Lenses Agent.
External Secrets Operator is the only supported secrets operator.
In order to configure an Agent, we have to understand the parameter groups that the Helm Chart offers.
Under the lensesAgent parameter there are some key parameter groups that are used to set up the connection to Lenses HQ:
Storage
HQ connection
Provision
Cluster RBACs
Moving forward, you can start configuring your Helm chart in the same order.
You can use JSON schema support to help you configure the Helm values files; a JSON schema for the Agent Helm chart is included in the repository.
Running Agent with Postgres database
Prerequisite:
Running Postgres instance;
Created database for an Agent;
Username (and password) which has access to the created database;
In order to successfully run the Agent, storage within values.yaml has to be defined first.
The definition of storage object is as follows:
lensesAgent:
storage:
postgres:
enabled: true
host: ""
port:
username: ""
database: ""
schema: ""
params: {}
Alongside the Postgres password, which can be referenced/created through the Helm chart, there are a few more options which can help while setting up the Agent.
There are two ways the username can be defined:
The most straightforward way, if the username is not being changed, is to define it within the username parameter, such as:
lensesAgent:
storage:
postgres:
enabled: true
host: postgres-postgresql.postgres.svc.cluster.local
port: 5432
database: lensesagent
username: lenses
lensesAgent:
storage:
postgres:
enabled: true
host: postgres-postgresql.postgres.svc.cluster.local
port: 5432
database: lensesagent
username: external # use "external" to manage it using secrets
additionalEnv:
- name: LENSES_STORAGE_POSTGRES_USERNAME
valueFrom:
secretKeyRef:
name: [SECRET_RESOURCE_NAME]
key: [SECRET_RESOURCE_KEY]
Password reference types
Postgres password can be handled in three ways using:
Pre-created secret;
Creating secrets on the spot through values.yaml;
lensesAgent:
storage:
postgres:
enabled: true
host: postgres-postgresql.playground.svc.cluster.local
port: 5432
username: lenses
database: lensesagent
password: useOnlyForDemos
lensesAgent:
storage:
postgres:
enabled: true
host: postgres-postgresql.postgres.svc.cluster.local
port: 5432
database: lensesagent
username: lenses
password: external # use "external" to manage it using secrets
additionalEnv:
- name: LENSES_STORAGE_POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: [SECRET_RESOURCE_NAME]
key: [SECRET_RESOURCE_KEY]
Running Agent with H2 embedded database
Embedded database is not recommended to be used in Production or high load environments.
In order to run the Agent with the H2 embedded database, there are a few things to be aware of:
The K8s cluster the Agent will be deployed on has to support Persistent Volumes;
The Postgres options in the Helm chart have to be left out.
persistence:
storageH2:
enabled: true
accessModes:
- ReadWriteOnce
size: 300Mi
Connection to Lenses HQ is a straightforward process which requires two steps:
Creating an Environment and obtaining an AGENT KEY in HQ as described here, if you have not already done so;
Storing that same key in Vault or as a K8s secret.
The agent communicates with HQ via a secure custom binary protocol channel. To establish this channel and authenticate, the Agent needs an AGENT KEY.
Once the AGENT KEY has been copied, store it inside of Vault or any other tool that has integration with Kubernetes secrets.
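For instance, to store the key as a plain Kubernetes secret matching the pre-created secret example further below (secret name hq-password, key key; the agent key value is a placeholder):
kubectl create secret generic hq-password \
  --namespace lenses \
  --from-literal=key='agent_key_...'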
There are three available options for how the agent key can be used:
ExternalSecret via External Secret Operator (ESO)
Pre-created secret
Inline string
To use this option, the External Secret Operator (ESO) has to be installed and available in the K8s cluster you are deploying the Agent to.
When specifying secret.type: "externalSecret", the chart will:
create an ExternalSecret in the namespace where the Agent is deployed;
mount the resulting secret for the Agent to use.
lensesAgent:
hq:
agentKey:
secret:
type: "externalSecret"
# Secret name where agentKey will be read from
name: hq-password
# Key name under secret where agentKey is stored
key: key
externalSecret:
additionalSpecs: {}
secretStoreRef:
type: ClusterSecretStore # ClusterSecretStore | SecretStore
name: [secretstore_name]
lensesAgent:
hq:
agentKey:
secret:
type: "precreated"
# Secret name where agentKey will be read from
name: hq-password
# Key name under secret where agentKey is stored
key: key
This option is NOT for PRODUCTION usage but rather just for demo / testing.
The chart will create a secret with defined values below and the same secret will be read by Agent to connect to HQ.
lensesAgent:
hq:
agentKey:
secret:
type: "createNew"
# Secret name where agentKey will be read from
name: "lenses-agent-secret-1"
# Value of agentKey generated by HQ
value: "agent_key_*"This secret will be fed into the provisioning.yaml. The HQ connection is specified below, where reference ${LENSESHQ_AGENT_KEY} is being set:
lensesAgent:
provision:
path: /mnt/provision-secrets
connections:
lensesHq:
- name: lenses-hq
version: 1
tags: ['hq']
configuration:
server:
value: [LENSES_HQ_FQDN_OR_IP]
port:
value: 10000
agentKey:
# This property shouldn't be changed as it is mounted automatically
# based on secret choice for hq.agentKey above
value: ${LENSESHQ_AGENT_KEY}
sslEnabled:
value: false
Provisioning offers various connections starting with:
Kafka ecosystem components such as:
lensesAgent:
provision:
path: /mnt/provision-secrets
connections:
# Kafka Connection
kafka:
- name: Kafka
version: 1
tags: [my-tag]
configuration:
kafkaBootstrapServers:
value:
- PLAINTEXT://your.kafka.broker.0:9092
- PLAINTEXT://your.kafka.broker.1:9092
protocol:
value: PLAINTEXT
# all metrics properties are optional
metricsPort:
value: 9581
metricsType:
value: JMX
metricsSsl:
value: false
# Confluent Schema Registry Connection
confluentSchemaRegistry:
- name: schema-registry
tags: ["tag1"]
version: 1
configuration:
schemaRegistryUrls:
value:
- http://my-sr.host1:8081
- http://my-sr.host2:8081
## all metrics properties are optional
metricsPort:
value: 9581
metricsType:
value: JMX
metricsSsl:
value: false
# Kafka Connect connection
connect:
- name: my-connect-cluster-name
version: 1
tags: ["tag1"]
configuration:
workers:
value:
- http://my-kc.worker1:8083
- http://my-kc.worker2:8083
metricsPort:
value: 9585
metricsType:
value: JMX
For a Kafka cluster secured with SASL, credentials can be injected from a secret via additionalEnv:
lensesAgent:
additionalEnv:
- name: SASL_JAAS_CONFIG
valueFrom:
secretKeyRef:
name: kafka-sharedkey
key: sasljaasconfig
provision:
path: /mnt/provision-secrets
connections:
# Kafka Connection
kafka:
- name: kafka
version: 1
tags: [ "dev", "dev-2", "eu"]
configuration:
kafkaBootstrapServers:
value:
- SASL_SSL://test-dev-2-kafka-bootstrap.kafka-dev.svc.cluster.local:9093
saslJaasConfig:
value: ${SASL_JAAS_CONFIG}
saslMechanism:
value: SCRAM-SHA-512
protocol:
value: SASL_SSL
# Confluent Schema Registry Connection
confluentSchemaRegistry:
- name: schema-registry
tags: ["tag1"]
version: 1
configuration:
schemaRegistryUrls:
value:
- http://my-sr.host1:8081
- http://my-sr.host2:8081
## all metrics properties are optional
metricsPort:
value: 9581
metricsType:
value: JMX
metricsSsl:
value: false
# Kafka Connect connection
connect:
- name: my-connect-cluster-name
version: 1
tags: ["tag1"]
configuration:
workers:
value:
- http://my-kc.worker1:8083
- http://my-kc.worker2:8083
metricsPort:
value: 9585
metricsType:
value: JMX
More about provisioning and more advanced configuration options for each of these components can be found at the following link.
The Helm chart creates ClusterRoles and ClusterRoleBindings that are used by SQL Processors if the deployment mode is set to KUBERNETES. They are used so that Lenses can deploy and monitor SQL Processor deployments in namespaces.
To disable the creation of Kubernetes RBAC set: rbacEnable: false
If you want to limit the permissions the Agent has against your Kubernetes cluster, you can use Role/RoleBinding resources instead. Follow this link to enable it.
If you are not using SQL Processors and want to limit the permissions given to the Agent's ServiceAccount, there are two options you can choose from:
rbacEnable: true - enables the creation of a ClusterRole and ClusterRoleBinding for the service account mentioned above;
rbacEnable: true
namespaceScope: false
rbacEnable: true and namespaceScope: true - enables the creation of a Role and RoleBinding, which is more restrictive;
rbacEnable: true
namespaceScope: true
In this case, TLS has to be enabled on HQ. If you have not yet enabled it, you can find the details here.
Enabling TLS for the communication between the Agent and HQ is done in the provisioning part of values.yaml.
In order to successfully enable TLS for the Agent you need:
additionalVolumes & additionalVolumeMounts - with which you mount the truststore holding the CA certificate that HQ is using, which the Agent needs to successfully pass the handshake;
additionalEnv - used to securely read the password that unlocks the truststore;
SSL enabled in the provision section.
# Additional Volume with CA that HQ uses
additionalVolumes:
- name: hq-truststore
secret:
secretName: hq-agent-test-authority
additionalVolumeMounts:
- name: hq-truststore
mountPath: "/mnt/provision-secrets/hq"
lensesAgent:
# Additional Env to read truststore password from secret
additionalEnv:
- name: LENSES_HQ_AGENT_TRUSTSTORE_PWD
valueFrom:
secretKeyRef:
name: hq-agent-test-authority
key: truststore.jks.password
provision:
path: /mnt/provision-secrets
connections:
lensesHq:
- name: lenses-hq
version: 1
tags: ['hq']
configuration:
server:
value: [HQ_URL]
port:
value: 10000
agentKey:
value: ${LENSESHQ_AGENT_KEY}
sslEnabled:
value: true
sslTruststore:
file: "/mnt/provision-secrets/gq/truststore.jks"
sslTruststorePassword:
value: ${LENSES_HQ_AGENT_TRUSTSTORE_PWD}
Enable a service resource in the values.yaml:
# Lenses service
service:
enabled: true
annotations: {}
To control the resources used by the Agent:
# Resource management
resources:
requests:
cpu: 1
memory: 4Gi
limits:
cpu: 2
memory: 5Gi
To enable SQL Processors in KUBERNETES mode and control the defaults:
lensesAgent:
sql:
processorImage: hub.docker.com/r/lensesioextra/sql-processor/
processorImageTag: latest
mode: KUBERNETES
heap: 1024M
minHeap: 128M
memLimit: 1152M
memRequest: 128M
livenessInitialDelay: 60 seconds
To achieve this, you need to create a Role and a RoleBinding resource in the namespace you want the processors deployed to.
For example:
Lenses namespace = lenses-ns
Processor namespace = lenses-proc-ns
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: processor-role
namespace: lenses-proc-ns
rules:
- apiGroups: [""]
resources:
- namespaces
- persistentvolumes
- persistentvolumeclaims
- pods/log
verbs:
- list
- watch
- get
- create
- apiGroups: ["", "extensions", "apps"]
resources:
- pods
- replicasets
- deployments
- ingresses
- secrets
- statefulsets
- services
verbs:
- list
- watch
- get
- update
- create
- delete
- patch
- apiGroups: [""]
resources:
- events
verbs:
- list
- watch
- get
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: processor-role-binding
namespace: lenses-proc-ns
subjects:
- kind: ServiceAccount
namespace: lenses-ns
name: default
roleRef:
kind: Role
name: processor-role
apiGroup: rbac.authorization.k8s.io
Finally, you need to define in the Agent configuration which namespaces the Agent has access to. Amend values.yaml to contain the following:
lensesAgent:
append:
conf: |
lenses.kubernetes.namespaces = {
incluster = [
"lenses-processors"
]
}
Persistence can be enabled for three purposes:
Use H2 embedded database
Logging
Provisioning
When to enable:
When using the Data Policies module to persist your data policies rules;
When lenses.storage.enabled: false and an H2 local filesystem database is used instead of PostgreSQL;
For non-critical and NON PROD deployments.
Configuration:
persistence:
storageH2:
enabled: true
accessModes:
- ReadWriteOnce
size: 20Gi
storageClass: ""
annotations: {}
existingClaim: ""When you need persistent log storage across pod restarts
When you want to retain logs for auditing or debugging purposes
Configuration:
persistence:
log:
enabled: true
accessModes:
- ReadWriteOnce
size: 5Gi
storageClass: ""
annotations: {}
existingClaim: ""Dedicated volume for provisioning data managed via the HQ.
When to enable:
When using HQ-based provisioning workflows
Must be combined with PROVISION_HQ_URL and PROVISION_AGENT_KEY environment variables
Configuration:
persistence:
provisioning:
enabled: true
accessModes:
- ReadWriteOnce
size: 5Mi
storageClass: ""
annotations: {}
existingClaim: ""or Helm command execution:
# Install the Chart.
helm repo add lensesio https://helm.repo.lenses.io/
helm repo update
# Deploy the Agent. Only available from version 6.1.0 onwards.
helm install lenses-agent \
lensesio/lenses-agent \
--set 'persistence.provisioning.enabled=true' \
--set 'lensesAgent.additionalEnv[0].name=PROVISION_HQ_URL' \
--set 'lensesAgent.additionalEnv[0].value=[lenses-hq.url]' \
--set 'lensesAgent.additionalEnv[1].name=PROVISION_AGENT_KEY' \
--set 'lensesAgent.additionalEnv[1].value=[agent_key_*]'
Prometheus metrics are automatically exposed on port 9102 under /metrics.
Currently you can scrape them only via the Service, on the port named http-metrics.
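For illustration, assuming the Prometheus Operator is installed, a ServiceMonitor along these lines could scrape that port (names and labels are illustrative and must match your Agent's Service):
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: lenses-agent
  namespace: lenses
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: lenses-agent  # match the labels on the Agent Service
  endpoints:
    - port: http-metrics  # the Service port in front of 9102
      path: /metrics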
The main configurable options for lenses.conf are available in the values.yaml under the lenses object. These include:
Authentication
Database connections
SQL processor configurations
To apply other static configurations use lenses.append.conf, for example:
lensesAgent:
append:
conf: |
lenses.interval.user.session.refresh=40000
First, add the Helm Chart repository using the Helm command line:
helm repo add lensesio https://helm.repo.lenses.io/
helm repo update
Installing using a cloned repository:
helm install lenses-agent charts/lenses-agent \
--values charts/lenses-agent/values.yaml \
--create-namespace --namespace lenses-agent
Installing using the Helm repository:
helm install lenses-agent lensesio/lenses-agent \
--values values.yaml \
--create-namespace --namespace lenses-agent \
--version 6.0.0
You can also find examples in the Helm chart repo.
This page describes installing Lenses HQ in Kubernetes via Helm.
Kubernetes 1.23+
Helm 3.8.0+
An available local Postgres database instance.
External Secrets Operator is the only supported secrets operator.
To configure Lenses HQ properly, we have to understand the parameter groups that the chart offers.
Under the lensesHq parameter there are some key parameter groups used to set up HQ:
Definition of the connection to the database (Postgres is the only storage option);
Password-based authentication configuration;
SAML / SSO configuration;
Definition of administrators, the first users to access HQ;
The port on which HQ will be available to end users;
Values of special headers and cookies;
Types of connection, such as TLS and non-TLS definitions;
The connection between HQ and the Agent, such as the port where HQ listens for agent connections;
Types of connection, such as TLS and non-TLS definitions;
The license;
Metrics settings, controlling where Prometheus-like metrics are exposed;
The logging level for HQ.
Moving forward, you can start configuring your Helm chart in the same order.
Prerequisite:
Running Postgres instance;
Created database for HQ;
Username (and password) which has access to the created database;
In order to successfully run HQ, storage within values.yaml has to be defined first.
The definition of the storage object is as follows:
lensesHq:
storage:
postgres:
enabled: true
host: ""
port:
username: ""
database: ""
schema: ""
tls:
params: {}
passwordSecret:
type: ""Alongside Postgres password, which can be referenced / created through Helm chart, there are few more options which can help while setting up HQ.
Username reference types
There are two ways the username can be defined:
The most straightforward way, if the username does not change, is to define it directly in the username parameter:
lensesHq:
storage:
postgres:
enabled: true
host: postgres-postgresql.postgres.svc.cluster.local
port: 5432
username: lenses
In case the Postgres username is rotated or changed frequently, it can be referenced from a pre-created secret:
lensesHq:
storage:
postgres:
enabled: true
host: postgres-postgresql.postgres.svc.cluster.local
port: 5432
username: lenses
useSecretForUsername:
enabled: true
existingSecret:
name: my-secret
key: username
Password reference types
The Postgres password can be handled in three ways:
An ExternalSecret via the External Secret Operator (ESO);
A pre-created secret;
A secret created on the spot through values.yaml.
To use this option, the External Secret Operator (ESO) has to be installed and available in the K8s cluster you are deploying HQ to.
When specifying passwordSecret.type: "externalSecret", the chart will:
create an ExternalSecret in the namespace where HQ is deployed;
mount the resulting secret for HQ to use.
lensesHq:
storage:
postgres:
enabled: true
host: postgres-postgresql.playground.svc.cluster.local
port: 5432
username: lenses
database: lenseshq
passwordSecret:
type: "externalSecret"
# Secret name where database password will be read from
name: hq-password
# Key name under secret where database password is stored
key: password
externalSecret:
additionalSpecs: {}
secretStoreRef:
type: SecretStore # or ClusterSecretStore
name: secretstore-secrets
lensesHq:
storage:
postgres:
enabled: true
host: postgres-postgresql.playground.svc.cluster.local
port: 5432
username: lenses
database: lenseshq
passwordSecret:
type: "precreated"
# Secret name where database password will be read from
name: hq-password
# Key from secret's data where database password is being stored
key: postgres-password
This option is NOT for PRODUCTION usage but rather just for demo / testing.
The chart will create a secret with defined values below and the same secret will be read by HQ in order to connect to Postgres.
lensesHq:
storage:
postgres:
enabled: true
host: [POSTGRES_HOSTNAME]
port: 5432
username: lenses
database: lenseshq
passwordSecret:
type: "createNew"
# name of a secret that will be created
name: [K8s_SECRET_NAME]
# Database password
password: [DATABASE_USER_PASSWORD]
Advanced Postgres settings
Sometimes special parameters are needed to form the correct connection URI. You can set these extra settings using params.
Example:
lensesHq:
storage:
postgres:
enabled: true
host: postgres-postgresql.postgres.svc.cluster.local
port: 5432
username: lenses
params:
sslmode: require
The second prerequisite to successfully run HQ is setting up initial authentication.
You can choose between:
password-based authentication, which requires users to provide a username and password;
and SAML/SSO (Single Sign-On) authentication, which allows users to authenticate through an external identity provider for a seamless and secure login experience.
The definition of the auth object is as follows:
lensesHq:
auth:
users:
- username: admin
# bcrypt("correcthorsebatterystaple").
password: $2a$10$F66cb6ZhnJjGCZuxlvKP1e84eytTpT1MDJcpBblHaZgsqp1/Aa0LG
administrators:
- admin
- [email protected]
- [email protected]
saml:
enabled: true
baseURL: ""
entityID: ""
# -- Example: <?xml version="1.0" ... (big blob of xml) </md:EntityDescriptor>
metadata:
referenceFromSecret: false
secretName: ""
secretKeyName: ""
stringData: |
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
</md:EntityDescriptor>
userCreationMode: "sso"
usersGroupMembershipManagementMode: "sso"
uiRootURL: "/"
groupAttributeKey: "groups"
authnRequestSignature:
enabled: false
First to cover is the users property. It is defined as an array, where each entry includes a username and a password. For security purposes, passwords must be hashed using bcrypt before being placed in the password property, ensuring they are stored correctly and securely.
lensesHq:
auth:
users:
- username: admin
# bcrypt("correcthorsebatterystaple").
password: $2a$10$F66cb6ZhnJjGCZuxlvKP1e84eytTpT1MDJcpBblHaZgsqp1/Aa0LG
Alternatively, credentials can be read from a pre-created secret via environment variables:
lensesHq:
auth:
users:
- username: $(ADMIN_USER)
password: $(ADMIN_USER_PWD)
additionalEnv:
- name: ADMIN_USER
valueFrom:
secretKeyRef:
name: multi-credentials-secret
key: user1-username
- name: ADMIN_USER_PWD
valueFrom:
secretKeyRef:
name: multi-credentials-secret
key: user1-password
Second to cover is administrators: the list of user emails that have the highest level of permissions upon authentication to HQ.
The third attribute is the saml.metadata field, needed for setting up SAML / SSO authentication. For this step, you will need the metadata.xml file, which can be set in two ways:
Referencing the metadata.xml file through a pre-created secret;
Placing the metadata.xml contents inline as a string.
lensesHq:
auth:
address: ":8080"
accessControlAllowOrigin:
-
administrators:
- [email protected]
- [email protected]
saml:
baseURL: ""
entityID: ""
metadata:
referenceFromSecret: true
secretName: hq-tls-mock-saml-metadata
secretKeyName: metadata.xml
userCreationMode: "sso"
usersGroupMembershipManagementMode: "manual"lensesHq:
auth:
address: ":8080"
accessControlAllowOrigin:
-
administrators:
- [email protected]
- [email protected]
saml:
baseURL: ""
entityID: ""
metadata:
referenceFromSecret: false
stringData: |
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
...
...
</md:EntityDescriptor>
userCreationMode: "sso"
usersGroupMembershipManagementMode: "sso"In case SAML IdP requires certificate verification, same can be enabled and provided in the following way:
lensesHq:
auth:
saml:
authnRequestSignature:
enabled: true
authnRequestSigningCert:
referenceFromSecret: true
secretName: hq-agent-test-authority
secretKeyName: hq-tls-test.crt.pem
authnRequestSigningKey:
secret:
name: saml-test
key: privatekey.key
lensesHq:
auth:
saml:
authnRequestSignature:
enabled: true
authnRequestSigningCert:
stringData: |
-----BEGIN CERTIFICATE-----
....
-----END CERTIFICATE-----
authnRequestSigningKey:
secret:
name: saml-test
key: privatekey.key
The third prerequisite to successfully run HQ is the http definition. As previously mentioned, this parameter defines everything around the HTTP endpoint of HQ itself and how users will interact with it.
The definition of the HTTP object is as follows:
lensesHq:
http:
address: ":8080"
accessControlAllowOrigin:
-
accessControlAllowCredentials: false
secureSessionCookies: true
tls:
enabled: true
cert:
privateKey:
secret:
name:
key:
After correctly configuring the authentication strategy and connection endpoint, agent handling is the last important box to tick.
The agents object is defined as follows:
lensesHq:
agents:
# which port to listen on for agent requests
address: ":10000"
tls:
enabled: false
verboseLogs: false
cert:
privateKey:Enabling TLS
By default TLS for the communication between Agent and HQ is disabled. In case the requirement is to enable it, fthe ollowing has to be set:
lensesHq.agents.tls - certificates to manage the connection between HQ and the Agents
lensesHq.http.tls- certificates to manage connection with HQ's API
Unlike private keys which can be referenced and obtained only through a secret, Certificates can be referenced directly in values.yaml file as a string or as a secret.
lensesHq:
agents:
address: ":10000"
tls:
enabled: true
cert:
referenceFromSecret: true
secretName: hq-agent-test-authority
secretKeyName: hq-tls-test.crt.pem
privateKey:
secret:
name: hq-agent-test-authority
key: hq-tls-test.key.pem
lensesHq:
agents:
address: ":10000"
tls:
enabled: true
cert:
stringData: |
-----BEGIN CERTIFICATE-----
...
...
-----END CERTIFICATE-----
privateKey:
secret:
name: hq-agent-test-authority
key: hq-tls-test.key.pem
For demo purposes and testing the product you can use our community license:
license_key_2SFZ0BesCNu6NFv0-EOSIvY22ChSzNWXa5nSds2l4z3y7aBgRPKCVnaeMlS57hHNVboR2kKaQ8Mtv1LFt0MPBBACGhDT5If8PmTraUM5xXLz4MYv
The license can be read in multiple ways:
from a pre-created secret
directly as a string defined in values.yaml file
lensesHq:
license:
referenceFromSecret: true
secretName: hq-license
secretKeyName: key
acceptEULA: true
lensesHq:
license:
referenceFromSecret: false
stringData: "license_key_*"
acceptEULA: true
Ingress and service resources are optionally supported.
The http ingress is intended only for HTTP/S traffic, while the agents ingress is designed specifically for TCP protocol. Ensure appropriate ingress configuration for your use case.
Enable an Ingress resource in the values.yaml:
ingress:
http:
enabled: true
annotations:
traefik.ingress.kubernetes.io/router.entrypoints: websecure
host: example.com
ingressClassName: ""
tls:
enabled: false
# The TLS secret must contain keys named tls.crt and tls.key that contain the certificate and private key to use for TLS.
secretName: ""
agent:
enabled: true
agentIngressConfig:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
name: agents
spec:
entryPoints:
- agents
routes:
- match: HostSNI(`example.com`) # HostSNI to match TLS for TCP
services:
- name: lenses-hq # Replace with your service name
port: 10000 # Agent default TCP port
tls: {}
Enable a service resource in the values.yaml:
# Lenses HQ service
service:
enabled: true
type: ClusterIP
annotations: {}
externalTrafficPolicy:
loadBalancerIP: 130.211.x.x
loadBalancerSourceRanges:
- 0.0.0.0/0
Lenses HQ, by default, uses the default Kubernetes service account, but you can choose to use a specific one.
If the user defines the following:
# serviceAccount is the Service account to be used by Lenses to deploy apps
serviceAccount:
create: true
annotations: {}
name: lenses-hq
The chart will create a new service account in the defined namespace for HQ to use.
There are two options you can choose between:
rbacEnable: true - enables the creation of a ClusterRole and ClusterRoleBinding for the service account mentioned in the snippet above;
rbacEnable: true and namespaceScope: true - enables the creation of a Role and RoleBinding, which is more restrictive.
There are different logging modes and levels that can be adjusted.
lensesHq:
logger:
# Allowed values are: text | json
mode: "text"
# Allowed values are: info | debug
level: "info"First, add the Helm Chart repository using the Helm command line:
helm repo add lensesio https://helm.repo.lenses.io/
helm repo update
helm install lenses-hq lensesio/lenses-hq \
--values values.yaml \
--create-namespace --namespace lenses-hq \
--version 6.0.8
After the successful configuration and installation of HQ, you can move on to the next steps.
This page describes the Lenses Agent configuration.
HQ's configuration is defined in the config.yaml file
To accept the Lenses EULA, set the following in the lenses.conf file:
Without accepting the EULA the Agent will not start! See License.
It has the following top level groups:
http | required: Yes | default: n/a | Configures everything involving the HTTP.
agents | required: Yes | default: n/a | Controls the agent handling.
database | required: Yes | default: n/a | Configures database settings.
logger | required: Yes | default: n/a | Sets the logger behaviour.
metrics | required: Yes | default: n/a | Controls the metrics settings.
license | required: Yes | default: n/a | Holds the license key.
auth | required: Yes | default: n/a | Configures authentication and authorisation.
Configures authentication and authorisation.
It has the following fields:
administrators | required: No | default: [] | type: strings | Grants root access to principals.
saml | required: No | default: n/a | Contains SAML2 IdP configuration.
users | required: No | default: [] | type: Array | Creates initial users for password based authentication.
Lists the names of the principals (users, service accounts) that have root access. Access control allows any API operation performed by such principals. Optional. If not set, it will default to [].
Contains SAML2 IdP configuration. Please refer here for its structure.
Configures everything involving the HTTP.
It has the following fields:
address | required: Yes | default: n/a | type: string | Sets the address the HTTP server listens at.
accessControlAllowOrigin | required: No | default: ["*"] | type: strings | Sets the value of the "Access-Control-Allow-Origin" header.
accessControlAllowCredentials | required: No | default: false | type: boolean | Sets the value of the "Access-Control-Allow-Credentials" header.
secureSessionCookies | required: No | default: true | type: boolean | Sets the "Secure" attribute on session cookies.
tls | required: Yes | default: n/a | Contains TLS configuration.
Sets the address the HTTP server listens at.
Example value: 127.0.0.1:80.
Sets the value of the "Access-Control-Allow-Origin" header. This is only relevant when serving the backend from a different origin than the UI. Optional. If not set, it will default to ["*"].
Sets the value of the "Access-Control-Allow-Credentials" header. This is only relevant when serving the backend from a different origin than the UI. Optional. If not set, it will default to false.
Sets the "Secure" attribute on authentication session cookies. When set, a browser sends such cookies not over unsecured HTTP (expect for localhost). If running Lenses HQ over unsecured HTTP, set this to false. Optional. If not set, it will default to true.
Contains TLS configuration. Please refer here for its structure.
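Putting these fields together, a minimal http group in config.yaml might look like this sketch (values are illustrative):
http:
  address: ":8080"
  accessControlAllowOrigin:
    - "*"
  accessControlAllowCredentials: false
  secureSessionCookies: true
  tls:
    enabled: false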
Contains SAML2 IdP configuration.
It has the following fields:
metadata | required: Yes | default: n/a | type: string | Contains the IdP issued XML metadata blob.
baseURL | required: Yes | default: n/a | type: string | Defines the base URL of HQ for IdP redirects.
uiRootURL | required: No | default: / | type: string | Controls where to redirect to upon successful authentication.
entityID | required: Yes | default: n/a | type: string | Defines the Entity ID.
groupAttributeKey | required: No | default: groups | type: string | Sets the attribute name for group names.
userCreationMode | required: No | default: manual | type: string | Controls how the creation of users should be handled in relation to SSO information.
groupMembershipMode | required: No | default: manual | type: string | Controls how the management of a user's group membership should be handled in relation to SSO information.
Contains the IdP issued XML metadata blob.
Example value: <?xml version="1.0" ... (big blob of xml) </md:EntityDescriptor>.
Defines the base URL of Lenses HQ; the IdP redirects back to here on success.
Example value: https://hq.example.com.
Controls where the backend redirects to after having received a valid SAML2 assertion. Optional. If not set, it will default to /.
Example value: /.
Defines the Entity ID.
Example value: https://hq.example.com.
Sets the attribute name from which group names are extracted in the SAML2 assertions. Different providers use different names. Okta, Keycloak and Google use "groups". OneLogin uses "roles". Azure uses "http://schemas.microsoft.com/ws/2008/06/identity/claims/groups". Optional. If not set, it will default to groups.
Example value: groups.
Controls how the creation of users should be handled in relation to SSO information. With the 'manual' mode, only users that currently exist in HQ can login. Users that do not exist are rejected. With the 'sso' mode, users that do not exist are automatically created. Allowed values are manual or sso. Optional. If not set, it will default to manual.
Controls how the management of a user's group membership should be handled in relation to SSO information. With the 'manual' mode, the information about the group membership returned from an Identity Provider will not be used and a user will only be a member of groups that were explicitly assigned to them locally. With the 'sso' mode, group information from the Identity Provider (IdP) will be used. On login, a user's group membership is set to the groups listed in the IdP. Groups that do not exist in HQ are ignored. Allowed values are manual or sso. Optional. If not set, it will default to manual.
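As a sketch, a saml group (nested under auth) using these fields might look as follows (URLs are illustrative and the metadata blob is elided):
saml:
  metadata: |
    <?xml version="1.0" ... </md:EntityDescriptor>
  baseURL: https://hq.example.com
  entityID: https://hq.example.com
  uiRootURL: /
  groupAttributeKey: groups
  userCreationMode: sso
  groupMembershipMode: sso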
Controls the agent handling.
It has the following fields:
address | required: Yes | default: n/a | type: string | Sets the address the agent server listens at.
tls | required: Yes | default: n/a | Contains TLS configuration.
Sets the address the agent server listens at.
Example value: 127.0.0.1:3000.
Contains TLS configuration. Please refer here for its structure.
Contains TLS configuration.
It has the following fields:
enabled | required: Yes | default: n/a | type: boolean | Enables or disables TLS.
cert | required: No | default: `` | type: string | Sets the PEM formatted public certificate.
key | required: No | default: `` | type: string | Sets the PEM formatted private key.
verboseLogs | required: No | default: false | type: boolean | Enables verbose TLS logging.
Enables or disables TLS.
Example value: false.
Sets the PEM formatted public certificate. Optional. If not set, it will default to ``.
Example value: -----BEGIN CERTIFICATE----- EXampLeRanDoM ... -----END CERTIFICATE----- .
Sets the PEM formatted private key. Optional. If not set, it will default to ``.
Example value: -----BEGIN PRIVATE KEY----- ExAmPlErAnDoM ... -----END PRIVATE KEY----- .
Enables additional logging of TLS settings and events at debug level. The information presented might be a bit too much for day to day use but can provide extra information for troubleshooting TLS configuration. Optional. If not set, it will default to false.
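A tls group using these fields might look like this sketch (certificate contents elided):
tls:
  enabled: true
  cert: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
  key: |
    -----BEGIN PRIVATE KEY-----
    ...
    -----END PRIVATE KEY-----
  verboseLogs: false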
Configures database settings.
It has the following fields:
host | required: Yes | default: n/a | type: string | Sets the name of the host to connect to.
username | required: No | default: `` | type: string | Sets the username to authenticate as.
password | required: No | default: `` | type: string | Sets the password to authenticate as.
database | required: Yes | default: n/a | type: string | Sets the database to use.
schema | required: No | default: `` | type: string | Sets the schema to use.
TLS | required: No | default: false | type: boolean | Enables TLS.
params | required: No | default: {} | type: DBConnectionParams | Provides fine-grained control.
Sets the name of the host to connect to. A comma-separated list of host names is also accepted; each host name in the list is tried in order.
Example value: postgres:5432.
Sets the username to authenticate as. Optional. If not set, it will default to ``.
Example value: johhnybingo.
Sets the password to authenticate as. Optional. If not set, it will default to ``.
Example value: my-password.
Sets the database to use.
Example value: my-database.
Sets the schema to use. Optional. If not set, it will default to ``.
Example value: my-schema.
Enables TLS. In PostgreSQL connection string terms, setting TLS to false corresponds to sslmode=disable; setting TLS to true corresponds to sslmode=verify-full. For more fine-grained control, specify sslmode in the params which takes precedence. Optional. If not set, it will default to false.
Example value: true.
Contains connection string parameters as key/value pairs. It allows fine-grained control of connection settings. The parameters can be found here: https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-PARAMKEYWORDS. Optional. If not set, it will default to {}.
Example value: {"application_name":"example"}.
Sets the logger behaviour.
It has the following fields:
mode | required: Yes | default: n/a | type: string | Controls the format of the logger's output.
level | required: No | default: info | type: string | Controls the level of the logger.
Controls the format of the logger's output. Allowed values are text or json.
Controls the level of the logger. Allowed values are info or debug. Optional. If not set, it will default to info.
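A logger group with these fields might look like the following sketch:
logger:
  mode: json   # text | json
  level: debug # info | debug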
Controls the metrics settings.
It has the following fields:
prometheusAddress | required: No | default: :9090 | type: string | Sets the Prometheus address.
Sets the address at which Prometheus metrics are served. Optional. If not set, it will default to :9090.
Holds the license key.
It has the following fields:
key | required: Yes | default: n/a | type: string | Sets the license key.
acceptEULA | required: Yes | default: false | type: boolean | Accepts the Lenses EULA.
Sets the license key. An HQ key starts with "licensekey".
Accepts the Lenses EULA.
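Correspondingly, minimal metrics and license groups might look like this sketch (the key value is illustrative):
metrics:
  prometheusAddress: ":9090"
license:
  key: licensekey_xxxxxxxx  # an HQ key starts with "licensekey"
  acceptEULA: true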
HQ's configuration is defined in the config.yaml file.
It has the following top level groups:
Configures authentication and authorisation.
It has the following fields:
Lists the names of the principals (users, service accounts) that have root access. Access control allows any API operation performed by such principals. Optional. If not set, it will default to [].
Contains SAML2 IdP configuration. Please refer here for its structure.
Configures everything involving the HTTP.
It has the following fields:
Sets the address the HTTP server listens at.
Example value: 127.0.0.1:80.
Sets the value of the "Access-Control-Allow-Origin" header. This is only relevant when serving the backend from a different origin than the UI. Optional. If not set, it will default to ["*"].
Sets the value of the "Access-Control-Allow-Credentials" header. This is only relevant when serving the backend from a different origin than the UI. Optional. If not set, it will default to false.
Sets the "Secure" attribute on authentication session cookies. When set, a browser sends such cookies not over unsecured HTTP (expect for localhost). If running Lenses HQ over unsecured HTTP, set this to false. Optional. If not set, it will default to true.
Contains TLS configuration. Please refer here for its structure.
Contains SAML2 IdP configuration.
It has the following fields:
Contains the IdP issued XML metadata blob.
Example value: <?xml version="1.0" ... (big blob of xml) </md:EntityDescriptor>.
Defines the base URL of Lenses HQ; the IdP redirects back to here on success.
Example value: https://hq.example.com.
Controls where the backend redirects to after having received a valid SAML2 assertion. Optional. If not set, it will default to /.
Example value: /.
Defines the Entity ID.
Example value: https://hq.example.com.
Sets the attribute name from which group names are extracted in the SAML2 assertions. Different providers use different names. Okta, Keycloak and Google use "groups". OneLogin uses "roles". Azure uses "http://schemas.microsoft.com/ws/2008/06/identity/claims/groups". Optional. If not set, it will default to groups.
Example value: groups.
Controls how the creation of users should be handled in relation to SSO information. With the 'manual' mode, only users that currently exist in HQ can login. Users that do not exist are rejected. With the 'sso' mode, users that do not exist are automatically created. Allowed values are manual or sso. Optional. If not set, it will default to manual.
Controls how the management of a user's group membership should be handled in relation to SSO information. With the 'manual' mode, the information about the group membership returned from an Identity Provider will not be used and a user will only be a member of groups that were explicitly assigned to them locally. With the 'sso' mode, group information from the Identity Provider (IdP) will be used. On login, a user's group membership is set to the groups listed in the IdP. Groups that do not exist in HQ are ignored. Allowed values are manual or sso. Optional. If not set, it will default to manual.
Controls the agent handling.
It has the following fields:
Sets the address the agent server listens at.
Example value: 127.0.0.1:3000.
Contains TLS configuration. Please refer here for its structure.
Contains Agent gRPC configuration. This configuration section is optional. If not provided, its values are set to the defaults described in its structure.
Contains TLS configuration.
It has the following fields:
Enables or disables TLS.
Example value: false.
Sets the PEM formatted public certificate. Optional. If not set, it will default to ``.
Example value: -----BEGIN CERTIFICATE----- EXampLeRanDoM ... -----END CERTIFICATE----- .
Sets the PEM formatted private key. Optional. If not set, it will default to ``.
Example value: -----BEGIN PRIVATE KEY----- ExAmPlErAnDoM ... -----END PRIVATE KEY----- .
Enables additional logging of TLS settings and events at debug level. The information presented might be a bit too much for day to day use but can provide extra information for troubleshooting TLS configuration. Optional. If not set, it will default to false.
Configures database settings.
It has the following fields:
Sets the name of the host to connect to. A comma-separated list of host names is also accepted; each host name in the list is tried in order.
Example value: postgres:5432.
Sets the username to authenticate as. Optional. If not set, it will default to ``.
Example value: johhnybingo.
Sets the password to authenticate as. Optional. If not set, it will default to ``.
Example value: my-password.
Sets the database to use.
Example value: my-database.
Sets the schema to use. Optional. If not set, it will default to "".
Example value: my-schema.
Enables TLS. In PostgreSQL connection string terms, setting TLS to false corresponds to sslmode=disable; setting TLS to true corresponds to sslmode=verify-full. For more fine-grained control, specify sslmode in the params which takes precedence. Optional. If not set, it will default to false.
Example value: true.
Contains connection string parameters as key/value pairs. It allows fine-grained control of connection settings. The parameters can be found here: https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-PARAMKEYWORDS. Optional. If not set, it will default to {}.
Example value: {"application_name":"example"}.
Sets the logger behaviour.
It has the following fields:
Controls the format of the logger's output. Allowed values are text or json.
Controls the level of the logger. Allowed values are info or debug. Optional. If not set, it will default to info.
Controls the metrics settings.
It has the following fields:
Sets the address at which Prometheus metrics are served. Optional. If not set, it will default to :9090.
Holds the license key.
It has the following fields:
Sets the license key. An HQ key starts with "licensekey".
Accepts the Lenses EULA.
Top level groups:
http | required: Yes | default: n/a | Configures everything involving the HTTP.
agents | required: Yes | default: n/a | Controls the agent handling.
database | required: Yes | default: n/a | Configures database settings.
logger | required: Yes | default: n/a | Sets the logger behaviour.
metrics | required: Yes | default: n/a | Controls the metrics settings.
license | required: Yes | default: n/a | Holds the license key.
auth | required: Yes | default: n/a | Configures authentication and authorisation.
auth fields:
administrators | required: No | default: [] | type: strings | Grants root access to principals.
saml | required: No | default: n/a | Contains SAML2 IdP configuration.
users | required: No | default: [] | type: Array | Creates initial users for password based authentication.
http fields:
address | required: Yes | default: n/a | type: string | Sets the address the HTTP server listens at.
accessControlAllowOrigin | required: No | default: ["*"] | type: strings | Sets the value of the "Access-Control-Allow-Origin" header.
accessControlAllowCredentials | required: No | default: false | type: boolean | Sets the value of the "Access-Control-Allow-Credentials" header.
secureSessionCookies | required: No | default: true | type: boolean | Sets the "Secure" attribute on session cookies.
tls | required: Yes | default: n/a | Contains TLS configuration.
saml fields:
enabled | required: Yes | default: false | type: boolean | Enables or disables SAML.
metadata | required: Yes | default: n/a | type: string | Contains the IdP issued XML metadata blob.
baseURL | required: Yes | default: n/a | type: string | Defines the base URL of HQ for IdP redirects.
uiRootURL | required: No | default: / | type: string | Controls where to redirect to upon successful authentication.
entityID | required: Yes | default: n/a | type: string | Defines the Entity ID.
groupAttributeKey | required: No | default: groups | type: string | Sets the attribute name for group names.
userCreationMode | required: No | default: manual | type: string | Controls how the creation of users should be handled in relation to SSO information.
groupMembershipMode | required: No | default: manual | type: string | Controls how the management of a user's group membership should be handled in relation to SSO information.
authnRequestSignature | required: No | Enables signing the AuthnRequest that HQ sends to the IdP.
authnRequestSignature fields:
enabled | required: Yes | default: true | type: boolean | Enables or disables AuthnRequest signing.
cert | required: Yes | default: n/a | type: string | Sets the PEM formatted AuthnRequest signing certificate. If provided, the key needs to be provided as well. If not provided while AuthnRequest signing is enabled, HQ will generate a key-pair on start.
key | required: No | default: n/a | type: string | Sets the PEM formatted AuthnRequest signing private key. If provided, the cert needs to be provided as well. If not provided while AuthnRequest signing is enabled, HQ will generate a key-pair on start.
agents fields:
address | required: Yes | default: n/a | type: string | Sets the address the agent server listens at.
tls | required: Yes | default: n/a | Contains TLS configuration.
grpc | required: No | default: n/a | Contains Agent gRPC configuration.
grpc fields:
apiMaxRecvMessageSize | required: No | default: 33554432 | type: integer | Overrides the default maximum body size in bytes for proxied API responses.
tls fields:
enabled | required: Yes | default: n/a | type: boolean | Enables or disables TLS.
cert | required: No | default: `` | type: string | Sets the PEM formatted public certificate.
key | required: No | default: `` | type: string | Sets the PEM formatted private key.
verboseLogs | required: No | default: false | type: boolean | Enables verbose TLS logging.
database fields:
host | required: Yes | default: n/a | type: string | Sets the name of the host to connect to.
username | required: No | default: `` | type: string | Sets the username to authenticate as.
password | required: No | default: `` | type: string | Sets the password to authenticate as.
database | required: Yes | default: n/a | type: string | Sets the database to use.
schema | required: No | default: `` | type: string | Sets the schema to use.
TLS | required: No | default: false | type: boolean | Enables TLS.
params | required: No | default: {} | type: DBConnectionParams | Provides fine-grained control.
logger fields:
mode | required: Yes | default: n/a | type: string | Controls the format of the logger's output.
level | required: No | default: info | type: string | Controls the level of the logger.
metrics fields:
prometheusAddress | required: No | default: :9090 | type: string | Sets the Prometheus address.
license fields:
key | required: Yes | default: n/a | type: string | Sets the license key.
acceptEULA | required: Yes | default: false | type: boolean | Accepts the Lenses EULA.
Lenses 5.0+ introduces two primary methods for configuring your Lenses instance:
Wizard Mode: An interactive UI-based setup that appears when no Kafka brokers are configured
Provision Mode: A programmatic approach using a provisioning.yaml configuration, with or without sidecar containers
This guide walks you through migrating from the Wizard Mode setup to a fully automated Provision configuration, enabling GitOps workflows and Infrastructure as Code practices.
You should migrate from Wizard Mode to Provision when you need:
Automated deployments and Infrastructure as Code
GitOps workflows for configuration management
Consistent environments across development, staging, and production
Version control for your Lenses configuration
Scalable deployment patterns for multiple Lenses Agent instances
Migrating from Lenses 4/5 to Lenses 6 Agent
Aspect | Wizard Mode | Provision Mode
Setup Method | Interactive UI | YAML configuration
Automation | Manual | Fully automated
Version Control | Not supported | Full Git integration
Secrets Management | Manual entry | Kubernetes secrets + file references
Deployment | One-time setup | Repeatable deployments
Configuration Updates | UI-based | Code-based with CI/CD
Before starting your migration, please check Tips: Before Upgrade
Basic Structure
For Helm Deployment:
create a values.yaml file with the provision configuration enabled
For deployment from Archive create:
lenses-agent.conf and
provisioning.yaml file
Connections that provisioning will include:
Kafka;
SchemaRegistry;
KafkaConnect;
LensesHQ.
lensesAgent:
hq:
agentKey:
secret:
type: "createNew"
name: lenses-agent-secret
value: <your-HQ-generated-agentKey>
provision:
path: /mnt/provision-secrets
connections:
lensesHq:
- name: lenses-hq
version: 1
tags: ['hq']
configuration:
server:
value: <your-HQ-address>
port:
value: 10000
agentKey:
value: ${LENSESHQ_AGENT_KEY}
sslEnabled:
value: false
kafka:
- name: kafka
version: 1
tags: [ "prod", "prod-1", "us"]
configuration:
kafkaBootstrapServers:
value:
- PLAINTEXT://<your-kafka-address>:9092
metricsType:
value: JMX
metricsPort:
value: 9999
For more Kafka connection details, such as using secure connections, please read Kafka.
lensesHq:
- configuration:
agentKey:
value: <your-HQ-generated-agentKey>
port:
value: 10000
server:
value: <your-HQ-address>
sslEnabled:
value: false
name: lenses-hq
tags:
- hq
version: 1
kafka:
- configuration:
kafkaBootstrapServers:
value:
- PLAINTEXT://<your-kafka-address>:9092
metricsPort:
value: 9999
metricsType:
value: JMX
name: kafka
tags:
- prod
- prod-1
- us
version: 1
The last two pieces are:
Kafka Connect and
Schema Registry
lensesAgent:
hq:
agentKey:
secret:
type: "createNew"
name: lenses-agent-secret
value: <your-HQ-generated-agentKey>
provision:
path: /mnt/provision-secrets
connections:
lensesHq:
- name: lenses-hq
version: 1
tags: ['hq']
configuration:
server:
value: <your-HQ-address>
port:
value: 10000
agentKey:
value: ${LENSESHQ_AGENT_KEY}
sslEnabled:
value: false
kafka:
- name: kafka
version: 1
tags: [ "prod", "prod-1", "us"]
configuration:
kafkaBootstrapServers:
value:
- PLAINTEXT://<your-kafka-address>:9092
metricsType:
value: JMX
metricsPort:
value: 9999
confluentSchemaRegistry:
- name: schema-registry
version: 1
tags: [ "prod", "global" ]
configuration:
schemaRegistryUrls:
value:
- http://<your-schema-registry-address>:8081
connect:
- name: datalake-connect
version: 1
tags: [ "prod", "us" ]
configuration:
workers:
value:
- http://<your-kafka-connect-address>:8083
lensesHq:
- configuration:
agentKey:
value: <your-HQ-generated-agentKey>
port:
value: 10000
server:
value: <your-HQ-address>
sslEnabled:
value: false
name: lenses-hq
tags:
- hq
version: 1
kafka:
- configuration:
kafkaBootstrapServers:
value:
- PLAINTEXT://<your-kafka-address>:9092
metricsPort:
value: 9999
metricsType:
value: JMX
name: kafka
tags:
- prod
- prod-1
- us
version: 1
confluentSchemaRegistry:
- configuration:
schemaRegistryUrls:
value:
- http://<your-schemaregistry-address>:8081
name: schema-registry
tags:
- prod
- global
version: 1
connect:
- configuration:
workers:
value:
- http://<your-kafkaconnect-address>:8083
name: datalake-connect
tags:
- prod
- us
version: 1
Through the last few steps, we covered configuring:
Kafka
Schema Registry
Kafka Connect connections.
Last but not least, and probably the most important, is creating the HQ connection; otherwise the Agent won't be usable.
lensesAgent:
hq:
agentKey:
secret:
type: "createNew"
name: lenses-agent-secret
value: <your-HQ-generated-agentKey>
provision:
path: /mnt/provision-secrets
connections:
lensesHq:
- name: lenses-hq
version: 1
tags: ['hq']
configuration:
server:
value: <your-HQ-address>
port:
value: 10000
agentKey:
value: ${LENSESHQ_AGENT_KEY}
sslEnabled:
value: false
lensesHq:
- configuration:
agentKey:
value: <your-HQ-generated-agentKey>
port:
value: 10000
server:
value: <your-HQ-address>
sslEnabled:
value: false
name: lenses-hq
tags:
- hq
version: 1
kafka:
- configuration:
kafkaBootstrapServers:
value:
- PLAINTEXT://<your-kafka-address>:9092
metricsPort:
value: 9999
metricsType:
value: JMX
name: kafka
tags:
- prod
- prod-1
- us
version: 1
License configuration is part of Lenses HQ from version 6.
In case Postgres is being used:
lensesAgent:
storage:
postgres:
enabled: true
host: postgres-1.postgres.svc.cluster.local
port: 5432 # optional, defaults to 5432
username: prod
password: external # use "external" to manage it using secrets
database: agent
additionalEnv:
- name: LENSES_STORAGE_POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: local-postgres-pwd
key: password
hq:
agentKey:
secret:
type: "createNew"
name: lenses-agent-secret
value: agent_key_*
ssl:
enabled: false
provision:
path: /mnt/provision-secrets
connections:
lensesHq:
- name: lenses-hq
version: 1
tags: ['hq']
configuration:
server:
value: lenses-hq.lenses.svc.cluster.local
port:
value: 10000
agentKey:
value: ${LENSESHQ_AGENT_KEY}
sslEnabled:
value: false
kafka:
- name: kafka
version: 1
tags: [ "prod", "prod-2", "eu"]
configuration:
kafkaBootstrapServers:
value:
- PLAINTEXT://testing-kafka-bootstrap.kafka.svc.cluster.local:9092
metricsType:
value: JMX
metricsPort:
value: 9999
connect:
- name: datalake-connect
version: 1
tags: [ "prod-2", "eu" ]
configuration:
workers:
value:
- http://testing-connect-connect.kafka-connect.svc.cluster.local:8083
confluentSchemaRegistry:
- name: schema-registry
version: 1
tags: [ "prod", "global" ]
configuration:
schemaRegistryUrls:
value:
- http://testing-schema-registry.schema-registry.svc.cluster.local:8081
You would have to use two files:
lenses-agent.conf
provisioning.yaml
# Auto-detected env vars
lenses.secret.file=/data/security.conf
# lenses.append.conf
lenses.storage.postgres.host="postgres-1.postgres.svc.cluster.local"
lenses.storage.postgres.database="agent"
lenses.storage.postgres.username="agent"
lenses.storage.postgres.password="pleasechangeme"
lenses.storage.postgres.port="5432"
lenses.provisioning.path="/mnt/provision-secrets"
lensesHq:
- configuration:
agentKey:
value: <your-HQ-generated-agentKey>
port:
value: 10000
server:
value: <your-HQ-address>
sslEnabled:
value: false
name: lenses-hq
tags:
- hq
version: 1
kafka:
- configuration:
kafkaBootstrapServers:
value:
- PLAINTEXT://<your-kafka-address>:9092
metricsPort:
value: 9999
metricsType:
value: JMX
name: kafka
tags:
- prod
- prod-1
- us
version: 1
confluentSchemaRegistry:
- configuration:
schemaRegistryUrls:
value:
- http://<your-schemaregistry-address>:8081
name: schema-registry
tags:
- prod
- global
version: 1
connect:
- configuration:
workers:
value:
- http://<your-kafkaconnect-address>:8083
name: datalake-connect
tags:
- prod
- us
version: 1
H2 as a storage mechanism is available only from Agent v6.0.6.
Be aware that H2 is not recommended for production environments.
persistence:
storageH2:
enabled: true
accessModes:
- ReadWriteOnce
size: 5Gi
lensesAgent:
hq:
agentKey:
secret:
type: "createNew"
name: lenses-agent-secret
value: agent_key_*
ssl:
enabled: false
provision:
path: /mnt/provision-secrets
connections:
lensesHq:
- name: lenses-hq
version: 1
tags: ['hq']
configuration:
server:
value: lenses-hq.lenses.svc.cluster.local
port:
value: 10000
agentKey:
value: ${LENSESHQ_AGENT_KEY}
sslEnabled:
value: false
kafka:
- name: kafka
version: 1
tags: [ "prod", "prod-2", "eu"]
configuration:
kafkaBootstrapServers:
value:
- PLAINTEXT://testing-kafka-bootstrap.kafka.svc.cluster.local:9092
metricsType:
value: JMX
metricsPort:
value: 9999
connect:
- name: datalake-connect
version: 1
tags: [ "prod-2", "eu" ]
configuration:
workers:
value:
- http://testing-connect-connect.kafka-connect.svc.cluster.local:8083
confluentSchemaRegistry:
- name: schema-registry
version: 1
tags: [ "prod", "global" ]
configuration:
schemaRegistryUrls:
value:
- http://testing-schema-registry.schema-registry.svc.cluster.local:8081
You would have to use two files:
lenses-agent.conf
provisioning.yaml
# Auto-detected env vars
lenses.secret.file=/data/security.conf
# lenses.append.conf
lenses.provisioning.path="/mnt/provision-secrets"
lensesHq:
- configuration:
agentKey:
value: <your-HQ-generated-agentKey>
port:
value: 10000
server:
value: <your-HQ-address>
sslEnabled:
value: false
name: lenses-hq
tags:
- hq
version: 1
kafka:
- configuration:
kafkaBootstrapServers:
value:
- PLAINTEXT://<your-kafka-address>:9092
metricsPort:
value: 9999
metricsType:
value: JMX
name: kafka
tags:
- prod
- prod-1
- us
version: 1
confluentSchemaRegistry:
- configuration:
schemaRegistryUrls:
value:
- http://<your-schemaregistry-address>:8081
name: schema-registry
tags:
- prod
- global
version: 1
connect:
- configuration:
workers:
value:
- http://<your-kafkaconnect-address>:8083
name: datalake-connect
tags:
- prod
- us
version: 1
Here's a complete values.yaml and lenses-agent.conf + provisioning.yaml example for a production migration:
For more Helm options, please check lenses-helm-chart repo.
rbacEnable: true
namespaceScope: true
lensesAgent:
hq:
agentKey:
secret:
type: "createNew"
name: lenses-agent-secret
value: agent_key_*
ssl:
enabled: false
provision:
path: /mnt/provision-secrets
connections:
lensesHq:
- name: lenses-hq
version: 1
tags: ['hq']
configuration:
server:
value: lenses-hq.lenses.svc.cluster.local
port:
value: 10000
agentKey:
value: ${LENSESHQ_AGENT_KEY}
sslEnabled:
value: false
kafka:
- name: kafka
version: 1
tags: [ "prod", "prod-2", "eu"]
configuration:
kafkaBootstrapServers:
value:
- PLAINTEXT://testing-kafka-bootstrap.kafka.svc.cluster.local:9092
metricsType:
value: JMX
metricsPort:
value: 9999
connect:
- name: datalake-connect
version: 1
tags: [ "prod-2", "eu" ]
configuration:
workers:
value:
- http://testing-connect-connect.kafka-connect.svc.cluster.local:8083
confluentSchemaRegistry:
- name: schema-registry
version: 1
tags: [ "prod", "global" ]
configuration:
schemaRegistryUrls:
value:
- http://testing-schema-registry.schema-registry.svc.cluster.local:8081
storage:
postgres:
enabled: true
host: postgres-1.postgres.svc.cluster.local
port: 5432 # optional, defaults to 5432
username: prod
password: external # use "external" to manage it using secrets
database: agent
additionalEnv:
- name: LENSES_STORAGE_POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: local-postgres-pwd
key: passwordYou would have to use two files:
lenses-agent.conf
provisioning.yaml
# Auto-detected env vars
lenses.kubernetes.pod.mem.request=128M
lenses.kubernetes.pod.mem.limit=1152M
lenses.jmx.port=9101
lenses.kubernetes.pod.liveness.initial.delay="60 seconds"
lenses.sql.execution.mode=KUBERNETES
lenses.topics.metrics=_kafka_lenses_metrics
lenses.provisioning.path="/mnt/provision-secrets"
lenses.topics.external.topology=__topology
lenses.kubernetes.pod.min.heap=128M
lenses.port=3030
lenses.kubernetes.pod.heap=1024M
lenses.topics.external.metrics=__topology__metrics
lenses.secret.file=/data/security.conf
# lenses.append.conf
lenses.storage.postgres.host="postgres-1.postgres.svc.cluster.local"
lenses.storage.postgres.database="agent"
lenses.storage.postgres.username="agent"
lenses.storage.postgres.password="pleasechangeme"
lenses.storage.postgres.port="5432"
lenses.provisioning.path="/mnt/provision-secrets"
lensesHq:
- configuration:
agentKey:
value: <your-HQ-generated-agentKey>
port:
value: 10000
server:
value: <your-HQ-address>
sslEnabled:
value: false
name: lenses-hq
tags:
- hq
version: 1
kafka:
- configuration:
kafkaBootstrapServers:
value:
- PLAINTEXT://<your-kafka-address>:9092
metricsPort:
value: 9999
metricsType:
value: JMX
name: kafka
tags:
- prod
- prod-1
- us
version: 1
confluentSchemaRegistry:
- configuration:
schemaRegistryUrls:
value:
- http://<your-schemaregistry-address>:8081
name: schema-registry
tags:
- prod
- global
version: 1
connect:
- configuration:
workers:
value:
- http://<your-kafkaconnect-address>:8083
name: datalake-connect
tags:
- prod
- us
version: 1
Add the Lenses Helm repository:
helm repo add lensesio https://helm.repo.lenses.io/
helm repo update
Deploy Lenses with your provision configuration:
helm install lenses-agent lensesio/lenses-agent \
--namespace lenses \
--create-namespace \
-f values.yaml
Monitor the deployment:
kubectl get pods -n lenses -w
kubectl logs -n lenses deployment/lenses-agent
Download Archive
A link to the archives can be found here: https://archive.lenses.io/lenses/6.0/agent/
Extract the archive using the following command:
tar -xvf lenses-agent-latest-linux64.tar.gz -C lenses
Start the Agent:
bin/lenses lenses-agent.conf
Use Kubernetes secrets for sensitive data instead of inline values
Enable TLS for all Lenses HQ connections
Implement RBAC for Kubernetes and Lenses HQ & Agent access
Rotate credentials regularly
Monitor resource usage of sidecar containers
Set resource limits to prevent resource monopolization
Implement health checks for the provision process
Use GitOps workflows for configuration management
Plan for multiple environments (dev, staging, prod)
Implement configuration templates for reusability
Use Helm chart dependencies for complex deployments
Monitor deployment metrics and success rates
Migrating from Lenses Wizard Mode to Provision Mode enables Infrastructure as Code practices, better security management, and automated deployments. While the initial setup requires more configuration, the long-term benefits of automated, version-controlled, and repeatable deployments make this migration worthwhile for production environments.
The provision sidecar pattern ensures that your Lenses configuration is managed alongside your infrastructure code, enabling true GitOps workflows and reducing configuration drift between environments.

This page describes the IAM Reference options.
service: administration
Resource Syntax
administration:connection:${Environment}/${ConnectionType}/${Connection}
administration:lenses-logs:${Environment}
administration:lenses-configuration:${Environment}
administration:setting:${Setting}
CreateConnection | connection
ListConnections | connection
GetConnectionDetails | connection
UpdateConnection | connection
DeleteConnection | connection
GetLensesLogs | lenses-logs
GetLensesConfiguration | lenses-configuration
ListAgents | agent
GetAgentDetails | agent
UpdateAgent | agent
DeleteAgent | agent
GetSetting | setting
UpdateSetting | setting
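For illustration, a role granting read-only access to all connections could be written like the following sketch (the role name is hypothetical; the policy format follows the kafka-connect example later on this page):
name: connection-viewer
policy:
  - action:
      - administration:ListConnections
      - administration:GetConnectionDetails
    resource: administration:connection:*/*/*
    effect: allow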
service: applications
Resource Syntax
RegisterApplication | external-application
UnregisterApplication | external-application
ListApplications | external-application
GetApplicationDetails | external-application
ListApplicationDependants | external-application
service: alerts
Resource Syntax
alerts:alert:${Environment}/${AlertType}/${Alert}
alerts:rule:${Environment}/Infrastructure/KafkaBrokerDown
alerts:rule:${Environment}/DataProduced/red-app-going-slow
CreateAlertRule | rule
DeleteAlertRule | rule
UpdateAlertRule | rule
ListAlertRules | rule
GetAlertRuleDetails | rule
ToggleAlertRule | rule
ListAlertEvents | alert-event
DeleteAlertEvents | alert-event
CreateChannel | alert-channel
ListChannels | alert-channel
GetChannelDetails | alert-channel
UpdateChannel | alert-channel
DeleteChannel | alert-channel
service: k2k
Resource Syntax
k2k:app:${Name}
CreateApp | app
DeleteApp | app
GetApp | app
ListApps | app
ManageOffsets | app
UpdateApp | app
UpsertApp | app
service: audit
Resource Syntax
audit:log:${Environment}
audit:channel:${Environment}/${AuditChannelType}/${AuditChannel}
ListLogEvents | log
GetLogEventDetails | log
CreateChannel | channel
ListChannels | channel
GetChannelDetails | channel
UpdateChannel | channel
DeleteChannel | channel
ToggleChannel | channel
service: data-policies
Resource Syntax
data-policies:policy:${Environment}/${Policy}
CreatePolicy | policy
ListPolicies | policy
GetPolicyDetails | policy
UpdatePolicy | policy
DeletePolicy | policy
ListPolicyDependants | policy
service: environments
Resource Syntax
environments:environment:${Environment}
CreateEnvironment | environment
DeleteEnvironment | environment
ListEnvironments | environment
UpdateEnvironment | environment
AccessEnvironment | environment
GetEnvironmentDetails | environment (allows users to get an overview of more information about the environment, such as metrics, versions and more)
service: environments
Resource Syntax
environments:kafka-connection:${Environment}/${Connection}
Action | Resource
GetKafkaConnectionDetails | environments
ListKafkaConnections | environments
UpsertKafkaConnection | environments
DeleteKafkaConnection | environments
UpsertKafkaConnection creates or updates a Kafka connection.
service: governance
Resource Syntax
governance:request:${Environment}/${ActionType}/*
governance:rule:${Environment}/${RuleCategory}/*
Action | Resource
CreateRequest | request
ListRequests | request
GetRequestDetails | request
ApproveRequest | request
DenyRequest | request
GetRuleDetails | rule
UpdateRule | rule
service: iam
Resource Syntax
iam:role:${Role}
iam:group:${Group}
iam:user:${Username}
iam:service-account:${ServiceAccount}
Action | Resource
CreateRole | role
DeleteRole | role
UpdateRole | role
ListRoles | role
ListRoleDependants | role
GetRoleDetails | role
CreateGroup | group
DeleteGroup | group
UpdateGroup | group
ListGroups | group
ListGroupDependants | group
GetGroupDetails | group
CreateUser | user
DeleteUser | user
UpdateUser | user
ListUsers | user
ListUserDependants | user
GetUserDetails | user
CreateServiceAccount | service-account
DeleteServiceAccount | service-account
UpdateServiceAccount | service-account
ListServiceAccounts | service-account
ListServiceAccountDependants | service-account
GetServiceAccountDetails | service-account
service: kafka-connect
Resource Syntax
kafka-connect:connector:${Environment}/${KafkaConnectCluster}/${Connector}
kafka-connect:cluster:${Environment}/${KafkaConnectCluster}
name: global-connector-operator
policy:
  - action:
      - iam:List*
      - iam:Get*
    resource: iam:*
    effect: allow
  - action:
      - environments:Get*
      - environments:List*
      - environments:AccessEnvironment
    resource: environments:*
    effect: allow
  - action:
      - kafka-connect:List*
      - kafka-connect:GetClusterDetails
      - kafka-connect:GetConnectorDetails
      - kafka-connect:StartConnector
      - kafka-connect:StopConnector
    resource:
      - kafka-connect:cluster:*/*
      - kafka-connect:connector:*/*/*
    effect: allow
Action | Resource
CreateConnector | connector
ListConnectors | connector
GetConnectorConfiguration | connector
UpdateConnectorConfiguration | connector
DeleteConnector | connector
StartConnector | connector
StopConnector | connector
ListConnectorDependants | connector
ListClusters | cluster
GetClusterDetails | cluster
DeployConnectors | cluster
service: kafka
Resource Syntax
kafka:topic:${Environment}/${KafkaCluster}/${Topic}
kafka:acl:${Environment}/${KafkaCluster}/${AclResourceType}/*
kafka:acl:${Environment}/${KafkaCluster}/${AclResourceType}/${PrincipalType}/${Principal}
kafka:quota:${Environment}/${KafkaCluster}/${QuotaType}/*
kafka:quota:${Environment}/${KafkaCluster}/clients
kafka:quota:${Environment}/${KafkaCluster}/users-default
kafka:quota:${Environment}/${KafkaCluster}/client/${ClientID}
kafka:quota:${Environment}/${KafkaCluster}/user/${Username}
kafka:quota:${Environment}/${KafkaCluster}/user/${Username}/client/${ClientID}
kafka:quota:${Environment}/${KafkaCluster}/user-client/${Username}/${ClientID}
kafka:quota:${Environment}/${KafkaCluster}/user/${Username}/client/*
kafka:quota:${Environment}/${KafkaCluster}/user-all-clients/${Username}
name: example
policy:
  - action:
      - kafka:ListTopics
      - kafka:GetTopicDetails
    resource:
      - kafka:topic:my_env/kafka/my_topic
    effect: allow
Action | Resource
CreateTopic | topic
DeleteTopic | topic
ListTopics | topic
GetTopicDetails | topic
UpdateTopicDetails | topic
ReadTopicData | topic
WriteTopicData | topic
DeleteTopicData | topic
ListTopicDependants | topic
ListTopicDependants gives list visibility of all entities that depend on the topic; e.g. you will be able to see (i.e. List) all consumer groups that read from that topic, regardless of your specific consumer-group permissions.
CreateAcl | acl
GetAclDetails | acl
UpdateAcl | acl
DeleteAcl | acl
CreateQuota | quota
ListQuotas | quota
GetQuotaDetails | quota
UpdateQuota | quota
DeleteQuota | quota
DeleteConsumerGroup | consumer-group
UpdateConsumerGroup | consumer-group
ListConsumerGroups | consumer-group
GetConsumerGroupDetails | consumer-group
ListConsumerGroupDependants | consumer-group
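Following the same conventions, a sketch for quota administration scoped to a single cluster (reusing the illustrative my_env/kafka path from the example above) could be:
name: quota-admin
policy:
  - action:
      - kafka:CreateQuota
      - kafka:ListQuotas
      - kafka:GetQuotaDetails
      - kafka:UpdateQuota
      - kafka:DeleteQuota
    resource:
      - kafka:quota:my_env/kafka/*
    effect: allow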
service: kubernetes
Resource Syntax
kubernetes:cluster:${Environment}/${KubernetesCluster}
kubernetes:namespace:${Environment}/${KubernetesCluster}/${KubernetesNamespace}
Action | Resource
ListClusters | cluster
GetClusterDetails | cluster
ListNamespaces | namespace
DeployApps | namespace
service: registry
Resource Syntax
schemas:registry:${Environment}/${SchemaRegistry}
Action | Resource
GetRegistryConfiguration | registry
UpdateRegistryConfiguration | registry
service: schemas
Resource Syntax
schemas:schema:${Environment}/${SchemaRegistry}/${Schema}
Action | Resource
CreateSchema | schema
DeleteSchema | schema
UpdateSchema | schema
GetSchemaDetails | schema
ListSchemas | schema
ListSchemaDependants | schema
service: sql-streaming
Resource Syntax
sql-streaming:sql-processor:${Environment}/${KubernetesCluster}/${KubernetesNamespace}/${SqlProcessor}
For IN_PROC processors: sql-streaming:sql-processor:${Environment}/lenses-in-process/default/${SqlProcessor}
Action | Resource
CreateProcessor | sql-processor
ListProcessors | sql-processor
GetProcessorDetails | sql-processor
GetProcessorSql | sql-processor
UpdateProcessorSql | sql-processor
DeleteProcessor | sql-processor
StartProcessor | sql-processor
StopProcessor | sql-processor
ScaleProcessor | sql-processor
GetProcessorLogs | sql-processor
ListProcessorDependants | sql-processor
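As a sketch, an operator role that can observe, start and stop processors across all clusters and namespaces of one environment (the policy name and environment are illustrative) might be:
name: sql-processor-operator
policy:
  - action:
      - sql-streaming:ListProcessors
      - sql-streaming:GetProcessorDetails
      - sql-streaming:GetProcessorLogs
      - sql-streaming:StartProcessor
      - sql-streaming:StopProcessor
    resource:
      - sql-streaming:sql-processor:prod/*/*/*
    effect: allow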
This page lists the available configurations in Lenses Agent.
Set in lenses.conf
Reference documentation of all configuration and authentication options:
Key | Description | Default | Type | Required
lenses.eula.accept | Accept the Lenses EULA | false | boolean | yes
lenses.ip | Bind HTTP at the given endpoint; use in conjunction with lenses.port | 0.0.0.0 | string | no
lenses.port | The HTTP port to listen on for API, UI and WS calls | 9991 | int | no
lenses.jmx.port | Bind a JMX port to enable monitoring Lenses | | int | no
lenses.root.path | The path from which all the Lenses URLs are served | | string | no
lenses.secret.file | The full path to security.conf for security credentials | security.conf | string | no
lenses.sql.execution.mode | Streaming SQL mode: IN_PROC (test mode) or KUBERNETES (production mode) | IN_PROC | string | no
lenses.offset.workers | Number of workers to monitor topic offsets | 5 | int | no
lenses.kafka.control.topics | An array of topics to be treated as "system topics" (see the built-in list below) | | array | no
lenses.grafana | Your Grafana URL, i.e. http://grafanahost:port | | string | no
lenses.api.response.cache.enable | If enabled, disables client caching of the Lenses API HTTP responses by adding these HTTP headers: Cache-Control: no-cache, no-store, must-revalidate; Pragma: no-cache; and Expires: -1 | false | boolean | no
lenses.workspace | Directory to write temp files to; if write access is denied, Lenses falls back to /tmp | /run | string | no
lenses.connections.webhook.whitelist | A whitelist of allowed IP ranges and hostnames for webhook connections; only addresses matching the whitelist will be permitted | | array | no
The whitelist value should be a list of strings, where each string can be:
An IPv4 address (e.g. "192.168.1.10")
An IPv4 CIDR range (e.g. "192.168.1.0/24")
An IPv6 address (e.g. "2001:db8::1")
An IPv6 CIDR range (e.g. "2001:db8::/32")
A hostname pattern (e.g. "*.trusted.com", "localhost", "api.example.com")
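As a sketch drawing on the example values above, a lenses.conf entry could look like:
lenses.connections.webhook.whitelist = ["192.168.1.0/24", "2001:db8::/32", "*.trusted.com", "localhost"]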
System or control topics are created by services for their internal use. Below is the list of built-in configurations to identify them.
_schemas
__consumer_offsets
_kafka_lenses_
lsql_*
lsql-*
__transaction_state
__topology
__topology__metrics
_confluent*
*-KSTREAM-*
*-TableSource-*
*-changelog
__amazon_msk*
The wildcard (*) matches any name in the path, capturing a list of topics rather than just one. When no wildcard is specified, Lenses matches the exact entry name provided.
Key | Description | Default
lenses.access.control.allow.methods | HTTP verbs allowed in cross-origin HTTP requests | GET,POST,PUT,DELETE,OPTIONS
lenses.access.control.allow.origin | Allowed hosts for cross-origin HTTP requests | *
lenses.allow.weak.ssl | Allow https:// with self-signed certificates | false
lenses.ssl.keystore.location | The full path to the keystore file used to enable TLS on the Lenses port |
lenses.ssl.keystore.password | Password for the keystore file |
lenses.ssl.key.password | Password for the SSL certificate used |
lenses.ssl.enabled.protocols | Version of the TLS protocol to use | TLSv1.2
lenses.ssl.algorithm | X509 or PKIX algorithm to use for TLS termination | SunX509
lenses.ssl.cipher.suites | Comma-separated list of ciphers allowed for TLS negotiation |
lenses.security.kerberos.service.principal | The Kerberos principal for Lenses to use in the SPNEGO form: HTTP/[email protected] |
lenses.security.kerberos.keytab | Path to the Kerberos keytab with the service principal; it should not be password protected |
lenses.security.kerberos.debug | Enable Java's JAAS debugging information | false
Key | Description | Default | Type | Required
lenses.storage.hikaricp.[*] | Pass additional properties to the HikariCP connection pool | | | no
lenses.storage.postgres.host | Host of the PostgreSQL server for Lenses to use for persistence | | string | no
lenses.storage.postgres.port | Port of the PostgreSQL server for Lenses to use for persistence | 5432 | integer | no
lenses.storage.postgres.username | Username for the PostgreSQL database user | | string | no
lenses.storage.postgres.password | Password for the PostgreSQL database user | | string | no
lenses.storage.postgres.database | PostgreSQL database name for Lenses to use for persistence | | string | no
lenses.storage.postgres.schema | PostgreSQL schema name for Lenses to use for persistence | "public" | string | no
lenses.storage.postgres.properties.[*] | Pass additional properties to the PostgreSQL JDBC driver | | | no
Set in security.conf
Key | Description | Type | Required
lenses.storage.mssql.host | The hostname or IP address of the Microsoft SQL Server instance | string | yes
lenses.storage.mssql.port | The TCP port Lenses uses to connect to the Microsoft SQL Server database | int | yes
lenses.storage.mssql.schema | The database schema Lenses uses within Microsoft SQL Server | string | yes
lenses.storage.mssql.database | The Microsoft SQL Server database Lenses connects to | string | yes
lenses.storage.mssql.username | The username Lenses uses to authenticate with the Microsoft SQL Server database | string | yes
lenses.storage.mssql.password | The password Lenses uses to authenticate with the Microsoft SQL Server database | string | yes
lenses.storage.mssql.properties | Additional properties for the Microsoft SQL Server JDBC driver | | no
If the records' schema is centralized, connectivity to the Schema Registry nodes is defined by a Lenses Connection.
There are two static config entries to enable/disable the deletion of schemas:
Key | Description | Type | Default
lenses.schema.registry.delete | Allow schemas to be deleted | boolean | false
lenses.schema.registry.cascade.delete | Delete associated schemas when a topic is deleted | boolean | false
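For example, to enable both behaviours (the opposite of the defaults), set in lenses.conf:
lenses.schema.registry.delete = true
lenses.schema.registry.cascade.delete = true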
Options for specific deployment targets:
Global options
Kubernetes
Common settings, independent of the underlying deployment target:
Key | Description | Default
lenses.deployments.events.buffer.size | Buffer size for events coming from deployment targets such as Kubernetes | 10000
lenses.deployments.errors.buffer.size | Buffer size for errors in the communication between Lenses and the deployment targets such as Kubernetes | 1000
Kubernetes connectivity is optional. The minimum supported Kubernetes version is 0.11.10. All settings are strings.
Key | Description | Default
lenses.kubernetes.processor.image.name | The URL of the streaming SQL processor Docker image for Kubernetes | lensesioextra/sql-processor
lenses.kubernetes.processor.image.tag | The version/tag of the above container | 5.2
lenses.kubernetes.config.file | The path of the kubectl config file | /home/lenses/.kube/config
lenses.kubernetes.pull.policy | Pull policy for Kubernetes containers: IfNotPresent or Always | IfNotPresent
lenses.kubernetes.service.account | The service account for deployments; also used to pull the image | default
lenses.kubernetes.init.container.image.name | The Docker repository URL and name of the init container image used to deploy applications to Kubernetes | lensesio/lenses-cli
lenses.kubernetes.init.container.image.tag | The tag of the init container image used to deploy applications to Kubernetes | 5.2.0
lenses.kubernetes.watch.reconnect.limit | How many times to reconnect to the Kubernetes watcher before considering the cluster unavailable | 10
lenses.kubernetes.watch.reconnect.interval | How long to wait between Kubernetes watcher reconnection attempts, in milliseconds | 5000
lenses.kubernetes.websocket.timeout | How long to wait for a Kubernetes websocket response, in milliseconds | 15000
lenses.kubernetes.websocket.ping.interval | How often to ping the Kubernetes websocket to check it's alive, in milliseconds | 30000
lenses.kubernetes.pod.heap | The max amount of memory the underlying Java process will use | 900M
lenses.kubernetes.pod.min.heap | The initial amount of memory the underlying Java process will allocate | 128M
lenses.kubernetes.pod.mem.request | How much memory the Pod container will request | 128M
lenses.kubernetes.pod.mem.limit | The Pod container memory limit | 1152M
lenses.kubernetes.pod.cpu.request | How much CPU the Pod container will request | null
lenses.kubernetes.pod.cpu.limit | The Pod container CPU limit | null
lenses.kubernetes.namespaces | Object setting the list of Kubernetes namespaces that Lenses will see for each specified and configured cluster | null
lenses.kubernetes.pod.liveness.initial.delay | How long Kubernetes waits before checking the processor's health for the first time; it can be expressed like 30 second, 2 minute or 3 hour (mind that the time unit is singular) | 60 second
lenses.kubernetes.config.reload.interval | Time interval to reload the Kubernetes configuration file, in milliseconds | 30000
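A minimal lenses.conf sketch that pins the processor image and sizes the pods, using only the keys above (the values are illustrative, not recommendations):
lenses.kubernetes.processor.image.name = "lensesioextra/sql-processor"
lenses.kubernetes.processor.image.tag = "5.2"
lenses.kubernetes.pod.mem.request = "256M"
lenses.kubernetes.pod.mem.limit = "1152M"
lenses.kubernetes.pod.liveness.initial.delay = "2 minute"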
Optimization settings for SQL queries.
Key | Description | Type | Default
lenses.sql.settings.max.size | Restricts the max bytes that a Kafka SQL query will return | long | 20971520 (20MB)
lenses.sql.settings.max.query.time | Max time (in msec) that a SQL query will run | int | 3600000 (1h)
lenses.sql.settings.max.idle.time | Max time (in msec) for a query once it reaches the end of the topic | int | 5000 (5 sec)
lenses.sql.settings.show.bad.records | Show bad records by default when querying a Kafka topic | boolean | true
lenses.sql.settings.format.timestamp | Convert AVRO dates to a human-readable format by default | boolean | true
lenses.sql.settings.live.aggs | Allow aggregation queries on Kafka data by default | boolean | true
lenses.sql.sample.default | Number of messages to sample when live-tailing a Kafka topic | int | 2/window
lenses.sql.sample.window | How frequently to sample messages when tailing a Kafka topic | int | 200 msec
lenses.sql.websocket.buffer | Buffer size for messages in a SQL query | int | 10000
lenses.metrics.workers | Number of workers for parallelising SQL queries | int | 16
lenses.kafka.ws.buffer.size | Buffer size for the WebSocket consumer | int | 10000
lenses.kafka.ws.max.poll.records | Max number of Kafka messages to return in a single poll() | long | 1000
lenses.sql.state.dir | Folder to store KStreams state | string | logs/sql-kstream-state
lenses.sql.udf.packages | The list of allowed Java packages for UDFs/UDAFs | array of strings | ["io.lenses.sql.udf"]
Lenses requires the following Kafka topics to be available; otherwise it will try to create them. The topics can be created manually before Lenses runs, or Lenses can be granted the Kafka ACLs needed to create them:
Key | Description | Partitions | Replication | Default name | Compacted | Retention
lenses.topics.external.topology | Topic for applications to publish their topology | 1 | 3 (recommended) | __topology | yes | N/A
lenses.topics.external.metrics | Topic for external applications to publish their metrics | 1 | 3 (recommended) | __topology__metrics | no | 1 day
lenses.topics.metrics | Topic for SQL Processors to send their metrics | 1 | 3 (recommended) | _kafka_lenses_metrics | no |
To allow for fine-grained control over the replication factor of the three topics, the following settings are available:
Key | Description | Default
lenses.topics.replication.external.topology | Replication factor for the lenses.topics.external.topology topic | 1
lenses.topics.replication.external.metrics | Replication factor for the lenses.topics.external.metrics topic | 1
lenses.topics.replication.metrics | Replication factor for the lenses.topics.metrics topic | 1
When configuring the replication factor for your deployment, it's essential to consider the requirements imposed by your cloud provider. Many cloud providers enforce a minimum replication factor to ensure data durability and high availability. For example, IBM Cloud mandates a minimum replication factor of 3. Therefore, it's crucial to set the replication factor for the Lenses internal topics to at least 3 when deploying Lenses on IBM Cloud.
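On such providers you would raise all three overrides accordingly, for example:
lenses.topics.replication.external.topology = 3
lenses.topics.replication.external.metrics = 3
lenses.topics.replication.metrics = 3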
All time configuration options are in milliseconds.
Key | Description | Type | Default
lenses.interval.summary | How often to refresh the Kafka topic list and configs | long | 10000
lenses.interval.consumers.refresh.ms | How often to refresh Kafka consumer group info | long | 10000
lenses.interval.consumers.timeout.ms | How long to wait for Kafka consumer group info to be retrieved | long | 300000
lenses.interval.partitions.messages | How often to refresh Kafka partition info | long | 10000
lenses.interval.type.detection | How often to check Kafka topic payload info | long | 30000
lenses.interval.user.session.ms | How long a client session stays alive if inactive (4 hours) | long | 14400000
lenses.interval.user.session.refresh | How often to check for idle client sessions | long | 60000
lenses.interval.topology.topics.metrics | How often to refresh topology info | long | 30000
lenses.interval.schema.registry.healthcheck | How often to check the schema registries' health | long | 30000
lenses.interval.schema.registry.refresh.ms | How often to refresh schema registry data | long | 30000
lenses.interval.metrics.refresh.zk | How often to refresh ZooKeeper metrics | long | 5000
lenses.interval.metrics.refresh.sr | How often to refresh Schema Registry metrics | long | 5000
lenses.interval.metrics.refresh.broker | How often to refresh Kafka broker metrics | long | 5000
lenses.interval.metrics.refresh.connect | How often to refresh Kafka Connect metrics | long | 30000
lenses.interval.metrics.refresh.brokers.in.zk | How often to refresh the Kafka broker list from ZooKeeper | long | 5000
lenses.interval.topology.timeout.ms | Time period after which a metric is considered stale | long | 120000
lenses.interval.audit.data.cleanup | How often to clean up dataset view entries from the audit log | long | 300000
lenses.audit.to.log.file | Path to a file to write audits to in JSON format | string |
lenses.interval.jmxcache.refresh.ms | How often to refresh the JMX cache used in the Explore page | long | 180000
lenses.interval.jmxcache.graceperiod.ms | How long to pause when a JMX connectivity error occurs | long | 300000
lenses.interval.jmxcache.timeout.ms | How long to wait for a JMX response | long | 500
lenses.interval.sql.udf | How often to look for new UDFs/UDAFs (user-defined [aggregate] functions) | long | 10000
lenses.kafka.consumers.batch.size | How many consumer groups to retrieve in a single request | int | 500
lenses.kafka.ws.heartbeat.ms | How often to send heartbeat messages over the TCP connection | long | 30000
lenses.kafka.ws.poll.ms | Max time for Kafka consumer data polling on WS APIs | long | 10000
lenses.kubernetes.config.reload.interval | Time interval to reload the Kubernetes configuration file | long | 30000
lenses.kubernetes.watch.reconnect.limit | How many times to reconnect to the Kubernetes watcher before considering the cluster unavailable | long | 10
lenses.kubernetes.watch.reconnect.interval | How long to wait between Kubernetes watcher reconnection attempts | long | 5000
lenses.kubernetes.websocket.timeout | How long to wait for a Kubernetes websocket response | long | 15000
lenses.kubernetes.websocket.ping.interval | How often to ping the Kubernetes websocket to check it's alive | long | 30000
lenses.akka.request.timeout.ms | Max time for a response in an Akka actor | long | 10000
lenses.sql.monitor.frequency | How often to emit healthcheck and performance metrics for Streaming SQL | long | 10000
lenses.audit.data.access | Record dataset access as audit log entries | boolean | true
lenses.audit.data.max.records | How many dataset view entries to retain in the audit log; set to -1 to retain indefinitely | int | 500000
lenses.explore.lucene.max.clause.count | Override Lucene's maximum number of clauses permitted per BooleanQuery | int | 1024
lenses.explore.queue.size | Optional setting to bound the internal queue used by the catalog subsystem; it must be a positive integer or it will be ignored | int | N/A
lenses.interval.kafka.connect.http.timeout.ms | How long to wait for a Kafka Connect response to be retrieved | int | 10000
lenses.interval.kafka.connect.healthcheck | How often to check the Kafka Connect health | int | 15000
lenses.interval.schema.registry.http.timeout.ms | How long to wait for a Schema Registry response to be retrieved | int | 10000
lenses.interval.zookeeper.healthcheck | How often to check the ZooKeeper health | int | 15000
lenses.ui.topics.row.limit | The number of Kafka records to load automatically when exploring a topic | int | 200
lenses.deployments.connect.failure.alert.check.interval | Time interval in seconds to check whether the connector failure grace period has completed; used by the Connect auto-restart of failed connectors; it needs to be a value in (1,600] | int | 10
lenses.provisioning.path | Folder on the filesystem containing the provisioning data; see the provisioning docs for further details | string |
lenses.provisioning.interval | Time interval in seconds to check for changes on the provisioning resources | int |
lenses.schema.registry.client.http.retryOnTooManyRequest | When enabled, Lenses will retry a request whenever the schema registry returns a 429 Too Many Requests | boolean | false
lenses.schema.registry.client.http.maxRetryAwait | Max amount of time to wait whenever a 429 Too Many Requests is returned | duration | 2 seconds
lenses.schema.registry.client.http.maxRetryCount | Max retry count whenever a 429 Too Many Requests is returned | integer | 2
lenses.schema.registry.client.http.rate.type | Whether HTTP requests to the configured schema registry should be rate limited; either "session" or "unlimited" | string | unlimited
lenses.schema.registry.client.http.rate.maxRequests | When the rate limiter is "session", the max number of requests allowed per window | integer | N/A
lenses.schema.registry.client.http.rate.window | When the rate limiter is "session", the duration of the window used | duration | N/A
lenses.schema.connect.client.http.retryOnTooManyRequest | Retry a request whenever a Connect cluster returns a 429 Too Many Requests | boolean | false
lenses.schema.connect.client.http.maxRetryAwait | Max amount of time to wait whenever a 429 Too Many Requests is returned | duration | 2 seconds
lenses.schema.connect.client.http.maxRetryCount | Max retry count whenever a 429 Too Many Requests is returned | integer | 2
lenses.connect.client.http.rate.type | Whether HTTP requests to the configured Connect cluster should be rate limited; either "session" or "unlimited" | string | unlimited
lenses.connect.client.http.rate.maxRequests | When the rate limiter is "session", the max number of requests allowed per window | integer | N/A
lenses.connect.client.http.rate.window | When the rate limiter is "session", the duration of the window used | duration | N/A
Control how Lenses identifies your connectors in the Topology view. Catalogue your connector types, set their icons, and control how Lenses extracts the topics used by your connectors.
Lenses comes preconfigured for some of the popular connectors as well as the Stream Reactor connectors. If you see that Lenses doesn’t automatically identify your connector type then use the lenses.connectors.info setting to register it with Lenses.
Add a new HOCON object {} for every new connector in your lenses.connectors.info list:
lenses.connectors.info = [
  {
    class.name = "The connector full classpath"
    name = "The name which will be presented in the UI"
    instance = "Details about the instance. Contains the connector configuration field which holds the information. If a database is involved it would be the DB connection details, if it is a file it would be the file path, etc."
    sink = true
    extractor.class = "The full classpath for the implementation knowing how to extract the Kafka topics involved. This is only required for a source"
    icon = "file.png"
    description = "A description for the connector"
    author = "The connector author"
  }
]
This configuration allows the connector to work with the topology graph and also have the RBAC rules applied to it.
To extract the topic information from the connector configuration, source connectors require extra configuration. The extractor class should be io.lenses.config.kafka.connect.SimpleTopicsExtractor. Using this extractor also requires an extra property configuration, which specifies the field in the connector configuration that determines the topics data is sent to.
Here is an example for the file source:
lenses.connectors.info = [
  {
    class.name = "org.apache.kafka.connect.file.FileStreamSource"
    name = "File"
    instance = "file"
    sink = false
    property = "topic"
    extractor.class = "io.lenses.config.kafka.connect.SimpleTopicsExtractor"
  }
]
An example of a Splunk sink connector and a Debezium SQL Server connector:
lenses.connectors.info = [
  {
    class.name = "com.splunk.kafka.connect.SplunkSinkConnector"
    name = "Splunk Sink"
    instance = "splunk.hec.uri"
    sink = true
    extractor.class = "io.lenses.config.kafka.connect.SimpleTopicsExtractor"
    icon = "splunk.png"
    description = "Stores Kafka data in Splunk"
    docs = "https://github.com/splunk/kafka-connect-splunk"
    author = "Splunk"
  },
  {
    class.name = "io.debezium.connector.sqlserver.SqlServerConnector"
    name = "CDC SQL Server"
    instance = "database.hostname"
    sink = false
    property = "database.history.kafka.topic"
    extractor.class = "io.lenses.config.kafka.connect.SimpleTopicsExtractor"
    icon = "debezium.png"
    description = "CDC data from RDBMS into Kafka"
    docs = "//debezium.io/docs/connectors/mysql/"
    author = "Debezium"
  }
]
Key | Description | Default | Type | Required
apps.external.http.state.refresh.ms | When registering a runner for an external app, a health-check interval can be specified; if it is not, this default interval is used (value in milliseconds) | 30000 | int | no
apps.external.http.state.cache.expiration.ms | The last known state of the runner is stored in a cache whose entries are invalidated after the time defined by this key (value in milliseconds); this value should not be lower than apps.external.http.state.refresh.ms | 60000 | int | no
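A lenses.conf sketch honouring the constraint that the cache expiration must not be lower than the refresh interval (illustrative values):
apps.external.http.state.refresh.ms = 15000
apps.external.http.state.cache.expiration.ms = 30000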
false"2 seconds"unlimitedfalse2 secondsunlimited