This page provides examples for defining a connection to Kerberos.
This page provides an overview of Lenses.
Lenses is the leading developer experience and UI for exploring and moving real-time data, across any Kafka, on any cloud or on-premises. We are on a mission to create an operating fabric to increase developer productivity on real-time data.
Lenses is an application that connects to your Kafka environments, allowing you to manage, discover, explore and catalogue your data via SQL. You can also deploy and monitor stream processing applications (SQL Processors) and Kafka Connectors, all wrapped in an enterprise-grade RBAC layer.
All Lenses needs is connectivity to your services; think of it as a Kafka client.
The diagram gives a high-level overview of the logical components. At the core of Lenses, we have:
A Kafka UI for day-to-day work with Kafka
SQL Engine to query data and create streaming apps leveraging Kafka Streams
App Engine to manage seamless deployments of SQL apps (deployed to Kubernetes)
Metadata Engine to create a real-time Data Catalog for cross-system datasets and apps
Lenses is a JVM application that exposes secure RESTful APIs and websockets, in addition to providing a Kafka UI. A CLI is available to help automate operations.
The changelog of the current release and patch versions, as well as upgrade notes.
For versions 4.0 to 5.4, see our earlier release notes.
London, UK - December 10th, 2024 Lenses 5.5.14 is now generally available.
Customers on the 5.5 series are urged to upgrade to this release or later.
New Features
Add support for MSSQL as a backing store for Lenses.
Improvements
Improved LDAP connection management to avoid connection resets.
Added extra debug logging for when the Schema Registry sends an invalid Content-Type header.
London, UK - November 25th, 2024 Lenses 5.5.13 is now generally available.
Customers on the 5.5 series are urged to upgrade to this release or later.
Fixes
A security issue has been addressed.
London, UK - November 22nd, 2024 Lenses 5.5.12 is now generally available.
Improvements
The webhook for audits now offers the {{CONTENT}} variable to insert all the details of the audit log entry.
Improve Kubernetes watchers and handling of SQL Processor Initialization events to avoid blocking operations.
London, UK - October 31st, 2024 Lenses 5.5.11 is now generally available.
Improvements
The login audit now tracks both source IdP groups and applied groups.
London, UK - October 17th, 2024 Lenses 5.5.10 is now generally available.
Improvements
Login audit now tracks source IdP groups.
The Group Details API now includes user and service accounts within each group.
London, UK - October 4th, 2024 Lenses 5.5.9 is now generally available.
Improvements
Optimise Kubernetes event handling
Add extra logging for queue processing and event handling
London, UK - September 27th, 2024 Lenses 5.5.8 is now generally available.
Improvements
Optimise topic auto-detection audit logging to avoid duplicate entries
Optimise logging (adjust UDAF for intellisense polluting the logs, better actor mailbox logging)
Improvements to the connector verification logic when Lenses has to mock topics or topics.regex
London, UK - August 28th, 2024 Lenses 5.5.7 is now generally available
Improvements
London, UK - August 6th, 2024 Lenses 5.5.6 is now generally available.
Improvements
The S3 backup/restore functionality now supports the latest version of the Stream Reactor S3 connector plugin.
New users coming from LDAP will not be created unless they have groups coming from LDAP matching Lenses groups. Users can still be created manually by an administrator.
London, UK - July 26th, 2024 Lenses 5.5.5 is now generally available.
Improvements
Improve performance of the data catalogue. Lenses should now be many times faster to detect topics and their serialization, and use less memory and CPU time. For teams with Kafka clusters that have thousands of schemas, the startup time will also improve. For teams with tens of thousands of schemas, consumers, and partitions, software stability will also improve.
Bring back the restart task button for paused connectors. This undocumented behaviour of Kafka Connect allows users to stop a connector’s consumer group, so they can reset offsets. For Kafka Connect 3.5 or later the new STOP connector API and corresponding button in Lenses can have the same effect.
Compress schemas before sending them to the Schema Registry. This allows sending larger schemas to the Schema Registry, as the limit is on the size of the request rather than the schema itself.
Improvements to the Skip Validation option for inserting JSON messages, to allow for less strict (but still valid) schemas for inserted messages.
If you have enabled the setting to keep Lucene's index on disk (option lenses.explore.index.dir), you should disable it and delete the files from disk. You can keep it enabled if you prefer, but you still need to delete the files on disk. Please note that on-disk performance is slower than in-memory. The amount of memory we use is fixed per entry, so the default in-memory configuration is advised.
London, UK - July 17th, 2024 Lenses 5.5.4 is now generally available.
New Features
Add STOP operation (button) for Connectors. The STOP operation requires Kafka Connect 3.5 or greater.
Allow skipping schema validation when inserting into JSON topics
Improvements
Connector search is now case-insensitive
Allow typing to search groups when creating service accounts
Show masked passwords when editing a connector (regression in 5.5.3)
Fixes
Filtering connectors by type doesn’t work
When there were at least two Connect clusters with at least one connector with a common name in both clusters, filtering connectors returned incorrect or multiple results
Validating connectors with passwords may not work (regression in 5.5.3)
London, UK - July 1st, 2024 Lenses 5.5.3 is now generally available.
New Features
Support for case-insensitive LDAP users
Whilst Lenses users are case-sensitive, LDAP most of the time performs case-insensitive searches on user accounts. This can lead to users who try to log in to Lenses with different casing in their username (e.g., user and USER) getting duplicate accounts.
We added the option lenses.security.ldap.case.sensitive with a default value of true. It can be switched to false, in which case Lenses will treat usernames from LDAP as case-insensitive, always converting them to lowercase.
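For example, to opt into case-insensitive handling, the lenses.conf entry would look like this (a minimal sketch):

```
lenses.security.ldap.case.sensitive=false
```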
Improvements
Upgrade the AWS IAM library to better support service account roles inside EKS
Upgrade libraries with known CVEs (not affecting Lenses either way)
Fixes
Fix Grafana link not showing up on sidebar
Fix a case where some sensitive data might leak in the logs
Fix filtering by connector name causing the connector screen to crash if a connect cluster is offline
London, UK - May 23rd, 2024 Lenses 5.5.2 is now generally available.
Improvements
The connectors’ screen will not mask passwords if they are referencing a secret from a secret provider.
Fixes
Fix regression where connectors’ passwords were not masked.
London, UK - April 23rd, 2024 Lenses 5.5.1 is now generally available.
Improvements
Authentication:
Enhanced authentication to reject with a 401 status code when the user lacks any attached groups in the IdP (Identity Provider).
Improved authentication flow, allowing an authenticated SSO (Single Sign-On) user to log in even if there isn’t a corresponding group in Lenses.
Documentation Enhancement:
New SQL processor page with direct links to the latest documentation and support resources for user convenience.
Fixes
Deployment Issue:
Addressed a bug introduced in Lenses GitOps deployment version 5.5.0, resolving provisioning issues experienced in certain deployment scenarios.
SSO Authentication Fix:
Corrected SSO authentication behavior. When an SSO user is configured to overwrite the IdP groups, Lenses now correctly refrains from extracting groups from the IdP.
London, UK - 11 April 2024 - Lenses 5.5 is now generally available.
Kafka Connectors as Code
Lenses now introduces support for managing Kafka connectors as code. With this feature, you can define your connectors in a YAML file and seamlessly deploy them to Lenses. This capability is accessible via both the Lenses CLI and the Lenses UI. This release marks the commencement of our journey towards a more declarative and automated approach to managing Kafka and Lenses resources.
Consumer Group Management
In this version, Lenses introduces support for deleting consumer group offsets and entire consumer groups, enhancing flexibility and control over consumer group management.
Generic SSO Provider
Lenses provides support for a few SSO providers out of the box, like Google, Okta, etc. In this release, Lenses introduces a generic SSO provider, enabling users to integrate with any SSO provider that supports the SAML 2.0 protocol. This feature is configurable via the lenses.conf file under lenses.security.saml.idp.provider.
Enhancements
Kafka Message Replay
The Kafka message replay feature receives an enhancement, now enabling users to replay messages from a specific offset. This functionality is accessible from both the Lenses topic screen and the Lenses SQL studio screen, providing greater precision in message replay operations.
Consumer Group Offsets Data Link
Users can now seamlessly navigate from the consumer group offsets screen to the data of the topic that the consumer group offset points to, enhancing visibility and ease of data exploration.
Audits to log file
Lenses now provides the capability to log audit events to its log file, enabling users to store audit logs locally for compliance and security purposes. This feature is configurable via the lenses.conf file under lenses.audit.to.log.file.
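A minimal sketch of the lenses.conf entry; the boolean value shown is an assumption based on the description above:

```
lenses.audit.to.log.file=true
```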
Lenses Internal Topics Replication Factor
To ensure compatibility with cloud providers such as IBM, where a minimum replication factor is mandated, Lenses now allows the configuration of the replication factor for its internal topics. This setting can be configured in the lenses.conf file under lenses.internal.topics.replication.***.
Bug Fixes
External Applications via Lenses SDK
The Lenses SDK, a thin client facilitating the monitoring and tracking of external applications connected to Kafka within Lenses topology, has been enhanced in this release. An issue where the application’s status in Lenses was not updated correctly has been resolved.
S3 Backup-Restore for JSON Payloads
In this release, a bug affecting the S3 backup-restore feature for JSON payloads has been rectified. Previously, the feature encountered issues due to the Connect converter enforcing schema on JSON payloads, leading to incorrect functionality. This bug has been addressed to ensure seamless backup and restoration of JSON data via S3.
Lenses 5.5 is an incremental release which brings in new features and improvements.
Upgrading from 5.0 or later does not require any static configuration change, but if you have automated the creation of any AWS connection, then you will have to adjust the provisioning section of your Helm chart, or your CI/CD, or (if you use the API directly) your API calls.
Breaking Changes and Caution Items
Lenses upgrades (except patch releases) are not backwards compatible. It is best practice to take a backup of the Lenses database before an upgrade.
New provisioning API [caution]
With Lenses 5.3 the provisioning API was introduced. This new API can be used to create or update the connections landscape. The old provisioning methods could only create the connection landscape (first run).
What this means, is that now the Helm chart or a CICD process can be used to manage Lenses’ connections.
For teams that are on the old provisioning method some adjustments are required to their Helm charts or other provisioning code to switch to the new API. The old methods are still available but are considered deprecated and will be removed or break in the future.
AWS and Glue Connection provisioning [breaking]
With Lenses 5.4 IAM support was added for the AWS connection type. An AWS connection is used as an authentication provider for the Glue Schema Registry and Cloudwatch channels.
Due to this change, if you create or manage your AWS and Glue connections via the API or provisioning, you need to update your configuration to the new format.
Action required
Add the new authMode property to your connections for AWS and Glue Schema Registry.
Details
Lenses 5.4 adds a new required property for the AWS and Glue Schema Registry connections.
The property is authMode.
It controls how Lenses authenticates with AWS:
Access keys (existing feature).
Credentials provider chain (new feature).
You set the property either with the:
Connections API - create, update.
Provision YAML.
You can set authMode in two modes:
1. Access keys mode
This is the existing mode where Lenses uses AWS access keys.
Set the authMode to Access Key.
Specify the access key ID and secret access key, as you had before.
2. Credentials provider chain mode (new)
This is the new mode where Lenses uses the AWS default credentials provider chain.
Set the authMode to Credentials Chain.
No additional properties needed.
Examples - Provision YAML
1. Access mode
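A sketch of what the Access Key provisioning YAML can look like. The aws section key, the value wrappers, and the accessKeyId/secretAccessKey field names follow the provisioning conventions described later in this document, but may differ in your Lenses version:

```yaml
aws:
  - name: my-aws-connection
    version: 1
    configuration:
      authMode:
        value: Access Key
      accessKeyId:
        value: <YOUR_ACCESS_KEY_ID>
      secretAccessKey:
        value: <YOUR_SECRET_ACCESS_KEY>
```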
2. Credentials provider chain mode
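A sketch of the Credentials Chain equivalent; as noted above, no additional properties are needed beyond authMode:

```yaml
aws:
  - name: my-aws-connection
    version: 1
    configuration:
      authMode:
        value: Credentials Chain
```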
Examples - API JSON
1. Access mode
AWS connection
Glue Schema Registry connection
2. Credentials provider chain mode
AWS connection
Glue Schema Registry connection
Docker image base change
Starting with Lenses 5.2 the base image of Lenses and SQL Processor Dockers switched from Debian to Ubuntu. On some older systems, these docker images will fail to run, due to a combination of a recent glibc in the container, and older docker daemon on the host.
If you fall under this category, during the startup of the Lenses container, you might see errors such as Unable to identify system. Uname is required or [warning][os,thread] Failed to start thread “GC Thread#0”.
For these cases, we now offer Lenses docker images with the suffix -debian in their tags, e.g.:
lensesio/lenses:5.5-debian
lensesio/lenses:5.5.0-debian
lensesio/lenses:latest-debian
If your host is running on an older operating system and you encounter these errors, try the Debian-equivalent tag.
Update Process
Using the Lenses Archive
Make sure you have a JRE (or JDK) installed on the server running Lenses. Lenses can run on JRE 8 or greater; the recommended version is JRE 11.
Using the Lenses Docker
The docker image uses tags to distinguish between versions. The latest tag (lensesio/lenses:latest) brings the latest stable version of Lenses. There are minor tags to help users get the latest patch in a minor version (e.g. 5.5, 5.1) and patch tags to help users pin to a specific patch (e.g. 5.5.1, 5.1.2). The best practice is to use the minor tag (lensesio/lenses:5.5), which ensures that your installation will always get compatible updates until you make a conscious decision to upgrade the minor version.
If you use the internal database instead of PostgreSQL as the backing store of Lenses, make sure you keep the /data/storage volume so you do not lose your data. Other volumes supported by the docker are /data/kafka-streams-state, which holds state for SQL Processors running IN-PROC and may have to be rebuilt (automatically) if lost, /data/log (log files on disk), and /data/plugins (custom UDFs).
Pull the 5.5 docker:
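For example, using the minor tag discussed above:

```bash
docker pull lensesio/lenses:5.5
```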
Stop your current container and restart with the 5.5 image, mounting any volumes you might need.
Lenses Box
If you are a Box user, pull the latest version, preserve your /data volume and restart Lenses:
Helm
Download the latest charts and update your values.yaml as described below. Remember that if you are using the internal database instead of PostgreSQL as the backing store, then the Lenses Storage Directory should be stored in a persistent volume and be kept intact between updates. To support a potential downgrade, make sure this volume is backed up before installing a newer version of Lenses.
If you have provisioning enabled (lenses.provision.enabled: true) in your values.yaml, and you are on provision version "1", then you have to act. Version "1" means either that lenses.provision.version is set to "1", or it is not set at all. You have two options:
Disable it, as Lenses already has all the information stored in the database, and version "1" does not support updating the connections and license.
Switch to provisioning version "2", which supports updating connections and licenses every time you do a helm upgrade. To do that, you must make some changes to your old provisioning section. Some resources that can come in handy for the switch are:
If you don't have your values.yaml, you can download it from the Kubernetes cluster using Helm. Then proceed to upgrade, or alternatively reuse the old values and turn provisioning off. These steps are sketched below.
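A sketch of these commands, assuming the release is named lenses and the chart is lensesio/lenses; adjust names and namespaces to your setup:

```bash
# Download the current user-supplied values from the cluster
helm get values lenses -o yaml > values.yaml

# Upgrade with your (updated) values
helm repo update
helm upgrade lenses lensesio/lenses -f values.yaml

# Alternatively, reuse the old values and turn provisioning off
helm upgrade lenses lensesio/lenses --reuse-values --set lenses.provision.enabled=false
```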
Cloud Installations
Use the latest version available in the marketplaces. Remember that Lenses Storage Directory should be provided as a persistent volume and be kept intact between updates. If a new image does not exist, you may be able to update Lenses in-place. Our support team will be happy to go through the available options with you.
This version improves the fetching of schemas from Schema Registries. The related subsystem has been re-worked to provide better error handling, fewer requests to the Schema Registry, and support for rate-limiting.
If you upgrade your S3 connector plugin, existing S3 connectors will stop working. Check how you can update your connector configuration to work with the latest plugin version.
For versions 4.0 to 5.4, see our earlier release notes.
If you are upgrading from version 4.3 or older, you need to follow the earlier upgrade guides as well as the rest of the instructions that follow.
Download the Lenses archive and extract it into a new directory on your server. It is important to avoid extracting an archive over an older installation, to avoid having multiple versions of libraries. Instead, you should remove (or rename) the old directory, then move the new one into its place. Copy over, if needed, and update your lenses.conf and security.conf files. If you are using the internal database instead of PostgreSQL, make sure the Lenses Storage Directory (lenses.storage.directory) is kept intact. This folder is where persistent data is stored, such as users, groups, audits, data policies, connections, and more.
Quick Start
Launch Lenses local with an all-in-one docker or against your Kafka environment.
Installation
Learn how to install and automate configuration.
Configuration
Learn how to configure Lenses.
IAM
Learn how to set up authentication and authorization of users in Lenses.
SQL for exploration & processing
Learn how to use Lenses SQL to explore and process data.
Kafka Connector Management
Learn how to use Lenses to manage your Kafka Connectors.
Kafka Connectors
Lenses provides a collection of open source Connector plugins, available with Enterprise support. Learn about them here.
Topics
Learn how to find, create and manage Kafka topics in the Data catalogue.
Schemas
Learn how to manage Schemas in your schema registries with Lenses.
Governance
Learn how to use Lenses to self-serve Data Policies, Kafka ACLs & Quotas
Monitoring & Alerting
Learn how to configure Lenses to monitor and alert about your Kafka environments and applications.
This page describes connecting Lenses to Apache Kafka.
Lenses will not start without a valid Kafka Connection. You can either add the connection via the bootstrap wizard or use provisioning for automated deployments.
Add your bootstrap brokers including ports
Optionally, a security protocol, e.g. SASL_PLAINTEXT, SASL_SSL
Optionally, a SASL mechanism, e.g. SCRAM-SHA-256
If your Kafka connection requires TLS, set the following
Truststore: The SSL/TLS trust store to use as the global JVM trust store. Available formats are .jks, .p12, .pfx.
Keystore: The SSL/TLS keystore to use for the TLS listener for Lenses. Available format is .jks.
Lenses allows you to connect to the brokers' JMX. Supported formats are:
Simple with and without SSL
Jolokia (JOLOKIAG and JOLOKIAP)
With and without SSL
With Basic Auth
Custom http requests and suffix
AWS Open Monitoring
Prerequisites to check before using Lenses against your Kafka cluster.
Any version of Apache Kafka (2.0 or newer) on-premise and on-cloud.
Any version of Confluent Schema Registry (5.5.0 or newer), Apicurio (2.0 or newer) and AWS Glue.
Connectivity to JMX is optional (not required) but recommended for additional/enhanced monitoring of the Kafka Brokers and Connect Workers. Secure JMX connections are also supported, as well as JOLOKIA and OpenMetrics (MSK).
To enable JMX for Lenses itself, see here.
Run on any Linux server (review ulimits) or container technology (Docker/Kubernetes). For RHEL 6.x and CentOS 6.x, use Docker.
Linux machines typically have a soft limit of 1024 open file descriptors. Check your current limit with the ulimit command and, as a super-user, increase the soft limit to 4096, as shown below.
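For example:

```bash
# Check the current soft limit on open file descriptors
ulimit -S -n

# As a super-user, raise the soft limit to 4096
ulimit -S -n 4096
```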
Use 6GB RAM/4 CPUs and 500MB disk space.
This is the default Kubernetes configuration: request 1 CPU and 3Gi memory, limit 2 CPU and 5Gi memory.
All recent versions of major browsers are fully supported.
Every action in Lenses is backed by an API or websocket, documented at https://api.lenses.io. A Golang client and a CLI (command-line interface) are available.
For websockets you may need to adjust your load balancer to allow them. See here.
Lenses can use an embedded H2 database or a Postgres database. Postgres is not supplied by Lenses.
By default, Lenses does not provide TLS termination, but it can be enabled via a configuration option. TLS termination is recommended for enhanced security and is a prerequisite for integrating with SSO (Single Sign-On) via SAML 2.0.
TLS termination can be configured directly within Lenses or by using a TLS proxy or load balancer. Refer to the TLS documentation for additional information.
Connect Lenses to your environment.
To connect Lenses to your real environment you can:
Install Lenses (not the Box) and manually configure the connections to Kafka, Zookeepers, Schema Registries and Connect, or
Install Lenses and configure the connections in one go using provisioning.
How to connect to Kafka depends on your Kafka provider.
This page describes how to connect to your Kafka brokers.
See provisioning for automating connections.
Lenses can connect to any Kafka cluster or service exposing the Apache Kafka APIs and supporting the authentication methods offered by Apache Kafka.
Follow the guide for your distribution to obtain the credentials and bootstrap broker to provide to Lenses.
This page describes connecting Lenses to an AWS MSK cluster.
Lenses will not start without a valid Kafka Connection. You can either add the connection via the bootstrap wizard or use provisioning for automated deployments.
It is recommended to install Lenses on an EC2 instance or with EKS in the same VPC as your MSK cluster. Lenses can be installed and preconfigured via the AWS Marketplace.
Edit the AWS MSK security group in the AWS Console and add the IP address of your Lenses installation.
If you want to have Lenses collect JMX metrics you have to enable Open Monitoring on your MSK cluster. Follow the AWS guide here.
Depending on your MSK cluster, select the endpoint and protocol you want to connect with.
It is not recommended to use Plaintext for secure environments. For these environments use TLS or IAM.
In the Lenses bootstrap UI, Select:
Security Protocol and set the protocol you want to use
SASL Mechanism and set the mechanism you want to use.
In the Lenses bootstrap UI, Select:
Security Protocol and set it to SASL_SSL
Sasl Mechanism and set it to AWS_MSK_IAM
Add software.amazon.msk.auth.iam.IAMLoginModule required; to the SASL JAAS Config section
Optionally upload your trust store
Set sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler in the Advanced Kafka Properties section.
This page describes configuring Lenses to connect to Confluent Cloud.
Lenses will not start without a valid Kafka Connection. You can either add the connection via the bootstrap wizard or use provisioning for automated deployments.
For Confluent Platform, see the Confluent Platform page.
From Data integration > API keys, select Create Key.
For this guide, select Global access.
In the Lenses bootstrap UI, Select:
Security Protocol SASL SSL
SASL Mechanism PLAIN
In the JAAS Configuration, update the username and password from the respective Key and Secret fields of the API key created above:
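The resulting JAAS configuration takes this shape, with the Key and Secret shown as placeholders (this is the standard Kafka PLAIN login module):

```
org.apache.kafka.common.security.plain.PlainLoginModule required username="<API_KEY>" password="<API_SECRET>";
```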
This page describes configuring Lenses to connect to Confluent Platform.
Lenses will not start without a valid Kafka Connection. You can either add the connection via the bootstrap wizard or use provisioning for automated deployments.
For Confluent Platform credentials and endpoints, see the Confluent documentation.
This page describes how to connect Lenses to an Amazon MSK Serverless cluster.
Lenses will not start without a valid Kafka Connection. You can either add the connection via the bootstrap wizard or use provisioning for automated deployments.
It is recommended to install Lenses on an EC2 instance or with EKS in the same VPC as your MSK Serverless cluster. Lenses can be installed and preconfigured via the AWS Marketplace.
Enable communications between Lenses & the Amazon MSK Serverless cluster by opening the Amazon MSK Serverless cluster's security group in the AWS Console and add the IP address of your Lenses installation.
To authenticate Lenses and access resources within our MSK Serverless cluster, we'll need to create an IAM policy and apply it to the resource (EC2 instance, EKS cluster, etc.) running the Lenses service. Below is an example IAM policy with sufficient permissions, which you can associate with the relevant IAM role.
This MSK Serverless IAM policy is to be used after cluster creation. Update it with the relevant ARN.
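A sketch of such a policy, using standard MSK IAM actions; scope the resources and actions to your needs and update the placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kafka-cluster:Connect",
        "kafka-cluster:DescribeCluster",
        "kafka-cluster:DescribeTopic",
        "kafka-cluster:CreateTopic",
        "kafka-cluster:ReadData",
        "kafka-cluster:WriteData"
      ],
      "Resource": [
        "arn:aws:kafka:<region>:<account-id>:cluster/<cluster-name>/*",
        "arn:aws:kafka:<region>:<account-id>:topic/<cluster-name>/*"
      ]
    }
  ]
}
```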
Click your MSK Serverless Cluster in the MSK console and select View Client Information page to check the bootstrap server endpoint.
In the Lenses bootstrap UI, Select:
For the bootstrap server configuration, use the MSK Serverless endpoint
For the Security Protocol, set it to SASL_SSL
Customize the SASL Mechanism and set it to AWS_MSK_IAM
Add software.amazon.msk.auth.iam.IAMLoginModule required; to the SASL JAAS Config section
Set sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler in the Advanced Kafka Properties section.
During the broker metrics export step, keep it disabled, as AWS Serverless does not export the metrics to Lenses. Click Next
Copy your license and add it to Lenses, validate your license, and click Next
Click on Save & Boot Lenses. Lenses will finish the setup on its own
To enable the creation of SQL Processors that create consumer groups, you need to add the following statement in your IAM policy:
Update the placeholders in the IAM policy based on the relevant MSK Serverless cluster ARN.
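A sketch of such a statement, using the standard MSK IAM group actions; update the placeholders as described above:

```json
{
  "Effect": "Allow",
  "Action": [
    "kafka-cluster:DescribeGroup",
    "kafka-cluster:AlterGroup",
    "kafka-cluster:DeleteGroup"
  ],
  "Resource": "arn:aws:kafka:<region>:<account-id>:group/<cluster-name>/*"
}
```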
To integrate with the AWS Glue Schema Registry, you also need to add the following statement for the registries and schemas in your IAM policy:
Update the placeholders in the IAM policy based on the relevant MSK Serverless cluster ARN.
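A sketch of such a statement, using standard AWS Glue schema registry actions; the exact set Lenses needs may vary:

```json
{
  "Effect": "Allow",
  "Action": [
    "glue:GetRegistry",
    "glue:ListRegistries",
    "glue:GetSchema",
    "glue:ListSchemas",
    "glue:GetSchemaVersion",
    "glue:ListSchemaVersions",
    "glue:RegisterSchemaVersion"
  ],
  "Resource": [
    "arn:aws:glue:<region>:<account-id>:registry/<registry-name>",
    "arn:aws:glue:<region>:<account-id>:schema/<registry-name>/*"
  ]
}
```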
To integrate with the AWS Glue Schema Registry, you also need to modify the security policy for the registry and schemas to include the additional actions:
When using Lenses with MSK Serverless:
Lenses does not receive Prometheus-compatible metrics from the brokers because they are not exported outside of CloudWatch.
Lenses does not configure quotas and ACLs because MSK Serverless does not allow this.
This page describes connecting Lenses to an Azure HDInsight cluster.
Lenses will not start without a valid Kafka Connection. You can either add the connection via the bootstrap wizard or use for automated deployments.
In the Azure Portal, go to Dashboards > Ambari home.
Kafka endpoints: go to Kafka > Configs > Kafka Broker > Kafka Broker hosts.
Optionally, get the Zookeeper endpoints: go to Zookeeper > Configs > Zookeeper Server > Zookeeper Server hosts.
In the Lenses bootstrap UI:
Set the Kafka endpoints as bootstrap servers
Set the security protocol, mechanism and JAAS config according to your setup. For information on configuring clients (Lenses) for your HDInsight cluster, see the HDInsight documentation for unauthenticated and authenticated setups.
Set the following:
security.protocol to SSL
Set the password for your trust store
Upload your trust store
In addition to the steps above:
Set the password for your key store
Upload your key store
Set your key password
More details about how IAM works with MSK Serverless can be found in the AWS documentation.
Kafka
Learn how to connect Lenses to your Kafka.
Schema Registries
Learn how to connect Lenses to your Schema Registry.
Zookeeper
Learn how to connect Lenses to your Zookeepers.
Kafka Connect
Learn how to connect Lenses to your Kafka Connect Clusters.
Alert & Audits
Learn how to connect Lenses to your alerting and auditing systems.
AWS
Learn how to connect Lenses to AWS (credentials).
Apache Kafka
Connect Lenses to your Apache Kafka cluster.
AWS MSK
Connect Lenses to your AWS MSK cluster.
AWS MSK Serverless
Connect Lenses to your AWS MSK Serverless.
Aiven
Connect Lenses to your Aiven Kafka cluster.
Azure HDInsight
Connect Lenses to your Azure HDInsight cluster.
Confluent Cloud
Connect Lenses to your Confluent Cloud.
Confluent Platform
Connect Lenses to your Confluent Platform (on premise) cluster.
IBM Event Streams
Connect Lenses to your IBM Event Streams cluster.
This page describes adding a Kafka Connect Cluster to Lenses.
Lenses integrates with Kafka Connect Clusters to manage connectors.
For documentation about the available Lenses Apache 2.0 Connectors, see the Stream Reactor documentation.
The name of a Kafka Connect Connection may only contain alphanumeric characters ([A-Za-z0-9]) and dashes (-). Valid examples would be dev, Prod1, SQLCluster, Prod-1, SQL-Team-Awesome.
Multiple Kafka Connect clusters are supported.
If you are using Kafka Connect < 2.6, set the following to ensure you can see Connectors: lenses.features.connectors.topics.via.api.enabled=false
See provisioning for automating connections.
Consider Rate Limiting if you have a high number of connectors.
To add a connection, go to Admin->Connections->New Connection->Kafka Connect.
Provide a name for the Connect cluster
Add a comma-separated list of the workers in the Connect cluster, including ports
Optionally enable Basic Auth and set the username and password
Optionally enable SSL and upload the key-store file
Optionally upload a trust store
Optionally enable the collection of JMX metrics (Simple or Jolokia with SSL and Basic auth support)
If you have developed your own Connector or are using a non-Lenses connector, you can still display the connector instances in the topology. To do this, Lenses needs to know the configuration option of the Connector that defines which topic the Connector reads from or writes to. This is set in the connectors.info parameter in the lenses.conf file.
This page describes connecting Lenses to Confluent schema registries.
To add a connection, go to:
Admin->Connections
Select the New Connection button and select Schema Registry.
Enter:
Comma-separated list of schema registry URLs including ports
Enable basic auth if required and set the user name and password
Enable SSL if required and upload the keystore
Optionally upload a trust store
Set any additional properties
Optionally enable metrics
This page describes connecting Lenses to IBM Event Streams schema registry.
Requires an Enterprise subscription on IBM Event Streams; only hard delete is supported for IBM Event Streams.
To configure an application to use this compatibility API, specify the Schema Registry endpoint in the following format:
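A sketch assembled from the kafka_http_url field mentioned below:

```
<kafka_http_url>/confluent
```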
To add a connection, go to:
Admin->Connections
Select the New Connection button and select Schema Registry.
Enter:
Comma-separated list of schema registry URLs including ports, adding the confluent path at the end. Use the value from the kafka_http_url field in the IBM Console Service Credentials tab
Enable basic auth if required
Set the user name "token"
Set the password as the value from API key in the IBM Console Service Credentials tab
This page describes connecting to AWS Glue.
Lenses provides support for AWS Glue to manage schemas and also explore and process data linked to it via the Lenses SQL Engines.
To connect to Glue, first create an AWS Connection. Go to Admin->Connections->AWS and enter your AWS credentials, or select the IAM support if Lenses is running on an AWS host (e.g. an EC2 instance) or has the AWS default credentials provider chain in place.
Rather than enter your AWS credentials you can use the AWS credentials chain.
Next, Select New Connection->Schema Registry->AWS Glue. Select your AWS Connection with access to Glue, and enter the Glue ARN.
This page describes connecting Lenses to Zookeeper.
Not all cloud providers give access to Zookeeper. Zookeeper is optional for Lenses.
See provisioning for automating connections.
Connectivity to Zookeeper is optional for Lenses. Zookeeper is used by Lenses for the following purposes:
To provide quotas management (until quotas can be managed via the Brokers API)
To autodetect the JMX connectivity settings to Kafka brokers (if metrics are not defined directly for Kafka connection).
To add a Zookeeper connection go to Admin->Connections->New Connection->Zookeeper.
Add a comma-separated list of Zookeepers, including port
Optionally set a session timeout
Optionally set a Chroot path
Optionally set a connection timeout
Optionally enable the collection of JMX metrics (Simple or Jolokia with SSL and Basic auth support)
Connect Lenses to your alerting and auditing systems.
You can either configure the connections in the UI or via provisioning. Provisioning is recommended.
Lenses can send out alert and audit events; the following integrations are supported:
Alerts
DataDog
AWS CloudWatch
PagerDuty
Slack
Alert Manager
Webhook (Email, SMS, HTTP and MS Teams)
Audits
Webhook
Splunk
Once you have configured alert and audit connections, you can create alert and audit channels to route events to them. See Monitoring & Alerting or Auditing for more information.
This page describes connecting Lenses to Apicurio.
Apicurio supports the following versions of Confluent's API:
Confluent Schema Registry API v6
Confluent Schema Registry API v7
Set the schema registry URLs to include the compatibility endpoints, for example:
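For example, assuming Apicurio is served at my-apicurio-host:8080, the ccompat endpoints look like:

```
http://my-apicurio-host:8080/apis/ccompat/v6
http://my-apicurio-host:8080/apis/ccompat/v7
```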
To add a connection, go to:
Admin->Connections
Select the New Connection button and select Schema Registry.
Enter:
Comma-separated list of schema registry URLs including ports and compatibility endpoint path
Enable basic auth if required and set the user name and password
This page describes connecting Lenses to Schema registries
See provisioning for automating connections.
Consider Rate Limiting if you have a high number of schemas.
Lenses can work with the following schema registry implementations, which can be added via the Connections page in Lenses.
Go to Admin->Connections->New Connections->Schema Registry and follow the guide for your registry provider.
TLS and basic authentication are supported for connections to Schema Registries.
Lenses can collect Schema registry metrics via:
JMX
Jolokia
AVRO
PROTOBUF
JSON and XML formats are supported by Lenses but without a backing schema registry.
To connect your Schema Registry with Lenses, select Schema Registry -> Create Connection.
To enable the deletion of schemas in the UI, set the following in the lenses.conf file:
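A sketch of the entry; the option name is our assumption for this setting, so verify it against your version's configuration reference:

```
lenses.schema.registry.delete=true
```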
IBM Event Streams supports hard deletes only
This page describes the supported deployment methods for Lenses.
To automate the configuration of connections we recommend using provisioning.
Lenses can be deployed in the following ways:
This page describes installing Lenses with Docker Image.
On start-up, Lenses will be in bootstrap mode unless it has an existing Kafka Connection. See provisioning for automating.
The Lenses docker image can be configured via environment variables or via volume mounts for the configuration files (lenses.conf, security.conf).
Open Lenses in your browser, log in with admin/admin, and configure your connections and add your license.
Environment variables prefixed with LENSES_ are transformed into corresponding configuration options. The environment variable name is converted to lowercase and underscores (_) are replaced with dots (.). For example, to set the option lenses.port, use the environment variable LENSES_PORT.
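For example, a minimal run overriding the port; the 9991 port value and image tag are illustrative:

```bash
docker run -d --name lenses \
  -p 9991:9991 \
  -e LENSES_PORT=9991 \
  lensesio/lenses:5.5
```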
Alternatively, the lenses.conf and security.conf can be mounted directly as:
/mnt/settings/lenses.conf
/mnt/secrets/security.conf
The Docker image exposes four volumes in total, where cache, logs, plugins, and persistent data are stored:
/data/storage
/data/plugins
/data/logs
/data/kafka-streams-state
/data/storage is used to store persistent data, such as Data Policies. For this data to survive between Docker runs and/or Lenses upgrades, the volume must be managed externally (persistent volume).
/data/plugins is where classes that extend Lenses may be added, such as custom Serdes, LDAP filters, UDFs for the Lenses SQL table engine, and custom_http implementations.
/data/logs is where logs are stored. The application also logs to stdout, so the log files aren't needed for most cases.
/data/kafka-streams-state is used when Lenses SQL is in IN_PROC configuration. In such a case, Lenses uses this scratch directory to cache Lenses SQL internal state. Whilst this directory can safely be removed, it can be beneficial to keep it around, so the Processors won't have to rebuild their state during a restart.
By default, Lenses serves connections over plaintext (HTTP). It is possible to use TLS instead. The Docker image offers the ability to provide the content for extra files via secrets mounted as files or as environment variables. Especially for SSL, the Docker image supports SSL/TLS keys and certificates in Java Keystore (JKS) format.
This capability is optional, and users can mount such files under custom paths and configure lenses.conf manually via environment variables, or lenses.append.conf.
There are two ways to use the File/Variable names of the table below:
Create a file with the appropriate filename as listed below and mount it under /mnt/settings, /mnt/secrets, or /run/secrets
Set them as environment variables.
All settings, except for passwords, can optionally be encoded in base64. The docker will detect such encoding automatically.
The docker does not require running as root. The default user is set to root for convenience and to verify upon start-up that all the directories and files have the correct permissions. The user drops to nobody and group nogroup (65534:65534) before starting Lenses.
If the image is started without root privileges, the agent will start successfully using the effective uid:gid applied. Ensure any volumes mounted (i.e., for the license, settings, and data) have the correct permission set.
Add a connection to AWS in Lenses.
You can either configure the connections in the UI or via provisioning. Provisioning is recommended.
Lenses uses an AWS connection in two places:
An AWS IAM connection for Lenses itself
Alert channels to CloudWatch.
If Lenses is deployed on an EC2 instance or has access to AWS credentials in the default credentials provider chain, those can be used instead.
Lenses Box is a container solution for building applications on a localhost Apache Kafka docker.
Lenses Box contains all components of the Apache Kafka ecosystem, CLI tools, and synthetic data streams.
Install and run the Docker
The broker in the Kafka docker has broker id 101 and advertises the listener configuration endpoint to accept client connections.
If you run Docker on macOS or Windows, you may need to find the address of the VM running Docker and export it as the advertised listener address for the broker (on macOS it is usually 192.168.99.100). At the same time, you should give the lensesio/box image access to the VM's network:
If you run on Linux, you don't have to set ADV_HOST, but you can do something cool with it: if you set it to your machine's IP address, you can access Kafka from any client in your network.
If you decide to run a box in the cloud, you (and all your team) can access Kafka from your development machines. Remember to provide the public IP of your server as the kafka advertised host for your producers and consumers to access it.
Kafka JMX metrics are enabled by default. Refer to ports; once you expose the relevant port, i.e. -p 9581:9581, you can connect to JMX.
If you are using docker-machine, or setting this up in a Cloud, or DOCKER_HOST is a custom IP address such as 192.168.99.100, you will need to use the parameters --net=host -e ADV_HOST=192.168.99.100.
To persist the Kafka data between multiple executions, provide a name for your Docker instance and do not set the container to be removed automatically (--rm flag). For example:
Once you want to free up resources, just press Control-C. Now you have two options: either remove the Docker container, or use it at a later time and continue from where you left off, as sketched below.
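A sketch, assuming the container was started with the name lenses-box:

```bash
# Remove the container (and its data)
docker rm lenses-box

# Or resume it later from where you left off
docker start -a lenses-box
```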
Download your key locally and run the command:
The container is running multiple services, and it is recommended to allocate 5GB of RAM to the docker (although it can operate with even less than 4GB).
To reduce the memory footprint, it is possible to disable some connectors and shrink the Kafka Connect heap size by applying these options (choose connectors to keep) to the docker run command:
This page describes how to install Lenses via the AWS Marketplace.
The AWS Marketplace offering requires AWS MSK (Managed Apache Kafka) to be available. Optionally, AWS RDS (or any other PostgreSQL-compatible database) can be configured for Lenses to store its state.
The following AWS resources are created:
An EC2 instance that runs Lenses;
A SecurityGroup to allow network access to the Lenses UI;
A SecurityGroupIngress for Lenses to connect to MSK;
A CloudWatch LogGroup where Lenses stores its logs;
An IAM Role to allow the EC2 instance to store logs;
An IAM InstanceProfile to pass the role to the EC2 instance;
Optionally if enabled during deployment: an IAM Policy to allow the EC2 instance to emit CloudWatch metrics.
Deployment takes approximately three minutes.
Select CloudFormation Template, Lenses EC2 and your region.
Choose Launch CloudFormation.
Continue with the default options for creating the stack in the AWS wizard.
Fill in the parameters at Specify stack details.
Deployment: Here the EC2 instance size and password for the Lenses admin user are set. A t2.large instance size is recommended;
Network Configuration: This section controls the network settings of the Lenses EC2 instance. The ingress allows access to the Lenses UI only from particular IP addresses;
MSK: Set the Security Group ID to that of your MSK cluster. A rule will be added to it so that Lenses can communicate with your cluster. You can find the ID by navigating in the AWS console to your MSK cluster and then under Properties -> Networking settings;
Monitoring: Optionally produce the Lenses logs to CloudWatch;
Storage: Lenses stores its state in a database locally on the EC2 instance's disk or in a PostgreSQL database. Local storage is a development/quickstart option and is not suitable for production use. It is advised to use a Postgres database for smoother upgrades.
Review the stack.
Accept the terms and conditions and create the stack.
Once the stack has deployed, go to the Output tab and click on the FQDN link. If there are no outputs listed you might need to press the refresh button.
Log in to Lenses with admin and the password value you submitted for the parameter LensesAdminPassword.
Lenses supports connection to MSK brokers via IAM. If Lenses is deployed on an EC2 instance it will use the default credential chain loader to authenticate and connect to MSK.
The following Regions are supported:
us-east-1
;
us-east-2
;
us-west-1
;
us-west-2
;
ca-central-1
;
eu-central-1
;
eu-west-1
;
eu-west-2
;
eu-west-3
;
ap-southeast-1
;
ap-southeast-2
;
ap-south-1
;
ap-northeast-1
;
ap-northeast-2
;
sa-east-1
.
Please:
Do not use your AWS root user for deployment or operations;
Follow the least privileges principle when granting access to individual IAM user accounts;
Avoid allowing traffic to the Lenses UI from a broad CIDR block where a more specific block could be used.
AWS billing applies for the EC2 instance, CloudWatch logs and optionally CloudWatch metrics.
In case you run into problems, e.g. you cannot connect to Lenses, then the logs could provide more information. The easiest route to do this is to go to CloudWatch in the AWS console. Here, find the log group corresponding to your deployment (it has the same name as the deployment) and pick a log stream. The stream with the /lenses.log suffix contains all log lines regardless of the log level; the stream with the /lenses-warn.log suffix only contains warning-level logs.
If the above fails, for example, because the logs integration is broken, you can SSH into the EC2 instance. Lenses is installed into /opt/lenses, and the logs can be found under /opt/lenses/logs for further inspection.
To start with the Box online.
Open Lenses in your browser, log in with admin/admin.
For the hourly billed version, additional hourly charges apply, which depend on the instance size. For the Bring Your Own License (BYOL) version, you can get a free trial license.
AWS Glue
Connect Lenses to your AWS Glue service for schema registry support.
Confluent
Connect Lenses to Confluent Schema Registry.
IBM Event Streams
Connect Lenses to IBM Event Streams Schema Registry
Apicurio
Connect Lenses to Apicurio.
Helm
Deploy Lenses in your Kubernetes cluster with Helm.
Docker
Deploy Lenses with Docker.
Linux (archive)
Deploy Lenses on Linux servers or VMs.
AWS Marketplace
Deploy Lenses via the AWS Marketplace.
Lenses Box
Try out Lenses with the Lenses Box.
FILECONTENT_JVM_SSL_TRUSTSTORE
The SSL/TLS trust store to use as the global JVM trust store. Adds to LENSES_OPTS the property javax.net.ssl.trustStore.
FILECONTENT_JVM_SSL_TRUSTSTORE_PASSWORD
The trust store password. If set, the startup script will automatically add to LENSES_OPTS the property javax.net.ssl.trustStorePassword (base64 not supported).
FILECONTENT_LENSES_SSL_KEYSTORE
The SSL/TLS keystore to use for the TLS listener for Lenses
Kafka broker: 9092
Kafka Connect: 8083
Zookeeper: 2181
Schema Registry: 8081
Lenses: 3030
Elasticsearch: 9200
Kafka broker JMX: 9581
Schema Registry JMX: 9582
Kafka Connect JMX: 9584
Zookeeper JMX: 9585
Kafka broker (SSL): 9093
ADV_HOST=[ip-address]: The IP address that the broker will advertise
DEBUG=1: Prints all stdout and stderr processes to the container's stdout for debugging
DISABLE_JMX=1: Disables exposing JMX metrics on Kafka services
ELASTICSEARCH_PORT=0: Will not start Elasticsearch
ENABLE_SSL=1: Creates CA and key-cert pairs and makes the broker also listen on SSL://127.0.0.1:9093
KAFKA_BROKER_ID=1: Overrides the broker id (the default id is 101)
SAMPLEDATA=0: Disables the synthetic streaming data generators that run by default
SUPERVISORWEB=1: Enables the supervisor interface on port 9001 (adjust via SUPERVISORWEB_PORT) to control services
This page provides examples for defining a connection to Kafka.
If deploying with Helm, put the connections YAML under provisioning in the values file.
With PLAINTEXT, there's no encryption and no authentication when connecting to Kafka.
The only required fields are:
kafkaBootstrapServers - a list of bootstrap servers (brokers). It is recommended to add as many brokers (if available) as convenient to this list for fault tolerance.
protocol - depending on the protocol, other fields might be necessary (see examples for other protocols)
In the following example, JMX metrics for the Kafka Brokers are configured too, assuming that all brokers expose their JMX metrics using the same port (9581), without SSL and authentication.
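A sketch of such a connection. The kafkaBootstrapServers, protocol, and metricsPort names are used elsewhere in this documentation; the value wrapper structure and the metricsType field name are assumptions:

```yaml
kafka:
  - name: kafka
    version: 1
    tags: []
    configuration:
      kafkaBootstrapServers:
        value:
          - PLAINTEXT://my-kafka-host-0:9092
          - PLAINTEXT://my-kafka-host-1:9092
      protocol:
        value: PLAINTEXT
      metricsType:
        value: JMX
      metricsPort:
        value: 9581
```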
With SSL, the connection to Kafka is encrypted. You can also use SSL and certificates to authenticate users against Kafka.
A truststore (with password) might need to be set explicitly if the global truststore of Lenses does not include the Certificate Authority (CA) of the brokers.
If TLS is used for authentication to the brokers in addition to encryption-in-transit, a key store (with passwords) is required.
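A sketch of the configuration section for SSL; the ssl* field names are assumptions, and the keystore entries apply only when TLS client authentication is used:

```yaml
      protocol:
        value: SSL
      sslTruststore:
        file: my-truststore.jks
      sslTruststorePassword:
        value: <TRUSTSTORE_PASSWORD>
      # Only when TLS is also used for authentication:
      sslKeystore:
        file: my-keystore.jks
      sslKeystorePassword:
        value: <KEYSTORE_PASSWORD>
      sslKeyPassword:
        value: <KEY_PASSWORD>
```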
There are two SASL-based protocols to access Kafka Brokers: SASL_SSL and SASL_PLAINTEXT. They both require a SASL mechanism and JAAS Configuration values. What differs is:
Whether the transport layer is encrypted (SSL)
The SASL mechanism for authentication (PLAIN, AWS_MSK_IAM, GSSAPI).
In addition to this, a keytab file might be required, depending on the SASL mechanism (for example when using the GSSAPI mechanism, most often used for Kerberos).
In order to use Kerberos authentication, a Kerberos Connection should be created beforehand.
Apart from that, when encryption-in-transit is used (with SASL_SSL), a trust store might need to be set explicitly if the global trust store of Lenses does not include the CA of the brokers.
Following are a few examples of SASL_PLAINTEXT and SASL_SSL.
Encrypted communication and basic username and password for authentication.
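A sketch using SASL_SSL with SCRAM; the saslMechanism/saslJaasConfig field names are assumptions, while the JAAS module is the standard Kafka SCRAM login module:

```yaml
      protocol:
        value: SASL_SSL
      saslMechanism:
        value: SCRAM-SHA-256
      saslJaasConfig:
        value: |
          org.apache.kafka.common.security.scram.ScramLoginModule required
          username="<USERNAME>"
          password="<PASSWORD>";
      sslTruststore:
        file: my-truststore.jks
```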
When Lenses is running inside AWS and is connecting to Amazon's Managed Kafka (MSK) instance, IAM can be used for authentication.
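A sketch, mirroring the MSK IAM settings shown in the AWS MSK section earlier in this document; the field names are assumptions:

```yaml
      protocol:
        value: SASL_SSL
      saslMechanism:
        value: AWS_MSK_IAM
      saslJaasConfig:
        value: software.amazon.msk.auth.iam.IAMLoginModule required;
      additionalProperties:
        value:
          sasl.client.callback.handler.class: software.amazon.msk.auth.iam.IAMClientCallbackHandler
```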
In order to use Kerberos authentication, a Kerberos Connection should be created beforehand.
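A sketch using GSSAPI with the standard JVM Kerberos login module; the keytab path, principal, and field names are assumptions:

```yaml
      protocol:
        value: SASL_SSL
      saslMechanism:
        value: GSSAPI
      saslJaasConfig:
        value: |
          com.sun.security.auth.module.Krb5LoginModule required
          useKeyTab=true
          storeKey=true
          keyTab="/path/to/lenses.keytab"
          principal="lenses@REALM.COM";
```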
No SSL encryption of communication; credentials are communicated to Kafka in clear text.
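A sketch using SASL_PLAINTEXT with the PLAIN mechanism; the field names are assumptions:

```yaml
      protocol:
        value: SASL_PLAINTEXT
      saslMechanism:
        value: PLAIN
      saslJaasConfig:
        value: |
          org.apache.kafka.common.security.plain.PlainLoginModule required
          username="<USERNAME>"
          password="<PASSWORD>";
```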
Lenses interacts with your Kafka Cluster via the Kafka Client API. To override the default behaviour, use additionalProperties.
By default there shouldn't be a need to use additional properties; use them only if really necessary, as wrong usage might break the communication with Kafka.
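A sketch of overriding Kafka client defaults; the map structure under additionalProperties is an assumption:

```yaml
      additionalProperties:
        value:
          max.poll.records: 500
          client.id: lenses
```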
Lenses SQL Processors use the same Kafka connection information provided to Lenses.
This page gives examples of the provisioning yaml for Lenses.
To use with Helm, place the examples under lenses.provisioning.connections in the values file.
This page provides examples for defining a connection to Schema Registries.
The URLs (nodes) should always have a scheme defined (http:// or https://).
For Basic Authentication, define username and password properties.
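A sketch of a Schema Registry connection with Basic Authentication; the schemaRegistry section key and schemaRegistryUrls field name are assumptions, while username/password are the properties named above:

```yaml
schemaRegistry:
  - name: schema-registry
    version: 1
    configuration:
      schemaRegistryUrls:
        value:
          - https://my-sr-host-0:8081
          - https://my-sr-host-1:8081
      username:
        value: <USERNAME>
      password:
        value: <PASSWORD>
```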
A custom truststore is needed when the Schema Registry is served over TLS (encryption-in-transit) and the Registry’s certificate is not signed by a trusted CA.
A custom truststore might be necessary too (see above).
By default, Lenses will use hard delete for Schema Registry. To use soft delete, add the following property:
Some connections depend on others. One example is the AWS Glue Schema Registry connection, which depends on an AWS connection. These are examples of provisioning Lenses with an AWS connection named my-aws-connection and an AWS Glue Schema Registry that references it.
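A sketch of the pair; the section keys and the awsConnection/glueRegistryArn field names are assumptions, while the connection name and the ARN requirement come from this documentation:

```yaml
aws:
  - name: my-aws-connection
    version: 1
    configuration:
      authMode:
        value: Credentials Chain

glue:
  - name: glue-schema-registry
    version: 1
    configuration:
      awsConnection:
        value: my-aws-connection
      glueRegistryArn:
        value: arn:aws:glue:<region>:<account-id>:registry/<registry-name>
```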
This page describes importing and exporting resources from Lenses to YAML via the CLI.
The CLI allows you to import and export resources to and from files.
Import is done on a per-resource basis, with the directory structure defined by the CLI. A base directory can be provided by the --dir flag.
Processors, connectors, topics, and schemas have an additional prefix flag to restrict resources to export.
The expected directory structure is:
Only updates to the name, cluster name, namespace, and runner are allowed. Changes to the SQL are effectively the creation of a new Processor.
This page describes how to configure JMX metrics for Connections in Lenses.
All core services (Kafka, Schema Registry, Kafka Connect, Zookeeper) use the same set of properties for services’ monitoring.
The Agent will discover all the brokers by itself and will try to fetch metrics using metricsPort, metricsCustomUrlMappings and other properties (if specified).
The same port used for all brokers/workers/nodes. No SSL, no authentication.
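A sketch of the relevant configuration fields; metricsPort is named in this documentation, metricsType is an assumption:

```yaml
      metricsType:
        value: JMX
      metricsPort:
        value: 9581
```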
Such a configuration means that the Agent will try to connect using JMX with every pair of kafkaBootstrapServers.host:metricsPort, so following the example: my-kafka-host-0:9581.
For Jolokia the Agent supports two types of requests: GET (JOLOKIAG) and POST (JOLOKIAP).
For JOLOKIA, each entry value in metricsCustomUrlMappings must contain the protocol.
The same port used for all brokers/workers/nodes. No SSL, no authentication.
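A sketch for Jolokia over HTTP; 8778 is Jolokia's usual default port, and the field names follow the JMX sketch above:

```yaml
      metricsType:
        value: JOLOKIAG   # GET requests; use JOLOKIAP for POST
      metricsPort:
        value: 8778
```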
JOLOKIA monitoring works on top of the HTTP protocol. To fetch metrics, the Agent has to perform either a GET or a POST request. The HTTP request timeout can be configured using the httpRequestTimeout property (ms value). Its default value is 20 seconds.
The default suffix for Jolokia endpoints is /jolokia/, so that should be the provided value. Sometimes that suffix can be different, so there is a way of customizing it by using the metricsHttpSuffix field.
AWS has a predefined metrics configuration. The Agent hits the Prometheus endpoint using port 11001 for each broker. There is an option of customizing the AWS metrics connection in Lenses by using the metricsUsername, metricsPassword, httpRequestTimeout, metricsHttpSuffix, metricsCustomUrlMappings, and metricsSsl properties, but most likely no one will need to do that; AWS has its own standard and most probably it won't change. Customization can be achieved only by API or CLI; the UI does not support it.
There is also a way to configure custom mapping for each broker (Kafka) / node (Schema Registry, Zookeeper) / worker (Kafka Connect).
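A sketch that matches the explanation below; the exact mapping value format is an assumption:

```yaml
      metricsPort:
        value: 9581
      metricsCustomUrlMappings:
        value:
          my-kafka-host-0: "my-kafka-host-0:9582"
```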
Such a configuration means that the Agent will try to connect using JMX for:
my-kafka-host-0:9582 - because of metricsCustomUrlMappings
my-kafka-host-1:9581 - because of metricsPort and no entry in metricsCustomUrlMappings
This page provides examples for defining a connection to Kafka Connect Clusters.
The URLs (workers) should always have a scheme defined (http:// or https://).
This example uses an optional AES-256 key. The key decodes values encoded with AES-256, to enable passing encrypted values to connectors. It is only needed if your cluster uses the AES-256 decryption plugin.
For Basic Authentication, define username and password properties.
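A sketch of a Kafka Connect cluster connection with Basic Authentication; the connect section key and workers field name are assumptions, and the cluster name dev is one of the valid examples from this documentation:

```yaml
connect:
  - name: dev
    version: 1
    configuration:
      workers:
        value:
          - http://my-connect-worker-0:8083
          - http://my-connect-worker-1:8083
      username:
        value: <USERNAME>
      password:
        value: <PASSWORD>
```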
A custom truststore is needed when the Kafka Connect workers are served over TLS (encryption-in-transit) and their certificates are not signed by a trusted CA.
A custom truststore might be necessary too (see above).
This page describes the Provisioning API reference.
For the options for each connection, see the Schema/Object of the PUT call.
This page describes how to use the Lenses File Watcher to set up connections to Kafka and other services and have changes applied.
Connections are defined in the provisioning.yaml file. Lenses will then watch the file and resolve the desired state, applying connections defined in the file.
If a connection is not defined but exists in Lenses it will be removed. It is very important to keep your provision YAML updated to reflect the desired state.
File watcher provisioning must be explicitly enabled. Set the following in the lenses.conf file:
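A sketch, pointing Lenses at the provisioning directory via the lenses.provisioning.path option referenced below; whether a separate enable flag also exists may vary by version:

```
lenses.provisioning.path="/path/to/provisioning-directory"
```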
Updates to the file will be loaded and applied if valid without a restart of Lenses.
Lenses expects a set of files in the directory defined by lenses.provisioning.path. The structure of the directory must follow:
files/ directory for storing any certificates, JKS files or other files needed by the connection
provisioning.yaml - This is the main file, holding the definition of the connections
license.json - Your lenses license file
The provisioning.yaml contains secrets. If you are deploying via Helm, the chart will use Kubernetes secrets.
Additionally, support is provided for referencing environment variables. This allows you to set secrets in your environment and have the value resolved at runtime.
Many connections need files; for example, to secure Kafka with SSL you will need a key store and optionally a trust store.
To reference a file in the provisioning.yaml, for example, given:
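A sketch of such a reference, using a file key instead of a value key (this convention is an assumption):

```yaml
      sslKeystore:
        file: my-keystore.jks
```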
a file called my-keystore.jks is expected in the files directory. This file will be used for the key store location.
This page describes installing Lenses in Kubernetes via Helm.
Only Helm version 3 is supported.
On start-up, Lenses will be in bootstrap mode unless it has an existing Kafka Connection. Enable provisioning to automate the creation of connections.
First, add the Helm Chart repository using the Helm command line:
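```bash
# Repository name/URL as commonly used for the Lenses chart; verify for your setup
helm repo add lensesio https://helm.repo.lenses.io
helm repo update
```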
Use helm to install Lenses with default values:
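```bash
# The release name and namespace are illustrative
helm install lenses lensesio/lenses --namespace lenses --create-namespace
```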
The default install of Lenses will place Lenses in bootstrap mode; you can add the connections to Kafka manually and upload your license, or automate this with provisioning. Please refer to the GitHub values.yaml for all options.
To automatically provision the connections to Kafka and other systems, set .Values.lenses.provision.connections to the YAML definition of your connections. For a full list of the connection types supported, see Provisioning.
The chart will render the full YAML specified under this setting as the provisioning.yaml file.
Alternatively, you can use a second YAML file containing only the connections and pass it on the command line when installing:
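```bash
# connections.yaml is a hypothetical file holding only the lenses.provision.connections values
helm install lenses lensesio/lenses \
  --namespace lenses \
  -f values.yaml \
  -f connections.yaml
```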
You must explicitly enable provisioning via lenses.provision.enabled: true, otherwise Lenses will start in bootstrap mode.
The chart uses:
Secrets to store Lenses Postgres credentials and authentication credentials
Secrets to store connection credentials such as Kafka SASL_SCRAM password or password for SSL JKS stores.
Secrets to hold the base64 encoded values of the JKS stores
ConfigMap for Lenses configuration overrides
Cluster roles and role bindings (optional).
Secrets and config maps are mounted as files under the mount /mnt:
settings - holds the lenses.conf
secrets - holds the Lenses secrets and license
provision-secrets - holds the secrets for connections in the provisioning.yaml file
provision-secrets/files - holds any file needed for a connection, e.g. JKS files.
The Helm chart creates Cluster roles and bindings; these are used by SQL Processors if the deployment mode is set to KUBERNETES. They allow Lenses to deploy and monitor SQL Processor deployments in namespaces.
To disable RBAC, set rbacEnabled: false.
If you want to limit the permissions Lenses has against your Kubernetes cluster, you can use Role/RoleBinding resources instead.
To achieve this, create a Role and a RoleBinding resource in the namespace you want the processors deployed to.
For example:
Lenses namespace = lenses-ns
Processor namespace = lenses-proc-ns
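A minimal sketch of such resources, assuming the Lenses service account is named lenses; the exact API groups and verbs Lenses requires may differ:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: lenses-processors
  namespace: lenses-proc-ns
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "pods/log", "deployments"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: lenses-processors
  namespace: lenses-proc-ns
subjects:
  - kind: ServiceAccount
    name: lenses                # assumed service account name
    namespace: lenses-ns
roleRef:
  kind: Role
  name: lenses-processors
  apiGroup: rbac.authorization.k8s.io
```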
Finally, you need to define in the Lenses configuration which namespaces Lenses can access. To achieve this, amend values.yaml to contain the following:
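```yaml
# Sketch, assuming the sql.namespaces value described later on this page
lenses:
  sql:
    namespaces:
      - lenses-proc-ns
```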
The main configurable options for lenses.conf are available in the values.yaml under the lenses object. These include:
Authentication
Database connections
SQL processor configurations
To apply other static configurations, use lenses.append.conf, for example:
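```yaml
# lenses.interval.summary is just an illustrative option from the configuration reference
lenses:
  append:
    conf: |
      lenses.interval.summary=10000
```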
Set accordingly under lenses.security.
For SSO, set lenses.security.saml.
To use Postgres as the backing store for Lenses, set the details in the lenses.storage.postgres object.
If Postgres is not enabled a default embedded H2 database is used. To enable persistence for this data:
The chart relies on secrets for sensitive information such as passwords. Secrets can rotate and are commonly stored in an external store such as Azure Key Vault, HashiCorp Vault or AWS Secrets Manager.
If you wish to have the chart use external secrets that are synchronized with these providers, set the following for the Lenses user:
For Postgres, add additional ENV variables via the lenses.additionalEnv object to point to your secret, and set the username and password to external in the Postgres section.
While the chart supports setting TLS on Lenses itself, we recommend placing it on the Ingress resource.
Ingress and service resources are supported.
Enable an Ingress resource in the values.yaml:
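```yaml
# Key names are assumptions; check the chart's values.yaml for the exact schema
ingress:
  enabled: true
  host: lenses.example.com
```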
Enable a service resource in the values.yaml:
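```yaml
# Key names are assumptions; check the chart's values.yaml for the exact schema
service:
  enabled: true
  type: ClusterIP
```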
To control the resources used by Lenses:
To enable SQL Processors in KUBERNETES mode and control the defaults:
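```yaml
# Sketch, assuming the chart exposes the SQL mode under the lenses object
lenses:
  sql:
    mode: KUBERNETES
```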
To control the namespaces Lenses can deploy processors to, use the sql.namespaces value.
Prometheus metrics are automatically exposed on port 9102 under /metrics.
For Connections, see Provisioning examples. You can also find examples in the Helm chart repo.
This page describes how to use the Lenses provisioning API to set up connections to Kafka and other services and have changes applied.
Building on the provisioning.yaml, the provisioning API allows uploading the files directly to Lenses from anywhere with network access, without access to the host where Lenses is installed.
Many connections need files, for example, to secure Kafka with SSL you will need a keystore and optionally a trust store.
To reference a file in the provisioning.yaml, set the configuration option's key to "file" and reference the value in the API request. For example, given:
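```yaml
# sslKeystore is the configuration option; "my-keystore-file" is the multipart part name
sslKeystore:
  file: "my-keystore-file"
```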
To upload the file to be used for the configuration option sslKeystore, add the following to the request:
Set the type to application/octet-stream.
The name of the part in the multipart request (supporting files) should match the value of the property pointing to the mounted file in the provisioning.yaml descriptor. This ensures accurate mapping and referencing of files.
Set LENSES_SESSION_TOKEN to the value of the Lenses Service Account token you want to use to automate provisioning.
In this example, the provisioning.yaml is read from provisioning=@"resources/provisioning.yaml". The provisioning.yaml contains a reference to "my-keystore-file", which is loaded from @${PATH_TO_KEYSTORE_FILE};type=application/octet-stream.
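Putting it together, a request could look like this; the endpoint path and host are assumptions, so adjust them to your installation:

```bash
curl -X PUT "https://lenses.example.com/api/v1/state/provision" \
  -H "Authorization: Bearer ${LENSES_SESSION_TOKEN}" \
  -F provisioning=@"resources/provisioning.yaml" \
  -F "my-keystore-file=@${PATH_TO_KEYSTORE_FILE};type=application/octet-stream"
```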
The provisioning.yaml contains secrets. If you are deploying via Helm, the chart will use Kubernetes secrets.
Additionally, support is provided for referencing environment variables. This allows you to set secrets in your environment and have the value resolved at runtime, e.g. injecting an environment variable from GitHub secrets for passwords.
This page describes how to configure Lenses.
This section guides you through what is required to run Lenses efficiently and securely.
Two files control Lenses configuration:
lenses.conf - contains most of the configuration
security.conf - sensitive configuration options such as passwords for authentication
A third, optional file, provisioning.yaml, allows you to define your license and connection details to Kafka and other services in a file that is dynamically picked up by Lenses.
This page describes automating (provisioning) connections and channels for Lenses at installation and how to apply updates.
On start-up, Lenses will be in bootstrap mode unless it has an existing Kafka Connection.
To fully start and perform basic functions, Lenses needs two key pieces of information:
Kafka Connection
Valid License
If provisioning is enabled, any changes made in the UI will be overridden.
A dedicated API, called provisioning, is available to handle bootstrapping key connections at installation time. This allows you to fully install and configure key connections such as Kafka, Schema Registry, Kafka Connect, and ZooKeeper in one go. You can use either of the following approaches depending on your needs:
Both approaches use a YAML file to define connections.
Connections are defined in the provisioning.yaml. This file is divided into components, each component representing a type of connection.
Each component must have:
Name - the free-form name of the connection
Version - set to 1
Tags - optional
Configuration - a list of keys/values, dependent on the component type. An example is shown below.
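A minimal sketch of a component; the kafkaBootstrapServers key is illustrative, so consult the provisioning examples for the exact schema of each connection type:

```yaml
kafka:
  - name: kafka              # free-form name of the connection
    version: 1
    tags: [prod]             # optional
    configuration:           # keys/values depend on the component type
      kafkaBootstrapServers:
        value:
          - PLAINTEXT://my-kafka-host-0:9092
```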
This page describes how to configure Lenses IAM to secure access to your Kafka cluster.
IAM (Identity and Access Management) in Lenses is controlled by Groups. Users and service accounts belong to groups. Permissions are assigned to groups and apply to the users and service accounts in those groups.
Authentication of users is determined by the configured mechanism.
For automation use the CLI.
This page describes configuring Lenses with a custom HTTP implementation for authentication.
With custom authentication, you can plug in your own authentication system by using HTTP headers.
In this approach, your own authentication proxy/code sitting in front of Lenses takes care of the authentication and injects appropriate Headers in all HTTP requests for verified users.
Set up a custom authentication layer by introducing the following in security.conf:
You can implement a plugin in a few hours in Java, Scala, or another JVM language by implementing one interface.
The returned UserAndGroups object contains the username and the groups the authenticated user belongs to (or an exception is raised if no such user exists).
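For illustration only, a sketch of what such a plugin could look like; the interface and class names here are assumptions, not the real Lenses API, so refer to the sample implementation for the actual contract.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: HttpAuthenticationPlugin and the UserAndGroups
// constructor are assumed shapes, not the authoritative Lenses interface.
public class HeaderAuthenticationPlugin implements HttpAuthenticationPlugin {

    @Override
    public UserAndGroups authenticate(Map<String, String> headers) {
        // Read the headers injected by your authentication proxy
        String user = headers.get("X-Forwarded-User");
        if (user == null) {
            throw new IllegalStateException("No authenticated user header present");
        }
        List<String> groups = Arrays.asList(
                headers.getOrDefault("X-Forwarded-Groups", "").split(","));
        return new UserAndGroups(user, groups);
    }
}
```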
The best way to get started is to look at a sample open-source implementation of such a plugin.
This page describes configuring basic authentication in Lenses.
With Basic Auth, user accounts are managed by Lenses and a unique username and a password are used to log in.
For the BASIC and LDAP authentication types, there is the option to set a policy to temporarily lock the account when successive login attempts fail. Once the lock time window has passed, the user can log in again.
The internal database that stores user/group information is kept on disk under the lenses.storage.directory, or in an external Postgres database.
If using the embedded H2 database keep this directory intact between updates and upgrades.
To enforce specific password rules the following configurations need to be set:
To not allow previous passwords to be reused, use the following configuration:
For a full list of configuration options, see the Configuration Reference.
Helm
Helm Chart Repo
JVM Options
Understand how to customize the Lenses JVM settings.
Logs
Understand and customize Lenses logging.
Identity & Access Management
Configure how users authenticate in Lenses.
Lenses Database
Configure the backing store for Lenses.
TLS
Configure TLS on Lenses for HTTPS.
Kafka ACLs
Configure the Kafka ACLs Lenses needs to operate.
Processor Modes
Configure how and where Lenses deploys SQL Processors.
JMX Metrics
Configure Lenses to expose JMX metrics.
Plugins
Add your own plugins to extend Lenses functionality.
Configuration Reference
Review Lenses configuration reference.
This page describes configuring Lenses with Okta SSO.
Groups are case-sensitive and mapped by name with Okta
Integrate your user-groups with Lenses using the Okta group names. Create a group in Lenses using the same case-sensitive group name as in Okta.
For example, if the Engineers group is available in Okta, create a group with the same name.
Lenses is available directly in Okta’s Application catalog.
Go to Applications > Applications
Click Add Application
Search for Lenses
Select by pressing Add
App label: Lenses
Set the base URL of your Lenses installation, e.g. https://lenses-dev.example.com
Click Done
Download the Metadata XML file with the Okta IdP details.
Go to Sign On > Settings > SIGN ON METHODS
Click on Identity Provider metadata and download the XML data to a file.
You will reference this file’s path in the security.conf configuration file.
This page describes configuring Lenses with Google SSO.
Google doesn't expose the groups, or organization unit, of a user to a SAML app. This means we must set up a custom attribute for the Lenses groups that each user belongs to.
Open the Google Admin console from an administrator account.
Click the Users button
Select the More dropdown and choose Manage custom attributes
Click the Add custom attribute button
Fill the form to add a Text, Multi-value field for Lenses Groups, then click Add
Open the Google Admin console from an administrator account.
Click the Users button
Select the user to update
Click User information
Click the Lenses Groups attribute
Enter one or more groups and click Save
Learn more about Google custom SAML apps
Open the Google Admin console from an administrator account.
Click the Apps button
Click the SAML apps button
Select the Add App dropdown and choose Add custom SAML app
Enter a descriptive name for the Lenses installation
Upload a Lenses icon
Configure in security.conf.
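A sketch of the SAML-related entries; the key names come from the configuration reference and the values are placeholders:

```
lenses.security.saml.base.url="https://lenses-dev.example.com"
lenses.security.saml.idp.provider="google"
lenses.security.saml.idp.metadata.file="/path/to/GoogleIDPMetadata.xml"
lenses.security.saml.keystore.location="/path/to/keystore.jks"
lenses.security.saml.keystore.password="changeme"
```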
This page describes configuring Lenses with Azure SSO.
Groups are case-sensitive and mapped by UUID with Azure
Integrate your user-groups with Lenses using the Azure group IDs. Create a group in Lenses using the UUID as the name.
For example, if the Engineers group has the UUID ae3f363d-f0f1-43e6-8122-afed65147ef8, create a group named ae3f363d-f0f1-43e6-8122-afed65147ef8.
Learn more about Azure SSO
Go to Enterprise applications > + New Application
Search for Lenses.io in the gallery directory
Choose a name for Lenses e.g. Lenses.io and click Add
Select Set up single sign on > SAML
Configure the SAML details
Identifier (Entity ID) - use the base URL of the Lenses installation, e.g. https://lenses-dev.example.com
Reply URL - use the base URL with the callback details, e.g. https://lenses-dev.example.com/api/v2/auth/saml/callback?client_name=SAML2Client
Sign on URL - use the base URL
Download the Federation Metadata XML file with the Azure IdP details. You will reference this file’s path in the Lenses security.conf configuration file.
This page describes configuring Lenses with LDAP.
Lenses can be configured to use LDAP to handle user authentication.
The groups that a user belongs to (authorization) may come either from LDAP (automatic mapping) or via manually mapping an LDAP user to a set of Lenses groups.
All the user’s groups are then matched by name (case-sensitive) with the groups stored in Lenses, and the permissions of all matching groups are combined. If a user has been manually assigned a set of Lenses groups, the groups coming from LDAP are ignored.
Active Directory (AD) and OpenLDAP (with the memberOf overlay if LDAP group mapping is required) servers are tested and supported in general.
Due to the LDAP standard ambiguity, it is impossible to support all the configurations in the wild. The most common pain point is LDAP group mapping. If the default class that extracts and maps LDAP groups to Lenses groups does not work, it is possible to implement your own.
Before setting up an LDAP connection, we advise you to familiarize yourself with LDAP and/or have access to your LDAP and/or Active Directory administrators.
An LDAP setup example with LDAP group mapping is shown below:
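```
# Illustrative values; the key names are from the configuration reference
lenses.security.ldap.url="ldaps://ldap.example.com:636"
lenses.security.ldap.user="cn=lenses,ou=Services,dc=example,dc=com"
lenses.security.ldap.password="changeme"
lenses.security.ldap.base="ou=Users,dc=example,dc=com"
lenses.security.ldap.filter="(&(objectClass=person)(sAMAccountName=<user>))"
lenses.security.ldap.plugin.class="io.lenses.security.ldap.LdapMemberOfUserGroupPlugin"
lenses.security.ldap.plugin.memberof.key="memberOf"
lenses.security.ldap.plugin.group.extract.regex="(?i)cn=(\\w+),ou=Groups.*"
lenses.security.ldap.plugin.person.name.key="sn"
```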
In the example above you can distinguish three key sections for LDAP:
the connection settings,
the user search settings,
and the group search settings.
Lenses uses the connection settings to connect to your LDAP server. The provided account should be able to list users under the base path and their groups. The default group plugin only needs access to the memberOf attributes for each user, but your custom implementation may need different permissions.
When a user tries to log in, a query is sent to the LDAP server for all accounts that are under the lenses.security.ldap.base and match the lenses.security.ldap.filter. The result needs to be unique: a distinguished name (DN), the user that will log in to Lenses.
In the example above, the application would query the LDAP server for all entities under ou=Users,dc=example,dc=com that satisfy the LDAP filter (&(objectClass=person)(sAMAccountName=<user>)), where <user> is replaced by the username trying to log in to Lenses. A simpler filter could be cn=<user>, which for user Mark would return the DN cn=Mark,ou=Users,dc=example,dc=com.
Once the user has been verified, Lenses queries the user groups and maps them to Lenses groups. For every LDAP group that matches a Lenses group, the user is granted the selected permissions.
Depending on the LDAP setup, only the user themselves or only the Lenses service user may be able to retrieve the group memberships. This is controlled by the option lenses.security.ldap.use.service.user.search. The default value (false) uses the user itself to query for groups. Groups can be created in the admin section of the web interface, or on the command line via the lenses-cli application.
Set lenses.security.ldap.use.service.user.search to true to use the lenses.security.ldap.user account to list a logged-in user’s groups, for when your LDAP setup restricts users from listing their own groups.
When working with LDAP or Active Directory, user and group management is done in LDAP.
Lenses provides fine-grained role-based access (RBAC) for your existing groups of users over data and applications.
Create a group in Lenses with the same name (case-sensitive) as in LDAP/AD.
If mapping LDAP groups to Lenses groups is not desired, manually map LDAP users to Lenses groups using the web interface or the lenses-cli.
LDAP still provides the authentication, but all LDAP groups for this user are ignored.
When you create an LDAP user in Lenses, the username will be used in the search expression set in lenses.security.ldap.filter to authenticate them. If no user should be allowed to use the groups coming from LDAP, then this functionality should be disabled.
Set lenses.security.ldap.plugin.memberof.key or lenses.security.ldap.plugin.group.extract.regex to a bogus entry, rendering it unusable.
An example would be:
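```
# Any value that matches no real attribute effectively disables LDAP group mapping
lenses.security.ldap.plugin.memberof.key="disable-ldap-group-mapping"
```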
The group extract plugin is a class that implements an LDAP query that retrieves a user’s groups and makes any necessary transformation to match the LDAP group to a Lenses group name.
The default class implementation that comes with Lenses is io.lenses.security.ldap.LdapMemberOfUserGroupPlugin.
If your LDAP server supports the memberOf functionality, where each user has his/her group memberships added as attributes to his/her entity, you can use it by setting the lenses.security.ldap.plugin.class option to this class:
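```
lenses.security.ldap.plugin.class="io.lenses.security.ldap.LdapMemberOfUserGroupPlugin"
```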
Below you will see a brief example of its setup.
As an example, the memberOf search may return two attributes for user Mark:
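```
# Illustrative attribute values
memberOf: cn=LensesAdmins,ou=Groups,dc=example,dc=com
memberOf: cn=Devs,ou=Groups,dc=example,dc=com
```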
The regular expression (?i)cn=(\w+),ou=Groups.* will return these two regex group matches:
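```
LensesAdmins
Devs
```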
If any of these groups exist in Lenses, Mark will be granted the permissions of the matching groups.
The lenses.security.ldap.plugin.group.extract.regex should contain exactly one regular expression capturing group. If you need more groups for your matching purposes, use non-capturing groups, e.g. (?:groupRegex).
As an example, the regular expression (?i)cn=((?:Kafka|Apps)Admin),ou=Groups,dc=example,dc=com applied to the memberOf attributes:
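```
# Illustrative attribute values
memberOf: cn=KafkaAdmin,ou=Groups,dc=example,dc=com
memberOf: cn=AppsAdmin,ou=Groups,dc=example,dc=com
```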
will return these two regex group matches:
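```
KafkaAdmin
AppsAdmin
```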
If your LDAP does not offer the memberOf functionality or uses a complex setup, you can provide your own implementation. Start with the code on GitHub, create a JAR, add it to the plugins/ folder, and set your implementation’s full classpath:
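```
# Replace with your implementation's class (hypothetical name shown)
lenses.security.ldap.plugin.class="com.example.ldap.MyGroupPlugin"
```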
Do not forget to grant the account any permissions it may need for your plugin to work.
The following configuration entries are specific to the default group plugin. A custom LDAP plugin might require different entries under lenses.security.ldap.plugin:
This page describes creating and managing groups in Lenses.
A Group is a collection of permissions that defines the level of access for users belonging to it. Groups consist of:
Namespaces
Application permissions
Administration permissions
Groups must be pre-created, and the group's names in Lenses must match (case sensitive) those in the SSO provider.
To create a new Group, go to Admin->Groups->New Group.
For every Group, you must set the data namespaces for Kafka or other available connections to data sources.
Groups must be given names, optionally a description.
Namespace permissions define the access to datasets.
Each group must have a namespace. A namespace is a set of permissions that apply to topics or a set of topics, for example, prod*. This allows you to define virtual multi-tenancy.
Application permissions define how a user can interact with applications and linked resources associated with those datasets.
Application permissions cover:
Viewing or resetting Consumer group offsets linked to a group's namespaces
Deploying or viewing connectors linked to a group's namespaces
Deploying or viewing SQL Processors linked to a group's namespaces
Additionally, application permissions define whether a group can access a specified Connect cluster.
Admin permissions refer to activities that are in the global scope of Lenses and affect all the related entities.
This page describes configuring Lenses with OneLogin SSO.
Groups are case-sensitive and mapped to roles, by name, with OneLogin
Integrate your user roles with Lenses using the OneLogin role names. Create a group in Lenses using the same case-sensitive role name as in OneLogin.
For example, if the Engineers role is available in OneLogin, create a group with the same name.
Lenses is available in the OneLogin Application catalog.
Visit OneLogin’s Administration console. Select Applications > Applications > Add App
Search and select Lenses
Optionally add a description and click save
In the Configuration section, set the base path from the URL of the Lenses installation, e.g. lenses-dev.example.com (without the https://)
Click Save
Use the More Actions button to download the SAML Metadata
You will reference this file’s path in the security.conf configuration file.
This page describes managing users in Lenses.
Users must be assigned to a group. SSO and LDAP users are mapped to a group matching the group name provided by the IdP.
Group name matching is case-sensitive.
Multiple types of users can be supported at the same time.
Select Admin->Users->New User->Basic Auth.
By default, users are mapped to the group provided by the SSO provider. If you wish to override the group mapping from your SSO, users can be created directly in Lenses and you can manually map the user to a group.
By default, users are mapped to the group provided by the LDAP server. If you wish to override the group mapping, you can manually map the user to a group.
Lenses allows you to view users along with their:
Authentication type
Groups they belong to
Last login
Go to Admin -> Users.
This page describes how to configure the storage layer of Lenses.
Lenses state can be stored:
on the local filesystem - (quick start and default option)
in a PostgreSQL database - (recommended) and takes preference when configured
Start with Postgres if possible to avoid migrations from H2 when moving to production. H2 is not recommended in production environments.
If any Postgres configuration is defined either in lenses.conf or security.conf, the storage mode will switch to Postgres.
Database settings go in security.conf.
By default, Lenses will store its internal state in the storage folder. We advise explicitly setting this location, ensuring the Lenses process has permission to read and write files in this directory, and having an upgrade and backup policy.
Lenses can persist its internal state to a remote PostgreSQL database server.
Current minimum requirements:
Postgres server running version 9.6 or higher
The recommended configuration is to create a dedicated login role and database for the agent, setting the agent role as the database owner. This will mean the agent will only be able to manage that database and require no superuser privileges.
Example psql command for initial setup:
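```sql
-- Illustrative role, database name and password
CREATE ROLE lenses WITH LOGIN PASSWORD 'changeme';
CREATE DATABASE lenses WITH OWNER lenses;
```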
You can then configure Lenses as so:
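```
# Key names from the configuration reference; values are placeholders
lenses.storage.postgres.host="postgres.example.com"
lenses.storage.postgres.port=5432
lenses.storage.postgres.database="lenses"
lenses.storage.postgres.username="lenses"
lenses.storage.postgres.password="changeme"
```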
Enabling PostgreSQL storage for an existing Lenses installation means the data will be automatically migrated to the PostgreSQL schema on the first run.
After this process has succeeded, a lensesdb.postgresql.migration file will be created in the local storage directory to flag that the migration has already been run. You can then delete the local storage directory and remove the lenses.storage.directory configuration.
If, for whatever reason, you want to re-run the migration to PostgreSQL, deleting the lensesdb.postgresql.migration file will cause Lenses to re-attempt migration on the next restart. The migration process will fail if it encounters any data that can’t be migrated into PostgreSQL, so re-running the migration should only be done on an empty PostgreSQL schema to avoid duplicate record failures.
Lenses uses the HikariCP library for high-performance database connection pooling.
The default settings should perform well but can be overridden via the lenses.storage.hikaricp configuration prefix. The supported parameters can be found in the HikariCP documentation.
CamelCase configuration keys are not supported in agent configuration and should be translated to "dot notation". For example:
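```
# HikariCP's camelCase maximumPoolSize becomes dot notation
lenses.storage.hikaricp.maximum.pool.size=10
```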
This page describes the JVM options for Lenses.
Lenses runs as a JVM app; you can tune runtime configurations via environment variables.
This page describes how to configure TLS for Lenses.
TLS settings go in security.conf.
To use a non-default global truststore, set the path via the LENSES_OPTS environment variable:
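```bash
export LENSES_OPTS="-Djavax.net.ssl.trustStore=/path/to/truststore.jks \
  -Djavax.net.ssl.trustStorePassword=changeme"
```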
To use a custom truststore, set the following in security.conf. Supported types: jks, pkcs12.
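```
# Assumed key names, following the lenses.ssl.keystore.* pattern
lenses.ssl.truststore.location="/path/to/truststore.jks"
lenses.ssl.truststore.password="changeme"
```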
To enable mutual TLS, set your keystore accordingly.
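```
lenses.ssl.keystore.location="/path/to/keystore.jks"
lenses.ssl.keystore.password="changeme"
lenses.ssl.key.password="changeme"
```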
Additional configuration for the PostgreSQL database connection can be passed under the lenses.storage.postgres.properties configuration prefix. The supported parameters can be found in the PostgreSQL JDBC driver documentation. For example:
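```
# sslmode is a standard PostgreSQL JDBC driver parameter
lenses.storage.postgres.properties.ssl=true
lenses.storage.postgres.properties.sslmode="verify-full"
```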
Azure SSO
Configure Azure SSO for Lenses.
Google SSO
Configure Google SSO for Lenses.
Keycloak SSO
Configure Keycloak SSO for Lenses.
Okta SSO
Configure Okta SSO for Lenses.
Onelogin SSO
Configure Onelogin SSO for Lenses.
LENSES_OPTS
For generic settings, such as the global truststore. Note that the docker image is using this to plug in a prometheus java agent for monitoring Lenses
LENSES_HEAP_OPTS
JVM heap options. The default is -Xmx3g -Xms512m, setting the heap size between 512MB and 3GB. The upper limit is set to 1.2GB on the Box development docker image.
LENSES_JMX_OPTS
Tune the JMX options for the JVM, e.g. to allow remote access.
LENSES_LOG4J_OPTS
Override Lenses logging configuration. Should only be used to set the logback configuration file, using the format -Dlogback.configurationFile=file:/path/to/logback.xml.
LENSES_PERFORMANCE_OPTS
JVM performance tuning. The default settings are -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=
This page describes configuring Lenses logging.
All logs are emitted unbuffered as a stream of events, both to stdout and to rotating files inside the logs/ directory.
The logback.xml file is used to configure logging.
If customization is required, it is recommended to adapt the default configuration rather than write your own from scratch.
The file can be placed in any of the following directories:
the directory where Lenses is started from
/etc/lenses/
agent installation directory.
The first one found, in the above order, is used, but to override this and use a custom location, set the following environment variable:
The default configuration file is set up to hot-reload any changes every 30 seconds.
The default log level is set to INFO (apart from some very verbose classes).
All the log entries are written to the output using the following pattern:
You can adjust this inside logback.xml to match your organization’s defaults.
Inside logs/ you will find three files: lenses.log, lenses-warn.log and metrics.log. The first contains all logs and is the same as the stdout. The second contains only messages at level WARN and above. The third contains timing metrics and can be useful for debugging.
The default configuration contains two cyclic buffer appenders: "CYCLIC-INFO" and "CYCLIC-METRICS". These appenders are required to expose the Lenses logs within the Admin UI.
This page describes how to install plugins in Lenses.
The following implementations can be specified:
Serializers/Deserializers Plug your serializer and deserializer to enable observability over any data format (i.e., protobuf / thrift)
Custom authentication Authenticate users on your proxy and inject permissions HTTP headers.
LDAP lookup Use multiple LDAP servers or your group mapping logic.
SQL UDFs User Defined Functions (UDF) that extend SQL and streaming SQL capabilities.
Once built, the jar files and any plugin dependencies should be added to Lenses and, in the case of Serializers and UDFs, to the SQL Processors if required.
On startup, Lenses loads plugins from the $LENSES_HOME/plugins/ directory and any location set in the environment variable LENSES_PLUGINS_CLASSPATH_OPTS. Lenses watches these locations, and dropping in a new plugin will hot-reload it. For the Lenses docker (and Helm chart), use /data/plugins.
Any first-level directories under the paths mentioned above, detected on startup, will also be monitored for new files. During startup, the list of monitored locations is shown in the logs to help confirm the setup.
Whilst all jar files may be added to the same directory (e.g. /data/plugins), it is suggested to use a directory hierarchy to make management and maintenance easier.
An example hierarchy for a set of plugins:
There are two ways to add custom plugins (UDFs and Serializers) to the SQL Processors: (1) making a tar.gz archive available at an HTTP(S) address, or (2) creating a custom docker image.
With this method, a tar archive, compressed with gzip, is created containing all plugin jars and their dependencies. This archive should then be uploaded to a web server that the SQL Processor containers can access, and its address set with the option lenses.kubernetes.processor.extra.jars.url.
Step by step:
Create a tar.gz file that includes all required jars at its root:
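```bash
# Run from the directory holding the plugin jars and their dependencies
tar -czf FILENAME.tar.gz *.jar
```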
Upload to a web server, e.g. https://example.net/myfiles/FILENAME.tar.gz
Set
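```
lenses.kubernetes.processor.extra.jars.url="https://example.net/myfiles/FILENAME.tar.gz"
```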
For the docker image, set the corresponding environment variable
The SQL Processors that run inside Kubernetes use the docker image lensesio-extra/sql-processor. It is possible to build a custom image with all the required jar files added under the /plugins directory, then set the lenses.kubernetes.processor.image.name and lenses.kubernetes.processor.image.tag options to point to the custom image.
Step by step:
Create a Docker image using lensesio-extra/sql-processor:VERSION as a base and add all required jar files under /plugins:
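```dockerfile
FROM lensesio-extra/sql-processor:VERSION
# Add all plugin jars and their dependencies
COPY plugins/*.jar /plugins/
```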
Upload the docker image to a registry:
Set
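```
# The image name and tag values are illustrative
lenses.kubernetes.processor.image.name="example-registry/sql-processor-custom"
lenses.kubernetes.processor.image.tag="VERSION"
```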
For the docker image, set the corresponding environment variables
This page describes how to retrieve Lenses JMX metrics.
The JMX endpoint is managed by the lenses.jmx.port option. To disable JMX, leave the option empty.
To enable monitoring of Lenses metrics:
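```
lenses.jmx.port=9015   # illustrative port
```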
To export via Prometheus exporter:
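```bash
# Assumes the standard Prometheus JMX exporter agent; jar and config paths are illustrative
export LENSES_OPTS="-javaagent:/opt/jmx_exporter/jmx_prometheus_javaagent.jar=9102:/opt/jmx_exporter/config.yaml"
```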
The Lenses Docker image (lensesio/lenses) automatically sets up the Prometheus endpoint. You only have to expose the 9102 port to access it.
This is done in two parts: first setting up the files that the JMX Agent requires, then the options to pass to the agent.
First, create a new folder called jmxremote.
To enable basic auth JMX, first create two files:
jmxremote.access
jmxremote.password
The password file holds the credentials that the JMX agent will check during client authentication:
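```
admin admin
guest admin
```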
The above registers two users:
UserA: username admin, password admin
UserB: username guest, password admin
The access file has authorization information, like who is allowed to do what.
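```
admin readwrite
guest readonly
```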
In the above, the admin user can do read and write operations in JMX, while the guest user can only read the JMX content.
Now, to enable JMX with basic auth protection, all we need to do is pass the following options to the JRE that runs the Java process we want to protect. Let’s assume this Java process is Kafka.
Change the permissions on both files so that only the owner can edit and view them:
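```bash
chmod 0600 jmxremote.access jmxremote.password
```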
If you do not change the permissions to 0600 and the ownership to the user that will run the JRE process, the JMX Agent will raise an error complaining that the process is not the owner of the files used for authentication and authorization.
Finally, export the following options in the environment of the user that will run Kafka:
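```bash
# Port 9581 is illustrative; adjust the file paths to where the files were created
export KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=9581 \
  -Dcom.sun.management.jmxremote.authenticate=true \
  -Dcom.sun.management.jmxremote.password.file=/path/to/jmxremote/jmxremote.password \
  -Dcom.sun.management.jmxremote.access.file=/path/to/jmxremote/jmxremote.access \
  -Dcom.sun.management.jmxremote.ssl=false"
```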
First setup JMX with basic auth as shown in the Secure JMX: Basic Auth page.
To enable TLS encryption/authentication in JMX, you need a JKS keystore and truststore.
Please note that both the JKS truststore and keystore should have the same password. The reason is that the javax.net.ssl class will use the password you pass for the keystore as the key password.
Let’s assume this Java process is Kafka and that you have installed the keystore.jks and truststore.jks under /etc/certs.
Export the following options in the environment of the user that will run Kafka:
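```bash
# The port is illustrative; both stores use the same password (see the note above)
export KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=9581 \
  -Dcom.sun.management.jmxremote.authenticate=true \
  -Dcom.sun.management.jmxremote.password.file=/path/to/jmxremote/jmxremote.password \
  -Dcom.sun.management.jmxremote.access.file=/path/to/jmxremote/jmxremote.access \
  -Dcom.sun.management.jmxremote.ssl=true \
  -Dcom.sun.management.jmxremote.registry.ssl=true \
  -Djavax.net.ssl.keyStore=/etc/certs/keystore.jks \
  -Djavax.net.ssl.keyStorePassword=changeme \
  -Djavax.net.ssl.trustStore=/etc/certs/truststore.jks \
  -Djavax.net.ssl.trustStorePassword=changeme"
```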
Rate limit the calls Lenses makes to Schema Registries and Connect Clusters.
Careful monitoring of the data managed by the configured Schema Registry is paramount in order for Lenses to provide its users with the most up-to-date data.
In most cases this monitoring doesn't cause any issues, but in some setups Lenses may be forced to access the Schema Registry too often.
If this happens, or if you want to make sure Lenses does not go over a rate limit imposed by the Schema Registry, it is possible to throttle Lenses usage of the Schema Registry's API.
In order to do so, it is possible to set the following Lenses configuration:
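```
# Key names from the configuration reference; the limits shown are illustrative
lenses.schema.registry.client.http.rate.type="session"
lenses.schema.registry.client.http.rate.maxRequests=100
lenses.schema.registry.client.http.rate.window="1 second"
```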
Doing so will make sure Lenses does not issue more than maxRequests over any window period.
The exact values will depend on things like the resources of the machine hosting the Schema Registry, the number of schemas, and how often new schemas are added, so some trial and error is required. These values should, however, define a rate smaller than the one allowed by the Schema Registry.
This page describes how to use Lenses to view metrics for a topic.
To view a live snapshot of the metrics for a topic, select the metrics tab for the topic.
This will show you metric information over the last 30 days, alert rules on the topic and low-level JMX metrics.
This page describes how to use Lenses to insert or delete messages in Kafka.
To insert a message, select Insert Message from the Action menu. Either enter a message, according to the topic schema or have a message auto-generated for you.
Messages are deleted based on an offset range. Select Delete Messages from the Action menu.
Allow users to create and manage their own topics and apply topic settings as guard rails.
For automation use the CLI
.
To create a topic go to Workspace->Explore->New Topic. Enter the name, partitions and replication factor.
If topic settings apply you will not be able to create the topic unless the rules have been met.
The Explore screen lists high-level details of the topics.
Selecting a topic allows you to drill into more details.
Topics marked for deletion will be highlighted with a D.
Compacted topics will be highlighted with a C.
To increase the number of partitions, select the topic, then select Increase Partitions from the actions menu. Increasing the number of partitions does not automatically rebalance the topic.
Topics inherit their configurations from the broker defaults. To override a configuration, select the topic, then the Configuration tab. Search for the desired configuration and edit its value.
To delete a topic, click the trash can icon.
Topics can only be deleted if all clients reading or writing to the topic have been stopped. The topic will be marked for deletion with a D until the clients have stopped.
To quickly find compacted or empty topics use quick filter checkboxes, for example, you can find all empty topics and perform a bulk delete action on them.
This page describes how to use Lenses to search for topics and fields across Kafka, Postgres and Elasticsearch.
This page describes how to use Lenses topic settings to provide governance when creating topics in your Kafka cluster.
Topic settings and naming rules allow for the enforcement of best practices when onboarding new teams and topics into your data platform.
Topic configuration rules can be used to enforce partition sizing, replication, and retention configuration during topic creation. Go to Admin->Topic Settings->Edit.
By setting naming conventions you can control how topics are named. To define a naming convention, go to Admin->Topic Settings->Edit. Naming rules allow you to select from predefined regex or apply your own.
This page describes how to use Lenses approval requests.
To enable Approval Requests for a group, grant the group Create Topic Request permission. When a user belonging to this group creates a topic it will be sent for approval first.
To enable approval requests, create a group with, or add to a group, the Create Topic Request permission to the data namespace.
Go to Admin->Audits->Requests, select the request, and click view.
Approve or reject the request. If you Approve the topic will be created.
This page describes how to use Lenses to download messages to CSV or JSON from a Kafka topic.
Only the data returned to the frontend is downloaded.
Data can be downloaded, optionally including headers, as JSON or as CSV with a choice of delimiters.
This page lists the available configurations in Lenses.
Reference documentation of all configuration and authentication options:
Set in lenses.conf
System or control topics are created by services for their internal use. Below is the list of built-in configurations to identify them.
_schemas
__consumer_offsets
_kafka_lenses_
lsql_*
lsql-*
__transaction_state
__topology
__topology__metrics
_confluent*
*-KSTREAM-*
*-TableSource-*
*-changelog
__amazon_msk*
Wildcard (*) is used to match any name in the path, capturing a list of topics rather than just one. When the wildcard is not specified, Lenses matches on the entry name provided.
Set in security.conf
LDAP or AD connectivity is optional. All settings are string.
Set in security.conf
An additional configuration setting, lenses.security.ldap.use.service.user.search, when set to true will use the lenses.security.ldap.user account to read the groups of the currently logged-in user. The default behaviour (false) uses the currently logged-in user to read group memberships.
Set in security.conf
Set in security.conf
Set in security.conf
Set in lenses.conf
If the record schemas are centralized, the connectivity to Schema Registry nodes is defined by a Lenses Connection.
There are two static config entries to enable/disable the deletion of schemas:
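```
lenses.schema.registry.delete=true          # allow schema deletion
lenses.schema.registry.cascade.delete=true  # delete schemas when their topic is deleted
```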
Set in lenses.conf
Options for specific deployment targets:
Global options
Kubernetes
Common settings, independently of the underlying deployment target:
Kubernetes connectivity is optional. Minimum supported K8 version 0.11.10. All settings are string.
Set in lenses.conf
Optimization settings for SQL queries.
Set in lenses.conf
Lenses requires these Kafka topics to be available, otherwise, it will try to create them. The topics can be created manually before Lenses is run, or allow Lenses the correct Kafka ACLs to create the topics:
Set in lenses.conf
To allow for fine-grained control over the replication factor of the three topics, the following settings are available:
When configuring the replication factor for your deployment, it's essential to consider the requirements imposed by your cloud provider. Many cloud providers enforce a minimum replication factor to ensure data durability and high availability. For example, IBM Cloud mandates a minimum replication factor of 3. Therefore, it's crucial to set the replication factor for the Lenses internal topics to at least 3 when deploying Lenses on IBM Cloud.
All time configuration options are in milliseconds.
Set in lenses.conf
Set in lenses.conf
Control how Lenses identifies your connectors in the Topology view. Catalogue your connector types, set their icons, and control how Lenses extracts the topics used by your connectors.
Lenses comes preconfigured for some of the popular connectors as well as the Stream Reactor connectors. If Lenses doesn’t automatically identify your connector type, use the lenses.connectors.info setting to register it.
Add a new HOCON object {} for every new connector in your lenses.connectors.info list:
This configuration allows the connector to work with the topology graph, and also have the RBAC rules applied to it.
To extract the topic information from the connector configuration, source connectors require an extra configuration. The extractor class should be io.lenses.config.kafka.connect.SimpleTopicsExtractor. Using this extractor requires an extra property configuration: it specifies the field in the connector configuration that determines the topics data is sent to.
Here is an example for the file source:
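A sketch: apart from the extractor class and the property option described above, the field names inside the object are assumptions.

```
lenses.connectors.info = [
  {
    class.name = "org.apache.kafka.connect.file.FileStreamSourceConnector"
    name = "File Source"
    sink = false
    extractor.class = "io.lenses.config.kafka.connect.SimpleTopicsExtractor"
    property = "topic"   # connector config field holding the target topic
  }
]
```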
An example of a Splunk sink connector and a Debezium SQL Server connector:
Set in lenses.conf
This page describes how to use Lenses to view topic partition metrics and configuration.
To view topic partitions select the Partition tab. Here you can see a heat map of messages in the topic and their distribution across the partitions.
Is the map evenly distributed? If not, you might have partition skew.
Further information about the partitions and replicas is displayed, for example, whether the replicas are in-sync or not.
If the replicas are not in-sync an alert will be raised.
lenses.ip
Bind HTTP at the given endpoint. Use in conjunction with lenses.port
0.0.0.0
string
no
lenses.port
The HTTP port to listen for API, UI and WS calls
9991
int
no
lenses.jmx.port
Bind JMX port to enable monitoring Lenses
int
no
lenses.root.path
The path from which all the Lenses URLs are served
string
no
lenses.secret.file
The full path to security.conf
for security credentials
security.conf
string
no
lenses.sql.execution.mode
Streaming SQL mode IN_PROC
(test mode) or KUBERNETES
(prod mode)
IN_PROC
string
no
lenses.offset.workers
Number of workers to monitor topic offsets
5
int
no
lenses.telemetry.enable
Enable telemetry data collection
true
boolean
no
lenses.kafka.control.topics
An array of topics to be treated as “system topics”
list
array
no
lenses.grafana
Add your Grafana url i.e. http://grafanahost:port
string
no
lenses.api.response.cache.enable
If enabled, it disables client cache on the Lenses API HTTP responses by adding these HTTP Headers: Cache-Control: no-cache, no-store, must-revalidate
, Pragma: no-cache
, and Expires: -1
.
false
boolean
no
lenses.workspace
Directory to write temp files. If write access is denied, Lenses will fallback to /tmp
.
/run
string
no
lenses.access.control.allow.methods
HTTP verbs allowed in cross-origin HTTP requests
GET,POST,PUT,DELETE,OPTIONS
lenses.access.control.allow.origin
Allowed hosts for cross-origin HTTP requests
*
lenses.allow.weak.ssl
Allow https://
with self-signed certificates
false
lenses.ssl.keystore.location
The full path to the keystore file used to enable TLS on Lenses port
lenses.ssl.keystore.password
Password for the keystore file
lenses.ssl.key.password
Password for the ssl certificate used
lenses.ssl.enabled.protocols
Version of TLS protocol to use
TLSv1.2
lenses.ssl.algorithm
X509 or PKIX algorithm to use for TLS termination
SunX509
lenses.ssl.cipher.suites
Comma separated list of ciphers allowed for TLS negotiation
lenses.security.ldap.url
LDAP server URL (TLS, StartTLS and unencrypted supported)
lenses.security.ldap.user
LDAP user account. Must be able to list users and their groups. The distinguished name (DN) must be used
lenses.security.ldap.password
LDAP account password
lenses.security.ldap.base
LDAP base path for querying user accounts. All user accounts that will be able to access Lenses should be under this path
lenses.security.ldap.filter
LDAP query filter for matching users. Lenses will request all entries under the base path that satisfy this filter. The result should be unique
(&(objectClass=person)(sAMAccountName=<user>))
lenses.security.ldap.plugin.class
Full classpath that implements the LDAP query for the user’s groups. You can use the implementation that comes with Lenses if your LDAP setup is supported
lenses.security.ldap.plugin.memberof.key
LDAP user attribute that provides memberOf information. In most implementations the attribute has the same name, so you don’t have to set anything. Used by the default plugin
memberOf
lenses.security.ldap.plugin.group.extract.regex
A regular expression to extract a part of the user’s groups. If this part matches a Lenses group, the user will be granted all the permissions of this group. Lenses checks against the list of memberOf attribute values and uses the first regex group that is returned
(?i)CN=(\\w+),ou=Groups.*
lenses.security.ldap.plugin.person.name.key
This key is used by the included LDAP plugin class LdapMemberOfUserGroupPlugin. It expects the LDAP user attribute that provides the full name of the user
sn
lenses.security.saml.base.url
Lenses HTTPS URL that matches the Service Provider (SP) and part of the Identity Provider (IdP) SAML handshake i.e. https://lenses-dev.example.com
lenses.security.saml.sp.entityid
SAML Service Provider (SP) Entity ID for Lenses, used as part of the SAML handshake protocol.
lenses.security.saml.idp.provider
The Identity Provider (IdP) type: azure
, google
, keycloak
, okta
, onelogin
lenses.security.saml.idp.metadata.file
Path to XML file provided by the Identity Provider. e.g. /path/to/saml-idp.xml
lenses.security.saml.idp.session.lifetime.max
The maximum “duration since login” to accept from IdP. A SAML safety measure that is usually not used. See the duration syntax.
100days
lenses.security.saml.keystore.location
Location for the Java keystore file to be used for SAML crypto i.e. /path/to/keystore.jks
lenses.security.saml.keystore.password
Password for accessing the keystore
lenses.security.saml.key.alias
Alias to use for the private key within the keystore (only required when the keystore has multiple keys)
lenses.security.saml.key.password
Password for accessing the private key within the keystore
lenses.security.kerberos.service.principal
The Kerberos principal for Lenses to use in the SPNEGO form: HTTP/lenses.address@REALM.COM
lenses.security.kerberos.keytab
Path to Kerberos keytab with the service principal. It should not be password protected
lenses.security.kerberos.debug
Enable Java’s JAAS debugging information
false
lenses.storage.hikaricp.[*]
To pass additional properties to HikariCP connection pool
no
lenses.storage.directory
The full path to a directory for Lenses to use for persistence
"./storage"
string
no
lenses.storage.postgres.host
Host of PostgreSQL server for Lenses to use for persistence
string
no
lenses.storage.postgres.port
Port of PostgreSQL server for Lenses to use for persistence
5432
integer
no
lenses.storage.postgres.username
Username for PostgreSQL database user
string
no
lenses.storage.postgres.password
Password for PostgreSQL database user
string
no
lenses.storage.postgres.database
PostgreSQL database name for Lenses to use for persistence
string
no
lenses.storage.postgres.schema
PostgreSQL schema name for Lenses to use for persistence
"public"
string
no
lenses.storage.postgres.properties.[*]
To pass additional properties to PostgreSQL JDBC driver
no
lenses.storage.mssql.host
Specifies the hostname or IP address of the Microsoft SQL Server instance
string
yes
lenses.storage.mssql.port
Specifies the TCP port number that the Lenses application uses to connect to a Microsoft SQL Server database
int
yes
lenses.storage.mssql.schema
Specifies the database schema Lenses uses within Microsoft SQL Server
string
yes
lenses.storage.mssql.database
Specifies the Microsoft SQL server database Lenses connects to
string
yes
lenses.storage.mssql.username
Specifies the username that the Lenses application uses to authenticate with the Microsoft SQL Server database
string
yes
lenses.storage.mssql.password
Specifies the password that the Lenses application uses to authenticate with the Microsoft SQL Server database
string
yes
lenses.storage.mssql.properties
Allows additional properties to be set for the Microsoft SQL Server JDBC driver
no
lenses.schema.registry.delete
Allow schemas to be deleted. Default is false
boolean
lenses.schema.registry.cascade.delete
Deletes associated schemas when a topic is deleted. Default is false
boolean
lenses.deployments.events.buffer.size
Buffer size for events coming from Deployment targets such as Kubernetes
10000
lenses.deployments.errors.buffer.size
Buffer size for errors happening on the communication between Lenses and the Deployment targets such as Kubernetes
1000
lenses.kubernetes.processor.image.name
The url for the streaming SQL Docker for K8
lensesioextra/sql-processor
lenses.kubernetes.processor.image.tag
The version/tag of the above container
5.2
lenses.kubernetes.config.file
The path for the kubectl
config file
/home/lenses/.kube/config
lenses.kubernetes.pull.policy
Pull policy for K8 containers: IfNotPresent
or Always
IfNotPresent
lenses.kubernetes.service.account
The service account for deployments. Will also pull the image
default
lenses.kubernetes.init.container.image.name
The docker/container repository url and name of the Init Container image used to deploy applications to Kubernetes
lensesio/lenses-cli
lenses.kubernetes.init.container.image.tag
The tag of the Init Container image used to deploy applications to Kubernetes
5.2.0
lenses.kubernetes.watch.reconnect.limit
How many times to reconnect to Kubernetes Watcher before considering the cluster unavailable
10
lenses.kubernetes.watch.reconnect.interval
How often to wait between Kubernetes Watcher reconnection attempts expressed in milliseconds
5000
lenses.kubernetes.websocket.timeout
How long to wait for a Kubernetes Websocket response expressed in milliseconds
15000
lenses.kubernetes.websocket.ping.interval
How often to ping Kubernetes Websocket to check it’s alive expressed in milliseconds
30000
lenses.kubernetes.pod.heap
The max amount of memory the underlying Java process will use
900M
lenses.kubernetes.pod.min.heap
The initial amount of memory the underlying Java process will allocate
128M
lenses.kubernetes.pod.mem.request
The value will control how much memory resource the Pod Container will request
128M
lenses.kubernetes.pod.mem.limit
The value will control the Pod Container memory limit
1152M
lenses.kubernetes.pod.cpu.request
The value will control how much cpu resource the Pod Container will request
null
lenses.kubernetes.pod.cpu.limit
The value will control the Pod Container cpu limit
null
lenses.kubernetes.namespaces
Object setting a list of Kubernetes namespaces that Lenses will see for each of the specified and configured cluster
null
lenses.kubernetes.pod.liveness.initial.delay
Amount of time Kubernetes will wait to check Processor’s health for the first time. It can be expressed like 30 second, 2 minute or 3 hour, mind the time unit is singular
60 second
lenses.deployments.events.buffer.size
Buffer size for events coming from Deployment targets such as Kubernetes
10000
lenses.deployments.errors.buffer.size
Buffer size for errors happening on the communication between Lenses and the Deployment targets such as Kubernetes
1000
lenses.kubernetes.config.reload.interval
Time interval to reload the Kubernetes configuration file. Expressed in milliseconds.
30000
lenses.sql.settings.max.size
Restricts the max bytes that a kafka sql query will return
long
20971520
(20MB)
lenses.sql.settings.max.query.time
Max time (in msec) that a sql query will run
int
3600000
(1h)
lenses.sql.settings.max.idle.time
Max time (in msec) for a query when it reaches the end of the topic
int
5000
(5 sec)
lenses.sql.settings.show.bad.records
By default show bad records when querying a kafka topic
boolean
true
lenses.sql.settings.format.timestamp
By default convert AVRO date to human readable format
boolean
true
lenses.sql.settings.live.aggs
By default allow aggregation queries on kafka data
boolean
true
lenses.sql.sample.default
Number of messages to sample when live tailing a kafka topic
int
2 / window
lenses.sql.sample.window
How frequently to sample messages when tailing a kafka topic
int
200 msec
lenses.sql.websocket.buffer
Buffer size for messages in a SQL query
int
10000
lenses.metrics.workers
Number of workers for parallelising SQL queries
int
16
lenses.kafka.ws.buffer.size
Buffer size for WebSocket consumer
int
10000
lenses.kafka.ws.max.poll.records
Max number of kafka messages to return in a single poll()
long
1000
lenses.sql.state.dir
Folder to store KStreams state.
string
logs/lenses-sql-kstream-state
lenses.sql.udf.packages
The list of allowed java packages for UDFs/UDAFs
array of strings
["io.lenses.sql.udf"]
lenses.topics.external.topology
Topic for applications to publish their topology
1
3
(recommended)
__topology
yes
N/A
lenses.topics.external.metrics
Topic for external application to publish their metrics
1
3
(recommended)
__topology__metrics
no
1 day
lenses.topics.metrics
Topic for SQL Processor to send the metrics
1
3
(recommended)
_kafka_lenses_metrics
no
lenses.topics.replication.external.topology
Replication factor for the lenses.topics.external.topology
topic
1
lenses.topics.replication.external.metrics
Replication factor for the lenses.topics.external.metrics
topic
1
lenses.topics.replication.metrics
Replication factor for the lenses.topics.metrics
topic
1
lenses.interval.summary
How often to refresh kafka topic list and configs
long
10000
lenses.interval.consumers.refresh.ms
How often to refresh kafka consumer group info
long
10000
lenses.interval.consumers.timeout.ms
How long to wait for kafka consumer group info to be retrieved
long
300000
lenses.interval.partitions.messages
How often to refresh kafka partition info
long
10000
lenses.interval.type.detection
How often to check kafka topic payload info
long
30000
lenses.interval.user.session.ms
How long a client-session stays alive if inactive (4 hours)
long
14400000
lenses.interval.user.session.refresh
How often to check for idle client sessions
long
60000
lenses.interval.topology.topics.metrics
How often to refresh topology info
long
30000
lenses.interval.schema.registry.healthcheck
How often to check the schema registries health
long
30000
lenses.interval.schema.registry.refresh.ms
How often to refresh schema registry data
long
30000
lenses.interval.metrics.refresh.zk
How often to refresh ZK metrics
long
5000
lenses.interval.metrics.refresh.sr
How often to refresh Schema Registry metrics
long
5000
lenses.interval.metrics.refresh.broker
How often to refresh Kafka Broker metrics
long
5000
lenses.interval.metrics.refresh.connect
How often to refresh Kafka Connect metrics
long
30000
lenses.interval.metrics.refresh.brokers.in.zk
How often to refresh from ZK the Kafka broker list
long
5000
lenses.interval.topology.timeout.ms
Time period when a metric is considered stale
long
120000
lenses.interval.audit.data.cleanup
How often to clean up dataset view entries from the audit log
long
300000
lenses.audit.to.log.file
Path to a file to write audits to in JSON format.
string
lenses.interval.jmxcache.refresh.ms
How often to refresh the JMX cache used in the Explore page
long
180000
lenses.interval.jmxcache.graceperiod.ms
How long to pause when a JMX connectivity error occurs
long
300000
lenses.interval.jmxcache.timeout.ms
How long to wait for a JMX response
long
500
lenses.interval.sql.udf
How often to look for new UDF/UDAF (user defined [aggregate] functions)
long
10000
lenses.kafka.consumers.batch.size
How many consumer groups to retrieve in a single request
Int
500
lenses.kafka.ws.heartbeat.ms
How often to send heartbeat messages in TCP connection
long
30000
lenses.kafka.ws.poll.ms
Max time for kafka consumer data polling on WS APIs
long
10000
lenses.kubernetes.config.reload.interval
Time interval to reload the Kubernetes configuration file.
long
30000
lenses.kubernetes.watch.reconnect.limit
How many times to reconnect to Kubernetes Watcher before considering the cluster unavailable
long
10
lenses.kubernetes.watch.reconnect.interval
How often to wait between Kubernetes Watcher reconnection attempts
long
5000
lenses.kubernetes.websocket.timeout
How long to wait for a Kubernetes Websocket response
long
15000
lenses.kubernetes.websocket.ping.interval
How often to ping Kubernetes Websocket to check it’s alive
long
30000
lenses.akka.request.timeout.ms
Max time for a response in an Akka Actor
long
10000
lenses.sql.monitor.frequency
How often to emit healthcheck and performance metrics on Streaming SQL
long
10000
lenses.audit.data.access
Record dataset access as audit log entries
boolean
true
lenses.audit.data.max.records
How many dataset view entries to retain in the audit log. Set to -1
to retain indefinitely
int
500000
lenses.explore.lucene.max.clause.count
Override Lucene’s maximum number of clauses permitted per BooleanQuery
int
1024
lenses.explore.queue.size
Optional setting to bound Lenses internal queue used by the catalog subsystem. It needs to be positive integer or it will be ignored.
int
N/A
lenses.interval.kafka.connect.http.timeout.ms
How long to wait for Kafka Connect response to be retrieved
int
10000
lenses.interval.kafka.connect.healthcheck
How often to check the Kafka health
int
15000
lenses.interval.schema.registry.http.timeout.ms
How long to wait for Schema Registry response to be retrieved
int
10000
lenses.interval.zookeeper.healthcheck
How often to check the Zookeeper health
int
15000
lenses.ui.topics.row.limit
The number of Kafka records to load automatically when exploring a topic
int
200
lenses.deployments.connect.failure.alert.check.interval
Time interval in seconds to check the connector failure grace period has completed. Used by the Connect auto-restart failed connectors functionality. It needs too be a value between (1,600].
int
10
lenses.provisioning.path
Folder on the filesystem containing the provisioning data. See [provisioning docs](link to provisioning docs) for further details
string
lenses.provisioning.interval
Time interval in seconds to check for changes on the provisioning resources
int
lenses.schema.registry.client.http.retryOnTooManyRequest
When enabled, Lenses will retry a request whenever the schema registry returns a 429 Too Many Requests
boolean
lenses.schema.registry.client.http.maxRetryAwait
Max amount of time to wait whenever a 429 Too Many Requests
is returned.
duration
lenses.schema.registry.client.http.maxRetryCount
Max retry count whenever a 429 Too Many Requests
is returned.
integer
2
lenses.schema.registry.client.http.rate.type
Specifies if http requests to the configured schema registry should be rate limited. Can be "session" or "unlimited"
"unlimited" | "session"
lenses.schema.registry.client.http.rate.maxRequests
Whenever the rate limiter is "session" this configuration will determine the max amount of requests per window size that are allowed.
integer
N/A
lenses.schema.registry.client.http.rate.window
Whenever the rate limiter is "session" this configuration will determine the duration of the window used.
duration
N/A
lenses.schema.connect.client.http.retryOnTooManyRequest
Retry a request whenever a connect cluster returns a 429 Too Many Requests
boolean
lenses.schema.connect.client.http.maxRetryAwait
Max amount of time to wait whenever a 429 Too Many Requests
is returned.
duration
lenses.schema.connect.client.http.maxRetryCount
Max retry count whenever a 429 Too Many Requests
is returned.
integer
2
lenses.connect.client.http.rate.type
Specifies if http requests to the configured connect cluster should be rate limited. Can be "session" or "unlimited"
"unlimited" | "session"
lenses.connect.client.http.rate.maxRequests
Whenever the rate limiter is "session" this configuration will determine the max amount of requests per window size that are allowed.
integer
N/A
lenses.connect.client.http.rate.window
Whenever the rate limiter is "session" this configuration will determine the duration of the window used.
duration
N/A
apps.external.http.state.refresh.ms
When registering a runner for external app, a health-check interval can be specified. If it is not, this default interval is used (value in milliseconds)
30000
int
no
apps.external.http.state.cache.expiration.ms
Last known state of the runner is stored in a cache. The entries in the cache are being invalidated after a time that is defined by following configuration key (value in milliseconds). This value should not be lower than the apps.external.http.state.refresh.ms
value.
60000
int
no
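These options are set in lenses.conf. A minimal sketch overriding a few of them (the values shown simply restate the defaults above):

```
lenses.interval.user.session.ms = 14400000   # 4 hours
lenses.interval.metrics.refresh.broker = 5000
lenses.audit.data.max.records = 500000
```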
This page describes how Lenses integrates with Kafka Connect to create, manage, and monitor connectors via multiple connect clusters.
For documentation about the available Lenses Apache 2.0 Connectors, see the Stream Reactor documentation.
For automation, use the CLI.
To connect your Connect Clusters see provisioning.
Lenses connects to Connect clusters via the Connect APIs. You can deploy connectors outside of Lenses, and Lenses will still be able to see and manage them.
You can connect Lenses to one or more Kafka Connect clusters. Once connected, Lenses will list the available Connector plugins that are installed in each Cluster. Additionally, Connectors can automatically be restarted and alert notifications sent.
To list the currently deployed connectors go to Workspace->Connectors. Lenses will display a list of connectors and their status.
Once a connector has been created, selecting the connector allows you to:
View its configuration
Update its configurations (Action)
View individual task configurations
View metrics
View exceptions.
To view the YAML specification as Code, select the Code tab in the Connector details page.
To download the YAML specification, click the Download button.
To create a new connector go to Workspace->Connectors->New Connectors.
Select the Connect Cluster you want to use and Lenses will display the plugins installed in the Connect Cluster.
Connectors are searchable by:
Type
Author
After selecting a connector, enter the configuration of the connector instance. Lenses will show the documentation for the currently selected option.
To deploy and start the connector, click Create.
Creation of a Connector as code can be done via either
Selecting Configure Connector->Configure As Code from the main connector page, or
Selecting a Connect Cluster and Connector, then the Code tab
Both options allow for direct input of a connector's YAML specification or uploading an existing file; a sketch of such a specification follows.
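As a hedged illustration only (copy the authoritative shape from the Code tab of an existing connector; the class and fields below are examples):

```yaml
name: my-s3-sink
config:
  connector.class: io.lenses.streamreactor.connect.aws.s3.sink.S3SinkConnector  # example plugin
  topics: payments
  tasks.max: "1"
```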
Connectors can be stopped, restarted, and deleted via the Actions button.
This page describes how to use Lenses to view and manage topic configurations in Kafka.
To view a configuration for a topic select the Configuration tab. Here you will see the current configurations inherited (default) from the brokers and if they have been overridden (current value).
To edit a configuration click the Edit icon and enter your value.
This page describes the available Apache 2.0 Source Connectors from Lenses. Lenses can also work with any other Kafka Connect Connector.
Lenses supports any Connector implementing the Connect APIs, bring your own or use community connectors.
You need to add the connector information for them to be visible in the Topology.
Enterprise support is also offered for connectors in the Stream Reactor project, managed and maintained by the Lenses team.
This page describes the available Apache 2.0 Sink Connectors from Lenses. Lenses can also work with any other Kafka Connect Connector.
Lenses supports any Connector implementing the Connect APIs, bring your own or use community connectors.
You need to add the connector information for them to be visible in the Topology.
Enterprise support is also offered for connectors in the Stream Reactor project, managed and maintained by the Lenses team.
This quick start guide will walk you through installing and starting Lenses using Docker, followed by connecting Lenses to your Kafka cluster.
For a local quick start, you can use Lenses Box, an all-in-one docker, with Lenses, Kafka, Schema Registry and more. Lenses will start and be configured to connect to the built-in Kafka brokers.
To get started with Box:
Install and run the Docker image
Open Lenses in your browser and log in with admin/admin.
For more information, see the Box documentation.
If you want to deploy via Helm, see the Helm deployment documentation.
For production and automated deployments, see provisioning.
Lenses starts in a bootstrap mode. This guides you through adding the minimum requirements for Lenses to start: a license and connection details for Kafka.
Kafka versions - Any version of Apache Kafka (2.0 or newer) on-premise and on-cloud.
Network connectivity - Lenses needs access to your Kafka brokers.
Run the following command to pull the latest Lenses image and run it:
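A representative command (the image tag and the UI port, 9991 by default, may differ for your setup):

```bash
docker run -d --name lenses -p 9991:9991 lensesio/lenses:5.5
```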
This section describes the integrations available for alerting.
Alerts are sent to channels.
See for integration into your CI/CD pipelines.
To send alerts to AWS Cloud Watch, you first need an AWS connection. Go to Admin->Connections->New Connection->AWS. Enter your AWS Credentials.
Rather than entering your AWS credentials, you can use the AWS credentials provider chain.
Next, go to Admin->Alerts->Channels->New Channel->AWS Cloud Watch.
Select an AWS connection.
To send alerts to Datadog, you first need a Datadog connection. Go to Admin->Connections->New Connection->DataDog. Enter your API, Application Key and Site.
Next, go to Admin->Alerts->Channels->New Channel->Data Dog.
Select a DataDog connection.
To send alerts to Pager Duty, you first need a Pager Duty connection. Go to Admin->Connections->New Connection->PagerDuty. Enter your Service Integration Key.
Next, go to Admin->Alerts->Channels->New Channel->Pager Duty.
Select the Pager Duty connection.
To send alerts to Prometheus Alertmanager, you first need a Prometheus connection. Go to Admin->Connections->New Connection->Prometheus.
Select your Prometheus connection
Set the Source
Set the GeneratorURL for your Alert Manager instance
To send alerts to Slack, you first need a Slack connection. Go to Admin->Connections->New Connection->Slack. Enter your Slack webhook URL.
Next, go to Admin->Alerts->Channels->New Channel->Slack.
Enter the Slack channel you want to send alerts to.
Webhooks allow you to send alerts to any service implementing them, they are very flexible.
First, you need a Webhook connection. Go to Admin->Connections->New Connection
Enter the URL, port and credentials.
Create a Channel to use the connection. Go to Admin->Alerts->Channels->New Channel.
Choose a name for your Channel instance.
Select your connection.
Set the HTTP method to use.
Set the Request path: a URI-encoded request path, which may include a query string. Supports alert-variable interpolation.
Set the HTTP Headers
Set the Body payload
In the Request path, HTTP Headers and Body payload fields you can use template variables, which will be translated to alert-specific values. Template variables use the format {{VARIABLE}}, e.g. {{LEVEL}}.
Supported template variables:
LEVEL - alert level (INFO, LOW, MEDIUM, HIGH, CRITICAL).
CATEGORY - alert category (Infrastructure, Consumers, Kafka Connect, Topics, Producers).
INSTANCE - the affected instance (broker URL / topic name etc.).
SUMMARY - alert summary; the same content as in the Alert Events tab.
TIMESTAMP - the time of the alert event.
ID - alert global id (e.g. 1000 for the BrokerStatus alert).
CREDS - CREDS[0] etc.; variables specified in the connection's Credentials as a list of comma-separated values.
To configure real-time email alerts you can leverage Webhooks, for example with the following service:
Twilio and SendGrid
Zapier
Create a webhook connection, for SendGrid with api.sendgrid.com as the host and enable HTTPS
Configure a channel to use the connection you just created
Set the method to POST
Set the request path to /v3/mail/send
Set the Headers to:
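```
Authorization: Bearer [your-Sendgrid-API-Key]
Content-Type: application/json
```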
Set the payload to:
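A sketch of a SendGrid v3 mail-send payload using the alert template variables (the recipient address is an example):

```json
{
  "personalizations": [{ "to": [{ "email": "alerts@example.com" }] }],
  "from": { "email": "[sender-email-address]" },
  "subject": "Lenses alert: {{LEVEL}}",
  "content": [
    { "type": "text/plain", "value": "{{CATEGORY}} / {{INSTANCE}}: {{SUMMARY}}" }
  ]
}
```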
Change the above payload according to your requirements, and remember that the [sender-email-address] needs to be the same email address you registered during the SendGrid Sender Authentication setup process.
Create a webhook connection for Zapier with hooks.zapier.com as the host and enable HTTPS
Configure a channel to use the connection you just created
Set the method to POST
Set the request path to the webhook URL from your Zapier account
Set the Headers to Content-Type: application/json
Set the payload to the JSON you want Zapier to receive, for example:
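A sketch; Zapier accepts arbitrary JSON, so the field names below are only an example:

```json
{
  "level": "{{LEVEL}}",
  "category": "{{CATEGORY}}",
  "instance": "{{INSTANCE}}",
  "summary": "{{SUMMARY}}"
}
```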
You’ll need the second part of the URL:
/webhook2/<secret-token-by-ms>/IncomingWebhook/<secret-token-by-ms>
Create a new Webhook Connection, set the host to outlook.office.com and enable HTTPS
Configure a new channel, using this connection
Set the Method to POST
Set the Request Path to the second part of the URL you received from MS Teams
/webhook2/<secret-token-by-ms>/IncomingWebhook/<secret-token-by-ms>
In the body set:
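A minimal sketch; Teams incoming webhooks accept a simple JSON body with a text field:

```json
{
  "text": "{{LEVEL}} - {{CATEGORY}} - {{SUMMARY}}"
}
```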
This page describes the alert references for Lenses.
This section describes the monitoring and alerting features of Lenses.
For automation, use the CLI.
This page describes the available Apache 2.0 Connect Secret Providers from Lenses.
You are not limited to Lenses Secret Providers; you are free to use your own.
Valid license - Lenses is a licensed product. Get a trial license.
Once Lenses has started, open Lenses in your browser and log in with admin/admin.
You will be presented with the bootstrap UI that will guide you through connecting to Kafka.
The connection to your Kafka depends on your Kafka distribution; you can view more details in the documentation.
To create a webhook in your MS Teams workspace, you can use an Incoming Webhook.
At the end of the process you get a URL of the format:
/webhook2/<secret-token-by-ms>/IncomingWebhook/<secret-token-by-ms>
See Zapier and follow the blog post.
AWS S3
Load data from AWS S3 including restoring topics.
Azure Data Lake Gen2
Load data from Azure Data Lake Gen2 including restoring topics.
Azure Event Hubs
Load data from Azure Event Hubs into Kafka topics.
Azure Service Bus
Load data from Azure Service Bus into Kafka topics.
Cassandra
Load data from Cassandra into Kafka topics.
GCP PubSub
Load data from GCP PubSub into Kafka topics.
GCP Storage
Load data from GCP Storage including restoring topics.
FTP
Load data from files on FTP servers into Kafka topics.
JMS
Load data from JMS topics and queues into Kafka topics.
MQTT
Load data from MQTT into Kafka topics.
AWS S3
Sink data from Kafka to AWS S3 including backing up topics and offsets.
Azure CosmosDB
Sink data from Kafka to Azure CosmosDB.
Azure Data Lake Gen2
Sink data from Kafka to Azure Data Lake Gen2 including backing up topics and offsets.
Azure Event Hubs
Sink data from Kafka to Azure Event Hubs.
Azure Service Bus
Sink data from Kafka to Azure Service Bus topics and queues.
Cassandra
Sink data from Kafka to Cassandra.
Elasticsearch
Sink data from Kafka to Elasticsearch.
GCP PubSub
Sink data from Kafka to GCP PubSub.
GCP Storage
Sink data from Kafka to GCP Storage.
HTTP Sink
Sink data from Kafka to an HTTP endpoint.
InfluxDB
Sink data from Kafka to InfluxDB.
JMS
Sink data from Kafka to JMS.
MongoDB
Sink data from Kafka to MongoDB.
MQTT
Sink data from Kafka to MQTT.
Redis
Sink data from Kafka to Redis.
| Alert | ID | Description | Category | Instance | Levels |
|---|---|---|---|---|---|
| Kafka Broker is down | 1000 | Raised when a Kafka broker is not part of the cluster for at least 1 minute, e.g. host-1, host-2 | Infrastructure | brokerID | INFO, CRITICAL |
| Zookeeper Node is down | 1001 | Raised when the ZooKeeper node is not reachable. This information is based on ZooKeeper JMX; if the node responds to JMX queries it is considered to be running | Infrastructure | service name | INFO, CRITICAL |
| Connect Worker is down | 1002 | Raised when the Kafka Connect worker does not respond to the /connectors API call for more than 1 minute | Infrastructure | worker URL | MEDIUM |
| Schema Registry is down | 1003 | Raised when the Schema Registry node does not respond to the root API call for more than 1 minute | Infrastructure | service URL | HIGH, INFO |
| Under replicated partitions | 1005 | Raised when there are (topic, partition) pairs not meeting the configured replication factor | Infrastructure | partitions | HIGH, INFO |
| Partitions offline | 1006 | Raised when there are partitions without an active leader. These partitions are not writable or readable | Infrastructure | brokers | HIGH, INFO |
| Active Controllers | 1007 | Raised when the number of active controllers is not 1. Each cluster should have exactly one controller | Infrastructure | brokers | HIGH, INFO |
| Multiple Broker Versions | 1008 | Raised when brokers in the cluster run different Kafka versions | Infrastructure | brokers versions | HIGH, INFO |
| File-open descriptors high capacity on Brokers | 1009 | A broker has too many open file descriptors | Infrastructure | brokerID | HIGH, INFO, CRITICAL |
| Average % the request handler is idle | 1010 | Raised when the average fraction of time the request handler threads are idle drops too low. When the value is smaller than 0.02 the alert level is CRITICAL; when it is smaller than 0.1 the alert level is HIGH | Infrastructure | brokerID | HIGH, INFO, CRITICAL |
| Fetch requests failure | 1011 | Raised when the rate of failed fetch requests (per second) is greater than a threshold. If the value is greater than 0.1 the alert level is set to CRITICAL, otherwise it is set to HIGH | Infrastructure | brokerID | HIGH, INFO, CRITICAL |
| Produce requests failure | 1012 | Raised when the rate of failed produce requests (per second) is greater than a threshold. If the value is greater than 0.1 the alert level is set to CRITICAL, otherwise it is set to HIGH | Infrastructure | brokerID | HIGH, INFO, CRITICAL |
| Broker disk usage is greater than the cluster average | 1013 | Raised when the Kafka broker disk usage is greater than the cluster average. A default threshold of 1 GB of disk usage is provided | Infrastructure | brokerID | MEDIUM, INFO |
| Leader Imbalance | 1014 | Raised when the Kafka broker has more leader replicas than the cluster average | Infrastructure | brokerID | INFO |
| Consumer Lag exceeded | 2000 | Raised when the consumer lag exceeds the threshold on any partition | Consumers | topic | HIGH, INFO |
| Connector deleted | 3000 | A connector was deleted | Kafka Connect | connector name | INFO |
| Topic has been created | 4000 | A new topic was added | Topics | topic | INFO |
| Topic has been deleted | 4001 | A topic was deleted | Topics | topic | INFO |
| Topic data has been deleted | 4002 | Records from a topic were deleted | Topics | topic | INFO |
| Data Produced | 5000 | Raised when the data produced on a topic does not meet the expected threshold | Data Produced | topic | LOW, INFO |
| Connector Failed | 6000 | Raised when a connector, or any worker in a connector, is down | Apps | connector | LOW, INFO |
This page describes consumer group monitoring.
Consumer group monitoring is a key part of operating Kafka. Lenses allows operators to view and manage consumer groups.
The connector and SQL Processor pages allow you to navigate straight to the corresponding consumer groups.
The Explore screen also shows the active consumer groups on each topic.
To view consumer groups and the max and min lag across the partitions go to Workspace->Monitor->Consumers. You can also see this information for each topic in the Explore screen->Select topic->Partition tab.
Select, or search for, a consumer group; you can also search for consumer groups that are not active.
To view alerts for a consumer group, click the view alerts button. Resetting consumer group offsets is only possible if the consumer group is not active, i.e. the application (such as a Connector or SQL Processor) must be stopped. Enable Show inactive consumers to find them.
Select the consumer group
Select the partition to reset the offsets for
Specify the offset
To reset a consumer group (all clients in the group), select the consumer group, select Actions, and then Change Multiple offsets. This will reset all clients in the consumer group to one of:
To the start
To the last offset
To a specific timestamp
This page describes how to add a License.
Lenses requires a valid license to start. The license can be added via the UI when in bootstrap mode or at deployment time via the provisioning APIs.
See provisioning for integration into your CI/CD pipelines.
If at any point the license becomes invalid (it expired / too many brokers were added to the cluster) only the license page will be available.
See License Management.
This page describes how to connect Lenses to IBM Event Streams.
Lenses will not start without a valid Kafka Connection. You can either add the connection via the bootstrap wizard or use provisioning for automated deployments.
From the IBM Cloud console, locate the bootstrap_endpoints for the service credentials you want to connect with.
In the Lenses bootstrap UI:
Set the bootstrap_endpoints as the bootstrap servers
Set SASL_SSL as the security protocol
Set PLAIN as the security mechanism
Set the jaas.conf as the following, using the apiKey value as the password.
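A sketch; IBM Event Streams authenticates with the username token and your apiKey as the password (verify against your service credentials):

```
org.apache.kafka.common.security.plain.PlainLoginModule required
  username="token"
  password="<apiKey>";
```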
IBM Event Streams requires a replication factor of 3. Ensure you set the replication factor accordingly for Lenses internal topics.
This page describes configuring Lenses to connect to Aiven.
Lenses will not start without a valid Kafka Connection. You can either add the connection via the bootstrap wizard or use provisioning for automated deployments.
From the Aiven console, locate your Service URI.
In the Lenses bootstrap UI:
Set the Service URI as the bootstrap servers
Set SASL_SSL as the security protocol
Set SCRAM-SHA-256 as the security mechanism
Set the jaas.conf as the following:
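A sketch, assuming the avnadmin user Aiven creates by default (substitute your own user and password):

```
org.apache.kafka.common.security.scram.ScramLoginModule required
  username="avnadmin"
  password="<service-password>";
```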
This page describes installing Lenses via the Linux archive.
On start-up, Lenses will be in bootstrap mode unless it has an existing Kafka Connection. See provisioning for automating.
To install Lenses from the archive you must:
Extract the archive
Configure Lenses
Start Lenses
Extract the archive using the following command
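For example (the archive file name depends on the version you downloaded):

```bash
tar -xzf lenses-latest-linux64.tar.gz
cd lenses
```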
Inside the extracted archive, you will find:
Start Lenses by running:
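```bash
# assumes the launcher script shipped in the archive is bin/lenses
bin/lenses
```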
or pass the location of the config file:
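```bash
bin/lenses /path/to/lenses.conf
```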
If you do not pass the location of the config file, Lenses will look for it inside the current (runtime) directory. If it does not exist, it will try its installation directory.
To stop Lenses, press CTRL+C.
Open Lenses in your browser, log in with admin/admin, configure your brokers and add your license.
Set the permissions of security.conf to be readable only by the lenses user, for example:
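```bash
# paths are illustrative
chown lenses:lenses /opt/lenses/security.conf
chmod 600 /opt/lenses/security.conf
```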
The agent needs write access in 4-5 places in total:
[RUNTIME DIRECTORY]
When Lenses runs, it will create at least one directory under the directory it is run in:
[RUNTIME DIRECTORY]/logs
Where logs are stored
[RUNTIME DIRECTORY]/logs/lenses-sql-kstream-state
Where SQL Processors (when in In Process mode) store state. To change the location of the processors' state directory, use the lenses.sql.state.dir option.
[RUNTIME DIRECTORY]/storage
Where the H2 embedded database is stored when PostgreSQL is not set. To change this directory, use the lenses.storage.directory option.
/run
(Global directory for temporary data at runtime)
Used for temporary files. If Lenses does not have permission to use it, it will fall back to /tmp.
/tmp
(Global temporary directory)
Used for temporary files (if access to /run fails), and JNI shared libraries.
Back up this location for disaster recovery.
Lenses and Kafka use two common Java libraries that take advantage of JNI and are extracted to /tmp.
You must either:
Mount /tmp without noexec
or set org.xerial.snappy.tempdir and java.io.tmpdir to a different location
If your server uses systemd as a service manager, you can use it to manage Lenses (start upon system boot, stop, restart). Below is a simple unit file that starts Lenses automatically on system boot.
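A minimal sketch, assuming Lenses is installed under /opt/lenses and runs as the lenses user:

```ini
[Unit]
Description=Lenses
After=network.target

[Service]
Type=simple
User=lenses
WorkingDirectory=/opt/lenses
ExecStart=/opt/lenses/bin/lenses /opt/lenses/lenses.conf
Restart=on-failure

[Install]
WantedBy=multi-user.target
```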
Lenses uses the default trust store (cacerts
) of the system’s JRE (Java Runtime) installation. The trust store is used to verify remote servers on TLS connections, such as Kafka Brokers with an SSL protocol, Secure LDAP, JMX over TLS, and more. Whilst for some types of connections (e.g. Kafka Brokers) a separate keystore can be provided at the connection’s configuration, for some other connections (e.g. Secure LDAP and JMX over TLS) we always rely on the system trust store.
It is possible to set up a global custom trust store via the LENSES_OPTS
environment variable:
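For example (the truststore path and password are placeholders):

```bash
export LENSES_OPTS="-Djavax.net.ssl.trustStore=/path/to/truststore.jks -Djavax.net.ssl.trustStorePassword=<password>"
```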
Run on any Linux server. For RHEL 6.x and CentOS 6.x use Docker.
Linux machines typically have a soft limit of 1024 open file descriptors. Check your current limit with the ulimit
command:
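```bash
# show the current soft limit for open file descriptors
ulimit -S -n
```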
Increase as a super-user the soft limit to 4096 with:
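```bash
# raise the soft limit for open file descriptors to 4096
ulimit -S -n 4096
```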
Use 6GB RAM/4 CPUs and 500MB disk space.
This page provides examples for defining a connection to Zookeeper.
Simple configuration with Zookeeper metrics read via JMX.
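A sketch of such a connection definition (the exact provisioning schema for your version may differ, so treat the field layout as illustrative):

```yaml
zookeeper:
  - name: zookeeper
    configuration:
      zookeeperUrls:
        value:
          - my-zookeeper-host-0:2181
          - my-zookeeper-host-1:2181
          - my-zookeeper-host-2:2181
      metricsType:
        value: JMX
      metricsPort:
        value: 9581
```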
With such a configuration, Lenses will use the 3 ZooKeeper nodes and will try to read their metrics from the following URLs (notice the same port, 9581, is used for all of them, as defined by the metricsPort property):
my-zookeeper-host-0:9581
my-zookeeper-host-1:9581
my-zookeeper-host-2:9581
This page describes configuring Lenses with Azure AD via LDAP.
Azure AD supports the LDAP protocol. You can use it as an authentication provider with users, passwords, and groups stored in Azure AD. When a user is authenticated successfully, Lenses queries Azure AD to get the user’s groups and authorizes the user with the selected permissions.
Here is a sample Lenses configuration:
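A sketch of the relevant security.conf entries; verify the exact option names against your version's LDAP settings reference (all values are placeholders):

```
lenses.security.ldap.url = "ldaps://ldaps.example.com:636"
lenses.security.ldap.user = "cn=lenses,ou=users,dc=example,dc=com"
lenses.security.ldap.password = "<password>"
lenses.security.ldap.base = "ou=users,dc=example,dc=com"
lenses.security.ldap.filter = "(&(objectClass=person)(sAMAccountName=<user>))"
```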
In the Azure portal create a resource. Search for Domain services and select Azure AD Domain Services from the options.
Set the DNS Domain Name to the same one you have for your existing Azure AD tenant.
In the Administration tab, you can manage the group membership for the AAD DC Administrator and control the members with access rights on Azure AD.
Azure AD Domain Services provides one-way synchronization from Azure Active Directory to the managed domain. Only certain attributes are synchronized to the managed domain, along with groups, group memberships and passwords.
The Synchronization tab provides two options. The first one is All, where everything will be synchronized to Azure AD DS managed domain. The second one is Scoped, which allows the selection of specific groups to be synced.
Once the managed domain is ready to be used, configure the DNS server settings for the Azure Virtual Network. Click the Configure button:
For the DNS changes to be applied, all the VMs must be restarted.
Azure AD DS needs password hashes in a format that’s suitable for NT LAN Manager (NTLM) and Kerberos authentication. Azure AD does not generate or store password hashes in the format that’s required for NTLM or Kerberos authentication until you enable Azure AD DS for your tenant.
For security reasons, Azure AD doesn’t store any password credentials in clear-text form. Therefore, Azure AD can’t automatically generate these NTLM or Kerberos password hashes based on users’ existing credentials.
Read the details from Microsoft on how to generate password hashes for your existing users.
The Virtual Network where Lenses is deployed requires Virtual Network Peering to be enabled; this allows it to communicate with Azure AD DS. You should add the IPs that were generated in the previous step as DNS servers.
Read more details on virtual network peering.
To enable the LDAP(S) protocol on Azure AD DS, use the following PowerShell to generate the self-signed certificate:
If PowerShell is not available, you can use the openssl command. The following script generates a certificate for Azure AD DS.
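A sketch (replace the domain with your managed domain; Azure AD DS expects a PFX upload):

```bash
# generate a self-signed certificate and key for the managed domain
openssl req -x509 -newkey rsa:2048 -days 365 -nodes \
  -keyout aadds.key -out aadds.crt \
  -subj "/CN=*.aadds.example.com"

# bundle them into a PFX file for upload to Azure AD DS
openssl pkcs12 -export -out aadds.pfx -inkey aadds.key -in aadds.crt
```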
Under Secure LDAP, upload the PFX certificate and make sure the options Allow secure LDAP and access over the Internet are enabled.
After the secure LDAP is enabled to allow secure LDAP access, use the Azure AD DS properties to review the external IP address that is used to expose the LDAP service.
Finally, you need to allow inbound traffic on the LDAPS port 636 in the Azure AD DS network security group, and limit access to only the virtual machine or range of IPs that should have inbound access.
This section describes configuring user authentication in Lenses.
Authentication is configured in the security configuration file. Lenses Administrator and Basic Auth do not require any configuration.
Multiple authentication configurations can be used together.
Authentication settings go in security.conf.
The following authentication methods are available. Users, regardless of the method, need to be mapped to groups.
For BASIC and LDAP authentication types, there is the option to set a policy to temporarily lock the account when successive login attempts fail. Once the lock time window has passed the user can log in again.
These two configuration entries enable the functionality (both of them have to be provided to take effect):
A Group is a collection of permissions that defines the level of access for users belonging to it. Groups consist of:
Namespaces
Application permissions
Administration permissions
When working with LDAP or Active Directory, user and group management is done in LDAP.
Lenses provides fine-grained role-based access (RBAC) for your existing groups of users over data and applications. Create a group in Lenses with the same name (case-sensitive) as in LDAP/AD.
When using an SSO solution such as Azure AD, Google, Okta, OneLogin or an open source like KeyCloak user and group management is done in the Identity Provider.
Lenses provides fine-grained role-based access (RBAC) for your existing groups of users over data and applications. Create a group in Lenses with the same name (case-sensitive) as in your SSO group.
With Basic Authentication, create groups of users and add users to those groups. Authentication and authorization are fully managed, and users can change their passwords.
This page describes how to configure the default admin account for Lenses.
When you first log in to Lenses, use the default credentials admin/admin.
The default account is a super user and can be used to create groups and other accounts with appropriate permissions.
The default account username and password may be adjusted as below.
We strongly recommend that you change the default password. If you don’t, you will be prompted with a dashboard notification.
For security purposes, it is strongly advised to use your password’s SHA256 checksum instead of the plaintext.
To create a SHA256 checksum for your password you can use the command line tools available in your Linux server or macOS.
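For example:

```bash
# -n avoids hashing a trailing newline; on macOS use: echo -n "..." | shasum -a 256
echo -n "mysecretpassword" | sha256sum
```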
To disable the Lenses Administrator user, set an adequately long random password. You can achieve this by using the snippet below:
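A sketch: generate a long random password and take its checksum, then set the result as the admin credential in security.conf:

```bash
# generate a long random password and print its SHA256 checksum
echo -n "$(openssl rand -base64 48)" | sha256sum
```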
This page describes configuring Lenses with Keycloak SSO.
Integrate your user groups with Lenses using the Keycloak group names. Create a group in Lenses using the same case-sensitive group name as in Keycloak.
For example, if the Engineers group is available in Keycloak, with Lenses assigned to it, create a group with the same name.
Go to Clients
Click Create
Fill in the details: see the table below.
Click Save
| Setting | Value |
|---|---|
| Client ID | Use the base.url of the Lenses installation, e.g. https://lenses-dev.example.com |
| Client Protocol | Set it to saml |
| Client Saml Endpoint | The Lenses API endpoint for Keycloak to call back. Set it to [BASE_URL]/api/v2/auth/saml/callback?client_name=SAML2Client, e.g. https://lenses-dev.example.com/api/v2/auth/saml/callback?client_name=SAML2Client |
Change the settings on the client you just created to:
| Setting | Value |
|---|---|
| Name | Lenses |
| Description | (Optional) Add a description to your app |
| SAML Signature Name | KEY_ID |
| Client Signature Required | OFF |
| Force POST Binding | ON |
| Front Channel Logout | OFF |
| Force Name ID Format | ON |
| Name ID Format | email |
| Root URL | Use the base.url of the Lenses installation, e.g. https://lenses-dev.example.com |
| Valid Redirect URIs | Use the base.url of the Lenses installation, e.g. https://lenses-dev.example.com |
Configure Keycloak to communicate groups to Lenses. Head to the Mappers section.
Click Create
Fill in the details: see table below.
Click Save
| Setting | Value |
|---|---|
| Name | Groups |
| Mapper Type | Group list |
| Group attribute name | groups (case-sensitive) |
| Single Group Attribute | ON |
| Full group path | OFF |
Configure SAML in the security.conf file, for example:
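A sketch of the SAML-related entries; verify the exact option names against your version's reference (paths and URLs are examples):

```
lenses.security.saml.idp.provider = "keycloak"
lenses.security.saml.base.url = "https://lenses-dev.example.com"
lenses.security.saml.idp.metadata.file = "/path/to/keycloak-idp-metadata.xml"
lenses.security.saml.keystore.location = "/path/to/saml-keystore.jks"
lenses.security.saml.keystore.password = "<password>"
```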
This page describes configuring Lenses with Kerberos.
Deprecated in Lenses 6.0
Kerberos uses SPNEGO (Simple and Protected GSSAPI Negotiation Mechanism) for authentication.
Kerberos will automatically log in authorized users when using the /api/auth REST endpoint. If using Microsoft Windows, logging into your Windows domain is usually sufficient to issue your Kerberos credentials.
On Linux, if you use Kerberos with PAM, your Kerberos credentials should be already available to Kerberos-enabled browsers. Otherwise, you will need to authenticate to the KDC manually using kinit
at the command line and start your browser from the same terminal.
In order to use Kerberos authentication in Lenses, both a static configuration and a Kerberos Connection are required.
Static configuration
To set up Kerberos you need a Kerberos principal and a password-less keytab. Add them to security.conf before starting Lenses:
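A sketch; verify the exact option names against your version's security settings reference (principal and path are placeholders):

```
lenses.security.kerberos.service.principal = "HTTP/lenses.example.com@EXAMPLE.COM"
lenses.security.kerberos.keytab = "/path/to/lenses.keytab"
```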
Kerberos Connection
A Kerberos Connection should be defined in order to use a proper krb5.conf.
The user’s session in the SSO provider is too old.
The system clocks of the SSO provider and the Lenses instance are out of sync.
For security purposes, Lenses prevents authenticating SSO users that have remained logged in SSO for a very long time.
Example: You use Okta SSO and logged in to Okta a year ago. Okta might allow you to remain logged in throughout that year without having to re-authenticate. Lenses has a limit of 100 days; in that case, Lenses will receive an authenticated user that originally logged in before the 100-day mark.
Ensure that the SSO and Lenses system clocks are in sync.
If the SSO provider supports very long sessions, either:
Log out of the SSO and log back in. This explicitly renews the SSO session.
Increase the Lenses limit to more than 100 days.
Example:
This page describes how to create and use Lenses Service Accounts.
Service accounts require an authentication token to be authenticated and must belong to at least one group for authorization.
Service accounts are commonly used for automation, for example, when using Lenses CLI or APIs, or any other application or service to interact with Lenses.
Service account tokens are not recoverable. You can edit, revoke or delete a Service Account, but you can never retrieve the original token.
To create a new Service Account, navigate to the Admin and select Users and New Service Account.
You can manually enter the authentication token or autogenerate it. If you select to auto-generate tokens, then you will receive a one-time token for this service account. Follow the instructions and copy and store this token. You can now use this token to authenticate via API and CLI.
You can only change the groups and owner of service accounts. Go to the service account and select Edit Info from the Actions menu.
To change the token, go to the service account and select Revoke Token from the Actions menu.
To use the service account you need to prefix the token with its name separated by a colon. You then include that in the corresponding header.
For a service account named myservice and a token da6bad50-55c8-4ed4-8cad-5ebd54a18e26, the combination looks like this:
myservice:da6bad50-55c8-4ed4-8cad-5ebd54a18e26
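For example with curl, assuming the token is sent as a Bearer value in the Authorization header (the host and API path are placeholders):

```bash
# replace <api-path> with the endpoint you are calling
curl -H "Authorization: Bearer myservice:da6bad50-55c8-4ed4-8cad-5ebd54a18e26" \
  "https://lenses.example.com/<api-path>"
```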
To use the CLI with a service account for CI/CD you need to pass these options:
This page describes how to configure the agent to deploy and manage SQL Processors for stream processing.
Set in lenses.conf
Lenses can be used to define & deploy stream processing applications that read from Kafka and write back to Kafka with SQL. They are based on the Kafka Streams framework and are known as SQL Processors.
SQL processing of real-time data can run in 2 modes:
SQL In-Process - the workload runs inside the Lenses process.
SQL in Kubernetes - the workload runs & scales on your Kubernetes cluster.
The mode in which the SQL Processors run must be defined in lenses.conf before Lenses is started.
In this mode, SQL processors run as part of the Lenses process, sharing resources, memory, and CPU time with the rest of the platform.
This mode of operation is meant to be used for development only.
As such, the agent will not allow the creation of more than 50 SQL Processors in In Process mode, as this could negatively impact the platform's stability and performance.
For production, use the KUBERNETES mode for maximum flexibility and scalability.
Set the execution configuration to IN_PROC and set the directory to store the internal state of the SQL Processors, for example:
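```
lenses.sql.execution.mode = "IN_PROC"
# the state directory path is an example
lenses.sql.state.dir = "/var/lib/lenses/sql-state"
```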
SQL Processors use the same connection details that Lenses uses to talk to Kafka and the Schema Registry. The following properties are mounted, if present, on the file system for each processor:
Kafka
SSLTruststore
SSLKeystore
Schema Registry
SSL Keystore
SSL Truststore
The file structure created by applications is the following: /run/[lenses_installation_id]/applications/
Keep in mind Lenses requires an installation folder with write permissions. The following are tried:
/run
/tmp
Kubernetes can be used to deploy SQL Processors. To configure Kubernetes, set the mode to KUBERNETES and configure the location of the kubeconfig file.
When Lenses is deployed inside Kubernetes, the lenses.kubernetes.config.file
configuration entry should be set to an empty string. The Kubernetes client will auto-configure from the pod it is deployed in.
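For example, in lenses.conf (the kubeconfig path is an example):

```
lenses.sql.execution.mode = "KUBERNETES"
# set to "" when Lenses itself runs inside the cluster
lenses.kubernetes.config.file = "/home/lenses/.kube/config"
```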
The SQL Processor Docker image is available on Docker Hub.
Custom serdes should be embedded in a new Lenses SQL processor Docker image.
To build a custom Docker image, create the following directory structure:
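For example:

```
processor-docker/
├── Dockerfile
└── serde/
    └── my-custom-serde.jar   (your serde jars go here)
```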
Copy your serde jar files under processor-docker/serde.
Create a Dockerfile containing:
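A sketch; the base image name/tag and the directory the image expects for serde jars are assumptions to verify for your version:

```
# base image name/tag are assumptions; use the SQL processor image you deploy
FROM lensesio/sql-processor:5.5
# copy the custom serde jars into the image (target path is an assumption)
ADD serde /opt/serde
```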
Build the Docker image.
Once the image is deployed in your registry, set Lenses to use it (in lenses.conf):
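```
# option names as in the Lenses Kubernetes settings; verify for your version
lenses.kubernetes.processor.image.name = "your-registry.example.com/lenses-sql-processor"
lenses.kubernetes.processor.image.tag = "5.5"
```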
Don't use the LPFP_ prefix. Internally, Lenses prefixes all its properties with LPFP_. Avoid passing custom environment variables starting with LPFP_ as this may cause the processors to fail.
To deploy Lenses Processors in Kubernetes, the suggested way is to activate RBAC at cluster level through the Helm values.yaml:
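A sketch; the exact values.yaml keys depend on your chart version, so verify against the chart's documented values:

```yaml
# chart-version-dependent key, shown for illustration only
rbacEnable: true
```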
If you want to limit the permissions Lenses has against your Kubernetes cluster, you can use Role/RoleBinding resources instead.
To achieve this you need to create a Role and a RoleBinding resource in the namespace you want the processors deployed to:
For example, with:
Lenses namespace = lenses-ns
Processor namespace = lenses-proc-ns
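A sketch of the two resources under those assumptions (the resource and verb lists are assumptions; grant only what your processors need):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: lenses-processors
  namespace: lenses-proc-ns
rules:
  - apiGroups: ["", "apps"]
    # resource list is an assumption
    resources: ["pods", "deployments", "configmaps", "services"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: lenses-processors
  namespace: lenses-proc-ns
subjects:
  - kind: ServiceAccount
    name: lenses          # assumption: the service account Lenses runs as
    namespace: lenses-ns
roleRef:
  kind: Role
  name: lenses-processors
  apiGroup: rbac.authorization.k8s.io
```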
You can repeat this for as many namespaces as you want Lenses to have access to.
Finally, you need to define in the Lenses configuration which namespaces Lenses can access. To achieve this, amend values.yaml to contain the following, for example:
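A sketch with a hypothetical key name; check your version's configuration reference for the actual namespace-access setting:

```yaml
lenses:
  append:
    conf: |
      # hypothetical setting name, shown for illustration only
      lenses.kubernetes.namespaces = ["lenses-proc-ns"]
```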
This page describes how to use Lenses to use the Explore screen to explore, search and debug messages in a topic.
After selecting a topic you will be shown more details of the topic. The SQL Snapshot engine will return the latest 200 messages for partition 0. Both the key and value of the message are displayed in an expandable tree format.
At the top of each message, the Kafka metadata (partition, timestamp, offset) is displayed.
Hovering to the right of a message allows you to copy it to the clipboard.
To download all messages to JSON or CSV see here.
The SQL Snapshot engine deserializes the data on the backend of Lenses and sends it over the WebSocket to the client. By default, the data is presented in a tree format but it's also possible to flatten the data into a grid view. Select the grid icon.
Use the partition drop-down to change the partition to return messages you are interested in.
Use the timestamp picker to search for messages from a timestamp.
Use the offset select to search for messages from an offset.
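For example, the partition, timestamp and offset filters above correspond to snapshot queries of this shape (the topic name is an example; metadata field names may vary by version):

```sql
SELECT *
FROM payments
WHERE _meta.partition = 0
  AND _meta.offset >= 1000
LIMIT 200;
```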
The SQL Snapshot engine has a live mode. In this mode the engine will return a sample of messages matching the query. To enable it, select the Live Sample button. The data view will now update with live records as they are written to the topic. You can also edit the query if required.
This is sample data, not the full set, to avoid overloading the browser.
For the SQL Snapshot engine to return data it needs to understand the format of the data in a topic. If a topic is backed by a Schema Registry, it is automatically set to AVRO. For other types, such as JSON or strings, the engine tries to determine the format.
If you wish to override or correct the format used select either Reset Types or Change Types from the action menu.
Configuring Lenses Websockets to work with Load Balancers.
Lenses uses WebSockets. It may be that your load balancer blocks them by default; depending on your load balancer, you need to allow WebSockets.
For example on NGINX:
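A sketch for one of the WebSocket paths listed below (the upstream address and port are examples):

```nginx
# allow the WebSocket upgrade on a Lenses WebSocket path
location /api/ws {
    proxy_pass http://lenses:9991;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
```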
If it is exposed via a service type LoadBalancer, ensure the protocol between the load balancer and NGINX is set to TCP. See Kubernetes documentation for more information.
Lenses can be placed behind a proxy, but you must allow websocket connections.
These two paths are used for WebSocket connections:
/api/ws
/api/kafka/ws
Disable proxy buffering for SSE (Server Sent Events) connections on this path:
/api/sse
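For example, on NGINX (again, the upstream is an example):

```nginx
location /api/sse {
    proxy_pass http://lenses:9991;
    proxy_buffering off;   # disable buffering so server-sent events stream through
}
```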
Lenses supports TLS termination out of the box; see Enabling TLS.
This page describes the ACLs that need to be configured on your Kafka Cluster if ACLs are enabled, for Lenses to function.
These ACLs are for the underlying Lenses Kafka client. Lenses has its own set of permissions guarding access.
You can restrict the access of the Lenses Kafka client, but this can reduce the functionality on offer in Lenses, e.g. not allowing Lenses to create topics at all, even though this can be managed by Lenses' own RBAC system.
When your Kafka cluster is configured with an authorizer which enforces ACLs, Lenses will need a set of permissions to function correctly.
Common practice is to give Lenses superuser status or the complete list of available operations for all resources. The fine-grained permission model of Lenses can then be used to restrict the access level per user.
The agent needs permission to manage and access their own internal Kafka topics:
__topology
__topology__metrics
It also needs read and describe permissions for the consumer offsets and Kafka Connect topics (if enabled):
__consumer_offsets
connect-configs
connect-offsets
connect-status
This same set of permissions is required for any topic that the agent must have read access to.
DescribeConfigs was added in Kafka 2.0. It may not be needed for versions before 2.2.
Additional permissions are needed to produce topics or manage them.
Permission to at least read and describe consumer groups is required to take advantage of the Consumer Groups' monitoring capabilities.
Additional permissions are needed to manage groups.
To manage ACLs, permission to the cluster is required:
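A sketch using the Kafka CLI, granting the Lenses principal cluster-level permissions (the principal name and broker address are examples):

```bash
kafka-acls --bootstrap-server broker-1:9092 \
  --add --allow-principal User:lenses \
  --operation Describe --operation Alter \
  --cluster
```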
This page contains the Lenses IAM permission references.
This matrix shows both the display name (first column) and the code name (second column) for permissions. Knowing the code name may be helpful when using the API / CLI.
| Display name | Code name | Description |
|---|---|---|
| View Kafka Settings | ViewKafkaSettings | Allows viewing Kafka ACLs, Quotas |
| Manage Kafka Settings | ManageKafkaSettings | Allows managing Kafka ACLs, Quotas |
| View Log | ViewLogs | Allows viewing Lenses logs |
| View Users | ViewUsers | Allows viewing the users, groups and service accounts |
| Manage Users | ManageUsers | Allows adding/removing/updating/deleting users, groups and service accounts |
| View Alert Rules | ViewAlertRules | Allows viewing the alert settings rules |
| Manage Alert Rules | ManageAlertRules | Allows adding/deleting/updating alert settings rules |
| View Audit | ViewAuditLogs | Allows viewing the audit records |
| View Data Policies | ViewDataPolicies | Allows viewing the data policies |
| Manage Data Policies | ManageDataPolicies | Allows adding/removing/updating data policies |
| Manage Connections | ManageConnections | Allows adding/removing/updating connections |
| View Approvals | ViewApprovalRequest | Allows viewing raised approval requests |
| Manage Approvals | ManageApprovalRequest | Allows accepting/rejecting requests |
| Manage Lenses License | ManageLensesLicense | Allows updating the Lenses license at runtime via the Lenses API |
| Manage Audit Logs | ManageAuditLogs | Allows deleting audit logs |
| Permission | Description |
|---|---|
| Show | Allows viewing the topic name and basic info |
| Query | Allows viewing the data in a topic |
| Create | Allows creating topics |
| Create Topic Request | Topics are not created directly; they are sent for approval |
| Drop | Allows deleting topics |
| Configure | Allows changing a topic configuration |
| Insert Data | Allows inserting data into the topic |
| Delete Data | Allows deleting data from the topic |
| Update Schema | Allows configuring the topic storage format and schema |
| View Schema | Allows viewing schema information |
| Show Index | Allows viewing Elasticsearch index information |
| Query Index | Allows viewing the data in an Elasticsearch index |
This matrix shows both the display name (first column) and the code name (second column) for permissions. Knowing the code name may be helpful when using the API / CLI.
| Display name | Code name | Description |
|---|---|---|
| View SQL Processors | ViewSQLProcessors | Allows viewing the SQL processors |
| Manage SQL Processors | ManageSQLProcessors | Allows adding/removing/stopping/deleting SQL processors |
| View Schemas | ViewSchemaRegistry | Allows viewing your Schema Registry entries |
| Manage Schema Registry | ManageSchemaRegistry | Allows adding/removing/updating/deleting your Schema Registry entries |
| View Topology | ViewTopology | Allows viewing the data pipeline topology |
| Manage Topology | ManageTopology | Allows decommissioning topology applications |
| View Kafka Connectors | ViewConnectors | Allows viewing running Kafka Connectors |
| Manage Kafka Connectors | ManageConnectors | Allows adding/updating/deleting/stopping Kafka Connectors |
| View Kafka Consumers | ViewKafkaConsumers | Allows viewing the Kafka Consumers' details |
| Manage Kafka Consumers | ManageKafkaConsumers | Allows changing the Kafka Consumers' offset |
| Connect Clusters Access | - | Allows using Connect Clusters |
This page describes how to use Lenses to add metadata and tags to topics in Kafka.
To add descriptions or tags to datasets, click the edit icon in the Summary panel.
This page describes how to manage Schema in a Schema Registry with Lenses.
For automation, use the CLI.
To delete schemas you need to enable lenses.schema.registry.delete in lenses.conf, for example:
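```
lenses.schema.registry.delete = true
```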
To connect your Schema Registry see provisioning.
To create a new schema, select New Schema and add your schema.
To view the schema associated with a topic, select the Schema tab. Here you can view the schema for both the key and the value of the topic.
To edit a schema select either the key or value schema. The schema editor will be expanded, click Edit to change the schema.
To list schemas go to Workspace->Schema Registry. Lenses will show the current schemas; you can search schemas by fields and schema names, as well as filter by format and tags.
To evolve a schema, select the schema and select Edit. In the editor apply your changes. If the changes match the evolution rules the changes will be saved and a new version created.
To change the compatibility of a schema, select the schema and from the actions menu select Change compatibility.
This page describes how to use Lenses to back up and restore data in a Kafka topic to and from AWS S3.
To initiate either a topic backup to S3 or topic restoration from S3, follow these steps:
Navigate to the Actions menu within the Kafka topic details screen.
Choose your desired action: “Backup Topic to S3” or “Restore Topic from S3.”
A modal window will open, providing step-by-step guidance to configure your backup or restoration entity.
A single topic can be backed up or restored to/from multiple locations.
If a topic is being backed up it will be displayed on the topology.
Additional information on the location of the backup can be found by navigating to the topic in the Explore screen, where the information is available in the Summary section.
To back up a topic, navigate to the topic you wish to back up and select Backup Topic to S3 from the Actions menu.
Enter the S3 bucket ARN and select the Connect Cluster that has the Lenses S3 connector installed.
Click Backup Topic; an S3 sink connector instance will be deployed and configured automatically to back up data from the topic to the specified bucket.
To restore a topic, navigate to the topic you wish to restore and select Restore Topic from S3 from the Actions menu.
Enter the S3 bucket ARN and select the Connect Cluster that has the Lenses S3 connector installed. Click Restore Topic; an S3 source connector instance will be deployed and configured automatically to restore data to the topic from the specified bucket.
This section describes how to configure alerting in Lenses.
Alert rules are configurable in Lenses; generated alerts can then be sent to specific channels. Several different integration points are available for channels.
These are a set of built-in alerting rules for the core connections, Kafka, Schema Registry, Zookeeper, and Kafka Connect. See infrastructure health.
Data Produced rules are user-defined alerts on the amount of data on a topic over time. Users can choose to be notified if the topic receives either more or less data than a defined threshold.
Consumer rules are alerting on consumer group lag. Users can define:
a lag
on a topic
for a consumer group
which channels to send an alert to
Lenses allows operators to configure alerting on Connectors. Operators can:
Set channels to send alerts to
Enable auto restart of connector tasks. Lenses will restart failed tasks with a grace period.
The sequence is:
Lenses watches for task failures.
If a task fails, Lenses will restart it.
If the restart is successful, Lenses resets the restart attempts back to zero.
If the restart is not successful, Lenses increments the restart attempts, waits for the grace period and tries another restart if the task is still in a failed state.
Step 4 is repeated until the maximum number of restart attempts is reached. Lenses will only reset the restart attempts to zero after the tasks have been brought back to a healthy state by manual intervention.
The number of times Lenses attempts a restart is based on the entry in the alert setting.
The restart attempts can be tracked in the Audits page.
To view events go to Admin -> Alerts -> Events.
Successful retrieval of system state
An alphanumeric or dash non-empty string, matching ^[a-zA-Z0-9-]+$.
License successfully updated and current license info returned
It will update the connections state and validate the configuration. If the validation fails, the state will not be updated.
It will only validate the request, without applying any actual change to the system.
It will try to connect to the configured service as part of the validation step.
Configuration in YAML format representing the connections state.
The only allowed name for the Kafka connection is "kafka".
Kafka security protocol.
SSL keystore file path.
Password to the keystore.
Key password for the keystore.
Password to the truststore.
SSL truststore file path.
JAAS Login module configuration for SASL.
Kerberos keytab file path.
Comma separated list of protocol://host:port to use for initial connection to Kafka.
Mechanism to use when authenticated using SASL.
Default port number for metrics connection (JMX and JOLOKIA).
The username for metrics connections.
The password for metrics connections.
Flag to enable SSL for metrics connections.
HTTP URL suffix for Jolokia or AWS metrics.
HTTP Request timeout (ms) for Jolokia or AWS metrics.
Metrics type.
Additional properties for Kafka connection.
Mapping from node URL to metrics URL, allows overriding metrics target on a per-node basis.
DEPRECATED.
The only allowed name for a schema registry connection is "schema-registry".
Path to SSL keystore file
Password to the keystore
Key password for the keystore
Password to the truststore
Path to SSL truststore file
List of schema registry urls
Source for the basic auth credentials
Basic auth user information
Metrics type
Flag to enable SSL for metrics connections
The username for metrics connections
The password for metrics connections
Default port number for metrics connection (JMX and JOLOKIA)
Additional properties for Schema Registry connection
Mapping from node URL to metrics URL, allows overriding metrics target on a per-node basis
DEPRECATED
HTTP URL suffix for Jolokia metrics
HTTP Request timeout (ms) for Jolokia metrics
Username for HTTP Basic Authentication
Password for HTTP Basic Authentication
Enables Schema Registry hard delete
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
The username to connect to the Elasticsearch service.
The password to connect to the Elasticsearch service.
The nodes of the Elasticsearch cluster to connect to, e.g. https://hostname:port. Use the tab key to specify multiple nodes.
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
An Integration Key for PagerDuty's service with Events API v2 integration type.
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
The Datadog site.
The Datadog API key.
The Datadog application key.
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
The Slack endpoint to send the alert to.
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
Comma separated list of Alert Manager endpoints.
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
The host name.
An optional port number to be appended to the hostname.
Set to true in order to set the URL scheme to https; it will otherwise default to http.
An array of (secret) strings to be passed over to alert channel plugins.
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
Way to authenticate against AWS.
Access key ID of an AWS IAM account.
Secret access key of an AWS IAM account.
AWS region to connect to. If not provided, this is deferred to client configuration.
Specifies the session token value that is required if you are using temporary security credentials that you retrieved directly from AWS STS operations.
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
List of Kafka Connect worker URLs.
Username for HTTP Basic Authentication.
Password for HTTP Basic Authentication.
Flag to enable SSL for metrics connections.
The username for metrics connections.
The password for metrics connections.
Metrics type.
Default port number for metrics connection (JMX and JOLOKIA).
AES256 Key used to encrypt secret properties when deploying Connectors to this ConnectCluster.
Name of the ssl algorithm. If empty default one will be used (X509).
SSL keystore file.
Password to the keystore.
Key password for the keystore.
Password to the truststore.
SSL truststore file.
Mapping from node URL to metrics URL, allows overriding metrics target on a per-node basis.
DEPRECATED.
HTTP URL suffix for Jolokia metrics.
HTTP Request timeout (ms) for Jolokia metrics.
The only allowed name for a schema registry connection is "schema-registry".
Way to authenticate against AWS. The value for this project corresponds to the AWS connection name of the AWS connection that contains the authentication mode.
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
Access key ID of an AWS IAM account. The value for this project corresponds to the AWS connection name of the AWS connection that contains the access key ID.
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
Secret access key of an AWS IAM account. The value for this project corresponds to the AWS connection name of the AWS connection that contains the secret access key.
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
Specifies the session token value that is required if you are using temporary security credentials that you retrieved directly from AWS STS operations.
Enter the Amazon Resource Name (ARN) of the Glue schema registry that you want to connect to.
The period in milliseconds that Lenses will be updating its schema cache from AWS Glue.
The size of the schema cache.
Type of schema registry connection.
Default compatibility mode to use on Schema creation.
The only allowed name for the Zookeeper connection is "zookeeper".
List of zookeeper urls.
Zookeeper /znode path.
Zookeeper connection session timeout.
Zookeeper connection timeout.
Metrics type.
Default port number for metrics connection (JMX and JOLOKIA).
The username for metrics connections.
The password for metrics connections.
Flag to enable SSL for metrics connections.
HTTP URL suffix for Jolokia metrics.
HTTP Request timeout (ms) for Jolokia metrics.
Mapping from node URL to metrics URL, allows overriding metrics target on a per-node basis.
DEPRECATED.
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
The Postgres hostname.
The port number.
The database to connect to.
The user name.
The password.
The SSL connection mode as detailed in https://jdbc.postgresql.org/documentation/head/ssl-client.html.
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
The host name for the HTTP Event Collector API of the Splunk instance.
The port number for the HTTP Event Collector API of the Splunk instance.
Use SSL.
This is not encouraged but is required for a Splunk Cloud Trial instance.
HTTP event collector authorization token.
The only allowed name for the Kerberos connection is "kerberos".
Kerberos krb5 config
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
Attached file(s) needed for establishing the connection. The name of each file part is used as a reference in the manifest.
Successfully updated connection state
Monitoring the health of your infrastructure.
Lenses provides monitoring of the health of your infrastructure via JMX.
Additionally, Lenses has a number of built-in alerts for these services.
Lenses monitors your entire streaming data platform infrastructure (by default every 10 seconds) and has the following alert rules built-in:
| Alert | Description |
|---|---|
| Lenses License | The Lenses license is invalid |
| Kafka broker is down | A Kafka broker from the cluster is not healthy |
| Zookeeper node is down | A ZooKeeper node is not healthy |
| Connect Worker is down | A Kafka Connect worker node is not healthy |
| Schema Registry is down | A Schema Registry instance is not healthy |
| Under replicated partitions | The Kafka cluster has 1 or more under-replicated partitions |
| Partitions offline | The Kafka cluster has 1 or more partitions offline (partitions without an active leader) |
| Active Controller | The Kafka cluster has 0, or more than 1, active controllers |
| Multiple Broker versions | The Kafka cluster is under a version upgrade and not all brokers have been upgraded yet |
| File-open descriptors on Brokers | A Kafka broker has an alarming number of open file descriptors: the operating system is exceeding 90% of the available open file descriptors |
| Average % the request handler is idle | The average fraction of time the request handler threads are idle is dangerously low. The alert is HIGH when the value is smaller than 10%, and CRITICAL when it is smaller than 2% |
| Fetch requests failure | Fetch requests are failing. If the rate of failures per second is > 10% the alert level is set to CRITICAL, otherwise it is set to HIGH |
| Produce requests failure | Produce requests are failing. When the value is > 10% the alert level is set to CRITICAL, otherwise it is set to HIGH |
| Broker disk usage | A Kafka broker's disk usage is greater than the cluster average. The built-in threshold is 1 GByte |
| Leader imbalance | A Kafka broker has more leader replicas than the average broker in the cluster |
If you change your Kafka cluster size or replace an existing Kafka broker with another, Lenses will raise an active alert, as it will detect that a broker of your Kafka cluster is no longer available. If the Kafka broker has been intentionally removed, decommission it:
Navigate to Services.
Select the broker, open the options menu, and click the Decommission option.