We have made a new alpha release 19:
Agent image:
HQ image:
HQ CLI image:
New Helm version 19 for the agent and for the HQ: https://lenses.jfrog.io/ui/native/helm-charts-preview/
We have made a new alpha release 20:
Agent image:
HQ image:
HQ CLI image:
New Helm version 20 for the agent and for the HQ: https://lenses.jfrog.io/ui/native/helm-charts-preview/
Archive installation: https://archive.lenses.io/pub/testing/lenses/
Service account annotations were wrongly referenced and not taken into account upon creation of a new service account.
If authenSignReq was enabled and the secrets were not in place, environment variables with null values were created - the creation of such environment variables is now skipped.
Breaking change
External secrets now support:
a changeable External Secret Store type: SecretStore | ClusterSecretStore
additionalSpecs
This page details the release notes of Lenses.
Lenses 6.0 introduces a new service, called HQ, acting as a portal for multi-Kafka environments.
New HQ service
IAM (Identity & Access Management). This has moved from each Lenses instance to a global location in the new HQ service.
Global SQL Studio
Global Data Catalogue
Community License: You can now use Lenses without a license or expiry (a community license key is bundled in the docker-compose), but the following restrictions apply:
No SSO
Maximum of two environments (Kafka clusters) can be connected
Two users, one of which is an admin user
Two Service Accounts
Two Groups
Two Roles
No Backup / Restore for topics to S3
H2 embedded database is no longer supported.
Lenses 5.x permission model is replaced by global IAM. You must recreate the roles and groups in HQ
Connection management in the agent is via file Provisioning only.
LRN UX for HQ IAM.
See & copy the LRN of an IAM resource (User, Group, Service Account, Role).
LRN is also an available column in all IAM listing pages. You can enable it in the column selector of the table.
Environments + metrics columns.
Tables UX update: powerful grid
Multi sort
Reorder of columns + column selection
Filters
Preferences are saved for the user
CLI
Provide shell completion for agent --env flag
IAM - Updated permissions syntax.
Renaming. Some services and resources have been renamed. This is to simplify and make the IAM model more consistent. You can find the latest spec here. You may need to adjust your policy YAML definitions. Reach out to us if you need help.
Prefix/infix wildcard support. You can now use * in a prefix or infix position for resource-id path segments. Use this to express things like "environments that end in -dev" (*-dev) or "topics whose name starts with fraud and ends with analytics" (fraud*analytics).
Improved Global SQL studio UX
Topic navigator IDE experience.
See latest topic messages by default, ordered by time, with the latest messages first.
Dark mode for sign-in page
User Profile page now correctly saves changes.
Expose Agent dashboard metrics via API.
Improved Agent-level navigation bar.
We have made a new alpha release 17:
Agent image:
HQ image:
HQ CLI image:
New Helm version 17 for the agent and for the HQ: https://lenses.jfrog.io/ui/native/helm-charts-preview/
When working on software projects, there is often a need to create additional environment variables for various purposes. One common scenario is when users need to securely handle sensitive information, such as passwords or API keys. By storing a user password in a secret, the system ensures that such sensitive information is not exposed to unauthorized access, offering enhanced security.
Property restPort has been removed and replaced by lensesHq.http.address.
In the provisioning, there has been a slight adjustment to the parent agent configuration parameter.
Changes:
lenses has been renamed to lensesAgent
As provisioning with the latest version (2) is mandatory for the agent to run successfully, both configs are removed.
In the past it was possible to use the H2 database, which would be instantly deployed and ready to use alongside the agent.
Due to certain performance limitations of the H2 database which can impact the agent's functionality, we decided to completely remove H2 support.
However, the persistence parameter still remains and can be used to enable extra volume creation dedicated specifically to logs.
In the past, HQ used the TOML file format. As we want to reduce the differences in file formats between the Agent and HQ as much as possible, this was the first step.
The Postgres connection URI is no longer built within config.yaml but at backend runtime;
the parameter group has changed from postgres to storage.postgres.*
In the previous version, the schema was defined as part of extraParamSpecs. In the new version, the schema is defined as a separate property, storage.postgres.database.schema;
Property extraParamSpecs is renamed to params;
Parameter group api has been renamed to http, and the following parameters are no longer part of it:
administrators;
saml;
Property auth is derived from property api (now http).
The parameters that have been moved from http to auth are the following:
administrators;
saml;
HQ has been tested against Aurora (Postgres) and is compatible.
If there are any changes in the ConfigMap, the HQ pod will be automatically restarted after executing helm upgrade, so no manual intervention is needed.
The environment variable previously known as LENSES_HQ_AGENT_KEY, which is referenced in provisioning.yaml and stores the agentKey value, has been renamed to LENSESHQ_AGENT_KEY.
Since the newest versions of Lenses HQ and the Agent bring breaking changes, the following issues can happen.
Upon running helm upgrade, HQ can fail with the following error log:
In order to fix it, the following command has to be run on the Postgres database:
If the SQL command cannot be run, the database has to be cleared as if starting from scratch.
This page gives an overview of deploying Lenses against your Kafka clusters.
The quick start is for local development, with a local Kafka. This guide takes you through manually deploying HQ and an Agent to connect to your Kafka clusters.
For more detailed guides on Helm, Docker and Linux, see here.
To deploy Lenses against your environments you need to:
To start HQ and an Agent you have to accept the Lenses EULA.
For HQ, in the config.yaml set:
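A minimal sketch of the relevant config.yaml fragment (the key value is a placeholder; the field names follow the HQ configuration reference):

```yaml
# config.yaml (HQ): accept the EULA and set the license key (placeholder shown)
license:
  key: "licensekey-..."   # an HQ key starts with "licensekey"
  acceptEULA: true
```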
Any version of Apache Kafka (2.0 or newer) on-premise and on-cloud. Supported providers include:
Confluent Platform & Cloud
AWS MSK & AWS MSK Serverless
Aiven
IBM Event Streams
Azure HDInsight & EventHubs
Any version of Confluent Schema Registry (5.5.0 or newer), APICurio (2.0 or newer) and AWS Glue.
Only needed if you want to bring your own Postgres. The docker compose will start a local Postgres instance.
HQ and Agents can share the same instance, by either using a separate database or schema for HQ and each agent, depending on your networking needs.
Postgres server running version 9.6 or higher.
The recommended configuration is to create a dedicated login role and database for the HQ and each Agent, setting the HQ or Agent role as the database or schema owner. Both the agent and HQ need credentials, create a role for each.
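As a sketch, assuming a psql session and placeholder names (repeat per Agent with its own role and database):

```sql
-- Dedicated role and database for HQ; each Agent gets its own pair.
CREATE ROLE hq WITH LOGIN PASSWORD 'changeme';
CREATE DATABASE hq OWNER hq;
```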
Web sockets - You may need to adjust your load balancer to allow them. See here.
JMX connectivity - Connectivity to JMX is optional (not required) but recommended for additional/enhanced monitoring of the Kafka Brokers and Connect Workers. Secure JMX connections are also supported, as well as JOLOKIA and Open Metrics (MSK).
For more on enabling JMX for the Agent itself, see here.
These ACLs are for the underlying Lenses Agent Kafka client. Lenses has its own set of permissions guarding access.
You can restrict the access of the Lenses Kafka client, but this can reduce the functionality on offer in Lenses, e.g. not allowing Lenses to create topics at all, even though this can be managed by Lenses' own IAM system.
The agent requires access to your Kafka cluster. If ACLs are enabled you will need to allow the Agent access.
If you want to use SSO / SAML for authentication you will need the metadata.xml file from your provider. See Authentication for more information.
We have made a new alpha release 18:
Agent image:
HQ image:
HQ CLI image:
New Helm version 18 for the agent and for the HQ: https://lenses.jfrog.io/ui/native/helm-charts-preview/
In the past, only ClusterSecretStore was supported when handling external secrets. With the new version you can choose either ClusterSecretStore or SecretStore when creating an external secret.
A new property called additionalSpecs has been added to the external secret template, where you can add or change any of the specs that would normally be added to the ExternalSecret resource.
Example:
The lensesHq.http.tls property was referencing wrong properties within the defined values.yaml.
Agent Metrics are available in overview screen
Introduction of Lenses Resource name in the HQ screens
Added Role description
"Roles" panel improved by getting right side panel, detailed view
Dark mode coloring improved
Authentication error flashing
Introduction of Lenses Resource name for key resources like: topics, consumers, schemas
Added "Admin Overview" in the Agent menu
Fixed bad records view
We have made a new alpha release 16:
Agent image:
HQ image:
New Helm version 16 for the agent and for the HQ: https://lenses.jfrog.io/ui/native/helm-charts-preview/
In previous versions, SAML / SSO was a mandatory requirement for authentication. However, with the new release, it becomes optional, allowing you to choose between password-based authentication and SAML / SSO according to your needs.
Existing alpha users will have to introduce the lensesHq.saml.enabled property into their values.yaml files.
In this release, the ingress configuration has been enhanced to provide more flexibility.
Previously, the HQ chart supported a single ingress setting, but now you can define separate ingress configurations for HTTP and the agent.
This addition allows you to tailor ingress rules more specifically to your deployment needs, with dedicated rules for handling HTTP traffic and TCP-based agent connections.
The http ingress is intended only for HTTP/S traffic, while the agents ingress is designed specifically for the TCP protocol. Ensure appropriate ingress configuration for your use case.
In the following example you will notice how the ingress configuration has been split into:
http - which covers main ingress for HQ and where users will be accessing HQ portal
agent - a new and additional ingress which allows you to add an ingress with your custom implementation, whether it is Traefik or any other.
By default both http and agent ingresses are disabled.
Due to new changes in the provisioning structure, the database to which the agent is connected must be recreated.
In the provisioning, there has been a slight adjustment in the connection naming with HQ.
Changes:
grpcServer has been renamed to lensesHq
apiKey has been renamed to agentKey
With the new version of the Agent, the HQ connection in provisioning has changed, which requires a complete recreation of the database. The following log message will indicate it:
This page describes configuring and starting Lenses HQ and Agent against your Kafka cluster.
This guide uses the Lenses docker compose file. For non-dev installations and automation, see the Installation section.
HQ is configured via one file, config.yaml. The docker compose file loads the content of hq.config.yaml and mounts it as the HQ config.yaml file.
You only need to follow this step if you do not want to use the local postgres instance started by the docker compose file.
You must create a database and role in your postgres instance for HQ to use. See Database Role.
Edit the docker-compose.yaml and set the credentials for your database in the hq.config.yaml section.
Currently HQ supports:
Basic Authentication (default)
SAML
For this example we will use basic authentication. For information on configuring other methods, see Authentication, and configure the hq.config.yaml key accordingly for SAML.
To start HQ, run the following docker command:
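A sketch, assuming the compose file names the HQ service lenses-hq (check your docker-compose.yaml for the actual service name):

```bash
# Accept the EULA and start the HQ service in the background
ACCEPT_EULA=true docker compose up -d lenses-hq
```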
You can now log in via your browser with admin/admin.
To create an environment in HQ:
Log in to HQ and create an environment: Environments->New Environment.
At the end of the process, you will be shown an Agent Key. Copy that, keep it safe!
The environment will be disconnected until the Agent is up and configured with the key.
You can also manage environments using the CLI.
The Agent is configured via two files:
lenses.conf - holds low-level configuration options for the agent and the database connection. You can set this via the agent.lenses.conf in the docker-compose file
provisioning.yaml - holds the connection details to your Kafka cluster and supporting systems. You can set this via the agent.provisioning.yaml key in the docker-compose file.
You only need to follow this step if you do not want to use the local postgres instance started by the docker compose file.
You must create a database and role in your postgres instance for the Agent to use. See Database Role.
Update the docker-compose file agent.lenses.conf key for your Postgres instance.
The Agent Key for an environment needs to be added to the agent.provisioning.yaml key in the docker compose file.
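As a sketch, the HQ connection in provisioning.yaml looks roughly like this; the exact field layout may differ, so treat everything except lensesHq and agentKey (both named in the release notes) as illustrative:

```yaml
lensesHq:
  - name: lenses-hq
    version: 1
    configuration:
      server:
        value: lenses-hq        # hostname where HQ listens for agents (illustrative)
      port:
        value: 10000            # illustrative port
      agentKey:
        value: ${LENSESHQ_AGENT_KEY}
```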
Replace ${{LENSESHQ_AGENT_KEY}} with the Agent Key for the environment that you want to link to.
For more information on the configuration of the connection to HQ see here.
By default, the agent is configured to connect to Kafka on localhost. To change this update the agent.provisioning.yaml key. The information required here depends on how you want the Agent to authenticate against Kafka.
See provisioning for examples of different authentication types for Kafka.
Add the following for a basic plaintext connection to a Kafka broker, if you are using a different authentication mechanism adjust accordingly.
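A sketch of such a block, modelled on the provisioning examples (exact keys may differ; see the provisioning reference):

```yaml
kafka:
  - name: kafka
    version: 1
    tags: ["dev"]
    configuration:
      kafkaBootstrapServers:
        value:
          - PLAINTEXT://[YOUR_BOOTSTRAP_BROKER:PORT]
      protocol:
        value: PLAINTEXT
```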
Remove, or adjust the Kafka (kafka-demo), Schema Registry and Connect services in the default docker-compose file.
Replace [YOUR_BOOTSTRAP_BROKER:PORT] with the bootstrap brokers and ports for the Kafka cluster you want the Agent to connect to.
For examples of adding in other services such as Schema Registries and Kafka Connect see provisioning.
To start Agent, run the following docker command:
For non-dev environments, install the agent as close as possible to your Kafka clusters and automate the installation.
Once the agent fully starts, it will report as connected in HQ, allowing you to explore your Kafka environments.
This page describes deploying a Lenses Agent via Docker.
The Agent docker image can be configured via environment variables or via volume mounts for the configuration files.
Please check "File configuration" before running the command
For the command above to work, the provisioning YAML has to be configured.
There are two mandatory connections:
HQ, which requires an agent key (LENSESHQ_AGENT_KEY) - this key is created once the user registers a "New environment" in HQ;
Kafka connection
Environment variables prefixed with LENSES_ are transformed into corresponding configuration options. The environment variable name is converted to lowercase and underscores (_) are replaced with dots (.). As an example, to set the option lenses.port use the environment variable LENSES_PORT.
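For example (the agent image tag is the preview tag named in these docs):

```bash
# lenses.port=9991 expressed as an environment variable
docker run -e LENSES_PORT=9991 lensting/lenses-agent:6-preview
```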
Alternatively, the lenses.conf can be mounted directly as /mnt/settings/lenses.conf.
The Docker image exposes four volumes in total, where cache, logs, plugins, and persistent data are stored:
/data/storage
/data/plugins
/data/logs
/data/kafka-streams-state
Resides under /data/storage and is used to store persistent data, such as Data Policies. For this data to survive between Docker runs and/or Agent upgrades, the volume must be managed externally (persistent volume).
Resides under /data/plugins; it's where classes that extend the Agent may be added, such as custom Serdes, LDAP filters, UDFs for the Lenses SQL table engine, and custom_http implementations.
Resides under /data/logs; logs are stored here. The application also logs to stdout, so the log files aren't needed in most cases.
Resides under /data/kafka-streams-state, used when Lenses SQL is in IN_PROC configuration. In such a case, Lenses uses this scratch directory to cache Lenses SQL internal state. Whilst this directory can safely be removed, it can be beneficial to keep it around, so the Processors won’t have to rebuild their state during a restart.
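For example, a sketch of a run with externally managed volumes (host paths are placeholders):

```bash
docker run \
  -v /host/lenses/storage:/data/storage \
  -v /host/lenses/plugins:/data/plugins \
  -v /host/lenses/logs:/data/logs \
  -v /host/lenses/kstreams-state:/data/kafka-streams-state \
  lensting/lenses-agent:6-preview
```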
By default, the Agent serves connections over plaintext (HTTP). It is possible to use TLS instead. The Docker image offers the ability to provide the content for extra files via secrets mounted as files or as environment variables. Especially for SSL, the Docker image supports SSL/TLS keys and certificates in Java Keystore (JKS) format.
This capability is optional, and users can mount such files under custom paths and configure lenses.conf manually via environment variables, or lenses.append.conf.
There are two ways to use the File/Variable names of the table below.
Create a file with the appropriate filename as listed below and mount it under /mnt/settings, /mnt/secrets, or /run/secrets
Set them as environment variables.
All settings, except for passwords, can optionally be encoded in base64. The docker image will detect such encoding automatically.
FILECONTENT_JVM_SSL_TRUSTSTORE: The SSL/TLS trust store to use as the global JVM trust store. Adds to LENSES_OPTS the property javax.net.ssl.trustStore.
FILECONTENT_JVM_SSL_TRUSTSTORE_PASSWORD: The trust store password. If set, the startup script will automatically add to LENSES_OPTS the property javax.net.ssl.trustStorePassword (base64 not supported).
FILECONTENT_LENSES_SSL_KEYSTORE: The SSL/TLS keystore to use for the TLS listener of the Agent.
The docker image does not require running as root. The default user is set to root for convenience and to verify upon start-up that all the directories and files have the correct permissions. The user drops to nobody and group nogroup (65534:65534) before starting the Agent.
If the image is started without root privileges, the agent will start successfully using the effective uid:gid applied. Ensure any volumes mounted (i.e., for the license, settings, and data) have the correct permission set.
This page describes deploying Lenses HQ via docker.
The HQ docker image can be configured via volume mounts for the configuration file.
The HQ looks for the config.yaml in the current working directory. This is the root directory for Docker.
The main pre-requirements that have to be fulfilled before the Lenses HQ container can be started are:
For demo purposes and testing the product you can use our community license.
The main configuration file that has to be configured before running the docker command is config.yaml.
A sample configuration file follows:
More about the configuration options can be found on the HQ configuration page.
This page describes how to configure admin accounts in Lenses.
You can configure a list of the principals (users, service accounts) that have root admin access. Access control allows any API operation performed by such principals. If not set, it will default to [].
Admin accounts are set in the config.yaml for HQ under the auth.administrators key, as an array of usernames.
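For example:

```yaml
# config.yaml (HQ)
auth:
  administrators:
    - admin@example.com   # placeholder principal
```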
This page describes the installation of Lenses HQ via an archive on Linux.
To install the HQ from the archive you must:
Extract the archive
Configure the HQ
Start the HQ
Installation link
Link to archives can be found here: https://archive.lenses.io/pub/testing/lenses/
Extract the archive using the following command
Inside the extracted archive, you will find:
In order to properly configure HQ, two core components are necessary:
To set up authentication, there are multiple methods available.
You can choose between:
password-based authentication, which requires users to provide a username and password;
and SAML/SSO (Single Sign-On) authentication, which allows users to authenticate through an external identity provider for a seamless and secure login experience.
Both password based and SAML / SSO authentication methods can be used alongside each other.
First to cover is the users property. The users property is defined as an array, where each entry includes a username and a password. The passwords are hashed using bcrypt, ensuring that they are stored securely.
Second to cover is administrators. It defines the user emails which will have the highest level of permissions upon authentication to HQ.
Another part which has to be set in order to successfully run HQ is the http definition. As previously mentioned, this parameter defines everything around the HTTP endpoint of HQ itself and how users will interact with it.
Definition of HTTP object is as follows:
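A minimal sketch, using the fields from the HQ configuration reference (values are placeholders):

```yaml
http:
  address: ":8080"   # address the HTTP server listens at
  tls:
    enabled: false
```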
More about setting up TLS can be read here.
After correctly configuring the authentication strategy and connection endpoint, agent handling is the last important box to tick.
The Agent's object is defined as follows:
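A minimal sketch, using the fields from the HQ configuration reference (values are placeholders):

```yaml
agents:
  address: "127.0.0.1:3000"   # address HQ listens at for agent connections
  tls:
    enabled: false
```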
More about setting up TLS can be read here.
If you have meticulously followed all the outlined steps, your config.yaml file should mirror the example provided below, fully configured and ready for deployment. This ensures your system is set up correctly with all necessary settings for authentication, database connection, and other configurations optimally defined.
Start Lenses by running:
or pass the location of the config file:
If you do not pass the location of the config file, the HQ will look for it inside the current (runtime) directory. If it does not exist, it will try its installation directory.
Once HQ starts, it will be listening on https://localhost:8080.
To stop HQ, press CTRL+C.
If your server uses systemd as a service manager, you can use it to manage HQ (start upon system boot, stop, restart). Below is a simple unit file that starts HQ automatically on system boot.
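A minimal sketch, assuming HQ is installed under /opt/lenses-hq and runs as a lenses user (paths, user and binary name are placeholders):

```ini
[Unit]
Description=Lenses HQ
After=network.target

[Service]
Type=simple
User=lenses
# Run from the install directory so config.yaml is picked up from the runtime directory
WorkingDirectory=/opt/lenses-hq
ExecStart=/opt/lenses-hq/lenses-hq
Restart=on-failure

[Install]
WantedBy=multi-user.target
```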
This page describes configuring Azure SSO for Lenses authentication.
Learn more here about Azure SSO
Remember to activate HTTPS on HQ. See TLS.
Identifier (Entity ID): Use the base url of the Lenses installation, e.g. https://lenses-dev.example.com
Reply URL: Use the base url with the callback details, e.g. https://lenses-dev.example.com/api/v2/auth/saml/callback?client_name=SAML2Client
Sign on URL: Use the base url
SAML configuration is set in HQ's config.yaml file. See here for more details.
This page describes configuring basic authentication in Lenses.
Basic authentication is set in the config.yaml for HQ under the http.users key, as an array of usernames and passwords.
To enhance security, it's essential that passwords in the config.yaml file are stored in bcrypt format.
This ensures that the passwords are hashed and secure rather than stored in plaintext. For instance, instead of using "builder" directly, it should be hashed using bcrypt.
An example of a bcrypt-hashed password looks like this: $2a$12$XQW..XQrtZXCvbQWertqQeFi/1KoQW4eNephNXTfHqtoW9Q4qih5G.
Always ensure that you replace plaintext passwords with their bcrypt counterparts to securely authenticate users.
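One way to produce such a hash, assuming the htpasswd tool from apache2-utils is available (if HQ rejects the $2y$ prefix this emits, use a bcrypt tool that produces $2a$):

```bash
# -B selects bcrypt, -C 12 sets the cost; strip the leading "user:" prefix
htpasswd -nbBC 12 "" 'my-password' | tr -d ':\n'
```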
This page gives an overview of SSO & SAML for authentication with Lenses.
Control of how users are created with SSO is determined by the SSO User Creation Mode. There are two modes:
Manual
SSO
With manual mode, only users that were pre-created in HQ can log in.
With sso mode, users that do not already exist are created and logged in.
Control of how a user's group membership should be handled in relation to SSO is determined by the SSO Group Membership Mode. There are two modes:
Manual
SSO
With the manual mode, the information about the group membership returned from an Identity Provider will not be used and a user will only be a member of groups that were explicitly assigned to them in HQ.
With the sso mode, group information from Identity Provider (IdP) will be used. On login, a user's group membership is set to the groups listed in the IdP.
Groups that do not exist in HQ are ignored.
SAML configuration is defined in the config.yaml provided to HQ. For more information on the configuration options see here.
The following SSO / SAML providers are supported.
This page describes the installation of the Lenses Agent via an archive on Linux.
To install the Agent from the archive you must:
Extract the archive
Configure the Agent
Start the Agent
Installation link
Link to archives can be found here: https://archive.lenses.io/pub/testing/lenses/
Extract the archive using the following command
Inside the extracted archive, you will find:
To configure the agent's connection to Postgres and its provisioning file, see here in the quickstart.
Once the agent files are configured you can continue to start the agent.
The configuration files are the same for docker and Linux, for docker we are simply mounting the files into the container.
To be able to view and drill into your Kafka environment, you need to connect the agent to HQ. You need to create an environment in HQ and copy the Agent Key into the provisioning.yaml.
Agent key reference
The agent key within provisioning.yaml can be referenced as:
an environment variable, as shown in the example above
an inline string
There are many Kafka flavours in the market today. The good news is that Lenses supports all flavours of Kafka, and we are trying hard to keep the documentation up to date.
In the following link you can find provisioning examples for the most common Kafka flavours.
There are also provisioning examples for other components:
Provisioning file path
If you configured provisioning.yaml, make sure to set the following property:
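The property points the Agent at the provisioning file; treat the exact key name as an assumption and verify it against the configuration reference:

```properties
# lenses.conf (key name assumed)
lenses.provisioning.path=/path/to/provisioning.yaml
```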
Start Lenses by running:
or pass the location of the config file:
If you do not pass the location of lenses.conf, the Agent will look for it inside the current (runtime) directory. If it does not exist, it will try its installation directory.
To stop Lenses, press CTRL+C.
Set the permissions of the lenses.conf to be readable only by the lenses user.
The agent needs write access in 4-5 places in total:
[RUNTIME DIRECTORY]
When the Agent runs, it will create at least one directory under the directory it is run in:
[RUNTIME DIRECTORY]/logs
Where logs are stored
[RUNTIME DIRECTORY]/logs/sql-kstream-state
Where SQL processors (when in In Process mode) store state. To change the location for the processors' state directory, use the lenses.sql.state.dir option.
[RUNTIME DIRECTORY]/storage
Where the H2 embedded database is stored when PostgreSQL is not set. To change this directory, use the lenses.storage.directory option.
/run (Global directory for temporary data at runtime)
Used for temporary files. If Lenses does not have permission to use it, it will fall back to /tmp.
/tmp (Global temporary directory)
Used for temporary files (if access to /run fails), and JNI shared libraries.
Back-up this location for disaster recovery
The Agent and Kafka use two common Java libraries that take advantage of JNI and are extracted to /tmp.
You must either:
Mount /tmp without noexec
or set org.xerial.snappy.tempdir and java.io.tmpdir to a different location
If your server uses systemd as a service manager, you can use it to manage the Agent (start upon system boot, stop, restart). Below is a simple unit file that starts the Agent automatically on system boot.
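A minimal sketch, assuming the Agent is installed under /opt/lenses-agent and runs as a lenses user (paths, user and script name are placeholders):

```ini
[Unit]
Description=Lenses Agent
After=network.target

[Service]
Type=simple
User=lenses
# Run from the install directory so lenses.conf is picked up from the runtime directory
WorkingDirectory=/opt/lenses-agent
ExecStart=/opt/lenses-agent/bin/lenses
Restart=on-failure

[Install]
WantedBy=multi-user.target
```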
The Agent uses the default trust store (cacerts) of the system’s JRE (Java Runtime) installation. The trust store is used to verify remote servers on TLS connections, such as Kafka Brokers with an SSL protocol, JMX over TLS, and more. Whilst for some types of connections (e.g. Kafka Brokers) a separate keystore can be provided at the connection’s configuration, for some other connections (JMX over TLS) we always rely on the system trust store.
It is possible to set up a global custom trust store via the LENSES_OPTS environment variable:
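For example (paths and password are placeholders):

```bash
export LENSES_OPTS="-Djavax.net.ssl.trustStore=/path/to/truststore.jks -Djavax.net.ssl.trustStorePassword=changeit"
```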
Run on any Linux server (review ulimits) or container technology (docker/kubernetes). For RHEL 6.x and CentOS 6.x, use docker.
Linux machines typically have a soft limit of 1024 open file descriptors. Check your current limit with the ulimit command:
As a super-user, increase the soft limit to 4096 with:
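For example:

```bash
# Raise the soft limit for open file descriptors in the current shell
ulimit -S -n 4096
```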
Use 8GB RAM / 4 CPUs and 20GB disk space.
This page describes configuring Okta SSO for Lenses authentication.
Lenses is available directly in Okta’s Application catalog.
SAML configuration is set in HQ's config.yaml file. See here for more details.
This page describes configuring Keycloak SSO for Lenses authentication.
Go to Clients
Click Create
Fill in the details: see the table below.
Click Save
Client ID: Use the base.url of the Lenses installation, e.g. https://lenses-dev.example.com
Client Protocol: Set it to saml
Client Saml Endpoint: This is the Lenses API endpoint for Keycloak to call back. Set it to [BASE_URL]/api/v2/auth/saml/callback?client_name=SAML2Client, e.g. https://lenses-dev.example.com/api/v2/auth/saml/callback?client_name=SAML2Client
Change the settings on the client you just created to:
Name: Lenses
Description: (Optional) Add a description to your app.
SAML Signature Name: KEY_ID
Client Signature Required: OFF
Force POST Binding: ON
Front Channel Logout: OFF
Force Name ID Format: ON
Name ID Format:
Root URL: Use the base.url of the Lenses installation, e.g. https://lenses-dev.example.com
Valid Redirect URIs: Use the base.url of the Lenses installation, e.g. https://lenses-dev.example.com
Configure Keycloak to communicate groups to Lenses. Head to the Mappers (under Client scope tab) section.
Click Create
Fill in the details: see table below.
Click Save
Name: Groups
Mapper Type: Group list
Group attribute name: groups (case-sensitive)
Single Group Attribute: ON
Full group path: OFF
SAML configuration is set in HQ's config.yaml file. See here for more details.
This page describes configuring Google SSO for Lenses authentication.
Google doesn't expose the groups, or organization unit, of a user to a SAML app. This means we must set up a custom attribute for the Lenses groups that each user belongs to.
Open the Google Admin console from an administrator account.
Click the Users button
Select the More dropdown and choose Manage custom attributes
Click the Add custom attribute button
Fill the form to add a Text, Multi-value field for Lenses Groups, then click Add
Learn more about Google custom attributes
The attribute values should correspond exactly with the names of groups created within Lenses.
Open the Google Admin console from an administrator account.
Click the Users button
Select the user to update
Click User information
Click the Lenses Groups attribute
Enter one or more groups and click Save
Learn more about Google custom SAML apps
Open the Google Admin console from an administrator account.
Click the Apps button
Click the SAML apps button
Select the Add App dropdown and choose Add custom SAML app
Run through the below steps
Enter a descriptive name for the Lenses installation
Upload a Lenses icon
This will appear in the Google apps menu once the app is enabled
Given the base URL of the Lenses installation, e.g. https://lenses-dev.example.com, fill out the settings:
ACS URL: Use the base url with the callback path, e.g. https://lenses-dev.example.com/api/v2/auth/saml/callback?client_name=SAML2Client
Entity ID: Use the base url, e.g. https://lenses-dev.example.com
Start URL: Leave empty
Signed Response: Leave unchecked
Name ID format: Leave as UNSPECIFIED
Name ID: Leave as Basic Information > Primary Email
Add a mapping from the custom attribute for Lenses groups to the app attribute groups
From the newly added app details screen, select User access
Turn on the service
Lenses will reject any user that doesn't have the groups attribute set, so enabling the app for all users in the account is a good option to simplify ongoing administration.
Download the Federation Metadata XML file with the Google IdP details.
SAML configuration is set in HQ's config.yaml file. See here for more details.
This page describes installing Lenses Agent in Kubernetes via Helm.
Latest Agent image lensting/lenses-agent:6-preview (v4)
Kubernetes 1.23+
Helm 3.8.0+
Running Postgres instance
External secret operator (in case of ExternalSecret usage)
In order to configure an Agent properly, we have to understand the parameter groups that the Chart offers.
Under the lensesAgent parameter there are some key parameter groups that are used to set up the Agent:
Storage
HQ connection
Provision
Cluster RBACs
Moving forward, in the same order you can start configuring your Helm chart.
Postgres is the only available storage option.
Prerequisite:
Running Postgres instance;
Created database for an Agent;
Username (and password) which has access to created database;
In order to successfully run the Agent, storage within values.yaml has to be defined first.
Definition of storage object is as follows:
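An illustrative sketch only; consult the chart's values.yaml for the exact keys:

```yaml
lensesAgent:
  storage:
    postgres:
      host: postgres.example.com:5432   # placeholder
      username: agent
      database: agent
      # password via an ExternalSecret, a pre-created secret, or inline (demo only)
```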
Alongside the Postgres password, which can be referenced / created through the Helm chart, there are a few more options which can help while setting up the Agent.
There are two ways the username can be defined:
The most straightforward way, if the username is not being changed, is to simply define it within the username parameter:
The Postgres password can be handled in three ways, using:
An External Secret via the External Secret Operator;
A pre-created secret;
Creating a secret on the spot through values.yaml;
Connecting to Lenses HQ is a straightforward process which requires two steps:
Creating an environment and obtaining an AGENT KEY in HQ as described here, if you have not already done so.
Storing that same key in Vault or as a K8s secret.
The agent communicates with HQ via a secure custom binary protocol channel. To establish this channel and authenticate, the Agent needs an AGENT KEY.
Once the AGENT KEY has been copied, store it inside of Vault or any other tool that has integration with Kubernetes secrets.
There are three available options for how the agent key can be used:
ExternalSecret via External Secret Operator (ESO)
Pre-created secret
Inline string
To use this option, the External Secret Operator (ESO) has to be installed and available for use in the K8s cluster where you are deploying the Agent.
When specifying secret.type: "externalSecret", the chart will:
create an ExternalSecret in the namespace where the Agent is deployed;
mount the resulting secret for the Agent to use.
Make sure that the secret you are going to use is already created in the namespace where the Agent will be installed.
This option is NOT for PRODUCTION usage but rather just for demo / testing.
The chart will create a secret with the values defined below, and the same secret will be read by the Agent to connect to HQ.
This secret will be fed into the provisioning.yaml. The HQ connection is specified at line 30 below, where the reference ${LENSESHQ_AGENT_KEY} is set:
In order to enable TLS for secure communication between HQ and the Agent please refer to the following part of the page.
Provisioning offers various connections, starting with:
Kafka ecosystem components such as:
More about provisioning, and more advanced configuration options for each of these components, can be found at the following link.
The Helm chart creates Cluster roles and bindings, that are used by SQL Processors, if the deployment mode is set to KUBERNETES. They are used so that Lenses can deploy and monitor SQL Processor deployments in namespaces.
To disable the creation of Kubernetes RBAC set: rbacEnabled: false
If you want to limit the permissions the Agent has against your Kubernetes cluster, you can use Role/RoleBinding resources instead. Follow this link in order to enable it.
If you are not using SQL Processors and want to limit the permissions given to the Agent's ServiceAccount, there are two options you can choose from:
rbacEnable: true - will enable the creation of a ClusterRole and ClusterRoleBinding for the service account mentioned above;
rbacEnable: true and namespaceScope: true - will enable the creation of a Role and RoleBinding, which is more restrictive;
In this case, TLS has to be enabled on HQ. If you have not enabled it yet, you can find details here.
Enabling TLS for the communication with HQ is done in the provisioning part of values.yaml.
In order to successfully enable TLS for the Agent you would need to:
additionalVolume & additionalVolumeMounts - with which you will mount the truststore with the CA certificate that HQ is using, and which the Agent will need to successfully pass the handshake.
additionalEnv - which will be used to securely read the password to unlock the truststore.
Enable SSL in the provisioning.
Enable a service resource in the values.yaml:
To control the resources used by the Agent:
If LENSES_HEAP_OPTS is not set explicitly, it will be set implicitly.
Examples:
if no requests or limits are defined, LENSES_HEAP_OPTS will be set to -Xms1G -Xmx3G
if requests and limits are defined above those values, LENSES_HEAP_OPTS will be set by the formula -Xms[Xmx / 2] -Xmx[limits.memory - 2]
if .Values.lenses.jvm.heapOpts is set, it will override everything
To enable SQL processors in KUBERNETES mode and control the defaults:
To control the namespaces Lenses can deploy processors to, use the sql.namespaces value.
To achieve this, you need to create a Role and a RoleBinding resource in the namespace you want the processors deployed to.
For example:
Lenses namespace = lenses-ns
Processor namespace = lenses-proc-ns
Finally you need to define in the Agent configuration which namespaces the Agent has access to. Amend values.yaml to contain the following:
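An illustrative sketch, assuming the sql.namespaces value sits under the lenses object (verify against the chart's values.yaml):

```yaml
lenses:
  sql:
    namespaces:
      - lenses-proc-ns   # namespaces the Agent may deploy SQL Processors into
```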
Prometheus metrics are automatically exposed on port 9102 under /metrics.
The main configurable options for lenses.conf are available in the values.yaml under the lenses object. These include:
Authentication
Database connections
SQL processor configurations
To apply other static configurations use lenses.append.conf, for example:
First, add the Helm Chart repository using the Helm command line:
Installing using cloned repository:
Installing using Helm repository:
Be aware that, for the time being and for alpha purposes, usage of --version is mandatory when deploying the Helm chart through the Helm repository.
You can also find examples in the Helm chart repo.
This page describes installing Lenses HQ in Kubernetes via Helm.
Lenses HQ is a prerequisite for the installation of the Lenses Agent.
Latest images (v19):
HQ image: lensting/lenses-hq:6-preview
HQ Cli image: lensting/lenses-cli:6-preview
Kubernetes 1.23+
Helm 3.8.0+
Running Postgres instance:
database for HQ;
username (and password) that has access to HQ database;
Optional External secret operator (in case of ExternalSecret usage)
In order to configure HQ properly, we have to understand the parameter groups that the Chart offers.
Under the lensesHq parameter there are some key parameter groups that are used to set up HQ:
definition of connection towards database (Postgres is the only storage option)
Password based authentication configuration
SAML / SSO configuration
definition of administrators or first users to access the HQ
defines port under which HQ will be available for end users
defines values of special headers and cookies
types of connection such as TLS and non-TLS definitions
defines connection between HQ and the Agent such as port where HQ will be listening for agent connections.
types of connection such as TLS and non-TLS definitions
license
controls the metrics settings where Prometheus alike metrics will be exposed
definition of logging level for HQ
Moving forward, in the same order you can start configuring your Helm chart.
Postgres is the only available storage option.
Prerequisite:
Running Postgres instance;
Created database for HQ;
Username (and password) which has access to created database;
In order to successfully run HQ, storage within values.yaml has to be defined first.
Definition of storage object is as follows:
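An illustrative sketch only; consult the chart's values.yaml for the exact keys:

```yaml
lensesHq:
  storage:
    postgres:
      host: postgres.example.com:5432   # placeholder
      username: hq
      database: hq
      # password via an ExternalSecret, a pre-created secret, or inline (demo only)
```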
Alongside the Postgres password, which can be referenced / created through the Helm chart, there are a few more options which can help while setting up HQ.
There are two ways the username can be defined:
The most straightforward way, if the username is not being changed, is to simply define it within the username parameter:
If the Postgres username is rotated or frequently changed, it can be referenced from a pre-created secret.
The Postgres password can be handled in three ways, using:
An External Secret via the External Secret Operator;
A pre-created secret;
Creating a secret on the spot through values.yaml;
To use this option, the External Secret Operator (ESO) has to be installed and available for use in the K8s cluster where you are deploying HQ.
When specifying passwordSecret.type: "externalSecret", the chart will:
create an ExternalSecret in the namespace where HQ is deployed;
mount the resulting secret for HQ to use.
Make sure that the secret you are going to use is already created in the namespace where HQ will be installed.
This option is NOT for PRODUCTION usage but rather just for demo / testing.
The chart will create a secret with the values defined below, and the same secret will be read by HQ in order to connect to Postgres.
Sometimes special parameters are needed to form the correct connection URI. To do so, you can set extra settings using params.
Example:
SAML / SSO is available only with Enterprise license.
The second pre-requirement to successfully run HQ is setting up initial authentication.
You can choose between:
password-based authentication, which requires users to provide a username and password;
and SAML/SSO (Single Sign-On) authentication, which allows users to authenticate through an external identity provider for a seamless and secure login experience.
Definition of auth object is as follows:
First to cover is the users property. The users property is defined as an array, where each entry includes a username and a password. The passwords need to be hashed using bcrypt before being placed within the password property, ensuring that they are stored correctly and securely.
Second to cover is administrators. It defines the user emails which will have the highest level of permissions upon authentication to HQ.
The third attribute is the saml.metadata field, needed for setting up SAML / SSO authentication. In this step you will need the metadata.xml file, which can be set in two ways:
Referencing metadata.xml file through pre-created secret;
Placing metadata.xml contents inline as a string.
If the SAML IdP requires certificate verification, it can be enabled and provided in the following way:
The third pre-requirement to successfully run HQ is the http definition. As previously mentioned, this parameter defines everything around the HTTP endpoint of HQ itself and how users will interact with it.
Definition of HTTP object is as follows:
The second part of the HTTP definition is enabling TLS, and the TLS definition itself. TLS for lensesHq.http.tls is configured the same way as previously defined for lensesHq.agents.tls.
After correctly configuring the authentication strategy and connection endpoint, agent handling is the last important box to tick.
The Agent's object is defined as follows:
By default, TLS for the communication between the Agent and HQ is disabled. If you need to enable it, the following has to be set:
lensesHq.agents.tls
- certificates to manage connection between HQ and the Agents
lensesHq.http.tls
- certificates to manage connection with HQ's API
Unlike private keys, which can be referenced and obtained only through a secret, certificates can be referenced directly in the values.yaml file as a string, or as a secret.
Whilst the chart supports setting TLS on Lenses HQ itself, we recommend placing it on the Ingress resource.
Ingress and service resources are optionally supported.
The http ingress is intended only for HTTP/S traffic, while the agents ingress is designed specifically for the TCP protocol. Ensure appropriate ingress configuration for your use case.
Enable an Ingress resource in the values.yaml:
Enable a service resource in the values.yaml:
Lenses HQ by default uses the default Kubernetes service account, but you can choose to use a specific one.
If you define the following:
The chart will create a new service account in the defined namespace for HQ to use.
There are two options you can choose between:
rbacEnable: true - will enable the creation of a ClusterRole and ClusterRoleBinding for the service account mentioned in the snippet above;
rbacEnable: true and namespaceScope: true - will enable the creation of a Role and RoleBinding, which is more restrictive.
There are different logging modes and levels that can be adjusted.
First, add the Helm Chart repository using the Helm command line:
Be aware that, for the time being and for alpha purposes, usage of --version is mandatory when deploying the Helm chart through the Helm repository.
After the successful configuration and installation of HQ, the next steps would be:
Welcome to Lenses, Autonomy in data streaming.
This documentation is for Lenses 6 (preview). For Lenses 5.5 (stable) see here.
Lenses has two components:
HQ is a central portal where end users interact with different environments (clusters). It provides a central place to explore data across many environments.
HQ is a single binary, installed on premise or in your cloud. From HQ you create environments, and for each environment, you deploy an agent that connects back to HQ.
Lenses defines each Kafka Cluster and supporting services, such as Schema Registries and Kafka Connect Clusters, as an environment.
You can have many environments, on premise, in the cloud, provided HQ has network access to the agent and the agent can connect to your Kafka cluster or any Kafka API compatible service.
Each environment has an agent. Environments can also be assigned extra metadata such as tiers, domains and descriptions.
There's a 1 to 1 relationship between environments, agents and Kafka clusters.
To explore and operate in an environment you need an agent. Agents are headless applications, deployed with connectivity to your Kafka cluster and supporting services.
Agents only ever communicate with HQ, using an Agent key over a secure channel. You can not, as a user, interact directly with them. End users are unaware of agents, only environments.
Agents require:
Agent Key to establish a communication channel to HQ
Connectivity to a Kafka cluster and credentials to do so.
The agent acts as a proxy to read from, or write to, your Kafka cluster, execute queries, monitor for alerts and manage SQL Processors and Kafka Connectors.
You can freely map a Group to any SSO group including any characters in the name.
Before, a Group was mapped to SSO via its resource-name. That limited you to SSO groups with only characters and - dashes.
Now 2 things happen:
By default, the SSO name is the Group name (not the resource-name). This is most of the time what you'd expect.
For the special cases when your SSO name is something specific or cryptic (e.g. a UUID), you can override the mapping by setting the SSO mapping name to anything you want.
Before, you could set configuration rules for your topics for each of your environments. E.g. enforce topic naming conventions (only dashes) or a maximum number of partitions.
Now, you can control who can access and set these rules with IAM permissions.
The resource type is governance:rule. Here's an example of read/write access to these rules for the environment eu-stg-env:
Find more information in the IAM section.
See what's happening under the hood of your SQL queries. Learn about your query's performance:
Which partitions were read and how much of them.
How many records were scanned, skipped and offered as results.
Timing and size.
Configuration details that you can tweak.
The Global SQL Studio will now show you any bad records it cannot understand. These records may be of incorrect formats (e.g. String in an AVRO topic) or have invalid schemas.
IAM permissions editor: improved IntelliSense
For a more IDE-like experience, get tab completion specific to each segment that you're working on.
Global SQL Studio: improved performance.
SSO: fixed offboarding SSO users. When a user has no SSO groups that map to Lenses, Lenses ensures that the user also has no Lenses groups. This is useful when offboarding users to ensure that they won't have Lenses access.
Global SQL Studio: fixed syntax highlighting.
This quick start guide will walk you through installing and starting Lenses using Docker, followed by connecting Lenses to your Kafka cluster.
This quick start is for a local setup. To connect to your Kafka clusters, see here.
By running the following command, including the ACCEPT_EULA setting, you are accepting the Lenses EULA agreement.
Run the following command:
Once the images are pulled and containers started, you can login here with admin/admin and explore.
It may take a few seconds for the agent to fully boot and connect to HQ.
The quick start uses a docker compose file to:
Permissions IntelliSense
New admin permissions to control who can access Lenses Logs
SQL Studio redesigned view
scanned partitions progress
additional statistics
split tab left and right for SQL queries to compare data.
Policy changes are now effective as soon as they are saved; re-login is no longer necessary.
Connect Lenses to your environment.
This page describes the supported installation methods for Lenses.
Lenses can be deployed in the following ways:
This page describes installing Lenses with Docker Image.
This page describes how to configure Lenses.
This page describes installing Lenses HQ and Agent in Kubernetes via Helm.
Only Helm 3 is supported.
This page describes the authentication methods supported in Lenses.
Authentication is configured in HQ.
Users can authenticate in two ways: basic authentication and SSO / SAML. Additionally, specific users can be assigned as admin accounts.
This page describes configuring SSO & SAML in Lenses for authentication.
This page describes installing Lenses via a Linux archive.
This page describes configuring OneLogin SSO for Lenses authentication.
SAML configuration is set in HQ's config.yaml file. See here for more details.
This page describes configuring a Generic SSO provider for Lenses authentication.
SAML configuration is set in HQ's config.yaml file. See here for more details.
This page describes configuring Lenses to connect to Aiven.
This page describes the Lenses Agent configuration.
HQ's configuration is defined in the config.yaml file
To accept the Lenses EULA, set the following in the config.yaml file:
Without accepting the EULA, HQ will not start! See License.
It has the following top level groups:
http (required): Configures everything involving the HTTP.
agents (required): Controls the agent handling.
database (required): Configures database settings.
logger (required): Sets the logger behaviour.
metrics (required): Controls the metrics settings.
license (required): Holds the license key.
auth (required): Configures authentication and authorisation.
Configures authentication and authorisation.
It has the following fields:
administrators (optional, default [], strings): Grants root access to principals.
saml (optional): Contains SAML2 IdP configuration.
users (optional, default [], array): Creates initial users for password based authentication.
Lists the names of the principals (users, service accounts) that have root access. Access control allows any API operation performed by such principals. Optional. If not set, it will default to [].
Contains SAML2 IdP configuration. Please refer here for its structure.
Configures everything involving the HTTP.
It has the following fields:
address (required, string): Sets the address the HTTP server listens at.
accessControlAllowOrigin (optional, default ["*"], strings): Sets the value of the "Access-Control-Allow-Origin" header.
accessControlAllowCredentials (optional, default false, boolean): Sets the value of the "Access-Control-Allow-Credentials" header.
secureSessionCookies (optional, default true, boolean): Sets the "Secure" attribute on session cookies.
tls (required): Contains TLS configuration.
Sets the address the HTTP server listens at. Example value: 127.0.0.1:80.
Sets the value of the "Access-Control-Allow-Origin" header. This is only relevant when serving the backend from a different origin than the UI. Optional. If not set, it will default to ["*"].
Sets the value of the "Access-Control-Allow-Credentials" header. This is only relevant when serving the backend from a different origin than the UI. Optional. If not set, it will default to false.
Sets the "Secure" attribute on authentication session cookies. When set, a browser will not send such cookies over unsecured HTTP (except for localhost). If running Lenses HQ over unsecured HTTP, set this to false. Optional. If not set, it will default to true.
Contains TLS configuration. Please refer here for its structure.
Contains SAML2 IdP configuration.
It has the following fields:
metadata (required, string): Contains the IdP issued XML metadata blob.
baseURL (required, string): Defines the base URL of HQ for IdP redirects.
uiRootURL (optional, default /, string): Controls where to redirect to upon successful authentication.
entityID (required, string): Defines the Entity ID.
groupAttributeKey (optional, default groups, string): Sets the attribute name for group names.
userCreationMode (optional, default manual, string): Controls how the creation of users should be handled in relation to SSO information.
groupMembershipMode (optional, default manual, string): Controls how the management of a user's group membership should be handled in relation to SSO information.
Contains the IdP issued XML metadata blob. Example value: <?xml version="1.0" ... (big blob of xml) </md:EntityDescriptor>.
Defines the base URL of Lenses HQ; the IdP redirects back to here on success. Example value: https://hq.example.com.
Controls where the backend redirects to after having received a valid SAML2 assertion. Optional. If not set, it will default to /. Example value: /.
Defines the Entity ID. Example value: https://hq.example.com.
Sets the attribute name from which group names are extracted in the SAML2 assertions. Different providers use different names. Okta, Keycloak and Google use "groups". OneLogin uses "roles". Azure uses "http://schemas.microsoft.com/ws/2008/06/identity/claims/groups". Optional. If not set, it will default to groups. Example value: groups.
Controls how the creation of users should be handled in relation to SSO information. With the 'manual' mode, only users that currently exist in HQ can login. Users that do not exist are rejected. With the 'sso' mode, users that do not exist are automatically created. Allowed values are manual or sso. Optional. If not set, it will default to manual.
Controls how the management of a user's group membership should be handled in relation to SSO information. With the 'manual' mode, the information about the group membership returned from an Identity Provider will not be used and a user will only be a member of groups that were explicitly assigned to them locally. With the 'sso' mode, group information from the Identity Provider (IdP) will be used. On login, a user's group membership is set to the groups listed in the IdP. Groups that do not exist in HQ are ignored. Allowed values are manual or sso. Optional. If not set, it will default to manual.
Controls the agent handling.
It has the following fields:
address (required, string): Sets the address the agent server listens at.
tls (required): Contains TLS configuration.
Sets the address the agent server listens at. Example value: 127.0.0.1:3000.
Contains TLS configuration. Please refer here for its structure.
Contains TLS configuration.
It has the following fields:
enabled (required, boolean): Enables or disables TLS.
cert (optional, default ``, string): Sets the PEM formatted public certificate.
key (optional, default ``, string): Sets the PEM formatted private key.
verboseLogs (optional, default false, boolean): Enables verbose TLS logging.
Enables or disables TLS. Example value: false.
Sets the PEM formatted public certificate. Optional. If not set, it will default to ``. Example value: -----BEGIN CERTIFICATE----- EXampLeRanDoM ... -----END CERTIFICATE-----.
Sets the PEM formatted private key. Optional. If not set, it will default to ``. Example value: -----BEGIN PRIVATE KEY----- ExAmPlErAnDoM ... -----END PRIVATE KEY-----.
Enables additional logging of TLS settings and events at debug level. The information presented might be a bit too much for day to day use but can provide extra information for troubleshooting TLS configuration. Optional. If not set, it will default to false.
Configures database settings.
It has the following fields:
host
Yes
n/a
string
Sets the name of the host to connect to.
username
No
``
string
Sets the username to authenticate as.
password
No
``
string
Sets the password to authenticate as.
database
Yes
n/a
string
Sets the database to use.
schema
No
``
string
Sets the schema to use.
TLS
No
false
boolean
Enables TLS.
params
No
{}
DBConnectionParams
Provides fine-grained control.
Sets the name of the host to connect to. A comma-separated list of host names is also accepted; each host name in the list is tried in order.
Example value: postgres:5432
.
Sets the username to authenticate as. Optional. If not set, it will default to ``.
Example value: johhnybingo
.
Sets the password to authenticate as. Optional. If not set, it will default to ``.
Example value: my-password
.
Sets the database to use.
Example value: my-database
.
Sets the schema to use. Optional. If not set, it will default to ``.
Example value: my-schema
.
Enables TLS. In PostgreSQL connection string terms, setting TLS to false
corresponds to sslmode=disable
; setting TLS to true
corresponds to sslmode=verify-full
. For more fine-grained control, specify sslmode
in the params which takes precedence. Optional. If not set, it will default to false
.
Example value: true
.
Contains connection string parameters as key/values pairs. It allow fine-grained control of connection settings. The parameters can be found here: https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-PARAMKEYWORDS Optional. If not set, it will default to {}
.
Example value: {"application_name":"example"}
.
Sets the logger behaviour.
It has the following fields:
mode
Yes
n/a
string
Controls the format of the logger's output.
level
No
info
string
Controls the level of the logger.
Controls the format of the logger's output. Allowed values are text
or json
.
Controls the level of the logger. Allowed values are info
or debug
. Optional. If not set, it will default to info
.
Controls the metrics settings.
It has the following fields:
prometheusAddress
No
:9090
string
Sets the Prometheus address.
Sets the address at which Prometheus metrics are served. Optional. If not set, it will default to :9090
.
Holds the license key.
It has the following fields:
key
Yes
n/a
string
Sets the license key.
acceptEULA
Yes
fals
boolean
Sets the license key. An HQ key starts with "licensekey".
Accepts the Lenses EULA.
HQ's configuration is defined in the config.yaml file.
To accept the Lenses EULA, set the following in the lenses.conf file:
Without accepting the EULA, the Agent will not start! See License.
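The key name is taken from the Agent configuration reference later in this document; a minimal sketch:

```properties
# lenses.conf
lenses.eula.accept=true
```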
It has the following top level groups:
| Group | Required | Default | Description |
|---|---|---|---|
| http | Yes | n/a | Configures everything involving the HTTP. |
| agents | Yes | n/a | Controls the agent handling. |
| database | Yes | n/a | Configures database settings. |
| logger | Yes | n/a | Sets the logger behaviour. |
| metrics | Yes | n/a | Controls the metrics settings. |
| license | Yes | n/a | Holds the license key. |
| auth | Yes | n/a | Configures authentication and authorisation. |
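A minimal config.yaml sketch using these top level groups (all values are illustrative, not defaults):

```yaml
http:
  address: 0.0.0.0:8080
  tls:
    enabled: false
agents:
  address: 0.0.0.0:10000
  tls:
    enabled: false
database:
  host: postgres:5432
  username: hq
  password: my-password
  database: hq
logger:
  mode: text
  level: info
metrics:
  prometheusAddress: ":9090"
license:
  key: licensekey-xxxxx
  acceptEULA: true
auth:
  administrators:
    - admin@example.com
```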
Configures authentication and authorisation.
It has the following fields:
| Field | Required | Default | Type | Description |
|---|---|---|---|---|
| administrators | No | [] | strings | Grants root access to principals. |
| saml | No | n/a | | Contains SAML2 IdP configuration. |
| users | No | [] | Array | Creates initial users for password based authentication. |
Lists the names of the principals (users, service accounts) that have root access. Access control allows any API operation performed by such principals. Optional. If not set, it will default to [].
Contains SAML2 IdP configuration. Please refer here for its structure.
Configures everything involving the HTTP.
It has the following fields:
| Field | Required | Default | Type | Description |
|---|---|---|---|---|
| address | Yes | n/a | string | Sets the address the HTTP server listens at. |
| accessControlAllowOrigin | No | ["*"] | strings | Sets the value of the "Access-Control-Allow-Origin" header. |
| accessControlAllowCredentials | No | false | boolean | Sets the value of the "Access-Control-Allow-Credentials" header. |
| secureSessionCookies | No | true | boolean | Sets the "Secure" attribute on session cookies. |
| tls | Yes | n/a | | Contains TLS configuration. |
Sets the address the HTTP server listens at. Example value: 127.0.0.1:80.
Sets the value of the "Access-Control-Allow-Origin" header. This is only relevant when serving the backend from a different origin than the UI. Optional. If not set, it will default to ["*"].
Sets the value of the "Access-Control-Allow-Credentials" header. This is only relevant when serving the backend from a different origin than the UI. Optional. If not set, it will default to false.
Sets the "Secure" attribute on authentication session cookies. When set, a browser does not send such cookies over unsecured HTTP (except for localhost). If running Lenses HQ over unsecured HTTP, set this to false. Optional. If not set, it will default to true.
Contains TLS configuration. Please refer here for its structure.
Contains SAML2 IdP configuration.
It has the following fields:
| Field | Required | Default | Type | Description |
|---|---|---|---|---|
| metadata | Yes | n/a | string | Contains the IdP issued XML metadata blob. |
| baseURL | Yes | n/a | string | Defines the base URL of HQ for IdP redirects. |
| uiRootURL | No | / | string | Controls where to redirect to upon successful authentication. |
| entityID | Yes | n/a | string | Defines the Entity ID. |
| groupAttributeKey | No | groups | string | Sets the attribute name for group names. |
| userCreationMode | No | manual | string | Controls how the creation of users should be handled in relation to SSO information. |
| groupMembershipMode | No | manual | string | Controls how the management of a user's group membership should be handled in relation to SSO information. |
Contains the IdP issued XML metadata blob. Example value: <?xml version="1.0" ... (big blob of xml) </md:EntityDescriptor>.
Defines the base URL of Lenses HQ; the IdP redirects back to here on success. Example value: https://hq.example.com.
Controls where the backend redirects to after having received a valid SAML2 assertion. Optional. If not set, it will default to /. Example value: /.
Defines the Entity ID. Example value: https://hq.example.com.
Sets the attribute name from which group names are extracted in the SAML2 assertions. Different providers use different names. Okta, Keycloak and Google use "groups". OneLogin uses "roles". Azure uses "http://schemas.microsoft.com/ws/2008/06/identity/claims/groups". Optional. If not set, it will default to groups. Example value: groups.
Controls how the creation of users should be handled in relation to SSO information. With the 'manual' mode, only users that currently exist in HQ can log in; users that do not exist are rejected. With the 'sso' mode, users that do not exist are automatically created. Allowed values are manual or sso. Optional. If not set, it will default to manual.
Controls how the management of a user's group membership should be handled in relation to SSO information. With the 'manual' mode, the group membership information returned from an Identity Provider is not used, and a user is only a member of groups that were explicitly assigned to them locally. With the 'sso' mode, group information from the Identity Provider (IdP) is used: on login, a user's group membership is set to the groups listed in the IdP. Groups that do not exist in HQ are ignored. Allowed values are manual or sso. Optional. If not set, it will default to manual.
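A sketch of the auth.saml block using the fields above (metadata shortened; values illustrative):

```yaml
auth:
  saml:
    metadata: |
      <?xml version="1.0"?> ... </md:EntityDescriptor>
    baseURL: https://hq.example.com
    uiRootURL: /
    entityID: https://hq.example.com
    groupAttributeKey: groups
    userCreationMode: sso
    groupMembershipMode: sso
```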
Controls the agent handling.
It has the following fields:
| Field | Required | Default | Type | Description |
|---|---|---|---|---|
| address | Yes | n/a | string | Sets the address the agent server listens at. |
| tls | Yes | n/a | | Contains TLS configuration. |
Sets the address the agent server listens at. Example value: 127.0.0.1:3000.
Contains TLS configuration. Please refer here for its structure.
Contains TLS configuration.
It has the following fields:
| Field | Required | Default | Type | Description |
|---|---|---|---|---|
| enabled | Yes | n/a | boolean | Enables or disables TLS. |
| cert | No | `` | string | Sets the PEM formatted public certificate. |
| key | No | `` | string | Sets the PEM formatted private key. |
| verboseLogs | No | false | boolean | Enables verbose TLS logging. |
Enables or disables TLS. Example value: false.
Sets the PEM formatted public certificate. Optional. If not set, it will default to ``. Example value: -----BEGIN CERTIFICATE----- EXampLeRanDoM ... -----END CERTIFICATE-----.
Sets the PEM formatted private key. Optional. If not set, it will default to ``. Example value: -----BEGIN PRIVATE KEY----- ExAmPlErAnDoM ... -----END PRIVATE KEY-----.
Enables additional logging of TLS settings and events at debug level. The information presented might be a bit too much for day to day use but can provide extra information for troubleshooting TLS configuration. Optional. If not set, it will default to false.
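A sketch of a tls block with inline PEM material (shortened):

```yaml
tls:
  enabled: true
  cert: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
  key: |
    -----BEGIN PRIVATE KEY-----
    ...
    -----END PRIVATE KEY-----
  verboseLogs: false
```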
Configures database settings.
It has the following fields:
| Field | Required | Default | Type | Description |
|---|---|---|---|---|
| host | Yes | n/a | string | Sets the name of the host to connect to. |
| username | No | `` | string | Sets the username to authenticate as. |
| password | No | `` | string | Sets the password to authenticate with. |
| database | Yes | n/a | string | Sets the database to use. |
| schema | No | `` | string | Sets the schema to use. |
| TLS | No | false | boolean | Enables TLS. |
| params | No | {} | DBConnectionParams | Provides fine-grained control. |
Sets the name of the host to connect to. A comma-separated list of host names is also accepted; each host name in the list is tried in order. Example value: postgres:5432.
Sets the username to authenticate as. Optional. If not set, it will default to ``. Example value: johhnybingo.
Sets the password to authenticate with. Optional. If not set, it will default to ``. Example value: my-password.
Sets the database to use. Example value: my-database.
Sets the schema to use. Optional. If not set, it will default to ``. Example value: my-schema.
Enables TLS. In PostgreSQL connection string terms, setting TLS to false corresponds to sslmode=disable; setting TLS to true corresponds to sslmode=verify-full. For more fine-grained control, specify sslmode in the params, which takes precedence. Optional. If not set, it will default to false. Example value: true.
Contains connection string parameters as key/value pairs. It allows fine-grained control of connection settings. The parameters can be found here: https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-PARAMKEYWORDS Optional. If not set, it will default to {}. Example value: {"application_name":"example"}.
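A sketch of a database block; note how sslmode in params takes precedence over the TLS shorthand:

```yaml
database:
  host: postgres:5432
  username: johhnybingo
  password: my-password
  database: my-database
  schema: my-schema
  TLS: true
  params:
    application_name: lenses-hq
    sslmode: verify-full
```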
Sets the logger behaviour.
It has the following fields:
| Field | Required | Default | Type | Description |
|---|---|---|---|---|
| mode | Yes | n/a | string | Controls the format of the logger's output. |
| level | No | info | string | Controls the level of the logger. |
Controls the format of the logger's output. Allowed values are text or json.
Controls the level of the logger. Allowed values are info or debug. Optional. If not set, it will default to info.
Controls the metrics settings.
It has the following fields:
| Field | Required | Default | Type | Description |
|---|---|---|---|---|
| prometheusAddress | No | :9090 | string | Sets the Prometheus address. |
Sets the address at which Prometheus metrics are served. Optional. If not set, it will default to :9090.
Holds the license key.
It has the following fields:
| Field | Required | Default | Type | Description |
|---|---|---|---|---|
| key | Yes | n/a | string | Sets the license key. |
| acceptEULA | Yes | false | boolean | Accepts the Lenses EULA. |

Sets the license key. An HQ key starts with "licensekey".
Accepts the Lenses EULA.
This page describes an overview of the Lenses Agent configuration.
The Agent configuration is driven by two files:
lenses.conf
provisioning.yaml
lenses.conf holds all the database connections and low-level options for the agent.
provisioning.yaml holds your Kafka cluster and supporting services that the Agent is to connect to. In addition, it defines the connection to HQ. The provisioning.yaml file is watched by the Agent, so any changes made, if valid, are applied. See for more information. Without provisioning, your agent cannot connect to HQ.
This page describes how to setup connections to Kafka and other services and have changes applied automatically for the Lenses Agent.
This page describes how to connect the Lenses Agent to your Kafka brokers.
The Lenses Agent can connect to any Kafka cluster or service exposing the Apache Kafka APIs and supporting the authentication methods offered by Apache Kafka.
This page describes connecting a Lenses Agent to HQ.
To be able to view and drill into your Kafka environment, you need to connect the agent to HQ. You need to create an environment in HQ and copy the Agent Key into the provisioning.yaml.
This page describes connecting the Lenses Agent to Apache Kafka.
A Kafka connection is required for the agent to start. You can connect to Kafka via:
Plaintext (no credentials, unencrypted)
SSL (no credentials, encrypted)
SASL Plaintext and SASL SSL
With PLAINTEXT, there's no encryption and no authentication when connecting to Kafka.
The only required fields are:
kafkaBootstrapServers - a list of bootstrap servers (brokers). It is recommended to add as many brokers (if available) as convenient to this list for fault tolerance.
protocol - depending on the protocol, other fields might be necessary (see examples for other protocols)
In the following example, JMX metrics for the Kafka brokers are configured too, assuming that all brokers expose their JMX metrics on the same port (9581), without SSL and authentication.
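A sketch of such a provisioning.yaml Kafka component, using the value-wrapped key/value layout of Agent provisioning (the metricsType key name is an assumption):

```yaml
kafka:
  - name: kafka
    version: 1
    configuration:
      kafkaBootstrapServers:
        value:
          - PLAINTEXT://my-kafka-host-0:9092
          - PLAINTEXT://my-kafka-host-1:9092
      protocol:
        value: PLAINTEXT
      metricsPort:
        value: 9581
      metricsType:
        value: JMX
```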
With SSL the connection to Kafka is encrypted. You can also use SSL and certificates to authenticate users against Kafka.
A truststore (with password) might need to be set explicitly if the global truststore of the Agent does not include the Certificate Authority (CA) of the brokers.
If TLS is used for authentication to the brokers in addition to encryption-in-transit, a key store (with passwords) is required.
There are 2 SASL-based protocols to access Kafka Brokers: SASL_SSL and SASL_PLAINTEXT. Both require a SASL mechanism and JAAS configuration values. What differs is:
Whether the transport layer is encrypted (SSL);
The SASL mechanism for authentication (PLAIN, AWS_MSK_IAM, GSSAPI).
In addition to this, a keytab file might be required, depending on the SASL mechanism (for example when using the GSSAPI mechanism, most often used for Kerberos).
To use Kerberos authentication, a Kerberos Connection should be created beforehand.
When encryption-in-transit is used (with SASL_SSL), a trust store might need to be set explicitly if the global trust store of Lenses does not include the CA of the brokers.
Encrypted communication and basic username and password for authentication.
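A sketch of a SASL_SSL connection with the PLAIN mechanism (key names follow the same provisioning layout as above and should be treated as illustrative):

```yaml
kafka:
  - name: kafka
    version: 1
    configuration:
      kafkaBootstrapServers:
        value:
          - SASL_SSL://my-kafka-host-0:9093
      protocol:
        value: SASL_SSL
      saslMechanism:
        value: PLAIN
      saslJaasConfig:
        value: |
          org.apache.kafka.common.security.plain.PlainLoginModule required
          username="my-user"
          password="my-password";
```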
In order to use Kerberos authentication, a Kerberos Connection should be created beforehand.
No SSL encryption of communication; credentials are sent to Kafka in clear text.
This page describes configuring Lenses to connect to Confluent Platform.
For Confluent Platform see Apache Kafka.
This page describes configuring Lenses to connect to Confluent Cloud.
For Confluent Cloud see Apache Kafka.
This page describes connecting Lenses to Azure Event Hubs.
Add a shared access policy
Navigate to your Event Hub resource and select Shared access policies in the Settings section.
Select + Add shared access policy, give it a name, and check all boxes for the permissions (Manage, Send, Listen).
Once the policy is created, obtain the Primary Connection String by clicking the policy and copying the connection string. The connection string will be used as a JAAS password to connect to Kafka.
The bootstrap broker is [YOUR_EVENT_HUBS_NAMESPACE].servicebus.windows.net:9093.
Set the following in the provisioning.yaml
First, set the environment variable:
Note that the backslash before "$ConnectionString" is added to escape the $ sign.
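A sketch of the step; the variable name is illustrative, while the literal $ConnectionString username is the Event Hubs convention:

```bash
# the Primary Connection String becomes the JAAS password
export EVENTHUB_CONNECTION_STRING="Endpoint=sb://...;SharedAccessKeyName=...;SharedAccessKey=..."
# the backslash stops the shell from expanding the $ sign
export EVENTHUB_JAAS_USERNAME="\$ConnectionString"
```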
This page describes an overview of Lenses Agent Provisioning.
As of version 6.0, calling the REST endpoint for provisioning is no longer available.
Connections are defined in the provisioning.yaml file. The Agent will watch the file and resolve the desired state, applying connections defined in the file.
Connections are defined in the provisioning.yaml. This file is divided into components, each component representing a type of connection.
Each component has the following mandatory fields:
Name - the free-form name of the connection
Version - set to 1
Configuration - a list of keys/values dependent on the component type
The provisioning.yaml contains secrets. If you are deploying via Helm, the chart will use Kubernetes secrets.
Additionally, support is provided for referencing environment variables. This allows you to set secrets in your environment and resolve the value at runtime.
Many connections need files, for example, to secure Kafka with SSL you will need a key store and optionally a trust store.
To reference a file in the provisioning.yaml, for example, given:
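a sketch, assuming the convention where a configuration entry points at a file via a file key:

```yaml
sslKeystore:
  file: my-keystore.jks
```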
a file called my-keystore.jks is expected in the same directory.
This page describes connecting Lenses to Confluent schema registries.
Set the following examples in provisioning.yaml
The URLs (nodes) should always have a scheme defined (http:// or https://).
For Basic Authentication, define username and password properties.
A custom truststore is needed when the Schema Registry is served over TLS (encryption-in-transit) and the Registry’s certificate is not signed by a trusted CA.
A custom truststore might be necessary too (see above).
By default, Lenses will use hard delete for Schema Registry. To use soft delete, add the following property:
This page describes connecting Lenses to Apicurio.
Apicurio supports the following versions of Confluent's API:
Confluent Schema Registry API v6
Confluent Schema Registry API v7
Set the following examples in provisioning.yaml
Set the schema registry URLs to include the compatibility endpoints, for example:
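For instance, assuming Apicurio's Confluent-compatible API is mounted at its usual /apis/ccompat path (adjust v6 or v7 to match your registry):

```yaml
schemaRegistryUrls:
  value:
    - https://apicurio.example.com/apis/ccompat/v7
```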
This page describes how to connect Lenses to an Amazon MSK Serverless cluster.
It is recommended to install the Agent on an EC2 instance or with EKS in the same VPC as your MSK Serverless cluster.
Enable communication between the Agent and the Amazon MSK Serverless cluster by opening the Amazon MSK Serverless cluster's security group in the AWS Console and adding the IP address of your Agent installation.
To authenticate the Agent and access resources within the MSK Serverless cluster, we'll need to create an IAM policy and apply it to the resource (EC2, EKS cluster, etc.) running the Agent service. Here is an example IAM policy with sufficient permissions which you can associate with the relevant IAM role:
MSK Serverless IAM to be used after cluster creation. Update this IAM policy with the relevant ARN.
Click your MSK Serverless Cluster in the MSK console and select View Client Information page to check the bootstrap server endpoint.
To enable the creation of SQL Processors that create consumer groups, you need to add the following statement in your IAM policy:
Update the placeholders in the IAM policy based on the relevant MSK Serverless cluster ARN.
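A sketch of such a statement; the kafka-cluster actions are standard MSK IAM actions, while the exact resource ARN shape must be adapted to your cluster:

```json
{
  "Effect": "Allow",
  "Action": [
    "kafka-cluster:DescribeGroup",
    "kafka-cluster:AlterGroup"
  ],
  "Resource": "arn:aws:kafka:<REGION>:<ACCOUNT_ID>:group/<CLUSTER_NAME>/*"
}
```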
To integrate with the AWS Glue Schema Registry, you also need to add the following statement for the registries and schemas in your IAM policy:
Update the placeholders in the IAM policy based on the relevant MSK Serverless cluster ARN.
To integrate with the AWS Glue Schema Registry, you also need to modify the security policy for the registry and schemas, which results in additional functions within it:
More details about how IAM works with MSK Serverless can be found in the documentation: MSK Serverless
When using the Agent with MSK Serverless:
The agent does not receive Prometheus-compatible metrics from the brokers because they are not exported outside of CloudWatch.
The agent does not configure quotas and ACLs because MSK Serverless does not allow this.
This page describes connecting the Lenses Agent to an AWS MSK cluster.
It is recommended to install the Agent on an EC2 instance or with EKS in the same VPC as your MSK cluster. The Agent can be installed and preconfigured via the AWS Marketplace.
Edit the AWS MSK security group in the AWS Console and add the IP address of your Agent installation.
If you want to have the Agent collect JMX metrics you have to enable Open Monitoring on your MSK cluster. Follow the AWS guide here.
Depending on your MSK cluster, select the endpoint and protocol you want to connect with.
It is not recommended to use Plaintext for secure environments. For these environments use TLS or IAM.
When the Agent is running inside AWS and is connecting to an Amazon’s Managed Kafka (MSK) instance, IAM can be used for authentication.
This page describes an overview of connecting a Lenses Agent with Schema Registries
Consider rate limiting if you have a high number of schemas.
TLS and basic authentication are supported for connections to Schema Registries.
The Agent can collect Schema registry metrics via:
JMX
Jolokia
AVRO
PROTOBUF
JSON and XML formats are supported by Lenses but without a backing schema registry.
To enable the deletion of schemas in the UI, set the following in the lenses.conf file:
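The relevant keys, taken from the configuration reference later in this document:

```properties
# lenses.conf
lenses.schema.registry.delete=true
# optionally, also delete associated schemas when a topic is deleted
lenses.schema.registry.cascade.delete=true
```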
IBM Event Streams supports hard deletes only
This page describes how to connect Lenses to IBM Event Streams.
IBM Event Streams requires a replication factor of 3. Ensure you set the replication factor accordingly for Lenses internal topics.
See .
This page describes connection to AWS Glue.
This page describes adding a Schema Registries to the Lenses Agent.
This page describes connecting Lenses to IBM Event Streams schema registry.
Requires an Enterprise subscription on IBM Event Streams; only hard delete is supported for IBM Event Streams.
To configure an application to use this compatibility API, specify the Schema Registry endpoint in the following format:
Use "token" as the username. Set the password to your API key from IBM Event Streams.
Set the following examples in provisioning.yaml
Connect the Lenses Agent to your alerting and auditing systems.
The Agent can send out alerts and audits events. Once you have configured alert and audit connections, you can create alert and audit channels to route events to them.
See AWS connection.
Add a connection to AWS in the Lenses Agent.
The agent uses an AWS connection in three places:
AWS IAM connection to MSK for Lenses itself
Connecting to AWS Glue
Alert channels to CloudWatch
If the Agent is deployed on an EC2 instance or has access to AWS credentials in the default AWS toolchain, those can be used instead.
This page describes adding a Zookeeper to the Lenses Agent.
Set the following examples in provisioning.yaml
Simple configuration with Zookeeper metrics read via JMX.
With such a configuration, Lenses will use 3 Zookeeper nodes and will try to read their metrics from the following URLs (notice the same port - 9581 - used for all of them, as defined by the metricsPort property):
my-zookeeper-host-0:9581
my-zookeeper-host-1:9581
my-zookeeper-host-2:9581
This page describes adding a Kafka Connect Cluster to the Lenses Agent.
Lenses integrates with Kafka Connect Clusters to manage connectors.
The name of a Kafka Connect Connections may only contain alphanumeric characters ([A-Za-z0-9]) and dashes (-). Valid examples would be dev, Prod1, SQLCluster,Prod-1, SQL-Team-Awesome.
Multiple Kafka Connect clusters are supported.
If you are using Kafka Connect < 2.6, set the following to ensure you can see Connectors:
lenses.features.connectors.topics.via.api.enabled=false
Consider Rate Limiting if you have a high number of connectors.
The URLs (workers) should always have a scheme defined (http:// or https://).
For Basic Authentication, define username and password properties.
A custom truststore is needed when the Kafka Connect workers are served over TLS (encryption-in-transit) and their certificates are not signed by a trusted CA.
A custom truststore might be necessary too (see above).
If you have developed your own Connector or are not using a Lenses connector, you can still display the connector instances in the topology. To do this, Lenses needs to know the configuration option of the Connector that defines which topic the Connector reads from or writes to. This is set in the connectors.info parameter in the lenses.conf file.
This page describes the Kafka ACLs prerequisites for the Lenses Agent if ACLs are enabled on your Kafka clusters.
These ACLs are for the underlying Lenses Agent Kafka client. Lenses has its own set of permissions guarding access.
You can restrict the access of the Lenses Kafka client, but this can reduce the functionality on offer in Lenses, e.g. not allowing Lenses to create topics at all, even though this can be managed by Lenses' own IAM system.
When your Kafka cluster is configured with an authorizer which enforces ACLs, the Agent will need a set of permissions to function correctly.
Common practice is to give the Agent superuser status or the complete list of available operations for all resources. The IAM model of Lenses can then be used to restrict the access level per user.
The Agent needs permission to manage and access their own internal Kafka topics:
__topology
__topology__metrics
It also needs read and describe permissions for the consumer offsets and Kafka Connect topics, if enabled:
__consumer_offsets
connect-configs
connect-offsets
connect-status
This same set of permissions is required for any topic that the agent must have read access to.
DescribeConfigs was added in Kafka 2.0. It may not be needed for versions before 2.2.
Additional permissions are needed to produce to topics or manage them.
Permission to at least read and describe consumer groups is required to take advantage of the Consumer Groups' monitoring capabilities.
Additional permissions are needed to manage groups.
To manage ACLs, permission to the cluster is required:
This page describes configuring the database connection for the Lenses Agent.
Once you have created a role for the agent to use, you can then configure the Agent in the lenses.conf file:
Additional configurations for the PostgreSQL database connection can be passed under the lenses.storage.postgres.properties configuration prefix.
One Postgres server can be used for all agents by using a separate database or schema each.
For the Agent see lenses.storage.postgres.schema or lenses.storage.postgres.database
The supported parameters can be found in the PostgreSQL documentation. For example:
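An illustrative pair of entries using standard PostgreSQL JDBC parameters:

```properties
# lenses.conf
lenses.storage.postgres.properties.ssl=true
lenses.storage.postgres.properties.sslmode=verify-full
```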
The Agent uses the HikariCP library for high-performance database connection pooling.
The default settings should perform well but can be overridden via the lenses.storage.hikaricp configuration prefix. The supported parameters can be found in the HikariCP documentation.
Camelcase configuration keys are not supported in agent configuration and should be translated to dot notation.
For example:
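So HikariCP's maximumPoolSize would be written as (translation assumed from the camelCase rule above):

```properties
# lenses.conf
lenses.storage.hikaricp.maximum.pool.size=10
```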
This page describes the hardware and OS prerequisites for Lenses.
Run on any Linux server (review ulimits) or container technology (docker/kubernetes). For RHEL 6.x and CentOS 6.x use docker.
Linux machines typically have a soft limit of 1024 open file descriptors. Check your current limit with the ulimit command:
Increase the soft limit to 4096 as a super-user with:
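For example:

```bash
# check the current open-file limit
ulimit -n
# raise the soft limit to 4096
ulimit -S -n 4096
```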
This page describes how to configure TLS for the Lenses Agent.
By default, the Agent does not provide TLS termination but can be enabled via a configuration option. TLS termination is recommended for enhanced security and a prerequisite for integrating with SSO (Single Sign On) via SAML2.0.
TLS termination can be configured directly within Agent or by using a TLS proxy or load balancer.
To use a non-default global truststore, set the path accordingly via the LENSES_OPTS variable.
To enable mutual TLS, set your keystore accordingly.
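A sketch using the standard JVM TLS system properties (paths and passwords illustrative):

```bash
# non-default global truststore
export LENSES_OPTS="-Djavax.net.ssl.trustStore=/etc/lenses/truststore.jks -Djavax.net.ssl.trustStorePassword=changeit"
# for mutual TLS, also point at a client keystore
export LENSES_OPTS="$LENSES_OPTS -Djavax.net.ssl.keyStore=/etc/lenses/keystore.jks -Djavax.net.ssl.keyStorePassword=changeit"
```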
This page describes how to configure the agent to deploy and manage SQL Processors for stream processing.
Lenses can be used to define & deploy stream processing applications that read from Kafka and write back to Kafka with SQL. They are based on the Kafka Streams framework and are known as SQL Processors.
SQL processing of real-time data can run in 2 modes:
SQL In-Process - the workload runs inside of the Lenses Agent.
SQL in Kubernetes - the workload runs & scales on your Kubernetes cluster.
The mode in which the SQL Processors will run should be defined in lenses.conf before Lenses is started.
In this mode, SQL processors run as part of the Agent process, sharing resources, memory, and CPU time with the rest of the platform.
This mode of operation is meant to be used for development only.
As such, the agent will not allow the creation of more than 50 SQL Processors in In Process mode, as this could impact the platform's stability and performance negatively.
For production, use the KUBERNETES mode for maximum flexibility and scalability.
Set the execution configuration to IN_PROC
Set the directory to store the internal state of the SQL Processors:
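A sketch of the two settings; lenses.sql.execution.mode is documented in the configuration reference below, while the state-directory key name is an assumption:

```properties
# lenses.conf
lenses.sql.execution.mode=IN_PROC
lenses.sql.state.dir=/tmp/lenses-sql-state
```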
SQL processors use the same connection details that Agent uses to speak to Kafka and Schema Registry. The following properties are mounted, if present, on the file system for each processor:
Kafka
SSLTruststore
SSLKeystore
Schema Registry
SSL Keystore
SSL Truststore
The file structure created by applications is the following: /run/[lenses_installation_id]/applications/
Keep in mind Lenses requires an installation folder with write permissions. The following are tried:
/run
/tmp
Kubernetes can be used to deploy SQL Processors. To configure Kubernetes, set the mode to KUBERNETES and configure the location of the kubeconfig file.
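A sketch of the two entries (the kubeconfig path is illustrative):

```properties
# lenses.conf
lenses.sql.execution.mode=KUBERNETES
lenses.kubernetes.config.file=/home/lenses/.kube/config
```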
When the Agent is deployed inside Kubernetes, the lenses.kubernetes.config.file configuration entry should be set to an empty string. The Kubernetes client will auto-configure from the pod it is deployed in.
The SQL Processor docker image is available on Docker Hub.
Custom serdes should be embedded in a new Lenses SQL processor Docker image.
To build a custom Docker image, create the following directory structure:
Copy your serde jar files under processor-docker/serde.
Create a Dockerfile containing:
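A minimal sketch, copying the serde jars into the image's /plugins directory (replace VERSION with your Lenses release):

```dockerfile
FROM lensesio-extra/sql-processor:VERSION
COPY serde/*.jar /plugins/
```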
Build the Docker image.
Once the image is deployed in your registry, please set Lenses to use it (lenses.conf):
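For example:

```properties
# lenses.conf
lenses.kubernetes.processor.image.name=registry.example.com/my-team/sql-processor
lenses.kubernetes.processor.image.tag=1.0.0
```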
Don't use the LPFP_ prefix.
Internally, Lenses prefixes all its properties with LPFP_. Avoid passing custom environment variables starting with LPFP_ as it may cause the processors to fail.
To deploy Lenses Processors in Kubernetes, the suggested way is to activate RBAC at the Cluster level through the Helm values.yaml:
If you want to limit the permissions Lenses has against your Kubernetes cluster, you can use Role/RoleBinding resources instead.
To achieve this you need to create a Role and a RoleBinding resource in the namespace you want the processors deployed to, as in the sketch below. Example for:
Lenses namespace = lenses-ns
Processor namespace = lenses-proc-ns
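A sketch of the two resources; the rules shown (pods, deployments, configmaps) are illustrative and should be narrowed to what your processors actually need:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: lenses-processors
  namespace: lenses-proc-ns
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "deployments", "configmaps"]
    verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: lenses-processors
  namespace: lenses-proc-ns
subjects:
  - kind: ServiceAccount
    name: lenses        # the service account the Lenses Agent runs as (assumed)
    namespace: lenses-ns
roleRef:
  kind: Role
  name: lenses-processors
  apiGroup: rbac.authorization.k8s.io
```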
You can repeat this for as many namespaces as you want Lenses to have access to.
Finally, you need to define in the Lenses configuration which namespaces Lenses can access. To achieve this, amend values.yaml to contain the following:
Example:
This page describes how to install plugins in the Lenses Agent.
The following implementations can be specified:
Serializers/Deserializers Plug your serializer and deserializer to enable observability over any data format (i.e., protobuf / thrift)
Custom authentication Authenticate users on your proxy and inject permissions HTTP headers.
LDAP lookup Use multiple LDAP servers or your group mapping logic.
SQL UDFs User Defined Functions (UDF) that extend SQL and streaming SQL capabilities.
Once built, the jar files and any plugin dependencies should be added to the Agent and, in the case of Serializers and UDFs, to the SQL Processors if required.
On startup, the Agent loads plugins from the $LENSES_HOME/plugins/ directory and any location set in the environment variable LENSES_PLUGINS_CLASSPATH_OPTS. The Agent watches these locations, and dropping in a new plugin will hot-reload it. For the Agent docker (and Helm chart) use /data/plugins.
Any first-level directories under the paths mentioned above, detected on startup will also be monitored for new files. During startup, the list of monitored locations will be shown in the logs to help confirm the setup.
Whilst all jar files may be added to the same directory (e.g. /data/plugins), it is suggested to use a directory hierarchy to make management and maintenance easier.
An example hierarchy for a set of plugins:
There are two ways to add custom plugins (UDFs and Serializers) to the SQL Processors: (1) by making a tar.gz archive available at an HTTP(S) address, or (2) by creating a custom docker image.
With this method, a tar archive, compressed with gzip, can be created that contains all plugin jars and their dependencies. Then this archive should be uploaded to a web server that the SQL Processors containers can access, and its address should be set with the option lenses.kubernetes.processor.extra.jars.url.
Step by step:
Create a tar.gz file that includes all required jars at its root:
Upload to a web server, e.g. https://example.net/myfiles/FILENAME.tar.gz
Set
For the docker image, set the corresponding environment variable
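A sketch of the whole flow; the environment variable name follows the usual Lenses convention of upper-casing the option and replacing dots with underscores:

```bash
# 1. bundle the plugin jars at the archive root
tar -czf plugins.tar.gz -C /path/to/jars .
# 2. upload plugins.tar.gz to a web server reachable by the processors, then either set
#    lenses.kubernetes.processor.extra.jars.url=https://example.net/myfiles/plugins.tar.gz
#    in lenses.conf, or for the docker image:
export LENSES_KUBERNETES_PROCESSOR_EXTRA_JARS_URL="https://example.net/myfiles/plugins.tar.gz"
```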
The SQL Processors inside Kubernetes use the docker image lensesio-extra/sql-processor. It is possible to build a custom image and add all the required jar files under the /plugins directory, then set lenses.kubernetes.processor.image.name and lenses.kubernetes.processor.image.tag options to point to the custom image.
Step by step:
Create a Docker image using lensesio-extra/sql-processor:VERSION as a base and add all required jar files under /plugins:
Upload the docker image to a registry:
Set
For the docker image, set the corresponding environment variables
This page describes configuring Lenses Agent logging.
Changes to the logback.xml are hot reloaded by the Agent, no need to restart.
All logs are emitted unbuffered as a stream of events to both stdout and to rotating files inside the directory logs/.
The logback.xml file is used to configure logging.
If customization is required, it is recommended to adapt the default configuration rather than write your own from scratch.
The file can be placed in any of the following directories:
the directory where the Agent is started from
/etc/lenses/
agent installation directory.
The first one found, in the above order, is used, but to override this and use a custom location, set the following environment variable:
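A sketch; the logback.configurationFile system property is standard logback, while the exact wrapper variable name is an assumption and may differ per installation:

```bash
export LENSES_LOG4J_OPTS="-Dlogback.configurationFile=file:/etc/lenses/logback.xml"
```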
The default configuration file is set up to hot-reload any changes every 30 seconds.
The default log level is set to INFO (apart from some very verbose classes).
All the log entries are written to the output using the following pattern:
You can adjust this inside logback.xml to match your organization’s defaults.
Inside logs/ you will find three files: lenses.log, lenses-warn.log and metrics.log. The first contains all logs and is the same as the stdout. The second contains only messages at level WARN and above. The third one contains timing metrics and can be useful for debugging.
The default configuration contains two cyclic buffer appenders: "CYCLIC-INFO” and “CYCLIC-METRICS”. These appenders are required to expose the Agent logs within the Admin UI.
Rate limit the calls the Lenses Agent makes to Schema Registries and Connect Clusters.
To rate limit the calls the Agent makes to Schema Registries or Connect Clusters, set the following in the Agent configuration:
The exact values will depend on your setup, for example the number of schemas and how often new schemas are added, so some trial and error is required.
This page describes how to retrieve Lenses Agent JMX metrics.
The JMX endpoint is managed by the lenses.jmx.port option. To disable JMX, leave the option empty.
To enable monitoring of the Agent metrics:
To export via Prometheus exporter:
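A sketch of both steps (the exporter jar and its config path are illustrative):

```bash
# lenses.conf — enable the JMX endpoint:
#   lenses.jmx.port=9015
# export the metrics with the Prometheus JMX exporter java agent on port 9102
export LENSES_OPTS="-javaagent:/opt/jmx_prometheus_javaagent.jar=9102:/opt/jmx_exporter.yaml"
```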
The Agent Docker image (lensesio/lenses) automatically sets up the Prometheus endpoint. You only have to expose the 9102 port to access it.
This is done in two parts. The first part is about setting up the files that the JMX agent will require, and the second is about the options we need to pass to the agent.
First let's create a new folder called jmxremote.
To enable basic auth JMX, first create two files:
jmxremote.access
jmxremote.password
The password file has the credentials that the JMX agent will check during client authentication
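A sketch of jmxremote.password matching the users described below:

```
admin admin
guest admin
```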
The above code registers 2 users:
admin, with password admin
guest, with password admin
The access file has authorization information, like who is allowed to do what.
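A sketch of jmxremote.access matching that description:

```
admin readwrite
guest readonly
```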
In the above code, we can see that the admin user can do read and write operations in JMX, while guest user can only read the JMX content.
Now, to enable JMX with basic auth protection, all we need to do is pass the following options to the JRE that runs the Java process you need to protect. Let's assume this Java process is Kafka.
Change the permissions on both files so only the owner can view and edit them.
If you do not change the permissions to 0600, owned by the user that runs the JRE process, the JMX agent will raise an error complaining that the process is not the owner of the files used for authentication and authorization.
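For example:

```bash
chmod 0600 jmxremote.password jmxremote.access
chown kafka:kafka jmxremote.password jmxremote.access   # the user running the JRE (assumed)
```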
Finally export the following options in the user’s env which will run Kafka.
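A sketch using Kafka's KAFKA_JMX_OPTS hook and the standard JMX system properties (port and paths illustrative):

```bash
export KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=9581 \
  -Dcom.sun.management.jmxremote.authenticate=true \
  -Dcom.sun.management.jmxremote.ssl=false \
  -Dcom.sun.management.jmxremote.password.file=/path/to/jmxremote/jmxremote.password \
  -Dcom.sun.management.jmxremote.access.file=/path/to/jmxremote/jmxremote.access"
```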
First setup JMX with basic auth as shown in the Secure JMX: Basic Auth page.
To enable TLS encryption/authentication in JMX you need a JKS keystore and truststore.
Please note that both the JKS truststore and keystore should have the same password. The reason is that the javax.net.ssl classes use the password you pass for the keystore as the key password.
Let's assume this Java process is Kafka and that you have installed keystore.jks and truststore.jks under /etc/certs.
Export the following options in the user’s env which will run Kafka.
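A sketch, again via KAFKA_JMX_OPTS, using the standard javax.net.ssl properties (remember: one shared password):

```bash
export KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=9581 \
  -Dcom.sun.management.jmxremote.authenticate=true \
  -Dcom.sun.management.jmxremote.ssl=true \
  -Dcom.sun.management.jmxremote.registry.ssl=true \
  -Djavax.net.ssl.keyStore=/etc/certs/keystore.jks \
  -Djavax.net.ssl.keyStorePassword=changeit \
  -Djavax.net.ssl.trustStore=/etc/certs/truststore.jks \
  -Djavax.net.ssl.trustStorePassword=changeit"
```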
This page describes the memory & cpu prerequisites for Lenses.
This documentation provides memory recommendations for Lenses.io, considering the number of Kafka topics, the number of schemas, and the complexity of these schemas (measured by the number of fields). Proper memory allocation ensures optimal performance and stability of Lenses.io in various environments.
Number of Topics: Kafka topics require memory for indexing, metadata, and state management.
Schemas and Their Complexity: The memory impact of schemas is influenced by both the number of schemas and the number of fields within each schema. Each schema field contributes to the creation of Lucene indexes, which affects memory usage.
For a basic setup with minimal topics and schemas:
Minimum Memory: 4 GB
Recommended Memory: 8 GB
This setup assumes:
Fewer than 100 topics
Fewer than 100 schemas
Small schemas with few fields (less than 10 fields per schema)
Memory requirements increase with the number of topics. Topics are used as the primary reference for memory scaling, with additional considerations for schemas.
Schemas have a significant impact on memory usage, particularly as the number of fields within each schema increases. The memory impact is determined by both the number of schemas and the complexity (number of fields) of these schemas.
To help illustrate how to apply these recommendations, here are some example configurations considering both topics and schema complexity:
Topics: 500
Schemas: 100 (average size 50 KB, 8 fields per schema)
Recommended Memory: 8 GB
Schema Complexity: Low → No additional memory needed.
Total Recommended Memory: 8 GB
Topics: 5,000
Schemas: 1,000 (average size 200 KB, 25 fields per schema)
Base Memory: 12 GB
Schema Complexity: Moderate → No additional memory needed.
Total Recommended Memory: 16 GB
Topics: 15,000
Schemas: 3,000 (average size 500 KB, 70 fields per schema)
Base Memory: 32 GB
Schema Complexity: High → Add 3 GB for schema complexity.
Total Recommended Memory: 35 GB
30,000 Topics
Schemas: 5,000 (average size 300 KB, 30 fields per schema)
Base Memory: 64 GB
Schema Complexity: Moderate → Add 5 GB for schema complexity.
Total Recommended Memory: 69 GB
High Throughput: If your Kafka cluster is expected to handle high throughput, consider adding 20-30% more memory than the recommendations.
Complex Queries and Joins: If using Lenses.io for complex data queries and joins, consider increasing the memory allocation by 10-15% to accommodate the additional processing.
Monitoring and Adjustment: Regularly monitor memory usage and adjust based on actual load and performance.
Proper memory allocation is crucial for the performance and reliability of Lenses.io, especially in environments with a large number of topics and complex schemas. While topics provide a solid baseline for memory recommendations, the complexity of schemas—particularly the number of fields—can also significantly impact memory usage. Regular monitoring and adjustments are recommended to ensure that your Lenses.io setup remains performant as your Kafka environment scales.
| Number of Topics / Partitions | Recommended Memory |
|---|---|
| Up to 1,000 / 10,000 partitions | 12 GB |
| 1,001 to 10,000 / 100,000 partitions | 24 GB |
| 10,001 to 30,000 / 300,000 partitions | 64 GB |

| Schema Complexity | Number of Fields per Schema | Memory Addition |
|---|---|---|
| Low to Moderate Complexity | Up to 50 fields | None |
| High Complexity | 51 - 100 fields | 1 GB for every 1,000 schemas |
| Very High Complexity | 100+ fields | 2 GB for every 1,000 schemas |

| Number of Topics | Number of Schemas | Number of Fields per Schema | Base Memory | Additional Memory | Total Recommended Memory |
|---|---|---|---|---|---|
| 1,000 | 1,000 | Up to 10 | 8 GB | None | 12 GB |
| 1,000 | 1,000 | 11 - 50 | 8 GB | None | 12 GB |
| 5,000 | 5,000 | Up to 10 | 12 GB | None | 16 GB |
| 5,000 | 5,000 | 11 - 50 | 12 GB | None | 16 GB |
| 10,000 | 10,000 | Up to 10 | 16 GB | None | 24 GB |
| 10,000 | 10,000 | 51 - 100 | 24 GB | 10 GB | 34 GB |
| 30,000 | 30,000 | Up to 10 | 64 GB | None | 64 GB |
| 30,000 | 30,000 | 51 - 100 | 64 GB | 30 GB | 94 GB |

This page describes connecting Lenses to an Azure HDInsight cluster.
This page describes an overview of Lenses IAM (Identity & Access Management).
Principals (Users & Service accounts) receive their permissions based on their group membership.
Roles hold a set of policies, defining the permissions. Roles are assigned to groups.
Roles provide flexibility in how you want to provide access; you can create a policy that is very open or one that is very granular, for example allowing operators and support engineers certain permissions to restart Connectors but denying actions that would allow them to view data or configuration options.
Roles are defined at the HQ level. This allows you to control access to not only actions at HQ but at lower environment levels, and to assign the same set of permissions across your whole Kafka landscape in a central place.
A role has:
A unique name;
A list of Permission Statements called a Policy.
A policy has:
One or more actions;
One or more resource patterns that the actions apply to;
An effect: allow or deny.
If any effect is deny for a resource the result is always deny; the principle of least privilege applies.
A policy is defined by a YAML specification.
Actions describe a set of actions. Concrete actions can match an Action Pattern. In this text, action and action patterns are used interchangeably.
An action has the format: service:operation
, e.g. iam:DeleteUser
Services describe the system entity that an action applies to. Services are:
environments
kafka
registry
schemas
kafka-connect
sql-streaming
kubernetes
applications
alerts
data-policies
governance
audit
iam
administration
Operation can contain a wildcard. If so, only at the end. See IAM Reference for the available operations per service.
Resources identify which resource, in a service, that the principal is allowed or denied, to perform the operation on.
Resource-type cannot contain a combination of characters with wildcards.
If the service is provided, resource-type can be a wildcard.
The resource ID identifies the resource within the context of a service and a resource type.
A resource-id consists of one or more segments separated by a slash /
. A segment can be a wildcard, or contain a wildcard as a suffix of a string. If a segment is a wildcard, then remaining segments do not need to be provided, and will be assumed to be wildcards as well.
The format is service:resource-type:resource-id, which forms the LRN (Lenses Resource Name).
kafka:topic:my-env/* will be expanded to kafka:topic:my-env/*/*;
kafka:topic:my-env/my-cluster* is invalid because the Topic segment is missing; kafka:topic:my-env/my-cluster*/topic would be valid though;
*:topic:* is invalid, the service is not provided;
kaf*:* and kafka:top* are invalid, service and resource-type cannot contain wildcards;
kafka:*:foo is invalid, if the resource-type is a wildcard then resource-id cannot be set.
A principal (user or service account) can perform an action on a resource if:
In any of the roles it receives via group membership:
There is any matching Permission Statement that has an effect of allow;
And there is not any matching Permission Statement that has an effect of deny.
A Permission Statement matches an action plus resource, if:
The action matches any of the Permission Statement's Action Patterns, AND:
The resource matches any of the Permission Statement's Resource Patterns.
An Action matches an Action Pattern (AP) if:
The AP is a wildcard, OR:
The Action's service equals the AP's and the AP's operation string-matches the Action's operation.
A Resource matches a Resource Pattern (RP) if:
The RP is a wildcard, OR:
The Resource's services equals the RP's and the RP's resource-type is a wildcard, OR:
The Resource's service and types equals that of the RP and resource-ids match. Resource-ids are matched by string-matching each individual segment. If the RP has a trailing wildcard segment, the remaining segments are ignored.
A string s matches a pattern p if:
They equal character by character; if s or p has more non-wildcard characters than the other, they don't match;
If p contains a * suffix, any remaining characters in s are ignored.

| Pattern p | String s | Match |
|---|---|---|
| "lit" | "lit" | true |
| "lit" | "li" | false |
| "lit" | "litt" | false |
| "lit" | "oth" | false |
| "*" | "some" | true |
| "foo*" | "foo" | true |
| "foo*" | "foo-bar" | true |
| "" | "" | true |
| "x" | "" | false |
| "" | "x" | false |
Order of items in any collection is irrelevant during evaluation. Collections are considered sets rather than ordered lists. The following are equivalent:
Order of Resource Patterns does not matter
Order of Permission Statements does not matter
Order of Roles does not matter
Order of Groups does not matter
In the examples we're not too religious about strict JSON formatting.
Broad Allow + Specific Deny
Given:
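A sketch of what such a policy could look like, using the actions/resources/effect fields described earlier (the exact YAML shape is illustrative):

```yaml
- actions: ["kafka:ReadKafkaData"]
  resources: ["kafka:topic:my-env/the-cluster/*"]
  effect: allow
- actions: ["kafka:ReadKafkaData"]
  resources: ["kafka:topic:my-env/the-cluster/forbidden-topic"]
  effect: deny
```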
A principal:
Can ReadKafkaData on kafka:topic:my-env/the-cluster/some-topic because it is allowed and not denied;
Cannot DeleteKafkaTopic on kafka:topic:my-env/the-cluster/some-topic because there is no allow;
Cannot ReadKafkaData on kafka:topic:my-env/the-cluster/forbidden-topic because while it is allowed, the deny kicks in.
Given:
A principal:
Can ReadKafkaData on kafka:topic:someone-else-cluster/their-topic because the resource matches *.
Note that here the matching can be considered "most permissive".
Given:
A principal:
Can ReadKafkaData on kafka:topic:my-cluster/my-topic-1 and kafka:topic:my-cluster/my-topic-2 because the resources match, but cannot ReadKafkaData on kafka:topic:my-cluster/my-topic-3.
This page describes Environments in Lenses.
Environments are virtual containers for your Kafka infrastructure, including the Kafka Cluster, Schema Registries, and Kafka Connect Clusters.
Each Environment has an Agent, the Agent communicates with HQ via an Agent Key generated at the environment creation time.
Environments can be assigned tiers, domains and a description, and grouped accordingly.
Go to Environments in the left-hand side navigation, then select the New Environment button in the top right corner.
Once you have created an environment you will be presented with an Agent Key. Copy this and deploy an Agent for your environment (Kafka Cluster).
Learn how to configure an agent here.
This page describes how to configure JMX metrics for Connections in Lenses.
All core services (Kafka, Schema Registry, Kafka Connect, Zookeeper) use the same set of properties for services’ monitoring.
The Agent will discover all the brokers by itself and will try to fetch metrics using metricsPort, metricsCustomUrlMappings and other properties (if specified).
The same port used for all brokers/workers/nodes. No SSL, no authentication.
Such a configuration means that the Agent will try to connect using JMX with every pair of kafkaBootstrapServers.host:metricsPort, so following the example: my-kafka-host-0:9581.
For Jolokia the Agent supports two types of requests: GET (JOLOKIAG) and POST (JOLOKIAP).
For JOLOKIA each entry value in metricsCustomUrlMappings must contain protocol.
The same port used for all brokers/workers/nodes. No SSL, no authentication.
JOLOKIA monitoring works on top of the HTTP protocol. To fetch metrics the Agent has to perform either a GET or a POST request. There is a way of configuring the HTTP request timeout using the httpRequestTimeout property (ms value). Its default value is 20 seconds.
The default suffix for Jolokia endpoints is /jolokia/, so that is the value that should be provided. Sometimes that suffix can be different, so there is a way of customizing it by using the metricsHttpSuffix field.
Before enabling the collection of metrics within the Agent's provisioning configuration, make sure you have enabled open monitoring with Prometheus in your MSK Provisioned cluster.
AWS has a predefined metrics configuration. The Agent hits the Prometheus endpoint using port 11001 for each broker. There is an option of customizing the AWS metrics connection in Lenses by using the metricsUsername, metricsPassword, httpRequestTimeout, metricsHttpSuffix, metricsCustomUrlMappings and metricsSsl properties, but most likely no one will need to do that - AWS has its own standard and most probably it won't change. Customization can be achieved only by API or CLI - the UI does not support it.
There is also a way to configure custom mapping for each broker (Kafka) / node (Schema Registry, Zookeeper) / worker (Kafka Connect).
Such a configuration means that the Agent will try to connect using JMX for:
my-kafka-host-0:9582 - because of metricsCustomUrlMappings
my-kafka-host-1:9581 - because of metricsPort and no entry in metricsCustomUrlMappings
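A sketch of such a mapping inside a Kafka connection's configuration (layout follows the provisioning examples above):

```yaml
metricsPort:
  value: 9581
metricsCustomUrlMappings:
  value:
    "my-kafka-host-0:9092": "my-kafka-host-0:9582"
```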
LRNs uniquely identify all resources that Lenses understands. Examples are a Lenses User, a Kafka topic or a Kafka-Connect connector.
Use an LRN to specify a resource across all of Lenses, unambiguously:
To add topic permissions for a team in IAM permissions.
To share a consumer-group reference with a colleague.
The top-level format has 3 parts called segments. A colon : separates them:
service is the namespace of the Lenses service that manages a set of resource types, e.g. kafka for things like topics and consumer groups.
resource-type is the type of resources that are served by a service, e.g. topic for a Kafka topic, consumer-group for a Kafka consumer group. They both belong to the kafka service.
resource-id is the unique name or path that identifies a resource. The resource ID is specific to a service and resource type. The resource ID can be:
a single resource name, e.g. lucy.clearview@lenses.io for a user resource name. The full LRN would be iam:user:lucy.clearview@lenses.io.
a nested resource path that contains slashes /, e.g. dev-environment/kafka/my-topic for a kafka topic. The full LRN would be kafka:topic:dev-environment/kafka/my-topic.
IAM user
Kafka topic
Kafka consumer group
Schema Registry schema
Kafka Connect connector
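The first two LRNs below appear in the text above; the consumer-group example extends the same pattern and is illustrative:

```
iam:user:lucy.clearview@lenses.io
kafka:topic:dev-environment/kafka/my-topic
kafka:consumer-group:dev-environment/kafka/my-group
```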
LRNs separate top-level segments with a colon : and resource path segments with a slash /.
A segment may have:
Alphanumeric characters: a-z, A-Z, 0-9
Hyphen symbols only: -
Use the wildcard asterisk * to express catch-all LRNs.
Use these examples to express multiple resources easily.
Avoid these examples because they are ambiguous. Lenses does not allow them.
This page describes Users in Lenses.
Users are assigned to groups. The groups inherit permissions from the roles assigned to the groups.
Users can be manually created in Lenses. Users can either be of type:
SSO, or
Basic Authentication
When creating a User, you can assign them groups membership.
Each user, once logged in can update their Name, Profile Photo and set an email address.
For SSO, your SSO email is still required to login
To create a User go to IAM->Users->New User; once created you can assign the user to a group.
You can also manage Users via the CLI and YAML, for integration in your CI/CD pipelines.
This page lists the available configurations in Lenses Agent.
Set in lenses.conf
Reference documentation of all configuration and authentication options:
System or control topics are created by services for their internal use. Below is the list of built-in configurations to identify them.
_schemas
__consumer_offsets
_kafka_lenses_
lsql_*
lsql-*
__transaction_state
__topology
__topology__metrics
_confluent*
*-KSTREAM-*
*-TableSource-*
*-changelog
__amazon_msk*
Wildcard (*) is used to match any name in the path to capture a list of topics, not just one. When the wildcard is not specified, Lenses matches on the entry name provided.
If the records schema is centralized, the connectivity to Schema Registry nodes is defined by a Lenses Connection.
There are two static config entries to enable/disable the deletion of schemas:
Options for specific deployment targets:
Global options
Kubernetes
Common settings, independent of the underlying deployment target:
Kubernetes connectivity is optional. Minimum supported K8 version 0.11.10. All settings are string.
Optimization settings for SQL queries.
Lenses requires these Kafka topics to be available, otherwise it will try to create them. The topics can be created manually before Lenses is run, or Lenses can be granted the correct Kafka ACLs to create the topics:
To allow for fine-grained control over the replication factor of the three topics, the following settings are available:
When configuring the replication factor for your deployment, it's essential to consider the requirements imposed by your cloud provider. Many cloud providers enforce a minimum replication factor to ensure data durability and high availability. For example, IBM Cloud mandates a minimum replication factor of 3. Therefore, it's crucial to set the replication factor for the Lenses internal topics to at least 3 when deploying Lenses on IBM Cloud.
All time configuration options are in milliseconds.
Control how Lenses identifies your connectors in the Topology view. Catalogue your connector types, set their icons, and control how Lenses extracts the topics used by your connectors.
Lenses comes preconfigured for some of the popular connectors as well as the Stream Reactor connectors. If you see that Lenses doesn't automatically identify your connector type then use the lenses.connectors.info setting to register it with Lenses.
Add a new HOCON object {} for every new Connector in your lenses.connectors.info list:
This configuration allows the connector to work with the topology graph, and also have the RBAC rules applied to it.
To extract the topic information from the connector configuration, source connectors require an extra configuration. The extractor class should be: io.lenses.config.kafka.connect.SimpleTopicsExtractor. Using this extractor requires an extra property configuration. It specifies the field in the connector configuration which determines the topics data is sent to.
Here is an example for the file source:
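A sketch of such an entry; the FileStreamSourceConnector class and its topic property are standard Kafka Connect, while the exact set of HOCON keys should be checked against your release:

```
lenses.connectors.info = [
  {
    class.name = "org.apache.kafka.connect.file.FileStreamSourceConnector"
    name = "File Source"
    instance = "file"
    sink = false
    extractor.class = "io.lenses.config.kafka.connect.SimpleTopicsExtractor"
    property = "topic"
  }
]
```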
An example of a Splunk sink connector and a Debezium SQL server connector
This page describes IAM groups in Lenses.
Groups are a collection of users, service accounts and roles.
Users can be assigned to Groups in two ways:
Manual
Linked from the groups provided by your SSO provider
This behaviour can be toggled in the organizational settings of your profile. To control the default set the following in the config.yaml for HQ.
Groups can be defined with the following metadata:
Colour
Description
Each group has a resource that uniquely identifies it across an HQ installation.
To create a Group go to IAM->Groups->New Group, create the group, assign members, service accounts and roles.
You can also manage Groups via the CLI and YAML, for integration in your CI/CD pipelines.
This page describes Roles in Lenses.
Lenses IAM is built around Roles. Roles contain policies, and each policy defines a set of actions a user is allowed to take.
Roles are then assigned to groups.
The Lenses policies are resource based. They are YAML based documents attached to a resource.
Each policy has:
Action
Resource
Effect
The action describes the action or verb that a user can perform. The format of the action is:
For example to list topics in Kafka
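A sketch (the operation name is assumed, following the naming style of the ReadKafkaData/DeleteKafkaTopic examples earlier):

```
service:Operation
kafka:ListKafkaTopics
```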
For a full list of the actions see .
To allow all actions set '*'
To restrict access to resources, for example to only list topics beginning with red, we can use the resource field.
To allow all resources set '*'
Effect either allows or denies the action on the resource. If allow is not set the action will be denied, and if any policy for a resource has a deny effect it takes precedence.
To create a Role go to IAM->Roles->New Role.
You can also manage Roles via the CLI and YAML, for integration in your CI/CD pipelines.
This page describes the JVM options for the Lenses Agent.
The Agent runs as a JVM app; you can tune runtime configurations via environment variables.
For a full list of the options see .
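A sketch of JVM tuning via environment variables (LENSES_OPTS appears elsewhere in this document; the heap variable name is an assumption):

```bash
export LENSES_HEAP_OPTS="-Xms1g -Xmx4g"
export LENSES_OPTS="-Djavax.net.ssl.trustStore=/etc/lenses/truststore.jks"
```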
lenses.eula.accept: Accept the Lenses EULA. Default: false. Type: boolean. Required: yes.
lenses.ip: Bind HTTP at the given endpoint; use in conjunction with lenses.port. Default: 0.0.0.0. Type: string. Required: no.
lenses.port: The HTTP port to listen on for API, UI and WS calls. Default: 9991. Type: int. Required: no.
lenses.jmx.port: Bind a JMX port to enable monitoring Lenses. Type: int. Required: no.
lenses.root.path: The path from which all the Lenses URLs are served. Type: string. Required: no.
lenses.secret.file: The full path to security.conf for security credentials. Default: security.conf. Type: string. Required: no.
lenses.sql.execution.mode: Streaming SQL mode: IN_PROC (test mode) or KUBERNETES (prod mode). Default: IN_PROC. Type: string. Required: no.
lenses.offset.workers: Number of workers to monitor topic offsets. Default: 5. Type: int. Required: no.
lenses.telemetry.enable: Enable telemetry data collection. Default: true. Type: boolean. Required: no.
lenses.kafka.control.topics: An array of topics to be treated as "system topics". Default: list. Type: array. Required: no.
lenses.grafana: Your Grafana URL, e.g. http://grafanahost:port. Type: string. Required: no.
lenses.api.response.cache.enable: If enabled, disables client caching of the Lenses API HTTP responses by adding these HTTP headers: Cache-Control: no-cache, no-store, must-revalidate, Pragma: no-cache, and Expires: -1. Default: false. Type: boolean. Required: no.
lenses.workspace: Directory to write temp files to. If write access is denied, Lenses falls back to /tmp. Default: /run. Type: string. Required: no.
lenses.access.control.allow.methods: HTTP verbs allowed in cross-origin HTTP requests. Default: GET,POST,PUT,DELETE,OPTIONS.
lenses.access.control.allow.origin: Allowed hosts for cross-origin HTTP requests. Default: *.
lenses.allow.weak.ssl: Allow https:// with self-signed certificates. Default: false.
lenses.ssl.keystore.location: The full path to the keystore file used to enable TLS on the Lenses port.
lenses.ssl.keystore.password: Password for the keystore file.
lenses.ssl.key.password: Password for the SSL certificate used.
lenses.ssl.enabled.protocols: The TLS protocol version to use. Default: TLSv1.2.
lenses.ssl.algorithm: X509 or PKIX algorithm to use for TLS termination. Default: SunX509.
lenses.ssl.cipher.suites: Comma-separated list of ciphers allowed for TLS negotiation.
lenses.security.kerberos.service.principal: The Kerberos principal for Lenses to use, in the SPNEGO form HTTP/lenses.address@REALM.COM.
lenses.security.kerberos.keytab: Path to the Kerberos keytab containing the service principal. It should not be password protected.
lenses.security.kerberos.debug: Enable Java's JAAS debugging information. Default: false.
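For example, a minimal TLS setup combines the keystore keys above; the paths and passwords below are placeholders:

```
lenses.ssl.keystore.location = "/etc/lenses/keystore.jks"   # placeholder path
lenses.ssl.keystore.password = "changeit"                   # placeholder secret
lenses.ssl.key.password = "changeit"                        # placeholder secret
lenses.ssl.enabled.protocols = "TLSv1.2"
```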
lenses.storage.hikaricp.[*]: Pass additional properties to the HikariCP connection pool. Required: no.
lenses.storage.directory: The full path to a directory for Lenses to use for persistence. Default: "./storage". Type: string. Required: no.
lenses.storage.postgres.host: Host of the PostgreSQL server for Lenses to use for persistence. Type: string. Required: no.
lenses.storage.postgres.port: Port of the PostgreSQL server for Lenses to use for persistence. Default: 5432. Type: integer. Required: no.
lenses.storage.postgres.username: Username for the PostgreSQL database user. Type: string. Required: no.
lenses.storage.postgres.password: Password for the PostgreSQL database user. Type: string. Required: no.
lenses.storage.postgres.database: PostgreSQL database name for Lenses to use for persistence. Type: string. Required: no.
lenses.storage.postgres.schema: PostgreSQL schema name for Lenses to use for persistence. Default: "public". Type: string. Required: no.
lenses.storage.postgres.properties.[*]: Pass additional properties to the PostgreSQL JDBC driver. Required: no.
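A minimal PostgreSQL persistence sketch using the keys above; the host, credentials and the JDBC property shown are placeholders:

```
lenses.storage.postgres.host = "postgres.internal"   # placeholder host
lenses.storage.postgres.port = 5432
lenses.storage.postgres.username = "lenses"
lenses.storage.postgres.password = "secret"          # placeholder secret
lenses.storage.postgres.database = "lenses"
lenses.storage.postgres.schema = "public"
lenses.storage.postgres.properties.ssl = true        # illustrative JDBC driver property
```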
lenses.schema.registry.delete: Allow schemas to be deleted. Default: false. Type: boolean.
lenses.schema.registry.cascade.delete: Delete associated schemas when a topic is deleted. Default: false. Type: boolean.
lenses.deployments.events.buffer.size: Buffer size for events coming from deployment targets such as Kubernetes. Default: 10000.
lenses.deployments.errors.buffer.size: Buffer size for errors happening on the communication between Lenses and the deployment targets such as Kubernetes. Default: 1000.
lenses.kubernetes.processor.image.name: The repository URL of the streaming SQL Docker image for Kubernetes. Default: lensesioextra/sql-processor.
lenses.kubernetes.processor.image.tag: The version/tag of the above container. Default: 5.2.
lenses.kubernetes.config.file: The path to the kubectl config file. Default: /home/lenses/.kube/config.
lenses.kubernetes.pull.policy: Pull policy for Kubernetes containers: IfNotPresent or Always. Default: IfNotPresent.
lenses.kubernetes.service.account: The service account for deployments; it will also pull the image. Default: default.
lenses.kubernetes.init.container.image.name: The container repository URL and name of the init container image used to deploy applications to Kubernetes. Default: lensesio/lenses-cli.
lenses.kubernetes.init.container.image.tag: The tag of the init container image used to deploy applications to Kubernetes. Default: 5.2.0.
lenses.kubernetes.watch.reconnect.limit: How many times to reconnect to the Kubernetes watcher before considering the cluster unavailable. Default: 10.
lenses.kubernetes.watch.reconnect.interval: How long to wait between Kubernetes watcher reconnection attempts, in milliseconds. Default: 5000.
lenses.kubernetes.websocket.timeout: How long to wait for a Kubernetes websocket response, in milliseconds. Default: 15000.
lenses.kubernetes.websocket.ping.interval: How often to ping the Kubernetes websocket to check it is alive, in milliseconds. Default: 30000.
lenses.kubernetes.pod.heap: The max amount of memory the underlying Java process will use. Default: 900M.
lenses.kubernetes.pod.min.heap: The initial amount of memory the underlying Java process will allocate. Default: 128M.
lenses.kubernetes.pod.mem.request: How much memory the pod container will request. Default: 128M.
lenses.kubernetes.pod.mem.limit: The pod container memory limit. Default: 1152M.
lenses.kubernetes.pod.cpu.request: How much CPU the pod container will request. Default: null.
lenses.kubernetes.pod.cpu.limit: The pod container CPU limit. Default: null.
lenses.kubernetes.namespaces: Object setting the list of Kubernetes namespaces that Lenses will see for each specified and configured cluster. Default: null.
lenses.kubernetes.pod.liveness.initial.delay: How long Kubernetes waits before checking the processor's health for the first time. It can be expressed like 30 second, 2 minute or 3 hour; note the time unit is singular. Default: 60 second.
lenses.kubernetes.config.reload.interval: Time interval to reload the Kubernetes configuration file, in milliseconds. Default: 30000.
lenses.sql.settings.max.size: Restricts the max bytes that a Kafka SQL query will return. Type: long. Default: 20971520 (20MB).
lenses.sql.settings.max.query.time: Max time (in ms) that a SQL query will run. Type: int. Default: 3600000 (1h).
lenses.sql.settings.max.idle.time: Max time (in ms) for a query once it reaches the end of the topic. Type: int. Default: 5000 (5 sec).
lenses.sql.settings.show.bad.records: Show bad records by default when querying a Kafka topic. Type: boolean. Default: true.
lenses.sql.settings.format.timestamp: Convert AVRO dates to a human-readable format by default. Type: boolean. Default: true.
lenses.sql.settings.live.aggs: Allow aggregation queries on Kafka data by default. Type: boolean. Default: true.
lenses.sql.sample.default: Number of messages to sample when live-tailing a Kafka topic. Type: int. Default: 2 per window.
lenses.sql.sample.window: How frequently to sample messages when tailing a Kafka topic. Type: int. Default: 200 ms.
lenses.sql.websocket.buffer: Buffer size for messages in a SQL query. Type: int. Default: 10000.
lenses.metrics.workers: Number of workers for parallelising SQL queries. Type: int. Default: 16.
lenses.kafka.ws.buffer.size: Buffer size for the WebSocket consumer. Type: int. Default: 10000.
lenses.kafka.ws.max.poll.records: Max number of Kafka messages to return in a single poll(). Type: long. Default: 1000.
lenses.sql.state.dir: Folder to store KStreams state. Type: string. Default: logs/sql-kstream-state.
lenses.sql.udf.packages: The list of allowed Java packages for UDFs/UDAFs. Type: array of strings. Default: ["io.lenses.sql.udf"].
lenses.topics.external.topology: Topic for applications to publish their topology. Partitions: 1. Replication: 3 recommended. Default name: __topology. Compacted: yes. Retention: N/A.
lenses.topics.external.metrics: Topic for external applications to publish their metrics. Partitions: 1. Replication: 3 recommended. Default name: __topology__metrics. Compacted: no. Retention: 1 day.
lenses.topics.metrics: Topic for the SQL Processors to send their metrics. Partitions: 1. Replication: 3 recommended. Default name: _kafka_lenses_metrics. Compacted: no.
lenses.topics.replication.external.topology: Replication factor for the lenses.topics.external.topology topic. Default: 1.
lenses.topics.replication.external.metrics: Replication factor for the lenses.topics.external.metrics topic. Default: 1.
lenses.topics.replication.metrics: Replication factor for the lenses.topics.metrics topic. Default: 1.
lenses.interval.summary: How often to refresh the Kafka topic list and configs. Type: long. Default: 10000.
lenses.interval.consumers.refresh.ms: How often to refresh Kafka consumer group info. Type: long. Default: 10000.
lenses.interval.consumers.timeout.ms: How long to wait for Kafka consumer group info to be retrieved. Type: long. Default: 300000.
lenses.interval.partitions.messages: How often to refresh Kafka partition info. Type: long. Default: 10000.
lenses.interval.type.detection: How often to check Kafka topic payload info. Type: long. Default: 30000.
lenses.interval.user.session.ms: How long a client session stays alive if inactive (4 hours). Type: long. Default: 14400000.
lenses.interval.user.session.refresh: How often to check for idle client sessions. Type: long. Default: 60000.
lenses.interval.topology.topics.metrics: How often to refresh topology info. Type: long. Default: 30000.
lenses.interval.schema.registry.healthcheck: How often to check the schema registries' health. Type: long. Default: 30000.
lenses.interval.schema.registry.refresh.ms: How often to refresh schema registry data. Type: long. Default: 30000.
lenses.interval.metrics.refresh.zk: How often to refresh ZooKeeper metrics. Type: long. Default: 5000.
lenses.interval.metrics.refresh.sr: How often to refresh Schema Registry metrics. Type: long. Default: 5000.
lenses.interval.metrics.refresh.broker: How often to refresh Kafka broker metrics. Type: long. Default: 5000.
lenses.interval.metrics.refresh.connect: How often to refresh Kafka Connect metrics. Type: long. Default: 30000.
lenses.interval.metrics.refresh.brokers.in.zk: How often to refresh the Kafka broker list from ZooKeeper. Type: long. Default: 5000.
lenses.interval.topology.timeout.ms: Time period after which a metric is considered stale. Type: long. Default: 120000.
lenses.interval.audit.data.cleanup: How often to clean up dataset view entries from the audit log. Type: long. Default: 300000.
lenses.audit.to.log.file: Path to a file to write audits to, in JSON format. Type: string.
lenses.interval.jmxcache.refresh.ms: How often to refresh the JMX cache used in the Explore page. Type: long. Default: 180000.
lenses.interval.jmxcache.graceperiod.ms: How long to pause when a JMX connectivity error occurs. Type: long. Default: 300000.
lenses.interval.jmxcache.timeout.ms: How long to wait for a JMX response. Type: long. Default: 500.
lenses.interval.sql.udf: How often to look for new UDFs/UDAFs (user-defined [aggregate] functions). Type: long. Default: 10000.
lenses.kafka.consumers.batch.size: How many consumer groups to retrieve in a single request. Type: int. Default: 500.
lenses.kafka.ws.heartbeat.ms: How often to send heartbeat messages over the TCP connection. Type: long. Default: 30000.
lenses.kafka.ws.poll.ms: Max time for Kafka consumer data polling on the WS APIs. Type: long. Default: 10000.
lenses.kubernetes.config.reload.interval: Time interval to reload the Kubernetes configuration file. Type: long. Default: 30000.
lenses.kubernetes.watch.reconnect.limit: How many times to reconnect to the Kubernetes watcher before considering the cluster unavailable. Type: long. Default: 10.
lenses.kubernetes.watch.reconnect.interval: How long to wait between Kubernetes watcher reconnection attempts. Type: long. Default: 5000.
lenses.kubernetes.websocket.timeout: How long to wait for a Kubernetes websocket response. Type: long. Default: 15000.
lenses.kubernetes.websocket.ping.interval: How often to ping the Kubernetes websocket to check it is alive. Type: long. Default: 30000.
lenses.akka.request.timeout.ms: Max time for a response in an Akka actor. Type: long. Default: 10000.
lenses.sql.monitor.frequency: How often to emit healthcheck and performance metrics for Streaming SQL. Type: long. Default: 10000.
lenses.audit.data.access: Record dataset access as audit log entries. Type: boolean. Default: true.
lenses.audit.data.max.records: How many dataset view entries to retain in the audit log; set to -1 to retain indefinitely. Type: int. Default: 500000.
lenses.explore.lucene.max.clause.count: Override Lucene's maximum number of clauses permitted per BooleanQuery. Type: int. Default: 1024.
lenses.explore.queue.size: Optional setting to bound the Lenses internal queue used by the catalog subsystem. It needs to be a positive integer or it will be ignored. Type: int. Default: N/A.
lenses.interval.kafka.connect.http.timeout.ms: How long to wait for a Kafka Connect response to be retrieved. Type: int. Default: 10000.
lenses.interval.kafka.connect.healthcheck: How often to check Kafka Connect health. Type: int. Default: 15000.
lenses.interval.schema.registry.http.timeout.ms: How long to wait for a Schema Registry response to be retrieved. Type: int. Default: 10000.
lenses.interval.zookeeper.healthcheck: How often to check ZooKeeper health. Type: int. Default: 15000.
lenses.ui.topics.row.limit: The number of Kafka records to load automatically when exploring a topic. Type: int. Default: 200.
lenses.deployments.connect.failure.alert.check.interval: Time interval in seconds to check whether the connector failure grace period has completed. Used by the Connect auto-restart failed connectors functionality. It needs to be a value in (1,600]. Type: int. Default: 10.
lenses.provisioning.path: Folder on the filesystem containing the provisioning data. See the provisioning docs for further details. Type: string.
lenses.provisioning.interval: Time interval in seconds to check for changes on the provisioning resources. Type: int.
lenses.schema.registry.client.http.retryOnTooManyRequest: When enabled, Lenses retries a request whenever the schema registry returns 429 Too Many Requests. Type: boolean.
lenses.schema.registry.client.http.maxRetryAwait: Max amount of time to wait whenever 429 Too Many Requests is returned. Type: duration.
lenses.schema.registry.client.http.maxRetryCount: Max retry count whenever 429 Too Many Requests is returned. Type: integer. Default: 2.
lenses.schema.registry.client.http.rate.type: Specifies whether HTTP requests to the configured schema registry should be rate limited. Values: "session" or "unlimited".
lenses.schema.registry.client.http.rate.maxRequests: When the rate limiter is "session", determines the max number of requests allowed per window. Type: integer. Default: N/A.
lenses.schema.registry.client.http.rate.window: When the rate limiter is "session", determines the duration of the window used. Type: duration. Default: N/A.
lenses.schema.connect.client.http.retryOnTooManyRequest: Retry a request whenever a Connect cluster returns 429 Too Many Requests. Type: boolean.
lenses.schema.connect.client.http.maxRetryAwait: Max amount of time to wait whenever 429 Too Many Requests is returned. Type: duration.
lenses.schema.connect.client.http.maxRetryCount: Max retry count whenever 429 Too Many Requests is returned. Type: integer. Default: 2.
lenses.connect.client.http.rate.type: Specifies whether HTTP requests to the configured Connect cluster should be rate limited. Values: "session" or "unlimited".
lenses.connect.client.http.rate.maxRequests: When the rate limiter is "session", determines the max number of requests allowed per window. Type: integer. Default: N/A.
lenses.connect.client.http.rate.window: When the rate limiter is "session", determines the duration of the window used. Type: duration. Default: N/A.
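A minimal sketch combining the retry and rate-limit keys above; the values are illustrative only:

```
lenses.schema.registry.client.http.retryOnTooManyRequest = true
lenses.schema.registry.client.http.maxRetryCount = 2
lenses.schema.registry.client.http.rate.type = "session"
lenses.schema.registry.client.http.rate.maxRequests = 50    # illustrative value
lenses.schema.registry.client.http.rate.window = "1 second" # illustrative value
```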
apps.external.http.state.refresh.ms: When registering a runner for an external app, a health-check interval can be specified; if it is not, this default interval is used (value in milliseconds). Default: 30000. Type: int. Required: no.
apps.external.http.state.cache.expiration.ms: The last known state of the runner is stored in a cache whose entries are invalidated after the time defined by this key (value in milliseconds). This value should not be lower than the apps.external.http.state.refresh.ms value. Default: 60000. Type: int. Required: no.
The following wildcard patterns are valid:
* (example: *): Global wildcard. Captures all the resources that Lenses manages ("Everything").
service:* (example: kafka:*): Service-specific wildcard. Captures all the resources for a service ("All Kafka resources in all environments, i.e. topics, consumer groups, acls and quotas").
service:resource-type:* (example: kafka:topic:*): Resource-type-specific wildcard. Captures all the resources of a given type within a service ("All Kafka topics in all environments").
service:resource-type:parent/*/grandchild (example: kafka-connect:connector:dev-environment/*/my-s3-sink): Path segment wildcard. Captures a part of the resource path ("All connectors named 'my-s3-sink' in all Connect clusters under the environment 'dev-environment'").
service:resource-type:resourcePa* (example: kafka:topic:dev-environment/kafka/red-*): Trailing wildcard. Placed at the end of an LRN, it acts as a 'globstar' (**) and matches the rest of the string, capturing the resources that start with the given path prefix ("All Kafka topics in the environment 'dev-environment' whose name starts with 'red-'").
service:resource-type:paren*/chil*/grandchil* (example: kafka-connect:connector:dev*/sinks*/s3*): Path suffix wildcard. Captures resources where different path segments start with certain prefixes ("All connectors in all environments that start with 'dev', within any Connect cluster that starts with 'sinks' and where the connector name starts with 's3'").
The following patterns are invalid:
servic*:resource-type:resource-id (examples: kafk*:*:dev-environment/ or *:topic:dev-environment/): No wildcards are allowed at the service level; a service must be its full string. Use the global wildcard * instead.
service:resource-typ*:resource-id (example: kafka:topi*:dev-environment/*): No wildcards are allowed at the resource-type level; a resource type must be its full string. Use the service-specific wildcard service:* instead; no resource-id segments are allowed in that case.
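To tie these patterns back to policies, the following sketch reuses two of the valid examples above; the action wildcards are placeholders:

```yaml
# Hypothetical statements; see the IAM Reference for the real action names.
- action: "*"
  resource: "kafka:topic:dev-environment/kafka/red-*"                # trailing wildcard
  effect: "allow"
- action: "*"
  resource: "kafka-connect:connector:dev-environment/*/my-s3-sink"   # path segment wildcard
  effect: "deny"
```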
LENSES_OPTS: For generic settings, such as the global truststore. Note that the docker image uses this to plug in a Prometheus Java agent for monitoring Lenses.
LENSES_HEAP_OPTS: JVM heap options. The default settings are -Xmx3g -Xms512m, which set the heap size between 512MB and 3GB. The upper limit is set to 1.2GB on the Box development docker image.
LENSES_JMX_OPTS: Tune the JMX options for the JVM, e.g. to allow remote access.
LENSES_LOG4J_OPTS: Override the Agent logging configuration. Should only be used to set the logback configuration file, using the format -Dlogback.configurationFile=file:/path/to/logback.xml.
LENSES_PERFORMANCE_OPTS: JVM performance tuning. The default settings are -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=
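As an illustration, these variables can be exported before starting the Agent; the values below are arbitrary:

```
# Illustrative values only
export LENSES_HEAP_OPTS="-Xmx4g -Xms1g"
export LENSES_LOG4J_OPTS="-Dlogback.configurationFile=file:/etc/lenses/logback.xml"
```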
IAM Reference: Reference for Lenses IAM.
Examples: Examples of IAM policies.
Hardware & OS: Learn about the hardware & OS requirements for Linux archive installs.
JVM Options: Understand how to customize the Lenses JVM settings.
Logs: Understand and customize Lenses logging.
JMX: Learn how to connect the Agent to JMX for Kafka, Schema Registries, Kafka Connect and others.
Returns all users
/v1/users
Creates a new user.
/v1/users
Allows attaching custom string key/values to resources. The following maxima apply:
Sets the unique name of the new user. It must be a valid HQ resource name: it can only contain lowercase alphanumeric characters or hyphens; hyphens cannot appear at the end or start; the length is 63 characters at most.
Sets the display name of the new user. If not provided, the value of "name" will be used.
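Putting the two fields together, a creation request can be sketched as follows; the HTTP method and the exact field spellings are assumptions, and the OpenAPI spec remains authoritative:

```
POST /v1/users

{"name": "mary-jane", "display_name": "Mary Jane"}
```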
Returns a specific user
/v1/users/{name}
Updates a user.
/v1/users/{name}
Patches metadata. It has the following semantics:
Updates the display name of the user.
Deletes a user.
/v1/users/{name}
No body
Allows updating fields of the user profile.
/v1/users/{name}/profile
Contains the user's full name, e.g. Mary Jane Doe.
Contains the user's email address, e.g. mary.jane@doe.net. Note that this is not necessarily the same as the user's name, which often looks like an email address but is not one per se.
Assigns the given user exactly to the provided groups, ensuring they are not part of any other groups.
/v1/users/{name}/groups
The name of the user.
Adds the user or service account to the groups (specified by their names).
Removes the user or service account from the groups (specified by their names). If a group is specified both in add_to_groups and in here, removal wins.
Sets the user or service account memberships to exactly those groups (specified by their names), if provided. Cannot be combined with add_to_groups or remove_from_groups.
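For instance, a request body using the fields above might look like the following sketch; the field names are taken from the descriptions, while the HTTP method is an assumption:

```
PUT /v1/users/mary-jane/groups

{"add_to_groups": ["developers"], "remove_from_groups": ["guests"]}
```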
Returns the currently authenticated user
/v1/users/me
Starts a session given a username/password and puts it into a cookie.
/v1/login
No body
Deletes all sessions associated with the current user.
/v1/users/me/sessions
No body
Returns the backend's settings information.
/v1/settings
Returns the backend's meta information.
/v1/meta-info
Returns HQ's licence summary.
/v1/licence-summary
Returns HQ's licence.
/v1/licence
Returns all roles.
/v1/roles
Creates a new role.
/v1/roles
Allows attaching custom string key/values to resources. The following maxima apply:
Sets the unique name of the new role. It must be a valid HQ resource name: it can only contain lowercase alphanumeric characters or hyphens; hyphens cannot appear at the end or start; the length is 63 characters at most.
Sets the display name of the new role. If not provided, the value of "name" will be used.
Contains a list of permission statements.
Returns a specific role.
/v1/roles/{name}
Updates a role.
/v1/roles/{name}
Patches metadata. It has the following semantics:
Updates the display name of the role.
Sets, if specified, the new permission statements.
Deletes a role.
/v1/roles/{name}
No body
Lists all environments
/v1/environments
Creates a new environment.
/v1/environments
Enumerates tiers: development, staging, production.
Allows attaching custom string key/values to resources. The following maxima apply:
Sets the name of the new environment. It must be a valid HQ resource name: it can only contain lowercase alphanumeric characters or hyphens; hyphens cannot appear at the end or start; the length is 63 characters at most.
Sets the display name of the new environment. If not provided, the value of "name" will be used.
Retrieves a single environment by name.
/v1/environments/{name}
Updates an environment.
/v1/environments/{name}
Enumerates tiers: development, staging, production.
Patches metadata. It has the following semantics:
Updates the display name of the environment.
Deletes an environment.
/v1/environments/{name}
No body
Provides Server-Sent Events (SSE) for environment updates. TODO.
/v1/environments/live/sse
Proxies HTTP to a Lenses instance. Note: this is not a regular HTTP API endpoint. The path specified here is a prefix. Everything beneath it gets proxied to the corresponding Lenses instance. Any request body and method (the GET here is only a placeholder) are accepted, as long as the Lenses API accepts it. The connection can even be upgraded to a websocket. The status code and response body are controlled by the Lenses API. This concept does not fit into the OpenAPI world at all; this definition is only here for the sake of documentation to avoid having an undocumented dark matter API.
/v1/environments/{name}/proxy/
Retrieves a list of datasets
/v1/environments/{name}/proxy/api/v1/datasets
The page number to be returned; must be greater than zero. Default: 1.
The number of elements on a single page; must be greater than zero. Default: 25.
A search keyword to match dataset, fields and description against.
name
A list of connection names to filter by. All connections will be included when no value is supplied.
A list of tag names to filter by. All tags will be included when no value is supplied.
The field to sort results by: name, records, connectionName, sourceType, isSystemEntity, recordsPerSecond, keyType, valueType, replication, consumers, partitions, retentionBytes, retentionMs, sizeBytes, replicas, shard, version, format, compatibility, backupRestoreState.
Sorting order: asc or desc. Defaults to ascending.
A flag to also include system entities in the search (e.g. Kafka's __consumer_offsets topic).
Whether to search only by table name, or also to include field names/documentation (defaults to true)
Schema format. Relevant only when sourceType is SchemaRegistrySubject
Filter based on whether the dataset has records
Filter based on compacted. Relevant only when sourceType is Kafka
Get a single dataset by connection/name. While information mastered externally might be a few seconds out of sync with its respective sources (e.g. JMX metadata, Elasticsearch index status, etc.), information mastered in the Lenses db is guaranteed to be up to date (e.g. tags, descriptions).
/v1/environments/{name}/proxy/api/v1/datasets/{connection}//{dataset}
Example values: connection kafka, dataset customer-positions.
Retrieves a list of dataset tags
/v1/environments/{name}/proxy/api/v1/datasets/tags
Get tags sorted by dataset count
user
Returns the intellisense result for a given query
/v1/environments/{name}/proxy/api/v1/sql/presentation
Assigns the given service account exactly to the provided groups, ensuring they are not part of any other groups.
/v1/service-accounts/{name}/groups
The name of the service account.
Adds the user or service account to the groups (specified by their names).
Removes the user or service account from the groups (specified by their names). If a group is specified both in add_to_groups and in here, removal wins.
Sets the user or service account memberships to exactly those groups (specified by their names), if provided. Cannot be combined with add_to_groups or remove_from_groups.
Renews the service account's token. The current token is invalidated and a new one is generated. An optional expiration timestamp can be provided.
/v1/service-accounts/{name}/renew-token
Determines the moment of token expiration. If not specified, the token will never expire.
Deletes a ServiceAccount.
/v1/service-accounts/{name}
No body
Updates a service account.
/v1/service-accounts/{name}
Patches metadata. It has the following semantics:
Updates the display name of the service account.
Updates the description of a service account.
Returns a specific ServiceAccount.
/v1/service-accounts/{name}
Creates a new ServiceAccount.
/v1/service-accounts
Allows attaching custom string key/values to resources. The following maxima apply:
Sets the unique name of the new service account. It must be a valid HQ resource name: it can only contain lowercase alphanumeric characters or hyphens; hyphens cannot appear at the end or start; the length is 63 characters at most.
Sets the display name of the new service account. If not provided, the value of "name" will be used.
Sets the description of the new service account.
Determines the moment of token expiration. If not specified, the token will never expire.
Returns all ServiceAccounts.
/v1/service-accounts
Lists all groups
/v1/groups
Creates a new Group.
/v1/groups
Allows attaching custom string key/values to resources. The following maxima apply:
Sets the unique name of the new group. It must be a valid HQ resource name: it can only contain lowercase alphanumeric characters or hyphens; hyphens cannot appear at the end or start; the length is 63 characters at most.
Sets the display name of the new group. If not provided, the value of "name" will be used.
Sets the description of the new group.
Lists principal names (users, service accounts) to be members of this group.
Sets the Roles that are bound to this Group by name.
Gets a group by its name.
/v1/groups/{name}
Updates a group.
/v1/groups/{name}
Patches metadata. It has the following semantics:
Updates the display name of the group.
Updates the Group description, if a value is provided.
Sets the Roles that are bound to this Group to the Roles (specified by their names), if provided.
Adds the users/principals (specified by their names) to this group, if provided.
Removes the users/principals (specified by their names) from this group, if provided. If members are specified both in add_members and in here, removal wins.
Sets the members of this group to those users/principals (specified by their names) in an absolute fashion, if provided. Cannot be combined with the add_members or remove_members fields.
Deletes a group.
/v1/groups/{name}
No body