This page details the release notes of Lenses.
Lenses 6.0 introduces a new service, called HQ, acting as a portal for multi-Kafka environments.
New HQ service
IAM (Identity & Access Management). This has moved from each Lenses instance to a global location in the new HQ service
Global SQL Studio
Global Data Catalogue
Community License: You can now use Lenses without a license or expiry (a community license key is still pre-built and bundled in docker-compose), but the following restrictions apply:
No SSO
Maximum of two environments (Kafka clusters) can be connected
Two users, one of which is an admin user
Two Service Accounts
Two Groups
Two Roles
No Backup / Restore for topics to S3
The H2 embedded database is no longer supported.
The Lenses 5.x permission model is replaced by global IAM. You must recreate the roles and groups in HQ.
Connection management in the agent is via file provisioning only.
This quick start guide will walk you through installing and starting Lenses using Docker, followed by connecting Lenses to your Kafka cluster.
This is a local set up only. To connect to your Kafka clusters, see here.
By running the following command, which includes the ACCEPT_EULA setting, you are accepting the Lenses EULA agreement.
Run the following command:
Once the images are pulled and the containers started, you can log in here with admin/admin and explore.
It may take a few seconds for the agent to fully boot and connect to HQ.
The quick start uses a docker compose file to:
We have made a new alpha release (16):
Agent image:
HQ image:
New Helm chart version 16 for the agent and for HQ:
In previous versions, SAML / SSO was a mandatory requirement for authentication. However, with the new release, it becomes optional, allowing you to choose between password-based authentication and SAML / SSO according to your needs.
Existing alpha users will have to introduce the lensesHq.saml.enabled property into their values.yaml files.
In this release, the ingress configuration has been enhanced to provide more flexibility.
Previously, the HQ chart supported a single ingress setting, but now you can define separate ingress configurations for HTTP and the agent.
This addition allows you to tailor ingress rules more specifically to your deployment needs, with dedicated rules for handling HTTP traffic and TCP-based agent connections.
The http ingress is intended only for HTTP/S traffic, while the agents ingress is designed specifically for the TCP protocol. Ensure appropriate ingress configuration for your use case.
In the following example you will notice how ingress configuration has been broken into:
http - which covers main ingress for HQ and where users will be accessing HQ portal
agent - a new, additional ingress which allows you to add your own ingress with a custom implementation, whether it is Traefik-based or any other.
By default both http and agent ingresses are disabled.
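As a sketch, the split ingress configuration in values.yaml might look like the following. Hostnames and ingress class names are placeholders, and the exact field names should be checked against the chart's own values.yaml:

```yaml
ingress:
  http:
    enabled: true
    host: hq.example.com          # placeholder hostname for the HQ portal
    ingressClassName: nginx       # assumption: any HTTP-capable controller
  agents:
    enabled: true
    host: hq-agents.example.com   # placeholder hostname for agent traffic
    ingressClassName: traefik     # assumption: a TCP-capable controller
```

Remember that the agents ingress must be backed by a controller capable of routing raw TCP, not just HTTP.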
Due to changes in the provisioning structure, the database to which the agent is connected must be recreated.
In provisioning, there has been a slight adjustment in the connection naming with HQ.
Changes:
grpcServer has been renamed to lensesHq
apiKey has been renamed to agentKey
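To make the renames concrete, an HQ connection block in provisioning.yaml would change roughly as follows. The surrounding structure is illustrative; only the two renamed keys are taken from the changes above:

```yaml
# Before (old naming):
# grpcServer:
#   apiKey: ${LENSES_HQ_AGENT_KEY}

# After (new naming):
lensesHq:
  agentKey: ${LENSESHQ_AGENT_KEY}
```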
With the new version of the Agent, the HQ connection in provisioning has changed, which requires a complete recreation of the database. The following log message will indicate it:
This page gives an overview of deploying Lenses against your Kafka clusters.
The quick start is for local development, with a local Kafka. This guide takes you through manually deploying HQ and an Agent to connect to your Kafka clusters.
For more detailed guides on Helm, Docker and Linux see .
To deploy Lenses against your environments you need to:
To start HQ and an Agent you have to accept the .
For HQ, in the config.yaml set:
Any version of Apache Kafka (2.0 or newer) on-premise and on-cloud. Supported providers include:
Confluent Platform & Cloud
AWS MSK & AWS MSK Serverless
Aiven
IBM Event Streams
Azure HDInsight & EventHubs
Any version of Confluent Schema Registry (5.5.0 or newer), APICurio (2.0 or newer) and AWS Glue.
Only needed if you want to bring your own Postgres. The docker compose will start a local Postgres instance.
HQ and Agents can share the same instance, by either using a separate database or schema for HQ and each agent, depending on your networking needs.
Postgres server running version 9.6 or higher.
The recommended configuration is to create a dedicated login role and database for HQ and each Agent, setting the HQ or Agent role as the database or schema owner. Both the agent and HQ need credentials; create a role for each.
JMX connectivity - Connectivity to JMX is optional (not required) but recommended for additional/enhanced monitoring of the Kafka Brokers and Connect Workers. Secure JMX connections are also supported, as well as JOLOKIA and Open Metrics (MSK).
These ACLs are for the underlying Lenses Agent Kafka client. Lenses has its own set of permissions guarding access.
The agent requires access to your Kafka cluster. If ACLs are enabled, you will need to allow the Agent access.
Welcome to Lenses, Autonomy in data streaming.
This documentation is for Lenses 6 (preview). For Lenses 5.10 (stable) see .
Lenses has two components:
HQ is a central portal where end users interact with different environments (clusters). It provides a central place to explore data across many environments.
HQ is a single binary, installed on premise or in your cloud. From HQ you create environments, and for each environment, you deploy an agent that connects back to HQ.
Lenses defines each Kafka Cluster and supporting services, such as Schema Registries and Kafka Connect Clusters, as an environment.
You can have many environments, on premise, in the cloud, provided HQ has network access to the agent and the agent can connect to your Kafka cluster or any Kafka API compatible service.
Each environment has an agent. Environments can also be assigned extra metadata such as tiers, domains and descriptions.
There's a 1 to 1 relationship between environments, agents and Kafka clusters.
To explore and operate in an environment you need an agent. Agents are headless applications, deployed with connectivity to your Kafka cluster and supporting services.
Agents only ever communicate with HQ, using an Agent Key over a secure channel. You cannot, as a user, interact directly with them. End users are unaware of agents, only environments.
Agents require:
Agent Key to establish a communication channel to HQ
Connectivity to a Kafka cluster and credentials to do so.
The agent acts as a proxy to read from, or write to, your Kafka cluster, execute queries, monitor for alerts and manage SQL Processors and Kafka Connectors.
Web sockets - You may need to adjust your load balancer to allow them. See .
For more on enabling JMX for the Agent itself see .
You can restrict the access of the Lenses Kafka client, but this can reduce the functionality on offer in Lenses, e.g. not allowing Lenses to create topics at all, even though this can be managed by .
If you want to use SSO / SAML for authentication you will need the metadata.xml file from your provider. See for more information.
In the past, HQ used the TOML file format. As we want to reduce differences in file formats between the Agent and HQ as much as possible, this was the first step.
The Postgres connection URI is no longer built within config.yaml but at backend runtime;
The parameter group has changed from postgres to storage.postgres.*;
In the previous version, the schema was defined as part of extraParamSpecs. In the new version, the schema is defined as a separate property, storage.postgres.database.schema;
The property extraParamSpecs is renamed to params;
The parameter group api has been renamed to http, and the following parameters are no longer part of it:
administrators;
saml;
The property auth is derived from the property api (now http). The parameters that have been moved from http to auth are the following:
administrators;
saml;
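Putting the renames together, a minimal config.yaml under the new layout might look like the sketch below. All values are placeholders and the exact key set may differ; the sketch only illustrates the storage.postgres group, the params rename, and the api → http / auth split described above:

```yaml
storage:
  postgres:
    host: postgres.example.com    # placeholder
    port: 5432
    username: hq
    database:
      name: hq
      schema: public              # previously part of extraParamSpecs
    params:                       # previously named extraParamSpecs
      sslmode: require
http:                             # previously the api group
  address: ":8080"
auth:                             # derived from the old api group
  administrators:                 # moved from http to auth
    - admin@example.com
```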
HQ has been tested against Aurora (Postgres) and is compatible.
If there are any changes in the ConfigMap, the HQ pod will be restarted automatically after executing helm upgrade, so no manual intervention is needed.
Previously, the environment variable known as LENSES_HQ_AGENT_KEY, which is referenced in provisioning.yaml and stores the agentKey value, has been renamed to LENSESHQ_AGENT_KEY.
Since the newest versions of Lenses HQ and the Agent bring breaking changes, the following issues can happen.
Upon doing helm upgrade, HQ can fail with the following error log:
To fix it, the following command has to be run on the Postgres database:
If the SQL command cannot be run, the database has to be cleared as if starting from scratch.
This page describes configuring and starting Lenses HQ and Agent against your Kafka cluster.
This guide uses the Lenses docker compose file. For non-dev installations and automation, see the Installation section.
HQ is configured via one file, config.yaml. The docker compose file loads the content of hq.config.yaml and mounts it as the HQ config.yaml file.
You only need to follow this step if you do not want to use the local postgres instance started by the docker compose file.
You must create a database and role in your postgres instance for HQ to use. See Database Role.
Edit the docker-compose.yaml and set the credentials for your database in the hq.config.yaml section.
Currently HQ supports:
Basic Authentication (default)
SAML
For this example we will use basic authentication, for information on configuring other methods, see Authentication and configure the hq.config.yaml key accordingly for SAML.
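A minimal sketch of the basic-authentication part of hq.config.yaml is shown below. The email, username, and bcrypt hash are illustrative values, not defaults:

```yaml
auth:
  administrators:
    - admin@example.com             # users granted admin rights
  users:
    - username: admin
      # bcrypt hash of the password; generate your own,
      # e.g. with: htpasswd -nbB admin <password>
      password: "$2a$10$examplehashexamplehashexampleha"
```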
To start HQ, run the following docker command:
You can now log in via your browser with admin/admin.
To create an environment in HQ:
Log in to HQ and create an environment: Environments->New Environment.
At the end of the process, you will be shown an Agent Key. Copy that, keep it safe!
The environment will be disconnected until the Agent is up and configured with the key.
You can also manage environments using the CLI.
The Agent is configured via two files:
lenses.conf - holds low-level configuration options for the agent and the database connection. You can set this via the agent.lenses.conf key in the docker-compose file.
provisioning.yaml - holds the connection details to your Kafka cluster and supporting systems. You can set this via the agent.provisioning.yaml key in the docker-compose file.
You only need to follow this step if you do not want to use the local postgres instance started by the docker compose file.
You must create a database and role in your postgres instance for the Agent to use. See Database Role.
Update the docker-compose file agent.lenses.conf key for your Postgres instance.
The Agent Key for an environment needs to be added to the agent.provisioning.yaml key in the docker compose file.
Replace ${{LENSESHQ_AGENT_KEY}} with the Agent Key for the environment that you want to link to.
For more information on the configuration of the connection to HQ see here.
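For illustration, the HQ connection block in agent.provisioning.yaml might be sketched as below. The field layout is an assumption based on the naming in this release (lensesHq, agentKey); consult the provisioning reference for the exact schema:

```yaml
lensesHq:
  - name: lenses-hq
    version: 1
    tags: []
    configuration:
      server:
        value: lenses-hq              # hostname of HQ, placeholder
      port:
        value: 10000                  # placeholder agent-facing port
      agentKey:
        value: ${LENSESHQ_AGENT_KEY}  # the Agent Key copied from HQ
```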
By default, the agent is configured to connect to Kafka on localhost. To change this update the agent.provisioning.yaml key. The information required here depends on how you want the Agent to authenticate against Kafka.
See provisioning for examples of different authentication types for Kafka.
Add the following for a basic plaintext connection to a Kafka broker, if you are using a different authentication mechanism adjust accordingly.
Remove, or adjust the Kafka (kafka-demo), Schema Registry and Connect services in the default docker-compose file.
Replace [YOUR_BOOTSTRAP_BROKER:PORT] with the bootstrap brokers and ports for the Kafka cluster you want the Agent to connect to.
For examples of adding in other services such as Schema Registries and Kafka Connect see provisioning.
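As a sketch, a plaintext Kafka connection in provisioning.yaml might look like the following. Treat the field names as assumptions and refer to the provisioning examples for the exact schema; the bootstrap placeholder matches the one used above:

```yaml
kafka:
  - name: kafka
    version: 1
    tags: []
    configuration:
      kafkaBootstrapServers:
        value:
          - PLAINTEXT://[YOUR_BOOTSTRAP_BROKER:PORT]
      protocol:
        value: PLAINTEXT   # adjust for SASL/SSL mechanisms as needed
```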
To start Agent, run the following docker command:
For non-dev environments, install the agent as close as possible to your Kafka clusters and automate the installation.
Once the agent fully starts, it will report as connected in HQ, allowing you to explore your Kafka environments.
This page describes installing Lenses HQ in Kubernetes via Helm.
Lenses HQ is a prerequisite for the installation of the Lenses Agent.
Kubernetes 1.23+
Helm 3.8.0+
Running Postgres instance:
database for HQ;
username (and password) that has access to HQ database;
Optional External secret operator (in case of ExternalSecret usage)
To configure HQ properly, we have to understand the parameter groups that the chart offers.
Under the lensesHq parameter, there are some key parameter groups used to set up HQ:
definition of connection towards database (Postgres is the only storage option)
Password based authentication configuration
SAML / SSO configuration
definition of administrators or first users to access the HQ
defines port under which HQ will be available for end users
defines values of special headers and cookies
types of connection such as TLS and non-TLS definitions
defines connection between HQ and the Agent such as port where HQ will be listening for agent connections.
types of connection such as TLS and non-TLS definitions
license
controls the metrics settings where Prometheus alike metrics will be exposed
definition of logging level for HQ
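In values.yaml terms, the groups above map roughly onto a skeleton like the following. The keys shown are the groups discussed here, not an exhaustive schema:

```yaml
lensesHq:
  storage:
    postgres: {}      # database connection (Postgres is the only option)
  auth: {}            # password users, SAML / SSO, administrators
  http: {}            # user-facing endpoint: port, headers/cookies, TLS
  agents: {}          # agent-facing endpoint: listening port, TLS
  license: {}         # license key
  metrics: {}         # Prometheus-style metrics settings
  logger: {}          # logging level
```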
Moving forward, in the same order you can start configuring your Helm chart.
Postgres is the only available storage option.
Prerequisite:
Running Postgres instance;
Created database for HQ;
Username (and password) which has access to created database;
In order to successfully run HQ, storage within values.yaml has to be defined first.
The definition of the storage object is as follows:
Alongside the Postgres password, which can be referenced / created through the Helm chart, there are a few more options which can help while setting up HQ.
There are two ways the username can be defined:
The most straightforward way, if the username does not change, is defining it directly with the username parameter, such as
If the Postgres username is rotated or frequently changed, it can be referenced from a pre-created secret.
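The two styles might be sketched like this. The secret name and the usernameSecret key layout are assumptions; verify them against the chart's values.yaml:

```yaml
# Option 1: static username
storage:
  postgres:
    username: hq

# Option 2: username referenced from a pre-created secret
storage:
  postgres:
    usernameSecret:                 # assumption: exact key name may differ
      name: hq-postgres-creds       # placeholder secret name
      key: username
```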
Postgres password can be handled in three ways using:
External Secret via ExternalSecretOperator;
Pre-created secret;
Creating secret on the spot through values.yaml;
To use this option, the External Secret Operator (ESO) has to be installed and available for use in the K8s cluster where you are deploying HQ.
When specifying passwordSecret.type: "externalSecret", the chart will:
create an ExternalSecret in the namespace where HQ is deployed;
mount the resulting secret for HQ to use.
Make sure that the secret you are going to use is already created in the namespace where HQ will be installed.
This option is NOT for PRODUCTION usage but rather just for demo / testing.
The chart will create a secret with defined values below and the same secret will be read by HQ in order to connect to Postgres.
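A demo-only sketch of the inline option is below. The exact keys under passwordSecret are assumptions; never commit real passwords to values.yaml:

```yaml
storage:
  postgres:
    passwordSecret:
      type: createNew     # assumption: exact type value may differ
      password: changeme  # demo value only; the chart creates the secret
```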
Sometimes special parameters are needed to form the correct connection URI. To do so, you can set extra settings using params.
Example:
SAML / SSO is available only with Enterprise license.
The second prerequisite to successfully run HQ is setting up initial authentication.
You can choose between:
password-based authentication, which requires users to provide a username and password;
and SAML/SSO (Single Sign-On) authentication, which allows users to authenticate through an external identity provider for a seamless and secure login experience.
Definition of auth object is as follows:
The first property to cover is users. The users property is defined as an array, where each entry includes a username and a password. Passwords are hashed using bcrypt, ensuring that they are stored securely.
The second is administrators: a list of user emails which will have the highest level of permissions upon authentication to HQ.
The third attribute is the saml.metadata field, needed for setting up SAML / SSO authentication. For this step you will need a metadata.xml file, which can be set in two ways:
Referencing the metadata.xml file through a pre-created secret;
Placing the metadata.xml contents inline as a string.
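The two ways of supplying metadata.xml might be sketched as below. The field names (fromSecret, inline) are assumptions based on the description above, not confirmed chart keys:

```yaml
# Option 1: reference a pre-created secret holding metadata.xml
auth:
  saml:
    metadata:
      fromSecret:
        name: hq-saml-metadata   # placeholder secret name
        key: metadata.xml

# Option 2: inline the metadata.xml contents as a string
auth:
  saml:
    metadata:
      inline: |
        <EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata">
          ...
        </EntityDescriptor>
```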
The third prerequisite to successfully run HQ is the http definition. As previously mentioned, this parameter defines everything around the HTTP endpoint of HQ itself and how users will interact with it.
Definition of HTTP object is as follows:
The second part of the HTTP definition is enabling TLS and the TLS definition itself. The same way of configuring TLS as for lensesHq.agents.tls can be used for the lensesHq.http.tls definition as well.
After correctly configuring the authentication strategy and connection endpoint, agent handling is the last important box to tick.
The Agent's object is defined as follows:
By default, TLS for the communication between the Agent and HQ is disabled. If you need to enable it, the following has to be set:
lensesHq.agents.tls - certificates to manage the connection between HQ and the Agents
lensesHq.http.tls - certificates to manage the connection with HQ's API
Unlike private keys, which can be referenced and obtained only through a secret, certificates can be referenced directly in the values.yaml file as a string or as a secret.
Whilst the chart supports setting TLS on Lenses HQ itself, we recommend placing it on the Ingress resource.
Ingress and service resources are optionally supported.
The http ingress is intended only for HTTP/S traffic, while the agents ingress is designed specifically for the TCP protocol. Ensure appropriate ingress configuration for your use case.
Enable an Ingress resource in the values.yaml:
Enable a service resource in the values.yaml:
Lenses HQ, by default, uses the default Kubernetes service account, but you can choose to use a specific one.
If you define the following:
the chart will create a new service account in the defined namespace for HQ to use.
There are two options you can choose between:
rbacEnable: true - will enable the creation of a ClusterRole and ClusterRoleBinding for the service account mentioned in the snippet above
rbacEnable: true and namespaceScope: true - will enable the creation of a Role and RoleBinding, which is more restrictive.
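Expressed in values.yaml, the two options look roughly like this (shown side by side for comparison):

```yaml
# Cluster-wide permissions: ClusterRole + ClusterRoleBinding
rbacEnable: true

# Namespace-scoped (more restrictive): Role + RoleBinding
rbacEnable: true
namespaceScope: true
```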
There are different logging modes and levels that can be adjusted.
First, add the Helm Chart repository using the Helm command line:
Be aware that, for the time being and for alpha purposes, usage of --version is mandatory when deploying the Helm chart through the Helm repository.
Connect Lenses to your environment.
This page describes the supported installation methods for Lenses.
Lenses can be deployed in the following ways:
This page describes installing Lenses HQ and Agent in Kubernetes via Helm.
Only Helm 3 is supported.
This page describes deploying a Lenses Agent via Docker.
The Agent docker image can be configured via environment variables or via volume mounts for the configuration files.
Please check "File configuration" before running the command.
For the command above to work, the provisioning YAML has to be configured.
There are two mandatory connections:
HQ, which requires an Agent Key (LENSESHQ_AGENT_KEY) - this key is created once a user registers a "New environment" in HQ;
Kafka connection
Environment variables prefixed with LENSES_ are transformed into corresponding configuration options. The environment variable name is converted to lowercase and underscores (_) are replaced with dots (.). For example, to set the option lenses.port, use the environment variable LENSES_PORT.
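The mapping rule above can be sketched in shell. The variable name used here is illustrative; the point is the lowercase-and-dots transformation:

```shell
# Derive the config option name from a LENSES_ environment variable name
env_var="LENSES_SCHEMA_REGISTRY_URLS"   # hypothetical variable
option=$(echo "$env_var" | tr '[:upper:]' '[:lower:]' | tr '_' '.')
echo "$option"   # → lenses.schema.registry.urls
```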
Alternatively, the lenses.conf can be mounted directly as
/mnt/settings/lenses.conf
The Docker image exposes four volumes in total, where cache, logs, plugins, and persistent data are stored:
/data/storage
/data/plugins
/data/logs
/data/kafka-streams-state
Resides under /data/storage and is used to store persistent data, such as Data Policies. For this data to survive between Docker runs and/or Agent upgrades, the volume must be managed externally (persistent volume).
Resides under /data/plugins; it's where classes that extend the Agent may be added, such as custom Serdes, LDAP filters, UDFs for the Lenses SQL table engine, and custom_http implementations.
Resides under /data/logs, logs are stored here. The application also logs to stdout, so the log files aren’t needed for most cases.
Resides under /data/kafka-streams-state, used when Lenses SQL is in IN_PROC configuration. In such a case, Lenses uses this scratch directory to cache Lenses SQL internal state. Whilst this directory can safely be removed, it can be beneficial to keep it around, so the Processors won’t have to rebuild their state during a restart.
By default, the agent serves connections over plaintext (HTTP). It is possible to use TLS instead. The Docker image offers the ability to provide the content of extra files via secrets mounted as files or as environment variables. Especially for SSL, the Docker image supports SSL/TLS keys and certificates in Java Keystore (JKS) format.
This capability is optional, and users can mount such files under custom paths and configure lenses.conf manually via environment variables, or lenses.append.conf.
There are two ways to use the File/Variable names of the table below.
Create a file with the appropriate filename as listed below and mount it under /mnt/settings, /mnt/secrets, or /run/secrets
Set them as environment variables.
All settings except for passwords can optionally be encoded in base64. The docker image will detect such encoding automatically.
The docker image does not require running as root. The default user is set to root for convenience and to verify upon start-up that all the directories and files have the correct permissions. The user drops to nobody and group nogroup (65534:65534) before starting the Agent.
If the image is started without root privileges, the agent will start successfully using the effective uid:gid applied. Ensure any volumes mounted (i.e., for the license, settings, and data) have the correct permission set.
This page describes the install of Lenses HQ via an archive on Linux.
To install the HQ from the archive you must:
Extract the archive
Configure the HQ
Start the HQ
Extract the archive using the following command
Inside the extracted archive, you will find:
In order to properly configure HQ, two core components are necessary:
To set up authentication, there are multiple methods available.
You can choose between:
password-based authentication, which requires users to provide a username and password;
and SAML/SSO (Single Sign-On) authentication, which allows users to authenticate through an external identity provider for a seamless and secure login experience.
Both password based and SAML / SSO authentication methods can be used alongside each other.
The first property to cover is users. The users property is defined as an array, where each entry includes a username and a password. Passwords are hashed using bcrypt, ensuring that they are stored securely.
The second is administrators: a list of user emails which will have the highest level of permissions upon authentication to HQ.
Another part which has to be set in order to successfully run HQ is the http definition. As previously mentioned, this parameter defines everything around the HTTP endpoint of HQ itself and how users will interact with it.
Definition of HTTP object is as follows:
More about setting up TLS can be read .
If you have followed all the outlined steps, your config.yaml file should mirror the example provided below, fully configured and ready for deployment. This ensures your system is set up correctly, with all the necessary settings for authentication, database connection, and other configurations defined.
Start Lenses by running:
or pass the location of the config file:
If you do not pass the location of the config file, the HQ will look for it inside the current (runtime) directory. If it does not exist, it will try its installation directory.
To stop HQ, press CTRL+C.
If your server uses systemd as a service manager, you can use it to manage HQ (start upon system boot, stop, restart). Below is a simple unit file that starts HQ automatically on system boot.
This page describes deploying Lenses HQ via docker.
The HQ docker image can be configured via volume mounts for the configuration file.
The HQ looks for the config.yaml in the current working directory. This is the root directory for Docker.
The main prerequisites that have to be fulfilled before the Lenses HQ container can be started are:
For demo purposes and testing the product, you can use our community license.
The main configuration file that has to be configured before running the docker command is config.yaml.
A sample configuration file follows:
This page describes the authentication methods supported in Lenses.
Authentication is configured in HQ.
Users can authenticate in two ways: basic authentication and SSO / SAML. Additionally, specific users can be assigned as admin accounts.
This page describes installing Lenses with Docker Image.
This page describes installing Lenses Agent in Kubernetes via Helm.
Kubernetes 1.23+
Helm 3.8.0+
Running Postgres instance
(in case of ExternalSecret usage)
First, add the Helm Chart repository using the Helm command line:
Installing using cloned repository:
Installing using Helm repository:
Be aware that, for the time being and for alpha purposes, usage of --version is mandatory when deploying the Helm chart through the Helm repository.
To automatically provision the connections to Kafka, HQ and other systems set the .Values.lenses.provision.connections to be the YAML definition of your connections.
The chart will render the full YAML specified under this setting as the provisioning.yaml file.
Alternatively, you can use a second YAML file which contains only the connections, and pass it at the command line when installing:
The chart uses:
Secrets to store Postgres credentials and authentication credentials
Secrets to store connection credentials such as Kafka SASL_SCRAM password or password for SSL JKS stores.
Secrets to hold the base64 encoded values of the JKS stores
Secrets to store AGENT KEY for connection to Lenses HQ
ConfigMap for Lenses configuration overrides
Cluster roles and role bindings (optional).
Secrets and config maps are mounted as files under the mount /mnt:
settings - holds the lenses.conf
provision-secrets - holds the secrets for connections in the provisioning.yaml file
provision-secrets/files - holds any file needed for a connection, e.g. JKS files.
Connecting to Lenses HQ is a straightforward process which requires two steps:
Storing that same key in Vault or as a K8s secret
The agent communicates with HQ via a secure custom binary protocol channel. To establish this channel and authenticate, the Agent needs an Agent Key.
Once the Agent Key has been copied, store it inside Vault or any other tool that has integration with Kubernetes secrets.
There are three options for how the agent key can be used:
ExternalSecret via External Secret Operator (ESO)
Pre-created secret
Inline string
To use this option, the External Secret Operator (ESO) has to be installed and available for use in the K8s cluster where you are deploying the Agent.
When specifying secret.type: "externalSecret", the chart will:
create an ExternalSecret in the namespace where the Agent is deployed;
mount the resulting secret for the Agent to use.
Make sure that the secret you are going to use is already created in the namespace where the Agent will be installed.
This option is NOT for PRODUCTION usage but rather just for demo / testing.
The chart will create a secret with the values defined below, and the same secret will be read by the Agent to connect to HQ.
This secret will be fed into the provisioning.yaml. The HQ connection is specified at line 30 below, where the reference ${LENSESHQ_AGENT_KEY} is set:
The Helm chart creates cluster roles and bindings that are used by SQL Processors if the deployment mode is set to KUBERNETES. They are used so that Lenses can deploy and monitor SQL Processor deployments in namespaces.
To disable the creation of Kubernetes RBAC set: rbacEnabled: false
If you are not using SQL Processors and want to limit the permissions given to the Agent's ServiceAccount, there are two options you can choose from:
rbacEnable: true - will enable the creation of a ClusterRole and ClusterRoleBinding for the service account mentioned above;
rbacEnable: true and namespaceScope: true - will enable the creation of a Role and RoleBinding, which is more restrictive.
To use Postgres as the backing store for the Agent set the details in the lenses.storage.postgres object.
The chart relies on secrets for sensitive information such as Passwords. Secrets can rotate and are commonly stored in an external store such as Azure KeyVault, Hashicorp Vault or AWS Secrets Manager.
For Postgres, add additional ENV variables via the lenses.additionalEnv object to point to your secret and set the username and password to external in the Postgres section.
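A sketch of this pattern is below. The env-var names follow the document's LENSES_ naming rule (lowercase with dots becomes uppercase with underscores), but treat them and the secret name as assumptions:

```yaml
lenses:
  storage:
    postgres:
      username: external          # resolved from the environment at runtime
      password: external
  additionalEnv:
    # Names derived from the LENSES_ env-var rule; verify against the chart docs
    - name: LENSES_STORAGE_POSTGRES_USERNAME
      valueFrom:
        secretKeyRef:
          name: agent-postgres-creds   # placeholder secret
          key: username
    - name: LENSES_STORAGE_POSTGRES_PASSWORD
      valueFrom:
        secretKeyRef:
          name: agent-postgres-creds
          key: password
```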
Enable a service resource in the values.yaml:
To control the resources used by the Agent:
If LENSES_HEAP_OPTS is not set explicitly, it will be set implicitly.
Examples:
If no requests or limits are defined, LENSES_HEAP_OPTS will be set to -Xms1G -Xmx3G
If requests and limits are defined above the default values, LENSES_HEAP_OPTS will be set by the formula -Xms[-Xmx / 2] -Xmx[limits.memory - 2]
If .Values.lenses.jvm.heapOpts is set, it will override everything
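To make the rules concrete, here is an illustrative resources block and the heap settings it would imply under the formula above (the numbers are worked from the formula, not chart defaults):

```yaml
lenses:
  jvm:
    # heapOpts: "-Xms2G -Xmx6G"   # if set, this overrides everything below
  resources:
    limits:
      memory: 6Gi    # -Xmx = 6 - 2 = 4G, and -Xms = 4 / 2 = 2G
    requests:
      memory: 4Gi
# With no requests or limits at all, the default is -Xms1G -Xmx3G
```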
To enable SQL processors in KUBERNETES mode and control the defaults:
To control the namespace Lenses can deploy processors, use the sql.namespaces value.
To achieve this, you need to create a Role and a RoleBinding resource in the namespace you want the processors deployed to.
For example:
Lenses namespace = lenses-ns
Processor namespace = lenses-proc-ns
Finally you need to define in the Agent configuration which namespaces the Agent has access to. Amend values.yaml to contain the following:
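Using the example namespaces above, the values.yaml amendment might be sketched as follows (the nesting under lenses.sql is an assumption based on the sql.namespaces value mentioned earlier):

```yaml
lenses:
  sql:
    namespaces:
      - lenses-proc-ns   # namespace where the Agent may deploy processors
```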
Prometheus metrics are automatically exposed on port 9102 under /metrics.
The main configurable options for lenses.conf are available in the values.yaml under the lenses object. These include:
Authentication
Database connections
SQL processor configurations
To apply other static configurations use lenses.append.conf, for example:
This page describes the install of the Lenses Agent via an archive on Linux.
To install the Agent from the archive you must:
Extract the archive
Configure the Agent
Start the Agent
Extract the archive using the following command
Inside the extracted archive, you will find:
Once the agent files are configured, you can continue to start the agent.
The configuration files are the same for docker and Linux, for docker we are simply mounting the files into the container.
To be able to view and drill into your Kafka environment, you need to connect the agent to HQ. You need to create an environment in HQ and copy the Agent Key into the provisioning.yaml.
Agent key reference
The agent key within provisioning.yaml can be referenced as:
an environment variable, as shown in the example above
an inline string
Start Lenses by running:
or pass the location of the config file:
Provisioning file path
If you configured provisioning.yaml, make sure to set the following property:
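A sketch of what this looks like; the key name below is an assumption about the Agent's configuration and should be checked against your version:

```properties
# lenses.conf – point the Agent at the directory containing provisioning.yaml
# (key name assumed)
lenses.provisioning.path=/path/to/provisioning-dir
```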
If you do not pass the location of lenses.conf, the Agent will look for it inside the current (runtime) directory. If it does not exist, it will try its installation directory.
To stop the Agent, press CTRL+C.
Set the permissions of the lenses.conf to be readable only by the lenses user.
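For example, the permissions can be restricted like this; the commands are demonstrated on a temporary stand-in file rather than a real lenses.conf:

```shell
f=$(mktemp)        # stand-in for the real lenses.conf
chmod 0600 "$f"    # owner read/write only
stat -c '%a' "$f"  # prints 600 (GNU stat)
rm -f "$f"
```

On the real file you would also `chown lenses:lenses` it so the lenses user owns it.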
The agent needs write access in 4-5 places in total:
[RUNTIME DIRECTORY]
When the Agent runs, it will create at least one directory under the directory it is run in:
[RUNTIME DIRECTORY]/logs
Where logs are stored
[RUNTIME DIRECTORY]/logs/sql-kstream-state
Where SQL processors (when in In Process mode) store state. To change the location for the processors’ state directory, use the lenses.sql.state.dir option.
[RUNTIME DIRECTORY]/storage
Where the H2 embedded database is stored when PostgreSQL is not set. To change this directory, use the lenses.storage.directory option.
/run
(Global directory for temporary data at runtime)
Used for temporary files. If Lenses does not have permission to use it, it will fall back to /tmp.
/tmp
(Global temporary directory)
Used for temporary files (if access to /run fails), and JNI shared libraries.
Back up this location for disaster recovery.
The Agent and Kafka use two common Java libraries that take advantage of JNI and are extracted to /tmp.
You must either:
Mount /tmp without noexec
or set org.xerial.snappy.tempdir and java.io.tmpdir to a different location
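The second option can be set via JVM system properties; /opt/lenses/tmp is an assumed path, and any writable directory on an exec-mounted filesystem will do:

```shell
# directory path is an assumption; it must be writable by the lenses user
# and mounted without noexec
export LENSES_OPTS="-Dorg.xerial.snappy.tempdir=/opt/lenses/tmp -Djava.io.tmpdir=/opt/lenses/tmp"
echo "$LENSES_OPTS"
```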
If your server uses systemd as a service manager, you can use it to manage the Agent (start upon system boot, stop, restart). Below is a simple unit file that starts the Agent automatically on system boot.
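A minimal sketch of such a unit file; the install path, script name, and user are assumptions to adapt to your layout:

```ini
# /etc/systemd/system/lenses-agent.service (illustrative; paths assumed)
[Unit]
Description=Lenses Agent
After=network.target

[Service]
Type=simple
User=lenses
WorkingDirectory=/opt/lenses-agent
ExecStart=/opt/lenses-agent/bin/lenses lenses.conf
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now lenses-agent.service`.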
The Agent uses the default trust store (cacerts) of the system’s JRE (Java Runtime) installation. The trust store is used to verify remote servers on TLS connections, such as Kafka Brokers with an SSL protocol, JMX over TLS, and more. Whilst for some types of connections (e.g. Kafka Brokers) a separate keystore can be provided at the connection’s configuration, for some other connections (JMX over TLS) we always rely on the system trust store.
It is possible to set up a global custom trust store via the LENSES_OPTS environment variable:
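For example, using the standard JSSE system properties; the path and password below are placeholders for your own trust store:

```shell
# trust store path and password are placeholders
export LENSES_OPTS="-Djavax.net.ssl.trustStore=/path/to/truststore.jks -Djavax.net.ssl.trustStorePassword=changeit"
echo "$LENSES_OPTS"
```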
Run on any Linux server (review ulimits) or container technology (Docker/Kubernetes). For RHEL 6.x and CentOS 6.x, use Docker.
Linux machines typically have a soft limit of 1024 open file descriptors. Check your current limit with the ulimit command:
As a super-user, increase the soft limit to 4096 with:
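The two commands above can be sketched as follows; the new limit only applies to the current shell session unless made permanent in limits.conf:

```shell
ulimit -S -n       # check the current soft limit for open files
ulimit -S -n 4096  # raise it for this shell session
ulimit -S -n       # now reports 4096 (provided the hard limit allows it)
```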
Use 8GB RAM, 4 CPUs, and 20GB of disk space.
This page describes how to configure Lenses.
This page describes installing Lenses via a Linux archive.
This page describes how to configure admin accounts in Lenses.
You can configure a list of the principals (users, service accounts) that have root admin access. Access control allows any API operation performed by such principals. If not set, it will default to [].
Admin accounts are set in the config.yaml for HQ under the key, as an array of usernames.
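A sketch of such a configuration; the exact key path (auth.administrators) and the usernames are assumptions, since the key name is not shown here:

```yaml
# config.yaml (HQ) – key path assumed
auth:
  administrators:
    - admin
    - ops-service-account
```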
This page describes configuring basic authentication in Lenses.
Basic authentication is set in the config.yaml for HQ under the key, as an array of usernames and passwords.
To enhance security, it's essential that passwords in the config.yaml file are stored in bcrypt format.
This ensures that the passwords are hashed and secure rather than stored in plaintext. For instance, instead of using "builder" directly, it should be hashed using bcrypt.
An example of a bcrypt-hashed password looks like this: $2a$12$XQW..XQrtZXCvbQWertqQeFi/1KoQW4eNephNXTfHqtoW9Q4qih5G.
Always ensure that you replace plaintext passwords with their bcrypt counterparts to securely authenticate users.
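As an illustration, a user entry in config.yaml might look like the sketch below; the field names (auth.users, username, password) are assumptions, and the hash is the example value from above:

```yaml
# config.yaml (HQ) – field names assumed
auth:
  users:
    - username: builder
      # bcrypt hash of the real password – never store plaintext here
      password: "$2a$12$XQW..XQrtZXCvbQWertqQeFi/1KoQW4eNephNXTfHqtoW9Q4qih5G"
```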
File / Variable Name | Description |
---|---|
Once HQ starts, it will be listening on the
You can find more about the configuration options on the page.
For more information on provisioning see .
If you are curious about the Provisioning API specs, which can help you understand connection configs, you can find them on the page .
Create an environment and obtain the Agent Key in HQ as described , if you have not already done so.
If you want to limit the permissions the Agent has against your Kubernetes cluster, you can use Role/RoleBinding resources instead. Follow in order to enable it.
You can also find examples in the .
To configure the Agent's connection to Postgres and its provisioning file, see here in the .
In the following you can find provisioning examples for the most common Kafka flavours.
FILECONTENT_JVM_SSL_TRUSTSTORE | The SSL/TLS trust store to use as the global JVM trust store. Add to LENSES_OPTS the property javax.net.ssl.trustStore |
FILECONTENT_JVM_SSL_TRUSTSTORE_PASSWORD | The trust store password. If set, the startup script will automatically add to LENSES_OPTS the property javax.net.ssl.trustStorePassword (base64 not supported) |
FILECONTENT_LENSES_SSL_KEYSTORE | The SSL/TLS keystore to use for the TLS listener for the Agent |
This page gives an overview of SSO & SAML for authentication with Lenses.
Control of how users are created with SSO is determined by the SSO User Creation Mode. There are two modes:
Manual
SSO
With manual mode, only users that were pre-created in HQ can log in.
With sso mode, users that do not already exist are created and logged in.
Control of how a user's group membership should be handled in relation to SSO is determined by the SSO Group Membership Mode. There are two modes:
Manual
SSO
With the manual mode, the information about the group membership returned from an Identity Provider will not be used and a user will only be a member of groups that were explicitly assigned to them in HQ.
With the sso mode, group information from the Identity Provider (IdP) will be used. On login, a user's group membership is set to the groups listed in the IdP.
Groups that do not exist in HQ are ignored.
SAML configuration is defined in the config.yaml provided to HQ. For more information on the configuration options see here.
The following SSO / SAML providers are supported.
This page describes configuring Okta SSO for Lenses authentication.
Lenses is available directly in Okta’s Application catalog.
SAML configuration is set in HQ's config.yaml file. See here for more details.
This page describes configuring SSO & SAML in Lenses for authentication.
This page describes configuring Keycloak SSO for Lenses authentication.
SAML configuration is set in HQ's config.yaml file. See here for more details.
This page describes configuring Google SSO for Lenses authentication.
Google doesn't expose the groups, or organization unit, of a user to a SAML app. This means we must set up a custom attribute for the Lenses groups that each user belongs to.
Open the Google Admin console from an administrator account.
Click the Users button
Select the More dropdown and choose Manage custom attributes
Click the Add custom attribute button
Fill the form to add a Text, Multi-value field for Lenses Groups, then click Add
Learn more about Google custom attributes
The attribute values should correspond exactly with the names of groups created within Lenses.
Open the Google Admin console from an administrator account.
Click the Users button
Select the user to update
Click User information
Click the Lenses Groups attribute
Enter one or more groups and click Save
Learn more about Google custom SAML apps
Open the Google Admin console from an administrator account.
Click the Apps button
Click the SAML apps button
Select the Add App dropdown and choose Add custom SAML app
Run through the below steps
Enter a descriptive name for the Lenses installation
Upload a Lenses icon
This will appear in the Google apps menu once the app is enabled
Given the base URL of the Lenses installation, e.g. https://lenses-dev.example.com, fill out the settings:
Setting | Value |
---|---|
Add a mapping from the custom attribute for Lenses groups to the app attribute groups
From the newly added app details screen, select User access
Turn on the service
Lenses will reject any user that doesn't have the groups attribute set, so enabling the app for all users in the account is a good option to simplify ongoing administration.
Download the Federation Metadata XML file with the Google IdP details.
SAML configuration is set in HQ's config.yaml file. See here for more details.
This page describes an overview of the Lenses Agent configuration.
The Agent configuration is driven by two files:
lenses.conf
provisioning.yaml
lenses.conf holds all the database connections and low level options for the agent.
provisioning.yaml holds your Kafka cluster and the supporting services that the Agent is to connect to. In addition, it defines the connection to HQ. The provisioning.yaml is watched by the Agent, so any valid changes made to it are applied. See for more information. Without provisioning, your Agent cannot connect to HQ.
This page describes configuring OneLogin SSO for Lenses authentication.
This page describes the Lenses HQ configuration.
HQ's configuration is defined in the config.yaml file
To accept the Lenses EULA, set the following in the lenses.conf file:
Without accepting the EULA the Agent will not start! See .
It has the following top level groups:
Name | Required | Default | Type | Description |
---|
Configures authentication and authorisation.
It has the following fields:
Name | Required | Default | Type | Description |
---|
Lists the names of the principals (users, service accounts) that have root access. Access control allows any API operation performed by such principals. Optional. If not set, it will default to [].
Configures everything involving the HTTP.
It has the following fields:
Sets the address the HTTP server listens at.
Example value: 127.0.0.1:80.
Sets the value of the "Access-Control-Allow-Origin" header. This is only relevant when serving the backend from a different origin than the UI. Optional. If not set, it will default to ["*"].
Sets the value of the "Access-Control-Allow-Credentials" header. This is only relevant when serving the backend from a different origin than the UI. Optional. If not set, it will default to false.
Sets the "Secure" attribute on authentication session cookies. When set, a browser does not send such cookies over unsecured HTTP (except for localhost). If running Lenses HQ over unsecured HTTP, set this to false. Optional. If not set, it will default to true.
Contains TLS configuration. Please refer here for its structure.
Contains SAML2 IdP configuration.
It has the following fields:
Contains the IdP issued XML metadata blob.
Example value: <?xml version="1.0" ... (big blob of xml) </md:EntityDescriptor>.
Defines the base URL of Lenses HQ; the IdP redirects back to here on success.
Example value: https://hq.example.com.
Controls where the backend redirects to after having received a valid SAML2 assertion. Optional. If not set, it will default to /.
Example value: /.
Defines the Entity ID.
Example value: https://hq.example.com.
Sets the attribute name from which group names are extracted in the SAML2 assertions. Different providers use different names. Okta, Keycloak and Google use "groups". OneLogin uses "roles". Azure uses "http://schemas.microsoft.com/ws/2008/06/identity/claims/groups". Optional. If not set, it will default to groups.
Example value: groups.
Controls how the creation of users should be handled in relation to SSO information. With the 'manual' mode, only users that currently exist in HQ can login. Users that do not exist are rejected. With the 'sso' mode, users that do not exist are automatically created. Allowed values are manual or sso. Optional. If not set, it will default to manual.
Controls how the management of a user's group membership should be handled in relation to SSO information. With the 'manual' mode, the information about the group membership returned from an Identity Provider will not be used and a user will only be a member of groups that were explicitly assigned to them locally. With the 'sso' mode, group information from the Identity Provider (IdP) will be used. On login, a user's group membership is set to the groups listed in the IdP. Groups that do not exist in HQ are ignored. Allowed values are manual or sso. Optional. If not set, it will default to manual.
Controls the agent handling.
It has the following fields:
Sets the address the agent server listens at.
Example value: 127.0.0.1:3000.
Contains TLS configuration. Please refer here for its structure.
Contains TLS configuration.
It has the following fields:
Enables or disables TLS.
Example value: false.
Sets the PEM formatted public certificate. Optional. If not set, it will default to ``.
Example value: -----BEGIN CERTIFICATE----- EXampLeRanDoM ... -----END CERTIFICATE-----.
Sets the PEM formatted private key. Optional. If not set, it will default to ``.
Example value: -----BEGIN PRIVATE KEY----- ExAmPlErAnDoM ... -----END PRIVATE KEY-----.
Enables additional logging of TLS settings and events at debug level. The information presented might be a bit too much for day to day use but can provide extra information for troubleshooting TLS configuration. Optional. If not set, it will default to false.
Configures database settings.
It has the following fields:
Sets the name of the host to connect to. A comma-separated list of host names is also accepted; each host name in the list is tried in order.
Example value: postgres:5432.
Sets the username to authenticate as. Optional. If not set, it will default to ``.
Example value: johhnybingo.
Sets the password to authenticate as. Optional. If not set, it will default to ``.
Example value: my-password.
Sets the database to use.
Example value: my-database.
Sets the schema to use. Optional. If not set, it will default to ``.
Example value: my-schema.
Enables TLS. In PostgreSQL connection string terms, setting TLS to false corresponds to sslmode=disable; setting TLS to true corresponds to sslmode=verify-full. For more fine-grained control, specify sslmode in the params, which takes precedence. Optional. If not set, it will default to false.
Example value: true.
Example value: {"application_name":"example"}.
Sets the logger behaviour.
It has the following fields:
Controls the format of the logger's output. Allowed values are text or json.
Controls the level of the logger. Allowed values are info or debug. Optional. If not set, it will default to info.
Controls the metrics settings.
It has the following fields:
Sets the address at which Prometheus metrics are served. Optional. If not set, it will default to :9090.
Holds the license key.
It has the following fields:
Sets the license key. An HQ key starts with "licensekey".
Accepts the Lenses EULA.
This page describes the Lenses Agent configuration.
This page describes how to setup connections to Kafka and other services and have changes applied automatically for the Lenses Agent.
Contains SAML2 IdP configuration. Please refer here for its structure.
Name | Required | Default | Type | Description |
---|
Name | Required | Default | Type | Description |
---|
Name | Required | Default | Type | Description |
---|
Name | Required | Default | Type | Description |
---|
Name | Required | Default | Type | Description |
---|
Contains connection string parameters as key/value pairs. It allows fine-grained control of connection settings. The parameters can be found here. Optional. If not set, it will default to {}.
Name | Required | Default | Type | Description |
---|
Name | Required | Default | Type | Description |
---|
Name | Required | Default | Type | Description |
---|
Identifier (Entity ID)
Use the base url of the Lenses installation e.g. https://lenses-dev.example.com
Reply URL
Use the base url with the callback details e.g. https://lenses-dev.example.com/api/v2/auth/saml/callback?client_name=SAML2Client
Sign on URL
Use the base url
Client ID
Use the base.url of the Lenses installation e.g. https://lenses-dev.example.com
Client Protocol
Set it to saml
Client Saml Endpoint
This is the Lenses API point for Keycloak to call back. Set it to [BASE_URL]/api/v2/auth/saml/callback?client_name=SAML2Client. e.g. https://lenses-dev.example.com/api/v2/auth/saml/callback?client_name=SAML2Client
Name
Lenses
Description
(Optional) Add a description to your app.
SAML Signature Name
KEY_ID
Client Signature Required
OFF
Force POST Binding
ON
Front Channel Logout
OFF
Force Name ID Format
ON
Name ID Format
Root URL
Use the base.url of the Lenses installation e.g. https://lenses-dev.example.com
Valid Redirect URIs
Use the base.url of the Lenses installation e.g. https://lenses-dev.example.com
Name
Groups
Mapper Type
Group list
Group attribute name
groups (case-sensitive)
Single Group Attribute
ON
Full group path
OFF
ACS URL
Use the base url with the callback path e.g. https://lenses-dev.example.com/api/v2/auth/saml/callback?client_name=SAML2Client
Entity ID
Use the base url e.g. https://lenses-dev.example.com
Start URL
Leave empty
Signed Response
Leave unchecked
Name ID format
Leave as UNSPECIFIED
Name ID
Leave as Basic Information > Primary Email
| Yes | n/a | Contains the IdP issued XML metadata blob. |
| Yes | n/a | Defines base URL of HQ for IdP redirects. |
| No | | Controls where to redirect to upon successful authentication. |
| Yes | n/a | Defines the Entity ID. |
| No | | Sets the attribute name for group names. |
| No | | Controls how the creation of users should be handled in relation to SSO information. |
| No | | Controls how the management of a user's group membership should be handled in relation to SSO information. |
| Yes | n/a | Enables or disables TLS. |
| No | `` | Sets the PEM formatted public certificate. |
| No | `` | Sets the PEM formatted private key. |
| No | | Enables verbose TLS logging. |
| Yes | n/a | Sets the name of the host to connect to. |
| No | `` | Sets the username to authenticate as. |
| No | `` | Sets the password to authenticate as. |
| Yes | n/a | Sets the database to use. |
| No | `` | Sets the schema to use. |
| No | | Enables TLS. |
| No | | Provides fine-grained control. |
| Yes | n/a | Controls the format of the logger's output. |
| No | | Controls the level of the logger. |
| No | | Sets the Prometheus address. |
This page describes configuring Lenses to connect to Aiven.
This page describes connecting the Lenses Agent to Apache Kafka.
A Kafka connection is required for the agent to start. You can connect to Kafka via:
Plaintext (no credentials and unencrypted)
SSL (no credentials and encrypted)
SASL Plaintext and SASL SSL
With PLAINTEXT, there's no encryption and no authentication when connecting to Kafka.
The only required fields are:
kafkaBootstrapServers - a list of bootstrap servers (brokers). It is recommended to add as many brokers (if available) as convenient to this list for fault tolerance.
protocol - depending on the protocol, other fields might be necessary (see examples for other protocols)
In the following example, JMX metrics for Kafka Brokers are configured too, assuming that all brokers expose their JMX metrics on the same port (9581), without SSL and authentication.
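A sketch of such a PLAINTEXT connection in provisioning.yaml; the host names are placeholders and the exact configuration schema is an assumption to verify against your Agent version:

```yaml
# provisioning.yaml – PLAINTEXT Kafka connection (hosts are placeholders)
kafka:
  - name: kafka
    version: 1
    configuration:
      kafkaBootstrapServers:
        value:
          - PLAINTEXT://my-kafka-host-0:9092
          - PLAINTEXT://my-kafka-host-1:9092
      protocol:
        value: PLAINTEXT
      # JMX metrics on the same port for every broker, no SSL/auth
      metricsType:
        value: JMX
      metricsPort:
        value: 9581
```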
With SSL the connection to Kafka is encrypted. You can also use SSL and certificates to authenticate users against Kafka.
A truststore (with password) might need to be set explicitly if the global truststore of the Agent does not include the Certificate Authority (CA) of the brokers.
If TLS is used for authentication to the brokers in addition to encryption-in-transit, a key store (with passwords) is required.
There are 2 SASL-based protocols to access Kafka Brokers: SASL_SSL and SASL_PLAINTEXT. Both require a SASL mechanism and JAAS configuration values. What differs is:
Whether the transport layer is encrypted (SSL)
The SASL mechanism for authentication (PLAIN, AWS_MSK_IAM, GSSAPI).
In addition to this, there might be a keytab file required, depending on the SASL mechanism (for example when using GSSAPI mechanism, most often used for Kerberos).
To use Kerberos authentication, a Kerberos Connection should be created beforehand.
When encryption-in-transit is used (with SASL_SSL), a trust store might need to be set explicitly if the global trust store of Lenses does not include the CA of the brokers.
Encrypted communication and basic username and password for authentication.
In order to use Kerberos authentication, a Kerberos Connection should be created beforehand.
No SSL encryption of communication; credentials are communicated to Kafka in clear text.
This page describes an overview of Lenses Agent Provisioning.
As of version 6.0, calling the REST endpoint for provisioning is no longer available.
Connections are defined in the provisioning.yaml file. The Agent will watch the file and resolve the desired state, applying connections defined in the file.
Connections are defined in the provisioning.yaml. This file is divided into components, each component representing a type of connection.
For each component, the following fields are mandatory:
Name - the free-form name of the connection
Version - set to 1
Configuration - a list of keys/values dependent on the component type
The provisioning.yaml contains secrets. If you are deploying via Helm, the chart will use Kubernetes secrets.
Additionally, support is provided for referencing environment variables. This allows you to set secrets in your environment and resolve the value at runtime.
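For instance, a secret can be pulled from the environment as in the sketch below; the reference syntax and surrounding keys are assumptions to verify against your Agent version:

```yaml
# provisioning.yaml – referencing an environment variable for a secret
# (syntax assumed)
kafka:
  - name: kafka
    version: 1
    configuration:
      saslJaasConfig:
        value: $KAFKA_JAAS_CONFIG   # resolved from the environment at runtime
```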
Many connections need files, for example, to secure Kafka with SSL you will need a key store and optionally a trust store.
To reference a file in the provisioning.yaml, for example, given:
a file called my-keystore.jks is expected in the same directory.
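The reference might look like the following sketch; the file key syntax and surrounding structure are assumptions:

```yaml
# provisioning.yaml – referencing a file (syntax assumed)
kafka:
  - name: kafka
    version: 1
    configuration:
      sslKeystore:
        file: my-keystore.jks   # expected next to provisioning.yaml
```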
This page describes connecting a Lenses Agent with HQ.
To be able to view and drill into your Kafka environment, you need to connect the agent to HQ. You need to create an environment in HQ and copy the Agent Key into the provisioning.yaml.
This page describes configuring Lenses to connect to Confluent Platform.
For Confluent Platform see Apache Kafka.
This page describes configuring Lenses to connect to Confluent Cloud.
For Confluent Cloud see Apache Kafka.
This page describes connecting Lenses to an Azure HDInsight cluster.
This page describes an overview of connecting a Lenses Agent with Schema Registries
Consider Rate Limiting if you have a high number of schemas.
TLS and basic authentication are supported for connections to Schema Registries.
The Agent can collect Schema registry metrics via:
JMX
Jolokia
AVRO
PROTOBUF
JSON and XML formats are supported by Lenses but without a backing schema registry.
To enable the deletion of schemas in the UI, set the following in the lenses.conf file.
IBM Event Streams supports hard deletes only
This page describes connecting Lenses to Confluent schema registries.
Set the following examples in provisioning.yaml
The URLs (nodes) should always have a scheme defined (http:// or https://).
For Basic Authentication, define username
and password
properties.
A custom truststore is needed when the Schema Registry is served over TLS (encryption-in-transit) and the Registry’s certificate is not signed by a trusted CA.
A custom truststore might be necessary too (see above).
By default, Lenses will use hard delete for Schema Registry. To use soft delete, add the following property:
This page describes connecting Lenses to Azure Event Hubs.
Add a shared access policy
Navigate to your Event Hub resource and select Shared access policies in the Settings section.
Select + Add shared access policy, give a name, and check all boxes for the permissions (Manage, Send, Listen)
Once the policy is created, obtain the Primary Connection String, by clicking the policy and copying the connection string. The connection string will be used as a JAAS password to connect to Kafka.
The bootstrap broker is [YOUR_EVENT_HUBS_NAMESPACE].servicebus.windows.net:9093
Set the following in the provisioning.yaml
First set environment variable
Note that the "\" before "$ConnectionString" is added to escape the $ sign.
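Putting the pieces together, a sketch of the Event Hubs connection; the namespace, environment variable name, and the exact configuration schema are assumptions:

```yaml
# provisioning.yaml – Azure Event Hubs via SASL_SSL/PLAIN (values illustrative)
kafka:
  - name: kafka
    version: 1
    configuration:
      kafkaBootstrapServers:
        value:
          - SASL_SSL://my-namespace.servicebus.windows.net:9093
      protocol:
        value: SASL_SSL
      saslMechanism:
        value: PLAIN
      saslJaasConfig:
        value: |
          org.apache.kafka.common.security.plain.PlainLoginModule required
          username="\$ConnectionString"
          password="$EVENTHUBS_CONNECTION_STRING";
```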
| Yes | n/a | Configures everything involving the HTTP. |
| Yes | n/a | Controls the agent handling. |
| Yes | n/a | Configures database settings. |
| Yes | n/a | Sets the logger behaviour. |
| Yes | n/a | Controls the metrics settings. |
| Yes | n/a | Holds the license key. |
| Yes | n/a | Configures authentication and authorisation |
| No | | Grants root access to principals. |
| No | n/a | Contains SAML2 IdP configuration. |
| No | [] | Array | Creates initial users for password-based authentication. |
| Yes | n/a | Sets the address the HTTP server listens at. |
| No | | Sets the value of the "Access-Control-Allow-Origin" header. |
| No | | Sets the value of the "Access-Control-Allow-Credentials" header. |
| No | | Sets the "Secure" attribute on session cookies. |
| Yes | n/a | Contains TLS configuration. |
| Yes | n/a | Sets the address the agent server listens at. |
| Yes | n/a | Contains TLS configuration. |
| Yes | n/a | Sets the license key. |
acceptEULA | Yes | false | boolean | Accepts the Lenses EULA. |
This page describes how to connect the Lenses Agent to your Kafka brokers.
The Lenses Agent can connect to any Kafka cluster or service exposing the Apache Kafka APIs and supporting the authentication methods offered by Apache Kafka.
This page describes how to connect Lenses to an Amazon MSK Serverless cluster.
It is recommended to install the Agent on an EC2 instance or with EKS in the same VPC as your MSK Serverless cluster.
Enable communications between the Agent & the Amazon MSK Serverless cluster by opening the Amazon MSK Serverless cluster's security group in the AWS Console and add the IP address of your Agent installation.
To authenticate the Agent & access resources within our MSK Serverless cluster, we'll need to create an IAM policy and apply it to the resource (EC2, EKS cluster, etc.) running the Agent service. Here is an example IAM policy with sufficient permissions which you can associate with the relevant IAM role:
MSK Serverless IAM to be used after cluster creation. Update this IAM policy with the relevant ARN.
Click your MSK Serverless Cluster in the MSK console and select View Client Information page to check the bootstrap server endpoint.
To enable the creation of SQL Processors that create consumer groups, you need to add the following statement in your IAM policy:
Update the placeholders in the IAM policy based on the relevant MSK Serverless cluster ARN.
To integrate with the AWS Glue Schema Registry, you also need to add the following statement for the registries and schemas in your IAM policy:
Update the placeholders in the IAM policy based on the relevant MSK Serverless cluster ARN.
To integrate with the AWS Glue Schema Registry, you also need to modify the security policy for the registry and schemas, which results in additional functions within it:
More details about how IAM works with MSK Serverless can be found in the documentation: MSK Serverless
When using the Agent with MSK Serverless:
The agent does not receive Prometheus-compatible metrics from the brokers because they are not exported outside of CloudWatch.
The agent does not configure quotas and ACLs because MSK Serverless does not allow this.
This page describes adding Schema Registries to the Lenses Agent.
This page describes connecting Lenses to Apicurio.
Apicurio supports the following versions of Confluent's API:
Confluent Schema Registry API v6
Confluent Schema Registry API v7
Set the following examples in provisioning.yaml
Set the schema registry URLs to include the compatibility endpoints, for example:
This page describes connecting the Lenses Agent to an AWS MSK cluster.
It is recommended to install the Agent on an EC2 instance or with EKS in the same VPC as your MSK cluster. The Agent can be installed and preconfigured via the.
Edit the AWS MSK security group in the AWS Console and add the IP address of your Agent installation.
Depending on your MSK cluster, select the endpoint and protocol you want to connect with.
It is not recommended to use Plaintext for secure environments. For these environments use TLS or IAM.
When the Agent is running inside AWS and is connecting to an Amazon’s Managed Kafka (MSK) instance, IAM can be used for authentication.
This page describes connecting Lenses to IBM Event Streams schema registry.
Requires an Enterprise subscription on IBM Event Streams; only hard delete is supported for IBM Event Streams.
To configure an application to use this compatibility API, specify the Schema Registry endpoint in the following format:
Use "token" as the username. Set the password to your API key from IBM Event Streams.
Set the following examples in provisioning.yaml
This page describes how to connect Lenses to IBM Event Streams.
IBM Event Streams requires a replication factor of 3. Ensure you set the replication factor accordingly for Lenses internal topics.
See .
This page describes adding a Kafka Connect Cluster to the Lenses Agent.
Lenses integrates with Kafka Connect Clusters to manage connectors.
The name of a Kafka Connect Connection may only contain alphanumeric characters ([A-Za-z0-9]) and dashes (-). Valid examples would be dev, Prod1, SQLCluster, Prod-1, SQL-Team-Awesome.
Multiple Kafka Connect clusters are supported.
If you are using Kafka Connect < 2.6 set the following to ensure you can see Connectors
lenses.features.connectors.topics.via.api.enabled=false
Consider Rate Limiting if you have a high number of connectors.
The URLs (workers) should always have a scheme defined (http:// or https://).
This example uses an optional AES-256 key. The key decodes values encoded with AES-256 to enable passing encrypted values to connectors. It is only needed if your cluster uses AES-256 Decryption plugin.
For Basic Authentication, define username
and password
properties.
A custom truststore is needed when the Kafka Connect workers are served over TLS (encryption-in-transit) and their certificates are not signed by a trusted CA.
A custom truststore might be necessary too (see above).
If you have developed your own Connector or are not using a Lenses connector, you can still display the connector instances in the topology. To do this, Lenses needs to know which configuration option of the Connector defines the topic the Connector reads from or writes to. This is set in the connectors.info parameter in the lenses.conf file.
This page describes connecting to AWS Glue.
Accepts the
If you want to have the Agent collect JMX metrics you have to enable Open Monitoring on your MSK cluster. Follow the AWS guide.
This page describes the hardware and OS prerequisites for Lenses.
Run on any Linux server (review ulimits) or container technology (Docker/Kubernetes). For RHEL 6.x and CentOS 6.x, use Docker.
Linux machines typically have a soft limit of 1024 open file descriptors. Check your current limit with the ulimit command:
Increase as a super-user the soft limit to 4096 with:
This page describes adding a Zookeeper to the Lenses Agent.
Set the following examples in provisioning.yaml
Simple configuration with Zookeeper metrics read via JMX.
With such a configuration, Lenses will use 3 Zookeeper nodes and will try to read their metrics from the following URLs (notice the same port, 9581, used for all of them, as defined by the metricsPort property):
my-zookeeper-host-0:9581
my-zookeeper-host-1:9581
my-zookeeper-host-2:9581
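The described setup might be sketched as follows; the key names follow the zookeeperUrls/metricsPort properties mentioned in this section, but the exact schema and the client port 2181 are assumptions:

```yaml
# provisioning.yaml – Zookeeper with JMX metrics (hosts/ports illustrative)
zookeeper:
  - name: zookeeper
    version: 1
    configuration:
      zookeeperUrls:
        value:
          - my-zookeeper-host-0:2181
          - my-zookeeper-host-1:2181
          - my-zookeeper-host-2:2181
      metricsType:
        value: JMX
      metricsPort:
        value: 9581   # same JMX port on every node
```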
This page describes how to configure JMX metrics for Connections in Lenses.
All core services (Kafka, Schema Registry, Kafka Connect, Zookeeper) use the same set of properties for services’ monitoring.
The Agent will discover all the brokers by itself and will try to fetch metrics using metricsPort, metricsCustomUrlMappings and other properties (if specified).
The same port used for all brokers/workers/nodes. No SSL, no authentication.
Such a configuration means that the Agent will try to connect using JMX with every pair of kafkaBootstrapServers.host:metricsPort, so following the example: my-kafka-host-0:9581.
For Jolokia the Agent supports two types of requests: GET (JOLOKIAG) and POST (JOLOKIAP).
For JOLOKIA, each entry value in metricsCustomUrlMappings must contain the protocol.
The same port used for all brokers/workers/nodes. No SSL, no authentication.
JOLOKIA monitoring works on top of the HTTP protocol. To fetch metrics, the Agent performs either a GET or a POST request. The HTTP request timeout can be configured via the httpRequestTimeout property (a value in ms); its default is 20 seconds.
The default suffix for Jolokia endpoints is /jolokia/, so that is the value that should be provided. If your deployment uses a different suffix, customize it with the metricsHttpSuffix field.
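A hedged sketch of a Jolokia GET fragment for a connection's configuration block; the property names come from this page, while the port and values are illustrative:

```yaml
      metricsType:
        value: JOLOKIAG        # GET requests; use JOLOKIAP for POST
      metricsPort:
        value: 8778            # illustrative; the default Jolokia agent port
      metricsHttpSuffix:
        value: "/jolokia/"     # the default suffix
      httpRequestTimeout:
        value: 30000           # ms; defaults to 20 seconds if unset
```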
AWS has a predefined metrics configuration. The Agent hits the Prometheus endpoint on port 11001 for each broker. The AWS metrics connection can be customized via the metricsUsername, metricsPassword, httpRequestTimeout, metricsHttpSuffix, metricsCustomUrlMappings and metricsSsl properties, but this is rarely necessary, as AWS follows its own fixed standard. Customization can be achieved only via the API or CLI; the UI does not support it.
There is also a way to configure custom mapping for each broker (Kafka) / node (Schema Registry, Zookeeper) / worker (Kafka Connect).
Such a configuration means that the Agent will try to connect using JMX for:
my-kafka-host-0:9582 - because of metricsCustomUrlMappings
my-kafka-host-1:9581 - because of metricsPort and no entry in metricsCustomUrlMappings
Add a connection to AWS in the Lenses Agent.
The Agent uses an AWS connection in three places:
AWS IAM connection to MSK for Lenses itself
Connecting to AWS Glue
Alert channels to CloudWatch.
If the Agent is deployed on an EC2 Instance or has access to AWS credentials in the default AWS toolchain that can be used instead.
This page describes the Kafka ACLs prerequisites for the Lenses Agent if ACLs are enabled on your Kafka clusters.
These ACLs are for the underlying Lenses Agent Kafka client. Lenses has its own set of permissions guarding access.
You can restrict the access of the Lenses Kafka client, but this can reduce the functionality on offer in Lenses, e.g. not allowing Lenses to create topics at all, even though this can be managed by Lenses' own IAM system.
When your Kafka cluster is configured with an authorizer which enforces ACLs, the Agent will need a set of permissions to function correctly.
Common practice is to give the Agent superuser status or the complete list of available operations for all resources. The IAM model of Lenses can then be used to restrict the access level per user.
The Agent needs permission to manage and access its own internal Kafka topics:
__topology
__topology__metrics
It also needs read and describe permissions for the consumer offsets and Kafka Connect topics, if enabled:
__consumer_offsets
connect-configs
connect-offsets
connect-status
The same set of permissions is required for any topic to which the Agent must have read access.
DescribeConfigs was added in Kafka 2.0. It may not be needed for versions before 2.2.
Additional permissions are needed to produce to topics or manage them.
Permission to at least read and describe consumer groups is required to take advantage of the Consumer Groups' monitoring capabilities.
Additional permissions are needed to manage groups.
To manage ACLs, permission to the cluster is required:
Connect the Lenses Agent to your alerting and auditing systems.
The Agent can send out alert and audit events. Once you have configured alert and audit connections, you can create alert and audit channels to route events to them.
See AWS connection.
This page describes configuring the database connection for the Lenses Agent.
Once you have created a role for the Agent to use, you can configure the Agent in the lenses.conf file:
Additional configurations for the PostgreSQL database connection can be passed under the lenses.storage.postgres.properties configuration prefix.
One Postgres server can be used for all Agents, with each Agent using a separate database or schema; see lenses.storage.postgres.schema and lenses.storage.postgres.database.
The supported parameters can be found in the PostgreSQL documentation. For example:
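A sketch of the relevant lenses.conf section; the properties prefix is from this page, the host/credential keys and values are illustrative, and ssl/sslmode are standard PostgreSQL JDBC driver parameters:

```properties
lenses.storage.postgres.host = "postgres.example.com"
lenses.storage.postgres.port = 5432
lenses.storage.postgres.database = "lenses"
lenses.storage.postgres.username = "lenses"
lenses.storage.postgres.password = "changeme"

# Extra JDBC driver parameters go under the properties prefix
lenses.storage.postgres.properties.ssl = true
lenses.storage.postgres.properties.sslmode = "verify-full"
```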
The Agent uses the HikariCP library for high-performance database connection pooling.
The default settings should perform well but can be overridden via the lenses.storage.hikaricp configuration prefix. The supported parameters can be found in the HikariCP documentation.
CamelCase configuration keys are not supported in the Agent configuration and should be translated to dot notation.
For example:
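For instance, HikariCP's camelCase keys maximumPoolSize and idleTimeout would be written in dot notation under the prefix (a sketch; the values are illustrative):

```properties
# HikariCP maximumPoolSize
lenses.storage.hikaricp.maximum.pool.size = 10
# HikariCP idleTimeout (ms)
lenses.storage.hikaricp.idle.timeout = 600000
```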
This page describes how to retrieve Lenses Agent JMX metrics.
The JMX endpoint is managed by the lenses.jmx.port option. To disable JMX, leave the option empty.
To enable monitoring of the Agent metrics:
To export via Prometheus exporter:
The Agent Docker image (lensesio/lenses) automatically sets up the Prometheus endpoint. You only have to expose the 9102 port to access it.
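A minimal lenses.conf sketch; the port value is illustrative:

```properties
# Enable the JMX endpoint; leave empty to disable it
lenses.jmx.port = 9586
```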
This is done in two parts: first, setting up the files that the JMX agent requires; second, the options we need to pass to the agent.
First, create a new folder called jmxremote. To enable basic-auth JMX, create two files inside it:
jmxremote.access
jmxremote.password
The password file has the credentials that the JMX agent will check during client authentication
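Using the credentials described on this page and the standard JMX format of one "username password" pair per line, jmxremote.password would contain:

```text
admin admin
guest admin
```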
This registers 2 users:
UserA: username admin, password admin
UserB: username guest, password admin
The access file has authorization information, like who is allowed to do what.
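Using the standard JMX access-file syntax, a jmxremote.access granting the permissions described here would contain:

```text
admin readwrite
guest readonly
```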
The admin user can perform read and write operations over JMX, while the guest user can only read the JMX content.
Now, to enable JMX with basic-auth protection, pass the following options in the environment of the JRE that runs the Java process you need to protect. Let's assume this Java process is Kafka.
Change the permissions on both files so that only the owner can view and edit them. If you do not change the permissions to 0600, owned by the user that runs the JRE process, the JMX agent will raise an error complaining that the process is not the owner of the files used for authentication and authorization.
Finally export the following options in the user’s env which will run Kafka.
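A sketch of those options for Kafka; the port and file paths are illustrative, while the flags are the standard com.sun.management.jmxremote system properties:

```shell
export KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=9581 \
  -Dcom.sun.management.jmxremote.authenticate=true \
  -Dcom.sun.management.jmxremote.ssl=false \
  -Dcom.sun.management.jmxremote.password.file=/opt/jmxremote/jmxremote.password \
  -Dcom.sun.management.jmxremote.access.file=/opt/jmxremote/jmxremote.access"
```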
First, set up JMX with basic auth as shown in the Secure JMX: Basic Auth page.
To enable TLS encryption/authentication in JMX, you need a JKS keystore and truststore.
Note that both the JKS truststore and keystore should have the same password. The reason is that the javax.net.ssl class uses the password you pass for the keystore as the key password.
Let's assume this Java process is Kafka and that you have installed keystore.jks and truststore.jks under /etc/certs.
Export the following options in the user’s env which will run Kafka.
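A sketch of the TLS-enabled options for Kafka, with keystore.jks and truststore.jks under /etc/certs; the port and the shared password are illustrative:

```shell
export KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=9581 \
  -Dcom.sun.management.jmxremote.authenticate=true \
  -Dcom.sun.management.jmxremote.ssl=true \
  -Dcom.sun.management.jmxremote.registry.ssl=true \
  -Djavax.net.ssl.keyStore=/etc/certs/keystore.jks \
  -Djavax.net.ssl.keyStorePassword=changeme \
  -Djavax.net.ssl.trustStore=/etc/certs/truststore.jks \
  -Djavax.net.ssl.trustStorePassword=changeme"
```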
This page describes how to configure TLS for the Lenses Agent.
By default, the Agent does not provide TLS termination, but it can be enabled via a configuration option. TLS termination is recommended for enhanced security and is a prerequisite for integrating with SSO (Single Sign-On) via SAML 2.0.
TLS termination can be configured directly within Agent or by using a TLS proxy or load balancer.
To use a non-default global truststore, set the path via the LENSES_OPTS variable.
To enable mutual TLS, set your keystore accordingly.
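A sketch using LENSES_OPTS with the standard javax.net.ssl system properties; paths and passwords are illustrative:

```shell
# Custom global truststore (and, for mutual TLS, a keystore)
export LENSES_OPTS="-Djavax.net.ssl.trustStore=/etc/lenses/truststore.jks \
  -Djavax.net.ssl.trustStorePassword=changeme \
  -Djavax.net.ssl.keyStore=/etc/lenses/keystore.jks \
  -Djavax.net.ssl.keyStorePassword=changeme"
```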
This page describes the Provisioning API reference.
For the options for each connection, see the Schema/Object of the PUT call.
This page describes the memory & CPU prerequisites for Lenses.
This documentation provides memory recommendations for Lenses.io, considering the number of Kafka topics, the number of schemas, and the complexity of these schemas (measured by the number of fields). Proper memory allocation ensures optimal performance and stability of Lenses.io in various environments.
Number of Topics: Kafka topics require memory for indexing, metadata, and state management.
Schemas and Their Complexity: The memory impact of schemas is influenced by both the number of schemas and the number of fields within each schema. Each schema field contributes to the creation of Lucene indexes, which affects memory usage.
For a basic setup with minimal topics and schemas:
Minimum Memory: 4 GB
Recommended Memory: 8 GB
This setup assumes:
Fewer than 100 topics
Fewer than 100 schemas
Small schemas with few fields (less than 10 fields per schema)
Memory requirements increase with the number of topics. Topics are used as the primary reference for memory scaling, with additional considerations for schemas.
Schemas have a significant impact on memory usage, particularly as the number of fields within each schema increases. The memory impact is determined by both the number of schemas and the complexity (number of fields) of these schemas.
To help illustrate how to apply these recommendations, here are some example configurations considering both topics and schema complexity:
Topics: 500
Schemas: 100 (average size 50 KB, 8 fields per schema)
Recommended Memory: 8 GB
Schema Complexity: Low → No additional memory needed.
Total Recommended Memory: 8 GB
Topics: 5,000
Schemas: 1,000 (average size 200 KB, 25 fields per schema)
Base Memory: 12 GB
Schema Complexity: Moderate → No additional memory needed.
Total Recommended Memory: 16 GB
Topics: 15,000
Schemas: 3,000 (average size 500 KB, 70 fields per schema)
Base Memory: 32 GB
Schema Complexity: High → Add 3 GB for schema complexity.
Total Recommended Memory: 35 GB
30,000 Topics
Schemas: 5,000 (average size 300 KB, 30 fields per schema)
Base Memory: 64 GB
Schema Complexity: Moderate → Add 5 GB for schema complexity.
Total Recommended Memory: 69 GB
High Throughput: If your Kafka cluster is expected to handle high throughput, consider adding 20-30% more memory than the recommendations.
Complex Queries and Joins: If using Lenses.io for complex data queries and joins, consider increasing the memory allocation by 10-15% to accommodate the additional processing.
Monitoring and Adjustment: Regularly monitor memory usage and adjust based on actual load and performance.
Proper memory allocation is crucial for the performance and reliability of Lenses.io, especially in environments with a large number of topics and complex schemas. While topics provide a solid baseline for memory recommendations, the complexity of schemas—particularly the number of fields—can also significantly impact memory usage. Regular monitoring and adjustments are recommended to ensure that your Lenses.io setup remains performant as your Kafka environment scales.
This page describes the JVM options for the Lenses Agent.
The Agent runs as a JVM app; you can tune runtime configurations via environment variables.
This page describes how to install plugins in the Lenses Agent.
The following implementations can be specified:
Serializers/Deserializers Plug your serializer and deserializer to enable observability over any data format (i.e., protobuf / thrift)
Custom authentication Authenticate users on your proxy and inject permissions HTTP headers.
LDAP lookup Use multiple LDAP servers or your group mapping logic.
SQL UDFs User Defined Functions (UDF) that extend SQL and streaming SQL capabilities.
Once built, the jar files and any plugin dependencies should be added to the Agent and, in the case of Serializers and UDFs, to the SQL Processors if required.
On startup, the Agent loads plugins from the $LENSES_HOME/plugins/ directory and any location set in the environment variable LENSES_PLUGINS_CLASSPATH_OPTS. The Agent keeps watching these locations, so dropping in a new plugin will hot-reload it. For the Agent docker image (and Helm chart), use /data/plugins.
Any first-level directories under the paths above that are detected on startup will also be monitored for new files. During startup, the list of monitored locations is shown in the logs to help confirm the setup.
Whilst all jar files may be added to the same directory (e.g. /data/plugins), it is suggested to use a directory hierarchy to make management and maintenance easier.
An example hierarchy for a set of plugins:
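An illustrative layout (the directory and jar names are examples):

```text
/data/plugins/
├── serde/
│   ├── my-protobuf-serde.jar
│   └── my-thrift-serde.jar
└── udf/
    └── my-udfs.jar
```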
There are two ways to add custom plugins (UDFs and Serializers) to the SQL Processors; (1) via making available a tar.gz archive at an HTTP (s) address, or (2) via creating a custom docker image.
With this method, a tar archive compressed with gzip is created containing all plugin jars and their dependencies. This archive should then be uploaded to a web server that the SQL Processor containers can access, and its address set via the option lenses.kubernetes.processor.extra.jars.url.
Step by step:
Create a tar.gz file that includes all required jars at its root:
Upload to a web server, e.g. https://example.net/myfiles/FILENAME.tar.gz
Set
For the docker image, set the corresponding environment variable
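The steps above can be sketched as follows; the directory name and URL are illustrative, and the environment-variable form assumes the usual Lenses convention of upper-casing the option name with underscores:

```shell
# 1. Collect all plugin jars and their dependencies in one directory
mkdir -p my-plugin-jars
# 2. Create a gzip-compressed tar with the jars at the archive root
tar -czf plugins.tar.gz -C my-plugin-jars .
# 3. Upload it, e.g. to https://example.net/myfiles/plugins.tar.gz, then set in lenses.conf:
#      lenses.kubernetes.processor.extra.jars.url = "https://example.net/myfiles/plugins.tar.gz"
#    or, for the docker image, the corresponding environment variable:
#      LENSES_KUBERNETES_PROCESSOR_EXTRA_JARS_URL="https://example.net/myfiles/plugins.tar.gz"
```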
The SQL Processors inside Kubernetes use the docker image lensesio-extra/sql-processor. It is possible to build a custom image and add all the required jar files under the /plugins directory, then set lenses.kubernetes.processor.image.name and lenses.kubernetes.processor.image.tag options to point to the custom image.
Step by step:
Create a Docker image using lensesio-extra/sql-processor:VERSION as a base and add all required jar files under /plugins:
Upload the docker image to a registry:
Set
For the docker image, set the corresponding environment variables
This page describes configuring Lenses Agent logging.
Changes to logback.xml are hot-reloaded by the Agent; no restart is needed.
All logs are emitted unbuffered as a stream of events to both stdout and to rotating files inside the directory logs/.
The logback.xml file is used to configure logging.
If customization is required, it is recommended to adapt the default configuration rather than write your own from scratch.
The file can be placed in any of the following directories:
the directory where the Agent is started from
/etc/lenses/
agent installation directory.
The first one found, in the above order, is used, but to override this and use a custom location, set the following environment variable:
The default configuration file is set up to hot-reload any changes every 30 seconds.
The default log level is set to INFO (apart from some very verbose classes).
All the log entries are written to the output using the following pattern:
You can adjust this inside logback.xml to match your organization’s defaults.
Inside logs/ you will find three files: lenses.log, lenses-warn.log and metrics.log. The first contains all logs and is the same as the stdout. The second contains only messages at level WARN and above. The third contains timing metrics and can be useful for debugging.
The default configuration contains two cyclic buffer appenders: “CYCLIC-INFO” and “CYCLIC-METRICS”. These appenders are required to expose the Agent logs within the Admin UI.
LRNs uniquely identify all resources that Lenses understands, for example a Lenses user, a Kafka topic or a Kafka Connect connector.
Use an LRN to specify a resource across all of Lenses, unambiguously:
To add topic permissions for a team in IAM permissions.
To share a consumer-group reference with a colleague.
The top-level format has 3 parts, called segments, separated by a colon (:).
service is the namespace of the Lenses service that manages a set of resource types, e.g. kafka for things like topics and consumer groups.
resource-type is the type of resource served by a service, e.g. topic for a Kafka topic, consumer-group for a Kafka consumer group; both belong to the kafka service.
resource-path is the unique path that identifies a resource. The resource path is specific to a service and resource type. It can be:
a single resource name, e.g. lucy.clearview@lenses.io for a user resource name; the full LRN would be iam:user:lucy.clearview@lenses.io
a nested path that contains slashes /, e.g. dev-environment/kafka/my-topic for a Kafka topic; the full LRN would be kafka:topic:dev-environment/kafka/my-topic
IAM user
Kafka topic
Kafka consumer group
Schema Registry schema
Kafka Connect connector
LRNs separate top-level segments with a colon : and resource path segments with a slash /.
A segment may have:
Alphanumeric characters: a-z, A-Z, 0-9
Allowed symbols: -
Use the wildcard asterisk * to express catch-all LRNs.
Use these examples to express multiple resources easily.
Avoid these examples because they are ambiguous. Lenses does not allow them.
This page describes how to configure the agent to deploy and manage SQL Processors for stream processing.
Lenses can be used to define and deploy stream processing applications that read from Kafka and write back to Kafka with SQL. They are based on the Kafka Streams framework and are known as SQL Processors.
SQL processing of real-time data can run in 2 modes:
SQL In-Process - the workload runs inside the Lenses Agent.
SQL in Kubernetes - the workload runs & scales on your Kubernetes cluster.
The mode in which the SQL Processors run should be defined within lenses.conf before Lenses is started.
In this mode, SQL processors run as part of the Agent process, sharing resources, memory, and CPU time with the rest of the platform.
This mode of operation is meant to be used for development only.
As such, the agent will not allow the creation of more than 50 SQL Processors in In Process mode, as this could impact the platform's stability and performance negatively.
For production, use the KUBERNETES mode for maximum flexibility and scalability.
Set the execution configuration to IN_PROC
Set the directory to store the internal state of the SQL Processors:
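A sketch of the two settings in lenses.conf; the key names follow the usual Lenses conventions but may differ per version, and the state directory path is illustrative:

```properties
lenses.sql.execution.mode = IN_PROC
lenses.sql.state.dir = "/tmp/lenses-sql-state"
```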
SQL Processors use the same connection details that the Agent uses to speak to Kafka and Schema Registry. The following properties are mounted, if present, on the file system for each processor:
Kafka
SSLTruststore
SSLKeystore
Schema Registry
SSL Keystore
SSL Truststore
The file structure created by applications is the following: /run/[lenses_installation_id]/applications/
Keep in mind Lenses requires an installation folder with write permissions. The following are tried:
/run
/tmp
Kubernetes can be used to deploy SQL Processors. To configure Kubernetes, set the mode to KUBERNETES and configure the location of the kubeconfig file.
When the Agent is deployed inside Kubernetes, the lenses.kubernetes.config.file configuration entry should be set to an empty string. The Kubernetes client will auto-configure from the pod it is deployed in.
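A sketch in lenses.conf; the kubeconfig path is illustrative:

```properties
lenses.sql.execution.mode = KUBERNETES
lenses.kubernetes.config.file = "/home/lenses/.kube/config"
# When the Agent runs inside Kubernetes, leave it empty to auto-configure:
# lenses.kubernetes.config.file = ""
```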
The SQL Processor docker image is available on Docker Hub.
Custom serdes should be embedded in a new Lenses SQL processor Docker image.
To build a custom Docker image, create the following directory structure:
Copy your serde jar files under processor-docker/serde.
Create a Dockerfile containing:
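A minimal Dockerfile sketch; the base image tag and jar location are illustrative:

```dockerfile
FROM lensesio-extra/sql-processor:VERSION
# Make the custom serdes visible to the processor
COPY serde/*.jar /plugins/
```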
Build the Docker image.
Once the image is deployed in your registry, please set Lenses to use it (lenses.conf):
Don't use the LPFP_ prefix. Internally, Lenses prefixes all its properties with LPFP_. Avoid passing custom environment variables starting with LPFP_, as this may cause the processors to fail.
To deploy Lenses Processors in Kubernetes, the suggested way is to activate RBAC at the Cluster level through the Helm values.yaml:
If you want to limit the permissions Lenses has against your Kubernetes cluster, you can use Role/RoleBinding resources instead.
To achieve this you need to create a Role and a RoleBinding resource in the namespace you want the processors deployed to:
example for:
Lenses namespace = lenses-ns
Processor namespace = lenses-proc-ns
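A sketch of such Role/RoleBinding resources for the namespaces above; the rule list and the service-account name lenses are assumptions to adapt to your deployment:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: lenses-processors
  namespace: lenses-proc-ns          # where the processors are deployed
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "services", "deployments", "configmaps"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: lenses-processors
  namespace: lenses-proc-ns
subjects:
  - kind: ServiceAccount
    name: lenses                     # the Agent's service account (assumption)
    namespace: lenses-ns             # where Lenses runs
roleRef:
  kind: Role
  name: lenses-processors
  apiGroup: rbac.authorization.k8s.io
```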
You can repeat this for as many namespaces as you want Lenses to have access to.
Finally, you need to define in the Lenses configuration which namespaces Lenses can access. To achieve this, amend values.yaml to contain the following:
example:
This page describes Identity & Access Management (IAM) in Lenses.
Rate limit the calls the Lenses Agent makes to Schema Registries and Connect Clusters.
To rate limit the calls the Agent makes to Schema Registries or Connect Clusters, set the following in the Agent configuration:
The exact values will depend on your setup, for example the number of schemas and how often new schemas are added, so some trial and error is required.
This page describes an overview of Lenses IAM (Identity & Access Management).
Principals (Users & Service accounts) receive their permissions based on their group membership.
Roles hold a set of policies, defining the permissions. Roles are assigned to groups.
Roles provide flexibility in how you provide access: you can create a policy that is very open or one that is very granular, for example allowing operators and support engineers permission to restart Connectors while denying actions that would let them view data or configuration options.
Roles are defined at the HQ level. This allows you to control access to not only actions at HQ but at lower environment levels, and to assign the same set of permissions across your whole Kafka landscape in a central place.
A role has:
A unique name;
A list of Permission Statements called a Policy.
A policy has:
One or more actions;
One or more resource patterns that the actions apply to;
An effect: allow or deny.
If any effect is deny for a resource, the result is always deny; the principle of least privilege applies.
A policy is defined by a YAML specification.
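A hypothetical sketch of such a specification; the field names are illustrative, not the exact Lenses schema:

```yaml
# Broad allow with a specific deny
- effect: allow
  actions: ["kafka:ReadKafkaData"]
  resources: ["kafka:topic:my-env/*"]
- effect: deny
  actions: ["kafka:ReadKafkaData"]
  resources: ["kafka:topic:my-env/the-cluster/forbidden-topic"]
```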
Action Patterns describe a set of actions; a concrete action can match an Action Pattern. In this text, action and action pattern are used interchangeably.
An action has the format service:operation, e.g. iam:DeleteUser.
Services describe the system entity that an action applies to. Services are:
environments
kafka
registry
schemas
kafka-connect
sql-streaming
kubernetes
applications
alerts
data-policies
governance
audit
iam
administration
Resources identify which resource, in a service, that the principal is allowed or denied, to perform the operation on.
A resource-type cannot mix literal characters with wildcards: it is either a full name or, provided the service is given, a bare wildcard.
The resource path identifies the resource within the context of a service and a resource type.
A resource-path consists of one or more segments separated by /. A segment can be a wildcard, or contain a wildcard as a suffix of a string. If a segment is a wildcard, then remaining segments do not need to be provided, and will be assumed to be wildcards as well.
The format is service:resource-type:resource-path
Where LRN is the Lenses Resource Name
kafka:topic:my-env/* will be expanded to kafka:topic:my-env/*/*;
kafka:topic:my-env/my-cluster* is invalid because the topic segment is missing; kafka:topic:my-env/my-cluster*/topic would be valid though;
*:topic:* is invalid, the service is not provided;
kaf*:* and kafka:top* are invalid, service and resource-type cannot contain wildcards;
kafka:*:foo is invalid, if the resource-type is a wildcard then the resource-path cannot be set.
A principal (user or service account) can perform an action on a resource if:
In any of the roles it receives via group membership:
There is any matching Permission Statement that has an effect of allow;
And there is not any matching Permission Statement that has an effect of deny.
A Permission Statement matches an action plus resource, if:
The action matches any of the Permission Statement's Action Patterns, AND:
The resource matches any of the Permission Statement's Resource Patterns.
An Action matches an Action Pattern (AP) if:
The AP is a wildcard, OR:
The Action's service equals the AP's and the AP's operation string-matches the Action's operation.
A Resource matches a Resource Pattern (RP) if:
The RP is a wildcard, OR:
The Resource's service equals the RP's and the RP's resource-type is a wildcard, OR:
The Resource's service and type equal those of the RP and the resource-paths match. Resource-paths are matched by string-matching each individual segment. If the RP has a trailing wildcard segment, the remaining segments are ignored.
A string s matches a pattern p if:
They are equal character by character;
If s or p has more non-wildcard characters than the other, they don't match;
If p contains a * suffix, any remaining characters in s are ignored.
Order of items in any collection is irrelevant during evaluation. Collections are considered sets rather than ordered lists. The following are equivalent:
Order of Resource Patterns does not matter
Order of Permission Statements does not matter
Order of Roles does not matter
Order of Groups does not matter
In the examples we're not too religious about strict JSON formatting.
Broad Allow + Specific Deny
Given:
A principal:
Can ReadKafkaData on kafka:topic:my-env/the-cluster/some-topic because it is allowed and not denied;
Cannot DeleteKafkaTopic on kafka:topic:my-env/the-cluster/some-topic because there is no allow;
Cannot ReadKafkaData on kafka:topic:my-env/the-cluster/forbidden-topic because, while it is allowed, the deny kicks in.
Given:
A principal:
Can ReadKafkaData on kafka:topic:someone-else-cluster/their-topic because the resource matches *.
Note that here the matching can be considered "most permissive".
Given:
A principal:
Can ReadKafkaData on kafka:topic:my-cluster/my-topic-1 and kafka:topic:my-cluster/my-topic-2 because the resources match, but cannot ReadKafkaData on kafka:topic:my-cluster/my-topic-3.
This page describes Environments in Lenses.
Environments are virtual containers for your Kafka infrastructure, including the Kafka cluster, Schema Registries, and Kafka Connect clusters.
Each Environment has an Agent, the Agent communicates with HQ via an Agent Key generated at the environment creation time.
Environments can be assigned tiers, domains and a description, and grouped accordingly.
Go to Environments in the left-hand navigation, then select the New Environment button in the top right corner.
Once you have created an environment you will be presented with an Agent Key. Copy this and deploy an Agent for your environment (Kafka Cluster).
Learn how to configure an agent .
An operation can contain a wildcard, but only at the end. See the available operations per service.
| Number of Topics / Partitions | Recommended Memory |
|---|---|
| Up to 1,000 topics / 10,000 partitions | 12 GB |
| 1,001 to 10,000 topics / 100,000 partitions | 24 GB |
| 10,001 to 30,000 topics / 300,000 partitions | 64 GB |
| Schema Complexity | Number of Fields per Schema | Memory Addition |
|---|---|---|
| Low to Moderate Complexity | Up to 50 fields | None |
| High Complexity | 51 - 100 fields | 1 GB for every 1,000 schemas |
| Very High Complexity | 100+ fields | 2 GB for every 1,000 schemas |
| Number of Topics | Number of Schemas | Number of Fields per Schema | Base Memory | Additional Memory | Total Recommended Memory |
|---|---|---|---|---|---|
| 1,000 | 1,000 | Up to 10 | 8 GB | None | 12 GB |
| 1,000 | 1,000 | 11 - 50 | 8 GB | None | 12 GB |
| 5,000 | 5,000 | Up to 10 | 12 GB | None | 16 GB |
| 5,000 | 5,000 | 11 - 50 | 12 GB | None | 16 GB |
| 10,000 | 10,000 | Up to 10 | 16 GB | None | 24 GB |
| 10,000 | 10,000 | 51 - 100 | 24 GB | 10 GB | 34 GB |
| 30,000 | 30,000 | Up to 10 | 64 GB | None | 64 GB |
| 30,000 | 30,000 | 51 - 100 | 64 GB | 30 GB | 94 GB |
| Key | Description |
|---|---|
| LENSES_OPTS | For generic settings, such as the global truststore. Note that the docker image uses this to plug in a Prometheus java agent for monitoring Lenses. |
| LENSES_HEAP_OPTS | JVM heap options. The default settings are -Xmx3g -Xms512m, which set the heap size between 512MB and 3GB. The upper limit is set to 1.2GB on the Box development docker image. |
| LENSES_JMX_OPTS | Tune the JMX options for the JVM, i.e. to allow remote access. |
| LENSES_LOG4J_OPTS | Override the Agent logging configuration. Should only be used to set the logback configuration file, using the format -Dlogback.configurationFile=file:/path/to/logback.xml |
| LENSES_PERFORMANCE_OPTS | JVM performance tuning. The default settings are -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent= |
Global wildcard. Captures all the resources that Lenses manages ("everything").
Service-specific wildcard. Captures all the resources for a service ("all Kafka resources in all environments, i.e. topics, consumer groups, ACLs and quotas").
Resource-type-specific wildcard. Captures all the resources for a type of resource of a service ("all Kafka topics in all environments").
Path segment wildcard. Captures a part of the resource path ("all connectors named 'my-s3-sink' in all Connect clusters under the environment 'dev-environment'").
Trailing wildcard. Placed at the end of an LRN, it acts as a 'globstar': it captures the resources that start with the given path prefix ("all Kafka topics in the environment 'dev-environment' whose name starts with 'red-'").
Path suffix wildcard. Captures resources where different path segments start with certain prefixes ("all connectors in all environments that start with 'dev', within any Connect cluster that starts with 'sinks', and where the connector name starts with 's3'").
No wildcards are allowed at the service level; a service must be written out as its full string. (Better alternative: the global wildcard.)
No wildcards are allowed at the resource-type level; a resource type must be written out as its full string. (Better alternative: a service-specific wildcard.)
"lit" | "lit" | true |
"lit" | "li" | false |
"lit" | "litt" | false |
"lit" | "oth" | false |
"*" | "some" | true |
"foo*" | "foo" | true |
"foo*" | "foo-bar" | true |
"" | "" | true |
"x" | "" | false |
"" | "x" | false |
This page describes how to use Lenses to search for topics and fields across Kafka, Postgres and Elasticsearch.
Allow users to create and manage their topics and apply topic settings as guard rails.
Topics can be accessed via the Global Data Catalogue or at an environment level.
To create a topic go to Environments->[Your Environment]->Workspace->Explore->New Topic. Enter the name, partitions and replication factor.
If topic settings apply, you will not be able to create the topic unless the rules have been met.
The Explore screen lists high-level details of the topics
Selecting a topic allows you to drill into more details.
Topics marked for deletion will be highlighted with a D.
Compacted topics will be highlighted with a C.
To increase the number of partitions, select the topic, then select Increase Partitions from the actions menu. Increasing the number of partitions does not automatically rebalance the topic.
Topics inherit their configurations from the broker defaults. To override a configuration, select the topic, then the Configuration tab. Search for the desired configuration and edit its value.
To delete a topic, click the trash can icon.
Topics can only be deleted if all clients reading or writing to the topic have been stopped. The topic will be marked for deletion with a D until the clients have stopped.
To quickly find compacted or empty topics, use the quick filter checkboxes; for example, you can find all empty topics and perform a bulk delete action on them.
This section provides example IAM policies for Lenses.
These are only some sample policies to help you build your own.
Full admin across all resources.
Allow full access for all services and resources beginning with blue.
Allow read only access for topics and schemas beginning with la.
Allow operators to restart connectors and list & get IAM resources only.
No access to data!
Explicitly deny access to a production environment.
Allow developers access to topics, schemas, sql processors, consumer groups, acls, quotas, connectors for us-dev.
This page describes how to use Lenses to insert or delete messages in Kafka.
To insert a message, select Insert Message from the Action menu. Either enter a message, according to the topic schema or have a message auto-generated for you.
Deleting messages deletes messages based on an offset range. Select Delete Messages from the Action menu.
This page describes how to use Lenses to view metrics for a topic.
To view a live snapshot of the metrics for a topic, select the metrics tab for the topic.
This will show you metric information over the last 30 days, alert rules on the topic, and low-level JMX metrics.
This page describes how to use Lenses approval requests.
To enable Approval Requests for a group, grant the group the Create Topic Request permission on the data namespace, either when creating the group or by adding it to an existing group. When a user belonging to this group creates a topic, it is sent for approval first.
Go to Environments->[Your Environment]->Admin->Audits->Requests, select the request, and click view.
Approve or reject the request. If you approve, the topic will be created.
This page describes how to use Lenses to add metadata and tags to topics in Kafka.
To add descriptions or tags to datasets, click the edit icon in the Summary panel.
This page describes how to use Lenses to view topic partition metrics and configuration.
To view topic partitions select the Partition tab. Here you can see a heat map of messages in the topic and their distribution across the partitions.
Is the map evenly distributed? If not, you might have partition skew.
Further information about the partitions and replicas is displayed, for example, whether the replicas are in-sync or not.
If the replicas are not in-sync an infrastructure alert will be raised.
This page describes how to use Lenses to view and manage topic configurations in Kafka.
To view a configuration for a topic select the Configuration tab. Here you will see the current configurations inherited (default) from the brokers and if they have been overridden (current value).
To edit a configuration click the Edit icon and enter your value.
A declarative SQL interface for querying, transforming and manipulating data at rest and data in motion. It works with Apache Kafka topics and other data sources, helping developers and Kafka users work with their data.
The Lenses SQL Snapshot engine accesses the data at the point in time the query is executed. This means that, for Apache Kafka, data added just after the query was initiated will not be processed.
Typical use cases include, but are not limited to:
Identifying a specific message.
Identifying a particular payment transaction that your system has processed.
Identifying all thermostat readings for a specific customer, if you work for an energy provider.
Counting transactions processed within a given time window.
The Snapshot engine presents a familiar SQL interface, but remember that it queries Kafka with no indexes. Use Kafka's metadata (partition, offset, timestamp) to improve query performance.
Go to Environments->[Your Environment]->Workspace->Sql Studio, enter your query, and click run.
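For instance, a minimal snapshot query looks like standard SQL (the topic name `customer_events` is illustrative):

```sql
-- Return up to 100 records from a hypothetical topic
SELECT *
FROM customer_events
LIMIT 100;
```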
This page describes how to use Lenses topic settings to provide governance when creating topics in your Kafka cluster.
Topic settings and naming rules allow for the enforcement of best practices when onboarding new teams and topics into your data platform.
Topic configuration rules can be used to enforce partition sizing, replication, and retention configuration during topic creation. Go to Environments->[Your Environment]->Admin->Topic Settings->Edit.
By setting naming conventions you can control how topics are named. To define a naming convention, go to Environments->[Your Environment]->Admin->Topic Settings->Edit. Naming rules allow you to select from predefined regex or apply your own.
This page describes how to use Lenses to download messages to CSV or JSON from a Kafka topic.
Only the data returned to the frontend is downloaded.
Data can be downloaded, optionally including headers, as JSON or as CSV with a choice of delimiters.
This page describes Roles in Lenses.
Lenses IAM is built around Roles. Roles contain policies, and each policy defines a set of actions a user is allowed to take.
Roles are then assigned to groups.
The Lenses policies are resource-based. They are YAML documents attached to a resource.
Each policy has:
Action
Resource
Effect
The action describes the action or verb that a user can perform. The format of the action is:
For example, to list topics in Kafka:
For a full list of the actions see Permission Reference.
To allow all actions set '*'
To restrict access to resources, for example, to only list topics beginning with red, use the resource field.
To match all resources, set '*'.
Effect either allows or denies the action on the resource. If allow is not set, the action is denied, and if any policy for a resource has a deny effect, the deny takes precedence.
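Putting the three fields together, a policy might look like the sketch below. The YAML layout and the action name are illustrative assumptions; consult the Permission Reference for the real action names.

```yaml
# Illustrative policy sketch: allow listing topics beginning with "red"
# in any environment/cluster, and deny everything on production topics.
- action: "kafka:ListTopics"            # hypothetical action name
  resource: "kafka:topic:*/*/red*"
  effect: allow
- action: "*"
  resource: "kafka:topic:production/*/*"
  effect: deny
```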
To create a Role, go to IAM->Roles->New Role.
You can also manage Roles via the CLI and YAML, for integration into your CI/CD pipelines.
This page describes Service Accounts in Lenses.
Service accounts are intended for programmatic access to Lenses.
Service accounts are assigned to groups. The groups inherit permissions from the roles assigned to the groups.
Each service account has a key that is used to authenticate and identify the service account.
In addition you can set:
Description
Resource name - Must be unique across Lenses.
Key expiry
Regenerate the key
Key expiry can be 7, 30, 60 or 90 days, 1 year, a custom expiration, or no expiration at all.
To create a Service Account, go to IAM->Service Accounts->New Service Account. Once created, you can assign the service account to groups.
You can also manage Service Accounts via the CLI and YAML, for integration into your CI/CD pipelines.
This page describes IAM groups in Lenses.
Groups are a collection of users, service accounts and roles.
Users can be assigned to Groups in two ways:
Manually
Linked from the groups provided by your SSO provider
This behaviour can be toggled in the organizational settings of your profile. To control the default set the following in the config.yaml for HQ.
Groups can be defined with the following metadata:
Colour
Description
Each group has a resource name that uniquely identifies it across an HQ installation.
To create a Group, go to IAM->Groups->New Group; create the group, then assign members, service accounts and roles.
You can also manage Groups via the CLI and YAML, for integration into your CI/CD pipelines.
This page describes Users in Lenses.
Users are assigned to groups. The groups inherit permissions from the roles assigned to the groups.
Users can be manually created in Lenses. Users can be one of two types:
SSO, or
Basic Authentication
When creating a User, you can assign them group memberships.
Each user, once logged in, can update their name and profile photo, and set an email address.
For SSO, your SSO email is still required to log in.
To create a User, go to IAM->Users->New User. Once created, you can assign the user to a group.
You can also manage Users via the CLI and YAML, for integration in your CI/CD pipelines.
This page lists the available configurations in Lenses Agent.
Set in lenses.conf
Reference documentation of all configuration and authentication options:
Key | Description | Default | Type | Required |
---|---|---|---|---|
System or control topics are created by services for their internal use. Below is the list of built-in configurations to identify them.
_schemas
__consumer_offsets
_kafka_lenses_
lsql_*
lsql-*
__transaction_state
__topology
__topology__metrics
_confluent*
*-KSTREAM-*
*-TableSource-*
*-changelog
__amazon_msk*
Wildcard (*) is used to match any name in the path, to capture a list of topics rather than just one. When the wildcard is not specified, Lenses matches on the entry name provided.
If the record schemas are centralized, connectivity to the Schema Registry nodes is defined by a Lenses Connection.
There are two static config entries to enable/disable the deletion of schemas:
Options for specific deployment targets:
Global options
Kubernetes
Common settings, independently of the underlying deployment target:
Kubernetes connectivity is optional. Minimum supported K8 version 0.11.10. All settings are string.
Optimization settings for SQL queries.
Lenses requires these Kafka topics to be available; otherwise, it will try to create them. The topics can be created manually before Lenses is run, or Lenses can be granted the correct Kafka ACLs to create them:
To allow for fine-grained control over the replication factor of the three topics, the following settings are available:
When configuring the replication factor for your deployment, it's essential to consider the requirements imposed by your cloud provider. Many cloud providers enforce a minimum replication factor to ensure data durability and high availability. For example, IBM Cloud mandates a minimum replication factor of 3. Therefore, it's crucial to set the replication factor for the Lenses internal topics to at least 3 when deploying Lenses on IBM Cloud.
All time configuration options are in milliseconds.
Control how Lenses identifies your connectors in the Topology view. Catalogue your connector types, set their icons, and control how Lenses extracts the topics used by your connectors.
Lenses comes preconfigured for some of the popular connectors as well as the Stream Reactor connectors. If Lenses doesn't automatically identify your connector type, use the lenses.connectors.info setting to register it with Lenses. Add a new HOCON object {} for every new Connector in your lenses.connectors.info list:
This configuration allows the connector to work with the topology graph, and also have the RBAC rules applied to it.
To extract the topic information from the connector configuration, source connectors require an extra configuration. The extractor class should be io.lenses.config.kafka.connect.SimpleTopicsExtractor. Using this extractor requires an extra property configuration, which specifies the field in the connector configuration that determines the topics data is sent to.
Here is an example for the file source:
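Since the original snippet is not reproduced here, the sketch below shows the general shape of such an entry; the exact field names are assumptions and may differ in your Lenses version.

```hocon
# Illustrative sketch of registering the file source connector
lenses.connectors.info = [
  {
    class.name = "org.apache.kafka.connect.file.FileStreamSourceConnector"
    name = "File Source"
    sink = false
    # SimpleTopicsExtractor reads the target topics from a connector config field
    extractor.class = "io.lenses.config.kafka.connect.SimpleTopicsExtractor"
    extractor.property = "topic"   # the file source writes to its "topic" property
  }
]
```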
An example of a Splunk sink connector and a Debezium SQL Server connector:
This page describes the IAM Reference options.
service: administration
Resource Syntax
admin:connection:${Environment}/${ConnectionType}/${Connection}
admin:license:${Environment}
admin:lenses-logs:${Environment}
admin:lenses-configuration:${Environment}
admin:setting:${Setting}
Operation | Resource Type | Description |
---|---|---|
service: applications
Resource Syntax
service: alerts
Resource Syntax
alerts:alert:${Environment}/${AlertType}/${Alert}
alerts:rule:${Environment}/Infrastructure/KafkaBrokerDown
alerts:rule:${Environment}/DataProduced/red-app-going-slow
service: audit
Resource Syntax
audit:log:${Environment}
audit:channel:${Environment}/${AuditChannelType}/${AuditChannel}
service: data-policies
Resource Syntax
data-policies:policy/${Environment}/${Policy}
service: environments
Resource Syntax
environments:environment/${Environment}
service: governance
Resource Syntax
governance:request:${Environment}/${ActionType}/*
governance:rule:${Environment}/${RuleCategory}/*
service: iam
Resource Syntax
iam:role:${Role}
iam:group:${Group}
iam:user:${Username}
iam:service-account:${ServiceAccount}
service: kafka-connect
Resource Syntax
kafka-connect:connector:${Environment}/${KafkaConnectCluster}/${Connector}
kafka-connect:cluster:${Environment}/${KafkaConnectCluster}
service: kafka
Resource Syntax
kafka:topic:${Environment}/${KafkaCluster}/${Topic}
kafka:acl:${Environment}/${KafkaCluster}/${AclResourceType}/* or kafka:acl:${Environment}/${KafkaCluster}/${AclResourceType}/${PrincipalType}/${Principal}
kafka:quota:${Environment}/${KafkaCluster}/${QuotaType}/* or
kafka:quota:${Environment}/${KafkaCluster}/clients
kafka:quota:${Environment}/${KafkaCluster}/users-default
kafka:quota:${Environment}/${KafkaCluster}/client/${ClientID}
kafka:quota:${Environment}/${KafkaCluster}/user/${Username}
kafka:quota:${Environment}/${KafkaCluster}/user/${Username}/client/${ClientID}
kafka:quota:${Environment}/${KafkaCluster}/user-client/${Username}/${ClientID}
kafka:quota:${Environment}/${KafkaCluster}/user/${Username}/client/*
kafka:quota:${Environment}/${KafkaCluster}/user-all-clients/${Username}
service: kubernetes
Resource Syntax
kubernetes:cluster:${Environment}/${KubernetesCluster}
kubernetes:namespace:${Environment}/${KubernetesCluster}/${KubernetesNamespace}
service: registry
Resource Syntax
schemas:registry:${Environment}/${SchemaRegistry}
service: schemas
Resource Syntax
schemas:schema:${Environment}/${SchemaRegistry}/${Schema}
service: sql-streaming
Resource Syntax
sql-streaming:sql-processor:${Environment}/${KubernetesCluster}/${KubernetesNamespace}/${SqlProcessor}
For IN_PROC processors: sql-streaming:sql-processor:${Environment}/lenses-in-process/default/${SqlProcessor}
This page describes how to use Lenses to use the Explore screen to explore, search and debug messages in a topic.
After selecting a topic, you will be shown more details of it. The SQL Snapshot engine returns the latest 200 messages from partition 0. Both the key and value of each message are displayed in an expandable tree format.
At the top of each message, the Kafka metadata (partition, timestamp, offset) is displayed.
Hovering to the right of a message allows you to copy it to the clipboard.
Downloading all messages to JSON or CSV is also supported.
The SQL Snapshot engine deserializes the data on the backend of Lenses and sends it over the WebSocket to the client. By default, the data is presented in a tree format but it's also possible to flatten the data into a grid view. Select the grid icon.
Use the partition drop-down to change the partition to return messages you are interested in.
Use the timestamp picker to search for messages from a timestamp.
Use the offset select to search for messages from an offset.
The SQL Snapshot engine has a live mode, in which it returns a sample of messages matching the query. To enable this, select the Live Sample button. The data view will then update with live records as they are written to the topic. You can also edit the query if required.
This is sample data, not the full set, to avoid overloading the browser.
For the SQL Snapshot engine to return data, it needs to understand the format of the data in a topic. If a topic is backed by a Schema Registry, it is automatically set to AVRO. For other types, such as JSON or strings, the engine tries to determine the format.
If you wish to override or correct the format used select either Reset Types or Change Types from the action menu.
This page describes the concepts of the Lenses SQL snapshot engine that drives the SQL Studio allowing you to query data in Kafka.
Escape topic names with backticks if they contain non-alpha numeric characters
Snapshot queries on streaming data provide answers to a direct question, e.g. The current balance is $10. The query is active, the data is passive.
A single entry in a Kafka topic is called a message.
The engine considers a message to have four distinct components: key, value, headers and metadata.
Currently, the Snapshot Engine supports four different facets: _key, _value, _headers and _metadata. These strings can be used to reference properties of each of the aforementioned message components and build a query that way.
By default, unqualified properties are assumed to belong to the _value facet:
In order to reference a different facet, a facet qualifier can be added:
When more than one source/topic is specified in a query (as happens when two topics are joined), a table reference can be added to the selection to resolve the ambiguity:
The same can be done for any of the other facets (_key, _meta, _headers).
Note Using a wildcard selection statement SELECT * provides only the value component of a message.
Headers are interpreted as a simple mapping of strings to strings. This means that if a header is a JSON, XML or any other structured type, the snapshot engine will still read it as a string value.
Messages can contain nested elements and embedded arrays. The . operator is used to refer to children, and the [] operator is used to refer to an element in an array. You can use a combination of these two operators to access data of any depth.
You explicitly reference the key, value and metadata. For the key use _key, for the value use _value, and for metadata use _meta. When there is no prefix, the engine resolves the field(s) as being part of the message value. For example, the following two queries are identical:
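For instance (topic and field names are illustrative):

```sql
-- These two queries are identical; unqualified fields resolve to _value
SELECT temperature FROM sensors;
SELECT _value.temperature FROM sensors;
```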
When the key or a value content is a primitive data type use the prefix only to address them.
For example, if messages contain a device identifier as the key and the temperature as the value, SQL code would be:
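A sketch, assuming a hypothetical topic named device_temperatures with a primitive key and value:

```sql
-- The key holds the device identifier, the value holds the temperature
SELECT _key AS device_id, _value AS temperature
FROM device_temperatures;
```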
Use the _meta keyword to address the metadata. For example:
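A sketch (the topic name is illustrative):

```sql
-- Project the Kafka metadata alongside the value
SELECT _meta.partition, _meta.offset, _meta.timestamp, _value
FROM sensors;
```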
When projecting a field into a target record, Lenses allows complex structures to be built. This can be done by using a nested alias like below:
The result would be a struct with the following shape:
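As an illustrative sketch (topic and field names assumed), projecting a field under a nested alias:

```sql
-- The alias "metrics.total" creates a nested struct in the output
SELECT amount AS metrics.total
FROM payments;
-- Resulting value shape: { "metrics": { "total": <amount> } }
```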
When two alias names clash, the snapshot engine does not “override” that field. Lenses will instead generate a new name by appending a unique integer. This means that a query like the following:
will generate a structure like the following:
Tabled queries allow you to nest queries. Let us take the query in the previous section and say we are only interested in those entries where there is more than one customer per country.
Run the query, and you will only see those entries for which there is more than one person registered per country.
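A sketch of such a nested query, assuming a hypothetical customers topic with a country field:

```sql
-- The outer query filters the aggregated inner result
SELECT country, customers
FROM (
  SELECT country, COUNT(*) AS customers
  FROM customers
  GROUP BY country
)
WHERE customers > 1;
```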
Functions can be used directly.
For example, the ROUND function allows you to round numeric values:
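For instance (topic and field names are illustrative):

```sql
-- Round each reading to the nearest integer
SELECT ROUND(temperature) AS rounded_temperature
FROM sensors;
```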
This page describes the best practices when using Lenses SQL Studio to query data in Kafka.
Does Apache Kafka have indexing?
No. Apache Kafka does not have full indexing capabilities on the payload (indexes typically come at a high cost, even on an RDBMS or a system like Elasticsearch); however, Kafka does index the metadata.
The only filters Kafka supports are topic, partition and offsets or timestamps.
When querying Kafka topic data with SQL that filters on a payload field, such as a transaction id, a full scan will be executed: the query processes the entire data on that topic to identify all matching records.
If the Kafka topic contains a billion 50KB messages, that would require querying 50 GB of data. Depending on your network capabilities, brokers' performance, any quotas on your account, and other parameters, fetching 50 GB of data could take some time; even more if the data is compressed, in which case the client has to decompress it before parsing the raw bytes into a structure the query can be applied to.
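Where possible, push the metadata filters Kafka does support into the query instead. A sketch with illustrative names:

```sql
-- Restrict the scan to one partition and a starting offset
SELECT *
FROM payments
WHERE _meta.partition = 0
  AND _meta.offset >= 1000000
LIMIT 1000;
```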
When Lenses can’t read (deserialize) your topic’s messages, it classifies them as “bad records”. This happens for one of the following reasons:
Kafka records are corrupted. On an AVRO topic, a rogue producer might have published a different format
Lenses topic settings do not match the payload data. Maybe a topic was incorrectly given AVRO format when it’s JSON or vice versa
If AVRO payload is involved, maybe the Schema Registry is down or not accessible from the machine running Lenses
By default, Lenses skips them and displays the records' metadata in the Bad Records tab. If you want to force-stop the query in such cases, use:
Querying a table can take a long time if it contains a lot of records. The underlying Kafka topic has to be read, the filter conditions applied, and the projections made.
Additionally, the SELECT statement could end up bringing a large amount of data to the client. To constrain the resources involved, Lenses allows for context customization, which drives the execution and gives control to the user. Here is the list of context parameters to overwrite:
All the above values can be given a default value via the configuration file. Using lenses.sql.settings as a prefix, format.timestamp can be set like this:
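In lenses.conf this takes the form:

```properties
# Default for the format.timestamp context parameter, set via the
# lenses.sql.settings prefix in lenses.conf
lenses.sql.settings.format.timestamp=true
```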
Lenses SQL uses a Kafka consumer to read the data. This means that an advanced user with knowledge of Kafka could tweak the consumer properties to achieve better throughput, although this is rarely needed. The query context can receive Kafka consumer settings; for example, the max.poll.records consumer setting can be set as:
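A sketch, assuming the query context accepts consumer properties via SET statements:

```sql
-- Tune the underlying Kafka consumer for this query
SET max.poll.records = 500;
SELECT * FROM payments LIMIT 1000;
```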
Streaming SQL operates on unbounded streams of events: a query would normally never terminate. To bring query-termination semantics to Apache Kafka, we introduced four controls:
LIMIT = 10000 - Force the query to terminate when 10,000 records are matched.
max.bytes = 20000000 - Force the query to terminate once 20 MBytes have been retrieved.
max.time = 60000 - Force the query to terminate after 60 seconds.
max.zero.polls = 8 - Force the query to terminate after 8 consecutive polls are empty, indicating we have exhausted a topic.
Thus, when retrieving data, you can set a limit of 1GB to the maximum number of bytes retrieved and a maximum query time of one hour like this:
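A sketch combining the controls above (the SET syntax mirrors the context-parameter examples):

```sql
SET max.bytes = 1000000000; -- ~1 GB ceiling on retrieved data
SET max.time = 3600000;     -- stop after one hour (milliseconds)
SELECT * FROM payments;
```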
This page describes how to use Lenses to back up and restore data in a Kafka topic to and from AWS S3.
To initiate either a topic backup to S3 or topic restoration from S3, follow these steps:
Navigate to the Actions menu within the Kafka topic details screen.
Choose your desired action: “Backup Topic to S3” or “Restore Topic from S3.”
A modal window will open, providing step-by-step guidance to configure your backup or restoration entity.
A single topic can be backed up or restored to/from multiple locations.
If a topic is being backed up it will be displayed on the topology.
Additional information on the location of the backup can be found by navigating to the topic in the Explore screen where the information is available in the Summary section.
To back up a topic, navigate to the topic you wish to back up and select Backup Topic to S3 from the Actions menu.
Enter the S3 bucket ARN and select the Connect Cluster that has the Lenses S3 connector installed.
Click Backup Topic; an S3 sink connector instance will then be deployed and configured automatically to back up data from the topic to the specified bucket.
To restore a topic, navigate to the topic you wish to restore and select Restore Topic from S3 from the Actions menu.
Enter the S3 bucket ARN and select the Connect Cluster that has the Lenses S3 connector installed. Click Restore Topic; an S3 source connector instance will then be deployed and configured automatically to restore data to the topic from the specified bucket.
For a full list of functions, see the SQL functions reference.
Query by partition, offset or timestamp to avoid full scans.
Key | Description | Default | Type | Required |
---|---|---|---|---|
lenses.eula.accept | Accept the Lenses EULA | false | boolean | yes |
lenses.ip | Bind HTTP at the given endpoint. Use in conjunction with lenses.port | 0.0.0.0 | string | no |
lenses.port | The HTTP port to listen for API, UI and WS calls | 9991 | int | no |
lenses.jmx.port | Bind JMX port to enable monitoring Lenses | | int | no |
lenses.root.path | The path from which all the Lenses URLs are served | | string | no |
lenses.secret.file | The full path to security.conf for security credentials | security.conf | string | no |
lenses.sql.execution.mode | Streaming SQL mode IN_PROC (test mode) or KUBERNETES (prod mode) | IN_PROC | string | no |
lenses.offset.workers | Number of workers to monitor topic offsets | 5 | int | no |
lenses.telemetry.enable | Enable telemetry data collection | true | boolean | no |
lenses.kafka.control.topics | An array of topics to be treated as “system topics” | list | array | no |
lenses.grafana | Add your Grafana url i.e. http://grafanahost:port | | string | no |
lenses.api.response.cache.enable | If enabled, it disables client cache on the Lenses API HTTP responses by adding these HTTP Headers: Cache-Control: no-cache, no-store, must-revalidate, Pragma: no-cache, and Expires: -1 | false | boolean | no |
lenses.workspace | Directory to write temp files. If write access is denied, Lenses will fall back to /tmp | /run | string | no |
Key | Description | Default |
---|---|---|
lenses.access.control.allow.methods | HTTP verbs allowed in cross-origin HTTP requests | GET,POST,PUT,DELETE,OPTIONS |
lenses.access.control.allow.origin | Allowed hosts for cross-origin HTTP requests | * |
lenses.allow.weak.ssl | Allow https:// with self-signed certificates | false |
lenses.ssl.keystore.location | The full path to the keystore file used to enable TLS on Lenses port | |
lenses.ssl.keystore.password | Password for the keystore file | |
lenses.ssl.key.password | Password for the ssl certificate used | |
lenses.ssl.enabled.protocols | Version of TLS protocol to use | TLSv1.2 |
lenses.ssl.algorithm | X509 or PKIX algorithm to use for TLS termination | SunX509 |
lenses.ssl.cipher.suites | Comma separated list of ciphers allowed for TLS negotiation | |
lenses.security.kerberos.service.principal | The Kerberos principal for Lenses to use in the SPNEGO form: HTTP/lenses.address@REALM.COM | |
lenses.security.kerberos.keytab | Path to Kerberos keytab with the service principal. It should not be password protected | |
lenses.security.kerberos.debug | Enable Java’s JAAS debugging information | false |
Key | Description | Default | Type | Required |
---|---|---|---|---|
lenses.storage.hikaricp.[*] | To pass additional properties to HikariCP connection pool | | | no |
lenses.storage.directory | The full path to a directory for Lenses to use for persistence | "./storage" | string | no |
lenses.storage.postgres.host | Host of PostgreSQL server for Lenses to use for persistence | | string | no |
lenses.storage.postgres.port | Port of PostgreSQL server for Lenses to use for persistence | 5432 | integer | no |
lenses.storage.postgres.username | Username for PostgreSQL database user | | string | no |
lenses.storage.postgres.password | Password for PostgreSQL database user | | string | no |
lenses.storage.postgres.database | PostgreSQL database name for Lenses to use for persistence | | string | no |
lenses.storage.postgres.schema | PostgreSQL schema name for Lenses to use for persistence | "public" | string | no |
lenses.storage.postgres.properties.[*] | To pass additional properties to PostgreSQL JDBC driver | | | no |
Key | Description | Type |
---|---|---|
lenses.schema.registry.delete | Allow schemas to be deleted. Default is false | boolean |
lenses.schema.registry.cascade.delete | Deletes associated schemas when a topic is deleted. Default is false | boolean |
Key | Description | Default |
---|---|---|
lenses.deployments.events.buffer.size | Buffer size for events coming from Deployment targets such as Kubernetes | 10000 |
lenses.deployments.errors.buffer.size | Buffer size for errors happening on the communication between Lenses and the Deployment targets such as Kubernetes | 1000 |
lenses.kubernetes.processor.image.name | The url for the streaming SQL Docker for K8 | lensesioextra/sql-processor |
lenses.kubernetes.processor.image.tag | The version/tag of the above container | 5.2 |
lenses.kubernetes.config.file | The path for the kubectrl config file | /home/lenses/.kube/config |
lenses.kubernetes.pull.policy | Pull policy for K8 containers: IfNotPresent or Always | IfNotPresent |
lenses.kubernetes.service.account | The service account for deployments. Will also pull the image | default |
lenses.kubernetes.init.container.image.name | The docker/container repository url and name of the Init Container image used to deploy applications to Kubernetes | lensesio/lenses-cli |
lenses.kubernetes.init.container.image.tag | The tag of the Init Container image used to deploy applications to Kubernetes | 5.2.0 |
lenses.kubernetes.watch.reconnect.limit | How many times to reconnect to Kubernetes Watcher before considering the cluster unavailable | 10 |
lenses.kubernetes.watch.reconnect.interval | How long to wait between Kubernetes Watcher reconnection attempts, in milliseconds | 5000 |
lenses.kubernetes.websocket.timeout | How long to wait for a Kubernetes Websocket response, in milliseconds | 15000 |
lenses.kubernetes.websocket.ping.interval | How often to ping Kubernetes Websocket to check it’s alive, in milliseconds | 30000 |
lenses.kubernetes.pod.heap | The max amount of memory the underlying Java process will use | 900M |
lenses.kubernetes.pod.min.heap | The initial amount of memory the underlying Java process will allocate | 128M |
lenses.kubernetes.pod.mem.request | How much memory resource the Pod Container will request | 128M |
lenses.kubernetes.pod.mem.limit | The Pod Container memory limit | 1152M |
lenses.kubernetes.pod.cpu.request | How much cpu resource the Pod Container will request | null |
lenses.kubernetes.pod.cpu.limit | The Pod Container cpu limit | null |
lenses.kubernetes.namespaces | Object setting a list of Kubernetes namespaces that Lenses will see for each of the specified and configured clusters | null |
lenses.kubernetes.pod.liveness.initial.delay | Amount of time Kubernetes will wait to check the Processor’s health for the first time. It can be expressed like 30 second, 2 minute or 3 hour; mind the time unit is singular | 60 second |
Key | Description | Default |
---|---|---|
lenses.kubernetes.config.reload.interval | Time interval to reload the Kubernetes configuration file, in milliseconds | 30000 |
Key | Description | Type | Default |
---|---|---|---|
lenses.sql.settings.max.size | Restricts the max bytes that a kafka sql query will return | long | 20971520 (20MB) |
lenses.sql.settings.max.query.time | Max time (in msec) that a sql query will run | int | 3600000 (1h) |
lenses.sql.settings.max.idle.time | Max time (in msec) for a query when it reaches the end of the topic | int | 5000 (5 sec) |
lenses.sql.settings.show.bad.records | By default show bad records when querying a kafka topic | boolean | true |
lenses.sql.settings.format.timestamp | By default convert AVRO date to human readable format | boolean | true |
lenses.sql.settings.live.aggs | By default allow aggregation queries on kafka data | boolean | true |
lenses.sql.sample.default | Number of messages to sample when live tailing a kafka topic | int | 2 /window |
lenses.sql.sample.window | How frequently to sample messages when tailing a kafka topic | int | 200 msec |
lenses.sql.websocket.buffer | Buffer size for messages in a SQL query | int | 10000 |
lenses.metrics.workers | Number of workers for parallelising SQL queries | int | 16 |
lenses.kafka.ws.buffer.size | Buffer size for WebSocket consumer | int | 10000 |
lenses.kafka.ws.max.poll.records | Max number of kafka messages to return in a single poll() | long | 1000 |
lenses.sql.state.dir | Folder to store KStreams state | string | logs/sql-kstream-state |
lenses.sql.udf.packages | The list of allowed java packages for UDFs/UDAFs | array of strings | ["io.lenses.sql.udf"] |
lenses.topics.external.topology
Topic for applications to publish their topology
1
3
(recommended)
__topology
yes
N/A
lenses.topics.external.metrics
Topic for external application to publish their metrics
1
3
(recommended)
__topology__metrics
no
1 day
lenses.topics.metrics
Topic for SQL Processor to send the metrics
1
3
(recommended)
_kafka_lenses_metrics
no
lenses.topics.replication.external.topology
Replication factor for the lenses.topics.external.topology
topic
1
lenses.topics.replication.external.metrics
Replication factor for the lenses.topics.external.metrics
topic
1
lenses.topics.replication.metrics
Replication factor for the lenses.topics.metrics
topic
1
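The replication factors for the Lenses internal topics default to 1, which suits a single-broker development cluster; on production clusters the recommended value of 3 should be set explicitly:

```properties
# Production cluster: use the recommended replication factor of 3
# for the Lenses internal topics
lenses.topics.replication.external.topology = 3
lenses.topics.replication.external.metrics = 3
lenses.topics.replication.metrics = 3
```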
lenses.interval.summary
How often to refresh kafka topic list and configs
long
10000
lenses.interval.consumers.refresh.ms
How often to refresh kafka consumer group info
long
10000
lenses.interval.consumers.timeout.ms
How long to wait for kafka consumer group info to be retrieved
long
300000
lenses.interval.partitions.messages
How often to refresh kafka partition info
long
10000
lenses.interval.type.detection
How often to check kafka topic payload info
long
30000
lenses.interval.user.session.ms
How long a client-session stays alive if inactive (4 hours)
long
14400000
lenses.interval.user.session.refresh
How often to check for idle client sessions
long
60000
lenses.interval.topology.topics.metrics
How often to refresh topology info
long
30000
lenses.interval.schema.registry.healthcheck
How often to check the schema registries health
long
30000
lenses.interval.schema.registry.refresh.ms
How often to refresh schema registry data
long
30000
lenses.interval.metrics.refresh.zk
How often to refresh ZK metrics
long
5000
lenses.interval.metrics.refresh.sr
How often to refresh Schema Registry metrics
long
5000
lenses.interval.metrics.refresh.broker
How often to refresh Kafka Broker metrics
long
5000
lenses.interval.metrics.refresh.connect
How often to refresh Kafka Connect metrics
long
30000
lenses.interval.metrics.refresh.brokers.in.zk
How often to refresh from ZK the Kafka broker list
long
5000
lenses.interval.topology.timeout.ms
Time period when a metric is considered stale
long
120000
lenses.interval.audit.data.cleanup
How often to clean up dataset view entries from the audit log
long
300000
lenses.audit.to.log.file
Path to a file to write audits to in JSON format.
string
lenses.interval.jmxcache.refresh.ms
How often to refresh the JMX cache used in the Explore page
long
180000
lenses.interval.jmxcache.graceperiod.ms
How long to pause when a JMX connectivity error occurs
long
300000
lenses.interval.jmxcache.timeout.ms
How long to wait for a JMX response
long
500
lenses.interval.sql.udf
How often to look for new UDF/UDAF (user defined [aggregate] functions)
long
10000
lenses.kafka.consumers.batch.size
How many consumer groups to retrieve in a single request
Int
500
lenses.kafka.ws.heartbeat.ms
How often to send heartbeat messages in TCP connection
long
30000
lenses.kafka.ws.poll.ms
Max time for kafka consumer data polling on WS APIs
long
10000
lenses.kubernetes.config.reload.interval
Time interval to reload the Kubernetes configuration file.
long
30000
lenses.kubernetes.watch.reconnect.limit
How many times to reconnect to Kubernetes Watcher before considering the cluster unavailable
long
10
lenses.kubernetes.watch.reconnect.interval
How often to wait between Kubernetes Watcher reconnection attempts
long
5000
lenses.kubernetes.websocket.timeout
How long to wait for a Kubernetes Websocket response
long
15000
lenses.kubernetes.websocket.ping.interval
How often to ping Kubernetes Websocket to check it’s alive
long
30000
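The Kubernetes-related settings above combine as follows; a sketch with the default values:

```properties
# Kubernetes deployment target tuning (defaults shown)
lenses.kubernetes.config.reload.interval = 30000    # re-read the kubeconfig every 30s
lenses.kubernetes.watch.reconnect.limit = 10        # attempts before the cluster is considered unavailable
lenses.kubernetes.watch.reconnect.interval = 5000   # wait 5s between reconnection attempts
lenses.kubernetes.websocket.timeout = 15000
lenses.kubernetes.websocket.ping.interval = 30000
```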
lenses.akka.request.timeout.ms
Max time for a response in an Akka Actor
long
10000
lenses.sql.monitor.frequency
How often to emit healthcheck and performance metrics on Streaming SQL
long
10000
lenses.audit.data.access
Record dataset access as audit log entries
boolean
true
lenses.audit.data.max.records
How many dataset view entries to retain in the audit log. Set to -1
to retain indefinitely
int
500000
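For instance, to audit dataset access, cap the retained entries, and also mirror audits to a JSON file (the file path here is a hypothetical example):

```properties
lenses.audit.data.access = true
lenses.audit.data.max.records = 500000                   # -1 retains entries indefinitely
lenses.audit.to.log.file = "/var/log/lenses/audit.json"  # hypothetical path
```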
lenses.explore.lucene.max.clause.count
Override Lucene’s maximum number of clauses permitted per BooleanQuery
int
1024
lenses.explore.queue.size
Optional setting to bound the internal queue used by the Lenses catalog subsystem. It must be a positive integer, or it is ignored.
int
N/A
lenses.interval.kafka.connect.http.timeout.ms
How long to wait for Kafka Connect response to be retrieved
int
10000
lenses.interval.kafka.connect.healthcheck
How often to check Kafka Connect health
int
15000
lenses.interval.schema.registry.http.timeout.ms
How long to wait for Schema Registry response to be retrieved
int
10000
lenses.interval.zookeeper.healthcheck
How often to check the Zookeeper health
int
15000
lenses.ui.topics.row.limit
The number of Kafka records to load automatically when exploring a topic
int
200
lenses.deployments.connect.failure.alert.check.interval
Time interval in seconds between checks that a connector's failure grace period has completed. Used by the Connect auto-restart failed connectors functionality. It needs to be a value in the range (1, 600].
int
10
lenses.provisioning.path
Folder on the filesystem containing the provisioning data. See the provisioning documentation for further details.
string
lenses.provisioning.interval
Time interval in seconds to check for changes on the provisioning resources
int
lenses.schema.registry.client.http.retryOnTooManyRequest
When enabled, Lenses will retry a request whenever the schema registry returns a 429 Too Many Requests
boolean
lenses.schema.registry.client.http.maxRetryAwait
Max amount of time to wait whenever a 429 Too Many Requests
is returned.
duration
lenses.schema.registry.client.http.maxRetryCount
Max retry count whenever a 429 Too Many Requests
is returned.
integer
2
lenses.schema.registry.client.http.rate.type
Specifies if http requests to the configured schema registry should be rate limited. Can be "session" or "unlimited"
"unlimited" | "session"
lenses.schema.registry.client.http.rate.maxRequests
When the rate limiter type is "session", this determines the maximum number of requests allowed per window.
integer
N/A
lenses.schema.registry.client.http.rate.window
When the rate limiter type is "session", this determines the duration of the window used.
duration
N/A
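The retry and rate-limit settings above can be combined so that Lenses backs off when the schema registry throttles it. A sketch (the duration syntax shown is an assumption):

```properties
# Retry on 429 Too Many Requests
lenses.schema.registry.client.http.retryOnTooManyRequest = true
lenses.schema.registry.client.http.maxRetryAwait = 5s    # duration syntax assumed
lenses.schema.registry.client.http.maxRetryCount = 2

# Client-side rate limiting: at most 100 requests per 1-second window
lenses.schema.registry.client.http.rate.type = "session"
lenses.schema.registry.client.http.rate.maxRequests = 100
lenses.schema.registry.client.http.rate.window = 1s
```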
lenses.schema.connect.client.http.retryOnTooManyRequest
Retry a request whenever a connect cluster returns a 429 Too Many Requests
boolean
lenses.schema.connect.client.http.maxRetryAwait
Max amount of time to wait whenever a 429 Too Many Requests
is returned.
duration
lenses.schema.connect.client.http.maxRetryCount
Max retry count whenever a 429 Too Many Requests
is returned.
integer
2
lenses.connect.client.http.rate.type
Specifies if http requests to the configured connect cluster should be rate limited. Can be "session" or "unlimited"
"unlimited" | "session"
lenses.connect.client.http.rate.maxRequests
When the rate limiter type is "session", this determines the maximum number of requests allowed per window.
integer
N/A
lenses.connect.client.http.rate.window
When the rate limiter type is "session", this determines the duration of the window used.
duration
N/A
apps.external.http.state.refresh.ms
When registering a runner for an external app, a health-check interval can be specified. If it is not, this default interval (in milliseconds) is used.
30000
int
no
apps.external.http.state.cache.expiration.ms
The last known state of each runner is stored in a cache. Cache entries are invalidated after the time (in milliseconds) defined by this key, which should not be lower than the apps.external.http.state.refresh.ms
value.
60000
int
no
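The two external-application settings work together; keeping the cache expiration at or above the refresh interval avoids reporting stale runner state. Defaults shown:

```properties
apps.external.http.state.refresh.ms = 30000          # default health-check interval
apps.external.http.state.cache.expiration.ms = 60000 # must not be lower than the refresh interval
```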
CreateConnection
connection
ListConnections
connection
GetConnectionDetails
connection
UpdateConnection
connection
DeleteConnection
connection
ListLicenses
license
GetLicenseDetails
license
UpdateLicense
license
GetLensesLogs
lenses-logs
GetLensesConfiguration
lenses-configuration
ListAgents
agent
GetAgentDetails
agent
UpdateAgent
agent
DeleteAgent
agent
GetSetting
setting
UpdateSetting
setting
RegisterApplication
external-application
UnregisterApplication
external-application
ListApplications
external-application
GetApplicationDetails
external-application
ListApplicationDependants
external-application
CreateAlertRule
rule
DeleteAlertRule
rule
UpdateAlertRule
rule
ListAlertRules
rule
GetAlertRuleDetails
rule
ToggleAlertRule
rule
ListAlertEvents
alert-event
DeleteAlertEvents
alert-event
CreateChannel
alert-channel
ListChannels
alert-channel
GetChannelDetails
alert-channel
UpdateChannel
alert-channel
DeleteChannel
alert-channel
ListLogEvents
log
GetLogEventDetails
log
CreateChannel
channel
ListChannels
channel
GetChannelDetails
channel
UpdateChannel
channel
DeleteChannel
channel
ToggleChannel
channel
CreatePolicy
policy
ListPolicies
policy
GetPolicyDetails
policy
UpdatePolicy
policy
DeletePolicy
policy
ListPolicyDependants
policy
CreateEnvironment
environment
DeleteEnvironment
environment
ListEnvironment
environment
UpdateEnvironment
environment
AccessEnvironment
environment
CreateRequest
request
ListRequests
request
GetRequestDetails
request
ApproveRequest
request
DenyRequest
request
GetRuleDetails
rule
UpdateRule
rule
CreateRole
role
DeleteRole
role
UpdateRole
role
ListRoles
role
ListRoleDependants
role
GetRoleDetails
role
CreateGroup
group
DeleteGroup
group
UpdateGroup
group
ListGroups
group
ListGroupDependants
group
GetGroupDetails
group
CreateUser
user
DeleteUser
user
UpdateUser
user
ListUsers
user
ListUserDependants
user
GetUserDetails
user
CreateServiceAccount
service account
DeleteServiceAccount
service account
UpdateServiceAccount
service account
ListServiceAccounts
service account
ListServiceAccountDependants
service account
GetServiceAccountDetails
service account
CreateConnector
connector
ListConnectors
connector
GetConnectorConfiguration
connector
UpdateConnectorConfiguration
connector
DeleteConnector
connector
StartConnector
connector
StopConnector
connector
ListConnectorDependants
connector
ListClusters
cluster
GetClusterDetails
cluster
DeployConnectors
cluster
CreateTopic
topic
DeleteTopic
topic
ListTopic
topic
GetTopicDetails
topic
UpdateTopicDetails
topic
ReadTopicData
topic
WriteTopicData
topic
DeleteTopicData
topic
ListTopicDependants
topic
CreateAcl
acl
GetAclDetails
acl
UpdateAcl
acl
DeleteAcl
acl
CreateQuota
quota
ListQuotas
quota
GetQuotaDetails
quota
UpdateQuota
quota
DeleteQuota
quota
DeleteConsumerGroup
consumer-group
UpdateConsumerGroup
consumer-group
ListConsumerGroups
consumer-group
GetConsumerGroupDetails
consumer-group
ListConsumerGroupDependants
consumer-group
ListClusters
cluster
GetClusterDetails
cluster
ListNamespaces
namespace
DeployApps
namespace
GetRegistryConfiguration
registry
UpdateRegistryConfiguration
registry
CreateSchema
schema
DeleteSchema
schema
UpdateSchema
schema
GetSchemaDetails
schema
ListSchemas
schema
ListSchemaDependants
schema
CreateProcessor
sql-processor
ListProcessors
sql-processor
GetProcessorDetails
sql-processor
GetProcessorSql
sql-processor
UpdateProcessorSql
sql-processor
DeleteProcessor
sql-processor
StartProcessor
sql-processor
StopProcessor
sql-processor
ScaleProcessor
sql-processor
GetProcessorLogs
sql-processor
ListProcessorDependants
sql-processor
Global Catalogue
Learn how to use the Global Catalogue.
Environment
Learn how to explore topics in an environment.
max.size | The maximum amount of Kafka data to scan. This avoids a full topic scan over large topics. It can be expressed as bytes (1024), kilobytes (1024k), megabytes (10m) or gigabytes (5g). Default is 20MB. |
max.query.time | The maximum amount of time the query is allowed to run. It can be specified as milliseconds (2000ms), hours (2h), minutes (10m) or seconds (60s). Default is 1 hour. |
max.idle.time | The amount of time to wait when no more records are read from the source before the query is completed. Default is 5 seconds. |
LIMIT N | The maximum number of records to return. Default is 10000. |
show.bad.records | Controls the handling of topic records whose payload does not correspond with the table storage format. Default is true, meaning bad records are processed and displayed separately in the Bad Records section. Set it to false to skip them completely. |
format.timestamp | Controls the rendering of Avro date-time values, which Avro encodes as Long values. Set to true to return them as human-readable text. |
format.decimal | Controls the formatting of decimal types. Use it to specify how many decimal places are shown. |
format.uppercase | Controls the formatting of string types. Use it to specify whether strings should all be made uppercase. Default is false. |
live.aggs | Controls whether aggregation queries are allowed to run. Since they accumulate data, they require more memory to retain the state. |
max.group.records | The maximum number of records over which an aggregation is computed. Default is 10,000,000. |
optimize.kafka.partition | When enabled, the primitive used for the _key filter determines the target partition the same way the default Kafka partitioner logic does. |
query.parallel | When used, it parallelizes the query. The number provided is capped by the target topic's partition count. |
query.buffer | Internal buffer for processing messages. A higher number might yield better performance when coupled with query.parallel. |
kafka.offset.timeout | Timeout for retrieving the target topic's start/end offsets. |
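These hints are applied per query with SET statements ahead of the SELECT. A sketch (the topic and field names are hypothetical):

```sql
-- Cap the scan at 50MB and the runtime at 2 minutes,
-- and skip records that do not match the table storage format
SET max.size = '50m';
SET max.query.time = '2m';
SET show.bad.records = false;

SELECT _key, amount      -- hypothetical fields
FROM payments            -- hypothetical topic
LIMIT 100;
```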
Hardware & OS
Learn about the hardware & OS requirements for Linux archive installs.
JVM Options
Understand how to customize the Lenses JVM settings.
Logs
Understand and customize Lenses logging.
JMX
Learn how to connect the Agent to JMX for Kafka, Schema Registries, Kafka Connect and others.
It will update the connections state and validate the configuration. If the validation fails, the state will not be updated.
It will only validate the request, not applying any actual change to the system.
It will try to connect to the configured service as part of the validation step.
Configuration in YAML format representing the connections state.
The only allowed name for the Kafka connection is "kafka".
Kafka security protocol.
SSL keystore file path.
Password to the keystore.
Key password for the keystore.
Password to the truststore.
SSL truststore file path.
JAAS Login module configuration for SASL.
Kerberos keytab file path.
Comma separated list of protocol://host:port to use for initial connection to Kafka.
Mechanism to use when authenticated using SASL.
Default port number for metrics connection (JMX and JOLOKIA).
The username for metrics connections.
The password for metrics connections.
Flag to enable SSL for metrics connections.
HTTP URL suffix for Jolokia or AWS metrics.
HTTP Request timeout (ms) for Jolokia or AWS metrics.
Metrics type.
Additional properties for Kafka connection.
Mapping from node URL to metrics URL, allows overriding metrics target on a per-node basis.
DEPRECATED.
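The Kafka connection fields above map onto the provisioning YAML consumed by the agent. A minimal sketch of a plaintext connection, assuming the documented provisioning schema (the broker address is hypothetical):

```yaml
kafka:
  - name: kafka                 # "kafka" is the only allowed name for this connection
    version: 1
    configuration:
      kafkaBootstrapServers:
        value:
          - PLAINTEXT://broker-1:9092   # hypothetical broker address
      protocol:
        value: PLAINTEXT
```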
The only allowed name for a schema registry connection is "schema-registry".
Path to SSL keystore file
Password to the keystore
Key password for the keystore
Password to the truststore
Path to SSL truststore file
List of schema registry urls
Source for the basic auth credentials
Basic auth user information
Metrics type
Flag to enable SSL for metrics connections
The username for metrics connections
The password for metrics connections
Default port number for metrics connection (JMX and JOLOKIA)
Additional properties for Schema Registry connection
Mapping from node URL to metrics URL, allows overriding metrics target on a per-node basis
DEPRECATED
HTTP URL suffix for Jolokia metrics
HTTP Request timeout (ms) for Jolokia metrics
Username for HTTP Basic Authentication
Password for HTTP Basic Authentication
Enables Schema Registry hard delete
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
The username to connect to the Elasticsearch service.
The password to connect to the Elasticsearch service.
The nodes of the Elasticsearch cluster to connect to, e.g. https://hostname:port. Use the tab key to specify multiple nodes.
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
An API Token for accessing PagerDuty's REST API.
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
The Datadog site.
The Datadog API key.
The Datadog application key.
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
The Slack endpoint to send the alert to.
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
Comma separated list of Alert Manager endpoints.
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
The host name.
An optional port number to be appended to the hostname.
Set to true in order to set the URL scheme to https
. Will otherwise default to http
.
An array of (secret) strings to be passed over to alert channel plugins.
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
Way to authenticate against AWS.
Access key ID of an AWS IAM account.
Secret access key of an AWS IAM account.
AWS region to connect to. If not provided, this is deferred to client configuration.
Specifies the session token value that is required if you are using temporary security credentials that you retrieved directly from AWS STS operations.
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
List of Kafka Connect worker URLs.
Username for HTTP Basic Authentication.
Password for HTTP Basic Authentication.
Flag to enable SSL for metrics connections.
The username for metrics connections.
The password for metrics connections.
Metrics type.
Default port number for metrics connection (JMX and JOLOKIA).
AES256 Key used to encrypt secret properties when deploying Connectors to this ConnectCluster.
Name of the ssl algorithm. If empty default one will be used (X509).
SSL keystore file.
Password to the keystore.
Key password for the keystore.
Password to the truststore.
SSL truststore file.
Mapping from node URL to metrics URL, allows overriding metrics target on a per-node basis.
DEPRECATED.
HTTP URL suffix for Jolokia metrics.
HTTP Request timeout (ms) for Jolokia metrics.
The only allowed name for a schema registry connection is "schema-registry".
Way to authenticate against AWS. The value of this property corresponds to the name of the AWS connection that contains the authentication mode.
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
Access key ID of an AWS IAM account. The value of this property corresponds to the name of the AWS connection that contains the access key ID.
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
Secret access key of an AWS IAM account. The value of this property corresponds to the name of the AWS connection that contains the secret access key.
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
Specifies the session token value that is required if you are using temporary security credentials that you retrieved directly from AWS STS operations.
Enter the Amazon Resource Name (ARN) of the Glue schema registry that you want to connect to.
The period in milliseconds that Lenses will be updating its schema cache from AWS Glue.
The size of the schema cache.
Type of schema registry connection.
Default compatibility mode to use on Schema creation.
The only allowed name for the Zookeeper connection is "zookeeper".
List of zookeeper urls.
Zookeeper /znode path.
Zookeeper connection session timeout.
Zookeeper connection timeout.
Metrics type.
Default port number for metrics connection (JMX and JOLOKIA).
The username for metrics connections.
The password for metrics connections.
Flag to enable SSL for metrics connections.
HTTP URL suffix for Jolokia metrics.
HTTP Request timeout (ms) for Jolokia metrics.
Mapping from node URL to metrics URL, allows overriding metrics target on a per-node basis.
DEPRECATED.
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
The Postgres hostname.
The port number.
The database to connect to.
The user name.
The password.
The SSL connection mode as detailed in https://jdbc.postgresql.org/documentation/head/ssl-client.html.
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
An Integration Key for PagerDuty's service with Events API v2 integration type.
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
The host name for the HTTP Event Collector API of the Splunk instance.
The port number for the HTTP Event Collector API of the Splunk instance.
Use SSL.
This is not encouraged but is required for a Splunk Cloud Trial instance.
HTTP event collector authorization token.
The only allowed name for the Kerberos connection is "kerberos".
Kerberos krb5 config
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
Attached file(s) needed for establishing the connection. The name of each file part is used as a reference in the manifest.
Successfully updated connection state
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$