
6.1.0

Changelog for Lenses 6.1.0

2025-10-24

Packages

Agent image:

  • lensesio/lenses-agent:6.1
  • Helm charts for HQ and Agent: https://helm.repo.lenses.io/

  • Archive installation: https://archive.lenses.io/lenses/

Features / Improvements & Fixes

Changelog details for Lenses 6.1.1

Improvements 💪

SQL Processor Docker image

Previously, the Docker image used by a SQL processor application was fixed, so updates for performance or security enhancements could not be applied. This release introduces a new API and user interface options to attach the desired Docker image.

Fixes 🛠️

Kafka Quotas

Fixes an issue preventing default quotas for Users and Clients from being properly stored and applied.

Lenses HQ

6.1.1

Changelog for Lenses 6.1.1

2025-11-27

Packages

Agent image:

  • lensesio/lenses-agent:6.1.1
  • Helm charts for HQ and Agent: https://helm.repo.lenses.io/

  • Archive installation: https://archive.lenses.io/lenses/

Lenses Agent

Basic Authentication

This page describes configuring basic authentication in Lenses.

Basic authentication is set in the config.yaml for HQ under the auth.users key, as an array of usernames and passwords.

Passwords need to be bcrypt hashes.

This ensures that passwords are hashed and secure rather than stored in plaintext. For instance, instead of storing a password such as "builder" directly, it should be hashed using bcrypt.

An example of a bcrypt-hashed password looks like this:

$2a$12$XQW..XQrtZXCvbQWertqQeFi/1KoQW4eNephNXTfHqtoW9Q4qih5G.

Always ensure that you replace plaintext passwords with their bcrypt counterparts to securely authenticate users.

You can use the Lenses CLI to create a bcrypt password:

lenses utils hash-password
config.yaml
auth:
  users:
  - username: bob
    password: $2a$12$XQW..XQrtZXCvbQWertqQeFi/1KoQW4eNephNXTfHqtoW9Q4qih5G 
  - username: brian
    password: $2a$12$XQW..XQrtZXCvbQWertqQeFi/1KoQW4eNephNXTfHqtoW9Q4qih5G  

What's New?

This page details the release notes of Lenses.

Kafka to Kafka Replication

Lenses 6.1 has introduced Kafka-to-Kafka replication, initially supporting AWS MSK to AWS MSK, including Express Brokers. Kafka Replicators can be deployed through the Lenses UI, featuring comprehensive lifecycle management and monitoring capabilities.

Kafka Connections

Kafka Connections let administrators establish links to Kafka using Kubernetes secrets or service accounts that handle connection credentials. Many organizations employ secret providers like AWS Secret Manager or Vault, automatically syncing them to Kubernetes secrets. This process ensures that Lenses or users deploying applications don't need to manage credentials manually.

Environment Creation

A new environment creation flow has been added: Lenses Agents can now be configured directly from HQ, with a new in-product editor that lets you test and configure Kafka, Schema Registry, Kafka Connect and other connections.

The new APIs support a GitOps-style approach, allowing you to manage the connection state and files fully via the APIs or maintain them in version control.

SQL Studio

The significantly enhanced tree view explorer sidebar in SQL Studio improves user experience when working with an extensive number of topics and environments. We've introduced easier search and navigation designed to minimize scrolling and maximize your daily productivity. Plus, you can now bookmark both topics and environments for quick access.

It is now possible to view a topic's schema directly in SQL Studio, including a powerful split view that allows for seamless comparison of schema versions.

Connecting Lenses to your Kafka environment

Connect Lenses to your actual environments (Kafka clusters).

Setting Up Community Edition

This quick start guide will walk you through installing and starting Lenses using the Community Edition, an all-in-one Docker Compose setup that includes Kafka brokers.

This is a quick start for a local setup using the Lenses Community Edition. To connect to your own Kafka clusters, see here.

By running the following command, including the ACCEPT_EULA variable, you are accepting the Lenses EULA.

Run the following command:
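
A minimal sketch of the command, assuming the Community Edition docker-compose file has been saved locally as docker-compose.yml (the file name and download location are given in the Lenses portal):

terminal
# Accept the EULA and start the Community Edition stack
ACCEPT_EULA=true docker compose up -d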

The very first time you run this command it will take a bit longer as Docker has to download the images. Subsequent runs should take much less time.

To run this setup smoothly, your Docker settings must allocate at least 5GB of memory

Once the images are pulled and containers started, you can log in by going to http://localhost:9991 or the IP of your Docker host.

Username: admin 
Password: admin

The HQ binary itself does not have a default password; the admin/admin credentials are configured by the Docker Compose scripts.

CHANGE THE DEFAULT PASSWORD. You can see how here.

Community login

It may take a few seconds for the agent to fully boot and connect to HQ.

You will need an access code to use Community Edition. You will be asked to set it up the first time you log in; once applied, you won't need to enter it again. Please see the self-guided walk-through for details on setting it up.

The quick start uses a docker-compose file to:

  1. Start Postgres; HQ and the Agent use Postgres as a backing store.

  2. Start a single local Kafka broker.

  3. Start a local Confluent Schema Registry.

  4. Start HQ and create an environment & agent key.

  5. Start the Agent and connect it to Kafka, the Schema Registry and HQ.

Linux

This page describes installing Lenses via a Linux archive.

Authentication

This page describes the authentication methods supported in Lenses.

Authentication is configured in HQ.

Users can authenticate in two ways: basic authentication and SSO / SAML. Additionally, specific users can be assigned as admin accounts.

Features / Improvements & Fixes

Changelog details for Lenses 6.1.0

New 🎉

Kafka Connections

Kafka Connections allow administrators to define connections to Kafka as Kubernetes secrets or service accounts that reference the credentials to connect with.

Most organisations already use secret providers such as AWS Secret Manager or Vault, and sync these to Kubernetes secrets automatically. This ensures Lenses or users deploying applications never need to deal with the credentials themselves.

See the documentation to get started.

Kafka to Kafka Replication

Lenses Kafka to Kafka Replicator is now integrated into Lenses. You can configure and deploy replicators, with predefined Kafka Connections, to move data between AWS MSK IAM clusters.

More cluster authentication methods and providers coming soon!

See the documentation to get started.

Improvements 💪

Configure Agent Provisioning from HQ

You can now create an environment and configure Agent provisioning directly from HQ. JSON schema support has been added, providing syntax highlighting, auto-completion and error reporting.

You can also view and edit the existing provisioning files of agents already connected.

SQL Studio

SQL Studio is moving forward again, and now brings a more IDE style experience. An improved tree navigation provides:

  1. Improved search functionality

  2. Expanded topics nodes to browse schemas and consumer groups associated with topics

  3. Context menu support to allow actions on topics

  4. Bookmarking of favourite topics.

Performance

Enhanced the performance of the environments screen, making it more responsive and capable of handling larger datasets.

Helm

Service Type Configuration: Added configurable service.type parameter to allow customization of the Kubernetes service type (ClusterIP, NodePort, LoadBalancer, etc.)

  • New parameter in values.yaml: service.type (default: ClusterIP)

  • Updated service template to use the configurable value
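
A values.yaml sketch using this parameter, e.g. to expose the service as a LoadBalancer:

values.yaml
service:
  type: LoadBalancer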

Fixes ✅

Helm

Fix values.schema.json examples format for postgres params object

Features / Improvements & Fixes

Changelog details for Lenses 6.1.1

New 🎉

SQL Studio

  • The left panel explorer toggle has been removed.

  • Schema Registry entries can now be listed and the schema details viewed from the Studio.

  • Context menus are now available in both explorer and breadcrumbs.

Improvements 💪

K2K

When a new K2K app is created, the Docker image is automatically set to the Kafka-to-Kafka (K2K) replicator, version 1.1.0.

K2K IAM

The IAM checks for the K2K application have been enhanced to include dependencies such as source and target environment Kafka connections, Kubernetes cluster, and namespace. This update ensures comprehensive permission validation across multiple axes.

New actions introduced:

  • ManageOffsets: Allows management of offsets for K2K application resources.

  • GetKafkaConnectionDetails: Retrieves details for Kafka connection resources.

UI

Various fixes have been applied throughout the user interface to address glitches and inconsistencies, enhancing the in-app user experience.

Kubernetes - Helm

This page describes installing Lenses HQ and Agent in Kubernetes via Helm.

Only Helm 3 is supported.

Version 6.1.1

Changelog for Lenses 6.1.1

2025-11-27

Packages

  • HQ image:

    • lensesio/lenses-hq:6.1.1
  • HQ CLI image:

    • lensesio/lenses-cli:6.1.1
  • Helm charts for HQ and Agent: https://lenses.jfrog.io/ui/native/helm-charts

  • Archive installation: https://archive.lenses.io/lenses/

Overview

Welcome to Lenses, Autonomy in data streaming.

How Lenses Works

Lenses has two components:

1. HQ

HQ is the control plane / central portal where end users interact with different environments (Kafka clusters). It provides a central place to explore data across many environments.

HQ is a single binary, installed on premise or in your cloud. From HQ you create environments which represent individual Kafka clusters and their supporting services. For each environment, you deploy an agent which connects to Kafka and back to HQ.

Environments

Lenses defines each Kafka Cluster and supporting services, such as Schema Registries and Kafka Connect Clusters, as an environment.

You can have many environments, on premises or in the cloud, provided HQ has network access to the agent and the agent can connect to your Kafka cluster or any Kafka API-compatible service.

Each environment has an agent. Environments can also be assigned extra metadata such as tiers, domains and descriptions.

2. Agents

There's a 1 to 1 relationship between environments, agents and Kafka clusters.

To explore and operate in an environment you need an agent. Agents are headless applications, deployed with connectivity to your Kafka cluster and its supporting services.

Agents only ever communicate with HQ, using an Agent Key over a secure channel. You cannot, as a user, interact directly with them. End users are unaware of agents, only environments.

Agents require:

  1. Agent Key to establish a communication channel to HQ

  2. Connectivity to a Kafka cluster and credentials to do so.

The agent acts as a proxy to read from or write to your Kafka cluster, execute queries, monitor for alerts and manage SQL Processors and Kafka Connectors.

Azure SSO

This page describes configuring Azure SSO for Lenses authentication.

Learn more about Azure SSO here.

1. Add Lenses from the Azure SSO gallery

Go to Enterprise applications->New Application.

Search for Lenses.io in the gallery directory.

Choose a name for Lenses e.g. Lenses.io and click Add.

2. Enable SSO

On the overview page select Single Sign On.

3. Configure SAML

Remember to activate HTTPS on HQ. See TLS.

Setting
Value

Identifier (Entity ID)

Use the base URL of the Lenses installation e.g. https://lenses-dev.example.com

Reply URL

Use the base URL with the callback details e.g. https://lenses-dev.example.com/api/v2/auth/saml/callback?client_name=SAML2Client

Sign on URL

Use the base URL
4. Download SAML Certificates

Download the Federation Metadata XML.

5. Configure SAML in HQ

SAML configuration is set in HQ's config.yaml file. See here for more details.

Docker

This page describes installing Lenses with Docker Image.

Admin Account

This page describes how to configure admin accounts in Lenses.

You can configure a list of principals (users, service accounts) with root admin access. Access control allows any API operation performed by such principals. If not set, it defaults to [].

Admin accounts are set in the config.yaml for HQ under the auth.administrators key, as an array of usernames.

config.yaml
auth:
  administrators:
    - admin
    - [email protected]
    - [email protected]

Changing the Admin Password

To change the admin password, update the config.yaml in the auth.users section. Set the password of the admin users.

config.yaml
auth:
  users:
    - username: admin
      password: $2a$12$XQW..XQrtZXCvbQWertqQeFi/1KoQW4eNephNXTfHqtoW9Q4qih5G

Passwords need to be bcrypt hashes.

You can use the Lenses CLI to create a bcrypt password. You can download the CLI here. The executable for the Lenses 6 CLI is called "hq".

hq utils hash-password

Configuration

This page describes how to configure Lenses.


Deploy HQ

Learn how to deploy HQ with Helm.


Deploy an Agent

Learn how to deploy an Agent with Helm.

Lenses Architecture

Deploy Lenses HQ

Learn how to deploy Lenses HQ via Docker.


Deploy Lenses Agent

Learn how to deploy Lenses Agent via Docker.


Authentication

Learn how to configure Lenses user authentication.


HQ

Learn how to configure Lenses HQ.


Agent

Learn how to configure a Lenses Agent and connect it to a cluster.

Overview

Introduction on how to connect your Kafka Cluster to Lenses.

Install

Learn how to configure and start HQ and an agent.


Deploy Lenses HQ

Learn how to deploy HQ via tarball.

Deploy Lenses Agent

Learn how to deploy Agent via tarball.


Admin Account

Learn how to configure admin accounts in Lenses.

Basic Authentication

Learn how to configure Lenses with Basic Auth.

SSO & SAML

Learn how to configure Lenses with SSO & SAML.


Version 6.1.0

Changelog for Lenses 6.1.0

2025-10-24

Packages

  • HQ image:

    • lensesio/lenses-hq:6.1.0
  • HQ CLI image

    • lensesio/lenses-cli:6.1.0
  • Helm charts for HQ and Agent: https://lenses.jfrog.io/ui/native/helm-charts

  • Archive installation: https://archive.lenses.io/lenses/

Overview

This page gives an overview of SSO & SAML for authentication with Lenses.

Users

How users are created with SSO is determined by the SSO User Creation Mode. There are two modes:

  1. Manual

  2. SSO

With manual mode, only users that have been pre-created in HQ can log in.

With sso mode, users that do not already exist are created and logged in.

Group Mapping

Control of how a user's group membership should be handled in relation to SSO is determined by the SSO Group Membership Mode. There are two modes:

  1. Manual

  2. SSO

With the manual mode, the information about the group membership returned from an Identity Provider will not be used and a user will only be a member of groups that were explicitly assigned to them in HQ.

With the sso mode, group information from the Identity Provider (IdP) will be used. On login, a user's group membership is set to the groups listed in the IdP.

Groups that do not exist in HQ are ignored.

SAML configuration is defined in the config.yaml provided to HQ. For more information on the configuration options see here.

config.yaml
auth:
  saml:
    metadata: |-

The following SSO / SAML providers are supported.

Creating a Keystore

Enable SAML single-sign on by creating a keystore.

  • SAML needs a keystore with a generated key-pair.

  • SAML uses the key-pair to encrypt its communication with the IdP.

Creating a keystore

Use the Java keytool to create one.

keytool \
 -genkeypair \
 -storetype pkcs12 \
 -keystore lenses.p12 \
 -storepass my_password \
 -alias lenses \
 -keypass my_password \
 -keyalg RSA \
 -keysize 2048 \
 -validity 10000
Setting
Definition

storetype

The type of keystore (pkcs12 is the industry standard, but jks is also supported)

keystore

The filename of the keystore

storepass

The password of the keystore

alias

The name of the key-pair

keypass

The password of the key-pair (must be the same as storepass for pkcs12 stores)

Features / Improvements & Fixes

Changelog details for Lenses 6.1.0

Improvements 💪

Minimal start up requirements

The Agent now requires only a connection to HQ to start up. This connection can be configured through environment variables or by setting a provisioning file with HQ details. This streamlines creating a new environment, allowing Lenses HQ to automatically push connection details to the Lenses Agent.

Logging

Enhanced the logs outlining HQ connectivity status.

Helm

Service Type Configuration: Added configurable service.type parameter to allow customization of the Kubernetes service type (ClusterIP, NodePort, LoadBalancer, etc.)

  • New parameter in values.yaml: service.type (default: ClusterIP)

  • Updated service template to use the configurable value

Add persistence.provisioning configuration with 50Mi default size, disabled by default

  • Create PVC template for provisioning data storage at /data/provisioning

  • Update deployment to mount provisioning volume when enabled

  • Add helper function for provisioning claim name generation

  • Add tests for provisioning volume

  • Switch .Values.persistence.existingClaim to .Values.persistence.log.existingClaim and .Values.persistence.provisioning.existingClaim and fail the deployment if the old value is being used
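
A values.yaml sketch of the provisioning persistence described above (the nested key layout is an assumption based on the entry; the size shown is the documented default):

values.yaml
persistence:
  provisioning:
    enabled: true   # disabled by default
    size: 50Mi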

Fixes ✅

Helm

Component Label Correction: Fixed component labels from lenses to lenses-agent for proper identification

  • Updated in deployment.yaml

  • Updated in service.yaml

Installation

This page describes the supported installation methods for Lenses.

Lenses can be deployed in the following ways:

SSO & SAML

This page describes configuring SSO & SAML in Lenses for authentication.

OneLogin SSO

This page describes configuring OneLogin SSO for Lenses authentication.

1. Set up the OneLogin IdP

Lenses is available in the OneLogin Application catalog.

Visit OneLogin’s Administration console.

  • Select Applications->Applications->Add App

  • Search and select Lenses

  • Optionally add a description and click save

2. Add Lenses via the Application Catalog

  • In the Configuration section set the base path from the url of the Lenses installation e.g. lenses-dev.example.com (without the https://)

  • Click Save

3. Download SAML Certificates

Download the Federation Metadata XML file with the OneLogin IdP details.

4. Configure SAML in HQ

SAML configuration is set in HQ's config.yaml file. See here for more details.

Generic SSO

This page describes configuring a Generic SSO provider for Lenses authentication.

Configure SAML in HQ

SAML configuration is set in HQ's config.yaml file. See here for more details.

Confluent Platform

This page describes configuring Lenses to connect to Confluent Platform.

For Confluent Platform see Apache Kafka.

Apicurio

This page describes connecting Lenses to Apicurio.

Apicurio supports the following versions of Confluent's API:

  • Confluent Schema Registry API v6

  • Confluent Schema Registry API v7

Only one Schema Registry connection is allowed.

Name must be schema-registry.

See JSON schema for support.

Environment variables are supported; escape the dollar sign:

sslKeystorePassword:
  value: "\${ENV_VAR_NAME}"

Set the schema registry URLs to include the compatibility endpoints, for example:

http://localhost:8080/apis/ccompat/v6

provisioning.yaml
confluentSchemaRegistry:
  - name: schema-registry
    tags: ["tag1"]
    version: 1
    configuration:
      schemaRegistryUrls:
        value:
          - http://localhost:8080/apis/ccompat/v6

IBM Event Streams Registry

This page describes connecting Lenses to IBM Event Streams schema registry.

Requires an Enterprise subscription on IBM Event Streams; only hard delete is supported for IBM Event Streams.

To configure an application to use this compatibility API, specify the Schema Registry endpoint in the following format:

https://token:{$APIKEY}@{$HOST}/confluent

Use "token" as the username. Set the password to your API key from IBM Event Streams.

Only one Schema Registry connection is allowed.

Name must be schema-registry.

See JSON schema for support.

Environment variables are supported; escape the dollar sign:

sslKeystorePassword:
  value: "\${ENV_VAR_NAME}"

provisioning.yaml
confluentSchemaRegistry:
  - name: schema-registry
    tags: ["tag1"]
    version: 1
    configuration:
      schemaRegistryUrls:
        value:
          - https://token:{$APIKEY}@{$HOST}/confluent

Upgrade to Lenses 6.1

There are no breaking changes between 6.0 and 6.1.


Tips: Before Upgrade

Steps for In-Place Upgrade

Steps for SideBySide Upgrade

How to convert Wizard Mode to Provisioning Mode

Helm

Learn how to deploy Lenses in your Kubernetes cluster with Helm.

Docker

Learn how to deploy Lenses with Docker.

Linux

Learn how to deploy Lenses on Linux / VMs.


Overview

Learn about SSO & SAML for Lenses authentication.

Azure SSO

Configure Lenses with Azure SSO.

Google SSO

Configure Lenses with Google SSO.

Keycloak SSO

Configure Lenses with Keycloak SSO.

Okta SSO

Configure Lenses with Okta SSO.

OneLogin SSO

Configure Lenses with OneLogin SSO.

Generic SSO

Configure Lenses with a Generic SSO provider.


Overview

This page gives an overview of deploying Lenses against your Kafka clusters.

This guide walks you through manually deploying HQ and an Agent to connect to your Kafka clusters. Lenses acts as a Kafka client; it can connect to any provider exposing a Kafka-compatible API.

For more detailed guides on Helm, Docker, and Linux, see here.

How to connect to your Kafka?

To deploy Lenses against your environments, you need to:

1. Configure HQ

Optionally use your own Postgres instance.

2. Start HQ

Start HQ using your configuration from step 1.

3. Create an Environment in HQ

To connect an agent to HQ for your Kafka cluster, we need to create an environment in HQ.

4. Start & Configure an Agent

To configure the agent, you need to:

  1. Optionally use your own Postgres instance; the agent uses an embedded database by default

  2. Start it with the Agent Key from step 3

  3. Configure a connection to your Kafka cluster

Prerequisites

EULA acceptance

To start HQ and an Agent, you have to accept the Lenses EULA.

For HQ, in the config.yaml set:

config.yaml
license:
  acceptEULA: true

Kafka

Any version of Apache Kafka (2.0 or newer) on-premise and in the cloud. Supported providers include:

  1. Confluent Platform & Cloud

  2. AWS MSK & AWS MSK Serverless

  3. Aiven

  4. IBM Event Streams

  5. Azure HDInsight & EventHubs

Schema Registry

Any version of Confluent Schema Registry (5.5.0 or newer), Apicurio (2.0 or newer) and AWS Glue.

Postgres

Only needed if you want to use your own Postgres. The docker compose will start a local Postgres instance.

HQ and Agents can share the same instance, by either using a separate database or schema for HQ and each agent, depending on your networking needs.

  1. Postgres server running version 9.6 or higher.

Database Role

The recommended configuration is to create a dedicated login role and database for the HQ and each Agent, setting the HQ or Agent role as the database or schema owner. Both the agent and HQ need credentials; create a role for each.

terminal
# login as superuser and add Lenses role and database
psql -U postgres -d postgres <<EOF
CREATE ROLE lenses_agent WITH LOGIN PASSWORD 'changeme';
CREATE DATABASE lenses_agent OWNER lenses_agent;

CREATE ROLE lenses_hq WITH LOGIN PASSWORD 'changeme';
CREATE DATABASE lenses_hq OWNER lenses_hq;
EOF

Networking

  1. Web sockets - You may need to adjust your load balancer to allow them. See here.

  2. JMX connectivity - Connectivity to JMX is optional (not required) but recommended for additional/enhanced monitoring of the Kafka Brokers and Connect Workers. Secure JMX connections are supported, including JOLOKIA and Open Metrics (MSK).

To enable JMX for the Agent itself, see here.

Kafka ACLs

These ACLs are for the underlying Lenses Agent Kafka client. Lenses has its own set of permissions guarding access.

You can restrict the access of the Lenses Kafka client, but this can reduce the functionality on offer in Lenses, e.g. not allowing Lenses to create topics at all, even though topic creation can be managed by Lenses' own IAM system.

The agent requires access to your Kafka cluster. If ACLs are enabled, you will need to allow the Agent access.

SSO (optional)

If you want to use SSO / SAML for authentication, you will need the metadata.xml file from your provider. See Authentication for more information.

Deploying HQ

This page describes deploying Lenses HQ via docker.

The HQ docker image can be configured via volume mounts for the configuration file.

HQ looks for config.yaml in the current working directory, which for the Docker image is the root directory.

Running the Docker

terminal
docker run --name lenses-hq \
  --network panoptes \
  -p 8080:8080 \
  -v $(pwd)/config.yaml:/config.yaml \
  lensesio/lenses-hq:6.1.1

Prerequisites

The main requirements that have to be fulfilled before the Lenses HQ container can be started are:

A complete configuration file.

For demo purposes and testing the product you can use our community license:

license_key_2SFZ0BesCNu6NFv0-EOSIvY22ChSzNWXa5nSds2l4z3y7aBgRPKCVnaeMlS57hHNVboR2kKaQ8Mtv1LFt0MPBBACGhDT5If8PmTraUM5xXLz4MYv

The main configuration file that has to be in place before running the docker command is config.yaml.

The Assertion Consumer Service endpoint is:

/api/v2/auth/saml/callback?client_name=SAML2Client

A sample configuration file follows:

config.yaml
auth:
  administrators:
    - admin
    - [email protected]
  users:
    - username: admin
      # bcrypt("correcthorsebatterystaple").
      password: $2a$10$F66cb6ZhnJjGCZuxlvKP1e84eytTpT1MDJcpBblHaZgsqp1/Aa0LG
  sessionDuration: 24h
  saml:
    enabled: true
    baseURL: https://lenses6.company.com
    entityID: https://lenses6.company.com
    metadata: |          
         <?xml version="1.0" encoding="UTF-8"?><md:EntityDescriptor
          xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata"
            </md:IDPSSODescriptor>
          </md:EntityDescriptor>
          
    userCreationMode: manual
    groupMembershipMode: manual
    uiRootURL: /
    groupAttributeKey: groups
    authnRequestSignature:
      enabled: false
http:
  address: :8080
  accessControlAllowOrigin:
    - https://lenses6.company.com
  accessControlAllowCredentials: false
  secureSessionCookies: false
  
agents:
  address: :10000
  
database:
  host: postgres-postgresql.postgres.svc.cluster.local:5432
  username: $(LENSESHQ_PG_USERNAME)
  password: $(LENSESHQ_PG_PASSWORD)
  schema:
  database: lenseshq
  TLS: false
license:
  key: license_key_
  acceptEULA: true
logger:
  mode: text
  level: debug
metrics:
  prometheusAddress: :9090

More about the configuration options can be found on the HQ configuration page.


What's next?

After the successful configuration and installation of HQ, the next steps would be:

  1. Deploying an Agent

  2. Configuring IAM roles / groups / policies

Okta SSO

This page describes configuring Okta SSO for Lenses authentication.

1. Set up the Okta IdP

Lenses is available directly in Okta’s Application catalog.

2. Add application in the Catalog

  • Go to Applications->Applications

  • Click Add Application

  • Search for Lenses

  • Select by pressing Add

3. Set General Settings

  • App label: Lenses

  • Set the base url of your lenses installation e.g. https://lenses-dev.example.com

  • Click Done

4. Download SAML Certificates

Download the Federation Metadata XML file with the Okta IdP details.

5. Configure SAML in HQ

SAML configuration is set in HQ's config.yaml file. See here for more details.

HQ

This page describes connecting a Lenses Agent with HQ.

To be able to view and drill into your Kafka environment, you need to connect the agent to HQ. You need to create an environment in HQ and copy the Agent Key into the provisioning.yaml.

Only one HQ connection is allowed.

See JSON schema for support.

Environment variables are supported; escape the dollar sign

sslKeystorePassword:
  value: "\${ENV_VAR_NAME}"
provisioning.yaml
lensesHq:
  - name: lenses-hq
    version: 1
    tags: ['hq']
    configuration:
      server:
        value: "\${LENSES_HQ_HOST}"
      port:
        value: 10000
      agentKey:
        value: "\${LENSES_HQ_AGENT_KEY}"
      sslEnabled:
        value: true
      sslTruststore:
        file: hq-truststore.jks
      sslTruststorePassword:
        value: "\${LENSES_HQ_AGENT_TRUSTSTORE_PWD}"

Aiven

This page describes configuring Lenses to connect to Aiven.

Only one Kafka connection is allowed.

The name must be kafka.

See JSON schema for support.

Environment variables are supported; escape the dollar sign

sslKeystorePassword:
  value: "\${ENV_VAR_NAME}"
1. Find your Service URI

In Aiven, locate your Service URI and set it as the bootstrap servers.

2. Configure Provisioning

Set the following in the provisioning.yaml, replacing Service URI, username and password from your Aiven account.

provisioning.yaml
kafka:
- name: kafka
  version: 1
  tags: ['my-tag']
  configuration:
    kafkaBootstrapServers:
      value:
        - SASL_SSL://[Service URI]
    protocol: 
      value: SASL_SSL
    saslMechanism: 
      value: SCRAM-SHA-256
    saslJaasConfig: 
      value: |
        org.apache.kafka.common.security.scram.ScramLoginModule required
        username="[your-username]"
        password="[your-password]";    

Confluent Cloud

This page describes configuring Lenses to connect to Confluent Cloud.

For Confluent Platform see Apache Kafka.

Only one Kafka connection is allowed.

The name must be kafka.

See JSON schema for support.

Environment variables are supported; escape the dollar sign

sslKeystorePassword:
  value: "\${ENV_VAR_NAME}"
1. Create a Data Integration API key

  • From Data integration API keys, select Create Key.

  • For this guide select Global access to get your API Key and API Secret Key.

  • Go to Cluster Settings to get your Bootstrap Server.

2. Configure Provisioning

Set the following in the provisioning.yaml

Environment variables are supported; escape the dollar sign

sslKeystorePassword:
  value: "\${ENV_VAR_NAME}"
provisioning.yaml
kafka:
- name: kafka
  version: 1
  tags: ['my-tag']
  configuration:
    kafkaBootstrapServers:
      value:
        - SASL_SSL://[YOUR_BOOTSTRAP_SERVER]
        - SASL_SSL://[YOUR_BOOTSTRAP_SERVER]
    protocol: 
      value: SASL_SSL
    saslMechanism: 
      value: PLAIN
    saslJaasConfig:
      value: |
        org.apache.kafka.common.security.plain.PlainLoginModule required 
        username="[YOUR_API_KEY]" 
        password="[YOUR_API_KEY_SECRET]";

Hardware & OS

This page describes the hardware and OS prerequisites for Lenses.

Run on any Linux server (review ulimits) or container technology (Docker/Kubernetes). For RHEL 6.x and CentOS 6.x use Docker.

Linux machines typically have a soft limit of 1024 open file descriptors. Check your current limit with the ulimit command:

ulimit -S -n     # soft limit
ulimit -H -n     # hard limit

As a super-user, increase the soft limit to 4096 with:

ulimit -S -n 4096

JVM Options

This page describes the JVM options for the Lenses Agent.

The Agent runs as a JVM app; you can tune runtime configurations via environment variables.

Key
Description

LENSES_OPTS

For generic settings, such as the global truststore. Note that the docker image uses this to plug in a Prometheus java agent for monitoring Lenses.

LENSES_HEAP_OPTS

JVM heap options. The default settings are -Xmx3g -Xms512m, which set the heap size between 512MB and 3GB. The upper limit is set to 1.2GB on the Box development docker image.

LENSES_JMX_OPTS

Tune the JMX options for the JVM, e.g. to allow remote access.

LENSES_LOG4J_OPTS

Override Agent logging configuration. Should only be used to set the logback configuration file, using the format -Dlogback.configurationFile=file:/path/to/logback.xml.

LENSES_PERFORMANCE_OPTS

JVM performance tuning. The default settings are -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=
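
For example, a sketch raising the Agent heap via the documented variable (values are illustrative):

export LENSES_HEAP_OPTS="-Xms1g -Xmx6g"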

Logs

This page describes configuring Lenses Agent logging.

Changes to the logback.xml are hot-reloaded by the Agent; there is no need to restart.

All logs are emitted unbuffered as a stream of events, both to stdout and to rotating files inside the logs/ directory.

The logback.xml file is used to configure logging.

If customization is required, it is recommended to adapt the default configuration rather than write your own from scratch.

The file can be placed in any of the following directories:

  • the directory where the Agent is started from

  • /etc/lenses/

  • agent installation directory.

The first one found, in the above order, is used, but to override this and use a custom location, set the following environment variable:

export LENSES_LOG4J_OPTS="-Dlogback.configurationFile=file:/path/to/logback.xml"

Default configuration

The default configuration file is set up to hot-reload any changes every 30 seconds.

Log Level

The default log level is set to INFO (apart from some very verbose classes).

Log Format

All the log entries are written to the output using the following pattern:

%d{ISO8601} %-5p [%c{2}:%L] [%thread] %m%n

You can adjust this inside logback.xml to match your organization’s defaults.

Log Location

Inside logs/ you will find three files: lenses.log, lenses-warn.log and metrics.log. The first contains all logs and matches stdout. The second contains only messages at level WARN and above. The third contains timing metrics and can be useful for debugging.

Log Buffering

The default configuration contains two cyclic buffer appenders: "CYCLIC-INFO" and "CYCLIC-METRICS". These appenders are required to expose the Agent logs within the Admin UI.

Environments

This page describes Environments in Lenses.

Environments are virtual containers for your Kafka cluster and its supporting services, such as Schema Registries and Kafka Connect clusters.

Each Environment has an Agent; the Agent communicates with HQ via an Agent Key generated at environment creation time.

Environments can be assigned tiers, domains and a description, and grouped accordingly.

Go to Environments in the left-hand navigation, then select the New Environment button in the top right corner.

Enter the details for the environment. Once you have a key, you will be guided through running the agent Docker image and configuring the agent to connect to your environment.

Learn how to configure an agent here.

Provisioning

This page describes how to set up connections to Kafka and other services and have changes applied automatically by the Lenses Agent.

AWS MSK

This page describes connecting the Lenses Agent to an AWS MSK cluster.

Only one Kafka connection is allowed.

The name must be kafka.

See JSON schema for support.

Environment variables are supported; escape the dollar sign

sslKeystorePassword:
  value: "\${ENV_VAR_NAME}"

It is recommended to install the Agent on an EC2 instance or with EKS in the same VPC as your MSK cluster. The Agent can be installed and preconfigured via the AWS Marketplace.

Open network connectivity

Edit the AWS MSK security group in the AWS Console and add the IP address of your Agent installation.

MSK Security group

Enable Open Monitoring

If you want to have the Agent collect JMX metrics you have to enable Open Monitoring on your MSK cluster. Follow the AWS guide here.

Select your MSK endpoint

Depending on your MSK cluster, select the endpoint and protocol you want to connect with.

It is not recommended to use Plaintext for secure environments. For these environments use TLS or IAM.

When the Agent is running inside AWS and is connecting to an Amazon’s Managed Kafka (MSK) instance, IAM can be used for authentication.

Configure Provisioning

provisioning.yaml
kafka:
- name: kafka
  version: 1
  tags: ["optional-tag"]
  configuration:
    kafkaBootstrapServers:
      value:
       - SASL_SSL://your.kafka.broker.0:9098
       - SASL_SSL://your.kafka.broker.1:9098
    protocol: 
      value: SASL_SSL
    saslMechanism: 
      value: AWS_MSK_IAM
    saslJaasConfig:
      value: software.amazon.msk.auth.iam.IAMLoginModule required;
    additionalProperties:
      value:
        sasl.client.callback.handler.class: "software.amazon.msk.auth.iam.IAMClientCallbackHandler"
    metricsType:
      value: AWS

Schema Registries

This page describes adding Schema Registries to the Lenses Agent.

Topics

This page describes exploring topics in Lenses.

Hands-On Walk Through of Community Edition

Simple walk through to introduce you to the Lenses 6 user interface.

Outline of Walk Through

  1. Logging in to Community Edition

  2. Exploring Lenses UI

  3. Adding a new environment

  4. Searching Topics and Schemas

  5. Using SQL Studio

  6. Drilling Down Into Environments

  7. Adding a Data Policy

1. Logging in to Community Edition

After you've run your docker compose command you can access it running locally at http://localhost:9991. CE will ask you to log in:

User: admin Pass: admin

The very first time you log in, Lenses will ask you to verify with your email. This is easy to set up; just click on the "Verify" button:

If you have done the verification before, you can enter your email address and the access code you received via the "Already verified?" link:

The verify link will take you to the setup page on Lenses website, where you can enter your email address:

Click Send Link and then Lenses will send you an email with a magic link to activate Lenses Community Edition. Be sure to check your junk folder if it doesn't arrive. In this email you will also find other important information - your personal access code and useful links, which will help you to quick start with Lenses. Don't forget to bookmark and keep this email.

The very first time you log in to Lenses CE you will see our first-start help screen. There is a video to watch, as well as links to these docs and other resources.

2. Exploring Lenses UI

Click on Let's Start to access the Lenses UI. The first view you'll see is the Environments view. This is where Lenses displays all of your connected Kafka Environments. This can include: the Kafka clusters themselves, Kafka Connect, Schema Registry, connectors, consumers, even the Kubernetes clusters it's all running in. Environments mean your entire Kafka ecosystem not just the clusters themselves. For our demo setup we only have one environment connected, but you can have up to two at no charge with Community Edition.

Click on the link below the Environments view to switch to the topics view. Here you'll see all the topics in your connected Environments. We are currently logged in as Admin so we can see all the Topics in our Environments. If we were logged in with a more restricted role we might only see the Topics we have permission to view.

Use the bottom scroll bar to scroll to the right so you can see further information about each topic.

You can see what type of schema it uses, how many partitions it uses, and much more.

3. Adding a new environment

Adding a new environment will allow you to connect a second Kafka cluster to Lenses HQ.

Important! Before you begin, ensure you have your Kafka Cluster already deployed and all its credentials. You will need them to configure the Lenses Agent. Without them, it will not be able to connect to your environment.

Click on the button New environment in the top right corner and the new environment wizard will guide you through the process.

Fill in the environment name and give it a short description if you like. You can also select domain membership, if you use the domain conventions, and the environment tier. When you are ready, click the "Create environment" button.

This will generate an Agent key. You need this to start the agent docker.

Copy the command to start the docker; it uses the environment variables to create a provisioning.yaml to connect to HQ.

It may take a minute for the Agent to start fully and connect.

By default the agent will create a mounted volume. We recommend that once the agent is fully configured you download the provisioning.yaml for safekeeping.

Once the agent docker has started, it will connect and you can move on to the next step: select the type of Kafka and other services you need. This will update the YAML editor and highlight any errors. You can also type directly in the editor, with JSON schema support if required, e.g. type "kafka" to get the default snippets.

Once you have entered the details for your Kafka and have no validation errors, you can test the configuration. This will push the configuration to the agent, which will check validity and connectivity.

If valid, you can apply to the agent.

By testing the configuration first you will get feedback on what will change, be deleted or added.

If the configuration fails, errors and the line number will be visible in the problem panel.

4. Searching Topics and Schemas

The topics view is fully searchable. So for example if we wanted to build a "Customer Location View" for our web page using Kafka data — we could search for the keyword longitude here and see which topics include location data. Let's do a search for "latitude" in the topics view and see what comes up:

Three topics appear to have data about latitude, but let's dive a bit deeper. Tick the "Search in Schema" tickbox to get Lenses to display the actual names of the keys in the schema.

This will surface the actual schema names that match your search.

5. Using SQL Studio

Based on what we've discovered it seems like the nyc_yellow_taxi_trip_data might be useful for our theoretical project. Let's use Lenses to dive a bit deeper into that topic and view the actual data flowing through using SQL Studio. To get to SQL Studio from this view simply hover your mouse over the topic: nyc_yellow_taxi_trip_data. That will cause the interactive controls to appear. Click on the SQL shortcut when it pops up:

Clicking that button automatically opens up that topic in SQL Studio. You can now interact directly with the data flowing through that topic using SQL statements. Note when you first access SQL Studio it appears with both side "drawers" open. You can click on the drawer close icons on either side to make more room to work directly with your data and SQL.

You can go back and open those as needed later on, but this gives you the whole screen to view and work with your data and SQL. Toggle your view from Grid to List. Now you have your data in a more JSON-style format. Expand out the JSON to view the individual key / value pairs. Across the top you'll see the metadata for each event: Partition, Offset, and Time Stamp. Below you can examine the key / value pairs. As you can see we've got plenty of longitude and latitude data to work with for our customer location visualization.

Now let's move on from data discovery to troubleshooting. Using the same taxi data topic we can troubleshoot a "live" problem. Several drivers are reporting errors with credit card transactions going through in the last 15 minutes. Let's use SQL Studio to examine taxi transactions in the last 15 minutes using a SQL search:
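
A minimal sketch of such a query, assuming the topic and field names used in this walk-through:

SELECT vendorID, fare_amount, payment_type
FROM nyc_yellow_taxi_trip_data
LIMIT 100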

Copy that text and paste it into the SQL box in your SQL Studio. Then from the Time Range picker select the last 15 minutes to set your time frame and then hit run.

Next up, let's clean up our view so the data is a bit easier to see. Go to the Columns button and get rid of the timestamp, partition, and offset columns. Now we just have our vendorID, fare_amount, and payment_type. Assuming payment_type = 1 means the customer paid cash and payment_type = 2 means card, scroll down and notice that both types of payments seem to be going through. Maybe the problem is with a particular driver. Let's filter our results on vendorID. Select the Filters button and create a filter to just show vendorID = 1.

Toggle the filter back and forth between vendorID = 1 and 2 and see that transactions of both types seem to be flowing through. So perhaps the drivers' reported problem is not here, maybe it's a wireless connectivity issue? We could check our wireless telecom topic to further troubleshoot this theoretical issue. We have a detailed guide to Lenses SQL and using SQL Studio in our docs here:

6. Drilling Down Into Environments

But for now let's move on to other Lenses features. Let's switch back to the Environments View. Hover your mouse over our environment and some controls should appear. Click on the big arrow that appears in order to drill down into the specifics for this environment.

Now we're in the details page of this specific Environment. We can quickly see the health of all the components of the Kafka ecosystem. We can use any of the specific views on the left side, or drill down more interactively from details that appear on the main dashboard. Take a moment to look around at all the stats and data presented on this page before we move on.

On the lefthand side switch to the Topics view and select the backblaze_smart topic. That will open up the Topic View. Here we can see examples of the data but can also view much more detailed information about the topic. Be sure to click on the button to close the right side drawer to free up some screen space. Take a moment to toggle through the different topics view as listed across the top but then come back to the Data view.

7. Adding a Data Policy

Coming back to the Data View you'll notice that we have the serial_number field displayed. This field is tied to registered owners and can be considered personally identifiable data. Luckily Lenses has the capability to block the view of this sensitive data. We need to set up a Data Policy to block it. Make a note of the name of the field we want to obscure: serial_number.

Click on the Policy view on the left hand side and click on New Policy. Then fill out the form:

Name: serial-number-blocker

Redaction: last 3 (this means we'll mask everything but the last 3 digits in the number)

Category: private_info (note after you type this in you'll need to hit enter to make it stick)

Impact Type: medium

Affected Datasets: don't change

Add Fields: serial_number (you'll need to hit return here as well to make it stick)

Once you're done it should look like this:

Then click "Create New Policy"

Now you'll see your new policy in the list. You can go back to the topics page and click on backblaze_smart topic again and verify that the serial_number field has been obfuscated.

It should look like this:

Congrats, you've completed a basic introduction to Lenses 6. There's lots more to learn and features to use. Look for more tutorials to come soon.

Install

This page describes configuring and starting Lenses HQ and Agent against your Kafka cluster.

This guide uses the Lenses docker-compose file. For non-dev installations and automation, see the Deployment section.

Configure HQ

HQ is configured via one file, config.yaml. The docker-compose file loads the content of the hq.config.yaml key and mounts it as the HQ config.yaml file.

Adding a Database Connection

You only need to follow this step if you do not want to use the local Postgres instance started by the docker-compose file.

You must create a database and role in your Postgres instance for HQ to use. See Database Role.

Edit the docker-compose.yaml and set the credentials for your database in the hq.config.yaml section.

Authentication

Currently HQ supports:

  1. Basic Authentication (default)

  2. SAML

For this example we will use basic authentication. For information on configuring other methods, see Authentication, and configure the hq.config.yaml key accordingly for SAML.

Start HQ

To start HQ, run the following docker command:
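
A sketch, assuming the docker-compose file from this guide is in the current directory:

terminal
docker compose up -d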

You can now log in to your HQ instance with admin/admin.

Create an Environment for your Kafka Cluster

To create an environment in HQ:

  1. Log in to HQ and create an environment: Environments->New Environment.

  2. At the end of the process, you will be shown an Agent Key. Copy that, keep it safe!

The environment will be disconnected until the Agent is up and configured with the key.

You can also manage environments using the CLI.

Configure the Agent

The Agent is configured via two files:

  • lenses.conf - holds low-level options for the agent and the database connection. You can set this via the agent.lenses.conf key in the docker-compose file.

  • provisioning.yaml - holds the connection details to your Kafka cluster and supporting systems. You can set this via the agent.provisioning.yaml key in the docker-compose file.

Adding an Agent Database Connection

You only need to follow this step if you do not want to use the local Postgres instance started by the docker-compose file.

You must create a database and role in your Postgres instance for the Agent to use. See .

Update the docker-compose file agent.lenses.conf key for your Postgres instance.

Connect the Agent to HQ

You can connect the agent to HQ in two ways:

  1. Start the Agent docker with at minimum an AGENT_KEY environment variable. You need to create an environment in HQ to get this key.

  2. Or mount a provisioning file that contains the connection to HQ; recommended for TLS-enabled HQs.

You can still reference environment variables if you mount the file, e.g.:
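
sslKeystorePassword:
  value: "\${ENV_VAR_NAME}"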

Minimal Start

First deploy HQ and create an environment, then with the AGENT KEY run:
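
A sketch of the minimal start, assuming the agent image from the Packages section and the LENSES_HQ_* variable names used in the provisioning examples (adjust names, values and network to your setup):

terminal
docker run --name lenses-agent \
  -e LENSES_HQ_HOST=lenses-hq \
  -e LENSES_HQ_AGENT_KEY="agent_key_..." \
  lensesio/lenses-agent:6.1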

This will start and connect to HQ but not to Kafka or other services. It will create a provisioning file in data/provisioning.

Set the docker network accordingly

Adding a Kafka Connection

By default, the agent is configured to connect to Kafka on localhost. To change this, update the provisioning.yaml key. The information required here depends on how you want the Agent to authenticate against Kafka.

You can add connections in three ways:

  1. Directly editing the provisioning file

  2. Lenses UX

  3. APIs (which option 2 uses)

They all result in writing a provisioning file which the Agent picks up and loads.

Manual file editing

You must manually add all the connections you want to the file and then mount it. To help you create a provisioning file you can use the JSON schema support. In your IDE, like VS Code, create a file called provisioning.yaml and add the following line at the top:
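
A placeholder sketch of that line; the actual schema URL is published in the Lenses JSON schema repo:

# yaml-language-server: $schema=<lenses-provisioning-json-schema-url>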

Then start typing, for example k for Kafka and Kafka Connect, s for Schema Registry, or just ctrl+space to trigger the default templates.

Fill in the required fields; your editor should highlight issues for you.

Start with provisioning

Add lenses-agent.conf if you are overriding defaults like the embedded database.

See here for examples of different authentication types for Kafka.

Lenses UX

When you create an environment via the Lenses UI, you will be guided through the process to start the agent and configure the connections. The experience is similar to manually editing the provisioning file, but it uses the APIs to push down and test configurations.

APIs

You can also use the APIs directly. See here.

Deploying an Agent

This page describes installing the Lenses Agent via an archive on Linux.

To install the Agent from the archive you must:

  1. Extract the archive

  2. Configure the Agent

  3. Start the Agent


Extracting the archive

Installation link

Links to the archives can be found here.

Extract the archive using the following command:
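
A sketch; the archive file name below is a placeholder for the file you downloaded:

terminal
tar -xvf lenses-agent-archive.tar.gz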

Inside the extracted archive, you will find:


Configure the Agent

To configure the agent's connection to Postgres and its provisioning file, see here.

Once the agent files are configured you can continue to start the agent.

The configuration files are the same for Docker and Linux; for Docker we simply mount the files into the container.

Connect the Agent to HQ

You can connect the agent to HQ in two ways:

  1. Start the Agent with at minimum an AGENT_KEY environment variable. You need to create an environment in HQ to get this key.

  2. Or use a provisioning file that contains the connection to HQ; recommended for TLS-enabled HQs.

You can still reference environment variables if you use the file, e.g.:
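
sslKeystorePassword:
  value: "\${ENV_VAR_NAME}"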

Database

By default the Agent will start with an embedded database. If you wish to use Postgres, recommended for production, see here. Database settings are set in lenses-agent.conf.


Starting the Agent

Provisioning file path

If you configured provisioning.yaml, make sure to set the following property:
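
A sketch, assuming the provisioning directory is set via a lenses.provisioning.path option in lenses-agent.conf (the option name is an assumption; check the Agent configuration reference):

lenses-agent.conf
lenses.provisioning.path=/path/to/provisioning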

Start the Agent by running:

or pass the location of the config file:

If you do not pass the location of lenses-agent.conf, the Agent will look for it inside the current (runtime) directory. If it does not exist, it will try its installation directory.

If the agent fails with an error message that security.conf does not exist, run the following command in the lenses directory:
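
A sketch of one common fix, assuming an empty security.conf is sufficient:

terminal
touch security.conf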

To stop the Agent, press CTRL+C.


File permissions

Set the permissions of the lenses-agent.conf to be readable only by the lenses user.
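
For example:

terminal
chmod 600 lenses-agent.conf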

The agent needs write access in 4-5 places in total:

  1. [RUNTIME DIRECTORY] When the Agent runs, it will create at least one directory under the directory it is run in:

    1. [RUNTIME DIRECTORY]/logs Where logs are stored

    2. [RUNTIME DIRECTORY]/logs/sql-kstream-state Where SQL processors (when in In-Process mode) store state. To change the location of the processors' state directory, use the lenses.sql.state.dir option.

    3. [RUNTIME DIRECTORY]/storage Where the H2 embedded database is stored when PostgreSQL is not set. To change this directory, use the lenses.storage.directory option.

    4. /run (Global directory for temporary data at runtime) Used for temporary files. If Lenses does not have permission to use it, it will fall back to /tmp.

    5. /tmp (Global temporary directory) Used for temporary files (if access to /run fails), and JNI shared libraries.

Back up this location for disaster recovery.


JNI libraries

The Agent and Kafka use two common Java libraries that take advantage of JNI and are extracted to /tmp.

You must either:

  1. Mount /tmp without noexec

  2. or set org.xerial.snappy.tempdir and java.io.tmpdir to a different location


SystemD example

If your server uses systemd as a service manager, you can manage the Agent with it (start upon system boot, stop, restart). Below is a simple unit file that starts the Agent automatically on system boot.
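
A minimal sketch, assuming the Agent is installed under /opt/lenses-agent with a bin/lenses-agent start script and runs as a lenses user (paths and names are assumptions):

lenses-agent.service
[Unit]
Description=Lenses Agent
After=network.target

[Service]
Type=simple
User=lenses
WorkingDirectory=/opt/lenses-agent
ExecStart=/opt/lenses-agent/bin/lenses-agent lenses-agent.conf
Restart=on-failure

[Install]
WantedBy=multi-user.target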


Global Truststore

The Agent uses the default trust store (cacerts) of the system’s JRE (Java Runtime) installation. The trust store is used to verify remote servers on TLS connections, such as Kafka Brokers with an SSL protocol, JMX over TLS, and more. Whilst for some types of connections (e.g. Kafka Brokers) a separate keystore can be provided at the connection’s configuration, for some other connections (JMX over TLS) we always rely on the system trust store.

It is possible to set up a global custom trust store via the LENSES_OPTS environment variable:
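
For example, using the standard JVM trust store properties:

export LENSES_OPTS="-Djavax.net.ssl.trustStore=/path/to/truststore.jks -Djavax.net.ssl.trustStorePassword=changeme"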


Hardware & OS

Run on any Linux server (review ulimits) or container technology (Docker/Kubernetes). For RHEL 6.x and CentOS 6.x use Docker.

Linux machines typically have a soft limit of 1024 open file descriptors. Check your current limit with the ulimit command:

Increase as a super-user the soft limit to 4096 with:

Use 8 GB RAM, 4 CPUs, and 20 GB of disk space.

Agent

This page describes the Lenses Agent configuration.

Overview

This page describes an overview of the Lenses Agent configuration.

The Agent configuration is split between two files.

  1. lenses-agent.conf

  2. provisioning.yaml

lenses-agent.conf holds all the database connections and low-level options for the agent.

In the provisioning.yaml you define how to connect to your Kafka cluster, Schema Registries, Kafka Connect clusters and HQ. See Provisioning for more information.

The provisioning.yaml is watched by the Agent, so any changes made, if valid, are applied.

JSON Schema Support

To help with creating a provisioning.yaml from your IDE you can use the provided JSON schema support. The schemas are available in the lensesio/json-schemas repo.

Add the following to the top of your YAML file
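provisioning.yaml
# yaml-language-server: $schema=https://raw.githubusercontent.com/lensesio/json-schemas/refs/heads/main/agent/provisioning.schema.json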

You will then get auto-completion and validation. For example, type kafka at the start of a line to trigger the default snippets (templates) for Kafka.

For Schema Registry, type schema; for Connect, type connect; for Alerting, type alerting, etc.

You require at minimum a lenses-hq connection and a kafka connection for the schema to be valid.

You do not need to use the default snippets, you can also use the auto completion for each connection type.

Kafka

This page describes how to connect the Lenses Agent to your Kafka brokers.

The Lenses Agent can connect to any Kafka cluster or service exposing the Apache Kafka APIs and supporting the authentication methods offered by Apache Kafka.

For JMX, see JMX Metrics.

Azure HDInsight

This page describes connecting Lenses to an Azure HDInsight cluster.

Only one Kafka connection is allowed.

The name must be kafka.

See JSON schema for support.

Environment variables are supported; escape the dollar sign

1

Find your Kafka endpoints

In your Azure Portal, go to Dashboards->Ambari home, then go to Kafka->Configs->Kafka Broker->Kafka Broker hosts.

2

Optionally find your Zookeeper endpoints

Optionally get the Zookeeper endpoints: Go to Zookeeper->Configs->Zookeeper Server->Zookeeper Server hosts.

3

Configure Provisioning

See Apache Kafka for different configurations for your security protocols.

Environment variables are supported; escape the dollar sign

IBM Event Streams

This page describes how to connect Lenses to IBM Event Streams.

IBM Event Streams requires a replication factor of 3. Ensure you set the replication factor accordingly for Lenses internal topics.

See the configuration reference.

Only one Kafka connection is allowed.

The name must be kafka.

See JSON schema for support.

Environment variables are supported; escape the dollar sign

1

Find your bootstrap endpoints

From the IBM Cloud console, locate your bootstrap_endpoints for the service credentials you want to connect with.

2

Configure Provisioning

Set the following in the provisioning.yaml:

Use "token" as the username in the Jaas Config. Set the password as your API KEY from IBM Event streams

Environment variables are supported; escape the dollar sign

Overview

This page describes an overview of connecting a Lenses Agent with Schema Registries

Consider rate limiting if you have a high number of schemas.

Only one Schema Registry connection is allowed.

Authentication

TLS and basic authentication are supported for connections to Schema Registries.

JMX Metrics

The Agent can collect Schema registry metrics via:

  1. JMX

  2. Jolokia

See Infrastructure JMX Metrics.

Supported formats

  • AVRO

  • PROTOBUF

JSON and XML formats are supported by Lenses but without a backing schema registry.

Schema deletion

To enable the deletion of schemas in the UI, set the following in the lenses.conf file.

IBM Event Streams supports hard deletes only

TLS

This page describes how to configure TLS for the Lenses Agent.

By default, the Agent does not provide TLS termination, but it can be enabled via a configuration option. TLS termination is recommended for enhanced security and is a prerequisite for integrating with SSO (Single Sign-On) via SAML 2.0.

TLS termination can be configured directly within the Agent or by using a TLS proxy or load balancer.

Global Truststore

To use a non-default global truststore, set the path accordingly with the LENSES_OPTS variable.

Custom Truststore

Mutual TLS

To enable mutual TLS, set your keystore accordingly.

Rate Limiting

Rate limit the calls the Lenses Agent makes to Schema Registries and Connect Clusters.

To rate limit the calls the Agent makes to Schema Registries or Connect Clusters, set the following in the Agent configuration:

The exact values will depend on your setup, for example the number of schemas and how often new schemas are added, so some trial and error is required.

Identity & Access Management

This page describes Identity & Access Management (IAM) in Lenses.

LENSES_OPTS=-Djavax.net.ssl.trustStore=/path/to/truststore
lenses.conf
lenses.ssl.truststore.location = "/path/to/truststore.jks"
lenses.ssl.truststore.password = "changeit"
lenses.conf
# To secure and encrypt all HTTPS connections to Lenses via TLS termination.
# Java Keystore location and passwords
lenses.ssl.client.auth = true
lenses.ssl.keystore.location = "/path/to/keystore.jks"
lenses.ssl.keystore.password = "changeit"
lenses.ssl.key.password      = "changeit"


# You can also tweak the TLS version, algorithm and ciphers
#lenses.ssl.enabled.protocols = "TLSv1.2"
#lenses.ssl.cipher.suites     = "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WIT
# schema registry
lenses.schema.registry.client.http.rate.type="sliding" 
lenses.schema.registry.client.http.rate.maxRequests= 200
lenses.schema.registry.client.http.rate.window="2 seconds"

# connect clusters
lenses.connect.client.http.rate.type="sliding"                 
lenses.connect.client.http.rate.maxRequests=200        
lenses.connect.client.http.rate.window="2 seconds"  
terminal
tar -xvf lenses-agent-latest-linux64.tar.gz -C lenses
   lenses
   ├── lenses.conf       ← edited and renamed from .sample
   ├── logback.xml
   ├── logback-debug.xml
   ├── bin/
   ├── lib/
   ├── licences/
   ├── logs/             ← created when you run Lenses
   ├── plugins/
   ├── storage/          ← created when you run Lenses
   └── ui/
agentKey:
  value: ${LENSES_HQ_AGENT_KEY}
lenses.conf
# Directory containing the provisioning.yaml files
lenses.provisioning.path=/my/dir
terminal
bin/lenses
terminal
bin/lenses lenses-agent.conf
touch security.conf
chmod 0600 /path/to/lenses-agent.conf
chown [lenses-user]:root /path/to/lenses-agent.conf
LENSES_OPTS="-Dorg.xerial.snappy.tempdir=/path/to/exec/tmp -Djava.io.tmpdir=/path/to/exec/tmp"
[Unit]
Description=Run Agent service

[Service]
Restart=always
User=[LENSES-USER]
Group=[LENSES-GROUP]
LimitNOFILE=4096
WorkingDirectory=/opt/lenses
#Environment=LENSES_LOG4J_OPTS="-Dlogback.configurationFile=file:/etc/lenses/logback.xml"
ExecStart=/opt/lenses/bin/lenses /etc/lenses/lenses-agent.conf

[Install]
WantedBy=multi-user.target
export LENSES_OPTS="-Djavax.net.ssl.trustStore=/path/to/truststore.jks -Djavax.net.ssl.trustStorePassword=changeit"
bin/lenses
ulimit -S -n     # soft limit
ulimit -H -n     # hard limit
ulimit -S -n 4096
https://archive.lenses.io/lenses/6.0/agent/
sslKeystorePassword:
  value: "\${ENV_VAR_NAME}"
sslKeystorePassword:
  value: "\${ENV_VAR_NAME}"
kafka:
- name: kafka
  version: 1
  tags: [my-tag]
  configuration:
    kafkaBootstrapServers:
      value:
        - PLAINTEXT://your.kafka.broker.0:9092
        - PLAINTEXT://your.kafka.broker.1:9092
    protocol: 
      value: PLAINTEXT
    # all metrics properties are optional
    metricsPort: 
      value: 9581
    metricsType: 
      value: JMX
    metricsSsl: 
      value: false
sslKeystorePassword:
  value: "\${ENV_VAR_NAME}"
sslKeystorePassword:
  value: "\${ENV_VAR_NAME}"
provisioning.yaml
kafka:
- name: kafka
  version: 1
  tags: ['my-tag']
  configuration:
    kafkaBootstrapServers:
      value:
        - SASL_SSL://[YOUR_BOOTSTRAP_ENDPOINTS]
    protocol: 
      value: SASL_SSL
    saslMechanism: 
      value: PLAIN
    saslJaasConfig:
      value: |
        org.apache.kafka.common.security.plain.PlainLoginModule required 
        username="token" 
        password="[YOUR_API_KEY]";
lenses.conf
## Enable schema deletion in the Lenses UI
## default: false
lenses.schema.registry.delete = true

## When a topic is deleted,
## automatically delete also its associated Schema Registry subjects
## default: false
lenses.schema.registry.cascade.delete = true
docker-compose.yaml
 hq.config.yaml:
    content: |
      # ACCEPT THE LENSES EULA
      license:
        acceptEULA: true
      database:
        host: postgres:5432
        username: [YOUR_POSTGRES_USERNAME]
        password: lenses
        database: hq
terminal
docker-compose up hq
terminal
➜  lenses environments
Manage Environments.

Usage:
  lenses environments [command]

Aliases:
  environments, e, env, envs

Available Commands:
  create      Creates a new environment.
  delete      Deletes an environment.
  get         Retrieves a single environment by name.
  list        Lists all environments
  metadata    Manages environment metadata.
  update      Updates an environment.
  watch       Watch live environment updates.
docker-compose.yaml
 agent.lenses.conf:
    content: |
      lenses.storage.postgres.host=[YOUR_POSTGRES_INSTANCE]
      lenses.storage.postgres.port=[YOUR_POSTGRES_PORT]
      lenses.storage.postgres.database=agent
      lenses.storage.postgres.username=lenses
      lenses.storage.postgres.password=lenses
agentKey:
  value: ${LENSES_HQ_AGENT_KEY}
docker run \                                                                          
  --name "xxx" \
  --network=lenses \
  --restart=unless-stopped \
  -e PROVISION_AGENT_KEY=YOUR_AGENT_KEY \
  -e PROVISION_HQ_URL=YOUR_LENSES_HQ_URL \
  lensesio/lenses-agent:latest 
# yaml-language-server: $schema=./agent/provisioning.schema-6.1.json
terminal
docker run --name lenses-agent \
-v $(pwd)/provisioning.yaml:/mnt/provision-secrets/provisioning.yaml \
-v $(pwd)/lenses-agent.conf:/data/lenses-agent.conf \
-e LENSES_PROVISIONING_PATH=/mnt/provision-secrets \
lensesio/lenses-agent:latest
Overview

Learn about configuring an Agent.

Provisioning

Learn how to connect the Agent to your Kafka environment.

Hardware & OS

Learn about the hardware & OS requirements for Linux archive installs.

Memory & CPU

Learn about memory & CPU requirements for the Agent.

Agent Database

Configure the backing store for the Agent.

TLS

Configure TLS on Lenses for HTTPS.

Kafka ACLs

Configure the Kafka ACLs Lenses needs to operate.

Rate Limiting

Configure rate limiting for API calls against Schema Registries and Kafka Connect Clusters.

JMX Metrics

Configure Lenses to expose JMX metrics.

JVM Options

Understand how to customize the Lenses JVM settings.

SQL Processor Modes

Configure how and where Lenses deploys SQL Processors.

Logs

Understand and customize Lenses logging.

Plugins

Add your own plugins to extend Lenses functionality.

Configuration Reference

Review Agent configuration reference.

Apache Kafka

Connect the Lenses Agent to your Apache Kafka cluster.

Aiven

Connect the Lenses Agent to your Aiven Kafka cluster.

AWS MSK

Connect the Lenses Agent to your AWS MSK cluster.

AWS MSK Serverless

Connect the Lenses Agent to your AWS MSK Serverless.

Azure Event Hubs

Connect the Lenses Agent to your Azure Event Hubs.

Azure HDInsight

Connect the Lenses Agent to your Azure HDInsight cluster.

Confluent Cloud

Connect the Lenses Agent to your Confluent Cloud.

Confluent Platform

Connect the Lenses Agent to your Confluent Platform (on premise) cluster.

IBM Event Streams

Connect the Lenses Agent to your IBM Event Streams cluster.

Overview

Learn how Lenses IAM works.

Authentication

Learn about how to authenticate users in Lenses.

Roles

Learn how Roles work in Lenses IAM.

Groups

Learn how Groups work in Lenses IAM.

Users

Learn different user types in Lenses.

Service Accounts

Learn how Service Accounts work in Lenses.

IAM Reference

Reference for Lenses IAM.

Examples

Examples of IAM Policies

Overview

Learn about provisioning.

HQ

Learn how to connect the Agent to HQ.

Kafka

Learn how to connect the Agent to Kafka.

Schema Registries

Learn how to connect the Agent to Schema Registries.

Kafka Connect

Learn how to connect the Agent to Kafka Connect Clusters.

Zookeeper

Learn how to connect the Agent to Zookeeper.

AWS

Learn how to connect the Agent to AWS.

Alert & Auditing Integrations

Learn how to connect the Agent to Alert & Audit Integrations.

JMX

Learn how to connect the Agent to JMX for Kafka, Schema Registries, Kafka Connect and others.


Overview

Learn an overview of connecting the Lenses Agent to Schema Registries.

AWS Glue

Connect the Lenses Agent to your AWS Glue service for schema registry support.

Confluent

Connect the Lenses Agent to Confluent Schema Registry.

IBM Event Streams

Connect the Lenses Agent to IBM Event Streams Schema Registry

Apicurio

Connect the Lenses Agent to Apicurio.


Global Catalogue

Learn how to use the Global Catalogue.

Environment

Learn how to explore topics in an environment.

terminal
curl -L https://lenses.io/preview -o docker-compose.yml \
 && ACCEPT_EULA=true docker compose up -d --wait \
 && echo "Lenses.io is running on http://localhost:9991"

Deploying an Agent

This page describes deploying a Lenses Agent via Docker.

The Agent docker image can be configured via environment variables or via volume mounts for the configuration files.

Connect the Agent to HQ

You can connect the agent to HQ in two ways, both via provisioning:

  1. Start the Agent docker with an AGENT_KEY set via environment variables at minimum. You need to create an environment in HQ to get this key.

  2. Or mount a provisioning file that contains the connection to HQ, recommended for TLS-enabled HQs.

You can still reference environment variables if you mount the file, e.g.:

agentKey:
  value: ${LENSES_HQ_AGENT_KEY}

Minimal Start

First deploy HQ and create an environment, then with the AGENT KEY run:

docker run \                                                                          
  --name "xxx" \
  --network=lenses \
  --restart=unless-stopped \
  -e PROVISION_AGENT_KEY=YOUR_AGENT_KEY \
  -e PROVISION_HQ_URL=YOUR_LENSES_HQ_URL \
  lensesio/lenses-agent:latest 

This will start and connect to HQ, but not to Kafka or other services. It will create a provisioning file in data/provisioning.

Set the docker network accordingly

Start and mount provisioning

terminal
docker run --name lenses-agent \
-v $(pwd)/provisioning.yaml:/mnt/provision-secrets/provisioning.yaml \
-e LENSES_PROVISIONING_PATH=/mnt/provision-secrets \
lensesio/lenses-agent:6.0

Example provisioning files:

provisioning.yaml
lensesHq:
  - name: lenses-hq
    version: 1
    tags: ['hq']
    configuration:
      server:
        value: [LENSES_HQ_URL]
      port:
        value: 10000
      agentKey:
        value: ${LENSES_HQ_AGENT_KEY}
      sslEnabled:
        value: true
      sslTruststore:
        file: "hq-truststore.jks"
      sslTruststorePassword:
        value: ${LENSES_HQ_AGENT_TRUSTSTORE_PWD}
provisioning.yaml
lensesHq:
  - name: lenses-hq
    version: 1
    tags: ['hq']
    configuration:
      server:
        value: [LENSES_HQ_URL]
      port:
        value: 10000
      agentKey:
        value: ${LENSES_HQ_AGENT_KEY}
      sslEnabled:
        value: false

Agent key reference

The agent key within provisioning.yaml can be referenced as:

  • an environment variable, as shown in the example above

  • an inline string, as sketched below
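For illustration, the inline form might look like this (the key value below is hypothetical):

provisioning.yaml
agentKey:
  value: agent_key_1234567890abcdef   # hypothetical literal agent key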

Database

By default, the Agent will start with an embedded database. If you wish to use Postgres, recommended for production, see here. Database settings are set in lenses-agent.conf.

Environment Variables

Environment variables prefixed with LENSES_ are transformed into corresponding configuration options. The environment variable name is converted to lowercase and underscores (_) are replaced with dots (.). For example, to set the option lenses.port, use the environment variable LENSES_PORT.
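As a sketch of this mapping, using the lenses.port example from above (the value is illustrative):

terminal
# Setting the environment variable...
docker run -e LENSES_PORT=9991 lensesio/lenses-agent:latest
# ...is equivalent to this line in lenses-agent.conf:
# lenses.port=9991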

Alternatively, the lenses-agent.conf can be mounted directly as

  • /mnt/settings/lenses-agent.conf


Docker volumes

The Docker image exposes four volumes in total, where cache, logs, plugins, and persistent data are stored:

  • /data/storage

  • /data/plugins

  • /data/logs

  • /data/kafka-streams-state

Storage volume

Resides under /data/storage and is used to store persistent data, such as Data Policies. For this data to survive between Docker runs and/or Agent upgrades, the volume must be managed externally (persistent volume).

Plugins volume

Resides under /data/plugins; it’s where classes that extend the Agent may be added, such as custom Serdes, LDAP filters, UDFs for the Lenses SQL table engine, and custom HTTP authentication implementations.

Logs volume

Resides under /data/logs; logs are stored here. The application also logs to stdout, so the log files aren’t needed in most cases.

KStreams state volume

Resides under /data/kafka-streams-state; used when Lenses SQL runs in IN_PROC mode. In that case, Lenses uses this scratch directory to cache Lenses SQL internal state. Whilst this directory can safely be removed, it can be beneficial to keep it around so the Processors won’t have to rebuild their state during a restart.
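As a sketch (the volume names below are hypothetical), the four volumes can be persisted with named Docker volumes:

terminal
docker run --name lenses-agent \
  -v lenses-agent-storage:/data/storage \
  -v lenses-agent-plugins:/data/plugins \
  -v lenses-agent-logs:/data/logs \
  -v lenses-agent-kstreams:/data/kafka-streams-state \
  lensesio/lenses-agent:latest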


Agent TLS and Global JVM Trust Store

By default, the Agent serves connections over plaintext (HTTP). It is possible to use TLS instead. The Docker image offers the ability to provide the content for extra files via secrets mounted as files or as environment variables. Especially for SSL, the Docker image supports SSL/TLS keys and certificates in Java Keystore (JKS) format.

This capability is optional, and users can mount such files under custom paths and configure lenses-agent.conf manually via environment variables, or lenses.append.conf.

There are two ways to use the File/Variable names of the table below.

  1. Create a file with the appropriate filename as listed below and mount it under /mnt/settings, /mnt/secrets, or /run/secrets

  2. Set them as environment variables.

All settings, except for passwords, can optionally be encoded in base64. The Docker image will detect such encoding automatically. A sketch of this is shown after the table below.

File / Variable Name
Description

FILECONTENT_JVM_SSL_TRUSTSTORE

The SSL/TLS trust store to use as the global JVM trust store. Add to LENSES_OPTS the property javax.net.ssl.trustStore

FILECONTENT_JVM_SSL_TRUSTSTORE_PASSWORD

The trust store password. If set, the startup script will automatically add the property javax.net.ssl.trustStorePassword to LENSES_OPTS (base64 not supported).

FILECONTENT_LENSES_SSL_KEYSTORE

The SSL/TLS keystore to use for the TLS listener for the Agent
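For example, a sketch of passing a base64-encoded trust store via an environment variable (file name and password are illustrative; passwords themselves cannot be base64-encoded):

terminal
export FILECONTENT_JVM_SSL_TRUSTSTORE="$(base64 -w0 truststore.jks)"
docker run \
  -e FILECONTENT_JVM_SSL_TRUSTSTORE \
  -e FILECONTENT_JVM_SSL_TRUSTSTORE_PASSWORD=changeit \
  lensesio/lenses-agent:latest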


Process UID/GID

The Docker image does not require running as root. The default user is set to root for convenience and to verify upon start-up that all the directories and files have the correct permissions. The user drops to nobody and group nogroup (65534:65534) before starting the Agent.

If the image is started without root privileges, the agent will start successfully using the effective uid:gid applied. Ensure any volumes mounted (i.e., for the settings and data) have the correct permissions set.
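For example (a sketch using the nobody:nogroup pair mentioned above; make sure any mounted volumes are writable by this uid:gid):

terminal
docker run --user 65534:65534 \
  -v lenses-agent-storage:/data/storage \
  lensesio/lenses-agent:latest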

Keycloak SSO

This page describes configuring Keycloak SSO for Lenses authentication.

1

Create a new SAML application client in Keycloak

  • Go to Clients

  • Click Create

  • Fill in the details: see the table below.

  • Click Save

Setting
Value

Client ID

Use the base.url of the Lenses installation e.g. https://lenses-dev.example.com

Client Protocol

Set it to saml

Client Saml Endpoint

This is the Lenses API point for Keycloak to call back. Set it to [BASE_URL]/api/v2/auth/saml/callback?client_name=SAML2Client. e.g. https://lenses-dev.example.com/api/v2/auth/saml/callback?client_name=SAML2Client

2

Update Client Settings

Change the settings on the client you just created to:

Setting
Value

Name

Lenses

Description

(Optional) Add a description to your app.

SAML Signature Name

KEY_ID

Client Signature Required

OFF

Force POST Binding

ON

Front Channel Logout

OFF

Force Name ID Format

ON

Name ID Format

email

Root URL

Use the base.url of the Lenses installation e.g. https://lenses-dev.example.com

Valid Redirect URIs

Use the base.url of the Lenses installation e.g. https://lenses-dev.example.com

3

Map users to groups

Configure Keycloak to communicate groups to Lenses. Head to the Mappers section (under the Client scopes tab).

  1. Click Create

  2. Fill in the details: see table below.

  3. Click Save

Setting
Value

Name

Groups

Mapper Type

Group list

Group attribute name

groups (case-sensitive)

Single Group Attribute

ON

Full group path

OFF

4

Download SAML Certificates

Download the Federation Metadata XML file with the Keycloak IdP details.

5

Configure SAML in HQ

SAML configuration is set in HQ's config.yaml file. See here for more details.

Azure EventHubs

This page describes connecting Lenses to Azure EventHubs.

Azure EventHubs only supports delete or compact as a topic cleanup policy.

Only one Kafka connection is allowed.

The name must be kafka.

See JSON schema for support.

Environment variables are supported; escape the dollar sign

sslKeystorePassword:
  value: "\${ENV_VAR_NAME}"
1

Create a Data Integration API key

  • Add a shared access policy

    • Navigate to your Event Hub resource and select Shared access policies in the Settings section.

    • Select + Add shared access policy, give a name, and check all boxes for the permissions (Manage, Send, Listen)

  • Once the policy is created, obtain the Primary Connection String by clicking the policy and copying the connection string. The connection string will be used as a JAAS password to connect to Kafka.

  • The bootstrap broker is [YOUR_EVENT_HUBS_NAMESPACE].servicebus.windows.net:9093.

2

Configure Provisioning

Set the following in the provisioning.yaml:

Due to an Azure EventHubs limitation, the pricing tier for the Event Hub has to be at least Standard.

First, set the environment variable

Note that "\" at "$ConnectionString" is set additionally to escape the $ sign.

terminal
export SASL_JAAS_CONFIG=org.apache.kafka.common.security.plain.PlainLoginModule required username="\$ConnectionString" password="Endpoint=sb://[SB_URL]/;SharedAccessKeyName=[KEY_NAME];SharedAccessKey=[ACCESS_KEY]";
provision.yaml
kafka:
- name: kafka
  version: 1
  tags: [my-tag]
  configuration:
    kafkaBootstrapServers:
      value:
        - SASL_SSL://[YOUR_BOOTSTRAP_SERVER]
        - SASL_SSL://[YOUR_BOOTSTRAP_SERVER]
    saslJaasConfig:
      value: '${SASL_JAAS_CONFIG}'
    saslMechanism:
      value: PLAIN
    protocol:
      value: SASL_SSL
provision.yaml
connections:
  kafka:
  - name: Kafka
    version: 1
    tags: [my-tag]
    configuration:
      kafkaBootstrapServers:
        value:
          - SASL_SSL://[YOUR_BOOTSTRAP_SERVER]
          - SASL_SSL://[YOUR_BOOTSTRAP_SERVER]
      saslJaasConfig:
        value: org.apache.kafka.common.security.plain.PlainLoginModule required username="\$ConnectionString" password="Endpoint=sb://[SB_URL]/;SharedAccessKeyName=[KEY_NAME];SharedAccessKey=[ACCESS_KEY]";
      saslMechanism:
        value: PLAIN
      protocol:
        value: SASL_SSL

AWS Glue

This page describes connecting to AWS Glue.

The AWS Glue Schema Registry connection depends on an AWS connection.

Only one Schema Registry connection is allowed.

Name must be schema-registry.

See JSON schema for support.

sslKeystorePassword:
  value: "\${ENV_VAR_NAME}"

These are examples of provisioning Lenses with an AWS connection named my-aws-connection and an AWS Glue Schema Registry that references it.

Using AWS Access Key

aws:
  - name: my-aws-connection
    tags: ["tag1"]
    version: 1      
    configuration:
      authMode: 
        value: Access Key
      accessKeyId: 
        value: my-access-key-id
      secretAccessKey: 
        value: my-secret-access-key
      region: 
        value: eu-west-1
      
glueSchemaRegistry:
  - name: schema-registry
    tags: ["tag1"]
    version: 1      
    configuration:
      authMode: 
        reference: my-aws-connection
      accessKeyId:
        reference: my-aws-connection
      secretAccessKey:
        reference: my-aws-connection
      glueRegistryArn:
        value: arn:aws:glue:[region]:[account-id]:registry/[name]

Using AWS Credentials Chain

aws:
  - name: my-aws-connection
    version: 1
    tags: []
    configuration:
      region:
        value: eu-north-1
      authMode:
        value: "Credentials Chain"

glueSchemaRegistry:
  - name: schema-registry
    version: 1
    tags: []
    templateName: SchemaRegistry
    configuration:
      authMode:
        reference: my-aws-connection
      glueRegistryArn:
        value: arn:aws:glue:[region]:[account-id]:registry/[name]

Using AWS Assume Role

aws:
  - name: my-aws-connection
    version: 1
    tags: []
    configuration:
      region:
        value: eu-north-1
      authMode:
        value: "Assume Role"
      assumeRoleArn:
        value: arn:aws:iam::[account-id]:role/[name]
      assumeRoleSessionName:
        value: [session-name]

glueSchemaRegistry:
  - name: schema-registry
    version: 1
    tags: []
    templateName: SchemaRegistry
    configuration:
      authMode:
        reference: my-aws-connection
      assumeRoleArn:
        reference: my-aws-connection
      assumeRoleSessionName:
        reference: my-aws-connection
      glueRegistryArn:
        value: arn:aws:glue:[region]:[account-id]:registry/[name]

Confluent

This page describes connecting Lenses to Confluent schema registries.

Only one Schema Registry connection is allowed.

Name must be schema-registry.

See JSON schema for support.

Environment variables are supported; escape the dollar sign

sslKeystorePassword:
  value: "\${ENV_VAR_NAME}"

Simple configuration, with JMX metrics

The URLs (nodes) should always have a scheme defined (http:// or https://).

confluentSchemaRegistry:
  - name: schema-registry
    tags: ["tag1"]
    version: 1      
    configuration:
      schemaRegistryUrls:
        value:
          - http://my-sr.host1:8081
          - http://my-sr.host2:8081
      ## all metrics properties are optional
      metricsPort: 
        value: 9581
      metricsType: 
        value: JMX
      metricsSsl: 
        value: false

Basic authentication

For Basic Authentication, define username and password properties.

confluentSchemaRegistry:
- name: schema-registry
  tags: ["tag1"]
  version: 1    
  configuration:
    schemaRegistryUrls:
      value:
        - http://my-sr.host1:8081
        - http://my-sr.host2:8081
    username: 
      value: my-username
    password: 
      value: my-password

TLS with custom truststore

A custom truststore is needed when the Schema Registry is served over TLS (encryption-in-transit) and the Registry’s certificate is not signed by a trusted CA.

confluentSchemaRegistry:
  - name: schema-registry
    tags: ["tag1"]
    version: 1      
    configuration:
      schemaRegistryUrls:
        value:
          - https://my-sr.host1:8081
          - https://my-sr.host2:8081
      sslTruststore:
        file: schema-truststore.jks
      sslTruststorePassword: 
        value: myPassword

TLS with client authentication

A custom truststore might be necessary too (see above).

confluentSchemaRegistry:
  - name: schema-registry
    tags: ["tag1"]
    version: 1      
    configuration:
      schemaRegistryUrls:
        value:
          - https://my-sr.host1:8081
          - https://my-sr.host2:8081
      sslKeystore:
        file: schema-keystore.jks
      sslKeystorePassword: 
        value: myPassword

Hard or soft delete

By default, Lenses will use hard delete for Schema Registry. To use soft delete, add the following property:

confluentSchemaRegistry:
  - name: schema-registry
    tags: ["tag1"]
    version: 1      
    configuration:
      schemaRegistryUrls:
        value:
          - http://my-sr.host1:8081
          - http://my-sr.host2:8081
      hardDelete:
        value: true      

Zookeeper

This page describes adding a Zookeeper connection to the Lenses Agent.

Only one Zookeeper connection is allowed.

The name must be zookeeper.

See JSON schema for support.

Environment variables are supported; escape the dollar sign

sslKeystorePassword:
  value: "\${ENV_VAR_NAME}"

Only one Zookeeper connection is allowed.

Simple configuration, without metrics

zookeeper:
- name: Zookeeper
  version: 1
  tags: ["tag1"]
  configuration:
    zookeeperUrls:
      value:
        - my-zookeeper-host-0:2181
        - my-zookeeper-host-1:3181
        - my-zookeeper-host-2:4181
    # optional, a suffix to Zookeeper's connection string
    zookeeperChrootPath: 
      value: "/mypath" 
    zookeeperSessionTimeout: 
      value: 10000 # in milliseconds
    zookeeperConnectionTimeout: 
      value: 10000 # in milliseconds

Simple configuration, with JMX metrics

Simple configuration with Zookeeper metrics read via JMX.

zookeeper:    
- name: Zookeeper
  version: 1
  tags: ["tag1"]
  configuration:
    zookeeperUrls:
      value:
        - my-zookeeper-host-0:2181
        - my-zookeeper-host-1:3181
        - my-zookeeper-host-2:4181
    # optional, a suffix to Zookeeper's connection string
    zookeeperChrootPath: 
      value: "/mypath" 
    zookeeperSessionTimeout: 
      value: 10000 # in milliseconds
    zookeeperConnectionTimeout: 
      value: 10000 # in milliseconds
    # all metrics properties are optional
    metricsPort: 
      value: 9581
    metricsType: 
      value: JMX
    metricsSsl: 
      value: false

With such a configuration, Lenses will use 3 Zookeeper nodes and will try to read their metrics from the following URLs (notice the same port, 9581, used for all of them, as defined by the metricsPort property):

  • my-zookeeper-host-0:9581

  • my-zookeeper-host-1:9581

  • my-zookeeper-host-2:9581

AWS

Add a connection to AWS in the Lenses Agent.

The agent uses an AWS connection in three places:

  1. AWS IAM connection to MSK for Lenses itself

  2. Connecting to AWS Glue

  3. Alert channels to CloudWatch.

If the Agent is deployed on an EC2 instance, or has access to AWS credentials via the default AWS credentials chain, those credentials can be used instead.

See JSON schema for support.

Environment variables are supported; escape the dollar sign

sslKeystorePassword:
  value: "\${ENV_VAR_NAME}"

Names must be non-empty strings of alphanumeric characters or dashes.

provisioning.yaml
aws:
- name: my-aws-connection
  version: 1
  tags: [tag1, tag2]
  configuration:
    # Way to authenticate against AWS: Credentials Chain, Access Key or Assume Role
    authMode:
      value:
    # Access key ID of an AWS IAM account.
    accessKeyId:
       value:
    # Secret access key of an AWS IAM account.
    secretAccessKey:
       value:
    # AWS region to connect to. If not provided, this is deferred to client 
    # configuration.
    region:
       value:
    # Specifies the session token value that is required if you are using temporary 
    # security credentials that you retrieved directly from AWS STS operations.
    sessionToken:
       value:
    # The Amazon Resource Name (ARN) of the IAM role to assume using AWS STS
    assumeRoleArn:
       value: arn:aws:iam::[account-id]:role/[name]
    # An identifier for the assumed role session, used to uniquely distinguish
    # sessions when assuming the same role multiple times
    assumeRoleSessionName:
       value: [session-name]

Using AWS Access Key

aws:
  - name: my-aws-connection
    tags: ["tag1"]
    version: 1      
    configuration:
      authMode: 
        value: Access Key
      accessKeyId: 
        value: my-access-key-id
      secretAccessKey: 
        value: my-secret-access-key
      region: 
        value: eu-west-1

Using AWS Credentials Chain

aws:
  - name: my-aws-connection
    version: 1
    tags: []
    configuration:
      region:
        value: eu-north-1
      authMode:
        value: "Credentials Chain"

Using AWS Assume Role

aws:
  - name: my-aws-connection
    version: 1
    tags: []
    configuration:
      region:
        value: eu-north-1
      authMode:
        value: "Assume Role"
      assumeRoleArn:
        value: arn:aws:iam::[account-id]:role/[name]
      assumeRoleSessionName:
        value: [session-name]

Kafka ACLs

This page describes the Kafka ACLs prerequisites for the Lenses Agent if ACLs are enabled on your Kafka clusters.

These ACLs are for the underlying Lenses Agent Kafka client. Lenses has its own set of permissions guarding access.

You can restrict the access of the Lenses Kafka client, but this can reduce the functionality on offer in Lenses, e.g. not allowing Lenses to create topics at all, even though this can be managed by Lenses’ own IAM system.

When your Kafka cluster is configured with an authorizer which enforces ACLs, the Agent will need a set of permissions to function correctly.

Common practice is to give the Agent superuser status or the complete list of available operations for all resources. The IAM model of Lenses can then be used to restrict the access level per user.

kafka-acls \
    --bootstrap-server [broker.url:9092] --command-config [client.properties] \
    --add \
    --allow-principal [User:Lenses] \
    --allow-host [lenses.host] \
    --operation All \
    --topic '*' \
    --group '*' \
    --delegation-token '*' \
    --cluster

Minimal Permissions

The Agent needs permission to manage and access their own internal Kafka topics:

  • __topology

  • __topology__metrics

kafka-acls \
    --bootstrap-server [broker.url:9092] --command-config [client.properties] \
    --add \
    --allow-principal [User:Lenses] \
    --allow-host [lenses.host] \
    --operation All \
    --topic [topic]

It also needs Read and Describe permissions on the consumer offsets and Kafka Connect topics, if enabled:

  • __consumer_offsets

  • connect-configs

  • connect-offsets

  • connect-status

kafka-acls \
    --bootstrap-server [broker.url:9092] --command-config [client.properties] \
    --add \
    --allow-principal [User:Lenses] \
    --allow-host [lenses.host] \
    --operation Describe \
    --operation DescribeConfigs \
    --operation Read \
    --topic [topic]

The same set of permissions is required for any topic to which the agent must have read access.

kafka-acls \
    --bootstrap-server [broker.url:9092] --command-config [client.properties] \
    --add \
    --allow-principal [User:Lenses] \
    --allow-host [lenses.host] \
    --operation Describe \
    --operation DescribeConfigs \
    --operation Read \
    --topic '*'

DescribeConfigs was added in Kafka 2.0. It may not be needed for versions before 2.2.

Additional permissions are needed to produce to topics or manage them.

Consumer Groups

Permission to at least read and describe consumer groups is required to take advantage of the Consumer Groups' monitoring capabilities.

kafka-acls \
    --bootstrap-server [broker.url:9092] --command-config [client.properties] \
    --add \
    --allow-principal [User:Lenses] \
    --allow-host [lenses.host] \
    --operation Describe \
    --operation Read \
    --group '*'

Additional permissions are needed to manage groups.

ACLs

To manage ACLs, permission to the cluster is required:

kafka-acls \
    --bootstrap-server [broker.url:9092] --command-config [client.properties] \
    --add \
    --allow-principal [User:Lenses] \
    --allow-host [lenses.host] \
    --operation Describe \
    --operation DescribeConfigs \
    --operation Alter \
    --cluster

Plugins

This page describes how to install plugins in the Lenses Agent.

The following implementations can be specified:

  1. Serializers/Deserializers Plug in your serializer and deserializer to enable observability over any data format (e.g., Protobuf or Thrift)

  2. Custom authentication Authenticate users on your proxy and inject permissions HTTP headers.

  3. LDAP lookup Use multiple LDAP servers or your group mapping logic.

  4. SQL UDFs User Defined Functions (UDF) that extend SQL and streaming SQL capabilities.

Once built, the jar files and any plugin dependencies should be added to the Agent and, in the case of Serializers and UDFs, to the SQL Processors if required.

Adding plugins

On startup, the Agent loads plugins from the $LENSES_HOME/plugins/ directory and any location set in the environment variable LENSES_PLUGINS_CLASSPATH_OPTS. The directories are watched, so dropping in a new plugin will hot-reload it. For the Agent docker (and Helm chart), use /data/plugins.

Any first-level directories under the paths mentioned above that are detected on startup will also be monitored for new files. During startup, the list of monitored locations is shown in the logs to help confirm the setup.

...
Initializing (pre-run) Lenses
Installation directory autodetected: /opt/lenses
Current directory: /data
Logback configuration file autodetected: logback.xml
These directories will be monitored for new jar files:
 - /opt/lenses/plugins
 - /data/plugins
 - /opt/lenses/serde
Starting application
...

Whilst all jar files may be added to the same directory (e.g. /data/plugins), it is suggested to use a directory hierarchy to make management and maintenance easier.

An example hierarchy for a set of plugins:

├── security
│   └── sso_header_decoder.jar
├── serde
│   ├── protobuf_actions.jar
│   └── protobuf_clients.jar
└── udf
    ├── eu_vat.jar
    ├── reverse_geocode.jar
    └── summer_sale_discount.jar

SQL Processors in Kubernetes

There are two ways to add custom plugins (UDFs and Serializers) to the SQL Processors: (1) by making a tar.gz archive available at an HTTP(S) address, or (2) by creating a custom docker image.

Archive served via HTTP

With this method, a tar archive, compressed with gzip, is created that contains all plugin jars and their dependencies. This archive should then be uploaded to a web server that the SQL Processor containers can access, and its address set with the option lenses.kubernetes.processor.extra.jars.url.

Step by step:

  1. Create a tar.gz file that includes all required jars at its root:

    tar -czf [FILENAME.tar.gz] -C /path/to/jars/ *
  2. Upload to a web server, e.g. https://example.net/myfiles/FILENAME.tar.gz

  3. Set

    lenses.kubernetes.processor.extra.jars.url=https://example.net/myfiles/FILENAME.tar.gz

    For the docker image, set the corresponding environment variable

    LENSES_KUBERNETES_PROCESSOR_EXTRA_JARS_URL=https://example.net/myfiles/FILENAME.tar.gz

Custom Docker image

The SQL Processors inside Kubernetes use the docker image lensesio-extra/sql-processor. It is possible to build a custom image and add all the required jar files under the /plugins directory, then set lenses.kubernetes.processor.image.name and lenses.kubernetes.processor.image.tag options to point to the custom image.

Step by step:

  1. Create a Docker image using lensesio-extra/sql-processor:VERSION as a base and add all required jar files under /plugins:

    FROM lensesio-extra/sql-processor:4.2
    ADD jars/* /plugins
    docker build -t example/sql-processor:4.2 .
  2. Upload the docker image to a registry:

    docker push example/sql-processor:4.2
  3. Set

    lenses.kubernetes.processor.image.name=example/sql-processor
    lenses.kubernetes.processor.image.tag=4.2

    For the docker image, set the corresponding environment variables

    LENSES_KUBERNETES_PROCESSOR_IMAGE_NAME=example/sql-processor
    LENSES_KUBERNETES_PROCESSOR_IMAGE_TAG=4.2
# yaml-language-server: $schema=https://raw.githubusercontent.com/lensesio/json-schemas/refs/heads/main/agent/provisioning.schema.json
provisioning.yaml

Users

This page describes Users in Lenses.

Users are assigned to groups. The groups inherit permissions from the roles assigned to the groups.

Users can be manually created in Lenses. Users can either be of type:

  1. SSO, or

  2. Basic Authentication

When creating a User, you can assign them group membership.

Each user, once logged in, can update their Name, Profile Photo and set an email address.

For SSO, your SSO email is still required to log in.

Create a User

To create a User, go to IAM->Users->New User; once created you can assign the user to a group.

IAM Users

You can also manage Users via the CLI and YAML, for integration in your CI/CD pipelines.

terminal
➜  lenses users
Usage:
  lenses users [command]

Aliases:
  users, usr, u

Available Commands:
  create         Creates a new user.
  delete         Deletes a user.
  get            Returns a specific user
  get-current    Returns the currently authenticated user
  list           Returns all users
  metadata       Manages user metadata.
  set-groups     Assigns the given user exactly to the provided groups, ensuring they are not part of any other groups.
  update         Updates a user.
  update-profile Allows updating fields of the user profile.

Groups

This page describes IAM groups in Lenses.

Groups are a collection of users, service accounts and roles.

Assigning Users to Groups

Users can be assigned to Groups in two ways:

  1. Manual

  2. Linked from the groups provided by your SSO provider

This behaviour can be toggled in the organizational settings of your profile. To control the default, set the following in the config.yaml for HQ.

users_group_membership_management_mode: [manual|sso]

Groups can be defined with the following metadata:

  1. Colour

  2. Description

Each group has a resource name that uniquely identifies it across an HQ installation.

Create a Group

To create a Group, go to IAM->Groups->New Group; create the group, then assign members, service accounts and roles.

IAM Groups

You can also manage Groups via the CLI and YAML, for integration in your CI/CD pipelines.

terminal
➜  lenses groups
Manage Groups.

Usage:
  lenses groups [command]

Aliases:
  groups, grp

Available Commands:
  create      Creates a new Group.
  delete      Deletes a group.
  get         Gets a group by its name.
  list        Lists all groups
  metadata    Manages group metadata.
  update      Updates a group.

Service Accounts

This page describes Service accounts in Lenses.

Service accounts are intended for programmatic access to Lenses.

Service accounts are assigned to groups. The groups inherit permissions from the roles assigned to the groups.

Each service account has a key that is used to authenticate and identify the service account.

In addition, you can set:

  1. Description

  2. Resource name - Must be unique across Lenses.

  3. Key expiry

  4. Regenerate the key

Key expiry can be 7, 30, 60 or 90 days, 1 year, a custom expiration, or no expiration at all.

Creating a Service Account

To create a Service Account, go to IAM->Service Accounts->New Service Account; once created you can assign it to groups.

IAM Service Account

You can also manage Service Accounts via the CLI and YAML, for integration in your CI/CD pipelines.

terminal
➜  hq service-accounts
Manage ServiceAccounts.

Usage:
  hq service-accounts [command]

Aliases:
  service-accounts, sa

Available Commands:
  create      Creates a new ServiceAccount.
  delete      Deletes a ServiceAccount.
  get         Returns a specific ServiceAccount.
  list        Returns all ServiceAccounts.
  metadata    Manages service-account metadata.
  renew-token Renews the service account's token. The current token is invalidated and a new one is generated. An optional expiration timestamp can be provided.
  set-groups  Assigns the given service account exactly to the provided groups, ensuring they are not part of any other groups.
  update      Updates a service account.

API Calls

When interacting with Lenses via APIs, set the service account token in the header:

"Authorization": "Bearer sa_token"

Google SSO

This page describes configuring Google SSO for Lenses authentication.

1

Create a custom attribute for Lenses groups

Google doesn't expose the groups, or organization unit, of a user to a SAML app. This means we must set up a custom attribute for the Lenses groups that each user belongs to.

Open the Google Admin console from an administrator account.

  • Click the Users button

  • Select the More dropdown and choose Manage custom attributes

  • Click the Add custom attribute button

  • Fill the form to add a Text, Multi-value field for Lenses Groups, then click Add

Learn more about Google custom attributes.

2

Assign Lenses groups attributes to Google users

The attribute values should correspond exactly with the names of groups created within Lenses.

Open the Google Admin console from an administrator account.

  • Click the Users button

  • Select the user to update

  • Click User information

  • Click the Lenses Groups attribute

  • Enter one or more groups and click Save

3

Add Google custom SAML app

Learn more about Google custom SAML apps.

  • Open the Google Admin console from an administrator account.

  • Click the Apps button

  • Click the SAML apps button

  • Select the Add App dropdown and choose Add custom SAML app

  • Run through the below steps

App Details

  • Enter a descriptive name for the Lenses installation

  • Upload a Lenses icon

This will appear in the Google apps menu once the app is enabled

4

Configure SAML

Service provider details

Given the base URL of the Lenses installation, e.g. https://lenses-dev.example.com, fill out the settings:

Setting
Value

ACS URL

Use the base url with the callback path e.g. https://lenses-dev.example.com/api/v2/auth/saml/callback?client_name=SAML2Client

Entity ID

Use the base url e.g. https://lenses-dev.example.com

Start URL

Leave empty

Signed Response

Leave unchecked

Name ID format

Leave as UNSPECIFIED

Name ID

Leave as Basic Information > Primary Email

Attribute mapping

  • Add a mapping from the custom attribute for Lenses groups to the app attribute groups

Enable the app

  • From the newly added app details screen, select User access

  • Turn on the service

Lenses will reject any user that doesn't have the groups attribute set, so enabling the app for all users in the account is a good option to simplify ongoing administration.

Download the Federation Metadata XML file with the Google IdP details.

5

Download SAML Certificates

Click Download Metadata and save the metadata file for configuring Lenses.

6

Configure SAML in HQ

SAML configuration is set in HQ's config.yaml file. See here for more details.

Overview

This page describes an overview of Lenses Agent Provisioning.

Connections are defined in the provisioning.yaml file. The Agent will watch the file and resolve the desired state, applying connections defined in the file.

When deploying via Helm, the provisioning.yaml is part of the Agent values.yaml file.

Minimum configuration

The minimum configuration needed is a connection to Lenses HQ. Once the connection is established you can use the Lenses APIs to configure and test the remaining connections, or provide the full configuration at start up.

The minimum configuration only connects to HQ; the Agent is not yet fully configured to connect to Kafka.
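For example, an HQ-only provisioning.yaml, mirroring the non-TLS example from the Docker deployment section earlier in this document:

provisioning.yaml
lensesHq:
  - name: lenses-hq
    version: 1
    tags: ['hq']
    configuration:
      server:
        value: [LENSES_HQ_URL]
      port:
        value: 10000
      agentKey:
        value: ${LENSES_HQ_AGENT_KEY}
      sslEnabled:
        value: false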

Uploading a configuration

The APIs will validate the schema and connectivity and, if valid, update the file used by the Agent. They update the file provided at start up.

The file is the source of truth for connection management

Defining a Connection

Connections are defined in the provisioning.yaml. This file is divided into components, each component representing a type of connection.

For each connection, the following fields are mandatory:

  1. Name - the free-form name of the connection

  2. Version - set to 1

  3. Configuration - a list of keys/values dependent on the component type.

Example provisioning.yaml

IDE & JSON Schema Support

To help you create a provisioning file you can use the JSON schema support. In your IDE, such as VS Code, create a file called provisioning.yaml and add the following line at the top:
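provisioning.yaml
# yaml-language-server: $schema=https://raw.githubusercontent.com/lensesio/json-schemas/refs/heads/main/agent/provisioning.schema.json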

Then start typing; for example, k for Kafka and Kafka Connect, s for Schema Registry, or just ctrl+space to trigger the default templates.

Fill in the required fields, your editor should highlight issues for you.

Managing secrets

The provisioning.yaml contains secrets. If you are deploying via Helm, the chart will use Kubernetes secrets.

Support is provided for referencing environment variables. This allows you to set secrets in your environment and resolve the value at runtime.

Escape the dollar sign

Referencing files

Many connections need files, for example, to secure Kafka with SSL you will need a key store and optionally a trust store.

To reference a file in the provisioning.yaml, for example, given:
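sslKeystore:
  file: my-keystore.jks   # a connection property referencing a local file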

a file called my-keystore.jks is expected in the same directory.

AWS MSK Serverless

This page describes how to connect Lenses to an Amazon MSK Serverless cluster.

Only one Kafka connection is allowed.

The name must be kafka.

See JSON schema for support.

Environment variables are supported; escape the dollar sign

It is recommended to install the Agent on an EC2 instance or with EKS in the same VPC as your MSK Serverless cluster.

Security Groups

Enable communication between the Agent and the Amazon MSK Serverless cluster by opening the Amazon MSK Serverless cluster's security group in the AWS Console and adding the IP address of your Agent installation.

IAM Policy

To authenticate the Agent and access resources within your MSK Serverless cluster, you'll need to create an IAM policy and apply it to the resource (EC2 instance, EKS cluster, etc.) running the Agent service. Here is an example IAM policy with sufficient permissions which you can associate with the relevant IAM role:

The MSK Serverless IAM policy is to be used after cluster creation. Update this IAM policy with the relevant ARN.

Select your MSK endpoint

Click your MSK Serverless Cluster in the MSK console and select View Client Information to check the bootstrap server endpoint.
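Alternatively (a sketch, not from the original page), the bootstrap endpoint can also be fetched with the AWS CLI:

terminal
aws kafka get-bootstrap-brokers --cluster-arn [YOUR_CLUSTER_ARN]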

Configure Provisioning

SQL Processor Configurations

To enable the creation of SQL Processors that create consumer groups, you need to add the following statement in your IAM policy:

Update the placeholders in the IAM policy based on the relevant MSK Serverless cluster ARN.

To integrate with the AWS Glue Schema Registry, you also need to add the following statement for the registries and schemas in your IAM policy:

Update the placeholders in the IAM policy based on the relevant MSK Serverless cluster ARN.

To integrate with the AWS Glue Schema Registry, you also need to modify the security policy for the registry and schemas to include the following additional actions:

More details about how IAM works with MSK Serverless can be found in the AWS documentation.

Limitations

When using the Agent with MSK Serverless:

  • The agent does not receive Prometheus-compatible metrics from the brokers because they are not exported outside of CloudWatch.

  • The agent does not configure quotas and ACLs because MSK Serverless does not allow this.

JMX Metrics

This page describes how to retrieve Lenses Agent JMX metrics.

The JMX endpoint is managed by the lenses.jmx.port option. To disable JMX, leave the option empty.

To enable monitoring of the Agent metrics:

To export via Prometheus exporter:

The Agent Docker image (lensesio/lenses-agent) automatically sets up the Prometheus endpoint. You only have to expose port 9102 to access it.

Setting up the JMX Agent with Basic Auth.

This is done in two parts: first, setting up the files the JMX agent requires; second, the options to pass to the agent.

Setting up required files

First, create a new folder called jmxremote.

To enable basic auth JMX, first create two files:

  • jmxremote.access

  • jmxremote.password

JMX Password file

The password file holds the credentials that the JMX agent will check during client authentication.

The password file registers two users:

  • username admin, password admin

  • username guest, password admin

JMX Access file

The access file holds authorization information: who is allowed to do what.

Here, the admin user can do read and write operations in JMX, while the guest user can only read the JMX content.

Enable JMX with Basic Auth Protection

Now, to enable JMX with basic auth protection, pass the following options to the JRE environment that runs the Java process whose JMX you need to protect.

Let’s assume this java process is Kafka.

Change the permissions on both files so that only the owner can view and edit them.

If you do not change the permissions to 0600 and the ownership to the user that will run the JRE process, the JMX agent will raise an error complaining that the process is not the owner of the files used for authentication and authorization.

Finally, export the following options in the environment of the user that will run Kafka.

Secure JMX with TLS Encryption

First set up JMX with basic auth as shown in the Secure JMX: Basic Auth page.

To enable TLS Encryption/Authentication in JMX you need a jks keystore and truststore.

Please note that both the JKS truststore and keystore should have the same password.

The reason is that the javax.net.ssl classes use the password you pass for the keystore as the key password.

Let’s assume this Java process is Kafka and that you have installed keystore.jks and truststore.jks under /etc/certs.

Export the following options in the environment of the user that will run Kafka.

Tips: Before Upgrade

Lenses Agent
Lenses HQ

Read more about how they work together in the documentation.

Pick an upgrade method

There are two upgrade methods and each has its ups and downs:

Decision Matrix

Choose Side by Side Migration When:

  • SLA requires < 5 minutes downtime or follows blue-green deployment patterns.

  • Connections in Lenses 4/5 have been set by Wizard mode.

Choose In-Place When:

  • Downtime windows are acceptable.

  • Configuration is simple (already use Provision v2).
LENSES_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.local.only=false -Djava.rmi.server.hostname=[HOSTNAME]"
export LENSES_OPTS="-javaagent:/path/to/jmx_exporter/fastdata_agent.jar=9102:/path/to/jmx_exporter/client.yml"
mkdir -vp /etc/jmxremote
cat /etc/jmxremote/jmxremote.password 
admin admin
guest admin
cat /etc/jmxremote/jmxremote.access 
admin readwrite
guest readonly
chmod -R 0600 /etc/jmxremote
chown -R <user-that-will-run-kafka-name>:<user-that-will-run-kafka-group> /etc/jmxremote/jmxremote.*
export BROKER_JMX_OPTS="-Dcom.sun.management.jmxremote=true \
  -Dcom.sun.management.jmxremote.authenticate=true \
  -Dcom.sun.management.jmxremote.ssl=false \
  -Dcom.sun.management.jmxremote.local.only=false \
  -Djava.rmi.server.hostname=10.15.3.1 \
  -Dcom.sun.management.jmxremote.rmi.port=9581 \
  -Dcom.sun.management.jmxremote.access.file=/etc/jmxremote/jmxremote.access \
  -Dcom.sun.management.jmxremote.password.file=/etc/jmxremote/jmxremote.password \
  -Dcom.sun.management.jmxremote.port=9581"
export BROKER_JMX_OPTS="-Dcom.sun.management.jmxremote=true \
  -Dcom.sun.management.jmxremote.authenticate=true \
  -Dcom.sun.management.jmxremote.ssl=true \
  -Dcom.sun.management.jmxremote.local.only=false \
  -Djava.rmi.server.hostname=10.15.3.1 \
  -Dcom.sun.management.jmxremote.rmi.port=9581 \
  -Dcom.sun.management.jmxremote.access.file=/etc/jmxremote.access \
  -Dcom.sun.management.jmxremote.password.file=/etc/jmxremote.password \
  -Dcom.sun.management.jmxremote.port=9581 \
  -Djavax.net.ssl.keyStore=/etc/certs/kafka.jks \
  -Djavax.net.ssl.keyStorePassword=somePassword \
  -Djavax.net.ssl.trustStore=/etc/certs/truststore.jks \
  -Djavax.net.ssl.trustStorePassword=somePassword \
  -Dcom.sun.management.jmxremote.registry.ssl=true \
  -Dcom.sun.management.jmxremote.ssl.need.client.auth=true"


sslKeystorePassword:
  value: "\${ENV_VAR_NAME}"
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "kafka-cluster:Connect",
                "kafka-cluster:AlterCluster",
                "kafka-cluster:DescribeCluster"
            ],
            "Resource": "arn:aws:kafka:[region]:[aws_account_id]:cluster/[cluster_name]/[cluster_uuid]/*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "kafka-cluster:DescribeTopic",
                "kafka-cluster:CreateTopic",
                "kafka-cluster:WriteData",
                "kafka-cluster:ReadData"
            ],
            "Resource": "arn:aws:kafka:[region]:[aws_account_id]:topic/[cluster_name]/[cluster_uuid]/*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "kafka-cluster:AlterGroup",
                "kafka-cluster:DescribeGroup"
            ],
            "Resource": "arn:aws:kafka:[region]:[aws_account_id]:group/[cluster_name]/[cluster_uuid]/*"
        }
    ]
}
provisioning.yaml
kafka:
  tags: ["optional-tag"]
  name: kafka
  configuration:
    kafkaBootstrapServers:
      value:
       - SASL_SSL://your.kafka.broker.0:9098
       - SASL_SSL://your.kafka.broker.1:9098
    protocol: 
      value: SASL_SSL
    saslMechanism: 
      value: AWS_MSK_IAM
    saslJaasConfig:
      value: software.amazon.msk.auth.iam.IAMLoginModule required;
    additionalProperties:
      value:
        sasl.client.callback.handler.class: "software.amazon.msk.auth.iam.IAMClientCallbackHandler"
{
  "Action": [
    "kafka-cluster:*Topic*",
    "kafka-cluster:WriteData",
    "kafka-cluster:ReadData"
  ],
  "Resource": "arn:aws:kafka:[region]:[aws_account_id]:cluster/[cluster_name]/[cluster_uuid]/*"
}
{
  "Action": [
    "kafka-cluster:*Group*"
  ],
  "Resource": "arn:aws:kafka:[region]:[aws_account_id]:cluster/[cluster_name]/[cluster_uuid]/*"
}
{
  "Action": [
    "glue:DeregisterDataPreview",
    "glue:ListRegistries",
    "glue:CreateRegistry",
    "glue:RegisterSchemaVersion",
    "glue:GetRegistry",
    "glue:UpdateRegistry",
    "glue:ListSchemas",
    "glue:DeleteRegistry",
    "glue:GetSchema",
    "glue:CreateSchema",
    "glue:ListSchemaVersions",
    "glue:GetSchemaVersion",
    "glue:UpdateSchema",
    "glue:DeleteSchemaVersions"
  ],
  "Resource": [
    "arn:aws:glue:[region]:[aws_account_id]:registry/*",
    "arn:aws:glue:[region]:[aws_account_id]:schema/*"
  ]
}
JSON schema
MSK Serverless
MSK Serverless security group



# yaml-language-server: $schema=./agent/provisioning.schema-6.1.json
sslKeystorePassword:
  value: "\${ENV_VAR_NAME}"
    configuration:
      protocol:
        value: SASL_SSL
      sslKeystore:
        file: "my-keystore.jks"
Helm
kafka:
- name: Kafka
  version: 1
  tags: [my-tag]
  configuration:
    kafkaBootstrapServers:
      value:
        - PLAINTEXT://your.kafka.broker.0:9092
        - PLAINTEXT://your.kafka.broker.1:9092
    protocol: 
      value: PLAINTEXT
    # all metrics properties are optional
    metricsPort: 
      value: 9581
    metricsType: 
      value: JMX
    metricsSsl: 
      value: false

Provisioning API

This page describes how to use the provisioning API

The Lenses Provisioning System allows you to manage Lenses connections declaratively through YAML manifests. It provides a GitOps-friendly approach to managing your Lenses infrastructure, enabling version control, automated deployments, and consistent configuration across environments.

Key Features

  • Declarative Configuration: Define your entire Lenses infrastructure in YAML

  • File Management: Upload and manage SSL certificates, keystores, and other binary files

  • Validation: Comprehensive validation with detailed error messages

  • Selective Updates: Update only specific connections without affecting others

  • File Preservation: Existing files are preserved when not explicitly replaced

  • Connectivity Testing: Optional connectivity validation for all connections

API Endpoints

File Upload

Files are uploaded as part of the multipart form data:

curl -X POST "https://lenses-server/api/v1/state/connections/upload" \
  -H "Authorization: Bearer your-token" \
  -F "provisioning=@provisioning.yaml" \
  -F "keystore.jks=@keystore.jks" \
  -F "truststore.jks=@truststore.jks"

File Preservation

When updating connections, existing files are preserved if not explicitly provided in the new request. This allows for selective updates without losing existing SSL certificates or other files.

File names must match the file names referenced in the provisioning file.

Upload Provisioning Manifest

Endpoint: POST /api/v1/state/connections/upload

Description: Uploads a complete provisioning manifest with files. This replaces the entire connection state.

Request: multipart/form-data

  • provisioning: YAML manifest file

  • Additional files: SSL certificates, keystores, etc.

Response: ProvisioningValidationResponse

POST /api/v1/state/connections/upload
Content-Type: multipart/form-data

--boundary
Content-Disposition: form-data; name="provisioning"; filename="provisioning.yaml"
Content-Type: text/plain

kafka:
  - name: my-kafka
    version: 1
    tags: ["production"]
    configuration:
      kafkaBootstrapServers:
        value: ["localhost:9092"]
      protocol:
        value: "PLAINTEXT"

--boundary
Content-Disposition: form-data; name="keystore.jks"; filename="keystore.jks"
Content-Type: application/octet-stream

[binary keystore content]
--boundary--

Validate Provisioning Manifest

Endpoint: POST /api/v1/state/connections/validate/upload

Description: Validates a provisioning manifest without applying changes (dry-run).

Request: Same as upload endpoint Response: ProvisioningValidationResponse
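For example, a dry-run validation can mirror the upload call (the host, token and file names are illustrative):

terminal
curl -X POST "https://lenses-server/api/v1/state/connections/validate/upload" \
  -H "Authorization: Bearer your-token" \
  -F "provisioning=@provisioning.yaml" \
  -F "keystore.jks=@keystore.jks"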

Get Current Provisioning State

Endpoint: GET /api/v1/state/connections

Description: Retrieves the current provisioning.yaml file contents.

Response: Raw YAML content
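For example, the current state can be fetched and saved locally (the host and token are illustrative):

terminal
curl -X GET "https://lenses-server/api/v1/state/connections" \
  -H "Authorization: Bearer your-token" \
  -o provisioning.yaml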

Kafka Connect

This page describes adding a Kafka Connect Cluster to the Lenses Agent.

Lenses integrates with Kafka Connect Clusters to manage connectors.

The name of a Kafka Connect Connection may only contain alphanumeric characters ([A-Za-z0-9]) and dashes (-). Valid examples would be dev, Prod1, SQLCluster, Prod-1, SQL-Team-Awesome.

Multiple Kafka Connect clusters are supported.

If you are using Kafka Connect < 2.6 set the following to ensure you can see Connectors

lenses.features.connectors.topics.via.api.enabled=false

Consider Rate Limiting if you have a high number of connectors.

See JSON schema for support.

Environment variables are supported; escape the dollar sign

sslKeystorePassword:
  value: "\${ENV_VAR_NAME}"

Names must be non-empty strings containing only alphanumeric characters or dashes.

Simple configuration, with JMX metrics

The URLs (workers) should always have a scheme defined (http:// or https://).

provisioning.yaml
connect:
  - name: my-connect-cluster-name
    version: 1    
    tags: ["tag1"]
    configuration:
      workers:
        value:
          - http://my-kc.worker1:8083
          - http://my-kc.worker2:8083
      metricsPort: 
        value: 9585
      metricsType: 
        value: JMX              

Basic authentication

For Basic Authentication, define username and password properties.

provisioning.yaml
connect:
  - name: my-connect-cluster-name
    tags: ["tag1"]
    version: 1      
    configuration:
      workers:
        value:
          - http://my-kc.worker1:8083
          - http://my-kc.worker2:8083    
      username: 
        value: my-username
      password: 
        value: my-password

TLS with custom truststore

A custom truststore is needed when the Kafka Connect workers are served over TLS (encryption-in-transit) and their certificates are not signed by a trusted CA.

provisioning.yaml
connect:
  - name: my-connect-cluster-name
    tags: ["tag1"]
    version: 1      
    configuration:
      workers:
        value:
          - http://my-kc.worker1:8083
          - http://my-kc.worker2:8083    
      sslTruststore:
        file: /connect-truststore.jks
      sslTruststorePassword: 
        value: myPassword

TLS with client authentication

A custom truststore might be necessary too (see above).

provisioning.yaml
connect:
  - name: my-connect-cluster-name
    tags: ["tag1"]
    version: 1    
    configuration:
      workers:
        value:
          - http://my-kc.worker1:8083
          - http://my-kc.worker2:8083    
      sslKeystore:
        file: connect-keystore.jks
      sslKeystorePassword: 
        value: myPassword

Adding 3rd Party Connector to the Topology

If you have developed your own Connector or are not using a Lenses connector, you can still display the connector instances in the topology. To do this, Lenses needs to know the configuration option of the Connector that defines which topic the Connector reads from or writes to. This is set in the connectors.info parameter in the lenses.conf file.

lenses.conf
connectors.info = [
      {
           class.name = "The connector full classpath"
           name = "The name which will be presented in the UI"
           instance = "Details about the instance. Contains the connector configuration field which holds the information. If a database is involved it would be the DB connection details, if it is a file it would be the file path, etc"
           sink = true
           extractor.class = "The full classpath for the implementation knowing how to extract the Kafka topics involved. This is only required for a Source"
           icon = "file.png"
           description = "A description for the connector"
           author = "The connector author"
      }

  ]

Alert & Audit integrations

Connect the Lenses Agent to your alerting and auditing systems.

The Agent can send out alert and audit events. Once you have configured alert and audit connections, you can create alert and audit channels to route events to them.

See JSON schema for support.

Environment variables are supported; escape the dollar sign

sslKeystorePassword:
  value: "\${ENV_VAR_NAME}"  

Names must be non-empty strings containing only alphanumeric characters or dashes.

Alerts

DataDog

provisioning.yaml
datadog:
- name: my-datadog-connection
  version: 1
  tags: [tag1, tag2]
  configuration:
    # The Datadog site.
    site:
      value:
    # The Datadog API key.
    apiKey:
      value:   
    # The Datadog application key.
    applicationKey:
      value:  

AWS CloudWatch

See AWS connection.

PagerDuty

provisioning.yaml
pagerduty:
- name: my-pagerduty-connection
  version: 1
  tags: [tag1, tag2]
  configuration:
    # An Integration Key for PagerDuty's service with Events API v2 integration type.
    integrationKey:
      value: 

Slack

provisioning.yaml
slack:
- name: my-slack-connection
  version: 1
  tags: [tag1, tag2]
  configuration:
    # The Slack endpoint to send the alert to.
    webhookUrl:
      value: 

Alert Manager

provisioning.yaml
alertManager:
- name: my-alertmanager-connection
  version: 1
  tags: [tag1, tag2]
  configuration:
    # Comma separated list of Alert Manager endpoints.
    endpoints:
      value: 

Webhook (Email, SMS, HTTP and MS Teams)

provisioning.yaml
webhook:
- name: my-webhook-alert-connection
  version: 1
  tags: [tag1, tag2]
  configuration:
    # The host name of the webhook endpoint.
    host:
      value: 
    # The port number of the webhook endpoint. (int)
    port:
      value:  
    # Set to true in order to set the URL scheme to https. 
    # Will otherwise default to http.
    useHttps:
      value:
    # An array of (secret) strings to be passed over to alert channel plugins.
    creds:
      value:
        - 
        - 

Audits

Webhook

provisioning.yaml
webhook:
- name: my-webhook-audit-connection
  version: 1
  tags: [tag1, tag2]
  configuration:
    # The host name of the webhook endpoint.
    host:
      value: 
    # The port number of the webhook endpoint. (int)
    port:
      value:  
    # Set to true in order to set the URL scheme to https. 
    # Will otherwise default to http.
    useHttps:
      value:
    # An array of (secret) strings to be passed over to alert channel plugins.
    creds:
      value:
        - 
        - 

Splunk

provisioning.yaml
splunk:
- name: my-splunk-connection
  version: 1
  tags: [tag1, tag2]
  configuration:
    # The host name for the HTTP Event Collector API of the Splunk instance.
    host:
      value: 
    # The port number for the HTTP Event Collector API of the Splunk instance. (int)
    port:
      value:  
    # Use TLS. Boolean, default false
    useHttps:
      value:
    # This is not encouraged but is required for a Splunk Cloud Trial instance. Bool
    insecure:
      value:
    # HTTP event collector authorization token. (string)
    token:
      value:    

Database

This page describes configuring the database connection for the Lenses Agent. There are two options for the backing storage: Postgres or Microsoft SQL Server.

Postgres

Once you have created a role for the agent to use you can then configure the Agent in the lenses.conf file:

lenses.conf
lenses.storage.postgres.host="my-postgres-server"
lenses.storage.postgres.port=5432
lenses.storage.postgres.username="lenses_agent"
lenses.storage.postgres.database="lenses_agent"
lenses.storage.postgres.password="changeme"

Additional configurations for the PostgreSQL database connection can be passed under the lenses.storage.postgres.properties configuration prefix.

One Postgres server can be used for all agents, by giving each agent a separate database or schema.

For the Agent, see lenses.storage.postgres.schema and lenses.storage.postgres.database.
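For example, to point this agent at its own schema on a shared server (the schema name is illustrative):

lenses.conf
lenses.storage.postgres.schema="agent_a"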

The supported parameters can be found in the PostgreSQL documentation. For example:

lenses.conf
# require SSL encryption with full host verification
lenses.storage.postgres.properties.ssl=true
lenses.storage.postgres.properties.sslmode="verify-full"
lenses.storage.postgres.properties.sslcert="/path/to/certs/lenses.crt.pem"
lenses.storage.postgres.properties.sslkey="/path/to/certs/lenses.key.pk8"
lenses.storage.postgres.properties.sslpassword="mypassword"
lenses.storage.postgres.properties.sslrootcert="/path/to/certs/CA.crt.pem"

Database Role

terminal
# login as superuser and add Lenses role and database
psql -U postgres -d postgres <<EOF
CREATE ROLE lenses_agent WITH LOGIN PASSWORD 'changeme';
CREATE DATABASE lenses_agent OWNER lenses_agent;
EOF

Microsoft SQL Server

To configure Lenses to use a Microsoft SQL Server database, you need to add the following settings to your lenses.conf file. This example mirrors the structure of the PostgreSQL configuration above.

lenses.conf
lenses.storage.mssql.host="my-mssql-server"
lenses.storage.mssql.port=1433
lenses.storage.mssql.database="lenses_db"
lenses.storage.mssql.schema="lenses_schema"
lenses.storage.mssql.username="lenses_user"
lenses.storage.mssql.password="changeme"

Database and Login Creation

Before starting Lenses, you must create the database, schema, and login credentials on your Microsoft SQL Server instance. You can use a tool like SQL Server Management Studio (SSMS) or the sqlcmd command-line utility to execute these commands.

SQL Server Commands
-- Create the database for Lenses
CREATE DATABASE lenses_db;
GO

-- Switch to the newly created database
USE lenses_db;
GO

-- Create a login (user) for Lenses to use
CREATE LOGIN lenses_user WITH PASSWORD = 'changeme';
GO

-- Create a database user linked to the login
CREATE USER lenses_user FOR LOGIN lenses_user;
GO

-- Create a schema for Lenses
CREATE SCHEMA lenses_schema;
GO

-- Grant the necessary permissions to the user on the schema
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE TABLE, ALTER ON SCHEMA::lenses_schema TO lenses_user;
GO
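For example, with the statements above saved to a file (the file name and credentials are illustrative), they can be applied with the sqlcmd utility:

terminal
sqlcmd -S my-mssql-server -U sa -P 'changeme' -i lenses-mssql-init.sql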

Advanced Configuration with Properties

You can pass additional JDBC driver properties using the lenses.storage.mssql.properties prefix. This is useful for enabling features like connection encryption. The supported parameters can be found in the Microsoft JDBC Driver documentation.

For example, to enforce SSL encryption and validate the server certificate:

lenses.conf
# Require SSL encryption
lenses.storage.mssql.properties.encrypt="true"
lenses.storage.mssql.properties.trustServerCertificate="false"
lenses.storage.mssql.properties.hostNameInCertificate="my-mssql-server.example.com"

Connection pooling

The Agent uses the HikariCP library for high-performance database connection pooling.

The default settings should perform well but can be overridden via the lenses.storage.hikaricp configuration prefix. The supported parameters can be found in the HikariCP documentation.

Camelcase configuration keys are not supported in agent configuration and should be translated to dot notation.

For example:

# set maximumPoolSize to 25
lenses.storage.hikaricp.maximum.pool.size=25

Lenses Resource Names (LRNs)

LRNs uniquely identify all resources that Lenses understands. Examples are a Lenses User, a Kafka topic or a Kafka-Connect connector.

Use an LRN to specify a resource across all of Lenses, unambiguously:

  • To add topic permissions for a team in IAM permissions.

  • To share a consumer-group reference with a colleague.

LRN format

The top-level format has 3 parts called segments. A colon : separates them:

service:resource-type:resource-id

service

service is the namespace of the Lenses service that manages a set of resource types.

e.g. kafka for things like topics and consumer groups.

resource-type

resource-type is the type of resources that are served by a service.

e.g. topic for a Kafka topic, consumer-group for a Kafka consumer group. They both belong to the kafka service.

resource-id

resource-id is the unique name or path that identifies a resource. The resource ID is specific to a service and resource type. The resource ID can be:

  • a single resource name, e.g. :

    • user@example.com for a user resource name.

    • The full LRN would be iam:user:user@example.com.

  • a nested resource path that contains slashes / e.g. :

    • dev-environment/kafka/my-topic for a kafka topic.

    • The full LRN would be kafka:topic:dev-environment/kafka/my-topic.

Examples

  • IAM user: iam:user:user@example.com

  • Kafka topic: kafka:topic:dev-environment/kafka/my-topic

  • Kafka consumer group: kafka:consumer-group:dev-environment/kafka/my-consumer-group

  • Schema Registry schema: schemas:schema:dev-environment/schema-registry/my-topic-value

  • Kafka Connect connector: kafka-connect:connector:dev-environment/connect-cluster-1/my-s3-sink

Allowed characters

LRNs separate top-level segments with a colon : and resource path segments with a slash /.

A segment may have:

  • Alphanumeric characters: a-z, A-Z, 0-9

  • Hyphens: -

Using wildcards

Use the wildcard asterisk * to express catch-all LRNs.

Good examples

Use these examples to express multiple resources easily.

| Wildcard pattern | LRN Example | Definition | Example means… |
| --- | --- | --- | --- |
| * | * | Global wildcard. Capture all the resources that Lenses manages. | "Everything" |
| service:* | kafka:* | Service-specific wildcard. Capture all the resources for a service. | "All Kafka resources in all environments, i.e. topics, consumer groups, acls and quotas" |
| service:resource-type:* | kafka:topic:* | Resource-type-specific wildcard. Capture all the resources for a type of resource of a service. | "All Kafka topics in all environments" |
| service:resource-type:parent/*/grandchild | kafka-connect:connector:dev-environment/*/my-s3-sink | Path segment wildcard. Capture a part of the resource path. | "All connectors named 'my-s3-sink' in all Connect clusters under the environment 'dev-environment'" |
| service:resource-type:resourcePa* | kafka:topic:dev-environment/kafka/red-* | Trailing wildcard. At the end of an LRN it acts as a 'globstar' (**) and matches the rest of the string, capturing the resources that start with the given path prefix. | "All Kafka topics in the environment 'dev-environment' whose name starts with 'red-'" |
| service:resource-type:paren*/chil*/grandchil* | kafka-connect:connector:dev*/sinks*/s3* | Path suffix wildcard. Capture resources where different path segments start with certain prefixes. | "All connectors in all environments that start with 'dev', within any Connect cluster that starts with 'sinks' and where the connector name starts with 's3'" |

Bad examples

Avoid these examples because they are ambiguous. Lenses does not allow them.

| Wildcard pattern | LRN Example | Restriction | Better alternative |
| --- | --- | --- | --- |
| servic*:resource-type:resource-id | kafk*::dev-environment/ or :topic:dev-environment/ | No wildcards allowed at the service level. A service must be its full string. | Global wildcard * |
| service:resource-typ*:resource-id | kafka:topi*:dev-environment/* | No wildcards allowed at the resource-type level. A resource type must be its full string. | Service-specific wildcard service:* (no resource-id segments allowed in this case). |

Infrastructure JMX Metrics

This page describes how to configure JMX metrics for Connections in Lenses.

All core services (Kafka, Schema Registry, Kafka Connect, Zookeeper) use the same set of properties for services’ monitoring.

The Agent will discover all the brokers by itself and will try to fetch metrics using metricsPort, metricsCustomUrlMappings and other properties (if specified).

See JSON schema for support.

Environment variables are supported; escape the dollar sign

sslKeystorePassword:
  value: "\${ENV_VAR_NAME}"
[connection]
  configuration:
    metricsPort: 
      value: 9581
    metricsType: 
      value: JMX
    metricsSsl: 
      value: false
    metricsUsername: 
      value: user
    metricsPassword: 
      value: pass

JMX

Simple

The same port used for all brokers/workers/nodes. No SSL, no authentication.

kafka:
  tags: []
  name: kafka
  configuration:
    kafkaBootstrapServers:
      value:
        - PLAINTEXT://my-kafka-host-0:9092
    protocol:
      value: PLAINTEXT
    metricsPort: 
      value: 9585
    metricsType: 
      value: JMX

SSL

kafka:
  tags: []
  name: kafka
  configuration:
    kafkaBootstrapServers:
      value:
        - PLAINTEXT://my-kafka-host-0:9092
    protocol: 
      value: PLAINTEXT
    metricsPort: 
      value: 9585
    metricsType: 
      value: JMX
    metricsSsl: 
      value: true

Basic Auth

kafka:
  tags: []
  name: kafka
  configuration:
    kafkaBootstrapServers:
      value:
        - PLAINTEXT://my-kafka-host-0:9092
    protocol: 
       value: PLAINTEXT
    metricsPort: 
      value: 9581
    metricsType: 
      value: JMX
    metricsSsl: 
      value: false
    metricsUsername: 
      value: user
    metricsPassword: 
      value: pass

Such a configuration means that the Agent will try to connect using JMX to every kafkaBootstrapServers host on metricsPort; following the example: my-kafka-host-0:9581.

Jolokia

For Jolokia the Agent supports two types of requests: GET (JOLOKIAG) and POST (JOLOKIAP).

For JOLOKIA each entry value in metricsCustomUrlMappings must contain protocol.
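For example, a mapping entry with an explicit protocol might look like this (the host and port are illustrative):

metricsCustomUrlMappings:
  value:
    "my-kafka-host-0:9092": http://my-kafka-host-0:8778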

Simple

The same port used for all brokers/workers/nodes. No SSL, no authentication.

kafka:
  tags: []
  name: kafka
  configuration:
    kafkaBootstrapServers:
      value:
        - PLAINTEXT://my-kafka-host-0:9092
    protocol:
      value: PLAINTEXT
    metricsPort: 
      value: 9585
    # For GET method: JOLOKIAG
    # For POST method: JOLOKIAP 
    metricsType:
      value: JOLOKIAG
    metricsSsl: 
      value: false
    metricsHttpSuffix: 
      value: /jolokia/

Custom Http Request Timeout

JOLOKIA monitoring works on top of the HTTP protocol. To fetch metrics, the Agent performs either a GET or a POST request. The HTTP request timeout can be configured with the httpRequestTimeout property (value in milliseconds); its default value is 20 seconds.

httpRequestTimeout: 
  value: 30000

Custom Metrics Http Suffix

The default suffix for Jolokia endpoints is /jolokia/. Sometimes that suffix can be different, so there is a way of customizing it using the metricsHttpSuffix field.

metricsHttpSuffix: 
  value: /custom/

AWS

Before enabling collection of metrics within the Agent's provisioning configuration, make sure open monitoring with Prometheus is enabled on your MSK Provisioned cluster.

AWS has a predefined metrics configuration. The Agent hits the Prometheus endpoint using port 11001 for each broker. There is an option of customizing the AWS metrics connection in Lenses by using the metricsUsername, metricsPassword, metricsHttpTimeout, metricsHttpSuffix, metricsCustomUrlMappings, and metricsSsl properties. However, except for metricsHttpTimeout, the other settings will seldom be needed; AWS has its standard, which is unlikely to change. Customization can be achieved only via the API or CLI.

kafka:
  tags: [ "dev", "dev-2", "eu"]
  configuration:
    kafkaBootstrapServers:
      value:
        - SASL_SSL://my-broker-0:9098
        - SASL_SSL://my-broker-1:9098
        - SASL_SSL://my-broker-2:9098
    protocol:
      value: SASL_SSL
    saslMechanism:
      value: AWS_MSK_IAM
    saslJaasConfig:
      value: software.amazon.msk.auth.iam.IAMLoginModule required;
    additionalProperties:
      value:
        sasl.client.callback.handler.class: "software.amazon.msk.auth.iam.IAMClientCallbackHandler"
    metricsType:
      value: AWS
    metricsHttpTimeout: # optional, milliseconds
      value: 20000

In some cases, the metricsHttpTimeout option may be required. Typically, this occurs when the OpenMetrics instance is undersized for the size of the MSK cluster, resulting in longer-than-usual metric retrieval times. Each Kafka partition adds a large number of metrics, so the OpenMetrics instance should ideally be sized to accommodate the number of partitions that the MSK will host.

Another common pitfall with MSK OpenMetrics is that there exists a global rate limit for each instance. If more than one service hits the OpenMetrics endpoint, the rate limit may be triggered, and the clients will receive an HTTP error code 429. To overcome this, you can set the lenses.interval.metrics.refresh.broker option in Lenses Agent. As an example, to make Lenses request metrics every minute, set the value to 60000 (milliseconds).
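For example, to make the Agent request broker metrics every minute:

lenses.conf
lenses.interval.metrics.refresh.broker=60000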

Custom URL mapping

There is also a way to configure custom mapping for each broker (Kafka) / node (Schema Registry, Zookeeper) / worker (Kafka Connect).

Such a configuration means that the Agent will try to connect using JMX for:

  • my-kafka-host-0:9582 - because of metricsCustomUrlMappings

  • my-kafka-host-1:9581 - because of metricsPort and no entry in metricsCustomUrlMappings

kafka:
  tags: ["optional-tag"]
  name: kafka
  configuration:
    kafkaBootstrapServers:
      value:
        - PLAINTEXT://my-kafka-host-0:9092
    protocol:
      value: PLAINTEXT
    metricsPort: 
      value: 9581
    metricsType: 
      value: JMX
    metricsSsl: 
      value: false
    metricsCustomUrlMappings:
      value:
        "my-kafka-host-0:9092": my-kafka-host-0:9582

Installing Community Edition Using Helm

These instructions are NOT for production environments. They are intended for dev or test environment setups. Please see the deployment documentation for details on installing Lenses in more secure environments.

Tool Requirements

  1. Kubernetes cluster and kubectl - you can use something like Minikube or Docker Desktop in Kubernetes mode if you'd like, but you will need to allocate at least 8 GB of RAM and 6 CPUs

  2. Helm.

  3. Text editor.

  4. Kafka cluster and a Postgres database (we will provide setup instructions below if they're not already installed)

  5. Kafka Connect and a schema registry (optional)

Adding Required Helm Repositories

From a workstation with kubectl and Helm installed, add the Lenses Helm repository:

If you don't already have a Kafka cluster or Postgres installed you will need to add this repository as well:

Once you've added them, run the following command:

Installing Postgres - if needed

If you already have Postgres installed skip to the next section: Configuring Postgres

  1. Create a namespace for Postgres

  1. Create a PVC claim for Postgres

PLEASE NOTE: PVC claims vary greatly depending on the type of Kubernetes cluster you are using. Here we are using a "standard" storage class. Please refer to your version of Kubernetes' docs for the best storage class to use.

Save the above to a file called postgres-pvc.yaml and then run the following command:

  1. Install Postgres using the Bitnami Helm chart.

Using simple cleartext passwords like in the below example is NEVER recommended for anything other than a test or dev environment.

Save the above text to a file called postgres-values.yaml. Then run the following command:

Verify that Postgres is up and running. It may take a minute or so to download and be fully ready.
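One way to check (using the namespace created above):

terminal
kubectl get pods -n postgres-system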

Configuring Postgres

Again a reminder, we are using simple cleartext passwords here. NEVER recommended for anything other than test or dev environments.

  1. We need to create the databases in Postgres for Lenses to use.

Option 1: You will need to use a Postgres client to run the following commands.

Log in to your Postgres instance and run the following commands:

Option 2: Use a Kubernetes job to run the Postgres commands.

Lenses needs a database for LensesHQ and for Lenses Agent. This job will create one for each using the same Postgres instance.

Copy the above text to a file called lenses-db-init-job.yaml and then run the following command:

Wait a bit and then run

You should see

Now Postgres is set up and configured to work with Lenses.

Installing a Kafka Cluster - Optional

If you already have a Kafka cluster installed skip to the Installing HQ section.

The provided cluster install is for a simple single node open-source Kafka cluster with basic authentication and limited resources. Only suitable for testing or small development environments.

  1. Create the kafka-cluster-values.yaml file for installation. We are using the "standard" storage class here. Depending on what K8s vendor you're using and where you are running it, your PVC setup will vary.

  1. Create a namespace for Kafka

  1. Install the Kafka cluster with the Bitnami Helm chart:

Give the Helm chart a few minutes to install then verify the installation:
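One way to verify (using the kafka namespace from the step above):

terminal
kubectl get pods -n kafka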

Installing Lenses HQ

  1. Create lenses namespace

  1. Install Lenses HQ with its Helm chart using the following lenseshq-values.yaml

Copy the above text to a file lenseshq-values.yaml and apply it with the following command:

You can verify that Lenses HQ is installed:
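One way to verify (using the lenses namespace created above):

terminal
kubectl get pods -n lenses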

  1. Accessing Lenses HQ:

In order to access Lenses HQ you will need to set up an ingress route using an ingress controller. There are many different ways to do this depending on how and where you are running Kubernetes.

We have provided here an example ingress configuration using Nginx:

Installing Lenses Agent

  1. Once you have successfully logged in to Lenses HQ you can start to set up your agent. See the Community Edition walk through for login details.

  2. Click on the Add New Environment button at the bottom of the main screen. Give your new environment a name (you can accept the other defaults for now) and click Create Environment.

  3. Be sure to save your Agent Key from the screen that follows.

  1. Now we can install the Lenses Agent using the agent_key. Here is the lenses-agent-values.yaml file:

  1. Copy the above config to a file named lenses-agent-values.yaml.

NOTE: you must replace value: "agent_key_Insert_Your_Agent_Key_Here" with your actual Agent Key you saved in a previous step.

Your lenses-agent-values.yaml should look like this:

  1. Use the Lenses Agent Helm chart to install the Lenses Agent

Give Kubernetes time to install the Lenses Agent, then go back to the Lenses HQ UI and verify your Kafka cluster is connected. You can now use Lenses on your own cluster. Congrats!

Deploying HQ

This page describes the install of Lenses HQ via an archive on Linux.

To install the HQ from the archive you must:

  1. Extract the archive

  2. Configure the HQ

  3. Start the HQ


Extracting the archive

Installation link

Link to archives can be found here: https://archive.lenses.io/lenses/6.0/

Extract the archive using the following command

Inside the extracted archive, you will find:


Configuring the HQ

In order to properly configure HQ, one core component is necessary as a prerequisite:

1

Configure Authentication

To set up authentication, there are multiple methods available.

You can choose between:

  • password-based authentication, which requires users to provide a username and password;

  • and SAML/SSO (Single Sign-On) authentication, which allows users to authenticate through an external identity provider for a seamless and secure login experience.

Both password based and SAML / SSO authentication methods can be used alongside each other.

First to cover is the users property.

Users Property: The users property is defined as an array, where each entry includes a username and a password. The passwords are hashed using bcrypt for security purposes, ensuring that they are stored securely.

Second to cover is the administrators property. It defines the user emails which will have the highest level of permissions upon authentication to HQ.

The Assertion Consumer Service endpoint is the following:

Full auth configuration spec can be found here.

2

Configure HTTP endpoint

Another part which has to be set in order to successfully run HQ is the http definition. As previously mentioned, this parameter defines everything around the HTTP endpoint of HQ itself and how users will interact with it.

Definition of HTTP object is as follows:

More about setting up TLS can be read here. Full http configuration spec can be found here.

3

Configure Agent endpoint

After correctly configuring the authentication strategy and the HTTP endpoint, agent handling is the last important box to tick.

The agents object is defined as follows:

More about setting up TLS can be read here.

4

Configure database

Prerequisites:

  • Running Postgres instance;

  • Created database for HQ;

  • Username (and password) which has access to the created database;

In order to successfully run HQ, the database section within config.yaml has to be defined first.

The database object is defined as follows:

Full database configuration spec can be found here.

5

Configure license and accept EULA

For demo purposes and testing the product, you can use our community license:


Final Configuration File

If you have meticulously followed all the outlined steps, your config.yaml file should mirror the example provided below, fully configured and ready for deployment. This ensures your system is set up correctly with all necessary settings for authentication, database connection, and other configurations optimally defined.


Starting the HQ

Start HQ by running:

or pass the location of the config file:

If you do not pass the location of the config file, the HQ will look for it inside the current (runtime) directory. If it does not exist, it will try its installation directory.

Once HQ starts, it will be listening on https://localhost:8080.

To stop HQ, press CTRL+C.


SystemD example

If your server uses systemd as a service manager, you can use it to manage HQ (start upon system boot, stop, restart). Below is a simple unit file that starts HQ automatically on system boot.
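Assuming the unit below is saved as /etc/systemd/system/lenses-hq.service (the unit and file names are illustrative), it can be enabled and started with:

terminal
sudo systemctl daemon-reload
sudo systemctl enable --now lenses-hq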


What's next?

After the successful configuration and installation of HQ, the next steps are:

  • Deploying an Agent

  • Configuring IAM roles / groups / policies

Apache Kafka

This page describes connecting the Lenses Agent to Apache Kafka.

A Kafka connection is required for the agent to start. You can connect to Kafka via:

  1. Plaintext (no credentials and unencrypted)

  2. SSL (no credentials and encrypted)

  3. SASL Plaintext and SASL SSL

Only one Kafka connection is allowed.

The name must be kafka.

See JSON schema for support.

Environment variables are supported; escape the dollar sign

Plaintext

With PLAINTEXT, there's no encryption and no authentication when connecting to Kafka.

The only required fields are:

  • kafkaBootstrapServers - a list of bootstrap servers (brokers). It is recommended to add as many brokers (if available) as convenient to this list for fault tolerance.

  • protocol - depending on the protocol, other fields might be necessary (see examples for other protocols)

In the following example, JMX metrics for Kafka brokers are configured too, assuming that all brokers expose their JMX metrics using the same port (9581), without SSL and authentication.

SSL

With SSL the connection to Kafka is encrypted. You can also use SSL and certificates to authenticate users against Kafka.

A truststore (with password) might need to be set explicitly if the global truststore of the Agent does not include the Certificate Authority (CA) of the brokers.

If TLS is used for authentication to the brokers in addition to encryption-in-transit, a key store (with passwords) is required.

SASL Plaintext vs SASL SSL

There are 2 SASL-based protocols to access Kafka Brokers: SASL_SSL and SASL_PLAINTEXT. They both require SASL mechanism and JAAS Configuration values. What is different is:

  1. Whether the transport layer is encrypted (SSL)

  2. The SASL mechanism used for authentication (PLAIN, AWS_MSK_IAM, GSSAPI).

In addition to this, there might be a keytab file required, depending on the SASL mechanism (for example when using GSSAPI mechanism, most often used for Kerberos).

To use Kerberos authentication, a Kerberos Connection should be created beforehand.

When encryption-in-transit is used (with SASL_SSL), a trust store might need to be set explicitly if the global trust store of Lenses does not include the CA of the brokers.

SASL SSL

Mechanism PLAIN

Encrypted communication and basic username and password for authentication.

Mechanism GSSAPI

In order to use Kerberos authentication, a Kerberos Connection should be created beforehand.

SASL Plaintext

No SSL encryption of communication; credentials are sent to Kafka in clear text.

Mechanism SCRAM-SHA-256

Mechanism SCRAM-SHA-512

terminal
tar -xvf lenses-hq-linux-amd64-latest.tar.gz -C lenses-hq
terminal
   lenses-hq
   ├── lenses-hq
/api/v2/auth/saml/callback?client_name=SAML2Client
config.yaml
auth:
  users:
    - username: admin
      password: $2a$10$F66cb6ZhnJjGCZuxlvKP1e84eytTpT1MDJcpBblHaZgsqp1/Aa0LG # bcrypt("correcthorsebatterystaple").
  administrators:
    - admin
    - admin@example.com
  saml:
    enabled: true
    metadata: |-
      <?xml version="1.0" encoding="UTF-8"?><md:EntityDescriptor>
      ...
      ...
      </md:EntityDescriptor>
    # Defines base URL of HQ for IdP redirects
    baseURL: https://changeme.com # <--- Change this
    # Defines  globally unique identifier for the SAML entity 
    # — either the Service Provider (SP) or Identity Provider (IdP)
    # It's often a URL, but it doesn't necessarily need to resolve to anything
    entityID: https://example.com # <--- Change this
    userCreationMode: sso
    groupMembershipMode: sso
config.yaml
http:
  address: :8080
  accessControlAllowOrigin:
    - https://example.com
  accessControlAllowCredentials: false
  secureSessionCookies: false
  tls:
    enabled: true
    cert: |
      -----BEGIN CERTIFICATE-----
      MIIDXTCCAkWgAwIBAgIJALkNfT3d1N8tMA0GCSqGSIb3DQEBCwUAMEUxCzAJBgNV
      BAYTAlVTMRYwFAYDVQQKEw1FeGFtcGxlIENlcnQwHhcNMjUwMzI2MDAwMDAwWhcN
      MzUwMzIzMDAwMDAwWjBFMQswCQYDVQQGEwJVUzEWMBQGA1UEChMNZXhhbXBsZS5j
      b20wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC5D3jXq5JnE9NnRJ8N
      ...
      -----END CERTIFICATE-----
    key: |
      -----BEGIN PRIVATE KEY-----
      MIIEvQIBADANBgkqhkiG9w0BAQEFAASC...
      ...
      -----END PRIVATE KEY-----
config.yaml
http:
  address: :8080
  accessControlAllowOrigin:
    - https://example.com
  accessControlAllowCredentials: false
  secureSessionCookies: false
  tls:
    enabled: false
config.yaml
agents:
  address: :10000
  tls:
    enabled: true
    cert: |
      -----BEGIN CERTIFICATE-----
      MIIDXTCCAkWgAwIBAgIJALkNfT3d1N8tMA0GCSqGSIb3DQEBCwUAMEUxCzAJBgNV
      BAYTAlVTMRYwFAYDVQQKEw1FeGFtcGxlIENlcnQwHhcNMjUwMzI2MDAwMDAwWhcN
      MzUwMzIzMDAwMDAwWjBFMQswCQYDVQQGEwJVUzEWMBQGA1UEChMNZXhhbXBsZS5j
      b20wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC5D3jXq5JnE9NnRJ8N
      ...
      -----END CERTIFICATE-----
    key: |
      -----BEGIN PRIVATE KEY-----
      MIIEvQIBADANBgkqhkiG9w0BAQEFAASC...
      ...
      -----END PRIVATE KEY-----
config.yaml
agents:
  address: :10000
  tls:
    enabled: false
config.yaml
database:
  host: postgres:5432
  username: panoptes
  password: password
  database: panoptes
  schema: insert-schema-here
  # Params example - not required and it depends on your PG requirements
  params:
    sslmode: require
license_key_2SFZ0BesCNu6NFv0-EOSIvY22ChSzNWXa5nSds2l4z3y7aBgRPKCVnaeMlS57hHNVboR2kKaQ8Mtv1LFt0MPBBACGhDT5If8PmTraUM5xXLz4MYv
config.yaml
license:
  key: license_key_*
  acceptEULA: true
config.yaml
auth:
  users:
    - username: admin
      password: $2a$10$F66cb6ZhnJjGCZuxlvKP1e84eytTpT1MDJcpBblHaZgsqp1/Aa0LG # bcrypt("correcthorsebatterystaple").
  administrators:
    - admin
    - admin@example.com
  saml:
    enabled: true
    metadata: |-
      <?xml version="1.0" encoding="UTF-8"?><md:EntityDescriptor>
      ...
      ...
      </md:EntityDescriptor>
    baseURL: https://example.com
    entityID: https://example.com
    userCreationMode: sso
    groupMembershipMode: sso
http:
  address: ":8080"
  accessControlAllowOrigin:
    - https://example.com
agents:
  address: ":10000"
database:
  host: postgres:5432
  username: panoptes
  password: password
  database: panoptes
  schema: insert-schema-here
  params:
    sslmode: require
license:
  key: license_key_*
  acceptEULA: true
logger:
  mode: text
  level: debug
terminal
./lenses-hq
terminal
./lenses-hq config.yaml
[Unit]
Description=Run HQ service

[Service]
Restart=always
User=[LENSES-USER]
Group=[LENSES-GROUP]
LimitNOFILE=4096
WorkingDirectory=/opt/lenses-hq
ExecStart=/opt/lenses-hq/lenses-hq /etc/lenses-hq/config.yaml

[Install]
WantedBy=multi-user.target
sslKeystorePassword:
  value: "\${ENV_VAR_NAME}"
provisioning.yaml
kafka:
- name: kafka
  version: 1
  tags: [my-tag]
  configuration:
    kafkaBootstrapServers:
      value:
        - PLAINTEXT://your.kafka.broker.0:9092
        - PLAINTEXT://your.kafka.broker.1:9092
    protocol: 
      value: PLAINTEXT
    # all metrics properties are optional
    metricsPort: 
      value: 9581
    metricsType: 
      value: JMX
    metricsSsl: 
      value: false
kafka:
- name: kafka
  version: 1
  tags: [my-tag]
  configuration:
    kafkaBootstrapServers:
      value:
        - SSL://your.kafka.broker.0:9092
        - SSL://your.kafka.broker.1:9092
    protocol: 
      value: SSL
    sslTruststore:
      file: kafka-truststore.jks
    sslTruststorePassword: 
      value: truststorePassword
    sslKeystore:
      file: kafka-keystore.jks
    sslKeyPassword: 
      value: keyPassword
    sslKeystorePassword: 
      value: keystorePassword
provisioning.yaml
kafka:
- name: kafka
  version: 1
  tags: [my-tag]
  configuration:
    kafkaBootstrapServers:
      value:
        - SASL_SSL://your.kafka.broker.0:9092
        - SASL_SSL://your.kafka.broker.1:9092
    protocol: 
      value: SASL_SSL
    sslTruststore:
      file: kafka-truststore.jks
    sslTruststorePassword: 
      value: truststorePassword
    sslKeystore:
      file: kafka-keystore.jks
    sslKeyPassword: 
      value: keyPassword
    sslKeystorePassword: 
      value: keystorePassword
    saslMechanism: 
      value: PLAIN
    saslJaasConfig:
      value: |
        org.apache.kafka.common.security.plain.PlainLoginModule required
        username="your-username"
        password="your-password";      
provisioning.yaml
kafka:
- name: kafka
  version: 1
  tags: [my-tag]
  configuration:
    kafkaBootstrapServers:
      value:
        - SASL_SSL://your.kafka.broker.0:9092
        - SASL_SSL://your.kafka.broker.1:9092
    protocol: 
      value: SASL_SSL
    sslTruststore:
      file: kafka-truststore.jks
    sslTruststorePassword: 
      value: ${SSL_KEYSTORE_PASSWORD}
    sslKeystore:
      file: kafka-keystore.jks
    sslKeyPassword: 
      value: ${SSL_KEYSTORE_PASSWORD}
    sslKeystorePassword: 
      value: ${SSL_KEYSTORE_PASSWORD}
    saslMechanism: 
      value: PLAIN
    saslJaasConfig:
      value: ${SASL_JAAS_CONFIG}
kafka:
- name: kafka
  version: 1
  tags: [my-tag]
  configuration:
    kafkaBootstrapServers:
      value:
        - SASL_SSL://your.kafka.broker.0:9092
        - SASL_SSL://your.kafka.broker.1:9092
    protocol: 
      value: SASL_SSL
    sslTruststore:
      file: kafka-truststore.jks
    sslTruststorePassword: 
      value: truststorePassword
    sslKeystore:
      file: kafka-keystore.jks
    sslKeyPassword: 
      value: keyPassword
    sslKeystorePassword: 
      value: keystorePassword  
    saslMechanism: 
      value: GSSAPI
    saslJaasConfig:
      value: |
        com.sun.security.auth.module.Krb5LoginModule required
        useKeyTab=true
        storeKey=true
        useTicketCache=false
        serviceName=kafka
        principal="[email protected]";      
    keytab:
      file: /path/to/kafka-keytab.keytab
kafka:
- name: kafka
  version: 1
  tags: [my-tag]
  configuration:
    kafkaBootstrapServers:
      value:
        - SASL_PLAINTEXT://your.kafka.broker.0:9092
        - SASL_PLAINTEXT://your.kafka.broker.1:9092
    protocol: 
      value: SASL_PLAINTEXT
    saslMechanism: 
      value: SCRAM-SHA-256
    saslJaasConfig: 
      value: |
        org.apache.kafka.common.security.scram.ScramLoginModule required
        username="your-username"
        password="your-password";  
kafka:
- name: kafka
  version: 1
  tags: [my-tag]
  configuration:
    kafkaBootstrapServers:
      value:
        - SASL_PLAINTEXT://your.kafka.broker.0:9092
        - SASL_PLAINTEXT://your.kafka.broker.1:9092
    protocol: 
      value: SASL_PLAINTEXT
    saslMechanism: 
      value: SCRAM-SHA-512
    saslJaasConfig: 
      value: |
        org.apache.kafka.common.security.scram.ScramLoginModule required
        username="your-username"
        password="your-password";      
helm repo add lensesio https://helm.repo.lenses.io/
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
kubectl create namespace postgres-system
# postgres-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data
  namespace: postgres-system
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: standard
kubectl apply -f postgres-pvc.yaml
# postgres-values.yaml
global:
  postgresql:
    auth:
      username: "admin"
      password: "changeme"
      postgresPassword: "changeme"

primary:
  persistence:
    existingClaim: "postgres-data"

auth:
  database: postgres
  username: admin
  password: changeme
  postgresPassword: changeme
  enablePostgresUser: true
helm install postgres bitnami/postgresql \
  --namespace postgres-system \
  --values postgres-values.yaml
CREATE ROLE lenses_agent WITH LOGIN PASSWORD 'changeme';

CREATE DATABASE lenses_agent OWNER lenses_agent;

CREATE ROLE lenses_hq WITH LOGIN PASSWORD 'changeme';

CREATE DATABASE lenses_hq OWNER lenses_hq;
# lenses-db-init-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: lenses-db-init
  namespace: postgres-system
spec:
  template:
    spec:
      containers:
      - name: db-init
        image: postgres:14
        command:
        - /bin/bash
        - -c
        - |
          echo "Waiting for PostgreSQL to be ready..."
          until PGPASSWORD=changeme psql -h postgres-postgresql -U postgres -d postgres -c '\l' &> /dev/null; do
            echo "PostgreSQL is unavailable - sleeping 2s"
            sleep 2
          done
          echo "PostgreSQL is up - creating databases and roles"
          PGPASSWORD=changeme psql -h postgres-postgresql -U postgres -d postgres <<EOF
          CREATE ROLE lenses_agent WITH LOGIN PASSWORD 'changeme';
          CREATE DATABASE lenses_agent OWNER lenses_agent;
          CREATE ROLE lenses_hq WITH LOGIN PASSWORD 'changeme';
          CREATE DATABASE lenses_hq OWNER lenses_hq;
          EOF
          echo "Database initialization completed!"
      restartPolicy: OnFailure
  backoffLimit: 5
kubectl apply -f lenses-db-init-job.yaml
kubectl get job -n postgres-system
# Kafka Bitnami Helm chart values for dev/testing with KRaft mode
## Global settings
global:
  storageClass: "standard"

## Enable KRaft mode and disable Zookeeper
kraft:
  enabled: true
  controllerQuorumVoters: "0@kafka-controller-0.kafka-controller-headless.kafka.svc.cluster.local:9093"

# Disable Zookeeper since we're using KRaft
zookeeper:
  enabled: false

## Controller configuration (for KRaft mode)
controller:
  replicaCount: 1
  persistence:
    enabled: true
    storageClass: "standard"
    size: 2Gi
    selector:
      matchLabels:
        app: kafka-controller
  resources:
    requests:
      memory: "512Mi"
      cpu: "250m"
    limits:
      memory: "1Gi"
      cpu: "500m"

## Broker configuration
broker:
  replicaCount: 1
  persistence:
    enabled: true
    storageClass: "standard"
    size: 2Gi
    selector:
      matchLabels:
        app: kafka-broker
  resources:
    requests:
      memory: "512Mi"
      cpu: "250m"
    limits:
      memory: "1Gi"
      cpu: "500m"

# Networking configuration for standalone K8s cluster
service:
  type: ClusterIP
  ports:
    client: 9092

## External access configuration (if needed)
externalAccess:
  enabled: false
  service:
    type: NodePort
    nodePorts: [31090]
  autoDiscovery:
    enabled: false

# Listeners configuration for standalone cluster
listeners:
  client:
    name: PLAINTEXT
    protocol: PLAINTEXT
    containerPort: 9092
  controller:
    name: CONTROLLER
    protocol: PLAINTEXT
    containerPort: 9093
  interbroker:
    name: INTERNAL
    protocol: PLAINTEXT
    containerPort: 9094

# Disable authentication for simplicity in dev environment
auth:
  clientProtocol: plaintext
  interBrokerProtocol: plaintext
  sasl:
    enabled: false
    jaas:
      clientUsers: []
      interBrokerUser: ""
  tls:
    enabled: false
  zookeeper:
    user: ""
    password: ""

# Configuration suitable for development
configurationOverrides:
  "offsets.topic.replication.factor": 1
  "transaction.state.log.replication.factor": 1
  "transaction.state.log.min.isr": 1
  "log.retention.hours": 24
  "num.partitions": 3
  "security.inter.broker.protocol": PLAINTEXT
  "sasl.enabled.mechanisms": ""
  "sasl.mechanism.inter.broker.protocol": PLAINTEXT
  "allow.everyone.if.no.acl.found": "true"

# Enable JMX metrics
metrics:
  jmx:
    enabled: true
    containerPorts:
      jmx: 5555
    service:
      ports:
        jmx: 5555
  kafka:
    enabled: true
    containerPorts:
      metrics: 9308
    service:
      ports:
        metrics: 9308

# Enable auto-creation of topics
allowAutoTopicCreation: true
kubectl create ns kafka
helm install my-kafka bitnami/kafka \
--namespace kafka \
--values kafka-cluster-values.yaml
kubectl create ns lenses
# lenseshq-values.yaml
resources:
  requests:
    cpu: 1
    memory: 1Gi
  limits:
    cpu: 2
    memory: 4Gi

image:
  repository: lensesio/lenses-hq:6.0
  pullPolicy: Always

rbacEnable: false
namespaceScope: true

# Lense HQ container port
restPort: 8080
# Lenses HQ service port, service targets restPort
servicePort: 80
servicePortName: lenses-hq

# serviceAccount is the Service account to be used by Lenses to deploy apps
serviceAccount:
  create: false
  name: default

# Lenses service
service:
  enabled: true
  type: ClusterIP
  annotations: {}

lensesHq:
  agents:
    address: ":10000"
  auth:
    administrators:
     - "admin"
    users:
      - username: admin
        password: $2a$10$DPQYpxj4Y2iTWeuF1n.ItewXnbYXh5/E9lQwDJ/cI/.gBboW2Hodm # bcrypt("admin").
  http:
    address: ":8080"
    accessControlAllowOrigin:
      - "http://localhost:8080"
    secureSessionCookies: false
  # Storage property has to be properly filled with Postgres database information
  storage:
    postgres:
      enabled: true
      host: postgres-postgresql.postgres-system.svc.cluster.local
      port: 5432
      username: lenses_hq
      database: lenses_hq
      passwordSecret:
        type: "createNew"
        password: "changeme"
  logger:
    mode: "text"
    level: "debug"
  license:
    referenceFromSecret: false
    stringData: "license_key_2SFZ0BesCNu6NFv0-EOSIvY22ChSzNWXa5nSds2l4z3y7aBgRPKCVnaeMlS57hHNVboR2kKaQ8Mtv1LFt0MPBBACGhDT5If8PmTraUM5xXLz4MYv"
    acceptEULA: true
helm install lenses-hq lensesio/lenses-hq \
--namespace lenses \
--values lenseshq-values.yaml
# lenses-hq-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: lenses-hq-ingress
  namespace: lenses  # Update this if LensesHQ is in a different namespace
  annotations:
    # For nginx ingress controller
    nginx.ingress.kubernetes.io/rewrite-target: /
    # If you need larger request bodies for API calls
    nginx.ingress.kubernetes.io/proxy-body-size: "50m"
    # Optional: enable CORS if needed
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-origin: "*"
spec:
  ingressClassName: nginx
  
  rules:
  - host: lenses-hq.local  # Change this to your desired hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: lenses-hq
            port:
              number: 80
      # Optional: expose the agents port if needed externally
      - path: /agents
        pathType: Prefix
        backend:
          service:
            name: lenses-hq
            port:
              number: 10000
# lenses-agent-values.yaml
image:
  repository: lensesio/lenses-agent
  tag: 6.0.0
  pullPolicy: IfNotPresent
lensesAgent:
  # Postgres connection
  storage:
    postgres:
      enabled: true
      host: postgres-postgresql.postgres-system.svc.cluster.local
      port: 5432
      username: lenses_agent
      password: changeme
      database: lenses_agent
  hq:
    agentKey:
      secret:
        type: "createNew"
        name: "agentKey"
        value: "agent_key_Insert_Your_Agent_Key_Here"
  sql:
    processorImage: lensesioextra/sql-processor
    processorImageTag: latest
    mode: KUBERNETES
    heap: 1024M
    minHeap: 128M
    memLimit: 1152M
    memRequest: 128M
    livenessInitialDelay: 60 seconds
    namespace: lenses
  provision:
    path: /mnt/provision-secrets
    connections:
      lensesHq:
        - name: lenses-hq
          version: 1
          tags: ['hq']
          configuration:
            server:
              value: lenses-hq.lenses.svc.cluster.local
            port:
              value: 10000
            agentKey:
              value: ${LENSESHQ_AGENT_KEY}
      kafka:
        # There can only be one Kafka cluster at a time
        - name: kafka
          version: 1
          tags: ['staging', 'pseudo-data-only']
          configuration:
            kafkaBootstrapServers:
              value:
                - PLAINTEXT://my-kafka.kafka.svc.cluster.local:9092
            protocol:
              value: PLAINTEXT
helm install lenses-agent lensesio/lenses-agent \
--namespace lenses \
--values lenses-agent-values.yaml
(Screenshots: verifying Postgres is running; 1/1 completions means the job ran correctly; Kafka cluster up and running; HQ successfully installed; be sure to copy and save your Agent Key; paste your actual agent key into the file. See also: Community Edition walk through.)

Memory & CPU

This page describes the memory & CPU prerequisites for Lenses.

This documentation provides memory recommendations for Lenses.io, considering the number of Kafka topics, the number of schemas, and the complexity of these schemas (measured by the number of fields). Proper memory allocation ensures optimal performance and stability of Lenses.io in various environments.

Key Considerations

  • Number of Topics: Kafka topics require memory for indexing, metadata, and state management.

  • Schemas and Their Complexity: The memory impact of schemas is influenced by both the number of schemas and the number of fields within each schema. Each schema field contributes to the creation of Lucene indexes, which affects memory usage.

Baseline Memory Requirements

For a basic setup with minimal topics and schemas:

  • Minimum Memory: 4 GB

  • Recommended Memory: 8 GB

This setup assumes:

  • Fewer than 100 topics

  • Fewer than 100 schemas

  • Small schemas with few fields (less than 10 fields per schema)

Scaling with Topics

Memory requirements increase with the number of topics. Topics are used as the primary reference for memory scaling, with additional considerations for schemas.

Number of Topics / Partitions | Recommended Memory
Up to 1,000 topics / 10,000 partitions | 12 GB
1,001 to 10,000 topics / 100,000 partitions | 24 GB
10,001 to 30,000 topics / 300,000 partitions | 64 GB

Impact of Schemas and Their Complexity

Schemas have a significant impact on memory usage, particularly as the number of fields within each schema increases. The memory impact is determined by both the number of schemas and the complexity (number of fields) of these schemas.

Memory Addition Based on Schema Complexity

Schema Complexity | Number of Fields per Schema | Memory Addition
Low to Moderate Complexity | Up to 50 fields | None
High Complexity | 51 - 100 fields | 1 GB for every 1,000 schemas
Very High Complexity | 100+ fields | 2 GB for every 1,000 schemas

Cross-Reference Table for Topics and Schema Complexity

Number of Topics | Number of Schemas | Number of Fields per Schema | Base Memory | Additional Memory | Total Recommended Memory
1,000 | 1,000 | Up to 10 | 8 GB | None | 12 GB
1,000 | 1,000 | 11 - 50 | 8 GB | None | 12 GB
5,000 | 5,000 | Up to 10 | 12 GB | None | 16 GB
5,000 | 5,000 | 11 - 50 | 12 GB | None | 16 GB
10,000 | 10,000 | Up to 10 | 16 GB | None | 24 GB
10,000 | 10,000 | 51 - 100 | 24 GB | 10 GB | 34 GB
30,000 | 30,000 | Up to 10 | 64 GB | None | 64 GB
30,000 | 30,000 | 51 - 100 | 64 GB | 30 GB | 94 GB

Example Configurations

To help illustrate how to apply these recommendations, here are some example configurations considering both topics and schema complexity:

Small Setup

  • Topics: 500

  • Schemas: 100 (average size 50 KB, 8 fields per schema)

Base Memory: 8 GB

  • Schema Complexity: Low → No additional memory needed.

Total Recommended Memory: 8 GB

Medium Setup

  • Topics: 5,000

  • Schemas: 1,000 (average size 200 KB, 25 fields per schema)

Base Memory: 12 GB

  • Schema Complexity: Moderate → No additional memory needed.

Total Recommended Memory: 16 GB

Large Setup

  • Topics: 15,000

  • Schemas: 3,000 (average size 500 KB, 70 fields per schema)

Base Memory: 32 GB

  • Schema Complexity: High → Add 3 GB for schema complexity.

Total Recommended Memory: 35 GB

High-Volume Setup

  • Topics: 30,000

  • Schemas: 5,000 (average size 300 KB, 30 fields per schema)

Base Memory: 64 GB

  • Schema Complexity: Moderate → Add 5 GB for schema complexity.

Total Recommended Memory: 69 GB

Additional Considerations

  • High Throughput: If your Kafka cluster is expected to handle high throughput, consider adding 20-30% more memory than the recommendations.

  • Complex Queries and Joins: If using Lenses.io for complex data queries and joins, consider increasing the memory allocation by 10-15% to accommodate the additional processing.

  • Monitoring and Adjustment: Regularly monitor memory usage and adjust based on actual load and performance.
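For example, a deployment sized at 24 GB that also needs to handle high throughput (+25%) would be provisioned with 24 × 1.25 = 30 GB; adding complex queries and joins (+10%) on top brings it to roughly 33 GB.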

Conclusion

Proper memory allocation is crucial for the performance and reliability of Lenses.io, especially in environments with a large number of topics and complex schemas. While topics provide a solid baseline for memory recommendations, the complexity of schemas—particularly the number of fields—can also significantly impact memory usage. Regular monitoring and adjustments are recommended to ensure that your Lenses.io setup remains performant as your Kafka environment scales.

Example Policies

This section provides example IAM policies for Lenses.

These are only some sample policies to help you build your own

Admin

Full admin across all resources.

role
name: administrator
policy:
  - action: '*'
    resource: '*'
    effect: allow

Full access for data namespace

Allow full access for all services and resources beginning with blue.

role
name: blue-things
policy:
  - action:
      - iam:List*
      - iam:Get*
    resource: iam:*
    effect: allow
  - action:
      - environments:Get*
      - environments:List*
      - environments:AccessEnvironment
    resource: environments:*
    effect: allow
  - action:
      - kafka:*
      - schemas:*
      - kafka-connect:*
      - kubernetes:*
      - applications:*
    resource:
      - kafka:topic:*/*/blue-*
      - kafka:consumer-group:*/*/blue-*
      - kafka:acl:*/*/*/user/blue-*
      - schemas:schema:*/*/blue-*
      - kafka-connect:cluster:*/*
      - kafka-connect:connector:*/*/blue-*
      - sql-streaming:processor:*/*/*/blue-*
      - kubernetes:cluster:*/*
      - kubernetes:namespace:*/*/*
    effect: allow
  - action:
      - alerts:*
      - data-policies:*
    resource:
      - alerts:alert:*/*/blue-*
      - alerts:event:*/*/*
      - data-policies:policy:*/blue-*
    effect: allow

Explore a data namespace

Allow read only access for topics and schemas beginning with la.

role
name: public-data-explorer
policy:
  - action:
      - environments:ListEnvironments
      - environments:GetEnvironmentDetails
      - environments:AccessEnvironment
    resource: environments:environment:global*
    effect: allow
  - action:
      - kafka:ListTopics
      - kafka:ListTopicDependants
      - kafka:GetTopicDetails
      - kafka:ReadTopicData
    resource: kafka:topic:*/kafka/la-*
    effect: allow
  - action:
      - schemas:ListSchemas
      - schemas:ListSchemaDependants
      - schemas:GetSchemaDetails
    resource: schemas:schema:*/*/la-*
    effect: allow

Connect Operator

Allow operators to restart connectors and list & get IAM resource only.

No access to data!

role
name: global-connector-operator
policy:
  - action:
      - iam:List*
      - iam:Get*
    resource: iam:*
    effect: allow
  - action:
      - environments:Get*
      - environments:List*
      - environments:AccessEnvironment
    resource: environments:*
    effect: allow
  - action:
      - kafka-connect:List*
      - kafka-connect:GetClusterDetails
      - kafka-connect:GetConnectorDetails
      - kafka-connect:StartConnector
      - kafka-connect:StopConnector
    resource:
      - kafka-connect:cluster:*/*
      - kafka-connect:connector:*/*/*
    effect: allow

Explicit no access to production

Explicitly deny access to environments with names starting with prod-.

roles
name: no-access-prod-name-prefix
policy:
  - action: environments:AccessEnvironment
    resource: environments:environment:prod-*
    effect: deny

Developer access

Allow developers access to topics, schemas, sql processors, consumer groups, acls, quotas, connectors for us-dev.

role
name: us-dev-permissions
policy:
  - action:
      - iam:List*
      - iam:Get*
    resource: iam:*
    effect: allow
  - action:
      - environments:Get*
      - environments:List*
    resource: environments:*
    effect: allow
  - action: environments:AccessEnvironment
    resource: environments:environment:us-dev
    effect: allow
  - action:
      - kafka:*
      - schemas:*
      - kafka-connect:*
      - kubernetes:*
      - applications:*
    resource:
      - kafka:topic:us-dev/*
      - kafka:consumer-group:us-dev/*
      - kafka:acl:us-dev/*
      - kafka:quota:us-dev/*
      - schemas:schema:us-dev/*
      - kafka-connect:cluster:us-dev/*
      - kafka-connect:connector:us-dev/*
      - sql-streaming:sql-processor:us-dev/*
      - kubernetes:cluster:us-dev/*
      - kubernetes:namespace:us-dev/*
    effect: allow
  - action:
      - alerts:*
      - data-policies:*
    resource:
      - alerts:channel:us-dev/*
      - alerts:event:us-dev/*
      - alerts:rule:us-dev/*
      - data-policies:policy:us-dev/*
    effect: allow

Steps for In-Place Upgrade

Prerequisites

1

Have HQ deployed

The first step is to have HQ deployed and an agent_key obtained upon environment creation

2

Prepare Lenses 5 Configuration Files

Before doing the upgrade, make a copy of all configuration files.

In case of Helm deployment that would be:

  • values.yaml

In case of Archive or Docker deployment that would be:

  • lenses.conf. It is strongly suggested to rename this file to lenses-agent.conf for the upgrade.

  • provisioning.yaml

In case you are missing the provisioning.yaml configuration file, re-creating it will be covered in the next step.

3

Convert all connections / alerts / channels to Provisioning

There were multiple ways in which Lenses resources could have been managed in the past:

  1. Wizard

  2. Provisioning v1

  3. Provisioning v2

For Lenses (6) Agent, it is recommended that all connections (kafka, schema-registry, connect, ...) are kept inside the provisioning.yaml file in version 2 (see Provisioning).

Differences in provisioning between version 1 and 2 can be seen below:

provisioning.yaml
license:
  fileRef:
    inline: '{"source":"Landoop LTD","clientId":"Lenses Dev","details":"kafka-lenses","key":"eyJhbGciOiJBMTI4S1ciLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0.AqO6Ax-o-4T0WKFX7eCGFRu329wxplkZuWrGdhyncrhBfh9higjsZA.-uGCXWjULTzb7-3ROfsRhw.olmx6FR7FH7c2adHol0ipokHF6jOo6LTDtoFOSPWfqKxbA3yI-CUqlyo_-Obin7MSA4KqXBLpXOvP72EJhIYuyqkxUVGRoXHF0Oj2V6kzdDcmjJHbMB4VTxdE8YBAYbPXzEXdhq7lZy4fxHHhYxAsRATCtqf7t7TQCE0TWOiSHvLwyD7xMK2X47KiKbnNlNvqeVnnjLUMMd7vzA5dTft48wJm2D5HJNZ0mS32gTaiiExT5nqolToL0KYIOpRiT00MTQkGlBdagVigc-DZBPM0ZTP5wuLkwdk4XbfoQKaWC4qaYA6VpGgQg03Mo1W4ljlqRy0N4cPQ-l4Mi1XV9VK-825-zhyxzPrxef5Zct2nzVEJ9MbWy0-xuf6THX4q2X8zmz_KiHoA-hBWjebv_2R9479ldGj0h-vm9htVD59_6RBOGb0rT4XSS-4_CGYBZzv5PIPpLdnVbkr_qjsxCI0BO7tPKoyxXg2qh4YQbn3wn5MqsE9yR2BbRaso9MSPFlF8PxqR7A4qrKJjn_mPlcrR-XGf0ua2XfWCVe4ngcWpzssYHcJJD80APyZgzneIw2dSaO0enfFYUq6avqGSeoG7VC9zYACfUdofdlULH2azmptJ2Jzw3ggpLR7ZzZ9QrySXTUB2jkzrqiHyM9fqIXUVwAkAJMcBuwF5zY5B_ChA69Uj_-s-S1RITBbg5wtB3LuHyJGtTo4fuYY75F_OL9Cwp7gcpa5u0M_wWZlx70j_6jCb-ogvghALbHY8OPeWz_1-3bvJM9T_jmjKy0FDt6x8FJV1lgMMR0j1RiUeauUMsnd4TNUYAH50mFwtK5PU-Iq.U4LwNfOL2JB4vzBMvo3Hig"}'
connections:
  kafka:
    tags: []
    templateName: Kafka
    configurationObject:
      kafkaBootstrapServers:
        - PLAINTEXT:///localhost:9092
      protocol: PLAINTEXT
      # metrics
      ## JMX
      metricsPort: 9581
      metricsType: JMX
      metricsSsl: false
      ## JOLOKIA
      # metricsPort: 19581
      # metricsType: JOLOKIAG # or JOLOKIAP
      # metricsSsl: false
      # metricsHttpSuffix: "/jolokia/"

  zookeeper:
    tags: []
    templateName: Zookeeper
    configurationObject:
      zookeeperUrls:
        - localhost:2181
      zookeeperSessionTimeout: 10000
      zookeeperConnectionTimeout: 10000
      # metrics
      ## JMX
      metricsPort: 9585
      metricsType: JMX
      metricsSsl: false
      ## JOLOKIA
      # metricsPort: 19585
      # metricsType: JOLOKIAG # or JOLOKIAP
      # metricsSsl: false
      # metricsHttpSuffix: "/jolokia/"

  schema-registry:
    templateName: SchemaRegistry
    tags: [ ]
    configurationObject:
      schemaRegistryUrls:
        - http://localhost:8081
      additionalProperties: { }
      # metrics
      ## JMX
      metricsPort: 9582
      metricsType: JMX
      metricsSsl: false
      ## JOLOKIA
      # metricsPort: 19582
      # metricsType: JOLOKIAG # or JOLOKIAP
      # metricsSsl: false
      # metricsHttpSuffix: "/jolokia/"

  connect-cluster-dev-1:
    templateName: KafkaConnect
    tags: []
    configurationObject:
      workers:
        - http://localhost:8083
      aes256Key: PasswordPasswordPasswordPassword
      # metrics
      ## JMX
      metricsPort: 9584
      metricsType: JMX
      metricsSsl: false
      ## JOLOKIA
      # metricsPort: 19584
      # metricsType: JOLOKIAG # or JOLOKIAP
      # metricsSsl: false
      # metricsHttpSuffix: "/jolokia/"

  my-prometheus: {"configuration":[{"key":"endpoints","value":["https://am.acme.com"]}],"tags":["prometheus","monitoring","metrics"],"templateName":"PrometheusAlertmanager"}
provisioning.yaml
lensesHq:
  - configuration:
      agentKey:
        value: ${LENSESHQ_AGENT_KEY}
      port:
        value: 10000
      server:
        value: lenses-hq
    name: lenses-hq
    tags: ['hq']
    version: 1
kafka:
  - name: kafka
    version: 1
    tags: [ 'kafka', 'dev' ]
    configuration:
      metricsType:
        value: JMX
      metricsPort:
        value: 9581
      kafkaBootstrapServers:
        value: [PLAINTEXT://demo-kafka:19092]
      protocol:
        value: PLAINTEXT
confluentSchemaRegistry:
  - name: schema-registry
    version: 1
    tags: [ 'dev' ]
    configuration:
      schemaRegistryUrls:
        value: [http://demo-kafka:8081]
      metricsType:
        value: JMX
      metricsPort:
        value: 9582
connect:
  - name: dev
    version: 1
    tags: [ 'dev' ]
    configuration:
      workers:
        value: [http://demo-kafka:8083]
      aes256Key:
        value: 0123456789abcdef0123456789abcdef
      metricsType:
        value: JMX
      metricsPort:
        value: 9584

More about other configuration options in provisioning.yaml -> Provisioning

If you are curious on how to properly create provisioning.yaml file, you can read more on How to convert Wizard Mode to Provisioning Mode.

Steps to deploy Agent

1

Follow prerequisite steps above

Steps:

Once you've gone through the steps above, you'll be ready to move ahead and deploy the Agent side by side.

2

Clone / Migrate Lenses 5 Groups

In this step you'll ensure that Groups (with permissions) that exist in Lenses 5 will still have the same permissions in Lenses 6 for the newly created Environment (Agent).

Migration of :

  • data policies

  • alerts

  • sql processors

is not necessary in case the Agent will re-use the same database as Lenses 5.

To execute this step, we have tooling that can help you: the Lenses Migration Tool.

Be aware, cloning of alerts is not yet available via the script above.

Once the script is initiated you should be able to see new:

  • Groups and

  • Roles with their permissions inside the HQ screen.

These match the ones you have in the Lenses 5 instance and will enable users to see the new Environment once it is connected 👇

3

Stop Lenses 5 instance and start Lenses 6 Agent

There are multiple deployment methods for the Agent; please choose one from Installation

Two Lenses instances shouldn't be connecting to the same database; therefore, the old Lenses 5 should be stopped.

Note that there is no rollback mechanism once the upgrade is initiated over the same database.

This type of upgrade is only possible when Postgres is used as datastore.

4

Check Environment screen for a new Agent

Screen of "Connected" Agent should look as follows and should be seen by AD groups that has been cloned in step 3️.

Roles

This page describes Roles in Lenses.

Lenses IAM is built around Roles. Roles contain policies, and each policy defines a set of actions a user is allowed to take.

Roles are then assigned to groups.

Role Policies

Lenses policies are resource-based. They are YAML documents attached to a resource.

Each policy has:

  1. Action

  2. Resource

  3. Effect

The resource is the name of the resource. This is defined by the creator of the resource.

Action

The action describes the action or verb that a user can perform. The format of the action is

[entity type]:action

For example to list topics in Kafka

policy
  - action:
    - kafka:ListTopics

For a full list of the actions see Permission Reference.

To allow all actions set '*'

Resource

To restrict access to resources, for example, to only list topics beginning with red, we can use the resource field.

For a full list of the resources see Permission Reference.

To allow all resources set '*'

Effect

Effect either allows the action on the resource or denies it. If no allow is set, the action is denied, and if any policy for a resource has a deny effect, the deny takes precedence.

Create a Role

To create a Role, go to IAM->Roles->New Role.

Create a role & add permissions

You can also manage Roles via the CLI and YAML, for integration in your CI/CD pipelines.

terminal
➜  lenses roles
Manage Roles.

Usage:
  lenses roles [command]

Available Commands:
  create      Creates a new role.
  delete      Deletes a role.
  get         Returns a specific role.
  list        Returns all roles.
  metadata    Manages role metadata.
  update      Updates a role.

SQL Processor Deployment

This page describes how to configure the agent to deploy and manage SQL Processors for stream processing.

Lenses can be used to define & deploy stream processing applications that read from Kafka and write back to Kafka with SQL. They are based on the Kafka Streams framework. They are known as SQL Processors.

SQL processing of real-time data can run in 2 modes:

  • SQL In-Process - the workload runs inside of the Lenses Agent.

  • SQL in Kubernetes - the workload runs & scales on your Kubernetes cluster.

The mode the SQL Processors will run in should be defined within lenses.conf before Lenses is started.

In-Process Mode

In this mode, SQL processors run as part of the Agent process, sharing resources, memory, and CPU time with the rest of the platform.

This mode of operation is meant to be used for development only.

As such, the agent will not allow the creation of more than 50 SQL Processors in In Process mode, as this could impact the platform's stability and performance negatively.

For production, use the KUBERNETES mode for maximum flexibility and scalability.

Set the execution configuration to IN_PROC

Set the directory to store the internal state of the SQL Processors:

TLS connections to Kafka and Schema Registries

SQL processors use the same connection details that Agent uses to speak to Kafka and Schema Registry. The following properties are mounted, if present, on the file system for each processor:

  • Kafka

    1. SSLTruststore

    2. SSLKeystore

  • Schema Registry

    1. SSL Keystore

    2. SSL Truststore

The file structure created by applications is the following: /run/[lenses_installation_id]/applications/

Keep in mind Lenses requires an installation folder with write permissions. The following locations are tried:

  1. /run

  2. /tmp

Kubernetes Mode

Kubernetes can be used to deploy SQL Processors. To configure Kubernetes, set the mode to KUBERNETES and configure the location of the kubeconfig file.

When the Agent is deployed inside Kubernetes, the lenses.kubernetes.config.file configuration entry should be set to an empty string. The Kubernetes client will auto-configure from the pod it is deployed in.

The SQL Processor Docker image is available on Docker Hub.

Custom Serdes

Custom serdes should be embedded in a new Lenses SQL processor Docker image.

To build a custom Docker image, create the following directory structure:

Copy your serde jar files under processor-docker/serde.

Create Dockerfile containing:

Build the Docker image.

Once the image is deployed in your registry, please set Lenses to use it (lenses.conf):

Don't use the LPFP_ prefix.

Internally, Lenses prefixes all its properties with LPFP_.

Avoid passing custom environment variables starting with LPFP_ as it may cause the processors to fail.

Use Role/RoleBinding to deploy Lenses processors

To deploy Lenses Processors in Kubernetes, the suggested way is to activate RBAC at cluster level through the Helm values.yaml:

If you want to limit the permissions Lenses has against your Kubernetes cluster, you can use Role/RoleBinding resources instead.

To achieve this you need to create a Role and a RoleBinding resource in the namespace you want the processors deployed to:

example for:

  • Lenses namespace = lenses-ns

  • Processor namespace = lenses-proc-ns

You can repeat this for as many namespaces as you want Lenses to have access to.

Finally, you need to define in the Lenses configuration which namespaces Lenses can access. To achieve this, amend values.yaml to contain the following:

example:

# Set up Lenses SQL processing engine (in-process mode)
lenses.sql.execution.mode = "IN_PROC"
lenses.sql.state.dir = "/tmp/sql-kstream-state"

# Set up Lenses SQL processing engine (Kubernetes mode)
lenses.sql.execution.mode = KUBERNETES
# kubernetes configuration
lenses.kubernetes.config.file = "/home/lenses/.kube/config"
lenses.kubernetes.service.account = "default"
#lenses.kubernetes.processor.image.name = "" # Only needed if you use a custom image
#lenses.kubernetes.processor.image.tag = ""  # Only needed if you use a custom image

# Only needed if you want to tune the buffer size for incoming events from Kubernetes
#lenses.deployments.errors.buffer.size = 1000

# Only needed if you want to tune the buffer size for incoming errors from Kubernetes WS communication
#lenses.deployments.events.buffer.size = 10000
mkdir -p processor-docker/serde
FROM lensesioextra/sql-processor:4.2

ADD serde /opt/serde
ENV LENSES_SQL_RUNNERS_SERDE_CLASSPATH_OPTS=/opt/serde
cd processor-docker
docker build -t example/lsql-processor .
lenses.kubernetes.processor.image.name = "your/image-name"
lenses.kubernetes.processor.image.tag = "your-tag"
rbacEnable: true
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: [ROLE_NAME]
  namespace: [PROCESSORS_NAMESPACE]
rules:
- apiGroups: [""]
  resources:
    - namespaces
    - persistentvolumes
    - persistentvolumeclaims
    - pods/log
  verbs:
    - list
    - watch
    - get
    - create
- apiGroups: ["", "extensions", "apps"]
  resources:
    - pods
    - replicasets
    - deployments
    - ingresses
    - secrets
    - statefulsets
    - services
  verbs:
    - list
    - watch
    - get
    - update
    - create
    - delete
    - patch
- apiGroups: [""]
  resources:
    - events
  verbs:
    - list
    - watch
    - get
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: [ROLE_BINDING_NAME]
  namespace: [PROCESSOR_NAMESPACE]
subjects:
- kind: ServiceAccount
  namespace: [LENSES_NAMESPACE]
  name: [SERVICE_ACCOUNT_NAME]
roleRef:
  kind: Role
  name: [ROLE_NAME]
  apiGroup: rbac.authorization.k8s.io
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: processor-role
  namespace: lenses-proc-ns
rules:
- apiGroups: [""]
  resources:
    - namespaces
    - persistentvolumes
    - persistentvolumeclaims
    - pods/log
  verbs:
    - list
    - watch
    - get
    - create
- apiGroups: ["", "extensions", "apps"]
  resources:
    - pods
    - replicasets
    - deployments
    - ingresses
    - secrets
    - statefulsets
    - services
  verbs:
    - list
    - watch
    - get
    - update
    - create
    - delete
    - patch
- apiGroups: [""]
  resources:
    - events
  verbs:
    - list
    - watch
    - get
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: processor-role-binding
  namespace: lenses-proc-ns
subjects:
- kind: ServiceAccount
  namespace: lenses-ns
  name: default
roleRef:
  kind: Role
  name: processor-role
  apiGroup: rbac.authorization.k8s.io
lenses:
  append:
    conf: |
      lenses.kubernetes.namespaces = {
        incluster = [
          "[PROCESSORS NAMESPACE]"
        ]
      }      
lenses:
  append:
    conf: |
      lenses.kubernetes.namespaces = {
        incluster = [
          "lenses-processors"
        ]
      }      

Overview

This page describes an overview of Lenses IAM (Identity & Access Management)

Principals (Users & Service accounts) receive their permissions based on their group membership.

Roles hold a set of policies, defining the permissions. Roles are assigned to groups.

Roles provide flexibility in how you want to provide access. You can create a very open policy or a very granular policy, for example, allowing operators and support engineers certain permissions to restart Connectors but denying actions that would allow them to view data or configuration options.

Roles are defined at the HQ level. This allows you to control access to actions at HQ and lower environment levels and to assign the same set of permissions across your whole Kafka landscape in a central place.

Policies

A policy has:

  • One or more actions;

  • One or more resource patterns that the actions apply to;

  • An effect: allow or deny.

name: [policy_name]
policy: #list of actions/resources/effect
- action:
  resource:
  effect:[allow|deny]      

If any effect is deny for a resource, the result is always deny; the principle of least privilege applies.

A policy is defined by a YAML specification.

Example Policy
name: blue-things
policy:
  - action:
      - iam:List*
      - iam:Get*
    resource: iam:*
    effect: allow
  - action:
      - environments:Get*
      - environments:List*
      - environments:AccessEnvironment
    resource: environments:*
    effect: allow
  - action:
      - kafka:*
      - schemas:*
      - kafka-connect:*
      - kubernetes:*
      - applications:*
    resource:
      - kafka:topic:*/*/blue-*
      - kafka:consumer-group:*/*/blue-*
      - kafka:acl:*/*/*/user/blue-*
      - schemas:schema:*/*/blue-*
      - kafka-connect:cluster:*/*
      - kafka-connect:connector:*/*/blue-*
      - sql-streaming:sql-processor:*/*/*/blue-*
      - kubernetes:cluster:*/*
      - kubernetes:namespace:*/*/*
      - applications:external-application:*/blue-*
    effect: allow

Actions

Action Patterns describe sets of actions, and a concrete action can match an Action Pattern. In this text, action and action pattern are used interchangeably.

An action has the format: service:operation, e.g. iam:DeleteUser

Services

Services describe the system entity that an action applies to. Services are:

  1. environments

  2. kafka

  3. registry

  4. schemas

  5. kafka-connect

  6. sql-streaming

  7. kubernetes

  8. applications

  9. alerts

  10. data-policies

  11. governance

  12. audit

  13. iam

  14. administration

Operations

An operation can contain a wildcard, but only at the end. See IAM Reference for the available operations per service.

Resources

Resources identify which resource, within a service, the principal is allowed or denied to perform the operation on.

A resource-type cannot combine literal characters with wildcards: it is either a literal name or the wildcard '*'.

If the service is provided, resource-type can be a wildcard.

Resource ID

The resource ID identifies the resource within the context of a service and a resource type.

A resource-id consists of one or more segments separated by a slash /. A segment can be a wildcard, or contain a wildcard as a suffix of a string. If a segment is a wildcard, then remaining segments do not need to be provided, and will be assumed to be wildcards as well.

The format of an LRN (Lenses Resource Name) is service:resource-type:resource-id

  • kafka:topic:my-env/* will be expanded to kafka:topic:my-env/*/*;

  • kafka:topic:my-env/my-cluster* is invalid because the Topic segment is missing, kafka:topic:my-env/my-cluster*/topic would be valid though;

  • *:topic:* is invalid, the service is not provided;

  • kaf*:* and kafka:top* are invalid, service and resource-type cannot contain wildcards;

  • kafka:*:foo is invalid, if the resource-type is a wildcard then resource-id cannot be set.

Evaluation

A principal (user or service account) can perform an action on a resource if:

In any of the roles it receives via group membership:

  • There is any matching Permission Statement that has an effect of allow;

  • And there is not any matching Permission Statement that has an effect of deny.

A Permission Statement matches an action plus resource, if:

  • The action matches any of the Permission Statement's Action Patterns, AND:

  • The resource matches any of the Permission Statement's Resource Patterns.

An Action matches an Action Pattern (AP) if:

  • The AP is a wildcard, OR:

  • The Action's service equals the AP's and the AP's operation string-matches the Action's operation.

A Resource matches a Resource Pattern (RP) if:

  • The RP is a wildcard, OR:

  • The Resource's service equals the RP's and the RP's resource-type is a wildcard, OR:

  • The Resource's service and types equals that of the RP and resource-ids match. Resource-ids are matched by string-matching each individual segment. If the RP has a trailing wildcard segment, the remaining segments are ignored.

A string s matches a pattern p if:

  • They are equal character by character;

  • If s or p has more non-wildcard characters than the other, they don't match;

  • If p contains a * suffix, any remaining characters in s are ignored.

p | s | match
"lit" | "lit" | true
"lit" | "li" | false
"lit" | "litt" | false
"lit" | "oth" | false
"*" | "some" | true
"foo*" | "foo" | true
"foo*" | "foo-bar" | true
"" | "" | true
"x" | "" | false
"" | "x" | false
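As a quick illustration, the prefix-wildcard matching above can be reproduced with shell globbing (a minimal sketch using a hypothetical match helper; note that shell patterns are more general than the suffix-only * described here):

terminal
# prints whether string $2 matches pattern $1
match() { case "$2" in $1) echo true ;; *) echo false ;; esac; }
match 'lit'  'lit'      # true
match 'lit'  'litt'     # false
match 'foo*' 'foo-bar'  # true
match '*'    'some'     # true
match 'x'    ''         # false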

Order of items in any collection is irrelevant during evaluation. Collections are considered sets rather than ordered lists. The following are equivalent:

  • Order of Resource Patterns does not matter

  • Order of Permission Statements does not matter

  • Order of Roles does not matter

  • Order of Groups does not matter

Examples

In the examples we're not too religious about strict JSON formatting.

Broad Allow + Specific Deny

Given:

policy:
  - effect: allow
    resource: kafka:topic:my-env/*/*
    action: ReadKafkaData
  - effect: deny
    resource: kafka:topic:*/*/forbidden-topic
    action: ReadKafkaData

A principal:

  • Can ReadKafkaData on kafka:topic:my-env/the-cluster/some-topic because it is allowed and not denied;

  • Cannot DeleteKafkaTopic on kafka:topic:my-env/the-cluster/some-topic because there is no allow;

  • Cannot ReadKafkaData on kafka:topic:my-env/the-cluster/forbidden-topic because while it is allowed the deny kicks in.

Multiple Resources I

Given:

policy:  
  - effect: allow
    resource: [*, kafka:topic:my-cluster/*]
    action: ReadKafkaData

A principal:

  • Can ReadKafkaData on kafka:topic:someone-else-cluster/their-topic because the resource matches *.

Note that here the matching can be considered "most permissive".

Multiple Resources II

Given:

policy:
  - effect: allow
    resource: [kafka:topic:my-cluster/my-topic-1, kafka:topic:my-cluster/my-topic-2]
    action: ReadKafkaData

A principal:

  • Can ReadKafkaData on kafka:topic:my-cluster/my-topic-1 and kafka:topic:my-cluster/my-topic-2 because the resources match, but cannot ReadKafkaData on kafka:topic:my-cluster/my-topic-3.

Steps for Side-by-Side Upgrade

Guide on how to migrate your Lenses 5 instance to Lenses 6 Agent

Prerequisites

1

Have HQ deployed

The first step is to have HQ deployed and an agent_key obtained upon environment creation.

2

Prepare Postgres Database

H2 is not recommended for production environments.

For any other purposes, it is highly recommended to use a PostgreSQL (preferred) or Microsoft SQL Server database. Multiple agents can use the same Postgres database, but in that case, you must ensure that each Agent uses a different schema.

Therefore, in preparation, you must ensure:

For non-production environments, you can rely on the embedded H2 database.

3

Convert all connections / alerts / channels to Provisioning

There were multiple ways in which Lenses resources could have been managed in the past:

  1. Wizard

  2. Provisioning v1

  3. Provisioning v2

For Lenses (6) Agent, it is recommended that all connections (kafka, schema-registry, connect, ...) are kept inside the provisioning.yaml file in version 2.

Differences in provisioning between version 1 and 2 can be seen below:

More about other configuration options in provisioning.yaml -> Provisioning

If you are curious about how to properly create the provisioning.yaml file, you can read more on How to convert Wizard Mode to Provisioning Mode.

Steps to upgrade to Agent

In case you have to migrate SQL Processors and Postgres is not set as the data store, please read (Optional) Migrate SQL Processors and the rest first.

1

Follow the prerequisite steps above

Steps:

Once you've completed the steps above, you'll be ready to proceed and deploy the Agent.

2

Pick a deployment method

There are multiple deployment methods for the Agent deployment. Please choose one from Installation.

It is perfectly safe to run your older installation of Lenses 4.x/5.x alongside an Agent that is connecting to the existing Kafka cluster.

Lenses 4/5 and the Agent each behave as just another KafkaAdminClient connecting to your Kafka cluster, and therefore they can live next to each other.

Two Lenses instances shouldn't be connecting to the same database.

3

Migrate Lenses 5.5 groups and data policies

Migration of:

  • data policies

  • alerts

  • sql processors

is not necessary in case the Agent will re-use the same database as Lenses 5.

Once you have confirmed that, it is time to move towards migrating Lenses 4/5 groups to:

  • HQ Groups / Roles / Permissions;

  • Data policies

In order to do so, we have tooling that can help you: the Lenses Migration Tool.

Be aware, cloning of alerts is not yet available via the script above.

4

(Optional) Migrate SQL Processors and the rest

In case a new database will be used for the Agent upgrade path, it is highly recommended to pick the In-Place upgrade.

SQL Processors are stored within the Lenses 4/5 database. They will continue to work even if the Lenses instance is not running anymore, but in order to preserve their configuration and allow the Agent to be aware of their whereabouts, the following step has to be done.

Old Postgres to new Postgres database

This requires multiple steps:

  1. Stop Lenses 5 instance;

  2. Backup Postgres database Lenses 5 was using;

  3. Load the same database to new Postgres database;

  4. Start Lenses Agent (v6).

Old H2 database to new H2 database

Under construction 🚧

Old H2 database to new Postgres database

Only available if the upgrade to Lenses Agent v6 is being done from Lenses 5.5 onwards

The process of migration looks as follows:

  1. Create a database and user in PostgreSQL for Lenses 4/5 to use.

  2. Make a backup of your current embedded database (its path is controlled via the setting lenses.storage.directory); just copy the directory.

  3. Prepare the Lenses configuration: edit the Lenses configuration file (lenses.conf), adding the PostgreSQL settings, e.g.:

  4. Restart Lenses. It will perform the migration automatically.

  5. Once everything looks good, delete the directory containing the embedded database, and remove the key lenses.storage.directory from your lenses-agent.conf.

  6. Perform Lenses Agent upgrade

provisioning.yaml
lensesHq:
  - configuration:
      agentKey:
        value: ${LENSESHQ_AGENT_KEY}
      port:
        value: 10000
      server:
        value: lenses-hq
    name: lenses-hq
    tags: ['hq']
    version: 1
kafka:
  - name: kafka
    version: 1
    tags: [ 'kafka', 'dev' ]
    configuration:
      metricsType:
        value: JMX
      metricsPort:
        value: 9581
      kafkaBootstrapServers:
        value: [PLAINTEXT://demo-kafka:19092]
      protocol:
        value: PLAINTEXT
confluentSchemaRegistry:
  - name: schema-registry
    version: 1
    tags: [ 'dev' ]
    configuration:
      schemaRegistryUrls:
        value: [http://demo-kafka:8081]
      metricsType:
        value: JMX
      metricsPort:
        value: 9582
connect:
  - name: dev
    version: 1
    tags: [ 'dev' ]
    configuration:
      workers:
        value: [http://demo-kafka:8083]
      aes256Key:
        value: 0123456789abcdef0123456789abcdef
      metricsType:
        value: JMX
      metricsPort:
        value: 9584
# login as superuser and add Lenses role and database
psql -U postgres -d postgres <<EOF
CREATE ROLE lenses WITH LOGIN PASSWORD 'changeme';
CREATE DATABASE lenses OWNER lenses;
EOF
lenses.storage.postgres.password="changeme"
lenses.storage.postgres.host="my-postgres-server"
lenses.storage.postgres.port=5431 # optional, defaults to 5432
lenses.storage.postgres.username="lenses"
lenses.storage.postgres.database="lenses"
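After Lenses restarts and the automatic migration completes, you can sanity-check that the tables were created in the new database, for example:

terminal
psql -U lenses -d lenses -c '\dt'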
provisioning.yaml
license:
  fileRef:
    inline: '{"source":"Landoop LTD","clientId":"Lenses Dev","details":"kafka-lenses","key":"eyJhbGciOiJBMTI4S1ciLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0.AqO6Ax-o-4T0WKFX7eCGFRu329wxplkZuWrGdhyncrhBfh9higjsZA.-uGCXWjULTzb7-3ROfsRhw.olmx6FR7FH7c2adHol0ipokHF6jOo6LTDtoFOSPWfqKxbA3yI-CUqlyo_-Obin7MSA4KqXBLpXOvP72EJhIYuyqkxUVGRoXHF0Oj2V6kzdDcmjJHbMB4VTxdE8YBAYbPXzEXdhq7lZy4fxHHhYxAsRATCtqf7t7TQCE0TWOiSHvLwyD7xMK2X47KiKbnNlNvqeVnnjLUMMd7vzA5dTft48wJm2D5HJNZ0mS32gTaiiExT5nqolToL0KYIOpRiT00MTQkGlBdagVigc-DZBPM0ZTP5wuLkwdk4XbfoQKaWC4qaYA6VpGgQg03Mo1W4ljlqRy0N4cPQ-l4Mi1XV9VK-825-zhyxzPrxef5Zct2nzVEJ9MbWy0-xuf6THX4q2X8zmz_KiHoA-hBWjebv_2R9479ldGj0h-vm9htVD59_6RBOGb0rT4XSS-4_CGYBZzv5PIPpLdnVbkr_qjsxCI0BO7tPKoyxXg2qh4YQbn3wn5MqsE9yR2BbRaso9MSPFlF8PxqR7A4qrKJjn_mPlcrR-XGf0ua2XfWCVe4ngcWpzssYHcJJD80APyZgzneIw2dSaO0enfFYUq6avqGSeoG7VC9zYACfUdofdlULH2azmptJ2Jzw3ggpLR7ZzZ9QrySXTUB2jkzrqiHyM9fqIXUVwAkAJMcBuwF5zY5B_ChA69Uj_-s-S1RITBbg5wtB3LuHyJGtTo4fuYY75F_OL9Cwp7gcpa5u0M_wWZlx70j_6jCb-ogvghALbHY8OPeWz_1-3bvJM9T_jmjKy0FDt6x8FJV1lgMMR0j1RiUeauUMsnd4TNUYAH50mFwtK5PU-Iq.U4LwNfOL2JB4vzBMvo3Hig"}'
connections:
  kafka:
    tags: []
    templateName: Kafka
    configurationObject:
      kafkaBootstrapServers:
        - PLAINTEXT:///localhost:9092
      protocol: PLAINTEXT
      # metrics
      ## JMX
      metricsPort: 9581
      metricsType: JMX
      metricsSsl: false
      ## JOLOKIA
      # metricsPort: 19581
      # metricsType: JOLOKIAG # or JOLOKIAP
      # metricsSsl: false
      # metricsHttpSuffix: "/jolokia/"

  zookeeper:
    tags: []
    templateName: Zookeeper
    configurationObject:
      zookeeperUrls:
        - localhost:2181
      zookeeperSessionTimeout: 10000
      zookeeperConnectionTimeout: 10000
      # metrics
      ## JMX
      metricsPort: 9585
      metricsType: JMX
      metricsSsl: false
      ## JOLOKIA
      # metricsPort: 19585
      # metricsType: JOLOKIAG # or JOLOKIAP
      # metricsSsl: false
      # metricsHttpSuffix: "/jolokia/"

  schema-registry:
    templateName: SchemaRegistry
    tags: [ ]
    configurationObject:
      schemaRegistryUrls:
        - http://localhost:8081
      additionalProperties: { }
      # metrics
      ## JMX
      metricsPort: 9582
      metricsType: JMX
      metricsSsl: false
      ## JOLOKIA
      # metricsPort: 19582
      # metricsType: JOLOKIAG # or JOLOKIAP
      # metricsSsl: false
      # metricsHttpSuffix: "/jolokia/"

  connect-cluster-dev-1:
    templateName: KafkaConnect
    tags: []
    configurationObject:
      workers:
        - http://localhost:8083
      aes256Key: PasswordPasswordPasswordPassword
      # metrics
      ## JMX
      metricsPort: 9584
      metricsType: JMX
      metricsSsl: false
      ## JOLOKIA
      # metricsPort: 19584
      # metricsType: JOLOKIAG # or JOLOKIAP
      # metricsSsl: false
      # metricsHttpSuffix: "/jolokia/"

  my-prometheus: {"configuration":[{"key":"endpoints","value":["https://am.acme.com"]}],"tags":["prometheus","monitoring","metrics"],"templateName":"PrometheusAlertmanager"}

Deploying an Agent

This page describes installing Lenses Agent in Kubernetes via Helm.

Lenses HQ must be installed before setting up an Agent.

The latest Agent container image is available here on Docker Hub.

Helm charts are available here.

Run the following commands to add the charts to your Helm repo.

helm repo add lensesio https://helm.repo.lenses.io/
helm repo update

Prerequisites

  • Kubernetes 1.23+

  • Helm 3.8.0+

  • Available local Postgres database instance.

    • If you need to install Postgres on Kubernetes you can use one of many different publicly available Helm charts such as Bitnami's.

    • Or you can use a cloud provider's Postgres service such as one of these: AWS, Azure, or GCP.

    • See Lenses docs here for information on configuring Postgres to work with Agent.

  • Follow these steps to configure your Postgres database for Lenses Agent.

  • External Secrets Operator is the only supported secrets operator.

Configure an Agent

In order to configure an Agent, we have to understand the parameter groups that the Helm Chart offers.

Under the lensesAgent parameter there are some key parameter groups that are used to set up the connection to Lenses HQ:

  1. Storage

  2. HQ connection

  3. Provision

  4. Cluster RBACs

Moving forward, in the same order you can start configuring your Helm chart.

JSON Schema Support

You can use JSON Schema support to help you configure the values files for Helm; included in the repository is a JSON schema for the Agent Helm chart.


Configuring Agent chart

1

Configure storage (Postgres / H2 - Embedded database)

Postgres database is recommended for Production and Non-production workloads.

The H2 embedded database is recommended for evaluation purposes.

Running Agent with Postgres database

Prerequisite:

  • Running Postgres instance;

  • Created database for an Agent;

  • Username (and password) which has access to the created database;

In order to successfully run the Agent, the storage object within values.yaml has to be defined first.

The definition of the storage object is as follows:

lensesAgent:
  storage:
    postgres:
      enabled: true
      host: ""
      port: 
      username: ""
      database: ""
      schema: ""
      params: {}

Alongside the Postgres password, which can be referenced/created through the Helm chart, there are a few more options which can help while setting up the Agent.

There are two ways the username can be defined:

The most straightforward way, if the username is not being changed, is by just defining it within the username parameter such as

values.yaml
lensesAgent:
  storage:
    postgres:
      enabled: true
      host: postgres-postgresql.postgres.svc.cluster.local
      port: 5432
      database: lensesagent
      username: lenses
values.yaml
lensesAgent:
  storage:
    postgres:
      enabled: true
      host: postgres-postgresql.postgres.svc.cluster.local
      port: 5432
      database: lensesagent
      username: external  # use "external" to manage it using secrets
  additionalEnv:
    - name: LENSES_STORAGE_POSTGRES_USERNAME
      valueFrom:
        secretKeyRef:
          name: [SECRET_RESOURCE_NAME]
          key: [SECRET_RESOURCE_KEY]

Password reference types

The Postgres password can be handled in the following ways:

  1. Pre-created secret;

  2. Creating secrets on the spot through values.yaml.

values.yaml
lensesAgent:
  storage:
    postgres:
      enabled: true
      host: postgres-postgresql.playground.svc.cluster.local
      port: 5432
      username: lenses
      database: lensesagent
      password: useOnlyForDemos         
values.yaml
lensesAgent:
  storage:
    postgres:
      enabled: true
      host: postgres-postgresql.postgres.svc.cluster.local
      port: 5432
      database: lensesagent
      username: lenses
      password: external   # use "external" to manage it using secrets
  additionalEnv:
    - name: LENSES_STORAGE_POSTGRES_PASSWORD
      valueFrom:
        secretKeyRef:
          name: [SECRET_RESOURCE_NAME]
          key: [SECRET_RESOURCE_KEY]

Running Agent with H2 embedded database

Embedded database is not recommended to be used in Production or high load environments.

In order to run the Agent with the H2 embedded database, there are a few things to be aware of:

  • The K8s cluster the Agent will be deployed on has to support Persistent Volumes;

  • The Postgres options in the Helm chart have to be left out.

values.yaml
persistence:
  storageH2:
    enabled: true
    accessModes:
      - ReadWriteOnce
    size: 300Mi
2

Configure HQ connection (agent key)

Connecting to Lenses HQ is a straightforward process which requires two steps:

  1. Creating an Environment and obtaining an AGENT KEY in HQ as described here, if you have not already done so;

  2. Storing that same key in Vault or as a K8s secret.

The agent communicates with HQ via a secure custom binary protocol channel. To establish this channel and authenticate, the Agent needs an AGENT KEY.

Once the AGENT KEY has been copied, store it inside of Vault or any other tool that has integration with Kubernetes secrets.

There are three available options for how the agent key can be provided:

  1. ExternalSecret via External Secret Operator (ESO)

  2. Pre-created secret

  3. Inline string

To use this option, the External Secrets Operator (ESO) has to be installed and available in the K8s cluster you are deploying the Agent to.

When specifying secret.type: "externalSecret", the chart will:

  • create an ExternalSecret in the namespace where the Agent is deployed;

  • mount the resulting secret for the Agent to use.

values.yaml
lensesAgent:
  hq:
    agentKey:
      secret:
        type: "externalSecret"
        # Secret name where agentKey will be read from
        name: hq-password
        # Key name under secret where agentKey is stored
        key: key
        externalSecret:
          additionalSpecs: {}
          secretStoreRef:
            type: ClusterSecretStore # ClusterSecretStore | SecretStore
            name: [secretstore_name]

Make sure that the secret you are going to use is already created in the namespace where the Agent will be installed.
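For example, such a secret (matching the hq-password name and key key used in the values below) could be created with:

terminal
kubectl create secret generic hq-password \
  --namespace [AGENT_NAMESPACE] \
  --from-literal=key='agent_key_*'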

values.yaml
lensesAgent:
  hq:
    agentKey:
      secret:
        type: "precreated"
        # Secret name where agentKey will be read from
        name: hq-password
        # Key name under secret where agentKey is stored
        key: key

This option is NOT for PRODUCTION usage but rather just for demo / testing.

The chart will create a secret with the values defined below, and the same secret will be read by the Agent to connect to HQ.

values.yaml
lensesAgent:
  hq:
    agentKey:
      secret:
        type: "createNew"
        # Secret name where agentKey will be read from
        name: "lenses-agent-secret-1"
        # Value of agentKey generated by HQ
        value: "agent_key_*"

This secret will be fed into provisioning.yaml. The HQ connection is specified below, where the ${LENSESHQ_AGENT_KEY} reference is set:

values.yaml
lensesAgent:
  provision:
    path: /mnt/provision-secrets
    connections:
      lensesHq:
        - name: lenses-hq
          version: 1
          tags: ['hq']
          configuration:
            server:
              value: [LENSES_HQ_FQDN_OR_IP]
            port:
              value: 10000
            agentKey:
              # This property shouldn't be changed as it is mounted automatically
              # based on secret choice for hq.agentKey above 
              value: ${LENSESHQ_AGENT_KEY}
            sslEnabled:
              value: false

In order to enable TLS for secure communication between HQ and the Agent please refer to the following part of the page.

3

Configure provisioning (Kafka / SchemaRegistry / Kafka Connect)

Provisioning offers various connections starting with:

  • Kafka ecosystem components such as:

    • Kafka

    • Schema Registry

    • Kafka Connect

    • Zookeeper

  • Alerts & Audits

  • AWS

values.yaml
lensesAgent:
  provision:
    path: /mnt/provision-secrets
    connections:
      # Kafka Connection
      kafka:
        - name: Kafka
          version: 1
          tags: [my-tag]
          configuration:
            kafkaBootstrapServers:
              value:
                - PLAINTEXT://your.kafka.broker.0:9092
                - PLAINTEXT://your.kafka.broker.1:9092
            protocol: 
              value: PLAINTEXT
            # all metrics properties are optional
            metricsPort: 
              value: 9581
            metricsType: 
              value: JMX
            metricsSsl: 
              value: false
      # Confluent Schema Registry Connection
      confluentSchemaRegistry:
        - name: schema-registry
          tags: ["tag1"]
          version: 1      
          configuration:
            schemaRegistryUrls:
              value:
                - http://my-sr.host1:8081
                - http://my-sr.host2:8081
            ## all metrics properties are optional
            metricsPort: 
              value: 9581
            metricsType: 
              value: JMX
            metricsSsl: 
              value: false
      # Kafka Connect connection
      connect:
        - name: my-connect-cluster-name
          version: 1    
          tags: ["tag1"]
          configuration:
            workers:
              value:
                - http://my-kc.worker1:8083
                - http://my-kc.worker2:8083
            metricsPort: 
              value: 9585
            metricsType: 
              value: JMX
values.yaml
lensesAgent:
  additionalEnv:
    - name: SASL_JAAS_CONFIG
      valueFrom:
        secretKeyRef:
          name: kafka-sharedkey
          key: sasljaasconfig
  provision:
    path: /mnt/provision-secrets
    connections:
      # Kafka Connection
      kafka:
        - name: kafka
          version: 1
          tags: [ "dev", "dev-2", "eu"]
          configuration:
            kafkaBootstrapServers:
              value:
                - SASL_SSL://test-dev-2-kafka-bootstrap.kafka-dev.svc.cluster.local:9093
            saslJaasConfig:
              value: ${SASL_JAAS_CONFIG}
            saslMechanism:
              value: SCRAM-SHA-512
            protocol:
              value: SASL_SSL
      # Confluent Schema Registry Connection
      confluentSchemaRegistry:
        - name: schema-registry
          tags: ["tag1"]
          version: 1      
          configuration:
            schemaRegistryUrls:
              value:
                - http://my-sr.host1:8081
                - http://my-sr.host2:8081
            ## all metrics properties are optional
            metricsPort: 
              value: 9581
            metricsType: 
              value: JMX
            metricsSsl: 
              value: false
      # Kafka Connect connection
      connect:
        - name: my-connect-cluster-name
          version: 1    
          tags: ["tag1"]
          configuration:
            workers:
              value:
                - http://my-kc.worker1:8083
                - http://my-kc.worker2:8083
            metricsPort: 
              value: 9585
            metricsType: 
              value: JMX

More about provisioning and more advanced configuration options for each of these components can be found on the following link.

4

Cluster RBACs

The Helm chart creates ClusterRoles and ClusterRoleBindings that are used by SQL Processors if the deployment mode is set to KUBERNETES. They are used so that Lenses can deploy and monitor SQL Processor deployments in namespaces.

To disable the creation of Kubernetes RBAC set: rbacEnabled: false

If you want to limit the permissions the Agent has against your Kubernetes cluster, you can use Role/RoleBinding resources instead. Follow this link in order to enable it.

If you are not using SQL Processors and want to limit permissions given to Agent's ServiceAccount, there are two options you can choose from:

  • rbacEnable: true - will enable the creation of ClusterRole and ClusterRoleBinding for service account mentioned above;

values.yaml
rbacsEnable: true
namespaceScope: false
  • rbacEnable: true and namespaceScope: true - will enable the creation of Role and RoleBinding which is more restrictive;

values.yaml
rbacsEnable: true
namespaceScope: true

(Optional) Enable TLS connection with HQ

In this case, TLS has to be enabled on HQ. If you haven't yet enabled it, you can find the details here.

Enabling TLS in the communication between HQ and the Agent is done in the provisioning part of values.yaml.

In order to successfully enable TLS for the Agent you would need to:

  • additionalVolumes & additionalVolumeMounts - with which you will mount the truststore containing the CA certificate that HQ uses, and which the Agent needs to successfully pass the handshake.

  • additionalEnv - which will be used to securely read the password that unlocks the truststore.

  • Enable SSL in provision.

values.yaml
# Additional Volume with CA that HQ uses
additionalVolumes:
  - name: hq-truststore
    secret:
      secretName: hq-agent-test-authority
additionalVolumeMounts:
  - name: hq-truststore
    mountPath: "/mnt/provision-secrets/hq"

lensesAgent:
 # Additional Env to read truststore password from secret
 additionalEnv:
    - name: LENSES_HQ_AGENT_TRUSTSTORE_PWD
      valueFrom:
        secretKeyRef:
          name: hq-agent-test-authority
          key: truststore.jks.password
 provision:
    path: /mnt/provision-secrets
    connections:
      lensesHq:
        - name: lenses-hq
          version: 1
          tags: ['hq']
          configuration:
            server:
              value: [HQ_URL]
            port:
              value: 10000
            agentKey:
              value: ${LENSESHQ_AGENT_KEY}
            sslEnabled:
              value: true
            sslTruststore:
              file: "/mnt/provision-secrets/gq/truststore.jks"
            sslTruststorePassword:
              value: ${LENSES_HQ_AGENT_TRUSTSTORE_PWD}
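If you still need to build the truststore and the secret referenced above, a minimal sketch (assuming the HQ CA certificate is available locally as ca.crt) looks like:

terminal
# import the HQ CA into a JKS truststore
keytool -importcert -noprompt \
  -alias lenses-hq-ca \
  -file ca.crt \
  -keystore truststore.jks \
  -storepass changeme
# create the secret the Agent mounts, including the truststore password
kubectl create secret generic hq-agent-test-authority \
  --namespace [AGENT_NAMESPACE] \
  --from-file=truststore.jks \
  --from-literal=truststore.jks.password=changeme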

(Optional) Services

Enable a service resource in the values.yaml:

# Lenses service
service:
  enabled: true
  annotations: {}

(Optional) Controlling resources

To control the resources used by the Agent:

# Resource management
resources:
  requests:
    cpu: 1
    memory: 4Gi
  limits:
    cpu: 2
    memory: 5Gi

In case LENSES_HEAP_OPTS is not set explicitly it will be set implicitly.

Examples:

  1. If no requests or limits are defined, LENSES_HEAP_OPTS will be set to -Xms1G -Xmx3G

  2. If requests and limits are defined above the default values, LENSES_HEAP_OPTS will be set by the formula -Xms[Xmx / 2] -Xmx[limits.memory - 2]

  3. If .Values.lenses.jvm.heapOpts is set, it will override everything
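For example, to set the heap explicitly (a sketch assuming the lenses.jvm.heapOpts value referenced above):

values.yaml
lenses:
  jvm:
    heapOpts: "-Xms2G -Xmx4G"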


Enabling SQL processors in K8s mode

To enable SQL Processors in KUBERNETES mode and control the defaults:

lensesAgent:
  sql:
    processorImage: hub.docker.com/r/lensesioextra/sql-processor/
    processorImageTag: latest
    mode: KUBERNETES
    heap: 1024M
    minHeap: 128M
    memLimit: 1152M
    memRequest: 128M
    livenessInitialDelay: 60 seconds

To control the namespaces Lenses can deploy processors to, use the sql.namespaces value.

SQL Processor Role Binding

To achieve this, you need to create a Role and a RoleBinding resource in the namespace you want the processors deployed to.

For example:

  • Lenses namespace = lenses-ns

  • Processor namespace = lenses-proc-ns

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: processor-role
  namespace: lenses-proc-ns
rules:
- apiGroups: [""]
  resources:
    - namespaces
    - persistentvolumes
    - persistentvolumeclaims
    - pods/log
  verbs:
    - list
    - watch
    - get
    - create
- apiGroups: ["", "extensions", "apps"]
  resources:
    - pods
    - replicasets
    - deployments
    - ingresses
    - secrets
    - statefulsets
    - services
  verbs:
    - list
    - watch
    - get
    - update
    - create
    - delete
    - patch
- apiGroups: [""]
  resources:
    - events
  verbs:
    - list
    - watch
    - get
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: processor-role-binding
  namespace: lenses-proc-ns
subjects:
- kind: ServiceAccount
  namespace: lenses-ns
  name: default
roleRef:
  kind: Role
  name: processor-role
  apiGroup: rbac.authorization.k8s.io

Finally you need to define in the Agent configuration which namespaces the Agent has access to. Amend values.yaml to contain the following:

values.yaml
lensesAgent:
  append:
    conf: |
      lenses.kubernetes.namespaces = {
        incluster = [
          "lenses-processors"
        ]
      }      

Persistence Volume

Persistence can be enabled for three purposes:

  • Use of the H2 embedded database

  • Logging

  • Provisioning

When to enable (H2):

  • When using the Data Policies module to persist your data policies rules

  • When lenses.storage.enabled: false and an H2 local filesystem database is used instead of PostgreSQL

  • For non-critical and NON-PROD deployments

Configuration:

values.yaml
persistence:
  storageH2:
    enabled: true
    accessModes:
      - ReadWriteOnce
    size: 20Gi
    storageClass: ""
    annotations: {}
    existingClaim: ""
When to enable (logging):

  • When you need persistent log storage across pod restarts

  • When you want to retain logs for auditing or debugging purposes

Configuration:

values.yaml
persistence:
  log:
    enabled: true
    accessModes:
      - ReadWriteOnce
    size: 5Gi
    storageClass: ""
    annotations: {}
    existingClaim: ""

Dedicated volume for provisioning data managed via the HQ.

When to enable:

  • When using HQ-based provisioning workflows

  • Must be combined with PROVISION_HQ_URL and PROVISION_AGENT_KEY environment variables

Configuration:

values.yaml
persistence:
  provisioning:
    enabled: true
    accessModes:
      - ReadWriteOnce
    size: 5Mi
    storageClass: ""
    annotations: {}
    existingClaim: ""

Alternatively, the same can be set during Helm command execution:

# Install the Chart.
helm repo add lensesio https://helm.repo.lenses.io/
helm repo update
# Deploy the Agent. Only available from version 6.1.0 onwards.
helm install lenses-agent \
  lensesio/lenses-agent \
  --set 'persistence.provisioning.enabled=true' \
  --set 'lensesAgent.additionalEnv[0].name=PROVISION_HQ_URL' \
  --set 'lensesAgent.additionalEnv[0].value=[lenses-hq.url]' \
  --set 'lensesAgent.additionalEnv[1].name=PROVISION_AGENT_KEY' \
  --set 'lensesAgent.additionalEnv[1].value=[agent_key_*]'

Prometheus metrics

Prometheus metrics are automatically exposed on port 9102 under /metrics.

Currently, you can scrape them only via the Service, under the port named http-metrics.
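
If you run the Prometheus Operator, a ServiceMonitor can target that port. A minimal sketch; the label selector is a placeholder and must match the labels of your Agent Service:

servicemonitor.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: lenses-agent
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: lenses-agent  # placeholder: use your Agent Service's labels
  endpoints:
    - port: http-metrics  # the named Service port exposing the metrics
      path: /metrics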

lenses.conf

The main configurable options for lenses.conf are available in the values.yaml under the lensesAgent object. These include:

  • Authentication

  • Database connections

  • SQL processor configurations

To apply other static configurations use lensesAgent.append.conf, for example:

values.yaml
lensesAgent:
  append:
    conf: |
      lenses.interval.user.session.refresh=40000

Install the Chart

First, add the Helm Chart repository using the Helm command line:

helm repo add lensesio https://helm.repo.lenses.io/
helm repo update

Installing the Agent

Installing using cloned repository:

helm install lenses-agent charts/lenses-agent \
   --values charts/lenses-agent/values.yaml \
   --create-namespace --namespace lenses-agent

Installing using Helm repository:

terminal
helm install lenses-agent lensesio/lenses-agent \
   --values values.yaml \
   --create-namespace --namespace lenses-agent \
   --version 6.0.0

Be aware that, for the time being and for alpha purposes, use of --version is mandatory when deploying the Helm chart from the Helm repository.


Example Values files

Be aware that the example values.yaml only shows how all parameters should look in the end. Fill them in with correct values, otherwise the Helm installation might not succeed.

Example of values.yaml
values.yaml
lensesAgent:
  storage:
    postgres:
      enabled: true
      host: [postgres.url]
      port: 5432
      username: postgres 
      password: changeMe
      database: agent
  hq:
    agentKey:
      secret:
        type: "createNew"
        name: "agentKey"
        value: "agent_key_*"
  provision:
    path: /mnt/provision-secrets
    connections:
      lensesHq:
        - name: lenses-hq
          version: 1
          tags: ['hq']
          configuration:
            server:
              value: hq-tls-test-lenses-hq.hq-agent-test.svc.cluster.local
            port:
              value: 10000
            agentKey:
              value: ${LENSESHQ_AGENT_KEY}
      kafka:
        # There can only be one Kafka cluster at a time
        - name: kafka
          version: 1
          tags: ['staging', 'pseudo-data-only']
          configuration:
            kafkaBootstrapServers:
              value:
                - PLAINTEXT://kafka-1.svc.cluster.local:9092
                - PLAINTEXT://kafka-2.svc.cluster.local:9092
            protocol:
              value: PLAINTEXT
            # Metrics are strongly suggested for better Kafka cluster observability
            metricsType:
              value: JMX
            metricsPort:
              value: 9581

You can also find examples in the Helm chart repo.

Deploying HQ

This page describes installing Lenses HQ in Kubernetes via Helm.

Lenses HQ is a prerequisite for installing the Lenses Agent.

Latest images:

The latest HQ image is available from Docker Hub.

The latest Lenses CLI is available as a tarball or as a container.

Prerequisites

  • Kubernetes 1.23+

  • Helm 3.8.0+

  • Available local Postgres database instance:

    • If you need to install Postgres on Kubernetes you can use one of the many publicly available Helm charts such as Bitnami's.

    • Or you can use one of the cloud provider's Postgres services such as one of these: AWS, Azure, or GCP.

    • See Lenses docs here for information on configuring Postgres to work with HQ .

    • username (and password) that has access to the HQ database;

  • External Secrets Operator is the only supported secrets operator.

Configure HQ

To configure Lenses HQ properly we have to understand the parameter groups that the Chart offers.

Under the lensesHq parameter there are some key parameter groups that are used to set up HQ:

  1. storage

    • definition of connection towards database (Postgres is the only storage option)

  2. auth

    • Password based authentication configuration

    • SAML / SSO configuration

    • definition of administrators or first users to access the HQ

  3. http

    • defines the port on which HQ will be available to end users

    • defines values of special headers and cookies

    • types of connection such as TLS and non-TLS definitions

  4. agents

    • defines the connection between HQ and the Agent, such as the port where HQ listens for agent connections.

    • types of connection such as TLS and non-TLS definitions

  5. license

  6. monitoring

    • controls the metrics settings, defining where Prometheus-style metrics are exposed

  7. loggers

    • definition of logging level for HQ

You can now configure your Helm chart, working through these groups in the same order.


1

Configure storage (Postgres)

Postgres is the only available storage option.

Prerequisite:

  • Running Postgres instance;

  • Created database for HQ;

  • Username (and password) which has access to the created database;
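
For example, the role and database can be created with the standard Postgres client tools (a minimal sketch; all names are placeholders):

terminal
# Create a login role for HQ (prompts for a password)
createuser lenses --pwprompt
# Create the HQ database owned by that role
createdb lenseshq --owner=lenses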

In order to successfully run HQ, storage within values.yaml has to be defined first.

Definition of storage object is as follows:

lensesHq:
  storage:
    postgres:
      enabled: true
      host: ""
      port: 
      username: ""
      database: ""
      schema: ""
      tls: 
      params: {}
      passwordSecret:
        type: ""

Alongside the Postgres password, which can be referenced or created through the Helm chart, there are a few more options that can help while setting up HQ.

Username reference types

There are two ways the username can be defined:

The most straightforward way, if the username is not expected to change, is to define it directly in the username parameter:

values.yaml
lensesHq:
  storage:
    postgres:
      enabled: true
      host: postgres-postgresql.postgres.svc.cluster.local
      port: 5432
      username: lenses

If the Postgres username is rotated or changed frequently, it can be referenced from a pre-created secret:

values.yaml
lensesHq:
  storage:
    postgres:
      enabled: true
      host: postgres-postgresql.postgres.svc.cluster.local
      port: 5432
      username: lenses
      useSecretForUsername:
        enabled: true
        existingSecret:
          name: my-secret
          key: username

Password reference types

The Postgres password can be handled in three ways, using:

  1. External Secret via ExternalSecretOperator;

  2. Pre-created secret;

  3. Creating secret on the spot through values.yaml;

To use this option, the External Secrets Operator (ESO) has to be installed and available in the Kubernetes cluster you are deploying HQ to.

When specifying passwordSecret.type: "externalSecret", the chart will:

  • create an ExternalSecret in the namespace where HQ is deployed;

  • mount the resulting secret for HQ to use.

values.yaml
lensesHq:
  storage:
    postgres:
      enabled: true
      host: postgres-postgresql.playground.svc.cluster.local
      port: 5432
      username: lenses
      database: lenseshq
      passwordSecret:
        type: "externalSecret"
        # Secret name where database password will be read from
        name: hq-password
        # Key name under secret where database password is stored
        key: password
        externalSecret:
          additionalSpecs: {}
          secretStoreRef:
            type: SecretStore # or ClusterSecretStore
            name: secretstore-secrets            

Make sure that the secret you are going to use is already created in the namespace where HQ will be installed.
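
For instance, such a secret can be created with kubectl (a minimal sketch; the namespace is a placeholder, while the name and key match the example below):

terminal
kubectl -n lenses-hq create secret generic hq-password \
  --from-literal=postgres-password='changeMe'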

values.yaml
 lensesHq:
   storage:  
     postgres:
       enabled: true
       host: postgres-postgresql.playground.svc.cluster.local
       port: 5432
       username: lenses
       database: lenseshq
       passwordSecret:
         type: "precreated"
         # Secret name where database password will be read from
         name: hq-password
         # Key from secret's data where database password is being stored
         key: postgres-password

This option is NOT for PRODUCTION usage, but rather for demo / testing only.

The chart will create a secret with the values defined below; the same secret will be read by HQ in order to connect to Postgres.

values.yaml
 lensesHq:
   storage:
     postgres:
       enabled: true
       host: [POSTGRES_HOSTNAME]
       port: 5432
       username: lenses
       database: lenseshq
       passwordSecret:
         type: "createNew"
         # name of a secret that will be created
         name: [K8s_SECRET_NAME]
         # Database password
         password: [DATABASE_USER_PASSWORD]

Advanced Postgres settings

Sometimes special parameters are needed to form the correct connection URI. You can set these extra settings using params.

Example:

values.yaml
lensesHq:
  storage:
    postgres:
      enabled: true
      host: postgres-postgresql.postgres.svc.cluster.local
      port: 5432
      username: lenses
      params:
        sslmode: require
2

Configure AUTH endpoint

SAML / SSO is available only with Enterprise license.

The second pre-requirement to successfully run HQ is setting initial authentication.

You can choose between:

  • password-based authentication, which requires users to provide a username and password;

  • and SAML/SSO (Single Sign-On) authentication, which allows users to authenticate through an external identity provider for a seamless and secure login experience.

The Assertion Consumer Service endpoint is the following:

/api/v2/auth/saml/callback?client_name=SAML2Client
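
For example, with baseURL set to https://hq.example.com, the full Assertion Consumer Service URL to register with your IdP would be https://hq.example.com/api/v2/auth/saml/callback?client_name=SAML2Client.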

The definition of auth object is as follows:

values.yaml
lensesHq:
  auth:
    users:
      - username: admin
        # bcrypt("correcthorsebatterystaple").
        password: $2a$10$F66cb6ZhnJjGCZuxlvKP1e84eytTpT1MDJcpBblHaZgsqp1/Aa0LG
    administrators:
      - admin
      - [email protected]
      - [email protected]
    saml:
      enabled: true
      baseURL: ""
      entityID: ""
      # -- Example: <?xml version="1.0" ... (big blob of xml) </md:EntityDescriptor>
      metadata:
        referenceFromSecret: false
        secretName: ""
        secretKeyName: ""
        stringData: |
          <?xml version="1.0" encoding="UTF-8" standalone="no"?>
          </md:EntityDescriptor>
  
      userCreationMode: "sso"
      usersGroupMembershipManagementMode: "sso"
      uiRootURL: "/"
      groupAttributeKey: "groups"
      authnRequestSignature:
        enabled: false

First to cover is the users property. It is defined as an array, where each entry includes a username and a password. Passwords must be hashed using bcrypt before being placed in the password property, ensuring that they are stored securely.

values.yaml
lensesHq:
  auth:
    users:
      - username: admin
        # bcrypt("correcthorsebatterystaple").
        password: $2a$10$F66cb6ZhnJjGCZuxlvKP1e84eytTpT1MDJcpBblHaZgsqp1/Aa0LG
values.yaml
lensesHq:
  auth:
    users:
      - username: $(ADMIN_USER)
        password: $(ADMIN_USER_PWD)
  additionalEnv:
    - name: ADMIN_USER
      valueFrom:
        secretKeyRef:
          name: multi-credentials-secret
          key: user1-username
    - name: ADMIN_USER_PWD
      valueFrom:
        secretKeyRef:
          name: multi-credentials-secret
          key: user1-password

Second to cover is administrators. It lists the users (by username or email) that receive the highest level of permissions upon authentication to HQ.

The third attribute is the saml.metadata field, needed for setting up SAML / SSO authentication. For this step you will need the metadata.xml file, which can be set in two ways:

  1. Referencing metadata.xml file through pre-created secret;

  2. Placing metadata.xml contents inline as a string.

lensesHq:
  auth:
    address: ":8080"
    accessControlAllowOrigin:
      - 
    administrators:
      - [email protected]
      - [email protected]
    saml:
      baseURL: ""
      entityID: ""
      metadata:
        referenceFromSecret: true
        secretName: hq-tls-mock-saml-metadata
        secretKeyName: metadata.xml
      userCreationMode: "sso"
      usersGroupMembershipManagementMode: "manual"
lensesHq:
  auth:
    address: ":8080"
    accessControlAllowOrigin:
      - 
    administrators:
      - [email protected]
      - [email protected]
    saml:
      baseURL: ""
      entityID: ""
      metadata:
        referenceFromSecret: false
        stringData: |
          <?xml version="1.0" encoding="UTF-8" standalone="no"?>
          ...
          ...
          </md:EntityDescriptor>
      userCreationMode: "sso"
      usersGroupMembershipManagementMode: "sso"

If the SAML IdP requires certificate verification, it can be enabled and provided in the following way:

values.yaml
lensesHq:
  auth:
    saml:
      authnRequestSignature:
        enabled: true
        authnRequestSigningCert:
          referenceFromSecret: true
          secretName: hq-agent-test-authority
          secretKeyName: hq-tls-test.crt.pem
        authnRequestSigningKey:
          secret:
            name: saml-test
            key: privatekey.key
values.yaml
lensesHq:
  auth:
    saml:
      authnRequestSignature:
        enabled: true
        authnRequestSigningCert:
          stringData: |
            -----BEGIN CERTIFICATE-----
            ....
            -----END CERTIFICATE-----
        authnRequestSigningKey:
          secret:
            name: saml-test
            key: privatekey.key
3

Configure HTTP endpoint

The third pre-requirement to successfully run HQ is the http definition. As previously mentioned, this parameter defines everything around the HTTP endpoint of the HQ itself and how users will interact with it.

Definition of HTTP object is as follows:

lensesHq:
  http:
    address: ":8080"
    accessControlAllowOrigin:
      - 
    accessControlAllowCredentials: false
    secureSessionCookies: true
    tls:
      enabled: true
      cert:
      privateKey:
        secret:
          name: 
          key:

The second part of the HTTP definition is enabling TLS and the TLS definition itself. TLS for lensesHq.http.tls is configured in the same way as for lensesHq.agents.tls, described in the next step.
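
A minimal sketch of the lensesHq.http.tls definition referencing a pre-created secret, mirroring the agents TLS structure shown in the next step (the secret name and key names are placeholders):

values.yaml
lensesHq:
  http:
    address: ":8080"
    tls:
      enabled: true
      cert:
        referenceFromSecret: true
        secretName: hq-http-tls       # placeholder: secret holding the PEM certificate
        secretKeyName: tls.crt
      privateKey:
        secret:
          name: hq-http-tls           # private keys can only be referenced through a secret
          key: tls.key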

4

Configure agent's connection endpoint

After correctly configuring the authentication strategy and connection endpoint, agent handling is the last important box to tick.

The Agent's object is defined as follows:

lensesHq:
  agents:
    # which port to listen on for agent requests
    address: ":10000" 
    tls:
      enabled: false
      verboseLogs: false
      cert:
      privateKey:

Enabling TLS

By default, TLS for the communication between the Agent and HQ is disabled. If the requirement is to enable it, the following has to be set:

  • lensesHq.agents.tls - certificates to manage the connection between HQ and the Agents

  • lensesHq.http.tls - certificates to manage the connection with HQ's API

Unlike private keys, which can be referenced and obtained only through a secret, certificates can be referenced directly in the values.yaml file as a string or as a secret.

values.yaml
lensesHq:
  agents:
    address: ":10000"
    tls:
      enabled: true
      cert:
        referenceFromSecret: true
        secretName: hq-agent-test-authority
        secretKeyName: hq-tls-test.crt.pem
      privateKey:
        secret:
          name: hq-agent-test-authority
          key: hq-tls-test.key.pem
values.yaml
lensesHq:
  agents:
    address: ":10000"
    tls:
      enabled: true
      cert:
        stringData: |
          -----BEGIN CERTIFICATE-----
          ...
          ...
          -----END CERTIFICATE-----
      privateKey:
        secret:
          name: hq-agent-test-authority
          key: hq-tls-test.key.pem
5

Configure license

For demo purposes and testing the product, you can use our community license:

license_key_2SFZ0BesCNu6NFv0-EOSIvY22ChSzNWXa5nSds2l4z3y7aBgRPKCVnaeMlS57hHNVboR2kKaQ8Mtv1LFt0MPBBACGhDT5If8PmTraUM5xXLz4MYv

License can be read in multiple ways:

  • from a pre-created secret

  • directly as a string defined in values.yaml file

values.yaml
lensesHq:
  license:
    referenceFromSecret: true
    secretName: hq-license
    secretKeyName: key
    acceptEULA: true
values.yaml
lensesHq:
  license:
    referenceFromSecret: false
    stringData: "license_key_*"
    acceptEULA: true
6

Configure metrics endpoint

Metrics are optionally available in a Prometheus format and by default served on port 9090.

The port can be changed in the following way:

values.yaml
lensesHq:
  metrics:
    prometheusAddress: ":9090"

(Optional) Configure Ingress & Services

Whilst the chart supports setting TLS on Lenses HQ itself, we recommend placing it on the Ingress resource.

Ingress and service resources are optionally supported.

The http ingress is intended only for HTTP/S traffic, while the agents ingress is designed specifically for TCP protocol. Ensure appropriate ingress configuration for your use case.

Enable an Ingress resource in the values.yaml:

values.yaml
ingress:
  http:
    enabled: true
    annotations:
      traefik.ingress.kubernetes.io/router.entrypoints: websecure
    host: example.com
    ingressClassName: ""
    tls:
      enabled: false
      # The TLS secret must contain keys named tls.crt and tls.key that contain the certificate and private key to use for TLS.
      secretName: ""


  agent:
    enabled: true
    agentIngressConfig:
      apiVersion: traefik.containo.us/v1alpha1
      kind: IngressRouteTCP
      metadata:
        name: agents
      spec:
        entryPoints:
          - agents
        routes:
          - match: HostSNI(`example.com`)  # HostSNI to match TLS for TCP
            services:
              - name: lenses-hq            # Replace with your service name
                port: 10000                # Agent default TCP port  
        tls: {}

Enable a service resource in the values.yaml:

values.yaml
# Lenses HQ service
service:
  enabled: true
  type: ClusterIP
  annotations: {}
  externalTrafficPolicy:
  loadBalancerIP: 130.211.x.x
  loadBalancerSourceRanges:
    - 0.0.0.0/0

(Optional) Configure Service Accounts

Lenses HQ, by default, uses the default Kubernetes service account but you can choose to use a specific one.

If the user defines the following:

values.yaml
# serviceAccount is the Service account to be used by Lenses to deploy apps
serviceAccount:
  create: true
  annotations: {}
  name: lenses-hq

The chart will create a new service account in the defined namespace for HQ to use.


(Optional) Enable RBAC

There are two options you can choose between:

  1. rbacEnable: true - will enable the creation of ClusterRole and ClusterRoleBinding for the service account mentioned in the snippet above

  2. rbacEnable: true and namespaceScope: true - will enable the creation of Role and RoleBinding which is more restrictive.
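
For example, to use the more restrictive, namespace-scoped variant:

values.yaml
rbacEnable: true
namespaceScope: true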


Configure logging

There are different logging modes and levels that can be adjusted.

values.yaml
lensesHq:
  logger:
    # Allowed values are: text | json
    mode: "text"

    # Allowed values are: info | debug
    level: "info"

Add chart repository

First, add the Helm Chart repository using the Helm command line:

helm repo add lensesio https://helm.repo.lenses.io/
helm repo update

Installing HQ

Be aware that, for the time being and for alpha purposes, use of --version is mandatory when deploying the Helm chart from the Helm repository.

terminal
helm install lenses-hq lensesio/lenses-hq \
   --values values.yaml \
   --create-namespace --namespace lenses-hq \
   --version 6.0.8

Example Values files

Be aware that the example values.yaml only shows how all parameters should look in the end. Fill them in with correct values, otherwise the Helm installation might not succeed.

Example of values.yaml

More about default values for Lenses HQ Helm Chart can be found in values.yaml. An example is below:

values.yaml
resources:
  requests:
  #   cpu: 1
    memory: 4Gi
  limits:
  #   cpu: 2
    memory: 8Gi

image:
  repository: lensesio/lenses:latest

rbacEnable: true
namespaceScope: true

ingress:
  http:
    enabled: true
    annotations:
      traefik.ingress.kubernetes.io/router.entrypoints: websecure
    host: example.com

lensesHq:
  agents:
    address: ":10000"
    tls:
      enabled: true   # optional
      cert:
        referenceFromSecret: true
        secretName: hq-agent-test-authority
        secretKeyName: hq-tls-test.crt.pem
      privateKey:
        secret:
          name: hq-agent-test-authority
          key: hq-tls-test.key.pem
  auth:
    users:
      - username: admin
        # bcrypt("correcthorsebatterystaple").
        password: $2a$10$F66cb6ZhnJjGCZuxlvKP1e84eytTpT1MDJcpBblHaZgsqp1/Aa0LG
    administrators:
      - admin
      - [email protected]
      - [email protected]
    saml:
      enabled: true    # optional
      baseURL: ""
      entityID: ""
      # -- Example: <?xml version="1.0" ... (big blob of xml) </md:EntityDescriptor>
      metadata:
        referenceFromSecret: false
        secretName: ""
        secretKeyName: ""
        stringData: |
          <?xml version="1.0" encoding="UTF-8" standalone="no"?>
          </md:EntityDescriptor>
  
      userCreationMode: "sso"
      usersGroupMembershipManagementMode: "sso"
      uiRootURL: "/"
      groupAttributeKey: "groups"
  http:
    address: ":8080"
    accessControlAllowOrigin:
      - 
    accessControlAllowCredentials: false
    secureSessionCookies: true
    tls:
      enabled: true    # optional
      cert:
      privateKey:
        secret:
          name: 
          key:
  # Find more details in https://docs.lenses.io/current/installation/kubernetes/helm/#helm-storage
  ## Postgres template example: "postgres://[username]:[pwd]@[host]:[port]/[database]?sslmode=require"
  storage:
    postgres:
      enabled: true
      host: [POSTGRES_HOST_URL]
      port: 5432
      username: [POSTGRES_USERNAME]
      database: [POSTGRES_DATABASE]
      passwordSecret:
        type: "precreated"
        name: initContainer-2-db-secret
        key: password

What's next?

After the successful configuration and installation of HQ, the next steps would be:

  1. Deploying an Agent

  2. Configuring IAM roles / groups / policies

HQ

This page describes the Lenses HQ configuration.

HQ's configuration is defined in the config.yaml file.

EULA

To accept the Lenses EULA, set the following in the config.yaml file:

Without accepting the EULA, HQ will not start! See License.
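
A minimal sketch of the relevant config.yaml entries, based on the License fields documented below (the key value is a placeholder):

config.yaml
license:
  # Your HQ license key
  key: "license_key_..."
  acceptEULA: true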

The config.yaml file has the following top level groups:

| Name | Required | Default | Type | Description |
|------|----------|---------|------|-------------|
| http | Yes | n/a | HTTPConfig | Configures everything involving the HTTP. |
| agents | Yes | n/a | AgentsConfig | Controls the agent handling. |
| database | Yes | n/a | DatabaseConfig | Configures database settings. |
| logger | Yes | n/a | LoggerConfig | Sets the logger behaviour. |
| metrics | Yes | n/a | MetricsConfig | Controls the metrics settings. |
| license | Yes | n/a | License | Holds the license key. |
| auth | Yes | n/a | AuthConfig | Configures authentication and authorisation. |


AuthConfig

Configures authentication and authorisation.

It has the following fields:

| Name | Required | Default | Type | Description |
|------|----------|---------|------|-------------|
| administrators | No | [] | strings | Grants root access to principals. |
| saml | No | n/a | SAMLConfig | Contains SAML2 IdP configuration. |
| users | No | [] | Array | Creates initial users for password based authentication. |

AuthConfig: administrators

Lists the names of the principals (users, service accounts) that have root access. Access control allows any API operation performed by such principals. Optional. If not set, it will default to [].

AuthConfig: saml

Contains SAML2 IdP configuration. Please refer here for its structure.


HTTPConfig

Configures everything involving the HTTP.

It has the following fields:

| Name | Required | Default | Type | Description |
|------|----------|---------|------|-------------|
| address | Yes | n/a | string | Sets the address the HTTP server listens at. |
| accessControlAllowOrigin | No | ["*"] | strings | Sets the value of the "Access-Control-Allow-Origin" header. |
| accessControlAllowCredentials | No | false | boolean | Sets the value of the "Access-Control-Allow-Credentials" header. |
| secureSessionCookies | No | true | boolean | Sets the "Secure" attribute on session cookies. |
| tls | Yes | n/a | TLSConfig | Contains TLS configuration. |

HTTPConfig: address

Sets the address the HTTP server listens at.

Example value: 127.0.0.1:80.

HTTPConfig: accessControlAllowOrigin

Sets the value of the "Access-Control-Allow-Origin" header. This is only relevant when serving the backend from a different origin than the UI. Optional. If not set, it will default to ["*"].

HTTPConfig: accessControlAllowCredentials

Sets the value of the "Access-Control-Allow-Credentials" header. This is only relevant when serving the backend from a different origin than the UI. Optional. If not set, it will default to false.

HTTPConfig: secureSessionCookies

Sets the "Secure" attribute on authentication session cookies. When set, a browser sends such cookies not over unsecured HTTP (expect for localhost). If running Lenses HQ over unsecured HTTP, set this to false. Optional. If not set, it will default to true.

HTTPConfig: tls

Contains TLS configuration. Please refer here for its structure.


SAMLConfig

Contains SAML2 IdP configuration.

It has the following fields:

| Name | Required | Default | Type | Description |
|------|----------|---------|------|-------------|
| enabled | Yes | false | boolean | Enables or disables SAML. |
| metadata | Yes | n/a | string | Contains the IdP issued XML metadata blob. |
| baseURL | Yes | n/a | string | Defines base URL of HQ for IdP redirects. |
| uiRootURL | No | / | string | Controls where to redirect to upon successful authentication. |
| entityID | Yes | n/a | string | Defines the Entity ID. |
| groupAttributeKey | No | groups | string | Sets the attribute name for group names. |
| userCreationMode | No | manual | string | Controls how the creation of users should be handled in relation to SSO information. |
| groupMembershipMode | No | manual | string | Controls how the management of a user's group membership should be handled in relation to SSO information. |
| authnRequestSignature | No | n/a | ARSConfig | Enables signing the AuthnRequest that HQ sends to the IdP. |

SAMLConfig: metadata

Contains the IdP issued XML metadata blob.

Example value: <?xml version="1.0" ... (big blob of xml) </md:EntityDescriptor>.

SAMLConfig: baseURL

Defines the base URL of Lenses HQ; the IdP redirects back to here on success.

Example value: https://hq.example.com.

SAMLConfig: uiRootURL

Controls where the backend redirects to after having received a valid SAML2 assertion. Optional. If not set, it will default to /.

Example value: /.

SAMLConfig: entityID

Defines the Entity ID.

Example value: https://hq.example.com.

SAMLConfig: groupAttributeKey

Sets the attribute name from which group names are extracted in the SAML2 assertions. Different providers use different names. Okta, Keycloak and Google use "groups". OneLogin uses "roles". Azure uses "http://schemas.microsoft.com/ws/2008/06/identity/claims/groups". Optional. If not set, it will default to groups.

Example value: groups.

SAMLConfig: userCreationMode

Controls how the creation of users should be handled in relation to SSO information. With the 'manual' mode, only users that currently exist in HQ can login. Users that do not exist are rejected. With the 'sso' mode, users that do not exist are automatically created. Allowed values are manual or sso. Optional. If not set, it will default to manual.

SAMLConfig: groupMembershipMode

Controls how the management of a user's group membership should be handled in relation to SSO information. With the 'manual' mode, the information about the group membership returned from an Identity Provider will not be used and a user will only be a member of groups that were explicitly assigned to him locally. With the 'sso' mode, group information from Identity Provider (IdP) will be used. On login, a user's group membership is set to the groups listed in the IdP. Groups that do not exist in HQ are ignored. Allowed values are manual or sso. Optional. If not set, it will default to manual.

SAMLConfig: authnRequestSignature

Enables signing the AuthnRequest that HQ sends to the IdP. Its ARSConfig structure has the following fields:

| Name | Required | Default | Type | Description |
|------|----------|---------|------|-------------|
| enabled | Yes | false | boolean | Enables or disables AuthnRequest signing. |
| cert | No | n/a | string | Sets the PEM formatted AuthnRequest signing certificate. If provided, the key needs to be provided as well; if not provided while signing is enabled, HQ will generate a key-pair on start. |
| key | No | n/a | string | Sets the PEM formatted AuthnRequest signing private key. If provided, the cert needs to be provided as well; if not provided while signing is enabled, HQ will generate a key-pair on start. |

AgentsConfig

Controls the agent handling.

It has the following fields:

| Name | Required | Default | Type | Description |
|------|----------|---------|------|-------------|
| address | Yes | n/a | string | Sets the address the agent server listens at. |
| tls | Yes | n/a | TLSConfig | Contains TLS configuration. |
| grpc | No | n/a | AgentGrpcConfig | Contains Agent gRPC configuration. |

AgentsConfig: address

Sets the address the agent server listens at.

Example value: 127.0.0.1:3000.

AgentsConfig: tls

Contains TLS configuration. Please refer here for its structure.

AgentsConfig: grpc

Contains Agent gRPC configuration. This section is optional; if not provided, its values are set to the defaults described in AgentGrpcConfig below.


AgentGrpcConfig

Contains Agent gRPC configuration.

It has the following fields:

| Name | Required | Default | Type | Description |
|------|----------|---------|------|-------------|
| apiMaxRecvMessageSize | No | 33554432 | integer | Overrides the default maximum body size in bytes for proxied API responses. |
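
A minimal config.yaml sketch combining these fields, with the documented defaults shown:

config.yaml
agents:
  # Address the agent server listens at
  address: ":10000"
  grpc:
    # Maximum body size in bytes for proxied API responses (default)
    apiMaxRecvMessageSize: 33554432
  tls:
    enabled: false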


TLSConfig

Contains TLS configuration.

It has the following fields:

| Name | Required | Default | Type | Description |
|------|----------|---------|------|-------------|
| enabled | Yes | n/a | boolean | Enables or disables TLS. |
| cert | No | "" | string | Sets the PEM formatted public certificate. |
| key | No | "" | string | Sets the PEM formatted private key. |
| verboseLogs | No | false | boolean | Enables verbose TLS logging. |

TLSConfig: enabled

Enables or disables TLS.

Example value: false.

TLSConfig: cert

Sets the PEM formatted public certificate. Optional. If not set, it will default to ``.

Example value: -----BEGIN CERTIFICATE----- EXampLeRanDoM ... -----END CERTIFICATE----- .

TLSConfig: key

Sets the PEM formatted private key. Optional. If not set, it will default to ``.

Example value: -----BEGIN PRIVATE KEY----- ExAmPlErAnDoM ... -----END PRIVATE KEY----- .

TLSConfig: verboseLogs

Enables additional logging of TLS settings and events at debug level. The information presented might be a bit too much for day to day use but can provide extra information for troubleshooting TLS configuration. Optional. If not set, it will default to false.


DatabaseConfig

Configures database settings.

It has the following fields:

| Name | Required | Default | Type | Description |
|------|----------|---------|------|-------------|
| host | Yes | n/a | string | Sets the name of the host to connect to. |
| username | No | "" | string | Sets the username to authenticate as. |
| password | No | "" | string | Sets the password to authenticate as. |
| database | Yes | n/a | string | Sets the database to use. |
| schema | No | "" | string | Sets the schema to use. |
| TLS | No | false | boolean | Enables TLS. |
| params | No | {} | DBConnectionParams | Provides fine-grained control. |

DatabaseConfig: host

Sets the name of the host to connect to. A comma-separated list of host names is also accepted; each host name in the list is tried in order.

Example value: postgres:5432.

DatabaseConfig: username

Sets the username to authenticate as. Optional. If not set, it will default to ``.

Example value: johhnybingo.

DatabaseConfig: password

Sets the password to authenticate as. Optional. If not set, it will default to ``.

Example value: my-password.

DatabaseConfig: database

Sets the database to use.

Example value: my-database.

DatabaseConfig: schema

Sets the schema to use. Optional. If not set, it will default to ``.

Example value: my-schema.

DatabaseConfig: TLS

Enables TLS. In PostgreSQL connection string terms, setting TLS to false corresponds to sslmode=disable; setting TLS to true corresponds to sslmode=verify-full. For more fine-grained control, specify sslmode in the params which takes precedence. Optional. If not set, it will default to false.

Example value: true.

DatabaseConfig: params

Contains connection string parameters as key/value pairs. It allows fine-grained control of connection settings. The parameters can be found here: https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-PARAMKEYWORDS. Optional. If not set, it will default to {}.

Example value: {"application_name":"example"}.

LoggerConfig

Sets the logger behaviour.

It has the following fields:

| Name | Required | Default | Type | Description |
|------|----------|---------|------|-------------|
| mode | Yes | n/a | string | Controls the format of the logger's output. |
| level | No | info | string | Controls the level of the logger. |

LoggerConfig: mode

Controls the format of the logger's output. Allowed values are text or json.

LoggerConfig: level

Controls the level of the logger. Allowed values are info or debug. Optional. If not set, it will default to info.

MetricsConfig

Controls the metrics settings.

It has the following fields:

| Name | Required | Default | Type | Description |
|------|----------|---------|------|-------------|
| prometheusAddress | No | :9090 | string | Sets the Prometheus address. |

MetricsConfig: prometheusAddress

Sets the address at which Prometheus metrics are served. Optional. If not set, it will default to :9090.

License

Holds the license key.

It has the following fields:

| Name | Required | Default | Type | Description |
|------|----------|---------|------|-------------|
| key | Yes | n/a | string | Sets the license key. |
| acceptEULA | Yes | false | boolean | Accepts the Lenses EULA. |

License: key

Sets the license key. An HQ key starts with "license_key_".

License: acceptEULA

Accepts the Lenses EULA.


How to convert Wizard Mode to Provisioning Mode

Migrating from Lenses Wizard Mode to Provision

Overview

Lenses 5.0+ introduces two primary methods for configuring your Lenses instance:

  • Wizard Mode: An interactive UI-based setup that appears when no Kafka brokers are configured

  • Provision Mode: A programmatic approach using a provisioning.yaml configuration, with or without sidecar containers

This guide walks you through migrating from the Wizard Mode setup to a fully automated Provision configuration, enabling GitOps workflows and Infrastructure as Code practices.

Understanding the Migration Path

When to Consider Migration

You should migrate from Wizard Mode to Provision when you need:

  • Automated deployments and Infrastructure as Code

  • GitOps workflows for configuration management

  • Consistent environments across development, staging, and production

  • Version control for your Lenses configuration

  • Scalable deployment patterns for multiple Lenses Agent instances

  • Migrating from Lenses 4/5 to a Lenses 6 Agent

Key Differences

| Aspect | Wizard Mode | Provision Mode |
|--------|-------------|----------------|
| Setup Method | Interactive UI | YAML configuration |
| Automation | Manual | Fully automated |
| Version Control | Not supported | Full Git integration |
| Secrets Management | Manual entry | Kubernetes secrets + file references |
| Deployment | One-time setup | Repeatable deployments |
| Configuration Updates | UI-based | Code-based with CI/CD |

Pre-Migration Checklist

Before starting your migration, please check Tips: Before Upgrade

1

Assess Agent Connections that have to be migrated from Lenses 5

Three connections have to be migrated to the provisioning.yaml file:

  1. Kafka

  2. SchemaRegistry

  3. KafkaConnect

In addition, a new connection called LensesHQ has to be created.

2

Prepare Your Provision Configuration

Basic Structure

For Helm Deployment:

  • create a values.yaml file with the provision configuration enabled

For deployment from the Archive, create:

  • lenses-agent.conf and

  • provisioning.yaml file

Connections that the provisioning will define:

  • Kafka;

  • SchemaRegistry;

  • KafkaConnect;

  • LensesHQ.

3

Configure Kafka Connection

values.yaml
lensesAgent:
  hq:
    agentKey:
      secret:
        type: "createNew"
        name: lenses-agent-secret
        value: <your-HQ-generated-agentKey>
  provision:
    path: /mnt/provision-secrets
    connections:
      lensesHq:
        - name: lenses-hq
          version: 1
          tags: ['hq']
          configuration:
            server:
              value: <your-HQ-address>
            port:
              value: 10000
            agentKey:
              value: ${LENSESHQ_AGENT_KEY}
            sslEnabled:
              value: false
      kafka:
        - name: kafka
          version: 1
          tags: [ "prod", "prod-1", "us"]
          configuration:
            kafkaBootstrapServers:
              value:
                - PLAINTEXT://<your-kafka-address>:9092
            metricsType:
             value: JMX
            metricsPort:
             value: 9999

For more Kafka connection details, such as using secure connections, please read Kafka

provisioning.yaml
lensesHq:
- configuration:
    agentKey:
      value: <your-HQ-generated-agentKey>
    port:
      value: 10000
    server:
      value: <your-HQ-address>
    sslEnabled:
      value: false
  name: lenses-hq
  tags:
  - hq
  version: 1
kafka:
- configuration:
    kafkaBootstrapServers:
      value:
      - PLAINTEXT://<your-kafka-address>:9092
    metricsPort:
      value: 9999
    metricsType:
      value: JMX
  name: kafka
  tags:
  - prod
  - prod-1
  - us
  version: 1
4

Configure Additional Services Connections

The last two connections to configure are:

  • Kafka Connect and

  • Schema Registry

values.yaml
lensesAgent:
  hq:
    agentKey:
      secret:
        type: "createNew"
        name: lenses-agent-secret
        value: <your-HQ-generated-agentKey>
  provision:
    path: /mnt/provision-secrets
    connections:
      lensesHq:
        - name: lenses-hq
          version: 1
          tags: ['hq']
          configuration:
            server:
              value: <your-HQ-address>
            port:
              value: 10000
            agentKey:
              value: ${LENSESHQ_AGENT_KEY}
            sslEnabled:
              value: false
      kafka:
        - name: kafka
          version: 1
          tags: [ "prod", "prod-1", "us"]
          configuration:
            kafkaBootstrapServers:
              value:
                - PLAINTEXT://<your-kafka-address>:9092
            metricsType:
             value: JMX
            metricsPort:
             value: 9999
      confluentSchemaRegistry:
        - name: schema-registry
          version: 1
          tags: [ "prod", "global" ]
          configuration:
            schemaRegistryUrls:
              value:
                - http://<your-schema-registry-address>:8081
      connect:
        - name: datalake-connect
          version: 1
          tags: [ "prod", "us" ]
          configuration:
            workers:
              value:
                - http://<your-kafka-connect-address>:8083
provisioning.yaml
lensesHq:
- configuration:
    agentKey:
      value: <your-HQ-generated-agentKey>
    port:
      value: 10000
    server:
      value: <your-HQ-address>
    sslEnabled:
      value: false
  name: lenses-hq
  tags:
  - hq
  version: 1
kafka:
- configuration:
    kafkaBootstrapServers:
      value:
      - PLAINTEXT://<your-kafka-address>:9092
    metricsPort:
      value: 9999
    metricsType:
      value: JMX
  name: kafka
  tags:
  - prod
  - prod-1
  - us
  version: 1
confluentSchemaRegistry:
- configuration:
    schemaRegistryUrls:
      value:
      - http://<your-schemaregistry-address>:8081
  name: schema-registry
  tags:
  - prod
  - global
  version: 1
connect:
- configuration:
    workers:
      value:
      - http://<your-kafkaconnect-address>:8083
  name: datalake-connect
  tags:
  - prod
  - us
  version: 1
5

Prepare HQ Connection

Over the last few steps, we covered configuring:

  • Kafka

  • Schema Registry

  • Kafka Connect connections.

Last but not least, and probably the most important, is creating the HQ connection; without it, the Agent won't be usable.

values.yaml
lensesAgent:
  hq:
    agentKey:
      secret:
        type: "createNew"
        name: lenses-agent-secret
        value: <your-HQ-generated-agentKey>
  provision:
    path: /mnt/provision-secrets
    connections:
      lensesHq:
        - name: lenses-hq
          version: 1
          tags: ['hq']
          configuration:
            server:
              value: <your-HQ-address>
            port:
              value: 10000
            agentKey:
              value: ${LENSESHQ_AGENT_KEY}
            sslEnabled:
              value: false
provisioning.yaml
lensesHq:
- configuration:
    agentKey:
      value: <your-HQ-generated-agentKey>
    port:
      value: 10000
    server:
      value: <your-HQ-address>
    sslEnabled:
      value: false
  name: lenses-hq
  tags:
  - hq
  version: 1
kafka:
- configuration:
    kafkaBootstrapServers:
      value:
      - PLAINTEXT://<your-kafka-address>:9092
    metricsPort:
      value: 9999
    metricsType:
      value: JMX
  name: kafka
  tags:
  - prod
  - prod-1
  - us
  version: 1

License configuration is part of Lenses HQ from version 6.

6

Configure Database details

In case Postgres is being used:

values.yaml
lensesAgent:
  storage:
    postgres:
      enabled: true
      host: postgres-1.postgres.svc.cluster.local
      port: 5432              # optional, defaults to 5432
      username: prod          
      password: external      # use "external" to manage it using secrets
      database: agent
  additionalEnv:
    - name: LENSES_STORAGE_POSTGRES_PASSWORD
      valueFrom:
        secretKeyRef:
          name: local-postgres-pwd
          key: password
  hq:
    agentKey:
      secret:
        type: "createNew"
        name: lenses-agent-secret
        value: agent_key_*
    ssl:
      enabled: false
  provision:
    path: /mnt/provision-secrets
    connections:
      lensesHq:
        - name: lenses-hq
          version: 1
          tags: ['hq']
          configuration:
            server:
              value: lenses-hq.lenses.svc.cluster.local
            port:
              value: 10000
            agentKey:
              value: ${LENSESHQ_AGENT_KEY}
            sslEnabled:
              value: false
      kafka:
        - name: kafka
          version: 1
          tags: [ "prod", "prod-2", "eu"]
          configuration:
            kafkaBootstrapServers:
              value:
                - PLAINTEXT://testing-kafka-bootstrap.kafka.svc.cluster.local:9092
            metricsType:
             value: JMX
            metricsPort:
             value: 9999
      connect:
        - name: datalake-connect
          version: 1
          tags: [ "prod-2", "eu" ]
          configuration:
            workers:
              value:
                - http://testing-connect-connect.kafka-connect.svc.cluster.local:8083
      confluentSchemaRegistry:
        - name: schema-registry
          version: 1
          tags: [ "prod", "global" ]
          configuration:
            schemaRegistryUrls:
              value:
                - http://testing-schema-registry.schema-registry.svc.cluster.local:8081

You would have to use two files:

  • lenses-agent.conf

  • provisioning.yaml

lenses-agent.conf
# Auto-detected env vars

lenses.secret.file=/data/security.conf

# lenses.append.conf

lenses.storage.postgres.host="postgres-1.postgres.svc.cluster.local"
lenses.storage.postgres.database="agent"
lenses.storage.postgres.username="agent"
lenses.storage.postgres.password="pleasechangeme"
lenses.storage.postgres.port="5432"
lenses.provisioning.path="/mnt/provision-secrets"
provisioning.yaml
lensesHq:
- configuration:
    agentKey:
      value: <your-HQ-generated-agentKey>
    port:
      value: 10000
    server:
      value: <your-HQ-address>
    sslEnabled:
      value: false
  name: lenses-hq
  tags:
  - hq
  version: 1
kafka:
- configuration:
    kafkaBootstrapServers:
      value:
      - PLAINTEXT://<your-kafka-address>:9092
    metricsPort:
      value: 9999
    metricsType:
      value: JMX
  name: kafka
  tags:
  - prod
  - prod-1
  - us
  version: 1
confluentSchemaRegistry:
- configuration:
    schemaRegistryUrls:
      value:
      - http://<your-schemaregistry-address>:8081
  name: schema-registry
  tags:
  - prod
  - global
  version: 1
connect:
- configuration:
    workers:
      value:
      - http://<your-kafkaconnect-address>:8083
  name: datalake-connect
  tags:
  - prod
  - us
  version: 1

H2 as a storage mechanism is available only from Agent v6.0.6.

Be aware that H2 is not recommended for production environments.

values.yaml
persistence:
  storageH2:
    enabled: true
    accessModes:
      - ReadWriteOnce
    size: 5Gi

lensesAgent:
  hq:
    agentKey:
      secret:
        type: "createNew"
        name: lenses-agent-secret
        value: agent_key_*
    ssl:
      enabled: false
  provision:
    path: /mnt/provision-secrets
    connections:
      lensesHq:
        - name: lenses-hq
          version: 1
          tags: ['hq']
          configuration:
            server:
              value: lenses-hq.lenses.svc.cluster.local
            port:
              value: 10000
            agentKey:
              value: ${LENSESHQ_AGENT_KEY}
            sslEnabled:
              value: false
      kafka:
        - name: kafka
          version: 1
          tags: [ "prod", "prod-2", "eu"]
          configuration:
            kafkaBootstrapServers:
              value:
                - PLAINTEXT://testing-kafka-bootstrap.kafka.svc.cluster.local:9092
            metricsType:
             value: JMX
            metricsPort:
             value: 9999
      connect:
        - name: datalake-connect
          version: 1
          tags: [ "prod-2", "eu" ]
          configuration:
            workers:
              value:
                - http://testing-connect-connect.kafka-connect.svc.cluster.local:8083
      confluentSchemaRegistry:
        - name: schema-registry
          version: 1
          tags: [ "prod", "global" ]
          configuration:
            schemaRegistryUrls:
              value:
                - http://testing-schema-registry.schema-registry.svc.cluster.local:8081

If you deploy through the archive instead of Helm, you would use two files:

  • lenses-agent.conf

  • provisioning.yaml

lenses-agent.conf
# Auto-detected env vars
lenses.secret.file=/data/security.conf
# lenses.append.conf
lenses.provisioning.path="/mnt/provision-secrets"
provisioning.yaml
lensesHq:
- configuration:
    agentKey:
      value: <your-HQ-generated-agentKey>
    port:
      value: 10000
    server:
      value: <your-HQ-address>
    sslEnabled:
      value: false
  name: lenses-hq
  tags:
  - hq
  version: 1
kafka:
- configuration:
    kafkaBootstrapServers:
      value:
      - PLAINTEXT://<your-kafka-address>:9092
    metricsPort:
      value: 9999
    metricsType:
      value: JMX
  name: kafka
  tags:
  - prod
  - prod-1
  - us
  version: 1
confluentSchemaRegistry:
- configuration:
    schemaRegistryUrls:
      value:
      - http://<your-schemaregistry-address>:8081
  name: schema-registry
  tags:
  - prod
  - global
  version: 1
connect:
- configuration:
    workers:
      value:
      - http://<your-kafkaconnect-address>:8083
  name: datalake-connect
  tags:
  - prod
  - us
  version: 1

Complete Migration Example

Here's a complete values.yaml (and the equivalent lenses-agent.conf plus provisioning.yaml) example for a production migration:

For more Helm options, please check the lenses-helm-chart repo.

values.yaml
rbacEnable: true
namespaceScope: true

lensesAgent:
  hq:
    agentKey:
      secret:
        type: "createNew"
        name: lenses-agent-secret
        value: agent_key_*
    ssl:
      enabled: false
  provision:
    path: /mnt/provision-secrets
    connections:
      lensesHq:
        - name: lenses-hq
          version: 1
          tags: ['hq']
          configuration:
            server:
              value: lenses-hq.lenses.svc.cluster.local
            port:
              value: 10000
            agentKey:
              value: ${LENSESHQ_AGENT_KEY}
            sslEnabled:
              value: false
      kafka:
        - name: kafka
          version: 1
          tags: [ "prod", "prod-2", "eu"]
          configuration:
            kafkaBootstrapServers:
              value:
                - PLAINTEXT://testing-kafka-bootstrap.kafka.svc.cluster.local:9092
            metricsType:
              value: JMX
            metricsPort:
              value: 9999
      connect:
        - name: datalake-connect
          version: 1
          tags: [ "prod-2", "eu" ]
          configuration:
            workers:
              value:
                - http://testing-connect-connect.kafka-connect.svc.cluster.local:8083
      confluentSchemaRegistry:
        - name: schema-registry
          version: 1
          tags: [ "prod", "global" ]
          configuration:
            schemaRegistryUrls:
              value:
                - http://testing-schema-registry.schema-registry.svc.cluster.local:8081
  storage:
    postgres:
      enabled: true
      host: postgres-1.postgres.svc.cluster.local
      port: 5432              # optional, defaults to 5432
      username: prod          
      password: external      # use "external" to manage it using secrets
      database: agent
  additionalEnv:
    - name: LENSES_STORAGE_POSTGRES_PASSWORD
      valueFrom:
        secretKeyRef:
          name: local-postgres-pwd
          key: password

If you deploy through the archive instead of Helm, you would use two files:

  • lenses-agent.conf

  • provisioning.yaml

lenses-agent.conf
# Auto-detected env vars

lenses.kubernetes.pod.mem.request=128M
lenses.kubernetes.pod.mem.limit=1152M
lenses.jmx.port=9101
lenses.kubernetes.pod.liveness.initial.delay="60 seconds"
lenses.sql.execution.mode=KUBERNETES
lenses.topics.metrics=_kafka_lenses_metrics
lenses.provisioning.path="/mnt/provision-secrets"
lenses.topics.external.topology=__topology
lenses.kubernetes.pod.min.heap=128M
lenses.port=3030
lenses.kubernetes.pod.heap=1024M
lenses.topics.external.metrics=__topology__metrics

lenses.secret.file=/data/security.conf

# lenses.append.conf

lenses.storage.postgres.host="postgres-1.postgres.svc.cluster.local"
lenses.storage.postgres.database="agent"
lenses.storage.postgres.username="agent"
lenses.storage.postgres.password="pleasechangeme"
lenses.storage.postgres.port="5432"
lenses.provisioning.path="/mnt/provision-secrets"
provisioning.yaml
lensesHq:
- configuration:
    agentKey:
      value: <your-HQ-generated-agentKey>
    port:
      value: 10000
    server:
      value: <your-HQ-address>
    sslEnabled:
      value: false
  name: lenses-hq
  tags:
  - hq
  version: 1
kafka:
- configuration:
    kafkaBootstrapServers:
      value:
      - PLAINTEXT://<your-kafka-address>:9092
    metricsPort:
      value: 9999
    metricsType:
      value: JMX
  name: kafka
  tags:
  - prod
  - prod-1
  - us
  version: 1
confluentSchemaRegistry:
- configuration:
    schemaRegistryUrls:
      value:
      - http://<your-schemaregistry-address>:8081
  name: schema-registry
  tags:
  - prod
  - global
  version: 1
connect:
- configuration:
    workers:
      value:
      - http://<your-kafkaconnect-address>:8083
  name: datalake-connect
  tags:
  - prod
  - us
  version: 1

Deployment and Testing

1. Deploy with Helm

Add the Lenses Helm repository:

helm repo add lenses-agent https://helm.repo.lenses.io
helm repo update

Deploy Lenses with your provision configuration:

helm install lenses-agent lenses-agent/lenses-agent \
  --namespace lenses \
  --create-namespace \
  -f values.yaml

Monitor the deployment:

kubectl get pods -n lenses -w
kubectl logs -n lenses deployment/lenses-agent
2. Deploy through Archive

Download Archive

Links to the archives can be found here: https://archive.lenses.io/lenses/6.0/agent/

Extract the archive using the following command (creating the target directory first):

terminal
mkdir -p lenses
tar -xvf lenses-agent-latest-linux64.tar.gz -C lenses

Start Agent

terminal
bin/lenses lenses-agent.conf

Best Practices for Production

Security Considerations

  • Use Kubernetes secrets for sensitive data instead of inline values

  • Enable TLS for all Lenses HQ connections

  • Implement RBAC for Kubernetes and Lenses HQ & Agent access

  • Rotate credentials regularly

Operations Best Practices

  • Monitor resource usage of sidecar containers

  • Set resource limits to prevent resource monopolization

  • Implement health checks for the provision process

  • Use GitOps workflows for configuration management

Scaling Considerations

  • Plan for multiple environments (dev, staging, prod)

  • Implement configuration templates for reusability

  • Use Helm chart dependencies for complex deployments

  • Monitor deployment metrics and success rates

Conclusion

Migrating from Lenses Wizard Mode to Provision Mode enables Infrastructure as Code practices, better security management, and automated deployments. While the initial setup requires more configuration, the long-term benefits of automated, version-controlled, and repeatable deployments make this migration worthwhile for production environments.

The provision sidecar pattern ensures that your Lenses configuration is managed alongside your infrastructure code, enabling true GitOps workflows and reducing configuration drift between environments.


IAM Reference

This page describes the IAM Reference options.

Administration

service: administration

Resource Syntax

  • administration:connection:${Environment}/${ConnectionType}/${Connection}

  • administration:lenses-logs:${Environment}

  • administration:lenses-configuration:${Environment}

  • administration:setting:${Setting}

| Operation | Resource Type | Description |
|---|---|---|
| CreateConnection | connection | |
| ListConnections | connection | |
| GetConnectionDetails | connection | |
| UpdateConnection | connection | |
| DeleteConnection | connection | |
| GetLensesLogs | lenses-logs | |
| GetLensesConfiguration | lenses-configuration | |
| ListAgents | agent | |
| GetAgentDetails | agent | |
| UpdateAgent | agent | |
| DeleteAgent | agent | |
| GetSetting | setting | |
| UpdateSetting | setting | |
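For instance, a role that can only view connections could be expressed as follows. This is a sketch in the same format as the role examples further down this page; the role name and wildcards are illustrative:

Example role permission
name: connection-viewer   # illustrative role name
policy:
  - action:
      - administration:ListConnections
      - administration:GetConnectionDetails
    # matches ${Environment}/${ConnectionType}/${Connection}
    resource: administration:connection:*/*/*
    effect: allow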

Applications

service: applications

Resource Syntax

| Operation | Resource Type | Description |
|---|---|---|
| RegisterApplication | external-application | |
| UnregisterApplication | external-application | |
| ListApplications | external-application | |
| GetApplicationDetails | external-application | |
| ListApplicationDependants | external-application | |

Alerts

service: alerts

Resource Syntax

  • alerts:alert:${Environment}/${AlertType}/${Alert}

  • alerts:rule:${Environment}/Infrastructure/KafkaBrokerDown

  • alerts:rule:${Environment}/DataProduced/red-app-going-slow

| Operation | Resource Type | Description |
|---|---|---|
| CreateAlertRule | rule | |
| DeleteAlertRule | rule | |
| UpdateAlertRule | rule | |
| ListAlertRules | rule | |
| GetAlertRuleDetails | rule | |
| ToggleAlertRule | rule | |
| ListAlertEvents | alert-event | |
| DeleteAlertEvents | alert-event | |
| CreateChannel | alert-channel | |
| ListChannels | alert-channel | |
| GetChannelDetails | alert-channel | |
| UpdateChannel | alert-channel | |
| DeleteChannel | alert-channel | |

K2K

service: k2k

Resource Syntax

  • k2k:app:${Name}

| Action | Resource Type | Description |
|---|---|---|
| CreateApp | app | |
| DeleteApp | app | |
| GetApp | app | |
| ListApps | app | |
| ManageOffsets | app | |
| UpdateApp | app | |
| UpsertApp | app | |

Audits

service: audit

Resource Syntax

  • audit:log:${Environment}

  • audit:channel:${Environment}/${AuditChannelType}/${AuditChannel}

| Operation | Resource Type | Description |
|---|---|---|
| ListLogEvents | log | |
| GetLogEventDetails | log | |
| CreateChannel | channel | |
| ListChannels | channel | |
| GetChannelDetails | channel | |
| UpdateChannel | channel | |
| DeleteChannel | channel | |
| ToggleChannel | channel | |

Data Policies

service: data-policies

Resource Syntax

  • data-policies:policy:${Environment}/${Policy}

| Operation | Resource Type | Description |
|---|---|---|
| CreatePolicy | policy | |
| ListPolicies | policy | |
| GetPolicyDetails | policy | |
| UpdatePolicy | policy | |
| DeletePolicy | policy | |
| ListPolicyDependants | policy | |

Environments

service: environments

Resource Syntax

  • environments:environment:${Environment}

| Operation | Resource Type | Description |
|---|---|---|
| CreateEnvironment | environment | |
| DeleteEnvironment | environment | |
| ListEnvironments | environment | |
| UpdateEnvironment | environment | |
| AccessEnvironment | environment | |
| GetEnvironmentDetails | environment | Allows users to get an overview of additional information about the environment, such as metrics, versions and more |

Kafka Connections

service: environments

Resource Syntax

  • environments:kafka-connection:${Environment}/${Connection}

| Operation | Resource Type | Description |
|---|---|---|
| GetKafkaConnectionDetails | environments | |
| ListKafkaConnections | environments | |
| UpsertKafkaConnection | environments | Create or update a Kafka Connection |
| DeleteKafkaConnection | environments | |

Governance

service: governance

Resource Syntax

  • governance:request:${Environment}/${ActionType}/*

  • governance:rule:${Environment}/${RuleCategory}/*

| Operation | Resource Type | Description |
|---|---|---|
| CreateRequest | request | |
| ListRequests | request | |
| GetRequestDetails | request | |
| ApproveRequest | request | |
| DenyRequest | request | |
| GetRuleDetails | rule | |
| UpdateRule | rule | |

IAM

service: iam

Resource Syntax

  • iam:role:${Role}

  • iam:group:${Group}

  • iam:user:${Username}

  • iam:service-account:${ServiceAccount}

| Operation | Resource Type | Description |
|---|---|---|
| CreateRole | role | |
| DeleteRole | role | |
| UpdateRole | role | |
| ListRoles | role | |
| ListRoleDependants | role | |
| GetRoleDetails | role | |
| CreateGroup | group | |
| DeleteGroup | group | |
| UpdateGroup | group | |
| ListGroups | group | |
| ListGroupDependants | group | |
| GetGroupDetails | group | |
| CreateUser | user | |
| DeleteUser | user | |
| UpdateUser | user | |
| ListUsers | user | |
| ListUserDependants | user | |
| GetUserDetails | user | |
| CreateServiceAccount | service account | |
| DeleteServiceAccount | service account | |
| UpdateServiceAccount | service account | |
| ListServiceAccounts | service account | |
| ListServiceAccountDependants | service account | |
| GetServiceAccountDetails | service account | |

Kafka Connect

service: kafka-connect

Resource Syntax

  • kafka-connect:connector:${Environment}/${KafkaConnectCluster}/${Connector}

  • kafka-connect:cluster:${Environment}/${KafkaConnectCluster}

Example role permission
name: global-connector-operator
policy:
  - action:
      - iam:List*
      - iam:Get*
    resource: iam:*
    effect: allow
  - action:
      - environments:Get*
      - environments:List*
      - environments:AccessEnvironment
    resource: environments:*
    effect: allow
  - action:
      - kafka-connect:List*
      - kafka-connect:GetClusterDetails
      - kafka-connect:GetConnectorDetails
      - kafka-connect:StartConnector
      - kafka-connect:StopConnector
    resource:
      - kafka-connect:cluster:*/*
      - kafka-connect:connector:*/*/*
    effect: allow
| Operation | Resource Type | Description |
|---|---|---|
| CreateConnector | connector | |
| ListConnectors | connector | |
| GetConnectorConfiguration | connector | |
| UpdateConnectorConfiguration | connector | |
| DeleteConnector | connector | |
| StartConnector | connector | |
| StopConnector | connector | |
| ListConnectorDependants | connector | |
| ListClusters | cluster | |
| GetClusterDetails | cluster | |
| DeployConnectors | cluster | |

Kafka

service: kafka

Resource Syntax

  • kafka:topic:${Environment}/${KafkaCluster}/${Topic}

  • kafka:acl:${Environment}/${KafkaCluster}/${AclResourceType}/* or kafka:acl:${Environment}/${KafkaCluster}/${AclResourceType}/${PrincipalType}/${Principal}

  • kafka:quota:${Environment}/${KafkaCluster}/${QuotaType}/* or one of the specific forms:

  • kafka:quota:${Environment}/${KafkaCluster}/clients

  • kafka:quota:${Environment}/${KafkaCluster}/users-default

  • kafka:quota:${Environment}/${KafkaCluster}/client/${ClientID}

  • kafka:quota:${Environment}/${KafkaCluster}/user/${Username}

  • kafka:quota:${Environment}/${KafkaCluster}/user/${Username}/client/${ClientID}

  • kafka:quota:${Environment}/${KafkaCluster}/user-client/${Username}/${ClientID}

  • kafka:quota:${Environment}/${KafkaCluster}/user/${Username}/client/*

  • kafka:quota:${Environment}/${KafkaCluster}/user-all-clients/${Username}

Example role permission
name: example
policy:
  - action:
      - kafka:ListTopics
      - kafka:GetTopicDetails 
    resource: 
      - kafka:topic:my_env/kafka/my_topic
| Operation | Resource Type | Description |
|---|---|---|
| CreateTopic | topic | |
| DeleteTopic | topic | |
| ListTopics | topic | |
| GetTopicDetails | topic | |
| UpdateTopicDetails | topic | |
| ReadTopicData | topic | |
| WriteTopicData | topic | |
| DeleteTopicData | topic | |
| ListTopicDependants | topic | Grants visibility of all entities that depend on this entity; e.g. ListTopicDependants lets you see (i.e. List) all consumer groups that read from the topic, regardless of your specific consumer group permissions |
| CreateAcl | acl | |
| GetAclDetails | acl | |
| UpdateAcl | acl | |
| DeleteAcl | acl | |
| CreateQuota | quota | |
| ListQuotas | quota | |
| GetQuotaDetails | quota | |
| UpdateQuota | quota | |
| DeleteQuota | quota | |
| DeleteConsumerGroup | consumer-group | |
| UpdateConsumerGroup | consumer-group | |
| ListConsumerGroups | consumer-group | |
| GetConsumerGroupDetails | consumer-group | |
| ListConsumerGroupDependants | consumer-group | |

Kubernetes

service: kubernetes

Resource Syntax

  • kubernetes:cluster:${Environment}/${KubernetesCluster}

  • kubernetes:namespace:${Environment}/${KubernetesCluster}/${KubernetesNamespace}

| Operation | Resource Type | Description | Example |
|---|---|---|---|
| ListClusters | cluster | | |
| GetClusterDetails | cluster | | |
| ListNamespaces | namespace | | |
| DeployApps | namespace | | |

Registry

service: registry

Resource Syntax

  • schemas:registry:${Environment}/${SchemaRegistry}

| Operation | Resource Type | Description |
|---|---|---|
| GetRegistryConfiguration | registry | |
| UpdateRegistryConfiguration | registry | |

Schemas

service: schemas

Resource Syntax

  • schemas:schema:${Environment}/${SchemaRegistry}/${Schema}

| Operation | Resource Type | Description |
|---|---|---|
| CreateSchema | schema | |
| DeleteSchema | schema | |
| UpdateSchema | schema | |
| GetSchemaDetails | schema | |
| ListSchemas | schema | |
| ListSchemaDependants | schema | |
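As an illustration, a role allowed to register and evolve schemas in a single environment might look like the sketch below, following the same policy format as the role examples above; the role name, environment and wildcards are illustrative:

Example role permission
name: schema-editor   # illustrative role name
policy:
  - action:
      - schemas:ListSchemas
      - schemas:GetSchemaDetails
      - schemas:CreateSchema
      - schemas:UpdateSchema
    # matches ${Environment}/${SchemaRegistry}/${Schema}
    resource: schemas:schema:prod/*/*
    effect: allow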

SQL Streaming

service: sql-streaming

Resource Syntax

  • sql-streaming:sql-processor:${Environment}/${KubernetesCluster}/${KubernetesNamespace}/${SqlProcessor}

  • For IN_PROC processors: sql-streaming:sql-processor:${Environment}/lenses-in-process/default/${SqlProcessor}

| Available Actions | Resource Type | Description |
|---|---|---|
| CreateProcessor | sql-processor | |
| ListProcessors | sql-processor | |
| GetProcessorDetails | sql-processor | |
| GetProcessorSql | sql-processor | |
| UpdateProcessorSql | sql-processor | |
| DeleteProcessor | sql-processor | |
| StartProcessor | sql-processor | |
| StopProcessor | sql-processor | |
| ScaleProcessor | sql-processor | |
| GetProcessorLogs | sql-processor | |
| ListProcessorDependants | sql-processor | |


Configuration Reference

This page lists the available configurations in Lenses Agent.

Set in lenses.conf

Basics

Reference documentation of all configuration and authentication options:

| Key | Description | Default | Type | Required |
|---|---|---|---|---|
| lenses.eula.accept | Accept the Lenses EULA | false | boolean | yes |
| lenses.ip | Bind HTTP at the given endpoint. Use in conjunction with lenses.port | 0.0.0.0 | string | no |
| lenses.port | The HTTP port to listen for API, UI and WS calls | 9991 | int | no |
| lenses.jmx.port | Bind JMX port to enable monitoring Lenses | | int | no |
| lenses.root.path | The path from which all the Lenses URLs are served | | string | no |
| lenses.secret.file | The full path to security.conf for security credentials | security.conf | string | no |
| lenses.sql.execution.mode | Streaming SQL mode: IN_PROC (test mode) or KUBERNETES (prod mode) | IN_PROC | string | no |
| lenses.offset.workers | Number of workers to monitor topic offsets | 5 | int | no |
| lenses.kafka.control.topics | An array of topics to be treated as "system topics" | see Default system topics below | array | no |
| lenses.grafana | Add your Grafana URL, e.g. http://grafanahost:port | | string | no |
| lenses.api.response.cache.enable | If enabled, disables client caching of the Lenses API HTTP responses by adding these HTTP headers: Cache-Control: no-cache, no-store, must-revalidate, Pragma: no-cache, and Expires: -1 | false | boolean | no |
| lenses.workspace | Directory to write temp files. If write access is denied, Lenses will fall back to /tmp | /run | string | no |
| lenses.connections.webhook.whitelist | Whitelist of allowed IP ranges and hostnames for webhook connections; only addresses matching the whitelist will be permitted (see the accepted formats below) | | array | no |

The value of lenses.connections.webhook.whitelist should be a list of strings, where each string can be:

  • An IPv4 address (e.g., "192.168.1.10")

  • An IPv4 CIDR range (e.g., "192.168.1.0/24")

  • An IPv6 address (e.g., "2001:db8::1")

  • An IPv6 CIDR range (e.g., "2001:db8::/32")

  • A hostname pattern (e.g., "*.trusted.com", "localhost", "api.example.com")
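As an example, a whitelist combining the accepted formats could be set like this; the addresses and hostnames are illustrative:

lenses.conf
# Illustrative values; only webhook targets matching these entries are allowed
lenses.connections.webhook.whitelist = ["192.168.1.0/24", "2001:db8::/32", "*.trusted.com", "localhost"]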

Default system topics

System or control topics are created by services for their internal use. Below is the list of built-in configurations to identify them.

  • _schemas

  • __consumer_offsets

  • _kafka_lenses_

  • lsql_*

  • lsql-*

  • __transaction_state

  • __topology

  • __topology__metrics

  • _confluent*

  • *-KSTREAM-*

  • *-TableSource-*

  • *-changelog

  • __amazon_msk*

Wildcard (*) matches any name at that position, capturing a list of topics rather than just one. When no wildcard is specified, Lenses matches on the exact entry name provided.
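For example, to mark further topics as system topics, the entries follow the same exact-name and wildcard conventions; the topic names below are illustrative:

lenses.conf
# Illustrative entries: one exact topic name and one wildcard pattern
lenses.kafka.control.topics = ["__mirror_state", "myapp-internal-*"]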

Security

TLS

| Key | Description | Default |
|---|---|---|
| lenses.access.control.allow.methods | HTTP verbs allowed in cross-origin HTTP requests | GET,POST,PUT,DELETE,OPTIONS |
| lenses.access.control.allow.origin | Allowed hosts for cross-origin HTTP requests | * |
| lenses.allow.weak.ssl | Allow https:// with self-signed certificates | false |
| lenses.ssl.keystore.location | The full path to the keystore file used to enable TLS on the Lenses port | |
| lenses.ssl.keystore.password | Password for the keystore file | |
| lenses.ssl.key.password | Password for the SSL certificate used | |
| lenses.ssl.enabled.protocols | Version of TLS protocol to use | TLSv1.2 |
| lenses.ssl.algorithm | X509 or PKIX algorithm to use for TLS termination | SunX509 |
| lenses.ssl.cipher.suites | Comma separated list of ciphers allowed for TLS negotiation | |
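Putting the keystore keys together, a minimal sketch for terminating TLS on the Lenses port could look like this; the path and passwords are illustrative:

lenses.conf
# Illustrative keystore path and passwords
lenses.ssl.keystore.location = "/var/private/ssl/lenses.jks"
lenses.ssl.keystore.password = "changeit"
lenses.ssl.key.password = "changeit"
lenses.ssl.enabled.protocols = "TLSv1.2"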

Kerberos

| Key | Description | Default |
|---|---|---|
| lenses.security.kerberos.service.principal | The Kerberos principal for Lenses to use in the SPNEGO form: HTTP/[email protected] | |
| lenses.security.kerberos.keytab | Path to the Kerberos keytab with the service principal. It should not be password protected | |
| lenses.security.kerberos.debug | Enable Java's JAAS debugging information | false |
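A minimal Kerberos sketch, assuming an illustrative service principal and keytab path:

lenses.conf
# Illustrative principal host/realm and keytab location
lenses.security.kerberos.service.principal = "HTTP/lenses.example.com@EXAMPLE.COM"
lenses.security.kerberos.keytab = "/etc/keytabs/lenses.keytab"
lenses.security.kerberos.debug = false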

Persistent storage


Common

| Key | Description | Default | Type | Required |
|---|---|---|---|---|
| lenses.storage.hikaricp.[*] | To pass additional properties to the HikariCP connection pool | | | no |

Postgres

| Key | Description | Default | Type | Required |
|---|---|---|---|---|
| lenses.storage.postgres.host | Host of the PostgreSQL server for Lenses to use for persistence | | string | no |
| lenses.storage.postgres.port | Port of the PostgreSQL server for Lenses to use for persistence | 5432 | integer | no |
| lenses.storage.postgres.username | Username for the PostgreSQL database user | | string | no |
| lenses.storage.postgres.password | Password for the PostgreSQL database user | | string | no |
| lenses.storage.postgres.database | PostgreSQL database name for Lenses to use for persistence | | string | no |
| lenses.storage.postgres.schema | PostgreSQL schema name for Lenses to use for persistence | "public" | string | no |
| lenses.storage.postgres.properties.[*] | To pass additional properties to the PostgreSQL JDBC driver | | | no |
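These keys mirror the lenses-agent.conf examples earlier on this page; a minimal Postgres persistence sketch, with illustrative host, credentials and database:

lenses.conf
# Illustrative connection details
lenses.storage.postgres.host = "postgres-1.postgres.svc.cluster.local"
lenses.storage.postgres.port = "5432"
lenses.storage.postgres.username = "agent"
lenses.storage.postgres.password = "pleasechangeme"
lenses.storage.postgres.database = "agent"
lenses.storage.postgres.schema = "public"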

Microsoft SQL Server

Set in security.conf

| Key | Description | Default | Type | Required |
|---|---|---|---|---|
| lenses.storage.mssql.host | Specifies the hostname or IP address of the Microsoft SQL Server instance | | string | yes |
| lenses.storage.mssql.port | Specifies the TCP port number that the Lenses application uses to connect to a Microsoft SQL Server database | | int | yes |
| lenses.storage.mssql.schema | Specifies the database schema Lenses uses within Microsoft SQL Server | | string | yes |
| lenses.storage.mssql.database | Specifies the Microsoft SQL Server database Lenses connects to | | string | yes |
| lenses.storage.mssql.username | Specifies the username that the Lenses application uses to authenticate with the Microsoft SQL Server database | | string | yes |
| lenses.storage.mssql.password | Specifies the password that the Lenses application uses to authenticate with the Microsoft SQL Server database | | string | yes |
| lenses.storage.mssql.properties | Allows additional properties to be set for the Microsoft SQL Server JDBC driver | | | no |
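By analogy with the Postgres example above, a Microsoft SQL Server sketch could look like this; all values are illustrative, and the keys are set in security.conf as noted above:

security.conf
# Illustrative connection details
lenses.storage.mssql.host = "mssql.example.internal"
lenses.storage.mssql.port = "1433"
lenses.storage.mssql.database = "lenses"
lenses.storage.mssql.schema = "dbo"
lenses.storage.mssql.username = "lenses"
lenses.storage.mssql.password = "changeme"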


Schema registries

If record schemas are centralized, the connectivity to the Schema Registry nodes is defined by a Lenses Connection.

There are two static config entries to enable/disable the deletion of schemas:

| Key | Description | Type |
|---|---|---|
| lenses.schema.registry.delete | Allow schemas to be deleted. Default is false | boolean |
| lenses.schema.registry.cascade.delete | Deletes associated schemas when a topic is deleted. Default is false | boolean |
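For example, to allow schema deletion and to cascade deletes when a topic is removed:

lenses.conf
# Both keys default to false
lenses.schema.registry.delete = true
lenses.schema.registry.cascade.delete = true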

Deployments

Options for specific deployment targets:

  • Global options

  • Kubernetes

Global options

Common settings, independently of the underlying deployment target:

| Key | Description | Default |
|---|---|---|
| lenses.deployments.events.buffer.size | Buffer size for events coming from Deployment targets such as Kubernetes | 10000 |
| lenses.deployments.errors.buffer.size | Buffer size for errors in the communication between Lenses and the Deployment targets such as Kubernetes | 1000 |

Kubernetes

Kubernetes connectivity is optional. The minimum supported K8s version is 0.11.10. All settings are strings.

| Key | Description | Default |
|---|---|---|
| lenses.kubernetes.processor.image.name | The URL for the streaming SQL Docker image for K8s | lensesioextra/sql-processor |
| lenses.kubernetes.processor.image.tag | The version/tag of the above container | 5.2 |
| lenses.kubernetes.config.file | The path to the kubectl config file | /home/lenses/.kube/config |
| lenses.kubernetes.pull.policy | Pull policy for K8s containers: IfNotPresent or Always | IfNotPresent |
| lenses.kubernetes.service.account | The service account for deployments. It is also used to pull the image | default |
| lenses.kubernetes.init.container.image.name | The docker/container repository URL and name of the Init Container image used to deploy applications to Kubernetes | lensesio/lenses-cli |
| lenses.kubernetes.init.container.image.tag | The tag of the Init Container image used to deploy applications to Kubernetes | 5.2.0 |
| lenses.kubernetes.watch.reconnect.limit | How many times to reconnect to the Kubernetes Watcher before considering the cluster unavailable | 10 |
| lenses.kubernetes.watch.reconnect.interval | How long to wait between Kubernetes Watcher reconnection attempts, in milliseconds | 5000 |
| lenses.kubernetes.websocket.timeout | How long to wait for a Kubernetes WebSocket response, in milliseconds | 15000 |
| lenses.kubernetes.websocket.ping.interval | How often to ping the Kubernetes WebSocket to check it's alive, in milliseconds | 30000 |
| lenses.kubernetes.pod.heap | The max amount of memory the underlying Java process will use | 900M |
| lenses.kubernetes.pod.min.heap | The initial amount of memory the underlying Java process will allocate | 128M |
| lenses.kubernetes.pod.mem.request | Controls how much memory the Pod Container will request | 128M |
| lenses.kubernetes.pod.mem.limit | Controls the Pod Container memory limit | 1152M |
| lenses.kubernetes.pod.cpu.request | Controls how much CPU the Pod Container will request | null |
| lenses.kubernetes.pod.cpu.limit | Controls the Pod Container CPU limit | null |
| lenses.kubernetes.namespaces | Object setting the list of Kubernetes namespaces that Lenses will see for each specified and configured cluster | null |
| lenses.kubernetes.pod.liveness.initial.delay | How long Kubernetes waits before checking the Processor's health for the first time. It can be expressed as e.g. 30 second, 2 minute or 3 hour; note the time unit is singular | 60 second |
| lenses.deployments.events.buffer.size | Buffer size for events coming from Deployment targets such as Kubernetes | 10000 |
| lenses.deployments.errors.buffer.size | Buffer size for errors in the communication between Lenses and the Deployment targets such as Kubernetes | 1000 |
| lenses.kubernetes.config.reload.interval | Time interval to reload the Kubernetes configuration file, in milliseconds | 30000 |
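A short sketch combining a few of the keys above, with illustrative values:

lenses.conf
# Illustrative overrides for a Kubernetes deployment target
lenses.kubernetes.config.file = "/home/lenses/.kube/config"
lenses.kubernetes.pull.policy = "Always"
lenses.kubernetes.pod.mem.limit = "1152M"
lenses.kubernetes.pod.liveness.initial.delay = "90 second"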

SQL snapshot (Explore & Studio)

Optimization settings for SQL queries.

| Key | Description | Type | Default |
|---|---|---|---|
| lenses.sql.settings.max.size | Restricts the max bytes that a Kafka SQL query will return | long | 20971520 (20MB) |
| lenses.sql.settings.max.query.time | Max time (in msec) that a SQL query will run | int | 3600000 (1h) |
| lenses.sql.settings.max.idle.time | Max time (in msec) for a query once it reaches the end of the topic | int | 5000 (5 sec) |
| lenses.sql.settings.show.bad.records | By default show bad records when querying a Kafka topic | boolean | true |
| lenses.sql.settings.format.timestamp | By default convert AVRO dates to a human readable format | boolean | true |
| lenses.sql.settings.live.aggs | By default allow aggregation queries on Kafka data | boolean | true |
| lenses.sql.sample.default | Number of messages to sample when live tailing a Kafka topic | int | 2 per window |
| lenses.sql.sample.window | How frequently to sample messages when tailing a Kafka topic | int | 200 msec |
| lenses.sql.websocket.buffer | Buffer size for messages in a SQL query | int | 10000 |
| lenses.metrics.workers | Number of workers for parallelising SQL queries | int | 16 |
| lenses.kafka.ws.buffer.size | Buffer size for the WebSocket consumer | int | 10000 |
| lenses.kafka.ws.max.poll.records | Max number of Kafka messages to return in a single poll() | long | 1000 |
| lenses.sql.state.dir | Folder to store KStreams state | string | logs/sql-kstream-state |
| lenses.sql.udf.packages | The list of allowed Java packages for UDFs/UDAFs | array of strings | ["io.lenses.sql.udf"] |

Lenses internal Kafka topics

Lenses requires these Kafka topics to be available; otherwise, it will try to create them. The topics can be created manually before Lenses runs, or Lenses can be granted the Kafka ACLs required to create them itself:

| Key | Description | Partitions | Replication | Default name | Compacted | Retention |
|---|---|---|---|---|---|---|
| lenses.topics.external.topology | Topic for applications to publish their topology | 1 | 3 (recommended) | __topology | yes | N/A |
| lenses.topics.external.metrics | Topic for external applications to publish their metrics | 1 | 3 (recommended) | __topology__metrics | no | 1 day |
| lenses.topics.metrics | Topic for SQL Processors to send their metrics | 1 | 3 (recommended) | _kafka_lenses_metrics | no | |

To allow for fine-grained control over the replication factor of the three topics, the following settings are available:

| Key | Description | Default |
|---|---|---|
| lenses.topics.replication.external.topology | Replication factor for the lenses.topics.external.topology topic | 1 |
| lenses.topics.replication.external.metrics | Replication factor for the lenses.topics.external.metrics topic | 1 |
| lenses.topics.replication.metrics | Replication factor for the lenses.topics.metrics topic | 1 |

When configuring the replication factor for your deployment, it's essential to consider the requirements imposed by your cloud provider. Many cloud providers enforce a minimum replication factor to ensure data durability and high availability. For example, IBM Cloud mandates a minimum replication factor of 3. Therefore, it's crucial to set the replication factor for the Lenses internal topics to at least 3 when deploying Lenses on IBM Cloud.
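For example, to satisfy such a provider requirement, all three replication factors can be raised to 3:

lenses.conf
# Required on providers such as IBM Cloud, which enforce a minimum replication factor of 3
lenses.topics.replication.external.topology = 3
lenses.topics.replication.external.metrics = 3
lenses.topics.replication.metrics = 3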

Advanced

All time configuration options are in milliseconds.

| Key | Description | Type | Default |
|---|---|---|---|
| lenses.interval.summary | How often to refresh the Kafka topic list and configs | long | 10000 |
| lenses.interval.consumers.refresh.ms | How often to refresh Kafka consumer group info | long | 10000 |
| lenses.interval.consumers.timeout.ms | How long to wait for Kafka consumer group info to be retrieved | long | 300000 |
| lenses.interval.partitions.messages | How often to refresh Kafka partition info | long | 10000 |
| lenses.interval.type.detection | How often to check Kafka topic payload info | long | 30000 |
| lenses.interval.user.session.ms | How long a client session stays alive if inactive (4 hours) | long | 14400000 |
| lenses.interval.user.session.refresh | How often to check for idle client sessions | long | 60000 |
| lenses.interval.topology.topics.metrics | How often to refresh topology info | long | 30000 |
| lenses.interval.schema.registry.healthcheck | How often to check the schema registries' health | long | 30000 |
| lenses.interval.schema.registry.refresh.ms | How often to refresh schema registry data | long | 30000 |
| lenses.interval.metrics.refresh.zk | How often to refresh ZK metrics | long | 5000 |
| lenses.interval.metrics.refresh.sr | How often to refresh Schema Registry metrics | long | 5000 |
| lenses.interval.metrics.refresh.broker | How often to refresh Kafka Broker metrics | long | 5000 |
| lenses.interval.metrics.refresh.connect | How often to refresh Kafka Connect metrics | long | 30000 |
| lenses.interval.metrics.refresh.brokers.in.zk | How often to refresh the Kafka broker list from ZK | long | 5000 |
| lenses.interval.topology.timeout.ms | Time period after which a metric is considered stale | long | 120000 |
| lenses.interval.audit.data.cleanup | How often to clean up dataset view entries from the audit log | long | 300000 |
| lenses.audit.to.log.file | Path to a file to write audits to in JSON format | string | |
| lenses.interval.jmxcache.refresh.ms | How often to refresh the JMX cache used in the Explore page | long | 180000 |
| lenses.interval.jmxcache.graceperiod.ms | How long to pause when a JMX connectivity error occurs | long | 300000 |
| lenses.interval.jmxcache.timeout.ms | How long to wait for a JMX response | long | 500 |
| lenses.interval.sql.udf | How often to look for new UDFs/UDAFs (user defined [aggregate] functions) | long | 10000 |
| lenses.kafka.consumers.batch.size | How many consumer groups to retrieve in a single request | int | 500 |
| lenses.kafka.ws.heartbeat.ms | How often to send heartbeat messages on the TCP connection | long | 30000 |
| lenses.kafka.ws.poll.ms | Max time for Kafka consumer data polling on WS APIs | long | 10000 |
| lenses.kubernetes.config.reload.interval | Time interval to reload the Kubernetes configuration file | long | 30000 |
| lenses.kubernetes.watch.reconnect.limit | How many times to reconnect to the Kubernetes Watcher before considering the cluster unavailable | long | 10 |
| lenses.kubernetes.watch.reconnect.interval | How long to wait between Kubernetes Watcher reconnection attempts | long | 5000 |
| lenses.kubernetes.websocket.timeout | How long to wait for a Kubernetes WebSocket response | long | 15000 |
| lenses.kubernetes.websocket.ping.interval | How often to ping the Kubernetes WebSocket to check it's alive | long | 30000 |
| lenses.akka.request.timeout.ms | Max time for a response in an Akka Actor | long | 10000 |
| lenses.sql.monitor.frequency | How often to emit healthcheck and performance metrics on Streaming SQL | long | 10000 |
| lenses.audit.data.access | Record dataset access as audit log entries | boolean | true |
| lenses.audit.data.max.records | How many dataset view entries to retain in the audit log. Set to -1 to retain indefinitely | int | 500000 |
| lenses.explore.lucene.max.clause.count | Override Lucene's maximum number of clauses permitted per BooleanQuery | int | 1024 |
| lenses.explore.queue.size | Optional setting to bound the Lenses internal queue used by the catalog subsystem. It must be a positive integer, or it is ignored | int | N/A |
| lenses.interval.kafka.connect.http.timeout.ms | How long to wait for a Kafka Connect response to be retrieved | int | 10000 |
| lenses.interval.kafka.connect.healthcheck | How often to check the Kafka Connect health | int | 15000 |
| lenses.interval.schema.registry.http.timeout.ms | How long to wait for a Schema Registry response to be retrieved | int | 10000 |
| lenses.interval.zookeeper.healthcheck | How often to check the ZooKeeper health | int | 15000 |
| lenses.ui.topics.row.limit | The number of Kafka records to load automatically when exploring a topic | int | 200 |
| lenses.deployments.connect.failure.alert.check.interval | Time interval in seconds to check that the connector failure grace period has completed. Used by the Connect auto-restart failed connectors functionality. It needs to be a value between (1, 600] | int | 10 |
| lenses.provisioning.path | Folder on the filesystem containing the provisioning data. See the provisioning docs for further details | string | |
| lenses.provisioning.interval | Time interval in seconds to check for changes on the provisioning resources | int | |
| lenses.schema.registry.client.http.retryOnTooManyRequest | When enabled, Lenses will retry a request whenever the schema registry returns a 429 Too Many Requests | boolean | false |
| lenses.schema.registry.client.http.maxRetryAwait | Max amount of time to wait whenever a 429 Too Many Requests is returned | duration | 2 seconds |
| lenses.schema.registry.client.http.maxRetryCount | Max retry count whenever a 429 Too Many Requests is returned | integer | 2 |
| lenses.schema.registry.client.http.rate.type | Specifies if HTTP requests to the configured schema registry should be rate limited. Can be "session" or "unlimited" | "unlimited" or "session" | unlimited |
| lenses.schema.registry.client.http.rate.maxRequests | When the rate limiter is "session", determines the max number of requests allowed per window | integer | N/A |
| lenses.schema.registry.client.http.rate.window | When the rate limiter is "session", determines the duration of the window used | duration | N/A |
| lenses.schema.connect.client.http.retryOnTooManyRequest | Retry a request whenever a Connect cluster returns a 429 Too Many Requests | boolean | false |
| lenses.schema.connect.client.http.maxRetryAwait | Max amount of time to wait whenever a 429 Too Many Requests is returned | duration | 2 seconds |
| lenses.schema.connect.client.http.maxRetryCount | Max retry count whenever a 429 Too Many Requests is returned | integer | 2 |
| lenses.connect.client.http.rate.type | Specifies if HTTP requests to the configured Connect cluster should be rate limited. Can be "session" or "unlimited" | "unlimited" or "session" | unlimited |
| lenses.connect.client.http.rate.maxRequests | When the rate limiter is "session", determines the max number of requests allowed per window | integer | N/A |
| lenses.connect.client.http.rate.window | When the rate limiter is "session", determines the duration of the window used | duration | N/A |

Connectors topology

Control how Lenses identifies your connectors in the Topology view. Catalogue your connector types, set their icons, and control how Lenses extracts the topics used by your connectors.

Lenses comes preconfigured for some of the popular connectors as well as the Stream Reactor connectors. If you see that Lenses doesn’t automatically identify your connector type then use the lenses.connectors.info setting to register it with Lenses.

Add a new HOCON object {} for every new Connector in your lenses.connectors.info list:

  lenses.connectors.info = [
      {
        class.name = "The connector full classpath"
        name = "The name which will be presented in the UI"
        instance = "Details about the instance. Contains the connector configuration field which holds the information. If  a database is involved it would be  the DB connection details, if it is a file it would be the file path, etc"
        sink = true
        extractor.class = "The full classpath for the implementation knowing how to extract the Kafka topics involved. This is only required for a Source"
        icon = "file.png"
        description = "A description for the connector"
        author = "The connector author"
      }
  ]

This configuration allows the connector to work with the topology graph, and also have the RBAC rules applied to it.

Source example

To extract topic information from the connector configuration, source connectors require extra configuration. Set extractor.class to io.lenses.config.kafka.connect.SimpleTopicsExtractor, and set property to the field in the connector configuration which determines the topics data is sent to.

Here is an example for the file source:

  lenses.connectors.info = [
    {
      class.name = "org.apache.kafka.connect.file.FileStreamSource"
      name = "File"
      instance = "file"
      sink = false
      property = "topic"
      extractor.class = "io.lenses.config.kafka.connect.SimpleTopicsExtractor"
    }
  ]

Sink example

An example of a Splunk sink connector and a Debezium SQL server connector

  lenses.connectors.info = [
    {
      class.name = "com.splunk.kafka.connect.SplunkSinkConnector"
      name = "Splunk Sink",
      instance = "splunk.hec.uri"
      sink = true,
      extractor.class = "io.lenses.config.kafka.connect.SimpleTopicsExtractor"
      icon = "splunk.png",
      description = "Stores Kafka data in Splunk"
      docs = "https://github.com/splunk/kafka-connect-splunk",
      author = "Splunk"
    },
    {
      class.name = "io.debezium.connector.sqlserver.SqlServerConnector"
      name = "CDC MySQL"
      instance = "database.hostname"
      sink = false,
      property = "database.history.kafka.topic"
      extractor.class = "io.lenses.config.kafka.connect.SimpleTopicsExtractor"
      icon = "debezium.png"
      description = "CDC data from RDBMS into Kafka"
      docs = "//debezium.io/docs/connectors/mysql/",
      author = "Debezium"
    }
  ]

External Applications

| Key | Description | Default | Type | Required |
|---|---|---|---|---|
| apps.external.http.state.refresh.ms | When registering a runner for an external app, a health-check interval can be specified. If it is not, this default interval is used (value in milliseconds) | 30000 | int | no |
| apps.external.http.state.cache.expiration.ms | The last known state of each runner is stored in a cache whose entries are invalidated after the time defined by this key (value in milliseconds). This value should not be lower than apps.external.http.state.refresh.ms | 60000 | int | no |
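A short sketch overriding both defaults; the values are illustrative, and the cache expiration is kept at or above the refresh interval as required:

lenses.conf
# Illustrative values: refresh every 15s, expire cached state after 60s
apps.external.http.state.refresh.ms = 15000
apps.external.http.state.cache.expiration.ms = 60000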
