Overview

This page gives an overview of Lenses.

Lenses is the leading developer experience and UI for exploring and moving real-time data, across any Kafka, on any cloud or on premise. We are on a mission to create an operating fabric to increase developer productivity on real-time data.

Architecture

Lenses is an application that connects to your Kafka environments, allowing you to manage, discover, explore and catalogue your data via SQL. You can also deploy and monitor stream processing applications (SQL Processors) and Kafka Connectors, all wrapped in an enterprise-grade RBAC layer.

All Lenses needs is connectivity to your services; think of it as a Kafka client.

The diagram gives a high-level overview of the logical components. At the core of Lenses, we have:

  • A Kafka UI for day-to-day work with Kafka

  • SQL Engine to query data and create streaming apps leveraging Kafka Streams

  • App Engine to manage seamless deployments of SQL apps (deployed to Kubernetes)

  • Metadata Engine to create a real-time Data Catalog for cross-system datasets and apps

Lenses is a JVM application that exposes secure RESTful APIs and websockets, in addition to providing a Kafka UI. A CLI is available to help automate operations.

Quick Start

This quick start guide will walk you through installing and starting Lenses using Docker, followed by connecting Lenses to your Kafka cluster.

We are no longer issuing Box licenses for Lenses 5. You can get the same functionality from the new .

Locally on your laptop

For a local quick start, you can use Lenses Box, an all-in-one Docker image with Lenses, Kafka, Schema Registry and more. Lenses will start and be configured to connect to the built-in Kafka brokers.

Connecting Lenses to your environment

Connect Lenses to your environment.

To connect Lenses to your real environment you can:

  1. Install Lenses (not the Box) and manually configure the connections to Kafka, Zookeepers, Schema Registries and Connect, or

  2. Install Lenses and configure the connections in one go using provisioning.

How to connect to Kafka depends on your Kafka provider.


Kafka

Learn how to connect Lenses to your Kafka.

Schema Registries

Learn how to connect Lenses to your Schema Registry.

Zookeeper

Learn how to connect Lenses to your Zookeepers.

Kafka Connect

Learn how to connect Lenses to your Kafka Connect Clusters.

Alert & Audits

Learn how to connect Lenses to your alerting and auditing systems.

AWS

Learn how to connect Lenses to AWS (credentials).

  1. To start with the Box you need a license. Please contact us.

  2. Install and run the Docker image:

docker run --rm \
    -p 3030:3030 \
    --name=dev \
    --net=host \
    -e EULA="https://dl.lenses.stream/d/?id=CHECK_YOUR_EMAIL_FOR_KEY" \
    lensesio/box

Open Lenses in your browser, log in with admin/admin.

For more information see here.

If you want to deploy via Helm see here.

Against your environment

For production and automated deployments see Installation.

Lenses starts in a bootstrap mode. This guides you through adding the minimum requirements for Lenses to start: a license and connection details to Kafka.

Prerequisites

  1. Valid license - Lenses is a licensed product.

  2. Kafka versions - Any version of Apache Kafka (2.0 or newer) on-premise and on-cloud.

  3. Network connectivity - Lenses needs access to your Kafka brokers.

Start Lenses

Run the following command to pull the latest Lenses image and run it:

docker run --name lenses \
  -e LENSES_PORT=3030 \
  -e LENSES_SECURITY_USER=admin \
  -e LENSES_SECURITY_PASSWORD=sha256:8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918 \
  -p 3030:3030 \
  -p 9102:9102 \
  lensesio/lenses:latest

Once Lenses has started, open Lenses in your browser, log in with admin/admin. You will be presented with the bootstrap UI that will guide you through connecting to Kafka.

The connection to your Kafka depends on your Kafka distribution; you can view more details in the Connection to Environment section.


Prerequisites

Prerequisites to check before using Lenses against your Kafka cluster.

Kafka versions

Any version of Apache Kafka (2.0 or newer) on-premise and on-cloud.

Schema Registry

Any version of Confluent Schema Registry (5.5.0 or newer), Apicurio (2.0 or newer) and AWS Glue.

JMX connectivity

Connectivity to JMX is optional but recommended for additional/enhanced monitoring of the Kafka Brokers and Connect Workers. Secure JMX connections are also supported, as well as Jolokia and OpenMetrics (MSK).

To enable JMX for Lenses itself, see .

Hardware & OS

Run on any Linux server (review ulimits) or container technology (Docker/Kubernetes). For RHEL 6.x and CentOS 6.x, use Docker.

Linux machines typically have a soft limit of 1024 open file descriptors. Check your current limit with the ulimit command:

ulimit -S -n     # soft limit
ulimit -H -n     # hard limit

Increase the soft limit to 4096, as a super-user, with:

ulimit -S -n 4096

Use 6GB RAM/4 CPUs and 500MB disk space.

Memory & CPU

The default Kubernetes resource configuration is: Request 1 CPU & 3Gi memory; Limit 2 CPU & 5Gi memory.

Browser

All recent versions of major browsers are fully supported.

APIs and Websockets

Every action in Lenses is backed by an API or websocket, documented at https://api.lenses.io. A Golang client and a CLI (command line interface) are available.

For websockets you may need to adjust your load balancer to allow them. See .

Lenses state store

Lenses can use an embedded H2 database or a Postgres database. Postgres is not supplied by Lenses.

TLS termination

By default, Lenses does not provide TLS termination, but it can be enabled via a configuration option. TLS termination is recommended for enhanced security and is a prerequisite for integrating with SSO (Single Sign On) via SAML 2.0.

TLS termination can be configured directly within Lenses or by using a TLS proxy or load balancer. Refer to the TLS documentation for additional information.

Learn

Learn how to install and configure Lenses.

IBM Event Streams Registry

This page describes connecting Lenses to IBM Event Streams schema registry.

Requires an Enterprise subscription on IBM Event Streams. Only hard delete is supported for IBM Event Streams.

To configure an application to use this compatibility API, specify the Schema Registry endpoint in the following format:

https://token:{$APIKEY}@{$HOST}/{confluent}

To add a connection, go to:

Admin->Connections

Select the New Connection button and select Schema Registry.

Enter:

  • Comma-separated list of schema registry URLs including ports, adding the confluent path at the end. Use the value from the kafka_http_url field in the IBM Console Service Credentials tab

  • Enable basic auth if required

  • Set the user name "token"

  • Set the password as the value from API key in the IBM Console Service Credentials tab


Quick Start

Launch Lenses locally with an all-in-one Docker image or against your Kafka environment.

Installation

Learn how to install and automate configuration.

Configuration

Learn how to configure Lenses.

IAM

Learn how to set up authentication and authorization of users in Lenses.

SQL for exploration & processing

Learn how to use Lenses SQL to explore and process data.

Kafka Connector Management

Learn how to use Lenses to manage your Kafka Connectors.

Kafka Connectors

Lenses provides a collection of open source Connector plugins, available with Enterprise support. Learn about them here.

Topics

Learn how to find, create and manage Kafka topics in the Data catalogue.

Schemas

Learn how to manage Schemas in your schema registries with Lenses.

Governance

Learn how to use Lenses to self-serve Data Policies, Kafka ACLs & Quotas.

Monitoring & Alerting

Learn how to configure Lenses to monitor and alert about your Kafka environments and applications.

Confluent

This page describes connecting Lenses to Confluent schema registries.

To add a connection, go to:

Admin->Connections

Select the New Connection button and select Schema Registry.

Enter:

  1. Comma-separated list of schema registry URLs including ports

  2. Enable basic auth if required and set the user name and password

  3. Enable SSL if required and upload the keystore

  4. Optionally upload a trust store

  5. Set any additional properties

  6. Optionally enable metrics

Confluent Platform

This page describes configuring Lenses to connect to Confluent Platform.

Lenses will not start without a valid Kafka Connection. You can either add the connection via the bootstrap wizard or use provisioning for automated deployments.

For Confluent Platform see Apache Kafka.

Provisioning examples

This page gives examples of the provisioning yaml for Lenses.

To use with Helm, place the examples under lenses.provisioning.connections in the values file.

Topic Settings

This page describes how to use Lenses topic settings to provide governance when creating topics in your Kafka cluster.

Topic settings and naming rules allow for the enforcement of best practices when onboarding new teams and topics into your data platform.

Topic configuration rules

Topic configuration rules can be used to enforce partition sizing, replication, and retention configuration during topic creation. Go to Admin->Topic Settings->Edit.

Naming conventions

By setting naming conventions you can control how topics are named. To define a naming convention, go to Admin->Topic Settings->Edit. Naming rules allow you to select from predefined regex or apply your own.
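For illustration, a naming rule could require an environment prefix. The regex below is a hypothetical example, not a Lenses default:

^(dev|staging|prod)\.[a-z0-9-]+$

With this rule, prod.payments-events would be accepted while Payments_Events would be rejected.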

Finding topics & fields

This page describes how to use Lenses to search for topics and fields across Kafka, Postgres and Elasticsearch.

Searching

In the Explore screen, you can use the search bar to filter for both fields and topic names in your Kafka cluster. Additionally, it will search across any Elasticsearch or Postgres instances that have been connected.

AWS Glue

This page describes connection to AWS Glue.

Lenses provides support for AWS Glue to manage schema and also explore and process data linked to it via the Lenses SQL Engines.

To connect to Glue, first create an AWS Connection. Go to Admin->Connections->AWS and enter your AWS credentials, or select the IAM support if Lenses is running on an AWS host (e.g. an EC2 instance) or has the AWS default credentials toolchain provider in place.

Rather than enter your AWS credentials you can use the AWS credentials chain.

Next, select New Connection->Schema Registry->AWS Glue. Select your AWS Connection with access to Glue, and enter the Glue ARN.

Kerberos

This page provides examples for defining a connection to Kerberos.

Adding metadata & tags to topics

This page describes how to use Lenses to add metadata and tags to topics in Kafka.

To add descriptions or tags to datasets, click the edit icon in the Summary panel.

Downloading messages

This page describes how to use Lenses to download messages to CSV or JSON from a Kafka topic.

Only the data returned to the frontend is downloaded.

Data can be downloaded, optionally including headers, as JSON or as CSV with a choice of delimiters.

Managing topic configurations

This page describes how to use Lenses to view and manage topic configurations in Kafka.

To view a configuration for a topic select the Configuration tab. Here you will see the current configurations inherited (default) from the brokers and if they have been overridden (current value).

To edit a configuration click the Edit icon and enter your value.



    Apicurio

    This page describes connecting Lenses to Apicurio.

Apicurio supports the following versions of Confluent's API:

    • Confluent Schema Registry API v6

    • Confluent Schema Registry API v7

    Set the schema registry URLs to include the compatibility endpoints, for example:

    http://localhost:8080/apis/ccompat/v6

    To add a connection, go to:

    Admin->Connections

    Select the New Connection button and select Schema Registry.

    Enter:

    1. Comma-separated list of schema registry URLs including ports and compatibility endpoint path

    2. Enable basic auth if required and set the user name and password


    Apache Kafka

    This page describes connecting Lenses to Apache Kafka.

    Lenses will not start without a valid Kafka Connection. You can either add the connection via the bootstrap wizard or use provisioning for automated deployments.

    1. Add your bootstrap brokers including ports

2. Optionally, a security protocol e.g. SASL_PLAINTEXT, SASL_SSL

    3. Optionally, SASL Mechanism, e.g. SCRAM-SHA-256
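As a sketch, a SASL_SSL connection using SCRAM maps to standard Kafka client properties along these lines (the username and password are placeholders):

security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="[user]" \
  password="[password]";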

    SSL/TLS Configuration


If your Kafka connection requires TLS, set the following:

    • Truststore: The SSL/TLS trust store to use as the global JVM trust store. Available formats are .jks, .p12, .pfx.

    • Keystore: The SSL/TLS keystore to use for the TLS listener for Lenses. Available format is .jks.

    JMX Metrics

Lenses allows you to connect to the brokers' JMX. Supported formats are:

1. Simple, with and without SSL

2. Jolokia (JOLOKIAG and JOLOKIAP)

  1. With and without SSL

  2. With Basic Auth

  3. With custom HTTP requests and suffix

3. AWS Open Monitoring

    IBM Event Streams

    This page describes how to connect Lenses to IBM Event Streams.

    Lenses will not start without a valid Kafka Connection. You can either add the connection via the bootstrap wizard or use provisioning for automated deployments.

From the IBM Cloud console, locate your bootstrap_endpoints for the service credentials you want to connect with.

    In the Lenses bootstrap UI:

    1. Set the bootstrap_endpoints as bootstrap servers

    2. Set SASL SSL as the security protocol

    3. Set PLAIN as the security mechanism

4. Set the jaas.conf as the following, using the apiKey value as the password:

org.apache.kafka.common.security.plain.PlainLoginModule required
username="token"
password="[Your API KEY]"

    IBM Event Streams requires a replication factor of 3. Ensure you set the replication factor accordingly for Lenses internal topics.

    See .

    Adding a License

    This page describes how to add a License.

    Lenses requires a valid license to start. The license can be added via the UI when in bootstrap mode or at deployment time via the provisioning APIs.

    See provisioning for integration into your CI/CD pipelines.

    License Expired

    If at any point the license becomes invalid (it expired / too many brokers were added to the cluster) only the license page will be available.

    License Management

See License Management.

    Zookeeper

    This page describes connecting Lenses to Zookeeper.

    Not all cloud providers give access to Zookeeper. Zookeeper is optional for Lenses.

    See provisioning for automating connections.

Connectivity to Zookeeper is optional for Lenses. Zookeeper is used by Lenses for the following purposes:

    1. To provide quotas management (until quotas can be managed via the Brokers API)

    2. To autodetect the JMX connectivity settings to Kafka brokers (if metrics are not defined directly for Kafka connection).

    To add a Zookeeper connection go to Admin->Connections->New Connection->Zookeeper.

    1. Add a comma-separated list of Zookeepers, including port

    2. Optionally set a session timeout

    3. Optionally set a Chroot path

    4. Optionally set a connection timeout

    Alert & Audit Integrations

    Connect Lenses to your alerting and auditing systems.

    You can either configure the connections in the UI or via provisioning. Provisioning is recommended.

Lenses can send out alert and audit events; the following integrations are supported:

    Alerts

    1. DataDog

    2. AWS CloudWatch

    3. PagerDuty

    4. Slack

    5. Alert Manager

6. Webhook (Email, SMS, HTTP and MS Teams)

    Audits

    1. Webhook

    2. Splunk

Once you have configured alert and audit connections, you can create alert and audit channels to route events to them. See Alert channels or Auditing for more information.

    AWS

    Add a connection to AWS in Lenses.

    You can either configure the connections in the UI or via provisioning. Provisioning is recommended.

Lenses uses an AWS connection in two places:

1. AWS IAM connection to MSK for Lenses itself

2. Sending alerts to CloudWatch.

If Lenses is deployed on an EC2 instance, or has access to AWS credentials in the default AWS toolchain, that can be used instead.

    Aiven

    This page describes configuring Lenses to connect to Aiven.

    Lenses will not start without a valid Kafka Connection. You can either add the connection via the bootstrap wizard or use provisioning for automated deployments.

From the Aiven console, locate your Service URI.

    In the Lenses bootstrap UI:

    1. Set the Service URI as bootstrap servers

    2. Set SASL SSL as the security protocol

3. Set SCRAM-SHA-256 as the security mechanism

4. Set the jaas.conf as the following, using the username and password from your service's connection details:

org.apache.kafka.common.security.scram.ScramLoginModule required
username="[Your_Connection_Details_Username]"
password="[Your_Connection_Details_Password]"

    Provisioning API reference

    This page describes the Provisioning API reference.

For the options for each connection, see the Schema/Object of the PUT call.

    AWS MSK

This page describes connecting Lenses to an AWS MSK cluster.

Lenses will not start without a valid Kafka Connection. You can either add the connection via the bootstrap wizard or use provisioning for automated deployments.

It is recommended to install Lenses on an EC2 instance or with EKS in the same VPC as your MSK cluster. Lenses can be installed and preconfigured via the AWS Marketplace.

    Confluent Cloud

    This page describes configuring Lenses to connect to Confluent Cloud.

Lenses will not start without a valid Kafka Connection. You can either add the connection via the bootstrap wizard or use provisioning for automated deployments.

For Confluent Platform see Apache Kafka.

    Installation

    This page describes the supported deployment methods for Lenses.

To automate the configuration of connections we recommend using provisioning.

Lenses can be deployed in the following ways:

Helm

Deploy Lenses in your Kubernetes cluster with Helm.

Docker

Deploy Lenses with Docker.

Linux (archive)

Deploy Lenses on Linux servers or VMs.

AWS Marketplace

Deploy Lenses via the AWS Marketplace.

Lenses Box

Try out Lenses with the Lenses Box.

    Azure HDInsight

This page describes connecting Lenses to an Azure HDInsight cluster.

Lenses will not start without a valid Kafka Connection. You can either add the connection via the bootstrap wizard or use provisioning for automated deployments.

In your Azure Portal, go to Dashboards > Ambari home.

1. Kafka endpoints: Go to Kafka > Configs > Kafka Broker > Kafka Broker hosts

    Okta SSO

This page describes configuring Lenses with Okta SSO.

    Groups are case-sensitive and mapped by name with Okta

    Integrate your user-groups with Lenses using the Okta group names. Create a group in Lenses using the same case-sensitive group name as in Okta.

    For example, if the Engineers group is available in Okta, create a group with the same name.

    Set up Okta IdP

Lenses is available directly in Okta's Application Catalog.

    Kafka

    This page describes how to connect to your Kafka brokers.

See provisioning for automating connections.

    Lenses can connect to any Kafka cluster or service exposing the Apache Kafka APIs and supporting the authentication methods offered by Apache Kafka.

Follow the guide for your distribution to obtain the credentials and bootstrap broker to provide to Lenses.

Apache Kafka

Connect Lenses to your Apache Kafka cluster.

AWS MSK

Connect Lenses to your AWS MSK cluster.

AWS MSK Serverless

Connect Lenses to your AWS MSK Serverless.

Aiven

Connect Lenses to your Aiven Kafka cluster.

Azure HDInsight

Connect Lenses to your Azure HDInsight cluster.

Confluent Cloud

Connect Lenses to your Confluent Cloud.

Confluent Platform

Connect Lenses to your Confluent Platform (on premise) cluster.

IBM Event Streams

Connect Lenses to your IBM Event Streams cluster.

    Configuration

    This page describes how to configure Lenses.

    This section guides you through understanding what is required to utilize Lenses efficiently and securely.

    Two files control Lenses configuration:

    • lenses.conf - contains most of the configuration

• security.conf - sensitive configuration options such as passwords for authentication

A third, optional file, provisioning.yaml, allows you to define your license and connection details to Kafka and other services; it is dynamically picked up by Lenses.

    Logs

    This page describes configuring Lenses logging.

    All logs are emitted unbuffered as a stream of events to both stdout and to rotating files inside the directory logs/.

    The logback.xml file is used to configure logging.

    If customization is required, it is recommended to adapt the default configuration rather than write your own from scratch.

    The file can be placed in any of the following directories:

• the directory where Lenses is started from

• /etc/lenses/

• the agent installation directory

The first one found, in the above order, is used. To override this and use a custom location, set the following environment variable:

export LENSES_LOG4J_OPTS="-Dlogback.configurationFile=file:/path/to/logback.xml"

    Basic Authentication

    This page describes configuring basic authentication in Lenses.

    With Basic Auth, user accounts are managed by Lenses and a unique username and a password are used to log in.

    Account locking

For the BASIC and LDAP authentication types, there is the option to set a policy to temporarily lock the account when successive login attempts fail. Once the lock time window has passed the user can log in again.


    Groups

This page describes creating and managing groups in Lenses.

    A Group is a collection of permissions that defines the level of access for users belonging to it. Groups consist of:

    • Namespaces

    • Application permissions

    • Administration permissions

    Groups must be pre-created, and the group's names in Lenses must match (case sensitive) those in the SSO provider.

    Admin Account

    This page describes how to configure the default admin account for Lenses.

    When you first log in to Lenses, use the default credentials admin/admin

    The default account is a super user and can be used to create groups and other accounts with appropriate permissions.

The default account username and password may be adjusted as below:

security.conf
# Lenses Administrator settings
lenses.security.user = "admin"

## For the password you can either use the plaintext
#lenses.security.password = "admin"
## Or you may use the SHA256 checksum (advised)
lenses.security.password = "sha256:8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918"

    We strongly recommend that you change the default password. If you don’t, you will be prompted with a dashboard notification.

    For security purposes, it is strongly advised to use your password’s SHA256 checksum instead of the plaintext.

    Onelogin SSO

This page describes configuring Lenses with OneLogin SSO.

    Groups are case-sensitive and mapped to roles, by name, with OneLogin

Integrate your user roles with Lenses using the OneLogin role names. Create a group in Lenses using the same case-sensitive role name as in OneLogin.

    For example, if the Engineers role is available in OneLogin, create a group with the same name.

    Set up OneLogin IdP

    Lenses is available in the OneLogin Application catalog.

    FAQs

Authentication issue instant is too old or in the future

The error that you see is: "Authentication issue instant is too old or in the future".

    Custom Http

This page describes configuring Lenses with a custom HTTP implementation for authentication.

    With custom authentication, you can plug in your own authentication system by using HTTP headers.

    In this approach, your own authentication proxy/code sitting in front of Lenses takes care of the authentication and injects appropriate Headers in all HTTP requests for verified users.

    Setup a custom authentication layer

Set up a custom authentication layer by introducing the plugin class in security.conf:

# Full classpath of customer authentication plugin
lenses.security.plugin=com.mycompany.authentication.plugin.class.path

Lenses connects to the infrastructure similarly to any other application. You can implement a plugin in a few hours in Java/Scala or another JVM technology by implementing one interface:

public interface HttpAuthenticationPlugin {
    UserAndGroups authenticate(HttpRequest request);
}

The returned object UserAndGroups will contain the username and the groups the authenticated person belongs to (or raise an exception if no such user exists).

Example

The best way to get started is to look into a sample open-source implementation of such a plugin on GitHub.

    Kerberos

    This page describes configuring Lenses with Kerberos.

    Deprecated in Lenses 6.0

    Kerberos uses SPNEGO (Simple and Protected GSSAPI Negotiation Mechanism) for authentication.

    Kerberos will automatically log in authorized users when using the /api/auth REST endpoint. If using Microsoft Windows, logging into your Windows domain is usually sufficient to issue your Kerberos credentials.

    On Linux, if you use Kerberos with PAM, your Kerberos credentials should be already available to Kerberos-enabled browsers. Otherwise, you will need to authenticate to the KDC manually using kinit at the command line and start your browser from the same terminal.
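For example, on Linux you would typically obtain a ticket and then launch the browser from the same terminal (the principal and browser are placeholders):

kinit alice@EXAMPLE.COM   # authenticate to the KDC
firefox &                 # start a Kerberos-enabled browser from this terminal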

    Viewing topic metrics

    This page describes how to use Lenses to view metrics for a topic.

    To view a live snapshot of the metrics for a topic, select the metrics tab for the topic.

This will show you metric information over the last 30 days, both topic-level and low-level JMX metrics.

    Viewing topic partitions

    This page describes how to use Lenses to view topic partition metrics and configuration.

    To view topic partitions select the Partition tab. Here you can see a heat map of messages in the topic and their distribution across the partitions.

Is the map evenly distributed? If not, you might have partition skew.

Further information about the partitions and replicas is displayed, for example, whether the replicas are in-sync or not. If the replicas are not in-sync an alert will be raised.

    Secret Providers

    This page describes the available Apache 2.0 Connect Secret Providers from Lenses.

You are not limited to Lenses Secret Providers; you are free to use your own.

AWS Secret Manager

Secure Kafka Connect secrets with AWS Secret Manager.

Azure KeyVault

Secure Kafka Connect secrets with Azure KeyVault.

Environment Variables

Secure Kafka Connect secrets with environment variables.

Hashicorp Vault

Secure Kafka Connect secrets with Hashicorp Vault.

AES256

Secure Kafka Connect secrets with AES256 encryption.


    Creating groups

    To create a new Group, go to Admin->Groups->New Group.

    For every Group, you must set the data namespaces for Kafka or other available connections to data sources.

Groups must be given a name and, optionally, a description.

    Adding namespaces to groups

Namespace permissions define the access to datasets.

    Each group must have a namespace. A namespace is a set of permissions that apply to topics or a set of topics, for example, prod*. This allows you to define virtual multi-tenancy.

    Adding application permissions

    Application permissions define how a user can interact with applications and linked resources associated with those datasets.

    Application permissions cover:

    1. Viewing or resetting Consumer group offsets linked to a group's namespaces

    2. Deploying or viewing connectors linked to a group's namespaces

    3. Deploying or viewing SQL Processors linked to a group's namespaces

    Additionally, application permissions define whether a group can access a specified Connect cluster.

    Adding administration permissions

    Admin permissions refer to activities that are in the global scope of Lenses and affect all the related entities.

2. Optionally get the Zookeeper endpoints: Go to Zookeeper > Configs > Zookeeper Server > Zookeeper Server hosts.

  • In the Lenses bootstrap UI:

    1. Set the Kafka endpoints as bootstrap servers

    2. Set the security protocol, mechanism and Jaas config according to your setup. For information on configuring clients (Lenses) for your HDInsight cluster see here for unauthenticated and here for authenticated.

    TLS without Authentication

    Set the following:

    1. security.protocol to SSL

    2. Set the password for your trust store

    3. Upload your trust store

    TLS with Authentication

Perform the following steps in addition to the above:

    1. Set the password for your key store

    2. Upload your key store

    3. Set your key password
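As a sketch, these steps correspond to the standard Kafka client SSL properties (paths and passwords are placeholders):

security.protocol=SSL
ssl.truststore.location=/path/to/truststore.jks
ssl.truststore.password=[trust store password]
# With authentication, additionally:
ssl.keystore.location=/path/to/keystore.jks
ssl.keystore.password=[key store password]
ssl.key.password=[key password]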





    Inserting & deleting messages

    This page describes how to use Lenses to insert or delete messages in Kafka.

    Inserting a message

To insert a message, select Insert Message from the Action menu. Either enter a message according to the topic schema, or have a message auto-generated for you.

    Deleting messages

    Deleting messages deletes messages based on an offset range. Select Delete Messages from the Action menu.

    Schemas

    This page describes how to manage Schema in a Schema Registry with Lenses.

    For automation use the CLI.

    To delete schemas you need to enable lenses.schema.registry.delete in lenses.conf.
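That is, assuming the usual boolean flag, set in lenses.conf:

lenses.schema.registry.delete=true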

    To connect your Schema Registry see provisioning.

    Creating schemas

    To create a new schema, select New Schema and add your schema.

    Viewing schemas

    To view the schema associated with a topic, select the Schema tab. Here you can view the schema for both the key and the value of the topic.

    Editing Schemas

    To edit a schema select either the key or value schema. The schema editor will be expanded, click Edit to change the schema.

    Searching for schemas or fields

To list schemas go to Workspace->Schema Registry. Lenses will show the current schemas; you can search in schemas for fields and schema names, as well as filter by format and tags.

    Evolving schemas

    To evolve a schema, select the schema and select Edit. In the editor apply your changes. If the changes match the evolution rules the changes will be saved and a new version created.

    Changing compatibility

    To change the compatibility of a schema, select the schema and from the actions menu select Change compatibility.

    Approval requests

    This page describes how to use Lenses approval requests.

    To enable Approval Requests for a group, grant the group Create Topic Request permission. When a user belonging to this group creates a topic it will be sent for approval first.

    Enabling approval requests

    To enable approval requests, create a group with, or add to a group, the Create Topic Request permission to the data namespace.

    Viewing, approving, or rejecting approval requests

    Go to Admin->Audits->Requests, select the request, and click view.

Approve or reject the request. If you approve, the topic will be created.


    Searching for messages

    This page describes how to use Lenses to use the Explore screen to explore, search and debug messages in a topic.

    Examining a message

After selecting a topic you will be shown more details of the topic. The SQL Snapshot engine will return the latest 200 messages for partition 0. Both the key and value of the message are displayed in an expandable tree format.

At the top of each message, the Kafka metadata (partition, timestamp, offset) is displayed. For example, a message may look like:

{ "key": { "MMSI": 219347000 }, "value": { "Type": 1, "Repeat": 0, "MMSI": 219347000, "Speed": 0, "Accuracy": true, "Longitude": "9.747901666666667", "Latitude": 59.006915, "location": "59.006915,9.747902", "Course": 141.1, "Heading": 511, "Second": 14, "RAIM": true, "Radio": 23096, "Status": 0, "Turn": -128, "Maneuver": 0, "Timestamp": "1491318149612948547" } }

Hovering to the right of a message allows you to copy it to the clipboard.

    To download all messages to JSON or CSV see here.

    Flattening the data view

    The SQL Snapshot engine deserializes the data on the backend of Lenses and sends it over the WebSocket to the client. By default, the data is presented in a tree format but it's also possible to flatten the data into a grid view. Select the grid icon.

    Searching by partition

    Use the partition drop-down to change the partition to return messages you are interested in.

    Searching by timestamp

    Use the timestamp picker to search for messages from a timestamp.

    Searching by offset

    Use the offset select to search for messages from an offset.

    Live sample

The SQL Snapshot engine has a live mode. In this mode the engine will return a sample of messages matching the query. To enable this, select the Live Sample button. The data view will now update with live records as they are written to the topic. You can also edit the query if required.

This is sample data, not the full set, to avoid overloading the browser.

    Changing the SQL deserializer format

For the SQL Snapshot engine to return data it needs to understand the format of the data in a topic. If a topic is backed by a Schema Registry it is automatically set to AVRO. For other types, such as JSON or strings, the engine tries to determine the format.

    If you wish to override or correct the format used select either Reset Types or Change Types from the action menu.

    Rate Limiting

    Rate limit the calls Lenses makes to Schema Registries and Connect Clusters.

Lenses carefully monitors the data managed by the configured Schema Registry in order to provide its users with the most up-to-date data.

In most cases this monitoring doesn't cause any issues, but in some cases Lenses may be forced to access the Schema Registry too often.

    If this happens, or if you want to make sure Lenses does not go over a rate limit imposed by the Schema Registry, it is possible to throttle Lenses usage of the Schema Registry's API.

    In order to do so, it is possible to set the following Lenses configuration:

    lenses.conf
    # schema registry
    lenses.schema.registry.client.http.rate.type="sliding" 
    lenses.schema.registry.client.http.rate.maxRequests= 200
    lenses.schema.registry.client.http.rate.window="2 seconds"
    
    # connect clusters
    lenses.connect.client.http.rate.type="sliding"                 
    lenses.connect.client.http.rate.maxRequests=200        
    lenses.connect.client.http.rate.window="2 seconds"  

    Doing so will make sure Lenses does not issue more than maxRequests over any window period.

The exact values provided will depend on things like the resources of the machine hosting the Schema Registry, the number of schemas, and how often new schemas are added, so some trial and error is required. These values should however define a rate smaller than the one allowed by the Schema Registry.

    Create a Data integration API key
    1. From Data integration API keys, select Create Key.

    2. For this guide select Global access

    Creating a Connection

    In the Lenses bootstrap UI, Select:

    1. Security Protocol SASL SSL

    2. SASL Mechanism PLAIN

In the JAAS Configuration update the username and password from the respective fields Key and Secret of the API key created above:

org.apache.kafka.common.security.plain.PlainLoginModule required
username="[Your_API_KEY]"
password="[Your_API_KEY_SECRET]"

    Add application in the Catalog
    1. Go to Applications > Applications

    2. Click Add Application

    3. Search for Lenses

    4. Select by pressing Add

    Set General Settings

    1. App label: Lenses

2. Set the base URL of your Lenses installation e.g. https://lenses-dev.example.com

    3. Click Done

    Download IdP XML file

    Download the Metadata XML file with the Okta IdP details.

    1. Go to Sign On > Settings > SIGN ON METHODS

    2. Click on Identity Provider metadata and download the XML data to a file.

3. You will reference this file's path in the security.conf configuration file:

security.conf
lenses.security.saml.idp.metadata.file="/path/to/OktaIDPMetadata.xml"


    Default configuration

    The default configuration file is set up to hot-reload any changes every 30 seconds.
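In logback.xml this hot-reload corresponds to logback's standard scan attributes (generic logback configuration, not Lenses-specific):

<configuration scan="true" scanPeriod="30 seconds">
    <!-- appenders and loggers -->
</configuration>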

    Log Level

    The default log level is set to INFO (apart from some very verbose classes).

    Log Format

All the log entries are written to the output using the following pattern:

%d{ISO8601} %-5p [%c{2}:%L] [%thread] %m%n

    You can adjust this inside logback.xml to match your organization’s defaults.

    Log Location

Inside logs/ you will find three files: lenses.log, lenses-warn.log and metrics.log. The first contains all logs and is the same as the stdout. The second contains only messages at level WARN and above. The third one contains timing metrics and can be useful for debugging.

    Log Buffering

    The default configuration contains two cyclic buffer appenders: "CYCLIC-INFO” and “CYCLIC-METRICS”. These appenders are required to expose the Lenses logs within the Admin UI.

    Accounts storage

The internal database that stores user/group information is stored on disk, under lenses.storage.directory, or in an external Postgres database.

    If using the embedded H2 database keep this directory intact between updates and upgrades.

    Password rules

To enforce specific password rules the following configurations need to be set:

security.conf
# The regex to check the password against. If it does not meet the requirements, adding a user account or changing the
# password will be rejected.
lenses.security.basic.password.rules.regex = "((?=.*\\d)(?=.*[a-z])(?=.*[A-Z])(?=.*[@#$%]).{6,20})"

# Human readable description for the password rule. This will be returned to the user when the requirements fail
lenses.security.basic.password.rules.desc = "Password needs to contain: one lower case, one upper case, 1 number, one special character, and have a length of 6 to 20 characters"

    Password history

To not allow previous passwords to be reused, use the following configuration:

security.conf
# When a user tries to change their password, they cannot use any of the last
# passwords used in the past. Default value is 1.
lenses.security.basic.password.history.count = 3


To create a SHA256 checksum for your password you can use the command line tools available in your Linux server or macOS:

unset HISTFILE # Disable history for the current terminal
echo -n "password" | sha256sum

    Disabling the Admin Account

    To disable the Lenses Administrator user, set an adequately long random password. You can achieve this by using the snippet below:

dd if=/dev/urandom count=1 bs=1024 | sha256sum

    Visit OneLogin’s Administration console. Select Applications > Applications > Add App

    1. Search and select Lenses

    2. Optionally add a description and click save

    Add Lenses via the Application Catalog

1. In the Configuration section set the base path from the URL of the Lenses installation e.g. lenses-dev.example.com (without the https://)

    2. Click Save

    Download the IdP XML file

    1. Use the More Actions button

    2. Click and download the SAML Metadata

3. You will reference this file's path in the security.conf configuration file:

security.conf
lenses.security.saml.idp.metadata.file="/path/to/OneLoginIDPMetadata.xml"

    What causes it
    1. The user’s session in the SSO provider is too old.

    2. The system clocks of the SSO provider and the Lenses instance are out of sync.

    For security purposes, Lenses prevents authenticating SSO users that have remained logged in SSO for a very long time.

Example: You use Okta SSO and you logged in to Okta a year ago. Okta might allow you to remain logged in throughout that year without having to re-authenticate. Lenses, however, has a limit of 100 days, so it will receive an authenticated user that originally logged in before the 100-day mark.

    How to solve it

    1. Ensure that the SSO and Lenses system clocks are in sync.

    2. If the SSO provider supports very long sessions either:

      1. Log out of the SSO and log back in. This explicitly renews the SSO session.

      2. Increase the Lenses limit to more than 100 days.

Example:

lenses.security.saml.idp.session.lifetime.max = 365days

    Configuration

In order to use Kerberos authentication in Lenses, both a static configuration and a Kerberos Connection are required.

• Static configuration: To set up Kerberos you need a Kerberos principal and a password-less keytab. Add them in security.conf before starting Lenses:

lenses.security.kerberos.service.principal="HTTP/lenses.url[@REALM]"
lenses.security.kerberos.keytab=/path/to/lenses.keytab

• Kerberos Connection: A Kerberos Connection should be defined in order to use a proper krb5.conf
    { "key": { "MMSI": 219347000 }, "value": { "Type": 1, "Repeat": 0, "MMSI": 219347000, "Speed": 0, "Accuracy": true, "Longitude": "9.747901666666667", "Latitude": 59.006915, "location": "59.006915,9.747902", "Course": 141.1, "Heading": 511, "Second": 14, "RAIM": true, "Radio": 23096, "Status": 0, "Turn": -128, "Maneuver": 0, "Timestamp": "1491318149612948547" } }
    org.apache.kafka.common.security.plain.PlainLoginModule required 
    username="[Your_API_KEY]" 
    password="[Your_API_KEY_SECRET]"
    security.conf
    lenses.security.saml.idp.metadata.file="/path/to/OktaIDPMetadata.xml"
    export LENSES_LOG4J_OPTS="-Dlogback.configurationFile=file:/path/to/logback.xml"
    %d{ISO8601} %-5p [%c{2}:%L] [%thread] %m%n
    security.conf
    # When a user tries to change her password, she cannot use any the last # passwords used in the past
    # Default value is 1
    lenses.security.basic.password.history.count = 3
    security.conf
    lenses.security.saml.idp.metadata.file="/path/to/OneLoginIDPMetadata.xml"
    Authentication issue instant is too old or in the future
    lenses.security.saml.idp.session.lifetime.max = 365days
    Open network connectivity

    Edit the AWS MSK security group in the AWS Console and add the IP address of your Lenses installation.

    MSK Security group

    Enable Open Monitoring

    If you want to have Lenses collect JMX metrics you have to enable Open Monitoring on your MSK cluster. Follow the AWS guide here.

    Select your MSK endpoint

    Depending on your MSK cluster, select the endpoint and protocol you want to connect with.

    It is not recommended to use Plaintext for secure environments. For these environments use TLS or IAM.

    Creating a Connection

    In the Lenses bootstrap UI, Select:

    1. Security Protocol and set the protocol you want to use

    2. SASL Mechanism and set the mechanism you want to use.

    Connecting with AWS IAM

    In the Lenses bootstrap UI, Select:

    1. Security Protocol and set it to SASL_SSL

    2. Sasl Mechanism and set it to AWS_MSK_IAM

    3. Add software.amazon.msk.auth.iam.IAMLoginModule required; to the Sasl Jaas Config section

    4. Optionally upload your trust store

5. Set sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler in the Advanced Kafka Properties section.
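Taken together, the IAM steps above amount to the following client properties (a sketch assembled from the steps, not Lenses-specific syntax):

security.protocol=SASL_SSL
sasl.mechanism=AWS_MSK_IAM
sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler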


    Kafka Connect

    This page describes adding a Kafka Connect Cluster to Lenses.

    Lenses integrates with Kafka Connect Clusters to manage connectors.

    For documentation about the available Lenses Apache 2.0 Connectors, see the Stream Reactor documentation.

The name of a Kafka Connect Connection may only contain alphanumeric characters ([A-Za-z0-9]) and dashes (-). Valid examples would be dev, Prod1, SQLCluster, Prod-1, SQL-Team-Awesome.

    Multiple Kafka Connect clusters are supported.

If you are using Kafka Connect < 2.6, set the following to ensure you can see Connectors:

    lenses.features.connectors.topics.via.api.enabled=false

See provisioning for automating connections.

Consider rate limiting if you have a high number of connectors.

    Adding a connection

    To add a connection, go to Admin->Connections->New Connection->Kafka Connect.

    1. Provide a name for the Connect cluster

    2. Add a comma-separated list of the workers in the Connector cluster, including ports

    3. Optionally enable Basic Auth and set the username and password

4. Optionally enable SSL and upload the key-store file

5. Optionally enable the collection of JMX metrics (Simple or Jolokia with SSL and Basic auth support)

    Adding 3rd Party Connector to the Topology

If you have developed your own Connector or are not using a Lenses connector, you can still display the connector instances in the topology. To do this Lenses needs to know the configuration option of the Connector that defines which topic the Connector reads from or writes to. This is set in the connectors.info parameter in the lenses.conf file.

    File Watcher Provisioning

    This page describes how to use the Lenses File Watcher to setup connections to Kafka and other services and have changes applied.

    Connections are defined in the provisioning.yaml file. Lenses will then watch the file and resolve the desired state, applying connections defined in the file.

If a connection is not defined in the file but exists in Lenses, it will be removed. It is very important to keep your provisioning YAML updated to reflect the desired state.

    Enabling File Watcher Provisioning

    File watcher provisioning must be explicitly enabled. Set the following in the lenses.conf file:

    Updates to the file will be loaded and applied if valid without a restart of Lenses.

    Directory layout

    Lenses expects a set of files in the directory, defined by lenses.provisioning.path. The structure of the directory must follow:

    1. files/ directory for storing any certificates, JKS files or other files needed by the connection

    2. provisioning.yaml - This is the main file, holding the definition of the connections

    3. license.json - Your lenses license file

    Managing secrets

    The provisioning.yaml contains secrets. If you are deploying via Helm, the chart will use Kubernetes secrets.

    Additionally, support is provided for referencing environment variables. This allows you to set secrets in your environment and have the value resolved at runtime.

    Referencing files

    Many connections need files, for example, to secure Kafka with SSL you will need a key store and optionally a trust store.

    To reference a file in the provisioning.yaml, for example, given:
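A minimal sketch of such a reference, assuming a Kafka connection with an sslKeystore option (option names vary by connection type; see the Provisioning API reference):

kafka:
  - name: kafka
    version: 1
    configuration:
      sslKeystore:
        file: "my-keystore.jks"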

    a file called my-keystore.jks is expected in the files directory. This file will be used for the key store location.

    API Provisioning

    This page describes how to use the Lenses provisioning API to setup connections to Kafka and other services and have changes applied.

Building on the provisioning.yaml, API provisioning allows for uploading the files directly to Lenses from anywhere with network access and without access to the host where Lenses is installed.

    Uploading supporting files

    Many connections need files, for example, to secure Kafka with SSL you will need a keystore and optionally a trust store.

To reference a file in the provisioning.yaml, set the key of the configuration option to "file" and the value to a reference used in the API request. For example, given:

    To upload the file to be used for the configuration option sslKeystore: add the following to the request:

    API Call

    1. Set the type to application/octet-stream.

    2. The name of the part in the multipart request (supporting files) should match the value of the property pointing to the mounted file in the provisioning.yaml descriptor. This ensures accurate mapping and referencing of files.

    3. Set LENSES_SESSION_TOKEN as the value of the Lenses Service Account token you want to use to automate provisioning.

In this example, the provisioning.yaml is read from provisioning=@"resources/provisioning.yaml".

    The provisioning.yaml contains a reference to "my-keystore-file" which is loaded from @${PATH_TO_KEYSTORE_FILE};type=application/octet-stream
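Putting the pieces together, the request could look like the following curl sketch (the endpoint path is an assumption, not taken from this page):

curl -X PUT "$LENSES_URL/api/v1/provision" \
  -H "Authorization: Bearer $LENSES_SESSION_TOKEN" \
  -F provisioning=@"resources/provisioning.yaml" \
  -F my-keystore-file=@"${PATH_TO_KEYSTORE_FILE};type=application/octet-stream"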

    Managing secrets

    The provisioning.yaml contains secrets. If you are deploying via Helm the chart will use Kubernetes secrets.

    Additionally, support is provided for referencing environment variables. This allows you to set secrets in your environment and have the value resolved at runtime. i.e. inject an environment variable from GitHub secrets for passwords.

    Authentication

    This section describes configuring user authentication in Lenses.

    Authentication is configured in the security configuration file. Lenses Administrator and Basic Auth do not require any configuration.

    Multiple authentication configurations can be used together.

    Authentication settings go in security.conf.

The following authentication methods are available. Users, regardless of the method, need to be mapped to groups.

    Account Locking

    For BASIC and LDAP authentication types, there is the option to set a policy to temporarily lock the account when successive login attempts fail. Once the lock time window has passed the user can log in again.

    These two configuration entries enable the functionality (both of them have to be provided to take effect):

    Group Mapping

    A Group is a collection of permissions that defines the level of access for users belonging to it. Groups consist of:

    • Namespaces

    • Application permissions

    • Administration permissions

    LDAP & Active Directory

    When working with LDAP or Active Directory, user and group management is done in LDAP.

    Lenses provides fine-grained role-based access (RBAC) for your existing groups of users over data and applications. Create a group in Lenses with the same name (case-sensitive) as in LDAP/AD.

    SSO & SAML

    When using an SSO solution such as Azure AD, Google, Okta, OneLogin or an open source like KeyCloak user and group management is done in the Identity Provider.

    Lenses provides fine-grained role-based access (RBAC) for your existing groups of users over data and applications. Create a group in Lenses with the same name (case-sensitive) as in your SSO group.

    Basic Auth

    With Basic Authentication, create groups of users and add users to those groups. Authentication and authorization are fully managed, and users can change their passwords.

    SSO & SAML

    This page describes configuring Lenses with SSO via SAML 2.0 protocol.

    1. Enable TLS (SSL) for Lenses HTTPS.

    2. Create a keystore for SAML.

    3. Choose your identity provider (IdP):

Set the following in the security.conf:

    security.conf
    lenses.security.saml.keystore.location = "/path/to/lenses.p12"
    lenses.security.saml.keystore.password = "my_password"
    lenses.security.saml.key.password = "my_password"
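One way to create such a PKCS12 keystore is with the JDK keytool (the alias and distinguished name are placeholders; this command is an illustration, not taken from this page):

keytool -genkeypair -keyalg RSA -keysize 2048 \
  -alias lenses-saml -keystore /path/to/lenses.p12 -storetype PKCS12 \
  -storepass my_password -keypass my_password \
  -dname "CN=lenses.example.com"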

    Azure SSO

    This pages describes configuring Lenses with Azure SSO.

    Groups are case-sensitive and mapped by UUID with Azure

    Integrate your user-groups with Lenses using the Azure group IDs. Create a group in Lenses using the UUID as the name.

    For example, if the Engineers group has the UUID ae3f363d-f0f1-43e6-8122-afed65147ef8, create a group with the same name.

    Set up Microsoft Azure SSO

    Learn more about

    Add from Azure app-gallery

    1. Go to Enterprise applications > + New Application

    2. Search for Lenses.io in the gallery directory

    3. Choose a name for Lenses e.g. Lenses.io and click Add

    4. Select Set up single sign on > SAML

    1. Download the Federation Metadata XML file with the Azure IdP details. You will reference this file’s path in the Lenses security.conf configuration file.

    Topics

    Allow users to create and manage their own topics and apply topic settings as guard rails.

    For automation use the CLI.

    Creating topics

    To create a topic go to Workspace->Explore->New Topic. Enter the name, partitions and replication factor.

If topic configuration rules apply, you will not be able to create the topic unless the rules have been met.

    Viewing topic details

The Explore screen lists high-level details of the topics.

    Selecting a topic allows you to drill into more details.

    Topics marked for deletion will be highlighted with a D.

    Compacted topics will be highlighted with a C.

    Increasing the number of partitions

    To increase the number of partitions, select the topic, then select Increase Partitions from the actions menu. Increasing the number of partitions does not automatically rebalance the topic.

Overriding topic configurations

    Topics inherit their configurations from the broker defaults. To override a configuration, select the topic, then the Configuration tab. Search for the desired configuration and edit its value.

    Deleting topics

    To delete a topic, click the trash can icon.

    Topics can only be deleted if all clients reading or writing to the topic have been stopped. The topic will be marked for deletion with a D until the clients have stopped.

    Empty or compacted topics

    To quickly find compacted or empty topics use quick filter checkboxes, for example, you can find all empty topics and perform a bulk delete action on them.

    Enable TLS on Lenses

    This page describes how to configure TLS for Lenses.

    TLS settings go in security.conf.

    Global Truststore

To use a non-default global truststore, set the path accordingly with the LENSES_OPTS variable:

    LENSES_OPTS=-Djavax.net.ssl.trustStore=/path/to/truststore

    Custom Truststore

    To use a custom truststore set the following in security.conf. Supported types: jks, pkcs12.

    Mutual TLS

    To enable mutual TLS, set your keystore accordingly.

    Automating Connections

    This page describes automating (provisioning) connections and channels for Lenses at installation and how to apply updates.

    On start-up, Lenses will be in bootstrap mode unless it has an existing Kafka Connection.

    To fully start Lenses you need two key pieces of information to start and perform basic functions:

    1. Kafka Connection

    2. Valid License

    Provisioning

If provisioning is enabled, any changes made in the UI will be overridden.

A dedicated API, called provisioning, is available to handle bootstrapping key connections at installation time. This allows you to fully install and configure key connections such as Kafka, Schema Registry, Kafka Connect, and Zookeeper in one go. You can use either of the following approaches depending on your needs: file watcher provisioning or API provisioning.

    Both approaches use a YAML file to define connections.

    Defining a Connection

Connections are defined in the provisioning.yaml. This file is divided into components, each component representing a type of connection.

    Each component must have:

    1. Name - This is the free name of the connection

    2. Version set to 1

    3. Optional tags

    4. Configuration - This is a list of keys/values and is dependent on the component type.
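A minimal sketch of one component, assuming a Kafka connection (the exact configuration keys for each component type are listed in the Provisioning API reference):

kafka:
  - name: kafka
    version: 1
    tags: ["dev"]
    configuration:
      kafkaBootstrapServers:
        value:
          - PLAINTEXT://my-kafka-host:9092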

For a full list of configuration options for each connection, see the Provisioning API reference.

    Connectors

    This page describes how Lenses integrates with Kafka Connect to create, manage, and monitor connectors via multiple connect clusters.

    For documentation about the available Lenses Apache 2.0 Connectors, see the Stream Reactor documentation.

    For automation use the CLI.

    To connect your Connect Clusters see provisioning.

Lenses connects to Connect Clusters via the Connect APIs. You can deploy connectors outside of Lenses and Lenses will still be able to see and manage them.

You can connect Lenses to one or more Kafka Connect clusters. Once connected, Lenses will list the available Connector plugins that are installed in each Cluster. Additionally, Connectors can automatically be restarted and alert notifications sent.

    Listing Connectors

    To list the currently deployed connectors go to Workspace->Connectors. Lenses will display a list of connectors and their status.

    View Connector details

    Once a connector has been created, selecting the connector allows us to:

    1. View its configuration

    2. Update its configurations (Action)

    3. View individual task configurations

    4. View metrics

    View a Connector as Code

    To view the YAML specification as Code, select the Code tab in the Connector details page.

    Download a Connector as Code

    To download the YAML specification, click the Download button.

    Creating a Connector

    To create a new connector go to Workspace->Connectors->New Connectors.

    Select the Connect Cluster you want to use and Lenses will display the plugins installed in the Connect Cluster.

    Connectors are searchable by:

    1. Type

    2. Author

    After selecting a connector, enter the configuration of the connector instance. Lenses will show the documentation for the currently selected option.

    To deploy and start the connector, click Create.

    Create a Connector as Code

    Creation of a Connector as code can be done via either

    1. Selecting Configure Connector->Configure As Code from the main connector page, or

    2. Selecting a Connect Cluster and Connector, then the Code tab

Both options allow for direct input of a Connector's YAML specification or uploading of an existing file.

    Managing a Connector's lifecycle

    Connectors can be stopped, restarted, and deleted via the Actions button.

    Lenses JVM Options

    This page describes the JVM options for Lenses.

    Lenses runs as a JVM app; you can tune runtime configurations via environment variables.

    Key
    Description

    LENSES_OPTS

    For generic settings, such as the global truststore. Note that the docker image is using this to plug in a prometheus java agent for monitoring Lenses

    LENSES_HEAP_OPTS

JVM heap options. The default settings are -Xmx3g -Xms512m, which set the heap size between 512MB and 3GB. The upper limit is set to 1.2GB on the Box development docker image.

    LENSES_JMX_OPTS

Tune the JMX options for the JVM, e.g. to allow remote access.

    LENSES_LOG4J_OPTS

    Override Lenses logging configuration. Should only be used to set the logback configuration file, using the format -Dlogback.configurationFile=file:/path/to/logback.xml.
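For example, to give Lenses a larger heap than the default (a sketch; the sizes are illustrative):

export LENSES_HEAP_OPTS="-Xmx4g -Xms1g"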

    Alerting

    This section describes how to configure alerting in Lenses.

Alert rules are configurable in Lenses; generated alerts can then be sent to specific channels. Several different integration points are available for channels.

    Infrastructure alerts

These are a set of built-in alerting rules for the core connections: Kafka, Schema Registry, Zookeeper, and Kafka Connect. See infrastructure health.

    Data Produced Alerts

Data produced alerts are user-defined alerts on the amount of data arriving on a topic over time. Users can choose to be notified if the topic receives either:

1. more than the defined amount, or

2. less than the defined amount

    Consumer lag alerts

Consumer rules alert on consumer group lag. Users can define:

    1. a lag

    2. on a topic

    3. for a consumer group

    4. which channels to send an alert to

    Application alerts

    Lenses allows operators to configure alerting on Connectors. Operators can:

    1. Set channels to send alerts to

    2. Enable auto restart of connector tasks. Lenses will restart failed tasks with a grace period.

    The sequence is:

    1. Lenses watches for task failures.

    2. If a task fails, Lenses will restart it.

3. If the restart is successful, Lenses resets the "restart attempts" back to zero.

4. If the restart is not successful, Lenses increments the restart attempts, waits for the grace period, and tries another restart if the task is still in a failed state.

Step 4 is repeated until the maximum number of restart attempts is reached. Lenses will only reset the restart attempts to zero after the tasks have been brought back to a healthy state by manual intervention.

    The number of times Lenses attempts to restart is based on the entry in the alert setting.

The restart attempts can be tracked in the Audits page.

    Viewing alert events

    To view events go to Admin -> Alerts -> Events.

    Schema Registries

    This page describes connecting Lenses to Schema registries

See provisioning for automating connections.

Consider Rate Limiting if you have a high number of schemas.

    Lenses can work with the following schema registry implementations which can be added via the Connections page in Lenses.

    Go to Admin->Connections->New Connections->Schema Registry and follow the guide for your registry provider.

    Kafka Connect

    This page provides examples for defining a connection to Kafka Connect Clusters.

    Simple configuration, with JMX metrics

    The URLs (workers) should always have a scheme defined (http:// or https://).

This example uses an optional AES-256 key. The key decodes values encoded with AES-256, enabling encrypted values to be passed to connectors. It is only needed if your cluster uses the AES-256 Decryption plugin.

    Google SSO

This page describes configuring Lenses with Google SSO.

    Google doesn't expose the groups, or organization unit, of a user to a SAML app. This means we must set up a custom attribute for the Lenses groups that each user belongs to.

    Create a custom attribute for Lenses groups

1. Open the Google Admin console from an administrator account.

    Zookeeper

    This page provides examples for defining a connection to Zookeeper.

    Simple configuration, without metrics

    Simple configuration, with JMX metrics

    CLI Import & Export

This page describes importing and exporting resources from Lenses to YAML via the CLI.

    The CLI allows you to import and export resources to and from files.

Import is done on a per-resource basis, with the directory structure defined by the CLI. A base directory can be provided with the --dir flag.

    Processors, connectors, topics, and schemas have an additional prefix flag to restrict resources to export.

    Directory structure

    The expected directory structure is:

    Identity & Access Management

This page describes how to configure Lenses IAM to secure access to your Kafka cluster.

    IAM (Identity and Access Management) in Lenses is controlled by Groups. Users and service accounts belong to groups. Permissions are assigned to groups and apply to the users and service accounts in those groups.

    Authentication of users is determined by the configured mechanism.

For automation use the CLI.

    Users

    This page describes managing users in Lenses.

Users must be assigned to a group. SSO and LDAP users are mapped to a group matching the group name provided by the IdP.

    Group name matching is case-sensitive.

    Multiple types of users can be supported at the same time.

    Service accounts

This page describes how to create and use Lenses Service Accounts.

    Service accounts require an authentication token to be authenticated and must belong to at least one group for authorization.

    Service accounts are commonly used for automation, for example, when using Lenses CLI or APIs, or any other application or service to interact with Lenses.

    Service account tokens are not recoverable. You can edit, revoke or delete a Service Account, but you can never retrieve the original token.

    Sources

    This page describes the available Apache 2.0 Source Connectors from Lenses. Lenses can also work with any other Kafka Connect Connector.

Lenses supports any Connector implementing the Connect APIs; bring your own or use community connectors.

    Enterprise support is also offered for connectors in the Stream Reactor project, managed and maintained by the Lenses team.

    Monitoring & Alerting

    This section describes the monitoring and alerting features of Lenses.

For automation use the CLI.

    Consumer Groups

This page describes consumer group monitoring.

    Consumer group monitoring is a key part of operating Kafka. Lenses allows operators to view and manage consumer groups.

    The connector and SQL Processor pages allow you to navigate straight to the corresponding consumer groups.

    The Explore screen also shows the active consumer groups on each topic.

    Viewing consumer groups

    To view consumer groups and the max and min lag across the partitions go to Workspace->Monitor->Consumers. You can also see this information for each topic in the Explore screen->Select topic->Partition tab.

    View consumer group details

Select, or search for, a consumer group. You can also search for consumer groups that are not active.

    Viewing alerts for a consumer group

To view alerts for a consumer group, click the view alerts button.

Resetting consumer groups is only possible if the consumer group is not active, i.e. the application consuming from it, such as a Connector or SQL Processor, must be stopped. Enable the show inactive consumers option to find them.

    Resetting consumer group for a specific partition to an offset

    1. Select the consumer group

    2. Select the partition to reset the offsets for

    3. Specify the offset

    Resetting the whole consumer group

To reset a consumer group (all clients in the group), select the consumer group, select Actions, and Change Multiple offsets. This will reset all clients in the consumer group to one of the following:

    1. To the start

    2. To the last offset

    3. To a specific timestamp

    LENSES_PERFORMANCE_OPTS

    JVM performance tuning. The default settings are -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=

    Identity & Access Management

    Configure how users authenticate in Lenses.

    Lenses Database

    Configure the backing store for Lenses.

    TLS

    Configure TLS on Lenses for HTTPS.

    Kafka ACLs

    Configure the Kafka ACLs Lenses needs to operate.

    Processor Modes

    Configure how and where Lenses deploys SQL Processors.

    JVM Options

    Understand how to customize the Lenses JVM settings.

    JMX Metrics

    Configure Lenses to expose JMX metrics.

    Logs

    Understand and customize Lenses logging.

    Plugins

    Add your own plugins to extend Lenses functionality.

    Configuration Reference

    Review Lenses configuration reference.

    Azure SSO

    Configure Azure SSO for Lenses.

    Google SSO

    Configure Google SSO for Lenses.

    Keycloak SSO

    Configure Keycloak SSO for Lenses.

    Okta SSO

    Configure Okta SSO for Lenses.

    Onelogin SSO

    Configure Onelogin SSO for Lenses.


    File Watcher provisioning

    Provisioning with a YAML file, with Lenses watching for changes in the file.

    API Based provisioning

    Using APIs to load the provisioning YAML files.


    Groups

    Learn how to create and manage Lenses Groups.

    Users

    Learn how to create and manage Users in Lenses.

    Adding a user with Basic Authentication

    Select Admin->Users->New User->Basic Auth.

    Mapping SSO users to groups

    By default, users are mapped to the group provided by the SSO provider. If you wish to override the group mapping from your SSO, users can be created directly in Lenses and you can manually map the user to a group.

    Mapping LDAP users to groups

By default, users are mapped to the group provided by the LDAP server. If you wish to override the group mapping, you can manually map the user to a group.

    See LDAP user group mapping.

    Listing Users

    Lenses allows you to view users and:

    1. Authentication type

    2. Groups they belong to

    3. Last login

    Go to Admin -> Users.

    Basic Authentication

    SSO & SAML

    Azure AD

    LDAP

    Custom HTTP

    You need to add the connector information for them to be visible in the Topology.

    AWS S3

    Load data from AWS S3 including restoring topics.

    Azure Data Lake Gen2

    Load data from Azure Data Lake Gen2 including restoring topics.

    Azure Event Hubs

    Load data from Azure Event Hubs into Kafka topics.

    Azure Service Bus

    Load data from Azure Service Bus into Kafka topics.

    Cassandra

    Load data from Cassandra into Kafka topics.

    GCP PubSub

    Load data from GCP PubSub into Kafka topics.

    GCP Storage

    Load data from GCP Storage including restoring topics.

    FTP

    Load data from files on FTP servers into Kafka topics.

    JMS

    Load data from JMS topics and queues into Kafka topics.

    MQTT

    Load data from MQTT into Kafka topics.


    Infrastructure

    Lenses allows for monitoring of your infrastructure, providing visibility into the key services of your Kafka deployments.

    Consumer Lag

    Lenses allows for the monitoring and alerting of consumer lag. Additionally, you can perform corrective actions on consumer groups by resetting offsets.

    Alerting

    Lenses provides a set of pre-configured Alert Rules. When rules are enabled, Lenses monitors the relevant resources and triggers an Alert Event any time the condition is met.

    Integrations

Alert events are sent to alert Channels. Lenses supports the most common alerting integrations.

• Optionally upload a trust store

  • Optionally enable the collection of JMX metrics (Simple or Jolokia with SSL and Basic auth support)


    Admin Account

    Configure the Lenses admin account.

    Azure AD

    Configure Azure AD for Lenses.

    Basic Authentication

    Configure basic authentication for Lenses.

    Custom HTTP

    Configure a custom HTTP endpoint for authentication with Lenses.

    LDAP

    Configure LDAP for Lenses.

    SAML & SSO

    Configure SAML & SSO for Lenses.

    Configure the SAML details

    Identifier (Entity ID)

    Use the base url of the Lenses installation e.g. https://lenses-dev.example.com

    Reply URL

    Use the base url with the callback details e.g. https://lenses-dev.example.com/api/v2/auth/saml/callback?client_name=SAML2Client

    Sign on URL

    Use the base url

    security.conf
    lenses.ssl.truststore.location = "/path/to/truststore.jks"
    lenses.ssl.truststore.password = "changeit"
    SQL Processors

Only updates to the name, cluster name, namespace, and runner are allowed. Changes to the SQL are effectively the creation of a new Processor.

    #export
    lenses-cli export acls --dir my-dir
    lenses-cli export alert-channels --dir my-dir
    lenses-cli export alert-settings --dir my-dir
    lenses-cli export connections --dir my-dir
    lenses-cli export connectors --dir my-dir
    lenses-cli export processors --dir my-dir
    lenses-cli export quota --dir my-dir
    lenses-cli export schemas --dir my-dir
    lenses-cli export topics --dir my-dir
    lenses-cli export policies --dir my-dir
    lenses-cli export groups --dir my-dir
    lenses-cli export serviceaccounts --dir my-dir
    
    #import
    lenses-cli import acls --dir my-dir
    lenses-cli import alert-channels --dir my-dir
    lenses-cli import alert-settings --dir my-dir
    lenses-cli import connections --dir my-dir
    lenses-cli import connectors --dir my-dir
    lenses-cli import processors --dir my-dir
    lenses-cli import quota --dir my-dir
    lenses-cli import schemas --dir my-dir
    lenses-cli import topics --dir my-dir
    lenses-cli import policies --dir my-dir
    lenses-cli import groups --dir my-dir
    lenses-cli import serviceaccounts --dir my-dir
    Basic authentication

    For Basic Authentication, define username and password properties.

    TLS with custom truststore

    A custom truststore is needed when the Kafka Connect workers are served over TLS (encryption-in-transit) and their certificates are not signed by a trusted CA.

    TLS with client authentication

    A custom truststore might be necessary too (see above).

    kafkaConnect:
      - name: my-connect-cluster-name
        version: 1    
        tags: ["tag1"]
        configuration:
          workers:
            value:
              - http://my-kc.worker1:8083
              - http://my-kc.worker2:8083    

2. Click the Users button

3. Select the More dropdown and choose Manage custom attributes

4. Click the Add custom attribute button

5. Fill the form to add a Text, Multi-value field for Lenses Groups, then click Add

Assign Lenses groups attributes to Google users

    1. Open the Google Admin console from an administrator account.

    2. Click the Users button

    3. Select the user to update

    4. Click User information

    5. Click the Lenses Groups attribute

    6. Enter one or more groups and click Save

    Add Google custom SAML app

    Learn more about Google custom SAML apps

    1. Open the Google Admin console from an administrator account.

    2. Click the Apps button

    3. Click the SAML apps button

    4. Select the Add App dropdown and choose Add custom SAML app

    App Details

    1. Enter a descriptive name for the Lenses installation

    2. Upload a Lenses icon

Download IdP XML file

    Configure in security.conf.

    Simple configuration with Zookeeper metrics read via JMX.

With such a configuration, Lenses will use 3 Zookeeper nodes and will try to read their metrics from the following URLs (notice the same port, 9581, is used for all of them, as defined by the metricsPort property):

    • my-zookeeper-host-0:9581

    • my-zookeeper-host-1:9581

    • my-zookeeper-host-2:9581

    zookeeper:
    - name: Zookeeper
      version: 1
      tags: ["tag1"]
      configuration:
        zookeeperUrls:
          value:
            - my-zookeeper-host-0:2181
            - my-zookeeper-host-1:3181
            - my-zookeeper-host-2:4181
        # optional, a suffix to Zookeeper's connection string
        zookeeperChrootPath: 
          value: "/mypath" 
        zookeeperSessionTimeout: 
          value: 10000 # in milliseconds
        zookeeperConnectionTimeout: 
          value: 10000 # in milliseconds
    Creating a service account

To create a new Service Account, navigate to Admin, select Users, then New Service Account.

    Authentication token

You can manually enter the authentication token or auto-generate it. If you choose to auto-generate, you will receive a one-time token for this service account. Follow the instructions and copy and store this token. You can now use it to authenticate via the API and CLI.

    Editing a service account

You can only change the groups and owner of service accounts. Go to the service account and select Edit Info from the Actions menu.

    Revoking a service account

    To change the token, go to the service account and select Revoke Token from the Actions menu.

    Using a service account

    To use the service account you need to prefix the token with its name separated by a colon. You then include that in the corresponding header.

    Example

For a service account named myservice and a token da6bad50-55c8-4ed4-8cad-5ebd54a18e26, the combination looks like this:

myservice:da6bad50-55c8-4ed4-8cad-5ebd54a18e26
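As a sketch, the combined value is then passed in the X-Kafka-Lenses-Token header on API calls (the endpoint path here is illustrative):

curl --header "X-Kafka-Lenses-Token: myservice:da6bad50-55c8-4ed4-8cad-5ebd54a18e26" \
  "https://<your-lenses-url>/api/..."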

    To use the CLI with a service account for CI/CD you need to pass these options:

connectors.info = [
  {
    class.name = "The connector full classpath"
    name = "The name which will be presented in the UI"
    instance = "Details about the instance. Contains the connector configuration field which holds the information. If a database is involved it would be the DB connection details, if it is a file it would be the file path, etc"
    sink = true
    extractor.class = "The full classpath for the implementation knowing how to extract the Kafka topics involved. This is only required for a Source"
    icon = "file.png"
    description = "A description for the connector"
    author = "The connector author"
  }
]
    lenses.conf
# Directory containing the provisioning.yaml files
lenses.provisioning.path=/my/dir
# The interval at which Lenses will check for updates
# to the file, in seconds
lenses.provisioning.interval=10s
➜  ~ tree provisioning-folder
provisioning-folder
├── files
│   └── truststore.jks
├── license.json
└── provisioning.yaml

1 directory, 3 files
➜  ~
    values.yaml
    sslKeystorePassword:
      value: ${ENV_VAR_NAME}
    values.yaml
        configuration:
          protocol:
            value: SASL_SSL
          sslKeystore:
            file: "my-keystore.jks"
    security.conf
    # Number of failed login attempts before an account is locked.
    lenses.security.lockout.user.attempts.max = "5"
    
    # The time in seconds to keep the account locked.
    lenses.security.lockout.user.period.sec = "600"  #10 minutes
    security.conf
    lenses.security.saml.base.url="https://lenses-dev.example.com"
    lenses.security.saml.idp.provider="azure"
    lenses.security.saml.idp.metadata.file="/path/to/AzureIDPMetadata.xml"
    lenses.security.saml.keystore.location="/path/to/keystore.jks"
    lenses.security.saml.keystore.password="my_keystore_password"
    lenses.security.saml.key.password="my_saml_key_password"
    security.conf
    # To secure and encrypt all HTTPS connections to Lenses via TLS termination.
    # Java Keystore location and passwords
    lenses.ssl.client.auth = true
    lenses.ssl.keystore.location = "/path/to/keystore.jks"
    lenses.ssl.keystore.password = "changeit"
    lenses.ssl.key.password      = "changeit"
    
    
    # You can also tweak the TLS version, algorithm and ciphers
    #lenses.ssl.enabled.protocols = "TLSv1.2"
    #lenses.ssl.cipher.suites     = "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WIT
    my-dir
    ├── alert-settings
    │   └── alert-setting.yaml
    ├── apps
    │   ├── connectors
    │   │   ├── connector-1.yaml
    │   │   └── connector-2.yaml
    │   └── sql
    ├── groups
    │   └── groups.yaml
    ├── kafka
    │   ├── quotas
    │   │   └── quotas.yaml
    │   └── topics
    │       ├── topic-1.yaml
    │       └── topic-2.yaml
    ├── policies
    │   └── policies-city.yaml
    ├── service-accounts
    │   └── service-accounts.yaml
    └── schemas
        ├── schema-1.yaml
        └── schema-2.yaml
    kafkaConnect:
      - name: my-connect-cluster-name
        tags: ["tag1"]
        version: 1      
        configuration:
          workers:
            value:
              - http://my-kc.worker1:8083
              - http://my-kc.worker2:8083    
          username: 
            value: my-username
          password: 
            value: my-password
    kafkaConnect:
      - name: my-connect-cluster-name
        tags: ["tag1"]
        version: 1      
        configuration:
          workers:
            value:
              - http://my-kc.worker1:8083
              - http://my-kc.worker2:8083    
          sslTruststore:
            file: /path/to/my/truststore.jks
          sslTruststorePassword: 
            value: myPassword
kafkaConnect:
  - name: my-connect-cluster-name
    tags: ["tag1"]
    version: 1    
    configuration:
      workers:
        value:
          - http://my-kc.worker1:8083
          - http://my-kc.worker2:8083    
      sslKeystore:
        file: /path/to/my/keystore.jks
      sslKeystorePassword: 
        value: myPassword
    security.conf
    lenses.security.saml.base.url="https://lenses-dev.example.com"
    lenses.security.saml.idp.provider="google"
    lenses.security.saml.idp.metadata.file="/path/to/GoogleIDPMetadata.xml"
    lenses.security.saml.keystore.location="/path/to/keystore.jks"
    lenses.security.saml.keystore.password="my_keystore_password"
    lenses.security.saml.key.password="my_saml_key_password"
    zookeeper:    
    - name: Zookeeper
      version: 1
      tags: ["tag1"]
      configuration:
        zookeeperUrls:
          value:
            - my-zookeeper-host-0:2181
            - my-zookeeper-host-1:3181
            - my-zookeeper-host-2:4181
        # optional, a suffix to Zookeeper's connection string
        zookeeperChrootPath: 
          value: "/mypath" 
        zookeeperSessionTimeout: 
          value: 10000 # in milliseconds
        zookeeperConnectionTimeout: 
          value: 10000 # in milliseconds
        # all metrics properties are optional
        metricsPort: 
          value: 9581
        metricsType: 
          value: JMX
        metricsSsl: 
          value: false
    lenses-cli topics \
      --token=<service-account-name>:<service-account-token> \
      --host=<lenses-url-host>
    
    # Real Example
    lenses-cli topics \
      --token=ci:58d86476-bcc6-47e2-a57e-0c6bbd9c88b9 \
      --host=http://<your-lenses-url>:9991

    Authentication

    TLS and basic authentication are supported for connections to Schema Registries.

    JMX Metrics

    Lenses can collect Schema registry metrics via:

    1. JMX

    2. Jolokia

    Supported formats

    • AVRO

    • PROTOBUF

    JSON and XML formats are supported by Lenses but without a backing schema registry.

    To connect your Schema Registry with Lenses, select Schema Registry -> Create Connection.

    Schema deletion

    To enable the deletion of schemas in the UI, set the following in the lenses.conf file.

    IBM Event Streams supports hard deletes only


    AWS MSK Serverless

    This page describes how to connect Lenses to an Amazon MSK Serverless cluster.

    Lenses will not start without a valid Kafka Connection. You can either add the connection via the bootstrap wizard or use provisioning for automated deployments.

    It is recommended to install Lenses on an EC2 instance or with EKS in the same VPC as your MSK Serverless cluster. Lenses can be installed and preconfigured via the AWS Marketplace.

    Edit the relevant Security Group

Enable communications between Lenses & the Amazon MSK Serverless cluster by opening the Amazon MSK Serverless cluster's security group in the AWS Console and adding the IP address of your Lenses installation.

    Configure IAM Policies

To authenticate Lenses and access resources within our MSK Serverless cluster, we'll need to create an IAM policy and apply it to the resource (EC2, EKS cluster, etc.) running the Lenses service. Here is an example IAM policy with sufficient permissions which you can associate with the relevant IAM role:

This MSK Serverless IAM policy is to be used after cluster creation. Update it with the relevant ARN.
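A sketch of such a policy (the action list is indicative rather than exhaustive, and the ARNs are placeholders to replace with your region, account, and cluster):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kafka-cluster:Connect",
        "kafka-cluster:DescribeCluster",
        "kafka-cluster:DescribeTopic",
        "kafka-cluster:CreateTopic",
        "kafka-cluster:ReadData",
        "kafka-cluster:WriteData"
      ],
      "Resource": [
        "arn:aws:kafka:<region>:<account-id>:cluster/<cluster-name>/*",
        "arn:aws:kafka:<region>:<account-id>:topic/<cluster-name>/*"
      ]
    }
  ]
}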

    Select your MSK endpoint

Click your MSK Serverless Cluster in the MSK console and select the View Client Information page to check the bootstrap server endpoint.

    Creating the Connection in Lenses

In the Lenses bootstrap UI, select:

1. For the bootstrap server configuration, use the MSK Serverless endpoint

    2. For the Security Protocol, set it to SASL_SSL

3. Customize the SASL Mechanism and set it to AWS_MSK_IAM

    4. Add

1. During the broker metrics export step, keep it disabled, as AWS MSK Serverless does not export metrics to Lenses. Click Next

    2. Copy your license and add it to Lenses, validate your license, and click Next

    3. Click on Save & Boot Lenses. Lenses will finish the setup on its own

    Additional Configurations

    To enable the creation of SQL Processors that create consumer groups, you need to add the following statement in your IAM policy:

    Update the placeholders in the IAM policy based on the relevant MSK Serverless cluster ARN.
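A sketch of the extra statement (placeholders as before):

{
  "Effect": "Allow",
  "Action": [
    "kafka-cluster:DescribeGroup",
    "kafka-cluster:AlterGroup"
  ],
  "Resource": "arn:aws:kafka:<region>:<account-id>:group/<cluster-name>/*"
}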

    To integrate with the AWS Glue Schema Registry, you also need to add the following statement for the registries and schemas in your IAM policy:

    Update the placeholders in the IAM policy based on the relevant MSK Serverless cluster ARN.

    To integrate with the AWS Glue Schema Registry, you also need to modify the security policy for the registry and schemas, which results in additional functions within it:

    More details about how IAM works with MSK Serverless can be found in the documentation:

    Limitations

    When using Lenses with MSK Serverless:

    • Lenses does not receive Prometheus-compatible metrics from the brokers because they are not exported outside of CloudWatch.

• Lenses does not configure quotas and ACLs because MSK Serverless does not allow this.

    Docker

This page describes installing Lenses with the Docker image.

On start-up, Lenses will be in bootstrap mode unless it has an existing Kafka Connection. See provisioning for automation.

    The Lenses docker image can be configured via environment variables or via volume mounts for the configuration files (lenses.conf, security.conf).

    Running the Docker

Open Lenses in your browser, log in with admin/admin, configure your brokers, and add your license.

    Environment Variables

Environment variables prefixed with LENSES_ are transformed into corresponding configuration options. The environment variable name is converted to lowercase and underscores (_) are replaced with dots (.). For example, to set the option lenses.port, use the environment variable LENSES_PORT.

    Alternatively, the lenses.conf and security.conf can be mounted directly as

    • /mnt/settings/lenses.conf

    • /mnt/secrets/security.conf

    Docker volumes

The Docker image exposes four volumes in total, where cache, logs, plugins, and persistent data are stored:

• /data/storage

• /data/plugins

• /data/logs

• /data/kafka-streams-state

    Storage volume

    Resides under /data/storage and is used to store persistent data, such as Data Policies. For this data to survive between Docker runs and/or Lenses upgrades, the volume must be managed externally (persistent volume).

    Plugins volume

Resides under /data/plugins; this is where classes that extend Lenses may be added, such as custom Serdes, LDAP filters, UDFs for the Lenses SQL table engine, and custom_http implementations.

    Logs volume

Resides under /data/logs; log files are stored here. The application also logs to stdout, so the log files aren't needed in most cases.

    KStreams state volume

    Resides under /data/kafka-streams-state, used when Lenses SQL is in IN_PROC configuration. In such a case, Lenses uses this scratch directory to cache Lenses SQL internal state. Whilst this directory can safely be removed, it can be beneficial to keep it around, so the Processors won’t have to rebuild their state during a restart.

    Lenses TLS and Global JVM Trust Store

By default, Lenses serves connections over plaintext (HTTP). It is possible to use TLS instead. The Docker image offers the ability to provide the content for extra files via secrets mounted as files or as environment variables. Specifically for SSL, the docker image supports SSL/TLS keys and certificates in Java Keystore (JKS) format.

    This capability is optional, and users can mount such files under custom paths and configure lenses.conf manually via environment variables, or lenses.append.conf.

    There are two ways to use the File/Variable names of the table below.

    1. Create a file with the appropriate filename as listed below and mount it under /mnt/settings, /mnt/secrets, or /run/secrets

    2. Set them as environment variables.

All settings, except for passwords, can optionally be encoded in base64. The docker image will detect such encoding automatically.

    File / Variable Name
    Description

Process UID/GID

The docker image does not require running as root. The default user is set to root for convenience and to verify upon start-up that all the directories and files have the correct permissions. The user drops to nobody and group nogroup (65534:65534) before starting Lenses.

    If the image is started without root privileges, the agent will start successfully using the effective uid:gid applied. Ensure any volumes mounted (i.e., for the license, settings, and data) have the correct permission set.

    AWS Marketplace

    This page describes how to install Lenses via the AWS Marketplace.

    The AWS Marketplace offering requires AWS MSK (Managed Apache Kafka) to be available. Optionally, AWS RDS (or any other PostgreSQL-compatible database) can be configured for Lenses to store its state.

    The following AWS resources are created:

    • An EC2 instance that runs Lenses;

    • A SecurityGroup to allow network access to the Lenses UI;

    • A SecurityGroupIngress for Lenses to connect to MSK;

    • A CloudWatch LogGroup where Lenses stores its logs;

    • An IAM Role to allow the EC2 instance to store logs;

    • An IAM InstanceProfile to pass the role to the EC2 instance;

    • Optionally if enabled during deployment: an IAM Policy to allow the EC2 instance to emit CloudWatch metrics.

    Deployment takes approximately three minutes.

    AWS Marketplace Installation

    Select CloudFormation Template, Lenses EC2 and your region.

    Choose Launch CloudFormation.

    Continue with the default options for creating the stack in the AWS wizard.

    Fill in the parameters at Specify stack details.

    • Deployment Here the EC2 instance size and password for the Lenses admin user are set. A t2.large instance size is recommended;

    • Network Configuration This section controls the network settings of the Lenses EC2 instance. The ingress allows access to the Lenses UI only from particular IP addresses;

    • MSK Set the Security Group ID to that of your MSK cluster. A rule will be added to it so that Lenses can communicate with your cluster. You can find the ID by navigating in the AWS console to your MSK cluster and then under Properties -> Networking settings;

    Review the stack.

    Accept the terms and conditions and create the stack.

    Once the stack has deployed, go to the Output tab and click on the FQDN link. If there are no outputs listed you might need to press the refresh button.

    Login to Lenses with admin and the password value you have submitted for the parameter LensesAdminPassword.

    IAM Support

    Lenses supports connection to MSK brokers via IAM. If Lenses is deployed on an EC2 instance it will use the default credential chain loader to authenticate and connect to MSK.

    Supported Regions

    The following Regions are supported:

    • us-east-1;

    • us-east-2;

    • us-west-1;

    Security Recommendations

    Please:

    • Do not use your AWS root user for deployment or operations;

    • Follow the least privileges principle when granting access to individual IAM user accounts;

    • Avoid allowing traffic to the Lenses UI from a broad CIDR block where a more specific block could be used.

    Pricing

    AWS billing applies for the EC2 instance, CloudWatch logs and optionally CloudWatch metrics.

For the hourly billed version additional hourly charges apply, which depend on the instance size. For the Bring Your Own License (BYOL) version you can get a free trial license.

    Troubleshooting

    In case you run into problems, e.g. you cannot connect to Lenses, then the logs could provide more information. The easiest route to do this is to go to CloudWatch in the AWS console. Here, find the log group corresponding to your deployment (it has the same name as the deployment) and pick a log stream. The stream with the /lenses.log suffix contains all log lines regardless of the log level; the stream with the /lenses-warn.log suffix only contains warning-level logs.

If the above fails, for example, because the logs integration is broken, you can SSH into the EC2 instance. Lenses is installed into /opt/lenses, and the logs can be found under /opt/lenses/logs for further inspection.

    Keycloak SSO

This page describes configuring Lenses with Keycloak SSO.

    Integrate your user groups with Lenses using the Keycloak group names. Create a group in Lenses using the same case-sensitive group name as in Keycloak.

For example, if the Engineers group is available in Keycloak, with Lenses assigned to it, create a group in Lenses with the same name.

    Create a new SAML application client in Keycloak

    1. Go to Clients

    2. Click Create

    3. Fill in the details: see the table below.

    4. Click Save

    Setting
    Value

Change the settings on the client you just created to:

    Setting
    Value

    Map user groups

    Configure Keycloak to communicate groups to Lenses. Head to the Mappers section.

    1. Click Create

    2. Fill in the details: see table below.

    3. Click Save

    Setting
    Value

    Download IdP XML file

    Configure in the security.conf file.

    JMX Metrics

    This page describes how to configure JMX metrics for Connections in Lenses.

All core services (Kafka, Schema Registry, Kafka Connect, Zookeeper) use the same set of properties for service monitoring.

    The Agent will discover all the brokers by itself and will try to fetch metrics using metricsPort, metricsCustomUrlMappings and other properties (if specified).

    JMX

    Simple

    The same port used for all brokers/workers/nodes. No SSL, no authentication.

    SSL

    Basic Auth

    Such a configuration means that the Agent will try to connect using JMX with every pair of kafkaBootstrapServers.host:metricsPort, so following the example: my-kafka-host-0:9581.

    Jolokia

    For Jolokia the Agent supports two types of requests: GET (JOLOKIAG) and POST (JOLOKIAP).

For JOLOKIA, each entry value in metricsCustomUrlMappings must contain the protocol.

    Simple

    The same port used for all brokers/workers/nodes. No SSL, no authentication.

    Custom Http Request Timeout

JOLOKIA monitoring works on top of the HTTP protocol. To fetch metrics the Agent has to perform either a GET or a POST request. The HTTP request timeout can be configured using the httpRequestTimeout property (a value in ms). Its default value is 20 seconds.

    Custom Metrics Http Suffix

The default suffix for Jolokia endpoints is /jolokia/. Sometimes the suffix differs, so it can be customized using the metricsHttpSuffix field.

    AWS

AWS has a predefined metrics configuration. The Agent hits the Prometheus endpoint using port 11001 for each broker. The AWS metrics connection can be customized in Lenses using the metricsUsername, metricsPassword, httpRequestTimeout, metricsHttpSuffix, metricsCustomUrlMappings, and metricsSsl properties, though this is rarely needed, as the AWS setup is standardized and unlikely to change. Customization can be achieved only via the API or CLI; the UI does not support it.

    Custom url mapping

    There is also a way to configure custom mapping for each broker (Kafka) / node (Schema Registry, Zookeeper) / worker (Kafka Connect).

    Such a configuration means that the Agent will try to connect using JMX for:

    • my-kafka-host-0:9582 - because of metricsCustomUrlMappings

    • my-kafka-host-1:9581 - because of metricsPort and no entry in metricsCustomUrlMappings

    Plugins

    This page describes how to install plugins in Lenses.

    The following implementations can be specified:

1. Serializers/Deserializers Plug in your serializer and deserializer to enable observability over any data format (e.g., Protobuf or Thrift)

    2. Custom authentication Authenticate users on your proxy and inject permissions HTTP headers.

    3. LDAP lookup Use multiple LDAP servers or your group mapping logic.

    4. SQL UDFs User Defined Functions (UDF) that extend SQL and streaming SQL capabilities.

    Once built, the jar files and any plugin dependencies should be added to Lenses and, in the case of Serializers and UDFs, to the SQL Processors if required.

    Adding plugins

On startup, Lenses loads plugins from the $LENSES_HOME/plugins/ directory and any location set in the environment variable LENSES_PLUGINS_CLASSPATH_OPTS. Lenses watches these locations, so dropping in a new plugin will hot-reload it. For the Lenses docker (and Helm chart) use /data/plugins.

Any first-level directories under the paths mentioned above that are detected on startup will also be monitored for new files. During startup, the list of monitored locations is shown in the logs to help confirm the setup.

Whilst all jar files may be added to the same directory (e.g. /data/plugins), it is suggested to use a directory hierarchy to make management and maintenance easier.

    An example hierarchy for a set of plugins:
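A sketch (the directory and jar names are illustrative):

/data/plugins
├── serde
│   └── my-protobuf-serde.jar
├── udf
│   └── my-udfs.jar
└── ldap
    └── my-ldap-lookup.jar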

    SQL Processors in Kubernetes

There are two ways to add custom plugins (UDFs and Serializers) to the SQL Processors: (1) by making a tar.gz archive available at an HTTP(S) address, or (2) by creating a custom docker image.

    Archive served via HTTP

    With this method, a tar archive, compressed with gzip, can be created that contains all plugin jars and their dependencies. Then this archive should be uploaded to a web server that the SQL Processors containers can access, and its address should be set with the option lenses.kubernetes.processor.extra.jars.url.

    Step by step:

    1. Create a tar.gz file that includes all required jars at its root:

    2. Upload to a web server, ie. https://example.net/myfiles/FILENAME.tar.gz

    3. Set

      For the docker image, set the corresponding environment variable
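Putting the steps together, a sketch (the jar names and URL are illustrative; the environment variable follows the LENSES_ naming convention for lenses.kubernetes.processor.extra.jars.url):

# 1. bundle the plugin jars and their dependencies
tar -czf my-plugins.tar.gz my-udf.jar my-serde.jar their-dependencies.jar

# 2. upload my-plugins.tar.gz to a web server the processors can reach, then:
export LENSES_KUBERNETES_PROCESSOR_EXTRA_JARS_URL="https://example.net/myfiles/my-plugins.tar.gz"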

    Custom Docker image

    The SQL Processors that run inside Kubernetes use the docker image lensesio-extra/sql-processor. It is possible to build a custom image and add all the required jar files under the /plugins directory, then set lenses.kubernetes.processor.image.name and lenses.kubernetes.processor.image.tag options to point to the custom image.

    Step by step:

    1. Create a Docker image using lensesio-extra/sql-processor:VERSION as a base and add all required jar files under /plugins:

    2. Upload the docker image to a registry:

    3. Set

      For the docker image, set the corresponding environment variables
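A sketch of such an image (the registry, image name, and VERSION tag are placeholders):

Dockerfile
FROM lensesio-extra/sql-processor:VERSION
COPY plugins/*.jar /plugins/

# build and push the image, then point Lenses at it
docker build -t my-registry.example.com/sql-processor-custom:VERSION .
docker push my-registry.example.com/sql-processor-custom:VERSION
export LENSES_KUBERNETES_PROCESSOR_IMAGE_NAME="my-registry.example.com/sql-processor-custom"
export LENSES_KUBERNETES_PROCESSOR_IMAGE_TAG="VERSION"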

    Networking with Load Balancers

    Configuring Lenses Websockets to work with Load Balancers.

Lenses uses WebSockets. Your load balancer may block them by default; depending on your load balancer, you may need to explicitly allow WebSocket traffic.

    For example on NGINX:

    If it is exposed via a service type LoadBalancer, ensure the protocol between the load balancer and NGINX is set to TCP. See Kubernetes documentation for more information.

    Lenses can be placed behind a proxy, but you must allow websocket connections.

    These two paths are used for WebSocket connections:

    • /api/ws

    • /api/kafka/ws

    Disable proxy buffering for SSE (Server Sent Events) connections on this path:

    • /api/sse

    TLS termination

    Lenses supports TLS termination out of the box, see Enabling TLS

    Sample Apache configuration

    Sample Caddy configuration

    Sample NGINX configuration
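A minimal sketch (the upstream host and port are placeholders) that permits WebSocket upgrades on the two paths above and disables buffering for SSE:

location /api/ws {
    proxy_pass http://lenses:3030;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 3600s;
}

location /api/kafka/ws {
    proxy_pass http://lenses:3030;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 3600s;
}

location /api/sse {
    proxy_pass http://lenses:3030;
    # disable buffering so events reach the client immediately
    proxy_buffering off;
}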

    Backup & Restore

This page describes how to use Lenses to back up and restore data in a Kafka topic to and from AWS S3.

    To initiate either a topic backup to S3 or topic restoration from S3, follow these steps:

    • Navigate to the Actions menu within the Kafka topic details screen.

    • Choose your desired action: “Backup Topic to S3” or “Restore Topic from S3.”

    • A modal window will open, providing step-by-step guidance to configure your backup or restoration entity.

    A single topic can be backed up or restored to/from multiple locations.

    Identifying if a topic is being backed up

    If a topic is being backed up it will be displayed on the topology.

    Additional information on the location of the backup can be found by navigating to the topic in the Explore screen where the information is available in the Summary section.

    Backing up a topic

    To back up a topic, navigate to the topic you wish to back up and select Backup Topic to S3 from the Actions menu.

Enter the S3 bucket ARN and select the Connect Cluster that has the Lenses S3 connector installed.

Click Backup Topic; an S3 sink connector instance will then be deployed and configured automatically to back up data from the topic to the specified bucket.

    Restoring a topic

    To restore a topic, navigate to the topic you wish to restore and select Restore Topic from S3 from the Actions menu.

Enter the S3 bucket ARN and select the Connect Cluster that has the Lenses S3 connector installed. Click Restore Topic; an S3 source connector instance will then be deployed and configured automatically to restore data to the topic from the specified bucket.

    Infrastructure Health

    Monitoring the health of your infrastructure.

    Lenses provides monitoring of the health of your infrastructure via JMX.

    Additionally, Lenses has a number of built-in alerts for these services.

    Monitoring alerts

    Lenses monitors (by default every 10 seconds) your entire streaming data platform infrastructure and has the following alert rules built-in:

    Rule
    This rule fires when

    Broker decommissioning

    If you change your Kafka cluster size or replace an existing Kafka broker with another, Lenses will raise an active alert as it will detect that a broker of your Kafka cluster is no longer available. If the Kafka broker has been intentionally removed, then decommission it:

    1. Navigate to Services.

    2. Select the broker, click on the actions in the options menu and click on the Decommission option.

    Lenses JMX Metrics

This page describes how to retrieve Lenses JMX metrics.

The JMX endpoint is managed by the lenses.jmx.port option. To disable JMX, leave the option empty.

    To enable monitoring of Lenses metrics:

    To export via Prometheus exporter:

    The Lenses Docker image (lensesio/lenses) automatically sets up the Prometheus endpoint. You only have to expose the 9102 port to access it.
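A sketch of both steps using environment variables (the JMX port and agent path are illustrative; LENSES_JMX_PORT follows the LENSES_ naming convention for lenses.jmx.port, and the exporter shown is the standard jmx_prometheus javaagent):

# enable the Lenses JMX endpoint
export LENSES_JMX_PORT=9586

# attach the Prometheus exporter agent on port 9102
export LENSES_OPTS="-javaagent:/path/to/jmx_prometheus_javaagent.jar=9102:/path/to/exporter-config.yaml"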

    Setting up the JMX Agent with Basic Auth.

This is done in two parts. The first part covers the files the JMX Agent requires; the second covers the options we need to pass to the agent.

    Lenses Metadata Database

This page describes how to configure the storage layer of Lenses.

    Lenses state can be stored:

    • on the local filesystem - (quick start and default option; deprecated, it will be removed in the next major version)

    • in a PostgreSQL database - (recommended) and takes preference when configured

    • in a Microsoft SQL Server database

    Azure AD

    This page describes configuring Lenses with Azure AD via LDAP.

    Azure AD supports the LDAP protocol. You can use it as an authentication provider with users, passwords, and groups stored in Azure AD. When a user is authenticated successfully, Lenses queries Azure AD to get the user’s groups and authorizes the user with the selected permissions.

    Here is a sample Lenses configuration:

    Create Azure AD Domain Instance

    1. In the Azure portal create a resource. Search for Domain service

    ## Enable schema deletion in the Lenses UI
    ## default: false
    lenses.schema.registry.delete = true
    
    ## When a topic is deleted,
    ## automatically delete also its associated Schema Registry subjects
    ## default: false
    lenses.schema.registry.cascade.delete = true
        configuration:
          protocol:
            value: SASL_SSL
          sslKeystore:
            file: "my-keystore-file"
    curl --location --request PUT "${LENSES_ENV}/api/v1/state/connections" \
       --header "X-Kafka-Lenses-Token: ${LENSES_SESSION_TOKEN}" \
       --header 'Content-Type: multipart/form-data' \
       --header 'Content-Disposition: form-data;' \
       --form "my-keystore-file=@${PATH_TO_KEYSTORE_FILE};type=application/octet-stream" \
       --form 'provisioning=@"resources/provisioning.yaml";type=text/plain(utf-8)' 
    sslKeystorePassword:
      value: ${ENV_VAR_NAME}
    annotations:
      kubernetes.io/ingress.class: nginx
      nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
      nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"

    Multiple Broker versions

    The Kafka cluster is under a version upgrade, and not all brokers have been upgraded

    File-open descriptors on Brokers

A Kafka broker has an alarming number of open file descriptors. This fires when the operating system exceeds 90% of the available open file descriptors

    Average % the request handler is idle

    The average fraction of time the request handler threads are idle is dangerously low. The alert is HIGH when the value is smaller than 10%, and CRITICAL when it is smaller than 2%.

    Fetch requests failure

    Fetch requests are failing. If the rate of failures per second is > 10% the alert level is set to CRITICAL, otherwise it is set to HIGH.

    Produce requests failure

    Producer requests are failing. When the value is > 10% the alert level is set to CRITICAL, otherwise it is set to HIGH.

    Broker disk usage

A Kafka broker’s disk usage is greater than the cluster average. The built-in threshold is 1 GByte.

    Leader imbalance

    A Kafka broker has more leader replicas than the average broker in the cluster.

    Lenses License

The Lenses license is invalid

    Kafka broker is down

    A Kafka broker from the cluster is not healthy

    Zookeeper node is down

    A Zookeeper node is not healthy

    Connect Worker is down

    A Kafka Connect worker node is not healthy

    Schema Registry is down

    A Schema Registry instance is not healthy

    Under replicated partitions

    The Kafka cluster has 1 or more under-replicated partitions

    Partitions offline

    The Kafka cluster has 1 or more partitions offline (partitions without an active leader)

    Active Controller

The Kafka cluster has zero, or more than one, active controller

    AWS Glue

    Connect Lenses to your AWS Glue service for schema registry support.

    Confluent

    Connect Lenses to Confluent Schema Registry.

    IBM Event Streams

    Connect Lenses to IBM Event Streams Schema Registry

    Apicurio

    Connect Lenses to Apicurio.

• Monitoring Optionally produce the Lenses logs to CloudWatch;

  • Storage Lenses stores its state in a database locally on the EC2 instance’s disk or in a PostgreSQL database. Local storage is a development/quickstart option and is not suitable for production use. It is advised to use a Postgres database for smoother upgrades.

  • us-west-2;

  • ca-central-1;

  • eu-central-1;

  • eu-west-1;

  • eu-west-2;

  • eu-west-3;

  • ap-southeast-1;

  • ap-southeast-2;

  • ap-south-1;

  • ap-northeast-1;

  • ap-northeast-2;

  • sa-east-1.


    FILECONTENT_JVM_SSL_TRUSTSTORE

    The SSL/TLS trust store to use as the global JVM trust store. Add to LENSES_OPTS the property javax.net.ssl.trustStore

    FILECONTENT_JVM_SSL_TRUSTSTORE_PASSWORD

The trust store password. If set, the startup script will automatically add to LENSES_OPTS the property javax.net.ssl.trustStorePassword (base64 not supported)

    FILECONTENT_LENSES_SSL_KEYSTORE

    The SSL/TLS keystore to use for the TLS listener for Lenses



    Name ID Format

    email

    Root URL

    Use the base.url of the Lenses installation e.g. https://lenses-dev.example.com

    Valid Redirect URIs

    Use the base.url of the Lenses installation e.g. https://lenses-dev.example.com

    Client ID

    Use the base.url of the Lenses installation e.g. https://lenses-dev.example.com

    Client Protocol

    Set it to saml

    Client Saml Endpoint

    This is the Lenses API point for Keycloak to call back. Set it to [BASE_URL]/api/v2/auth/saml/callback?client_name=SAML2Client. e.g. https://lenses-dev.example.com/api/v2/auth/saml/callback?client_name=SAML2Client

    Name

    Lenses

    Description

    (Optional) Add a description to your app.

    SAML Signature Name

    KEY_ID

    Client Signature Required

    OFF

    Force POST Binding

    ON

    Front Channel Logout

    OFF

    Name

    Groups

    Mapper Type

    Group list

    Group attribute name

    groups (case-sensitive)

    Single Group Attribute

    ON

    Full group path

    OFF

Force Name ID Format

ON

    Setting up required files

First, let's create a new folder called jmxremote.

    To enable basic auth JMX, first create two files:

    • jmxremote.access

    • jmxremote.password

    JMX.Password file

The password file has the credentials that the JMX agent will check during client authentication.
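A sketch of jmxremote/jmxremote.password (the usernames and passwords are illustrative):

admin admin
guest admin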

    The above code is registering 2 users.

    • UserA:

      • username admin

      • password admin

    • UserB:

      • username: guest

      • password: admin

    JMX.Access file

    The access file has authorization information, like who is allowed to do what.
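A matching sketch of jmxremote/jmxremote.access:

admin readwrite
guest readonly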

In the above code, we can see that the admin user can do read and write operations in JMX, while the guest user can only read the JMX content.

    Enable JMX with Basic Auth Protection

Now, to enable JMX with basic auth protection, we pass the following options to the JRE that runs the Java process whose JMX we need to protect.

    Let’s assume this java process is Kafka.

    Change the permissions on both files so only owner can edit and view them.
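For example, assuming the files live under /etc/jmxremote and kafka is the user that runs the process:

chown kafka:kafka /etc/jmxremote/jmxremote.password /etc/jmxremote/jmxremote.access
chmod 0600 /etc/jmxremote/jmxremote.password /etc/jmxremote/jmxremote.access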

If you do not change the permissions to 0600 and to the user that will run the JRE process, the JMX Agent will raise an error complaining that the process is not the owner of the files used for authentication and authorization.

    Finally export the following options in the user’s env which will run Kafka.
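A sketch, assuming Kafka picks the options up via KAFKA_JMX_OPTS (the port and file paths are illustrative):

export KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=9581 \
  -Dcom.sun.management.jmxremote.authenticate=true \
  -Dcom.sun.management.jmxremote.password.file=/etc/jmxremote/jmxremote.password \
  -Dcom.sun.management.jmxremote.access.file=/etc/jmxremote/jmxremote.access \
  -Dcom.sun.management.jmxremote.ssl=false"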

    Secure JMX with TLS Encryption

    First setup JMX with basic auth as shown in the Secure JMX: Basic Auth page.

    To enable TLS Encryption/Authentication in JMX you need a jks keystore and truststore.

    Please note that both JKS Truststore and Keystore should have the same password.

The reason for this is that the javax.net.ssl classes will use the password you pass for the keystore as the key password.

Let's assume this java process is Kafka and that you have installed the keystore.jks and truststore.jks under /etc/certs.

    Export the following options in the user’s env which will run Kafka.
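A sketch building on the basic-auth options (paths and the shared password are placeholders; note the same password is used for both stores, as explained above):

export KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=9581 \
  -Dcom.sun.management.jmxremote.authenticate=true \
  -Dcom.sun.management.jmxremote.password.file=/etc/jmxremote/jmxremote.password \
  -Dcom.sun.management.jmxremote.access.file=/etc/jmxremote/jmxremote.access \
  -Dcom.sun.management.jmxremote.ssl=true \
  -Dcom.sun.management.jmxremote.registry.ssl=true \
  -Djavax.net.ssl.keyStore=/etc/certs/keystore.jks \
  -Djavax.net.ssl.keyStorePassword=<password> \
  -Djavax.net.ssl.trustStore=/etc/certs/truststore.jks \
  -Djavax.net.ssl.trustStorePassword=<password>"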

    Start with Postgres if possible to avoid migrations from H2 when moving to production. H2 is not recommended in production environments.

    If any Postgres configuration is defined either in lenses.conf or security.conf, the storage mode will switch to Postgres.

    There is no migration support from H2-to-MSSQL or PostgreSQL-to-MSSQL or MSSQL-to-PostgreSQL

    Databases settings go in security.conf.

    Local storage

By default, Lenses will store its internal state in the storage folder. We advise explicitly setting this location, ensuring the Lenses process has permission to read and write files in this directory, and having an upgrade and backup policy.

    PostgreSQL

Lenses can persist its internal state to a remote PostgreSQL database server.

    Current minimum requirements:

    • Postgres server running version 9.6 or higher

    The recommended configuration is to create a dedicated login role and database for the agent, setting the agent role as the database owner. This will mean the agent will only be able to manage that database and require no superuser privileges.

    Example psql command for initial setup:
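A sketch (the role name, database name, and password are placeholders):

CREATE ROLE lenses WITH LOGIN PASSWORD 'changeme';
CREATE DATABASE lenses OWNER lenses;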

    You can then configure Lenses as so:
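A sketch of the corresponding settings (the values are placeholders; verify the exact key names against the configuration reference):

security.conf
lenses.storage.postgres.host="postgres.example.com"
lenses.storage.postgres.port=5432
lenses.storage.postgres.database="lenses"
lenses.storage.postgres.username="lenses"
lenses.storage.postgres.password="changeme"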

    Additional configuration for the PostgreSQL database connection can be passed under the lenses.storage.postgres.properties configuration prefix. The supported parameters can be found in the PostgreSQL documentation. For example:
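For instance, to enable SSL on the driver connection (ssl is a standard PostgreSQL connection parameter):

security.conf
lenses.storage.postgres.properties.ssl=true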

    Migration of local storage to PostgreSQL

    Enabling PostgreSQL storage for an existing Lenses installation means the data will be automatically migrated to the PostgreSQL schema on the first run.

    After this process has succeeded, a lensesdb.postgresql.migration file will be created in the local storage directory to flag that the migration has already been run. You can then delete the local storage directory and remove the lenses.storage.directory configuration.

    If, for whatever reason, you want to re-run the migration to PostgreSQL, deleting the lensesdb.postgresql.migration file will cause Lenses to re-attempt migration on the next restart. The migration process will fail if it encounters any data that can’t be migrated into PostgreSQL, so re-running the migration should only be done on an empty PostgreSQL schema to avoid duplicate record failures.

    Microsoft SQL Server

Lenses can persist its internal state to a remote Microsoft SQL Server database server.

    Current minimum requirements: MSSQL 2019.

    The recommended configuration is to create a dedicated login role and database for the agent, setting the agent role as the database owner. This will mean the agent will only be able to manage that database and require no superuser privileges.

You can then configure Lenses as follows:

    Additional configuration for the MSSQL database connection can be passed under the lenses.storage.mssql.properties configuration prefix. The full list and information can be found here.

    Connection pooling

Lenses uses the HikariCP library for high-performance database connection pooling.

    The default settings should perform well but can be overridden via the lenses.storage.hikaricp configuration prefix. The supported parameters can be found in the HikariCP documentation.

Camel-case configuration keys are not supported in the agent configuration and should be translated to dot notation.

    For example:

docker run --name lenses \
  -e LENSES_PORT=3030 \
  -e LENSES_SECURITY_USER=admin \
  -e LENSES_SECURITY_PASSWORD=sha256:8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918 \
  -p 3030:3030 \
  -p 9102:9102 \
  lensesio/lenses:latest
    security.conf
    lenses.security.saml.idp.metadata.file="/path/to/KeycloakIDPMetadata.xml"
    kafka:
      tags: []
      templateName: Kafka
      configurationObject:
        kafkaBootstrapServers:
          - PLAINTEXT://my-kafka-host-0:9092
        protocol: PLAINTEXT
        metricsPort: 
          value: 9585
        metricsType: 
          value: JMX
    kafka:
      tags: []
      templateName: Kafka
      configurationObject:
        kafkaBootstrapServers:
          - PLAINTEXT://my-kafka-host-0:9092
        protocol: PLAINTEXT
        metricsPort: 
          value: 9585
        metricsType: 
          value: JMX
        metricsSsl: 
          value: true
    kafka:
      tags: []
      templateName: Kafka
      configurationObject:
        kafkaBootstrapServers:
          - PLAINTEXT://my-kafka-host-0:9092
        protocol: PLAINTEXT
        metricsPort: 
          value: 9581
        metricsType: 
          value: JMX
        metricsSsl: 
          value: false
        metricsUsername: 
          value: user
        metricsPassword: 
          value: pass
    kafka:
      tags: []
      templateName: Kafka
      configurationObject:
        kafkaBootstrapServers:
          - PLAINTEXT://my-kafka-host-0:9092
        protocol: PLAINTEXT
        metricsPort: 
          value: 9585
        metricsType: 
          value: JMX
    metricsSsl: 
      value: false
    metricsHttpSuffix: 
      value: /jolokia/
    httpRequestTimeout: 
      value: 30000
    metricsHttpSuffix: 
      value: /custom/
    kafka:
      tags: ["optional-tag"]
      name: Kafka
      configuration:
        kafkaBootstrapServers:
          value:
           - SASL_SSL://your.kafka.broker.0:9098
           - SASL_SSL://your.kafka.broker.1:9098
        protocol: SASL_SSL
        saslMechanism: 
          value: AWS_MSK_IAM
        saslJaasConfig:
          value: software.amazon.msk.auth.iam.IAMLoginModule required;
        additionalProperties:
          value:
            sasl.client.callback.handler.class: "software.amazon.msk.auth.iam.IAMClientCallbackHandler"
        metricsPort: 
          value: 9581
        metricsType: 
          value: JMX
        metricsSsl: 
          value: false
        metricsCustomUrlMappings:
          value:
            "my-kafka-host-0:9092": my-kafka-host-0:9582
    ...
    Initializing (pre-run) Lenses
    Installation directory autodetected: /opt/lenses
    Current directory: /data
    Logback configuration file autodetected: logback.xml
    These directories will be monitored for new jar files:
     - /opt/lenses/plugins
     - /data/plugins
     - /opt/lenses/serde
    Starting application
    ...
    ├── security
    │   └── sso_header_decoder.jar
    ├── serde
    │   ├── protobuf_actions.jar
    │   └── protobuf_clients.jar
    └── udf
        ├── eu_vat.jar
        ├── reverse_geocode.jar
        └── summer_sale_discount.jar
    tar -czf [FILENAME.tar.gz] -C /path/to/jars/ *
    lenses.kubernetes.processor.extra.jars.url=https://example.net/myfiles/FILENAME.tar.gz
LENSES_KUBERNETES_PROCESSOR_EXTRA_JARS_URL=https://example.net/myfiles/FILENAME.tar.gz
    FROM lensesio-extra/sql-processor:4.2
    ADD jars/* /plugins
    docker build -t example/sql-processor:4.2 .
    docker push example/sql-processor:4.2
    lenses.kubernetes.processor.image.name=example/sql-processor
    lenses.kubernetes.processor.image.tag=4.2
    LENSES_KUBERNETES_PROCESSOR_IMAGE_NAME=example/sql-processor
    LENSES_KUBERNETES_PROCESSOR_IMAGE_TAG=4.2
    # Add these settings to your httpd.conf or under the VirtualHost section
    # for Lenses.
    # The rewrite directives need the rewrite module:
    #   LoadModule rewrite_module modules/mod_rewrite.so
    # The proxy directives need the proxy, proxy_http and proxy_wstunnel modules:
    #   LoadModule proxy_module modules/mod_proxy.so
    #   LoadModule proxy_http_module modules/mod_proxy_http.so
    #   LoadModule proxy_wstunnel_module modules/mod_proxy_wstunnel.so
    
    RewriteEngine On
    RewriteCond %{HTTP:Upgrade} =websocket [NC]
    RewriteRule ^/(.*)$           ws://lenses.url:9991/$1 [P,L]
    RewriteCond %{HTTP:Upgrade} !=websocket [NC]
    RewriteRule ^/(.*)$           http://lenses.url:9991/$1 [P,L]
    
ProxyRequests Off
    ProxyPreserveHost On
    ProxyPass / http://lenses.url:9991/
    ProxyPassReverse / http://lenses.url:9991/
    proxy /api/kafka/ws http://lenses.url:9991 {
        websocket
    }
    proxy /api/ws http://lenses.url:9991 {
        websocket
    }
    proxy / http://lenses.url:9991
    map $http_upgrade $connection_upgrade {
        default upgrade;
        '' close;
    }
    
    server {
        listen 80;
        server_name example.lenses.url;
    
        # websocket paths
        location /api/ws {
            proxy_pass http://lenses.url:9991;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
    
            proxy_redirect off;
            proxy_set_header  X-Real-IP  $remote_addr;
            proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header  Host $http_host;
        }
        location /api/kafka/ws {
            proxy_pass http://lenses.url:9991;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
    
            proxy_redirect off;
            proxy_set_header  X-Real-IP  $remote_addr;
            proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header  Host $http_host;
        }
    
        # SSE paths
        location /api/sse {
            proxy_pass http://lenses.url:9991;
            proxy_http_version 1.1;
    
            proxy_buffering off;
            proxy_redirect off;
            proxy_set_header  X-Real-IP  $remote_addr;
            proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header  Host $http_host;
        }
    
        # all other paths
        location / {
            proxy_pass http://lenses.url:9991;
            proxy_http_version 1.1;
    
            proxy_redirect off;
            proxy_set_header  X-Real-IP  $remote_addr;
            proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header  Host $http_host;
        }
    }
    LENSES_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.local.only=false -Djava.rmi.server.hostname=[HOSTNAME]"
    export LENSES_OPTS="-javaagent:/path/to/jmx_exporter/fastdata_agent.jar=9102:/path/to/jmx_exporter/client.yml"
    mkdir -vp /etc/jmxremote
    cat /etc/jmxremote/jmxremote.password 
    admin admin
    guest admin
cat /etc/jmxremote/jmxremote.access
    admin readwrite
    guest readonly
    chmod -R 0600 /etc/jmxremote
    chown -R <user-that-will-run-kafka-name>:<user-that-will-run-kafka-group> /etc/jmxremote/jmxremote.*
export BROKER_JMX_OPTS="-Dcom.sun.management.jmxremote=true \
  -Dcom.sun.management.jmxremote.authenticate=true \
  -Dcom.sun.management.jmxremote.ssl=false \
  -Dcom.sun.management.jmxremote.local.only=false \
  -Djava.rmi.server.hostname=10.15.3.1 \
  -Dcom.sun.management.jmxremote.rmi.port=9581 \
  -Dcom.sun.management.jmxremote.access.file=/etc/jmxremote/jmxremote.access \
  -Dcom.sun.management.jmxremote.password.file=/etc/jmxremote/jmxremote.password \
  -Dcom.sun.management.jmxremote.port=9581"
export BROKER_JMX_OPTS="-Dcom.sun.management.jmxremote=true \
  -Dcom.sun.management.jmxremote.authenticate=true \
  -Dcom.sun.management.jmxremote.ssl=true \
  -Dcom.sun.management.jmxremote.local.only=false \
  -Djava.rmi.server.hostname=10.15.3.1 \
  -Dcom.sun.management.jmxremote.rmi.port=9581 \
  -Dcom.sun.management.jmxremote.access.file=/etc/jmxremote/jmxremote.access \
  -Dcom.sun.management.jmxremote.password.file=/etc/jmxremote/jmxremote.password \
  -Dcom.sun.management.jmxremote.port=9581 \
  -Djavax.net.ssl.keyStore=/etc/certs/kafka.jks \
  -Djavax.net.ssl.keyStorePassword=somePassword \
  -Djavax.net.ssl.trustStore=/etc/certs/truststore.jks \
  -Djavax.net.ssl.trustStorePassword=somePassword \
  -Dcom.sun.management.jmxremote.registry.ssl=true \
  -Dcom.sun.management.jmxremote.ssl.need.client.auth=true"
    lenses.storage.directory = "/path/to/persistent/data/directory"
    
    # login as superuser and add Lenses role and database
    psql -U postgres -d postgres <<EOF
    CREATE ROLE lenses WITH LOGIN PASSWORD 'changeme';
    CREATE DATABASE lenses OWNER lenses;
    EOF
    lenses.storage.postgres.host="my-postgres-server"
    lenses.storage.postgres.port=5431 # optional, defaults to 5432
    lenses.storage.postgres.username="lenses"
    lenses.storage.postgres.database="lenses"
    lenses.storage.postgres.password="changeme"
    # require SSL encryption with full host verification
    lenses.storage.postgres.properties.ssl=true
    lenses.storage.postgres.properties.sslmode="verify-full"
    lenses.storage.postgres.properties.sslcert="/path/to/certs/lenses.crt.pem"
    lenses.storage.postgres.properties.sslkey="/path/to/certs/lenses.key.pk8"
    lenses.storage.postgres.properties.sslpassword="mypassword"
    lenses.storage.postgres.properties.sslrootcert="/path/to/certs/CA.crt.pem"
    lenses.storage.mssql.database=lenses
    lenses.storage.mssql.host=my-mssql-server
    lenses.storage.mssql.port=1433
    lenses.storage.mssql.password=changeme
    lenses.storage.mssql.username=lenses
    
    # set maximumPoolSize to 25
    lenses.storage.hikaricp.maximum.pool.size=25
  • Add software.amazon.msk.auth.iam.IAMLoginModule required; to the Sasl Jaas Config section.

  • Set sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler in the Advanced Kafka Properties section.

  • MSK Serverless
    MSK Serverless security group
    Configuration screen of Lenses with the selected options for MSK Serverless.
Select Azure AD Domain Services from the options.
  • Set the DNS Domain Name to the same one you have for your existing Azure AD tenant

1. In the Administration tab, you can manage the group membership for the AAD DC Administrators group and control the members with access rights on Azure AD.

    Azure AD Domain Services provides one-way synchronization from Azure Active Directory to the managed domain. Only certain attributes are synchronized to the managed domain, along with groups, group memberships and passwords.

    The Synchronization tab provides two options. The first one is All, where everything will be synchronized to Azure AD DS managed domain. The second one is Scoped, which allows the selection of specific groups to be synced.

    Configure DNS server settings

Once the managed domain is ready to be used, configure the DNS server settings for the Azure Virtual Network. Click the Configure button:

For the DNS changes to be applied, all the VMs must be restarted.

    Azure AD DS needs password hashes in a format that’s suitable for NT LAN Manager (NTLM) and Kerberos authentication. Azure AD does not generate or store password hashes in the format that’s required for NTLM or Kerberos authentication until you enable Azure AD DS for your tenant.

    For security reasons, Azure AD doesn’t store any password credentials in clear-text form. Therefore, Azure AD can’t automatically generate these NTLM or Kerberos password hashes based on users’ existing credentials.

Read the details from Microsoft on how to generate password hashes for your existing users.

    Virtual network peering

The Virtual Network where Lenses is deployed requires Virtual Network Peering to be enabled, so that it can communicate with Azure AD DS. Add the IPs that were generated in the previous step as DNS servers.

Read more details on virtual network peering.

    Enable Secure LDAP

    To enable the LDAP(S) protocol on Azure AD DS, use the following PowerShell to generate the self-signed certificate:

In case PowerShell is not available, you can use the openssl command. The following script generates a certificate for Azure AD DS.

    Under Secure LDAP, upload the PFX certificate and make sure the options Allow secure LDAP and access over the Internet are enabled.

    After the secure LDAP is enabled to allow secure LDAP access, use the Azure AD DS properties to review the external IP address that is used to expose the LDAP service.

Finally, you need to allow inbound traffic on the LDAPS port 636 in the Azure AD DS network security group, and limit access to only the virtual machine or the range of IPs that should have inbound access.

    What’s New?

    The changelog of the current release and patch versions, as well as upgrade notes.


    Changelog

    For versions 4.0 to 5.4, see our legacy documentation.

    5.5.22

    London, UK - November 20th, 2025 Lenses 5.5.22 is now generally available.

    If you are using SCRAM authentication and Kafka Quotas, we urge you to upgrade to this version or a later one. A bug in Lenses 5.5.21 and earlier will cause the SCRAM credentials of a user to be removed when a quota is removed within Lenses.

    Improvements

    • Quotas are now managed via the Kafka Admin API instead of Zookeeper

    • Security updates to underlying libraries

    Fixes

    • Deleting the quota for a user or client with SCRAM credentials will not remove the SCRAM credentials

    5.5.21

    London, UK - October 2nd, 2025 Lenses 5.5.21 is now generally available.

    Improvements

    • Additional logging has been implemented to diagnose scenarios where Lenses fails to retrieve the first and last available offsets for topic-partitions

    5.5.20

    London, UK - September 16th, 2025 Lenses 5.5.20 is now generally available.

    Improvements

    • SQL Processors will automatically adjust the heap size depending on the available memory in the container

    Fixes

• Consumers screen: make filtering case-insensitive and fix uppercase matches not being shown

    • Remove SQL Processors' heap settings since they weren't used

    5.5.19

    London, UK - August 20th, 2025 Lenses 5.5.19 is now generally available.

    Fixes

• Consumers screen: make filtering persist across page hot updates

    5.5.18

    London, UK - July 25th, 2025 Lenses 5.5.18 is now generally available.

    Improvement

    • SQL Studio: Added support for batch mode when retrieving topic-partition bounds (start-end offsets) to improve performance with overloaded Kafka clusters. Configuration:

      Per-query overwrite:

      Default: All bounds are retrieved at once. Per-query settings override global configuration.

    5.5.17

    London, UK - June 30th, 2025 Lenses 5.5.17 is now generally available.

    Fixes

    • Fixed a parsing error in our custom Protocol Buffers component that failed when handling certain optional field cases. This directly resolves interruptions in the background task that inspects topics and schemas.

    • Enhanced the error handling within the background inspection task. The task is now more resilient and can gracefully manage any unexpected error, improving overall robustness and preventing task failures.

    5.5.16

    London, UK - April 17th, 2025 Lenses 5.5.16 is now generally available.

    Fixes

    • Fix an edge case where the connector screen would break (show as blank)

    • Fix SQL to respect the default value for optional enum fields in protobuf that have a null value on the wire

    5.5.15

    London, UK - March 26th, 2025 Lenses 5.5.15 is now generally available.

    Improvements

    • Support JMX metrics for KRaft-enabled Kafka clusters with split-role brokers.

    • Wider name columns for connectors, schemas, and consumers' screens.

    Fixes

    • Bring back the bad records table in SQL Studio.

    5.5.14

    London, UK - December 10th, 2024 Lenses 5.5.14 is now generally available.

    Customers on the 5.5 series are urged to upgrade to this release or later.

    New Features

    • Add support for MSSQL as a backing store for Lenses.

    Improvements

    • LDAP connection management to avoid connection reset.

    • Extra debug logging for when the Schema Registry sends an invalid Content-type header.

    5.5.13

    London, UK - November 25th, 2024 Lenses 5.5.13 is now generally available.

    Customers on the 5.5 series are urged to upgrade to this release or later.

    Fixes

    • A security issue has been addressed.

    5.5.12

    London, UK - November 22nd, 2024 Lenses 5.5.12 is now generally available.

    Improvements

    • The webhook for audits now offers the {{CONTENT}} variable to insert all the details of the audit log entry.

    • Improve Kubernetes watchers and handling of SQL Processor Initialization events to avoid blocking operations.

    5.5.11

London, UK - October 31st, 2024 Lenses 5.5.11 is now generally available.

    Improvements

    • The login audit now tracks both source IdP groups and applied groups.

    5.5.10

    London, UK - October 17th, 2024 Lenses 5.5.10 is now generally available.

    Improvements

    • Login audit now tracks source IdP groups.

    • The Group Details API now includes user and service accounts within each group.

    5.5.9

    London, UK - October 4th, 2024 Lenses 5.5.9 is now generally available.

    Improvements

    • Optimise kubernetes event handling

    • Add extra logging for queue processing and event handling

    5.5.8

London, UK - September 27th, 2024 Lenses 5.5.8 is now generally available.

    Improvements

    • Optimise topic auto-detection audit logging to avoid duplicate entries

    • Optimise logging (adjust UDAF for intellisense polluting the logs, better actor mailbox logging)

    • Improvements to the connector verification logic when Lenses has to mock topics or topics.regex

    5.5.7

    London, UK - August 28th, 2024 Lenses 5.5.7 is now generally available

    Improvements

This version improves the fetching of schemas from Schema Registries. The related subsystem has been re-worked to provide better error handling, fewer requests to the Schema Registry, and support for rate limiting. Find out how to configure rate limiting.

    5.5.6

    London, UK - August 6th, 2024 Lenses 5.5.6 is now generally available.

    Improvements

    • The S3 backup/restore functionality now supports the latest version of the Stream Reactor S3 connector plugin.

    • New users coming from LDAP will not be created unless they have groups coming from LDAP matching Lenses groups. Users can still be created manually by an administrator.

If you upgrade your S3 connector plugin, existing S3 connectors will stop working. Check the documentation to find out how you can update your connector configuration to work with the latest plugin version.

    5.5.5

    London, UK - July 26th, 2024 Lenses 5.5.5 is now generally available.

    Improvements

    • Improve performance of the data catalogue. Lenses should now be many times faster to detect topics and their serialization, and use less memory and CPU time. For teams with Kafka clusters that have thousands of schemas, the startup time will also improve. For teams with tens of thousands of schemas, consumers, and partitions, software stability will also improve.

    • Bring back the restart task button for paused connectors. This undocumented behaviour of Kafka Connect allows users to stop a connector’s consumer group, so they can reset offsets. For Kafka Connect 3.5 or later the new STOP connector API and corresponding button in Lenses can have the same effect.

• Compress schemas before sending them to the Schema Registry. This allows sending larger schemas to the Schema Registry, as the limit is on the size of the request rather than the schema itself.

If you have enabled the setting to keep Lucene's index on disk (option lenses.explore.index.dir), you should disable it and delete the files from disk. You can keep it enabled if you prefer, but you still need to delete the files on disk. Please note that on-disk performance is slower than in-memory. The amount of memory we use is fixed per entry, so the default in-memory configuration is advised.

    5.5.4

    London, UK - July 17th, 2024 Lenses 5.5.4 is now generally available.

    New Features

    • Add STOP operation (button) for Connectors. The STOP operation requires Kafka Connect 3.5 or greater

• Allow skipping schema validation when inserting into JSON topics

    Improvements

    • Connector search is now case-insensitive

• Allow typing to search groups when creating service accounts

    • Show masked passwords when editing a connector (regression in 5.5.3)

    Fixes

    • Filtering connectors by type doesn’t work

• When there were at least two Connect clusters with a connector of the same name in both clusters, filtering connectors returned incorrect or multiple results

    • Validating connectors with passwords may not work (regression in 5.5.3)

    5.5.3

    London, UK - July 1st, 2024 Lenses 5.5.3 is now generally available.

    New Features

    Support for case-insensitive LDAP users

Whilst Lenses users are case-sensitive, LDAP most of the time performs case-insensitive searches on user accounts. This can lead to users who try to log in to Lenses with different casing in their username (e.g., user and USER) getting duplicate accounts.

We added the option lenses.security.ldap.case.sensitive with a default value of true. It can be switched to false, in which case Lenses will treat usernames from LDAP as case-insensitive and always convert them to lowercase.
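For example, a minimal sketch of the corresponding security.conf entry (the option name and default are as described above):

lenses.security.ldap.case.sensitive=false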

    Improvements

    • Upgrade the AWS IAM library to better support service account roles inside EKS

• Upgrade libraries with known CVEs (none of which affected Lenses)

    Fixes

    • Fix Grafana link not showing up on sidebar

    • Fix a case where some sensitive data might leak in the logs

    • Fix filtering by connector name causing the connector screen to crash if a connect cluster is offline

    5.5.2

    London, UK - May 23rd, 2024 Lenses 5.5.2 is now generally available.

    Improvements

    • The connectors’ screen will not mask passwords if they are referencing a secret from a secret provider.

    Fixes

    • Fix regression where connectors’ passwords were not masked.

    5.5.1

    London, UK - April 23rd, 2024 Lenses 5.5.1 is now generally available.

    Improvements

    • Authentication:

      • Enhanced authentication to reject with a 401 status code when the user lacks any attached groups in the IdP (Identity Provider).

      • Improved authentication flow, allowing an authenticated SSO (Single Sign-On) user to log in even if there isn’t a corresponding group in Lenses.

    • Documentation Enhancement:

    Fixes

    • Deployment Issue:

  • Addressed a bug introduced in Lenses GitOps deployment version 5.5.0, resolving provisioning issues experienced in certain deployment scenarios.

    • SSO Authentication Fix:

    5.5 Release

London, UK - 11 April 2024 - Lenses 5.5 is now generally available.

For versions 4.0 to 5.4, see our legacy documentation.

    New Features

    Kafka Connectors as Code

    Lenses now introduces support for managing Kafka connectors as code. With this feature, you can define your connectors in a YAML file and seamlessly deploy them to Lenses. This capability is accessible via both the Lenses CLI and the Lenses UI. This release marks the commencement of our journey towards a more declarative and automated approach to managing Kafka and Lenses resources.

    Consumer Group Management

    In this version, Lenses introduces support for deleting consumer group offsets and entire consumer groups, enhancing flexibility and control over consumer group management.

    Generic SSO Provider

    Lenses provides support for a few SSO providers out of the box like Google, Okta, etc. In this release, Lenses introduces a generic SSO provider, enabling users to integrate with any SSO provider that supports the SAML 2.0 protocol. This feature is configurable via the lenses.conf file under lenses.security.saml.idp.provider.

    Enhancements

    Kafka Message Replay

    The Kafka message replay feature receives an enhancement, now enabling users to replay messages from a specific offset. This functionality is accessible from both the Lenses topic screen and the Lenses SQL studio screen, providing greater precision in message replay operations.

    Consumer Group Offsets Data Link

    Users can now seamlessly navigate from the consumer group offsets screen to the data of the topic that the consumer group offset points to, enhancing visibility and ease of data exploration.

    Audits to log file

    Lenses now provides the capability to log audit events to its log file, enabling users to store audit logs locally for compliance and security purposes. This feature is configurable via the lenses.conf file under lenses.audit.to.log.file.

    Lenses Internal Topics Replication Factor

    To ensure compatibility with cloud providers such as IBM, where a minimum replication factor is mandated, Lenses now allows the configuration of the replication factor for its internal topics. This setting can be configured in the lenses.conf file under lenses.internal.topics.replication.***.

    Bug Fixes

    External Applications via Lenses SDK

    The Lenses SDK, a thin client facilitating the monitoring and tracking of external applications connected to Kafka within Lenses topology, has been enhanced in this release. An issue where the application’s status in Lenses was not updated correctly has been resolved.

    S3 Backup-Restore for JSON Payloads

    In this release, a bug affecting the S3 backup-restore feature for JSON payloads has been rectified. Previously, the feature encountered issues due to the Connect converter enforcing schema on JSON payloads, leading to incorrect functionality. This bug has been addressed to ensure seamless backup and restoration of JSON data via S3.

Upgrade Notes

    Lenses 5.5 is an incremental release which brings in new features and improvements.

    Upgrading from 5.0 or later does not require any static configuration change but if you have automated the creation of any AWS connection, then you will have to adjust the provisioning section of your Helm chart, or your CICD, or —if you use the API directly— your API calls.

If you are upgrading from version 4.3 or older, you need to follow the upgrade procedure for Lenses 5.0 as well as the rest of the instructions that follow.

    Breaking Changes and Caution Items

    Lenses upgrades (except patch releases) are not backwards compatible. It is best practice to take a backup of the Lenses database before an upgrade.

    New provisioning API [caution]

    With Lenses 5.3 the provisioning API was introduced. This new API can be used to create or update the connections landscape. The old provisioning methods could only create the connection landscape (first run).

This means that the Helm chart or a CI/CD process can now be used to manage Lenses’ connections.

For teams on the old provisioning method, some adjustments to their Helm charts or other provisioning code are required to switch to the new API. The old methods are still available but are deprecated and will be removed or break in the future.

    AWS and Glue Connection provisioning [breaking]

    With Lenses 5.4 IAM support was added for the AWS connection type. An AWS connection is used as an authentication provider for the Glue Schema Registry and Cloudwatch channels.

    Due to this change, if you create or manage your AWS and Glue connections via the API or provisioning, you need to update your configuration to the new format.

    Action required

    • Add the new authMode property to your connections for AWS and Glue Schema Registry.

    Details

    • Lenses 5.4 adds a new required property for the AWS and Glue Schema Registry connections.

    • The property is authMode.

    • It controls how Lenses authenticates with AWS:

    You can set authMode in 2 modes:

    1. Access keys mode

    This is the existing mode where Lenses uses AWS access keys.

    • Set the authMode to Access Key.

    • Specify the access key ID and secret access key, as you had before.

    2. Credentials provider chain mode (new)

    This is the new mode where Lenses uses the AWS default credentials provider chain.

    • Set the authMode to Credentials Chain.

    • No additional properties needed.

    Examples - Provision YAML

    1. Access mode

    2. Credentials provider chain mode

    Examples - API JSON

    1. Access mode

    AWS connection

    Glue Schema Registry connection

    2. Credentials provider chain mode

    AWS connection

    Glue Schema Registry connection

    Docker image base change

    Starting with Lenses 5.2 the base image of Lenses and SQL Processor Dockers switched from Debian to Ubuntu. On some older systems, these docker images will fail to run, due to a combination of a recent glibc in the container, and older docker daemon on the host.

    If you fall under this category, during the startup of the Lenses container, you might see errors such as Unable to identify system. Uname is required or [warning][os,thread] Failed to start thread “GC Thread#0”.

    For these cases, we now offer Lenses docker images with the suffix -debian in their tags. E.g:

    • lensesio/lenses:5.5-debian

    • lensesio/lenses:5.5.0-debian

    • lensesio/lenses:latest-debian

    If your host is running on an older operating system and you encounter these errors, try to use the debian equivalent tag.

    Update Process

    Using the Lenses Archive

Download the latest 5.5 archive and extract it in a new directory on your server. It is important to avoid extracting an archive over an older installation, to avoid having multiple versions of libraries. Instead, remove (or rename) the old directory, then move the new one into its place. If needed, copy and update your lenses.conf and security.conf files. If you are using the internal database instead of PostgreSQL, make sure the Lenses storage directory (lenses.storage.directory) is kept intact. This folder is where persistent data is stored, such as users, groups, audits, data policies, connections, and more.

    Make sure you have a JRE (or JDK) installed in the server running Lenses. Lenses can run on JRE 8 or greater, and the recommended version is JRE 11.

    Using the Lenses Docker

The docker image uses tags to distinguish between versions. The latest tag (lensesio/lenses:latest) brings the latest stable version of Lenses. There are minor tags to help users get the latest patch in a minor version (e.g. 5.5, 5.1) and patch tags to help users pin to a specific patch (e.g. 5.5.1, 5.1.2). The best practice is to use the minor tag (lensesio/lenses:5.5), which ensures that your installation will always get compatible updates until you make a conscious decision to upgrade the minor version.

If you use the internal database instead of PostgreSQL as the backing store of Lenses, make sure you keep the /data/storage volume so as not to lose your data. Other volumes supported by the docker image are /data/kafka-streams-state, which holds state for SQL Processors running IN-PROC and may have to be rebuilt (automatically) if lost; /data/log (log files on disk); and /data/plugins (custom UDFs).

    Pull the 5.5 docker:

    Stop your current container and restart with the 5.5 image, mounting any volumes you might need.

    Lenses Box

    If you are a Box user, pull the latest version, preserve your /data volume and restart Lenses:

    Helm

    Download the latest charts and update your values.yaml as described below. Remember that if you are using the internal database instead of PostgreSQL as the backing store, then the Lenses Storage Directory should be stored in a persistent volume and be kept intact between updates. To support a potential downgrade, make sure this volume is backed-up before installing a newer version of Lenses.

    If you have provisioning enabled (lenses.provision.enabled: true) in your values.yaml, and you are on provision version “1” then you have to act. Version “1” means either that lenses.provision.version is set to "1", or it is not set at all. You have two options:

• Disable it, as Lenses already has all the information stored in the database, and version “1” does not support updating the connections and license.

• Switch to provisioning version “2”, which supports updating connections and licenses every time you do a helm upgrade. To do that, you must make some changes to your old provisioning section. Some resources that can come in handy for the switch are:

    If you don’t have your values.yaml you can download it from the Kubernetes cluster using Helm:

    Proceed to upgrade:

    Alternatively, reusing the old values and turning provisioning off:

    Cloud Installations

    Use the latest version available in the marketplaces. Remember that Lenses Storage Directory should be provided as a persistent volume and be kept intact between updates. If a new image does not exist, you may be able to update Lenses in-place. Our support team will be happy to go through the available options with you.


    Linux

This page describes installing Lenses via the Linux archive.

On start-up, Lenses will be in bootstrap mode unless it has an existing Kafka connection. See provisioning for automating this.

    To install Lenses from the archive you must:

    1. Extract the archive

    2. Configure Lenses

    3. Start Lenses

    Extracting the archive

Extract the archive using the following command:
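(a minimal sketch, assuming the archive file name lenses-5.5.0-linux64.tar.gz; substitute the version you downloaded)

tar -xzf lenses-5.5.0-linux64.tar.gz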

Inside the extracted archive, you will find:

    Starting Lenses

    Start Lenses by running:
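(assuming the bin/lenses launcher script shipped in the archive)

bin/lenses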

    or pass the location of the config file:
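bin/lenses /path/to/lenses.conf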

    If you do not pass the location of the config file, Lenses will look for it inside the current (runtime) directory. If it does not exist, it will try its installation directory.

    To stop Lenses, press CTRL+C.

Open Lenses in your browser, log in with admin/admin, then configure your connections and add your license.

    File permissions

    Set the permissions of the security.conf to be readable only by the lenses user.
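For example, assuming the installation lives under /opt/lenses and the service account is named lenses:

chown lenses:lenses /opt/lenses/security.conf
chmod 0600 /opt/lenses/security.conf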

    The agent needs write access in 4-5 places in total:

    1. [RUNTIME DIRECTORY] When Lenses runs, it will create at least one directory under the directory it is run in:

      1. [RUNTIME DIRECTORY]/logs Where logs are stored

      2. [RUNTIME DIRECTORY]/logs/lenses-sql-kstream-state Where SQL processors (when In Process mode) store state. To change the location for the processors’ state directory, use

Back up this location for disaster recovery.

    JNI libraries

    Lenses and Kafka use two common Java libraries that take advantage of JNI and are extracted to /tmp.

    You must either:

    1. Mount /tmp without noexec

    2. or set org.xerial.snappy.tempdir and java.io.tmpdir to a different location
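A minimal sketch of the second option, assuming /opt/lenses/tmp is a directory the Lenses user can write to, on a partition mounted without noexec:

mkdir -p /opt/lenses/tmp
export LENSES_OPTS="-Dorg.xerial.snappy.tempdir=/opt/lenses/tmp -Djava.io.tmpdir=/opt/lenses/tmp"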

    SystemD example

If your server uses systemd as a service manager, you can use it to manage Lenses (start upon system boot, stop, restart). Below is a simple unit file that starts Lenses automatically on system boot.
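A minimal sketch of such a unit file, assuming Lenses is installed under /opt/lenses, runs as the lenses user, and is started via the bin/lenses script (adjust paths and user to your installation):

[Unit]
Description=Lenses
After=network.target

[Service]
Type=simple
User=lenses
Group=lenses
WorkingDirectory=/opt/lenses
ExecStart=/opt/lenses/bin/lenses /opt/lenses/lenses.conf
Restart=on-failure

[Install]
WantedBy=multi-user.target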

    Global Truststore

    Lenses uses the default trust store (cacerts) of the system’s JRE (Java Runtime) installation. The trust store is used to verify remote servers on TLS connections, such as Kafka Brokers with an SSL protocol, Secure LDAP, JMX over TLS, and more. Whilst for some types of connections (e.g. Kafka Brokers) a separate keystore can be provided at the connection’s configuration, for some other connections (e.g. Secure LDAP and JMX over TLS) we always rely on the system trust store.

    It is possible to set up a global custom trust store via the LENSES_OPTS environment variable:
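For example, using the standard JVM trust store properties (the paths and password below are placeholders):

export LENSES_OPTS="-Djavax.net.ssl.trustStore=/path/to/truststore.jks -Djavax.net.ssl.trustStorePassword=changeit"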

    Hardware & OS

Run on any Linux server. For RHEL 6.x and CentOS 6.x, use Docker.

    Linux machines typically have a soft limit of 1024 open file descriptors. Check your current limit with the ulimit command:

As a super-user, increase the soft limit to 4096 with:
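For example:

# check the current soft limit for open file descriptors
ulimit -S -n

# raise the soft limit to 4096 for the current shell
ulimit -S -n 4096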

    Use 6GB RAM/4 CPUs and 500MB disk space.

    Schema Registry

    This page provides examples for defining a connection to Schema Registries.

    Confluent

    Simple configuration, with JMX metrics

    The URLs (nodes) should always have a scheme defined (http:// or https://).

    Basic authentication

    For Basic Authentication, define username and password properties.

    TLS with custom truststore

    A custom truststore is needed when the Schema Registry is served over TLS (encryption-in-transit) and the Registry’s certificate is not signed by a trusted CA.

    TLS with client authentication

    A custom truststore might be necessary too (see above).

    Hard or soft delete

    By default, Lenses will use hard delete for Schema Registry. To use soft delete, add the following property:

    AWS Glue

    Some connections depend on others. One example is the AWS Glue Schema Registry connection, which depends on an AWS connection. These are examples of provision Lenses with an AWS connection named my-aws-connection and an AWS Glue Schema Registry that references it.

    Using AWS Access Key

    Using AWS Credentials Chain

    Required Kafka ACLs

    This page describes the ACLs that need to be configured on your Kafka Cluster if ACLs are enabled, for Lenses to function.

    These ACLs are for the underlying Lenses Kafka client. Lenses has its own set of permissions guarding access.

You can restrict the access of the Lenses Kafka client, but this can reduce the functionality on offer in Lenses, e.g. not allowing Lenses to create topics at all, even though this can be managed by Lenses' own RBAC system.

    When your Kafka cluster is configured with an authorizer which enforces ACLs, Lenses will need a set of permissions to function correctly.

    Common practice is to give Lenses superuser status or the complete list of available operations for all resources. The fine-grained permission model of Lenses can then be used to restrict the access level per user.

    Minimal Permissions

The agent needs permission to manage and access its own internal Kafka topics:

    • __topology

    • __topology__metrics

It also needs read and describe permissions for the consumer offsets and Kafka Connect topics, if enabled:

    • __consumer_offsets

    • connect-configs

• connect-offsets

• connect-status

This same set of permissions is required for any topic to which the agent must have read access.

    DescribeConfigs was added in Kafka 2.0. It may not be needed for versions before 2.2.

Additional permissions are needed to produce to topics or to manage them.

    Consumer Groups

    Permission to at least read and describe consumer groups is required to take advantage of the Consumer Groups' monitoring capabilities.

    Additional permissions are needed to manage groups.

    ACLs

    To manage ACLs, permission to the cluster is required:

    Sinks

    This page describes the available Apache 2.0 Sink Connectors from Lenses. Lenses can also work with any other Kafka Connect Connector.

Lenses supports any connector implementing the Connect APIs; bring your own or use community connectors.

    You need to add the connector information for them to be visible in the Topology.

    Enterprise support is also offered for connectors in the Stream Reactor project, managed and maintained by the Lenses team.

    Update Lenses connections state.

    put

    It will update the connections state and validate the configuration. If the validation fails, the state will not be updated.

    Query parameters
validateOnly · boolean · Optional

    It will only validate the request, not applying any actual change to the system.

    Default: false
validateConnectivity · boolean · Optional

    It will try to connect to the configured service as part of the validation step.

    Default: true
    Body
    Responses
    200

    Successfully updated connection state

    application/json
    400

    Bad request

    application/json
    put
    /api/v1/state/connections

    Retrieve system state

    get
    Responses
    200

    Successful retrieval of system state

    application/json
    get
    /api/v1/state

    Update the license data

    put
    Body
source · string · Optional
clientId · string · Optional
details · string · Optional
key · string · Optional
    Responses
    200

    License successfully updated and current license info returned

    application/json
    400

    Bad request

    application/json
    put
    /api/v1/state/license

    LDAP

    This page describes configuring Lenses with LDAP.

Lenses can be configured to use LDAP to handle user authentication.

    The groups that a user belongs to (authorization) may come either from LDAP (automatic mapping) or via manually mapping an LDAP user to a set of Lenses groups.

All the user’s groups are then matched by name (case-sensitive) with the groups stored in Lenses. The permissions of all matching groups are combined. If a user has been manually assigned a set of Lenses groups, the groups coming from LDAP are ignored.

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "kafka-cluster:Connect",
                    "kafka-cluster:AlterCluster",
                    "kafka-cluster:DescribeCluster"
                ],
                "Resource": "arn:aws:kafka:[region]:[aws_account_id]:cluster/[cluster_name]/[cluster_uuid]/*"
            },
            {
                "Effect": "Allow",
                "Action": [
                    "kafka-cluster:DescribeTopic",
                    "kafka-cluster:CreateTopic",
                    "kafka-cluster:WriteData",
                    "kafka-cluster:ReadData"
                ],
                "Resource": "arn:aws:kafka:[region]:[aws_account_id]:topic/[cluster_name]/[cluster_uuid]/*"
            },
            {
                "Effect": "Allow",
                "Action": [
                    "kafka-cluster:AlterGroup",
                    "kafka-cluster:DescribeGroup"
                ],
                "Resource": "arn:aws:kafka:[region]:[aws_account_id]:group/[cluster_name]/[cluster_uuid]/*"
            }
        ]
    }
    {
      "Action": [
        "kafka-cluster:*Topic*",
        "kafka-cluster:WriteData",
        "kafka-cluster:ReadData"
      ],
      "Resource": "arn:aws:kafka:[region]:[aws_account_id]:cluster/[cluster_name]/[cluster_uuid]/*"
    }
    {
      "Action": [
        "kafka-cluster:*Group*"
      ],
      "Resource": "arn:aws:kafka:[region]:[aws_account_id]:cluster/[cluster_name]/[cluster_uuid]/*"
    }
    {
      "Action": [
        "glue:DeregisterDataPreview",
        "glue:ListRegistries",
        "glue:CreateRegistry",
        "glue:RegisterSchemaVersion",
        "glue:GetRegistry",
        "glue:UpdateRegistry",
        "glue:ListSchemas",
        "glue:DeleteRegistry",
        "glue:GetSchema",
        "glue:CreateSchema",
        "glue:ListSchemaVersions",
        "glue:GetSchemaVersion",
        "glue:UpdateSchema",
        "glue:DeleteSchemaVersions"
      ],
      "Resource": [
        "arn:aws:glue:[region]:[aws_account_id]:registry/*",
        "arn:aws:glue:[region]:[aws_account_id]:schema/*"
      ]
    }
    security.conf
    lenses.security.ldap.url="ldaps://ldaps.lenses.io:636"
    lenses.security.ldap.user="[email protected]"
    lenses.security.ldap.password="<your-svc-password>"
    
    lenses.security.ldap.base="ou=AADDC Users,dc=lenses,dc=io"
    lenses.security.ldap.filter="(&(objectClass=person)(sAMAccountName=<user>))"
    
    lenses.security.ldap.plugin.class="io.lenses.security.ldap.LdapMemberOfUserGroupPlugin"
    lenses.security.ldap.plugin.group.extract.regex="(?i)CN=(\\w+),ou=AADDC Users.*"
    lenses.security.ldap.plugin.memberof.key="memberOf"
    lenses.security.ldap.plugin.person.name.key = "sn"
    # Define your DNS name used by your Azure AD DS managed domain
    $dnsName="contoso.com"
    
    # Get the current date to set a one-year expiration
    $lifetime=Get-Date
    
    # Create a self-signed certificate for use with Azure AD DS
    New-SelfSignedCertificate -Subject *.$dnsName
      `-NotAfter $lifetime.AddDays(365) -KeyUsage DigitalSignature, KeyEncipherment `
      -Type SSLServerAuthentication -DnsName *.$dnsName, $dnsName
    DOMAIN="lenses.io" COUNTRY=US STATE=California ORGANIZATION="Lenses.io Ltd" ./create-certs.sh
    
    # export certificate in PFX
    openssl pkcs12 -export -name "<your-ad-domain>" -out openssl.pfx -inkey tls/my-service.key -in tls/my-service.crt
kafka-acls \
    --bootstrap-server [broker.url:9092] --command-config [client.properties] \
    --add \
    --allow-principal [User:Lenses] \
    --allow-host [lenses.host] \
    --operation All \
    --topic '*' \
    --group '*' \
    --delegation-token '*' \
    --cluster

    AWS S3

    Sink data from Kafka to AWS S3 including backing up topics and offsets.

    Azure CosmosDB

    Sink data from Kafka to Azure CosmosDB.

    Azure Data Lake Gen2

    Sink data from Kafka to Azure Data Lake Gen2 including backing up topics and offsets.

    Azure Event Hubs

    Load data from Azure Event Hubs into Kafka topics.

    Azure Service Bus

    Sink data from Kafka to Azure Service Bus topics and queues.

    Cassandra

    Sink data from Kafka to Cassandra.

    Elasticsearch

    Sink data from Kafka to Elasticsearch.

    GCP PubSub

    Sink data from Kafka to GCP PubSub.

    GCP Storage

    Sink data from Kafka to GCP Storage.

    HTTP Sink

    Sink data from Kafka to a HTTP endpoint.

    InfluxDB

    Sink data from Kafka to InfluxDB.

    JMS

    Sink data from Kafka to JMS.

    MongoDB

    Sink data from Kafka to MongoDB.

    MQTT

    Sink data from Kafka to MQTT.

    Redis

    Sink data from Kafka to Redis.

    Improvements to the Skip Validation option for inserting JSON messages, to allow for less strict (but still valid) schemas for inserted messages.

  • New SQL processor page with direct links to the latest documentation and support resources for user convenience.

  • Corrected SSO authentication behavior. When an SSO user is configured to overwrite the IdP groups, Lenses now correctly refrains from extracting groups from the IdP.
    Access keys (existing feature).
  • Credentials provider chain (new feature).

  • You set the property either with the:

    • Connections API - create, update.

    • Provision YAML.

  • Provisioning API OpenAPI reference
  • Helm chart examples

• Provisioning API introduction
attachedFile · string (binary) · Optional

    Attached file(s) needed for establishing the connection. The name of each file part is used as a reference in the manifest.

    Active Directory (AD) and OpenLDAP (with the memberOf overlay if LDAP group mapping is required) servers are tested and supported in general.

Due to ambiguity in the LDAP standard, it is impossible to support all the configurations in the wild. The most common pain point is LDAP group mapping. If the default class that extracts and maps LDAP groups to Lenses groups does not work, it is possible to implement your own.

    Before setting up an LDAP connection, we advise you to familiarize yourself with LDAP and/or have access to your LDAP and/or Active Directory administrators.

    An LDAP setup example with LDAP group mapping is shown below:

    In the example above you can distinguish three key sections for LDAP:

    • the connection settings,

    • the user search settings,

    • and the group search settings.

    Lenses uses the connection settings to connect to your LDAP server. The provided account should be able to list users under the base path and their groups. The default group plugin only needs access to the memberOf attributes for each user, but your custom implementation may need different permissions.

When a user tries to log in, a query is sent to the LDAP server for all accounts that are under the lenses.security.ldap.base and match the lenses.security.ldap.filter. The result needs to be unique: a distinguished name (DN), the user that will log in to Lenses.

In the example, the application would query the LDAP server for all entities under ou=Users,dc=example,dc=com that satisfy the LDAP filter (&(objectClass=person)(sAMAccountName=<user>)), where <user> is replaced by the username that tries to log in to Lenses. A simpler filter could be cn=<user>, which for user Mark would return the DN cn=Mark,ou=Users,dc=example,dc=com.

    Once the user has been verified, Lenses queries the user groups and maps them to Lenses groups. For every LDAP group that matches a Lenses group, the user is granted the selected permissions.

Depending on the LDAP setup, it may be that only the user themselves, or only the Lenses service user, is able to retrieve the group memberships. This can be controlled by the option lenses.security.ldap.use.service.user.search.

The default value (false) uses the user itself to query for groups. Groups can be created in the admin section of the web interface, or on the command line via the lenses-cli application.

Set lenses.security.ldap.use.service.user.search to true to use the lenses.security.ldap.user account to list a logged-in user's groups, when your LDAP setup restricts users from listing their own groups.
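A minimal sketch of the corresponding security.conf entry:

lenses.security.ldap.use.service.user.search=true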

    Group mapping

    When working with LDAP or Active Directory, user and group management is done in LDAP.

    Lenses provides fine-grained role-based access (RBAC) for your existing groups of users over data and applications.

    Create a group in Lenses with the same name (case-sensitive) as in LDAP/AD.

If mapping LDAP groups to Lenses groups is not desired, manually map LDAP users to Lenses groups using the web interface or the lenses-cli.

    LDAP still provides the authentication, but all LDAP groups for this user are ignored.

    When you create an LDAP user in Lenses, the username will be used in the search expression set in lenses.security.ldap.filter to authenticate them. If no user should be allowed to use the groups coming from LDAP, then this functionality should be disabled.

    Set lenses.security.ldap.plugin.memberof.key or lenses.security.ldap.plugin.group.extract.regex to a bogus entry, rendering it unusable.

    An example would be:
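(a sketch; the attribute name below is an arbitrary placeholder that will never match a real LDAP attribute)

lenses.security.ldap.plugin.memberof.key="disabledMemberOf"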

    Group extract plugin

    The group extract plugin is a class that implements an LDAP query that retrieves a user’s groups and makes any necessary transformation to match the LDAP group to a Lenses group name.

    The default class implementation that comes with Lenses is io.lenses.security.ldap.LdapMemberOfUserGroupPlugin.

If your LDAP server supports the memberOf functionality, where each user's group memberships are added as attributes to their entity, you can use it by setting the lenses.security.ldap.plugin.class option to this class:

    Below you will see a brief example of its setup.

    As an example, the memberOf search may return two attributes for user Mark:
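(the group names below are hypothetical)

memberOf: cn=LensesAdmins,ou=Groups,dc=example,dc=com
memberOf: cn=Developers,ou=Groups,dc=example,dc=com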

    The regular expression (?i)cn=(\w+),ou=Groups.* will return these two regex group matches:
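(continuing the hypothetical attributes above)

LensesAdmins
Developers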

    If any of these groups exist in Lenses, Mark will be granted the permissions of the matching groups.

The lenses.security.ldap.plugin.group.extract.regex should contain exactly one regular expression capturing group.

If you need to match more groups, you should use non-capturing groups (e.g. (?:groupRegex)).

    As an example, the regular expression (?i)cn=((?:Kafka|Apps)Admin),ou=Groups,dc=example,dc=com applied to memberOf attributes:
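(for instance, memberOf attributes reconstructed from the regular expression itself)

memberOf: cn=KafkaAdmin,ou=Groups,dc=example,dc=com
memberOf: cn=AppsAdmin,ou=Groups,dc=example,dc=com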

    will return these two regex group matches:
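KafkaAdmin
AppsAdmin

(the two values admitted by the alternation in the capturing group)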

    Custom LDAP plugin

If your LDAP does not offer the memberOf functionality or uses a complex setup, you can provide your own implementation. Start with the code on GitHub, create a JAR, add it to the plugins/ folder, and set your implementation’s full classpath:
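For example, assuming a hypothetical implementation class com.example.ldap.MyGroupPlugin:

lenses.security.ldap.plugin.class="com.example.ldap.MyGroupPlugin"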

Do not forget to grant the account any permissions it may need for your plugin to work.

    LDAP Configuration Options

    See configuration settings.

    The following configuration entries are specific to the default group plugin. A custom LDAP plugin might require different entries under lenses.security.ldap.plugin:

    lenses.sql.settings.kafka.offset.batch.size=4;
    SET kafka.offset.batch.size=4;
    
    <Your query>
    {
      "authMode": { "value": "Access Key" },
      "accessKeyId": { "value": "yourAccessKeyId" },
      "secretAccessKey": { "value": "yourSecretAccessKey" }
    }
    {
      "authMode": { "value": "Credentials Chain" }
    }
    aws:
      - name: my-aws-connection
        version: 1
        tags: [dev]
        configuration:
          authMode:
            value: Access Key
          accessKeyId:
            value: yourAccessKeyId
          secretAccessKey:
            value: yourSecretAccessKey
    
    glueSchemaRegistry:
      - name: schema-registry
        version: 1
        tags: [dev]
        configuration:
          authMode:
            reference: my-aws-connection
          glueRegistryArn:
            value: arn:aws:glue:region:account:registry/registry-name
          accessKeyId:
            reference: my-aws-connection
          secretAccessKey:
            reference: my-aws-connection
    aws:
      - name: my-aws-connection
        version: 1
        tags: [dev]
        configuration:
          authMode: 
            value: Credentials Chain
    
    glueSchemaRegistry:
      - name: schema-registry
        version: 1
        tags: [dev]
        configuration:
          authMode:
            reference: my-aws-connection
          glueRegistryArn:
            value: arn:aws:glue:region:account:registry/registry-name
    {
     "name": "my-aws-connection",
     "tags": ["dev"],
     "templateName": "AWS",
     "configuration": {
       "authMode": { "value": "Access Key" },
       "accessKeyId": { "value": "yourAccessKeyId" },
       "secretAccessKey": { "value": "yourSecretAccessKey" }
     }
    }
    {
        "name":"schema-registry",
        "tags": ["dev"],
        "templateName":"AWSGlueSchemaRegistry",
        "configuration": {
            "authMode": {"reference":"my-aws-connection"},
            "accessKeyId": {"reference":"my-aws-connection"},
            "secretAccessKey": {"reference":"my-aws-connection"},
            "glueRegistryArn":{"value":"arn:aws:glue:region:account:registry/registry-name"}
        }
    }
    {
      "name": "my-aws-connection",
      "tags": ["dev"],
      "templateName": "AWS",
      "configuration": {
        "authMode": { "value": "Credentials Chain" }
      }
    }
    {
        "name":"schema-registry",
        "tags": ["dev"],
        "templateName":"AWSGlueSchemaRegistry",
        "configuration": {
            "authMode": {"reference":"my-aws-connection"},
            "glueRegistryArn":{"value":"arn:aws:glue:region:account:registry/registry-name"}
        }
    }
    docker pull lensesio/lenses:5.5
    docker pull lensesio/box:5.5
    docker stop [CURRENT BOX NAME or ID]
    docker run -v /path/to/box/data:/data -e EULA="..." -p 3030:3030 lensesio/box:5.5
    helm repo add lensesio https://helm.repo.lenses.io/
    helm repo update
    lenses:
      provision:
        enabled: false
    helm get values --namespace [LENSES_NAMESPACE] \
         --output yaml [LENSES_DEPLOYMENT] > values.yaml
    helm upgrade [LENSES_DEPLOYMENT] lensesio/lenses \
         --namespace [LENSES_NAMESPACE] --values values.yaml
    helm upgrade [LENSES_DEPLOYMENT] lensesio/lenses --namespace [LENSES_NAMESPACE] \
         --reuse-values --set lenses.provision.enabled=false
    confluentSchemaRegistry:
      - name: schema-registry
        tags: ["tag1"]
        version: 1      
        configuration:
          schemaRegistryUrls:
            value:
              - http://my-sr.host1:8081
              - http://my-sr.host2:8081
          ## all metrics properties are optional
          metricsPort: 
            value: 9581
          metricsType: 
            value: JMX
          metricsSsl: 
            value: false
    confluentSchemaRegistry:
    - name: schema-registry
      tags: ["tag1"]
      version: 1    
      configuration:
        schemaRegistryUrls:
          value:
            - http://my-sr.host1:8081
            - http://my-sr.host2:8081
        username: 
          value: my-username
        password: 
          value: my-password
    confluentSchemaRegistry:
      - name: schema-registry
        tags: ["tag1"]
        version: 1      
        configuration:
          schemaRegistryUrls:
            value:
              - http://my-sr.host1:8081
              - http://my-sr.host2:8081
          sslTruststore:
            fileRef:
              filePath: /path/to/my/truststore.jks
          sslTruststorePassword: 
            value: myPassword
    confluentSchemaRegistry:
      - name: schema-registry
        tags: ["tag1"]
        version: 1      
        configuration:
          schemaRegistryUrls:
            value:
              - http://my-sr.host1:8081
              - http://my-sr.host2:8081
          sslKeystore:
            fileRef:
              filePath: /path/to/my/keystore.jks
          sslKeystorePassword: 
            value: myPassword
    confluentSchemaRegistry:
      - name: schema-registry
        tags: ["tag1"]
        version: 1      
        configuration:
          schemaRegistryUrls:
            value:
              - http://my-sr.host1:8081
              - http://my-sr.host2:8081
          hardDelete:
            value: true      
    aws:
      - name: my-aws-connection
        tags: ["tag1"]
        version: 1      
        configuration:
          authMode: 
            value: Access Key
          accessKeyId: 
            value: my-access-key-id
          secretAccessKey: 
            value: my-secret-access-key
          region: 
            value: eu-west-1
          
    glueSchemaRegistry:
      - name: schema-registry
        tags: ["tag1"]
        version: 1      
        configuration:
          authMode:
            reference: my-aws-connection
          accessKeyId:
            reference: my-aws-connection
          secretAccessKey:
            reference: my-aws-connection
          glueRegistryArn:
            value: arn:aws:glue:region:123123123:registry/my-registry
    aws:
      - name: my-aws-connection
        tags: ["tag1"]
        version: 1      
        configuration:
          authMode: 
            value: Credentials Chain
          region: 
            value: eu-west-1
            
    glueSchemaRegistry:
      - name: schema-registry
        tags: ["tag1"]
        version: 1      
        configuration:
          authMode:
            reference: my-aws-connection
          glueRegistryArn:
            value: arn:aws:glue:region:123123123:registry/my-registry
    kafka-acls \
        --bootstrap-server [broker.url:9092] --command-config [client.properties] \
        --add \
        --allow-principal [User:Lenses] \
        --allow-host [lenses.host] \
        --operation All \
        --topic [topic]
    kafka-acls \
        --bootstrap-server [broker.url:9092] --command-config [client.properties] \
        --add \
        --allow-principal [User:Lenses] \
        --allow-host [lenses.host] \
        --operation Describe \
        --operation DescribeConfigs \
        --operation Read \
        --topic [topic]
    kafka-acls \
        --bootstrap-server [broker.url:9092] --command-config [client.properties] \
        --add \
        --allow-principal [User:Lenses] \
        --allow-host [lenses.host] \
        --operation Describe \
        --operation DescribeConfigs \
        --operation Read \
        --topic *
    kafka-acls \
        --bootstrap-server [broker.url:9092] --command-config [client.properties] \
        --add \
        --allow-principal [User:Lenses] \
        --allow-host [lenses.host] \
        --operation Describe \
        --operation Read \
        --group *
    kafka-acls \
        --bootstrap-server [broker.url:9092] --command-config [client.properties] \
        --add \
        --allow-principal [User:Lenses] \
        --allow-host [lenses.host] \
        --operation Describe \
        --operation DescribeConfigs \
        --operation Alter \
        --cluster
    {
      "updated": [
        "text"
      ],
      "created": [
        "text"
      ],
      "deleted": [
        "text"
      ]
    }
    GET /api/v1/state HTTP/1.1
    Host: 
    Accept: */*
    
    {
      "license": {
        "maxBrokers": 1,
        "expiry": 1,
        "clientId": "text",
        "isRespected": true,
        "status": "Valid",
        "message": "text"
      },
      "connections": {
        "kafka": [
          {
            "name": "text",
            "version": 1,
            "tags": [
              "text"
            ]
          }
        ],
        "confluentSchemaRegistry": [
          {
            "name": "text",
            "version": 1,
            "tags": [
              "text"
            ]
          }
        ],
        "elasticSearch": [
          {
            "name": "text",
            "version": 1,
            "tags": [
              "text"
            ]
          }
        ],
        "pagerDuty": [
          {
            "name": "text",
            "version": 1,
            "tags": [
              "text"
            ]
          }
        ],
        "datadog": [
          {
            "name": "text",
            "version": 1,
            "tags": [
              "text"
            ]
          }
        ],
        "slack": [
          {
            "name": "text",
            "version": 1,
            "tags": [
              "text"
            ]
          }
        ],
        "alertManager": [
          {
            "name": "text",
            "version": 1,
            "tags": [
              "text"
            ]
          }
        ],
        "webhook": [
          {
            "name": "text",
            "version": 1,
            "tags": [
              "text"
            ]
          }
        ],
        "aws": [
          {
            "name": "text",
            "version": 1,
            "tags": [
              "text"
            ]
          }
        ],
        "connect": [
          {
            "name": "text",
            "version": 1,
            "tags": [
              "text"
            ]
          }
        ],
        "awsGlueSchemaRegistry": [
          {
            "name": "text",
            "version": 1,
            "tags": [
              "text"
            ]
          }
        ],
        "zookeeper": [
          {
            "name": "text",
            "version": 1,
            "tags": [
              "text"
            ]
          }
        ],
        "postgres": [
          {
            "name": "text",
            "version": 1,
            "tags": [
              "text"
            ]
          }
        ],
        "splunk": [
          {
            "name": "text",
            "version": 1,
            "tags": [
              "text"
            ]
          }
        ],
        "kerberos": [
          {
            "name": "text",
            "version": 1,
            "tags": [
              "text"
            ]
          }
        ]
      }
    }
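
    As a sketch, the same request can be issued with curl; the host and the authentication header below are placeholders/assumptions - use whatever your Lenses authentication setup requires:

    # Sketch only - host and auth header are assumptions
    curl -H "Authorization: Bearer $LENSES_TOKEN" \
         -H "Accept: application/json" \
         https://your-lenses-host/api/v1/state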
    PUT /api/v1/state/license HTTP/1.1
    Host: 
    Content-Type: application/json
    Accept: */*
    Content-Length: 65
    
    {
      "source": "text",
      "clientId": "text",
      "details": "text",
      "key": "text"
    }
    {
      "maxBrokers": 1,
      "expiry": 1,
      "clientId": "text",
      "isRespected": true,
      "status": "Valid",
      "message": "text",
      "settings": {
        "security": {
          "root": {
            "enabled": true
          },
          "basic": {
            "enabled": true
          },
          "ldap": {
            "enabled": true
          },
          "kerberos": {
            "enabled": true
          },
          "custom": {
            "enabled": true
          },
          "sso": {
            "enabled": true
          },
          "serviceAccount": {
            "enabled": true,
            "restriction": {
              "name": "None"
            }
          }
        },
        "sql": {
          "streaming": {
            "enabled": true,
            "restriction": {
              "name": "None"
            }
          },
          "sql": {
            "enabled": true
          }
        },
        "kafkaSettings": {
          "acls": true,
          "quotas": true,
          "consumerOffsetManagement": true
        },
        "audit": {
          "enabled": true,
          "integration": true
        },
        "connections": {
          "enabled": true
        },
        "application": {
          "topology": true,
          "connectorsOnKubernetes": true
        },
        "approval": {
          "enabled": true
        },
        "alerts": {
          "enabled": true,
          "rules": {
            "name": "None"
          },
          "integration": {
            "enabled": true,
            "channels": [
              "text"
            ],
            "max": {
              "name": "None"
            }
          }
        },
        "data": {
          "masking": true,
          "customSerde": true,
          "sla": true,
          "namespace": {
            "enabled": true,
            "max": {
              "name": "None"
            }
          }
        },
        "backup": {
          "enabled": true
        }
      },
      "currentTime": 1
    }
    PUT /api/v1/state/connections HTTP/1.1
    Host: 
    Content-Type: multipart/form-data
    Accept: */*
    Content-Length: 4971
    
    {
      "provisioning": {
        "kafka": [
          {
            "name": "kafka",
            "version": 1,
            "tags": [
              "text"
            ],
            "configuration": {
              "protocol": {
                "value": "PLAINTEXT"
              },
              "sslKeystore": {
                "file": "text"
              },
              "sslKeystorePassword": {
                "value": "text"
              },
              "sslKeyPassword": {
                "value": "text"
              },
              "sslTruststorePassword": {
                "value": "text"
              },
              "sslTruststore": {
                "file": "text"
              },
              "saslJaasConfig": {
                "value": "text"
              },
              "keytab": {
                "file": "text"
              },
              "kafkaBootstrapServers": {
                "value": [
                  "text"
                ]
              },
              "saslMechanism": {
                "value": "text"
              },
              "metricsPort": {
                "value": 1
              },
              "metricsUsername": {
                "value": "text"
              },
              "metricsPassword": {
                "value": "text"
              },
              "metricsSsl": {
                "value": true
              },
              "metricsHttpSuffix": {
                "value": "text"
              },
              "metricsHttpTimeout": {
                "value": 1
              },
              "metricsType": {
                "value": "AWS"
              },
              "additionalProperties": {
                "value": {}
              },
              "metricsCustomUrlMappings": {
                "value": {}
              },
              "metricsCustomPortMappings": {
                "value": {}
              }
            }
          }
        ],
        "confluentSchemaRegistry": [
          {
            "name": "schema-registry",
            "version": 1,
            "tags": [
              "text"
            ],
            "configuration": {
              "sslKeystore": {
                "file": "text"
              },
              "sslKeystorePassword": {
                "value": "password"
              },
              "sslKeyPassword": {
                "value": "password"
              },
              "sslTruststorePassword": {
                "value": "password"
              },
              "sslTruststore": {
                "file": "text"
              },
              "schemaRegistryUrls": {
                "value": [
                  "text"
                ]
              },
              "basicAuthCredentialsSource": {
                "value": "password"
              },
              "basicAuthUserInfo": {
                "value": "password"
              },
              "metricsType": {
                "value": "JMX"
              },
              "metricsSsl": {
                "value": true
              },
              "metricsUsername": {
                "value": "text"
              },
              "metricsPassword": {
                "value": "password"
              },
              "metricsPort": {
                "value": 1
              },
              "additionalProperties": {
                "value": {}
              },
              "metricsCustomUrlMappings": {
                "value": {}
              },
              "metricsCustomPortMappings": {
                "value": {}
              },
              "metricsHttpSuffix": {
                "value": "text"
              },
              "metricsHttpTimeout": {
                "value": 1
              },
              "username": {
                "value": "text"
              },
              "password": {
                "value": "password"
              },
              "hardDelete": {
                "value": true
              }
            }
          }
        ],
        "elasticSearch": [
          {
            "name": "text",
            "version": 1,
            "tags": [
              "text"
            ],
            "configuration": {
              "user": {
                "value": "text"
              },
              "password": {
                "value": "password"
              },
              "nodes": {
                "value": [
                  "text"
                ]
              }
            }
          }
        ],
        "pagerDuty": [
          {
            "name": "text",
            "version": 1,
            "tags": [
              "text"
            ],
            "configuration": {
              "integrationKey": {
                "value": "text"
              }
            }
          }
        ],
        "datadog": [
          {
            "name": "text",
            "version": 1,
            "tags": [
              "text"
            ],
            "configuration": {
              "site": {
                "value": "EU"
              },
              "apiKey": {
                "value": "text"
              },
              "applicationKey": {
                "value": "text"
              }
            }
          }
        ],
        "slack": [
          {
            "name": "text",
            "version": 1,
            "tags": [
              "text"
            ],
            "configuration": {
              "webhookUrl": {
                "value": "text"
              }
            }
          }
        ],
        "alertManager": [
          {
            "name": "text",
            "version": 1,
            "tags": [
              "text"
            ],
            "configuration": {
              "endpoints": {
                "value": [
                  "text"
                ]
              }
            }
          }
        ],
        "webhook": [
          {
            "name": "text",
            "version": 1,
            "tags": [
              "text"
            ],
            "configuration": {
              "host": {
                "value": "text"
              },
              "port": {
                "value": 1
              },
              "useHttps": {
                "value": true
              },
              "creds": {
                "value": [
                  "text"
                ]
              }
            }
          }
        ],
        "aws": [
          {
            "name": "text",
            "version": 1,
            "tags": [
              "text"
            ],
            "configuration": {
              "authMode": {
                "value": "Credentials Chain"
              },
              "accessKeyId": {
                "value": "text"
              },
              "secretAccessKey": {
                "value": "text"
              },
              "region": {
                "value": "text"
              },
              "sessionToken": {
                "value": "text"
              }
            }
          }
        ],
        "connect": [
          {
            "name": "text",
            "version": 1,
            "tags": [
              "text"
            ],
            "configuration": {
              "workers": {
                "value": [
                  "text"
                ]
              },
              "username": {
                "value": "text"
              },
              "password": {
                "value": "text"
              },
              "metricsSsl": {
                "value": true
              },
              "metricsUsername": {
                "value": "text"
              },
              "metricsPassword": {
                "value": "text"
              },
              "metricsType": {
                "value": "JMX"
              },
              "metricsPort": {
                "value": 1
              },
              "aes256Key": {
                "value": "text"
              },
              "sslAlgorithm": {
                "value": "text"
              },
              "sslKeystore": {
                "file": "text"
              },
              "sslKeystorePassword": {
                "value": "text"
              },
              "sslKeyPassword": {
                "value": "text"
              },
              "sslTruststorePassword": {
                "value": "text"
              },
              "sslTruststore": {
                "file": "text"
              },
              "metricsCustomUrlMappings": {
                "value": {
                  "ANY_ADDITIONAL_PROPERTY": "text"
                }
              },
              "metricsCustomPortMappings": {
                "value": {
                  "ANY_ADDITIONAL_PROPERTY": 1
                }
              },
              "metricsHttpSuffix": {
                "value": "text"
              },
              "metricsHttpTimeout": {
                "value": 1
              }
            }
          }
        ],
        "awsGlueSchemaRegistry": [
          {
            "name": "schema-registry",
            "version": 1,
            "tags": [
              "text"
            ],
            "configuration": {
              "authMode": {
                "reference": "text"
              },
              "accessKeyId": {
                "reference": "text"
              },
              "secretAccessKey": {
                "reference": "text"
              },
              "sessionToken": {
                "value": "text"
              },
              "glueRegistryArn": {
                "value": "text"
              },
              "glueRegistryCacheTtl": {
                "value": 1
              },
              "glueRegistryCacheSize": {
                "value": 1
              },
              "schemaRegistryFlavour": {
                "value": "text"
              },
              "glueRegistryDefaultCompatibility": {
                "value": "BACKWARD"
              }
            }
          }
        ],
        "zookeeper": [
          {
            "name": "zookeeper",
            "version": 1,
            "tags": [
              "text"
            ],
            "configuration": {
              "zookeeperUrls": {
                "value": [
                  "text"
                ]
              },
              "zookeeperChrootPath": {
                "value": "text"
              },
              "zookeeperSessionTimeout": {
                "value": 1
              },
              "zookeeperConnectionTimeout": {
                "value": 1
              },
              "metricsType": {
                "value": "JMX"
              },
              "metricsPort": {
                "value": 1
              },
              "metricsUsername": {
                "value": "text"
              },
              "metricsPassword": {
                "value": "text"
              },
              "metricsSsl": {
                "value": true
              },
              "metricsHttpSuffix": {
                "value": "text"
              },
              "metricsHttpTimeout": {
                "value": 1
              },
              "metricsCustomUrlMappings": {
                "value": {}
              },
              "metricsCustomPortMappings": {
                "value": {}
              }
            }
          }
        ],
        "postgres": [
          {
            "name": "text",
            "version": 1,
            "tags": [
              "text"
            ],
            "configuration": {
              "host": {
                "value": "text"
              },
              "port": {
                "value": 1
              },
              "database": {
                "value": "text"
              },
              "username": {
                "value": "text"
              },
              "password": {
                "value": "text"
              },
              "sslMode": {
                "value": "allow"
              }
            }
          }
        ],
        "splunk": [
          {
            "name": "text",
            "version": 1,
            "tags": [
              "text"
            ],
            "configuration": {
              "host": {
                "value": "text"
              },
              "port": {
                "value": 1
              },
              "useHttps": {
                "value": true
              },
              "insecure": {
                "value": true
              },
              "token": {
                "value": "text"
              }
            }
          }
        ],
        "kerberos": [
          {
            "name": "kerberos",
            "version": 1,
            "tags": [
              "text"
            ],
            "configuration": {
              "kerberosKrb5": {
                "file": "text"
              }
            }
          }
        ]
      },
      "attachedFile": "binary"
    }
    security.conf
    # LDAP connection details
    
    lenses.security.ldap.url="ldaps://example.com:636"
    ## For the LDAP user please use the distinguished name (DN).
    ## The LDAP user must be able to list users and their groups.
    lenses.security.ldap.user="cn=lenses,ou=Services,dc=example,dc=com"
    lenses.security.ldap.password="[PASSWORD]"
    ## When set to true, it uses the lenses.security.ldap.user to read the user's groups
    ## lenses.security.ldap.use.service.user.search=false
    
    # LDAP user search settings
    
    lenses.security.ldap.base="ou=Users,dc=example,dc=com"
    lenses.security.ldap.filter="(&(objectClass=person)(sAMAccountName=<user>))"
    
    # LDAP group search and mapping settings
    
    lenses.security.ldap.plugin.class="io.lenses.security.ldap.LdapMemberOfUserGroupPlugin"
    lenses.security.ldap.plugin.group.extract.regex="(?i)CN=(\\w+),ou=Groups.*"
    lenses.security.ldap.plugin.memberof.key="memberOf"
    lenses.security.ldap.plugin.person.name.key = "sn"
    
    lenses.security.ldap.plugin.memberof.key = "notaKey"
    lenses.security.ldap.plugin.class=io.lenses.security.ldap.LdapMemberOfUserGroupPlugin
    security.conf
    # Set the full classpath that implements the group extraction
    lenses.security.ldap.plugin.class="io.lenses.security.ldap.LdapMemberOfUserGroupPlugin"
    
    # The plugin uses the 'memberOf' attribute. If this attribute has a different
    # name in your LDAP set it here.
    lenses.security.ldap.plugin.memberof.key="memberOf"
    
    # This regular expression should return the group common name. If it matches
    # a Lenses group name, the user is granted its permissions.
    # As an example if there is a 'memberOf' attribute with value:
    #   cn=LensesAdmins,ou=Groups,dc=example,dc=com
    # The regular expression will return 'LensesAdmins'.
    # Group names are case sensitive.
    lenses.security.ldap.plugin.group.extract.regex="(?i)cn=(\\w+),ou=Groups.*"
    
    # This is the LDAP attribute that holds the user's full name. It's optional.
    lenses.security.ldap.plugin.person.name.key = "sn"
    
    attribute  value
    ---------  ------------------------------------------
    memberOf   cn=LensesAdmin,ou=Groups,dc=example,dc=com
    memberOf   cn=RandomGroup,ou=Groups,dc=example,dc=com
    
    LensesAdmin
    RandomGroup
    attribute  value
    ---------  ------------------------------------------
    memberOf   cn=KafkaAdmin,ou=Groups,dc=example,dc=com
    memberOf   cn=AppsAdmin,ou=Groups,dc=example,dc=com
    memberOf   cn=BizAdmin,ou=Groups,dc=example,dc=com
    AppsAdmin
    KafkaAdmin
    # Set the full classpath that implements the group extraction
    lenses.security.ldap.plugin.class="io.lenses.security.ldap.LdapMemberOfUserGroupPlugin"
    lenses.security.ldap.plugin.memberof.key
    lenses.security.ldap.plugin.person.name.key
    lenses.security.ldap.plugin.group.extract.regex
    To change this directory, use the lenses.sql.state.dir option.
  • [RUNTIME DIRECTORY]/storage Where the H2 embedded database is stored when PostgreSQL is not set. To change this directory, use the lenses.storage.directory option.

  • /run (Global directory for temporary data at runtime) Used for temporary files. If Lenses does not have permission to use it, it will fall back to /tmp.

  • /tmp (Global temporary directory) Used for temporary files (if access to /run fails), and JNI shared libraries.


    Lenses Box

    Lenses Box is a container solution for building applications against a local Apache Kafka environment running in Docker.

    What’s in the Box?

    Lenses Box contains all components of the Apache Kafka ecosystem, CLI tools, and synthetic data streams.


    Starting the Box

    1. To start with the Box online, get your free development license.

    2. Install Docker and run the Box.

    Open Lenses in your browser at http://localhost:3030 and log in with admin/admin.

    Kafka Docker advertisement

    The broker in the Kafka docker has broker id 101 and advertises its listener endpoint to accept client connections.

    If you run Docker on macOS or Windows, you may need to find the address of the VM running Docker and export it as the advertised listener address for the broker (On macOS it usually is 192.168.99.100). At the same time, you should give the lensesio/box image access to the VM’s network:

    If you run on Linux, you don’t have to set ADV_HOST, but you can do something cool with it: set it to your machine’s IP address, and you can access Kafka from any client in your network.

    If you decide to run the Box in the cloud, you (and all your team) can access Kafka from your development machines. Remember to provide the public IP of your server as the Kafka advertised host so your producers and consumers can access it.

    Kafka Docker JMX

    Kafka JMX metrics are enabled by default. Refer to the port numbers below; once you expose the relevant port, i.e. -p 9581:9581, you can connect to JMX with:

    Custom hostname

    If you are using docker-machine or setting this up in a Cloud or DOCKER_HOST is a custom IP address such as 192.168.99.100, you will need to use the parameters --net=host -e ADV_HOST=192.168.99.100.

    Docker data persistence

    To persist the Kafka data between multiple executions, provide a name for your Docker instance and do not set the container to be removed automatically (the --rm flag). For example:

    Once you want to free up resources, just press Control-C. Now you have two options: either remove the Docker container:

    Or use it at a later time and continue from where you left off:

    Port Numbers

    Service
    Port Number

    Advanced options

    Variable
    Description

    FAQ

    How can I run offline?

    Download your key locally and run the command:

    How much memory to allocate?

    The container runs multiple services; it is recommended to allocate 5GB of RAM to the docker (although it can operate with even less than 4GB).
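
    For example, with plain Docker you could cap the container memory via the standard --memory flag (a sketch; the EULA value is elided as in the other examples):

    docker run --rm -p 3030:3030 --memory=5g \
        -e EULA="CHECK_YOUR_EMAIL_FOR_KEY" \
        --name=dev lensesio/box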

    To reduce the memory footprint, you can disable some connectors and shrink the Kafka Connect heap size by applying these options (choose which connectors to keep) to the docker run command:

    Permission Reference

    This page contains the Lenses IAM permission references.

    Admin permissions

    This matrix shows both the display name (first column) and the code name (second column) for permissions. Knowing the code name may be helpful when using the API / CLI.

    Permission
    Code name
    Description

    Data permissions

    Permission
    Description

    Application permissions

    This matrix shows both the display name (first column) and the code name (second column) for permissions. Knowing the code name may be helpful when using the API / CLI.

    Permission
    Code name
    Description

    Alert Reference

    This page describes the alert references for Lenses.

    Alert
    Alert Identifier
    Description
    Category
    Instance
    Severity

    Kafka Broker is down

    1000

    Raised when the Kafka broker is not part of the cluster for at least 1 minute, e.g. host-1, host-2.

    Infrastructure

    brokerID

    INFO, CRITICAL

    Kafka

    This page provides examples for defining a connection to Kafka.

    If deploying with Helm, put the connections YAML under provisioning in the values file.

    PLAINTEXT

    With PLAINTEXT, there's no encryption and no authentication when connecting to Kafka.

    Integrations

    This section describes the integrations available for alerting.

    Alerts are sent to channels.

    See for integration into your CI/CD pipelines.

    AWS Cloud Watch

    tar -xvf lenses.tar.gz -C lenses
       lenses
       ├── lenses.conf       ← edited and renamed from .sample
       ├── security.conf     ← edited and renamed from .sample
       ├── license.json
       ├── logback.xml
       ├── logback-debug.xml
       ├── bin/
       ├── lib/
       ├── licences/
       ├── logs/             ← created when you run Lenses
       ├── plugins/
       ├── storage/          ← created when you run Lenses
       └── ui/
    bin/lenses
    bin/lenses lenses.conf
    chmod 0600 /path/to/security.conf
    chown [lenses-user]:root /path/to/security.conf
    LENSES_OPTS="-Dorg.xerial.snappy.tempdir=/path/to/exec/tmp -Djava.io.tmpdir=/path/to/exec/tmp"
    [Unit]
    Description=Run Lenses.io service
    
    [Service]
    Restart=always
    User=[LENSES-USER]
    Group=[LENSES-GROUP]
    LimitNOFILE=4096
    WorkingDirectory=/opt/lenses
    #Environment=LENSES_LOG4J_OPTS="-Dlogback.configurationFile=file:/etc/lenses/logback.xml"
    ExecStart=/opt/lenses/bin/lenses /etc/lenses/lenses.conf
    
    [Install]
    WantedBy=multi-user.target
    export LENSES_OPTS="-Djavax.net.ssl.trustStore=/path/to/truststore.jks -Djavax.net.ssl.trustStorePassword=changeit"
    bin/lenses
    ulimit -S -n     # soft limit
    ulimit -H -n     # hard limit
    ulimit -S -n 4096

    Zookeeper Node is down

    1001

    Raised when the Zookeeper node is not reachable. This information is based on the Zookeeper JMX: if the node responds to JMX queries, it is considered to be running.

    Infrastructure

    service name

    INFO, CRITICAL

    Connect Worker is down

    1002

    Raised when the Kafka Connect worker is not responding to the API call for /connectors for more than 1 minute.

    Infrastructure

    worker URL

    MEDIUM

    Schema Registry is down

    1003

    Raised when the Schema Registry node is not responding to the root API call for more than 1 minute.

    Infrastructure

    service URL

    HIGH, INFO

    Under replicated partitions

    1005

    Raised when there are (topic, partitions) not meeting the replication factor set.

    Infrastructure

    partitions

    HIGH, INFO

    Partitions offline

    1006

    Raised when there are partitions which do not have an active leader. These partitions are not writable or readable.

    Infrastructure

    brokers

    HIGH, INFO

    Active Controllers

    1007

    Raised when the number of active controllers is not 1. Each cluster should have exactly one controller.

    Infrastructure

    brokers

    HIGH, INFO

    Multiple Broker Versions

    1008

    Raised when there are brokers in the cluster running different Kafka versions.

    Infrastructure

    brokers versions

    HIGH, INFO

    File-open descriptors high capacity on Brokers

    1009

    A broker has too many open file descriptors

    Infrastructure

    brokerID

    HIGH, INFO, CRITICAL

    Average % the request handler is idle

    1010

    Tracks the average fraction of time the request handler threads are idle. When the value is smaller than 0.02 the alert level is CRITICAL; when the value is smaller than 0.1 the alert level is HIGH.

    Infrastructure

    brokerID

    HIGH, INFO, CRITICAL

    Fetch requests failure

    1011

    Raised when the Fetch request rate (the value is per second) for requests that failed is greater than a threshold. If the value is greater than 0.1 the alert level is set to CRITICAL, otherwise it is set to HIGH.

    Infrastructure

    brokerID

    HIGH, INFO, CRITICAL

    Produce requests failure

    1012

    Raised when the Producer request rate (the value is per second) for requests that failed is greater than a threshold. If the value is greater than 0.1 the alert level is set to CRITICAL, otherwise it is set to HIGH.

    Infrastructure

    brokerID

    HIGH, INFO, CRITICAL

    Broker disk usage is greater than the cluster average

    1013

    Raised when the Kafka Broker disk usage is greater than the cluster average. We provide by default a threshold of 1GB disk usage.

    Infrastructure

    brokerID

    MEDIUM, INFO

    Leader Imbalance

    1014

    Raised when the Kafka Broker has more leader replicas than the cluster average.

    Infrastructure

    brokerID

    INFO

    Consumer Lag exceeded

    2000

    Raises an alert when the consumer lag exceeds the threshold on any partition.

    Consumers

    topic

    HIGH, INFO

    Connector deleted

    3000

    Connector was deleted

    Kafka Connect

    connector name

    INFO

    Topic has been created

    4000

    New topic was added

    Topics

    topic

    INFO

    Topic has been deleted

    4001

    Topic was deleted

    Topics

    topic

    INFO

    Topic data has been deleted

    4002

    Records from topic were deleted

    Topics

    topic

    INFO

    Data Produced

    5000

    Raises an alert when the data produced on a topic doesn’t match the expected threshold

    Data Produced

    topic

    LOW, INFO

    Connector Failed

    6000

    Raises an alert when a connector, or any worker in a connector, is down

    Apps

    connector

    LOW, INFO

    View Alert Rules

    ViewAlertRules

    Allows viewing the alert settings rules

    Manage Alert Rules

    ManageAlertRules

    Allows adding/deleting/updating alert settings rules

    View Audit

    ViewAuditLogs

    Allows viewing the audit records

    View Data Policies

    ViewDataPolicies

    Allows viewing the data policies

    Manage Data Policies

    ManageDataPolicies

    Allows to add/remove/update data policies

    Manage Connections

    ManageConnections

    Allows to add/remove/update connections

    View Approvals

    ViewApprovalRequest

    Allows viewing raised approval requests

    Manage Approvals

    ManageApprovalRequest

    Allows to accept/reject requests

    Manage Lenses License

    ManageLensesLicense

    Allows to update Lenses license at runtime via the Lenses API

    Manage Audit Logs

    ManageAuditLogs

    Allows deleting audit logs

    Insert Data

    Allows inserting data into the topic

    Delete Data

    Allows deleting data from the topic

    Update Schema

    Allows configuring the topic storage format and schema

    View Schema

    Allows viewing schema information

    Show Index

    Allows viewing Elasticsearch index information

    Query Index

    Allows viewing the data in an Elasticsearch index

    View Topology

    ViewTopology

    Allows viewing the data pipeline topology

    Manage Topology

    ManageTopology

    Allows decommissioning topology applications

    View Kafka Connectors

    ViewConnectors

    Allows viewing running Kafka Connectors

    Manage Kafka Connectors

    ManageConnectors

    Allows to add/update/delete/stop Kafka Connectors

    View Kafka Consumers

    ViewKafkaConsumers

    Allows viewing the Kafka Consumers details

    Manage Kafka Consumers

    ManageKafkaConsumers

    Allows changing the Kafka Consumers offset

    Connect Clusters Access

    -

    Allows to use Connect Clusters

    View Kafka Settings

    ViewKafkaSettings

    Allows viewing Kafka ACLs, Quotas

    Manage Kafka Settings

    ManageKafkaSettings

    Allows managing Kafka ACLs, Quotas

    View Log

    ViewLogs

    Allows viewing Lenses logs

    View Users

    ViewUsers

    Allows viewing the users, groups and service accounts

    Manage Users

    ManageUsers

    Allows to add/remove/update/delete users, groups and service accounts

    Show

    Allows viewing the topic name and basic info

    Query

    Allows viewing the data in a topic

    Create

    Allows creating topics

    Create Topic Request

    Topics are not created directly, they are sent for approval

    Drop

    Allows deleting topics

    Configure

    Allows changing a topic configuration

    View SQL Processors

    ViewSQLProcessors

    Allows viewing the SQL processors

    Manage SQL Processors

    ManageSQLProcessors

    Allows to add/remove/stop/delete SQL processors

    View Schemas

    ViewSchemaRegistry

    Allows viewing your Schema Registry entries

    Manage Schema Registry

    ManageSchemaRegistry

    Allows to add/remove/update/delete your Schema Registry entries

    The only required fields are:

    • kafkaBootstrapServers - a list of bootstrap servers (brokers). It is recommended to add as many brokers (if available) as convenient to this list for fault tolerance.

    • protocol - depending on the protocol, other fields might be necessary (see examples for other protocols)

    In the following example, JMX metrics for the Kafka brokers are configured too, assuming that all brokers expose their JMX metrics on the same port (9581), without SSL or authentication.

    SSL

    With SSL, the connection to Kafka is encrypted. You can also use SSL and certificates to authenticate users against Kafka.

    A truststore (with password) might need to be set explicitly if the global truststore of Lenses does not include the Certificate Authority (CA) of the brokers.

    If TLS is used for authentication to the brokers in addition to encryption-in-transit, a keystore (with passwords) is required.

    SASL_PLAINTEXT vs SASL_SSL

    There are 2 SASL-based protocols to access Kafka Brokers: SASL_SSL and SASL_PLAINTEXT. Both require a SASL mechanism and JAAS configuration values. They differ in:

    1. Whether the transport layer is encrypted (SSL)

    2. The SASL mechanism used for authentication (PLAIN, AWS_MSK_IAM, GSSAPI)

    In addition to this, a keytab file might be required, depending on the SASL mechanism (for example the GSSAPI mechanism, most often used for Kerberos).

    In order to use Kerberos authentication, a Kerberos connection should be created beforehand.

    Apart from that, when encryption-in-transit is used (with SASL_SSL), a truststore might need to be set explicitly if the global truststore of Lenses does not include the CA of the brokers.

    The following are a few examples of SASL_PLAINTEXT and SASL_SSL.

    SASL_SSL

    PLAIN

    Encrypted communication and basic username and password for authentication.

    AWS_MSK_IAM

    When Lenses is running inside AWS and is connecting to an Amazon’s Managed Kafka (MSK) instance, IAM can be used for authentication.

    GSSAPI

    In order to use Kerberos authentication, a Kerberos connection should be created beforehand.

    SASL_PLAINTEXT

    No SSL encryption of communication; credentials are communicated to Kafka in clear text.

    SCRAM-SHA-256

    SCRAM-SHA-512

    Advanced Client Configuration

    Lenses interacts with your Kafka Cluster via the Kafka Client API. To override the default behavior, use additionalProperties.

    By default there shouldn’t be a need to use additional properties; use them only if really necessary, as wrong usage might break the communication with Kafka.

    Lenses SQL processors use the same Kafka connection information provided to Lenses.

    To send alerts to AWS Cloud Watch, you first need an AWS connection. Go to Admin->Connections->New Connection->AWS. Enter your AWS Credentials.

    Rather than entering your AWS credentials, you can use the AWS credentials chain.

    Next, go to Admin->Alerts->Channels->New Channel->AWS Cloud Watch.

    Select an AWS connection.

    Datadog

    To send alerts to Datadog, you first need a Datadog connection. Go to Admin->Connections->New Connection->DataDog. Enter your API Key, Application Key and Site.

    Next, go to Admin->Alerts->Channels->New Channel->Data Dog.

    Select a DataDog connection.

    Pager Duty

    To send alerts to Pager Duty, you first need a Pager Duty connection. Go to Admin->Connections->New Connection->PagerDuty. Enter your Service Integration Key.

    Next, go to Admin->Alerts->Channels->New Channel->Pager Duty.

    Select the pager duty connection.

    Prometheus Alert Manager

    To send alerts to Prometheus Alert Manager, you first need a Prometheus connection. Go to Admin->Connections->New Connection->Prometheus.

    1. Select your Prometheus connection

    2. Set the Source

    3. Set the GeneratorURL for your Alert Manager instance

    Slack

    To send alerts to Slack, you first need a Slack connection. Go to Admin->Connections->New Connection->Slack. Enter your Slack webhook URL.

    Next, go to Admin->Alerts->Channels->New Channel->Slack.

    Enter the Slack channel you want to send alerts to.

    Webhook

    Webhooks allow you to send alerts to any service implementing them; they are very flexible.

    First, you need a Webhook connection. Go to Admin->Connections->New Connection

    Enter the URL, port and credentials.

    Create a Channel to use the connection. Go to Admin->Alerts->Channels->New Channel.

    1. Choose a name for your Channel instance.

    2. Select your connection.

    3. Set the HTTP method to use.

    4. Set the Request path: a URI-encoded request path, which may include a query string. Supports alert-variable interpolation.

    5. Set the HTTP Headers

    6. Set the Body payload

    Template variables

    In the Request path, HTTP Headers and Body payload you can use template variables, which will be translated to alert-specific fields. To use template variables, use the format {{VARIABLE}}, e.g. {{LEVEL}}.

    Supported template variables:

    • LEVEL - alert level (INFO, LOW, MEDIUM, HIGH, CRITICAL).

    • CATEGORY - alert category (Infrastructure, Consumers, Kafka Connect, Topics, Producers).

    • INSTANCE - (broker url / topic name etc.).

    • SUMMARY - alert summary - same content in the Alert Events tab.

    • TIMESTAMP - the time the alert was raised.

    • ID - alert global id (i.e. 1000 for BrokerStatus alert).

    • CREDS - CREDS[0] etc. - variables specified in the connection’s Credentials as a comma-separated list of values.

    Webhook Email

    To configure real-time email alerts you can leverage Webhooks, for example with the following services:

    • Twilio and SendGrid

    • Zapier

    SendGrid Example

    1. Create a webhook connection, for SendGrid with api.sendgrid.com as the host and enable HTTPS

    2. Configure a channel to use the connection you just created

    3. Set the method to Post

    4. Set the request path to /v3/mail/send

    5. Set the Headers to

    HTTP Headers

    Authorization: Bearer [your-Sendgrid-API-Key]

    Content-Type: application/json

    6. Set the payload to be

    Change the above payload according to your requirements, and remember that the [sender-email-address] needs to be the same email address you registered during the Sender Authentication Sendgrid setup process.

    Zapier Example

    1. Create a webhook connection for Zapier, with hooks.zapier.com as the host, and enable HTTPS

    2. Configure a channel to use the connection you just created

    3. Set the method to Post

    4. Set the request path to the webhook URL from your Zapier account

    5. Set the Headers to:

    6. Set the payload to be

    Webhook MS Teams

    To create a webhook in your MS Teams workspace you can use this guide.

    At the end of the process you get a URL of the format: https://YOUR_URL.webhook.office.com/webhook2/<secret-token-by-ms>/IncomingWebhook/<secret-token-by-ms>

    You’ll need the second part:

    /webhook2/<secret-token-by-ms>/IncomingWebhook/<secret-token-by-ms>

    1. Create a new Webhook Connection, set the host to outlook.office.com and enable HTTPS

    2. Configure a new channel, using this connection

    3. Set the Method to POST

    4. Set the Request Path to the second part of the URL you received from MS Teams

    /webhook2/<secret-token-by-ms>/IncomingWebhook/<secret-token-by-ms>

    5. In the body, set:

    Webhook SMS

    See Zapier and follow the blog post SMS alerts with Zapier.

    provisioning
    kafka:
    - name: Kafka
      version: 1
      tags: ["optional-tag"]
      configuration:
        kafkaBootstrapServers:
          value:
            - PLAINTEXT://your.kafka.broker.0:9092
            - PLAINTEXT://your.kafka.broker.1:9092
        protocol: 
          value: PLAINTEXT
        # all metrics properties are optional
        metricsPort: 
          value: 9581
        metricsType: 
          value: JMX
        metricsSsl: 
          value: false
    kafka:
    - name: Kafka
      version: 1
      tags: ["optional-tag"]
      configuration:
        kafkaBootstrapServers:
          value:
            - SSL://your.kafka.broker.0:9092
            - SSL://your.kafka.broker.1:9092
        protocol: 
          value: SSL
        sslTruststore:
          file: /path/to/truststore.jks
        sslTruststorePassword: 
          value: truststorePassword
        sslKeystore:
          file: /path/to/keystore.jks
        sslKeyPassword: 
          value: keyPassword
        sslKeystorePassword: 
          value: keystorePassword
    kafka:
    - name: Kafka
      version: 1
      tags: ["optional-tag"]
      configuration:
        kafkaBootstrapServers:
          value:
            - SASL_SSL://your.kafka.broker.0:9092
            - SASL_SSL://your.kafka.broker.1:9092
        protocol: 
          value: SASL_SSL
        sslTruststore:
          file: /path/to/truststore.jks
        sslTruststorePassword: 
          value: truststorePassword
        sslKeystore:
          file: /path/to/keystore.jks
        sslKeyPassword: 
          value: keyPassword
        sslKeystorePassword: 
          value: keystorePassword
        saslMechanism: 
          value: PLAIN
        saslJaasConfig:
          value: |
            org.apache.kafka.common.security.plain.PlainLoginModule required
            username="your-username"
            password="your-password";      
    kafka:
    - name: Kafka
      version: 1
      tags: ["optional-tag"]
      configuration:
        kafkaBootstrapServers:
          value:
           - SASL_SSL://your.kafka.broker.0:9098
           - SASL_SSL://your.kafka.broker.1:9098
        protocol:
          value: SASL_SSL
        saslMechanism: 
          value: AWS_MSK_IAM
        saslJaasConfig:
          value: software.amazon.msk.auth.iam.IAMLoginModule required;
        additionalProperties:
          value:
            sasl.client.callback.handler.class: "software.amazon.msk.auth.iam.IAMClientCallbackHandler"
    kafka:
    - name: Kafka
      version: 1
      tags: ["optional-tag"]
      configuration:
        kafkaBootstrapServers:
          value:
            - SASL_SSL://your.kafka.broker.0:9092
            - SASL_SSL://your.kafka.broker.1:9092
        protocol: 
          value: SASL_SSL
        sslTruststore:
          file: /path/to/truststore.jks
        sslTruststorePassword: 
          value: truststorePassword
        sslKeystore:
          file: /path/to/keystore.jks
        sslKeyPassword: 
          value: keyPassword
        sslKeystorePassword: 
          value: keystorePassword  
        saslMechanism: 
          value: GSSAPI
        saslJaasConfig:
          value: |
            com.sun.security.auth.module.Krb5LoginModule required
            useKeyTab=true
            storeKey=true
            useTicketCache=false
            serviceName=kafka
            principal="[email protected]";      
        keytab:
          file: /path/to/keytab.jks
    kafka:
    - name: Kafka
      version: 1
      tags: ["optional-tag"]
      configuration:
        kafkaBootstrapServers:
          value:
            - SASL_PLAINTEXT://your.kafka.broker.0:9092
            - SASL_PLAINTEXT://your.kafka.broker.1:9092
        protocol: 
          value: SASL_PLAINTEXT
        saslMechanism: 
          value: SCRAM-SHA-256
        saslJaasConfig: 
          value: |
            org.apache.kafka.common.security.scram.ScramLoginModule required
            username="your-username"
            password="your-password";      
    kafka:
    - name: Kafka
      version: 1
      tags: ["optional-tag"]
      configuration:
        kafkaBootstrapServers:
          value:
            - SASL_PLAINTEXT://your.kafka.broker.0:9092
            - SASL_PLAINTEXT://your.kafka.broker.1:9092
        protocol: 
          value: SASL_PLAINTEXT
        saslMechanism: 
          value: SCRAM-SHA-512
        saslJaasConfig: 
          value: |
            org.apache.kafka.common.security.scram.ScramLoginModule required
            username="your-username"
            password="your-password";    
    kafka:
    - name: Kafka
      version: 1
      tags: ["optional-tag"]
      configuration:
        kafkaBootstrapServers:
          value:
           - PLAINTEXT://your.kafka.broker.0:9092
        protocol: 
          value: PLAINTEXT
        additionalProperties:
          value:
            isolation.level: "read_committed"
            acks: "all"
            ssl.endpoint.identification.algorithm: "https"
    {
      "personalizations":[{
        "to":[{
          "email":"[email protected]",
          "name":"DevOps & SRE team | MyCompany"
        }],
        "subject":"PRODUCTION | Streaming Data Platform Alert"
      }],
      "content":[{
        "type":"text/html",
        "value":"<html><body><p>Priority Level:{{LEVEL}}</br></br>Category: {{CATEGORY}}</br><br>Description: {{SUMMARY}}</br><small>Alert ID: {{ID}}</small></p></body></html>"
      }],
      "from":{
        "email":"sender-email-address",
        "name":"sender name ie. PRODUCTION | Streaming Data"
      },
      "reply_to":{
        "email":"reply-to-email-address",
        "name":"reply to name"
      }
    }
    X-Api-Token: {{CREDS[0]}}
    <html>
      <body>
        <h2>Streaming data platform - alert</h2>
        <p>Environment: <a href="http://enjoy.lenses.io"><b>PRODUCTION</b></a></p>
        <p>Priority Level: <b>{{LEVEL}}</b></p>
        <p>Category: <b>{{CATEGORY}}</b></p>
        <p>Summary: {{SUMMARY}}</p>
        <p>Alert ID: {{ID}}</p>
        <p><small>Lenses.io</small></p>
      </body>
    </html>
    {"text": "{{CATEGORY}} {{LEVEL}} {{ID}}"}

    Kafka broker JMX

    9581

    Schema registry JMX

    9582

    Kafka connect JMX

    9584

    Zookeeper JMX

    9585

    Kafka broker (ssl)

    9093

    SAMPLEDATA=0

    Disables the synthetic streaming data generators that run by default.

    SUPERVISORWEB=1

    Enables supervisor interface on port 9001 (adjust via SUPERVISORWEB_PORT) to control services.

    Kafka broker

    9092

    Kafka connect

    8083

    Zookeeper

    2181

    Schema Registry

    8081

    Lenses

    3030

    Elasticsearch

    9200

    ADV_HOST=[ip-address]

    The IP address that the broker will advertise

    DEBUG=1

    Prints the stdout and stderr of all processes to the container’s stdout for debugging.

    DISABLE_JMX=1

    Disables exposing JMX metrics on Kafka services.

    ELASTICSEARCH_PORT=0

    Will not start Elasticsearch.

    ENABLE_SSL=1

    Creates CA and key-cert pairs and makes the broker also listen to SSL://127.0.0.1:9093

    KAFKA_BROKER_ID=1

    Overrides the broker id (the default id is 101).


    SQL Processor Deployment

    This page describes how to configure the agent to deploy and manage SQL Processors for stream processing.

    Set in lenses.conf

    Lenses can be used to define & deploy stream processing applications that read from Kafka and write back to Kafka with SQL. They are based on the Kafka Streams framework and are known as SQL Processors.

    SQL processing of real-time data can run in 2 modes:

    • SQL In-Process - the workload runs inside of Lenses.

    • SQL in Kubernetes - the workload runs & scales on your Kubernetes cluster.

    The mode in which the SQL Processors run must be defined in lenses.conf before Lenses is started.

    In-Process Mode

    In this mode, SQL processors run as part of the Lenses process, sharing resources, memory, and CPU time with the rest of the platform.

    This mode of operation is meant to be used for development only.

    As such, the agent will not allow the creation of more than 50 SQL Processors in In Process mode, as this could impact the platform's stability and performance negatively.

    For production, use the KUBERNETES mode for maximum flexibility and scalability.

    Set the execution configuration to IN_PROC

    Set the directory to store the internal state of the SQL Processors:

    TLS connections to Kafka and Schema Registries

    SQL processors use the same connection details that Lenses uses to speak to Kafka and Schema Registry. The following properties are mounted, if present, on the file system for each processor:

    • Kafka

      1. SSLTruststore

      2. SSLKeystore

    • Schema Registry

    The file structure created by applications is the following: /run/[lenses_installation_id]/applications/

    Keep in mind Lenses requires an installation folder with write permissions. The following are tried:

    1. /run

    2. /tmp
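As an illustration, the mounted layout looks roughly like this (a sketch only; the processor id and file names below are assumptions, and only the stores that are actually configured get mounted):

```
/run/[lenses_installation_id]/applications/
└── <processor-id>/
    ├── kafka-truststore.jks        # Kafka SSL truststore, if configured
    ├── kafka-keystore.jks          # Kafka SSL keystore, if configured
    └── schema-registry-*.jks       # Schema Registry key/trust stores, if configured
```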

    Kubernetes Mode

    Kubernetes can be used to deploy SQL Processors. To configure Kubernetes, set the mode to KUBERNETES and configure the location of the kubeconfig file.

    When Lenses is deployed inside Kubernetes, the lenses.kubernetes.config.file configuration entry should be set to an empty string. The Kubernetes client will auto-configure from the pod it is deployed in.
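For instance, when Lenses itself runs inside Kubernetes, the relevant lenses.conf lines would look like this (a minimal sketch of the settings described above):

```
lenses.sql.execution.mode = KUBERNETES
# Empty string: the Kubernetes client auto-configures from the pod it runs in
lenses.kubernetes.config.file = ""
```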

The SQL Processor Docker image is available on Docker Hub.

    Custom Serdes

    Custom serdes should be embedded in a new Lenses SQL processor Docker image.

    To build a custom Docker image, create the following directory structure:

    Copy your serde jar files under processor-docker/serde.

    Create Dockerfile containing:

Build the Docker image.

    Once the image is deployed in your registry, please set Lenses to use it (lenses.conf):

    Don't use the LPFP_ prefix.

    Internally, Lenses prefixes all its properties with LPFP_.

    Avoid passing custom environment variables starting with LPFP_ as it may cause the processors to fail.

Use Role/RoleBinding to deploy Lenses processors

To deploy Lenses Processors in Kubernetes, the suggested way is to activate RBAC at cluster level through the Helm values.yaml:

    If you want to limit the permissions Lenses has against your Kubernetes cluster, you can use Role/RoleBinding resources instead.

    To achieve this you need to create a Role and a RoleBinding resource in the namespace you want the processors deployed to:

    example for:

    • Lenses namespace = lenses-ns

    • Processor namespace = lenses-proc-ns

You can repeat this for as many namespaces as you want Lenses to have access to.

Finally, you need to define in the Lenses configuration which namespaces Lenses can access. To achieve this, amend values.yaml to contain the following:

    example:

    docker run --rm \
        -p 3030:3030 \
        --name=dev \
        --net=host \
        -e EULA="https://dl.lenses.stream/d/?id=CHECK_YOUR_EMAIL_FOR_KEY" \
        lensesio/box   
    docker run -e EULA="CHECK_YOUR_EMAIL_FOR_KEY" \
               -e ADV_HOST="192.168.99.100" \
               --net=host --name=dev \
               lensesio/box
    jconsole localhost:9581
    docker run --rm \
        -p 3030:3030 \
        --net=host \
        -e ADV_HOST=192.168.99.100 \
        -e EULA="https://dl.lenses.stream/d/?id=CHECK_YOUR_EMAIL_FOR_KEY" \
        lensesio/box    
    docker run \
        -p 3030:3030 -e EULA="CHECK_YOUR_EMAIL_FOR_KEY" \
        --name=dev lensesio/box
    docker rm dev
docker start -a dev
    LFILE=`cat license.json`
    docker run --rm -it -p 3030:3030 -e LICENSE="$LFILE" lensesio/box:latest
    
    -e DISABLE=azure-documentdb,cassandra,elastic5,ftp,influxdb,jms,mongodb,mqtt,redis
    -e CONNECT_HEAP=512m

lenses.conf
    # Set up Lenses SQL processing engine
    lenses.sql.execution.mode = "IN_PROC"
    lenses.conf
    lenses.sql.state.dir = "/tmp/lenses-sql-kstream-state"
    lenses.conf
    lenses.sql.execution.mode = KUBERNETES
    # kubernetes configuration
    lenses.kubernetes.config.file = "/home/lenses/.kube/config"
    lenses.kubernetes.service.account = "default"
    #lenses.kubernetes.processor.image.name = "" # Only needed if you use a custom image
    #lenses.kubernetes.processor.image.tag = ""  # Only needed if you use a custom image
    
    # Only needed if you want to tune the buffer size for incoming events from Kubernetes
    #lenses.deployments.errors.buffer.size = 1000
    
    # Only needed if you want to tune the buffer size for incoming errors from Kubernetes WS communication
    #lenses.deployments.events.buffer.size = 10000
    mkdir -p processor-docker/serde
    FROM lensesioextra/sql-processor:4.2
    
    ADD serde /opt/serde
    ENV LENSES_SQL_RUNNERS_SERDE_CLASSPATH_OPTS=/opt/serde
    cd processor-docker
    docker build -t example/lsql-processor .
    lenses.kubernetes.processor.image.name = "your/image-name"
    lenses.kubernetes.processor.image.tag = "your-tag"
    rbacEnable: true
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: [ROLE_NAME]
      namespace: [PROCESSORS_NAMESPACE]
    rules:
    - apiGroups: [""]
      resources:
        - namespaces
        - persistentvolumes
        - persistentvolumeclaims
        - pods/log
      verbs:
        - list
        - watch
        - get
        - create
    - apiGroups: ["", "extensions", "apps"]
      resources:
        - pods
        - replicasets
        - deployments
        - ingresses
        - secrets
        - statefulsets
        - services
      verbs:
        - list
        - watch
        - get
        - update
        - create
        - delete
        - patch
    - apiGroups: [""]
      resources:
        - events
      verbs:
        - list
        - watch
        - get
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: [ROLE_BINDING_NAME]
  namespace: [PROCESSORS_NAMESPACE]
    subjects:
    - kind: ServiceAccount
      namespace: [LENSES_NAMESPACE]
      name: [SERVICE_ACCOUNT_NAME]
    roleRef:
      kind: Role
      name: [ROLE_NAME]
      apiGroup: rbac.authorization.k8s.io
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: processor-role
      namespace: lenses-proc-ns
    rules:
    - apiGroups: [""]
      resources:
        - namespaces
        - persistentvolumes
        - persistentvolumeclaims
        - pods/log
      verbs:
        - list
        - watch
        - get
        - create
    - apiGroups: ["", "extensions", "apps"]
      resources:
        - pods
        - replicasets
        - deployments
        - ingresses
        - secrets
        - statefulsets
        - services
      verbs:
        - list
        - watch
        - get
        - update
        - create
        - delete
        - patch
    - apiGroups: [""]
      resources:
        - events
      verbs:
        - list
        - watch
        - get
---
kind: RoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: processor-role-binding
      namespace: lenses-proc-ns
    subjects:
    - kind: ServiceAccount
      namespace: lenses-ns
      name: default
    roleRef:
      kind: Role
      name: processor-role
      apiGroup: rbac.authorization.k8s.io
    lenses:
      append:
        conf: |
          lenses.kubernetes.namespaces = {
            incluster = [
              "[PROCESSORS NAMESPACE]"
            ]
          }      
    lenses:
      append:
        conf: |
          lenses.kubernetes.namespaces = {
            incluster = [
              "lenses-processors"
            ]
          }      

    Helm

    This page describes installing Lenses in Kubernetes via Helm.

    Only Helm version 3 is supported.

    On start-up, Lenses will be in bootstrap mode unless it has an existing Kafka Connection. Enable provisioning to automate the creation of connections.

    First, add the Helm Chart repository using the Helm command line:

    Use helm to install Lenses with default values:

The default install of Lenses will place Lenses in bootstrap mode; you can add the connections to Kafka manually and upload your license, or automate this with provisioning. Please refer to the GitHub values.yaml for all options.

    Provisioning

To automatically provision the connections to Kafka and other systems, set .Values.lenses.provision.connections to the YAML definition of your connections. For a full list of the supported connection types, see Provisioning.

    The chart will render the full YAML specified under this setting as the provisioning.yaml file.

Alternatively, you can use a second YAML file containing only the connections and pass it on the command line when installing:

You must explicitly enable provisioning via lenses.provision.enabled: true, otherwise Lenses will start in bootstrap mode.
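For example, a values.yaml fragment along these lines enables provisioning and declares a single Kafka connection (a minimal sketch; the exact connection schema is documented in the Provisioning pages, so treat the field names under connections and the broker address as assumptions):

```yaml
lenses:
  provision:
    enabled: true
    connections:
      kafka:
        - name: kafka
          version: 1
          configuration:
            kafkaBootstrapServers:
              value:
                - PLAINTEXT://my-kafka:9092
```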

    Helm Chart components

The chart uses:

1. Secrets to store Lenses Postgres credentials and authentication credentials

2. Secrets to store connection credentials such as the Kafka SASL_SCRAM password or passwords for SSL JKS stores

3. Secrets to hold the base64 encoded values of the JKS stores

4. ConfigMap for Lenses configuration overrides

5. Cluster roles and role bindings (optional)

Secrets and config maps are mounted as files under /mnt:

1. settings - holds the lenses.conf

2. secrets - holds the Lenses secrets and license

3. provision-secrets - holds the secrets for connections in the provisioning.yaml file

4. provision-secrets/files - holds any file needed for a connection, e.g. JKS files

    Cluster RBAC

The Helm chart creates Cluster roles and bindings; these are used by SQL Processors if the deployment mode is set to KUBERNETES, so that Lenses can deploy and monitor SQL Processor deployments in namespaces.

To disable the RBAC set: rbacEnable: false

If you want to limit the permissions Lenses has against your Kubernetes cluster, you can use Role/RoleBinding resources instead.

    To achieve this you need to create a Role and a RoleBinding resource in the namespace you want the processors deployed to.

    For example:

    • Lenses namespace = lenses-ns

    • Processor namespace = lenses-proc-ns

Finally, you need to define in the Lenses configuration which namespaces Lenses can access. To achieve this, amend values.yaml to contain the following:

    lenses.conf

    The main configurable options for lenses.conf are available in the values.yaml under the lenses object. These include:

    • Authentication

    • Database connections

    • SQL processor configurations

    To apply other static configurations use lenses.append.conf, for example:

    Authentication secrets

Set accordingly under lenses.security.

    For SSO set lenses.security.saml

    Postgres

    To use Postgres as the backing store for Lenses set the details in the lenses.storage.postgres object.

    If Postgres is not enabled a default embedded H2 database is used. To enable persistence for this data:

    External Secrets

The chart relies on secrets for sensitive information such as passwords. Secrets can rotate and are commonly stored in an external store such as Azure Key Vault, Hashicorp Vault or AWS Secrets Manager.

    If you wish to have the chart use external secrets that are synchronized with these providers, set the following for the Lenses user:

    For Postgres, add additional ENV variables via the lenses.additionalEnv object to point to your secret and set the username and password to external in the Postgres section.

    Ingress & Services

While the chart supports setting TLS on Lenses itself, we recommend terminating TLS at the Ingress resource.

Ingress and service resources are supported.

Enable an Ingress resource in the values.yaml:

    Enable a service resource in the values.yaml:

    Controlling resources

    To control the resources used by Lenses:

    Enabling SQL Processors in K8s mode

To enable SQL processors in KUBERNETES mode and control the defaults:

To control the namespaces into which Lenses can deploy processors, use the sql.namespaces value.
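For example (a sketch only; the exact shape of sql.namespaces is defined in the chart's values.yaml, and the namespace name is a placeholder):

```yaml
sql:
  mode: KUBERNETES
  namespaces:
    - lenses-processors
```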

    Prometheus metrics

    Prometheus metrics are automatically exposed on port 9102 under /metrics.
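If you scrape with Prometheus directly, a minimal scrape job could look like this (illustrative; the job name and target address are assumptions for a cluster-internal service):

```yaml
scrape_configs:
  - job_name: lenses
    metrics_path: /metrics
    static_configs:
      - targets: ["lenses.lenses.svc.cluster.local:9102"]
```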

    Example Values files

For connections, see the Provisioning examples. You can also find examples in the Helm chart repo.

    helm repo add lensesio https://helm.repo.lenses.io
    helm repo update
    helm install lenses lensesio/lenses --namespace lenses --create-namespace



    helm install lenses \
    charts/lenses \
    --values charts/lenses/values.yaml \
    --values provisioning.yaml
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: processor-role
      namespace: lenses-proc-ns
    rules:
    - apiGroups: [""]
      resources:
        - namespaces
        - persistentvolumes
        - persistentvolumeclaims
        - pods/log
      verbs:
        - list
        - watch
        - get
        - create
    - apiGroups: ["", "extensions", "apps"]
      resources:
        - pods
        - replicasets
        - deployments
        - ingresses
        - secrets
        - statefulsets
        - services
      verbs:
        - list
        - watch
        - get
        - update
        - create
        - delete
        - patch
    - apiGroups: [""]
      resources:
        - events
      verbs:
        - list
        - watch
        - get
    ---
    kind: RoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: processor-role-binding
      namespace: lenses-proc-ns
    subjects:
    - kind: ServiceAccount
      namespace: lenses-ns
      name: default
    roleRef:
      kind: Role
      name: processor-role
      apiGroup: rbac.authorization.k8s.io
    lenses:
      append:
        conf: |
          lenses.kubernetes.namespaces = {
            incluster = [
              "lenses-processors"
            ]
          }      
    lenses:
      append:
        conf: |
          lenses.interval.user.session.refresh=40000
    lenses:
      security:
        defaultUser:
          username: admin
          password: admin
    lenses:
      security:
        saml:
          enabled: true
          baseUrl: "https://lenses-prod.eastus2.cloudapp.azure.com"
          provider: "azure"
          keyStoreFileData: |-
                    somebase64encodedvalue
          keyStorePassword: "password"
          keyPassword: "password"
          metadataFileData: |-
                    somebase64encodedvalue=
    storage:
        postgres:
          enabled: false
          host:
          port: 
          username:
          password:
          database:
          schema:        
    persistence:
      enabled: true
      accessModes:
        - ReadWriteOnce
      size: 5Gi
      security:
        defaultUser:  
          # username: external # "external" that tells Lenses to look for a Secret
          # password: external # Same here.
          # usernameSecretKeyRef:
          #   name: my-existing-secret
          #   key: the-username-key
          # passwordSecretKeyRef:
          #   name: my-existing-secret
          #   key: the-password-key
          # - name: LENSES_STORAGE_POSTGRES_PASSWORD
          #   valueFrom:
          #     secretKeyRef:
          #       name: [SECRET_RESOURCE_NAME]
          #       key: [SECRET_RESOURCE_KEY]
    ingress:
      enabled: false
      host:
      annotations: {}
      tls:
        enabled: false
        crt: |-
        key: |-
    # Lenses service
    service:
      enabled: true
      type: ClusterIP
      annotations: {}
      externalTrafficPolicy:
      loadBalancerIP: 130.211.x.x
      loadBalancerSourceRanges:
        - 0.0.0.0/0
    # Resource management
    resources:
      requests:
        cpu: 1
        memory: 4Gi
      limits:
        cpu: 2
        memory: 5Gi
    sql:
        # processorImage: eu.gcr.io/lenses-container-registry/lenses-sql-processor
        # processorImageTag: 2.3
        mode: IN_PROC
        heap: 1024M
        minHeap: 128M
        memLimit: 1152M
        memRequest: 128M
        livenessInitialDelay: 60 seconds

    Configuration Reference

    This page lists the available configurations in Lenses.

    Basics

    Reference documentation of all configuration and authentication options:

    Set in lenses.conf

| Key | Description | Default | Type | Required |
|---|---|---|---|---|
| lenses.ip | Bind HTTP at the given endpoint. Use in conjunction with lenses.port | 0.0.0.0 | string | no |
| lenses.port | The HTTP port to listen for API, UI and WS calls | 9991 | int | no |
| lenses.jmx.port | Bind JMX port to enable monitoring Lenses | | int | no |
| lenses.root.path | The path from which all the Lenses URLs are served | | | |
| lenses.access.control.allow.methods | HTTP verbs allowed in cross-origin HTTP requests | GET,POST,PUT,DELETE,OPTIONS | | |
| lenses.access.control.allow.origin | Allowed hosts for cross-origin HTTP requests | * | | |
| lenses.allow.weak.ssl | Allow https:// with self-signed certificates | false | | |
| lenses.secret.file | The full path to security.conf for security credentials | security.conf | string | no |
| lenses.sql.execution.mode | Streaming SQL mode IN_PROC (test mode) or KUBERNETES (prod mode) | IN_PROC | string | no |
| lenses.offset.workers | Number of workers to monitor topic offsets | 5 | int | no |
| lenses.telemetry.enable | Enable telemetry data collection | true | boolean | no |
| lenses.kafka.control.topics | An array of topics to be treated as "system topics" | list | array | no |
| lenses.grafana | Add your Grafana url i.e. http://grafanahost:port | | string | no |
| lenses.api.response.cache.enable | If enabled, it disables client cache on the Lenses API HTTP responses by adding these HTTP headers: Cache-Control: no-cache, no-store, must-revalidate, Pragma: no-cache, and Expires: -1 | false | boolean | no |
| lenses.workspace | Directory to write temp files. If write access is denied, Lenses will fall back to /tmp | /run | string | no |

    Default system topics

    System or control topics are created by services for their internal use. Below is the list of built-in configurations to identify them.

• _schemas

• __consumer_offsets

• _kafka_lenses_

• lsql_*

• lsql-*

• __transaction_state

• __topology

• __topology__metrics

• _confluent*

• *-KSTREAM-*

• *-TableSource-*

• *-changelog

• __amazon_msk*

The wildcard (*) matches any name in the path, capturing a list of topics rather than just one. When no wildcard is specified, Lenses matches on the exact entry name provided.
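For example, the set of system topics can be configured in lenses.conf like this (an illustrative entry reusing names from the list above):

```
lenses.kafka.control.topics = ["_schemas", "__consumer_offsets", "_kafka_lenses_*", "*-changelog"]
```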

    Security

    Set in security.conf

    TLS

| Key | Description | Default |
|---|---|---|
| lenses.ssl.keystore.location | The full path to the keystore file used to enable TLS on the Lenses port | |
| lenses.ssl.keystore.password | Password for the keystore file | |
| lenses.ssl.key.password | Password for the ssl certificate used | |
| lenses.ssl.enabled.protocols | Version of TLS protocol to use | TLSv1.2 |
| lenses.ssl.algorithm | X509 or PKIX algorithm to use for TLS termination | SunX509 |
| lenses.ssl.cipher.suites | Comma separated list of ciphers allowed for TLS negotiation | |

    LDAP

    LDAP or AD connectivity is optional. All settings are string.

    Set in security.conf

| Key | Description | Default |
|---|---|---|
| lenses.security.ldap.url | LDAP server URL (TLS, StartTLS and unencrypted supported) | |
| lenses.security.ldap.user | LDAP user account. Must be able to list users and their groups. The distinguished name (DN) must be used | |
| lenses.security.ldap.password | LDAP account password | |
| lenses.security.ldap.base | LDAP base path for querying user accounts. All user accounts that will be able to access Lenses should be under this path | |
| lenses.security.ldap.filter | LDAP query filter for matching users. Lenses will request all entries under the base path that satisfy this filter. The result should be unique | (&(objectClass=person)(sAMAccountName=<user>)) |
| lenses.security.ldap.plugin.class | Full classpath that implements the LDAP query for the user's groups. You can use the implementation that comes with Lenses if your LDAP setup is supported | |
| lenses.security.ldap.plugin.memberof.key | LDAP user attribute that provides memberOf information. In most implementations the attribute has the same name, so you don't have to set anything. Used by the default plugin | memberOf |
| lenses.security.ldap.plugin.group.extract.regex | A regular expression to extract a part of the user's groups. If this part matches a Lenses group, the user will be granted all the permissions of this group. Lenses checks against the list of memberOf attribute values and uses the first regex group that is returned | (?i)CN=(\\w+),ou=Groups.* |
| lenses.security.ldap.plugin.person.name.key | This key is used by the included LDAP plugin class LdapMemberOfUserGroupPlugin. It expects the LDAP user attribute that provides the full name of the user | sn |

An additional configuration setting, lenses.security.ldap.use.service.user.search, when set to true will use the lenses.security.ldap.user account to read the groups of the currently logged-in user. The default behaviour (false) uses the currently logged-in user to read group memberships.
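In security.conf this toggle looks like:

```
# Use the service account (lenses.security.ldap.user) to look up group memberships
lenses.security.ldap.use.service.user.search = true
```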

    SSO SAML

    Set in security.conf

| Key | Description | Default |
|---|---|---|
| lenses.security.saml.base.url | Lenses HTTPS URL that matches the Service Provider (SP) and is part of the Identity Provider (IdP) SAML handshake i.e. https://lenses-dev.example.com | |
| lenses.security.saml.sp.entityid | SAML Service Provider (SP) Entity ID for Lenses, used as part of the SAML handshake protocol | |
| lenses.security.saml.idp.provider | The Identity Provider (IdP) type: azure, google, keycloak, okta, onelogin | |
| lenses.security.saml.idp.metadata.file | Path to the XML file provided by the Identity Provider, e.g. /path/to/saml-idp.xml | |
| lenses.security.saml.idp.session.lifetime.max | The maximum "duration since login" to accept from the IdP. A SAML safety measure that is usually not used. See the duration syntax | 100days |
| lenses.security.saml.keystore.location | Location of the Java keystore file to be used for SAML crypto i.e. /path/to/keystore.jks | |
| lenses.security.saml.keystore.password | Password for accessing the keystore | |
| lenses.security.saml.key.alias | Alias to use for the private key within the keystore (only required when the keystore has multiple keys) | |
| lenses.security.saml.key.password | Password for accessing the private key within the keystore | |

    Kerberos

| Key | Description | Default |
|---|---|---|
| lenses.security.kerberos.service.principal | The Kerberos principal for Lenses to use, in the SPNEGO form HTTP/[host]@[REALM] | |
| lenses.security.kerberos.keytab | Path to the Kerberos keytab with the service principal. It should not be password protected | |
| lenses.security.kerberos.debug | Enable Java's JAAS debugging information | false |

    Persistent storage


    Common

| Key | Description | Default | Type | Required |
|---|---|---|---|---|
| lenses.storage.hikaricp.[*] | To pass additional properties to the HikariCP connection pool | | | no |

    H2

| Key | Description | Default | Type | Required |
|---|---|---|---|---|
| lenses.storage.directory | The full path to a directory for Lenses to use for persistence | "./storage" | string | no |

    Postgres

    Set in security.conf

| Key | Description | Default | Type | Required |
|---|---|---|---|---|
| lenses.storage.postgres.host | Host of PostgreSQL server for Lenses to use for persistence | | string | no |
| lenses.storage.postgres.port | Port of PostgreSQL server for Lenses to use for persistence | 5432 | integer | no |
| lenses.storage.postgres.username | Username for PostgreSQL database user | | string | no |
| lenses.storage.postgres.password | Password for PostgreSQL database user | | string | no |
| lenses.storage.postgres.database | PostgreSQL database name for Lenses to use for persistence | | string | no |
| lenses.storage.postgres.schema | PostgreSQL schema name for Lenses to use for persistence | "public" | string | no |
| lenses.storage.postgres.properties.[*] | To pass additional properties to the PostgreSQL JDBC driver | | | no |

    Microsoft SQL Server

    Set in security.conf

| Key | Description | Default | Type | Required |
|---|---|---|---|---|
| lenses.storage.mssql.host | Specifies the hostname or IP address of the Microsoft SQL Server instance | | string | yes |
| lenses.storage.mssql.port | Specifies the TCP port number that Lenses uses to connect to the Microsoft SQL Server database | | int | yes |
| lenses.storage.mssql.schema | Specifies the database schema Lenses uses within Microsoft SQL Server | | string | yes |
| lenses.storage.mssql.database | Specifies the Microsoft SQL Server database Lenses connects to | | string | yes |
| lenses.storage.mssql.username | Specifies the username that Lenses uses to authenticate with the Microsoft SQL Server database | | string | yes |
| lenses.storage.mssql.password | Specifies the password that Lenses uses to authenticate with the Microsoft SQL Server database | | string | yes |
| lenses.storage.mssql.properties | Allows additional properties to be set for the Microsoft SQL Server JDBC driver | | | no |

    Schema registries

    Set in lenses.conf

If the records' schemas are centralized, the connectivity to the Schema Registry nodes is defined by a Lenses Connection.

    There are two static config entries to enable/disable the deletion of schemas:

| Key | Description | Type |
|---|---|---|
| lenses.schema.registry.delete | Allow schemas to be deleted. Default is false | boolean |
| lenses.schema.registry.cascade.delete | Deletes associated schemas when a topic is deleted. Default is false | boolean |
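For example, to allow both operations, add the following to lenses.conf:

```
lenses.schema.registry.delete = true          # allow schema deletion (default: false)
lenses.schema.registry.cascade.delete = true  # also delete schemas when their topic is deleted (default: false)
```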

    Deployments

    Set in lenses.conf

    Options for specific deployment targets:

    • Global options

    • Kubernetes

    Global options

    Common settings, independently of the underlying deployment target:

| Key | Description | Default |
|---|---|---|
| lenses.deployments.events.buffer.size | Buffer size for events coming from Deployment targets such as Kubernetes | 10000 |
| lenses.deployments.errors.buffer.size | Buffer size for errors happening on the communication between Lenses and the Deployment targets such as Kubernetes | 1000 |

    Kubernetes

Kubernetes connectivity is optional. Minimum supported K8s client version: 0.11.10. All settings are string.

    Set in lenses.conf

| Key | Description | Default |
|---|---|---|
| lenses.kubernetes.service.account | The service account for deployments. Will also pull the image | default |
| lenses.kubernetes.processor.image.name | The url for the streaming SQL Docker image for K8s | lensesioextra/sql-processor |
| lenses.kubernetes.processor.image.tag | The version/tag of the above container | 5.2 |
| lenses.kubernetes.config.file | The path for the kubectl config file | /home/lenses/.kube/config |
| lenses.kubernetes.pull.policy | Pull policy for K8s containers: IfNotPresent or Always | IfNotPresent |
| lenses.kubernetes.init.container.image.name | The docker/container repository url and name of the Init Container image used to deploy applications to Kubernetes | lensesio/lenses-cli |
| lenses.kubernetes.init.container.image.tag | The tag of the Init Container image used to deploy applications to Kubernetes | 5.2.0 |
| lenses.kubernetes.watch.reconnect.limit | How many times to reconnect to the Kubernetes Watcher before considering the cluster unavailable | 10 |
| lenses.kubernetes.watch.reconnect.interval | How long to wait between Kubernetes Watcher reconnection attempts, in milliseconds | 5000 |
| lenses.kubernetes.websocket.timeout | How long to wait for a Kubernetes Websocket response, in milliseconds | 15000 |
| lenses.kubernetes.websocket.ping.interval | How often to ping the Kubernetes Websocket to check it's alive, in milliseconds | 30000 |
| lenses.kubernetes.pod.heap | The max amount of memory the underlying Java process will use | 900M |
| lenses.kubernetes.pod.min.heap | The initial amount of memory the underlying Java process will allocate | 128M |
| lenses.kubernetes.pod.mem.request | How much memory the Pod Container will request | 128M |
| lenses.kubernetes.pod.mem.limit | The Pod Container memory limit | 1152M |
| lenses.kubernetes.pod.cpu.request | How much cpu the Pod Container will request | null |
| lenses.kubernetes.pod.cpu.limit | The Pod Container cpu limit | null |
| lenses.kubernetes.namespaces | Object setting a list of Kubernetes namespaces that Lenses will see for each of the specified and configured clusters | null |
| lenses.kubernetes.pod.liveness.initial.delay | Amount of time Kubernetes will wait to check the Processor's health for the first time. It can be expressed like 30 second, 2 minute or 3 hour; mind that the time unit is singular | 60 second |
| lenses.deployments.events.buffer.size | Buffer size for events coming from Deployment targets such as Kubernetes | 10000 |
| lenses.deployments.errors.buffer.size | Buffer size for errors happening on the communication between Lenses and the Deployment targets such as Kubernetes | 1000 |
| lenses.kubernetes.config.reload.interval | Time interval to reload the Kubernetes configuration file, in milliseconds | 30000 |

    SQL snapshot (Explore & Studio)

    Optimization settings for SQL queries.

    Set in lenses.conf

| Key | Description | Type | Default |
|---|---|---|---|
| lenses.sql.settings.show.bad.records | By default show bad records when querying a kafka topic | boolean | true |
| lenses.sql.settings.format.timestamp | By default convert AVRO dates to a human readable format | boolean | true |
| lenses.sql.settings.live.aggs | By default allow aggregation queries on kafka data | boolean | true |
| lenses.sql.settings.max.size | Restricts the max bytes that a kafka sql query will return | long | 20971520 (20MB) |
| lenses.sql.settings.max.query.time | Max time (in msec) that a sql query will run | int | 3600000 (1h) |
| lenses.sql.settings.max.idle.time | Max time (in msec) for a query when it reaches the end of the topic | int | 5000 (5 sec) |
| lenses.sql.sample.default | Number of messages to sample when live tailing a kafka topic | int | 2/window |
| lenses.sql.sample.window | How frequently to sample messages when tailing a kafka topic | int | 200 msec |
| lenses.sql.websocket.buffer | Buffer size for messages in a SQL query | int | 10000 |
| lenses.metrics.workers | Number of workers for parallelising SQL queries | int | 16 |
| lenses.kafka.ws.buffer.size | Buffer size for the WebSocket consumer | int | 10000 |
| lenses.kafka.ws.max.poll.records | Max number of kafka messages to return in a single poll() | long | 1000 |
| lenses.sql.state.dir | Folder to store KStreams state | string | logs/lenses-sql-kstream-state |
| lenses.sql.udf.packages | The list of allowed java packages for UDFs/UDAFs | array of strings | ["io.lenses.sql.udf"] |

    Lenses internal Kafka topics

Lenses requires these Kafka topics to be available; otherwise it will try to create them. The topics can be created manually before Lenses runs, or Lenses can be given the required Kafka ACLs to create them:

    Set in lenses.conf

| Key | Description | Partition | Replication | Default | Compacted | Retention |
|---|---|---|---|---|---|---|
| lenses.topics.external.topology | Topic for applications to publish their topology | 1 | 3 (recommended) | __topology | yes | N/A |
| lenses.topics.external.metrics | Topic for external applications to publish their metrics | 1 | 3 (recommended) | __topology__metrics | no | 1 day |
| lenses.topics.metrics | Topic for SQL Processors to send their metrics | 1 | 3 (recommended) | _kafka_lenses_metrics | no | |

    To allow for fine-grained control over the replication factor of the three topics, the following settings are available:

| Key | Description | Default |
|---|---|---|
| lenses.topics.replication.external.topology | Replication factor for the lenses.topics.external.topology topic | 1 |
| lenses.topics.replication.external.metrics | Replication factor for the lenses.topics.external.metrics topic | 1 |
| lenses.topics.replication.metrics | Replication factor for the lenses.topics.metrics topic | 1 |

    When configuring the replication factor for your deployment, it's essential to consider the requirements imposed by your cloud provider. Many cloud providers enforce a minimum replication factor to ensure data durability and high availability. For example, IBM Cloud mandates a minimum replication factor of 3. Therefore, it's crucial to set the replication factor for the Lenses internal topics to at least 3 when deploying Lenses on IBM Cloud.
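For example, on such a provider you would raise all three factors in lenses.conf:

```
lenses.topics.replication.external.topology = 3
lenses.topics.replication.external.metrics = 3
lenses.topics.replication.metrics = 3
```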

    Advanced

    All time configuration options are in milliseconds.

    Set in lenses.conf

| Key | Description | Type | Default |
|---|---|---|---|
| lenses.interval.summary | How often to refresh the kafka topic list and configs | long | 10000 |
| lenses.interval.consumers.refresh.ms | How often to refresh kafka consumer group info | long | 10000 |
| lenses.interval.consumers.timeout.ms | How long to wait for kafka consumer group info to be retrieved | long | 300000 |
| lenses.interval.partitions.messages | How often to refresh kafka partition info | long | 10000 |
| lenses.interval.type.detection | How often to check kafka topic payload info | long | 30000 |
| lenses.interval.user.session.ms | How long a client session stays alive if inactive (4 hours) | long | 14400000 |
| lenses.interval.user.session.refresh | How often to check for idle client sessions | long | 60000 |
| lenses.interval.topology.topics.metrics | How often to refresh topology info | long | 30000 |
| lenses.interval.schema.registry.healthcheck | How often to check the schema registries health | long | 30000 |
| lenses.interval.schema.registry.refresh.ms | How often to refresh schema registry data | long | 30000 |
| lenses.interval.metrics.refresh.zk | How often to refresh ZK metrics | long | 5000 |
| lenses.interval.metrics.refresh.sr | How often to refresh Schema Registry metrics | long | 5000 |
| lenses.interval.metrics.refresh.broker | How often to refresh Kafka Broker metrics | long | 5000 |
| lenses.interval.metrics.refresh.connect | How often to refresh Kafka Connect metrics | long | 30000 |
| lenses.interval.metrics.refresh.brokers.in.zk | How often to refresh the Kafka broker list from ZK | long | 5000 |
| lenses.interval.topology.timeout.ms | Time period after which a metric is considered stale | long | 120000 |
| lenses.interval.audit.data.cleanup | How often to clean up dataset view entries from the audit log | long | 300000 |
| lenses.audit.to.log.file | Path to a file to write audits to in JSON format | string | |
| lenses.interval.jmxcache.refresh.ms | How often to refresh the JMX cache used in the Explore page | long | 180000 |
| lenses.interval.jmxcache.graceperiod.ms | How long to pause when a JMX connectivity error occurs | long | 300000 |
| lenses.interval.jmxcache.timeout.ms | How long to wait for a JMX response | long | 500 |
| lenses.interval.sql.udf | How often to look for new UDFs/UDAFs (user defined [aggregate] functions) | long | 10000 |
| lenses.kafka.consumers.batch.size | How many consumer groups to retrieve in a single request | int | 500 |
| lenses.kafka.ws.heartbeat.ms | How often to send heartbeat messages in the TCP connection | long | 30000 |
| lenses.kafka.ws.poll.ms | Max time for kafka consumer data polling on WS APIs | long | 10000 |
| lenses.kubernetes.config.reload.interval | Time interval to reload the Kubernetes configuration file | long | 30000 |
| lenses.kubernetes.watch.reconnect.limit | How many times to reconnect to the Kubernetes Watcher before considering the cluster unavailable | long | 10 |
| lenses.kubernetes.watch.reconnect.interval | How long to wait between Kubernetes Watcher reconnection attempts | long | 5000 |
| lenses.kubernetes.websocket.timeout | How long to wait for a Kubernetes Websocket response | long | 15000 |
| lenses.kubernetes.websocket.ping.interval | How often to ping the Kubernetes Websocket to check it's alive | long | 30000 |
| lenses.akka.request.timeout.ms | Max time for a response in an Akka Actor | long | 10000 |
| lenses.sql.monitor.frequency | How often to emit healthcheck and performance metrics on Streaming SQL | long | 10000 |
| lenses.audit.data.access | Record dataset access as audit log entries | boolean | true |
| lenses.audit.data.max.records | How many dataset view entries to retain in the audit log. Set to -1 to retain indefinitely | int | 500000 |
| lenses.explore.lucene.max.clause.count | Override Lucene's maximum number of clauses permitted per BooleanQuery | int | 1024 |
| lenses.explore.queue.size | Optional setting to bound the Lenses internal queue used by the catalog subsystem. It needs to be a positive integer or it will be ignored | int | N/A |
| lenses.interval.kafka.connect.http.timeout.ms | How long to wait for a Kafka Connect response to be retrieved | int | 10000 |
| lenses.interval.kafka.connect.healthcheck | How often to check the Kafka Connect health | int | 15000 |
| lenses.interval.schema.registry.http.timeout.ms | How long to wait for a Schema Registry response to be retrieved | int | 10000 |
| lenses.interval.zookeeper.healthcheck | How often to check the Zookeeper health | int | 15000 |
| lenses.ui.topics.row.limit | The number of Kafka records to load automatically when exploring a topic | int | 200 |
| lenses.deployments.connect.failure.alert.check.interval | Time interval in seconds to check that the connector failure grace period has completed. Used by the Connect auto-restart failed connectors functionality. It needs to be a value between (1,600] | int | 10 |
| lenses.provisioning.path | Folder on the filesystem containing the provisioning data. See the provisioning docs for further details | string | |
| lenses.provisioning.interval | Time interval in seconds to check for changes on the provisioning resources | int | |
| lenses.schema.registry.client.http.retryOnTooManyRequest | When enabled, Lenses will retry a request whenever the schema registry returns a 429 Too Many Requests | boolean | false |
| lenses.schema.registry.client.http.maxRetryAwait | Max amount of time to wait whenever a 429 Too Many Requests is returned | duration | "2 seconds" |
| lenses.schema.registry.client.http.maxRetryCount | Max retry count whenever a 429 Too Many Requests is returned | integer | 2 |
| lenses.schema.registry.client.http.rate.type | Specifies if http requests to the configured schema registry should be rate limited. Can be "session" or "unlimited" | string | "unlimited" |
| lenses.schema.registry.client.http.rate.maxRequests | When the rate limiter is "session", the max number of requests allowed per window | integer | N/A |
| lenses.schema.registry.client.http.rate.window | When the rate limiter is "session", the duration of the window used | duration | N/A |
| lenses.schema.connect.client.http.retryOnTooManyRequest | Retry a request whenever a connect cluster returns a 429 Too Many Requests | boolean | false |
| lenses.schema.connect.client.http.maxRetryAwait | Max amount of time to wait whenever a 429 Too Many Requests is returned | duration | 2 seconds |
| lenses.schema.connect.client.http.maxRetryCount | Max retry count whenever a 429 Too Many Requests is returned | integer | 2 |
| lenses.connect.client.http.rate.type | Specifies if http requests to the configured connect cluster should be rate limited. Can be "session" or "unlimited" | string | "unlimited" |
| lenses.connect.client.http.rate.maxRequests | When the rate limiter is "session", the max number of requests allowed per window | integer | N/A |
| lenses.connect.client.http.rate.window | When the rate limiter is "session", the duration of the window used | duration | N/A |

    Connectors topology

    Set in lenses.conf

    Control how Lenses identifies your connectors in the Topology view. Catalogue your connector types, set their icons, and control how Lenses extracts the topics used by your connectors.

    Lenses comes preconfigured for some of the popular connectors as well as the Stream Reactor connectors. If you see that Lenses doesn’t automatically identify your connector type then use the lenses.connectors.info setting to register it with Lenses.

Add a new HOCON object {} for every new Connector in your lenses.connectors.info list:

    This configuration allows the connector to work with the topology graph, and also have the RBAC rules applied to it.

    Source example

To extract the topic information from the connector configuration, source connectors require an extra configuration. The extractor class should be io.lenses.config.kafka.connect.SimpleTopicsExtractor. Using this extractor requires an extra property entry, which specifies the field in the connector configuration that determines the topics data is sent to.

    Here is an example for the file source:

    Sink example

An example of a Splunk sink connector and a Debezium SQL Server connector:

    External Applications

    Set in lenses.conf

| Key | Description | Default | Type | Required |
|---|---|---|---|---|
| apps.external.http.state.refresh.ms | When registering a runner for an external app, a health-check interval can be specified. If it is not, this default interval is used (value in milliseconds) | 30000 | int | no |
| apps.external.http.state.cache.expiration.ms | The last known state of the runner is stored in a cache. Entries in the cache are invalidated after the time defined by this key (value in milliseconds). This value should not be lower than apps.external.http.state.refresh.ms | 60000 | int | no |


      lenses.connectors.info = [
          {
            class.name = "The connector full classpath"
            name = "The name which will be presented in the UI"
        instance = "Details about the instance. Contains the connector configuration field which holds the information. If a database is involved it would be the DB connection details, if it is a file it would be the file path, etc"
            sink = true
            extractor.class = "The full classpath for the implementation knowing how to extract the Kafka topics involved. This is only required for a Source"
            icon = "file.png"
            description = "A description for the connector"
            author = "The connector author"
          }
      ]
      lenses.connectors.info = [
        {
          class.name = "org.apache.kafka.connect.file.FileStreamSource"
          name = "File"
          instance = "file"
          sink = false
          property = "topic"
          extractor.class = "io.lenses.config.kafka.connect.SimpleTopicsExtractor"
        }
      ]
      lenses.connectors.info = [
        {
          class.name = "com.splunk.kafka.connect.SplunkSinkConnector"
          name = "Splunk Sink",
          instance = "splunk.hec.uri"
          sink = true,
          extractor.class = "io.lenses.config.kafka.connect.SimpleTopicsExtractor"
          icon = "splunk.png",
          description = "Stores Kafka data in Splunk"
          docs = "https://github.com/splunk/kafka-connect-splunk",
          author = "Splunk"
        },
        {
          class.name = "io.debezium.connector.sqlserver.SqlServerConnector"
      name = "CDC SQL Server"
          instance = "database.hostname"
          sink = false,
          property = "database.history.kafka.topic"
          extractor.class = "io.lenses.config.kafka.connect.SimpleTopicsExtractor"
          icon = "debezium.png"
          description = "CDC data from RDBMS into Kafka"
      docs = "//debezium.io/docs/connectors/sqlserver/",
          author = "Debezium"
        }
      ]