Learn how to connect the Agent to JMX for Kafka, Schema Registries, Kafka Connect, and other services.
Overview
This page provides an overview of Lenses Agent provisioning.
As of version 6.0, calling the REST endpoint for provisioning is no longer available.
Connections are defined in the provisioning.yaml file. The Agent will watch the file and resolve the desired state, applying connections defined in the file.
Defining a Connection
Connections are defined in the provisioning.yaml. This file is divided into components, each component representing a type of connection.
Each component has the following mandatory fields:
Name - the free-form name of the connection.
Version - set to 1.
Configuration - a list of keys/values dependent on the component type.
The provisioning.yaml contains secrets. If you are deploying via Helm, the chart will use Kubernetes secrets.
Additionally, support is provided for referencing environment variables. This allows you to set secrets in your environment and resolve the value at runtime.
sslKeystorePassword:
value: ${ENV_VAR_NAME}
Referencing files
Many connections need files, for example, to secure Kafka with SSL you will need a key store and optionally a trust store.
To reference a file in the provisioning.yaml, set the relevant configuration key to the file name. For example, when a key references my-keystore.jks, a file with that name is expected in the same directory.
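As a sketch of how such a reference might look (the `file` key in place of `value` is an assumption; check the provisioning reference for your release), an SSL keystore could be declared like this:

```yaml
kafka:
  - name: kafka
    version: 1
    configuration:
      protocol:
        value: SSL
      # Assumed syntax: `file` references a file relative to provisioning.yaml,
      # so my-keystore.jks must sit in the same directory
      sslKeystore:
        file: "my-keystore.jks"
      # Passwords can still be resolved from the environment
      sslKeystorePassword:
        value: ${KEYSTORE_PASSWORD}
```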
HQ
This page describes connecting the Lenses Agent to HQ.
To be able to view and drill into your Kafka environment, you need to connect the agent to HQ. You need to create an environment in HQ and copy the Agent Key into the provisioning.yaml.
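As a sketch of the resulting entry (the component and field names here are assumptions, not confirmed by this page; the Agent Key placeholder comes from the HQ environment you created):

```yaml
lensesHq:
  - name: lenses-hq
    version: 1
    configuration:
      # Hypothetical hostname and port of your HQ installation
      server:
        value: lenses-hq.example.com
      port:
        value: 10000
      # Agent Key copied from the environment created in HQ,
      # resolved from an environment variable at runtime
      agentKey:
        value: ${LENSESHQ_AGENT_KEY}
```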
This page describes how to connect the Lenses Agent to your Kafka brokers.
The Lenses Agent can connect to any Kafka cluster or service exposing the Apache Kafka APIs and supporting the authentication methods offered by Apache Kafka.
A Kafka connection is required for the agent to start. You can connect to Kafka via:
Plaintext (no credentials and unencrypted)
SSL (no credentials and encrypted)
SASL Plaintext and SASL SSL
Plaintext
With PLAINTEXT, there's no encryption and no authentication when connecting to Kafka.
The only required fields are:
kafkaBootstrapServers - a list of bootstrap servers (brokers).
It is recommended to add as many brokers (if available) as convenient to this list for fault tolerance.
protocol - depending on the protocol, other fields might be necessary (see examples for other protocols)
In the following example, JMX metrics for the Kafka brokers are configured too, assuming that all brokers expose their JMX metrics on the same port (9581), without SSL or authentication.
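A minimal sketch of such a connection, reusing the field names that appear elsewhere on this page (kafkaBootstrapServers, protocol, metricsPort, metricsType, metricsSsl); the host names are placeholders:

```yaml
kafka:
  - name: kafka
    version: 1
    tags: ["tag1"]
    configuration:
      kafkaBootstrapServers:
        value:
          - PLAINTEXT://my-kafka-host-0:9092
          - PLAINTEXT://my-kafka-host-1:9092
      protocol:
        value: PLAINTEXT
      # all metrics properties are optional; same JMX port on every broker
      metricsPort:
        value: 9581
      metricsType:
        value: JMX
      metricsSsl:
        value: false
```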
With SSL, the connection to Kafka is encrypted. You can also use SSL and certificates to authenticate users against Kafka.
A truststore (with password) might need to be set explicitly if the global truststore of the Agent does not include the Certificate Authority (CA) of the brokers.
If TLS is used for authentication to the brokers in addition to encryption-in-transit, a key store (with passwords) is required.
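As a hedged sketch of an SSL connection with both a truststore and a keystore (the ssl* field names and the `file` reference syntax are assumptions):

```yaml
kafka:
  - name: kafka
    version: 1
    configuration:
      kafkaBootstrapServers:
        value:
          - SSL://my-kafka-host-0:9093
      protocol:
        value: SSL
      # Needed if the Agent's global truststore does not include the brokers' CA
      sslTruststore:
        file: "my-truststore.jks"
      sslTruststorePassword:
        value: ${TRUSTSTORE_PASSWORD}
      # Needed only when TLS is also used to authenticate the Agent to the brokers
      sslKeystore:
        file: "my-keystore.jks"
      sslKeystorePassword:
        value: ${KEYSTORE_PASSWORD}
```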
There are two SASL-based protocols for accessing Kafka brokers: SASL_SSL and SASL_PLAINTEXT. Both require a SASL mechanism and JAAS configuration values. What differs is:
Whether the transport layer is encrypted (SSL)
The SASL mechanism used for authentication (PLAIN, AWS_MSK_IAM, GSSAPI)
In addition to this, there might be a keytab file required, depending on the SASL mechanism (for example when using GSSAPI mechanism, most often used for Kerberos).
To use Kerberos authentication, a Kerberos connection should be created beforehand.
When encryption-in-transit is used (with SASL_SSL), a trust store might need to be set explicitly if the global trust store of Lenses does not include the CA of the brokers.
SASL SSL
Mechanism PLAIN
Encrypted communication and basic username and password for authentication.
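A sketch of a SASL_SSL connection with the PLAIN mechanism (the saslMechanism and saslJaasConfig field names are assumptions; the JAAS module itself is the standard Apache Kafka PlainLoginModule):

```yaml
kafka:
  - name: kafka
    version: 1
    configuration:
      kafkaBootstrapServers:
        value:
          - SASL_SSL://my-kafka-host-0:9093
      protocol:
        value: SASL_SSL
      saslMechanism:
        value: PLAIN
      # Standard Kafka JAAS configuration for PLAIN;
      # the password is resolved from the environment at runtime
      saslJaasConfig:
        value: |
          org.apache.kafka.common.security.plain.PlainLoginModule required
          username="my-user"
          password="${KAFKA_PASSWORD}";
```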
This page describes connecting the Lenses Agent to an AWS MSK cluster.
It is recommended to install the Agent on an EC2 instance or with EKS in the same VPC as your MSK cluster. The Agent can be installed and preconfigured via the AWS Marketplace.
Open network connectivity
Edit the AWS MSK security group in the AWS Console and add the IP address of your Agent installation.
MSK Security group
Enable Open Monitoring
If you want to have the Agent collect JMX metrics you have to enable Open Monitoring on your MSK cluster. Follow the AWS guide here.
Select your MSK endpoint
Depending on your MSK cluster, select the endpoint and protocol you want to connect with.
It is not recommended to use Plaintext for secure environments. For these environments use TLS or IAM.
When the Agent is running inside AWS and is connecting to an Amazon’s Managed Kafka (MSK) instance, IAM can be used for authentication.
This page describes how to connect Lenses to an Amazon MSK Serverless cluster.
It is recommended to install the Agent on an EC2 instance or with EKS in the same VPC as your MSK Serverless cluster.
Security Groups
Enable communication between the Agent and the Amazon MSK Serverless cluster by opening the cluster's security group in the AWS Console and adding the IP address of your Agent installation.
MSK Serverless security group
IAM Policy
To authenticate the Agent and access resources within your MSK Serverless cluster, you need to create an IAM policy and apply it to the resource (EC2 instance, EKS cluster, etc.) running the Agent service. Here is an example IAM policy with sufficient permissions, which you can associate with the relevant IAM role:
Update the placeholders in the IAM policy based on the relevant MSK Serverless cluster ARN.
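The example policy is not reproduced above; the following sketch uses the standard kafka-cluster IAM actions for MSK Serverless (the exact action list your deployment needs may differ; replace the bracketed placeholders with your cluster's values):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kafka-cluster:Connect",
        "kafka-cluster:DescribeCluster",
        "kafka-cluster:*Topic*",
        "kafka-cluster:ReadData",
        "kafka-cluster:WriteData",
        "kafka-cluster:DescribeGroup",
        "kafka-cluster:AlterGroup"
      ],
      "Resource": "arn:aws:kafka:[REGION]:[ACCOUNT_ID]:*/[CLUSTER_NAME]/*"
    }
  ]
}
```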
To integrate with the AWS Glue Schema Registry, you also need to extend the policy with additional actions covering the registry and its schemas.
More details about how IAM works with MSK Serverless can be found in the documentation: MSK Serverless
Limitations
When using the Agent with MSK Serverless:
The agent does not receive Prometheus-compatible metrics from the brokers because they are not exported outside of CloudWatch.
The agent does not configure quotas and ACLs because MSK Serverless does not allow this.
Azure EventHubs
This page describes connecting Lenses to Azure EventHubs.
1. Create a Data Integration API key
Add a shared access policy
Navigate to your Event Hub resource and select Shared access policies in the Settings section.
Select + Add shared access policy, give it a name, and check all boxes for the permissions (Manage, Send, Listen).
Once the policy is created, obtain the Primary Connection String by clicking the policy and copying the connection string. The connection string will be used as the JAAS password to connect to Kafka.
The bootstrap broker is [YOUR_EVENT_HUBS_NAMESPACE].servicebus.windows.net:9093.
2. Configure Provisioning
Set the following in the provisioning.yaml.
First, set the environment variable.
Note that the "\" before "$ConnectionString" is added to escape the $ sign.
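A sketch of the resulting provisioning entry, assuming the connection string is exported as EVENTHUBS_CONNECTION_STRING and using field names that are assumptions (saslMechanism, saslJaasConfig); the username "$ConnectionString" is the literal value Event Hubs expects for its Kafka endpoint:

```yaml
kafka:
  - name: kafka
    version: 1
    configuration:
      kafkaBootstrapServers:
        value:
          - SASL_SSL://[YOUR_EVENT_HUBS_NAMESPACE].servicebus.windows.net:9093
      protocol:
        value: SASL_SSL
      saslMechanism:
        value: PLAIN
      # The "\" escapes the $ sign; the policy's Primary Connection String
      # is the JAAS password, resolved from the environment
      saslJaasConfig:
        value: |
          org.apache.kafka.common.security.plain.PlainLoginModule required
          username="\$ConnectionString"
          password="${EVENTHUBS_CONNECTION_STRING}";
```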
This page provides an overview of connecting the Lenses Agent to Schema Registries.
Consider Rate Limiting if you have a high number of schemas.
Authentication
TLS and basic authentication are supported for connections to Schema Registries.
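As a sketch of a registry connection with basic authentication (the component and field names here are assumptions for a Confluent-compatible registry; host and credentials are placeholders):

```yaml
confluentSchemaRegistry:
  - name: schema-registry
    version: 1
    configuration:
      schemaRegistryUrls:
        value:
          - https://my-schema-registry-host:8081
      # Basic authentication credentials, if the registry requires them
      username:
        value: my-user
      password:
        value: ${SCHEMA_REGISTRY_PASSWORD}
```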
JMX Metrics
The Agent can collect Schema registry metrics via:
JMX
Jolokia
Supported formats
AVRO
PROTOBUF
JSON and XML formats are supported by Lenses but without a backing schema registry.
Schema deletion
To enable the deletion of schemas in the UI, set the following in the lenses.conf file.
lenses.conf
## Enable schema deletion in the Lenses UI
## default: false
lenses.schema.registry.delete = true
## When a topic is deleted,
## automatically delete also its associated Schema Registry subjects
## default: false
lenses.schema.registry.cascade.delete = true
IBM Event Streams supports hard deletes only
AWS Glue
This page describes connecting to AWS Glue.
An AWS Glue Schema Registry connection depends on an AWS connection.
Set the following examples in the provisioning.yaml.
These are examples of provisioning Lenses with an AWS connection named my-aws-connection and an AWS Glue Schema Registry that references it.
A custom truststore is needed when the Schema Registry is served over TLS (encryption-in-transit) and the Registry’s certificate is not signed by a trusted CA.
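A sketch of such a pair (the glue component and its field names are assumptions; the aws component follows the example shown later on this page):

```yaml
aws:
  - name: my-aws-connection
    version: 1
    configuration:
      # Authenticate via the default AWS credentials chain
      authMode:
        value: "Credentials Chain"
glue:
  - name: my-glue-schema-registry
    version: 1
    configuration:
      # References the AWS connection defined above by name
      awsConnection:
        value: my-aws-connection
      # Hypothetical field: ARN of the Glue Schema Registry
      glueRegistryArn:
        value: arn:aws:glue:[REGION]:[ACCOUNT_ID]:registry/[REGISTRY_NAME]
```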
This page describes adding a Kafka Connect Cluster to the Lenses Agent.
Lenses integrates with Kafka Connect Clusters to manage connectors.
The name of a Kafka Connect connection may only contain alphanumeric characters ([A-Za-z0-9]) and dashes (-). Valid examples would be dev, Prod1, SQLCluster, Prod-1, SQL-Team-Awesome.
Multiple Kafka Connect clusters are supported.
If you are using Kafka Connect < 2.6, set the following to ensure you can see connectors.
A custom truststore is needed when the Kafka Connect workers are served over TLS (encryption-in-transit) and their certificates are not signed by a trusted CA.
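As a sketch of a Connect cluster connection (the connect component and field names are assumptions; worker URLs and credentials are placeholders):

```yaml
connect:
  - name: dev
    version: 1
    configuration:
      # REST endpoints of the Kafka Connect workers in this cluster
      workers:
        value:
          - http://my-connect-worker-0:8083
          - http://my-connect-worker-1:8083
      # Basic authentication, if the workers require it
      username:
        value: my-user
      password:
        value: ${CONNECT_PASSWORD}
```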
If you have developed your own connector or are not using a Lenses connector, you can still display the connector instances in the topology. To do this, Lenses needs to know which configuration option of the connector defines the topic the connector reads from or writes to. This is set in the connectors.info parameter in the lenses.conf file.
lenses.conf
connectors.info = [
{
class.name = "The connector full classpath"
name = "The name which will be presented in the UI"
instance = "Details about the instance. Contains the connector configuration field which holds the information. If a database is involved it would be the DB connection details, if it is a file it would be the file path, etc"
sink = true
extractor.class = "The full classpath for the implementation knowing how to extract the Kafka topics involved. This is only required for a Source"
icon = "file.png"
description = "A description for the connector"
author = "The connector author"
}
]
Zookeeper
This page describes adding a Zookeeper connection to the Lenses Agent.
Set the following examples in provisioning.yaml
Simple configuration, without metrics
zookeeper:
- name: Zookeeper
version: 1
tags: ["tag1"]
configuration:
zookeeperUrls:
value:
- my-zookeeper-host-0:2181
- my-zookeeper-host-1:3181
- my-zookeeper-host-2:4181
# optional, a suffix to Zookeeper's connection string
zookeeperChrootPath:
value: "/mypath"
zookeeperSessionTimeout:
value: 10000 # in milliseconds
zookeeperConnectionTimeout:
value: 10000 # in milliseconds
Simple configuration, with JMX metrics
Simple configuration with Zookeeper metrics read via JMX.
zookeeper:
- name: Zookeeper
version: 1
tags: ["tag1"]
configuration:
zookeeperUrls:
value:
- my-zookeeper-host-0:2181
- my-zookeeper-host-1:3181
- my-zookeeper-host-2:4181
# optional, a suffix to Zookeeper's connection string
zookeeperChrootPath:
value: "/mypath"
zookeeperSessionTimeout:
value: 10000 # in milliseconds
zookeeperConnectionTimeout:
value: 10000 # in milliseconds
# all metrics properties are optional
metricsPort:
value: 9581
metricsType:
value: JMX
metricsSsl:
value: false
With such a configuration, Lenses will use the 3 Zookeeper nodes and will try to read their metrics from my-zookeeper-host-0:9581, my-zookeeper-host-1:9581 and my-zookeeper-host-2:9581 (notice the same port, 9581, used for all of them, as defined by the metricsPort property).
If the Agent is deployed on an EC2 instance or has access to AWS credentials in the default AWS credentials chain, those can be used instead.
provisioning.yaml
aws:
- name: my-aws-connection
version: 1
tags: [tag1, tag2]
configuration:
# Way to authenticate against AWS: Credentials Chain or Access Key.
authMode:
value:
# Access key ID of an AWS IAM account.
accessKeyId:
value:
# Secret access key of an AWS IAM account.
secretAccessKey:
value:
# AWS region to connect to. If not provided, this is deferred to client
# configuration.
region:
value:
# Specifies the session token value that is required if you are using temporary
# security credentials that you retrieved directly from AWS STS operations.
sessionToken:
value:
Alert & Audit integrations
Connect the Lenses Agent to your alerting and auditing systems.
The Agent can send out alerts and audits events. Once you have configured alert and audit connections, you can create alert and audit channels to route events to them.
Alerts
DataDog
provisioning.yaml
datadog:
- name: my-datadog-connection
version: 1
tags: [tag1, tag2]
configuration:
# The Datadog site.
site:
value:
# The Datadog API key.
apiKey:
value:
# The Datadog application key.
applicationKey:
value:
pagerduty:
- name: my-pagerduty-connection
version: 1
tags: [tag1, tag2]
configuration:
# An Integration Key for PagerDuty's service with Events API v2 integration type.
integrationKey:
value:
Slack
provisioning.yaml
slack:
- name: my-slack-connection
version: 1
tags: [tag1, tag2]
configuration:
# The Slack endpoint to send the alert to.
webhookUrl:
value:
Alert Manager
provisioning.yaml
alertManager:
- name: my-alertmanager-connection
version: 1
tags: [tag1, tag2]
configuration:
# Comma separated list of Alert Manager endpoints.
endpoints:
value:
Webhook (Email, SMS, HTTP and MS Teams)
provisioning.yaml
webhook:
- name: my-webhook-alert-connection
version: 1
tags: [tag1, tag2]
configuration:
# The host name of the webhook endpoint.
host:
value:
# The port number of the webhook endpoint. (int)
port:
value:
# Set to true in order to set the URL scheme to https.
# Will otherwise default to http.
useHttps:
value:
# An array of (secret) strings to be passed over to alert channel plugins.
creds:
value:
-
-
Audits
Webhook
provisioning.yaml
webhook:
- name: my-webhook-audit-connection
version: 1
tags: [tag1, tag2]
configuration:
# The host name of the webhook endpoint.
host:
value:
# The port number of the webhook endpoint. (int)
port:
value:
# Set to true in order to set the URL scheme to https.
# Will otherwise default to http.
useHttps:
value:
# An array of (secret) strings to be passed over to audit channel plugins.
creds:
value:
-
-
Splunk
provisioning.yaml
splunk:
- name: my-splunk-connection
version: 1
tags: [tag1, tag2]
configuration:
# The host name for the HTTP Event Collector API of the Splunk instance.
host:
value:
# The port number for the HTTP Event Collector API of the Splunk instance. (int)
port:
value:
# Use TLS. Boolean, default false
useHttps:
value:
# This is not encouraged but is required for a Splunk Cloud Trial instance. (boolean)
insecure:
value:
# HTTP event collector authorization token. (string)
token:
value:
Infrastructure JMX Metrics
This page describes how to configure JMX metrics for Connections in Lenses.
All core services (Kafka, Schema Registry, Kafka Connect, Zookeeper) use the same set of properties for service monitoring.
The Agent will discover all the brokers by itself and will try to fetch metrics using metricsPort, metricsCustomUrlMappings and other properties (if specified).
JMX
Simple
The same port used for all brokers/workers/nodes. No SSL, no authentication.
Such a configuration means that the Agent will try to connect via JMX to every kafkaBootstrapServers.host:metricsPort pair; following the example: my-kafka-host-0:9581.
Jolokia
For Jolokia the Agent supports two types of requests: GET (JOLOKIAG) and POST (JOLOKIAP).
For JOLOKIA each entry value in metricsCustomUrlMappings must contain protocol.
Simple
The same port used for all brokers/workers/nodes. No SSL, no authentication.
JOLOKIA monitoring works on top of the HTTP protocol. To fetch metrics, the Agent performs either a GET or a POST request. The HTTP request timeout can be configured via the httpRequestTimeout property (a value in milliseconds); its default is 20 seconds.
httpRequestTimeout:
value: 30000
Custom Metrics HTTP Suffix
The default suffix for Jolokia endpoints is /jolokia/, so that is the value that should be provided. Sometimes the suffix differs; it can be customized using the metricsHttpSuffix field.
metricsHttpSuffix:
value: /custom/
AWS
Before enabling the collection of metrics in the Agent's provisioning configuration, make sure that open monitoring with Prometheus is enabled on your MSK Provisioned cluster.
AWS has a predefined metrics configuration: the Agent hits the Prometheus endpoint on port 11001 for each broker. The AWS metrics connection can be customized in Lenses using the metricsUsername, metricsPassword, httpRequestTimeout, metricsHttpSuffix, metricsCustomUrlMappings and metricsSsl properties, but this is rarely needed, as AWS follows its own fixed convention that is unlikely to change. Customization can be achieved only via the API or CLI; the UI does not support it.