This page provides examples for defining a connection to Kerberos.
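A minimal sketch of such a connection in provisioning.yaml (the krb5Config property name follows the API schema later in this guide; treat the exact shape as illustrative):

```yaml
kerberos:
  - name: kerberos        # the only allowed name for this connection
    version: 1
    configuration:
      krb5Config:
        file: krb5.conf   # supplied alongside the provisioning file
```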
This page describes the supported deployment methods for Lenses.
To automate the configuration of connections we recommend using provisioning.
Lenses can be deployed in the following ways:
This page describes installing Lenses via a Linux archive.
On start-up, Lenses will be in bootstrap mode unless it has an existing Kafka Connection. See provisioning for automating.
To install Lenses from the archive you must:
Extract the archive
Configure Lenses
Start Lenses
Extract the archive using the following command
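For example (the archive file name is illustrative; use the one you downloaded):

```bash
tar -xzvf lenses-latest-linux64.tar.gz
```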
Inside the extracted archive, you will find the Lenses installation files.
Start Lenses by running:
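A sketch, assuming the default archive layout with the startup script under bin/:

```bash
bin/lenses
```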
or pass the location of the config file:
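For example, with the configuration file in the current directory:

```bash
bin/lenses lenses.conf
```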
If you do not pass the location of the config file, Lenses will look for it inside the current (runtime) directory. If it does not exist, it will try its installation directory.
To stop Lenses, press CTRL+C.
Set the permissions of security.conf to be readable only by the lenses user.
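For example (assuming Lenses runs as the lenses user):

```bash
chown lenses:lenses security.conf
chmod 0600 security.conf
```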
The agent needs write access in 4-5 places in total:
[RUNTIME DIRECTORY]
When Lenses runs, it will create at least one directory under the directory it is run in:
[RUNTIME DIRECTORY]/logs
Where logs are stored
[RUNTIME DIRECTORY]/logs/lenses-sql-kstream-state
Where SQL processors (when in In-Process mode) store state. To change the location for the processors’ state directory, use the lenses.sql.state.dir option.
[RUNTIME DIRECTORY]/storage
Where the H2 embedded database is stored when PostgreSQL is not set. To change this directory, use the lenses.storage.directory option.
/run
(Global directory for temporary data at runtime)
Used for temporary files. If Lenses does not have permission to use it, it will fall back to /tmp.
/tmp
(Global temporary directory)
Used for temporary files (if access to /run fails), and JNI shared libraries.
Back-up this location for disaster recovery
Lenses and Kafka use two common Java libraries that take advantage of JNI and are extracted to /tmp.
You must either:
Mount /tmp without noexec
or set org.xerial.snappy.tempdir and java.io.tmpdir to a different location
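For example, the properties can be passed via LENSES_OPTS (the directory is illustrative; it must exist and allow execution):

```bash
export LENSES_OPTS="-Dorg.xerial.snappy.tempdir=/opt/lenses/tmp -Djava.io.tmpdir=/opt/lenses/tmp"
```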
If your server uses systemd as its service manager, you can use it to manage Lenses (start upon system boot, stop, restart). Below is a simple unit file that starts Lenses automatically on system boot.
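A minimal sketch, assuming Lenses is installed under /opt/lenses and runs as the lenses user:

```ini
[Unit]
Description=Lenses
After=network.target

[Service]
Type=simple
User=lenses
Group=lenses
WorkingDirectory=/opt/lenses
ExecStart=/opt/lenses/bin/lenses /opt/lenses/lenses.conf
Restart=on-failure

[Install]
WantedBy=multi-user.target
```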
Lenses uses the default trust store (cacerts) of the system’s JRE (Java Runtime) installation. The trust store is used to verify remote servers on TLS connections, such as Kafka Brokers with an SSL protocol, Secure LDAP, JMX over TLS, and more. Whilst for some types of connections (e.g. Kafka Brokers) a separate keystore can be provided at the connection’s configuration, for some other connections (e.g. Secure LDAP and JMX over TLS) we always rely on the system trust store.
It is possible to set up a global custom trust store via the LENSES_OPTS environment variable:
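For example (the path and password are illustrative):

```bash
export LENSES_OPTS="-Djavax.net.ssl.trustStore=/path/to/truststore.jks -Djavax.net.ssl.trustStorePassword=changeit"
```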
Run on any Linux server. For RHEL 6.x and CentOS 6.x, use Docker.
Linux machines typically have a soft limit of 1024 open file descriptors. Check your current limit with the ulimit command:
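To print the current soft limit for open files:

```bash
ulimit -S -n
```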
As a super-user, increase the soft limit to 4096 with:
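```bash
ulimit -S -n 4096
```

Note that this applies to the current shell session; persist it via /etc/security/limits.conf or your service manager as needed.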
Provision 6GB of RAM, 4 CPUs, and 500MB of disk space.
Open Lenses in your browser, log in with admin/admin, configure your brokers, and add your license.
Helm
Deploy Lenses in your Kubernetes cluster with Helm.
Docker
Deploy Lenses with Docker.
Linux (archive)
Deploy Lenses on Linux servers or VMs.
AWS Marketplace
Deploy Lenses via the AWS Marketplace.
Lenses Box
Try out Lenses with the Lenses Box.
This page provides examples for defining a connection to Zookeeper.
Simple configuration with Zookeeper metrics read via JMX.
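A sketch of such a provisioning entry (hostnames and the client port are illustrative):

```yaml
zookeeper:
  - name: zookeeper          # the only allowed name for this connection
    version: 1
    configuration:
      zookeeperUrls:
        value:
          - my-zookeeper-host-0:2181
          - my-zookeeper-host-1:2181
          - my-zookeeper-host-2:2181
      metricsType:
        value: JMX
      metricsPort:
        value: 9581
```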
With such a configuration, Lenses will use 3 Zookeeper nodes and will try to read their metrics from the following URLs (notice the same port, 9581, used for all of them, as defined by the metricsPort property):
my-zookeeper-host-0:9581
my-zookeeper-host-1:9581
my-zookeeper-host-2:9581
This page describes how to configure JMX metrics for Connections in Lenses.
All core services (Kafka, Schema Registry, Kafka Connect, Zookeeper) use the same set of properties for services’ monitoring.
The Agent will discover all the brokers by itself and will try to fetch metrics using metricsPort, metricsCustomUrlMappings and other properties (if specified).
The same port used for all brokers/workers/nodes. No SSL, no authentication.
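A sketch of a Kafka connection with such metrics settings (hostnames are illustrative):

```yaml
kafka:
  - name: kafka
    version: 1
    configuration:
      kafkaBootstrapServers:
        value:
          - PLAINTEXT://my-kafka-host-0:9092
      protocol:
        value: PLAINTEXT
      metricsType:
        value: JMX
      metricsPort:
        value: 9581
```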
Such a configuration means that the Agent will try to connect using JMX with every pair of kafkaBootstrapServers.host:metricsPort, so following the example: my-kafka-host-0:9581.
For Jolokia the Agent supports two types of requests: GET (JOLOKIAG) and POST (JOLOKIAP).
For JOLOKIA, each entry value in metricsCustomUrlMappings must contain the protocol.
The same port used for all brokers/workers/nodes. No SSL, no authentication.
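A sketch of the metrics-related part of a connection configuration using Jolokia POST requests (the port is illustrative):

```yaml
      metricsType:
        value: JOLOKIAP
      metricsPort:
        value: 8778
```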
JOLOKIA monitoring works on top of the HTTP protocol. To fetch metrics, the Agent has to perform either a GET or a POST request. The HTTP request timeout can be configured using the httpRequestTimeout property (ms value). Its default value is 20 seconds.
The default suffix for Jolokia endpoints is /jolokia/, so that is the value that should be provided. Sometimes the suffix can differ, so it can be customized using the metricsHttpSuffix field.
AWS has a predefined metrics configuration. The Agent hits the Prometheus endpoint using port 11001 for each broker. The AWS metrics connection can be customized in Lenses using the metricsUsername, metricsPassword, httpRequestTimeout, metricsHttpSuffix, metricsCustomUrlMappings and metricsSsl properties, but most likely no one will need to do that: AWS has its own standard, and it is unlikely to change. Customization can be achieved only via the API or the CLI; the UI does not support it.
There is also a way to configure custom mapping for each broker (Kafka) / node (Schema Registry, Zookeeper) / worker (Kafka Connect).
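A sketch (hosts and ports are illustrative):

```yaml
      kafkaBootstrapServers:
        value:
          - PLAINTEXT://my-kafka-host-0:9092
          - PLAINTEXT://my-kafka-host-1:9092
      metricsPort:
        value: 9581
      metricsCustomUrlMappings:
        value:
          my-kafka-host-0:9092: my-kafka-host-0:9582
```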
Such a configuration means that the Agent will try to connect using JMX for:
my-kafka-host-0:9582 - because of metricsCustomUrlMappings
my-kafka-host-1:9581 - because of metricsPort and no entry in metricsCustomUrlMappings
This page provides examples for defining a connection to Schema Registries.
The URLs (nodes) should always have a scheme defined (http:// or https://).
For Basic Authentication, define username and password properties.
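A sketch with Basic Authentication (URLs and credentials are illustrative; property names follow the API schema in this guide):

```yaml
schemaRegistry:
  - name: schema-registry      # the only allowed name for this connection
    version: 1
    configuration:
      schemaRegistryUrls:
        value:
          - https://my-schema-registry-host:8081
      username:
        value: my-username
      password:
        value: my-password
```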
A custom truststore is needed when the Schema Registry is served over TLS (encryption-in-transit) and the Registry’s certificate is not signed by a trusted CA.
A custom truststore might be necessary too (see above).
By default, Lenses will use hard delete for Schema Registry. To use soft delete, add the following property:
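A sketch (the property name follows the API schema’s hard-delete flag; treat it as an assumption):

```yaml
      hardDelete:
        value: false
```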
Some connections depend on others. One example is the AWS Glue Schema Registry connection, which depends on an AWS connection. The following are examples of provisioning Lenses with an AWS connection named my-aws-connection and an AWS Glue Schema Registry that references it.
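A heavily hedged sketch; property names not present in the API schema (awsConnection, glueRegistryArn) are illustrative assumptions:

```yaml
aws:
  - name: my-aws-connection
    version: 1
    configuration:
      authMode:
        value: Access Key           # way to authenticate against AWS
      accessKeyId:
        value: my-access-key-id
      secretAccessKey:
        value: my-secret-access-key
      region:
        value: eu-west-1

glueSchemaRegistry:
  - name: schema-registry
    version: 1
    configuration:
      awsConnection:
        value: my-aws-connection    # references the AWS connection above
      glueRegistryArn:
        value: arn:aws:glue:eu-west-1:123456789012:registry/my-registry
```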
This page provides examples for defining a connection to Kafka.
If deploying with Helm, put the connections YAML under provisioning in the values file.
With PLAINTEXT, there's no encryption and no authentication when connecting to Kafka.
The only required fields are:
kafkaBootstrapServers - a list of bootstrap servers (brokers). It is recommended to add as many brokers (if available) as convenient to this list for fault tolerance.
protocol - depending on the protocol, other fields might be necessary (see examples for other protocols).
In the following example, JMX metrics for the Kafka brokers are configured too, assuming that all brokers expose their JMX metrics using the same port (9581), without SSL and authentication.
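A sketch (hostnames are illustrative):

```yaml
kafka:
  - name: kafka                # the only allowed name for this connection
    version: 1
    tags: [dev]
    configuration:
      kafkaBootstrapServers:
        value:
          - PLAINTEXT://my-kafka-host-0:9092
          - PLAINTEXT://my-kafka-host-1:9092
      protocol:
        value: PLAINTEXT
      metricsType:
        value: JMX
      metricsPort:
        value: 9581
```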
With SSL, the connection to Kafka is encrypted. You can also use SSL and certificates to authenticate users against Kafka.
A truststore (with password) might need to be set explicitly if the global truststore of Lenses does not include the Certificate Authority (CA) of the brokers.
If TLS is used for authentication to the brokers in addition to encryption-in-transit, a key store (with passwords) is required.
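A sketch with TLS client authentication (file references and passwords are illustrative):

```yaml
kafka:
  - name: kafka
    version: 1
    configuration:
      kafkaBootstrapServers:
        value:
          - SSL://my-kafka-host-0:9093
      protocol:
        value: SSL
      sslTruststore:
        file: my-truststore.jks
      sslTruststorePassword:
        value: my-truststore-password
      sslKeystore:
        file: my-keystore.jks
      sslKeystorePassword:
        value: my-keystore-password
      sslKeyPassword:
        value: my-key-password
```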
There are two SASL-based protocols to access Kafka Brokers: SASL_SSL and SASL_PLAINTEXT. Both require a SASL mechanism and JAAS configuration values. They differ in:
Whether the transport layer is encrypted (SSL)
The SASL mechanism used for authentication (PLAIN, AWS_MSK_IAM, GSSAPI).
In addition to this, there might be a keytab file required, depending on the SASL mechanism (for example when using GSSAPI mechanism, most often used for Kerberos).
In order to use Kerberos authentication, a Kerberos connection should be created beforehand.
Apart from that, when encryption-in-transit is used (with SASL_SSL), a trust store might need to be set explicitly if the global trust store of Lenses does not include the CA of the brokers.
Following are a few examples of SASL_PLAINTEXT and SASL_SSL.
Encrypted communication and basic username and password for authentication.
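A sketch (credentials are illustrative):

```yaml
kafka:
  - name: kafka
    version: 1
    configuration:
      kafkaBootstrapServers:
        value:
          - SASL_SSL://my-kafka-host-0:9094
      protocol:
        value: SASL_SSL
      saslMechanism:
        value: PLAIN
      saslJaasConfig:
        value: |
          org.apache.kafka.common.security.plain.PlainLoginModule required
          username="my-username"
          password="my-password";
```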
When Lenses is running inside AWS and is connecting to an Amazon’s Managed Kafka (MSK) instance, IAM can be used for authentication.
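A sketch; the IAM login module and callback handler class come from the aws-msk-iam-auth library:

```yaml
kafka:
  - name: kafka
    version: 1
    configuration:
      kafkaBootstrapServers:
        value:
          - SASL_SSL://my-msk-broker-0:9098
      protocol:
        value: SASL_SSL
      saslMechanism:
        value: AWS_MSK_IAM
      saslJaasConfig:
        value: software.amazon.msk.auth.iam.IAMLoginModule required;
      additionalProperties:
        value:
          sasl.client.callback.handler.class: software.amazon.msk.auth.iam.IAMClientCallbackHandler
```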
In order to use Kerberos authentication, a Kerberos connection should be created beforehand.
No SSL encryption of communication; credentials are communicated to Kafka in clear text.
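A sketch with GSSAPI/Kerberos (principal and keytab are illustrative):

```yaml
kafka:
  - name: kafka
    version: 1
    configuration:
      kafkaBootstrapServers:
        value:
          - SASL_PLAINTEXT://my-kafka-host-0:9095
      protocol:
        value: SASL_PLAINTEXT
      saslMechanism:
        value: GSSAPI
      keytab:
        file: my-keytab.keytab
      saslJaasConfig:
        value: |
          com.sun.security.auth.module.Krb5LoginModule required
          useKeyTab=true
          storeKey=true
          keyTab="my-keytab.keytab"
          principal="lenses@EXAMPLE.COM";
```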
Lenses interacts with your Kafka Cluster via the Kafka Client API. To override the default behavior, use additionalProperties.
By default there shouldn’t be a need to use additional properties; use them only if really necessary, as wrong usage might break the communication with Kafka.
Lenses SQL processors use the same Kafka connection information provided to Lenses.
This page gives examples of the provisioning yaml for Lenses.
To use with Helm, place the examples under lenses.provisioning.connections in the values file.
This page provides examples for defining a connection to Kafka Connect Clusters.
The URLs (workers) should always have a scheme defined (http:// or https://).
This example uses an optional AES-256 key. The key decodes values encoded with AES-256 to enable passing encrypted values to connectors. It is only needed if your cluster uses the AES-256 decryption plugin.
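A sketch (worker URLs and the key are illustrative; property names follow the API schema in this guide):

```yaml
connect:
  - name: my-connect-cluster
    version: 1
    configuration:
      workers:
        value:
          - http://my-connect-worker-0:8083
          - http://my-connect-worker-1:8083
      aes256Key:
        value: my-32-byte-long-aes256-secret-key
```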
For Basic Authentication, define username and password properties.
A custom truststore is needed when the Kafka Connect workers are served over TLS (encryption-in-transit) and their certificates are not signed by a trusted CA.
A custom truststore might be necessary too (see above).
This page describes importing and exporting resources from Lenses to YAML via the CLI.
The CLI allows you to import and export resources to and from files.
Import is done on a per-resource basis, with the directory structure defined by the CLI. A base directory can be provided with the --dir flag.
Processors, connectors, topics, and schemas have an additional prefix flag to restrict which resources to export.
The expected directory structure is:
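An illustrative sketch of the layout (the base directory name is arbitrary; subdirectory names follow the resource types above):

```
export/
├── processors/
├── connectors/
├── topics/
└── schemas/
```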
Only updates to the name, cluster name, namespace, and runner are allowed. Changes to the SQL are effectively the creation of a new Processor.
This page describes installing Lenses with the Docker image.
On start-up, Lenses will be in bootstrap mode unless it has an existing Kafka Connection. See provisioning for automating.
The Lenses docker image can be configured via environment variables or via volume mounts for the configuration files (lenses.conf, security.conf).
Open Lenses in your browser, log in with admin/admin, and configure your brokers and add your license.
Environment variables prefixed with LENSES_ are transformed into corresponding configuration options. The environment variable name is converted to lowercase and underscores (_) are replaced with dots (.). For example, to set the option lenses.port, use the environment variable LENSES_PORT.
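For example (a sketch; the image tag is illustrative):

```bash
docker run -d -p 3030:3030 \
  -e LENSES_PORT=3030 \
  lensesio/lenses:latest
```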
Alternatively, the lenses.conf and security.conf can be mounted directly as:
/mnt/settings/lenses.conf
/mnt/secrets/security.conf
The Docker image exposes four volumes in total, where cache, logs, plugins, and persistent data are stored:
/data/storage
/data/plugins
/data/logs
/data/kafka-streams-state
Resides under /data/storage and is used to store persistent data, such as Data Policies. For this data to survive between Docker runs and/or Lenses upgrades, the volume must be managed externally (persistent volume).
Resides under /data/plugins; it’s where classes that extend Lenses may be added, such as custom serdes, LDAP filters, UDFs for the Lenses SQL table engine, and custom HTTP implementations.
Resides under /data/logs; logs are stored here. The application also logs to stdout, so the log files aren’t needed in most cases.
Resides under /data/kafka-streams-state; used when Lenses SQL runs in IN_PROC mode. In such a case, Lenses uses this scratch directory to cache Lenses SQL internal state. Whilst this directory can safely be removed, it can be beneficial to keep it around, so the Processors won’t have to rebuild their state during a restart.
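A sketch of a run that persists the storage and plugins volumes between executions (host paths and tag are illustrative):

```bash
docker run -d --name lenses \
  -p 3030:3030 \
  -v /opt/lenses/storage:/data/storage \
  -v /opt/lenses/plugins:/data/plugins \
  lensesio/lenses:latest
```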
By default, Lenses serves connections over plaintext (HTTP). It is possible to use TLS instead. The Docker image offers the ability to provide the content for extra files via secrets mounted as files or as environment variables. Especially for SSL, the Docker image supports SSL/TLS keys and certificates in Java Keystore (JKS) format.
This capability is optional, and users can mount such files under custom paths and configure lenses.conf manually via environment variables, or lenses.append.conf.
There are two ways to use the File/Variable names of the table below.
Create a file with the appropriate filename as listed below and mount it under /mnt/settings, /mnt/secrets, or /run/secrets.
Set them as environment variables.
All settings except for passwords can optionally be encoded in base64. The Docker image will detect such encoding automatically.
The Docker image does not require running as root. The default user is set to root for convenience and to verify upon start-up that all the directories and files have the correct permissions. The user drops to nobody and group nogroup (65534:65534) before starting Lenses.
If the image is started without root privileges, the agent will start successfully using the effective uid:gid applied. Ensure any volumes mounted (i.e., for the license, settings, and data) have the correct permission set.
This page describes the Provisioning API reference.
For the options for each connection, see the Schema/Object of the PUT call.
This page describes how to install Lenses via the AWS Marketplace.
The AWS Marketplace offering requires AWS MSK (Managed Apache Kafka) to be available. Optionally, AWS RDS (or any other PostgreSQL-compatible database) can be configured for Lenses to store its state.
The following AWS resources are created:
An EC2 instance that runs Lenses;
A SecurityGroup to allow network access to the Lenses UI;
A SecurityGroupIngress for Lenses to connect to MSK;
A CloudWatch LogGroup where Lenses stores its logs;
An IAM Role to allow the EC2 instance to store logs;
An IAM InstanceProfile to pass the role to the EC2 instance;
Optionally if enabled during deployment: an IAM Policy to allow the EC2 instance to emit CloudWatch metrics.
Deployment takes approximately three minutes.
Select CloudFormation Template, Lenses EC2 and your region.
Choose Launch CloudFormation.
Continue with the default options for creating the stack in the AWS wizard.
Fill in the parameters at Specify stack details.
Deployment: here the EC2 instance size and the password for the Lenses admin user are set. A t2.large instance size is recommended;
Network Configuration: this section controls the network settings of the Lenses EC2 instance. The ingress allows access to the Lenses UI only from particular IP addresses;
MSK: set the Security Group ID to that of your MSK cluster. A rule will be added to it so that Lenses can communicate with your cluster. You can find the ID by navigating in the AWS console to your MSK cluster and then under Properties -> Networking settings;
Monitoring: optionally produce the Lenses logs to CloudWatch;
Storage: Lenses stores its state in a database locally on the EC2 instance’s disk or in a PostgreSQL database. Local storage is a development/quickstart option and is not suitable for production use. It is advised to use a Postgres database for smoother upgrades.
Review the stack.
Accept the terms and conditions and create the stack.
Once the stack has deployed, go to the Outputs tab and click the FQDN link. If there are no outputs listed, you might need to press the refresh button.
Log in to Lenses with admin and the password value you submitted for the parameter LensesAdminPassword.
Lenses supports connection to MSK brokers via IAM. If Lenses is deployed on an EC2 instance it will use the default credential chain loader to authenticate and connect to MSK.
The following Regions are supported: us-east-1, us-east-2, us-west-1, us-west-2, ca-central-1, eu-central-1, eu-west-1, eu-west-2, eu-west-3, ap-southeast-1, ap-southeast-2, ap-south-1, ap-northeast-1, ap-northeast-2, sa-east-1.
Please:
Do not use your AWS root user for deployment or operations;
Follow the least privileges principle when granting access to individual IAM user accounts;
Avoid allowing traffic to the Lenses UI from a broad CIDR block where a more specific block could be used.
AWS billing applies for the EC2 instance, CloudWatch logs and optionally CloudWatch metrics.
For the hourly billed version, additional hourly charges apply, which depend on the instance size. For the Bring Your Own License (BYOL) version, you can get a free trial license here.
In case you run into problems, e.g. you cannot connect to Lenses, the logs can provide more information. The easiest route is to go to CloudWatch in the AWS console. There, find the log group corresponding to your deployment (it has the same name as the deployment) and pick a log stream. The stream with the /lenses.log suffix contains all log lines regardless of log level; the stream with the /lenses-warn.log suffix only contains warning-level logs.
If the above fails, for example because the logs integration is broken, you can SSH into the EC2 instance. Lenses is installed into /opt/lenses, and the logs can be found under /opt/lenses/logs for further inspection.
This page describes automating (provisioning) connections and channels for Lenses at installation and how to apply updates.
On start-up, Lenses will be in bootstrap mode unless it has an existing Kafka Connection.
To fully start Lenses and perform basic functions, you need two key pieces of information:
Kafka Connection
Valid License
If provisioning is enabled, any changes made in the UI will be overridden.
A dedicated API, called provisioning, is available to handle bootstrapping key connections at installation time. This allows you to fully install and configure key connections such as Kafka, Schema Registry, Kafka Connect, and Zookeeper in one go. You can use either of the following approaches, depending on your needs: File Watcher provisioning or API-based provisioning. Both approaches use a YAML file to define connections.
Connections are defined in the provisioning.yaml file.
This file is divided into components, each component representing a type of connection.
Each component must have:
Name - This is the free name of the connection
Version set to 1
Optional tags
Configuration - This is a list of keys/values and is dependent on the component type.
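A minimal sketch of the file’s shape (component and property names are illustrative):

```yaml
kafka:                       # component type
  - name: kafka              # free name of the connection
    version: 1
    tags: [dev]              # optional
    configuration:           # keys/values depend on the component type
      kafkaBootstrapServers:
        value:
          - PLAINTEXT://my-kafka-host-0:9092
      protocol:
        value: PLAINTEXT
```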
For a full list of configuration options for each connection, see the Provisioning API spec.
This page describes installing Lenses in Kubernetes via Helm.
Only Helm version 3 is supported.
On start-up, Lenses will be in bootstrap mode unless it has an existing Kafka Connection. Enable provisioning to automate the creation of connections.
First, add the Helm Chart repository using the Helm command line:
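For example (the repository URL is the one published by Lenses; verify against the current docs):

```bash
helm repo add lensesio https://helm.repo.lenses.io
helm repo update
```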
Use helm to install Lenses with default values:
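For example (release and namespace names are illustrative):

```bash
helm install lenses lensesio/lenses --namespace lenses --create-namespace
```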
The default install of Lenses will place Lenses in bootstrap mode; you can add the connections to Kafka manually and upload your license, or automate this with provisioning. Please refer to the GitHub values.yaml for all options.
To automatically provision the connections to Kafka and other systems, set the .Values.lenses.provision.connections to be the YAML definition of your connections. For a full list of the connection types supported, see Provisioning.
The chart will render the full YAML specified under this setting as the provisioning.yaml file.
Alternatively, you can use a second YAML file which contains only the connections, and pass it at the command line when installing:
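For example (file names are illustrative):

```bash
helm install lenses lensesio/lenses \
  --namespace lenses \
  -f values.yaml \
  -f connections.yaml
```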
You must explicitly enable provisioning via lenses.provision.enabled: true, otherwise Lenses will start in bootstrap mode.
The chart uses:
Secrets to store Lenses Postgres credentials and authentication credentials
Secrets to store connection credentials such as Kafka SASL_SCRAM password or password for SSL JKS stores.
Secrets to hold the base64 encoded values of the JKS stores
ConfigMap for Lenses configuration overrides
Cluster roles and role bindings (optional).
Secrets and config maps are mounted as files under the mount /mnt:
settings - holds the lenses.conf
secrets - holds the Lenses secrets and license
provision-secrets - holds the secrets for connections in the provisioning.yaml file
provision-secrets/files - holds any file needed for a connection, e.g. JKS files.
The Helm chart creates Cluster roles and bindings; these are used by SQL Processors if the deployment mode is set to KUBERNETES. They are used so that Lenses can deploy and monitor SQL Processor deployments in namespaces.
To disable the RBAC set: rbacEnabled: false
If you want to limit the permissions Lenses has against your Kubernetes cluster, you can use Role/RoleBinding resources instead.
To achieve this you need to create a Role and a RoleBinding resource in the namespace you want the processors deployed to.
For example:
Lenses namespace = lenses-ns
Processor namespace = lenses-proc-ns
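A sketch for these namespaces (resources and verbs are illustrative; grant only what your processors need, and adjust the service account to the one Lenses runs under):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: sql-processors
  namespace: lenses-proc-ns
rules:
  - apiGroups: ["apps", ""]
    resources: ["deployments", "pods", "services"]
    verbs: ["create", "get", "list", "watch", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: sql-processors
  namespace: lenses-proc-ns
subjects:
  - kind: ServiceAccount
    name: default            # the service account Lenses uses, in its own namespace
    namespace: lenses-ns
roleRef:
  kind: Role
  name: sql-processors
  apiGroup: rbac.authorization.k8s.io
```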
Finally, you need to define in the Lenses configuration which namespaces Lenses can access. To achieve this, amend values.yaml to contain the following:
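A sketch (the key follows the sql.namespaces value mentioned later on this page; the exact nesting is an assumption, so consult the chart’s values.yaml):

```yaml
sql:
  namespaces:
    - lenses-proc-ns
```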
The main configurable options for lenses.conf are available in the values.yaml under the lenses object. These include:
Authentication
Database connections
SQL processor configurations
To apply other static configurations, use lenses.append.conf, for example:
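A sketch (the setting shown is illustrative):

```yaml
lenses:
  append:
    conf: |
      lenses.interval.summary=10000
```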
Set accordingly under lenses.security. For SSO, set lenses.security.saml.
To use Postgres as the backing store for Lenses, set the details in the lenses.storage.postgres object.
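A sketch (field names are illustrative; consult the chart’s values.yaml for the exact keys):

```yaml
lenses:
  storage:
    postgres:
      enabled: true
      host: my-postgres-host
      port: 5432
      database: lenses
      username: lenses
```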
If Postgres is not enabled, a default embedded H2 database is used. To enable persistence for this data:
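A sketch (key names are illustrative; consult the chart’s values.yaml):

```yaml
persistence:
  enabled: true
  size: 5Gi
```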
The chart relies on secrets for sensitive information such as passwords. Secrets can rotate and are commonly stored in an external store such as Azure Key Vault, HashiCorp Vault, or AWS Secrets Manager.
If you wish to have the chart use external secrets that are synchronized with these providers, set the following for the Lenses user:
For Postgres, add additional ENV variables via the lenses.additionalEnv object to point to your secret, and set the username and password to external in the Postgres section.
While the chart supports setting TLS on Lenses itself, we recommend placing it on the Ingress resource.
Ingress and service resources are supported.
Enable an Ingress resource in the values.yaml:
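A sketch (host and key names are illustrative):

```yaml
ingress:
  enabled: true
  host: lenses.example.com
```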
Enable a service resource in the values.yaml:
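A sketch (key names are illustrative):

```yaml
service:
  enabled: true
  type: LoadBalancer
```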
To control the resources used by Lenses:
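A sketch matching the sizing guidance above (standard Kubernetes resource requests/limits):

```yaml
resources:
  requests:
    cpu: "4"
    memory: 6Gi
  limits:
    memory: 6Gi
```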
To enable SQL processors in KUBERNETES mode and control the defaults:
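A sketch (key and image names are illustrative; consult the chart’s values.yaml):

```yaml
lenses:
  sql:
    mode: KUBERNETES
    processorImage: lensesioextra/sql-processor
    processorImageTag: latest
```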
To control the namespaces Lenses can deploy processors to, use the sql.namespaces value.
Prometheus metrics are automatically exposed on port 9102 under /metrics.
For Connections, see Provisioning examples. You can also find examples in the Helm chart repo.
This page describes how to use the Lenses provisioning API to set up connections to Kafka and other services and have changes applied.
Building on the provisioning.yaml, the provisioning API allows uploading the files directly to Lenses from anywhere with network access, without access to the host where Lenses is installed.
Many connections need files; for example, to secure Kafka with SSL you will need a keystore and optionally a truststore.
To reference a file in the provisioning.yaml, set the configuration option’s key to "file" and its value to the reference used in the API request. For example, given:
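A sketch of a keystore reference inside a Kafka connection (the reference name is illustrative):

```yaml
      sslKeystore:
        file: my-keystore-file
```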
To upload the file to be used for the configuration option sslKeystore, add the following to the request:
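A sketch of the multipart form field (the part name matches the reference above):

```bash
-F "my-keystore-file=@${PATH_TO_KEYSTORE_FILE};type=application/octet-stream"
```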
Set the type to application/octet-stream.
The name of the part in the multipart request (supporting files) should match the value of the property pointing to the mounted file in the provisioning.yaml descriptor. This ensures accurate mapping and referencing of files.
Set LENSES_SESSION_TOKEN to the value of the Lenses service account token you want to use to automate provisioning.
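A sketch of the full request (the hostname and endpoint path are assumptions; check the Provisioning API reference for the exact URL):

```bash
curl -X PUT "https://my-lenses-host/api/v1/state/provision" \
  -H "Authorization: Bearer ${LENSES_SESSION_TOKEN}" \
  -F provisioning=@"resources/provisioning.yaml" \
  -F "my-keystore-file=@${PATH_TO_KEYSTORE_FILE};type=application/octet-stream"
```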
In this example, the provisioning.yaml is read from provisioning=@"resources/provisioning.yaml". The provisioning.yaml contains a reference to "my-keystore-file", which is loaded from @${PATH_TO_KEYSTORE_FILE};type=application/octet-stream.
The provisioning.yaml contains secrets. If you are deploying via Helm the chart will use Kubernetes secrets.
Additionally, support is provided for referencing environment variables. This allows you to set secrets in your environment and have the value resolved at runtime, e.g. injecting an environment variable from GitHub secrets for passwords.
This page describes how to use the Lenses file watcher to set up connections to Kafka and other services and have changes applied.
Connections are defined in the provisioning.yaml file. Lenses will then watch the file and resolve the desired state, applying the connections defined in the file.
If a connection is not defined in the file but exists in Lenses, it will be removed. It is very important to keep your provisioning YAML updated to reflect the desired state.
File watcher provisioning must be explicitly enabled. Set the following in the lenses.conf file:
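A sketch (the path is illustrative; the key follows lenses.provisioning.path described below):

```
lenses.provisioning.path = "/mnt/provision"
```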
Updates to the file will be loaded and applied if valid without a restart of Lenses.
Lenses expects a set of files in the directory defined by lenses.provisioning.path. The structure of the directory must follow:
files/ directory for storing any certificates, JKS files or other files needed by the connection
provisioning.yaml - This is the main file, holding the definition of the connections
license.json - Your lenses license file
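An illustrative layout:

```
/mnt/provision/
├── provisioning.yaml
├── license.json
└── files/
    └── my-keystore.jks
```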
The provisioning.yaml contains secrets. If you are deploying via Helm, the chart will use Kubernetes secrets.
Additionally, support is provided for referencing environment variables. This allows you to set secrets in your environment and have the value resolved at runtime.
Many connections need files; for example, to secure Kafka with SSL you will need a key store and optionally a trust store.
To reference a file in the provisioning.yaml, for example, given:
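A sketch (the property follows the Kafka connection schema in this guide):

```yaml
      sslKeystore:
        file: my-keystore.jks
```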
a file called my-keystore.jks is expected in the files directory. This file will be used for the key store location.
Lenses Box is a container solution for building applications on a local Apache Kafka instance running in Docker.
Lenses Box contains all components of the Apache Kafka ecosystem, CLI tools, and synthetic data streams.
To start with Box, get your free development license online.
Install Docker and run the Box image.
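A sketch (the license environment variable and image tag are illustrative; use the exact command provided with your development license):

```bash
docker run -e ADV_HOST=127.0.0.1 \
  -e EULA="<your license url>" \
  --rm -p 3030:3030 -p 9092:9092 \
  lensesio/box:latest
```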
Open Lenses in your browser, log in with admin/admin.
The broker in the Kafka docker has broker id 101 and advertises the listener configuration endpoint to accept client connections.
If you run Docker on macOS or Windows, you may need to find the address of the VM running Docker and export it as the advertised listener address for the broker (on macOS it is usually 192.168.99.100). At the same time, you should give the lensesio/box image access to the VM’s network:
If you run on Linux, you don’t have to set ADV_HOST, but you can do something cool with it: if you set it to your machine’s IP address, you can access Kafka from any client in your network.
If you decide to run Box in the cloud, you (and all your team) can access Kafka from your development machines. Remember to provide the public IP of your server as the Kafka advertised host for your producers and consumers to access it.
Kafka JMX metrics are enabled by default. Refer to the ports table; once you expose the relevant port, i.e. -p 9581:9581, you can connect to JMX with:
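For example, with JConsole (the host is illustrative):

```bash
jconsole 127.0.0.1:9581
```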
If you are using docker-machine, or setting this up in a cloud, or DOCKER_HOST is a custom IP address such as 192.168.99.100, you will need to use the parameters --net=host -e ADV_HOST=192.168.99.100.
To persist the Kafka data between multiple executions, provide a name for your Docker instance and do not set the container to be removed automatically (--rm flag). For example:
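A sketch (the container name is illustrative):

```bash
docker run -e ADV_HOST=127.0.0.1 --name lenses-box \
  -p 3030:3030 -p 9092:9092 \
  lensesio/box
```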
Once you want to free up resources, just press Control-C. Now you have two options: either remove the container:
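Using the container name from the example above:

```bash
docker rm lenses-box
```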
Or use it at a later time and continue from where you left off:
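Again using the container name from above:

```bash
docker start -a lenses-box
```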
Download your key locally and run the command:
The container runs multiple services; it is recommended to allocate 5GB of RAM to Docker (although it can operate even with less than 4GB).
To reduce the memory footprint, it is possible to disable some connectors and shrink the Kafka Connect heap size by applying these options (choose the connectors to keep) to the docker run command:
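A sketch (the CONNECTORS and CONNECT_HEAP variables follow the fast-data-dev conventions Box is based on; treat them as assumptions):

```bash
docker run -e ADV_HOST=127.0.0.1 \
  -e CONNECTORS="jdbc,elastic" \
  -e CONNECT_HEAP=512m \
  --rm -p 3030:3030 -p 9092:9092 \
  lensesio/box
```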
| File / Variable Name | Description |
|---|---|
| FILECONTENT_JVM_SSL_TRUSTSTORE | The SSL/TLS trust store to use as the global JVM trust store. Adds the property javax.net.ssl.trustStore to LENSES_OPTS |
| FILECONTENT_JVM_SSL_TRUSTSTORE_PASSWORD | The trust store password. If set, the startup script will automatically add the property javax.net.ssl.trustStorePassword to LENSES_OPTS (base64 not supported) |
| FILECONTENT_LENSES_SSL_KEYSTORE | The SSL/TLS keystore to use for the TLS listener for Lenses |
| Service | Port Number |
|---|---|
| Kafka broker | 9092 |
| Kafka Connect | 8083 |
| Zookeeper | 2181 |
| Schema Registry | 8081 |
| Lenses | 3030 |
| Elasticsearch | 9200 |
| Kafka broker JMX | 9581 |
| Schema Registry JMX | 9582 |
| Kafka Connect JMX | 9584 |
| Zookeeper JMX | 9585 |
| Kafka broker (SSL) | 9093 |
| Variable | Description |
|---|---|
| ADV_HOST=[ip-address] | The IP address that the broker will advertise |
| DEBUG=1 | Prints all stdout and stderr of processes to the container’s stdout for debugging |
| DISABLE_JMX=1 | Disables exposing JMX metrics on Kafka services |
| ELASTICSEARCH_PORT=0 | Will not start Elasticsearch |
| ENABLE_SSL=1 | Creates a CA and key-cert pairs and makes the broker also listen on SSL://127.0.0.1:9093 |
| KAFKA_BROKER_ID=1 | Overrides the broker id (the default id is 101) |
| SAMPLEDATA=0 | Disables the synthetic streaming data generators that run by default |
| SUPERVISORWEB=1 | Enables the supervisor web interface on port 9001 (adjust via SUPERVISORWEB_PORT) to control services |
File Watcher provisioning
Provisioning with a YAML file, with Lenses watching for changes in the file.
API Based provisioning
Using APIs to load the provisioning YAML files.
Helm
Helm Chart Repo
Successful retrieval of system state
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
License successfully updated and current license info returned
It will update the connections state and validate the configuration. If the validation fails, the state will not be updated.
It will only validate the request, not applying any actual change to the system.
It will try to connect to the configured service as part of the validation step.
Configuration in YAML format representing the connections state.
The only allowed name for the Kafka connection is "kafka".
Kafka security protocol.
SSL keystore file path.
Password to the keystore.
Key password for the keystore.
Password to the truststore.
SSL truststore file path.
JAAS Login module configuration for SASL.
Kerberos keytab file path.
Comma separated list of protocol://host:port to use for initial connection to Kafka.
Mechanism to use when authenticated using SASL.
Default port number for metrics connection (JMX and JOLOKIA).
The username for metrics connections.
The password for metrics connections.
Flag to enable SSL for metrics connections.
HTTP URL suffix for Jolokia or AWS metrics.
HTTP Request timeout (ms) for Jolokia or AWS metrics.
Metrics type.
Additional properties for Kafka connection.
Mapping from node URL to metrics URL, allows overriding metrics target on a per-node basis.
DEPRECATED.
The only allowed name for a schema registry connection is "schema-registry".
Path to SSL keystore file
Password to the keystore
Key password for the keystore
Password to the truststore
Path to SSL truststore file
List of schema registry urls
Source for the basic auth credentials
Basic auth user information
Metrics type
Flag to enable SSL for metrics connections
The username for metrics connections
The password for metrics connections
Default port number for metrics connection (JMX and JOLOKIA)
Additional properties for Schema Registry connection
Mapping from node URL to metrics URL, allows overriding metrics target on a per-node basis
DEPRECATED
HTTP URL suffix for Jolokia metrics
HTTP Request timeout (ms) for Jolokia metrics
Username for HTTP Basic Authentication
Password for HTTP Basic Authentication
Enables Schema Registry hard delete
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
The username to connect to the Elasticsearch service.
The password to connect to the Elasticsearch service.
The nodes of the Elasticsearch cluster to connect to, e.g. https://hostname:port. Use the tab key to specify multiple nodes.
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
An Integration Key for PagerDuty's service with Events API v2 integration type.
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
The Datadog site.
The Datadog API key.
The Datadog application key.
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
The Slack endpoint to send the alert to.
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
Comma separated list of Alert Manager endpoints.
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
The host name.
An optional port number to be appended to the hostname.
Set to true in order to set the URL scheme to https. Will otherwise default to http.
An array of (secret) strings to be passed over to alert channel plugins.
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
Way to authenticate against AWS.
Access key ID of an AWS IAM account.
Secret access key of an AWS IAM account.
AWS region to connect to. If not provided, this is deferred to client configuration.
Specifies the session token value that is required if you are using temporary security credentials that you retrieved directly from AWS STS operations.
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
List of Kafka Connect worker URLs.
Username for HTTP Basic Authentication.
Password for HTTP Basic Authentication.
Flag to enable SSL for metrics connections.
The username for metrics connections.
The password for metrics connections.
Metrics type.
Default port number for metrics connection (JMX and JOLOKIA).
AES256 Key used to encrypt secret properties when deploying Connectors to this ConnectCluster.
Name of the SSL algorithm. If empty, the default one will be used (X509).
SSL keystore file.
Password to the keystore.
Key password for the keystore.
Password to the truststore.
SSL truststore file.
Mapping from node URL to metrics URL, allows overriding metrics target on a per-node basis.
DEPRECATED.
HTTP URL suffix for Jolokia metrics.
HTTP Request timeout (ms) for Jolokia metrics.
The only allowed name for a schema registry connection is "schema-registry".
Way to authenticate against AWS. The value for this property corresponds to the name of the AWS connection that contains the authentication mode.
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
Access key ID of an AWS IAM account. The value for this property corresponds to the name of the AWS connection that contains the access key ID.
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
Secret access key of an AWS IAM account. The value for this property corresponds to the name of the AWS connection that contains the secret access key.
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
Specifies the session token value that is required if you are using temporary security credentials that you retrieved directly from AWS STS operations.
Enter the Amazon Resource Name (ARN) of the Glue schema registry that you want to connect to.
The period in milliseconds at which Lenses updates its schema cache from AWS Glue.
The size of the schema cache.
Type of schema registry connection.
Default compatibility mode to use on Schema creation.
The only allowed name for the Zookeeper connection is "zookeeper".
List of zookeeper urls.
Zookeeper /znode path.
Zookeeper connection session timeout.
Zookeeper connection timeout.
Metrics type.
Default port number for metrics connection (JMX and JOLOKIA).
The username for metrics connections.
The password for metrics connections.
Flag to enable SSL for metrics connections.
HTTP URL suffix for Jolokia metrics.
HTTP Request timeout (ms) for Jolokia metrics.
Mapping from node URL to metrics URL, allows overriding metrics target on a per-node basis.
DEPRECATED.
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
The Postgres hostname.
The port number.
The database to connect to.
The user name.
The password.
The SSL connection mode as detailed in https://jdbc.postgresql.org/documentation/head/ssl-client.html.
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
The host name for the HTTP Event Collector API of the Splunk instance.
The port number for the HTTP Event Collector API of the Splunk instance.
Use SSL.
This is not encouraged but is required for a Splunk Cloud Trial instance.
HTTP event collector authorization token.
The only allowed name for the Kerberos connection is "kerberos".
Kerberos krb5 config
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
Attached file(s) needed for establishing the connection. The name of each file part is used as a reference in the manifest.
Successfully updated connection state
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$