This page describes an overview of the Lenses Agent configuration.
The Agent configuration is driven by two files:
lenses.conf
provisioning.yaml
lenses.conf holds the database connections and low-level options for the agent.
provisioning.yaml defines your Kafka cluster and supporting services that the Agent connects to. In addition, it defines the connection to HQ. The Agent watches provisioning.yaml, so any changes made, if valid, are applied. See Provisioning for more information. Without provisioning, your agent cannot connect to HQ.
This page describes connecting a Lenses Agent to HQ.
To be able to view and drill into your Kafka environment, you need to connect the agent to HQ. You need to create an environment in HQ and copy the Agent Key into the provisioning.yaml.
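A minimal sketch of what this looks like in provisioning.yaml; the component and key names (lensesHq, server, port, agentKey) are illustrative and should be checked against the provisioning reference for your version:

```yaml
lensesHq:
  - name: lenses-hq
    version: 1
    configuration:
      server:
        value: hq.example.com         # HQ host, illustrative
      port:
        value: 10000                  # HQ agent port, illustrative
      agentKey:
        value: ${LENSESHQ_AGENT_KEY}  # the Agent Key copied from the environment created in HQ
```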
This page describes an overview of Lenses Agent Provisioning.
As of version 6.0, calling the REST endpoint for provisioning is no longer available.
Connections are defined in the provisioning.yaml file. The Agent watches the file and resolves the desired state, applying the connections defined in it. The file is divided into components, each component representing a type of connection.
For each component, the following fields are mandatory:
Name - the free-form name of the connection
Version - set to 1
Configuration - a list of keys/values dependent on the component type
The provisioning.yaml contains secrets. If you are deploying via Helm, the chart will use Kubernetes secrets.
Additionally, support is provided for referencing environment variables. This allows you to set secrets in your environment and resolve the value at runtime.
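For example, a secret can be taken from the environment at runtime (sketch; the surrounding connection definition is omitted):

```yaml
      password:
        value: ${KAFKA_PASSWORD}  # resolved from the environment at runtime
```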
Many connections need files, for example, to secure Kafka with SSL you will need a key store and optionally a trust store.
To reference a file in the provisioning.yaml, point the relevant configuration key at the file; a file called my-keystore.jks is then expected in the same directory.
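A sketch of such a reference (the file key is an assumption based on the provisioning format):

```yaml
      sslKeystore:
        file: my-keystore.jks  # resolved relative to provisioning.yaml
```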
This page describes connecting the Lenses Agent to Apache Kafka.
A Kafka connection is required for the agent to start. You can connect to Kafka via:
Plaintext (no credentials, unencrypted)
SSL (no credentials, encrypted)
SASL Plaintext and SASL SSL
With PLAINTEXT, there's no encryption and no authentication when connecting to Kafka.
The only required fields are:
kafkaBootstrapServers - a list of bootstrap servers (brokers). It is recommended to add as many brokers (if available) as convenient to this list for fault tolerance.
protocol - depending on the protocol, other fields might be necessary (see examples for other protocols)
In the following example, JMX metrics for the Kafka brokers are configured too, assuming that all brokers expose their JMX metrics using the same port (9581), without SSL or authentication.
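A sketch of such a connection (hosts are illustrative; the metrics keys are described in the connection reference later in this document):

```yaml
kafka:
  - name: kafka
    version: 1
    tags: ["dev"]
    configuration:
      kafkaBootstrapServers:
        value:
          - PLAINTEXT://my-kafka-host-0:9092
          - PLAINTEXT://my-kafka-host-1:9092
      protocol:
        value: PLAINTEXT
      metricsType:
        value: JMX
      metricsPort:
        value: 9581
```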
With SSL the connection to Kafka is encrypted. You can also use SSL and certificates to authenticate users against Kafka.
A truststore (with password) might need to be set explicitly if the global truststore of the Agent does not include the Certificate Authority (CA) of the brokers.
If TLS is used for authentication to the brokers in addition to encryption-in-transit, a key store (with passwords) is required.
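A sketch of an SSL connection with a truststore and, for client authentication, a keystore (key names are illustrative; see the connection reference):

```yaml
kafka:
  - name: kafka
    version: 1
    configuration:
      kafkaBootstrapServers:
        value:
          - SSL://my-kafka-host-0:9093
      protocol:
        value: SSL
      sslTruststore:
        file: truststore.jks  # only if the brokers' CA is not in the global truststore
      sslTruststorePassword:
        value: ${TRUSTSTORE_PASSWORD}
      sslKeystore:
        file: keystore.jks    # only needed for TLS client authentication
      sslKeystorePassword:
        value: ${KEYSTORE_PASSWORD}
      sslKeyPassword:
        value: ${KEY_PASSWORD}
```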
There are two SASL-based protocols to access Kafka brokers: SASL_SSL and SASL_PLAINTEXT. Both require SASL mechanism and JAAS configuration values. What differs is:
Whether the transport layer is encrypted (SSL).
The SASL mechanism for authentication (PLAIN, AWS_MSK_IAM, GSSAPI).
In addition to this, there might be a keytab file required, depending on the SASL mechanism (for example when using GSSAPI mechanism, most often used for Kerberos).
To use Kerberos authentication, a Kerberos Connection should be created beforehand.
When encryption-in-transit is used (with SASL_SSL), a trust store might need to be set explicitly if the global trust store of Lenses does not include the CA of the brokers.
Encrypted communication and basic username and password for authentication.
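A sketch of a SASL_SSL connection with the PLAIN mechanism (the JAAS module is the standard Apache Kafka PlainLoginModule; key names are illustrative):

```yaml
kafka:
  - name: kafka
    version: 1
    configuration:
      kafkaBootstrapServers:
        value:
          - SASL_SSL://my-kafka-host-0:9094
      protocol:
        value: SASL_SSL
      saslMechanism:
        value: PLAIN
      saslJaasConfig:
        value: |
          org.apache.kafka.common.security.plain.PlainLoginModule required
          username="lenses"
          password="${KAFKA_SASL_PASSWORD}";
      sslTruststore:
        file: truststore.jks  # only if the brokers' CA is not in the global truststore
      sslTruststorePassword:
        value: ${TRUSTSTORE_PASSWORD}
```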
In order to use Kerberos authentication, a Kerberos Connection should be created beforehand.
No SSL encryption of communication; credentials are communicated to Kafka in clear text.
This page describes configuring Lenses to connect to Aiven.
This page describes how to connect the Lenses Agent to your Kafka brokers.
The Lenses Agent can connect to any Kafka cluster or service exposing the Apache Kafka APIs and supporting the authentication methods offered by Apache Kafka.
This page describes connecting the Lenses Agent to an AWS MSK cluster.
It is recommended to install the Agent on an EC2 instance or with EKS in the same VPC as your MSK cluster. The Agent can be installed and preconfigured via the AWS Marketplace.
Edit the AWS MSK security group in the AWS Console and add the IP address of your Agent installation.
If you want to have the Agent collect JMX metrics you have to enable Open Monitoring on your MSK cluster. Follow the AWS guide here.
Depending on your MSK cluster, select the endpoint and protocol you want to connect with.
It is not recommended to use Plaintext for secure environments. For these environments use TLS or IAM.
When the Agent is running inside AWS and is connecting to an Amazon’s Managed Kafka (MSK) instance, IAM can be used for authentication.
This page describes how to connect Lenses to an Amazon MSK Serverless cluster.
It is recommended to install the Agent on an EC2 instance or with EKS in the same VPC as your MSK Serverless cluster.
Enable communications between the Agent and the Amazon MSK Serverless cluster by opening the Amazon MSK Serverless cluster's security group in the AWS Console and adding the IP address of your Agent installation.
To authenticate the Agent and access resources within our MSK Serverless cluster, we'll need to create an IAM policy and apply it to the resource (EC2, EKS cluster, etc.) running the Agent service. Here is an example IAM policy with sufficient permissions, which you can associate with the relevant IAM role:
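A sketch using the MSK IAM actions (kafka-cluster:*); replace the placeholder region, account and cluster name with your own:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kafka-cluster:Connect",
        "kafka-cluster:DescribeCluster",
        "kafka-cluster:CreateTopic",
        "kafka-cluster:AlterTopic",
        "kafka-cluster:DescribeTopic",
        "kafka-cluster:ReadData",
        "kafka-cluster:WriteData"
      ],
      "Resource": [
        "arn:aws:kafka:<REGION>:<ACCOUNT_ID>:cluster/<CLUSTER_NAME>/*",
        "arn:aws:kafka:<REGION>:<ACCOUNT_ID>:topic/<CLUSTER_NAME>/*"
      ]
    }
  ]
}
```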
This MSK Serverless IAM policy is to be used after cluster creation. Update the policy with the relevant ARN.
Click your MSK Serverless Cluster in the MSK console and select View Client Information page to check the bootstrap server endpoint.
To enable the creation of SQL Processors that create consumer groups, you need to add the following statement in your IAM policy:
Update the placeholders in the IAM policy based on the relevant MSK Serverless cluster ARN.
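A sketch of that statement (the group ARN is a placeholder):

```json
{
  "Effect": "Allow",
  "Action": [
    "kafka-cluster:AlterGroup",
    "kafka-cluster:DescribeGroup"
  ],
  "Resource": "arn:aws:kafka:<REGION>:<ACCOUNT_ID>:group/<CLUSTER_NAME>/*"
}
```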
To integrate with the AWS Glue Schema Registry, you also need to add the following statement for the registries and schemas in your IAM policy. Update the placeholders based on the relevant registry and schema ARNs.
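A sketch of the Glue statement (the exact set of glue: actions your setup needs may differ; ARNs are placeholders):

```json
{
  "Effect": "Allow",
  "Action": [
    "glue:GetRegistry",
    "glue:ListRegistries",
    "glue:GetSchema",
    "glue:ListSchemas",
    "glue:GetSchemaVersion",
    "glue:ListSchemaVersions",
    "glue:GetSchemaByDefinition",
    "glue:RegisterSchemaVersion"
  ],
  "Resource": [
    "arn:aws:glue:<REGION>:<ACCOUNT_ID>:registry/<REGISTRY_NAME>",
    "arn:aws:glue:<REGION>:<ACCOUNT_ID>:schema/<REGISTRY_NAME>/*"
  ]
}
```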
More details about how IAM works with MSK Serverless can be found in the documentation: MSK Serverless
When using the Agent with MSK Serverless:
The agent does not receive Prometheus-compatible metrics from the brokers because they are not exported outside of CloudWatch.
The agent does not configure quotas and ACLs because MSK Serverless does not allow this.
This page describes how to setup connections to Kafka and other services and have changes applied automatically for the Lenses Agent.
This page describes configuring Lenses to connect to Confluent Cloud.
For Confluent Platform see
This page describes configuring Lenses to connect to Confluent Platform.
For Confluent Cloud see
This page describes connecting Lenses to an Azure HDInsight cluster.
This page describes connecting Lenses to Azure Event Hubs.
Add a shared access policy
Navigate to your Event Hub resource and select Shared access policies in the Settings section.
Select + Add shared access policy, give it a name, and check all boxes for the permissions (Manage, Send, Listen).
Once the policy is created, obtain the Primary Connection String by clicking the policy and copying the connection string. The connection string will be used as a JAAS password to connect to Kafka.
The bootstrap broker is [YOUR_EVENT_HUBS_NAMESPACE].servicebus.windows.net:9093.
First set an environment variable with the connection string, then set the following in the provisioning.yaml. Note that the "\" in "\$ConnectionString" is added to escape the $ sign.
This page describes an overview of connecting a Lenses Agent with Schema Registries
Consider Rate Limiting if you have a high number of schemas.
TLS and basic authentication are supported for connections to Schema Registries.
The Agent can collect Schema registry metrics via:
JMX
Jolokia
Schemas in AVRO and PROTOBUF formats are supported. JSON and XML formats are supported by Lenses but without a backing schema registry.
To enable the deletion of schemas in the UI, set the following in the lenses.conf file:
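The two entries (both listed in the configuration reference below):

```
lenses.schema.registry.delete = true
lenses.schema.registry.cascade.delete = true
```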
IBM Event Streams supports hard deletes only
This page describes connecting to AWS Glue.
An AWS Glue Schema Registry connection depends on an AWS connection.
Set the following examples in provisioning.yaml
These examples provision the Agent with an AWS connection named my-aws-connection and an AWS Glue Schema Registry that references it.
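A sketch, with illustrative key names for the AWS and Glue components; the registry ARN is a placeholder:

```yaml
aws:
  - name: my-aws-connection
    version: 1
    configuration:
      accessKeyId:
        value: ${AWS_ACCESS_KEY_ID}
      secretAccessKey:
        value: ${AWS_SECRET_ACCESS_KEY}
      region:
        value: eu-west-1
glueSchemaRegistry:
  - name: glue-schema-registry
    version: 1
    configuration:
      awsConnection:
        value: my-aws-connection  # references the AWS connection above
      glueRegistryArn:
        value: arn:aws:glue:<REGION>:<ACCOUNT_ID>:registry/<REGISTRY_NAME>
```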
This page describes connecting Lenses to Apicurio.
Apicurio supports the following versions of Confluent's API:
Confluent Schema Registry API v6
Confluent Schema Registry API v7
Set the following examples in provisioning.yaml
Set the schema registry URLs to include the compatibility endpoints, for example:
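For example (sketch; adjust the host and ccompat version to your Apicurio deployment):

```yaml
      schemaRegistryUrls:
        value:
          - http://my-apicurio-host:8080/apis/ccompat/v7
```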
This page describes connecting Lenses to Confluent schema registries.
Set the following examples in provisioning.yaml
The URLs (nodes) should always have a scheme defined (http:// or https://).
For Basic Authentication, define username and password properties.
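A sketch of a Confluent Schema Registry connection with Basic Authentication (key names are illustrative; see the connection reference):

```yaml
confluentSchemaRegistry:
  - name: schema-registry
    version: 1
    configuration:
      schemaRegistryUrls:
        value:
          - https://my-sr-host-0:8081
          - https://my-sr-host-1:8081
      username:
        value: my-basic-auth-user
      password:
        value: ${SR_PASSWORD}
```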
A custom truststore is needed when the Schema Registry is served over TLS (encryption-in-transit) and the Registry’s certificate is not signed by a trusted CA.
A custom truststore might be necessary too (see above).
By default, Lenses will use hard delete for Schema Registry. To use soft delete, add the following property:
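A sketch; the key name corresponds to the "hard delete" flag in the connection reference and is an assumption:

```yaml
      hardDelete:
        value: false
```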
This page describes the memory & CPU prerequisites for Lenses.
This documentation provides memory recommendations for Lenses.io, considering the number of Kafka topics, the number of schemas, and the complexity of these schemas (measured by the number of fields). Proper memory allocation ensures optimal performance and stability of Lenses.io in various environments.
Number of Topics: Kafka topics require memory for indexing, metadata, and state management.
Schemas and Their Complexity: The memory impact of schemas is influenced by both the number of schemas and the number of fields within each schema. Each schema field contributes to the creation of Lucene indexes, which affects memory usage.
For a basic setup with minimal topics and schemas:
Minimum Memory: 4 GB
Recommended Memory: 8 GB
This setup assumes:
Fewer than 100 topics
Fewer than 100 schemas
Small schemas with few fields (less than 10 fields per schema)
Memory requirements increase with the number of topics. Topics are used as the primary reference for memory scaling, with additional considerations for schemas.
Schemas have a significant impact on memory usage, particularly as the number of fields within each schema increases. The memory impact is determined by both the number of schemas and the complexity (number of fields) of these schemas.
To help illustrate how to apply these recommendations, here are some example configurations considering both topics and schema complexity:
Topics: 500
Schemas: 100 (average size 50 KB, 8 fields per schema)
Recommended Memory: 8 GB
Schema Complexity: Low → No additional memory needed.
Total Recommended Memory: 8 GB
Topics: 5,000
Schemas: 1,000 (average size 200 KB, 25 fields per schema)
Base Memory: 12 GB
Schema Complexity: Moderate → No additional memory needed.
Total Recommended Memory: 16 GB
Topics: 15,000
Schemas: 3,000 (average size 500 KB, 70 fields per schema)
Base Memory: 32 GB
Schema Complexity: High → Add 3 GB for schema complexity.
Total Recommended Memory: 35 GB
Topics: 30,000
Schemas: 5,000 (average size 300 KB, 30 fields per schema)
Base Memory: 64 GB
Schema Complexity: Moderate → Add 5 GB for schema complexity.
Total Recommended Memory: 69 GB
High Throughput: If your Kafka cluster is expected to handle high throughput, consider adding 20-30% more memory than the recommendations.
Complex Queries and Joins: If using Lenses.io for complex data queries and joins, consider increasing the memory allocation by 10-15% to accommodate the additional processing.
Monitoring and Adjustment: Regularly monitor memory usage and adjust based on actual load and performance.
Proper memory allocation is crucial for the performance and reliability of Lenses.io, especially in environments with a large number of topics and complex schemas. While topics provide a solid baseline for memory recommendations, the complexity of schemas—particularly the number of fields—can also significantly impact memory usage. Regular monitoring and adjustments are recommended to ensure that your Lenses.io setup remains performant as your Kafka environment scales.
This page describes adding a Schema Registries to the Lenses Agent.
This page describes the Provisioning API reference.
For the options for each connection, see the schema/object of the PUT call.
This page describes how to configure JMX metrics for Connections in Lenses.
All core services (Kafka, Schema Registry, Kafka Connect, Zookeeper) use the same set of properties for services’ monitoring.
The Agent will discover all the brokers by itself and will try to fetch metrics using metricsPort, metricsCustomUrlMappings and other properties (if specified).
The same port used for all brokers/workers/nodes. No SSL, no authentication.
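A sketch of the relevant part of the Kafka connection (same key names as in the PLAINTEXT example earlier):

```yaml
      kafkaBootstrapServers:
        value:
          - PLAINTEXT://my-kafka-host-0:9092
      metricsType:
        value: JMX
      metricsPort:
        value: 9581
```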
Such a configuration means that the Agent will try to connect using JMX with every pair of kafkaBootstrapServers.host:metricsPort, so following the example: my-kafka-host-0:9581.
For Jolokia the Agent supports two types of requests: GET (JOLOKIAG) and POST (JOLOKIAP).
For JOLOKIA each entry value in metricsCustomUrlMappings must contain protocol.
The same port used for all brokers/workers/nodes. No SSL, no authentication.
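A sketch (the Jolokia default port 8778 is illustrative):

```yaml
      metricsType:
        value: JOLOKIAG   # GET requests; use JOLOKIAP for POST
      metricsPort:
        value: 8778
```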
JOLOKIA monitoring works on the top of HTTP protocol. To fetch metrics the Agent has to perform either GET or POST request. There is a way of configuring http request timeout using httpRequestTimeout property (ms value). Its default value is 20 seconds.
Default suffix for Jolokia endpoints is /jolokia/, so that should be provided value. Sometimes that suffix can be different, so there is a way of customizing it by using metricsHttpSuffix field.
AWS has a predefined metrics configuration. The Agent hits the Prometheus endpoint using port 11001 for each broker. There is an option of customizing the AWS metrics connection in Lenses by using the metricsUsername, metricsPassword, httpRequestTimeout, metricsHttpSuffix, metricsCustomUrlMappings and metricsSsl properties, but most likely no one will need to do that - AWS has its own standard and most probably it won't change. Customization can be achieved only by API or CLI - the UI does not support it.
There is also a way to configure custom mapping for each broker (Kafka) / node (Schema Registry, Zookeeper) / worker (Kafka Connect).
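A sketch; the exact mapping syntax for metricsCustomUrlMappings is an assumption:

```yaml
      metricsPort:
        value: 9581
      metricsCustomUrlMappings:
        value:
          "my-kafka-host-0:9092": "my-kafka-host-0:9582"
```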
Such a configuration means that the Agent will try to connect using JMX for:
my-kafka-host-0:9582 - because of metricsCustomUrlMappings
my-kafka-host-1:9581 - because of metricsPort and no entry in metricsCustomUrlMappings
This page describes connecting Lenses to IBM Event Streams schema registry.
Requires an Enterprise subscription on IBM Event Streams. Only hard delete is supported for IBM Event Streams.
To configure an application to use this compatibility API, specify the Schema Registry endpoint in the following format:
Use "token" as the username. Set the password as your API KEY from IBM Event streams
Set the following examples in provisioning.yaml
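A sketch, using the "token" username and an API key from the environment; the endpoint is a placeholder:

```yaml
confluentSchemaRegistry:
  - name: schema-registry
    version: 1
    configuration:
      schemaRegistryUrls:
        value:
          - https://<IBM_EVENT_STREAMS_SR_ENDPOINT>
      username:
        value: token
      password:
        value: ${IBM_EVENT_STREAMS_API_KEY}
```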
Add a connection to AWS in the Lenses Agent.
The agent uses an AWS connection in three places:
AWS IAM connection for the Agent itself
Connecting to AWS Glue
Alert channels to CloudWatch
If the Agent is deployed on an EC2 instance or has access to AWS credentials in the default credentials provider chain, those can be used instead.
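A sketch of an AWS connection (key names are illustrative; omit the static keys when relying on the instance role):

```yaml
aws:
  - name: my-aws-connection
    version: 1
    configuration:
      accessKeyId:
        value: ${AWS_ACCESS_KEY_ID}
      secretAccessKey:
        value: ${AWS_SECRET_ACCESS_KEY}
      region:
        value: eu-west-1
```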
This page describes how to connect Lenses to IBM Event Streams.
IBM Event Streams requires a replication factor of 3. Ensure you set the replication factor accordingly for Lenses internal topics.
This page describes adding a Zookeeper to the Lenses Agent.
Set the following examples in provisioning.yaml
Simple configuration with Zookeeper metrics read via JMX.
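A sketch (the zookeeperUrls key follows the connection reference; hosts are illustrative):

```yaml
zookeeper:
  - name: zookeeper
    version: 1
    configuration:
      zookeeperUrls:
        value:
          - my-zookeeper-host-0:2181
          - my-zookeeper-host-1:2181
          - my-zookeeper-host-2:2181
      metricsType:
        value: JMX
      metricsPort:
        value: 9581
```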
With such a configuration, Lenses will use 3 Zookeeper nodes and will try to read their metrics from the following URLs (notice the same port - 9581 - used for all of them, as defined by the metricsPort property):
my-zookeeper-host-0:9581
my-zookeeper-host-1:9581
my-zookeeper-host-2:9581
This page describes the hardware and OS prerequisites for Lenses.
Run on any Linux server (review ulimits) or container technology (Docker/Kubernetes). For RHEL 6.x and CentOS 6.x, use Docker.
Linux machines typically have a soft limit of 1024 open file descriptors. Check your current limit with the ulimit command:
Increase as a super-user the soft limit to 4096 with:
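For example:

```bash
# check the current soft limit
ulimit -S -n
# raise the soft limit to 4096 (run as a super-user)
ulimit -S -n 4096
```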
This page describes adding a Kafka Connect Cluster to the Lenses Agent.
Lenses integrates with Kafka Connect Clusters to manage connectors.
The name of a Kafka Connect connection may only contain alphanumeric characters ([A-Za-z0-9]) and dashes (-). Valid examples would be dev, Prod1, SQLCluster, Prod-1, SQL-Team-Awesome.
Multiple Kafka Connect clusters are supported.
If you are using Kafka Connect < 2.6, set the following to ensure you can see connectors:
lenses.features.connectors.topics.via.api.enabled=false
Consider rate limiting if you have a high number of connectors.
The URLs (workers) should always have a scheme defined (http:// or https://).
This example uses an optional AES-256 key. The key decodes values encoded with AES-256 to enable passing encrypted values to connectors. It is only needed if your cluster uses AES-256 Decryption plugin.
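A sketch of a Connect cluster connection (key names are illustrative; the aes256Key entry is only needed with the AES-256 decryption plugin):

```yaml
connect:
  - name: my-connect-cluster
    version: 1
    tags: ["dev"]
    configuration:
      workers:
        value:
          - http://my-connect-worker-0:8083
          - http://my-connect-worker-1:8083
      aes256Key:
        value: ${CONNECT_AES256_KEY}
```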
For Basic Authentication, define username and password properties.
A custom truststore is needed when the Kafka Connect workers are served over TLS (encryption-in-transit) and their certificates are not signed by a trusted CA.
A custom truststore might be necessary too (see above).
If you have developed your own connector or are using a third-party connector, you can still display the connector instances in the topology. To do this, Lenses needs to know the configuration option of the connector that defines which topic the connector reads from or writes to. This is set in the connectors.info parameter in the lenses.conf file.
Connect the Lenses Agent to your alerting and auditing systems.
This page describes the Kafka ACLs prerequisites for the Lenses Agent if ACLs are enabled on your Kafka clusters.
These ACLs are for the underlying Lenses Agent Kafka client. Lenses has its own set of permissions guarding access.
You can restrict the access of the Lenses Kafka client, but this can reduce the functionality on offer in Lenses, e.g. not allowing Lenses to create topics at all.
When your Kafka cluster is configured with an authorizer which enforces ACLs, the Agent will need a set of permissions to function correctly.
Common practice is to give the Agent superuser status or the complete list of available operations for all resources. The IAM model of Lenses can then be used to restrict the access level per user.
The Agent needs permission to manage and access its own internal Kafka topics:
__topology
__topology__metrics
It also needs read and describe permissions for the consumer offsets and Kafka Connect topics, if enabled:
__consumer_offsets
connect-configs
connect-offsets
connect-status
This same set of permissions is required for any topic to which the agent must have read access.
DescribeConfigs was added in Kafka 2.0. It may not be needed for versions before 2.2.
Additional permissions are needed to produce to topics or manage them.
Permission to at least read and describe consumer groups is required to take advantage of the Consumer Groups' monitoring capabilities.
Additional permissions are needed to manage groups.
To manage ACLs, permission to the cluster is required:
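As a sketch, cluster-level permissions can be granted to the Agent's principal with the standard kafka-acls tool (principal and host are illustrative):

```bash
kafka-acls --bootstrap-server my-kafka-host-0:9092 \
  --add --allow-principal User:lenses-agent \
  --operation Describe --operation Alter --cluster
```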
Rate limit the calls the Lenses Agent makes to Schema Registries and Connect Clusters.
To rate limit the calls the Agent makes to Schema Registries or Connect Clusters, set the following in the Agent configuration:
The exact values provided will depend on your setup, for example the number of schemas and how often new schemas are added, so some trial and error is required.
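For example (values are illustrative; the keys are listed in the configuration reference below):

```
lenses.schema.registry.client.http.rate.type = "session"
lenses.schema.registry.client.http.rate.maxRequests = 100
lenses.schema.registry.client.http.rate.window = "1 second"
lenses.connect.client.http.rate.type = "session"
lenses.connect.client.http.rate.maxRequests = 100
lenses.connect.client.http.rate.window = "1 second"
```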
Number of Topics / Partitions | Recommended Memory |
---|---|
Up to 1,000 / 10,000 partitions | 12 GB |
1,001 to 10,000 / 100,000 partitions | 24 GB |
10,001 to 30,000 / 300,000 partitions | 64 GB |
Schema Complexity | Number of Fields per Schema | Memory Addition |
---|---|---|
Low to Moderate Complexity | Up to 50 fields | None |
High Complexity | 51 - 100 fields | 1 GB for every 1,000 schemas |
Very High Complexity | 100+ fields | 2 GB for every 1,000 schemas |
Number of Topics | Number of Schemas | Number of Fields per Schema | Base Memory | Additional Memory | Total Recommended Memory |
---|---|---|---|---|---|
1,000 | 1,000 | Up to 10 | 8 GB | None | 12 GB |
1,000 | 1,000 | 11 - 50 | 8 GB | None | 12 GB |
5,000 | 5,000 | Up to 10 | 12 GB | None | 16 GB |
5,000 | 5,000 | 11 - 50 | 12 GB | None | 16 GB |
10,000 | 10,000 | Up to 10 | 16 GB | None | 24 GB |
10,000 | 10,000 | 51 - 100 | 24 GB | 10 GB | 34 GB |
30,000 | 30,000 | Up to 10 | 64 GB | None | 64 GB |
30,000 | 30,000 | 51 - 100 | 64 GB | 30 GB | 94 GB |
This page describes configuring Lenses Agent logging.
Changes to the logback.xml are hot reloaded by the Agent, no need to restart.
All logs are emitted unbuffered as a stream of events to both stdout and to rotating files inside the directory logs/.
The logback.xml file is used to configure logging.
If customization is required, it is recommended to adapt the default configuration rather than write your own from scratch.
The file can be placed in any of the following directories:
the directory where the Agent is started from
/etc/lenses/
agent installation directory.
The first one found, in the above order, is used, but to override this and use a custom location, set the following environment variable:
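The variable is LENSES_LOG4J_OPTS (see the JVM options reference), for example:

```bash
export LENSES_LOG4J_OPTS="-Dlogback.configurationFile=file:/path/to/logback.xml"
```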
The default configuration file is set up to hot-reload any changes every 30 seconds.
The default log level is set to INFO (apart from some very verbose classes).
All the log entries are written to the output using the following pattern:
You can adjust this inside logback.xml to match your organization’s defaults.
Inside logs/ you will find three files: lenses.log, lenses-warn.log and metrics.log. The first contains all logs and is the same as the stdout. The second contains only messages at level WARN and above. The third contains timing metrics and can be useful for debugging.
The default configuration contains two cyclic buffer appenders: "CYCLIC-INFO" and "CYCLIC-METRICS". These appenders are required to expose the Agent logs within the Admin UI.
This page describes how to install plugins in the Lenses Agent.
The following implementations can be specified:
Serializers/Deserializers - Plug in your serializer and deserializer to enable observability over any data format (e.g., Protobuf or Thrift).
Custom authentication - Authenticate users on your proxy and inject permissions HTTP headers.
LDAP lookup - Use multiple LDAP servers or your own group mapping logic.
SQL UDFs - User Defined Functions (UDF) that extend SQL and streaming SQL capabilities.
Once built, the jar files and any plugin dependencies should be added to the Agent and, in the case of Serializers and UDFs, to the SQL Processors if required.
On startup, the Agent loads plugins from the $LENSES_HOME/plugins/ directory and any location set in the environment variable LENSES_PLUGINS_CLASSPATH_OPTS. The Agent keeps watching these locations, and dropping in a new plugin will hot-reload it. For the Agent docker (and Helm chart), use /data/plugins.
Any first-level directories under the paths mentioned above that are detected on startup will also be monitored for new files. During startup, the list of monitored locations is shown in the logs to help confirm the setup.
Whilst all jar files may be added to the same directory (e.g. /data/plugins), it is suggested to use a directory hierarchy to make management and maintenance easier.
An example hierarchy for a set of plugins:
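For example (illustrative jar names):

```
/data/plugins/
├── serializers/
│   ├── my-protobuf-serde.jar
│   └── protobuf-java.jar
└── udfs/
    └── my-udf.jar
```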
There are two ways to add custom plugins (UDFs and Serializers) to the SQL Processors: (1) by making a tar.gz archive available at an HTTP(S) address, or (2) by creating a custom docker image.
With this method, a tar archive, compressed with gzip, can be created that contains all plugin jars and their dependencies. Then this archive should be uploaded to a web server that the SQL Processors containers can access, and its address should be set with the option lenses.kubernetes.processor.extra.jars.url.
Step by step:
Create a tar.gz file that includes all required jars at its root:
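For example (jar names are illustrative):

```bash
tar -czf processor-plugins.tar.gz my-udf.jar my-serde.jar serde-dependency.jar
```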
Upload it to a web server, e.g. https://example.net/myfiles/FILENAME.tar.gz
Set lenses.kubernetes.processor.extra.jars.url to that address. For the docker image, set the corresponding environment variable:
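For example (the environment variable name assumes the usual Lenses convention of upper-casing the option and replacing dots with underscores):

```
lenses.kubernetes.processor.extra.jars.url=https://example.net/myfiles/FILENAME.tar.gz
```

```bash
export LENSES_KUBERNETES_PROCESSOR_EXTRA_JARS_URL="https://example.net/myfiles/FILENAME.tar.gz"
```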
The SQL Processors inside Kubernetes use the docker image lensesioextra/sql-processor. It is possible to build a custom image and add all the required jar files under the /plugins directory, then set the lenses.kubernetes.processor.image.name and lenses.kubernetes.processor.image.tag options to point to the custom image.
Step by step:
Create a Docker image using lensesioextra/sql-processor:VERSION as a base and add all required jar files under /plugins:
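A minimal sketch of such a Dockerfile (version and jar names are illustrative):

```dockerfile
FROM lensesioextra/sql-processor:5.2
COPY my-udf.jar my-serde.jar /plugins/
```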
Upload the docker image to a registry:
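For example (registry and tag are illustrative):

```bash
docker build -t example.net/my-registry/sql-processor:5.2-custom .
docker push example.net/my-registry/sql-processor:5.2-custom
```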
Set lenses.kubernetes.processor.image.name and lenses.kubernetes.processor.image.tag. For the docker image, set the corresponding environment variables:
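For example (image name and tag are illustrative; the environment variable names assume the usual dots-to-underscores convention):

```
lenses.kubernetes.processor.image.name=example.net/my-registry/sql-processor
lenses.kubernetes.processor.image.tag=5.2-custom
```

```bash
export LENSES_KUBERNETES_PROCESSOR_IMAGE_NAME="example.net/my-registry/sql-processor"
export LENSES_KUBERNETES_PROCESSOR_IMAGE_TAG="5.2-custom"
```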
This page describes how to retrieve Lenses Agent JMX metrics.
The JMX endpoint is managed by the lenses.jmx.port option. To disable JMX, leave the option empty.
To enable monitoring of the Agent metrics:
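For example (the port is arbitrary):

```
lenses.jmx.port=9015
```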
To export via Prometheus exporter:
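A sketch using the standard Prometheus JMX exporter java agent via LENSES_OPTS (paths are illustrative; the docker image already wires this up on port 9102):

```bash
export LENSES_OPTS="-javaagent:/opt/jmx_prometheus_javaagent.jar=9102:/opt/jmx_exporter_config.yaml"
```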
The Agent Docker image (lensesio/lenses) automatically sets up the Prometheus endpoint. You only have to expose the 9102 port to access it.
This is done in two parts. The first part sets up the files that the JMX agent requires; the second covers the options we need to pass to the JVM running the process.
First, let's create a new folder called jmxremote.
To enable basic auth JMX, first create two files:
jmxremote.access
jmxremote.password
The password file has the credentials that the JMX agent will check during client authentication
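The standard JMX format is one "username password" pair per line; for the two users described below:

```
admin admin
guest admin
```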
The above registers two users:
UserA: username admin, password admin
UserB: username guest, password admin
The access file has authorization information, like who is allowed to do what.
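The standard format is one "username permission" pair per line, matching the description below:

```
admin readwrite
guest readonly
```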
In the above code, we can see that the admin user can do read and write operations in JMX, while guest user can only read the JMX content.
Now, to enable JMX with basic auth protection, all we need to do is pass the following options to the JRE environment that runs the Java process we want to protect. Let's assume this Java process is Kafka.
Change the permissions on both files so only the owner can view and edit them.
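For example:

```bash
chmod 0600 jmxremote.password jmxremote.access
```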
If you do not change the permissions to 0600 and the owner to the user that runs the JRE process, the JMX agent will raise an error complaining that the process is not the owner of the files used for authentication and authorization.
Finally, export the following options in the environment of the user that will run Kafka:
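A sketch of the standard JMX agent flags (the port and file paths are illustrative):

```bash
export KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=9581 \
  -Dcom.sun.management.jmxremote.authenticate=true \
  -Dcom.sun.management.jmxremote.ssl=false \
  -Dcom.sun.management.jmxremote.password.file=/path/to/jmxremote/jmxremote.password \
  -Dcom.sun.management.jmxremote.access.file=/path/to/jmxremote/jmxremote.access"
```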
First setup JMX with basic auth as shown in the Secure JMX: Basic Auth page.
To enable TLS encryption/authentication in JMX, you need a JKS keystore and truststore. Note that both the JKS truststore and keystore should have the same password, because the javax.net.ssl classes use the password you pass for the keystore as the key password.
Let's assume this Java process is Kafka and that you have installed keystore.jks and truststore.jks under /etc/certs.
Export the following options in the environment of the user that will run Kafka:
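A sketch of the standard TLS flags for the JMX agent (port and password are illustrative; truststore and keystore share a password, as noted above):

```bash
export KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=9581 \
  -Dcom.sun.management.jmxremote.authenticate=true \
  -Dcom.sun.management.jmxremote.ssl=true \
  -Dcom.sun.management.jmxremote.registry.ssl=true \
  -Djavax.net.ssl.keyStore=/etc/certs/keystore.jks \
  -Djavax.net.ssl.keyStorePassword=<KEYSTORE_PASSWORD> \
  -Djavax.net.ssl.trustStore=/etc/certs/truststore.jks \
  -Djavax.net.ssl.trustStorePassword=<KEYSTORE_PASSWORD>"
```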
This page describes how to configure the agent to deploy and manage SQL Processors for stream processing.
Lenses can be used to define and deploy stream processing applications that read from Kafka and write back to Kafka with SQL. They are based on the Kafka Streams framework and are known as SQL Processors.
SQL processing of real-time data can run in 2 modes:
SQL In-Process - the workload runs inside of the Lenses Agent.
SQL in Kubernetes - the workload runs & scales on your Kubernetes cluster.
Which mode the SQL Processors will run as should be defined within the lenses.conf before Lenses is started.
In this mode, SQL processors run as part of the Agent process, sharing resources, memory, and CPU time with the rest of the platform.
This mode of operation is meant to be used for development only.
As such, the agent will not allow the creation of more than 50 SQL Processors in In Process mode, as this could impact the platform's stability and performance negatively.
For production, use the KUBERNETES mode for maximum flexibility and scalability.
Set the execution configuration to IN_PROC
Set the directory to store the internal state of the SQL Processors:
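For example (the state directory is illustrative; the option is lenses.sql.state.dir):

```
lenses.sql.execution.mode = IN_PROC
lenses.sql.state.dir = "/data/sql-state-dir"
```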
SQL processors use the same connection details that Agent uses to speak to Kafka and Schema Registry. The following properties are mounted, if present, on the file system for each processor:
Kafka
SSLTruststore
SSLKeystore
Schema Registry
SSL Keystore
SSL Truststore
The file structure created by applications is the following: /run/[lenses_installation_id]/applications/
Keep in mind Lenses requires an installation folder with write permissions. The following are tried:
/run
/tmp
Kubernetes can be used to deploy SQL Processors. To configure Kubernetes, set the mode to KUBERNETES and configure the location of the kubeconfig file:
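For example:

```
lenses.sql.execution.mode = KUBERNETES
lenses.kubernetes.config.file = "/home/lenses/.kube/config"
```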
When the Agent is deployed inside Kubernetes, the lenses.kubernetes.config.file configuration entry should be set to an empty string. The Kubernetes client will auto-configure from the pod it is deployed in.
The SQL Processor docker image is available on Docker Hub.
Custom serdes should be embedded in a new Lenses SQL processor Docker image.
To build a custom Docker image, create the following directory structure:
Copy your serde jar files under processor-docker/serde.
Create a Dockerfile containing:
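A minimal sketch (base image version is illustrative; it copies the serde directory from the structure above into /plugins):

```dockerfile
FROM lensesioextra/sql-processor:5.2
COPY serde/ /plugins/
```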
Build the Docker image and push it to your registry.
Once the image is deployed in your registry, please set Lenses to use it (lenses.conf):
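For example (image name and tag are illustrative):

```
lenses.kubernetes.processor.image.name=example.net/my-registry/sql-processor
lenses.kubernetes.processor.image.tag=5.2-custom
```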
Don't use the LPFP_ prefix. Internally, Lenses prefixes all its properties with LPFP_. Avoid passing custom environment variables starting with LPFP_ as it may cause the processors to fail.
To deploy Lenses Processors in Kubernetes, the suggested way is to activate RBAC at Cluster level through the Helm values.yaml:
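A sketch; the exact key in the chart's values.yaml may differ between chart versions:

```yaml
rbacEnable: true
```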
If you want to limit the permissions Lenses has against your Kubernetes cluster, you can use Role/RoleBinding resources instead.
To achieve this you need to create a Role and a RoleBinding resource in the namespace you want the processors deployed to:
For example, with:
Lenses namespace = lenses-ns
Processor namespace = lenses-proc-ns
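A sketch of the two resources (the service account name lenses in namespace lenses-ns is an assumption; adjust resources and verbs to your needs):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: lenses-processors
  namespace: lenses-proc-ns
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "services", "configmaps", "secrets", "deployments"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: lenses-processors
  namespace: lenses-proc-ns
subjects:
  - kind: ServiceAccount
    name: lenses
    namespace: lenses-ns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: lenses-processors
```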
You can repeat this for as many namespaces as you want Lenses to have access to.
Finally, you need to define in the Lenses configuration which namespaces Lenses can access. To achieve this, amend values.yaml to contain the following:
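A sketch; the lenses.kubernetes.namespaces option takes an object mapping a configured cluster to its visible namespaces (the exact shape is an assumption):

```
lenses.kubernetes.namespaces = {
  kubernetes = ["lenses-proc-ns"]
}
```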
This page describes configuring the database connection for the Lenses Agent.
Once you have created a role for the agent to use, you can then configure the Agent in the lenses.conf file:
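For example (host, credentials and names are illustrative; the keys are listed in the configuration reference below):

```
lenses.storage.postgres.host = "postgres.example.com"
lenses.storage.postgres.port = 5432
lenses.storage.postgres.database = "lenses_agent"
lenses.storage.postgres.schema = "agent_prod"
lenses.storage.postgres.username = "lenses"
lenses.storage.postgres.password = "changeme"
```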
Additional configurations for the PostgreSQL database connection can be passed under the lenses.storage.postgres.properties configuration prefix.
One Postgres server can be used for all agents, with each agent using a separate database or schema. See lenses.storage.postgres.schema and lenses.storage.postgres.database.
The supported parameters can be found in the PostgreSQL documentation. For example:
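For example, to require TLS using the JDBC driver's sslmode parameter:

```
lenses.storage.postgres.properties.sslmode = "require"
```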
The Agent uses the HikariCP library for high-performance database connection pooling.
The default settings should perform well but can be overridden via the lenses.storage.hikaricp configuration prefix. The supported parameters can be found in the HikariCP documentation.
Camelcase configuration keys are not supported in agent configuration and should be translated to dot notation.
For example:
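A sketch: HikariCP's maximumPoolSize becomes:

```
lenses.storage.hikaricp.maximum.pool.size = 10
```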
This page describes how to configure TLS for the Lenses Agent.
By default, the Agent does not provide TLS termination, but it can be enabled via a configuration option. TLS termination is recommended for enhanced security and is a prerequisite for integrating with SSO (Single Sign-On) via SAML 2.0.
TLS termination can be configured directly within the Agent or by using a TLS proxy or load balancer.
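A sketch using the lenses.ssl.* options from the configuration reference (paths and passwords are illustrative):

```
lenses.ssl.keystore.location = "/path/to/keystore.jks"
lenses.ssl.keystore.password = "changeme"
lenses.ssl.key.password = "changeme"
```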
To use a non-default global truststore, set the path accordingly via the LENSES_OPTS variable.
To enable mutual TLS, set your keystore accordingly.
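For example, using the standard JVM truststore/keystore flags (paths are illustrative; the keystore entries are only needed for mutual TLS):

```bash
export LENSES_OPTS="-Djavax.net.ssl.trustStore=/path/to/truststore.jks \
  -Djavax.net.ssl.trustStorePassword=<TRUSTSTORE_PASSWORD> \
  -Djavax.net.ssl.keyStore=/path/to/keystore.jks \
  -Djavax.net.ssl.keyStorePassword=<KEYSTORE_PASSWORD>"
```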
This page describes the JVM options for the Lenses Agent.
The Agent runs as a JVM app; you can tune runtime configurations via environment variables.
This page lists the available configurations in Lenses Agent.
Set in lenses.conf
Reference documentation of all configuration and authentication options:
System or control topics are created by services for their internal use. Below is the list of built-in configurations to identify them.
_schemas
__consumer_offsets
_kafka_lenses_
lsql_*
lsql-*
__transaction_state
__topology
__topology__metrics
_confluent*
*-KSTREAM-*
*-TableSource-*
*-changelog
__amazon_msk*
Wildcard (*) is used to match any name in the path to capture a list of topics, not just one. When the wildcard is not specified, Lenses matches on the entry name provided.
If the records schema is centralized, the connectivity to Schema Registry nodes is defined by a Lenses Connection.
There are two static config entries to enable/disable the deletion of schemas: lenses.schema.registry.delete and lenses.schema.registry.cascade.delete.
Options for specific deployment targets:
Global options
Kubernetes
Common settings, independently of the underlying deployment target:
Kubernetes connectivity is optional. Minimum supported K8 version 0.11.10. All settings are string.
Optimization settings for SQL queries.
Lenses requires these Kafka topics to be available; otherwise, it will try to create them. The topics can be created manually before Lenses is run, or Lenses can be given the correct Kafka ACLs to create them:
To allow for fine-grained control over the replication factor of the three topics, the following settings are available:
When configuring the replication factor for your deployment, it's essential to consider the requirements imposed by your cloud provider. Many cloud providers enforce a minimum replication factor to ensure data durability and high availability. For example, IBM Cloud mandates a minimum replication factor of 3. Therefore, it's crucial to set the replication factor for the Lenses internal topics to at least 3 when deploying Lenses on IBM Cloud.
All time configuration options are in milliseconds.
Control how Lenses identifies your connectors in the Topology view. Catalogue your connector types, set their icons, and control how Lenses extracts the topics used by your connectors.
Lenses comes preconfigured for some of the popular connectors as well as the Stream Reactor connectors. If you see that Lenses doesn't automatically identify your connector type, then use the lenses.connectors.info setting to register it with Lenses. Add a new HOCON object {} for every new connector in your lenses.connectors.info list:
This configuration allows the connector to work with the topology graph, and also have the RBAC rules applied to it.
To extract the topic information from the connector configuration, source connectors require an extra configuration. The extractor class should be io.lenses.config.kafka.connect.SimpleTopicsExtractor. Using this extractor requires an extra property configuration, which specifies the field in the connector configuration that determines the topics data is sent to.
Here is an example for the file source:
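A sketch; lenses.connectors.info and the extractor class are given above, while the other field names are illustrative:

```
lenses.connectors.info = [
  {
    class.name = "org.apache.kafka.connect.file.FileStreamSourceConnector"
    name = "File Source"
    extractor.class = "io.lenses.config.kafka.connect.SimpleTopicsExtractor"
    property = "topic"   # the connector option holding the target topic
  }
]
```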
Here are examples of a Splunk sink connector and a Debezium SQL Server connector:
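A sketch (the connector class names are real; the field names and the property picked for the Debezium source are illustrative):

```
lenses.connectors.info = [
  {
    class.name = "com.splunk.kafka.connect.SplunkSinkConnector"
    name = "Splunk Sink"
    sink = true
    extractor.class = "io.lenses.config.kafka.connect.SimpleTopicsExtractor"
    property = "topics"
  },
  {
    class.name = "io.debezium.connector.sqlserver.SqlServerConnector"
    name = "Debezium SQL Server Source"
    extractor.class = "io.lenses.config.kafka.connect.SimpleTopicsExtractor"
    property = "table.include.list"
  }
]
```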
Key | Description |
---|---|
LENSES_OPTS | For generic settings, such as the global truststore. Note that the docker image uses this to plug in a Prometheus java agent for monitoring Lenses |
LENSES_HEAP_OPTS | JVM heap options. The default setting is -Xmx3g -Xms512m, which sets the heap size between 512MB and 3GB. The upper limit is set to 1.2GB on the Box development docker image |
LENSES_JMX_OPTS | Tune the JMX options for the JVM, i.e. to allow remote access |
LENSES_LOG4J_OPTS | Override Agent logging configuration. Should only be used to set the logback configuration file, using the format -Dlogback.configurationFile=file:/path/to/logback.xml |
LENSES_PERFORMANCE_OPTS | JVM performance tuning. The default settings are -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent= |
Key | Description | Default | Type | Required |
---|---|---|---|---|
lenses.eula.accept | Accept the Lenses EULA | false | boolean | yes |
lenses.ip | Bind HTTP at the given endpoint. Use in conjunction with lenses.port | 0.0.0.0 | string | no |
lenses.port | The HTTP port to listen for API, UI and WS calls | 9991 | int | no |
lenses.jmx.port | Bind JMX port to enable monitoring Lenses | | int | no |
lenses.root.path | The path from which all the Lenses URLs are served | | string | no |
lenses.secret.file | The full path to security.conf for security credentials | security.conf | string | no |
lenses.sql.execution.mode | Streaming SQL mode: IN_PROC (test mode) or KUBERNETES (prod mode) | IN_PROC | string | no |
lenses.offset.workers | Number of workers to monitor topic offsets | 5 | int | no |
lenses.telemetry.enable | Enable telemetry data collection | true | boolean | no |
lenses.kafka.control.topics | An array of topics to be treated as "system topics" | list | array | no |
lenses.grafana | Add your Grafana url, i.e. http://grafanahost:port | | string | no |
lenses.api.response.cache.enable | If enabled, it disables client cache on the Lenses API HTTP responses by adding these HTTP headers: Cache-Control: no-cache, no-store, must-revalidate; Pragma: no-cache; and Expires: -1 | false | boolean | no |
lenses.workspace | Directory to write temp files. If write access is denied, Lenses will fall back to /tmp | /run | string | no |
Key | Description | Default |
---|---|---|
lenses.access.control.allow.methods | HTTP verbs allowed in cross-origin HTTP requests | GET,POST,PUT,DELETE,OPTIONS |
lenses.access.control.allow.origin | Allowed hosts for cross-origin HTTP requests | * |
lenses.allow.weak.ssl | Allow https:// with self-signed certificates | false |
lenses.ssl.keystore.location | The full path to the keystore file used to enable TLS on the Lenses port | |
lenses.ssl.keystore.password | Password for the keystore file | |
lenses.ssl.key.password | Password for the ssl certificate used | |
lenses.ssl.enabled.protocols | Version of TLS protocol to use | TLSv1.2 |
lenses.ssl.algorithm | X509 or PKIX algorithm to use for TLS termination | SunX509 |
lenses.ssl.cipher.suites | Comma separated list of ciphers allowed for TLS negotiation | |
lenses.security.kerberos.service.principal | The Kerberos principal for Lenses to use in the SPNEGO form: HTTP/lenses.address@REALM.COM | |
lenses.security.kerberos.keytab | Path to Kerberos keytab with the service principal. It should not be password protected | |
lenses.security.kerberos.debug | Enable Java's JAAS debugging information | false |
Key | Description | Default | Type | Required |
---|---|---|---|---|
lenses.storage.hikaricp.[*] | To pass additional properties to the HikariCP connection pool | | | no |
lenses.storage.directory | The full path to a directory for Lenses to use for persistence | "./storage" | string | no |
lenses.storage.postgres.host | Host of PostgreSQL server for Lenses to use for persistence | | string | no |
lenses.storage.postgres.port | Port of PostgreSQL server for Lenses to use for persistence | 5432 | integer | no |
lenses.storage.postgres.username | Username for PostgreSQL database user | | string | no |
lenses.storage.postgres.password | Password for PostgreSQL database user | | string | no |
lenses.storage.postgres.database | PostgreSQL database name for Lenses to use for persistence | | string | no |
lenses.storage.postgres.schema | PostgreSQL schema name for Lenses to use for persistence | "public" | string | no |
lenses.storage.postgres.properties.[*] | To pass additional properties to the PostgreSQL JDBC driver | | | no |
Key | Description | Type |
---|---|---|
lenses.schema.registry.delete | Allow schemas to be deleted. Default is false | boolean |
lenses.schema.registry.cascade.delete | Deletes associated schemas when a topic is deleted. Default is false | boolean |
Key | Description | Default |
---|---|---|
lenses.deployments.events.buffer.size | Buffer size for events coming from Deployment targets such as Kubernetes | 10000 |
lenses.deployments.errors.buffer.size | Buffer size for errors happening on the communication between Lenses and the Deployment targets such as Kubernetes | 1000 |
lenses.kubernetes.processor.image.name | The url for the streaming SQL Docker for K8 | lensesioextra/sql-processor |
lenses.kubernetes.processor.image.tag | The version/tag of the above container | 5.2 |
lenses.kubernetes.config.file | The path for the kubectl config file | /home/lenses/.kube/config |
lenses.kubernetes.pull.policy | Pull policy for K8 containers: IfNotPresent or Always | IfNotPresent |
lenses.kubernetes.service.account | The service account for deployments. Will also pull the image | default |
lenses.kubernetes.init.container.image.name | The docker/container repository url and name of the Init Container image used to deploy applications to Kubernetes | lensesio/lenses-cli |
lenses.kubernetes.init.container.image.tag | The tag of the Init Container image used to deploy applications to Kubernetes | 5.2.0 |
lenses.kubernetes.watch.reconnect.limit | How many times to reconnect to Kubernetes Watcher before considering the cluster unavailable | 10 |
lenses.kubernetes.watch.reconnect.interval | How often to wait between Kubernetes Watcher reconnection attempts, expressed in milliseconds | 5000 |
lenses.kubernetes.websocket.timeout | How long to wait for a Kubernetes Websocket response, expressed in milliseconds | 15000 |
lenses.kubernetes.websocket.ping.interval | How often to ping Kubernetes Websocket to check it's alive, expressed in milliseconds | 30000 |
lenses.kubernetes.pod.heap | The max amount of memory the underlying Java process will use | 900M |
lenses.kubernetes.pod.min.heap | The initial amount of memory the underlying Java process will allocate | 128M |
lenses.kubernetes.pod.mem.request | The value will control how much memory resource the Pod Container will request | 128M |
lenses.kubernetes.pod.mem.limit | The value will control the Pod Container memory limit | 1152M |
lenses.kubernetes.pod.cpu.request | The value will control how much cpu resource the Pod Container will request | null |
lenses.kubernetes.pod.cpu.limit | The value will control the Pod Container cpu limit | null |
lenses.kubernetes.namespaces | Object setting a list of Kubernetes namespaces that Lenses will see for each of the specified and configured clusters | null |
lenses.kubernetes.pod.liveness.initial.delay | Amount of time Kubernetes will wait to check the Processor's health for the first time. It can be expressed like 30 second, 2 minute or 3 hour; mind the time unit is singular | 60 second |
lenses.kubernetes.config.reload.interval | Time interval to reload the Kubernetes configuration file, expressed in milliseconds | 30000 |
Key | Description | Type | Default |
---|---|---|---|
lenses.sql.settings.max.size | Restricts the max bytes that a kafka sql query will return | long | 20971520 (20MB) |
lenses.sql.settings.max.query.time | Max time (in msec) that a sql query will run | int | 3600000 (1h) |
lenses.sql.settings.max.idle.time | Max time (in msec) for a query when it reaches the end of the topic | int | 5000 (5 sec) |
lenses.sql.settings.show.bad.records | By default show bad records when querying a kafka topic | boolean | true |
lenses.sql.settings.format.timestamp | By default convert AVRO date to human readable format | boolean | true |
lenses.sql.settings.live.aggs | By default allow aggregation queries on kafka data | boolean | true |
lenses.sql.sample.default | Number of messages to sample when live tailing a kafka topic | int | 2 per window |
lenses.sql.sample.window | How frequently to sample messages when tailing a kafka topic | int | 200 msec |
lenses.sql.websocket.buffer | Buffer size for messages in a SQL query | int | 10000 |
lenses.metrics.workers | Number of workers for parallelising SQL queries | int | 16 |
lenses.kafka.ws.buffer.size | Buffer size for WebSocket consumer | int | 10000 |
lenses.kafka.ws.max.poll.records | Max number of kafka messages to return in a single poll() | long | 1000 |
lenses.sql.state.dir | Folder to store KStreams state | string | logs/sql-kstream-state |
lenses.sql.udf.packages | The list of allowed java packages for UDFs/UDAFs | array of strings | ["io.lenses.sql.udf"] |
Key | Description | Partition | Replication | Default | Compacted | Retention |
---|---|---|---|---|---|---|
lenses.topics.external.topology | Topic for applications to publish their topology | 1 | 3 (recommended) | __topology | yes | N/A |
lenses.topics.external.metrics | Topic for external application to publish their metrics | 1 | 3 (recommended) | __topology__metrics | no | 1 day |
lenses.topics.metrics | Topic for SQL Processor to send the metrics | 1 | 3 (recommended) | _kafka_lenses_metrics | no | |
Key | Description | Default |
---|---|---|
lenses.topics.replication.external.topology | Replication factor for the lenses.topics.external.topology topic | 1 |
lenses.topics.replication.external.metrics | Replication factor for the lenses.topics.external.metrics topic | 1 |
lenses.topics.replication.metrics | Replication factor for the lenses.topics.metrics topic | 1 |
Key | Description | Type | Default |
---|---|---|---|
lenses.interval.summary | How often to refresh kafka topic list and configs | long | 10000 |
lenses.interval.consumers.refresh.ms | How often to refresh kafka consumer group info | long | 10000 |
lenses.interval.consumers.timeout.ms | How long to wait for kafka consumer group info to be retrieved | long | 300000 |
lenses.interval.partitions.messages | How often to refresh kafka partition info | long | 10000 |
lenses.interval.type.detection | How often to check kafka topic payload info | long | 30000 |
lenses.interval.user.session.ms | How long a client-session stays alive if inactive (4 hours) | long | 14400000 |
lenses.interval.user.session.refresh | How often to check for idle client sessions | long | 60000 |
lenses.interval.topology.topics.metrics | How often to refresh topology info | long | 30000 |
lenses.interval.schema.registry.healthcheck | How often to check the schema registries health | long | 30000 |
lenses.interval.schema.registry.refresh.ms | How often to refresh schema registry data | long | 30000 |
lenses.interval.metrics.refresh.zk | How often to refresh ZK metrics | long | 5000 |
lenses.interval.metrics.refresh.sr | How often to refresh Schema Registry metrics | long | 5000 |
lenses.interval.metrics.refresh.broker | How often to refresh Kafka Broker metrics | long | 5000 |
lenses.interval.metrics.refresh.connect | How often to refresh Kafka Connect metrics | long | 30000 |
lenses.interval.metrics.refresh.brokers.in.zk | How often to refresh from ZK the Kafka broker list | long | 5000 |
lenses.interval.topology.timeout.ms | Time period when a metric is considered stale | long | 120000 |
lenses.interval.audit.data.cleanup | How often to clean up dataset view entries from the audit log | long | 300000 |
lenses.audit.to.log.file | Path to a file to write audits to in JSON format | string | |
lenses.interval.jmxcache.refresh.ms | How often to refresh the JMX cache used in the Explore page | long | 180000 |
lenses.interval.jmxcache.graceperiod.ms | How long to pause for when a JMX connectivity error occurs | long | 300000 |
lenses.interval.jmxcache.timeout.ms | How long to wait for a JMX response | long | 500 |
lenses.interval.sql.udf | How often to look for new UDF/UDAF (user defined [aggregate] functions) | long | 10000 |
lenses.kafka.consumers.batch.size | How many consumer groups to retrieve in a single request | Int | 500 |
lenses.kafka.ws.heartbeat.ms | How often to send heartbeat messages in TCP connection | long | 30000 |
lenses.kafka.ws.poll.ms | Max time for kafka consumer data polling on WS APIs | long | 10000 |
lenses.kubernetes.config.reload.interval | Time interval to reload the Kubernetes configuration file | long | 30000 |
lenses.kubernetes.watch.reconnect.limit | How many times to reconnect to Kubernetes Watcher before considering the cluster unavailable | long | 10 |
lenses.kubernetes.watch.reconnect.interval | How often to wait between Kubernetes Watcher reconnection attempts | long | 5000 |
lenses.kubernetes.websocket.timeout | How long to wait for a Kubernetes Websocket response | long | 15000 |
lenses.kubernetes.websocket.ping.interval | How often to ping Kubernetes Websocket to check it's alive | long | 30000 |
lenses.akka.request.timeout.ms | Max time for a response in an Akka Actor | long | 10000 |
lenses.sql.monitor.frequency | How often to emit healthcheck and performance metrics on Streaming SQL | long | 10000 |
lenses.audit.data.access | Record dataset access as audit log entries | boolean | true |
lenses.audit.data.max.records | How many dataset view entries to retain in the audit log. Set to -1 to retain indefinitely | int | 500000 |
lenses.explore.lucene.max.clause.count | Override Lucene's maximum number of clauses permitted per BooleanQuery | int | 1024 |
lenses.explore.queue.size | Optional setting to bound the Lenses internal queue used by the catalog subsystem. It needs to be a positive integer or it will be ignored | int | N/A |
lenses.interval.kafka.connect.http.timeout.ms | How long to wait for a Kafka Connect response to be retrieved | int | 10000 |
lenses.interval.kafka.connect.healthcheck | How often to check the Kafka Connect health | int | 15000 |
lenses.interval.schema.registry.http.timeout.ms | How long to wait for a Schema Registry response to be retrieved | int | 10000 |
lenses.interval.zookeeper.healthcheck | How often to check the Zookeeper health | int | 15000 |
lenses.ui.topics.row.limit | The number of Kafka records to load automatically when exploring a topic | int | 200 |
lenses.deployments.connect.failure.alert.check.interval | Time interval in seconds to check the connector failure grace period has completed. Used by the Connect auto-restart failed connectors functionality. It needs to be a value between (1,600] | int | 10 |
lenses.provisioning.path | Folder on the filesystem containing the provisioning data. See the provisioning docs for further details | string | |
lenses.provisioning.interval | Time interval in seconds to check for changes on the provisioning resources | int | |
lenses.schema.registry.client.http.retryOnTooManyRequest | When enabled, Lenses will retry a request whenever the schema registry returns a 429 Too Many Requests | boolean | |
lenses.schema.registry.client.http.maxRetryAwait | Max amount of time to wait whenever a 429 Too Many Requests is returned | duration | |
lenses.schema.registry.client.http.maxRetryCount | Max retry count whenever a 429 Too Many Requests is returned | integer | 2 |
lenses.schema.registry.client.http.rate.type | Specifies if http requests to the configured schema registry should be rate limited. Can be "session" or "unlimited" | "unlimited" or "session" | |
lenses.schema.registry.client.http.rate.maxRequests | Whenever the rate limiter is "session", this configuration will determine the max amount of requests per window size that are allowed | integer | N/A |
lenses.schema.registry.client.http.rate.window | Whenever the rate limiter is "session", this configuration will determine the duration of the window used | duration | N/A |
lenses.schema.connect.client.http.retryOnTooManyRequest | Retry a request whenever a connect cluster returns a 429 Too Many Requests | boolean | |
lenses.schema.connect.client.http.maxRetryAwait | Max amount of time to wait whenever a 429 Too Many Requests is returned | duration | |
lenses.schema.connect.client.http.maxRetryCount | Max retry count whenever a 429 Too Many Requests is returned | integer | 2 |
lenses.connect.client.http.rate.type | Specifies if http requests to the configured connect cluster should be rate limited. Can be "session" or "unlimited" | "unlimited" or "session" | |
lenses.connect.client.http.rate.maxRequests | Whenever the rate limiter is "session", this configuration will determine the max amount of requests per window size that are allowed | integer | N/A |
lenses.connect.client.http.rate.window | Whenever the rate limiter is "session", this configuration will determine the duration of the window used | duration | N/A |
Key | Description | Default | Type | Required |
---|---|---|---|---|
apps.external.http.state.refresh.ms | When registering a runner for an external app, a health-check interval can be specified. If it is not, this default interval is used (value in milliseconds) | 30000 | int | no |
apps.external.http.state.cache.expiration.ms | The last known state of the runner is stored in a cache. The entries in the cache are invalidated after a time defined by this configuration key (value in milliseconds). This value should not be lower than the apps.external.http.state.refresh.ms value | 60000 | int | no |
Hardware & OS
Learn about the hardware & OS requirements for Linux archive installs.
JVM Options
Understand how to customize the Lenses JVM settings.
Logs
Understand and customize Lenses logging.
JMX
Learn how to connect the Agent to JMX for Kafka, Schema Registries, Kafka Connect and others.
It will update the connections state and validate the configuration. If the validation fails, the state will not be updated.
It will only validate the request, not applying any actual change to the system.
It will try to connect to the configured service as part of the validation step.
Configuration in YAML format representing the connections state.
The only allowed name for the Kafka connection is "kafka".
Kafka security protocol.
SSL keystore file path.
Password to the keystore.
Key password for the keystore.
Password to the truststore.
SSL truststore file path.
JAAS Login module configuration for SASL.
Kerberos keytab file path.
Comma separated list of protocol://host:port to use for initial connection to Kafka.
Mechanism to use when authenticated using SASL.
Default port number for metrics connection (JMX and JOLOKIA).
The username for metrics connections.
The password for metrics connections.
Flag to enable SSL for metrics connections.
HTTP URL suffix for Jolokia or AWS metrics.
HTTP Request timeout (ms) for Jolokia or AWS metrics.
Metrics type.
Additional properties for Kafka connection.
Mapping from node URL to metrics URL, allows overriding metrics target on a per-node basis.
DEPRECATED.
The only allowed name for a schema registry connection is "schema-registry".
Path to SSL keystore file
Password to the keystore
Key password for the keystore
Password to the truststore
Path to SSL truststore file
List of schema registry urls
Source for the basic auth credentials
Basic auth user information
Metrics type
Flag to enable SSL for metrics connections
The username for metrics connections
The password for metrics connections
Default port number for metrics connection (JMX and JOLOKIA)
Additional properties for Schema Registry connection
Mapping from node URL to metrics URL, allows overriding metrics target on a per-node basis
DEPRECATED
HTTP URL suffix for Jolokia metrics
HTTP Request timeout (ms) for Jolokia metrics
Username for HTTP Basic Authentication
Password for HTTP Basic Authentication
Enables Schema Registry hard delete
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
The username to connect to the Elasticsearch service.
The password to connect to the Elasticsearch service.
The nodes of the Elasticsearch cluster to connect to, e.g. https://hostname:port. Use the tab key to specify multiple nodes.
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
An API Token for accessing PagerDuty's REST API.
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
The Datadog site.
The Datadog API key.
The Datadog application key.
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
The Slack endpoint to send the alert to.
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
Comma separated list of Alert Manager endpoints.
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
The host name.
An optional port number to be appended to the hostname.
Set to true in order to set the URL scheme to https. Will otherwise default to http.
An array of (secret) strings to be passed over to alert channel plugins.
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
Way to authenticate against AWS.
Access key ID of an AWS IAM account.
Secret access key of an AWS IAM account.
AWS region to connect to. If not provided, this is deferred to client configuration.
Specifies the session token value that is required if you are using temporary security credentials that you retrieved directly from AWS STS operations.
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
List of Kafka Connect worker URLs.
Username for HTTP Basic Authentication.
Password for HTTP Basic Authentication.
Flag to enable SSL for metrics connections.
The username for metrics connections.
The password for metrics connections.
Metrics type.
Default port number for metrics connection (JMX and JOLOKIA).
AES256 Key used to encrypt secret properties when deploying Connectors to this ConnectCluster.
Name of the ssl algorithm. If empty default one will be used (X509).
SSL keystore file.
Password to the keystore.
Key password for the keystore.
Password to the truststore.
SSL truststore file.
Mapping from node URL to metrics URL, allows overriding metrics target on a per-node basis.
DEPRECATED.
HTTP URL suffix for Jolokia metrics.
HTTP Request timeout (ms) for Jolokia metrics.
The only allowed name for a schema registry connection is "schema-registry".
Way to authenticate against AWS. The value for this project corresponds to the AWS connection name of the AWS connection that contains the authentication mode.
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
Access key ID of an AWS IAM account. The value for this project corresponds to the AWS connection name of the AWS connection that contains the access key ID.
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
Secret access key of an AWS IAM account. The value for this project corresponds to the AWS connection name of the AWS connection that contains the secret access key.
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
Specifies the session token value that is required if you are using temporary security credentials that you retrieved directly from AWS STS operations.
Enter the Amazon Resource Name (ARN) of the Glue schema registry that you want to connect to.
The period in milliseconds that Lenses will be updating its schema cache from AWS Glue.
The size of the schema cache.
Type of schema registry connection.
Default compatibility mode to use on Schema creation.
The only allowed name for the Zookeeper connection is "zookeeper".
List of zookeeper urls.
Zookeeper /znode path.
Zookeeper connection session timeout.
Zookeeper connection timeout.
Metrics type.
Default port number for metrics connection (JMX and JOLOKIA).
The username for metrics connections.
The password for metrics connections.
Flag to enable SSL for metrics connections.
HTTP URL suffix for Jolokia metrics.
HTTP Request timeout (ms) for Jolokia metrics.
Mapping from node URL to metrics URL, allows overriding metrics target on a per-node basis.
DEPRECATED.
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
The Postgres hostname.
The port number.
The database to connect to.
The user name.
The password.
The SSL connection mode as detailed in https://jdbc.postgresql.org/documentation/head/ssl-client.html.
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
An Integration Key for PagerDuty's service with Events API v2 integration type.
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
The host name for the HTTP Event Collector API of the Splunk instance.
The port number for the HTTP Event Collector API of the Splunk instance.
Use SSL.
This is not encouraged but is required for a Splunk Cloud Trial instance.
HTTP event collector authorization token.
The only allowed name for the Kerberos connection is "kerberos".
Kerberos krb5 config
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
Attached file(s) needed for establishing the connection. The name of each file part is used as a reference in the manifest.
Successfully updated connection state
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$
An alphanumeric or dash non-empty string.
^[a-zA-Z0-9-]+$