This page provides examples for defining a connection to Kerberos.
This page gives examples of the provisioning YAML for Lenses.
To use with Helm, place the examples under lenses.provisioning.connections in the values file.
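As a rough sketch, the Helm values nesting could look like the following; the connection body shown is only a placeholder, and the exact YAML shape may differ between Lenses versions (see the connection-specific examples further down for the actual properties):

```yaml
# values.yaml (Helm) - illustrative nesting only
lenses:
  provisioning:
    connections:
      # place the connection definitions from this page here, e.g.:
      kafka:
        - name: kafka
          version: 1
          configuration:
            # ... connection properties (see the examples below)
```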
This page describes how to configure JMX metrics for Connections in Lenses.
All core services (Kafka, Schema Registry, Kafka Connect, Zookeeper) use the same set of properties for monitoring.
The Agent will discover all the brokers by itself and will try to fetch metrics using metricsPort, metricsCustomUrlMappings and other properties (if specified).
The same port is used for all brokers/workers/nodes. No SSL, no authentication.
Such a configuration means that the Agent will try to connect over JMX to every kafkaBootstrapServers.host:metricsPort pair; following the example, that would be my-kafka-host-0:9581.
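The metrics part of such a connection could look roughly as follows. This is a fragment only (the rest of the Kafka connection definition is omitted), and the metricsType/metricsSsl property names and values are assumptions that may differ in your Lenses version:

```yaml
# inside the Kafka connection's configuration block (fragment)
metricsType:
  value: JMX        # assumed property name/value
metricsPort:
  value: 9581       # same JMX port for every broker
metricsSsl:
  value: false      # no SSL, no authentication
```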
For Jolokia the Agent supports two types of requests: GET (JOLOKIAG) and POST (JOLOKIAP).
For Jolokia, each entry value in metricsCustomUrlMappings must contain the protocol.
The same port is used for all brokers/workers/nodes. No SSL, no authentication.
Jolokia monitoring works on top of the HTTP protocol. To fetch metrics, the Agent performs either a GET or a POST request. The HTTP request timeout can be configured with the httpRequestTimeout property (a value in milliseconds); its default is 20 seconds.
The default suffix for Jolokia endpoints is /jolokia/, so that is the value used unless specified otherwise. If the suffix differs in your deployment, it can be customized with the metricsHttpSuffix field.
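A minimal sketch of a Jolokia-based metrics configuration, assuming the same provisioning shape as in the other examples (the metricsType property name and its JOLOKIAG value are assumptions; the port and timeout values are illustrative):

```yaml
kafka:
  - name: kafka
    version: 1
    configuration:
      kafkaBootstrapServers:
        value:
          - PLAINTEXT://my-kafka-host-0:9092
      protocol:
        value: PLAINTEXT
      metricsType:
        value: JOLOKIAG   # GET requests; JOLOKIAP would use POST
      metricsPort:
        value: 8778       # illustrative Jolokia agent port
      httpRequestTimeout:
        value: 30000      # ms; the default is 20 seconds
      metricsHttpSuffix:
        value: /jolokia/  # override only if your endpoints use a different suffix
```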
AWS has a predefined metrics configuration. The Agent hits the Prometheus endpoint on port 11001 for each broker. The AWS metrics connection can be customized in Lenses with the metricsUsername, metricsPassword, httpRequestTimeout, metricsHttpSuffix, metricsCustomUrlMappings and metricsSsl properties, but this is rarely necessary: AWS has its own standard and it is unlikely to change. Customization is only possible through the API or CLI; the UI does not support it.
There is also a way to configure a custom mapping for each broker (Kafka), node (Schema Registry, Zookeeper) or worker (Kafka Connect).
Such a configuration means that the Agent will try to connect using JMX for:
my-kafka-host-0:9582 - because of metricsCustomUrlMappings
my-kafka-host-1:9581 - because of metricsPort and no entry in metricsCustomUrlMappings
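A hedged sketch of such a configuration is shown below. The exact value format of metricsCustomUrlMappings is an assumption (here it maps a host to its metrics port), as are the metricsType property name and the overall YAML nesting:

```yaml
kafka:
  - name: kafka
    version: 1
    configuration:
      kafkaBootstrapServers:
        value:
          - PLAINTEXT://my-kafka-host-0:9092
          - PLAINTEXT://my-kafka-host-1:9092
      protocol:
        value: PLAINTEXT
      metricsType:
        value: JMX              # assumed property name/value
      metricsPort:
        value: 9581             # used for hosts without a custom mapping
      metricsCustomUrlMappings:
        value:
          my-kafka-host-0: 9582 # overrides metricsPort for this host only
```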
This page provides examples for defining a connection to Zookeeper.
Simple configuration with Zookeeper metrics read via JMX.
With such a configuration, Lenses will use 3 Zookeeper nodes and will try to read their metrics from the following URLs (notice the same port, 9581, used for all of them, as defined by the metricsPort property):
my-zookeeper-host-0:9581
my-zookeeper-host-1:9581
my-zookeeper-host-2:9581
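A minimal sketch of such a Zookeeper connection; the zookeeperUrls and metricsType property names and the YAML nesting are assumptions and may differ in your Lenses version:

```yaml
zookeeper:
  - name: zookeeper
    version: 1
    configuration:
      zookeeperUrls:            # assumed property name
        value:
          - my-zookeeper-host-0:2181
          - my-zookeeper-host-1:2181
          - my-zookeeper-host-2:2181
      metricsType:
        value: JMX              # assumed property name/value
      metricsPort:
        value: 9581             # same JMX port for every node
```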
This page provides examples for defining a connection to Kafka.
If deploying with Helm, put the connections YAML under provisioning in the values file.
With PLAINTEXT, there's no encryption and no authentication when connecting to Kafka.
The only required fields are:
kafkaBootstrapServers - a list of bootstrap servers (brokers). It is recommended to add as many brokers (if available) as convenient to this list for fault tolerance.
protocol - depending on the protocol, other fields might be necessary (see the examples for other protocols).
In the following example, JMX metrics for the Kafka brokers are configured too, assuming that all brokers expose their JMX metrics on the same port (9581), without SSL or authentication.
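A minimal sketch along those lines; kafkaBootstrapServers, protocol and metricsPort are the documented properties, while the metricsType name/value and the YAML nesting are assumptions:

```yaml
kafka:
  - name: kafka
    version: 1
    configuration:
      kafkaBootstrapServers:
        value:
          - PLAINTEXT://my-kafka-host-0:9092
          - PLAINTEXT://my-kafka-host-1:9092
      protocol:
        value: PLAINTEXT
      # JMX metrics on the same port for all brokers, no SSL, no authentication
      metricsType:
        value: JMX              # assumed property name/value
      metricsPort:
        value: 9581
```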
With SSL, the connection to Kafka is encrypted. You can also use SSL and certificates to authenticate users against Kafka.
A truststore (with password) might need to be set explicitly if the global truststore of Lenses does not include the Certificate Authority (CA) of the brokers.
If TLS is used for authentication to the brokers in addition to encryption-in-transit, a keystore (with passwords) is required.
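As an illustration only: the truststore/keystore property names and the file-reference mechanism in this sketch are assumptions and may differ in your Lenses version; paths and passwords are placeholders.

```yaml
kafka:
  - name: kafka
    version: 1
    configuration:
      kafkaBootstrapServers:
        value:
          - SSL://my-kafka-host-0:9093
      protocol:
        value: SSL
      # only needed if the global Lenses truststore does not trust the brokers' CA
      sslTruststore:                         # hypothetical property name
        file: /path/to/kafka.truststore.jks  # hypothetical file reference
      sslTruststorePassword:
        value: changeit
      # only needed when TLS client authentication is used
      sslKeystore:                           # hypothetical property name
        file: /path/to/kafka.keystore.jks    # hypothetical file reference
      sslKeystorePassword:
        value: changeit
      sslKeyPassword:
        value: changeit
```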
There are 2 SASL-based protocols to access Kafka Brokers: SASL_SSL and SASL_PLAINTEXT. They both require a SASL mechanism and JAAS configuration values. What differs is:
whether the transport layer is encrypted (SSL);
the SASL mechanism used for authentication (PLAIN, AWS_MSK_IAM, GSSAPI).
In addition to this, a keytab file might be required, depending on the SASL mechanism (for example when using the GSSAPI mechanism, most often used for Kerberos).
In order to use Kerberos authentication, a Kerberos _Connection_ should be created beforehand.
Apart from that, when encryption-in-transit is used (with SASL_SSL), a truststore might need to be set explicitly if the global truststore of Lenses does not include the CA of the brokers.
The following are a few examples of SASL_PLAINTEXT and SASL_SSL.
Encrypted communication and basic username and password for authentication.
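A hedged sketch of SASL_SSL with the PLAIN mechanism; the saslMechanism and saslJaasConfig property names are assumptions, while the JAAS snippet itself is standard Kafka client configuration with placeholder credentials:

```yaml
kafka:
  - name: kafka
    version: 1
    configuration:
      kafkaBootstrapServers:
        value:
          - SASL_SSL://my-kafka-host-0:9094
      protocol:
        value: SASL_SSL
      saslMechanism:                # assumed property name
        value: PLAIN
      saslJaasConfig:               # assumed property name
        value: |
          org.apache.kafka.common.security.plain.PlainLoginModule required
            username="my-username"
            password="my-password";
```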
When Lenses is running inside AWS and is connecting to an Amazon Managed Kafka (MSK) instance, IAM can be used for authentication.
In order to use Kerberos authentication, a Kerberos _Connection_ should be created beforehand.
No SSL encryption of communication; credentials are communicated to Kafka in clear text.
Lenses interacts with your Kafka cluster via the Kafka Client API. To override the default behavior, use additionalProperties.
By default there shouldn’t be a need to use additional properties; use them only if really necessary, as wrong usage might break the communication with Kafka.
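If an override really is required, a sketch could look like the following; the map-style value format of additionalProperties is an assumption, and the client property shown is just an example:

```yaml
kafka:
  - name: kafka
    version: 1
    configuration:
      kafkaBootstrapServers:
        value:
          - PLAINTEXT://my-kafka-host-0:9092
      protocol:
        value: PLAINTEXT
      # standard Kafka client properties, passed through as-is; use sparingly
      additionalProperties:
        value:
          request.timeout.ms: 30000
```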
Lenses SQL processors use the same Kafka connection information provided to Lenses.
This page provides examples for defining a connection to Schema Registries.
The URLs (nodes) should always have a scheme defined (http:// or https://).
For Basic Authentication, define username and password properties.
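A minimal sketch of a Schema Registry connection with Basic Authentication; the top-level connection key and the schemaRegistryUrls property name are assumptions, and the credentials are placeholders:

```yaml
confluentSchemaRegistry:        # assumed connection key
  - name: schema-registry
    version: 1
    configuration:
      schemaRegistryUrls:       # assumed property name
        value:
          - https://my-schema-registry-host-0:8081
      username:
        value: my-username
      password:
        value: my-password
```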
A custom truststore is needed when the Schema Registry is served over TLS (encryption-in-transit) and the Registry’s certificate is not signed by a trusted CA.
A custom truststore might be necessary too (see above).
By default, Lenses will use hard delete for Schema Registry. To use soft delete, add the following property:
Some connections depend on others. One example is the AWS Glue Schema Registry connection, which depends on an AWS connection. These are examples of provisioning Lenses with an AWS connection named my-aws-connection and an AWS Glue Schema Registry that references it.
This page provides examples for defining a connection to Kafka Connect Clusters.
The URLs (workers) should always have a scheme defined (http:// or https://).
This example uses an optional AES-256 key. The key decodes values encoded with AES-256, enabling encrypted values to be passed to connectors. It is only needed if your cluster uses the AES-256 decryption plugin.
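A hedged sketch of a Kafka Connect cluster connection carrying such a key; the connect connection key and the workers and aes256Key property names are assumptions, and the key value is a placeholder:

```yaml
connect:                        # assumed connection key
  - name: my-connect-cluster
    version: 1
    configuration:
      workers:                  # assumed property name
        value:
          - http://my-connect-worker-0:8083
          - http://my-connect-worker-1:8083
      aes256Key:                # hypothetical property name; only with the AES-256 plugin
        value: my-32-byte-aes-256-key-goes-here   # placeholder
```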
For Basic Authentication, define username and password properties.
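Similarly, a sketch with Basic Authentication, under the same assumptions about the connect connection key and the workers property name:

```yaml
connect:                        # assumed connection key
  - name: my-connect-cluster
    version: 1
    configuration:
      workers:                  # assumed property name
        value:
          - https://my-connect-worker-0:8083
      username:
        value: my-username
      password:
        value: my-password
```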
A custom truststore is needed when the Kafka Connect workers are served over TLS (encryption-in-transit) and their certificates are not signed by a trusted CA.
A custom truststore might be necessary too (see above).