This section describes how to configure alerting in Lenses.
Alert rules are configurable in Lenses; alerts that are generated can then be sent to specific channels. Several different integration points are available for channels.
There is a set of built-in alerting rules for the core connections: Kafka, Schema Registry, Zookeeper, and Kafka Connect. See infrastructure health.
Data Produced rules are user-defined alerts on the amount of data arriving on a topic over time. Users can choose to be notified if the topic receives either:
more than, or
less than
a defined threshold.
Consumer rules alert on consumer group lag. Users can define:
a lag threshold
on a topic
for a consumer group
which channels to send an alert to
Lenses allows operators to configure alerting on Connectors. Operators can:
Set channels to send alerts to
Enable auto restart of connector tasks. Lenses will restart failed tasks with a grace period.
The sequence is:
1. Lenses watches for task failures.
2. If a task fails, Lenses restarts it.
3. If the restart is successful, Lenses resets the restart attempts back to zero.
4. If the restart is not successful, Lenses increments the restart attempts, waits for the grace period, and tries another restart if the task is still in a failed state.
5. Step 4 is repeated until the maximum number of restart attempts is reached. Lenses will only reset the restart attempts to zero after the tasks have been brought back to a healthy state by manual intervention.
The number of times Lenses attempts to restart is based on the value set in the alert setting.
The restart attempts can be tracked in the Audits page.
To view events go to Environments->[Your Environment]->Admin -> Alerts -> Events.
This page describes consumer group monitoring.
Consumer group monitoring is a key part of operating Kafka. Lenses allows operators to view and manage consumer groups.
The connector and SQL Processor pages allow you to navigate straight to the corresponding consumer groups.
The Explore screen also shows the active consumer groups on each topic.
To view consumer groups and the max and min lag across the partitions go to Environments->[Your Environment]->Workspace->Monitor->Consumers. You can also see this information for each topic in the Environments->[Your Environment]->Explore screen->Select topic->Partition tab.
Select or search for a consumer group. You can also search for consumer groups that are not active.
To view alerts for a consumer group, click the view alerts button. Resetting consumer group offsets is only possible if the consumer group is not active, i.e. the application, such as a Connector or SQL Processor, must be stopped. Enable Show inactive consumers to find them. To reset the offsets for a single partition:
Select the consumer group
Select the partition to reset the offsets for
Specify the offset
To reset a consumer group (all clients in the group), select the consumer group, select Actions, and Change Multiple offsets. This will reset all clients in the consumer group to one of:
To the start
To the last offset
To a specific timestamp
Monitoring the health of your infrastructure.
Lenses provides monitoring of the health of your infrastructure via JMX.
Additionally, Lenses has a number of built-in alerts for these services.
Lenses monitors (by default every 10 seconds) your entire streaming data platform infrastructure and has the following alert rules built-in:
Rule | This rule fires when |
---|---|
Lenses License | The Lenses license is invalid |
Kafka broker is down | A Kafka broker from the cluster is not healthy |
Zookeeper node is down | A Zookeeper node is not healthy |
Connect Worker is down | A Kafka Connect worker node is not healthy |
Schema Registry is down | A Schema Registry instance is not healthy |
Under replicated partitions | The Kafka cluster has 1 or more under-replicated partitions |
Partitions offline | The Kafka cluster has 1 or more partitions offline (partitions without an active leader) |
Active Controller | The Kafka cluster has 0 or more than 1 active controllers |
Multiple Broker versions | The Kafka cluster is under a version upgrade, and not all brokers have been upgraded |
File-open descriptors on Brokers | A Kafka broker has an alarming number of open file descriptors: the operating system is exceeding 90% of the available file descriptors |
Average % the request handler is idle | The average fraction of time the request handler threads are idle is dangerously low. The alert is HIGH when the value is smaller than 10%, and CRITICAL when it is smaller than 2% |
Fetch requests failure | Fetch requests are failing. If the rate of failures per second is > 10% the alert level is set to CRITICAL, otherwise it is set to HIGH |
Produce requests failure | Produce requests are failing. When the value is > 10% the alert level is set to CRITICAL, otherwise it is set to HIGH |
Broker disk usage | A Kafka broker's disk usage is greater than the cluster average. The built-in threshold is 1 GByte |
Leader imbalance | A Kafka broker has more leader replicas than the average broker in the cluster |
For versions below Lenses 6.0, omit the environment selection.
If you change your Kafka cluster size or replace an existing Kafka broker with another, Lenses will raise an active alert as it will detect that a broker of your Kafka cluster is no longer available. If the Kafka broker has been intentionally removed, then decommission it:
Navigate to Environments->[Your Environment]->Workspace->Services.
Select the broker, open the options menu under Actions, and click the Decommission option.
This section describes the monitoring and alerting features of Lenses.
This page describes the alert references for Lenses.
Alert | Alert Identifier | Description | Category | Instance | Severity |
---|---|---|---|---|---|
Kafka Broker is down | 1000 | Raised when the Kafka broker is not part of the cluster for at least 1 minute, e.g. host-1, host-2 | Infrastructure | brokerID | INFO, CRITICAL |
Zookeeper Node is down | 1001 | Raised when the Zookeeper node is not reachable. This information is based on the Zookeeper JMX: if it responds to JMX queries it is considered to be running. | Infrastructure | service name | INFO, CRITICAL |
Connect Worker is down | 1002 | Raised when the Kafka Connect worker is not responding to the API call for /connectors for more than 1 minute. | Infrastructure | worker URL | MEDIUM |
Schema Registry is down | 1003 | Raised when the Schema Registry node is not responding to the root API call for more than 1 minute. | Infrastructure | service URL | HIGH, INFO |
Under replicated partitions | 1005 | Raised when there are (topic, partitions) not meeting the replication factor set. | Infrastructure | partitions | HIGH, INFO |
Partitions offline | 1006 | Raised when there are partitions which do not have an active leader. These partitions are not writable or readable. | Infrastructure | brokers | HIGH, INFO |
Active Controllers | 1007 | Raised when the number of active controllers is not 1. Each cluster should have exactly one controller. | Infrastructure | brokers | HIGH, INFO |
Multiple Broker Versions | 1008 | Raised when there are brokers in the cluster running on different Kafka versions. | Infrastructure | brokers versions | HIGH, INFO |
File-open descriptors high capacity on Brokers | 1009 | Raised when a broker has too many open file descriptors. | Infrastructure | brokerID | HIGH, INFO, CRITICAL |
Average % the request handler is idle | 1010 | Raised on the average fraction of time the request handler threads are idle. When the value is smaller than 0.02 the alert level is CRITICAL; when the value is smaller than 0.1 the alert level is HIGH. | Infrastructure | brokerID | HIGH, INFO, CRITICAL |
Fetch requests failure | 1011 | Raised when the fetch request rate (the value is per second) for requests that failed is greater than a threshold. If the value is greater than 0.1 the alert level is set to CRITICAL, otherwise it is set to HIGH. | Infrastructure | brokerID | HIGH, INFO, CRITICAL |
Produce requests failure | 1012 | Raised when the produce request rate (the value is per second) for requests that failed is greater than a threshold. If the value is greater than 0.1 the alert level is set to CRITICAL, otherwise it is set to HIGH. | Infrastructure | brokerID | HIGH, INFO, CRITICAL |
Broker disk usage is greater than the cluster average | 1013 | Raised when the Kafka broker disk usage is greater than the cluster average. A default threshold of 1GB disk usage is provided. | Infrastructure | brokerID | MEDIUM, INFO |
Leader Imbalance | 1014 | Raised when the Kafka broker has more leader replicas than the cluster average. | Infrastructure | brokerID | INFO |
Consumer Lag exceeded | 2000 | Raised when the consumer lag exceeds the threshold on any partition. | Consumers | topic | HIGH, INFO |
Connector deleted | 3000 | Raised when a connector was deleted. | Kafka Connect | connector name | INFO |
Topic has been created | 4000 | Raised when a new topic was added. | Topics | topic | INFO |
Topic has been deleted | 4001 | Raised when a topic was deleted. | Topics | topic | INFO |
Topic data has been deleted | 4002 | Raised when records from a topic were deleted. | Topics | topic | INFO |
Data Produced | 5000 | Raised when the data produced on a topic doesn't match the expected threshold. | Data Produced | topic | LOW, INFO |
Connector Failed | 6000 | Raised when a connector, or any worker in a connector, is down. | Apps | connector | LOW, INFO |
This section describes the integrations available for alerting.
Alerts are sent to channels.
See for integration into your CI/CD pipelines.
To send alerts to AWS CloudWatch, you first need an AWS connection. Go to Environments->[Your Environment]->Admin->Connections->New Connection->AWS. Enter your AWS credentials.
Rather than enter your AWS credentials you can use the .
Next, go to Environments->[Your Environment]->Admin->Alerts->Channels->New Channel->AWS CloudWatch.
Select an AWS connection.
To send alerts to Datadog, you first need a Datadog connection. Go to Environments->[Your Environment]->Admin->Connections->New Connection->Datadog. Enter your API Key, Application Key, and Site.
Next, go to Environments->[Your Environment]->Admin->Alerts->Channels->New Channel->Datadog.
Select a Datadog connection.
To send alerts to PagerDuty, you first need a PagerDuty connection. Go to Environments->[Your Environment]->Admin->Connections->New Connection->PagerDuty. Enter your Service Integration Key.
Next, go to Environments->[Your Environment]->Admin->Alerts->Channels->New Channel->PagerDuty.
Select the PagerDuty connection.
To send alerts to Prometheus Alertmanager, you first need a Prometheus connection. Go to Environments->[Your Environment]->Admin->Connections->New Connection->Prometheus.
Next, go to Environments->[Your Environment]->Admin->Alerts->Channels->New Channel->Prometheus Alertmanager.
Select your Prometheus connection
Set the Source
Set the GeneratorURL for your Alert Manager instance
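For context, Alertmanager's HTTP API accepts alerts of roughly the following shape. This is only an illustrative sketch of a generic Alertmanager alert, not the exact payload Lenses emits; the label and annotation values are example assumptions:

```json
[
  {
    "labels": { "alertname": "BrokerStatus", "severity": "critical", "source": "lenses" },
    "annotations": { "summary": "Kafka broker is down" },
    "generatorURL": "https://lenses.example.com/alerts"
  }
]
```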
To send alerts to Slack, you first need a Slack connection. Go to Environments->[Your Environment]->Admin->Connections->New Connection->Slack. Enter your Slack webhook URL.
Next, go to Environments->[Your Environment]->Admin->Alerts->Channels->New Channel->Slack.
Enter the Slack channel you want to send alerts to.
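For context, Slack incoming webhooks accept a simple JSON payload. A minimal sketch of the kind of message a webhook URL receives (the exact payload Lenses sends may differ):

```json
{ "text": "HIGH alert: Kafka broker is down" }
```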
Webhooks allow you to send alerts to any service implementing them; they are very flexible.
First, you need a Webhook connection. Go to Environments->[Your Environment]->Admin->Connections->New Connection->Webhook.
Enter the URL, port and credentials.
Create a Channel to use the connection. Go to Environments->[Your Environment]->Admin->Alerts->Channels->New Channel.
Choose a name for your Channel instance.
Select your connection.
Set the HTTP method to use.
Set the Request path: a URI-encoded request path, which may include a query string. Supports alert-variable interpolation.
Set the HTTP Headers
Set the Body payload
In Request path, HTTP Headers, and Body payload there is the possibility of using template variables, which will be translated to alert-specific fields. To use template variables, use the format {{VARIABLE}}, e.g. {{LEVEL}}.
Supported template variables:
LEVEL - alert level (INFO, LOW, MEDIUM, HIGH, CRITICAL).
CATEGORY - alert category (Infrastructure, Consumers, Kafka Connect, Topics, Producers).
INSTANCE - the alert instance (broker URL / topic name etc.).
SUMMARY - alert summary; the same content as in the Alert Events tab.
TIMESTAMP - the time the alert was raised.
ID - alert global id (e.g. 1000 for the BrokerStatus alert).
CREDS - CREDS[0] etc.; variables specified in the connection's Credentials as a list of values separated by a comma.
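For example, a Body payload template can combine several of these variables. The JSON below is an illustrative sketch; the field names are arbitrary choices, not a format required by Lenses:

```json
{
  "level": "{{LEVEL}}",
  "category": "{{CATEGORY}}",
  "instance": "{{INSTANCE}}",
  "summary": "{{SUMMARY}}",
  "timestamp": "{{TIMESTAMP}}",
  "id": "{{ID}}",
  "token": "{{CREDS[0]}}"
}
```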
To configure real-time email alerts you can leverage Webhooks, for example with the following services:
Twilio and SendGrid
Zapier
Create a webhook connection for SendGrid, with api.sendgrid.com as the host, and enable HTTPS
Configure a channel to use the connection you just created
Set the method to Post
Set the request path to /v3/mail/send
Set the Headers to:
HTTP Headers |
---|
Authorization: Bearer [your-Sendgrid-API-Key] |
Content-Type: application/json |
Set the payload to be
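A minimal sketch of a SendGrid v3 mail-send payload is shown below; [recipient-email-address] is a placeholder of our own, and the use of template variables in the subject and content is optional:

```json
{
  "personalizations": [
    { "to": [ { "email": "[recipient-email-address]" } ] }
  ],
  "from": { "email": "[sender-email-address]" },
  "subject": "Lenses alert: {{LEVEL}}",
  "content": [
    { "type": "text/plain", "value": "{{SUMMARY}}" }
  ]
}
```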
Change the above payload according to your requirements, and remember that the [sender-email-address] needs to be the same email address you registered during the SendGrid Sender Authentication setup process.
Create a webhook connection for Zapier, with hooks.zapier.com as the host, and enable HTTPS
Configure a channel to use the connection you just created
Set the method to Post
Set the request path to the webhook URL from your Zapier account
Set the Headers to:
HTTP Headers |
---|
Content-Type: application/json |
Set the payload to be
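Zapier catch hooks accept arbitrary JSON, so the payload shape is up to you. A minimal sketch using the alert template variables (field names are illustrative):

```json
{
  "level": "{{LEVEL}}",
  "summary": "{{SUMMARY}}",
  "instance": "{{INSTANCE}}"
}
```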
To create a webhook in your MS Teams workspace you can use .
At the end of the process you get a URL of the format:
https://outlook.office.com/webhook2/<secret-token-by-ms>/IncomingWebhook/<secret-token-by-ms>
You'll need the second part:
/webhook2/<secret-token-by-ms>/IncomingWebhook/<secret-token-by-ms>
Create a new Webhook Connection, set the host to outlook.office.com and enable HTTPS
Configure a new channel, using this connection
Set the Method to POST
Set the Request Path to the second part of the URL you received from MS Teams
/webhook2/<secret-token-by-ms>/IncomingWebhook/<secret-token-by-ms>
In the body set:
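A minimal sketch of an MS Teams incoming-webhook body, using the alert template variables (the message text is your choice):

```json
{ "text": "{{LEVEL}} alert on {{INSTANCE}}: {{SUMMARY}}" }
```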
HTTP Headers |
---|
Content-Type: application/json |
See Zapier and follow the blog post .