Metrics
Expose K2K metrics.
Found an issue? Feed it back to us on GitHub, on Slack, via Ask Marios, or by email.
To execute K2K, you must agree to the EULA and secure a free license.

Accept the EULA by setting license.acceptEula to true.

Secure a free license by:
- emailing [email protected] to receive a token within seconds
- setting license.token with the received token
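Both settings can be supplied in the K2K configuration. As a minimal sketch, assuming the keys live under a top-level license block in your configuration file (adjust to how your deployment provides configuration, for example Helm values), this could look like:

license:
  acceptEula: true
  token: "<token received by email>"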
K2K provides deep insights into its runtime performance and operational health using the OpenTelemetry (OTEL) framework. This standardized approach allows you to seamlessly integrate K2K's metrics into your existing observability platforms, enabling robust monitoring, alerting, and performance analysis.
Built-in Exporter Support
The application includes out-of-the-box support for several metric exporters. You can configure one of the following to direct telemetry data to your chosen destination:
- Prometheus: Exposes a standard /metrics endpoint that can be scraped by a Prometheus server; ideal for time-series monitoring and alerting in production environments.
- Kafka: Exposes the metrics on a Kafka topic. Lenses, serving as the control plane, collects the data and delivers it to the user.
- Console: For development and debugging purposes only. This exporter prints all collected metrics directly to standard output, providing a simple way to inspect telemetry data locally without requiring any external backend.
- OTLP: Allows exporting data to any OTLP-compatible platform.
Enabling and Configuring Metrics
To activate metrics collection, you must launch the application with the telemetry flag and provide the necessary OpenTelemetry configuration.
1. Enable telemetry: Use the -t flag with the k2k start command to initialize the telemetry module.
2. Configure the exporter: Specify which exporter to use and its settings. This is typically managed through standard OpenTelemetry environment variables or a dedicated configuration file, depending on your deployment (see the sketch below).
Publishing Prometheus Metrics
To configure K2K to expose metrics for a Prometheus server, you would set the appropriate environment variables and then run the start command with the telemetry flag.
services:
  k2k:
    image: "lensting/k2k:0.0.11-alpha"
    volumes:
      - ".:/pipelines"
    environment:
      OTEL_SERVICE_NAME: "k2k"
      OTEL_METRICS_EXPORTER: prometheus
      OTEL_EXPORTER_PROMETHEUS_HOST: "0.0.0.0"
      OTEL_TRACES_EXPORTER: none
      OTEL_LOGS_EXPORTER: none
    command:
      - k2k
      - start
      - -f
      - /pipelines/k2k-pipeline.yml
      - -t
      - -g
      - enabled
    ports:
      - 9464:9464
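On the Prometheus side, a simple scrape job pointing at the exposed port completes the setup. A minimal prometheus.yml sketch, assuming Prometheus runs on the same Docker network and can reach the service as k2k on the default exporter port 9464:

scrape_configs:
  - job_name: "k2k"
    static_configs:
      - targets: ["k2k:9464"]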
Publishing Kafka Metrics
services:
  k2k:
    image: "lensting/k2k:0.0.11-alpha"
    volumes:
      - ".:/pipelines"
    environment:
      OTEL_SERVICE_NAME: "k2k"
      OTEL_METRICS_EXPORTER: kafka
      OTEL_TRACES_EXPORTER: none
      OTEL_LOGS_EXPORTER: none
    command:
      - k2k
      - start
      - -f
      - /pipelines/k2k-pipeline.yml
      - -t
      - -g
      - enabled
    ports:
      - 9464:9464
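The metrics land on a Kafka topic that Lenses consumes as the control plane, but you can also inspect it directly while debugging. A minimal sketch using the standard Kafka console consumer; the bootstrap address and topic name are placeholders for whatever your Kafka exporter is configured to use:

# Placeholders: substitute your broker address and the metrics topic used by the exporter
kafka-console-consumer.sh \
  --bootstrap-server localhost:9092 \
  --topic <metrics-topic> \
  --from-beginning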
Publishing Console Metrics
services:
  k2k:
    image: "lensting/k2k:0.0.11-alpha"
    volumes:
      - ".:/pipelines"
    environment:
      OTEL_SERVICE_NAME: "k2k"
      OTEL_METRICS_EXPORTER: console
      OTEL_TRACES_EXPORTER: none
      OTEL_LOGS_EXPORTER: none
    command:
      - k2k
      - start
      - -f
      - /pipelines/k2k-pipeline.yml
      - -t
      - -g
      - enabled
    ports:
      - 9464:9464
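Because the console exporter writes to standard output, the collected metrics can be inspected directly from the container logs, for example:

docker compose logs -f k2k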
Publishing OTLP Metrics
services:
  k2k:
    image: "lensting/k2k:0.0.11-alpha"
    volumes:
      - ".:/pipelines"
    environment:
      OTEL_SERVICE_NAME: "k2k"
      OTEL_METRICS_EXPORTER: otlp
      OTEL_TRACES_EXPORTER: none
      OTEL_LOGS_EXPORTER: none
      OTEL_METRIC_EXPORT_INTERVAL: 15000
      OTEL_EXPORTER_OTLP_PROTOCOL: "http/protobuf"
      OTEL_EXPORTER_OTLP_METRICS_ENDPOINT: "http://localhost:9090/api/v1/otlp/v1/metrics"
    command:
      - k2k
      - start
      - -f
      - /pipelines/k2k-pipeline.yml
      - -t
      - -g
      - enabled
    ports:
      - 9464:9464
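The endpoint above targets Prometheus's native OTLP ingestion path, which has to be enabled on the Prometheus server (via its OTLP receiver feature flag on recent releases); note also that localhost is resolved inside the K2K container, so the endpoint normally points at a host or service name reachable from it. Any other OTLP-compatible backend works too. As a minimal sketch, an OpenTelemetry Collector could receive these metrics over HTTP and print them for inspection with a configuration like:

receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318
exporters:
  debug:
    verbosity: detailed
service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [debug]

With this, OTEL_EXPORTER_OTLP_METRICS_ENDPOINT would point at http://<collector-host>:4318/v1/metrics instead.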