Installing Community Edition Using Helm
Step-by-step guide to installing Lenses CE with Helm charts
Note: These instructions are NOT for production environments. They are intended for dev or test environment setups. Please see here for details on installing Lenses for more secure environments.
Tool Requirements
Kubernetes cluster and kubectl (you can use something like Minikube, K3s, or Docker Desktop in Kubernetes mode, but you will need to allocate at least 8 GB of RAM and 6 CPUs)
Helm
A Kafka cluster (these instructions assume you already have one running)
Adding the Lenses Helm Repository
From a workstation with kubectl and Helm installed, add the Lenses Helm repository:
helm repo add lensesio https://helm.repo.lenses.io/
helm repo update
Step 1: Install and Configure Postgres
If you already have Postgres installed, skip to Step 2: Install Lenses HQ.
The following commands create the postgres-system namespace, then apply a PersistentVolumeClaim, Postgres deployment, service, and a Job that initializes the lenses_hq and lenses_agent databases.
Note: This configuration does not specify a storageClassName, so Kubernetes will use your cluster's default storage class. If your cluster does not have a default storage class configured, you may need to add storageClassName: <your-storage-class> to the PVC spec (for example, hostpath for Docker Desktop or local-path for K3s).
Warning: Using simple cleartext passwords like in the example below is NEVER recommended for anything other than a test or demo environment.
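The manifests below are one possible sketch of this setup; the image tag, credentials, storage size, and database-creation logic are illustrative, and the cleartext password is for demo purposes only. Save the file as postgres.yaml and apply it with kubectl apply -f postgres.yaml:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: postgres-system
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
  namespace: postgres-system
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  namespace: postgres-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:16
          env:
            - name: POSTGRES_USER
              value: lenses
            - name: POSTGRES_PASSWORD
              value: changeme        # cleartext for demo/test only
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: postgres-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: postgres-system
spec:
  selector:
    app: postgres
  ports:
    - port: 5432
---
apiVersion: batch/v1
kind: Job
metadata:
  name: postgres-init
  namespace: postgres-system
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
        - name: init
          image: postgres:16
          env:
            - name: PGPASSWORD
              value: changeme        # must match POSTGRES_PASSWORD above
          command: ["/bin/sh", "-c"]
          args:
            - |
              # Wait for Postgres, then create the two Lenses databases idempotently
              until pg_isready -h postgres -U lenses; do sleep 2; done
              for db in lenses_hq lenses_agent; do
                psql -h postgres -U lenses -tc \
                  "SELECT 1 FROM pg_database WHERE datname='$db'" | grep -q 1 \
                  || createdb -h postgres -U lenses "$db"
              done
```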
Wait for the Postgres deployment and the database init job to complete:
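For example, if the deployment is named postgres and the init job postgres-init (adjust to the names in your manifests):

```shell
kubectl -n postgres-system rollout status deployment/postgres --timeout=120s
kubectl -n postgres-system wait --for=condition=complete job/postgres-init --timeout=120s
```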
Verify everything is running:
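```shell
kubectl -n postgres-system get pods
```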
You should see the Postgres pod running and the init job completed:
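Illustrative output (pod names, hashes, and ages will differ on your cluster):

```
NAME                        READY   STATUS      RESTARTS   AGE
postgres-7d9f6c8b5-x2k4p    1/1     Running     0          2m
postgres-init-q8r7z         0/1     Completed   0          90s
```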
Troubleshooting: If the Postgres pod shows Pending status with an event like "pod has unbound immediate PersistentVolumeClaims", your cluster likely does not have a default storage class, or the default does not support dynamic provisioning. Run kubectl get storageclass to see the available options, then delete the PVC and re-apply with the correct storageClassName added to the PVC spec.
Step 2: Install Lenses HQ
Create the Lenses namespace and install HQ with the Helm chart:
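A minimal sketch, assuming the HQ chart is published as lensesio/lenses-hq and accepts its Postgres connection settings via a values file. The value keys shown here are hypothetical; check the chart's values.yaml for the real ones:

```shell
kubectl create namespace lenses

# hq-values.yaml -- hypothetical keys, verify against the chart's documented values
cat > hq-values.yaml <<'EOF'
storage:
  postgres:
    host: postgres.postgres-system.svc.cluster.local
    port: 5432
    username: lenses
    password: changeme          # demo only, never use cleartext in production
    database: lenses_hq
EOF

helm install lenses-hq lensesio/lenses-hq -n lenses -f hq-values.yaml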
Verify that Lenses HQ is running:
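```shell
kubectl -n lenses get pods
```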
You should see something similar to this:
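Illustrative output (the pod name depends on the chart's release and deployment names):

```
NAME                         READY   STATUS    RESTARTS   AGE
lenses-hq-6b5d9c7f4-kq2mz    1/1     Running   0          1m
```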
Accessing Lenses HQ
For quick local access, use port-forwarding:
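Assuming the HQ service is named lenses-hq and listens on port 8080 (adjust both to match your install):

```shell
kubectl -n lenses port-forward svc/lenses-hq 8080:8080
```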
Then open your browser to http://localhost:8080.
For a more permanent setup, choose one of the ingress options below.
Option A: Traefik (Recommended)
Traefik is a modern, actively maintained ingress controller and is the default in K3s, so if you are running K3s you already have it installed and can skip straight to applying the Ingress resource below.
Traefik handles WebSocket connections out of the box, which is important for the Lenses Agent communication channel on port 10000.
If Traefik is not already installed, add it via Helm:
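```shell
helm repo add traefik https://traefik.github.io/charts
helm repo update
helm install traefik traefik/traefik -n traefik --create-namespace
```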
Wait for Traefik to be ready:
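```shell
kubectl -n traefik rollout status deployment/traefik --timeout=120s
```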
Apply the Lenses HQ Ingress resource:
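A sketch of the Ingress; the hostname is a placeholder and the backend service name and port are assumptions to match against your HQ install:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: lenses-hq
  namespace: lenses
spec:
  ingressClassName: traefik
  rules:
    - host: lenses.example.com      # replace with your hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: lenses-hq     # hypothetical service name, adjust to your install
                port:
                  number: 8080
```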
Option B: Kubernetes Gateway API (Envoy Gateway)
The Kubernetes Gateway API is now GA and is the official successor to the Ingress spec. This is the forward-looking choice if you want to align with where the ecosystem is heading. Envoy Gateway is the CNCF-backed implementation.
Install the Gateway API CRDs and Envoy Gateway:
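One way to do this, pinning release versions (the versions shown are examples; substitute the ones you have verified):

```shell
# Gateway API CRDs (standard channel)
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.1/standard-install.yaml

# Envoy Gateway via its OCI Helm chart
helm install eg oci://docker.io/envoyproxy/gateway-helm \
  --version v1.2.1 -n envoy-gateway-system --create-namespace
```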
Wait for Envoy Gateway to be ready:
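```shell
kubectl wait --timeout=5m -n envoy-gateway-system deployment/envoy-gateway --for=condition=Available
```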
Create a GatewayClass and Gateway:
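For example (the controllerName is Envoy Gateway's documented controller identifier; the Gateway name and listener port are illustrative):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: eg
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: lenses-gateway
  namespace: lenses
spec:
  gatewayClassName: eg
  listeners:
    - name: http
      protocol: HTTP
      port: 80
```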
Create HTTPRoute resources for HQ and the Agent channel:
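A sketch of the two routes; the backend service name and the /agents path prefix are assumptions to adjust against your HQ install (the Agent channel port 10000 is from the note above):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: lenses-hq
  namespace: lenses
spec:
  parentRefs:
    - name: lenses-gateway
  rules:
    - backendRefs:
        - name: lenses-hq           # hypothetical service name
          port: 8080
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: lenses-agent-channel
  namespace: lenses
spec:
  parentRefs:
    - name: lenses-gateway
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /agents          # hypothetical path for the Agent channel
      backendRefs:
        - name: lenses-hq
          port: 10000
```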
Installing Lenses Agent
Once you have successfully logged in to Lenses HQ, you can start setting up your agent. See the Community Edition walkthrough for login details.
Click the Add New Environment button on the Environments screen, then follow the in-product instructions to set up your new environment.
Be sure to save your Agent Key from the screen that follows. It can only be displayed once.
Install the Lenses Agent with the following command, replacing YOUR_AGENT_KEY_HERE with your actual Agent Key:
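For example, assuming the agent chart is published as lensesio/lenses-agent and accepts the key via a Helm value (the value path shown is hypothetical; check the chart's values.yaml):

```shell
helm install lenses-agent lensesio/lenses-agent -n lenses \
  --set hq.agentKey="YOUR_AGENT_KEY_HERE"   # hypothetical value key
```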
Give Kubernetes time to install the Lenses Agent, then go back to the Lenses HQ UI and verify your Kafka cluster is connected. You can now use Lenses on your own cluster!