Installing Community Edition Using Helm

These instructions are NOT for production environments; they are intended for dev or test setups. See the Lenses documentation for details on installing Lenses in more secure environments.

Tool Requirements

  1. Kubernetes cluster and kubectl - you can use something like Minikube or Docker Desktop with Kubernetes enabled, but you will need to allocate at least 8 GB of RAM and 6 CPUs

  2. Helm.

  3. Text editor.

  4. Kafka cluster and a Postgres database (setup instructions are provided below if you don't already have these installed)

  5. Kafka Connect and a schema registry (optional)

Adding Required Helm Repositories

From a workstation with kubectl and Helm installed, add the Lenses Helm repository:

helm repo add lensesio https://helm.repo.lenses.io/

If you don't already have a Kafka cluster or Postgres installed you will need to add this repository as well:

helm repo add bitnami https://charts.bitnami.com/bitnami

Once you've added them, run the following command:

helm repo update
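You can confirm both repositories are registered before continuing:

```shell
# Both lensesio and bitnami should appear in the output
helm repo list | grep -E 'lensesio|bitnami'
```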

Installing Postgres - if needed

If you already have Postgres installed, skip to the next section: Configuring Postgres.

  1. Create a namespace for Postgres

kubectl create namespace postgres-system
  2. Create a PersistentVolumeClaim (PVC) for Postgres

PLEASE NOTE: PVC configuration varies greatly depending on the type of Kubernetes cluster you are using. Here we are using the "standard" storage class; refer to your Kubernetes distribution's documentation for the best storage class to use.

# postgres-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data
  namespace: postgres-system
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: standard

Save the above to a file called postgres-pvc.yaml and then run the following command:

kubectl apply -f postgres-pvc.yaml
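You can check the claim's status afterwards. Note that with many provisioners the claim stays Pending until the first pod that uses it is scheduled:

```shell
# With a dynamic provisioner the claim will typically show Bound
kubectl get pvc postgres-data -n postgres-system
```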
  3. Install Postgres using the Bitnami Helm chart.

Using simple cleartext passwords as in the example below is NEVER recommended for anything other than a test or dev environment.

# postgres-values.yaml
global:
  postgresql:
    auth:
      username: "admin"
      password: "changeme"
      postgresPassword: "changeme"

primary:
  persistence:
    existingClaim: "postgres-data"

auth:
  database: postgres
  username: admin
  password: changeme
  postgresPassword: changeme
  enablePostgresUser: true

Save the above text to a file called postgres-values.yaml. Then run the following command:

helm install postgres bitnami/postgresql \
  --namespace postgres-system \
  --values postgres-values.yaml

Verify that Postgres is up and running. It may take a minute or so for the image to download and the pod to become fully ready.
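For example (the StatefulSet name below assumes the Helm release name "postgres" used above):

```shell
# Watch the Postgres pod until it reports 1/1 Running
kubectl get pods -n postgres-system

# Or wait on the chart's StatefulSet to finish rolling out
kubectl rollout status statefulset/postgres-postgresql -n postgres-system
```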

Configuring Postgres

A reminder: we are using simple cleartext passwords here, which is NEVER recommended for anything other than test or dev environments.

  1. Create the databases and roles in Postgres for Lenses to use, with one of the two options below.

Option 1: Run the commands with a Postgres client.

Log in to your Postgres instance and run the following:

CREATE ROLE lenses_agent WITH LOGIN PASSWORD 'changeme';

CREATE DATABASE lenses_agent OWNER lenses_agent;

CREATE ROLE lenses_hq WITH LOGIN PASSWORD 'changeme';

CREATE DATABASE lenses_hq OWNER lenses_hq;
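If you don't have a psql client locally, you can run one as a temporary pod inside the cluster. This is a sketch; it assumes the Bitnami service name postgres-postgresql and the superuser password set earlier:

```shell
# Start a temporary interactive psql session inside the cluster;
# the pod is removed automatically when you exit
kubectl run psql-client --rm -it --restart=Never \
  --namespace postgres-system \
  --image=postgres:14 \
  --env="PGPASSWORD=changeme" \
  -- psql -h postgres-postgresql -U postgres -d postgres
```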

Option 2: Use a Kubernetes job to run the Postgres commands.

Lenses needs one database for Lenses HQ and one for the Lenses Agent. This job creates both in the same Postgres instance.

# lenses-db-init-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: lenses-db-init
  namespace: postgres-system
spec:
  template:
    spec:
      containers:
      - name: db-init
        image: postgres:14
        command:
        - /bin/bash
        - -c
        - |
          echo "Waiting for PostgreSQL to be ready..."
          until PGPASSWORD=changeme psql -h postgres-postgresql -U postgres -d postgres -c '\l' &> /dev/null; do
            echo "PostgreSQL is unavailable - sleeping 2s"
            sleep 2
          done
          echo "PostgreSQL is up - creating databases and roles"
          PGPASSWORD=changeme psql -h postgres-postgresql -U postgres -d postgres <<EOF
          CREATE ROLE lenses_agent WITH LOGIN PASSWORD 'changeme';
          CREATE DATABASE lenses_agent OWNER lenses_agent;
          CREATE ROLE lenses_hq WITH LOGIN PASSWORD 'changeme';
          CREATE DATABASE lenses_hq OWNER lenses_hq;
          EOF
          echo "Database initialization completed!"
      restartPolicy: OnFailure
  backoffLimit: 5

Copy the above text to a file called lenses-db-init-job.yaml and then run the following command:

kubectl apply -f lenses-db-init-job.yaml

Wait a few moments, then run:

kubectl get job -n postgres-system

You should see 1/1 completions, which means the job ran successfully.
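To double-check what the job actually did, inspect its logs:

```shell
# The output should end with "Database initialization completed!"
kubectl logs job/lenses-db-init -n postgres-system
```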

Postgres is now set up and configured to work with Lenses.

Installing a Kafka Cluster - Optional

If you already have a Kafka cluster installed, skip to the Installing Lenses HQ section.

The provided cluster install is a simple single-node open-source Kafka cluster with no authentication and limited resources. It is only suitable for testing or small development environments.

  1. Create the kafka-cluster-values.yaml file for installation. We are using the "standard" storage class here; depending on which Kubernetes vendor you are using and where you are running it, your PVC setup will vary.

# Kafka Bitnami Helm chart values for dev/testing with KRaft mode
## Global settings
global:
  storageClass: "standard"

## Enable KRaft mode and disable Zookeeper
kraft:
  enabled: true
  controllerQuorumVoters: "0@kafka-controller-0.kafka-controller-headless.kafka.svc.cluster.local:9093"

# Disable Zookeeper since we're using KRaft
zookeeper:
  enabled: false

## Controller configuration (for KRaft mode)
controller:
  replicaCount: 1
  persistence:
    enabled: true
    storageClass: "standard"
    size: 2Gi
    selector:
      matchLabels:
        app: kafka-controller
  resources:
    requests:
      memory: "512Mi"
      cpu: "250m"
    limits:
      memory: "1Gi"
      cpu: "500m"

## Broker configuration
broker:
  replicaCount: 1
  persistence:
    enabled: true
    storageClass: "standard"
    size: 2Gi
    selector:
      matchLabels:
        app: kafka-broker
  resources:
    requests:
      memory: "512Mi"
      cpu: "250m"
    limits:
      memory: "1Gi"
      cpu: "500m"

# Networking configuration for standalone K8s cluster
service:
  type: ClusterIP
  ports:
    client: 9092

## External access configuration (if needed)
externalAccess:
  enabled: false
  service:
    type: NodePort
    nodePorts: [31090]
  autoDiscovery:
    enabled: false

# Listeners configuration for standalone cluster
listeners:
  client:
    name: PLAINTEXT
    protocol: PLAINTEXT
    containerPort: 9092
  controller:
    name: CONTROLLER
    protocol: PLAINTEXT
    containerPort: 9093
  interbroker:
    name: INTERNAL
    protocol: PLAINTEXT
    containerPort: 9094

# Disable authentication for simplicity in dev environment
auth:
  clientProtocol: plaintext
  interBrokerProtocol: plaintext
  sasl:
    enabled: false
    jaas:
      clientUsers: []
      interBrokerUser: ""
  tls:
    enabled: false
  zookeeper:
    user: ""
    password: ""

# Configuration suitable for development
configurationOverrides:
  "offsets.topic.replication.factor": 1
  "transaction.state.log.replication.factor": 1
  "transaction.state.log.min.isr": 1
  "log.retention.hours": 24
  "num.partitions": 3
  "security.inter.broker.protocol": PLAINTEXT
  "sasl.enabled.mechanisms": ""
  "sasl.mechanism.inter.broker.protocol": PLAINTEXT
  "allow.everyone.if.no.acl.found": "true"

# Enable JMX metrics
metrics:
  jmx:
    enabled: true
    containerPorts:
      jmx: 5555
    service:
      ports:
        jmx: 5555
  kafka:
    enabled: true
    containerPorts:
      metrics: 9308
    service:
      ports:
        metrics: 9308

# Enable auto-creation of topics
allowAutoTopicCreation: true
  2. Create a namespace for Kafka

kubectl create ns kafka
  3. Install the Kafka cluster with the Bitnami Helm chart:

helm install my-kafka bitnami/kafka \
--namespace kafka \
--values kafka-cluster-values.yaml

Give the Helm chart a few minutes to install, then verify the installation:

Kafka cluster up and running.
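From the command line, you can confirm the pods are up and run a quick smoke test. The bootstrap address below assumes the release name my-kafka used in this guide:

```shell
# The Kafka controller/broker pods should reach Running
kubectl get pods -n kafka

# Smoke test: create a topic from a throwaway client pod
kubectl run kafka-client --rm -it --restart=Never \
  --namespace kafka \
  --image bitnami/kafka:latest \
  -- kafka-topics.sh --create --topic smoke-test \
  --bootstrap-server my-kafka.kafka.svc.cluster.local:9092
```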

Installing Lenses HQ

  1. Create lenses namespace

kubectl create ns lenses
  2. Install Lenses HQ with its Helm chart using the following lenseshq-values.yaml

# lenseshq-values.yaml
resources:
  requests:
    cpu: 1
    memory: 1Gi
  limits:
    cpu: 2
    memory: 4Gi

image:
  repository: lensesio/lenses-hq:6.0
  pullPolicy: Always

rbacEnable: false
namespaceScope: true

# Lenses HQ container port
restPort: 8080
# Lenses HQ service port, service targets restPort
servicePort: 80
servicePortName: lenses-hq

# serviceAccount is the Service account to be used by Lenses to deploy apps
serviceAccount:
  create: false
  name: default

# Lenses service
service:
  enabled: true
  type: ClusterIP
  annotations: {}

lensesHq:
  agents:
    address: ":10000"
  auth:
    administrators:
     - "admin"
    users:
      - username: admin
        password: $2a$10$DPQYpxj4Y2iTWeuF1n.ItewXnbYXh5/E9lQwDJ/cI/.gBboW2Hodm # bcrypt("admin").
  http:
    address: ":8080"
    accessControlAllowOrigin:
      - "http://localhost:8080"
    secureSessionCookies: false
  # Storage property has to be properly filled with Postgres database information
  storage:
    postgres:
      enabled: true
      host: postgres-postgresql.postgres-system.svc.cluster.local
      port: 5432
      username: lenses_hq
      database: lenses_hq
      passwordSecret:
        type: "createNew"
        password: "changeme"
  logger:
    mode: "text"
    level: "debug"
  license:
    referenceFromSecret: false
    stringData: "license_key_2SFZ0BesCNu6NFv0-EOSIvY22ChSzNWXa5nSds2l4z3y7aBgRPKCVnaeMlS57hHNVboR2kKaQ8Mtv1LFt0MPBBACGhDT5If8PmTraUM5xXLz4MYv"
    acceptEULA: true

Copy the above text to a file called lenseshq-values.yaml and install the chart with the following command:

helm install lenses-hq lensesio/lenses-hq \
--namespace lenses \
--values lenseshq-values.yaml

You can verify that Lenses HQ is installed:

HQ Successfully Installed
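You can also check from the command line (the deployment name assumes the Helm release name lenses-hq used above):

```shell
# The HQ pod should reach 1/1 Running
kubectl get pods -n lenses

# Tail the HQ logs to confirm a clean startup
kubectl logs deploy/lenses-hq -n lenses --tail=50
```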
  3. Accessing Lenses HQ:

To access Lenses HQ you will need to set up an ingress route using an ingress controller. There are many ways to do this, depending on how and where you are running Kubernetes.

Here is an example ingress configuration using the NGINX ingress controller:

# lenses-hq-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: lenses-hq-ingress
  namespace: lenses  # Update this if LensesHQ is in a different namespace
  annotations:
    # For nginx ingress controller
    nginx.ingress.kubernetes.io/rewrite-target: /
    # If you need larger request bodies for API calls
    nginx.ingress.kubernetes.io/proxy-body-size: "50m"
    # Optional: enable CORS if needed
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-origin: "*"
spec:
  ingressClassName: nginx
  
  rules:
  - host: lenses-hq.local  # Change this to your desired hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: lenses-hq
            port:
              number: 80
      # Optional: expose the agents port if needed externally
      - path: /agents
        pathType: Prefix
        backend:
          service:
            name: lenses-hq
            port:
              number: 10000
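If you are running a local cluster without an ingress controller, a plain kubectl port-forward is often enough for dev access. This is a sketch; it assumes the service name lenses-hq and the servicePort of 80 from the values above (local port 8080 matches the accessControlAllowOrigin entry in the HQ values):

```shell
# Option A: forward a local port straight to the lenses-hq service (no ingress needed),
# then browse to http://localhost:8080
kubectl port-forward -n lenses svc/lenses-hq 8080:80

# Option B: if using the example ingress, resolve the test hostname locally
echo "127.0.0.1 lenses-hq.local" | sudo tee -a /etc/hosts
```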

Installing Lenses Agent

  1. Once you have successfully logged in to Lenses HQ you can start to set up your agent. See the Community Edition walkthrough for login details.

  2. Click on the Add New Environment button at the bottom of the main screen. Give your new environment a name (you can accept the other defaults for now) and click Create Environment.

  3. Be sure to save your Agent Key from the screen that follows.

  4. Now we can install the Lenses Agent using the Agent Key. Here is the lenses-agent-values.yaml file:

# lenses-agent-values.yaml
image:
  repository: lensesio/lenses-agent
  tag: 6.0.0
  pullPolicy: IfNotPresent
lensesAgent:
  # Postgres connection
  storage:
    postgres:
      enabled: true
      host: postgres-postgresql.postgres-system.svc.cluster.local
      port: 5432
      username: lenses_agent
      password: changeme
      database: lenses_agent
  hq:
    agentKey:
      secret:
        type: "createNew"
        name: "agentKey"
        value: "agent_key_Insert_Your_Agent_Key_Here"
  sql:
    processorImage: hub.docker.com/r/lensesioextra/sql-processor/
    processorImageTag: latest
    mode: KUBERNETES
    heap: 1024M
    minHeap: 128M
    memLimit: 1152M
    memRequest: 128M
    livenessInitialDelay: 60 seconds
    namespace: lenses
  provision:
    path: /mnt/provision-secrets
    connections:
      lensesHq:
        - name: lenses-hq
          version: 1
          tags: ['hq']
          configuration:
            server:
              value: lenses-hq.lenses.svc.cluster.local
            port:
              value: 10000
            agentKey:
              value: ${LENSESHQ_AGENT_KEY}
      kafka:
        # There can only be one Kafka cluster at a time
        - name: kafka
          version: 1
          tags: ['staging', 'pseudo-data-only']
          configuration:
            kafkaBootstrapServers:
              value:
                - PLAINTEXT://my-kafka.kafka.svc.cluster.local:9092
            protocol:
              value: PLAINTEXT
  5. Copy the above config to a file named lenses-agent-values.yaml.

NOTE: you must replace value: "agent_key_Insert_Your_Agent_Key_Here" with the actual Agent Key you saved in a previous step.

Paste your actual Agent Key into the lenses-agent-values.yaml file.
  6. Use the Lenses Agent Helm chart to install the Lenses Agent:

helm install lenses-agent lensesio/lenses-agent \
--namespace lenses \
--values lenses-agent-values.yaml

Give Kubernetes time to install the Lenses Agent, then go back to the Lenses HQ UI and verify that your Kafka cluster is connected. You can now use Lenses on your own cluster. Congrats!
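If the cluster does not appear in HQ, check the agent from the command line (the deployment name assumes the Helm release name lenses-agent used above):

```shell
# The agent pod should reach Running alongside lenses-hq
kubectl get pods -n lenses

# Tail the agent logs and look for a successful connection to HQ on port 10000
kubectl logs deploy/lenses-agent -n lenses --tail=100
```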
