# Deploying an Agent

{% hint style="info" %}
[Lenses HQ](https://docs.lenses.io/latest/deployment/installation/helm/hq) must be installed before setting up an Agent.

The latest Agent container image is available [on Docker Hub](https://hub.docker.com/r/lensesio/lenses-agent).

Helm charts are available [here](https://helm.repo.lenses.io/).

Run the following commands to add the charts to your Helm repo.

```bash
helm repo add lensesio https://helm.repo.lenses.io/
helm repo update
```

{% endhint %}

## Prerequisites

* Kubernetes 1.23+
* Helm 3.8.0+
* An available Postgres database instance.
  * If you need to install Postgres on Kubernetes, you can use one of the many publicly available Helm charts, such as [Bitnami's](https://bitnami.com/stack/postgresql/helm).
  * Alternatively, use a cloud provider's Postgres service such as [AWS](https://aws.amazon.com/rds/postgresql/), [Azure](http://azure.microsoft.com/en-us/products/postgresql), or [GCP](https://cloud.google.com/sql/postgresql).
  * Follow [these steps](https://docs.lenses.io/latest/getting-started/connecting-lenses-to-your-environment/overview#postgres) to configure your Postgres database for the Lenses Agent.
* [External Secrets Operator](https://external-secrets.io/latest/) is the only supported secrets operator.

## Configure an Agent

To configure an Agent, it helps to understand the parameter groups that the Helm chart offers.

Under the **lensesAgent** parameter there are some key parameter groups that are used to set up the connection to Lenses HQ:

1. Storage
2. HQ connection
3. Provision
4. Cluster RBACs

You can configure your Helm chart by working through these groups in the same order.

## JSON Schema Support

You can use JSON schema support to help you configure the Helm values files; see [JSON schema](https://docs.lenses.io/latest/configuration/agent/overview#json-schema-support) for details. The repository includes a JSON schema for the Agent Helm chart.

***

## Configuring Agent chart

{% stepper %}
{% step %}
**Configure storage (Postgres / H2 - Embedded database)**

{% hint style="info" %}
**Postgres database** is recommended for Production and Non-production workloads.

**H2 embedded database** is recommended for Evaluation purposes only.
{% endhint %}

**Running Agent with Postgres database**

Prerequisites:

* A running Postgres instance;
* A database created for the Agent;
* A username (and password) with access to that database.

To successfully run the Agent, the *storage* object within *values.yaml* has to be defined first.

The definition of *storage* object is as follows:

```yaml
lensesAgent:
  storage:
    postgres:
      enabled: true
      host: ""
      port: 
      username: ""
      database: ""
      schema: ""
      params: {}
```

Alongside the Postgres password, which can be referenced or created through the Helm chart, there are a few more options that can help while setting up the Agent.

There are two ways to define the username:

{% tabs %}
{% tab title="Plain Value" %}
The most straightforward way, if the username does not change, is to define it directly in the *username* parameter:

{% code title="values.yaml" %}

```yaml
lensesAgent:
  storage:
    postgres:
      enabled: true
      host: postgres-postgresql.postgres.svc.cluster.local
      port: 5432
      database: lensesagent
      username: lenses
```

{% endcode %}
{% endtab %}

{% tab title="Environment Variable" %}
{% code title="values.yaml" %}

```yaml
lensesAgent:
  storage:
    postgres:
      enabled: true
      host: postgres-postgresql.postgres.svc.cluster.local
      port: 5432
      database: lensesagent
      username: external  # use "external" to manage it using secrets
  additionalEnv:
    - name: LENSES_STORAGE_POSTGRES_USERNAME
      valueFrom:
        secretKeyRef:
          name: [SECRET_RESOURCE_NAME]
          key: [SECRET_RESOURCE_KEY]

```

{% endcode %}
{% endtab %}
{% endtabs %}

**Password reference types**

The Postgres password can be provided in two ways:

1. As a plain value in **values.yaml**;
2. As an environment variable referencing a Kubernetes secret.

{% tabs %}
{% tab title="Plain Value" %}
{% code title="values.yaml" %}

```yaml
lensesAgent:
  storage:
    postgres:
      enabled: true
      host: postgres-postgresql.playground.svc.cluster.local
      port: 5432
      username: lenses
      database: lensesagent
      password: useOnlyForDemos         
```

{% endcode %}
{% endtab %}

{% tab title="Environment variable" %}
{% code title="values.yaml" %}

```yaml
lensesAgent:
  storage:
    postgres:
      enabled: true
      host: postgres-postgresql.postgres.svc.cluster.local
      port: 5432
      database: lensesagent
      username: lenses
      password: external   # use "external" to manage it using secrets
  additionalEnv:
    - name: LENSES_STORAGE_POSTGRES_PASSWORD
      valueFrom:
        secretKeyRef:
          name: [SECRET_RESOURCE_NAME]
          key: [SECRET_RESOURCE_KEY]
```

{% endcode %}
{% endtab %}
{% endtabs %}
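The environment-variable tabs above reference a pre-created Kubernetes Secret via `secretKeyRef`. As an illustration, such a Secret could look like the following sketch (the name, namespace, and values are placeholders; match the name and keys to your `secretKeyRef` entries):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: postgres-credentials   # placeholder; use as SECRET_RESOURCE_NAME
  namespace: lenses-agent      # the namespace the Agent is deployed in
type: Opaque
stringData:
  username: lenses             # key read by LENSES_STORAGE_POSTGRES_USERNAME
  password: changeMe           # key read by LENSES_STORAGE_POSTGRES_PASSWORD
```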

**Running Agent with H2 embedded database**

{% hint style="warning" %}
The embedded database is not recommended for Production or high-load environments.
{% endhint %}

To run the Agent with the H2 embedded database, there are a few things to be aware of:

* The K8s cluster the Agent will be deployed on has to support Persistent Volumes;
* The Postgres options in the Helm chart **have to be left out.**

{% code title="values.yaml" %}

```yaml
persistence:
  storageH2:
    enabled: true
    accessModes:
      - ReadWriteOnce
    size: 300Mi
```

{% endcode %}
{% endstep %}

{% step %}
**Configure HQ connection (agent key)**

Connection to Lenses HQ is a straightforward process which requires two steps:

1. Create an environment and obtain an **AGENT KEY** in HQ, as described in [Install](https://docs.lenses.io/latest/getting-started/connecting-lenses-to-your-environment/install#create-an-environment-for-your-kafka-cluster), if you have not already done so;
2. Store that same key in Vault or as a K8s secret.

The Agent communicates with HQ via a secure custom binary protocol channel. To establish this channel and authenticate, the Agent needs an **AGENT KEY**.

Once the **AGENT KEY** has been copied, store it in Vault or any other tool that integrates with Kubernetes secrets.

There are three ways to provide the agent key:

1. ExternalSecret via External Secret Operator (ESO)
2. Pre-created secret
3. Inline string

{% tabs %}
{% tab title="ExternalSecret" %}
{% hint style="warning" %}
To use this option, the External Secret Operator (ESO) has to be installed and available in the K8s cluster you are deploying the Agent to.
{% endhint %}

When specifying ***secret.type: "externalSecret"***, the chart will:

* create an ***ExternalSecret*** in the namespace where the Agent is deployed;
* mount the resulting secret for the Agent to use.

{% code title="values.yaml" %}

```yaml
lensesAgent:
  hq:
    agentKey:
      secret:
        type: "externalSecret"
        # Secret name where agentKey will be read from
        name: hq-password
        # Key name under secret where agentKey is stored
        key: key
        externalSecret:
          additionalSpecs: {}
          secretStoreRef:
            type: ClusterSecretStore # ClusterSecretStore | SecretStore
            name: [secretstore_name]
```

{% endcode %}
{% endtab %}

{% tab title="Pre-created secret" %}
{% hint style="info" %}
Make sure that the secret you are going to use already exists in the namespace where the Agent will be installed.
{% endhint %}

{% code title="values.yaml" %}

```yaml
lensesAgent:
  hq:
    agentKey:
      secret:
        type: "precreated"
        # Secret name where agentKey will be read from
        name: hq-password
        # Key name under secret where agentKey is stored
        key: key
```

{% endcode %}
{% endtab %}

{% tab title="Inline String" %}
{% hint style="warning" %}
This option is **NOT** for PRODUCTION usage, but for demo / testing only.
{% endhint %}

The chart will create a secret with the values defined below, and the Agent will read that same secret to connect to HQ.

{% code title="values.yaml" %}

```yaml
lensesAgent:
  hq:
    agentKey:
      secret:
        type: "createNew"
        # Secret name where agentKey will be read from
        name: "lenses-agent-secret-1"
        # Value of agentKey generated by HQ
        value: "agent_key_*"
```

{% endcode %}
{% endtab %}
{% endtabs %}
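For the pre-created secret option, one way to produce the Secret is to render a manifest yourself. The sketch below is illustrative: the secret name `hq-password`, the key `key`, and the agent key value are placeholders. Kubernetes stores Secret `data` base64-encoded, which is what the snippet does:

```shell
# Placeholder agent key; replace with the value generated by HQ.
AGENT_KEY="agent_key_example"

# Kubernetes stores Secret data base64-encoded.
ENCODED=$(printf '%s' "$AGENT_KEY" | base64)

# A minimal Secret manifest (pipe to: kubectl apply -f -)
cat <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: hq-password
type: Opaque
data:
  key: ${ENCODED}
EOF
```

Equivalently, `kubectl create secret generic hq-password --from-literal=key="$AGENT_KEY"` creates the same secret directly.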

This secret will be fed into **provisioning.yaml**. The HQ connection is specified below, where the reference **${LENSESHQ\_AGENT\_KEY}** is being set:

{% code title="values.yaml" lineNumbers="true" %}

```yaml
lensesAgent:
  provision:
    path: /mnt/provision-secrets
    connections:
      lensesHq:
        - name: lenses-hq
          version: 1
          tags: ['hq']
          configuration:
            server:
              value: [LENSES_HQ_FQDN_OR_IP]
            port:
              value: 10000
            agentKey:
              # This property shouldn't be changed as it is mounted automatically
              # based on secret choice for hq.agentKey above
              value: ${LENSESHQ_AGENT_KEY}
            sslEnabled:
              value: false
```

{% endcode %}

{% hint style="info" %}
In order to enable TLS for secure communication between HQ and the Agent please refer to the [following part of the page](#optional-enable-tls-connection-with-hq).
{% endhint %}
{% endstep %}

{% step %}
**Configure provisioning (Kafka / SchemaRegistry / Kafka Connect)**

Provisioning covers connections to Kafka ecosystem components and related integrations:

* [Kafka](https://docs.lenses.io/latest/deployment/configuration/agent/automation/kafka)
* [Schema Registries](https://docs.lenses.io/latest/deployment/configuration/agent/automation/schema-registries)
* [Kafka Connect](https://docs.lenses.io/latest/deployment/configuration/agent/automation/kafka-connect)
* [Zookeeper](https://docs.lenses.io/latest/deployment/configuration/agent/automation/zookeeper)
* [Alert & Audit integrations](https://docs.lenses.io/latest/deployment/configuration/agent/automation/alert-and-audit-integrations)
* [AWS](https://docs.lenses.io/latest/deployment/configuration/agent/automation/aws)

{% tabs %}
{% tab title="Values.yaml with Plaintext secrets" %}
{% code title="values.yaml" %}

```yaml
lensesAgent:
  provision:
    path: /mnt/provision-secrets
    connections:
      # Kafka Connection
      kafka:
        - name: Kafka
          version: 1
          tags: [my-tag]
          configuration:
            kafkaBootstrapServers:
              value:
                - PLAINTEXT://your.kafka.broker.0:9092
                - PLAINTEXT://your.kafka.broker.1:9092
            protocol: 
              value: PLAINTEXT
            # all metrics properties are optional
            metricsPort: 
              value: 9581
            metricsType: 
              value: JMX
            metricsSsl: 
              value: false
      # Confluent Schema Registry Connection
      confluentSchemaRegistry:
        - name: schema-registry
          tags: ["tag1"]
          version: 1      
          configuration:
            schemaRegistryUrls:
              value:
                - http://my-sr.host1:8081
                - http://my-sr.host2:8081
            ## all metrics properties are optional
            metricsPort: 
              value: 9581
            metricsType: 
              value: JMX
            metricsSsl: 
              value: false
      # Kafka Connect connection
      connect:
        - name: my-connect-cluster-name
          version: 1    
          tags: ["tag1"]
          configuration:
            workers:
              value:
                - http://my-kc.worker1:8083
                - http://my-kc.worker2:8083
            metricsPort: 
              value: 9585
            metricsType: 
              value: JMX
```

{% endcode %}
{% endtab %}

{% tab title="Values.yaml with Secrets as Environment Variable" %}
{% code title="values.yaml" %}

```yaml
lensesAgent:
  additionalEnv:
    - name: SASL_JAAS_CONFIG
      valueFrom:
        secretKeyRef:
          name: kafka-sharedkey
          key: sasljaasconfig
  provision:
    path: /mnt/provision-secrets
    connections:
      # Kafka Connection
      kafka:
        - name: kafka
          version: 1
          tags: [ "dev", "dev-2", "eu"]
          configuration:
            kafkaBootstrapServers:
              value:
                - SASL_SSL://test-dev-2-kafka-bootstrap.kafka-dev.svc.cluster.local:9093
            saslJaasConfig:
              value: ${SASL_JAAS_CONFIG}
            saslMechanism:
              value: SCRAM-SHA-512
            protocol:
              value: SASL_SSL
      # Confluent Schema Registry Connection
      confluentSchemaRegistry:
        - name: schema-registry
          tags: ["tag1"]
          version: 1      
          configuration:
            schemaRegistryUrls:
              value:
                - http://my-sr.host1:8081
                - http://my-sr.host2:8081
            ## all metrics properties are optional
            metricsPort: 
              value: 9581
            metricsType: 
              value: JMX
            metricsSsl: 
              value: false
      # Kafka Connect connection
      connect:
        - name: my-connect-cluster-name
          version: 1    
          tags: ["tag1"]
          configuration:
            workers:
              value:
                - http://my-kc.worker1:8083
                - http://my-kc.worker2:8083
            metricsPort: 
              value: 9585
            metricsType: 
              value: JMX
```

{% endcode %}
{% endtab %}
{% endtabs %}

More about provisioning and advanced configuration options for each of these components can be found in [Provisioning](https://docs.lenses.io/latest/deployment/configuration/agent/automation).
{% endstep %}

{% step %}
**Cluster RBACs**

The Helm chart creates Cluster roles and bindings that are used by SQL Processors if the deployment mode is set to KUBERNETES. They allow Lenses to deploy and monitor SQL Processor deployments across namespaces.

To disable the creation of Kubernetes RBAC, set **rbacsEnable: false**.

If you want to **limit the permissions** the Agent has against your Kubernetes cluster, you can use **Role/RoleBinding** resources instead. Follow [**this link**](#enabling-sql-processors-in-k8s-mode) to enable it.

If you are not using SQL Processors and want to limit the permissions given to the Agent's ServiceAccount, there are two options to choose from:

* **rbacsEnable: true** - enables the creation of a **ClusterRole** and **ClusterRoleBinding** for the service account mentioned above;

{% code title="values.yaml" %}

```yaml
rbacsEnable: true
namespaceScope: false
```

{% endcode %}

* **rbacsEnable: true** and **namespaceScope: true** - enables the creation of a **Role** and **RoleBinding**, which is more restrictive;

{% code title="values.yaml" %}

```yaml
rbacsEnable: true
namespaceScope: true
```

{% endcode %}
{% endstep %}
{% endstepper %}

***

### (Optional) Enable TLS connection with HQ

{% hint style="warning" %}
In this case, TLS has to be enabled on HQ. If you haven't enabled it yet, you can find the details [here](https://docs.lenses.io/latest/deployment/installation/helm/hq#enabling-tls).
{% endhint %}

Enabling TLS for communication between HQ and the Agent is done in the provisioning part of *values.yaml*.

To successfully enable TLS for the Agent you need:

* *additionalVolumes & additionalVolumeMounts* - to mount the truststore with the CA certificate that HQ uses, which the Agent needs to successfully complete the handshake;
* *additionalEnv* - used to securely read the password that unlocks the truststore;
* SSL enabled in *provision*.

{% code title="values.yaml" %}

```yaml
# Additional Volume with CA that HQ uses
additionalVolumes:
  - name: hq-truststore
    secret:
      secretName: hq-agent-test-authority
additionalVolumeMounts:
  - name: hq-truststore
    mountPath: "/mnt/provision-secrets/hq"

lensesAgent:
 # Additional Env to read truststore password from secret
 additionalEnv:
    - name: LENSES_HQ_AGENT_TRUSTSTORE_PWD
      valueFrom:
        secretKeyRef:
          name: hq-agent-test-authority
          key: truststore.jks.password
 provision:
    path: /mnt/provision-secrets
    connections:
      lensesHq:
        - name: lenses-hq
          version: 1
          tags: ['hq']
          configuration:
            server:
              value: [HQ_URL]
            port:
              value: 10000
            agentKey:
              value: ${LENSESHQ_AGENT_KEY}
            sslEnabled:
              value: true
            sslTruststore:
              file: "/mnt/provision-secrets/hq/truststore.jks"
            sslTruststorePassword:
              value: ${LENSES_HQ_AGENT_TRUSTSTORE_PWD}
```

{% endcode %}

***

### (Optional) Services

Enable a service resource in the **values.yaml**:

```yaml
# Lenses service
service:
  enabled: true
  annotations: {}
```

***

### (Optional) Controlling resources

To control the resources used by the Agent:

```yaml
# Resource management
resources:
  requests:
    cpu: 1
    memory: 4Gi
  limits:
    cpu: 2
    memory: 5Gi
```

{% hint style="info" %}
If **LENSES\_HEAP\_OPTS** is not set explicitly, it will be derived implicitly.

Examples:

1. If no requests or limits are defined, **LENSES\_HEAP\_OPTS** is set to **-Xms1G -Xmx3G**.
2. If requests and limits are defined, **LENSES\_HEAP\_OPTS** is computed by the formula **-Xmx\[limits.memory - 2] -Xms\[Xmx / 2]**.
3. If **.Values.lenses.jvm.heapOpts** is set, it overrides everything.
{% endhint %}
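Interpreting that formula, a quick sketch of the derived heap settings for the 5Gi limit in the example above (assuming memory is expressed in whole Gi and integer arithmetic):

```shell
LIMIT_GI=5                 # limits.memory from the example above
XMX=$((LIMIT_GI - 2))      # Xmx = limits.memory - 2
XMS=$((XMX / 2))           # Xms = Xmx / 2 (integer division)
echo "-Xms${XMS}G -Xmx${XMX}G"
```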

***

### Enabling SQL processors in K8s mode

To enable SQL processors in KUBERNETES mode and control the defaults:

```yaml
lensesAgent:
  sql:
    processorImage: hub.docker.com/r/lensesioextra/sql-processor/
    processorImageTag: latest
    mode: KUBERNETES
    heap: 1024M
    minHeap: 128M
    memLimit: 1152M
    memRequest: 128M
    livenessInitialDelay: 60 seconds
```

{% hint style="info" %}
To control the namespaces Lenses can deploy processors to, use the **sql.namespaces** value.
{% endhint %}
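A sketch of how this might look in *values.yaml* (assuming **sql.namespaces** accepts a list of namespace names; check the chart's values schema for the exact shape):

```yaml
lensesAgent:
  sql:
    mode: KUBERNETES
    namespaces:
      - lenses-processors
```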

#### SQL Processor Role Binding

To achieve this, you need to create a **Role** and a **RoleBinding** resource in the namespace you want the processors deployed to.

For example:

* Lenses namespace = **lenses-ns**
* Processor namespace = **lenses-proc-ns**

```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: processor-role
  namespace: lenses-proc-ns
rules:
- apiGroups: [""]
  resources:
    - namespaces
    - persistentvolumes
    - persistentvolumeclaims
    - pods/log
  verbs:
    - list
    - watch
    - get
    - create
- apiGroups: ["", "extensions", "apps"]
  resources:
    - pods
    - replicasets
    - deployments
    - ingresses
    - secrets
    - statefulsets
    - services
  verbs:
    - list
    - watch
    - get
    - update
    - create
    - delete
    - patch
- apiGroups: [""]
  resources:
    - events
  verbs:
    - list
    - watch
    - get
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: processor-role-binding
  namespace: lenses-proc-ns
subjects:
- kind: ServiceAccount
  namespace: lenses-ns
  name: default
roleRef:
  kind: Role
  name: processor-role
  apiGroup: rbac.authorization.k8s.io
```

Finally, you need to define in the Agent configuration which namespaces the Agent has access to. Amend **values.yaml** to contain the following:

{% code title="values.yaml" %}

```yaml
lensesAgent:
  append:
    conf: |
      lenses.kubernetes.namespaces = {
        incluster = [
          "lenses-processors"
        ]
      }      
```

{% endcode %}

***

### Persistence Volume

Persistence can be enabled for three purposes:

* Use H2 embedded database
* Logging
* Provisioning

{% tabs %}
{% tab title="H2 Embedded database" %}

* When using the Data Policies module to persist your data policy rules
* When `lenses.storage.enabled: false` and an H2 local filesystem database is used instead of PostgreSQL
* For non-critical and NON-PROD deployments

**Configuration:**

{% code title="values.yaml" %}

```yaml
persistence:
  storageH2:
    enabled: true
    accessModes:
      - ReadWriteOnce
    size: 20Gi
    storageClass: ""
    annotations: {}
    existingClaim: ""
```

{% endcode %}
{% endtab %}

{% tab title="Logging" %}

* When you need persistent log storage across pod restarts
* When you want to retain logs for auditing or debugging purposes

**Configuration:**

{% code title="values.yaml" %}

```yaml
persistence:
  log:
    enabled: true
    accessModes:
      - ReadWriteOnce
    size: 5Gi
    storageClass: ""
    annotations: {}
    existingClaim: ""
```

{% endcode %}
{% endtab %}

{% tab title="Provisioning" %}
Dedicated volume for provisioning data managed via HQ.

**When to enable:**

* When using HQ-based provisioning workflows
* Must be combined with `PROVISION_HQ_URL` and `PROVISION_AGENT_KEY` environment variables

**Configuration:**

{% code title="values.yaml" %}

```yaml
persistence:
  provisioning:
    enabled: true
    accessModes:
      - ReadWriteOnce
    size: 5Mi
    storageClass: ""
    annotations: {}
    existingClaim: ""
```

{% endcode %}

**or Helm command execution:**

```bash
# Install the Chart.
helm repo add lensesio https://helm.repo.lenses.io/
helm repo update
# Deploy the Agent. Only available from version 6.1.0 onwards.
helm install lenses-agent \
  lensesio/lenses-agent \
  --set 'persistence.provisioning.enabled=true' \
  --set 'lensesAgent.additionalEnv[0].name=PROVISION_HQ_URL' \
  --set 'lensesAgent.additionalEnv[0].value=[lenses-hq.url]' \
  --set 'lensesAgent.additionalEnv[1].name=PROVISION_AGENT_KEY' \
  --set 'lensesAgent.additionalEnv[1].value=[agent_key_*]'
```

{% endtab %}
{% endtabs %}

### Prometheus metrics

Prometheus metrics are automatically exposed on port 9102 under **/metrics**.

Currently, you can scrape them only via the ***Service***, under the port named *http-metrics*.
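As an illustration, a minimal Prometheus scrape configuration targeting that Service port might look like the sketch below (the Service DNS name and namespace are placeholders; adjust them to your release):

```yaml
scrape_configs:
  - job_name: lenses-agent
    metrics_path: /metrics
    static_configs:
      # Placeholder Service DNS name; 9102 is the http-metrics port
      - targets: ["lenses-agent.lenses-agent.svc.cluster.local:9102"]
```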

#### lenses.conf

The main configurable options for **lenses.conf** are available in the **values.yaml** under the **lensesAgent** object. These include:

* Authentication
* Database connections
* SQL processor configurations

To apply other static configurations, use **lensesAgent.append.conf**, for example:

{% code title="values.yaml" %}

```yaml
lensesAgent:
  append:
    conf: |
      lenses.interval.user.session.refresh=40000
```

{% endcode %}

## Install the Chart

First, add the Helm Chart repository using the Helm command line:

```bash
helm repo add lensesio https://helm.repo.lenses.io/
helm repo update
```

{% embed url="https://github.com/lensesio/lenses-helm-charts" %}

## Installing the Agent

Installing using a cloned repository:

{% code fullWidth="false" %}

```bash
helm install lenses-agent charts/lenses-agent \
   --values charts/lenses-agent/values.yaml \
   --create-namespace --namespace lenses-agent
```

{% endcode %}

Installing using Helm repository:

{% code title="terminal" %}

```bash
helm install lenses-agent lensesio/lenses-agent \
   --values values.yaml \
   --create-namespace --namespace lenses-agent \
   --version 6.1.2
```

{% endcode %}

{% hint style="info" %}
Be aware that, for the time being (alpha), using `--version` is mandatory when deploying the Helm chart from the Helm repository.
{% endhint %}

***

## Example Values files

{% hint style="info" %}
Be aware that the example *values.yaml* only shows how all parameters should look in the end. Fill them with correct values, otherwise the Helm installation might not succeed.
{% endhint %}

<details>

<summary>Example of <em>values.yaml</em></summary>

{% code title="values.yaml" %}

```yaml
lensesAgent:
  storage:
    postgres:
      enabled: true
      host: [postgres.url]
      port: 5432
      username: postgres 
      password: changeMe
      database: agent
  hq:
    agentKey:
      secret:
        type: "createNew"
        name: "agentKey"
        value: "agent_key_*"
  provision:
    path: /mnt/provision-secrets
    connections:
      lensesHq:
        - name: lenses-hq
          version: 1
          tags: ['hq']
          configuration:
            server:
              value: hq-tls-test-lenses-hq.hq-agent-test.svc.cluster.local
            port:
              value: 10000
            agentKey:
              value: ${LENSESHQ_AGENT_KEY}
      kafka:
        # There can only be one Kafka cluster at a time
        - name: kafka
          version: 1
          tags: ['staging', 'pseudo-data-only']
          configuration:
            kafkaBootstrapServers:
              value:
                - PLAINTEXT://kafka-1.svc.cluster.local:9092
                - PLAINTEXT://kafka-2.svc.cluster.local:9092
            protocol:
              value: PLAINTEXT
            # Metrics are strongly suggested for better Kafka cluster observability
            metricsType:
              value: JMX
            metricsPort:
              value: 9581

```

{% endcode %}

</details>

You can also find examples in the [Helm chart repo](https://github.com/lensesio/lenses-helm-charts).

