This page details the release notes for Lenses.
Lenses 6.0 introduces a new service, called HQ, which acts as a portal for multi-Kafka environments.
New HQ service
IAM (Identity & Access Management). This has moved from each Lenses instance to a global location in the new HQ service
Global SQL Studio
Global Data Catalogue
Community License: You can now use Lenses without a license key or expiry (a community license key is still bundled in docker-compose), but the following restrictions apply:
No SSO
Maximum of two environments (Kafka clusters) can be connected
Two Users, one of which is an admin user
Two Service Accounts
Two Groups
Two Roles
No Backup / Restore for topics to S3
The H2 embedded database is no longer supported.
The Lenses 5.x permission model is replaced by global IAM. You must recreate your roles and groups in HQ.
Connection management in the agent is via file provisioning only.
We have made a new alpha release, 16:
Agent image:
HQ image:
New Helm chart version 16 for the agent and for HQ:
In previous versions, SAML / SSO was a mandatory requirement for authentication. However, with the new release, it becomes optional, allowing you to choose between password-based authentication and SAML / SSO according to your needs.
Existing alpha users will have to introduce the lensesHq.saml.enabled property into their values.yaml files.
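A minimal values.yaml sketch of this change (only the lensesHq.saml.enabled property name comes from these notes; the surrounding structure is assumed):

```yaml
# values.yaml (sketch; only lensesHq.saml.enabled is confirmed by these notes)
lensesHq:
  saml:
    enabled: false  # set to true to keep using SAML / SSO
```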
In this release, the ingress configuration has been enhanced to provide more flexibility.
Previously, the HQ chart supported a single ingress setting, but now you can define separate ingress configurations for HTTP and the agent.
This addition allows you to tailor ingress rules more specifically to your deployment needs, with dedicated rules for handling HTTP traffic and TCP-based agent connections.
The http ingress is intended only for HTTP/S traffic, while the agents ingress is designed specifically for the TCP protocol. Ensure the appropriate ingress configuration for your use case.
In the following example you will notice how the ingress configuration has been split into:
http - the main ingress for HQ, through which users will access the HQ portal
agent - a new, additional ingress which allows you to add your own custom implementation, whether Traefik or any other ingress controller.
By default, both the http and agent ingresses are disabled.
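As an illustration, a values.yaml sketch of the split ingress configuration could look as follows (the http/agents split and the disabled-by-default behavior come from these notes; the exact key names, nesting, and hosts are assumptions):

```yaml
# values.yaml (sketch; key names and nesting are illustrative)
lensesHq:
  ingress:
    http:
      enabled: false          # HTTP/S traffic to the HQ portal (disabled by default)
      host: hq.example.com
    agents:
      enabled: false          # TCP connections from agents (disabled by default)
      host: agents.example.com
```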
Due to changes in the provisioning structure, the database to which the agent is connected must be recreated.
In provisioning, there has been a slight adjustment to the naming of the HQ connection.
Changes:
grpcServer has been renamed to lensesHq
apiKey has been renamed to agentKey
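A before/after sketch of the renamed keys in provisioning.yaml (only the grpcServer → lensesHq and apiKey → agentKey renames are confirmed by these notes; the surrounding structure is illustrative):

```yaml
# Old naming (illustrative structure):
# grpcServer:
#   apiKey: ${LENSES_HQ_AGENT_KEY}

# New naming:
lensesHq:
  agentKey: ${LENSESHQ_AGENT_KEY}
```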
With the new version of the Agent, the HQ connection in provisioning has changed, which requires a complete recreation of the database. The following log message will indicate it:
In the past, HQ used the TOML file format. As we want to minimize the differences in file formats between the Agent and HQ, moving HQ configuration to YAML (config.yaml) was the first step.
The Postgres connection URI is no longer built within config.yaml but at backend runtime;
The parameter group has changed from postgres to storage.postgres.*;
In the previous version, the schema was defined as part of extraParamSpecs. In the new version, the schema is defined as a separate property, storage.postgres.database.schema;
The property extraParamSpecs has been renamed to params;
The parameter group api has been renamed to http, and the following parameters are no longer part of it:
administrators;
saml;
The property auth is derived from the former api (now http) group. The parameters moved from http to auth are the following:
administrators;
saml;
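Putting the above together, a config.yaml sketch of the new layout might look like this (the property names storage.postgres, params, database.schema, http, auth, administrators, and saml are taken from these notes; values and nesting details are illustrative):

```yaml
# config.yaml (sketch)
storage:
  postgres:
    # the connection URI is now assembled at backend runtime, not here
    params: {}                # formerly extraParamSpecs
    database:
      schema: lenses_hq       # formerly defined inside extraParamSpecs
http:
  # administrators and saml no longer live under this group
auth:
  administrators: []          # moved here from the former api group
  saml: {}                    # moved here from the former api group
```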
HQ has been tested against Aurora (Postgres) and is compatible.
If there are any changes in the ConfigMap, the HQ pod will be restarted automatically after executing helm upgrade, so no manual intervention is needed.
The environment variable previously known as LENSES_HQ_AGENT_KEY, which is referenced in provisioning.yaml and stores the agentKey value, has been renamed to LENSESHQ_AGENT_KEY.
Since the newest versions of Lenses HQ and the Agent bring breaking changes, the following issues can occur.
When running helm upgrade, HQ can fail with the following error log:
To fix it, the following command has to be run on the Postgres database:
If the SQL command cannot be run, the database has to be cleared, as if starting from scratch.