
What’s New in Lenses 4.1


If you are upgrading from an older version, make sure to check the upgrade notes.

Access control on Kafka Connect clusters 

ACLs for Kafka Connect provide more flexibility to manage who can access and provision connectors. Combined with namespace permissions, they deliver enterprise-grade security for your Kafka Connect clusters. As an administrator, you can add one or more Kafka Connect clusters to Lenses and control which user groups can access them.

Alerts for Connectors & Tasks

A new set of Application Alerts has been added to Alert Rules.

With this feature you can enable a Connector Alert Rule, which monitors the health status of each running connector and its tasks. If a failure occurs, an Alert Event is triggered. You can always route alerts to the desired channel by adding Alert Channels.

New SQL processors scale on Kafka Connect clusters 

Lenses’ second-generation SQL streaming engine was released in 4.0 with full Kubernetes-native deployment. With 4.1, SQL processors can also be deployed on Kafka Connect. Connect is advised for setups where Kubernetes has not been adopted.

Read more about the SQL engine.

Navigate to the help center for tutorials on how to use Lenses SQL Streaming for Apache Kafka.

Support for pre-4.0 SQL processors

First-generation SQL processors are now supported in 4.1 for Connect deployments. Kubernetes deployments will follow in 4.1.1, on our two-week release train for patches. Old-generation SQL processors are imported, and users with the appropriate access controls can manage them.


Namespaced data policies 

Protect sensitive data with dataset-specific data policies. Redaction policies can now be scoped to specific datasets.

Global: A data policy without a linked dataset detects the specified fields across all datasets.

Namespaced: Scopes the redaction rules to the linked datasets only.

More info about data policies at our help center.

Recursive Schemas & Schema References 

Lenses integrates with Schema Registry not only to manage schemas but also to serve the data catalog, queries, and views. When Avro schemas with references or recursive schemas are used, Lenses updates its internal metadata store to handle these use cases.

Support for Kafka message headers 

You can now view and explore Kafka message headers in Lenses and with SQL, just like the payload, key, and metadata in your Kafka streams.
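
As a quick illustration, headers can be selected alongside the key and value. This is a sketch, assuming a topic named payments with a header key traceId, and that headers are exposed under the _header prefix:

```sql
-- Sketch: select a message header next to the key and value.
-- "payments" and "traceId" are hypothetical names.
SELECT _header.traceId, _key, _value
FROM payments
```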

SQL and LATERAL Joins 

Lateral joins improve array support in Lenses SQL, for both the Streaming and Snapshot engines. A lateral join allows access to individual array elements, unlocking the following possibilities:

  • Explode records that contain arrays into multiple records
  • Filter the exploded records with a custom condition
  • Group by exploded records with a custom group by
  • Explode multi-level arrays using nested Lateral Joins
  • Exploit the new array functions, like zip, to “explode” multiple arrays at the same time.
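
The first two bullet points can be sketched with a lateral join. This is an illustrative example, assuming a topic batched_readings whose value contains an array field readings:

```sql
-- Sketch: explode the "readings" array into one record per element,
-- then filter the exploded records with a custom condition.
-- "batched_readings" and "readings" are hypothetical names.
SELECT reading
FROM batched_readings
LATERAL readings AS reading
WHERE reading > 100
```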

Config backing store with PostgreSQL 

Lenses’ internal state can now be stored in a PostgreSQL database, a step forward towards Lenses cluster support.

SQL gives a fast way to see last N records on a topic 

Using the LAST_OFFSET() function, you can get the latest N messages across all partitions at once:

SELECT ...
FROM table
WHERE _meta.offset >= LAST_OFFSET() - 100

To filter for a given partition, all that is required is the extra filter: AND _meta.partition = 2
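
Putting both filters together, a query for the last 100 records of a single partition could look like this (the topic name payments is illustrative):

```sql
-- Last 100 records of partition 2 only
-- ("payments" is a hypothetical topic name)
SELECT *
FROM payments
WHERE _meta.offset >= LAST_OFFSET() - 100
  AND _meta.partition = 2
```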

Other Improvements 

  • SQL Studio improved Elasticsearch support.
  • Data SLA improvements: better UX for creating and monitoring alerts based on the traffic of your Kafka topics.
  • External application monitoring and internal topology
  • Improved license handling through the user interface. Users now get an early notification about license expiration and can update the Lenses license directly through the UI
  • The PagerDuty integration now uses PagerDuty’s Events API instead of Incident Creation
  • The Lenses startup sequence no longer stops if the SQL processor metrics topic is not present and the Lenses process cannot create the Kafka topic
  • The Schema Registry config value lenses.schema.registry.delete is set to false by default. To allow users with the proper access rights to perform schema deletion, change it to true.

Data Catalogue 

  • Improved support for setting the Kafka topic schema when the format is set to JSON
  • Improved support for setting the Kafka topic schema to Avro manually
  • Editing the schema for a primitive-type storage format is no longer allowed
  • Improved schema details for a topic with a bytes storage format

SQL 

  • Support JSON functions with powerful extract semantics. See the JSON_EXTRACT_*** functions
  • SQL Streaming no longer requires an intermediary WITH x AS () to express a JOIN and a GROUP BY; it can all be done in one SQL statement
  • Array functions: REPEAT, IN_ARRAY, ELEMENT_OF, FLATTEN, ZIP, ZIP_ALL. See Array functions
  • SQL Streaming EVENTTIME BY accepts a SQL expression, not just a field selection
  • Snapshot INSERT INTO X(_key, _value) VALUES('abc', null) correctly sets the resulting Kafka message Value as null
  • Fixed a scenario where the SQL Snapshot engine could crash Lenses
  • SQL processors no longer log the restart action as an audit entry on restart
  • The SQL processor last-action timestamp is now computed correctly in the UI
  • DESCRIBE TABLE on a non-existent Kafka topic no longer fails with a misleading error message
  • Snapshot and Streaming now share the same function name for generating a random integer: RANDINT
  • Improved field suggestions in SQL Streaming IntelliSense
  • Nested field selection now yields the correct value when the parent field value is null
  • A SQL Snapshot query with a future timestamp now short-circuits and returns instantly
  • Fixed SQL Streaming GROUP BY followed by HAVING
  • SQL Streaming validation now enforces a GROUP BY when aggregation functions are used
  • Improved SQL Streaming/Snapshot IntelliSense latency
  • REGEX function improvements
  • Fixed SQL IntelliSense text overflow
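
As an illustration of the JSON and array functions listed above, the following sketch assumes a topic events with a JSON string field payload and two array fields ids and names (all names are hypothetical):

```sql
-- Sketch: pull a nested value out of a JSON string with a JSONPath-style
-- pattern, and pair two arrays element-wise with ZIP.
SELECT
    JSON_EXTRACT_FIRST(payload, '$.user.id') AS userId,
    ZIP(ids, names)                          AS pairs
FROM events
```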

CLI 

  • Fixed rendering of the SQL processor state
  • Fixed the SQL processor delete command
  • Fixed the SQL processor start command

Topology 

  • Improved the internal processing graph for custom applications