Release notes

4.0.7 release 


  • Reduced memory footprint through better handling of services metrics
  • Improved the performance of the services metrics endpoints (Kafka Brokers, ZooKeeper, Schema Registry) to avoid occasional timeouts on large clusters
  • Long string values in records are now wrapped, improving record visualization
  • Updated the default truststore to the latest OpenJDK 11 truststore


Bug fixes 

  • Kafka topic metrics screen was not showing the heat map when at least one partition had 0 messages

4.0.6 release 


  • Fixed a SQL Streaming error on XML data
  • The UI now allows setting the schema for topics storing XML data

4.0.5 release 


  • Performance improvements on the SQL equality operator
  • Kafka topics page JMX metrics now have a smaller impact on the Kafka cluster
  • SQL intellisense now works with Kafka message Key and nested fields, and the SET statement offers suggestions when using `.`
  • Consumers screen now supports up to 250K consumer groups


Bug fixes 

  • SQL Streaming errors on arrays were not producing a typing error
  • Fixed SQL Streaming typing for projections like a.b.c where b is a nullable field
  • Lenses Topology now gracefully handles apps registered with an invalid topology node type

4.0.4 release 


  • Performance improvements on the SQL equality operator
  • Kafka topics page JMX metrics now have a smaller impact on the Kafka cluster
  • SQL Snapshot query initialization no longer blocks the HTTP thread


Bug fixes 

  • Fixed SQL Snapshot queries terminating after max.idle.time when applying filters
  • Login audit events are now tracked when using SSO
  • Lenses schema shows default values if they are provided

4.0.3 release 


  • Data catalogue search performance improved when dealing with thousands of datasets with complex schemas
  • SQL Streaming validation for HAVING statements now rejects Key projections
  • SQL intellisense identifies Kafka message Key fields
  • Topic partition size now shows N/A instead of 0 when no JMX information is available from the Kafka Broker
  • Kafka consumer groups page handles thousands of entries by paginating the data
  • Better handling of consumer group state when one of the groups does not have a coordinator assigned
  • The Kafka topic messages-per-partition view no longer brings the UI down


Bug fixes 

  • An unsupported Avro schema (a union of null and several types) was breaking the Data catalogue indexing process
  • SQL Snapshot query optimization for Key filters: queries like WHERE _key = '***' were not calculating the target partition correctly
  • Fixed a Kafka topic Data tab rendering issue in certain contexts when the topic has no records
  • Fixed SQL Streaming WINDOW statements with a HAVING clause
  • SQL Streaming IN_PROC mode was not respecting the topic replication factor
  • A timestamp filter beyond the end of the topic no longer ends the query

4.0.2 release 


  • Performance improvements, significantly lowering CPU usage
  • Avro schemas with nested arrays of nullable (union) elements are now fully supported
  • Schema registry status and related alerts are now more accurate
  • Kafka Topic Schema types are now searchable when changing them through the User Interface
  • Dataset metadata can now be excluded from the search parameters to improve search performance
  • Introduced pagination for our Consumers Groups and Topic Partitions display
  • Significantly improved the Partitions calculation time
  • Improved the User Experience in Kafka Topics, SQL Studio and Connectors screens

Bug fixes 

  • Message header fixes in SQL for handling null and empty record headers
  • Failing to log in when using SSO no longer results in a blank screen
  • Fixed issues with projections of message keys within the streaming SQL engine
  • Fixed an issue resulting in a wrong dataset count calculation in the Explore screen

4.0.1 release 


  • Performance improvements for the Data catalog (faster indexing step)
  • SQL Snapshot queries on Kafka handle records deleted while the query is running
  • Display the compacted indicator in the Data Catalog for Kafka topics with a cleanup policy involving compaction
  • Added a user password restriction preventing reuse of the same password
  • Improved Data SLAs topic metrics readability and rule-editing tooltips
  • Kafka topic message delete for non-compacted topics shows the number of entries to be deleted
  • Data catalog schema shows decimal for Avro schemas where this type is defined as a logical type

Bug fixes 

  • Fixed a memory leak in the Prometheus alert integration that caused alerts not to be cleared out
  • The SQL Snapshot max.idle.time setting is now respected: if no more data is received from the Kafka topic (as with records deleted towards the end of the bounded set), the query is terminated
  • SQL intellisense was highlighting the TO_TIMESTAMP function as unknown
  • An invalid ElasticSearch connection was causing Lenses to stop during a restart
  • Lenses custom serdes involving Google Protobuf were not loaded
  • SQL Streaming joins on two Kafka datasets using different message key formats were giving an error
  • Kafka Connect connector authorization state was not working correctly when using at least two separate Connect clusters that both run connectors with the same name
  • Fixed SQL Streaming JSON-to-Avro conversion for optional fields
  • Fixed the Data catalog filter for system datasets
  • Kafka topic message delete was not reflecting the starting offset when changing the partition
  • The ElasticSearch datasets permission for “Show Schema” was not correctly handled
  • SQL Processors failed to run on OpenShift due to non-root user enforcement

What’s New in Lenses 4.0

If you are upgrading from an older version, make sure to check the upgrade notes.

Real-Time Data Catalog 

Your data is much more than its content. With this release, Lenses brings a new unified experience for discovering and exploring data across multiple data stores. Apart from listing Kafka topics or Elasticsearch indexes, users can search for keywords spanning the dataset name and description, its schema fields and their descriptions, and applications.

New SQL streaming for Apache Kafka 

The new SQL streaming support for Apache Kafka has been in the making for more than a year. This release unifies data exploration and stream processing by providing the same syntax and function set across the two SQL engine modes: Streaming and Snapshot.

With this release, Lenses makes it much easier to process your Apache Kafka streaming data with SQL. The engine is still built on Apache Kafka Streams, and it offers many new capabilities compared with the previous version:

  • conversion between storage formats such as JSON and AVRO
  • use of message content (i.e., a field) as the message event time
  • support for User Defined Functions (UDF) and User Defined Aggregation Functions (UDAF)
  • support for non-equi joins
  • simplified syntax for describing time windows
  • specific SQL syntax to distinguish between stateful and stateless processing
  • support for multiple inserts in one statement
  • support for field unwrapping
  • support for projecting nested payloads
  • support for re-keying a topic based on a field from the message Key or Value content
  • enhanced type checking for the projections and functions used, which avoids runtime errors
  • stricter requirements for topic schemas (for example, a JSON topic requires a schema to be processed)
  • fine-grained control over the output topics
  • the same function set as the Snapshot engine
  • control over the application's underlying Kafka Streams identifier

Lenses SQL processors, the Kafka Streams applications built with Lenses SQL, can be deployed:

  • within the Lenses process (aimed at low-volume data)
  • on Kubernetes

You can read more about the SQL engine.

Navigate to the help center for tutorials on how to use the Lenses SQL Streaming for Apache Kafka.
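Several of the capabilities above (storage-format control, simplified time windows, stateful processing) come together in a typical processor definition. The sketch below is illustrative only: the topic and field names are hypothetical, and the exact keywords may differ between versions, so verify against the SQL engine documentation before using it.

```sql
-- Hypothetical topics: `purchases` in, `customer_totals` out.
SET defaults.topic.autocreate = true;  -- SET statements configure the processor

INSERT INTO customer_totals
SELECT STREAM
    customer_id,
    SUM(amount) AS total_spent
FROM purchases
WINDOW BY TUMBLE 1h   -- simplified time-window syntax
GROUP BY customer_id;
```

Because the statement uses an aggregation, this is stateful processing; output records are keyed by the GROUP BY field.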

Data SLA 

Data availability is paramount for critical business processes. With this new functionality, you can monitor and alert on data volume metrics. Whenever there is a spike or a drop in data, Lenses can raise an alert.

External Apps status 

Previous releases added support for registering your applications with Lenses and its data lineage topology. Starting with this release, Lenses can monitor the health status of your HTTP-registered applications.

Amazon S3 Kafka Connector Sink 

The open-source connector brings a new level of reliable streaming ETL integration with S3. The sink supports storing data in these formats: Avro, JSON, Parquet, CSV (including headers), and Bytes. Archiving Apache Kafka data is a common use case. Apart from storing the Kafka records optimally (avoiding small files), the sink supports partitioning the data by record field(s), the key, key field(s), or headers. To round the functionality up, it can also partition by a combination of these.

You can read more about the Kafka S3 sink connector.
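As a starting point, a minimal sink configuration might look like the sketch below. The bucket, topic, and flush settings are placeholders, and the connector class and KCQL options should be verified against the connector documentation:

```properties
name=s3-archive-sink
connector.class=io.lenses.streamreactor.connect.aws.s3.sink.S3SinkConnector
topics=payments
# KCQL controls the storage format, partitioning, and flushing (values are illustrative)
connect.s3.kcql=INSERT INTO my-bucket:archive SELECT * FROM payments STOREAS `AVRO` PARTITIONBY region WITH_FLUSH_COUNT = 5000
```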

Kafka Connect Secret providers 

Kafka Connect provides a way to plug in a secret provider, and there should be no compromise when it comes to security. With this release, we announce open-source support for Connect Secret Providers integrating with:

  • AWS Secret Manager
  • Azure Keyvault
  • Environment variables
  • Hashicorp Vault

You can read more about Kafka connectors and secrets.
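These providers plug into Kafka Connect's standard ConfigProvider mechanism: the worker declares the providers, and connector configurations reference secrets through placeholders instead of plain-text values. The class name, address, and secret path below are illustrative, so check the secret-provider documentation for the exact values:

```properties
# Kafka Connect worker configuration (class name and address are illustrative)
config.providers=vault
config.providers.vault.class=io.lenses.connect.secrets.providers.VaultSecretProvider
config.providers.vault.param.vault.addr=https://vault.example.com:8200

# A connector configuration can then reference a secret via a placeholder:
# aws.access.key=${vault:secret/kafka-connect/aws:access_key}
```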

Other Improvements 

  • Improved Lenses upgrade process
  • The license can be updated either via the PUT /api/v1/license API or a CLI command
  • Fixed log viewing for SQL Processors when running in Kubernetes
  • Improved SQL intellisense hints and validation
  • New SQL functions

Lenses CLI 

  • Added support for Data SLA alerts
  • Added support for updating the license