London, UK - December 10, 2018 - Lenses v2.2 is now generally available.
- Data Policies with field-level security, allowing a Data Officer to protect confidential or personal data. Sensitive fields such as credit card numbers, social security numbers, and other personally identifiable information can now be tracked and protected so they are not exposed in plain text.
- Intelligent SQL with context-aware auto-completion to easily query and aggregate your data.
- User-defined function support.
- Custom authentication based on the incoming HTTP request. The details on how to use it can be found in the documentation.
- New SQL engine for Table SQL queries (or bound queries), which supports aggregations among other improvements.
You can now run queries like this for data at rest:

```sql
SELECT count(*) AS total FROM iot_data WHERE device_id='id01'
```
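Since the new engine supports aggregations, a grouped query over the same topic can be sketched as follows (the grouping by `device_id` is illustrative):

```sql
-- Illustrative aggregation over the iot_data topic:
-- count records per device, for data at rest
SELECT device_id, COUNT(*) AS total
FROM iot_data
GROUP BY device_id
```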
- The Lenses CLI now has extra commands:
- `shell` for interactive queries, both browsing and continuous, with autocompletion
- `export` to export resources such as topics, processors, and connectors as requests, enabling GitOps
- `import` to import Lenses landscapes (for example, those created by the `export` command) into Lenses, enabling GitOps
- `policies` to view and manage data policies
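A GitOps round-trip with these commands might look like the sketch below; the exact binary name and flags may differ, so treat this as illustrative and consult the CLI help:

```
# export the current landscape (topics, processors, connectors) as requests
lenses-cli export

# review and commit the exported files to version control, then
# recreate the landscape in another environment
lenses-cli import
```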
- Topology View improvements for consumers, producers, and micro-services. Inactive or idle instances are now tracked more reliably.
- Time-travel support for exploring data on Kafka topics based on the event time in the record metadata.
- Micro-Service application landscape topology support.
- CQRS/event-sourcing support for querying Avro unions with:

```sql
WHERE TYPEOF(fieldC) = 'io.lenses.domain.LensesIsGreat'
```
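A complete query using this predicate might look like the following (the topic name `domain_events` is illustrative):

```sql
-- Select only the records whose union field holds the given type
SELECT *
FROM domain_events
WHERE TYPEOF(fieldC) = 'io.lenses.domain.LensesIsGreat'
```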
- Lenses API performance improvements.
- Basic auth for Kafka Connect API and Schema Registry.
- The Kafka consumers screen now supports thousands of entries and displays the application type.
- No Lenses restarts are required when users/groups change; the security.conf file now supports hot reloading.
- When a Kafka broker is decommissioned, the UI now allows the user to inform Lenses that the broker has been decommissioned.
- When a Kafka topic is deleted, any associated Avro schema is deleted as well.
- New role introduced to allow specific users to set a Kafka topic storage format.
- New role introduced to allow specific users to set Kafka consumer groups alerts.
- Better handling of Apache Kafka timeouts when setting a consumer group offset.
- Improved Kubernetes pod labels. Labels are now in the form
- Improvements in the way Lenses identifies new or removed Kafka topics.
- Logging improvements in Lenses for better debugging.
- SQL Processors for Kubernetes no longer require the Lenses SQL runner Docker image to be rebuilt with JKS, jaas.config, and keytabs embedded. Lenses leverages Kubernetes secrets to deliver the settings to the running pod.
- Binding to Kafka 2.0.1 client libraries.
- Lenses CLI
- The machine-friendly `json` output is deprecated and replaced by the `--output` flag, which accepts
- The `sql live` command is now deprecated and replaced by a single `sql` command with a `--live-stream` flag for continuous queries. New flags were added to return the key and metadata, or keys only.
- The `sql cancel` query command has been dropped in favour of `SHOW QUERIES` and `KILL QUERY` (see manage queries). Use the `sql` or `shell` command instead.
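Managing running queries from SQL might then look like this sketch (the query id is illustrative):

```sql
SHOW QUERIES;   -- list the currently running queries and their ids
KILL QUERY 42;  -- stop the query with the given id
```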
- Kerberos support for the HortonWorks Schema Registry (#256).
Lenses continuous SQL queries
- Fixed joins referencing SQL defined via WITH, as well as joins referencing nested fields.
- Using nested or array items in the GROUP BY clause is now possible.
- The IF function now supports nested functions for its SQL parameters.
- Avro deserialization returning a primitive not wrapped in a NonRecordContainer is now handled.
- Better error message when inserting records via JSON into a topic whose storage payload includes Bytes.
- Custom micro-service metrics now reflect the correct value when multiple instances of the same application are running.
- The Lenses topic for custom application metrics is now created automatically.
- _connect-configs, _connect-status, and _connect-offsets are now automatically treated as system topics.
- Viewing Kafka consumer groups now takes into account the user-group whitelisting/blacklisting.
- When a Kafka consumer alert setting is removed, an already-triggered alert no longer keeps firing.