London, UK, September 26, 2019: Lenses 3.0 is now generally available.
If you are upgrading from an older version, make sure to check the Upgrade Notes.
- Data Privacy and User Management introduces the concept of data ownership and enables seamless multi-tenant capabilities on Apache Kafka. A data namespace identifies one or more topics (or wildcards) and allows assigning permissions such as creating, querying, or updating topics and schemas.
- Pluggable Alerts. Lenses provides data and application performance monitoring and alerting for real-time data. With this release, the alerting mechanism is pluggable: an infrastructure alert can, for example, automatically generate a ticket or notify your in-house incident management solution.
- Revamped User Experience. This major release revamps the user experience, grouping all administrative tasks under the “Admin” menu option at the top of the page and simplifying and improving the overall experience.
- Enhanced Kafka Cloud support. Using Lenses with a managed service such as AWS MSK or Azure HDInsight is now easier than ever, with auto-configuration, auto-detection, and out-of-the-box support for TLS setups.
- Data Federation via a reworked security module that enables multiple authentication providers. You can now combine LDAP, BASIC, Kerberos, and/or CUSTOM (pluggable) authentication, for example authenticating internal users via Kerberos and external partners via LDAP.
- Learning center to help you get productive with real-time data faster, and an improved open-source CLI for automation and CI/CD.
- Auditing improvements. This release further extends and improves auditing information, better tracking SQL data commands.
- Performance improvements for alert notifications.
- SQL Table engine supports the HELP command; for example, `HELP length` describes the function named `length`.
- Better experience for querying audit and alert event logs, with much more flexible search and faster performance.
- The JDBC driver for Kafka now uses our generation 2 engine, so the same queries you run in Lenses SQL Studio can be used with the JDBC driver.
- SQL Streaming allows you to use `SET schema.registry.*='???'` to override your Schema Registry settings at the SQL processor level.
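For illustration, a hypothetical SQL processor definition that overrides the Schema Registry URL for its streaming query (the registry URL and topic names below are made up, and the exact streaming syntax may differ by version):

```sql
-- Hypothetical: point this processor at a different Schema Registry
SET schema.registry.url='http://my-other-registry:8081';

-- Streaming query that will use the overridden registry settings
INSERT INTO enriched_payments
SELECT *
FROM payments;
```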
- SQL Streaming no longer deletes the topics and schemas when a SQL processor is deleted. This allows recreating the SQL processor with code amendments and continuing from where it left off.
- SQL statements for DROP/DELETE/TRUNCATE now accept Kafka topic names containing `-` or starting with numbers, so you no longer need to escape the topic name.
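As a sketch, assuming the TABLE keyword is used with these statements, topic names with a dash or a leading digit can now be referenced directly (the topic names are made up):

```sql
-- Hypothetical: previously these topic names would have required escaping
TRUNCATE TABLE payments-eu;
DROP TABLE 2019-backup;
```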
- SQL Table engine error handling has been improved to avoid rendering code-related exceptions to the screen.
- SQL Intellisense has been improved to display:
  - all the supported keys for a SET statement
  - views and virtual tables (__table, __fields, __queries, __dual)
  - the exists function
  - the key fields in joins
  - `_meta.__keysize`, which exposes the raw byte size of the Kafka record key
  - `_meta.__valsize`, which exposes the raw byte size of the Kafka record value
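A small illustrative query using these metadata fields (the topic name here is made up):

```sql
-- Hypothetical: inspect raw key/value sizes alongside the record offset
SELECT _meta.__keysize, _meta.__valsize, _meta.offset
FROM payments
LIMIT 10;
```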
- SQL Table engine introduces support for creating a table only if it does not already exist: `CREATE IF NOT EXISTS TableA`.
- SQL Table engine supports windowed keys resulting from stream processing aggregations. For payloads with WSTRING, WLONG, WAVRO, or WJSON keys, the returned key also displays the window start timestamp.
- SQL Table engine reports a user-friendly error when trying to use Avro while no Schema Registry is configured.
- SQL Table engine no longer consumes the entire topic when using upper-bound offset/timestamp limits, for example `SELECT * FROM A WHERE _meta.offset <= 70`.
- SQL Table engine queries for offset or timestamp equality now complete as soon as Apache Kafka returns the record, instead of reading the entire topic.
- SQL Table engine supports offset and timestamp filters in queries such as `SELECT * FROM A WHERE 1235 < _meta.offset` or `SELECT * FROM A WHERE '2019-08-01 12:30:19' <= _meta.timestamp`.
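Combining lower and upper bounds lets the engine stop reading as soon as the upper limit is reached, instead of scanning the whole topic. A sketch (the topic name and bound values are made up):

```sql
-- Hypothetical bounded scan over both offset and timestamp
SELECT *
FROM payments
WHERE _meta.offset >= 1000
  AND _meta.offset <= 2000
  AND '2019-08-01 00:00:00' <= _meta.timestamp;
```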
- SQL Table engine returns a friendly error when the partition number is incorrect.
- SQL date math now supports plurals (day -> days, minute -> minutes, etc.).
- SQL Table engine time parsing now supports additional time formats.
- Multiple SQL INSERT statements now guarantee the order of the data as it appears in the SQL code.
- SQL Table engine no longer throws an exception when using an ISO date filter.
- SQL statements inserting LONG/INT values no longer throw an exception.
- SQL statements inserting FLOAT values no longer throw an exception.
- SQL statements inserting DECIMAL values respect the table column's decimal scale and precision: `CREATE TABLE abc (n decimal(18,38)); INSERT INTO abc(n) VALUES(3)`.
- SQL Table engine filters in multi-join statements now work correctly.
- Known issue: the SQL Table engine introduced a regression for corrupted records; the error no longer provides the offset and partition details (you need to set `skip.bad.records=false`).
With this version, some HTTP endpoints have been removed, but equivalents are provided. A full list of public endpoints exposed by Lenses can be found at <http://api.lenses.io>.
| Endpoint | Old | New | Documentation |
|---|---|---|---|
| Kafka Topics | GET /api/topics | GET /api/v1/kafka/topics | check here |
| Kafka Topic Data | GET /api/sql/data?sql=… | GET /api/ws/v2/sql/execute | check here |
| SQL Validation | GET /api/sql/validation | removed | |
| Alerts | GET /api/alerts | GET /api/alerts | Return payload changed; check it here |
| Audit | GET /api/audit | GET /api/audit | Return payload changed; check it here |
- Service accounts need to pass a composite token to Lenses.