The Lenses SQL Studio provides a familiar query editor for running queries to find or manipulate data on demand. Different clauses apply to different operations, and each requires the corresponding permission. In principle, you need the view data permission for the namespaces you are authorized to view data in.
In SQL Studio you can run Lenses SQL queries against Kafka and other data sources. Even if you are familiar with SQL, we recommend getting acquainted with the Studio and the syntax reference to use it effectively. It’s important to understand that Kafka has its own semantics for streaming data, so finding data may not work the same way.
The query editor includes autocomplete to make writing queries easier. It gives you context-aware, quick access to the information you need and inserts it directly into the query editor. This makes it easier to use query syntax identifiers (like SELECT or JOIN) or the schema fields relevant to your context, and helps you build simple or complex expressions.
Apart from autocompletion, you can use the mini catalog of datasets to preview schema information while writing a query. On the right panel you can see the list of available datasets for each connection, a flattened catalog of the schema fields for each dataset, and some important information about the data: size, number of records, etc.
To quickly preview a schema, use DESCRIBE TABLE to explore the fields, data formats and configurations, which appear in the results. For detailed schema information and management, go directly to the relevant resource.
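For example, assuming a topic named topicA (a placeholder name, as in the other examples on this page):

DESCRIBE TABLE topicA

The results include each field with its type, the key and value formats, and the topic configuration.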
While a query is running, data results start coming back and execution details appear to indicate the status of your query.
Data returned from queries can be visualized in Tree or Grid format. The format can be toggled from the top of the result set. The data set can also be downloaded in JSON format.
Kafka topics are continuous streams of data. When you query a topic, a point-in-time snapshot is scanned to match your results. Here are some tips to help you tune your queries.
When your query is running, you can view the execution details to understand how the data is scanned and returned. Each partition is consumed and scanned, so an unbounded “wild query” may hit the termination controls.
Lenses applies termination controls to protect your cluster: a query is stopped when either the maximum data size or the maximum query time is reached, and both limits are configurable.
The scanning process starts from the beginning of each partition by default. To navigate to the most recent data, you can use the available functions and metadata:
By partition / offset:
SELECT field1, field2 FROM topicA WHERE _meta.offset >= LAST_OFFSET() - 100 AND _meta.partition = 0
By timestamp:
SELECT field1, field2 FROM topicA WHERE _meta.timestamp > NOW()-"10m"
Lenses automatically applies optimisations when searching for data. To improve your queries use the message metadata to specify the range to scan. For example:
SELECT * FROM topicA WHERE transaction_id=123 AND _meta.offset > 100 AND _meta.offset < 100100 AND _meta.partition = 1
SELECT * FROM topicA WHERE transaction_id=123 AND _meta.timestamp > NOW() - "1H"
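Limiting the number of returned records also bounds the scan; for example, again using the hypothetical topicA:

SELECT * FROM topicA LIMIT 10

The query ends as soon as 10 matching records have been returned, instead of scanning the topic until a termination control is reached.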
Lenses applies controls to protect your cluster. Defaults are provided, and you can override them per query, for example:
SET max.size = '100m'; -- scan up to 100 Mbytes
SET max.query.time = '5m'; -- scan up to 5 minutes
SELECT field FROM topicA
When Lenses can’t read (deserialize) your topic’s messages, it classifies them as “bad records”.
By default, Lenses skips them and displays the records’ metadata in the Bad Records tab. If you want to force the query to stop in such cases, use:
SET skip.bad.records=false;
SELECT * FROM topicA LIMIT 100