Best practices
Does Apache Kafka have indexing?
No. Apache Kafka does not index the payload (indexes come at a high cost even in an RDBMS or a system like Elasticsearch); it only indexes record metadata.
The only filters Kafka supports are topic, partition, and offsets or timestamps.
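As a rough sketch, a Lenses SQL query that filters only on record metadata might look like this (the _meta field names are an assumption and may differ between Lenses versions):
-- filtering only on record metadata (partition/offset); field names assumed, may vary by Lenses version
SELECT * FROM payments WHERE _meta.partition = 0 AND _meta.offset >= 100000;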
When querying Kafka topic data with SQL that filters on a field in the payload, such as a transaction id, a full scan will be executed: the query processes the entire data on that topic to identify all records that match.
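For illustration, a query of this shape triggers a full scan (the payments topic is taken from the examples below; the transactionId field is an assumption):
-- transactionId is an illustrative field name
SELECT * FROM payments WHERE transactionId = 'abc-123';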
If the Kafka topic contains a million 50KB messages, that would require querying 50 GB of data. Depending on your network capabilities, the brokers' performance, any quotas on your account, and other parameters, fetching 50 GB of data could take some time! Even more so if the data is compressed: in that case, the client also has to decompress it before parsing the raw bytes into a structure the query can be applied to.
When Lenses can’t read (deserialize) your topic’s messages, it classifies them as “bad records”. This happens for one of the following reasons:
Kafka records are corrupted. On an AVRO topic, for example, a rogue producer might have published messages in a different format.
Lenses topic settings do not match the payload data. A topic might have been incorrectly set to AVRO format when it is actually JSON, or vice versa.
If an AVRO payload is involved, the Schema Registry might be down or not accessible from the machine running Lenses.
By default, Lenses skips them and displays the records' metadata in the Bad Records tab. If you want to force the query to stop in such a case, use:
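A minimal sketch, assuming the show.bad.records flag documented below is the one that controls this behaviour:
-- assumes disabling show.bad.records stops the query instead of skipping bad records
SET show.bad.records=false;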
Querying a table can take a long time if it contains a lot of records. The underlying Kafka topic has to be read, the filter conditions applied, and the projections made.
Additionally, the SELECT statement could end up bringing a large amount of data to the client. To constrain the resources involved, Lenses allows the query context to be customized, which drives the execution and gives control to the user. Here is the list of context parameters that can be overridden:
max.size
The maximum amount of Kafka data to scan. This avoids a full scan over large topics. It can be expressed as bytes (1024), kilobytes (1024k), megabytes (10m) or gigabytes (5g). Default is 20MB.
SET max.size = '1g';
max.query.time
The maximum amount of time the query is allowed to run. It can be specified as milliseconds (2000ms), as hours (2h), minutes (10m) or seconds (60s). Default is 1 hour.
SET max.query.time = '60000ms';
max.idle.time
The amount of time to wait when no more records are read from the source before the query is completed. Default is 5 seconds.
SET max.idle.time = '5s';
LIMIT N
The maximum number of records to return. Default is 10,000.
SELECT * FROM payments LIMIT 100;
show.bad.records
Flag controlling how topic records are handled when their payload does not match the table storage format. Default is true, meaning bad records are processed and displayed separately in the Bad Records section. Set it to false to stop the query instead of skipping them.
SET show.bad.records=false;
format.timestamp
Flag to control how Avro date-time values are displayed. Avro encodes date-times as Long values; set the flag to true to return them as human-readable text.
SET format.timestamp=true;
format.decimal
Flag to control the formatting of decimal types. Use it to specify how many decimal places are shown.
SET format.decimal= 2;
format.uppercase
Flag to control the formatting of string types. Use it to specify whether strings should all be made uppercase. Default is false.
SET format.uppercase=true;
live.aggs
Flag to control whether aggregation queries are allowed to run. Since they accumulate data, they require more memory to retain the state.
SET live.aggs=true;
max.group.records
When an aggregation is calculated, this setting defines the maximum number of records the engine computes over. Default is 10,000,000.
SET max.group.records=10000000;
optimize.kafka.partition
When enabled, the primitive value used in the _key filter determines the partition, the same way the default Kafka partitioner logic does. Therefore, a query like SELECT * FROM trips WHERE _key='customer_id_value'; on a multi-partition topic will read only one partition instead of the entire topic. To disable it, set the flag to false.
SET optimize.kafka.partition=false;
query.parallel
When used, it parallelizes the query. The number provided is capped at the target topic's partition count.
SET query.parallel=2;
query.buffer
Internal buffer used when processing messages. A higher value might yield better performance when coupled with max.poll.records.
SET query.buffer=50000;
kafka.offset.timeout
Timeout for retrieving target topic start/end offsets.
SET kafka.offset.timeout=20000;
All the above values can be given a default value via the configuration file. Using lenses.sql.settings as a prefix, format.timestamp can be set like this:
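A sketch of what that entry could look like in the Lenses configuration file (the exact file name and syntax depend on your installation):
# assumed configuration entry: lenses.sql.settings prefix + parameter name
lenses.sql.settings.format.timestamp=true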
Lenses SQL uses the Kafka Consumer to read the data. This means that an advanced user with knowledge of Kafka could tweak the consumer properties to achieve better throughput; this should only be needed on very rare occasions. The query context can receive Kafka consumer settings. For example, the max.poll.records consumer property can be set as:
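A minimal sketch, assuming consumer properties are passed with the same SET syntax as the context parameters above (the value is illustrative):
-- max.poll.records is a standard Kafka consumer property; the value here is illustrative
SET max.poll.records=10000;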
Streaming SQL operates on unbounded streams of events: a query would normally never end. To bring query termination semantics into Apache Kafka, we introduced four controls:
LIMIT = 10000 - Force the query to terminate when 10,000 records are matched.
max.bytes = 20000000 - Force the query to terminate once 20 MB have been retrieved.
max.time = 60000 - Force the query to terminate after 60 seconds.
max.zero.polls = 8 - Force the query to terminate after 8 consecutive empty polls, indicating the topic has been exhausted.
Thus, when retrieving data, you can cap the amount of data retrieved at 1 GB and the query time at one hour like this:
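A sketch using the context parameters documented earlier (the parameter names are assumed; older releases may use the max.bytes / max.time controls listed above instead):
-- max.size / max.query.time as documented above; payments is the example topic used earlier
SET max.size = '1g';
SET max.query.time = '1h';
SELECT * FROM payments;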