LSQL Snapshot mode allows you to control the resources involved in running a query. To instruct the engine what to do, the user sets a few parameters using the SET syntax.
Querying a table can take a long time if it contains a lot of records: the underlying Kafka topic has to be read, the filter conditions applied, and the projections made. A SELECT statement could end up bringing a large amount of data to the client. To constrain the resources involved, Lenses allows the query context to be customized, which drives the execution and thus gives control to the user.
Here is the list of context parameters that can be overwritten:
| Parameter | Description |
|-----------|-------------|
| `max.size` | The maximum amount of Kafka data to scan. This is to avoid a full topic scan over large topics. It can be expressed as bytes (1024), kilobytes (1024k), megabytes (10m) or gigabytes (5g). Default is 20MB. |
| `max.query.time` | The maximum amount of time the query is allowed to run. It can be specified as milliseconds (2000ms), hours (2h), minutes (10m) or seconds (60s). Default is 1 hour. |
| `max.idle.time` | The amount of time to wait, once no more records are read from the source, before the query is completed. Default is 5 seconds. |
| `LIMIT N` | The maximum number of records to return. Default is 10000. |
| `skip.bad.records` | Controls how topic records whose payload does not match the table storage format are handled. Default is true, meaning bad records are skipped. Set it to false to fail the query on the first invalid record. |
| `format.timestamp` | Controls how Avro date-time values are rendered. Avro encodes date-times as Long values; set this to true to have them returned as human-readable text. |
| `live.aggs` | Controls whether aggregation queries are allowed to run. Since aggregations accumulate data, they require more memory to retain their state. |
| `optimize.kafka.partition` | When enabled, the primitive used in the `_key` filter determines the partition the same way the default Kafka partitioner logic does, allowing the engine to target the relevant partition rather than scanning all of them. |
| `query.parallel` | When used, it parallelizes the query. The number provided is capped by the target topic's partition count. |
| `query.buffer` | Internal buffer used when processing messages. A higher number might yield better performance when coupled with `query.parallel`. |
| `kafka.offset.timeout` | Timeout for retrieving the target topic's start/end offsets. |
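As an illustration of how these context parameters combine, a query context might be customized as sketched below. The topic name `payments` and the specific values are placeholders, and quoting conventions for size/duration literals may vary between versions:

```sql
-- Sketch: scan at most 50 MB, run for at most 2 minutes,
-- and finish once the source has been idle for 2 seconds.
SET max.size = '50m';
SET max.query.time = '2m';
SET max.idle.time = '2s';
SELECT * FROM payments LIMIT 100;
```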
All of the above values can be given defaults via the configuration file, using `lenses.sql.settings` as the key prefix.
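For example, a configuration fragment might look like the following sketch; the keys mirror the context parameters above, and the values shown are illustrative:

```properties
# Hypothetical defaults in the Lenses configuration file.
# Each context parameter is prefixed with lenses.sql.settings.
lenses.sql.settings.max.size=50m
lenses.sql.settings.max.query.time=2h
lenses.sql.settings.max.idle.time=10s
lenses.sql.settings.skip.bad.records=true
```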
Lenses SQL uses the Kafka Consumer to read the data. This means that an advanced user with knowledge of Kafka can tweak the consumer properties to achieve better throughput, although this should only be needed on rare occasions. The query context can receive Kafka consumer settings; for example, the
`max.poll.records` consumer property can be set as:
SET max.poll.records = 100000; SELECT * FROM payments LIMIT 1000000;
Streaming SQL operates on unbounded streams of events: a query would normally never terminate. In order to bring query termination semantics on top of Apache Kafka, four controls are introduced:
- LIMIT = 10000 - Force the query to terminate when 10,000 records are matched.
- max.bytes = 20000000 - Force the query to terminate once 20 MBytes have been retrieved.
- max.time = 60000 - Force the query to terminate after 60 seconds.
- max.zero.polls = 8 - Force the query to terminate after 8 consecutive polls are empty, indicating we have exhausted a topic.
Thus, when retrieving data, you can cap the scan at 1 GByte retrieved and a maximum query time of one hour (3,600,000 milliseconds) as follows:
SET max.bytes = 1000000000; SET max.time = 3600000; SELECT * FROM topicA WHERE customer.id = "XXX";
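The termination controls can also be combined with each other. A sketch, reusing the illustrative topic name `topicA` from above:

```sql
-- Stop after 8 consecutive empty polls or 10,000 matched
-- records, whichever comes first.
SET max.zero.polls = 8;
SELECT * FROM topicA LIMIT 10000;
```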