This page describes how to insert data into and delete data from Kafka topics with Lenses SQL Studio.
Lenses SQL allows you to use ANSI SQL syntax to store new records in a table.
Single or multi-record inserts are supported:
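A sketch of the insert syntax (the table, column, and value names are placeholders):

```sql
INSERT INTO $Table (column1, column2)
VALUES (value1, value2)
```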
$Table - The name of the table to insert the data into
Columns - The target columns to populate with data. Adding a record does not require you to fill all the available columns. For Key/Value pairs stored as Avro, make sure a value is specified for all the required Avro fields.
VALUES - The set of values to insert. It has to match the list of columns provided, including their data types. You can use simple constants or more complex expressions as values, like 1 + 1 or NOW().
Example:
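A minimal sketch, assuming a customer table with id, name, and email columns (the table and column names are illustrative, not from the original):

```sql
INSERT INTO customer (id, name, email)
VALUES ('andy.perez', 'Andy Perez', 'a.perez@example.com')
```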
Records can be inserted from the result of a SELECT statement.
The syntax is:
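A sketch of the insert-from-select form ($TargetTable and $SourceTable are placeholders):

```sql
INSERT INTO $TargetTable
SELECT *
FROM $SourceTable
```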
For example, to copy all the records from the customer table into the customer_avro table:
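A sketch of that copy, assuming both tables already exist with compatible schemas:

```sql
INSERT INTO customer_avro
SELECT *
FROM customer
```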
There are scenarios where a record key is a complex type. Regardless of the storage format, JSON or Avro, the SQL engine allows the insertion of such entries:
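A sketch of inserting a record with a composite key. The id and region key fields are hypothetical, and addressing key fields via the _key facet is an assumption; check the syntax against your Lenses version:

```sql
-- _key.* targets fields of the record Key; bare columns target the Value
INSERT INTO customer_avro (_key.id, _key.region, name)
VALUES ('andy.perez', 'US', 'Andy Perez')
```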
There are two ways to delete data:
If the topic is not compacted, then DELETE expects an offset to delete records up to.
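A sketch of an offset-based delete on a non-compacted topic. The _meta.offset predicate and the example values are assumptions; the exact form may differ by Lenses version:

```sql
-- Delete all records up to (and including) offset 10
DELETE FROM customer
WHERE _meta.offset <= 10
```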
If the topic is compacted, then DELETE expects the record Key to be provided. For a compacted topic, a delete translates to inserting a record with the existing Key, but the Value is null. For the customer_avro topic (which has the compacted flag on), a delete operation for a specific customer identifier would look like this:
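A sketch of a key-based delete; the key value is illustrative, and referencing the record key as _key is an assumption to verify against your Lenses version:

```sql
-- Produces a tombstone: the existing Key with a null Value
DELETE FROM customer_avro
WHERE _key = 'andy.perez'
```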
Deleting is an insert operation: until compaction takes place, there will be at least one record with the Key used earlier, and the latest record will have its Value set to null.
To remove all records from a table:
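A sketch of the truncate syntax ($Table is a placeholder):

```sql
TRUNCATE TABLE $Table
```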
where $Table is the name of the table to delete all records from. This operation is only supported on non-compacted topics; this is a Kafka design restriction. To remove the data from a compacted topic, you have two options: either drop and recreate the topic, or insert a null-Value record for each unique Key on the topic.
After rebuilding the customer table to be non-compacted, perform the truncate:
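Following the truncate syntax above, this would be:

```sql
TRUNCATE TABLE customer
```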
Truncating a compacted Kafka topic is not supported. This is an Apache Kafka restriction. You can drop and recreate the table, or insert a record with a null Value for each unique key in the topic.