Inserting & deleting data
This page describes how to insert data into and delete data from Kafka with Lenses SQL Studio.
Lenses SQL allows you to use the ANSI SQL INSERT INTO command to store new records in a table.
Single or multi-record inserts are supported:
INSERT INTO $Table (column1[, column2, column3])
VALUES (value1[, value2, value3])

INSERT INTO $Table (column1[, column2, column3])
VALUES
  (value1[, value2, value3]),
  (value4[, value5, value6])
$Table - The name of the table to insert the data into.
Columns - The target columns to populate with data. Adding a record does not require you to fill all the available columns; for Key/Value pairs stored as Avro, make sure a value is specified for every required Avro field.
VALUES - The set of values to insert. It has to match the list of columns provided, including their data types. You can use simple constants or more complex expressions as values, like 1 + 1 or NOW().
Example:
INSERT INTO customer (
    _key
  , id
  , address.line
  , address.city
  , address.postcode
  , email)
VALUES
('maria.wood', 'maria.wood', '698 E. Bedford Lane', 'Los Angeles', 90044, 'maria.wood@example.com'),
('david.green', 'david.green', '4309 S Morgan St', 'Chicago', 60609, 'david.green@example.com');
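Values are not limited to literals. A minimal sketch using expression values, assuming a hypothetical audit_log table exists with compatible field types:
-- Hypothetical table; 1 + 1 and NOW() are evaluated when the record is stored
INSERT INTO audit_log (_key, code, logged_at)
VALUES ('evt-1', 1 + 1, NOW());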
Inserting data from a SELECT
Records can be inserted from the result of a SELECT statement.
The syntax is:
INSERT INTO $Table1
SELECT */column1[,column2, ...]
FROM $Table2
[WHERE $condition]
[LIMIT N]
For example, to copy all the records from the customer table into the customer_avro table:
INSERT INTO customer_avro
SELECT *
FROM customer
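The optional WHERE and LIMIT clauses restrict which records are copied. A minimal sketch, reusing the address.city field from the customer example above:
-- Copy at most 100 records for customers based in Chicago
INSERT INTO customer_avro
SELECT *
FROM customer
WHERE address.city = 'Chicago'
LIMIT 100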
Inserting a complex key
There are scenarios where a record Key is a complex type. Regardless of the storage format, JSON or Avro, the SQL engine allows inserting such entries:
-- Creates a smart_devices table where the Key is a structure with a single field, e.g. {"deviceId": 100}
CREATE TABLE smart_devices(
    _key.deviceId INT
  , room INT
  , temperature DOUBLE)
FORMAT(avro, avro);
INSERT INTO smart_devices(
_key.deviceId
, room
, temperature)
VALUES(11223, 99, 22.1);
SELECT *
FROM smart_devices;
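Keys with more than one field follow the same pattern. A sketch under the same conventions, using a hypothetical meter_readings table whose Key holds two fields:
-- Hypothetical table; the Key is the structure {"region": ..., "meterId": ...}
CREATE TABLE meter_readings(
    _key.region STRING
  , _key.meterId INT
  , reading DOUBLE)
FORMAT(avro, avro);

INSERT INTO meter_readings(
    _key.region
  , _key.meterId
  , reading)
VALUES('emea', 42, 120.5);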
Deleting data in Kafka
There are two ways to delete data, depending on whether the topic is compacted.
If the topic is not compacted, then DELETE expects an offset to delete records up to:
-- Delete records across all partitions up to the offset 10
DELETE FROM customer
WHERE _meta.offset <= 10;
-- Delete records from a specific partition
DELETE FROM customer
WHERE _meta.offset <= 10 AND _meta.partition = 2;
If the topic is compacted, then DELETE expects the record Key to be provided. For a compacted topic, a delete translates to inserting a record with the existing Key and a null Value. For the customer_avro topic (which has the compacted flag on), a delete operation for a specific customer identifier looks like this:
DELETE FROM customer_avro
WHERE _key = 'andy.perez';
Deleting is, in effect, an insert operation: until compaction takes place, there will be at least one record with the Key used earlier, and the latest record will have its Value set to null.
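You can observe this before compaction runs by querying the Key; the most recent record returned should have a null Value:
SELECT *
FROM customer_avro
WHERE _key = 'andy.perez';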
Truncating a table
To remove all records from a table:
TRUNCATE TABLE $Table
where $Table is the name of the table to delete all records from. This operation is only supported on non-compacted topics; this is a Kafka design restriction. To remove the data from a compacted topic you have two options: drop and recreate the topic, or insert a null Value record for each unique Key on the topic.
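The second option reuses the compacted-topic DELETE shown earlier. A sketch, assuming 'maria.wood' and 'david.green' are the only Keys on the topic:
-- Emit a null-Value (tombstone) record for each unique Key
DELETE FROM customer_avro
WHERE _key = 'maria.wood';

DELETE FROM customer_avro
WHERE _key = 'david.green';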
After rebuilding the customer table to be non-compacted, perform the truncate:
TRUNCATE TABLE customer;
/* SELECT count(*) FROM customer returns 0 after the previous statement */