Inserting & deleting data

This page describes how to insert and delete data in Kafka with Lenses SQL Studio.

Lenses SQL allows you to use the ANSI SQL INSERT command to store new records in a table.

Single or multi-record inserts are supported:

INSERT INTO $Table(column1[, column2, column3])
VALUES(value1[,value2, value3])

INSERT INTO $Table(column1[, column2, column3])
VALUES
(value1[,value2, value3]),
(value4[,value5, value6])
  • $Table - The name of the table to insert the data into

  • Columns - The target columns to populate with data. Adding a record does not require you to fill all the available columns, but for Avro-stored Key/Value pairs a value must be specified for every required Avro field.

  • VALUES - The set of values to insert. It has to match the list of columns provided, including their data types. You can use simple constants or more complex expressions as values, such as 1 + 1 or NOW().

Example:

INSERT INTO customer (
    _key, id
    , address.line
    , address.city
    , address.postcode
    , email)
VALUES
('maria.wood','maria.wood', '698 E. Bedford Lane','Los Angeles', 90044, '[email protected]'),
('david.green', 'david.green', '4309 S Morgan St', 'Chicago', 60609, '[email protected]');

Inserting data from a SELECT

Records can be inserted from the result of a SELECT statement.

The syntax is:
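
A sketch of the general shape, reusing the placeholder convention above; the optional WHERE and LIMIT clauses shown here are assumptions:

INSERT INTO $TargetTable
SELECT * | column1[, column2, column3]
FROM $SourceTable
[WHERE condition]
[LIMIT N]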

For example, to copy all the records from the customer table into customer_avro one:
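
A minimal sketch of that statement:

INSERT INTO customer_avro
SELECT *
FROM customer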

Inserting a complex key

There are scenarios where a record key is a complex type. Regardless of the storage format, JSON or Avro, the SQL engine allows the insertion of such entries:
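
An illustrative sketch, assuming a hypothetical customer_compound table whose Key is a structure with firstname and lastname fields:

INSERT INTO customer_compound (
    _key.firstname
    , _key.lastname
    , id)
VALUES
('Maria', 'Wood', 'maria.wood')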

Deleting data in Kafka

There are two ways to delete data:

  • If the topic is not compacted, then DELETE expects an offset up to which records are deleted (see the sketch further below).

  • If the topic is compacted, then DELETE expects the record Key to be provided. For a compacted topic, a delete translates to inserting a record with the existing Key and a null Value. For the customer_avro topic (which has the compacted flag on), a delete operation for a specific customer identifier would look like this:
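
A sketch, assuming a DELETE ... WHERE form and that the record Key is the plain customer identifier used in the INSERT example above:

DELETE FROM customer_avro
WHERE _key = 'maria.wood'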

Deleting from a compacted topic is therefore an insert operation: until compaction takes place, there will be at least one record with the Key used earlier, and the latest record will have its Value set to null.
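
For a non-compacted topic, the condition targets a position in the log rather than a Key. An illustrative sketch, assuming partition 0 and that the _meta.offset and _meta.partition metadata fields can be referenced in the WHERE clause:

DELETE FROM customer
WHERE _meta.partition = 0
AND _meta.offset <= 100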

Truncating a table

To remove all records from a table:
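
A sketch of the statement; the TABLE keyword is an assumption here:

TRUNCATE TABLE $Table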

where $Table is the name of the table to remove all records from. This operation is only supported on non-compacted topics, which is a Kafka design restriction.

After rebuilding the customer table to be non-compacted, perform the truncate:
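
A minimal sketch, following the syntax above:

TRUNCATE TABLE customer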

Truncating a compacted Kafka topic is not supported. This is an Apache Kafka restriction. You can drop and recreate the table, or insert a record with a null Value for each unique key in the topic.
