VoltDB Sink

Download the VoltDB Connector for Kafka 2.1.0.

This VoltDB Kafka Connector allows you to write events from Kafka to VoltDB. The connector uses the built-in stored procedures for inserts and upserts but requires the target tables to be pre-created.


Prerequisites

  • Apache Kafka 0.11.x or above
  • Kafka Connect 0.11.x or above
  • VoltDB 6.4 or above
  • Java 1.8


Features

  1. KCQL routing and querying - Kafka topic payload field selection is supported, allowing you to select the fields written to VoltDB
  2. Error policies for handling failures

KCQL Support

{ INSERT | UPSERT} INTO table_name SELECT { FIELD, ... } FROM kafka_topic_name


You can specify multiple KCQL statements separated by ; to have the connector sink multiple topics.

The VoltDB sink supports KCQL, the Kafka Connect Query Language. The following KCQL support is available:

  1. Field selection
  2. Target VoltDB table selection
  3. Insert and Upsert modes.


-- Insert mode, select all fields from topicA and write to tableA
INSERT INTO tableA SELECT * FROM topicA

-- Insert mode, select 3 fields and rename from topicB and write to tableB
INSERT INTO tableB SELECT x AS a, y AS b, z AS c FROM topicB

-- Upsert mode, select 3 fields and rename from topicB and write to tableB
UPSERT INTO tableB SELECT x AS a, y AS b, z AS c FROM topicB

This is set in the connect.volt.kcql option.
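For example, a single connect.volt.kcql value can route two topics at once; statements are separated by ; (the table and topic names below are illustrative):

connect.volt.kcql=INSERT INTO tableA SELECT * FROM topicA;UPSERT INTO tableB SELECT x AS a, y AS b, z AS c FROM topicB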

Insert Mode

Insert is the default write mode of the sink.

Kafka currently provides at least once delivery semantics. Therefore, this mode may produce errors if unique constraints have been implemented on the target tables. If the error policy has been set to NOOP, the Sink will discard the error and continue to process; however, it currently makes no attempt to distinguish violations of integrity constraints from other exceptions such as casting issues.

Upsert Mode

The Sink supports VoltDB upsert functionality which replaces the existing row if a match is found on the primary keys.

Kafka currently provides at least once delivery semantics and order is guaranteed within partitions. If the same record is delivered twice to the sink, this mode results in an idempotent write: the existing record is updated with the values of the second, which are the same.

If records are delivered with the same field or group of fields that are used as the primary key on the target table, but different values, the existing record in the target table will be updated.

Since records are delivered in the order they were written per partition the write is idempotent on failure or restart. Redelivery produces the same result.
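As a sketch of this behavior in VoltDB SQL, using the person table created later in this quickstart (the values are illustrative):

-- First delivery inserts the row
UPSERT INTO person (firstname, lastname, age, salary) VALUES ('John', 'Smith', 30, 4830.0);
-- Redelivery of the same record matches on the primary key (firstname, lastname)
-- and rewrites it with identical values, so the table still holds exactly one row
UPSERT INTO person (firstname, lastname, age, salary) VALUES ('John', 'Smith', 30, 4830.0);
-- A record with the same keys but new values updates the existing row in place
UPSERT INTO person (firstname, lastname, age, salary) VALUES ('John', 'Smith', 31, 5000.0);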

Error Policies

Landoop sink connectors support error policies. These error policies allow you to control the behavior of the sink if it encounters an error when writing records to the target system. Since Kafka retains the records, subject to the configured retention policy of the topic, the sink can ignore the error, fail the connector or attempt redelivery.


Throw

Any error on write to the target system will be propagated up and processing is stopped. This is the default behavior.


Noop

Any error on write to the target database is ignored and processing continues.


This can lead to missed errors if you don’t have adequate monitoring. Data is not lost as it’s still in Kafka, subject to Kafka’s retention policy. The sink currently does not distinguish between integrity constraint violations and other exceptions thrown by any drivers or the target system.


Retry

Any error on write to the target system causes the RetryIterable exception to be thrown. This causes the Kafka Connect framework to pause and replay the message. Offsets are not committed. For example, if the table is offline a write failure occurs; the message can then be replayed. With the Retry policy, the issue can be fixed without stopping the sink.
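For example, to opt into redelivery you might set the options documented under Optional Configurations below (the values are illustrative, and connect.volt.retry.interval is the retry interval option listed there):

connect.volt.error.policy=RETRY
connect.volt.max.retries=10
connect.volt.retry.interval=60000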

Lenses QuickStart

The easiest way to try this out is using Lenses Box, the pre-configured Docker image that comes with this connector pre-installed. Navigate to Connectors –> New Connector –> Sink –> VoltDB and paste your configuration.
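A minimal sketch of starting the Box, assuming Docker is installed and you have a Lenses Box license URL (placeholder below):

docker run --rm -p 3030:3030 -e ADV_HOST=127.0.0.1 -e EULA="<your-license-url>" lensesio/box

Once it is up, the UI is available on http://localhost:3030.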


VoltDB Setup

Download VoltDB from the VoltDB website.

Unzip the archive

tar -xzf voltdb-ent-*.tar.gz

Start VoltDB:

cd voltdb-ent-*
➜  bin/voltdb create

Build: 6.5 voltdb-6.5-0-gd1fe3fa-local Enterprise Edition
Initializing VoltDB...
 _    __      ____  ____  ____
| |  / /___  / / /_/ __ \/ __ )
| | / / __ \/ / __/ / / / __  |
| |/ / /_/ / / /_/ /_/ / /_/ /


Connecting to VoltDB cluster as the leader...
Host id of this node is: 0
Starting VoltDB with trial license. License expires on Sep 11, 2016.
Initializing the database and command logs. This may take a moment...
WARN: This is not a highly available cluster. K-Safety is set to 0.

Create VoltDB Table

At present the Sink doesn’t support table auto-creation, so we need to log in to VoltDB to create one. In the directory you extracted VoltDB, start the sqlcmd shell and enter the following DDL statement. This creates a table called person.

➜  bin ./sqlcmd
SQL Command :: localhost:21212
1> create table person(firstname varchar(128), lastname varchar(128), age int, salary float, primary key (firstname, lastname));
Command succeeded.

Sink Connector QuickStart

Start Kafka Connect in distributed mode (see install). In this mode a REST endpoint on port 8083 is exposed to accept connector configurations. We developed a Command Line Interface to make interacting with the Connect REST API easier. The CLI can be found in the Stream Reactor download under the bin folder. Alternatively, the JAR can be pulled from our GitHub releases page.

Starting the Connector (Distributed)

Download and install Stream Reactor. Follow the instructions here if you haven’t already done so. All paths in the quickstart are based on the location where you installed Stream Reactor.

Once Connect has started we can use the kafka-connect-tools CLI to post in our distributed properties file for VoltDB. For the CLI to work, including when using the Dockers, you will have to set the following environment variable to point to the Kafka Connect REST API.

export KAFKA_CONNECT_REST="http://myserver:myport"
➜  bin/connect-cli create voltdb-sink < conf/voltdb-sink.properties

connect.volt.kcql=INSERT INTO person SELECT * FROM sink-test
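As a sketch, a complete conf/voltdb-sink.properties might look like the following, assembled from the configuration options documented later in this page (the server address matches the VoltDB instance started above; the credentials are illustrative):

name=voltdb-sink
connector.class=com.datamountaineer.streamreactor.connect.voltdb.sink.VoltSinkConnector
tasks.max=1
topics=sink-test
connect.volt.kcql=INSERT INTO person SELECT * FROM sink-test
connect.volt.servers=localhost:21212
connect.volt.username=admin
connect.volt.password=admin-password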

If you switch back to the terminal you started Kafka Connect in, you should see the VoltDB Sink being accepted and the task starting.

We can use the CLI to check if the connector is up, but you should be able to see this in the logs as well.

#check for running connectors with the CLI
➜ bin/connect-cli ps
    __                    __
   / /   ____ _____  ____/ /___  ____  ____
  / /   / __ `/ __ \/ __  / __ \/ __ \/ __ \
 / /___/ /_/ / / / / /_/ / /_/ / /_/ / /_/ /
/_____/\__,_/_/ /_/\__,_/\____/\____/ .___/
                                      by Stefan Bocutiu
   _    _     _      _____   _           _    _       _
  | |  | |   | |_   (____ \ | |         | |  (_)     | |
  | |  | |__ | | |_  _   \ \| | _        \ \  _ ____ | |  _
   \ \/ / _ \| |  _)| |   | | || \        \ \| |  _ \| | / )
    \  / |_| | | |__| |__/ /| |_) )   _____) ) | | | | |< (
     \/ \___/|_|\___)_____/ |____/   (______/|_|_| |_|_| \_)

Test Records


If your input topic doesn’t match the target, use Lenses SQL to transform the input in real time - no Java or Scala required!

Now we need to put some records into the sink-test topic. We can use the kafka-avro-console-producer to do this. Start the producer and pass in a schema to register in the Schema Registry. The schema has a firstname field of type string, a lastname field of type string, an age field of type int and a salary field of type double.

bin/kafka-avro-console-producer \
  --broker-list localhost:9092 --topic sink-test \
  --property value.schema='{"type":"record","name":"User","namespace":"com.datamountaineer.streamreactor.connect.voltdb","fields":[{"name":"firstName","type":"string"},{"name":"lastName","type":"string"},{"name":"age","type":"int"},{"name":"salary","type":"double"}]}'

Now the producer is waiting for input. Paste in the following:

{"firstName": "John", "lastName": "Smith", "age":30, "salary": 4830}

Check for records in VoltDB

Now check the logs of the connector; you should see this:

[2016-08-21 20:41:25,361] INFO Writing complete (com.datamountaineer.streamreactor.connect.voltdb.writers.VoltDbWriter:61)
[2016-08-21 20:41:25,362] INFO Records handled (com.datamountaineer.streamreactor.connect.voltdb.VoltSinkTask:86)

In the VoltDB sqlcmd terminal, query the table:

2> select * from person;
FIRSTNAME  LASTNAME  AGE  SALARY
---------- --------- ---- -------
John       Smith       30  4830.0

(Returned 1 rows in 0.01s)

Now stop the connector.
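One way to do this is with the CLI's rm command, which removes the connector from the Connect cluster:

➜  bin/connect-cli rm voltdb-sink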


Configurations

The Kafka Connect framework requires the following in addition to any connector-specific configurations:

Config           Description                                      Type    Value
name             Name of the connector                            string  This must be unique across the Connect cluster
topics           The topics to sink. The connector will check     string
                 that this matches the KCQL statement
tasks.max        The number of tasks to scale output              int     1
connector.class  Name of the connector class                      string  com.datamountaineer.streamreactor.connect.voltdb.sink.VoltSinkConnector

Connector Configurations

Config                 Description                                             Type
connect.volt.kcql      KCQL expression describing field selection and routes   string
connect.volt.servers   Comma separated server[:port]                           string
connect.volt.username  The user to connect to the VoltDB database              string
connect.volt.password  The password for the VoltDB user                        string

Optional Configurations

Config                       Description                                        Type     Default
connect.volt.error.policy    Specifies the action to be taken if an error      string   THROW
                             occurs while inserting the data. There are
                             three available options: NOOP, the error is
                             swallowed; THROW, the error is allowed to
                             propagate; RETRY, the Kafka message is
                             redelivered up to a maximum number of times
                             specified by the connect.volt.max.retries option.
connect.volt.max.retries     The maximum number of times a message is          string   10
                             retried. Only valid when
                             connect.volt.error.policy is set to RETRY.
connect.volt.retry.interval  The interval, in milliseconds, between retries    string   60000
                             when connect.volt.error.policy is set to RETRY.
connect.progress.enabled     Enables the output for how many records have      boolean  false
                             been processed.

Schema Evolution

Upstream changes to schemas are handled by the Schema Registry, which will validate the addition and removal of fields, data type changes and whether defaults are set. The Schema Registry enforces Avro schema evolution rules. More information can be found here.
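For example, under these rules adding a field is a backward-compatible change only when it declares a default the registry can fall back to. A hypothetical evolution of the User schema used in this quickstart, adding an illustrative title field:

{"type":"record","name":"User","namespace":"com.datamountaineer.streamreactor.connect.voltdb",
 "fields":[
  {"name":"firstName","type":"string"},
  {"name":"lastName","type":"string"},
  {"name":"age","type":"int"},
  {"name":"salary","type":"double"},
  {"name":"title","type":"string","default":""}
 ]}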

No schema evolution is handled by the Sink yet on changes in the upstream topics.


Kubernetes

Helm Charts are provided at our repo. Add the repo to your Helm instance and install. We recommend using the Landscaper to manage Helm values, since typically each connector instance has its own deployment.

Add the Helm charts to your Helm instance:

helm repo add landoop https://lensesio.github.io/kafka-helm-charts/
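Then update and inspect the repo to find the VoltDB sink chart; the chart name in the install line below is an assumption, so verify it with the search first:

helm repo update
helm search repo landoop                                     # list available charts (Helm 3 syntax)
helm install voltdb-sink landoop/kafka-connect-voltdb-sink   # chart name is illustrative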


Troubleshooting

Please review the FAQs and join our Slack channel.