Cassandra Sink

Download connector: Cassandra for Kafka 1.1 | Cassandra for Kafka 1.0

The Cassandra Sink allows you to write events from Kafka to Cassandra. The connector converts the value from the Kafka Connect SinkRecords to JSON and uses Cassandra’s JSON insert functionality to insert the rows. The task expects pre-created tables in Cassandra.

See Cassandra’s documentation for type mapping.
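
For illustration, the statements the connector issues are of this form; a minimal sketch using Cassandra’s INSERT ... JSON syntax against the demo.orders table created later in this quickstart:

INSERT INTO demo.orders JSON '{"id": 1, "created": "2016-05-06 13:53:00", "product": "OP-DAX-P-20150201-95.7", "price": 94.2, "qty": 100}';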

Prerequisites

  • Apache Kafka 0.11.x or above
  • Kafka Connect 0.11.x or above
  • Cassandra 2.2.4+ if you are on version 2.*, or 3.0.1+ if you are on version 3.*
  • Java 1.8

Note

You must be using at least Cassandra 3.0.9 to have JSON support!

Features

  1. KCQL routing and querying - Kafka topic payload field selection is supported, allowing you to select some or all fields for writing to Cassandra
  2. Error policies for handling failures
  3. Payload support for Schema.Struct with a Struct payload, Schema.String with a JSON payload, and JSON with no schema
  4. Optional TTL, time to live, on inserts. See Cassandra’s documentation for more information
  5. Deletion of records in Cassandra for null payloads
  6. SSL support

KCQL Support

INSERT INTO cassandra_table SELECT { FIELDS, ... } FROM kafka_topic [TTL=Time to live]

Tip

You can specify multiple KCQL statements separated by ; to have the connector sink multiple topics.

The Cassandra sink supports KCQL, the Kafka Connect Query Language. The following KCQL support is available:

  1. Field selection
  2. Selection of target table
  3. Time to live on inserts.

Example:

-- Insert mode, select all fields from topicA and write to tableA
INSERT INTO tableA SELECT * FROM topicA

-- Insert mode, select 3 fields and rename from topicB and write to tableB
INSERT INTO tableB SELECT x AS a, y AS b, z AS c FROM topicB

-- Insert mode, select 3 fields and rename from topicB and write to tableB with TTL
INSERT INTO tableB SELECT x AS a, y AS b, z AS c FROM topicB TTL=100000
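
As noted in the tip above, multiple KCQL statements can be combined in a single connect.cassandra.kcql value by separating them with ;, for example:

-- Two topics routed to two tables by one connector instance
INSERT INTO tableA SELECT * FROM topicA;INSERT INTO tableB SELECT x AS a, y AS b FROM topicB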

Deletion in Cassandra

Compacted topics in Kafka retain the last message per key. Deletion in Kafka occurs by tombstoning: if compaction is enabled on the topic and a message is sent with a null payload, Kafka flags this record for deletion and it is compacted/removed from the topic. For more information on compaction see the Kafka documentation.

The use case for this delete functionality would be, for example, when the source topic is a compacted topic, perhaps capturing data changes from an upstream source such as a CDC connector. Say a record is deleted from the upstream source and that delete operation is propagated to the Kafka topic, with the key of the Kafka message as the primary key of the record in the target Cassandra table - the value of the Kafka message is now null. This feature allows you to delete these records in Cassandra.

This functionality will be migrated to KCQL in future releases.

Deletion in Cassandra is supported based on fields in the key of messages with an empty/null payload. Deletion is enabled via the connect.cassandra.delete.enabled option. A Cassandra delete statement must also be provided via connect.cassandra.delete.statement, which specifies the CQL delete statement with parameters to bind key field values to. For example, with the delete statement:

DELETE FROM orders WHERE id = ? and product = ?

If a message is received with an empty/null value and key fields key.id and key.product, the final bound Cassandra statement would be:

# connect.cassandra.delete.enabled=true
# connect.cassandra.delete.statement=DELETE FROM orders WHERE id = ? and product = ?
# connect.cassandra.delete.struct_flds=id,product

# "{ "key": { "id" : 999, "product" : "DATAMOUNTAINEER" }, "value" : null }"
DELETE FROM orders WHERE id = 999 and product = 'DATAMOUNTAINEER'

Note

Deletion will only occur if a message with an empty payload is received from Kafka.

Important

Ensure the ordinal position of the fields in connect.cassandra.delete.struct_flds matches the bind order in the Cassandra delete statement!

Payload Support

Schema.Struct and a Struct Payload

If you follow best practice while producing events, each message should carry its schema information. The best option is to send AVRO. Your connector configuration options include:

key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081

This requires the SchemaRegistry.

Note

This needs to be done in the connect worker properties if using Kafka versions prior to 0.11

Schema.String and a JSON Payload

Sometimes the producer would find it easier to just send a message with Schema.String and a JSON string. In this case your connector configuration should be set to value.converter=org.apache.kafka.connect.json.JsonConverter. This doesn’t require the SchemaRegistry.

key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter

Note

This needs to be done in the connect worker properties if using Kafka versions prior to 0.11

No schema and a JSON Payload

Many existing systems publish JSON over Kafka, and bringing them in line with best practices is quite a challenge, hence we added this support. To enable it, change the converters in the connector configuration:

key.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=false

Note

This needs to be done in the connect worker properties if using Kafka versions prior to 0.11
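
As a quick sanity check, with these converters you can push a plain JSON record using the standard console producer; a sketch assuming the orders-topic and the orders table used later in this quickstart:

bin/kafka-console-producer \
 --broker-list localhost:9092 --topic orders-topic
{"id": 5, "created": "2016-05-06 13:57:00", "product": "OP-DAX-P-20150201-95.7", "price": 94.2, "qty": 100}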

Error Policies

The sink has three error policies that determine how failed writes to the target database are handled. These error policies allow you to control the behaviour of the sink if it encounters an error when writing records to the target system. Since Kafka retains the records, subject to the configured retention policy of the topic, the sink can ignore the error, fail the connector or attempt redelivery.

Throw

Any error on write to the target system will be propagated up and processing is stopped. This is the default behavior.

Noop

Any error on write to the target database is ignored and processing continues.

Warning

This can lead to missed errors if you don’t have adequate monitoring. Data is not lost as it’s still in Kafka, subject to Kafka’s retention policy. The sink currently does not distinguish between integrity constraint violations and other exceptions thrown by any drivers or the target system.

Retry

Any error on write to the target system causes the RetryIterable exception to be thrown. This causes the Kafka Connect framework to pause and replay the message; offsets are not committed. For example, if the table is offline a write failure occurs and the message can be replayed. With the Retry policy, the issue can be fixed without stopping the sink.
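
For example, a sketch of the sink options involved when using the Retry policy (check the exact keys and defaults against the Optional Configurations table below):

connect.cassandra.error.policy=RETRY
# maximum number of redeliveries before the task fails
connect.cassandra.max.retries=10
# wait 60 seconds between redeliveries
connect.cassandra.retry.interval=60000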

Lenses QuickStart

The easiest way to try this out is using Lenses Box, the pre-configured Docker image that comes with this connector pre-installed. Navigate to Connectors –> New Connector –> Sink –> Cassandra and paste your configuration.

[Image: creating the Cassandra sink connector in Lenses (lenses-create-cassandra-sink-connector.png)]

Cassandra Setup

First download and install Cassandra if you don’t have a compatible cluster available.

#make a folder for cassandra
mkdir cassandra

#Download Cassandra
wget http://apache.cs.uu.nl/cassandra/3.5/apache-cassandra-3.5-bin.tar.gz

#extract archive to cassandra folder
tar -xvf apache-cassandra-3.5-bin.tar.gz -C cassandra

#Set up environment variables
export CASSANDRA_HOME=~/cassandra/apache-cassandra-3.5
export PATH=$PATH:$CASSANDRA_HOME/bin

#Start Cassandra
sh $CASSANDRA_HOME/bin/cassandra
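
You can verify the node is up before moving on; a quick check, assuming the exports above put the Cassandra binaries on your PATH:

#check cluster status, the single node should report UN (Up/Normal)
nodetool status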

Test data

The Sink currently expects pre-created tables and keyspaces, so let’s create a keyspace and table in Cassandra via the CQL shell first.

Once you have installed and started Cassandra create a table to write records to. This snippet creates a table called orders to hold fictional orders on a trading platform.

Start the Cassandra cql shell

➜  bin ./cqlsh
Connected to Test Cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.0.2 | CQL spec 3.3.1 | Native protocol v4]
Use HELP for help.
cqlsh>

Execute the following to create the keyspace and table:

CREATE KEYSPACE demo WITH REPLICATION = {'class' : 'SimpleStrategy', 'replication_factor' : 3};
use demo;

create table orders (id int, created varchar, product varchar, qty int, price float, PRIMARY KEY (id, created))
WITH CLUSTERING ORDER BY (created asc);
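
You can confirm the keyspace and table exist before starting the sink, for example:

DESCRIBE KEYSPACE demo;
DESCRIBE TABLE demo.orders;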

Installing the Connector

In production, Connect should be run in distributed mode:

  1. Install and configure a Kafka Connect cluster
  2. Create a folder on each server called plugins/lib
  3. Copy into the above folder the required connector jars from the stream reactor download
  4. Edit connect-avro-distributed.properties in the etc/schema-registry folder and uncomment the plugin.path option. Set it to the root directory, i.e. plugins, created in step 2 and holding the connector jars copied in step 3
  5. Start Connect: bin/connect-distributed etc/schema-registry/connect-avro-distributed.properties

Connect Workers are long-running processes, so set up an init.d or systemd service accordingly.
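
For example, a minimal systemd unit sketch, assuming Connect is installed under /opt/confluent and runs as the kafka user (adjust paths and user for your installation):

[Unit]
Description=Kafka Connect (distributed mode)
After=network.target

[Service]
User=kafka
ExecStart=/opt/confluent/bin/connect-distributed /opt/confluent/etc/schema-registry/connect-avro-distributed.properties
Restart=on-failure

[Install]
WantedBy=multi-user.target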

Starting the Connector

Download and install Stream Reactor to your Kafka Connect cluster. Follow the instructions here if you haven’t already done so. All paths in the quickstart are based on the location you installed Stream Reactor to.

Once Connect has started we can use the kafka-connect-tools CLI to post in our distributed properties file for Cassandra. If you are using the Docker images you will also have to set the following environment variable for the CLI to connect to the Kafka Connect REST API.

export KAFKA_CONNECT_REST="http://myserver:myport"
➜  bin/connect-cli create cassandra-sink-orders < conf/cassandra-sink.properties

name=cassandra-sink-orders
connector.class=com.datamountaineer.streamreactor.connect.cassandra.sink.CassandraSinkConnector
tasks.max=1
topics=orders-topic
connect.cassandra.kcql=INSERT INTO orders SELECT * FROM orders-topic
connect.cassandra.port=9042
connect.cassandra.key.space=demo
connect.cassandra.contact.points=localhost
connect.cassandra.username=cassandra
connect.cassandra.password=cassandra
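
Alternatively, the same configuration can be posted directly to the Kafka Connect REST API as JSON; a sketch assuming Connect is listening on the default port 8083:

curl -X POST -H "Content-Type: application/json" http://localhost:8083/connectors \
  -d '{
    "name": "cassandra-sink-orders",
    "config": {
      "connector.class": "com.datamountaineer.streamreactor.connect.cassandra.sink.CassandraSinkConnector",
      "tasks.max": "1",
      "topics": "orders-topic",
      "connect.cassandra.kcql": "INSERT INTO orders SELECT * FROM orders-topic",
      "connect.cassandra.contact.points": "localhost",
      "connect.cassandra.port": "9042",
      "connect.cassandra.key.space": "demo",
      "connect.cassandra.username": "cassandra",
      "connect.cassandra.password": "cassandra"
    }
  }'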

If you switch back to the terminal you started Kafka Connect in you should see the Cassandra Sink being accepted and the task starting.

We can use the CLI to check if the connector is up but you should be able to see this in logs as well.

#check for running connectors with the CLI
➜ bin/connect-cli ps
cassandra-sink
  INFO
    __                    __
   / /   ____ _____  ____/ /___  ____  ____
  / /   / __ `/ __ \/ __  / __ \/ __ \/ __ \
 / /___/ /_/ / / / / /_/ / /_/ / /_/ / /_/ /
/_____/\__,_/_/ /_/\__,_/\____/\____/ .___/
                                   /_/
         ______                                __           _____ _       __
        / ____/___ _______________ _____  ____/ /________ _/ ___/(_)___  / /__
       / /   / __ `/ ___/ ___/ __ `/ __ \/ __  / ___/ __ `/\__ \/ / __ \/ //_/
      / /___/ /_/ (__  |__  ) /_/ / / / / /_/ / /  / /_/ /___/ / / / / / ,<
      \____/\__,_/____/____/\__,_/_/ /_/\__,_/_/   \__,_//____/_/_/ /_/_/|_|

   By Andrew Stevenson. (com.datamountaineer.streamreactor.connect.cassandra.sink.CassandraSinkTask:50)

Test Records

Tip

If your input topic doesn’t match the target, use Lenses SQL to transform the input in real time, no Java or Scala required!

Now we need to put some records into the orders-topic. We can use the kafka-avro-console-producer to do this. Start the producer and pass in a schema to register in the Schema Registry. The schema matches the table created earlier.

bin/kafka-avro-console-producer \
 --broker-list localhost:9092 --topic orders-topic \
 --property value.schema='{"type":"record","name":"myrecord","fields":[{"name":"id","type":"int"},{"name":"created","type":"string"},{"name":"product","type":"string"},{"name":"price","type":"double"}, {"name":"qty", "type":"int"}]}'

Now the producer is waiting for input. Paste in the following (each on a line separately):

{"id": 1, "created": "2016-05-06 13:53:00", "product": "OP-DAX-P-20150201-95.7", "price": 94.2, "qty":100}
{"id": 2, "created": "2016-05-06 13:54:00", "product": "OP-DAX-C-20150201-100", "price": 99.5, "qty":100}
{"id": 3, "created": "2016-05-06 13:55:00", "product": "FU-DATAMOUNTAINEER-20150201-100", "price": 10000, "qty":100}
{"id": 4, "created": "2016-05-06 13:56:00", "product": "FU-KOSPI-C-20150201-100", "price": 150, "qty":100}

Now if we check the logs of the connector we should see the 4 records being inserted into Cassandra:

[2016-05-06 13:55:10,368] INFO Setting newly assigned partitions [orders-topic-0] for group connect-cassandra-sink-1 (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:219)
[2016-05-06 13:55:16,423] INFO Received 4 records. (com.datamountaineer.streamreactor.connect.cassandra.sink.CassandraJsonWriter:96)
[2016-05-06 13:55:16,484] INFO Processed 4 records. (com.datamountaineer.streamreactor.connect.cassandra.sink.CassandraJsonWriter:138)

Check the records have arrived in Cassandra via cqlsh:

use demo;
SELECT * FROM orders;

 id | created             | price | product                           | qty
----+---------------------+-------+-----------------------------------+-----
  1 | 2016-05-06 13:53:00 |  94.2 |            OP-DAX-P-20150201-95.7 | 100
  2 | 2016-05-06 13:54:00 |  99.5 |             OP-DAX-C-20150201-100 | 100
  3 | 2016-05-06 13:55:00 | 10000 |   FU-DATAMOUNTAINEER-20150201-100 | 100
  4 | 2016-05-06 13:56:00 |   150 |           FU-KOSPI-C-20150201-100 | 100

(4 rows)

Bingo, our 4 rows!

Configurations

name
  Name of the connector. This must be unique across the Connect cluster.
  Type: string

topics
  The topics to sink. The connector will check this matches the KCQL statements.
  Type: string

tasks.max
  The number of tasks to scale output.
  Type: int. Default: 1

connector.class
  Name of the connector class.
  Type: string. Value: com.datamountaineer.streamreactor.connect.cassandra.sink.CassandraSinkConnector

Connector Configurations

connect.cassandra.contact.points
  Contact points (hosts) in the Cassandra cluster. This is a comma separated value, i.e. host-1,host-2.
  Type: string

connect.cassandra.key.space
  Key space the tables to write to belong to.
  Type: string

connect.cassandra.kcql
  Kafka Connect Query Language expression. Allows for an expressive topic to table routing, field selection and renaming.
  Type: string

Optional Configurations

connect.cassandra.port
  Port for the native Java driver.
  Type: int. Default: 9042

connect.cassandra.username
  Username to connect to Cassandra with.
  Type: string

connect.cassandra.password
  Password to connect to Cassandra with.
  Type: string

connect.cassandra.ssl.enabled
  Enables SSL communication against an SSL-enabled Cassandra cluster.
  Type: boolean. Default: false

connect.cassandra.trust.store.password
  Password for the truststore.
  Type: string

connect.cassandra.key.store.path
  Path to the keystore.
  Type: string

connect.cassandra.key.store.password
  Password for the keystore.
  Type: string

connect.cassandra.ssl.client.cert.auth
  Enables client certificate authentication; requires the keystore options to be set.
  Type: string

connect.cassandra.consistency.level
  Consistency refers to how up-to-date and synchronized a row of Cassandra data is on all of its replicas. Cassandra offers tunable consistency: for any given read or write operation, the client application decides how consistent the requested data must be. Please refer to the Cassandra documentation for further details.
  Type: string. Default: ONE

connect.cassandra.delete.enabled
  Enables row deletion from Cassandra.
  Type: boolean. Default: false

connect.cassandra.delete.statement
  Delete statement for Cassandra. Required if connect.cassandra.delete.enabled is set.
  Type: string

connect.cassandra.delete.struct_flds
  Fields in the key struct used in the delete statement. Comma separated, in the order they appear in connect.cassandra.delete.statement.
  Type: string

connect.cassandra.error.policy
  Specifies the action to be taken if an error occurs while inserting the data. There are three available options: NOOP, the error is swallowed; THROW, the error is allowed to propagate; RETRY, the Kafka message is redelivered up to a maximum number of times specified by the connect.cassandra.max.retries option.
  Type: string. Default: THROW

connect.cassandra.max.retries
  The maximum number of times a message is retried. Only valid when connect.cassandra.error.policy is set to RETRY.
  Type: string. Default: 10

connect.cassandra.retry.interval
  The interval, in milliseconds, between retries if connect.cassandra.error.policy is set to RETRY.
  Type: string. Default: 60000

connect.progress.enabled
  Enables logging of how many records have been processed.
  Type: boolean. Default: false

connect.cassandra.mapping.collection.to.json
  Map columns of type Map, List and Set to JSON.
  Type: boolean. Default: true

connect.cassandra.default.value
  By default a column omitted from the JSON map will be set to NULL. Alternatively, if set to UNSET, the pre-existing value will be preserved.
  Type: string

Example

name=cassandra-sink-orders
connector.class=com.datamountaineer.streamreactor.connect.cassandra.sink.CassandraSinkConnector
tasks.max=1
topics=orders-topic
connect.cassandra.kcql=INSERT INTO TABLE1 SELECT * FROM TOPIC1;INSERT INTO TABLE2 SELECT field1, field2, field3 as renamedField FROM TOPIC2
connect.cassandra.contact.points=localhost
connect.cassandra.port=9042
connect.cassandra.key.space=demo
connect.cassandra.username=cassandra
connect.cassandra.password=cassandra

Schema Evolution

Upstream changes to schemas are handled by the Schema Registry, which will validate the addition and removal of fields, data type changes and whether defaults are set. The Schema Registry enforces AVRO schema evolution rules. More information can be found here.

For the Sink connector, if columns are added to the target Cassandra table and are not present in the source topic, they will be set to null by Cassandra’s JSON insert functionality. Columns which are omitted from the JSON value map are treated as a null insert (which results in an existing value being deleted, if one is present) if a record with the same key is inserted again.
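
If you would rather keep the existing column value when a field is missing from the incoming record, the connector exposes an option for this (see connect.cassandra.default.value in the Optional Configurations above); a sketch:

connect.cassandra.default.value=UNSET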

Future releases will support auto-creation of tables and adding columns on changes to the topic schema.

Kubernetes

Helm Charts are provided at our repo, add the repo to your Helm instance and install. We recommend using the Landscaper to manage Helm Values since typically each Connector instance has its own deployment.

Add the Helm charts to your Helm instance:

helm repo add landoop https://landoop.github.io/kafka-helm-charts/
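
Once the repo is added you can list the available charts and install the one for this sink; a sketch only, the chart name below is an assumption, confirm it from the search output first:

# list charts published in the repo (Helm 3 syntax; on Helm 2 use: helm search landoop)
helm search repo landoop

# install the Cassandra sink chart once you have confirmed its name (assumed name shown here)
helm install cassandra-sink landoop/kafka-connect-cassandra-sink --values my-values.yaml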

Troubleshooting

Please review the FAQs and join our Slack channel.