Azure DocumentDB Sink

Download connector: DocumentDB for Kafka 1.1 | DocumentDB for Kafka 1.0

The Azure DocumentDB Kafka Connect sink allows you to write events from Kafka to your DocumentDB instance. The connector converts Kafka Connect SinkRecords to DocumentDB Documents and performs an insert or upsert, depending on the configuration you choose. If the database doesn’t exist it can be created automatically, provided the corresponding configuration flag is set to true (see the Configurations section below). The target collections will be created if they don’t already exist.

Prerequisites

  • Apache Kafka 0.11.x or above
  • Kafka Connect 0.11.x or above
  • Azure DocumentDB instance
  • Java 1.8

Features

  1. KCQL routing and querying - topic to collection mapping and field selection
  2. Insert and Upsert modes
  3. Supported payloads: Schema.String with a JSON payload, JSON payload with no schema, and Schema.Struct with a Struct payload
  4. Error policies for handling failures.

KCQL Support

{ INSERT | UPSERT } INTO collection SELECT { FIELD, ... } FROM kafka_topic [PK FIELDS,...]

Tip

You can specify multiple KCQL statements separated by ; to have the connector sink multiple topics.

The DocumentDB sink supports KCQL, the Kafka Connect Query Language. The following KCQL support is available:

  1. Field selection
  2. Selection of target collection
  3. Insert and Upsert.

Example:

-- Insert mode, select all fields from topicA and write to collectionA
INSERT INTO collectionA SELECT * FROM topicA

-- Upsert mode, select 3 fields, rename them, and write to collectionB with the field id from the topic as the primary key
UPSERT INTO collectionB SELECT x AS a, y AS b, z AS c FROM topicB PK id

Topic Routing

The sink supports topic routing that maps messages from a topic to a specific collection, for example mapping a topic called bloomberg_prices to a collection called prices. This mapping is set in the connect.documentdb.kcql option. The sink supports multiple routing configurations, separated by ;

connect.documentdb.kcql = INSERT INTO orders SELECT * FROM orders-topic; UPSERT INTO customers SELECT * FROM customer-topic PK customer_id

Insert Mode

Insert is the default write mode of the sink. It inserts messages from Kafka topics into DocumentDB.

A failure to insert a record into DocumentDB may occur due to integrity constraints or other exceptions such as casting issues. Kafka currently provides at-least-once delivery semantics, so this mode may produce errors if unique constraints have been implemented on the target collections. If the error policy has been set to NOOP the sink will discard the error and continue to process; however, it currently makes no attempt to distinguish violations of integrity constraints from other exceptions such as casting issues.
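For example, a minimal sketch of an insert-mode routing combined with the NOOP error policy (the topic and collection names are illustrative):

connect.documentdb.kcql=INSERT INTO orders SELECT * FROM orders-topic
connect.documentdb.error.policy=NOOP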

Upsert Mode

The Sink supports DocumentDB upsert functionality which replaces the existing row if a match is found on the primary keys.

This mode works nicely with at-least-once delivery semantics on Kafka, as order is guaranteed within partitions. If the same record is delivered twice to the sink, it results in an idempotent write: the existing record is updated with the values of the second copy, which are the same.

If records are delivered with the same field or group of fields that are used as the primary key on the target collection, but different values, the existing record in the target collection will be updated.

Since records are delivered in the order they were written per partition the write is idempotent on failure or restart. Redelivery produces the same result.
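A minimal sketch of an upsert routing, keyed on a field from the incoming record (this reuses the topic, collection and key field names from the routing example above):

connect.documentdb.kcql=UPSERT INTO customers SELECT * FROM customer-topic PK customer_id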

Field Selection

The sink supports selecting fields from the source topic or selecting all. There is an option to rename a field as well. All of this can be easily expressed with KCQL:

--  Select all fields from topic fx_prices and insert into the fx collection
INSERT INTO fx SELECT * FROM fx_prices

--  Select all fields from topic fx_prices and upsert into the fx collection
--  The assumption is there will be a ticker field in the incoming json
UPSERT INTO fx SELECT * FROM fx_prices PK ticker

--  Select specific fields from the topic sample_topic and insert into the sample collection
INSERT INTO sample SELECT field1,field2,field3 FROM sample_topic

--  Select specific fields from the topic sample_topic and upsert into the sample collection
UPSERT INTO sample SELECT field1,field2,field3 FROM sample_topic PK field1

--  Rename some fields while selecting all from the topic sample_topic and insert into the sample collection
INSERT INTO sample SELECT *, field1 as new_name1,field2 as new_name2 FROM sample_topic

--  Rename some fields while selecting all from the topic sample_topic and upsert into the sample collection
UPSERT INTO sample SELECT *, field1 as new_name1,field2 as new_name2 FROM sample_topic PK new_name1

--  Select specific fields and rename some of them from the topic sample_topic and insert into the sample collection
INSERT INTO sample SELECT field1 as new_name1,field2, field3 as new_name3 FROM sample_topic

--  Select specific fields and rename some of them from the topic sample_topic and upsert into the sample collection
UPSERT INTO sample SELECT field1 as new_name1,field2, field3 as new_name3 FROM sample_topic PK new_name3

Payload Support

Schema.Struct and a Struct Payload

If you follow best practice when producing events, each message should carry its schema information. The best option is to send AVRO. Your connector configuration options would include:

key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081

This requires the SchemaRegistry.

Note

This needs to be done in the connect worker properties if using Kafka versions prior to 0.11

Schema.String and a JSON Payload

Sometimes the producer would find it easier to just send a message with Schema.String and a JSON string. In this case your connector configuration should be set to value.converter=org.apache.kafka.connect.json.JsonConverter. This doesn’t require the SchemaRegistry.

key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter

Note

This needs to be done in the connect worker properties if using Kafka versions prior to 0.11

No schema and a JSON Payload

Many existing systems publish plain JSON over Kafka, and bringing them in line with best practices is quite a challenge, hence this support was added. To enable it you must change the converters in the connector configuration.

key.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=false

Note

This needs to be done in the connect worker properties if using Kafka versions prior to 0.11

Error Policies

The sink has three error policies that determine how failed writes to the target database are handled. These error policies allow you to control the behaviour of the sink if it encounters an error when writing records to the target system. Since Kafka retains the records, subject to the configured retention policy of the topic, the sink can ignore the error, fail the connector or attempt redelivery. The error policies affect the behaviour of the schema evolution characteristics of the sink. See the Schema Evolution section for more information.

Throw

Any error on write to the target database will be propagated up and processing is stopped. This is the default behavior.

Noop

Any error on write to the target database is ignored and processing continues.

Warning

This can lead to missed errors if you don’t have adequate monitoring. Data is not lost as it is still in Kafka, subject to Kafka’s retention policy. The sink currently does not distinguish between integrity constraint violations and other exceptions thrown by drivers.

Retry

Any error on write to the target database causes the RetryIterable exception to be thrown. This causes the Kafka Connect framework to pause and replay the message; offsets are not committed. For example, if the database is offline a write failure occurs and the message can be replayed once it is back. With the Retry policy, the issue can be fixed without stopping the sink.

The length of time the sink will retry can be controlled by the connect.documentdb.max.retires and connect.documentdb.retry.interval options.
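For example, a sketch that retries up to 20 times, waiting a minute between attempts (the values are illustrative):

connect.documentdb.error.policy=RETRY
connect.documentdb.max.retires=20
connect.documentdb.retry.interval=60000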

DocumentDB Setup

For a DocumentDB instance you can either use the emulator provided by Microsoft or provision an instance in Azure. If you already have an instance of Azure DocumentDB running you can skip this step. Otherwise, please follow this to get an Azure account or use the Emulator.
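If you prefer the command line, a sketch of provisioning an instance with the Azure CLI is shown below; DocumentDB is provisioned through the Cosmos DB command group, and the account and resource group names are illustrative.

#names below are illustrative; pick your own account, group and region
az login
az group create --name docdb-rg --location westeurope
az cosmosdb create --name my-docdb-account --resource-group docdb-rg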

Test Database

The sink can handle creating the database if it is not present. All you have to do in this case is to set the following in the configuration:

connect.documentdb.db.create=true

Lenses QuickStart

The easiest way to try this out is with Lenses Box, the pre-configured Docker image that comes with this connector pre-installed. Go to Connectors –> New Connector –> DocumentDB and paste your configuration.

[Image: creating the DocumentDB connector in Lenses]

Enabling AVRO support

To have the connector work with AVRO messages and integrate with the Schema Registry, every AVRO subject is registered in the Schema Registry, and every message carries a magic byte followed by the AVRO schema id and then the actual AVRO record in binary format.

The following config entries should either be placed in the connect properties file that Kafka Connect is started with, or in the configuration of the connector. If your SchemaRegistry runs on a different machine, or you have multiple instances of it, you will have to amend the configuration accordingly.

key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081

Installing the Connector

In production, Connect should be run in distributed mode:

  1. Install and configure a Kafka Connect cluster
  2. Create a folder on each server called plugins/lib
  3. Copy the required connector jars from the Stream Reactor download into the above folder
  4. Edit connect-avro-distributed.properties in the etc/schema-registry folder and uncomment the plugin.path option. Set it to the root directory (i.e. plugins) where you deployed the Stream Reactor connector jars
  5. Start Connect: bin/connect-distributed etc/schema-registry/connect-avro-distributed.properties

Connect Workers are long running processes so set an init.d or systemctl service accordingly.
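A minimal systemd unit sketch for a Connect worker; the installation paths are assumptions and should be adjusted to your own deployment.

[Unit]
Description=Kafka Connect (distributed)
After=network.target

[Service]
# Paths below are illustrative; point them at your own installation
ExecStart=/opt/confluent/bin/connect-distributed /opt/confluent/etc/schema-registry/connect-avro-distributed.properties
Restart=on-failure

[Install]
WantedBy=multi-user.target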

Starting the Connector

Download, and install Stream Reactor to your Kafka Connect cluster. Follow the instructions here if you haven’t already done so. All paths in the quickstart are based on the location you installed Stream Reactor.

Once Connect has started we can use the kafka-connect-tools CLI to post in our distributed properties file for Azure DocumentDB. If you are using the Docker images you will also have to set the following environment variable for the CLI to connect to the Kafka Connect REST API.

export KAFKA_CONNECT_REST="http://myserver:myport"
➜  bin/connect-cli create azure-docdb-sink < conf/source.kcql/azure-docdb-sink.properties

name=azure-docdb-sink
connector.class=com.datamountaineer.streamreactor.connect.azure.documentdb.sink.DocumentDbSinkConnector
tasks.max=1
topics=orders-avro
connect.documentdb.kcql=INSERT INTO orders SELECT * FROM orders-avro
connect.documentdb.db=dm
connect.documentdb.endpoint=[YOUR_AZURE_ENDPOINT]
connect.documentdb.db.create=true
connect.documentdb.master.key=[YOUR_MASTER_KEY]
connect.documentdb.batch.size=10
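If you prefer not to use the CLI, the same configuration can be posted directly to the Kafka Connect REST API; the host and port below are illustrative (8083 is the Connect default).

curl -X POST -H "Content-Type: application/json" http://localhost:8083/connectors -d '{
  "name": "azure-docdb-sink",
  "config": {
    "connector.class": "com.datamountaineer.streamreactor.connect.azure.documentdb.sink.DocumentDbSinkConnector",
    "tasks.max": "1",
    "topics": "orders-avro",
    "connect.documentdb.kcql": "INSERT INTO orders SELECT * FROM orders-avro",
    "connect.documentdb.db": "dm",
    "connect.documentdb.endpoint": "[YOUR_AZURE_ENDPOINT]",
    "connect.documentdb.db.create": "true",
    "connect.documentdb.master.key": "[YOUR_MASTER_KEY]",
    "connect.documentdb.batch.size": "10"
  }
}'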

If you switch back to the terminal you started Kafka Connect in you should see the DocumentDB Sink being accepted and the task starting.

We can use the CLI to check if the connector is up but you should be able to see this in logs as well.

#check for running connectors with the CLI
➜ bin/connect-cli ps
azure-docdb-sink
   [2018-01-11 21:34:09,922] INFO
    __                    __
   / /   ____ _____  ____/ /___  ____  ____
  / /   / __ `/ __ \/ __  / __ \/ __ \/ __ \
 / /___/ /_/ / / / / /_/ / /_/ / /_/ / /_/ /
/_____/\__,_/_/ /_/\__,_/\____/\____/ .___/
                                   /_/

       /\                         |  __ \           |  __ \|  _ \   / ____(_)     | |
      /  \    _____   _ _ __ ___  | |  | | ___   ___| |  | | |_) | | (___  _ _ __ | | __
     / /\ \  |_  / | | | '__/ _ \ | |  | |/ _ \ / __| |  | |  _ <   \___ \| | '_ \| |/ /
    / ____ \  / /| |_| | | |  __/ | |__| | (_) | (__| |__| | |_) |  ____) | | | | |   <
   /_/    \_\/___|\__,_|_|  \___| |_____/ \___/ \___|_____/|____/  |_____/|_|_| |_|_|\_\

           By Stefan Bocutiu       _____             _____  ____     _____ _       _

Test Records

Tip

If your input topic doesn’t match the target, use Lenses SQL to transform the input in real time, no Java or Scala required!

Now we need to put some records into the orders-avro topic. We can use the kafka-avro-console-producer to do this. Start the producer and pass in a schema to register in the Schema Registry. The schema matches the records we are about to send.

bin/kafka-avro-console-producer \
 --broker-list localhost:9092 --topic orders-avro \
 --property value.schema='{"type":"record","name":"myrecord","fields":[{"name":"id","type":"string"},
{"name":"created", "type": "string"}, {"name":"product", "type": "string"}, {"name":"price", "type": "double"}]}'

Now the producer is waiting for input. Paste in the following (each on a line separately):

{"id": "1", "created": "2016-05-06 13:53:00", "product": "OP-DAX-P-20150201-95.7", "price": 94.2}
{"id": "2", "created": "2016-05-06 13:54:00", "product": "OP-DAX-C-20150201-100", "price": 99.5}
{"id": "3", "created": "2016-05-06 13:55:00", "product": "FU-DATAMOUNTAINEER-20150201-100", "price": 10000}
{"id": "4", "created": "2016-05-06 13:56:00", "product": "FU-KOSPI-C-20150201-100", "price": 150}

Now if we check the logs of the connector we should see 4 records being inserted into DocumentDB:

#From the Query Explorer in your Azure portal run
SELECT * FROM orders
#The query should return something along the lines of:
[
      {
        "product": "OP-DAX-P-20150201-95.7",
        "created": "2016-05-06 13:53:00",
        "price": 94.2,
        "id": "1",
        "_rid": "Rrg+APfcfwABAAAAAAAAAA==",
        "_self": "dbs/***/colls/***/docs/Rrg+APfcfwABAAAAAAAAAA==/",
        "_etag": "\"4000c5f0-0000-0000-0000-58b5ecd10000\"",
        "_attachments": "attachments/",
        "_ts": 1488317649
      },
      {
        "product": "OP-DAX-C-20150201-100",
        "created": "2016-05-06 13:54:00",
        "price": 99.5,
        "id": "2",
        "_rid": "Rrg+APfcfwACAAAAAAAAAA==",
        "_self": "dbs/***/colls/***/docs/Rrg+APfcfwACAAAAAAAAAA==/",
        "_etag": "\"4000c6f0-0000-0000-0000-58b5ecd10000\"",
        "_attachments": "attachments/",
        "_ts": 1488317649
      },
      {
        "product": "FU-DATAMOUNTAINEER-20150201-100",
        "created": "2016-05-06 13:55:00",
        "price": 10000,
        "id": "3",
        "_rid": "Rrg+APfcfwADAAAAAAAAAA==",
        "_self": "dbs/***/colls/***/docs/Rrg+APfcfwADAAAAAAAAAA==/",
        "_etag": "\"4000c7f0-0000-0000-0000-58b5ecd10000\"",
        "_attachments": "attachments/",
        "_ts": 1488317650
      },
      {
        "product": "FU-KOSPI-C-20150201-100",
        "created": "2016-05-06 13:56:00",
        "price": 150,
        "id": "4",
        "_rid": "Rrg+APfcfwAEAAAAAAAAAA==",
        "_self": "dbs/***/colls/***/docs/Rrg+APfcfwAEAAAAAAAAAA==/",
        "_etag": "\"4000c8f0-0000-0000-0000-58b5ecd10000\"",
        "_attachments": "attachments/",
        "_ts": 1488317650
      }
    ]

Bingo, our 4 documents!

Legacy topics (plain text payload with a JSON string)

We have found that some clients already have an infrastructure where they publish plain JSON on the topic, and the jump to following best practice and using the Schema Registry is quite an ask, so we offer support for them as well.

This time we need to start the Kafka Connect with a different set of settings.

#create a new configuration for connect
➜ cp etc/schema-registry/connect-avro-distributed.properties etc/schema-registry/connect-avro-distributed-json.properties
➜ vim etc/schema-registry/connect-avro-distributed-json.properties

Replace the following 4 entries in the config

key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081

with the following

key.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=false

Now let’s restart the connect instance:

#start a new instance of connect
$ bin/start-connect.sh

Use the CLI to remove the old DocumentDB Sink:

➜ bin/connect-cli rm  azure-docdb-sink

and start the new sink with the JSON properties file to read from.

#make a copy of azure-docdb-sink.properties
cp azure-docdb-sink.properties azure-docdb-sink-json.properties
#edit azure-docdb-sink-json.properties and replace the following keys
topics=orders-topic-json
connect.documentdb.kcql=INSERT INTO orders_j SELECT * FROM orders-topic-json
#start the connector for DocumentDB
➜   bin/connect-cli create azure-docdb-sink-json < azure-docdb-sink-json.properties

Check the logs of Connect.

# Get Connect's logs
connect log connect

[2017-02-28 21:55:52,192] INFO DocumentDBConfig values:
        connect.documentdb.db = dm
        connect.documentdb.endpoint = [hidden]
        connect.documentdb.error.policy = THROW
        connect.documentdb.master.key = [hidden]
        connect.documentdb.max.retires = 20
        connect.documentdb.proxy = null
        connect.documentdb.retry.interval = 60000
        connect.documentdb.batch.size = 10
        connect.documentdb.consistency.level = Session
        connect.documentdb.db.create = true
        connect.documentdb.kcql = INSERT INTO orders_j SELECT * FROM orders-topic-json
 (com.datamountaineer.streamreactor.connect.azure.DocumentDB.config.DocumentDBConfig:180)
[2018-01-12 21:55:52,193] INFO
 _____        _        __  __                   _        _
|  __ \      | |      |  \/  |                 | |      (_)
| |  | | __ _| |_ __ _| \  / | ___  _   _ _ __ | |_ __ _ _ _ __   ___  ___ _ __
| |  | |/ _` | __/ _` | |\/| |/ _ \| | | | '_ \| __/ _` | | '_ \ / _ \/ _ \ '__|
| |__| | (_| | || (_| | |  | | (_) | |_| | | | | || (_| | | | | |  __/  __/ |
|_____/ \__,_|\__\__,_|_|  |_|\___/ \__,_|_| |_|\__\__,_|_|_| |_|\___|\___|_|
       By Stefan Bocutiu        _____             _____  ____     _____ _       _
    /\                         |  __ \           |  __ \|  _ \   / ____(_)     | |
   /  \    _____   _ _ __ ___  | |  | | ___   ___| |  | | |_) | | (___  _ _ __ | | __
  / /\ \  |_  / | | | '__/ _ \ | |  | |/ _ \ / __| |  | |  _ <   \___ \| | '_ \| |/ /
 / ____ \  / /| |_| | | |  __/ | |__| | (_) | (__| |__| | |_) |  ____) | | | | |   <
/_/    \_\/___|\__,_|_|  \___| |_____/ \___/ \___|_____/|____/  |_____/|_|_| |_|_|\_\


 (com.datamountaineer.streamreactor.connect.azure.documentdb.sink.DocumentDbSinkTask:56)

Now it’s time to produce some records. This time we will use the simple kafka-console-producer to put plain JSON on the topic:

➜ bin/kafka-console-producer --broker-list localhost:9092 --topic orders-topic-json

{"id": "1", "created": "2016-05-06 13:53:00", "product": "OP-DAX-P-20150201-95.7", "price": 94.2}
{"id": "2", "created": "2016-05-06 13:54:00", "product": "OP-DAX-C-20150201-100", "price": 99.5}
{"id": "3", "created": "2016-05-06 13:55:00", "product": "FU-DATAMOUNTAINEER-20150201-100", "price":10000}

Let’s check the DocumentDB database for the new records:

#From the Query Explorer in your Azure portal run
SELECT * FROM orders
#The query should return something along the lines of:
[
      {
        "product": "OP-DAX-P-20150201-95.7",
        "created": "2016-05-06 13:53:00",
        "price": 94.2,
        "id": "1",
        "_rid": "Rrg+AP5X3gABAAAAAAAAAA==",
        "_self": "dbs/***/colls/***/docs/Rrg+AP5X3gABAAAAAAAAAA==/",
        "_etag": "\"00007008-0000-0000-0000-58b5f3ff0000\"",
        "_attachments": "attachments/",
        "_ts": 1488319485
      },
      {
        "product": "OP-DAX-C-20150201-100",
        "created": "2016-05-06 13:54:00",
        "price": 99.5,
        "id": "2",
        "_rid": "Rrg+AP5X3gACAAAAAAAAAA==",
        "_self": "dbs/****/colls/***/docs/Rrg+AP5X3gACAAAAAAAAAA==/",
        "_etag": "\"00007108-0000-0000-0000-58b5f3ff0000\"",
        "_attachments": "attachments/",
        "_ts": 1488319485
      },
      {
        "product": "FU-DATAMOUNTAINEER-20150201-100",
        "created": "2016-05-06 13:55:00",
        "price": 10000,
        "id": "3",
        "_rid": "Rrg+AP5X3gADAAAAAAAAAA==",
        "_self": "dbs/****/colls/****/docs/Rrg+AP5X3gADAAAAAAAAAA==/",
        "_etag": "\"00007208-0000-0000-0000-58b5f3ff0000\"",
        "_attachments": "attachments/",
        "_ts": 1488319485
      }
    ]

Bingo, our 3 rows!

Configurations

The Kafka Connect framework requires the following in addition to any connector-specific configurations:

name
  Name of the connector. This must be unique across the Connect cluster.
  Type: string

topics
  The topics to sink. The connector will check this matches the KCQL statement.
  Type: string

tasks.max
  The number of tasks to scale output.
  Type: int. Default: 1

connector.class
  Name of the connector class: com.datamountaineer.streamreactor.connect.azure.documentdb.sink.DocumentDbSinkConnector
  Type: string

Connector Configurations

connect.documentdb.kcql
  Kafka Connect Query Language expression. Allows for expressive topic to collection routing, field selection, and renaming.
  Type: string

connect.documentdb.db
  The Azure DocumentDB target database.
  Type: string

connect.documentdb.endpoint
  The service endpoint to use to create the client.
  Type: string

connect.documentdb.master.key
  The connection master key.
  Type: string

Optional Configurations

connect.documentdb.consistency.level
  Determines the write visibility. There are four possible values: Strong, BoundedStaleness, Session or Eventual.
  Type: string. Default: Session

connect.documentdb.db.create
  If set to true the database will be created if it doesn’t exist. If left at the default (false) and the database is missing, an exception is raised.
  Type: boolean. Default: false

connect.documentdb.proxy
  Specifies the connection proxy details.
  Type: string

connect.documentdb.batch.size
  The number of records the sink pushes to DocumentDB at once (for improved performance).
  Type: int. Default: 100

connect.documentdb.error.policy
  Specifies the action to be taken if an error occurs while inserting the data. There are three available options: NOOP (the error is swallowed), THROW (the error is allowed to propagate) and RETRY (the Kafka message is redelivered up to a maximum number of times specified by the connect.documentdb.max.retires option).
  Type: string. Default: THROW

connect.documentdb.max.retires
  The maximum number of times a message is retried. Only valid when connect.documentdb.error.policy is set to RETRY.
  Type: string. Default: 10

connect.documentdb.retry.interval
  The interval, in milliseconds, between retries when connect.documentdb.error.policy is set to RETRY.
  Type: string. Default: 60000

connect.progress.enabled
  Enables output of how many records have been processed.
  Type: boolean. Default: false

Example

name=azure-docdb-sink
connector.class=com.datamountaineer.streamreactor.connect.azure.documentdb.sink.DocumentDbSinkConnector
tasks.max=1
topics=orders-avro
connect.documentdb.kcql=INSERT INTO orders SELECT * FROM orders-avro
connect.documentdb.db=dm
connect.documentdb.endpoint=[YOUR_AZURE_ENDPOINT]
connect.documentdb.db.create=true
connect.documentdb.master.key=[YOUR_MASTER_KEY]
connect.documentdb.batch.size=10

Schema Evolution

Upstream changes to schemas are handled by the Schema Registry, which will validate the addition and removal of fields, data type changes and whether defaults are set. The Schema Registry enforces AVRO schema evolution rules. More information can be found here.
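For example, under AVRO’s backward compatibility rules, adding a new field with a default value to the orders schema used earlier would be accepted by the Schema Registry; the currency field below is purely illustrative.

{
  "type": "record",
  "name": "myrecord",
  "fields": [
    {"name": "id", "type": "string"},
    {"name": "created", "type": "string"},
    {"name": "product", "type": "string"},
    {"name": "price", "type": "double"},
    {"name": "currency", "type": "string", "default": "USD"}
  ]
}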

Kubernetes

Helm Charts are provided in our repo; add the repo to your Helm instance and install. We recommend using the Landscaper to manage Helm values, since typically each connector instance has its own deployment.

Add the Helm charts to your Helm instance:

helm repo add landoop https://landoop.github.io/kafka-helm-charts/
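Once the repo is added you can search it for the connector chart and install a release; the chart placeholder and release name below are illustrative, so check the repo for the exact chart name.

helm search landoop
helm install landoop/<connector-chart> --name azure-docdb-sink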

Troubleshooting

Please review the FAQs and join our Slack channel.