HBase Sink

Download connector: HBase Connector for Kafka 2.1.0

The HBase Sink allows you to write events from Kafka to HBase. The connector takes the value from the Kafka Connect SinkRecords and inserts a new entry into HBase.

Prerequisites

  • Apache Kafka 0.11.x or above
  • Kafka Connect 0.11.x or above
  • HBase 1.2.0
  • Java 1.8

Features

  1. KCQL routing and querying - Kafka topic payload field selection is supported, allowing you to select the fields written to HBase
  2. Error policies for handling failures.

KCQL Support

INSERT INTO hbase_table SELECT { FIELD, ... } FROM kafka_topic [PK FIELD, ...]

Tip

You can specify multiple KCQL statements separated by ; to have the connector sink multiple topics.

The HBase Sink supports KCQL, the Kafka Connect Query Language. The following KCQL support is available:

  1. Field selection
  2. Target table selection
  3. Selection of fields to concatenate to make the row key.

Example:

-- Insert mode, select all fields from topicA and write to tableA and use the default rowkey (topic name, partition, offset)
INSERT INTO tableA SELECT * FROM topicA

-- Insert mode, select and rename three fields from topicB, write to tableB and use field y from the topic as the row key
INSERT INTO tableB SELECT x AS a, y AS b, z AS c FROM topicB PK y

This is set in the connect.hbase.kcql option.
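
For example, the two statements above can be combined into a single connector instance by separating them with ;:

connect.hbase.kcql=INSERT INTO tableA SELECT * FROM topicA;INSERT INTO tableB SELECT x AS a, y AS b, z AS c FROM topicB PK y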

Primary Keys

The PK keyword can be used to specify the fields which will be used for the row key. The field values will be concatenated and separated by a -. If no fields are set, the topic name, partition and message offset are used.
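
For example, the quickstart configuration later on this page builds the row key from two fields:

INSERT INTO person SELECT * FROM hbase-topic PK firstName, lastName

With a record whose firstName is John and lastName is Smith, the two field values are concatenated to form the row key; without the PK clause the key would instead be built from the topic name, partition and offset.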

Error Policies

Landoop sink connectors support error policies. These error policies allow you to control the behavior of the sink if it encounters an error when writing records to the target system. Since Kafka retains the records, subject to the configured retention policy of the topic, the sink can ignore the error, fail the connector or attempt redelivery.

Throw

Any error on write to the target system will be propagated up and processing is stopped. This is the default behavior.

Noop

Any error on write to the target database is ignored and processing continues.

Warning

This can lead to missed errors if you don’t have adequate monitoring. Data is not lost as it’s still in Kafka, subject to Kafka’s retention policy. The Sink currently does not distinguish between, for example, integrity constraint violations and other exceptions thrown by any drivers or the target system.

Retry

Any error on write to the target system causes the RetryIterable exception to be thrown. This causes the Kafka Connect framework to pause and replay the message; offsets are not committed. For example, if the table is offline the write will fail and the message can be replayed. With the Retry policy, the issue can be fixed without stopping the sink.
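
As a sketch, a sink configured for the Retry policy combines the retry-related options from the Optional Configurations section below, for example:

connect.hbase.error.policy=RETRY
connect.hbase.max.retries=10
connect.hbase.retry.interval=60000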

Lenses QuickStart

The easiest way to try out this connector is with Lenses Box, the pre-configured Docker image that comes with this connector pre-installed. Navigate to Connectors –> New Connector –> Sink –> HBase and paste your configuration.

(Screenshot: creating the HBase Sink connector in Lenses)

HBase Setup

Download and extract HBase:

wget https://www.apache.org/dist/hbase/1.2.2/hbase-1.2.2-bin.tar.gz
mkdir hbase
tar -xvf hbase-1.2.2-bin.tar.gz -C hbase --strip-components=1
cd hbase

Edit conf/hbase-site.xml and add the following content:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
 <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>file:///tmp/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/tmp/zookeeper</value>
  </property>
</configuration>

The hbase.cluster.distributed property is required because otherwise, when you start HBase, it will try to start its own Zookeeper; in this case we want it to use the same Zookeeper our Kafka cluster is using.

Now start HBase and check the logs to ensure it’s up:

bin/start-hbase.sh
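
For example, jps should list an HMaster process, and you can follow the master log (the exact file name depends on your user and hostname):

jps
tail -f logs/hbase-*-master-*.log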

HBase Table

The Sink expects the target table to already exist in HBase. Go to your HBase install location and create the test table in the HBase shell.

bin/hbase shell
hbase(main):001:0> create 'person',{NAME=>'d', VERSIONS=>1}

hbase(main):001:0> list
person
1 row(s) in 0.9530 seconds

hbase(main):002:0> describe 'person'
DESCRIPTION
 'person', {NAME => 'd', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'false', DATA_BLOCK_ENCOD true
 ING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION
 _SCOPE => '0'}
1 row(s) in 0.0810 seconds

Installing the Connector

Connect, in production, should be run in distributed mode.

  1. Install and configure a Kafka Connect cluster
  2. Create a folder on each server called plugins/lib
  3. Copy into the above folder the required connector jars from the stream reactor download
  4. Edit connect-avro-distributed.properties in the etc/schema-registry folder and uncomment the plugin.path option. Set it to the root directory, i.e. plugins, into which you deployed the Stream Reactor connector jars in steps 2 and 3 (see the example after this list).
  5. Ensure the hbase-site.xml is on the classpath of the connector by copying the file into plugins/lib
  6. Start Connect, bin/connect-distributed etc/schema-registry/connect-avro-distributed.properties
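
For example, if the plugins folder from step 2 lives under /opt (a hypothetical location), the relevant line in connect-avro-distributed.properties would be:

plugin.path=/opt/plugins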

Connect Workers are long-running processes, so set up an init.d or systemd service accordingly.

Starting the Connector (Distributed)

Download and install Stream Reactor. Follow the instructions here if you haven’t already done so. All paths in the quickstart are based on the location where you installed Stream Reactor.

Once Connect has started we can use the kafka-connect-tools CLI to post our HBase properties file to the cluster. For the CLI to work, including when using the Docker images, you will have to set the following environment variable to point to the Kafka Connect REST API.

export KAFKA_CONNECT_REST="http://myserver:myport"
➜  bin/connect-cli create hbase-sink < conf/hbase-sink.properties

name=person-hbase
connector.class=com.datamountaineer.streamreactor.connect.hbase.HbaseSinkConnector
tasks.max=1
topics=hbase-topic
connect.hbase.column.family=d
connect.hbase.kcql=INSERT INTO person SELECT * FROM hbase-topic PK firstName, lastName
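
Alternatively, the same configuration can be posted directly to the Kafka Connect REST API. A minimal sketch, assuming the worker's REST API listens on localhost:8083:

curl -X POST -H "Content-Type: application/json" http://localhost:8083/connectors \
  -d '{
    "name": "hbase-sink",
    "config": {
      "connector.class": "com.datamountaineer.streamreactor.connect.hbase.HbaseSinkConnector",
      "tasks.max": "1",
      "topics": "hbase-topic",
      "connect.hbase.column.family": "d",
      "connect.hbase.kcql": "INSERT INTO person SELECT * FROM hbase-topic PK firstName, lastName"
    }
  }'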

If you switch back to the terminal you started Kafka Connect in, you should see the HBase Sink being accepted and the task starting.

We can use the CLI to check if the connector is up, but you should be able to see this in the logs as well.

#check for running connectors with the CLI
➜ bin/connect-cli ps
hbase-sink
INFO
    __                    __
   / /   ____ _____  ____/ /___  ____  ____
  / /   / __ `/ __ \/ __  / __ \/ __ \/ __ \
 / /___/ /_/ / / / / /_/ / /_/ / /_/ / /_/ /
/_____/\__,_/_/ /_/\__,_/\____/\____/ .___/
                                   /_/
      / / / / __ )____ _________ / ___/(_)___  / /__
     / /_/ / __  / __ `/ ___/ _ \\__ \/ / __ \/ //_/
    / __  / /_/ / /_/ (__  )  __/__/ / / / / / ,<
   /_/ /_/_____/\__,_/____/\___/____/_/_/ /_/_/|_|
By Stefan Bocutiu (com.datamountaineer.streamreactor.connect.hbase.HbaseSinkTask:44)

Test Records

Tip

If your input topic doesn’t match the target structure, use Lenses SQL to transform the input in real time; no Java or Scala required!

Now we need to put some records into the hbase-topic topic. We can use the kafka-avro-console-producer to do this. Start the producer and pass in a schema to register with the Schema Registry. The schema has a firstName field of type string, a lastName field of type string, an age field of type int and a salary field of type double.

bin/kafka-avro-console-producer \
  --broker-list localhost:9092 --topic hbase-topic \
  --property value.schema='{"type":"record","name":"User",
  "fields":[{"name":"firstName","type":"string"},{"name":"lastName","type":"string"},{"name":"age","type":"int"},
  {"name":"salary","type":"double"}]}'

Now the producer is waiting for input. Paste in the following:

{"firstName": "John", "lastName": "Smith", "age":30, "salary": 4830}
{"firstName": "Anna", "lastName": "Jones", "age":28, "salary": 5430}

Check for records in HBase

Now check the logs of the connector; you should see this:

INFO Sink task org.apache.kafka.connect.runtime.WorkerSinkTask@48ffb4dc finished initialization and start (org.apache.kafka.connect.runtime.WorkerSinkTask:155)
INFO Writing 2 rows to Hbase... (com.datamountaineer.streamreactor.connect.hbase.writers.HbaseWriter:83)

In HBase:

hbase(main):004:0* scan 'person'
ROW                                                  COLUMN+CELL
 Anna\x0AJones                                       column=d:age, timestamp=1463056888641, value=\x00\x00\x00\x1C
 Anna\x0AJones                                       column=d:firstName, timestamp=1463056888641, value=Anna
 Anna\x0AJones                                       column=d:salary, timestamp=1463056888641, value=@\xB56\x00\x00\x00\x00\x00
 Anna\x0AJones                                       column=d:lastName, timestamp=1463056888641, value=Jones
 John\x0ASmith                                       column=d:age, timestamp=1463056693877, value=\x00\x00\x00\x1E
 John\x0ASmith                                       column=d:firstName, timestamp=1463056693877, value=John
 John\x0ASmith                                       column=d:salary, timestamp=1463056693877, value=@\xB2\xDE\x00\x00\x00\x00\x00
 John\x0ASmith                                       column=d:lastName, timestamp=1463056693877, value=Smith
2 row(s) in 0.0260 seconds

Now stop the connector.
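
For example, the connector can be removed via the Kafka Connect REST API (again assuming localhost:8083):

curl -X DELETE http://localhost:8083/connectors/hbase-sink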

Configurations

The Kafka Connect framework requires the following in addition to any connector-specific configurations:

  • name: Name of the connector (string). This must be unique across the Connect cluster.
  • topics: The topics to sink (string). The connector will check that this matches the KCQL statement.
  • tasks.max: The number of tasks to scale output (int), e.g. 1.
  • connector.class: Name of the connector class (string), i.e. com.datamountaineer.streamreactor.connect.hbase.HbaseSinkConnector.

Connector Configurations

  • connect.hbase.column.family: The HBase column family (string).
  • connect.hbase.kcql: Kafka Connect Query Language expression (string). Allows for expressive topic-to-table routing, field selection and renaming. Fields to be used as the row key can be set by specifying PK.

Optional Configurations

  • connect.hbase.error.policy: Specifies the action to take if an error occurs while inserting the data (string, default THROW). There are three available options: NOOP (the error is swallowed), THROW (the error is allowed to propagate) and RETRY (the Kafka message is redelivered up to a maximum number of times specified by the connect.hbase.max.retries option).
  • connect.hbase.max.retries: The maximum number of times a message is retried (string, default 10). Only valid when connect.hbase.error.policy is set to RETRY.
  • connect.hbase.retry.interval: The interval, in milliseconds, between retries when connect.hbase.error.policy is set to RETRY (string, default 60000).
  • connect.progress.enabled: Enables output of how many records have been processed (boolean, default false).

Example

name=hbase-test
connector.class=com.datamountaineer.streamreactor.connect.hbase.HbaseSinkConnector
tasks.max=1
topics=TOPIC1
connect.hbase.column.family=d
connect.hbase.kcql=INSERT INTO person SELECT * FROM TOPIC1

Schema Evolution

Upstream changes to schemas are handled by the Schema Registry, which will validate the addition and removal of fields, data type changes and whether defaults are set. The Schema Registry enforces Avro schema evolution rules. More information can be found here.

The HBase Sink will automatically write to the HBase table if new fields are added to the source topic; if fields are removed, the Kafka Connect framework will return the default value for the field, depending on the compatibility settings of the Schema Registry. This value will be put into the HBase column family cell based on the connect.hbase.kcql value.
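
For illustration only, a backward-compatible evolution of the quickstart's User schema could add a field with a default value; the title field below is hypothetical:

{"type":"record","name":"User",
 "fields":[{"name":"firstName","type":"string"},{"name":"lastName","type":"string"},
 {"name":"age","type":"int"},{"name":"salary","type":"double"},
 {"name":"title","type":"string","default":"unknown"}]}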

Deployment Guidelines

Ensure the hbase-site.xml is on the classpath of the connector.
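
For example, assuming the plugins/lib folder from the installation steps above and an HBase install under /path/to/hbase:

cp /path/to/hbase/conf/hbase-site.xml plugins/lib/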

Kubernetes

Helm charts are provided in our repo; add the repo to your Helm instance and install. We recommend using the Landscaper to manage Helm values, since typically each connector instance has its own deployment.

Add the Helm charts to your Helm instance:

helm repo add landoop https://lensesio.github.io/kafka-helm-charts/
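
After adding the repo you can refresh it and list the available charts (Helm 3 syntax shown):

helm repo update
helm search repo landoop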

Troubleshooting

Please review the FAQs and join our Slack channel.