Hive Sink

Download connector Hive Connector 1.2 for Kafka

This Hive Sink allows you to write data from Kafka to Hive. The connector takes the value from the Kafka Connect SinkRecords and inserts a new entry into Hive.

Prerequisites

  • Apache Kafka 0.11.x or above
  • Kafka Connect 0.11.x or above
  • Hive
  • Java 1.8

Features

  1. KCQL routing and querying - Kafka topic payload field selection is supported, allowing you to choose a selection of fields, or all fields, to be written to Hive
  2. Error policies for handling failures
  3. Payload support for Schema.Struct with a Struct payload, Schema.String with a JSON payload, and a JSON payload with no schema

KCQL Support

KCQL (Kafka Connect Query Language) is a SQL-like syntax allowing streamlined configuration of Kafka Connect sinks. More details about KCQL can be found here.

INSERT INTO hive_table SELECT { FIELD, ... } FROM kafka_topic_name

Tip

You can specify multiple KCQL statements separated by ; to have the connector sink multiple topics.
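
For example, two statements in the connect.hive.kcql property sink two topics with a single connector instance (the table and topic names here are illustrative):

connect.hive.kcql=INSERT INTO hive_tableA SELECT * FROM kafka_topicA;INSERT INTO hive_tableB SELECT * FROM kafka_topicB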

The Hive sink supports KCQL:

  1. Field selection
  2. Target Hive table selection
  3. KCQL options on the KCQL statement

Example:

-- Insert into hive_tableA all fields from kafka_topicA
INSERT INTO hive_tableA SELECT * FROM kafka_topicA

KCQL Optional Configurations

Below is a list of the KCQL optional configurations that can be placed inside the connect.hive.kcql configuration parameter.

  • WITH_FLUSH_INTERVAL (long): The time interval in milliseconds at which file commits are invoked.
  • WITH_FLUSH_SIZE (long): The number of records before invoking file commits.
  • WITH_FLUSH_COUNT (long): The count of records before invoking file commits.
  • WITH_SCHEMA_EVOLUTION (string, default MATCH): Controls whether the Hive connector tries to add or remove fields to make the Hive schema match the incoming schema of the Kafka records.
  • WITH_TABLE_LOCATION (string): Manually sets the location of the table in Hive. If not specified, the location of your database as specified in the Hive configuration is used.
  • WITH_OVERWRITE (boolean): Overwrite existing records.
  • PARTITIONBY (List<String>): Allows creating Hive partitions. If specified, the values for the partitions are taken from the column(s) listed.
  • WITH_PARTITIONING (string, default STRICT): Two modes, DYNAMIC or STRICT. In DYNAMIC mode the connector will create any partitions specified in PARTITIONBY; in STRICT mode the partitions must already have been created.
  • AUTOCREATE (boolean): Creates the table if it does not exist.

Examples:

INSERT INTO hive_tableA SELECT * FROM kafka_topicA WITH_FLUSH_INTERVAL = 10
INSERT INTO hive_tableA SELECT col1,col2 FROM kafka_topicA WITH_SCHEMA_EVOLUTION = ADD
INSERT INTO hive_tableA SELECT col1, col2 FROM kafka_topicA WITH_TABLE_LOCATION = "/magic/location/on/my/ssd"
INSERT INTO hive_tableA SELECT col1, col2 FROM kafka_topicA WITH_OVERWRITE
INSERT INTO hive_tableA SELECT col1, col2 FROM kafka_topicA PARTITIONBY col1, col2
INSERT INTO hive_tableA SELECT col1, col2 FROM kafka_topicA WITH_PARTITIONING = DYNAMIC
INSERT INTO hive_tableA SELECT f1 as col1, f2 as col2 FROM kafka_topicA AUTOCREATE

Payload Support

Schema.Struct and a Struct Payload

If you follow best practice while producing the events, each message should carry its schema information. The best option is to send Avro. Your connector configuration options include:

key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081

This requires the SchemaRegistry.

Note

This needs to be done in the connect worker properties if using Kafka versions prior to 0.11

Schema.String and a JSON Payload

Sometimes the producer would find it easier to just send a message with Schema.String and a JSON string. In this case your connector configuration should be set to value.converter=org.apache.kafka.connect.json.JsonConverter. This doesn’t require the SchemaRegistry.

key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter

Note

This needs to be done in the connect worker properties if using Kafka versions prior to 0.11

No schema and a JSON Payload

There are many existing systems which publish JSON over Kafka, and bringing them in line with best practices is quite a challenge, hence we added this support. To enable it, change the converters in the connector configuration.

key.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=false
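
With this configuration you can publish plain JSON records, for example with the standard console producer (a sketch; the broker address and topic name are illustrative):

bin/kafka-console-producer \
 --broker-list localhost:9092 --topic hive_topic

{"city":"Philadelphia","state":"PA","population":1568000,"country":"USA"}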

Note

This needs to be done in the connect worker properties if using Kafka versions prior to 0.11

Error Policies

The sink has three error policies that determine how failed writes to the target database are handled. These error policies allow you to control the behaviour of the sink if it encounters an error when writing records to the target system. Since Kafka retains the records, subject to the configured retention policy of the topic, the sink can ignore the error, fail the connector or attempt redelivery.

Throw

Any error on write to the target system will be propagated up and processing is stopped. This is the default behavior.

Noop

Any error on write to the target database is ignored and processing continues.

Warning

This can lead to missed errors if you don’t have adequate monitoring. Data is not lost as it’s still in Kafka, subject to Kafka’s retention policy. The sink currently does not distinguish between integrity constraint violations and other exceptions thrown by any drivers or target systems.

Retry

Any error on write to the target system causes the RetryIterable exception to be thrown. This causes the Kafka Connect framework to pause and replay the message. Offsets are not committed. For example, if the target table is unavailable a write failure occurs; once it is available again the message can be replayed. With the Retry policy, the issue can be fixed without stopping the sink.
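
The policy is set through the connector properties. The keys below are not listed in the configuration tables further down, so treat this as a sketch assuming the usual Stream Reactor naming convention, and check the connector's configuration reference for the exact names:

# hypothetical property names following the Stream Reactor convention
connect.hive.error.policy=RETRY
connect.hive.max.retries=20
connect.hive.retry.interval=60000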

Lenses QuickStart

The easiest way to try this out is using Lenses Box, the pre-configured Docker image that comes with this connector pre-installed. Go to Connectors –> New Connector –> Hive and paste your configuration.

[Image: creating the Hive sink connector in Lenses]

Sink Connector QuickStart

Test data

Once you have installed and started Hive, create a table to write records to. This snippet creates a database called hive_connect and a table called cities to hold city data.

Start the Hive shell and execute

hive> create database hive_connect;
hive> use hive_connect;
hive> create table cities (city string, state string, population int, country string) stored as parquet;

Installing the Connector

Connect, in production, should be run in distributed mode.

  1. Install and configure a Kafka Connect cluster
  2. Create a folder on each server called plugins/lib
  3. Copy into the above folder the required connector jars from the stream reactor download
  4. Edit connect-avro-distributed.properties in the etc/schema-registry folder and uncomment the plugin.path option. Set it to the root directory, i.e. plugins, created in step 2
  5. Start Connect: bin/connect-distributed etc/schema-registry/connect-avro-distributed.properties

Connect workers are long-running processes, so set up an init.d or systemctl service accordingly.
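
As a rough sketch of the steps above (the plugin directory and jar locations are illustrative; adjust them to your installation):

# create the plugin folder and copy in the connector jars from the Stream Reactor download
mkdir -p plugins/lib
cp /path/to/stream-reactor/kafka-connect-hive/*.jar plugins/lib/

# in etc/schema-registry/connect-avro-distributed.properties uncomment and set:
# plugin.path=/path/to/plugins

# start the worker
bin/connect-distributed etc/schema-registry/connect-avro-distributed.properties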

Starting the Connector

Download and install Stream Reactor to your Kafka Connect cluster. Follow the instructions here if you haven’t already done so. All paths in the quickstart are based on the location where you installed Stream Reactor.

Once Connect has started we can now use the kafka-connect-tools CLI to post in our distributed properties file for Hive. If you are using the Docker images you will have to set the following environment variable too for the CLI to connect to the Kafka Connect REST API.

export KAFKA_CONNECT_REST="http://myserver:myport"
➜  bin/connect-cli create hive-sink-example < conf/hive-sink.properties

name=hive-sink-example
connector.class=com.landoop.streamreactor.connect.hive.sink.HiveSinkConnector
tasks.max=1
topics=hive_topic
key.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=false
connect.hive.kcql=insert into cities select * from hive_topic AUTOCREATE PARTITION_BY state WITH_FLUSH_INTERVAL = 10
connect.hive.database.name=hive_connect
connect.hive.hive.metastore=thrift
connect.hive.hive.metastore.uris=thrift://hive-metastore:9083
connect.hive.fs.defaultFS=hdfs://namenode:8020
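
Alternatively, the same configuration can be posted straight to the Kafka Connect REST API without the CLI (shown here against the default REST port 8083; adjust the host and port for your cluster):

curl -X POST -H "Content-Type: application/json" http://localhost:8083/connectors -d '{
  "name": "hive-sink-example",
  "config": {
    "connector.class": "com.landoop.streamreactor.connect.hive.sink.HiveSinkConnector",
    "tasks.max": "1",
    "topics": "hive_topic",
    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "key.converter.schemas.enable": "false",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter.schemas.enable": "false",
    "connect.hive.kcql": "insert into cities select * from hive_topic AUTOCREATE PARTITION_BY state WITH_FLUSH_INTERVAL = 10",
    "connect.hive.database.name": "hive_connect",
    "connect.hive.hive.metastore": "thrift",
    "connect.hive.hive.metastore.uris": "thrift://hive-metastore:9083",
    "connect.hive.fs.defaultFS": "hdfs://namenode:8020"
  }
}'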

If you switch back to the terminal you started Kafka Connect in, you should see the Hive sink being accepted and the task starting.

We can use the CLI to check if the connector is up, but you should be able to see this in the logs as well.

#check for running connectors with the CLI
➜ bin/connect-cli ps
hive-sink
INFO
 __         ______     __   __     _____     ______     ______     ______      __  __     __     __   __   ______
/\ \       /\  __ \   /\ "-.\ \   /\  __-.  /\  __ \   /\  __ \   /\  == \    /\ \_\ \   /\ \   /\ \ / /  /\  ___\
\ \ \____  \ \  __ \  \ \ \-.  \  \ \ \/\ \ \ \ \/\ \  \ \ \/\ \  \ \  _-/    \ \  __ \  \ \ \  \ \ \'/   \ \  __\
 \ \_____\  \ \_\ \_\  \ \_\\"\_\  \ \____-  \ \_____\  \ \_____\  \ \_\       \ \_\ \_\  \ \_\  \ \__|    \ \_____\
  \/_____/   \/_/\/_/   \/_/ \/_/   \/____/   \/_____/   \/_____/   \/_/        \/_/\/_/   \/_/   \/_/      \/_____/
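
You can also query the connector status directly from the Kafka Connect REST API (again assuming the default REST port 8083):

curl http://localhost:8083/connectors/hive-sink-example/status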

Test Records

Tip

If your input topic doesn’t match the target, use Lenses SQL to transform the input in real time; no Java or Scala required!

Now we need to put some records into the hive_topic topic. We can use the kafka-avro-console-producer to do this. Start the producer and pass in a schema to register in the Schema Registry. The schema matches the table created earlier.

bin/kafka-avro-console-producer \
 --broker-list localhost:9092 --topic hive_topic \
 --property value.schema='{"type":"record","name":"myrecord","fields":[{"name":"city","type":"string"},{"name":"state","type":"string"},{"name":"population","type":"int"},{"name":"country","type":"string"}]}'

Now the producer is waiting for input. Paste in the following (each on a line separately):

{"city":{"string":"Philadelphia"},"state":{"string":"PA"},"population":{"int":1568000},"country":{"string":"USA"}}
{"city":{"string":"Chicago"},"state":{"string":"IL"},"population":{"int":2705000},"country":{"string":"USA"}}
{"city":{"string":"New York"},"state":{"string":"NY"},"population":{"int":8538000},"country":{"string":"USA"}}

Let’s check Hive.

select * from cities;

New York NY 8538000 USA
Chicago IL 2705000 USA
Philadelphia PA 1568000 USA

(3 rows)

Configurations

The Kafka Connect framework requires the following in addition to any connector-specific configurations:

  • name (string): Name of the connector. This must be unique across the Connect cluster.
  • topics (string): The topics to sink. The connector will check this matches the KCQL statement.
  • tasks.max (int, default 1): The number of tasks to scale output.
  • connector.class (string): Name of the connector class, com.landoop.streamreactor.connect.hive.sink.HiveSinkConnector.

Connector Configurations

  • connect.hive.kcql (string): Kafka Connect Query Language expression.
  • connect.hive.database.name (string): Sets the database name.
  • connect.hive.hive.metastore (string): Protocol used by the Hive metastore.
  • connect.hive.hive.metastore.uris (string): URI to point to the metastore.
  • connect.hive.fs.defaultFS (string): HDFS filesystem default URI.

Example

name=hive-sink-example
connector.class=com.landoop.streamreactor.connect.hive.sink.HiveSinkConnector
tasks.max=1
topics=hive_topic
key.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=false
connect.hive.kcql=insert into cities_copy select * from hive_topic AUTOCREATE PARTITION_BY state WITH_FLUSH_INTERVAL = 10
connect.hive.database.name=hive_connect
connect.hive.hive.metastore=thrift
connect.hive.hive.metastore.uris=thrift://hive-metastore:9083
connect.hive.fs.defaultFS=hdfs://namenode:8020

Schema Evolution

Upstream changes to schemas are handled by the Schema Registry, which will validate the addition and removal of fields, data type changes and whether defaults are set. The Schema Registry enforces Avro schema evolution rules. More information can be found here.
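
For example, adding a new optional field with a default is a backward-compatible change under the Avro evolution rules. A hypothetical evolution of the quickstart schema might look like this (the timezone field is purely illustrative):

{
  "type": "record",
  "name": "myrecord",
  "fields": [
    {"name": "city", "type": "string"},
    {"name": "state", "type": "string"},
    {"name": "population", "type": "int"},
    {"name": "country", "type": "string"},
    {"name": "timezone", "type": ["null", "string"], "default": null}
  ]
}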

Kubernetes

Helm charts are provided in our repo; add the repo to your Helm instance and install. We recommend using the Landscaper to manage Helm values, since typically each connector instance has its own deployment.

Add the Helm charts to your Helm instance:

helm repo add landoop https://landoop.github.io/kafka-helm-charts/
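
A minimal install sketch, assuming the repo provides a chart for this connector; the chart name and values file are illustrative, and the exact helm install syntax depends on your Helm version:

# refresh the chart index after adding the repo
helm repo update
# install the (assumed) Hive sink chart with your own values file (Helm 3 syntax)
helm install hive-sink landoop/kafka-connect-hive-sink -f hive-sink-values.yaml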

Troubleshooting

Please review the FAQs and join our Slack channel.