# Azure Data Lake Gen2

This Kafka Connect sink connector facilitates the seamless transfer of records from Kafka to Azure Data Lake Gen2 storage. It offers robust support for various data formats, including AVRO, Parquet, JSON, CSV, and Text, making it a versatile choice for data storage. Additionally, it ensures the reliability of data transfer with built-in support for exactly-once semantics.

## Connector Class

```
io.lenses.streamreactor.connect.datalake.sink.DatalakeSinkConnector
```

## Example

{% hint style="success" %}
For more examples see the [tutorials](/latest/connectors/tutorials.md).
{% endhint %}

{% code fullWidth="true" %}

```properties
connector.class=io.lenses.streamreactor.connect.datalake.sink.DatalakeSinkConnector
connect.datalake.kcql=insert into lensesio:demo select * from demo PARTITIONBY _value.metadata_id, _value.customer_id, _header.ts, _header.wallclock STOREAS `JSON` PROPERTIES('flush.interval'=600, 'flush.size'=1000000, 'flush.count'=5000)
topics=demo
name=demo
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter=org.apache.kafka.connect.storage.StringConverter
transforms=insertFormattedTs,insertWallclock
transforms.insertFormattedTs.type=io.lenses.connect.smt.header.TimestampConverter
transforms.insertFormattedTs.header.name=ts
transforms.insertFormattedTs.field=timestamp
transforms.insertFormattedTs.target.type=string
transforms.insertFormattedTs.format.to.pattern=yyyy-MM-dd-HH
transforms.insertWallclock.type=io.lenses.connect.smt.header.InsertWallclock
transforms.insertWallclock.header.name=wallclock
transforms.insertWallclock.value.type=format
transforms.insertWallclock.format=yyyy-MM-dd-HH
```

{% endcode %}

## KCQL Support

{% hint style="success" %}
You can specify multiple KCQL statements separated by **;** to have a connector sink multiple topics. The connector properties **topics** or **topics.regex** are required to be set to a value that matches the KCQL statements.
{% endhint %}
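For instance, here is a hedged sketch of a single connector sinking two topics via two KCQL statements separated by `;` (the container, topic, and format choices are illustrative):

```properties
topics=orders,payments
connect.datalake.kcql=INSERT INTO lensesio:orders SELECT * FROM orders STOREAS `AVRO`;INSERT INTO lensesio:payments SELECT * FROM payments STOREAS `AVRO`
```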

The connector uses KCQL to map topics to Datalake buckets and paths. The full KCQL syntax is:

```sql
INSERT INTO bucketAddress[:pathPrefix]
SELECT *
FROM kafka-topic
[[PARTITIONBY (partition[, partition] ...)] | NOPARTITION]
[STOREAS storage_format]
[PROPERTIES(
  'property.1'=x,
  'property.2'=x
)]
```

Please note that you can employ escaping within KCQL for the **INSERT INTO**, **SELECT \* FROM**, and **PARTITIONBY** clauses when necessary. For example, an incoming Kafka message stored as JSON may contain field names with a `.`:

```json
{
  ...
  "a.b": "value",
  ...
}
```

In this case, you can use the following KCQL statement:

```sql
INSERT INTO `container-name`:`prefix` SELECT * FROM `kafka-topic` PARTITIONBY `a.b`
```

### Target Bucket and Path <a href="#target-bucket-and-path" id="target-bucket-and-path"></a>

The target bucket and path are specified in the INSERT INTO clause. The path is optional and if not specified, the connector will write to the root of the bucket and append the topic name to the path.

Here are a few examples:

```sql
INSERT INTO testcontainer:pathToWriteTo SELECT * FROM topicA;
INSERT INTO testcontainer SELECT * FROM topicA;
INSERT INTO testcontainer:path/To/Write/To SELECT * FROM topicA PARTITIONBY fieldA;
```

### SQL Projection <a href="#sql-projection" id="sql-projection"></a>

Currently, the connector does not offer support for SQL projection; consequently, anything other than a SELECT \* query is disregarded. The connector will faithfully write all fields from Kafka exactly as they are.

## Source Topic <a href="#source-topic" id="source-topic"></a>

To avoid runtime errors, make sure the *topics* or *topics.regex* setting matches your KCQL statements. If the connector receives data for a topic without matching KCQL, it will throw an error. When using a regex to select topics, follow this KCQL pattern:

```
topics.regex = ^sensor_data_\d+$
connect.datalake.kcql=INSERT INTO $target SELECT * FROM `*` ....
```

In this case the topic name will be appended to the $target destination.

### KCQL Properties <a href="#object-key" id="object-key"></a>

The **PROPERTIES** clause is optional and adds a layer of configuration to the connector. It enhances versatility by permitting the application of multiple configurations (delimited by ‘,’). The following properties are supported:

| Name                            | Description                                                                                                             | Type                    | Available Values        | Default Value                                                        |
| ------------------------------- | ----------------------------------------------------------------------------------------------------------------------- | ----------------------- | ----------------------- | -------------------------------------------------------------------- |
| padding.type                    | Specifies the type of padding to be applied.                                                                            | String                  | LeftPad, RightPad, NoOp | LeftPad                                                              |
| padding.char                    | Defines the character used for padding.                                                                                 | Char                    |                         | ‘0’                                                                  |
| padding.length.partition        | Sets the padding length for the partition.                                                                              | Int                     |                         | 0                                                                    |
| padding.length.offset           | Sets the padding length for the offset.                                                                                 | Int                     |                         | 12                                                                   |
| partition.include.keys          | Specifies whether partition keys are included.                                                                          | Boolean                 |                         | <p>false<br><strong>Default (Custom Partitioning):</strong> true</p> |
| store.envelope                  | Indicates whether to store the entire Kafka message                                                                     | Boolean                 |                         |                                                                      |
| store.envelope.fields.key       | Indicates whether to store the envelope’s key.                                                                          | Boolean                 |                         |                                                                      |
| store.envelope.fields.headers   | Indicates whether to store the envelope’s headers.                                                                      | Boolean                 |                         |                                                                      |
| store.envelope.fields.value     | Indicates whether to store the envelope’s value.                                                                        | Boolean                 |                         |                                                                      |
| store.envelope.fields.metadata  | Indicates whether to store the envelope’s metadata.                                                                     | Boolean                 |                         |                                                                      |
| flush.size                      | Specifies the size (in bytes) for the flush operation.                                                                  | Long                    |                         | 500000000 (500MB)                                                    |
| flush.count                     | Specifies the number of records for the flush operation.                                                                | Int                     |                         | 50000                                                                |
| flush.interval                  | Specifies the interval (in seconds) for the flush operation.                                                            | Long                    |                         | 3600 (1 hour)                                                        |
| key.suffix                      | When specified it appends the given value to the resulting object key before the "extension" (avro, json, etc) is added | String                  |                         | \<empty>                                                             |

The sink connector optimizes performance by padding the output files, a practice that proves beneficial when using the Datalake Source connector to restore data. This file padding ensures that files are ordered lexicographically, allowing the Datalake Source connector to skip the need for reading, sorting, and processing all files, thereby enhancing efficiency.
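For illustration, the sketch below overrides the default padding via the `PROPERTIES` clause (the container, topic, and values are placeholders, and the exact quoting of property values may vary with your KCQL version):

```properties
connect.datalake.kcql=INSERT INTO lensesio:demo SELECT * FROM demo STOREAS `AVRO` PROPERTIES('padding.type'=LeftPad, 'padding.length.offset'=12, 'flush.count'=5000)
```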

## Partitioning & File names <a href="#object-key" id="object-key"></a>

The object key serves as the filename used to store data in Datalake. There are two options for configuring the object key:

* **Default**: The object key is automatically generated by the connector and follows the Kafka topic-partition structure. The format is `$container/[$prefix]/$topic/$partition/$offset.extension`. The extension is determined by the chosen storage format.
* **Custom**: The object key is driven by the `PARTITIONBY` clause. The format is either `$container/[$prefix]/$topic/customKey1=customValue1/customKey2=customValue2/topic(partition_offset).extension` (naming style mimicking Hive-like data partitioning) or `$container/[$prefix]/customValue/topic(partition_offset).ext`. The extension is determined by the selected storage format.

{% hint style="warning" %}
The Connector automatically adds the topic name to the partition. There is no need to add it to the partition clause. If you want to explicitly add the topic or partition you can do so by using \_topic and \_partition.

The partition clause works on Header, Key and Values fields of the Kafka message.
{% endhint %}

Custom keys and values can be extracted from the Kafka message key, message value, or message headers, as long as the headers are of types that can be converted to strings. There is no fixed limit to the number of elements that can form the object key, but you should be aware of Azure Datalake key length restrictions.

To extract fields from the message values, simply use the field names in the **`PARTITIONBY`** clause. For example:

```sql
PARTITIONBY fieldA, fieldB
```

However, note that the message fields must be of primitive types (e.g., string, int, long) to be used for partitioning.

You can also use the entire message key as long as it can be coerced into a primitive type:

```sql
PARTITIONBY _key
```

In cases where the Kafka message Key is not a primitive but a complex object, you can use individual fields within the message Key to create the Datalake object key name:

```sql
PARTITIONBY _key.fieldA, _key.fieldB
```

Kafka message headers can also be used in the Datalake object key definition, provided the header values are of primitive types easily convertible to strings:

```sql
PARTITIONBY _header.<header_key1>[, _header.<header_key2>]
```

Customizing the object key can leverage various components of the Kafka message. For example:

```sql
PARTITIONBY fieldA, _key.fieldB, _header.fieldC
```

This flexibility allows you to tailor the object key to your specific needs, extracting meaningful information from Kafka messages to structure Datalake object keys effectively.

To enable Athena-like (key=value) partitioning, use the following syntax:

```sql
INSERT INTO $container[:$prefix]
SELECT * FROM $topic
PARTITIONBY fieldA, _key.fieldB, _header.fieldC
STOREAS `AVRO`
PROPERTIES (
    'partition.include.keys'=true
)
```

## Rolling Windows <a href="#rolling-window" id="rolling-window"></a>

Storing data in Azure Datalake and partitioning it by time is a common practice in data management. For instance, you may want to organize your Datalake data in hourly intervals. This partitioning can be seamlessly achieved using the **`PARTITIONBY`** clause in combination with specifying the relevant time field. However, it’s worth noting that the time field typically doesn’t adjust automatically.

To address this, we offer a Kafka Connect Single Message Transformer (SMT) designed to streamline this process. You can find the transformer plugin and documentation [here](/latest/connectors/single-message-transforms.md).

Let’s consider an example where you need the object key to include the wallclock time (the time when the message was processed) and create an hourly window based on a field called `timestamp`. Here’s the connector configuration to achieve this:

{% code fullWidth="true" %}

```properties
connector.class=io.lenses.streamreactor.connect.datalake.sink.DatalakeSinkConnector
connect.datalake.kcql=insert into lensesio:demo select * from demo PARTITIONBY _value.metadata_id, _value.customer_id, _header.ts, _header.wallclock STOREAS `JSON` PROPERTIES('flush.interval'=30, 'flush.size'=1000000, 'flush.count'=5000)
topics=demo
name=demo
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter=org.apache.kafka.connect.storage.StringConverter
transforms=insertFormattedTs,insertWallclock
transforms.insertFormattedTs.type=io.lenses.connect.smt.header.TimestampConverter
transforms.insertFormattedTs.header.name=ts
transforms.insertFormattedTs.field=timestamp
transforms.insertFormattedTs.target.type=string
transforms.insertFormattedTs.format.to.pattern=yyyy-MM-dd-HH
transforms.insertWallclock.type=io.lenses.connect.smt.header.InsertWallclock
transforms.insertWallclock.header.name=wallclock
transforms.insertWallclock.value.type=format
transforms.insertWallclock.format=yyyy-MM-dd-HH
```

{% endcode %}

In this example, the incoming Kafka message’s Value content includes a field called timestamp, represented as a long value indicating the epoch time in milliseconds. The TimestampConverter SMT will expertly convert this into a string value according to the format specified in the format.to.pattern property. Additionally, the insertWallclock SMT will incorporate the current wallclock time in the format you specify in the format property.

The `PARTITIONBY` clause then leverages both the timestamp field and the wallclock header to craft the object key, providing you with precise control over data partitioning.

## Data Storage Format <a href="#data-storage-format" id="data-storage-format"></a>

While the `STOREAS` clause is optional, it plays a pivotal role in determining the storage format within Azure Datalake. It’s crucial to understand that this format is entirely independent of the data format stored in Kafka. The connector maintains its neutrality towards the storage format at the topic level and relies on the `key.converter` and `value.converter` settings to interpret the data.

Supported storage formats encompass:

* AVRO
* Parquet
* JSON
* CSV (including headers)
* Text
* BYTES

Opting for BYTES ensures that each record is stored in its own separate file. This feature proves particularly valuable for scenarios involving the storage of images or other binary data in Datalake. For cases where you prefer to consolidate multiple records into a single binary file, AVRO or Parquet are the recommended choices.
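For example, here is a hedged sketch of storing binary payloads one object per record, assuming the topic carries raw bytes (container and topic names are illustrative):

```properties
connect.datalake.kcql=INSERT INTO lensesio:images SELECT * FROM image-events STOREAS `BYTES`
value.converter=org.apache.kafka.connect.converters.ByteArrayConverter
key.converter=org.apache.kafka.connect.converters.ByteArrayConverter
```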

By default, the connector exclusively stores the Kafka message value. However, you can expand storage to encompass the entire message, including the key, headers, and metadata, by configuring the `store.envelope` property as true. This property operates as a boolean switch, with the default value being false. When the envelope is enabled, the data structure follows this format:

{% hint style="warning" %}
Not supported with a custom partition strategy.
{% endhint %}

```json
{
  "key": <the message Key, which can be a primitive or a complex object>,
  "value": <the message Key, which can be a primitive or a complex object>,
  "headers": {
    "header1": "value1",
    "header2": "value2"
  },
  "metadata": {
    "offset": 0,
    "partition": 0,
    "timestamp": 0,
    "topic": "topic"
  }
}
```

Utilizing the envelope is particularly advantageous in scenarios such as backup and restore or replication, where comprehensive storage of the entire message in Datalake is desired.

### Examples <a href="#examples" id="examples"></a>

Storing the message Value Avro data as Parquet in Datalake:

{% code fullWidth="true" %}

```properties
...
connect.datalake.kcql=INSERT INTO lensesioazure:car_speed SELECT * FROM car_speed_events STOREAS `PARQUET` 
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081
key.converter=org.apache.kafka.connect.storage.StringConverter
...
```

{% endcode %}

The connector also facilitates seamless JSON to AVRO/Parquet conversion, eliminating the need for an additional processing step before the data is stored in Datalake.

{% code fullWidth="true" %}

```properties
...
connect.datalake.kcql=INSERT INTO lensesioazure:car_speed SELECT * FROM car_speed_events STOREAS `PARQUET` 
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter=org.apache.kafka.connect.storage.StringConverter
...
```

{% endcode %}

Enabling the full message stored as JSON in Datalake:

{% code fullWidth="true" %}

```properties
  ...
  connect.datalake.kcql=INSERT INTO lensesioazure:car_speed SELECT * FROM car_speed_events STOREAS `JSON` PROPERTIES('store.envelope'=true)
  value.converter=org.apache.kafka.connect.json.JsonConverter
  key.converter=org.apache.kafka.connect.storage.StringConverter
  ...
```

{% endcode %}

Enabling the full message stored as AVRO in Datalake:

{% code fullWidth="true" %}

```properties
...
connect.datalake.kcql=INSERT INTO lensesioazure:car_speed SELECT * FROM car_speed_events STOREAS `AVRO` PROPERTIES('store.envelope'=true)
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081
key.converter=org.apache.kafka.connect.storage.StringConverter
...
```

{% endcode %}

If the restore (see the Datalake Source documentation) happens on the same cluster, then the most performant way is to use the ByteArrayConverter for both Key and Value and store as AVRO or Parquet:

{% code fullWidth="true" %}

```properties
...
connect.datalake.kcql=INSERT INTO lensesioazure:car_speed SELECT * FROM car_speed_events STOREAS `AVRO` PROPERTIES('store.envelope'=true)
value.converter=org.apache.kafka.connect.converters.ByteArrayConverter
key.converter=org.apache.kafka.connect.converters.ByteArrayConverter
...
```

{% endcode %}

## Flush Options <a href="#flush-options" id="flush-options"></a>

The connector offers three distinct flush options for data management:

* Flush by Count - triggers a file flush after a specified number of records have been written to it.
* Flush by Size - initiates a file flush once a predetermined size (in bytes) has been attained.
* Flush by Interval - enforces a file flush after a defined time interval (in seconds).

It’s worth noting that the interval flush is a continuous process that acts as a fail-safe mechanism, ensuring that files are periodically flushed, even if the other flush options are not configured or haven’t reached their thresholds.

Consider a scenario where the flush size is set to 10MB, and only 9.8MB of data has been written to the file, with no new Kafka messages arriving for an extended period of 6 hours. To prevent undue delays, the interval flush guarantees that the file is flushed after the specified time interval has elapsed. This ensures the timely management of data even in situations where other flush conditions are not met.

The flush options are configured using the **flush.count**, **flush.size**, and **flush.interval** KCQL Properties (see [#object-key](#object-key "mention") section). The settings are optional and if not specified the defaults are:

* flush.count = 50000
* flush.size = 500000000 (500MB)
* flush.interval = 3600 (1 hour)

{% hint style="success" %}
A connector instance can simultaneously operate on multiple topic partitions. When one partition triggers a flush, it will initiate a flush operation for all of them, even if the other partitions are not yet ready to flush.
{% endhint %}

When `connect.datalake.latest.schema.optimization.enabled` is set to true, it reduces unnecessary data flushes when writing to Avro or Parquet formats. Specifically, it leverages schema compatibility to avoid flushing data when messages with older but *backward-compatible* schemas are encountered. Consider the following sequence of messages and their associated schemas:

```
message1 -> schema1
message2 -> schema1  
  (No flush needed – same schema)

message3 -> schema2  
  (Flush occurs – new schema introduced)

message4 -> schema2  
  (No flush needed – same schema)

message5 -> schema1  
  Without optimization: would trigger a flush  
  With optimization: no flush – schema1 is backward-compatible with schema2

message6 -> schema2  
message7 -> schema2  
  (No flush needed – same schema, it would happen based on the flush thresholds)
```
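A hedged configuration sketch enabling this optimization for an Avro sink (the KCQL target, topic, and schema registry URL are illustrative):

```properties
connect.datalake.latest.schema.optimization.enabled=true
connect.datalake.kcql=INSERT INTO lensesio:demo SELECT * FROM demo STOREAS `AVRO`
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081
```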

### Flushing By Interval

The next flush time is calculated based on the time the previous flush completed (the last modified time of the file written to Data Lake). Therefore, by design, the sink connector’s behaviour will have a slight drift based on the time it takes to flush records and whether records are present or not. If Kafka Connect makes no calls to put records, the logic for flushing won't be executed. This ensures a more consistent number of records per file.

![sink commit.png](/files/ivy3blnGxjimO3QrDgew)

## Compression <a href="#avro-and-parquet-compression" id="avro-and-parquet-compression"></a>

AVRO and Parquet offer the capability to compress files as they are written. The Data Lake Sink connector provides advanced users with the flexibility to configure compression options.

Here are the available options for the `connect.datalake.compression.codec`, along with indications of their support by Avro, Parquet and JSON writers:

| Compression  | Avro Support | Avro (requires Level) | Parquet Support | JSON |
| ------------ | ------------ | --------------------- | --------------- | ---- |
| UNCOMPRESSED | ✅            |                       | ✅               | ✅    |
| SNAPPY       | ✅            |                       | ✅               |      |
| GZIP         |              |                       | ✅               | ✅    |
| LZ0          |              |                       | ✅               |      |
| LZ4          |              |                       | ✅               |      |
| BROTLI       |              |                       | ✅               |      |
| BZIP2        | ✅            |                       |                 |      |
| ZSTD         | ✅            | ⚙️                    | ✅               |      |
| DEFLATE      | ✅            | ⚙️                    |                 |      |
| XZ           | ✅            | ⚙️                    |                 |      |

Please note that not all compression libraries are bundled with the Datalake connector. Therefore, you may need to manually add certain libraries to the classpath to ensure they function correctly.
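As an illustration, the sketch below selects ZSTD for an Avro sink; per the table above, ZSTD with the Avro writer also expects a compression level (the level chosen here is an example, not a recommendation):

```properties
connect.datalake.compression.codec=ZSTD
connect.datalake.compression.level=5
```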

## Authentication <a href="#auth-mode" id="auth-mode"></a>

The connector offers three distinct authentication modes:

* Default: This mode relies on the default Azure authentication chain, simplifying the authentication process.
* Connection String: This mode enables simpler configuration by relying on the connection string to authenticate with Azure.
* Credentials: In this mode, explicit configuration of Azure Access Key and Secret Key is required for authentication.

When selecting the “Credentials” mode, it is essential to provide the account name and account key properties. Alternatively, if you prefer not to configure these properties explicitly, the connector will follow the standard Azure credentials retrieval order.

Here’s an example configuration for the “Credentials” mode:

```properties
...
connect.datalake.azure.auth.mode=Credentials
connect.datalake.azure.account.name=$AZURE_ACCOUNT_NAME
connect.datalake.azure.account.key=$AZURE_ACCOUNT_KEY
...
```

And here is an example configuration using the “Connection String” mode:

```properties
...
connect.datalake.azure.auth.mode=ConnectionString
connect.datalake.azure.connection.string=$AZURE_CONNECTION_STRING
...
```
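For completeness, here is a minimal sketch of the “Default” mode, which relies on the default Azure authentication chain; it assumes the storage account is identified via `connect.datalake.azure.account.name` (adjust to however your deployment resolves the account):

```properties
...
connect.datalake.azure.auth.mode=Default
connect.datalake.azure.account.name=$AZURE_ACCOUNT_NAME
...
```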

For enhanced security and flexibility when using either the “Credentials” or “Connection String” modes, it is highly advisable to utilize Connect Secret Providers.

## Error policies <a href="#error-polices" id="error-polices"></a>

The connector supports [Error policies](/latest/connectors/tutorials/using-error-policies.md).

### Retry behaviour

The connector applies retries at **two independent layers**. They are complementary, not duplicates: each one targets a different category of failure, and both are active at the same time.

#### Layer 1 — HTTP / Azure SDK retries

Every individual call the connector makes to Azure Data Lake (file create, append, flush, copy, delete, list, read) is routed through the official Azure SDK for Java (`azure-storage-file-datalake`), which transparently retries transient failures using exponential backoff. These retries are **invisible to Kafka Connect**: they happen entirely inside a single `put()` invocation, and the connector only sees the failure if every HTTP attempt has been exhausted.

Typical failures absorbed at this layer:

* TCP / TLS handshake errors, connection resets, DNS hiccups
* HTTP 5xx responses from the Data Lake service
* Throttling errors (`ServerBusy`, `OperationTimedOut`, IngressOverAccountLimit / EgressOverAccountLimit)
* Short-lived endpoint blips

Properties:

| Name                                   | Description                                                                                                                    | Type | Default |
| -------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------ | ---- | ------- |
| `connect.datalake.http.max.retries`    | Maximum number of attempts per individual HTTP request.                                                                        | int  | 5       |
| `connect.datalake.http.retry.interval` | Initial backoff delay (in milliseconds) before the first HTTP retry. The Azure SDK applies its own exponential backoff on top. | long | 50      |

The Azure SDK handles the per-attempt backoff internally; the connector does not expose a separate multiplier knob.

{% hint style="info" %}
The Azure SDK's classification of "retryable" errors is broad and covers the vast majority of transient Data Lake issues. Increase connect.datalake.http.max.retries if you operate against a heavily throttled storage account or over a noisy network.
{% endhint %}

#### Layer 2 — Connector / Kafka Connect retries

When **all** HTTP retries above have been exhausted, or when an error happens **outside** an HTTP call (serialisation, schema, file-system, etc.), control returns to the connector's error policy. If the policy is `RETRY`, the connector throws a `RetriableException`, which causes Kafka Connect to **redeliver the same batch of records** to `put()` after a delay. This is repeated until the batch eventually succeeds or the configured retry budget is exhausted.

Properties:

| Name                              | Description                                                                                      | Type   | Default |
| --------------------------------- | ------------------------------------------------------------------------------------------------ | ------ | ------- |
| `connect.datalake.error.policy`   | `THROW` (fail immediately), `NOOP` (swallow and continue), or `RETRY` (re-deliver the batch).    | string | `THROW` |
| `connect.datalake.max.retries`    | Maximum number of batch redeliveries before the task fails. Only used when `error.policy=RETRY`. | int    | 20      |
| `connect.datalake.retry.interval` | Delay (in milliseconds) between batch redeliveries. Only used when `error.policy=RETRY`.         | int    | 60000   |

{% hint style="warning" %}
If connect.datalake.error.policy is left at its default THROW, the max.retries and retry.interval settings are not used — any error escaping the HTTP layer will fail the task immediately.
{% endhint %}

#### Which layer handles what

| Failure category                                                                                                                 | Handled by                             | Properties to tune                                                                                         |
| -------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------- | ---------------------------------------------------------------------------------------------------------- |
| Transient cloud noise: 5xx, throttling, network resets, DNS / TLS blips                                                          | Azure SDK retries (silent)             | `connect.datalake.http.max.retries`, `connect.datalake.http.retry.interval`                                |
| Sustained Data Lake unavailability, RBAC / auth failures, schema or format errors, or anything that escapes the SDK retry budget | Connector-level retry policy (`RETRY`) | `connect.datalake.error.policy=RETRY` + `connect.datalake.max.retries` + `connect.datalake.retry.interval` |

#### Choosing values

* **Tune the `http.*` settings to absorb cloud noise.** The defaults (5 attempts, starting at 50 ms) are sensible for most workloads. Increase `connect.datalake.http.max.retries` if you operate over a noisy network or against a heavily-throttled storage account.
* **Use `error.policy=RETRY` as a backstop** for longer outages. The total ride-through window is approximately `max.retries x retry.interval`. With the defaults (20 x 60 s) the task survives roughly 20 minutes of continuous failure before giving up.
* **Combine with Kafka Connect's framework-level error handling** (`errors.tolerance=all`, `errors.deadletterqueue.topic.name`, etc.) for **per-record** poison pills (converter / SMT failures). The framework's `errors.tolerance` is **not** a substitute for `error.policy=RETRY`: it handles record-level errors, not batch-level Data Lake infrastructure failures. The two settings address different failure modes and are intended to be used together.

#### Example

A robust production configuration that combines both layers and adds Kafka Connect's poison-pill protection:

```properties
# Layer 1 - leave at defaults, or relax for noisy networks
# connect.datalake.http.max.retries=5
# connect.datalake.http.retry.interval=50

# Layer 2 - ride through up to ~30 minutes of Data Lake unavailability
connect.datalake.error.policy=RETRY
connect.datalake.max.retries=30
connect.datalake.retry.interval=60000

# Kafka Connect framework - per-record DLQ for converter / SMT errors
errors.tolerance=all
errors.log.enable=true
errors.log.include.messages=true
errors.deadletterqueue.topic.name=my-connector-dlq
errors.deadletterqueue.context.headers.enable=true
errors.deadletterqueue.topic.replication.factor=3
```

## Offset commit semantics

A frequent question is **when** the Azure Data Lake Gen2 sink connector advances Kafka consumer offsets, and what role the various "temporary" locations play in that process.

{% hint style="success" %}
Offsets are only advanced after the data has been durably written to its final path in Data Lake (with exactly-once enabled, the connector's .indexes/ entry must also be updated first). Neither the local staging file nor the transient .temp-upload/... Data Lake file causes offsets to advance.
{% endhint %}

#### End-to-end flow for one batch

<figure><img src="/files/wC2OwtIrngvOrBjWsbAu" alt=""><figcaption></figcaption></figure>

1. **Local staging.** Incoming records are serialized and appended to a file on the **Connect worker's local disk**, inside the directory pointed to by `connect.datalake.local.tmp.directory` (or an OS temp directory if not set). Nothing is written to Data Lake and no Kafka offsets advance. If the task crashes here, the records are simply re-consumed from Kafka on restart.
2. **Flush.** When a flush threshold is reached (`flush.count`, `flush.size`, `flush.interval`, schema change, etc.), the connector uploads the local staging file to Data Lake. The pipeline depends on `connect.datalake.exactly.once.enable`:
   * **Exactly-once (default, `exactly.once.enable=true`).** A 3-step pipeline that goes via a transient file, fenced by ETags:
   1. **Upload** the local staging file to a transient file at `abfss://<filesystem>@<account>.dfs.core.windows.net/.temp-upload/<topic>/<partition>/<uuid>/<finalPath>`.
   2. **Copy** that transient file to the final destination path, using an `If-Match`/ETag precondition so a concurrent writer (e.g. during a rebalance) cannot overwrite or duplicate the final file.
   3. **Delete** the transient file under `.temp-upload/`.

      The `.indexes/` entry is updated after each step so that a restarted task can pick up the pipeline mid-flight.

   * **At-least-once (`exactly.once.enable=false`).** The local staging file is uploaded **directly to the final destination path**. There is no `.temp-upload/` indirection, no ETag fencing, and no `.indexes/` checkpoint. The connector falls back to Kafka Connect's native at-least-once offset management.
3. **Committed offset advances.** Only once the final file is in place at its proper destination path — and, with exactly-once enabled, only once the corresponding `.indexes/` entry has been updated — does the writer's committed offset advance.
4. **`preCommit`.** The next time Kafka Connect calls `preCommit`, the connector returns the latest safely committed offset for each partition. If any records are still buffered locally and not yet flushed, the connector returns the offset of the **first still-buffered record** — Kafka Connect will not advance past anything that is not durably in Data Lake.

#### The two "temporary" locations

| Location                                                     | Where it physically lives                                                                 | Purpose                                                                                                                                                                                                 | Affects committed offset?    |
| ------------------------------------------------------------ | ----------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------- |
| Local staging file                                           | Disk on the Connect worker (`connect.datalake.local.tmp.directory` or OS tmp)             | Buffers Kafka records into the chosen file format (AVRO / Parquet / JSON / CSV / Text / BYTES) before flush. Bounded by `flush.size` / `flush.count` / `flush.interval`.                                | No                           |
| `.temp-upload/<topic>/<partition>/<uuid>/...` Data Lake file | The **same Data Lake filesystem** as the destination, under the `.temp-upload/` directory | Atomic, fenced staging step used only when exactly-once is enabled. The local file is uploaded here, then copied to the final path with an ETag precondition, then deleted. Exists only for one commit. | No                           |
| Final destination path                                       | The configured Data Lake filesystem / path                                                | The actual data that downstream consumers read.                                                                                                                                                         | **Yes** (after index update) |

{% hint style="info" %}
Nothing is held in memory only. The local staging file is real on-disk storage on the worker, and .temp-upload/... is a real Data Lake file. This is what allows the connector to recover cleanly from crashes, rebalances and worker restarts.
{% endhint %}

#### Restart behaviour

The connector is designed so that no Kafka offset ever moves ahead of data that has been durably written to its final Data Lake path. The exact failure modes vary by mode:

**Exactly-once (default)**

* **Crash during local staging** — nothing in Data Lake, offsets unchanged. Records are re-consumed from Kafka. No duplicates, no data loss.
* **Crash mid-pipeline** (between upload, copy and delete in `.temp-upload/`) — the `.indexes/` entry records exactly which step was reached. On restart the connector resumes the pipeline from that point. The ETag precondition on the copy step prevents two writers from racing the final path during a rebalance.
* **Crash after the final write but before the index entry advances** — the final file is in Data Lake but the offset has not advanced. On restart the records are re-uploaded; the ETag fence keeps the existing final file unchanged.

**At-least-once (`exactly.once.enable=false`)**

* **Crash during local staging** behaves identically to the exactly-once case (records re-consumed, no data loss).
* **Crash during or after the upload to the final path** may produce duplicate or partially-overwritten files on restart, because there is no fencing and no `.indexes/` checkpoint. Use this mode only when downstream consumers can tolerate duplicates.
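A minimal sketch of opting into at-least-once delivery; all other connector settings stay unchanged:

```properties
connect.datalake.exactly.once.enable=false
```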

#### Operational notes

* Tuning the flush thresholds (`flush.count` / `flush.size` / `flush.interval`) controls how often offsets advance, and therefore how much data is replayed after a worker crash. Smaller flush windows = smaller replays on restart, at the cost of more, smaller files in Data Lake.
* The `.temp-upload/` directory is internal connector machinery. It is safe to ignore in lifecycle management policies, but **do not exclude it from the connector's RBAC permissions** — the connector needs write, create, delete and read permissions on files under that directory. The minimum role is **Storage Blob Data Contributor** (or an equivalent custom role with `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/*` actions).
* If your filesystem has soft-delete, immutability policies, or strict ACLs configured, ensure they allow the short-lived `.temp-upload/...` files to be written **and deleted** during the commit pipeline.

## Indexes Directory

The connector uses the concept of index files that it writes to in order to store information about the latest offsets for Kafka topics and partitions as they are being processed. This allows the connector to quickly resume from the correct position when restarting and provides flexibility in naming the index files.

By default, the root directory for these index files is named .indexes for all connectors. However, each connector will create and store its index files within its own subdirectory inside this `.indexes` directory.

You can configure the root directory for these index files using the property `connect.datalake.indexes.name`. This property specifies the path from the root of the data lake filesystem. Note that even if you configure this property, the connector will still create a subdirectory within the specified root directory.

### Examples

| Index Name (`connect.datalake.indexes.name`) | Resulting Indexes Directory Structure               | Description                                                                                                    |
| -------------------------------------------- | --------------------------------------------------- | -------------------------------------------------------------------------------------------------------------- |
| `.indexes` (default)                         | `.indexes/<connector_name>/`                        | The default setup, where each connector uses its own subdirectory within `.indexes`.                           |
| `custom-indexes`                             | `custom-indexes/<connector_name>/`                  | Custom root directory `custom-indexes`, with a subdirectory for each connector.                                |
| `indexes/datalake-connector-logs`            | `indexes/datalake-connector-logs/<connector_name>/` | Uses a custom subdirectory `datalake-connector-logs` within `indexes`, with a subdirectory for each connector. |
| `logs/indexes`                               | `logs/indexes/<connector_name>/`                    | Indexes are stored under `logs/indexes`, with a subdirectory for each connector.                               |
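As a configuration sketch, setting a custom indexes root (the value is illustrative):

```properties
connect.datalake.indexes.name=custom-indexes
```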

## Option Reference <a href="#connector-properties" id="connector-properties"></a>

| Name                                                | Description                                                                                                                                                                                                                                                                                 | Type    | Available Values                                                                           | Default Value  |
| --------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- | ------------------------------------------------------------------------------------------ | -------------- |
| connect.datalake.azure.auth.mode                    | Specifies the Azure authentication mode for connecting to Datalake.                                                                                                                                                                                                                         | string  | “Credentials”, “ConnectionString” or “Default”                                             | “Default”      |
| connect.datalake.azure.account.key                  | The Azure Account Key used for authentication.                                                                                                                                                                                                                                              | string  |                                                                                            | (Empty)        |
| connect.datalake.azure.account.name                 | The Azure Account Name used for authentication.                                                                                                                                                                                                                                             | string  |                                                                                            | (Empty)        |
| connect.datalake.pool.max.connections               | Specifies the maximum number of connections allowed in the Azure Client’s HTTP connection pool when interacting with Datalake.                                                                                                                                                              | int     | -1 (undefined)                                                                             | 50             |
| connect.datalake.endpoint                           | Datalake endpoint URL.                                                                                                                                                                                                                                                                      | string  |                                                                                            | (Empty)        |
| connect.datalake.error.policy                       | Defines the error handling policy when errors occur during data transfer to or from Datalake.                                                                                                                                                                                               | string  | “NOOP,” “THROW,” “RETRY”                                                                   | “THROW”        |
| connect.datalake.max.retries                        | Sets the maximum number of retries the connector will attempt before reporting an error to the Connect Framework.                                                                                                                                                                           | int     |                                                                                            | 20             |
| connect.datalake.retry.interval                     | Specifies the interval (in milliseconds) between retry attempts by the connector.                                                                                                                                                                                                           | int     |                                                                                            | 60000          |
| connect.datalake.http.max.retries                   | Sets the maximum number of retries for the underlying HTTP client when interacting with Datalake.                                                                                                                                                                                           | long    |                                                                                            | 5              |
| connect.datalake.http.retry.interval                | Specifies the retry interval (in milliseconds) for the underlying HTTP client. An exponential backoff strategy is employed.                                                                                                                                                                 | long    |                                                                                            | 50             |
| connect.datalake.local.tmp.directory                | Enables the use of a local folder as a staging area for data transfer operations.                                                                                                                                                                                                           | string  |                                                                                            | (Empty)        |
| connect.datalake.kcql                               | A SQL-like configuration that defines the behavior of the connector. Refer to the KCQL section below for details.                                                                                                                                                                           | string  |                                                                                            | (Empty)        |
| connect.datalake.compression.codec                  | Sets the compression codec to be used when writing AVRO, Parquet, or JSON data to Datalake.                                                                                                                                                                                                  | string  | “UNCOMPRESSED,” “SNAPPY,” “GZIP,” “LZ0,” “LZ4,” “BROTLI,” “BZIP2,” “ZSTD,” “DEFLATE,” “XZ” | “UNCOMPRESSED” |
| connect.datalake.compression.level                  | Sets the compression level when compression is enabled for data transfer to Datalake.                                                                                                                                                                                                       | int     | 1-9                                                                                        | (Empty)        |
| connect.datalake.seek.max.files                     | Specifies the maximum threshold for the number of files the connector uses to ensure exactly-once processing of data.                                                                                                                                                                       | int     |                                                                                            | 5              |
| connect.datalake.indexes.name                       | Configure the indexes root directory for this connector.                                                                                                                                                                                                                                    | string  |                                                                                            | ".indexes"     |
| connect.datalake.exactly.once.enable                | Setting this to 'false' disables exactly-once semantics, opting instead for Kafka Connect’s native at-least-once offset management.                                                                                                                                                          | boolean | true, false                                                                                | true           |
| connect.datalake.schema.change.detector             | Configure how the file will roll over upon receiving a record with a schema different from the accumulated ones. This property configures schema change detection with `default` (object equality), `version` (version field comparison), or `compatibility` (Avro compatibility checking). | string  | `default`, `version`, `compatibility`                                                      | `default`      |
| connect.datalake.skip.null.values                   | Skip records with null values (a.k.a. tombstone records).                                                                                                                                                                                                                                   | boolean | true, false                                                                                | false          |
| connect.datalake.latest.schema.optimization.enabled | When set to true, reduces unnecessary data flushes when writing to Avro or Parquet formats. Specifically, it leverages schema compatibility to avoid flushing data when messages with older but backward-compatible schemas are encountered.                                                | boolean | true,false                                                                                 | false          |

