
FTP

A Kafka Connect source connector that monitors files on an FTP server and feeds changes into Kafka.

KCQL support 

KCQL is not supported.

Concepts 

You provide the remote directories to monitor, and at the specified interval the list of files in those directories is refreshed. Files are downloaded when they were not seen before, or when their timestamp or size has changed. Only files with a timestamp younger than the specified maximum age are considered. Hashes of the files are maintained and used to check for content changes. Changed files are then fed into Kafka, either as a whole (update) or only the appended part (tail), depending on the configuration. Optionally, file bodies can be transformed through a pluggable system before being written to Kafka.
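
The polling behaviour can be summarised with a short sketch (hypothetical types for illustration only; the connector's internals differ):

import java.time.{Duration, Instant}

// Hypothetical types for illustration only.
case class RemoteFile(path: String, timestamp: Instant, size: Long)
case class SeenAttributes(timestamp: Instant, size: Long)

// A file is (re)downloaded when it is young enough and either unknown,
// or its timestamp/size no longer match what was recorded last time.
def shouldDownload(seen: Map[String, SeenAttributes],
                   f: RemoteFile,
                   maxAge: Duration,
                   now: Instant): Boolean =
  Duration.between(f.timestamp, now).compareTo(maxAge) <= 0 &&
    (seen.get(f.path) match {
      case None       => true
      case Some(prev) => prev.timestamp != f.timestamp || prev.size != f.size
    })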

Data Types 

Each Kafka record represents a file and has the following key and value types.

  • The format of the keys is configurable through connect.ftp.keystyle=string|struct. It can be a string with the file name, or a FileInfo structure with name: string and offset: long. The offset is always 0 for files that are updated as a whole, and hence only relevant for tailed files.
  • The values of the records contain the body of the file as bytes.
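
For illustration, with connect.ftp.keystyle=struct the key can be pictured as a struct built with the Kafka Connect data API (a sketch; the schema name the connector actually registers may differ):

import org.apache.kafka.connect.data.{Schema, SchemaBuilder, Struct}

// Sketch of a struct key: the file name plus the byte offset of the body.
val fileInfoSchema: Schema = SchemaBuilder.struct()
  .field("name", Schema.STRING_SCHEMA)
  .field("offset", Schema.INT64_SCHEMA)
  .build()

val key = new Struct(fileInfoSchema)
  .put("name", "/forecasts/weather/temps.csv")
  .put("offset", 0L) // always 0 for files updated as a whole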

Tailing Versus Update as a Whole 

The following rules are used.

  • Tailed files are only allowed to grow. Only the bytes appended since the last inspection are yielded; the preceding bytes are not allowed to change (see the sketch below).
  • Updated files can grow, shrink and change anywhere. The entire contents are yielded.
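
As a sketch of what tailing means for the record value (illustrative only, not the connector's code):

// Given the file's previously seen size and its current bytes, only the
// newly appended slice [previousSize, current.length) is emitted.
def tailSlice(current: Array[Byte], previousSize: Int): Array[Byte] =
  current.drop(previousSize)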

Data Converters 

Instead of dumping whole file bodies (and risking exceeding Kafka’s message.max.bytes), one might want to give an interpretation to the data contained in the files before putting it into Kafka. For example, if the files fetched from the FTP server are comma-separated values (CSV), one might prefer a stream of CSV records instead. To allow this, the connector provides a pluggable conversion of SourceRecords. Right before sending a SourceRecord to the Connect framework, it is run through an object that implements:

package com.datamountaineer.streamreactor.connect.ftp

import org.apache.kafka.common.Configurable
import org.apache.kafka.connect.source.SourceRecord

trait SourceRecordConverter extends Configurable {
    def convert(in: SourceRecord): java.util.List[SourceRecord]
}

The default object that is used is a pass-through converter, an instance of:

import java.util
import scala.collection.JavaConverters._
import org.apache.kafka.connect.source.SourceRecord

class NopSourceRecordConverter extends SourceRecordConverter {
    override def configure(props: util.Map[String, _]): Unit = {}
    override def convert(in: SourceRecord): util.List[SourceRecord] = Seq(in).asJava
}

To override it, create your own implementation of SourceRecordConverter, place the jar in the plugin.path, and point the connector at it:

connect.ftp.sourcerecordconverter=your.name.space.YourConverter
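
For example, here is a minimal sketch of a converter that splits a file body into one record per line (the class and package names are the placeholders from above; it assumes the record value is the raw file body as UTF-8 bytes):

package your.name.space

import java.util
import scala.collection.JavaConverters._
import org.apache.kafka.connect.source.SourceRecord
import com.datamountaineer.streamreactor.connect.ftp.SourceRecordConverter

class YourConverter extends SourceRecordConverter {
    override def configure(props: util.Map[String, _]): Unit = {}

    // Emit one record per line, keeping the incoming record's topic,
    // key and source offsets.
    override def convert(in: SourceRecord): util.List[SourceRecord] = {
        val body = new String(in.value.asInstanceOf[Array[Byte]], "UTF-8")
        body.split("\n").toSeq.map { line =>
            new SourceRecord(
                in.sourcePartition, in.sourceOffset, in.topic,
                in.keySchema, in.key,
                in.valueSchema, line.getBytes("UTF-8"))
        }.asJava
    }
}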

Quickstart 

Launch the stack 


  1. Copy the docker-compose file.
  2. Bring up the stack.
export CONNECTOR=ftp
docker-compose up -d ftp

Inserting test data 

Once your containers are running, log in to the ftp container:


docker exec -ti ftp /bin/bash
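
and create a file under one of the monitored paths so the connector has something to pick up. The directory layout depends on the FTP server image, so adjust the path accordingly; for example:

mkdir -p forecasts/weather
echo "2020-01-01,12.3" >> forecasts/weather/temps.csv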

Start the connector 

If you are using Lenses, log in to Lenses, navigate to the connectors page, select FTP as the source and paste the following:

name=ftp-source
connector.class=com.datamountaineer.streamreactor.connect.ftp.source.FtpSourceConnector
tasks.max=1

#server settings
connect.ftp.address=localhost:21
connect.ftp.user=ftp
connect.ftp.password=ftp

#refresh rate, every minute
connect.ftp.refresh=PT1M

#ignore files older than 14 days.
connect.ftp.file.maxage=P14D

#monitor /forecasts/weather/ and /logs/ for appends to files.
#any updates go to the topics `weather` and `error-logs` respectively.
connect.ftp.monitor.tail=/forecasts/weather/:weather,/logs/:error-logs

#keep an eye on /statuses/, files are retrieved as a whole and sent to topic `status`
connect.ftp.monitor.update=/statuses/:status

#keystyle controls the format of the key and can be string or struct.
#string only provides the file name
#struct provides a structure with the filename and offset
connect.ftp.keystyle=struct
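
Note that connect.ftp.refresh and connect.ftp.file.maxage take ISO 8601 durations: PT1M polls the server every minute, and P14D ignores files older than fourteen days.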

To start the connector without using Lenses, log in to the fastdata container:


docker exec -ti fastdata /bin/bash

and create a connector.properties file containing the properties above.

Create the connector with the connect-cli:

connect-cli create ftp < connector.properties

Wait for the connector to start and check it's running:

connect-cli status ftp

Check for records in Kafka 

Check the records in Lenses or via the console:

kafka-avro-console-consumer \
    --bootstrap-server localhost:9092 \
    --topic weather \
    --from-beginning

Clean up 

Bring down the stack:

docker-compose down

Options 

| Name | Description | Type | Default Value |
|------|-------------|------|---------------|
| connect.ftp.address | host[:port] of the FTP server | string | |
| connect.ftp.user | Username to connect with | string | |
| connect.ftp.password | Password to connect with | string | |
| connect.ftp.refresh | ISO 8601 duration at which the server is polled | string | |
| connect.ftp.file.maxage | ISO 8601 duration for the maximum age of files | string | |
| connect.ftp.keystyle | SourceRecord key style, string or struct | string | |
| connect.ftp.protocol | Protocol to use, FTP, FTPS or SFTP | string | ftp |
| connect.ftp.timeout | FTP connection timeout in milliseconds | int | 30000 |
| connect.ftp.filter | Regular expression used when selecting files for processing | string | .* |
| connect.ftp.monitor.tail | Comma-separated list of path:destinationtopic; the tail of the file is tracked | string | |
| connect.ftp.monitor.update | Comma-separated list of path:destinationtopic; the whole file is tracked | string | |
| connect.ftp.monitor.slicesize | File slice size in bytes | int | -1 |
| connect.ftp.fileconverter | File converter class | string | com.datamountaineer.streamreactor.connect.ftp.source.SimpleFileConverter |
| connect.ftp.sourcerecordconverter | Source record converter class | string | com.datamountaineer.streamreactor.connect.ftp.source.NopSourceRecordConverter |
| connect.ftp.max.poll.records | Max number of records returned per poll | int | 10000 |