Modifier and Type | Method and Description
---|---
`static StreamSource<org.apache.kafka.connect.source.SourceRecord>` | `connect(Properties properties)` A generic Kafka Connect source that lets you plug any Kafka Connect source into a Jet pipeline for data ingestion.
`static <T> StreamSource<T>` | `connect(Properties properties, FunctionEx<org.apache.kafka.connect.source.SourceRecord,T> projectionFn)` A generic Kafka Connect source that lets you plug any Kafka Connect source into a Jet pipeline for data ingestion.
```java
@Nonnull @Beta
public static <T> StreamSource<T> connect(@Nonnull Properties properties,
        @Nonnull FunctionEx<org.apache.kafka.connect.source.SourceRecord,T> projectionFn)
```
You need to add the Kafka Connect connector JARs, or a ZIP file containing the JARs, as a job resource via JobConfig.addJar(URL) or JobConfig.addJarsInZip(URL), respectively.

After that you can use the Kafka Connect connector with the same configuration parameters as you would with Kafka. Hazelcast Jet will drive the Kafka Connect connector from the pipeline, and the records will be available to your pipeline as a stream of custom-typed objects created by projectionFn.

In case of a failure, this source keeps track of the source partition offsets; it will restore the partition offsets and resume consumption from where it left off.

Hazelcast Jet will instantiate tasks on a random cluster member and use local parallelism for scaling. The property tasks.max is not allowed; use StreamStage.setLocalParallelism(int) in the pipeline instead. This limitation may change in the future.
Parameters:
- `properties` - Kafka Connect properties
- `projectionFn` - function to create output objects from the Kafka `SourceRecord`s. If the projection returns `null` for an item, that item will be filtered out.

See Also: `Pipeline.readFrom(StreamSource)`
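The description above can be sketched as a minimal pipeline. This is a sketch, not a definitive implementation: it assumes the Hazelcast Jet and Kafka Connect JARs are on the classpath, and `com.example.MySourceConnector` is a placeholder for a real connector class that you would add via `JobConfig.addJar(URL)`.

```java
import com.hazelcast.function.FunctionEx;
import com.hazelcast.jet.kafka.connect.KafkaConnectSources;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.StreamSource;
import org.apache.kafka.connect.source.SourceRecord;

import java.util.Properties;

public class ConnectPipelineSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("name", "my-source");
        // Placeholder connector class, not a real connector:
        props.setProperty("connector.class", "com.example.MySourceConnector");

        // Project each SourceRecord to its value's String form; returning
        // null here would filter the record out of the stream.
        StreamSource<String> source = KafkaConnectSources.connect(props,
                (FunctionEx<SourceRecord, String>) rec ->
                        rec.value() == null ? null : rec.value().toString());

        Pipeline pipeline = Pipeline.create();
        pipeline.readFrom(source)
                .withoutTimestamps()
                .setLocalParallelism(2)  // scale here instead of tasks.max
                .writeTo(Sinks.logger());
        // Submit `pipeline` to a Hazelcast instance with a JobConfig that
        // adds the connector JAR (JobConfig.addJar / addJarsInZip).
    }
}
```

Note that the parallelism is set on the pipeline stage, in line with the `tasks.max` restriction described above.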
```java
@Nonnull @Beta
public static StreamSource<org.apache.kafka.connect.source.SourceRecord> connect(
        @Nonnull Properties properties)
```
You need to add the Kafka Connect connector JARs, or a ZIP file containing the JARs, as a job resource via JobConfig.addJar(URL) or JobConfig.addJarsInZip(URL), respectively.

After that you can use the Kafka Connect connector with the same configuration parameters as you would with Kafka. Hazelcast Jet will drive the Kafka Connect connector from the pipeline, and the records will be available to your pipeline as `SourceRecord`s.

In case of a failure, this source keeps track of the source partition offsets; it will restore the partition offsets and resume consumption from where it left off.

Hazelcast Jet will instantiate tasks on a random cluster member and use local parallelism for scaling. The property tasks.max is not allowed; use StreamStage.setLocalParallelism(int) in the pipeline instead.
Parameters:
- `properties` - Kafka Connect properties

See Also: `Pipeline.readFrom(StreamSource)`
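Building the `Properties` object passed to `connect` needs only the JDK. A minimal sketch follows; the connector name, class, and `topic` key are illustrative placeholders for whatever your actual connector expects, not settings defined by this API.

```java
import java.util.Properties;

public class ConnectProps {
    // Build Kafka Connect source properties. The values below are
    // placeholders; substitute your real connector's class and settings.
    static Properties connectorProperties() {
        Properties props = new Properties();
        props.setProperty("name", "my-source");  // connector instance name
        props.setProperty("connector.class", "com.example.MySourceConnector");
        props.setProperty("topic", "my-topic");  // hypothetical connector setting
        // Deliberately NOT setting tasks.max: Jet rejects it; use
        // StreamStage.setLocalParallelism(int) in the pipeline instead.
        return props;
    }

    public static void main(String[] args) {
        Properties p = connectorProperties();
        System.out.println(p.getProperty("connector.class"));
    }
}
```

These properties would then be passed to either `connect` overload.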
Copyright © 2024 Hazelcast, Inc. All rights reserved.