public final class KafkaSinks extends Object
Contains factory methods for Apache Kafka sinks.

Nested Class Summary

Modifier and Type | Class and Description
---|---
static class | KafkaSinks.Builder<E> - A builder for the Kafka sink.
Method Summary

Modifier and Type | Method and Description
---|---
static <E> KafkaSinks.Builder<E> | kafka(DataConnectionRef dataConnectionRef) - Returns a builder object that you can use to create an Apache Kafka pipeline sink.
static <E,K,V> Sink<E> | kafka(DataConnectionRef dataConnectionRef, FunctionEx<? super E,org.apache.kafka.clients.producer.ProducerRecord<K,V>> toRecordFn) - Returns a sink that publishes messages to Apache Kafka topic(s).
static <E,K,V> Sink<E> | kafka(DataConnectionRef dataConnectionRef, Properties properties, String topic, FunctionEx<? super E,K> extractKeyFn, FunctionEx<? super E,V> extractValueFn) - Convenience for kafka(DataConnectionRef, FunctionEx) which creates a ProducerRecord using the given topic and the given key and value mapping functions, with additional properties available.
static <K,V> Sink<Map.Entry<K,V>> | kafka(DataConnectionRef dataConnectionRef, String topic) - Convenience for kafka(DataConnectionRef, String, FunctionEx, FunctionEx) which expects Map.Entry<K, V> as input and extracts its key and value parts to be published to Kafka.
static <E,K,V> Sink<E> | kafka(DataConnectionRef dataConnectionRef, String topic, FunctionEx<? super E,K> extractKeyFn, FunctionEx<? super E,V> extractValueFn) - Convenience for kafka(DataConnectionRef, FunctionEx) which creates a ProducerRecord using the given topic and the given key and value mapping functions.
static <E> KafkaSinks.Builder<E> | kafka(Properties properties) - Returns a builder object that you can use to create an Apache Kafka pipeline sink.
static <E,K,V> Sink<E> | kafka(Properties properties, FunctionEx<? super E,org.apache.kafka.clients.producer.ProducerRecord<K,V>> toRecordFn) - Returns a sink that publishes messages to Apache Kafka topic(s).
static <K,V> Sink<Map.Entry<K,V>> | kafka(Properties properties, String topic) - Convenience for kafka(Properties, String, FunctionEx, FunctionEx) which expects Map.Entry<K, V> as input and extracts its key and value parts to be published to Kafka.
static <E,K,V> Sink<E> | kafka(Properties properties, String topic, FunctionEx<? super E,K> extractKeyFn, FunctionEx<? super E,V> extractValueFn) - Convenience for kafka(Properties, FunctionEx) which creates a ProducerRecord using the given topic and the given key and value mapping functions.
Method Detail

@Nonnull public static <E,K,V> Sink<E> kafka(@Nonnull Properties properties, @Nonnull FunctionEx<? super E,org.apache.kafka.clients.producer.ProducerRecord<K,V>> toRecordFn)

Returns a sink that publishes messages to Apache Kafka topic(s). It transforms each received item to a ProducerRecord using the supplied mapping function.

The sink creates a single KafkaProducer per processor using the supplied properties.
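For illustration, a minimal sketch of plugging this overload into a pipeline; the broker address, serializers, topic name "events", and the test source are assumptions, not part of this method's contract:

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import com.hazelcast.jet.kafka.KafkaSinks;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sink;
import com.hazelcast.jet.pipeline.test.TestSources;

public class ToRecordFnExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // assumed broker
        props.setProperty("key.serializer", StringSerializer.class.getCanonicalName());
        props.setProperty("value.serializer", StringSerializer.class.getCanonicalName());

        // map each stream item to a ProducerRecord; "events" is a hypothetical topic
        Sink<String> sink = KafkaSinks.kafka(props,
                (String item) -> new ProducerRecord<>("events", item, item.toUpperCase()));

        Pipeline p = Pipeline.create();
        p.readFrom(TestSources.items("a", "b", "c"))
         .writeTo(sink);
    }
}
```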
The behavior depends on the job's processing guarantee: with the exactly-once guarantee the sink publishes messages in Kafka transactions that are committed together with each snapshot; with a weaker guarantee it publishes them without transactions.

When using transactions, pay attention to your transaction.timeout.ms config property. It limits the entire duration of the transaction from the moment it begins, not just the inactivity timeout. It must not be smaller than your snapshot interval, otherwise the Kafka broker will roll the transaction back before Jet is done with it. It should also be large enough for Jet to restart after a failure: a member can crash just before it's about to commit, and Jet will attempt to commit the transaction after the restart, but the transaction must still be waiting in the broker. The default in Kafka 2.4 is 1 minute.

Also keep in mind that the consumers need to use isolation.level=read_committed, which is not the default. Otherwise the consumers will see duplicate messages.

If you want to avoid the overhead of transactions, you can reduce the guarantee just for the sink. To do so, use the builder version and call exactlyOnce(false) on the builder.
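As a sketch of the settings discussed above; the broker address and the 15-minute timeout are assumed values, chosen only to comfortably exceed a hypothetical 10-second snapshot interval:

```java
import java.util.Properties;

final class KafkaTxConfig {
    // Producer side: transaction.timeout.ms must not be smaller than the
    // snapshot interval and should leave headroom for Jet to restart and
    // commit after a member failure.
    static Properties producerProps() {
        Properties p = new Properties();
        p.setProperty("bootstrap.servers", "localhost:9092"); // assumed broker
        p.setProperty("transaction.timeout.ms", "900000");    // assumed: 15 minutes
        return p;
    }

    // Consumer side: without read_committed (not the Kafka default),
    // consumers also see messages from transactions that are later rolled
    // back, i.e. duplicates.
    static Properties consumerProps() {
        Properties p = new Properties();
        p.setProperty("isolation.level", "read_committed");
        return p;
    }
}
```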
IO failures are generally handled by the Kafka producer and do not cause the processor to fail. Refer to the Kafka documentation for details.
The default local parallelism for this processor is 1.
Type Parameters:
E - type of the stream item
K - type of the key published to Kafka
V - type of the value published to Kafka
Parameters:
properties - producer properties, which should contain the broker address and the key/value serializers
toRecordFn - function that maps the stream item to a ProducerRecord to publish
@Beta @Nonnull public static <E,K,V> Sink<E> kafka(@Nonnull DataConnectionRef dataConnectionRef, @Nonnull FunctionEx<? super E,org.apache.kafka.clients.producer.ProducerRecord<K,V>> toRecordFn)
Returns a sink that publishes messages to Apache Kafka topic(s). It transforms each received item to a ProducerRecord using the supplied mapping function.

The sink uses the supplied DataConnection to obtain a KafkaProducer instance for each Processor. Depending on the DataConnection configuration it may be either a new instance for each processor or a shared instance. NOTE: a shared instance can't be used with exactly-once.
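A sketch of using this overload; it assumes a data connection named "my-kafka" is configured on the cluster and that a topic "numbers" exists:

```java
import static com.hazelcast.jet.pipeline.DataConnectionRef.dataConnectionRef;

import org.apache.kafka.clients.producer.ProducerRecord;

import com.hazelcast.jet.kafka.KafkaSinks;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sink;
import com.hazelcast.jet.pipeline.test.TestSources;

public class DataConnectionSinkExample {
    public static void main(String[] args) {
        // "my-kafka" is a hypothetical data connection configured on the cluster;
        // broker address and serializers come from its configuration
        Sink<Integer> sink = KafkaSinks.kafka(dataConnectionRef("my-kafka"),
                (Integer n) -> new ProducerRecord<>("numbers", "key-" + n, n.toString()));

        Pipeline p = Pipeline.create();
        p.readFrom(TestSources.items(1, 2, 3))
         .writeTo(sink);
    }
}
```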
The behavior depends on the job's processing guarantee: with the exactly-once guarantee the sink publishes messages in Kafka transactions that are committed together with each snapshot; with a weaker guarantee it publishes them without transactions.

When using transactions, pay attention to your transaction.timeout.ms config property. It limits the entire duration of the transaction from the moment it begins, not just the inactivity timeout. It must not be smaller than your snapshot interval, otherwise the Kafka broker will roll the transaction back before Jet is done with it. It should also be large enough for Jet to restart after a failure: a member can crash just before it's about to commit, and Jet will attempt to commit the transaction after the restart, but the transaction must still be waiting in the broker. The default in Kafka 2.4 is 1 minute.

Also keep in mind that the consumers need to use isolation.level=read_committed, which is not the default. Otherwise the consumers will see duplicate messages.

If you want to avoid the overhead of transactions, you can reduce the guarantee just for the sink. To do so, use the builder version and call exactlyOnce(false) on the builder.
IO failures are generally handled by the Kafka producer and do not cause the processor to fail. Refer to the Kafka documentation for details.
The default local parallelism for this processor is 1.
Type Parameters:
E - type of the stream item
K - type of the key published to Kafka
V - type of the value published to Kafka
Parameters:
dataConnectionRef - DataConnection reference to use to obtain the KafkaProducer
toRecordFn - function that maps the stream item to a ProducerRecord to publish
@Nonnull public static <E,K,V> Sink<E> kafka(@Nonnull Properties properties, @Nonnull String topic, @Nonnull FunctionEx<? super E,K> extractKeyFn, @Nonnull FunctionEx<? super E,V> extractValueFn)
Convenience for kafka(Properties, FunctionEx) which creates a ProducerRecord using the given topic and the given key and value mapping functions.

Type Parameters:
E - type of the stream item
K - type of the key published to Kafka
V - type of the value published to Kafka
Parameters:
properties - producer properties, which should contain the broker address and the key/value serializers
topic - name of the Kafka topic to publish to
extractKeyFn - function that extracts the key from the stream item
extractValueFn - function that extracts the value from the stream item
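For example, a minimal sketch; the broker address, serializers, and the topic "vegetables" are assumptions:

```java
import java.util.Properties;

import org.apache.kafka.common.serialization.StringSerializer;

import com.hazelcast.jet.kafka.KafkaSinks;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sink;
import com.hazelcast.jet.pipeline.test.TestSources;

public class ExtractFnExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // assumed broker
        props.setProperty("key.serializer", StringSerializer.class.getCanonicalName());
        props.setProperty("value.serializer", StringSerializer.class.getCanonicalName());

        // key and value are derived from each item by the two functions
        Sink<String> sink = KafkaSinks.kafka(props, "vegetables",
                (String item) -> item.substring(0, 1),   // extractKeyFn
                (String item) -> item.toUpperCase());    // extractValueFn

        Pipeline p = Pipeline.create();
        p.readFrom(TestSources.items("tomato", "potato"))
         .writeTo(sink);
    }
}
```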
@Beta @Nonnull public static <E,K,V> Sink<E> kafka(@Nonnull DataConnectionRef dataConnectionRef, @Nonnull String topic, @Nonnull FunctionEx<? super E,K> extractKeyFn, @Nonnull FunctionEx<? super E,V> extractValueFn)
Convenience for kafka(DataConnectionRef, FunctionEx) which creates a ProducerRecord using the given topic and the given key and value mapping functions.

Type Parameters:
E - type of the stream item
K - type of the key published to Kafka
V - type of the value published to Kafka
Parameters:
dataConnectionRef - DataConnection reference to use to obtain the KafkaProducer
topic - name of the Kafka topic to publish to
extractKeyFn - function that extracts the key from the stream item
extractValueFn - function that extracts the value from the stream item
@Beta @Nonnull public static <E,K,V> Sink<E> kafka(@Nonnull DataConnectionRef dataConnectionRef, @Nonnull Properties properties, @Nonnull String topic, @Nonnull FunctionEx<? super E,K> extractKeyFn, @Nonnull FunctionEx<? super E,V> extractValueFn)
Convenience for kafka(DataConnectionRef, FunctionEx) which creates a ProducerRecord using the given topic and the given key and value mapping functions, with additional properties available.

Type Parameters:
E - type of the stream item
K - type of the key published to Kafka
V - type of the value published to Kafka
Parameters:
dataConnectionRef - DataConnection reference to use to obtain the KafkaProducer
properties - additional properties for the producer
topic - name of the Kafka topic to publish to
extractKeyFn - function that extracts the key from the stream item
extractValueFn - function that extracts the value from the stream item
@Nonnull public static <K,V> Sink<Map.Entry<K,V>> kafka(@Nonnull Properties properties, @Nonnull String topic)
Convenience for kafka(Properties, String, FunctionEx, FunctionEx) which expects Map.Entry<K, V> as input and extracts its key and value parts to be published to Kafka.

Type Parameters:
K - type of the key published to Kafka
V - type of the value published to Kafka
Parameters:
properties - producer properties, which should contain the broker address and the key/value serializers
topic - Kafka topic name to publish to
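A sketch of the Map.Entry convenience; Util.entry builds the entries upstream, and the broker address and topic "letters" are assumptions:

```java
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.common.serialization.StringSerializer;

import com.hazelcast.jet.Util;
import com.hazelcast.jet.kafka.KafkaSinks;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sink;
import com.hazelcast.jet.pipeline.test.TestSources;

public class EntrySinkExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // assumed broker
        props.setProperty("key.serializer", StringSerializer.class.getCanonicalName());
        props.setProperty("value.serializer", StringSerializer.class.getCanonicalName());

        // key and value are taken from each Map.Entry
        Sink<Map.Entry<String, String>> sink = KafkaSinks.kafka(props, "letters");

        Pipeline p = Pipeline.create();
        p.readFrom(TestSources.items("a", "b"))
         .map((String s) -> Util.entry(s, s.toUpperCase()))
         .writeTo(sink);
    }
}
```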
@Beta @Nonnull public static <K,V> Sink<Map.Entry<K,V>> kafka(@Nonnull DataConnectionRef dataConnectionRef, @Nonnull String topic)
Convenience for kafka(DataConnectionRef, String, FunctionEx, FunctionEx) which expects Map.Entry<K, V> as input and extracts its key and value parts to be published to Kafka.

Type Parameters:
K - type of the key published to Kafka
V - type of the value published to Kafka
Parameters:
dataConnectionRef - DataConnection reference to use to obtain the KafkaProducer
topic - Kafka topic name to publish to
@Nonnull public static <E> KafkaSinks.Builder<E> kafka(@Nonnull Properties properties)
Returns a builder object that you can use to create an Apache Kafka pipeline sink.

The sink creates a single KafkaProducer per processor using the supplied properties.

The behavior depends on the job's processing guarantee: with the exactly-once guarantee the sink publishes messages in Kafka transactions that are committed together with each snapshot; with a weaker guarantee it publishes them without transactions.

When using transactions, pay attention to your transaction.timeout.ms config property. It limits the entire duration of the transaction from the moment it begins, not just the inactivity timeout. It must not be smaller than your snapshot interval, otherwise the Kafka broker will roll the transaction back before Jet is done with it. It should also be large enough for Jet to restart after a failure: a member can crash just before it's about to commit, and Jet will attempt to commit the transaction after the restart, but the transaction must still be waiting in the broker. The default in Kafka 2.4 is 1 minute.

If you want to avoid the overhead of transactions, you can reduce the guarantee just for the sink by calling exactlyOnce(false) on the returned builder.

IO failures are generally handled by the Kafka producer and do not cause the processor to fail. Refer to the Kafka documentation for details.

The default local parallelism for this processor is 1.

Type Parameters:
E - type of the stream item
Parameters:
properties - producer properties, which should contain the broker address and the key/value serializers
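As a sketch of the builder; it assumes KafkaSinks.Builder setters named topic, extractKeyFn, extractValueFn, and exactlyOnce, plus an assumed broker address and topic:

```java
import java.util.Properties;

import org.apache.kafka.common.serialization.StringSerializer;

import com.hazelcast.jet.kafka.KafkaSinks;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sink;
import com.hazelcast.jet.pipeline.test.TestSources;

public class BuilderExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // assumed broker
        props.setProperty("key.serializer", StringSerializer.class.getCanonicalName());
        props.setProperty("value.serializer", StringSerializer.class.getCanonicalName());

        Sink<String> sink = KafkaSinks.<String>kafka(props)
                .topic("events")                      // hypothetical topic
                .extractKeyFn(s -> s)                 // key = the item itself
                .extractValueFn(String::toUpperCase)  // value derived from the item
                .exactlyOnce(false)                   // skip transactions for this sink
                .build();

        Pipeline p = Pipeline.create();
        p.readFrom(TestSources.items("x", "y"))
         .writeTo(sink);
    }
}
```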
@Beta @Nonnull public static <E> KafkaSinks.Builder<E> kafka(@Nonnull DataConnectionRef dataConnectionRef)
Returns a builder object that you can use to create an Apache Kafka pipeline sink.

The sink uses the supplied DataConnection to obtain a KafkaProducer instance for each processor.

The behavior depends on the job's processing guarantee: with the exactly-once guarantee the sink publishes messages in Kafka transactions that are committed together with each snapshot; with a weaker guarantee it publishes them without transactions.

When using transactions, pay attention to your transaction.timeout.ms config property. It limits the entire duration of the transaction from the moment it begins, not just the inactivity timeout. It must not be smaller than your snapshot interval, otherwise the Kafka broker will roll the transaction back before Jet is done with it. It should also be large enough for Jet to restart after a failure: a member can crash just before it's about to commit, and Jet will attempt to commit the transaction after the restart, but the transaction must still be waiting in the broker. The default in Kafka 2.4 is 1 minute.

If you want to avoid the overhead of transactions, you can reduce the guarantee just for the sink by calling exactlyOnce(false) on the returned builder.

IO failures are generally handled by the Kafka producer and do not cause the processor to fail. Refer to the Kafka documentation for details.

The default local parallelism for this processor is 1.

Type Parameters:
E - type of the stream item
Parameters:
dataConnectionRef - DataConnection reference to use to obtain the KafkaProducer

Copyright © 2024 Hazelcast, Inc. All rights reserved.