Package | Description |
---|---|
com.hazelcast.jet |
The Pipeline API is Jet's high-level API to build and execute
distributed computation jobs.
|
com.hazelcast.jet.aggregate |
Contains
AggregateOperation and several of its variants, as well
as a builder object for the aggregate operations. |
com.hazelcast.jet.core |
Jet's Core API.
|
com.hazelcast.jet.core.processor |
Contains static utility classes with factories of Jet processors.
|
com.hazelcast.jet.function |
Serializable variants of functional interfaces from
java.util.function . |
com.hazelcast.jet.stream |
java.util.stream implementation using Hazelcast Jet.
|
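The following sketch is illustrative rather than an excerpt from the Jet distribution. DistributedFunction is the Serializable counterpart of java.util.function.Function, so a lambda or method reference can be handed to the APIs listed below and shipped to remote cluster members:

```java
import com.hazelcast.jet.function.DistributedFunction;

public class DistributedFunctionBasics {
    public static void main(String[] args) {
        // A DistributedFunction is a Serializable java.util.function.Function,
        // so a lambda or method reference assigned to it can be serialized and
        // sent to other cluster members as part of a Jet job.
        DistributedFunction<String, Integer> length = String::length;
        DistributedFunction<Integer, String> describe = n -> n + " characters";

        // Composition works exactly as in java.util.function.Function,
        // but the composed function is again a DistributedFunction.
        DistributedFunction<String, String> composed = length.andThen(describe);
        System.out.println(composed.apply("hazelcast")); // prints "9 characters"
    }
}
```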
Modifier and Type | Method and Description |
---|---|
DistributedFunction<E0,K> |
JoinClause.leftKeyFn()
Returns the left-hand key extractor function.
|
DistributedFunction<E1,K> |
JoinClause.rightKeyFn()
Returns the right-hand key extractor function.
|
DistributedFunction<E1,E1_OUT> |
JoinClause.rightProjectFn()
Returns the right-hand projection function.
|
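A small sketch of building a JoinClause and reading the extractors above back. Trade and Broker are hypothetical classes introduced only for illustration:

```java
import com.hazelcast.jet.JoinClause;
import com.hazelcast.jet.function.DistributedFunction;

public class JoinClauseSketch {
    // Hypothetical domain classes, used only to give the clause something to join.
    static class Trade { int brokerId() { return 1; } }
    static class Broker { int id() { return 1; } }

    public static void main(String[] args) {
        // Join trades to brokers on the broker id; the right-hand projection defaults to identity.
        JoinClause<Integer, Trade, Broker, Broker> clause =
                JoinClause.onKeys(Trade::brokerId, Broker::id);

        // The accessors above hand the key extractors back as DistributedFunctions.
        DistributedFunction<Trade, Integer> leftKey = clause.leftKeyFn();
        DistributedFunction<Broker, Integer> rightKey = clause.rightKeyFn();
        System.out.println(leftKey.apply(new Trade()) + " / " + rightKey.apply(new Broker()));
    }
}
```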
Modifier and Type | Method and Description |
---|---|
<E> Tag<E> |
CoGroupBuilder.add(ComputeStage<E> stage,
DistributedFunction<? super E,K> groupKeyFn)
Adds another contributing pipeline stage to the co-grouping operation.
|
static <K,V,T> Source<T> |
Sources.cacheJournal(String cacheName,
DistributedPredicate<com.hazelcast.cache.journal.EventJournalCacheEvent<K,V>> predicateFn,
DistributedFunction<com.hazelcast.cache.journal.EventJournalCacheEvent<K,V>,T> projectionFn,
boolean startFromLatestSequence)
Returns a source that will stream the
EventJournalCacheEvent
events of the Hazelcast ICache with the specified name. |
<K,A,E1,R> ComputeStage<Map.Entry<K,R>> |
ComputeStage.coGroup(DistributedFunction<? super E,? extends K> thisKeyFn,
ComputeStage<E1> stage1,
DistributedFunction<? super E1,? extends K> key1Fn,
AggregateOperation2<? super E,? super E1,A,R> aggrOp)
Attaches to this and the supplied stage a stage that co-groups their items
by a common key and applies the supplied aggregate operation to co-grouped
items.
|
<K,A,E1,R> ComputeStage<Map.Entry<K,R>> |
ComputeStage.coGroup(DistributedFunction<? super E,? extends K> thisKeyFn,
ComputeStage<E1> stage1,
DistributedFunction<? super E1,? extends K> key1Fn,
AggregateOperation2<? super E,? super E1,A,R> aggrOp)
Attaches to this and the supplied stage a stage that co-groups their items
by a common key and applies the supplied aggregate operation to co-grouped
items.
|
<K,A,E1,E2,R> |
ComputeStage.coGroup(DistributedFunction<? super E,? extends K> thisKeyFn,
ComputeStage<E1> stage1,
DistributedFunction<? super E1,? extends K> key1Fn,
ComputeStage<E2> stage2,
DistributedFunction<? super E2,? extends K> key2Fn,
AggregateOperation3<? super E,? super E1,? super E2,A,R> aggrOp)
Attaches to this and the supplied stages a stage that co-groups their items
by a common key and applies the supplied aggregate operation to co-grouped
items.
|
<K,A,E1,E2,R> |
ComputeStage.coGroup(DistributedFunction<? super E,? extends K> thisKeyFn,
ComputeStage<E1> stage1,
DistributedFunction<? super E1,? extends K> key1Fn,
ComputeStage<E2> stage2,
DistributedFunction<? super E2,? extends K> key2Fn,
AggregateOperation3<? super E,? super E1,? super E2,A,R> aggrOp)
Attaches to this and the supplied stages a stage that co-groups their items
by a common key and applies the supplied aggregate operation to co-grouped
items.
|
<K,A,E1,E2,R> |
ComputeStage.coGroup(DistributedFunction<? super E,? extends K> thisKeyFn,
ComputeStage<E1> stage1,
DistributedFunction<? super E1,? extends K> key1Fn,
ComputeStage<E2> stage2,
DistributedFunction<? super E2,? extends K> key2Fn,
AggregateOperation3<? super E,? super E1,? super E2,A,R> aggrOp)
Attaches to this and the supplied stages a stage that co-groups their items
by a common key and applies the supplied aggregate operation to co-grouped
items.
|
default <K> CoGroupBuilder<K,E> |
ComputeStage.coGroupBuilder(DistributedFunction<? super E,K> thisKeyFn)
Returns a fluent API builder object to construct a co-group operation
with any number of contributing stages.
|
static <E> Sink<E> |
Sinks.files(String directoryName,
DistributedFunction<E,String> toStringFn)
Convenience for
Sinks.files(String, DistributedFunction, Charset,
boolean) with the UTF-8 charset and with overwriting of existing files. |
static <E> Sink<E> |
Sinks.files(String directoryName,
DistributedFunction<E,String> toStringFn,
Charset charset,
boolean append)
Returns a sink that writes the items it receives to files.
|
<R> ComputeStage<R> |
ComputeStage.flatMap(DistributedFunction<? super E,Traverser<? extends R>> flatMapFn)
Attaches to this stage a flat-mapping stage, one which applies the
supplied function to each input item independently and emits all items
from the
Traverser it returns as the output items. |
<K,A,R> ComputeStage<Map.Entry<K,R>> |
ComputeStage.groupBy(DistributedFunction<? super E,? extends K> keyFn,
AggregateOperation1<? super E,A,R> aggrOp)
Attaches to this stage a group-by-key stage, one which will group all
received items by the key returned from the provided key-extracting
function.
|
static <E,K,V> Sink<E> |
HdfsSinks.hdfs(org.apache.hadoop.mapred.JobConf jobConf,
DistributedFunction<? super E,K> extractKeyF,
DistributedFunction<? super E,V> extractValueF)
Returns a sink that writes to Apache Hadoop HDFS.
|
static <E,K,V> Sink<E> |
HdfsSinks.hdfs(org.apache.hadoop.mapred.JobConf jobConf,
DistributedFunction<? super E,K> extractKeyF,
DistributedFunction<? super E,V> extractValueF)
Returns a sink that writes to Apache Hadoop HDFS.
|
static <K,E0,E1_IN extends Map.Entry<K,E1>,E1> |
JoinClause.joinMapEntries(DistributedFunction<E0,K> leftKeyFn)
A shorthand factory for the common case of hash-joining with a stream of
map entries.
|
static <E,K,V> Sink<E> |
KafkaSinks.kafka(Properties properties,
DistributedFunction<? super E,org.apache.kafka.clients.producer.ProducerRecord<K,V>> toRecordFn)
Returns a sink that publishes messages to an Apache Kafka topic.
|
static <E,K,V> Sink<E> |
KafkaSinks.kafka(Properties properties,
String topic,
DistributedFunction<? super E,K> extractKeyFn,
DistributedFunction<? super E,V> extractValueFn)
Convenience for
KafkaSinks.kafka(Properties, DistributedFunction) which creates
a ProducerRecord using the given topic and the given key and value
mapping functions. |
static <E,K,V> Sink<E> |
KafkaSinks.kafka(Properties properties,
String topic,
DistributedFunction<? super E,K> extractKeyFn,
DistributedFunction<? super E,V> extractValueFn)
Convenience for
KafkaSinks.kafka(Properties, DistributedFunction) which creates
a ProducerRecord using the given topic and the given key and value
mapping functions. |
static <E> Sink<E> |
Sinks.logger(DistributedFunction<E,String> toStringFn)
Returns a sink that logs all the data items it receives, at the INFO
level to the log category
WriteLoggerP . |
<R> ComputeStage<R> |
ComputeStage.map(DistributedFunction<? super E,? extends R> mapFn)
Attaches to this stage a mapping stage, one which applies the supplied
function to each input item independently and emits the function's
result as the output item.
|
static <K,V,T> Source<T> |
Sources.map(String mapName,
com.hazelcast.query.Predicate<K,V> predicate,
DistributedFunction<Map.Entry<K,V>,T> projectionFn)
Convenience for
Sources.map(String, Predicate, Projection)
which uses a DistributedFunction as the projection function. |
static <K,V,T> Source<T> |
Sources.mapJournal(String mapName,
DistributedPredicate<com.hazelcast.map.journal.EventJournalMapEvent<K,V>> predicateFn,
DistributedFunction<com.hazelcast.map.journal.EventJournalMapEvent<K,V>,T> projectionFn,
boolean startFromLatestSequence)
Returns a source that will stream the
EventJournalMapEvent
events of the Hazelcast IMap with the specified name. |
static <K,E0,E1> JoinClause<K,E0,E1,E1> |
JoinClause.onKeys(DistributedFunction<E0,K> leftKeyFn,
DistributedFunction<E1,K> rightKeyFn)
Constructs and returns a join clause with the supplied left-hand and
right-hand key extractor functions, and with an identity right-hand
projection function.
|
static <K,E0,E1> JoinClause<K,E0,E1,E1> |
JoinClause.onKeys(DistributedFunction<E0,K> leftKeyFn,
DistributedFunction<E1,K> rightKeyFn)
Constructs and returns a join clause with the supplied left-hand and
right-hand key extractor functions, and with an identity right-hand
projection function.
|
default ComputeStage<E> |
ComputeStage.peek(DistributedFunction<? super E,String> toStringFn)
Adds a peeking layer to this compute stage which logs its output.
|
ComputeStage<E> |
ComputeStage.peek(DistributedPredicate<? super E> shouldLogFn,
DistributedFunction<? super E,String> toStringFn)
Adds a peeking layer to this compute stage which logs its output.
|
<E1_NEW_OUT> |
JoinClause.projecting(DistributedFunction<E1,E1_NEW_OUT> rightProjectFn)
Returns a copy of this join clause, but with the right-hand projection
function replaced with the supplied one.
|
static <K,V,T> Source<T> |
Sources.remoteCacheJournal(String cacheName,
com.hazelcast.client.config.ClientConfig clientConfig,
DistributedPredicate<com.hazelcast.cache.journal.EventJournalCacheEvent<K,V>> predicateFn,
DistributedFunction<com.hazelcast.cache.journal.EventJournalCacheEvent<K,V>,T> projectionFn,
boolean startFromLatestSequence)
Returns a source that will stream the
EventJournalCacheEvent
events of the Hazelcast ICache with the specified name from a
remote cluster. |
static <K,V,T> Source<T> |
Sources.remoteMap(String mapName,
com.hazelcast.client.config.ClientConfig clientConfig,
com.hazelcast.query.Predicate<K,V> predicate,
DistributedFunction<Map.Entry<K,V>,T> projectionFn)
Convenience for
Sources.remoteMap(String, ClientConfig, Predicate, Projection)
which uses a DistributedFunction as the projection function. |
static <K,V,T> Source<T> |
Sources.remoteMapJournal(String mapName,
com.hazelcast.client.config.ClientConfig clientConfig,
DistributedPredicate<com.hazelcast.map.journal.EventJournalMapEvent<K,V>> predicateFn,
DistributedFunction<com.hazelcast.map.journal.EventJournalMapEvent<K,V>,T> projectionFn,
boolean startFromLatestSequence)
Returns a source that will stream the
EventJournalMapEvent
events of the Hazelcast IMap with the specified name from a
remote cluster. |
static <E> Sink<E> |
Sinks.socket(String host,
int port,
DistributedFunction<E,String> toStringFn)
Convenience for
Sinks.socket(String, int, DistributedFunction,
Charset) with UTF-8 as the charset. |
static <E> Sink<E> |
Sinks.socket(String host,
int port,
DistributedFunction<E,String> toStringFn,
Charset charset)
Returns a sink that connects to the specified TCP socket and writes to
it a string representation of the items it receives.
|
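To show how these pipeline methods fit together, here is a minimal word-count sketch. It uses ComputeStage.flatMap, ComputeStage.groupBy and Sinks.logger from the table above; Sources.list, Traversers.traverseArray, Pipeline.create, drawFrom and drainTo are assumed from the wider Jet API and are not documented on this page.

```java
import com.hazelcast.jet.ComputeStage;
import com.hazelcast.jet.Pipeline;
import com.hazelcast.jet.Sinks;
import com.hazelcast.jet.Sources;
import com.hazelcast.jet.Traversers;

import static com.hazelcast.jet.aggregate.AggregateOperations.counting;
import static com.hazelcast.jet.function.DistributedFunctions.wholeItem;

public class WordCountPipelineSketch {
    public static void main(String[] args) {
        Pipeline p = Pipeline.create();

        // Sources.list(String) is assumed; this page only lists the DistributedFunction overloads.
        ComputeStage<String> lines = p.drawFrom(Sources.<String>list("text"));

        lines
            // flatMap: one input line in, a Traverser over its words out.
            .flatMap(line -> Traversers.traverseArray(line.toLowerCase().split("\\W+")))
            // groupBy: key each word by itself and count the members of each group.
            .groupBy(wholeItem(), counting())
            // Sinks.logger(DistributedFunction<E,String>): log every (word, count) entry.
            .drainTo(Sinks.logger(Object::toString));

        // The pipeline is only defined here; submit it with JetInstance.newJob(p) to run it.
    }
}
```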
Modifier and Type | Method and Description |
---|---|
DistributedFunction<? super A,R> |
AggregateOperation.finishFn()
A primitive that finishes the accumulation process by transforming
the accumulator object into the final result.
|
Modifier and Type | Method and Description |
---|---|
<R> AggregateOperation1<T0,A,R> |
AggregateOperationBuilder.Arity1.andFinish(DistributedFunction<? super A,R> finishFn)
Constructs and returns an
AggregateOperation1 from the
current state of the builder and the supplied finish primitive. |
<R> AggregateOperation2<T0,T1,A,R> |
AggregateOperationBuilder.Arity2.andFinish(DistributedFunction<? super A,R> finishFn)
Constructs and returns an
AggregateOperation2 from the
current state of the builder and the supplied finish primitive. |
<R> AggregateOperation3<T0,T1,T2,A,R> |
AggregateOperationBuilder.Arity3.andFinish(DistributedFunction<? super A,R> finishFn)
Constructs and returns an
AggregateOperation3 from the
current state of the builder and the supplied finish primitive. |
<R> AggregateOperation<A,R> |
AggregateOperationBuilder.VarArity.andFinish(DistributedFunction<? super A,R> finishFn)
Constructs and returns an
AggregateOperation from the
current state of the builder and the supplied finish primitive. |
static <T,U,A,R> AggregateOperation1<T,?,R> |
AggregateOperations.mapping(DistributedFunction<? super T,? extends U> mapFn,
AggregateOperation1<? super U,A,R> downstream)
Adapts an
AggregateOperation1 accepting elements of type U to one accepting elements of type T by applying a mapping
function to each input element before accumulation. |
static <T,A> AggregateOperation1<T,MutableReference<A>,A> |
AggregateOperations.reducing(A emptyAccValue,
DistributedFunction<? super T,? extends A> toAccValueFn,
DistributedBinaryOperator<A> combineAccValuesFn,
DistributedBinaryOperator<A> deductAccValueFn)
A reducing operation maintains an accumulated value that starts out as
emptyAccValue and is iteratively transformed by applying
the combine primitive to it and each stream item's accumulated
value, as returned from toAccValueFn . |
static <T,K,U> AggregateOperation1<T,Map<K,U>,Map<K,U>> |
AggregateOperations.toMap(DistributedFunction<? super T,? extends K> getKeyFn,
DistributedFunction<? super T,? extends U> getValueFn)
Returns an
AggregateOperation1 that accumulates elements
into a HashMap whose keys and values are the result of applying
the provided mapping functions to the input elements. |
static <T,K,U> AggregateOperation1<T,Map<K,U>,Map<K,U>> |
AggregateOperations.toMap(DistributedFunction<? super T,? extends K> getKeyFn,
DistributedFunction<? super T,? extends U> getValueFn)
Returns an
AggregateOperation1 that accumulates elements
into a HashMap whose keys and values are the result of applying
the provided mapping functions to the input elements. |
static <T,K,U> AggregateOperation1<T,Map<K,U>,Map<K,U>> |
AggregateOperations.toMap(DistributedFunction<? super T,? extends K> getKeyFn,
DistributedFunction<? super T,? extends U> getValueFn,
DistributedBinaryOperator<U> mergeFn)
Returns an
AggregateOperation1 that accumulates elements
into a HashMap whose keys and values are the result of applying
the provided mapping functions to the input elements. |
static <T,K,U> AggregateOperation1<T,Map<K,U>,Map<K,U>> |
AggregateOperations.toMap(DistributedFunction<? super T,? extends K> getKeyFn,
DistributedFunction<? super T,? extends U> getValueFn,
DistributedBinaryOperator<U> mergeFn)
Returns an
AggregateOperation1 that accumulates elements
into a HashMap whose keys and values are the result of applying
the provided mapping functions to the input elements. |
static <T,K,U,M extends Map<K,U>> |
AggregateOperations.toMap(DistributedFunction<? super T,? extends K> getKeyFn,
DistributedFunction<? super T,? extends U> getValueFn,
DistributedBinaryOperator<U> mergeFn,
DistributedSupplier<M> createMapFn)
Returns an
AggregateOperation1 that accumulates elements
into a Map whose keys and values are the result of applying the
provided mapping functions to the input elements. |
static <T,K,U,M extends Map<K,U>> |
AggregateOperations.toMap(DistributedFunction<? super T,? extends K> getKeyFn,
DistributedFunction<? super T,? extends U> getValueFn,
DistributedBinaryOperator<U> mergeFn,
DistributedSupplier<M> createMapFn)
Returns an
AggregateOperation1 that accumulates elements
into a Map whose keys and values are the result of applying the
provided mapping functions to the input elements. |
default <T> AggregateOperation1<T,A,R> |
AggregateOperation.withCombiningAccumulateFn(DistributedFunction<T,A> getAccFn)
Returns a copy of this aggregate operation, but with the
accumulate primitive replaced with one that expects to find
accumulator objects in the input and will combine them all into
a single accumulator of the same type. |
<R_NEW> AggregateOperation<A,R_NEW> |
AggregateOperation.withFinishFn(DistributedFunction<? super A,R_NEW> finishFn)
Returns a copy of this aggregate operation, but with the
finish
primitive replaced with the supplied one. |
<R_NEW> AggregateOperation1<T,A,R_NEW> |
AggregateOperation1.withFinishFn(DistributedFunction<? super A,R_NEW> finishFn) |
<R1> AggregateOperation3<T0,T1,T2,A,R1> |
AggregateOperation3.withFinishFn(DistributedFunction<? super A,R1> finishFn) |
<R1> AggregateOperation2<T0,T1,A,R1> |
AggregateOperation2.withFinishFn(DistributedFunction<? super A,R1> finishFn) |
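As an illustration of the finish primitive used by these methods, here is a hedged sketch that sums string lengths. It assumes the builder entry point AggregateOperation.withCreate and the LongAccumulator helper from com.hazelcast.jet.accumulator, neither of which appears in the table above.

```java
import com.hazelcast.jet.accumulator.LongAccumulator;
import com.hazelcast.jet.aggregate.AggregateOperation;
import com.hazelcast.jet.aggregate.AggregateOperation1;

public class FinishPrimitiveSketch {
    public static void main(String[] args) {
        // Sum the lengths of incoming strings; the finish primitive unwraps the accumulator.
        AggregateOperation1<String, LongAccumulator, Long> sumLengths =
                AggregateOperation
                        .withCreate(LongAccumulator::new)
                        .<String>andAccumulate((acc, s) -> acc.add(s.length()))
                        .andFinish(LongAccumulator::get);

        // withFinishFn returns a copy of the operation with the finish primitive replaced.
        AggregateOperation1<String, LongAccumulator, String> asText =
                sumLengths.withFinishFn(acc -> "total length: " + acc.get());

        System.out.println(asText.finishFn().apply(new LongAccumulator()));
    }
}
```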
Modifier and Type | Method and Description |
---|---|
static ProcessorMetaSupplier |
ProcessorMetaSupplier.of(DistributedFunction<com.hazelcast.nio.Address,ProcessorSupplier> addressToSupplier)
Factory method that creates a
ProcessorMetaSupplier from the
supplied function that maps a cluster member address to a ProcessorSupplier . |
static ProcessorMetaSupplier |
ProcessorMetaSupplier.of(DistributedFunction<com.hazelcast.nio.Address,ProcessorSupplier> addressToSupplier,
int preferredLocalParallelism)
Factory method that creates a
ProcessorMetaSupplier from the
supplied function that maps a cluster member address to a ProcessorSupplier . |
<T> Edge |
Edge.partitioned(DistributedFunction<T,?> extractKeyFn)
Activates the
PARTITIONED routing
policy and applies the default
Hazelcast partitioning strategy. |
<T,K> Edge |
Edge.partitioned(DistributedFunction<T,K> extractKeyFn,
Partitioner<? super K> partitioner)
Activates the
PARTITIONED routing
policy and applies the provided partitioning strategy. |
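In the Core (DAG) API, these key-extracting functions typically appear on partitioned edges. The sketch below wires two vertices together; DAG, Vertex, Edge.between, newVertex and the processor factories are assumed from the Core API, and the vertices themselves are illustrative only.

```java
import com.hazelcast.jet.core.DAG;
import com.hazelcast.jet.core.Edge;
import com.hazelcast.jet.core.Vertex;
import com.hazelcast.jet.core.processor.Processors;

import static com.hazelcast.jet.aggregate.AggregateOperations.counting;
import static com.hazelcast.jet.function.DistributedFunctions.wholeItem;

public class PartitionedEdgeSketch {
    public static void main(String[] args) {
        DAG dag = new DAG();

        // An upstream mapping vertex and a downstream grouping/counting vertex.
        Vertex normalize = dag.newVertex("normalize", Processors.mapP((String s) -> s.toLowerCase()));
        Vertex count = dag.newVertex("count", Processors.aggregateByKeyP(wholeItem(), counting()));

        // Route each item to the processor that owns its partition key,
        // using the whole item as the key and the default partitioning strategy.
        dag.edge(Edge.between(normalize, count).partitioned(wholeItem()));
    }
}
```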
Modifier and Type | Method and Description |
---|---|
static <T,K,A> DistributedSupplier<Processor> |
Processors.accumulateByFrameP(DistributedFunction<? super T,K> getKeyFn,
DistributedToLongFunction<? super T> getTimestampFn,
TimestampKind timestampKind,
WindowDefinition windowDef,
AggregateOperation1<? super T,A,?> aggrOp)
Returns a supplier of processors for the first-stage vertex in a
two-stage sliding window aggregation setup (see the
class Javadoc for an explanation of aggregation stages). |
static <T,K,A> DistributedSupplier<Processor> |
Processors.accumulateByKeyP(DistributedFunction<? super T,K> getKeyFn,
AggregateOperation1<? super T,A,?> aggrOp)
Returns a supplier of processors for the first-stage vertex in a
two-stage group-and-aggregate setup.
|
static <T,K,A,R> DistributedSupplier<Processor> |
Processors.aggregateByKeyP(DistributedFunction<? super T,K> getKeyFn,
AggregateOperation1<? super T,A,R> aggrOp)
Returns a supplier of processors for a vertex that groups items by key
and performs the provided aggregate operation on each group.
|
static <T,K,A,R> DistributedSupplier<Processor> |
Processors.aggregateToSessionWindowP(long sessionTimeout,
DistributedToLongFunction<? super T> getTimestampFn,
DistributedFunction<? super T,K> getKeyFn,
AggregateOperation1<? super T,A,R> aggrOp)
Returns a supplier of processors for a vertex that aggregates events into
session windows.
|
static <T,K,A,R> DistributedSupplier<Processor> |
Processors.aggregateToSlidingWindowP(DistributedFunction<? super T,K> getKeyFn,
DistributedToLongFunction<? super T> getTimestampFn,
TimestampKind timestampKind,
WindowDefinition windowDef,
AggregateOperation1<? super T,A,R> aggrOp)
Returns a supplier of processors for a vertex that aggregates events
into a sliding window in a single stage (see the
class Javadoc for an explanation of aggregation stages). |
static <T,R> DistributedSupplier<Processor> |
Processors.flatMapP(DistributedFunction<T,? extends Traverser<? extends R>> mapper)
Returns a supplier of processors for a vertex that applies the provided
item-to-traverser mapping function to each received item and emits all
the items from the resulting traverser.
|
static <T,R> DistributedSupplier<Processor> |
Processors.mapP(DistributedFunction<T,R> mapper)
Returns a supplier of processors for a vertex which, for each received
item, emits the result of applying the given mapping function to it.
|
static <T> DistributedSupplier<Processor> |
DiagnosticProcessors.peekInputP(DistributedFunction<T,String> toStringFn,
DistributedPredicate<T> shouldLogFn,
DistributedSupplier<Processor> wrapped)
Same as
peekInput(toStringFn, shouldLogFn, metaSupplier) ,
but accepts a DistributedSupplier of processors instead of a
meta-supplier. |
static <T> ProcessorMetaSupplier |
DiagnosticProcessors.peekInputP(DistributedFunction<T,String> toStringFn,
DistributedPredicate<T> shouldLogFn,
ProcessorMetaSupplier wrapped)
Returns a meta-supplier that wraps the provided one and adds a logging
layer to each processor it creates.
|
static <T> ProcessorSupplier |
DiagnosticProcessors.peekInputP(DistributedFunction<T,String> toStringFn,
DistributedPredicate<T> shouldLogFn,
ProcessorSupplier wrapped)
Same as
peekInput(toStringFn, shouldLogFn, metaSupplier) ,
but accepts a ProcessorSupplier instead of a meta-supplier. |
static <T> DistributedSupplier<Processor> |
DiagnosticProcessors.peekOutputP(DistributedFunction<T,String> toStringFn,
DistributedPredicate<T> shouldLogFn,
DistributedSupplier<Processor> wrapped)
Same as
peekOutput(toStringFn, shouldLogFn, metaSupplier) ,
but accepts a DistributedSupplier of processors instead of a
meta-supplier. |
static <T> ProcessorMetaSupplier |
DiagnosticProcessors.peekOutputP(DistributedFunction<T,String> toStringFn,
DistributedPredicate<T> shouldLogFn,
ProcessorMetaSupplier wrapped)
Returns a meta-supplier that wraps the provided one and adds a logging
layer to each processor it creates.
|
static <T> ProcessorSupplier |
DiagnosticProcessors.peekOutputP(DistributedFunction<T,String> toStringFn,
DistributedPredicate<T> shouldLogFn,
ProcessorSupplier wrapped)
Same as
peekOutput(toStringFn, shouldLogFn, metaSupplier) ,
but accepts a ProcessorSupplier instead of a meta-supplier. |
static <K,V> DistributedSupplier<Processor> |
DiagnosticProcessors.peekSnapshotP(DistributedFunction<Map.Entry<K,V>,String> toStringFn,
DistributedPredicate<Map.Entry<K,V>> shouldLogFn,
DistributedSupplier<Processor> wrapped)
Same as
peekSnapshot(toStringFn, shouldLogFn, metaSupplier) ,
but accepts a DistributedSupplier of processors instead of a
meta-supplier. |
static <K,V> ProcessorMetaSupplier |
DiagnosticProcessors.peekSnapshotP(DistributedFunction<Map.Entry<K,V>,String> toStringFn,
DistributedPredicate<Map.Entry<K,V>> shouldLogFn,
ProcessorMetaSupplier wrapped)
Returns a meta-supplier that wraps the provided one and adds a logging
layer to each processor it creates.
|
static <K,V> ProcessorSupplier |
DiagnosticProcessors.peekSnapshotP(DistributedFunction<Map.Entry<K,V>,String> toStringFn,
DistributedPredicate<Map.Entry<K,V>> shouldLogFn,
ProcessorSupplier wrapped)
Same as
peekSnapshot(toStringFn, shouldLogFn, metaSupplier) ,
but accepts a ProcessorSupplier instead of a meta-supplier. |
static <K,V,T> ProcessorMetaSupplier |
SourceProcessors.readMapP(String mapName,
com.hazelcast.query.Predicate<K,V> predicate,
DistributedFunction<Map.Entry<K,V>,T> projectionFn)
Returns a supplier of processors for
Sources.map(String, Predicate, DistributedFunction) . |
static <K,V,T> ProcessorMetaSupplier |
SourceProcessors.readRemoteMapP(String mapName,
com.hazelcast.client.config.ClientConfig clientConfig,
com.hazelcast.query.Predicate<K,V> predicate,
DistributedFunction<Map.Entry<K,V>,T> projectionFn)
Returns a supplier of processors for
Sources.remoteMap(String, ClientConfig, Predicate, DistributedFunction) . |
static <K,V,T> ProcessorMetaSupplier |
SourceProcessors.streamCacheP(String cacheName,
DistributedPredicate<com.hazelcast.cache.journal.EventJournalCacheEvent<K,V>> predicate,
DistributedFunction<com.hazelcast.cache.journal.EventJournalCacheEvent<K,V>,T> projection,
boolean startFromLatestSequence)
Returns a supplier of processors for
Sources.cacheJournal(String, DistributedPredicate, DistributedFunction, boolean) . |
static <K,V,T> ProcessorMetaSupplier |
SourceProcessors.streamMapP(String mapName,
DistributedPredicate<com.hazelcast.map.journal.EventJournalMapEvent<K,V>> predicate,
DistributedFunction<com.hazelcast.map.journal.EventJournalMapEvent<K,V>,T> projection,
boolean startFromLatestSequence)
Returns a supplier of processors for
Sources.mapJournal(String, DistributedPredicate, DistributedFunction, boolean) . |
static <K,V,T> ProcessorMetaSupplier |
SourceProcessors.streamRemoteCacheP(String cacheName,
com.hazelcast.client.config.ClientConfig clientConfig,
DistributedPredicate<com.hazelcast.cache.journal.EventJournalCacheEvent<K,V>> predicate,
DistributedFunction<com.hazelcast.cache.journal.EventJournalCacheEvent<K,V>,T> projection,
boolean startFromLatestSequence)
Returns a supplier of processors for
Sources.remoteCacheJournal(
String, ClientConfig, DistributedPredicate, DistributedFunction, boolean
) . |
static <K,V,T> ProcessorMetaSupplier |
SourceProcessors.streamRemoteMapP(String mapName,
com.hazelcast.client.config.ClientConfig clientConfig,
DistributedPredicate<com.hazelcast.map.journal.EventJournalMapEvent<K,V>> predicate,
DistributedFunction<com.hazelcast.map.journal.EventJournalMapEvent<K,V>,T> projection,
boolean startFromLatestSequence)
Returns a supplier of processors for
Sources.remoteMapJournal(
String, ClientConfig, DistributedPredicate, DistributedFunction, boolean
) . |
static <T> ProcessorMetaSupplier |
SinkProcessors.writeFileP(String directoryName,
DistributedFunction<T,String> toStringFn)
Returns a supplier of processors for
Sinks.files(String, DistributedFunction) . |
static <T> ProcessorMetaSupplier |
SinkProcessors.writeFileP(String directoryName,
DistributedFunction<T,String> toStringFn,
Charset charset,
boolean append)
Returns a supplier of processors for
Sinks.files(String, DistributedFunction, Charset, boolean) . |
static <E,K,V> ProcessorMetaSupplier |
HdfsProcessors.writeHdfsP(org.apache.hadoop.mapred.JobConf jobConf,
DistributedFunction<? super E,K> extractKeyFn,
DistributedFunction<? super E,V> extractValueFn)
Returns a supplier of processors for
HdfsSinks.hdfs(JobConf, DistributedFunction, DistributedFunction) . |
static <E,K,V> ProcessorMetaSupplier |
HdfsProcessors.writeHdfsP(org.apache.hadoop.mapred.JobConf jobConf,
DistributedFunction<? super E,K> extractKeyFn,
DistributedFunction<? super E,V> extractValueFn)
Returns a supplier of processors for
HdfsSinks.hdfs(JobConf, DistributedFunction, DistributedFunction) . |
static <T,K,V> ProcessorMetaSupplier |
KafkaProcessors.writeKafkaP(Properties properties,
DistributedFunction<? super T,org.apache.kafka.clients.producer.ProducerRecord<K,V>> toRecordFn)
Returns a supplier of processors for
KafkaSinks.kafka(Properties, DistributedFunction) . |
static <T,K,V> ProcessorMetaSupplier |
KafkaProcessors.writeKafkaP(Properties properties,
String topic,
DistributedFunction<? super T,K> extractKeyFn,
DistributedFunction<? super T,V> extractValueFn)
Returns a supplier of processors for
KafkaSinks.kafka(Properties, String, DistributedFunction, DistributedFunction) . |
static <T,K,V> ProcessorMetaSupplier |
KafkaProcessors.writeKafkaP(Properties properties,
String topic,
DistributedFunction<? super T,K> extractKeyFn,
DistributedFunction<? super T,V> extractValueFn)
Returns a supplier of processors for
KafkaSinks.kafka(Properties, String, DistributedFunction, DistributedFunction) . |
static <T> ProcessorMetaSupplier |
DiagnosticProcessors.writeLoggerP(DistributedFunction<T,String> toStringFn)
Returns a meta-supplier of processors for a sink vertex that logs all
the data items it receives.
|
static <T> ProcessorMetaSupplier |
SinkProcessors.writeSocketP(String host,
int port,
DistributedFunction<T,String> toStringFn,
Charset charset)
Returns a supplier of processors for
Sinks.socket(String, int, DistributedFunction, Charset) . |
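To illustrate how these processor factories compose, here is a hedged sketch that wraps a mapping processor supplier with a diagnostic peeking layer. The lambdas stand in for real toStringFn and shouldLogFn choices:

```java
import com.hazelcast.jet.core.Processor;
import com.hazelcast.jet.core.processor.DiagnosticProcessors;
import com.hazelcast.jet.core.processor.Processors;
import com.hazelcast.jet.function.DistributedSupplier;

public class PeekWrappingSketch {
    public static void main(String[] args) {
        // A plain mapping processor supplier...
        DistributedSupplier<Processor> mapWords =
                Processors.mapP((String s) -> s.toUpperCase());

        // ...wrapped so that every emitted item longer than three characters is logged.
        DistributedSupplier<Processor> peeked = DiagnosticProcessors.peekOutputP(
                (String s) -> "emitted: " + s,   // toStringFn
                (String s) -> s.length() > 3,    // shouldLogFn
                mapWords);

        // 'peeked' can now be passed to DAG.newVertex(...) in place of 'mapWords'.
    }
}
```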
Modifier and Type | Method and Description |
---|---|
static <K,A> DistributedSupplier<Processor> |
Processors.coAccumulateByKeyP(List<DistributedFunction<?,? extends K>> getKeyFs,
AggregateOperation<A,?> aggrOp)
Returns a supplier of processors for the first-stage vertex in a
two-stage group-and-aggregate setup.
|
static <K,A,R> DistributedSupplier<Processor> |
Processors.coAggregateByKeyP(List<DistributedFunction<?,? extends K>> getKeyFs,
AggregateOperation<A,R> aggrOp)
Returns a supplier of processors for a vertex that groups items by key
and performs the provided aggregate operation on each group.
|
Modifier and Type | Interface and Description |
---|---|
interface |
DistributedUnaryOperator<T>
Represents an operation on a single operand that produces a result of the
same type as its operand.
|
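Since DistributedUnaryOperator is a subinterface of DistributedFunction, it can be used wherever a same-type mapping is needed; a minimal sketch:

```java
import com.hazelcast.jet.function.DistributedUnaryOperator;

public class UnaryOperatorSketch {
    public static void main(String[] args) {
        // Same input and output type, and serializable like the other Distributed* interfaces.
        DistributedUnaryOperator<String> trim = String::trim;
        System.out.println(trim.apply("  jet  ")); // prints "jet"
    }
}
```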
Modifier and Type | Method and Description |
---|---|
default <V> DistributedFunction<T,V> |
DistributedFunction.andThen(DistributedFunction<? super R,? extends V> after)
Returns a composed function that first applies this function to
its input, and then applies the
after function to the result. |
default <V> DistributedFunction<V,R> |
DistributedFunction.compose(DistributedFunction<? super V,? extends T> before)
Returns a composed function that first applies the
before
function to its input, and then applies this function to the result. |
static <T> DistributedFunction<T,String> |
DistributedFunctions.constantKey()
Returns a function that always evaluates to the
DistributedFunctions.CONSTANT_KEY . |
static <K,V> DistributedFunction<Map.Entry<K,V>,K> |
DistributedFunctions.entryKey()
Returns a function that extracts the key of a
Map.Entry . |
static <K,V> DistributedFunction<Map.Entry<K,V>,V> |
DistributedFunctions.entryValue()
Returns a function that extracts the value of a
Map.Entry . |
static <T> DistributedFunction<T,T> |
DistributedFunction.identity()
Returns a function that always returns its input argument.
|
static <T> DistributedFunction<T,T> |
DistributedFunctions.wholeItem()
Synonym for
identity() , to be used as a
projection function (e.g., key extractor). |
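A small sketch of the entryKey() and entryValue() helpers, which are the usual key extractors and projections for Map.Entry items; SimpleImmutableEntry is used only to have an entry to apply them to:

```java
import com.hazelcast.jet.function.DistributedFunction;

import java.util.AbstractMap.SimpleImmutableEntry;
import java.util.Map;

import static com.hazelcast.jet.function.DistributedFunctions.entryKey;
import static com.hazelcast.jet.function.DistributedFunctions.entryValue;

public class EntryHelpersSketch {
    public static void main(String[] args) {
        Map.Entry<String, Long> wordCount = new SimpleImmutableEntry<>("jet", 42L);

        DistributedFunction<Map.Entry<String, Long>, String> keyFn = entryKey();
        DistributedFunction<Map.Entry<String, Long>, Long> valueFn = entryValue();

        // Typical use: key extractors and projections for groupBy, partitioned edges and sinks.
        System.out.println(keyFn.apply(wordCount) + " -> " + valueFn.apply(wordCount));
    }
}
```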
Modifier and Type | Method and Description |
---|---|
default <V> DistributedBiFunction<T,U,V> |
DistributedBiFunction.andThen(DistributedFunction<? super R,? extends V> after)
Returns a composed function that first applies this function to
its input, and then applies the
after function to the result. |
default <V> DistributedFunction<T,V> |
DistributedFunction.andThen(DistributedFunction<? super R,? extends V> after)
Returns a composed function that first applies this function to
its input, and then applies the
after function to the result. |
static <T,U extends Comparable<? super U>> |
DistributedComparator.comparing(DistributedFunction<? super T,? extends U> keyExtractor) |
static <T,U> DistributedComparator<T> |
DistributedComparator.comparing(DistributedFunction<? super T,? extends U> keyExtractor,
DistributedComparator<? super U> keyComparator) |
default <V> DistributedFunction<V,R> |
DistributedFunction.compose(DistributedFunction<? super V,? extends T> before)
Returns a composed function that first applies the
before
function to its input, and then applies this function to the result. |
<U> DistributedOptional<U> |
DistributedOptional.flatMap(DistributedFunction<? super T,DistributedOptional<U>> mapper)
If a value is present, apply the provided
Optional-bearing
mapping function to it, return that result, otherwise return an empty
Optional . |
<U> DistributedOptional<U> |
DistributedOptional.map(DistributedFunction<? super T,? extends U> mapper)
If a value is present, apply the provided mapping function to it,
and if the result is non-null, return an
Optional describing the
result. |
default <U extends Comparable<? super U>> |
DistributedComparator.thenComparing(DistributedFunction<? super T,? extends U> keyExtractor) |
default <U> DistributedComparator<T> |
DistributedComparator.thenComparing(DistributedFunction<? super T,? extends U> keyExtractor,
DistributedComparator<? super U> keyComparator) |
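The comparing/thenComparing factories accept key-extracting DistributedFunctions and yield a serializable comparator. A minimal local sketch, sorting an in-memory list rather than a distributed stream:

```java
import com.hazelcast.jet.function.DistributedComparator;

import java.util.Arrays;
import java.util.List;

public class ComparatorSketch {
    public static void main(String[] args) {
        List<String> words = Arrays.asList("stream", "jet", "map");

        // First compare by length, then alphabetically; both keys come from DistributedFunctions.
        DistributedComparator<String> byLengthThenAlpha =
                DistributedComparator.comparing((String s) -> s.length())
                                     .thenComparing((String s) -> s);

        words.sort(byLengthThenAlpha);
        System.out.println(words); // [jet, map, stream]
    }
}
```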
Modifier and Type | Method and Description |
---|---|
DistributedFunction<A,R> |
DistributedCollector.finisher()
Perform the final transformation from the intermediate accumulation type
A to the final result type R . |
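The finisher is the DistributedFunction that converts the accumulator into the final result. The sketch below builds a custom collector with DistributedCollector.of (documented in the next table) and reads its finisher back; the StringBuilder-joining collector is illustrative only:

```java
import com.hazelcast.jet.function.DistributedFunction;
import com.hazelcast.jet.stream.DistributedCollector;

public class FinisherSketch {
    public static void main(String[] args) {
        // Accumulate strings into a StringBuilder, finish by converting it to a String.
        DistributedCollector<String, StringBuilder, String> joining =
                DistributedCollector.of(
                        StringBuilder::new,        // supplier
                        StringBuilder::append,     // accumulator: (StringBuilder, String)
                        StringBuilder::append,     // combiner: (StringBuilder, StringBuilder)
                        StringBuilder::toString);  // finisher: DistributedFunction<A,R>

        DistributedFunction<StringBuilder, String> finisher = joining.finisher();
        System.out.println(finisher.apply(new StringBuilder("jet")));
    }
}
```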
Modifier and Type | Method and Description |
---|---|
static <T,A,R,RR> DistributedCollector<T,A,RR> |
DistributedCollectors.collectingAndThen(DistributedCollector<T,A,R> downstream,
DistributedFunction<R,RR> finisher)
Adapts a
DistributedCollector to perform an additional finishing
transformation. |
default <R> DistributedStream<R> |
DistributedStream.flatMap(DistributedFunction<? super T,? extends java.util.stream.Stream<? extends R>> mapper)
Returns a stream consisting of the results of replacing each element of
this stream with the contents of a mapped stream produced by applying
the provided mapping function to each element.
|
default DistributedDoubleStream |
DistributedStream.flatMapToDouble(DistributedFunction<? super T,? extends java.util.stream.DoubleStream> mapper)
Returns a
DoubleStream consisting of the results of replacing
each element of this stream with the contents of a mapped stream produced
by applying the provided mapping function to each element. |
default DistributedIntStream |
DistributedStream.flatMapToInt(DistributedFunction<? super T,? extends java.util.stream.IntStream> mapper)
Returns an
IntStream consisting of the results of replacing each
element of this stream with the contents of a mapped stream produced by
applying the provided mapping function to each element. |
default DistributedLongStream |
DistributedStream.flatMapToLong(DistributedFunction<? super T,? extends java.util.stream.LongStream> mapper)
Returns a
LongStream consisting of the results of replacing each
element of this stream with the contents of a mapped stream produced by
applying the provided mapping function to each element. |
static <T,K> DistributedCollector<T,?,Map<K,List<T>>> |
DistributedCollectors.groupingBy(DistributedFunction<? super T,? extends K> classifier)
Returns a
DistributedCollector implementing a "group by" operation on
input elements of type T , grouping elements according to a
classification function, and returning the results in a Map . |
static <T,K,A,D> DistributedCollector<T,?,Map<K,D>> |
DistributedCollectors.groupingBy(DistributedFunction<? super T,? extends K> classifier,
DistributedCollector<? super T,A,D> downstream)
Returns a
DistributedCollector implementing a cascaded "group by" operation
on input elements of type T , grouping elements according to a
classification function, and then performing a reduction operation on
the values associated with a given key using the specified downstream
DistributedCollector . |
static <T,K,D,A,M extends Map<K,D>> |
DistributedCollectors.groupingBy(DistributedFunction<? super T,? extends K> classifier,
DistributedSupplier<M> mapFactory,
DistributedCollector<? super T,A,D> downstream)
Returns a
DistributedCollector implementing a cascaded "group by" operation
on input elements of type T , grouping elements according to a
classification function, and then performing a reduction operation on
the values associated with a given key using the specified downstream
DistributedCollector . |
static <T,K> DistributedCollector.Reducer<T,com.hazelcast.cache.ICache<K,List<T>>> |
DistributedCollectors.groupingByToICache(String cacheName,
DistributedFunction<? super T,? extends K> classifier)
Returns a
Reducer implementing a "group by" operation on
input elements of type T , grouping elements according to a
classification function, and returning the results in a
new distributed Hazelcast ICache . |
static <T,K,A,D> DistributedCollector.Reducer<T,com.hazelcast.cache.ICache<K,D>> |
DistributedCollectors.groupingByToICache(String cacheName,
DistributedFunction<? super T,? extends K> classifier,
DistributedCollector<? super T,A,D> downstream)
Returns a
Reducer implementing a cascaded "group by" operation
on input elements of type T , grouping elements according to a
classification function, and then performing a reduction operation on
the values associated with a given key using the specified downstream
DistributedCollector . |
static <T,K> DistributedCollector.Reducer<T,com.hazelcast.core.IMap<K,List<T>>> |
DistributedCollectors.groupingByToIMap(String mapName,
DistributedFunction<? super T,? extends K> classifier)
Returns a
Reducer implementing a "group by" operation on
input elements of type T , grouping elements according to a
classification function, and returning the results in a
new distributed Hazelcast IMap . |
static <T,K,A,D> DistributedCollector.Reducer<T,com.hazelcast.core.IMap<K,D>> |
DistributedCollectors.groupingByToIMap(String mapName,
DistributedFunction<? super T,? extends K> classifier,
DistributedCollector<? super T,A,D> downstream)
Returns a
Reducer implementing a cascaded "group by" operation
on input elements of type T , grouping elements according to a
classification function, and then performing a reduction operation on
the values associated with a given key using the specified downstream
DistributedCollector . |
default <R> DistributedStream<R> |
DistributedStream.map(DistributedFunction<? super T,? extends R> mapper)
Returns a stream consisting of the results of applying the given
function to the elements of this stream.
|
static <T,U,A,R> DistributedCollector<T,?,R> |
DistributedCollectors.mapping(DistributedFunction<? super T,? extends U> mapper,
DistributedCollector<? super U,A,R> downstream)
Adapts a
DistributedCollector accepting elements of type U to one
accepting elements of type T by applying a mapping function to
each input element before accumulation. |
static <T,A,R> DistributedCollector<T,A,R> |
DistributedCollector.of(DistributedSupplier<A> supplier,
DistributedBiConsumer<A,T> accumulator,
DistributedBinaryOperator<A> combiner,
DistributedFunction<A,R> finisher,
java.util.stream.Collector.Characteristics... characteristics)
Returns a new
DistributedCollector described by the given supplier ,
accumulator , combiner , and finisher functions. |
static <T,U> DistributedCollector<T,?,U> |
DistributedCollectors.reducing(U identity,
DistributedFunction<? super T,? extends U> mapper,
DistributedBinaryOperator<U> op)
Returns a
DistributedCollector which performs a reduction of its
input elements under a specified mapping function and
DistributedBinaryOperator . |
<T> DistributedStream<T> |
IStreamMap.stream(com.hazelcast.query.Predicate<K,V> predicate,
DistributedFunction<Map.Entry<K,V>,T> projectionFn)
Returns a parallel and distributed
Stream with this map as its source. |
static <T,K,U> DistributedCollector.Reducer<T,IStreamCache<K,U>> |
DistributedCollectors.toICache(String cacheName,
DistributedFunction<? super T,? extends K> keyMapper,
DistributedFunction<? super T,? extends U> valueMapper)
Returns a
Reducer that accumulates elements into a
new Hazelcast ICache whose keys and values are the result of applying the provided
mapping functions to the input elements. |
static <T,K,U> DistributedCollector.Reducer<T,IStreamCache<K,U>> |
DistributedCollectors.toICache(String cacheName,
DistributedFunction<? super T,? extends K> keyMapper,
DistributedFunction<? super T,? extends U> valueMapper)
Returns a
Reducer that accumulates elements into a
new Hazelcast ICache whose keys and values are the result of applying the provided
mapping functions to the input elements. |
static <T,K,U> DistributedCollector.Reducer<T,IStreamCache<K,U>> |
DistributedCollectors.toICache(String cacheName,
DistributedFunction<? super T,? extends K> keyMapper,
DistributedFunction<? super T,? extends U> valueMapper,
DistributedBinaryOperator<U> mergeFunction)
Returns a
Reducer that accumulates elements into a
new distributed Hazelcast ICache whose keys and values are the result of applying
the provided mapping functions to the input elements. |
static <T,K,U> DistributedCollector.Reducer<T,IStreamCache<K,U>> |
DistributedCollectors.toICache(String cacheName,
DistributedFunction<? super T,? extends K> keyMapper,
DistributedFunction<? super T,? extends U> valueMapper,
DistributedBinaryOperator<U> mergeFunction)
Returns a
Reducer that accumulates elements into a
new distributed Hazelcast ICache whose keys and values are the result of applying
the provided mapping functions to the input elements. |
static <T,K,U> DistributedCollector.Reducer<T,IStreamMap<K,U>> |
DistributedCollectors.toIMap(String mapName,
DistributedFunction<? super T,? extends K> keyMapper,
DistributedFunction<? super T,? extends U> valueMapper)
Returns a
Reducer that accumulates elements into a
new Hazelcast IMap whose keys and values are the result of applying the provided
mapping functions to the input elements. |
static <T,K,U> DistributedCollector.Reducer<T,IStreamMap<K,U>> |
DistributedCollectors.toIMap(String mapName,
DistributedFunction<? super T,? extends K> keyMapper,
DistributedFunction<? super T,? extends U> valueMapper)
Returns a
Reducer that accumulates elements into a
new Hazelcast IMap whose keys and values are the result of applying the provided
mapping functions to the input elements. |
static <T,K,U> DistributedCollector.Reducer<T,IStreamMap<K,U>> |
DistributedCollectors.toIMap(String mapName,
DistributedFunction<? super T,? extends K> keyMapper,
DistributedFunction<? super T,? extends U> valueMapper,
DistributedBinaryOperator<U> mergeFunction)
Returns a
Reducer that accumulates elements into a
new distributed Hazelcast IMap whose keys and values are the result of applying
the provided mapping functions to the input elements. |
static <T,K,U> DistributedCollector.Reducer<T,IStreamMap<K,U>> |
DistributedCollectors.toIMap(String mapName,
DistributedFunction<? super T,? extends K> keyMapper,
DistributedFunction<? super T,? extends U> valueMapper,
DistributedBinaryOperator<U> mergeFunction)
Returns a
Reducer that accumulates elements into a
new distributed Hazelcast IMap whose keys and values are the result of applying
the provided mapping functions to the input elements. |
static <T,K,U> DistributedCollector<T,?,Map<K,U>> |
DistributedCollectors.toMap(DistributedFunction<? super T,? extends K> keyMapper,
DistributedFunction<? super T,? extends U> valueMapper)
Returns a
DistributedCollector that accumulates elements into a
Map whose keys and values are the result of applying the provided
mapping functions to the input elements. |
static <T,K,U> DistributedCollector<T,?,Map<K,U>> |
DistributedCollectors.toMap(DistributedFunction<? super T,? extends K> keyMapper,
DistributedFunction<? super T,? extends U> valueMapper)
Returns a
DistributedCollector that accumulates elements into a
Map whose keys and values are the result of applying the provided
mapping functions to the input elements. |
static <T,K,U> java.util.stream.Collector<T,?,Map<K,U>> |
DistributedCollectors.toMap(DistributedFunction<? super T,? extends K> keyMapper,
DistributedFunction<? super T,? extends U> valueMapper,
DistributedBinaryOperator<U> mergeFunction)
Returns a
DistributedCollector that accumulates elements into a
Map whose keys and values are the result of applying the provided
mapping functions to the input elements. |
static <T,K,U> java.util.stream.Collector<T,?,Map<K,U>> |
DistributedCollectors.toMap(DistributedFunction<? super T,? extends K> keyMapper,
DistributedFunction<? super T,? extends U> valueMapper,
DistributedBinaryOperator<U> mergeFunction)
Returns a
DistributedCollector that accumulates elements into a
Map whose keys and values are the result of applying the provided
mapping functions to the input elements. |
static <T,K,U,M extends Map<K,U>> |
DistributedCollectors.toMap(DistributedFunction<? super T,? extends K> keyMapper,
DistributedFunction<? super T,? extends U> valueMapper,
DistributedBinaryOperator<U> mergeFunction,
DistributedSupplier<M> mapSupplier)
Returns a
DistributedCollector that accumulates elements into a
Map whose keys and values are the result of applying the provided
mapping functions to the input elements. |
static <T,K,U,M extends Map<K,U>> |
DistributedCollectors.toMap(DistributedFunction<? super T,? extends K> keyMapper,
DistributedFunction<? super T,? extends U> valueMapper,
DistributedBinaryOperator<U> mergeFunction,
DistributedSupplier<M> mapSupplier)
Returns a
DistributedCollector that accumulates elements into a
Map whose keys and values are the result of applying the provided
mapping functions to the input elements. |
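Putting the java.util.stream integration together, here is a hedged end-to-end sketch. It assumes the no-argument IStreamMap.stream() overload (this page only lists the Predicate/projection variant) and that Jet.newJetInstance() starts a local member; it is a sketch, not a canonical Jet sample.

```java
import com.hazelcast.jet.Jet;
import com.hazelcast.jet.JetInstance;
import com.hazelcast.jet.stream.DistributedCollectors;
import com.hazelcast.jet.stream.IStreamMap;

import java.util.Map;

import static com.hazelcast.jet.function.DistributedFunctions.entryKey;
import static com.hazelcast.jet.function.DistributedFunctions.entryValue;

public class DistributedStreamSketch {
    public static void main(String[] args) {
        JetInstance jet = Jet.newJetInstance();
        try {
            IStreamMap<String, Integer> scores = jet.getMap("scores");
            scores.put("alice", 7);
            scores.put("bob", 3);

            // Collect the distributed entry stream into a local Map using the
            // entryKey()/entryValue() projections as key and value mappers.
            Map<String, Integer> snapshot = scores.stream()
                    .collect(DistributedCollectors.toMap(entryKey(), entryValue()));

            System.out.println(snapshot);
        } finally {
            Jet.shutdownAll();
        }
    }
}
```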
Copyright © 2017 Hazelcast, Inc. All Rights Reserved.