public final class Processors extends Object

Static utility class with factory methods for Jet processors. See also the package-level documentation.
Many of the processors deal with an aggregating operation over stream items. Prior to aggregation, items may be grouped by an arbitrary key and/or an event timestamp-based window. There are two main aggregation setups: single-stage and two-stage. This is the outline of the DAG for the single-stage setup:

     -----------------
    | upstream vertex |
     -----------------
             |
             | partitioned-distributed
             V
        -----------
       | aggregate |
        -----------

In the two-stage setup the first stage applies just the accumulate aggregation primitive and the second stage does combine and finish. The essential property
of this setup is that the edge leading to the first stage is local,
incurring no network traffic, and only the edge from the first to the
second stage is distributed. There is only one item per group traveling on
the distributed edge. Compared to the single-stage setup this can
dramatically reduce network traffic, but it needs more memory to keep
track of all keys on each cluster member. This is the outline of the DAG:
     -----------------
    | upstream vertex |
     -----------------
             |
             | partitioned-local
             V
       ------------
      | accumulate |
       ------------
             |
             | partitioned-distributed
             V
     ----------------
    | combine/finish |
     ----------------

The variants without a grouping key are equivalent to grouping by a single, global key. In that case the edge towards the final-stage vertex must be all-to-one and the local parallelism of the vertex must be one. Unless the volume of the aggregated data is small (e.g., some side branch off the main flow in the DAG), the best choice is this two-stage setup:
     -----------------
    | upstream vertex |
     -----------------
             |
             | local, non-partitioned
             V
       ------------
      | accumulate |
       ------------
             |
             | distributed, all-to-one
             V
     ----------------
    | combine/finish |  localParallelism = 1
     ----------------

This will parallelize and distribute most of the processing, and the second-stage processor will receive just a single item from each upstream processor, doing very little work.
| | single-stage | stage 1/2 | stage 2/2 |
|---|---|---|---|
| batch, no grouping | aggregate() | accumulate() | combine() |
| batch, group by key | aggregateByKey() | accumulateByKey() | combineByKey() |
| stream, group by key and aligned window | aggregateToSlidingWindow() | accumulateByFrame() | combineToSlidingWindow() |
| stream, group by key and session window | aggregateToSessionWindow() | N/A | N/A |
Tumbling window is a special case of sliding window with sliding step =
window size. To achieve the effect of aggregation without a
grouping key, specify constantKey()
as the key-extracting function.
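To make the two-stage setup concrete, here is a minimal word-count DAG sketch. The readListP source, the import locations, and AggregateOperations.counting() are assumptions that may differ across Jet versions; the Processors calls and edge configuration follow the signatures on this page.

    import com.hazelcast.jet.core.DAG;
    import com.hazelcast.jet.core.Edge;
    import com.hazelcast.jet.core.Vertex;

    import static com.hazelcast.jet.aggregate.AggregateOperations.counting;
    import static com.hazelcast.jet.core.processor.Processors.accumulateByKeyP;
    import static com.hazelcast.jet.core.processor.Processors.combineByKeyP;
    import static com.hazelcast.jet.core.processor.SourceProcessors.readListP;
    import static com.hazelcast.jet.function.DistributedFunctions.entryKey;
    import static com.hazelcast.jet.function.DistributedFunctions.wholeItem;

    public class TwoStageWordCount {
        public static DAG buildDag() {
            DAG dag = new DAG();
            // Source of String items; readListP("words") is an assumed convenience source.
            Vertex source = dag.newVertex("source", readListP("words"));
            // Stage 1: local accumulation, one accumulator per distinct word per member.
            Vertex accumulate = dag.newVertex("accumulate",
                    accumulateByKeyP(wholeItem(), counting()));
            // Stage 2: combine the per-member partial counts; emits Map.Entry<String, Long>.
            Vertex combine = dag.newVertex("combine", combineByKeyP(counting()));

            dag.edge(Edge.between(source, accumulate)
                         .partitioned(wholeItem()))        // partitioned-local edge
               .edge(Edge.between(accumulate, combine)
                         .partitioned(entryKey())
                         .distributed());                  // partitioned-distributed edge
            return dag;
        }
    }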
Method Summary

static <T,K,A> DistributedSupplier<Processor>
accumulateByFrameP(DistributedFunction<? super T,K> getKeyFn,
                   DistributedToLongFunction<? super T> getTimestampFn,
                   TimestampKind timestampKind,
                   WindowDefinition windowDef,
                   AggregateOperation1<? super T,A,?> aggrOp)
    Returns a supplier of processors for the first-stage vertex in a two-stage sliding window aggregation setup (see the class Javadoc for an explanation of aggregation stages).

static <T,K,A> DistributedSupplier<Processor>
accumulateByKeyP(DistributedFunction<? super T,K> getKeyFn,
                 AggregateOperation1<? super T,A,?> aggrOp)
    Returns a supplier of processors for the first-stage vertex in a two-stage group-and-aggregate setup.

static <T,A,R> DistributedSupplier<Processor>
accumulateP(AggregateOperation1<T,A,R> aggrOp)
    Returns a supplier of processors for a vertex that performs the provided aggregate operation on all the items it receives.

static <T,K,A,R> DistributedSupplier<Processor>
aggregateByKeyP(DistributedFunction<? super T,K> getKeyFn,
                AggregateOperation1<? super T,A,R> aggrOp)
    Returns a supplier of processors for a vertex that groups items by key and performs the provided aggregate operation on each group.

static <T,A,R> DistributedSupplier<Processor>
aggregateP(AggregateOperation1<T,A,R> aggrOp)
    Returns a supplier of processors for a vertex that performs the provided aggregate operation on all the items it receives.

static <T,K,A,R> DistributedSupplier<Processor>
aggregateToSessionWindowP(long sessionTimeout,
                          DistributedToLongFunction<? super T> getTimestampFn,
                          DistributedFunction<? super T,K> getKeyFn,
                          AggregateOperation1<? super T,A,R> aggrOp)
    Returns a supplier of processors for a vertex that aggregates events into session windows.

static <T,K,A,R> DistributedSupplier<Processor>
aggregateToSlidingWindowP(DistributedFunction<? super T,K> getKeyFn,
                          DistributedToLongFunction<? super T> getTimestampFn,
                          TimestampKind timestampKind,
                          WindowDefinition windowDef,
                          AggregateOperation1<? super T,A,R> aggrOp)
    Returns a supplier of processors for a vertex that aggregates events into a sliding window in a single stage (see the class Javadoc for an explanation of aggregation stages).

static <K,A> DistributedSupplier<Processor>
coAccumulateByKeyP(List<DistributedFunction<?,? extends K>> getKeyFs,
                   AggregateOperation<A,?> aggrOp)
    Returns a supplier of processors for the first-stage vertex in a two-stage group-and-aggregate setup.

static <K,A,R> DistributedSupplier<Processor>
coAggregateByKeyP(List<DistributedFunction<?,? extends K>> getKeyFs,
                  AggregateOperation<A,R> aggrOp)
    Returns a supplier of processors for a vertex that groups items by key and performs the provided aggregate operation on each group.

static <A,R> DistributedSupplier<Processor>
combineByKeyP(AggregateOperation<A,R> aggrOp)
    Returns a supplier of processors for the second-stage vertex in a two-stage group-and-aggregate setup.

static <T,A,R> DistributedSupplier<Processor>
combineP(AggregateOperation1<T,A,R> aggrOp)
    Returns a supplier of processors for a vertex that performs the provided aggregate operation on all the items it receives.

static <K,A,R> DistributedSupplier<Processor>
combineToSlidingWindowP(WindowDefinition windowDef,
                        AggregateOperation1<?,A,R> aggrOp)
    Returns a supplier of processors for the second-stage vertex in a two-stage sliding window aggregation setup (see the class Javadoc for an explanation of aggregation stages).

static <T> DistributedSupplier<Processor>
filterP(DistributedPredicate<T> predicate)
    Returns a supplier of processors for a vertex that emits the same items it receives, but only those that pass the given predicate.

static <T,R> DistributedSupplier<Processor>
flatMapP(DistributedFunction<T,? extends Traverser<? extends R>> mapper)
    Returns a supplier of processors for a vertex that applies the provided item-to-traverser mapping function to each received item and emits all the items from the resulting traverser.

static <T> DistributedSupplier<Processor>
insertWatermarksP(DistributedToLongFunction<T> getTimestampF,
                  DistributedSupplier<WatermarkPolicy> newWmPolicyF,
                  WatermarkEmissionPolicy wmEmitPolicy)
    Returns a supplier of processors for a vertex that inserts watermark items into the stream.

static <T,R> DistributedSupplier<Processor>
mapP(DistributedFunction<T,R> mapper)
    Returns a supplier of processors for a vertex which, for each received item, emits the result of applying the given mapping function to it.

static DistributedSupplier<Processor>
nonCooperativeP(DistributedSupplier<Processor> wrapped)
    Decorates a Supplier<Processor> into one that will declare its processors non-cooperative.

static ProcessorMetaSupplier
nonCooperativeP(ProcessorMetaSupplier wrapped)
    Decorates a processor meta-supplier with one that will declare all its processors non-cooperative.

static ProcessorSupplier
nonCooperativeP(ProcessorSupplier wrapped)
    Decorates a ProcessorSupplier with one that will declare all its processors non-cooperative.

static DistributedSupplier<Processor>
noopP()
    Returns a supplier of processors that consume all their input (if any) and do nothing with it.
Method Detail

aggregateByKeyP

@Nonnull
public static <T,K,A,R> DistributedSupplier<Processor> aggregateByKeyP(
        @Nonnull DistributedFunction<? super T,K> getKeyFn,
        @Nonnull AggregateOperation1<? super T,A,R> aggrOp)

Returns a supplier of processors for a vertex that groups items by key and performs the provided aggregate operation on each group. After exhausting all its input it emits one Map.Entry<K, R> per distinct key.

This processor has state, but does not save it to the snapshot. On job restart, the state will be lost.

Type Parameters:
T - type of received item
K - type of key
A - type of accumulator returned from aggregateOperation.createAccumulatorFn()
R - type of the finished result returned from aggregateOperation.finishAccumulationFn()

Parameters:
getKeyFn - computes the key from the entry
aggrOp - the aggregate operation to perform
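To make this concrete, a single-stage variant of the word count from the class-level sketch would do the whole operation in one vertex, fed over a partitioned-distributed edge (dag, source, wholeItem() and counting() reuse that sketch's assumptions):

    // Single-stage group-and-aggregate: one vertex does accumulate, combine and finish.
    Vertex aggregate = dag.newVertex("aggregate",
            aggregateByKeyP(wholeItem(), counting()));

    // The inbound edge must be partitioned and distributed so that each
    // distinct key is observed by exactly one processor cluster-wide.
    dag.edge(Edge.between(source, aggregate)
                 .partitioned(wholeItem())
                 .distributed());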
accumulateByKeyP

@Nonnull
public static <T,K,A> DistributedSupplier<Processor> accumulateByKeyP(
        @Nonnull DistributedFunction<? super T,K> getKeyFn,
        @Nonnull AggregateOperation1<? super T,A,?> aggrOp)

Returns a supplier of processors for the first-stage vertex in a two-stage group-and-aggregate setup. The vertex groups items by the grouping key and applies the accumulate primitive (AggregateOperation1.accumulateFn()) to each group. After exhausting all its input it emits one Map.Entry<K, A> per observed key.

This processor has state, but does not save it to the snapshot. On job restart, the state will be lost.

Type Parameters:
T - type of received item
K - type of key
A - type of accumulator returned from aggrOp.createAccumulatorFn()

Parameters:
getKeyFn - computes the key from the entry
aggrOp - the aggregate operation to perform
coAggregateByKeyP

@Nonnull
public static <K,A,R> DistributedSupplier<Processor> coAggregateByKeyP(
        @Nonnull List<DistributedFunction<?,? extends K>> getKeyFs,
        @Nonnull AggregateOperation<A,R> aggrOp)

Returns a supplier of processors for a vertex that groups items by key and performs the provided aggregate operation on each group. After exhausting all its input it emits one Map.Entry<K, R> per distinct key.

The vertex accepts input from one or more inbound edges. The type of items may be different on each edge. For each edge a separate key-extracting function must be supplied, and the aggregate operation must contain a separate accumulation function for each edge.

This processor has state, but does not save it to the snapshot. On job restart, the state will be lost.

Type Parameters:
K - type of key
A - type of accumulator returned from aggrOp.createAccumulatorFn()
R - type of the finished result returned from aggrOp.finishAccumulationFn()

Parameters:
getKeyFs - functions that compute the grouping key
aggrOp - the aggregate operation
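As a sketch of a two-input co-aggregation: it assumes the Tag-based AggregateOperation builder (AggregateOperation.withCreate(...).andAccumulate(tag, ...)), the LongAccumulator accumulator class, and hypothetical PageVisit/Payment event types; verify the builder and package names against your Jet version.

    // Edge ordinal 0 carries PageVisit items, ordinal 1 carries Payment items.
    List<DistributedFunction<?, ? extends String>> keyFns = Arrays.asList(
            (DistributedFunction<PageVisit, String>) PageVisit::getUserId,
            (DistributedFunction<Payment, String>) Payment::getUserId);

    // One accumulate function per inbound edge, identified by tag (assumed builder API).
    AggregateOperation<LongAccumulator, Long> countBoth = AggregateOperation
            .withCreate(LongAccumulator::new)
            .andAccumulate(Tag.<PageVisit>tag0(), (acc, visit) -> acc.add(1))
            .andAccumulate(Tag.<Payment>tag1(), (acc, payment) -> acc.add(1))
            .andCombine(LongAccumulator::add)
            .andFinish(LongAccumulator::get);

    Vertex coAggregate = dag.newVertex("co-aggregate",
            coAggregateByKeyP(keyFns, countBoth));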
coAccumulateByKeyP

@Nonnull
public static <K,A> DistributedSupplier<Processor> coAccumulateByKeyP(
        @Nonnull List<DistributedFunction<?,? extends K>> getKeyFs,
        @Nonnull AggregateOperation<A,?> aggrOp)

Returns a supplier of processors for the first-stage vertex in a two-stage group-and-aggregate setup. The vertex groups items by the grouping key and applies the accumulate primitive to each group. After exhausting all its input it emits one Map.Entry<K, A> per distinct key.

The vertex accepts input from one or more inbound edges. The type of items may be different on each edge. For each edge a separate key-extracting function must be supplied, and the aggregate operation must contain a separate accumulation function for each edge.

This processor has state, but does not save it to the snapshot. On job restart, the state will be lost.

Type Parameters:
K - type of key
A - type of accumulator returned from aggrOp.createAccumulatorFn()

Parameters:
getKeyFs - functions that compute the grouping key
aggrOp - the aggregate operation to perform
combineByKeyP

@Nonnull
public static <A,R> DistributedSupplier<Processor> combineByKeyP(
        @Nonnull AggregateOperation<A,R> aggrOp)

Returns a supplier of processors for the second-stage vertex in a two-stage group-and-aggregate setup. Each processor applies the combine aggregation primitive to the entries received from several upstream instances of accumulateByKey(). After exhausting all its input it emits one Map.Entry<K, R> per distinct key.

Since the input to this vertex must be bounded, its primary use case is batch jobs.

This processor has state, but does not save it to the snapshot. On job restart, the state will be lost.

Type Parameters:
A - type of accumulator returned from aggrOp.createAccumulatorFn()
R - type of the finished result returned from aggrOp.finishAccumulationFn()

Parameters:
aggrOp - the aggregate operation to perform
aggregateP

@Nonnull
public static <T,A,R> DistributedSupplier<Processor> aggregateP(
        @Nonnull AggregateOperation1<T,A,R> aggrOp)

Returns a supplier of processors for a vertex that performs the provided aggregate operation on all the items it receives. After exhausting all its input it emits a single item of type R — the result of the aggregate operation.

Since the input to this vertex must be bounded, its primary use case is batch jobs.

This processor has state, but does not save it to the snapshot. On job restart, the state will be lost.

Type Parameters:
T - type of received item
A - type of accumulator returned from aggrOp.createAccumulatorFn()
R - type of the finished result returned from aggrOp.finishAccumulationFn()

Parameters:
aggrOp - the aggregate operation to perform
accumulateP

@Nonnull
public static <T,A,R> DistributedSupplier<Processor> accumulateP(
        @Nonnull AggregateOperation1<T,A,R> aggrOp)

Returns a supplier of processors for a vertex that performs the provided aggregate operation on all the items it receives. After exhausting all its input it emits a single item of type R — the result of the aggregate operation.

Since the input to this vertex must be bounded, its primary use case is batch jobs.

This processor has state, but does not save it to the snapshot. On job restart, the state will be lost.

Type Parameters:
T - type of received item
A - type of accumulator returned from aggrOp.createAccumulatorFn()
R - type of the finished result returned from aggrOp.finishAccumulationFn()

Parameters:
aggrOp - the aggregate operation to perform
combineP

@Nonnull
public static <T,A,R> DistributedSupplier<Processor> combineP(
        @Nonnull AggregateOperation1<T,A,R> aggrOp)

Returns a supplier of processors for a vertex that performs the provided aggregate operation on all the items it receives. After exhausting all its input it emits a single item of type R — the result of the aggregate operation.

Since the input to this vertex must be bounded, its primary use case is batch jobs.

This processor has state, but does not save it to the snapshot. On job restart, the state will be lost.

Type Parameters:
T - type of received item
A - type of accumulator returned from aggrOp.createAccumulatorFn()
R - type of the finished result returned from aggrOp.finishAccumulationFn()

Parameters:
aggrOp - the aggregate operation to perform
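Translated into code, the all-to-one outline from the class Javadoc might be wired as follows, continuing the earlier sketch; allToOne() is the Edge routing policy I'd expect here, but verify it against your version:

    // Stage 1: every member accumulates its local items into one accumulator.
    Vertex accumulate = dag.newVertex("accumulate", accumulateP(counting()));

    // Stage 2: a single processor combines the per-member accumulators.
    Vertex combine = dag.newVertex("combine", combineP(counting()))
                        .localParallelism(1);

    dag.edge(Edge.between(source, accumulate))        // local, non-partitioned
       .edge(Edge.between(accumulate, combine)
                 .distributed()
                 .allToOne());                        // distributed, all-to-one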
aggregateToSlidingWindowP

@Nonnull
public static <T,K,A,R> DistributedSupplier<Processor> aggregateToSlidingWindowP(
        @Nonnull DistributedFunction<? super T,K> getKeyFn,
        @Nonnull DistributedToLongFunction<? super T> getTimestampFn,
        @Nonnull TimestampKind timestampKind,
        @Nonnull WindowDefinition windowDef,
        @Nonnull AggregateOperation1<? super T,A,R> aggrOp)

Returns a supplier of processors for a vertex that aggregates events into a sliding window in a single stage (see the class Javadoc for an explanation of aggregation stages). The vertex groups items by the grouping key (as obtained from the given key-extracting function) and by frame, which is a range of timestamps equal to the sliding step. It emits sliding window results labeled with the timestamp denoting the window's end time (the exclusive upper bound of the timestamps belonging to the window).

When the vertex receives a watermark with a given wmVal, it emits the result of aggregation for all the positions of the sliding window with windowTimestamp <= wmVal. It computes the window result by combining the partial results of the frames belonging to it and finally applying the finish aggregation primitive. After this it deletes from storage all the frames that trail behind the emitted windows. The type of emitted items is TimestampedEntry<K, A>, so there is one item per key per window position.

Behavior on job restart

This processor saves its state to the snapshot. After restart, it can continue accumulating where it left off.

After a restart in at-least-once mode, watermarks are allowed to go back in time. If such a watermark is received, some windows that were emitted in the previous execution will be re-emitted. These windows might miss events, as some of them had already been evicted before the snapshot was taken in the previous execution.

Type Parameters:
T - type of received item
K - type of key
A - type of accumulator returned from aggrOp.createAccumulatorFn()
R - type of the finished result returned from aggrOp.finishAccumulationFn()
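As a sketch, a single-stage sliding-window count per key could be set up like this; WindowDefinition.slidingWindowDef(windowLength, slideBy), the Trade event type, and its accessors are assumptions:

    // 1-second window sliding in 100 ms steps, based on event timestamps.
    WindowDefinition windowDef = WindowDefinition.slidingWindowDef(1_000, 100);

    Vertex slidingWindow = dag.newVertex("aggregate-to-sliding-window",
            aggregateToSlidingWindowP(
                    Trade::getTicker,       // getKeyFn
                    Trade::getTimestamp,    // getTimestampFn
                    TimestampKind.EVENT,    // items carry event time, not frame time
                    windowDef,
                    counting()));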
accumulateByFrameP

@Nonnull
public static <T,K,A> DistributedSupplier<Processor> accumulateByFrameP(
        @Nonnull DistributedFunction<? super T,K> getKeyFn,
        @Nonnull DistributedToLongFunction<? super T> getTimestampFn,
        @Nonnull TimestampKind timestampKind,
        @Nonnull WindowDefinition windowDef,
        @Nonnull AggregateOperation1<? super T,A,?> aggrOp)

Returns a supplier of processors for the first-stage vertex in a two-stage sliding window aggregation setup (see the class Javadoc for an explanation of aggregation stages). The vertex groups items by the grouping key (as obtained from the given key-extracting function) and by frame, which is a range of timestamps equal to the sliding step. It applies the accumulate aggregation primitive to each key-frame group.

The frame is identified by the timestamp denoting its end time (equal to the exclusive upper bound of its timestamp range). WindowDefinition.higherFrameTs(long) maps the event timestamp to the timestamp of the frame it belongs to.

When the processor receives a watermark with a given wmVal, it emits the current accumulated state of all frames with timestamp <= wmVal and deletes these frames from its storage. The type of emitted items is TimestampedEntry<K, A>, so there is one item per key per frame.

When a state snapshot is requested, the state is flushed to the second-stage processor and nothing is saved to the snapshot.

Type Parameters:
T - input item type
K - type of key returned from getKeyFn
A - type of accumulator returned from aggrOp.createAccumulatorFn()
combineToSlidingWindowP

@Nonnull
public static <K,A,R> DistributedSupplier<Processor> combineToSlidingWindowP(
        @Nonnull WindowDefinition windowDef,
        @Nonnull AggregateOperation1<?,A,R> aggrOp)

Returns a supplier of processors for the second-stage vertex in a two-stage sliding window aggregation setup (see the class Javadoc for an explanation of aggregation stages). Each processor applies the combine aggregation primitive to frames received from several upstream instances of accumulateByFrame(). It emits sliding window results labeled with the timestamp denoting the window's end time. This timestamp is equal to the exclusive upper bound of timestamps belonging to the window.

When the processor receives a watermark with a given wmVal, it emits the result of aggregation for all positions of the sliding window with windowTimestamp <= wmVal. It computes the window result by combining the partial results of the frames belonging to it and finally applying the finish aggregation primitive. After this it deletes from storage all the frames that trail behind the emitted windows. The type of emitted items is TimestampedEntry<K, A>, so there is one item per key per window position.

Behavior on job restart

This processor saves its state to the snapshot. After restart, it can continue accumulating where it left off.

After a restart in at-least-once mode, watermarks are allowed to go back in time. If such a watermark is received, some windows that were emitted in the previous execution will be re-emitted. These windows might miss events, as some of them had already been evicted before the snapshot was taken in the previous execution.

Type Parameters:
A - type of the accumulator
R - type of the finished result returned from aggrOp.finishAccumulationFn()
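The two-stage equivalent pairs accumulateByFrameP with combineToSlidingWindowP, reusing the windowDef and Trade assumptions from the previous sketch:

    // Stage 1: per-member, per-frame partial counts.
    Vertex accumulateFrames = dag.newVertex("accumulate-by-frame",
            accumulateByFrameP(Trade::getTicker, Trade::getTimestamp,
                    TimestampKind.EVENT, windowDef, counting()));

    // Stage 2: combine frames into per-key window results.
    Vertex combineWindows = dag.newVertex("combine-to-sliding-window",
            combineToSlidingWindowP(windowDef, counting()));

    // Stage 1 emits TimestampedEntry items; assuming TimestampedEntry implements
    // Map.Entry, the distributed edge can be partitioned by entryKey().
    dag.edge(Edge.between(accumulateFrames, combineWindows)
                 .partitioned(entryKey())
                 .distributed());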
aggregateToSessionWindowP

@Nonnull
public static <T,K,A,R> DistributedSupplier<Processor> aggregateToSessionWindowP(
        long sessionTimeout,
        @Nonnull DistributedToLongFunction<? super T> getTimestampFn,
        @Nonnull DistributedFunction<? super T,K> getKeyFn,
        @Nonnull AggregateOperation1<? super T,A,R> aggrOp)

Returns a supplier of processors for a vertex that aggregates events into session windows. The type of emitted items is Session.

The functioning of this vertex is easiest to explain in terms of the event interval: the range [timestamp, timestamp + sessionTimeout]. Initially an event causes a new session window to be created, covering exactly the event interval. A following event under the same key belongs to this window iff its interval overlaps it. The window is then extended to cover the entire interval of the new event. An event may happen to belong to two existing windows if its interval bridges the gap between them; in that case they are combined into one.

Behavior on job restart

This processor saves its state to the snapshot. After restart, it can continue accumulating where it left off.

After a restart in at-least-once mode, watermarks are allowed to go back in time. The processor evicts state based on the watermarks it receives. If it receives a duplicate watermark, it might emit sessions with missing events, because those events were already evicted. The sessions before and after the snapshot might overlap, which they normally don't.

Type Parameters:
T - type of the stream event
K - type of the item's grouping key
A - type of the container of the accumulated value
R - type of the session window's result value

Parameters:
sessionTimeout - maximum gap between consecutive events in the same session window
getTimestampFn - function to extract the timestamp from the item
getKeyFn - function to extract the grouping key from the item
aggrOp - the aggregate operation
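For example, counting page visits per user session might look like this sketch; PageVisit and its accessors are hypothetical:

    // Sessions close after a 30-second gap; sessionTimeout is in the same
    // unit as the event timestamps, here milliseconds.
    Vertex sessions = dag.newVertex("aggregate-to-session-window",
            aggregateToSessionWindowP(
                    30_000L,
                    PageVisit::getTimestamp,
                    PageVisit::getUserId,
                    counting()));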
insertWatermarksP

@Nonnull
public static <T> DistributedSupplier<Processor> insertWatermarksP(
        @Nonnull DistributedToLongFunction<T> getTimestampF,
        @Nonnull DistributedSupplier<WatermarkPolicy> newWmPolicyF,
        @Nonnull WatermarkEmissionPolicy wmEmitPolicy)

Returns a supplier of processors for a vertex that inserts watermark items into the stream. The value of the watermark is determined by the supplied WatermarkPolicy instance.

This processor also drops late items: it never lets through an event that is late with regard to an already emitted watermark.

The processor saves the value of the last emitted watermark to the snapshot. Different instances of this processor can be at different watermarks at snapshot time. After a restart, all instances will start at the watermark of the most-behind instance before the restart.

This might sound as if it could break the monotonicity requirement, but thanks to watermark coalescing, watermarks are delivered for downstream processing only after they have been received from all upstream processors. Another side effect of this is that a late event which was dropped before the restart is not considered late after the restart.

Type Parameters:
T - the type of the stream item
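A hedged sketch of inserting watermarks upstream of a windowing vertex; WatermarkPolicies.limitingLag and WatermarkEmissionPolicy.emitByFrame are the factory names I'd expect in this era of the API, but treat them, like the Trade type, as assumptions to verify:

    // Watermarks lag 1000 ms behind the highest timestamp seen so far,
    // emitted at most once per frame of the window definition.
    Vertex insertWm = dag.newVertex("insert-watermarks",
            insertWatermarksP(
                    Trade::getTimestamp,
                    WatermarkPolicies.limitingLag(1_000),           // assumed factory
                    WatermarkEmissionPolicy.emitByFrame(windowDef)  // assumed factory
            ));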
mapP

@Nonnull
public static <T,R> DistributedSupplier<Processor> mapP(
        @Nonnull DistributedFunction<T,R> mapper)

Returns a supplier of processors for a vertex which, for each received item, emits the result of applying the given mapping function to it. If the result is null, it emits nothing. Therefore this vertex can be used to implement filtering semantics as well.

This processor is stateless.

Type Parameters:
T - type of received item
R - type of emitted item

Parameters:
mapper - the mapping function
filterP

@Nonnull
public static <T> DistributedSupplier<Processor> filterP(
        @Nonnull DistributedPredicate<T> predicate)

Returns a supplier of processors for a vertex that emits the same items it receives, but only those that pass the given predicate.

This processor is stateless.

Type Parameters:
T - type of received item

Parameters:
predicate - the predicate to test each received item against
flatMapP

@Nonnull
public static <T,R> DistributedSupplier<Processor> flatMapP(
        @Nonnull DistributedFunction<T,? extends Traverser<? extends R>> mapper)

Returns a supplier of processors for a vertex that applies the provided item-to-traverser mapping function to each received item and emits all the items from the resulting traverser.

This processor is stateless.

Type Parameters:
T - received item type
R - emitted item type

Parameters:
mapper - function that maps the received item to a traverser over output items
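As quick sketches of the three stateless vertices (Person and its accessors are hypothetical; Traversers.traverseArray is the standard traverser factory):

    // mapP: emit the mapped value; a null result would emit nothing.
    Vertex names = dag.newVertex("names", mapP(Person::getName));

    // filterP: pass through only items satisfying the predicate.
    Vertex adults = dag.newVertex("adults",
            filterP((Person p) -> p.getAge() >= 18));

    // flatMapP: emit every item of the traverser returned for the input item.
    Vertex words = dag.newVertex("words", flatMapP((String line) ->
            Traversers.traverseArray(line.split("\\s+"))));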
noopP

@Nonnull
public static DistributedSupplier<Processor> noopP()

Returns a supplier of processors that consume all their input (if any) and do nothing with it.
nonCooperativeP

@Nonnull
public static ProcessorMetaSupplier nonCooperativeP(@Nonnull ProcessorMetaSupplier wrapped)

Decorates a processor meta-supplier with one that will declare all its processors non-cooperative. The wrapped supplier must return processors that are instanceof AbstractProcessor.

nonCooperativeP

@Nonnull
public static ProcessorSupplier nonCooperativeP(@Nonnull ProcessorSupplier wrapped)
Decorates a ProcessorSupplier with one that will declare all its processors non-cooperative. The wrapped supplier must return processors that are instanceof AbstractProcessor.

nonCooperativeP

@Nonnull
public static DistributedSupplier<Processor> nonCooperativeP(@Nonnull DistributedSupplier<Processor> wrapped)
Decorates a Supplier<Processor> into one that will declare its processors non-cooperative. The wrapped supplier must return processors that are instanceof AbstractProcessor.
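For instance, a supplier whose processors block on I/O can be declared non-cooperative before being handed to the DAG; WriteFileP is a hypothetical AbstractProcessor subclass:

    // The wrapped supplier must return AbstractProcessor instances.
    DistributedSupplier<Processor> blockingWriter = nonCooperativeP(WriteFileP::new);
    Vertex sink = dag.newVertex("file-sink", blockingWriter).localParallelism(1);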
Copyright © 2017 Hazelcast, Inc. All Rights Reserved.