public final class Processors extends Object
Many of the processors deal with an aggregating operation over stream items. Prior to aggregation, items may be grouped by an arbitrary key and/or an event timestamp-based window. There are two main aggregation setups: single-stage and two-stage.
Unless specified otherwise, all functions passed to member methods must be stateless.

In the single-stage setup a single vertex performs the whole aggregate operation. The edge leading to it must be partitioned and distributed so that all items with the same grouping key land on the same processor. This is the outline of the DAG:
                 -----------------
                | upstream vertex |
                 -----------------
                         |
                         | partitioned-distributed
                         V
                    -----------
                   | aggregate |
                    -----------
 
 In the two-stage setup the first stage applies just the accumulate aggregation
 primitive and the second stage does combine and finish. The essential property
 of this setup is that the edge leading to the first stage is local,
 incurring no network traffic, and only the edge from the first to the
 second stage is distributed. There is only one item per group traveling on
 the distributed edge. Compared to the single-stage setup this can
 dramatically reduce network traffic, but it needs more memory to keep
 track of all keys on each cluster member. This is the outline of the DAG:
 
                -----------------
               | upstream vertex |
                -----------------
                        |
                        | partitioned-local
                        V
                  ------------
                 | accumulate |
                  ------------
                        |
                        | partitioned-distributed
                        V
                 ----------------
                | combine/finish |
                 ----------------
 
 The variants without a grouping key are equivalent to grouping by a
 single, global key. In that case the edge towards the final-stage
 vertex must be all-to-one and the local parallelism of the vertex must
 be one. Unless the volume of the aggregated data is small (e.g., some
 side branch off the main flow in the DAG), the best choice is this
 two-stage setup:
 
                -----------------
               | upstream vertex |
                -----------------
                        |
                        | local, non-partitioned
                        V
                  ------------
                 | accumulate |
                  ------------
                        |
                        | distributed, all-to-one
                        V
                 ----------------
                | combine/finish | localParallelism = 1
                 ----------------
 
 This will parallelize and distribute most of the processing, and
 the second-stage processor will receive just a single item from
 each upstream processor, doing very little work.
 A tumbling window is a special case of a sliding window with the sliding step equal to the window size.
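To make the wiring concrete, here is a minimal sketch of the two-stage global-aggregation DAG from the last diagram, assuming the Hazelcast 5.x core API. The class name, the sourceP supplier and the choice of the counting aggregate operation are illustrative assumptions, not part of this class:

```java
import static com.hazelcast.jet.aggregate.AggregateOperations.counting;
import static com.hazelcast.jet.core.Edge.between;
import static com.hazelcast.jet.core.processor.Processors.accumulateP;
import static com.hazelcast.jet.core.processor.Processors.combineP;

import com.hazelcast.function.SupplierEx;
import com.hazelcast.jet.core.DAG;
import com.hazelcast.jet.core.Processor;
import com.hazelcast.jet.core.Vertex;

public final class TwoStageGlobalCount {
    /** sourceP is a placeholder supplier for the upstream vertex's processors. */
    static DAG build(SupplierEx<Processor> sourceP) {
        DAG dag = new DAG();
        Vertex source = dag.newVertex("source", sourceP);
        Vertex accumulate = dag.newVertex("accumulate", accumulateP(counting()));
        Vertex combine = dag.newVertex("combine", combineP(counting()))
                            .localParallelism(1);      // single final-stage processor
        dag.edge(between(source, accumulate));         // local, non-partitioned
        dag.edge(between(accumulate, combine)
                .distributed()
                .allToOne("global-key"));              // distributed, all-to-one
        return dag;
    }
}
```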
| Modifier and Type | Method and Description |
|---|---|
| static <K,A> SupplierEx<Processor> | accumulateByFrameP(List<FunctionEx<?,? extends K>> keyFns, List<ToLongFunctionEx<?>> timestampFns, TimestampKind timestampKind, SlidingWindowPolicy winPolicy, AggregateOperation<A,?> aggrOp): Returns a supplier of processors for the first-stage vertex in a two-stage sliding window aggregation setup (see the class Javadoc for an explanation of aggregation stages). |
| static <K,A> SupplierEx<Processor> | accumulateByFrameP(List<FunctionEx<?,? extends K>> keyFns, List<ToLongFunctionEx<?>> timestampFns, TimestampKind timestampKind, SlidingWindowPolicy winPolicy, AggregateOperation<A,?> aggrOp, byte watermarkKey): Returns a supplier of processors for the first-stage vertex in a two-stage sliding window aggregation setup (see the class Javadoc for an explanation of aggregation stages). |
| static <K,A> SupplierEx<Processor> | accumulateByKeyP(List<FunctionEx<?,? extends K>> getKeyFns, AggregateOperation<A,?> aggrOp): Returns a supplier of processors for the first-stage vertex in a two-stage group-and-aggregate setup. |
| static <A,R> SupplierEx<Processor> | accumulateP(AggregateOperation<A,R> aggrOp): Returns a supplier of processors for a vertex that performs the accumulation step of the provided aggregate operation on all the items it receives. |
| static <K,A,R,OUT> SupplierEx<Processor> | aggregateByKeyP(List<FunctionEx<?,? extends K>> keyFns, AggregateOperation<A,R> aggrOp, BiFunctionEx<? super K,? super R,OUT> mapToOutputFn): Returns a supplier of processors for a vertex that groups items by key and performs the provided aggregate operation on each group. |
| static <A,R> SupplierEx<Processor> | aggregateP(AggregateOperation<A,R> aggrOp): Returns a supplier of processors for a vertex that performs the provided aggregate operation on all the items it receives. |
| static <K,A,R,OUT> SupplierEx<Processor> | aggregateToSessionWindowP(long sessionTimeout, long earlyResultsPeriod, List<ToLongFunctionEx<?>> timestampFns, List<FunctionEx<?,? extends K>> keyFns, AggregateOperation<A,? extends R> aggrOp, KeyedWindowResultFunction<? super K,? super R,? extends OUT> mapToOutputFn): Returns a supplier of processors for a vertex that aggregates events into session windows. |
| static <K,A,R,OUT> SupplierEx<Processor> | aggregateToSlidingWindowP(List<FunctionEx<?,? extends K>> keyFns, List<ToLongFunctionEx<?>> timestampFns, TimestampKind timestampKind, SlidingWindowPolicy winPolicy, long earlyResultsPeriod, AggregateOperation<A,? extends R> aggrOp, KeyedWindowResultFunction<? super K,? super R,? extends OUT> mapToOutputFn): Returns a supplier of processors for a vertex that aggregates events into a sliding window in a single stage (see the class Javadoc for an explanation of aggregation stages). |
| static <K,A,R,OUT> SupplierEx<Processor> | aggregateToSlidingWindowP(List<FunctionEx<?,? extends K>> keyFns, List<ToLongFunctionEx<?>> timestampFns, TimestampKind timestampKind, SlidingWindowPolicy winPolicy, long earlyResultsPeriod, AggregateOperation<A,? extends R> aggrOp, KeyedWindowResultFunction<? super K,? super R,? extends OUT> mapToOutputFn, byte windowWatermarkKey): Returns a supplier of processors for a vertex that aggregates events into a sliding window in a single stage (see the class Javadoc for an explanation of aggregation stages). |
| static <K,A,R,OUT> SupplierEx<Processor> | combineByKeyP(AggregateOperation<A,R> aggrOp, BiFunctionEx<? super K,? super R,OUT> mapToOutputFn): Returns a supplier of processors for the second-stage vertex in a two-stage group-and-aggregate setup. |
| static <A,R> SupplierEx<Processor> | combineP(AggregateOperation<A,R> aggrOp): Returns a supplier of processors for a vertex that performs the combining and finishing steps of the provided aggregate operation. |
| static <K,A,R,OUT> SupplierEx<Processor> | combineToSlidingWindowP(SlidingWindowPolicy winPolicy, AggregateOperation<A,? extends R> aggrOp, KeyedWindowResultFunction<? super K,? super R,? extends OUT> mapToOutputFn): Returns a supplier of processors for the second-stage vertex in a two-stage sliding window aggregation setup (see the class Javadoc for an explanation of aggregation stages). |
| static <K,A,R,OUT> SupplierEx<Processor> | combineToSlidingWindowP(SlidingWindowPolicy winPolicy, AggregateOperation<A,? extends R> aggrOp, KeyedWindowResultFunction<? super K,? super R,? extends OUT> mapToOutputFn, byte windowWatermarkKey): Returns a supplier of processors for the second-stage vertex in a two-stage sliding window aggregation setup (see the class Javadoc for an explanation of aggregation stages) with the specified windowWatermarkKey. |
| static <T> SupplierEx<Processor> | filterP(PredicateEx<? super T> filterFn): Returns a supplier of processors for a vertex that emits the same items it receives, but only those that pass the given predicate. |
| static <C,S,T> ProcessorSupplier | filterUsingServiceP(ServiceFactory<C,S> serviceFactory, BiPredicateEx<? super S,? super T> filterFn): Returns a supplier of processors for a vertex that emits the same items it receives, but only those that pass the given predicate. |
| static <T,R> SupplierEx<Processor> | flatMapP(FunctionEx<? super T,? extends Traverser<? extends R>> flatMapFn): Returns a supplier of processors for a vertex that applies the provided item-to-traverser mapping function to each received item and emits all the items from the resulting traverser. |
| static <T,K,S,R> SupplierEx<Processor> | flatMapStatefulP(long ttl, FunctionEx<? super T,? extends K> keyFn, ToLongFunctionEx<? super T> timestampFn, Supplier<? extends S> createFn, TriFunction<? super S,? super K,? super T,? extends Traverser<R>> statefulFlatMapFn, TriFunction<? super S,? super K,? super Long,? extends Traverser<R>> onEvictFn): Returns a supplier of processors for a vertex that performs a stateful flat-mapping of its input. |
| static <C,S,T,R> ProcessorSupplier | flatMapUsingServiceP(ServiceFactory<C,S> serviceFactory, BiFunctionEx<? super S,? super T,? extends Traverser<R>> flatMapFn): Returns a supplier of processors for a vertex that applies the provided item-to-traverser mapping function to each received item and emits all the items from the resulting traverser. |
| static <T> SupplierEx<Processor> | insertWatermarksP(EventTimePolicy<? super T> eventTimePolicy): Returns a supplier of processors for a vertex that inserts watermark items into the stream. |
| static <T> SupplierEx<Processor> | insertWatermarksP(FunctionEx<ProcessorSupplier.Context,EventTimePolicy<? super T>> eventTimePolicyProvider): Returns a supplier of processors for a vertex that inserts watermark items into the stream. |
| static <T,R> SupplierEx<Processor> | mapP(FunctionEx<? super T,? extends R> mapFn): Returns a supplier of processors for a vertex which, for each received item, emits the result of applying the given mapping function to it. |
| static <T,K,S,R> SupplierEx<Processor> | mapStatefulP(long ttl, FunctionEx<? super T,? extends K> keyFn, ToLongFunctionEx<? super T> timestampFn, Supplier<? extends S> createFn, TriFunction<? super S,? super K,? super T,? extends R> statefulMapFn, TriFunction<? super S,? super K,? super Long,? extends R> onEvictFn): Returns a supplier of processors for a vertex that performs a stateful mapping of its input. |
| static <C,S,T,K,R> ProcessorSupplier | mapUsingServiceAsyncP(ServiceFactory<C,S> serviceFactory, int maxConcurrentOps, boolean preserveOrder, FunctionEx<T,K> extractKeyFn, BiFunctionEx<? super S,? super T,CompletableFuture<R>> mapAsyncFn): Asynchronous version of mapUsingServiceP: the mapAsyncFn returns a CompletableFuture<R> instead of just R. |
| static <C,S,T,R> ProcessorSupplier | mapUsingServiceP(ServiceFactory<C,S> serviceFactory, BiFunctionEx<? super S,? super T,? extends R> mapFn): Returns a supplier of processors for a vertex which, for each received item, emits the result of applying the given mapping function to it. |
| static SupplierEx<Processor> | noopP(): Returns a supplier of a processor that swallows all its normal input (if any), does nothing with it, forwards the watermarks, produces no output and completes immediately. |
| static <T> SupplierEx<Processor> | sortP(Comparator<T> comparator): Returns a supplier of processors for a vertex that sorts its input using a PriorityQueue and emits it in the complete phase. |
@Nonnull public static <A,R> SupplierEx<Processor> aggregateP(@Nonnull AggregateOperation<A,R> aggrOp)

Returns a supplier of processors for a vertex that performs the provided aggregate operation on all the items it receives. After exhausting all its input, it emits a single item of type R — the result of the aggregate operation's finish primitive. The primitive may return null, in which case the vertex will not produce any output.

Since the input to this vertex must be bounded, its primary use case is batch jobs.

This processor has state, but does not save it to the snapshot. On job restart, the state will be lost.
Type Parameters:
A - type of accumulator returned from aggrOp.createAccumulatorFn()
R - type of the finished result returned from aggrOp.finishAccumulationFn()
Parameters:
aggrOp - the aggregate operation to perform
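For reference, the create, accumulate, combine and finish primitives referenced throughout these docs are the ones defined on AggregateOperation. A minimal sketch of a long-summing operation assembled from those primitives; the class and field names are illustrative assumptions:

```java
import com.hazelcast.jet.accumulator.LongAccumulator;
import com.hazelcast.jet.aggregate.AggregateOperation;
import com.hazelcast.jet.aggregate.AggregateOperation1;

public final class SummingOperation {
    // A long-summing operation assembled from the four primitives:
    static final AggregateOperation1<Long, LongAccumulator, Long> SUMMING =
            AggregateOperation
                    .withCreate(LongAccumulator::new)           // create a fresh accumulator
                    .<Long>andAccumulate(LongAccumulator::add)  // accumulate one input item
                    .andCombine(LongAccumulator::add)           // combine another accumulator's state
                    .andExportFinish(LongAccumulator::get);     // finish: extract the result
}
```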
@Nonnull public static <A,R> SupplierEx<Processor> accumulateP(@Nonnull AggregateOperation<A,R> aggrOp)

Returns a supplier of processors for a vertex that performs the accumulation step of the provided aggregate operation on all the items it receives. After exhausting all its input, it emits a single item of type A — the accumulator object.

Since the input to this vertex must be bounded, its primary use case is batch jobs.

This processor has state, but does not save it to the snapshot. On job restart, the state will be lost.
Type Parameters:
A - type of accumulator returned from aggrOp.createAccumulatorFn()
R - type of the finished result returned from aggrOp.finishAccumulationFn()
Parameters:
aggrOp - the aggregate operation to perform
@Nonnull public static <A,R> SupplierEx<Processor> combineP(@Nonnull AggregateOperation<A,R> aggrOp)

Returns a supplier of processors for a vertex that performs the combining and finishing steps of the provided aggregate operation. It receives the accumulators from the upstream accumulateP(com.hazelcast.jet.aggregate.AggregateOperation<A, R>) vertex and combines their state into a single accumulator. After exhausting all its input, it emits a single result of type R — the result of applying the finish primitive to the combined accumulator. The primitive may return null, in which case the vertex will not produce any output.

Since the input to this vertex must be bounded, its primary use case is batch jobs.

This processor has state, but does not save it to the snapshot. On job restart, the state will be lost.
Type Parameters:
A - type of accumulator returned from aggrOp.createAccumulatorFn()
R - type of the finished result returned from aggrOp.finishAccumulationFn()
Parameters:
aggrOp - the aggregate operation to perform

@Nonnull public static <K,A,R,OUT> SupplierEx<Processor> aggregateByKeyP(@Nonnull List<FunctionEx<?,? extends K>> keyFns, @Nonnull AggregateOperation<A,R> aggrOp, @Nonnull BiFunctionEx<? super K,? super R,OUT> mapToOutputFn)

Returns a supplier of processors for a vertex that groups items by key and performs the provided aggregate operation on each group. After exhausting all its input, it emits one item per distinct key, computed by passing each (key, result) pair to mapToOutputFn.
 The vertex accepts input from one or more inbound edges. The type of items may be different on each edge. For each edge a separate key extracting function must be supplied and the aggregate operation must contain a separate accumulation function for each edge.
This processor has state, but does not save it to snapshot. On job restart, the state will be lost.
Type Parameters:
K - type of key
A - type of accumulator returned from aggrOp.createAccumulatorFn()
R - type of the result returned from aggrOp.finishAccumulationFn()
OUT - type of the item to emit
Parameters:
keyFns - functions that compute the grouping key
aggrOp - the aggregate operation
mapToOutputFn - function that takes the key and the aggregation result and returns the output item

@Nonnull public static <K,A> SupplierEx<Processor> accumulateByKeyP(@Nonnull List<FunctionEx<?,? extends K>> getKeyFns, @Nonnull AggregateOperation<A,?> aggrOp)

Returns a supplier of processors for the first-stage vertex in a two-stage group-and-aggregate setup. The vertex groups items by the grouping key and applies the accumulate primitive to each group.
 After exhausting all its input it emits one Map.Entry<K, A> per
 distinct key.
 The vertex accepts input from one or more inbound edges. The type of items may be different on each edge. For each edge a separate key extracting function must be supplied and the aggregate operation must contain a separate accumulation function for each edge.
This processor has state, but does not save it to snapshot. On job restart, the state will be lost.
Type Parameters:
K - type of key
A - type of accumulator returned from aggrOp.createAccumulatorFn()
Parameters:
getKeyFns - functions that compute the grouping key
aggrOp - the aggregate operation to perform

@Nonnull public static <K,A,R,OUT> SupplierEx<Processor> combineByKeyP(@Nonnull AggregateOperation<A,R> aggrOp, @Nonnull BiFunctionEx<? super K,? super R,OUT> mapToOutputFn)

Returns a supplier of processors for the second-stage vertex in a two-stage group-and-aggregate setup. Each processor applies the
combine aggregation primitive to the
 entries received from several upstream instances of accumulateByKeyP(java.util.List<com.hazelcast.function.FunctionEx<?, ? extends K>>, com.hazelcast.jet.aggregate.AggregateOperation<A, ?>). After exhausting all its input it emits one item per
 distinct key. It computes the item to emit by passing each (key, result)
 pair to mapToOutputFn.
 Since the input to this vertex must be bounded, its primary use case is batch jobs.
This processor has state, but does not save it to snapshot. On job restart, the state will be lost.
Type Parameters:
K - type of key
A - type of accumulator returned from aggrOp.createAccumulatorFn()
R - type of the finished result returned from aggrOp.finishAccumulationFn()
OUT - type of the item to emit
Parameters:
aggrOp - the aggregate operation to perform
mapToOutputFn - function that takes the key and the aggregation result and returns the output item
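A sketch of how accumulateByKeyP and combineByKeyP are typically wired together, in the style of a word count. The class name and the sourceP supplier are assumptions made for illustration:

```java
import static com.hazelcast.jet.Util.entry;
import static com.hazelcast.jet.aggregate.AggregateOperations.counting;
import static com.hazelcast.jet.core.Edge.between;
import static com.hazelcast.jet.core.processor.Processors.accumulateByKeyP;
import static com.hazelcast.jet.core.processor.Processors.combineByKeyP;
import static com.hazelcast.jet.function.Functions.entryKey;
import static java.util.Collections.singletonList;

import com.hazelcast.function.FunctionEx;
import com.hazelcast.function.SupplierEx;
import com.hazelcast.jet.core.DAG;
import com.hazelcast.jet.core.Processor;
import com.hazelcast.jet.core.Vertex;

public final class WordCountDag {
    /** sourceP is a placeholder supplier for the upstream vertex's processors. */
    static DAG build(SupplierEx<Processor> sourceP) {
        DAG dag = new DAG();
        Vertex source = dag.newVertex("source", sourceP);
        FunctionEx<String, String> wholeItem = FunctionEx.identity();
        Vertex accumulate = dag.newVertex("accumulate",
                accumulateByKeyP(singletonList(wholeItem), counting()));
        Vertex combine = dag.newVertex("combine",
                combineByKeyP(counting(), (String word, Long count) -> entry(word, count)));
        // First-stage edge: partitioned but local, so no network traffic.
        dag.edge(between(source, accumulate).partitioned(wholeItem));
        // Second-stage edge: one accumulated entry per distinct key crosses the network.
        dag.edge(between(accumulate, combine).distributed().partitioned(entryKey()));
        return dag;
    }
}
```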
@Nonnull public static <K,A,R,OUT> SupplierEx<Processor> aggregateToSlidingWindowP(@Nonnull List<FunctionEx<?,? extends K>> keyFns, @Nonnull List<ToLongFunctionEx<?>> timestampFns, @Nonnull TimestampKind timestampKind, @Nonnull SlidingWindowPolicy winPolicy, long earlyResultsPeriod, @Nonnull AggregateOperation<A,? extends R> aggrOp, @Nonnull KeyedWindowResultFunction<? super K,? super R,? extends OUT> mapToOutputFn)

Returns a supplier of processors for a vertex that aggregates events into a sliding window in a single stage (see the
class Javadoc for an explanation of aggregation stages). The vertex
 groups items by the grouping key (as obtained from the given
 key-extracting function) and by frame, which is a range of
 timestamps equal to the sliding step. It emits sliding window results
 labeled with the timestamp denoting the window's end time (the exclusive
 upper bound of the timestamps belonging to the window).
 The vertex accepts input from one or more inbound edges. The type of items may be different on each edge. For each edge a separate key extracting function must be supplied and the aggregate operation must contain a separate accumulation function for each edge.
 When the vertex receives a watermark with a given wmVal, it
 emits the result of aggregation for all the positions of the sliding
 window with windowTimestamp <= wmVal. It computes the window
 result by combining the partial results of the frames belonging to it
 and finally applying the finish aggregation primitive. After this
 it deletes from storage all the frames that trail behind the emitted
 windows. In the output there is one item per key per window position.
 
 Behavior on job restart
 This processor saves its state to snapshot. After restart, it can
 continue accumulating where it left off.
 
After a restart in at-least-once mode, watermarks are allowed to go back in time. If such a watermark is received, some windows that were emitted in the previous execution will be re-emitted. These windows might miss events, as some of them had already been evicted before the snapshot was taken in the previous execution.
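A sketch of creating the single-stage sliding-window vertex described above. The Trade record is a hypothetical event type (Java 16+ record syntax assumed) and counting() an illustrative aggregate operation:

```java
import static java.util.Collections.singletonList;

import java.util.List;

import com.hazelcast.function.FunctionEx;
import com.hazelcast.function.SupplierEx;
import com.hazelcast.function.ToLongFunctionEx;
import com.hazelcast.jet.accumulator.LongAccumulator;
import com.hazelcast.jet.aggregate.AggregateOperations;
import com.hazelcast.jet.core.Processor;
import com.hazelcast.jet.core.SlidingWindowPolicy;
import com.hazelcast.jet.core.TimestampKind;
import com.hazelcast.jet.core.processor.Processors;
import com.hazelcast.jet.datamodel.KeyedWindowResult;

public final class SlidingWindowSketch {
    record Trade(String ticker, long timestamp) { }   // hypothetical event type

    static SupplierEx<Processor> slidingWindowP() {
        SlidingWindowPolicy winPolicy = SlidingWindowPolicy.slidingWinPolicy(60_000, 1_000);
        List<FunctionEx<?, ? extends String>> keyFns =
                singletonList((FunctionEx<Trade, String>) Trade::ticker);
        List<ToLongFunctionEx<?>> timestampFns =
                singletonList((ToLongFunctionEx<Trade>) Trade::timestamp);
        return Processors.<String, LongAccumulator, Long, KeyedWindowResult<String, Long>>
                aggregateToSlidingWindowP(
                        keyFns,
                        timestampFns,
                        TimestampKind.EVENT,             // items carry event timestamps
                        winPolicy,                       // 60 s window sliding by 1 s
                        0L,                              // earlyResultsPeriod: disabled
                        AggregateOperations.counting(),  // count trades per ticker
                        KeyedWindowResult::new);         // (start, end, key, result, isEarly)
    }
}
```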
@Nonnull public static <K,A,R,OUT> SupplierEx<Processor> aggregateToSlidingWindowP(@Nonnull List<FunctionEx<?,? extends K>> keyFns, @Nonnull List<ToLongFunctionEx<?>> timestampFns, @Nonnull TimestampKind timestampKind, @Nonnull SlidingWindowPolicy winPolicy, long earlyResultsPeriod, @Nonnull AggregateOperation<A,? extends R> aggrOp, @Nonnull KeyedWindowResultFunction<? super K,? super R,? extends OUT> mapToOutputFn, byte windowWatermarkKey)

Returns a supplier of processors for a vertex that aggregates events into a sliding window in a single stage (see the class Javadoc for an explanation of aggregation stages). The vertex
 groups items by the grouping key (as obtained from the given
 key-extracting function) and by frame, which is a range of
 timestamps equal to the sliding step. It emits sliding window results
 labeled with the timestamp denoting the window's end time (the exclusive
 upper bound of the timestamps belonging to the window).
 The vertex accepts input from one or more inbound edges. The type of items may be different on each edge. For each edge a separate key extracting function must be supplied and the aggregate operation must contain a separate accumulation function for each edge.
 When the vertex receives a watermark with a given wmVal, it
 emits the result of aggregation for all the positions of the sliding
 window with windowTimestamp <= wmVal. It computes the window
 result by combining the partial results of the frames belonging to it
 and finally applying the finish aggregation primitive. After this
 it deletes from storage all the frames that trail behind the emitted
 windows. In the output there is one item per key per window position.
 
 Behavior on job restart
 This processor saves its state to snapshot. After restart, it can
 continue accumulating where it left off.
 
After a restart in at-least-once mode, watermarks are allowed to go back in time. If such a watermark is received, some windows that were emitted in the previous execution will be re-emitted. These windows might miss events, as some of them had already been evicted before the snapshot was taken in the previous execution.
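The two methods documented next, accumulateByFrameP and combineToSlidingWindowP, form the two-stage counterpart of the vertex above. A sketch of the wiring as a method in the same hypothetical SlidingWindowSketch class (so it can reuse the Trade record); imports and the "source" vertex are as in the earlier sketches:

```java
// Continues SlidingWindowSketch above; "source" is the placeholder upstream vertex.
static void addTwoStageWindow(DAG dag, Vertex source) {
    SlidingWindowPolicy winPolicy = SlidingWindowPolicy.slidingWinPolicy(60_000, 1_000);
    List<FunctionEx<?, ? extends String>> keyFns =
            singletonList((FunctionEx<Trade, String>) Trade::ticker);
    List<ToLongFunctionEx<?>> timestampFns =
            singletonList((ToLongFunctionEx<Trade>) Trade::timestamp);
    Vertex accumulate = dag.newVertex("accumulateByFrame",
            Processors.accumulateByFrameP(keyFns, timestampFns,
                    TimestampKind.EVENT, winPolicy, AggregateOperations.counting()));
    Vertex combine = dag.newVertex("combineToWindow",
            Processors.<String, LongAccumulator, Long, KeyedWindowResult<String, Long>>
                    combineToSlidingWindowP(winPolicy, AggregateOperations.counting(),
                            KeyedWindowResult::new));
    // First-stage edge: partitioned but local.
    dag.edge(Edge.between(source, accumulate)
            .partitioned((FunctionEx<Trade, String>) Trade::ticker));
    // Second-stage edge: one item per key per frame crosses the network.
    dag.edge(Edge.between(accumulate, combine).distributed().partitioned(Functions.entryKey()));
}
```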
@Nonnull public static <K,A> SupplierEx<Processor> accumulateByFrameP(@Nonnull List<FunctionEx<?,? extends K>> keyFns, @Nonnull List<ToLongFunctionEx<?>> timestampFns, @Nonnull TimestampKind timestampKind, @Nonnull SlidingWindowPolicy winPolicy, @Nonnull AggregateOperation<A,?> aggrOp)

Returns a supplier of processors for the first-stage vertex in a two-stage sliding window aggregation setup (see the class Javadoc for an explanation of aggregation stages). The vertex
 groups items by the grouping key (as obtained from the given
 key-extracting function) and by frame, which is a range of
 timestamps equal to the sliding step. It applies the accumulate aggregation primitive to
 each key-frame group.
 
 The frame is identified by the timestamp denoting its end time (equal to
 the exclusive upper bound of its timestamp range). SlidingWindowPolicy.higherFrameTs(long) maps the event timestamp to the
 timestamp of the frame it belongs to.
 
The vertex accepts input from one or more inbound edges. The type of items may be different on each edge. For each edge a separate key extracting function must be supplied and the aggregate operation must contain a separate accumulation function for each edge.
 When the processor receives a watermark with a given wmVal, it
 emits the current accumulated state of all frames with timestamp <= wmVal and deletes these frames from its storage. In the
 output there is one item per key per frame.
 
When a state snapshot is requested, the state is flushed to the second-stage processor and nothing is saved to the snapshot.
Type Parameters:
K - type of the grouping key
A - type of accumulator returned from aggrOp.createAccumulatorFn()

@Nonnull public static <K,A> SupplierEx<Processor> accumulateByFrameP(@Nonnull List<FunctionEx<?,? extends K>> keyFns, @Nonnull List<ToLongFunctionEx<?>> timestampFns, @Nonnull TimestampKind timestampKind, @Nonnull SlidingWindowPolicy winPolicy, @Nonnull AggregateOperation<A,?> aggrOp, byte watermarkKey)

Returns a supplier of processors for the first-stage vertex in a two-stage sliding window aggregation setup (see the class Javadoc for an explanation of aggregation stages). The vertex
 groups items by the grouping key (as obtained from the given
 key-extracting function) and by frame, which is a range of
 timestamps equal to the sliding step. It applies the accumulate aggregation primitive to
 each key-frame group.
 
 The frame is identified by the timestamp denoting its end time (equal to
 the exclusive upper bound of its timestamp range). SlidingWindowPolicy.higherFrameTs(long) maps the event timestamp to the
 timestamp of the frame it belongs to.
 
The vertex accepts input from one or more inbound edges. The type of items may be different on each edge. For each edge a separate key extracting function must be supplied and the aggregate operation must contain a separate accumulation function for each edge.
 When the processor receives a keyed watermark with a given wmVal, it
 emits the current accumulated state of all frames with timestamp <= wmVal and deletes these frames from its storage. In the
 output there is one item per key per frame.
 
When a state snapshot is requested, the state is flushed to the second-stage processor and nothing is saved to the snapshot.
Type Parameters:
K - type of the grouping key
A - type of accumulator returned from aggrOp.createAccumulatorFn()

@Nonnull public static <K,A,R,OUT> SupplierEx<Processor> combineToSlidingWindowP(@Nonnull SlidingWindowPolicy winPolicy, @Nonnull AggregateOperation<A,? extends R> aggrOp, @Nonnull KeyedWindowResultFunction<? super K,? super R,? extends OUT> mapToOutputFn)

Returns a supplier of processors for the second-stage vertex in a two-stage sliding window aggregation setup (see the
class Javadoc for an explanation of aggregation stages). Each
 processor applies the combine
 aggregation primitive to the frames received from several upstream
 instances of accumulateByFrame().
 
 When the processor receives a watermark with a given wmVal,
 it emits the result of aggregation for all positions of the sliding
 window with windowTimestamp <= wmVal. It computes the window
 result by combining the partial results of the frames belonging to it
 and finally applying the finish aggregation primitive. After
 this it deletes from storage all the frames that trail behind the
 emitted windows. To compute the item to emit, it calls mapToOutputFn with the window's start and end timestamps, the key and
 the aggregation result. The window end time is the exclusive upper bound
 of the timestamps belonging to the window.
 
 Behavior on job restart
 This processor saves its state to snapshot. After restart, it can
 continue accumulating where it left off.
 
After a restart in at-least-once mode, watermarks are allowed to go back in time. If such a watermark is received, some windows that were emitted in the previous execution will be re-emitted. These windows might miss events, as some of them had already been evicted before the snapshot was taken in the previous execution.
Type Parameters:
K - type of the grouping key
A - type of the accumulator
R - type of the finished result returned from aggrOp.finishAccumulationFn()
OUT - type of the item to emit

@Nonnull public static <K,A,R,OUT> SupplierEx<Processor> combineToSlidingWindowP(@Nonnull SlidingWindowPolicy winPolicy, @Nonnull AggregateOperation<A,? extends R> aggrOp, @Nonnull KeyedWindowResultFunction<? super K,? super R,? extends OUT> mapToOutputFn, byte windowWatermarkKey)

Returns a supplier of processors for the second-stage vertex in a two-stage sliding window aggregation setup (see the class Javadoc for an explanation of aggregation stages) with the specified windowWatermarkKey.
 
 Each processor applies the combine
 aggregation primitive to the frames received from several upstream
 instances of accumulateByFrame().
 
 When the processor receives a watermark with a given wmVal,
 it emits the result of aggregation for all positions of the sliding
 window with windowTimestamp <= wmVal. It computes the window
 result by combining the partial results of the frames belonging to it
 and finally applying the finish aggregation primitive. After
 this it deletes from storage all the frames that trail behind the
 emitted windows. To compute the item to emit, it calls mapToOutputFn with the window's start and end timestamps, the key and
 the aggregation result. The window end time is the exclusive upper bound
 of the timestamps belonging to the window.
 
 Behavior on job restart
 This processor saves its state to snapshot. After restart, it can
 continue accumulating where it left off.
 
After a restart in at-least-once mode, watermarks are allowed to go back in time. If such a watermark is received, some windows that were emitted in the previous execution will be re-emitted. These windows might miss events, as some of them had already been evicted before the snapshot was taken in the previous execution.
Type Parameters:
K - type of the grouping key
A - type of the accumulator
R - type of the finished result returned from aggrOp.finishAccumulationFn()
OUT - type of the item to emit

@Nonnull public static <K,A,R,OUT> SupplierEx<Processor> aggregateToSessionWindowP(long sessionTimeout, long earlyResultsPeriod, @Nonnull List<ToLongFunctionEx<?>> timestampFns, @Nonnull List<FunctionEx<?,? extends K>> keyFns, @Nonnull AggregateOperation<A,? extends R> aggrOp, @Nonnull KeyedWindowResultFunction<? super K,? super R,? extends OUT> mapToOutputFn)

Returns a supplier of processors for a vertex that aggregates events into session windows. It computes the output item by passing the window bounds, the key and the aggregation result to mapToOutputFn (typically producing a KeyedWindowResult, a subtype of WindowResult).
 The vertex accepts input from one or more inbound edges. The type of items may be different on each edge. For each edge a separate key extracting function must be supplied and the aggregate operation must contain a separate accumulation function for each edge.
 The functioning of this vertex is easiest to explain in terms of the
 event interval: the range [timestamp, timestamp +
 sessionTimeout). Initially an event causes a new session window to be
 created, covering exactly the event interval. A following event under
 the same key belongs to this window iff its interval overlaps it. The
 window is extended to cover the entire interval of the new event. The
 event may happen to belong to two existing windows if its interval
 bridges the gap between them; in that case they are combined into one.
 
 Behavior on job restart
 This processor saves its state to snapshot. After restart, it can
 continue accumulating where it left off.
 
After a restart in at-least-once mode, watermarks are allowed to go back in time. The processor evicts state based on the watermarks it has received. If it receives a duplicate watermark, it might emit sessions with missing events, because those events were already evicted. The sessions before and after the snapshot might overlap, which they normally don't.
Type Parameters:
K - type of the item's grouping key
A - type of the container of the accumulated value
R - type of the session window's result value
Parameters:
sessionTimeout - maximum gap between consecutive events in the same session window
timestampFns - functions to extract the timestamp from the item
keyFns - functions to extract the grouping key from the item
aggrOp - the aggregate operation
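A sketch of a session-window vertex that counts events per user with a 30-second session gap. The UserEvent record and class name are hypothetical, introduced only for illustration:

```java
import static java.util.Collections.singletonList;

import java.util.List;

import com.hazelcast.function.FunctionEx;
import com.hazelcast.function.SupplierEx;
import com.hazelcast.function.ToLongFunctionEx;
import com.hazelcast.jet.accumulator.LongAccumulator;
import com.hazelcast.jet.aggregate.AggregateOperations;
import com.hazelcast.jet.core.Processor;
import com.hazelcast.jet.core.processor.Processors;
import com.hazelcast.jet.datamodel.KeyedWindowResult;

public final class SessionWindowSketch {
    record UserEvent(String userId, long timestamp) { }   // hypothetical event type

    static SupplierEx<Processor> sessionWindowP() {
        List<ToLongFunctionEx<?>> timestampFns =
                singletonList((ToLongFunctionEx<UserEvent>) UserEvent::timestamp);
        List<FunctionEx<?, ? extends String>> keyFns =
                singletonList((FunctionEx<UserEvent, String>) UserEvent::userId);
        return Processors.<String, LongAccumulator, Long, KeyedWindowResult<String, Long>>
                aggregateToSessionWindowP(
                        30_000L,                         // sessionTimeout: 30 s gap closes a session
                        0L,                              // earlyResultsPeriod: disabled
                        timestampFns,
                        keyFns,
                        AggregateOperations.counting(),  // count events per session
                        KeyedWindowResult::new);
    }
}
```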
@Nonnull public static <T> SupplierEx<Processor> insertWatermarksP(@Nonnull EventTimePolicy<? super T> eventTimePolicy)

Returns a supplier of processors for a vertex that inserts watermark items into the stream. The
 value of the watermark is determined by the supplied EventTimePolicy instance.
This processor also drops late items: it never lets an event that is late with regard to an already emitted watermark pass.

The processor saves the value of the last emitted watermark to the snapshot. Different instances of this processor can be at different watermarks at snapshot time. After a restart, all instances will start at the watermark of the most-behind instance before the restart.

This might sound as if it could break the monotonicity requirement, but thanks to watermark coalescing, watermarks are only delivered for downstream processing after they have been received from all upstream processors. A side effect of this is that a late event, which was dropped before the restart, is not considered late after the restart.
Type Parameters:
T - the type of the stream item
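A sketch of creating the watermark-inserting vertex with a lag-based policy, reusing the hypothetical Trade record from the sliding-window sketch; imports as in the earlier sketches:

```java
// Reuses the hypothetical Trade record defined earlier.
static SupplierEx<Processor> tradeWatermarksP() {
    return Processors.insertWatermarksP(
            EventTimePolicy.eventTimePolicy(
                    (Trade t) -> t.timestamp(),          // extract the event timestamp
                    WatermarkPolicy.limitingLag(2_000),  // watermark trails the top timestamp by 2 s
                    1, 0,                                // watermark throttling frame size / offset
                    60_000));                            // idle timeout in milliseconds
}
```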
@Nonnull public static <T> SupplierEx<Processor> insertWatermarksP(@Nonnull FunctionEx<ProcessorSupplier.Context,EventTimePolicy<? super T>> eventTimePolicyProvider)

Returns a supplier of processors for a vertex that inserts watermark items into the stream. The
 value of the watermark is determined by the supplied EventTimePolicy instance.
This processor also drops late items: it never lets an event that is late with regard to an already emitted watermark pass.

The processor saves the value of the last emitted watermark to the snapshot. Different instances of this processor can be at different watermarks at snapshot time. After a restart, all instances will start at the watermark of the most-behind instance before the restart.

This might sound as if it could break the monotonicity requirement, but thanks to watermark coalescing, watermarks are only delivered for downstream processing after they have been received from all upstream processors. A side effect of this is that a late event, which was dropped before the restart, is not considered late after the restart.
Type Parameters:
T - the type of the stream item

@Nonnull public static <T,R> SupplierEx<Processor> mapP(@Nonnull FunctionEx<? super T,? extends R> mapFn)

Returns a supplier of processors for a vertex which, for each received item, emits the result of applying the given mapping function to it. If the result is null, it emits nothing. Therefore this vertex can
 be used to implement filtering semantics as well.
 This processor is stateless.
Type Parameters:
T - type of received item
R - type of emitted item
Parameters:
mapFn - a stateless mapping function

@Nonnull public static <T> SupplierEx<Processor> filterP(@Nonnull PredicateEx<? super T> filterFn)

Returns a supplier of processors for a vertex that emits the same items it receives, but only those that pass the given predicate.

This processor is stateless.
Type Parameters:
T - type of received item
Parameters:
filterFn - a stateless predicate to test each received item against

@Nonnull public static <T,R> SupplierEx<Processor> flatMapP(@Nonnull FunctionEx<? super T,? extends Traverser<? extends R>> flatMapFn)

Returns a supplier of processors for a vertex that applies the provided item-to-traverser mapping function to each received item and emits all the items from the resulting traverser.

This processor is stateless.
Type Parameters:
T - received item type
R - emitted item type
Parameters:
flatMapFn - a stateless function that maps the received item to a traverser over output items. It must not return a null traverser, but can return an empty traverser.

@Nonnull public static <T,K,S,R> SupplierEx<Processor> mapStatefulP(long ttl, @Nonnull FunctionEx<? super T,? extends K> keyFn, @Nonnull ToLongFunctionEx<? super T> timestampFn, @Nonnull Supplier<? extends S> createFn, @Nonnull TriFunction<? super S,? super K,? super T,? extends R> statefulMapFn, @Nullable TriFunction<? super S,? super K,? super Long,? extends R> onEvictFn)

Returns a supplier of processors for a vertex that performs a stateful mapping of its input. createFn returns the object that holds the
 state. The processor passes this object along with each input item to
 mapFn, which can update the object's state. For each grouping
 key there's a separate state object. The state object will be included
 in the state snapshot, so it survives job restarts. For this reason the
 object must be serializable. If the mapping function maps an item to
 null, it will have the effect of filtering out that item.
 
 If the given ttl is greater than zero, the processor will
 consider the state object stale if its time-to-live has expired. The
 time-to-live refers to the event time as kept by the watermark: each
 time it processes an event, the processor compares the state object's
 timestamp with the current watermark. If it is less than wm - ttl, it discards the state object. Otherwise it updates the
 timestamp with the current watermark.
Type Parameters:
T - type of the input item
K - type of the key
S - type of the state object
R - type of the mapping function's result
Parameters:
ttl - state object's time to live
keyFn - function to extract the key from an input item
createFn - supplier of the state object
statefulMapFn - the stateful mapping function
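A sketch of a per-key running count with a one-minute event-time TTL, reusing the hypothetical UserEvent record from the session-window sketch; imports as in the earlier sketches:

```java
// Reuses the hypothetical UserEvent record defined earlier.
static SupplierEx<Processor> runningCountP() {
    return Processors.<UserEvent, String, LongAccumulator, Entry<String, Long>>mapStatefulP(
            60_000L,                                // ttl: drop state idle for 1 min of event time
            UserEvent::userId,                      // keyFn
            UserEvent::timestamp,                   // timestampFn
            LongAccumulator::new,                   // createFn: one state object per key
            (acc, key, event) -> {
                acc.add(1);                         // update the per-key state
                return Util.entry(key, acc.get());  // emit the running count so far
            },
            null);                                  // onEvictFn: emit nothing on eviction
}
```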
@Nonnull public static <T,K,S,R> SupplierEx<Processor> flatMapStatefulP(long ttl, @Nonnull FunctionEx<? super T,? extends K> keyFn, @Nonnull ToLongFunctionEx<? super T> timestampFn, @Nonnull Supplier<? extends S> createFn, @Nonnull TriFunction<? super S,? super K,? super T,? extends Traverser<R>> statefulFlatMapFn, @Nullable TriFunction<? super S,? super K,? super Long,? extends Traverser<R>> onEvictFn)

Returns a supplier of processors for a vertex that performs a stateful flat-mapping of its input. createFn returns the object that
 holds the state. The processor passes this object along with each input
 item to mapFn, which can update the object's state. For each
 grouping key there's a separate state object. The state object will be
 included in the state snapshot, so it survives job restarts. For this
 reason the object must be serializable.
 
 If the given ttl is greater than zero, the processor will
 consider the state object stale if its time-to-live has expired. The
 time-to-live refers to the event time as kept by the watermark: each
 time it processes an event, the processor compares the state object's
 timestamp with the current watermark. If it is less than wm - ttl, it discards the state object. Otherwise it updates the
 timestamp with the current watermark.
Type Parameters:
T - type of the input item
K - type of the key
S - type of the state object
R - type of the mapping function's result
Parameters:
ttl - state object's time to live
keyFn - function to extract the key from an input item
createFn - supplier of the state object
statefulFlatMapFn - the stateful mapping function

@Nonnull public static <C,S,T,R> ProcessorSupplier mapUsingServiceP(@Nonnull ServiceFactory<C,S> serviceFactory, @Nonnull BiFunctionEx<? super S,? super T,? extends R> mapFn)

Returns a supplier of processors for a vertex which, for each received item, emits the result of applying the given mapping function to it. The mapping function receives a service object created from the supplied serviceFactory.
 
 If the mapping result is null, the vertex emits nothing.
 Therefore it can be used to implement filtering semantics as well.
 
 Unlike mapStatefulP(long, com.hazelcast.function.FunctionEx<? super T, ? extends K>, com.hazelcast.function.ToLongFunctionEx<? super T>, java.util.function.Supplier<? extends S>, com.hazelcast.jet.function.TriFunction<? super S, ? super K, ? super T, ? extends R>, com.hazelcast.jet.function.TriFunction<? super S, ? super K, ? super java.lang.Long, ? extends R>), which keeps a separate state object for each grouping key, this method creates one service object per processor.
 
While it's allowed to store some local state in the service object, it won't be saved to the snapshot and will misbehave in a fault-tolerant stream processing job.
Type Parameters:
C - type of context object
S - type of service object
T - type of received item
R - type of emitted item
Parameters:
serviceFactory - the service factory
mapFn - a stateless mapping function
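A sketch using a shared, thread-safe service object; a heavier resource such as a database or HTTP client would be wired the same way. The class and method names are illustrative assumptions:

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

import com.hazelcast.jet.core.ProcessorSupplier;
import com.hazelcast.jet.core.processor.Processors;
import com.hazelcast.jet.pipeline.ServiceFactories;
import com.hazelcast.jet.pipeline.ServiceFactory;

public final class ServiceMapSketch {
    // DateTimeFormatter is immutable and thread-safe, so one shared instance
    // per member is safe.
    static final ServiceFactory<?, DateTimeFormatter> FORMATTER =
            ServiceFactories.sharedService(ctx -> DateTimeFormatter.ISO_LOCAL_DATE_TIME);

    static ProcessorSupplier mapTimestampsP() {
        return Processors.mapUsingServiceP(FORMATTER,
                (DateTimeFormatter fmt, Long epochMilli) ->
                        fmt.format(Instant.ofEpochMilli(epochMilli).atZone(ZoneOffset.UTC)));
    }
}
```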
@Nonnull public static <C,S,T,K,R> ProcessorSupplier mapUsingServiceAsyncP(@Nonnull ServiceFactory<C,S> serviceFactory, int maxConcurrentOps, boolean preserveOrder, @Nonnull FunctionEx<T,K> extractKeyFn, @Nonnull BiFunctionEx<? super S,? super T,CompletableFuture<R>> mapAsyncFn)

Asynchronous version of mapUsingServiceP(com.hazelcast.jet.pipeline.ServiceFactory<C, S>, com.hazelcast.function.BiFunctionEx<? super S, ? super T, ? extends R>): the mapAsyncFn returns a CompletableFuture<R> instead of just
 R.
 The function can return a null future and the future can return a null result: in both cases it will act just like a filter.
 The extractKeyFn is used to extract keys under which to save
 in-flight items to the snapshot. If the input to this processor is over
 a partitioned edge, you should use the same key. If it's a round-robin
 edge, you can use any key, for example Object::hashCode.
Type Parameters:
C - type of context object
S - type of service object
T - type of received item
K - type of key
R - type of result item
Parameters:
serviceFactory - the service factory
maxConcurrentOps - maximum number of concurrent async operations per processor
preserveOrder - whether the async responses are ordered or not
extractKeyFn - a function to extract snapshot keys. Used only if preserveOrder == false
mapAsyncFn - a stateless mapping function

@Nonnull public static <C,S,T> ProcessorSupplier filterUsingServiceP(@Nonnull ServiceFactory<C,S> serviceFactory, @Nonnull BiPredicateEx<? super S,? super T> filterFn)

Returns a supplier of processors for a vertex that emits the same items it receives, but only those that pass the given predicate. The predicate receives a service object created from the supplied serviceFactory.
 While it's allowed to store some local state in the service object, it won't be saved to the snapshot and will misbehave in a fault-tolerant stream processing job.
Type Parameters:
C - type of context object
S - type of service object
T - type of received item
Parameters:
serviceFactory - the service factory
filterFn - a stateless predicate to test each received item against

@Nonnull public static <C,S,T,R> ProcessorSupplier flatMapUsingServiceP(@Nonnull ServiceFactory<C,S> serviceFactory, @Nonnull BiFunctionEx<? super S,? super T,? extends Traverser<R>> flatMapFn)

Returns a supplier of processors for a vertex that applies the provided item-to-traverser mapping function to each received item and emits all the items from the resulting traverser. The mapping function receives a service object created from the supplied serviceFactory.
 While it's allowed to store some local state in the service object, it won't be saved to the snapshot and will misbehave in a fault-tolerant stream processing job.
Type Parameters:
C - type of context object
S - type of service object
T - type of input item
R - type of result item
Parameters:
serviceFactory - the service factory
flatMapFn - a stateless function that maps the received item to a traverser over the output items

@Nonnull public static <T> SupplierEx<Processor> sortP(Comparator<T> comparator)

Returns a supplier of processors for a vertex that sorts its input using a PriorityQueue and emits it in the complete phase.
 
 The output edge of this vertex should be distributed, monotonicOrder and allToOne so it preserves the ordering when merging the data from all upstream processors.
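A sketch of creating the sorting vertex for Long items; the edge configuration noted above is left to the DAG builder, and the method name is an illustrative assumption:

```java
// Each processor sorts its local input; the downstream merge edge must follow
// the note above. Imports as in the earlier sketches.
static Vertex addSortVertex(DAG dag) {
    return dag.newVertex("sort", Processors.sortP(ComparatorEx.<Long>naturalOrder()));
}
```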
@Nonnull public static SupplierEx<Processor> noopP()

Returns a supplier of a processor that swallows all its normal input (if any), does nothing with it, forwards the watermarks, produces no output and completes immediately.
Copyright © 2023 Hazelcast, Inc. All rights reserved.