Interface | Description |
---|---|
BatchSource&lt;T&gt; | Represents a finite source of data for a Jet pipeline. |
BatchStage&lt;T&gt; | Represents a stage in a distributed computation pipeline that will observe a finite amount of data (a batch). |
GeneralStage&lt;T&gt; | |
GeneralStageWithGrouping&lt;T,K&gt; | Represents an intermediate step when constructing a group-and-aggregate pipeline stage. |
Pipeline | Models a distributed computation job using an analogy with a system of interconnected water pipes. |
Sink&lt;T&gt; | Represents a data sink in a Jet pipeline. |
SinkStage | A pipeline stage that doesn't allow any downstream stages to be attached to it. |
Stage | The basic element of a Jet pipeline; represents a computation step. |
StageWithGrouping&lt;T,K&gt; | Represents an intermediate step while constructing a group-and-aggregate batch pipeline stage. |
StageWithGroupingAndWindow&lt;T,K&gt; | Represents an intermediate step in the construction of a pipeline stage that performs a windowed group-and-aggregate operation. |
StageWithWindow&lt;T&gt; | Represents an intermediate step in the construction of a pipeline stage that performs a windowed aggregate operation. |
StreamSource&lt;T&gt; | Represents an infinite source of data for a Jet pipeline. |
StreamStage&lt;T&gt; | Represents a stage in a distributed computation pipeline that will observe an unbounded amount of data (i.e., an event stream). |
StreamStageWithGrouping&lt;T,K&gt; | Represents an intermediate step while constructing a windowed group-and-aggregate pipeline stage. |
WindowDefinition | The definition of the window for a windowed aggregation operation. |
Class | Description |
---|---|
AggregateBuilder&lt;T0&gt; | Offers a step-by-step fluent API to build a pipeline stage that co-aggregates the data from several input stages. |
ContextFactories | Utility class with methods that create several useful kinds of context factories. |
ContextFactory&lt;C&gt; | A holder of functions needed to create and destroy a context object that can be used in the Pipeline API to give the transforming functions (map, filter, flatMap) access to some resource that would be too expensive to acquire and release on each function call. |
GeneralHashJoinBuilder&lt;T0&gt; | Offers a step-by-step fluent API to build a hash-join pipeline stage. |
GroupAggregateBuilder&lt;T0,K&gt; | Offers a step-by-step fluent API to build a pipeline stage that co-groups and aggregates the data from several input stages. |
HashJoinBuilder&lt;T0&gt; | Offers a step-by-step fluent API to build a hash-join pipeline stage. |
JoinClause&lt;K,T0,T1,T1_OUT&gt; | Specifies how to join an enriching stream to the primary stream in a hash-join operation. |
SessionWindowDef&lt;T&gt; | Represents the definition of a session window. |
SinkBuilder&lt;W,T&gt; | |
Sinks | Contains factory methods for various types of pipeline sinks. |
SlidingWindowDef | Represents the definition of a sliding window. |
Sources | Contains factory methods for various types of pipeline sources. |
StreamHashJoinBuilder&lt;T0&gt; | Offers a step-by-step fluent API to build a hash-join pipeline stage. |
WindowAggregateBuilder&lt;T0&gt; | Offers a step-by-step fluent API to build a pipeline stage that performs a windowed co-aggregation of the data from several input stages. |
WindowGroupAggregateBuilder&lt;T0,K&gt; | Offers a step-by-step fluent API to build a pipeline stage that performs a windowed co-grouping and aggregation of the data from several input stages. |
Enum | Description |
---|---|
JournalInitialPosition | When passed to an IMap/ICache Event Journal source, specifies which event to start from. |
WindowDefinition.WindowKind | Enumerates the kinds of window that Jet supports. |
The basic element is a pipeline stage, which can be attached to one or more other stages in both the upstream and the downstream direction. A stage accepts the data coming from its upstream stages, transforms it, and directs the resulting data to its downstream stages. The simplest transformations have a single upstream stage and a single downstream stage; examples are `map`, `filter`, and `flatMap`.
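A minimal batch pipeline chaining these basic transforms might look like the sketch below. The source and sink names (`"lines"`, `"upperCasedLines"`) are illustrative; `Sources.list` and `Sinks.list` read from and write to a Hazelcast IList.

```java
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.Sources;

public class BasicTransforms {
    public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        p.drawFrom(Sources.<String>list("lines"))   // finite (batch) source
         .filter(line -> !line.isEmpty())           // drop blank lines
         .map(String::toUpperCase)                  // one-to-one transform
         .drainTo(Sinks.list("upperCasedLines"));   // sink stage: no downstream
    }
}
```

Each call returns a new stage attached downstream of the previous one, so the pipeline graph is built fluently without executing anything until the job is submitted.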
The `aggregate*()` transformations perform an aggregate operation on a set of items. You can call `stage.groupingKey()` to group the items by a key, and then Jet will aggregate each group separately. For stream stages you must specify a `stage.window()`, which will transform the infinite stream into a series of finite windows. If you specify more than one input stage for the aggregation (using `stage.aggregate2()`, `stage.aggregate3()`, or `stage.aggregateBuilder()`), the data from all the streams will be combined into the aggregation result. The `AggregateOperation` you supply must define a separate `accumulate` primitive for each contributing stream. Refer to its Javadoc for further details.
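As a sketch of a windowed group-and-aggregate, the following counts events per key over a sliding window. The journal name `"events"`, the timestamp extraction, and the window sizes are assumptions for illustration:

```java
import static com.hazelcast.jet.aggregate.AggregateOperations.counting;

import com.hazelcast.jet.pipeline.JournalInitialPosition;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.Sources;
import com.hazelcast.jet.pipeline.WindowDefinition;

public class WindowedCount {
    public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        // Infinite source: the event journal of an IMap named "events"
        // whose values are assumed to carry an event timestamp (millis)
        p.drawFrom(Sources.<String, Long>mapJournal("events",
                        JournalInitialPosition.START_FROM_OLDEST))
         .addTimestamps(e -> e.getValue(), 1_000)        // event time, 1 s allowed lag
         .groupingKey(e -> e.getKey())                   // aggregate each key separately
         .window(WindowDefinition.sliding(60_000, 1_000))// 1-minute window, 1 s slide
         .aggregate(counting())                          // count per key per window
         .drainTo(Sinks.logger());
    }
}
```

Without `window()`, Jet would have no point at which to emit a result for an unbounded stream; the window is what turns the infinite input into a series of finite aggregations.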
The hash-join transform joins a primary stage with one or more enriching stages whose data typically comes from a reference source (such as an `IMap`). An enriching stage must be a batch stage, and each of its items must have a distinct join key. The primary stage, on the other hand, may be either a batch or a stream stage and may contain duplicate keys.
For each of the enriching stages there is a separate pair of functions to extract the joining key on both sides. For example, a `Trade` can be joined with both a `Broker` on `trade.getBrokerId() == broker.getId()` and a `Product` on `trade.getProductId() == product.getId()`, and all of this can happen in a single hash-join transform.
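This two-way enrichment can be sketched with `hashJoin2`. The domain classes `Trade`, `Broker`, and `Product` (with the getters shown) and the map/list names are assumptions; `joinMapEntries` builds a `JoinClause` that matches a key extracted from the primary item against the key of an `IMap` entry:

```java
import static com.hazelcast.jet.pipeline.JoinClause.joinMapEntries;

import com.hazelcast.jet.datamodel.Tuple3;
import com.hazelcast.jet.pipeline.BatchStage;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.Sources;

import java.util.Map.Entry;

public class EnrichTrades {
    public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        // Primary stage: may contain duplicate broker/product ids
        BatchStage<Trade> trades = p.drawFrom(Sources.list("trades"));
        // Enriching stages: finite datasets keyed by distinct ids
        BatchStage<Entry<Integer, Broker>> brokers =
                p.drawFrom(Sources.map("brokers"));
        BatchStage<Entry<Integer, Product>> products =
                p.drawFrom(Sources.map("products"));
        trades.hashJoin2(
                  brokers, joinMapEntries(Trade::getBrokerId),
                  products, joinMapEntries(Trade::getProductId),
                  Tuple3::tuple3)   // (trade, broker, product)
              .drainTo(Sinks.list("enrichedTrades"));
    }
}
```

Both key-extraction pairs live in their own `JoinClause`, which is what lets a single transform enrich the same `Trade` from two independent reference datasets.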
In terms of implementation, the hash-join transform is optimized for throughput: each computing member keeps a local copy of all the enriching data, stored in hash tables (hence the name). The enriching streams are consumed in full before any data from the primary stream is ingested.
Copyright © 2018 Hazelcast, Inc. All rights reserved.