Represents a stage in a distributed computation.
Represents either an instance of a Jet server node or a Jet client instance that connects to a remote cluster.
A Jet computation job created from a DAG or Pipeline.
Models a distributed computation job using an analogy with a system of interconnected water pipes.
A transform which accepts an input stream and produces no output streams.
A pipeline stage that doesn't allow any downstream stages to be attached to it.
A transform that takes no input streams and produces an output stream.
The basic element of a Jet pipeline.
Represents the data transformation performed by a pipeline stage.
Traverses a potentially infinite sequence of non-null items.
Offers a step-by-step fluent API to build a co-grouping pipeline stage by adding any number of contributing stages.
Offers a step-by-step fluent API to build a hash-join pipeline stage.
Factories of Apache Hadoop HDFS sinks.
Contains factory methods for Apache Hadoop HDFS sources.
Entry point to the Jet product.
Specifies how to join an enriching stream to the primary stream in a hash-join operation.
Contains factory methods for Apache Kafka sinks.
Contains factory methods for Apache Kafka sources.
Contains factory methods for various types of pipeline sinks.
Contains factory methods for various types of pipeline sources.
Miscellaneous utility methods useful in DAG building logic.
Base Jet exception.
The basic element is a pipeline stage which can be attached to one or more other stages, both in the upstream and the downstream direction. A stage accepts the data coming from its upstream stages, transforms it, and directs the resulting data to its downstream stages.
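This upstream/downstream relationship can be illustrated with a minimal plain-Java sketch. The class `MiniStage` below is a hypothetical, simplified model for illustration only, not the Jet API: each stage takes the data of its upstream stage, transforms it, and hands the result to whatever is attached downstream.

```java
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

// Hypothetical, simplified model of a pipeline stage: a stage holds the data
// produced so far, and attaching a downstream stage applies a transform to it.
final class MiniStage<T> {
    private final List<T> items;

    private MiniStage(List<T> items) { this.items = items; }

    // A source stage has no upstream; it simply emits the given items.
    static <T> MiniStage<T> source(List<T> items) {
        return new MiniStage<>(items);
    }

    // Attach a downstream stage that transforms each item from this stage.
    <R> MiniStage<R> attach(Function<? super T, ? extends R> transform) {
        return new MiniStage<>(items.stream().map(transform).collect(Collectors.toList()));
    }

    List<T> results() { return items; }
}
```

For example, `MiniStage.source(List.of("a", "bb")).attach(String::length)` yields a downstream stage whose data is the lengths of the upstream items.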
The groupBy transformation groups items by key and performs an aggregate operation on each group. It outputs the results of the aggregate operation, one for each observed distinct key.
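The semantics of this transformation can be sketched with plain Java streams (a standalone analogy, not the Jet API; the class and method names are illustrative):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

final class GroupByDemo {
    // Group items by a key (here, the item itself) and aggregate each group
    // with a count -- producing one result per observed distinct key.
    static Map<String, Long> countByKey(List<String> items) {
        return items.stream()
                    .collect(Collectors.groupingBy(item -> item, Collectors.counting()));
    }
}
```

Given the input `["a", "b", "a"]`, the aggregate produces one entry per distinct key: `a` maps to 2 and `b` maps to 1.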
The coGroup transformation groups items by key in several streams at once and performs an aggregate operation on all groups that share the same key, separately for each key. It outputs the results of the aggregate operation, one for each observed distinct key.
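The following plain-Java sketch (illustrative names, not the Jet API) shows the essence of co-grouping: two inputs are grouped by a shared key, and for each distinct key one aggregate runs over both groups together.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.Function;

final class CoGroupDemo {
    // Co-group two inputs by key: for each distinct key seen in either input,
    // collect the matching items from both inputs and aggregate them together
    // (here the aggregate is simply the combined item count).
    static <A, B, K> Map<K, Integer> coGroupCount(List<A> as, Function<A, K> keyA,
                                                  List<B> bs, Function<B, K> keyB) {
        Map<K, List<A>> groupA = new HashMap<>();
        for (A a : as) groupA.computeIfAbsent(keyA.apply(a), k -> new ArrayList<>()).add(a);
        Map<K, List<B>> groupB = new HashMap<>();
        for (B b : bs) groupB.computeIfAbsent(keyB.apply(b), k -> new ArrayList<>()).add(b);

        Set<K> keys = new HashSet<>(groupA.keySet());
        keys.addAll(groupB.keySet());

        Map<K, Integer> result = new HashMap<>();
        for (K k : keys) {
            result.put(k, groupA.getOrDefault(k, List.of()).size()
                        + groupB.getOrDefault(k, List.of()).size());
        }
        return result;
    }
}
```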
An enriching stage typically draws its data from a lookup source (such as a Hazelcast IMap). Its data stream must be finite and each item must have a distinct join key. The primary stage, on the other hand, may be infinite and may contain duplicate keys.
For each of the enriching stages there is a separate pair of functions to extract the joining key on both sides. For example, a Trade can be joined with both a Broker on trade.getBrokerId() == broker.getId() and a Product on trade.getProductId() == product.getId(), and all this can happen in a single hash-join transform.
In terms of implementation, the hash-join transform is optimized for throughput: each computing member keeps a local copy of all the enriching data, stored in hashtables (hence the name). The enriching streams are consumed in full before any data from the primary stream is ingested.
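The mechanism described above can be sketched in plain Java (hypothetical names, not the Jet API): the enriching data is first consumed in full into a hashtable keyed by the join key, and only then is the primary stream scanned, with each item enriched by a hashtable lookup.

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

final class HashJoinDemo {
    // Join each primary item with the matching enriching item. Build phase:
    // the (finite) enriching stream is loaded in full into a hashtable, whose
    // keys must be distinct. Probe phase: the primary stream, which may be
    // unbounded and may repeat keys, is scanned and each item is paired with
    // the result of a key lookup.
    static <T, E, K> List<SimpleEntry<T, E>> hashJoin(List<T> primary, Function<T, K> leftKey,
                                                      List<E> enriching, Function<E, K> rightKey) {
        Map<K, E> table = new HashMap<>();
        for (E e : enriching) table.put(rightKey.apply(e), e);          // build phase
        List<SimpleEntry<T, E>> out = new ArrayList<>();
        for (T t : primary) out.add(new SimpleEntry<>(t, table.get(leftKey.apply(t))));  // probe phase
        return out;
    }
}
```

Since every member holds the whole hashtable locally, the probe phase needs no network hop per primary item, which is the source of the transform's throughput advantage.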
Copyright © 2017 Hazelcast, Inc. All Rights Reserved.