Uses of Package com.hazelcast.jet.pipeline

Packages that use com.hazelcast.jet.pipeline (package descriptions):
Hazelcast Jet is a distributed computation engine running on top of
Hazelcast IMDG technology.
Apache Avro file read/write support for Hazelcast Jet.
Contains source/sink connectors that deal with Change Data Capture (CDC)
events from various databases as well as a generic connector for Debezium
CDC sources.
Contains connectors for change data capture events from MySQL
databases.
Contains connectors for change data capture events from PostgreSQL
databases.
Contains static utility classes with factories of Jet processors.
Contains sources and sinks for Elasticsearch 7.
Contributes gRPC service factories that can be used to apply transformations to a pipeline, calling a gRPC service for each input item.
Apache Hadoop read/write support for Hazelcast Jet.
Apache Kafka reader/writer support for Hazelcast Jet.
Contains a generic Kafka Connect source that provides the ability to plug any Kafka Connect source into Jet pipelines for data ingestion.
Amazon Kinesis Data Streams producer/consumer support for Hazelcast Jet.
Contains sources and sinks for MongoDB.
The Pipeline API is Jet's high-level API to build and execute
distributed computation jobs.
This package offers the FileSourceBuilder, which allows you to construct various kinds of Pipeline sources that read from local or distributed files.
This package contains various mock sources to help with pipeline testing and development.
Contributes a PythonTransforms.mapUsingPython(com.hazelcast.jet.python.PythonServiceConfig) transform that allows you to transform Jet pipeline data using a Python function.
AWS S3 read/write support for Hazelcast Jet.
Provides Jet related Spring interfaces/classes for Hazelcast.
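Since the Pipeline API is the entry point that the connector packages above plug into, a minimal batch pipeline may help orient readers. This is a sketch, not a canonical example from the documentation; it assumes Hazelcast Jet (4.x or later) on the classpath and uses the mock sources from the testing package:

```java
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.test.TestSources;

public class PipelineSketch {
    public static void main(String[] args) {
        // Declare the computation as a DAG of stages: source -> map -> sink.
        Pipeline p = Pipeline.create();
        p.readFrom(TestSources.items(1, 2, 3))   // finite (batch) source
         .map(x -> x * 10)                       // per-item transform
         .writeTo(Sinks.logger());               // log each result

        // Declaring the pipeline runs nothing by itself; to execute it,
        // submit it as a job to a running Jet member, e.g.:
        // Hazelcast.bootstrappedInstance().getJet().newJob(p).join();
    }
}
```

The declaration/execution split is the key design point: the Pipeline object is an inert description that Jet distributes across the cluster only when it is submitted as a job.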
Classes in com.hazelcast.jet.pipeline used by each of the packages above (class descriptions; one group per depending package):

-
Models a distributed computation job using an analogy with a system of interconnected water pipes.

-
A finite source of data for a Jet pipeline.
A data sink in a Jet pipeline.

-
A data sink in a Jet pipeline.
An infinite source of data for a Jet pipeline.

-
Represents a reference to the data connection, used with Sources.jdbc(DataConnectionRef, ToResultSetFunction, FunctionEx).
When passed to an IMap/ICache Event Journal source, specifies which event to start from.
A holder of functions needed to create and destroy a service object used in pipeline transforms such as stage.mapUsingService().
The buffer object that the fillBufferFn gets on each call.
The buffer object that the fillBufferFn gets on each call.

-
A finite source of data for a Jet pipeline.
A data sink in a Jet pipeline.

-
A holder of functions needed to create and destroy a service object used in pipeline transforms such as stage.mapUsingService().

-
A finite source of data for a Jet pipeline.
A data sink in a Jet pipeline.

-
Represents a reference to the data connection, used with Sources.jdbc(DataConnectionRef, ToResultSetFunction, FunctionEx).
A data sink in a Jet pipeline.
An infinite source of data for a Jet pipeline.

-
A data sink in a Jet pipeline.
An infinite source of data for a Jet pipeline.

-
A finite source of data for a Jet pipeline.
Represents a reference to the data connection, used with Sources.jdbc(DataConnectionRef, ToResultSetFunction, FunctionEx).
A data sink in a Jet pipeline.
An infinite source of data for a Jet pipeline.

-
Offers a step-by-step API to build a pipeline stage that co-aggregates the data from several input stages.
Offers a step-by-step API to build a pipeline stage that co-aggregates the data from several input stages.
A finite source of data for a Jet pipeline.
A stage in a distributed computation pipeline that will observe a finite amount of data (a batch).
An intermediate step while constructing a group-and-aggregate batch pipeline stage.
Represents a reference to the data connection, used with Sources.jdbc(DataConnectionRef, ToResultSetFunction, FunctionEx).
Builder for a file source which reads lines from files in a directory (but not its subdirectories) and emits output objects created by mapOutputFn.
Offers a step-by-step fluent API to build a hash-join pipeline stage.
An intermediate step when constructing a group-and-aggregate pipeline stage.
Offers a step-by-step API to build a pipeline stage that co-groups and aggregates the data from several input stages.
Offers a step-by-step API to build a pipeline stage that co-groups and aggregates the data from several input stages.
Offers a step-by-step fluent API to build a hash-join pipeline stage.
See Sinks.jdbcBuilder().
Specifies how to join an enriching stream to the primary stream in a hash-join operation.
When passed to an IMap/ICache Event Journal source, specifies which event to start from.
Builder for a map that is used as a sink.
Parameters for using a map as a sink with an EntryProcessor: TODO review if this can be merged with MapSinkEntryProcessorBuilder, add full javadoc if not.
Models a distributed computation job using an analogy with a system of interconnected water pipes.
Builder providing a fluent API to build a remote map source.
A holder of functions needed to create and destroy a service object used in pipeline transforms such as stage.mapUsingService().
Represents the definition of a session window.
A data sink in a Jet pipeline.
A pipeline stage that doesn't allow any downstream stages to be attached to it.
Represents the definition of a sliding window.
Represents a step in building a custom source where you add a SourceBuilder.FaultTolerant.restoreSnapshotFn(com.hazelcast.function.BiConsumerEx<? super C, ? super java.util.List<S>>) after adding a createSnapshotFn.
The buffer object that the fillBufferFn gets on each call.
The buffer object that the fillBufferFn gets on each call.
The basic element of a Jet pipeline; represents a computation step.
Represents an intermediate step in the construction of a pipeline stage that performs a windowed group-and-aggregate operation.
Represents an intermediate step in the construction of a pipeline stage that performs a windowed aggregate operation.
Offers a step-by-step fluent API to build a hash-join pipeline stage.
An infinite source of data for a Jet pipeline.
A source stage in a distributed computation pipeline that will observe an unbounded amount of data (i.e., an event stream).
A stage in a distributed computation pipeline that will observe an unbounded amount of data (i.e., an event stream).
An intermediate step while constructing a pipeline transform that involves a grouping key, such as windowed group-and-aggregate.
Offers a step-by-step fluent API to build a pipeline stage that performs a windowed co-aggregation of the data from several input stages.
Offers a step-by-step fluent API to build a pipeline stage that performs a windowed co-aggregation of the data from several input stages.
The definition of the window for a windowed aggregation operation.
Offers a step-by-step API to build a pipeline stage that performs a windowed co-grouping and aggregation of the data from several input stages.
Offers a step-by-step API to build a pipeline stage that performs a windowed co-grouping and aggregation of the data from several input stages.

-
A finite source of data for a Jet pipeline.
A stage in a distributed computation pipeline that will observe a finite amount of data (a batch).
A data sink in a Jet pipeline.
An infinite source of data for a Jet pipeline.
A stage in a distributed computation pipeline that will observe an unbounded amount of data (i.e., an event stream).

-
A finite source of data for a Jet pipeline.
A data sink in a Jet pipeline.

-
A holder of functions needed to create and destroy a service object used in pipeline transforms such as stage.mapUsingService().
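The repeated entry "the buffer object that the fillBufferFn gets on each call" refers to the buffer types nested in SourceBuilder. As a hedged sketch of how a custom batch source fills that buffer (the source name, file path, and helper class here are illustrative, not from the library):

```java
import java.io.BufferedReader;
import java.io.FileReader;

import com.hazelcast.jet.pipeline.BatchSource;
import com.hazelcast.jet.pipeline.SourceBuilder;

public class CustomSourceSketch {
    // Builds a finite source that emits the lines of a single local file.
    public static BatchSource<String> fileLines(String path) {
        return SourceBuilder
                // createFn: runs once per processor to create the state object
                .batch("file-lines", ctx -> new BufferedReader(new FileReader(path)))
                // fillBufferFn: Jet calls this repeatedly with the buffer object;
                // add items to it, and close it to signal the batch is exhausted
                .<String>fillBufferFn((reader, buf) -> {
                    String line = reader.readLine();
                    if (line != null) {
                        buf.add(line);
                    } else {
                        buf.close();
                    }
                })
                // destroyFn: releases the state object when the job completes
                .destroyFn(BufferedReader::close)
                .build();
    }
}
```

The timestamped variant of the buffer differs only in that items are added together with an event timestamp, which is why the description appears twice in the list above.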