Package | Description
---|---
com.hazelcast.client.impl | Contains most of the client side HazelcastInstance implementation functionality.
com.hazelcast.client.impl.protocol.codec | Client protocol custom codec implementations.
com.hazelcast.client.impl.protocol.task.mapreduce | Client protocol task implementations for map reduce.
com.hazelcast.client.impl.protocol.template |
com.hazelcast.client.proxy | This package contains client side proxy implementations of the different Hazelcast data structures and operation types.
com.hazelcast.config | Provides classes for configuring HazelcastInstance.
com.hazelcast.core | Provides core API interfaces/classes.
com.hazelcast.instance | This package contains Hazelcast Instance functionality.
com.hazelcast.jca | This package contains JCA functionality.
com.hazelcast.map.impl.proxy | Contains the map proxy implementation and support classes.
com.hazelcast.mapreduce | This package contains the MapReduce API definition for Hazelcast. All map reduce operations run in a distributed manner inside the active Hazelcast cluster.
com.hazelcast.mapreduce.aggregation | This package contains the aggregation API and the convenience helper classes to retrieve predefined aggregation implementations (see the sketch after this table).
com.hazelcast.mapreduce.aggregation.impl | This package contains a set of predefined aggregation implementations.
com.hazelcast.mapreduce.impl | This package contains the default implementation for the map reduce framework internals.
com.hazelcast.mapreduce.impl.client | This package contains request and response classes for communication between cluster members and Hazelcast native clients.
com.hazelcast.mapreduce.impl.operation | This package contains all remote operations that are needed to control work on supervising or worker nodes.
com.hazelcast.mapreduce.impl.task | This package contains the base implementation for a standard map reduce job.
com.hazelcast.multimap.impl | Contains classes for the Hazelcast MultiMap module.
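The aggregation API sits on top of this MapReduce framework. A minimal sketch, assuming the Hazelcast 3.x `IMap.aggregate(Supplier, Aggregation)` entry point; the map name and contents are illustrative:

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import com.hazelcast.mapreduce.aggregation.Aggregations;
import com.hazelcast.mapreduce.aggregation.Supplier;

public class AggregationSketch {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<String, Integer> salaries = hz.getMap("salaries");
        salaries.put("alice", 1000);
        salaries.put("bob", 1200);

        // Supplier.all() passes every value through unchanged;
        // Aggregations.integerSum() reduces them to a single sum.
        int total = salaries.aggregate(Supplier.<String, Integer>all(),
                Aggregations.integerSum());
        System.out.println("Total: " + total);
        hz.shutdown();
    }
}
```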
Class | Description
---|---
JobTracker | The JobTracker interface is used to create instances of Job depending on the given data structure / data source (see the sketch after this table).
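A minimal sketch of obtaining a JobTracker and creating a Job from an IMap-backed KeyValueSource; the tracker and map names are illustrative:

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import com.hazelcast.mapreduce.Job;
import com.hazelcast.mapreduce.JobTracker;
import com.hazelcast.mapreduce.KeyValueSource;

public class JobTrackerSketch {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        // Named tracker; behavior is configurable via JobTrackerConfig.
        JobTracker tracker = hz.getJobTracker("default");

        IMap<String, String> articles = hz.getMap("articles");
        KeyValueSource<String, String> source = KeyValueSource.fromMap(articles);
        // The Job is typed by the source's key/value types.
        Job<String, String> job = tracker.newJob(source);
        hz.shutdown();
    }
}
```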
Class | Description
---|---
JobPartitionState | An implementation of this interface contains current information about the status of a piece of the process while the operation is executing.
Class | Description
---|---
CombinerFactory |
KeyPredicate | This interface is used to pre-evaluate keys before spreading the MapReduce task to the cluster.
KeyValueSource |
Mapper | The Mapper interface is used to build mappers for the Job (see the sketch after this table).
ReducerFactory | A ReducerFactory implementation is used to build Reducer instances per key. An implementation needs to be serializable by Hazelcast since it might be distributed inside the cluster to do parallel calculations for reducing.
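A minimal Mapper sketch, assuming the Hazelcast 3.x `map(key, value, context)` signature; the tokenizing logic and class name are illustrative:

```java
import com.hazelcast.mapreduce.Context;
import com.hazelcast.mapreduce.Mapper;

// Tokenizes each stored document and emits (token, 1) pairs into the
// intermediate working space via the Context.
public class TokenMapper implements Mapper<String, String, String, Long> {
    @Override
    public void map(String key, String document, Context<String, Long> context) {
        for (String token : document.toLowerCase().split("\\s+")) {
            context.emit(token, 1L);
        }
    }
}
```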
Class | Description
---|---
Job | This interface describes a mapreduce Job that is built by JobTracker.newJob(KeyValueSource). It is used to execute mappings and calculations on the different cluster nodes and to reduce or collate these mapped values into results.
JobTracker | The JobTracker interface is used to create instances of Job depending on the given data structure / data source.
KeyValueSource |
TrackableJob | This interface describes a trackable job (see the sketch after this table).
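A minimal tracking sketch, assuming a job was already submitted and its id retained; `JobTracker.getTrackableJob(jobId)` and `getJobProcessInformation()` follow the Hazelcast 3.x API:

```java
import com.hazelcast.mapreduce.JobProcessInformation;
import com.hazelcast.mapreduce.JobTracker;
import com.hazelcast.mapreduce.TrackableJob;

public final class JobMonitor {
    // Looks up a previously submitted job by id and reads its progress.
    static void printProgress(JobTracker tracker, String jobId) {
        TrackableJob<?> trackable = tracker.getTrackableJob(jobId);
        if (trackable == null) {
            System.out.println("Job " + jobId + " is no longer tracked");
            return;
        }
        JobProcessInformation info = trackable.getJobProcessInformation();
        System.out.println("Processed records so far: " + info.getProcessedRecords());
    }
}
```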
Class | Description
---|---
TopologyChangedStrategy | This enum class is used to define how a map reduce job behaves if the job owner recognizes a topology change event. When members leave the cluster, processed data chunks that were already sent to the reducers on the leaving node might be lost. Also, on any topology change, there is a redistribution of the member-assigned partitions, which means that a map job might have a problem finishing its currently processed partition. The default behavior is to immediately cancel the running task and throw a TopologyChangedException, but it is possible to submit the same job configuration again if JobTracker.getTrackableJob(String) returns null for the requested job id (see the sketch after this table).
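A sketch of both halves of that description, assuming Job exposes the fluent `topologyChangedStrategy(...)` setter as in Hazelcast 3.x; the helper names are illustrative:

```java
import com.hazelcast.mapreduce.Job;
import com.hazelcast.mapreduce.JobTracker;
import com.hazelcast.mapreduce.KeyValueSource;
import com.hazelcast.mapreduce.TopologyChangedStrategy;

public final class TopologySketch {
    // Configures the default strategy explicitly: cancel the running task on a
    // topology change and surface a TopologyChangedException to the caller.
    static <K, V> Job<K, V> newCancellingJob(JobTracker tracker, KeyValueSource<K, V> source) {
        return tracker.newJob(source)
                .topologyChangedStrategy(TopologyChangedStrategy.CANCEL_RUNNING_OPERATION);
    }

    // Per the description above: a null TrackableJob means the job id is free,
    // so the same job configuration can be submitted again.
    static boolean canResubmit(JobTracker tracker, String jobId) {
        return tracker.getTrackableJob(jobId) == null;
    }
}
```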
Class | Description
---|---
Collator | This interface can be implemented to define a Collator, which is executed after calculation of the MapReduce algorithm on remote cluster nodes but before returning the final result. A Collator can, for example, be used to sum up a final value.
Combiner |
CombinerFactory |
Context | The Context interface is used for emitting keys and values to the intermediate working space of the MapReduce algorithm.
Job | This interface describes a mapreduce Job that is built by JobTracker.newJob(KeyValueSource). It is used to execute mappings and calculations on the different cluster nodes and to reduce or collate these mapped values into results (see the end-to-end sketch after this table).
JobCompletableFuture | This is a special version of ICompletableFuture that returns the assigned job id of the submit operation.
JobPartitionState | An implementation of this interface contains current information about the status of a piece of the process while the operation is executing.
JobPartitionState.State | Definition of the processing states.
JobProcessInformation | This interface holds basic information about a running map reduce job, such as the state of the different partitions and the number of currently processed records. The number of processed records is not a real-time value; it is updated on a regular basis (after 1000 processed elements per node).
JobTracker | The JobTracker interface is used to create instances of Job depending on the given data structure / data source.
KeyPredicate | This interface is used to pre-evaluate keys before spreading the MapReduce task to the cluster.
KeyValueSource |
LifecycleMapper | The LifecycleMapper interface is a more sophisticated version of Mapper, normally used for more complex algorithms that need initialization and finalization.
Mapper | The Mapper interface is used to build mappers for the Job.
MappingJob | This interface describes a mapping mapreduce Job. For further information, see Job.
Reducer | The abstract Reducer class is used to build reducers for the Job. Reducers may be distributed inside the cluster, but there is always only one Reducer per key.
ReducerFactory | A ReducerFactory implementation is used to build Reducer instances per key. An implementation needs to be serializable by Hazelcast since it might be distributed inside the cluster to do parallel calculations for reducing.
ReducingJob | This interface describes a reducing mapreduce Job. For further information, see Job.
ReducingSubmittableJob | This interface describes a submittable mapreduce Job. For further information, see Job.
TopologyChangedStrategy | This enum class is used to define how a map reduce job behaves if the job owner recognizes a topology change event. When members leave the cluster, processed data chunks that were already sent to the reducers on the leaving node might be lost. Also, on any topology change, there is a redistribution of the member-assigned partitions, which means that a map job might have a problem finishing its currently processed partition. The default behavior is to immediately cancel the running task and throw a TopologyChangedException, but it is possible to submit the same job configuration again if JobTracker.getTrackableJob(String) returns null for the requested job id.
TrackableJob | This interface describes a trackable job.
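An end-to-end word count sketch tying these classes together, assuming Hazelcast 3.x signatures and reusing the hypothetical TokenMapper from the earlier sketch; the map name and sample data are illustrative:

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import com.hazelcast.mapreduce.Job;
import com.hazelcast.mapreduce.JobTracker;
import com.hazelcast.mapreduce.KeyValueSource;
import com.hazelcast.mapreduce.Reducer;
import com.hazelcast.mapreduce.ReducerFactory;

import java.util.Map;
import java.util.concurrent.Future;

public class WordCountSketch {

    // Builds one Reducer per distinct token; each sums the emitted 1L values.
    public static class WordCountReducerFactory implements ReducerFactory<String, Long, Long> {
        @Override
        public Reducer<Long, Long> newReducer(String key) {
            return new Reducer<Long, Long>() {
                private long count;

                @Override
                public void reduce(Long value) {
                    count += value;
                }

                @Override
                public Long finalizeReduce() {
                    return count;
                }
            };
        }
    }

    public static void main(String[] args) throws Exception {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<String, String> articles = hz.getMap("articles");
        articles.put("a", "hazelcast map reduce");
        articles.put("b", "map reduce example");

        JobTracker tracker = hz.getJobTracker("default");
        Job<String, String> job = tracker.newJob(KeyValueSource.fromMap(articles));

        Future<Map<String, Long>> future = job
                .mapper(new TokenMapper())              // hypothetical mapper from the earlier sketch
                .reducer(new WordCountReducerFactory())
                .submit();

        System.out.println(future.get());               // e.g. {hazelcast=1, map=2, reduce=2, example=1}
        hz.shutdown();
    }
}
```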
Class | Description
---|---
Collator | This interface can be implemented to define a Collator, which is executed after calculation of the MapReduce algorithm on remote cluster nodes but before returning the final result. A Collator can, for example, be used to sum up a final value (see the sketch after this table).
CombinerFactory |
KeyPredicate | This interface is used to pre-evaluate keys before spreading the MapReduce task to the cluster.
Mapper | The Mapper interface is used to build mappers for the Job.
ReducerFactory | A ReducerFactory implementation is used to build Reducer instances per key. An implementation needs to be serializable by Hazelcast since it might be distributed inside the cluster to do parallel calculations for reducing.
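A minimal Collator sketch for the word count job above, assuming the Hazelcast 3.x `submit(Collator)` variant in which the collator receives the reduced entries:

```java
import com.hazelcast.mapreduce.Collator;

import java.util.Map;

// Sums all per-word counts into one grand total before the result is
// returned to the caller.
public class TotalCountCollator implements Collator<Map.Entry<String, Long>, Long> {
    @Override
    public Long collate(Iterable<Map.Entry<String, Long>> values) {
        long total = 0;
        for (Map.Entry<String, Long> entry : values) {
            total += entry.getValue();
        }
        return total;
    }
}
```

Wired in as `job.mapper(new TokenMapper()).reducer(new WordCountReducerFactory()).submit(new TotalCountCollator())`, the job then yields a single Long instead of a Map.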
Class | Description
---|---
Collator | This interface can be implemented to define a Collator, which is executed after calculation of the MapReduce algorithm on remote cluster nodes but before returning the final result. A Collator can, for example, be used to sum up a final value.
CombinerFactory |
Job | This interface describes a mapreduce Job that is built by JobTracker.newJob(KeyValueSource). It is used to execute mappings and calculations on the different cluster nodes and to reduce or collate these mapped values into results.
JobCompletableFuture | This is a special version of ICompletableFuture that returns the assigned job id of the submit operation (see the sketch after this table).
JobPartitionState.State | Definition of the processing states.
JobTracker | The JobTracker interface is used to create instances of Job depending on the given data structure / data source.
KeyPredicate | This interface is used to pre-evaluate keys before spreading the MapReduce task to the cluster.
KeyValueSource |
Mapper | The Mapper interface is used to build mappers for the Job.
MappingJob | This interface describes a mapping mapreduce Job. For further information, see Job.
PartitionIdAware | This interface can be used to mark an implementation as being aware of the data partition it is currently working on.
ReducerFactory | A ReducerFactory implementation is used to build Reducer instances per key. An implementation needs to be serializable by Hazelcast since it might be distributed inside the cluster to do parallel calculations for reducing.
ReducingJob | This interface describes a reducing mapreduce Job. For further information, see Job.
ReducingSubmittableJob | This interface describes a submittable mapreduce Job. For further information, see Job.
TopologyChangedStrategy | This enum class is used to define how a map reduce job behaves if the job owner recognizes a topology change event. When members leave the cluster, processed data chunks that were already sent to the reducers on the leaving node might be lost. Also, on any topology change, there is a redistribution of the member-assigned partitions, which means that a map job might have a problem finishing its currently processed partition. The default behavior is to immediately cancel the running task and throw a TopologyChangedException, but it is possible to submit the same job configuration again if JobTracker.getTrackableJob(String) returns null for the requested job id.
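A JobCompletableFuture sketch, assuming the Hazelcast 3.x `getJobId()` accessor and the `ICompletableFuture.andThen(ExecutionCallback)` hook; the helper name and type parameters match the word count example:

```java
import com.hazelcast.core.ExecutionCallback;
import com.hazelcast.mapreduce.JobCompletableFuture;
import com.hazelcast.mapreduce.ReducingSubmittableJob;

import java.util.Map;

public final class SubmitSketch {
    // Submission returns immediately; the job id is available before completion,
    // which is what makes the job trackable via JobTracker.getTrackableJob(jobId).
    static void submitAndWatch(ReducingSubmittableJob<String, String, Long> configured) {
        JobCompletableFuture<Map<String, Long>> future = configured.submit();
        System.out.println("Assigned job id: " + future.getJobId());

        future.andThen(new ExecutionCallback<Map<String, Long>>() {
            @Override
            public void onResponse(Map<String, Long> result) {
                System.out.println("Job finished with " + result.size() + " keys");
            }

            @Override
            public void onFailure(Throwable t) {
                t.printStackTrace();
            }
        });
    }
}
```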
Class | Description
---|---
CombinerFactory |
KeyPredicate | This interface is used to pre-evaluate keys before spreading the MapReduce task to the cluster (see the sketch after this table).
KeyValueSource |
Mapper | The Mapper interface is used to build mappers for the Job.
ReducerFactory | A ReducerFactory implementation is used to build Reducer instances per key. An implementation needs to be serializable by Hazelcast since it might be distributed inside the cluster to do parallel calculations for reducing.
TopologyChangedStrategy | This enum class is used to define how a map reduce job behaves if the job owner recognizes a topology change event. When members leave the cluster, processed data chunks that were already sent to the reducers on the leaving node might be lost. Also, on any topology change, there is a redistribution of the member-assigned partitions, which means that a map job might have a problem finishing its currently processed partition. The default behavior is to immediately cancel the running task and throw a TopologyChangedException, but it is possible to submit the same job configuration again if JobTracker.getTrackableJob(String) returns null for the requested job id.
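A minimal KeyPredicate sketch; the prefix filter and class name are illustrative. KeyPredicate extends Serializable, so a field-carrying implementation like this can be distributed:

```java
import com.hazelcast.mapreduce.KeyPredicate;

// Pre-filters keys before the MapReduce task is spread to the cluster,
// so only matching entries are mapped at all.
public class PrefixKeyPredicate implements KeyPredicate<String> {
    private final String prefix;

    public PrefixKeyPredicate(String prefix) {
        this.prefix = prefix;
    }

    @Override
    public boolean evaluate(String key) {
        return key.startsWith(prefix);
    }
}
```

It is applied with `job.keyPredicate(new PrefixKeyPredicate("article-"))` before the mapper is set.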
Class | Description
---|---
CombinerFactory |
JobPartitionState.State | Definition of the processing states (see the monitoring sketch after this table).
KeyPredicate | This interface is used to pre-evaluate keys before spreading the MapReduce task to the cluster.
KeyValueSource |
Mapper | The Mapper interface is used to build mappers for the Job.
ReducerFactory | A ReducerFactory implementation is used to build Reducer instances per key. An implementation needs to be serializable by Hazelcast since it might be distributed inside the cluster to do parallel calculations for reducing.
TopologyChangedStrategy | This enum class is used to define how a map reduce job behaves if the job owner recognizes a topology change event. When members leave the cluster, processed data chunks that were already sent to the reducers on the leaving node might be lost. Also, on any topology change, there is a redistribution of the member-assigned partitions, which means that a map job might have a problem finishing its currently processed partition. The default behavior is to immediately cancel the running task and throw a TopologyChangedException, but it is possible to submit the same job configuration again if JobTracker.getTrackableJob(String) returns null for the requested job id.
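A monitoring sketch that reads the per-partition states of a running job, assuming the Hazelcast 3.x `JobProcessInformation.getPartitionStates()` and `JobPartitionState.getState()` accessors:

```java
import com.hazelcast.mapreduce.JobPartitionState;
import com.hazelcast.mapreduce.JobProcessInformation;
import com.hazelcast.mapreduce.TrackableJob;

public final class PartitionStateSketch {
    // Dumps each partition's owner and processing state. Entries can be null
    // for partitions that have not been assigned to a member yet.
    static void dumpStates(TrackableJob<?> trackable) {
        JobProcessInformation info = trackable.getJobProcessInformation();
        for (JobPartitionState state : info.getPartitionStates()) {
            if (state != null) {
                System.out.println(state.getOwner() + " -> " + state.getState());
            }
        }
        System.out.println("Processed records: " + info.getProcessedRecords());
    }
}
```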
Class | Description
---|---
Collator | This interface can be implemented to define a Collator, which is executed after calculation of the MapReduce algorithm on remote cluster nodes but before returning the final result. A Collator can, for example, be used to sum up a final value.
Combiner |
CombinerFactory |
Context | The Context interface is used for emitting keys and values to the intermediate working space of the MapReduce algorithm.
Job | This interface describes a mapreduce Job that is built by JobTracker.newJob(KeyValueSource). It is used to execute mappings and calculations on the different cluster nodes and to reduce or collate these mapped values into results.
JobCompletableFuture | This is a special version of ICompletableFuture that returns the assigned job id of the submit operation.
JobPartitionState | An implementation of this interface contains current information about the status of a piece of the process while the operation is executing.
JobPartitionState.State | Definition of the processing states.
JobProcessInformation | This interface holds basic information about a running map reduce job, such as the state of the different partitions and the number of currently processed records. The number of processed records is not a real-time value; it is updated on a regular basis (after 1000 processed elements per node).
JobTracker | The JobTracker interface is used to create instances of Job depending on the given data structure / data source.
KeyPredicate | This interface is used to pre-evaluate keys before spreading the MapReduce task to the cluster.
KeyValueSource |
Mapper | The Mapper interface is used to build mappers for the Job.
Reducer | The abstract Reducer class is used to build reducers for the Job. Reducers may be distributed inside the cluster, but there is always only one Reducer per key (a Combiner/Reducer sketch follows this table).
ReducerFactory | A ReducerFactory implementation is used to build Reducer instances per key. An implementation needs to be serializable by Hazelcast since it might be distributed inside the cluster to do parallel calculations for reducing.
TopologyChangedStrategy | This enum class is used to define how a map reduce job behaves if the job owner recognizes a topology change event. When members leave the cluster, processed data chunks that were already sent to the reducers on the leaving node might be lost. Also, on any topology change, there is a redistribution of the member-assigned partitions, which means that a map job might have a problem finishing its currently processed partition. The default behavior is to immediately cancel the running task and throw a TopologyChangedException, but it is possible to submit the same job configuration again if JobTracker.getTrackableJob(String) returns null for the requested job id.
TrackableJob | This interface describes a trackable job.
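A Combiner sketch for the word count job, assuming the Hazelcast 3.x Combiner contract (`combine` per value, `finalizeChunk` when a chunk is emitted, `reset` between chunks); the class names are illustrative:

```java
import com.hazelcast.mapreduce.Combiner;
import com.hazelcast.mapreduce.CombinerFactory;

// Pre-sums counts per chunk on the mapping member so fewer values are
// sent over the wire to the reducers.
public class WordCountCombinerFactory implements CombinerFactory<String, Long, Long> {
    @Override
    public Combiner<Long, Long> newCombiner(String key) {
        return new Combiner<Long, Long>() {
            private long sum;

            @Override
            public void combine(Long value) {
                sum += value;   // pre-aggregate inside the current chunk
            }

            @Override
            public Long finalizeChunk() {
                return sum;     // emitted to the reducers as one value per chunk
            }

            @Override
            public void reset() {
                sum = 0;        // called between chunks so the combiner can be reused
            }
        };
    }
}
```

It is hooked in between mapper and reducer: `job.mapper(new TokenMapper()).combiner(new WordCountCombinerFactory()).reducer(new WordCountReducerFactory()).submit()`.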