Uses of Package
com.hazelcast.mapreduce

Packages that use com.hazelcast.mapreduce
com.hazelcast.client.impl Contains most of the client-side HazelcastInstance implementation functionality. 
com.hazelcast.client.proxy This package contains client-side proxy implementations of the different Hazelcast data structures and operation types. 
com.hazelcast.config Provides classes for configuring HazelcastInstance. 
com.hazelcast.core Provides core API interfaces/classes. 
com.hazelcast.instance This package contains Hazelcast Instance functionality.
 
com.hazelcast.jca This package contains JCA functionality. 
com.hazelcast.map.impl.proxy Contains map proxy implementation and support classes. 
com.hazelcast.mapreduce This package contains the MapReduce API definition for Hazelcast.
All map reduce operations run in a distributed manner inside the active Hazelcast cluster. 
com.hazelcast.mapreduce.aggregation This package contains the aggregation API and the convenience helper classes to retrieve predefined aggregation implementations. 
com.hazelcast.mapreduce.aggregation.impl This package contains a set of predefined aggregation implementations. 
com.hazelcast.mapreduce.impl This package contains the default implementation for the map reduce framework internals. 
com.hazelcast.mapreduce.impl.client This package contains request and response classes for communication between cluster members and Hazelcast native clients. 
com.hazelcast.mapreduce.impl.operation This package contains all remote operations that are needed to control work on supervising or worker nodes. 
com.hazelcast.mapreduce.impl.task This package contains the base implementation for a standard map reduce job. 
com.hazelcast.multimap.impl Contains classes for Hazelcast MultiMap module. 
 

Classes in com.hazelcast.mapreduce used by com.hazelcast.client.impl
JobTracker
           The JobTracker interface is used to create instances of Jobs depending on the given data structure / data source.
 

Classes in com.hazelcast.mapreduce used by com.hazelcast.client.proxy
Job
           This interface describes a mapreduce Job that is built by JobTracker.newJob(KeyValueSource).
It is used to execute mappings and calculations on the different cluster nodes and reduce or collate these mapped values to results.
JobTracker
           The JobTracker interface is used to create instances of Jobs depending on the given data structure / data source.
KeyValueSource
          The abstract KeyValueSource class is used to implement custom data sources for mapreduce algorithms.
Default shipped implementations contain KeyValueSources for Hazelcast data structures such as IMap and MultiMap.
TrackableJob
          This interface describes a trackable job.
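
The following is a minimal sketch of how these proxy-backed interfaces are typically wired together from a native client. It assumes the Hazelcast 3.x client API; the map name "articles" is purely illustrative.

import com.hazelcast.client.HazelcastClient;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import com.hazelcast.mapreduce.Job;
import com.hazelcast.mapreduce.JobTracker;
import com.hazelcast.mapreduce.KeyValueSource;

public class ClientJobSetup {
    public static void main(String[] args) {
        // Connect as a native client; the classes in this proxy package back
        // the JobTracker and Job instances returned below.
        HazelcastInstance client = HazelcastClient.newHazelcastClient();

        IMap<String, String> articles = client.getMap("articles"); // illustrative name
        KeyValueSource<String, String> source = KeyValueSource.fromMap(articles);

        JobTracker tracker = client.getJobTracker("default");
        Job<String, String> job = tracker.newJob(source);

        // Mapper/combiner/reducer wiring and submission are shown in the
        // com.hazelcast.mapreduce section further down this page; a submitted
        // job can later be re-obtained via tracker.getTrackableJob(jobId).

        client.shutdown();
    }
}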
 

Classes in com.hazelcast.mapreduce used by com.hazelcast.config
TopologyChangedStrategy
          This enum class is used to define how a map reduce job behaves if the job owner recognizes a topology changed event.
When members leave the cluster, processed data chunks that were already sent to the reducers on the leaving node might be lost.
In addition, any topology change triggers a redistribution of the member-assigned partitions, which means that a map job might not be able to finish its currently processed partition.
The default behavior is to immediately cancel the running task and throw a TopologyChangedException, but the same job configuration can be submitted again if JobTracker.getTrackableJob(String) returns null for the requested job id.
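
A minimal configuration sketch, assuming a JobTrackerConfig in com.hazelcast.config with a setTopologyChangedStrategy property as described above; the tracker name "default" is illustrative.

import com.hazelcast.config.Config;
import com.hazelcast.config.JobTrackerConfig;
import com.hazelcast.mapreduce.TopologyChangedStrategy;

public class TopologyStrategyConfig {
    public static Config buildConfig() {
        // Configure the JobTracker named "default" to cancel a running map
        // reduce job when the cluster topology changes. CANCEL_RUNNING_OPERATION
        // is the documented default; it is set explicitly here for illustration.
        JobTrackerConfig trackerConfig = new JobTrackerConfig();
        trackerConfig.setName("default");
        trackerConfig.setTopologyChangedStrategy(
                TopologyChangedStrategy.CANCEL_RUNNING_OPERATION);

        Config config = new Config();
        config.addJobTrackerConfig(trackerConfig);
        return config;
    }
}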
 

Classes in com.hazelcast.mapreduce used by com.hazelcast.core
JobTracker
           The JobTracker interface is used to create instances of Jobs depending on the given data structure / data source.
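
A minimal retrieval sketch; HazelcastInstance (defined in this package) exposes the named JobTracker. The tracker name "default" is illustrative.

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.mapreduce.JobTracker;

public class JobTrackerLookup {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        // JobTracker instances are named and configured per name, similar
        // to other Hazelcast distributed objects.
        JobTracker tracker = hz.getJobTracker("default");
        System.out.println("Tracker name: " + tracker.getName());
        hz.shutdown();
    }
}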
 

Classes in com.hazelcast.mapreduce used by com.hazelcast.instance
JobTracker
           The JobTracker interface is used to create instances of Jobs depending on the given data structure / data source.
 

Classes in com.hazelcast.mapreduce used by com.hazelcast.jca
JobTracker
           The JobTracker interface is used to create instances of Jobs depending on the given data structure / data source.
 

Classes in com.hazelcast.mapreduce used by com.hazelcast.map.impl.proxy
JobTracker
           The JobTracker interface is used to create instances of Jobs depending on the given data structure / data source.
 

Classes in com.hazelcast.mapreduce used by com.hazelcast.mapreduce
Collator
          This interface can be implemented to define a Collator which is executed after calculation of the MapReduce algorithm on remote cluster nodes but before returning the final result.
A Collator can, for example, be used to sum up a final value.
Combiner
           The abstract Combiner class is used to build combiners for the Job.
These Combiners are distributed inside the cluster and run alongside the Mapper implementations on the same node.
Combiners are called in a thread-safe way, so internal locking is not required. A complete word-count sketch appears at the end of this section.
CombinerFactory
           A CombinerFactory implementation is used to build Combiner instances per key.
An implementation needs to be serializable by Hazelcast since it is distributed together with the Mapper implementation to run alongside it.
Context
          The Context interface is used for emitting keys and values to the intermediate working space of the MapReduce algorithm.
Job
           This interface describes a mapreduce Job that is built by JobTracker.newJob(KeyValueSource).
It is used to execute mappings and calculations on the different cluster nodes and reduce or collate these mapped values to results.
JobCompletableFuture
          This is a special version of ICompletableFuture that also returns the assigned job id of the submit operation.
JobPartitionState
          An implementation of this interface holds current information about the processing state of a partition while the operation is executing.
JobPartitionState.State
          Definition of the processing states.
JobProcessInformation
          This interface holds basic information about a running map reduce job like state of the different partitions and the number of currently processed records.
The number of processed records is not a real-time value but is updated on a regular basis (after every 1000 processed elements per node).
JobTracker
           The JobTracker interface is used to create instances of Jobs depending on the given data structure / data source.
KeyPredicate
          This interface is used to pre-evaluate keys before spreading the MapReduce task to the cluster.
KeyValueSource
          The abstract KeyValueSource class is used to implement custom data sources for mapreduce algorithms.
Default shipped implementations contain KeyValueSources for Hazelcast data structures such as IMap and MultiMap.
LifecycleMapper
          The LifecycleMapper interface is a more sophisticated version of Mapper, normally used for more complex algorithms that need initialization and finalization.
Mapper
           The interface Mapper is used to build mappers for the Job.
MappingJob
           This interface describes a mapping mapreduce Job.
For further information, see Job.
Reducer
           The abstract Reducer class is used to build reducers for the Job.
Reducers may be distributed inside the cluster, but there is always only one Reducer per key.
ReducerFactory
          A ReducerFactory implementation is used to build Reducer instances per key.
An implementation needs to be serializable by Hazelcast since it might be distributed inside the cluster to perform parallel calculations of the reducing step.
ReducingJob
           This interface describes a reducing mapreduce Job.
For further information, see Job.
ReducingSubmittableJob
           This interface describes a submittable mapreduce Job.
For further information, see Job.
TopologyChangedStrategy
          This enum class is used to define how a map reduce job behaves if the job owner recognizes a topology changed event.
When members leave the cluster, processed data chunks that were already sent to the reducers on the leaving node might be lost.
In addition, any topology change triggers a redistribution of the member-assigned partitions, which means that a map job might not be able to finish its currently processed partition.
The default behavior is to immediately cancel the running task and throw a TopologyChangedException, but the same job configuration can be submitted again if JobTracker.getTrackableJob(String) returns null for the requested job id.
TrackableJob
          This interface describes a trackable job.
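
Taken together, these interfaces form the classic word-count pipeline. The following is a minimal sketch, assuming the Hazelcast 3.x signatures for Combiner (combine/finalizeChunk/reset) and Reducer (reduce/finalizeReduce); the map name "documents" and all class names are illustrative.

import java.util.Map;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import com.hazelcast.mapreduce.*;

public class WordCount {

    // Mapper: emits (word, 1) for every token of a document.
    public static class TokenizerMapper implements Mapper<String, String, String, Long> {
        @Override
        public void map(String key, String document, Context<String, Long> context) {
            for (String token : document.toLowerCase().split("\\W+")) {
                if (!token.isEmpty()) {
                    context.emit(token, 1L);
                }
            }
        }
    }

    // Combiner: pre-sums the counts per key on the mapping node so that
    // smaller chunks are sent to the reducers. reset() matters because the
    // same Combiner instance is reused for successive chunks.
    public static class WordCountCombinerFactory implements CombinerFactory<String, Long, Long> {
        @Override
        public Combiner<Long, Long> newCombiner(String key) {
            return new Combiner<Long, Long>() {
                private long sum;

                @Override
                public void combine(Long value) { sum += value; }

                @Override
                public Long finalizeChunk() { return sum; }

                @Override
                public void reset() { sum = 0; }
            };
        }
    }

    // Reducer: exactly one instance per key sums the pre-combined chunks.
    // volatile gives visibility in case reduce() and finalizeReduce() are
    // invoked from different threads.
    public static class WordCountReducerFactory implements ReducerFactory<String, Long, Long> {
        @Override
        public Reducer<Long, Long> newReducer(String key) {
            return new Reducer<Long, Long>() {
                private volatile long sum;

                @Override
                public void reduce(Long value) { sum += value; }

                @Override
                public Long finalizeReduce() { return sum; }
            };
        }
    }

    public static void main(String[] args) throws Exception {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<String, String> documents = hz.getMap("documents");

        JobTracker tracker = hz.getJobTracker("default");
        Job<String, String> job = tracker.newJob(KeyValueSource.fromMap(documents));

        // The Collator runs on the job owner and folds the per-word counts
        // into one overall total before the future completes.
        JobCompletableFuture<Long> future = job
                .mapper(new TokenizerMapper())
                .combiner(new WordCountCombinerFactory())
                .reducer(new WordCountReducerFactory())
                .submit(new Collator<Map.Entry<String, Long>, Long>() {
                    @Override
                    public Long collate(Iterable<Map.Entry<String, Long>> values) {
                        long total = 0;
                        for (Map.Entry<String, Long> entry : values) {
                            total += entry.getValue();
                        }
                        return total;
                    }
                });

        System.out.println("Job id: " + future.getJobId());
        System.out.println("Total words: " + future.get());
        hz.shutdown();
    }
}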
 

Classes in com.hazelcast.mapreduce used by com.hazelcast.mapreduce.aggregation
Collator
          This interface can be implemented to define a Collator which is executed after calculation of the MapReduce algorithm on remote cluster nodes but before returning the final result.
A Collator can, for example, be used to sum up a final value.
CombinerFactory
           A CombinerFactory implementation is used to build Combiner instances per key.
An implementation needs to be serializable by Hazelcast since it is distributed together with the Mapper implementation to run alongside it.
KeyPredicate
          This interface is used to pre-evaluate keys before spreading the MapReduce task to the cluster.
Mapper
           The interface Mapper is used to build mappers for the Job.
ReducerFactory
          A ReducerFactory implementation is used to build Reducer instances per key.
An implementation needs to be serializable by Hazelcast since it might be distributed inside the cluster to perform parallel calculations of the reducing step.
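
These interfaces are usually not implemented directly; the aggregation API wraps them behind predefined aggregations. A minimal usage sketch, assuming the Hazelcast 3.x IMap.aggregate(Supplier, Aggregation) entry point and the predefined count() and integerAvg() aggregations; the Employee class and map name are illustrative.

import java.io.Serializable;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import com.hazelcast.mapreduce.aggregation.Aggregations;
import com.hazelcast.mapreduce.aggregation.PropertyExtractor;
import com.hazelcast.mapreduce.aggregation.Supplier;

public class AggregationExample {

    public static class Employee implements Serializable {
        private final int age;
        public Employee(int age) { this.age = age; }
        public int getAge() { return age; }
    }

    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<String, Employee> employees = hz.getMap("employees");

        employees.put("alice", new Employee(34));
        employees.put("bob", new Employee(28));

        // Count all entries; Supplier.all() passes every value through unchanged.
        long count = employees.aggregate(Supplier.all(), Aggregations.count());

        // Average age; the PropertyExtractor pulls the int property out of
        // each value before the predefined integerAvg aggregation runs.
        int avgAge = employees.aggregate(
                Supplier.all(new PropertyExtractor<Employee, Integer>() {
                    @Override
                    public Integer extract(Employee employee) {
                        return employee.getAge();
                    }
                }),
                Aggregations.integerAvg());

        System.out.println(count + " employees, average age " + avgAge);
        hz.shutdown();
    }
}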
 

Classes in com.hazelcast.mapreduce used by com.hazelcast.mapreduce.aggregation.impl
Collator
          This interface can be implemented to define a Collator which is executed after calculation of the MapReduce algorithm on remote cluster nodes but before returning the final result.
A Collator can, for example, be used to sum up a final value.
CombinerFactory
           A CombinerFactory implementation is used to build Combiner instances per key.
An implementation needs to be serializable by Hazelcast since it is distributed together with the Mapper implementation to run alongside it.
KeyPredicate
          This interface is used to pre-evaluate keys before spreading the MapReduce task to the cluster.
Mapper
           The interface Mapper is used to build mappers for the Job.
ReducerFactory
          A ReducerFactory implementation is used to build Reducer instances per key.
An implementation needs to be serializable by Hazelcast since it might be distributed inside the cluster to perform parallel calculations of the reducing step.
 

Classes in com.hazelcast.mapreduce used by com.hazelcast.mapreduce.impl
Collator
          This interface can be implemented to define a Collator which is executed after calculation of the MapReduce algorithm on remote cluster nodes but before returning the final result.
A Collator can, for example, be used to sum up a final value.
CombinerFactory
           A CombinerFactory implementation is used to build Combiner instances per key.
An implementation needs to be serializable by Hazelcast since it is distributed together with the Mapper implementation to run alongside it.
Job
           This interface describes a mapreduce Job that is built by JobTracker.newJob(KeyValueSource).
It is used to execute mappings and calculations on the different cluster nodes and reduce or collate these mapped values to results.
JobCompletableFuture
          This is a special version of ICompletableFuture that also returns the assigned job id of the submit operation.
JobPartitionState
          An implementation of this interface holds current information about the processing state of a partition while the operation is executing.
JobPartitionState.State
          Definition of the processing states.
JobTracker
           The JobTracker interface is used to create instances of Jobs depending on the given data structure / data source.
KeyPredicate
          This interface is used to pre-evaluate keys before spreading the MapReduce task to the cluster.
KeyValueSource
          The abstract KeyValueSource class is used to implement custom data sources for mapreduce algorithms.
Default shipped implementations contain KeyValueSources for Hazelcast data structures such as IMap and MultiMap.
Mapper
           The interface Mapper is used to build mappers for the Job.
MappingJob
           This interface describes a mapping mapreduce Job.
For further information, see Job.
PartitionIdAware
          This interface can be used to mark an implementation as being aware of the data partition it is currently working on. A short sketch follows at the end of this section.
ReducerFactory
          A ReducerFactory implementation is used to build Reducer instances per key.
An implementation needs to be serializable by Hazelcast since it might be distributed inside the cluster to perform parallel calculations of the reducing step.
ReducingJob
           This interface describes a reducing mapreduce Job.
For further information, see Job.
ReducingSubmittableJob
           This interface describes a submittable mapreduce Job.
For further information, see Job.
TopologyChangedStrategy
          This enum class is used to define how a map reduce job behaves if the job owner recognizes a topology changed event.
When members leave the cluster, processed data chunks that were already sent to the reducers on the leaving node might be lost.
In addition, any topology change triggers a redistribution of the member-assigned partitions, which means that a map job might not be able to finish its currently processed partition.
The default behavior is to immediately cancel the running task and throw a TopologyChangedException, but the same job configuration can be submitted again if JobTracker.getTrackableJob(String) returns null for the requested job id.
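
Most of these classes are exercised by the word-count sketch in the com.hazelcast.mapreduce section above; PartitionIdAware is the exception. A minimal sketch of a partition-aware Mapper, assuming the framework injects the partition id via setPartitionId(int) before each partition is processed; the class is illustrative.

import com.hazelcast.mapreduce.Context;
import com.hazelcast.mapreduce.Mapper;
import com.hazelcast.mapreduce.PartitionIdAware;

// A Mapper that is informed of the partition it currently processes, which
// can be used, for example, for partition-scoped diagnostics.
public class PartitionTaggingMapper
        implements Mapper<String, String, String, String>, PartitionIdAware {

    // Injected at runtime on the processing node; not part of the
    // serialized state of the mapper.
    private transient int partitionId;

    @Override
    public void setPartitionId(int partitionId) {
        this.partitionId = partitionId;
    }

    @Override
    public void map(String key, String value, Context<String, String> context) {
        context.emit(key, "partition-" + partitionId + ":" + value);
    }
}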
 

Classes in com.hazelcast.mapreduce used by com.hazelcast.mapreduce.impl.client
CombinerFactory
           A CombinerFactory implementation is used to build Combiner instances per key.
An implementation needs to be serializable by Hazelcast since it is distributed together with the Mapper implementation to run alongside it.
KeyPredicate
          This interface is used to pre-evaluate keys before spreading the MapReduce task to the cluster.
KeyValueSource
          The abstract KeyValueSource class is used to implement custom data sources for mapreduce algorithms.
Default shipped implementations contain KeyValueSources for Hazelcast data structures such as IMap and MultiMap.
Mapper
           The interface Mapper is used to build mappers for the Job.
ReducerFactory
          A ReducerFactory implementation is used to build Reducer instances per key.
An implementation needs to be serializable by Hazelcast since it might be distributed inside the cluster to perform parallel calculations of the reducing step.
TopologyChangedStrategy
          This enum class is used to define how a map reduce job behaves if the job owner recognizes a topology changed event.
When members leave the cluster, processed data chunks that were already sent to the reducers on the leaving node might be lost.
In addition, any topology change triggers a redistribution of the member-assigned partitions, which means that a map job might not be able to finish its currently processed partition.
The default behavior is to immediately cancel the running task and throw a TopologyChangedException, but the same job configuration can be submitted again if JobTracker.getTrackableJob(String) returns null for the requested job id.
 

Classes in com.hazelcast.mapreduce used by com.hazelcast.mapreduce.impl.operation
CombinerFactory
           A CombinerFactory implementation is used to build Combiner instances per key.
An implementation needs to be serializable by Hazelcast since it is distributed together with the Mapper implementation to run alongside it.
JobPartitionState.State
          Definition of the processing states.
KeyPredicate
          This interface is used to pre-evaluate keys before spreading the MapReduce task to the cluster.
KeyValueSource
          The abstract KeyValueSource class is used to implement custom data sources for mapreduce algorithms.
Default shipped implementations contain KeyValueSources for Hazelcast data structures such as IMap and MultiMap.
Mapper
           The interface Mapper is used to build mappers for the Job.
ReducerFactory
          A ReducerFactory implementation is used to build Reducer instances per key.
An implementation needs to be serializable by Hazelcast since it might be distributed inside the cluster to perform parallel calculations of the reducing step.
TopologyChangedStrategy
          This enum class is used to define how a map reduce job behaves if the job owner recognizes a topology changed event.
When members leave the cluster, processed data chunks that were already sent to the reducers on the leaving node might be lost.
In addition, any topology change triggers a redistribution of the member-assigned partitions, which means that a map job might not be able to finish its currently processed partition.
The default behavior is to immediately cancel the running task and throw a TopologyChangedException, but the same job configuration can be submitted again if JobTracker.getTrackableJob(String) returns null for the requested job id.
 

Classes in com.hazelcast.mapreduce used by com.hazelcast.mapreduce.impl.task
Collator
          This interface can be implemented to define a Collator which is executed after calculation of the MapReduce algorithm on remote cluster nodes but before returning the final result.
A Collator can, for example, be used to sum up a final value.
Combiner
           The abstract Combiner class is used to build combiners for the Job.
These Combiners are distributed inside the cluster and run alongside the Mapper implementations on the same node.
Combiners are called in a thread-safe way, so internal locking is not required.
CombinerFactory
           A CombinerFactory implementation is used to build Combiner instances per key.
An implementation needs to be serializable by Hazelcast since it is distributed together with the Mapper implementation to run alongside it.
Context
          The Context interface is used for emitting keys and values to the intermediate working space of the MapReduce algorithm.
Job
           This interface describes a mapreduce Job that is built by JobTracker.newJob(KeyValueSource).
It is used to execute mappings and calculations on the different cluster nodes and reduce or collate these mapped values to results.
JobCompletableFuture
          This is a special version of ICompletableFuture that also returns the assigned job id of the submit operation.
JobPartitionState
          An implementation of this interface holds current information about the processing state of a partition while the operation is executing.
JobPartitionState.State
          Definition of the processing states.
JobProcessInformation
          This interface holds basic information about a running map reduce job like state of the different partitions and the number of currently processed records.
The number of processed records is not a real-time value but is updated on a regular basis (after every 1000 processed elements per node).
JobTracker
           The JobTracker interface is used to create instances of Jobs depending on the given data structure / data source.
KeyPredicate
          This interface is used to pre-evaluate keys before spreading the MapReduce task to the cluster.
KeyValueSource
          The abstract KeyValueSource class is used to implement custom data sources for mapreduce algorithms.
Default shipped implementations contain KeyValueSources for Hazelcast data structures such as IMap and MultiMap.
Mapper
           The interface Mapper is used to build mappers for the Job.
Reducer
           The abstract Reducer class is used to build reducers for the Job.
Reducers may be distributed inside the cluster, but there is always only one Reducer per key.
ReducerFactory
          A ReducerFactory implementation is used to build Reducer instances per key.
An implementation needs to be serializable by Hazelcast since it might be distributed inside the cluster to perform parallel calculations of the reducing step.
TopologyChangedStrategy
          This enum class is used to define how a map reduce job behaves if the job owner recognizes a topology changed event.
When members leave the cluster, processed data chunks that were already sent to the reducers on the leaving node might be lost.
In addition, any topology change triggers a redistribution of the member-assigned partitions, which means that a map job might not be able to finish its currently processed partition.
The default behavior is to immediately cancel the running task and throw a TopologyChangedException, but the same job configuration can be submitted again if JobTracker.getTrackableJob(String) returns null for the requested job id.
TrackableJob
          This interface describes a trackable job.
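
A minimal monitoring sketch built on TrackableJob and JobProcessInformation, assuming a getJobProcessInformation() accessor on TrackableJob as in the Hazelcast 3.x API; the tracker name is illustrative, and jobId is the value returned by JobCompletableFuture.getJobId() at submission.

import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.mapreduce.JobPartitionState;
import com.hazelcast.mapreduce.JobProcessInformation;
import com.hazelcast.mapreduce.JobTracker;
import com.hazelcast.mapreduce.TrackableJob;

public class JobMonitoring {

    public static void printProgress(HazelcastInstance hz, String jobId) {
        JobTracker tracker = hz.getJobTracker("default");
        TrackableJob<?> job = tracker.getTrackableJob(jobId);
        if (job == null) {
            // Per the TopologyChangedStrategy notes above, a null result means
            // the job is unknown (finished, cancelled, or never submitted).
            System.out.println("No trackable job for id " + jobId);
            return;
        }

        JobProcessInformation info = job.getJobProcessInformation();
        // Processed record counts are updated in batches (about every 1000
        // elements per node), so this is an approximation.
        System.out.println("Processed records: " + info.getProcessedRecords());
        for (JobPartitionState partitionState : info.getPartitionStates()) {
            if (partitionState != null) {
                System.out.println(partitionState.getOwner()
                        + " -> " + partitionState.getState());
            }
        }
    }
}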
 

Classes in com.hazelcast.mapreduce used by com.hazelcast.multimap.impl
JobTracker
           The JobTracker interface is used to create instances of Jobs depending on the given data structure / data source.
 



Copyright © 2015 Hazelcast, Inc. All Rights Reserved.