com.hazelcast.mapreduce
Interface MappingJob<EntryKey,KeyIn,ValueIn>

Type Parameters:
EntryKey - type of the original input key
KeyIn - type of key used as input key type
ValueIn - type of value used as input value type

@Beta
public interface MappingJob<EntryKey,KeyIn,ValueIn>

This interface describes a mapping mapreduce Job.
For further information see Job.

Since:
3.2
See Also:
Job
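
The following is a minimal word-count sketch showing how a MappingJob is obtained from Job.mapper(...) and submitted. The map name "articles", the TokenizerMapper, and all concrete types are illustrative assumptions, not part of this interface; only mapper() and submit() are used here, the remaining MappingJob methods are shown in the method details below.

  import com.hazelcast.core.Hazelcast;
  import com.hazelcast.core.HazelcastInstance;
  import com.hazelcast.core.IMap;
  import com.hazelcast.mapreduce.Context;
  import com.hazelcast.mapreduce.Job;
  import com.hazelcast.mapreduce.JobCompletableFuture;
  import com.hazelcast.mapreduce.JobTracker;
  import com.hazelcast.mapreduce.KeyValueSource;
  import com.hazelcast.mapreduce.Mapper;
  import com.hazelcast.mapreduce.MappingJob;

  import java.util.List;
  import java.util.Map;

  public class WordCountExample {

      // hypothetical mapper: emits (token, 1L) for every whitespace-separated token
      public static class TokenizerMapper implements Mapper<String, String, String, Long> {
          @Override
          public void map(String key, String document, Context<String, Long> context) {
              for (String token : document.toLowerCase().split("\\s+")) {
                  context.emit(token, 1L);
              }
          }
      }

      public static void main(String[] args) throws Exception {
          HazelcastInstance hz = Hazelcast.newHazelcastInstance();
          IMap<String, String> articles = hz.getMap("articles");
          articles.put("article-1", "saturn is a planet and saturn has rings");

          JobTracker tracker = hz.getJobTracker("default");
          KeyValueSource<String, String> source = KeyValueSource.fromMap(articles);
          Job<String, String> job = tracker.newJob(source);

          // mapper(...) turns the Job into the MappingJob described on this page
          MappingJob<String, String, Long> mappingJob = job.mapper(new TokenizerMapper());

          // without a reducer the result is every emitted value grouped by key
          JobCompletableFuture<Map<String, List<Long>>> future = mappingJob.submit();
          Map<String, List<Long>> groupedByToken = future.get();
          System.out.println("distinct tokens: " + groupedByToken.size());

          hz.shutdown();
      }
  }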

Method Summary
 MappingJob<EntryKey,KeyIn,ValueIn> chunkSize(int chunkSize)
          Defines the number of elements per chunk.
 <ValueOut> ReducingJob<EntryKey,KeyIn,ValueOut> combiner(CombinerFactory<KeyIn,ValueIn,ValueOut> combinerFactory)
          Defines the CombinerFactory for this task.
 MappingJob<EntryKey,KeyIn,ValueIn> keyPredicate(KeyPredicate<EntryKey> predicate)
          Defines the KeyPredicate implementation to preselect keys the MapReduce task will be executed on.
 MappingJob<EntryKey,KeyIn,ValueIn> onKeys(EntryKey... keys)
          Defines keys to execute the mapper and a possibly defined reducer against.
 MappingJob<EntryKey,KeyIn,ValueIn> onKeys(Iterable<EntryKey> keys)
          Defines keys to execute the mapper and a possibly defined reducer against.
 <ValueOut> ReducingSubmittableJob<EntryKey,KeyIn,ValueOut> reducer(ReducerFactory<KeyIn,ValueIn,ValueOut> reducerFactory)
          Defines the ReducerFactory for this task.
 JobCompletableFuture<Map<KeyIn,List<ValueIn>>> submit()
          Submits the task to Hazelcast and executes the defined mapper and reducer on all cluster nodes.
 <ValueOut> JobCompletableFuture<ValueOut> submit(Collator<Map.Entry<KeyIn,List<ValueIn>>,ValueOut> collator)
          Submits the task to Hazelcast, executes the defined mapper and reducer on all cluster nodes, and executes the collator before returning the final result.
 MappingJob<EntryKey,KeyIn,ValueIn> topologyChangedStrategy(TopologyChangedStrategy topologyChangedStrategy)
          Defines the strategy to handle topology changes while executing the map reduce job.
 

Method Detail

onKeys

MappingJob<EntryKey,KeyIn,ValueIn> onKeys(Iterable<EntryKey> keys)
Defines keys to execute the mapper and a possibly defined reducer against. If keys are known before submitting the task, setting them can improve execution speed.

Parameters:
keys - keys to be executed against
Returns:
instance of this Job with generics changed on usage
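
For illustration, a short sketch, assuming a MappingJob<String, String, Long> named mappingJob obtained via Job.mapper(...) as in the class-level example above (the key names are hypothetical):

  // restrict the job to a collection of known entry keys
  // (java.util.Set, HashSet, Arrays, Map and List assumed to be imported)
  Set<String> keys = new HashSet<String>(Arrays.asList("article-1", "article-2"));
  JobCompletableFuture<Map<String, List<Long>>> future = mappingJob.onKeys(keys).submit();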

onKeys

MappingJob<EntryKey,KeyIn,ValueIn> onKeys(EntryKey... keys)
Defines keys to execute the mapper and a possibly defined reducer against. If keys are known before submitting the task, setting them can improve execution speed.

Parameters:
keys - keys to be executed against
Returns:
instance of this Job with generics changed on usage
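
The same restriction expressed with the varargs overload (same hypothetical mappingJob as above):

  // restrict the job to two known entry keys, passed directly
  JobCompletableFuture<Map<String, List<Long>>> future =
          mappingJob.onKeys("article-1", "article-2").submit();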

keyPredicate

MappingJob<EntryKey,KeyIn,ValueIn> keyPredicate(KeyPredicate<EntryKey> predicate)
Defines the KeyPredicate implementation used to preselect the keys the MapReduce task will be executed on. Preselecting keys can speed up the job significantly.
This method can be used in conjunction with onKeys(Iterable) or onKeys(Object...) to define a range of known and evaluated keys.

Parameters:
predicate - predicate implementation to be used to evaluate keys
Returns:
instance of this Job with generics changed on usage
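
A minimal sketch, assuming the same hypothetical mappingJob as above and that KeyPredicate declares a single evaluate(Key) method returning boolean; the key naming convention is an illustrative assumption:

  // hypothetical predicate: only map entries whose key starts with "article-"
  public class ArticleKeyPredicate implements KeyPredicate<String> {
      @Override
      public boolean evaluate(String key) {
          return key.startsWith("article-");
      }
  }

  mappingJob.keyPredicate(new ArticleKeyPredicate());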

chunkSize

MappingJob<EntryKey,KeyIn,ValueIn> chunkSize(int chunkSize)
Defines the number of elements per chunk. Whenever the chunk size is reached and a ReducerFactory is defined, the chunk will be sent to the nodes responsible for the emitted keys.
Please note that chunking is deactivated when no ReducerFactory is defined.

Parameters:
chunkSize - the number of elements per chunk
Returns:
instance of this Job with generics changed on usage
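
For illustration (same hypothetical mappingJob as above; 1000 is an arbitrary example value and only takes effect once a ReducerFactory is defined):

  // ship intermediate results to the reducer nodes in chunks of 1000 emitted entries
  mappingJob.chunkSize(1000);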

topologyChangedStrategy

MappingJob<EntryKey,KeyIn,ValueIn> topologyChangedStrategy(TopologyChangedStrategy topologyChangedStrategy)
Defines the strategy to handle topology changes while executing the map reduce job. For further information see TopologyChangedStrategy.

Parameters:
topologyChangedStrategy - strategy to use
Returns:
instance of this Job with generics changed on usage
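
For illustration (same hypothetical mappingJob as above; CANCEL_RUNNING_OPERATION is one of the TopologyChangedStrategy constants):

  // cancel the running job if a member joins or leaves during execution
  mappingJob.topologyChangedStrategy(TopologyChangedStrategy.CANCEL_RUNNING_OPERATION);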

combiner

<ValueOut> ReducingJob<EntryKey,KeyIn,ValueOut> combiner(CombinerFactory<KeyIn,ValueIn,ValueOut> combinerFactory)
Defines the CombinerFactory for this task. This method is not idempotent and can be called only once. Further calls result in an IllegalStateException being thrown, telling you not to change the internal state.

Type Parameters:
ValueOut - type of the combined value
Parameters:
combinerFactory - CombinerFactory to build Combiner
Returns:
instance of this Job with generics changed on usage
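
A sketch of the resulting type transition (same hypothetical mappingJob as above; WordCountCombinerFactory stands for a hypothetical user-supplied CombinerFactory<String, Long, Long> whose implementation is not shown here):

  // attaching a combiner turns the MappingJob into a ReducingJob
  ReducingJob<String, String, Long> reducingJob =
          mappingJob.combiner(new WordCountCombinerFactory());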

reducer

<ValueOut> ReducingSubmittableJob<EntryKey,KeyIn,ValueOut> reducer(ReducerFactory<KeyIn,ValueIn,ValueOut> reducerFactory)
Defines the ReducerFactory for this task. This method is not idempotent and can be called only once. Further calls result in an IllegalStateException being thrown, telling you not to change the internal state.

Type Parameters:
ValueOut - type of the reduced value
Parameters:
reducerFactory - ReducerFactory to build Reducers
Returns:
instance of this Job with generics changed on usage
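
A sketch of the resulting type transition (same hypothetical mappingJob as above; WordCountReducerFactory stands for a hypothetical user-supplied ReducerFactory<String, Long, Long> whose implementation is not shown here):

  // attaching a reducer turns the MappingJob into a ReducingSubmittableJob
  ReducingSubmittableJob<String, String, Long> reducingJob =
          mappingJob.reducer(new WordCountReducerFactory());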

submit

JobCompletableFuture<Map<KeyIn,List<ValueIn>>> submit()
Submits the task to Hazelcast and executes the defined mapper and reducer on all cluster nodes.

Returns:
JobCompletableFuture to wait for mapped and possibly reduced result
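
For illustration (same hypothetical mappingJob as above): without a reducer the future yields every emitted value grouped by its key:

  JobCompletableFuture<Map<String, List<Long>>> future = mappingJob.submit();
  Map<String, List<Long>> grouped = future.get(); // blocks until the job finishes
  for (Map.Entry<String, List<Long>> entry : grouped.entrySet()) {
      System.out.println(entry.getKey() + " -> " + entry.getValue().size() + " emitted values");
  }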

submit

<ValueOut> JobCompletableFuture<ValueOut> submit(Collator<Map.Entry<KeyIn,List<ValueIn>>,ValueOut> collator)
Submits the task to Hazelcast, executes the defined mapper and reducer on all cluster nodes, and executes the collator before returning the final result.

Type Parameters:
ValueOut - type of the collated value
Parameters:
collator - collator to use after map and reduce
Returns:
JobCompletableFuture to wait for mapped and possibly reduced result
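
A minimal sketch (same hypothetical mappingJob as above), assuming Collator declares a single collate(Iterable) method; the collator below counts all emitted values into a single total:

  JobCompletableFuture<Long> future = mappingJob.submit(
          new Collator<Map.Entry<String, List<Long>>, Long>() {
              @Override
              public Long collate(Iterable<Map.Entry<String, List<Long>>> values) {
                  long total = 0;
                  for (Map.Entry<String, List<Long>> entry : values) {
                      total += entry.getValue().size();
                  }
                  return total;
              }
          });
  Long totalEmittedValues = future.get();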


Copyright © 2014 Hazelcast, Inc. All Rights Reserved.