
Institute of Formal and Applied Linguistics Wiki


courses:mapreduce-tutorial:hadoop-job-overview -- revision 2012/02/06 06:11 by straka (previous revision: 2012/02/05 19:36 by straka)
  * [optional] //a reducer// -- in ascending order of keys, processes each key together with all its associated values. Produces (key, value) pairs. The user can specify the number of reducers: 0, 1, or more; the default is 1.
  * [optional] //a combiner// -- a reducer that is executed locally on the output of a mapper.
  * [optional] //a partitioner// -- executed on every (key, value) pair produced by a mapper; outputs the number of the reducer that should process the pair. When no partitioner is specified, the partition is derived from the hash of the key.
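The default hash-based partitioning can be sketched as follows (a minimal illustration modeled on Hadoop's ''HashPartitioner''; the class and method names here are for the sketch only):

```java
// Sketch of the default partitioning rule: when no partitioner is given,
// the target reducer is derived from the hash of the key.
public class HashPartitionDemo {
    // Mask off the sign bit so the result stays non-negative even when
    // hashCode() is negative, then take the remainder modulo the number
    // of reducers. This mirrors Hadoop's HashPartitioner behavior.
    static int getPartition(String key, int numReduceTasks) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }

    public static void main(String[] args) {
        String[] keys = {"apple", "banana", "cherry"};
        int reducers = 3;
        for (String k : keys) {
            int p = getPartition(k, reducers);
            // Every pair with the same key always lands on the same reducer.
            System.out.println(k + " -> reducer " + p);
            assert p >= 0 && p < reducers;
        }
    }
}
```

Because the partition depends only on the key, all values sharing a key are routed to the same reducer, which is what makes the per-key grouping in the reduce phase possible.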
  
An AllReduce Hadoop job ([[.:step-16|Perl version]], [[.:step-31|Java version]]) consists of a mapper only. All the mappers must run simultaneously and can communicate using an ''allReduce'' function.
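The semantics of such an ''allReduce'' call can be illustrated with a self-contained sketch (the method name and the sum operation are assumptions for illustration, not the tutorial's actual API): every running mapper contributes one local value, and all of them receive the same combined result.

```java
// Conceptual sketch of allReduce semantics: N simultaneously running
// mappers each hold a local value; after the call, every mapper sees
// the same aggregate (here, the sum) of all local values.
public class AllReduceDemo {
    // Simulate the combine step over the local values of all mappers.
    static long allReduceSum(long[] localValues) {
        long total = 0;
        for (long v : localValues) {
            total += v;            // combine contributions from each mapper
        }
        return total;              // every mapper would receive this result
    }

    public static void main(String[] args) {
        long[] perMapperCounts = {10, 20, 30};  // e.g. local word counts
        long global = allReduceSum(perMapperCounts);
        System.out.println("global count = " + global);
    }
}
```

The key property is that the result is identical on every mapper, so all of them can continue the computation from the same shared value; this is why the mappers must all be running at the same time.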
