====== MapReduce Tutorial : Basic reducer ======

The interesting part of a Hadoop job is the //reducer//: after all mappers have produced their (key, value) pairs, a ''reduce'' function is called for every unique key with all the values associated with it. The ''reduce'' function can itself output (key, value) pairs, which are written to disk.

The ''reduce'' function is similar to ''map'', but instead of a single value it receives an iterator, which enumerates all values associated with the key:

<file perl>
package Mapper;
use Moose;
with 'Hadoop::Mapper';

# The map function is called for every input (key, value) pair.
sub map {
  my ($self, $key, $value, $context) = @_;

  $context->write($key, $value);
}

package Reducer;
use Moose;
with 'Hadoop::Reducer';

# The reduce function is called for every unique key; $values is an iterator
# enumerating all values associated with that key.
sub reduce {
  my ($self, $key, $values, $context) = @_;

  while ($values->next) {
    $context->write($key, $values->value);
  }
}

package Main;
use Hadoop::Runner;

my $runner = Hadoop::Runner->new(
  mapper => Mapper->new(),
  reducer => Reducer->new());

$runner->run();
</file>

As before, Hadoop silently handles failures. Even a successfully finished mapper may need to be executed again, for example when the machine where its output data were stored gets disconnected from the network.

===== Exercise 1 =====

Run a Hadoop job on ''/home/straka/wiki/cs-text-small'' which counts the occurrences of every word in the article texts.

{{:courses:mapreduce-tutorial:step-5-solution1.txt|Solution.pl}}
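
A minimal sketch of one possible approach, using the same ''Hadoop::Runner'' API as above. It assumes the article text arrives as the value of each input pair and that splitting on whitespace is an acceptable notion of a word; the linked Solution.pl is the authoritative solution.

<file perl>
package Mapper;
use Moose;
with 'Hadoop::Mapper';

# Emit (word, 1) for every word of the article text (assumption: the text is
# the value of the input pair and words are separated by whitespace).
sub map {
  my ($self, $key, $value, $context) = @_;

  foreach my $word (split ' ', $value) {
    $context->write($word, 1);
  }
}

package Reducer;
use Moose;
with 'Hadoop::Reducer';

# Sum all the 1s emitted for a word.
sub reduce {
  my ($self, $key, $values, $context) = @_;

  my $count = 0;
  $count += $values->value while ($values->next);
  $context->write($key, $count);
}

package Main;
use Hadoop::Runner;

my $runner = Hadoop::Runner->new(
  mapper => Mapper->new(),
  reducer => Reducer->new());

$runner->run();
</file>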

===== Exercise 2 =====

Run a Hadoop job on ''/home/straka/wiki/cs-text-small'' which generates an inverted index. An inverted index contains, for each word, all of its //occurrences//, where each occurrence is a pair (article of occurrence, position of occurrence).

{{:courses:mapreduce-tutorial:step-5-solution2.txt|Solution.pl}}
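
Again only an illustrative sketch: it assumes the input key is the article name and the value is the article text, and it encodes every occurrence as a tab-separated string of article name and word position, which the reducer then concatenates; the linked Solution.pl may use a different output format.

<file perl>
package Mapper;
use Moose;
with 'Hadoop::Mapper';

# For every word emit (word, "article\tposition"), where position is the index
# of the word within the article text (assumption: key = article name).
sub map {
  my ($self, $key, $value, $context) = @_;

  my $position = 0;
  foreach my $word (split ' ', $value) {
    $context->write($word, "$key\t$position");
    $position++;
  }
}

package Reducer;
use Moose;
with 'Hadoop::Reducer';

# Collect all occurrences of a word into a single space-separated value.
sub reduce {
  my ($self, $key, $values, $context) = @_;

  my @occurrences;
  push @occurrences, $values->value while ($values->next);
  $context->write($key, join(' ', @occurrences));
}

package Main;
use Hadoop::Runner;

my $runner = Hadoop::Runner->new(
  mapper => Mapper->new(),
  reducer => Reducer->new());

$runner->run();
</file>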
  
