MapReduce Tutorial: Basic mapper
The simplest Hadoop job consists of a mapper only. The input data is divided into several parts, each processed by an independent mapper, and the results are collected in one directory, with one file per mapper.
The Hadoop framework silently handles failures. If a mapper task fails, another attempt is executed on the same input and the output of the failed attempt is discarded.
Example Perl mapper
#!/usr/bin/perl

package Mapper;
use Moose;
with 'Hadoop::Mapper';

# An identity mapper: every input (key, value) pair is
# written to the output unchanged.
sub map {
  my ($self, $key, $value, $context) = @_;

  $context->write($key, $value);
}

package Main;
use Hadoop::Runner;

my $runner = Hadoop::Runner->new(
  mapper => Mapper->new(),
  input_format => 'TextInputFormat',
  output_format => 'TextOutputFormat',
  output_compression => 0);

$runner->run();
The values input_format, output_format and output_compression could be left out, because they are all set to their default values.
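For illustration, here is a minimal sketch of the same script with these defaults left out (relying on the default behaviour described above):

#!/usr/bin/perl

package Mapper;
use Moose;
with 'Hadoop::Mapper';

# Identity mapper, as above.
sub map {
  my ($self, $key, $value, $context) = @_;

  $context->write($key, $value);
}

package Main;
use Hadoop::Runner;

# input_format, output_format and output_compression are omitted,
# so their default values are used.
my $runner = Hadoop::Runner->new(mapper => Mapper->new());

$runner->run();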
The resulting script can be executed locally, in a single thread, using
perl script.pl run input_directory output_directory
All files in input_directory are processed. The output_directory must not exist.
Exercise
To check that your Hadoop environment works, try running an MR job on /home/straka/wiki/cs-text which outputs only the articles whose names begin with an A (ignoring case). You can download the template step-3-exercise.pl and execute it.
wget --no-check-certificate 'https://wiki.ufal.ms.mff.cuni.cz/_media/courses:mapreduce-tutorial:step-3-exercise.txt' -O 'step-3-exercise.pl'
rm -rf step-3-out-ex; perl step-3-exercise.pl run /home/straka/wiki/cs-text-medium/ step-3-out-ex
less step-3-out-ex/part-m-*
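For reference, the core of such a filtering mapper might look like the following sketch. It assumes the article name is passed to map as the key; if your input delivers the name differently, the test has to be adjusted accordingly.

sub map {
  my ($self, $key, $value, $context) = @_;

  # Keep only articles whose name begins with 'a' or 'A'
  # (case-insensitive match).
  $context->write($key, $value) if $key =~ /^a/i;
}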
Solution
You can also download the solution step-3-solution.pl and check the correct output it produces.
wget --no-check-certificate 'https://wiki.ufal.ms.mff.cuni.cz/_media/courses:mapreduce-tutorial:step-3-solution.txt' -O 'step-3-solution.pl'
rm -rf step-3-out-sol; perl step-3-solution.pl run /home/straka/wiki/cs-text-medium/ step-3-out-sol
less step-3-out-sol/part-m-*