MapReduce Tutorial: Basic mapper
The simplest Hadoop job consists of a mapper only. The input data is divided into several parts, each processed by an independent mapper, and the results are collected in a single output directory, one file per mapper.
The Hadoop framework silently handles failures. If a mapper task fails, another attempt is executed on the same input and the output of the failed attempt is discarded.
Example Perl mapper
#!/usr/bin/perl

package Mapper;
use Moose;
with 'Hadoop::Mapper';

sub map {
  my ($self, $key, $value, $context) = @_;

  $context->write($key, $value);
}

package Main;
use Hadoop::Runner;

my $runner = Hadoop::Runner->new(
  mapper => Mapper->new(),
  input_format => 'TextInputFormat',
  output_format => 'TextOutputFormat',
  output_compression => 0);

$runner->run();
The values input_format, output_format and output_compression could be left out, because they are all set to their default values.
The resulting script can be executed locally in a single thread using
perl script.pl run input_directory output_directory
All files in input_directory are processed. The output_directory must not exist.
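To illustrate what the local single-threaded run does, here is a minimal self-contained Perl sketch (not part of the Hadoop::Runner API) that emulates it: every input line is handed to the map function as a (key, value) pair and whatever the mapper writes becomes the output. The run_mapper_locally helper is hypothetical; the assumption that TextInputFormat supplies the line's byte offset as the key and the line itself as the value follows standard Hadoop semantics.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical stand-in for a local single-threaded run: feed each
# input line to map() as (key, value), collect everything the mapper
# emits via the write callback (playing the role of $context->write).
sub run_mapper_locally {
    my ($map, @lines) = @_;
    my @output;
    my $offset = 0;    # TextInputFormat keys: byte offset of the line
    for my $line (@lines) {
        $map->($offset, $line, sub { push @output, [@_] });
        $offset += length($line) + 1;    # +1 for the stripped newline
    }
    return @output;
}

# The identity mapper from the tutorial, as a plain sub.
my @result = run_mapper_locally(
    sub { my ($key, $value, $write) = @_; $write->($key, $value); },
    "first line", "second line");

printf "%d\t%s\n", @$_ for @result;
```

Running this prints each line prefixed by its byte offset, which is also what the TextOutputFormat files of the real job contain for the identity mapper above.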
Step 2: Input and output format, testing data | Overview | Step 4: Counters