
Institute of Formal and Applied Linguistics Wiki


courses:mapreduce-tutorial:perl-api [2012/01/31 09:38] (current)
straka Change Perl commandline syntax.
===== Hadoop::Runner =====

<file perl>
package Hadoop::Runner;
use Moose;
  
has 'hadoop_prefix' => (isa => 'Str', default => '/SGE/HADOOP/active');
has 'copy_environment' => (isa => 'ArrayRef[Str]', default => sub { [] });

sub run();
</file>
  * ''mapper'' -- a ''Hadoop::Mapper'' to use
  * ''reducer'' -- an optional ''Hadoop::Reducer'' to use
  * ''output_compression'' -- Bool flag controlling the compression of the output
  * ''hadoop_prefix'' -- the prefix of the Hadoop installation. The default value is fine on the UFAL cluster.
  * ''copy_environment'' -- which environment variables are preserved when running Perl mappers, reducers, combiners and partitioners. Needed only when running a job using ''-jt'' -- both local execution and execution using the ''-c'' option retain all environment variables.

==== Command line arguments supported by Hadoop::Runner::run() ====

  script.pl [-jt jobtracker | -c number_of_machines [-w secs]] [-r reducers] [-Dname=value -Dname=value ...] input output
  script.pl --map number_of_reducers
  script.pl --reduce
  script.pl --combine
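For illustration, a complete driver script might be sketched as follows. This is only a sketch: the package name ''My::IdentityMapper'' is illustrative, and it assumes ''run()'' itself processes the command line arguments listed above.

<file perl>
# A sketch of a complete script -- My::IdentityMapper is an illustrative name.
package My::IdentityMapper;
use Moose;
with 'Hadoop::Mapper';

sub map {
  my ($self, $key, $value, $context) = @_;
  $context->write($key, $value);   # pass every (key, value) pair through
}

package main;
use Hadoop::Runner;

my $runner = Hadoop::Runner->new(mapper => My::IdentityMapper->new());
$runner->run();   # processes the command line arguments described above
</file>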
  
===== Hadoop::Mapper =====

<file perl>
package Hadoop::Mapper;
use Moose::Role;
requires 'map';

sub setup() {}
sub cleanup {}
</file>
  * ''sub map($self, $key, $value, $context)'' -- executed for every (key, value) input pair. The variable ''$context'' has the following methods:
    * ''$context%%->%%write($key, $value)'' -- outputs the (''$key'', ''$value'') pair
    * ''$context%%->%%counter($group, $name, $increment)'' -- increases the counter ''$name'' in the group ''$group'' by ''$increment''
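As an example, a mapper emitting one pair per word and updating a counter could be sketched as follows (the package name is illustrative):

<file perl>
package My::WordCountMapper;   # illustrative name
use Moose;
with 'Hadoop::Mapper';

sub map {
  my ($self, $key, $value, $context) = @_;
  for my $word (split /\s+/, $value) {
    $context->write($word, 1);                     # emit (word, 1)
    $context->counter('WordCount', 'words', 1);    # track total words seen
  }
}
</file>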
===== Hadoop::Reducer =====

<file perl>
package Hadoop::Reducer;
use Moose::Role;
requires 'reduce';

sub setup() {}
sub cleanup {}
</file>
  * ''sub reduce($self, $key, $values, $context)'' -- executed for every ''$key''. The ''$values'' is an iterator with the following methods:
    * ''$values%%->%%value()'' -- returns the current value, or undef if there is none.
    * ''$values%%->%%next()'' -- advances to the next value. Returns true if there is one, false otherwise.
    * At the beginning there is no current value; the first value must be obtained by calling ''next''.
  * The variable ''$context'' has the following methods:
    * ''$context%%->%%write($key, $value)'' -- outputs the (''$key'', ''$value'') pair
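A summing reducer illustrating the iterator protocol might be sketched as follows (the package name is illustrative; note the initial ''next'' call):

<file perl>
package My::SumReducer;   # illustrative name
use Moose;
with 'Hadoop::Reducer';

sub reduce {
  my ($self, $key, $values, $context) = @_;
  my $sum = 0;
  while ($values->next()) {    # no current value exists before the first next()
    $sum += $values->value();
  }
  $context->write($key, $sum);
}
</file>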
===== Hadoop::Partitioner =====

<file perl>
package Hadoop::Partitioner;
use Moose::Role;
requires 'getPartition';

sub setup {}
sub cleanup {}
</file>
  * ''sub getPartition($self, $key, $value, $partitions)'' -- executed for every output (key, value) pair. It must return the number of the partition, in the range 0..$partitions-1, into which the output (key, value) pair should be placed.
  * ''sub setup($self)'' -- executed once before any input (key, value) pairs are processed.
  * ''sub cleanup($self)'' -- executed once after all input (key, value) pairs are processed.
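For instance, a partitioner distributing keys by a simple character-code hash could be sketched as follows (the package name and the hash function are illustrative):

<file perl>
package My::HashPartitioner;   # illustrative name
use Moose;
with 'Hadoop::Partitioner';

sub getPartition {
  my ($self, $key, $value, $partitions) = @_;
  my $hash = 0;
  $hash += ord($_) for split //, $key;   # a trivial hash of the key
  return $hash % $partitions;            # always in range 0..$partitions-1
}
</file>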

===== Available environmental variables =====
  * ''HADOOP_TASK_ID'' -- available in every mapper and reducer. The serial number of the mapper or reducer task (in the range 0..number_of_tasks-1).
  * ''HADOOP_WORK_OUTPUT_PATH'' -- available in every reducer, and also in every mapper of a reduce-less job. It contains an existing directory where the reducer can output files. If the reducer finishes successfully, all files and subdirectories will be moved to the output directory of the job.
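A reducer could, for example, create a side-output file in this directory (a sketch; the file name is illustrative):

<file perl>
# Inside a reducer method, after the framework has set the variables:
my $task = $ENV{HADOOP_TASK_ID};            # e.g. 0, 1, ...
my $dir  = $ENV{HADOOP_WORK_OUTPUT_PATH};   # existing working directory
open(my $out, '>', "$dir/side-output-$task.txt")
  or die "Cannot create side output: $!";
print {$out} "task $task done\n";
close $out;
</file>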
  
