
Institute of Formal and Applied Linguistics Wiki



courses:mapreduce-tutorial:step-3 [2012/01/31 09:40] (current), last edited by straka
  
<file perl>
package My::Mapper;
use Moose;
with 'Hadoop::Mapper';

# ... (the body of the mapper is unchanged and not shown in this comparison view)
}

package main;
use Hadoop::Runner;

my $runner = Hadoop::Runner->new(
  mapper => My::Mapper->new(),
  input_format => 'TextInputFormat',
  output_format => 'TextOutputFormat',
  output_compression => 0);

$runner->run();  # Parse arguments in @ARGV and run the Hadoop job.
</file>
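For orientation, a rough sketch of what a complete mapper can look like follows. Note that the ''map'' callback name, its ''($self, $key, $value, $context)'' signature, and the ''$context->write'' method are assumptions modelled on common Hadoop wrappers; they are not confirmed by this page.

<file perl>
package My::Mapper;
use Moose;
with 'Hadoop::Mapper';   # role provided by the tutorial's Hadoop environment

# Hypothetical identity mapper: emits every (key, value) pair unchanged.
# The callback signature and $context->write are assumptions, not
# taken from this tutorial.
sub map {
  my ($self, $key, $value, $context) = @_;
  $context->write($key, $value);
}
</file>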
  
  
The resulting script can be executed locally in a single thread using
  perl script.pl input output_directory
The input can be either a file or a directory; in the latter case, all files in the directory are processed. The output_directory must not exist.

=== Standard input and output ===
The standard input and standard output of the Perl script are used to communicate with the Hadoop framework. Therefore, use the standard error output if you want to print debugging messages, e.g. ''print STDERR "Message"''.
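As a tiny illustration (the record format below is made up, not the tutorial's actual data layout):

<file perl>
# STDOUT carries the job's data stream, so it must stay clean;
# debugging output belongs on STDERR:
print STDERR "DEBUG: processing an article\n";   # for the human, not the framework
print "some_key\tsome_value\n";                  # made-up data record for the framework
</file>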
  
===== Exercise =====
  
To check that your Hadoop environment works, try running a MR job on ''/home/straka/wiki/cs-text-medium'' which outputs only the articles whose names begin with an ''A'' (ignoring the case). You can download the template {{:courses:mapreduce-tutorial:step-3-exercise.txt|step-3-exercise.pl}} and execute it.
  wget --no-check-certificate 'https://wiki.ufal.ms.mff.cuni.cz/_media/courses:mapreduce-tutorial:step-3-exercise.txt' -O 'step-3-exercise.pl'
  # NOW EDIT THE FILE
  # $EDITOR step-3-exercise.pl
  rm -rf step-3-out-ex; perl step-3-exercise.pl /home/straka/wiki/cs-text-medium/ step-3-out-ex
  less step-3-out-ex/part-*
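The heart of the exercise is the name test itself. A minimal self-contained sketch of that check (the article names below are made up; in the real job they arrive as mapper keys):

<file perl>
use strict;
use warnings;

# Made-up article names; the real input is /home/straka/wiki/cs-text-medium.
my @names = ('Astronomie', 'Brno', 'alfabet', 'Praha');

# Keep only names beginning with 'A', ignoring the case:
my @kept = grep { /^a/i } @names;
print "@kept\n";   # Astronomie alfabet
</file>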
==== Solution ====
You can also download the solution {{:courses:mapreduce-tutorial:step-3-solution.txt|step-3-solution.pl}} and check the correct output.
  wget --no-check-certificate 'https://wiki.ufal.ms.mff.cuni.cz/_media/courses:mapreduce-tutorial:step-3-solution.txt' -O 'step-3-solution.pl'
  # NOW VIEW THE FILE
  # $EDITOR step-3-solution.pl
  rm -rf step-3-out-sol; perl step-3-solution.pl /home/straka/wiki/cs-text-medium/ step-3-out-sol
  less step-3-out-sol/part-*
  
----

<html>
<table style="width:100%">
<tr>
<td style="text-align:left; width: 33%; "></html>[[step-2|Step 2]]: Input and output format, testing data.<html></td>
<td style="text-align:center; width: 33%; "></html>[[.|Overview]]<html></td>
<td style="text-align:right; width: 33%; "></html>[[step-4|Step 4]]: Counters.<html></td>
</tr>
</table>
</html>
