Institute of Formal and Applied Linguistics Wiki

</code>
  
Outputting (key, value) pairs is performed using the [[http://hadoop.apache.org/common/docs/r1.0.0/api/org/apache/hadoop/mapreduce/MapContext.html|MapContext<Kin, Vin, Kout, Vout>]] object (''Context'' is an abbreviation for this type), specifically its method ''context.write(Kout key, Vout value)''.
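For illustration, here is a minimal mapper sketch showing how ''context.write'' is called. It is not the mapper from the file below -- the class name and the identity-style body are assumptions made only for this example.

<code java>
import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Hypothetical mapper used only to illustrate context.write; the real TheMapper
// in MapperOnlyHadoopJob.java may process its input differently.
public class IllustrativeMapper extends Mapper<Text, Text, Text, Text> {
  @Override
  public void map(Text key, Text value, Context context) throws IOException, InterruptedException {
    context.write(key, value);   // Emit one (key, value) pair of the declared output types.
  }
}
</code>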

Here is the source of the whole Hadoop job:
  
<file java MapperOnlyHadoopJob.java>
    // ... (beginning of the file omitted) ...
    }
  
    Job job = new Job(getConf(), this.getClass().getName());    // Create class representing Hadoop job.
                                                                 // Name of the job is the name of the current class.

    job.setJarByClass(this.getClass());                          // Use jar containing current class.
    job.setMapperClass(TheMapper.class);                         // The mapper of the job.
    job.setOutputKeyClass(Text.class);                           // Type of the output keys.
    job.setOutputValueClass(Text.class);                         // Type of the output values.

    job.setInputFormatClass(KeyValueTextInputFormat.class);      // Input format.
                                                                 // Output format is the default -- TextOutputFormat.

    FileInputFormat.addInputPath(job, new Path(args[0]));        // Input path is on command line.
    FileOutputFormat.setOutputPath(job, new Path(args[1]));      // Output path is on command line too.
  
    return job.waitForCompletion(true) ? 0 : 1;
    // ... (rest of the file omitted) ...
</file>
  
Remarks:
  * The filename //must// be the same as the name of the class -- this is enforced by the Java compiler.
  * In one class, multiple jobs can be submitted, either in sequence or in parallel (see the sketch after this list).
  * A mismatch of types is usually detected by the compiler, but sometimes it is detected only at runtime. In that case an exception is raised and the program crashes. For example, the default output key class is ''LongWritable'' -- if ''Text'' were not specified here, the program would crash.
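The second remark can be sketched as follows. This is only an illustration, not part of the tutorial sources: the mapper classes ''FirstMapper'' and ''SecondMapper'' and the intermediate directory are hypothetical, and the fragment is meant to live in a ''run'' method like the one in ''MapperOnlyHadoopJob.java''.

<code java>
// Illustrative sketch: two jobs submitted in sequence from one run() method.
public int run(String[] args) throws Exception {
  Path intermediate = new Path(args[1] + "-intermediate");     // Hypothetical intermediate directory.

  Job first = new Job(getConf(), "first-pass");                // First job, configured as usual.
  first.setJarByClass(this.getClass());
  first.setMapperClass(FirstMapper.class);
  first.setOutputKeyClass(Text.class);
  first.setOutputValueClass(Text.class);
  FileInputFormat.addInputPath(first, new Path(args[0]));
  FileOutputFormat.setOutputPath(first, intermediate);
  if (!first.waitForCompletion(true)) return 1;                // Run it and wait for the result.

  Job second = new Job(getConf(), "second-pass");              // Second job reads the output of the first.
  second.setJarByClass(this.getClass());
  second.setMapperClass(SecondMapper.class);
  second.setOutputKeyClass(Text.class);
  second.setOutputValueClass(Text.class);
  FileInputFormat.addInputPath(second, intermediate);
  FileOutputFormat.setOutputPath(second, new Path(args[1]));
  return second.waitForCompletion(true) ? 0 : 1;               // Exit code of the whole program.
}
</code>

For parallel submission, ''Job.submit()'' returns immediately, and ''Job.isComplete()'' or ''Job.waitForCompletion()'' can be used later to collect the results.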
  
===== Running the job =====
The official way of running Hadoop jobs is to use the ''/SGE/HADOOP/active/bin/hadoop'' script. Jobs submitted through this script can be configured using Hadoop properties only. Therefore a wrapper script is provided, with options similar to those of the Perl API runner:
  * ''/net/projects/hadoop/bin/hadoop [-r number_of_reducers] job.jar [generic Hadoop properties] input_path output_path'' -- executes the given job locally in a single thread. It is useful for debugging.
  * ''/net/projects/hadoop/bin/hadoop -jt cluster_master [-r number_of_reducers] job.jar [generic Hadoop properties] input_path output_path'' -- submits the job to the given ''cluster_master''.
  * ''/net/projects/hadoop/bin/hadoop -c number_of_machines [-w secs_to_wait_after_job_finishes] [-r number_of_reducers] job.jar [generic Hadoop properties] input_path output_path'' -- creates a new cluster with the specified number of machines, which executes the given job and then waits for the specified number of seconds before it stops.

===== Exercise =====
Download ''MapperOnlyHadoopJob.java'', compile it, and run it using
  /net/projects/hadoop/bin/hadoop -r 0 MapperOnlyHadoopJob.jar /home/straka/wiki/cs-text-small outdir

Mind the ''-r 0'' switch -- specifying ''-r 0'' disables the reducer. If the switch ''-r 0'' were not given, one reducer of the default type ''IdentityReducer'' would be used; the ''IdentityReducer'' outputs every (key, value) pair it is given (a sketch of such a reducer follows the list below).
  * When using ''-r 0'', the job runs faster, as the mappers write their output directly to disk. But there are as many output files as there are mappers, and the (key, value) pairs are stored in no particular order.
  * When not specifying ''-r 0'' (i.e., using ''-r 1'' with the ''IdentityReducer''), the job produces the same (key, value) pairs, but this time they are in one output file, sorted by the key. Of course, the job runs slower in this case.
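For illustration, here is a hedged sketch of what such an identity reducer does. The class name is hypothetical; in the new API the base ''Reducer'' class already behaves this way by default.

<code java>
import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Hypothetical reducer equivalent to the identity behaviour described above:
// every (key, value) pair it receives is written out unchanged.
public class IllustrativeIdentityReducer extends Reducer<Text, Text, Text, Text> {
  @Override
  public void reduce(Text key, Iterable<Text> values, Context context)
      throws IOException, InterruptedException {
    for (Text value : values)
      context.write(key, value);   // Emit every value under its key.
  }
}
</code>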
  
