Probably the most important feature of MapReduce is the ability to run computations in a distributed fashion.
  
So far all our Hadoop jobs were executed locally, but all of them can also be executed on multiple machines. It suffices to add the parameter ''-c number_of_machines'' when running them:
  perl script.pl -c number_of_machines [-w sec_to_wait_after_job_completion] input_directory output_directory
This command creates a cluster of the specified number of machines. Every machine is able to run two mappers and two reducers simultaneously. In order to be able to observe the counters, status and error logs of the computation after it ends, the parameter ''-w sec_to_wait_after_job_completion'' can be used -- when it is given, the cluster waits for the specified time after the job finishes (successfully or not) before shutting down.
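As a concrete illustration (the script name and data paths here are placeholders, not files from this tutorial), a run on 4 machines that keeps the cluster alive for 5 minutes after the job ends could look like this:
  # hypothetical example: 4-machine cluster, cluster stays up 300 s after the job finishes
  rm -rf output_directory
  perl script.pl -c 4 -w 300 input_directory output_directory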
  
One of the machines in the cluster is a //master//, or a //job tracker//, and it is used to identify the cluster.

In the UFAL environment, when a distributed Hadoop computation is executed, it submits a job to the SGE cluster with the name of the Perl script. The job creates 3 files in the current directory:
  * ''script.pl.c$SGE_JOBID'' -- high-level status of the Hadoop computation
  * ''script.pl.o$SGE_JOBID'' -- contains stdout and stderr of the Hadoop job
  * ''script.pl.po$SGE_JOBID'' -- contains stdout and stderr of the Hadoop cluster
When the computation ends and is waiting because of the ''-w'' parameter, removing the file ''script.pl.c$SGE_JOBID'' stops the cluster. The cluster can also be stopped by removing its SGE job using ''qdel''.
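For example, a cluster left waiting because of ''-w'' might be inspected and shut down with commands along these lines (the job id ''123456'' is a placeholder for your actual ''$SGE_JOBID''):
  cat script.pl.c123456     # high-level status of the Hadoop computation
  tail script.pl.o123456    # stdout/stderr of the Hadoop job
  rm script.pl.c123456      # removing the status file stops the waiting cluster
  qdel 123456               # alternatively, delete the SGE job itself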
  
===== Web interface =====
  
The cluster master provides a web interface at the address printed by the ''hadoop-cluster'' script. The address is also present on the second line of ''script.pl.c$SGE_JOBID'', or can be obtained using ''qstat -j $SGE_JOBID'' (context variable ''hdfs_jobtracker_admin'').
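A quick way to look the address up might be the following (again with a placeholder job id):
  head -2 script.pl.c123456                      # the address is on the second line
  qstat -j 123456 | grep hdfs_jobtracker_admin   # or read the SGE context variable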
  
The web interface provides a lot of useful information:
  * running, failed and successfully completed jobs
  * for a running job, the current progress and counters of the whole job, as well as of each mapper and reducer, are available
  
  
===== Example =====

Try running the {{:courses:mapreduce-tutorial:step-6.txt|step-6-wordcount.pl}} script using
  wget --no-check-certificate 'https://wiki.ufal.ms.mff.cuni.cz/_media/courses:mapreduce-tutorial:step-6.txt' -O 'step-6-wordcount.pl'
  rm -rf step-6-out; perl step-6-wordcount.pl -c 1 -w 600 -Dmapred.max.split.size=1000000 /home/straka/wiki/cs-text-medium step-6-out
and explore the web interface.

If you cannot directly access the ''*.ufal.hide.ms.mff.cuni.cz'' network, you can use
  ssh -N -L 50030:pandora3:50030 geri.ms.mff.cuni.cz
on your computer to create a tunnel from local port 50030 to the machine ''pandora3:50030''. Replace **''pandora3''** with your cluster master, but leave the hostname **''geri.ms.mff.cuni.cz''** unmodified. Now you can access the web interface at the URL [[http://localhost:50030]].

----

<html>
<table style="width:100%">
<tr>
<td style="text-align:left; width: 33%; "></html>[[step-5|Step 5]]: Basic reducer.<html></td>
<td style="text-align:center; width: 33%; "></html>[[.|Overview]]<html></td>
<td style="text-align:right; width: 33%; "></html>[[step-7|Step 7]]: Dynamic Hadoop cluster for several computations.<html></td>
</tr>
</table>
</html>
