Institute of Formal and Applied Linguistics Wiki


courses:mapreduce-tutorial:managing-a-hadoop-cluster (last revised 2013/02/08 15:25 by popel)
A Hadoop cluster can be created:
  * for a specific Hadoop job. This is done by executing the job with the ''-c'' option, see [[.:Running jobs]].
  * manually using the ''/net/projects/hadoop/bin/hadoop-cluster'' script: <code>/net/projects/hadoop/bin/hadoop-cluster -c number_of_machines -w seconds_to_wait_after_all_jobs_completed</code>
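For instance, a manual invocation might look like the following (the machine count and timeout are illustrative values; ''-w'' is given in seconds):

<code>
# Request a 10-machine cluster that shuts down 600 seconds
# after the last job finishes (values are illustrative):
/net/projects/hadoop/bin/hadoop-cluster -c 10 -w 600
</code>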
  
When a Hadoop cluster is about to start, a job is submitted to the SGE cluster. When the cluster starts successfully, the jobtracker:port and the address of the web interface are printed, and 3 files are created in the current directory:
  
A Hadoop cluster is stopped:
  * after the timeout specified by ''-w'' elapses, counted from the completion of the last job
  * when the ''HadoopCluster.c$SGE_JOBID'' file is deleted
  * using ''qdel''.
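For example, assuming the cluster's SGE job id is 1234567 (the actual id is printed when the cluster starts; the value here is illustrative), either of the following stops it:

<code>
# Stop the cluster by deleting its control file in the directory
# where it was started (1234567 stands for the real SGE job id):
rm HadoopCluster.c1234567

# ...or kill the SGE job directly:
qdel 1234567
</code>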
