MapReduce Tutorial : Managing a Hadoop cluster
Hadoop clusters can be created and stopped dynamically using the SGE cluster. A Hadoop cluster consists of one jobtracker (the master of the cluster) and multiple tasktrackers. The cluster is identified by its jobtracker. The jobtracker listens on two ports – one is used to submit jobs and the other serves the web interface.
A Hadoop cluster can be created:
- for a specific Hadoop job. This is done by executing the job with the -c option, see Running jobs.
- manually, using the /net/projects/hadoop/bin/hadoop-cluster script:
/net/projects/hadoop/bin/hadoop-cluster -c number_of_machines -w seconds_until_cluster_terminates
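For example, the following command (the numbers are only illustrative) asks for a cluster of 6 machines that shuts down automatically after one hour:
/net/projects/hadoop/bin/hadoop-cluster -c 6 -w 3600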
When a Hadoop cluster is started, a job is submitted to the SGE cluster. When the cluster starts successfully, the jobtracker:port and the address of the web interface are printed, and three files are created in the current directory:
- HadoopCluster.c$SGE_JOBID – high-level status of the Hadoop computation. Contains both the jobtracker:port and the address of the web interface.
- HadoopCluster.o$SGE_JOBID – contains stdout and stderr of the Hadoop job.
- HadoopCluster.po$SGE_JOBID – contains stdout and stderr of the Hadoop cluster.
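The high-level status file is a convenient place to look up the jobtracker:port later. For example (a sketch; replace 123456 with the actual SGE job id of the cluster job):
# prints the jobtracker:port and the web interface address
cat HadoopCluster.c123456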
A Hadoop cluster is stopped:
- after the timeout specified by -w
- when the HadoopCluster.c$SGE_JOBID file is deleted
- using qdel
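For example (a sketch; 123456 stands for the SGE job id of the cluster), either of the following commands stops a running cluster:
# removing the status file makes the cluster shut down
rm HadoopCluster.c123456
# alternatively, delete the SGE job directly
qdel 123456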
Web interface
The web interface provides a lot of useful information:
- running, failed and successfully completed jobs
- for a running job, the current progress and counters of the whole job, as well as of each mapper and reducer, are available
- for any job, the counters and outputs of all mappers and reducers
- for any job, all Hadoop settings
Killing running jobs
Jobs running on a cluster can be killed using
/SGE/HADOOP/active/bin/hadoop job -jt jobtracker:port -kill hadoop-job-id
The jobs running on a cluster are listed in the web interface, or can be printed using
/SGE/HADOOP/active/bin/hadoop job -jt jobtracker:port -list
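A typical session might look as follows; the jobtracker address and the Hadoop job id are only illustrative, use the values printed when your cluster started and by the -list command:
# list the Hadoop jobs running on the cluster
/SGE/HADOOP/active/bin/hadoop job -jt node123:9001 -list
# kill one of them by its Hadoop job id
/SGE/HADOOP/active/bin/hadoop job -jt node123:9001 -kill job_201201011234_0001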