====== Managing a Hadoop cluster ======
A Hadoop cluster can be created:
  * for a specific Hadoop job. This is done by executing the job with the ''
  * manually using ''/
When a Hadoop cluster
  * ''
  * ''
  * ''
A Hadoop cluster is stopped:
  * after the timeout specified by ''
  * when the ''
  * using ''
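The course's own cluster-management scripts are site-specific and elided above. For reference only, a minimal sketch of how a stock Hadoop 1.x installation is stopped with its bundled scripts (these scripts are part of standard Hadoop, not of the course setup):

<code bash>
# Stopping a stock Hadoop 1.x cluster with its bundled scripts
# (illustration only; the course's wrapper scripts are site-specific).
bin/stop-mapred.sh   # stops the JobTracker and all TaskTrackers
bin/stop-dfs.sh      # stops the NameNode and all DataNodes
# or everything at once:
bin/stop-all.sh
</code>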
===== Web interface =====

The web interface provides a lot of useful information:
  * running, failed and successfully completed jobs
  * for a running job, the current progress and counters, both of the whole job and of each mapper and reducer
  * for any job, the counters and outputs of all mappers and reducers
  * for any job, all Hadoop settings
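In a stock Hadoop 1.x installation, this web interface is served by the JobTracker on port 50030 by default; the hostname below is a placeholder for your cluster's master machine:

<code bash>
# Open the JobTracker's web interface (Hadoop 1.x default port 50030);
# "cluster-master" is a placeholder for the actual master hostname.
firefox http://cluster-master:50030/
</code>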
+ | |||
+ | ===== Killing running jobs ===== | ||
+ | |||
Jobs running in a cluster can be stopped using
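The exact command is elided above; as a sketch, the standard Hadoop 1.x command for killing a job is ''hadoop job -kill'' (the job id below is a made-up placeholder — take the real one from the web interface):

<code bash>
# Kill a running job by its id (Hadoop 1.x command-line client);
# the id here is a placeholder, copy the real one from the web interface.
hadoop job -kill job_201302081525_0001
</code>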
+ | |||
+ | The jobs running on a cluster are present in the web interface, or can be printed using | ||
+ | < | ||
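Again the exact command is elided; in a stock Hadoop 1.x installation the running jobs are printed with:

<code bash>
# List jobs currently running on the cluster, with their ids,
# states and owners (Hadoop 1.x command-line client).
hadoop job -list
</code>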