MapReduce Tutorial: Dynamic Hadoop cluster for several computations
When several MR jobs are to be executed, it is better to reuse one cluster instead of allocating a new one for every computation.
A cluster can be created with:
/home/straka/hadoop/bin/hadoop-cluster -c number_of_machines [-w sec_to_run_the_cluster_for]
The syntax is the same as in perl script.pl run.
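For illustration, a hypothetical invocation might look like this (the machine count and lifetime below are example values, not recommendations):

```shell
# Allocate a cluster of 10 machines that will run for one hour (3600 seconds).
# If -w is omitted, the cluster uses the default lifetime.
/home/straka/hadoop/bin/hadoop-cluster -c 10 -w 3600
```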
The associated