===== Running Spark on Single Machine =====

Spark computations can be started both on desktop machines and on cluster machines, either by specifying ''MASTER'' to one of the ''local'' modes, or by not specifying ''MASTER'' at all (''local[*]'' is used then).

Note that when you use ''srun'' or ''sbatch'', your job is usually expected to use a single core, so you should specify ''MASTER=local''. If you do not, Spark will use all cores of the machine, even though Slurm allocated only one to your job.

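The master selection described above can be sketched as follows; this is a minimal illustration, and ''get_master'' is a hypothetical helper, not part of Spark or of the cluster scripts:

```python
import os

def get_master(environ=os.environ):
    # Return the Spark master URL: the value of MASTER when it is set,
    # otherwise "local[*]", i.e., all cores of the local machine.
    return environ.get("MASTER", "local[*]")

# In a one-core cluster job, MASTER=local should be set explicitly,
# so that Spark does not try to use all cores of the machine:
print(get_master({"MASTER": "local"}))  # local
print(get_master({}))                   # local[*]
```
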
===== Starting Spark Cluster =====

A Spark cluster can be started using Slurm. The cluster is user-specific, but it can be used for several consecutive Spark computations.

The Spark cluster can be started using one of the following two commands:
  * ''spark-sbatch'': start a Spark cluster via an ''sbatch'' <file>spark-sbatch [sbatch args] workers memory_per_workerG[:python_memoryG] command [arguments...]</file>
  * ''spark-srun'': start a Spark cluster via an ''srun'' <file>spark-srun [salloc args] workers memory_per_workerG[:python_memoryG] [command arguments...]</file>

Both the ''spark-sbatch'' and ''spark-srun'' commands start a Spark cluster with the specified number of workers, each with the given amount of memory. They then set ''MASTER'' and ''SPARK_ADDRESS'' to the address of the Spark master and ''SPARK_WEBUI'' to the URL of the master web interface. Both these values are also written to standard output, and the ''SPARK_WEBUI'' is added to the Slurm job comment. Finally, the specified command is started; when ''spark-srun'' is used, the command may be empty, in which case ''bash'' is opened.

==== Memory Specification ====

The memory specification, which is used for the master and worker heap sizes (and for the Slurm job memory limit), is required. The memory can be specified either in bytes, or using a ''k''/''K''/''m''/''M''/''g''/''G'' suffix. A reasonable default value is 512M or 1G.
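
The accepted memory formats can be sketched by the following snippet; ''parse_memory'' is a hypothetical helper mirroring the description above, not a function of the actual cluster scripts:

```python
def parse_memory(spec):
    # Parse a memory specification: either a plain number of bytes,
    # or a number followed by a k/K (KiB), m/M (MiB), or g/G (GiB) suffix.
    units = {"k": 1024, "m": 1024 ** 2, "g": 1024 ** 3}
    suffix = spec[-1].lower()
    if suffix in units:
        return int(spec[:-1]) * units[suffix]
    return int(spec)

print(parse_memory("512m"))  # 536870912
print(parse_memory("1G"))    # 1073741824
```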

==== Examples ====

Start a Spark cluster with 10 workers, each with 1GB of RAM, and then run an interactive shell. The cluster stops after the shell is exited.
<file>spark-srun 10 1G</file>

Start a Spark cluster with 20 workers, each with 512MB of RAM. The cluster has to be stopped manually using ''scancel''.
<file>spark-sbatch 20 512m sleep infinity</file>

Note that a running Spark cluster can currently be used only from other cluster machines; connections from outside the cluster (e.g., from a local workstation) end with a timeout.

==== Additional Slurm Options ====

Additional ''sbatch'' or ''salloc'' options can be passed directly to ''spark-sbatch'' or ''spark-srun'' before the number of workers, as in the following example, which avoids scheduling the Spark master and the workers on the ''hyperion*'' and ''pandora*'' machines:
<file>spark-srun --exclude=hyperion[1-9],pandora[1-9] 10 1G</file>