Spark computations can be started both on desktop machines and on cluster machines, either by specifying ''MASTER'' to one of ''local'' modes, or by not specifying ''MASTER'' at all (''local[*]'' is used then).
  
Note that when you use ''sbatch'' or ''srun'', your job is by default allocated just a single core, so you should specify ''MASTER=local''. If you do not, Spark will use all cores on the machine, even though Slurm gave you only one.
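For example, inside a single-core ''srun'' or ''sbatch'' job, a PySpark computation could be run in local mode as follows (a minimal sketch; ''script.py'' is a hypothetical script name, and it assumes ''spark-submit'' honors the exported ''MASTER''):
<file>MASTER=local spark-submit script.py   # script.py is a hypothetical PySpark script</file>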
  
===== Starting Spark Cluster =====
  
A Spark cluster can be started using Slurm. The cluster is user-specific, but it can be used for several consecutive Spark computations.
  
The Spark cluster can be started using one of the following two commands:
  * ''spark-sbatch'': start a Spark cluster via an ''sbatch'' job <file>spark-sbatch [sbatch args] workers memory_per_workerG[:python_memoryG] command [arguments...]</file>
  * ''spark-srun'': start a Spark cluster via an ''srun'' job <file>spark-srun [salloc args] workers memory_per_workerG[:python_memoryG] [command arguments...]</file>
  
Both ''spark-sbatch'' and ''spark-srun'' start a Spark cluster with the specified number of workers, each with the given amount of memory. They then set ''MASTER'' and ''SPARK_ADDRESS'' to the address of the Spark master and ''SPARK_WEBUI'' to the URL of the master web interface. Both values are also written to standard output, and ''SPARK_WEBUI'' is added to the Slurm job Comment. Finally, the specified command is started; when ''spark-srun'' is used, the command may be empty, in which case ''bash'' is opened.
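For instance, a Spark computation can be submitted to the newly started cluster right away, assuming ''spark-submit'' picks up the exported ''MASTER'' (a sketch only; ''count_words.py'' is a hypothetical PySpark script):
<file>spark-sbatch 10 2G spark-submit count_words.py   # count_words.py is a hypothetical script</file>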
  
==== Memory Specification ====
  
TL;DR: A good default is ''2G''.
  
The memory for each worker is specified using the following format: <file>spark_memory_per_workerG[:memory_per_Python_processG]</file>
  
The Spark memory limits the Java heap of each worker, and half of it is reserved for in-memory storage of cached RDDs. The optional second value sets the memory limit of every Python process; it defaults to ''2G''.
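For example, to start a cluster with 10 workers, each with 4GB of Spark (Java heap) memory and up to 8GB for every Python process (values chosen purely for illustration):
<file>spark-srun 10 4G:8G</file>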
  
==== Examples ====
  
Start a Spark cluster with 10 workers, 2GB RAM each, and open an interactive shell. The cluster stops when the shell is exited.
<file>spark-srun 10 2G</file>
  
Start a Spark cluster with 20 workers, 4GB RAM each, in the ''cpu-ms'' partition, and run ''screen'' in it, so that several computations can be performed using this cluster. The cluster has to be stopped manually (either by quitting the ''screen'' or by calling ''scancel'').
<file>spark-sbatch -p cpu-ms 20 4G screen -D -m</file>
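Inside the ''screen'' session the exported ''MASTER'' should point to the running cluster, so several computations can be submitted one after another (a sketch with hypothetical script names):
<file>spark-submit first_step.py    # runs against the cluster via the exported MASTER
spark-submit second_step.py</file>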
  
