
Institute of Formal and Applied Linguistics Wiki



Differences

This shows you the differences between two versions of the page.

spark:running-spark-on-single-machine-or-on-cluster [2022/12/14 12:58]
straka [Examples]
spark:running-spark-on-single-machine-or-on-cluster [2023/11/07 12:48] (current)
straka [Starting Spark Cluster]
Line 13:
 Spark computations can be started both on desktop machines and on cluster machines, either by specifying ''MASTER'' to one of ''local'' modes, or by not specifying MASTER at all (''local[*]'' is used then).
  
-Note that when you use ''qrsh'' or ''qsub'', your job is usually expected to use one core, so you should specify ''MASTER=local''. If you do not, Spark will use all cores on the machine, even though SGE gave you only one.
+Note that when you use ''sbatch'' or ''srun'' to run a computation locally, your job is by default expected to use just a single core, so you should specify ''MASTER=local''. If you do not, Spark will use all cores on the machine, even though Slurm gave you only one.
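A minimal sketch of such a single-core Slurm job (the script name ''word_count.py'' and the resource requests are only illustrative; the point is setting ''MASTER=local'' so Spark does not grab all cores of the machine):
<file>srun --cpus-per-task=1 --mem=4G bash -c 'MASTER=local spark-submit word_count.py'</file>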
  
 ===== Starting Spark Cluster =====
Line 20:
  
 The Spark cluster can be started using one of the following two commands:
-  * ''spark-sbatch'': start a Spark cluster via an ''sbatch'' <file>spark-srun [sbatch args] workers memory_per_workerG[:python_memoryG] command [arguments...]</file>
+  * ''spark-sbatch'': start a Spark cluster via an ''sbatch'' <file>spark-sbatch [sbatch args] workers memory_per_workerG[:python_memoryG] command [arguments...]</file>
   * ''spark-srun'': start a Spark cluster via an ''srun'' <file>spark-srun [salloc args] workers memory_per_workerG[:python_memoryG] [command arguments...]</file>
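For illustration (the worker count, memory, and script name are arbitrary), a cluster of 10 workers with 2GB of memory each could be started non-interactively via ''spark-sbatch'', running a hypothetical ''word_count.py'', or interactively via ''spark-srun'':
<file>spark-sbatch 10 2G spark-submit word_count.py
spark-srun 10 2G</file>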
  
