Institute of Formal and Applied Linguistics Wiki



spark:running-spark-on-single-machine-or-on-cluster [2022/12/14 12:58] straka
==== Memory Specification ====
  
TL;DR: A good default is ''2G''.
  
The memory for each worker is specified using the following format: <file>spark_memory_per_workerG[:memory_per_Python_processG]</file>
The Spark memory limits the Java heap, and half of it is reserved for in-memory storage of cached RDDs. The second value sets a memory limit for every Python process and defaults to ''2G''.
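For example, ''4G:2G'' gives every worker a 4GB Java heap (of which up to 2GB can hold cached RDDs) and limits every Python process to 2GB. A sketch, assuming the memory specification is passed as the second argument of ''spark-srun'' just like the plain ''2G'' in the Examples below:
<file>spark-srun 10 4G:2G</file>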
  
==== Examples ====
  
Start a Spark cluster with 10 workers with 2GB RAM each and then run an interactive shell. The cluster stops after the shell is exited.
<file>spark-srun 10 2G</file>
  
Start a Spark cluster with 20 workers with 4GB RAM each in the ''cpu-ms'' partition, and run ''screen'' in it, so that several computations can be performed using this cluster. The cluster has to be stopped manually (either by quitting the screen or by calling ''scancel'').
<file>spark-sbatch -p cpu-ms 20 4G screen -D -m</file>
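Because ''screen -D -m'' starts the session detached, you can attach to it later to submit further computations. A sketch (assuming you first ''ssh'' to the node where the Slurm job runs, and ''<jobid>'' is a placeholder for the job id shown by ''squeue''):
<file>
screen -r          # attach to the detached session on the job's node
# ... run Spark computations, detach again with Ctrl-a d ...
scancel <jobid>    # stop the cluster by cancelling the Slurm job
</file>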
  
