
Institute of Formal and Applied Linguistics Wiki



Differences

This shows you the differences between two versions of the page.

spark:running-spark-on-single-machine-or-on-cluster [2022/12/14 12:57]
straka [Examples]
spark:running-spark-on-single-machine-or-on-cluster [2022/12/14 12:58]
straka [Additional SGE Options]
Line 38: Line 38:
 <file>spark-srun 10 2G</file>
  
-Start a Spark cluster with 20 workers, 4GB RAM each, and run ''screen'' in it, so that several computations can be performed using this cluster. The cluster has to be stopped manually (either by quitting the screen or calling ''scancel'').
+Start a Spark cluster with 20 workers, 4GB RAM each, in the ''cpu-ms'' partition, and run ''screen'' in it, so that several computations can be performed using this cluster. The cluster has to be stopped manually (either by quitting the screen or calling ''scancel'').
-<file>spark-sbatch 20 4G screen -D -m</file>
+<file>spark-sbatch -p cpu-ms 20 4G screen -D -m</file>
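As a minimal sketch of the manual stop (assuming the cluster runs as an ordinary Slurm job whose ID can be looked up with ''squeue''):
<file># list your running Slurm jobs and note the ID of the Spark cluster job
squeue -u $USER
# cancel that job, which shuts down the whole Spark cluster
scancel JOBID    # replace JOBID with the ID reported by squeue</file>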
  
  
-==== Additional SGE Options ==== 
- 
-Additional ''qrsh'' or ''qsub'' options can be specified in the ''SGE_OPTS'' environment variable (not as ''spark-qsub'' or ''spark-qrsh'' arguments), as in the following example, which schedules the Spark master and workers to machines other than ''hyperion*'' and ''pandora*'': 
-<file>SGE_OPTS='-q *@!(hyperion*|pandora*)' spark-qrsh 10 1G</file> 
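Any other ''qsub'' option could be passed the same way; a hypothetical sketch adding a 24-hour hard run-time limit for the whole cluster (the available resource names depend on the local SGE configuration):
<file>SGE_OPTS='-l h_rt=24:00:00' spark-qrsh 10 1G</file>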
  
