====== Running Spark on Single Machine or on Cluster ======

In order to use Spark, the environment has to be set up according to Using Spark in UFAL Environment.

When a Spark computation starts, it uses the environment variable MASTER to determine the mode of computation. The following values are possible:

  * ''local'' – run locally using a single thread,
  * ''local[N]'' – run locally using N threads,
  * ''local[*]'' – run locally using as many threads as there are cores,
  * ''spark://hostname:port'' – connect to an already running Spark cluster (this is the value set by the spark-sbatch and spark-srun commands described below).

===== Running Spark on Single Machine =====

Spark computations can be started both on desktop machines and on cluster machines, either by setting MASTER to one of the local modes, or by not setting MASTER at all (in which case local[*] is used).

Note that when you use srun or sbatch, your job is usually expected to use a single core, so you should set MASTER=local. If you do not, Spark will use all cores of the machine, even though Slurm allocated only one to you.
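
For illustration, a minimal sketch of running a computation locally, assuming the Spark binaries (spark-submit, pyspark) are on PATH after the environment setup linked above, that the environment makes Spark honour the MASTER variable as described, and that word_count.py is a script of your own:

<file>
# Run a PySpark script locally using a single core (suitable inside srun/sbatch jobs).
MASTER=local spark-submit word_count.py

# Run an interactive PySpark shell locally using 4 threads.
MASTER='local[4]' pyspark

# With MASTER unset, local[*] is used, i.e. all cores of the machine.
spark-submit word_count.py
</file>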

===== Starting Spark Cluster =====

A Spark cluster can be started using Slurm. The cluster is user-specific, but it can be used for several consecutive Spark computations.

The Spark cluster can be started using one of the following two commands:

  * ''spark-srun workers memory_per_worker [command]'' – starts the cluster interactively; it stops as soon as the given command (or the opened shell) finishes,
  * ''spark-sbatch workers memory_per_worker command'' – submits the cluster as a batch job, which keeps running until it is cancelled.

Both spark-sbatch and spark-srun start a Spark cluster with the specified number of workers, each with the given amount of memory. They then set MASTER and SPARK_ADDRESS to the address of the Spark master and SPARK_WEBUI to the URL of the master web interface. Both values are also written to standard output, and the SPARK_WEBUI is added to the comment of the Slurm job. Finally, the specified command is started; when spark-srun is used, the command may be empty, in which case bash is opened.
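
A minimal sketch of what can be done inside the shell opened by spark-srun (the master address below is illustrative; the real one is taken from the exported variables):

<file>
# The cluster address and web UI are exported by spark-srun:
echo "$MASTER" "$SPARK_ADDRESS"   # e.g. spark://some-node:7077 (illustrative)
echo "$SPARK_WEBUI"               # URL of the master web interface

# Computations started from this shell use MASTER and therefore run on the cluster.
spark-submit word_count.py
</file>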

===== Memory Specification =====

TL;DR: A good default is 2G.

The memory for each worker is specified using the following format: <file>spark_memory_per_workerG[:memory_per_Python_processG]</file>
The Spark memory limits the Java heap, and half of it is reserved for memory storage of cached RDDs. The second value sets a memory limit for every Python process and defaults to 2G.
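
For example, the following sketch (the worker count and memory sizes are purely illustrative) starts a cluster of 4 workers, each with 8GB of Spark (Java) memory and a 4GB limit for every Python process:

<file>spark-srun 4 8G:4G</file>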
==== Examples ====

Start a Spark cluster with 10 machines with 1GB RAM each and then run an interactive shell. The cluster stops after the shell is exited.
<file>spark-srun 10 1G</file>

Start a Spark cluster with 20 machines with 512MB RAM each. The cluster has to be stopped manually using scancel.
<file>spark-sbatch 20 512m sleep infinity</file>
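
Because such a cluster keeps running, it can be reused for further computations and must be stopped explicitly. A hedged sketch, assuming the master address is taken from the spark-sbatch output or from the Slurm job comment (the address and job id below are placeholders):

<file>
# List your jobs to find the cluster's job id; its Comment contains the web UI URL.
squeue -u $USER
scontrol show job <job_id> | grep -i comment

# Run another computation on the already running cluster (from a cluster machine).
MASTER=spark://some-node:7077 spark-submit word_count.py

# Stop the cluster when it is no longer needed.
scancel <job_id>
</file>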

Note that a running Spark cluster can currently be used only from other cluster machines (connections to a running Spark cluster from a workstation end with a timeout).

==== Additional SGE Options ====

Additional qrsh or qsub options can be specified in the SGE_OPTS environment variable (not as spark-qsub or spark-qrsh arguments), as in the following example, which schedules the Spark master and workers to machines other than hyperion* and pandora*:

<file>SGE_OPTS='-q *@!(hyperion*|pandora*)' spark-qrsh 10 1G</file>
