The Spark cluster can be started using one of the following two commands:
  * ''spark-qsub'': start a Spark cluster and perform ''qsub'' <file>[SGE_OPTS=additional_SGE_args] spark-qsub workers memory command [arguments]</file>
  * ''spark-qrsh'': start a Spark cluster and perform ''qrsh'' <file>[SGE_OPTS=additional_SGE_args] spark-qrsh workers memory [command arguments]</file>

Both the ''spark-qsub'' and ''spark-qrsh'' commands start a Spark cluster with the specified number of workers, each with the given amount of memory. They then set ''MASTER'' and ''SPARK_ADDRESS'' to the address of the Spark master and ''SPARK_WEBUI'' to the HTTP address of the master web interface. Both values are also written to standard output and added to the SGE job metadata. Lastly, the specified command is started using either ''qsub'' or ''qrsh''. Note that when ''spark-qrsh'' is used, the command may be empty, in which case an interactive shell is opened.
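
For illustration, a minimal sketch of a job script using these variables (assumptions: ''spark-submit'' from the Spark distribution is in ''PATH''; ''job.sh'' and ''my_script.py'' are hypothetical names):
<file>
# job.sh -- submitted via: spark-qsub 4 1G bash job.sh
echo "Master: $MASTER, web UI: $SPARK_WEBUI"    # both set by spark-qsub
spark-submit --master "$MASTER" my_script.py    # hypothetical Spark application
</file>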
  
==== Memory Specification ====
The memory specification is used for the master and worker heap sizes (and for the ''mem_free'' SGE constraint) and must always be given. The memory can be specified either in bytes or using a ''kK/mM/gG'' suffix. A reasonable default value is 512M or 1G.
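
For example, assuming the usual binary interpretation of the suffixes (1K = 1024 bytes), the following commands all request 1GB per worker:
<file>
spark-qsub 10 1G sleep infinity
spark-qsub 10 1024M sleep infinity
spark-qsub 10 1073741824 sleep infinity
</file>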
  
==== Examples ====

Start a Spark cluster with 10 machines with 1GB RAM each and then run an interactive shell. The cluster stops after the shell exits.
<file>spark-qrsh 10 1G</file>
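
Inside the opened shell, the ''MASTER'' variable already points to the running cluster, so a Spark shell connected to it can be started directly (a sketch assuming the Spark binaries are in ''PATH''; the Spark 1.x ''spark-shell'' and ''pyspark'' launchers honour the ''MASTER'' variable):
<file>
spark-shell    # Scala shell connected to the cluster
pyspark        # Python shell connected to the cluster
</file>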

Start a Spark cluster with 20 machines with 512MB RAM each. The cluster has to be stopped manually using ''qdel''.
<file>spark-qsub 20 512m sleep infinity</file>
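
The job number for ''qdel'' can be found, for example, with ''qstat'' (the job id below is hypothetical):
<file>
qstat          # list your SGE jobs and find the Spark cluster job id
qdel 123456    # stop the cluster by deleting the job
</file>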

Note that a running Spark cluster can currently be used only from other cluster machines (connections to a running SGE Spark cluster from a workstation outside the cluster end with a timeout).

==== Additional SGE Options ====
  
Additional ''qrsh'' or ''qsub'' options can be specified in the ''SGE_OPTS'' environment variable (not as ''spark-qsub'' or ''spark-qrsh'' arguments), as in the following example, which schedules the Spark master and workers to machines other than ''hyperion*'' and ''pandora*'':
<file>SGE_OPTS='-q *@!(hyperion*|pandora*)' spark-qrsh 10 1G</file>
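
Any other ''qsub''/''qrsh'' option can be passed the same way, e.g. lowering the job priority with the standard SGE ''-p'' option (the value below is only an illustration):
<file>SGE_OPTS='-p -100' spark-qsub 10 1G sleep infinity</file>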
  
