====== Using Python ======
  
A better interactive shell with code completion, using ''ipython'' (installed everywhere on the cluster; ask our IT if you want it installed on your workstations too), can be started using:
<file>PYSPARK_DRIVER_PYTHON=ipython pyspark</file>
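Inside the shell, the ''SparkContext'' is already available as the predefined variable ''sc'', so a quick sanity check of the cluster might look like this (a trivial illustrative computation, not from the original page):
<file python>
sc.parallelize(range(10)).sum()  # distributes 0..9 across workers and sums them, returning 45
</file>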
  
As described in [[running-spark-on-single-machine-or-on-cluster|Running Spark on Single Machine or on Cluster]], the environment variable ''MASTER'' specifies which Spark master to use (or whether to start a local one).
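For example, to override the master for a single run (''local[4]'' is the standard Spark syntax for a local master with four threads; the number is an arbitrary choice here):
<file>MASTER=local[4] PYSPARK_DRIVER_PYTHON=ipython pyspark</file>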
  
==== Usage Examples ====
Consider the following simple script computing the 10 most frequent words of the Czech Wikipedia:
<file python>
(sc.textFile("/net/projects/spark-example-data/wiki-cs", 3*sc.defaultParallelism)
   .flatMap(lambda line: line.split())
   .map(lambda word: (word, 1))
   .reduceByKey(lambda c1, c2: c1 + c2)
   .sortBy(lambda word_count: word_count[1], ascending=False)
   .take(10))
</file>
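As a side note, the explicit ''sortBy'' followed by ''take'' can also be expressed with the standard RDD method ''takeOrdered'', which returns just the requested number of elements; a sketch of the same computation:
<file python>
(sc.textFile("/net/projects/spark-example-data/wiki-cs", 3*sc.defaultParallelism)
   .flatMap(lambda line: line.split())
   .map(lambda word: (word, 1))
   .reduceByKey(lambda c1, c2: c1 + c2)
   .takeOrdered(10, key=lambda wc: -wc[1]))  # negated count gives descending order
</file>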
  
  * run interactive shell using an existing Spark cluster (i.e., inside ''spark-qrsh''), or start a local Spark cluster using as many threads as there are cores if there is none:
  <file>PYSPARK_DRIVER_PYTHON=ipython pyspark</file>
  * run interactive shell with a local Spark cluster using one thread:
  <file>MASTER=local PYSPARK_DRIVER_PYTHON=ipython pyspark</file>
  * start a Spark cluster (10 machines, 1GB RAM each) on SGE and run the interactive shell:
  <file>PYSPARK_DRIVER_PYTHON=ipython spark-qrsh 10 1G pyspark</file>
  
Note that the ''PYSPARK_DRIVER_PYTHON'' variable can be left out, or set once in ''.bashrc'' (or similar).
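For instance, the following line in ''.bashrc'' makes every later ''pyspark'' invocation use IPython:
<file>export PYSPARK_DRIVER_PYTHON=ipython</file>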
  
  
===== Running Python Spark Scripts =====

Python Spark scripts can be started using:
<file>spark-submit</file>

As described in [[running-spark-on-single-machine-or-on-cluster|Running Spark on Single Machine or on Cluster]], the environment variable ''MASTER'' specifies which Spark master to use (or whether to start a local one).

==== Usage Examples ====
Consider the following simple word-count script ''word_count.py'':
<file python>
#!/usr/bin/python

import sys
if len(sys.argv) < 3:
    sys.stderr.write("Usage: %s input output\n" % sys.argv[0])
    sys.exit(1)
input = sys.argv[1]
output = sys.argv[2]

from pyspark import SparkContext

sc = SparkContext()
(sc.textFile(input, 3*sc.defaultParallelism)   # read input with increased parallelism
   .flatMap(lambda line: line.split())         # split lines into words
   .map(lambda token: (token, 1))              # emit (word, 1) pairs
   .reduceByKey(lambda x, y: x + y)            # sum the counts of each word
   .sortBy(lambda word_count: word_count[1], ascending=False)
   .saveAsTextFile(output))
sc.stop()
</file>

  * run the ''word_count.py'' script inside an existing Spark cluster (i.e., inside ''spark-qsub'' or ''spark-qrsh''), or start a local Spark cluster using as many threads as there are cores if there is none:
  <file>spark-submit word_count.py /net/projects/spark-example-data/wiki-cs outdir</file>
  * run the ''word_count.py'' script with a local Spark cluster using one thread:
  <file>MASTER=local spark-submit word_count.py /net/projects/spark-example-data/wiki-cs outdir</file>
  * start a Spark cluster (10 machines, 1GB RAM each) on SGE and run the ''word_count.py'' script:
  <file>spark-qsub 10 1G spark-submit word_count.py /net/projects/spark-example-data/wiki-cs outdir</file>
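Note that ''saveAsTextFile'' creates ''outdir'' as a directory with one ''part-*'' file per partition; assuming the output lands on a regular (network) filesystem as in the examples above, it can be inspected for example with:
<file>cat outdir/part-* | head</file>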

===== Using Virtual Environments =====

If you want to use a specific virtual environment in your Spark job, use:
<file>PYSPARK_PYTHON=path_to_python_in_venv [pyspark|spark-submit]</file>
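A minimal sketch of the whole workflow, assuming ''virtualenv'' is available (''my_venv'' and ''numpy'' are placeholder names; for cluster jobs the environment must reside on a filesystem visible to all machines):
<file>virtualenv my_venv                      # create the environment
my_venv/bin/pip install numpy           # install whatever the job needs
PYSPARK_PYTHON=$PWD/my_venv/bin/python spark-submit word_count.py /net/projects/spark-example-data/wiki-cs outdir</file>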
