===== Usage Examples =====

Consider the following simple script computing the 10 most frequent words of the Czech Wikipedia:
<file python>
(sc.textFile("/net/projects/spark-example-data/wiki-cs", 3*sc.defaultParallelism)
   .flatMap(lambda line: line.split())
   .map(lambda word: (word, 1))
   .reduceByKey(lambda c1, c2: c1 + c2)
   .sortBy(lambda word_count: word_count[1], ascending=False)
   .take(10))
</file>

The script can be pasted into an interactive shell, which can be started in the following ways:
  * run an interactive shell with a local Spark cluster using one thread (see below for using more threads):
  <file>MASTER=local PYSPARK_DRIVER_PYTHON=ipython3 pyspark</file>
  * start a Spark cluster (10 machines, 2GB RAM each) on Slurm and run an interactive shell:
  <file>PYSPARK_DRIVER_PYTHON=ipython3 spark-srun 10 2G pyspark</file>
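
If you want the local cluster to use more threads, Spark's standard ''local[N]'' master URL can be used instead of plain ''local''; for example, with 4 threads (the exact count is only an illustration):
<file>MASTER=local[4] PYSPARK_DRIVER_PYTHON=ipython3 pyspark</file>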
  
Note that the ''PYSPARK_DRIVER_PYTHON'' variable can be left out or specified in ''.bashrc'' (or other configuration files).
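For example, a single line in ''.bashrc'' suffices (assuming ''ipython3'' is installed):
<file>export PYSPARK_DRIVER_PYTHON=ipython3</file>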
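The same computation can also be written as a standalone script, here called ''word_count.py'', which saves its results to an output directory instead of printing them. A minimal sketch, assuming the input path and the output directory are passed as command-line arguments (the actual script may differ):
<file python>
#!/usr/bin/env python3
import sys

from pyspark import SparkContext

input_path, output_path = sys.argv[1], sys.argv[2]

sc = SparkContext()
(sc.textFile(input_path, 3 * sc.defaultParallelism)
   # The same word-count pipeline as in the interactive example above.
   .flatMap(lambda line: line.split())
   .map(lambda word: (word, 1))
   .reduceByKey(lambda c1, c2: c1 + c2)
   .sortBy(lambda word_count: word_count[1], ascending=False)
   # Save the sorted (word, count) pairs into the output directory.
   .saveAsTextFile(output_path))
sc.stop()
</file>
The script can then be run in the following ways: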
  
  * run the ''word_count.py'' script inside an existing Spark cluster (i.e., inside ''spark-sbatch'' or ''spark-srun''); if there is none, a local Spark cluster using as many threads as there are cores is started:
  <file>spark-submit word_count.py /net/projects/spark-example-data/wiki-cs outdir</file>
  * run the ''word_count.py'' script with a local Spark cluster using one thread:
  <file>MASTER=local spark-submit word_count.py /net/projects/spark-example-data/wiki-cs outdir</file>
  * start a Spark cluster (10 machines, 2GB RAM each) using Slurm and run the ''word_count.py'' script:
  <file>spark-sbatch 10 2G spark-submit word_count.py /net/projects/spark-example-data/wiki-cs outdir</file>
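
In all three cases, the results end up in ''outdir''. Assuming ''word_count.py'' saves them with ''saveAsTextFile'' (as in the sketch above), the directory contains one text file per partition, which can be inspected with e.g.:
<file>cat outdir/part-*</file>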
  
===== Using Virtual Environments =====
