Institute of Formal and Applied Linguistics Wiki


spark:using-python [2022/12/14 13:23] straka
Consider the following simple script computing the 10 most frequent words of the Czech Wikipedia:
<file python>
(sc.textFile("/net/projects/spark-example-data/wiki-cs", 3*sc.defaultParallelism)
   .flatMap(lambda line: line.split())
   .map(lambda word: (word, 1))
   .reduceByKey(lambda c1, c2: c1 + c2)
   .sortBy(lambda word_count: word_count[1], ascending=False)
   .take(10))
</file>
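The stages of the script above can be traced in plain Python (no Spark needed) to see what each step produces; this is an illustrative sketch using a tiny hard-coded input, not part of the wiki's example:

```python
from collections import Counter
from itertools import chain

lines = ["to be or not to be", "be quick"]

# flatMap: split each line into words and flatten into one sequence
words = list(chain.from_iterable(line.split() for line in lines))

# map + reduceByKey: pair each word with 1, then sum the counts per word
counts = Counter(words)

# sortBy descending count + take: the most frequent words first
top = counts.most_common(2)
print(top)  # [('be', 3), ('to', 2)]
```

In Spark the same pipeline runs distributed over the input's partitions, which is why ''reduceByKey'' (a per-key aggregation) is used instead of a single shared counter.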
  
  * run the ''word_count.py'' script inside an existing Spark cluster (i.e., inside ''spark-sbatch'' or ''spark-srun''), or, if there is none, start a local Spark cluster using as many threads as there are cores:
  <file>spark-submit word_count.py /net/projects/spark-example-data/wiki-cs outdir</file>
  * run the ''word_count.py'' script with a local Spark cluster using one thread:
  <file>MASTER=local spark-submit word_count.py /net/projects/spark-example-data/wiki-cs outdir</file>
  * start a Spark cluster (10 machines, 2GB RAM each) using Slurm and run the ''word_count.py'' script:
  <file>spark-sbatch 10 2G spark-submit word_count.py /net/projects/spark-example-data/wiki-cs outdir</file>
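The commands above assume a ''word_count.py'' that takes an input path and an output directory. A minimal sketch of what such a script might look like follows — the argument names, the lazy ''pyspark'' import, and the use of ''saveAsTextFile'' are illustrative assumptions, not the wiki's actual script:

```python
import argparse

def parse_args(argv=None):
    # Hypothetical interface: an input path and an output directory,
    # matching the spark-submit invocations shown above.
    parser = argparse.ArgumentParser()
    parser.add_argument("input", help="input file or directory")
    parser.add_argument("output", help="output directory for the word counts")
    return parser.parse_args(argv)

def main():
    args = parse_args()
    # Imported lazily: pyspark is available in the environment that
    # spark-submit sets up for the driver.
    from pyspark import SparkContext
    sc = SparkContext()
    (sc.textFile(args.input, 3 * sc.defaultParallelism)
       .flatMap(lambda line: line.split())
       .map(lambda word: (word, 1))
       .reduceByKey(lambda c1, c2: c1 + c2)
       .sortBy(lambda word_count: word_count[1], ascending=False)
       .saveAsTextFile(args.output))

# Under spark-submit, main() would be the script's entry point.
```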
  
===== Using Virtual Environments =====
