
Institute of Formal and Applied Linguistics Wiki


spark:spark-introduction [2014/11/03 17:24] straka
To run an interactive Python shell in local Spark mode, run (on your local workstation or on the cluster):
  IPYTHON=1 pyspark
The ''IPYTHON=1'' parameter instructs Spark to use ''ipython'' instead of ''python'' (''ipython'' is an enhanced interactive Python shell). If you do not want ''ipython'' or do not have it installed (it is installed everywhere on the cluster, but maybe not on your local workstation -- ask our IT if you want it), leave out the ''IPYTHON=1''.
  
After a local Spark executor is started, the Python shell starts. Several lines above the prompt line, the SparkUI address is listed in the following format:
  14/10/03 10:54:35 INFO SparkUI: Started SparkUI at http://tauri4.ufal.hide.ms.mff.cuni.cz:4040
The SparkUI is an HTML interface which displays the state of the application -- whether a distributed computation is taking place, how many workers take part in it, how many tasks remain to be processed, and any error logs; cached datasets and their properties (cached on disk or in memory, their size) are also displayed.
  
==== Running Spark Shell in Scala ====
  
Here we load the RDD from a text file, every line of the input file becoming an element of the RDD. We then split every line into words, count the occurrences of every word and sort the words by their number of occurrences. Try the following in the opened Python shell:
<file python>
wiki = sc.textFile("/net/projects/hadoop/wikidata/cs-text/cswiki.txt")
words = wiki.flatMap(lambda line: line.split())
counts = words.map(lambda word: (word, 1)).reduceByKey(lambda c1,c2: c1+c2)
sorted = counts.sortBy(lambda (word,count): count, ascending=False)
sorted.saveAsTextFile('output')

# Alternatively, we can avoid variables:
(sc.textFile("/net/projects/hadoop/wikidata/cs-text/cswiki.txt")
   .flatMap(lambda line: line.split())
   .map(lambda word: (word, 1))
   .reduceByKey(lambda c1,c2: c1+c2)
   .sortBy(lambda (word,count): count, ascending=False)
   .take(10)) # Instead of saveAsTextFile, we only print the 10 most frequent words
</file>
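To see what each Spark step computes, the same word-count pipeline can be sketched on a tiny in-memory sample in plain Python (no Spark needed); the sample lines below are made up:

```python
from collections import Counter
from operator import itemgetter

# A made-up sample standing in for lines of cswiki.txt.
lines = ["to be or not to be", "to be is to do"]

# flatMap(lambda line: line.split()) -- one element per word
words = [word for line in lines for word in line.split()]

# map(word -> (word, 1)) followed by reduceByKey(+) -- count every word
counts = Counter(words)

# sortBy count in descending order and take the most frequent words
top = sorted(counts.items(), key=itemgetter(1), reverse=True)[:2]
print(top)  # [('to', 4), ('be', 3)]
```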
The output of ''saveAsTextFile'' is the directory ''output'' -- because the RDD can be distributed over several computers, the output is a directory possibly containing multiple files.
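As a hypothetical illustration of such an output directory (the files below are created by hand, though the ''part-NNNNN'' naming follows Spark's actual convention), reading the complete result amounts to concatenating all part files:

```python
import os
import tempfile

# Build a stand-in output directory with two hand-written part files,
# imitating an RDD saved from two partitions.
outdir = tempfile.mkdtemp()
with open(os.path.join(outdir, "part-00000"), "w") as f:
    f.write("('to', 4)\n")
with open(os.path.join(outdir, "part-00001"), "w") as f:
    f.write("('be', 3)\n")

# The full result is the concatenation of all part files in the directory.
result = []
for name in sorted(os.listdir(outdir)):
    if name.startswith("part-"):
        with open(os.path.join(outdir, name)) as f:
            result.extend(f.read().splitlines())
print(result)  # ["('to', 4)", "('be', 3)"]
```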
  
The Scala version is quite similar:
<file scala>
val wiki = sc.textFile("/net/projects/hadoop/wikidata/cs-text/cswiki.txt")
val words = wiki.flatMap(line => line.split("\\s"))
val counts = words.map(word => (word,1)).reduceByKey((c1,c2) => c1+c2)
val sorted = counts.sortBy({case (word, count) => count}, ascending=false)
sorted.saveAsTextFile("output")

// Alternatively without variables and using placeholders in lambda parameters:
(sc.textFile("/net/projects/hadoop/wikidata/cs-text/cswiki.txt")
   .flatMap(_.split("\\s"))
   .map((_,1)).reduceByKey(_+_)
   .sortBy(_._2, ascending=false)
   .take(10))
</file>
  
