====== Spark Introduction ======

This introduction shows several simple examples to give you an idea of what programming in Spark is like. See the official [[http://spark.apache.org/docs/latest/quick-start.html|Quick Start]], the [[http://spark.apache.org/docs/latest/programming-guide.html|Spark Programming Guide]], or the [[http://spark.apache.org/docs/latest/api/python/index.html|Python API Reference]] / [[http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.package|Scala API Reference]] for more information.

===== Running Spark Shell in Python =====

To run an interactive Python shell in local Spark mode, run (on your local workstation or on the cluster)
<file bash>
IPYTHON=1 pyspark
</file>
The IPYTHON=1 parameter instructs Spark to use ''ipython'' instead of ''python'' (''ipython'' is an enhanced interactive Python shell). If you do not want ''ipython'' or do not have it installed (it is installed everywhere on the cluster, but maybe not on your local workstation -- ask our IT if you want it), leave out the ''IPYTHON=1''.

After a local Spark executor is started, the Python shell starts; the SparkUI address is printed several lines above the shell prompt line:
  14/10/03 10:54:35 INFO SparkUI: Started SparkUI at http://tauri4.ufal.hide.ms.mff.cuni.cz:4040
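The shell already provides a ''SparkContext'' in the variable ''sc''. As a quick sanity check that the shell and the local executor work, you can try a trivial computation (just an illustrative sketch, not part of the examples below):
<file python>
# sc is the SparkContext created automatically by the pyspark shell.
nums = sc.parallelize(range(1, 101))   # distribute a small list of numbers
print(nums.count())                    # 100
print(nums.sum())                      # 5050
</file>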
  
===== Running Spark Shell in Scala =====

To run an interactive Scala shell in local Spark mode, run (on your local workstation or on the cluster)
<file bash>
spark-shell
</file>
Once again, the SparkUI address is listed several lines above the shell prompt line.


===== Word Count Example =====

The central object of the Spark framework is the RDD -- a resilient distributed dataset. It contains an ordered sequence of items, which may be distributed across several threads or several computers. Spark offers many operations which can be performed on an RDD, like ''map'', ''filter'', ''reduceByKey'', ''union'', ''join'', ''sortBy'', ''sample'', etc.
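To get a feeling for these operations, here is a tiny illustrative sketch using a small RDD created from a local list (the values are arbitrary):
<file python>
nums = sc.parallelize([3, 1, 4, 1, 5, 9, 2, 6])
evens = nums.filter(lambda x: x % 2 == 0)    # keep only even numbers
doubled = evens.map(lambda x: 2 * x)         # transform every element
both = evens.union(doubled)                  # concatenate two RDDs
print(both.sortBy(lambda x: x).collect())    # [2, 4, 4, 6, 8, 12]
</file>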

Here we load the RDD from a text file, with every line of the input file becoming an element of the RDD. We then split every line into words, count the occurrences of every word, and sort the words by their number of occurrences. Try the following in the opened Python shell:
<file python>
wiki = sc.textFile("/net/projects/hadoop/wikidata/cs-text/cswiki.txt")
words = wiki.flatMap(lambda line: line.split())
counts = words.map(lambda word: (word, 1)).reduceByKey(lambda c1,c2: c1+c2)
sorted = counts.sortBy(lambda (word,count): count, ascending=False)
sorted.saveAsTextFile('output')

# Alternatively, we can avoid variables:
(sc.textFile("/net/projects/hadoop/wikidata/cs-text/cswiki.txt")
   .flatMap(lambda line: line.split())
   .map(lambda word: (word, 1))
   .reduceByKey(lambda c1,c2: c1+c2)
   .sortBy(lambda (word,count): count, ascending=False)
   .take(10)) # Instead of saveAsTextFile, we only print the 10 most frequent words
</file>
The output of ''saveAsTextFile'' is the directory ''output'' -- because the RDD can be distributed over several computers, the output is a directory possibly containing multiple files.
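If you want to inspect the result from the shell, you can load the whole ''output'' directory back as an RDD (a small sketch; ''sc.textFile'' accepts a directory and reads all the part files Spark wrote into it):
<file python>
result = sc.textFile("output")   # reads all part-* files in the directory
print(result.take(5))            # first five lines, each the textual form of a (word, count) pair
</file>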

The Scala version is quite similar:
<file scala>
val wiki = sc.textFile("/net/projects/hadoop/wikidata/cs-text/cswiki.txt")
val words = wiki.flatMap(line => line.split("\\s"))
val counts = words.map(word => (word,1)).reduceByKey((c1,c2) => c1+c2)
val sorted = counts.sortBy({case (word, count) => count}, ascending=false)
sorted.saveAsTextFile("output")

// Alternatively, without variables and using placeholders in lambda parameters:
(sc.textFile("/net/projects/hadoop/wikidata/cs-text/cswiki.txt")
   .flatMap(_.split("\\s"))
   .map((_,1)).reduceByKey(_+_)
   .sortBy(_._2, ascending=false)
   .take(10))
</file>
  
