spark:spark-introduction [2014/10/03 14:43] straka
====== Spark Introduction ======
This introduction shows several simple examples to give you an idea of what programming in Spark is like. See the official [[http://

===== Running Spark Shell =====

==== Running Spark Shell in Python ====
To run an interactive Python shell in local Spark mode, run (on your local workstation or on the cluster):
  IPYTHON=1 pyspark
The IPYTHON=1 parameter instructs Spark to use the ''ipython'' shell instead of the plain ''python'' interpreter.
After a local Spark executor is started, the Python shell starts:
  14/10/03 10:54:35 INFO SparkUI: Started SparkUI at http://
The SparkUI address is listed in the log several lines above the shell prompt line.

==== Running Spark Shell in Scala ====

To run an interactive Scala shell in local Spark mode, run (on your local workstation or on the cluster):
  scala-shell
Once again, the SparkUI address is listed several lines above the shell prompt line.

===== Word Count Example =====
The central object of the Spark framework is the RDD (resilient distributed dataset). An RDD contains an ordered sequence of items, which may be distributed across several machines of the cluster.
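To picture the "distributed" part, think of the sequence being cut into partitions that can live on different machines. A toy plain-Python sketch of one possible contiguous partitioning scheme (no Spark involved; the data and partition count here are made up for illustration):

```python
# Toy illustration: an RDD's items are split into partitions
# that can be processed on different machines of the cluster.
items = list(range(10))
num_partitions = 3

# Contiguous split with ceiling division, so no items are lost.
size = -(-len(items) // num_partitions)
partitions = [items[i:i + size] for i in range(0, len(items), size)]
print(partitions)  # → [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

Spark chooses the actual partitioning itself (e.g. by input file blocks); this only illustrates the idea that every element belongs to exactly one partition.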
Here we load the RDD from a text file, with every line of the input file becoming an element of the RDD. We then split every line into words, count every word's occurrences and sort the words by the number of occurrences. Try the following in the opened Python shell:
  w = sc.textFile('/
  words = w.flatMap(lambda line: line.split())
  counts = words.map(lambda word: (word, 1)).reduceByKey(lambda c1,c2: c1+c2)
  sorted = counts.sortBy(lambda (word, count): count)
  sorted.saveAsTextFile('

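Spark evaluates these transformations lazily and in parallel, but what the pipeline computes can be checked in plain Python without Spark at all. The sample lines below are hypothetical stand-ins for the contents of the input file:

```python
from collections import defaultdict

# Hypothetical input standing in for the lines of the distributed text file.
lines = ["to be or not to be", "to see or not to see"]

# flatMap: split every line into words and flatten into one sequence.
words = [word for line in lines for word in line.split()]

# map + reduceByKey: pair every word with 1, then sum the 1s per word.
counts = defaultdict(int)
for word in words:
    counts[word] += 1

# sortBy: order the (word, count) pairs by the count, most frequent first.
by_count = sorted(counts.items(), key=lambda pair: pair[1], reverse=True)
print(by_count[0])  # → ('to', 4)
```

The difference in Spark is that ''words'' and ''counts'' never exist as local lists: each step produces a new RDD whose partitions are computed on the cluster.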
  # Alternatively, the whole computation can be written as a single chain of calls:
  sc.textFile('/
    .flatMap(lambda line: line.split()) \
    .map(lambda word: (word, 1)) \
    .reduceByKey(lambda c1,c2: c1+c2) \
    .sortBy(lambda (word, count): count) \
    .take(100) # Instead of saveAsTextFile, return the first 100 results directly
The output of ''saveAsTextFile'' is a directory containing one file for every partition of the RDD.
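''take'' is an action: it returns at most the given number of elements to the driver as an ordinary Python list, instead of writing files. Its contract can be sketched in plain Python (the ''take'' function below is a hypothetical stand-in, not Spark's implementation):

```python
def take(items, n):
    """Return at most the first n elements as a plain list,
    mirroring the contract of RDD.take(n)."""
    result = []
    for item in items:
        if len(result) == n:
            break
        result.append(item)
    return result

print(take(range(10), 3))   # → [0, 1, 2]
print(take(range(2), 100))  # fewer elements than requested: → [0, 1]
```

Because ''take'' stops early, Spark may avoid computing partitions it does not need, whereas ''saveAsTextFile'' always processes the whole RDD.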