====== Spark Introduction ======

This introduction shows several simple examples to give you an idea what programming in Spark looks like.

===== Running Spark Shell =====

==== Running Spark Shell in Python ====

To run an interactive Python shell in local Spark mode, run (on your local workstation or on the cluster):
  IPYTHON=1 pyspark
The ''IPYTHON=1'' environment variable instructs Spark to use ''ipython'' instead of ''python'' as the interactive shell.

After a local Spark executor is started, the Python shell starts. Several lines above the prompt line, the SparkUI address is listed in the following format:
  14/10/03 10:54:35 INFO SparkUI: Started SparkUI at http://...
The SparkUI is an HTML interface which displays the state of the application: whether a distributed computation is taking place, how many workers take part in it, how many tasks are left to be processed, any error logs, and also the cached datasets and their properties (cached on disk or in memory, and their size).
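
To quickly check that the shell works, you can try a trivial computation -- a minimal sketch, relying only on the fact that the shell predefines ''sc'', the ''SparkContext'' object:
<file python>
# distribute the numbers 0..99 into an RDD and sum them
sc.parallelize(range(100)).sum()   # 4950
</file>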

==== Running Spark Shell in Scala ====

To run an interactive Scala shell in local Spark mode, run (on your local workstation or on the cluster):
  spark-shell
Once again, the SparkUI address is listed several lines above the shell prompt line.

===== Word Count Example =====

The central object of the Spark framework is the RDD -- resilient distributed dataset. It contains an ordered sequence of items, which may be distributed across several threads or several machines. Spark offers multiple operations which can be performed on an RDD, like ''map'', ''filter'', ''reduceByKey'', ''union'', ''join'', ''sortBy'' or ''sample''.
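
To get a feeling for these operations, here is a minimal sketch on a made-up input; ''sc.parallelize'' distributes a local Python collection as an RDD:
<file python>
nums = sc.parallelize(range(10))            # distribute a local collection as an RDD
evens = nums.filter(lambda x: x % 2 == 0)   # keep only the even numbers
squares = evens.map(lambda x: x * x)        # square every element
print squares.collect()                     # [0, 4, 16, 36, 64]
</file>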

We start with a simple word count example: we load an RDD from a text file (every line of the input file becomes an element of the RDD), split every line into words, count the occurrences of every word, and sort the words by their number of occurrences. Try the following in the opened Python shell:
<file python>
wiki = sc.textFile("/...")                 # input text file; one RDD element per line
words = wiki.flatMap(lambda line: line.split())
counts = words.map(lambda word: (word, 1)).reduceByKey(lambda c1, c2: c1 + c2)
sorted = counts.sortBy(lambda (word, count): count, ascending=False)
sorted.saveAsTextFile('...')               # directory where the result is written

# Alternatively, the same computation without the intermediate variables:
(sc.textFile("/...")
   .flatMap(lambda line: line.split())
   .map(lambda word: (word, 1))
   .reduceByKey(lambda c1, c2: c1 + c2)
   .sortBy(lambda (word, count): count, ascending=False)
   .saveAsTextFile('...'))
</file>
The output of ''saveAsTextFile'' is not a single file but a directory; every partition of the (possibly distributed) RDD is written to a separate file in it, named ''part-00000'', ''part-00001'', etc.
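
The whole output directory can later be loaded back as an RDD, because ''sc.textFile'' also accepts a directory (or a glob pattern) and reads all the files inside it:
<file python>
reloaded = sc.textFile('...')   # the directory written by saveAsTextFile above
</file>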

Note that ''textFile'', ''flatMap'', ''map'', ''reduceByKey'' and ''sortBy'' are lazy transformations -- they only describe the computation, which is actually executed when an action like ''saveAsTextFile'' is called.
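
If you only want to inspect the most frequent words instead of saving them all, you can end the pipeline with a different action -- for example ''take'', which returns the first elements to the shell. A small sketch reusing the ''sorted'' RDD from above:
<file python>
print sorted.take(10)   # the ten most frequent words as (word, count) pairs
</file>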

The Scala version is quite similar:
<file scala>
val wiki = sc.textFile("/...")             // input text file; one RDD element per line
val words = wiki.flatMap(line => line.split("\\s+"))
val counts = words.map(word => (word, 1)).reduceByKey((c1, c2) => c1 + c2)
val sorted = counts.sortBy({case (word, count) => count}, ascending=false)
sorted.saveAsTextFile("...")               // directory where the result is written

// Alternatively without variables and using placeholders in lambda parameters:
(sc.textFile("/...")
   .flatMap(_.split("\\s+"))
   .map((_, 1))
   .reduceByKey(_ + _)
   .sortBy(_._2, ascending=false)
   .saveAsTextFile("..."))
</file>

===== K-Means Example =====
To show an example of an iterative algorithm, consider the [[http://en.wikipedia.org/wiki/K-means_clustering|K-means clustering]] algorithm.
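
As an illustration of what such an iterative computation can look like, here is a minimal PySpark sketch; the input format (one point per line, whitespace-separated coordinates), the number of clusters ''K'', the fixed number of iterations and the ''closest_center'' helper are all assumptions of this sketch:
<file python>
import numpy as np

def closest_center(point, centers):
    # hypothetical helper: index of the center nearest to the point
    return min(range(len(centers)), key=lambda i: np.sum((point - centers[i]) ** 2))

# parse the points and cache them, as every iteration reads them again
points = sc.textFile("/...").map(lambda line: np.array(map(float, line.split()))).cache()

K = 10
centers = points.takeSample(False, K)   # initial centers: K random input points

for iteration in range(10):             # fixed number of iterations for simplicity
    # assign every point to its nearest center, then average every cluster
    assigned = points.map(lambda p: (closest_center(p, centers), (p, 1)))
    sums = assigned.reduceByKey(lambda (p1, c1), (p2, c2): (p1 + p2, c1 + c2))
    centers = [s / c for (index, (s, c)) in sums.sortByKey().collect()]
</file>
The ''cache()'' call is what makes the iterative algorithm efficient -- without it, the input file would be re-read and re-parsed in every iteration.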