To run an interactive Python shell in local Spark mode, run (on your local workstation or on the cluster)
  IPYTHON=1 pyspark
The ''IPYTHON=1'' parameter instructs Spark to use ''ipython'' instead of ''python'' as the interactive shell.
After a local Spark executor is started, the Python shell starts. A few lines above the prompt line, the SparkUI address is listed in the following format:
  14/10/03 10:54:35 INFO SparkUI: Started SparkUI at http://...
The SparkUI is an HTML interface which displays the state of the application -- whether a distributed computation is taking place, how many workers are part of it and how many tasks are left to be processed; error logs, cached datasets and their properties (cached on disk / in memory, their size) are also displayed.
==== Running Spark Shell in Scala ====
The central object of the Spark framework is the RDD -- resilient distributed dataset. It contains an ordered sequence of items, which may be distributed across several threads or several computers. Spark offers multiple operations which can be performed on an RDD, like ''map'', ''filter'' or ''reduceByKey''.
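The semantics of these operations can be illustrated without Spark on a plain Python list (a local sketch only; the built-in ''map'', ''filter'' and ''functools.reduce'' mirror the RDD operations conceptually, not their distributed execution):

```python
from functools import reduce

# A local illustration of what the RDD operations compute.
items = [1, 2, 3, 4, 5]

squared = list(map(lambda x: x * x, items))        # like rdd.map(...)
evens = list(filter(lambda x: x % 2 == 0, items))  # like rdd.filter(...)
total = reduce(lambda a, b: a + b, items)          # like rdd.reduce(...)

print(squared)  # [1, 4, 9, 16, 25]
print(evens)    # [2, 4]
print(total)    # 15
```

Unlike these eager local calls, the RDD counterparts build a distributed computation that Spark schedules across workers.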
We start with a simple word count example. We load the RDD from a text file, every line of the input file becoming an element of the RDD. We then split every line into words, count every word's occurrences and sort the words by their number of occurrences. Try the following in the opened Python shell:
<file python>
wiki = sc.textFile("/...")  # input path truncated in the original
words = wiki.flatMap(lambda line: line.split())
counts = words.map(lambda word: (word, 1)).reduceByKey(lambda c1, c2: c1 + c2)
sorted = counts.sortBy(lambda wc: wc[1], ascending=False)
sorted.saveAsTextFile("output")
</file>
The output of the ''saveAsTextFile'' action is a directory (here ''output'') containing one text file per partition of the RDD.

Note that RDD operations are lazy -- transformations like ''map'' or ''reduceByKey'' only describe the computation, which is carried out when an action (here ''saveAsTextFile'') requires its result.
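For comparison, the same computation can be sketched without Spark using a plain Python dictionary (a local illustration only; the input lines are hard-coded instead of being loaded via ''sc.textFile''):

```python
# Local word count mirroring the Spark pipeline above.
lines = ["to be or not to be", "that is the question"]

counts = {}
for line in lines:
    for word in line.split():                   # flatMap step
        counts[word] = counts.get(word, 0) + 1  # reduceByKey step

# sortBy step: most frequent words first
by_count = sorted(counts.items(), key=lambda wc: wc[1], reverse=True)
print(by_count[:2])  # the two words occurring twice come first
```

In Spark the same steps run in parallel over the partitions of the input file, with ''reduceByKey'' shuffling partial counts between workers.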
The Scala version is quite similar:
<file scala>
val wiki = sc.textFile("/...")  // input path truncated in the original
val words = wiki.flatMap(line => line.split("\\s+"))
val counts = words.map(word => (word, 1)).reduceByKey((c1, c2) => c1 + c2)
val sorted = counts.sortBy(wc => wc._2, ascending=false)
sorted.saveAsTextFile("output")
</file>
===== K-Means Example =====
To show an example of an iterative algorithm, consider [[http://...|K-means clustering]]:
<file python>
import numpy as np

def closestPoint(point, centers):
    # Return the index of the center closest to the given point.
    return min((np.sum((point - centers[i]) ** 2), i) for i in range(len(centers)))[1]

lines = sc.textFile("/...")  # input path truncated in the original
data = lines.map(lambda line: np.array([float(x) for x in line.split()])).cache()

K = 50
epsilon = 1e-3

centers = data.takeSample(False, K)  # initial centers: K random points
for i in range(5):
    old_centers = sc.broadcast(centers)
    centers = (data
               # For each point, find its closest center index.
               .map(lambda point: (closestPoint(point, old_centers.value), (point, 1)))
               # Sum points and counts in each cluster.
               .reduceByKey(lambda (p1, c1), (p2, c2): (p1 + p2, c1 + c2))
               # Sort by cluster index.
               .sortByKey()
               # Compute the new centers by averaging points in clusters.
               .map(lambda (index, (point_sum, count)): point_sum / count)
               .collect())

    # If the change in center positions is less than epsilon, stop.
    centers_change = sum(np.sqrt(np.sum((a - b)**2)) for (a, b) in zip(centers, old_centers.value))
    old_centers.unpersist()
    if centers_change < epsilon:
        break

print "Final centers: " + str(centers)
</file>
The implementation starts by loading the data and caching them in memory using ''cache()'', so that repeated iterations do not re-read the input file. Initial centers are then sampled with ''takeSample''. In every iteration, the current centers are distributed to all workers using ''sc.broadcast'', every point is assigned to its closest center, new centers are computed by averaging the points in each cluster, and the loop stops when the total movement of the centers drops below ''epsilon''.
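The per-iteration update can be checked without Spark. The following sketch performs one K-means step locally on a small hand-made dataset (the points and centers here are illustrative, not the data from the example above):

```python
import numpy as np

def closestPoint(point, centers):
    # Index of the center with the smallest squared distance to the point.
    return min((np.sum((point - centers[i]) ** 2), i) for i in range(len(centers)))[1]

# Two obvious clusters, around (0, 0) and (10, 10).
data = [np.array(p) for p in
        [[0., 0.], [0., 1.], [1., 0.], [10., 10.], [10., 11.], [11., 10.]]]
centers = [np.array([0., 0.]), np.array([10., 10.])]

# One iteration of the pipeline, done locally:
sums = {}
for point in data:
    index = closestPoint(point, centers)              # map step
    s, c = sums.get(index, (np.zeros(2), 0))
    sums[index] = (s + point, c + 1)                  # reduceByKey step
new_centers = [s / c for index, (s, c) in sorted(sums.items())]  # sortByKey + average

print(new_centers)  # cluster means: roughly [1/3, 1/3] and [31/3, 31/3]
```

The Spark version performs exactly this update, but with the ''map'' and ''reduceByKey'' steps distributed across workers and the current centers shipped to them as a broadcast variable.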