====== Spark Introduction ======
This introduction shows several simple examples to give you an idea what programming in Spark is like. See the official [[https://spark.apache.org/documentation.html|Spark documentation]] for more details.
===== Running Spark Shell in Python =====
To run an interactive Python shell in local Spark mode, run (on your local workstation or on the cluster):
  PYSPARK_DRIVER_PYTHON=ipython3 pyspark
The ''PYSPARK_DRIVER_PYTHON=ipython3'' parameter instructs Spark to use ''ipython3'' instead of the default Python interpreter for the interactive shell.
After a local Spark executor is started, the Python shell starts. Several lines above the prompt line, the Spark UI address is listed in the following format:
  Spark context Web UI available at http://...
The Spark UI is an HTML interface, which displays the state of the application -- whether any jobs are running, what stages they consist of, how much memory the cached RDDs occupy, and more.
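For instance, the following commands typed at the prompt run a small distributed job whose progress then shows up in the Spark UI (a minimal sketch; ''sc'' is the SparkContext object that the ''pyspark'' shell creates automatically):
<file python>
# distribute the numbers 0..999 into 4 partitions and sum their squares
rdd = sc.parallelize(range(1000), 4)
print(rdd.map(lambda x: x * x).sum())
</file>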
==== Running Spark Shell in Scala ====
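The Scala shell is started analogously (a sketch, assuming the standard ''spark-shell'' launcher from the Spark distribution):
<file>
spark-shell
</file>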
===== Word Count Example =====

The central object of the Spark framework is the RDD -- resilient distributed dataset. It contains an ordered sequence of items, which may be distributed in several threads or on several computers. Spark offers multiple operations which can be performed on an RDD, like ''map'', ''flatMap'', ''filter'', ''reduceByKey'' or ''sortBy''.
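As a small illustration of these operations (a standalone sketch, separate from the word count example below), an RDD can also be created directly from a local Python collection:
<file python>
nums = sc.parallelize([1, 2, 3, 4, 5])       # create an RDD from a local collection
evens = nums.filter(lambda x: x % 2 == 0)    # transformation: keep only the even numbers
print(evens.collect())                       # action: gather the results, prints [2, 4]
</file>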
We start with a simple word count example. We load the RDD from a text file, every line of the input file becoming an element of the RDD. We then split every line into words, count the occurrences of every word and sort the words by their number of occurrences.
<file python>
wiki = sc.textFile("/path/to/input")   # substitute the path to the input text file
words = wiki.flatMap(lambda line: line.split())
counts = words.map(lambda word: (word, 1)).reduceByKey(lambda c1, c2: c1 + c2)
sorted = counts.sortBy(lambda word_count: word_count[1], ascending=False)
sorted.saveAsTextFile("output")

# Alternatively, the same computation without the intermediate variables:
(sc.textFile("/path/to/input")
   .flatMap(lambda line: line.split())
   .map(lambda word: (word, 1))
   .reduceByKey(lambda c1, c2: c1 + c2)
   .sortBy(lambda word_count: word_count[1], ascending=False)
   .saveAsTextFile("output"))
</file>
The output of ''saveAsTextFile'' is a directory ''output'', which contains one ''part-*'' file for every partition of the saved RDD.
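The saved directory can be loaded back as an RDD; and if a single output file is preferred, the RDD can be coalesced into one partition before saving (a sketch; the directory name ''output-single'' is just an example):
<file python>
# every line of every part-* file becomes one element again
reloaded = sc.textFile("output")
print(reloaded.take(5))

# merge the RDD into a single partition so that only one part file is written
sorted.coalesce(1).saveAsTextFile("output-single")
</file>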
Note that ''map'', ''reduceByKey'' and the other transformations are lazy -- they only describe the computation, and the data is processed when an action such as ''saveAsTextFile'' requires the result.
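The lazy evaluation can be observed directly in the shell (a standalone sketch; ''/path/to/input'' again stands for the input file): the transformations return immediately, and only the final action starts a Spark job.
<file python>
lazy_words = sc.textFile("/path/to/input").flatMap(lambda line: line.split())  # nothing is read yet
print(lazy_words.count())   # the action 'count' triggers reading and processing the file
</file>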
The Scala version is quite similar:
<file scala>
val wiki = sc.textFile("/path/to/input")   // substitute the path to the input text file
val words = wiki.flatMap(line => line.split("\\s+"))
val counts = words.map(word => (word, 1)).reduceByKey((c1, c2) => c1 + c2)
val sorted = counts.sortBy({case (word, count) => count}, ascending=false)
sorted.saveAsTextFile("output")

// Alternatively without variables and using placeholders in lambda parameters:
(sc.textFile("/path/to/input")
   .flatMap(_.split("\\s+"))
   .map((_, 1)).reduceByKey(_ + _)
   .sortBy(_._2, ascending=false)
   .saveAsTextFile("output"))
===== K-Means Example =====
An example of a more involved computation is the K-Means clustering algorithm, which iteratively improves a set of cluster centers. The Python implementation follows:
<file python>
import numpy as np
# helper returning the index of the center closest to the given point
# (the helper name is chosen here for illustration)
def nearest_center(point, centers):
    return min((np.sum((point - centers[i]) ** 2), i) for i in range(len(centers)))[1]
lines = sc.textFile("/path/to/points")   # substitute the path to the input points
data = lines.map(lambda line: np.array([float(x) for x in line.split()])).cache()

K = 100
epsilon = 1e-3
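# The iteration below is a sketch of the usual Spark K-Means loop, written to
# match the variables above (data, K, epsilon, nearest_center): broadcast the
# current centers, assign every point to its nearest center, recompute the
# centers as per-cluster means, and stop once no center moves more than epsilon.
# The initialization and the stopping criterion are assumptions of this sketch.
centers = data.takeSample(False, K, 1)   # initial centers: a random sample of K points

while True:
    broadcast_centers = sc.broadcast(centers)

    # (cluster index, (sum of points, number of points)) for every cluster
    sums_counts = (data
                   .map(lambda point: (nearest_center(point, broadcast_centers.value), (point, 1)))
                   .reduceByKey(lambda a, b: (a[0] + b[0], a[1] + b[1]))
                   .collectAsMap())

    # new center = mean of the assigned points; keep the old center for empty clusters
    new_centers = [sums_counts[i][0] / sums_counts[i][1] if i in sums_counts else centers[i]
                   for i in range(K)]

    # stop when no center moved more than epsilon (squared Euclidean distance)
    shift = max(np.sum((centers[i] - new_centers[i]) ** 2) for i in range(K))
    centers = new_centers
    if shift <= epsilon:
        break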
print("Final centers: " + str(centers))
</file>
The implementation starts by loading the data points and caching them in memory using ''cache'', so that every iteration reads the already parsed points from memory instead of loading and parsing the input file again.
Note that the explicit broadcasting used for the ''centers'' object ships the current centers to every worker only once per iteration, instead of serializing them with the closure of every task.
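As a minimal standalone illustration of a broadcast variable (a sketch, unrelated to the K-Means data above):
<file python>
lookup = sc.broadcast({"a": 1, "b": 2})               # shipped to every worker once
keys = sc.parallelize(["a", "b", "a"])
print(keys.map(lambda key: lookup.value[key]).sum())  # workers read lookup.value, prints 4
</file>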
The Scala version is quite analogous:
<file scala>
// helper returning the index of the center closest to the given point
// (the helper name and the parameter types are chosen here for illustration)
def nearestCenter(point: Vector, centers: Array[Vector]) =
  centers.map(center => (center-point).norm(2)).zipWithIndex.min._2
val lines = sc.textFile("/path/to/points")   // substitute the path to the input points
val data = lines.map(line => Vector(line.split(" ").map(_.toDouble))).cache()

val K = 100
val epsilon = 1e-3
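// The iteration below is a sketch of the usual Spark K-Means loop, written to
// match the definitions above (data, K, epsilon, nearestCenter); it assumes the
// vector type supports elementwise +, - and division by a scalar, in line with
// the '-' and 'norm' used above. The details are assumptions of this sketch.
var centers = data.takeSample(false, K, 1)

var done = false
while (!done) {
  val broadcastCenters = sc.broadcast(centers)

  // (cluster index, (sum of points, number of points)) for every cluster
  val sumsCounts = data
    .map(point => (nearestCenter(point, broadcastCenters.value), (point, 1)))
    .reduceByKey((a, b) => (a._1 + b._1, a._2 + b._2))
    .collectAsMap()

  // new center = mean of the assigned points; keep the old center for empty clusters
  val newCenters = (0 until K).map(i =>
    sumsCounts.get(i).map({case (sum, count) => sum / count}).getOrElse(centers(i))).toArray

  // stop when no center moved more than epsilon
  done = (0 until K).forall(i => (newCenters(i) - centers(i)).norm(2) <= epsilon)
  centers = newCenters
}

println("Final centers: " + centers.mkString(", "))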
</file>