spark:spark-introduction [2014/11/11 09:11] straka
spark:spark-introduction [2022/12/14 12:34] straka [Word Count Example]
====== Spark Introduction ======
This introduction shows several simple examples to give you an idea what programming in Spark is like. See the official [[http://
===== Running Spark Shell in Python =====
To run an interactive Python shell in local Spark mode, run (on your local workstation or on a cluster using ''
  PYSPARK_DRIVER_PYTHON=ipython3 pyspark
The ''PYSPARK_DRIVER_PYTHON=ipython3'' setting instructs Spark to use ''ipython3'' as the interactive Python shell.
After a local Spark executor is started, the Python shell starts. Above the prompt line, the address of the Spark UI is listed.
The Spark UI is an HTML interface which displays the state of the application -- whether a job is running, its progress, and statistics about stages, storage, and executors.
==== Running Spark Shell in Scala ====
We start with a simple word count example. We load the RDD from a text file, with every line of the input file becoming an element of the RDD. We then split every line into words, count the occurrences of every word, and sort the words by occurrence count. Copy the following to the opened Python shell:
<file python>
wiki = sc.textFile("/
words = wiki.flatMap(lambda line: line.split())
counts = words.map(lambda word: (word, 1)).reduceByKey(lambda c1, c2: c1+c2)
sorted = counts.sortBy(lambda word_count: word_count[1], ascending=False)
sorted.saveAsTextFile("output")
# Alternatively, without variables:
(sc.textFile("/
   .flatMap(lambda line: line.split())
   .map(lambda word: (word, 1))
   .reduceByKey(lambda c1, c2: c1+c2)
   .sortBy(lambda word_count: word_count[1], ascending=False)
   .saveAsTextFile("output"))
</file>
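The per-record logic of the pipeline above can be sanity-checked in plain Python without a Spark cluster. A minimal sketch, using hypothetical sample lines rather than the dataset from the example:

```python
from collections import Counter

# Hypothetical input lines standing in for the text file from the example.
lines = ["to be or not to be", "to be is to do"]

# flatMap: split every line into words.
words = [word for line in lines for word in line.split()]

# map + reduceByKey: count the occurrences of every word.
counts = Counter(words)

# sortBy with ascending=False: most frequent words first.
sorted_counts = sorted(counts.items(), key=lambda wc: wc[1], reverse=True)

print(sorted_counts[0])  # -> ('to', 4)
```

Each Spark transformation maps onto an equivalent plain-Python step, which makes it easy to verify the lambdas before running them on a full dataset.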
val counts = words.map(word => (word, 1)).reduceByKey((c1, c2) => c1 + c2)
val sorted = counts.sortBy({case (word, count) => count}, ascending=false)
sorted.saveAsTextFile("output")
// Alternatively without variables and using placeholders in lambda parameters:
print("Final centers: " + str(centers))
</file>
The implementation starts by loading the data points and caching them in memory using ''cache''.
Note that explicit broadcasting used for ''
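The K-means code itself is truncated in this revision view, but the role of the broadcast centers on each worker is a nearest-center lookup for every data point. A minimal pure-Python sketch of that step (the centers and the sample points are hypothetical, not from the example):

```python
import math

# Hypothetical cluster centers; in the Spark example these would be
# broadcast to every worker so each task can read them locally.
centers = [(0.0, 0.0), (10.0, 10.0)]

def closest_center(point, centers):
    """Return the index of the center nearest to point (Euclidean distance)."""
    return min(range(len(centers)), key=lambda i: math.dist(point, centers[i]))

# Every data point is assigned to its nearest center:
print(closest_center((1.0, 2.0), centers))  # -> 0
print(closest_center((9.0, 9.5), centers))  # -> 1
```

In the Spark version this function would run inside a ''map'' over the points RDD, reading the centers from the broadcast variable instead of a plain list.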