
Institute of Formal and Applied Linguistics Wiki



Differences

This shows you the differences between two versions of the page.

spark:spark-introduction [2014/11/11 08:57] straka
spark:spark-introduction [2022/12/14 12:27] straka [Spark Introduction]
Line 1:
 ====== Spark Introduction ======
  
-This introduction shows several simple examples to give you an idea of what programming in Spark is like. See the official [[http://spark.apache.org/docs/latest/quick-start.html|Quick Start]] or [[http://spark.apache.org/docs/latest/programming-guide.html|Spark Programming Guide]] or [[http://spark.apache.org/docs/latest/api/python/index.html|Python API Reference]]/[[http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.package|Scala API Reference]] for more information.
+This introduction shows several simple examples to give you an idea of what programming in Spark is like. See the official [[http://spark.apache.org/docs/latest/quick-start.html|Quick Start]] or [[http://spark.apache.org/docs/latest/programming-guide.html|Spark Programming Guide]] or [[http://spark.apache.org/docs/latest/api/python/index.html|Python API Reference]]/[[https://spark.apache.org/docs/latest/api/scala/org/apache/spark/index.html|Scala API Reference]] for more information.
  
 ===== Running Spark Shell in Python =====
  
-To run an interactive Python shell in local Spark mode, run (on your local workstation or on the cluster)
+To run an interactive Python shell in local Spark mode, run (on your local workstation or on the cluster using ''qrsh'' from ''lrc1'')
   IPYTHON=1 pyspark
-The ''IPYTHON=1'' environment variable instructs Spark to use ''ipython'' instead of ''python'' (''ipython'' is an enhanced interactive shell compared to plain ''python''). If you do not want ''ipython'' or you do not have it installed (it is installed everywhere on the cluster, but maybe not on your local workstation -- ask our IT if you want it), use only ''pyspark''.
+The ''IPYTHON=1'' environment variable instructs Spark to use ''ipython'' instead of ''python'' (''ipython'' is an enhanced interactive shell compared to plain ''python''). If you do not want ''ipython'' or you do not have it installed (it is installed everywhere on the cluster, but maybe not on your local workstation -- ask our IT if you want it), use only ''pyspark'', but note that it has some issues with copy-pasting the examples from this wiki.
  
 After a local Spark executor is started, the Python shell starts. Several lines above
Line 25:
 The central object of the Spark framework is the RDD -- a resilient distributed dataset. It contains an ordered sequence of items, which may be distributed across several threads or several computers. Spark offers multiple operations which can be performed on an RDD, like ''map'', ''filter'', ''reduceByKey'', ''union'', ''join'', ''sortBy'', ''sample'' etc.
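For instance, a few of these operations applied to a small RDD created with ''sc.parallelize'' might look as follows (a minimal sketch, not tied to any particular dataset; all names are illustrative):
<file python>
nums = sc.parallelize(range(10))                # distribute a local sequence as an RDD
squares = nums.map(lambda x: x * x)             # transform every element
even = squares.filter(lambda x: x % 2 == 0)     # keep only the even squares
pairs = even.map(lambda x: (x % 3, x))          # build (key, value) pairs
sums = pairs.reduceByKey(lambda a, b: a + b)    # sum the values sharing the same key
print(sums.collect())                           # bring the result back to the driver
</file>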
  
-We start with a simple word count example. We load the RDD from a text file, with every line of the input file becoming an element of the RDD. We then split every line into words, count the occurrences of every word and sort the words by the number of occurrences. Try the following in the opened Python shell:
+We start with a simple word count example. We load the RDD from a text file, with every line of the input file becoming an element of the RDD. We then split every line into words, count the occurrences of every word and sort the words by the number of occurrences. Copy the following to the opened Python shell:
 <file python>
 wiki = sc.textFile("/net/projects/spark-example-data/wiki-cs")
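# An illustrative sketch of how the pipeline described above can continue
# (mirroring the Scala version below; the variable names are assumptions):
words = wiki.flatMap(lambda line: line.split())
counts = words.map(lambda word: (word, 1)).reduceByKey(lambda c1, c2: c1 + c2)
sorted_counts = counts.sortBy(lambda word_count: word_count[1], ascending=False)
sorted_counts.saveAsTextFile("output")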
Line 51:
 val counts = words.map(word => (word,1)).reduceByKey((c1,c2) => c1+c2)
 val sorted = counts.sortBy({case (word, count) => count}, ascending=false)
-sorted.saveAsTextFile('output')
+sorted.saveAsTextFile("output")
  
 // Alternatively without variables and using placeholders in lambda parameters:
Line 63:
 
 ===== K-Means Example =====
-An example implementing the [[http://en.wikipedia.org/wiki/K-means_clustering|standard iterative K-Means algorithm]] follows:
+An example implementing the [[http://en.wikipedia.org/wiki/K-means_clustering|standard iterative K-Means algorithm]] follows. Try copying it to the open Python shell. Note that this wiki formats empty lines as lines with a single space, which confuses ''pyspark'' when used without ''IPYTHON=1'', so either use ''IPYTHON=1'' or copy the text paragraph by paragraph.
 <file python>
 import numpy as np
Line 71:
 
 lines = sc.textFile("/net/projects/spark-example-data/points", sc.defaultParallelism)
-data = lines.map(lambda line: np.array([float(x) for x in line.split()])).cache()
+data = lines.map(lambda line: np.array(map(float, line.split()))).cache()
  
 K = 50
Line 97:
 print "Final centers: " + str(centers)
 </file>
-The implementation starts by loading the data and caching them in memory using ''cache''. Then, the standard iterative algorithm is performed, running in parallel.
+The implementation starts by loading the data points and caching them in memory using ''cache''. Then, the standard iterative algorithm is performed, running in parallel and synchronizing where necessary.
  
 Note that the explicit broadcasting used for the ''centers'' object is not strictly needed -- if we used ''old_centers = centers'', the example would work too, but it would send a copy of ''old_centers'' to //every distributed task//, instead of once to every machine.
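As a minimal sketch of that difference (assuming ''data'' and ''centers'' as in the example above; the lambda itself is only illustrative), the two variants of computing the nearest center for each point would be:
<file python>
# Broadcast variant: `centers` is shipped once to every machine and read via `.value`.
old_centers = sc.broadcast(centers)
nearest = data.map(lambda point: np.argmin([((point - c) ** 2).sum() for c in old_centers.value]))

# Plain closure variant: also works, but a copy of `centers` travels with every task.
# nearest = data.map(lambda point: np.argmin([((point - c) ** 2).sum() for c in centers]))
</file>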
