Institute of Formal and Applied Linguistics Wiki



spark:spark-introduction [2014/11/03 18:23] straka
The central object of the Spark framework is the RDD -- resilient distributed dataset. It contains an ordered sequence of items, which may be distributed across several threads or several computers. Spark offers multiple operations which can be performed on an RDD, like ''map'', ''filter'', ''reduceByKey'', ''union'', ''join'', ''sortBy'', ''sample'' etc.
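To illustrate what the basic operations compute (this is plain Python mimicking the semantics, not Spark API code):

<file python>
# Plain-Python illustration of RDD operation semantics; not the Spark API.
data = [1, 2, 3, 4, 5]

# map: apply a function to every element
squared = [x * x for x in data]          # [1, 4, 9, 16, 25]

# filter: keep only elements satisfying a predicate
even = [x for x in data if x % 2 == 0]   # [2, 4]

# reduceByKey: merge the values of every key with a binary function
pairs = [("a", 1), ("b", 2), ("a", 3)]
reduced = {}
for key, value in pairs:
    reduced[key] = reduced[key] + value if key in reduced else value
# reduced == {"a": 4, "b": 2}
</file>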
  
We start with a simple word count example. We load the RDD from a text file, every line of the input file becoming an element of the RDD. We then split every line into words, count the occurrences of every word and sort the words by their occurrence counts. Try the following in the opened Python shell:
<file python>
wiki = sc.textFile("/net/projects/hadoop/wikidata/cs-text/cswiki.txt")
</file>
The output of ''saveAsTextFile'' is the directory ''output'' -- because the RDD can be distributed over several computers, the output is a directory containing possibly multiple files.
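The word-count steps described above (split every line into words, count occurrences, sort by count) can be sketched in plain Python; this only shows what the Spark pipeline computes on a toy input, not the distributed computation itself:

<file python>
# Plain-Python sketch of the word-count logic; illustrative only,
# the real computation runs distributed in Spark.
from collections import Counter

lines = ["to be or not to be", "to be"]   # stand-in for the input file

# split every line into words (the flatMap step)
words = [word for line in lines for word in line.split()]

# count occurrences (reduceByKey), then sort by count, descending (sortBy)
counts = sorted(Counter(words).items(), key=lambda kv: -kv[1])
# counts[0] == ("to", 3)
</file>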
Note that the ''map'' and ''reduceByKey'' operations exist, allowing any Hadoop MapReduce computation to be implemented. On the other hand, several operations like ''join'', ''sortBy'' or ''cogroup'' are available which are not (at least not directly) available in Hadoop, making the Spark computational model a strict superset of the Hadoop computational model.
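As an example of one such operation, ''join'' pairs up values that share a key across two keyed datasets; a plain-Python sketch of its semantics (again, not the Spark API):

<file python>
# Plain-Python illustration of the semantics of join on (key, value)
# pairs; not Spark API code.
left = [("a", 1), ("b", 2)]
right = [("a", "x"), ("a", "y"), ("c", "z")]

joined = [(k, (lv, rv)) for k, lv in left for rk, rv in right if k == rk]
# joined == [("a", (1, "x")), ("a", (1, "y"))]
# -- only keys present on both sides appear in the result
</file>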
  
The Scala version is quite similar:
</file>
  
===== K-Means Example =====

To show an example of an iterative algorithm, consider the [[http://en.wikipedia.org/wiki/K-means_clustering|standard iterative K-Means algorithm]].
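The algorithm itself can first be sketched on a single machine in plain Python (a minimal version of Lloyd's iterations on a toy dataset; all names here are illustrative, the distributed Spark version follows the same two steps per iteration):

<file python>
# Minimal single-machine K-Means (Lloyd's algorithm); illustrative only.
def kmeans(points, centers, iterations=10):
    for _ in range(iterations):
        # Assignment step: attach every point to its nearest center.
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)),
                          key=lambda i: sum((a - b) ** 2
                                            for a, b in zip(p, centers[i])))
            clusters[nearest].append(p)
        # Update step: move every center to the mean of its cluster.
        centers = [tuple(sum(c) / len(cluster) for c in zip(*cluster))
                   if cluster else center
                   for cluster, center in zip(clusters, centers)]
    return centers

points = [(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)]
centers = kmeans(points, centers=[(0.0, 0.0), (10.0, 10.0)])
# centers converge to the two cluster means: (0.0, 0.5) and (10.0, 10.5)
</file>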
