  
Here we load the RDD from a text file, with every line of the input file becoming an element of the RDD. We then split every line into words, count the occurrences of every word, and sort the words by their number of occurrences. Try the following in the opened Python shell:
<file python>
# Load the input file; every line becomes one element of the RDD.
wiki = sc.textFile("/net/projects/hadoop/wikidata/cs-text/cswiki.txt")
words = wiki.flatMap(lambda line: line.split())
counts = words.map(lambda word: (word, 1)).reduceByKey(lambda c1, c2: c1 + c2)
sorted_counts = counts.sortBy(lambda pair: pair[1], ascending=False)
sorted_counts.saveAsTextFile('output')

# Alternatively, we can avoid variables:
(sc.textFile("/net/projects/hadoop/wikidata/cs-text/cswiki.txt")
   .flatMap(lambda line: line.split())
   .map(lambda word: (word, 1))
   .reduceByKey(lambda c1, c2: c1 + c2)
   .sortBy(lambda pair: pair[1], ascending=False)
   .take(10)) # Instead of saveAsTextFile, we only print the 10 most frequent words
</file>
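To check that the pipeline does what we expect, the same chain of transformations can be run on a tiny in-memory RDD created with ''parallelize''. This is a minimal sketch; the two sample sentences are made up for illustration:

<file python>
# Hypothetical toy input instead of the wiki dump.
toy = sc.parallelize(["to be or not to be", "to see or not to see"])
(toy.flatMap(lambda line: line.split())
    .map(lambda word: (word, 1))
    .reduceByKey(lambda c1, c2: c1 + c2)
    .sortBy(lambda pair: pair[1], ascending=False)
    .collect())  # [('to', 4), ('be', 2), ('or', 2), ('not', 2), ('see', 2)] -- ties may come in any order
</file>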
The output of ''saveAsTextFile'' is the directory ''output'' -- because the RDD can be distributed over several computers, the output is a directory that may contain multiple files.
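Since ''textFile'' also accepts a directory, the saved result can be loaded back into an RDD directly. A minimal sketch, assuming the job above has already written the ''output'' directory:

<file python>
# The directory typically contains part files such as part-00000, part-00001, ...
# textFile reads all of them and returns their lines as a single RDD.
result = sc.textFile("output")
result.take(3)  # first three lines; each line is the string form of one (word, count) pair
</file>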
  
The Scala version is quite similar:
<file scala>
val wiki = sc.textFile("/net/projects/hadoop/wikidata/cs-text/cswiki.txt")
val words = wiki.flatMap(line => line.split("\\s"))
val counts = words.map(word => (word, 1)).reduceByKey((c1, c2) => c1 + c2)
val sorted = counts.sortBy({case (word, count) => count}, ascending=false)
sorted.saveAsTextFile("output")

// Alternatively without variables and using placeholders in lambda parameters:
(sc.textFile("/net/projects/hadoop/wikidata/cs-text/cswiki.txt")
   .flatMap(_.split("\\s"))
   .map((_, 1)).reduceByKey(_+_)
   .sortBy(_._2, ascending=false)
   .take(10))
</file>
  
