
Institute of Formal and Applied Linguistics Wiki



Differences

This shows you the differences between two versions of the page.

spark:spark-introduction [2022/12/14 12:29]
straka [Running Spark Shell in Python]
spark:spark-introduction [2022/12/14 12:34]
straka [Word Count Example]
Line 27:
We start with a simple word count example. We load an RDD from a text file, with every line of the input file becoming an element of the RDD. We then split every line into words, count every word occurrence, and sort the words by the number of occurrences. Copy the following to the opened Python shell:
<file python>
wiki = sc.textFile("/lnet/troja/data/npfl118/wiki/cs/wiki.txt")
words = wiki.flatMap(lambda line: line.split())
counts = words.map(lambda word: (word, 1)).reduceByKey(lambda c1, c2: c1+c2)
sorted = counts.sortBy(lambda word_count: word_count[1], ascending=False)
sorted.saveAsTextFile("output")
  
# Alternatively, we can avoid variables:
(sc.textFile("/lnet/troja/data/npfl118/wiki/cs/wiki.txt")
   .flatMap(lambda line: line.split())
   .map(lambda word: (word, 1))
   .reduceByKey(lambda c1, c2: c1+c2)
   .sortBy(lambda word_count: word_count[1], ascending=False)
   .take(10)) # Instead of saveAsTextFile, we only print the 10 most frequent words
</file>
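To see what the flatMap → map → reduceByKey → sortBy pipeline computes without a Spark cluster, the same word count can be sketched in plain Python (a minimal local emulation with made-up input lines; the real job runs distributed and reads the wiki text file above):

```python
# Hypothetical local input standing in for the lines of the wiki file.
lines = ["a b a", "b a"]

# flatMap: split every line into words, flattening into one list.
words = [word for line in lines for word in line.split()]

# map + reduceByKey: emit (word, 1) pairs and sum the counts per word.
counts = {}
for word in words:
    counts[word] = counts.get(word, 0) + 1

# sortBy(lambda word_count: word_count[1], ascending=False):
# order (word, count) pairs by count, most frequent first.
sorted_counts = sorted(counts.items(), key=lambda word_count: word_count[1], reverse=True)
print(sorted_counts)  # [('a', 3), ('b', 2)]
```

Unlike this local sketch, Spark performs `reduceByKey` in parallel across partitions, combining partial counts per key before the final merge.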
