
Spark Introduction

This introduction shows several simple examples to give you an idea of what programming in Spark is like. See the official Quick Start, the Spark Programming Guide, or the Python/Scala API Reference for more information.

Running Spark Shell in Python

To run an interactive Python shell in local Spark mode, run (on your local workstation or on the cluster):

IPYTHON=1 pyspark

The IPYTHON=1 parameter instructs Spark to use ipython instead of python (ipython is an enhanced interactive Python shell). If you do not want ipython or do not have it installed (it is installed everywhere on the cluster, but maybe not on your local workstation – ask it@ufal.mff.cuni.cz if you want it), leave out the IPYTHON=1.
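
That is, to use the plain Python interpreter, run just:

pyspark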

After a local Spark executor is started, the Python shell starts. The address of the SparkUI is printed in the startup log:

14/10/03 10:54:35 INFO SparkUI: Started SparkUI at http://tauri4.ufal.hide.ms.mff.cuni.cz:4040
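
To verify that the shell works, you can try a trivial computation (a minimal sketch; sc is the SparkContext which pyspark creates automatically):

rdd = sc.parallelize(range(10))   # distribute a small list of numbers as an RDD
rdd.sum()                         # should print 45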

Running Spark Shell in Scala

To run an interactive Scala shell in local Spark mode, run (on your local workstation or on the cluster):

spark-shell

Once again, the SparkUI address is listed several lines above the shell prompt line.

Word Count Example

The central object of the Spark framework is the RDD – a resilient distributed dataset. It contains an ordered sequence of items, which may be distributed across several threads or several computers. Spark offers many operations which can be performed on an RDD, like map, filter, reduceByKey, union, join, sortBy, sample, etc.
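
As a small illustration of these operations, you can try the following sketch in the shell (the data here is just a hypothetical in-memory list distributed with sc.parallelize):

nums = sc.parallelize([1, 2, 3, 4, 5])
nums.map(lambda x: x * x).collect()           # [1, 4, 9, 16, 25]
nums.filter(lambda x: x % 2 == 1).collect()   # [1, 3, 5]
nums.union(sc.parallelize([6, 7])).collect()  # [1, 2, 3, 4, 5, 6, 7]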

Here we load an RDD from a text file, with every line of the input file becoming an element of the RDD. We then split every line into words, count the occurrences of every word, and sort the words by their number of occurrences. Try the following in the opened Python shell:

w = sc.textFile('/net/projects/hadoop/wikidata/cs-text/cswiki.txt')
words = w.flatMap(lambda line: line.split())
counts = words.map(lambda word: (word, 1)).reduceByKey(lambda c1, c2: c1 + c2)
# Sort by the count; we index into the (word, count) pair, because
# tuple-unpacking lambdas work only in Python 2. The variable is named
# sorted_counts so as not to shadow the built-in sorted.
sorted_counts = counts.sortBy(lambda word_count: word_count[1])
sorted_counts.saveAsTextFile('output')

# Alternatively, we can avoid variables and use the following
sc.textFile('/net/projects/hadoop/wikidata/cs-text/cswiki.txt') \
  .flatMap(lambda line: line.split()) \
  .map(lambda word: (word, 1)) \
  .reduceByKey(lambda c1, c2: c1 + c2) \
  .sortBy(lambda word_count: word_count[1], ascending=False) \
  .take(100)  # Instead of saveAsTextFile, we only print the 100 most frequent words

The output of saveAsTextFile is the directory output – because the RDD can be distributed across several computers, the output is a directory possibly containing multiple files (one part file per partition).
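
Incidentally, such a directory can be loaded back into an RDD, because textFile also accepts a directory and reads all the files in it (a minimal sketch, assuming the output directory produced above):

reloaded = sc.textFile('output')   # reads all part files in the directory
reloaded.count()                   # the number of distinct words saved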

