====== Spark Introduction ======

===== Running Spark Shell in Python =====
  
To run an interactive Python shell in local Spark mode, run the following (on your local workstation, or on the cluster using ''srun'' from ''lrc1''):
  MASTER=local PYSPARK_DRIVER_PYTHON=ipython3 pyspark
The ''MASTER=local'' setting makes Spark run in local mode, and ''PYSPARK_DRIVER_PYTHON=ipython3'' instructs Spark to use ''ipython3'' instead of ''python3''.
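
Once the shell starts, the SparkContext is available as ''sc''. As a quick sanity check (an illustrative snippet only; the reported master and version depend on your setup), you can try:
<file python>
# `sc` is the SparkContext that pyspark creates for us.
print(sc.master)    # e.g. "local" or "local[*]" when running in local mode
print(sc.version)   # the Spark version in use

# A tiny computation to verify that jobs run end-to-end.
print(sc.parallelize(range(10)).map(lambda x: x * x).sum())  # 285
</file>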
  
We start with a simple word count example. We load an RDD from a text file, with every line of the input file becoming an element of the RDD. We then split every line into words, count the occurrences of every word, and sort the words by their number of occurrences. Copy the following into the opened Python shell:
<file python>
wiki = sc.textFile("/net/projects/spark-example-data/wiki-cs")
words = wiki.flatMap(lambda line: line.split())
counts = words.map(lambda word: (word, 1)).reduceByKey(lambda c1, c2: c1+c2)
  
# Alternatively, we can avoid variables:
(sc.textFile("/net/projects/spark-example-data/wiki-cs")
   .flatMap(lambda line: line.split())
   .map(lambda word: (word, 1))
The Scala version is quite similar:
<file scala>
val wiki = sc.textFile("/net/projects/spark-example-data/wiki-cs")
val words = wiki.flatMap(line => line.split("\\s"))
val counts = words.map(word => (word, 1)).reduceByKey((c1, c2) => c1+c2)
  
// Alternatively without variables and using placeholders in lambda parameters:
(sc.textFile("/net/projects/spark-example-data/wiki-cs")
   .flatMap(_.split("\\s"))
   .map((_,1)).reduceByKey(_+_)
===== K-Means Example =====
    # Return the index of the center closest to the given point (squared Euclidean distance).
    return min((np.sum((point - centers[i]) ** 2), i) for i in range(len(centers)))[1]
  
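# Load the points: one point per line with whitespace-separated coordinates.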
lines = sc.textFile("/net/projects/spark-example-data/points", sc.defaultParallelism)
data = lines.map(lambda line: np.array(list(map(float, line.split())))).cache()
  
The Scala version of the K-Means example:
  // Index of the center closest to the given point (Euclidean distance).
  centers.map(center => (center-point).norm(2)).zipWithIndex.min._2
  
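// Load the points: one point per line with whitespace-separated coordinates.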
val lines = sc.textFile("/net/projects/spark-example-data/points", sc.defaultParallelism)
val data = lines.map(line => Vector(line.split("\\s+").map(_.toDouble))).cache()
  
