</file>
  
  * run an interactive shell on an existing Spark cluster (i.e., inside ''spark-srun''), or start a local Spark cluster using as many threads as there are cores if none exists:
  <file>spark-shell</file>
  * run an interactive shell with a local Spark cluster using one thread:
  <file>MASTER=local spark-shell</file>
  * start a Spark cluster (10 machines, 2GB RAM each) via Slurm and run an interactive shell:
  <file>spark-srun 10 2G spark-shell</file>
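
In any of these shells, the ''SparkContext'' is already available as ''sc''. As a minimal sanity check (a sketch with arbitrary values, needing no input data):
<file scala>
// Distributes the numbers 1..1000 and sums them in parallel; prints 500500.0.
val numbers = sc.parallelize(1 to 1000)
println(numbers.sum())
</file>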

===== Running Scala Spark Applications =====

A compiled Scala Spark program (JAR) can be started using:
<file>spark-submit</file>

As described in [[running-spark-on-single-machine-or-on-cluster|Running Spark on Single Machine or on Cluster]], the environment variable ''MASTER'' specifies which Spark master to use (or whether to start a local one).
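
For illustration, assuming standard Spark master URLs are passed through, a hypothetical JAR could be run on a local master with four threads as follows:
<file>MASTER=local[4] spark-submit my-app.jar</file>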

==== Compilation of Scala Spark Programs ====

If you do not know how to compile Scala programs, you can use the following directions:
  - create a directory for your project
  - copy ''/net/projects/spark/sbt/spark-template.sbt'' to your project directory and rename it to your project name (e.g., ''my-best-project.sbt'')
  - replace ''spark-template'' with your project name in the first line (e.g., ''name := "my-best-project"'')
  - run ''sbt package'' to create the JAR (note that the first run of ''sbt'' will take several minutes)
The resulting JAR can be found in the ''target/scala-2.12'' subdirectory, named after your project; an example layout follows.
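
For illustration, after these steps a project directory could look as follows (the file names are hypothetical; by default ''sbt'' also compiles sources placed directly in the project root):
<file>
my-best-project/
  my-best-project.sbt     # the renamed project file from step 2
  word_count.scala        # Scala sources in the project root
  target/scala-2.12/      # 'sbt package' places the resulting JAR here
</file>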

==== Usage Examples ====
Consider the following simple word-count application ''word_count.scala'':
<file scala>
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._

object Main {
  def main(args: Array[String]) {
    if (args.length < 2) sys.error("Usage: input output")
    val (input, output) = (args(0), args(1))

    // The Spark master is supplied externally (the MASTER variable or spark-submit).
    val sc = new SparkContext()
    sc.textFile(input, 3*sc.defaultParallelism)  // read input, ~3 partitions per core
      .flatMap(_.split("\\s"))                   // split lines into words
      .map((_,1)).reduceByKey(_+_)               // count occurrences of each word
      .sortBy(_._2, ascending=false)             // sort from most to least frequent
      .saveAsTextFile(output)
    sc.stop()
  }
}
</file>

The ''sbt'' project file ''word_count.sbt'':
<file>
name := "word_count"

version := "1.0"

scalaVersion := "2.12.20"

libraryDependencies += "org.apache.spark" %% "spark-core" % "3.5.3"
</file>

The ''%%'' operator appends the Scala binary version to the artifact name (here ''spark-core_2.12''); the same binary version also determines the ''target/scala-2.12'' directory and the JAR name used below.

  * compile the application
  <file>sbt package</file>

  * run the ''word_count'' application on an existing Spark cluster (i.e., inside ''spark-sbatch'' or ''spark-srun''), or start a local Spark cluster using as many threads as there are cores if none exists:
  <file>spark-submit target/scala-2.12/word_count_2.12-1.0.jar /net/projects/spark-example-data/wiki-cs outdir</file>
  * run the ''word_count'' application with a local Spark cluster using one thread:
  <file>MASTER=local spark-submit target/scala-2.12/word_count_2.12-1.0.jar /net/projects/spark-example-data/wiki-cs outdir</file>
  * start a Spark cluster (10 machines, 2GB RAM each) via Slurm and run the ''word_count'' application:
  <file>spark-sbatch 10 2G spark-submit target/scala-2.12/word_count_2.12-1.0.jar /net/projects/spark-example-data/wiki-cs outdir</file>
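
In all three cases, ''saveAsTextFile'' writes the result into ''outdir'' as several ''part-*'' files (one per partition) together with a ''_SUCCESS'' marker. Since the output is sorted from most to least frequent, the top words can be inspected with, for example:
<file>head outdir/part-00000</file>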
  
