
Institute of Formal and Applied Linguistics Wiki



Differences

This shows you the differences between two versions of the page.

spark:spark-introduction [2014/11/11 08:55] straka
spark:spark-introduction [2014/11/11 09:06] straka
Line 5:
 ===== Running Spark Shell in Python =====
  
-To run interactive Python shell in local Spark mode, run (on your local workstation or on cluster)
+To run an interactive Python shell in local Spark mode, run (on your local workstation, or on the cluster using ''qrsh'' from ''lrc1'')
   IPYTHON=1 pyspark
-The IPYTHON=1 parameter instructs Spark to use ''ipython'' instead of ''python'' (the ''ipython'' is an enhanced interactive shell than Python). If you do not want ''ipython'' or you do not have it installed (it is installed everywhere on the cluster, but maybe not on your local workstations -- ask our IT if you want it), use only ''pyspark''.
+The IPYTHON=1 environment variable instructs Spark to use ''ipython'' instead of ''python'' (''ipython'' is an enhanced interactive Python shell). If you do not want ''ipython'' or do not have it installed (it is installed everywhere on the cluster, but maybe not on your local workstation -- ask our IT if you want it), use plain ''pyspark'', but note that it has some issues when copy-pasting examples from this wiki.
  
 After a local Spark executor is started, the Python shell starts. Several lines above
Line 63:
  
 ===== K-Means Example =====
-To show an example of iterative algorithm, consider [[http://en.wikipedia.org/wiki/K-means_clustering|Standard iterative K-Means algorithm]].
+An example implementing the [[http://en.wikipedia.org/wiki/K-means_clustering|standard iterative K-Means algorithm]] follows:
 <file python>
 import numpy as np
Line 71:
  
 lines = sc.textFile("/net/projects/spark-example-data/points", sc.defaultParallelism)
-data = lines.map(lambda line: np.array([float(x) for x in line.split()])).cache()
+data = lines.map(lambda line: map(float, line.split())).cache()
  
 K = 50
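 # Editor's sketch, not part of either wiki revision: the assign/update step
 # that the Spark code above distributes over the cluster can be illustrated
 # in plain numpy. All names and values below are illustrative assumptions;
 # only the ''np'' import comes from the code above.
 sample_points = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
 sample_centers = np.array([[0.0, 0.0], [5.0, 5.0]])
 # Assignment step: index of the nearest center for every point.
 assignment = np.array([np.argmin(((p - sample_centers) ** 2).sum(axis=1))
                        for p in sample_points])
 # Update step: each new center is the mean of the points assigned to it.
 sample_centers = np.array([sample_points[assignment == k].mean(axis=0)
                            for k in range(len(sample_centers))])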
