==== Number of Partitions: Compressed File ====
  
If the input file is compressed with **''gzip''** or **''zip''**, it is always read sequentially as 1 partition, because these formats cannot be split efficiently. In other words, even a very large file must be read sequentially, so you want to avoid large compressed files in these formats.
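
If you do need more parallelism for such a file, one option is to ''repartition'' the RDD after reading it (a minimal sketch, with an illustrative file name):
<file python>
lines = sc.textFile("input.txt.gz").repartition(3*sc.defaultParallelism)
</file>
Note that the file itself is still read sequentially; ''repartition'' only redistributes the already-read lines, at the cost of a full shuffle.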
  
On the other hand, files compressed with **''bzip2''** can be **split effectively** (technically, bzip2 compresses blocks of at most 900 kB independently), so it allows parallel processing of very large files.
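
For example, reading a large ''bzip2''-compressed file yields multiple partitions automatically (a minimal sketch, with an illustrative file name):
<file python>
lines = sc.textFile("large_file.txt.bz2")
print(lines.getNumPartitions())  # more than 1 for a sufficiently large file
</file>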
  
==== Number of Partitions: Multiple Files in a Directory ====
When the input file is a directory, each file is read into separate partitions. The minimum number of partitions given as the second argument to ''textFile'' is applied only to the first file (if it is not compressed). Other uncompressed files are split only into 32MB chunks, or into 1 partition if compressed.
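
For example, the following sketch (with an illustrative directory name) passes a minimum number of partitions as the second argument of ''textFile'':
<file python>
lines = sc.textFile("input_directory", 3*sc.defaultParallelism)
print(lines.getNumPartitions())  # the minimum applies only to the first uncompressed file
</file>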
  
Note that when there are many files (thousands or more, as for example in ''/net/projects/spark-example-data/hamledt-cs-conll''), the number of partitions can be quite large, which slows down the computation. In that case, ''coalesce'' can be used to decrease the number of partitions efficiently, by merging existing partitions without the full shuffle that ''repartition'' performs:
<file python>
conll_lines = sc.textFile("/net/projects/spark-example-data/hamledt-cs-conll").coalesce(3*sc.defaultParallelism)
</file>