spark:recipes:reading-text-files, revised 2014/11/04 10:40 and 2016/03/31 22:02 (current) by straka
conll_lines = sc.textFile("/...")
</file>
===== Reading Text Files by Paragraphs =====
Although there is no method of ''sc'' which reads text files by paragraphs (blocks of lines separated by an empty line), one can be implemented easily using ''newAPIHadoopFile'' with a custom record delimiter.

Python version:
<file python>
def paragraphFile(sc, path):
    return sc.newAPIHadoopFile(path, "org.apache.hadoop.mapreduce.lib.input.TextInputFormat",
        "org.apache.hadoop.io.LongWritable", "org.apache.hadoop.io.Text",
        conf={"textinputformat.record.delimiter": "\n\n"}).map(lambda num_line: num_line[1])
</file>
Scala version:
<file scala>
def paragraphFile(sc: org.apache.spark.SparkContext, path: String): org.apache.spark.rdd.RDD[String] = {
  val conf = new org.apache.hadoop.conf.Configuration()
  conf.set("textinputformat.record.delimiter", "\n\n")
  return sc.newAPIHadoopFile(path, classOf[org.apache.hadoop.mapreduce.lib.input.TextInputFormat],
    classOf[org.apache.hadoop.io.LongWritable], classOf[org.apache.hadoop.io.Text], conf).map(_._2.toString)
}
</file>
Compressed files are decompressed automatically, but note that a gzip-compressed file cannot be split and is therefore read into a single partition. To control the number of partitions, ''repartition'' can be used.

For example, to read compressed HamleDT Czech CoNLL files, so that every sentence is one element of the resulting ''RDD'':
<file python>
conlls = paragraphFile(sc, "/...")
</file>
===== Reading Whole Text Files =====
To read a whole text file, or all text files in a given directory, as single ''RDD'' elements, ''sc.wholeTextFiles'' can be used. It returns an ''RDD'' of (file name, file content) pairs.
<file python>
whole_wiki = sc.wholeTextFiles("/...")
</file>
By default, every file is read into a separate partition. To control the number of partitions, the ''minPartitions'' argument of ''wholeTextFiles'' can be used.
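The shape of the value returned by ''wholeTextFiles'' can be illustrated with a small local analogue. The helper ''whole_text_files_local'' below is a hypothetical stand-in that produces the same (file name, file content) pairs from an ordinary local directory, without Spark:

```python
import os
import tempfile

def whole_text_files_local(directory):
    # Local analogue of sc.wholeTextFiles: one (path, content) pair per file.
    pairs = []
    for name in sorted(os.listdir(directory)):
        path = os.path.join(directory, name)
        with open(path, encoding="utf-8") as f:
            pairs.append((path, f.read()))
    return pairs

# Create a throwaway directory with three small files and read them back.
with tempfile.TemporaryDirectory() as d:
    for i in range(3):
        with open(os.path.join(d, "doc%d.txt" % i), "w", encoding="utf-8") as f:
            f.write("document %d\n" % i)
    pairs = whole_text_files_local(d)

print(len(pairs))  # one pair per file
```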