Currently there are two different Java APIs: the old one in the org.apache.hadoop.mapred namespace and the new one in the org.apache.hadoop.mapreduce namespace. When browsing through the documentation, make sure to stay in the org.apache.hadoop.mapreduce namespace.
The Java API differs from the Perl API in one important aspect: the keys and values are typed.
The type of a value must implement the Writable interface, which provides methods for serializing and deserializing values.
The type of a key must implement the WritableComparable interface, which combines the Writable and Comparable interfaces.
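To make the contract concrete, here is a minimal sketch of a key type implementing WritableComparable. The class name YearKey and its single int field are illustrative assumptions, not part of the tutorial:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.WritableComparable;

// Hypothetical key wrapping a single int, shown only to illustrate the required methods.
public class YearKey implements WritableComparable<YearKey> {
    private int year;

    public YearKey() { }                      // Hadoop requires a no-argument constructor
    public YearKey(int year) { this.year = year; }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeInt(year);                   // serialization (Writable)
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        year = in.readInt();                  // deserialization (Writable)
    }

    @Override
    public int compareTo(YearKey other) {     // ordering of keys (Comparable)
        return Integer.compare(year, other.year);
    }

    @Override
    public int hashCode() {                   // used by the default partitioner
        return year;
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof YearKey && ((YearKey) o).year == year;
    }
}

A value type would only need the write and readFields methods; compareTo is what makes the type usable as a key.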
Here is a list of frequently used types:
Text – UTF-8 encoded string
BytesWritable – sequence of arbitrary bytes
IntWritable – 32-bit integer
LongWritable – 64-bit integer
FloatWritable – 32-bit floating point number
DoubleWritable – 64-bit floating point number
NullWritable – no value

For more complicated types like variable-length encoded integers, dictionaries, Bloom filters, etc., see Writable.
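As a small usage sketch (the class name WritableTypesDemo is just for illustration), these wrapper types are mutable boxes around plain Java values:

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;

public class WritableTypesDemo {
    public static void main(String[] args) {
        IntWritable count = new IntWritable(42);
        count.set(43);                       // the same object can be reused to avoid allocations
        int plain = count.get();             // unwrap back to a Java int

        Text word = new Text("příliš");      // stored as UTF-8 encoded bytes
        int bytes = word.getLength();        // length in bytes, not characters
        String s = word.toString();          // decode back to a Java String

        NullWritable nothing = NullWritable.get();  // singleton "no value" placeholder

        System.out.println(plain + " " + bytes + " " + s + " " + nothing);
    }
}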
The Perl API always uses strings as keys and values. From the Java point of view:
the keys and values produced by the Perl API are of type Text;
when a key or value of a type other than Text is passed to Perl, its toString method is used to convert the value to a string before the value is passed to Perl.

The input formats are the same as in the Perl API. Every input format also specifies which types it can provide.
An input format is a subclass of FileInputFormat<K,V>, where K is the type of keys and V is the type of values it can load.
Available input formats:
TextInputFormat: The type of keys is LongWritable and the type of values is Text.
KeyValueTextInputFormat: The type of both keys and values is Text.
SequenceFileInputFormat: Any type of keys and values can be used.

An output format is a subclass of FileOutputFormat<K,V>, where K is the type of keys and V is the type of values it can store.
Available output formats:
TextOutputFormat: The type of both keys and values is Text.
SequenceFileOutputFormat: Any type of keys and values can be used.
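A minimal job driver sketch showing how an input and an output format are selected, assuming the org.apache.hadoop.mapreduce.lib input and output classes; the class name FormatsSketch and the command-line argument handling are illustrative, and the mapper, reducer and combiner setup is left for the next step:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

public class FormatsSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "formats-sketch");
        job.setJarByClass(FormatsSketch.class);

        // Input: TextInputFormat provides LongWritable keys (byte offsets) and Text values.
        job.setInputFormatClass(TextInputFormat.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));

        // Output: SequenceFileOutputFormat can store any Writable key/value types;
        // the types to store are declared on the job.
        job.setOutputFormatClass(SequenceFileOutputFormat.class);
        job.setOutputKeyClass(LongWritable.class);
        job.setOutputValueClass(Text.class);
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

The declared output key and value classes must match what the input format provides or what the mapper and reducer emit, otherwise the job fails at runtime with a type mismatch.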