public class CombinedAvroKeyInputFormat&lt;T&gt;
extends org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat&lt;org.apache.avro.mapred.AvroKey&lt;T&gt;,org.apache.hadoop.io.NullWritable&gt;

**Type Parameters:**
`T` - Type of data to be read
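Since this class is a `CombineFileInputFormat`, the usual reason to choose it is to pack many small Avro files into a few large splits instead of one split per file. Below is a minimal driver sketch under that assumption; the driver class name, job name, input-path argument, and 256 MB split cap are illustrative, and the import of `CombinedAvroKeyInputFormat` itself is omitted because its package is not shown on this page.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

// Illustrative driver; add "import <package>.CombinedAvroKeyInputFormat;"
// for wherever the class lives in your build.
public class CombinedAvroReadDriver {
  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "combined-avro-read");
    job.setJarByClass(CombinedAvroReadDriver.class);

    // Read many small Avro files through combined splits; each map task
    // receives AvroKey<T> keys and NullWritable values.
    job.setInputFormatClass(CombinedAvroKeyInputFormat.class);

    // Cap combined splits at ~256 MB. setMaxInputSplitSize is inherited from
    // FileInputFormat, and CombineFileInputFormat honors the same limit when
    // no explicit setMaxSplitSize value has been set.
    FileInputFormat.setMaxInputSplitSize(job, 256L << 20);
    FileInputFormat.addInputPath(job, new Path(args[0]));

    // ... configure mapper, reducer, and output formats as usual ...
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```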
**Nested Class Summary**

| Modifier and Type | Class and Description |
|---|---|
| `static class` | `CombinedAvroKeyInputFormat.CombinedAvroKeyRecordReader<T>` |
**Constructor Summary**

| Constructor and Description |
|---|
| `CombinedAvroKeyInputFormat()` |
**Method Summary**

| Modifier and Type | Method and Description |
|---|---|
| `org.apache.hadoop.mapreduce.RecordReader<org.apache.avro.mapred.AvroKey<T>,org.apache.hadoop.io.NullWritable>` | `createRecordReader(org.apache.hadoop.mapreduce.InputSplit inputSplit, org.apache.hadoop.mapreduce.TaskAttemptContext context)` |
**Methods inherited from class org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat**
createPool, createPool, getFileBlockLocations, getSplits, isSplitable, setMaxSplitSize, setMinSplitSizeNode, setMinSplitSizeRack

**Methods inherited from class org.apache.hadoop.mapreduce.lib.input.FileInputFormat**
addInputPath, addInputPathRecursively, addInputPaths, computeSplitSize, getBlockIndex, getFormatMinSplitSize, getInputDirRecursive, getInputPathFilter, getInputPaths, getMaxSplitSize, getMinSplitSize, listStatus, makeSplit, makeSplit, setInputDirRecursive, setInputPathFilter, setInputPaths, setInputPaths, setMaxInputSplitSize, setMinInputSplitSize

**Method Detail**

createRecordReader

public org.apache.hadoop.mapreduce.RecordReader&lt;org.apache.avro.mapred.AvroKey&lt;T&gt;,org.apache.hadoop.io.NullWritable&gt; createRecordReader(org.apache.hadoop.mapreduce.InputSplit inputSplit, org.apache.hadoop.mapreduce.TaskAttemptContext context) throws java.io.IOException

**Overrides:**
`createRecordReader` in class `org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat<org.apache.avro.mapred.AvroKey<T>,org.apache.hadoop.io.NullWritable>`

**Throws:**
`java.io.IOException`
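The body of `createRecordReader` is not reproduced on this page. For `CombineFileInputFormat` subclasses, the conventional implementation wraps the combined split in a `CombineFileRecordReader`, which constructs one per-file reader (here the nested `CombinedAvroKeyRecordReader`) for each file in the split. The following is a sketch of that standard pattern, not necessarily this class's exact code:

```java
import java.io.IOException;

import org.apache.avro.mapred.AvroKey;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader;
import org.apache.hadoop.mapreduce.lib.input.CombineFileSplit;

// Sketch of the usual CombineFileInputFormat idiom; the real
// CombinedAvroKeyInputFormat.createRecordReader may differ in detail.
// Import of CombinedAvroKeyInputFormat omitted: its package is not
// shown on this page.
public class CombinedAvroKeyInputFormatSketch<T>
    extends CombineFileInputFormat<AvroKey<T>, NullWritable> {

  @Override
  @SuppressWarnings({"unchecked", "rawtypes"})
  public RecordReader<AvroKey<T>, NullWritable> createRecordReader(
      InputSplit inputSplit, TaskAttemptContext context) throws IOException {
    // CombineFileRecordReader instantiates the per-file reader class once per
    // file in the split; that class must expose a
    // (CombineFileSplit, TaskAttemptContext, Integer) constructor.
    return new CombineFileRecordReader<AvroKey<T>, NullWritable>(
        (CombineFileSplit) inputSplit, context,
        (Class) CombinedAvroKeyInputFormat.CombinedAvroKeyRecordReader.class);
  }
}
```

In this pattern, `CombineFileRecordReader` handles advancing from one file to the next within the combined split, so each per-file reader only ever sees a single underlying Avro file.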