Class ParquetRowDataBuilder
- java.lang.Object
  - org.apache.parquet.hadoop.ParquetWriter.Builder<org.apache.flink.table.data.RowData,ParquetRowDataBuilder>
    - org.apache.flink.formats.parquet.row.ParquetRowDataBuilder
public class ParquetRowDataBuilder extends org.apache.parquet.hadoop.ParquetWriter.Builder<org.apache.flink.table.data.RowData,ParquetRowDataBuilder>
A ParquetWriter.Builder for writing Flink RowData records to Parquet files.
Nested Class Summary
static class ParquetRowDataBuilder.FlinkParquetBuilder
    Flink RowData ParquetBuilder.
Constructor Summary
ParquetRowDataBuilder(org.apache.parquet.io.OutputFile path, org.apache.flink.table.types.logical.RowType rowType, boolean utcTimestamp)
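A minimal usage sketch of the constructor, assuming a hypothetical output path and a two-field schema (neither is part of this API's documentation); the `withCompressionCodec` and `build` calls are inherited from `ParquetWriter.Builder`:

```java
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.types.logical.IntType;
import org.apache.flink.table.types.logical.LogicalType;
import org.apache.flink.table.types.logical.RowType;
import org.apache.flink.table.types.logical.VarCharType;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.hadoop.ParquetWriter;
import org.apache.parquet.hadoop.metadata.CompressionCodecName;
import org.apache.parquet.hadoop.util.HadoopOutputFile;
import org.apache.parquet.io.OutputFile;

public class WriterSketch {
    public static void main(String[] args) throws Exception {
        // Illustrative two-field row schema: (id INT, name VARCHAR).
        RowType rowType = RowType.of(
                new LogicalType[] {new IntType(), new VarCharType(VarCharType.MAX_LENGTH)},
                new String[] {"id", "name"});

        // Wrap a Hadoop path as the Parquet OutputFile the constructor expects.
        // The path here is an assumption for the example.
        OutputFile file = HadoopOutputFile.fromPath(
                new Path("file:///tmp/rows.parquet"), new Configuration());

        // utcTimestamp = true: convert timestamps using the UTC time zone.
        try (ParquetWriter<RowData> writer =
                new ParquetRowDataBuilder(file, rowType, true)
                        .withCompressionCodec(CompressionCodecName.SNAPPY)
                        .build()) {
            // writer.write(rowData) would append RowData records here.
        }
    }
}
```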
Method Summary
static ParquetWriterFactory<org.apache.flink.table.data.RowData> createWriterFactory(org.apache.flink.table.types.logical.RowType rowType, org.apache.hadoop.conf.Configuration conf, boolean utcTimestamp)
    Create a Parquet BulkWriter.Factory.
protected org.apache.parquet.hadoop.api.WriteSupport<org.apache.flink.table.data.RowData> getWriteSupport(org.apache.hadoop.conf.Configuration conf)
protected ParquetRowDataBuilder self()
Methods inherited from class org.apache.parquet.hadoop.ParquetWriter.Builder
build, config, enableDictionaryEncoding, enablePageWriteChecksum, enableValidation, getWriteSupport, withAdaptiveBloomFilterEnabled, withAllocator, withBloomFilterCandidateNumber, withBloomFilterEnabled, withBloomFilterEnabled, withBloomFilterFPP, withBloomFilterNDV, withByteStreamSplitEncoding, withCodecFactory, withColumnIndexTruncateLength, withCompressionCodec, withConf, withConf, withDictionaryEncoding, withDictionaryEncoding, withDictionaryPageSize, withEncryption, withExtraMetaData, withMaxBloomFilterBytes, withMaxPaddingSize, withMaxRowCountForPageSizeCheck, withMinRowCountForPageSizeCheck, withPageRowCountLimit, withPageSize, withPageWriteChecksumEnabled, withRowGroupSize, withRowGroupSize, withSizeStatisticsEnabled, withSizeStatisticsEnabled, withStatisticsEnabled, withStatisticsEnabled, withStatisticsTruncateLength, withValidation, withWriteMode, withWriterVersion
Method Detail
self
protected ParquetRowDataBuilder self()
- Specified by:
self in class org.apache.parquet.hadoop.ParquetWriter.Builder<org.apache.flink.table.data.RowData,ParquetRowDataBuilder>
getWriteSupport
protected org.apache.parquet.hadoop.api.WriteSupport<org.apache.flink.table.data.RowData> getWriteSupport(org.apache.hadoop.conf.Configuration conf)
- Specified by:
getWriteSupport in class org.apache.parquet.hadoop.ParquetWriter.Builder<org.apache.flink.table.data.RowData,ParquetRowDataBuilder>
createWriterFactory
public static ParquetWriterFactory<org.apache.flink.table.data.RowData> createWriterFactory(org.apache.flink.table.types.logical.RowType rowType, org.apache.hadoop.conf.Configuration conf, boolean utcTimestamp)
Create a Parquet BulkWriter.Factory.
- Parameters:
  rowType - row type of the Parquet table.
  conf - Hadoop configuration.
  utcTimestamp - whether to use the UTC time zone or the local time zone for the conversion between epoch time and LocalDateTime. Hive 0.x/1.x/2.x use the local time zone, but Hive 3.x uses UTC.
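A hedged sketch of wiring the factory into Flink's bulk-encoded `FileSink` (the sink path is an illustrative assumption, and `FileSink` comes from the flink-connector-files module, not from this class):

```java
import org.apache.flink.connector.file.sink.FileSink;
import org.apache.flink.core.fs.Path;
import org.apache.flink.formats.parquet.ParquetWriterFactory;
import org.apache.flink.formats.parquet.row.ParquetRowDataBuilder;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.types.logical.RowType;
import org.apache.hadoop.conf.Configuration;

public class FactorySketch {
    public static FileSink<RowData> buildSink(RowType rowType) {
        // utcTimestamp = true matches Hive 3.x semantics; use false for Hive 0.x-2.x.
        ParquetWriterFactory<RowData> factory =
                ParquetRowDataBuilder.createWriterFactory(rowType, new Configuration(), true);

        // The factory plugs into Flink's bulk-format file sink;
        // the output directory here is an assumption for the example.
        return FileSink.forBulkFormat(new Path("file:///tmp/parquet-out"), factory)
                .build();
    }
}
```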