Interfaces

org.apache.flink.table.connector.source.SourceFunctionProvider
    This interface is based on the SourceFunction API, which is due to be removed. Use SourceProvider instead.

org.apache.flink.table.factories.StreamTableSinkFactory
    This interface has been replaced by DynamicTableSinkFactory. The new interface creates instances of DynamicTableSink. See FLIP-95 for more information.

org.apache.flink.table.factories.StreamTableSourceFactory
    This interface has been replaced by DynamicTableSourceFactory. The new interface creates instances of DynamicTableSource. See FLIP-95 for more information.

org.apache.flink.table.sinks.AppendStreamTableSink
    This interface has been replaced by DynamicTableSink. The new interface consumes internal data structures. See FLIP-95 for more information.

org.apache.flink.table.sinks.RetractStreamTableSink
    This interface has been replaced by DynamicTableSink. The new interface consumes internal data structures. See FLIP-95 for more information.

org.apache.flink.table.sinks.StreamTableSink
    This interface has been replaced by DynamicTableSink. The new interface consumes internal data structures. See FLIP-95 for more information.

org.apache.flink.table.sinks.UpsertStreamTableSink
    This interface has been replaced by DynamicTableSink. The new interface consumes internal data structures. See FLIP-95 for more information.

org.apache.flink.table.sources.StreamTableSource
    This interface has been replaced by DynamicTableSource. The new interface produces internal data structures. See FLIP-95 for more information.
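A migration away from StreamTableSourceFactory typically starts from a DynamicTableSourceFactory skeleton like the one below. This is a minimal sketch, not a shipped connector: the identifier "my-connector", the "hostname" option, and the MyDynamicTableSource class are illustrative placeholders; FactoryUtil, ConfigOptions, and the factory interface itself are real Flink API.

```java
import java.util.HashSet;
import java.util.Set;

import org.apache.flink.configuration.ConfigOption;
import org.apache.flink.configuration.ConfigOptions;
import org.apache.flink.table.connector.source.DynamicTableSource;
import org.apache.flink.table.factories.DynamicTableSourceFactory;
import org.apache.flink.table.factories.FactoryUtil;

// Sketch of a FLIP-95 factory; identifier and options are hypothetical.
public class MyTableSourceFactory implements DynamicTableSourceFactory {

    public static final ConfigOption<String> HOSTNAME =
            ConfigOptions.key("hostname").stringType().noDefaultValue();

    @Override
    public String factoryIdentifier() {
        // Matched against 'connector' = 'my-connector' in DDL.
        return "my-connector";
    }

    @Override
    public Set<ConfigOption<?>> requiredOptions() {
        Set<ConfigOption<?>> options = new HashSet<>();
        options.add(HOSTNAME);
        return options;
    }

    @Override
    public Set<ConfigOption<?>> optionalOptions() {
        return new HashSet<>();
    }

    @Override
    public DynamicTableSource createDynamicTableSource(Context context) {
        // Validate the WITH options declared above.
        FactoryUtil.TableFactoryHelper helper =
                FactoryUtil.createTableFactoryHelper(this, context);
        helper.validate();
        String hostname = helper.getOptions().get(HOSTNAME);
        // MyDynamicTableSource is a hypothetical ScanTableSource implementation
        // that produces the internal RowData structures mentioned above.
        return new MyDynamicTableSource(hostname);
    }
}
```

The factory is discovered via Java's service loader, so a real connector would also register the class in META-INF/services/org.apache.flink.table.factories.Factory.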
Classes

org.apache.flink.table.descriptors.OldCsvValidator
    Use the RFC-compliant Csv format in the dedicated flink-formats/flink-csv module instead.

org.apache.flink.table.descriptors.RowtimeValidator
    See Rowtime for details.

org.apache.flink.table.descriptors.SchemaValidator
    See Schema for details.

org.apache.flink.table.sinks.CsvAppendTableSinkFactory
    The legacy CSV connector has been replaced by FileSink. It is kept only to support tests for the legacy connector stack.

org.apache.flink.table.sinks.CsvBatchTableSinkFactory
    The legacy CSV connector has been replaced by FileSink. It is kept only to support tests for the legacy connector stack.

org.apache.flink.table.sinks.CsvTableSink
    The legacy CSV connector has been replaced by FileSink. It is kept only to support tests for the legacy connector stack.

org.apache.flink.table.sinks.CsvTableSinkFactoryBase
    The legacy CSV connector has been replaced by FileSink. It is kept only to support tests for the legacy connector stack.

org.apache.flink.table.sinks.OutputFormatTableSink
    This interface has been replaced by DynamicTableSink. The new interface consumes internal data structures. See FLIP-95 for more information.

org.apache.flink.table.sources.CsvAppendTableSourceFactory
    The legacy CSV connector has been replaced by FileSource. It is kept only to support tests for the legacy connector stack.

org.apache.flink.table.sources.CsvBatchTableSourceFactory
    The legacy CSV connector has been replaced by FileSource. It is kept only to support tests for the legacy connector stack.

org.apache.flink.table.sources.CsvTableSource
    The legacy CSV connector has been replaced by FileSource. It is kept only to support tests for the legacy connector stack.

org.apache.flink.table.sources.CsvTableSourceFactoryBase
    The legacy CSV connector has been replaced by FileSource. It is kept only to support tests for the legacy connector stack.

org.apache.flink.table.sources.InputFormatTableSource
    This interface has been replaced by DynamicTableSource. The new interface produces internal data structures. See FLIP-95 for more information.
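At the Table API level, the deprecated CsvTableSink maps most directly to a table backed by the filesystem connector with the CSV format (FileSink being the DataStream-level replacement). A sketch, assuming a TableEnvironment named tableEnv; the table name, schema, and path are illustrative:

```java
// Replacement for the legacy CsvTableSink: a filesystem table with CSV format.
tableEnv.executeSql(
        "CREATE TABLE csv_sink (" +
        "  word STRING," +
        "  cnt  BIGINT" +
        ") WITH (" +
        "  'connector' = 'filesystem'," +
        "  'path'      = 'file:///tmp/csv-output'," +  // illustrative path
        "  'format'    = 'csv'" +
        ")");
```

INSERT INTO statements against csv_sink then write RFC-compliant CSV via the flink-csv format module instead of the legacy connector.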
Methods

org.apache.flink.table.api.bridge.java.StreamTableEnvironment.createTemporaryView(String, DataStream<T>, Expression...)
    Use StreamTableEnvironment.createTemporaryView(String, DataStream, Schema) instead. In most cases, StreamTableEnvironment.createTemporaryView(String, DataStream) should already be sufficient. It integrates with the new type system and supports all kinds of DataTypes that the table runtime can consume. The semantics might be slightly different for raw and structured types.

org.apache.flink.table.api.bridge.java.StreamTableEnvironment.fromDataStream(DataStream<T>, Expression...)
    Use StreamTableEnvironment.fromDataStream(DataStream, Schema) instead. In most cases, StreamTableEnvironment.fromDataStream(DataStream) should already be sufficient. It integrates with the new type system and supports all kinds of DataTypes that the table runtime can consume. The semantics might be slightly different for raw and structured types.

org.apache.flink.table.api.bridge.java.StreamTableEnvironment.registerDataStream(String, DataStream<T>)

org.apache.flink.table.api.bridge.java.StreamTableEnvironment.registerFunction(String, TableFunction<T>)
    Use TableEnvironment.createTemporarySystemFunction(String, UserDefinedFunction) instead. Please note that the new method also uses the new type system and reflective extraction logic. It might be necessary to update the function implementation as well. See the documentation of TableFunction for more information on the new function design.

org.apache.flink.table.api.bridge.java.StreamTableEnvironment.toAppendStream(Table, Class<T>)
    Use StreamTableEnvironment.toDataStream(Table, Class) instead. It integrates with the new type system and supports all kinds of DataTypes that the table runtime can produce. The semantics might be slightly different for raw and structured types. Use toDataStream(DataTypes.of(TypeInformation.of(Class))) if TypeInformation should be used as the source of truth.

org.apache.flink.table.api.bridge.java.StreamTableEnvironment.toRetractStream(Table, Class<T>)
    Use StreamTableEnvironment.toChangelogStream(Table, Schema) instead. It integrates with the new type system and supports all kinds of DataTypes and every ChangelogMode that the table runtime can produce.

org.apache.flink.table.connector.sink.DataStreamSinkProvider.consumeDataStream(DataStream<RowData>)
    Use DataStreamSinkProvider.consumeDataStream(ProviderContext, DataStream) and correctly set a unique identifier for each data stream transformation.

org.apache.flink.table.connector.source.DataStreamScanProvider.produceDataStream(StreamExecutionEnvironment)

org.apache.flink.table.descriptors.SchemaValidator.deriveTableSinkSchema(DescriptorProperties)
    This method combines two separate concepts: table schema and field mapping. It should be split into two methods once we have support for the corresponding interfaces (see FLINK-9870).

org.apache.flink.table.factories.StreamTableSinkFactory.createStreamTableSink(Map<String, String>)
    TableSinkFactory.Context contains more information and already includes the table schema. Please use TableSinkFactory.createTableSink(Context) instead.

org.apache.flink.table.factories.StreamTableSourceFactory.createStreamTableSource(Map<String, String>)
    TableSourceFactory.Context contains more information and already includes the table schema. Please use TableSourceFactory.createTableSource(Context) instead.

org.apache.flink.table.sources.CsvTableSource.Builder.field(String, TypeInformation<?>)
    This method will be removed in future versions as it uses the old type system. It is recommended to use CsvTableSource.Builder.field(String, DataType) instead, which uses the new type system based on DataTypes. Please make sure to use either the old or the new type system consistently to avoid unintended behavior. See the website documentation for more information.
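The DataStream-bridging migrations above can be sketched as follows. This is a minimal illustration of the deprecated-to-new call mapping (the sample data is made up); it requires a Flink project on the classpath to compile:

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.types.Row;

public class BridgingMigration {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

        DataStream<String> words = env.fromElements("hello", "world");

        // Deprecated: tableEnv.fromDataStream(words, $("word"));
        // New type system; derives the schema from the stream's type:
        Table table = tableEnv.fromDataStream(words);

        // Deprecated: tableEnv.toAppendStream(table, Row.class);
        // New; produces Row by default:
        DataStream<Row> rows = tableEnv.toDataStream(table);

        rows.print();
        env.execute();
    }
}
```

For updating streams (aggregations, upserts), toChangelogStream replaces toRetractStream in the same way.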