public class TierSegmentReader
extends java.lang.Object
Constructor Summary

TierSegmentReader()
Method Summary

static org.apache.kafka.common.record.MemoryRecords loadRecords(CancellationContext cancellationContext, java.io.InputStream inputStream, int maxBytes, long maxOffset, long targetOffset)
    Loads records from a given InputStream up to maxBytes.

static java.util.Optional<java.lang.Long> offsetForTimestamp(CancellationContext cancellationContext, java.io.InputStream inputStream, long targetTimestamp)
    Reads the supplied input stream to find the first offset with a timestamp >= targetTimestamp.

static org.apache.kafka.common.record.RecordBatch readBatch(java.io.InputStream inputStream)
    Reads one full batch from an InputStream.

static org.apache.kafka.common.record.RecordBatch readBatchInto(java.io.InputStream inputStream, java.nio.ByteBuffer buffer)
    Similar to readBatch(), this method reads a full RecordBatch.
public static org.apache.kafka.common.record.MemoryRecords loadRecords(CancellationContext cancellationContext, java.io.InputStream inputStream, int maxBytes, long maxOffset, long targetOffset) throws java.io.IOException

Loads records from a given InputStream up to maxBytes. In the event that maxOffset is hit, this method can return an empty buffer.

Cancellation can be triggered using the CancellationContext, and its granularity is the individual record batch. That is, cancellation must wait for the current record batch to be parsed and loaded (or ignored) before taking effect.

Throws:
java.io.IOException
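A minimal, self-contained sketch of the cancellation-per-batch behavior described above. The `Cancellation` flag and the 4-byte length-prefixed batch framing are illustrative assumptions, not the real CancellationContext or the Kafka RecordBatch wire format:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;
import java.util.concurrent.atomic.AtomicBoolean;

public class LoadRecordsSketch {
    /** Illustrative stand-in for CancellationContext. */
    static final class Cancellation {
        final AtomicBoolean cancelled = new AtomicBoolean(false);
        boolean isCancelled() { return cancelled.get(); }
        void cancel() { cancelled.set(true); }
    }

    /**
     * Reads length-prefixed batches until maxBytes is reached, the stream
     * ends, or the context is cancelled. Cancellation is checked only
     * between batches, mirroring the batch-level granularity above.
     */
    static byte[] loadRecords(Cancellation ctx, InputStream in, int maxBytes) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        DataInputStream data = new DataInputStream(in);
        while (!ctx.isCancelled() && out.size() < maxBytes) {
            int batchLen;
            try {
                batchLen = data.readInt();       // 4-byte length header (assumed framing)
            } catch (EOFException e) {
                break;                           // clean end of stream
            }
            byte[] batch = new byte[batchLen];
            data.readFully(batch);               // finish the current batch before re-checking
            if (out.size() + batchLen > maxBytes) break;
            out.write(batch);
        }
        return out.toByteArray();                // may be empty, as with maxOffset above
    }
}
```

Note that a batch already read when the size cap is exceeded is discarded rather than partially copied, so the returned buffer always contains whole batches.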
public static java.util.Optional<java.lang.Long> offsetForTimestamp(CancellationContext cancellationContext, java.io.InputStream inputStream, long targetTimestamp) throws java.io.IOException

Reads the supplied input stream to find the first offset with a timestamp >= targetTimestamp.

Parameters:
cancellationContext - cancellation context to allow reads of the InputStream to be aborted
inputStream - InputStream for the tiered segment
targetTimestamp - target timestamp to look up the offset for

Throws:
java.io.IOException
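The lookup contract above can be sketched with plain stdlib I/O. The stream layout used here, fixed 16-byte records of (offset, timestamp) long pairs, is an illustrative assumption and not the tiered-segment format:

```java
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;
import java.util.Optional;

public class OffsetForTimestampSketch {
    /**
     * Scans a stream of (offset, timestamp) long pairs and returns the
     * first offset whose timestamp is >= targetTimestamp, or empty if
     * no such record exists before the stream ends.
     */
    static Optional<Long> offsetForTimestamp(InputStream in, long targetTimestamp) throws IOException {
        DataInputStream data = new DataInputStream(in);
        while (true) {
            long offset;
            long timestamp;
            try {
                offset = data.readLong();
                timestamp = data.readLong();
            } catch (EOFException e) {
                return Optional.empty();   // exhausted the stream without a match
            }
            if (timestamp >= targetTimestamp) {
                return Optional.of(offset);
            }
        }
    }
}
```

Returning Optional.empty() rather than a sentinel offset matches the method's java.util.Optional<java.lang.Long> return type.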
public static org.apache.kafka.common.record.RecordBatch readBatch(java.io.InputStream inputStream) throws java.io.IOException

Reads one full batch from an InputStream. Throws EOFException if either the header or the full record batch cannot be read.

Throws:
java.io.IOException
public static org.apache.kafka.common.record.RecordBatch readBatchInto(java.io.InputStream inputStream, java.nio.ByteBuffer buffer) throws java.io.IOException

Similar to readBatch(), this method reads a full RecordBatch, using the supplied ByteBuffer. Throws EOFException if either the header or the full record batch cannot be read.

Throws:
java.io.IOException
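The read-header-then-read-body pattern shared by readBatch() and readBatchInto() can be sketched as follows. The 4-byte length header is an assumed framing, not the actual RecordBatch header layout:

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;

public class ReadBatchIntoSketch {
    /**
     * Reads one length-prefixed batch into the caller-supplied buffer.
     * DataInputStream.readInt() and readFully() throw EOFException if
     * the header or the full body cannot be read, matching the
     * EOFException behavior documented above. Assumes the batch fits
     * in the supplied buffer.
     */
    static ByteBuffer readBatchInto(InputStream in, ByteBuffer buffer) throws IOException {
        DataInputStream data = new DataInputStream(in);
        int batchLen = data.readInt();   // EOFException on a truncated header
        byte[] body = new byte[batchLen];
        data.readFully(body);            // EOFException on a truncated body
        buffer.clear();
        buffer.put(body);
        buffer.flip();                   // position the buffer for reading
        return buffer;
    }
}
```

Taking a caller-supplied buffer, as readBatchInto() does, lets a hot read path reuse one allocation across many batches instead of allocating per batch.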