public class TierSegmentReader
extends java.lang.Object
Modifier and Type | Class and Description |
---|---|
static class | TierSegmentReader.NextOffsetAndBatchMetadata |
static class | TierSegmentReader.RecordsAndNextBatchMetadata |
Constructor and Description |
---|
TierSegmentReader(java.lang.String logPrefix) |
Modifier and Type | Method and Description |
---|---|
java.util.Optional<java.lang.Long> | offsetForTimestamp(CancellationContext cancellationContext, java.io.InputStream inputStream, long targetTimestamp, int segmentSize) Read the supplied input stream to find the first offset with a timestamp >= targetTimestamp. |
org.apache.kafka.common.record.RecordBatch | readBatch(java.io.InputStream inputStream, int segmentSize) Reads one full batch from an InputStream. |
TierSegmentReader.RecordsAndNextBatchMetadata | readRecords(CancellationContext cancellationContext, java.util.Optional<MemoryTracker.MemoryLease> lease, java.io.InputStream inputStream, int maxBytes, long targetOffset, int startBytePosition, int segmentSize) Loads records from a given InputStream up to maxBytes. |
public TierSegmentReader.RecordsAndNextBatchMetadata readRecords(CancellationContext cancellationContext, java.util.Optional<MemoryTracker.MemoryLease> lease, java.io.InputStream inputStream, int maxBytes, long targetOffset, int startBytePosition, int segmentSize) throws java.io.IOException
Loads records from a given InputStream up to maxBytes. In the event that maxOffset is hit, this method can return an empty buffer.
Cancellation can be triggered using the CancellationContext, and its granularity is the individual record batch. That is, cancellation must wait for the current record batch to be parsed and loaded (or ignored) before taking effect.
Throws:
java.io.IOException
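For illustration, a caller might drive readRecords like the following. This is a minimal sketch, not a documented usage pattern: the CancellationContext.newContext() factory, the log prefix, and the fetchOnce wrapper are assumptions made for the example.

```java
// Minimal sketch; assumptions are marked. Imports for TierSegmentReader,
// CancellationContext and MemoryTracker are omitted because their packages
// are not shown on this page.
import java.io.IOException;
import java.io.InputStream;
import java.util.Optional;

public class ReadRecordsSketch {
    // Hypothetical wrapper: fetch one chunk of records from the start of an
    // already-open tiered segment stream.
    static TierSegmentReader.RecordsAndNextBatchMetadata fetchOnce(
            InputStream segmentStream, int segmentSize) throws IOException {
        // Assumption: CancellationContext.newContext() creates a root context;
        // cancelling it aborts the read at the next record batch boundary.
        CancellationContext ctx = CancellationContext.newContext();
        TierSegmentReader reader = new TierSegmentReader("[tier-fetch] ");
        return reader.readRecords(
                ctx,
                Optional.empty(),  // no MemoryTracker.MemoryLease in this sketch
                segmentStream,
                1024 * 1024,       // maxBytes: stop after ~1 MiB of batches
                5000L,             // targetOffset: first offset the caller wants
                0,                 // startBytePosition: stream begins at byte 0
                segmentSize);
        // The result may wrap an empty buffer if maxOffset was hit first.
    }
}
```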
public java.util.Optional<java.lang.Long> offsetForTimestamp(CancellationContext cancellationContext, java.io.InputStream inputStream, long targetTimestamp, int segmentSize) throws java.io.IOException
Read the supplied input stream to find the first offset with a timestamp >= targetTimestamp.
Parameters:
cancellationContext - cancellation context to allow reads of the InputStream to be aborted
inputStream - InputStream for the tiered segment
targetTimestamp - target timestamp to look up the offset for
segmentSize - total size of the segment we are reading
Throws:
java.io.IOException
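A timestamp lookup might be driven as below; this is a sketch only, and the CancellationContext.newContext() factory call is an assumption, as above.

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Optional;

public class OffsetLookupSketch {
    // Returns the first offset whose timestamp is >= targetTimestamp, or
    // Optional.empty() if the segment contains no such record.
    static Optional<Long> lookup(InputStream segmentStream, int segmentSize,
                                 long targetTimestamp) throws IOException {
        CancellationContext ctx = CancellationContext.newContext(); // assumed factory
        TierSegmentReader reader = new TierSegmentReader("[offset-for-time] ");
        return reader.offsetForTimestamp(ctx, segmentStream, targetTimestamp, segmentSize);
    }
}
```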
public org.apache.kafka.common.record.RecordBatch readBatch(java.io.InputStream inputStream, int segmentSize) throws java.io.IOException
Reads one full batch from an InputStream. Throws EOFException if either the header or the full record batch cannot be read. Visible for testing.
Throws:
java.io.IOException
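Since readBatch is visible for testing, a test might drain a stream batch by batch, using the documented EOFException as the end-of-segment signal. One assumption here is mine, not stated in the docs: that successive calls consume consecutive batches from the stream.

```java
// Test-style sketch; assumes repeated readBatch calls advance the stream.
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;
import org.apache.kafka.common.record.RecordBatch;

public class BatchScanSketch {
    static long countBatches(TierSegmentReader reader, InputStream segmentStream,
                             int segmentSize) throws IOException {
        long batches = 0;
        try {
            while (true) {
                RecordBatch batch = reader.readBatch(segmentStream, segmentSize);
                batches++;  // batch.baseOffset(), batch.sizeInBytes(), etc. available here
            }
        } catch (EOFException end) {
            // Documented behavior: EOFException once a complete header or
            // batch can no longer be read; treated here as end of segment.
        }
        return batches;
    }
}
```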