public class FtpsStateForRestore extends Object
Built in PointInTimeTierPartitionStateBuilder.buildFtpsFromSnapshot(java.util.Map<org.apache.kafka.common.TopicPartition, kafka.restore.db.PartitionRestoreContext>).
It is passed into the FiniteStateMachine to provide the information needed during restore.

Modifier and Type | Field and Description |
---|---|
Map<UUID,SegmentStateAndPath> | compactSegmentsToDelete |
Map<UUID,SegmentStateAndPath> | compactSegmentsToRestore: The maps for storing segments to be restored and deleted during restore |
long | fromTimestamp |
Path | ftpsSnapshot: The original tier partition snapshot downloaded from tier storage |
Queue<org.apache.kafka.clients.consumer.ConsumerRecord<byte[],byte[]>> | liveConsumerRecords: Queue to hold live consumer records pulled directly from the TierTopic (see the sketch below) |
Map<UUID,SegmentStateAndPath> | retentionSegmentsToRestore |
long | revertSinceTimestamp |
TopicIdPartition | topicIdPartition |
FileTierPartitionState | updatedFtpsState: Tier partition state initialized from the downloaded snapshot |
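The liveConsumerRecords queue is filled by whatever component reads the TierTopic during restore. A minimal sketch of that feeding step, assuming a plain KafkaConsumer and a hypothetical feedLiveRecords helper (neither is defined by this class):

```java
import java.time.Duration;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// Sketch: push live TierTopic records into liveConsumerRecords so they can be
// replayed onto the FTPS later. How the consumer is created and what it is
// subscribed to is assumed to be handled elsewhere, e.g. by
// TierTopicConsumerForRestore.
void feedLiveRecords(KafkaConsumer<byte[], byte[]> tierTopicConsumer,
                     FtpsStateForRestore state) {
    ConsumerRecords<byte[], byte[]> polled = tierTopicConsumer.poll(Duration.ofMillis(500));
    for (ConsumerRecord<byte[], byte[]> record : polled) {
        state.liveConsumerRecords.add(record); // public queue, see field summary above
    }
}
```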
Constructor and Description |
---|
FtpsStateForRestore(TopicIdPartition topicIdPartition, Path ftpsSnapshot, FileTierPartitionState ftps, long fromTimestamp, long revertSinceTimestamp, SnapshotObjectStoreUtils snapshotUtils, RestoreMetricsManager metricsManager) |
Modifier and Type | Method and Description |
---|---|
void | applyEvent(AbstractTierMetadata event, OffsetAndEpoch offsetAndEpoch): Events are applied to the ftps from two places: old events from the TTPS snapshot, applied in PointInTimeTierPartitionStateBuilder.buildFtpsFromSnapshot(java.util.Map<org.apache.kafka.common.TopicPartition, kafka.restore.db.PartitionRestoreContext>), and live events from TierTopicConsumerForRestore |
Map<UUID,SegmentStateAndPath> | restore(): Applies all the live events, then flips the segment states in the FTPS according to the to-be-restored and to-be-deleted segment lists to generate the final version used for the restore (see the sketch below) |
Map<UUID,String> | revertRestore(): Called only when a segment restore has failed in the cloud |
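Taken together, the constructor and restore() suggest the happy-path flow sketched below. This is only an illustration under the assumption that the caller has already downloaded the snapshot and prepared the collaborators (ftps, snapshotUtils, metricsManager); it is not the FiniteStateMachine's actual wiring, and the project-internal types are assumed to be on the classpath.

```java
import java.nio.file.Path;
import java.util.Map;
import java.util.UUID;

// Happy-path sketch: build the restore state from the downloaded snapshot,
// then call restore() once live events have been buffered.
Map<UUID, SegmentStateAndPath> runRestore(TopicIdPartition topicIdPartition,
                                          Path snapshotPath,
                                          FileTierPartitionState ftps,
                                          long fromTimestamp,
                                          long revertSinceTimestamp,
                                          SnapshotObjectStoreUtils snapshotUtils,
                                          RestoreMetricsManager metricsManager) throws Exception {
    FtpsStateForRestore state = new FtpsStateForRestore(
        topicIdPartition, snapshotPath, ftps,
        fromTimestamp, revertSinceTimestamp,
        snapshotUtils, metricsManager);
    // restore() applies the buffered live events and returns the segments
    // whose states were flipped for the restore.
    return state.restore();
}
```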
public final TopicIdPartition topicIdPartition
public final long fromTimestamp
public final long revertSinceTimestamp
public final Path ftpsSnapshot
public final FileTierPartitionState updatedFtpsState
public final Map<UUID,SegmentStateAndPath> compactSegmentsToRestore
public final Map<UUID,SegmentStateAndPath> compactSegmentsToDelete
public final Map<UUID,SegmentStateAndPath> retentionSegmentsToRestore
public Queue<org.apache.kafka.clients.consumer.ConsumerRecord<byte[],byte[]>> liveConsumerRecords
public FtpsStateForRestore(TopicIdPartition topicIdPartition, Path ftpsSnapshot, FileTierPartitionState ftps, long fromTimestamp, long revertSinceTimestamp, SnapshotObjectStoreUtils snapshotUtils, RestoreMetricsManager metricsManager) throws IOException
Throws: IOException
public void applyEvent(AbstractTierMetadata event, OffsetAndEpoch offsetAndEpoch) throws InterruptedException
There are two places where events are applied to the ftps:
- old events from the TTPS snapshot, applied in PointInTimeTierPartitionStateBuilder.buildFtpsFromSnapshot(java.util.Map<org.apache.kafka.common.TopicPartition, kafka.restore.db.PartitionRestoreContext>)
- live events from TierTopicConsumerForRestore

Parameters: event, offsetAndEpoch
Throws: InterruptedException
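As a hedged illustration of the second (live) path, the sketch below drains liveConsumerRecords and replays each record through applyEvent. The decodeMetadata and decodeOffsetAndEpoch helpers are hypothetical placeholders for whatever deserialization the restore consumer actually performs; they are not part of this class.

```java
import java.util.Queue;
import org.apache.kafka.clients.consumer.ConsumerRecord;

// Sketch: replay buffered live TierTopic records onto the FTPS.
void applyLiveEvents(FtpsStateForRestore state) throws InterruptedException {
    Queue<ConsumerRecord<byte[], byte[]>> queue = state.liveConsumerRecords;
    ConsumerRecord<byte[], byte[]> record;
    while ((record = queue.poll()) != null) {
        AbstractTierMetadata event = decodeMetadata(record);          // hypothetical helper
        OffsetAndEpoch offsetAndEpoch = decodeOffsetAndEpoch(record); // hypothetical helper
        state.applyEvent(event, offsetAndEpoch);
    }
}
```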
public Map<UUID,SegmentStateAndPath> restore() throws Exception
Throws: Exception
public Map<UUID,String> revertRestore() throws IOException
Throws: IOException
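Since revertRestore() is only meant for the failure path, the sketch below shows how it might sit next to the cloud-side restore step. restoreSegmentsInCloud is a hypothetical stand-in for the component that actually copies segments back to object storage; only the restore() and revertRestore() calls come from this class.

```java
import java.io.IOException;
import java.util.Map;
import java.util.UUID;

// Failure-path sketch: attempt the cloud-side segment restore and fall back to
// revertRestore() if any segment fails.
void restoreWithRollback(FtpsStateForRestore state) throws Exception {
    Map<UUID, SegmentStateAndPath> toRestore = state.restore();
    try {
        restoreSegmentsInCloud(toRestore);        // hypothetical cloud restore step
    } catch (IOException cloudFailure) {
        Map<UUID, String> reverted = state.revertRestore();
        // 'reverted' describes the segments that were rolled back
    }
}
```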