public class ClusterModel extends Object implements Serializable
Modifier and Type | Class and Description |
---|---|
static interface |
ClusterModel.CapacityLimitProvider
A simple callable interface to provide the capacity limit - the total amount of allowed capacity in the respective unit of a given resource (MiB, KB, etc.).
|
static class |
ClusterModel.NonExistentBrokerException
Thrown when a broker is not found in the cluster model
|
Modifier and Type | Field and Description |
---|---|
static Integer |
DEAD_BROKERS_CELL_ID |
static Integer |
DEFAULT_CELL_ID |
Constructor and Description |
---|
ClusterModel(ModelGeneration generation,
double monitoredPartitionsRatio)
Constructor for the cluster class.
|
Modifier and Type | Method and Description |
---|---|
void |
addReassigningPartition(org.apache.kafka.common.TopicPartition tp)
Marks a TopicPartition in the cluster model as reassigning. |
void |
addTenant(Tenant tenant)
Add or update tenant in the cluster.
|
Set<Broker> |
aliveBrokers()
Get alive brokers in the cluster, including those which are ineligible for replica movement (i.e. in Broker.Strategy.IGNORE state).
|
Set<Broker> |
aliveBrokersMatchingAttributes(Map<String,String> attributes)
Get brokers with matching attributes.
|
Set<String> |
aliveRackIds()
Return a set of rack ids that are alive (at least one broker inside a rack is alive).
|
SortedSet<Broker> |
allBrokers()
Get the set of all brokers in the cluster.
|
SortedSet<Broker> |
allBrokersWithStateOtherThan(Broker.Strategy stateToSkip)
Returns a set of all brokers in the cluster in any state other than stateToSkip.
|
SortedSet<Broker> |
brokenBrokers()
Get the broken brokers -- i.e. dead brokers and brokers with bad disks.
|
Broker |
broker(int brokerId)
Get the requested broker in the cluster.
|
Set<Broker> |
brokersEligibleForRebalancing()
Returns the set of brokers that are ineligible as source and destination.
|
Set<Broker> |
brokersHavingOfflineReplicasOnBadDisks()
Get brokers containing offline replicas residing on bad disks in the current cluster model.
|
Set<Broker> |
brokersNotOfStatesMatchingAttributes(Collection<Broker> brokerPool,
EnumSet<Broker.Strategy> states,
Map<String,String> attributes)
Returns a subset of brokerPool, such that the brokers in it have state that is NOT in the provided set of states,
and their attributes are an exact match of the attributes in the provided attribute map.
|
Set<Broker> |
brokersOfStatesMatchingAttributes(Collection<Broker> brokerPool,
EnumSet<Broker.Strategy> states,
Map<String,String> attributes)
Returns a subset of brokerPool, such that the brokers in it have state that is in the provided set of states,
and their attributes are an exact match of the attributes in the provided attribute map.
|
List<Broker> |
brokersOverThreshold(Collection<Broker> originalBrokers,
Resource resource,
double utilizationThreshold) |
BrokerStats |
brokerStats(KafkaCruiseControlConfig config)
Get the broker stats.
|
List<Broker> |
brokersUnderCapacityLimit(Collection<Broker> brokers,
Resource resource,
ClusterModel.CapacityLimitProvider capacityLimitProvider) |
List<Broker> |
brokersUnderThreshold(Collection<Broker> brokers,
Resource resource,
double utilizationThreshold) |
SortedSet<Broker> |
brokersWithBadDisks()
Get the set of brokers with bad disks.
|
Capacity |
capacity() |
Map<Integer,String> |
capacityEstimationInfoByBrokerId() |
Cell |
cell(Integer cellId)
Get the cell with the cell id if it is found in the cluster; null otherwise.
|
List<org.apache.kafka.common.CellLoad> |
cellLoadStats(Long maxReplicasPerBroker) |
List<org.apache.kafka.common.CellLoad> |
cellLoadStats(Long maxReplicasPerBroker,
List<Integer> cellIds)
Get the cell load of the cluster.
|
Collection<Cell> |
cells() |
Map<Integer,Cell> |
cellsById() |
boolean |
changeObservership(org.apache.kafka.common.TopicPartition tp,
int replicaId)
Flips the given replica's observership.
|
boolean |
clusterHasEligibleDestinationBrokers()
Checks if cluster has at least one rack eligible as a destination for replica placement.
|
Broker |
createBroker(String rackId,
Integer cellId,
String host,
int brokerId,
BrokerCapacityInfo brokerCapacityInfo,
boolean populateReplicaPlacementInfo,
Broker.Strategy strategy)
Create a broker under this cluster/rack and get the created broker.
|
Broker |
createBroker(String rackId,
String host,
int brokerId,
BrokerCapacityInfo brokerCapacityInfo,
boolean populateReplicaPlacementInfo,
Broker.Strategy strategy) |
Cell |
createCellIfAbsent(Optional<Integer> cellId)
Create a cell under this cluster.
|
boolean |
createOrDeleteReplicas(Map<Short,Set<String>> topicsByReplicationFactor,
Map<String,List<Integer>> brokersByRack)
For partitions of specified topics, create or delete replicas in given cluster model to change the partition's replication
factor to target replication factor.
|
Rack |
createRackIfAbsent(String rackId)
Create a rack under this cluster.
|
Replica |
createReplica(String rackId,
int brokerId,
org.apache.kafka.common.TopicPartition tp,
int index,
boolean isLeader)
Create a replica under given cluster/rack/broker.
|
Replica |
createReplica(String rackId,
int brokerId,
org.apache.kafka.common.TopicPartition tp,
int index,
boolean isLeader,
boolean isOffline,
String logdir,
boolean isFuture,
boolean isObserver)
Create a replica under given cluster/rack/broker.
|
SortedSet<Broker> |
deadBrokers()
Get the dead brokers in the cluster.
|
void |
deleteReplica(org.apache.kafka.common.TopicPartition topicPartition,
int brokerId)
Delete a replica from cluster.
|
SortedSet<Broker> |
eligibleDestinationBrokers()
Get the set of eligible destination brokers in the cluster.
|
double |
eligibleDestinationCapacityFor(Resource resource) |
SortedSet<Broker> |
eligibleSourceAndDestinationBrokers()
Get the set of brokers in the cluster which are simultaneously eligible sources *and* destinations.
|
SortedSet<Broker> |
eligibleSourceBrokers()
Get the set of eligible source brokers in the cluster.
|
SortedSet<Broker> |
eligibleSourceOrDestinationBrokers()
Get the set of brokers in the cluster which are eligible sources and/or eligible destinations.
|
Set<Integer> |
excludedBrokers()
Returns the set of broker ids that have replica exclusions placed on them (i.e. are excluded from having new replicas placed on them).
|
double |
expectedUtilizationInEligibleSourceBrokersFor(Resource resource) |
ModelGeneration |
generation()
Get the metadata generation for this cluster model.
|
ClusterModelStats |
getClusterStats(BalancingConstraint balancingConstraint)
Populate the analysis stats with this cluster and the given balancing constraint.
|
Map<org.apache.kafka.common.TopicPartition,ReplicaPlacementInfo> |
getLeaderDistribution()
Get leader broker ids for each partition.
|
Map<org.apache.kafka.common.TopicPartition,List<ReplicaPlacementInfo>> |
getObserverDistribution()
Get the distribution of observer replicas in the cluster.
|
SortedMap<String,List<Partition>> |
getPartitionsByTopic()
Get a map of partitions by topic names.
|
Map<org.apache.kafka.common.TopicPartition,List<ReplicaPlacementInfo>> |
getReplicaDistribution()
Get the distribution of replicas in the cluster at the point of call.
|
Optional<kafka.common.TopicPlacement> |
getTopicPlacement(String topic) |
void |
handleDeadBroker(String rackId,
int brokerId,
BrokerCapacityInfo brokerCapacityInfo)
If the rack or broker does not exist, create them with UNKNOWN host name.
|
boolean |
hasReassigningPartitions() |
Set<Broker> |
ignoredBrokers()
Get the ignored brokers in the cluster.
|
boolean |
isCellEnabled()
Return a boolean indicating if Cell is enabled on the cluster.
|
Set<Replica> |
leaderReplicas()
Get all the leader replicas in the cluster.
|
Load |
load()
Get the recent cluster load information.
|
int |
maxReplicationFactor()
Get the maximum replication factor of a replica that was added to the cluster before.
|
double |
monitoredPartitionsRatio()
Get the coverage of this cluster model.
|
SortedSet<Broker> |
newBrokers()
Get the set of new brokers.
|
int |
numLeaderReplicas()
Get the number of leader replicas in cluster.
|
int |
numLeaderReplicasOnEligibleSourceBrokers()
Get the total number of leader replicas on all source eligible brokers.
|
int |
numReplicas()
Get the number of replicas in cluster.
|
int |
numReplicasOnEligibleSourceBrokers()
Get the total number of replicas on all eligible source brokers (see eligibleSourceBrokers()). |
int |
numTopicReplicas(String topic)
Get the number of replicas with the given topic name in cluster.
|
int |
numTopicReplicasOnEligibleSourceBrokers(String topic)
Get the number of replicas on eligible source brokers with the given topic name in cluster.
|
Partition |
partition(org.apache.kafka.common.TopicPartition tp)
Get partition of the given replica.
|
Load |
potentialLeadershipLoadFor(Integer brokerId)
Get the leadership load for given broker id.
|
Utilization |
potentialLeadershipUtilizationFor(Integer brokerId) |
Rack |
rack(String rackId)
Get the rack with the rack id if it is found in the cluster; null otherwise.
|
Set<org.apache.kafka.common.TopicPartition> |
reassigningPartitions() |
void |
refreshClusterMaxReplicationFactor()
Refresh the maximum topic replication factor statistic.
|
boolean |
relocateLeadership(org.apache.kafka.common.TopicPartition tp,
int sourceBrokerId,
int destinationBrokerId)
(1) Removes leadership from source replica.
|
void |
relocateReplica(org.apache.kafka.common.TopicPartition tp,
int sourceBrokerId,
int destinationBrokerId)
For replica movement across the broker:
(1) Remove the replica from the source broker,
(2) Set the broker of the removed replica as the destination broker,
(3) Add this replica to the destination broker.
|
void |
relocateReplica(org.apache.kafka.common.TopicPartition tp,
int brokerId,
String destinationLogdir)
For replica movement across the disks of the same broker.
|
Replica |
removeReplica(int brokerId,
org.apache.kafka.common.TopicPartition tp)
Remove and get removed replica from the cluster.
|
Map<String,Integer> |
replicationFactorByTopic()
Get the replication factor that each topic in the cluster was created with.
|
void |
sanityCheck()
(1) Check whether each load in the cluster contains exactly the number of windows defined by the Load.
|
Set<Replica> |
selfHealingEligibleReplicas()
Get replicas eligible for self-healing.
|
void |
setReplicaExclusions(Set<Integer> brokerIds) |
void |
setReplicaLoad(String rackId,
int brokerId,
org.apache.kafka.common.TopicPartition tp,
AggregatedMetricValues metricValues,
List<Long> windows)
Set the load for the given replica.
|
void |
setTopicPlacements(Map<String,kafka.common.TopicPlacement> topicPlacements) |
boolean |
skipInterCellBalancing()
Return a boolean indicating if inter-cell balancing can be skipped
on this cluster.
|
Optional<Tenant> |
tenant(String tenantId)
Return optional tenant with the given tenant id if it is found in the cluster.
|
Set<String> |
topics()
Get topics in the cluster.
|
String |
toString() |
void |
trackSortedReplicas(Collection<Broker> brokersToTrack,
String sortName,
Function<Replica,Boolean> selectionFunc,
Function<Replica,Integer> priorityFunc,
Function<Replica,Double> scoreFunc)
Ask the cluster model to keep track of the replicas sorted with the given priority function and score function.
|
void |
trackSortedReplicas(String sortName,
Function<Replica,Boolean> selectionFunc,
Function<Replica,Integer> priorityFunc,
Function<Replica,Double> scoreFunc)
Overload of trackSortedReplicas(Collection, String, Function, Function, Function) for when all brokers should be considered for replica tracking. |
void |
trackSortedReplicas(String sortName,
Function<Replica,Double> scoreFunction)
Ask the cluster model to keep track of the replicas sorted with the given score function.
|
void |
trackSortedReplicas(String sortName,
Function<Replica,Integer> priorityFunc,
Function<Replica,Double> scoreFunc)
Ask the cluster model to keep track of the replicas sorted with the given priority function and score function.
|
void |
untrackSortedReplicas(String sortName)
Un-track the sorted replicas with the given name to release memory.
|
boolean |
updateReplicationFactor(Map<Short,Set<String>> topicsByReplicationFactor,
Set<Integer> brokersExcludedForReplicaMovement)
For partitions of specified topics, create or delete replicas in given cluster model to change the partition's replication
factor to target replication factor.
|
Utilization |
utilization() |
void |
writeTo(OutputStream out) |
public static final Integer DEFAULT_CELL_ID
public static final Integer DEAD_BROKERS_CELL_ID
public ClusterModel(ModelGeneration generation, double monitoredPartitionsRatio)
public ModelGeneration generation()
public double monitoredPartitionsRatio()
public ClusterModelStats getClusterStats(BalancingConstraint balancingConstraint)
balancingConstraint - Balancing constraint.
public Set<String> aliveRackIds()
public Rack rack(String rackId)
public Cell cell(Integer cellId)
public Optional<Tenant> tenant(String tenantId)
public Collection<Cell> cells()
public boolean isCellEnabled()
public boolean skipInterCellBalancing()
public Map<org.apache.kafka.common.TopicPartition,List<ReplicaPlacementInfo>> getReplicaDistribution()
public Map<org.apache.kafka.common.TopicPartition,ReplicaPlacementInfo> getLeaderDistribution()
public Map<org.apache.kafka.common.TopicPartition,List<ReplicaPlacementInfo>> getObserverDistribution()
public Set<Replica> selfHealingEligibleReplicas()
public Load load()
public Utilization utilization()
public Load potentialLeadershipLoadFor(Integer brokerId)
brokerId - Broker id.
public Utilization potentialLeadershipUtilizationFor(Integer brokerId)
public int maxReplicationFactor()
public Map<String,Integer> replicationFactorByTopic()
public Partition partition(org.apache.kafka.common.TopicPartition tp)
tp - Topic partition of the replica for which the partition is requested.
public SortedMap<String,List<Partition>> getPartitionsByTopic()
public void relocateReplica(org.apache.kafka.common.TopicPartition tp, int brokerId, String destinationLogdir)
tp - Partition info of the replica to be relocated.
brokerId - Broker id.
destinationLogdir - Destination logdir.
public void relocateReplica(org.apache.kafka.common.TopicPartition tp, int sourceBrokerId, int destinationBrokerId)
See also: relocateLeadership(TopicPartition, int, int).
tp - Partition info of the replica to be relocated.
sourceBrokerId - Source broker id.
destinationBrokerId - Destination broker id.
public boolean relocateLeadership(org.apache.kafka.common.TopicPartition tp, int sourceBrokerId, int destinationBrokerId)
(2) Adds this leadership to the destination replica. The source replica receives the preferred leader ranking that the destination had prior to this call (preferred leader ranking of other replicas is not affected).
(3) Switches CPU and NW_OUT resource utilization between the source and destination replicas.
(4) Updates the leader and list of followers of the partition.
tp - Topic partition of this replica.
sourceBrokerId - Source broker id.
destinationBrokerId - Destination broker id.
public boolean changeObservership(org.apache.kafka.common.TopicPartition tp, int replicaId)
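As a usage illustration, here is a minimal sketch that combines the relocation methods above with changeObservership. The topic name and broker ids are hypothetical, the model is assumed to already contain the relevant replicas, and imports for the cluster model classes are omitted:

```java
import org.apache.kafka.common.TopicPartition;

// Hypothetical rebalancing step on a pre-populated ClusterModel.
static void moveReplicaAndLeadership(ClusterModel clusterModel) {
    TopicPartition tp = new TopicPartition("orders", 0);

    // Move the replica hosted on broker 1 to broker 4 (steps (1)-(3) described above).
    clusterModel.relocateReplica(tp, 1, 4);

    // Move leadership from the current leader on broker 2 to the replica now on broker 4;
    // this also switches the CPU and NW_OUT utilization between the two replicas.
    boolean leadershipMoved = clusterModel.relocateLeadership(tp, 2, 4);

    // Optionally flip the observership of the replica on broker 4.
    boolean observershipFlipped = clusterModel.changeObservership(tp, 4);
}
```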
public Set<Broker> aliveBrokers()
Get alive brokers in the cluster, including those which are ineligible for replica movement (i.e. in Broker.Strategy.IGNORE state). See eligibleDestinationBrokers() to get a list of brokers which can serve as a replica movement destination.
public SortedSet<Broker> eligibleSourceBrokers()
Get the brokers that are not in Broker.Strategy.IGNORE state.
In other words, an eligible source broker is one which is eligible to have replicas moved away from it.
It is recommended that goals rebalance over this set of brokers rather than allBrokers().
public SortedSet<Broker> eligibleDestinationBrokers()
Get the brokers that are Broker.Strategy.ALIVE and not Broker.Strategy.IGNORE.
In other words, an eligible destination broker is one which is eligible to have replicas moved to it.
public SortedSet<Broker> eligibleSourceAndDestinationBrokers()
See eligibleSourceBrokers() and eligibleDestinationBrokers().
public SortedSet<Broker> eligibleSourceOrDestinationBrokers()
See eligibleSourceBrokers() and eligibleDestinationBrokers().
public Set<Broker> ignoredBrokers()
A Broker.Strategy.IGNORE broker is one that is both alive and excluded for replica placement.
SBC currently treats these brokers specially by not interacting with their replicas at all.
public SortedSet<Broker> brokenBrokers()
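To illustrate how the broker sets above relate to each other, a small sketch (imports for the model classes omitted; the comments restate the definitions given above):

```java
import java.util.Set;
import java.util.SortedSet;

static void summarizeBrokerSets(ClusterModel clusterModel) {
    Set<Broker> alive = clusterModel.aliveBrokers();                             // includes ignored brokers
    SortedSet<Broker> sources = clusterModel.eligibleSourceBrokers();            // excludes Broker.Strategy.IGNORE brokers
    SortedSet<Broker> destinations = clusterModel.eligibleDestinationBrokers();  // ALIVE and not IGNORE
    Set<Broker> ignored = clusterModel.ignoredBrokers();                         // alive but excluded for replica placement
    SortedSet<Broker> broken = clusterModel.brokenBrokers();

    System.out.printf("alive=%d, eligible sources=%d, eligible destinations=%d, ignored=%d, broken=%d%n",
            alive.size(), sources.size(), destinations.size(), ignored.size(), broken.size());
}
```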
public Map<Integer,String> capacityEstimationInfoByBrokerId()
public SortedSet<Broker> brokersWithBadDisks()
public Set<Broker> brokersHavingOfflineReplicasOnBadDisks()
public Set<Broker> aliveBrokersMatchingAttributes(Map<String,String> attributes)
public boolean clusterHasEligibleDestinationBrokers()
public Replica removeReplica(int brokerId, org.apache.kafka.common.TopicPartition tp)
brokerId - Id of the broker containing the partition.
tp - Topic partition of the replica to be removed.
public SortedSet<Broker> allBrokers()
This includes brokers in Broker.Strategy.IGNORE or Broker.Strategy.DEAD state.
public Set<Broker> brokersEligibleForRebalancing()
public SortedSet<Broker> allBrokersWithStateOtherThan(Broker.Strategy stateToSkip)
public Set<Broker> brokersOfStatesMatchingAttributes(Collection<Broker> brokerPool, EnumSet<Broker.Strategy> states, Map<String,String> attributes)
public Set<Broker> brokersNotOfStatesMatchingAttributes(Collection<Broker> brokerPool, EnumSet<Broker.Strategy> states, Map<String,String> attributes)
public Broker broker(int brokerId)
brokerId - Id of the requested broker.
public void trackSortedReplicas(String sortName, Function<Replica,Double> scoreFunction)
It is recommended to use the functions from ReplicaSortFunctionFactory so the functions can be maintained in a single place.
The sorted replicas will only be updated in the following cases:
1. A replica is added to or removed from a broker.
2. A replica's role has changed from leader to follower, and vice versa.
The sorted replicas are named using the given sortName, and can be accessed using Broker.trackedSortedReplicas(String). If the sorted replicas are no longer needed, use untrackSortedReplicas(String) to release memory.
sortName - the name of the sorted replicas.
scoreFunction - the score function to sort the replicas with the same priority; replicas are sorted in ascending order of score.
See also: SortedReplicas
public void trackSortedReplicas(String sortName, Function<Replica,Integer> priorityFunc, Function<Replica,Double> scoreFunc)
It is recommended to use the functions from ReplicaSortFunctionFactory so the functions can be maintained in a single place.
The sort will first use the priority function, then the score function. The priority function allows the caller to prioritize a certain type of replicas, e.g. immigrant replicas.
The sorted replicas will only be updated in the following cases:
1. A replica is added to or removed from a broker.
2. A replica's role has changed from leader to follower, and vice versa.
The sorted replicas are named using the given sortName, and can be accessed using Broker.trackedSortedReplicas(String). If the sorted replicas are no longer needed, use untrackSortedReplicas(String) to release memory.
sortName - the name of the sorted replicas.
priorityFunc - the priority function to sort the replicas.
scoreFunc - the score function to sort the replicas with the same priority; replicas are sorted in ascending order of score.
See also: SortedReplicas
public void trackSortedReplicas(String sortName, Function<Replica,Boolean> selectionFunc, Function<Replica,Integer> priorityFunc, Function<Replica,Double> scoreFunc)
Overload of trackSortedReplicas(Collection, String, Function, Function, Function) for when all brokers should be considered for replica tracking.
public void trackSortedReplicas(Collection<Broker> brokersToTrack, String sortName, Function<Replica,Boolean> selectionFunc, Function<Replica,Integer> priorityFunc, Function<Replica,Double> scoreFunc)
The sort will first use the priority function, then the score function. The priority function allows the caller to prioritize a certain type of replicas, e.g. immigrant replicas. The selection function determines which replicas are included in the sorted replicas.
It is recommended to use the functions from ReplicaSortFunctionFactory so the functions can be maintained in a single place.
The sorted replicas will only be updated in the following cases:
1. A replica is added to or removed from a broker.
2. A replica's role has changed from leader to follower, and vice versa.
The sorted replicas are named using the given sortName, and can be accessed using Broker.trackedSortedReplicas(String). If the sorted replicas are no longer needed, use untrackSortedReplicas(String) to release memory.
brokersToTrack - the collection of brokers for which to track sorted replicas.
sortName - the name of the sorted replicas.
selectionFunc - the selection function to decide which replicas to include in the sort. If it is null, all the replicas are included.
priorityFunc - the priority function to sort the replicas.
scoreFunc - the score function to sort the replicas with the same priority; replicas are sorted in ascending order of score.
See also: SortedReplicas
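A minimal usage sketch of the tracked-sort API. The sort name and the three lambdas are illustrative placeholders only; as noted above, production code should use the functions from ReplicaSortFunctionFactory, and the score function would normally read a real replica metric (accessors are not shown on this page). Imports for the model classes are omitted:

```java
import java.util.function.Function;

static void trackAndRelease(ClusterModel clusterModel) {
    Function<Replica, Boolean> selectionFunc = replica -> true;   // include every replica
    Function<Replica, Integer> priorityFunc  = replica -> 0;      // single priority bucket
    Function<Replica, Double>  scoreFunc     = replica -> 0.0;    // placeholder score (sorted ascending)

    // Track the sort on all brokers under the name "example-sort".
    clusterModel.trackSortedReplicas("example-sort", selectionFunc, priorityFunc, scoreFunc);

    // ... goals read the sorted replicas via Broker.trackedSortedReplicas("example-sort") ...

    // Release the memory held by the tracked sort once it is no longer needed.
    clusterModel.untrackSortedReplicas("example-sort");
}
```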
public void untrackSortedReplicas(String sortName)
sortName - the name of the sorted replicas.
public int numTopicReplicas(String topic)
topic - Name of the topic for which the number of replicas in cluster will be counted.
public int numTopicReplicasOnEligibleSourceBrokers(String topic)
topic - Name of the topic for which the number of replicas in cluster will be counted.
public int numLeaderReplicas()
public int numLeaderReplicasOnEligibleSourceBrokers()
public int numReplicas()
public int numReplicasOnEligibleSourceBrokers()
See eligibleSourceBrokers().
public Capacity capacity()
public void setReplicaLoad(String rackId, int brokerId, org.apache.kafka.common.TopicPartition tp, AggregatedMetricValues metricValues, List<Long> windows)
rackId - Rack id.
brokerId - Broker id containing the replica with the given topic partition.
tp - Topic partition that identifies the replica in this broker.
metricValues - The load of the replica.
windows - The windows list of the aggregated metrics.
public void handleDeadBroker(String rackId, int brokerId, BrokerCapacityInfo brokerCapacityInfo)
rackId - Rack id under which the replica will be created.
brokerId - Broker id under which the replica will be created.
brokerCapacityInfo - The capacity information to use if the broker does not exist.
public Replica createReplica(String rackId, int brokerId, org.apache.kafka.common.TopicPartition tp, int index, boolean isLeader)
The LoadMonitor uses createReplica(String, int, TopicPartition, int, boolean, boolean, String, boolean, boolean) while setting the replica offline status, and it considers the broken disks as well. This method, in contrast, is used only by unit tests. The relevant unit tests may use Replica.markOriginalOffline() to mark offline replicas on broken disks.
The main reason for this separation is the lack of disk representation in the current broker model. Once the patch #327 is merged, we can simplify this logic.
rackId - Rack id under which the replica will be created.
brokerId - Broker id under which the replica will be created.
tp - Topic partition information of the replica.
index - The index of the replica in the replica list.
isLeader - True if the replica is a leader, false otherwise.
public Replica createReplica(String rackId, int brokerId, org.apache.kafka.common.TopicPartition tp, int index, boolean isLeader, boolean isOffline, String logdir, boolean isFuture, boolean isObserver)
rackId - Rack id under which the replica will be created.
brokerId - Broker id under which the replica will be created.
tp - Topic partition information of the replica.
index - The index of the replica in the replica list.
isLeader - True if the replica is a leader, false otherwise.
isOffline - True if the replica is offline in its original location, false otherwise.
logdir - The logdir of the replica's hosting disk. If replica placement over disk information is not populated, this parameter is null.
isFuture - True if the replica does not correspond to any existing replica in the cluster, but is a replica we are going to add to the cluster. This replica's original broker will not be any existing broker, so that it will be treated as an immigrant replica for whatever broker it is assigned to, granting goals the greatest freedom to allocate it to an existing broker.
isObserver - True if the replica is an observer, false otherwise.
public void deleteReplica(org.apache.kafka.common.TopicPartition topicPartition, int brokerId)
Call refreshClusterMaxReplicationFactor() after all replica deletions.
topicPartition - Topic partition of the replica to be removed.
brokerId - Id of the broker hosting the replica.
public void refreshClusterMaxReplicationFactor()
public Broker createBroker(String rackId, Integer cellId, String host, int brokerId, BrokerCapacityInfo brokerCapacityInfo, boolean populateReplicaPlacementInfo, Broker.Strategy strategy)
Updates capacityEstimationInfoByBrokerId if the broker capacity has been estimated.
rackId - Id of the rack that the broker will be created in.
host - The host of this broker.
brokerId - Id of the broker to be created.
brokerCapacityInfo - Capacity information of the created broker.
populateReplicaPlacementInfo - Whether to populate replica placement over disk information or not.
public Broker createBroker(String rackId, String host, int brokerId, BrokerCapacityInfo brokerCapacityInfo, boolean populateReplicaPlacementInfo, Broker.Strategy strategy)
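Putting the creation methods on this page together, a hedged sketch of populating a small model. The capacity and metric objects are passed in because their construction is not covered here, the rack/host/topic names are made up, and the 5-argument createReplica is the test-only variant described above:

```java
import org.apache.kafka.common.TopicPartition;
import java.util.List;

static void populateSingleBrokerModel(ClusterModel clusterModel,
                                      BrokerCapacityInfo capacityInfo,
                                      AggregatedMetricValues replicaLoad,
                                      List<Long> windows) {
    clusterModel.createRackIfAbsent("rack-0");
    clusterModel.createBroker("rack-0", "host-0", 0, capacityInfo,
            false /* populateReplicaPlacementInfo */, Broker.Strategy.ALIVE);

    TopicPartition tp = new TopicPartition("orders", 0);
    clusterModel.createReplica("rack-0", 0, tp, 0 /* index */, true /* isLeader */);

    clusterModel.setReplicaLoad("rack-0", 0, tp, replicaLoad, windows);
}
```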
public Rack createRackIfAbsent(String rackId)
rackId - Id of the rack to be created.
public Cell createCellIfAbsent(Optional<Integer> cellId)
cellId - Id of the cell to be created.
public void addTenant(Tenant tenant)
public boolean createOrDeleteReplicas(Map<Short,Set<String>> topicsByReplicationFactor, Map<String,List<Integer>> brokersByRack) throws OptimizationFailureException
topicsByReplicationFactor - The topics to modify replication factor with target replication factor.
brokersByRack - A map from rack to broker.
Throws: OptimizationFailureException
public boolean updateReplicationFactor(Map<Short,Set<String>> topicsByReplicationFactor, Set<Integer> brokersExcludedForReplicaMovement) throws OptimizationFailureException
topicsByReplicationFactor - The topics to modify replication factor with target replication factor.
Throws: OptimizationFailureException
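For instance, a hedged sketch of requesting a replication factor change for two topics. It assumes the map key is the target replication factor, as the parameter name topicsByReplicationFactor suggests; topic names and the excluded broker set are illustrative, and imports for the model classes are omitted:

```java
import java.util.Map;
import java.util.Set;

static void raiseReplicationFactor(ClusterModel clusterModel) throws OptimizationFailureException {
    // Target replication factor 3 for both topics (map key assumed to be the target RF).
    Map<Short, Set<String>> topicsByReplicationFactor = Map.of((short) 3, Set.of("orders", "payments"));
    Set<Integer> excludedBrokers = Set.of(7); // do not move replicas onto broker 7

    boolean changed = clusterModel.updateReplicationFactor(topicsByReplicationFactor, excludedBrokers);
}
```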
public List<Broker> brokersUnderThreshold(Collection<Broker> brokers, Resource resource, double utilizationThreshold)
public List<Broker> brokersUnderCapacityLimit(Collection<Broker> brokers, Resource resource, ClusterModel.CapacityLimitProvider capacityLimitProvider)
public List<Broker> brokersOverThreshold(Collection<Broker> originalBrokers, Resource resource, double utilizationThreshold)
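A short sketch of the filtering helpers above. The Resource value and the CapacityLimitProvider are passed in rather than constructed, since their definitions are not shown on this page, and the 0.85 threshold is illustrative:

```java
import java.util.Collection;
import java.util.List;

static void filterByUtilization(ClusterModel clusterModel,
                                Resource resource,
                                ClusterModel.CapacityLimitProvider capacityLimitProvider) {
    Collection<Broker> candidates = clusterModel.eligibleDestinationBrokers();

    // Partition the candidates around an illustrative utilization threshold of 0.85.
    List<Broker> under = clusterModel.brokersUnderThreshold(candidates, resource, 0.85);
    List<Broker> over  = clusterModel.brokersOverThreshold(candidates, resource, 0.85);

    // Brokers whose utilization stays under the capacity limit supplied by the provider.
    List<Broker> underLimit = clusterModel.brokersUnderCapacityLimit(candidates, resource, capacityLimitProvider);
}
```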
public void sanityCheck()
public BrokerStats brokerStats(KafkaCruiseControlConfig config)
public void writeTo(OutputStream out) throws IOException
Throws: IOException
public void setTopicPlacements(Map<String,kafka.common.TopicPlacement> topicPlacements)
public Set<Integer> excludedBrokers()
This method represents the Kafka-level state modified via the ConfluentAdmin.alterBrokerReplicaExclusions(Map) API. How SBC handles such brokers as part of plan computation is not strictly related.
Note that this set may contain broker ids that do not exist in the cluster/cluster model (e.g. id 9999).
public void addReassigningPartition(org.apache.kafka.common.TopicPartition tp)
Marks a TopicPartition in the cluster model as reassigning.
Goals will generally avoid moving partitions that are already reassigning.
public boolean hasReassigningPartitions()
public Set<org.apache.kafka.common.TopicPartition> reassigningPartitions()
public List<org.apache.kafka.common.CellLoad> cellLoadStats(Long maxReplicasPerBroker, List<Integer> cellIds)
cellLoad = Σ (utilization of all brokers in the cell) / (number of brokers in the cell)
brokerUtilRatio = max(cpuUtil, networkInboundUtil, networkOutboundUtil, replicaCountUtil)
cpuUtil = cpuUtilization / cpuCapacity
networkInboundUtil = networkInboundUtilization / networkInboundCapacity
networkOutboundUtil = networkOutboundUtilization / networkOutboundCapacity
replicaCountUtil = replicaCount / maxReplicaCount
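To make the arithmetic concrete, a small stand-alone sketch with made-up utilization and capacity figures; it mirrors the formula above rather than the internal implementation:

```java
static double exampleBrokerUtilRatio() {
    double cpuUtil             = 0.40 / 1.0;     // cpuUtilization / cpuCapacity
    double networkInboundUtil  = 2.0 / 10.0;     // networkInboundUtilization / networkInboundCapacity
    double networkOutboundUtil = 5.0 / 10.0;     // networkOutboundUtilization / networkOutboundCapacity
    double replicaCountUtil    = 300.0 / 1000.0; // replicaCount / maxReplicaCount

    // brokerUtilRatio = max of the four ratios = 0.5 in this example.
    return Math.max(Math.max(cpuUtil, networkInboundUtil),
                    Math.max(networkOutboundUtil, replicaCountUtil));
}

// For a cell containing two brokers with ratios 0.5 and 0.3, cellLoad = (0.5 + 0.3) / 2 = 0.4.
```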
public List<org.apache.kafka.common.CellLoad> cellLoadStats(Long maxReplicasPerBroker)
public double expectedUtilizationInEligibleSourceBrokersFor(Resource resource)
public double eligibleDestinationCapacityFor(Resource resource)