Environment:
Node           ZooKeeper port   Kafka port   Notes
172.17.0.81    12181            19092        existing node
172.17.0.82    12181            19092        existing node
172.17.0.83    12181            19092        existing node
172.17.0.90    12181            19092        new node (scale-out)
172.17.0.91    12181            19092        new node (scale-out)
Steps:
1. Scale out the ZooKeeper ensemble
# Edit zoo.cfg on every node; pay special attention to the myid setting on the new nodes
vim /app/apache-zookeeper-3.5.9-bin/conf/zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=5
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=2
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/data/zookeeper/data
dataLogDir=/data/zookeeper/logs
# the port at which the clients will connect
clientPort=12181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=172.17.0.81:22888:23888
server.2=172.17.0.82:22888:23888
server.3=172.17.0.83:22888:23888
server.4=172.17.0.90:22888:23888
server.5=172.17.0.91:22888:23888
# Edit the myid
vim /app/apache-zookeeper-3.5.9-bin/data/myid
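# The myid must match the server.N id declared for that host in zoo.cfg and lives in the
# configured dataDir. A minimal sketch for the two new nodes, assuming the dataDir shown
# above (if /app/apache-zookeeper-3.5.9-bin/data points to it, either path works):
# on 172.17.0.90
echo 4 > /data/zookeeper/data/myid
# on 172.17.0.91
echo 5 > /data/zookeeper/data/myid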
# Start the new nodes, then restart the existing nodes one by one (restart the followers first, then the leader)
# Start a new node
cd /app/apache-zookeeper-3.5.9-bin/bin
#./zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /app/apache-zookeeper-3.5.9-bin/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
#./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /app/apache-zookeeper-3.5.9-bin/bin/../conf/zoo.cfg
Client port found: 12181. Client address: localhost. Client SSL: false.
Mode: follower
# Restart the existing nodes
./zkServer.sh restart
ZooKeeper JMX enabled by default
Using config: /app/apache-zookeeper-3.5.9-bin/bin/../conf/zoo.cfg
ZooKeeper JMX enabled by default
Using config: /app/apache-zookeeper-3.5.9-bin/bin/../conf/zoo.cfg
Stopping zookeeper ... STOPPED
ZooKeeper JMX enabled by default
Using config: /app/apache-zookeeper-3.5.9-bin/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
ZooKeeper JMX enabled by default
Using config: /app/apache-zookeeper-3.5.9-bin/bin/../conf/zoo.cfg
Client port found: 12181. Client address: localhost. Client SSL: false.
Mode: follower
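# A quick way to confirm the 5-node ensemble after the rolling restart is to check that every
# node is serving and exactly one reports "Mode: leader". The loop below is only a sketch that
# assumes password-less SSH from the current host; otherwise run zkServer.sh status on each node.
for h in 172.17.0.81 172.17.0.82 172.17.0.83 172.17.0.90 172.17.0.91; do
  echo "== $h =="
  ssh $h "/app/apache-zookeeper-3.5.9-bin/bin/zkServer.sh status"
done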
2. Scale out the Kafka brokers
# On the existing Kafka brokers, add the new ZooKeeper nodes to zookeeper.connect and adjust the log retention time. The default is 7 days and each broker holds roughly 6 TB of data, so the later partition rebalance is time-consuming and heavy on disk, CPU and network bandwidth.
zookeeper.connect=172.17.0.81:12181,172.17.0.82:12181,172.17.0.83:12181,172.17.0.90:12181,172.17.0.91:12181
log.retention.hours=168
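# One way to apply the zookeeper.connect change on each existing broker is with sed; the Kafka
# install path below is an assumption (it is not shown in this document), so adjust it to your layout.
KAFKA_HOME=/app/kafka                      # assumed install path
sed -i 's|^zookeeper.connect=.*|zookeeper.connect=172.17.0.81:12181,172.17.0.82:12181,172.17.0.83:12181,172.17.0.90:12181,172.17.0.91:12181|' $KAFKA_HOME/config/server.properties
grep -E '^(zookeeper.connect|log.retention.hours)=' $KAFKA_HOME/config/server.properties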
# Restart the existing brokers
./kafka-server-stop.sh
nohup ./bin/kafka-server-start.sh ./config/server.properties &
# Edit the configuration on the new brokers
broker.id=
host.name=
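# Example values for the two new brokers (a sketch): ids 3 and 4 match the --broker-list
# "0,1,2,3,4" used in the reassignment below, and host.name is the node's own IP; both
# brokers also need the full zookeeper.connect string shown above.
# on 172.17.0.90
broker.id=3
host.name=172.17.0.90
# on 172.17.0.91
broker.id=4
host.name=172.17.0.91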
# Start the new brokers
nohup ./bin/kafka-server-start.sh ./config/server.properties &
3. Adjust topic partitions
# Increase the partition count as needed
./kafka-topics.sh --alter --zookeeper 127.0.0.1:12181 --topic test --partitions 5
WARNING: If partitions are increased for a topic that has a key, the partition logic or ordering of the messages will be affected
Adding partitions succeeded!
#查看topic情况
./kafka-topics.sh --describe --zookeeper 172.17.0.81:12181 --topic test
Topic: test PartitionCount: 5 ReplicationFactor: 3 Configs:
Topic: test Partition: 0 Leader: 2 Replicas: 2,1,0 Isr: 1,2,0
Topic: test Partition: 1 Leader: 1 Replicas: 1,0,2 Isr: 1,2,0
Topic: test Partition: 2 Leader: 0 Replicas: 0,2,1 Isr: 1,2,0
Topic: test Partition: 3 Leader: 0 Replicas: 0,3,4 Isr: 0,3,4
Topic: test Partition: 4 Leader: 1 Replicas: 1,4,0 Isr: 1,4,0
./kafka-topics.sh --describe --zookeeper 172.17.0.81:12181 --topic test6p3r
Topic: test6p3r PartitionCount: 6 ReplicationFactor: 3 Configs:
Topic: test6p3r Partition: 0 Leader: 2 Replicas: 2,0,1 Isr: 1,2,0
Topic: test6p3r Partition: 1 Leader: 1 Replicas: 1,2,0 Isr: 1,2,0
Topic: test6p3r Partition: 2 Leader: 0 Replicas: 0,1,2 Isr: 1,2,0
Topic: test6p3r Partition: 3 Leader: 2 Replicas: 2,1,0 Isr: 1,2,0
Topic: test6p3r Partition: 4 Leader: 1 Replicas: 1,0,2 Isr: 1,2,0
Topic: test6p3r Partition: 5 Leader: 0 Replicas: 0,2,1 Isr: 1,2,0
# Prepare the topic list used to generate the reassignment plan
cat << EOF > topic-to-move.json
{"topics": [{"topic": "test"},{"topic": "test6p3r"}],
"version":1
}
EOF
# Generate the reassignment plan
./kafka-reassign-partitions.sh --zookeeper 127.0.0.1:12181 --topics-to-move-json-file ./topic-to-move.json --broker-list "0,1,2,3,4" --generate > move_20230104.json
# Inspect the generated file: because of the redirect it also contains the deprecation warning, the current assignment and the section headers shown below. Remove everything except the proposed-reassignment JSON, and keep a copy of the current assignment for rollback.
Warning: --zookeeper is deprecated, and will be removed in a future version of Kafka.
Current partition replica assignment
{"version":1,"partitions":[{"topic":"test","partition":0,"replicas":[3,4,0],"log_dirs":["any","any","any"]},{"topic":"test","partition":1,"replicas":[4,0,1],"log_dirs":["any","any","any"]},{"topic":"test","partition":2,"replicas":[0,1,2],"log_dirs":["any","any","any"]},{"topic":"test","partition":3,"replicas":[1,2,3],"log_dirs":["any","any","any"]},{"topic":"test","partition":4,"replicas":[2,3,4],"log_dirs":["any","any","any"]},{"topic":"test6p3r","partition":0,"replicas":[2,0,1],"log_dirs":["any","any","any"]},{"topic":"test6p3r","partition":1,"replicas":[1,2,0],"log_dirs":["any","any","any"]},{"topic":"test6p3r","partition":2,"replicas":[0,1,2],"log_dirs":["any","any","any"]},{"topic":"test6p3r","partition":3,"replicas":[2,1,0],"log_dirs":["any","any","any"]},{"topic":"test6p3r","partition":4,"replicas":[1,0,2],"log_dirs":["any","any","any"]},{"topic":"test6p3r","partition":5,"replicas":[0,2,1],"log_dirs":["any","any","any"]}]}
Proposed partition reassignment configuration
{"version":1,"partitions":[{"topic":"test","partition":0,"replicas":[1,2,3],"log_dirs":["any","any","any"]},{"topic":"test","partition":1,"replicas":[2,3,4],"log_dirs":["any","any","any"]},{"topic":"test","partition":2,"replicas":[3,4,0],"log_dirs":["any","any","any"]},{"topic":"test","partition":3,"replicas":[4,0,1],"log_dirs":["any","any","any"]},{"topic":"test","partition":4,"replicas":[0,1,2],"log_dirs":["any","any","any"]},{"topic":"test6p3r","partition":0,"replicas":[3,4,0],"log_dirs":["any","any","any"]},{"topic":"test6p3r","partition":1,"replicas":[4,0,1],"log_dirs":["any","any","any"]},{"topic":"test6p3r","partition":2,"replicas":[0,1,2],"log_dirs":["any","any","any"]},{"topic":"test6p3r","partition":3,"replicas":[1,2,3],"log_dirs":["any","any","any"]},{"topic":"test6p3r","partition":4,"replicas":[2,3,4],"log_dirs":["any","any","any"]},{"topic":"test6p3r","partition":5,"replicas":[3,0,1],"log_dirs":["any","any","any"]}]}
# Execute the reassignment plan
./kafka-reassign-partitions.sh --zookeeper 127.0.0.1:12181 --reassignment-json-file ./move_20230104.json --execute
Warning: --zookeeper is deprecated, and will be removed in a future version of Kafka.
Current partition replica assignment
{"version":1,"partitions":[{"topic":"test","partition":0,"replicas":[3,4,0],"log_dirs":["any","any","any"]},{"topic":"test","partition":1,"replicas":[4,0,1],"log_dirs":["any","any","any"]},{"topic":"test","partition":2,"replicas":[0,1,2],"log_dirs":["any","any","any"]},{"topic":"test","partition":3,"replicas":[1,2,3],"log_dirs":["any","any","any"]},{"topic":"test","partition":4,"replicas":[2,3,4],"log_dirs":["any","any","any"]},{"topic":"test6p3r","partition":0,"replicas":[2,0,1],"log_dirs":["any","any","any"]},{"topic":"test6p3r","partition":1,"replicas":[1,2,0],"log_dirs":["any","any","any"]},{"topic":"test6p3r","partition":2,"replicas":[0,1,2],"log_dirs":["any","any","any"]},{"topic":"test6p3r","partition":3,"replicas":[2,1,0],"log_dirs":["any","any","any"]},{"topic":"test6p3r","partition":4,"replicas":[1,0,2],"log_dirs":["any","any","any"]},{"topic":"test6p3r","partition":5,"replicas":[0,2,1],"log_dirs":["any","any","any"]}]}
Save this to use as the --reassignment-json-file option during rollback
Successfully started partition reassignments for test-0,test-1,test-2,test-3,test-4,test6p3r-0,test6p3r-1,test6p3r-2,test6p3r-3,test6p3r-4,test6p3r-5
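# Given the data volumes mentioned in step 2, the same --execute step can be rate-limited with
# --throttle (bytes/sec); the example below caps replication traffic at about 50 MB/s, and the
# throttle is cleared by the --verify run shown next.
./kafka-reassign-partitions.sh --zookeeper 127.0.0.1:12181 --reassignment-json-file ./move_20230104.json --execute --throttle 50000000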
# Verify the reassignment result
./kafka-reassign-partitions.sh --zookeeper 127.0.0.1:12181 --reassignment-json-file ./move_20230104.json --verify
Warning: --zookeeper is deprecated, and will be removed in a future version of Kafka.
Warning: because you are using the deprecated --zookeeper option, the results may be incomplete. Use --bootstrap-server instead for more accurate results.
Status of partition reassignment:
Reassignment of partition test-0 is complete.
Reassignment of partition test-1 is complete.
Reassignment of partition test-2 is complete.
Reassignment of partition test-3 is complete.
Reassignment of partition test-4 is complete.
Reassignment of partition test6p3r-0 is complete.
Reassignment of partition test6p3r-1 is complete.
Reassignment of partition test6p3r-2 is complete.
Reassignment of partition test6p3r-3 is complete.
Reassignment of partition test6p3r-4 is complete.
Reassignment of partition test6p3r-5 is complete.
Clearing broker-level throttles on brokers 0,1,2,3,4
Clearing topic-level throttles on topics test,test6p3r
# Check the partition distribution
./kafka-topics.sh --describe --zookeeper 127.0.0.1:12181 --topic test
Topic: test PartitionCount: 5 ReplicationFactor: 3 Configs:
Topic: test Partition: 0 Leader: 1 Replicas: 1,2,3 Isr: 1,2,3
Topic: test Partition: 1 Leader: 2 Replicas: 2,3,4 Isr: 2,3,4
Topic: test Partition: 2 Leader: 3 Replicas: 3,4,0 Isr: 0,3,4
Topic: test Partition: 3 Leader: 4 Replicas: 4,0,1 Isr: 0,1,4
Topic: test Partition: 4 Leader: 0 Replicas: 0,1,2 Isr: 0,1,2
./kafka-topics.sh --describe --zookeeper 127.0.0.1:12181 --topic test6p3r
Topic: test6p3r PartitionCount: 6 ReplicationFactor: 3 Configs:
Topic: test6p3r Partition: 0 Leader: 3 Replicas: 3,4,0 Isr: 0,3,4
Topic: test6p3r Partition: 1 Leader: 4 Replicas: 4,0,1 Isr: 1,0,4
Topic: test6p3r Partition: 2 Leader: 0 Replicas: 0,1,2 Isr: 1,2,0
Topic: test6p3r Partition: 3 Leader: 1 Replicas: 1,2,3 Isr: 1,2,3
Topic: test6p3r Partition: 4 Leader: 2 Replicas: 2,3,4 Isr: 2,3,4
Topic: test6p3r Partition: 5 Leader: 3 Replicas: 3,0,1 Isr: 1,0,3
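# Besides describing individual topics, a quick cluster-wide health check after the move is to
# list under-replicated partitions; empty output means all replicas are in sync.
./kafka-topics.sh --describe --zookeeper 127.0.0.1:12181 --under-replicated-partitions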
4. Rebalance topic partition leaders
# Check the leader distribution
./kafka-topics.sh --describe --zookeeper 127.0.0.1:12181 --topic test
Topic: test PartitionCount: 5 ReplicationFactor: 3 Configs:
Topic: test Partition: 0 Leader: 1 Replicas: 1,2,3 Isr: 1,2,3
Topic: test Partition: 1 Leader: 2 Replicas: 2,3,4 Isr: 2,3,4
Topic: test Partition: 2 Leader: 3 Replicas: 3,4,0 Isr: 0,3,4
Topic: test Partition: 3 Leader: 4 Replicas: 4,0,1 Isr: 0,1,4
Topic: test Partition: 4 Leader: 0 Replicas: 0,1,2 Isr: 0,1,2
# The partition leaders above are already spread across different brokers; if the distribution were unbalanced, the preferred-replica election could be triggered manually with the following command
./kafka-preferred-replica-election.sh --zookeeper 127.0.0.1:12181
# Alternatively, set the following parameter in the broker configuration (not recommended: although setting it to false may leave a partition unavailable, setting it to true carries a risk of data loss):
auto.leader.rebalance.enable=true
# For example, here is an unbalanced case:
./kafka-topics.sh --describe --zookeeper 127.0.0.1:12181 --topic test1
Topic: test1 PartitionCount: 5 ReplicationFactor: 3 Configs:
Topic: test1 Partition: 0 Leader: 1 Replicas: 4,0,1 Isr: 1,0,4
Topic: test1 Partition: 1 Leader: 0 Replicas: 0,1,2 Isr: 0,2,1
Topic: test1 Partition: 2 Leader: 2 Replicas: 1,2,3 Isr: 2,1,3
Topic: test1 Partition: 3 Leader: 4 Replicas: 2,3,4 Isr: 4,2,3
Topic: test1 Partition: 4 Leader: 0 Replicas: 3,4,0 Isr: 0,3,4
# Trigger the preferred-replica election manually
./kafka-preferred-replica-election.sh --zookeeper 127.0.0.1:12181
This tool is deprecated. Please use kafka-leader-election tool. Tracking issue: KAFKA-8405
Warning: --zookeeper is deprecated and will be removed in a future version of Kafka.
Use --bootstrap-server instead to specify a broker to connect to.
Created preferred replica election path with test1-3,test6-3,test2-4,test4-1,test3-0,test5-2,test4-3,test2-1,test3-2,test5-4,test5-1,test6-2,test4-0,test1-0,test1-4,test1-1,test3-3,test2-2,test5-0,test6-1,test4-4,test6-0,test2-3,test4-2,test1-2,test6-4,test5-3,test2-0,test3-1,test3-4
Successfully started preferred replica election for partitions Set(test1-3, test6-3, test2-4, test4-1, test3-0, test5-2, test4-3, test2-1, test3-2, test5-4, test5-1, test6-2, test4-0, test1-0, test1-4, test1-1, test3-3, test2-2, test5-0, test6-1, test4-4, test6-0, test2-3, test4-2, test1-2, test6-4, test5-3, test2-0, test3-1, test3-4)
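# As the deprecation notice above says, newer clusters should use kafka-leader-election.sh
# instead; a roughly equivalent call (a sketch, with the bootstrap port taken from the environment table):
./kafka-leader-election.sh --bootstrap-server 172.17.0.81:19092 --election-type preferred --all-topic-partitions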
# Checking again, the leaders are now spread across different brokers
./kafka-topics.sh --describe --zookeeper 127.0.0.1:12181 --topic test1
Topic: test1 PartitionCount: 5 ReplicationFactor: 3 Configs:
Topic: test1 Partition: 0 Leader: 4 Replicas: 4,0,1 Isr: 1,0,4
Topic: test1 Partition: 1 Leader: 0 Replicas: 0,1,2 Isr: 0,2,1
Topic: test1 Partition: 2 Leader: 1 Replicas: 1,2,3 Isr: 2,1,3
Topic: test1 Partition: 3 Leader: 2 Replicas: 2,3,4 Isr: 4,2,3
Topic: test1 Partition: 4 Leader: 3 Replicas: 3,4,0 Isr: 0,3,4