Existing environment

Hostname | IP Address | Install Path | OS
--- | --- | --- | ---
sht-sgmhadoopdn-01 | 172.16.101.58 | /opt/kafka_2.12-1.0.0, /opt/kafka (symlink) | CentOS Linux release 7.3.1611 (Core)
sht-sgmhadoopdn-02 | 172.16.101.59 | /opt/kafka_2.12-1.0.0, /opt/kafka (symlink) | CentOS Linux release 7.3.1611 (Core)
sht-sgmhadoopdn-03 | 172.16.101.60 | /opt/kafka_2.12-1.0.0, /opt/kafka (symlink) | CentOS Linux release 7.3.1611 (Core)
Node to add to the cluster

sht-sgmhadoopdn-04 (172.16.101.66)
Procedure
I. Prepare the new node with the same environment as the existing cluster nodes
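In practice this means the same OS, JDK, Kafka package, and paths listed in the table above. A minimal preparation sketch (assuming passwordless SSH as root and a JDK already installed on the new node; the data directory is excluded so the new node starts clean):

# rsync -a --exclude=data /opt/kafka_2.12-1.0.0/ root@sht-sgmhadoopdn-04:/opt/kafka_2.12-1.0.0/
# ssh root@sht-sgmhadoopdn-04 "ln -sfn /opt/kafka_2.12-1.0.0 /opt/kafka && mkdir -p /opt/kafka/data"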
II. ZooKeeper configuration
1. Add the new node's entry to the ZooKeeper configuration (/opt/kafka/config/zookeeper.properties) on every cluster node
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/kafka/data
clientPort=2182
server.1=sht-sgmhadoopdn-01:2889:3889
server.2=sht-sgmhadoopdn-02:2889:3889
server.3=sht-sgmhadoopdn-03:2889:3889
server.4=sht-sgmhadoopdn-04:2889:3889
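One way to push the updated file to all four nodes in one pass (assuming passwordless SSH as root). Note that the ZooKeeper version bundled with Kafka 1.0 (3.4.x) does not support dynamic reconfiguration, so the three existing ZooKeeper servers must also be restarted one at a time to pick up the server.4 entry:

# for h in sht-sgmhadoopdn-0{1..4}; do scp /opt/kafka/config/zookeeper.properties root@$h:/opt/kafka/config/; done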
2. Create the server id (myid) on the new node
# echo 4 > /opt/kafka/data/myid
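Each server's myid must match its server.N entry in zookeeper.properties. A quick cross-check across the quorum (the hostname suffix happens to match the id here; assumes passwordless SSH as root):

# for n in 1 2 3 4; do echo -n "sht-sgmhadoopdn-0$n: "; ssh root@sht-sgmhadoopdn-0$n cat /opt/kafka/data/myid; done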
3. Start ZooKeeper on the new node
# /opt/kafka/bin/zookeeper-server-start.sh -daemon /opt/kafka/config/zookeeper.properties
4. Check the new node's ZooKeeper status
# echo stat | nc sht-sgmhadoopdn-04 2182 | grep Mode
Mode: follower
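The same probe can be run against every node to confirm the quorum now has one leader and three followers, e.g.:

# for h in sht-sgmhadoopdn-0{1..4}; do echo -n "$h "; echo stat | nc $h 2182 | grep Mode; done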
III. Kafka configuration
1. The new node's configuration file server.properties
broker.id=3
listeners=PLAINTEXT://172.16.101.66:9092
advertised.listeners=PLAINTEXT://172.16.101.66:9092
log.dirs=/opt/kafka/data
zookeeper.connect=sht-sgmhadoopdn-01:2182,sht-sgmhadoopdn-02:2182,sht-sgmhadoopdn-03:2182,sht-sgmhadoopdn-04:2182
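broker.id must be unique across the cluster; the existing brokers are 0, 1, and 2, hence 3 for the new node. A quick way to confirm there are no duplicates (assumes passwordless SSH as root):

# for h in sht-sgmhadoopdn-0{1..4}; do echo -n "$h: "; ssh root@$h "grep '^broker.id' /opt/kafka/config/server.properties"; done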
2. Add the new ZooKeeper node to zookeeper.connect in the Kafka configuration on every node in the cluster (the existing brokers must be restarted to pick up the change; see the rolling-restart sketch below)
zookeeper.connect=sht-sgmhadoopdn-01:2182,sht-sgmhadoopdn-02:2182,sht-sgmhadoopdn-03:2182,sht-sgmhadoopdn-04:2182
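A rolling-restart sketch for the three existing brokers, one at a time so the cluster stays available (kafka-server-stop.sh ships with Kafka; the 30-second pause is a rough wait for a clean shutdown):

# for h in sht-sgmhadoopdn-0{1..3}; do ssh root@$h "/opt/kafka/bin/kafka-server-stop.sh; sleep 30; /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties"; done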
3. Start Kafka on the new node
# /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties
4. Verify the cluster membership
# echo dump | nc sht-sgmhadoopdn-01 2182 | grep broker
/brokers/ids/0
/brokers/ids/1
/brokers/ids/2
/brokers/ids/3
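The registered broker ids can also be listed with the zookeeper-shell.sh utility that ships with Kafka, which should report [0, 1, 2, 3] here:

# /opt/kafka/bin/zookeeper-shell.sh sht-sgmhadoopdn-01:2182 ls /brokers/ids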
IV. Partition reassignment
1. Check the topics in the existing cluster and their current partition assignment
# kafka-topics.sh --zookeeper 172.16.101.58:2182,172.16.101.59:2182,172.16.101.60:2182,172.16.101.66:2182 --list
__consumer_offsets
test-topic

# kafka-topics.sh --zookeeper 172.16.101.58:2182,172.16.101.59:2182,172.16.101.60:2182,172.16.101.66:2182 --describe --topic test-topic
Topic:test-topic	PartitionCount:6	ReplicationFactor:3	Configs:
	Topic: test-topic	Partition: 0	Leader: 1	Replicas: 1,2,0	Isr: 1,2,0
	Topic: test-topic	Partition: 1	Leader: 2	Replicas: 2,0,1	Isr: 2,1,0
	Topic: test-topic	Partition: 2	Leader: 0	Replicas: 0,1,2	Isr: 0,1,2
	Topic: test-topic	Partition: 3	Leader: 1	Replicas: 1,2,0	Isr: 1,2,0
	Topic: test-topic	Partition: 4	Leader: 2	Replicas: 2,0,1	Isr: 2,1,0
	Topic: test-topic	Partition: 5	Leader: 0	Replicas: 0,1,2	Isr: 0,1,2
As the output shows, all six partitions of test-topic sit entirely on the original brokers (0, 1, 2); the newly added broker holds no replicas. We will now run a partition reassignment to spread the data evenly across all four brokers.
2. Create a JSON file listing the topics to move
# cat topics-to-move.json
{"topics":[{"topic":"test-topic"}],"version":1}
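If several topics were to be moved in one pass (for example __consumer_offsets, listed above), the topics array simply takes more entries:

{"topics":[{"topic":"test-topic"},{"topic":"__consumer_offsets"}],"version":1}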
3. Generate a reassignment plan
[root@sht-sgmhadoopdn-01 kafka]# kafka-reassign-partitions.sh --zookeeper 172.16.101.58:2182,172.16.101.59:2182,172.16.101.60:2182,172.16.101.66:2182 --topics-to-move-json-file topics-to-move.json --broker-list "0,1,2,3" --generate
Current partition replica assignment
{"version":1,"partitions":[{"topic":"test-topic","partition":0,"replicas":[1,2,0],"log_dirs":["any","any","any"]},{"topic":"test-topic","partition":5,"replicas":[0,1,2],"log_dirs":["any","any","any"]},{"topic":"test-topic","partition":3,"replicas":[1,2,0],"log_dirs":["any","any","any"]},{"topic":"test-topic","partition":2,"replicas":[0,1,2],"log_dirs":["any","any","any"]},{"topic":"test-topic","partition":4,"replicas":[2,0,1],"log_dirs":["any","any","any"]},{"topic":"test-topic","partition":1,"replicas":[2,0,1],"log_dirs":["any","any","any"]}]}
Proposed partition reassignment configuration
{"version":1,"partitions":[{"topic":"test-topic","partition":0,"replicas":[3,0,1],"log_dirs":["any","any","any"]},{"topic":"test-topic","partition":5,"replicas":[0,2,3],"log_dirs":["any","any","any"]},{"topic":"test-topic","partition":3,"replicas":[2,3,0],"log_dirs":["any","any","any"]},{"topic":"test-topic","partition":2,"replicas":[1,2,3],"log_dirs":["any","any","any"]},{"topic":"test-topic","partition":4,"replicas":[3,1,2],"log_dirs":["any","any","any"]},{"topic":"test-topic","partition":1,"replicas":[0,1,2],"log_dirs":["any","any","any"]}]}
Note that the "Proposed partition reassignment configuration" is only a plan generated by Kafka; nothing has been executed yet. Save the proposed plan to a separate file, expand_cluster_reassignment.json, and then execute that plan. It is also worth keeping the "Current partition replica assignment" output, since that is what a rollback would need.
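For example, the proposed plan from the previous step can be written out verbatim with a heredoc:

# cat > expand_cluster_reassignment.json <<'EOF'
{"version":1,"partitions":[{"topic":"test-topic","partition":0,"replicas":[3,0,1],"log_dirs":["any","any","any"]},{"topic":"test-topic","partition":5,"replicas":[0,2,3],"log_dirs":["any","any","any"]},{"topic":"test-topic","partition":3,"replicas":[2,3,0],"log_dirs":["any","any","any"]},{"topic":"test-topic","partition":2,"replicas":[1,2,3],"log_dirs":["any","any","any"]},{"topic":"test-topic","partition":4,"replicas":[3,1,2],"log_dirs":["any","any","any"]},{"topic":"test-topic","partition":1,"replicas":[0,1,2],"log_dirs":["any","any","any"]}]}
EOF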
4. Execute the reassignment
# kafka-reassign-partitions.sh --zookeeper 172.16.101.58:2182,172.16.101.59:2182,172.16.101.60:2182,172.16.101.66:2182 --reassignment-json-file expand_cluster_reassignment.json --execute
Current partition replica assignment

{"version":1,"partitions":[{"topic":"test-topic","partition":0,"replicas":[1,2,0],"log_dirs":["any","any","any"]},{"topic":"test-topic","partition":5,"replicas":[0,1,2],"log_dirs":["any","any","any"]},{"topic":"test-topic","partition":3,"replicas":[1,2,0],"log_dirs":["any","any","any"]},{"topic":"test-topic","partition":2,"replicas":[0,1,2],"log_dirs":["any","any","any"]},{"topic":"test-topic","partition":4,"replicas":[2,0,1],"log_dirs":["any","any","any"]},{"topic":"test-topic","partition":1,"replicas":[2,0,1],"log_dirs":["any","any","any"]}]}

Save this to use as the --reassignment-json-file option during rollback
Successfully started reassignment of partitions.
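Reassignment copies partition data between brokers and can saturate the network on a busy cluster. kafka-reassign-partitions.sh accepts a --throttle option (bytes per second) to cap replication traffic; the value below is only an example:

# kafka-reassign-partitions.sh --zookeeper 172.16.101.58:2182,172.16.101.59:2182,172.16.101.60:2182,172.16.101.66:2182 --reassignment-json-file expand_cluster_reassignment.json --execute --throttle 50000000

The throttle is lifted automatically once a --verify run reports that all reassignments have completed.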
Check the progress of the reassignment with --verify:
# kafka-reassign-partitions.sh --zookeeper 172.16.101.58:2182,172.16.101.59:2182,172.16.101.60:2182,172.16.101.66:2182 --reassignment-json-file expand_cluster_reassignment.json --verify
Status of partition reassignment: 
Reassignment of partition test-topic-4 is still in progress
Reassignment of partition test-topic-1 completed successfully
Reassignment of partition test-topic-0 is still in progress
Reassignment of partition test-topic-5 is still in progress
Reassignment of partition test-topic-2 is still in progress
Reassignment of partition test-topic-3 is still in progress
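Re-running --verify until every partition reports "completed successfully" can be automated, e.g.:

# watch -n 10 '/opt/kafka/bin/kafka-reassign-partitions.sh --zookeeper 172.16.101.58:2182 --reassignment-json-file expand_cluster_reassignment.json --verify'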
5. Once the reassignment above has finished, check the topic's partition assignment again
# kafka-topics.sh --zookeeper 172.16.101.58:2182 --describe --topic test-topic
Topic:test-topic	PartitionCount:6	ReplicationFactor:3	Configs:
	Topic: test-topic	Partition: 0	Leader: 1	Replicas: 3,0,1	Isr: 0,1,3
	Topic: test-topic	Partition: 1	Leader: 2	Replicas: 0,1,2	Isr: 0,1,2
	Topic: test-topic	Partition: 2	Leader: 1	Replicas: 1,2,3	Isr: 1,2,3
	Topic: test-topic	Partition: 3	Leader: 2	Replicas: 2,3,0	Isr: 0,2,3
	Topic: test-topic	Partition: 4	Leader: 2	Replicas: 3,1,2	Isr: 1,2,3
	Topic: test-topic	Partition: 5	Leader: 0	Replicas: 0,2,3	Isr: 0,2,3
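Every partition now has a replica on broker 3, but the leaders still sit only on brokers 0, 1, and 2, even where 3 is the preferred (first-listed) replica, as for partitions 0 and 4. If leader balance matters, a preferred replica election can be triggered with the tool bundled with Kafka 1.0:

# kafka-preferred-replica-election.sh --zookeeper 172.16.101.58:2182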