Kafka Setup Part 2: Cluster Setup
Kafka cluster setup
This chapter explains how to install a Kafka cluster made up of three machines; once built, the cluster can be used in a production environment.
The three servers' IPs: 192.168.0.104, 192.168.0.105, 192.168.0.106
Required packages:
1. JDK 1.8: jdk-8u211-linux-x64.tar
2. ZooKeeper: zookeeper-3.4.14.tar
3. Kafka: kafka_2.11-2.1.1.tgz
(I) ZooKeeper setup (perform on all three machines)
1. Software environment
192.168.0.104 server1
192.168.0.105 server2
192.168.0.106 server3
Copy the installation packages into /opt/kafka:
cd /opt
mkdir kafka
2. Install JDK 1.8
(1) Extract the package
tar -xvf jdk-8u211-linux-x64.tar
(2) Move it to the installation directory
mv jdk1.8.0_211 /usr/local
(3) Set environment variables
vi /etc/profile
export JAVA_HOME=/usr/local/jdk1.8.0_211
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$JAVA_HOME/bin:$PATH
source /etc/profile  # apply the changes
(4) Verify the installation
cd /
echo $JAVA_HOME
echo $PATH
echo $CLASSPATH
java -version  # check the version
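The checks in step (4) can be rehearsed without touching the real /etc/profile. A minimal sketch that writes the same three exports to a throwaway file (the /tmp path is illustrative), sources it, and verifies the result:

```shell
# Write the same profile additions to a scratch file instead of /etc/profile.
# The quoted 'EOF' keeps the variables unexpanded until the file is sourced.
cat > /tmp/java-profile-demo.sh <<'EOF'
export JAVA_HOME=/usr/local/jdk1.8.0_211
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$JAVA_HOME/bin:$PATH
EOF

# Source it and run the same checks as step (4).
. /tmp/java-profile-demo.sh
echo "$JAVA_HOME"   # -> /usr/local/jdk1.8.0_211
case ":$PATH:" in *":$JAVA_HOME/bin:"*) echo "PATH ok" ;; esac
```

On a real machine you would also run `java -version` afterwards; that requires the JDK to actually be extracted under $JAVA_HOME.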
3. Create directories
mkdir /opt/zookeeper  # project directory
mkdir /opt/zookeeper/zkdata  # snapshot logs
mkdir /opt/zookeeper/zkdatalog  # transaction logs
Put zookeeper-3.4.14.tar under /opt/zookeeper/:
cp zookeeper-3.4.14.tar /opt/zookeeper/
tar -xvf zookeeper-3.4.14.tar
4. Edit the configuration
cd /opt/zookeeper/zookeeper-3.4.14/conf
cp zoo_sample.cfg zoo.cfg
vi zoo.cfg
# add the following
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/zookeeper/zkdata
dataLogDir=/opt/zookeeper/zkdatalog
clientPort=12181
server.1=192.168.0.104:12888:13888
server.2=192.168.0.105:12888:13888
server.3=192.168.0.106:12888:13888
5. Create the myid file
#server1
echo "1" > /opt/zookeeper/zkdata/myid
#server2
echo "2" > /opt/zookeeper/zkdata/myid
#server3
echo "3" > /opt/zookeeper/zkdata/myid
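The three echo commands above differ only in the id written. A small sketch that derives myid from the machine's IP, mirroring the server.N lines in zoo.cfg (the write_myid helper and the scratch directory are illustrative, not part of ZooKeeper):

```shell
# write_myid DIR IP: write the myid matching IP into DIR/myid.
write_myid() {
  dir="$1"; ip="$2"
  case "$ip" in
    192.168.0.104) id=1 ;;   # server.1 in zoo.cfg
    192.168.0.105) id=2 ;;   # server.2
    192.168.0.106) id=3 ;;   # server.3
    *) echo "unknown ip: $ip" >&2; return 1 ;;
  esac
  mkdir -p "$dir"
  echo "$id" > "$dir/myid"
}

# Example: simulate server2 in a scratch directory.
write_myid /tmp/zkdata-demo 192.168.0.105
cat /tmp/zkdata-demo/myid   # -> 2
```

On the real machines you would call it with /opt/zookeeper/zkdata and the host's own IP; the id must stay consistent with the server.N entries in zoo.cfg.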
6. Key shell scripts
/opt/zookeeper/zookeeper-3.4.14/bin
zkServer.sh: the main management script
zkEnv.sh: the main configuration script; it sets environment variables when the ZooKeeper cluster starts
7. Start the service and check it
cd /opt/zookeeper/zookeeper-3.4.14/bin
# start the service (required on all 3 machines)
./zkServer.sh start
# check the server status
./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: follower  # whether this node is the leader
You can check the ZooKeeper process with "jps"; QuorumPeerMain is the main class of the whole ZooKeeper process.
# run jps
[root@zhu bin]# jps
27813 QuorumPeerMain
27909 Jps
8. Install Scala 2.11
tar -xvf scala-2.11.12.tar
vi /etc/profile  # add the following:
export SCALA_HOME=/opt/kafka/scala-2.11.12
export PATH=$SCALA_HOME/bin:$PATH
source /etc/profile  # apply the changes
-------------------------------Kafka cluster installation-------------------------------------------------------------
Kafka log location: /opt/kafka/kafka_2.11-2.1.1/logs
(II) Kafka cluster setup (all three machines)
1. Create directories
mkdir -p /opt/kafka/kafkalogs
2. Extract the package
cd /opt/kafka
tar -zxvf kafka_2.11-2.1.1.tgz
3. Edit the configuration
cd /opt/kafka/kafka_2.11-2.1.1/config
vi server.properties  # adjust this file to match your production setup
# add the following; note the IP differs per machine
# host 192.168.0.104
broker.id=104
listeners=PLAINTEXT://192.168.0.104:9092
port=9092
advertised.listeners=PLAINTEXT://192.168.0.104:9092
advertised.port=9092
host.name=192.168.0.104
advertised.host.name=192.168.0.104
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/opt/kafka/kafkalogs/
num.partitions=2
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.0.104:12181,192.168.0.105:12181,192.168.0.106:12181
zookeeper.connection.timeout.ms=60000
group.initial.rebalance.delay.ms=0
#inter.broker.protocol.version=0.9.0
#log.message.format.version=0.9.0
unclean.leader.election.enable=true
auto.create.topics.enable=true
default.replication.factor=3
# host 192.168.0.105
broker.id=105
listeners=PLAINTEXT://192.168.0.105:9092
port=9092
advertised.listeners=PLAINTEXT://192.168.0.105:9092
advertised.port=9092
host.name=192.168.0.105
advertised.host.name=192.168.0.105
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/opt/kafka/kafkalogs/
num.partitions=2
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.0.104:12181,192.168.0.105:12181,192.168.0.106:12181
zookeeper.connection.timeout.ms=60000
group.initial.rebalance.delay.ms=0
#inter.broker.protocol.version=0.9.0
#log.message.format.version=0.9.0
unclean.leader.election.enable=true
auto.create.topics.enable=true
default.replication.factor=3
# host 192.168.0.106
broker.id=106
listeners=PLAINTEXT://192.168.0.106:9092
port=9092
advertised.listeners=PLAINTEXT://192.168.0.106:9092
advertised.port=9092
host.name=192.168.0.106
advertised.host.name=192.168.0.106
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/opt/kafka/kafkalogs/
num.partitions=2
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.0.104:12181,192.168.0.105:12181,192.168.0.106:12181
zookeeper.connection.timeout.ms=60000
group.initial.rebalance.delay.ms=0
#inter.broker.protocol.version=0.9.0
#log.message.format.version=0.9.0
unclean.leader.election.enable=true
auto.create.topics.enable=true
default.replication.factor=3
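The three server.properties files above are identical except for broker.id and the IP address. A sketch that generates each host's file from one template; the output directory and the broker.id-equals-last-octet convention follow this tutorial and are not a Kafka requirement:

```shell
# Generate server.properties for each broker; only broker.id and the IP vary.
OUT="${OUT:-/tmp/kafka-conf-demo}"   # illustrative output dir; use config/ on the real hosts
mkdir -p "$OUT"

for ip in 192.168.0.104 192.168.0.105 192.168.0.106; do
  id="${ip##*.}"                     # tutorial convention: broker.id = last octet of the IP
  f="$OUT/server-$id.properties"
  cat > "$f" <<EOF
broker.id=$id
listeners=PLAINTEXT://$ip:9092
advertised.listeners=PLAINTEXT://$ip:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/opt/kafka/kafkalogs/
num.partitions=2
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.0.104:12181,192.168.0.105:12181,192.168.0.106:12181
zookeeper.connection.timeout.ms=60000
group.initial.rebalance.delay.ms=0
unclean.leader.election.enable=true
auto.create.topics.enable=true
default.replication.factor=3
EOF
  echo "wrote $f"
done
```

Copy the matching file to each host as server.properties; generating the files this way keeps the shared settings from drifting between brokers.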
4. Start the service and test
Set environment variables:
vi /etc/profile  # add the following:
export KAFKA_HOME=/opt/kafka/kafka_2.11-2.1.1
export PATH=$KAFKA_HOME/bin:$PATH
source /etc/profile  # apply the changes
# start
[root@zhu bin]# kafka-server-start.sh -daemon ../config/server.properties
[root@zhu bin]# jps
27813 QuorumPeerMain
28847 Kafka
28882 Jps
# shutdown
[root@zhu bin]# jps
27813 QuorumPeerMain
28847 Kafka
-------------------------------Common operations-----------------------------------------------
5. Create a topic to verify the cluster works
# create a topic
kafka-topics.sh --create --zookeeper 192.168.0.104:12181 --replication-factor 2 --partitions 2 --topic yc
# explanation
--replication-factor 2  # keep two replicas
--partitions 2  # create 2 partitions
--topic yc  # the topic name is yc
Delete a topic:
kafka-topics.sh --delete --zookeeper 192.168.0.104:12181 --topic yc
6. Useful commands
(1) List topics
kafka-topics.sh --list --zookeeper localhost:12181  or  kafka-topics.sh --list --zookeeper 192.168.0.104:12181
# lists every topic we have created
(2) Check topic status
[root@zhu bin]# kafka-topics.sh --describe --zookeeper localhost:12181 --topic yc
Topic:yc PartitionCount:2 ReplicationFactor:2 Configs:
Topic: yc Partition: 0 Leader: 104 Replicas: 104,105 Isr: 104,105
Topic: yc Partition: 1 Leader: 105 Replicas: 105,106 Isr: 105,106
# 2 partitions, replication factor 2; topic yc has partitions 0 and 1
# explanation
partition: the partition id
leader: the id of the broker currently responsible for reads and writes
replicas: the list of all brokers holding a replica of this partition
isr: the subset of replicas that are currently alive and in sync
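Because the --describe output is plain text, it can be post-processed with standard tools. An illustrative sketch (not a Kafka utility) that extracts each partition's leader and flags partitions whose ISR is smaller than the replica list, using the yc output shown above as sample input:

```shell
# Sample text captured from `kafka-topics.sh --describe` (as shown above).
describe_out='Topic: yc  Partition: 0  Leader: 104  Replicas: 104,105  Isr: 104,105
Topic: yc  Partition: 1  Leader: 105  Replicas: 105,106  Isr: 105,106'

# Print "partition=N leader=M" per partition; warn when Isr < Replicas.
report=$(echo "$describe_out" | awk '
/Partition:/ {
  for (i = 1; i <= NF; i++) {
    if ($i == "Partition:") p = $(i+1)
    if ($i == "Leader:")    l = $(i+1)
    if ($i == "Replicas:")  r = $(i+1)
    if ($i == "Isr:")       s = $(i+1)
  }
  printf "partition=%s leader=%s\n", p, l
  if (split(r, a, ",") != split(s, b, ",")) printf "partition=%s under-replicated\n", p
}')
echo "$report"
```

On a live cluster you would pipe the real describe command into the same awk program; a shrinking ISR is an early sign of a struggling broker.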
(3) Reassign partitions
bin/kafka-reassign-partitions.sh
--zookeeper <urls>  the ZooKeeper connection address, in host:port format
--broker-list <brokerlist>  the brokers the partitions should be reassigned to, in the format "0,1,2"
--topics-to-move-json-file <topics to reassign json file path>  the path of a JSON file listing the topics to reassign. Exactly one of this option and --manual-assignment-json-file must be specified.
The file format is {"topics": [{"topic": "test"},{"topic": "test1"}], "version":1 }
--manual-assignment-json-file <manual assignment json file path>  the path of a JSON file containing a manual assignment plan. Exactly one of this option and --topics-to-move-json-file must be specified.
The file format is {"partitions": [{"topic": "test", "partition": 1, "replicas": [1,2,3] }], "version":1 }
--status-check-json-file <partition reassignment json file path>  the path of a JSON file listing the partitions and the new replica lists to assign them to. This file can be obtained from the result of a dry run.
--execute  with this option the reassignment is actually performed; without it the tool only does a dry run.
Examples:
<1> Move topics test and test1 to the new brokers 3 and 4
./kafka-reassign-partitions.sh --zookeeper 172.1.1.1:2181 --broker-list "3,4" --topics-to-move-json-file topicMove.json --execute
The content of topicMove.json is: {"topics":[{"topic":"test"},{"topic":"test1"}],"version":1}
<2> Move partition 1 of topic test to brokers 1, 2 and 4
./kafka-reassign-partitions.sh --zookeeper 172.1.1.1:2181 --broker-list "1,2,4" --manual-assignment-json-file manualAssignment.json --execute
The content of manualAssignment.json is:
{"partitions":[{"topic":"test","partition":1,"replicas":[1,2,4]}],"version":1}
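The JSON files in these examples are easy to get wrong by hand. A sketch that writes manualAssignment.json and validates it before handing it to the tool (python3 is assumed to be available for the validation step; the /tmp path is illustrative):

```shell
# Write the manual-assignment plan from example <2>.
cat > /tmp/manualAssignment.json <<'EOF'
{"partitions":[{"topic":"test","partition":1,"replicas":[1,2,4]}],"version":1}
EOF

# Fail fast on malformed JSON instead of letting kafka-reassign-partitions.sh reject it.
python3 -m json.tool /tmp/manualAssignment.json > /dev/null && echo "manualAssignment.json is valid"
```

A stray comma or a ":" typed as "," (as in an earlier published version of example <1>) is caught immediately this way.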
(4) Increase a topic's partition count:
Using the --alter option of kafka-topics.sh, increase topic1's partitions from 1 to 6:
./kafka-topics.sh --alter --topic topic1 --zookeeper localhost:12181 --partitions 6
(5) Manually rebalance a topic so each partition elects its preferred replica as leader
./kafka-preferred-replica-election.sh
--zookeeper  the ZooKeeper connection address, in host:port format
--path-to-json-file  the path of a file listing the partitions that need a new leader election; the file format is
{"partitions": [{"topic": "test", "partition": 1},{"topic": "test", "partition": 2}]}
It defaults to all existing partitions.
For example:
<1> ./kafka-preferred-replica-election.sh --zookeeper 172.1.1.1:2181
<2> ./kafka-preferred-replica-election.sh --zookeeper 172.1.1.1:2181 --path-to-json-file partitionList.json
The content of partitionList.json is {"partitions":[{"topic":"test","partition":1},{"topic":"test","partition":2}]}
(6) Check consumer offsets and lag
./kafka-consumer-groups.sh --bootstrap-server 192.168.0.104:9092,192.168.0.105:9092,192.168.0.106:9092 --describe --group group_test1
(7) Dynamically increase a topic's replicas
1. generate mode: given the topics to reassign, automatically produces a reassign plan (without executing it)
2. execute mode: reassigns the partitions according to the given reassign plan
3. verify mode: verifies whether the partition reassignment succeeded
./bin/kafka-reassign-partitions.sh --zookeeper localhost:12181 --reassignment-json-file replication.json --verify
# the content of replication.json (write it on a single line, otherwise there will be problems):
{"partitions":[{"topic":"topic_test1","partition":0,"replicas":[104,105,106]},{"topic":"topic_test1","partition":1,"replicas":[104,105,106]},{"topic":"topic_test1","partition":2,"replicas":[104,105,106]},{"topic":"topic_test1","partition":3,"replicas":[104,105,106]}],"version":1}
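Since the plan must sit on a single line, generating it is safer than typing it. A sketch that builds replication.json for the four partitions of topic_test1 (the partition count and replica list mirror the example above; the /tmp path is illustrative):

```shell
# Build the one-line reassignment plan for partitions 0-3 of topic_test1.
{
  printf '{"partitions":['
  sep=''
  for p in 0 1 2 3; do
    printf '%s{"topic":"topic_test1","partition":%s,"replicas":[104,105,106]}' "$sep" "$p"
    sep=','
  done
  printf '],"version":1}\n'
} > /tmp/replication.json

wc -l /tmp/replication.json   # the file is exactly one line
```

Run the same plan with --execute first, then with --verify as shown above to confirm every partition reports "completed successfully".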