Kafka 3.1.0 Cluster Setup (Using the Bundled ZooKeeper)
Preface
Since version 2.8.0, Kafka has offered a new Raft (KRaft) mode for running a cluster, which means Kafka is gradually moving away from ZooKeeper. This article builds a production cluster with Kafka 3.1.0 using its bundled ZooKeeper mode.
Prerequisites
Kafka 3.1.0, download from: https://dlcdn.apache.org/kafka/3.1.0/kafka_2.13-3.1.0.tgz
Three machines:
192.168.50.125 kafka1
192.168.50.126 kafka2
192.168.50.127 kafka3
Deployment Steps
Basic network configuration:
Configure the IP address, hostname, and hostname resolution:
Edit /etc/sysconfig/network-scripts/ifcfg-ens33, using the kafka1 machine as an example:
BOOTPROTO="static" ONBOOT="yes" IPADDR=192.168.50.125 NETMASK=255.255.255.0 GATEWAY=192.168.50.1 DNS1=192.168.50.1
Change the machine's hostname:
vi /etc/hostname
kafka1
vi /etc/hosts
192.168.50.125 kafka1
192.168.50.126 kafka2
192.168.50.127 kafka3
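A quick sanity check that the names resolve (run the corresponding checks on the other machines as well):

ping -c 1 kafka2
ping -c 1 kafka3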
Configure the application software
Download Kafka with wget (or manually) and place it under /usr/local/:
Extract Kafka:
tar -zxvf kafka_2.13-3.1.0.tgz
Rename the extracted directory to kafka, for example:
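A one-line way to do this, assuming the archive was extracted in place under /usr/local:

cd /usr/local
mv kafka_2.13-3.1.0 kafka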
Kafka is middleware written in Scala and Java (the producer and consumer clients), so the system needs a JDK. After installing the JDK, configure the JDK and Kafka as follows (Kafka's install path here is /usr/local/kafka):
vi /etc/profile
export JAVA_HOME=/usr/local/jdk
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
export KAFKA_HOME=/usr/local/kafka
export PATH=$PATH:$KAFKA_HOME/bin
Reload the environment with source /etc/profile, then run java -version to verify that the JDK is installed correctly.
Configuring ZooKeeper is the most important step; Kafka can only work properly once ZooKeeper is working properly:
Configure zookeeper.properties:
dataDir=/usr/local/kafka/zkdata
dataLogDir=/usr/local/kafka/zklog
# the port at which the clients will connect
clientPort=2181
# disable the per-ip limit on the number of connections since this is a non-production config
maxClientCnxns=100
# Disable the adminserver by default to avoid port conflicts.
# Set the port to something non-conflicting if choosing to enable this
#admin.enableServer=true
#admin.serverPort=8080
tickTime=2000
initLimit=20
syncLimit=10
server.1=192.168.50.125:2182:2183
server.2=192.168.50.126:2182:2183
server.3=192.168.50.127:2182:2183
ZooKeeper's data and logs are placed inside the kafka directory so they are easier to manage.
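These directories are not created automatically, so create them on every machine first (paths as configured above):

mkdir -p /usr/local/kafka/zkdata /usr/local/kafka/zklog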
Then, in the ZooKeeper data directory /usr/local/kafka/zkdata, create a myid file containing the id that matches this machine's server.<id> entry. On kafka1:
echo 1 > myid
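The other two machines get the ids matching their server.<id> lines; run the equivalent inside /usr/local/kafka/zkdata on each:

echo 2 > myid    # on kafka2
echo 3 > myid    # on kafka3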
Once ZooKeeper is configured, configure Kafka. Open server.properties and set the following (shown for kafka1; the per-broker values follow the block):
broker.id=0
listeners=PLAINTEXT://192.168.50.125:9092
log.dirs=/usr/local/kafka/kafka-logs
zookeeper.connect=192.168.50.125:2181,192.168.50.126:2181,192.168.50.127:2181
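On the other two brokers only broker.id and listeners differ (this assumes the same log.dirs and zookeeper.connect on every machine):

# kafka2
broker.id=1
listeners=PLAINTEXT://192.168.50.126:9092
# kafka3
broker.id=2
listeners=PLAINTEXT://192.168.50.127:9092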
Starting the Cluster
Start ZooKeeper and Kafka (on every machine):
/usr/local/kafka/bin/zookeeper-server-start.sh -daemon /usr/local/kafka/config/zookeeper.properties
/usr/local/kafka/bin/kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties
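On a healthy node, jps should list a QuorumPeerMain process (ZooKeeper) and a Kafka process; a quick check:

jps | grep -E 'QuorumPeerMain|Kafka'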
Checking with the jps command, I found that Kafka shut down shortly after starting.
Looking through the log file kafkaServer.out revealed:
[2022-04-30 11:42:37,976] ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
kafka.common.InconsistentClusterIdException: The Cluster ID m4fHqPFyRzWr8RMl1M0M4g doesn't match stored clusterId Some(F0oUUIPCQYWuUXhsv1eCOw) in meta.properties. The broker is trying to join the wrong cluster. Configured zookeeper.connect may be wrong.
	at kafka.server.KafkaServer.startup(KafkaServer.scala:228)
	at kafka.Kafka$.main(Kafka.scala:109)
	at kafka.Kafka.main(Kafka.scala)
[2022-04-30 11:42:37,977] INFO shutting down (kafka.server.KafkaServer)
After some searching, the cause turned out to be that the Cluster ID generated when Kafka was rerun with the new configuration no longer matches the one previously stored on disk. There are three ways to fix it:
1. Delete the data directory (log.dirs) and start Kafka again.
2. Delete the meta.properties file inside the data directory.
3. Change the Cluster ID in meta.properties to the newly generated Cluster ID (recommended; see the sketch below).
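For option 3, meta.properties sits in the log.dirs directory configured above. A rough sketch of the edit on this broker (only cluster.id is changed, to the new ID reported in the error; the other lines stay as Kafka generated them):

# /usr/local/kafka/kafka-logs/meta.properties
version=0
broker.id=0
cluster.id=m4fHqPFyRzWr8RMl1M0M4g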
Disabling Kafka's INFO logging in the console
Add the following under the root node of logback.xml, or under the Loggers node of log4j2.xml:
<Logger name="org.apache.kafka" level="OFF"/>