Kafka Architecture
producer: publishes messages
consumer: reads messages
broker: a Kafka server instance that stores the data, similar to a container
topic: a label under which messages are grouped, similar to a tag
1. Create the data and log directories
mkdir -p /usr/local/kafka/data /usr/local/kafka/logs /usr/local/kafka/logs1 /usr/local/kafka/logs2 /usr/local/kafka/logs3
cd /usr/local/kafka
2. Download Kafka http://kafka.apache.org/downloads
wget http://apache.mirror.iphh.net/kafka/2.5.0/kafka_2.13-2.5.0.tgz
3. Extract the archive
tar -zvxf kafka_2.13-2.5.0.tgz
4. Set environment variables
vim /etc/profile    # add the two export lines below, then reload the profile
export KAFKA_HOME=/usr/local/kafka/kafka_2.13-2.5.0
export PATH=$KAFKA_HOME/bin:$PATH
source /etc/profile
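A quick sanity check that the environment change took effect (a minimal sketch; assumes a Bourne-compatible shell):
echo $KAFKA_HOME              # should print /usr/local/kafka/kafka_2.13-2.5.0
which kafka-server-start.sh   # should resolve to a script under $KAFKA_HOME/bin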
5. Edit the broker configuration
cd /usr/local/kafka/kafka_2.13-2.5.0/config
cp server.properties server1.properties
cp server.properties server2.properties
cp server.properties server3.properties
vim server.properties
broker.id=0                       # broker id; must be unique, set 1, 2, 3 in the other three files
listeners=PLAINTEXT://:9092       # uncomment this; default port is 9092, each of the four files needs its own port: 9092 9093 9094 9095
host.name=0.0.0.0                 # bind ip
log.dirs=/usr/local/kafka/logs    # log directory; must differ per file, use the logs, logs1, logs2, logs3 directories created in step 1
zookeeper.connect=localhost:2181  # ZooKeeper connection string
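For example, the values that have to differ in server1.properties might look like the lines below (a sketch following the directory layout from step 1; server2.properties and server3.properties follow the same pattern with broker.id 2/3, ports 9094/9095, and logs2/logs3):
# server1.properties - only the settings that change relative to server.properties
broker.id=1
listeners=PLAINTEXT://:9093
log.dirs=/usr/local/kafka/logs1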
6. Start Kafka
kafka-server-start.sh -daemon $KAFKA_HOME/config/server.properties &
kafka-server-start.sh -daemon $KAFKA_HOME/config/server1.properties &
kafka-server-start.sh -daemon $KAFKA_HOME/config/server2.properties &
kafka-server-start.sh -daemon $KAFKA_HOME/config/server3.properties &
-daemon: run the broker in the background
config/server.properties: path to the properties file for that broker
kafka-server-stop.sh    # stop Kafka
jps                     # list running Java processes
jps -m                  # show more detail about each process
Kafka startup error:
kafka.common.InconsistentClusterIdException: The Cluster ID BmhgroWiQEWltKkCY_4u4Q doesn't match stored clusterId Some(3BVX0S4nTjSvbCvqUU3WGQ) in meta.properties. The broker is trying to join the wrong cluster. Configured zookeeper.connect may be wrong.
Suggested fix: in the Kafka config directory, open server.properties and find the directory set by log.dirs=. In that directory locate the file meta.properties, then either update its cluster.id= value or delete the file (or all log files) from the directory and restart Kafka. (This did not solve my problem.)
cd /usr/local/kafka/kafka_2.13-2.5.0/logs
cat server.log | tail -n 200    # inspect the log written while Kafka starts
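Once all four brokers are up, an end-to-end check is to create a topic and push a message through it with the console tools that ship with Kafka (a minimal sketch; the topic name test and the replication/partition counts are arbitrary choices, not part of the original notes):
kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 3 --partitions 3 --topic test
kafka-topics.sh --describe --bootstrap-server localhost:9092 --topic test    # shows which brokers own each partition
kafka-console-producer.sh --broker-list localhost:9092 --topic test          # type a few messages, then Ctrl+C
kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning    # the messages should be printed back
If the producer and consumer see the same messages, the cluster and the ZooKeeper connection are working.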