Setting up a Kafka 2.6.0 cluster with Docker on CentOS 7

The current working directory is /root.

1. Create the ZooKeeper directories

# mkdir -p /root/kafka_cluster
# cd /root/kafka_cluster
# mkdir -p zookeeper/{conf,data,datalog}

2. Create the ZooKeeper configuration file zoo.cfg

#zoo.cfg needs to be configured on all 3 VMs

#cd /root/kafka_cluster/zookeeper/conf/

#vi zoo.cfg

Add the following content:

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
# The zookeeper image maps ZooKeeper's data and dataLog directories to /data and /datalog.
# This config file is used by the ZooKeeper container, so the paths below are container paths, not paths on the host; keep the two clearly separate.
dataDir=/data
dataLogDir=/datalog
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=192.168.59.102:2888:3888
server.2=192.168.59.103:2888:3888
server.3=192.168.59.104:2888:3888

#If quorum election fails with connection errors, uncomment the following line

#quorumListenOnAllIPs=true
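The server.N entries use port 2888 for follower-to-leader traffic and 3888 for leader election, in addition to clientPort 2181. If firewalld is enabled on the CentOS 7 hosts, these ports must be reachable between the three VMs; a minimal sketch, assuming firewalld is in use (run on each host):

# firewall-cmd --permanent --add-port=2181/tcp --add-port=2888/tcp --add-port=3888/tcp
# firewall-cmd --reload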

 

3. Edit the ZooKeeper myid file

#The myid on the 3 VMs is 1, 2 and 3 respectively

#cd /root/kafka_cluster/zookeeper/data

#vi myid
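The number must match the server.N entry for that host in zoo.cfg. As a non-interactive alternative to vi, for example:

On 192.168.59.102:
# echo 1 > /root/kafka_cluster/zookeeper/data/myid
On 192.168.59.103:
# echo 2 > /root/kafka_cluster/zookeeper/data/myid
On 192.168.59.104:
# echo 3 > /root/kafka_cluster/zookeeper/data/myid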

4. Start the ZooKeeper container

#Start it on all 3 VMs

#docker run -tid --name=zookeeper --restart=always --net=host \
-v /root/kafka_cluster/zookeeper/conf:/conf \
-v /root/kafka_cluster/zookeeper/data:/data \
-v /root/kafka_cluster/zookeeper/datalog:/datalog \
zookeeper

Command breakdown:

--net=host: the container shares the host's network namespace, so it sees the same network view as the host (host mode carries some security risk; in security-sensitive production environments it is better to avoid it and consider one of the other network modes)
-v: maps a host directory into the container
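Because of --net=host, the ZooKeeper ports are bound directly on the host once the container is running. A quick sanity check (assuming the ss tool from iproute is installed):

# ss -lntp | grep -E '2181|2888|3888'

2181 and 3888 should be listening on every node; 2888 normally shows up only on the current leader.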

#Enter the container and check its status
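A typical check, assuming the official zookeeper image (which puts the ZooKeeper scripts on the PATH); one node should report Mode: leader and the other two Mode: follower:

# docker exec -it zookeeper zkServer.sh status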

With that, the ZooKeeper setup is complete.

 

5. Create the Kafka directories

# cd /root/kafka_cluster/
# mkdir -p /root/kafka_cluster/kafka/{config,data,logs}

 

6. Start Kafka

#The Kafka version installed here is 2.6.0

#kafka1
#The following command writes the settings into server.properties
#With the config directory mapped, the container stopped about 10 seconds after starting because it could not find the configuration file; workarounds can be found online, so for now comment out the conf directory mapping

docker run -itd --name=kafka --net=host \
-v /etc/hosts:/etc/hosts \
-v /root/kafka_cluster/kafka/data:/opt/bitnami/kafka/data \
-v /root/kafka_cluster/kafka/config:/opt/bitnami/kafka/config \
-v /root/kafka_cluster/kafka/logs:/opt/bitnami/kafka/logs \
-e KAFKA_ADVERTISED_HOST_NAME=192.168.59.102 \
-e HOST_IP=192.168.59.102 -e KAFKA_ADVERTISED_PORT=9092 \
-e KAFKA_ZOOKEEPER_CONNECT=192.168.59.102:2181,192.168.59.103:2181,192.168.59.104:2181 \
-e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.59.102:9092 \
-e ALLOW_PLAINTEXT_LISTENER=yes \
-e KAFKA_BROKER_ID=102 \
bitnami/kafka
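If the container exits shortly after starting (as noted above), the logs usually show the reason; a quick check:

# docker ps -a | grep kafka
# docker logs --tail 50 kafka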

  

#kafka2


docker run -itd --name=kafka --net=host \
-v /etc/hosts:/etc/hosts \
-v /root/kafka_cluster/kafka/data:/opt/bitnami/kafka/data \
-v /root/kafka_cluster/kafka/config:/opt/bitnami/kafka/config \
-v /root/kafka_cluster/kafka/logs:/opt/bitnami/kafka/logs \
-e KAFKA_ADVERTISED_HOST_NAME=192.168.59.103 \
-e HOST_IP=192.168.59.103 -e KAFKA_ADVERTISED_PORT=9092 \
-e KAFKA_ZOOKEEPER_CONNECT=192.168.59.102:2181,192.168.59.103:2181,192.168.59.104:2181 \
-e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.59.103:9092 \
-e ALLOW_PLAINTEXT_LISTENER=yes \
-e KAFKA_BROKER_ID=103 \
bitnami/kafka

#kafka3


docker run -itd --name=kafka --net=host \
-v /etc/hosts:/etc/hosts \
-v /root/kafka_cluster/kafka/data:/opt/bitnami/kafka/data \
-v /root/kafka_cluster/kafka/config:/opt/bitnami/kafka/config \
-v /root/kafka_cluster/kafka/logs:/opt/bitnami/kafka/logs \
-e KAFKA_ADVERTISED_HOST_NAME=192.168.59.104 \
-e HOST_IP=192.168.59.104 -e KAFKA_ADVERTISED_PORT=9092 \
-e KAFKA_ZOOKEEPER_CONNECT=192.168.59.102:2181,192.168.59.103:2181,192.168.59.104:2181 \
-e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.59.104:9092 \
-e ALLOW_PLAINTEXT_LISTENER=yes \
-e KAFKA_BROKER_ID=104 \
bitnami/kafka
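Once all three containers are up, the broker IDs should be registered in ZooKeeper. One way to confirm, assuming the official zookeeper image with zkCli.sh on the PATH (expected output: [102, 103, 104]):

# docker exec -it zookeeper zkCli.sh -server 127.0.0.1:2181 ls /brokers/ids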

7. Start the kafka-manager container

#docker run -itd --restart=always --name=kafka-manager -p 9000:9000 -e ZK_HOSTS="192.168.59.102:2181" sheepkiller/kafka-manager:alpine

8. Verification

#Run the following in window 1_102 (host 192.168.59.102)

#Go to the bin directory under the Kafka root

#cd /opt/bitnami/kafka/bin/
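The bin path above is inside the Kafka container, so presumably a shell in the container is needed first:

#docker exec -it kafka bash
#cd /opt/bitnami/kafka/bin/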

#Create a topic named test with 2 partitions and a replication factor of 2

#./kafka-topics.sh --create --zookeeper 192.168.59.102:2181 --replication-factor 2 --partitions 2 --topic test
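To confirm the partitions and replicas were spread across the brokers, the topic can be described (from the same bin directory):

#./kafka-topics.sh --describe --zookeeper 192.168.59.102:2181 --topic test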

#Start a producer

#./kafka-console-producer.sh --broker-list 192.168.59.102:9092 --topic test

Type a message: aaaaaaaaa

In the newly opened window 2_103 (host 192.168.59.103):

#Start a consumer

#./kafka-console-consumer.sh --bootstrap-server 192.168.59.102:9092 --topic test --from-beginning

The consumer window shows the message (aaaaaaaaa); the cluster setup is complete.

PS:

Through this web UI you can register Kafka clusters, create topics, manage topic partitions, monitor topic activity, and so on.

Browser address: http://192.168.59.102:9000
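A quick reachability check before opening the browser (optional):

# curl -I http://192.168.59.102:9000

In the UI, the cluster is typically registered by pointing it at the same ZooKeeper addresses used above.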
