Deploying a ZooKeeper Cluster and a Kafka Cluster with Docker and Connecting Them
This article walks through a workable single-machine setup: deploying a ZooKeeper cluster and a Kafka cluster with Docker so that the two clusters can reach each other.
0. Preparation
Create a zk directory. Everything that follows lives there: the yml files that define the ZooKeeper and Kafka clusters, and the host directories that will be mounted into the ZooKeeper and Kafka containers.
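A minimal sketch of that layout, assuming the zk directory is created under your current working directory (the exact location is up to you):

mkdir -p zk && cd zk
# data and transaction-log directories mounted by zookeeper.yml below
mkdir -p zoo1/data zoo1/datalog zoo2/data zoo2/datalog zoo3/data zoo3/datalog
# log directories mounted by kafka.yml below
mkdir -p kafka1/logs kafka2/logs kafka3/logs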
1. Create a Docker network so that ZooKeeper and Kafka can reach each other
docker network create --driver bridge --subnet 172.168.0.0/16 --gateway 172.168.0.1 zk_network
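As an optional check, you can inspect the network to confirm it was created with the expected subnet and gateway:

docker network inspect zk_network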
2. Deploy the ZooKeeper cluster with docker-compose
In the zk directory, create zookeeper.yml with the following content and save it:
version: '3.1'

services:
  zoo1:
    image: zookeeper:3.4.11
    restart: always
    hostname: zoo1
    container_name: zoo1
    ports:
      - 2181:2181
    volumes:
      - "./zoo1/data:/data"
      - "./zoo1/datalog:/datalog"
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
    networks:
      zk_default:
        ipv4_address: 172.168.0.2

  zoo2:
    image: zookeeper:3.4.11
    restart: always
    hostname: zoo2
    container_name: zoo2
    ports:
      - 2182:2181
    volumes:
      - "./zoo2/data:/data"
      - "./zoo2/datalog:/datalog"
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
    networks:
      zk_default:
        ipv4_address: 172.168.0.3

  zoo3:
    image: zookeeper:3.4.11
    restart: always
    hostname: zoo3
    container_name: zoo3
    ports:
      - 2183:2181
    volumes:
      - "./zoo3/data:/data"
      - "./zoo3/datalog:/datalog"
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
    networks:
      zk_default:
        ipv4_address: 172.168.0.4

networks:
  zk_default:
    external:
      name: zk_network
Note that the official zookeeper:3.4.11 image is used. After saving, run docker-compose to bring the cluster up; the -f option specifies which yml file to use.
docker-compose -f zookeeper.yml up -d
To verify that the ZooKeeper cluster was deployed successfully, check the status inside each container.
docker exec -it zoo3 /bin/sh
Once inside the container's shell, run zkServer.sh status and look at the Mode line.
/zookeeper-3.4.11 # zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Mode: leader
In my deployment, the three nodes reported follower, follower, and leader respectively.
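Rather than entering each container by hand, you can also query all three nodes in one go with docker exec. A small convenience sketch, assuming zkServer.sh is on the PATH inside the containers, as in the session above:

for name in zoo1 zoo2 zoo3; do
  echo "--- $name ---"
  docker exec "$name" zkServer.sh status
done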
3. Deploy the Kafka cluster with docker-compose
In the zk directory, create kafka.yml with the following content and save it:
version: '2'

services:
  kafka1:
    image: wurstmeister/kafka
    restart: always
    hostname: kafka1
    container_name: kafka1
    ports:
      - "9092:9092"
      - "9992:9992"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.1.16
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181,zoo2:2181,zoo3:2181
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      JMX_PORT: 9992
    volumes:
      - ./kafka1/logs:/kafka
    external_links:
      - zoo1
      - zoo2
      - zoo3
    networks:
      zk_default:
        ipv4_address: 172.168.0.5

  kafka2:
    image: wurstmeister/kafka
    restart: always
    hostname: kafka2
    container_name: kafka2
    ports:
      - "9093:9092"
      - "9993:9993"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.1.16
      KAFKA_ADVERTISED_PORT: 9093
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181,zoo2:2181,zoo3:2181
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      JMX_PORT: 9993
    volumes:
      - ./kafka2/logs:/kafka
    external_links:
      - zoo1
      - zoo2
      - zoo3
    networks:
      zk_default:
        ipv4_address: 172.168.0.6

  kafka3:
    image: wurstmeister/kafka
    restart: always
    hostname: kafka3
    container_name: kafka3
    ports:
      - "9094:9092"
      - "9994:9994"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.1.16
      KAFKA_ADVERTISED_PORT: 9094
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181,zoo2:2181,zoo3:2181
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      JMX_PORT: 9994
    volumes:
      - ./kafka3/logs:/kafka
    external_links:
      - zoo1
      - zoo2
      - zoo3
    networks:
      zk_default:
        ipv4_address: 172.168.0.7

networks:
  zk_default:
    external:
      name: zk_network
Note that KAFKA_ADVERTISED_HOST_NAME should be the IP address of the host machine (192.168.1.16 in this example), and KAFKA_ZOOKEEPER_CONNECT points at the three ZooKeeper containers by hostname over the shared zk_network. After saving, run docker-compose to bring the cluster up; the -f option specifies which yml file to use.
docker-compose -f kafka.yml up -d
4. Verify the Kafka cluster
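As a quick check that all three brokers registered with ZooKeeper, you can create a replicated test topic from inside one of the Kafka containers and describe it. This is a sketch: the topic name test is arbitrary, the Kafka scripts live under /opt/kafka/bin in the wurstmeister/kafka image, and the --zookeeper option applies to the pre-3.0 Kafka releases shipped by that image (newer releases use --bootstrap-server instead):

docker exec -it kafka1 /bin/bash
# JMX_PORT is inherited from the container environment and would clash with the broker's own JMX listener, so clear it before running the CLI tools
unset JMX_PORT
# create a topic replicated across all three brokers
/opt/kafka/bin/kafka-topics.sh --create --zookeeper zoo1:2181,zoo2:2181,zoo3:2181 --replication-factor 3 --partitions 3 --topic test
# list partitions, leaders, and ISRs; all three broker ids should appear
/opt/kafka/bin/kafka-topics.sh --describe --zookeeper zoo1:2181,zoo2:2181,zoo3:2181 --topic test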