Installing a ZooKeeper and Kafka Cluster with Docker
```yaml
version: '3.4'

services:
  zoo1:
    image: zookeeper:3.4
    restart: always
    hostname: zoo1
    container_name: zoo1
    ports:
      - 2184:2181
    volumes:
      - /mountpath/mykafka/zkcluster/zoo1/data:/data
      - /mountpath/mykafka/zkcluster/zoo1/datalog:/datalog
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
    networks:
      - kafka
  zoo2:
    image: zookeeper:3.4
    restart: always
    hostname: zoo2
    container_name: zoo2
    ports:
      - 2185:2181
    volumes:
      - /mountpath/mykafka/zkcluster/zoo2/data:/data
      - /mountpath/mykafka/zkcluster/zoo2/datalog:/datalog
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
    networks:
      - kafka
  zoo3:
    image: zookeeper:3.4
    restart: always
    hostname: zoo3
    container_name: zoo3
    ports:
      - 2186:2181
    volumes:
      - /mountpath/mykafka/zkcluster/zoo3/data:/data
      - /mountpath/mykafka/zkcluster/zoo3/datalog:/datalog
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
    networks:
      - kafka
  kafka1:
    image: wurstmeister/kafka:2.12-2.4.1
    restart: always
    hostname: kafka1
    container_name: kafka1
    privileged: true
    ports:
      - 9092:9092
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka1
      KAFKA_LISTENERS: PLAINTEXT://kafka1:9092
      # No internal/external network separation here
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://hostip:9092
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181,zoo2:2181,zoo3:2181
    volumes:
      - /mountpath/mykafka/kafkaCluster/kafka1/logs:/kafka
    networks:
      - kafka
    depends_on:
      - zoo1
      - zoo2
      - zoo3
    external_links:
      - zoo1
      - zoo2
      - zoo3
  kafka2:
    image: wurstmeister/kafka:2.12-2.4.1
    restart: always
    hostname: kafka2
    container_name: kafka2
    privileged: true
    ports:
      - 9093:9093
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka2
      KAFKA_LISTENERS: PLAINTEXT://kafka2:9093
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://hostip:9093
      KAFKA_ADVERTISED_PORT: 9093
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181,zoo2:2181,zoo3:2181
    volumes:
      - /mountpath/mykafka/kafkaCluster/kafka2/logs:/kafka
    networks:
      - kafka
    depends_on:
      - zoo1
      - zoo2
      - zoo3
    external_links:
      - zoo1
      - zoo2
      - zoo3
  kafka3:
    image: wurstmeister/kafka:2.12-2.4.1
    restart: always
    hostname: kafka3
    container_name: kafka3
    privileged: true
    ports:
      - 9094:9094
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka3
      KAFKA_LISTENERS: PLAINTEXT://kafka3:9094
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://hostip:9094
      KAFKA_ADVERTISED_PORT: 9094
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181,zoo2:2181,zoo3:2181
    volumes:
      - /mountpath/mykafka/kafkaCluster/kafka3/logs:/kafka
    networks:
      - kafka
    depends_on:
      - zoo1
      - zoo2
      - zoo3
    external_links:
      - zoo1
      - zoo2
      - zoo3
  kafka_manager:
    image: kafkamanager/kafka-manager:3.0.0.4
    hostname: kafkamanager
    container_name: kafkamanager
    ports:
      - 9091:9000
    environment:
      ZK_HOSTS: zoo1:2181,zoo2:2181,zoo3:2181
      APPLICATION_SECRET: letmein
    networks:
      - kafka
    depends_on:
      - zoo1
      - zoo2
      - zoo3
      - kafka1
      - kafka2
      - kafka3

networks:
  kafka:
    driver: bridge
```
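With the file above saved as `docker-compose.yml` (the filename is an assumption), the cluster can be brought up and the ZooKeeper quorum sanity-checked roughly like this:

```shell
# Start all containers in the background
docker-compose up -d

# Each node should report "Mode: follower" or "Mode: leader";
# zkServer.sh is on the PATH in the official zookeeper image
docker exec zoo1 zkServer.sh status
docker exec zoo2 zkServer.sh status
docker exec zoo3 zkServer.sh status
```

If a node reports "Error contacting service", check that the `ZOO_SERVERS` entries and the `kafka` bridge network match the compose file above.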
Sending test messages
```python
import json
import random
import time

from kafka import KafkaProducer
from kafka.errors import KafkaError

producer = KafkaProducer(
    bootstrap_servers=['hostip:9092', 'hostip:9093', 'hostip:9094'],
    value_serializer=lambda m: json.dumps(m).encode('ascii'))

# Produce asynchronously: send() returns a future, get() waits for the broker ack
for i in range(50):
    accountId = random.randint(1, 10)
    accountName = "u" + str(accountId)
    amount = random.randint(1, 5000)
    transactionTime = int(round(time.time() * 1000))  # milliseconds
    send_string = {
        "accountId": accountId,
        "accountName": accountName,
        "amount": amount,
        "transactionTime": transactionTime
    }
    future = producer.send('testtopic', send_string)
    try:
        record_metadata = future.get(timeout=10)
    except KafkaError as e:
        # The produce request failed; skip the metadata prints below
        print("send error!")
        print(e)
        continue
    time.sleep(1)
    print(send_string)
    print(record_metadata.partition)
    print(record_metadata.offset)
```
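The `value_serializer` above turns each dict into an ASCII JSON payload; a consumer reverses this with a matching `value_deserializer`. A minimal round-trip sketch (no broker required; the lambdas mirror what `KafkaProducer` and `KafkaConsumer` would be given, and the sample record is illustrative):

```python
import json

# Mirrors the producer's value_serializer above
serialize = lambda m: json.dumps(m).encode('ascii')
# Candidate value_deserializer for the consuming side
deserialize = lambda b: json.loads(b.decode('ascii'))

msg = {"accountId": 3, "accountName": "u3", "amount": 1200, "transactionTime": 1600000000000}
payload = serialize(msg)
print(deserialize(payload) == msg)  # → True: the record survives the round trip
```

Keeping the two lambdas in sync is what lets the consumer recover the original dict rather than raw bytes.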
Viewing message contents from the command line
```shell
docker exec -it kafka1 bash
bash-5.1# cd opt
bash-5.1# cd kafka_2.12-2.4.1/
bash-5.1# cd bin
bash-5.1# kafka-console-consumer.sh --bootstrap-server kafka1:9092 --topic testtopic --from-beginning
```
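The same `bin` directory also ships a console producer, which allows a quick end-to-end check without the Python script (note that in Kafka 2.4 this tool still takes `--broker-list`, not `--bootstrap-server`):

```shell
# Inside the kafka1 container, from /opt/kafka_2.12-2.4.1/bin
kafka-console-producer.sh --broker-list kafka1:9092 --topic testtopic
# Each line typed becomes one message; Ctrl+C exits
```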
Problems encountered:
1. Adding the cluster in Kafka Manager failed with: KeeperErrorCode = Unimplemented for /kafka-manager/mutex
Fix: run the following commands on one of the ZooKeeper nodes.
```shell
docker exec -it zoo1 bash
root@98747a9eac65:/zookeeper-3.4.14# ./bin/zkCli.sh
[zk: localhost:2181(CONNECTED) 2] ls /kafka-manager
[configs, deleteClusters, clusters]
[zk: localhost:2181(CONNECTED) 3] create /kafka-manager/mutex ""
Created /kafka-manager/mutex
[zk: localhost:2181(CONNECTED) 5] create /kafka-manager/mutex/locks ""
Created /kafka-manager/mutex/locks
[zk: localhost:2181(CONNECTED) 6] create /kafka-manager/mutex/leases ""
Created /kafka-manager/mutex/leases
```
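The same znodes can also be created non-interactively, since `zkCli.sh` executes a command passed on its command line and then exits; this is easier to script (container name and paths as above):

```shell
docker exec zoo1 bin/zkCli.sh create /kafka-manager/mutex ""
docker exec zoo1 bin/zkCli.sh create /kafka-manager/mutex/locks ""
docker exec zoo1 bin/zkCli.sh create /kafka-manager/mutex/leases ""
```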
2. When adding a cluster in Kafka Manager, the Cluster Zookeeper Hosts field must be filled with the Docker network's internal IPs.
3. Producing messages failed with a timeout error.
Fix: `KAFKA_ADVERTISED_LISTENERS` was initially set to the Docker-internal address; change it to the external (host) address. For kafka3, for example:

```yaml
# before
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka3:9094
# after
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://hostip:9094
```
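The compose file above deliberately skips internal/external network separation, which is why the advertised address has to be flipped like this. With named listeners, a broker can advertise a Docker-internal address and a host-facing one at the same time. A hedged sketch for kafka3, following the wurstmeister/kafka-docker conventions (the `INSIDE`/`OUTSIDE` names are arbitrary labels, and port 9095 for the internal listener is an assumption):

```yaml
environment:
  KAFKA_LISTENERS: INSIDE://:9095,OUTSIDE://:9094
  KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka3:9095,OUTSIDE://hostip:9094
  KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
  KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
```

Containers on the `kafka` network would then connect via `kafka3:9095`, while external clients keep using `hostip:9094`.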
4. When creating the cluster, these two options must be checked; otherwise topic produce/consume information will not be synced to Kafka Manager, e.g. the sum of partition offsets stays at 0.
Reference: https://github.com/yahoo/CMAK/issues/731