Setting Up a Kafka Cluster with KRaft and Implementing ACL Authorization
There are plenty of articles online about building a Kafka cluster in KRaft mode, but following them verbatim tends to produce errors at startup, and few of them explain systematically how users and ACLs should be configured. After a fairly painful round of trial and error I finally got the KRaft cluster, user management, and ACL authorization working together. This article shares the result; if you are struggling with the same problem, I hope it gives you some pointers.
The KRaft Architecture
Before Kafka 2.8, Kafka depended heavily on a Zookeeper cluster for metadata management and for keeping the cluster highly available (the so-called consensus service).
Since Kafka 2.8, a Raft-based KRaft mode is available that removes the dependency on Zookeeper. In this mode, some Kafka nodes are designated as Controllers and the others as Brokers (a node can also take both roles, as the scripts below do). The Controllers provide the consensus service that Zookeeper used to provide, and all metadata is stored in an internal Kafka topic and managed by Kafka itself.
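To make that last point concrete: on a running KRaft node the metadata log is just another Kafka log on disk and can be dumped with the stock tooling. A minimal sketch, assuming the bitnami data directory used later in this article (the segment file name will differ on your machine):
# Dump the internal __cluster_metadata topic as readable metadata records
bin/kafka-dump-log.sh --cluster-metadata-decoder \
  --files /bitnami/kafka/data/__cluster_metadata-0/00000000000000000000.log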
Building the Kafka Cluster with KRaft
- Use the bitnami/kafka Docker image
- Configure KRaft, SASL, and ACLs
Startup script for cluster node 1
The script below can be adapted into a docker-compose file to suit your own needs.
#!/bin/bash
# Clean up any previous container and prepare a writable data directory
docker stop kafka_cluster01
docker rm kafka_cluster01
mkdir -p /opt/middle-images/kafka_cluster01/volume/kdata
chmod -R 777 /opt/middle-images/kafka_cluster01/volume/kdata
# Load the Kafka image from the local tarball and capture its repository:tag
img=`docker load -i /opt/middle-images/kafka_cluster01/kafka.tar | awk -F " " '{print $3}'`
# Start node 1 as a combined KRaft broker + controller
docker run -d --restart=always \
--name kafka_cluster01 \
--net=host \
--add-host="kafka1":10.32.122.117 \
--add-host="kafka2":10.32.122.100 \
--add-host="kafka3":10.32.122.101 \
-e TZ=Asia/Shanghai \
-e KAFKA_BROKER_ID=1 \
-e KAFKA_KRAFT_CLUSTER_ID=LelM2dIFQkiUFvXCEcqRWA \
-e KAFKA_CFG_SECURITY_PROTOCOL=SASL_PLAINTEXT \
-e KAFKA_CFG_NODE_ID=1 \
-e KAFKA_ENABLE_KRAFT=yes \
-e KAFKA_CFG_PROCESS_ROLES=broker,controller \
-e KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER \
-e KAFKA_CFG_LISTENERS=BROKER://:9591,CONTROLLER://:9599,EXTERNAL://:9592 \
-e KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@kafka1:9599,2@kafka2:9599,3@kafka3:9599 \
-e KAFKA_CFG_ADVERTISED_LISTENERS=BROKER://kafka1:9591,EXTERNAL://kafka1:9592 \
-e KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=BROKER:SASL_PLAINTEXT,CONTROLLER:PLAINTEXT,EXTERNAL:SASL_PLAINTEXT \
-e KAFKA_CFG_INTER_BROKER_LISTENER_NAME=SASL_PLAINTEXT \
-e KAFKA_INTER_BROKER_LISTENER_NAME=BROKER \
-e KAFKA_CONTROLLER_USER=admin_controller \
-e KAFKA_CONTROLLER_PASSWORD=Kafka_2024 \
-e KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL=PLAIN \
-e KAFKA_CFG_SASL_MECHANISM_CONTROLLER_PROTOCOL=PLAIN \
-e KAFKA_CFG_EARLY_START_LISTENERS=CONTROLLER \
-e KAFKA_INTER_BROKER_USER=admin \
-e KAFKA_INTER_BROKER_PASSWORD=Kafka_2024 \
-e KAFKA_CLIENT_USERS=kafka \
-e KAFKA_CLIENT_PASSWORDS=8lpWTiEy1FfqAZf \
-e KAFKA_CFG_SASL_ENABLED_MECHANISMS=PLAIN \
-e KAFKA_CFG_SUPER_USERS="User:admin;User:admin_controller;User:ANONYMOUS" \
-e KAFKA_CFG_AUTHORIZE_SASL_USERS=true \
-e KAFKA_CFG_AUTHORIZER_CLASS_NAME=org.apache.kafka.metadata.authorizer.StandardAuthorizer \
-v /opt/middle-images/kafka_cluster01/volume/kdata:/bitnami/kafka \
$img
Startup script for cluster node 2 (only the container name and paths, the node/broker ID, and the advertised host name change)
#!/bin/bash
docker stop kafka_cluster02
docker rm kafka_cluster02
mkdir -p /opt/middle-images/kafka_cluster02/volume/kdata
chmod -R 777 /opt/middle-images/kafka_cluster02/volume/kdata
img=`docker load -i /opt/middle-images/kafka_cluster02/kafka.tar | awk -F " " '{print $3}'`
docker run -d --restart=always \
--name kafka_cluster02 \
--net=host \
--add-host="kafka1":10.32.122.117 \
--add-host="kafka2":10.32.122.100 \
--add-host="kafka3":10.32.122.101 \
-e TZ=Asia/Shanghai \
-e KAFKA_BROKER_ID=2 \
-e KAFKA_KRAFT_CLUSTER_ID=LelM2dIFQkiUFvXCEcqRWA \
-e KAFKA_CFG_SECURITY_PROTOCOL=SASL_PLAINTEXT \
-e KAFKA_CFG_NODE_ID=2 \
-e KAFKA_ENABLE_KRAFT=yes \
-e KAFKA_CFG_PROCESS_ROLES=broker,controller \
-e KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER \
-e KAFKA_CFG_LISTENERS=BROKER://:9591,CONTROLLER://:9599,EXTERNAL://:9592 \
-e KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@kafka1:9599,2@kafka2:9599,3@kafka3:9599 \
-e KAFKA_CFG_ADVERTISED_LISTENERS=BROKER://kafka2:9591,EXTERNAL://kafka2:9592 \
-e KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=BROKER:SASL_PLAINTEXT,CONTROLLER:PLAINTEXT,EXTERNAL:SASL_PLAINTEXT \
-e KAFKA_CFG_INTER_BROKER_LISTENER_NAME=SASL_PLAINTEXT \
-e KAFKA_INTER_BROKER_LISTENER_NAME=BROKER \
-e KAFKA_CONTROLLER_USER=admin_controller \
-e KAFKA_CONTROLLER_PASSWORD=Kafka_2024 \
-e KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL=PLAIN \
-e KAFKA_CFG_SASL_MECHANISM_CONTROLLER_PROTOCOL=PLAIN \
-e KAFKA_CFG_EARLY_START_LISTENERS=CONTROLLER \
-e KAFKA_INTER_BROKER_USER=admin \
-e KAFKA_INTER_BROKER_PASSWORD=Kafka_2024 \
-e KAFKA_CLIENT_USERS=kafka \
-e KAFKA_CLIENT_PASSWORDS=8lpWTiEy1FfqAZf \
-e KAFKA_CFG_SASL_ENABLED_MECHANISMS=PLAIN \
-e KAFKA_CFG_SUPER_USERS="User:admin;User:admin_controller;User:ANONYMOUS" \
-e KAFKA_CFG_AUTHORIZE_SASL_USERS=true \
-e KAFKA_CFG_AUTHORIZER_CLASS_NAME=org.apache.kafka.metadata.authorizer.StandardAuthorizer \
-v /opt/middle-images/kafka_cluster02/volume/kdata:/bitnami/kafka \
$img
Startup script for cluster node 3 (again, only the container name and paths, the node/broker ID, and the advertised host name change)
#!/bin/bash
docker stop kafka_cluster03
docker rm kafka_cluster03
mkdir -p /opt/middle-images/kafka_cluster03/volume/kdata
chmod -R 777 /opt/middle-images/kafka_cluster03/volume/kdata
img=`docker load -i /opt/middle-images/kafka_cluster03/kafka.tar | awk -F " " '{print $3}'`
docker run -d --restart=always \
--name kafka_cluster03 \
--net=host \
--add-host="kafka1":10.32.122.117 \
--add-host="kafka2":10.32.122.100 \
--add-host="kafka3":10.32.122.101 \
-e TZ=Asia/Shanghai \
-e KAFKA_BROKER_ID=3 \
-e KAFKA_KRAFT_CLUSTER_ID=LelM2dIFQkiUFvXCEcqRWA \
-e KAFKA_CFG_SECURITY_PROTOCOL=SASL_PLAINTEXT \
-e KAFKA_CFG_NODE_ID=3 \
-e KAFKA_ENABLE_KRAFT=yes \
-e KAFKA_CFG_PROCESS_ROLES=broker,controller \
-e KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER \
-e KAFKA_CFG_LISTENERS=BROKER://:9591,CONTROLLER://:9599,EXTERNAL://:9592 \
-e KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@kafka1:9599,2@kafka2:9599,3@kafka3:9599 \
-e KAFKA_CFG_ADVERTISED_LISTENERS=BROKER://kafka3:9591,EXTERNAL://kafka3:9592 \
-e KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=BROKER:SASL_PLAINTEXT,CONTROLLER:PLAINTEXT,EXTERNAL:SASL_PLAINTEXT \
-e KAFKA_CFG_INTER_BROKER_LISTENER_NAME=SASL_PLAINTEXT \
-e KAFKA_INTER_BROKER_LISTENER_NAME=BROKER \
-e KAFKA_CONTROLLER_USER=admin_controller \
-e KAFKA_CONTROLLER_PASSWORD=Kafka_2024 \
-e KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL=PLAIN \
-e KAFKA_CFG_SASL_MECHANISM_CONTROLLER_PROTOCOL=PLAIN \
-e KAFKA_CFG_EARLY_START_LISTENERS=CONTROLLER \
-e KAFKA_INTER_BROKER_USER=admin \
-e KAFKA_INTER_BROKER_PASSWORD=Kafka_2024 \
-e KAFKA_CLIENT_USERS=kafka \
-e KAFKA_CLIENT_PASSWORDS=8lpWTiEy1FfqAZf \
-e KAFKA_CFG_SASL_ENABLED_MECHANISMS=PLAIN \
-e KAFKA_CFG_SUPER_USERS="User:admin;User:admin_controller;User:ANONYMOUS" \
-e KAFKA_CFG_AUTHORIZE_SASL_USERS=true \
-e KAFKA_CFG_AUTHORIZER_CLASS_NAME=org.apache.kafka.metadata.authorizer.StandardAuthorizer \
-v /opt/middle-images/kafka_cluster03/volume/kdata:/bitnami/kafka \
$img
Start the Docker containers on the three machines one after another and check the startup logs. A successful start looks like this:
[2024-11-06 10:59:34,318] INFO Kafka version: 3.4.1 (org.apache.kafka.common.utils.AppInfoParser)
[2024-11-06 10:59:34,318] INFO Kafka commitId: 8a516edc2755df89 (org.apache.kafka.common.utils.AppInfoParser)
[2024-11-06 10:59:34,318] INFO Kafka startTimeMs: 1730861974317 (org.apache.kafka.common.utils.AppInfoParser)
[2024-11-06 10:59:34,321] INFO [KafkaRaftServer nodeId=3] Kafka Server started (kafka.server.KafkaRaftServer)
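Two quick checks are useful at this point, assuming the SASL client.properties file shown later in the ACL section: confirm that each container logged "Kafka Server started", then confirm that the three controllers actually formed a quorum.
# On each host, confirm the broker finished starting
docker logs kafka_cluster01 | grep "Kafka Server started"

# From any host, check the KRaft quorum (expect one leader and two followers)
bin/kafka-metadata-quorum.sh --bootstrap-server kafka1:9591,kafka2:9591,kafka3:9591 \
  --command-config client.properties describe --status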
SCRAM User Management
When Kafka is deployed with Zookeeper, users can be added dynamically with the kafka-configs.sh script or with the AdminClient's UserScramCredentialUpsertion class, for example:
bin/kafka-configs.sh --bootstrap-server kafka1:9591,kafka2:9591,kafka3:9591 --command-config client.properties --alter --add-config 'SCRAM-SHA-256=[password=pgVYjqkSYEmMLFT!],SCRAM-SHA-512=[password=pgVYjqkSYEmMLFT!]' --entity-type users --entity-name tester
Or via the AdminClient:
UserScramCredentialUpsertion credentialUpsertion = new UserScramCredentialUpsertion("test", new ScramCredentialInfo(ScramMechanism.SCRAM_SHA_256, 4096), "test");
adminClient.alterUserScramCredentials(Collections.singletonList(credentialUpsertion)).all().get(); // submit via an existing AdminClient
However, a Kafka cluster deployed with KRaft (at least the 3.4.x release used here) supports neither dynamic user management nor SCRAM-SHA-512.
Apart from the SUPER USERS, all other users have to be declared with KAFKA_CLIENT_USERS; multiple users and passwords are joined with ;, like this:
-e KAFKA_CLIENT_USERS=kafka1;kafka2 \
-e KAFKA_CLIENT_PASSWORDS=8lpWTiEy1FfqAZf;123456 \
ACL Access Control
If you do not need ACL-based authorization, simply add the setting below and ordinary users will be able to access every topic:
-e KAFKA_CFG_ALLOW_EVERYONE_IF_NO_ACL_FOUND=true \
When you do want ACL authorization, drop that setting.
Inside a Kafka container, grant a user access to a topic (the example below grants all operations, which covers both read and write):
bin/kafka-acls.sh --bootstrap-server kafka1:9591,kafka2:9591,kafka3:9591 --command-config client.properties --add --allow-principal User:kafka --operation all --topic test1
Here client.properties contains:
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="Kafka_2024";
The output is as follows:
Adding ACLs for resource `ResourcePattern(resourceType=TOPIC, name=test1, patternType=LITERAL)`:
(principal=User:kafka, host=*, operation=ALL, permissionType=ALLOW)
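You can confirm the grant with --list, and note that a consumer additionally needs READ on its consumer group. A sketch, using a hypothetical group name test-group:
# List the ACLs attached to the topic
bin/kafka-acls.sh --bootstrap-server kafka1:9591,kafka2:9591,kafka3:9591 \
  --command-config client.properties --list --topic test1

# Allow the kafka user to consume using the group test-group
bin/kafka-acls.sh --bootstrap-server kafka1:9591,kafka2:9591,kafka3:9591 \
  --command-config client.properties --add --allow-principal User:kafka \
  --operation Read --group test-group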
Connecting as the admin user shows all topics, while connecting as the kafka user shows only the topics it has been granted access to:
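One way to verify this from the command line, assuming a second properties file (here called kafka-client.properties) that is identical to client.properties except that it uses username "kafka" and its password:
# As the super user admin: every topic in the cluster is listed
bin/kafka-topics.sh --bootstrap-server kafka1:9591,kafka2:9591,kafka3:9591 \
  --command-config client.properties --list

# As the kafka user: only topics it holds ACLs on (e.g. test1) are listed
bin/kafka-topics.sh --bootstrap-server kafka1:9591,kafka2:9591,kafka3:9591 \
  --command-config kafka-client.properties --list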