Kafka with SCRAM authentication: quick setup for enabling ACLs

Starting and stopping the services

ZooKeeper
/usr/local/apache-zookeeper-3.8.2-bin/bin/zkServer.sh start
/usr/local/apache-zookeeper-3.8.2-bin/bin/zkServer.sh stop

Kafka
/usr/local/kafka_2.13-3.2.3/bin/kafka-server-stop.sh
/usr/local/kafka_2.13-3.2.3/bin/kafka-server-start.sh -daemon /usr/local/kafka_2.13-3.2.3/config/server.properties 
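
A quick sanity check that both processes are up (a sketch; assumes a JDK so jps is on the PATH):
# kafka.Kafka is the broker, QuorumPeerMain is ZooKeeper
jps -l | grep -E 'kafka.Kafka|QuorumPeerMain'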

I. Creating SCRAM user credentials

1. Create the users after ZooKeeper is started but while Kafka is still stopped

#1) Create credentials for both mechanisms, SCRAM-SHA-512 and SCRAM-SHA-256 (the method used in this document)
# Remember: the admin user must be created before Kafka is started
/usr/local/kafka_2.13-3.2.3/bin/kafka-configs.sh --zookeeper 10.255.61.28:2181,10.255.61.29:2181,10.255.61.30:2181 --alter --add-config 'SCRAM-SHA-256=[password=admin-sec],SCRAM-SHA-512=[password=admin-sec]' --entity-type users --entity-name admin

/usr/local/kafka_2.13-3.2.3/bin/kafka-configs.sh --zookeeper 10.255.61.28:2181,10.255.61.29:2181,10.255.61.30:2181 --alter --add-config 'SCRAM-SHA-256=[password=cons-sec],SCRAM-SHA-512=[password=cons-sec]' --entity-type users --entity-name consumer

/usr/local/kafka_2.13-3.2.3/bin/kafka-configs.sh --zookeeper 10.255.61.28:2181,10.255.61.29:2181,10.255.61.30:2181 --alter --add-config 'SCRAM-SHA-256=[password=prod-sec],SCRAM-SHA-512=[password=prod-sec]' --entity-type users --entity-name producer


#2) Example of creating only SCRAM-SHA-256 credentials, for when server.properties enables only SHA-256
/usr/local/kafka_2.13-3.2.3/bin/kafka-configs.sh --zookeeper 10.255.61.28:2181,10.255.61.29:2181,10.255.61.30:2181 --alter --add-config 'SCRAM-SHA-256=[password=admin-sec]' --entity-type users --entity-name admin


#3) On newer Kafka versions, users are usually created through --bootstrap-server instead (if the broker already enforces authentication, add --command-config as in step 2 below)
./kafka-configs.sh --bootstrap-server 10.255.61.28:9092 --alter --add-config 'SCRAM-SHA-256=[password=admin],SCRAM-SHA-512=[password=admin]' --entity-type users --entity-name admin

2. To create users after Kafka is already running, pass an authentication config file with the command

1) Create admin.conf
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin-sec";

2) Create the user
/usr/local/kafka_2.13-3.2.3/bin/kafka-configs.sh --bootstrap-server 10.255.61.28:9092 --alter --add-config 'SCRAM-SHA-256=[password=123456],SCRAM-SHA-512=[password=123456]' --entity-type users --entity-name mytest --command-config /usr/local/kafka_2.13-3.2.3/config/admin.conf
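
To confirm the user exists afterwards, a sketch using the same admin.conf (describing SCRAM credentials via the broker needs Kafka 2.7+):
/usr/local/kafka_2.13-3.2.3/bin/kafka-configs.sh --bootstrap-server 10.255.61.28:9092 --describe --entity-type users --entity-name mytest --command-config /usr/local/kafka_2.13-3.2.3/config/admin.conf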

II. Viewing and deleting credentials

# View the credentials of all users
/usr/local/kafka_2.13-3.2.3/bin/kafka-configs.sh --zookeeper localhost:2181 --describe --entity-type users

# View the admin user's credentials
/usr/local/kafka_2.13-3.2.3/bin/kafka-configs.sh --zookeeper 10.255.61.28:2181,10.255.61.29:2181,10.255.61.30:2181  --describe --entity-type users --entity-name admin

# Delete credentials for both mechanisms at once, e.g. for the producer user here
/usr/local/kafka_2.13-3.2.3/bin/kafka-configs.sh --zookeeper localhost:2181 --alter --delete-config 'SCRAM-SHA-512' --delete-config 'SCRAM-SHA-256' --entity-type users --entity-name producer

# Delete only one mechanism's credentials, here SCRAM-SHA-512
/usr/local/kafka_2.13-3.2.3/bin/kafka-configs.sh --zookeeper 10.255.61.28:2181,10.255.61.29:2181,10.255.61.30:2181 --alter --delete-config 'SCRAM-SHA-512' --entity-type users --entity-name producer

III. Server-side configuration

1. kafka_server_jaas.conf (location is arbitrary)

cd  /usr/local/kafka_2.13-3.2.3/config
cat > kafka_server_jaas.conf << EOF
KafkaServer {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="admin"
password="admin-sec";
};
EOF

2. Modify kafka-server-start.sh

Pass the location of the JAAS config file to each Kafka broker as a JVM parameter:
cd /usr/local/kafka_2.13-3.2.3/bin
vim kafka-server-start.sh
Add the following line above the line EXTRA_ARGS=${EXTRA_ARGS-'-name kafkaServer -loggc'}:
export KAFKA_OPTS="-Djava.security.auth.login.config=/usr/local/kafka_2.13-3.2.3/config/kafka_server_jaas.conf"
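
If you prefer a scripted edit, a one-liner sketch (GNU sed; assumes the stock EXTRA_ARGS line is unmodified):
sed -i '/^EXTRA_ARGS=/i export KAFKA_OPTS="-Djava.security.auth.login.config=/usr/local/kafka_2.13-3.2.3/config/kafka_server_jaas.conf"' kafka-server-start.sh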

3. Modify server.properties

cd /usr/local/kafka_2.13-3.2.3/config and add the following settings (note that properties files do not allow trailing inline comments, so all comments go on their own lines):
# Authentication settings
# The default is listeners=PLAINTEXT://10.255.61.29:9092 and must be changed; use each node's own IP
listeners=SASL_PLAINTEXT://10.255.61.28:9092

advertised.listeners=SASL_PLAINTEXT://10.255.61.28:9092
# Note: the mechanism here must match the one used when creating the users above
sasl.enabled.mechanisms=SCRAM-SHA-256
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
security.inter.broker.protocol=SASL_PLAINTEXT

# ACL settings
allow.everyone.if.no.acl.found=false
# authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer works on Kafka 2.x but errors on 3.0+; use the line below instead
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
# Possible pitfall: the trailing User:xuxiaodong is only an example of listing multiple super users
super.users=User:admin;User:xuxiaodong
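
After editing, restart each broker and confirm it came up (a sketch; the log path is the tarball default):
/usr/local/kafka_2.13-3.2.3/bin/kafka-server-stop.sh
/usr/local/kafka_2.13-3.2.3/bin/kafka-server-start.sh -daemon /usr/local/kafka_2.13-3.2.3/config/server.properties
grep 'started (kafka.server.KafkaServer)' /usr/local/kafka_2.13-3.2.3/logs/server.log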

IV. Client-side configuration

1. Create JAAS files for the client users (alternative approach)

cat > kafka_client_scram_producer_jaas.conf << EOF
KafkaClient {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="producer"
password="prod-sec";
};
EOF

cat > kafka_client_scram_consumer_jaas.conf << EOF
KafkaClient {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="consumer"
password="cons-sec";
};
EOF

2. Modify the producer and consumer startup scripts to load the JAAS files (alternative approach)

### Producer: edit bin/kafka-console-producer.sh and add
export KAFKA_OPTS="-Djava.security.auth.login.config=/usr/local/kafka_2.13-3.2.3/config/kafka_client_scram_producer_jaas.conf"

### Consumer: edit bin/kafka-console-consumer.sh and add
export KAFKA_OPTS="-Djava.security.auth.login.config=/usr/local/kafka_2.13-3.2.3/config/kafka_client_scram_consumer_jaas.conf"

3. Modify consumer.properties and producer.properties

Add the following settings:
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
# bootstrap.servers is confirmed to be required
bootstrap.servers=10.255.61.28:9092,10.255.61.29:9092,10.255.61.30:9092

Do all of the above on all three nodes.
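
A sketch for pushing the JAAS file to the other two nodes (assumes passwordless SSH; do not copy server.properties verbatim, since listeners/advertised.listeners must carry each node's own IP):
for h in 10.255.61.29 10.255.61.30; do
  scp /usr/local/kafka_2.13-3.2.3/config/kafka_server_jaas.conf $h:/usr/local/kafka_2.13-3.2.3/config/
done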

Notes

If steps 1 and 2 above are in place, the mytest.conf used in the tests below can be omitted; the produce and consume test commands then look like the following (a matching consumer sketch follows the producer command):

./kafka-console-producer.sh --broker-list 10.255.61.28:9092 --topic test --producer-property security.protocol=SASL_PLAINTEXT --producer-property sasl.mechanism=SCRAM-SHA-256
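
The matching consumer test, a sketch on the same assumptions (it relies on the KAFKA_OPTS export from step 2 above):
./kafka-console-consumer.sh --bootstrap-server 10.255.61.28:9092 --topic test --from-beginning --consumer-property security.protocol=SASL_PLAINTEXT --consumer-property sasl.mechanism=SCRAM-SHA-256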

The prerequisite is that config/kafka_server_jaas.conf also contains the client usernames and passwords, as below. The original scheme only needed the admin user; if you stick with the original scheme, the extra users must be removed, otherwise Kafka will not start.

KafkaServer {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="admin"
password="admin-sec";
user_admin="admin-sec"
user_producer="prod-sec"
user_consumer="cons-sec";
};

V. Testing

1. Create a test topic

1) Create the topic
/usr/local/kafka_2.13-3.2.3/bin/kafka-topics.sh --create  --bootstrap-server  10.255.61.28:9092,10.255.61.29:9092,10.255.61.30:9092 --replication-factor 1 --partitions 1 --topic mytest1 --command-config /usr/local/kafka_2.13-3.2.3/config/admin.conf

2) List all topics
/usr/local/kafka_2.13-3.2.3/bin/kafka-topics.sh --list  --bootstrap-server  10.255.61.28:9092,10.255.61.29:9092,10.255.61.30:9092 --command-config /usr/local/kafka_2.13-3.2.3/config/admin.conf

2. Create a test user

1) Add a new user, mytest; Kafka must be stopped and ZooKeeper running for this to work
/usr/local/kafka_2.13-3.2.3/bin/kafka-configs.sh --zookeeper 10.255.61.28:2181,10.255.61.29:2181,10.255.61.30:2181 --alter --add-config 'SCRAM-SHA-256=[password=123456],SCRAM-SHA-512=[password=123456]' --entity-type users --entity-name mytest

An alternative method that does not require stopping Kafka:
/usr/local/kafka_2.13-3.2.3/bin/kafka-configs.sh --bootstrap-server 10.255.61.28:9092 --alter --add-config 'SCRAM-SHA-256=[password=123456],SCRAM-SHA-512=[password=123456]' --entity-type users --entity-name mytest --command-config /usr/local/kafka_2.13-3.2.3/config/admin.conf

2) Update a user: change mytest's password to mytest (for reference only; do not execute)
bin/kafka-configs.sh --zookeeper 192.168.18.11:12181 --alter --add-config 'SCRAM-SHA-512=[password=mytest]' --entity-type users --entity-name mytest

3. Authorization

https://www.jianshu.com/p/74f84fbd1f3f

1) Grant the mytest user permission to produce messages to the test topic
/usr/local/kafka_2.13-3.2.3/bin/kafka-acls.sh --bootstrap-server '10.255.61.28:9092' --add --allow-principal User:"mytest" --producer --topic 'test' --command-config /usr/local/kafka_2.13-3.2.3/config/admin.conf
Note: a regex cannot be used to match multiple topics, e.g. test* (a prefixed-pattern alternative is sketched below)
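The supported alternative is a prefixed resource pattern (available since Kafka 2.0); a sketch granting produce on every topic whose name starts with test:
/usr/local/kafka_2.13-3.2.3/bin/kafka-acls.sh --bootstrap-server 10.255.61.28:9092 --add --allow-principal User:mytest --producer --topic test --resource-pattern-type prefixed --command-config /usr/local/kafka_2.13-3.2.3/config/admin.conf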


2) Grant the mytest user permission to consume test messages with any consumer group
/usr/local/kafka_2.13-3.2.3/bin/kafka-acls.sh --bootstrap-server '10.255.61.28:9092' --add --allow-principal User:"mytest" --consumer --topic 'test' --group '*' --command-config /usr/local/kafka_2.13-3.2.3/config/admin.conf

3) List all ACLs in the cluster
./kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --list

List the ACLs of a specific topic
./kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --list --topic test.1
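
On this SASL-enabled 3.x cluster, the same listing also works through the broker instead of ZooKeeper:
./kafka-acls.sh --bootstrap-server 10.255.61.28:9092 --list --command-config /usr/local/kafka_2.13-3.2.3/config/admin.conf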


4) Grant read and write in one command
kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add \
--allow-principal User:developers --operation Read --operation Write --topic my-topic

5) Remove the read and write permissions
kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --remove \
--allow-principal User:developers --operation Read --operation Write --topic my-topic

6) Add an ACL allowing principals User:Bob and User:Alice to Read and Write the topic Test-topic from hosts 198.51.100.0 and 198.51.100.1
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:Bob --allow-principal User:Alice --allow-host 198.51.100.0 --allow-host 198.51.100.1 --operation Read --operation Write --topic Test-topic

7) Allow all users to read Test-topic, denying only User:BadBob from IP 198.51.100.3
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:'*' --allow-host '*' --deny-principal User:BadBob --deny-host 198.51.100.3 --operation Read --topic Test-topic

Throttling test (not verified)

#https://blog.csdn.net/weixin_39750695/article/details/120596741
# Throttle user test to 10 MB/s
./bin/kafka-configs.sh --zookeeper xxxxxx:2181/kafka27 --alter --add-config 'producer_byte_rate=10485760' --entity-type users --entity-name test

# Throttle the client with client id clientA to 10 MB/s
./bin/kafka-configs.sh --zookeeper xxxxxx:2181/kafka27 --alter --add-config 'producer_byte_rate=10485760' --entity-type clients --entity-name clientA

# /bin/kafka-configs.sh  --bootstrap-server xxxxxx:9092 --alter --add-config 'producer_byte_rate=10240,consumer_byte_rate=10240,request_percentage=200' --entity-type clients --entity-name test_lz4_10m_client
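
To check that a quota took effect, a sketch against this document's cluster (the quota appears in the describe output):
./bin/kafka-configs.sh --bootstrap-server 10.255.61.28:9092 --describe --entity-type clients --entity-name clientA --command-config /usr/local/kafka_2.13-3.2.3/config/admin.conf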

4. Produce and consume tests

1) Create the config file mytest.conf
security.protocol=SASL_PLAINTEXT
# using SCRAM-SHA-512 here causes an error, since the server only enables SHA-256
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="mytest" password="123456";

2) Produce messages
/usr/local/kafka_2.13-3.2.3/bin/kafka-console-producer.sh --broker-list 10.255.61.28:9092,10.255.61.29:9092,10.255.61.30:9092 --topic test --producer.config /usr/local/kafka_2.13-3.2.3/config/mytest.conf

3) Consume messages
/usr/local/kafka_2.13-3.2.3/bin/kafka-console-consumer.sh --bootstrap-server 10.255.61.28:9092 --topic test --consumer.config /usr/local/kafka_2.13-3.2.3/config/mytest.conf --from-beginning

4) Grant consume permission on all groups (for reference only; not tried)
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=192.168.18.11:12181 --add --allow-principal User:"mytest" --consumer --topic 'mytest' --group '*'


5) Consumer-group commands
List consumer groups
./kafka-consumer-groups.sh --bootstrap-server 10.255.61.29:9092,10.255.61.30:9092 --list --command-config ../config/admin.conf

Produce messages:
/usr/local/kafka_2.13-3.2.3/bin/kafka-console-producer.sh --broker-list 10.255.61.28:9092,10.255.61.29:9092,10.255.61.30:9092 --topic mytest1 --producer.config /usr/local/kafka_2.13-3.2.3/config/admin.conf
Type:
it is a test for consumergroup

Consume the messages with a specific consumer group
./kafka-console-consumer.sh --bootstrap-server=10.255.61.29:9092,10.255.61.30:9092 --topic mytest1 --consumer-property group.id=mytest1 --consumer.config ../config/admin.conf
You should now see the messages produced above.

Describe the consumer group
./kafka-consumer-groups.sh --bootstrap-server 10.255.61.29:9092,10.255.61.30:9092 --group mytest1  --command-config ../config/admin.conf  --describe

VI. Verification

1. Listing topics

The admin user can see all topics, while the mytest user can only see the test topic; a sketch follows.
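
For example, listing topics as mytest (reusing the mytest.conf from the test section; only authorized topics are returned):
/usr/local/kafka_2.13-3.2.3/bin/kafka-topics.sh --list --bootstrap-server 10.255.61.28:9092 --command-config /usr/local/kafka_2.13-3.2.3/config/mytest.conf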


2. Producing messages

The mytest user can produce messages to the test topic normally, but cannot produce to the mytest1 topic (sketch below).
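
For example, the failing case (the console producer should report a TopicAuthorizationException for mytest1):
/usr/local/kafka_2.13-3.2.3/bin/kafka-console-producer.sh --broker-list 10.255.61.28:9092 --topic mytest1 --producer.config /usr/local/kafka_2.13-3.2.3/config/mytest.conf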


3. Consuming messages

The mytest user can only consume messages from the test topic and has no permission to consume from mytest1.


References:

https://juejin.cn/post/7024713687986339853

https://blog.csdn.net/zhqsdhr/article/details/105805604

https://blog.csdn.net/jxlhljh/article/details/127286603

https://blog.csdn.net/w269000710/article/details/129572628

https://stackoverflow.com/questions/71938048/solr-zookeeper-an-exception-was-thrown-while-closing-send-thread
