Enabling SASL Authentication for Apache ZooKeeper and Kafka
Starting with version 0.9.0.0, the Kafka community has added many features to improve the security of Kafka clusters. Kafka offers two security mechanisms, SSL and SASL. SSL is implemented mainly with CA certificates; this guide covers the SASL approach.
Kafka's security mechanisms
The Kafka community officially added security features in version 0.9.0.0 that can satisfy a range of security requirements, including:
- Secure communication between Kafka and ZooKeeper;
- Secure communication between brokers within the Kafka cluster;
- Secure communication between clients and brokers;
- Message-level access control, governing the read and write permissions of clients (producers and consumers).
Authentication method | Introduced in | Typical use case |
---|---|---|
SSL | 0.9.0 | SSL is mostly used for channel encryption. SSL authentication is weaker than SASL, so SSL is generally used only to encrypt communication. |
SASL/GSSAPI | 0.9.0 | Mainly for Kerberos. If your company already runs Kerberos (for example through Active Directory), GSSAPI is the most convenient option: no extra Kerberos deployment is needed, your Kerberos administrators simply issue principals for each broker and for every OS user that accesses the Kafka cluster. |
SASL/PLAIN | 0.10.0 | Simple username/password authentication, usually combined with SSL. A good fit for small companies that do not need a company-wide Kerberos deployment. |
SASL/SCRAM | 0.10.2 | An enhanced version of PLAIN that supports adding and removing users dynamically. |
Delegation Token | 1.1 | A lightweight authentication mechanism introduced in 1.1 that complements the existing SASL or SSL authentication. To use it, first configure SASL, then obtain a Delegation Token through the Kafka API. Brokers and clients can then authenticate with the token directly, without fetching a ticket from the KDC (Kerberos) or shipping keystore files (SSL) each time. |
SASL/OAUTHBEARER | 2.0 | Integration with the OAuth 2 framework. |
The IP-to-hostname mappings must be configured in /etc/hosts:
10.20.1.1 kafka1
10.20.1.2 kafka2
10.20.1.3 kafka3
Configure the ZooKeeper cluster to enable SASL
1. Configure ZooKeeper to enable SASL authentication. cat zoo.cfg shows the following content:
tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
maxClientCnxns=2000
maxSessionTimeout=60000000
autopurge.snapRetainCount=10
autopurge.purgeInterval=1
preAllocSize=131072
snapCount=3000000
leaderServes=yes
dataDir=/data/log/zookeeper/data
dataLogDir=/data/log/zookeeper/datalog
4lw.commands.whitelist=*
server.1=10.20.1.1:2888:3888
server.2=10.20.1.2:2888:3888
server.3=10.20.1.3:2888:3888
# zk SASL
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
jaasLoginRenew=3600000
requireClientAuthScheme=sasl
zookeeper.sasl.client=true
# New in 3.6.0 (inclusive): whether a client must pass SASL authentication before a session can be established
sessionRequireClientSASLAuth=true
## Enable SASL for quorum authentication (off by default)
quorum.auth.enableSasl=true
## When this node acts as a learner, it sends authentication credentials
quorum.auth.learnerRequireSasl=true
# When true, learners must send credentials when connecting, otherwise the connection is rejected
quorum.auth.serverRequireSasl=true
# Context name in the JAAS configuration
quorum.auth.learner.loginContext=QuorumLearner
# Context name in the JAAS configuration
quorum.auth.server.loginContext=QuorumServer
# Recommended: twice the number of ZooKeeper nodes
quorum.cnxn.threads.size=20
Configure the ZooKeeper log4j directory and log level:
#######################################
[kafka@kafka1 conf]$ cat log4j.properties
# Copyright 2012 The Apache Software Foundation
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Define some default values that can be overridden by system properties
zookeeper.root.logger=INFO, CONSOLE
zookeeper.console.threshold=INFO
zookeeper.log.dir="/data/log/zookeeper/logs"
zookeeper.log.file=zookeeper.log
zookeeper.log.threshold=INFO
zookeeper.log.maxfilesize=256MB
zookeeper.log.maxbackupindex=20
zookeeper.tracelog.dir=${zookeeper.log.dir}
zookeeper.tracelog.file=zookeeper_trace.log
log4j.rootLogger=${zookeeper.root.logger}

#
# console
# Add "console" to rootlogger above if you want to use this
#
log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.Threshold=${zookeeper.console.threshold}
log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n

#
# Add ROLLINGFILE to rootLogger to get log file output
#
log4j.appender.ROLLINGFILE=org.apache.log4j.RollingFileAppender
log4j.appender.ROLLINGFILE.Threshold=${zookeeper.log.threshold}
log4j.appender.ROLLINGFILE.File=/data/log/zookeeper/logs/${zookeeper.log.file}
log4j.appender.ROLLINGFILE.MaxFileSize=${zookeeper.log.maxfilesize}
log4j.appender.ROLLINGFILE.MaxBackupIndex=${zookeeper.log.maxbackupindex}
log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n

#
# Add TRACEFILE to rootLogger to get log file output
# Log TRACE level and above messages to a log file
#
log4j.appender.TRACEFILE=org.apache.log4j.FileAppender
log4j.appender.TRACEFILE.Threshold=TRACE
log4j.appender.TRACEFILE.File=/data/log/zookeeper/logs/${zookeeper.tracelog.file}
log4j.appender.TRACEFILE.layout=org.apache.log4j.PatternLayout
### Notice we are including log4j's NDC here (%x)
log4j.appender.TRACEFILE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L][%x] - %m%n

#
# zk audit logging
#
zookeeper.auditlog.file=zookeeper_audit.log
zookeeper.auditlog.threshold=INFO
audit.logger=INFO, RFAAUDIT
log4j.logger.org.apache.zookeeper.audit.Log4jAuditLogger=${audit.logger}
log4j.additivity.org.apache.zookeeper.audit.Log4jAuditLogger=false
log4j.appender.RFAAUDIT=org.apache.log4j.RollingFileAppender
log4j.appender.RFAAUDIT.File=/data/log/zookeeper/logs/${zookeeper.auditlog.file}
log4j.appender.RFAAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.RFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
log4j.appender.RFAAUDIT.Threshold=${zookeeper.auditlog.threshold}

# Max log file size of 10MB
log4j.appender.RFAAUDIT.MaxFileSize=10MB
log4j.appender.RFAAUDIT.MaxBackupIndex=10
Create a zookeeper-env.sh file in the conf directory to set the JVM parameters and add the ZooKeeper SASL authentication file:
[kafka@kafka1 conf]$ cat zookeeper-env.sh
JAVA_HOME=/usr/local/openjdk-1.8.0.392
ZOO_LOG_DIR=/data/log/zookeeper/logs
ZOO_LOG4J_PROP="WARN,ROLLINGFILE"
SERVER_JVMFLAGS="-server -Xms2G -Xmx2G -Xmn384m -Xss228k -XX:+UseG1GC -XX:G1ReservePercent=25 -XX:InitiatingHeapOccupancyPercent=30 -XX:+DisableExplicitGC -XX:+HeapDumpOnOutOfMemoryError -Djava.security.auth.login.config=/usr/local/zookeeper/conf/zk_jaas.conf -Dzookeeper.requireClientAuthScheme=sasl"
2. Configure the ZooKeeper JAAS file
The contents of zk_jaas.conf are shown below; if the file does not exist, create and edit it with vi:
[kafka@kafka1 conf]$ cat zk_jaas.conf
QuorumServer {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    user_kafka="abc123456";     # define a kafka account with password abc123456
};
QuorumLearner {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="kafka"
    password="abc123456";
};
Server {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    user_kafka="abc123456";     # define the username and password on the zk server side
};
Client {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="kafka"            # username the client uses to connect to the zk server
    password="abc123456";       # password
};
Note: the kafka username is what brokers and other ZooKeeper clients use when connecting to the ZooKeeper servers. If the Client section is not defined, ZooKeeper clients cannot connect to the ZooKeeper cluster.
Modify the ZooKeeper bin/zkEnv.sh script and comment out the export SERVER_JVMFLAGS and CLIENT_JVMFLAGS lines:
[kafka@cld-t-vmw0007-kfk bin]$ cat zkEnv.sh
# default heap for zookeeper server
#ZK_SERVER_HEAP="${ZK_SERVER_HEAP:-1000}"
#export SERVER_JVMFLAGS="-Xmx${ZK_SERVER_HEAP}m $SERVER_JVMFLAGS"

# default heap for zookeeper client
#ZK_CLIENT_HEAP="${ZK_CLIENT_HEAP:-256}"
#export CLIENT_JVMFLAGS="-Xmx${ZK_CLIENT_HEAP}m $CLIENT_JVMFLAGS"
Modify the zkCli.sh script to add the SASL authentication file; the client uses the Client section:
[kafka@cld-t-vmw0007-kfk bin]$ cat zkCli.sh
ZOO_LOG_FILE=zookeeper-$USER-cli-$HOSTNAME.log

"$JAVA" "-Dzookeeper.log.dir=${ZOO_LOG_DIR}" "-Dzookeeper.root.logger=${ZOO_LOG4J_PROP}" \
    "-Dzookeeper.log.file=${ZOO_LOG_FILE}" "-Djava.security.auth.login.config=/usr/local/zookeeper/conf/zk_jaas.conf" \
    -cp "$CLASSPATH" $CLIENT_JVMFLAGS $JVMFLAGS \
    org.apache.zookeeper.ZooKeeperMain "$@"
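Applications that connect to this ensemble directly (rather than through zkCli.sh) can load the same Client JAAS section by pointing the java.security.auth.login.config system property at zk_jaas.conf before creating the client. The following is only a minimal sketch under that assumption (the class name is made up for this example) and requires the ZooKeeper Java client on the classpath:

import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class SaslZkClientDemo {
    public static void main(String[] args) throws Exception {
        // Point the JVM at the JAAS file; the ZooKeeper client uses its "Client" section for SASL.
        System.setProperty("java.security.auth.login.config",
                "/usr/local/zookeeper/conf/zk_jaas.conf");

        CountDownLatch connected = new CountDownLatch(1);
        ZooKeeper zk = new ZooKeeper("10.20.1.1:2181,10.20.1.2:2181,10.20.1.3:2181",
                30000,
                event -> {
                    if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                        connected.countDown();
                    }
                });
        connected.await();

        // A simple read to verify that the SASL-authenticated session works.
        System.out.println(zk.getChildren("/", false));
        zk.close();
    }
}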
Configure Kafka SASL dynamic authentication
SASL/SCRAM authentication stores credentials in ZooKeeper, and kafka-configs.sh is used to create the credentials there. For each SCRAM mechanism, a configuration entry named after the mechanism must be added to create the credential, so the credentials used for inter-broker communication must be created before the Kafka brokers are started.
In this setup, authentication between Kafka and producers/consumers uses SASL/PLAIN and SASL/SCRAM together, and authorization uses ACLs. PLAIN users are hard-coded in the JAAS file and cannot be added dynamically; SCRAM supports adding users dynamically.
1. Create users
The Kafka processes do not need to be running. Users are created with the kafka-configs.sh script; running it directly will not succeed, because the script must be authorized to communicate with ZooKeeper.
First, create the corresponding JAAS file for the brokers. Kafka's JAAS configuration file defines the login class, the superuser password, and the list of managed account credentials:
vim kafka_server_jaas.conf
KafkaServer {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="admin"
password="abc123456"
user_admin="abc123456"
user_producer="abc234567"
user_consumer="abc345678";
};
KafkaClient {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="admin"
password="abc123456"
user_producer="abc234567"
user_consumer="abc345678";
};
Client {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="kafka"
password="abc123456";
};
In the KafkaServer section, username=admin is the Kafka administrator account and password, and the user_xxx entries that follow are pre-defined credentials for ordinary accounts.
The middle section configures the accounts and passwords used for PLAIN authentication; for example, producer is a username and abc234567 is its password.
The Client section configures the username and password the brokers use to connect to ZooKeeper; they must match the user_kafka account and password defined earlier in zk_jaas.conf (kafka/abc123456).
Modify the kafka-topics.sh script to load the zk_jaas.conf authentication file:
[kafka@kafka1 bin]$ cat kafka-topics.sh
exec $(dirname $0)/kafka-run-class.sh -Xss512k -Djava.security.auth.login.config=/usr/local/zookeeper/conf/zk_jaas.conf kafka.admin.TopicCommand "$@"
The first step in configuring SASL/SCRAM authentication is configuring the users that may connect to the Kafka cluster. This example creates three users: admin, producer, and consumer. The admin user is used for authentication between brokers, the producer user for producers connecting to Kafka, and the consumer user for consumers connecting to Kafka.
./kafka-configs.sh --zookeeper localhost:2181 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=abc123456],SCRAM-SHA-512=[password=abc123456]' --entity-type users --entity-name admin
./kafka-configs.sh --zookeeper localhost:2181 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=abc234567],SCRAM-SHA-512=[password=abc234567]' --entity-type users --entity-name producer
./kafka-configs.sh --zookeeper localhost:2181 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=abc345678],SCRAM-SHA-512=[password=abc345678]' --entity-type users --entity-name consumer
2. View the created user information
The kafka-configs script can also display the user information created for SASL/SCRAM authentication. Use the following commands to inspect the users just created.
# (a single user can be specified with --entity-name, e.g. producer, as below)
./kafka-configs.sh --zookeeper 10.20.1.1:2181 --describe --entity-type users --entity-name admin
./kafka-configs.sh --zookeeper 10.20.1.1:2181 --describe --entity-type users --entity-name producer
./kafka-configs.sh --zookeeper 10.20.1.1:2181 --describe --entity-type users --entity-name consumer
3. Log in to ZooKeeper to check whether the users were created successfully
View from the ZooKeeper client command line:
./zkCli.sh -server 10.20.1.1:2181 ls /config/users
4. Kafka server configuration file server.properties: configure the authentication protocol and the authorizer implementation class
Comment out everything else in server.properties and append the following content:
[kafka@kafka1 config]$ cat server.properties | grep -v "#" | sed '/^$/d'
broker.id=1
listeners=SASL_PLAINTEXT://:9092
advertised.listeners=SASL_PLAINTEXT://10.20.1.1:9092
sasl.enabled.mechanisms=SCRAM-SHA-256,PLAIN
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
security.inter.broker.protocol=SASL_PLAINTEXT
allow.everyone.if.no.acl.found=false
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
super.users=User:admin
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data/log/kafka/data
num.partitions=3
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=24
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=10.20.1.1:2181,10.20.1.2:2181,10.20.1.3:2181
zookeeper.connection.timeout.ms=18000
group.initial.rebalance.delay.ms=0
delete.topic.enable=true
auto.create.topics.enable=false
Modify the Kafka log4j configuration file:
[kafka@kafka1 config]$ cat log4j.properties
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Unspecified loggers and loggers with additivity=true output to server.log and stdout
# Note that INFO only applies to unspecified loggers, the log level of the child logger is used otherwise
log4j.rootLogger=DEBUG, stdout, kafkaAppender

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.kafkaAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.kafkaAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.kafkaAppender.File=/data/log/kafka/logs/server.log
log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.stateChangeAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.stateChangeAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.stateChangeAppender.File=/data/log/kafka/logs/state-change.log
log4j.appender.stateChangeAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.stateChangeAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.requestAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.requestAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.requestAppender.File=/data/log/kafka/logs/kafka-request.log
log4j.appender.requestAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.requestAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.cleanerAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.cleanerAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.cleanerAppender.File=/data/log/kafka/logs/log-cleaner.log
log4j.appender.cleanerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.cleanerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.controllerAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.controllerAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.controllerAppender.File=/data/log/kafka/logs/controller.log
log4j.appender.controllerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.controllerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.authorizerAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.authorizerAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.authorizerAppender.File=/data/log/kafka/logs/kafka-authorizer.log
log4j.appender.authorizerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.authorizerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

# Change the line below to adjust ZK client logging
log4j.logger.org.apache.zookeeper=DEBUG

# Change the two lines below to adjust the general broker logging level (output to server.log and stdout)
log4j.logger.kafka=DEBUG
log4j.logger.org.apache.kafka=DEBUG

# Change to DEBUG or TRACE to enable request logging
log4j.logger.kafka.request.logger=DEBUG, requestAppender
log4j.additivity.kafka.request.logger=false

# Uncomment the lines below and change log4j.logger.kafka.network.RequestChannel$ to TRACE for additional output
# related to the handling of requests
#log4j.logger.kafka.network.Processor=TRACE, requestAppender
#log4j.logger.kafka.server.KafkaApis=TRACE, requestAppender
#log4j.additivity.kafka.server.KafkaApis=false
log4j.logger.kafka.network.RequestChannel$=DEBUG, requestAppender
log4j.additivity.kafka.network.RequestChannel$=false

log4j.logger.kafka.controller=TRACE, controllerAppender
log4j.additivity.kafka.controller=false

log4j.logger.kafka.log.LogCleaner=DEBUG, cleanerAppender
log4j.additivity.kafka.log.LogCleaner=false

log4j.logger.state.change.logger=DEBUG, stateChangeAppender
log4j.additivity.state.change.logger=false

# Access denials are logged at DEBUG level, change to DEBUG to also log allowed accesses
log4j.logger.kafka.authorizer.logger=DEBUG, authorizerAppender
log4j.additivity.kafka.authorizer.logger=false
Both consumer.properties and producer.properties need the following settings. Append them to the end of producer.properties:
[kafka@kafka1 config]$ cat producer.properties
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
Append the same settings to the end of consumer.properties:
[kafka@kafka1 config]$ cat consumer.properties
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
Producer configuration
Test the producer with the kafka-console-producer.sh script. Because authentication and authorization are enabled, attempting to send messages with the console producer will fail, since no valid authenticated user is specified. The client therefore needs its own configuration: create a file named producer.conf for the producer program in the config directory, with the following contents:
[kafka@kafka1 config]$ cat producer.conf
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="producer" password="abc234567";
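A Java producer uses the same three SASL settings as producer.conf. The class below is only an illustrative sketch (the class name is made up for this example); it assumes a reasonably recent Kafka Java client, the broker addresses from this guide, the producer/abc234567 SCRAM credential created above, and a topic named test1 on which that user has been granted Write permission (see the ACL commands later in this guide):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SaslScramProducerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "10.20.1.1:9092,10.20.1.2:9092,10.20.1.3:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        // Same settings as producer.conf: SASL over plaintext with SCRAM-SHA-256.
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "SCRAM-SHA-256");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.scram.ScramLoginModule required "
                        + "username=\"producer\" password=\"abc234567\";");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test1", "key", "hello with SASL/SCRAM"));
            producer.flush();
        }
    }
}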
Consumer configuration
Test the consumer with the kafka-console-consumer.sh script. Because authentication and authorization are enabled, the client also needs its own configuration: create consumer.conf for the consumer user, and grant that user Read permission on the topic.
[kafka@kafka1 config]$ cat consumer.conf
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="consumer" password="abc345678";
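The consumer side looks almost the same in Java. Again a minimal sketch with a made-up class name, assuming the consumer/abc345678 credential, Read permission on topic test1, and Read permission on the consumer group test-group, as granted in the verification step below:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SaslScramConsumerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "10.20.1.1:9092,10.20.1.2:9092,10.20.1.3:9092");
        props.put("group.id", "test-group");
        props.put("auto.offset.reset", "earliest");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        // Same settings as consumer.conf: SASL over plaintext with SCRAM-SHA-256.
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "SCRAM-SHA-256");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.scram.ScramLoginModule required "
                        + "username=\"consumer\" password=\"abc345678\";");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("test1"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
            }
        }
    }
}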
Start Kafka
bin/kafka-server-start.sh -daemon config/server.properties
With authentication enabled, a producer needs Write permission to write to a topic, and a consumer correspondingly needs Read permission. Permissions are managed with kafka-acls.sh, which fails when run directly with the default script; the JAAS authentication file must be added:
[kafka@kafka1 bin]$ cat kafka-acls.sh
exec $(dirname $0)/kafka-run-class.sh -Djava.security.auth.login.config=/opt/app/kafka/config/kafka_server_jaas.conf kafka.admin.AclCommand "$@"
With authentication enabled, creating topics also requires the JAAS authentication file:
[kafka@kafka1 bin]$ cat kafka-topics.sh
exec $(dirname $0)/kafka-run-class.sh -Xss512k -Djava.security.auth.login.config=/opt/app/zookeeper/conf/zk_jaas.conf kafka.admin.TopicCommand "$@"
To test the consumer with the kafka-console-consumer.sh script under authentication and authorization, add the JAAS authentication file:
[kafka@kafka1 bin]$ cat kafka-console-consumer.sh
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    export KAFKA_HEAP_OPTS="-Xmx512M -Djava.security.auth.login.config=/opt/app/kafka/config/kafka_client_consumer.conf"
fi
exec $(dirname $0)/kafka-run-class.sh kafka.tools.ConsoleConsumer "$@"
To test the producer with the kafka-console-producer.sh script under authentication and authorization, add the JAAS authentication file:
[kafka@kafka1 bin]$ cat kafka-console-producer.sh
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    export KAFKA_HEAP_OPTS="-Xmx512M -Djava.security.auth.login.config=/opt/app/kafka/config/kafka_client_producer.conf"
fi
exec $(dirname $0)/kafka-run-class.sh kafka.tools.ConsoleProducer "$@"
Test and verify
# Create a topic
./kafka-topics.sh --zookeeper 10.225.12.44:2181 --create --partitions 3 --replication-factor 3 --topic test1
# Grant write permission
./kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=10.20.1.1:2181 --add --allow-principal User:producer --operation Write --topic test1
# Grant read permission
./kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=10.20.1.1:2181 --add --allow-principal User:consumer --operation Read --topic test1
# Grant consumer-group permission
./kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=10.20.1.1:2181 --add --allow-principal User:consumer --operation Read --group test-group
# Produce messages
./kafka-console-producer.sh --broker-list 10.20.1.1:9092,10.20.1.2:9092,10.20.1.3:9092 --topic test1 --producer.config /opt/app/kafka/config/producer.conf
# Consume messages
./kafka-console-consumer.sh --bootstrap-server 10.20.1.1:9092,10.20.1.2:9092,10.20.1.3:9092 --topic test1 --from-beginning --consumer.config /opt/app/kafka/config/consumer.conf --group test-group
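The topic creation and ACL grants above can also be done programmatically through the Kafka AdminClient API (on broker versions that support the CreateTopics and CreateAcls requests). The following is only a sketch under those assumptions, authenticating as the admin superuser created earlier; the class name is made up for the example:

import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.acl.AccessControlEntry;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePattern;
import org.apache.kafka.common.resource.ResourceType;

public class SaslAclAdminDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "10.20.1.1:9092,10.20.1.2:9092,10.20.1.3:9092");
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "SCRAM-SHA-256");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.scram.ScramLoginModule required "
                        + "username=\"admin\" password=\"abc123456\";");

        try (AdminClient admin = AdminClient.create(props)) {
            // Create the topic (equivalent to kafka-topics.sh --create).
            admin.createTopics(Arrays.asList(new NewTopic("test1", 3, (short) 3))).all().get();

            // Allow User:producer to write and User:consumer to read the topic,
            // and allow User:consumer to use the test-group consumer group.
            AclBinding producerWrite = new AclBinding(
                    new ResourcePattern(ResourceType.TOPIC, "test1", PatternType.LITERAL),
                    new AccessControlEntry("User:producer", "*", AclOperation.WRITE, AclPermissionType.ALLOW));
            AclBinding consumerRead = new AclBinding(
                    new ResourcePattern(ResourceType.TOPIC, "test1", PatternType.LITERAL),
                    new AccessControlEntry("User:consumer", "*", AclOperation.READ, AclPermissionType.ALLOW));
            AclBinding groupRead = new AclBinding(
                    new ResourcePattern(ResourceType.GROUP, "test-group", PatternType.LITERAL),
                    new AccessControlEntry("User:consumer", "*", AclOperation.READ, AclPermissionType.ALLOW));
            admin.createAcls(Arrays.asList(producerWrite, consumerRead, groupRead)).all().get();
        }
    }
}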
Add username/password credentials for new producer and consumer users:
./kafka-configs.sh --zookeeper localhost:2181 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=abc234567],SCRAM-SHA-512=[password=abc234567]' --entity-type users --entity-name systemuser
./kafka-configs.sh --zookeeper localhost:2181 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=abc345678],SCRAM-SHA-512=[password=abc345678]' --entity-type users --entity-name fengjian
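On Kafka 2.7 and later, SCRAM credentials can also be created or updated through the AdminClient API against a running cluster, instead of writing to ZooKeeper with kafka-configs.sh. A minimal sketch, assuming such a broker version and the admin superuser from above (class name made up for the example); the iteration counts mirror the commands above, with 8192 for SCRAM-SHA-256 and the 4096 default for SCRAM-SHA-512:

import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.ScramCredentialInfo;
import org.apache.kafka.clients.admin.ScramMechanism;
import org.apache.kafka.clients.admin.UserScramCredentialUpsertion;

public class AddScramUserDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "10.20.1.1:9092");
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "SCRAM-SHA-256");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.scram.ScramLoginModule required "
                        + "username=\"admin\" password=\"abc123456\";");

        try (AdminClient admin = AdminClient.create(props)) {
            // Create SCRAM-SHA-256 and SCRAM-SHA-512 credentials for the new user,
            // matching what the kafka-configs.sh command above does for "systemuser".
            admin.alterUserScramCredentials(Arrays.asList(
                    new UserScramCredentialUpsertion("systemuser",
                            new ScramCredentialInfo(ScramMechanism.SCRAM_SHA_256, 8192), "abc234567"),
                    new UserScramCredentialUpsertion("systemuser",
                            new ScramCredentialInfo(ScramMechanism.SCRAM_SHA_512, 4096), "abc234567")
            )).all().get();
        }
    }
}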
Producer configuration and authorization
As with the earlier producer user, the console producer needs a client configuration file that names a valid authenticated user. Create a configuration file for the new user in the config directory with the following contents:
[kafka@kafka1 config]$ cat systemuser.conf
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="systemuser" password="abc234567";
Consumer configuration and authorization
Likewise, create a client configuration file for the new consumer user and grant it Read permission on the topic.
[kafka@kafka1 config]$ cat fengjian.conf
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="fengjian" password="abc345678";
Test and verify
# Create a topic
./kafka-topics.sh --zookeeper 10.225.12.44:2181 --create --partitions 3 --replication-factor 3 --topic test3
# Grant write permission
./kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=10.20.1.1:2181 --add --allow-principal User:systemuser --operation Write --topic test3
# Grant read permission
./kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=10.20.1.1:2181 --add --allow-principal User:fengjian --operation Read --topic test3
# Grant consumer-group permission
./kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=10.20.1.1:2181 --add --allow-principal User:fengjian --operation Read --group test3-group
# Produce messages
./kafka-console-producer.sh --broker-list 10.20.1.1:9092,10.20.1.2:9092,10.20.1.3:9092 --topic test3 --producer.config /opt/app/kafka/config/systemuser.conf
# Consume messages
./kafka-console-consumer.sh --bootstrap-server 10.20.1.1:9092,10.20.1.2:9092,10.20.1.3:9092 --topic test3 --from-beginning --consumer.config /opt/app/kafka/config/fengjian.conf --group test3-group
Delete the SCRAM credentials of the user [systemuser]:
kafka-configs.sh --zookeeper 10.20.1.1:2181 --alter --delete-config 'SCRAM-SHA-256' --entity-type users --entity-name systemuser
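On Kafka 2.7 and later, the same removal can be expressed through the AdminClient credential API, mirroring the upsert sketch above (class name made up for the example):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.ScramMechanism;
import org.apache.kafka.clients.admin.UserScramCredentialDeletion;

public class DeleteScramUserDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "10.20.1.1:9092");
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "SCRAM-SHA-256");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.scram.ScramLoginModule required "
                        + "username=\"admin\" password=\"abc123456\";");

        try (AdminClient admin = AdminClient.create(props)) {
            // Remove only the SCRAM-SHA-256 credential of "systemuser",
            // equivalent to kafka-configs.sh --delete-config 'SCRAM-SHA-256'.
            admin.alterUserScramCredentials(Collections.singletonList(
                    new UserScramCredentialDeletion("systemuser", ScramMechanism.SCRAM_SHA_256)
            )).all().get();
        }
    }
}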