Setting up Kafka with SASL username/password authentication in Docker

I recently needed a Kafka instance with authentication. Most guides online use the confluentinc images, which gave me various problems, so I built a password-protected Kafka from wurstmeister/zookeeper and wurstmeister/kafka instead. Below is a brief record of the process.

1 Configure ZooKeeper

1.1 Create a directory for the configuration files

/home/tool/kafka-sasl/conf
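
For example:

mkdir -p /home/tool/kafka-sasl/conf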

1.2 Inside that directory, create a new ZooKeeper configuration file, zoo.cfg

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/opt/zookeeper-3.4.13/data
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
autopurge.purgeInterval=1

authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
jaasLoginRenew=3600000
zookeeper.sasl.client=true

1.3 Create the SASL credential file, server_jaas.conf

In these login sections, username/password are the credentials the process itself logs in with, and each user_<name>="<password>" entry defines an account that connecting clients may authenticate as.

Client {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="admin"
    password="12345678";
};

Server {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="admin"
    password="12345678"
    user_super="12345678"
    user_admin="12345678";
};

KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="12345678"
    user_admin="12345678";
};

KafkaClient {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="12345678";
};

1.4 Create the configuration files log4j.properties and configuration.xsl (alternatively, start ZooKeeper once without the extra parameters and copy them out of the container from /opt/zookeeper-3.4.13/conf/).
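
If you go the copy-out route, something like the following works (a sketch; the temporary container name zk_tmp is arbitrary):

docker run -d --name zk_tmp wurstmeister/zookeeper
docker cp zk_tmp:/opt/zookeeper-3.4.13/conf/log4j.properties /home/tool/kafka-sasl/conf/
docker cp zk_tmp:/opt/zookeeper-3.4.13/conf/configuration.xsl /home/tool/kafka-sasl/conf/
docker rm -f zk_tmp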

log4j.properties

# Define some default values that can be overridden by system properties
zookeeper.root.logger=INFO, CONSOLE
zookeeper.console.threshold=INFO
zookeeper.log.dir=.
zookeeper.log.file=zookeeper.log
zookeeper.log.threshold=DEBUG
zookeeper.tracelog.dir=.
zookeeper.tracelog.file=zookeeper_trace.log

#
# ZooKeeper Logging Configuration
#

# Format is "<default threshold> (, <appender>)+

# DEFAULT: console appender only
log4j.rootLogger=${zookeeper.root.logger}

# Example with rolling log file
#log4j.rootLogger=DEBUG, CONSOLE, ROLLINGFILE

# Example with rolling log file and tracing
#log4j.rootLogger=TRACE, CONSOLE, ROLLINGFILE, TRACEFILE

#
# Log INFO level and above messages to the console
#
log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.Threshold=${zookeeper.console.threshold}
log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n

#
# Add ROLLINGFILE to rootLogger to get log file output
#    Log DEBUG level and above messages to a log file
log4j.appender.ROLLINGFILE=org.apache.log4j.RollingFileAppender
log4j.appender.ROLLINGFILE.Threshold=${zookeeper.log.threshold}
log4j.appender.ROLLINGFILE.File=${zookeeper.log.dir}/${zookeeper.log.file}

# Max log file size of 10MB
log4j.appender.ROLLINGFILE.MaxFileSize=10MB
# uncomment the next line to limit number of backup files
log4j.appender.ROLLINGFILE.MaxBackupIndex=10

log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n


#
# Add TRACEFILE to rootLogger to get log file output
#    Log DEBUG level and above messages to a log file
log4j.appender.TRACEFILE=org.apache.log4j.FileAppender
log4j.appender.TRACEFILE.Threshold=TRACE
log4j.appender.TRACEFILE.File=${zookeeper.tracelog.dir}/${zookeeper.tracelog.file}

log4j.appender.TRACEFILE.layout=org.apache.log4j.PatternLayout
### Notice we are including log4j's NDC here (%x)
log4j.appender.TRACEFILE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L][%x] - %m%n

configuration.xsl

<?xml version="1.0"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
<xsl:output method="html"/>
<xsl:template match="configuration">
<html>
<body>
<table border="1">
<tr>
 <td>name</td>
 <td>value</td>
 <td>description</td>
</tr>
<xsl:for-each select="property">
<tr>
  <td><a name="{name}"><xsl:value-of select="name"/></a></td>
  <td><xsl:value-of select="value"/></td>
  <td><xsl:value-of select="description"/></td>
</tr>
</xsl:for-each>
</table>
</body>
</html>
</xsl:template>
</xsl:stylesheet>

1.5 Start command

docker run --name zookeeper_sasl -p 2181:2181 \
  -e SERVER_JVMFLAGS="-Djava.security.auth.login.config=/opt/zookeeper-3.4.13/secrets/server_jaas.conf" \
  -v /home/tool/kafka-sasl/conf:/opt/zookeeper-3.4.13/conf \
  -v /home/tool/kafka-sasl/conf:/opt/zookeeper-3.4.13/secrets/ \
  --rm -it wurstmeister/zookeeper
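
Since the container runs in the foreground here, the ZooKeeper log prints straight to the terminal; from a second terminal you can also tail it with:

docker logs -f zookeeper_sasl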

2 Configure Kafka

Kafka reuses the server_jaas.conf file above as its credential file.

2.1 Start command

docker run --name kafka_sasl -p 59092:9092 \
  --link zookeeper_sasl:zookeeper_sasl \
  -e KAFKA_BROKER_ID=0 \
  -e KAFKA_ADVERTISED_LISTENERS=SASL_PLAINTEXT://10.18.104.202:59092 \
  -e KAFKA_ADVERTISED_PORT=59092 \
  -e KAFKA_LISTENERS=SASL_PLAINTEXT://0.0.0.0:9092 \
  -e KAFKA_SECURITY_INTER_BROKER_PROTOCOL=SASL_PLAINTEXT \
  -e KAFKA_PORT=59092 \
  -e KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL=PLAIN \
  -e KAFKA_SASL_ENABLED_MECHANISMS=PLAIN \
  -e KAFKA_AUTHORIZER_CLASS_NAME=kafka.security.auth.SimpleAclAuthorizer \
  -e KAFKA_SUPER_USERS=User:admin \
  -e KAFKA_ALLOW_EVERYONE_IF_NO_ACL_FOUND=false \
  -e KAFKA_ZOOKEEPER_CONNECT='zookeeper_sasl:2181' \
  -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
  -e KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS=0 \
  -e KAFKA_OPTS="-Djava.security.auth.login.config=/opt/kafka/secrets/server_jaas.conf" \
  -v /home/tool/kafka-sasl/conf/:/opt/kafka/secrets/ \
  --rm -it wurstmeister/kafka:2.11-0.11.0.3

Starting two more brokers on different host ports (each with a unique KAFKA_BROKER_ID) gives you a cluster; a sketch of a second broker follows.
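
A minimal sketch of broker 1, assuming host port 59093 is free; only the container name, host port, broker id, and advertised address/port change:

docker run --name kafka_sasl_1 -p 59093:9092 \
  --link zookeeper_sasl:zookeeper_sasl \
  -e KAFKA_BROKER_ID=1 \
  -e KAFKA_ADVERTISED_LISTENERS=SASL_PLAINTEXT://10.18.104.202:59093 \
  -e KAFKA_ADVERTISED_PORT=59093 \
  -e KAFKA_LISTENERS=SASL_PLAINTEXT://0.0.0.0:9092 \
  -e KAFKA_SECURITY_INTER_BROKER_PROTOCOL=SASL_PLAINTEXT \
  -e KAFKA_PORT=59093 \
  -e KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL=PLAIN \
  -e KAFKA_SASL_ENABLED_MECHANISMS=PLAIN \
  -e KAFKA_AUTHORIZER_CLASS_NAME=kafka.security.auth.SimpleAclAuthorizer \
  -e KAFKA_SUPER_USERS=User:admin \
  -e KAFKA_ALLOW_EVERYONE_IF_NO_ACL_FOUND=false \
  -e KAFKA_ZOOKEEPER_CONNECT='zookeeper_sasl:2181' \
  -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
  -e KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS=0 \
  -e KAFKA_OPTS="-Djava.security.auth.login.config=/opt/kafka/secrets/server_jaas.conf" \
  -v /home/tool/kafka-sasl/conf/:/opt/kafka/secrets/ \
  --rm -it wurstmeister/kafka:2.11-0.11.0.3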

3 Verification

3.1 Create a topic
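
The commands below run inside the broker container, where the KAFKA_OPTS set at startup should already point the CLI tools at the JAAS file; for example:

docker exec -it kafka_sasl bash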

kafka-topics.sh --zookeeper 10.18.104.202:2181 --create --partitions 2 --replication-factor 1 --topic testTopic

kafka-topics.sh --zookeeper 10.18.104.202:2181 --list

kafka-topics.sh --zookeeper 10.18.104.202:2181 --describe --topic testTopic

3.2 Produce a message

Before running the script, edit /opt/kafka/bin/kafka-console-producer.sh so that KAFKA_HEAP_OPTS also loads the JAAS file:

export KAFKA_HEAP_OPTS="-Xmx512M -Djava.security.auth.login.config=/opt/kafka/secrets/server_jaas.conf"

Then edit /opt/kafka/config/producer.properties:

metadata.broker.list=<Kafka external IP>:<external port>

and add:

security.protocol=SASL_PLAINTEXT

sasl.mechanism=PLAIN
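
As an alternative to editing the launcher script, Kafka clients from 0.10.2 onward can carry the credentials directly in the properties file via sasl.jaas.config; a sketch using the same admin account as above:

sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="12345678";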


Run the script:

./bin/kafka-console-producer.sh --broker-list 10.18.104.202:59092 --topic testTopic --producer.config config/producer.properties

3.3 Consume a message

Before running the script, edit /opt/kafka/bin/kafka-console-consumer.sh in the same way:

export KAFKA_HEAP_OPTS="-Xmx512M -Djava.security.auth.login.config=/opt/kafka/secrets/server_jaas.conf"

Then edit /opt/kafka/config/consumer.properties (note: the console consumer below is started with --bootstrap-server, so it uses the new consumer and zookeeper.connect is effectively ignored; the sasl.jaas.config shortcut from 3.2 also works here):

zookeeper.connect=<ZooKeeper external IP>:<external port>

and add:

security.protocol=SASL_PLAINTEXT

sasl.mechanism=PLAIN


Run the script:

./bin/kafka-console-consumer.sh --bootstrap-server 10.18.104.202:59092 --topic testTopic --from-beginning --consumer.config config/consumer.properties

A line typed at the producer prompt should now show up in this consumer.

