Other articles in the big data security series
https://www.cnblogs.com/bainianminguo/p/12548076.html-----------installing Kerberos
https://www.cnblogs.com/bainianminguo/p/12548334.html-----------Kerberos authentication for Hadoop
https://www.cnblogs.com/bainianminguo/p/12548175.html-----------Kerberos authentication for ZooKeeper
https://www.cnblogs.com/bainianminguo/p/12584732.html-----------Kerberos authentication for Hive
https://www.cnblogs.com/bainianminguo/p/12584880.html-----------search-guard authentication for Elasticsearch
https://www.cnblogs.com/bainianminguo/p/12639821.html-----------Kerberos authentication for Flink
https://www.cnblogs.com/bainianminguo/p/12639887.html-----------Kerberos authentication for Spark
This post covers configuring Kerberos authentication for ZooKeeper.
I. ZooKeeper installation
1. Extract the package, rename the directory, and create the data directories
tar -zxvf /data/apache-zookeeper-3.5.5-bin.tar.gz -C /usr/local/
mv /usr/local/apache-zookeeper-3.5.5-bin/ /usr/local/zookeeper/
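The data and log directories referenced later in zoo.cfg (dataDir and dataLogDir) do not exist yet, so create them now; a minimal sketch, using the same paths as the configuration below:

mkdir -p /usr/local/zookeeper/data /usr/local/zookeeper/log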
2. Inspect the extracted directory
[root@localhost zookeeper]# ll
total 36
drwxr-xr-x. 2 2002 2002  4096 Apr  9  2019 bin
drwxr-xr-x. 2 2002 2002    88 Feb 27 22:09 conf
drwxr-xr-x. 2 root root     6 Feb 27 21:48 data
drwxr-xr-x. 5 2002 2002  4096 May  3  2019 docs
drwxr-xr-x. 2 root root  4096 Feb 27 21:25 lib
-rw-r--r--. 1 2002 2002 11358 Feb 15  2019 LICENSE.txt
drwxr-xr-x. 2 root root     6 Feb 27 21:48 log
-rw-r--r--. 1 2002 2002   432 Apr  9  2019 NOTICE.txt
-rw-r--r--. 1 2002 2002  1560 May  3  2019 README.md
-rw-r--r--. 1 2002 2002  1347 Apr  2  2019 README_packaging.txt
3. Edit the configuration file
[root@localhost conf]# ll
total 16
-rw-r--r--. 1 2002 2002  535 Feb 15  2019 configuration.xsl
-rw-r--r--. 1 2002 2002 2712 Apr  2  2019 log4j.properties
-rw-r--r--. 1 root root  922 Feb 27 21:36 zoo.cfg
-rw-r--r--. 1 2002 2002  922 Feb 15  2019 zoo_sample.cfg
[root@localhost conf]#
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/usr/local/zookeeper/data
dataLogDir=/usr/local/zookeeper/log
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=cluster1_host1:2888:3888
server.2=cluster1_host2:2888:3888
server.3=cluster1_host3:2888:3888
4. Create the myid file
[root@localhost data]# pwd
/usr/local/zookeeper/data
[root@localhost data]# ll
total 4
-rw-r--r--. 1 root root 2 Feb 27 22:10 myid
[root@localhost data]# cat myid
1
[root@localhost data]#
5. Copy the installation directory to the other nodes
scp -r zookeeper/ root@10.8.8.33:/usr/local/
Then change the myid file on each of the other nodes, for example:
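A minimal sketch of setting myid on the remaining nodes, assuming their ids follow the server.N entries in zoo.cfg:

# on the second node
echo 2 > /usr/local/zookeeper/data/myid
# on the third node
echo 3 > /usr/local/zookeeper/data/myid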
6. Start ZooKeeper
[root@localhost bin]# ./zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@localhost bin]# jps
28350 Jps
25135 QuorumPeerMain
[root@localhost bin]# ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: leader
[root@localhost bin]#
II. Kerberos configuration for ZooKeeper
1. Create the Kerberos principals and keytab for the ZooKeeper servers
kadmin.local:  addprinc zookeeper/cluster2-host1
WARNING: no policy specified for zookeeper/cluster2-host1@HADOOP.COM; defaulting to no policy
Enter password for principal "zookeeper/cluster2-host1@HADOOP.COM":
Re-enter password for principal "zookeeper/cluster2-host1@HADOOP.COM":
Principal "zookeeper/cluster2-host1@HADOOP.COM" created.
kadmin.local:  addprinc zookeeper/cluster2-host2
WARNING: no policy specified for zookeeper/cluster2-host2@HADOOP.COM; defaulting to no policy
Enter password for principal "zookeeper/cluster2-host2@HADOOP.COM":
Re-enter password for principal "zookeeper/cluster2-host2@HADOOP.COM":
Principal "zookeeper/cluster2-host2@HADOOP.COM" created.
kadmin.local:  addprinc zookeeper/cluster2-host3
WARNING: no policy specified for zookeeper/cluster2-host3@HADOOP.COM; defaulting to no policy
Enter password for principal "zookeeper/cluster2-host3@HADOOP.COM":
Re-enter password for principal "zookeeper/cluster2-host3@HADOOP.COM":
Principal "zookeeper/cluster2-host3@HADOOP.COM" created.

[root@cluster2-host1 etc]# kadmin.local
Authenticating as principal root/admin@HADOOP.COM with password.
kadmin.local:  ktadd -norandkey -k /etc/security/keytab/zk-server.keytab zookeeper/cluster2-host1
kadmin.local:  ktadd -norandkey -k /etc/security/keytab/zk-server.keytab zookeeper/cluster2-host2
kadmin.local:  ktadd -norandkey -k /etc/security/keytab/zk-server.keytab zookeeper/cluster2-host3
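Before distributing the keytab, it is worth sanity-checking its contents; a sketch using the standard MIT Kerberos klist command (not part of the original steps):

[root@cluster2-host1 ~]# klist -kt /etc/security/keytab/zk-server.keytab
# should list entries for zookeeper/cluster2-host1, cluster2-host2 and cluster2-host3 @HADOOP.COM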
Copy the keytab to every node
[root@cluster2-host1 keytab]# scp zk-server.keytab root@cluster2-host2:/usr/local/zookeeper/conf/
zk-server.keytab                              100% 1664     1.6KB/s   00:00
[root@cluster2-host1 keytab]# scp zk-server.keytab root@cluster2-host1:/usr/local/zookeeper/conf/
zk-server.keytab                              100% 1664     1.6KB/s   00:00
[root@cluster2-host1 keytab]# scp zk-server.keytab root@cluster2-host3:/usr/local/zookeeper/conf/
zk-server.keytab
2. Edit zoo.cfg and append the following settings
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
jaasLoginRenew=3600000
kerberos.removeHostFromPrincipal=true
Sync it to the other nodes
[root@cluster2-host1 keytab]# scp /usr/local/zookeeper/conf/zoo.cfg root@cluster2-host2:/usr/local/zookeeper/conf/
zoo.cfg                                       100% 1207     1.2KB/s   00:00
[root@cluster2-host1 keytab]# scp /usr/local/zookeeper/conf/zoo.cfg root@cluster2-host3:/usr/local/zookeeper/conf/
zoo.cfg
3. Create the jaas.conf file
Server {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/usr/local/zookeeper/conf/zk-server.keytab"
  storeKey=true
  useTicketCache=false
  principal="zookeeper/cluster2-host1@HADOOP.COM";
};
Sync it to the other nodes and change the principal on each node (see the example after the scp output below)
[root@cluster2-host1 conf]# scp jaas.conf root@cluster2-host2:/usr/local/zookeeper/conf/
jaas.conf                                     100%  229     0.2KB/s   00:00
[root@cluster2-host1 conf]# scp jaas.conf root@cluster2-host3:/usr/local/zookeeper/conf/
jaas.conf
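For example, on cluster2-host2 the Server section keeps the same keytab path but uses that host's own principal (a sketch; adjust cluster2-host3 accordingly):

Server {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/usr/local/zookeeper/conf/zk-server.keytab"
  storeKey=true
  useTicketCache=false
  principal="zookeeper/cluster2-host2@HADOOP.COM";
};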
4. Create the client principals
kadmin.local:  addprinc zkcli/cluster2-host1
kadmin.local:  addprinc zkcli/cluster2-host2
kadmin.local:  addprinc zkcli/cluster2-host3

kadmin.local:  ktadd -norandkey -k /etc/security/keytab/zk-clie.keytab zkcli/cluster2-host1
kadmin.local:  ktadd -norandkey -k /etc/security/keytab/zk-clie.keytab zkcli/cluster2-host2
kadmin.local:  ktadd -norandkey -k /etc/security/keytab/zk-clie.keytab zkcli/cluster2-host3
Distribute the keytab file to the other nodes
[root@cluster2-host1 conf]# scp /etc/security/keytab/zk-clie.keytab root@cluster2-host1:/usr/local/zookeeper/conf/
zk-clie.keytab                                100% 1580     1.5KB/s   00:00
[root@cluster2-host1 conf]# scp /etc/security/keytab/zk-clie.keytab root@cluster2-host2:/usr/local/zookeeper/conf/
zk-clie.keytab                                100% 1580     1.5KB/s   00:00
[root@cluster2-host1 conf]# scp /etc/security/keytab/zk-clie.keytab root@cluster2-host3:/usr/local/zookeeper/conf/
zk-clie.keytab
5. Configure the client-jaas.conf file
[root@cluster2-host1 conf]# cat client-jaas.conf
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/usr/local/zookeeper/conf/zk-clie.keytab"
  storeKey=true
  useTicketCache=false
  principal="zkcli/cluster2-host1@HADOOP.COM";
};
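Optionally, you can confirm the client keytab actually works against the KDC before wiring it into ZooKeeper; a sketch using the standard kinit/klist commands (not part of the original steps):

[root@cluster2-host1 conf]# kinit -kt /usr/local/zookeeper/conf/zk-clie.keytab zkcli/cluster2-host1
[root@cluster2-host1 conf]# klist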
Distribute it to the other nodes and change the principal on each node (a sed sketch follows the scp output)
[root@cluster2-host1 conf]# scp client-jaas.conf root@cluster2-host2:/usr/local/zookeeper/conf/
client-jaas.conf                              100%  222     0.2KB/s   00:00
[root@cluster2-host1 conf]# scp client-jaas.conf root@cluster2-host3:/usr/local/zookeeper/conf/
client-jaas.conf
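A minimal sketch of patching the principal in place on each target node instead of editing by hand (assumes GNU sed; run the matching line on host2 and host3 respectively):

# on cluster2-host2
sed -i 's|zkcli/cluster2-host1|zkcli/cluster2-host2|' /usr/local/zookeeper/conf/client-jaas.conf
# on cluster2-host3
sed -i 's|zkcli/cluster2-host1|zkcli/cluster2-host3|' /usr/local/zookeeper/conf/client-jaas.conf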
6. Verify the Kerberos setup
Follow the order below strictly: JVMFLAGS must point at jaas.conf when the server is started, and must be switched to client-jaas.conf before zkCli.sh is run.
[root@cluster2-host1 bin]# export JVMFLAGS="-Djava.security.auth.login.config=/usr/local/zookeeper/conf/jaas.conf"
[root@cluster2-host1 bin]# ./zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@cluster2-host1 bin]# export JVMFLAGS="-Djava.security.auth.login.config=/usr/local/zookeeper/conf/client-jaas.conf"
[root@cluster2-host1 bin]# echo $JVMFLAGS
-Djava.security.auth.login.config=/usr/local/zookeeper/conf/client-jaas.conf
[root@cluster2-host1 bin]# ./zkCli.sh -server cluster2-host1:2181
[zk: cluster2-host1:2181(CONNECTED) 2] create /abcd "abcdata"
Created /abcd
[zk: cluster2-host1:2181(CONNECTED) 3] ls /
[abc, abcd, zookeeper]
[zk: cluster2-host1:2181(CONNECTED) 4] getAcl /abcd
'world,'anyone
: cdrwa
[zk: cluster2-host1:2181(CONNECTED) 5]
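Exporting JVMFLAGS only affects the current shell. If you want the server-side setting to survive across sessions, one common alternative (a sketch, not part of the original steps) is to put it in conf/java.env, which bin/zkEnv.sh sources on startup:

# /usr/local/zookeeper/conf/java.env
export SERVER_JVMFLAGS="-Djava.security.auth.login.config=/usr/local/zookeeper/conf/jaas.conf"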
When the ZooKeeper client starts, its log also shows a successful Kerberos login message; keep an eye out for it to confirm that authentication is working.