I have recently been working with Kerberos and plan to write a series covering its installation and the Kerberos configuration of commonly used components. This first post covers installing Kerberos.
If you want background on what Kerberos actually is, a quick web search will cover it; this post skips the introduction and goes straight to the practical steps.
There is one big pitfall with Kerberos that you must watch out for:
1. Hostnames must not contain uppercase letters.
2. Hostnames must not contain underscores.
There may be other restrictions I have not run into yet; these are the ones that bit me, so keep your hostnames as plain as possible when setting up Kerberos.
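For example, a quick way to check and, if needed, normalize the hostname before installing anything. This is a minimal sketch assuming a systemd-based host (which matches the systemctl output later in this post); the hostname value is just the one used throughout this walkthrough:

# Show the current hostname; it should be all lowercase with no underscores
hostnamectl status
# If it is not compliant, set a new one (example value, adjust to your own naming)
hostnamectl set-hostname cluster2-host1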
Other posts in the big data security series:
https://www.cnblogs.com/bainianminguo/p/12548076.html ----------- Installing Kerberos
https://www.cnblogs.com/bainianminguo/p/12548334.html ----------- Kerberos authentication for Hadoop
https://www.cnblogs.com/bainianminguo/p/12548175.html ----------- Kerberos authentication for ZooKeeper
https://www.cnblogs.com/bainianminguo/p/12584732.html ----------- Kerberos authentication for Hive
https://www.cnblogs.com/bainianminguo/p/12584880.html ----------- search-guard authentication for ES
https://www.cnblogs.com/bainianminguo/p/12639821.html ----------- Kerberos authentication for Flink
https://www.cnblogs.com/bainianminguo/p/12639887.html ----------- Kerberos authentication for Spark
1. Install Kerberos via yum
yum install krb5-workstation krb5-libs krb5-auth-dialog krb5-server
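A quick sanity check that the packages actually landed (nothing here is specific to this cluster):

rpm -qa | grep krb5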
2. After the command finishes, the Kerberos configuration files krb5.conf and kdc.conf have been created
[root@cluster2-host1 etc]# ll krb5.conf
-rw-r--r--. 1 root root 641 Sep 13 12:40 krb5.conf
[root@cluster2-host1 etc]# pwd
/etc
[root@cluster2-host1 etc]# ll /var/kerberos/krb5kdc/kdc.conf
-rw-------. 1 root root 451 Sep 13 12:40 /var/kerberos/krb5kdc/kdc.conf
3. Edit the kdc.conf configuration file
[kdcdefaults]
 kdc_ports = 88
 kdc_tcp_ports = 88

[realms]
 HADOOP.COM = {
  #master_key_type = aes256-cts
  acl_file = /var/kerberos/krb5kdc/kadm5.acl
  dict_file = /usr/share/dict/words
  admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
  supported_enctypes = aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
  max_life = 1d
  max_renewable_life = 7d
 }
Note: the aes256-cts:normal encryption type requires extra jars on the Java side (the JCE unlimited-strength policy files), so it has simply been dropped here.
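If you would rather keep aes256-cts, you can first check whether the local JDK is allowed to use 256-bit AES at all. A small probe, assuming a JDK 8-style install with jrunscript on the PATH:

# Prints the maximum AES key length the JVM allows: 128 means the restricted default policy
# (aes256-cts would fail), 2147483647 means unlimited-strength crypto is already enabled
jrunscript -e 'print(javax.crypto.Cipher.getMaxAllowedKeyLength("AES"));'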
4. Edit the krb5.conf configuration file
[root@cluster2-host1 etc]# cat krb5.conf
# Configuration snippets may be placed in this directory as well
includedir /etc/krb5.conf.d/

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 dns_lookup_realm = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true
 rdns = false
 pkinit_anchors = /etc/pki/tls/certs/ca-bundle.crt
 default_realm = HADOOP.COM
 udp_preference_limit = 1
 #default_ccache_name = KEYRING:persistent:%{uid}

[realms]
 HADOOP.COM = {
  kdc = cluster2-host1
  admin_server = cluster2-host1
 }

[domain_realm]
# maps DNS domain names to Kerberos realms
# .example.com = EXAMPLE.COM
# example.com = EXAMPLE.COM
5. Create the Kerberos database; all passwords in this walkthrough are set to 123456
[root@cluster2-host1 etc]# kdb5_util create -s -r HADOOP.COM
Loading random data
Initializing database '/var/kerberos/krb5kdc/principal' for realm 'HADOOP.COM',
master key name 'K/M@HADOOP.COM'
You will be prompted for the database Master Password.
It is important that you NOT FORGET this password.
Enter KDC database master key:
Re-enter KDC database master key to verify:
Check the generated database files:
[root@cluster2-host1 etc]# ll /var/kerberos/krb5kdc/
total 24
-rw-------. 1 root root   22 Sep 13 12:40 kadm5.acl
-rw-------. 1 root root  474 Mar  3 03:18 kdc.conf
-rw-------. 1 root root 8192 Mar  3 03:18 principal
-rw-------. 1 root root 8192 Mar  3 03:18 principal.kadm5
-rw-------. 1 root root    0 Mar  3 03:18 principal.kadm5.lock
-rw-------. 1 root root    0 Mar  3 03:18 principal.ok
Add the database administrator principal and set its password to admin:
[root@cluster2-host1 etc]# /usr/sbin/kadmin.local -q "addprinc admin/admin"
Authenticating as principal root/admin@HADOOP.COM with password.
WARNING: no policy specified for admin/admin@HADOOP.COM; defaulting to no policy
Enter password for principal "admin/admin@HADOOP.COM":
Re-enter password for principal "admin/admin@HADOOP.COM":
Principal "admin/admin@HADOOP.COM" created.
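For admin/admin to actually be able to administer the KDC through kadmind (for example from a client), the ACL file referenced in kdc.conf usually needs to match the realm as well; the stock file ships with an EXAMPLE.COM entry. A minimal /var/kerberos/krb5kdc/kadm5.acl for this setup might look like the sketch below (check your own file before changing it, and restart kadmin afterwards):

# Grant full kadmin privileges to every */admin principal in the HADOOP.COM realm
*/admin@HADOOP.COM    *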
6. Start the krb5kdc and kadmin services and enable them at boot
[root@cluster2-host1 etc]# service krb5kdc start
Redirecting to /bin/systemctl start krb5kdc.service
[root@cluster2-host1 etc]# service kadmin start
Redirecting to /bin/systemctl start kadmin.service
[root@cluster2-host1 etc]# chkconfig krb5kdc on
Note: Forwarding request to 'systemctl enable krb5kdc.service'.
Created symlink from /etc/systemd/system/multi-user.target.wants/krb5kdc.service to /usr/lib/systemd/system/krb5kdc.service.
[root@cluster2-host1 etc]# chkconfig kadmin on
Note: Forwarding request to 'systemctl enable kadmin.service'.
Created symlink from /etc/systemd/system/multi-user.target.wants/kadmin.service to /usr/lib/systemd/system/kadmin.service.
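The output above shows the requests being redirected to systemd, so the equivalent native commands, plus a status check before moving on, would be roughly:

systemctl start krb5kdc kadmin
systemctl enable krb5kdc kadmin
# Both services should report active (running)
systemctl status krb5kdc kadmin --no-pager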
7. Create a principal, setting its password to 123456
[root@cluster2-host1 etc]# kadmin.local
Authenticating as principal root/admin@HADOOP.COM with password.
kadmin.local:  list_principals
K/M@HADOOP.COM
admin/admin@HADOOP.COM
kadmin/admin@HADOOP.COM
kadmin/changepw@HADOOP.COM
kadmin/cluster2-host1@HADOOP.COM
kiprop/cluster2-host1@HADOOP.COM
krbtgt/HADOOP.COM@HADOOP.COM
kadmin.local:  add_principal test/test@HADOOP.COM
WARNING: no policy specified for test/test@HADOOP.COM; defaulting to no policy
Enter password for principal "test/test@HADOOP.COM":
Re-enter password for principal "test/test@HADOOP.COM":
Principal "test/test@HADOOP.COM" created.
kadmin.local:  list_principals
K/M@HADOOP.COM
admin/admin@HADOOP.COM
kadmin/admin@HADOOP.COM
kadmin/changepw@HADOOP.COM
kadmin/cluster2-host1@HADOOP.COM
kiprop/cluster2-host1@HADOOP.COM
krbtgt/HADOOP.COM@HADOOP.COM
test/test@HADOOP.COM
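The same principal could also be created non-interactively on the KDC, or managed remotely from a client via kadmin as the admin/admin principal from step 5 (the remote variant assumes the kadm5.acl entry shown earlier); a rough sketch:

# Non-interactive, on the KDC itself
kadmin.local -q "addprinc -pw 123456 test/test@HADOOP.COM"
# Remotely from any host with the same krb5.conf, authenticating as admin/admin
kadmin -p admin/admin@HADOOP.COM -q "list_principals"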
8. Configure the Kerberos client on the other two nodes
[root@cluster2-host3 bin]# yum install krb5-workstation krb5-libs krb5-auth-dialog -y
Make /etc/krb5.conf on the clients identical to the one on the server:
[root@cluster2-host1 etc]# scp /etc/krb5.conf root@cluster2-host2:/etc/krb5.conf
krb5.conf                                    100%  651     0.6KB/s   00:00
[root@cluster2-host1 etc]# scp /etc/krb5.conf root@cluster2-host3:/etc/krb5.conf
krb5.conf
9. Verify the Kerberos setup by authenticating from a client with a username and password
[root@cluster2-host2 bin]# klist test/test
klist: No credentials cache found (filename: test/test)
[root@cluster2-host2 bin]# kinit test/test
Password for test/test@HADOOP.COM:
[root@cluster2-host2 bin]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: test/test@HADOOP.COM

Valid starting       Expires              Service principal
03/03/2020 03:33:58  03/04/2020 03:33:58  krbtgt/HADOOP.COM@HADOOP.COM
        renew until 03/10/2020 04:33:58
[root@cluster2-host2 bin]# kdestroy
[root@cluster2-host2 bin]# klist
klist: No credentials cache found (filename: /tmp/krb5cc_0)
10. Authenticate with a keytab
Generate the keytab on the server and copy it to the client:
[root@cluster2-host1 etc]# kadmin.local -q "xst -k /root/test.keytab test/test@HADOOP.COM"
Authenticating as principal root/admin@HADOOP.COM with password.
Entry for principal test/test@HADOOP.COM with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:/root/test.keytab.
Entry for principal test/test@HADOOP.COM with kvno 2, encryption type des3-cbc-sha1 added to keytab WRFILE:/root/test.keytab.
Entry for principal test/test@HADOOP.COM with kvno 2, encryption type arcfour-hmac added to keytab WRFILE:/root/test.keytab.
Entry for principal test/test@HADOOP.COM with kvno 2, encryption type camellia256-cts-cmac added to keytab WRFILE:/root/test.keytab.
Entry for principal test/test@HADOOP.COM with kvno 2, encryption type camellia128-cts-cmac added to keytab WRFILE:/root/test.keytab.
Entry for principal test/test@HADOOP.COM with kvno 2, encryption type des-hmac-sha1 added to keytab WRFILE:/root/test.keytab.
Entry for principal test/test@HADOOP.COM with kvno 2, encryption type des-cbc-md5 added to keytab WRFILE:/root/test.keytab.
[root@cluster2-host1 etc]# scp /root/test.keytab root@cluster2-host2:/root/
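On either host, the contents of the keytab can be inspected to confirm which principal and encryption types it holds:

klist -kt /root/test.keytab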
Log in on the client using the keytab:
[root@cluster2-host2 bin]# kinit -kt /root/test.keytab test/test
[root@cluster2-host2 bin]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: test/test@HADOOP.COM

Valid starting       Expires              Service principal
03/03/2020 03:38:00  03/04/2020 03:38:00  krbtgt/HADOOP.COM@HADOOP.COM
        renew until 03/10/2020 04:38:00
One thing to note here: once the keytab has been exported this way, the principal can no longer log in with its username and password. This is because kadmin's xst (ktadd) command randomizes the principal's keys when writing them to the keytab, which invalidates the previous password.
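If you need a keytab but still want the password to keep working, MIT's kadmin.local can export the existing keys without re-randomizing them; a sketch of that alternative:

# -norandkey keeps the current keys, so kinit with the old password still works afterwards
# (this flag is only accepted by kadmin.local, not by remote kadmin)
kadmin.local -q "xst -norandkey -k /root/test.keytab test/test@HADOOP.COM"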
That completes the installation and configuration of Kerberos.