Kerberos Installation



1. Kerberos Server Installation


The server side centers on three configuration files:

  • /etc/krb5.conf
  • /var/kerberos/krb5kdc/kdc.conf
  • /var/kerberos/krb5kdc/kadm5.acl

Notes:

  • Guard against malformed configuration files, e.g. content accidentally lost while editing.
  • Check how well your software supports each encryption type; for example, older Hadoop versions need the aes and camellia encryption types removed.
  • Do not add comments to the configuration files.
  • Realm names are case-sensitive; use uppercase consistently (default_realm).
  • Java needs an extra jar (the JCE unlimited-strength policy files) to use aes256-cts; for simplicity you can simply delete the aes256-cts encryption type.

Prerequisites

  • Confirm the realm name: BIGDATA.COM in this example.
  • Make sure the clocks of all machines joining Kerberos authentication are synchronized.
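Clock synchronization matters because Kerberos rejects requests whose timestamps differ by more than the allowed skew (300 seconds by default, the `clockskew` setting). A minimal sketch of an offset check; fetching the remote clock over ssh is only indicated in a comment, so the hostname there is a placeholder:

```shell
# Kerberos rejects authentication when clocks differ by more than the
# allowed skew (default 300 seconds). This helper checks two epoch
# timestamps against that tolerance.
MAX_SKEW=300

within_skew() {
    diff=$(( $1 - $2 ))
    [ "$diff" -lt 0 ] && diff=$(( -diff ))
    [ "$diff" -le "$MAX_SKEW" ]
}

local_time=$(date +%s)
# In practice, fetch the KDC's clock with: ssh <kdc-host> date +%s
remote_time=$local_time
if within_skew "$local_time" "$remote_time"; then
    echo "clocks within tolerance"
else
    echo "WARNING: clock skew exceeds ${MAX_SKEW}s"
fi
```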

1.1 Install dependencies


```
yum install -y krb5-server krb5-libs krb5-workstation krb5-devel

# Offline installation from downloaded RPMs:
rpm -Uvh --force --nodeps *.rpm
```

1.2 Modify the configuration files


1.2.1 Modify krb5.conf

cat /etc/krb5.conf

Development environment

```
# Configuration snippets may be placed in this directory as well
includedir /etc/krb5.conf.d/

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 dns_lookup_realm = false
 ticket_lifetime = 7d
 renew_lifetime = 7d
 forwardable = true
 rdns = false
 pkinit_anchors = FILE:/etc/pki/tls/certs/ca-bundle.crt
 default_realm = BIGDATA.COM
 default_ccache_name = /tmp/krb5cc_%{uid}
 default_tkt_enctypes = des-cbc-md5 des-cbc-crc des3-cbc-sha1
 default_tgs_enctypes = des-cbc-md5 des-cbc-crc des3-cbc-sha1
 permitted_enctypes = des-cbc-md5 des-cbc-crc des3-cbc-sha1
 allow_weak_crypto = true
 # Disable UDP (works around a known Hadoop issue)
 udp_preference_limit = 1

[realms]
 BIGDATA.COM = {
  kdc = minivision-cdh-dev-1
  admin_server = minivision-cdh-dev-1
 }

[domain_realm]
 # .BIGDATA.com = BIGDATA.COM
 minivision-cdh-dev-1 = BIGDATA.COM
 BIGDATA.com = BIGDATA.COM
```

Production environment

```
# Configuration snippets may be placed in this directory as well
includedir /etc/krb5.conf.d/

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 dns_lookup_realm = false
 ticket_lifetime = 7d
 renew_lifetime = 7d
 forwardable = true
 rdns = false
 pkinit_anchors = FILE:/etc/pki/tls/certs/ca-bundle.crt
 default_realm = BIGDATA.COM
 default_ccache_name = /tmp/krb5cc_%{uid}
 default_tkt_enctypes = des-cbc-md5 des-cbc-crc des3-cbc-sha1
 default_tgs_enctypes = des-cbc-md5 des-cbc-crc des3-cbc-sha1
 permitted_enctypes = des-cbc-md5 des-cbc-crc des3-cbc-sha1
 allow_weak_crypto = true
 udp_preference_limit = 1

[realms]
 BIGDATA.COM = {
  kdc = zhxq-cdh01
  admin_server = zhxq-cdh01
 }

[domain_realm]
 # .BIGDATA.com = BIGDATA.COM
 zhxq-cdh01 = BIGDATA.COM
 BIGDATA.com = BIGDATA.COM
```

Default configuration

```
# Configuration snippets may be placed in this directory as well
includedir /etc/krb5.conf.d/

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 dns_lookup_realm = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true
 rdns = false
 pkinit_anchors = FILE:/etc/pki/tls/certs/ca-bundle.crt
 # default_realm = EXAMPLE.COM
 default_ccache_name = KEYRING:persistent:%{uid}

[realms]
 # EXAMPLE.COM = {
 #  kdc = kerberos.example.com
 #  admin_server = kerberos.example.com
 # }

[domain_realm]
 # .example.com = EXAMPLE.COM
 # example.com = EXAMPLE.COM
```
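As a quick sanity check after editing, the realm can be pulled out of a krb5.conf and verified to be uppercase (per the notes above). This is a sketch that parses an inline sample rather than touching /etc/krb5.conf:

```shell
# Extract default_realm from krb5.conf-style text and check its case.
# A sample config is inlined here; point this at /etc/krb5.conf in practice.
conf='[libdefaults]
 default_realm = BIGDATA.COM
 dns_lookup_realm = false'

realm=$(printf '%s\n' "$conf" \
  | sed -n 's/^[[:space:]]*default_realm[[:space:]]*=[[:space:]]*//p')
echo "default_realm is: $realm"

upper=$(printf '%s' "$realm" | tr '[:lower:]' '[:upper:]')
if [ "$realm" = "$upper" ]; then
    echo "realm is uppercase: OK"
else
    echo "WARNING: realm should be uppercase"
fi
```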

1.2.2 Modify kdc.conf

cat /var/kerberos/krb5kdc/kdc.conf

Development & production environments

```
[root@minivision-cdh-dev-1 user]# cat /var/kerberos/krb5kdc/kdc.conf
[kdcdefaults]
 kdc_ports = 88
 kdc_tcp_ports = 88

[realms]
 BIGDATA.COM = {
  #master_key_type = aes256-cts
  acl_file = /var/kerberos/krb5kdc/kadm5.acl
  dict_file = /usr/share/dict/words
  max_renewable_life = 7d
  default_principal_flags = +renewable
  max_life = 6d
  admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
  supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
 }
```

Default configuration

```
[kdcdefaults]
 kdc_ports = 88
 kdc_tcp_ports = 88

[realms]
 EXAMPLE.COM = {
  #master_key_type = aes256-cts
  acl_file = /var/kerberos/krb5kdc/kadm5.acl
  dict_file = /usr/share/dict/words
  admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
  supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
 }
```

1.2.3 Modify kadm5.acl

Identical in all environments

```
# cat /var/kerberos/krb5kdc/kadm5.acl
# Any principal with an /admin instance gets full privileges
*/admin@BIGDATA.COM *
```

1.3 Initialize the KDC database


Command: kdb5_util create -r BIGDATA.COM -s

  • -r: the realm name; keep it consistent with the configuration files.
  • -s: store the database master key in a stash file so the KDC can read it automatically at every startup.
  • -d: the database name; defaults to principal.
```
$ sudo kdb5_util create -r BIGDATA.COM -s
Loading random data
Initializing database '/var/kerberos/krb5kdc/principal' for realm 'BIGDATA.COM',
master key name 'K/M@BIGDATA.COM'
You will be prompted for the database Master Password.
It is important that you NOT FORGET this password.
Enter KDC database master key:              # enter the KDC database password (important): Xskj@2022
Re-enter KDC database master key to verify: # Xskj@2022
```

Notes:

  • The command creates the principal database files under /var/kerberos/krb5kdc/. When reinstalling, you may need to delete the old database files first.
  • The KDC database master password is critical; store it safely.
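For a reinstall, the old database files mentioned above have to go before kdb5_util create can run again. A sketch with a dry-run guard; the file list reflects what kdb5_util typically creates for this realm (the principal database, its lock/admin files, and the .k5.BIGDATA.COM stash from -s):

```shell
# Remove the old KDC database before re-running kdb5_util create.
# DRY_RUN=1 (the default here) only prints what would be executed.
KDB_DIR=/var/kerberos/krb5kdc
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run service krb5kdc stop
run service kadmin stop
for f in principal principal.ok principal.kadm5 principal.kadm5.lock .k5.BIGDATA.COM; do
    run rm -f "$KDB_DIR/$f"
done
run kdb5_util create -r BIGDATA.COM -s
```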

1.4 Start the services


```
service krb5kdc start
service kadmin start

service krb5kdc restart
service kadmin restart

# On systemd-based systems the equivalent is:
#   systemctl enable --now krb5kdc kadmin
```

1.5 Create the accounts the project needs


1.5.1 Create Unix users

```
# On all machines

# sparketl
useradd -s /sbin/nologin sparketl
usermod -a -G supergroup sparketl
usermod -a -G hdfs sparketl
usermod -a -G hive sparketl
usermod -a -G impala sparketl
usermod -a -G admin sparketl
usermod -a -G root sparketl
usermod -a -G hue sparketl
usermod -a -G kafka sparketl
usermod -a -G oozie sparketl
usermod -a -G spark sparketl
usermod -a -G yarn sparketl
usermod -a -G zookeeper sparketl

# sjnjekins
useradd -s /sbin/nologin sjnjekins
usermod -a -G supergroup sjnjekins
usermod -a -G hdfs sjnjekins
usermod -a -G hive sjnjekins
usermod -a -G impala sjnjekins
usermod -a -G admin sjnjekins
usermod -a -G root sjnjekins

# sjn
useradd -s /sbin/nologin sjn
usermod -a -G supergroup sjn
usermod -a -G hdfs sjn
usermod -a -G hive sjn
usermod -a -G impala sjn
usermod -a -G admin sjn
usermod -a -G root sjn
usermod -a -G hue sjn
usermod -a -G kafka sjn
usermod -a -G oozie sjn
usermod -a -G spark sjn
usermod -a -G yarn sjn
usermod -a -G zookeeper sjn

# pyjudge
useradd -s /sbin/nologin pyjudge
usermod -a -G supergroup pyjudge
usermod -a -G hdfs pyjudge
usermod -a -G hive pyjudge
usermod -a -G impala pyjudge
usermod -a -G admin pyjudge
usermod -a -G root pyjudge

# admin
useradd -s /sbin/nologin admin
usermod -a -G supergroup admin
usermod -a -G hdfs admin
usermod -a -G yarn admin
usermod -a -G hive admin
usermod -a -G impala admin
usermod -a -G root admin

# hue
useradd -s /sbin/nologin hue
usermod -a -G supergroup hue
usermod -a -G hdfs hue
usermod -a -G yarn hue
usermod -a -G hive hue
usermod -a -G impala hue
```
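The per-user blocks above all repeat the same useradd/usermod pattern, so they can be collapsed into a small helper. A sketch: DRY_RUN=1 prints the commands instead of executing them, and only the sparketl group list from above is shown:

```shell
# Create a nologin service user and add it to each listed group.
# DRY_RUN=1 (default) prints the commands instead of running them.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = "1" ]; then echo "$*"; else "$@"; fi
}

create_user() {
    user=$1; shift
    run useradd -s /sbin/nologin "$user"
    for g in "$@"; do
        run usermod -a -G "$g" "$user"
    done
}

# Same memberships as the sparketl block above
create_user sparketl supergroup hdfs hive impala admin root hue kafka oozie spark yarn zookeeper
```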

1.5.2 Create KDC principals

```
echo -e "cloudera\ncloudera" | kadmin.local -q "addprinc cloudera-scm/admin"
echo -e "root\nroot"         | kadmin.local -q "addprinc root/admin"
echo -e "test\ntest"         | kadmin.local -q "addprinc test"
echo -e "hive\nhive"         | kadmin.local -q "addprinc hive/admin"
echo -e "hive\nhive"         | kadmin.local -q "addprinc hive"
echo -e "hdfs\nhdfs"         | kadmin.local -q "addprinc hdfs"
echo -e "yarn\nyarn"         | kadmin.local -q "addprinc yarn"
echo -e "sparketl\nsparketl" | kadmin.local -q "addprinc sparketl"
```

1.5.3 Generate KDC principal keytabs

```
kadmin.local -q "addprinc -randkey sparketl"
kadmin.local -q "modprinc -maxlife 7d +allow_renewable sparketl@BIGDATA.COM"
kadmin.local -q "modprinc -maxrenewlife 7d +allow_renewable sparketl@BIGDATA.COM"
kadmin.local -q "xst -norandkey -k /var/kerberos/sparketl.keytab sparketl@BIGDATA.COM"

kadmin.local -q "addprinc -randkey sjnjekins"
kadmin.local -q "modprinc -maxlife 7d +allow_renewable sjnjekins@BIGDATA.COM"
kadmin.local -q "modprinc -maxrenewlife 7d +allow_renewable sjnjekins@BIGDATA.COM"
kadmin.local -q "xst -norandkey -k /var/kerberos/sjnjekins.keytab sjnjekins@BIGDATA.COM"

kadmin.local -q "addprinc -randkey sjn"
kadmin.local -q "modprinc -maxlife 7d +allow_renewable sjn@BIGDATA.COM"
kadmin.local -q "modprinc -maxrenewlife 7d +allow_renewable sjn@BIGDATA.COM"
kadmin.local -q "xst -norandkey -k /var/kerberos/sjn.keytab sjn@BIGDATA.COM"

kadmin.local -q "addprinc -randkey pyjudge"
kadmin.local -q "modprinc -maxlife 7d +allow_renewable pyjudge@BIGDATA.COM"
kadmin.local -q "modprinc -maxrenewlife 7d +allow_renewable pyjudge@BIGDATA.COM"
kadmin.local -q "xst -norandkey -k /var/kerberos/pyjudge.keytab pyjudge@BIGDATA.COM"

kadmin.local -q "addprinc -randkey admin"
kadmin.local -q "modprinc -maxlife 7d -maxrenewlife 7d +allow_renewable admin@BIGDATA.COM"
kadmin.local -q "xst -norandkey -k /var/kerberos/admin.keytab admin@BIGDATA.COM"
```
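Each principal above goes through the same steps: create it, allow 7-day renewable tickets, export a keytab. A loop that prints the equivalent kadmin.local command sequence (a sketch; pipe the output to sh, or replace the echoes with direct calls, to actually run it):

```shell
# Print the provisioning commands for each principal: create it,
# allow 7-day renewable tickets, and export a keytab.
REALM=BIGDATA.COM
KEYTAB_DIR=/var/kerberos

provision_cmds() {
    p=$1
    echo "kadmin.local -q \"addprinc -randkey $p\""
    echo "kadmin.local -q \"modprinc -maxlife 7d +allow_renewable $p@$REALM\""
    echo "kadmin.local -q \"modprinc -maxrenewlife 7d +allow_renewable $p@$REALM\""
    echo "kadmin.local -q \"xst -norandkey -k $KEYTAB_DIR/$p.keytab $p@$REALM\""
}

for p in sparketl sjnjekins sjn pyjudge admin; do
    provision_cmds "$p"
done
```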

1.5.4 Refresh Unix users & distribute KDC keytabs

```
# Refresh the Unix user-to-group mappings in HDFS
hdfs dfsadmin -refreshUserToGroupsMappings

# Distribute the KDC keytabs
chmod 777 /var/kerberos/*
```

1.5.5 Common commands

```
# Inside kadmin.local: list all principals
listprincs

# Inside kadmin.local: make a principal's tickets renewable
modprinc -maxrenewlife "7d" -maxlife "7d" +allow_renewable impala/minivision-cdh-dev-3@BIGDATA.COM

# Obtain a ticket from a keytab
kinit -kt /var/kerberos/sjn.keytab sjn
```

2. Kerberos Client Installation


2.1 Install dependencies


```
yum install krb5-devel krb5-workstation krb5-libs -y

# Download the RPMs for offline installation
yum install krb5-devel krb5-workstation krb5-libs --downloadonly --downloaddir=/root/krbrpm/client
```

2.2 Configure krb5.conf


Keep it identical to the server's krb5.conf.
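Keeping the client copies in sync is easiest by pushing the server's file out. A sketch; the host names are placeholders, and DRY_RUN=1 prints the copy commands instead of performing them:

```shell
# Push the server's krb5.conf to each client host.
# Host names are placeholders; DRY_RUN=1 (default) only prints.
DRY_RUN=${DRY_RUN:-1}
CLIENTS="client-1 client-2"

for h in $CLIENTS; do
    cmd="scp /etc/krb5.conf root@$h:/etc/krb5.conf"
    if [ "$DRY_RUN" = "1" ]; then
        echo "$cmd"
    else
        $cmd
    fi
done
```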

3. Cluster Integration


3.1 Hive


3.1.1 CDH configuration

```
# Disable Hive impersonation (doAs)
hive.server2.enable.doAs = false
```

3.1.2 Hive connection test

```
# 1. Authenticate on the machine
kinit test

# 2. Connect with the Kerberos principal in the JDBC URL
beeline
!connect jdbc:hive2://minivision-cdh-dev-1:10000/default;principal=hive/minivision-cdh-dev-1@BIGDATA.COM

# Or in one step:
beeline -u "jdbc:hive2://minivision-cdh-dev-1:10000/default;principal=hive/minivision-cdh-dev-1@BIGDATA.COM"
```

3.2 Kafka


3.2.1 CDH configuration

```
# Set the inter-broker protocol to SASL_PLAINTEXT
security.inter.broker.protocol = SASL_PLAINTEXT
```

3.2.2 Kafka authentication

```
# 1. Create the directory
mkdir -p /var/kerberos/kafka/
cd /var/kerberos/kafka

# 2. Create the JAAS file
vi kafka_client_jaas.conf
KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useTicketCache=true;
};

# 3. Create the client properties file
vi kafka_client_prop.properties
security.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name=kafka

# 4. Point the kafka-console tools at the JAAS file
export KAFKA_OPTS="-Djava.security.auth.login.config=/var/kerberos/kafka/kafka_client_jaas.conf"
echo $KAFKA_OPTS

# 5. Producer test
kafka-console-producer --broker-list ${bs} --topic firstTopic --producer.config /var/kerberos/kafka/kafka_client_prop.properties

# 6. Consumer test
kafka-console-consumer --bootstrap-server ${bs} --topic firstTopic --from-beginning --consumer.config /var/kerberos/kafka/kafka_client_prop.properties
```

3.3 Other commands


3.3.1 Find and verify the keytab CDH is using

```
find /var/run/ -name hdfs.keytab
kinit -kt /var/run/cloudera-scm-agent/process/5420-hdfs-DATANODE/hdfs.keytab hdfs/minivision-cdh-dev-1@BIGDATA.COM
```


Author: JessePinkMan
Link: https://www.cnblogs.com/edclol/p/17282464.html
License: unless otherwise noted, all posts on this blog are licensed under CC BY-NC-SA. Please credit the source when reposting.