Binary Deployment of a Highly Available Kubernetes Cluster (v1.22.x)


Chapter 1: Node Planning

HA Kubernetes cluster plan


Hostname             IP address               Description
k8s-master01 ~ 03    192.168.150.120 ~ 122    3 master nodes
k8s-master-lb        192.168.150.236          keepalived virtual IP
k8s-node01 ~ 02      192.168.150.123 ~ 124    2 worker nodes

The Pod CIDR, the Service CIDR, and the host network CIDR must not overlap!

Configuration     Value
OS version        CentOS 7.9
Docker version    20.10.x
Pod CIDR          172.16.0.0/12
Service CIDR      10.168.0.0/16

Chapter 2: Base Environment Configuration

1. Hosts resolution (all nodes)

[root@k8s-master01 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.150.120 k8s-master01
192.168.150.121 k8s-master02
192.168.150.122 k8s-master03
192.168.150.236 k8s-master-lb # For a non-HA cluster, use Master01's IP here
192.168.150.123 k8s-node01
192.168.150.124 k8s-node02

2. Distribute the SSH public key

Master01 needs passwordless SSH to the other nodes. All configuration files and certificates generated during the installation are created on Master01, and the cluster is administered from Master01 as well; on Alibaba Cloud or AWS a separate kubectl server is required. Configure the keys as follows:

# Manage the other nodes from k8s-master01
# Generate the key pair:
ssh-keygen -t rsa
# Push the public key to every node:
for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02;do ssh-copy-id -i .ssh/id_rsa.pub $i;done
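To confirm passwordless login works, each node can be asked for its hostname; none of the calls should prompt for a password:

# Quick check from k8s-master01:
for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02;do ssh $i hostname;done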

3. Configure YUM repositories (all nodes)

curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo

4. Install required tools (all nodes)

yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git -y

5. Disable firewalld, selinux, dnsmasq, and swap (all nodes)

systemctl disable --now firewalld 
systemctl disable --now dnsmasq
systemctl disable --now NetworkManager    # can be left enabled on public cloud

# Disable selinux
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

# Disable the swap partition
swapoff -a && sysctl -w vm.swappiness=0
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab

6. Sync the time zone and time (all nodes)

rpm -ivh http://mirrors.wlnmp.com/centos/wlnmp-release-centos.noarch.rpm
yum install ntpdate -y
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' >/etc/timezone
ntpdate time2.aliyun.com
# Add to crontab
crontab -e
*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com

7. Configure limits (all nodes)

ulimit -SHn 65535
vim /etc/security/limits.conf
# Append the following at the end
* soft nofile 65536
* hard nofile 131072
* soft nproc 65535
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited

8. Clone the installation repo (all nodes)

cd /root/ && git clone https://github.com/dotbalo/k8s-ha-install.git
# If GitHub is unreachable, clone the Gitee mirror instead:
cd /root/ && git clone https://gitee.com/dukuan/k8s-ha-install.git

9. Upgrade the system (all nodes)

# CentOS 7 must be upgraded; on CentOS 8 upgrade as needed
yum update -y --exclude=kernel* && reboot 

10. Upgrade the kernel to 4.19 (all nodes)

# CentOS 7 needs a 4.18+ kernel; this guide upgrades to 4.19
# Download the kernel packages on Master01:
cd /root && wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm
# Copy them from Master01 to the other nodes:
for i in k8s-master02 k8s-master03 k8s-node01 k8s-node02;do scp kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm $i:/root/ ; done

# Install the kernel on all nodes:
cd /root && yum localinstall -y kernel-ml*
# Change the default boot kernel on all nodes:
grub2-set-default  0 && grub2-mkconfig -o /etc/grub2.cfg
grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
# Verify the default kernel is 4.19
[root@k8s-master02 ~]# grubby --default-kernel
/boot/vmlinuz-4.19.12-1.el7.elrepo.x86_64
# Reboot all nodes, then confirm the running kernel is 4.19
[root@k8s-master02 ~]# uname -a
Linux k8s-master02 4.19.12-1.el7.elrepo.x86_64 #1 SMP Fri Dec 21 11:06:36 EST 2018 x86_64 x86_64 x86_64 GNU/Linux

11. Deploy ipvsadm (all nodes)

# Install ipvsadm on all nodes:
yum install ipvsadm ipset sysstat conntrack libseccomp -y
# Configure the ipvs modules on all nodes. On kernel 4.19+ nf_conntrack_ipv4 was renamed to nf_conntrack; on kernels below 4.18 use nf_conntrack_ipv4 instead:
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack

vim /etc/modules-load.d/ipvs.conf 
# Add the following
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
# Then run: systemctl enable --now systemd-modules-load.service

12. Configure Kubernetes kernel parameters (all nodes)

# Enable the kernel parameters a k8s cluster requires; apply on all nodes:
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
net.ipv4.conf.all.route_localnet = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720

net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF

sysctl --system

# After applying the kernel settings on all nodes, reboot and verify the modules are still loaded
reboot
[root@k8s-master01 ~]# lsmod | grep --color=auto -e ip_vs -e nf_conntrack
ip_vs_ftp              16384  0
nf_nat                 49152  1 ip_vs_ftp
ip_vs_sed              16384  0
ip_vs_nq               16384  0
ip_vs_fo               16384  0
ip_vs_sh               16384  0
ip_vs_dh               16384  0
ip_vs_lblcr            16384  0
ip_vs_lblc             16384  0
ip_vs_wrr              16384  0
ip_vs_rr               16384  0
ip_vs_wlc              16384  0
ip_vs_lc               16384  0
ip_vs                 159744  24 ip_vs_wlc,ip_vs_rr,ip_vs_dh,ip_vs_lblcr,ip_vs_sh,ip_vs_fo,ip_vs_nq,ip_vs_lblc,ip_vs_wrr,ip_vs_lc,ip_vs_sed,ip_vs_ftp
nf_conntrack          155648  2 nf_nat,ip_vs
nf_defrag_ipv6         24576  2 nf_conntrack,ip_vs
nf_defrag_ipv4         16384  1 nf_conntrack
libcrc32c              16384  4 nf_conntrack,nf_nat,xfs,ip_vs

Chapter 3: Installing the Basic Kubernetes Components

1. Deploy Docker and Containerd (all nodes)

# 1. Install Docker CE 20.10 on all nodes:
yum install docker-ce-20.10.* docker-ce-cli-20.10.* -y

# 2. Newer kubelet versions recommend systemd, so change Docker's cgroup driver to systemd
mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# 3. Configure the kernel modules required by Containerd
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
# Load the modules on all nodes
modprobe -- overlay
modprobe -- br_netfilter
# Configure the kernel parameters required by Containerd on all nodes
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
# Apply the kernel parameters on all nodes
sysctl --system
# Generate the default Containerd configuration on all nodes
mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml
# Switch Containerd's cgroup driver to systemd on all nodes
vim /etc/containerd/config.toml
# Find "containerd.runtimes.runc.options" and add "SystemdCgroup = true"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
# On all nodes, point "sandbox_image" at a pause image matching your version, e.g. "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"
sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"
# Reload and start Containerd on all nodes and enable it at boot
systemctl daemon-reload 
systemctl enable --now containerd
# Configure the runtime endpoint for the "crictl" client on all nodes
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
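With the endpoint configured, crictl should be able to talk to containerd; a quick check such as the following can be run (an empty container list is normal at this stage):

# Verify the crictl <-> containerd connection:
crictl info | head -n 5
crictl ps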

2. Deploy the Kubernetes binaries (Master01)

wget https://dl.k8s.io/v1.22.0/kubernetes-server-linux-amd64.tar.gz


Unpack the Kubernetes archive:

[root@k8s-master01 ~]# tar -xf kubernetes-server-linux-amd64.tar.gz  --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}

Check the Kubernetes version:

[root@k8s-master01 ~]# kubelet --version
Kubernetes v1.22.0

3. Deploy Etcd, the Kubernetes datastore (Master01)

[root@k8s-master01 ~]# wget https://github.com/etcd-io/etcd/releases/download/v3.5.0/etcd-v3.5.0-linux-amd64.tar.gz

Unpack the etcd archive:

[root@k8s-master01 ~]# tar -zxvf etcd-v3.5.0-linux-amd64.tar.gz --strip-components=1 -C /usr/local/bin etcd-v3.5.0-linux-amd64/etcd{,ctl}
etcd-v3.5.0-linux-amd64/etcdctl
etcd-v3.5.0-linux-amd64/etcd

Check the Etcd version:

[root@k8s-master01 ~]# etcdctl version
etcdctl version: 3.5.0
API version: 3.5

4. Copy the Kubernetes and Etcd binaries to the other nodes

MasterNodes='k8s-master02 k8s-master03'
WorkNodes='k8s-node01 k8s-node02'
for NODE in $MasterNodes; do echo $NODE; scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; scp /usr/local/bin/etcd* $NODE:/usr/local/bin/; done
for NODE in $WorkNodes; do     scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/ ; done

5. Create directories and switch the repo branch on all nodes

# Create the directory:
[root@k8s-master01 ~]# mkdir -p /opt/cni/bin
[root@k8s-master02 ~]# mkdir -p /opt/cni/bin
[root@k8s-master03 ~]# mkdir -p /opt/cni/bin
[root@k8s-node01 ~]# mkdir -p /opt/cni/bin
[root@k8s-node02 ~]# mkdir -p /opt/cni/bin
# Switch the branch (run on all five nodes):
[root@k8s-master01 ~]# cd k8s-ha-install && git checkout manual-installation-v1.22.x
Branch manual-installation-v1.22.x set up to track remote branch manual-installation-v1.22.x from origin.
Switched to a new branch 'manual-installation-v1.22.x'
# Repeat the same command on k8s-master02, k8s-master03, k8s-node01, and k8s-node02.

Chapter 4: Generating Certificates

1. Download the CFSSL tools

wget "https://pkg.cfssl.org/R1.2/cfssl_linux-amd64" -O /usr/local/bin/cfssl
wget "https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64" -O /usr/local/bin/cfssljson
# Make the binaries executable
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson
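A quick version check confirms the binaries were downloaded intact:

# Should print the cfssl version information:
cfssl version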

2. Etcd certificates

# 1. Create the Etcd certificate directory on all Master nodes
mkdir /etc/etcd/ssl -p
# 2. Create the kubernetes directories on all nodes
mkdir -p /etc/kubernetes/pki
# 3. Generate the etcd certificates on Master01
cd /root/k8s-ha-install/pki
# 3.1 Generate the etcd CA certificate
cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca
# 3.2 Generate the etcd server certificate
cfssl gencert \
   -ca=/etc/etcd/ssl/etcd-ca.pem \
   -ca-key=/etc/etcd/ssl/etcd-ca-key.pem \
   -config=ca-config.json \
   -hostname=127.0.0.1,k8s-master01,k8s-master02,k8s-master03,192.168.150.120,192.168.150.121,192.168.150.122 \
   -profile=kubernetes \
   etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd
   
# Output:
[root@k8s-master01 ~/k8s-ha-install/pki]# cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca
2022/01/05 21:56:08 [INFO] generating a new CA key and certificate from CSR
2022/01/05 21:56:08 [INFO] generate received request
2022/01/05 21:56:08 [INFO] received CSR
2022/01/05 21:56:08 [INFO] generating key: rsa-2048
2022/01/05 21:56:08 [INFO] encoded CSR
2022/01/05 21:56:08 [INFO] signed certificate with serial number 248923142999041885340699454846187870198045850716

[root@k8s-master01 ~/k8s-ha-install/pki]# cfssl gencert    -ca=/etc/etcd/ssl/etcd-ca.pem    -ca-key=/etc/etcd/ssl/etcd-ca-key.pem    -config=ca-config.json    -hostname=127.0.0.1,k8s-master01,k8s-master02,k8s-master03,192.168.150.120,192.168.150.121,192.168.150.122    -profile=kubernetes    etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd
2022/01/05 21:56:22 [INFO] generate received request
2022/01/05 21:56:22 [INFO] received CSR
2022/01/05 21:56:22 [INFO] generating key: rsa-2048
2022/01/05 21:56:23 [INFO] encoded CSR
2022/01/05 21:56:23 [INFO] signed certificate with serial number 31174869574952836433924545973034906219532818960


# 4. Copy the certificates to the other Master nodes:
MasterNodes='k8s-master02 k8s-master03'
WorkNodes='k8s-node01 k8s-node02'

for NODE in $MasterNodes; do
     ssh $NODE "mkdir -p /etc/etcd/ssl"
     for FILE in etcd-ca-key.pem  etcd-ca.pem  etcd-key.pem  etcd.pem; do
       scp /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE}
     done
 done


3. Generate the Kubernetes certificates (on Master01)

3.1 Generate the CA and apiserver certificates:

cd /root/k8s-ha-install/pki && cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca

# 10.168.0.0/16 is the k8s Service CIDR; if you need a different Service CIDR, change 10.168.0.1 accordingly
# For a non-HA cluster, replace 192.168.150.236 with Master01's IP


cfssl gencert   -ca=/etc/kubernetes/pki/ca.pem   -ca-key=/etc/kubernetes/pki/ca-key.pem   -config=ca-config.json   -hostname=10.168.0.1,192.168.150.236,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,192.168.150.120,192.168.150.121,192.168.150.122   -profile=kubernetes   apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver

2022/01/07 21:20:10 [INFO] generating a new CA key and certificate from CSR
2022/01/07 21:20:10 [INFO] generate received request
2022/01/07 21:20:10 [INFO] received CSR
2022/01/07 21:20:10 [INFO] generating key: rsa-2048
2022/01/07 21:20:10 [INFO] encoded CSR
2022/01/07 21:20:10 [INFO] signed certificate with serial number 536446235943853483700375137023713284910777916961

3.2 Generate the apiserver aggregation (front-proxy) certificates

cfssl gencert   -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca 

2022/01/07 21:32:35 [INFO] generating a new CA key and certificate from CSR
2022/01/07 21:32:35 [INFO] generate received request
2022/01/07 21:32:35 [INFO] received CSR
2022/01/07 21:32:35 [INFO] generating key: rsa-2048
2022/01/07 21:32:36 [INFO] encoded CSR
2022/01/07 21:32:36 [INFO] signed certificate with serial number 113525945838166231824038702079166047055309051559

cfssl gencert   -ca=/etc/kubernetes/pki/front-proxy-ca.pem   -ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem   -config=ca-config.json   -profile=kubernetes   front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client

2022/01/07 21:33:06 [INFO] generate received request
2022/01/07 21:33:06 [INFO] received CSR
2022/01/07 21:33:06 [INFO] generating key: rsa-2048
2022/01/07 21:33:06 [INFO] encoded CSR
2022/01/07 21:33:06 [INFO] signed certificate with serial number 497088928468329434203095431688933045303503687861
# Output (ignore this warning)
2022/01/07 21:33:06 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

3.3 Generate the controller-manager, scheduler, and admin certificates

cfssl gencert -ca=/etc/kubernetes/pki/ca.pem -ca-key=/etc/kubernetes/pki/ca-key.pem -config=ca-config.json -profile=kubernetes manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager

2022/01/11 20:50:46 [INFO] generate received request
2022/01/11 20:50:46 [INFO] received CSR
2022/01/11 20:50:46 [INFO] generating key: rsa-2048
2022/01/11 20:50:47 [INFO] encoded CSR
2022/01/11 20:50:47 [INFO] signed certificate with serial number 387754172889907559525179156540521938574678232956
2022/01/11 20:50:47 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

# Note: for a non-HA cluster, change 192.168.150.236:8443 to Master01's address and 8443 to the apiserver port (default 6443)
# set-cluster: define a cluster entry:
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://192.168.150.236:8443 --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig


# set-context: define a context entry:
kubectl config set-context system:kube-controller-manager@kubernetes --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig


# set-credentials: define a user entry:
kubectl config set-credentials system:kube-controller-manager --client-certificate=/etc/kubernetes/pki/controller-manager.pem --client-key=/etc/kubernetes/pki/controller-manager-key.pem --embed-certs=true --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig


# use-context: make this context the default:
kubectl config use-context system:kube-controller-manager@kubernetes  --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

cfssl gencert -ca=/etc/kubernetes/pki/ca.pem -ca-key=/etc/kubernetes/pki/ca-key.pem -config=ca-config.json -profile=kubernetes scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler

2022/01/11 21:18:07 [INFO] generate received request
2022/01/11 21:18:07 [INFO] received CSR
2022/01/11 21:18:07 [INFO] generating key: rsa-2048
2022/01/11 21:18:07 [INFO] encoded CSR
2022/01/11 21:18:07 [INFO] signed certificate with serial number 366859119362024024386354505805984026902969967922
2022/01/11 21:18:07 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

# Note: for a non-HA cluster, change 192.168.150.236:8443 to Master01's address and 8443 to the apiserver port (default 6443)

kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://192.168.150.236:8443 --kubeconfig=/etc/kubernetes/scheduler.kubeconfig


kubectl config set-credentials system:kube-scheduler --client-certificate=/etc/kubernetes/pki/scheduler.pem --client-key=/etc/kubernetes/pki/scheduler-key.pem --embed-certs=true --kubeconfig=/etc/kubernetes/scheduler.kubeconfig


kubectl config set-context system:kube-scheduler@kubernetes --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=/etc/kubernetes/scheduler.kubeconfig


kubectl config use-context system:kube-scheduler@kubernetes --kubeconfig=/etc/kubernetes/scheduler.kubeconfig


cfssl gencert -ca=/etc/kubernetes/pki/ca.pem -ca-key=/etc/kubernetes/pki/ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin

2022/01/11 22:16:06 [INFO] generate received request
2022/01/11 22:16:06 [INFO] received CSR
2022/01/11 22:16:06 [INFO] generating key: rsa-2048
2022/01/11 22:16:06 [INFO] encoded CSR
2022/01/11 22:16:06 [INFO] signed certificate with serial number 719063184359918801743097538988798956843927582374
2022/01/11 22:16:06 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

# Note: for a non-HA cluster, change 192.168.150.236:8443 to Master01's address and 8443 to the apiserver port (default 6443)
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://192.168.150.236:8443 --kubeconfig=/etc/kubernetes/admin.kubeconfig



kubectl config set-credentials kubernetes-admin --client-certificate=/etc/kubernetes/pki/admin.pem --client-key=/etc/kubernetes/pki/admin-key.pem --embed-certs=true --kubeconfig=/etc/kubernetes/admin.kubeconfig


kubectl config set-context kubernetes-admin@kubernetes     --cluster=kubernetes     --user=kubernetes-admin     --kubeconfig=/etc/kubernetes/admin.kubeconfig


kubectl config use-context kubernetes-admin@kubernetes     --kubeconfig=/etc/kubernetes/admin.kubeconfig



# Create the ServiceAccount key pair
openssl genrsa -out /etc/kubernetes/pki/sa.key 2048

openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub

# Copy the certificates to the other master nodes

for NODE in k8s-master02 k8s-master03; do 
for FILE in $(ls /etc/kubernetes/pki | grep -v etcd); do 
scp /etc/kubernetes/pki/${FILE} $NODE:/etc/kubernetes/pki/${FILE};
done; 
for FILE in admin.kubeconfig controller-manager.kubeconfig scheduler.kubeconfig; do 
scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE};
done;
done

# Or copy directly by IP:
scp /etc/kubernetes/pki/* 192.168.150.121:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/* 192.168.150.122:/etc/kubernetes/pki/
scp /etc/kubernetes/{admin.kubeconfig,controller-manager.kubeconfig,scheduler.kubeconfig} 192.168.150.121:/etc/kubernetes/
scp /etc/kubernetes/{admin.kubeconfig,controller-manager.kubeconfig,scheduler.kubeconfig} 192.168.150.122:/etc/kubernetes/

Chapter 5: Configuring the Kubernetes System Components

1. Etcd configuration

The Etcd configuration is largely identical across nodes; be sure to adjust the hostname and IP addresses in each Master node's Etcd config.

1.1 Master01

vim /etc/etcd/etcd.config.yml 
name: 'k8s-master01'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.150.120:2380'
listen-client-urls: 'https://192.168.150.120:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.150.120:2380'
advertise-client-urls: 'https://192.168.150.120:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.150.120:2380,k8s-master02=https://192.168.150.121:2380,k8s-master03=https://192.168.150.122:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false

1.2 Master02

vim /etc/etcd/etcd.config.yml 
name: 'k8s-master02'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.150.121:2380'
listen-client-urls: 'https://192.168.150.121:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.150.121:2380'
advertise-client-urls: 'https://192.168.150.121:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.150.120:2380,k8s-master02=https://192.168.150.121:2380,k8s-master03=https://192.168.150.122:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false

1.3 Master03

vim /etc/etcd/etcd.config.yml 
name: 'k8s-master03'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.150.122:2380'
listen-client-urls: 'https://192.168.150.122:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.150.122:2380'
advertise-client-urls: 'https://192.168.150.122:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.150.120:2380,k8s-master02=https://192.168.150.121:2380,k8s-master03=https://192.168.150.122:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false

2. Create the etcd service

Create etcd.service on all Master nodes and start it

vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Alias=etcd3.service

Create the etcd certificate directory on all Master nodes

mkdir /etc/kubernetes/pki/etcd
ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
systemctl daemon-reload
systemctl enable --now etcd

Check the Etcd status

export ETCDCTL_API=3
etcdctl --endpoints="192.168.150.122:2379,192.168.150.121:2379,192.168.150.120:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem  endpoint status --write-out=table
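Besides endpoint status, endpoint health confirms that each member is answering (same endpoints and certificates as above):

etcdctl --endpoints="192.168.150.122:2379,192.168.150.121:2379,192.168.150.120:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint health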


Chapter 6: High Availability Configuration

High availability configuration (note: if this is not an HA cluster, Haproxy and Keepalived are not needed). When installing on a public cloud you can also skip this chapter and use the cloud load balancer instead, e.g. Alibaba Cloud SLB or Tencent Cloud ELB.

On public clouds, use the cloud's own load balancer (Alibaba Cloud SLB, Tencent Cloud ELB, etc.) in place of Haproxy and Keepalived, since most public clouds do not support Keepalived. Note that on Alibaba Cloud the kubectl control endpoint cannot be placed on a Master node. Tencent Cloud is recommended, because Alibaba Cloud's SLB has a loopback problem: servers behind the SLB cannot reach the SLB address themselves. Tencent Cloud has fixed this.

SLB -> Haproxy -> Apiserver

1. Deploy Keepalived and Haproxy

Install Keepalived and Haproxy on all Master nodes

yum install keepalived haproxy -y

2. Configure Haproxy

Configure HAProxy on all Master nodes (identical configuration on each)

vim /etc/haproxy/haproxy.cfg
global
  maxconn  2000
  ulimit-n  16384
  log  127.0.0.1 local0 err
  stats timeout 30s

defaults
  log global
  mode  http
  option  httplog
  timeout connect 5000
  timeout client  50000
  timeout server  50000
  timeout http-request 15s
  timeout http-keep-alive 15s

frontend k8s-master
  bind 0.0.0.0:8443
  bind 127.0.0.1:8443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master

backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server k8s-master01    192.168.150.120:6443  check
  server k8s-master02    192.168.150.121:6443  check
  server k8s-master03    192.168.150.122:6443  check
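Before starting the service, the configuration file can be syntax-checked with haproxy's built-in validator:

# Validate the HAProxy configuration (should report that the file is valid):
haproxy -c -f /etc/haproxy/haproxy.cfg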

3. Configure Keepalived

1. Keepalived on Master01

The Keepalived configuration is different on each Master node; pay attention to each node's IP and network interface (the interface parameter).

 vim /etc/keepalived/keepalived.conf
 
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5 
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens192
    mcast_src_ip 192.168.150.120
    virtual_router_id 51
    priority 101
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.150.236
    }
    track_script {
      chk_apiserver 
} }

2. Keepalived on Master02

vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5 
    weight -5
    fall 2
    rise 1
 
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens192
    mcast_src_ip 192.168.150.121
    virtual_router_id 51
    priority 100
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.150.236
    }
    track_script {
      chk_apiserver 
} }

3. Keepalived on Master03

vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2  
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    mcast_src_ip 192.168.150.122
    virtual_router_id 51
    priority 100
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.150.236
    }
    track_script {
      chk_apiserver 
} }

4. Health-check script

On all Master nodes:

vim /etc/keepalived/check_apiserver.sh 

#!/bin/bash

err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi

# Make the script executable
chmod +x /etc/keepalived/check_apiserver.sh
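The script can also be exercised by hand. Since it stops keepalived when haproxy is down, run this only after haproxy has been started:

# With haproxy running, the script should print nothing and exit 0:
bash /etc/keepalived/check_apiserver.sh; echo $?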

5. Start Haproxy and Keepalived on all Master nodes

systemctl daemon-reload && systemctl enable --now haproxy && systemctl enable --now keepalived

6. Testing the highly available cluster

# Test the VIP:
[root@k8s-master01 ~]# ping 192.168.150.236
PING 192.168.150.236 (192.168.150.236) 56(84) bytes of data.
64 bytes from 192.168.150.236: icmp_seq=1 ttl=64 time=0.053 ms
64 bytes from 192.168.150.236: icmp_seq=2 ttl=64 time=0.022 ms
64 bytes from 192.168.150.236: icmp_seq=3 ttl=64 time=0.039 ms
64 bytes from 192.168.150.236: icmp_seq=4 ttl=64 time=0.035 ms
64 bytes from 192.168.150.236: icmp_seq=5 ttl=64 time=0.042 ms

# Important: if keepalived and haproxy are installed, verify that keepalived works correctly
[root@k8s-master01 ~]# telnet 192.168.150.236 8443 
Trying 192.168.150.236...
Connected to 192.168.150.236.
Escape character is '^]'.
Connection closed by foreign host.

# If a test fails, troubleshoot as follows:
# 1. If the VIP does not answer ping and telnet never shows the '^]' escape prompt, the VIP is unusable; do not continue. Check keepalived (firewall and selinux state, haproxy/keepalived status, listening ports, etc.)
# 2. On all nodes the firewall must be disabled and inactive: systemctl status firewalld
# 3. On all nodes selinux must be disabled: getenforce
# 4. On the master nodes check haproxy and keepalived: systemctl status keepalived haproxy
# 5. On the master nodes check the listening ports: netstat -lntp
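One more check worth doing: the VIP should be bound on exactly one Master at a time, which a standard ip command will show:

# The node that currently holds the VIP lists 192.168.150.236 on its interface:
ip addr | grep 192.168.150.236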


Chapter 7: Kubernetes Component Configuration

1. Create the required directories on all nodes

mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes

2. Kube-apiserver configuration

Create kube-apiserver.service on all Master nodes

Note: for a non-HA cluster, replace 192.168.150.236 with Master01's address.

2.1 kube-apiserver.service on Master01

Note: this document uses 10.168.0.0/16 as the k8s Service CIDR; it must not overlap with the host network or the Pod CIDR.

vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2  \
      --logtostderr=true  \
      --allow-privileged=true  \
      --bind-address=0.0.0.0  \
      --secure-port=6443  \
      --insecure-port=0  \
      --advertise-address=192.168.150.120 \
      --service-cluster-ip-range=10.168.0.0/16  \
      --service-node-port-range=30000-32767  \
      --etcd-servers=https://192.168.150.120:2379,https://192.168.150.121:2379,https://192.168.150.122:2379 \
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem  \
      --etcd-certfile=/etc/etcd/ssl/etcd.pem  \
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem  \
      --client-ca-file=/etc/kubernetes/pki/ca.pem  \
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem  \
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem  \
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem  \
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem  \
      --service-account-key-file=/etc/kubernetes/pki/sa.pub  \
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key  \
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \
      --authorization-mode=Node,RBAC  \
      --enable-bootstrap-token-auth=true  \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \
      --requestheader-allowed-names=aggregator  \
      --requestheader-group-headers=X-Remote-Group  \
      --requestheader-extra-headers-prefix=X-Remote-Extra-  \
      --requestheader-username-headers=X-Remote-User
      # --token-auth-file=/etc/kubernetes/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target

2.2 kube-apiserver.service on Master02

vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2  \
      --logtostderr=true  \
      --allow-privileged=true  \
      --bind-address=0.0.0.0  \
      --secure-port=6443  \
      --insecure-port=0  \
      --advertise-address=192.168.150.121 \
      --service-cluster-ip-range=10.168.0.0/16  \
      --service-node-port-range=30000-32767  \
      --etcd-servers=https://192.168.150.120:2379,https://192.168.150.121:2379,https://192.168.150.122:2379 \
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem  \
      --etcd-certfile=/etc/etcd/ssl/etcd.pem  \
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem  \
      --client-ca-file=/etc/kubernetes/pki/ca.pem  \
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem  \
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem  \
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem  \
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem  \
      --service-account-key-file=/etc/kubernetes/pki/sa.pub  \
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key  \
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \
      --authorization-mode=Node,RBAC  \
      --enable-bootstrap-token-auth=true  \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \
      --requestheader-allowed-names=aggregator  \
      --requestheader-group-headers=X-Remote-Group  \
      --requestheader-extra-headers-prefix=X-Remote-Extra-  \
      --requestheader-username-headers=X-Remote-User
      # --token-auth-file=/etc/kubernetes/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target

2.3 kube-apiserver.service on Master03

vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2  \
      --logtostderr=true  \
      --allow-privileged=true  \
      --bind-address=0.0.0.0  \
      --secure-port=6443  \
      --insecure-port=0  \
      --advertise-address=192.168.150.122 \
      --service-cluster-ip-range=10.168.0.0/16  \
      --service-node-port-range=30000-32767  \
      --etcd-servers=https://192.168.150.120:2379,https://192.168.150.121:2379,https://192.168.150.122:2379 \
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem  \
      --etcd-certfile=/etc/etcd/ssl/etcd.pem  \
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem  \
      --client-ca-file=/etc/kubernetes/pki/ca.pem  \
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem  \
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem  \
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem  \
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem  \
      --service-account-key-file=/etc/kubernetes/pki/sa.pub  \
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key  \
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \
      --authorization-mode=Node,RBAC  \
      --enable-bootstrap-token-auth=true  \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \
      --requestheader-allowed-names=aggregator  \
      --requestheader-group-headers=X-Remote-Group  \
      --requestheader-extra-headers-prefix=X-Remote-Extra-  \
      --requestheader-username-headers=X-Remote-User
      # --token-auth-file=/etc/kubernetes/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target

2.4 Start the apiserver

Enable and start kube-apiserver on all Master nodes

systemctl daemon-reload && systemctl enable --now kube-apiserver

Check the kube-apiserver status

[root@k8s-master03 ~]# systemctl status kube-apiserver
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2022-03-16 18:22:25 CST; 14s ago
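The secure port can also be probed directly with curl (-k skips certificate verification); the default RBAC rules allow anonymous access to /healthz, so this should return "ok" once the apiserver is up:

# Probe the apiserver health endpoint on the local node:
curl -k https://127.0.0.1:6443/healthz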


3. Kube-controller-manager configuration

Configure kube-controller-manager.service on all Master nodes (identical configuration on each)

Note: this document uses 172.16.0.0/12 as the k8s Pod CIDR; it must not overlap with the host network or the k8s Service CIDR

vim /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
      --v=2 \
      --logtostderr=true \
      --address=127.0.0.1 \
      --root-ca-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
      --service-account-private-key-file=/etc/kubernetes/pki/sa.key \
      --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
      --leader-elect=true \
      --use-service-account-credentials=true \
      --node-monitor-grace-period=40s \
      --node-monitor-period=5s \
      --pod-eviction-timeout=2m0s \
      --controllers=*,bootstrapsigner,tokencleaner \
      --allocate-node-cidrs=true \
      --cluster-cidr=172.16.0.0/12 \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
      --node-cidr-mask-size=24
      
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target

# 1. Start kube-controller-manager on all Master nodes
systemctl daemon-reload
systemctl enable --now kube-controller-manager

# 2. Check the status
systemctl  status kube-controller-manager
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2022-03-16 18:29:47 CST; 30s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 10284 (kube-controller)


4. Kube-scheduler configuration

Configure kube-scheduler.service on all Master nodes (identical configuration on each)

vim /usr/lib/systemd/system/kube-scheduler.service 
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-scheduler \
      --v=2 \
      --logtostderr=true \
      --address=127.0.0.1 \
      --leader-elect=true \
      --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target

# 1. Start the scheduler and enable it at boot:
systemctl daemon-reload && systemctl enable --now kube-scheduler

# 2. Check the scheduler status:
systemctl status kube-scheduler


Chapter 8: TLS Bootstrapping (Automatic Certificate Issuance)

Create the bootstrap resources on Master01 only.

Note: for a non-HA cluster, change 192.168.150.236:8443 to Master01's address and 8443 to the apiserver port (default 6443).

cd /root/k8s-ha-install/bootstrap
kubectl config set-cluster kubernetes     --certificate-authority=/etc/kubernetes/pki/ca.pem     --embed-certs=true     --server=https://192.168.150.236:8443     --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
kubectl config set-credentials tls-bootstrap-token-user     --token=c8ad9c.2e4d610cf3e7426e --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
kubectl config set-context tls-bootstrap-token-user@kubernetes     --cluster=kubernetes     --user=tls-bootstrap-token-user     --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
kubectl config use-context tls-bootstrap-token-user@kubernetes     --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

Note: if you modify the token-id and token-secret in bootstrap.secret.yaml, the new strings must keep the same lengths and match each other, and the token used in the commands above (c8ad9c.2e4d610cf3e7426e) must match what you set in the file.

mkdir -p /root/.kube ; cp /etc/kubernetes/admin.kubeconfig /root/.kube/config

Only continue if the cluster status can be queried successfully; otherwise stop and troubleshoot the k8s components.

[root@k8s-master01 ~/k8s-ha-install/bootstrap]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
scheduler            Healthy   ok                              
controller-manager   Healthy   ok                              
etcd-2               Healthy   {"health":"true","reason":""}   
etcd-1               Healthy   {"health":"true","reason":""}   
etcd-0               Healthy   {"health":"true","reason":""} 

[root@k8s-master01 ~/k8s-ha-install/bootstrap]# kubectl create -f bootstrap.secret.yaml 
secret/bootstrap-token-c8ad9c created
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/node-autoapprove-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/node-autoapprove-certificate-rotation created
clusterrole.rbac.authorization.k8s.io/system:kube-apiserver-to-kubelet created
clusterrolebinding.rbac.authorization.k8s.io/system:kube-apiserver created

Chapter 9: Node Configuration

1. Copy certificates

Copy the certificates from Master01 to the other nodes

cd /etc/kubernetes/

for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do
     ssh $NODE mkdir -p /etc/kubernetes/pki /etc/etcd/ssl
     for FILE in etcd-ca.pem etcd.pem etcd-key.pem; do
       scp /etc/etcd/ssl/$FILE $NODE:/etc/etcd/ssl/
     done
     for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig; do
       scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}
 done
 done
 
 
 # Output:
 etcd-ca.pem                            100% 1367     3.2MB/s   00:00    
etcd.pem                               100% 1509     3.6MB/s   00:00    
etcd-key.pem                           100% 1679     3.6MB/s   00:00    
ca.pem                                 100% 1411     2.2MB/s   00:00    
ca-key.pem                             100% 1679     2.5MB/s   00:00    
front-proxy-ca.pem                     100% 1143     3.0MB/s   00:00    
bootstrap-kubelet.kubeconfig           100% 2302     1.6MB/s   00:00    
etcd-ca.pem                            100% 1367     3.8MB/s   00:00    
etcd.pem                               100% 1509     3.8MB/s   00:00    
etcd-key.pem                           100% 1679     3.1MB/s   00:00    
ca.pem                                 100% 1411     2.3MB/s   00:00    
ca-key.pem                             100% 1679     4.2MB/s   00:00    
front-proxy-ca.pem                     100% 1143     2.9MB/s   00:00    
bootstrap-kubelet.kubeconfig           100% 2302     1.7MB/s   00:00    
etcd-ca.pem                            100% 1367     1.1MB/s   00:00    
etcd.pem                               100% 1509   726.9KB/s   00:00    
etcd-key.pem                           100% 1679     1.2MB/s   00:00    
ca.pem                                 100% 1411     1.1MB/s   00:00    
ca-key.pem                             100% 1679     1.3MB/s   00:00    
front-proxy-ca.pem                     100% 1143   930.9KB/s   00:00    
bootstrap-kubelet.kubeconfig           100% 2302   932.4KB/s   00:00    
etcd-ca.pem                            100% 1367   835.7KB/s   00:00    
etcd.pem                               100% 1509     1.1MB/s   00:00    
etcd-key.pem                           100% 1679     1.0MB/s   00:00    
ca.pem                                 100% 1411     1.0MB/s   00:00    
ca-key.pem                             100% 1679     1.2MB/s   00:00    
front-proxy-ca.pem                     100% 1143   784.9KB/s   00:00    
bootstrap-kubelet.kubeconfig           100% 2302     1.5MB/s   00:00  

2. Kubelet configuration

Create the required directories on all nodes

mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/

Configure kubelet.service on all nodes

vim  /usr/lib/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
ExecStart=/usr/local/bin/kubelet

Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target

Configure the kubelet.service drop-in on all nodes

vim /etc/systemd/system/kubelet.service.d/10-kubelet.conf

[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.5"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' "
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS

Create the kubelet configuration file

Note: if you change the k8s Service CIDR, update clusterDNS in kubelet-conf.yml to the tenth address of the new Service CIDR, e.g. 10.168.0.10.

vim /etc/kubernetes/kubelet-conf.yml

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.168.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s

Start kubelet on all nodes

systemctl daemon-reload && systemctl enable --now kubelet

At this point it is normal for the system log /var/log/messages to contain only the following message:

Unable to update cni config: no networks found in /etc/cni/net.d

Lots of error output, or large amounts of unintelligible messages, indicate a broken kubelet configuration that needs to be checked.
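When that happens, the kubelet journal is usually the quickest way to find the cause (standard systemd tooling):

# Inspect the most recent kubelet log entries:
journalctl -u kubelet --no-pager | tail -n 50
# Or follow the log live:
journalctl -u kubelet -f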

Check the cluster status

kubectl get node


3. Kube-proxy configuration

Note: for a non-HA cluster, change 192.168.150.236:8443 to Master01's address and 8443 to the apiserver port (default 6443).

Run the following on Master01 only

cd /root/k8s-ha-install
kubectl -n kube-system create serviceaccount kube-proxy

kubectl create clusterrolebinding system:kube-proxy         --clusterrole system:node-proxier         --serviceaccount kube-system:kube-proxy

SECRET=$(kubectl -n kube-system get sa/kube-proxy \
    --output=jsonpath='{.secrets[0].name}')

JWT_TOKEN=$(kubectl -n kube-system get secret/$SECRET \
--output=jsonpath='{.data.token}' | base64 -d)

PKI_DIR=/etc/kubernetes/pki
K8S_DIR=/etc/kubernetes

kubectl config set-cluster kubernetes     --certificate-authority=/etc/kubernetes/pki/ca.pem     --embed-certs=true     --server=https://192.168.150.236:8443     --kubeconfig=${K8S_DIR}/kube-proxy.kubeconfig

kubectl config set-credentials kubernetes     --token=${JWT_TOKEN}     --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

kubectl config set-context kubernetes     --cluster=kubernetes     --user=kubernetes     --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

kubectl config use-context kubernetes     --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

Send the kubeconfig to the other nodes

for NODE in k8s-master02 k8s-master03; do
     scp /etc/kubernetes/kube-proxy.kubeconfig  $NODE:/etc/kubernetes/kube-proxy.kubeconfig
 done

for NODE in k8s-node01 k8s-node02; do
     scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig
 done

Add the kube-proxy configuration and service file on all nodes:

vim /usr/lib/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy.yaml \
  --v=2

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target

If you changed the cluster's Pod CIDR, update clusterCIDR in kube-proxy.yaml to your own Pod CIDR:

vim /etc/kubernetes/kube-proxy.yaml

apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
  qps: 5
clusterCIDR: 172.16.0.0/12 
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  masqueradeAll: true
  minSyncPeriod: 5s
  scheduler: "rr"
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms

Start kube-proxy on all nodes

systemctl daemon-reload && systemctl enable --now kube-proxy
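Since kube-proxy runs in ipvs mode here, the IPVS table should be populated shortly after startup; ipvsadm (installed in Chapter 2) can confirm this:

# The kubernetes service VIP (e.g. 10.168.0.1:443) should appear as a virtual server:
ipvsadm -Ln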

Chapter 10: Installing Calico

Install the officially recommended version. Run the following steps on Master01 only:

cd /root/k8s-ha-install/calico/
# Change Calico's CIDR to your own Pod CIDR:
sed -i "s#POD_CIDR#172.16.0.0/12#g" calico.yaml
# Verify the change:
[root@k8s-master01 ~/k8s-ha-install/calico]# grep "CALICO_IPV4POOL_CIDR" -A 1 calico.yaml 
            - name: CALICO_IPV4POOL_CIDR
              value: "172.16.0.0/12"
              
# Deploy Calico:
kubectl apply -f calico.yaml

# Check the pod status:
kubectl get po -n kube-system


Chapter 11: Installing CoreDNS

1. Deploy the officially recommended CoreDNS version

cd /root/k8s-ha-install/

# If you changed the k8s Service CIDR, set the CoreDNS service IP to the tenth IP of the Service CIDR:
COREDNS_SERVICE_IP=`kubectl get svc | grep kubernetes | awk '{print $3}'`0

sed -i "s#10.168.0.10#${COREDNS_SERVICE_IP}#g" CoreDNS/coredns.yaml

# Install CoreDNS:
kubectl  create -f CoreDNS/coredns.yaml

# Check the pod status:
 kubectl get po -n kube-system
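Once the coredns pod is Running, in-cluster name resolution can be verified with a throwaway pod; busybox:1.28 is used here because nslookup is broken in later busybox images:

# Resolve the kubernetes service through CoreDNS:
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default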


2. Install the latest CoreDNS version

COREDNS_SERVICE_IP=`kubectl get svc | grep kubernetes | awk '{print $3}'`0


git clone https://github.com/coredns/deployment.git
cd deployment/kubernetes
./deploy.sh -s -i ${COREDNS_SERVICE_IP} | kubectl apply -f -
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
Check the status
 # kubectl get po -n kube-system -l k8s-app=kube-dns
NAME                       READY   STATUS    RESTARTS   AGE
coredns-85b4878f78-h29kh   1/1     Running   0          8h

Chapter 12: Installing Metrics Server

In newer Kubernetes versions, system resource metrics are collected by metrics-server, which reports memory, disk, CPU, and network usage for nodes and Pods.

# Install metrics-server
cd /root/k8s-ha-install/metrics-server && kubectl  create -f . 
# Wait for metrics-server to start, then check:
kubectl  top node
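If kubectl top reports that metrics are not yet available, check that the aggregated API registered by metrics-server has become available:

# AVAILABLE should turn True once metrics-server is ready:
kubectl get apiservice v1beta1.metrics.k8s.io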

Chapter 13: Installing the Dashboard

The Dashboard displays the cluster's resources; it can also tail Pod logs and run commands inside containers in real time.

1. Install the pinned Dashboard version

cd /root/k8s-ha-install/dashboard/
kubectl  create -f .

# Check the status:
kubectl get po -n kubernetes-dashboard


2. Install the latest Dashboard version

The latest Dashboard release can be found on the official GitHub repository (kubernetes/dashboard); the recommended manifest looks like:

https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.3/aio/deploy/recommended.yaml
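Assuming the manifest URL above, it can be applied directly with kubectl (swap v2.0.3 for the release you want):

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.3/aio/deploy/recommended.yaml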

# Create an admin user
vim admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding 
metadata: 
  name: admin-user
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
  
kubectl apply -f admin.yaml -n kube-system

3. Log in to the Dashboard

In the Google Chrome startup configuration, add the following flags to work around the certificate error that prevents access to the Dashboard. On Windows, see Figure 1-1:

Figure 1-1: Google Chrome configuration

--test-type --ignore-certificate-errors


On macOS, see Figure 1-2:

The detailed macOS steps are:

# 1. Open Terminal; the default prompt should be $
# 2. Enter the Chrome.app directory:
cd "/Applications/Google Chrome.app/Contents/MacOS/"
# 3. Rename the original launcher:
mv "Google Chrome" Google.real 
# 4. Create a new launcher script via a pipe, adding the required startup flags:
printf '#!/bin/bash\ncd "/Applications/Google Chrome.app/Contents/MacOS"\n"/Applications/Google Chrome.app/Contents/MacOS/Google.real" --test-type --ignore-certificate-errors "$@"\n' > Google\ Chrome
# 5. Make the new script executable:
chmod u+x "Google Chrome" 
Restart Google Chrome for the change to take effect.

Change the Dashboard Service type to NodePort

kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
type: NodePort
# Save and exit: shift+zz (ZZ)


Change ClusterIP to NodePort (skip this step if it is already NodePort):


Check the port number:

[root@k8s-master01 ~/k8s-ha-install/dashboard]# kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.168.208.44   <none>        443:32520/TCP   3m47s


Using your instance's port number, the Dashboard is reachable through the IP of any host running kube-proxy plus that port, e.g. https://192.168.150.120:32520 (replace 32520 with your own port). Choose "Token" as the login method (see Figure 1-2).

Retrieve the token:

[root@k8s-master01 ~/k8s-ha-install/dashboard]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-4mnzh
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: c8cad271-61fa-47cb-80a8-3f77a2444a2d

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1411 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6Ii1hb2NaZTJuRHFtSlF6TTlNbjZhMzhxeUhLQjgyQW5BYVgwTlVHaFNvWDQifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTRtbnpoIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJjOGNhZDI3MS02MWZhLTQ3Y2ItODBhOC0zZjc3YTI0NDRhMmQiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.Sga_p0IThCuBKv6n2-MVB6J58-TDohfdzMmW8WCKbuxaAoh-2PoD_J1kz6Ee3p-nhziF7Mf5naV-sCfzgGYzWalrIW0wxNuJECFX4sic9Nu4gDGq681OxUflYomlwTfroPIf_veTX2K8UyGgMiN2OqCfYtdMGTW80S5yDI0C_kdz5ZnAPqRo2gBr9hFt0NeVAbbqAoaCs90u9oBNjJF0MAskpdILoyYsHJnmCiR79JybjR46PVT2YlVVsdsqf2Kx1AmQ6Uyn_tC6gcA6F2akrHnnudlJE10Q6gctmfnxilScEuzAaWCPXDyeK3gzQFTXYz0ITJd3ZZOEED9TX4l_8g

