Chapter 2: Installing a Multi-Master, Multi-Node Kubernetes Cluster with kubeadm

1. Node Planning

When to use kubeadm versus a binary installation
kubeadm is the official open-source tool for quickly standing up a Kubernetes cluster and is currently the most convenient and recommended approach. The two commands kubeadm init and kubeadm join are enough to create a cluster. With kubeadm, all control-plane components run as pods, so they recover from failures automatically.
kubeadm is essentially an automated installer: it hides many details and requires little awareness of the individual components, which also means that if you do not understand the Kubernetes architecture well, problems can be harder to troubleshoot. It suits teams that deploy Kubernetes frequently or want a high degree of automation.
A binary installation means downloading the component binaries from the official site and installing them by hand, which gives a more complete understanding of how Kubernetes fits together.
Both approaches are suitable for production and run stably there; choose based on the needs of your project.
Hostname         Outer-IP         Inner-IP
k8s-master-001   192.168.40.180   172.16.1.113
k8s-node-001     192.168.40.181   172.16.1.114
k8s-node-002     192.168.40.182   172.16.1.115
#1. Install Docker and kubeadm on all nodes

#2. Deploy the Kubernetes master

#3. Deploy a container network plugin

#4. Deploy the Kubernetes nodes and join them to the cluster

#5. Deploy the Dashboard web UI to view Kubernetes resources visually

2. System Initialization (All Nodes)

1. Add hosts entries

#1. Set the hostnames
[root@ip-172-16-1-113 ~]# hostnamectl set-hostname k8s-master-001
[root@ip-172-16-1-114 ~]# hostnamectl set-hostname k8s-node-001
[root@ip-172-16-1-115 ~]# hostnamectl set-hostname k8s-node-002

#2. Add hosts entries on the master node
[root@k8s-master-001 ~]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.40.180 k8s-admin-master01 m1
192.168.40.181 k8s-admin-node01 n1
192.168.40.182 k8s-admin-node02 n2

#3. Distribute the hosts file to the node machines
[root@k8s-master-001 ~]# scp /etc/hosts root@n1:/etc/hosts
hosts                                                                                             100%  247     2.7KB/s   00:00    

[root@k8s-master-001 ~]# scp /etc/hosts root@n2:/etc/hosts
hosts

2. Configure passwordless SSH login

[root@k8s-admin-master01 ~]#   ssh-keygen
[root@k8s-admin-node01 ~]#   ssh-keygen
[root@k8s-admin-node02 ~]#   ssh-keygen
 
[root@k8s-admin-master01 ~]#  ssh-copy-id  m1
[root@k8s-admin-master01 ~]#  ssh-copy-id  n1
[root@k8s-admin-master01 ~]#  ssh-copy-id  n2

[root@k8s-admin-node01 ~]#  ssh-copy-id  m1
[root@k8s-admin-node01 ~]#  ssh-copy-id  n1
[root@k8s-admin-node01 ~]#  ssh-copy-id  n2

[root@k8s-admin-node02 ~]#  ssh-copy-id  m1
[root@k8s-admin-node02 ~]#  ssh-copy-id  n1
[root@k8s-admin-node02 ~]#  ssh-copy-id  n2

3. Disable the swap partition (for performance)

# Disable temporarily

[root@k8s-admin-master01 ~]# swapoff -a
[root@k8s-admin-node01 ~]# swapoff -a 
[root@k8s-admin-node02 ~]# swapoff -a



# Disable permanently: comment out the swap entry in /etc/fstab
[root@k8s-admin-master01 ~]# vim  /etc/fstab 
#UUID=ff05d4f6-5d80-4d32-90a2-8268a0d4d0d3 swap                    swap    defaults        0 0
# If this is a cloned VM, delete the UUID

[root@k8s-admin-node01 ~]# vim  /etc/fstab 
#UUID=ff05d4f6-5d80-4d32-90a2-8268a0d4d0d3 swap                    swap    defaults        0 0
# If this is a cloned VM, delete the UUID

[root@k8s-admin-node02 ~]# vim  /etc/fstab 
#UUID=ff05d4f6-5d80-4d32-90a2-8268a0d4d0d3 swap                    swap    defaults        0 0
# If this is a cloned VM, delete the UUID

Explanation:
Swap is the swap partition; when a machine runs short of memory it swaps to disk, but swap performance is poor. For performance reasons, Kubernetes does not allow swap by default. kubeadm checks during initialization whether swap is off and fails if it is not. If you really want to keep swap enabled, pass --ignore-preflight-errors=Swap when installing Kubernetes.
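
As a non-interactive alternative to editing /etc/fstab by hand, the permanent change can be scripted. This is only a sketch; the sed pattern assumes the swap entry is the only fstab line containing the word "swap":

# Turn swap off now and comment out the swap entry in /etc/fstab
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
# Verify: the Swap row of free -m should show 0
free -m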

4. Tune kernel parameters

[root@xianchaomaster1 ~]# modprobe br_netfilter 
[root@xianchaomaster1 ~]# echo "modprobe br_netfilter" >> /etc/profile 
[root@xianchaomaster1 ~]# cat > /etc/sysctl.d/k8s.conf <<EOF 
net.bridge.bridge-nf-call-ip6tables = 1 
net.bridge.bridge-nf-call-iptables = 1 
net.ipv4.ip_forward = 1 
EOF 
[root@xianchaomaster1 ~]# sysctl -p /etc/sysctl.d/k8s.conf

[root@xianchaonode1 ~]# modprobe br_netfilter 
[root@xianchaonode1 ~]# echo "modprobe br_netfilter" >> /etc/profile 
[root@xianchaonode1 ~]# cat > /etc/sysctl.d/k8s.conf <<EOF 
net.bridge.bridge-nf-call-ip6tables = 1 
net.bridge.bridge-nf-call-iptables = 1 
net.ipv4.ip_forward = 1 
EOF 
[root@xianchaonode1 ~]# sysctl -p /etc/sysctl.d/k8s.conf 
 
[root@xianchaonode2 ~]# modprobe br_netfilter 
[root@xianchaonode2 ~]# echo "modprobe br_netfilter" >> /etc/profile 
[root@xianchaonode2 ~]# cat > /etc/sysctl.d/k8s.conf <<EOF 
net.bridge.bridge-nf-call-ip6tables = 1 
net.bridge.bridge-nf-call-iptables = 1 
net.ipv4.ip_forward = 1 
EOF 
[root@xianchaonode2 ~]# sysctl -p /etc/sysctl.d/k8s.conf

Question 1: What does sysctl do?
It configures kernel parameters at runtime.
  -p   load parameters from the given file; if no file is given, /etc/sysctl.conf is used

Question 2: Why run modprobe br_netfilter?
After editing /etc/sysctl.d/k8s.conf and adding the three lines:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

running sysctl -p /etc/sysctl.d/k8s.conf can fail with:

sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory

The fix is to load the bridge netfilter module first:
modprobe br_netfilter

Question 3: Why enable net.bridge.bridge-nf-call-iptables?
After installing Docker on CentOS, docker info may print these warnings:
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled

The fix is to set both parameters in /etc/sysctl.d/k8s.conf:
vim  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

Question 4: Why set net.ipv4.ip_forward = 1?
If kubeadm init fails its preflight checks complaining that /proc/sys/net/ipv4/ip_forward is not set to 1, IP forwarding is disabled and needs to be turned on.

net.ipv4.ip_forward controls packet forwarding:
For security reasons, Linux disables packet forwarding by default. Forwarding means that when a host has more than one network interface and receives a packet on one of them, it sends the packet out of another interface according to the destination IP and the routing table; this is normally a router's job.
To give a Linux system this routing capability, set the kernel parameter net.ipv4.ip_forward: a value of 0 means IP forwarding is disabled, and 1 means it is enabled.
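
A quick way to confirm all three parameters are active, plus an optional step (not in the original text; systemd-based systems only) to load br_netfilter on every boot without relying on /etc/profile:

# Check the running values; all three should print 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
# Confirm the module is loaded
lsmod | grep br_netfilter
# Optional: let systemd-modules-load load the module at boot
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf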


5. Disable the firewall and SELinux

#1. Disable the firewall
[root@k8s-master-001 ~]# systemctl disable --now firewalld

#2. Disable SELinux
1) Temporarily
[root@k8s-master-001 ~]# setenforce 0

2) Permanently
[root@k8s-master-001 ~]# sed -i 's#enforcing#disabled#g' /etc/selinux/config

3) Verify
[root@xianchaonode2~]#getenforce 
Disabled 
# Disabled means SELinux is off (the permanent change takes full effect after a reboot)

6. Configure domestic (China) yum mirrors

By default, CentOS uses the official upstream yum repositories, which are usually very slow from inside China, so we replace them with mature domestic mirrors such as the Tsinghua, NetEase, or Aliyun mirrors.

#1. Replace the yum repos
[root@k8s-master-001 ~]# mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
[root@k8s-master-001 ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
[root@k8s-master-001 ~]# curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

#2. Rebuild the yum cache
[root@k8s-master-001 ~]# yum makecache

#3. Update packages but exclude kernel updates
[root@k8s-master-001 ~]# yum update -y --exclude=kernel*

7. Time synchronization

Time matters a lot in a cluster: if one machine's clock drifts from the rest of the cluster, all sorts of problems can follow. Before deploying the cluster, synchronize the clocks of all machines.

#1. CentOS 7
yum install ntp -y

ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' > /etc/timezone

ntpdate time2.aliyun.com

# Add a cron entry to keep the time synchronized
* * * * * /usr/sbin/ntpdate ntp1.aliyun.com &>/dev/null


#2. CentOS 8
rpm -ivh http://mirrors.wlnmp.com/centos/wlnmp-release-centos.noarch.rpm
yum install wntp -y

ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' > /etc/timezone

ntpdate time2.aliyun.com

# Add a cron entry to keep the time synchronized
* * * * * /usr/sbin/ntpdate ntp1.aliyun.com &>/dev/null
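
The crontab line above can be installed non-interactively; a minimal sketch that appends it to root's existing crontab:

(crontab -l 2>/dev/null; echo "* * * * * /usr/sbin/ntpdate ntp1.aliyun.com &>/dev/null") | crontab -
# Verify the entry
crontab -l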

8. Enable IPVS

IPVS is a kernel module with very high network forwarding performance, so it is generally the preferred kube-proxy mode.

#1. Install the IPVS tooling
[root@k8s-master-001 ~]# yum install -y conntrack-tools ipvsadm ipset conntrack libseccomp

# Load the IPVS kernel modules
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in \${ipvs_modules}; do
  /sbin/modinfo -F filename \${kernel_module} > /dev/null 2>&1
  if [ $? -eq 0 ]; then
    /sbin/modprobe \${kernel_module}
  fi
done
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
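
The /etc/sysconfig/modules mechanism above is CentOS-specific. As an alternative (an assumption, not part of the original steps), systemd-based systems can list the modules in /etc/modules-load.d so they are loaded on every boot:

cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
systemctl restart systemd-modules-load
lsmod | grep -e ip_vs -e nf_conntrack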

9. Install base packages

[root@xianchaomaster1 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate telnet

10. Install Docker

#1. Add the Docker yum repo
[root@xianchaomaster1 ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo 
[root@xianchaonode1 ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@xianchaonode2 ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo



[root@xianchaomaster1 ~]# yum install docker-ce-20.10.6 docker-ce-cli-20.10.6 containerd.io -y 
[root@xianchaomaster1 ~]# systemctl start docker && systemctl enable docker.service 
 
[root@xianchaonode1 ~]# yum install docker-ce-20.10.6 docker-ce-cli-20.10.6 containerd.io -y 
[root@xianchaonode1 ~]# systemctl start docker && systemctl enable docker.service 

[root@xianchaonode2 ~]# yum install docker-ce-20.10.6 docker-ce-cli-20.10.6 containerd.io -y 
[root@xianchaonode2 ~]# systemctl start docker && systemctl enable docker.service

Note: what each package is for
kubeadm:  the tool used to initialize a Kubernetes cluster
kubelet:  installed on every node; it starts and manages the pods
kubectl:  the CLI used to deploy and manage applications, inspect resources, and create, delete and update components
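
This chapter does not show the installation of these three packages themselves. A typical install on all nodes, assuming the Aliyun Kubernetes yum mirror and version 1.20.6 to match the rest of the chapter, would look like this:

cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
yum install -y kubelet-1.20.6 kubeadm-1.20.6 kubectl-1.20.6
systemctl enable kubelet
# kubelet will keep restarting until kubeadm init/join runs; that is expected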


#2. Configure Docker registry mirrors and the cgroup driver
[root@xianchaomaster1 ~]# vim /etc/docker/daemon.json 
{ 
 "registry-mirrors": ["https://rsbud4vc.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com","http://qtid6917.mirror.aliyuncs.com","https://rncxm540.mirror.aliyuncs.com"], 
 "exec-opts": ["native.cgroupdriver=systemd"] 
}


# Explanation:
# This changes the Docker cgroup driver to systemd (the default is cgroupfs). kubelet defaults to systemd,
# and the two must match.

After changing the Docker configuration, reload and restart Docker (do this on every node where the file was changed):
[root@xianchaonode1 ~]# systemctl daemon-reload
[root@xianchaonode1 ~]# systemctl restart docker
[root@xianchaonode1 ~]# systemctl status docker
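
To confirm the driver change took effect, a quick check (not part of the original steps):

docker info | grep -i "cgroup driver"
# Expected output: Cgroup Driver: systemd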

3. kube-apiserver High Availability with keepalived + nginx

1) Install nginx and keepalived on the primary and backup

Install nginx and keepalived on both xianchaomaster1 and xianchaomaster2:

[root@xianchaomaster1 ~]#  yum install nginx keepalived -y
[root@xianchaomaster2 ~]#  yum install nginx keepalived -y

2) Edit the nginx configuration (identical on primary and backup)

[root@xianchaomaster1 ~]# vim /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

# Layer-4 load balancing for the two master apiservers
stream {

    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';

    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {
            server 192.168.40.180:6443 weight=5 max_fails=3 fail_timeout=30s;  
            server 192.168.40.181:6443 weight=5 max_fails=3 fail_timeout=30s;
    }
    
    server {
       listen 16443; # nginx runs on the master nodes themselves, so it cannot listen on 6443 or it would conflict with the apiserver
       proxy_pass k8s-apiserver;
    }
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    server {
        listen       80 default_server;
        server_name  _;

        location / {
        }
    }
}

[root@xianchaomaster2 ~]# vim /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

# Layer-4 load balancing for the two master apiservers
stream {

    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';

    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {
       server 192.168.40.180:6443 weight=5 max_fails=3 fail_timeout=30s;  
       server 192.168.40.181:6443 weight=5 max_fails=3 fail_timeout=30s;
    }
    
    server {
       listen 16443; # nginx runs on the master nodes themselves, so it cannot listen on 6443 or it would conflict with the apiserver
       proxy_pass k8s-apiserver;
    }
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    server {
        listen       80 default_server;
        server_name  _;

        location / {
        }
    }
}

3) keepalived configuration

Primary keepalived
[root@xianchaomaster1 ~]# vim  /etc/keepalived/keepalived.conf 
global_defs { 
   notification_email { 
     acassen@firewall.loc 
     failover@firewall.loc 
     sysadmin@firewall.loc 
   } 
   notification_email_from Alexandre.Cassen@firewall.loc  
   smtp_server 127.0.0.1 
   smtp_connect_timeout 30 
   router_id NGINX_MASTER
} 

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 { 
    state MASTER 
    interface ens33  # change to the actual NIC name
    virtual_router_id 51 # VRRP router ID; unique per VRRP instance 
    priority 100    # priority; set 90 on the backup server 
    advert_int 1    # VRRP advertisement (heartbeat) interval, default 1 second 
    authentication { 
        auth_type PASS      
        auth_pass 1111 
    }  
    # Virtual IP
    virtual_ipaddress { 
        192.168.40.199/24
    } 
    track_script {
        check_nginx
    } 
}

#vrrp_script: the script that checks whether nginx is healthy (its result decides whether to fail over)
#virtual_ipaddress: the virtual IP (VIP)

[root@xianchaomaster1 ~]# vim  /etc/keepalived/check_nginx.sh 
#!/bin/bash
#1. Check whether nginx is running
counter=$(ps -ef |grep nginx | grep sbin | egrep -cv "grep|$$" )
if [ $counter -eq 0 ]; then
    #2. If not, try to start it
    service nginx start
    sleep 2
    #3. After 2 seconds, check the nginx state again
    counter=$(ps -ef |grep nginx | grep sbin | egrep -cv "grep|$$" )
    #4. If nginx is still not running, stop keepalived so the VIP fails over
    if [ $counter -eq 0 ]; then
        service  keepalived stop
    fi
fi

[root@xianchaomaster1 ~]# chmod +x  /etc/keepalived/check_nginx.sh
Backup keepalived
[root@xianchaomaster2 ~]# vim  /etc/keepalived/keepalived.conf 
global_defs { 
   notification_email { 
     acassen@firewall.loc 
     failover@firewall.loc 
     sysadmin@firewall.loc 
   } 
   notification_email_from Alexandre.Cassen@firewall.loc  
   smtp_server 127.0.0.1 
   smtp_connect_timeout 30 
   router_id NGINX_BACKUP
} 

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 { 
    state BACKUP 
    interface ens33
    virtual_router_id 51 # VRRP router ID; unique per VRRP instance 
    priority 90
    advert_int 1
    authentication { 
        auth_type PASS      
        auth_pass 1111 
    }  
    virtual_ipaddress { 
        192.168.40.199/24
    } 
    track_script {
        check_nginx
    } 
}


[root@xianchaomaster2 ~]# vim  /etc/keepalived/check_nginx.sh 
#!/bin/bash
#1. Check whether nginx is running
counter=$(ps -ef |grep nginx | grep sbin | egrep -cv "grep|$$" )
if [ $counter -eq 0 ]; then
    #2. If not, try to start it
    service nginx start
    sleep 2
    #3. After 2 seconds, check the nginx state again
    counter=$(ps -ef |grep nginx | grep sbin | egrep -cv "grep|$$" )
    #4. If nginx is still not running, stop keepalived so the VIP fails over
    if [ $counter -eq 0 ]; then
        service  keepalived stop
    fi
fi
[root@xianchaomaster2 ~]# chmod +x /etc/keepalived/check_nginx.sh

# Note: keepalived uses the script's exit code (0 = healthy, non-zero = unhealthy) to decide whether to fail over

4) Start the services

[root@xianchaomaster1 ~]# systemctl daemon-reload
[root@xianchaomaster1 ~]# yum install nginx-mod-stream -y
[root@xianchaomaster1 ~]# systemctl start nginx
[root@xianchaomaster1 ~]# systemctl start keepalived
[root@xianchaomaster1 ~]# systemctl enable nginx keepalived
[root@xianchaomaster1]# systemctl status keepalived

[root@xianchaomaster2 ~]# systemctl daemon-reload
[root@xianchaomaster2 ~]# yum install nginx-mod-stream -y
[root@xianchaomaster2 ~]# systemctl start nginx
[root@xianchaomaster2 ~]# systemctl start keepalived
[root@xianchaomaster2 ~]# systemctl enable nginx keepalived
[root@xianchaomaster2]# systemctl status keepalived
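
Before testing the VIP, you can confirm that the stream proxy is listening on port 16443 on both masters (a quick check, assuming ss from iproute is installed):

ss -lntp | grep 16443
# Expected: an nginx process listening on 0.0.0.0:16443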

5) Verify that the VIP is bound

[root@xianchaomaster1 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:79:9e:36 brd ff:ff:ff:ff:ff:ff
    inet 192.168.40.180/24 brd 192.168.40.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.40.199/24 scope global secondary ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::b6ef:8646:1cfc:3e0c/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever

6) Test keepalived failover

Stop keepalived on xianchaomaster1; the VIP should drift to xianchaomaster2:
[root@xianchaomaster1 ~]# service keepalived stop
[root@xianchaomaster2]# ip addr

Start nginx and keepalived again on xianchaomaster1 and the VIP drifts back:
[root@xianchaomaster1 ~]# systemctl daemon-reload
[root@xianchaomaster1 ~]# systemctl start keepalived
[root@xianchaomaster1]# ip addr

4. Initializing the Kubernetes Cluster with kubeadm

[root@xianchaomaster1 ~]# cd /root/
[root@xianchaomaster1]# vim kubeadm-config.yaml 
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.20.6
controlPlaneEndpoint: 192.168.40.199:16443
imageRepository: registry.aliyuncs.com/google_containers
apiServer:
 certSANs:             # list as many IPs as possible here, otherwise adding control-plane nodes later is harder
 - 192.168.40.180          
 - 192.168.40.181
 - 192.168.40.182
 - 192.168.40.199
networking:
  podSubnet: 10.244.0.0/16               # pod network CIDR
  serviceSubnet: 10.96.0.0/16            # service network CIDR
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind:  KubeProxyConfiguration
mode: ipvs

# Initialize the cluster with kubeadm

# Upload the offline image archive needed for initialization to xianchaomaster1, xianchaomaster2 and xianchaonode1 and load it manually (optional: without the archive the images are simply pulled over the network, which is slower):
[root@xianchaomaster1 ~]# docker load -i k8simage-1-20-6.tar.gz
[root@xianchaomaster2 ~]# docker load -i k8simage-1-20-6.tar.gz
[root@xianchaonode1 ~]# docker load -i k8simage-1-20-6.tar.gz
[root@xianchaomaster1]# kubeadm init --config kubeadm-config.yaml --ignore-preflight-errors=SystemVerification

Note: imageRepository: registry.aliyuncs.com/google_containers makes kubeadm pull images from a domestic mirror instead of the default k8s.gcr.io, which is unreachable from many networks in China. Since we loaded the offline images locally, those local images are used first anyway.
mode: ipvs sets the kube-proxy mode to IPVS. Without it, kube-proxy defaults to iptables, which is less efficient, so IPVS is recommended in production; the managed Kubernetes offerings of Alibaba Cloud and Huawei Cloud also provide an IPVS mode.
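
Once kubectl is configured (next step), the proxy mode can be double-checked; a couple of hedged verification commands:

# The kube-proxy ConfigMap should contain mode: ipvs
kubectl get configmap kube-proxy -n kube-system -o yaml | grep "mode:"
# The kube-proxy logs should mention the ipvs proxier
kubectl logs -n kube-system -l k8s-app=kube-proxy --tail=50 | grep -i ipvs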

If the output ends with "Your Kubernetes control-plane has initialized successfully!", the installation is complete.

# Set up the kubectl config file; this effectively authorizes kubectl, which uses the embedded certificate to manage the cluster
[root@xianchaomaster1 ~]# mkdir -p $HOME/.kube
[root@xianchaomaster1 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@xianchaomaster1 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@xianchaomaster1 ~]# kubectl get nodes
NAME               STATUS    ROLES                  AGE   VERSION
xianchaomaster1   NotReady   control-plane,master    60s   v1.20.6
The node is still NotReady at this point because no network plugin has been installed yet.

5. Scaling the Cluster: Adding a Master Node

# Copy the certificates from xianchaomaster1 to xianchaomaster2
Create the certificate directories on xianchaomaster2:
[root@xianchaomaster2 ~]# cd /root && mkdir -p /etc/kubernetes/pki/etcd && mkdir -p ~/.kube/
# Copy the certificates from xianchaomaster1 to xianchaomaster2:
[root@xianchaomaster1 ~]# scp /etc/kubernetes/pki/ca.crt xianchaomaster2:/etc/kubernetes/pki/
[root@xianchaomaster1 ~]# scp /etc/kubernetes/pki/ca.key xianchaomaster2:/etc/kubernetes/pki/
[root@xianchaomaster1 ~]# scp /etc/kubernetes/pki/sa.key xianchaomaster2:/etc/kubernetes/pki/
[root@xianchaomaster1 ~]# scp /etc/kubernetes/pki/sa.pub xianchaomaster2:/etc/kubernetes/pki/
[root@xianchaomaster1 ~]# scp /etc/kubernetes/pki/front-proxy-ca.crt xianchaomaster2:/etc/kubernetes/pki/
[root@xianchaomaster1 ~]# scp /etc/kubernetes/pki/front-proxy-ca.key xianchaomaster2:/etc/kubernetes/pki/
[root@xianchaomaster1 ~]# scp /etc/kubernetes/pki/etcd/ca.crt xianchaomaster2:/etc/kubernetes/pki/etcd/
[root@xianchaomaster1 ~]# scp /etc/kubernetes/pki/etcd/ca.key xianchaomaster2:/etc/kubernetes/pki/etcd/
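
The same copy can be done in one loop; a compact equivalent of the scp commands above:

cd /etc/kubernetes/pki
for f in ca.crt ca.key sa.key sa.pub front-proxy-ca.crt front-proxy-ca.key; do
  scp $f xianchaomaster2:/etc/kubernetes/pki/
done
scp etcd/ca.crt etcd/ca.key xianchaomaster2:/etc/kubernetes/pki/etcd/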


# After the certificates are copied, run the join command on xianchaomaster2 (use the one generated for your own cluster) to add it as a control-plane node.

Print the join command on xianchaomaster1:
[root@xianchaomaster1 ~]# kubeadm token create --print-join-command
The output looks like:
kubeadm join 192.168.40.199:16443 --token zwzcks.u4jd8lj56wpckcwv \
    --discovery-token-ca-cert-hash sha256:1ba1b274090feecfef58eddc2a6f45590299c1d0624618f1f429b18a064cb728 \
    --control-plane

Run it on xianchaomaster2:
[root@xianchaomaster2 ~]#kubeadm join 192.168.40.199:16443 --token zwzcks.u4jd8lj56wpckcwv \
    --discovery-token-ca-cert-hash sha256:1ba1b274090feecfef58eddc2a6f45590299c1d0624618f1f429b18a064cb728 \
    --control-plane --ignore-preflight-errors=SystemVerification

Check the cluster state on xianchaomaster1:
[root@xianchaomaster1 ~]#  kubectl get nodes
NAME              STATUS     ROLES                  AGE   VERSION
xianchaomaster1   NotReady   control-plane,master   49m   v1.20.6
xianchaomaster2   NotReady   <none>                 39s   v1.20.6
     
The output shows that xianchaomaster2 has joined the cluster.

6. Scaling the Cluster: Adding a Worker Node

Print the join command on xianchaomaster1:
[root@xianchaomaster1 ~]# kubeadm token create --print-join-command
# The output looks like:
kubeadm join 192.168.40.199:16443 --token y23a82.hurmcpzedblv34q8     --discovery-token-ca-cert-hash sha256:1ba1b274090feecfef58eddc2a6f45590299c1d0624618f1f429b18a064cb728
Join xianchaonode1 to the cluster:
[root@xianchaonode1 ~]# kubeadm join 192.168.40.199:16443 --token y23a82.hurmcpzedblv34q8     --discovery-token-ca-cert-hash sha256:1ba1b274090feecfef58eddc2a6f45590299c1d0624618f1f429b18a064cb728 --ignore-preflight-errors=SystemVerification
# If it reports "This node has joined the cluster", xianchaonode1 has joined as a worker node.

Check the cluster nodes on xianchaomaster1:
[root@xianchaomaster1 ~]# kubectl get nodes
NAME              STATUS     ROLES                  AGE     VERSION
xianchaomaster1   NotReady   control-plane,master   53m     v1.20.6
xianchaomaster2   NotReady   control-plane,master   5m13s   v1.20.6
xianchaonode1     NotReady   <none>                 59s     v1.20.6

# xianchaonode1's ROLES column is <none>, which means it is a worker node.
# You can change its ROLES to worker like this:
[root@xianchaomaster1 ~]# kubectl label node xianchaonode1 node-role.kubernetes.io/worker=worker

Note: all nodes are still NotReady because no network plugin has been installed.
[root@xianchaomaster1 ~]# kubectl get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
coredns-7f89b7bc75-lh28j                  0/1     Pending   0          18h
coredns-7f89b7bc75-p7nhj                  0/1     Pending   0          18h
etcd-xianchaomaster1                      1/1     Running   0          18h
etcd-xianchaomaster2                      1/1     Running   0          15m
kube-apiserver-xianchaomaster1            1/1     Running   0          18h
kube-apiserver-xianchaomaster2            1/1     Running   0          15m
kube-controller-manager-xianchaomaster1   1/1     Running   1          18h
kube-controller-manager-xianchaomaster2   1/1     Running   0          15m
kube-proxy-n26mf                          1/1     Running   0          4m33s
kube-proxy-sddbv                          1/1     Running   0          18h
kube-proxy-sgqm2                          1/1     Running   0          15m
kube-scheduler-xianchaomaster1            1/1     Running   1          18h
kube-scheduler-xianchaomaster2            1/1     Running   0          15m

The coredns pods are Pending because the network plugin is not installed yet; once it is installed below, they will switch to Running.

7. Installing the Calico Network Plugin

Upload calico.yaml to xianchaomaster1 and install the Calico network plugin from it:
[root@xianchaomaster1 ~]# kubectl apply -f  calico.yaml

Note: the manifest can also be downloaded online from https://docs.projectcalico.org/manifests/calico.yaml
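
To watch the Calico and CoreDNS pods come up, a couple of optional checks (the k8s-app=calico-node label is the default in the upstream manifest):

kubectl get pods -n kube-system -l k8s-app=calico-node -o wide
kubectl get pods -n kube-system -w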
[root@xianchaomaster1 ~]# kubectl get pod -n kube-system 

The coredns pods are now Running, so DNS is working.

Check the cluster state again:
[root@xianchaomaster1 ~]# kubectl get nodes
NAME              STATUS   ROLES                  AGE     VERSION
xianchaomaster1   Ready    control-plane,master   58m     v1.20.6
xianchaomaster2   Ready    control-plane,master   10m     v1.20.6
xianchaonode1     Ready    <none>                 5m46s   v1.20.6

# All nodes are Ready, so the cluster is running normally.

8. Verifying That Pods Have Network Access

# Upload busybox-1-28.tar.gz to xianchaonode1 and load it manually
[root@xianchaonode1 ~]# docker load -i busybox-1-28.tar.gz
[root@xianchaomaster1 ~]# kubectl run busybox --image busybox:1.28 --restart=Never --rm -it busybox -- sh
/ # ping www.baidu.com
PING www.baidu.com (39.156.66.18): 56 data bytes
64 bytes from 39.156.66.18: seq=0 ttl=127 time=39.3 ms
# The pod can reach the outside network, so the Calico plugin was installed correctly.

9. Verifying CoreDNS

[root@xianchaomaster1 ~]# kubectl run busybox --image busybox:1.28 --restart=Never --rm -it busybox -- sh
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes.default.svc.cluster.local
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default.svc.cluster.local
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

# Note:
# Use busybox 1.28 specifically; with the latest busybox image, nslookup fails to resolve the DNS name and IP.

10. Extending Kubernetes Certificate Lifetimes

Check the certificate validity periods:
openssl x509 -in /etc/kubernetes/pki/ca.crt -noout -text  |grep Not
The output below shows that the CA certificate is valid for 10 years, from 2025 to 2035:

Not Before: Apr 22 04:09:07 2025 GMT
Not After : Apr 20 04:09:07 2035 GMT

openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text  |grep Not
The output below shows that the apiserver certificate is valid for only 1 year, from 2025 to 2026:
Not Before: Apr 22 04:09:07 2025 GMT
Not After : Apr 22 04:09:07 2026 GMT
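
kubeadm can also print all certificate expirations in one go (the exact subcommand depends on the kubeadm release; on 1.20 it should be kubeadm certs check-expiration, on some older releases kubeadm alpha certs check-expiration):

kubeadm certs check-expiration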


Extend the certificate expiration:
Upload update-kubeadm-cert.sh to xianchaomaster1 and xianchaomaster2, then on both nodes:
1) Make update-kubeadm-cert.sh executable
[root@xianchaomaster1~]#chmod +x update-kubeadm-cert.sh
2) Run the script to extend the certificate lifetime to 10 years
[root@xianchaomaster1 ~]# ./update-kubeadm-cert.sh all

3) Make update-kubeadm-cert.sh executable on xianchaomaster2
[root@xianchaomaster2~]#chmod +x update-kubeadm-cert.sh
4) Run the script on xianchaomaster2 as well
[root@xianchaomaster2 ~]# ./update-kubeadm-cert.sh all


5) On xianchaomaster1, check that the pods can still be listed; if they can, the new certificates were issued successfully
kubectl  get pods -n kube-system

The output below shows pod information, so certificate signing is working:

......
calico-node-b5ks5                  1/1     Running   0          157m
calico-node-r6bfr                  1/1     Running   0          155m
calico-node-r8qzv                  1/1     Running   0          7h1m
coredns-66bff467f8-5vk2q           1/1     Running   0          7h
......

Verify that the certificate validity was extended to 10 years:
openssl x509 -in /etc/kubernetes/pki/ca.crt -noout -text  |grep Not
The output below shows that the CA certificate is valid for 10 years, from 2025 to 2035:
Not Before: Apr 22 04:09:07 2025 GMT
Not After : Apr 20 04:09:07 2035 GMT

openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text  |grep Not
The output below shows that the apiserver certificate is now valid for 10 years, from 2025 to 2035:

Not Before: Apr 22 11:15:53 2025 GMT
Not After : Apr 20 11:15:53 2035 GMT
## The script: update-kubeadm-cert.sh
#!/bin/bash


set -o errexit
set -o pipefail
# set -o xtrace

log::err() {
  printf "[$(date +'%Y-%m-%dT%H:%M:%S.%N%z')]: \033[31mERROR: \033[0m$@\n"
}

log::info() {
  printf "[$(date +'%Y-%m-%dT%H:%M:%S.%N%z')]: \033[32mINFO: \033[0m$@\n"
}

log::warning() {
  printf "[$(date +'%Y-%m-%dT%H:%M:%S.%N%z')]: \033[33mWARNING: \033[0m$@\n"
}

check_file() {
  if [[ ! -r  ${1} ]]; then
    log::err "can not find ${1}"
    exit 1
  fi
}

# get x509v3 subject alternative name from the old certificate
cert::get_subject_alt_name() {
  local cert=${1}.crt
  check_file "${cert}"
  local alt_name=$(openssl x509 -text -noout -in ${cert} | grep -A1 'Alternative' | tail -n1 | sed 's/[[:space:]]*Address//g')
  printf "${alt_name}\n"
}

# get subject from the old certificate
cert::get_subj() {
  local cert=${1}.crt
  check_file "${cert}"
  local subj=$(openssl x509 -text -noout -in ${cert}  | grep "Subject:" | sed 's/Subject:/\//g;s/\,/\//;s/[[:space:]]//g')
  printf "${subj}\n"
}

cert::backup_file() {
  local file=${1}
  if [[ ! -e ${file}.old-$(date +%Y%m%d) ]]; then
    cp -rp ${file} ${file}.old-$(date +%Y%m%d)
    log::info "backup ${file} to ${file}.old-$(date +%Y%m%d)"
  else
    log::warning "does not backup, ${file}.old-$(date +%Y%m%d) already exists"
  fi
}

# generate a certificate of type client, server or peer
# Args:
#   $1 (the name of certificate)
#   $2 (the type of certificate, must be one of client, server, peer)
#   $3 (the subject of certificates)
#   $4 (the validity of certificates) (days)
#   $5 (the x509v3 subject alternative name of certificate when the type of certificate is server or peer)
cert::gen_cert() {
  local cert_name=${1}
  local cert_type=${2}
  local subj=${3}
  local cert_days=${4}
  local alt_name=${5}
  local cert=${cert_name}.crt
  local key=${cert_name}.key
  local csr=${cert_name}.csr
  local csr_conf="distinguished_name = dn\n[dn]\n[v3_ext]\nkeyUsage = critical, digitalSignature, keyEncipherment\n"

  check_file "${key}"
  check_file "${cert}"

  # backup certificate when certificate not in ${kubeconf_arr[@]}
  # kubeconf_arr=("controller-manager.crt" "scheduler.crt" "admin.crt" "kubelet.crt")
  # if [[ ! "${kubeconf_arr[@]}" =~ "${cert##*/}" ]]; then
  #   cert::backup_file "${cert}"
  # fi

  case "${cert_type}" in
    client)
      openssl req -new  -key ${key} -subj "${subj}" -reqexts v3_ext \
        -config <(printf "${csr_conf} extendedKeyUsage = clientAuth\n") -out ${csr}
      openssl x509 -in ${csr} -req -CA ${CA_CERT} -CAkey ${CA_KEY} -CAcreateserial -extensions v3_ext \
        -extfile <(printf "${csr_conf} extendedKeyUsage = clientAuth\n") -days ${cert_days} -out ${cert}
      log::info "generated ${cert}"
    ;;
    server)
      openssl req -new  -key ${key} -subj "${subj}" -reqexts v3_ext \
        -config <(printf "${csr_conf} extendedKeyUsage = serverAuth\nsubjectAltName = ${alt_name}\n") -out ${csr}
      openssl x509 -in ${csr} -req -CA ${CA_CERT} -CAkey ${CA_KEY} -CAcreateserial -extensions v3_ext \
        -extfile <(printf "${csr_conf} extendedKeyUsage = serverAuth\nsubjectAltName = ${alt_name}\n") -days ${cert_days} -out ${cert}
      log::info "generated ${cert}"
    ;;
    peer)
      openssl req -new  -key ${key} -subj "${subj}" -reqexts v3_ext \
        -config <(printf "${csr_conf} extendedKeyUsage = serverAuth, clientAuth\nsubjectAltName = ${alt_name}\n") -out ${csr}
      openssl x509 -in ${csr} -req -CA ${CA_CERT} -CAkey ${CA_KEY} -CAcreateserial -extensions v3_ext \
        -extfile <(printf "${csr_conf} extendedKeyUsage = serverAuth, clientAuth\nsubjectAltName = ${alt_name}\n") -days ${cert_days} -out ${cert}
      log::info "generated ${cert}"
    ;;
    *)
      log::err "unknown, unsupported etcd certs type: ${cert_type}, supported type: client, server, peer"
      exit 1
  esac

  rm -f ${csr}
}

cert::update_kubeconf() {
  local cert_name=${1}
  local kubeconf_file=${cert_name}.conf
  local cert=${cert_name}.crt
  local key=${cert_name}.key

  # generate  certificate
  check_file ${kubeconf_file}
  # get the key from the old kubeconf
  grep "client-key-data" ${kubeconf_file} | awk {'print$2'} | base64 -d > ${key}
  # get the old certificate from the old kubeconf
  grep "client-certificate-data" ${kubeconf_file} | awk {'print$2'} | base64 -d > ${cert}
  # get subject from the old certificate
  local subj=$(cert::get_subj ${cert_name})
  cert::gen_cert "${cert_name}" "client" "${subj}" "${CAER_DAYS}"
  # get certificate base64 code
  local cert_base64=$(base64 -w 0 ${cert})

  # backup kubeconf
  # cert::backup_file "${kubeconf_file}"

  # set certificate base64 code to kubeconf
  sed -i 's/client-certificate-data:.*/client-certificate-data: '${cert_base64}'/g' ${kubeconf_file}

  log::info "generated new ${kubeconf_file}"
  rm -f ${cert}
  rm -f ${key}

  # set config for kubectl
  if [[ ${cert_name##*/} == "admin" ]]; then
    mkdir -p ~/.kube
    cp -fp ${kubeconf_file} ~/.kube/config
    log::info "copy the admin.conf to ~/.kube/config for kubectl"
  fi
}

cert::update_etcd_cert() {
  PKI_PATH=${KUBE_PATH}/pki/etcd
  CA_CERT=${PKI_PATH}/ca.crt
  CA_KEY=${PKI_PATH}/ca.key

  check_file "${CA_CERT}"
  check_file "${CA_KEY}"

  # generate etcd server certificate
  # /etc/kubernetes/pki/etcd/server
  CART_NAME=${PKI_PATH}/server
  subject_alt_name=$(cert::get_subject_alt_name ${CART_NAME})
  cert::gen_cert "${CART_NAME}" "peer" "/CN=etcd-server" "${CAER_DAYS}" "${subject_alt_name}"

  # generate etcd peer certificate
  # /etc/kubernetes/pki/etcd/peer
  CART_NAME=${PKI_PATH}/peer
  subject_alt_name=$(cert::get_subject_alt_name ${CART_NAME})
  cert::gen_cert "${CART_NAME}" "peer" "/CN=etcd-peer" "${CAER_DAYS}" "${subject_alt_name}"

  # generate etcd healthcheck-client certificate
  # /etc/kubernetes/pki/etcd/healthcheck-client
  CART_NAME=${PKI_PATH}/healthcheck-client
  cert::gen_cert "${CART_NAME}" "client" "/O=system:masters/CN=kube-etcd-healthcheck-client" "${CAER_DAYS}"

  # generate apiserver-etcd-client certificate
  # /etc/kubernetes/pki/apiserver-etcd-client
  check_file "${CA_CERT}"
  check_file "${CA_KEY}"
  PKI_PATH=${KUBE_PATH}/pki
  CART_NAME=${PKI_PATH}/apiserver-etcd-client
  cert::gen_cert "${CART_NAME}" "client" "/O=system:masters/CN=kube-apiserver-etcd-client" "${CAER_DAYS}"

  # restart etcd
  docker ps | awk '/k8s_etcd/{print$1}' | xargs -r -I '{}' docker restart {} || true
  log::info "restarted etcd"
}

cert::update_master_cert() {
  PKI_PATH=${KUBE_PATH}/pki
  CA_CERT=${PKI_PATH}/ca.crt
  CA_KEY=${PKI_PATH}/ca.key

  check_file "${CA_CERT}"
  check_file "${CA_KEY}"

  # generate apiserver server certificate
  # /etc/kubernetes/pki/apiserver
  CART_NAME=${PKI_PATH}/apiserver
  subject_alt_name=$(cert::get_subject_alt_name ${CART_NAME})
  cert::gen_cert "${CART_NAME}" "server" "/CN=kube-apiserver" "${CAER_DAYS}" "${subject_alt_name}"

  # generate apiserver-kubelet-client certificate
  # /etc/kubernetes/pki/apiserver-kubelet-client
  CART_NAME=${PKI_PATH}/apiserver-kubelet-client
  cert::gen_cert "${CART_NAME}" "client" "/O=system:masters/CN=kube-apiserver-kubelet-client" "${CAER_DAYS}"

  # generate kubeconf for controller-manager,scheduler,kubectl and kubelet
  # /etc/kubernetes/controller-manager,scheduler,admin,kubelet.conf
  cert::update_kubeconf "${KUBE_PATH}/controller-manager"
  cert::update_kubeconf "${KUBE_PATH}/scheduler"
  cert::update_kubeconf "${KUBE_PATH}/admin"
  # check kubelet.conf
  # https://github.com/kubernetes/kubeadm/issues/1753
  set +e
  grep kubelet-client-current.pem /etc/kubernetes/kubelet.conf > /dev/null 2>&1
  kubelet_cert_auto_update=$?
  set -e
  if [[ "$kubelet_cert_auto_update" == "0" ]]; then
    log::warning "does not need to update kubelet.conf"
  else
    cert::update_kubeconf "${KUBE_PATH}/kubelet"
  fi

  # generate front-proxy-client certificate
  # use front-proxy-client ca
  CA_CERT=${PKI_PATH}/front-proxy-ca.crt
  CA_KEY=${PKI_PATH}/front-proxy-ca.key
  check_file "${CA_CERT}"
  check_file "${CA_KEY}"
  CART_NAME=${PKI_PATH}/front-proxy-client
  cert::gen_cert "${CART_NAME}" "client" "/CN=front-proxy-client" "${CAER_DAYS}"

  # restart apiserver, controller-manager, scheduler and kubelet
  docker ps | awk '/k8s_kube-apiserver/{print$1}' | xargs -r -I '{}' docker restart {} || true
  log::info "restarted kube-apiserver"
  docker ps | awk '/k8s_kube-controller-manager/{print$1}' | xargs -r -I '{}' docker restart {} || true
  log::info "restarted kube-controller-manager"
  docker ps | awk '/k8s_kube-scheduler/{print$1}' | xargs -r -I '{}' docker restart {} || true
  log::info "restarted kube-scheduler"
  systemctl restart kubelet
  log::info "restarted kubelet"
}

main() {
  local node_tpye=$1
  
  KUBE_PATH=/etc/kubernetes
  CAER_DAYS=3650

  # backup $KUBE_PATH to $KUBE_PATH.old-$(date +%Y%m%d)
  cert::backup_file "${KUBE_PATH}"

  case ${node_tpye} in
    etcd)
	  # update etcd certificates
      cert::update_etcd_cert
    ;;
    master)
	  # update master certificates and kubeconf
      cert::update_master_cert
    ;;
    all)
      # update etcd certificates
      cert::update_etcd_cert
      # update master certificates and kubeconf
      cert::update_master_cert
    ;;
    *)
      log::err "unknown, unsupported certs type: ${cert_type}, supported type: all, etcd, master"
      printf "Documentation: https://github.com/yuyicai/update-kube-cert
  example:
    '\033[32m./update-kubeadm-cert.sh all\033[0m' update all etcd certificates, master certificates and kubeconf
      /etc/kubernetes
      ├── admin.conf
      ├── controller-manager.conf
      ├── scheduler.conf
      ├── kubelet.conf
      └── pki
          ├── apiserver.crt
          ├── apiserver-etcd-client.crt
          ├── apiserver-kubelet-client.crt
          ├── front-proxy-client.crt
          └── etcd
              ├── healthcheck-client.crt
              ├── peer.crt
              └── server.crt

    '\033[32m./update-kubeadm-cert.sh etcd\033[0m' update only etcd certificates
      /etc/kubernetes
      └── pki
          ├── apiserver-etcd-client.crt
          └── etcd
              ├── healthcheck-client.crt
              ├── peer.crt
              └── server.crt

    '\033[32m./update-kubeadm-cert.sh master\033[0m' update only master certificates and kubeconf
      /etc/kubernetes
      ├── admin.conf
      ├── controller-manager.conf
      ├── scheduler.conf
      ├── kubelet.conf
      └── pki
          ├── apiserver.crt
          ├── apiserver-kubelet-client.crt
          └── front-proxy-client.crt
"
      exit 1
    esac
}

main "$@"

