Kubernetes: Deploying a Highly Available k8s-v1.18.20 + etcd-v3.3.10 + flannel-v0.10.0 Cluster from Binaries on CentOS 8.0


1. Resource Planning:

Hostname       IP address      Spec   Role               OS version
k8s-master01   10.100.12.168   2C2G   master/Work/etcd   CentOS 8.0
k8s-master02   10.100.10.200   2C2G   master/Work/etcd   CentOS 8.0
k8s-master-lb  10.100.10.103   -      k8s-master-lb      CentOS 8.0
k8s-node01     10.100.15.246   2C4G   Work/etcd          CentOS 8.0
k8s-node02     10.100.10.195   2C4G   Work               CentOS 8.0

2. Environment Initialization:

Perform the following initialization steps on all hosts.

2.1 Disable firewalld and other conflicting services on all hosts:

systemctl disable --now firewalld
systemctl disable --now dnsmasq
systemctl disable --now NetworkManager
systemctl disable --now iptables

2.2 Disable swap:

swapoff -a 
sed -i 's/.*swap.*/#&/' /etc/fstab

2.3 Disable SELinux:

setenforce  0
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux

2.4 Set the hostname on each host according to the plan:

hostnamectl set-hostname <hostname>

2.5 Add local hosts entries:

cat >> /etc/hosts << EOF
10.100.12.168 k8s-master01
10.100.10.200 k8s-master02
10.100.10.103 k8s-master-lb
10.100.15.246 k8s-node01
10.100.10.195 k8s-node02
EOF

2.6 Pass bridged IPv4 traffic to the iptables chains:

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system  # apply the settings
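Note: the net.bridge.* keys only exist once the br_netfilter kernel module is loaded, so sysctl --system may otherwise complain that the keys are missing. A minimal sketch to load the module now and on every boot:

modprobe br_netfilter
cat > /etc/modules-load.d/br_netfilter.conf << EOF
br_netfilter
EOF
lsmod | grep br_netfilter   # confirm the module is loaded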

2.7 Time synchronization:

yum install chrony -y
systemctl restart chronyd.service
systemctl enable --now chronyd.service
chronyc -a makestep
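To confirm that chrony is actually synchronizing, you can inspect its sources and tracking state, for example:

chronyc sources -v
chronyc tracking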

2.8 Check the current OS version:

cat /etc/redhat-release 
CentOS Linux release 8.0.1905 (Core) 

2.9 Check the current kernel version:

uname -r
4.18.0-80.el8.x86_64

2.10 Use the ELRepo repository:

  This guide uses the ELRepo repository. ELRepo is a community-based repository for Enterprise Linux that supports Red Hat Enterprise Linux (RHEL) and other RHEL-based distributions (CentOS, Scientific Linux, Fedora, etc.). ELRepo focuses on hardware-related packages, including filesystem drivers, graphics drivers, network drivers, sound drivers, and webcam drivers. Website: http://elrepo.org/tiki/tiki-index.php

  2.10.1 Import the ELRepo repository public key:

rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org

  2.10.2 Install the ELRepo yum repository:

yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm -y

2.11 List the kernel packages currently available

[root@k8s-master01 ~]# yum --disablerepo="*" --enablerepo="elrepo-kernel" list available
ELRepo.org Community Enterprise Linux Kernel Repository - el8                                          130 kB/s | 2.0 MB     00:15    
Last metadata expiration check: 0:00:11 ago on Tue 16 Nov 2021 06:57:29 PM CST.
Available Packages
bpftool.x86_64                                                     5.15.2-1.el8.elrepo                                    elrepo-kernel
kernel-lt.x86_64                                                   5.4.159-1.el8.elrepo                                   elrepo-kernel
kernel-lt-core.x86_64                                              5.4.159-1.el8.elrepo                                   elrepo-kernel
kernel-lt-devel.x86_64                                             5.4.159-1.el8.elrepo                                   elrepo-kernel
kernel-lt-doc.noarch                                               5.4.159-1.el8.elrepo                                   elrepo-kernel
kernel-lt-headers.x86_64                                           5.4.159-1.el8.elrepo                                   elrepo-kernel
kernel-lt-modules.x86_64                                           5.4.159-1.el8.elrepo                                   elrepo-kernel
kernel-lt-modules-extra.x86_64                                     5.4.159-1.el8.elrepo                                   elrepo-kernel
kernel-lt-tools.x86_64                                             5.4.159-1.el8.elrepo                                   elrepo-kernel
kernel-lt-tools-libs.x86_64                                        5.4.159-1.el8.elrepo                                   elrepo-kernel
kernel-lt-tools-libs-devel.x86_64                                  5.4.159-1.el8.elrepo                                   elrepo-kernel
kernel-ml.x86_64                                                   5.15.2-1.el8.elrepo                                    elrepo-kernel
kernel-ml-core.x86_64                                              5.15.2-1.el8.elrepo                                    elrepo-kernel
kernel-ml-devel.x86_64                                             5.15.2-1.el8.elrepo                                    elrepo-kernel
kernel-ml-doc.noarch                                               5.15.2-1.el8.elrepo                                    elrepo-kernel
kernel-ml-headers.x86_64                                           5.15.2-1.el8.elrepo                                    elrepo-kernel
kernel-ml-modules.x86_64                                           5.15.2-1.el8.elrepo                                    elrepo-kernel
kernel-ml-modules-extra.x86_64                                     5.15.2-1.el8.elrepo                                    elrepo-kernel
kernel-ml-tools.x86_64                                             5.15.2-1.el8.elrepo                                    elrepo-kernel
kernel-ml-tools-libs.x86_64                                        5.15.2-1.el8.elrepo                                    elrepo-kernel
kernel-ml-tools-libs-devel.x86_64                                  5.15.2-1.el8.elrepo                                    elrepo-kernel
perf.x86_64                                                        5.15.2-1.el8.elrepo                                    elrepo-kernel
python3-perf.x86_64                                                5.15.2-1.el8.elrepo                                    elrepo-kernel
[root@k8s-master01 ~]# 

2.12 Install the latest mainline kernel:

yum --enablerepo=elrepo-kernel install kernel-ml -y

2.13 Set the system to boot the new kernel:

  Index 0 refers to the most recently installed kernel, so setting the default to 0 boots the new kernel:

grub2-set-default 0
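If you want to double-check which boot entry index 0 points at, grubby can list the entries (a quick sanity check; the exact output format may vary between grubby versions):

grubby --info=ALL | grep -E '^(index|title)'
grubby --default-kernel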

2.14 Regenerate the GRUB configuration and reboot:

grub2-mkconfig -o /boot/grub2/grub.cfg
reboot
Error encountered:
[root@k8s-master01 ~]# grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
/usr/bin/grub2-editenv: error: environment block too small.
-------
Workaround:
[root@k8s-master01 ~]# mv /boot/grub2/grubenv /home/bak
[root@k8s-master01 ~]# grub2-editenv /boot/grub2/grubenv create
[root@k8s-master01 ~]# yum --enablerepo=elrepo-kernel install kernel-ml
Last metadata expiration check: 0:06:48 ago on Tue 16 Nov 2021 06:58:49 PM CST.
Package kernel-ml-5.15.2-1.el8.elrepo.x86_64 is already installed.
Dependencies resolved.
Nothing to do.
Complete!
[root@k8s-master01 ~]# grub2-set-default 0
[root@k8s-master01 ~]# grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
done
[root@k8s-master01 ~]# 

2.15 Verify the new kernel:

Old kernel version: 4.18.0-80.el8.x86_64
New kernel version: 5.15.2-1.el8.elrepo.x86_64

2.16 List the kernels installed on the system:

[root@k8s-master01 ~]# rpm -qa | grep kernel
kernel-ml-modules-5.15.2-1.el8.elrepo.x86_64
kernel-core-4.18.0-80.el8.x86_64
kernel-modules-4.18.0-80.el8.x86_64
kernel-tools-libs-4.18.0-80.el8.x86_64
kernel-4.18.0-80.el8.x86_64
kernel-ml-core-5.15.2-1.el8.elrepo.x86_64
kernel-ml-5.15.2-1.el8.elrepo.x86_64
kernel-tools-4.18.0-80.el8.x86_64
[root@k8s-master01 ~]# 

2.17 Remove the old kernel

[root@k8s-master01 ~]# yum remove -y kernel-core-4.18.0 kernel-devel-4.18.0 kernel-tools-libs-4.18.0 kernel-headers-4.18.0
No match for argument: kernel-devel-4.18.0
No match for argument: kernel-headers-4.18.0
Dependencies resolved.
=======================================================================================================================================
 Package                              Arch                      Version                             Repository                    Size
=======================================================================================================================================
Removing:
 kernel-core                          x86_64                    4.18.0-80.el8                       @anaconda                     57 M
 kernel-tools-libs                    x86_64                    4.18.0-80.el8                       @anaconda                     24 k
Removing dependent packages:
 kernel                               x86_64                    4.18.0-80.el8                       @anaconda                      0  
 kernel-tools                         x86_64                    4.18.0-80.el8                       @anaconda                    509 k
Removing unused dependencies:
 kernel-modules                       x86_64                    4.18.0-80.el8                       @anaconda                     19 M

Transaction Summary
=======================================================================================================================================
Remove  5 Packages

Freed space: 77 M
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                               1/1 
  Running scriptlet: kernel-4.18.0-80.el8.x86_64                                                                                   1/1 
  Erasing          : kernel-4.18.0-80.el8.x86_64                                                                                   1/5 
  Running scriptlet: kernel-4.18.0-80.el8.x86_64                                                                                   1/5 
  Erasing          : kernel-tools-4.18.0-80.el8.x86_64                                                                             2/5 
  Running scriptlet: kernel-tools-4.18.0-80.el8.x86_64                                                                             2/5 
  Erasing          : kernel-modules-4.18.0-80.el8.x86_64                                                                           3/5 
  Running scriptlet: kernel-modules-4.18.0-80.el8.x86_64                                                                           3/5 
  Running scriptlet: kernel-core-4.18.0-80.el8.x86_64                                                                              4/5 
  Erasing          : kernel-core-4.18.0-80.el8.x86_64                                                                              4/5 
  Running scriptlet: kernel-core-4.18.0-80.el8.x86_64                                                                              4/5 
  Erasing          : kernel-tools-libs-4.18.0-80.el8.x86_64                                                                        5/5 
  Running scriptlet: kernel-tools-libs-4.18.0-80.el8.x86_64                                                                        5/5 
  Verifying        : kernel-4.18.0-80.el8.x86_64                                                                                   1/5 
  Verifying        : kernel-core-4.18.0-80.el8.x86_64                                                                              2/5 
  Verifying        : kernel-modules-4.18.0-80.el8.x86_64                                                                           3/5 
  Verifying        : kernel-tools-4.18.0-80.el8.x86_64                                                                             4/5 
  Verifying        : kernel-tools-libs-4.18.0-80.el8.x86_64                                                                        5/5 

Removed:
  kernel-core-4.18.0-80.el8.x86_64             kernel-tools-libs-4.18.0-80.el8.x86_64            kernel-4.18.0-80.el8.x86_64           
  kernel-tools-4.18.0-80.el8.x86_64            kernel-modules-4.18.0-80.el8.x86_64              

Complete!

2.18 List the installed kernels again

[root@k8s-master01 ~]# rpm -qa | grep kernel
kernel-ml-modules-5.15.2-1.el8.elrepo.x86_64
kernel-ml-core-5.15.2-1.el8.elrepo.x86_64
kernel-ml-5.15.2-1.el8.elrepo.x86_64

  You can also install the yum-utils tool; when more than three kernels are installed on the system, the older kernel versions will be removed automatically:

[root@k8s-master01 ~]# yum install yum-utils -y
Last metadata expiration check: 0:17:20 ago on Tue 16 Nov 2021 06:58:53 PM CST.
Dependencies resolved.
=======================================================================================================================================
 Package                                     Arch                    Version                             Repository               Size
=======================================================================================================================================
Installing:
 yum-utils                                   noarch                  4.0.18-4.el8                        BaseOS                   71 k
Upgrading:
 dnf                                         noarch                  4.4.2-11.el8                        BaseOS                  539 k
 dnf-data                                    noarch                  4.4.2-11.el8                        BaseOS                  152 k
 dnf-plugins-core                            noarch                  4.0.18-4.el8                        BaseOS                   69 k
 elfutils-libelf                             x86_64                  0.182-3.el8                         BaseOS                  216 k
 elfutils-libs                               x86_64                  0.182-3.el8                         BaseOS                  293 k
 ima-evm-utils                               x86_64                  1.3.2-12.el8                        BaseOS                   64 k
 libdnf                                      x86_64                  0.55.0-7.el8                        BaseOS                  681 k
 librepo                                     x86_64                  1.12.0-3.el8                        BaseOS                   91 k
 libsolv                                     x86_64                  0.7.16-3.el8_4                      BaseOS                  363 k
 python3-dnf                                 noarch                  4.4.2-11.el8                        BaseOS                  541 k
 python3-dnf-plugins-core                    noarch                  4.0.18-4.el8                        BaseOS                  234 k
 python3-hawkey                              x86_64                  0.55.0-7.el8                        BaseOS                  114 k
 python3-libdnf                              x86_64                  0.55.0-7.el8                        BaseOS                  769 k
 python3-librepo                             x86_64                  1.12.0-3.el8                        BaseOS                   52 k
 python3-rpm                                 x86_64                  4.14.3-14.el8_4                     BaseOS                  158 k
 rpm                                         x86_64                  4.14.3-14.el8_4                     BaseOS                  542 k
 rpm-build-libs                              x86_64                  4.14.3-14.el8_4                     BaseOS                  155 k
 rpm-libs                                    x86_64                  4.14.3-14.el8_4                     BaseOS                  339 k
 rpm-plugin-selinux                          x86_64                  4.14.3-14.el8_4                     BaseOS                   76 k
 rpm-plugin-systemd-inhibit                  x86_64                  4.14.3-14.el8_4                     BaseOS                   77 k
 tpm2-tss                                    x86_64                  2.3.2-3.el8                         BaseOS                  275 k
 yum                                         noarch                  4.4.2-11.el8                        BaseOS                  198 k
Installing dependencies:
 libmodulemd                                 x86_64                  2.9.4-2.el8                         BaseOS                  189 k
 libzstd                                     x86_64                  1.4.4-1.el8                         BaseOS                  266 k
Installing weak dependencies:
 elfutils-debuginfod-client                  x86_64                  0.182-3.el8                         BaseOS                   65 k

Transaction Summary
=======================================================================================================================================
Install   4 Packages
Upgrade  22 Packages

Total download size: 6.4 M
Downloading Packages:
(1/26): elfutils-debuginfod-client-0.182-3.el8.x86_64.rpm                                              8.9 MB/s |  65 kB     00:00    
(2/26): libzstd-1.4.4-1.el8.x86_64.rpm                                                                  27 MB/s | 266 kB     00:00    
(3/26): libmodulemd-2.9.4-2.el8.x86_64.rpm                                                              13 MB/s | 189 kB     00:00    
(4/26): yum-utils-4.0.18-4.el8.noarch.rpm                                                              8.3 MB/s |  71 kB     00:00    
(5/26): dnf-plugins-core-4.0.18-4.el8.noarch.rpm                                                       8.4 MB/s |  69 kB     00:00    
(6/26): dnf-data-4.4.2-11.el8.noarch.rpm                                                                13 MB/s | 152 kB     00:00    
(7/26): dnf-4.4.2-11.el8.noarch.rpm                                                                     25 MB/s | 539 kB     00:00    
(8/26): elfutils-libelf-0.182-3.el8.x86_64.rpm                                                          20 MB/s | 216 kB     00:00    
(9/26): ima-evm-utils-1.3.2-12.el8.x86_64.rpm                                                           14 MB/s |  64 kB     00:00    
(10/26): elfutils-libs-0.182-3.el8.x86_64.rpm                                                           19 MB/s | 293 kB     00:00    
(11/26): librepo-1.12.0-3.el8.x86_64.rpm                                                                12 MB/s |  91 kB     00:00    
(12/26): python3-dnf-4.4.2-11.el8.noarch.rpm                                                            34 MB/s | 541 kB     00:00    
(13/26): libsolv-0.7.16-3.el8_4.x86_64.rpm                                                              16 MB/s | 363 kB     00:00    
(14/26): libdnf-0.55.0-7.el8.x86_64.rpm                                                                 20 MB/s | 681 kB     00:00    
(15/26): python3-hawkey-0.55.0-7.el8.x86_64.rpm                                                         20 MB/s | 114 kB     00:00    
(16/26): python3-dnf-plugins-core-4.0.18-4.el8.noarch.rpm                                               18 MB/s | 234 kB     00:00    
(17/26): python3-librepo-1.12.0-3.el8.x86_64.rpm                                                        15 MB/s |  52 kB     00:00    
(18/26): python3-rpm-4.14.3-14.el8_4.x86_64.rpm                                                         27 MB/s | 158 kB     00:00    
(19/26): rpm-build-libs-4.14.3-14.el8_4.x86_64.rpm                                                      32 MB/s | 155 kB     00:00    
(20/26): rpm-libs-4.14.3-14.el8_4.x86_64.rpm                                                            37 MB/s | 339 kB     00:00    
(21/26): python3-libdnf-0.55.0-7.el8.x86_64.rpm                                                         27 MB/s | 769 kB     00:00    
(22/26): rpm-plugin-selinux-4.14.3-14.el8_4.x86_64.rpm                                                  16 MB/s |  76 kB     00:00    
(23/26): rpm-4.14.3-14.el8_4.x86_64.rpm                                                                 20 MB/s | 542 kB     00:00    
(24/26): rpm-plugin-systemd-inhibit-4.14.3-14.el8_4.x86_64.rpm                                          14 MB/s |  77 kB     00:00    
(25/26): tpm2-tss-2.3.2-3.el8.x86_64.rpm                                                                24 MB/s | 275 kB     00:00    
(26/26): yum-4.4.2-11.el8.noarch.rpm                                                                    20 MB/s | 198 kB     00:00    
---------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                   55 MB/s | 6.4 MB     00:00     
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                               1/1 
  Running scriptlet: elfutils-libelf-0.182-3.el8.x86_64                                                                            1/1 
  Upgrading        : elfutils-libelf-0.182-3.el8.x86_64                                                                           1/48 
  Installing       : libzstd-1.4.4-1.el8.x86_64                                                                                   2/48 
  Upgrading        : librepo-1.12.0-3.el8.x86_64                                                                                  3/48 
  Upgrading        : elfutils-libs-0.182-3.el8.x86_64                                                                             4/48 
  Installing       : elfutils-debuginfod-client-0.182-3.el8.x86_64                                                                5/48 
  Upgrading        : rpm-4.14.3-14.el8_4.x86_64                                                                                   6/48 
  Upgrading        : rpm-libs-4.14.3-14.el8_4.x86_64                                                                              7/48 
  Running scriptlet: rpm-libs-4.14.3-14.el8_4.x86_64                                                                              7/48 
  Installing       : libmodulemd-2.9.4-2.el8.x86_64                                                                               8/48 
  Upgrading        : libsolv-0.7.16-3.el8_4.x86_64                                                                                9/48 
  Upgrading        : libdnf-0.55.0-7.el8.x86_64                                                                                  10/48 
  Upgrading        : python3-libdnf-0.55.0-7.el8.x86_64                                                                          11/48 
  Upgrading        : python3-hawkey-0.55.0-7.el8.x86_64                                                                          12/48 
  Upgrading        : rpm-plugin-systemd-inhibit-4.14.3-14.el8_4.x86_64                                                           13/48 
  Running scriptlet: tpm2-tss-2.3.2-3.el8.x86_64                                                                                 14/48 
  Upgrading        : tpm2-tss-2.3.2-3.el8.x86_64                                                                                 14/48 
  Running scriptlet: tpm2-tss-2.3.2-3.el8.x86_64                                                                                 14/48 
  Upgrading        : ima-evm-utils-1.3.2-12.el8.x86_64                                                                           15/48 
  Upgrading        : rpm-build-libs-4.14.3-14.el8_4.x86_64                                                                       16/48 
  Running scriptlet: rpm-build-libs-4.14.3-14.el8_4.x86_64                                                                       16/48 
  Upgrading        : python3-rpm-4.14.3-14.el8_4.x86_64                                                                          17/48 
  Upgrading        : dnf-data-4.4.2-11.el8.noarch                                                                                18/48 
warning: /etc/dnf/dnf.conf created as /etc/dnf/dnf.conf.rpmnew

  Upgrading        : python3-dnf-4.4.2-11.el8.noarch                                                                             19/48 
  Upgrading        : dnf-4.4.2-11.el8.noarch                                                                                     20/48 
  Running scriptlet: dnf-4.4.2-11.el8.noarch                                                                                     20/48 
  Upgrading        : python3-dnf-plugins-core-4.0.18-4.el8.noarch                                                                21/48 
  Upgrading        : dnf-plugins-core-4.0.18-4.el8.noarch                                                                        22/48 
  Installing       : yum-utils-4.0.18-4.el8.noarch                                                                               23/48 
  Upgrading        : yum-4.4.2-11.el8.noarch                                                                                     24/48 
  Upgrading        : rpm-plugin-selinux-4.14.3-14.el8_4.x86_64                                                                   25/48 
  Upgrading        : python3-librepo-1.12.0-3.el8.x86_64                                                                         26/48 
  Cleanup          : yum-4.0.9.2-5.el8.noarch                                                                                    27/48 
  Running scriptlet: dnf-4.0.9.2-5.el8.noarch                                                                                    28/48 
  Cleanup          : dnf-4.0.9.2-5.el8.noarch                                                                                    28/48 
  Running scriptlet: dnf-4.0.9.2-5.el8.noarch                                                                                    28/48 
  Cleanup          : dnf-plugins-core-4.0.2.2-3.el8.noarch                                                                       29/48 
  Cleanup          : rpm-plugin-selinux-4.14.2-9.el8.x86_64                                                                      30/48 
  Cleanup          : python3-dnf-plugins-core-4.0.2.2-3.el8.noarch                                                               31/48 
  Cleanup          : python3-dnf-4.0.9.2-5.el8.noarch                                                                            32/48 
  Cleanup          : python3-hawkey-0.22.5-4.el8.x86_64                                                                          33/48 
  Cleanup          : python3-rpm-4.14.2-9.el8.x86_64                                                                             34/48 
  Cleanup          : rpm-build-libs-4.14.2-9.el8.x86_64                                                                          35/48 
  Running scriptlet: rpm-build-libs-4.14.2-9.el8.x86_64                                                                          35/48 
  Cleanup          : elfutils-libs-0.174-6.el8.x86_64                                                                            36/48 
  Cleanup          : python3-libdnf-0.22.5-4.el8.x86_64                                                                          37/48 
  Cleanup          : libdnf-0.22.5-4.el8.x86_64                                                                                  38/48 
  Cleanup          : rpm-plugin-systemd-inhibit-4.14.2-9.el8.x86_64                                                              39/48 
  Cleanup          : libsolv-0.6.35-6.el8.x86_64                                                                                 40/48 
  Cleanup          : rpm-4.14.2-9.el8.x86_64                                                                                     41/48 
  Cleanup          : rpm-libs-4.14.2-9.el8.x86_64                                                                                42/48 
  Running scriptlet: rpm-libs-4.14.2-9.el8.x86_64                                                                                42/48 
  Cleanup          : python3-librepo-1.9.2-1.el8.x86_64                                                                          43/48 
  Cleanup          : dnf-data-4.0.9.2-5.el8.noarch                                                                               44/48 
  Cleanup          : librepo-1.9.2-1.el8.x86_64                                                                                  45/48 
  Cleanup          : elfutils-libelf-0.174-6.el8.x86_64                                                                          46/48 
  Cleanup          : ima-evm-utils-1.1-4.el8.x86_64                                                                              47/48 
  Cleanup          : tpm2-tss-2.0.0-4.el8.x86_64                                                                                 48/48 
  Running scriptlet: tpm2-tss-2.0.0-4.el8.x86_64                                                                                 48/48 
  Verifying        : elfutils-debuginfod-client-0.182-3.el8.x86_64                                                                1/48 
  Verifying        : libmodulemd-2.9.4-2.el8.x86_64                                                                               2/48 
  Verifying        : libzstd-1.4.4-1.el8.x86_64                                                                                   3/48 
  Verifying        : yum-utils-4.0.18-4.el8.noarch                                                                                4/48 
  Verifying        : dnf-4.4.2-11.el8.noarch                                                                                      5/48 
  Verifying        : dnf-4.0.9.2-5.el8.noarch                                                                                     6/48 
  Verifying        : dnf-data-4.4.2-11.el8.noarch                                                                                 7/48 
  Verifying        : dnf-data-4.0.9.2-5.el8.noarch                                                                                8/48 
  Verifying        : dnf-plugins-core-4.0.18-4.el8.noarch                                                                         9/48 
  Verifying        : dnf-plugins-core-4.0.2.2-3.el8.noarch                                                                       10/48 
  Verifying        : elfutils-libelf-0.182-3.el8.x86_64                                                                          11/48 
  Verifying        : elfutils-libelf-0.174-6.el8.x86_64                                                                          12/48 
  Verifying        : elfutils-libs-0.182-3.el8.x86_64                                                                            13/48 
  Verifying        : elfutils-libs-0.174-6.el8.x86_64                                                                            14/48 
  Verifying        : ima-evm-utils-1.3.2-12.el8.x86_64                                                                           15/48 
  Verifying        : ima-evm-utils-1.1-4.el8.x86_64                                                                              16/48 
  Verifying        : libdnf-0.55.0-7.el8.x86_64                                                                                  17/48 
  Verifying        : libdnf-0.22.5-4.el8.x86_64                                                                                  18/48 
  Verifying        : librepo-1.12.0-3.el8.x86_64                                                                                 19/48 
  Verifying        : librepo-1.9.2-1.el8.x86_64                                                                                  20/48 
  Verifying        : libsolv-0.7.16-3.el8_4.x86_64                                                                               21/48 
  Verifying        : libsolv-0.6.35-6.el8.x86_64                                                                                 22/48 
  Verifying        : python3-dnf-4.4.2-11.el8.noarch                                                                             23/48 
  Verifying        : python3-dnf-4.0.9.2-5.el8.noarch                                                                            24/48 
  Verifying        : python3-dnf-plugins-core-4.0.18-4.el8.noarch                                                                25/48 
  Verifying        : python3-dnf-plugins-core-4.0.2.2-3.el8.noarch                                                               26/48 
  Verifying        : python3-hawkey-0.55.0-7.el8.x86_64                                                                          27/48 
  Verifying        : python3-hawkey-0.22.5-4.el8.x86_64                                                                          28/48 
  Verifying        : python3-libdnf-0.55.0-7.el8.x86_64                                                                          29/48 
  Verifying        : python3-libdnf-0.22.5-4.el8.x86_64                                                                          30/48 
  Verifying        : python3-librepo-1.12.0-3.el8.x86_64                                                                         31/48 
  Verifying        : python3-librepo-1.9.2-1.el8.x86_64                                                                          32/48 
  Verifying        : python3-rpm-4.14.3-14.el8_4.x86_64                                                                          33/48 
  Verifying        : python3-rpm-4.14.2-9.el8.x86_64                                                                             34/48 
  Verifying        : rpm-4.14.3-14.el8_4.x86_64                                                                                  35/48 
  Verifying        : rpm-4.14.2-9.el8.x86_64                                                                                     36/48 
  Verifying        : rpm-build-libs-4.14.3-14.el8_4.x86_64                                                                       37/48 
  Verifying        : rpm-build-libs-4.14.2-9.el8.x86_64                                                                          38/48 
  Verifying        : rpm-libs-4.14.3-14.el8_4.x86_64                                                                             39/48 
  Verifying        : rpm-libs-4.14.2-9.el8.x86_64                                                                                40/48 
  Verifying        : rpm-plugin-selinux-4.14.3-14.el8_4.x86_64                                                                   41/48 
  Verifying        : rpm-plugin-selinux-4.14.2-9.el8.x86_64                                                                      42/48 
  Verifying        : rpm-plugin-systemd-inhibit-4.14.3-14.el8_4.x86_64                                                           43/48 
  Verifying        : rpm-plugin-systemd-inhibit-4.14.2-9.el8.x86_64                                                              44/48 
  Verifying        : tpm2-tss-2.3.2-3.el8.x86_64                                                                                 45/48 
  Verifying        : tpm2-tss-2.0.0-4.el8.x86_64                                                                                 46/48 
  Verifying        : yum-4.4.2-11.el8.noarch                                                                                     47/48 
  Verifying        : yum-4.0.9.2-5.el8.noarch                                                                                    48/48 

Upgraded:
  dnf-4.4.2-11.el8.noarch                    dnf-data-4.4.2-11.el8.noarch                       dnf-plugins-core-4.0.18-4.el8.noarch 
  elfutils-libelf-0.182-3.el8.x86_64         elfutils-libs-0.182-3.el8.x86_64                   ima-evm-utils-1.3.2-12.el8.x86_64    
  libdnf-0.55.0-7.el8.x86_64                 librepo-1.12.0-3.el8.x86_64                        libsolv-0.7.16-3.el8_4.x86_64        
  python3-dnf-4.4.2-11.el8.noarch            python3-dnf-plugins-core-4.0.18-4.el8.noarch       python3-hawkey-0.55.0-7.el8.x86_64   
  python3-libdnf-0.55.0-7.el8.x86_64         python3-librepo-1.12.0-3.el8.x86_64                python3-rpm-4.14.3-14.el8_4.x86_64   
  rpm-4.14.3-14.el8_4.x86_64                 rpm-build-libs-4.14.3-14.el8_4.x86_64              rpm-libs-4.14.3-14.el8_4.x86_64      
  rpm-plugin-selinux-4.14.3-14.el8_4.x86_64  rpm-plugin-systemd-inhibit-4.14.3-14.el8_4.x86_64  tpm2-tss-2.3.2-3.el8.x86_64          
  yum-4.4.2-11.el8.noarch                   

Installed:
  yum-utils-4.0.18-4.el8.noarch          elfutils-debuginfod-client-0.182-3.el8.x86_64          libmodulemd-2.9.4-2.el8.x86_64         
  libzstd-1.4.4-1.el8.x86_64            

Complete!
[root@k8s-master01 ~]# 

2.19 Configure ulimit parameters

echo "* soft nofile 655360" >> /etc/security/limits.conf
echo "* hard nofile 655360" >> /etc/security/limits.conf
echo "* soft nproc 655360" >> /etc/security/limits.conf
echo "* hard nproc 655360" >> /etc/security/limits.conf
echo "* soft memlock unlimited" >> /etc/security/limits.conf
echo "* hard memlock unlimited" >> /etc/security/limits.conf
echo "DefaultLimitNOFILE=1024000" >> /etc/systemd/system.conf
echo "DefaultLimitNPROC=1024000" >> /etc/systemd/system.conf

3. Deploy the etcd Cluster:

  etcd is a distributed key-value store, and Kubernetes uses etcd for data storage, so an etcd database must be prepared first. To avoid a single point of failure, etcd should be deployed as a cluster. Here we use a 3-node cluster, which tolerates the failure of 1 machine; a 5-node cluster would tolerate 2 failures.

3.1 Prepare the cfssl certificate generation tool

  This can be done on any one host; here we use k8s-master01.

  cfssl is an open-source certificate management tool that generates certificates from JSON files and is more convenient to use than openssl.

wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl-certinfo_1.6.1_linux_amd64
chmod a+x cfssl*
mv cfssl_1.6.1_linux_amd64 /usr/local/bin/cfssl
mv cfssljson_1.6.1_linux_amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_1.6.1_linux_amd64 /usr/local/bin/cfssl-certinfo
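A quick sanity check that the three tools are on the PATH and executable:

command -v cfssl cfssljson cfssl-certinfo
cfssl version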

3.2 Generate the etcd certificates

    3.2.1 Create a self-signed CA certificate

   3.2.1.1 Create the working directories
mkdir -pv /root/certs/{etcd,k8s}
cd /root/certs/etcd
  3.2.1.2 Generate the default CA config and CSR templates
cfssl print-defaults config > ca-config.json
cfssl print-defaults csr > ca-csr.json
   3.2.1.3 Create the CA configuration file

Configure the certificate signing policy, which defines what types of certificates this CA can issue.

Modify the CA configuration file ca-config.json to set the certificate validity to 10 years (the default is 1 year).

cat >ca-config.json<<EOF
{
    "signing": {
        "default": {
            "expiry": "87600h"
        },
        "profiles": {
            "kubernetes": {
                "expiry": "87600h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}
EOF
   3.2.1.4 Define the CSR for the etcd self-signed root certificate
cat >ca-csr.json<<EOF
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Shanghai",
            "ST": "Shanghai"
        }
    ]
}
EOF
   3.2.1.5 Generate the certificate

This produces the files the CA needs: ca-key.pem (the private key) and ca.pem (the certificate). It also produces ca.csr (the certificate signing request), which can be used for cross-signing or re-signing.

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
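If the command succeeds, the CA files should now be present in the working directory; you should see something like:

ls ca*
# ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem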

    3.2.2 Use the self-signed CA to issue the etcd HTTPS certificate

  3.2.2.1 Create the certificate signing request file

The IP addresses in the hosts field below are the IPs used for communication within the etcd cluster; we are building a 3-node etcd cluster.

cat >server-csr.json<< EOF
{
    "CN": "etcd",
    "hosts": [
    "10.100.12.168",
    "10.100.10.200",
    "10.100.15.246"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "L": "Shanghai",
        "ST": "Shanghai"
    }
  ]
}
EOF
  3.2.2.2 Generate the etcd server certificate from the template
[root@k8s-master01 etcd]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
2021/11/17 10:18:46 [INFO] generate received request
2021/11/17 10:18:46 [INFO] received CSR
2021/11/17 10:18:46 [INFO] generating key: rsa-2048
2021/11/17 10:18:46 [INFO] encoded CSR
2021/11/17 10:18:46 [INFO] signed certificate with serial number 595998877373980408433662712528622813743684914214
[root@k8s-master01 etcd]# 
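Optionally, inspect the issued certificate and confirm that the SANs match the three etcd node IPs, for example with cfssl-certinfo (the output is JSON, so grep around the "sans" field):

cfssl-certinfo -cert server.pem | grep -A 5 '"sans"'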

3.3 Deploy etcd

    3.3.1 Download the etcd binary package

# Download URL
wget https://github.com/etcd-io/etcd/releases/download/v3.3.10/etcd-v3.3.10-linux-amd64.tar.gz

# Install and stage the files
tar -zxvf etcd-v3.3.10-linux-amd64.tar.gz
mkdir -pv /usr/local/etcd/release/etcd-v3.3.10-linux-amd64/{bin,cfg,ssl,data}
cd /usr/local/etcd && ln -nsvf release/etcd-v3.3.10-linux-amd64 current && cd -
cp -rf etcd-v3.3.10-linux-amd64/etcd* /usr/local/etcd/current/bin/
cp -rf /root/certs/etcd/ca*.pem /root/certs/etcd/server*.pem /usr/local/etcd/current/ssl/
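A quick check that the binaries and certificates landed where the systemd unit below expects them:

/usr/local/etcd/current/bin/etcd --version
ls /usr/local/etcd/current/ssl/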

    3.3.2 Add the main etcd configuration file

Note: adjust the etcd IP addresses for each node.
cat > /usr/local/etcd/current/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/usr/local/etcd/current/data/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.100.12.168:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.100.12.168:2379"
 
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.100.12.168:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.100.12.168:2379"
 
ETCD_INITIAL_CLUSTER="etcd-1=https://10.100.12.168:2380,etcd-2=https://10.100.10.200:2380,etcd-3=https://10.100.15.246:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
Parameter notes:
ETCD_NAME                         # node name, unique within the cluster
ETCD_DATA_DIR                     # data directory
ETCD_LISTEN_PEER_URLS             # peer (cluster) listen address
ETCD_LISTEN_CLIENT_URLS           # client listen address
ETCD_INITIAL_ADVERTISE_PEER_URLS  # advertised peer address
ETCD_ADVERTISE_CLIENT_URLS        # advertised client address
ETCD_INITIAL_CLUSTER              # addresses of all cluster members
ETCD_INITIAL_CLUSTER_TOKEN        # cluster token
ETCD_INITIAL_CLUSTER_STATE        # join state: "new" for a new cluster, "existing" to join an existing cluster

    3.3.3 Manage etcd with systemd

Create /usr/lib/systemd/system/etcd.service with the following content:

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
 
[Service]
Type=notify
EnvironmentFile=/usr/local/etcd/current/cfg/etcd.conf
ExecStart=/usr/local/etcd/current/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=${ETCD_INITIAL_CLUSTER_STATE} \
--cert-file=/usr/local/etcd/current/ssl/server.pem \
--key-file=/usr/local/etcd/current/ssl/server-key.pem \
--peer-cert-file=/usr/local/etcd/current/ssl/server.pem \
--peer-key-file=/usr/local/etcd/current/ssl/server-key.pem \
--trusted-ca-file=/usr/local/etcd/current/ssl/ca.pem \
--peer-trusted-ca-file=/usr/local/etcd/current/ssl/ca.pem
 
Restart=on-failure
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target
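If you write this unit file with a heredoc, quote the delimiter so the shell keeps the ${...} variables literal instead of expanding them while writing the file (a sketch; paste the unit content shown above in place of the comment):

cat > /usr/lib/systemd/system/etcd.service << 'EOF'
# ...unit file content from section 3.3.3...
EOF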

    3.3.4 Push etcd to the other nodes

scp /usr/lib/systemd/system/etcd.service k8s-master02:/usr/lib/systemd/system/etcd.service
scp /usr/lib/systemd/system/etcd.service k8s-node01:/usr/lib/systemd/system/etcd.service
rsync -avzpP /usr/local/etcd k8s-master02:/usr/local/
rsync -avzpP /usr/local/etcd k8s-node01:/usr/local/

    3.3.5 Modify the configuration file on the other etcd nodes

# On k8s-master02:
sed -i '1,9s#10.100.12.168#10.100.10.200#' /usr/local/etcd/current/cfg/etcd.conf
sed -i 's#ETCD_NAME="etcd-1"#ETCD_NAME="etcd-2"#g' /usr/local/etcd/current/cfg/etcd.conf

# On k8s-node01:
sed -i '1,9s#10.100.12.168#10.100.15.246#' /usr/local/etcd/current/cfg/etcd.conf
sed -i 's#ETCD_NAME="etcd-1"#ETCD_NAME="etcd-3"#g' /usr/local/etcd/current/cfg/etcd.conf

    3.3.6 Start the etcd cluster

systemctl daemon-reload
systemctl start etcd
systemctl enable etcd
systemctl status etcd 
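Note that the first etcd node you start may appear to hang until a quorum of peers is reachable, so start etcd on all three nodes. If a node fails to come up, the journal is the first place to look, e.g.:

journalctl -u etcd -f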

    3.3.7 Check the etcd cluster status

ETCDCTL_API=3 \
/usr/local/etcd/current/bin/etcdctl \
--cacert=/usr/local/etcd/current/ssl/ca.pem \
--cert=/usr/local/etcd/current/ssl/server.pem \
--key=/usr/local/etcd/current/ssl/server-key.pem \
--endpoints="https://10.100.12.168:2379,https://10.100.10.200:2379,https://10.100.15.246:2379" endpoint health

4. Deploy Docker Engine:

Docker Engine must be installed on all hosts. Official installation documentation: https://docs.docker.com/engine/install/centos

4.1 OS version requirements

  To install Docker Engine, you need a maintained version of CentOS 7 or 8. Archived versions aren’t supported or tested.

The centos-extras repository must be enabled. This repository is enabled by default, but if you have disabled it, you need to re-enable it.

The overlay2 storage driver is recommended.

4.2 Uninstall old versions of Docker

sudo yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine

4.3 Configure the yum repository

 sudo yum install -y yum-utils
 sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

4.4 Install the latest Docker Engine with yum

sudo yum install docker-ce docker-ce-cli containerd.io
Error encountered:
[root@k8s-master01 ~]# sudo yum install docker-ce docker-ce-cli containerd.io
Docker CE Stable - x86_64                                                                               14 kB/s | 3.5 kB     00:00    
Error: 
 Problem 1: problem with installed package podman-1.0.0-2.git921f98f.module_el8.0.0+58+91b614e7.x86_64
  - package podman-1.0.0-2.git921f98f.module_el8.0.0+58+91b614e7.x86_64 requires runc, but none of the providers can be installed
  - package podman-3.2.3-0.11.module_el8.4.0+942+d25aada8.x86_64 requires runc >= 1.0.0-57, but none of the providers can be installed
  - package podman-3.2.3-0.10.module_el8.4.0+886+c9a8d9ad.x86_64 requires runc >= 1.0.0-57, but none of the providers can be installed
  - package podman-3.0.1-7.module_el8.4.0+830+8027e1c4.x86_64 requires runc >= 1.0.0-57, but none of the providers can be installed
  - package podman-3.0.1-6.module_el8.4.0+781+acf4c33b.x86_64 requires runc >= 1.0.0-57, but none of the providers can be installed
  - package containerd.io-1.4.11-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-55.rc5.dev.git2abd837.module_el8.0.0+58+91b614e7.x86_64
  - package containerd.io-1.4.11-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-55.rc5.dev.git2abd837.module_el8.0.0+58+91b614e7.x86_64
  - package containerd.io-1.4.11-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-70.rc92.module_el8.4.0+673+eabfc99d.x86_64
  - package containerd.io-1.4.11-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-70.rc92.module_el8.4.0+673+eabfc99d.x86_64
  - package containerd.io-1.4.11-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-73.rc93.module_el8.4.0+830+8027e1c4.x86_64
  - package containerd.io-1.4.11-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-73.rc93.module_el8.4.0+830+8027e1c4.x86_64
  - package containerd.io-1.4.11-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-74.rc95.module_el8.4.0+886+c9a8d9ad.x86_64
  - package containerd.io-1.4.11-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-74.rc95.module_el8.4.0+886+c9a8d9ad.x86_64
  - cannot install the best candidate for the job
  - package runc-1.0.0-56.rc5.dev.git2abd837.module_el8.3.0+569+1bada2e4.x86_64 is filtered out by modular filtering
  - package runc-1.0.0-64.rc10.module_el8.4.0+522+66908d0c.x86_64 is filtered out by modular filtering
  - package runc-1.0.0-65.rc10.module_el8.4.0+819+4afbd1d6.x86_64 is filtered out by modular filtering
  - package runc-1.0.0-70.rc92.module_el8.4.0+786+4668b267.x86_64 is filtered out by modular filtering
  - package runc-1.0.0-71.rc92.module_el8.4.0+833+9763146c.x86_64 is filtered out by modular filtering
  - package runc-1.0.0-72.rc92.module_el8.4.0+964+56b6762f.x86_64 is filtered out by modular filtering
 Problem 2: package buildah-1.19.7-1.module_el8.4.0+781+acf4c33b.x86_64 requires runc >= 1.0.0-26, but none of the providers can be installed
  - package containerd.io-1.4.3-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-70.rc92.module_el8.4.0+673+eabfc99d.x86_64
  - package containerd.io-1.4.3-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-70.rc92.module_el8.4.0+673+eabfc99d.x86_64
  - package containerd.io-1.4.3-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-73.rc93.module_el8.4.0+830+8027e1c4.x86_64
  - package containerd.io-1.4.3-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-73.rc93.module_el8.4.0+830+8027e1c4.x86_64
  - package containerd.io-1.4.3-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-74.rc95.module_el8.4.0+886+c9a8d9ad.x86_64
  - package containerd.io-1.4.3-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-74.rc95.module_el8.4.0+886+c9a8d9ad.x86_64
  - package docker-ce-3:20.10.10-3.el8.x86_64 requires containerd.io >= 1.4.1, but none of the providers can be installed
  - package containerd.io-1.4.3-3.2.el8.x86_64 conflicts with runc provided by runc-1.0.0-70.rc92.module_el8.4.0+673+eabfc99d.x86_64
  - package containerd.io-1.4.3-3.2.el8.x86_64 obsoletes runc provided by runc-1.0.0-70.rc92.module_el8.4.0+673+eabfc99d.x86_64
  - package containerd.io-1.4.3-3.2.el8.x86_64 conflicts with runc provided by runc-1.0.0-73.rc93.module_el8.4.0+830+8027e1c4.x86_64
  - package containerd.io-1.4.3-3.2.el8.x86_64 obsoletes runc provided by runc-1.0.0-73.rc93.module_el8.4.0+830+8027e1c4.x86_64
  - package containerd.io-1.4.3-3.2.el8.x86_64 conflicts with runc provided by runc-1.0.0-74.rc95.module_el8.4.0+886+c9a8d9ad.x86_64
  - package containerd.io-1.4.3-3.2.el8.x86_64 obsoletes runc provided by runc-1.0.0-74.rc95.module_el8.4.0+886+c9a8d9ad.x86_64
  - problem with installed package buildah-1.5-3.gite94b4f9.module_el8.0.0+58+91b614e7.x86_64
  - package buildah-1.19.7-2.module_el8.4.0+830+8027e1c4.x86_64 requires runc >= 1.0.0-26, but none of the providers can be installed
  - package containerd.io-1.4.10-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-55.rc5.dev.git2abd837.module_el8.0.0+58+91b614e7.x86_64
  - package containerd.io-1.4.10-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-55.rc5.dev.git2abd837.module_el8.0.0+58+91b614e7.x86_64
  - package containerd.io-1.4.11-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-55.rc5.dev.git2abd837.module_el8.0.0+58+91b614e7.x86_64
  - package containerd.io-1.4.11-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-55.rc5.dev.git2abd837.module_el8.0.0+58+91b614e7.x86_64
  - package containerd.io-1.4.3-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-55.rc5.dev.git2abd837.module_el8.0.0+58+91b614e7.x86_64
  - package containerd.io-1.4.3-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-55.rc5.dev.git2abd837.module_el8.0.0+58+91b614e7.x86_64
  - package containerd.io-1.4.3-3.2.el8.x86_64 conflicts with runc provided by runc-1.0.0-55.rc5.dev.git2abd837.module_el8.0.0+58+91b614e7.x86_64
  - package containerd.io-1.4.3-3.2.el8.x86_64 obsoletes runc provided by runc-1.0.0-55.rc5.dev.git2abd837.module_el8.0.0+58+91b614e7.x86_64
  - package containerd.io-1.4.4-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-55.rc5.dev.git2abd837.module_el8.0.0+58+91b614e7.x86_64
  - package containerd.io-1.4.4-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-55.rc5.dev.git2abd837.module_el8.0.0+58+91b614e7.x86_64
  - package containerd.io-1.4.6-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-55.rc5.dev.git2abd837.module_el8.0.0+58+91b614e7.x86_64
  - package containerd.io-1.4.6-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-55.rc5.dev.git2abd837.module_el8.0.0+58+91b614e7.x86_64
  - package containerd.io-1.4.8-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-55.rc5.dev.git2abd837.module_el8.0.0+58+91b614e7.x86_64
  - package containerd.io-1.4.8-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-55.rc5.dev.git2abd837.module_el8.0.0+58+91b614e7.x86_64
  - package containerd.io-1.4.9-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-55.rc5.dev.git2abd837.module_el8.0.0+58+91b614e7.x86_64
  - package containerd.io-1.4.9-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-55.rc5.dev.git2abd837.module_el8.0.0+58+91b614e7.x86_64
  - cannot install the best candidate for the job
  - package runc-1.0.0-56.rc5.dev.git2abd837.module_el8.3.0+569+1bada2e4.x86_64 is filtered out by modular filtering
  - package runc-1.0.0-64.rc10.module_el8.4.0+522+66908d0c.x86_64 is filtered out by modular filtering
  - package runc-1.0.0-65.rc10.module_el8.4.0+819+4afbd1d6.x86_64 is filtered out by modular filtering
  - package runc-1.0.0-70.rc92.module_el8.4.0+786+4668b267.x86_64 is filtered out by modular filtering
  - package runc-1.0.0-71.rc92.module_el8.4.0+833+9763146c.x86_64 is filtered out by modular filtering
  - package runc-1.0.0-72.rc92.module_el8.4.0+964+56b6762f.x86_64 is filtered out by modular filtering
  - package containerd.io-1.4.4-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-70.rc92.module_el8.4.0+673+eabfc99d.x86_64
  - package containerd.io-1.4.4-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-70.rc92.module_el8.4.0+673+eabfc99d.x86_64
  - package containerd.io-1.4.4-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-73.rc93.module_el8.4.0+830+8027e1c4.x86_64
  - package containerd.io-1.4.4-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-73.rc93.module_el8.4.0+830+8027e1c4.x86_64
  - package containerd.io-1.4.4-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-74.rc95.module_el8.4.0+886+c9a8d9ad.x86_64
  - package containerd.io-1.4.4-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-74.rc95.module_el8.4.0+886+c9a8d9ad.x86_64
  - package containerd.io-1.4.6-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-70.rc92.module_el8.4.0+673+eabfc99d.x86_64
  - package containerd.io-1.4.6-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-70.rc92.module_el8.4.0+673+eabfc99d.x86_64
  - package containerd.io-1.4.6-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-73.rc93.module_el8.4.0+830+8027e1c4.x86_64
  - package containerd.io-1.4.6-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-73.rc93.module_el8.4.0+830+8027e1c4.x86_64
  - package containerd.io-1.4.6-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-74.rc95.module_el8.4.0+886+c9a8d9ad.x86_64
  - package containerd.io-1.4.6-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-74.rc95.module_el8.4.0+886+c9a8d9ad.x86_64
  - package containerd.io-1.4.8-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-70.rc92.module_el8.4.0+673+eabfc99d.x86_64
  - package containerd.io-1.4.8-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-70.rc92.module_el8.4.0+673+eabfc99d.x86_64
  - package containerd.io-1.4.8-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-73.rc93.module_el8.4.0+830+8027e1c4.x86_64
  - package containerd.io-1.4.8-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-73.rc93.module_el8.4.0+830+8027e1c4.x86_64
  - package containerd.io-1.4.8-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-74.rc95.module_el8.4.0+886+c9a8d9ad.x86_64
  - package containerd.io-1.4.8-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-74.rc95.module_el8.4.0+886+c9a8d9ad.x86_64
  - package containerd.io-1.4.9-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-70.rc92.module_el8.4.0+673+eabfc99d.x86_64
  - package containerd.io-1.4.9-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-70.rc92.module_el8.4.0+673+eabfc99d.x86_64
  - package containerd.io-1.4.9-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-73.rc93.module_el8.4.0+830+8027e1c4.x86_64
  - package containerd.io-1.4.9-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-73.rc93.module_el8.4.0+830+8027e1c4.x86_64
  - package containerd.io-1.4.9-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-74.rc95.module_el8.4.0+886+c9a8d9ad.x86_64
  - package containerd.io-1.4.9-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-74.rc95.module_el8.4.0+886+c9a8d9ad.x86_64
  - package containerd.io-1.4.10-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-70.rc92.module_el8.4.0+673+eabfc99d.x86_64
  - package containerd.io-1.4.10-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-70.rc92.module_el8.4.0+673+eabfc99d.x86_64
  - package containerd.io-1.4.10-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-73.rc93.module_el8.4.0+830+8027e1c4.x86_64
  - package containerd.io-1.4.10-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-73.rc93.module_el8.4.0+830+8027e1c4.x86_64
  - package containerd.io-1.4.10-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-74.rc95.module_el8.4.0+886+c9a8d9ad.x86_64
  - package containerd.io-1.4.10-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-74.rc95.module_el8.4.0+886+c9a8d9ad.x86_64
  - package containerd.io-1.4.11-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-70.rc92.module_el8.4.0+673+eabfc99d.x86_64
  - package containerd.io-1.4.11-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-70.rc92.module_el8.4.0+673+eabfc99d.x86_64
  - package containerd.io-1.4.11-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-73.rc93.module_el8.4.0+830+8027e1c4.x86_64
  - package containerd.io-1.4.11-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-73.rc93.module_el8.4.0+830+8027e1c4.x86_64
  - package containerd.io-1.4.11-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-74.rc95.module_el8.4.0+886+c9a8d9ad.x86_64
  - package containerd.io-1.4.11-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-74.rc95.module_el8.4.0+886+c9a8d9ad.x86_64
  - package buildah-1.21.4-1.module_el8.4.0+886+c9a8d9ad.x86_64 requires runc >= 1.0.0-26, but none of the providers can be installed
  - package buildah-1.21.4-2.module_el8.4.0+942+d25aada8.x86_64 requires runc >= 1.0.0-26, but none of the providers can be installed
  - package buildah-1.5-3.gite94b4f9.module_el8.0.0+58+91b614e7.x86_64 requires runc >= 1.0.0-26, but none of the providers can be installed
(try to add '--allowerasing' to command line to replace conflicting packages or '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
Workaround:
[root@k8s-master01 ~]# sudo yum install --allowerasing docker-ce docker-ce-cli containerd.io

4.5 Install a specific docker engine version with yum

[root@k8s-master01 ~]# yum list docker-ce --showduplicates | sort -r
Last metadata expiration check: 0:07:10 ago on Wed 17 Nov 2021 11:46:17 AM CST.
Installed Packages
docker-ce.x86_64               3:20.10.9-3.el8                 docker-ce-stable 
docker-ce.x86_64               3:20.10.8-3.el8                 docker-ce-stable 
docker-ce.x86_64               3:20.10.7-3.el8                 docker-ce-stable 
docker-ce.x86_64               3:20.10.6-3.el8                 docker-ce-stable 
docker-ce.x86_64               3:20.10.5-3.el8                 docker-ce-stable 
docker-ce.x86_64               3:20.10.4-3.el8                 docker-ce-stable 
docker-ce.x86_64               3:20.10.3-3.el8                 docker-ce-stable 
docker-ce.x86_64               3:20.10.2-3.el8                 docker-ce-stable 
docker-ce.x86_64               3:20.10.1-3.el8                 docker-ce-stable 
docker-ce.x86_64               3:20.10.10-3.el8                docker-ce-stable 
docker-ce.x86_64               3:20.10.10-3.el8                @docker-ce-stable
docker-ce.x86_64               3:20.10.0-3.el8                 docker-ce-stable 
docker-ce.x86_64               3:19.03.15-3.el8                docker-ce-stable 
docker-ce.x86_64               3:19.03.14-3.el8                docker-ce-stable 
docker-ce.x86_64               3:19.03.13-3.el8                docker-ce-stable 
Available Packages
[root@k8s-master01 ~]# 
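The listing above only shows which builds are available; it does not pin one. To install a specific build, append the version-release string (the part after the "3:" epoch) to the package names. A minimal sketch, assuming the 20.10.10-3.el8 build shown above is the one wanted (some dnf versions may require the epoch prefix, e.g. docker-ce-3:20.10.10-3.el8):

sudo yum install -y --allowerasing docker-ce-20.10.10-3.el8 docker-ce-cli-20.10.10-3.el8 containerd.io
docker --version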

4.6 Add the Aliyun docker registry mirror

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://zp4fac78.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload

4.7 Start the docker engine

systemctl start docker 
systemctl enable docker 
systemctl status docker
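A quick check, assuming the service came up cleanly, confirms that docker is active and that the registry mirror from 4.6 was picked up:

systemctl is-active docker
docker info | grep -A 1 "Registry Mirrors"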

 五、Create the Kubernetes CA Certificates

5.1 Self-signed certificate authority (CA)

mkdir -p /root/certs/k8s && cd /root/certs/k8s

  5.1.1 Add the CA signing configuration

cat > ca-config.json<< EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
     },
      "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

  5.1.2 CA root certificate signing request file

cat > ca-csr.json<< EOF
{
    "CN": "kubernetes",
    "key": {
      "algo": "rsa",
      "size": 2048
  },    
  "names": [
      {
        "C": "CN",
        "L": "Shanghai",
        "ST": "Shanghai",
        "O": "k8s",
        "OU": "System"
       }
   ]    
}
EOF

  5.1.3 Generate the CA certificate

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
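If generation succeeds, ca.pem, ca-key.pem and ca.csr appear in the working directory. An optional sanity check, assuming the cfssl release also ships the certinfo subcommand (the standard binaries do):

ls ca*.pem ca.csr
cfssl certinfo -cert ca.pem | grep -E '"common_name"|"not_after"'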

5.2 Issue the kube-apiserver HTTPS certificate with the self-signed CA

  5.2.1 Create the certificate signing request file

cat > server-csr.json<< EOF
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "10.100.12.168",
      "10.100.10.200",
      "10.100.10.103",
      "10.100.15.246",
      "10.100.10.195",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
],
"key": {
  "algo": "rsa",
  "size": 2048
},
"names": [
    {
      "C": "CN",
      "L": "Shanghai",
      "ST": "Shanghai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

  5.2.2 Generate the apiserver certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

六、Deploy the Master Nodes

Current master node IPs: 10.100.12.168/16, 10.100.10.200/16

Download: https://storage.googleapis.com/kubernetes-release/release/v1.18.20/kubernetes-server-linux-amd64.tar.gz

Components to install:

  • kube-apiserver
  • kube-controller-manager
  • kube-scheduler
mkdir -pv /usr/local/kubernetes/release/kubernetes-v1.18.20/{bin,cfg,ssl,logs}
ln -snvf /usr/local/kubernetes/release/kubernetes-v1.18.20 /usr/local/kubernetes/current
tar zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
find ./ -perm /111 -type f | xargs cp -t /usr/local/kubernetes/current/bin
cp -rf /root/certs/k8s/*.pem /usr/local/kubernetes/current/ssl/

 6.1 Deploy kube-apiserver

  6.1.1 Create the kube-apiserver configuration file

cat > /usr/local/kubernetes/current/cfg/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/usr/local/kubernetes/current/logs \\
--etcd-servers=https://10.100.12.168:2379,https://10.100.10.200:2379,https://10.100.15.246:2379 \\
--bind-address=0.0.0.0 \\
--secure-port=6443 \\
--advertise-address=10.100.10.103 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/usr/local/kubernetes/current/cfg/token.csv \\
--service-node-port-range=30000-32767 \\
--kubelet-client-certificate=/usr/local/kubernetes/current/ssl/server.pem \\
--kubelet-client-key=/usr/local/kubernetes/current/ssl/server-key.pem \\
--tls-cert-file=/usr/local/kubernetes/current/ssl/server.pem \\
--tls-private-key-file=/usr/local/kubernetes/current/ssl/server-key.pem \\
--client-ca-file=/usr/local/kubernetes/current/ssl/ca.pem \\
--service-account-key-file=/usr/local/kubernetes/current/ssl/ca-key.pem \\
--etcd-cafile=/usr/local/etcd/current/ssl/ca.pem \\
--etcd-certfile=/usr/local/etcd/current/ssl/server.pem \\
--etcd-keyfile=/usr/local/etcd/current/ssl/server-key.pem \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/usr/local/kubernetes/current/logs/k8s-audit.log"
EOF

Parameter notes:

--logtostderr          #false means logs are written to files instead of stderr
--v                    #log verbosity level
--log-dir              #log output directory
--etcd-servers         #etcd cluster endpoints
--bind-address         #listen address on this host
--secure-port          #HTTPS secure port (default 6443)
--advertise-address    #address advertised to the rest of the cluster
--allow-privileged     #allow containers to run in docker privileged mode
--service-cluster-ip-range   #Service virtual IP range
--enable-admission-plugins   #admission control plugins, applied in order
--authorization-mode   #enable RBAC authorization and Node self-management
--enable-bootstrap-token-auth  #enable the TLS bootstrap mechanism
      #allows Secrets of type "bootstrap.kubernetes.io/token" in the "kube-system" namespace to be used for TLS bootstrap authentication (official description)
      #with this enabled, nodes that join later are authorized automatically, as long as they are added to the corresponding group
--token-auth-file      #bootstrap token file
--service-node-port-range    #default port range allocated to NodePort Services
--kubelet-client-xxx   #client certificate the apiserver uses to reach kubelets
--tls-xxx-file         #apiserver HTTPS certificates
--etcd-xxxfile         #certificates for connecting to the etcd cluster
--audit-log-xxx        #audit log settings

  6.1.2 Create the token file

# Create the TLS bootstrapping token
cat > /usr/local/kubernetes/current/cfg/token.csv << EOF
19c2834350a42f92912196a841ceecad,kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF

#Format: token,user name,UID,user group
#The token can also be regenerated and replaced (see the sketch below):
#head -c 16 /dev/urandom | od -An -t x | tr -d ' '
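A minimal sketch of that regeneration, assuming a fresh random token is preferred over the hard-coded one (the same value must be reused later in bootstrap.kubeconfig, section 7.1.3):

TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "${TOKEN},kubelet-bootstrap,10001,\"system:node-bootstrapper\"" > /usr/local/kubernetes/current/cfg/token.csv
echo "Bootstrap token: ${TOKEN}"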

  6.1.3 Manage kube-apiserver with systemd

cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/usr/local/kubernetes/current/cfg/kube-apiserver.conf
ExecStart=/usr/local/kubernetes/current/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

  6.1.4 Start the kube-apiserver service

systemctl daemon-reload
systemctl start kube-apiserver
systemctl enable kube-apiserver
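A quick check that the apiserver is listening. The /healthz probe assumes the default system:public-info-viewer binding, which allows unauthenticated health checks:

systemctl is-active kube-apiserver
ss -lntp | grep 6443
curl -k https://127.0.0.1:6443/healthz ; echo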

  6.1.5 Authorize the kubelet-bootstrap user to request certificates

  To avoid issuing kubelet certificates by hand, Kubernetes provides the bootstrap mechanism, which issues kubelet certificates automatically for nodes that are about to join the cluster; every client that connects to the apiserver needs a certificate.

  Put simply, kubelet bootstrap is the mechanism that bootstraps communication between the apiserver and the worker nodes. Enabling Bootstrap Token Authentication on the apiserver (--enable-bootstrap-token-auth) lets the kubelet present a special low-privilege token; after the kubelet authenticates to the apiserver with that token, it automatically submits a CSR for its own client certificate, which is then approved (manually in this guide, section 7.1.5, or automatically if an auto-approval policy is configured).

Create the kubelet-bootstrap cluster role binding:

kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
Output:
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created

6.2 Deploy kube-controller-manager

    6.2.1 Create the configuration file

cat > /usr/local/kubernetes/current/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/usr/local/kubernetes/current/logs \\
--leader-elect=true \\
--kubeconfig=/usr/local/kubernetes/current/cfg/kube-controller-manager.kubeconfig \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.244.0.0/16 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-signing-cert-file=/usr/local/kubernetes/current/ssl/ca.pem \\
--cluster-signing-key-file=/usr/local/kubernetes/current/ssl/ca-key.pem \\
--root-ca-file=/usr/local/kubernetes/current/ssl/ca.pem \\
--service-account-private-key-file=/usr/local/kubernetes/current/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s"
EOF

Parameter notes

--master                       #connect to the apiserver through the local insecure port 8080 (not used here; --kubeconfig is used instead)
--leader-elect                 #automatic leader election when several instances run (HA)
--kubeconfig                   #kubeconfig used to reach the apiserver (generated in 6.2.2)
--cluster-signing-cert-file    #CA certificate
--cluster-signing-key-file     #CA private key
#the CA that automatically signs kubelet certificates; must be the same CA the apiserver uses

     6.2.2 Generate the kubeconfig file

  Generate the kube-controller-manager certificate:

cat > kube-controller-manager-csr.json << EOF
{
  "CN": "system:kube-controller-manager",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "10.100.12.168",
    "10.100.10.200",
    "10.100.10.103",
    "10.100.15.246",
    "10.100.10.195",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Shanghai", 
      "ST": "Shanghai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

  Generate the certificate:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

  Generate the kubeconfig file:

KUBE_CONFIG="/usr/local/kubernetes/current/cfg/kube-controller-manager.kubeconfig"
KUBE_APISERVER="https://10.100.12.168:6443"

kubectl config set-cluster kubernetes \
--certificate-authority=/usr/local/kubernetes/current/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=${KUBE_CONFIG}

kubectl config set-credentials kube-controller-manager \
--client-certificate=./kube-controller-manager.pem \
--client-key=./kube-controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=${KUBE_CONFIG}

kubectl config set-context default \
--cluster=kubernetes \
--user=kube-controller-manager \
--kubeconfig=${KUBE_CONFIG}

kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
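Optionally inspect the generated file; the cluster server URL, the kube-controller-manager user and the embedded certificates (shown as DATA+OMITTED/REDACTED) should all be present:

kubectl config view --kubeconfig=${KUBE_CONFIG}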

     6.2.3 Manage kube-controller-manager with systemd

cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
 
[Service]
EnvironmentFile=-/usr/local/kubernetes/current/cfg/kube-controller-manager.conf
ExecStart=/usr/local/kubernetes/current/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
 
[Install]
WantedBy=multi-user.target
EOF

  6.2.4 Start the kube-controller-manager service

systemctl daemon-reload
systemctl start  kube-controller-manager
systemctl enable kube-controller-manager
systemctl status kube-controller-manager

6.3 Deploy kube-scheduler

  6.3.1 Add the configuration file

cat > /usr/local/kubernetes/current/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/usr/local/kubernetes/current/logs \\
--leader-elect \\
--kubeconfig=/usr/local/kubernetes/current/cfg/kube-scheduler.kubeconfig \\
--bind-address=127.0.0.1"
EOF

Parameter notes

--master        #connect to the apiserver through the local insecure port 8080 (not used here; --kubeconfig is used instead)
--leader-elect  #automatic leader election when several instances run (HA)
--kubeconfig    #kubeconfig used to reach the apiserver (generated in 6.3.3)

  6.3.2 Generate the certificate

cat > kube-scheduler-csr.json << EOF
{
  "CN": "system:kube-scheduler",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "10.100.12.168",
    "10.100.10.200",
    "10.100.10.103",
    "10.100.15.246",
    "10.100.10.195",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
    ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Shanghai",
      "ST": "Shanghai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

  6.3.3 Generate the kubeconfig file

KUBE_CONFIG="/usr/local/kubernetes/current/cfg/kube-scheduler.kubeconfig"
KUBE_APISERVER="https://10.100.12.168:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/usr/local/kubernetes/current/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kube-scheduler \
  --client-certificate=./kube-scheduler.pem \
  --client-key=./kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-scheduler \
  --kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

  6.3.4 Manage kube-scheduler with systemd

cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/usr/local/kubernetes/current/cfg/kube-scheduler.conf
ExecStart=/usr/local/kubernetes/current/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF

   6.3.5 Start the kube-scheduler service

systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler
systemctl status kube-scheduler

The master components are now deployed. Check the cluster status:

[root@k8s-master01 k8s]# kubectl get cs               
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-2               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"} 

七、Deploy the Worker Nodes

Current worker node IPs: 10.100.15.246/16, 10.100.10.195/16 (in this guide kubelet and kube-proxy are also installed on the master nodes, which is why the examples below run on k8s-master01)

Download: https://storage.googleapis.com/kubernetes-release/release/v1.18.20/kubernetes-server-linux-amd64.tar.gz

Components to install:

  • kubelet
  • kube-proxy

7.1 Deploy kubelet

  7.1.1 Create the kubelet configuration file

cat > /usr/local/kubernetes/current/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/usr/local/kubernetes/current/logs \\
--hostname-override=k8s-master01 \\
--network-plugin=cni \\
--kubeconfig=/usr/local/kubernetes/current/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/usr/local/kubernetes/current/cfg/bootstrap.kubeconfig \\
--config=/usr/local/kubernetes/current/cfg/kubelet.config \\
--cert-dir=/usr/local/kubernetes/current/ssl \\
--pod-infra-container-image=registry-wx.biggeryun.com/kubernetes/pause:3.5"
EOF
--hostname-override          #display name of the node, must be unique in the cluster; change it on every node (see the sketch after this list)
--network-plugin             #enable CNI
--kubeconfig                 #empty path, generated automatically on first start, used afterwards to talk to the apiserver
--bootstrap-kubeconfig       #used on first start to request a certificate from the apiserver
--config                     #kubelet configuration parameter file
--cert-dir                   #directory where kubelet certificates are generated
--pod-infra-container-image  #image of the pause container that manages the Pod network
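When kubelet.conf is copied to the other nodes, --hostname-override has to be changed to that node's own name. A minimal sketch, assuming the file has already been synced to k8s-node01:

# On k8s-node01: replace the hostname override copied from the master
sed -i 's/--hostname-override=k8s-master01/--hostname-override=k8s-node01/' /usr/local/kubernetes/current/cfg/kubelet.conf
grep hostname-override /usr/local/kubernetes/current/cfg/kubelet.conf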

  7.1.2 Create the kubelet parameter file

cat > /usr/local/kubernetes/current/cfg/kubelet.config << EOF 
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /usr/local/kubernetes/current/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF

  7.1.3 Generate the bootstrap.kubeconfig file

KUBE_APISERVER="https://10.100.12.168:6443" # apiserver IP:PORT
TOKEN="19c2834350a42f92912196a841ceecad" # must match /usr/local/kubernetes/current/cfg/token.csv

# Generate the kubelet bootstrap kubeconfig
# Set the cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=/usr/local/kubernetes/current/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig
# Set the client authentication parameters
kubectl config set-credentials "kubelet-bootstrap" \
--token=${TOKEN} \
--kubeconfig=bootstrap.kubeconfig
# Set the context parameters
kubectl config set-context default \
--cluster=kubernetes \
--user="kubelet-bootstrap" \
--kubeconfig=bootstrap.kubeconfig
# Set the default context
kubectl config use-context default \
--kubeconfig=bootstrap.kubeconfig

#Copy the generated kubeconfig into cfg
/bin/cp -rf bootstrap.kubeconfig /usr/local/kubernetes/current/cfg

  7.1.4 Manage kubelet with systemd

    7.1.4.1 Create the unit file
cat > /usr/lib/systemd/system/kubelet.service << EOF 
[Unit]
Description=Kubernetes Kubelet
After=docker.service
[Service]
EnvironmentFile=/usr/local/kubernetes/current/cfg/kubelet.conf
ExecStart=/usr/local/kubernetes/current/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
    7.1.4.2 Start the kubelet service
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet

  7.1.5 Approve the kubelet certificate request and join the node to the cluster

    7.1.5.1 View the pending kubelet certificate request
[root@k8s-master01 k8s]# kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-kH6v-kPprOgpJnvDg8tVke7TvBBKgAiGryxqlOvF8Pg   5s    kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
[root@k8s-master01 k8s]# 
    7.1.5.2 Approve the certificate request so the node joins the cluster
[root@k8s-master01 k8s]# kubectl certificate approve node-csr-kH6v-kPprOgpJnvDg8tVke7TvBBKgAiGryxqlOvF8Pg
certificatesigningrequest.certificates.k8s.io/node-csr-kH6v-kPprOgpJnvDg8tVke7TvBBKgAiGryxqlOvF8Pg approved
[root@k8s-master01 k8s]# 
    7.1.5.3 View the node
[root@k8s-master01 k8s]# kubectl get nodes
NAME           STATUS     ROLES    AGE   VERSION
k8s-master01   NotReady   <none>   31s   v1.18.20
[root@k8s-master01 k8s]# 

  Note: the node STATUS is NotReady because the network plugin has not been deployed yet; this is expected.

7.2 Deploy kube-proxy

  7.2.1 Create the configuration file

cat > /usr/local/kubernetes/current/cfg/kube-proxy.conf << EOF 
KUBE_PROXY_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/usr/local/kubernetes/current/logs \\
--config=/usr/local/kubernetes/current/cfg/kube-proxy.config"
EOF

  7.2.2 Create the parameter file

cat > /usr/local/kubernetes/current/cfg/kube-proxy.config << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /usr/local/kubernetes/current/cfg/kube-proxy.kubeconfig
hostnameOverride: ${HOSTNAME}
clusterCIDR: 10.244.0.0/16
mode: ipvs
ipvs:
  scheduler: "rr"
EOF
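mode: ipvs needs the IPVS kernel modules plus the ipset/ipvsadm userland tools, which this guide does not load explicitly (kube-proxy falls back to iptables if they are missing). A minimal sketch, assuming the CentOS 8 / elrepo kernels used earlier, where the conntrack module is named nf_conntrack (older kernels use nf_conntrack_ipv4):

yum install -y ipset ipvsadm
cat > /etc/modules-load.d/ipvs.conf << EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
systemctl restart systemd-modules-load.service
lsmod | grep -e ip_vs -e nf_conntrack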

  7.2.3 Generate the kube-proxy.kubeconfig file

    7.2.3.1 Create the certificate signing request
cat > kube-proxy-csr.json << EOF
{
    "CN":"system:kube-proxy",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "10.100.12.168",
      "10.100.10.200",
      "10.100.10.103",
      "10.100.15.246",
      "10.100.10.195",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key":{
        "algo":"rsa",
        "size":2048
    },
    "names":[
        {
            "C":"CN",
            "L":"Shanghai",
            "ST":"Shanghai",
            "O":"k8s",
            "OU":"System"
        }
    ]
}
EOF
    7.2.3.2 Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
    7.2.3.3 Generate the kubeconfig file
KUBE_APISERVER="https://10.100.12.168:6443"

kubectl config set-cluster kubernetes \
--certificate-authority=/usr/local/kubernetes/current/ssl/ca.pem \
--embed-certs=true --server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
--client-certificate=./kube-proxy.pem \
--client-key=./kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default \
--kubeconfig=kube-proxy.kubeconfig

# Copy the kubeconfig into cfg
/bin/cp -rf kube-proxy.kubeconfig /usr/local/kubernetes/current/cfg/

  7.2.4 Manage kube-proxy with systemd

    7.2.4.1 Create the unit file
cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=/usr/local/kubernetes/current/cfg/kube-proxy.conf
ExecStart=/usr/local/kubernetes/current/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
    7.2.4.2 Start the service and enable it at boot
systemctl daemon-reload
systemctl start  kube-proxy
systemctl enable kube-proxy
systemctl status kube-proxy

 八、Deploy the CNI Network (flannel)

  8.1 Write the cluster Pod network configuration into etcd

ETCDCTL_API=3 /usr/local/etcd/current/bin/etcdctl \
--cacert=/usr/local/etcd/current/ssl/ca.pem \
--cert=/usr/local/etcd/current/ssl/server.pem \
--key=/usr/local/etcd/current/ssl/server-key.pem \
--endpoints=https://10.100.12.168:2379,https://10.100.10.200:2379,https://10.100.15.246:2379 \
put /coreos.com/network/config  '{ "Network": "10.244.0.0/16", "Backend": {"Type": "vxlan"}}'

 Two things to note:

  • flannel versions that do not support the etcd v3 API (flannel v0.10.0 only speaks the v2 API) need this key written through the etcd v2 API instead (see the sketch after this list);
  • the Pod network ${CLUSTER_CIDR} written here must be a /16 and must match the --cluster-cidr value passed to kube-controller-manager;
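A minimal sketch of the etcd v2 API variant, assuming the same certificates and endpoints (etcdctl's v2 mode uses --ca-file/--cert-file/--key-file and the set subcommand):

ETCDCTL_API=2 /usr/local/etcd/current/bin/etcdctl \
--ca-file=/usr/local/etcd/current/ssl/ca.pem \
--cert-file=/usr/local/etcd/current/ssl/server.pem \
--key-file=/usr/local/etcd/current/ssl/server-key.pem \
--endpoints=https://10.100.12.168:2379,https://10.100.10.200:2379,https://10.100.15.246:2379 \
set /coreos.com/network/config '{ "Network": "10.244.0.0/16", "Backend": {"Type": "vxlan"}}'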

  8.2 Download and unpack flannel

wget https://github.com/flannel-io/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
mkdir -pv  /usr/local/flannel/release/flannel-v0.10.0-linux-amd64/bin
tar -zxvf  flannel-v0.10.0-linux-amd64.tar.gz -C  /usr/local/flannel/release/flannel-v0.10.0-linux-amd64/bin
cd /usr/local/flannel; ln -snvf /usr/local/flannel/release/flannel-v0.10.0-linux-amd64/ current

  8.3 Create the flannel configuration file

cat /usr/local/kubernetes/current/cfg/flanneld.config 
FLANNEL_OPTIONS="--etcd-endpoints=https://10.100.12.168:2379,https://10.100.10.200:2379,https://10.100.15.246:2379 \
-etcd-cafile=/usr/local/etcd/current/ssl/ca.pem \
-etcd-certfile=/usr/local/etcd/current/ssl/server.pem \
-etcd-keyfile=/usr/local/etcd/current/ssl/server-key.pem \
-etcd-prefix=/coreos.com/network"

  8.4 Manage flanneld with systemd

cat /usr/lib/systemd/system/flanneld.service 
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/usr/local/kubernetes/current/cfg/flanneld.config
ExecStart=/usr/local/flannel/current/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/usr/local/flannel/current/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
  • the mk-docker-opts.sh script writes the Pod subnet assigned to flanneld into /run/flannel/subnet.env; docker later reads the environment variables in this file to configure the docker0 bridge;
  • flanneld talks to other nodes over the interface of the system default route; on hosts with multiple interfaces (e.g. internal and public), use the -iface flag to pick the interface, such as the node's eth0;
  • flanneld must run as root;

  8.5 Configure Docker to start with the flannel-assigned subnet

The change to docker.service is twofold: dockerd's ExecStart line gains $DOCKER_NETWORK_OPTIONS, and an EnvironmentFile line pointing at /run/flannel/subnet.env is added (shown commented out below; it is enabled in section 8.9 once flanneld has generated that file):
[root@k8s-master01 cfg]# cat /usr/lib/systemd/system/docker.service 
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket containerd.service

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
#EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always

# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3

# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500

[Install]
WantedBy=multi-user.target

  8.6 Push the flanneld files to the other nodes

    8.6.1 Push the flanneld systemd unit file to the other nodes

rsync -avzpp /usr/lib/systemd/system/flanneld.service root@k8s-master02:/usr/lib/systemd/system/
rsync -avzpp /usr/lib/systemd/system/flanneld.service root@k8s-node01:/usr/lib/systemd/system/
rsync -avzpp /usr/lib/systemd/system/flanneld.service root@k8s-node02:/usr/lib/systemd/system/

    8.6.2 Push the docker systemd unit file to the other nodes

rsync -avzpp /usr/lib/systemd/system/docker.service root@k8s-master02:/usr/lib/systemd/system/
rsync -avzpp /usr/lib/systemd/system/docker.service root@k8s-node01:/usr/lib/systemd/system/
rsync -avzpp /usr/lib/systemd/system/docker.service root@k8s-node02:/usr/lib/systemd/system/

    8.6.3 Push the flanneld configuration file to the other nodes

rsync -avzpP /usr/local/kubernetes/current/cfg/flanneld.config root@k8s-master02:/usr/local/kubernetes/current/cfg/
rsync -avzpP /usr/local/kubernetes/current/cfg/flanneld.config root@k8s-node01:/usr/local/kubernetes/current/cfg/
rsync -avzpP /usr/local/kubernetes/current/cfg/flanneld.config root@k8s-node02:/usr/local/kubernetes/current/cfg/
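The flanneld and mk-docker-opts.sh binaries themselves must also exist on every node at the paths the unit file expects, and nodes that are not etcd members still need the etcd client certificates referenced in flanneld.config. A minimal sketch, assuming the /usr/local/flannel layout from 8.2 is simply replicated with rsync:

for node in k8s-master02 k8s-node01 k8s-node02; do
  rsync -avzpP /usr/local/flannel root@${node}:/usr/local/
done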

  8.7 Start the flanneld service

systemctl daemon-reload
systemctl start flanneld
systemctl enable flanneld
systemctl restart docker

  8.8 Verify that the services took effect

[root@k8s-master01 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:18:93:04 brd ff:ff:ff:ff:ff:ff
    inet 10.100.12.168/16 brd 10.100.255.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe18:9304/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:28:05:17:f7 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
4: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default 
    link/ether 12:b1:5e:eb:3d:1c brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.1/32 brd 10.0.0.1 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
5: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default 
    link/ether 82:56:b8:53:ef:6b brd ff:ff:ff:ff:ff:ff
    inet 10.244.100.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::8056:b8ff:fe53:ef6b/64 scope link 
       valid_lft forever preferred_lft forever
[root@k8s-master01 ~]# 

  8.9 Configure docker to use flannel

[root@k8s-master01 ]# cat /usr/lib/systemd/system/docker.service 
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket containerd.service

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always

# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3

# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500

[Install]
WantedBy=multi-user.target

  8.10 Check the docker0 and flannel.1 addresses

[root@k8s-master01 current]# cat /run/flannel/subnet.env 
DOCKER_OPT_BIP="--bip=10.244.100.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=10.244.100.1/24 --ip-masq=false --mtu=1450"
[root@k8s-master01 current]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:18:93:04 brd ff:ff:ff:ff:ff:ff
    inet 10.100.12.168/16 brd 10.100.255.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe18:9304/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:28:05:17:f7 brd ff:ff:ff:ff:ff:ff
    inet 10.244.100.1/24 brd 10.244.100.255 scope global docker0
       valid_lft forever preferred_lft forever
4: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default 
    link/ether 12:b1:5e:eb:3d:1c brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.1/32 brd 10.0.0.1 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
5: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default 
    link/ether 82:56:b8:53:ef:6b brd ff:ff:ff:ff:ff:ff
    inet 10.244.100.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::8056:b8ff:fe53:ef6b/64 scope link 
       valid_lft forever preferred_lft forever
[root@k8s-master01 current]# 

  If you hit a "cni not initialized" error, check the flannel CNI configuration under /etc/cni/net.d and the CNI plugin binaries under /opt/cni.
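This guide does not show that CNI setup explicitly. A minimal sketch, assuming the upstream CNI plugins release v0.8.6 and the standard flannel delegate configuration (the version, the cbr0 name and the paths are illustrative and should be adapted):

# Install the CNI plugin binaries expected by --network-plugin=cni
mkdir -p /opt/cni/bin /etc/cni/net.d
wget https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz
tar -zxvf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin

# Standard flannel CNI config: the flannel plugin delegates to the bridge plugin using /run/flannel/subnet.env
cat > /etc/cni/net.d/10-flannel.conflist << EOF
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    { "type": "flannel", "delegate": { "hairpinMode": true, "isDefaultGateway": true } },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF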

  If kubelet starts successfully but kubectl get csr on the master shows no request for the node, troubleshoot starting from how kubelet boots and registers: https://www.cnblogs.com/zuoyang/p/15698865.html

  8.11 Check the cluster component and node status

[root@k8s-master01 current]# kubectl get cs,node
NAME                                 STATUS      MESSAGE                                                                                           ERROR
componentstatus/etcd-2               Unhealthy   Get https://10.100.15.246:2379/health: dial tcp 10.100.15.246:2379: connect: connection refused   
componentstatus/scheduler            Healthy     ok                                                                                                
componentstatus/controller-manager   Healthy     ok                                                                                                
componentstatus/etcd-0               Healthy     {"health":"true"}                                                                                 
componentstatus/etcd-1               Healthy     {"health":"true"}                                                                                 

NAME                STATUS   ROLES    AGE     VERSION
node/k8s-master01   Ready    <none>   30h     v1.18.20
node/k8s-master02   Ready    <none>   23h     v1.18.20
node/k8s-node01     Ready    <none>   85m     v1.18.20
node/k8s-node02     Ready    <none>   2m37s   v1.18.20
[root@k8s-master01 current]# 

  8.12 Label the master and node cluster members

kubectl label node k8s-master01 node-role.kubernetes.io/master=master
kubectl label node k8s-master01 node-role.kubernetes.io/etcd=etcd
kubectl label node k8s-master02 node-role.kubernetes.io/master=master
kubectl label node k8s-master02 node-role.kubernetes.io/etcd=etcd
kubectl label node k8s-node01 node-role.kubernetes.io/node=node
kubectl label node k8s-node01 node-role.kubernetes.io/etcd=etcd
kubectl label node k8s-node02 node-role.kubernetes.io/node=node
Add a label to a node:      kubectl label node k8s-master01 node-role.kubernetes.io/master=master
Remove a label from a node: kubectl label node k8s-master01 node-role.kubernetes.io/master-
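To double-check which role labels each node carries:

kubectl get nodes --show-labels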

  8.13 Cluster node status after labelling

[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS   ROLES         AGE   VERSION
k8s-master01   Ready    etcd,master   30h   v1.18.20
k8s-master02   Ready    etcd,master   23h   v1.18.20
k8s-node01     Ready    etcd,node     98m   v1.18.20
k8s-node02     Ready    node          16m   v1.18.20
[root@k8s-master01 ~]# 

  The binary deployment of the k8s-v1.18.20 + etcd-v3.3.10 + flannel-v0.10.0 HA cluster on centos8.0 is complete!
