Building a Kubernetes Cluster with kubeadm

I. Initializing the k8s Lab Environment

K8s cluster role   IP address        Hostname      Installed components
Control node       192.168.88.180    k8s-master1   kube-apiserver, kube-controller-manager, kube-scheduler, docker, etcd, calico
Worker node        192.168.88.181    k8s-node1     kubelet, kube-proxy, docker, calico, CoreDNS
Worker node        192.168.88.182    k8s-node2     kubelet, kube-proxy, docker, calico, CoreDNS

1. kubeadm vs. binary installation: which to use

kubeadm is the official open-source tool for standing up a Kubernetes cluster quickly; it is currently the most convenient and widely recommended approach.

The kubeadm init and kubeadm join commands can create a Kubernetes cluster in short order. When kubeadm initializes k8s, all of the control-plane components run as Pods, which gives them self-healing capability.
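
As a minimal sketch of that two-step workflow (the flags and addresses here are placeholders; the exact command used in this lab appears later in this article):

# On the control-plane node: bootstrap the cluster
kubeadm init --pod-network-cidr=10.244.0.0/16
# On each worker node: run the join command that kubeadm init prints
kubeadm join <control-plane-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>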

kubeadm is a tool for building a cluster quickly; it is essentially scripted, automated deployment that simplifies the work. Certificates and component manifests are generated automatically, which hides many details and leaves you with little visibility into the individual modules.

kubeadm suits scenarios where k8s clusters are deployed frequently, or where a high degree of automation is required.

Binary installation: download the official binary packages for each component and install them by hand. Doing it manually gives you a clearer understanding of k8s and of how the cluster components are installed and deployed.

Both kubeadm and binary installations are suitable for production and run stably there; choose based on the needs of your project.

2. Create the VMs and configure static IPs

Edit the network interface configuration as shown below. This is the control node's configuration; the worker nodes differ only in IP address.

[root@localhost ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
UUID=92c69021-2981-4c15-8c9a-2d1efb6bb011
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.88.180
NETMASK=255.255.255.0
GATEWAY=192.168.88.2
DNS=114.114.114.114

# Restart the network
[root@localhost ~]# systemctl restart network
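
A quick check that the new address and gateway are active, assuming the interface is still named ens33:

# The inet line should show 192.168.88.180/24 and the default route should point to 192.168.88.2
ip addr show ens33 | grep inet
ip route | grep default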

3. Configure hostnames and the hosts file

Set each machine's hostname:

# Run on 192.168.88.180
[root@localhost ~]# hostnamectl set-hostname k8s-master1 && bash
# Run on 192.168.88.181
[root@localhost ~]# hostnamectl set-hostname k8s-node1 && bash
# Run on 192.168.88.182
[root@localhost ~]# hostnamectl set-hostname k8s-node2 && bash

Configure the hosts file on every host so the machines can reach each other by hostname:

cat >> /etc/hosts << EOF
192.168.88.180 k8s-master1
192.168.88.181 k8s-node1
192.168.88.182 k8s-node2
EOF
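
A quick sanity check that hostname resolution through /etc/hosts works, run from any of the three nodes:

ping -c 2 k8s-node1
ping -c 2 k8s-node2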

4. Configure passwordless SSH between hosts

# 1. Generate an SSH key pair on all three machines (just press Enter at every prompt)
[root@k8s-master1 ~]# ssh-keygen
[root@k8s-node1 ~]# ssh-keygen
[root@k8s-node2 ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:56kFtc+z1a7/cWTg9Hlh7xLcf8PHLh/LSQF113kK8os root@k8s-master1
The key's randomart image is:
+---[RSA 2048]----+
|               .=|
|          . . ..=|
|          .o oo+.|
|         . ..++++|
|        S o. .+oB|
|         +E+. .B+|
|          + + o*O|
|         o   +++X|
|        .   . .OB|
+----[SHA256]-----+

# 2. On all three machines, copy the public key to every host
[root@k8s-master1 ~]# ssh-copy-id k8s-master1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'k8s-master1 (192.168.88.180)' can't be established.
ECDSA key fingerprint is SHA256:u7nlxwc8AwCbT5fFs2kWLcQQJpZ4cTlLBhl8qWRZtis.
ECDSA key fingerprint is MD5:80:39:be:e6:78:40:82:6b:30:de:69:74:64:11:a0:7d.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@k8s-master1's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'k8s-master1'"
and check to make sure that only the key(s) you wanted were added.

[root@k8s-master1 ~]# ssh-copy-id k8s-node1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'k8s-node1 (192.168.88.181)' can't be established.
ECDSA key fingerprint is SHA256:u7nlxwc8AwCbT5fFs2kWLcQQJpZ4cTlLBhl8qWRZtis.
ECDSA key fingerprint is MD5:80:39:be:e6:78:40:82:6b:30:de:69:74:64:11:a0:7d.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@k8s-node1's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'k8s-node1'"
and check to make sure that only the key(s) you wanted were added.

[root@k8s-master1 ~]# ssh-copy-id k8s-node2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'k8s-node2 (192.168.88.182)' can't be established.
ECDSA key fingerprint is SHA256:u7nlxwc8AwCbT5fFs2kWLcQQJpZ4cTlLBhl8qWRZtis.
ECDSA key fingerprint is MD5:80:39:be:e6:78:40:82:6b:30:de:69:74:64:11:a0:7d.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@k8s-node2's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'k8s-node2'"
and check to make sure that only the key(s) you wanted were added.
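
Once ssh-copy-id has been run from every node to every other node, passwordless login can be verified, for example:

# Should print the remote hostname without prompting for a password
ssh k8s-node1 hostname
ssh k8s-node2 hostname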

5. Disable swap

Swap is the swap partition: when a machine runs low on memory it spills over to swap, but swap is far slower than RAM. To keep performance predictable, Kubernetes by default refuses to use a swap partition.

During initialization kubeadm checks whether swap is disabled; if it is not, the initialization fails. If you really do not want to disable swap, you can pass --ignore-preflight-errors=Swap when installing k8s.

# Disable swap temporarily (until the next reboot)
[root@k8s-master1 ~]# swapoff -a
[root@k8s-node1 ~]# swapoff -a
[root@k8s-node2 ~]# swapoff -a
[root@k8s-master1 ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:           3934         151        3621          11         160        3556
Swap:             0           0           0

# Disable swap permanently
# Edit /etc/fstab and comment out the swap entry
[root@k8s-master1 ~]# vim /etc/fstab
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=1f767b34-97e1-4fd0-a40e-f078e9535bfa /boot                   xfs     defaults        0 0
/dev/mapper/centos-home /home                   xfs     defaults        0 0
#/dev/mapper/centos-swap swap                    swap    defaults        0 0

# On a cloned machine, also remove the UUID entry
[root@k8s-node1 ~]# vim /etc/fstab
/dev/mapper/centos-root /                       xfs     defaults        0 0
/dev/mapper/centos-home /home                   xfs     defaults        0 0
#/dev/mapper/centos-swap swap                    swap    defaults        0 0

# On a cloned machine, also remove the UUID entry
[root@k8s-node2 ~]# vim /etc/fstab
/dev/mapper/centos-root /                       xfs     defaults        0 0
/dev/mapper/centos-home /home                   xfs     defaults        0 0
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
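
Instead of editing /etc/fstab by hand, the swap entry can also be commented out with a one-liner; this is just an alternative to the manual edit shown above:

# Comment out every fstab line that mentions swap, then verify
sed -ri 's/.*swap.*/#&/' /etc/fstab
grep swap /etc/fstab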

6. Configure kernel parameters

[root@k8s-master1 ~]# modprobe br_netfilter
[root@k8s-master1 ~]# echo "modprobe br_netfilter" >> /etc/profile
[root@k8s-master1 ~]# cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
[root@k8s-master1 ~]# sysctl -p /etc/sysctl.d/k8s.conf 
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
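
Appending modprobe to /etc/profile works, but a cleaner optional alternative is a systemd modules-load drop-in, which loads br_netfilter automatically at boot:

cat > /etc/modules-load.d/br_netfilter.conf << EOF
br_netfilter
EOF
# systemd-modules-load reads this file on the next boot; load the module now as well
modprobe br_netfilter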

(1) The sysctl command

Configures kernel parameters at runtime.
-p loads settings from the specified file; if no file is given, settings are loaded from /etc/sysctl.conf.

(2) modprobe br_netfilter

The br_netfilter module is a dependency of Docker's networking; if it is not loaded, the Docker network plugin will not work.

(3) net.bridge.bridge-nf-call-iptables

After installing Docker on CentOS, running docker info shows warnings like these:

Warning: bridge-nf-call-iptables is disabled
Warning: bridge-nf-call-ip6tables is disabled

The warnings mean that net.bridge.bridge-nf-call-iptables and net.bridge.bridge-nf-call-ip6tables must be set to 1 for the Docker network plugin to work properly.

(4) net.ipv4.ip_forward

By default the kernel does not forward IP packets; net.ipv4.ip_forward must be set to 1 for the Docker network plugin to work properly.

If it is not set, kubeadm init reports the following error:

ERROR FileContent--proc-sys-net-ipv4-ip_forward: /proc/sys/net/ipv4/ip_forward: contents are not set to 1

For security reasons, Linux disables packet forwarding by default. Forwarding means that when a host has more than one network interface and receives a packet on one of them, it sends the packet out through another interface according to the packet's destination IP address, and that interface forwards it on based on the routing table. This is normally the job of a router.

To give a Linux system this routing capability, configure the kernel parameter net.ipv4.ip_forward. Its value reflects whether forwarding is currently enabled: 0 means IP forwarding is disabled, and 1 means it is enabled.
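
The current forwarding state can be checked at any time; both commands should print 1 after the sysctl settings above are applied:

cat /proc/sys/net/ipv4/ip_forward
sysctl net.ipv4.ip_forward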

7. Disable the firewall and SELinux

# Stop and disable firewalld
[root@k8s-master1 ~]# systemctl stop firewalld; systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

[root@k8s-node1 ~]# systemctl stop firewalld; systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

[root@k8s-node2 ~]# systemctl stop firewalld; systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

# Disable SELinux
[root@k8s-master1 ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

[root@k8s-node1 ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

[root@k8s-node2 ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

# The SELinux change in the config file only takes effect after a reboot
# 'Disabled' means SELinux is off
[root@k8s-master1 ~]# getenforce 
Disabled
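
To turn SELinux off right away without rebooting, switch it to permissive mode for the current boot; the config change above keeps it disabled after the next reboot:

setenforce 0
getenforce    # prints Permissive now, Disabled after a reboot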

8. Configure yum repositories

# Back up the existing repo files
[root@k8s-master1 ~]# mkdir /etc/yum.repos.d/repo.bak
[root@k8s-master1 ~]# mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/repo.bak/

[root@k8s-node1 ~]# mkdir /etc/yum.repos.d/repo.bak
[root@k8s-node1 ~]# mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/repo.bak/

[root@k8s-node2 ~]#  mkdir /etc/yum.repos.d/repo.bak
[root@k8s-node2 ~]# mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/repo.bak/

# Add the Aliyun CentOS base repo
[root@k8s-master1 ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  2523  100  2523    0     0  34062      0 --:--:-- --:--:-- --:--:-- 34561
[root@k8s-master1 ~]# scp /etc/yum.repos.d/CentOS-Base.repo k8s-node1:/etc/yum.repos.d/
CentOS-Base.repo                                                        100% 2523     5.7MB/s   00:00    
[root@k8s-master1 ~]# scp /etc/yum.repos.d/CentOS-Base.repo k8s-node2:/etc/yum.repos.d/
CentOS-Base.repo

# Configure the Aliyun Docker CE repo
[root@k8s-master1 ~]# yum install yum-utils -y
[root@k8s-master1 ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Loaded plugins: fastestmirror
adding repo from: http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
grabbing file http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo
[root@k8s-node1 ~]# yum install yum-utils -y
[root@k8s-node1 ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@k8s-node2 ~]# yum install yum-utils -y
[root@k8s-node2 ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Configure the EPEL repo
[root@k8s-master1 ~]# vi /etc/yum.repos.d/epel.repo 
[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
#baseurl=http://download.fedoraproject.org/pub/epel/7/$basearch
metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch&infra=$infra&content=$contentdir
failovermethod=priority
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7

[epel-debuginfo]
name=Extra Packages for Enterprise Linux 7 - $basearch - Debug
#baseurl=http://download.fedoraproject.org/pub/epel/7/$basearch/debug
metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-7&arch=$basearch&infra=$infra&content=$contentdir
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=1

[epel-source]
name=Extra Packages for Enterprise Linux 7 - $basearch - Source
#baseurl=http://download.fedoraproject.org/pub/epel/7/SRPMS
metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-source-7&arch=$basearch&infra=$infra&content=$contentdir
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=1

[root@k8s-master1 ~]# scp /etc/yum.repos.d/epel.repo k8s-node1:/etc/yum.repos.d/
epel.repo                                                               100% 1050     2.0MB/s   00:00    
[root@k8s-master1 ~]# scp /etc/yum.repos.d/epel.repo k8s-node2:/etc/yum.repos.d/
epel.repo                                                               100% 1050     2.1MB/s   00:00 


# Configure the Aliyun repo used to install the Kubernetes components
[root@k8s-master1 ~]# vi /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0

[root@k8s-master1 ~]# scp /etc/yum.repos.d/kubernetes.repo k8s-node1:/etc/yum.repos.d/
kubernetes.repo                                                         100%  129   177.7KB/s   00:00    
[root@k8s-master1 ~]# scp /etc/yum.repos.d/kubernetes.repo k8s-node2:/etc/yum.repos.d/
kubernetes.repo                                                         100%  129    22.0KB/s   00:00 
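
With all repo files in place, rebuilding the yum cache is a quick way to confirm every repository is reachable:

yum clean all && yum makecache
yum repolist    # base, docker-ce-stable, epel and kubernetes should all be listed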

9. Configure the time zone and time synchronization

# 1. Configure the master node as the NTP time server
[root@k8s-master1 ~]# yum install chrony -y
[root@k8s-master1 ~]# systemctl start chronyd && systemctl enable chronyd
[root@k8s-master1 ~]# vi /etc/chrony.conf
###### Comment out the default servers and add the Aliyun time server #########
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server ntp.aliyun.com iburst

###### Allow hosts on the same subnet to use this machine's NTP service ###########
# Allow NTP client access from local network.
allow 192.168.88.0/24


# 2. Configure time sync on the worker nodes
[root@k8s-node1 ~]# yum install chrony -y
[root@k8s-node2 ~]# yum install chrony -y

[root@k8s-node1 ~]# systemctl start chronyd && systemctl enable chronyd
[root@k8s-node2 ~]# systemctl start chronyd && systemctl enable chronyd

[root@k8s-node1 ~]# vi /etc/chrony.conf
[root@k8s-node2 ~]# vi /etc/chrony.conf 
###### Comment out the default servers and sync from the local master #########
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server k8s-master1 iburst

# 3. Restart chronyd so the configuration takes effect
[root@k8s-master1 ~]# systemctl restart chronyd
[root@k8s-node1 ~]# systemctl restart chronyd
[root@k8s-node2 ~]# systemctl restart chronyd

# 4. Check time synchronization status
[root@k8s-node1 ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* k8s-master1                   3   6    17    34  +4386ns[  +40us] +/-   26ms

[root@k8s-master1 ~]# date
Tue Dec 24 22:32:16 CST 2024
[root@k8s-node1 ~]# date
Tue Dec 24 22:32:03 CST 2024
[root@k8s-node2 ~]# date
Tue Dec 24 22:32:39 CST 2024
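
The section title also mentions the time zone; if the machines are not already on CST, it can be set explicitly (Asia/Shanghai is assumed here):

timedatectl set-timezone Asia/Shanghai
timedatectl    # check the Time zone line and the NTP synchronized status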

10. Enable IPVS

IPVS (IP Virtual Server) implements transport-layer load balancing, commonly called layer-4 LAN switching, as part of the Linux kernel. IPVS runs on a host and acts as a load balancer in front of a cluster of real servers; it forwards TCP- and UDP-based service requests to the real servers and makes their services appear as a virtual service on a single IP address.

[root@k8s-master1 ~]# vi /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in ${ipvs_modules}; do
  /sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1
  if [ $? -eq 0 ]; then
    /sbin/modprobe ${kernel_module}
  fi
done

[root@k8s-master1 ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
ip_vs_ftp              13079  0 
nf_nat                 26787  1 ip_vs_ftp
ip_vs_sed              12519  0 
ip_vs_nq               12516  0 
ip_vs_sh               12688  0 
ip_vs_dh               12688  0 
ip_vs_lblcr            12922  0 
ip_vs_lblc             12819  0 
ip_vs_wrr              12697  0 
ip_vs_rr               12600  0 
ip_vs_wlc              12519  0 
ip_vs_lc               12516  0 
ip_vs                 141432  22 ip_vs_dh,ip_vs_lc,ip_vs_nq,ip_vs_rr,ip_vs_sh,ip_vs_ftp,ip_vs_sed,ip_vs_wlc,ip_vs_wrr,ip_vs_lblcr,ip_vs_lblc
nf_conntrack          133053  2 ip_vs,nf_nat
libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack

[root@k8s-master1 ~]# scp /etc/sysconfig/modules/ipvs.modules k8s-node1:/etc/sysconfig/modules/
ipvs.modules                                                            100%  320   148.7KB/s   00:00    
[root@k8s-master1 ~]# scp /etc/sysconfig/modules/ipvs.modules k8s-node2:/etc/sysconfig/modules/
ipvs.modules                                                            100%  320   433.1KB/s   00:00

[root@k8s-node1 ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
ip_vs_ftp              13079  0 
nf_nat                 26787  1 ip_vs_ftp
ip_vs_sed              12519  0 
ip_vs_nq               12516  0 
ip_vs_sh               12688  0 
ip_vs_dh               12688  0 
ip_vs_lblcr            12922  0 
ip_vs_lblc             12819  0 
ip_vs_wrr              12697  0 
ip_vs_rr               12600  0 
ip_vs_wlc              12519  0 
ip_vs_lc               12516  0 
ip_vs                 141432  22 ip_vs_dh,ip_vs_lc,ip_vs_nq,ip_vs_rr,ip_vs_sh,ip_vs_ftp,ip_vs_sed,ip_vs_wlc,ip_vs_wrr,ip_vs_lblcr,ip_vs_lblc
nf_conntrack          133053  2 ip_vs,nf_nat
libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack
[root@k8s-node2 ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
ip_vs_ftp              13079  0 
nf_nat                 26787  1 ip_vs_ftp
ip_vs_sed              12519  0 
ip_vs_nq               12516  0 
ip_vs_sh               12688  0 
ip_vs_dh               12688  0 
ip_vs_lblcr            12922  0 
ip_vs_lblc             12819  0 
ip_vs_wrr              12697  0 
ip_vs_rr               12600  0 
ip_vs_wlc              12519  0 
ip_vs_lc               12516  0 
ip_vs                 141432  22 ip_vs_dh,ip_vs_lc,ip_vs_nq,ip_vs_rr,ip_vs_sh,ip_vs_ftp,ip_vs_sed,ip_vs_wlc,ip_vs_wrr,ip_vs_lblcr,ip_vs_lblc
nf_conntrack          133053  2 ip_vs,nf_nat
libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack

(1) IPVS vs. iptables

kube-proxy supports two modes, iptables and ipvs. The ipvs mode was introduced in Kubernetes v1.8, went beta in v1.9, and became generally available in v1.11. The iptables mode was added back in v1.1 and has been kube-proxy's default since v1.2. Both modes are based on netfilter, but ipvs uses hash tables, so once the number of Services grows large, the speed advantage of hash lookups shows and Service performance improves.

So how do ipvs mode and iptables mode differ?

  1. ipvs offers better scalability and performance for large clusters.
  2. ipvs supports more sophisticated load-balancing algorithms than iptables (least load, least connections, weighted, and so on).
  3. ipvs supports server health checks, connection retries, and similar features.
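
Loading the modules only makes ipvs available; kube-proxy still has to be told to use it. A minimal sketch of switching the proxy mode after the cluster has been initialized later in this article (kube-proxy falls back to iptables if the ipvs modules are missing):

# Set  mode: "ipvs"  in the kube-proxy ConfigMap, then restart the kube-proxy pods
kubectl edit configmap kube-proxy -n kube-system
kubectl rollout restart daemonset kube-proxy -n kube-system
# Verify that ipvs rules are being programmed
ipvsadm -Ln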

11. Install base packages

[root@k8s-master1 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel wget vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate telnet ipvsadm
[root@k8s-node1 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel wget vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate telnet ipvsadm
[root@k8s-node2 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel wget vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate telnet ipvsadm

If you are not comfortable with firewalld, you can install iptables instead.

# Install
yum install -y iptables-services
# Stop and disable the service
service iptables stop && systemctl disable iptables
# Flush the default rules
iptables -F

II. Installing the Docker Service

1. Install docker-ce

[root@k8s-master1 ~]# yum install -y docker-ce-20.10.6 docker-ce-cli-20.10.6 containerd.io
[root@k8s-master1 ~]# systemctl start docker && systemctl enable docker.service
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

[root@k8s-node1 ~]# yum install -y docker-ce-20.10.6 docker-ce-cli-20.10.6 containerd.io
[root@k8s-node1 ~]# systemctl start docker && systemctl enable docker.service
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

[root@k8s-node2 ~]# yum install -y docker-ce-20.10.6 docker-ce-cli-20.10.6 containerd.io
[root@k8s-node2 ~]# systemctl start docker && systemctl enable docker.service
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

2. Configure registry mirrors

[root@k8s-master1 ~]# vi /etc/docker/daemon.json
{
 "registry-mirrors":["https://rsbud4vc.mirror.aliyuncs.com","https://registry.dockercn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hubmirror.c.163.com","http://qtid6917.mirror.aliyuncs.com", "https://rncxm540.mirror.aliyuncs.com"],
 "exec-opts": ["native.cgroupdriver=systemd"]
}
# Change Docker's cgroup driver to systemd (the default is cgroupfs). kubelet uses systemd by default, and the two must match.
[root@k8s-master1 ~]# systemctl daemon-reload && systemctl restart docker && systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2024-12-24 23:15:48 CST; 5ms ago

[root@k8s-master1 ~]# scp /etc/docker/daemon.json k8s-node1:/etc/docker/
daemon.json                                                             100%  320    55.8KB/s   00:00    
[root@k8s-master1 ~]# scp /etc/docker/daemon.json k8s-node2:/etc/docker/
daemon.json                                                             100%  320   646.2KB/s   00:00 
[root@k8s-node1 ~]# systemctl daemon-reload && systemctl restart docker && systemctl status docker
[root@k8s-node2 ~]# systemctl daemon-reload && systemctl restart docker && systemctl status docker
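
To confirm Docker is really using the systemd cgroup driver after the restart:

docker info | grep -i "cgroup driver"    # should print: Cgroup Driver: systemd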

III. Installing Kubernetes

1. Install the packages needed to initialize k8s

kubeadm: the tool that initializes the k8s cluster.

kubelet: installed on every node in the cluster; it is the agent that starts Pods.

kubectl: the CLI used to deploy and manage applications, view resources, and create, delete, and update components.

[root@k8s-master1 ~]# yum install -y kubelet-1.20.6 kubeadm-1.20.6 kubectl-1.20.6
[root@k8s-node1 ~]# yum install -y kubelet-1.20.6 kubeadm-1.20.6 kubectl-1.20.6
[root@k8s-node2 ~]# yum install -y kubelet-1.20.6 kubeadm-1.20.6 kubectl-1.20.6

[root@k8s-master1 ~]# systemctl enable kubelet && systemctl status kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)

[root@k8s-node1 ~]# systemctl enable kubelet && systemctl status kubelet
[root@k8s-node2 ~]# systemctl enable kubelet && systemctl status kubelet

2. Initialize the k8s cluster

Upload the offline image bundle needed for initialization to the master1, node1, and node2 machines and load it manually:

# Upload the offline image bundle
[root@k8s-master1 ~]# yum install -y lrzsz
[root@k8s-master1 ~]# rz

[root@k8s-master1 ~]# du -sh *
4.0K	anaconda-ks.cfg
1.1G	k8simage-1-20-6.tar.gz

[root@k8s-master1 ~]# scp k8simage-1-20-6.tar.gz k8s-node1:/root
k8simage-1-20-6.tar.gz                                                  100% 1033MB  77.4MB/s   00:13    
[root@k8s-master1 ~]# scp k8simage-1-20-6.tar.gz k8s-node2:/root
k8simage-1-20-6.tar.gz                                                  100% 1033MB  79.3MB/s   00:13 

# Load the image bundle
[root@k8s-master1 ~]# docker load -i k8simage-1-20-6.tar.gz
[root@k8s-node1 ~]# docker load -i k8simage-1-20-6.tar.gz
[root@k8s-node2 ~]# docker load -i k8simage-1-20-6.tar.gz
# List the loaded images
[root@k8s-master1 ~]# docker images
REPOSITORY                                                        TAG        IMAGE ID       CREATED       SIZE
registry.aliyuncs.com/google_containers/kube-proxy                v1.20.6    9a1ebfd8124d   3 years ago   118MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.20.6    560dd11d4550   3 years ago   116MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.20.6    b93ab2ec4475   3 years ago   47.3MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.20.6    b05d611c1af9   3 years ago   122MB
calico/pod2daemon-flexvol                                         v3.18.0    2a22066e9588   3 years ago   21.7MB
calico/node                                                       v3.18.0    5a7c4970fbc2   3 years ago   172MB
calico/cni                                                        v3.18.0    727de170e4ce   3 years ago   131MB
calico/kube-controllers                                           v3.18.0    9a154323fbf7   3 years ago   53.4MB
registry.aliyuncs.com/google_containers/etcd                      3.4.13-0   0369cf4303ff   4 years ago   253MB
registry.aliyuncs.com/google_containers/coredns                   1.7.0      bfe3a36ebd25   4 years ago   45.2MB
registry.aliyuncs.com/google_containers/pause                     3.2        80d28bedfe5d   4 years ago   683kB

# Initialize the k8s cluster
# Note: --image-repository registry.aliyuncs.com/google_containers points kubeadm at that registry. By default kubeadm pulls images from k8s.gcr.io, which is unreachable here, so the images are pulled from registry.aliyuncs.com/google_containers instead.
[root@k8s-master1 ~]# kubeadm init --kubernetes-version v1.20.6 --apiserver-advertise-address=192.168.88.180 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=SystemVerification

# Output like the following means the initialization succeeded
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.88.180:6443 --token ka1nw2.belxjk1hqbycje2r \
    --discovery-token-ca-cert-hash sha256:47b8eb2f92736a73f4358e6e7f286c88f80db91fcae29e81c6055e1fc0b37c71


# The kubeadm join command above adds worker nodes to the cluster. Save it; the token and hash are different for every cluster.

Set up kubectl's config file, which in effect authorizes kubectl: with this credential the kubectl command can manage the k8s cluster.

[root@k8s-master1 ~]# mkdir -p $HOME/.kube
[root@k8s-master1 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master1 ~]# chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master1 ~]# kubectl get node
NAME          STATUS     ROLES                  AGE     VERSION
k8s-master1   NotReady   control-plane,master   2m51s   v1.20.6 

# The cluster is still NotReady because no network plugin has been installed yet.
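
Optional but convenient on the master: enable kubectl command completion (assuming bash is the login shell):

yum install -y bash-completion
echo 'source <(kubectl completion bash)' >> ~/.bashrc
source ~/.bashrc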

3. Scale out the cluster

(1) Add the first worker node

# Print the command for joining nodes
[root@k8s-master1 ~]# kubeadm token create --print-join-command
kubeadm join 192.168.88.180:6443 --token dgzhsb.ngeaupe77fd1lr2y     --discovery-token-ca-cert-hash sha256:47b8eb2f92736a73f4358e6e7f286c88f80db91fcae29e81c6055e1fc0b37c71

# Join node1 to the cluster:
[root@k8s-node1 ~]# kubeadm join 192.168.88.180:6443 --token dgzhsb.ngeaupe77fd1lr2y     --discovery-token-ca-cert-hash sha256:47b8eb2f92736a73f4358e6e7f286c88f80db91fcae29e81c6055e1fc0b37c71  --ignore-preflight-errors=SystemVerification

# Check the cluster state
[root@k8s-master1 ~]# kubectl get node
NAME          STATUS     ROLES                  AGE     VERSION
k8s-master1   NotReady   control-plane,master   3m48s   v1.20.6
k8s-node1     NotReady   <none>                 14s     v1.20.6

(2) Add the second worker node

# Join node2 to the cluster:
[root@k8s-node2 ~]# kubeadm join 192.168.88.180:6443 --token dgzhsb.ngeaupe77fd1lr2y     --discovery-token-ca-cert-hash sha256:47b8eb2f92736a73f4358e6e7f286c88f80db91fcae29e81c6055e1fc0b37c71  --ignore-preflight-errors=SystemVerification

# Check the cluster state
[root@k8s-master1 ~]# kubectl get node
NAME          STATUS     ROLES                  AGE    VERSION
k8s-master1   NotReady   control-plane,master   5m1s   v1.20.6
k8s-node1     NotReady   <none>                 87s    v1.20.6
k8s-node2     NotReady   <none>                 12s    v1.20.6

The output above shows that node1 and node2 have joined the cluster as worker nodes.

(3) Set the worker nodes' ROLES to worker

[root@k8s-master1 ~]# kubectl label node k8s-node1 node-role.kubernetes.io/worker=worker
node/k8s-node1 labeled
[root@k8s-master1 ~]# kubectl label node k8s-node2 node-role.kubernetes.io/worker=worker
node/k8s-node2 labeled
[root@k8s-master1 ~]# kubectl get node
NAME          STATUS     ROLES                  AGE     VERSION
k8s-master1   NotReady   control-plane,master   8m20s   v1.20.6
k8s-node1     NotReady   worker                 4m46s   v1.20.6
k8s-node2     NotReady   worker                 3m31s   v1.20.6

Note: all nodes are still NotReady, which means the network plugin has not been installed yet.

4. Install the network plugin (Calico)

Upload calico.yaml to k8s-master1 and install the Calico network plugin from that YAML file.

The manifest can also be downloaded from: https://docs.projectcalico.org/manifests/calico.yaml

This calico.yaml works with Kubernetes versions 1.20 through 1.26.

[root@k8s-master1 ~]# kubectl apply -f calico.yaml

# A STATUS of Running means the installation succeeded.
[root@k8s-master1 ~]# kubectl get pod -n kube-system
NAME                                       READY   STATUS              RESTARTS   AGE
calico-kube-controllers-6949477b58-t5w62   1/1     Running             0          27s
calico-node-55kqf                          1/1     Running             0          27s
calico-node-bfrkc                          1/1     Running             0          27s
calico-node-dfnhp                          1/1     Running             0          27s

# A STATUS of Ready means the nodes are ready.
[root@k8s-master1 ~]# kubectl get nodes
NAME          STATUS   ROLES                  AGE     VERSION
k8s-master1   Ready    control-plane,master   2d22h   v1.20.6
k8s-node1     Ready    worker                 2d21h   v1.20.6
k8s-node2     Ready    worker                 2d21h   v1.20.6

5. Verify that cluster networking and DNS work

Upload busybox-1-28.tar.gz to the k8s-node1 and k8s-node2 nodes and load it manually.

First, test the network:

# Upload busybox-1-28.tar.gz and copy it to the k8s-node1 and k8s-node2 nodes
[root@k8s-master1 ~]# rz
[root@k8s-master1 ~]# scp busybox-1-28.tar.gz k8s-node1:/root
busybox-1-28.tar.gz                                                          100% 1338KB  88.6MB/s   00:00    
[root@k8s-master1 ~]# scp busybox-1-28.tar.gz k8s-node2:/root
busybox-1-28.tar.gz                                                          100% 1338KB  19.6MB/s   00:00 

# Load busybox-1-28.tar.gz into Docker
[root@k8s-node1 ~]# docker load -i busybox-1-28.tar.gz 
432b65032b94: Loading layer [==================================================>]   1.36MB/1.36MB
Loaded image: busybox:1.28
[root@k8s-node2 ~]# docker load -i busybox-1-28.tar.gz 
432b65032b94: Loading layer [==================================================>]   1.36MB/1.36MB
Loaded image: busybox:1.28

# Create a busybox pod
# '-- sh' starts an interactive shell so commands can be run inside the container
[root@k8s-master1 ~]# kubectl run busybox --image busybox:1.28 --restart=Never --rm -it -- sh
If you don't see a command prompt, try pressing enter.
/ # ping www.baidu.com
PING www.baidu.com (183.2.172.42): 56 data bytes
64 bytes from 183.2.172.42: seq=0 ttl=127 time=27.719 ms
64 bytes from 183.2.172.42: seq=1 ttl=127 time=22.266 ms
64 bytes from 183.2.172.42: seq=2 ttl=127 time=23.727 ms

# The pings above succeed, which shows the Calico network plugin is working.

Then test DNS:

/ # nslookup kubernetes.default.svc.cluster.local
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default.svc.cluster.local
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

# ClusterIP is the default Kubernetes Service type; it assigns the service a virtual IP that is reachable only inside the cluster network. Every service gets a unique ClusterIP through which its application can be reached.
# 10.96.0.1 is the ClusterIP of the cluster's apiserver
[root@k8s-master1 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   2d22h

# 10.96.0.10 is the cluster's internal DNS service
[root@k8s-master1 ~]# kubectl get svc -n kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   2d22h

10.96.0.10 is CoreDNS's ClusterIP, which confirms that DNS is configured.

Internal Service names are resolved by CoreDNS.

kubernetes.default.svc.cluster.local is a special DNS name inside a Kubernetes cluster that resolves to the API Server's service address. It is part of Kubernetes service discovery and lets applications running in Pods reach the Kubernetes API Server through this predefined DNS name.

  • kubernetes: the default service name, corresponding to the Kubernetes API Server.
  • default: the namespace the service lives in; in this case, the default namespace.
  • svc: short for "service", indicating a Kubernetes Service resource.
  • cluster.local: the cluster domain suffix, configured at the cluster level to mark in-cluster DNS queries. Different clusters may use different suffixes, but cluster.local is the common default.
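
The same <service>.<namespace>.svc.cluster.local pattern works for any Service. For example, from inside the busybox pod the DNS service itself can be resolved by its full name; it should return the kube-dns ClusterIP (10.96.0.10 in this cluster):

nslookup kube-dns.kube-system.svc.cluster.local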

Note: use busybox 1.28 specifically, not the latest tag; with the latest image, nslookup fails to resolve DNS names and IPs.

IV. The Kubernetes Dashboard Web UI

1. Install the dashboard

Upload the images required by kubernetes-dashboard to the worker nodes k8s-node1 and k8s-node2 and load them manually.

# Upload dashboard_2_0_0.tar.gz and metrics-scrapter-1-0-1.tar.gz and copy them to k8s-node1 and k8s-node2
[root@k8s-master1 ~]# scp dashboard_2_0_0.tar.gz k8s-node1:/root
dashboard_2_0_0.tar.gz                                                       100%   87MB  65.0MB/s   00:01    
[root@k8s-master1 ~]# scp dashboard_2_0_0.tar.gz k8s-node2:/root
dashboard_2_0_0.tar.gz                                                       100%   87MB  65.6MB/s   00:01    
[root@k8s-master1 ~]# scp metrics-scrapter-1-0-1.tar.gz k8s-node1:/root
metrics-scrapter-1-0-1.tar.gz                                                100%   38MB  80.3MB/s   00:00    
[root@k8s-master1 ~]# scp metrics-scrapter-1-0-1.tar.gz k8s-node2:/root
metrics-scrapter-1-0-1.tar.gz                                                100%   38MB 109.0MB/s   00:00

# Load dashboard_2_0_0.tar.gz and metrics-scrapter-1-0-1.tar.gz into Docker
[root@k8s-node1 ~]# docker load -i dashboard_2_0_0.tar.gz 
954115f32d73: Loading layer [==================================================>]  91.22MB/91.22MB
Loaded image: kubernetesui/dashboard:v2.0.0-beta8
[root@k8s-node1 ~]# docker load -i metrics-scrapter-1-0-1.tar.gz 
89ac18ee460b: Loading layer [==================================================>]  238.6kB/238.6kB
878c5d3194b0: Loading layer [==================================================>]  39.87MB/39.87MB
1dc71700363a: Loading layer [==================================================>]  2.048kB/2.048kB
Loaded image: kubernetesui/metrics-scraper:v1.0.1

[root@k8s-node2 ~]# docker load -i dashboard_2_0_0.tar.gz 
954115f32d73: Loading layer [==================================================>]  91.22MB/91.22MB
Loaded image: kubernetesui/dashboard:v2.0.0-beta8
[root@k8s-node2 ~]# docker load -i metrics-scrapter-1-0-1.tar.gz 
89ac18ee460b: Loading layer [==================================================>]  238.6kB/238.6kB
878c5d3194b0: Loading layer [==================================================>]  39.87MB/39.87MB
1dc71700363a: Loading layer [==================================================>]  2.048kB/2.048kB
Loaded image: kubernetesui/metrics-scraper:v1.0.1

# Install the dashboard
[root@k8s-master1 ~]# kubectl apply -f kubernetes-dashboard.yaml 
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

# Check dashboard status
[root@k8s-master1 ~]# kubectl get pods -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-7445d59dfd-qsm56   1/1     Running   0          44s
kubernetes-dashboard-54f5b6dc4b-8ssf9        1/1     Running   0          44s
# As shown above, the dashboard was installed successfully.

# Check the dashboard's front-end service
[root@k8s-master1 ~]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
dashboard-metrics-scraper   ClusterIP   10.107.167.42    <none>        8000/TCP   2m
kubernetes-dashboard        ClusterIP   10.103.139.196   <none>        443/TCP    2m

# Change the service type to NodePort
[root@k8s-master1 ~]# kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{ "annotations":{},"labels":{"k8s-app":"kubernetes-dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":443,"targetPort":8443}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
  creationTimestamp: "2024-12-27T15:01:09Z"
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
  resourceVersion: "7130"
  uid: 399eb8a8-0b53-41ca-8613-6c80c0c48526
spec:
  clusterIP: 10.103.139.196
  clusterIPs:
  - 10.103.139.196
  ports:
  - port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  #type: ClusterIP
  type: NodePort       # change made here
status:
  loadBalancer: {}

# Check the service again; the type has been changed to NodePort
# PORT(S) shows node port 30563; the dashboard can be reached at https://<any-node-ip>:30563
[root@k8s-master1 ~]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.107.167.42    <none>        8000/TCP        5m3s
kubernetes-dashboard        NodePort    10.103.139.196   <none>        443:30563/TCP   5m3s
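
As an alternative to the interactive kubectl edit above, the service type can also be changed with a single non-interactive patch command; the result is the same:

kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'
kubectl -n kubernetes-dashboard get svc kubernetes-dashboard    # TYPE should now be NodePort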

As shown above, the service type is NodePort, so the Kubernetes dashboard can be reached on any node's IP at port 30563. Open the following address in a browser (Firefox works without extra steps): https://192.168.88.180:30563

Chrome will warn that the certificate is not trusted; type thisisunsafe on the warning page to continue.

[Figure: Kubernetes dashboard login page]

2. Access the dashboard with a token

Create an administrator binding with permission to view every namespace and manage all resource objects:

# Bind the dashboard service account to the cluster-admin role
[root@k8s-master1 ~]# kubectl create clusterrolebinding dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:kubernetes-dashboard
clusterrolebinding.rbac.authorization.k8s.io/dashboard-cluster-admin created

# List the secrets in the kubernetes-dashboard namespace
[root@k8s-master1 ~]# kubectl get secret -n kubernetes-dashboard
NAME                               TYPE                                  DATA   AGE
default-token-wcpsb                kubernetes.io/service-account-token   3      20m
kubernetes-dashboard-certs         Opaque                                0      20m
kubernetes-dashboard-csrf          Opaque                                1      20m
kubernetes-dashboard-key-holder    Opaque                                2      20m
kubernetes-dashboard-token-r572x   kubernetes.io/service-account-token   3      20m

# Pick the secret that carries the service-account token
# Get the token from kubernetes-dashboard-token-r572x
[root@k8s-master1 ~]# kubectl describe secret kubernetes-dashboard-token-r572x -n kubernetes-dashboard
Name:         kubernetes-dashboard-token-r572x
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubernetes-dashboard
              kubernetes.io/service-account.uid: 384ab9bb-fc33-4b40-adf0-6d14f94b36c7

Type:  kubernetes.io/service-account-token

Data
====
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6Im9sRDkyNUs5WXo1RGtwMG5NaDBFRnZkeWdzb3ZCZVhQVlV2cTVubk9VbzQifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1yNTcyeCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjM4NGFiOWJiLWZjMzMtNGI0MC1hZGYwLTZkMTRmOTRiMzZjNyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.GtnKsNs7l4BJfUAuJPDtvABkbNEFUwZNCLXQuzhA7HBj_hFMZ7kRXSf0qBAkNNGN5lLouv1KsRq7WfC2z5Hk39RVqbIs1WHzvYvwAr-xX1NMv51Rws8kbU3JKpp8VMsQXcdRoULsTnzpB6yIEqHR0OE76dV62VHFnAxXIbVhtuV_IVWDi-xemm4kL-OWVgjH46li1Xq41d2Rz_74IBIMYe5Ac-mYBCMLLheCoPIJIRo9lK87DEETQ4GZBwP748bMgk5tWHzolcZHeQwF7G_TIMoQkLt4kd7dPW0CVaYw0hvY9E1uRaCiUyq9i7hTXqBEi3kb18-i12xWfzbRJK6J4A
ca.crt:     1066 bytes
namespace:  20 bytes
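
Since the secret name carries a random suffix, the token can also be pulled out with a one-liner; this assumes the service account is still named kubernetes-dashboard:

# Look up the token secret of the dashboard service account and decode it
kubectl -n kubernetes-dashboard get secret \
  $(kubectl -n kubernetes-dashboard get sa kubernetes-dashboard -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 -d; echo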

Copy the value after token: into the token field on the dashboard login page, click Sign in, and the cluster management page opens:

[Figure: dashboard overview page]
