Kubernetes binary installation
# Server sizing and selection
#### 1. Learning environment (for studying Kubernetes basics)
```
a) 2 cores, 2 GB RAM and a 40 GB disk are enough; use a single Master with multiple Nodes, or multiple Masters with no dedicated Nodes (Master nodes can also act as Node nodes)
```
#### 2. Hands-on environment (for practical Kubernetes work, from the advanced part onward)
```
a) 2 cores, 4 GB+ RAM, 40 GB + 40 GB disks; use a single Master with multiple Nodes,
or multiple Masters with multiple Nodes (Master nodes can also act as Node nodes; a total of 5 usable Node nodes is enough)
```
#### 3. Enterprise test environment
```
a) Master nodes (ideally three for high availability; one Master can be cordoned from scheduling): 8 cores, 16 GB+; disks split into a system disk (path /, 100 GB+) and a Docker data disk (/var/lib/docker, 200 GB+)
b) Etcd data disk (/var/lib/etcd; 50 GB+ for 50 nodes, 150 GB+ for 150 nodes; etcd may share hosts with the Master nodes, three nodes for high availability)
c) Node nodes: no special requirements
d) Note: in a test environment the data disks do not have to be separated, but separate disks are preferred when resources allow
```
#### 4. Enterprise production environment
```
a) Master nodes: three nodes for high availability (mandatory)
i. 0-100 nodes: 8 cores, 16 GB+
ii. 100-250 nodes: 8 cores, 32 GB+
iii. 250-500 nodes: 16 cores, 32 GB+
b) etcd nodes: three nodes for high availability (mandatory); the storage partition should be a high-performance SSD if at all possible, otherwise at least a fast dedicated disk
i. 0-50 nodes: 2 cores, 8 GB+, 50 GB SSD
ii. 50-250 nodes: 4 cores, 16 GB+, 150 GB SSD
iii. 250-1000 nodes: 8 cores, 32 GB+, 250 GB SSD
c) Node nodes: no special requirements, except that the Docker data partition and the system partition must be on separate disks; system partition 100 GB+, Docker data partition 200 GB+, preferably SSD and always separate from the system disk
d) Other: for a small cluster, etcd and the masters can share hosts, i.e. each master node runs both the k8s components and etcd,
but etcd's data directory must be on its own disk and use SSD. Sharing hosts means the machines need correspondingly more resources.
For production I recommend sizing the master nodes generously up front; do not economize here. Use 16-core machines with 32 or 64 GB directly,
so that later cluster growth never requires resizing the masters, which reduces risk.
A 100 GB system partition is enough for master and etcd nodes.
```
Package preparation
git clone https://gitee.com/dukuan/k8s-ha-install.git
1. Cluster network planning
Subnet calculator: http://tools.jb51.net/aideddesign/ip_net_calc/
The three networks must not overlap.
1. Host/node network
192.168.2.0/24
2. Service network
10.96.0.0/16
3. Pod network
10.244.0.0/12
2. Cluster version selection
Docker is no longer supported as a runtime from v1.24 onward (dockershim removed).
The VIP (virtual IP) must not collide with any IP already in use on your LAN: ping it first, and only use it if there is no reply. The VIP must be in the same LAN as your hosts. Do not copy the VIP used in this document verbatim!!!
On public cloud, the VIP is the cloud load balancer address, e.g. an Alibaba Cloud SLB address or a Tencent Cloud ELB address; note that these are internal (private-network) load balancers.
Configure the hosts file
192.168.2.8 k8s-master01
192.168.2.9 k8s-master02
192.168.2.10 k8s-master03
192.168.2.236 k8s-master-lb # if this is not an HA cluster, this IP is Master01's IP
192.168.2.100 k8s-node01
192.168.2.101 k8s-node02
3. Environment optimization
3.1 Configure repositories
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
3.2 Disable firewalld and related services
systemctl disable --now firewalld
systemctl disable --now dnsmasq
systemctl disable --now NetworkManager
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
3.3 Install essential tools
yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git -y
3.4 Disable swap
swapoff -a && sysctl -w vm.swappiness=0
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
3.5 Time synchronization
Cloud servers usually come with time synchronization already configured and can skip this.
Install ntpdate
rpm -ivh http://mirrors.wlnmp.com/centos/wlnmp-release-centos.noarch.rpm
yum install ntpdate -y
Synchronize time on all nodes. Time sync configuration:
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' >/etc/timezone
ntpdate time2.aliyun.com
# Add to crontab
*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com
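One way to install this cron entry non-interactively is sketched below; it simply appends to root's crontab, so verify with crontab -l afterwards:
(crontab -l 2>/dev/null; echo '*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com') | crontab -
crontab -l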
3.6 Configure limits on all nodes
File-handle limits
ulimit -SHn 65535
vim /etc/security/limits.conf
# Append the following at the end
* soft nofile 65536
* hard nofile 131072
* soft nproc 65535
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
3.7 Master01 needs passwordless SSH access to the other nodes, since cluster management is also done from Master01. On Alibaba Cloud or AWS a separate kubectl host is required. Key configuration:
[root@k8s-master01 ~]# ssh-keygen -t rsa
Configure passwordless login from Master01 to the other nodes
[root@k8s-master01 ~]# for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02;do ssh-copy-id -i .ssh/id_rsa.pub $i;done
Install basic tools on all nodes
yum install wget jq psmisc vim net-tools yum-utils device-mapper-persistent-data lvm2 git -y
Download the installation files on Master01
[root@k8s-master01 ~]# cd /root/ ; git clone https://gitee.com/dukuan/k8s-ha-install.git
Cloning into 'k8s-ha-install'...
remote: Enumerating objects: 12, done.
remote: Counting objects: 100% (12/12), done.
remote: Compressing objects: 100% (11/11), done.
remote: Total 461 (delta 2), reused 5 (delta 1), pack-reused 449
Receiving objects: 100% (461/461), 19.52 MiB | 4.04 MiB/s, done.
Resolving deltas: 100% (163/163), done.
3.8 Kernel upgrade
Upgrade the system on all nodes and reboot (mandatory for production):
yum update -y --exclude=kernel*
CentOS 7 needs a kernel of 4.18+; this guide upgrades to 4.19.
Download the kernel packages on master01 (course purchasers can also get them from Baidu Netdisk):
cd /root
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm
==================================================================================
Copy them from master01 to the other nodes:
for i in k8s-master02 k8s-master03 k8s-node01 k8s-node02;do scp kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm $i:/root/ ; done
======================================================================================
Install the kernel on all nodes
cd /root && yum localinstall -y kernel-ml*
======================================================================================
Change the default boot kernel on all nodes
grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg
grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
======================================================================================
Check that the default kernel is 4.19
[root@k8s-master02 ~]# grubby --default-kernel
/boot/vmlinuz-4.19.12-1.el7.elrepo.x86_64
======================================================================================
Reboot all nodes, then verify the running kernel is 4.19
[root@k8s-master02 ~]# uname -a
Linux k8s-master02 4.19.12-1.el7.elrepo.x86_64 #1 SMP Fri Dec 21 11:06:36 EST 2018 x86_64 x86_64 x86_64 GNU/Linux
3.9 Install ipvsadm
yum install ipvsadm ipset sysstat conntrack libseccomp -y
======================================================================================
Configure the ipvs modules on all nodes. In kernel 4.19+ nf_conntrack_ipv4 was renamed to nf_conntrack; on kernels below 4.19 use nf_conntrack_ipv4:
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
======================================================================================
vim /etc/modules-load.d/ipvs.conf
# Add the following
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
Then run systemctl enable --now systemd-modules-load.service
======================================================================================
Check that the modules are loaded:
[root@k8s-master01 ~]# lsmod | grep -e ip_vs -e nf_conntrack
nf_conntrack_ipv4 16384 23
nf_defrag_ipv4 16384 1 nf_conntrack_ipv4
nf_conntrack 135168 10 xt_conntrack,nf_conntrack_ipv6,nf_conntrack_ipv4,nf_nat,nf_nat_ipv6,ipt_MASQUERADE,nf_nat_ipv4,xt_nat,nf_conntrack_netlink,ip_vs
======================================================================================
Enable the kernel parameters required by a Kubernetes cluster; configure on all nodes:
# net.ipv4.ip_forward = 1 — IP forwarding for routing
# net.bridge.bridge-nf-call-iptables = 1 — required for Docker/bridged traffic
# net.bridge.bridge-nf-call-ip6tables = 1
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
net.ipv4.conf.all.route_localnet = 1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
sysctl --system
======================================================================================
After configuring the kernel parameters on all nodes, reboot and confirm that the modules are still loaded after the reboot
reboot
lsmod | grep --color=auto -e ip_vs -e nf_conntrack
4. Basic component installation
This section installs the components used by the cluster, such as Docker CE and the Kubernetes components.
4.1 Runtime installation
Kubernetes 1.23 and earlier can use Docker; upstream removes dockershim in 1.24.
4.1.1 Containerd as the runtime
ctr is containerd's own command-line client.
#Install docker-ce 20.10 (which pulls in containerd) on all nodes
# yum install docker-ce-20.10.* docker-ce-cli-20.10.* containerd -y
First configure the kernel modules containerd needs (all nodes):
# cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
=====================================================
Load the modules on all nodes
modprobe -- overlay
modprobe -- br_netfilter
=====================================================
Configure the kernel parameters containerd needs on all nodes
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
======================================================
#Apply the kernel parameters on all nodes
sysctl --system
=======================================================
#Generate containerd's configuration file on all nodes
mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml
=======================================================
#On all nodes change containerd's cgroup driver to systemd (systemd is the officially recommended cgroup driver)
vim /etc/containerd/config.toml (around line 125)
Find containerd.runtimes.runc.options and add SystemdCgroup = true (if the key already exists, just change its value; a duplicate key causes an error)
=======================================================
#On all nodes change sandbox_image (around line 61) to a pause image that matches your setup, e.g. registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6
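As a non-interactive alternative to editing config.toml by hand, both changes can be made with sed; this is only a sketch that assumes the default layout produced by containerd config default, so verify with grep before restarting containerd:
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#g' /etc/containerd/config.toml
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"#' /etc/containerd/config.toml
grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml
# then start containerd on all nodes
systemctl daemon-reload && systemctl enable --now containerd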
4.1.2 Docker as the runtime
Install docker-ce 20.10 on all nodes:
# yum install docker-ce-20.10.* docker-ce-cli-20.10.* -y
=======================================================
Since newer kubelet versions recommend systemd, change Docker's CgroupDriver to systemd as well:
# mkdir /etc/docker
# cat > /etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
=======================================================
Enable Docker at boot and start it on all nodes:
# systemctl daemon-reload && systemctl enable --now docker
4.2 Kubernetes and etcd installation
// k8s github : https://github.com/kubernetes/kubernetes/
Download the Kubernetes server package on Master01 (replace 1.23.0 with the latest version you see)
wget https://dl.k8s.io/v1.23.0/kubernetes-server-linux-amd64.tar.gz
#Note: this guide uses 1.23.0; download the latest 1.23.x release: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md
================================================================
#The following steps are all executed on master01
#Download the etcd package
[root@k8s-master01 ~]# wget https://github.com/etcd-io/etcd/releases/download/v3.5.1/etcd-v3.5.1-linux-amd64.tar.gz
============================================================
#Extract the Kubernetes binaries
[root@k8s-master01 ~]# tar -xf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}
=============================================================
Extract the etcd binaries
[root@k8s-master01 ~]# tar -zxvf etcd-v3.5.1-linux-amd64.tar.gz --strip-components=1 -C /usr/local/bin etcd-v3.5.1-linux-amd64/etcd{,ctl}
=============================================================
#Check versions
[root@k8s-master01 ~]# kubelet --version
Kubernetes v1.23.0
[root@k8s-master01 ~]# etcdctl version
etcdctl version: 3.5.1
API version: 3.5
=============================================================
#Copy the binaries to the other nodes
MasterNodes='k8s-master02 k8s-master03'
WorkNodes='k8s-node01 k8s-node02'
for NODE in $MasterNodes; do echo $NODE; scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; scp /usr/local/bin/etcd* $NODE:/usr/local/bin/; done
for NODE in $WorkNodes; do scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/ ; done
=============================================================
Create /opt/cni/bin on all nodes
mkdir -p /opt/cni/bin
=============================================================
Switch branch
On Master01 switch to the manual-installation-v1.23.x branch (for other versions switch to the matching .x branch; there is no need to pin a specific patch version)
cd /root/k8s-ha-install && git checkout manual-installation-v1.23.x
5. Generating component certificates
This is the most critical part of a binary installation; a single mistake ruins everything, so make sure every step is correct.
#Download the certificate tools on Master01 (or download them in a browser and upload them to the server)
wget "https://pkg.cfssl.org/R1.2/cfssl_linux-amd64" -O /usr/local/bin/cfssl
wget "https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64" -O /usr/local/bin/cfssljson
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson
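A quick sanity check that the tool was downloaded and is executable:
cfssl version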
5.1.1 etcd certificates
#Create the etcd certificate directory on all Master nodes
mkdir /etc/etcd/ssl -p
#Create the Kubernetes directories on all nodes
mkdir -p /etc/kubernetes/pki
=============================================================
#Generate the etcd certificates on Master01
The CSR (certificate signing request) files define the domain names, company and organizational unit.
# Generate the etcd CA certificate and its key
cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca
[root@k8s-master01 pki]# ll /etc/etcd/ssl/
total 12
-rw-r--r-- 1 root root 1005 Nov 2 21:01 etcd-ca.csr
-rw------- 1 root root 1679 Nov 2 21:01 etcd-ca-key.pem
-rw-r--r-- 1 root root 1367 Nov 2 21:01 etcd-ca.pem
=============================================================
#Note: the IPs below are the etcd IPs; here etcd shares hosts with the k8s masters, so those IPs are listed.
#If etcd runs on dedicated hosts in production, list the etcd cluster IPs instead, and include a few spare IPs to make scaling etcd out easier later.
cfssl gencert \
-ca=/etc/etcd/ssl/etcd-ca.pem \
-ca-key=/etc/etcd/ssl/etcd-ca-key.pem \
-config=ca-config.json \
-hostname=127.0.0.1,k8s-master01,k8s-master02,k8s-master03,192.168.2.8,192.168.2.9,192.168.2.10,192.168.2.11,192.168.2.12 \
-profile=kubernetes \
etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd
[root@k8s-master01 ssl]# ll /etc/etcd/ssl/
total 24
-rw-r--r-- 1 root root 1005 Nov 2 21:01 etcd-ca.csr
-rw------- 1 root root 1679 Nov 2 21:01 etcd-ca-key.pem
-rw-r--r-- 1 root root 1367 Nov 2 21:01 etcd-ca.pem
-rw-r--r-- 1 root root 1005 Nov 2 21:05 etcd.csr
-rw------- 1 root root 1675 Nov 2 21:05 etcd-key.pem
-rw-r--r-- 1 root root 1525 Nov 2 21:05 etcd.pem
=============================================================
#Distribute the certificates to the other etcd nodes (Node nodes do not need them).
MasterNodes='k8s-master02 k8s-master03'
for NODE in $MasterNodes; do
ssh $NODE "mkdir -p /etc/etcd/ssl"
for FILE in etcd-ca-key.pem etcd-ca.pem etcd-key.pem etcd.pem; do
scp -rp /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE}
done
done
5.1.2 Generate the apiserver certificates
Generate the Kubernetes root CA on master01
#Generate the root CA
[root@k8s-master01 pki]# cd /root/k8s-ha-install/pki
cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca
=============================================================
# 10.96.0.0/16 is the k8s Service network; if you change the Service network, change 10.96.0.1 accordingly
# If this is not a highly available cluster, use Master01's IP instead of 192.168.2.236
#Generate the apiserver certificate
cfssl gencert -ca=/etc/kubernetes/pki/ca.pem -ca-key=/etc/kubernetes/pki/ca-key.pem -config=ca-config.json -hostname=10.96.0.1,192.168.2.236,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,192.168.2.8,192.168.2.9,192.168.2.10 -profile=kubernetes apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver
=============================================================
#Generate the apiserver aggregation (front-proxy) certificates; third-party components use them
These back the requestheader-client-ca / requestheader-allowed-names settings of the aggregation layer (aggregator).
cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca
cfssl gencert -ca=/etc/kubernetes/pki/front-proxy-ca.pem -ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem -config=ca-config.json -profile=kubernetes front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client
#Sample output follows; the warning can be ignored
2022/11/02 22:07:36 [INFO] generate received request
2022/11/02 22:07:36 [INFO] received CSR
2022/11/02 22:07:36 [INFO] generating key: rsa-2048
2022/11/02 22:07:37 [INFO] encoded CSR
2022/11/02 22:07:37 [INFO] signed certificate with serial number 6019675031934127522582209023552902484058926032
2022/11/02 22:07:37 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
5.1.3 Generate the controller-manager certificate
We generate both the controller-manager certificate and a controller-manager.kubeconfig.
=================Generate the controller-manager certificate============================================
cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager
# Note: if this is not an HA cluster, change 192.168.2.236:8443 to master01's address and 8443 to the apiserver port (default 6443)
# set-cluster: define a cluster entry
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://192.168.2.236:8443 \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
# Define a context entry
kubectl config set-context system:kube-controller-manager@kubernetes \
--cluster=kubernetes \
--user=system:kube-controller-manager \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
# set-credentials: define a user entry
kubectl config set-credentials system:kube-controller-manager \
--client-certificate=/etc/kubernetes/pki/controller-manager.pem \
--client-key=/etc/kubernetes/pki/controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
# Use this context as the default
kubectl config use-context system:kube-controller-manager@kubernetes \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
5.1.4 Generate the scheduler certificate
================Generate the scheduler certificate=============================================
cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler
# Note: if this is not an HA cluster, change 192.168.2.236:8443 to master01's address and 8443 to the apiserver port (default 6443)
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://192.168.2.236:8443 \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler \
--client-certificate=/etc/kubernetes/pki/scheduler.pem \
--client-key=/etc/kubernetes/pki/scheduler-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
kubectl config set-context system:kube-scheduler@kubernetes \
--cluster=kubernetes \
--user=system:kube-scheduler \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
kubectl config use-context system:kube-scheduler@kubernetes \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
5.1.5 Generate the admin certificate
=================Generate the admin certificate============================================
cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin
# Note: if this is not an HA cluster, change 192.168.2.236:8443 to master01's address and 8443 to the apiserver port (default 6443)
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://192.168.2.236:8443 --kubeconfig=/etc/kubernetes/admin.kubeconfig
kubectl config set-credentials kubernetes-admin --client-certificate=/etc/kubernetes/pki/admin.pem --client-key=/etc/kubernetes/pki/admin-key.pem --embed-certs=true --kubeconfig=/etc/kubernetes/admin.kubeconfig
kubectl config set-context kubernetes-admin@kubernetes --cluster=kubernetes --user=kubernetes-admin --kubeconfig=/etc/kubernetes/admin.kubeconfig
kubectl config use-context kubernetes-admin@kubernetes --kubeconfig=/etc/kubernetes/admin.kubeconfig
5.1.6 Generate the ServiceAccount keys
Create the ServiceAccount key → secret
openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
Sample output: Generating RSA private key, 2048 bit long modulus (2 primes) ... e is 65537 (0x010001)
==================================================================
openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub
==================================================================
Send the certificates to the other master nodes
for NODE in k8s-master02 k8s-master03; do
for FILE in $(ls /etc/kubernetes/pki | grep -v etcd); do
scp /etc/kubernetes/pki/${FILE} $NODE:/etc/kubernetes/pki/${FILE};
done;
for FILE in admin.kubeconfig controller-manager.kubeconfig scheduler.kubeconfig; do
scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE};
done;
done
Note: at this point 23 certificate files have been generated under /etc/kubernetes/pki/.
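A quick way to confirm the count on Master01 (assuming nothing else has been written into the directory):
ls /etc/kubernetes/pki/ | wc -l
# expected output: 23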
6. Kubernetes system component configuration
6.1.1 Etcd
Question: when generating the etcd certificates we listed extra IPs to make expansion easier; does initial-cluster below also need those IPs even though the members do not exist yet? (No: initial-cluster must list only the members that actually exist.)
The etcd configuration is largely the same on every node; adjust the name and IP addresses for each Master node.
etcd01 configuration
#etcd01
====================================
vim /etc/etcd/etcd.config.yml
=====================================
name: 'k8s-master01'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.2.8:2380'
listen-client-urls: 'https://192.168.2.8:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.2.8:2380'
advertise-client-urls: 'https://192.168.2.8:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.2.8:2380,k8s-master02=https://192.168.2.9:2380,k8s-master03=https://192.168.2.10:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
client-cert-auth: true
trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
auto-tls: true
peer-transport-security:
cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
peer-client-cert-auth: true
trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
etcd02 configuration
#etcd02
vim /etc/etcd/etcd.config.yml
name: 'k8s-master02'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.2.9:2380'
listen-client-urls: 'https://192.168.2.9:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.2.9:2380'
advertise-client-urls: 'https://192.168.2.9:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.2.8:2380,k8s-master02=https://192.168.2.9:2380,k8s-master03=https://192.168.2.10:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
client-cert-auth: true
trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
auto-tls: true
peer-transport-security:
cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
peer-client-cert-auth: true
trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
etcd03 configuration
#etcd03
=====================================
vim /etc/etcd/etcd.config.yml
====================================
name: 'k8s-master03'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.2.10:2380'
listen-client-urls: 'https://192.168.2.10:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.2.10:2380'
advertise-client-urls: 'https://192.168.2.10:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.2.8:2380,k8s-master02=https://192.168.2.9:2380,k8s-master03=https://192.168.2.10:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
client-cert-auth: true
trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
auto-tls: true
peer-transport-security:
cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
peer-client-cert-auth: true
trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
6.1.2 etcd systemd service
Create the etcd service unit on all Master nodes (e.g. /usr/lib/systemd/system/etcd.service) and start it (here etcd shares hosts with the k8s master nodes):
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target
[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Alias=etcd3.service
6.1.3 Create the etcd certificate directory
Create the etcd certificate directory on all Master nodes
mkdir /etc/kubernetes/pki/etcd
ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
systemctl daemon-reload
systemctl enable --now etcd
6.1.4 Check etcd status
export ETCDCTL_API=3
etcdctl --endpoints="192.168.2.10:2379,192.168.2.9:2379,192.168.2.8:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint status --write-out=table
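Besides endpoint status, endpoint health gives a simple pass/fail per member, using the same certificate flags:
etcdctl --endpoints="192.168.2.10:2379,192.168.2.9:2379,192.168.2.8:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint health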
Chapter 7. High availability configuration
High availability configuration (note: if this is not an HA cluster, HAProxy and Keepalived are not needed)
When installing on public cloud this chapter can also be skipped; use the cloud load balancer directly, e.g. Alibaba Cloud SLB or Tencent Cloud ELB.
On public cloud, use the provider's own load balancer (Alibaba Cloud SLB, Tencent Cloud ELB, etc.) in place of HAProxy and Keepalived, because most public clouds do not support Keepalived. Also, on Alibaba Cloud the kubectl client cannot be placed on a master node: Alibaba's SLB has a loopback problem, i.e. servers behind the SLB cannot reach the SLB address themselves. Tencent Cloud has fixed this, so it is the recommended choice.
slb>haproxy>apiserver
Install Keepalived and HAProxy on all master nodes
7.1.1 HAProxy configuration
yum install keepalived haproxy -y
Configure HAProxy on all Masters (the configuration is identical):
vim /etc/haproxy/haproxy.cfg
global
maxconn 2000
ulimit-n 16384
log 127.0.0.1 local0 err
stats timeout 30s
defaults
log global
mode http
option httplog
timeout connect 5000
timeout client 50000
timeout server 50000
timeout http-request 15s
timeout http-keep-alive 15s
frontend k8s-master
bind 0.0.0.0:8443
bind 127.0.0.1:8443
mode tcp
option tcplog
tcp-request inspect-delay 5s
default_backend k8s-master
backend k8s-master
mode tcp
option tcplog
option tcp-check
balance roundrobin
default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
server k8s-master01 192.168.2.8:6443 check
server k8s-master02 192.168.2.9:6443 check
server k8s-master03 192.168.2.10:6443 check
7.1.2 Keepalived configuration
Configure Keepalived on all Master nodes; the configuration differs per node, so pay attention to each node's IP and network interface (the interface parameter).
[root@k8s-master01 pki]# vim /etc/keepalived/keepalived.conf
master01 configuration
#master01
=======================================================
#vim /etc/keepalived/keepalived.conf
=======================================================
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
script "/etc/keepalived/check_apiserver.sh"
interval 5
weight -5
fall 2
rise 1
}
vrrp_instance VI_1 {
state MASTER
interface ens33
mcast_src_ip 192.168.2.8
virtual_router_id 51
priority 101
nopreempt
advert_int 2
authentication {
auth_type PASS
auth_pass K8SHA_KA_AUTH
}
virtual_ipaddress {
192.168.2.236
}
track_script {
chk_apiserver
} }
master02 configuration
#master02
========================================
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
script "/etc/keepalived/check_apiserver.sh"
interval 5
weight -5
fall 2
rise 1
}
vrrp_instance VI_1 {
state BACKUP
interface ens33
mcast_src_ip 192.168.2.9
virtual_router_id 51
priority 100
nopreempt
advert_int 2
authentication {
auth_type PASS
auth_pass K8SHA_KA_AUTH
}
virtual_ipaddress {
192.168.2.236
}
track_script {
chk_apiserver
} }
master03 configuration
#master03
=============================================
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
script "/etc/keepalived/check_apiserver.sh"
interval 5
weight -5
fall 2
rise 1
}
vrrp_instance VI_1 {
state BACKUP
interface ens33
mcast_src_ip 192.168.2.10
virtual_router_id 51
priority 100
nopreempt
advert_int 2
authentication {
auth_type PASS
auth_pass K8SHA_KA_AUTH
}
virtual_ipaddress {
192.168.2.236
}
track_script {
chk_apiserver
} }
7.1.3 Health check configuration
On all master nodes
[root@k8s-master01 keepalived]# vim /etc/keepalived/check_apiserver.sh
#!/bin/bash
err=0
for k in $(seq 1 3)
do
check_code=$(pgrep haproxy)
if [[ $check_code == "" ]]; then
err=$(expr $err + 1)
sleep 1
continue
else
err=0
break
fi
done
if [[ $err != "0" ]]; then
echo "systemctl stop keepalived"
/usr/bin/systemctl stop keepalived
exit 1
else
exit 0
fi
Make the script executable
chmod +x /etc/keepalived/check_apiserver.sh
Start HAProxy and Keepalived on all master nodes
[root@k8s-master01 keepalived]# systemctl daemon-reload
[root@k8s-master01 keepalived]# systemctl enable --now haproxy
[root@k8s-master01 keepalived]# systemctl enable --now keepalived
VIP test
[root@k8s-master01 pki]# ping 192.168.2.236
PING 192.168.2.236 (192.168.2.236) 56(84) bytes of data.
64 bytes from 192.168.2.236: icmp_seq=1 ttl=64 time=1.39 ms
64 bytes from 192.168.2.236: icmp_seq=2 ttl=64 time=2.46 ms
64 bytes from 192.168.2.236: icmp_seq=3 ttl=64 time=1.68 ms
64 bytes from 192.168.2.236: icmp_seq=4 ttl=64 time=1.08 ms
Important: if Keepalived and HAProxy are installed, verify that Keepalived works correctly
telnet 192.168.2.236 8443
If the VIP cannot be pinged or telnet does not show the ']' escape prompt, the VIP is not usable; do not continue. Troubleshoot Keepalived first: the firewall and SELinux, the HAProxy/Keepalived status, the listening ports, and so on.
On all nodes the firewall must be disabled and inactive: systemctl status firewalld
On all nodes SELinux must be disabled: getenforce
On master nodes check the HAProxy and Keepalived status: systemctl status keepalived haproxy
On master nodes check the listening ports: netstat -lntp
Chapter 8. Kubernetes component configuration
Create the required directories on all nodes
mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes
8.1.1 APIServer
Create the kube-apiserver service on all Master nodes. Note: if this is not an HA cluster, change 192.168.2.236 to master01's address.
Note: this document uses 10.96.0.0/16 as the k8s Service network; it must not overlap with the host network or the Pod network. Adjust as needed.
master01
#master01
vim /usr/lib/systemd/system/kube-apiserver.service
============================================================
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-apiserver \
--v=2 \
--logtostderr=true \
--allow-privileged=true \
--bind-address=0.0.0.0 \
--secure-port=6443 \
--insecure-port=0 \
--advertise-address=192.168.2.8 \
--service-cluster-ip-range=10.96.0.0/16 \
--service-node-port-range=30000-32767 \
--etcd-servers=https://192.168.2.8:2379,https://192.168.2.9:2379,https://192.168.2.10:2379 \
--etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
--etcd-certfile=/etc/etcd/ssl/etcd.pem \
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
--client-ca-file=/etc/kubernetes/pki/ca.pem \
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
--kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
--service-account-key-file=/etc/kubernetes/pki/sa.pub \
--service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
--service-account-issuer=https://kubernetes.default.svc.cluster.local \
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
--authorization-mode=Node,RBAC \
--enable-bootstrap-token-auth=true \
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
--requestheader-allowed-names=aggregator \
--requestheader-group-headers=X-Remote-Group \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-username-headers=X-Remote-User
# --token-auth-file=/etc/kubernetes/token.csv
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535
[Install]
WantedBy=multi-user.target
master02
Note: this document uses 10.96.0.0/16 as the k8s Service network; it must not overlap with the host network or the Pod network. Adjust as needed.
[root@k8s-master02 pki]# vim /usr/lib/systemd/system/kube-apiserver.service
=======================================================================
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-apiserver \
--v=2 \
--logtostderr=true \
--allow-privileged=true \
--bind-address=0.0.0.0 \
--secure-port=6443 \
--insecure-port=0 \
--advertise-address=192.168.2.9 \
--service-cluster-ip-range=10.96.0.0/16 \
--service-node-port-range=30000-32767 \
--etcd-servers=https://192.168.2.8:2379,https://192.168.2.9:2379,https://192.168.2.10:2379 \
--etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
--etcd-certfile=/etc/etcd/ssl/etcd.pem \
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
--client-ca-file=/etc/kubernetes/pki/ca.pem \
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
--kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
--service-account-key-file=/etc/kubernetes/pki/sa.pub \
--service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
--service-account-issuer=https://kubernetes.default.svc.cluster.local \
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
--authorization-mode=Node,RBAC \
--enable-bootstrap-token-auth=true \
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
--requestheader-allowed-names=aggregator \
--requestheader-group-headers=X-Remote-Group \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-username-headers=X-Remote-User
# --token-auth-file=/etc/kubernetes/token.csv
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535
[Install]
WantedBy=multi-user.target
master03
vim /usr/lib/systemd/system/kube-apiserver.service
==============================================================
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-apiserver \
--v=2 \
--logtostderr=true \
--allow-privileged=true \
--bind-address=0.0.0.0 \
--secure-port=6443 \
--insecure-port=0 \
--advertise-address=192.168.2.10 \
--service-cluster-ip-range=10.96.0.0/16 \
--service-node-port-range=30000-32767 \
--etcd-servers=https://192.168.2.8:2379,https://192.168.2.9:2379,https://192.168.2.10:2379 \
--etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
--etcd-certfile=/etc/etcd/ssl/etcd.pem \
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
--client-ca-file=/etc/kubernetes/pki/ca.pem \
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
--kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
--service-account-key-file=/etc/kubernetes/pki/sa.pub \
--service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
--service-account-issuer=https://kubernetes.default.svc.cluster.local \
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
--authorization-mode=Node,RBAC \
--enable-bootstrap-token-auth=true \
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
--requestheader-allowed-names=aggregator \
--requestheader-group-headers=X-Remote-Group \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-username-headers=X-Remote-User
# --token-auth-file=/etc/kubernetes/token.csv
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535
[Install]
WantedBy=multi-user.target
Start the apiserver
systemctl daemon-reload && systemctl enable --now kube-apiserver
systemctl status kube-apiserver
===============
These messages in the system log can be ignored:
Dec 11 20:51:15 k8s-master01 kube-apiserver: I1211 20:51:15.004739 7450 clientconn.go:948] ClientConn switching balancer to "pick_first"
Dec 11 20:51:15 k8s-master01 kube-apiserver: I1211 20:51:15.004843 7450 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc011bd4c80, {CONNECTING <nil>}
Dec 11 20:51:15 k8s-master01 kube-apiserver: I1211 20:51:15.010725 7450 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc011bd4c80, {READY <nil>}
Dec 11 20:51:15 k8s-master01 kube-apiserver: I1211 20:51:15.011370 7450 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"
8.2 ControllerManager
Configure the kube-controller-manager service on all Master nodes (identical configuration on every master)
Note: this document uses 10.244.0.0/12 as the k8s Pod network; it must not overlap with the host network or the Service network. Adjust as needed.
Configuration
[root@k8s-master01 pki]# vim /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
--v=2 \
--logtostderr=true \
--address=127.0.0.1 \
--root-ca-file=/etc/kubernetes/pki/ca.pem \
--cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
--service-account-private-key-file=/etc/kubernetes/pki/sa.key \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
--leader-elect=true \
--use-service-account-credentials=true \
--node-monitor-grace-period=40s \
--node-monitor-period=5s \
--pod-eviction-timeout=2m0s \
--controllers=*,bootstrapsigner,tokencleaner \
--allocate-node-cidrs=true \
--cluster-cidr=10.244.0.0/12 \
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
--node-cidr-mask-size=24
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
Start
systemctl daemon-reload
systemctl enable --now kube-controller-manager
systemctl status kube-controller-manager
8.3 Scheduler
Configure the kube-scheduler service on all Master nodes (identical configuration on every master)
vim /usr/lib/systemd/system/kube-scheduler.service
==============================================
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-scheduler \
--v=2 \
--logtostderr=true \
--address=127.0.0.1 \
--leader-elect=true \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
Start
systemctl daemon-reload
systemctl enable --now kube-scheduler
Chapter 9. TLS Bootstrapping configuration
The bootstrap configuration only needs to be created on Master01.
# Note: if this is not an HA cluster, change 192.168.2.236:8443 to master01's address and 8443 to the apiserver port (default 6443)
Generate the bootstrap kubeconfig
cd /root/k8s-ha-install/bootstrap
=========================================
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://192.168.2.236:8443 --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
kubectl config set-credentials tls-bootstrap-token-user --token=c8ad9c.2e4d610cf3e7426e --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
kubectl config set-context tls-bootstrap-token-user@kubernetes --cluster=kubernetes --user=tls-bootstrap-token-user --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
kubectl config use-context tls-bootstrap-token-user@kubernetes --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
Note: if you change the token-id and token-secret in bootstrap.secret.yaml, the two strings must stay consistent everywhere in the file and keep the same lengths, and the token used in the command above (c8ad9c.2e4d610cf3e7426e) must match the new values.
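A quick consistency check before applying the file (the field names below assume the bootstrap.secret.yaml shipped in the k8s-ha-install repository):
grep -E 'token-id|token-secret|name:' /root/k8s-ha-install/bootstrap/bootstrap.secret.yaml | head
# token-id and token-secret must together match the token used above (c8ad9c.2e4d610cf3e7426e)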
mkdir -p /root/.kube ; cp /etc/kubernetes/admin.kubeconfig /root/.kube/config
Only continue if the cluster status can be queried successfully; otherwise stop and check the k8s components for faults.
kubectl get cs
==========Sample output===========
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true","reason":""}
etcd-2 Healthy {"health":"true","reason":""}
etcd-1 Healthy {"health":"true","reason":""}
===========================================
[root@k8s-master01 bootstrap]# kubectl create -f bootstrap.secret.yaml
secret/bootstrap-token-c8ad9c created
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/node-autoapprove-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/node-autoapprove-certificate-rotation created
clusterrole.rbac.authorization.k8s.io/system:kube-apiserver-to-kubelet created
clusterrolebinding.rbac.authorization.k8s.io/system:kube-apiserver created
Chapter 10. Node configuration
10.1.1 Copy certificates from Master01 to the Node nodes
cd /etc/kubernetes/
for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do
ssh $NODE mkdir -p /etc/kubernetes/pki
for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig; do
scp -rp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}
done
done
10.2 kubelet
Create the required directories on all nodes
mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/
Configure the kubelet service on all nodes (the kubelet flags change often, so they are kept in a separate drop-in file)
[root@k8s-master01 bootstrap]# vim /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target
If the runtime is Containerd, use the following kubelet configuration:
Configure the kubelet drop-in file on all nodes (it can also be written directly into kubelet.service):
# Runtime: Containerd
# vim /etc/systemd/system/kubelet.service.d/10-kubelet.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock --cgroup-driver=systemd"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' "
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS
If the runtime is Docker, use the following kubelet configuration:
# Runtime: Docker
# vim /etc/systemd/system/kubelet.service.d/10-kubelet.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.5"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' "
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS
Create the kubelet configuration file
Note: if you changed the k8s Service network, change the clusterDNS setting in kubelet-conf.yml to the tenth address of the Service network, e.g. 10.96.0.10
[root@k8s-master01 bootstrap]# vim /etc/kubernetes/kubelet-conf.yml
============================================
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
anonymous:
enabled: false
webhook:
cacheTTL: 2m0s
enabled: true
x509:
clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
mode: Webhook
webhook:
cacheAuthorizedTTL: 5m0s
cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
imagefs.available: 15%
memory.available: 100Mi
nodefs.available: 10%
nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
Start kubelet
Start kubelet on all nodes
systemctl daemon-reload
systemctl enable --now kubelet
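Because the ClusterRoleBindings created from bootstrap.secret.yaml allow automatic approval, the node certificate requests should be approved without manual action; a quick check from Master01:
systemctl status kubelet
kubectl get csr    # requests should show Approved,Issued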
At this point it is normal for /var/log/messages to show only the following kind of message; it disappears once Calico is installed
Unable to update cni config: no networks found in
/etc/cni/net.d
10.3 Check cluster status
[root@k8s-master01 kubernetes]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master01 NotReady <none> 6s v1.23.1
k8s-master02 NotReady <none> 3m33s v1.23.1
k8s-master03 NotReady <none> 6s v1.23.1
k8s-node01 NotReady <none> 6s v1.23.1
k8s-node02 NotReady <none> 6s v1.23.1
10.4 Install and configure kube-proxy
Note: if this is not an HA cluster, change 192.168.2.236:8443 to master01's address and 8443 to the apiserver port (default 6443)
The following is executed only on Master01
cd /root/k8s-ha-install
kubectl -n kube-system create serviceaccount kube-proxy
kubectl create clusterrolebinding system:kube-proxy --clusterrole system:node-proxier --serviceaccount kube-system:kube-proxy
# look up the name of the ServiceAccount token secret created for kube-proxy
SECRET=$(kubectl -n kube-system get sa/kube-proxy --output=jsonpath='{.secrets[0].name}')
JWT_TOKEN=$(kubectl -n kube-system get secret/$SECRET \
--output=jsonpath='{.data.token}' | base64 -d)
PKI_DIR=/etc/kubernetes/pki
K8S_DIR=/etc/kubernetes
====================================================================
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://192.168.2.236:8443 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
kubectl config set-credentials kubernetes --token=${JWT_TOKEN} --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
kubectl config set-context kubernetes --cluster=kubernetes --user=kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
kubectl config use-context kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
=======================================================================
Send the kubeconfig to the other nodes
for NODE in k8s-master02 k8s-master03; do
scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig
done
for NODE in k8s-node01 k8s-node02; do
scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig
done
========================================================================
Add the kube-proxy configuration and service file on all nodes:
vim /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-proxy \
--config=/etc/kubernetes/kube-proxy.yaml \
--v=2
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
====================================================================
If you changed the cluster Pod network, change clusterCIDR in kube-proxy.yaml to your own Pod network:
vim /etc/kubernetes/kube-proxy.yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
acceptContentTypes: ""
burst: 10
contentType: application/vnd.kubernetes.protobuf
kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
qps: 5
clusterCIDR: 10.244.0.0/12
configSyncPeriod: 15m0s
conntrack:
max: null
maxPerCore: 32768
min: 131072
tcpCloseWaitTimeout: 1h0m0s
tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
masqueradeAll: false
masqueradeBit: 14
minSyncPeriod: 0s
syncPeriod: 30s
ipvs:
masqueradeAll: true
minSyncPeriod: 5s
scheduler: "rr"
syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms
======================================
Enable kube-proxy at boot
#Start kube-proxy on all nodes
[root@k8s-master01 k8s-ha-install]# systemctl daemon-reload
[root@k8s-master01 k8s-ha-install]# systemctl enable --now kube-proxy
Created symlink /etc/systemd/system/multi-user.target.wants/kube-proxy.service → /usr/lib/systemd/system/kube-proxy.service.
======================Success check==================
#Check whether the log contains entries like the following
[root@k8s-master01 k8s-ha-install]# grep Adding /var/log/messages
====Filtered result=================
kube-proxy: I1104 16:34:47.852540 14017 service.go:419] "Adding new service port" portName="kube-system/kube-dns:dns" servicePort="10.96.0.10:53/UDP"
Nov 4 16:34:47 k8s-master01 kube-proxy: I1104 16:34:47.852562 14017 service.go:419] "Adding new service port" portName="kube-system/kube-dns:dns-tcp" servicePort="10.96.0.10:53/TCP"
Chapter 11: Install Calico
11.1 Install the officially recommended version (recommended)
The following steps run only on master01
cd /root/k8s-ha-install/calico/
Change Calico's network: replace the POD_CIDR placeholder with your own Pod network
sed -i "s#POD_CIDR#10.244.0.0/12#g" calico.yaml
Check that the network is your own Pod network: grep "IPV4POOL_CIDR" calico.yaml -A 1
[root@k8s-master01 calico]# grep "IPV4POOL_CIDR" calico.yaml -A 1
- name: CALICO_IPV4POOL_CIDR
value: "10.244.0.0/12"
Deploy Calico
kubectl apply -f calico.yaml
[root@k8s-master01 calico]# kubectl get po -n kube-system -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-kube-controllers-6f6595874c-l5pzv 1/1 Running 3 (14m ago) 71m 10.242.195.4 k8s-master03 <none> <none>
calico-node-jbcr5 1/1 Running 2 (14m ago) 71m 192.168.2.9 k8s-master02 <none> <none>
calico-node-mczg7 1/1 Running 1 (14m ago) 71m 192.168.2.8 k8s-master01 <none> <none>
calico-node-np6gc 1/1 Running 1 (14m ago) 71m 192.168.2.101 k8s-node02 <none> <none>
calico-node-qjzk8 1/1 Running 1 (14m ago) 71m 192.168.2.100 k8s-node01 <none> <none>
calico-node-qpx6f 1/1 Running 2 (14m ago) 71m 192.168.2.10 k8s-master03 <none> <none>
calico-typha-6b6cf8cbdf-4bx4q 1/1 Running 2 (14m ago) 71m 192.168.2.10 k8s-master03 <none> <none>
coredns-5db5696c7-5vfrw 1/1 Running 1 (14m ago) 17h 10.249.244.196 k8s-master01 <none> <none>
metrics-server-6bf7dcd649-fvb4w 1/1 Running 1 (14m ago) 61m 10.249.92.67 k8s-master02 <none> <none>
===============================================================
If a pod shows 0/1, check /var/log/messages and kubectl logs -f calico-node-mczg7 -n kube-system
If the installation runs into problems, also check kube-proxy
Chapter 12. Install CoreDNS
Install CoreDNS
#Find the first address of the Service network
kubectl get svc | grep kubernetes | awk '{print $3}'
#Result
10.96.0.1
We settled earlier on 10.96.0.10 as the DNS address.
Manually replace the COREDNS_SERVICE_IP placeholder in coredns.yaml with 10.96.0.10.
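The replacement can also be scripted (a sketch; the placeholder string and the CoreDns/coredns.yaml path follow the repository layout used below):
sed -i "s#COREDNS_SERVICE_IP#10.96.0.10#g" CoreDns/coredns.yaml
grep 10.96.0.10 CoreDns/coredns.yaml    # verify the replacement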
Install
kubectl create -f CoreDns/coredns.yaml
Check status
kubectl get po -n kube-system -l k8s-app=kube-dns
Chapter 13. Install metrics-server
In newer Kubernetes versions, system resource metrics are collected by metrics-server, which reports node and Pod memory, disk, CPU and network usage.
Install metrics-server
cd /root/k8s-ha-install/metrics-server
Install
kubectl create -f .
Check status
kubectl top node
Chapter 14. Cluster verification
Cluster verification is mandatory.
Deploy busybox
cat<<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: busybox
namespace: default
spec:
containers:
- name: busybox
image: busybox:1.28
command:
- sleep
- "3600"
imagePullPolicy: IfNotPresent
restartPolicy: Always
EOF
#Verification checklist (connectivity checks for item 3 are sketched after this list)
1. Pods must be able to resolve Services
2. Pods must be able to resolve Services in other namespaces
3. Every node must be able to reach the kubernetes Service on 443 and the kube-dns Service on 53
4. Pod-to-Pod communication must work
5. within the same namespace
6. across namespaces
7. across machines
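A quick connectivity check for item 3, run from every node (10.96.0.1 and 10.96.0.10 assume the default Service network used in this document):
telnet 10.96.0.1 443     # kubernetes service
telnet 10.96.0.10 53     # kube-dns service, TCP
curl 10.96.0.10:53       # an "Empty reply from server" error still proves the port is reachable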
#Verify DNS resolution
[root@k8s-master01 CoreDNS]# kubectl exec busybox -n default -- nslookup kubernetes
Server: 192.168.0.10
Address 1: 192.168.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes
Address 1: 192.168.0.1 kubernetes.default.svc.cluster.local
[root@k8s-master01 CoreDNS]# kubectl exec busybox -n default -- nslookup kube-dns.kube-system
Server: 192.168.0.10
Address 1: 192.168.0.10 kube-dns.kube-system.svc.cluster.local
Name: kube-dns.kube-system
Address 1: 192.168.0.10 kube-dns.kube-system.svc.cluster.local
Chapter 15. Install the Dashboard
(In my own run I installed the pinned version.)
The Dashboard displays the cluster's various resources; it can also be used to view Pod logs in real time and run commands inside containers.
15.1.1 Install the pinned Dashboard version
cd /root/k8s-ha-install/dashboard/
kubectl create -f .
=====Output==============
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
15.1.2 Install the latest version
Official GitHub: https://github.com/kubernetes/dashboard
The latest Dashboard release can be found on the official GitHub page.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.3/aio/deploy/recommended.yaml
Replace the URL with the latest one shown on that page.
Create an admin user: vim admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: admin-user
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: admin-user
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: admin-user
namespace: kube-system
kubectl apply -f admin.yaml -n kube-system
15.1.3 Log in to the Dashboard
Add the following startup flags to the Chrome launcher to work around the self-signed certificate blocking access to the Dashboard:
--test-type --ignore-certificate-errors
Change the Dashboard Service to NodePort:
kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
Change type: ClusterIP to type: NodePort (skip this step if it is already NodePort):
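As a non-interactive alternative to kubectl edit, the Service type can be patched directly (the Service name and namespace are the ones created by the Dashboard manifests above):
kubectl patch svc kubernetes-dashboard -n kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'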
Check the assigned port:
kubectl get svc -n kubernetes-dashboard -owide
Dashboard access (use any one of your node IPs):
Open https://192.168.2.101:18282 (replace 18282 with your own port) and choose token as the login method.
Retrieve the login token
[root@k8s-master01 dashboard]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name: admin-user-token-lzd76
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: admin-user
kubernetes.io/service-account.uid: 0a55a389-cb3c-4170-84ee-3c3d3c72396f
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1411 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6InJ1RXBmX2lwYnhiSWczQWZZdWJCS3dZX3p5eUpZVDRPOTN4c0J2SFFRZHcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWx6ZDc2Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIwYTU1YTM4OS1jYjNjLTQxNzAtODRlZS0zYzNkM2M3MjM5NmYiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.W0voeSOtcoxjxqiGgYpjtXj5bHBz9CGSsr3JxkMHppGblIkWp1tM1QnQk9VJNoY0xZXXJn_NogczbYs_idL5Uaf-wZoF_JgHAG0cmth5ENG1pqaVYmsjpa-eWvV8XYX9lPYKcky_HtalIDk9kYWtNgK80oHXCcKM8HjR-RgOEPstppvx1YIvT_uvyCF-tyFVZt4IiK9Ib5kVxy0hJfPu3to72nDpjJBHgCQzw2JXgJcx6T4aab9gt8ka1mP1cDXDcRo6HjGeCRQyPSzQf8-kzDqfDIVk18rX0-TqGtROV-bz3FMngsHkhNPU6kNzsh8P-iQVP-DNuIYE5AYPGP0reA
After login, the CPU and memory usage panels require metrics-server to be installed.