Installing a Highly Available Kubernetes Cluster from Binaries
1. Configure the hosts: on all k8s master and node machines, edit /etc/hosts as follows:
192.168.3.123 k8s-master1
192.168.3.124 k8s-master2
192.168.3.125 k8s-master3
192.168.3.128 k8s-vip    # if this is not an HA cluster, use the IP of k8s-master1 here
192.168.3.126 node-1
192.168.3.127 node-2
Note: the Pod CIDR, the Service CIDR, and the host network must not overlap.
1. Pod CIDR: 172.16.0.0/16
2. Service CIDR: 10.0.0.0/16
2. Configure the yum repositories on all k8s master and node nodes:
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo

yum install -y wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git

yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
3. On all k8s master and node nodes, disable the firewall, SELinux, NetworkManager, dnsmasq, and swap:
# Disable the firewall
systemctl disable firewalld
systemctl stop firewalld

# Note: on public cloud hosts NetworkManager does not need to be disabled; skip these two commands there
systemctl disable NetworkManager
systemctl stop NetworkManager

# Disable dnsmasq; if this errors because the service does not exist, ignore it
systemctl disable --now dnsmasq

# Disable SELinux
# temporarily
setenforce 0
# permanently, by editing /etc/sysconfig/selinux and /etc/selinux/config
sed -i 's/SELINUX=permissive/SELINUX=disabled/' /etc/sysconfig/selinux
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

# Disable swap
swapoff -a
# permanently, by commenting out the swap line in /etc/fstab
sed -i 's/.*swap.*/#&/' /etc/fstab
4. Install ntpdate on all nodes
[root@k8s-master1 ~]# yum -y install ntpdate
[root@k8s-master1 ~]# ntpdate time2.aliyun.com

# Add a cron job to keep time in sync (this schedule runs every minute during the 03:00 hour; adjust as needed)
* 3 * * * /usr/sbin/ntpdate time2.aliyun.com
5. Configure ulimits on all nodes
# Takes effect immediately for the current shell
ulimit -SHn 65535

# Append the following to the end of the file for a permanent setting
vim /etc/security/limits.conf
* soft nofile 655360
* hard nofile 131072
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
6. Only k8s-master1 needs to log in to the other nodes: the configuration files and certificates generated during the installation are all created on k8s-master1, and the cluster is also managed from k8s-master1. On Alibaba Cloud or other clouds, a separate kubectl host is required. Configure the SSH keys as follows:
ssh-keygen -t rsa
for i in k8s-master1 k8s-master2 k8s-master3 node-1 node-2; do ssh-copy-id -i .ssh/id_rsa.pub $i; done
7. Upgrade the kernel: a kernel of 4.18+ is required; this guide upgrades to 4.19. Download the 4.19 kernel packages and install them locally.
7-1. Update the system and download the kernel
# Update all nodes, excluding the kernel packages
yum update -y --exclude=kernel*

# Download the kernel packages on k8s-master1
cd /root
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm

# Copy the packages from k8s-master1 to /root on the other nodes
for i in k8s-master2 k8s-master3 node-1 node-2; do scp kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm $i:/root/ ; done
7-2. Install the kernel on all nodes
cd /root && yum localinstall -y kernel-ml-*
7-3. Change the default boot kernel on all nodes (note: keep the old kernel so you can boot back into it if the new kernel fails to start)
[root@k8s-master1 ~]# grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-4.19.12-1.el7.elrepo.x86_64
Found initrd image: /boot/initramfs-4.19.12-1.el7.elrepo.x86_64.img
Found linux image: /boot/vmlinuz-3.10.0-1160.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-1160.el7.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-159c5700b90c473598c0d0d88f656997
Found initrd image: /boot/initramfs-0-rescue-159c5700b90c473598c0d0d88f656997.img
done
[root@k8s-master1 ~]# grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
7-4. Verify that the default kernel is 4.19
[root@k8s-master1 ~]# grubby --default-kernel
/boot/vmlinuz-4.19.12-1.el7.elrepo.x86_64
7-5. Reboot all nodes and confirm the running kernel is 4.19
[root@k8s-master1 ~]# uname -a
Linux k8s-master1 4.19.12-1.el7.elrepo.x86_64 #1 SMP Fri Dec 21 11:06:36 EST 2018 x86_64 x86_64 x86_64 GNU/Linux
8. Install ipvsadm on all nodes
8-1. Install ipvsadm
yum -y install ipvsadm ipset sysstat conntrack libseccomp
8-2. Configure the IPVS modules on all nodes. On kernel 4.19+, nf_conntrack_ipv4 has been renamed to nf_conntrack; on kernels 4.18 and below, use nf_conntrack_ipv4 instead.
[root@k8s-master1 ~]# vim /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
Note: then run systemctl enable --now systemd-modules-load.service and check whether the modules are loaded (a reboot is required for everything to take effect; errors on the first start can be ignored).
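For convenience, the commands referenced in the note above (the lsmod check, with its expected output, is shown again in 8-4):

systemctl enable --now systemd-modules-load.service
lsmod | grep -e ip_vs -e nf_conntrack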
8-3. Enable the kernel parameters required by Kubernetes on all nodes (note: reboot afterwards)
cat <<EOF > /etc/sysctl.d/k8s.conf
vm.panic_on_oom=0
vm.overcommit_memory=1
fs.file-max=52706963
fs.nr_open=52706963
fs.may_detach_mounts = 1
fs.inotify.max_user_watches=89100
net.ipv4.ip_forward = 1
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.ipv4.conf.all.route_localnet = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.core.somaxconn = 16384
net.netfilter.nf_conntrack_max=2310720
EOF
8-4. After configuring all nodes, reboot and confirm the modules are still loaded after the reboot
[root@k8s-master1 ~]# lsmod | grep --color=auto -e ip_vs -e nf_conntrack
ip_vs_ftp              16384  0
nf_nat                 32768  1 ip_vs_ftp
ip_vs_sed              16384  0
ip_vs_nq               16384  0
ip_vs_fo               16384  0
ip_vs_sh               16384  0
ip_vs_dh               16384  0
ip_vs_lblcr            16384  0
ip_vs_lblc             16384  0
ip_vs_wrr              16384  0
ip_vs_rr               16384  0
ip_vs_wlc              16384  0
ip_vs_lc               16384  0
ip_vs                 151552  24 ip_vs_wlc,ip_vs_rr,ip_vs_dh,ip_vs_lblcr,ip_vs_sh,ip_vs_fo,ip_vs_nq,ip_vs_lblc,ip_vs_wrr,ip_vs_lc,ip_vs_sed,ip_vs_ftp
nf_conntrack          143360  2 nf_nat,ip_vs
nf_defrag_ipv6         20480  1 nf_conntrack
nf_defrag_ipv4         16384  1 nf_conntrack
libcrc32c              16384  4 nf_conntrack,nf_nat,xfs,ip_vs
9. Install the Kubernetes components and the Runtime
Note: for Kubernetes versions below 1.24, either Docker or Containerd can be used; for 1.24 and above, Containerd must be used as the Runtime. This guide uses Containerd.
9-1. Install docker 20.10 on all nodes
yum install -y docker-ce-20.10.* docker-ce-cli-20.10.* containerd
Note: there is no need to start docker; only containerd has to be configured and started.
9-2. Configure the kernel modules required by containerd on all nodes
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
9-3. Load the modules on all nodes
[root@k8s-master1 ~]# modprobe -- overlay
[root@k8s-master1 ~]# modprobe -- br_netfilter
9-4. Configure and load the kernel parameters required by Containerd on all nodes
# Kernel parameters required by containerd
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

# Load the parameters
sysctl --system
9-5. Create the Containerd configuration directory and default configuration file on all nodes
mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml
Note:
- containerd config default: prints a default containerd configuration
- tee /etc/containerd/config.toml: writes that output to /etc/containerd/config.toml (while also printing it)
9-6. Switch Containerd's cgroup driver to systemd on all nodes
vim /etc/containerd/config.toml
- Find the containerd.runtimes.runc.options section (or search for SystemdCgroup = false) and change SystemdCgroup = false to SystemdCgroup = true. Edit the existing line rather than adding a new one, otherwise containerd will report an error. A sed sketch is shown after this list.
Note: after containerd starts, a containerd.sock socket file appears under /run/containerd/.
- On all nodes, find sandbox_image (the Pause image) and change it to a registry address that matches your version, e.g. registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6.
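If you prefer to make both edits non-interactively, the following sed commands are one possible sketch; they assume the stock config.toml generated above, so review the file afterwards:

# switch the runc cgroup driver to systemd
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#' /etc/containerd/config.toml
# point the pause (sandbox) image at the Aliyun mirror
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"#' /etc/containerd/config.toml
# confirm both changes
grep -nE 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml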
9-7. Start Containerd on all nodes and enable it at boot:
systemctl daemon-reload
systemctl enable --now containerd
9-8. Point the crictl client at the containerd runtime on all nodes
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
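As an optional sanity check that crictl can reach containerd (containerd was already started in 9-7):

crictl info | head -n 20
crictl ps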
10. Install Kubernetes and etcd
10-1. Download the Kubernetes server package on k8s-master1 (replace 1.23.0 with the latest version you see)
[root@k8s-master1 ~]# wget https://dl.k8s.io/v1.23.0/kubernetes-server-linux-amd64.tar.gz
Note: the following steps are all performed on k8s-master1.
10-2. Download the etcd package
[root@k8s-master1 ~]# wget https://github.com/etcd-io/etcd/releases/download/v3.5.1/etcd-v3.5.1-linux-amd64.tar.gz
10-3. Extract the Kubernetes binaries
[root@k8s-master1 ~]# tar -xf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}
10-4. Extract the etcd binaries
[root@k8s-master1 ~]# tar -zxvf etcd-v3.5.1-linux-amd64.tar.gz --strip-components=1 -C /usr/local/bin etcd-v3.5.1-linux-amd64/etcd{,ctl}
10-5. Check the kubernetes and etcd versions
# Kubernetes version
[root@k8s-master1 ~]# kubelet --version
Kubernetes v1.23.0

# etcd version
[root@k8s-master1 ~]# etcdctl version
etcdctl version: 3.5.1
API version: 3.5
10-6. Copy the binaries to the other nodes
master='k8s-master2 k8s-master3'
nodes='node-1 node-2'

# Copy to k8s-master2 and k8s-master3
for NODE in $master; do echo $NODE; scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; scp /usr/local/bin/etcd* $NODE:/usr/local/bin/; done

# Copy to node-1 and node-2
for NODE in $nodes; do scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/; done
10-7. Create the /opt/cni/bin directory on all nodes (masters and workers)
mkdir -p /opt/cni/bin
10-8. On k8s-master1, check out the 1.23.x branch (for other versions, switch to the corresponding branch; there is no need to match the exact patch version)
- Clone the installation source files
cd /root && git clone https://github.com/dotbalo/k8s-ha-install.git
- Switch to the 1.23.x branch
cd /root/k8s-ha-install && git checkout manual-installation-v1.23.x
11. Generate certificates
Note: this is the most critical part of a binary installation; a single mistake can break everything, so make sure every step is correct.
Download the certificate generation tools on k8s-master1:
wget "https://pkg.cfssl.org/R1.2/cfssl_linux-amd64" -O /usr/local/bin/cfssl
wget "https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64" -O /usr/local/bin/cfssljson

chmod +x /usr/local/bin/cfssl
chmod +x /usr/local/bin/cfssljson
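An incomplete download is a common cause of the error mentioned later in 11-1; a quick way to check that the binary works:

cfssl version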
11-1. etcd certificates
- Create the etcd certificate directory on all k8s-master nodes
mkdir -p /etc/etcd/ssl
- Create the Kubernetes directories on all nodes (masters and workers)
mkdir -p /etc/kubernetes/pki
- Generate the etcd certificates on k8s-master1. Note: the CSR files are certificate signing requests containing the domains, company, and organizational unit information.
# Change to the pki directory
cd /root/k8s-ha-install/pki/
- Generate the etcd CA certificate and key on k8s-master1
[root@k8s-master1 pki]# cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca
2022/09/04 17:31:40 [INFO] generating a new CA key and certificate from CSR
2022/09/04 17:31:40 [INFO] generate received request
2022/09/04 17:31:40 [INFO] received CSR
2022/09/04 17:31:40 [INFO] generating key: rsa-2048
2022/09/04 17:31:41 [INFO] encoded CSR
2022/09/04 17:31:41 [INFO] signed certificate with serial number 557346770631314603183056530477660578962081146079
Note: if this step fails, the cfssl tools may not have downloaded completely; download them again, or download them in a browser and upload them to the machine.
- Generate the etcd server certificate on k8s-master1
cfssl gencert \
  -ca=/etc/etcd/ssl/etcd-ca.pem \
  -ca-key=/etc/etcd/ssl/etcd-ca-key.pem \
  -config=ca-config.json \
  -hostname=127.0.0.1,k8s-master1,k8s-master2,k8s-master3,192.168.3.123,192.168.3.124,192.168.3.125 \
  -profile=kubernetes \
  etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd

# Output:
2022/09/04 17:35:56 [INFO] generate received request
2022/09/04 17:35:56 [INFO] received CSR
2022/09/04 17:35:56 [INFO] generating key: rsa-2048
2022/09/04 17:35:56 [INFO] encoded CSR
2022/09/04 17:35:56 [INFO] signed certificate with serial number 11452450532046076284041010477003693269694275633
Note: in -hostname=127.0.0.1,k8s-master1,k8s-master2,k8s-master3,192.168.3.123,192.168.3.124,192.168.3.125:
k8s-master1, k8s-master2, k8s-master3 are the hostnames of the k8s-master nodes;
192.168.3.123, 192.168.3.124, 192.168.3.125 are the IP addresses of the k8s-master nodes.
- Copy the certificates to k8s-master2 and k8s-master3
master='k8s-master2 k8s-master3'
for NODE in $master; do
  ssh $NODE "mkdir -p /etc/etcd/ssl"
  for FILE in etcd-ca-key.pem etcd-ca.pem etcd-key.pem etcd.pem; do
    scp /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE}
  done
done
12. Kubernetes component certificates
The following steps generate the Kubernetes certificates on k8s-master1.
12-1. Generate the cluster CA certificate on k8s-master1
# Change directory
cd /root/k8s-ha-install/pki/

# Generate the CA
[root@k8s-master1 pki]# cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca
2022/09/04 17:58:34 [INFO] generating a new CA key and certificate from CSR
2022/09/04 17:58:34 [INFO] generate received request
2022/09/04 17:58:34 [INFO] received CSR
2022/09/04 17:58:34 [INFO] generating key: rsa-2048
2022/09/04 17:58:35 [INFO] encoded CSR
2022/09/04 17:58:35 [INFO] signed certificate with serial number 289277835655508790670694914546410919393041994006

# Check the generated files
[root@k8s-master1 pki]# ls /etc/kubernetes/pki/
ca.csr  ca-key.pem  ca.pem
12-2. Generate the apiserver certificate on k8s-master1
Note: 10.0.0.1 below comes from the k8s Service CIDR. If this is not an HA cluster, replace the virtual IP (VIP) 192.168.3.128 with the IP of k8s-master1.
# Generate the certificate
[root@k8s-master1 pki]# cfssl gencert -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -hostname=10.0.0.1,192.168.3.128,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,192.168.3.123,192.168.3.124,192.168.3.125 \
  -profile=kubernetes apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver

# Output
2022/09/04 18:28:17 [INFO] generate received request
2022/09/04 18:28:17 [INFO] received CSR
2022/09/04 18:28:17 [INFO] generating key: rsa-2048
2022/09/04 18:28:17 [INFO] encoded CSR
2022/09/04 18:28:17 [INFO] signed certificate with serial number 379464281587401807466354805627556301631310652020

# Check the apiserver certificate
[root@k8s-master1 pki]# ls -l /etc/kubernetes/pki/
total 24
-rw-r--r-- 1 root root 1029 Sep  4 18:12 apiserver.csr
-rw------- 1 root root 1679 Sep  4 18:12 apiserver-key.pem
-rw-r--r-- 1 root root 1692 Sep  4 18:12 apiserver.pem
-rw-r--r-- 1 root root 1025 Sep  4 17:58 ca.csr
-rw------- 1 root root 1679 Sep  4 17:58 ca-key.pem
-rw-r--r-- 1 root root 1411 Sep  4 17:58 ca.pem
Note: in -hostname=10.0.0.1,192.168.3.128,127.0.0.1,...: 10.0.0.1 is the first IP of the Service CIDR and 192.168.3.128 is the virtual IP (VIP).
If this is not an HA cluster, replace 192.168.3.128 with the IP of k8s-master1.
12-3. Generate the apiserver aggregation (front-proxy) certificates on k8s-master1 (used by third-party components)
[root@k8s-master1 pki]# cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca
2022/09/04 18:54:42 [INFO] generating a new CA key and certificate from CSR
2022/09/04 18:54:42 [INFO] generate received request
2022/09/04 18:54:42 [INFO] received CSR
2022/09/04 18:54:42 [INFO] generating key: rsa-2048
2022/09/04 18:54:43 [INFO] encoded CSR
2022/09/04 18:54:43 [INFO] signed certificate with serial number 265819775794763427244719135230945669085146423906

[root@k8s-master1 pki]# cfssl gencert -ca=/etc/kubernetes/pki/front-proxy-ca.pem \
  -ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem -config=ca-config.json \
  -profile=kubernetes front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client

# Output
2022/09/04 18:57:40 [INFO] generate received request
2022/09/04 18:57:40 [INFO] received CSR
2022/09/04 18:57:40 [INFO] generating key: rsa-2048
2022/09/04 18:57:41 [INFO] encoded CSR
2022/09/04 18:57:41 [INFO] signed certificate with serial number 249192007423366874620123665920435604625423299525
2022/09/04 18:57:41 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
Note: the [WARNING] message here can be ignored.
12-4. Generate the controller-manager certificate on k8s-master1
[root@k8s-master1 pki]# cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager

# Output
2022/09/05 10:34:31 [INFO] generate received request
2022/09/05 10:34:31 [INFO] received CSR
2022/09/05 10:34:31 [INFO] generating key: rsa-2048
2022/09/05 10:34:31 [INFO] encoded CSR
2022/09/05 10:34:31 [INFO] signed certificate with serial number 472135505136916637983762283727347854295419577487
2022/09/05 10:34:31 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
12-5. On k8s-master1, run set-cluster to define a cluster entry
Note:
In --server=https://192.168.3.128:8443, change 192.168.3.128 to your own virtual IP (VIP).
If this is not an HA cluster, change 192.168.3.128:8443 to the IP of k8s-master1 and change 8443 to the apiserver port (default 6443).
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://192.168.3.128:8443 \
  --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
12-6. On k8s-master1, set a context
[root@k8s-master1 pki]# kubectl config set-context system:kube-controller-manager@kubernetes \
  --cluster=kubernetes \
  --user=system:kube-controller-manager \
  --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
12-7. On k8s-master1, run set-credentials to define a user entry
[root@k8s-master1 pki]# kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=/etc/kubernetes/pki/controller-manager.pem \
  --client-key=/etc/kubernetes/pki/controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
12-9. On k8s-master1, make this context the default
kubectl config use-context system:kube-controller-manager@kubernetes \
  --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
12-10. Generate the scheduler certificate on k8s-master1
cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler
12-11. On k8s-master1, create the scheduler kubeconfig (set-cluster, set-credentials, set-context, use-context)
Note:
In --server=https://192.168.3.128:8443, change 192.168.3.128 to your own virtual IP (VIP).
If this is not an HA cluster, change 192.168.3.128:8443 to the IP of k8s-master1 and change 8443 to the apiserver port (default 6443).
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://192.168.3.128:8443 \
  --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

# set-credentials: define a user entry
kubectl config set-credentials system:kube-scheduler \
  --client-certificate=/etc/kubernetes/pki/scheduler.pem \
  --client-key=/etc/kubernetes/pki/scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

# set-context: define a context
kubectl config set-context system:kube-scheduler@kubernetes \
  --cluster=kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

# use-context: make it the default context
kubectl config use-context system:kube-scheduler@kubernetes \
  --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
12-12. Generate the admin certificate and kubeconfig on k8s-master1
Note:
In --server=https://192.168.3.128:8443, change 192.168.3.128 to your own virtual IP (VIP).
If this is not an HA cluster, change 192.168.3.128:8443 to the IP of k8s-master1 and change 8443 to the apiserver port (default 6443).
# Generate the admin certificate
cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin

# set-cluster: define a cluster entry
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://192.168.3.128:8443 \
  --kubeconfig=/etc/kubernetes/admin.kubeconfig

# set-credentials: define a user entry
kubectl config set-credentials kubernetes-admin \
  --client-certificate=/etc/kubernetes/pki/admin.pem \
  --client-key=/etc/kubernetes/pki/admin-key.pem \
  --embed-certs=true \
  --kubeconfig=/etc/kubernetes/admin.kubeconfig

# set-context: define a context
kubectl config set-context kubernetes-admin@kubernetes \
  --cluster=kubernetes \
  --user=kubernetes-admin \
  --kubeconfig=/etc/kubernetes/admin.kubeconfig

# use-context: make it the default context
kubectl config use-context kubernetes-admin@kubernetes --kubeconfig=/etc/kubernetes/admin.kubeconfig
12-13. Create the ServiceAccount key pair on k8s-master1
# Generate sa.key
[root@k8s-master1 pki]# openssl genrsa -out /etc/kubernetes/pki/sa.key 2048

# Output
Generating RSA private key, 2048 bit long modulus
...............+++
...............................................................+++
e is 65537 (0x10001)

# Generate sa.pub
[root@k8s-master1 pki]# openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub
12-14. Copy the certificates to the other master nodes (k8s-master2, k8s-master3)
for NODE in k8s-master2 k8s-master3; do for FILE in $(ls /etc/kubernetes/pki | grep -v etcd); do scp /etc/kubernetes/pki/${FILE} $NODE:/etc/kubernetes/pki/${FILE}; done; done

for NODE in k8s-master2 k8s-master3; do for FILE in admin.kubeconfig controller-manager.kubeconfig scheduler.kubeconfig; do scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE}; done; done
12-15. Check the certificates
[root@k8s-master2 ~]# ls -l /etc/kubernetes/pki/
total 92
-rw-r--r-- 1 root root 1025 Sep  5 12:15 admin.csr
-rw------- 1 root root 1679 Sep  5 12:15 admin-key.pem
-rw-r--r-- 1 root root 1444 Sep  5 12:15 admin.pem
-rw-r--r-- 1 root root 1029 Sep  5 12:15 apiserver.csr
-rw------- 1 root root 1675 Sep  5 12:15 apiserver-key.pem
-rw-r--r-- 1 root root 1692 Sep  5 12:15 apiserver.pem
-rw-r--r-- 1 root root 1025 Sep  5 12:15 ca.csr
-rw------- 1 root root 1679 Sep  5 12:15 ca-key.pem
-rw-r--r-- 1 root root 1411 Sep  5 12:15 ca.pem
-rw-r--r-- 1 root root 1082 Sep  5 12:15 controller-manager.csr
-rw------- 1 root root 1679 Sep  5 12:15 controller-manager-key.pem
-rw-r--r-- 1 root root 1501 Sep  5 12:15 controller-manager.pem
-rw-r--r-- 1 root root  891 Sep  5 12:15 front-proxy-ca.csr
-rw------- 1 root root 1675 Sep  5 12:15 front-proxy-ca-key.pem
-rw-r--r-- 1 root root 1143 Sep  5 12:15 front-proxy-ca.pem
-rw-r--r-- 1 root root  903 Sep  5 12:15 front-proxy-client.csr
-rw------- 1 root root 1675 Sep  5 12:15 front-proxy-client-key.pem
-rw-r--r-- 1 root root 1188 Sep  5 12:15 front-proxy-client.pem
-rw-r--r-- 1 root root 1679 Sep  5 12:15 sa.key
-rw-r--r-- 1 root root  451 Sep  5 12:15 sa.pub
-rw-r--r-- 1 root root 1058 Sep  5 12:15 scheduler.csr
-rw------- 1 root root 1679 Sep  5 12:15 scheduler-key.pem
-rw-r--r-- 1 root root 1476 Sep  5 12:15 scheduler.pem

# Count the files
[root@k8s-master2 ~]# ls /etc/kubernetes/pki/ | wc -l
23
13. Kubernetes system component configuration (etcd)
Notes: the etcd.config.yml files below are almost identical; adjust the node name and IP addresses for each k8s-master node.
--data-dir: the working/data directory (${ETCD_DATA_DIR}); create it before starting the service.
--wal-dir: the WAL directory; for better performance, use an SSD or a different disk from --data-dir.
Note: in a real production environment, move data-dir and wal-dir to dedicated paths instead of the defaults used below.
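Following the note above, a minimal sketch that pre-creates the directories referenced by the sample configurations (adjust the paths if you relocate them):

# create the data and WAL directories used by the etcd.config.yml examples below
mkdir -p /var/lib/etcd/wal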
13-1. etcd.config.yml on k8s-master1
[root@k8s-master1 pki]# vim /etc/etcd/etcd.config.yml
name: 'k8s-master1'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.3.123:2380'
listen-client-urls: 'https://192.168.3.123:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.3.123:2380'
advertise-client-urls: 'https://192.168.3.123:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master1=https://192.168.3.123:2380,k8s-master2=https://192.168.3.124:2380,k8s-master3=https://192.168.3.125:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
13-2. etcd.config.yml on k8s-master2
[root@k8s-master2 ~]# vim /etc/etcd/etcd.config.yml
name: 'k8s-master2'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.3.124:2380'
listen-client-urls: 'https://192.168.3.124:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.3.124:2380'
advertise-client-urls: 'https://192.168.3.124:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master1=https://192.168.3.123:2380,k8s-master2=https://192.168.3.124:2380,k8s-master3=https://192.168.3.125:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
13-3. etcd.config.yml on k8s-master3
[root@k8s-master3 ~]# vim /etc/etcd/etcd.config.yml
name: 'k8s-master3'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.3.125:2380'
listen-client-urls: 'https://192.168.3.125:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.3.125:2380'
advertise-client-urls: 'https://192.168.3.125:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master1=https://192.168.3.123:2380,k8s-master2=https://192.168.3.124:2380,k8s-master3=https://192.168.3.125:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
13-4. Create the etcd systemd service
- Create the etcd service on all k8s-master nodes
vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Alias=etcd3.service
- Create the etcd certificate directory on all k8s-master nodes
# Create the directory
mkdir /etc/kubernetes/pki/etcd

# Create symlinks to the etcd certificates
ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
- Start etcd and enable it at boot
systemctl daemon-reload
systemctl enable --now etcd
Note: start etcd on all three k8s-master nodes at the same time, otherwise it will report errors or fail to start.
- Check the etcd cluster status
Note: change the IPs in --endpoints="192.168.3.123:2379,192.168.3.124:2379,192.168.3.125:2379" to the addresses of your own three k8s-master nodes.
[root@k8s-master1 ~]# export ETCDCTL_API=3
[root@k8s-master1 ~]# etcdctl --endpoints="192.168.3.123:2379,192.168.3.124:2379,192.168.3.125:2379" \
  --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem \
  --cert=/etc/kubernetes/pki/etcd/etcd.pem \
  --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint status --write-out=table

# Output
+--------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|      ENDPOINT      |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+--------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 192.168.3.123:2379 |  e71f60ac7dc6dbd |   3.5.1 |   25 kB |      true |      false |         9 |         25 |                 25 |        |
| 192.168.3.124:2379 | 8a55f6614a037795 |   3.5.1 |   20 kB |     false |      false |         9 |         25 |                 25 |        |
| 192.168.3.125:2379 | bfc043b42836fc81 |   3.5.1 |   20 kB |     false |      false |         9 |         25 |                 25 |        |
+--------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
Note: in the IS LEADER column, true marks the leader and false marks the followers.
14. Install the high availability components
Notes:
High availability configuration (note: if this is not an HA cluster, haproxy and keepalived do not need to be installed).
If you are installing on a public cloud, you can also skip this chapter and use the cloud load balancer instead, such as Alibaba Cloud SLB or Tencent Cloud ELB.
On public clouds, use the provider's load balancer in place of haproxy and keepalived, because most public clouds do not support keepalived. If you use Alibaba Cloud, the kubectl control host cannot be placed on a master node; Tencent Cloud is recommended because Alibaba Cloud's SLB has a loopback problem (servers behind the SLB cannot reach the SLB itself), which Tencent Cloud has fixed.
14-1. Install keepalived and haproxy on all k8s-master nodes
yum install -y keepalived haproxy
14-2. Configure HAProxy on all k8s-master nodes (k8s-master1, k8s-master2, k8s-master3). Refer to the HAProxy documentation for details; the HAProxy configuration is identical on every master.
[root@k8s-master1 ~]# vim /etc/haproxy/haproxy.cfg
global
  maxconn  2000
  ulimit-n  16384
  log  127.0.0.1 local0 err
  stats timeout 30s

defaults
  mode  http
  log global
  option  httplog
  timeout connect 5000
  timeout client  50000
  timeout server  50000
  timeout http-request 15s
  timeout http-keep-alive 15s

frontend monitor-in
  bind *:33305
  mode http
  option httplog
  monitor-uri /monitor

frontend k8s-master
  bind 0.0.0.0:8443
  bind 127.0.0.1:8443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master

backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server k8s-master1  192.168.3.123:6443  check
  server k8s-master2  192.168.3.124:6443  check
  server k8s-master3  192.168.3.125:6443  check
14-3. Configure Keepalived on all k8s-master nodes. The configuration differs per node; pay attention to each node's IP address and network interface (interface).
- k8s-master1 configuration (MASTER)
[root@k8s-master1 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    mcast_src_ip 192.168.3.123
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.3.128
    }
    track_script {
        chk_apiserver
    }
}
- k8s-master2 configuration (BACKUP)
[root@k8s-master2 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    mcast_src_ip 192.168.3.124
    virtual_router_id 51
    priority 99
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.3.128
    }
    track_script {
        chk_apiserver
    }
}
- k8s-master3 configuration (BACKUP)
[root@k8s-master3 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    mcast_src_ip 192.168.3.125
    virtual_router_id 51
    priority 99
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.3.128
    }
    track_script {
        chk_apiserver
    }
}
Notes:
- vrrp_script chk_apiserver { script "/etc/keepalived/check_apiserver.sh" } points to the health check script.
- mcast_src_ip is the IP address of the local node (192.168.3.123/124/125 respectively).
- priority is the VRRP priority (a higher number wins the election).
- virtual_ipaddress { 192.168.3.128 } is the VIP.
14-4. Configure the Keepalived health check script on all k8s-master nodes
[root@k8s-master1 ~]# vim /etc/keepalived/check_apiserver.sh
#!/bin/bash
# If haproxy is detected as down 3 times in a row, stop keepalived so the VIP fails over to another node

err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
chmod +x /etc/keepalived/check_apiserver.sh
14-5. Start haproxy and keepalived
[root@k8s-master1 ~]# systemctl daemon-reload
[root@k8s-master1 ~]# systemctl enable --now haproxy
[root@k8s-master1 ~]# systemctl enable --now keepalived
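A quick way to confirm the VIP is up and haproxy is listening on 8443 (telnet was installed back in section 2). If the telnet connection opens and then closes immediately, that is expected at this point, because kube-apiserver is not running yet:

ping -c 2 192.168.3.128
telnet 192.168.3.128 8443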
15. Kubernetes component configuration
15-1. Create the required directories on all nodes (masters and workers)
mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes
15-2. Create the kube-apiserver service file
Note: create the kube-apiserver service on all k8s-master nodes. If this is not an HA cluster, replace 192.168.3.128 with the IP of k8s-master1.
- k8s-master1 configuration
Note:
The k8s Service CIDR used here is 10.0.0.0/16; it must not overlap with the host network or the Pod CIDR, so adjust it as needed.
--advertise-address=192.168.3.123 is the IP address of k8s-master1.
--service-cluster-ip-range=10.0.0.0/16 is the Service CIDR.
[root@k8s-master1 ~]# vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2 \
      --logtostderr=true \
      --allow-privileged=true \
      --bind-address=0.0.0.0 \
      --secure-port=6443 \
      --insecure-port=0 \
      --advertise-address=192.168.3.123 \
      --service-cluster-ip-range=10.0.0.0/16 \
      --service-node-port-range=30000-32767 \
      --etcd-servers=https://192.168.3.123:2379,https://192.168.3.124:2379,https://192.168.3.125:2379 \
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
      --etcd-certfile=/etc/etcd/ssl/etcd.pem \
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
      --client-ca-file=/etc/kubernetes/pki/ca.pem \
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
      --service-account-key-file=/etc/kubernetes/pki/sa.pub \
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
      --authorization-mode=Node,RBAC \
      --enable-bootstrap-token-auth=true \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
      --requestheader-allowed-names=aggregator \
      --requestheader-group-headers=X-Remote-Group \
      --requestheader-extra-headers-prefix=X-Remote-Extra- \
      --requestheader-username-headers=X-Remote-User
      # --token-auth-file=/etc/kubernetes/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
- k8s-master2 configuration
Note:
The k8s Service CIDR used here is 10.0.0.0/16; it must not overlap with the host network or the Pod CIDR, so adjust it as needed.
--advertise-address=192.168.3.124 is the IP address of k8s-master2.
--service-cluster-ip-range=10.0.0.0/16 is the Service CIDR.
[root@k8s-master2 ~]# vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2 \
      --logtostderr=true \
      --allow-privileged=true \
      --bind-address=0.0.0.0 \
      --secure-port=6443 \
      --insecure-port=0 \
      --advertise-address=192.168.3.124 \
      --service-cluster-ip-range=10.0.0.0/16 \
      --service-node-port-range=30000-32767 \
      --etcd-servers=https://192.168.3.123:2379,https://192.168.3.124:2379,https://192.168.3.125:2379 \
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
      --etcd-certfile=/etc/etcd/ssl/etcd.pem \
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
      --client-ca-file=/etc/kubernetes/pki/ca.pem \
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
      --service-account-key-file=/etc/kubernetes/pki/sa.pub \
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
      --authorization-mode=Node,RBAC \
      --enable-bootstrap-token-auth=true \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
      --requestheader-allowed-names=aggregator \
      --requestheader-group-headers=X-Remote-Group \
      --requestheader-extra-headers-prefix=X-Remote-Extra- \
      --requestheader-username-headers=X-Remote-User
      # --token-auth-file=/etc/kubernetes/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
- k8s-master3 configuration
Note:
The k8s Service CIDR used here is 10.0.0.0/16; it must not overlap with the host network or the Pod CIDR, so adjust it as needed.
--advertise-address=192.168.3.125 is the IP address of k8s-master3.
--service-cluster-ip-range=10.0.0.0/16 is the Service CIDR.
[root@k8s-master3 ~]# vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2 \
      --logtostderr=true \
      --allow-privileged=true \
      --bind-address=0.0.0.0 \
      --secure-port=6443 \
      --insecure-port=0 \
      --advertise-address=192.168.3.125 \
      --service-cluster-ip-range=10.0.0.0/16 \
      --service-node-port-range=30000-32767 \
      --etcd-servers=https://192.168.3.123:2379,https://192.168.3.124:2379,https://192.168.3.125:2379 \
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
      --etcd-certfile=/etc/etcd/ssl/etcd.pem \
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
      --client-ca-file=/etc/kubernetes/pki/ca.pem \
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
      --service-account-key-file=/etc/kubernetes/pki/sa.pub \
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
      --authorization-mode=Node,RBAC \
      --enable-bootstrap-token-auth=true \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
      --requestheader-allowed-names=aggregator \
      --requestheader-group-headers=X-Remote-Group \
      --requestheader-extra-headers-prefix=X-Remote-Extra- \
      --requestheader-username-headers=X-Remote-User
      # --token-auth-file=/etc/kubernetes/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
15-3. Start kube-apiserver on all k8s-master nodes
systemctl daemon-reload && systemctl enable --now kube-apiserver
16. Controller Manager
Notes:
Configure the kube-controller-manager service on all k8s-master nodes (the configuration is identical on every master).
The k8s Pod CIDR used here is 172.16.0.0/16; it must not overlap with the host network or the k8s Service CIDR.
--cluster-cidr=172.16.0.0/16 is the Pod CIDR.
16-1. Create the kube-controller-manager service file
vim /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
      --v=2 \
      --logtostderr=true \
      --address=127.0.0.1 \
      --root-ca-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
      --service-account-private-key-file=/etc/kubernetes/pki/sa.key \
      --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
      --leader-elect=true \
      --use-service-account-credentials=true \
      --node-monitor-grace-period=40s \
      --node-monitor-period=5s \
      --pod-eviction-timeout=2m0s \
      --controllers=*,bootstrapsigner,tokencleaner \
      --allocate-node-cidrs=true \
      --cluster-cidr=172.16.0.0/16 \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
      --authentication-kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
      --authorization-kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
      --node-cidr-mask-size=24

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
16-2. Start kube-controller-manager on all k8s-master nodes and enable it at boot
systemctl daemon-reload
systemctl enable --now kube-controller-manager
17. Scheduler
Configure the kube-scheduler service on all k8s-master nodes (the configuration is identical on every master).
17-1. Create the kube-scheduler service file
vim /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-scheduler \
      --v=2 \
      --logtostderr=true \
      --address=127.0.0.1 \
      --leader-elect=true \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
      --authentication-kubeconfig=/etc/kubernetes/scheduler.kubeconfig \
      --authorization-kubeconfig=/etc/kubernetes/scheduler.kubeconfig \
      --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
17-2. Start kube-scheduler and enable it at boot
systemctl daemon-reload
systemctl enable --now kube-scheduler
18. Bootstrapping configuration
Note: the bootstrap configuration only needs to be created on k8s-master1.
Note: if this is not an HA cluster, change 192.168.3.128:8443 to the k8s-master IP and change 8443 to the apiserver port (default 6443).
18-1. Create the bootstrap kubeconfig
cd /root/k8s-ha-install/bootstrap/

# set-cluster: define a cluster entry
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true --server=https://192.168.3.128:8443 \
  --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

# set-credentials: define a user entry
kubectl config set-credentials tls-bootstrap-token-user \
  --token=c8ad9c.2e4d610cf3e7426e --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

# set-context: define a context
kubectl config set-context tls-bootstrap-token-user@kubernetes \
  --cluster=kubernetes --user=tls-bootstrap-token-user \
  --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

# use-context: make it the default context
kubectl config use-context tls-bootstrap-token-user@kubernetes --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
Note:
--server=https://192.168.3.128:8443: 192.168.3.128 is the VIP.
--token=c8ad9c.2e4d610cf3e7426e: c8ad9c and 2e4d610cf3e7426e are the token-id and token-secret values set in /root/k8s-ha-install/bootstrap/bootstrap.secret.yaml.
If you change the token here, update that file to match, including the `name: bootstrap-token-c8ad9c` entry, whose c8ad9c suffix must also stay consistent (a quick check is shown below).
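A small cross-check, run from the bootstrap directory, that the values in bootstrap.secret.yaml match the --token used above:

grep -E 'token-id|token-secret|name: bootstrap-token' bootstrap.secret.yaml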
18-2. Create bootstrap.secret.yaml
- Copy /etc/kubernetes/admin.kubeconfig to /root/.kube/config. Since /root/.kube does not exist yet, create it first:
mkdir -p /root/.kube ; cp /etc/kubernetes/admin.kubeconfig /root/.kube/config
Note: if you skip the copy above, creating bootstrap.secret.yaml will fail with a connection/authentication error. If other nodes show the same error, run the same copy command on them as well before creating bootstrap.secret.yaml.
- Check the cluster status. Only proceed if the cluster is healthy; otherwise troubleshoot the k8s components first. The command and a healthy result:
[root@k8s-master1 bootstrap]# kubectl get cs

# Expected healthy output
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-2               Healthy   {"health":"true","reason":""}
etcd-0               Healthy   {"health":"true","reason":""}
etcd-1               Healthy   {"health":"true","reason":""}
- Create bootstrap.secret.yaml
[root@k8s-master1 bootstrap]# kubectl create -f bootstrap.secret.yaml

# Output
secret/bootstrap-token-c8ad9c created
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/node-autoapprove-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/node-autoapprove-certificate-rotation created
clusterrole.rbac.authorization.k8s.io/system:kube-apiserver-to-kubelet created
clusterrolebinding.rbac.authorization.k8s.io/system:kube-apiserver created
19. Node configuration
19-1. On k8s-master1, copy the certificates to k8s-master2, k8s-master3, node-1, and node-2.
# Change directory
cd /etc/kubernetes

# Copy the certificates
for NODE in k8s-master2 k8s-master3 node-1 node-2; do
  ssh $NODE mkdir -p /etc/kubernetes/pki
  for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig; do
    scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}
  done
done
19-2. kubelet configuration
- Create the required directories on all nodes
mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/
- Configure the kubelet service on all nodes
[root@k8s-master1 kubernetes]# vim /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
- On all nodes, if the Runtime is Containerd, use the following kubelet drop-in configuration (it could also be written directly into kubelet.service):
[root@k8s-master1 kubernetes]# vim /etc/systemd/system/kubelet.service.d/10-kubelet.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock --cgroup-driver=systemd"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' "
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS
- On all nodes, if the Runtime is Docker, use the following kubelet configuration instead:
[root@k8s-master1 kubernetes]# vim /etc/systemd/system/kubelet.service.d/10-kubelet.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.5"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' "
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS
Note: KUBELET_CONFIG_ARGS points at /etc/kubernetes/kubelet-conf.yml, which has to be created separately (next step).
- Create the kubelet configuration file on all nodes
Note: if you changed the k8s Service CIDR, update clusterDNS in kubelet-conf.yml to the tenth address of the Service CIDR, for example 10.0.0.10.
clusterDNS:
- 10.0.0.10 — change this to the tenth IP of your own Service CIDR (if you are unsure which address that is, work it out with a subnet calculator).
[root@k8s-master1 kubernetes]# vim /etc/kubernetes/kubelet-conf.yml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.0.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
- Start kubelet and enable it at boot
systemctl daemon-reload
systemctl enable --now kubelet
- Check the cluster state
[root@k8s-master1 kubernetes]# kubectl get node
NAME          STATUS     ROLES    AGE     VERSION
k8s-master1   NotReady   <none>   4m45s   v1.23.0
k8s-master2   NotReady   <none>   4m45s   v1.23.0
k8s-master3   NotReady   <none>   4m46s   v1.23.0
node-1        NotReady   <none>   4m45s   v1.23.0
node-2        NotReady   <none>   4m46s   v1.23.0
19-3. kube-proxy configuration
Note: if this is not an HA cluster, change 192.168.3.128:8443 to the k8s-master1 IP and change 8443 to the apiserver port (default 6443).
Note: the following commands are all run on k8s-master1.
kubectl -n kube-system create serviceaccount kube-proxy
kubectl create clusterrolebinding system:kube-proxy --clusterrole system:node-proxier --serviceaccount kube-system:kube-proxy

# Set the environment variables used by the kubectl config commands below
SECRET=$(kubectl -n kube-system get sa/kube-proxy --output=jsonpath='{.secrets[0].name}')
JWT_TOKEN=$(kubectl -n kube-system get secret/$SECRET --output=jsonpath='{.data.token}' | base64 -d)
PKI_DIR=/etc/kubernetes/pki
K8S_DIR=/etc/kubernetes

# set-cluster: define a cluster entry
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true --server=https://192.168.3.128:8443 --kubeconfig=${K8S_DIR}/kube-proxy.kubeconfig

# set-credentials: define a user entry
kubectl config set-credentials kubernetes --token=${JWT_TOKEN} --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

# set-context: define a context
kubectl config set-context kubernetes --cluster=kubernetes --user=kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

# use-context: make it the default context
kubectl config use-context kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
- Copy the kubeconfig to k8s-master2 and k8s-master3
for NODE in k8s-master2 k8s-master3; do
  scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig
done
- Copy the kubeconfig to node-1 and node-2
for NODE in node-1 node-2; do
  scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig
done
- Create the kube-proxy service file on all nodes
vim /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy.yaml \
  --v=2

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
- Create the kube-proxy.yaml configuration file on all nodes
Note: if you changed the cluster Pod CIDR, update clusterCIDR in kube-proxy.yaml to your own Pod CIDR.
clusterCIDR: 172.16.0.0/16 is the Pod CIDR setting.
vim /etc/kubernetes/kube-proxy.yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
  qps: 5
clusterCIDR: 172.16.0.0/16
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  masqueradeAll: true
  minSyncPeriod: 5s
  scheduler: "rr"
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms
- Start kube-proxy on all nodes and enable it at boot
systemctl daemon-reload
systemctl enable --now kube-proxy
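An optional verification that kube-proxy is running in IPVS mode (ipvsadm was installed in section 8; the 127.0.0.1:10249 metrics address comes from the kube-proxy.yaml above and should report ipvs):

curl 127.0.0.1:10249/proxyMode
ipvsadm -Ln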
20. Install Calico
Note: the following steps are only performed on k8s-master1.
20-1. Modify calico.yaml
- Change to the Calico directory
cd /root/k8s-ha-install/calico/
- Use sed to replace the POD_CIDR placeholder in calico.yaml with the Pod CIDR
# Set the Pod CIDR
sed -i "s#POD_CIDR#172.16.0.0/16#g" calico.yaml

# Verify the Pod CIDR was substituted
[root@k8s-master1 calico]# cat calico.yaml | grep "IPV4POOL_CIDR" -A 1
            - name: CALICO_IPV4POOL_CIDR
              value: "172.16.0.0/16"
Note: change 172.16.0.0/16 in the sed command to your own Pod CIDR; alternatively, open calico.yaml in vim and edit the Pod CIDR manually.
20-2. Install Calico
[root@k8s-master1 calico]# kubectl apply -f calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
service/calico-typha created
deployment.apps/calico-typha created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/calico-typha created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
poddisruptionbudget.policy/calico-kube-controllers created
Note: the Warning message can be ignored.
20-3. Check Calico status
[root@k8s-master1 calico]# kubectl get pod -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-6f6595874c-82hrx   1/1     Running   0          5m47s
calico-node-6vlp4                          1/1     Running   0          5m48s
calico-node-7bv5c                          1/1     Running   0          5m48s
calico-node-9vtmt                          1/1     Running   0          5m48s
calico-node-clqls                          1/1     Running   0          5m48s
calico-node-hckp8                          1/1     Running   0          5m48s
calico-typha-6b6cf8cbdf-t5b74              1/1     Running   0          5m48s
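Once the Calico Pods are Running, the nodes should report Ready and the IP pool should match the Pod CIDR configured earlier. A quick sanity check (a sketch; the pool is queried through the ippools CRD created above):

kubectl get node -o wide                                        # all nodes should be Ready
kubectl get ippools.crd.projectcalico.org -o yaml | grep cidr   # should show 172.16.0.0/16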
21. Install CoreDNS
Note: install only on the k8s-master1 node.
21-1. Change to the CoreDNS directory
cd /root/k8s-ha-install/CoreDNS/
21-2. Modify the coredns.yaml configuration file
Note: if you changed the Kubernetes Service network, the CoreDNS service IP must be set to the tenth IP of that Service network.
# Read the kubernetes Service IP and derive the tenth IP of the Service network
# (appending 0 to 10.0.0.1 yields 10.0.0.10)
COREDNS_SERVICE_IP=`kubectl get svc | grep kubernetes | awk '{print $3}'`0

# Substitute the tenth IP into coredns.yaml using the variable set above
sed -i "s#KUBEDNS_SERVICE_IP#${COREDNS_SERVICE_IP}#g" coredns.yaml
Note: clusterIP: 10.0.0.10 in coredns.yaml must match clusterDNS: - 10.0.0.10 in kubelet-conf.yml, otherwise Pods will not be able to resolve Services; a quick check is shown below.
You can also open coredns.yaml with vim and set the tenth Service IP manually (the original screenshot is omitted).
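Before installing CoreDNS it is worth confirming that the two values really match; a minimal sketch, assuming the file paths used earlier in this guide:

grep "clusterIP:" coredns.yaml                              # the IP CoreDNS will be created with
grep -A 1 "clusterDNS" /etc/kubernetes/kubelet-conf.yml     # the DNS IP kubelet hands to Pods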
21-3. Install CoreDNS
[root@k8s-master1 CoreDNS]# kubectl create -f coredns.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
21-4. Check CoreDNS status
[root@k8s-master1 CoreDNS]# kubectl get po -n kube-system -l k8s-app=kube-dns
NAME                      READY   STATUS    RESTARTS   AGE
coredns-5db5696c7-xkmzg   1/1     Running   0          46s
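As a final check, the kube-dns Service should be listening on the tenth Service IP set above. The ports shown below are the usual CoreDNS defaults and may differ slightly depending on the coredns.yaml used:

kubectl get svc -n kube-system kube-dns
# NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
# kube-dns   ClusterIP   10.0.0.10    <none>        53/UDP,53/TCP,9153/TCP   1m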
22. Install Metrics Server
Note: in recent Kubernetes versions, system resource metrics are collected by Metrics Server, which gathers CPU, memory, disk and network usage for nodes and Pods.
22-1. Install Metrics Server
cd /root/k8s-ha-install/metrics-server
[root@k8s-master1 metrics-server]# kubectl create -f comp.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
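Metrics Server usually needs a minute or two before it starts returning data. The k8s-app=metrics-server label is the upstream default and is assumed here; once the Pod is Running, kubectl top should show usage figures for nodes and Pods:

kubectl get po -n kube-system -l k8s-app=metrics-server   # wait until Running
kubectl top node
kubectl top po -n kube-system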
23. Deploy the Dashboard
23-1. Install the Dashboard
cd /root/k8s-ha-install/dashboard/
[root@k8s-master1 dashboard]# kubectl create -f .
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
23-2. Check Dashboard status
[root@k8s-master1 dashboard]# kubectl get po -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-7fcdff5f4c-4zzrw   1/1     Running   0          102s
kubernetes-dashboard-85f59f8ff7-6p6zm        1/1     Running   0          102s
23-3. Check the Dashboard port
[root@k8s-master1 dashboard]# kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.10.150.85   <none>        443:31988/TCP   17m
Note: in 443:31988/TCP, 31988 is the NodePort used to reach the Dashboard.
Accessing the Dashboard:
The Dashboard can be reached through any host IP or the VIP, plus that port:
Open https://192.168.3.123:31988 (replace 31988 with your own port) and choose Token as the login method.
It can also be accessed through the VIP: https://192.168.3.128:31988
23-4. Command to obtain the login token
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
24. Install Kuboard
Note: Kuboard is a graphical management UI for Kubernetes.
24-1. Install
kubectl apply -f https://kuboard.cn/install-script/kuboard.yaml
24-2. Check Kuboard status
[root@k8s-master1 dashboard]# kubectl get pods -l k8s.kuboard.cn/name=kuboard -n kube-system
NAME                      READY   STATUS    RESTARTS   AGE
kuboard-5bb9b6cb7-ldktv   1/1     Running   0          8m19s
24-3. Check the Kuboard port
[root@k8s-master1 dashboard]# kubectl get svc -n kube-system kuboard
NAME      TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kuboard   NodePort   10.14.142.156   <none>        80:32567/TCP   9m59s
Note:
In the kuboard line, 80:32567/TCP means that NodePort 32567 is used to access Kuboard.
Via the VIP: 192.168.3.128:32567
Via the real k8s-master1 host IP: 192.168.3.123:32567
24-4. Obtain the Kuboard token used as the login credential:
echo $(kubectl -n kube-system get secret $(kubectl -n kube-system get secret | grep ^kuboard-user | awk '{print $1}') -o go-template='{{.data.token}}' | base64 -d)
25. Cluster verification
25-1. Deploy a busybox Pod
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF
25-2. Verify the cluster
- Pods must be able to resolve Services
[root@k8s-master1 ~]# kubectl exec busybox -n default -- nslookup kubernetes
Server:    10.0.0.10
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local
Note: the kubernetes Service IP shown here (10.0.0.1) can be checked in advance with kubectl get svc; 10.0.0.10 is the tenth IP of the Service network, i.e. the cluster DNS address.
- Pods must be able to resolve Services in other namespaces
[root@k8s-master1 ~]# kubectl exec busybox -n default -- nslookup kube-dns.kube-system
Server:    10.0.0.10
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kube-dns.kube-system
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local
- Every node must be able to reach the kubernetes Service on port 443 and the kube-dns Service on port 53; this can be tested with telnet, as shown below.
# Test port 443 of the kubernetes Service; if telnet connects and does not exit immediately, the port is reachable
[root@k8s-master1 ~]# telnet 10.0.0.1 443
Trying 10.0.0.1...
Connected to 10.0.0.1.
Escape character is '^]'.

# Test port 53 of the kube-dns Service
[root@k8s-master3 ~]# telnet 10.0.0.10 53
Trying 10.0.0.10...
Connected to 10.0.0.10.
Escape character is '^]'.
Connection closed by foreign host.

[root@k8s-master3 ~]# curl 10.0.0.10:53
curl: (52) Empty reply from server

# Note: the two methods above are both ways of testing kube-dns port 53
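If telnet is not installed on a node, nc (from the nmap-ncat package) can perform the same port checks; the exact success message varies by version:

nc -zv 10.0.0.1 443     # kubernetes Service (apiserver) port
nc -zv 10.0.0.10 53     # kube-dns Service port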
- Pods must be able to reach each other:
a) within the same namespace
b) across namespaces
c) across machines
Note: open a shell in the busybox Pod created above and use ping to test Pod-to-Pod connectivity, as follows:
# List the node and IP each Pod is running on
[root@k8s-master1 ~]# kubectl get po -n kube-system -owide
NAME                                       READY   STATUS    RESTARTS       AGE   IP               NODE          NOMINATED NODE   READINESS GATES
calico-kube-controllers-6f6595874c-82hrx   1/1     Running   2 (116m ago)   14h   172.26.159.130   k8s-master1   <none>           <none>
calico-node-6vlp4                          1/1     Running   1 (116m ago)   14h   192.168.3.127    node-2        <none>           <none>
calico-node-7bv5c                          1/1     Running   1              14h   192.168.3.126    node-1        <none>           <none>
calico-node-9vtmt                          1/1     Running   1 (117m ago)   14h   192.168.3.124    k8s-master2   <none>           <none>
calico-node-clqls                          1/1     Running   1 (117m ago)   14h   192.168.3.125    k8s-master3   <none>           <none>
calico-node-hckp8                          1/1     Running   1 (117m ago)   14h   192.168.3.123    k8s-master1   <none>           <none>
calico-typha-6b6cf8cbdf-t5b74              1/1     Running   1 (117m ago)   14h   192.168.3.123    k8s-master1   <none>           <none>
coredns-5db5696c7-xkmzg                    1/1     Running   1 (117m ago)   14h   172.25.115.67    k8s-master2   <none>           <none>
kuboard-5bb9b6cb7-ldktv                    1/1     Running   1 (117m ago)   12h   172.25.115.68    k8s-master2   <none>           <none>
metrics-server-6bf7dcd649-wdnkk            1/1     Running   1 (117m ago)   13h   172.27.95.194    k8s-master3   <none>           <none>

# Open a shell in the busybox Pod
[root@k8s-master1 ~]# kubectl exec -ti busybox -- sh

# Ping an IP in the Service network
/ # ping 10.0.0.10
PING 10.0.0.10 (10.0.0.10): 56 data bytes
64 bytes from 10.0.0.10: seq=0 ttl=64 time=0.266 ms
64 bytes from 10.0.0.10: seq=1 ttl=64 time=0.101 ms
64 bytes from 10.0.0.10: seq=2 ttl=64 time=0.075 ms
64 bytes from 10.0.0.10: seq=3 ttl=64 time=0.071 ms
^C
--- 10.0.0.10 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.071/0.128/0.266 ms

# Check that the Pod network is reachable (Pods on other nodes)
/ # ping 172.25.115.67
PING 172.25.115.67 (172.25.115.67): 56 data bytes
64 bytes from 172.25.115.67: seq=0 ttl=62 time=3.049 ms
64 bytes from 172.25.115.67: seq=1 ttl=62 time=1.003 ms
64 bytes from 172.25.115.67: seq=2 ttl=62 time=1.272 ms
64 bytes from 172.25.115.67: seq=3 ttl=62 time=1.165 ms
^C
--- 172.25.115.67 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 1.003/1.622/3.049 ms

/ # ping 172.27.95.194
PING 172.27.95.194 (172.27.95.194): 56 data bytes
64 bytes from 172.27.95.194: seq=0 ttl=63 time=0.493 ms
64 bytes from 172.27.95.194: seq=1 ttl=63 time=0.086 ms
64 bytes from 172.27.95.194: seq=2 ttl=63 time=0.087 ms
64 bytes from 172.27.95.194: seq=3 ttl=63 time=0.103 ms
^C
--- 172.27.95.194 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.086/0.192/0.493 ms

# Ping a real host IP
/ # ping 192.168.3.127
PING 192.168.3.127 (192.168.3.127): 56 data bytes
64 bytes from 192.168.3.127: seq=0 ttl=63 time=3.011 ms
64 bytes from 192.168.3.127: seq=1 ttl=63 time=5.612 ms
64 bytes from 192.168.3.127: seq=2 ttl=63 time=5.952 ms
64 bytes from 192.168.3.127: seq=3 ttl=63 time=4.069 ms
^C
--- 192.168.3.127 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 3.011/4.661/5.952 ms
Note: if all of the tests above pass, the cluster is ready for normal use.
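Optionally, a throwaway Deployment can confirm that scheduling and image pulls work on every node. The name test-nginx and the nginx:alpine image are arbitrary examples, not part of the original procedure; remember to clean up afterwards:

kubectl create deployment test-nginx --image=nginx:alpine --replicas=3
kubectl get po -o wide -l app=test-nginx    # Pods should spread across nodes and reach Running
kubectl delete deployment test-nginx        # clean up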
26. Kubernetes security and hardening settings
26-1. Configure Docker as the container runtime
vim /etc/docker/daemon.json
{
  "registry-mirrors": [
    "https://registry.docker-cn.com",
    "http://hub-mirror.c.163.com",
    "https://docker.mirrors.ustc.edu.cn"
  ],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "max-concurrent-downloads": 10,
  "max-concurrent-uploads": 5,
  "log-opts": {
    "max-size": "300m",
    "max-file": "2"
  },
  "live-restore": true
}
Note:
"max-concurrent-downloads": 10 — maximum number of concurrent image download threads
"max-concurrent-uploads": 5 — maximum number of concurrent image upload threads
"log-opts" — container log rotation: "max-size": "300m" rotates a container's log file once it reaches 300 MB, and "max-file": "2" keeps at most two rotated files
"live-restore": true — containers keep running while the Docker daemon itself is restarted
26-2. On the master nodes, add a certificate-issuance validity period to kube-controller-manager.service
vim /usr/lib/systemd/system/kube-controller-manager.service

# Add the following parameter to the unit file:
--cluster-signing-duration=876000h0m0s \

# After adding it, restart the kube-controller-manager service
systemctl daemon-reload
systemctl restart kube-controller-manager
Note: even though a very long validity (876000h0m0s) is requested, that exact duration will not necessarily be honored; under normal circumstances the maximum certificate lifetime is five years.
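To confirm the flag was actually picked up after the restart, the running process arguments can be inspected (a quick sketch):

systemctl status kube-controller-manager --no-pager | grep Active              # should be active (running)
ps -ef | grep "[k]ube-controller-manager" | tr ' ' '\n' | grep cluster-signing-duration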
26-3. Add the relevant parameters to 10-kubelet.conf on all nodes
vim /etc/systemd/system/kubelet.service.d/10-kubelet.conf

# The existing line looks like:
# Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' "
# Append the following parameters after node.kubernetes.io/node='':
--tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --image-pull-progress-deadline=30m

# The line should end up as:
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --image-pull-progress-deadline=30m"
Note:
--tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 restricts kubelet to stronger TLS cipher suites; the default set is weaker and easier for external scanners to find and exploit when the port is exposed.
--image-pull-progress-deadline=30m raises the image pull deadline; with the short default, slow networks can cause pulls to time out and start over repeatedly, so a longer deadline is set. Restart kubelet afterwards, as shown below.
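systemd must reload the drop-in and kubelet must be restarted on each node before the new KUBELET_EXTRA_ARGS apply (a minimal sketch):

systemctl daemon-reload
systemctl restart kubelet
systemctl status kubelet --no-pager | grep Active   # should be active (running)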
26-4. Add the relevant parameters to kubelet-conf.yml
vim /etc/kubernetes/kubelet-conf.yml

# Add the following settings:
rotateServerCertificates: true
allowedUnsafeSysctls:
 - "net.core*"
 - "net.ipv4.*"
kubeReserved:
  cpu: "1"
  memory: 1Gi
  ephemeral-storage: 10Gi
systemReserved:
  cpu: "1"
  memory: 1Gi
  ephemeral-storage: 10Gi

# After adding them, restart kubelet and check that it is running normally
systemctl daemon-reload
systemctl restart kubelet
systemctl status kubelet
allowedUnsafeSysctls ("net.core*" and "net.ipv4.*"): by default Kubernetes does not allow these kernel parameters to be modified from Pods; this whitelist enables it. It has security implications, so use it with care.
kubeReserved and systemReserved: reserve CPU, memory and ephemeral storage for the Kubernetes components and for the operating system respectively. In production, size these reservations for your own environment and err on the generous side.
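Once kubelet has restarted with these reservations, the effect shows up as the gap between each node's Capacity and Allocatable. A sketch; substitute one of your own node names for k8s-master1:

kubectl describe node k8s-master1 | grep -A 6 -E "^Capacity|^Allocatable"
# Allocatable cpu/memory/ephemeral-storage should be lower than Capacity by roughly
# the kubeReserved + systemReserved amounts (plus any eviction thresholds).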