Docker Learning [8] Container Monitoring: Setting Up a Kubernetes (K8s) Cluster
Cluster types:
Kubernetes clusters broadly fall into two categories: single-master and multi-master.
Single master, multiple workers: one master node and several worker nodes. Simple to set up, but the master is a single point of failure; suited to test environments.
Multiple masters, multiple workers: several master nodes and several worker nodes. More involved to set up, but highly available; suited to production environments.
Installation methods (this walkthrough uses kubeadm):
Kubernetes can be deployed in several ways; the mainstream options are kubeadm, minikube, and binary packages.
- Minikube: a tool for quickly standing up a single-node Kubernetes instance
- Kubeadm: a tool for quickly bootstrapping a Kubernetes cluster, https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/
- Binary packages: download each component's binaries from the official site and install them one by one; this approach is the most instructive for understanding the Kubernetes components, https://github.com/kubernetes/kubernetes
I. System requirements
1. Virtual machine configuration
Node | CPU | Memory | IP |
---|---|---|---|
CentOS8Shaowenhua (Master) | 4 cores | 2GB | 192.168.27.128 |
Docker1 (Node1) | 4 cores | 2GB | 192.168.27.134 |
Docker2 (Node2) | 4 cores | 2GB | 192.168.27.135 |
OS version:
# Installing a Kubernetes cluster this way requires CentOS 7.5 or later
[root@CentOS8Shaowenhua ~]# cat /etc/redhat-release
CentOS Linux release 8.4.2105
2. Set hostnames and configure host mappings
To let cluster nodes reach each other by name, configure hostname resolution. In production, an internal DNS server is recommended.
(1) Master node
[root@CentOS8Shaowenhua ~]# cat /etc/hostname
CentOS8Shaowenhua
[root@CentOS8Shaowenhua ~]# cat >> /etc/hosts << EOF
> 192.168.27.128 CentOS8Shaowenhua
> 192.168.27.134 Docker1
> 192.168.27.135 Docker2
> EOF
[root@CentOS8Shaowenhua ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.27.128 CentOS8Shaowenhua
192.168.27.134 Docker1
192.168.27.135 Docker2
[root@CentOS8Shaowenhua ~]#
(2) Worker nodes
- node1
[root@Docker1 ~]# cat /etc/hostname
Docker1
- node2
[root@Docker2 ~]# cat /etc/hostname
Docker2
[root@CentOS8Shaowenhua ~]# scp /etc/hosts root@192.168.27.134:/etc/hosts
The authenticity of host '192.168.27.134 (192.168.27.134)' can't be established.
ECDSA key fingerprint is SHA256:OuD2KXokT075Gi40zZEaDtpJSIfKcCOtVPV5kXqSYmk.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.27.134' (ECDSA) to the list of known hosts.
root@192.168.27.134's password:
hosts 100% 249 8.2KB/s 00:00
[root@CentOS8Shaowenhua ~]# scp /etc/hosts root@192.168.27.135:/etc/hosts
The authenticity of host '192.168.27.135 (192.168.27.135)' can't be established.
ECDSA key fingerprint is SHA256:OuD2KXokT075Gi40zZEaDtpJSIfKcCOtVPV5kXqSYmk.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.27.135' (ECDSA) to the list of known hosts.
root@192.168.27.135's password:
hosts 100% 249 13.5KB/s 00:00
[root@CentOS8Shaowenhua ~]#
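With more nodes, the scp distribution above is easier as a loop; a minimal sketch, assuming the worker IPs from the table in section I:

```bash
# Push the master's /etc/hosts to every worker node (IPs assumed from the table above)
for ip in 192.168.27.134 192.168.27.135; do
  scp /etc/hosts root@"$ip":/etc/hosts
done
```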
3. Disable firewalld, SELinux, and postfix (on all three nodes)
(1) Master node
[root@CentOS8Shaowenhua ~]# systemctl disable --now firewalld
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@CentOS8Shaowenhua ~]# sed -i 's/enforcing/disabled/' /etc/selinux/config
[root@CentOS8Shaowenhua ~]# setenforce 0
[root@CentOS8Shaowenhua ~]# systemctl stop postfix
Failed to stop postfix.service: Unit postfix.service not loaded.
[root@CentOS8Shaowenhua ~]# systemctl disable postfix
Failed to disable unit: Unit file postfix.service does not exist.
[root@CentOS8Shaowenhua ~]#
(2) Worker nodes
- node1
[root@Docker1 ~]# systemctl disable --now firewalld
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@Docker1 ~]# sed -i 's/enforcing/disabled/' /etc/selinux/config
[root@Docker1 ~]# setenforce 0
[root@Docker1 ~]# systemctl stop postfix
Failed to stop postfix.service: Unit postfix.service not loaded.
[root@Docker1 ~]#
- node2
[root@Docker2 ~]# systemctl disable --now firewalld
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@Docker2 ~]# sed -i 's/enforcing/disabled/' /etc/selinux/config
[root@Docker2 ~]# setenforce 0
[root@Docker2 ~]# systemctl stop postfix
Failed to stop postfix.service: Unit postfix.service not loaded.
[root@Docker2 ~]# systemctl disable postfix
Failed to disable unit: Unit file postfix.service does not exist.
[root@Docker2 ~]#
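A quick check that the changes took effect on each node (note that getenforce reports Permissive until the config change is picked up by a reboot):

```bash
systemctl is-active firewalld          # expect: inactive
getenforce                             # expect: Permissive now, Disabled after a reboot
grep '^SELINUX=' /etc/selinux/config   # expect: SELINUX=disabled
```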
4. Disable swap (same steps on all three nodes)
Comment out the swap entry in /etc/fstab so the change survives reboots, then turn swap off for the running system:
(1) Master
[root@CentOS8Shaowenhua ~]# vi /etc/fstab
[root@CentOS8Shaowenhua ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Mon Mar 4 07:49:54 2024
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/cl-root / xfs defaults 0 0
UUID=95ec24a1-21f2-4846-9348-0b25be254d8a /boot xfs defaults 0 0
/dev/mapper/cl-home /home xfs defaults 0 0
#/dev/mapper/cl-swap none swap defaults 0 0
[root@CentOS8Shaowenhua ~]# swapoff -a
[root@CentOS8Shaowenhua ~]# free -m
total used free shared buff/cache available
Mem: 1790 312 970 8 507 1322
Swap: 0 0 0
[root@CentOS8Shaowenhua ~]#
(2)Node1
[root@Docker1 ~]# vi /etc/fstab
[root@Docker1 ~]# swapoff -a
[root@Docker1 ~]# free -m
total used free shared buff/cache available
Mem: 1790 315 969 8 505 1318
Swap: 0 0 0
[root@Docker1 ~]#
(3)Node2
[root@Docker2 ~]# vi /etc/fstab
[root@Docker2 ~]# swapoff -a
[root@Docker2 ~]# free -m
total used free shared buff/cache available
Mem: 1790 315 969 8 505 1318
Swap: 0 0 0
[root@Docker2 ~]#
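The manual vi edit comments out the swap entry; the same change can be made non-interactively. A sketch with a verification step (the sed pattern is an assumption about the fstab layout shown above):

```bash
# Comment out any active swap entry so it stays off after a reboot
sed -ri 's/^[^#].*\sswap\s/#&/' /etc/fstab
swapoff -a       # disable swap for the running system
swapon --show    # no output means swap is fully off
```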
5. Host time synchronization (same steps on all three nodes)
Kubernetes requires the node clocks across the cluster to be accurate and consistent. Here the chronyd service syncs time over the network; in production, an internal time server is recommended.
yum -y install chrony
~master:
[root@CentOS8Shaowenhua ~]# vi /etc/chrony.conf
Uncomment the `local stratum 10` line:
[root@CentOS8Shaowenhua ~]# cat /etc/chrony.conf
......
# Serve time even if not synchronized to a time source.
local stratum 10
......
[root@CentOS8Shaowenhua ~]# systemctl restart chronyd.service
[root@CentOS8Shaowenhua ~]# systemctl enable chronyd.service
[root@CentOS8Shaowenhua ~]# hwclock -w
~node1:
[root@Docker1 ~]# vi /etc/chrony.conf
[root@Docker1 ~]# cat /etc/chrony.conf
......
server CentOS8Shaowenhua iburst
[root@Docker1 ~]# systemctl restart chronyd.service
[root@Docker1 ~]# systemctl enable chronyd.service
[root@Docker1 ~]# hwclock -w
~node2:
[root@Docker2 ~]# vi /etc/chrony.conf
[root@Docker2 ~]# cat /etc/chrony.conf
......
server CentOS8Shaowenhua iburst
[root@Docker2 ~]# systemctl restart chronyd.service
[root@Docker2 ~]# systemctl enable chronyd.service
[root@Docker2 ~]# hwclock -w
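For the workers to sync from the master, the master's chrony.conf must also allow their subnet (the stock file ships a commented allow directive). Either way, the sync state can be verified on each node:

```bash
# On the master only: permit the worker subnet to query this server (subnet assumed)
echo "allow 192.168.27.0/24" >> /etc/chrony.conf && systemctl restart chronyd
# On any node: check sources and offset; '^*' marks the currently selected source
chronyc sources -v
chronyc tracking
```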
II. Base environment and cluster architecture
1. Cluster layout for the kubeadm deployment
Node | IP | Components |
---|---|---|
Master | 192.168.27.128 | etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, flannel |
Node1 | 192.168.27.134 | etcd, kubelet, kube-proxy, docker, flannel |
Node2 | 192.168.27.135 | etcd, kubelet, kube-proxy, docker, flannel |
2. Enable IP forwarding and set kernel parameters (required on all three nodes)
(1)master
[root@CentOS8Shaowenhua ~]# vi /etc/sysctl.d/k8s.conf
[root@CentOS8Shaowenhua ~]# cat /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
[root@CentOS8Shaowenhua ~]# modprobe br_netfilter
[root@CentOS8Shaowenhua ~]# sysctl -p /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
[root@CentOS8Shaowenhua ~]#
(2)node1
[root@Docker1 ~]# vi /etc/sysctl.d/k8s.conf
[root@Docker1 ~]# cat /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
[root@Docker1 ~]# modprobe br_netfilter
[root@Docker1 ~]# sysctl -p /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
(3)node2
[root@Docker2 ~]# vi /etc/sysctl.d/k8s.conf
[root@Docker2 ~]# cat /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
[root@Docker2 ~]# modprobe br_netfilter
[root@Docker2 ~]# sysctl -p /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
[root@Docker2 ~]#
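modprobe only lasts until the next reboot, while the sysctl file is reapplied at boot and silently fails if br_netfilter is absent. A sketch that makes the module load persistent via systemd:

```bash
# Load br_netfilter automatically at every boot
cat > /etc/modules-load.d/k8s.conf << EOF
br_netfilter
EOF
# Verify the module and the sysctls are live
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables
```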
3. Enable IPVS support (on all three nodes); kube-proxy can run in IPVS mode, which needs these kernel modules loaded
(1)master
[root@CentOS8Shaowenhua ~]# vi /etc/sysconfig/modules/ipvs.modules
[root@CentOS8Shaowenhua ~]# cat /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
[root@CentOS8Shaowenhua ~]# chmod +x /etc/sysconfig/modules/ipvs.modules
[root@CentOS8Shaowenhua ~]# bash /etc/sysconfig/modules/ipvs.modules
[root@CentOS8Shaowenhua ~]# lsmod | grep -e ip_vs
ip_vs_sh 16384 0
ip_vs_wrr 16384 0
ip_vs_rr 16384 0
ip_vs 172032 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack 172032 2 nf_nat,ip_vs
nf_defrag_ipv6 20480 2 nf_conntrack,ip_vs
libcrc32c 16384 5 nf_conntrack,nf_nat,nf_tables,xfs,ip_vs
[root@CentOS8Shaowenhua ~]# reboot
(2)node1
[root@Docker1 ~]# vi /etc/sysconfig/modules/ipvs.modules
[root@Docker1 ~]# cat /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
[root@Docker1 ~]# chmod +x /etc/sysconfig/modules/ipvs.modules
[root@Docker1 ~]# bash /etc/sysconfig/modules/ipvs.modules
[root@Docker1 ~]# lsmod | grep -e ip_vs
ip_vs_sh 16384 0
ip_vs_wrr 16384 0
ip_vs_rr 16384 0
ip_vs 172032 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack 172032 2 nf_nat,ip_vs
nf_defrag_ipv6 20480 2 nf_conntrack,ip_vs
libcrc32c 16384 5 nf_conntrack,nf_nat,nf_tables,xfs,ip_vs
[root@Docker1 ~]# reboot
(3) node2
[root@Docker2 ~]# vi /etc/sysconfig/modules/ipvs.modules
[root@Docker2 ~]# cat /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
[root@Docker2 ~]# chmod +x /etc/sysconfig/modules/ipvs.modules
[root@Docker2 ~]# bash /etc/sysconfig/modules/ipvs.modules
[root@Docker2 ~]# lsmod | grep -e ip_vs
ip_vs_sh 16384 0
ip_vs_wrr 16384 0
ip_vs_rr 16384 0
ip_vs 172032 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack 172032 2 nf_nat,ip_vs
nf_defrag_ipv6 20480 2 nf_conntrack,ip_vs
libcrc32c 16384 5 nf_conntrack,nf_nat,nf_tables,xfs,ip_vs
[root@Docker2 ~]# reboot
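The script under /etc/sysconfig/modules/ is the classic CentOS convention; on EL8 the same result can be had with modules-load.d, which systemd applies at boot. kube-proxy's IPVS mode also relies on connection tracking, so nf_conntrack is worth adding (a sketch; the extra module is not in the original script):

```bash
# Persist the IPVS kernel modules across reboots (EL8-style alternative)
cat > /etc/modules-load.d/ipvs.conf << EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
systemctl restart systemd-modules-load.service
lsmod | grep -e ip_vs -e nf_conntrack   # confirm everything loaded
```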
4. Set up passwordless SSH from the master to the workers
[root@CentOS8Shaowenhua ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:zGa35b/56hYBv2Td4fMBbkfcOoyAfif98q9GD6wXWoM root@CentOS8Shaowenhua
The key's randomart image is:
+---[RSA 3072]----+
| . . .|
| . . .. +.|
| . o.=+oo|
| o. o +oO=o|
| S..oo*.++|
| o . +E @ .|
| . .B * |
| o.=..|
| =B*o|
+----[SHA256]-----+
[root@CentOS8Shaowenhua ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@Docker1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'docker1 (192.168.27.134)' can't be established.
ECDSA key fingerprint is SHA256:OuD2KXokT075Gi40zZEaDtpJSIfKcCOtVPV5kXqSYmk.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@docker1's password:
Permission denied, please try again.
root@docker1's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'root@Docker1'"
and check to make sure that only the key(s) you wanted were added.
[root@CentOS8Shaowenhua ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@Docker2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'docker2 (192.168.27.135)' can't be established.
ECDSA key fingerprint is SHA256:OuD2KXokT075Gi40zZEaDtpJSIfKcCOtVPV5kXqSYmk.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@docker2's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'root@Docker2'"
and check to make sure that only the key(s) you wanted were added.
[root@CentOS8Shaowenhua ~]#
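Before relying on the keys, a quick non-interactive check from the master (sketch):

```bash
# Should print each worker's hostname without any password prompt
for host in Docker1 Docker2; do
  ssh -o BatchMode=yes root@"$host" hostname
done
```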
5. Install Docker
Docker is assumed to be installed already (see the earlier parts of this series); here the service is restarted and enabled on each node, with an install sketch after the transcripts below.
[root@CentOS8Shaowenhua ~]# systemctl restart docker
[root@CentOS8Shaowenhua ~]# systemctl enable docker
[root@Docker1 ~]# systemctl restart docker
[root@Docker1 ~]# systemctl enable docker
[root@Docker2 ~]# systemctl restart docker
[root@Docker2 ~]# systemctl enable docker
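If Docker is not yet present on a node, a sketch of the usual docker-ce install on CentOS 8; the mirror URL and package set are assumptions, not taken from the original:

```bash
# Assumed install path for docker-ce on CentOS 8
dnf -y install dnf-plugins-core
dnf config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
dnf -y install docker-ce docker-ce-cli containerd.io
systemctl enable --now docker
```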
6. Add a Docker daemon config file with a registry mirror
~master:
[root@CentOS8Shaowenhua ~]# vi /etc/docker/daemon.json
[root@CentOS8Shaowenhua ~]# cat /etc/docker/daemon.json
{
"registry-mirrors": ["https://6vrrj6n2.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2"
}
[root@CentOS8Shaowenhua ~]# systemctl daemon-reload
[root@CentOS8Shaowenhua ~]# systemctl restart docker
~node1:
[root@Docker1 ~]# vi /etc/docker/daemon.json
[root@Docker1 ~]# cat /etc/docker/daemon.json
{
"registry-mirrors": ["https://6vrrj6n2.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2"
}
[root@Docker1 ~]# systemctl daemon-reload
[root@Docker1 ~]# systemctl restart docker
~node2:
[root@Docker2 ~]# vi /etc/docker/daemon.json
[root@Docker2 ~]# cat /etc/docker/daemon.json
{
"registry-mirrors": ["https://6vrrj6n2.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2"
}
[root@Docker2 ~]# systemctl daemon-reload
[root@Docker2 ~]# systemctl restart docker
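After the restart, confirm Docker actually picked up the settings; a cgroup-driver mismatch between Docker and the kubelet is a classic cause of crash-looping control-plane pods:

```bash
docker info | grep -i 'cgroup driver'        # expect: Cgroup Driver: systemd
docker info | grep -iA1 'registry mirrors'   # confirm the aliyuncs mirror is active
```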
7. Add the Kubernetes package repository
The Kubernetes packages are hosted overseas and download slowly, so switch to a domestic mirror.
~master:
[root@CentOS8Shaowenhua ~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
> [kubernetes]
> name=Kubernetes
> baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=0
> repo_gpgcheck=0
> gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF
[root@CentOS8Shaowenhua ~]# ll /etc/yum.repos.d/
total 12
-rw-r--r--. 1 root root 2495 Mar 4 16:10 CentOS-Base.repo
-rw-r--r--. 1 root root 2081 Apr 21 18:33 docker-ce.repo
-rw-r--r-- 1 root root 275 May 5 19:26 kubernetes.repo
[root@CentOS8Shaowenhua ~]#
~node1:
[root@Docker1 ~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
> [kubernetes]
> name=Kubernetes
> baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=0
> repo_gpgcheck=0
> gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF
[root@Docker1 ~]# ll /etc/yum.repos.d/
total 12
-rw-r--r--. 1 root root 2495 Mar 4 16:10 CentOS-Base.repo
-rw-r--r--. 1 root root 2081 Apr 21 18:33 docker-ce.repo
-rw-r--r-- 1 root root 275 May 5 19:27 kubernetes.repo
[root@Docker1 ~]#
~node2:
[root@Docker2 ~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
> [kubernetes]
> name=Kubernetes
> baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=0
> repo_gpgcheck=0
> gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF
[root@Docker2 ~]# ll /etc/yum.repos.d/
total 12
-rw-r--r--. 1 root root 2495 Mar 4 16:10 CentOS-Base.repo
-rw-r--r--. 1 root root 2081 Apr 21 18:33 docker-ce.repo
-rw-r--r-- 1 root root 275 May 5 19:27 kubernetes.repo
[root@Docker2 ~]#
8. Install the kubeadm, kubelet, and kubectl tools
~master:
[root@CentOS8Shaowenhua ~]# dnf -y install kubeadm kubelet kubectl
[root@CentOS8Shaowenhua ~]# systemctl restart kubelet
[root@CentOS8Shaowenhua ~]# systemctl enable kubelet
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
[root@CentOS8Shaowenhua ~]#
~node1:
[root@Docker1 ~]# dnf -y install kubeadm kubelet kubectl
[root@Docker1 ~]# systemctl restart kubelet
[root@Docker1 ~]# systemctl enable kubelet
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
[root@Docker1 ~]#
~node2:
[root@Docker2 ~]# dnf -y install kubeadm kubelet kubectl
[root@Docker2 ~]# systemctl restart kubelet
[root@Docker2 ~]# systemctl enable kubelet
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
[root@Docker2 ~]#
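A plain dnf install pulls whatever the repo currently marks as latest (v1.28.2 here, judging by the kubectl get nodes output later). For reproducible installs the versions can be pinned; a sketch, with the version string as an assumption:

```bash
# Pin one version so all three nodes end up identical (version assumed)
dnf -y install kubeadm-1.28.2 kubelet-1.28.2 kubectl-1.28.2
kubeadm version -o short   # verify, e.g. v1.28.2
```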
9. Configure containerd (run on all nodes)
~master:
[root@CentOS8Shaowenhua ~]# containerd config default > /etc/containerd/config.toml
# In /etc/containerd/config.toml, point the k8s sandbox image at registry.aliyuncs.com/google_containers
[root@CentOS8Shaowenhua ~]# vi /etc/containerd/config.toml
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"
# Then restart containerd and enable it at boot
[root@CentOS8Shaowenhua ~]# systemctl restart containerd
[root@CentOS8Shaowenhua ~]# systemctl enable containerd
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /usr/lib/systemd/system/containerd.service.
[root@CentOS8Shaowenhua ~]#
~node1:
[root@Docker1 ~]# containerd config default > /etc/containerd/config.toml
[root@Docker1 ~]# vi /etc/containerd/config.toml
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"
[root@Docker1 ~]# systemctl restart containerd
[root@Docker1 ~]# systemctl enable containerd
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /usr/lib/systemd/system/containerd.service.
[root@Docker1 ~]#
~node2:
[root@Docker2 ~]# containerd config default > /etc/containerd/config.toml
[root@Docker2 ~]# vi /etc/containerd/config.toml
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"
[root@Docker2 ~]# systemctl restart containerd
[root@Docker2 ~]# systemctl enable containerd
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /usr/lib/systemd/system/containerd.service.
[root@Docker2 ~]#
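Two more details in the generated config.toml are worth checking. The runc SystemdCgroup option defaults to false but should be true to match the kubelet's systemd cgroup driver, and the kubeadm init output below warns that kubeadm 1.28 expects pause:3.9 rather than 3.6. A sketch of both fixes:

```bash
# Match containerd's cgroup driver to the kubelet (systemd)
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
# Optional: use the sandbox image version kubeadm 1.28 expects, silencing the init warning
sed -i 's#pause:3.6#pause:3.9#' /etc/containerd/config.toml
systemctl restart containerd
```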
10. Initialize the Kubernetes master node
[root@CentOS8Shaowenhua ~]# kubeadm init \
> --apiserver-advertise-address=192.168.27.128 \
> --image-repository registry.aliyuncs.com/google_containers \
> --kubernetes-version v1.28.0 \
> --service-cidr=10.96.0.0/12 \
> --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.28.0
[preflight] Running pre-flight checks
[WARNING FileExisting-tc]: tc not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0505 19:54:23.198936 7646 checks.go:835] detected that the sandbox image "registry.aliyuncs.com/google_containers/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.aliyuncs.com/google_containers/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [centos8shaowenhua kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.27.128]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [centos8shaowenhua localhost] and IPs [192.168.27.128 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [centos8shaowenhua localhost] and IPs [192.168.27.128 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 45.020128 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node centos8shaowenhua as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node centos8shaowenhua as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 02oa3i.3kr84f5u9ee6gyca
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.27.128:6443 --token 02oa3i.3kr84f5u9ee6gyca \
--discovery-token-ca-cert-hash sha256:f188871265f5c890578abc7e0deb29aeba0abe22540829572dd585a561a1defc
[root@CentOS8Shaowenhua ~]#
Save the init output in a file named k8s01, and persist KUBECONFIG for root:
[root@CentOS8Shaowenhua ~]# mkdir -p k8s
[root@CentOS8Shaowenhua ~]# cd k8s/
[root@CentOS8Shaowenhua k8s]# vi k8s01
[root@CentOS8Shaowenhua k8s]# cat k8s01
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.27.128:6443 --token 02oa3i.3kr84f5u9ee6gyca \
--discovery-token-ca-cert-hash sha256:f188871265f5c890578abc7e0deb29aeba0abe22540829572dd585a561a1defc
[root@CentOS8Shaowenhua k8s]#
[root@CentOS8Shaowenhua ~]# vi /etc/profile.d/k8s.sh
[root@CentOS8Shaowenhua ~]# cat /etc/profile.d/k8s.sh
export KUBECONFIG=/etc/kubernetes/admin.conf
[root@CentOS8Shaowenhua ~]# source /etc/profile.d/k8s.sh
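The bootstrap token saved in k8s01 expires after 24 hours by default. If a node has to join later, a fresh join command can be generated on the master:

```bash
# Prints a complete 'kubeadm join ...' line with a new token and the CA cert hash
kubeadm token create --print-join-command
```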
11. Install the pod network add-on (flannel)
Link:
(1) Configure kube-flannel.yml
[root@CentOS8Shaowenhua ~]# vi kube-flannel.yml
[root@CentOS8Shaowenhua ~]# cat kube-flannel.yml
---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - clustercidrs
  verbs:
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    k8s-app: flannel
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
    k8s-app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: docker.io/flannel/flannel-cni-plugin:v1.2.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: docker.io/flannel/flannel:v0.22.3
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: docker.io/flannel/flannel:v0.22.3
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
[root@CentOS8Shaowenhua ~]# kubectl apply -f kube-flannel.yml
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
[root@CentOS8Shaowenhua ~]#
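Before joining the workers, watch the flannel DaemonSet come up; a node only flips to Ready once its CNI plugin is in place:

```bash
kubectl get pods -n kube-flannel -o wide   # expect kube-flannel-ds-* Running on each node
kubectl get nodes                          # the control-plane turns Ready shortly after
```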
(2) kubectl get nodes
[root@CentOS8Shaowenhua ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
centos8shaowenhua NotReady control-plane 34m v1.28.2
[root@CentOS8Shaowenhua ~]#
12. Join the worker nodes to the cluster
(1)node1
[root@Docker1 ~]# kubeadm join 192.168.27.128:6443 --token 02oa3i.3kr84f5u9ee6gyca \
> --discovery-token-ca-cert-hash sha256:f188871265f5c890578abc7e0deb29aeba0abe22540829572dd585a561a1defc
[preflight] Running pre-flight checks
[WARNING FileExisting-tc]: tc not found in system path
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@Docker1 ~]#
(2)node2
[root@Docker2 ~]# kubeadm join 192.168.27.128:6443 --token 02oa3i.3kr84f5u9ee6gyca \
> --discovery-token-ca-cert-hash sha256:f188871265f5c890578abc7e0deb29aeba0abe22540829572dd585a561a1defc
[preflight] Running pre-flight checks
[WARNING FileExisting-tc]: tc not found in system path
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@Docker2 ~]#
13. Check node status with kubectl get nodes
The workers report NotReady for the first couple of minutes while the flannel pods start on them.
[root@CentOS8Shaowenhua ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
centos8shaowenhua Ready control-plane 40m v1.28.2
docker1 NotReady <none> 2m12s v1.28.2
docker2 NotReady <none> 2m v1.28.2
[root@CentOS8Shaowenhua ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
centos8shaowenhua Ready control-plane 40m v1.28.2
docker1 NotReady <none> 2m21s v1.28.2
docker2 NotReady <none> 2m9s v1.28.2
[root@CentOS8Shaowenhua ~]#
14. Create a pod running an nginx container and test it
[root@CentOS8Shaowenhua ~]# kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 42m
[root@CentOS8Shaowenhua ~]# kubectl create deployment nginx --image nginx
deployment.apps/nginx created
[root@CentOS8Shaowenhua ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-7854ff8877-k4d69 0/1 ContainerCreating 0 8s
[root@CentOS8Shaowenhua ~]# kubectl expose deployment nginx --port 80 --type NodePort
service/nginx exposed
[root@CentOS8Shaowenhua ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-7854ff8877-k4d69 0/1 ContainerCreating 0 25s <none> docker1 <none> <none>
[root@CentOS8Shaowenhua ~]# kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 43m
nginx NodePort 10.98.50.119 <none> 80:32300/TCP 18s
[root@CentOS8Shaowenhua ~]#
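Once the pod leaves ContainerCreating, the nginx welcome page should answer on any node's IP at the NodePort (32300 above); a quick test from the master:

```bash
# NodePort taken from the 'kubectl get services' output above
curl -I http://192.168.27.134:32300   # expect: HTTP/1.1 200 OK with Server: nginx
```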
15. Cluster health checks
(1) List the system pods
[root@CentOS8Shaowenhua ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-66f779496c-6nk25 1/1 Running 0 46m
coredns-66f779496c-chmlg 1/1 Running 0 46m
etcd-centos8shaowenhua 1/1 Running 0 46m
kube-apiserver-centos8shaowenhua 1/1 Running 0 46m
kube-controller-manager-centos8shaowenhua 1/1 Running 0 46m
kube-proxy-4dbwn 1/1 Running 0 8m23s
kube-proxy-bw6q9 1/1 Running 0 8m35s
kube-proxy-kzhtv 1/1 Running 0 46m
kube-scheduler-centos8shaowenhua 1/1 Running 0 46m
[root@CentOS8Shaowenhua ~]#
(2) Inspect pod details (useful for troubleshooting abnormal pods)
[root@CentOS8Shaowenhua ~]# kubectl describe pods kube-scheduler-centos8shaowenhua -n kube-system
Name: kube-scheduler-centos8shaowenhua
Namespace: kube-system
Priority: 2000001000
Priority Class Name: system-node-critical
Node: centos8shaowenhua/192.168.27.128
Start Time: Sun, 05 May 2024 19:57:38 +0800
Labels: component=kube-scheduler
tier=control-plane
Annotations: kubernetes.io/config.hash: 042330b0232e7d2db5ac25d02424f740
kubernetes.io/config.mirror: 042330b0232e7d2db5ac25d02424f740
kubernetes.io/config.seen: 2024-05-05T19:57:38.213670888+08:00
kubernetes.io/config.source: file
Status: Running
SeccompProfile: RuntimeDefault
IP: 192.168.27.128
IPs:
IP: 192.168.27.128
Controlled By: Node/centos8shaowenhua
Containers:
kube-scheduler:
Container ID: containerd://dc3262523e789b70ef88c37e1526d889c930e169cff04215ee5af86896df9fa1
Image: registry.aliyuncs.com/google_containers/kube-scheduler:v1.28.0
Image ID: registry.aliyuncs.com/google_containers/kube-scheduler@sha256:cd2275aed550dca60fbccb136fdc407a8e9dd045a015762d7a769e4dee36b6c1
Port: <none>
Host Port: <none>
Command:
kube-scheduler
--authentication-kubeconfig=/etc/kubernetes/scheduler.conf
--authorization-kubeconfig=/etc/kubernetes/scheduler.conf
--bind-address=127.0.0.1
--kubeconfig=/etc/kubernetes/scheduler.conf
--leader-elect=true
State: Running
Started: Sun, 05 May 2024 19:57:31 +0800
Ready: True
Restart Count: 0
Requests:
cpu: 100m
Liveness: http-get https://127.0.0.1:10259/healthz delay=10s timeout=15s period=10s #success=1 #failure=8
Startup: http-get https://127.0.0.1:10259/healthz delay=10s timeout=15s period=10s #success=1 #failure=24
Environment: <none>
Mounts:
/etc/kubernetes/scheduler.conf from kubeconfig (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kubeconfig:
Type: HostPath (bare host directory volume)
Path: /etc/kubernetes/scheduler.conf
HostPathType: FileOrCreate
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: :NoExecute op=Exists
Events: <none>
[root@CentOS8Shaowenhua ~]#
FINISH