D1 Deploying Kubernetes on CentOS 7.9

Cluster Role  Hostname    Operating System                      IP Address    Kernel
Master        k8s-master  CentOS Linux release 7.9.2009 (Core)  172.16.99.71  3.10.0-1160.119.1.el7.x86_64
Node          k8s-node01  CentOS Linux release 7.9.2009 (Core)  172.16.99.72  3.10.0-1160.119.1.el7.x86_64
Node          k8s-node02  CentOS Linux release 7.9.2009 (Core)  172.16.99.73  3.10.0-1160.119.1.el7.x86_64

0. Install common tools

yum install -y libpcap libpcap-devel epel-release
yum install -y wget vim sysstat iotop nethogs iftop ntpdate jq net-tools python3 mtr telnet
echo "* * * * * ntpdate ntp.aliyun.com > /dev/null 2>&1" | crontab -

1. Flush the default iptables rules

iptables -F
systemctl stop firewalld
systemctl disable firewalld

2. Disable SELinux

setenforce 0
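  • setenforce 0 only disables SELinux until the next reboot. To keep it off permanently, also update /etc/selinux/config, e.g.:
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config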

3. Disable the swap partition

  • Note: remember to check /etc/fstab as well, as shown below
swapoff -a
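  • Likewise, swapoff -a only lasts until the next reboot; comment out any swap entries in /etc/fstab so swap stays disabled, e.g.:
sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab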

4. Set the hostnames

  • Adjust according to your actual environment
hostnamectl set-hostname k8s-master
hostnamectl set-hostname k8s-node01
hostnamectl set-hostname k8s-node02

After changing the hostnames, add hosts entries for name resolution; every node needs them.
cat /etc/hosts

172.16.99.71 k8s-master
172.16.99.72 k8s-node01
172.16.99.73 k8s-node02
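  • For example, the entries can be appended on every node in one step (skip nodes where they already exist):
cat >> /etc/hosts <<EOF
172.16.99.71 k8s-master
172.16.99.72 k8s-node01
172.16.99.73 k8s-node02
EOF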

5. Configure kernel parameters

  • Load the required modules
modprobe bridge
modprobe br_netfilter
lsmod | egrep "^bridge|^br_netfilter"
  • Apply immediately (effective until reboot)
sysctl net.bridge.bridge-nf-call-ip6tables=1
sysctl net.bridge.bridge-nf-call-iptables=1
sysctl -a | egrep "net.bridge.bridge-nf-call-ip6tables|net.bridge.bridge-nf-call-iptables"
  • Persist across reboots
echo "net.bridge.bridge-nf-call-ip6tables=1" >> /etc/sysctl.d/k8s.conf
echo "net.bridge.bridge-nf-call-iptables=1" >> /etc/sysctl.d/k8s.conf

6. Install Docker

  • Configure the yum repository (Aliyun mirror)
wget -c https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
  • Install a specific Docker version
yum install -y docker-ce-24.0.0
  • Start Docker and enable it at boot
systemctl start docker
systemctl enable docker
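  • Optional but recommended: kubeadm v1.28 defaults the kubelet to the systemd cgroup driver, while Docker on CentOS 7 defaults to cgroupfs, and aligning the two avoids a class of pod stability problems. A minimal sketch (merge with any existing daemon.json before applying):
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker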

7. Install cri-dockerd

In early Kubernetes releases, Docker was the default container runtime, and the kubelet shipped with a built-in shim called Dockershim that handled communication between the kubelet and Docker.
As the Kubernetes ecosystem grew, multiple container runtimes emerged, such as containerd, CRI-O, and rkt. To support them, Kubernetes introduced the Container Runtime Interface (CRI) standard, which lets any third-party runtime integrate with Kubernetes simply by implementing CRI.
Later, with the Kubernetes 1.20 release, the project announced that, to streamline the core code and reduce the maintenance burden, Dockershim would be removed in version 1.24. Because Docker did not support CRI at the time, this meant Kubernetes would no longer be able to use Docker as a container runtime directly. To solve this, Docker Inc. partnered with Mirantis to develop cri-dockerd, an adapter responsible for communication between the kubelet and Docker.
Therefore, from Kubernetes 1.24 onward, using Docker as the container runtime requires installing cri-dockerd. An installation package for your platform can be found on the project's GitHub Releases page; download it and install it on all nodes:

wget -c https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.2/cri-dockerd-0.3.2-3.el7.x86_64.rpm
rpm -ivh cri-dockerd-0.3.2-3.el7.x86_64.rpm
  • After installation, edit the systemd service file to point the dependent pause image at the Aliyun mirror
grep ExecStart /usr/lib/systemd/system/cri-docker.service
ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9 
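  • After editing the unit file, reload systemd so the change takes effect:
systemctl daemon-reload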
  • Start cri-docker and enable it at boot
systemctl start cri-docker
systemctl enable cri-docker

8. Install kubeadm and kubelet

The steps up to this point were performed on a template VM, which was then cloned to create the machines above.
Install the kubeadm and kubelet components on all nodes.
These packages are not included in the default system repositories, so an additional yum repository must be configured; configure the Aliyun repository as follows:

vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
  • Install specific versions of kubeadm and kubelet
yum install -y kubeadm-1.28.0 kubelet-1.28.0

kubeadm is only a cluster bootstrapping tool and does not run as a service; kubectl is also pulled in as a dependency of the kubeadm package. kubelet is a daemon that kubeadm starts automatically during setup, so here we only need to enable it at boot:

systemctl enable kubelet
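  • Optionally, the control-plane images can be pre-pulled on the Master before initialization, using the same repository and version as the kubeadm init command in the next section:
kubeadm config images pull --image-repository=registry.aliyuncs.com/google_containers --kubernetes-version=v1.28.0 --cri-socket=unix:///var/run/cri-dockerd.sock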

9. Deploy the Master node

Run the following command on the Master node to initialize the Kubernetes control plane:

kubeadm init \
--apiserver-advertise-address=172.16.99.71 \
--image-repository=registry.aliyuncs.com/google_containers \
--kubernetes-version=v1.28.0 \
--pod-network-cidr=10.244.0.0/16 \
--service-cidr=10.96.0.0/12 \
--cri-socket=unix:///var/run/cri-dockerd.sock \
--v=5
  • The parameters in this command mean the following:
--apiserver-advertise-address: the IP address the API Server advertises and listens on. If unset, the default network interface's address is used
--image-repository: the image registry to pull from. The default, registry.k8s.io, is unreachable from mainland China, so the Aliyun mirror is specified here
--kubernetes-version: the Kubernetes version to deploy
--pod-network-cidr: the CIDR range for the pod network
--service-cidr: the CIDR range for the service network
--cri-socket: the UNIX socket the kubelet uses to talk to the container runtime
--v: the log verbosity level; higher numbers mean more detail (5 here). If you hit errors or problems, raise the --v value to get more debugging output.
  • When the command runs, kubeadm executes a series of phases:
[preflight]: runs a series of checks to verify that the current system meets Kubernetes installation requirements, including:
whether CPU and memory meet the minimum requirements
whether the network is reachable
whether the operating system version is supported
whether the container runtime can be connected to
whether kernel parameters are configured correctly
pulling the container images required for installation
[certs]: generates the HTTPS certificates and keys needed by the Kubernetes components and stores them in /etc/kubernetes/pki
[kubeconfig]: generates kubeconfig files containing the API server address, client certificates, and so on, and stores them in /etc/kubernetes
[kubelet-start]: writes the kubelet configuration file /var/lib/kubelet/config.yaml and starts the kubelet service
[control-plane]: creates static pod manifests for kube-apiserver, kube-controller-manager, and kube-scheduler in /etc/kubernetes/manifests/
[etcd]: creates the static pod manifest for etcd in /etc/kubernetes/manifests
[wait-control-plane]: waits for the kubelet to start the Master components as static pods from /etc/kubernetes/manifests
[apiclient]: checks that the Master components are up and healthy
[upload-config]: stores the kubeadm configuration in a ConfigMap object
[kubelet]: stores the kubelet configuration in a ConfigMap object
[upload-certs]: notes that certificate upload is skipped (unless --upload-certs is specified)
[mark-control-plane]: applies labels and taints to the Master node
[bootstrap-token]: generates a bootstrap token for Node nodes to use when joining the cluster
[kubelet-finalize]: updates the kubelet configuration file /etc/kubernetes/kubelet.conf
[addons]: installs the CoreDNS and kube-proxy add-ons.

  • Finally, the initialization success message is printed:
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.16.99.71:6443 --token dshu6i.ke5smpssjwp4ie7f \
	--discovery-token-ca-cert-hash sha256:18c1fa2b2e126ac1e19e851af8acc41ab4b3b693c47bbb363c552fa657c1480e
  • Following the instructions above, run these commands to start using the cluster:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

The commands above copy /etc/kubernetes/admin.conf to $HOME/.kube/config so that kubectl can use this kubeconfig to connect to and manage the Kubernetes cluster.
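  • As a quick check that the kubeconfig works:
kubectl cluster-info
kubectl get pods -n kube-system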

10. Deploy the worker nodes

On the two worker nodes, run the kubeadm join command returned above, adding the --cri-socket flag, to join them to the cluster:

kubeadm join 172.16.99.71:6443 --token dshu6i.ke5smpssjwp4ie7f \
	--discovery-token-ca-cert-hash sha256:18c1fa2b2e126ac1e19e851af8acc41ab4b3b693c47bbb363c552fa657c1480e \
    --cri-socket=unix:///var/run/cri-dockerd.sock
  • After the command runs, you will see output like the following
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

On the Master node, run kubectl get nodes to view the nodes:

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES           AGE     VERSION
k8s-master   NotReady   control-plane   15h     v1.28.0
k8s-node01   NotReady   <none>          2m14s   v1.28.0
k8s-node02   NotReady   <none>          106s    v1.28.0

11. Deploy the network plugin

In the output above, the node status is NotReady, meaning the nodes are not yet ready. This is because the kubelet has not detected a network plugin, as the kubelet log explains:

systemctl status kubelet|grep ready
8月 22 11:16:12 k8s-master kubelet[5746]: E0822 11:16:12.638103    5746 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
...(the same message repeats every 5 seconds)

A Kubernetes network plugin implements pod-to-pod communication within the cluster and is responsible for configuring and managing pod networking. Common plugins include Calico, Flannel, and Cilium; here we use Calico. Install the Calico network plugin:

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.0/manifests/tigera-operator.yaml
wget -c  https://raw.githubusercontent.com/projectcalico/calico/v3.26.0/manifests/custom-resources.yaml
cat custom-resources.yaml
# This section includes base Calico installation configuration.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
    - blockSize: 26
      cidr: 10.244.0.0/16 # change this value to match the pod network CIDR passed to kubeadm init
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()

---

# This section configures the Calico API server.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}
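
  • If you prefer not to edit the file by hand, a one-line sed can patch the CIDR (the upstream manifest ships with 192.168.0.0/16; adjust if your copy differs):
sed -i 's#192.168.0.0/16#10.244.0.0/16#' custom-resources.yaml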

# Before running the command below, it is best to pull the following images on every node first.
docker.io/calico/typha:v3.26.0
docker.io/calico/pod2daemon-flexvol:v3.26.0
docker.io/calico/node:v3.26.0
docker.io/calico/node-driver-registrar:v3.26.0
docker.io/calico/kube-controllers:v3.26.0
docker.io/calico/csi:v3.26.0
docker.io/calico/cni:v3.26.0

kubectl create -f custom-resources.yaml

  • Wait a moment, then check the pod objects
[root@k8s-master ~]# kubectl get pods -n calico-system -o wide
NAME                                      STATUS                  NODE      
calico-kube-controllers-7965786c7c-xnbhl  Running                 k8s-master
calico-node-dgb2l                         Init:ImagePullBackOff   k8s-node02
calico-node-f4dgx                         Init:ImagePullBackOff   k8s-node01
calico-node-hkwlf                         Running                 k8s-master
calico-typha-78c657f49b-4xvrb             ImagePullBackOff        k8s-node02
calico-typha-78c657f49b-gb5pc             ImagePullBackOff        k8s-node01
csi-node-driver-75s2d                     Running                 k8s-master
csi-node-driver-8tfsx                     ContainerCreating       k8s-node02
csi-node-driver-9kd2f                     ContainerCreating       k8s-node01
Many pods are in an abnormal state, while everything on the Master is running fine because its images were already downloaded.
So we need to pull the following images on the Node nodes as well:
kubectl describe pods -n calico-system|grep Image:|sort -rnk2|uniq -c |awk '{print "docker pull "$NF}'
docker pull docker.io/calico/typha:v3.26.0
docker pull docker.io/calico/pod2daemon-flexvol:v3.26.0
docker pull docker.io/calico/node:v3.26.0
docker pull docker.io/calico/node-driver-registrar:v3.26.0
docker pull docker.io/calico/kube-controllers:v3.26.0
docker pull docker.io/calico/csi:v3.26.0
docker pull docker.io/calico/cni:v3.26.0

Alternatively, you can use the script below to sync the images from the Master node to each Node node; just change the node IP addresses inside the script.

[root@k8s-master script]# cat image.sh
#!/bin/bash

# Define the target nodes
NODES=("172.16.99.72" "172.16.99.73") # replace with your actual node IPs or hostnames

# List all local images
IMAGES=$(docker images --format "{{.Repository}}:{{.Tag}}")

# Process each image
for IMAGE in $IMAGES; do
  # Derive a file-safe name from the image name and tag
  IMAGE_NAME=$(echo $IMAGE | tr '/' '_' | tr ':' '_')  # replace / and : so the name can be used as a file name
  TAR_FILE="${IMAGE_NAME}.tar"

  # Save the image as a .tar file
  docker save -o $TAR_FILE $IMAGE

  # Copy the image to each node
  for NODE in "${NODES[@]}"; do
    scp $TAR_FILE $NODE:/tmp/

    # Load the image on the node
    ssh $NODE "docker load -i /tmp/$TAR_FILE"
  done
done
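  • The script assumes passwordless root SSH from the Master to each node. If that is not set up yet, something like the following will do (a sketch; adjust user and IPs):
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
ssh-copy-id root@172.16.99.72
ssh-copy-id root@172.16.99.73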
  • Wait a moment, then check the pods again
[root@k8s-master ~]# kubectl get pod  -n calico-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-695c68787d-crfn7   1/1     Running   0          76s
calico-node-fbx8k                          1/1     Running   0          76s
calico-node-hddsw                          1/1     Running   0          76s
calico-node-jszx2                          1/1     Running   0          76s
calico-typha-7cf9b98747-j56np              1/1     Running   0          76s
calico-typha-7cf9b98747-z9qzh              1/1     Running   0          77s
csi-node-driver-c7fh4                      2/2     Running   0          76s
csi-node-driver-n4w2w                      2/2     Running   0          76s
csi-node-driver-r5zzk                      2/2     Running   0          76s

All pods now show Running, which means Calico installed successfully. Running kubectl get nodes again shows the node status as Ready, meaning the nodes are ready:

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES           AGE     VERSION
k8s-master   Ready    control-plane   9m13s   v1.28.0
k8s-node01   Ready    <none>          8m27s   v1.28.0
k8s-node02   Ready    <none>          8m13s   v1.28.0

Note that, for security reasons, the token in the kubeadm join command is only valid for 24 hours and cannot be used after it expires. However, you can create a new token with kubeadm token create --print-join-command in order to add new worker nodes, as shown below.
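  • Run the following on the Master, then execute the printed join command on the new node, remembering to append --cri-socket=unix:///var/run/cri-dockerd.sock:
kubeadm token create --print-join-command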

12. Deploy the Dashboard

Dashboard is a web-based management UI developed by the Kubernetes project. With it, you can manage cluster resources, view application overviews, read container logs, and open shells into containers.

  • Download the Dashboard manifest
wget -c https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
  • Change the Service type to NodePort and specify an access port so the Dashboard can be reached from outside the cluster; modify the file as follows
---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort # set the Service type to NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001 # pin the external access port
  selector:
    k8s-app: kubernetes-dashboard

---
  • Create the resources in the cluster
kubectl apply -f recommended.yaml
  • Check the pod objects
kubectl get pod  -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-67c65bdd58-rj9j2   1/1     Running   0          58s
kubernetes-dashboard-6cc44f7548-kdmbz        1/1     Running   0          58s

Both pods show Running, so the Dashboard installed successfully. Open https://172.16.99.72:30001/ in a browser (a NodePort service is reachable via any node's IP) and you will see the login page.

  • Create a service account and grant it cluster administrator privileges
kubectl create serviceaccount admin-user -n kubernetes-dashboard
serviceaccount/admin-user created
kubectl create clusterrolebinding  admin-user --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:admin-user
  • Create a token for the service account
kubectl create token admin-user -n kubernetes-dashboard
eyJhbGciOiJSUzI1NiIsImtpZCI6IjhLd3N3R1J1QjZTRkFOb280Z0MxSDU0VW02ekczd1VCZ1VZanE0MmxDd0kifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzI0MzE3NDc3LCJpYXQiOjE3MjQzMTM4NzcsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiZmZlZWUyODktNTM0My00ZWZiLTgxYjctZWNhOTE4MGUxZWZhIn19LCJuYmYiOjE3MjQzMTM4NzcsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbi11c2VyIn0.CmbmRmz6TIm2mJaRqKnHm6ohYLBmDQFKMU3EGYRIkZiVezF3jlpveUV-NWBl_173lw1EMX5eJ5ojHH7xuiUz_4wGoaFcqEGBt9D00EssdfOLHyppw9pswiPHbTpG30-sImxCVS07UGbc22qZnUxemHIlKhFjbMKsbq5tQ5MMvVlEnSUMe3W-4vnktuaTyBHEvNx_G3Yql19kiTNXEkaFPWjCSoIRpPemOoHyXY9a7Ykbvbv7dPGl5lLE58Vw5reSsfYaWcwVWeEvFm-Y7q-DEamatCajRcV0WzgtBxbZGNmPgTDBRvBRd4OS7-QXW4cTokKKOg9oKEqw_B9BUYtfcQ
  • Copy the token output above into the input box, then click Sign in to enter the Dashboard
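  • Note that tokens created this way are short-lived by default (about an hour). kubectl create token accepts a --duration flag if a longer-lived token is needed, e.g.:
kubectl create token admin-user -n kubernetes-dashboard --duration=24h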

13. Reset the Kubernetes environment

If you need to redeploy or tear down the Kubernetes environment on a machine, use the following command:

kubeadm reset --cri-socket=unix:///var/run/cri-dockerd.sock

This command removes everything that kubeadm set up and configured on the current node.
