Kubernetes 1.20.11 HA Cluster Deployment (kubeadm)

What is kubeadm?

Kubeadm is a tool for quickly deploying Kubernetes clusters. It simplifies the deployment process and handles certificates and configuration automatically. Kubeadm provides a straightforward way to initialize control-plane nodes, add worker nodes, and install network plugins, making cluster deployment faster and more efficient.

Why deploy Kubernetes with kubeadm

  1. Fast deployment: kubeadm can stand up a Kubernetes cluster quickly, saving the time and cost of a fully manual installation.
  2. Automation: kubeadm handles certificates and configuration automatically, reducing the risk of manual errors.
  3. Extensibility: kubeadm supports a variety of deployment topologies and can be extended as requirements grow.
  4. Easy maintenance: kubeadm ships tooling for upgrading and maintaining the cluster, which makes ongoing maintenance simpler.

Deployment

1. Environment

 

Role      Hostname      IP (example)
Master1   k8s-master1   192.168.10.100
Master2   k8s-master2   192.168.10.101
Master3   k8s-master3   192.168.10.102
Node1     k8s-node1     192.168.10.103
Node2     k8s-node2     192.168.10.104
Node3     k8s-node3     192.168.10.105

OS version: CentOS 7.9

Kubernetes version: 1.20.11

2. Initialize the operating system (all machines)

# Disable the firewall

systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux (setenforce switches to permissive immediately; the config change takes effect after reboot)

setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

# Disable swap (swapoff for the running system; the fstab edit makes it permanent)

swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab

# Add hosts entries

cat >> /etc/hosts << EOF
192.168.10.100  k8s-master1
192.168.10.101  k8s-master2
192.168.10.102  k8s-master3
192.168.10.103  k8s-node1
192.168.10.104  k8s-node2
192.168.10.105  k8s-node3
EOF
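
Optionally, a quick check that every entry resolves locally (hostnames are the ones from the table above):

for h in k8s-master1 k8s-master2 k8s-master3 k8s-node1 k8s-node2 k8s-node3; do getent hosts $h; done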

# Enable kernel user namespace support

grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"

# Tune kernel parameters

cat <<EOF > /etc/sysctl.d/docker.conf
net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
vm.swappiness=0
EOF

# Load br_netfilter so the net.bridge.* settings can be applied, and keep it loaded across reboots

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf

# Apply the settings

sysctl --system

# Reboot the system

reboot

# Add Kubernetes kernel tuning

cat <<EOF > /etc/sysctl.d/kubernetes.conf
# Maximum number of conntrack entries
net.netfilter.nf_conntrack_max = 10485760
# Maximum number of packets queued on the input side
net.core.netdev_max_backlog = 10000
# Minimum number of ARP cache entries kept
net.ipv4.neigh.default.gc_thresh1 = 80000
# Soft limit on the number of ARP cache entries
net.ipv4.neigh.default.gc_thresh2 = 90000
# Hard limit on the number of ARP cache entries
net.ipv4.neigh.default.gc_thresh3 = 100000
EOF

# Apply the settings

sysctl --system
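
To spot-check that the settings took effect, query a few of the keys written above directly (the net.bridge.* keys are only present once br_netfilter is loaded, as done earlier):

sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables vm.swappiness net.core.netdev_max_backlog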

# Configure IPVS modules

kube-proxy will use IPVS for load balancing, so the kernel must load the IPVS modules; otherwise kube-proxy falls back to iptables mode.

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

# Make the script executable

chmod 755 /etc/sysconfig/modules/ipvs.modules

# Load the modules

bash /etc/sysconfig/modules/ipvs.modules

# Verify the modules are loaded

lsmod | grep -e ip_vs -e nf_conntrack_ipv4

# Expected output:

-----------------------------------------------------------------------
nf_conntrack_ipv4      20480  0 
nf_defrag_ipv4         16384  1 nf_conntrack_ipv4
ip_vs_sh               16384  0 
ip_vs_wrr              16384  0 
ip_vs_rr               16384  0 
ip_vs                 147456  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          110592  2 ip_vs,nf_conntrack_ipv4
libcrc32c              16384  2 xfs,ip_vs
-----------------------------------------------------------------------

# Configure passwordless SSH from master1 to the other five nodes

1. Generate a key pair

Log in to master1 as root and run:

ssh-keygen -t rsa

2. Copy the public key to each remote host (a loop covering all five nodes is sketched after these steps):

ssh-copy-id user@remote_host

Note: user is the account on the remote host and remote_host is its IP or hostname.

You will be prompted for the remote host's root password.

3. On the remote host, edit /etc/ssh/sshd_config and make sure the following options are set:

PubkeyAuthentication yes

AuthorizedKeysFile .ssh/authorized_keys

If these options are commented out, uncomment them.

4. Restart the SSH service on the remote host:

systemctl restart sshd

After these steps you should be able to SSH from master1 to the remote host without a password.
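
For example, assuming root login and the hostnames from the environment table, the public key can be pushed to all five remaining nodes in one loop (a sketch; adjust the user or host list to your environment):

for host in k8s-master2 k8s-master3 k8s-node1 k8s-node2 k8s-node3; do
  ssh-copy-id root@$host
done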

3. Install docker-ce / kubelet / kubeadm

Download the yum repo

yum install -y wget
wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum clean all

# Install dependencies:

yum install -y policycoreutils-python
wget http://mirror.centos.org/centos/7/extras/x86_64/Packages/container-selinux-2.119.2-1.911c772.el7_8.noarch.rpm
rpm -ivh container-selinux-2.119.2-1.911c772.el7_8.noarch.rpm

# Install docker-ce

yum -y install docker-ce

# Configure docker

mkdir -p /etc/docker/
cat > /etc/docker/daemon.json <<EOF
{
   "registry-mirrors": ["https://dockerhub.azk8s.cn", "https://docker.mirrors.ustc.edu.cn"],
   "insecure-registries":["可以是自己公司的镜像仓库地址"],
   "max-concurrent-downloads": 10,
   "log-driver": "json-file",
   "log-level": "warn",
   "log-opts": {
     "max-size": "10m",
     "max-file": "3"
     },
   "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# Start docker

systemctl daemon-reload
systemctl enable docker
systemctl start docker
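
A quick check that Docker is running and picked up the systemd cgroup driver configured in daemon.json (exact output wording may vary between Docker versions):

systemctl is-active docker
docker info | grep -i 'cgroup driver'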

# Deploy Kubernetes

Configure the Kubernetes yum repo

cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF

# Install Kubernetes packages (master servers)

yum install -y kubelet-1.20.11 kubeadm-1.20.11 kubectl-1.20.11

# Install Kubernetes packages (node servers)

yum install -y kubelet-1.20.11 kubeadm-1.20.11

# Install IPVS tools (all servers)

yum install -y ipvsadm ipset

Enable kubelet so it starts on boot (there is no need to start it yet)

systemctl enable kubelet
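
Optionally, confirm that the intended 1.20.11 packages were installed (kubectl is only present on the masters):

kubeadm version -o short
kubelet --version
kubectl version --client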

# Prepare the kubeadm configuration

Print the default kubeadm init YAML configuration:

kubeadm config print init-defaults
kubeadm config print init-defaults --component-configs KubeletConfiguration
kubeadm config print init-defaults --component-configs KubeProxyConfiguration 

# Export the configuration (run on master1)

kubeadm config print init-defaults > kubeadm-init.yaml

1. Everywhere 127.0.0.1 appears in this configuration, it refers to the Nginx API proxy configured later.

2. advertiseAddress: 192.168.10.100 and bindPort: 5443 are the address and port the apiserver process itself binds to.

3. controlPlaneEndpoint: "127.0.0.1:6443" is the address actually used to reach the apiserver.

4. This layout keeps the apiserver highly available: each master runs an Nginx instance acting as a layer-4 proxy in front of the apiservers.

 

# Edit the configuration. The settings used in this article are shown below (this is a sample; use the file actually generated as the baseline and change mainly the fields called out in the comments)

vim kubeadm-init.yaml 

Sample content:

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  # IP the apiserver binds to; use the node's actual network interface IP
  advertiseAddress: 192.168.10.100
  # Port the apiserver binds to; set to 5443 to avoid clashing with the Nginx proxy listening on 6443
  bindPort: 5443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  # apiserver settings
  extraArgs:
    # Audit log settings
    audit-log-maxage: "20"
    audit-log-maxbackup: "10"
    audit-log-maxsize: "100"
    audit-log-path: "/var/log/kube-audit/audit.log"
    audit-policy-file: "/etc/kubernetes/audit-policy.yaml"
    audit-log-format: json
  # Audit logging is enabled, so the audit policy file and log directory on the host must be mounted into the apiserver
  extraVolumes:
  - name: "audit-config"
    hostPath: "/etc/kubernetes/audit-policy.yaml"
    mountPath: "/etc/kubernetes/audit-policy.yaml"
    readOnly: true
    pathType: "File"
  - name: "audit-log"
    hostPath: "/var/log/kube-audit"
    mountPath: "/var/log/kube-audit"
    pathType: "DirectoryOrCreate"
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
# Address actually used to reach the API server
controlPlaneEndpoint: "127.0.0.1:6443"
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    # Local path for etcd data; ideally place it on a dedicated disk
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.20.11
networking:
  dnsDomain: cluster.local
  # Pod IP address range
  podSubnet: 10.253.0.0/16
  # Service IP address range
  serviceSubnet: 10.254.0.0/16
scheduler: {}
---
# kubelet settings
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
clusterDNS:
# CoreDNS service IP (the .10 address within the serviceSubnet configured above)
- 10.254.0.10
# Default NodeLocal DNSCache host address (uncomment if NodeLocal DNSCache is used)
#- 169.254.20.10
clusterDomain: cluster.local
---
# kube-proxy settings
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  minSyncPeriod: 5s
  syncPeriod: 5s
  # Weighted round-robin scheduling
  scheduler: "wrr" 

# Create the audit policy file

vim /etc/kubernetes/audit-policy.yaml
apiVersion: audit.k8s.io/v1 # This is required.
kind: Policy
omitStages:
  - "RequestReceived"
rules:
  - level: RequestResponse
    resources:
    - group: ""
      resources: ["pods"]
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/log", "pods/status"]
  - level: None
    resources:
    - group: ""
      resources: ["configmaps"]
      resourceNames: ["controller-leader"]
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
    - group: "" # core API group
      resources: ["endpoints", "services"]
  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*" # Wildcard matching.
    - "/version" 
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    namespaces: ["kube-system"] 
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]
  - level: Request
    resources:
    - group: "" # core API group
    - group: "extensions" # Version of group should NOT be included.
  - level: Metadata
    omitStages:
      - "RequestReceived"

# Configure Nginx-Proxy (all three masters)

Create the config directory

mkdir -p /etc/nginx 

# Write the proxy configuration

cat << EOF > /etc/nginx/nginx.conf

error_log stderr notice;
worker_processes auto;
events {
  multi_accept on;
  use epoll;
  worker_connections 1024;
}

stream {
    upstream kube_apiserver {
        least_conn;
        server 192.168.10.100:5443;
        server 192.168.10.101:5443;
        server 192.168.10.102:5443;
    }

    server {
        listen        0.0.0.0:6443;
        proxy_pass    kube_apiserver;
        proxy_timeout 10m;
        proxy_connect_timeout 1s;
    }
}

EOF

# Make the config world-readable

chmod +r /etc/nginx/nginx.conf 
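
Before wiring the config into systemd, its syntax can be checked with the same image that will run it (a quick sketch; assumes Docker is already running and the nginx:alpine image can be pulled):

docker run --rm -v /etc/nginx/nginx.conf:/etc/nginx/nginx.conf:ro nginx:alpine nginx -t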

# Create the systemd service unit

cat << EOF > /etc/systemd/system/nginx-proxy.service

[Unit]
Description=kubernetes apiserver docker wrapper
Wants=docker.socket
After=docker.service 

[Service]
User=root
PermissionsStartOnly=true
ExecStart=/usr/bin/docker run -p 127.0.0.1:6443:6443 \\
                              -v /etc/nginx:/etc/nginx \\
                              --name nginx-proxy \\
                              --net=host \\
                              --restart=on-failure:5 \\
                              --memory=512M \\
                              nginx:alpine
ExecStartPre=-/usr/bin/docker rm -f nginx-proxy
ExecStop=/usr/bin/docker stop nginx-proxy
Restart=always
RestartSec=15s
TimeoutStartSec=30s

[Install]
WantedBy=multi-user.target
EOF

# Start Nginx

systemctl daemon-reload
systemctl start nginx-proxy
systemctl enable nginx-proxy
systemctl status nginx-proxy
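
To confirm the proxy container is up and listening on 6443 (ss is part of iproute on CentOS 7):

docker ps --filter name=nginx-proxy
ss -lntp | grep 6443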

4. Initialize the cluster

The --upload-certs flag uploads the control-plane certificates so they are copied automatically when the other master nodes join.
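
Optionally, the control-plane images can be pulled ahead of time with the same config file so that kubeadm init does not stall on downloads:

kubeadm config images pull --config kubeadm-init.yaml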

kubeadm init --config kubeadm-init.yaml --upload-certs 

# Output (sample; actual values will differ):

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 127.0.0.1:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:ed09a75d84bfbb751462262757310d0cf3d015eaa45680130be1d383245354f8 \
    --control-plane --certificate-key 93cb0d7b46ba4ac64c6ffd2e9f022cc5f22bea81acd264fb4e1f6150489cd07a

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 127.0.0.1:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:ed09a75d84bfbb751462262757310d0cf3d015eaa45680130be1d383245354f8

# Copy the kubeconfig

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

Note: $HOME here is normally root's home directory.

# Check cluster status

[root@k8s-master1 kubeadm]# kubectl get cs

NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                 
controller-manager   Healthy   ok                 
etcd-0               Healthy   {"health":"true"}

5. Join the Kubernetes cluster

The kubeadm init output above contains two kubeadm join commands; the one with --control-plane joins a node as a master.

Tokens have a limited lifetime. If the token is reported as invalid, create a new join token:

kubeadm token create --print-join-command
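
To check whether the original bootstrap token is still valid before creating a new one, list the tokens on master1:

kubeadm token list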

# Join the other master nodes

Create the config directory on the other two masters:

ssh k8s-master2 "mkdir -p /etc/kubernetes/"
ssh k8s-master3 "mkdir -p /etc/kubernetes/"

Note: run the commands above on master1.

# Copy the audit policy file

To the k8s-master2 node:

scp /etc/kubernetes/audit-policy.yaml k8s-master2:/etc/kubernetes/

To the k8s-master3 node:

scp /etc/kubernetes/audit-policy.yaml k8s-master3:/etc/kubernetes/

Note: run the commands above on master1.

# Join each master

First, test apiserver connectivity from master2 and master3:

curl -k https://127.0.0.1:6443

# Expected response:

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
  },
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {
  },
  "code": 403
}

Add extra flags so that each master registers its own apiserver-advertise-address and apiserver-bind-port.

# Run on k8s-master2

kubeadm join 127.0.0.1:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:ed09a75d84bfbb751462262757310d0cf3d015eaa45680130be1d383245354f8 \
    --control-plane --certificate-key 93cb0d7b46ba4ac64c6ffd2e9f022cc5f22bea81acd264fb4e1f6150489cd07a \
    --apiserver-advertise-address 192.168.10.101 \
    --apiserver-bind-port 5443

Note: --apiserver-advertise-address 192.168.10.101 is k8s-master2's own IP.

Copy the kubeconfig:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Run on k8s-master3

kubeadm join 127.0.0.1:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:ed09a75d84bfbb751462262757310d0cf3d015eaa45680130be1d383245354f8 \
    --control-plane --certificate-key 93cb0d7b46ba4ac64c6ffd2e9f022cc5f22bea81acd264fb4e1f6150489cd07a \
    --apiserver-advertise-address 192.168.10.102 \
    --apiserver-bind-port 5443

Note: --apiserver-advertise-address 192.168.10.102 is k8s-master3's own IP.

Copy the kubeconfig:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Note: the join commands above are samples; use the values printed by your own kubeadm init.
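
Once all three masters have joined, a quick look at the control-plane pods from master1 should show one apiserver and one etcd member per master (pod names include the hostnames):

kubectl -n kube-system get pods -o wide | grep -E 'kube-apiserver|etcd'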

# Join the worker nodes

Worker nodes join directly, without the --control-plane flag.

Log in to k8s-node1, k8s-node2 and k8s-node3 in turn and run the following command:

kubeadm join k8s-master1:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:ed09a75d84bfbb751462262757310d0cf3d015eaa45680130be1d383245354f8

Note: the join command above is a sample; use the values printed by your own kubeadm init.

6. Verify all nodes

[root@k8s-master1 yaml]# kubectl get nodes

NAME         STATUS     ROLES    AGE     VERSION
k8s-master1   NotReady   master   106m    v1.20.11
k8s-master2   NotReady   master   2m18s   v1.20.11
k8s-master3   NotReady   master   63s     v1.20.11
k8s-node1    NotReady   <none>   2m46s   v1.20.11
k8s-node2    NotReady   <none>   2m46s   v1.20.11
k8s-node3    NotReady   <none>   2m46s   v1.20.11

Note: STATUS shows NotReady here because no network plugin has been installed yet.

7. Install the network plugin

# Download the yaml file

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Note: this URL may be unreachable from behind the firewall. The manifest is reproduced in full below; paste it into a local kube-flannel.yml yourself.

---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    k8s-app: flannel
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "EnableNFTables": false,
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
    k8s-app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: docker.io/flannel/flannel-cni-plugin:v1.4.0-flannel1
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: docker.io/flannel/flannel:v0.25.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: docker.io/flannel/flannel:v0.25.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate

# Upload the file to master1 and edit it:

Only the pod CIDR assigned to the cluster needs to be changed here.

vim kube-flannel.yml

# Change the pod IP range, and keep the vxlan backend.

# "Type": "vxlan": host-gw mode is generally not supported on cloud networks and only works on a flat layer-2 network.

# The relevant part is shown below:

data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },

        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }

  net-conf.json: |
    {
      "Network": "10.253.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }

---        

# Apply the yaml file

kubectl apply -f kube-flannel.yml

# Check the pods

[root@k8s-master1 flannel]# kubectl get pods -n kube-system -o wide |grep kube-flannel
kube-flannel-ds-amd64-2tw6q          1/1     Running   0          88s    10.18.77.61    k8s-node-1   <none>           <none>
kube-flannel-ds-amd64-8nrtd          1/1     Running   0          88s    10.18.77.218   k8s-node-3   <none>           <none>
kube-flannel-ds-amd64-frmk9          1/1     Running   0          88s    10.18.77.117   k8s-node-2   <none>           <none>

8. Verify the whole cluster

# kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
k8s-master1   Ready    master   106m    v1.20.11
k8s-master2   Ready    master   2m18s   v1.20.11
k8s-master3   Ready    master   63s     v1.20.11
k8s-node1     Ready    <none>   2m46s   v1.20.11
k8s-node2     Ready    <none>   2m46s   v1.20.11
k8s-node3     Ready    <none>   2m46s   v1.20.11

If all nodes show Ready, the cluster is healthy.
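
Optionally, confirm that kube-proxy is really running in IPVS mode and that the system pods are healthy (ipvsadm was installed earlier; the virtual server table should list the cluster service IPs):

ipvsadm -Ln | head -n 20
kubectl get pods -n kube-system -o wide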

9. Check cluster certificates

Certificates issued by kubeadm are valid for one year by default; once they expire, the cluster can no longer be managed.

When renewing them later, run the following commands on every master node.

# Renew certificates

kubeadm alpha certs renew all

# Check certificate expiry dates

kubeadm alpha certs check-expiration 
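
The apiserver certificate's validity period can also be read directly with openssl (the path is the kubeadm default under /etc/kubernetes/pki):

openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -dates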

 

This completes the deployment of the Kubernetes 1.20.11 high-availability cluster.

 
