Building a Highly Available Kubernetes v1.25 Cluster with Containerd

Adjust kernel parameters

Installing Docker configures the following kernel parameters automatically, but with containerd they must be set by hand. iptables must be allowed to inspect bridged traffic: load the br_netfilter module explicitly with modprobe br_netfilter, and confirm that net.bridge.bridge-nf-call-iptables is set to 1 so that the Linux node's iptables can correctly see bridged traffic.

#Load the required modules at boot
cat <<EOF | tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
#Load the modules now
modprobe overlay
modprobe br_netfilter
#Set the required sysctl parameters; they persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
#Apply the sysctl parameters without rebooting
sysctl --system
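
A quick optional sanity check confirms the modules are loaded and the values took effect:

#Verify module loading and the sysctl values
lsmod | grep -E 'overlay|br_netfilter'
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward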

Install containerd

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum -y install containerd
mkdir /etc/containerd/
containerd config default > /etc/containerd/config.toml
sed -i "s#k8s.gcr.io#registry.aliyuncs.com/google_containers#g"  /etc/containerd/config.toml
systemctl restart containerd.service
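
Two further tweaks are worth considering here (a sketch; check the generated config.toml, since defaults vary by containerd version). kubeadm v1.22+ defaults the kubelet to the systemd cgroup driver, so containerd should match, and newer containerd releases reference the pause image from registry.k8s.io rather than k8s.gcr.io:

#Match the kubelet's systemd cgroup driver
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
#Newer containerd defaults use registry.k8s.io for the sandbox image
sed -i "s#registry.k8s.io#registry.aliyuncs.com/google_containers#g" /etc/containerd/config.toml
systemctl enable containerd.service
systemctl restart containerd.service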

Install kubeadm, kubelet, and kubectl on all hosts

Reference for installing from the domestic Alibaba Cloud mirror:

Kubernetes downloads and installation guide on the Alibaba Cloud open-source mirror site (mirrors.aliyun.com)

yum install -y yum-utils
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum makecache
yum list kubeadm|head #list available versions
yum install -y kubelet kubeadm kubectl
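
The bare yum install above pulls the newest packages available; to match the v1.25.3 initialization used below (a suggestion, adjust the version string as needed), the packages can be pinned and the kubelet enabled at boot:

yum install -y kubelet-1.25.3 kubeadm-1.25.3 kubectl-1.25.3
systemctl enable kubelet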


#Pre-pull the images from a domestic mirror (optional)
[root@master1 ~]# kubeadm config images pull --kubernetes-version=v1.25.3 --image-repository registry.aliyuncs.com/google_containers
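
To preview which images kubeadm will need before actually pulling them, it can print the list:

[root@master1 ~]# kubeadm config images list --kubernetes-version=v1.25.3 --image-repository registry.aliyuncs.com/google_containers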

Initialize the Kubernetes cluster on the first master node

--kubernetes-version: #Version of the Kubernetes control-plane components; it must match the version of the installed kubelet package
--control-plane-endpoint: #Required for multi-master deployments; a fixed access address (IP or DNS name) for the control plane, used as the API Server address in the kubeconfig files of cluster administrators and cluster components. Omit it for a single-master control plane. Note: kubeadm cannot convert a single control-plane cluster created without --control-plane-endpoint into a highly available cluster.
--pod-network-cidr: #Pod network address range in CIDR notation; the Flannel plugin defaults to 10.244.0.0/16, Calico to 192.168.0.0/16
--service-cidr: #Service network address range in CIDR notation, default 10.96.0.0/12; usually only plugins like Flannel need this specified manually
--service-dns-domain string #Cluster domain name, default cluster.local, resolved automatically by the cluster DNS service
--apiserver-advertise-address: #IP address the API server advertises it is listening on; if unset, the default network interface is used. This is the address the apiserver announces to the other components, normally the master node's IP on the cluster-internal network; 0.0.0.0 means all available addresses on this node. Optional
--image-repository string #Image registry to pull from, default k8s.gcr.io, which may be unreachable from mainland China; point it at a domestic mirror instead
--token-ttl #Lifetime of the shared bootstrap token, default 24 hours; 0 means never expire. To keep a token leaked through insecure storage from endangering the cluster, setting an expiry is recommended. If the token has expired and you want to join more nodes, recreate the token and the join command with: kubeadm token create --print-join-command
--ignore-preflight-errors=Swap #If swap is not disabled on the nodes, append this option so that kubeadm ignores the resulting error
--upload-certs #Upload the control-plane certificates to the kubeadm-certs Secret
--cri-socket #From v1.24 on, the path of the CRI socket to connect to; note that each CRI uses a different socket file
#If the CRI is containerd, use --cri-socket unix:///run/containerd/containerd.sock
#If the CRI is Docker (via cri-dockerd), use --cri-socket unix:///var/run/cri-dockerd.sock
#If the CRI is CRI-O, use --cri-socket unix:///var/run/crio/crio.sock
#Note: CRI-O and containerd manage containers differently, so their image stores are not interchangeable.
kubeadm init --control-plane-endpoint="192.168.1.88" --kubernetes-version=v1.25.3 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --token-ttl=0 --image-repository registry.aliyuncs.com/google_containers --upload-certs
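
If more than one CRI socket is present on the host (e.g. both containerd and cri-dockerd), kubeadm cannot autodetect which to use; the same command with the containerd socket named explicitly:

kubeadm init --control-plane-endpoint="192.168.1.88" --kubernetes-version=v1.25.3 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --token-ttl=0 --image-repository registry.aliyuncs.com/google_containers --upload-certs --cri-socket unix:///run/containerd/containerd.sock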

Set up kubectl access (from the kubeadm init output)

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.1.88:6443 --token vvxr9q.wl0wn1twon6dnz79 \
	--discovery-token-ca-cert-hash sha256:2648f41959497125af4296a5995f84e6366a83a0ba8d654dc8614e4c8255e013 \
	--control-plane --certificate-key 9b5a363751f115883b817821e872a420aaaf3ad0498364e3f39abb08cf48250d
#The command above joins additional master (control-plane) nodes
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.88:6443 --token vvxr9q.wl0wn1twon6dnz79 \
	--discovery-token-ca-cert-hash sha256:2648f41959497125af4296a5995f84e6366a83a0ba8d654dc8614e4c8255e013 
#The command above joins worker nodes

The bootstrap token is valid for 24 hours

#Once the token expires it can no longer be used; recreate it and print a fresh worker join command in one step:
#run on a master node
kubeadm token create --print-join-command
#For additional master nodes, also re-upload the control-plane certificates:
[root@master1 ~]# kubeadm init phase upload-certs --upload-certs
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
20a8288158d70f600e05b8cd1e1e3ddfb765083e4620c4ede1b546bdc7e54302
#Append the newly generated key 20a8288158d70f600e05b8cd1e1e3ddfb765083e4620c4ede1b546bdc7e54302 to the join command, after --control-plane --certificate-key
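
Putting the two together, a fresh control-plane join command looks like the sketch below; <new-token> and <hash> are placeholders for the values printed by kubeadm token create --print-join-command:

kubeadm join 192.168.1.88:6443 --token <new-token> \
	--discovery-token-ca-cert-hash sha256:<hash> \
	--control-plane --certificate-key 20a8288158d70f600e05b8cd1e1e3ddfb765083e4620c4ede1b546bdc7e54302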

To re-initialize the cluster from scratch, run the following

#If there are worker nodes, run this on the workers first, then on the control-plane nodes
kubeadm reset -f --cri-socket unix:///run/containerd/containerd.sock
rm -rf /etc/cni/net.d/  $HOME/.kube/config
reboot
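
Note that kubeadm reset does not clean up iptables or IPVS rules; kubeadm's own reset output suggests flushing them manually if leftovers cause problems:

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
#Only if kube-proxy ran in IPVS mode and ipvsadm is installed
ipvsadm --clear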

Enable kubectl command completion

kubectl has a rich command set, but command completion is not enabled by default; it can be set up as follows

kubectl completion bash > /etc/profile.d/kubectl_completion.sh
. /etc/profile.d/kubectl_completion.sh
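
If completion still does not work, the bash-completion package may be missing (an assumption about a minimal base install); add it and log in again:

yum install -y bash-completion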

Configure the network add-on on the first master node

Pod networking in Kubernetes is implemented by third-party plugins, of which there are dozens; the best known include flannel, calico, canal, and kube-router. A simple and easy-to-use option is the flannel project originally from CoreOS. The commands below deploy flannel onto the Kubernetes cluster online.

First, download a flanneld build matching your OS and hardware platform to every node and place it under /opt/bin/. Here we use flanneld-amd64; the latest version at the time of writing is v0.19.1. flanneld can be downloaded from: Releases · flannel-io/flannel · GitHub

Then, on the first initialized master node (k8s-master01 here), run the following to deploy kube-flannel to Kubernetes.

#With no network plugin installed yet, the node shows NotReady
[root@master1 ~]#kubectl get nodes
NAME               STATUS     ROLES           AGE   VERSION
master1.wang.org   NotReady   control-plane   17m   v1.25.0
wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
[root@master1 ~]# kubectl apply -f kube-flannel.yml
#or apply it straight from the URL
[root@master1 ~]#kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
#After a short wait, the node becomes Ready
[root@master1 ~]#kubectl get nodes
NAME               STATUS   ROLES           AGE   VERSION
master1.wang.org   Ready    control-plane   23m   v1.25.0
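
The flannel DaemonSet pods should also be running on every node; depending on the manifest version they live in the kube-flannel or the kube-system namespace:

kubectl get pods -A | grep flannel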

Join the worker nodes

kubeadm join 192.168.1.88:6443 --token vvxr9q.wl0wn1twon6dnz79 \
	--discovery-token-ca-cert-hash sha256:2648f41959497125af4296a5995f84e6366a83a0ba8d654dc8614e4c8255e013 

If adding a node to the cluster fails, clean up with the following commands and then re-join:

kubeadm reset
rm -rf /var/lib/cni/

Join additional master nodes

kubeadm join 192.168.1.88:6443 --token vvxr9q.wl0wn1twon6dnz79 \
	--discovery-token-ca-cert-hash sha256:2648f41959497125af4296a5995f84e6366a83a0ba8d654dc8614e4c8255e013 \
	--control-plane --certificate-key 9b5a363751f115883b817821e872a420aaaf3ad0498364e3f39abb08cf48250d
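
Once the join completes, every control-plane node should report Ready (run on any master):

kubectl get nodes -o wide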

Install the metrics-server add-on

vim metrics-server.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:       # the args below were added for this deployment
        - --cert-dir=/tmp
        - --secure-port=4443
        - --metric-resolution=30s
        - --kubelet-preferred-address-types=InternalIP
        - --kubelet-use-node-status-port
        - --kubelet-insecure-tls
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server:v0.4.1 # image pulled from a domestic mirror
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          periodSeconds: 10
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
[root@master1 ~]# kubectl apply -f metrics-server.yaml 

#Watch the pods being created; add the -w flag to stream updates
kubectl get pod -n kube-system 
# Check the result after 1-2 minutes
[root@master1 ~]# kubectl top nodes
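
Typical kubectl top nodes output once the metrics pipeline is up (the values below are illustrative only):

NAME               CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
master1.wang.org   215m         10%    1068Mi          62%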
