k8s-1.26.0 + Containerd Installation Guide


1. Introduction


The Kubernetes community began working on removing dockershim as early as July 2020. This means Kubernetes no longer treats Docker as the default underlying container tool: Docker and other container runtimes are handled alike, with no special built-in support. So what if we still want to use Docker as the container runtime? One option is to extract the dockershim functionality and maintain it independently as cri-dockerd; another is for the Docker community to implement the CRI interface directly inside Dockerd.

Under the hood, Docker simply calls Containerd, and Containerd has shipped a built-in CRI implementation since version 1.1, so there is no need for Docker to implement CRI separately. Now that Kubernetes no longer supports Docker out of the box, the best choice is to use Containerd directly as the container runtime; it is already proven in production. The rest of this article deploys Kubernetes with Containerd as the underlying runtime.


2. Environment


Three hosts, with the following details:

Hostname     IP               OS
k8s-master   192.168.199.101  CentOS 7.9
k8s-node01   192.168.199.102  CentOS 7.9
k8s-node02   192.168.199.103  CentOS 7.9

Software versions:

Name             Version
containerd.io 1.6.21
kubernetes 1.26.0

3. Base Configuration


Note: this part must be run on all three hosts!


3.1 Configure yum repositories


cd /etc/yum.repos.d/
mkdir bak ; mv *.repo bak/

curl https://mirrors.aliyun.com/repo/Centos-7.repo -o Centos-7.repo
curl https://mirrors.aliyun.com/repo/epel-7.repo -o epel-7.repo
sed -i '/aliyuncs/d' Centos-7.repo

# Add the Kubernetes repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
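
After writing the repo files, it is worth refreshing the yum cache and confirming the repos are visible; a quick sanity check, not part of the original steps:

# Rebuild the yum metadata cache; "kubernetes" should appear in the repo list
yum clean all && yum makecache fast
yum repolist enabled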

3.2 Set the hostnames


root@localhost(192.168.199.101)~>hostnamectl set-hostname k8s-master

root@k8s-master(192.168.199.101)~>cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.199.101 k8s-master
192.168.199.102 k8s-node01
192.168.199.103 k8s-node02

# Copy to the two node hosts
root@k8s-master(192.168.199.101)~>for i in 2 3; do scp /etc/hosts 192.168.199.10$i:/etc/ ; done

3.3 Configure the NTP service


yum install chrony ntpdate -y
sed "s/^server/#server/g" /etc/chrony.conf
echo 'server tiger.sina.com.cn iburst' >> /etc/chrony.conf
echo 'server ntp1.aliyun.com iburst' >> /etc/chrony.conf
systemctl enable chronyd ; systemctl start chronyd
ntpdate tiger.sina.com.cn
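
To confirm that chrony is actually synchronizing, its source report can be checked; the servers listed should be the ones configured above:

# "^*" marks the source chrony is currently synchronized to
chronyc sources -v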

3.4 Disable SELinux and firewalld


sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
systemctl disable firewalld; systemctl stop firewalld

After making these changes, rebooting the host is recommended.

reboot

3.5 Disable swap


Kubernetes has required swap to be disabled since version 1.8; with the default configuration, kubelet will not start while swap is on. Disable it as follows:

  1. Turn off swap
swapoff -a
  2. Comment out the swap mount
vim /etc/fstab

Comment out the swap auto-mount entry, then use free -m to confirm swap is off.

  3. Adjust the swappiness parameter
vim /etc/sysctl.d/99-kubernetes-cri.conf
...
vm.swappiness = 0

sysctl -p /etc/sysctl.d/99-kubernetes-cri.conf
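
A quick confirmation that swap is fully off; swapon -s should list no swap devices and free should report a zero swap total:

swapon -s
free -m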

4. Load Kernel Modules


Note: this part must be run on all three hosts!

cat << EOF > /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

Run the following commands to load the modules immediately:

modprobe overlay
modprobe br_netfilter
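
To confirm both modules are now loaded:

lsmod | grep -E 'overlay|br_netfilter'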

5. Configure Kernel Parameters


cat << EOF > /etc/sysctl.d/99-kubernetes-cri.conf
vm.swappiness = 0
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
user.max_user_namespaces=28633
EOF

Run the following command to apply the settings:

sysctl -p /etc/sysctl.d/99-kubernetes-cri.conf
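
The applied values can be read back to verify they took effect:

sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables vm.swappiness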

6. Enable IPVS Support


Note: this part must be run on all three hosts!

IPVS has long been part of the mainline kernel, but enabling ipvs mode for kube-proxy requires the following kernel modules to be loaded first:

ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4

Run the following on all three servers:

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

Also make sure the ipset package is installed on each node; to inspect IPVS proxy rules conveniently, install the management tool ipvsadm as well.

yum install -y ipset ipvsadm

Note: if these prerequisites are not met, kube-proxy falls back to iptables mode even when its configuration enables ipvs mode.
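
Once the cluster is up (section 8 onwards), ipvs mode can be confirmed by listing the virtual servers kube-proxy programs; one entry per ClusterIP should appear. This is only a later sanity check, noted here because it depends on the modules above:

ipvsadm -Ln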


7. Deploy Containerd


Note: this part must be run on all three hosts!


7.1 Install containerd


yum info containerd.io
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Installed Packages
Name        : containerd.io
Arch        : x86_64
Version     : 1.6.21	# version number
Release     : 3.1.el7
Size        : 114 M
Repo        : installed
From repo   : docker-ce-stable
Summary     : An industry-standard container runtime
URL         : https://containerd.io
License     : ASL 2.0
Description : containerd is an industry-standard container runtime with an emphasis on
            : simplicity, robustness and portability. It is available as a daemon for Linux
            : and Windows, which can manage the complete container lifecycle of its host
            : system: image transfer and storage, container execution and supervision,
            : low-level storage and network attachments, etc.


yum install -y containerd.io

7.2 Configure containerd


cd /etc/containerd/
mv config.toml config.toml.orig
containerd config default > config.toml

According to the Container runtimes documentation, on Linux distributions that use systemd as the init system, using systemd as the container cgroup driver keeps nodes more stable under resource pressure, so here containerd is configured to use the systemd cgroup driver on every node.

Edit the config file /etc/containerd/config.toml generated above:

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true	# changed from false to true

Next, also in /etc/containerd/config.toml, modify:

[plugins."io.containerd.grpc.v1.cri"]
  ...
  # sandbox_image = "k8s.gcr.io/pause:3.6"
  sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"	# Important: this must match the tag of the pause image actually pulled locally, or initialization will fail.

Add a mirror source to speed up image pulls:

      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
          endpoint = ["https://hub-mirror.c.163.com"]
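
For repeatable setups, the SystemdCgroup and sandbox_image edits above can also be scripted rather than made by hand; a minimal sketch, assuming the stock layout generated by containerd config default (verify the result afterwards):

# Flip the cgroup driver to systemd and point the sandbox at the aliyun pause image
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"#' /etc/containerd/config.toml
# Confirm both lines now read as intended
grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml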

Enable containerd at boot and start it:

systemctl enable containerd ; systemctl start containerd

7.3 Check version information


ctr version

Client:
  Version:  1.6.21
  Revision: 3dce8eb055cbb6872793272b4f20ed16117344f8
  Go version: go1.19.9

Server:
  Version:  1.6.21
  Revision: 3dce8eb055cbb6872793272b4f20ed16117344f8
  UUID: f0b52f6f-aea7-4c9b-a277-e46289e23b40


runc -v

runc version 1.1.7
commit: v1.1.7-0-g860f061
spec: 1.0.2-dev
go: go1.19.9
libseccomp: 2.3.1

8. Deploy Kubernetes with kubeadm


Note: this part is run only on k8s-master!


8.1 Install packages


Install version 1.26.0:

yum install -y kubeadm-1.26.0 kubelet-1.26.0 kubectl-1.26.0

8.2 Generate the initial kubeadm file


The official recommendation is to use --config to point at a configuration file and to place in it what used to be configured through individual flags; see Set Kubelet parameters via a config file. Kubernetes originally did this to support Dynamic Kubelet Configuration, but that feature was deprecated in 1.22 and removed in 1.24. If the kubelet configuration of all nodes in the cluster needs adjusting, the recommended approach is still to distribute the configuration with a tool such as ansible.

Enable the kubelet service at boot on every node:

systemctl enable kubelet

Configure shell completion for kubeadm and kubectl:

yum install -y bash-completion 
kubeadm completion bash > /etc/bash_completion.d/kubeadm
kubectl completion bash > /etc/bash_completion.d/kubectl
source /etc/bash_completion.d/kubeadm /etc/bash_completion.d/kubectl

Generate the configuration file:

kubeadm config print init-defaults > kubeadm-init.yml

cat kubeadm-init.yml

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: node
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.k8s.io
kind: ClusterConfiguration
kubernetesVersion: 1.26.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}

Based on the defaults, customize kubeadm-init.yml into the configuration used for this cluster initialization:

vim kubeadm-init.yml

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 0s	# set the token to never expire
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.199.101	# change to the k8s-master node IP
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  taints:
  - effect: PreferNoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers	# switch to a China-local mirror registry
kind: ClusterConfiguration
kubernetesVersion: 1.26.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16	# specify the network segment for the pod network
---
# declare systemd as the cgroup driver
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
failSwapOn: false
---
# enable ipvs
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs

8.3 Pull Images


8.3.1 List the images


kubeadm config images list --config=kubeadm-init.yml

registry.aliyuncs.com/google_containers/kube-apiserver:v1.26.0
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.26.0
registry.aliyuncs.com/google_containers/kube-scheduler:v1.26.0
registry.aliyuncs.com/google_containers/kube-proxy:v1.26.0
registry.aliyuncs.com/google_containers/pause:3.9
registry.aliyuncs.com/google_containers/etcd:3.5.6-0
registry.aliyuncs.com/google_containers/coredns:v1.9.3

Note: confirm once more that registry.aliyuncs.com/google_containers/pause:3.9 is exactly the pause image and tag that must be written into /etc/containerd/config.toml above.

...
    sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
...

After any change, remember to restart containerd; all nodes need this:

systemctl restart containerd
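
The effective value can be double-checked against containerd's merged runtime configuration:

containerd config dump | grep sandbox_image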

8.3.2 Pull the images


kubeadm config images pull --config=kubeadm-init.yml

[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.26.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.26.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.26.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.26.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.9
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.6-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.9.3

8.4 Initialize the Cluster with kubeadm


8.4.1 Initialize


# Run the initialization and record the output to the kubeadm-init.log file
kubeadm init --config=kubeadm-init.yml | tee kubeadm-init.log

[init] Using Kubernetes version: v1.26.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.199.101]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.199.101 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.199.101 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 9.004941 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:PreferNoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.199.101:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:fff9cb2c367c904d31994abb5a460b2e16305a4b687744ad1f31af06c02d37d7

A quick explanation of the log output, keyed by the bracketed tags at the start of each line:

  • [init]: declares the start of cluster initialization;
  • [preflight]: pre-initialization checks;
  • [certs]: generates the various certificates;
  • [kubeconfig]: generates the kubeconfig files;
  • [kubelet-start]: generates the kubelet configuration file "/var/lib/kubelet/config.yaml";
  • [control-plane]: creates static pods for the apiserver, controller-manager, and scheduler from the YAML files in /etc/kubernetes/manifests; together these form the control plane;
  • [bootstrap-token]: generates the token; record it, since it is needed later when adding nodes with kubeadm join;
  • [addons]: installs the essential add-ons CoreDNS and kube-proxy.

8.4.2 Use the cluster


The following commands configure kubectl access to the cluster for a regular user (root can also run them):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

The log also prints the command for joining nodes to the cluster:

kubeadm join 192.168.199.101:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:fff9cb2c367c904d31994abb5a460b2e16305a4b687744ad1f31af06c02d37d7

8.4.3 Check the cluster


Check whether the cluster is healthy:

kubectl get cs

Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true","reason":""}
scheduler            Healthy   ok

The cluster is healthy. If something goes wrong, reset with kubeadm reset, reboot the host, and initialize again.
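
A minimal rollback sequence for a failed init; note this wipes the node's Kubernetes state, so use it with care:

# Tear down whatever kubeadm set up, clear leftover CNI and kubectl config, reboot
kubeadm reset -f
rm -rf /etc/cni/net.d $HOME/.kube
reboot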


8.5 Install the Flannel Network Plugin


8.5.1 Download flannel


wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Since the flannel manifest can be hard to download without a proxy, the full file is reproduced here; it is fairly long.

---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - clustercidrs
  verbs:
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    k8s-app: flannel
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
    k8s-app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: docker.io/flannel/flannel-cni-plugin:v1.1.2
       #image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.2
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: docker.io/flannel/flannel:v0.21.5
       #image: docker.io/rancher/mirrored-flannelcni-flannel:v0.21.5
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: docker.io/flannel/flannel:v0.21.5
       #image: docker.io/rancher/mirrored-flannelcni-flannel:v0.21.5
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate


8.5.2 Apply flannel


kubectl apply -f  kube-flannel.yml

namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

8.5.3 Verify flannel


flannel creates its own namespace:

kubectl get ns

NAME              STATUS   AGE
default           Active   21m
kube-flannel      Active   87s
kube-node-lease   Active   21m
kube-public       Active   21m
kube-system       Active   21m

View the Pods created by flannel:

kubectl get po -n kube-flannel

NAME                    READY   STATUS    RESTARTS   AGE
kube-flannel-ds-nbdmd   1/1     Running   0          93s

View the cluster node status:

kubectl get nodes
NAME         STATUS   ROLES           AGE   VERSION
k8s-master   Ready    control-plane   22m   v1.26.0

9. Join Worker Nodes to the Cluster


Note: this part is run only on k8s-node01 and k8s-node02!


9.1 Install packages


Worker nodes do not need the kubectl client tool installed.

yum install -y kubeadm-1.26.0 kubelet-1.26.0

9.2 Join the cluster


Note: before joining the cluster, make sure the changes in /etc/containerd/config.toml are correct; the config file can simply be copied over from the k8s-master node:

scp k8s-master:/etc/containerd/config.toml /etc/containerd/config.toml 

After configuring, restart the service:

systemctl restart containerd

Enable kubelet at boot:

systemctl enable kubelet

Note: if this is skipped, joining the cluster produces a warning:

[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'

Join the cluster; the exact command was printed in the log when initialization succeeded on k8s-master.

kubeadm join 192.168.199.101:6443 --token abcdef.0123456789abcdef \
         --discovery-token-ca-cert-hash sha256:fff9cb2c367c904d31994abb5a460b2e16305a4b687744ad1f31af06c02d37d7
         
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Output like the above means the node joined successfully. Note: if joining fails, reset with kubeadm reset and retry.


9.3 Check the cluster


Check from k8s-master:

kubectl get nodes -o wide

NAME         STATUS   ROLES           AGE     VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION           CONTAINER-RUNTIME
k8s-master   Ready    control-plane   40m     v1.26.0   192.168.199.101   <none>        CentOS Linux 7 (Core)   3.10.0-1160.el7.x86_64   containerd://1.6.21
k8s-node01   Ready    <none>          4m15s   v1.26.0   192.168.199.102   <none>        CentOS Linux 7 (Core)   3.10.0-1160.el7.x86_64   containerd://1.6.21

k8s-node01 has joined successfully; run the same steps on k8s-node02.


10. Cluster Status


10.1 Node status


kubectl get nodes -o wide

NAME         STATUS   ROLES           AGE    VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION           CONTAINER-RUNTIME
k8s-master   Ready    control-plane   49m    v1.26.0   192.168.199.101   <none>        CentOS Linux 7 (Core)   3.10.0-1160.el7.x86_64   containerd://1.6.21
k8s-node01   Ready    <none>          12m    v1.26.0   192.168.199.102   <none>        CentOS Linux 7 (Core)   3.10.0-1160.el7.x86_64   containerd://1.6.21
k8s-node02   Ready    <none>          2m9s   v1.26.0   192.168.199.103   <none>        CentOS Linux 7 (Core)   3.10.0-1160.el7.x86_64   containerd://1.6.21

10.2 All Pods in the cluster


kubectl get po -o wide -A

NAMESPACE      NAME                                 READY   STATUS    RESTARTS   AGE     IP                NODE         NOMINATED NODE   READINESS GATES
kube-flannel   kube-flannel-ds-mh4fs                1/1     Running   1          13m     192.168.199.102   k8s-node01   <none>           <none>
kube-flannel   kube-flannel-ds-nbdmd                1/1     Running   0          30m     192.168.199.101   k8s-master   <none>           <none>
kube-flannel   kube-flannel-ds-xz6gp                1/1     Running   0          2m40s   192.168.199.103   k8s-node02   <none>           <none>
kube-system    coredns-5bbd96d687-rwk97             1/1     Running   0          49m     10.244.0.3        k8s-master   <none>           <none>
kube-system    coredns-5bbd96d687-z8zx6             1/1     Running   0          49m     10.244.0.2        k8s-master   <none>           <none>
kube-system    etcd-k8s-master                      1/1     Running   0          49m     192.168.199.101   k8s-master   <none>           <none>
kube-system    kube-apiserver-k8s-master            1/1     Running   0          49m     192.168.199.101   k8s-master   <none>           <none>
kube-system    kube-controller-manager-k8s-master   1/1     Running   0          49m     192.168.199.101   k8s-master   <none>           <none>
kube-system    kube-proxy-4v7q2                     1/1     Running   0          2m40s   192.168.199.103   k8s-node02   <none>           <none>
kube-system    kube-proxy-g7g6j                     1/1     Running   0          49m     192.168.199.101   k8s-master   <none>           <none>
kube-system    kube-proxy-hrh6n                     1/1     Running   1          13m     192.168.199.102   k8s-node01   <none>           <none>
kube-system    kube-scheduler-k8s-master            1/1     Running   0          49m     192.168.199.101   k8s-master   <none>           <none>

10.3 All Services in the cluster


kubectl get svc -A

NAMESPACE     NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  49m
kube-system   kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   49m

10.4 All ConfigMaps in the cluster


kubectl get configmaps -A

NAMESPACE         NAME                                 DATA   AGE
default           kube-root-ca.crt                     1      50m
kube-flannel      kube-flannel-cfg                     2      30m
kube-flannel      kube-root-ca.crt                     1      30m
kube-node-lease   kube-root-ca.crt                     1      50m
kube-public       cluster-info                         2      50m
kube-public       kube-root-ca.crt                     1      50m
kube-system       coredns                              1      50m
kube-system       extension-apiserver-authentication   6      50m
kube-system       kube-proxy                           2      50m
kube-system       kube-root-ca.crt                     1      50m
kube-system       kubeadm-config                       1      50m
kube-system       kubelet-config                       1      50m

11. Application Test


11.1 Create a Pod


Create:

kubectl run ngx --image=nginx:alpine --port=80

Check:

kubectl get po -o wide

NAME   READY   STATUS    RESTARTS   AGE   IP           NODE         NOMINATED NODE   READINESS GATES
ngx    1/1     Running   0          9s    10.244.2.5   k8s-node02   <none>           <none>

11.2 Create a Service


Create:

kubectl expose pod ngx --target-port 80 --type NodePort

Check:

kubectl get svc

NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        4h8m
ngx          NodePort    10.110.76.103   <none>        80:30090/TCP   3s
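
The NodePort is served on every node, so the same check can be done from the command line before opening a browser; the IP and port come from the outputs above:

# A HEAD request to the NodePort should return HTTP/1.1 200 OK from nginx
curl -I http://192.168.199.101:30090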

11.3 Access via a browser


The nginx Service is exposed via NodePort, so it can be reached directly in a browser on port 30090 of any node.

(Screenshot: the nginx welcome page, reached via NodePort 30090.)


12. References


https://blog.frognew.com/2023/01/kubeadm-install-kubernetes-1.26.html

Author: hukey

Link: https://www.cnblogs.com/hukey/p/17428157.html

License: this work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivs 2.5 China Mainland license.
