4. Single Master Configuration, crictl, flannel


1. Initializing the Master

1.1. The initialization command

kubeadm init --kubernetes-version=1.26.2 \
--apiserver-advertise-address=192.168.10.26 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16 \
--image-repository=registry.aliyuncs.com/google_containers \
--ignore-preflight-errors=Swap


Notes:
--apiserver-advertise-address must be set to the master address of the current cluster. Since kubeadm init fetches images from the public registry by default, we use --image-repository to point it at a domestic mirror instead.
--kubernetes-version specifies the version of the Kubernetes programs to deploy; it needs to match a version the installed kubeadm supports. This flag is required.
--pod-network-cidr specifies the address range that Pods are allocated from; it should normally match the default of the network plugin to be deployed (e.g. flannel, calico). 10.244.0.0/16 is the network flannel uses by default.
--service-cidr specifies the address range that Services are allocated from; it is managed by Kubernetes and defaults to 10.96.0.0/12.
--ignore-preflight-errors=Swap: without this option, the system must have swap disabled. It is generally best to include it.
--image-repository specifies where the required container images are pulled from when installing the Kubernetes environment. To use a local registry, give its address instead, e.g.:
--image-repository 10.0.0.19:80/google_containers. Because the images here were pulled in advance and tagged with the default names, no further configuration is needed.
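The two CIDR flags must not overlap, or Service and Pod addresses would collide. A minimal sketch of that check in plain bash arithmetic (no cluster needed; the function names are my own):

```shell
#!/usr/bin/env bash
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip2int() { local IFS=.; set -- $1; echo $(( ($1<<24) + ($2<<16) + ($3<<8) + $4 )); }

# Print "overlap" or "no overlap" for two CIDR blocks.
cidr_overlap() {
  local a_net=${1%/*} a_len=${1#*/} b_net=${2%/*} b_len=${2#*/}
  local a_start=$(( $(ip2int "$a_net") & ~((1 << (32 - a_len)) - 1) ))
  local a_end=$(( a_start + (1 << (32 - a_len)) - 1 ))
  local b_start=$(( $(ip2int "$b_net") & ~((1 << (32 - b_len)) - 1) ))
  local b_end=$(( b_start + (1 << (32 - b_len)) - 1 ))
  if (( a_start <= b_end && b_start <= a_end )); then echo overlap; else echo "no overlap"; fi
}

cidr_overlap 10.96.0.0/12 10.244.0.0/16   # the two ranges used in the init command
# → no overlap
```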

1.2. Understanding the output of a successful init

# On success, the output looks like this
Your Kubernetes control-plane has initialized successfully!

# How to configure kubectl access
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

# How to join the cluster
kubeadm join 192.168.10.26:6443 --token chsuee.cxw3s9mdyodk8ehk \
        --discovery-token-ca-cert-hash sha256:5cd4bd62fa2c5bd83eb50c40dd941d3fdcd31edc9b7a169d3442ee2da6363218
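Note that the bootstrap token in the join command expires (24h TTL by default). A fresh join command can be printed on the master at any time; the snippet below only echoes the command rather than executing it, since running it needs a live control plane:

```shell
# kubeadm bootstrap tokens expire after 24 hours by default. If the token
# above has lapsed, generate a fresh join command on the master with
# `kubeadm token create --print-join-command`. Echoed here for illustration:
regen_join_cmd="kubeadm token create --print-join-command"
echo "$regen_join_cmd"
```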

1.3. Configuring kubectl access to the apiserver

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

[root@master1 ~]# kubectl get nodes
NAME      STATUS     ROLES           AGE     VERSION
master1   NotReady   control-plane   5m15s   v1.26.2

2. Resetting the cluster

If cluster creation fails for some exceptional reason, we can quickly restore the environment with a few commands.

2.1. Resetting and cleaning the Master

# Reset the Master node
kubeadm reset;
rm -rf /etc/kubernetes;
rm -rf ~/.kube ;
rm -rf /etc/cni/; # clear the container network configs
systemctl restart containerd.service

2.2. Resetting and cleaning a Node

rm -rf /etc/cni/net.d;
kubeadm reset;

# Restart this service to avoid problems with the network plugin
systemctl restart containerd.service
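The reset steps above can be wrapped in a small script. A sketch with a DRY_RUN switch (my own convention, not a kubeadm feature) so the destructive commands can be reviewed before anything is actually destroyed:

```shell
#!/usr/bin/env bash
# Sketch: wrap the master reset sequence; set DRY_RUN=1 to print the
# commands instead of executing them.
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

reset_master() {
  run kubeadm reset -f                      # -f skips the confirmation prompt
  run rm -rf /etc/kubernetes
  run rm -rf "$HOME/.kube"
  run rm -rf /etc/cni/                      # clear the container network configs
  run systemctl restart containerd.service  # restart to avoid stale CNI state
}

DRY_RUN=1 reset_master   # review first; call without DRY_RUN to actually reset
```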

3. Installing the network plugin

3.1. The current state of the network

[root@master1 ~]# journalctl -xefu kubelet
Mar 14 00:08:05 master1 kubelet[3013]: E0314 00:08:05.891756    3013 kubelet.go:2475] "Container runtime network not ready" 
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"

The log reports a CNI network error; we need to configure the network before this will resolve.

3.2. Installing the CNI flannel plugin

# Apply the manifest from the release download URL
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

# Contents of kube-flannel.yml, for reference:
apiVersion: v1
kind: Namespace
metadata:
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
  name: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - clustercidrs
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
kind: ConfigMap
metadata:
  labels:
    app: flannel
    k8s-app: flannel
    tier: node
  name: kube-flannel-cfg
  namespace: kube-system
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: flannel
    k8s-app: flannel
    tier: node
  name: kube-flannel-ds
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: flannel
      k8s-app: flannel
  template:
    metadata:
      labels:
        app: flannel
        k8s-app: flannel
        tier: node
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      containers:
      - args:
        - --ip-masq
        - --kube-subnet-mgr
        command:
        - /opt/bin/flanneld
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        image: docker.io/flannel/flannel:v0.21.3
        name: kube-flannel
        resources:
          requests:
            cpu: 100m
            memory: 50Mi
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
            - NET_RAW
          privileged: false
        volumeMounts:
        - mountPath: /run/flannel
          name: run
        - mountPath: /etc/kube-flannel/
          name: flannel-cfg
        - mountPath: /run/xtables.lock
          name: xtables-lock
      hostNetwork: true
      initContainers:
      - args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        command:
        - cp
        image: docker.io/flannel/flannel-cni-plugin:v1.1.2
        name: install-cni-plugin
        volumeMounts:
        - mountPath: /opt/cni/bin
          name: cni-plugin
      - args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        command:
        - cp
        image: docker.io/flannel/flannel:v0.21.3
        name: install-cni
        volumeMounts:
        - mountPath: /etc/cni/net.d
          name: cni
        - mountPath: /etc/kube-flannel/
          name: flannel-cfg
      priorityClassName: system-node-critical
      serviceAccountName: flannel
      tolerations:
      - effect: NoSchedule
        operator: Exists
      volumes:
      - hostPath:
          path: /run/flannel
        name: run
      - hostPath:
          path: /opt/cni/bin
        name: cni-plugin
      - hostPath:
          path: /etc/cni/net.d
        name: cni
      - configMap:
          name: kube-flannel-cfg
        name: flannel-cfg
      - hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
        name: xtables-lock

3.3. Verifying the installation

# View the nodes
[root@master1 ~]# kubectl get nodes
NAME      STATUS   ROLES           AGE   VERSION
master1   Ready    control-plane   23h   v1.26.2
master2   Ready    <none>          23h   v1.26.2
master3   Ready    <none>          23h   v1.26.2

# List the namespace names
[root@master1 ~]# kubectl get ns
NAME              STATUS   AGE
default           Active   23h
kube-flannel      Active   116s
kube-node-lease   Active   23h
kube-public       Active   23h
kube-system       Active   23h

# Check whether flannel is running
[root@master1 ~]# kubectl get pods -n kube-system | grep flannel
kube-flannel-ds-cd5x6             1/1     Running   0          4m33s
kube-flannel-ds-g9j8h             1/1     Running   0          4m49s
kube-flannel-ds-pb66w             1/1     Running   0          4m17s

3.4. Selecting the network interface for flannel on multi-NIC hosts

# If a node has multiple NICs, the --iface parameter must be used in kube-flannel.yml
# to name the cluster host's internal NIC; otherwise DNS may fail to resolve and
# containers may be unable to communicate. Download kube-flannel.yml locally and add
# --iface=<iface-name> to the flanneld startup arguments:
      containers:
        - name: kube-flannel
          image: registry.cn-shanghai.aliyuncs.com/gcr-k8s/flannel:v0.10.0-amd64
          command:
          - /opt/bin/flanneld
          args:
          - --ip-masq
          - --kube-subnet-mgr
          - --iface=eth1
⚠ ⚠ ⚠ The value of --iface=eth1 must be the name of your actual NIC.
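Rather than editing by hand, the argument can be patched in with sed. A sketch against a minimal stand-in for the relevant lines (the /tmp path and eth1 are examples; assumes GNU sed):

```shell
# Stand-in for the args section of a downloaded kube-flannel.yml (example path).
cat > /tmp/kube-flannel-args.yml <<'EOF'
        args:
        - --ip-masq
        - --kube-subnet-mgr
EOF

# Insert --iface=eth1 after --kube-subnet-mgr (& is the matched text; GNU sed).
sed -i 's/- --kube-subnet-mgr/&\n        - --iface=eth1/' /tmp/kube-flannel-args.yml

# The patched file now carries the extra argument.
grep -- '--iface=eth1' /tmp/kube-flannel-args.yml
```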

4. Configuring crictl

4.1. Background

Because docker cannot inspect the status of the images and containers that Kubernetes runs (the runtime here is containerd), we use crictl to query them instead.

4.2. Configuration

cat >/etc/crictl.yaml<<'EOF'
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 10
debug: false
EOF

4.3. Testing

4.3.1. Listing the running pods

[root@node1 ~]# crictl pods
POD ID              CREATED             STATE               NAME                                NAMESPACE           ATTEMPT             RUNTIME
98c512f3fb52d       49 minutes ago      Ready               nginx-deployment-5b47ccdd5c-b9pnc   default             0                   (default)
548537fb74473       15 hours ago        Ready               kube-flannel-ds-z4bgg               kube-system         0                   (default)
6d84a7ce7d3d5       15 hours ago        Ready               kube-proxy-t7n6h                    kube-system         0                   (default)

4.3.2. Batch-deleting unneeded images

for image in $(crictl images | grep v1.26 | awk '{print $1":"$2}'); do crictl rmi $image; done
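The grep/awk pipeline can be sanity-checked offline before handing its output to crictl rmi; the sample listing below is fabricated for illustration:

```shell
# Fabricated sample of `crictl images` output (header plus two rows).
sample='IMAGE                                                    TAG       IMAGE ID       SIZE
registry.aliyuncs.com/google_containers/kube-apiserver   v1.26.2   63d3239c3c15   35.3MB
docker.io/flannel/flannel                                v0.21.3   a0628a9cbd1f   26.5MB'

# Same filter as the loop above: keep v1.26 rows, join name and tag.
echo "$sample" | grep v1.26 | awk '{print $1":"$2}'
# → registry.aliyuncs.com/google_containers/kube-apiserver:v1.26.2
```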

posted @ 2023-03-16 10:27  小粉优化大师