NFS StorageClass: PVC stuck in Pending

With NFS as the external storage backend, the workload never starts. The PVC and Pod status look like this:

  • 1. The PVC stays in Pending (www-nfs-web-0 Pending k8s-nfs-storage 11s);
  • 2. The Pod cannot be scheduled: FailedScheduling 0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.

The PVC was never provisioned. With dynamic provisioning, it is the provisioner that creates the backing PV. The root cause: for performance and to unify how clients address the apiserver, Kubernetes 1.20 disabled SelfLink by default (the field was removed entirely in 1.24), while the legacy nfs-client-provisioner depends on SelfLink.
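This can be confirmed from the provisioner's own logs. On Kubernetes >= 1.20 the legacy nfs-client-provisioner typically fails with a SelfLink error when a claim arrives (exact wording varies by provisioner version):

```shell
kubectl logs deployment/nfs-client-provisioner
# Typical error on 1.20+ (wording varies by version):
#   ... unexpected error getting claim reference:
#   selfLink was empty, can't make reference
```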

Error output

#kubectl get pvc
NAME            STATUS    VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
app-pvc         Bound     pv01       50Gi       RWX                              10m
local-pvc       Bound     local-pv   50Gi       RWO            local-storage     14m
www-nfs-web-0   Pending                                        k8s-nfs-storage   7m40s

#kubectl get pod -o wide
NAME                                      READY   STATUS    RESTARTS   AGE   IP             NODE     NOMINATED NODE   READINESS GATES
nfs-client-provisioner-76d45db677-bxc77   1/1     Running   0          3m26s   10.244.1.102   node1    <none>           <none>
nfs-web-0                                 0/1     Pending   0          3m26s   <none>         <none>   <none>           <none>


#kubectl describe pod nfs-web-0
Name:           nfs-web-0
Namespace:      default
Priority:       0
Node:           <none>
Labels:         app=nfs-web
                controller-revision-hash=nfs-web-64b64845df
                statefulset.kubernetes.io/pod-name=nfs-web-0
Annotations:    <none>
Status:         Pending
IP:
IPs:            <none>
Controlled By:  StatefulSet/nfs-web
Containers:
  nginx:
    Image:        ikubernetes/myapp:v1
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:
      /usr/share/nginx/html from www (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-dnwgx (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  www:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  www-nfs-web-0
    ReadOnly:   false
  default-token-dnwgx:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-dnwgx
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age    From               Message
  ----     ------            ----   ----               -------
  Warning  FailedScheduling  3m40s  default-scheduler  0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
  Warning  FailedScheduling  3m40s  default-scheduler  0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.

nfs storageclass yaml

#cat nfs-storageclass.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-provisioner
            - name: NFS_SERVER
              value: 10.0.0.127
            - name: NFS_PATH
              value: /data/volumes
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.0.0.127
            path: /data/volumes
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update", "create"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch", "create"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: k8s-nfs-storage
provisioner: nfs-provisioner
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nfs-web
spec:
  serviceName: "nginx"
  replicas: 1
  selector:
    matchLabels:
      app: nfs-web
  template:
    metadata:
      labels:
        app: nfs-web
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: ikubernetes/myapp:v1
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
      annotations:
        volume.beta.kubernetes.io/storage-class: k8s-nfs-storage
    spec:
      accessModes: [ "ReadWriteMany" ]
      resources:
        requests:
          storage: 1Gi
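Side note: `volume.beta.kubernetes.io/storage-class` in the volumeClaimTemplates above is the legacy annotation form; on current clusters the `storageClassName` field in the claim spec is preferred, e.g.:

```yaml
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      storageClassName: k8s-nfs-storage
      accessModes: [ "ReadWriteMany" ]
      resources:
        requests:
          storage: 1Gi
```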


# kubectl apply -f nfs-storageclass.yaml
deployment.apps/nfs-client-provisioner created
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
storageclass.storage.k8s.io/k8s-nfs-storage created
statefulset.apps/nfs-web created
#kubectl get pv     # empty: the provisioner never created a PV
#kubectl get pvc    # www-nfs-web-0 remains Pending

Method 1: modify the apiserver configuration

Edit the apiserver static-pod manifest to re-enable SelfLink. On a kubeadm-based cluster, add the flag below. Note that the RemoveSelfLink feature gate itself was removed in Kubernetes 1.24, so this workaround only applies through 1.23; on newer clusters use Method 4.

#vim /etc/kubernetes/manifests/kube-apiserver.yaml
spec:
  containers:
  - command:
    - kube-apiserver
...

    - --feature-gates=RemoveSelfLink=false   # add this line


#the apiserver runs as a static pod; move the manifest out and back in to reload it
mv /etc/kubernetes/manifests/kube-apiserver.yaml .
mv kube-apiserver.yaml /etc/kubernetes/manifests/
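If you prefer to script the manifest change, a small helper can insert the flag while preserving the file's indentation. A sketch (the function name is mine; it assumes GNU sed, so back the file up first):

```shell
# Sketch: append the feature-gates flag right after the
# "- kube-apiserver" command entry, reusing that line's indentation.
add_selflink_gate() {
  local manifest="$1"
  sed -i 's/^\( *\)- kube-apiserver$/&\n\1- --feature-gates=RemoveSelfLink=false/' "$manifest"
}

# Usage on a kubeadm control-plane node (make a backup first):
#   cp /etc/kubernetes/manifests/kube-apiserver.yaml /root/kube-apiserver.yaml.bak
#   add_selflink_gate /etc/kubernetes/manifests/kube-apiserver.yaml
```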

#kubectl get pvc
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
www-nfs-web-0   Bound    pvc-b7793ca6-43ae-42b1-a397-91acc9e11172   1Gi        RWX            k8s-nfs-storage   26m

#kubectl get pod
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-76d45db677-bxc77   1/1     Running   3          26m
nfs-web-0                                 1/1     Running   0          26m

Method 2: modify the K3s service unit

K3s has no separate apiserver manifest, so the argument is passed through its systemd unit instead (the same 1.24 caveat from Method 1 applies):

# /etc/systemd/system/k3s.service
# Add the two quoted lines below. systemd does not support inline
# comments after a continuation backslash, so keep the lines comment-free.

ExecStart=/usr/local/bin/k3s \
    server \
        ...
        '--kube-apiserver-arg' \
        'feature-gates=RemoveSelfLink=false' \
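After editing the unit file, systemd has to re-read it and K3s must be restarted for the new argument to take effect:

```shell
systemctl daemon-reload
systemctl restart k3s
```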

Method 3: set the flag on a fresh K3s install

$ curl -sfL https://get.k3s.io | sh -s - --kube-apiserver-arg "feature-gates=RemoveSelfLink=false"

Method 4: use a provisioner image that does not depend on SelfLink

Recreate the provisioner with the newer nfs-subdir-external-provisioner image, which no longer relies on SelfLink (the second image below is a mirror of the first):

gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner:v4.0.0

registry.cn-beijing.aliyuncs.com/pylixm/nfs-subdir-external-provisioner:v4.0.0
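To switch an existing deployment without re-applying the whole YAML, the image can be patched in place (deployment and container names are taken from the Deployment above; using the mirror image here):

```shell
kubectl set image deployment/nfs-client-provisioner \
  nfs-client-provisioner=registry.cn-beijing.aliyuncs.com/pylixm/nfs-subdir-external-provisioner:v4.0.0
```

The new image also reads the PROVISIONER_NAME environment variable, so the existing value (nfs-provisioner) should continue to match the StorageClass's provisioner field.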

posted @ 2022-08-21 21:21  九尾cat