StorageClass

  StorageClass is a Kubernetes resource type. It is a category (logical group) that administrators create on demand to ease PV management: classes can be defined by storage-system performance, by overall quality-of-service level, by backup policy, or by any other administrator-defined criteria.

  One benefit of storage classes is support for dynamic PV provisioning. When users need persistent storage, they create PVCs to bind matching PVs. When such requests are frequent, or when manually created PVs cannot satisfy every PVC, having the system dynamically create PVs tailored to each PVC's requirements brings great flexibility to storage management.

I. The StorageClass resource

  Official reference: https://kubernetes.io/zh-cn/docs/concepts/storage/storage-classes/

  The name of a StorageClass object matters: it is the identifier users reference. Besides the name, a StorageClass defines three key fields: provisioner, parameters, and reclaimPolicy. Once the administrator creates the object, its name and other parameters can no longer be updated.

  Concretely, a StorageClass defines two things:

  1) The attributes of the PVs, such as storage size and type;

  2) The volume plugin used to create such PVs, such as Ceph or NFS.

  With these two pieces of information, Kubernetes can match a user-submitted PVC to the corresponding StorageClass and then invoke the volume plugin declared by that StorageClass to create the required PV.
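
  This chain, PVC → StorageClass → volume plugin, can be sketched with two minimal manifests (the class name, claim name, and provisioner string here are illustrative):

```yaml
# A StorageClass names the volume plugin (provisioner) that will create PVs.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-storage            # illustrative; the name PVCs reference
provisioner: example.com/nfs    # illustrative external provisioner name
---
# A PVC selects the class by name; Kubernetes then asks that class's
# provisioner to create a matching PV and binds it to this claim.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim              # illustrative
spec:
  storageClassName: fast-storage
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 1Gi
```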

1. Fields of the StorageClass manifest

  View the fields required to define a StorageClass:

[root@k8s-master1 ~]# kubectl explain storageclass
KIND:     StorageClass
VERSION:  storage.k8s.io/v1

DESCRIPTION:
     StorageClass describes the parameters for a class of storage for which
     PersistentVolumes can be dynamically provisioned.

     StorageClasses are non-namespaced; the name of the storage class according
     to etcd is in ObjectMeta.Name.

FIELDS:
   allowVolumeExpansion <boolean> #whether the class allows volume expansion. PVs can be configured as expandable; when this is true, users can grow a volume by editing the corresponding PVC. This only supports growing volumes, never shrinking them
     AllowVolumeExpansion shows whether the storage class allow volume expand

   allowedTopologies    <[]Object>
     Restrict the node topologies where volumes can be dynamically provisioned.
     Each volume plugin defines its own supported topology specifications. An
     empty TopologySelectorTerm list means there is no topology restriction.
     This field is only honored by servers that enable the VolumeScheduling
     feature.

   apiVersion   <string> #API version
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

   kind <string> #resource kind
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

   metadata     <Object> #standard object metadata
     Standard object's metadata. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

   mountOptions <[]string> #mount options applied to PVs dynamically created from this class
     Dynamically provisioned PersistentVolumes of this storage class are created
     with these mountOptions, e.g. ["ro", "soft"]. Not validated - mount of the
     PVs will simply fail if one is invalid.

   parameters   <map[string]string> #parameters describing the volumes of this class; the set of valid parameters differs from provisioner to provisioner
     Parameters holds the parameters for the provisioner that should create
     volumes of this storage class.

   provisioner  <string> -required-  #the storage system backing the class. The StorageClass relies on the provisioner to determine which volume plugin to use to reach the target storage system. Kubernetes ships with multiple built-in provisioners, all named with the "kubernetes.io" prefix, and also supports user-defined provisioners written against the Kubernetes spec, either internal or external. The kubernetes-sigs/sig-storage-lib-external-provisioner repository contains a library for implementing external provisioners and links to a list of external drivers; https://github.com/kubernetes-incubator/external-storage/ also provides example implementations
     Provisioner indicates the type of the provisioner.

   reclaimPolicy        <string> #reclaim policy applied to PVs dynamically created from this class; valid values are Delete (the default) and Retain
     Dynamically provisioned PersistentVolumes of this storage class are created
     with this reclaimPolicy. Defaults to Delete.

   volumeBindingMode    <string> #controls when provisioning and binding happen for a PVC. The default, Immediate, provisions and binds the volume as soon as the PersistentVolumeClaim is created; WaitForFirstConsumer delays PV binding and provisioning until a Pod using the PersistentVolumeClaim is created. Only honored when the volume scheduling feature is enabled
     VolumeBindingMode indicates how PersistentVolumeClaims should be
     provisioned and bound. When unset, VolumeBindingImmediate is used. This
     field is only honored by servers that enable the VolumeScheduling feature.
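
  Pulling these fields together, a StorageClass that exercises most of them might look like the sketch below (the provisioner name and the parameters map are illustrative, since valid parameters depend entirely on the provisioner):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-sc                  # immutable once created
provisioner: example.com/nfs        # required; must match a running provisioner
parameters:                         # provisioner-specific; illustrative key
  archiveOnDelete: "false"
reclaimPolicy: Retain               # Delete is the default
allowVolumeExpansion: true          # let PVC edits grow the volume
volumeBindingMode: WaitForFirstConsumer  # delay binding until a Pod uses the PVC
mountOptions:                       # applied to dynamically created PVs; not validated
  - noatime
```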

2. Deploy the NFS provisioner

  To use a StorageClass, the matching automatic provisioner must be installed. With an NFS backend this is the nfs-client provisioner, a program that uses an already-configured NFS server to create PVs automatically. Dynamically created PVs appear on the NFS share as directories named ${namespace}-${pvcName}-${pvName}; when such a PV is reclaimed, the directory is renamed archived-${namespace}-${pvcName}-${pvName}.

  Before deploying nfs-client, an NFS server must already be running. One was installed earlier at 10.0.0.131 with the shared directory /data/volumes/. The nfs-client documentation at https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client can also be followed directly.

  1) Create the ServiceAccount that the nfs-provisioner runs as, then bind the required permissions

[root@k8s-master1 storageclass]# vim nfs-client-sa.yaml
[root@k8s-master1 storageclass]# cat nfs-client-sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
[root@k8s-master1 storageclass]# kubectl apply -f nfs-client-sa.yaml
serviceaccount/nfs-client-provisioner created
[root@k8s-master1 storageclass]# kubectl get sa
NAME                     SECRETS   AGE
default                  1         53d
nfs-client-provisioner   1         5s
[root@k8s-master1 storageclass]# kubectl create clusterrolebinding nfs-client-provisioner-runner --clusterrole=cluster-admin --serviceaccount=default:nfs-client-provisioner
clusterrolebinding.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
[root@k8s-master1 storageclass]# kubectl get clusterrolebinding |grep nfs-client-provisioner-runner
nfs-client-provisioner-runner                          ClusterRole/cluster-admin                                                          31s

  This creates a ServiceAccount named nfs-client-provisioner and grants it permissions.
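
  Binding cluster-admin works for a quick test but grants far more than the provisioner needs. A more restrictive alternative, sketched loosely after the RBAC shipped with nfs-subdir-external-provisioner (the exact rule list may differ between versions), would look like:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nfs-client-provisioner-runner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
```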

  2) Create a Deployment to install nfs-client

[root@k8s-master1 storageclass]# vim nfs-client.yaml
[root@k8s-master1 storageclass]# cat nfs-client.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-beijing.aliyuncs.com/mydlq/nfs-subdir-external-provisioner:v4.0.0
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: example.com/nfs
            - name: NFS_SERVER
              value: 10.0.0.131
            - name: NFS_PATH
              value: /data/volumes
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.0.0.131
            path: /data/volumes
[root@k8s-master1 storageclass]# kubectl apply -f nfs-client.yaml
deployment.apps/nfs-client-provisioner created
[root@k8s-master1 storageclass]# kubectl get pods -o wide
NAME                                     READY   STATUS    RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES
nfs-client-provisioner-5d65b75f7-2f7qx   1/1     Running   0          5s    10.244.36.87   k8s-node1   <none>           <none>

  View the pod's details:

[root@k8s-master1 storageclass]# kubectl describe pods nfs-client-provisioner-5d65b75f7-2f7qx
Name:         nfs-client-provisioner-5d65b75f7-2f7qx
Namespace:    default
Priority:     0
Node:         k8s-node1/10.0.0.132
Start Time:   Sat, 24 Sep 2022 18:08:48 +0800
Labels:       app=nfs-client-provisioner
              pod-template-hash=5d65b75f7
Annotations:  cni.projectcalico.org/podIP: 10.244.36.87/32
              cni.projectcalico.org/podIPs: 10.244.36.87/32
Status:       Running
IP:           10.244.36.87
IPs:
  IP:           10.244.36.87
Controlled By:  ReplicaSet/nfs-client-provisioner-5d65b75f7
Containers:
  nfs-client-provisioner:
    Container ID:   docker://4190304f422bd37c7924ce471c5d5b394dc63d9a6ad14e679dab5a955c6df782
    Image:          registry.cn-beijing.aliyuncs.com/mydlq/nfs-subdir-external-provisioner:v4.0.0
    Image ID:       docker://sha256:3beaacba3ff4821eabd4e15b7fbe99c95e079f74d3bf60ef2ba2fefda5d47b42
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Sat, 24 Sep 2022 18:08:49 +0800
    Ready:          True
    Restart Count:  0
    Environment:
      PROVISIONER_NAME:  example.com/nfs
      NFS_SERVER:        10.0.0.131
      NFS_PATH:          /data/volumes
    Mounts:
      /persistentvolumes from nfs-client-root (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from nfs-client-provisioner-token-qhpns (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  nfs-client-root:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    10.0.0.131
    Path:      /data/volumes
    ReadOnly:  false
  nfs-client-provisioner-token-qhpns:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  nfs-client-provisioner-token-qhpns
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  62s   default-scheduler  Successfully assigned default/nfs-client-provisioner-5d65b75f7-2f7qx to k8s-node1
  Normal  Pulled     61s   kubelet            Container image "registry.cn-beijing.aliyuncs.com/mydlq/nfs-subdir-external-provisioner:v4.0.0" already present on machine
  Normal  Created    61s   kubelet            Created container nfs-client-provisioner
  Normal  Started    61s   kubelet            Started container nfs-client-provisioner 

3. Create a StorageClass object

  Declare a StorageClass named course-nfs-storage. Note that the provisioner value below must be identical to the PROVISIONER_NAME environment variable in the Deployment above.

[root@k8s-master1 storageclass]# cat nfs-client-class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: course-nfs-storage
provisioner: example.com/nfs  # or choose another name; must match the Deployment's PROVISIONER_NAME env var
[root@k8s-master1 storageclass]# kubectl apply -f nfs-client-class.yaml
storageclass.storage.k8s.io/course-nfs-storage created
[root@k8s-master1 storageclass]# kubectl get storageclass
NAME                 PROVISIONER       RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
course-nfs-storage   example.com/nfs   Delete          Immediate           false                  7s
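
  Optionally, a class can be annotated as the cluster default, so that PVCs which omit storageClassName still receive dynamically provisioned volumes (shown here on an illustrative copy of the manifest):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: course-nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"  # marks the cluster default
provisioner: example.com/nfs
```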

4. Dynamic PV provisioning

  Dynamic PV provisioning requires that the administrator create at least one storage class beforehand. With the class above in place, declare a PVC that requests 1Gi with the ReadWriteMany access mode, then check its binding status after creation:

[root@k8s-master1 storageclass]# vim test-pvc.yaml
[root@k8s-master1 storageclass]# cat test-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-nfs-dynamic-001
spec:
  storageClassName: course-nfs-storage
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
[root@k8s-master1 storageclass]# kubectl apply -f test-pvc.yaml
persistentvolumeclaim/pvc-nfs-dynamic-001 created
[root@k8s-master1 storageclass]# kubectl get pvc
NAME                  STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS         AGE
pvc-nfs-dynamic-001   Bound    pvc-14b6128b-1d2c-41ed-98d8-e8b87e7e2db8   1Gi        RWX            course-nfs-storage   4s
[root@k8s-master1 storageclass]# kubectl describe pvc pvc-nfs-dynamic-001
Name:          pvc-nfs-dynamic-001
Namespace:     default
StorageClass:  course-nfs-storage
Status:        Bound
Volume:        pvc-14b6128b-1d2c-41ed-98d8-e8b87e7e2db8
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: example.com/nfs
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      1Gi
Access Modes:  RWX
VolumeMode:    Filesystem
Used By:       <none>
Events:
  Type    Reason                 Age   From                                                                                         Message
  ----    ------                 ----  ----                                                                                         -------
  Normal  ExternalProvisioning   23s   persistentvolume-controller                                                                  waiting for a volume to be created, either by external provisioner "example.com/nfs" or manually created by system administrator
  Normal  Provisioning           23s   example.com/nfs_nfs-client-provisioner-5d65b75f7-2f7qx_688a5021-6961-4594-8519-be26c8de4487  External provisioner is provisioning volume for claim "default/pvc-nfs-dynamic-001"
  Normal  ProvisioningSucceeded  23s   example.com/nfs_nfs-client-provisioner-5d65b75f7-2f7qx_688a5021-6961-4594-8519-be26c8de4487  Successfully provisioned volume pvc-14b6128b-1d2c-41ed-98d8-e8b87e7e2db8

  The PVC pvc-nfs-dynamic-001 was created successfully and is bound to PV pvc-14b6128b-1d2c-41ed-98d8-e8b87e7e2db8, which the StorageClass created automatically by calling the NFS provisioner.

[root@k8s-master1 storageclass]# kubectl get pvc -o wide
NAME                  STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS         AGE    VOLUMEMODE
pvc-nfs-dynamic-001   Bound    pvc-14b6128b-1d2c-41ed-98d8-e8b87e7e2db8   1Gi        RWX            course-nfs-storage   114s   Filesystem
[root@k8s-master1 storageclass]# kubectl get pv -o wide
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                         STORAGECLASS         REASON   AGE     VOLUMEMODE
pvc-14b6128b-1d2c-41ed-98d8-e8b87e7e2db8   1Gi        RWX            Delete           Bound    default/pvc-nfs-dynamic-001   course-nfs-storage            2m55s   Filesystem
[root@k8s-master1 storageclass]#

  Any storage system that supports dynamic PV provisioning can, once defined as a storage class, be claimed on demand through PVCs. This is especially useful when the required storage size and the number of volumes are hard to estimate in advance.

  Summary of the steps:

1) Provisioner: deploy an NFS provisioner, i.e. the automatic configuration program.

2) Create a StorageClass that references the provisioner from the previous step.

3) Create a PVC that references the StorageClass.

II. Testing and verification

  Use a simple example to exercise the PVC declared above via the StorageClass and verify that data persists.

1. Use the dynamically provisioned PVC

  Create a pod that mounts the PVC generated dynamically through the StorageClass:

[root@k8s-master1 storageclass]# vim storageclass-pod.yaml
[root@k8s-master1 storageclass]# cat storageclass-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: storageclass-pod
spec:
  containers:
  - name: test-pod
    image: nginx:latest
    imagePullPolicy: IfNotPresent
    volumeMounts:
      - name: nfs-pvc
        mountPath: /usr/share/nginx/html
  restartPolicy: "Never"
  volumes:
  - name: nfs-pvc
    persistentVolumeClaim:
      claimName: pvc-nfs-dynamic-001
[root@k8s-master1 storageclass]# kubectl apply -f storageclass-pod.yaml
pod/storageclass-pod created
[root@k8s-master1 storageclass]# kubectl get pods -o wide
NAME                                     READY   STATUS    RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES
nfs-client-provisioner-5d65b75f7-2f7qx   1/1     Running   0          26m   10.244.36.87   k8s-node1   <none>           <none>
storageclass-pod                         1/1     Running   0          5s    10.244.36.89   k8s-node1   <none>           <none>

  After creating the resource, view its details:

[root@k8s-master1 storageclass]# kubectl describe pods storageclass-pod
Name:         storageclass-pod
Namespace:    default
Priority:     0
Node:         k8s-node1/10.0.0.132
Start Time:   Sat, 24 Sep 2022 18:35:13 +0800
Labels:       <none>
Annotations:  cni.projectcalico.org/podIP: 10.244.36.89/32
              cni.projectcalico.org/podIPs: 10.244.36.89/32
Status:       Running
IP:           10.244.36.89
IPs:
  IP:  10.244.36.89
Containers:
  test-pod:
    Container ID:   docker://4ec8d952e8e1c59f7b01a31661b3636cfd3f6e26f33389298fced88c319df917
    Image:          nginx:latest
    Image ID:       docker-pullable://nginx@sha256:b95a99feebf7797479e0c5eb5ec0bdfa5d9f504bc94da550c2f58e839ea6914f
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Sat, 24 Sep 2022 18:35:16 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /usr/share/nginx/html from nfs-pvc (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5n29f (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  nfs-pvc:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pvc-nfs-dynamic-001
    ReadOnly:   false
  default-token-5n29f:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-5n29f
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  2m43s  default-scheduler  Successfully assigned default/storageclass-pod to k8s-node1
  Normal  Pulled     2m40s  kubelet            Container image "nginx:latest" already present on machine
  Normal  Created    2m40s  kubelet            Created container test-pod
  Normal  Started    2m40s  kubelet            Started container test-pod

  Access the pod storageclass-pod and check the result:

[root@k8s-master1 storageclass]# curl 10.244.36.89
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.23.1</center>
</body>
</html>

  Log in to the pod, create an index.html file, and access the pod again:

[root@k8s-master1 storageclass]# kubectl exec -it storageclass-pod -- /bin/sh
# pwd
/
# cd /usr/share/nginx/html
# echo "welcome to test nginx" >>index.html
# cat index.html
welcome to test nginx
# exit
[root@k8s-master1 storageclass]# curl 10.244.36.89
welcome to test nginx

  This Pod is very simple: a single test-pod container mounts /usr/share/nginx/html onto the pvc-nfs-dynamic-001 claim, and an index.html file was created in that directory. Verification is easy: just check whether index.html shows up in the shared directory on the NFS server:

[root@k8s-master1 storageclass]# ll /data/volumes/
total 0
drwxrwxrwx 2 root root 24 Sep 24 18:42 default-pvc-nfs-dynamic-001-pvc-14b6128b-1d2c-41ed-98d8-e8b87e7e2db8

  The directory with the long name follows the pattern ${namespace}-${pvcName}-${pvName}. Check whether it contains index.html:

[root@k8s-master1 storageclass]# ll /data/volumes/default-pvc-nfs-dynamic-001-pvc-14b6128b-1d2c-41ed-98d8-e8b87e7e2db8/
total 4
-rw-r--r-- 1 root root 22 Sep 24 18:42 index.html
[root@k8s-master1 storageclass]# cat /data/volumes/default-pvc-nfs-dynamic-001-pvc-14b6128b-1d2c-41ed-98d8-e8b87e7e2db8/index.html
welcome to test nginx

  The index.html file is there, and its contents match the file inside the pod.

2. Delete the PVC and check the PV

  Delete the PVC created through the StorageClass and check whether the PV is deleted as well; this depends on the reclaim policy:

[root@k8s-master1 storageclass]# kubectl get pods
NAME                                     READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-5d65b75f7-2f7qx   1/1     Running   0          46m
storageclass-pod                         1/1     Running   0          20m
[root@k8s-master1 storageclass]# kubectl get pvc
NAME                  STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS         AGE
pvc-nfs-dynamic-001   Bound    pvc-14b6128b-1d2c-41ed-98d8-e8b87e7e2db8   1Gi        RWX            course-nfs-storage   41m
[root@k8s-master1 storageclass]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                         STORAGECLASS         REASON   AGE
pvc-14b6128b-1d2c-41ed-98d8-e8b87e7e2db8   1Gi        RWX            Delete           Bound    default/pvc-nfs-dynamic-001   course-nfs-storage            41m
#delete the pod and PVC, then check whether the PV gets deleted
[root@k8s-master1 storageclass]# kubectl delete pods storageclass-pod
pod "storageclass-pod" deleted
[root@k8s-master1 storageclass]# kubectl delete pvc pvc-nfs-dynamic-001
persistentvolumeclaim "pvc-nfs-dynamic-001" deleted
[root@k8s-master1 storageclass]# kubectl get pv -o wide
No resources found
[root@k8s-master1 storageclass]#

  Because PVs provisioned through this StorageClass carry the Delete reclaim policy, the PV is removed automatically once the PVC is deleted.
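
  If dynamically provisioned PVs should instead survive PVC deletion, create a class with reclaimPolicy set to Retain; since StorageClass objects are immutable, this means a new class rather than an edit (a sketch with an illustrative name):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: course-nfs-storage-retain   # illustrative name
provisioner: example.com/nfs
reclaimPolicy: Retain               # released PVs are kept, not deleted
```

  With Retain, deleting the PVC leaves the PV in the Released state, and an administrator must reclaim or delete it manually.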

posted @ 2022-09-24 19:02  出水芙蓉·薇薇