Kubernetes: PV, PVC, StorageClass

PV and PVC

PV (PersistentVolume): a piece of storage provisioned in Kubernetes; like nodes, a PV is a cluster-level resource.
PVC (PersistentVolumeClaim): a request for (a claim on) a PV.

If a PVC stays unbound after it is created, check the following (diagnostic commands are sketched after this list):

  • The PVC requests more space than the PV provides
  • The PVC's storageClassName does not match the PV's
  • The PVC's access modes do not match the PV's
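
A sketch of commands for diagnosing an unbound PVC (using the pvc0001 name from the example below):

$ kubectl describe pvc pvc0001    # the Events section explains why binding fails
$ kubectl get pv                  # compare capacity, access modes and storageClassName against the claim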

If a Pod that uses the PVC stays in the Pending state, check the following (see the commands after this list):

  • The PVC has not been created, or was not created successfully
  • The PVC and the Pod are not in the same Namespace
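
A sketch of commands for checking these two causes (my-pod and my-namespace are placeholder names):

$ kubectl get pvc -n my-namespace                 # confirm the PVC exists and is Bound in the Pod's namespace
$ kubectl describe pod my-pod -n my-namespace     # the Events section reports missing or unbound PVCs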

After the PVC is deleted --> Kubernetes launches a recycler Pod (for the Recycle policy) --> the PV is reclaimed according to its reclaim policy --> once reclamation finishes, the PV returns to the idle (Available) state and can be bound again --> any Pending PVC that matches this PV can then bind to it.
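
With the Retain policy, a released PV keeps its old claimRef and stays in the Released state. A common manual fix, sketched here against the pv0001 example below, is to clear the claimRef so the PV becomes Available again:

$ kubectl patch pv pv0001 --type json -p '[{"op":"remove","path":"/spec/claimRef"}]'
$ kubectl get pv pv0001    # STATUS should change to Available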

# pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs-slow
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /data
    server: 192.168.2.241
# pvc.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc0001
spec:
  accessModes:
    - ReadWriteMany
  volumeMode: Filesystem
  resources:
    requests:
      storage: 5Gi
  storageClassName: nfs-slow
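
A minimal sketch of creating the two objects above and verifying that they bind:

$ kubectl apply -f pv.yaml -f pvc.yaml
$ kubectl get pv pv0001     # STATUS should become Bound
$ kubectl get pvc pvc0001   # VOLUME should show pv0001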

persistentVolumeReclaimPolicy: # reclaim policy (a sketch of changing it follows this list)

  • Recycle: # basic scrub (rm -rf on the volume contents); deprecated in recent Kubernetes releases; deletion order: Pod -> PVC -> PV
  • Retain: # keep the PV and its data after the PVC is deleted
  • Delete: # the PV is deleted along with the PVC; the backing storage must support deletion; default for dynamically provisioned volumes
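
A sketch of switching the reclaim policy of an existing PV (using the pv0001 example above):

$ kubectl patch pv pv0001 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'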

capacity: # the PV's size (spec.capacity.storage)

  • volumeMode: # how the volume is presented: Filesystem or Block
  • accessModes: # the access modes supported by this PV:
    # ReadWriteOnce (RWO): can be mounted read-write by a single node.
    # ReadWriteMany (RWX): can be mounted read-write by multiple nodes.
    # ReadOnlyMany (ROX): can be mounted read-only by multiple nodes.

storageClassName: # the PV's class, essentially a class name; a PVC can bind to a PV only if their storageClassName values match.

PV states (a quick way to inspect them is sketched after this list):

  • Available: # idle PV, not bound to any PVC
  • Bound: # bound to a PVC
  • Released: # the PVC was deleted, but the resource has not yet been reclaimed for reuse
  • Failed: # automatic reclamation failed
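
A quick way to inspect a PV's phase (using the pv0001 example above):

$ kubectl get pv                                        # the STATUS column shows the phase
$ kubectl get pv pv0001 -o jsonpath='{.status.phase}'   # prints just the phase, e.g. Bound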

Using the PVC in a Pod's YAML, under spec.volumes (a full Pod sketch follows the snippet):

- name: data
  persistentVolumeClaim:
    claimName: pvc0001    # name of the PVC
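
A self-contained sketch of a Pod that mounts pvc0001; the Pod name, image and mount path below are illustrative assumptions, not part of the original example:

# pvc-demo-pod.yaml (hypothetical file name)
apiVersion: v1
kind: Pod
metadata:
  name: pvc-demo                             # hypothetical Pod name
spec:
  containers:
    - name: app
      image: nginx:alpine                    # any image will do; nginx is only an example
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html   # illustrative mount path
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: pvc0001                   # the PVC created above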

StorageClass

The NFS dynamic-provisioning setup below assumes Kubernetes 1.20 or above.

# nfs-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: k8s-nfs
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: k8s-nfs
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: k8s-nfs
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: k8s-nfs
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: k8s-nfs
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
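
The RBAC objects above assume the k8s-nfs namespace already exists; a sketch of creating it and applying the manifest:

$ kubectl create namespace k8s-nfs
$ kubectl apply -f nfs-rbac.yaml
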
# nfs-storage.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true" # mark this StorageClass as the cluster default
provisioner: fuseim.pri/ifs # or choose another name; must match the deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"
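
A sketch of creating the StorageClass and confirming that it was marked as the default:

$ kubectl apply -f nfs-storage.yaml
$ kubectl get storageclass    # the default class is listed with a "(default)" suffix
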
# nfs-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim1
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
  storageClassName: managed-nfs-storage   # must match the name of the StorageClass created above
# nfs-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: k8s-nfs
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          #image: quay.io/external_storage/nfs-client-provisioner:latest
          image: docker.io/jmgao1983/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME   # provisioner name; must match the StorageClass provisioner field
              value: fuseim.pri/ifs
            - name: NFS_SERVER         # NFS server address
              value: 192.168.2.241
            - name: NFS_PATH           # NFS export path
              value: /data
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.2.241     # NFS server address
            path: /data               # NFS export path
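
A sketch of deploying the provisioner and the test claim, then checking that the provisioner Pod is running:

$ kubectl apply -f nfs-deployment.yaml -f nfs-pvc.yaml
$ kubectl -n k8s-nfs get pods    # the nfs-client-provisioner Pod should be Running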

The PVC's status should now show Bound:

$ kubectl get pvc
NAME          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
test-claim1   Bound    pvc-03100fa6-9a04-4bb7-8bab-b79c451b677c   1Mi        RWX            managed-nfs-storage   3m34s

A PV was created automatically:

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                 STORAGECLASS          REASON   AGE
pvc-03100fa6-9a04-4bb7-8bab-b79c451b677c   1Mi        RWX            Delete           Bound    default/test-claim1   managed-nfs-storage            4m18s

When the PVC is deleted, the PV is reclaimed automatically as well:

$ kubectl delete -f nfs-pvc.yaml 
persistentvolumeclaim "test-claim1" deleted

$ kubectl get pvc,pv
No resources found

NFS-based dynamic provisioning is now configured; Pods can mount PVCs backed by this StorageClass. A quick end-to-end check is sketched below.
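
A minimal end-to-end check, sketched with hypothetical names (nfs-test-claim, nfs-test-pod): create a PVC against managed-nfs-storage and a Pod that writes a file to it, then look for the provisioned subdirectory under /data on the NFS server (192.168.2.241):

# nfs-test.yaml (hypothetical file name)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-claim               # hypothetical PVC name
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: managed-nfs-storage
  resources:
    requests:
      storage: 1Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: nfs-test-pod                 # hypothetical Pod name
spec:
  restartPolicy: Never
  containers:
    - name: test
      image: busybox:1.36            # any small image with a shell works
      command: ["sh", "-c", "echo SUCCESS > /mnt/SUCCESS"]
      volumeMounts:
        - name: nfs-vol
          mountPath: /mnt
  volumes:
    - name: nfs-vol
      persistentVolumeClaim:
        claimName: nfs-test-claim

$ kubectl apply -f nfs-test.yaml
$ kubectl get pvc nfs-test-claim    # should become Bound shortly after creation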
