Kubernetes ---- Storage Volumes (PV, PVC)


  Before creating a Pod that needs persistent data, first create a PVC. The PVC looks for a PV on the system that matches its requirements and claims it; the relationship is one-to-one. Once a PV is claimed by a PVC, its status changes to Bound and it cannot be bound by any other PVC.
If no matching PV exists, the PVC stays in Pending status until a matching PV becomes available. Once created, a PVC acts as a storage volume that can be accessed by multiple Pods, subject to the access rules you define (accessMode).
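These states can be observed directly with kubectl; a quick sketch of the relevant commands (the claim name mypvc is illustrative):

```shell
# The STATUS column shows Available before binding, Bound after
kubectl get pv
# Watch a claim move from Pending to Bound as a matching PV appears
kubectl get pvc mypvc -w
# Show which PV the claim bound to and the access modes in effect
kubectl describe pvc mypvc
```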


pods.spec.volumes.persistentVolumeClaim (pvc): persistent volume claim
  The PVC binds to a PV.
  The PV binds to a backing storage system.
  pv: a piece of storage space carved out of the storage system


# View the PVC spec documentation
$ kubectl explain pvc

pv.spec.accessModes <[]string>

Access Modes
ReadWriteOnce -- the volume can be mounted read-write by a single node
ReadOnlyMany -- the volume can be mounted read-only by many nodes
ReadWriteMany -- the volume can be mounted read-write by many nodes

RWO - ReadWriteOnce
ROX - ReadOnlyMany
RWX - ReadWriteMany


pv.spec.capacity <map[string]string>  defines the size of the PV

Based on the NFS setup from https://www.cnblogs.com/k-free-bolg/p/13155970.html, we create several PVs; only then can PVCs be created that bind to and use them.
1. Create directories for the PVs to use
nfs node:

# mkdir /data/volumes/v{1,2,3,4,5}
# vim /etc/exports
/data/volumes/v1 192.168.222.0/24(rw,no_root_squash)
/data/volumes/v2 192.168.222.0/24(rw,no_root_squash)
/data/volumes/v3 192.168.222.0/24(rw,no_root_squash)
/data/volumes/v4 192.168.222.0/24(rw,no_root_squash)
/data/volumes/v5 192.168.222.0/24(rw,no_root_squash)
# exportfs -arv
# showmount -e 
Export list for node-nfs:
/data/volumes/v5 192.168.222.0/24
/data/volumes/v4 192.168.222.0/24
/data/volumes/v3 192.168.222.0/24
/data/volumes/v2 192.168.222.0/24
/data/volumes/v1 192.168.222.0/24
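Before handing the exports to Kubernetes, it is worth verifying that a worker node can actually mount them. A quick manual check from any host in 192.168.222.0/24 (assuming the NFS server is 192.168.222.103, as in the export list above; the package name varies by distribution):

```shell
# Install the NFS client tools if missing (CentOS: nfs-utils, Debian/Ubuntu: nfs-common)
yum install -y nfs-utils

# Test-mount one export, write a probe file, then clean up
mount -t nfs 192.168.222.103:/data/volumes/v1 /mnt
echo test > /mnt/probe && cat /mnt/probe
rm -f /mnt/probe
umount /mnt
```

If the write fails with a permission error, re-check the rw and no_root_squash options in /etc/exports.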

2. Create NFS-backed PVs
node2:

# View the PV spec documentation
$ kubectl explain pv
$ vim pv-nfs.yaml
  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: pv001
    labels:
      name: pv0001
  spec:
    accessModes: ["ReadWriteMany","ReadWriteOnce"]
    capacity:
      storage: 2Gi
    nfs:
      path: /data/volumes/v1
      server: 192.168.222.103
  ---
  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: pv002
    labels:
      name: pv0002
  spec:
    accessModes: ["ReadWriteOnce"]
    capacity:
      storage: 5Gi
    nfs:
      path: /data/volumes/v2
      server: 192.168.222.103
  ---
  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: pv003
    labels:
      name: pv0003
  spec:
    accessModes: ["ReadWriteOnce","ReadWriteMany"]
    capacity:
      storage: 20Gi
    nfs:
      path: /data/volumes/v3
      server: 192.168.222.103
  ---
  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: pv004
    labels:
      name: pv0004
  spec:
    accessModes: ["ReadWriteOnce","ReadWriteMany"]
    capacity:
      storage: 10Gi
    nfs:
      path: /data/volumes/v4
      server: 192.168.222.103
  ---
  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: pv005
    labels:
      name: pv0005
  spec:
    accessModes: ["ReadWriteOnce","ReadWriteMany"]
    capacity:
      storage: 10Gi
    nfs:
      path: /data/volumes/v5
      server: 192.168.222.103

$ kubectl apply -f pv-nfs.yaml
$ kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv001   2Gi        RWO,RWX        Retain           Available                                   13m
pv002   5Gi        RWO            Retain           Available                                   13m
pv003   20Gi       RWO,RWX        Retain           Available                                   13m
pv004   10Gi       RWO,RWX        Retain           Available                                   13m
pv005   10Gi       RWO,RWX        Retain           Available                                   13m

# RECLAIM POLICY:

  Retain: when the PVC is released and its binding to the PV is removed, the data on the PV is kept;
  Recycle: keep the PV but scrub the data on it (deprecated);
  Delete: delete the released PV together with its backing storage volume.
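The policy is set per PV via spec.persistentVolumeReclaimPolicy; manually created NFS-backed PVs default to Retain. A minimal fragment (the name is illustrative, the path and server match the setup above):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-retain-demo        # illustrative name
spec:
  accessModes: ["ReadWriteMany"]
  capacity:
    storage: 1Gi
  persistentVolumeReclaimPolicy: Retain   # Retain | Delete (Recycle is deprecated)
  nfs:
    path: /data/volumes/v1
    server: 192.168.222.103
```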

3. Create a PVC and a Pod that mounts it

$ vim pod-vol-pvc.yaml
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: mypvc
    namespace: default
  spec:
    accessModes: ["ReadWriteMany"]
    resources:
      requests:
        storage: 6Gi
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pvc-pod
    namespace: default
  spec:
    containers:
    - name: pvc-container
      image: ikubernetes/myapp:v1
      imagePullPolicy: IfNotPresent
      ports:
      - name: http
        containerPort: 80
      volumeMounts:
      - name: disk-pvc
        mountPath: /usr/share/nginx/html/
      lifecycle:
        postStart:
          exec:
            command: ["/bin/sh","-c","echo $(hostname) > /usr/share/nginx/html/index.html"]
    volumes:
    - name: disk-pvc
      persistentVolumeClaim:
        claimName: mypvc
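After applying the manifest, the claim should bind to one of the ReadWriteMany PVs with at least 6Gi of capacity (pv003, pv004, or pv005 from the list above). A quick verification sketch:

```shell
kubectl apply -f pod-vol-pvc.yaml
# STATUS should be Bound, with the chosen PV in the VOLUME column
kubectl get pvc mypvc
# Once the pod is Running, the postStart hook has written the hostname into index.html
kubectl get pod pvc-pod -o wide
curl <pod-ip>    # placeholder: substitute the IP printed by the previous command
```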

StorageClass: dynamic PV provisioning
When a PVC tries to bind, if no PV matching the PVC's requirements is found, the PVC stays in Pending status until one appears. This raises a problem: if we do not know in advance how much storage the PVCs requested by our Pods will need,
we cannot pre-create suitable PVs, so PVC-to-PV binding can never happen.

For this situation Kubernetes provides the StorageClass feature. Storage can be grouped into classes by whatever criteria you define, for example local storage in one class, cloud storage in another, NFS in a third, and so on. Once the classes are defined,
a PVC no longer requests a specific PV; instead it binds against a storage class. For example, with storage classes A and B, when a PVC requests space from class A, class A dynamically provisions a PV for the PVC to use. (The backing storage must expose a RESTful-style management interface for this to work.)
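As a sketch, a StorageClass plus a PVC that requests from it might look like this (the provisioner name assumes the community nfs-subdir-external-provisioner has been deployed in the cluster; built-in cloud provisioners work the same way):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage                      # illustrative name
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner   # assumption: this provisioner is installed
reclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc-dynamic                    # illustrative name
spec:
  storageClassName: nfs-storage          # bind via the class, not a specific PV
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 6Gi
```

When this PVC is applied, the provisioner creates a matching PV on demand instead of waiting for a pre-created one.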

posted @ 2020-06-18 10:32 by k-free