k8s Storage Volumes

 
Applications deployed in k8s run as pod containers. If we deploy databases such as MySQL or Redis, the data they produce must be preserved. A pod has a lifecycle: if it does not mount a volume, any data it writes is lost when the pod is deleted or restarted. To keep that data long-term, the pod needs persistent storage. The available volume types can be listed with kubectl explain:
[root@master-1 service]#  kubectl explain pods.spec.volumes
KIND:     Pod
VERSION:  v1

RESOURCE: volumes <[]Object>

DESCRIPTION:
     List of volumes that can be mounted by containers belonging to the pod.
     More info: https://kubernetes.io/docs/concepts/storage/volumes

     Volume represents a named volume in a pod that may be accessed by any
     container in the pod.

FIELDS:
   awsElasticBlockStore	<Object>
     AWSElasticBlockStore represents an AWS Disk resource that is attached to a
     kubelet's host machine and then exposed to the pod. More info:
     https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore

   azureDisk	<Object>
     AzureDisk represents an Azure Data Disk mount on the host and bind mount to
     the pod.

   azureFile	<Object>
     AzureFile represents an Azure File Service mount on the host and bind mount
     to the pod.

   cephfs	<Object>
     CephFS represents a Ceph FS mount on the host that shares a pod's lifetime

   cinder	<Object>
     Cinder represents a cinder volume attached and mounted on kubelets host
     machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md

   configMap	<Object>
     ConfigMap represents a configMap that should populate this volume

   csi	<Object>
     CSI (Container Storage Interface) represents ephemeral storage that is
     handled by certain external CSI drivers (Beta feature).

   downwardAPI	<Object>
     DownwardAPI represents downward API about the pod that should populate this
     volume

   emptyDir	<Object>
     EmptyDir represents a temporary directory that shares a pod's lifetime.
     More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir

   ephemeral	<Object>
     Ephemeral represents a volume that is handled by a cluster storage driver
     (Alpha feature). The volume's lifecycle is tied to the pod that defines it
     - it will be created before the pod starts, and deleted when the pod is
     removed.

     Use this if: a) the volume is only needed while the pod runs, b) features
     of normal volumes like restoring from snapshot or capacity tracking are
     needed, c) the storage driver is specified through a storage class, and d)
     the storage driver supports dynamic volume provisioning through a
     PersistentVolumeClaim (see EphemeralVolumeSource for more information on
     the connection between this volume type and PersistentVolumeClaim).

     Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes
     that persist for longer than the lifecycle of an individual pod.

     Use CSI for light-weight local ephemeral volumes if the CSI driver is meant
     to be used that way - see the documentation of the driver for more
     information.

     A pod can use both types of ephemeral volumes and persistent volumes at the
     same time.

   fc	<Object>
     FC represents a Fibre Channel resource that is attached to a kubelet's host
     machine and then exposed to the pod.

   flexVolume	<Object>
     FlexVolume represents a generic volume resource that is
     provisioned/attached using an exec based plugin.

   flocker	<Object>
     Flocker represents a Flocker volume attached to a kubelet's host machine.
     This depends on the Flocker control service being running

   gcePersistentDisk	<Object>
     GCEPersistentDisk represents a GCE Disk resource that is attached to a
     kubelet's host machine and then exposed to the pod. More info:
     https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk

   gitRepo	<Object>
     GitRepo represents a git repository at a particular revision. DEPRECATED:
     GitRepo is deprecated. To provision a container with a git repo, mount an
     EmptyDir into an InitContainer that clones the repo using git, then mount
     the EmptyDir into the Pod's container.

   glusterfs	<Object>
     Glusterfs represents a Glusterfs mount on the host that shares a pod's
     lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md

   hostPath	<Object>
     HostPath represents a pre-existing file or directory on the host machine
     that is directly exposed to the container. This is generally used for
     system agents or other privileged things that are allowed to see the host
     machine. Most containers will NOT need this. More info:
     https://kubernetes.io/docs/concepts/storage/volumes#hostpath

   iscsi	<Object>
     ISCSI represents an ISCSI Disk resource that is attached to a kubelet's
     host machine and then exposed to the pod. More info:
     https://examples.k8s.io/volumes/iscsi/README.md

   name	<string> -required-
     Volume's name. Must be a DNS_LABEL and unique within the pod. More info:
     https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names

   nfs	<Object>
     NFS represents an NFS mount on the host that shares a pod's lifetime More
     info: https://kubernetes.io/docs/concepts/storage/volumes#nfs

   persistentVolumeClaim	<Object>
     PersistentVolumeClaimVolumeSource represents a reference to a
     PersistentVolumeClaim in the same namespace. More info:
     https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims

   photonPersistentDisk	<Object>
     PhotonPersistentDisk represents a PhotonController persistent disk attached
     and mounted on kubelets host machine

   portworxVolume	<Object>
     PortworxVolume represents a portworx volume attached and mounted on
     kubelets host machine

   projected	<Object>
     Items for all in one resources secrets, configmaps, and downward API

   quobyte	<Object>
     Quobyte represents a Quobyte mount on the host that shares a pod's lifetime

   rbd	<Object>
     RBD represents a Rados Block Device mount on the host that shares a pod's
     lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md

   scaleIO	<Object>
     ScaleIO represents a ScaleIO persistent volume attached and mounted on
     Kubernetes nodes.

   secret	<Object>
     Secret represents a secret that should populate this volume. More info:
     https://kubernetes.io/docs/concepts/storage/volumes#secret

   storageos	<Object>
     StorageOS represents a StorageOS volume attached and mounted on Kubernetes
     nodes.

   vsphereVolume	<Object>
     VsphereVolume represents a vSphere volume attached and mounted on kubelets
     host machine

  

Pod with an emptyDir volume

An emptyDir volume is a temporary directory that shares the pod's lifetime: it is created when the pod is scheduled to a node and deleted together with the pod.
[root@master-1 pods]# cat pod-1.yaml 
apiVersion: v1
kind: Pod
metadata: 
  name: test-pod
spec:
  containers: 
  - name: cx-test
    image: nginx
    volumeMounts: 
    - mountPath: /cache
      name: cache # must match the volume name below
  volumes:
  - emptyDir: {} # temporary empty directory, removed with the pod
    name: cache # volume name
[root@master-1 pods]# kubectl apply -f pod-1.yaml 
pod/test-pod created
Enter the container (the pod name follows kubectl exec -it; -c selects the container):
[root@master-1 pods]# kubectl exec -it test-pod -c cx-test -- /bin/sh 
# ls /
bin  boot  cache  dev  docker-entrypoint.d  docker-entrypoint.sh  etc  home  lib  lib64  media	mnt  opt  proc	root  run  sbin  srv  sys  tmp	usr  var
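An emptyDir volume can also be backed by RAM instead of node disk; medium and sizeLimit are standard emptyDir fields. A minimal sketch (the pod name test-pod-mem is made up for this example):

apiVersion: v1
kind: Pod
metadata:
  name: test-pod-mem # hypothetical name
spec:
  containers:
  - name: cx-test
    image: nginx
    volumeMounts:
    - mountPath: /cache
      name: cache
  volumes:
  - name: cache
    emptyDir:
      medium: Memory # back the volume with tmpfs; usage counts against container memory
      sizeLimit: 64Mi # the pod is evicted if the volume grows beyond this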

hostPath volumes: persistent storage on the node

A hostPath volume mounts a file or directory from the host node into the pod. The data survives pod restarts, but it is tied to that particular node.

[root@master-1 pods]# cat hostPath-pod.yaml 
apiVersion: v1
kind: Pod
metadata: 
  name: test-hostpath
spec:
  containers: 
  - name: test-nginx-2
    image: nginx
    imagePullPolicy: IfNotPresent
  - name: cx-test-2
    image: tomcat
    imagePullPolicy: IfNotPresent
    volumeMounts: 
    - mountPath: /hostpath
      name: hostpath-test # must match the volume name below
  volumes:
  - name: hostpath-test # volume name
    hostPath: 
      path: /k8s/hostpath # this directory must already exist on the host (type: Directory)
      type: Directory # see the list of type values below

Supported hostPath type values:
  "" (empty string, the default): for backward compatibility; no check is performed before mounting the volume.
  DirectoryOrCreate: if nothing exists at the given path, an empty directory is created there, with permissions 0755 and the same group and ownership as the kubelet.
  Directory: a directory must exist at the given path.
  FileOrCreate: if nothing exists at the given path, an empty file is created there, with permissions 0644 and the same group and ownership as the kubelet.
  File: a file must exist at the given path.
  Socket: a UNIX socket must exist at the given path.
  CharDevice: a character device must exist at the given path.
  BlockDevice: a block device must exist at the given path.
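If pre-creating the directory on every node is undesirable, DirectoryOrCreate lets the kubelet create it on demand. A sketch of just the volumes section with that type:

  volumes:
  - name: hostpath-test
    hostPath: 
      path: /k8s/hostpath
      type: DirectoryOrCreate # kubelet creates the directory (mode 0755) if it is missing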

[root@node-1 ~]# mkdir -pv /k8s/hostpath
mkdir: created directory "/k8s"
mkdir: created directory "/k8s/hostpath"
[root@master-1 pods]# kubectl apply -f hostPath-pod.yaml 
pod/test-hostpath created
[root@master-1 pods]# kubectl exec -it test-hostpath -c  cx-test-2 -- /bin/sh 

# ls
BUILDING.txt  CONTRIBUTING.md  LICENSE	NOTICE	README.md  RELEASE-NOTES  RUNNING.txt  bin  conf  lib  logs  native-jni-lib  temp  webapps  webapps.dist  work
# cd /
# ls
bin  boot  dev	etc  home  hostpath  lib  lib64  media	mnt  opt  proc	root  run  sbin  srv  sys  tmp	usr  var
# cd hostpath
# echo "123" > 1.txt
# ls
1.txt
# 
Check the file on the node:
[root@node-1 hostpath]# ls
1.txt
[root@node-1 hostpath]# pwd
/k8s/hostpath
[root@node-1 hostpath]# cat 1.txt 
123

Reference: https://kubernetes.io/zh-cn/docs/concepts/storage/volumes/#hostpath

Install NFS and test it

[root@master-2 ~]# yum -y install nfs-utils
[root@master-2 ~]# mkdir /data/volumes
[root@master-2 ~]# cat /etc/exports
/data/volumes 192.168.10.0/24(rw,sync,no_root_squash,no_all_squash)
[root@master-2 ~]# exportfs  -arv
exporting 192.168.10.0/24:/data/volumes
[root@master-2 ~]# 
[root@master-2 ~]# 
[root@master-2 ~]# 
[root@master-2 ~]# systemctl start nfs
[root@master-2 ~]# systemctl restart nfs
[root@master-2 ~]# systemctl enable nfs
Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.
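Before mounting, the export can be checked from a node (showmount ships with nfs-utils; the comment describes the result expected given /etc/exports above):

[root@node-1 ~]# showmount -e 192.168.10.30 # should list /data/volumes for 192.168.10.0/24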
Test-mount from a node (the mountpoint /k8s/nfs-1 must exist on the node):
[root@node-1 hostpath]# mount -t nfs 192.168.10.30:/data/volumes /k8s/nfs-1
[root@node-1 hostpath]# df -h 
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/centos-root       17G  3.6G   14G   21% /
devtmpfs                     1.9G     0  1.9G    0% /dev
tmpfs                        1.9G     0  1.9G    0% /dev/shm
tmpfs                        1.9G  187M  1.7G   10% /run
tmpfs                        1.9G     0  1.9G    0% /sys/fs/cgroup
/dev/sda1                   1014M  146M  869M   15% /boot
tmpfs                        378M     0  378M    0% /run/user/0
tmpfs                        1.9G   12K  1.9G    1% /var/lib/kubelet/pods/8ea06dc6-1b1b-4c7b-8937-0c2f0561d932/volumes/kubernetes.io~secret/calico-node-token-qmnn4
overlay                       17G  3.6G   14G   21% /var/lib/docker/overlay2/4d9f6023d9c8bf17a66b26c6946ead4324f9396abec51a1181b574f4f624a6d5/merged
shm                           64M     0   64M    0% /var/lib/docker/containers/d0821c5a8a136326b9b9e983cb55a5b95e69617155d9464e68641b2b07c46090/mounts/shm
tmpfs                        1.9G   12K  1.9G    1% /var/lib/kubelet/pods/154cc762-0b2e-4253-86fd-24247af1d81a/volumes/kubernetes.io~secret/calico-kube-controllers-token-2ll5q
overlay                       17G  3.6G   14G   21% /var/lib/docker/overlay2/580a3cdb72da3cb511d77af5cd1b86f8fbed6396882c2d2764dd1b419c547a02/merged
overlay                       17G  3.6G   14G   21% /var/lib/docker/overlay2/d85f7a5cab2123f9bdb753acb2c76ad27d32824b3759cc444fc7ba705149e564/merged
shm                           64M     0   64M    0% /var/lib/docker/containers/b149d593cd068355b5bf623b548f8e3ae702a78aa34fe7a36dd4728aac3519c5/mounts/shm
overlay                       17G  3.6G   14G   21% /var/lib/docker/overlay2/9b1c5754f11aafa40bc13ceec0829e6d7ffe06af2abc88f8c7233b7c14fdab31/merged
tmpfs                        1.9G   12K  1.9G    1% /var/lib/kubelet/pods/56b81310-2355-408b-8342-c1c1aafe30f2/volumes/kubernetes.io~secret/coredns-token-6f48b
overlay                       17G  3.6G   14G   21% /var/lib/docker/overlay2/3b89f3d27813a70a863321c7a7c57f05d86a251120381fd5586df99994637d0a/merged
shm                           64M     0   64M    0% /var/lib/docker/containers/a676bc184af981c89063a9c7b956718a3745921c47b5f92cba6d2c714c71077b/mounts/shm
overlay                       17G  3.6G   14G   21% /var/lib/docker/overlay2/13db8c5613d3e7420ed0a835bfe8569a669227f7fbf61927b435c70fda1c3851/merged
tmpfs                        1.9G   12K  1.9G    1% /var/lib/kubelet/pods/5ead2b68-2463-4786-9e53-79a6018ffa53/volumes/kubernetes.io~secret/default-token-zvvq8
tmpfs                        1.9G   12K  1.9G    1% /var/lib/kubelet/pods/f499f9c5-815c-471c-891f-0bfd0301d3a4/volumes/kubernetes.io~secret/default-token-zvvq8
overlay                       17G  3.6G   14G   21% /var/lib/docker/overlay2/b48154abdaf4671c305a39ea33ef7f00c63d12d86fe972ae7f8d4608c68b9ae5/merged
overlay                       17G  3.6G   14G   21% /var/lib/docker/overlay2/9276a0ede63711935b2019c3de7458ad1db1d5a355d5aca75db813cb8f49a1bc/merged
shm                           64M     0   64M    0% /var/lib/docker/containers/d37d9954112fb097f4b4d8bf43d8c89a269bc6a01741d0e6870e24cf27e14a28/mounts/shm
shm                           64M     0   64M    0% /var/lib/docker/containers/4f1c5da957a533f24a4ad850a11fa4660215a732fe4af6d26aa987fe05ac51f8/mounts/shm
overlay                       17G  3.6G   14G   21% /var/lib/docker/overlay2/881dbcad7c52a4f5c0c6f883f1c8906d1fa94fa5af073c369c56a93c866505d4/merged
overlay                       17G  3.6G   14G   21% /var/lib/docker/overlay2/22fad7ba89092d9492ba2eadbe58bbcb744a4215ec54335887641401c6411945/merged
tmpfs                        1.9G   12K  1.9G    1% /var/lib/kubelet/pods/1e79e714-e02a-469c-897c-8f1c7ea32bdb/volumes/kubernetes.io~secret/default-token-zvvq8
overlay                       17G  3.6G   14G   21% /var/lib/docker/overlay2/7090a42d196ea106195781ad4540f5426f80ba62cad2c882ae2ff488ec895a7c/merged
shm                           64M     0   64M    0% /var/lib/docker/containers/3d4fd2fdf451d04723524674d867b7e80dbe1637837d3632ff4d005c851e3fe5/mounts/shm
overlay                       17G  3.6G   14G   21% /var/lib/docker/overlay2/690c1a2c4429ced59b00529da77b3b5706297e4bb209c5540de7eef8b15b32fd/merged
tmpfs                        1.9G   12K  1.9G    1% /var/lib/kubelet/pods/86d26b9a-5b8a-4298-82bc-5a921dea04f3/volumes/kubernetes.io~secret/default-token-zvvq8
overlay                       17G  3.6G   14G   21% /var/lib/docker/overlay2/adf7001b5d91b4601ad3951f43fe2ff1e3131277760f0c2c94d10d0d2f5664eb/merged
shm                           64M     0   64M    0% /var/lib/docker/containers/f8f1117e1997fab047fd7a685dfbad24ebb2e19753efdfb45a9c6070c7d5c3a3/mounts/shm
overlay                       17G  3.6G   14G   21% /var/lib/docker/overlay2/36f33e9b755a2edffb5922541d42755a9895526273e5b7253d5db3e112fb0f2d/merged
overlay                       17G  3.6G   14G   21% /var/lib/docker/overlay2/56f69eb1defb9c3555c4a7bc06415bfb44c3ff11403b04fee0fef431a79ba189/merged
192.168.10.30:/data/volumes   17G  2.4G   15G   14% /k8s/nfs-1
Unmount:
[root@node-1 hostpath]# umount /k8s/nfs-1

Pod with an NFS volume

An nfs volume mounts an exported NFS share into the pod, so the data lives on the NFS server and survives pod restarts and rescheduling.

[root@master-1 pods]# cat pod-nfs.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-nfs-1
spec:
  containers:
  - name: test-nfs
    image: nginx
    ports:
    - containerPort: 80
      protocol: TCP
    volumeMounts:
    - name: nfs-volumes
      mountPath: /data
  volumes:
  - name: nfs-volumes
    nfs:
     path: /data/volumes # shared directory exported by the NFS server
     server: 192.168.10.30
[root@master-1 pods]# kubectl apply -f pod-nfs.yaml 
On the NFS server, put a test file into the exported directory:
[root@master-2 volumes]# cp /var/log/messages .
[root@master-2 volumes]# ls
messages
[root@master-2 volumes]# ls -l
total 123788
-rw------- 1 root root 126757380 Jan 18 10:46 messages
Back on master-1, the file is visible inside the pod:
[root@master-1 pods]#  kubectl exec -it pod-nfs-1 -c test-nfs /bin/sh  # enter the container
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.

# cd /data
# ls -l
total 123788
-rw------- 1 root root 126757380 Jan 18 02:46 messages
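If the pod should not modify the share, the NFS volume can be mounted read-only; readOnly is a standard field of the nfs volume source. A sketch of just the volumes section from pod-nfs.yaml:

  volumes:
  - name: nfs-volumes
    nfs:
      path: /data/volumes
      server: 192.168.10.30
      readOnly: true # expose the share read-only inside the pod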

PV and PVC storage (a PV is roughly analogous to a physical disk)

Persistent Volumes

This section describes persistent volumes (PersistentVolume) in Kubernetes. Familiarity with the Volume concept is recommended first.

Introduction

Managing storage is a distinct problem from managing compute instances. The PersistentVolume subsystem provides an API for users and administrators that abstracts the details of how storage is provided from how it is consumed. To do this, two API resources are introduced: PersistentVolume and PersistentVolumeClaim.

A persistent volume (PersistentVolume, PV) is a piece of storage in the cluster that can be provisioned in advance by an administrator or dynamically provisioned using a storage class (StorageClass). A PV is a cluster resource, just as a node is. Like ordinary Volumes, PVs are implemented with volume plugins, but they have a lifecycle independent of any pod that uses them. This API object captures the details of the storage implementation, whether that is NFS, iSCSI, or a cloud-provider-specific storage system.

A persistent volume claim (PersistentVolumeClaim, PVC) expresses a user's request for storage. It is conceptually similar to a pod: pods consume node resources, while PVCs consume PV resources. Pods can request specific amounts of resources (CPU and memory); likewise, a PVC can request a specific size and access modes (for example, it can require the PV to be mounted ReadWriteOnce, ReadOnlyMany, or ReadWriteMany; see access modes).

Although PersistentVolumeClaims let users consume abstract storage resources, users commonly need PersistentVolumes with different properties (such as performance) for different problems. Cluster administrators must be able to offer PersistentVolumes that differ in more than size and access modes, without exposing users to the details of how those volumes are implemented. The StorageClass resource exists to meet these needs.

A persistentVolumeClaim volume is used to mount a PersistentVolume into a pod. A PersistentVolumeClaim is a way for users to "claim" durable storage (such as a GCE PersistentDisk or an iSCSI volume) without knowing the details of the particular cloud environment.
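This post uses statically provisioned PVs throughout, but for reference, dynamic provisioning is driven by a StorageClass such as the sketch below. Treat every field as an assumption to adapt: the provisioner value is a placeholder and must name a provisioner actually installed in your cluster (for NFS that is typically an external provisioner).

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-dynamic # hypothetical name
provisioner: example.com/external-nfs # placeholder, not a real driver
reclaimPolicy: Delete # delete dynamically created PVs when their PVC goes away
volumeBindingMode: Immediate # bind as soon as the PVC is created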

Create shared directories on the NFS server and export them:

[root@master-2 volumes]#  mkdir -pv /data/volume/v{1,2,3,4,5,6,7,8,9,10}
mkdir: created directory "/data/volume"
mkdir: created directory "/data/volume/v1"
mkdir: created directory "/data/volume/v2"
mkdir: created directory "/data/volume/v3"
mkdir: created directory "/data/volume/v4"
mkdir: created directory "/data/volume/v5"
mkdir: created directory "/data/volume/v6"
mkdir: created directory "/data/volume/v7"
mkdir: created directory "/data/volume/v8"
mkdir: created directory "/data/volume/v9"
mkdir: created directory "/data/volume/v10"
[root@master-2 volumes]# cat /etc/exports
/data/volumes 192.168.10.0/24(rw,sync,no_root_squash,no_all_squash)
/data/volume/v1 192.168.10.0/24(rw,sync,no_root_squash,no_all_squash)
/data/volume/v2 192.168.10.0/24(rw,sync,no_root_squash,no_all_squash)
/data/volume/v3 192.168.10.0/24(rw,sync,no_root_squash,no_all_squash)
/data/volume/v4 192.168.10.0/24(rw,sync,no_root_squash,no_all_squash)
/data/volume/v5 192.168.10.0/24(rw,sync,no_root_squash,no_all_squash)
/data/volume/v6 192.168.10.0/24(rw,sync,no_root_squash,no_all_squash)
/data/volume/v7 192.168.10.0/24(rw,sync,no_root_squash,no_all_squash)
/data/volume/v8 192.168.10.0/24(rw,sync,no_root_squash,no_all_squash)
/data/volume/v9 192.168.10.0/24(rw,sync,no_root_squash,no_all_squash)
/data/volume/v10 192.168.10.0/24(rw,sync,no_root_squash,no_all_squash)
[root@master-2 volumes]# exportfs  -arv
[root@master-2 volumes]# systemctl restart nfs

Create the PVs

Access modes (referenced in the manifests below):
  ReadWriteOnce: the volume can be mounted read-write by a single node; this mode still allows multiple pods on that node to access the volume.
  ReadOnlyMany: the volume can be mounted read-only by many nodes.
  ReadWriteMany: the volume can be mounted read-write by many nodes.
  ReadWriteOncePod: the volume can be mounted read-write by a single pod; use this to ensure that only one pod in the whole cluster can read or write the PVC. Supported only for CSI volumes, and requires Kubernetes 1.22 or later.

[root@master-1 pods]# cat pv.yaml 
apiVersion: v1
kind: PersistentVolume
metadata: 
  name: pv-v1
spec: 
  capacity:
    storage: 5Gi
  accessModes: ["ReadWriteOnce"] # see the access-mode list above
  nfs: 
    path: /data/volume/v1
    server: 192.168.10.30
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-v2
spec:
  capacity:
    storage: 5Gi
  accessModes: ["ReadOnlyMany"] 
  nfs:
    path: /data/volume/v2
    server: 192.168.10.30
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-v3
spec:
  capacity:
    storage: 5Gi
  accessModes: ["ReadWriteOnce","ReadOnlyMany"]
  nfs:
    path: /data/volume/v3
    server: 192.168.10.30

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-v4
spec:
  capacity:
    storage: 5Gi
  accessModes: ["ReadWriteOnce","ReadOnlyMany"]
  nfs:
    path: /data/volume/v4
    server: 192.168.10.30

[root@master-1 pods]# kubectl apply -f pv.yaml 
persistentvolume/pv-v1 unchanged
persistentvolume/pv-v2 unchanged
persistentvolume/pv-v3 unchanged
persistentvolume/pv-v4 created
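Statically created PVs default to the Retain reclaim policy: when the claim is released, the PV object and its data are kept and must be cleaned up manually. The policy can also be set explicitly; a sketch of pv-v1 from above with the extra field:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-v1
spec:
  capacity:
    storage: 5Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain # keep the PV and its data after the PVC is deleted
  nfs:
    path: /data/volume/v1
    server: 192.168.10.30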

Create the PVC

[root@master-1 pods]# cat pvc.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cx-data
spec:
  accessModes: ["ReadOnlyMany"]      # requested access mode
  resources:   # requested size
    requests:
      storage: 3Gi
[root@master-1 pods]# kubectl apply -f pvc.yaml 
persistentvolumeclaim/cx-data created
[root@master-1 pods]# kubectl get pvc
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
cx-data   Bound    pv-v2    5Gi        ROX                           7s
[root@master-1 pods]# kubectl describe pvc cx-data
Name:          cx-data
Namespace:     default
StorageClass:  
Status:        Bound
Volume:        pv-v2
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      5Gi
Access Modes:  ROX
VolumeMode:    Filesystem
Used By:       <none>
Events:        <none>
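The claim requested 3Gi with ReadOnlyMany and was bound to pv-v2, a PV whose access modes include ReadOnlyMany and whose 5Gi capacity satisfies the request. To pin a claim to a specific PV instead of letting the controller choose, spec.volumeName can be set; a sketch reusing the names from above:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cx-data
spec:
  accessModes: ["ReadOnlyMany"]
  volumeName: pv-v2 # bind this claim to a specific PV by name
  resources:
    requests:
      storage: 3Gi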

Create a pod that mounts the PVC

[root@master-1 pods]# cat pod-pvc.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-pvc
spec:
  containers:
  - name: cx-ng-pvc
    image: nginx
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: ng-cx-pvc
      mountPath: /data
  volumes:
  - name: ng-cx-pvc
    persistentVolumeClaim:
      claimName: cx-data
      
[root@master-1 pods]# kubectl apply -f pod-pvc.yaml 
pod/pod-pvc created
[root@master-1 pods]# kubectl get pod
NAME                       READY   STATUS    RESTARTS   AGE
my-test-59ff9d548c-782jd   1/1     Running   0          45h
my-test-59ff9d548c-hcv6p   1/1     Running   0          45h
pod-nfs-1                  1/1     Running   0          98m
pod-pvc                    1/1     Running   0          12s
test-hostpath              2/2     Running   0          19h
test-pod                   1/1     Running   0          19h
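To verify the whole chain, you could write a file on the NFS server side and read it back through the pod (these commands are assembled from the names above, not taken from the original session):

[root@master-2 ~]# echo "hello pvc" > /data/volume/v2/hello.txt   # pv-v2 backs the cx-data claim
[root@master-1 pods]# kubectl exec -it pod-pvc -c cx-ng-pvc -- cat /data/hello.txt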
