Kubernetes Data Persistence
A Pod's local filesystem is ephemeral, so stateful applications need to persist their data somewhere that outlives the Pod. There are two basic approaches:
1: Mount the data on the host (hostPath). The data survives a restart, but a restarted Pod may be scheduled onto a different node, so even though nothing is lost, the Pod may no longer be able to find it (a workaround sketch follows the manifest):
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    name: busybox
spec:
  containers:
  - image: busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
    volumeMounts:
    - mountPath: /busybox-data
      name: data
  volumes:
  - hostPath:
      path: /tmp/data
    name: data
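One partial workaround is to pin the Pod to a fixed node so it always lands next to its hostPath data. A minimal sketch; the node name node-1 is an assumption, substitute a value from your own cluster:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-pinned
spec:
  nodeSelector:
    kubernetes.io/hostname: node-1   # assumed node name; pick one from `kubectl get nodes`
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - mountPath: /busybox-data
      name: data
  volumes:
  - name: data
    hostPath:
      path: /tmp/data

This trades away scheduling flexibility, which is why external storage (next) is usually the better answer.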
2: Mount external storage, such as NFS:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    app: web01
  template:
    metadata:
      name: nginx
      labels:
        app: web01
    spec:
      containers:
      - name: nginx
        image: reg.docker.tb/harbor/nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          readOnly: false
          name: nginx-data
      volumes:
      - name: nginx-data
        nfs:
          server: 10.0.10.31
          path: "/data/www-data"
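Since both replicas mount the same export, a quick way to convince yourself the data is shared is to write through one Pod and read through the other. The Pod names below are illustrative; take real ones from kubectl get pods:

# Write a file via the first replica...
kubectl exec nginx-replica-1 -- sh -c 'echo hello > /usr/share/nginx/html/test.html'
# ...and read it back via the second; both see the same NFS-backed directory.
kubectl exec nginx-replica-2 -- cat /usr/share/nginx/html/test.html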
The manifests above are the simple approach: the concrete storage is defined directly in the Pod or controller spec. That creates several problems:
1: Access control: any Pod can touch any path on the share.
2: Capacity: there is no way to put a size limit on an individual piece of storage.
3: Coupling: if the NFS server address changes, every manifest that references it has to be edited.
To solve these problems, Kubernetes introduces the PersistentVolume (PV) and PersistentVolumeClaim (PVC) pair.
First create a PV. A PV is cluster-scoped (it does not belong to any namespace) and declares the volume's capacity and access modes:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
  labels:
    app: "my-nfs"          # the PVC below selects this PV by this label
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle   # Recycle is deprecated in newer Kubernetes; Retain or Delete are preferred
  nfs:
    path: "/data/disk1"
    server: 192.168.20.47
    readOnly: false
Then, in the corresponding namespace, create a PVC; its selector picks out the PV by label:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      app: "my-nfs"        # bind only to PVs carrying this label
Then create the PV and the PVC with kubectl apply.
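A sketch of the commands, assuming the two manifests above were saved as pv.yaml and pvc.yaml (the file names are assumptions):

kubectl apply -f pv.yaml
kubectl apply -f pvc.yaml
# The claim should report STATUS=Bound once it has matched the PV
kubectl get pv pv0001
kubectl get pvc nfs-pvc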
Finally, mount the PVC in the application:
apiVersion: v1
kind: Pod
metadata:
  name: test-nfs-pvc
  labels:
    name: test-nfs-pvc
spec:
  containers:
  - name: test-nfs-pvc
    image: registry:5000/back_demon:1.0
    ports:
    - name: backdemon
      containerPort: 80
    command:
    - /run.sh
    volumeMounts:
    - name: nfs-vol
      mountPath: /home/laizy/test/nfs-pvc
  volumes:
  - name: nfs-vol
    persistentVolumeClaim:
      claimName: nfs-pvc
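To confirm the claim actually got mounted, one quick check is to look at the mount point from inside the container (the mount path comes from the manifest above):

kubectl exec test-nfs-pvc -- df -h /home/laizy/test/nfs-pvc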
This makes it easy to confine each claim to its own subdirectory of the export, and if the NFS server is ever migrated, only the server address in the PV definitions needs to change; the PVCs and the Pods that use them stay untouched.
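For example, a second application can be given its own PV pointing at a different subdirectory of the same export; the names and path below are illustrative:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0002
  labels:
    app: "my-nfs-app2"   # a second claim would select this PV via its own label
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: "/data/disk2"  # a different subdirectory of the same export
    server: 192.168.20.47

If the NFS server moves, only the server: fields in these PVs change.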