Kubernetes ---- Storage Volumes (emptyDir, hostPath, NFS)
Storage Volumes
Pods have a lifecycle: when a Pod fails and is terminated, its data ends with it.
In a k8s cluster we should therefore use shared storage that is detached from any single node.
If we persisted data the way plain Docker does (bind-mounting a host directory), a rebuilt Pod could never move to another node; otherwise the mounted directory would no longer be reachable.
Available volume types:
1. emptyDir: a temporary directory; it is deleted along with the Pod.
2. hostPath: a host path; a directory on the node is associated with the Pod directly.
Network storage volumes:
Storage devices detached from the node's local disks:
1. SAN: iSCSI, ...
2. NAS: NFS, CIFS
Distributed storage:
1. GlusterFS
2. cephfs: Ceph filesystem storage
3. rbd: Ceph block storage
Cloud storage:
1. EBS (AWS): Elastic Block Store
2. Azure Disk (Microsoft)
# List the volume types supported by k8s
$ kubectl explain pods.spec.volumes
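You can drill further into any specific volume type the same way, for example:
$ kubectl explain pods.spec.volumes.emptyDir
$ kubectl explain pods.spec.volumes.nfs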
pods.spec.volumes.emptyDir <Object>
An emptyDir volume is created when the Pod is assigned to a node: Kubernetes automatically allocates a directory on that node, so no host directory needs to be specified. The directory starts out empty,
and when the Pod is removed from the node, the data in the emptyDir is deleted permanently.
# The following demonstrates that containers in the same Pod share storage: the two containers mount the volume at different paths, yet data created by one appears under both paths, because the underlying directory is the same.
$ vim pod-vol-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-vol-demo
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
    volumeMounts:
    - name: disk1
      mountPath: /usr/share/nginx/html/
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: disk1
      mountPath: /data/
    # sleep between writes so the loop does not spin at 100% CPU and grow the file uncontrollably
    command: ["/bin/sh","-c","while true; do echo $(date) >> /data/index.html; sleep 2; done"]
  volumes:
  - name: disk1
    emptyDir: {}
$ kubectl apply -f pod-vol-demo.yaml
$ kubectl get pods -o wide
NAME           READY   STATUS    RESTARTS   AGE   IP             NODE    NOMINATED NODE   READINESS GATES
pod-vol-demo   2/2     Running   0          43m   10.244.2.124   node2   <none>           <none>
$ curl 10.244.2.124
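The curl output is the list of timestamps that busybox keeps appending. As a further check that both containers share the volume, you can read the same file through either container (a quick sketch using the container names defined above):
$ kubectl exec pod-vol-demo -c busybox -- tail -1 /data/index.html
$ kubectl exec pod-vol-demo -c myapp -- tail -1 /usr/share/nginx/html/index.html
Both commands print the same line, since /data/ in busybox and /usr/share/nginx/html/ in myapp are the same emptyDir.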
pods.spec.volumes.hostPath <Object>   host path
Associates a directory in the filesystem of the Pod's host node with the Pod. The directory is not removed when the Pod is deleted, so if the Pod is later rescheduled onto the same node,
the data is still there. Whether a missing host directory is created automatically depends on the type field. Note that if the node itself goes down, the data is still lost.
path <string> -required-
type <string>
"" (empty string, the default): for backward compatibility; no checks are performed before mounting the hostPath volume.
DirectoryOrCreate: if nothing exists at the given path, an empty directory is created as needed, with permissions 0755 and the same group and ownership as the kubelet.
Directory: a directory must exist at the given path.
FileOrCreate: if nothing exists at the given path, an empty file is created as needed, with permissions 0644 and the same group and ownership as the kubelet.
File: a file must exist at the given path.
Socket: a UNIX socket must exist at the given path.
CharDevice: a character device must exist at the given path.
BlockDevice: a block device must exist at the given path.
$ vim pod-hostpath.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-hostpath
  namespace: default
spec:
  containers:
  - name: hostpath-container
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
    volumeMounts:
    - name: disk2
      mountPath: /usr/share/nginx/html/
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh","-c","echo $(hostname) > /usr/share/nginx/html/index.html"]
  volumes:
  - name: disk2
    hostPath:
      path: /data/pod/volume/pod-hostpath
      type: DirectoryOrCreate
$ kubectl apply -f pod-hostpath.yaml
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-hostpath 1/1 Running 0 4m34s 10.244.2.127 node2 <none> <none>
$ curl 10.244.2.127
pod-hostpath
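Since the claim above is that hostPath data outlives the Pod, a quick way to verify it (assuming the replacement Pod is scheduled onto node2 again; on a multi-node cluster you may need nodeName or a nodeSelector to pin it there):
$ kubectl delete -f pod-hostpath.yaml
$ kubectl apply -f pod-hostpath.yaml
You can also inspect the backing directory directly on node2:
# cat /data/pod/volume/pod-hostpath/index.html
pod-hostpath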
Configuring NFS shared storage for k8s:
1. Install the NFS service on a designated server to provide shared storage
nfs node:
# yum -y install nfs-utils
# mkdir -pv /data/volumes
# vim /etc/exports
/data/volumes 192.168.222.0/24(rw,no_root_squash)
# systemctl start nfs
# ss -nlt | grep 2049
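Before moving on, it is worth confirming that the export is actually published; either command below works (showmount can be run from any machine that has nfs-utils installed):
# exportfs -v
# showmount -e 192.168.222.103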
2. Manually test whether the k8s nodes can mount an NFS filesystem; if they cannot, fix that first.
node2:
# mount -t nfs 192.168.222.103:/data/volumes/ /mnt
mount: wrong fs type, bad option, bad superblock on 192.168.222.103:/data/volumes,
       missing codepage or helper program, or other error
       (for several filesystems (e.g. nfs, cifs) you might
       need a /sbin/mount.<type> helper program)

       In some cases useful info is found in syslog - try
       dmesg | tail or so.
# yum -y install nfs-utils
# mount -t nfs 192.168.222.103:/data/volumes /mnt
# mount | grep 192.168.222.103
# umount /mnt
3. Define a Pod to test
$ vim pod-nfs-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-nfs-vol
  namespace: default
spec:
  containers:
  - name: nfs-container
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
    volumeMounts:
    - name: nfs-disk
      mountPath: /usr/share/nginx/html/
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh","-c","echo $(hostname) > /usr/share/nginx/html/index.html"]
  volumes:
  - name: nfs-disk
    nfs:
      path: /data/volumes/
      server: 192.168.222.103
$ kubectl apply -f pod-nfs-demo.yaml
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-nfs-vol 1/1 Running 0 4m41s 10.244.2.129 node2 <none> <none>
$ curl 10.244.2.129
pod-nfs-vol
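Because the page now lives on the NFS server rather than inside the Pod, a change made on the nfs node is visible to the Pod immediately. A quick check (writing a new file so index.html stays intact; Pod IP as above):
nfs node:
# echo hello > /data/volumes/test.html
k8s master:
$ curl 10.244.2.129/test.html
hello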
3.2 Define a Deployment so that multiple Pods mount the NFS share at the same time
$ vim deploy-nfs-demo.yaml
apiVersion: v1
kind: Service
metadata:
  name: nfs-svc
  namespace: default
spec:
  selector:
    app: nfs-pod
    release: dev
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30180
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-deploy
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nfs-pod
      release: dev
  template:
    metadata:
      labels:
        app: nfs-pod
        release: dev
    spec:
      containers:
      - name: nfs-container
        image: ikubernetes/myapp:v2
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 80
        volumeMounts:
        - name: nfs-disk
          mountPath: /usr/share/nginx/html/
        lifecycle:
          postStart:
            exec:
              command: ["/bin/sh","-c","echo $(hostname) > /usr/share/nginx/html/index.html"]
        livenessProbe:
          exec:
            command: ["/bin/sh","-c","ps aux | grep nginx"]
          initialDelaySeconds: 2
          periodSeconds: 3
        readinessProbe:
          tcpSocket:
            port: 80
          initialDelaySeconds: 2
          periodSeconds: 3
      volumes:
      - name: nfs-disk
        nfs:
          path: /data/volumes/
          server: 192.168.222.103
$ kubectl apply -f deploy-nfs-demo.yaml
$ kubectl get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
nfs-svc NodePort 10.101.149.48 <none> 80:30180/TCP 12m app=nfs-pod,release=dev
From a client, access "http://192.168.222.101:30180"
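Since all three replicas mount the same NFS export, every request returns the same index.html no matter which Pod the Service routes to; the page holds the hostname of whichever Pod's postStart wrote it last. A quick check from any client:
$ for i in $(seq 4); do curl -s http://192.168.222.101:30180; done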
4. Log in to the nfs node and check whether the file exists in the export directory
# cat /data/volumes/index.html
pod-nfs-vol