Dynamic Storage Provisioning in Kubernetes with GlusterFS and Heketi
1. Install the Gluster client on each node (Heketi requires a GlusterFS cluster of at least three nodes)

Remove the master taint so the master node can also run GlusterFS pods:

    kubectl taint nodes --all node-role.kubernetes.io/master-
    kubectl describe node k8s        # verify the Taints field is now empty

Verify that kube-apiserver is running in privileged mode:

    ps -ef | grep kube | grep allow

Label every node so the DaemonSet below can schedule onto them:

    kubectl label node k8s storagenode=glusterfs
    kubectl label node k8s-node1 storagenode=glusterfs
    kubectl label node k8s-node2 storagenode=glusterfs

2. Run one GlusterFS management pod on every node

    cat glusterfs.yaml

    kind: DaemonSet
    apiVersion: extensions/v1beta1
    metadata:
      name: glusterfs
      labels:
        glusterfs: daemonset
      annotations:
        description: GlusterFS DaemonSet
        tags: glusterfs
    spec:
      template:
        metadata:
          name: glusterfs
          labels:
            glusterfs-node: pod
        spec:
          nodeSelector:
            storagenode: glusterfs
          hostNetwork: true
          containers:
          - image: gluster/gluster-centos:latest
            name: glusterfs
            volumeMounts:
            - name: glusterfs-heketi
              mountPath: "/var/lib/heketi"
            - name: glusterfs-run
              mountPath: "/run"
            - name: glusterfs-lvm
              mountPath: "/run/lvm"
            - name: glusterfs-etc
              mountPath: "/etc/glusterfs"
            - name: glusterfs-logs
              mountPath: "/var/log/glusterfs"
            - name: glusterfs-config
              mountPath: "/var/lib/glusterd"
            - name: glusterfs-dev
              mountPath: "/dev"
            - name: glusterfs-misc
              mountPath: "/var/lib/misc/glusterfsd"
            - name: glusterfs-cgroup
              mountPath: "/sys/fs/cgroup"
              readOnly: true
            - name: glusterfs-ssl
              mountPath: "/etc/ssl"
              readOnly: true
            securityContext:
              capabilities: {}
              privileged: true
            readinessProbe:
              timeoutSeconds: 3
              initialDelaySeconds: 60
              exec:
                command:
                - "/bin/bash"
                - "-c"
                - systemctl status glusterd.service
            livenessProbe:
              timeoutSeconds: 3
              initialDelaySeconds: 60
              exec:
                command:
                - "/bin/bash"
                - "-c"
                - systemctl status glusterd.service
          volumes:
          - name: glusterfs-heketi
            hostPath:
              path: "/var/lib/heketi"
          - name: glusterfs-run
          - name: glusterfs-lvm
            hostPath:
              path: "/run/lvm"
          - name: glusterfs-etc
            hostPath:
              path: "/etc/glusterfs"
          - name: glusterfs-logs
            hostPath:
              path: "/var/log/glusterfs"
          - name: glusterfs-config
            hostPath:
              path: "/var/lib/glusterd"
          - name: glusterfs-dev
            hostPath:
              path: "/dev"
          - name: glusterfs-misc
            hostPath:
              path: "/var/lib/misc/glusterfsd"
          - name: glusterfs-cgroup
            hostPath:
              path: "/sys/fs/cgroup"
          - name: glusterfs-ssl
            hostPath:
              path: "/etc/ssl"

    kubectl create -f glusterfs.yaml && kubectl describe pods <pod_name>
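After `kubectl create -f glusterfs.yaml`, it helps to confirm that one glusterfs pod is running on each labeled node. A minimal sketch; the labels `glusterfs-node=pod` and `storagenode=glusterfs` come from the manifest and commands above, and the helper name is illustrative:

```shell
#!/bin/sh
# Compare the number of running glusterfs DaemonSet pods against the
# number of nodes labeled storagenode=glusterfs; they should match.
daemonset_ready() {
  pods=$(kubectl get pods -l glusterfs-node=pod --field-selector=status.phase=Running --no-headers | wc -l)
  nodes=$(kubectl get nodes -l storagenode=glusterfs --no-headers | wc -l)
  [ "$pods" -eq "$nodes" ] && echo ready || echo "waiting ($pods/$nodes)"
}

# Usage (on a real cluster):
# daemonset_ready
```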
3. Deploy the Heketi service

Create a ServiceAccount:

    cat heketi-service.yaml

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: heketi-service-account

    kubectl create -f heketi-service.yaml

Deploy the Heketi service:

    cat heketi-svc.yaml

    ---
    kind: Deployment
    apiVersion: extensions/v1beta1
    metadata:
      name: deploy-heketi
      labels:
        glusterfs: heketi-deployment
        deploy-heketi: heketi-deployment
      annotations:
        description: Defines how to deploy Heketi
    spec:
      replicas: 1
      template:
        metadata:
          name: deploy-heketi
          labels:
            glusterfs: heketi-pod
            name: deploy-heketi
        spec:
          serviceAccountName: heketi-service-account
          containers:
          - image: heketi/heketi
            imagePullPolicy: IfNotPresent
            name: deploy-heketi
            env:
            - name: HEKETI_EXECUTOR
              value: kubernetes
            - name: HEKETI_FSTAB
              value: "/var/lib/heketi/fstab"
            - name: HEKETI_SNAPSHOT_LIMIT
              value: '14'
            - name: HEKETI_KUBE_GLUSTER_DAEMONSET
              value: "y"
            ports:
            - containerPort: 8080
            volumeMounts:
            - name: db
              mountPath: "/var/lib/heketi"
            readinessProbe:
              timeoutSeconds: 3
              initialDelaySeconds: 3
              httpGet:
                path: "/hello"
                port: 8080
            livenessProbe:
              timeoutSeconds: 3
              initialDelaySeconds: 30
              httpGet:
                path: "/hello"
                port: 8080
          volumes:
          - name: db
            hostPath:
              path: "/heketi-data"
    ---
    kind: Service
    apiVersion: v1
    metadata:
      name: deploy-heketi
      labels:
        glusterfs: heketi-service
        deploy-heketi: support
      annotations:
        description: Exposes Heketi Service
    spec:
      selector:
        name: deploy-heketi
      ports:
      - name: deploy-heketi
        port: 8080
        targetPort: 8080

    kubectl create -f heketi-svc.yaml && kubectl get svc && kubectl get deployment
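Once the Deployment is up, Heketi's `/hello` endpoint makes a quick smoke test; it is the same path the readiness and liveness probes above use. A sketch (the host and port come from the Service manifest; the function name is illustrative):

```shell
#!/bin/sh
# Probe Heketi's health endpoint. Pass the Service name (inside the
# cluster) or localhost (after a kubectl port-forward).
heketi_alive() {
  # -f: fail on HTTP errors, -s: silent, -S: still report errors
  curl -fsS "http://$1:8080/hello"
}

# Usage:
# heketi_alive deploy-heketi
```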
Run kubectl describe pod deploy-heketi to see which node the Heketi pod is running on.
4. Install the Heketi client

    yum install -y centos-release-gluster
    yum install -y heketi heketi-client

    cat topology.json

    {
      "clusters": [
        {
          "nodes": [
            {
              "node": {
                "hostnames": {
                  "manage": ["k8s"],
                  "storage": ["192.168.66.86"]
                },
                "zone": 1
              },
              "devices": ["/dev/vdb"]
            },
            {
              "node": {
                "hostnames": {
                  "manage": ["k8s-node1"],
                  "storage": ["192.168.66.87"]
                },
                "zone": 1
              },
              "devices": ["/dev/vdb"]
            },
            {
              "node": {
                "hostnames": {
                  "manage": ["k8s-node2"],
                  "storage": ["192.168.66.84"]
                },
                "zone": 1
              },
              "devices": ["/dev/vdb"]
            }
          ]
        }
      ]
    }

Load the topology through a port-forward to the Heketi pod:

    HEKETI_BOOTSTRAP_POD=$(kubectl get pods | grep deploy-heketi | awk '{print $1}')
    kubectl port-forward $HEKETI_BOOTSTRAP_POD 8080:8080 &    # & keeps the forward running in the background
    export HEKETI_CLI_SERVER=http://localhost:8080
    heketi-cli topology load --json=topology.json
    heketi-cli topology info
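Before running `heketi-cli topology load`, a cheap sanity check on topology.json can catch an incomplete file early: Heketi expects at least three nodes, and each node entry carries exactly one `"manage"` hostname, so counting those lines approximates the node count. A sketch; the helper name is illustrative:

```shell
#!/bin/sh
# Approximate the number of nodes declared in a Heketi topology file by
# counting lines containing the "manage" key (one per node entry).
topology_node_count() {
  grep -c '"manage"' "$1"
}

# Usage:
# [ "$(topology_node_count topology.json)" -ge 3 ] || echo "need at least 3 nodes"
```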
5. Troubleshooting

5.1 heketi-cli topology load --json=topology.json fails with:

    Creating cluster ... ID: 76576f2209ccd75a0ab1e44fc38fd393
        Allowing file volumes on cluster.
        Allowing block volumes on cluster.
        Creating node k8s ... Unable to create node: New Node doesn't have glusterd running
        Creating node k8s-node1 ... Unable to create node: New Node doesn't have glusterd running
        Creating node k8s-node2 ... Unable to create node: New Node doesn't have glusterd running

Fix: create a ClusterRole granting access to the pods:

    kubectl create clusterrole fao --verb=get,list,watch,create --resource=pods,pods/status,pods/exec

If the error persists, also bind the built-in edit ClusterRole to the Heketi ServiceAccount:

    kubectl create clusterrolebinding heketi-gluster-admin --clusterrole=edit --serviceaccount=default:heketi-service-account

5.2 heketi-cli topology load --json=topology.json fails with (the same error repeats for k8s-node1 and k8s-node2):

    Found node k8s on cluster 88bed810717c204761b99c7ec1b71cd0
        Adding device /dev/vdb ... Unable to add device: Setup of device /dev/vdb failed (already initialized or contains data?):
      Can't open /dev/vdb exclusively.  Mounted filesystem?
      Can't open /dev/vdb exclusively.  Mounted filesystem?

Fix: format the disk on each node with mkfs.xfs -f /dev/vdb (-f forces formatting even if a filesystem already exists).

5.3 heketi-cli topology load --json=topology.json fails with (again identical for all three nodes):

    Found node k8s on cluster 88bed810717c204761b99c7ec1b71cd0
        Adding device /dev/vdb ... Unable to add device: Setup of device /dev/vdb failed (already initialized or contains data?):
      WARNING: xfs signature detected on /dev/vdb at offset 0. Wipe it? [y/n]: [n]
      Aborted wiping of xfs.
      1 existing signature left on the device.

Fix: enter the glusterfs container on each node and run:

    pvcreate -ff --metadatasize=128M --dataalignment=256K /dev/vdb
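An alternative to the pvcreate workaround for the "already initialized or contains data?" family of errors is to clear all filesystem and LVM signatures from the device before handing it to Heketi. This uses the standard wipefs tool rather than the fix above, and it is destructive, so triple-check the device name:

```shell
#!/bin/sh
# DESTRUCTIVE: erases every filesystem/LVM signature on the device.
# Run on the node (or inside the glusterfs container) only after
# confirming the device holds no data you need.
wipe_device() {
  wipefs -a "$1"
}

# Usage:
# wipe_device /dev/vdb
```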
6. Define a StorageClass

Use netstat -anp | grep 8080 to confirm the address for resturl; resturl must be an address at which the API server can reach the Heketi service.

    cat storageclass-gluster-heketi.yaml

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: gluster-heketi
    provisioner: kubernetes.io/glusterfs    # must be exactly kubernetes.io/glusterfs
    parameters:
      resturl: "http://127.0.0.1:8080"
      restauthenabled: "false"

7. Define a PVC

    cat storageclass.yaml

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: pvc-gluster-heketi
    spec:
      storageClassName: gluster-heketi
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi

    kubectl get pvc
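Dynamic provisioning is asynchronous, so after creating the claim it is worth polling until the PVC reaches Bound instead of re-running `kubectl get pvc` by hand. A sketch; the claim name comes from the manifest above and the retry count is arbitrary:

```shell
#!/bin/sh
# Poll a PVC until status.phase becomes Bound, giving up after $2
# attempts one second apart. Prints the final phase observed.
wait_pvc_bound() {
  pvc=$1; tries=${2:-30}; phase=""
  while [ "$tries" -gt 0 ]; do
    phase=$(kubectl get pvc "$pvc" -o 'jsonpath={.status.phase}')
    if [ "$phase" = "Bound" ]; then echo Bound; return 0; fi
    tries=$((tries - 1))
    sleep 1
  done
  echo "$phase"
  return 1
}

# Usage:
# wait_pvc_bound pvc-gluster-heketi
```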
8. Mount the PVC in a Pod

    cat pod-pvc.yaml

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-use-pvc
    spec:
      containers:
      - name: pod-pvc
        image: busybox
        command:
        - sleep
        - "3600"
        volumeMounts:
        - name: gluster-volume
          mountPath: "/mnt"
          readOnly: false
      volumes:
      - name: gluster-volume
        persistentVolumeClaim:
          claimName: pvc-gluster-heketi

The Pod may fail to start with:

    Warning  FailedMount  9m34s  kubelet, k8s-node2  ****: mount failed: mount failed: exit status 1

Check the mount log on node2:

    tail -100f /var/lib/kubelet/plugins/kubernetes.io/glusterfs/pvc-5ea820ba-8538-11ea-8750-5254000e327c/pod-use-pvc-glusterfs.log
    line 67: type 'features/utime' is not valid or not found on this machine

Fix: check the node clock and the GlusterFS versions; here the node and the glusterfs containers disagreed on both time and version. Install ntp and synchronize against ntp1.aliyun.com (after that the glusterfs container stayed in sync). Then upgrade the node's GlusterFS client; before the upgrade the node was on 3.12.x while the container ran 7.1:

    yum install centos-release-gluster -y
    yum install glusterfs-client -y

After the upgrade the node is on 7.5.
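The 'features/utime' failure above is a client/server version mismatch, so comparing the node's client version against the version inside the gluster pods catches it before a mount is attempted. A sketch, assuming the glusterfs CLI is installed on the node; the exact first-line format of `glusterfs --version` may vary by release:

```shell
#!/bin/sh
# Print the glusterfs client version on this node, for comparison with
# the version inside the gluster/gluster-centos pods; the
# 'features/utime' xlator only exists in newer releases.
gluster_client_version() {
  glusterfs --version | head -n 1
}
```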
9. Create a file to verify the setup

On the Kubernetes master:

    kubectl exec -ti pod-use-pvc -- /bin/sh
    echo "hello world" > /mnt/b.txt
    df -h        # shows which GlusterFS server /mnt is mounted from

On a node, enter the glusterfs container and look for the file:

    docker exec -ti 89f927aa2110 /bin/bash
    find / -name b.txt
    cat /var/lib/heketi/mounts/vg_22e127efbdefc1bbb315ab0fcf90e779/brick_97de1365f98b19ee3b93ce8ecb588366/brick/b.txt

Alternatively, from the master, exec into the corresponding glusterfs cluster pod:

    kubectl exec -ti glusterfs-h4k22 -- /bin/sh
    find / -name b.txt
    cat /var/lib/heketi/mounts/vg_22e127efbdefc1bbb315ab0fcf90e779/brick_97de1365f98b19ee3b93ce8ecb588366/brick/b.txt