k8s Persistent Volumes: Notes on Using the Local Volume Type

k8s environment:

[root@k8s-master ~]# kubectl get node -o wide
NAME         STATUS   ROLES    AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION              CONTAINER-RUNTIME
k8s-master   Ready    master   91d   v1.18.8   192.168.1.230   <none>        CentOS Linux 7 (Core)   5.8.2-1.el7.elrepo.x86_64   docker://19.3.12
k8s-node01   Ready    <none>   91d   v1.18.8   192.168.1.231   <none>        CentOS Linux 7 (Core)   5.8.1-1.el7.elrepo.x86_64   docker://19.3.12
k8s-node02   Ready    <none>   91d   v1.18.8   192.168.1.232   <none>        CentOS Linux 7 (Core)   5.8.1-1.el7.elrepo.x86_64   docker://19.3.12

Note: local volumes do not support dynamic provisioning yet; even so, you still need to create a StorageClass so that volume binding is delayed until the pod has been scheduled. That delay is what the WaitForFirstConsumer volume binding mode specifies.

1. Create the SC

vi bxy-local-StorageClass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: bxy-local-sc-volume
provisioner: kubernetes.io/no-provisioner   # volume plugin type
# reclaimPolicy: Retain                     # reclaim policy, defaults to Delete
volumeBindingMode: WaitForFirstConsumer     # PV creation and PVC/PV binding are deferred until a Pod actually uses the PVC
[root@k8s-node02 test]# kubectl apply -f bxy-local-StorageClass.yaml
storageclass.storage.k8s.io/bxy-local-sc-volume created
[root@k8s-node02 test]# kubectl get sc
NAME                  PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
bxy-local-sc-volume   kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   false                  8s
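
Because reclaimPolicy is commented out in the yaml above, the SC falls back to the default policy, Delete, which is what the RECLAIMPOLICY column shows.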

2. Create the PV

vi bxy-local-PersistentVolume.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: bxy-local-pv-volume
  labels:
    name: bxy-local-pv-labels
spec:
  capacity:                 # volume size
    storage: 5Gi
  volumeMode: Filesystem    # default
  accessModes:              # access modes: RWO (single-node read/write), RWX (multi-node read/write), ROX (multi-node read-only)
  - ReadWriteOnce           # local volumes use RWO
  persistentVolumeReclaimPolicy: Delete   # reclaim policy
  storageClassName: bxy-local-sc-volume
  local:                    # local mount path
    path: /opt/test/bxy/nginx
  nodeAffinity:             # node affinity: match the node whose hostname label is k8s-node01
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - k8s-node01
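Note that nodeAffinity is mandatory for a local PV: the scheduler uses it to place any pod that consumes this volume onto the node that physically holds the data.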
# Query the node labels
[root@k8s-node02 test]# kubectl get node -o wide --show-labels
NAME         STATUS   ROLES    AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION              CONTAINER-RUNTIME   LABELS
k8s-master   Ready    master   91d   v1.18.8   192.168.1.230   <none>        CentOS Linux 7 (Core)   5.8.2-1.el7.elrepo.x86_64   docker://19.3.12    beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master,kubernetes.io/os=linux,node-role.kubernetes.io/master=
k8s-node01   Ready    <none>   91d   v1.18.8   192.168.1.231   <none>        CentOS Linux 7 (Core)   5.8.1-1.el7.elrepo.x86_64   docker://19.3.12    beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node01,kubernetes.io/os=linux
k8s-node02   Ready    <none>   91d   v1.18.8   192.168.1.232   <none>        CentOS Linux 7 (Core)   5.8.1-1.el7.elrepo.x86_64   docker://19.3.12    beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node02,kubernetes.io/os=linux
# Apply & check PV status
[root@k8s-node02 test]# kubectl apply -f bxy-local-PersistentVolume.yaml
persistentvolume/bxy-local-pv-volume created
[root@k8s-node02 test]# kubectl get pv
NAME                  CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS          REASON   AGE
bxy-local-pv-volume   5Gi        RWO            Delete           Available           bxy-local-sc-volume            4s

# The STATUS column shows Available
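
One caveat: with kubernetes.io/no-provisioner there is nothing that creates the backing directory for you, so /opt/test/bxy/nginx must already exist on k8s-node01 before any pod mounts it. A minimal sketch of that step (assumed here, not captured in the original session):

[root@k8s-node01 ~]# mkdir -p /opt/test/bxy/nginx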

3. Create the PVC

vi bxy-local-PersistentVolumeClaim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: bxy-local-pvc-volume
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem                 # this is a filesystem volume
  resources:
    requests:
      storage: 5Gi
  storageClassName: bxy-local-sc-volume
  selector:                              # this selector matches the PV's labels
    matchLabels:
      name: bxy-local-pv-labels
# Apply & check status
[root@k8s-node02 test]# kubectl apply -f bxy-local-PersistentVolumeClaim.yaml
persistentvolumeclaim/bxy-local-pvc-volume created
[root@k8s-node02 test]# kubectl get pvc
NAME                   STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS          AGE
bxy-local-pvc-volume   Pending                                      bxy-local-sc-volume   12s

[root@k8s-node02 test]# kubectl describe pvc bxy-local-pvc-volume
Name:          bxy-local-pvc-volume
Namespace:     default
StorageClass:  bxy-local-sc-volume
Status:        Pending
Volume:        
Labels:        <none>
Annotations:   
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Mounted By:    <none>
Events:
  Type    Reason                Age                From                         Message
  ----    ------                ----               ----                         -------
  Normal  WaitForFirstConsumer  11s (x6 over 79s)  persistentvolume-controller  waiting for first consumer to be created before binding


# Because the SC specifies volumeBindingMode: WaitForFirstConsumer, binding only happens once a consumer (a Pod) actually uses the PVC.

 

4. Create the Nginx Deployment

vi bxy-local-nginx-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bxy-local-nginx-deploy
spec:
  replicas: 2                # number of replicas
  selector:
    matchLabels:             # spec.selector.matchLabels must match spec.template.metadata.labels, or the Deployment will not start
      k8s-app: bxy-local-nginx-deploy-labels
  template:
    metadata:
      labels:                # labels carried by the Pod replicas, matched by the selector above
        k8s-app: bxy-local-nginx-deploy-labels
    spec:
      containers:
      - name: bxy-local-nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: bxy-local-pv-volume    # spec.template.spec.containers[].volumeMounts.name == spec.template.spec.volumes.name
      volumes:
      - name: bxy-local-pv-volume      # same as the PersistentVolume's metadata.name
        persistentVolumeClaim:
          claimName: bxy-local-pvc-volume
Apply & status
[root@k8s-node02 test]# kubectl apply -f bxy-local-nginx-deploy.yaml
deployment.apps/bxy-local-nginx-deploy created
[root@k8s-node02 test]# kubectl get deploy
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
bxy-local-nginx-deploy   1/1     1            1           5s
nfs-client-provisioner   1/1     1            1           5h19m
tomcat                   1/1     1            1           90d
[root@k8s-master ~]# kubectl get pod -o wide
NAME                                      READY   STATUS    RESTARTS   AGE     IP             NODE         NOMINATED NODE   READINESS GATES
bxy-local-nginx-deploy-59d9f57449-2lbrt   1/1     Running   0          106s    10.244.1.84    k8s-node01   <none>           <none>
bxy-local-nginx-deploy-59d9f57449-xbsmj   1/1     Running   0          105s    10.244.1.83    k8s-node01   <none>           <none>
nfs-client-provisioner-6ffd9d54c5-zzkz7   1/1     Running   0          5h22m   10.244.1.76    k8s-node01   <none>           <none>
tomcat-cb9688cd5-xnwqb                    1/1     Running   17         90d     10.244.2.119   k8s-node02   <none>           <none>

Note that I applied the nginx deploy file from k8s-node02,
yet both nginx replicas were scheduled onto k8s-node01, and any replicas added later will land on k8s-node01 as well; this behavior comes from the nodeAffinity we set in the PV.
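As a quick check (a hypothetical command, not part of the original session), scaling the Deployment up should still put every new replica on k8s-node01:

kubectl scale deployment bxy-local-nginx-deploy --replicas=3
kubectl get pod -o wide -l k8s-app=bxy-local-nginx-deploy-labels   # every pod should report NODE = k8s-node01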
# PV status before the nginx pods started
[root@k8s-node02 test]# kubectl get pv
NAME                  CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS          REASON   AGE
bxy-local-pv-volume   5Gi        RWO            Delete           Available           bxy-local-sc-volume            4s

# The STATUS column shows Available
# PVC status before the nginx pods started
[root@k8s-node02 test]# kubectl get pvc
NAME                   STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS          AGE
bxy-local-pvc-volume   Pending                                      bxy-local-sc-volume   12s

 

# Checking the PV & PVC again, the STATUS column now shows Bound
[root@k8s-master ~]# kubectl get pv
NAME                  CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                          STORAGECLASS          REASON   AGE
bxy-local-pv-volume   5Gi        RWO            Delete           Bound    default/bxy-local-pvc-volume   bxy-local-sc-volume            32m
[root@k8s-master ~]# kubectl get pvc
NAME                   STATUS   VOLUME                CAPACITY   ACCESS MODES   STORAGECLASS          AGE
bxy-local-pvc-volume   Bound    bxy-local-pv-volume   5Gi        RWO            bxy-local-sc-volume   26m

5. Configure the Service

apiVersion: v1
kind: Service
metadata:
  name: bxy-local-nginx-sc
spec:
  clusterIP: 10.101.138.36        # I use the flannel network plugin; clusterIP only has to fall inside the cluster's service IP range
  externalTrafficPolicy: Cluster  # Cluster or Local (with Local, traffic is not forwarded to other nodes)
  ports:
  - nodePort: 29605      # external access port
    port: 19605          # in-cluster access port
    targetPort: 80       # port the container listens on
    protocol: TCP
  selector:              # this label must match the labels field in the deploy
    k8s-app: bxy-local-nginx-deploy-labels
  type: LoadBalancer
Apply & status
[root@k8s-node01 test]# kubectl apply -f bxy-local-nginx-Service.yaml
service/bxy-local-nginx-sc created
[root@k8s-node01 test]# kubectl get svc
NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)           AGE
bxy-local-nginx-sc   LoadBalancer   10.101.138.36   <pending>     19605:29605/TCP   7s
kubernetes           ClusterIP      10.96.0.1       <none>        443/TCP           91d
nginx-nfs            LoadBalancer   10.101.138.35   <pending>     19606:29606/TCP   2d3h
tomcat               LoadBalancer   10.100.191.78   <pending>     19999:20000/TCP   90d
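
The EXTERNAL-IP column stays <pending> because this bare-metal cluster has no cloud controller to provision a load balancer; the Service is still reachable on every node through its NodePort (29605), which is what the tests below rely on.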

6. Test

[root@k8s-master ~]# curl 10.101.138.36:19605
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.19.4</center>
</body>
</html>


I hit nginx through the cluster-internal IP; the 403 comes from the fact that I have not put any file into the local mount directory yet.
At this point PV + PVC + SC + local has in fact already been proven to work: a freshly started nginx container ships with its own default index.html,
so getting a 403 shows that nginx's default page directory has been masked by the empty local directory mounted over it.
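
To confirm this (a hypothetical check, reusing one of the pod names from the listing above), you can inspect the mounted directory inside the container; it should be empty:

kubectl exec bxy-local-nginx-deploy-59d9f57449-2lbrt -- ls -la /usr/share/nginx/html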


Let's add a file and try again:

 [root@k8s-node01 nginx]# pwd
 /opt/test/bxy/nginx
 [root@k8s-node01 nginx]# ls
 [root@k8s-node01 nginx]# echo 'k8s local mount test !!!' > index.html
 [root@k8s-node01 nginx]# ls
 index.html

 

  [root@k8s-node01 nginx]# curl 10.101.138.36:19605    # access via the cluster-internal IP
  k8s local mount test !!!
  [root@k8s-node01 nginx]# curl 192.168.1.230:29605    # external access via the k8s-master IP
  k8s local mount test !!!
  [root@k8s-node01 nginx]# curl 192.168.1.231:29605    # external access via the k8s-node01 IP
  k8s local mount test !!!
  [root@k8s-node01 nginx]# curl 192.168.1.232:29605    # external access via the k8s-node02 IP
  k8s local mount test !!!

 

As for why the external IP of every node, not just the one hosting the pods, can reach nginx: that is because we set externalTrafficPolicy: Cluster in the Service.
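
As a contrast (a hypothetical experiment, not run here), patching the policy to Local should leave only k8s-node01, where the pods actually run, answering on the NodePort:

kubectl patch svc bxy-local-nginx-sc -p '{"spec":{"externalTrafficPolicy":"Local"}}'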

This completes the PV + PVC + SC + local persistent volume setup.

 
