Kubernetes Series - StorageClass (k8s)

The StorageClass Resource

1. Why use StorageClass?

  • Manual provisioning looks fine at first glance, but consider how binding works: a PVC chooses a PV based on the PV's name, access mode, and capacity. Suppose a PV has a capacity of 20Gi with an access mode of RWO (ReadWriteOnce: it may be mounted read-write by a single node only), while a PVC requests just 10Gi. Once that PVC binds to this PV, 10Gi of the PV is wasted, because the PV can only be mounted by a single node. And even setting that problem aside, creating a PV by hand for every claim is tedious. What we need is an automated tool that creates PVs on our behalf.
  • That tool is the open-source nfs-client-provisioner (from the Kubernetes external-storage project). It uses the NFS driver built into Kubernetes to mount a remote NFS server's export into a local directory, and then serves that directory as the backing storage.

2. What does a StorageClass do in the cluster?

  • A PVC cannot request space from nfs-client-provisioner directly; the request goes through a StorageClass (SC) resource object. The fundamental job of an SC is to dynamically create PVs according to what the PVC defines, which not only saves administrators time but also lets you package different kinds of storage for PVCs to choose from.
  • Every SC contains the following three important fields, which are used whenever the SC needs to dynamically provision a PV:

Provisioner: the storage system that provides the storage resources.
ReclaimPolicy: the reclaim policy applied to PVs created by this class; valid values are Delete (the default) and Retain.
Parameters: parameters describing the volumes to be associated with this storage class.
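
A minimal sketch of a StorageClass manifest showing all three fields (the provisioner name example.com/nfs and the archiveOnDelete parameter are illustrative; valid parameters depend entirely on the provisioner you use):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-nfs
provisioner: example.com/nfs   # must match the provisioner's PROVISIONER_NAME
reclaimPolicy: Retain          # Delete is the default
parameters:
  archiveOnDelete: "false"     # example parameter understood by nfs-client-provisioner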

3. Now let's put StorageClass into practice, backed by an NFS service

(1) Set up the NFS service (I use the master node as the NFS server):

[root@master1 ~]# yum -y install nfs-utils   # install on every node
[root@master1 ~]# vim /etc/exports
/nfsdata *(rw,sync,no_root_squash)
[root@master1 ~]# mkdir /nfsdata
[root@master1 ~]# systemctl start rpcbind
[root@master1 ~]# systemctl enable rpcbind
[root@master1 ~]# systemctl start nfs-server
[root@master1 ~]# systemctl enable nfs-server
[root@master1 ~]# showmount -e
Export list for master1:
/nfsdata *
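
Optionally, verify that the export is mountable from one of the worker nodes (the hostname node1 here is just an example from this environment):

[root@node1 ~]# mount -t nfs 192.168.17.10:/nfsdata /mnt
[root@node1 ~]# touch /mnt/test && rm -f /mnt/test   # confirm the export is writable
[root@node1 ~]# umount /mnt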

(2) Create the RBAC permissions:

RBAC (role-based access control) ties users to permissions through roles. A request passes through an authentication -----> authorization -----> admission pipeline.

[root@master1 sc]# cat rbac-rolebind.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get", "create", "list", "watch", "update"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

Once it is written, apply the YAML file to create the objects:
[root@master1 sc]# kubectl apply -f rbac-rolebind.yaml
serviceaccount/nfs-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-provisioner created
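
As an optional sanity check, confirm that the service account now has the permissions it needs, for example:

[root@master1 sc]# kubectl auth can-i create persistentvolumes --as=system:serviceaccount:default:nfs-provisioner
yes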

(3) Create the nfs-client-provisioner Deployment

[root@master1 sc]# cat nfs-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
#  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-provisioner   # the service account created in step (2)
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/jun-lin/nfs:client-provisioner
          volumeMounts:
            - name: nfs-client-root
              mountPath:  /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-deploy    # the provisioner's name; the StorageClass must reference it
            - name: NFS_SERVER
              value: 192.168.17.10  # NFS server IP
            - name: NFS_PATH
              value: /nfsdata    # NFS export directory
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.17.10
            path: /nfsdata


Apply nfs-deployment.yaml:
[root@master1 sc]# kubectl apply -f nfs-deployment.yaml
deployment.apps/nfs-client-provisioner created

Check that it is running:
[root@master1 sc]# kubectl get po
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-6bf9cd4dfb-qn8nn   1/1     Running   0          37s
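
If the pod does not reach Running, the provisioner's own logs usually explain why (a failed NFS mount, missing RBAC permissions, and so on):

[root@master1 sc]# kubectl logs deploy/nfs-client-provisioner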

(4) Create the StorageClass:

[root@master1 sc]# cat nfs-sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: statefu-nfs
provisioner: nfs-deploy     # must match the PROVISIONER_NAME defined in the Deployment above
reclaimPolicy: Retain

Apply nfs-sc.yaml:
[root@master1 sc]# kubectl apply -f nfs-sc.yaml
storageclass.storage.k8s.io/statefu-nfs created
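
Verify that the class exists:

[root@master1 sc]# kubectl get sc

If you want PVCs that omit storageClassName to fall back to this class, you can additionally mark it as the cluster default by adding the annotation storageclass.kubernetes.io/is-default-class: "true" to its metadata.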

(5) The SC resource object was created successfully. Next, let's test whether it can dynamically provision a PV.

Create a PVC and see whether a PV is created automatically:
[root@master1 sc]# cat test-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
  namespace: default
spec:
  storageClassName: statefu-nfs   # must point at the SC name created above
  accessModes:
    - ReadWriteMany   # request the ReadWriteMany access mode
  resources:
    requests:
      storage: 1Gi
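
Apply it like the previous manifests, then look at the PVs and PVCs:

[root@master1 sc]# kubectl apply -f test-pvc.yaml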

[root@master1 sc]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS   REASON   AGE
pvc-0b9e1107-fefc-48b9-a5ec-8f8690e94435   1Gi        RWX            Delete           Bound    default/test-claim   statefu-nfs             107m
[root@master1 sc]# kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-claim   Bound    pvc-0b9e1107-fefc-48b9-a5ec-8f8690e94435   1Gi        RWX            statefu-nfs    107m
You can see that after the PVC was created, a PV was created automatically and bound to that PVC.
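
On the NFS server you should also find a directory that the provisioner created for this claim; with nfs-client-provisioner the directory name typically follows the pattern ${namespace}-${pvcName}-${pvName}, so here it would look something like:

[root@master1 sc]# ls /nfsdata
default-test-claim-pvc-0b9e1107-fefc-48b9-a5ec-8f8690e94435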

4. Deploy nginx to exercise the PV and PVC

[root@master1 sc]# cat nginx-pv.yaml
kind: Pod
apiVersion: v1
metadata:
  name: nginx-pod
  namespace: default
  labels:
    app: web
spec:
  containers:
    - name: nginx-pod
      image: nginx
      volumeMounts:
        - name: nfs-pvc
          mountPath: /usr/share/nginx/html
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  namespace: default
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - name: nginx
    port: 80
    targetPort: 80
    nodePort: 32134

[root@master1 sc]# kubectl get po
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-6bf9cd4dfb-qn8nn   1/1     Running   0          8m52s
nginx-pod                                 1/1     Running   0          107m
If nginx does not come up, inspect the error with kubectl describe po nginx-pod and kubectl logs nginx-pod.
Then access nginx to confirm it is serving content from the NFS-backed volume, as sketched below.
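
For example (a sketch: <pvc-dir> stands for the directory the provisioner created under /nfsdata, and <node-ip> for the address of any cluster node):

[root@master1 sc]# echo "hello from nfs" > /nfsdata/<pvc-dir>/index.html
[root@master1 sc]# curl http://<node-ip>:32134
hello from nfs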
On Kubernetes 1.20 and later, one extra apiserver flag is required, because this older nfs-client-provisioner still depends on the SelfLink field that was removed in 1.20:
[root@master1 sc]# cat /etc/kubernetes/manifests/kube-apiserver.yaml
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.210.20
    - --.......  # many flags omitted
    - --feature-gates=RemoveSelfLink=false  # add this line
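
Because kube-apiserver runs as a static pod, the kubelet restarts it automatically once the manifest is saved; confirm it came back up with:

[root@master1 sc]# kubectl get po -n kube-system | grep apiserver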
 