K8s Advanced Series: Cloud-Native Storage with Rook

1. Introduction to Rook

Rook (https://rook.io) is a self-managing distributed storage orchestration system that provides a convenient storage solution for Kubernetes.

Rook itself does not provide storage; it acts as an adaptation layer between Kubernetes and the storage system, simplifying the deployment and maintenance of the storage system.

The storage systems Rook supports include Ceph, CockroachDB, Cassandra, EdgeFS, Minio, and NFS. Of these, Ceph and NFS are the best supported.

 

 

2. Introduction to Ceph


Ceph is a unified, distributed storage system designed for excellent performance, reliability, and scalability. "Unified" means it provides file, block, and object storage; "distributed" means it can scale dynamically.

Ceph supports three types of storage:

Block storage (RBD): can be mounted directly as a disk.

File system (CephFS): a POSIX-compliant network file system, focused on high performance and large capacity.

Object storage (RADOSGW): provides a RESTful interface and bindings for several programming languages; compatible with S3 (the AWS object store) and Swift (the OpenStack object store).

Core components
Ceph has three main daemons:

OSD
Stores all data and objects in the cluster; handles replication, recovery, backfilling, and rebalancing of cluster data; sends heartbeats to other OSD daemons and reports monitoring information to the Monitors.

Monitor
Monitors the state of the entire cluster, maintains the cluster map, and ensures the consistency of cluster data.

MDS (optional)
Provides metadata computation, caching, and synchronization for the Ceph file system. The MDS daemon is not mandatory; it is only needed when CephFS is used.

 

3. Deploying Rook and Ceph

3.1 Prerequisites

Installing the Ceph cluster
Installing a Ceph cluster through Rook requires the following two prerequisites:

  • A Kubernetes cluster is already deployed (✅)
  • The OSD nodes have disks without any filesystem on them, i.e. raw disks (✅)
  1. On the master1 node, clone Rook locally, using version 1.8.8:

git clone --single-branch --branch v1.8.8 https://github.com/rook/rook.git

  2. Install lvm2 on all worker nodes that will host Ceph:

yum install lvm2 -y

  3. Label the worker nodes so that Ceph is installed only on these three workers:

kubectl label node k8s-worker1 role=ceph-storage
kubectl label node k8s-worker2 role=ceph-storage
kubectl label node k8s-worker3 role=ceph-storage

  

3.2 Deploying Rook

The Rook environment can be deployed with a single command:

cd deploy/examples   # in Rook v1.8.x the example manifests live here; older releases used cluster/examples/kubernetes/ceph
kubectl create -f crds.yaml -f common.yaml -f operator.yaml

  

Check that the operator pod is running:

[root@k8s-master01 ~/k8s/rook/rook/deploy/examples]# kubectl -n rook-ceph get pod|grep oper
rook-ceph-operator-84985d69d4-rncx4                      1/1     Running     0          117m

  

3.3 Creating the Ceph Cluster

3.3.1 Configuration Changes

Main change 1: the OSD nodes.
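In cluster.yaml, disable the "use all" defaults and list the OSD nodes and their raw devices explicitly. A minimal sketch, assuming each labeled worker has a raw disk named sdb (use whatever lsblk -f reports on your nodes):

  storage:
    useAllNodes: false
    useAllDevices: false
    nodes:
      - name: "k8s-worker1"
        devices:
          - name: "sdb"   # assumed raw device name; verify with lsblk -f
      - name: "k8s-worker2"
        devices:
          - name: "sdb"
      - name: "k8s-worker3"
        devices:
          - name: "sdb"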

 

Main change 2: the node affinity configuration:

placement:
    all:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: role
              operator: In
              values:
              - ceph-storage

  This uses the nodes we labeled earlier.

Note: newer versions require raw disks, i.e. unformatted disks. In this environment k8s-master03, k8s-node01, and node02 each have a newly added disk; use lsblk -f to find the name of the new disk. At least three nodes are recommended, otherwise the later experiments may run into problems.

3.3.2 Create the Ceph Cluster

kubectl create -f cluster.yaml

  

After creation, check the status of the pods:

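For reference, the pods can be listed with:

kubectl -n rook-ceph get pod

The list typically includes rook-ceph-mon-*, rook-ceph-mgr-*, completed rook-ceph-osd-prepare-* jobs, and one rook-ceph-osd-<x> pod per OSD.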

 

 

Note that the rook-ceph-osd-x pods must exist and be healthy. If all of the above pods are running normally, the cluster installation is considered successful.

More configuration options: https://rook.io/docs/rook/v1.6/ceph-cluster-crd.html

3.3.3 Installing the Ceph Snapshot Controller

Kubernetes 1.19 and later requires a separately installed snapshot controller for PVC snapshots, so we install it here in advance. Versions below 1.19 do not need a separate installation; refer to the original video instead.

The snapshot controller manifests live in the k8s-ha-install project used during cluster installation; switch to the 1.20.x branch:

cd /root/k8s-ha-install/
git checkout manual-installation-v1.20.x

  

Create the snapshot controller:

kubectl create -f snapshotter/ -n kube-system

  

[root@k8s-master01 k8s-ha-install]# kubectl  get po -n kube-system -l app=snapshot-controller
NAME                    READY   STATUS    RESTARTS   AGE
snapshot-controller-0   1/1     Running   0          27h

  

3.3.4 Installing the Ceph Client Tools

The Rook toolbox is a container with common tools for debugging and testing Rook. Installation is simple:

[root@k8s-master01 ceph]# pwd
/root/rook/cluster/examples/kubernetes/ceph
[root@k8s-master01 ceph]# kubectl  create -f toolbox.yaml -n rook-ceph
deployment.apps/rook-ceph-tools created

  

 

 We can then exec into this pod and check the status of the Ceph cluster:
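A sketch of the typical commands (the toolbox Deployment is named rook-ceph-tools, as created above):

kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash
# inside the toolbox pod:
ceph status
ceph osd status
ceph df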

 

 As shown above, the Ceph cluster health is OK, and the monitors, manager, and OSDs are all up.

 

 

 

 

3.3.5 Installing the Ceph Dashboard

 

The Ceph Dashboard is a built-in web-based management and monitoring application that is part of the open-source Ceph distribution. Through the dashboard you can view the basic status of the Ceph cluster.

The dashboard is already deployed by default, but its Service is a ClusterIP and cannot be reached from outside the cluster, so we create an additional Service:

kubectl apply -f dashboard-external-https.yaml

  

With a NodePort-type Service, the dashboard can be accessed externally.
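The assigned NodePort can be read from the Service created by dashboard-external-https.yaml (the Service name below matches the Rook example manifest; verify with kubectl -n rook-ceph get svc):

kubectl -n rook-ceph get svc rook-ceph-mgr-dashboard-external-https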

 

 

Open in a browser (replace master1-ip with your own cluster IP): https://master1-ip:32609/

The default username is admin; the password can be retrieved with:

kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}"|base64 --decode && echo

 

 

 

4. Using Ceph Block Storage

Block storage is typically mounted by a single pod, much like attaching a new disk to a server for the exclusive use of one application.

Reference: https://rook.io/docs/rook/v1.6/ceph-block.html

4.1 Creating the StorageClass and the Ceph Storage Pool

[root@k8s-master01 ceph]# pwd
/root/rook/cluster/examples/kubernetes/ceph
[root@k8s-master01 ceph]# vim csi/rbd/storageclass.yaml

  Since this is a test environment, I set the replica size to 3; in production it should be at least 2 and must not exceed the number of OSDs.

cat csi/rbd/storageclass.yaml 
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph # namespace:cluster
spec:
  failureDomain: host
  replicated:
    size: 3
    # Disallow setting pool with replica 1, this could lead to data loss without recovery.
    # Make sure you're *ABSOLUTELY CERTAIN* that is what you want
    requireSafeReplicaSize: true
    # gives a hint (%) to Ceph in terms of expected consumption of the total cluster capacity of a given pool
    # for more info: https://docs.ceph.com/docs/master/rados/operations/placement-groups/#specifying-expected-pool-size
    #targetSizeRatio: .5
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
# Change "rook-ceph" provisioner prefix to match the operator namespace if needed
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  # clusterID is the namespace where the rook cluster is running
  # If you change this namespace, also change the namespace below where the secret namespaces are defined
  clusterID: rook-ceph # namespace:cluster

  # If you want to use erasure coded pool with RBD, you need to create
  # two pools. one erasure coded and one replicated.
  # You need to specify the replicated pool here in the `pool` parameter, it is
  # used for the metadata of the images.
  # The erasure coded pool must be set as the `dataPool` parameter below.
  #dataPool: ec-data-pool
  pool: replicapool

  # (optional) mapOptions is a comma-separated list of map options.
  # For krbd options refer
  # https://docs.ceph.com/docs/master/man/8/rbd/#kernel-rbd-krbd-options
  # For nbd options refer
  # https://docs.ceph.com/docs/master/man/8/rbd-nbd/#options
  # mapOptions: lock_on_read,queue_depth=1024

  # (optional) unmapOptions is a comma-separated list of unmap options.
  # For krbd options refer
  # https://docs.ceph.com/docs/master/man/8/rbd/#kernel-rbd-krbd-options
  # For nbd options refer
  # https://docs.ceph.com/docs/master/man/8/rbd-nbd/#options
  # unmapOptions: force

  # RBD image format. Defaults to "2".
  imageFormat: "2"

  # RBD image features. Available for imageFormat: "2". CSI RBD currently supports only `layering` feature.
  imageFeatures: layering

  # The secrets contain Ceph admin credentials. These are generated automatically by the operator
  # in the same namespace as the cluster.
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph # namespace:cluster
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph # namespace:cluster
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph # namespace:cluster
  # Specify the filesystem type of the volume. If not specified, csi-provisioner
  # will set default as `ext4`. Note that `xfs` is not recommended due to potential deadlock
  # in hyperconverged settings where the volume is mounted on the same node as the osds.
  csi.storage.k8s.io/fstype: ext4
# uncomment the following to use rbd-nbd as mounter on supported nodes
# **IMPORTANT**: CephCSI v3.4.0 onwards a volume healer functionality is added to reattach
# the PVC to application pod if nodeplugin pod restart.
# Its still in Alpha support. Therefore, this option is not recommended for production use.
#mounter: rbd-nbd
allowVolumeExpansion: true
reclaimPolicy: Delete

  

Create the StorageClass and the pool:

[root@k8s-master01 ceph]# kubectl  create -f csi/rbd/storageclass.yaml -n rook-ceph
cephblockpool.ceph.rook.io/replicapool created
storageclass.storage.k8s.io/rook-ceph-block created

  

View the CephBlockPool and StorageClass that were created (StorageClass is not namespaced):
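For example:

kubectl -n rook-ceph get cephblockpool
kubectl get sc rook-ceph-block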

 

 

At this point the pool should be visible in the Ceph dashboard; if it does not appear, the creation did not succeed.

 

 

 

4.2 Mount Test

We use the WordPress example recommended in the official docs.

First create the MySQL resources: Service, PVC, and Deployment.

cat mysql.yaml 
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  storageClassName: rook-ceph-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
    tier: mysql
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: changeme
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage # mount the storage at /var/lib/mysql in the pod
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage  # MySQL storage volume
          persistentVolumeClaim:
            claimName: mysql-pv-claim

  

Next, create WordPress:

cat wordpress.yaml 
apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
    - port: 80
  selector:
    app: wordpress
    tier: frontend
  #type: LoadBalancer
  type: NodePort  # changed to NodePort for easier testing
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp-pv-claim  # the PVC for WordPress
  labels:
    app: wordpress
spec:
  storageClassName: rook-ceph-block  # use the block StorageClass created earlier
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
    tier: frontend
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
        - image: wordpress:4.6.1-apache
          name: wordpress
          env:
            - name: WORDPRESS_DB_HOST
              value: wordpress-mysql
            - name: WORDPRESS_DB_PASSWORD
              value: changeme
          ports:
            - containerPort: 80
              name: wordpress
          volumeMounts:
            - name: wordpress-persistent-storage
              mountPath: /var/www/html
      volumes:
        - name: wordpress-persistent-storage
          persistentVolumeClaim:
            claimName: wp-pv-claim

  Note: for easier testing, I changed the WordPress Service to NodePort.
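Assuming the two manifests are saved as mysql.yaml and wordpress.yaml as shown above, apply them with:

kubectl create -f mysql.yaml
kubectl create -f wordpress.yaml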

 

4.3 Checking the Results

1. Check the PVs and PVCs: once the StorageClass is specified, the system automatically provisions a PV and binds it to our PVC.
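For example:

kubectl get pvc mysql-pv-claim wp-pv-claim
kubectl get pv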

 

 

2. Check the WordPress and MySQL pods.
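Both Deployments carry the app=wordpress label, so a single selector covers them:

kubectl get pod -l app=wordpress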

 

3. Check the storage.

 

 

4. Check the service.

Open http://nodeIP:31919 in a browser.
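The NodePort (31919 in this environment) is assigned automatically and can be read from the Service:

kubectl get svc wordpress -o jsonpath='{.spec.ports[0].nodePort}' && echo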

 

4.4 StatefulSet volumeClaimTemplates

When each pod of a StatefulSet needs its own independent storage, we can use the volumeClaimTemplates field.

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: nginx 
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "rook-ceph-block"
      resources:
        requests:
          storage: 1Gi

  In the volumeClaimTemplates above, we directly specified the StorageClass rook-ceph-block.

 

 Note: volumeClaimTemplates is a StatefulSet-only field.
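Each replica gets its own PVC named <template-name>-<statefulset-name>-<ordinal> (www-web-0, www-web-1, www-web-2 here); a quick check:

kubectl get pvc | grep www-web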

 

5. Using the Shared Filesystem

Use case: a shared filesystem is typically used when multiple pods share the same storage, for example the files of an nginx website, which must be identical across all pods serving the same site.

Official docs: https://rook.io/docs/rook/v1.6/ceph-filesystem.html

5.1 Creating a Shared Filesystem

Create a Ceph filesystem using the CephFilesystem resource:
cat filesystem.yaml 
#################################################################################################################
# Create a filesystem with settings with replication enabled for a production environment.
# A minimum of 3 OSDs on different nodes are required in this example.
#  kubectl create -f filesystem.yaml
#################################################################################################################

apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph # namespace:cluster
spec:
  # The metadata pool spec. Must use replication.
  metadataPool:
    replicated:
      size: 3
      requireSafeReplicaSize: true
    parameters:
      # Inline compression mode for the data pool
      # Further reference: https://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/#inline-compression
      compression_mode:
        none
        # gives a hint (%) to Ceph in terms of expected consumption of the total cluster capacity of a given pool
      # for more info: https://docs.ceph.com/docs/master/rados/operations/placement-groups/#specifying-expected-pool-size
      #target_size_ratio: ".5"
  # The list of data pool specs. Can use replication or erasure coding.
  dataPools:
    - name: replicated
      failureDomain: host
      replicated:
        size: 3
        # Disallow setting pool with replica 1, this could lead to data loss without recovery.
        # Make sure you're *ABSOLUTELY CERTAIN* that is what you want
        requireSafeReplicaSize: true
      parameters:
        # Inline compression mode for the data pool
        # Further reference: https://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/#inline-compression
        compression_mode:
          none
          # gives a hint (%) to Ceph in terms of expected consumption of the total cluster capacity of a given pool
        # for more info: https://docs.ceph.com/docs/master/rados/operations/placement-groups/#specifying-expected-pool-size
        #target_size_ratio: ".5"
  # Whether to preserve filesystem after CephFilesystem CRD deletion
  preserveFilesystemOnDelete: true
  # The metadata service (mds) configuration
  metadataServer:
    # The number of active MDS instances
    activeCount: 1
    # Whether each active MDS instance will have an active standby with a warm metadata cache for faster failover.
    # If false, standbys will be available, but will not have a warm cache.
    activeStandby: true
    # The affinity rules to apply to the mds deployment
    placement:
      #  nodeAffinity:
      #    requiredDuringSchedulingIgnoredDuringExecution:
      #      nodeSelectorTerms:
      #      - matchExpressions:
      #        - key: role
      #          operator: In
      #          values:
      #          - mds-node
      #  topologySpreadConstraints:
      #  tolerations:
      #  - key: mds-node
      #    operator: Exists
      #  podAffinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
                - key: app
                  operator: In
                  values:
                    - rook-ceph-mds
            # topologyKey: kubernetes.io/hostname will place MDS across different hosts
            topologyKey: kubernetes.io/hostname
        preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - rook-ceph-mds
              # topologyKey: */zone can be used to spread MDS across different AZ
              # Use <topologyKey: failure-domain.beta.kubernetes.io/zone> in k8s cluster if your cluster is v1.16 or lower
              # Use <topologyKey: topology.kubernetes.io/zone>  in k8s cluster is v1.17 or upper
              topologyKey: topology.kubernetes.io/zone
    # A key/value list of annotations
    annotations:
    #  key: value
    # A key/value list of labels
    labels:
    #  key: value
    resources:
    # The requests and limits set here, allow the filesystem MDS Pod(s) to use half of one CPU core and 1 gigabyte of memory
    #  limits:
    #    cpu: "500m"
    #    memory: "1024Mi"
    #  requests:
    #    cpu: "500m"
    #    memory: "1024Mi"
    # priorityClassName: my-priority-class
  # Filesystem mirroring settings
  # mirroring:
    # enabled: true
    # list of Kubernetes Secrets containing the peer token
    # for more details see: https://docs.ceph.com/en/latest/dev/cephfs-mirroring/#bootstrap-peers
    # peers:
      #secretNames:
        #- secondary-cluster-peer
    # specify the schedule(s) on which snapshots should be taken
    # see the official syntax here https://docs.ceph.com/en/latest/cephfs/snap-schedule/#add-and-remove-schedules
    # snapshotSchedules:
    #   - path: /
    #     interval: 24h # daily snapshots
    #     startTime: 11:55
    # manage retention policies
    # see syntax duration here https://docs.ceph.com/en/latest/cephfs/snap-schedule/#add-and-remove-retention-policies
    # snapshotRetention:
    #   - path: /
    #     duration: "h 24"

 

After creation, MDS containers are started; wait until they are running before creating PVs.
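Apply the manifest and wait for the MDS pods (labeled app=rook-ceph-mds) to be Running; a sketch:

kubectl create -f filesystem.yaml
kubectl -n rook-ceph get pod -l app=rook-ceph-mds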

 

 

You can also check the status in the Ceph dashboard.

 

 

 

 

5.2 Creating the StorageClass for the Shared Filesystem

[root@k8s-master01 cephfs]# pwd
/root/rook/cluster/examples/kubernetes/ceph/csi/cephfs
[root@k8s-master01 cephfs]# kubectl create -f storageclass.yaml 
storageclass.storage.k8s.io/rook-cephfs created

  From now on, setting a PVC's storageClassName to rook-cephfs creates shared-filesystem storage, similar to NFS, that can be shared by multiple pods.

 

 Note: the corresponding CSIDriver must exist; the underlying interface is implemented by the CSI driver.
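The registered CSI drivers can be listed as follows; the CephFS driver (rook-ceph.cephfs.csi.ceph.com in a default Rook install) should appear alongside the RBD driver:

kubectl get csidriver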

 

 

5.3 Nginx Mount Test

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  selector:
    app: nginx
  type: ClusterIP
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-share-pvc
spec:
  storageClassName: rook-cephfs 
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment 
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      containers:
      - name: nginx
        image: nginx 
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
      volumes:
        - name: www
          persistentVolumeClaim:
            claimName: nginx-share-pvc

  

Verify the shared-file behavior:
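A sketch: write an index page through one pod, then read it back from every replica; all replicas should return the same content because they share one CephFS volume. Note that the StatefulSet pods from section 4.4 also carry the app=nginx label, so they will appear in the loop unless they have been removed.

kubectl exec deploy/web -- sh -c 'echo "shared by cephfs" > /usr/share/nginx/html/index.html'
for p in $(kubectl get pod -l app=nginx -o name); do kubectl exec "$p" -- cat /usr/share/nginx/html/index.html; done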

 

 

6. Dynamic PVC Expansion

6.1 Expansion Requirements

  • Expanding shared-filesystem PVCs requires Kubernetes 1.15+
  • Expanding block-storage PVCs requires Kubernetes 1.16+

PVC expansion requires the ExpandCSIVolumes feature gate. Newer Kubernetes versions enable it by default; check whether your version already has it enabled:

 If the default is true, nothing needs to be done; if the default is false, you need to enable the feature gate.

  • The StorageClass must also support volume expansion (see the check below).
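A quick way to verify both points (a sketch; the manifest path assumes a kubeadm-installed control plane):

grep feature-gates /etc/kubernetes/manifests/kube-apiserver.yaml   # no ExpandCSIVolumes entry means the version default applies
kubectl get sc rook-ceph-block -o jsonpath='{.allowVolumeExpansion}' && echo   # should print true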

 

6.2 Expansion Procedure

Before expansion, the PVC has a capacity of 1Gi:
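Expansion is done by increasing the PVC's requested size, for example with kubectl edit or a patch. A sketch, assuming we grow the StatefulSet PVC www-web-0 from 1Gi to 2Gi:

kubectl patch pvc www-web-0 -p '{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}'
kubectl get pvc www-web-0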

 

 

After expansion:

 

7. PVC Snapshots (Block Storage Snapshots)

The CSI snapshotter is part of the Kubernetes implementation of the Container Storage Interface (CSI).

The volume snapshot feature supports CSI v1.0 and higher. It was introduced as an Alpha feature in Kubernetes v1.12 and was promoted to a Beta feature in Kubernetes 1.17. In Kubernetes 1.20, the volume snapshot feature moved to GA.

In Kubernetes 1.20, snapshots are GA.

A PVC snapshot is similar to a VM snapshot: it takes a copy of the current data for backup purposes.

7.1 Creating the VolumeSnapshotClass

Purpose of the VolumeSnapshotClass:

Just like StorageClass provides a way for administrators to describe the “classes” of storage they offer when provisioning a volume, VolumeSnapshotClass provides a way to describe the “classes” of storage when provisioning a volume snapshot.

Just as a StorageClass describes how volumes are provisioned, a VolumeSnapshotClass describes how volume snapshots are provisioned.

Create it:

kubectl create -f snapshotclass.yaml 

  

Contents: the parameters identify the cluster and the secrets used for snapshotting, and the resource is managed through the snapshot.storage.k8s.io/v1 API.

 cat csi/rbd/snapshotclass.yaml 
---
# 1.17 <= K8s <= v1.19
# apiVersion: snapshot.storage.k8s.io/v1beta1
# K8s >= v1.20
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-rbdplugin-snapclass
driver: rook-ceph.rbd.csi.ceph.com # driver:namespace:operator
parameters:
  # Specify a string that identifies your cluster. Ceph CSI supports any
  # unique string. When Ceph CSI is deployed by Rook use the Rook namespace,
  # for example "rook-ceph".
  clusterID: rook-ceph # namespace:cluster
  csi.storage.k8s.io/snapshotter-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/snapshotter-secret-namespace: rook-ceph # namespace:cluster
deletionPolicy: Delete

  

7.2 Taking a Snapshot

Before taking the snapshot, we add some content to the volume to be snapshotted so we can verify the data later.

 

 Add an nginx index page inside the web-0 pod for later testing.
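A sketch of writing the test page (the page content is an arbitrary example):

kubectl exec web-0 -- sh -c 'echo "hello from web-0" > /usr/share/nginx/html/index.html'
kubectl exec web-0 -- cat /usr/share/nginx/html/index.html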

Next, we take the snapshot.

 The manifest defines a VolumeSnapshot whose source is an existing PVC. Note that PVCs are namespaced, so the snapshot must be created in the same namespace as the PVC, otherwise it cannot be bound.

cat csi/rbd/snapshot.yaml 
---
# 1.17 <= K8s <= v1.19
# apiVersion: snapshot.storage.k8s.io/v1beta1
# K8s >= v1.20
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: rbd-pvc-snapshot
spec:
  volumeSnapshotClassName: csi-rbdplugin-snapclass
  source:
    persistentVolumeClaimName: www-web-0

  

 

 

 

7.3 Creating a PVC from a Snapshot

cat pvc-restore.yaml 
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc-restore
spec:
  storageClassName: rook-ceph-block
  dataSource:
    name: rbd-pvc-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

  Note: when creating a PVC from a snapshot, the following conditions must be met: 1. it must be in the same namespace as the snapshot; 2. the requested storage size must be at least as large as the snapshot size. Otherwise the PVC cannot be bound.
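Apply and verify that the restored PVC binds (a sketch):

kubectl create -f pvc-restore.yaml
kubectl get pvc rbd-pvc-restore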

 

 

7.4 Data Verification

 cat restore-nginx.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-restore
  labels:
    app: wordpress
    tier: frontend
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
        - image: nginx
          name: nginx
          volumeMounts:
            - name: wordpress-persistent-storage-restore
              mountPath: /var/www/html
      volumes:
        - name: wordpress-persistent-storage-restore
          persistentVolumeClaim:
            claimName: rbd-pvc-restore 

  We create an nginx Deployment and mount the PVC restored from the snapshot into its pod.

 

 As shown in the figure, the data is intact.

Mount point in web-0: /usr/share/nginx/html/

Mount point in wordpress-restore: /var/www/html/

Because the snapshot data is mounted as a regular PVC, the data appears directly at the PVC's mount point, even though the path differs between pods.

 

 

 

 

8. PVC Cloning

PVC cloning is similar to PVC snapshots:

cat csi/rbd/pvc-clone.yaml 
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc-clone
spec:
  storageClassName: rook-ceph-block
  dataSource:
    name: www-web-0
    kind: PersistentVolumeClaim
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

  

Notes:

1. Cloning can only be done within the same namespace.

2. The cloned PVC's requested size must be at least as large as that of the source PVC.
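Apply and verify (a sketch):

kubectl create -f csi/rbd/pvc-clone.yaml
kubectl get pvc rbd-pvc-clone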

 

 

 

9. Cleaning Up Test Data

If you plan to keep using Rook, it is enough to clean up only the Deployments, Pods, and PVCs created above; the cluster can then be put straight to use.

9.1 Deletion Steps

Cleanup steps:

  1. First remove the Pods that mount the PVCs; this may include individually created Pods, Deployments, or other higher-level resources (see the sketch after this list).
  2. Then delete the PVCs; after removing all PVCs created through the Ceph StorageClasses, check that the corresponding PVs have also been removed.
  3. Then delete the snapshots: kubectl delete volumesnapshot XXXXXXXX
  4. Then delete the pools created earlier, for both block storage and the filesystem:
kubectl delete -n rook-ceph cephblockpool replicapool
kubectl delete -n rook-ceph cephfilesystem myfs
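
A sketch of steps 1–3 for the demo resources created in this article (adjust the names to what you actually deployed):

kubectl delete deploy wordpress wordpress-mysql wordpress-restore web
kubectl delete statefulset web
kubectl delete svc wordpress wordpress-mysql nginx
kubectl delete pvc mysql-pv-claim wp-pv-claim nginx-share-pvc rbd-pvc-restore rbd-pvc-clone
kubectl delete pvc www-web-0 www-web-1 www-web-2
kubectl delete volumesnapshot rbd-pvc-snapshot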

  

  5. Delete the StorageClasses: kubectl delete sc rook-ceph-block rook-cephfs
  6. Delete the Ceph cluster: kubectl -n rook-ceph delete cephcluster rook-ceph
  7. Delete the Rook resources:
kubectl delete -f operator.yaml
kubectl delete -f common.yaml
kubectl delete -f crds.yaml

1. If the teardown gets stuck, refer to Rook's troubleshooting guide; the loop below removes the finalizers from any remaining ceph.rook.io resources:

 https://rook.io/docs/rook/v1.6/ceph-teardown.html#troubleshooting

for CRD in $(kubectl get crd -n rook-ceph | awk '/ceph.rook.io/ {print $1}'); do
  kubectl get -n rook-ceph "$CRD" -o name | \
    xargs -I {} kubectl patch {} --type merge -p '{"metadata":{"finalizers": [null]}}' -n rook-ceph
done

2. Clean up the data directories and disks on the hosts:

Reference: https://rook.io/docs/rook/v1.6/ceph-teardown.html#delete-the-data-on-hosts

Reference: https://rook.io/docs/rook/v1.6/ceph-teardown.html

kubectl -n rook-ceph patch configmap rook-ceph-mon-endpoints --type merge -p '{"metadata":{"finalizers": [null]}}'
kubectl -n rook-ceph patch secrets rook-ceph-mon --type merge -p '{"metadata":{"finalizers": [null]}}'

  
