rook-ceph Deployment

Background

  • Without shared storage in a k8s environment, stateful applications cannot be hosted on the cluster, and things like application log collection are also hard to support.
  • Rook is an orchestration system that runs on top of k8s and provides storage systems; it supports NFS, GlusterFS, Ceph, and other backends.

rook-ceph Components

  • rook-operator
               the component through which Rook interacts with k8s
               only one instance per cluster
  • rook agent
               interacts with the rook operator and executes its commands
               one is started on every node
               different storage systems start different agents
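
  This split is easy to see once the cluster is up. A minimal check, assuming the default rook-ceph namespace and the stock resource names: the operator runs as a single Deployment, while the per-node pieces (rook-discover and the CSI plugins in this setup) run as DaemonSets.

    inspect operator and per-node components
    kubectl -n rook-ceph get deploy rook-ceph-operator   # exactly one operator per cluster
    kubectl -n rook-ceph get daemonset                    # rook-discover / CSI plugins, one pod per node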

 

Deployment

  • A k8s environment is required before deploying. This walkthrough uses three masters and three nodes; the three nodes provide the storage, so request one raw disk on each node, with no partitions and no mounts (see the disk check after these commands).
  • git clone -b v0.9.0 https://github.com/rook/rook.git
  • cd ./ceph/rook/cluster/examples/kubernetes/ceph
  • kubectl apply -f common.yaml
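  • Before applying anything else, it is worth confirming the disks really are raw. A minimal check, assuming the device shows up as vdc on each storage node (the name used in cluster.yaml below); adjust the device name for your environment:

    disk check
    # run on every storage node
    lsblk -f /dev/vdc          # FSTYPE and MOUNTPOINT columns should be empty
    wipefs --no-act /dev/vdc   # should report no existing filesystem/RAID signatures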

  • vim operator.yaml   // change the two ports below; the defaults are prone to conflicts

    port changes
    # Configure CSI CephFS grpc and liveness metrics port
    # CSI_CEPHFS_GRPC_METRICS_PORT: "9091"
    # CSI_CEPHFS_LIVENESS_METRICS_PORT: "9081"
    # Configure CSI RBD grpc and liveness metrics port
    CSI_RBD_GRPC_METRICS_PORT: "19090"     # default is 9090
    CSI_RBD_LIVENESS_METRICS_PORT: "19080"   # default is 9080
  • kubectl apply -f operator.yaml   // wait for the pods to start successfully (see the watch commands below)
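
    A sketch to follow the rollout instead of guessing; the deployment name matches the stock manifests:
    kubectl -n rook-ceph get pods -w
    # or block until the operator deployment reports Available
    kubectl -n rook-ceph wait deploy/rook-ceph-operator --for=condition=Available --timeout=300s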

  • vim cluster.yaml  // edit the settings below and describe the disks explicitly; if this is left out, the deployment may fail to pick up the disk information

    disk configuration
      # The option to automatically remove OSDs that are out and are safe to destroy.
      removeOSDsIfOutAndSafeToRemove: false
    #  priorityClassNames:
    #    all: rook-ceph-default-priority-class
    #    mon: rook-ceph-mon-priority-class
    #    osd: rook-ceph-osd-priority-class
    #    mgr: rook-ceph-mgr-priority-class
      storage: # cluster level storage configuration and selection
        useAllNodes: false
        useAllDevices: false
        #deviceFilter:
        config:
          metadataDevice:
          databaseSizeMB: "1024"
          journalSizeMB:  "1024"
        nodes:
        - name: "node-1"
          devices:
          - name: "vdc"
          config:
            storeType: bluestore
        - name: "node-2"
          devices:
          - name: "vdc"
          config:
            storeType: bluestore
        - name: "node-3"
          devices:
          - name: "vdc"
          config:
            storeType: bluestore
  • kubectl apply -f cluster.yaml   // wait for the pods to start successfully (see the OSD checks below)
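
    The OSDs take a few minutes to appear. A sketch to follow progress; the labels below are the ones the operator puts on its pods:
    # the prepare jobs should reach Completed, then one rook-ceph-osd-N pod per disk should be Running
    kubectl -n rook-ceph get pods -l app=rook-ceph-osd-prepare
    kubectl -n rook-ceph get pods -l app=rook-ceph-osd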

  • kubectl apply -f toolbox.yaml  // deploy the toolbox used for verification (see the health checks below)
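
    Once the toolbox pod is running, cluster health can be checked from inside it, for example:
    # app=rook-ceph-tools is the label set in the stock toolbox.yaml
    TOOLS_POD=$(kubectl -n rook-ceph get pod -l app=rook-ceph-tools -o jsonpath='{.items[0].metadata.name}')
    kubectl -n rook-ceph exec -it ${TOOLS_POD} -- ceph status
    kubectl -n rook-ceph exec -it ${TOOLS_POD} -- ceph osd status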

  • If all components have started successfully, the pod status looks like this:

    healthy pod status
    [root@master-2 ceph]# kubectl get pods -n rook-ceph -o wide
    NAME                                               READY   STATUS      RESTARTS   AGE   IP               NODE       NOMINATED NODE   READINESS GATES
    csi-cephfsplugin-4jxst                             3/3     Running     0          34d   172.31.129.239   node-3     <none>           <none>
    csi-cephfsplugin-6dl55                             3/3     Running     0          34d   172.31.129.37    master-1   <none>           <none>
    csi-cephfsplugin-gnzcr                             3/3     Running     0          34d   172.31.129.150   master-3   <none>           <none>
    csi-cephfsplugin-lt2hj                             3/3     Running     0          34d   172.31.129.226   node-2     <none>           <none>
    csi-cephfsplugin-provisioner-5f49d658bf-856wh      6/6     Running     266        23d   10.233.106.197   master-1   <none>           <none>
    csi-cephfsplugin-provisioner-5f49d658bf-b885h      6/6     Running     368        34d   10.233.113.136   master-3   <none>           <none>
    csi-cephfsplugin-wd8x5                             3/3     Running     0          34d   172.31.129.92    master-2   <none>           <none>
    csi-cephfsplugin-xz28l                             3/3     Running     0          34d   172.31.129.167   node-1     <none>           <none>
    csi-rbdplugin-4lz2d                                3/3     Running     0          34d   172.31.129.239   node-3     <none>           <none>
    csi-rbdplugin-5gw74                                3/3     Running     0          34d   172.31.129.92    master-2   <none>           <none>
    csi-rbdplugin-65bst                                3/3     Running     0          34d   172.31.129.226   node-2     <none>           <none>
    csi-rbdplugin-kmx44                                3/3     Running     0          34d   172.31.129.167   node-1     <none>           <none>
    csi-rbdplugin-provisioner-fcdbb7c7c-gtg9q          6/6     Running     280        34d   10.233.113.135   master-3   <none>           <none>
    csi-rbdplugin-provisioner-fcdbb7c7c-x5hfr          6/6     Running     358        34d   10.233.106.168   master-1   <none>           <none>
    csi-rbdplugin-qlx97                                3/3     Running     0          34d   172.31.129.150   master-3   <none>           <none>
    csi-rbdplugin-zr4gv                                3/3     Running     0          34d   172.31.129.37    master-1   <none>           <none>
    rook-ceph-crashcollector-node-1-fb888b4d9-7jnxd    1/1     Running     0          34d   10.233.112.192   node-1     <none>           <none>
    rook-ceph-crashcollector-node-2-9f5c76cc8-xb7dm    1/1     Running     0          34d   10.233.69.230    node-2     <none>           <none>
    rook-ceph-crashcollector-node-3-6b954569f5-6nm6j   1/1     Running     0          34d   10.233.109.192   node-3     <none>           <none>
    rook-ceph-mgr-a-555b699477-97mnp                   1/1     Running     0          34d   10.233.112.193   node-1     <none>           <none>
    rook-ceph-mon-a-746b6db5dd-l5pj7                   1/1     Running     0          34d   10.233.112.191   node-1     <none>           <none>
    rook-ceph-mon-b-85d56b7fb9-5dbjg                   1/1     Running     0          34d   10.233.69.228    node-2     <none>           <none>
    rook-ceph-mon-c-bf69fd456-r55rd                    1/1     Running     1          34d   10.233.109.190   node-3     <none>           <none>
    rook-ceph-operator-7fc86dd6b8-m82z8                1/1     Running     0          34d   10.233.106.165   master-1   <none>           <none>
    rook-ceph-osd-0-766987689d-44sjq                   1/1     Running     0          34d   10.233.69.231    node-2     <none>           <none>
    rook-ceph-osd-1-57ddd66cc5-88dh4                   1/1     Running     0          34d   10.233.112.195   node-1     <none>           <none>
    rook-ceph-osd-2-759bc59c76-4gkcr                   1/1     Running     0          34d   10.233.109.193   node-3     <none>           <none>
    rook-ceph-osd-prepare-node-1-q7qr7                 0/1     Completed   0          8h    10.233.112.140   node-1     <none>           <none>
    rook-ceph-osd-prepare-node-2-tw6tg                 0/1     Completed   0          8h    10.233.69.117    node-2     <none>           <none>
    rook-ceph-osd-prepare-node-3-2ks52                 0/1     Completed   0          8h    10.233.109.78    node-3     <none>           <none>
    rook-ceph-tools-56868d58c6-qkxmn                   1/1     Running     0          33d   10.233.106.188   master-1   <none>           <none>
    rook-discover-cxvqx                                1/1     Running     0          34d   10.233.109.188   node-3     <none>           <none>
    rook-discover-fj86l                                1/1     Running     0          34d   10.233.110.137   master-2   <none>           <none>
    rook-discover-gf68b                                1/1     Running     0          34d   10.233.112.188   node-1     <none>           <none>
    rook-discover-jqrgs                                1/1     Running     0          34d   10.233.106.166   master-1   <none>           <none>
    rook-discover-kh9ll                                1/1     Running     0          34d   10.233.69.226    node-2     <none>           <none>
    rook-discover-ps4pb                                1/1     Running     0          34d   10.233.113.134   master-3   <none>           <none>
  • kubectl edit svc rook-ceph-mgr-dashboard -n rook-ceph  // expose the Ceph dashboard (a non-interactive alternative is sketched below)
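
    The same change can be made without an interactive edit; a sketch that switches the dashboard Service to NodePort (use whatever exposure method fits your environment):
    kubectl -n rook-ceph patch svc rook-ceph-mgr-dashboard -p '{"spec":{"type":"NodePort"}}'
    kubectl -n rook-ceph get svc rook-ceph-mgr-dashboard   # note the allocated NodePort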

  • Ciphertext=$(kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}")
    Pass=$(echo ${Ciphertext} | base64 --decode)
    echo ${Pass}    // these three lines retrieve the dashboard password

  • Log in to the dashboard page and check the cluster status (the password above goes with the default admin user).


Verification

  • Create a pool and a StorageClass; block storage is used to provide volumes here (see the provisioning test after the manifest).

    storageclass
    apiVersion: ceph.rook.io/v1
    kind: CephBlockPool
    metadata:
      name: replicapool
      namespace: rook-ceph
    spec:
      failureDomain: host
      replicated:
        size: 3
    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: rook-ceph-block
    provisioner: rook-ceph.rbd.csi.ceph.com
    parameters:
      clusterID: rook-ceph
      pool: replicapool
      imageFormat: "2"
      imageFeatures: layering
      csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
      csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
      csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
      csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
      csi.storage.k8s.io/fstype: ext4
    reclaimPolicy: Delete
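
    A minimal sketch to apply the manifest above and confirm dynamic provisioning works; the file names pool-and-storageclass.yaml and test-pvc.yaml, and the PVC name test-pvc, are assumptions for this example:
    kubectl apply -f pool-and-storageclass.yaml
    kubectl apply -f test-pvc.yaml   # test-pvc.yaml contains the claim below
    kubectl get pvc test-pvc         # should reach Bound shortly
    kubectl delete -f test-pvc.yaml  # clean up the test claim

    test-pvc.yaml
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test-pvc
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: rook-ceph-block
      resources:
        requests:
          storage: 1Gi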

     

  • The rook project ships a verification example that deploys MySQL and WordPress
  • cd ceph/rook/cluster/examples/kubernetes
  • kubectl apply -f mysql.yaml
  • kubectl apply -f wordpress.yaml
  • If WordPress opens normally, the deployment is basically working (see the quick checks below)
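
    Two quick checks before opening the site in a browser; the PVC and Service names come from the stock mysql.yaml and wordpress.yaml, so adjust if yours differ:
    kubectl get pvc            # both claims should be Bound to volumes from rook-ceph-block
    kubectl get svc wordpress  # grab the external/NodePort address to open in a browser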

 

Uninstall

    • rook-ceph must be removed completely; otherwise a second installation will hit leftover, inconsistent data
    • kubectl delete -f wordpress.yaml
    • kubectl delete -f mysql.yaml
    • kubectl delete -n rook-ceph cephblockpool replicapool   // delete the pool created above
    • kubectl delete storageclass rook-ceph-block   // delete the StorageClass created above
    • kubectl delete -f cluster.yaml
    • kubectl delete -f operator.yaml
    • kubectl delete -f common.yaml
    • Delete the data on every node: run the script below on each node, replacing the block device with the local one

      per-node cleanup script (change the block device)
      #!/usr/bin/env bash
      # remove rook's state directory
      rm -rf /var/lib/rook/
      # the raw disk that was handed to ceph on this node
      DISK="/dev/sdb"
      # wipe all partition-table and GPT metadata from the disk
      sgdisk --zap-all $DISK
      # remove any leftover ceph device-mapper entries and their device nodes
      ls /dev/mapper/ceph-* | xargs -I% -- dmsetup remove %
      rm -rf /dev/ceph-*
    • If the steps above still leave resources behind, the Ceph CRDs are usually stuck on finalizers; the loop below clears them (see the final check after it)

      remove stuck CRD finalizers
      for CRD in $(kubectl get crd -n rook-ceph | awk '/ceph.rook.io/ {print $1}'); do
        kubectl patch crd -n rook-ceph $CRD --type merge -p '{"metadata":{"finalizers": [null]}}'
      done
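
      A final sanity check before reinstalling (purely a verification sketch):
      kubectl get ns rook-ceph                # should eventually report NotFound
      kubectl get crd | grep ceph.rook.io     # should print nothing
      lsblk -f /dev/sdb                       # on each node: no ceph/LVM signatures left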