14 CRD Resources in Detail (Repost)

CRD Resources in Detail

Overview of the CRDs

During cluster deployment we installed a large number of CRDs, which greatly extend the capabilities of the Kubernetes cluster. Each CRD takes on a different role and function. The default CRD parameters are generally good enough, but in a production environment they should be tuned to the actual workload.

[root@m1 rbd]# kubectl get crds
NAME                                             CREATED AT
alertmanagers.monitoring.coreos.com              2022-11-27T03:47:44Z
cephblockpools.ceph.rook.io                      2022-11-24T03:24:04Z
cephclients.ceph.rook.io                         2022-11-24T03:24:04Z
cephclusters.ceph.rook.io                        2022-11-24T03:24:04Z
cephfilesystems.ceph.rook.io                     2022-11-24T03:24:04Z
cephnfses.ceph.rook.io                           2022-11-24T03:24:04Z
cephobjectrealms.ceph.rook.io                    2022-11-24T03:24:04Z
cephobjectstores.ceph.rook.io                    2022-11-24T03:24:04Z
cephobjectstoreusers.ceph.rook.io                2022-11-24T03:24:04Z
cephobjectzonegroups.ceph.rook.io                2022-11-24T03:24:04Z
cephobjectzones.ceph.rook.io                     2022-11-24T03:24:04Z
cephrbdmirrors.ceph.rook.io                      2022-11-24T03:24:04Z
objectbucketclaims.objectbucket.io               2022-11-24T03:24:04Z
objectbuckets.objectbucket.io                    2022-11-24T03:24:04Z
podmonitors.monitoring.coreos.com                2022-11-27T03:47:44Z
prometheuses.monitoring.coreos.com               2022-11-27T03:47:44Z
prometheusrules.monitoring.coreos.com            2022-11-27T03:47:44Z
servicemonitors.monitoring.coreos.com            2022-11-27T03:47:44Z
thanosrulers.monitoring.coreos.com               2022-11-27T03:47:45Z
volumes.rook.io                                  2022-11-24T03:24:04Z
volumesnapshotclasses.snapshot.storage.k8s.io    2022-12-02T04:39:20Z
volumesnapshotcontents.snapshot.storage.k8s.io   2022-12-02T04:39:20Z
volumesnapshots.snapshot.storage.k8s.io          2022-12-02T04:39:21Z
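
To inspect which fields a particular CRD accepts, kubectl explain can be run against the installed schema. A minimal sketch (assuming the Rook CRDs publish an OpenAPI validation schema, which recent releases do):

# Show the spec fields exposed by the CephBlockPool CRD
kubectl explain cephblockpool.spec

# Dump the full CRD definition, including its validation schema
kubectl get crd cephblockpools.ceph.rook.io -o yaml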

Ceph Cluster CRD

Refer to the official documentation.

Ceph Pool CRD

Ceph pools come in two modes:

  • Replicated
  • Erasure coded

Ceph natively provides an interface for creating pools, for example:

[root@m1 rbd]# ceph osd pool create testpool1 16 16 replicated
pool 'testpool1' created
[root@m1 rbd]# ceph osd pool create testpool2 16 16 erasure
pool 'testpool2' created

[root@m1 rbd]# ceph osd pool get testpool1 crush_rule
crush_rule: replicated_rule
[root@m1 rbd]# ceph osd pool get testpool2 crush_rule
crush_rule: erasure-code

Cloud-native replicated pool

apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3
  deviceClass: hdd
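
Applying the manifest and verifying the result could look like the following sketch (it assumes the YAML above is saved as replicapool.yaml and that the Rook toolbox deployment rook-ceph-tools is running):

kubectl apply -f replicapool.yaml
kubectl -n rook-ceph get cephblockpool replicapool

# From the toolbox, confirm the backing Ceph pool and its replication size
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd pool ls detail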

Cloud-native erasure-coded pool

apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: ecpool
  namespace: rook-ceph
spec:
  failureDomain: osd
  erasureCoded:
    dataChunks: 2
    codingChunks: 1
  deviceClass: hdd
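
Note that a 2+1 erasure-coded layout needs at least three failure domains (OSDs, given the failureDomain above) to place all chunks. Once the pool exists, its erasure-code profile can be checked from the toolbox, for example (a sketch, reusing the toolbox deployment name assumed earlier):

# Show which erasure-code profile the pool uses, then list all profiles
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd pool get ecpool erasure_code_profile
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd erasure-code-profile ls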

Object Store CRD

Refer to the official documentation.

apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store
  namespace: rook-ceph
spec:
  metadataPool:
    failureDomain: host
    replicated:
      size: 3
  dataPool:
    failureDomain: host
    erasureCoded:
      dataChunks: 2
      codingChunks: 1
  preservePoolsOnDelete: true
  gateway:
    type: s3
    sslCertificateRef:
    port: 80
    # securePort: 443
    instances: 1
    # A key/value list of annotations
    annotations:
    #  key: value
    placement:
    #  nodeAffinity:
    #    requiredDuringSchedulingIgnoredDuringExecution:
    #      nodeSelectorTerms:
    #      - matchExpressions:
    #        - key: role
    #          operator: In
    #          values:
    #          - rgw-node
    #  tolerations:
    #  - key: rgw-node
    #    operator: Exists
    #  podAffinity:
    #  podAntiAffinity:
    #  topologySpreadConstraints:
    resources:
    #  limits:
    #    cpu: "500m"
    #    memory: "1024Mi"
    #  requests:
    #    cpu: "500m"
    #    memory: "1024Mi"
  #zone:
    #name: zone-a
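
After the CephObjectStore is applied, Rook creates the metadata and data pools and starts the RGW gateway. A quick check might look like this (a sketch; app=rook-ceph-rgw is the label Rook normally attaches to the gateway pods):

kubectl -n rook-ceph get cephobjectstore my-store
# The RGW gateway pods serving the store
kubectl -n rook-ceph get pods -l app=rook-ceph-rgw
# The service exposing port 80 as configured above
kubectl -n rook-ceph get svc -l app=rook-ceph-rgw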

Object Store Bucket

Refer to the official documentation.

Ceph natively provides S3 and Swift interfaces for object storage. Before data can be stored, a bucket is required; a bucket can be created with s3cmd, for example:

[root@m1 rbd]# s3cmd mb s3://buckettest
Bucket 's3://buckettest/' created
[root@m1 rbd]# s3cmd ls
2022-12-02 08:12  s3://buckettest
2022-11-26 02:57  s3://test
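
With the bucket in place, objects can be uploaded and listed the same way; a minimal sketch (the local file is just an example):

# Upload a local file into the bucket, then list its contents
s3cmd put /etc/hosts s3://buckettest/
s3cmd ls s3://buckettest/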

Rook provides two cloud-native ways to create a bucket:

  • Object Bucket Claim
  • Object Bucket

To use an OBC, a StorageClass must be defined first; it drives the bucket creation. When a bucket is provisioned, a ConfigMap exposes the endpoint used to access the bucket, and a Secret provides the access credentials. The StorageClass is defined as follows:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-bucket
  labels:
    aws-s3/object: ""
provisioner: rook-ceph.ceph.rook.io/bucket
parameters:
  objectStoreName: my-store
  objectStoreNamespace: rook-ceph
  region: us-west-1
  bucketName: ceph-bucket
reclaimPolicy: Delete

Create an OBC:

apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: ceph-bucket
  namespace: rook-ceph
spec:
  bucketName:
  generateBucketName: photo-booth
  storageClassName: rook-ceph-bucket
  additionalConfig:
    maxObjects: "1000"
    maxSize: "2G"
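
Once the claim is bound, Rook creates a ConfigMap and a Secret with the same name as the OBC (ceph-bucket here); a Pod can consume them through envFrom, roughly as in this sketch (the Pod name, image and container are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: obc-demo
  namespace: rook-ceph
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      envFrom:
        # Provides BUCKET_HOST / BUCKET_NAME / BUCKET_PORT
        - configMapRef:
            name: ceph-bucket
        # Provides AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
        - secretRef:
            name: ceph-bucket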

Object Store User CRD

Refer to the official documentation.

Natively, a user can be created with the radosgw-admin command:

[root@m1 rbd]# radosgw-admin user create --uid test --display-name "test"
{
    "user_id": "test",
    "display_name": "test",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "subusers": [],
    "keys": [
        {
            "user": "test",
            "access_key": "JT0G3PRF9YV1J4263AHE",
            "secret_key": "wROVQBPDBBUGUZvy8aPKMnw7LZOawcFE7MKXgMF7"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "default_storage_class": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}

The cloud-native definition is:

apiVersion: ceph.rook.io/v1
kind: CephObjectStoreUser
metadata:
  name: my-user
  namespace: rook-ceph
spec:
  store: my-store
  displayName: my-display-name
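
After the CephObjectStoreUser is reconciled, Rook stores the generated S3 credentials in a Secret, typically named rook-ceph-object-user-<store>-<user>. Reading them back might look like this (a sketch; the exact Secret name and data keys depend on the Rook version):

kubectl -n rook-ceph get secret rook-ceph-object-user-my-store-my-user -o yaml
# Decode the access key (SecretKey works the same way)
kubectl -n rook-ceph get secret rook-ceph-object-user-my-store-my-user \
  -o jsonpath='{.data.AccessKey}' | base64 -d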

File Storage CRD

Refer to the official documentation.

The definition is as follows:

apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph
spec:
  metadataPool:
    failureDomain: host
    replicated:
      size: 3
  dataPools:
    - failureDomain: host
      replicated:
        size: 3
  preserveFilesystemOnDelete: true
  metadataServer:
    activeCount: 1
    activeStandby: true
    # A key/value list of annotations
    annotations:
    #  key: value
    placement:
    #  nodeAffinity:
    #    requiredDuringSchedulingIgnoredDuringExecution:
    #      nodeSelectorTerms:
    #      - matchExpressions:
    #        - key: role
    #          operator: In
    #          values:
    #          - mds-node
    #  tolerations:
    #  - key: mds-node
    #    operator: Exists
    #  podAffinity:
    #  podAntiAffinity:
    #  topologySpreadConstraints:
    resources:
    #  limits:
    #    cpu: "500m"
    #    memory: "1024Mi"
    #  requests:
    #    cpu: "500m"
    #    memory: "1024Mi"
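
After the CephFilesystem is applied, Rook starts the MDS daemons and creates the metadata and data pools; a quick verification could look like this (a sketch, reusing the toolbox deployment assumed earlier):

kubectl -n rook-ceph get cephfilesystem myfs
# The active/standby MDS pods for the filesystem
kubectl -n rook-ceph get pods -l app=rook-ceph-mds
# Inspect the filesystem from the Ceph side
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph fs status myfs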

Client CRD

Ceph's native interface for creating a client user:

[root@m1 rbd]# ceph auth get-or-create client.test mon 'profile rbd' osd 'profile rbd pool=testpool1'
[client.test]
        key = AQDJtIlja+cCBhAA4h9A+jKM/pQyCS7/oLQunw==

[root@m1 rbd]# ceph auth list | grep -A 5 test
installed auth entries:

client.test
        key: AQDJtIlja+cCBhAA4h9A+jKM/pQyCS7/oLQunw==
        caps: [mon] profile rbd
        caps: [osd] profile rbd pool=testpool1
mgr.a
        key: AQAJ5H5jFdajFBAAOBTLazUJx6pOX8FPuwO6XA==
        caps: [mds] allow *
        caps: [mon] allow profile mgr
        caps: [osd] allow *

The cloud-native way to create one:

apiVersion: ceph.rook.io/v1
kind: CephClient
metadata:
  name: glance
  namespace: rook-ceph
spec:
  caps:
    mon: 'profile rbd'
    osd: 'profile rbd pool=images'
---
apiVersion: ceph.rook.io/v1
kind: CephClient
metadata:
  name: cinder
  namespace: rook-ceph
spec:
  caps:
    mon: 'profile rbd'
    osd: 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd-read-only pool=images'
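
Once reconciled, Rook writes the generated key into a Secret in the cluster namespace, conventionally named rook-ceph-client-<name>. Reading it back might look like this (a sketch; the Secret name and data key depend on the Rook version):

kubectl -n rook-ceph get secret rook-ceph-client-glance -o yaml
# Decode the key (the data key usually matches the client name)
kubectl -n rook-ceph get secret rook-ceph-client-glance \
  -o jsonpath='{.data.glance}' | base64 -d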