Ceph failure-domain configuration (editing the CRUSH map)

Add or move an OSD:

ceph osd crush set {name} {weight} root={root} [{bucket-type}={bucket-name} ...]
ceph osd crush set osd.0 1.0 root=default datacenter=dc1 room=room1 row=foo rack=bar host=foo-bar-1

Adjust an OSD's CRUSH weight:

ceph osd crush reweight {name} {weight}
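
For example, assuming an OSD named osd.0 whose CRUSH weight should become 0.5 (both values are placeholders):

ceph osd crush reweight osd.0 0.5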

Remove an OSD:

ceph osd crush remove {name}
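
For example, assuming the item to drop from the CRUSH map is osd.3 (a placeholder name):

ceph osd crush remove osd.3

Note that this only removes the item from the CRUSH map; a decommissioned OSD still has to be deleted from the cluster map with ceph osd rm.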

Add a bucket:

ceph osd crush add-bucket {bucket-name} {bucket-type}
ceph osd crush add-bucket rack12 rack

Rename a bucket:

ceph osd crush rename-bucket <srcname> <dstname>
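
For example, assuming an existing rack bucket named rack12 should become rack-a1 (illustrative names):

ceph osd crush rename-bucket rack12 rack-a1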

Move a bucket:

ceph osd crush move {bucket-name} {bucket-type}={bucket-name}, [...]
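
For example, assuming the rack12 bucket from the add-bucket step should be placed under the default root:

ceph osd crush move rack12 root=default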

Remove a bucket:

ceph osd crush remove {bucket-name}
ceph osd crush remove rack12

Create a replicated pool rule:

ceph osd crush rule create-replicated {name} {root} {failure-domain-type} [{class}]
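
For example, a sketch that creates a rule named replicated_hdd (an illustrative name) which spreads replicas across hosts under the default root and restricts placement to the hdd device class:

ceph osd crush rule create-replicated replicated_hdd default host hdd

A pool can then be switched to the new rule with ceph osd pool set {pool-name} crush_rule replicated_hdd.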

Create an EC pool rule (start by listing and inspecting the existing erasure-code profiles):

ceph osd erasure-code-profile ls
ceph osd erasure-code-profile get {profile-name}
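
For example, the built-in profile named default can be inspected with:

ceph osd erasure-code-profile get default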

Create an EC profile:

ceph osd erasure-code-profile set {name} [{key}={value} ...] [--force]
# Creates erasure-code profile {name} from the given key=value pairs. Add --force at the end to override an existing profile (VERY DANGEROUS).

The erasure code profile properties of interest are:

        crush-root: the name of the CRUSH node to place data under [default: default].

        crush-failure-domain: the CRUSH type to separate erasure-coded shards across [default: host].

        crush-device-class: the device class to place data on [default: none, meaning all devices are used].

        k and m (and, for the lrc plugin, l): these determine the number of erasure code shards, affecting the resulting CRUSH rule.
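
Pulling these properties together, an illustrative invocation (the profile name myprofile and the values k=4, m=2 are assumptions, not recommendations):

ceph osd erasure-code-profile set myprofile k=4 m=2 crush-root=default crush-failure-domain=host crush-device-class=hdd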

Once a profile is defined, you can create a CRUSH rule with:

ceph osd crush rule create-erasure {name} {profile-name}
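
For example, assuming the profile created above is named myprofile, an EC rule could be created as:

ceph osd crush rule create-erasure ec_myprofile myprofile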

Delete a rule:

ceph osd crush rule rm {rule-name}
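
Existing rules can be listed with ceph osd crush rule ls. For example, to drop the illustrative rule from the previous step:

ceph osd crush rule rm ec_myprofile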


An example (ceph osd tree output):

ID   CLASS  WEIGHT   TYPE NAME               STATUS  REWEIGHT  PRI-AFF
 -6               0  root rgw                                         
 -8               0      rack rgw-rack0                               
-10               0          host rgw-node0                           
-20               0      rack rgw-rack1                               
-18               0          host rgw-node1                           
-25               0      rack rgw-rack2                               
-26               0          host rgw-node2                           
 -5               0  root rbd                                         
 -7               0      rack rbd-rack0                               
 -9               0          host rbd-node0                           
-19               0      rack rbd-rack1                               
-17               0          host rbd-node1                           
 -1         0.02899  root default                                     
 -3         0.02899      host ceph-node-0                             
  0    hdd  0.00999          osd.0               up   1.00000  1.00000
  1    hdd  0.00999          osd.1               up   1.00000  1.00000
  2    hdd  0.00999          osd.2               up   1.00000  1.00000
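
A sketch of the commands that could have produced the empty root rgw subtree above (bucket names are taken from the output; the weights show 0 because no OSDs have been placed under these buckets yet):

ceph osd crush add-bucket rgw root
ceph osd crush add-bucket rgw-rack0 rack
ceph osd crush move rgw-rack0 root=rgw
ceph osd crush add-bucket rgw-node0 host
ceph osd crush move rgw-node0 root=rgw rack=rgw-rack0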
