Ceph: separating HDD and SSD storage

1. Confirm that the disks have already been added to the cluster; Ceph recognizes the CLASS type automatically (a misdetected class can be corrected by hand, as shown after the listing below).

Each of the two disk types should have at least 3 disks: a pool is created with a replica count of 3 by default, so with fewer than 3 disks in a class, writes to the pool will misbehave. Alternatively, you can manually lower the pool's replica count (for example to 1), as sketched below.
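
For example, once the pool from step 3 exists, the replica counts could be lowered like this (a sketch for a small test cluster only; a single replica provides no redundancy):

ceph osd pool set ssdpool size 1        # keep only one copy of each object
ceph osd pool set ssdpool min_size 1    # allow I/O with a single copy available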

[root@node3 ~]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME      STATUS REWEIGHT PRI-AFF 
-1       0.05878 root default                           
-3       0.01959     host node1                         
 3   hdd 0.00980         osd.3      up  1.00000 1.00000 
 0   ssd 0.00980         osd.0      up  1.00000 1.00000 
-5       0.01959     host node2                         
 4   hdd 0.00980         osd.4      up  1.00000 1.00000 
 1   ssd 0.00980         osd.1      up  1.00000 1.00000 
-7       0.01959     host node3                         
 5   hdd 0.00980         osd.5      up  1.00000 1.00000 
 2   ssd 0.00980         osd.2      up  1.00000 1.00000 
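
If a class is detected incorrectly (for example an SSD behind a RAID controller reported as hdd), it can be cleared and reassigned manually; a sketch using osd.0 as the example:

ceph osd crush class ls                      # list the device classes known to the cluster
ceph osd crush rm-device-class osd.0         # remove the current class first
ceph osd crush set-device-class ssd osd.0    # then assign the desired class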

2. Create the CRUSH rules

[root@node3 ~]# ceph osd crush rule create-replicated rule-ssd default  host ssd 
[root@node3 ~]# ceph osd crush rule create-replicated rule-hdd default  host hdd 
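
The new rules can be checked before any pool uses them, for example:

ceph osd crush rule ls              # should now list rule-ssd and rule-hdd
ceph osd crush rule dump rule-ssd   # the take step should reference the ssd shadow root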

3. Create the pool

[root@node3 ~]# ceph osd pool create ssdpool 64 64 rule-ssd
pool 'ssdpool' created
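
Optionally confirm that the pool picked up the intended rule:

ceph osd pool get ssdpool crush_rule   # expected output: crush_rule: rule-ssd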

4. Create an object to test ssdpool

[root@node3 ~]# rados -p ssdpool ls
[root@node3 ~]# echo "hahah" >test.txt
[root@node3 ~]# rados -p ssdpool put test test.txt 
[root@node3 ~]# rados -p ssdpool ls
test

Check which OSDs the object maps to:

[root@node3 ~]# ceph osd map ssdpool test
osdmap e46 pool 'ssdpool' (1) object 'test' -> pg 1.40e8aab5 (1.35) -> up ([1,2,0], p1) acting ([1,2,0], p1)

The OSD set for this object ([1,2,0]) consists entirely of SSD OSDs, so the setup is verified. In effect, a CRUSH device class works like a label that identifies the disk type.
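
The per-class "shadow" hierarchy that CRUSH builds behind the scenes can be viewed directly, which makes the label analogy concrete:

ceph osd crush tree --show-shadow   # shows class-specific roots such as default~ssd and default~hdd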

5. Change the rule of a previously created pool

ceph osd pool set oldpool crush_rule rule-hdd
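
Switching an existing pool to a different rule migrates its data onto the matching OSDs, so expect backfill traffic; progress can be followed with the usual status commands:

ceph pg stat   # quick view of PGs still backfilling or remapped
ceph -s        # overall cluster status until it returns to HEALTH_OK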

Additional notes

Change the CRUSH rule: ceph osd pool set [pool name] crush_rule [rule name]
Change the default replica count: ceph osd pool set [pool name] size [count]
Change the minimum replica count (writes stop below this): ceph osd pool set [pool name] min_size [count]

Creating a storage pool that uses the rule-ssd rule: https://blog.csdn.net/wangyiyan315/article/details/124022377
Official documentation: https://docs.ceph.com/en/latest/rados/operations/crush-map/
