Ceph: 100.000% pgs not active after creating a pool
Original post: https://www.cnblogs.com/zyxnhr/p/10553717.html
1. Before creating a pool
[root@cluster9 ceph-cluster]# ceph -s
  cluster:
    id:     d81b3ce4-bcbc-4b43-870e-430950652315
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum cluster9
    mgr: cluster9(active)
    osd: 3 osds: 3 up, 3 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0B
    usage:   3.06GiB used, 10.9TiB / 10.9TiB avail
    pgs:
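The output above does not show how the OSDs are laid out, but the CRUSH change in step 3 implies that all three OSDs sit under the single host cluster9. A quick check, not part of the original session, would be:

# Show the CRUSH tree: host cluster9 should contain osd.0, osd.1 and osd.2
ceph osd tree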
2. After creating a pool
[root@cluster9 ceph-cluster]# ceph -s
  cluster:
    id:     d81b3ce4-bcbc-4b43-870e-430950652315
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum cluster9
    mgr: cluster9(active)
    osd: 3 osds: 3 up, 3 in

  data:
    pools:   1 pools, 128 pgs
    objects: 0 objects, 0B
    usage:   3.06GiB used, 10.9TiB / 10.9TiB avail
    pgs:     100.000% pgs not active
             128 undersized+peered
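The PGs stay undersized+peered because the default CRUSH rule (step chooseleaf firstn 0 type host) wants each replica on a different host, while this cluster has only the single host cluster9, so a pool with the default replica count of 3 can never become active. For reference, a pool with 128 PGs like the one above could have been created with something along these lines; the pool name testpool is an assumption, the original does not show the create command:

# Create a replicated pool with 128 placement groups (pg_num and pgp_num)
ceph osd pool create testpool 128 128
# Check the replica count; the default of 3 is what the host-level rule cannot satisfy here
ceph osd pool get testpool size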
3. Change the CRUSH failure domain from host to osd
[root@cluster9 ceph-cluster]# cd /etc/ceph/
[root@cluster9 ceph]# ceph osd getcrushmap -o /etc/ceph/crushmap
18
[root@cluster9 ceph]# crushtool -d /etc/ceph/crushmap -o /etc/ceph/crushmap.txt
[root@cluster9 ceph]# sed -i 's/step chooseleaf firstn 0 type host/step chooseleaf firstn 0 type osd/' /etc/ceph/crushmap.txt
[root@cluster9 ceph]# grep 'step chooseleaf' /etc/ceph/crushmap.txt
        step chooseleaf firstn 0 type osd
[root@cluster9 ceph]# crushtool -c /etc/ceph/crushmap.txt -o /etc/ceph/crushmap-new
[root@cluster9 ceph]# ceph osd setcrushmap -i /etc/ceph/crushmap-new
19
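Before (or after) injecting the new map, it can be sanity-checked offline with crushtool's test mode. This step is not in the original session, and rule number 0 is an assumption for the default replicated rule:

# Dry-run the edited map: show which OSDs rule 0 would pick for a few sample inputs
crushtool -i /etc/ceph/crushmap-new --test --show-mappings --rule 0 --num-rep 3 --min-x 0 --max-x 5
# After injecting, confirm the live rule now uses the osd failure domain
ceph osd crush rule dump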
4. Check the Ceph status again
[root@cluster9 ceph]# ceph -s
  cluster:
    id:     d81b3ce4-bcbc-4b43-870e-430950652315
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum cluster9
    mgr: cluster9(active)
    osd: 3 osds: 3 up, 3 in

  data:
    pools:   1 pools, 128 pgs
    objects: 0 objects, 0B
    usage:   3.06GiB used, 10.9TiB / 10.9TiB avail
    pgs:     128 active+clean
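As a side note not covered in the original post: on a fresh single-node test cluster the same outcome can be reached without decompiling and re-injecting the CRUSH map, by telling Ceph up front to replicate across OSDs instead of hosts. This is an alternative approach, sketched here as a config fragment:

# In the [global] section of ceph.conf, set before the monitors build the initial CRUSH map:
# (0 = osd failure domain; the default of 1 corresponds to host)
osd crush chooseleaf type = 0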