Ceph troubleshooting notes
1. Deleting Ceph pools
Environment information
~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.4 (Maipo)
~]# ceph --version
ceph version 12.2.3 (2dab17a455c09584f2a85e6b10888337d1ec8949) luminous (stable)
The cluster uses CephFS.
Enable permission to delete pools
~]# ceph --show-config |grep mon_allow
mon_allow_pool_delete = false
Modify the configuration file on all three nodes:
~]# vi /etc/ceph/ceph.conf
[global]
mon_allow_pool_delete = true
~]# reboot #restarting only the mon and mgr daemons did not pick up the change, so the nodes were rebooted
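A reboot can often be avoided by injecting the option into the running monitors instead (an alternative not in the original notes; the value injected this way is not persistent, so the ceph.conf edit above is still needed for restarts):
~]# ceph tell mon.* injectargs '--mon-allow-pool-delete=true'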
Delete the pools
~]# ceph osd pool delete fs_metadatatest fs_metadatatest --yes-i-really-really-mean-it
Error EBUSY: pool 'fs_metadatatest' is in use by CephFS
Deleting the pool directly fails with the error above; the CephFS file system that uses it must be removed first. Solution:
~]# ceph fs ls
name: cephfs_01, metadata pool: fs_metadata, data pools: [fs_data ]
name: cephfs_02, metadata pool: fs_metadatatest, data pools: [fs_datatest ]
~]# ceph fs rm cephfs_02 --yes-i-really-mean-it
Error EINVAL: all MDS daemons must be inactive before removing filesystem
~]# systemctl stop ceph-mds@ceph_test1
~]# systemctl stop ceph-mds@ceph_test2
~]# systemctl stop ceph-mds@ceph_test3
~]# ceph fs rm cephfs_02 --yes-i-really-mean-it
~]# ceph fs ls #confirm the file system is gone
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data fs_datatest2 ]
~]# ceph osd lspools
1 cephfs_data,2 cephfs_metadata,4 fs_metadatatest,5 fs_datatest2,
~]# ceph osd pool delete fs_datatest fs_datatest --yes-i-really-really-mean-it
~]# ceph osd pool delete fs_metadatatest fs_metadatatest --yes-i-really-really-mean-it
~]# ceph fs set cephfs max_mds 2 #re-enable multi-active MDS; this applies to the sephora environment here, others can skip this command
~]# systemctl start ceph-mds@ceph_test1
~]# systemctl start ceph-mds@ceph_test2
~]# systemctl start ceph-mds@ceph_test3
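To confirm the MDS daemons have rejoined and only the intended file system remains, standard status commands can be used:
~]# ceph mds stat
~]# ceph fs ls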
~]# ceph osd lspools
1 cephfs_data,2 cephfs_metadata
2. Ceph client mount reports the wrong disk usage
~]# df -h #on the client
ceph-fuse 26G 14G 13G 52% /data/ceph
~]# ceph df #on the cluster side
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
91856M 44660M 47196M 51.38
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
cephfs_data 1 13669M 50.58 13354M 8354
cephfs_metadata 2 174M 1.29 13354M 242852
If the CephFS was built with only one data pool, df on the client mount reports that pool's usage (the POOLS figures above) rather than the cluster total.
The simplest fix is to add a second data pool; df then reports the cluster-wide GLOBAL usage.
~]# ceph osd pool create fs_datatest2 2
~]# ceph mds add_data_pool fs_datatest2
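Note: ceph mds add_data_pool is the older form of this command; the file-system-scoped equivalent (assuming the file system is named cephfs, as in the listings above) would be:
~]# ceph fs add_data_pool cephfs fs_datatest2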
~]# ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
91856M 44660M 47196M 51.38
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
cephfs_data 1 13669M 50.58 13354M 8354
cephfs_metadata 2 174M 1.29 13354M 242852
fs_datatest2 5 0 0 13354M 0
~]# df -h #on the client
ceph-fuse 90G 47G 44G 52% /data/ceph
End: common Ceph operations commands
List the available Ceph service units
~]# systemctl list-units --type=service|grep ceph
Check Ceph storage usage
~]# ceph df
Check cluster status
~]# ceph -s
Check OSD status
~]# ceph osd stat
Check monitor status
~]# ceph mon stat
Check monitor quorum status
~]# ceph quorum_status
Check MDS status
~]# ceph mds stat
Push the Ceph configuration file to the nodes
~]# ceph-deploy --overwrite-conf config push mon osd01 osd02 osd03
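Pushing the configuration only updates /etc/ceph/ceph.conf on the target nodes; most settings take effect only after the relevant daemons are restarted, for example via the systemd targets shipped with Ceph:
~]# systemctl restart ceph-mon.target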
List Ceph pools
~]# ceph osd pool ls
Check an OSD's weight
~]# ceph osd tree |grep osd.137
Adjust an OSD's CRUSH weight
~]# ceph osd crush reweight osd.137 1.5
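Changing a CRUSH weight triggers rebalancing; progress and per-OSD utilisation can be watched with, for example:
~]# ceph -s
~]# ceph osd df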
Reference on changing Ceph OSD data distribution:
https://www.jianshu.com/p/afb6277dbfd6