Ceph Shutdown and Restart
0. Change Log
1. Summary
2. Environment
3. Implementation
(1) Implementation
3.1.1 Pre-implementation checks
[root@cephtest001 ~]# su - cephadmin
Last login: Fri Feb 19 15:01:46 CST 2021 on pts/0
[cephadmin@cephtest001 ~]$ ceph -s
  cluster:
    id:     6cd05235-66dd-4929-b697-1562d308d5c3
    health: HEALTH_WARN
            1 pools have many more objects per pg than average

  services:
    mon: 3 daemons, quorum cephtest001,cephtest002,cephtest004 (age 12d)
    mgr: cephtest001(active, since 9w), standbys: cephtest002, cephtest004
    osd: 13 osds: 13 up (since 13d), 13 in (since 3w); 1 remapped pgs
    rgw: 1 daemon active (cephtest004)

  task status:

  data:
    pools:   8 pools, 400 pgs
    objects: 24.34k objects, 167 GiB
    usage:   518 GiB used, 26 TiB / 27 TiB avail
    pgs:     30/73014 objects misplaced (0.041%)
             399 active+clean
             1 active+clean+remapped

  io:
    client:   89 KiB/s rd, 99 op/s rd, 0 op/s wr
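The output above is the baseline the operator accepted before the maintenance window (one pool with a high object-per-PG ratio and one active+clean+remapped PG). As a minimal sketch of extra read-only checks worth running at this point (commands only, output omitted):

ceph health detail   # expands each HEALTH_WARN item
ceph osd stat        # all OSDs should be up and in
ceph pg stat         # ideally every PG active+clean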
3.1.2 Stop Ceph OSD cluster traffic (run on the deploy node)
These flags keep the cluster quiet during the maintenance window: noout and nodown stop OSDs from being marked out or down while hosts are powered off; norecover, norebalance, and nobackfill suppress data movement; pause stops all client reads and writes.
[cephadmin@cephtest001 ~]$ ceph osd set noout
noout is set
[cephadmin@cephtest001 ~]$ ceph osd set norecover
norecover is set
[cephadmin@cephtest001 ~]$ ceph osd set norebalance
norebalance is set
[cephadmin@cephtest001 ~]$ ceph osd set nobackfill
nobackfill is set
[cephadmin@cephtest001 ~]$ ceph osd set nodown
nodown is set
[cephadmin@cephtest001 ~]$ ceph osd set pause
pauserd,pausewr is set
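The six commands above can also be issued in one pass; a minimal shell sketch over the same flag list:

for flag in noout norecover norebalance nobackfill nodown pause; do
    ceph osd set "$flag"    # same effect as the individual commands above
done

Substituting unset for set reverses all six flags at once, matching the flag-by-flag recovery shown later.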
Verify that the flags now appear in the cluster status:
[cephadmin@cephtest001 ~]$ ceph -s
  cluster:
    id:     6cd05235-66dd-4929-b697-1562d308d5c3
    health: HEALTH_WARN
            pauserd,pausewr,nodown,noout,nobackfill,norebalance,norecover flag(s) set
            1 pools have many more objects per pg than average

  services:
    mon: 3 daemons, quorum cephtest001,cephtest002,cephtest004 (age 12d)
    mgr: cephtest001(active, since 9w), standbys: cephtest002, cephtest004
    osd: 13 osds: 13 up (since 13d), 13 in (since 3w); 1 remapped pgs
         flags pauserd,pausewr,nodown,noout,nobackfill,norebalance,norecover
    rgw: 1 daemon active (cephtest004)

  task status:

  data:
    pools:   8 pools, 400 pgs
    objects: 24.34k objects, 167 GiB
    usage:   518 GiB used, 26 TiB / 27 TiB avail
    pgs:     30/73014 objects misplaced (0.041%)
             399 active+clean
             1 active+clean+remapped
[cephadmin@cephtest001 ~]$
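The original post does not show the actual power-off and power-on between this check and the recovery step. As a sketch only, assuming the daemons are managed by systemd (the standard packaging on EL7-era Ceph), a common ordering is to stop client-facing services first and bring them back last:

systemctl stop ceph-radosgw.target   # on the RGW node: stop client traffic first
systemctl stop ceph-osd.target       # on each OSD node
systemctl stop ceph-mgr.target       # on each MGR node
systemctl stop ceph-mon.target       # on each MON node: monitors go down last
...power off, perform maintenance, power the nodes back on...
systemctl start ceph.target          # on each node after boot; pulls up mon/mgr/osd/rgw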
Recovery (unset the flags once the nodes are back online):
[cephadmin@cephtest001 ~]$ ceph osd unset noout
noout is unset
[cephadmin@cephtest001 ~]$ ceph osd unset norecover
norecover is unset
[cephadmin@cephtest001 ~]$ ceph osd unset norebalance
norebalance is unset
[cephadmin@cephtest001 ~]$ ceph osd unset nobackfill
nobackfill is unset
[cephadmin@cephtest001 ~]$ ceph osd unset nodown
nodown is unset
[cephadmin@cephtest001 ~]$ ceph osd unset pause
pauserd,pausewr is unset
[cephadmin@cephtest001 ~]$
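The flag state can also be read straight from the OSD map before rerunning ceph -s; a quick sketch:

ceph osd dump | grep flags   # the flags line should no longer list the seven flags set earlier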
Check the cluster status again:
[cephadmin@cephtest001 ~]$ ceph -s
  cluster:
    id:     6cd05235-66dd-4929-b697-1562d308d5c3
    health: HEALTH_WARN
            1 pools have many more objects per pg than average
            3 monitors have not enabled msgr2

  services:
    mon: 3 daemons, quorum cephtest001,cephtest002,cephtest004 (age 2h)
    mgr: cephtest001(active, since 2h), standbys: cephtest002, cephtest004
    osd: 13 osds: 13 up (since 2h), 13 in (since 3w); 1 remapped pgs
    rgw: 1 daemon active (cephtest004)

  task status:

  data:
    pools:   8 pools, 400 pgs
    objects: 24.34k objects, 167 GiB
    usage:   518 GiB used, 26 TiB / 27 TiB avail
    pgs:     30/73014 objects misplaced (0.041%)
             399 active+clean
             1 active+clean+remapped
[cephadmin@cephtest001 ~]$
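One new warning appeared after the restart: "3 monitors have not enabled msgr2". On Nautilus and later releases this is normally cleared by enabling the v2 messenger protocol on the monitors, a one-time command (verify the cluster version before running it):

ceph mon enable-msgr2   # switches the monitors to listen on the msgr2 port as well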