
016 Ceph Cluster Management (Part 2)

1. Ceph Cluster Health Status

Cluster health states: HEALTH_OK, HEALTH_WARN, HEALTH_ERR

1.1 Common Status Query Commands

[root@ceph2 ~]# ceph health detail

HEALTH_OK

[root@ceph2 ~]# ceph -s

  cluster:
    id:     35a91e48-8244-4e96-a7ee-980ab989d20d
    health: HEALTH_OK

  services:
    mon:        3 daemons, quorum ceph2,ceph3,ceph4
    mgr:        ceph4(active), standbys: ceph2, ceph3
    mds:        cephfs-1/1/1 up {0=ceph2=up:active}, 1 up:standby
    osd:        9 osds: 9 up, 9 in; 32 remapped pgs
    rbd-mirror: 1 daemon active

  data:
    pools:   14 pools, 536 pgs
    objects: 220 objects, 240 MB
    usage:   1764 MB used, 133 GB / 134 GB avail
    pgs:     508 active+clean
             28  active+clean+remapped

ceph -w prints the same status, but stays in watch mode, streaming cluster updates as they happen.
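A few other read-only commands are useful alongside ceph -s for a quick look at the cluster (all of these are standard ceph CLI subcommands):

ceph df          # per-pool and cluster-wide capacity usage
ceph osd stat    # one-line OSD up/in summary
ceph osd tree    # OSDs laid out in the CRUSH hierarchy
ceph mon stat    # monitor quorum summary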

1.2 Cluster Flags

        noup: when an OSD starts, it normally marks itself up on the MON. With this flag set, OSDs are not automatically marked up.

        nodown: when an OSD stops, the MON normally marks it down. With this flag set, the MON will not mark stopped OSDs down. Setting noup and nodown together helps ride out network flapping.

        noout: with this flag set, the MON will not automatically remove (mark out) any OSD from the CRUSH map. Set it before OSD maintenance to keep CRUSH from rebalancing data while OSDs are stopped; clear the flag once the OSDs are back up.

        noin: with this flag set, booting OSDs are not automatically marked in, which prevents data from being automatically allocated to them.

        norecover: disables all cluster recovery operations. Can be set during maintenance or a planned shutdown.

        nobackfill: disables data backfilling.

        noscrub: disables scrubbing. Scrubbing a PG briefly affects OSD performance; on a low-bandwidth cluster, an OSD that responds too slowly during scrubbing may be marked down, and this flag prevents that.

        nodeep-scrub: disables deep scrubbing.

        norebalance: disables data rebalancing. Can be set during cluster maintenance or a shutdown.

        pause: with this flag set, the cluster stops serving reads and writes, but the OSDs' own health checks are unaffected.

        full: marks the cluster as full; all writes are rejected, but reads still work.
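As a sketch of how these flags are typically combined for planned maintenance on an OSD node (the ordering reflects common practice; adapt it to your change window):

# before maintenance: keep CRUSH from marking OSDs out and rebalancing
ceph osd set noout
ceph osd set norebalance

# ...stop the OSDs, patch or reboot the node, start the OSDs again...

# after the OSDs are back up and in, clear the flags
ceph osd unset norebalance
ceph osd unset noout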

1.3 Working with Cluster Flags

Flag operations apply to the cluster as a whole; they cannot target a single OSD.

Set the noout flag:

[root@ceph2 ~]# ceph osd set noout

noout is set

[root@ceph2 ~]# ceph -s

  cluster:
    id:     35a91e48-8244-4e96-a7ee-980ab989d20d
    health: HEALTH_WARN
            noout flag(s) set

  services:
    mon:        3 daemons, quorum ceph2,ceph3,ceph4
    mgr:        ceph4(active), standbys: ceph2, ceph3
    mds:        cephfs-1/1/1 up {0=ceph2=up:active}, 1 up:standby
    osd:        9 osds: 9 up, 9 in; 32 remapped pgs
                flags noout
    rbd-mirror: 1 daemon active

  data:
    pools:   14 pools, 536 pgs
    objects: 220 objects, 240 MB
    usage:   1764 MB used, 133 GB / 134 GB avail
    pgs:     508 active+clean
             28  active+clean+remapped

  io:
    client: 409 B/s rd, 0 op/s rd, 0 op/s wr

Clear the flag:

[root@ceph2 ~]# ceph osd unset noout

noout is unset

[root@ceph2 ~]# ceph -s

  cluster:
    id:     35a91e48-8244-4e96-a7ee-980ab989d20d
    health: HEALTH_OK

  services:
    mon:        3 daemons, quorum ceph2,ceph3,ceph4
    mgr:        ceph4(active), standbys: ceph2, ceph3
    mds:        cephfs-1/1/1 up {0=ceph2=up:active}, 1 up:standby
    osd:        9 osds: 9 up, 9 in; 32 remapped pgs
    rbd-mirror: 1 daemon active

  data:
    pools:   14 pools, 536 pgs
    objects: 220 objects, 240 MB
    usage:   1764 MB used, 133 GB / 134 GB avail
    pgs:     508 active+clean
             28  active+clean+remapped

  io:
    client: 2558 B/s rd, 0 B/s wr, 2 op/s rd, 0 op/s wr

Set the full flag:

[root@ceph2 ~]# ceph osd set full

full is set

[root@ceph2 ~]# ceph -s

  cluster:
    id:     35a91e48-8244-4e96-a7ee-980ab989d20d
    health: HEALTH_WARN
            full flag(s) set

  services:
    mon:        3 daemons, quorum ceph2,ceph3,ceph4
    mgr:        ceph4(active), standbys: ceph2, ceph3
    mds:        cephfs-1/1/1 up {0=ceph2=up:active}, 1 up:standby
    osd:        9 osds: 9 up, 9 in; 32 remapped pgs
                flags full
    rbd-mirror: 1 daemon active

  data:
    pools:   14 pools, 536 pgs
    objects: 220 objects, 240 MB
    usage:   1768 MB used, 133 GB / 134 GB avail
    pgs:     508 active+clean
             28  active+clean+remapped

  io:
    client: 2558 B/s rd, 0 B/s wr, 2 op/s rd, 0 op/s wr

With the full flag set, writes are blocked (the client pauses the operation):

[root@ceph2 ~]# rados -p ssdpool put testfull /etc/ceph/ceph.conf

2019-03-27 21:59:14.250208 7f6500913e40 0 client.65175.objecter FULL, paused modify 0x55d690a412b0 tid 0

[root@ceph2 ~]# ceph osd unset full

full is unset

[root@ceph2 ~]# ceph -s

  cluster:
    id:     35a91e48-8244-4e96-a7ee-980ab989d20d
    health: HEALTH_OK

  services:
    mon:        3 daemons, quorum ceph2,ceph3,ceph4
    mgr:        ceph4(active), standbys: ceph2, ceph3
    mds:        cephfs-1/1/1 up {0=ceph2=up:active}, 1 up:standby
    osd:        9 osds: 9 up, 9 in; 32 remapped pgs
    rbd-mirror: 1 daemon active

  data:
    pools:   14 pools, 536 pgs
    objects: 220 objects, 240 MB
    usage:   1765 MB used, 133 GB / 134 GB avail
    pgs:     508 active+clean
             28  active+clean+remapped

  io:
    client: 409 B/s rd, 0 op/s rd, 0 op/s wr

After the flag is cleared, the same write succeeds:

[root@ceph2 ~]# rados -p ssdpool put testfull /etc/ceph/ceph.conf

[root@ceph2 ~]# rados -p ssdpool ls

testfull
test

2. Restricting Pool Configuration Changes

2.1 The Relevant Options

Prevent pools from being deleted:

osd_pool_default_flag_nodelete

Prevent a pool's pg_num and pgp_num from being changed:

osd_pool_default_flag_nopgchange

Prevent a pool's size and min_size from being changed:

osd_pool_default_flag_nosizechange
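To make the restrictive defaults permanent, the options would go into the [global] section of ceph.conf, e.g. (a sketch; section 2.3 below uses exactly this mechanism):

[global]
osd_pool_default_flag_nodelete = true
osd_pool_default_flag_nopgchange = true
osd_pool_default_flag_nosizechange = true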

2.2 Hands-On Verification

[root@ceph2 ~]# ceph daemon osd.0  config show|grep osd_pool_default_flag

  "osd_pool_default_flag_hashpspool": "true",
  "osd_pool_default_flag_nodelete": "false",
  "osd_pool_default_flag_nopgchange": "false",
  "osd_pool_default_flag_nosizechange": "false",
  "osd_pool_default_flags": "0",

 

[root@ceph2 ~]# ceph tell osd.* injectargs --osd_pool_default_flag_nodelete true

[root@ceph2 ~]# ceph daemon osd.0 config show|grep osd_pool_default_flag

  "osd_pool_default_flag_hashpspool": "true",
  "osd_pool_default_flag_nodelete": "true",
  "osd_pool_default_flag_nopgchange": "false",
  "osd_pool_default_flag_nosizechange": "false",
  "osd_pool_default_flags": "0",

[root@ceph2 ~]# ceph osd pool delete ssdpool  ssdpool yes-i-really-really-mean-it

Error EPERM: WARNING: this will *PERMANENTLY DESTROY* all data stored in pool ssdpool.  If you are *ABSOLUTELY CERTAIN* that is what you want, pass the pool name *twice*, followed by --yes-i-really-really-mean-it.   # the pool cannot be deleted

Change it back to false:

[root@ceph2 ~]# ceph tell osd.* injectargs --osd_pool_default_flag_nodelete false

[root@ceph2 ~]# ceph daemon osd.0 config show|grep osd_pool_default_flag

"osd_pool_default_flag_hashpspool": "true",
"osd_pool_default_flag_nodelete": "true",                   #依然显示为ture
"osd_pool_default_flag_nopgchange": "false",
"osd_pool_default_flag_nosizechange": "false",
"osd_pool_default_flags": "0"

2.3 Changing the Setting via the Configuration File

On ceph1, edit /etc/ceph/ceph.conf and set:

osd_pool_default_flag_nodelete = false

[root@ceph1 ~]# ansible all -m copy -a 'src=/etc/ceph/ceph.conf dest=/etc/ceph/ceph.conf owner=ceph group=ceph mode=0644'

[root@ceph1 ~]# ansible mons -m shell -a ' systemctl restart ceph-mon.target'

[root@ceph1 ~]# ansible mons -m shell -a ' systemctl restart ceph-osd.target'

[root@ceph2 ~]# ceph daemon osd.0 config show|grep osd_pool_default_flag

"osd_pool_default_flag_hashpspool": "true",
"osd_pool_default_flag_nodelete": "false",
"osd_pool_default_flag_nopgchange": "false",
"osd_pool_default_flag_nosizechange": "false",
 "osd_pool_default_flags": "0",

Delete ssdpool:

[root@ceph2 ~]# ceph osd pool delete ssdpool ssdpool --yes-i-really-really-mean-it

Deleted successfully!

3. Understanding PGs

3.1 PG States

        Creating: the PG is being created. Usually seen when a pool is created or a pool's PG count is increased.

        Active: the PG is active and can serve reads and writes normally.

        Clean: every object in the PG has been replicated the required number of times.

        Down: the PG is offline.

        Replay: after an OSD failure, the PG is waiting for clients to replay their operations.

        Splitting: the PG is being split, usually after a pool's PG count is increased; existing PGs are split and some of their objects are moved to the new PGs.

        Scrubbing: the PG is being checked for inconsistencies.

        Degraded: some objects in the PG do not yet have the required number of replicas.

        Inconsistent: the PG's replicas are inconsistent with each other. ceph pg repair can be used to repair the inconsistency.

        Peering: peering is the process, driven by the primary OSD, in which all OSDs that store the PG's replicas reach agreement on the state of every object and all metadata in the PG. Only after peering completes does the primary OSD accept client writes.

        Repair: the PG is being checked, and any inconsistencies found will be repaired.

        Recovering: the PG is migrating or synchronizing objects and replicas. Typically the rebalancing process after an OSD goes down.

        Backfill: when a new OSD joins the cluster, CRUSH assigns it some of the cluster's existing PGs; copying that data onto the new OSD is called backfilling.

        Backfill-wait: the PG is waiting for backfilling to begin.

        Incomplete: the PG log is missing a critical interval of history. This happens when an OSD holding information the PG needs is unavailable.

        Stale: the PG is in an unknown state: the monitors have not received an update for it since the PG map changed. Seen at cluster startup, before peering finishes.

        Remapped: when a PG's acting set changes, data migrates from the old acting set to the new one. Since the new primary OSD may need some time before it can serve requests, the old primary keeps serving until the migration completes; during that period the PG is reported as remapped.
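A quick way to see how many PGs are in each of these states (both are standard read-only subcommands):

ceph pg stat                 # one-line summary of PG states
ceph pg dump pgs_brief       # state, up set and acting set of every PG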

3.2 Managing Object-to-PG Mappings

[root@ceph2 ~]# ceph osd map test test

osdmap e288 pool 'test' (16) object 'test' -> pg 16.40e8aab5 (16.15) -> up ([5,6], p5) acting ([5,6,0], p5)
The object test maps to PG 16.15 and is stored on three OSDs: osd.5, osd.6 and osd.0, with osd.5 as the primary OSD.
An OSD that is up stays in the PG's up set and acting set. Once the primary OSD goes down, it is removed first from the up set and then from the acting set, and a secondary OSD is promoted to primary. Ceph recovers the failed OSD's PGs onto a new OSD and then adds that OSD to the up and acting sets to keep the cluster highly available.
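To check a single PG's mapping directly (using PG 16.15 from the example above; ceph pg map is a standard read-only subcommand):

ceph pg map 16.15        # prints the osdmap epoch plus the PG's up set and acting set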

3.3 Managing Stuck PGs

        If a PG stays in one of the following states longer than mon_pg_stuck_threshold (default 300 s), the MON marks the PG as stuck:

        inactive: the PG has a peering problem

        unclean: the PG has trouble recovering from a failure

        stale: no OSD is reporting the PG; all of its OSDs may be down and out

        undersized: the PG does not have enough OSDs to hold the number of replicas it should

        By default Ceph recovers automatically, but if automatic recovery fails, the cluster stays in HEALTH_WARN or HEALTH_ERR.

        If all OSDs of a PG are down and out, the PG is marked stale. To resolve this, at least one of those OSDs must come back with a usable copy of the PG; otherwise the PG is unavailable.

        Ceph can declare an OSD or PG lost, which amounts to accepting the loss of the data.

        Note that an OSD cannot run without its journal; if the journal is lost, the OSD stops.
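ceph pg dump_stuck, used in the next section, also accepts one of these states as a filter, for example:

ceph pg dump_stuck stale        # only PGs stuck in stale
ceph pg dump_stuck inactive     # only PGs stuck inactive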

3.4 Operating on Stuck PGs

List the PGs that are stuck:

[root@ceph2 ceph]# ceph pg dump_stuck

ok
PG_STAT STATE         UP    UP_PRIMARY ACTING ACTING_PRIMARY 
17.5    stale+peering [0,2]          0  [0,2]              0 
17.4    stale+peering [2,0]          2  [2,0]              2 
17.3    stale+peering [2,0]          2  [2,0]              2 
17.2    stale+peering [2,0]          2  [2,0]              2 
17.1    stale+peering [0,2]          0  [0,2]              0 
17.0    stale+peering [2,0]          2  [2,0]              2 
17.1f   stale+peering [2,0]          2  [2,0]              2 
17.1e   stale+peering [0,2]          0  [0,2]              0 
17.1d   stale+peering [2,0]          2  [2,0]              2 
17.1c   stale+peering [0,2]          0  [0,2]              0 
17.6    stale+peering [2,0]          2  [2,0]              2 
17.11   stale+peering [0,2]          0  [0,2]              0 
17.7    stale+peering [2,0]          2  [2,0]              2 
17.8    stale+peering [2,0]          2  [2,0]              2 
17.13   stale+peering [2,0]          2  [2,0]              2 
17.9    stale+peering [0,2]          0  [0,2]              0 
17.10   stale+peering [2,0]          2  [2,0]              2 
17.a    stale+peering [0,2]          0  [0,2]              0 
17.15   stale+peering [2,0]          2  [2,0]              2 
17.b    stale+peering [2,0]          2  [2,0]              2 
17.12   stale+peering [0,2]          0  [0,2]              0 
17.c    stale+peering [2,0]          2  [2,0]              2 
17.17   stale+peering [0,2]          0  [0,2]              0 
17.d    stale+peering [2,0]          2  [2,0]              2 
17.14   stale+peering [2,0]          2  [2,0]              2 
17.e    stale+peering [0,2]          0  [0,2]              0 
17.19   stale+peering [0,2]          0  [0,2]              0 
17.f    stale+peering [2,0]          2  [2,0]              2 
17.16   stale+peering [0,2]          0  [0,2]              0 
17.18   stale+peering [0,2]          0  [0,2]              0 
17.1a   stale+peering [2,0]          2  [2,0]              2 
17.1b   stale+peering [2,0]          2  [2,0]              2
[root@ceph2 ceph]# ceph osd blocked-by
osd num_blocked 
  0          19 
  2          13 

Find the OSDs that are blocking PGs stuck in peering:

ceph osd blocked-by

Check the state of a specific PG:

ceph pg dump | grep <pgid>
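For a detailed per-PG report, including peering and recovery information, ceph pg query (a standard subcommand) returns the PG's full state as JSON:

ceph pg <pgid> query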

Declare a PG's unfound objects lost:

ceph pg <pgid> mark_unfound_lost revert|delete

Declare an OSD lost (the OSD must be down and out):

ceph osd lost <osdid> --yes-i-really-mean-it


Author's note: the content of this post comes mainly from Yan Wei of Yutian Education; I carried out and verified all the operations myself. If you would like to repost it, please first obtain permission from Yutian Education (http://www.yutianedu.com/) or from Mr. Yan himself (https://www.cnblogs.com/breezey/). Thanks!

 
