Collected Ceph usage issues

1. HEALTH_WARN: pools have too many placement groups

[root@k8s-master ~]# ceph -s
  cluster:
    id:     627227aa-4a5e-47c1-b822-28251f8a9936
    health: HEALTH_WARN
            2 pools have too many placement groups
            mons are allowing insecure global_id reclaim

[root@k8s-master ~]# ceph health detail
HEALTH_WARN 2 pools have too many placement groups; mons are allowing insecure global_id reclaim
POOL_TOO_MANY_PGS 2 pools have too many placement groups
Pool cephfs_metadata has 128 placement groups, should have 16
Pool cephfs_data has 128 placement groups, should have 32
[root@k8s-master ~]# ceph osd pool autoscale-status
POOL              SIZE TARGET SIZE RATE RAW CAPACITY  RATIO TARGET RATIO EFFECTIVE RATIO BIAS PG_NUM NEW PG_NUM AUTOSCALE
cephfs_metadata  1350k              2.0       30708M 0.0001                               4.0    128         16 warn
cephfs_data     194.8k              2.0       30708M 0.0000                               1.0    128         32 warn
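In this output, PG_NUM is the pool's current PG count and NEW PG_NUM is what the autoscaler would choose instead; AUTOSCALE "warn" means it only raises a health warning rather than resizing anything. As a quick check (a sketch, not part of the original session), the per-pool mode can be read back with `ceph osd pool get`:

# Show which autoscale mode each pool is in (off / on / warn).
ceph osd pool get cephfs_metadata pg_autoscale_mode
ceph osd pool get cephfs_data pg_autoscale_mode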

Reference: https://forum.proxmox.com/threads/ceph-pools-have-too-many-placement-groups.81047/
Cause: the pg_autoscaler mgr module is enabled (its per-pool report is what `ceph osd pool autoscale-status` shows above), and it considers the current pg_num of these two pools too high.
Solution: disable the pg_autoscaler module.
[root@k8s-master ~]# ceph mgr module disable pg_autoscaler
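Disabling the whole mgr module silences the warning cluster-wide. An alternative (a sketch I have not run against this cluster) is to keep the autoscaler and either let it act on these two pools or shrink pg_num by hand to the values `ceph health detail` suggested; note that reducing pg_num needs a release with PG merging (Nautilus or later, which this cluster appears to be, since the autoscaler exists at all):

# Option A: let the autoscaler resize these pools itself.
ceph osd pool set cephfs_metadata pg_autoscale_mode on
ceph osd pool set cephfs_data pg_autoscale_mode on

# Option B: keep the module in "warn" mode and set pg_num manually
# to the suggested targets.
ceph osd pool set cephfs_metadata pg_num 16
ceph osd pool set cephfs_data pg_num 32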

2. HEALTH_WARN: mons are allowing insecure global_id reclaim

[root@k8s-master ~]# ceph -s
  cluster:
    id:     627227aa-4a5e-47c1-b822-28251f8a9936
    health: HEALTH_WARN
            mons are allowing insecure global_id reclaim

Reference: http://www.manongjc.com/detail/24-dvcrprtvjeglqcc.html
Solution: disable the insecure global_id reclaim mode.
[root@k8s-master ~]# ceph config set mon auth_allow_insecure_global_id_reclaim false
[root@k8s-master ~]# ceph -s
  cluster:
    id:     627227aa-4a5e-47c1-b822-28251f8a9936
    health: HEALTH_OK
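For context (not in the original post): this warning appears on releases patched for CVE-2021-20288, where the monitors still accept the old, insecure global_id reclaim so that unpatched clients keep working. Setting auth_allow_insecure_global_id_reclaim to false is the proper fix once every client and daemon has been upgraded; unpatched clients will be refused afterwards. If old clients must stay connected for a while, the warning can be muted instead (assuming the option name below, which I believe was added together with this health check):

# Only silence the health warning; insecure reclaim stays allowed.
ceph config set mon mon_warn_on_insecure_global_id_reclaim_allowed false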

 
