A Hands-on Guide to Scaling Ceph Cluster OSD Devices Up and Down

I. Hands-on: scaling up the OSD devices of a Ceph cluster

1. Prerequisites for adding an OSD

Why add OSDs?
	As a Ceph cluster is used over time, its storage may eventually be exhausted, at which point the cluster has to be expanded. For Ceph, expanding storage capacity simply means adding the corresponding OSD nodes/devices.
	
Prerequisites for adding an OSD:
	Add a dedicated new node and attach two extra 500 GB disks to it.

If the new disks are not visible yet, rescan the SCSI bus as follows.
[root@ceph144 ~]# lsblk 
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   20G  0 disk 
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   19G  0 part 
  ├─centos-root 253:0    0   17G  0 lvm  /
  └─centos-swap 253:1    0    2G  0 lvm  [SWAP]
sr0              11:0    1  4.5G  0 rom  
[root@ceph144 ~]# 
[root@ceph144 ~]# 
[root@ceph144 ~]# for i in `seq 0 2`; do echo "- - -" > /sys/class/scsi_host/host${i}/scan;done
[root@ceph144 ~]# 
[root@ceph144 ~]# lsblk 
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   20G  0 disk 
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   19G  0 part 
  ├─centos-root 253:0    0   17G  0 lvm  /
  └─centos-swap 253:1    0    2G  0 lvm  [SWAP]
sdb               8:16   0  500G  0 disk 
sdc               8:32   0  500G  0 disk 
sr0              11:0    1  4.5G  0 rom  
[root@ceph144 ~]# 
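The rescan loop above hard-codes host0 through host2 to match this particular VM. A hedged, more general sketch (assuming a Linux guest, run as root) rescans whatever SCSI hosts actually exist:

```shell
# Rescan every SCSI host present instead of hard-coding host0..host2.
for scan in /sys/class/scsi_host/host*/scan; do
    # skip paths that are absent or reject the write (e.g. when not root)
    [ -w "$scan" ] && echo "- - -" > "$scan" 2>/dev/null || true
done
echo "SCSI rescan issued"
```

Run lsblk afterwards to confirm the new devices appear, as shown above.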

2. Deploy the Ceph packages on the new node

	1 Prepare domestic mirror repositories (base repo and EPEL repo)
[root@ceph144 ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
[root@ceph144 ~]# curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

	2 Configure the Ceph repository on all nodes
[root@ceph144 ~]# cat > /etc/yum.repos.d/ceph.repo << EOF
[Ceph]
name=Ceph packages for \$basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/\$basearch
gpgcheck=0
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch
gpgcheck=0
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/SRPMS
gpgcheck=0
EOF

	3 Install the Ceph OSD package
[root@ceph144 ~]# yum -y install ceph-osd

3. Check the cluster state on the admin node before adding the OSDs

[root@ceph141 ~]# ceph -s
  cluster:
    id:     5821e29c-326d-434d-a5b6-c492527eeaad
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph141,ceph142,ceph143 (age 23h)
    mgr: ceph141(active, since 22h), standbys: ceph143, ceph142
    osd: 7 osds: 7 up (since 22h), 7 in (since 22h)
 
  data:
    pools:   3 pools, 96 pgs
    objects: 74 objects, 114 MiB
    usage:   7.8 GiB used, 1.9 TiB / 2.0 TiB avail
    pgs:     96 active+clean
 
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME        STATUS REWEIGHT PRI-AFF 
-1       1.95319 root default                             
-3       0.48830     host ceph141                         
 0   hdd 0.19530         osd.0        up  1.00000 1.00000 
 1   hdd 0.29300         osd.1        up  1.00000 1.00000 
-5       0.97659     host ceph142                         
 2   hdd 0.19530         osd.2        up  1.00000 1.00000 
 3   hdd 0.29300         osd.3        up  1.00000 1.00000 
 4   hdd 0.48830         osd.4        up  1.00000 1.00000 
-7       0.48830     host ceph143                         
 5   hdd 0.19530         osd.5        up  1.00000 1.00000 
 6   hdd 0.29300         osd.6        up  1.00000 1.00000 
[root@ceph141 ~]# 

4. Device state on the node to be added

[root@ceph144 ~]# lsblk 
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   20G  0 disk 
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   19G  0 part 
  ├─centos-root 253:0    0   17G  0 lvm  /
  └─centos-swap 253:1    0    2G  0 lvm  [SWAP]
sdb               8:16   0  500G  0 disk 
sdc               8:32   0  500G  0 disk 
sr0              11:0    1  4.5G  0 rom  
[root@ceph144 ~]# 

5. Set up passwordless SSH from the ceph-deploy node to the new node

[root@harbor250 ~]# ssh-copy-id ceph144
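If the deploy node does not have a key pair yet, the copy step can be preceded by key generation. A minimal sketch, assuming the default RSA key path (the ssh-copy-id line is left as a commented preview because it needs the live ceph144 node):

```shell
# Create an RSA key pair if none exists yet, then push it to the new node.
keydir="${HOME}/.ssh"
mkdir -p "$keydir" && chmod 700 "$keydir"
[ -f "$keydir/id_rsa" ] || ssh-keygen -t rsa -N "" -f "$keydir/id_rsa" -q
echo "key ready: $keydir/id_rsa.pub"
# ssh-copy-id ceph144   # run this once the key exists
```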

6. Add the devices from the ceph-deploy node

[root@harbor250 ~]#  cd /yinzhengjie/softwares/ceph-cluster
[root@harbor250 ceph-cluster]# 
[root@harbor250 ceph-cluster]# ceph-deploy osd create ceph144 --data /dev/sdb  # add the "/dev/sdb" disk on node ceph144
...
[ceph144][WARNIN] Running command: /bin/systemctl start ceph-osd@7
[ceph144][WARNIN] --> ceph-volume lvm activate successful for osd ID: 7
[ceph144][WARNIN] --> ceph-volume lvm create successful for: /dev/sdb
[ceph144][INFO  ] checking OSD status...
[ceph144][DEBUG ] find the location of an executable
[ceph144][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph144 is now ready for osd use.
[root@harbor250 ceph-cluster]#  
[root@harbor250 ceph-cluster]# ceph-deploy osd create ceph144 --data /dev/sdc  # add the "/dev/sdc" disk on node ceph144
...
[ceph144][WARNIN] Running command: /bin/systemctl start ceph-osd@8
[ceph144][WARNIN] --> ceph-volume lvm activate successful for osd ID: 8
[ceph144][WARNIN] --> ceph-volume lvm create successful for: /dev/sdc
[ceph144][INFO  ] checking OSD status...
[ceph144][DEBUG ] find the location of an executable
[ceph144][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph144 is now ready for osd use.
[root@harbor250 ceph-cluster]# 
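The two invocations above can be collapsed into one loop. The sketch below only echoes the commands as a dry-run preview (drop the echo to actually create the OSDs); the host and device names are the ones used in this document:

```shell
# Dry-run preview of the ceph-deploy call for every data disk on the node.
node=ceph144
for dev in /dev/sdb /dev/sdc; do
    echo ceph-deploy osd create "$node" --data "$dev"
done
```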

7. Check the device state on the node after adding

[root@ceph144 ~]# lsblk 
NAME                                                                                                MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                                                                                                   8:0    0   20G  0 disk 
├─sda1                                                                                                8:1    0    1G  0 part /boot
└─sda2                                                                                                8:2    0   19G  0 part 
  ├─centos-root                                                                                     253:0    0   17G  0 lvm  /
  └─centos-swap                                                                                     253:1    0    2G  0 lvm  [SWAP]
sdb                                                                                                   8:16   0  500G  0 disk 
└─ceph--c7502f61--dc1f--4e2a--b2e2--149810ab3351-osd--block--ec3ba06b--cacf--4392--820e--155c3b0b675d
                                                                                                    253:2    0  500G  0 lvm  
sdc                                                                                                   8:32   0  500G  0 disk 
└─ceph--89678d7b--d0a0--49ed--87ba--341b254508ba-osd--block--1b134e65--ef8b--4464--932e--14bb23ebfc4e
                                                                                                    253:3    0  500G  0 lvm  
sr0                                                                                                  11:0    1  4.5G  0 rom  
[root@ceph144 ~]# 

8. Check the OSD state again on the admin node

[root@ceph141 ~]# ceph -s
  cluster:
    id:     5821e29c-326d-434d-a5b6-c492527eeaad
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph141,ceph142,ceph143 (age 23h)
    mgr: ceph141(active, since 22h), standbys: ceph143, ceph142
    osd: 9 osds: 9 up (since 91s), 9 in (since 91s)
 
  data:
    pools:   3 pools, 96 pgs
    objects: 74 objects, 114 MiB
    usage:   10 GiB used, 2.9 TiB / 2.9 TiB avail
    pgs:     96 active+clean
 
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME        STATUS REWEIGHT PRI-AFF 
-1       2.92978 root default                             
-3       0.48830     host ceph141                         
 0   hdd 0.19530         osd.0        up  1.00000 1.00000 
 1   hdd 0.29300         osd.1        up  1.00000 1.00000 
-5       0.97659     host ceph142                         
 2   hdd 0.19530         osd.2        up  1.00000 1.00000 
 3   hdd 0.29300         osd.3        up  1.00000 1.00000 
 4   hdd 0.48830         osd.4        up  1.00000 1.00000 
-7       0.48830     host ceph143                         
 5   hdd 0.19530         osd.5        up  1.00000 1.00000 
 6   hdd 0.29300         osd.6        up  1.00000 1.00000 
-9       0.97659     host ceph144                         
 7   hdd 0.48830         osd.7        up  1.00000 1.00000 
 8   hdd 0.48830         osd.8        up  1.00000 1.00000 
[root@ceph141 ~]# 

II. Hands-on: scaling down the OSD devices of a Ceph cluster

1. On the admin node, find the UUID that corresponds to each OSD

[root@ceph141 ~]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME        STATUS REWEIGHT PRI-AFF 
-1       2.92978 root default                             
-3       0.48830     host ceph141                         
 0   hdd 0.19530         osd.0        up  1.00000 1.00000 
 1   hdd 0.29300         osd.1        up  1.00000 1.00000 
-5       0.97659     host ceph142                         
 2   hdd 0.19530         osd.2        up  1.00000 1.00000 
 3   hdd 0.29300         osd.3        up  1.00000 1.00000 
 4   hdd 0.48830         osd.4        up  1.00000 1.00000 
-7       0.48830     host ceph143                         
 5   hdd 0.19530         osd.5        up  1.00000 1.00000 
 6   hdd 0.29300         osd.6        up  1.00000 1.00000 
-9       0.97659     host ceph144                         
 7   hdd 0.48830         osd.7        up  1.00000 1.00000 
 8   hdd 0.48830         osd.8        up  1.00000 1.00000 
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph osd dump | egrep "osd.7|osd.8"
osd.7 up   in  weight 1 up_from 548 up_thru 563 down_at 0 last_clean_interval [0,0) [v2:10.0.0.144:6800/12665,v1:10.0.0.144:6801/12665] [v2:10.0.0.144:6802/12665,v1:10.0.0.144:6803/12665] exists,up ec3ba06b-cacf-4392-820e-155c3b0b675d
osd.8 up   in  weight 1 up_from 564 up_thru 573 down_at 0 last_clean_interval [0,0) [v2:10.0.0.144:6808/13111,v1:10.0.0.144:6809/13111] [v2:10.0.0.144:6810/13111,v1:10.0.0.144:6811/13111] exists,up 1b134e65-ef8b-4464-932e-14bb23ebfc4e
[root@ceph141 ~]# 

2. On the node, verify that the UUIDs match

[root@ceph144 ~]# lsblk 
NAME                                                                                                MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                                                                                                   8:0    0   20G  0 disk 
├─sda1                                                                                                8:1    0    1G  0 part /boot
└─sda2                                                                                                8:2    0   19G  0 part 
  ├─centos-root                                                                                     253:0    0   17G  0 lvm  /
  └─centos-swap                                                                                     253:1    0    2G  0 lvm  [SWAP]
sdb                                                                                                   8:16   0  500G  0 disk 
└─ceph--c7502f61--dc1f--4e2a--b2e2--149810ab3351-osd--block--ec3ba06b--cacf--4392--820e--155c3b0b675d
                                                                                                    253:2    0  500G  0 lvm  
sdc                                                                                                   8:32   0  500G  0 disk 
└─ceph--89678d7b--d0a0--49ed--87ba--341b254508ba-osd--block--1b134e65--ef8b--4464--932e--14bb23ebfc4e
                                                                                                    253:3    0  500G  0 lvm  
sr0                                                                                                  11:0    1  4.5G  0 rom  
[root@ceph144 ~]# 
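The trailing UUID in the `ceph osd dump` output matches the `osd--block--<uuid>` suffix of the LV names shown by lsblk, except that device-mapper doubles every dash inside a name component. A small sketch (hypothetical helper, plain sed) normalizes an LV name back to the OSD UUID for comparison:

```shell
# Strip everything up to "osd--block--", then collapse the doubled dashes
# that device-mapper uses to escape '-' inside LVM name components.
lv="ceph--c7502f61--dc1f--4e2a--b2e2--149810ab3351-osd--block--ec3ba06b--cacf--4392--820e--155c3b0b675d"
uuid=$(printf '%s\n' "$lv" | sed 's/.*osd--block--//; s/--/-/g')
echo "$uuid"   # ec3ba06b-cacf-4392-820e-155c3b0b675d, i.e. osd.7 above
```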

3. Mark the OSDs out on the admin node

	1 Run this in a separate terminal
[root@ceph142 ~]# ceph -w  # watches Ceph's data migration in real time; run it in a dedicated terminal
  cluster:
    id:     5821e29c-326d-434d-a5b6-c492527eeaad
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph141,ceph142,ceph143 (age 23h)
    mgr: ceph141(active, since 22h), standbys: ceph143, ceph142
    osd: 9 osds: 9 up (since 5m), 9 in (since 5m)
 
  data:
    pools:   3 pools, 96 pgs
    objects: 74 objects, 114 MiB
    usage:   10 GiB used, 2.9 TiB / 2.9 TiB avail
    pgs:     96 active+clean
 

...  # once "ceph osd out ..." is executed, messages like the following appear.

2024-02-01 16:57:27.280545 mon.ceph141 [INF] Client client.admin marked osd.7 out, while it was still marked up
2024-02-01 16:57:32.497417 mon.ceph141 [WRN] Health check failed: Degraded data redundancy: 11/222 objects degraded (4.955%), 2 pgs degraded (PG_DEGRADED)
2024-02-01 16:57:38.912016 mon.ceph141 [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 11/222 objects degraded (4.955%), 2 pgs degraded)
2024-02-01 16:57:38.912062 mon.ceph141 [INF] Cluster is now healthy
2024-02-01 16:58:29.198849 mon.ceph141 [INF] Client client.admin marked osd.8 out, while it was still marked up
2024-02-01 16:58:33.044142 mon.ceph141 [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)
2024-02-01 16:58:33.044162 mon.ceph141 [WRN] Health check failed: Degraded data redundancy: 36/222 objects degraded (16.216%), 7 pgs degraded (PG_DEGRADED)
2024-02-01 16:58:38.462308 mon.ceph141 [WRN] Health check update: Reduced data availability: 6 pgs peering (PG_AVAILABILITY)
2024-02-01 16:58:39.469179 mon.ceph141 [WRN] Health check update: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)
2024-02-01 16:58:39.469198 mon.ceph141 [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 6 pgs peering)
2024-02-01 16:58:42.549459 mon.ceph141 [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 1 pg degraded)
2024-02-01 16:58:42.549485 mon.ceph141 [INF] Cluster is now healthy
...

 
	2 Then, in another terminal, run
[root@ceph141 ~]# ceph osd out osd.7  # marking an OSD out sets its reweight to 0
marked out osd.7. 
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph osd out osd.8
marked out osd.8. 
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME        STATUS REWEIGHT PRI-AFF 
-1       2.92978 root default                             
-3       0.48830     host ceph141                         
 0   hdd 0.19530         osd.0        up  1.00000 1.00000 
 1   hdd 0.29300         osd.1        up  1.00000 1.00000 
-5       0.97659     host ceph142                         
 2   hdd 0.19530         osd.2        up  1.00000 1.00000 
 3   hdd 0.29300         osd.3        up  1.00000 1.00000 
 4   hdd 0.48830         osd.4        up  1.00000 1.00000 
-7       0.48830     host ceph143                         
 5   hdd 0.19530         osd.5        up  1.00000 1.00000 
 6   hdd 0.29300         osd.6        up  1.00000 1.00000 
-9       0.97659     host ceph144                         
 7   hdd 0.48830         osd.7        up        0 1.00000 
 8   hdd 0.48830         osd.8        up        0 1.00000 
[root@ceph141 ~]# 

4. Stop the OSD daemons on the OSD node

[root@ceph144 ~]# ps -ef | grep ceph
root       12299       1  0 16:48 ?        00:00:00 /usr/bin/python2.7 /usr/bin/ceph-crash
ceph       12665       1  0 16:50 ?        00:00:05 /usr/bin/ceph-osd -f --cluster ceph --id 7 --setuser ceph --setgroup ceph
ceph       13111       1  0 16:51 ?        00:00:05 /usr/bin/ceph-osd -f --cluster ceph --id 8 --setuser ceph --setgroup ceph
root       13242   12245  0 17:00 pts/1    00:00:00 grep --color=auto ceph
[root@ceph144 ~]# 
[root@ceph144 ~]# 
[root@ceph144 ~]# systemctl disable --now ceph-osd@7
[root@ceph144 ~]# 
[root@ceph144 ~]# systemctl disable --now ceph-osd@8
[root@ceph144 ~]# 
[root@ceph144 ~]# ps -ef | grep ceph
root       12299       1  0 16:48 ?        00:00:00 /usr/bin/python2.7 /usr/bin/ceph-crash
root       13293   12245  0 17:00 pts/1    00:00:00 grep --color=auto ceph
[root@ceph144 ~]# 

5. Check the OSD state again on the admin node

[root@ceph141 ~]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME        STATUS REWEIGHT PRI-AFF 
-1       2.92978 root default                             
-3       0.48830     host ceph141                         
 0   hdd 0.19530         osd.0        up  1.00000 1.00000 
 1   hdd 0.29300         osd.1        up  1.00000 1.00000 
-5       0.97659     host ceph142                         
 2   hdd 0.19530         osd.2        up  1.00000 1.00000 
 3   hdd 0.29300         osd.3        up  1.00000 1.00000 
 4   hdd 0.48830         osd.4        up  1.00000 1.00000 
-7       0.48830     host ceph143                         
 5   hdd 0.19530         osd.5        up  1.00000 1.00000 
 6   hdd 0.29300         osd.6        up  1.00000 1.00000 
-9       0.97659     host ceph144                         
 7   hdd 0.48830         osd.7      down        0 1.00000 
 8   hdd 0.48830         osd.8      down        0 1.00000 
[root@ceph141 ~]# 

6. Delete the OSDs on the admin node

	1 Delete the OSD authentication keys
[root@ceph141 ~]# ceph auth del osd.7
updated
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph auth del osd.8
updated
[root@ceph141 ~]# 

	2 Remove the OSDs; note that their status changes to DNE ("does not exist")
[root@ceph141 ~]# ceph osd rm 7
removed osd.7
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph osd rm 8
removed osd.8
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME        STATUS REWEIGHT PRI-AFF 
-1       2.92978 root default                             
-3       0.48830     host ceph141                         
 0   hdd 0.19530         osd.0        up  1.00000 1.00000 
 1   hdd 0.29300         osd.1        up  1.00000 1.00000 
-5       0.97659     host ceph142                         
 2   hdd 0.19530         osd.2        up  1.00000 1.00000 
 3   hdd 0.29300         osd.3        up  1.00000 1.00000 
 4   hdd 0.48830         osd.4        up  1.00000 1.00000 
-7       0.48830     host ceph143                         
 5   hdd 0.19530         osd.5        up  1.00000 1.00000 
 6   hdd 0.29300         osd.6        up  1.00000 1.00000 
-9       0.97659     host ceph144                         
 7   hdd 0.48830         osd.7       DNE        0         
 8   hdd 0.48830         osd.8       DNE        0         
[root@ceph141 ~]# 

7. Release Ceph's hold on the disks on the node

[root@ceph144 ~]# lsblk 
NAME                                                                                                MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                                                                                                   8:0    0   20G  0 disk 
├─sda1                                                                                                8:1    0    1G  0 part /boot
└─sda2                                                                                                8:2    0   19G  0 part 
  ├─centos-root                                                                                     253:0    0   17G  0 lvm  /
  └─centos-swap                                                                                     253:1    0    2G  0 lvm  [SWAP]
sdb                                                                                                   8:16   0  500G  0 disk 
└─ceph--c7502f61--dc1f--4e2a--b2e2--149810ab3351-osd--block--ec3ba06b--cacf--4392--820e--155c3b0b675d
                                                                                                    253:2    0  500G  0 lvm  
sdc                                                                                                   8:32   0  500G  0 disk 
└─ceph--89678d7b--d0a0--49ed--87ba--341b254508ba-osd--block--1b134e65--ef8b--4464--932e--14bb23ebfc4e
                                                                                                    253:3    0  500G  0 lvm  
sr0                                                                                                  11:0    1  4.5G  0 rom  
[root@ceph144 ~]# 
[root@ceph144 ~]# dmsetup status
ceph--89678d7b--d0a0--49ed--87ba--341b254508ba-osd--block--1b134e65--ef8b--4464--932e--14bb23ebfc4e: 0 1048567808 linear 
centos-swap: 0 4194304 linear 
centos-root: 0 35643392 linear 
ceph--c7502f61--dc1f--4e2a--b2e2--149810ab3351-osd--block--ec3ba06b--cacf--4392--820e--155c3b0b675d: 0 1048567808 linear 
[root@ceph144 ~]# 
[root@ceph144 ~]# dmsetup remove ceph--89678d7b--d0a0--49ed--87ba--341b254508ba-osd--block--1b134e65--ef8b--4464--932e--14bb23ebfc4e
[root@ceph144 ~]# 
[root@ceph144 ~]# dmsetup remove ceph--c7502f61--dc1f--4e2a--b2e2--149810ab3351-osd--block--ec3ba06b--cacf--4392--820e--155c3b0b675d
[root@ceph144 ~]# 
[root@ceph144 ~]# lsblk 
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   20G  0 disk 
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   19G  0 part 
  ├─centos-root 253:0    0   17G  0 lvm  /
  └─centos-swap 253:1    0    2G  0 lvm  [SWAP]
sdb               8:16   0  500G  0 disk 
sdc               8:32   0  500G  0 disk 
sr0              11:0    1  4.5G  0 rom  
[root@ceph144 ~]# 
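As an alternative to removing the device-mapper entries by hand, `ceph-volume lvm zap` (shipped with the OSD packages) wipes the LVM metadata and data from a former OSD disk. A dry-run sketch that only previews the commands (drop the echo to execute them on the OSD node as root):

```shell
# Preview the zap command for each former OSD disk; --destroy also removes
# the VG/LV so the device comes back completely clean.
for dev in /dev/sdb /dev/sdc; do
    echo ceph-volume lvm zap --destroy "$dev"
done
```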

8. On the admin node, clear the DNE entries by removing the OSDs from the CRUSH map

[root@ceph141 ~]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME        STATUS REWEIGHT PRI-AFF 
-1       2.92978 root default                             
-3       0.48830     host ceph141                         
 0   hdd 0.19530         osd.0        up  1.00000 1.00000 
 1   hdd 0.29300         osd.1        up  1.00000 1.00000 
-5       0.97659     host ceph142                         
 2   hdd 0.19530         osd.2        up  1.00000 1.00000 
 3   hdd 0.29300         osd.3        up  1.00000 1.00000 
 4   hdd 0.48830         osd.4        up  1.00000 1.00000 
-7       0.48830     host ceph143                         
 5   hdd 0.19530         osd.5        up  1.00000 1.00000 
 6   hdd 0.29300         osd.6        up  1.00000 1.00000 
-9       0.97659     host ceph144                         
 7   hdd 0.48830         osd.7       DNE        0         
 8   hdd 0.48830         osd.8       DNE        0         
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph osd crush remove osd.7
removed item id 7 name 'osd.7' from crush map
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph osd crush remove osd.8
removed item id 8 name 'osd.8' from crush map
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME        STATUS REWEIGHT PRI-AFF 
-1       1.95319 root default                             
-3       0.48830     host ceph141                         
 0   hdd 0.19530         osd.0        up  1.00000 1.00000 
 1   hdd 0.29300         osd.1        up  1.00000 1.00000 
-5       0.97659     host ceph142                         
 2   hdd 0.19530         osd.2        up  1.00000 1.00000 
 3   hdd 0.29300         osd.3        up  1.00000 1.00000 
 4   hdd 0.48830         osd.4        up  1.00000 1.00000 
-7       0.48830     host ceph143                         
 5   hdd 0.19530         osd.5        up  1.00000 1.00000 
 6   hdd 0.29300         osd.6        up  1.00000 1.00000 
-9             0     host ceph144                         
[root@ceph141 ~]# 
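On Luminous and later releases (this cluster runs Nautilus), `ceph osd purge` combines the `auth del`, `osd rm`, and `crush remove` steps shown above into a single command. A dry-run sketch for the two OSDs removed here (drop the echo to execute it on the admin node):

```shell
# Preview "ceph osd purge" for each removed OSD id; purge performs
# auth del + osd rm + crush remove in one step.
for id in 7 8; do
    echo ceph osd purge "$id" --yes-i-really-mean-it
done
```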

9. Remove the host from the CRUSH map on the admin node

[root@ceph141 ~]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME        STATUS REWEIGHT PRI-AFF 
-1       1.95319 root default                             
-3       0.48830     host ceph141                         
 0   hdd 0.19530         osd.0        up  1.00000 1.00000 
 1   hdd 0.29300         osd.1        up  1.00000 1.00000 
-5       0.97659     host ceph142                         
 2   hdd 0.19530         osd.2        up  1.00000 1.00000 
 3   hdd 0.29300         osd.3        up  1.00000 1.00000 
 4   hdd 0.48830         osd.4        up  1.00000 1.00000 
-7       0.48830     host ceph143                         
 5   hdd 0.19530         osd.5        up  1.00000 1.00000 
 6   hdd 0.29300         osd.6        up  1.00000 1.00000 
-9             0     host ceph144                         
[root@ceph141 ~]#  
[root@ceph141 ~]# ceph osd crush remove ceph144
removed item id -9 name 'ceph144' from crush map
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME        STATUS REWEIGHT PRI-AFF 
-1       1.95319 root default                             
-3       0.48830     host ceph141                         
 0   hdd 0.19530         osd.0        up  1.00000 1.00000 
 1   hdd 0.29300         osd.1        up  1.00000 1.00000 
-5       0.97659     host ceph142                         
 2   hdd 0.19530         osd.2        up  1.00000 1.00000 
 3   hdd 0.29300         osd.3        up  1.00000 1.00000 
 4   hdd 0.48830         osd.4        up  1.00000 1.00000 
-7       0.48830     host ceph143                         
 5   hdd 0.19530         osd.5        up  1.00000 1.00000 
 6   hdd 0.29300         osd.6        up  1.00000 1.00000 
[root@ceph141 ~]# 

10. Verify the cluster state: the scale-down succeeded

[root@ceph141 ~]# ceph -s
  cluster:
    id:     5821e29c-326d-434d-a5b6-c492527eeaad
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph141,ceph142,ceph143 (age 23h)
    mgr: ceph141(active, since 22h), standbys: ceph143, ceph142
    osd: 7 osds: 7 up (since 6m), 7 in (since 8m)
 
  data:
    pools:   3 pools, 96 pgs
    objects: 74 objects, 114 MiB
    usage:   7.8 GiB used, 1.9 TiB / 2.0 TiB avail
    pgs:     96 active+clean
 
[root@ceph141 ~]# 
Posted 2021-01-11 23:56 by 尹正杰