Ceph: wiping an old OSD node's data and joining it to a new cluster

The goal: take an existing Ceph OSD, wipe (format) its disk, and add it to a new Ceph cluster.

 

First, query the old OSD's metadata; the values in this output are used in the steps below.

[root@ceph-207 ~]# ceph-volume lvm list
====== osd.1 =======

  [block]       /dev/ceph-58ef1d0f-272b-4273-82b1-689946254645/osd-block-e0efe172-778e-46e1-baa2-cd56408aac34

      block device              /dev/ceph-58ef1d0f-272b-4273-82b1-689946254645/osd-block-e0efe172-778e-46e1-baa2-cd56408aac34
      block uuid                hCx4XW-OjKC-OC8Y-jEg2-NKYo-Pb6f-y9Nfl3
      cephx lockbox secret
      cluster fsid              b7e4cb56-9cc8-4e44-ab87-24d4253d0951
      cluster name              ceph
      crush device class        None
      encrypted                 0
      osd fsid                  e0efe172-778e-46e1-baa2-cd56408aac34
      osd id                    1
      osdspec affinity
      type                      block
      vdo                       0
      devices                   /dev/sdb
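The same information is also available in machine-readable form, which can be handy if you want to script the later steps. A small sketch; the ceph.osd_id / ceph.osd_fsid tag names are taken from ceph-volume's JSON output and may vary between releases:

# dump the listing as JSON; each LV's "tags" carry ceph.osd_id and ceph.osd_fsid
ceph-volume lvm list --format json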

Trying to join the cluster directly by activating the old OSD fails:

ceph-volume lvm activate 1 e0efe172-778e-46e1-baa2-cd56408aac34
Two kinds of errors have come up so far:

osd.1 21 heartbeat_check: no reply from 192.168.8.206:6804 osd.0 ever on either front or back, first ping sent 2020-11-26T16:00:04.842947+0800 (oldest deadline 2020-11-26T16:00:24.842947+0800)

stderr: Calculated size of logical volume is 0 extents. Needs to be larger.

--> Was unable to complete a new OSD, will rollback changes
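Both failures are consistent with a disk that still carries the old cluster's state: the OSD metadata records the old cluster fsid, and the old volume group leaves no free extents for a new logical volume. A quick sanity check, assuming the new cluster's /etc/ceph/ceph.conf is already installed on this node, is to compare the two fsids:

# fsid recorded on the old OSD (the "cluster fsid" line above)
ceph-volume lvm list | grep 'cluster fsid'
# fsid of the cluster this node is currently configured for
ceph fsid

If they differ, the OSD has to be wiped and recreated, which is what the steps below do.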

 

Wiping the data and re-adding the disk to the new Ceph cluster:

 

1. Stop the OSD service. The 1 after the @ is the osd id reported by the ceph-volume lvm list query above.

systemctl stop ceph-osd@1
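Optionally, also disable the unit so a reboot does not bring the stale OSD back, and confirm it is stopped (plain systemd commands, not specific to Ceph):

# keep the old OSD from restarting on boot, then verify it is inactive
systemctl disable ceph-osd@1
systemctl status ceph-osd@1 --no-pager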
2. Zap the OSD's LVM device; the 1 is again the osd id.

ceph-volume lvm zap --osd-id 1
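As an aside, ceph-volume can also zap by device path; with --destroy it tears down the VG/LV as well, which would make step 3 unnecessary. A sketch, assuming the data disk is /dev/sdb as shown earlier:

# alternative: wipe the raw device and remove its VG/LV in one step
ceph-volume lvm zap /dev/sdb --destroy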
3. List the logical volumes with lvs, then remove the old OSD's LV and VG.

[root@ceph-207 ~]# lvs
  LV                                             VG                                         Attr       LSize     Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  osd-block-e0efe172-778e-46e1-baa2-cd56408aac34 ceph-58ef1d0f-272b-4273-82b1-689946254645  -wi-a-----  <16.00g
  home                                           cl                                         -wi-ao---- <145.12g
  root                                           cl                                         -wi-ao----   50.00g
  swap                                           cl                                         -wi-ao----    <3.88g
[root@ceph-207 ~]# vgremove ceph-58ef1d0f-272b-4273-82b1-689946254645
Do you really want to remove volume group "ceph-58ef1d0f-272b-4273-82b1-689946254645" containing 1 logical volumes? [y/n]: y
Do you really want to remove active logical volume ceph-58ef1d0f-272b-4273-82b1-689946254645/osd-block-e0efe172-778e-46e1-baa2-cd56408aac34? [y/n]: y
Logical volume "osd-block-e0efe172-778e-46e1-baa2-cd56408aac34" successfully removed
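A quick check that the old volume group and logical volume are really gone before recreating the OSD (standard LVM commands):

# only the cl VG and its home/root/swap LVs should remain at this point
vgs
lvs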
4. Re-add the host's disk to the new Ceph cluster.

ceph-volume lvm create --data /dev/sdb
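This step assumes the node has already been pointed at the new cluster: ceph-volume reads the new cluster's ceph.conf and, by default, authenticates with the bootstrap-osd keyring. A minimal pre-flight check; the paths below are the usual defaults, adjust for your deployment:

# both files must come from the NEW cluster before running lvm create
ls -l /etc/ceph/ceph.conf /var/lib/ceph/bootstrap-osd/ceph.keyring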
5. Check the OSD tree and how the disk is mounted. Note that the wiped disk comes back as osd.2, i.e. with a new osd id.

[root@ceph-207 ~]# ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME          STATUS  REWEIGHT  PRI-AFF
-1         0.07997  root default
-3         0.03508      host ceph-206
 0    hdd  0.01559          osd.0          up   1.00000  1.00000
 1    hdd  0.01949          osd.1          up   1.00000  1.00000
-5         0.04489      host ceph-207
 2    hdd  0.01559          osd.2          up   1.00000  1.00000
[root@ceph-207 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 200G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 199G 0 part
├─cl-root 253:0 0 50G 0 lvm /
├─cl-swap 253:1 0 3.9G 0 lvm [SWAP]
└─cl-home 253:2 0 145.1G 0 lvm /home
sdb 8:16 0 16G 0 disk
└─ceph--c221ed63--d87a--4bbd--a503--d8f2ed9e806b-osd--block--530376b8--c7bc--4d64--bc0c--4f8692559562 253:3 0 16G 0 lvm
sr0
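Finally, it is worth confirming that the new cluster sees the OSD and is healthy; a couple of standard status commands (not part of the original output above):

# overall health plus per-OSD utilization
ceph -s
ceph osd df tree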

