Ceph Reef (18.2.X): Ways to Access a Ceph Cluster, with an Admin Node Configuration Walkthrough
Author: Yin Zhengjie
Copyright notice: This is original work; reproduction is prohibited and violations will be pursued legally.
I. Accessing the Ceph cluster via cephadm
1. Method 1: Interactive access with cephadm shell (spins up a temporary container that is removed automatically once you exit the shell)
[root@ceph141 ~]# cephadm shell
Inferring fsid c044ff3c-5f05-11ef-9d8b-51db832765d6
Inferring config /var/lib/ceph/c044ff3c-5f05-11ef-9d8b-51db832765d6/mon.ceph141/config
Using ceph image with id '2bc0b0f4375d' and tag 'v18' created on 2024-07-24 06:19:35 +0800 CST
quay.io/ceph/ceph@sha256:6ac7f923aa1d23b43248ce0ddec7e1388855ee3d00813b52c3172b0b23b37906
root@ceph141:/#
root@ceph141:/# ceph -s
  cluster:
    id:     c044ff3c-5f05-11ef-9d8b-51db832765d6
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 1 daemons, quorum ceph141 (age 14m)
    mgr: ceph141.gqogmi(active, since 10m)
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
root@ceph141:/#
root@ceph141:/# exit
exit
[root@ceph141 ~]#
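Tip: if cephadm cannot infer which cluster you mean (for example, when several clusters live on the same host), you can name the fsid, config, and keyring explicitly. A minimal sketch, reusing the fsid inferred above and the default file locations:
[root@ceph141 ~]# cephadm shell --fsid c044ff3c-5f05-11ef-9d8b-51db832765d6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring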
2. Method 2: Non-interactive access with cephadm (also spins up a temporary container)
[root@ceph141 ~]# cephadm shell -- ceph -s
Inferring fsid c044ff3c-5f05-11ef-9d8b-51db832765d6
Inferring config /var/lib/ceph/c044ff3c-5f05-11ef-9d8b-51db832765d6/mon.ceph141/config
Using ceph image with id '2bc0b0f4375d' and tag 'v18' created on 2024-07-24 06:19:35 +0800 CST
quay.io/ceph/ceph@sha256:6ac7f923aa1d23b43248ce0ddec7e1388855ee3d00813b52c3172b0b23b37906
  cluster:
    id:     c044ff3c-5f05-11ef-9d8b-51db832765d6
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 1 daemons, quorum ceph141 (age 12m)
    mgr: ceph141.gqogmi(active, since 8m)
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
[root@ceph141 ~]#
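Everything after the "--" is passed straight through to the container, so any ceph subcommand can be run this way. Two more examples of the same pattern:
[root@ceph141 ~]# cephadm shell -- ceph health detail
[root@ceph141 ~]# cephadm shell -- ceph orch ps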
3. Method 3: Install the ceph-common package, which provides all the Ceph CLI tools, including ceph, rbd, mount.ceph (for mounting CephFS filesystems), etc. (recommended)
[root@ceph141 ~]# cephadm add-repo --release reef
[root@ceph141 ~]# cephadm install ceph-common
...
Installing repo GPG key from https://download.ceph.com/keys/release.gpg...
Installing repo file at /etc/apt/sources.list.d/ceph.list...  # Creates the repo file on the host and then installs packages; this can be slow, so please be patient!
Updating package list...
Completed adding repo.
Installing packages ['ceph-common']
[root@ceph141 ~]#
[root@ceph141 ~]# ceph -v  # The host itself can now access the cluster!
ceph version 18.2.4 (e7ad5345525c7aa95470c26863873b581076945d) reef (stable)
[root@ceph141 ~]#
[root@ceph141 ~]# ceph -s  # Query the cluster directly from the host
  cluster:
    id:     c044ff3c-5f05-11ef-9d8b-51db832765d6
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 1 daemons, quorum ceph141 (age 22m)
    mgr: ceph141.gqogmi(active, since 18m)
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
[root@ceph141 ~]#
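If you want to double-check what add-repo wrote and which package version apt will install, you can inspect the repo file and the package policy (the exact output depends on your mirror):
[root@ceph141 ~]# cat /etc/apt/sources.list.d/ceph.list
[root@ceph141 ~]# apt-cache policy ceph-common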
II. Configuring a Ceph admin node
1. Copy the apt repo file and GPG key to the new node
[root@ceph141 ~]# scp /etc/apt/sources.list.d/ceph.list ceph142:/etc/apt/sources.list.d/
[root@ceph141 ~]# scp /etc/apt/trusted.gpg.d/ceph.release.gpg ceph142:/etc/apt/trusted.gpg.d/
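If several nodes need the client tools, the same two files can be pushed in one loop. A minimal sketch, assuming ceph143 should become a client as well:
[root@ceph141 ~]# for host in ceph142 ceph143; do
>   scp /etc/apt/sources.list.d/ceph.list ${host}:/etc/apt/sources.list.d/
>   scp /etc/apt/trusted.gpg.d/ceph.release.gpg ${host}:/etc/apt/trusted.gpg.d/
> done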
2. On the client node, update the package index and install the Ceph client package
[root@ceph142 ~]# ll /etc/apt/trusted.gpg.d/ceph.release.gpg
-rw-r--r-- 1 root root 1143 Aug 21 16:35 /etc/apt/trusted.gpg.d/ceph.release.gpg
[root@ceph142 ~]#
[root@ceph142 ~]#
[root@ceph142 ~]# ll /etc/apt/sources.list.d/ceph.list
-rw-r--r-- 1 root root 54 Aug 21 16:33 /etc/apt/sources.list.d/ceph.list
[root@ceph142 ~]#
[root@ceph142 ~]# apt update
[root@ceph142 ~]#
[root@ceph142 ~]# apt -y install ceph-common
[root@ceph142 ~]#
[root@ceph142 ~]# ceph -v
ceph version 18.2.4 (e7ad5345525c7aa95470c26863873b581076945d) reef (stable)
[root@ceph142 ~]#
[root@ceph142 ~]# ll /etc/ceph/
total 12
drwxr-xr-x 2 root root 4096 Aug 21 16:37 ./
drwxr-xr-x 101 root root 4096 Aug 21 16:37 ../
-rw-r--r-- 1 root root 92 Jul 12 23:42 rbdmap
[root@ceph142 ~]#
[root@ceph142 ~]#
[root@ceph142 ~]# ceph -s  # Clearly, this node's ceph client cannot manage the cluster yet
Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)')
[root@ceph142 ~]#
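This error is expected: by default ceph reads /etc/ceph/ceph.conf, which does not exist on ceph142 yet. The client also accepts explicit paths via the -c and -k flags, so a sketch like the following would work once the files exist anywhere on the node (the /tmp paths here are hypothetical):
[root@ceph142 ~]# ceph -s -c /tmp/ceph.conf -k /tmp/ceph.client.admin.keyring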
3. Copy the config and admin keyring from ceph141 to ceph142
[root@ceph141 ~]# scp /etc/ceph/ceph.{conf,client.admin.keyring} ceph142:/etc/ceph/
4. Test from the ceph142 node
[root@ceph142 ~]# ll /etc/ceph/
total 20
drwxr-xr-x 2 root root 4096 Aug 21 16:40 ./
drwxr-xr-x 101 root root 4096 Aug 21 16:37 ../
-rw------- 1 root root 151 Aug 21 16:40 ceph.client.admin.keyring
-rw-r--r-- 1 root root 259 Aug 21 16:40 ceph.conf
-rw-r--r-- 1 root root 92 Jul 12 23:42 rbdmap
[root@ceph142 ~]#
[root@ceph142 ~]#
[root@ceph142 ~]# ceph -s
  cluster:
    id:     3cb12fba-5f6e-11ef-b412-9d303a22b70f
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph141,ceph142,ceph143 (age 94m)
    mgr: ceph141.cwgrgj(active, since 5h), standbys: ceph142.ymuzfe
    osd: 7 osds: 7 up (since 78m), 7 in (since 78m)

  data:
    pools:   1 pools, 1 pgs
    objects: 2 objects, 449 KiB
    usage:   188 MiB used, 3.3 TiB / 3.3 TiB avail
    pgs:     1 active+clean
[root@ceph142 ~]#
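With the config and admin keyring in place, every ceph-common tool on ceph142 can talk to the cluster, not just ceph -s. For example:
[root@ceph142 ~]# ceph df
[root@ceph142 ~]# ceph osd tree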
5. Bonus: label management
1. Add labels
[root@ceph141 ~]# ceph orch host label add ceph142 _admin
Added label _admin to host ceph142
[root@ceph141 ~]#
[root@ceph141 ~]#
[root@ceph141 ~]# ceph orch host label add ceph143 _admin
Added label _admin to host ceph143
[root@ceph141 ~]#
[root@ceph141 ~]# ceph orch host label add ceph143 oldboyedu
Added label oldboyedu to host ceph143
[root@ceph141 ~]#
[root@ceph141 ~]#
2. Remove labels
[root@ceph141 ~]# ceph orch host label rm ceph143 oldboyedu
Removed label oldboyedu from host ceph143
[root@ceph141 ~]#
[root@ceph141 ~]# ceph orch host label rm ceph143 admin
Host ceph143 does not have label 'admin'. Please use 'ceph orch host ls' to list all the labels.
[root@ceph141 ~]#
[root@ceph141 ~]# ceph orch host label rm ceph143 _admin
Removed label _admin from host ceph143
[root@ceph141 ~]#
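You can list all hosts together with their labels at any time to confirm the changes took effect (the column layout varies slightly by version):
[root@ceph141 ~]# ceph orch host ls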
Tips:
1. You can also view host labels in the dashboard, though it lags behind by roughly 30 seconds:
https://ceph141:8443/#/hosts
2. As a rule, we label the admin nodes explicitly so that future handovers are easier.
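Note that _admin is a special label: cephadm automatically distributes /etc/ceph/ceph.conf and the client.admin keyring to every host that carries it, which is exactly what we did by hand for ceph142 above. Assuming your cephadm version supports label filtering, you can list the hosts that currently hold it:
[root@ceph141 ~]# ceph orch host ls --label _admin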
Reference:
https://docs.ceph.com/en/latest/cephadm/install/#adding-hosts