Check the Ceph version

root@controller:~# ceph --version
ceph version 12.2.11 (26dc3775efc7bb286a1d6d66faee0ba30ea23eee) luminous (stable)
Check the Ceph-related processes
- The Ceph Manager daemon (ceph-mgr) runs alongside monitor daemons, to provide additional monitoring and interfaces to external monitoring and management systems.
- My devstack and the R&D environment: the process listings were captured as screenshots (omitted; see the sketch below)
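A minimal sketch of listing the running Ceph daemons on a node (ceph-mon / ceph-mgr / ceph-osd are the standard process names; which of them appear depends on the deployment):

root@controller:~# ps -ef | grep ceph            # ceph-mon, ceph-mgr, ceph-osd processes
root@controller:~# systemctl list-units 'ceph*'  # the same daemons as systemd units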
View the storage pools (see the sketch below)
- images: glance's rbd_store_pool; it holds the image files
- vms: nova's images_rbd_pool; it holds the boot disks created from images, while the backing file is still kept in the _base directory
- volumes: cinder's rbd_pool; it holds the volume files
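A minimal sketch of listing the pools (the three pool names come from the glance/nova/cinder settings above; the listing will differ per deployment):

root@controller:~# ceph osd pool ls
images
vms
volumes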
Check the number of PGs in each pool

root@controller:~# ceph osd pool get images pg_num
pg_num: 8
root@controller:~# ceph osd pool get vms pg_num
pg_num: 8
root@controller:~# ceph osd pool get volumes pg_num
pg_num: 8
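The replica count feeds into the per-OSD PG calculation below and can be read the same way (a sketch; on this single-OSD devstack the size is 1):

root@controller:~# ceph osd pool get images size
size: 1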
Check the cluster status
- If errors appear:

root@controller:~# ceph -s
  cluster:
    id:     eab37548-7aef-466a-861c-3757a12ce9e8
    health: HEALTH_WARN
            application not enabled on 1 pool(s)
            too few PGs per OSD (24 < min 30)

  services:
    mon: 1 daemons, quorum controller
    mgr: x(active)
    osd: 1 osds: 1 up, 1 in

  data:
    pools:   3 pools, 24 pgs
    objects: 2 objects, 19B
    usage:   249MiB used, 23.7GiB / 24.0GiB avail
    pgs:     24 active+clean
- How the calculation works: if there were 64 PGs, a replica count of 3 and 9 OSDs, each OSD would hold on average 64 * 3 / 9 = 21 PGs, which is also below the configured minimum of 30 and would likewise trigger the warning; this example only illustrates the formula.
- On this devstack install there are 24 PGs and only one OSD, so the replica count can only be 1; each OSD therefore holds 24 * 1 / 1 = 24 PGs, and since that is below the minimum of 30, the warning is raised.
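The same rule written out as hypothetical shell arithmetic (PGs per OSD = total pg_num * replica size / number of OSDs):

# devstack case: 24 PGs, size 1, 1 OSD -> 24, which is < 30, hence HEALTH_WARN
root@controller:~# echo $(( 24 * 1 / 1 ))
24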
- Fix:
- Raise the pool's pg_num (and pgp_num):

root@controller:~# ceph osd pool set images pg_num 32
set pool 1 pg_num to 32
root@controller:~# ceph osd pool set images pgp_num 32
set pool 1 pgp_num to 32
root@controller:~# ceph osd pool get images pg_num
pg_num: 32

- ceph -s then still reports: application not enabled on 1 pool(s)
- The details can be inspected with ceph health detail (see the sketch after this list)
- Enable the application on the pool:

root@controller:~# ceph osd pool application enable images rbd
enabled application 'rbd' on pool 'images'

- The general form is ceph osd pool application enable <pool-name> <app-name>, where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
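A sketch of drilling into the warning first; ceph health detail expands each HEALTH_WARN item and names the affected pool(s), so you can see exactly which pool still lacks an application tag:

root@controller:~# ceph health detail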
- After the errors are fixed, and again after the cirros image has been uploaded, observe the Ceph cluster status.

1. Cluster status after the errors are fixed:

root@controller:~# ceph -s
  cluster:
    id:     eab37548-7aef-466a-861c-3757a12ce9e8
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum controller
    mgr: x(active)
    osd: 1 osds: 1 up, 1 in

  data:
    pools:   3 pools, 48 pgs
    objects: 2 objects, 19B
    usage:   249MiB used, 23.7GiB / 24.0GiB avail
    pgs:     48 active+clean

(48 = 32 + 8 + 8: images now has 32 PGs, vms and volumes still have 8 each)

2. Cluster status after the cirros image has been uploaded: (screenshot omitted; see the sketch below)
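A minimal sketch of how the upload shows up on the Ceph side (object counts and sizes will differ per deployment):

root@controller:~# ceph df             # per-pool usage; the images pool should now hold data
root@controller:~# rados -p images ls  # raw RADOS objects backing the glance image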
View the OSD tree (see the sketch below)
- After installing the devstack environment with the ceph plugin, there is only one osd
- The eight-node HA environment has 8 osds, each corresponding to one volume
- My devstack (controller is the hostname) and the eight-node HA environment: tree listings captured as screenshots (omitted)
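The command behind those screenshots; ceph osd tree prints the CRUSH hierarchy of hosts and OSDs together with each OSD's up/down status (on my devstack, a single osd.0 under host controller):

root@controller:~# ceph osd tree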
Check the mon status
- My devstack has only one mon
- The eight-node HA environment has 3 mons; a Ceph storage cluster can run with a single monitor, but that monitor then becomes a single point of failure, so to improve reliability and fault tolerance Ceph supports clusters of monitors

My devstack:
root@controller:~# ceph mon stat
e1: 1 mons at {controller=172.16.1.17:6789/0}, election epoch 5, leader 0 controller, quorum 0 controller

Eight-node HA environment:
root@osd-1:~# ceph mon stat
e1: 3 mons at {osd-1=172.16.1.46:6789/0,osd-2=172.16.1.78:6789/0,osd-3=172.16.1.62:6789/0}, election epoch 1298, leader 0 osd-1, quorum 0,1,2 osd-1,osd-3,osd-2
Check the authentication status
- View the keyring files (my devstack):
root@controller:~# ll /etc/ceph/
total 24
drwxr-xr-x 2 root root 4096 Jun 26 16:53 ./
drwxr-xr-x 112 root root 4096 Jun 25 20:36 ../
-rw------- 1 ceph ceph 63 Jun 25 19:31 ceph.client.admin.keyring
-rw-r--r-- 1 stack stack 64 Jun 25 20:19 ceph.client.cinder.keyring
-rw-r--r-- 1 stack stack 64 Jun 25 20:19 ceph.client.glance.keyring
-rw-r--r-- 1 root root 335 Jun 25 19:31 ceph.conf
- View the keyring files (eight-node HA environment):
root@ctl-1:~# ll /etc/ceph/
total 28
drwxr-xr-x 2 root root 4096 Jun 1 10:47 ./
drwxr-xr-x 108 root root 4096 Jun 17 14:59 ../
-rw------- 1 root root 151 May 31 16:49 ceph.client.admin.keyring
-rw-r--r-- 1 cinder cinder 64 Jun 1 10:47 ceph.client.cinder.keyring
-rw-r--r-- 1 glance glance 64 Jun 1 10:41 ceph.client.glance.keyring
-rw-r--r-- 1 root root 297 May 31 16:49 ceph.conf
-rw-r--r-- 1 root root 92 Mar 20 03:51 rbdmap
-rw------- 1 root root 0 May 31 16:49 tmpbq23nn

root@cmp-1:~# ll /etc/ceph/
total 24
drwxr-xr-x 2 root root 4096 Jun 1 13:11 ./
drwxr-xr-x 104 root root 4096 Jun 1 11:16 ../
-rw------- 1 root root 151 May 31 16:53 ceph.client.admin.keyring
-rw-r--r-- 1 root root 64 Jun 1 10:51 ceph.client.cinder.keyring
-rw-r--r-- 1 root root 582 Jun 1 13:11 ceph.conf
-rw-r--r-- 1 root root 92 Mar 20 03:51 rbdmap
-rw------- 1 root root 0 May 31 16:53 tmpKGhaY5

root@osd-1:~# ll /etc/ceph/
total 20
drwxr-xr-x 2 root root 4096 May 31 17:03 ./
drwxr-xr-x 93 root root 4096 May 31 15:30 ../
-rw------- 1 root root 151 May 31 15:55 ceph.client.admin.keyring
-rw-r--r-- 1 root root 297 Jun 1 09:40 ceph.conf
-rw-r--r-- 1 root root 92 Mar 13 01:46 rbdmap
-rw------- 1 root root 0 May 31 15:37 tmp56mnHo
- View the admin key:

root@controller:~# ceph auth get-or-create client.admin
[client.admin]
        key = AQAkBhJdEXpOBxAA4N3g2mW41kxk0I0hd0EF/A==

- List all keys and capabilities (only part of the output is shown here; osd.0 corresponds to the OSD disk):

root@controller:~# ceph auth ls
osd.0
        key: AQAmBhJd2AdSCRAA7RscIgX7OB20WVjbPYDlcw==
        caps: [mon] allow profile osd
        caps: [osd] allow *
client.admin
        key: AQAkBhJdEXpOBxAA4N3g2mW41kxk0I0hd0EF/A==
        caps: [mds] allow *
        caps: [mgr] allow *
        caps: [mon] allow *
        caps: [osd] allow *

- Besides client.admin, the output also lists client.bootstrap-mds, client.bootstrap-osd, client.bootstrap-rbd, client.bootstrap-rgw, client.cinder, client.glance, and mgr.x (x is the mgr daemon's name, cf. "mgr: x(active)" in ceph -s).
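A quick way to inspect one client's capabilities on its own (a sketch; client.glance and client.cinder exist here because their keyrings were created above):

root@controller:~# ceph auth get client.glance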
List rbd images
- For example, after creating the virtual machine c1
- The vms pool also needs its application enabled: root@controller:~# ceph osd pool application enable vms rbd

root@controller:~# rbd ls images
709e0da6-197d-4d0f-a9d3-4e78552137e9

root@controller:~# rbd ls vms
f092dfe4-365b-4dd6-8867-76a311399782_disk
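Beyond listing names, rbd info shows an image's size, object layout and features (a sketch using the IDs listed above; output varies):

root@controller:~# rbd info images/709e0da6-197d-4d0f-a9d3-4e78552137e9
root@controller:~# rbd info vms/f092dfe4-365b-4dd6-8867-76a311399782_disk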
View the mon map
- My devstack (controller is the node name) and the eight-node HA environment: the map dumps were captured as screenshots (omitted; see the sketch below)
- Vocabulary note: dump (verb) to tip out, unload; (noun) a rubbish heap
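The command here is presumably ceph mon dump (hence the vocabulary note); it prints the monmap epoch, the cluster fsid, and each monitor's rank, address and name:

root@controller:~# ceph mon dump
root@osd-1:~# ceph mon dump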