Getting Started with Ceph

A First Look at Ceph

1. Introduction to Ceph

Ceph is a reliable, self-rebalancing, self-healing distributed storage system. By use case it can be split into three major services: object storage, block device storage, and a file system service. In virtualization, the block device storage is the most commonly used part; in OpenStack, for example, Ceph block storage can back Cinder volumes, Glance images, and virtual machine data. Put simply, a Ceph cluster can provide a raw-format block device to serve as the disk of a virtual machine instance.

Compared with other storage systems, Ceph's advantage is that it is not just storage: it also makes full use of the compute power on the storage nodes. Every time a piece of data is stored, its location is determined by calculation, so data is distributed as evenly as possible. Thanks to its design, which uses the CRUSH algorithm and hash-ring-style placement, Ceph has no traditional single point of failure, and performance does not degrade as the cluster scales out.

2. Ceph Core Components and Their Functions

Ceph's core components are Ceph OSD, Ceph Monitor, Ceph Manager, and Ceph MDS.

  • Ceph OSD

OSD stands for Object Storage Device. Its main job is to store, replicate, rebalance, and recover data, and it provides monitoring information to the Ceph Monitors and Managers by checking the heartbeats of other Ceph OSD daemons. At least 3 Ceph OSDs are normally required for redundancy and high availability. Typically one disk maps to one OSD, with the OSD managing that disk, although a single partition can also serve as an OSD.

The Ceph OSD architecture consists of a physical disk drive, a Linux filesystem, and the Ceph OSD service. For the Ceph OSD daemon, the Linux filesystem determines its extensibility in practice. There are several candidate filesystems, such as BTRFS, XFS, and ext4. BTRFS has many attractive features but has not yet reached the stability required for production, so XFS is generally recommended.

Alongside the OSD there is also the concept of a journal disk. When data is written to the Ceph cluster, it is first written to the journal, and every so often (for example every 5 seconds) the journal is flushed to the filesystem. To keep read/write latency low, the journal is usually placed on an SSD and given at least 10 GB, more if it can be spared. Ceph introduced the journal because it lets the OSD complete small writes quickly: a random write first lands in the sequential journal and is later flushed to the filesystem, which gives the filesystem enough time to merge writes to disk. Using an SSD as the OSD journal is an effective buffer against bursty workloads. (Note: the journal applies to the legacy FileStore backend; BlueStore, the default object store since Luminous, manages raw devices directly and uses a RocksDB WAL/DB instead.)

  • Ceph Monitor

The Monitor watches over the Ceph cluster and maintains its health state, as well as the various maps of the cluster, such as the OSD Map, Monitor Map, PG Map, and CRUSH Map. Together these are called the Cluster Map. The Cluster Map is a key RADOS data structure that tracks all cluster members, their relationships and attributes, and how data is distributed. For example, when a client wants to store data in the cluster, it first obtains the latest Cluster Map from a Monitor and then uses the map together with the object id to compute where the data will ultimately be stored (the individual maps can be inspected with the commands below).
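A minimal sketch of inspecting these maps, assuming the commands are run on a node that holds the admin keyring:

# Dump the monitor map
sudo ceph mon dump
# Dump the OSD map and show the OSD tree
sudo ceph osd dump
sudo ceph osd tree
# Placement-group summary (PG map information)
sudo ceph pg stat
# Dump the CRUSH map as JSON
sudo ceph osd crush dump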

  • Ceph Manager

The Ceph Manager daemon (ceph-mgr) runs as a daemon on a host and is responsible for tracking runtime metrics and the current state of the Ceph cluster, including storage utilization, current performance metrics, and system load. The Manager daemon also hosts Python-based modules that manage and expose cluster information, including a web-based Ceph dashboard and a REST API. At least two Managers are normally required for high availability (see the sketch below for working with mgr modules).
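A minimal sketch of exercising the module mechanism from any node with admin access; enabling the dashboard additionally assumes the ceph-mgr-dashboard package is installed on the mgr node:

# List available and enabled mgr modules
sudo ceph mgr module ls
# Enable the web dashboard and list the services exposed by the active mgr
sudo ceph mgr module enable dashboard
sudo ceph mgr services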

  • Ceph MDS

MDS stands for Ceph Metadata Server. It stores the metadata for the file system service (CephFS); object storage and block device storage do not need this service.

3. Ceph Architecture Components

[Figure: the Ceph storage stack (stack.png)]

  • At the bottom of Ceph is RADOS, which is itself a complete distributed object store with reliability, intelligence, and distribution built in. Ceph's high reliability, high scalability, high performance, and high degree of automation are all provided by this layer, and user data is ultimately stored through it. RADOS can be considered the core of Ceph. RADOS consists mainly of two kinds of components: OSDs and Monitors.

  • Above RADOS sits LIBRADOS, a library through which applications can interact with RADOS directly. It supports several programming languages, such as C, C++, and Python. On top of LIBRADOS there are three further layers: RADOSGW, RBD, and CEPH FS.

  • RADOSGW is a gateway that exposes a RESTful API and is compatible with S3 and Swift.

  • RBD provides a distributed block device through a Linux kernel client and a QEMU/KVM driver.

  • CEPHFS provides a POSIX interface that users can mount directly through a client. The kernel client runs in kernel space, so it does not call the user-space librados library; it talks to RADOS through the kernel's network (net) module. (Short client examples for all three interfaces follow this list.)
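A minimal sketch of how a client consumes each of the three interfaces; the pool name, image name, and mount point below are placeholders, and the commands assume the relevant client packages and keyrings are already in place:

# Object storage through the rados CLI (librados): upload and read back an object
rados -p mypool put demo-object /etc/hostname
rados -p mypool get demo-object /tmp/demo-object

# Block storage through RBD: create a 1 GiB image and map it as a local block device
rbd create mypool/demo-image --size 1G
sudo rbd map mypool/demo-image

# CephFS through the kernel client: mount the file system from a monitor address
sudo mount -t ceph 172.31.0.11:6789:/ /mnt/cephfs -o name=admin,secret=<admin-key>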

4. Ceph Data Read/Write Flow


Ceph reads and writes follow a primary-replica model: the client sends read/write requests only to the Primary OSD of the OSD set that the object maps to, which guarantees strong consistency. When the Primary OSD receives a write request for an object, it is responsible for sending the data to the other replicas, and it acknowledges the write only after the data has been persisted on all of the OSDs, which keeps the replicas consistent. This is somewhat similar to how Kafka handles reads and writes.

  • Writing data

Step 1: map the file to objects. Suppose a client wants to store a file. It first derives the object id: oid (object id) = ino + ono, i.e. the inode number (the file's metadata sequence number) plus the object number (the sequence number of the chunk the file was split into). Ceph stores data in chunks, 4 MB per chunk by default.

Step 2: hash the object to a PG in the corresponding pool. The object-to-PG mapping is computed as hash(oid) & mask -> pgid, where the mask is derived from the pool's PG count.

Step 3: map the PG to OSDs with CRUSH. The PG-to-OSD mapping is computed by the CRUSH algorithm: CRUSH(pgid) -> (osd1, osd2, osd3).

Step 4: the primary OSD of the PG writes the object to disk.

Step 5: the primary OSD replicates the data to the secondary OSDs and waits for their acknowledgements.

Step 6: the primary OSD returns the write completion to the client.

  • Reading data

To read data, the client performs the same addressing steps and contacts the primary OSD directly. In the current Ceph design, reads are served by the Primary OSD by default, but Ceph can be configured to allow reads from other OSDs to spread the read load and improve performance. The mapping for a concrete object can be displayed with the command shown below.
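A minimal sketch of asking the cluster to show this addressing; mypool and demo-object are placeholders, and the pool must already exist:

# Show which PG and which acting OSD set an object maps to
sudo ceph osd map mypool demo-object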

5. Ceph Cluster Deployment

5.1 Server Planning

172.31.0.10 ceph-deploy.example.local ceph-deploy 2c2g 30G*1
172.31.0.11 ceph-mon1.example.local ceph-mon1     2c2g 30G*1
172.31.0.12 ceph-mon2.example.local ceph-mon2     2c2g 30G*1
172.31.0.13 ceph-mon3.example.local ceph-mon3     2c2g 30G*1
172.31.0.14 ceph-mgr1.example.local ceph-mgr1     2c2g 30G*1
172.31.0.15 ceph-mgr2.example.local ceph-mgr2     2c2g 30G*1
172.31.0.16 ceph-node1.example.local ceph-node1   2c2g 30G*1 10G*4
172.31.0.17 ceph-node2.example.local ceph-node2   2c2g 30G*1 10G*4
172.31.0.18 ceph-node3.example.local ceph-node3   2c2g 30G*1 10G*4
172.31.0.19 ceph-node4.example.local ceph-node4   2c2g 30G*1 10G*4

# Ceph version
ubuntu@ceph-deploy:~/ceph-cluster$ ceph --version
ceph version 15.2.14 (cd3bb7e87a2f62c1b862ff3fd8b1eec13391a5be) octopus (stable)

# OS version
Ubuntu 18.04

5.2 Server Initialization

Network interface configuration

# Each server has two networks: the public network is for client access, the cluster network is for cluster management and data replication
public 172.31.0.0/24
cluster 192.168.10.0/24

Initialization steps

# First initialize one machine as a template, then clone it for the other servers
# 1. Configure the network interfaces
# Rename the NIC devices to the conventional eth0 naming
sudo vim /etc/default/grub
GRUB_CMDLINE_LINUX=""
# Change to:
GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0"
# Regenerate the boot configuration
sudo grub-mkconfig -o /boot/grub/grub.cfg

# Disable IPv6
root@devops:~# cat > /etc/sysctl.conf <<EOF
# Disable IPv6
net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.default.disable_ipv6=1
net.ipv6.conf.lo.disable_ipv6=1
EOF
root@devops:~# sysctl -p
# Configure both NICs
root@devops:~# cat /etc/netplan/10-netcfg.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: no
      dhcp6: no
      addresses: [172.31.0.10/24]
      gateway4: 172.31.0.2
      nameservers:
        addresses: [223.5.5.5,223.6.6.6,114.114.114.114]
    eth1:
      dhcp4: no
      dhcp6: no
      addresses: [192.168.10.10/24]
      
root@devops:~# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:d5:be:99 brd ff:ff:ff:ff:ff:ff
    inet 172.31.0.10/24 brd 172.31.0.255 scope global eth0
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:d5:be:a3 brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.10/24 brd 192.168.10.255 scope global eth1
       valid_lft forever preferred_lft forever

# 2. Set up passwordless SSH login
ubuntu@devops:~$ ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
ubuntu@devops:~$ cat ~/.ssh/id_rsa.pub > ~/.ssh/authorized_keys
# Disable the SSH host-key confirmation prompt
sudo sed -i '/ask/{s/#//;s/ask/no/}' /etc/ssh/ssh_config

# 3. Switch apt to a local mirror
sudo mv /etc/apt/{sources.list,sources.list.old}
sudo tee /etc/apt/sources.list > /dev/null <<EOF
# The deb-src entries are commented out by default to speed up apt update; uncomment them if needed
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-updates main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-updates main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-backports main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-backports main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-security main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-security main restricted universe multiverse
EOF
# Add the Ceph repository
ubuntu@devops:~$ wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
ubuntu@devops:~$ sudo apt-add-repository 'deb https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-octopus/ bionic main'

sudo apt update
sudo apt upgrade
# 4. Install common tools
sudo apt install net-tools vim wget git build-essential -y

# 5. Raise system resource limits
sudo tee -a /etc/security/limits.conf > /dev/null <<EOF
* soft     nproc          102400
* hard     nproc          102400
* soft     nofile         102400
* hard     nofile         102400

root soft     nproc          102400
root hard     nproc          102400
root soft     nofile         102400
root hard     nofile         102400
EOF

# 6. Time synchronization
sudo apt update
sudo apt install chrony -y
sudo vim /etc/chrony/chrony.conf
# Switch to the Alibaba Cloud NTP servers
# Public network
server ntp.aliyun.com minpoll 4 maxpoll 10 iburst
server ntp1.aliyun.com minpoll 4 maxpoll 10 iburst
server ntp2.aliyun.com minpoll 4 maxpoll 10 iburst
server ntp3.aliyun.com minpoll 4 maxpoll 10 iburst
server ntp4.aliyun.com minpoll 4 maxpoll 10 iburst
server ntp5.aliyun.com minpoll 4 maxpoll 10 iburst
server ntp6.aliyun.com minpoll 4 maxpoll 10 iburst
server ntp7.aliyun.com minpoll 4 maxpoll 10 iburst

# Restart the chrony service
sudo systemctl restart chrony
sudo systemctl status chrony
sudo systemctl enable chrony
# Check whether chrony is active
sudo chronyc activity
# Check the time synchronization status
sudo timedatectl status
# Write the system time to the hardware clock
sudo hwclock -w

# Reboot the server
sudo reboot
# Environment preparation before deploying ceph
# Set the hostnames and install Python 2
for host in ceph-{deploy,mon1,mon2,mon3,mgr1,mgr2,node1,node2,node3,node4}
do
   ssh ubuntu@${host} "sudo sed -ri '/ceph/d' /etc/hosts"
   # Set the hostname
   ssh ubuntu@${host} "sudo hostnamectl set-hostname ${host}"
   # Add /etc/hosts entries
   ssh ubuntu@${host} "echo \"172.31.0.10 ceph-deploy.example.local ceph-deploy\" |sudo tee -a /etc/hosts" 
   ssh ubuntu@${host} "echo \"172.31.0.11 ceph-mon1.example.local ceph-mon1\" |sudo tee -a /etc/hosts" 
   ssh ubuntu@${host} "echo \"172.31.0.12 ceph-mon2.example.local ceph-mon2\" |sudo tee -a /etc/hosts" 
   ssh ubuntu@${host} "echo \"172.31.0.13 ceph-mon3.example.local ceph-mon3\" |sudo tee -a /etc/hosts" 
   ssh ubuntu@${host} "echo \"172.31.0.14 ceph-mgr1.example.local ceph-mgr1\" |sudo tee -a /etc/hosts" 
   ssh ubuntu@${host} "echo \"172.31.0.15 ceph-mgr2.example.local ceph-mgr2\" |sudo tee -a /etc/hosts" 
   ssh ubuntu@${host} "echo \"172.31.0.16 ceph-node1.example.local ceph-node1\" |sudo tee -a /etc/hosts" 
   ssh ubuntu@${host} "echo \"172.31.0.17 ceph-node2.example.local ceph-node2\" |sudo tee -a /etc/hosts" 
   ssh ubuntu@${host} "echo \"172.31.0.18 ceph-node3.example.local ceph-node3\" |sudo tee -a /etc/hosts" 
   ssh ubuntu@${host} "echo \"172.31.0.19 ceph-node4.example.local ceph-node4\" |sudo tee -a /etc/hosts"
   # The ceph deployment depends on a Python 2 environment on each node
   ssh ubuntu@${host} "sudo apt install python2.7 -y"
   ssh ubuntu@${host} "sudo ln -sv /usr/bin/python2.7 /usr/bin/python2"
   # Install ceph-common on every node so that ceph admin commands can be run later
   ssh ubuntu@${host} "sudo apt install ceph-common -y"
done

# ceph-deploy must log in to the Ceph nodes as a regular user that has passwordless sudo, because it installs packages and writes configuration files and must not be prompted for a password.
for host in ceph-{deploy,mon1,mon2,mon3,mgr1,mgr2,node1,node2,node3,node4}
do
   ssh ubuntu@${host} "echo \"ubuntu ALL = (root) NOPASSWD:ALL\" | sudo tee /etc/sudoers.d/ubuntu"
   ssh ubuntu@${host} "sudo chmod 0440 /etc/sudoers.d/ubuntu"
done

5.3 Install the Ceph Deployment Tool

# 1. Install ceph-deploy
ubuntu@ceph-deploy:~$ sudo apt-cache madison ceph-deploy
ceph-deploy |      2.0.1 | https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-octopus bionic/main amd64 Packages
ceph-deploy | 1.5.38-0ubuntu1 | https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/universe amd64 Packages
ubuntu@ceph-deploy:~$ sudo apt install ceph-deploy python-setuptools -y

# It is recommended to deploy and run the ceph cluster as a dedicated regular user that can execute privileged commands via non-interactive sudo. Newer versions of ceph-deploy accept any sudo-capable user, including root, but a regular user such as ceph, cephuser or cephadmin is still recommended for managing the cluster.

# Allow passwordless SSH logins
# Edit ~/.ssh/config on the ceph-deploy admin node so that ceph-deploy can log in to the Ceph nodes as the user you created,
# without specifying --username {username} on every ceph-deploy run. This also simplifies the use of ssh and scp.
# Replace {username} with the user you created.

ubuntu@ceph-deploy:~$ cat > ~/.ssh/config << EOF
Host ceph-mon1
   Hostname ceph-mon1
   User ubuntu
Host ceph-mon2
   Hostname ceph-mon2
   User ubuntu
Host ceph-mon3
   Hostname ceph-mon3
   User ubuntu
Host ceph-mgr1
   Hostname ceph-mgr1
   User ubuntu
Host ceph-mgr2
   Hostname ceph-mgr2
   User ubuntu
Host ceph-node1
   Hostname ceph-node1
   User ubuntu
Host ceph-node2
   Hostname ceph-node2
   User ubuntu
Host ceph-node3
   Hostname ceph-node3
   User ubuntu
Host ceph-node4
   Hostname ceph-node4
   User ubuntu
EOF
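Before bootstrapping the cluster it is worth confirming that passwordless SSH and passwordless sudo work on every node. A minimal sketch of such a check, run from the ceph-deploy node with the host names planned above:

for host in ceph-{mon1,mon2,mon3,mgr1,mgr2,node1,node2,node3,node4}
do
   # BatchMode fails instead of prompting; sudo -n fails instead of asking for a password
   ssh -o BatchMode=yes ${host} "hostname && sudo -n true && echo sudo-ok"
done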

5.4 Cluster Setup

# First create a directory on the admin node to hold the configuration files and keys generated by ceph-deploy.
ubuntu@ceph-deploy:~$ mkdir ceph-cluster
ubuntu@ceph-deploy:~$ cd ceph-cluster
# Bootstrap a new ceph storage cluster, generating the cluster configuration file (ceph.conf) and the keyring authentication file.
ubuntu@ceph-deploy:~/ceph-cluster$ ceph-deploy new --cluster-network 192.168.10.0/24 --public-network 172.31.0.0/24 ceph-mon1
[ceph_deploy][ERROR ] Traceback (most recent call last):
[ceph_deploy][ERROR ]   File "/usr/lib/python2.7/dist-packages/ceph_deploy/util/decorators.py", line 69, in newfunc
[ceph_deploy][ERROR ]     return f(*a, **kw)
[ceph_deploy][ERROR ]   File "/usr/lib/python2.7/dist-packages/ceph_deploy/cli.py", line 147, in _main
[ceph_deploy][ERROR ]     fh = logging.FileHandler('ceph-deploy-{cluster}.log'.format(cluster=args.cluster))
[ceph_deploy][ERROR ]   File "/usr/lib/python2.7/logging/__init__.py", line 920, in __init__
[ceph_deploy][ERROR ]     StreamHandler.__init__(self, self._open())
[ceph_deploy][ERROR ]   File "/usr/lib/python2.7/logging/__init__.py", line 950, in _open
[ceph_deploy][ERROR ]     stream = open(self.baseFilename, self.mode)
[ceph_deploy][ERROR ] IOError: [Errno 13] Permission denied: '/home/ubuntu/ceph-cluster/ceph-deploy-ceph.log'
[ceph_deploy][ERROR ]
# Just delete ceph-deploy-ceph.log here: the first run used sudo ceph-deploy new, which left the log file owned by root. Since passwordless sudo is configured, rerunning as the regular user works fine.
ubuntu@ceph-deploy:~/ceph-cluster$ sudo rm -rf ceph-deploy-ceph.log 
ubuntu@ceph-deploy:~/ceph-cluster$ ceph-deploy new --cluster-network 192.168.10.0/24 --public-network 172.31.0.0/24 ceph-mon1
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ubuntu/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy new --cluster-network 192.168.10.0/24 --public-network 172.31.0.0/24 ceph-mon1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f61b116fe10>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  ssh_copykey                   : True
[ceph_deploy.cli][INFO  ]  mon                           : ['ceph-mon1']
[ceph_deploy.cli][INFO  ]  func                          : <function new at 0x7f61ae529ad0>
[ceph_deploy.cli][INFO  ]  public_network                : 172.31.0.0/24
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  cluster_network               : 192.168.10.0/24
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  fsid                          : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[ceph-mon1][DEBUG ] connected to host: ceph-deploy 
[ceph-mon1][INFO  ] Running command: ssh -CT -o BatchMode=yes ceph-mon1
[ceph-mon1][DEBUG ] connection detected need for sudo
[ceph-mon1][DEBUG ] connected to host: ceph-mon1 
[ceph-mon1][DEBUG ] detect platform information from remote host
[ceph-mon1][DEBUG ] detect machine type
[ceph-mon1][DEBUG ] find the location of an executable
[ceph-mon1][INFO  ] Running command: sudo /bin/ip link show
[ceph-mon1][INFO  ] Running command: sudo /bin/ip addr show
[ceph-mon1][DEBUG ] IP addresses found: [u'172.31.0.11', u'192.168.10.11']
[ceph_deploy.new][DEBUG ] Resolving host ceph-mon1
[ceph_deploy.new][DEBUG ] Monitor ceph-mon1 at 172.31.0.11
[ceph_deploy.new][DEBUG ] Monitor initial members are ['ceph-mon1']
[ceph_deploy.new][DEBUG ] Monitor addrs are [u'172.31.0.11']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...

ubuntu@ceph-deploy:~/ceph-cluster$ ls -l
total 12
-rw-rw-r-- 1 ubuntu ubuntu 3327 Aug 16 05:13 ceph-deploy-ceph.log
-rw-rw-r-- 1 ubuntu ubuntu  263 Aug 16 05:13 ceph.conf
-rw------- 1 ubuntu ubuntu   73 Aug 16 05:13 ceph.mon.keyring
ubuntu@ceph-deploy:~/ceph-cluster$ cat ceph.conf
[global]
fsid = b7c42944-dd49-464e-a06a-f3a466b79eb4
public_network = 172.31.0.0/24
cluster_network = 192.168.10.0/24
mon_initial_members = ceph-mon1
mon_host = 172.31.0.11
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

5.4.1 Install ceph-mon

# Install ceph-mon on the mon node
ubuntu@ceph-mon1:~$ sudo apt install ceph-mon -y

# Initialize the mon
ubuntu@ceph-deploy:~/ceph-cluster$ ceph-deploy mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ubuntu/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mon create-initial
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create-initial
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7ffb094d5fa0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mon at 0x7ffb094b9ad0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  keyrings                      : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ceph-mon1
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph-mon1 ...
[ceph-mon1][DEBUG ] connection detected need for sudo
[ceph-mon1][DEBUG ] connected to host: ceph-mon1 
[ceph-mon1][DEBUG ] detect platform information from remote host
[ceph-mon1][DEBUG ] detect machine type
[ceph-mon1][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: Ubuntu 18.04 bionic
[ceph-mon1][DEBUG ] determining if provided host has same hostname in remote
[ceph-mon1][DEBUG ] get remote short hostname
[ceph-mon1][DEBUG ] deploying mon to ceph-mon1
[ceph-mon1][DEBUG ] get remote short hostname
[ceph-mon1][DEBUG ] remote hostname: ceph-mon1
[ceph-mon1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-mon1][DEBUG ] create the mon path if it does not exist
[ceph-mon1][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph-mon1/done
[ceph-mon1][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-ceph-mon1/done
[ceph-mon1][INFO  ] creating keyring file: /var/lib/ceph/tmp/ceph-ceph-mon1.mon.keyring
[ceph-mon1][DEBUG ] create the monitor keyring file
[ceph-mon1][INFO  ] Running command: sudo ceph-mon --cluster ceph --mkfs -i ceph-mon1 --keyring /var/lib/ceph/tmp/ceph-ceph-mon1.mon.keyring --setuser 64045 --setgroup 64045
[ceph-mon1][INFO  ] unlinking keyring file /var/lib/ceph/tmp/ceph-ceph-mon1.mon.keyring
[ceph-mon1][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph-mon1][DEBUG ] create the init path if it does not exist
[ceph-mon1][INFO  ] Running command: sudo systemctl enable ceph.target
[ceph-mon1][INFO  ] Running command: sudo systemctl enable ceph-mon@ceph-mon1
[ceph-mon1][WARNIN] Created symlink /etc/systemd/system/ceph-mon.target.wants/ceph-mon@ceph-mon1.service → /lib/systemd/system/ceph-mon@.service.
[ceph-mon1][INFO  ] Running command: sudo systemctl start ceph-mon@ceph-mon1
[ceph-mon1][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-mon1.asok mon_status
[ceph-mon1][DEBUG ] ********************************************************************************
[ceph-mon1][DEBUG ] status for monitor: mon.ceph-mon1
[ceph-mon1][DEBUG ] {
[ceph-mon1][DEBUG ]   "election_epoch": 3, 
[ceph-mon1][DEBUG ]   "extra_probe_peers": [], 
[ceph-mon1][DEBUG ]   "feature_map": {
[ceph-mon1][DEBUG ]     "mon": [
[ceph-mon1][DEBUG ]       {
[ceph-mon1][DEBUG ]         "features": "0x3f01cfb8ffedffff", 
[ceph-mon1][DEBUG ]         "num": 1, 
[ceph-mon1][DEBUG ]         "release": "luminous"
[ceph-mon1][DEBUG ]       }
[ceph-mon1][DEBUG ]     ]
[ceph-mon1][DEBUG ]   }, 
[ceph-mon1][DEBUG ]   "features": {
[ceph-mon1][DEBUG ]     "quorum_con": "4540138292840890367", 
[ceph-mon1][DEBUG ]     "quorum_mon": [
[ceph-mon1][DEBUG ]       "kraken", 
[ceph-mon1][DEBUG ]       "luminous", 
[ceph-mon1][DEBUG ]       "mimic", 
[ceph-mon1][DEBUG ]       "osdmap-prune", 
[ceph-mon1][DEBUG ]       "nautilus", 
[ceph-mon1][DEBUG ]       "octopus"
[ceph-mon1][DEBUG ]     ], 
[ceph-mon1][DEBUG ]     "required_con": "2449958747315978244", 
[ceph-mon1][DEBUG ]     "required_mon": [
[ceph-mon1][DEBUG ]       "kraken", 
[ceph-mon1][DEBUG ]       "luminous", 
[ceph-mon1][DEBUG ]       "mimic", 
[ceph-mon1][DEBUG ]       "osdmap-prune", 
[ceph-mon1][DEBUG ]       "nautilus", 
[ceph-mon1][DEBUG ]       "octopus"
[ceph-mon1][DEBUG ]     ]
[ceph-mon1][DEBUG ]   }, 
[ceph-mon1][DEBUG ]   "monmap": {
[ceph-mon1][DEBUG ]     "created": "2021-08-16T06:26:05.290405Z", 
[ceph-mon1][DEBUG ]     "epoch": 1, 
[ceph-mon1][DEBUG ]     "features": {
[ceph-mon1][DEBUG ]       "optional": [], 
[ceph-mon1][DEBUG ]       "persistent": [
[ceph-mon1][DEBUG ]         "kraken", 
[ceph-mon1][DEBUG ]         "luminous", 
[ceph-mon1][DEBUG ]         "mimic", 
[ceph-mon1][DEBUG ]         "osdmap-prune", 
[ceph-mon1][DEBUG ]         "nautilus", 
[ceph-mon1][DEBUG ]         "octopus"
[ceph-mon1][DEBUG ]       ]
[ceph-mon1][DEBUG ]     }, 
[ceph-mon1][DEBUG ]     "fsid": "b7c42944-dd49-464e-a06a-f3a466b79eb4", 
[ceph-mon1][DEBUG ]     "min_mon_release": 15, 
[ceph-mon1][DEBUG ]     "min_mon_release_name": "octopus", 
[ceph-mon1][DEBUG ]     "modified": "2021-08-16T06:26:05.290405Z", 
[ceph-mon1][DEBUG ]     "mons": [
[ceph-mon1][DEBUG ]       {
[ceph-mon1][DEBUG ]         "addr": "172.31.0.11:6789/0", 
[ceph-mon1][DEBUG ]         "name": "ceph-mon1", 
[ceph-mon1][DEBUG ]         "priority": 0, 
[ceph-mon1][DEBUG ]         "public_addr": "172.31.0.11:6789/0", 
[ceph-mon1][DEBUG ]         "public_addrs": {
[ceph-mon1][DEBUG ]           "addrvec": [
[ceph-mon1][DEBUG ]             {
[ceph-mon1][DEBUG ]               "addr": "172.31.0.11:3300", 
[ceph-mon1][DEBUG ]               "nonce": 0, 
[ceph-mon1][DEBUG ]               "type": "v2"
[ceph-mon1][DEBUG ]             }, 
[ceph-mon1][DEBUG ]             {
[ceph-mon1][DEBUG ]               "addr": "172.31.0.11:6789", 
[ceph-mon1][DEBUG ]               "nonce": 0, 
[ceph-mon1][DEBUG ]               "type": "v1"
[ceph-mon1][DEBUG ]             }
[ceph-mon1][DEBUG ]           ]
[ceph-mon1][DEBUG ]         }, 
[ceph-mon1][DEBUG ]         "rank": 0, 
[ceph-mon1][DEBUG ]         "weight": 0
[ceph-mon1][DEBUG ]       }
[ceph-mon1][DEBUG ]     ]
[ceph-mon1][DEBUG ]   }, 
[ceph-mon1][DEBUG ]   "name": "ceph-mon1", 
[ceph-mon1][DEBUG ]   "outside_quorum": [], 
[ceph-mon1][DEBUG ]   "quorum": [
[ceph-mon1][DEBUG ]     0
[ceph-mon1][DEBUG ]   ], 
[ceph-mon1][DEBUG ]   "quorum_age": 1, 
[ceph-mon1][DEBUG ]   "rank": 0, 
[ceph-mon1][DEBUG ]   "state": "leader", 
[ceph-mon1][DEBUG ]   "sync_provider": []
[ceph-mon1][DEBUG ] }
[ceph-mon1][DEBUG ] ********************************************************************************
[ceph-mon1][INFO  ] monitor: mon.ceph-mon1 is running
[ceph-mon1][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-mon1.asok mon_status
[ceph_deploy.mon][INFO  ] processing monitor mon.ceph-mon1
[ceph-mon1][DEBUG ] connection detected need for sudo
[ceph-mon1][DEBUG ] connected to host: ceph-mon1 
[ceph-mon1][DEBUG ] detect platform information from remote host
[ceph-mon1][DEBUG ] detect machine type
[ceph-mon1][DEBUG ] find the location of an executable
[ceph-mon1][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-mon1.asok mon_status
[ceph_deploy.mon][INFO  ] mon.ceph-mon1 monitor has reached quorum!
[ceph_deploy.mon][INFO  ] all initial monitors are running and have formed quorum
[ceph_deploy.mon][INFO  ] Running gatherkeys...
[ceph_deploy.gatherkeys][INFO  ] Storing keys in temp directory /tmp/tmpzxBtYk
[ceph-mon1][DEBUG ] connection detected need for sudo
[ceph-mon1][DEBUG ] connected to host: ceph-mon1 
[ceph-mon1][DEBUG ] detect platform information from remote host
[ceph-mon1][DEBUG ] detect machine type
[ceph-mon1][DEBUG ] get remote short hostname
[ceph-mon1][DEBUG ] fetch remote file
[ceph-mon1][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.ceph-mon1.asok mon_status
[ceph-mon1][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-mon1/keyring auth get client.admin
[ceph-mon1][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-mon1/keyring auth get client.bootstrap-mds
[ceph-mon1][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-mon1/keyring auth get client.bootstrap-mgr
[ceph-mon1][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-mon1/keyring auth get client.bootstrap-osd
[ceph-mon1][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-mon1/keyring auth get client.bootstrap-rgw
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mgr.keyring
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO  ] Destroy temp directory /tmp/tmpzxBtYk

# Check the result
ubuntu@ceph-mon1:~$ ps -ef |grep ceph
root       9293      1  0 06:15 ?        00:00:00 /usr/bin/python3.6 /usr/bin/ceph-crash
ceph      13339      1  0 06:26 ?        00:00:00 /usr/bin/ceph-mon -f --cluster ceph --id ceph-mon1 --setuser ceph --setgroup ceph
ubuntu    13911  13898  0 06:28 pts/0    00:00:00 grep --color=auto ceph
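With the first monitor running, its state can also be queried directly on the mon node through the admin socket (no admin keyring required); a minimal sketch:

# Query the local monitor over its admin socket
ubuntu@ceph-mon1:~$ sudo ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-mon1.asok mon_status
# Or, from any node that already has the admin keyring:
sudo ceph mon stat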

5.4.2 Distribute the Ceph Keys

# On the ceph-deploy node, copy the configuration file and the admin keyring to every node in the Ceph cluster that needs to run ceph administration commands, so that you do not have to specify the ceph-mon address and the ceph.client.admin.keyring file every time the ceph command is used to manage the cluster. The ceph-mon nodes also need the cluster configuration file and authentication file synchronized to them.

ubuntu@ceph-deploy:~/ceph-cluster$ ceph-deploy admin ceph-deploy ceph-node{1,2,3,4}
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ubuntu/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy admin ceph-node1 ceph-node2 ceph-node3 ceph-node4
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fc9e52ec190>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['ceph-node1', 'ceph-node2', 'ceph-node3', 'ceph-node4']
[ceph_deploy.cli][INFO  ]  func                          : <function admin at 0x7fc9e5befa50>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-node1
[ceph-node1][DEBUG ] connection detected need for sudo
[ceph-node1][DEBUG ] connected to host: ceph-node1 
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph-node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-node2
[ceph-node2][DEBUG ] connection detected need for sudo
[ceph-node2][DEBUG ] connected to host: ceph-node2 
[ceph-node2][DEBUG ] detect platform information from remote host
[ceph-node2][DEBUG ] detect machine type
[ceph-node2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-node3
[ceph-node3][DEBUG ] connection detected need for sudo
[ceph-node3][DEBUG ] connected to host: ceph-node3 
[ceph-node3][DEBUG ] detect platform information from remote host
[ceph-node3][DEBUG ] detect machine type
[ceph-node3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-node4
[ceph-node4][DEBUG ] connection detected need for sudo
[ceph-node4][DEBUG ] connected to host: ceph-node4 
[ceph-node4][DEBUG ] detect platform information from remote host
[ceph-node4][DEBUG ] detect machine type
[ceph-node4][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
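The admin keyring pushed to /etc/ceph/ceph.client.admin.keyring is readable only by root, which is why the ceph commands below are run with sudo. As an optional convenience, not part of the original steps, the keyring can be made readable by the management user on a given node:

# Optional (lab convenience only): allow non-root reads of the admin keyring
sudo chmod +r /etc/ceph/ceph.client.admin.keyring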

We have now installed one ceph-mon; let's look at the result.

ubuntu@ceph-deploy:~/ceph-cluster$ sudo ceph -s
  cluster:
    id:     b7c42944-dd49-464e-a06a-f3a466b79eb4
    health: HEALTH_WARN
            mon is allowing insecure global_id reclaim
 
  services:
    mon: 1 daemons, quorum ceph-mon1 (age 47m)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

5.4.3 Install ceph-mgr

Ceph has had manager daemons since the Luminous release; earlier versions did not.

Deploy the ceph-mgr node

# Install ceph-mgr on the manager node
ubuntu@ceph-mgr1:~$ sudo apt install ceph-mgr -y

# Add the ceph-mgr from the ceph-deploy node
# The mgr node needs to read the ceph configuration files in /etc/ceph
ubuntu@ceph-deploy:~/ceph-cluster$ ceph-deploy mgr create ceph-mgr1
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ubuntu/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mgr create ceph-mgr1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  mgr                           : [('ceph-mgr1', 'ceph-mgr1')]
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f5cf08f4c30>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mgr at 0x7f5cf0d54150>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts ceph-mgr1:ceph-mgr1
[ceph-mgr1][DEBUG ] connection detected need for sudo
[ceph-mgr1][DEBUG ] connected to host: ceph-mgr1 
[ceph-mgr1][DEBUG ] detect platform information from remote host
[ceph-mgr1][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO  ] Distro info: Ubuntu 18.04 bionic
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to ceph-mgr1
[ceph-mgr1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-mgr1][WARNIN] mgr keyring does not exist yet, creating one
[ceph-mgr1][DEBUG ] create a keyring file
[ceph-mgr1][DEBUG ] create path recursively if it doesn't exist
[ceph-mgr1][INFO  ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.ceph-mgr1 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-ceph-mgr1/keyring
[ceph-mgr1][INFO  ] Running command: sudo systemctl enable ceph-mgr@ceph-mgr1
[ceph-mgr1][WARNIN] Created symlink /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@ceph-mgr1.service → /lib/systemd/system/ceph-mgr@.service.
[ceph-mgr1][INFO  ] Running command: sudo systemctl start ceph-mgr@ceph-mgr1
[ceph-mgr1][INFO  ] Running command: sudo systemctl enable ceph.target

# Check the result on the ceph-mgr1 node
ubuntu@ceph-mgr1:~$ sudo ps -ef |grep ceph
root      10148      1  0 07:30 ?        00:00:00 /usr/bin/python3.6 /usr/bin/ceph-crash
ceph      15202      1 14 07:32 ?        00:00:04 /usr/bin/ceph-mgr -f --cluster ceph --id ceph-mgr1 --setuser ceph --setgroup ceph
ubuntu    15443  15430  0 07:33 pts/0    00:00:00 grep --color=auto ceph

# Check the result with the ceph command
ubuntu@ceph-deploy:~/ceph-cluster$ sudo ceph -s
  cluster:
    id:     b7c42944-dd49-464e-a06a-f3a466b79eb4
    health: HEALTH_WARN
            mon is allowing insecure global_id reclaim  # insecure global_id reclaim must be disabled (see the fix below)
            OSD count 0 < osd_pool_default_size 3       # the cluster has fewer than 3 OSDs
 
  services:
    mon: 1 daemons, quorum ceph-mon1 (age 70m)
    mgr: ceph-mgr1(active, since 3m)
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
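The first warning can be cleared once all daemons and clients are up to date; a minimal sketch of the usual fix, run from any node that has the admin keyring:

# Stop the monitors from accepting insecure global_id reclaim
sudo ceph config set mon auth_allow_insecure_global_id_reclaim false

The second warning goes away once at least three OSDs are added in the next step.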

5.4.4 Install ceph-osd

Before adding OSDs, initialize the basic environment on the OSD nodes.

Being a storage node simply means having the ceph and ceph-radosgw packages installed on it. Using the default upstream repository can make this initialization time out because of network issues, so it is recommended to switch the ceph repository on each storage node to a domestic mirror such as Alibaba or Tsinghua.

ubuntu@ceph-deploy:~/ceph-cluster$ ceph-deploy install --no-adjust-repos --nogpgcheck ceph-node1 ceph-node2 ceph-node3 ceph-node4
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ubuntu/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy install --no-adjust-repos --nogpgcheck ceph-node1 ceph-node2 ceph-node3 ceph-node4
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  testing                       : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f6d9495ac30>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  dev_commit                    : None
[ceph_deploy.cli][INFO  ]  install_mds                   : False
[ceph_deploy.cli][INFO  ]  stable                        : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  adjust_repos                  : False
[ceph_deploy.cli][INFO  ]  func                          : <function install at 0x7f6d9520ca50>
[ceph_deploy.cli][INFO  ]  install_mgr                   : False
[ceph_deploy.cli][INFO  ]  install_all                   : False
[ceph_deploy.cli][INFO  ]  repo                          : False
[ceph_deploy.cli][INFO  ]  host                          : ['ceph-node1', 'ceph-node2', 'ceph-node3', 'ceph-node4']
[ceph_deploy.cli][INFO  ]  install_rgw                   : False
[ceph_deploy.cli][INFO  ]  install_tests                 : False
[ceph_deploy.cli][INFO  ]  repo_url                      : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  install_osd                   : False
[ceph_deploy.cli][INFO  ]  version_kind                  : stable
[ceph_deploy.cli][INFO  ]  install_common                : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  dev                           : master
[ceph_deploy.cli][INFO  ]  nogpgcheck                    : True
[ceph_deploy.cli][INFO  ]  local_mirror                  : None
[ceph_deploy.cli][INFO  ]  release                       : None
[ceph_deploy.cli][INFO  ]  install_mon                   : False
[ceph_deploy.cli][INFO  ]  gpg_url                       : None
[ceph_deploy.install][DEBUG ] Installing stable version mimic on cluster ceph hosts ceph-node1 ceph-node2 ceph-node3 ceph-node4
[ceph_deploy.install][DEBUG ] Detecting platform for host ceph-node1 ...
[ceph-node1][DEBUG ] connection detected need for sudo
[ceph-node1][DEBUG ] connected to host: ceph-node1 
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph_deploy.install][INFO  ] Distro info: Ubuntu 18.04 bionic
[ceph-node1][INFO  ] installing Ceph on ceph-node1
[ceph-node1][INFO  ] Running command: sudo env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q update
[ceph-node1][DEBUG ] Hit:1 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic InRelease
[ceph-node1][DEBUG ] Get:2 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates InRelease [88.7 kB]
[ceph-node1][DEBUG ] Get:3 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-backports InRelease [74.6 kB]
[ceph-node1][DEBUG ] Get:4 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-security InRelease [88.7 kB]
[ceph-node1][DEBUG ] Hit:5 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-octopus bionic InRelease
[ceph-node1][DEBUG ] Fetched 252 kB in 1s (326 kB/s)
[ceph-node1][DEBUG ] Reading package lists...
[ceph-node1][INFO  ] Running command: sudo env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q --no-install-recommends install ca-certificates apt-transport-https
[ceph-node1][DEBUG ] Reading package lists...
[ceph-node1][DEBUG ] Building dependency tree...
[ceph-node1][DEBUG ] Reading state information...
[ceph-node1][DEBUG ] ca-certificates is already the newest version (20210119~18.04.1).
[ceph-node1][DEBUG ] ca-certificates set to manually installed.
[ceph-node1][DEBUG ] The following NEW packages will be installed:
[ceph-node1][DEBUG ]   apt-transport-https
[ceph-node1][DEBUG ] 0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
[ceph-node1][DEBUG ] Need to get 4348 B of archives.
[ceph-node1][DEBUG ] After this operation, 154 kB of additional disk space will be used.
[ceph-node1][DEBUG ] Get:1 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates/universe amd64 apt-transport-https all 1.6.14 [4348 B]
[ceph-node1][DEBUG ] Fetched 4348 B in 0s (28.8 kB/s)
[ceph-node1][DEBUG ] Selecting previously unselected package apt-transport-https.
(Reading database ... 109531 files and directories currently installed.)
[ceph-node1][DEBUG ] Preparing to unpack .../apt-transport-https_1.6.14_all.deb ...
[ceph-node1][DEBUG ] Unpacking apt-transport-https (1.6.14) ...
[ceph-node1][DEBUG ] Setting up apt-transport-https (1.6.14) ...
[ceph-node1][INFO  ] Running command: sudo env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q update
[ceph-node1][DEBUG ] Hit:1 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic InRelease
[ceph-node1][DEBUG ] Hit:2 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates InRelease
[ceph-node1][DEBUG ] Hit:3 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-backports InRelease
[ceph-node1][DEBUG ] Hit:4 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-security InRelease
[ceph-node1][DEBUG ] Hit:5 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-octopus bionic InRelease
[ceph-node1][DEBUG ] Reading package lists...
[ceph-node1][INFO  ] Running command: sudo env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q --no-install-recommends install ceph ceph-osd ceph-mds ceph-mon radosgw
[ceph-node1][DEBUG ] Reading package lists...
[ceph-node1][DEBUG ] Building dependency tree...
[ceph-node1][DEBUG ] Reading state information...
[ceph-node1][DEBUG ] The following additional packages will be installed:
[ceph-node1][DEBUG ]   ceph-base ceph-mgr ceph-mgr-modules-core libjs-jquery python-pastedeploy-tpl
[ceph-node1][DEBUG ]   python3-bcrypt python3-bs4 python3-cherrypy3 python3-dateutil
[ceph-node1][DEBUG ]   python3-distutils python3-lib2to3 python3-logutils python3-mako
[ceph-node1][DEBUG ]   python3-paste python3-pastedeploy python3-pecan python3-simplegeneric
[ceph-node1][DEBUG ]   python3-singledispatch python3-tempita python3-waitress python3-webob
[ceph-node1][DEBUG ]   python3-webtest python3-werkzeug
[ceph-node1][DEBUG ] Suggested packages:
[ceph-node1][DEBUG ]   python3-influxdb python3-beaker python-mako-doc httpd-wsgi
[ceph-node1][DEBUG ]   libapache2-mod-python libapache2-mod-scgi libjs-mochikit python-pecan-doc
[ceph-node1][DEBUG ]   python-waitress-doc python-webob-doc python-webtest-doc ipython3
[ceph-node1][DEBUG ]   python3-lxml python3-termcolor python3-watchdog python-werkzeug-doc
[ceph-node1][DEBUG ] Recommended packages:
[ceph-node1][DEBUG ]   ceph-fuse ceph-mgr-dashboard ceph-mgr-diskprediction-local
[ceph-node1][DEBUG ]   ceph-mgr-diskprediction-cloud ceph-mgr-k8sevents ceph-mgr-cephadm nvme-cli
[ceph-node1][DEBUG ]   smartmontools javascript-common python3-lxml python3-routes
[ceph-node1][DEBUG ]   python3-simplejson python3-pastescript python3-pyinotify
[ceph-node1][DEBUG ] The following NEW packages will be installed:
[ceph-node1][DEBUG ]   ceph ceph-base ceph-mds ceph-mgr ceph-mgr-modules-core ceph-mon ceph-osd
[ceph-node1][DEBUG ]   libjs-jquery python-pastedeploy-tpl python3-bcrypt python3-bs4
[ceph-node1][DEBUG ]   python3-cherrypy3 python3-dateutil python3-distutils python3-lib2to3
[ceph-node1][DEBUG ]   python3-logutils python3-mako python3-paste python3-pastedeploy
[ceph-node1][DEBUG ]   python3-pecan python3-simplegeneric python3-singledispatch python3-tempita
[ceph-node1][DEBUG ]   python3-waitress python3-webob python3-webtest python3-werkzeug radosgw
[ceph-node1][DEBUG ] 0 upgraded, 28 newly installed, 0 to remove and 0 not upgraded.
[ceph-node1][DEBUG ] Need to get 47.7 MB of archives.
[ceph-node1][DEBUG ] After this operation, 219 MB of additional disk space will be used.
[ceph-node1][DEBUG ] Get:1 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-octopus bionic/main amd64 ceph-base amd64 15.2.14-1bionic [5179 kB]
[ceph-node1][DEBUG ] Get:2 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 python3-dateutil all 2.6.1-1 [52.3 kB]
[ceph-node1][DEBUG ] Get:3 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-octopus bionic/main amd64 ceph-mgr-modules-core all 15.2.14-1bionic [162 kB]
[ceph-node1][DEBUG ] Get:4 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/universe amd64 python3-bcrypt amd64 3.1.4-2 [29.9 kB]
[ceph-node1][DEBUG ] Get:5 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/universe amd64 python3-cherrypy3 all 8.9.1-2 [160 kB]
[ceph-node1][DEBUG ] Get:6 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates/main amd64 python3-lib2to3 all 3.6.9-1~18.04 [77.4 kB]
[ceph-node1][DEBUG ] Get:7 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates/main amd64 python3-distutils all 3.6.9-1~18.04 [144 kB]
[ceph-node1][DEBUG ] Get:8 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/universe amd64 python3-logutils all 0.3.3-5 [16.7 kB]
[ceph-node1][DEBUG ] Get:9 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 python3-mako all 1.0.7+ds1-1 [59.3 kB]
[ceph-node1][DEBUG ] Get:10 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/universe amd64 python3-simplegeneric all 0.8.1-1 [11.5 kB]
[ceph-node1][DEBUG ] Get:11 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/universe amd64 python3-singledispatch all 3.4.0.3-2 [7022 B]
[ceph-node1][DEBUG ] Get:12 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 python3-webob all 1:1.7.3-2fakesync1 [64.3 kB]
[ceph-node1][DEBUG ] Get:13 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 python3-bs4 all 4.6.0-1 [67.8 kB]
[ceph-node1][DEBUG ] Get:14 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 python3-waitress all 1.0.1-1 [53.4 kB]
[ceph-node1][DEBUG ] Get:15 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 python3-tempita all 0.5.2-2 [13.9 kB]
[ceph-node1][DEBUG ] Get:16 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 python3-paste all 2.0.3+dfsg-4ubuntu1 [456 kB]
[ceph-node1][DEBUG ] Get:17 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 python-pastedeploy-tpl all 1.5.2-4 [4796 B]
[ceph-node1][DEBUG ] Get:18 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 python3-pastedeploy all 1.5.2-4 [13.4 kB]
[ceph-node1][DEBUG ] Get:19 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 python3-webtest all 2.0.28-1ubuntu1 [27.9 kB]
[ceph-node1][DEBUG ] Get:20 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/universe amd64 python3-pecan all 1.2.1-2 [86.1 kB]
[ceph-node1][DEBUG ] Get:21 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 libjs-jquery all 3.2.1-1 [152 kB]
[ceph-node1][DEBUG ] Get:22 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates/universe amd64 python3-werkzeug all 0.14.1+dfsg1-1ubuntu0.1 [174 kB]
[ceph-node1][DEBUG ] Get:23 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-octopus bionic/main amd64 ceph-mgr amd64 15.2.14-1bionic [1309 kB]
[ceph-node1][DEBUG ] Get:24 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-octopus bionic/main amd64 ceph-mon amd64 15.2.14-1bionic [5952 kB]
[ceph-node1][DEBUG ] Get:25 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-octopus bionic/main amd64 ceph-osd amd64 15.2.14-1bionic [22.8 MB]
[ceph-node1][DEBUG ] Get:26 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-octopus bionic/main amd64 ceph amd64 15.2.14-1bionic [3968 B]
[ceph-node1][DEBUG ] Get:27 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-octopus bionic/main amd64 ceph-mds amd64 15.2.14-1bionic [1854 kB]
[ceph-node1][DEBUG ] Get:28 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-octopus bionic/main amd64 radosgw amd64 15.2.14-1bionic [8814 kB]
[ceph-node1][DEBUG ] Fetched 47.7 MB in 2s (20.8 MB/s)
[ceph-node1][DEBUG ] Selecting previously unselected package ceph-base.
(Reading database ... 109535 files and directories currently installed.)
[ceph-node1][DEBUG ] Preparing to unpack .../00-ceph-base_15.2.14-1bionic_amd64.deb ...
[ceph-node1][DEBUG ] Unpacking ceph-base (15.2.14-1bionic) ...
[ceph-node1][DEBUG ] Selecting previously unselected package python3-dateutil.
[ceph-node1][DEBUG ] Preparing to unpack .../01-python3-dateutil_2.6.1-1_all.deb ...
[ceph-node1][DEBUG ] Unpacking python3-dateutil (2.6.1-1) ...
[ceph-node1][DEBUG ] Selecting previously unselected package ceph-mgr-modules-core.
[ceph-node1][DEBUG ] Preparing to unpack .../02-ceph-mgr-modules-core_15.2.14-1bionic_all.deb ...
[ceph-node1][DEBUG ] Unpacking ceph-mgr-modules-core (15.2.14-1bionic) ...
[ceph-node1][DEBUG ] Selecting previously unselected package python3-bcrypt.
[ceph-node1][DEBUG ] Preparing to unpack .../03-python3-bcrypt_3.1.4-2_amd64.deb ...
[ceph-node1][DEBUG ] Unpacking python3-bcrypt (3.1.4-2) ...
[ceph-node1][DEBUG ] Selecting previously unselected package python3-cherrypy3.
[ceph-node1][DEBUG ] Preparing to unpack .../04-python3-cherrypy3_8.9.1-2_all.deb ...
[ceph-node1][DEBUG ] Unpacking python3-cherrypy3 (8.9.1-2) ...
[ceph-node1][DEBUG ] Selecting previously unselected package python3-lib2to3.
[ceph-node1][DEBUG ] Preparing to unpack .../05-python3-lib2to3_3.6.9-1~18.04_all.deb ...
[ceph-node1][DEBUG ] Unpacking python3-lib2to3 (3.6.9-1~18.04) ...
[ceph-node1][DEBUG ] Selecting previously unselected package python3-distutils.
[ceph-node1][DEBUG ] Preparing to unpack .../06-python3-distutils_3.6.9-1~18.04_all.deb ...
[ceph-node1][DEBUG ] Unpacking python3-distutils (3.6.9-1~18.04) ...
[ceph-node1][DEBUG ] Selecting previously unselected package python3-logutils.
[ceph-node1][DEBUG ] Preparing to unpack .../07-python3-logutils_0.3.3-5_all.deb ...
[ceph-node1][DEBUG ] Unpacking python3-logutils (0.3.3-5) ...
[ceph-node1][DEBUG ] Selecting previously unselected package python3-mako.
[ceph-node1][DEBUG ] Preparing to unpack .../08-python3-mako_1.0.7+ds1-1_all.deb ...
[ceph-node1][DEBUG ] Unpacking python3-mako (1.0.7+ds1-1) ...
[ceph-node1][DEBUG ] Selecting previously unselected package python3-simplegeneric.
[ceph-node1][DEBUG ] Preparing to unpack .../09-python3-simplegeneric_0.8.1-1_all.deb ...
[ceph-node1][DEBUG ] Unpacking python3-simplegeneric (0.8.1-1) ...
[ceph-node1][DEBUG ] Selecting previously unselected package python3-singledispatch.
[ceph-node1][DEBUG ] Preparing to unpack .../10-python3-singledispatch_3.4.0.3-2_all.deb ...
[ceph-node1][DEBUG ] Unpacking python3-singledispatch (3.4.0.3-2) ...
[ceph-node1][DEBUG ] Selecting previously unselected package python3-webob.
[ceph-node1][DEBUG ] Preparing to unpack .../11-python3-webob_1%3a1.7.3-2fakesync1_all.deb ...
[ceph-node1][DEBUG ] Unpacking python3-webob (1:1.7.3-2fakesync1) ...
[ceph-node1][DEBUG ] Selecting previously unselected package python3-bs4.
[ceph-node1][DEBUG ] Preparing to unpack .../12-python3-bs4_4.6.0-1_all.deb ...
[ceph-node1][DEBUG ] Unpacking python3-bs4 (4.6.0-1) ...
[ceph-node1][DEBUG ] Selecting previously unselected package python3-waitress.
[ceph-node1][DEBUG ] Preparing to unpack .../13-python3-waitress_1.0.1-1_all.deb ...
[ceph-node1][DEBUG ] Unpacking python3-waitress (1.0.1-1) ...
[ceph-node1][DEBUG ] Selecting previously unselected package python3-tempita.
[ceph-node1][DEBUG ] Preparing to unpack .../14-python3-tempita_0.5.2-2_all.deb ...
[ceph-node1][DEBUG ] Unpacking python3-tempita (0.5.2-2) ...
[ceph-node1][DEBUG ] Selecting previously unselected package python3-paste.
[ceph-node1][DEBUG ] Preparing to unpack .../15-python3-paste_2.0.3+dfsg-4ubuntu1_all.deb ...
[ceph-node1][DEBUG ] Unpacking python3-paste (2.0.3+dfsg-4ubuntu1) ...
[ceph-node1][DEBUG ] Selecting previously unselected package python-pastedeploy-tpl.
[ceph-node1][DEBUG ] Preparing to unpack .../16-python-pastedeploy-tpl_1.5.2-4_all.deb ...
[ceph-node1][DEBUG ] Unpacking python-pastedeploy-tpl (1.5.2-4) ...
[ceph-node1][DEBUG ] Selecting previously unselected package python3-pastedeploy.
[ceph-node1][DEBUG ] Preparing to unpack .../17-python3-pastedeploy_1.5.2-4_all.deb ...
[ceph-node1][DEBUG ] Unpacking python3-pastedeploy (1.5.2-4) ...
[ceph-node1][DEBUG ] Selecting previously unselected package python3-webtest.
[ceph-node1][DEBUG ] Preparing to unpack .../18-python3-webtest_2.0.28-1ubuntu1_all.deb ...
[ceph-node1][DEBUG ] Unpacking python3-webtest (2.0.28-1ubuntu1) ...
[ceph-node1][DEBUG ] Selecting previously unselected package python3-pecan.
[ceph-node1][DEBUG ] Preparing to unpack .../19-python3-pecan_1.2.1-2_all.deb ...
[ceph-node1][DEBUG ] Unpacking python3-pecan (1.2.1-2) ...
[ceph-node1][DEBUG ] Selecting previously unselected package libjs-jquery.
[ceph-node1][DEBUG ] Preparing to unpack .../20-libjs-jquery_3.2.1-1_all.deb ...
[ceph-node1][DEBUG ] Unpacking libjs-jquery (3.2.1-1) ...
[ceph-node1][DEBUG ] Selecting previously unselected package python3-werkzeug.
[ceph-node1][DEBUG ] Preparing to unpack .../21-python3-werkzeug_0.14.1+dfsg1-1ubuntu0.1_all.deb ...
[ceph-node1][DEBUG ] Unpacking python3-werkzeug (0.14.1+dfsg1-1ubuntu0.1) ...
[ceph-node1][DEBUG ] Selecting previously unselected package ceph-mgr.
[ceph-node1][DEBUG ] Preparing to unpack .../22-ceph-mgr_15.2.14-1bionic_amd64.deb ...
[ceph-node1][DEBUG ] Unpacking ceph-mgr (15.2.14-1bionic) ...
[ceph-node1][DEBUG ] Selecting previously unselected package ceph-mon.
[ceph-node1][DEBUG ] Preparing to unpack .../23-ceph-mon_15.2.14-1bionic_amd64.deb ...
[ceph-node1][DEBUG ] Unpacking ceph-mon (15.2.14-1bionic) ...
[ceph-node1][DEBUG ] Selecting previously unselected package ceph-osd.
[ceph-node1][DEBUG ] Preparing to unpack .../24-ceph-osd_15.2.14-1bionic_amd64.deb ...
[ceph-node1][DEBUG ] Unpacking ceph-osd (15.2.14-1bionic) ...
[ceph-node1][DEBUG ] Selecting previously unselected package ceph.
[ceph-node1][DEBUG ] Preparing to unpack .../25-ceph_15.2.14-1bionic_amd64.deb ...
[ceph-node1][DEBUG ] Unpacking ceph (15.2.14-1bionic) ...
[ceph-node1][DEBUG ] Selecting previously unselected package ceph-mds.
[ceph-node1][DEBUG ] Preparing to unpack .../26-ceph-mds_15.2.14-1bionic_amd64.deb ...
[ceph-node1][DEBUG ] Unpacking ceph-mds (15.2.14-1bionic) ...
[ceph-node1][DEBUG ] Selecting previously unselected package radosgw.
[ceph-node1][DEBUG ] Preparing to unpack .../27-radosgw_15.2.14-1bionic_amd64.deb ...
[ceph-node1][DEBUG ] Unpacking radosgw (15.2.14-1bionic) ...
[ceph-node1][DEBUG ] Setting up python3-logutils (0.3.3-5) ...
[ceph-node1][DEBUG ] Setting up libjs-jquery (3.2.1-1) ...
[ceph-node1][DEBUG ] Setting up python3-werkzeug (0.14.1+dfsg1-1ubuntu0.1) ...
[ceph-node1][DEBUG ] Setting up python3-simplegeneric (0.8.1-1) ...
[ceph-node1][DEBUG ] Setting up python3-waitress (1.0.1-1) ...
[ceph-node1][DEBUG ] update-alternatives: using /usr/bin/waitress-serve-python3 to provide /usr/bin/waitress-serve (waitress-serve) in auto mode
[ceph-node1][DEBUG ] Setting up python3-mako (1.0.7+ds1-1) ...
[ceph-node1][DEBUG ] Setting up python3-tempita (0.5.2-2) ...
[ceph-node1][DEBUG ] Setting up python3-webob (1:1.7.3-2fakesync1) ...
[ceph-node1][DEBUG ] Setting up python3-bcrypt (3.1.4-2) ...
[ceph-node1][DEBUG ] Setting up python3-singledispatch (3.4.0.3-2) ...
[ceph-node1][DEBUG ] Setting up python3-cherrypy3 (8.9.1-2) ...
[ceph-node1][DEBUG ] Setting up python3-bs4 (4.6.0-1) ...
[ceph-node1][DEBUG ] Setting up ceph-base (15.2.14-1bionic) ...
[ceph-node1][DEBUG ] Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /lib/systemd/system/ceph-crash.service.
[ceph-node1][DEBUG ] Setting up python3-paste (2.0.3+dfsg-4ubuntu1) ...
[ceph-node1][DEBUG ] Setting up python-pastedeploy-tpl (1.5.2-4) ...
[ceph-node1][DEBUG ] Setting up python3-lib2to3 (3.6.9-1~18.04) ...
[ceph-node1][DEBUG ] Setting up python3-distutils (3.6.9-1~18.04) ...
[ceph-node1][DEBUG ] Setting up python3-dateutil (2.6.1-1) ...
[ceph-node1][DEBUG ] Setting up radosgw (15.2.14-1bionic) ...
[ceph-node1][DEBUG ] Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target.
[ceph-node1][DEBUG ] Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target.
[ceph-node1][DEBUG ] Setting up ceph-osd (15.2.14-1bionic) ...
[ceph-node1][DEBUG ] Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target.
[ceph-node1][DEBUG ] Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target.
[ceph-node1][DEBUG ] Setting up ceph-mds (15.2.14-1bionic) ...
[ceph-node1][DEBUG ] Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target.
[ceph-node1][DEBUG ] Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target.
[ceph-node1][DEBUG ] Setting up ceph-mon (15.2.14-1bionic) ...
[ceph-node1][DEBUG ] Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target.
[ceph-node1][DEBUG ] Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target.
[ceph-node1][DEBUG ] Setting up ceph-mgr-modules-core (15.2.14-1bionic) ...
[ceph-node1][DEBUG ] Setting up python3-pastedeploy (1.5.2-4) ...
[ceph-node1][DEBUG ] Setting up python3-webtest (2.0.28-1ubuntu1) ...
[ceph-node1][DEBUG ] Setting up python3-pecan (1.2.1-2) ...
[ceph-node1][DEBUG ] update-alternatives: using /usr/bin/python3-pecan to provide /usr/bin/pecan (pecan) in auto mode
[ceph-node1][DEBUG ] update-alternatives: using /usr/bin/python3-gunicorn_pecan to provide /usr/bin/gunicorn_pecan (gunicorn_pecan) in auto mode
[ceph-node1][DEBUG ] Setting up ceph-mgr (15.2.14-1bionic) ...
[ceph-node1][DEBUG ] Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target.
[ceph-node1][DEBUG ] Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target.
[ceph-node1][DEBUG ] Setting up ceph (15.2.14-1bionic) ...
[ceph-node1][DEBUG ] Processing triggers for systemd (237-3ubuntu10.50) ...
[ceph-node1][DEBUG ] Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
[ceph-node1][DEBUG ] Processing triggers for ureadahead (0.100.0-21) ...
[ceph-node1][DEBUG ] Processing triggers for libc-bin (2.27-3ubuntu1.4) ...
[ceph-node1][INFO  ] Running command: sudo ceph --version
[ceph-node1][DEBUG ] ceph version 15.2.14 (cd3bb7e87a2f62c1b862ff3fd8b1eec13391a5be) octopus (stable)
[ceph_deploy.install][DEBUG ] Detecting platform for host ceph-node2 ...
[ceph-node2][DEBUG ] connection detected need for sudo
[ceph-node2][DEBUG ] connected to host: ceph-node2 
[ceph-node2][DEBUG ] detect platform information from remote host
[ceph-node2][DEBUG ] detect machine type
[ceph_deploy.install][INFO  ] Distro info: Ubuntu 18.04 bionic
[ceph-node2][INFO  ] installing Ceph on ceph-node2
[ceph-node2][INFO  ] Running command: sudo env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q update
[ceph-node2][DEBUG ] Hit:1 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic InRelease
[ceph-node2][DEBUG ] Get:2 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates InRelease [88.7 kB]
[ceph-node2][DEBUG ] Get:3 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-backports InRelease [74.6 kB]
[ceph-node2][DEBUG ] Get:4 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-security InRelease [88.7 kB]
[ceph-node2][DEBUG ] Hit:5 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-octopus bionic InRelease

This step configures the ceph repository and installs the ceph and ceph-radosgw packages on the specified ceph OSD nodes, one server at a time.

List the disks on a remote storage node

ubuntu@ceph-deploy:~/ceph-cluster$ ceph-deploy disk list ceph-node1
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ubuntu/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy disk list ceph-node1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : list
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f847f5fbfa0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  host                          : ['ceph-node1']
[ceph_deploy.cli][INFO  ]  func                          : <function disk at 0x7f847f5d52d0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph-node1][DEBUG ] connection detected need for sudo
[ceph-node1][DEBUG ] connected to host: ceph-node1 
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph-node1][DEBUG ] find the location of an executable
[ceph-node1][INFO  ] Running command: sudo fdisk -l
[ceph-node1][INFO  ] Disk /dev/sda: 30 GiB, 32212254720 bytes, 62914560 sectors
[ceph-node1][INFO  ] Disk /dev/mapper/ubuntu--vg-ubuntu--lv: 20 GiB, 21474836480 bytes, 41943040 sectors
[ceph-node1][INFO  ] Disk /dev/sdb: 10 GiB, 10737418240 bytes, 20971520 sectors
[ceph-node1][INFO  ] Disk /dev/sdd: 10 GiB, 10737418240 bytes, 20971520 sectors
[ceph-node1][INFO  ] Disk /dev/sdc: 10 GiB, 10737418240 bytes, 20971520 sectors
[ceph-node1][INFO  ] Disk /dev/sde: 10 GiB, 10737418240 bytes, 20971520 sectors

Use ceph-deploy disk zap to wipe the Ceph data disks on each Ceph node:

ubuntu@ceph-deploy:~/ceph-cluster$ ceph-deploy  disk zap ceph-node1 /dev/sd{b,c,d,e}
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ubuntu/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy disk zap ceph-node1 /dev/sdb /dev/sdc /dev/sdd /dev/sde
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : zap
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fc58823bfa0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  host                          : ceph-node1
[ceph_deploy.cli][INFO  ]  func                          : <function disk at 0x7fc5882152d0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : ['/dev/sdb', '/dev/sdc', '/dev/sdd', '/dev/sde']
[ceph_deploy.osd][DEBUG ] zapping /dev/sdb on ceph-node1
[ceph-node1][DEBUG ] connection detected need for sudo
[ceph-node1][DEBUG ] connected to host: ceph-node1 
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph-node1][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 18.04 bionic
[ceph-node1][DEBUG ] zeroing last few blocks of device
[ceph-node1][DEBUG ] find the location of an executable
[ceph-node1][INFO  ] Running command: sudo /usr/sbin/ceph-volume lvm zap /dev/sdb
[ceph-node1][WARNIN] --> Zapping: /dev/sdb
[ceph-node1][WARNIN] --> --destroy was not specified, but zapping a whole device will remove the partition table
[ceph-node1][WARNIN] Running command: /bin/dd if=/dev/zero of=/dev/sdb bs=1M count=10 conv=fsync
[ceph-node1][WARNIN]  stderr: 10+0 records in
[ceph-node1][WARNIN] 10+0 records out
[ceph-node1][WARNIN] 10485760 bytes (10 MB, 10 MiB) copied, 0.0192922 s, 544 MB/s
[ceph-node1][WARNIN] --> Zapping successful for: <Raw Device: /dev/sdb>
[ceph_deploy.osd][DEBUG ] zapping /dev/sdc on ceph-node1
[ceph-node1][DEBUG ] connection detected need for sudo
[ceph-node1][DEBUG ] connected to host: ceph-node1 
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph-node1][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 18.04 bionic
[ceph-node1][DEBUG ] zeroing last few blocks of device
[ceph-node1][DEBUG ] find the location of an executable
[ceph-node1][INFO  ] Running command: sudo /usr/sbin/ceph-volume lvm zap /dev/sdc
[ceph-node1][WARNIN] --> Zapping: /dev/sdc
[ceph-node1][WARNIN] --> --destroy was not specified, but zapping a whole device will remove the partition table
[ceph-node1][WARNIN] Running command: /bin/dd if=/dev/zero of=/dev/sdc bs=1M count=10 conv=fsync
[ceph-node1][WARNIN]  stderr: 10+0 records in
[ceph-node1][WARNIN] 10+0 records out
[ceph-node1][WARNIN]  stderr: 10485760 bytes (10 MB, 10 MiB) copied, 0.0136416 s, 769 MB/s
[ceph-node1][WARNIN] --> Zapping successful for: <Raw Device: /dev/sdc>
[ceph_deploy.osd][DEBUG ] zapping /dev/sdd on ceph-node1
[ceph-node1][DEBUG ] connection detected need for sudo
[ceph-node1][DEBUG ] connected to host: ceph-node1 
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph-node1][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 18.04 bionic
[ceph-node1][DEBUG ] zeroing last few blocks of device
[ceph-node1][DEBUG ] find the location of an executable
[ceph-node1][INFO  ] Running command: sudo /usr/sbin/ceph-volume lvm zap /dev/sdd
[ceph-node1][WARNIN] --> Zapping: /dev/sdd
[ceph-node1][WARNIN] --> --destroy was not specified, but zapping a whole device will remove the partition table
[ceph-node1][WARNIN] Running command: /bin/dd if=/dev/zero of=/dev/sdd bs=1M count=10 conv=fsync
[ceph-node1][WARNIN]  stderr: 10+0 records in
[ceph-node1][WARNIN] 10+0 records out
[ceph-node1][WARNIN] 10485760 bytes (10 MB, 10 MiB) copied, 0.0232056 s, 452 MB/s
[ceph-node1][WARNIN] --> Zapping successful for: <Raw Device: /dev/sdd>
[ceph_deploy.osd][DEBUG ] zapping /dev/sde on ceph-node1
[ceph-node1][DEBUG ] connection detected need for sudo
[ceph-node1][DEBUG ] connected to host: ceph-node1 
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph-node1][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 18.04 bionic
[ceph-node1][DEBUG ] zeroing last few blocks of device
[ceph-node1][DEBUG ] find the location of an executable
[ceph-node1][INFO  ] Running command: sudo /usr/sbin/ceph-volume lvm zap /dev/sde
[ceph-node1][WARNIN] --> Zapping: /dev/sde
[ceph-node1][WARNIN] --> --destroy was not specified, but zapping a whole device will remove the partition table
[ceph-node1][WARNIN] Running command: /bin/dd if=/dev/zero of=/dev/sde bs=1M count=10 conv=fsync
[ceph-node1][WARNIN]  stderr: 10+0 records in
[ceph-node1][WARNIN] 10+0 records out
[ceph-node1][WARNIN] 10485760 bytes (10 MB, 10 MiB) copied, 0.0235466 s, 445 MB/s
[ceph-node1][WARNIN]  stderr: 
[ceph-node1][WARNIN] --> Zapping successful for: <Raw Device: /dev/sde>

Wipe the disks on the remaining nodes:

ubuntu@ceph-deploy:~/ceph-cluster$ ceph-deploy disk list ceph-node2
ubuntu@ceph-deploy:~/ceph-cluster$ ceph-deploy  disk zap ceph-node2 /dev/sd{b,c,d,e}
ubuntu@ceph-deploy:~/ceph-cluster$ ceph-deploy disk list ceph-node3
ubuntu@ceph-deploy:~/ceph-cluster$ ceph-deploy  disk zap ceph-node3 /dev/sd{b,c,d,e}
ubuntu@ceph-deploy:~/ceph-cluster$ ceph-deploy disk list ceph-node4
ubuntu@ceph-deploy:~/ceph-cluster$ ceph-deploy  disk zap ceph-node4 /dev/sd{b,c,d,e}

Add OSDs

How OSD data is stored, by category (BlueStore devices):
Data: the object data stored by Ceph
block.db: the RocksDB database, i.e. the OSD's metadata
block.wal: the database's write-ahead log
These can be placed on separate, faster devices; see the sketch below.
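
If a faster device such as an NVMe SSD is available, block.db and block.wal can be put on it when the OSD is created. A minimal sketch, assuming hypothetical NVMe partitions /dev/nvme0n1p1 and /dev/nvme0n1p2 (in this lab everything stays on a single data device):

# Place the RocksDB metadata and WAL on separate, faster partitions
ceph-deploy osd create ceph-node1 --data /dev/sdb --block-db /dev/nvme0n1p1 --block-wal /dev/nvme0n1p2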

OSD IDs are assigned sequentially starting from 0.
osd ID: 0-3
ceph-deploy osd create ceph-node1 --data /dev/sdb
ceph-deploy osd create ceph-node1 --data /dev/sdc
ceph-deploy osd create ceph-node1 --data /dev/sdd
ceph-deploy osd create ceph-node1 --data /dev/sde
osd ID: 4-7
ceph-deploy osd create ceph-node2 --data /dev/sdb
ceph-deploy osd create ceph-node2 --data /dev/sdc
ceph-deploy osd create ceph-node2 --data /dev/sdd
ceph-deploy osd create ceph-node2 --data /dev/sde
osd ID: 8-11
ceph-deploy osd create ceph-node3 --data /dev/sdb
ceph-deploy osd create ceph-node3 --data /dev/sdc
ceph-deploy osd create ceph-node3 --data /dev/sdd
ceph-deploy osd create ceph-node3 --data /dev/sde
osd ID: 12-15
ceph-deploy osd create ceph-node4 --data /dev/sdb
ceph-deploy osd create ceph-node4 --data /dev/sdc
ceph-deploy osd create ceph-node4 --data /dev/sdd
ceph-deploy osd create ceph-node4 --data /dev/sde

ubuntu@ceph-deploy:~/ceph-cluster$ ceph-deploy osd create ceph-node1 --data /dev/sdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ubuntu/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create ceph-node1 --data /dev/sdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fa48c0f33c0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  journal                       : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  host                          : ceph-node1
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7fa48c142250>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.cli][INFO  ]  data                          : /dev/sdb
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/sdb
[ceph-node1][DEBUG ] connection detected need for sudo
[ceph-node1][DEBUG ] connected to host: ceph-node1 
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph-node1][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 18.04 bionic
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node1
[ceph-node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-node1][WARNIN] osd keyring does not exist yet, creating one
[ceph-node1][DEBUG ] create a keyring file
[ceph-node1][DEBUG ] find the location of an executable
[ceph-node1][INFO  ] Running command: sudo /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdb
[ceph-node1][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[ceph-node1][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new f9b7315f-902f-4f4e-9164-5f25be885754
[ceph-node1][WARNIN] Running command: /sbin/vgcreate --force --yes ceph-fb1a4ac5-8cbf-476d-9d3d-350f1df4a7e6 /dev/sdb
[ceph-node1][WARNIN]  stdout: Physical volume "/dev/sdb" successfully created.
[ceph-node1][WARNIN]  stdout: Volume group "ceph-fb1a4ac5-8cbf-476d-9d3d-350f1df4a7e6" successfully created
[ceph-node1][WARNIN] Running command: /sbin/lvcreate --yes -l 2559 -n osd-block-f9b7315f-902f-4f4e-9164-5f25be885754 ceph-fb1a4ac5-8cbf-476d-9d3d-350f1df4a7e6
[ceph-node1][WARNIN]  stdout: Logical volume "osd-block-f9b7315f-902f-4f4e-9164-5f25be885754" created.
[ceph-node1][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[ceph-node1][WARNIN] Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
[ceph-node1][WARNIN] --> Executable selinuxenabled not in PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin
[ceph-node1][WARNIN] Running command: /bin/chown -h ceph:ceph /dev/ceph-fb1a4ac5-8cbf-476d-9d3d-350f1df4a7e6/osd-block-f9b7315f-902f-4f4e-9164-5f25be885754
[ceph-node1][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-1
[ceph-node1][WARNIN] Running command: /bin/ln -s /dev/ceph-fb1a4ac5-8cbf-476d-9d3d-350f1df4a7e6/osd-block-f9b7315f-902f-4f4e-9164-5f25be885754 /var/lib/ceph/osd/ceph-0/block
[ceph-node1][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
[ceph-node1][WARNIN]  stderr: 2021-08-16T08:04:07.244+0000 7f429b575700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[ceph-node1][WARNIN] 2021-08-16T08:04:07.244+0000 7f429b575700 -1 AuthRegistry(0x7f4294059b20) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[ceph-node1][WARNIN]  stderr: got monmap epoch 1
[ceph-node1][WARNIN] Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-0/keyring --create-keyring --name osd.0 --add-key AQD2GxphV+RyDBAAQFRomuzg4uDfIloEq5BI1g==
[ceph-node1][WARNIN]  stdout: creating /var/lib/ceph/osd/ceph-0/keyring
[ceph-node1][WARNIN]  stdout: added entity osd.0 auth(key=AQD2GxphV+RyDBAAQFRomuzg4uDfIloEq5BI1g==)
[ceph-node1][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
[ceph-node1][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
[ceph-node1][WARNIN] Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid f9b7315f-902f-4f4e-9164-5f25be885754 --setuser ceph --setgroup ceph
[ceph-node1][WARNIN]  stderr: 2021-08-16T08:04:08.028+0000 7f7b38513d80 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
[ceph-node1][WARNIN]  stderr: 2021-08-16T08:04:08.088+0000 7f7b38513d80 -1 freelist read_size_meta_from_db missing size meta in DB
[ceph-node1][WARNIN] --> ceph-volume lvm prepare successful for: /dev/sdb
[ceph-node1][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
[ceph-node1][WARNIN] Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-fb1a4ac5-8cbf-476d-9d3d-350f1df4a7e6/osd-block-f9b7315f-902f-4f4e-9164-5f25be885754 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
[ceph-node1][WARNIN] Running command: /bin/ln -snf /dev/ceph-fb1a4ac5-8cbf-476d-9d3d-350f1df4a7e6/osd-block-f9b7315f-902f-4f4e-9164-5f25be885754 /var/lib/ceph/osd/ceph-0/block
[ceph-node1][WARNIN] Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
[ceph-node1][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-1
[ceph-node1][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
[ceph-node1][WARNIN] Running command: /bin/systemctl enable ceph-volume@lvm-0-f9b7315f-902f-4f4e-9164-5f25be885754
[ceph-node1][WARNIN]  stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-f9b7315f-902f-4f4e-9164-5f25be885754.service → /lib/systemd/system/ceph-volume@.service.
[ceph-node1][WARNIN] Running command: /bin/systemctl enable --runtime ceph-osd@0
[ceph-node1][WARNIN]  stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service → /lib/systemd/system/ceph-osd@.service.
[ceph-node1][WARNIN] Running command: /bin/systemctl start ceph-osd@0
[ceph-node1][WARNIN] --> ceph-volume lvm activate successful for osd ID: 0
[ceph-node1][WARNIN] --> ceph-volume lvm create successful for: /dev/sdb
[ceph-node1][INFO  ] checking OSD status...
[ceph-node1][DEBUG ] find the location of an executable
[ceph-node1][INFO  ] Running command: sudo /usr/bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph-node1 is now ready for osd use.

Check the result

# Four ceph-osd processes are running, with the IDs that were generated when we added the OSDs
ubuntu@ceph-node1:~$ ps -ef |grep ceph
root        3037       1  0 07:23 ?        00:00:00 /usr/bin/python3.6 /usr/bin/ceph-crash
ceph        7249       1  0 08:04 ?        00:00:02 /usr/bin/ceph-osd -f --cluster ceph --id 0 --setuser ceph --setgroup ceph
ceph        8007       1  0 08:07 ?        00:00:02 /usr/bin/ceph-osd -f --cluster ceph --id 1 --setuser ceph --setgroup ceph
ceph        9145       1  0 08:07 ?        00:00:02 /usr/bin/ceph-osd -f --cluster ceph --id 2 --setuser ceph --setgroup ceph
ceph        9865       1  0 08:08 ?        00:00:02 /usr/bin/ceph-osd -f --cluster ceph --id 3 --setuser ceph --setgroup ceph
ubuntu     10123   10109  0 08:14 pts/0    00:00:00 grep --color=auto ceph

# 16 disks in total, so 16 OSD daemons
ubuntu@ceph-deploy:~/ceph-cluster$ sudo ceph -s
  cluster:
    id:     b7c42944-dd49-464e-a06a-f3a466b79eb4
    health: HEALTH_WARN
            mon is allowing insecure global_id reclaim
 
  services:
    mon: 1 daemons, quorum ceph-mon1 (age 106m)
    mgr: ceph-mgr1(active, since 39m)
    osd: 16 osds: 16 up (since 114s), 16 in (since 114s)
 
  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   16 GiB used, 144 GiB / 160 GiB avail
    pgs:     1 active+clean

# OSD services are enabled to start automatically by default
ubuntu@ceph-node1:~$ sudo systemctl status ceph-osd@0.service
● ceph-osd@0.service - Ceph object storage daemon osd.0
   Loaded: loaded (/lib/systemd/system/ceph-osd@.service; indirect; vendor preset: enabled)
   Active: active (running) since Mon 2021-08-16 08:04:10 UTC; 48min ago
 Main PID: 7249 (ceph-osd)
    Tasks: 58
   CGroup: /system.slice/system-ceph\x2dosd.slice/ceph-osd@0.service
           └─7249 /usr/bin/ceph-osd -f --cluster ceph --id 0 --setuser ceph --setgroup ceph
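
Besides checking the processes on each node, the OSD layout and usage can also be verified from the deploy node (commands only, output omitted here):

# Show the CRUSH tree of hosts and OSDs, and per-OSD usage
sudo ceph osd tree
sudo ceph osd df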

5.4.5 ceph-mon high availability

ubuntu@ceph-mon2:~$ sudo apt install ceph-mon -y
ubuntu@ceph-mon3:~$ sudo apt install ceph-mon -y

# Add the two additional monitors
ubuntu@ceph-deploy:~/ceph-cluster$ ceph-deploy mon add ceph-mon2 --address 172.31.0.12
ubuntu@ceph-deploy:~/ceph-cluster$ ceph-deploy mon add ceph-mon3 --address 172.31.0.13
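
After the new monitors have joined, it is worth confirming that all three are in quorum (commands only, output omitted here):

# Show monitor status and quorum membership
sudo ceph mon stat
sudo ceph quorum_status --format json-pretty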

# Disable the insecure mode reported as "mon is allowing insecure global_id reclaim"
ubuntu@ceph-deploy:~/ceph-cluster$ sudo ceph config set mon auth_allow_insecure_global_id_reclaim false
ubuntu@ceph-deploy:~/ceph-cluster$ sudo ceph -s
  cluster:
    id:     b7c42944-dd49-464e-a06a-f3a466b79eb4
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph-mon1,ceph-mon2,ceph-mon3 (age 111s)
    mgr: ceph-mgr1(active, since 61m)
    osd: 16 osds: 16 up (since 23m), 16 in (since 23m)
 
  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   16 GiB used, 144 GiB / 160 GiB avail
    pgs:     1 active+clean

5.4.6 ceph-mgr high availability

One manager is deployed so far; deploy one more, for a total of two.

# Install ceph-mgr in advance, so that when ceph-deploy adds the mgr it detects the existing installation and skips it, saving time
ubuntu@ceph-mgr2:~$ sudo apt install ceph-mgr -y

# Add the mgr
ubuntu@ceph-deploy:~/ceph-cluster$ ceph-deploy mgr create ceph-mgr2

# Check the result
ubuntu@ceph-deploy:~/ceph-cluster$ ssh ceph-mgr2 "ps -ef |grep ceph"
root       9878      1  0 08:37 ?        00:00:00 /usr/bin/python3.6 /usr/bin/ceph-crash
ceph      14869      1 11 08:39 ?        00:00:04 /usr/bin/ceph-mgr -f --cluster ceph --id ceph-mgr2 --setuser ceph --setgroup ceph
ubuntu    15034  15033  0 08:40 ?        00:00:00 bash -c ps -ef |grep ceph
ubuntu    15036  15034  0 08:40 ?        00:00:00 grep ceph

# Verify with the ceph command
ubuntu@ceph-deploy:~/ceph-cluster$ sudo ceph -s
  cluster:
    id:     b7c42944-dd49-464e-a06a-f3a466b79eb4
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph-mon1,ceph-mon2,ceph-mon3 (age 7m)
    mgr: ceph-mgr1(active, since 67m), standbys: ceph-mgr2
    osd: 16 osds: 16 up (since 29m), 16 in (since 29m)
 
  task status:
 
  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   16 GiB used, 144 GiB / 160 GiB avail
    pgs:     1 active+clean
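
To verify that the standby manager really takes over, the active mgr can be failed on purpose (a sketch for test environments only; ceph-mgr2 should then appear as active in ceph -s):

# Fail the current active mgr and let the standby take over
sudo ceph mgr fail ceph-mgr1
sudo ceph -s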

5.5 Restarting OSDs

# Reboot the first node
ubuntu@ceph-node1:~$ sudo reboot
# Only 12 OSDs remain up
ubuntu@ceph-deploy:~/ceph-cluster$ sudo ceph -s
  cluster:
    id:     b7c42944-dd49-464e-a06a-f3a466b79eb4
    health: HEALTH_WARN
            4 osds down
            1 host (4 osds) down
 
  services:
    mon: 3 daemons, quorum ceph-mon1,ceph-mon2,ceph-mon3 (age 22m)
    mgr: ceph-mgr1(active, since 82m), standbys: ceph-mgr2
    osd: 16 osds: 12 up (since 17s), 16 in (since 44m)
 
  task status:
 
  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   16 GiB used, 144 GiB / 160 GiB avail
    pgs:     1 active+clean

# Once the server is back up, its OSDs are detected and rejoin the cluster automatically
ubuntu@ceph-deploy:~/ceph-cluster$ sudo ceph -s
  cluster:
    id:     b7c42944-dd49-464e-a06a-f3a466b79eb4
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph-mon1,ceph-mon2,ceph-mon3 (age 24m)
    mgr: ceph-mgr1(active, since 84m), standbys: ceph-mgr2
    osd: 16 osds: 16 up (since 86s), 16 in (since 46m)
 
  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   16 GiB used, 144 GiB / 160 GiB avail
    pgs:     1 active+clean
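
For a planned reboot, data rebalancing can be avoided by setting the noout flag first, so that the down OSDs are not marked out while the node is offline (a minimal sketch):

# Before the planned reboot
sudo ceph osd set noout
# ... reboot the node and wait for its OSDs to come back up ...
sudo ceph osd unset noout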

5.6 Removing an OSD

What if a disk fails and needs to be removed? In a Ceph cluster, an OSD is a dedicated daemon running on a node and corresponds to one physical disk. When an OSD device fails, or an administrator needs to remove a particular OSD for maintenance reasons, the related daemon must be stopped first, and only then can the removal be carried out.

# Suppose we stop the daemon for OSD id 3
ubuntu@ceph-node1:~$ sudo systemctl stop ceph-osd@3.service 
ubuntu@ceph-node1:~$ sudo systemctl status ceph-osd@3.service 

# Check the cluster status
ubuntu@ceph-deploy:~$ sudo ceph -s
  cluster:
    id:     b7c42944-dd49-464e-a06a-f3a466b79eb4
    health: HEALTH_WARN
            1 osds down
 
  services:
    mon: 3 daemons, quorum ceph-mon1,ceph-mon2,ceph-mon3 (age 48m)
    mgr: ceph-mgr1(active, since 108m), standbys: ceph-mgr2
    osd: 16 osds: 15 up (since 26s), 16 in (since 70m)
 
  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   16 GiB used, 144 GiB / 160 GiB avail
    pgs:     1 active+clean

# Take the device out of service
ubuntu@ceph-deploy:~$ sudo ceph osd out 3
# Remove the device from the OSDs tracked by the monitors
# osd purge <id|osd.id> [--force] [--yes-i-really-mean-it]     purge all osd data from the monitors including the OSD id and CRUSH position
ubuntu@ceph-deploy:~$ sudo ceph osd purge 3 --yes-i-really-mean-it
purged osd.3
# Check the status
ubuntu@ceph-deploy:~$ sudo ceph -s
  cluster:
    id:     b7c42944-dd49-464e-a06a-f3a466b79eb4
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph-mon1,ceph-mon2,ceph-mon3 (age 53m)
    mgr: ceph-mgr1(active, since 113m), standbys: ceph-mgr2
    osd: 15 osds: 15 up (since 5m), 15 in (since 3m)
 
  task status:
 
  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   15 GiB used, 135 GiB / 150 GiB avail
    pgs:     1 active+clean
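
ceph osd purge bundles several steps into one command. Where the steps need to be performed individually (for example on older releases), the manual sequence looks roughly like this (a sketch only, not executed here):

# Manual equivalent of purging osd.3
sudo ceph osd out osd.3
sudo ceph osd crush remove osd.3
sudo ceph auth del osd.3
sudo ceph osd rm osd.3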

Repair the OSD

# Add the repaired OSD back into the cluster once the failed disk has been replaced or fixed
# Connect to the osd node ceph-node1 and change to the /etc/ceph directory
ubuntu@ceph-node1:~$ cd /etc/ceph
# Create the OSD; no ID needs to be given, the next free ID is generated automatically
ubuntu@ceph-node1:/etc/ceph$ sudo ceph osd create
3
# Create the keyring; make sure the account name matches the OSD directory
ubuntu@ceph-node1:/etc/ceph$ sudo ceph-authtool --create-keyring /etc/ceph/ceph.osd.3.keyring --gen-key -n osd.3 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *'
creating /etc/ceph/ceph.osd.3.keyring
# Import the new key; again, make sure the account matches the OSD directory
ubuntu@ceph-node1:/etc/ceph$ sudo ceph auth import -i /etc/ceph/ceph.osd.3.keyring
imported keyring
ubuntu@ceph-node1:/etc/ceph$ sudo ceph auth get-or-create osd.3 -o /var/lib/ceph/osd/ceph-3/keyring
# Add the OSD back into the cluster's CRUSH map
ubuntu@ceph-node1:/etc/ceph$ sudo ceph osd crush add osd.3 0.01900 host=ceph-node1
add item id 3 name 'osd.3' weight 0.019 at location {host=ceph-node1} to crush map
ubuntu@ceph-node1:/etc/ceph$ sudo ceph osd in osd.3
marked in osd.3.
# Restart the OSD daemon
ubuntu@ceph-node1:/etc/ceph$ sudo systemctl restart ceph-osd@3.service
Job for ceph-osd@3.service failed because the control process exited with error code.
See "systemctl status ceph-osd@3.service" and "journalctl -xe" for details.
# If the error above appears, run systemctl reset-failed ceph-osd@3.service
ubuntu@ceph-node1:/etc/ceph$ sudo systemctl reset-failed ceph-osd@3.service
# Restart again
ubuntu@ceph-node1:/etc/ceph$ sudo systemctl restart ceph-osd@3.service
ubuntu@ceph-node1:/etc/ceph$ sudo systemctl status ceph-osd@3.service
● ceph-osd@3.service - Ceph object storage daemon osd.3
   Loaded: loaded (/lib/systemd/system/ceph-osd@.service; indirect; vendor preset: enabled)
   Active: active (running) since Mon 2021-08-16 15:13:27 UTC; 8s ago
  Process: 3901 ExecStartPre=/usr/lib/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id 3 (code=exited, status=0/SUCCESS)
 Main PID: 3905 (ceph-osd)
    Tasks: 58
   CGroup: /system.slice/system-ceph\x2dosd.slice/ceph-osd@3.service
           └─3905 /usr/bin/ceph-osd -f --cluster ceph --id 3 --setuser ceph --setgroup ceph
           
# Check the result
ubuntu@ceph-node1:/etc/ceph$ sudo ceph -s
  cluster:
    id:     b7c42944-dd49-464e-a06a-f3a466b79eb4
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph-mon1,ceph-mon2,ceph-mon3 (age 4h)
    mgr: ceph-mgr1(active, since 7h), standbys: ceph-mgr2
    osd: 16 osds: 16 up (since 102s), 16 in (since 8m)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   16 GiB used, 144 GiB / 160 GiB avail
    pgs:     1 active+clean
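
To double-check that the repaired OSD is backed by the expected logical volume and sits in the right place in the CRUSH map, the following can be run (commands only, output omitted here):

# On ceph-node1: list the ceph-volume logical volumes and their OSD ids
sudo ceph-volume lvm list
# On the deploy node: confirm osd.3 is back under host ceph-node1 with weight 0.019
sudo ceph osd tree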

5.7 Testing data upload and download

To store or retrieve data, a client first connects to a pool in the RADOS cluster; the data object is then addressed from its object name by the relevant CRUSH rules. To test the cluster's data access, first create a test pool named mypool with 32 PGs.

# Create the pool
ubuntu@ceph-deploy:~/ceph-cluster$ sudo ceph osd pool create mypool 32 32
pool 'mypool' created
ubuntu@ceph-deploy:~/ceph-cluster$ sudo ceph osd pool ls
device_health_metrics
mypool
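
Before writing any data, the pool's PG and replication settings can be checked (commands only, output omitted here):

# Show pg_num, pgp_num and the replica count of mypool
sudo ceph osd pool get mypool pg_num
sudo ceph osd pool get mypool pgp_num
sudo ceph osd pool get mypool size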

# This Ceph environment does not yet use block storage or CephFS, nor does it have any object storage clients,
# but the rados command can be used to access the Ceph object store directly
# Upload the syslog file to mypool with the object id syslog1
ubuntu@ceph-deploy:~/ceph-cluster$ sudo rados put syslog1 /var/log/syslog --pool=mypool
# List the objects
ubuntu@ceph-deploy:~/ceph-cluster$ sudo rados ls --pool=mypool
syslog1

# Object location information
# The ceph osd map command shows exactly where a data object in a pool is placed:
ubuntu@ceph-deploy:~/ceph-cluster$ sudo ceph osd map mypool syslog1
osdmap e102 pool 'mypool' (3) object 'syslog1' -> pg 3.1dd3f9b (3.1b) -> up ([10,4,15], p10) acting ([10,4,15], p10)

# Download the object
ubuntu@ceph-deploy:~/ceph-cluster$ sudo rados get syslog1 --pool=mypool ./syslog

# Delete the object
ubuntu@ceph-deploy:~/ceph-cluster$ sudo rados rm syslog1 --pool=mypool
ubuntu@ceph-deploy:~/ceph-cluster$ sudo rados ls --pool=mypool

When deleting a pool from the cluster, note that any images mapped to the pool are deleted along with it, so be careful with production clusters; the pool name must be repeated twice when deleting.

# Deleting the pool fails with an error:
ubuntu@ceph-deploy:~/ceph-cluster$ sudo ceph osd pool rm mypool mypool --yes-i-really-really-mean-it
Error EPERM: pool deletion is disabled; you must first set the mon_allow_pool_delete config option to true before you can destroy a pool

# This happens because mon_allow_pool_delete is not enabled on the monitor nodes; the fix is to set it there.
# Solutions:
# Option 1
ubuntu@ceph-deploy:~/ceph-cluster$ sudo ceph tell mon.* injectargs --mon_allow_pool_delete=true
ubuntu@ceph-deploy:~/ceph-cluster$ sudo ceph osd pool delete tom_test tom_test --yes-i-really-really-mean-it
# After the deletion, set mon_allow_pool_delete back to false to reduce the risk of accidental deletion
# Option 2
# In a test environment where pools may be deleted freely, pool deletion can be enabled globally in the configuration file
ubuntu@ceph-deploy:~/ceph-cluster$ vim ceph.conf
[mon]
mon allow pool delete = true
# Push the configuration to the monitor nodes
ubuntu@ceph-deploy:~/ceph-cluster$ ceph-deploy --overwrite-conf config push ceph-mon{1,2,3}
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ubuntu/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy --overwrite-conf config push ceph-mon1 ceph-mon2 ceph-mon3
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : True
[ceph_deploy.cli][INFO  ]  subcommand                    : push
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f33765bd2d0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['ceph-mon1', 'ceph-mon2', 'ceph-mon3']
[ceph_deploy.cli][INFO  ]  func                          : <function config at 0x7f33766048d0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.config][DEBUG ] Pushing config to ceph-mon1
[ceph-mon1][DEBUG ] connection detected need for sudo
[ceph-mon1][DEBUG ] connected to host: ceph-mon1 
[ceph-mon1][DEBUG ] detect platform information from remote host
[ceph-mon1][DEBUG ] detect machine type
[ceph-mon1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.config][DEBUG ] Pushing config to ceph-mon2
[ceph-mon2][DEBUG ] connection detected need for sudo
[ceph-mon2][DEBUG ] connected to host: ceph-mon2 
[ceph-mon2][DEBUG ] detect platform information from remote host
[ceph-mon2][DEBUG ] detect machine type
[ceph-mon2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.config][DEBUG ] Pushing config to ceph-mon3
[ceph-mon3][DEBUG ] connection detected need for sudo
[ceph-mon3][DEBUG ] connected to host: ceph-mon3 
[ceph-mon3][DEBUG ] detect platform information from remote host
[ceph-mon3][DEBUG ] detect machine type
[ceph-mon3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

# Restart the ceph-mon service on each monitor node
for node in ceph-mon{1,2,3}
do
   ssh $node "sudo systemctl restart ceph-mon.target"
done

# Delete the pool
ubuntu@ceph-deploy:~/ceph-cluster$ sudo ceph osd pool rm mypool mypool --yes-i-really-really-mean-it
pool 'mypool' removed

6. Ceph block device (RBD)

RBD (RADOS Block Device) is Ceph's block storage. RBD interacts with the OSDs through the librbd library and provides a high-performance, massively scalable storage backend for virtualization technologies such as KVM and for cloud platforms such as OpenStack and CloudStack, which integrate with RBD via libvirt and the QEMU utilities. A client can use the RADOS cluster as a block device simply through librbd; however, a pool intended for RBD must first have the rbd application enabled and be initialized. For example, the commands below create a pool named myrbd1, enable rbd on it, and initialize it:

  • Create an RBD pool
# Create the pool, specifying pg_num and pgp_num; pgp_num controls how the data in the PGs is combined for placement and is normally equal to pg_num
ubuntu@ceph-deploy:~/ceph-cluster$ sudo ceph osd pool create myrbd1 64 64
pool 'myrbd1' created

ubuntu@ceph-deploy:~/ceph-cluster$ sudo ceph osd pool ls
device_health_metrics
myrbd1

# Enable the rbd application on the pool
ubuntu@ceph-deploy:~/ceph-cluster$ sudo ceph osd pool application enable myrbd1 rbd
enabled application 'rbd' on pool 'myrbd1'

# Initialize the pool for RBD
ubuntu@ceph-deploy:~/ceph-cluster$ sudo rbd pool init -p myrbd1
  • Create images

An RBD pool cannot be used as a block device directly; images must first be created in it as needed, and an image is what is used as the block device. The rbd command is used to create, list and delete the images in a pool, as well as to clone images, create snapshots, roll an image back to a snapshot, and view snapshots.

# 1. Create images
# Syntax: rbd create --size 5G --pool <pool name> <image name>
ubuntu@ceph-deploy:~/ceph-cluster$ sudo rbd create myimg1 --size 5G --pool myrbd1
ubuntu@ceph-deploy:~/ceph-cluster$ sudo rbd create myimg2 --size 3G --pool myrbd1 --image-format 2 --image-feature layering

# 2. List images
# List the images that exist in the pool
ubuntu@ceph-deploy:~/ceph-cluster$ sudo rbd ls --pool myrbd1 -l
NAME    SIZE   PARENT  FMT  PROT  LOCK
myimg1  5 GiB            2
myimg2  3 GiB            2

# Show detailed information about a specific image
ubuntu@ceph-deploy:~/ceph-cluster$ sudo rbd --image myimg1 --pool myrbd1 info
rbd image 'myimg1':
	size 5 GiB in 1280 objects
	order 22 (4 MiB objects)
	snapshot_count: 0
	id: 8560a5411e51
	block_name_prefix: rbd_data.8560a5411e51
	format: 2
	features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
	op_features:
	flags:
	create_timestamp: Mon Aug 16 15:33:49 2021
	access_timestamp: Mon Aug 16 15:33:49 2021
	modify_timestamp: Mon Aug 16 15:33:49 2021

# Notes on the output:
# size: the size of the image and the number of objects it is split into.
# order 22: the object size order, valid range 12 to 25 (4 KiB to 32 MiB); 22 means 2^22 bytes, i.e. 4 MiB objects.
# id: the image's ID.
# block_name_prefix: the name prefix of the image's data objects.
# format: the image format in use, 2 by default.
# features: the features enabled on this image.
# op_features: optional operational features.

# Resize an image
ubuntu@ceph-deploy:~/ceph-cluster$ sudo rbd resize --pool myrbd1 --image myimg2 --size 5G
Resizing image: 100% complete...done.
ubuntu@ceph-deploy:~/ceph-cluster$ sudo rbd ls --pool myrbd1 -l
NAME    SIZE   PARENT  FMT  PROT  LOCK
myimg1  5 GiB            2
myimg2  5 GiB            2

# resize adjusts the image size; growing is generally recommended over shrinking, and shrinking requires the --allow-shrink option
  • Verify the block device
# Map the image on the client
ubuntu@ceph-deploy:~/ceph-cluster$ sudo rbd map --pool myrbd1 --image myimg1
rbd: sysfs write failed
RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable myrbd1/myimg1 object-map fast-diff deep-flatten".
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (6) No such device or address
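
As the error message suggests, myimg1 could also be mapped after disabling the features the kernel client does not support (a sketch only; it is not done here, and the steps below continue with myimg2, which was created with only the layering feature):

# Disable the unsupported features on myimg1, then retry the map
sudo rbd feature disable myrbd1/myimg1 object-map fast-diff deep-flatten
sudo rbd map --pool myrbd1 --image myimg1
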
ubuntu@ceph-deploy:~/ceph-cluster$ sudo rbd map --pool myrbd1 --image myimg2
/dev/rbd0
# Format the device
ubuntu@ceph-deploy:~/ceph-cluster$ sudo mkfs.ext4 /dev/rbd0
mke2fs 1.44.1 (24-Mar-2018)
Discarding device blocks: done
Creating filesystem with 1310720 4k blocks and 327680 inodes
Filesystem UUID: 89dfe52f-f8a2-4a3f-bdd1-e136fd933ea9
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
# Mount the device
ubuntu@ceph-deploy:~/ceph-cluster$ sudo mount /dev/rbd0 /mnt
ubuntu@ceph-deploy:~/ceph-cluster$ df -h
Filesystem                         Size  Used Avail Use% Mounted on
udev                               954M     0  954M   0% /dev
tmpfs                              198M  9.8M  188M   5% /run
/dev/mapper/ubuntu--vg-ubuntu--lv   20G  4.6G   15G  25% /
tmpfs                              986M     0  986M   0% /dev/shm
tmpfs                              5.0M     0  5.0M   0% /run/lock
tmpfs                              986M     0  986M   0% /sys/fs/cgroup
/dev/sda2                          976M  149M  760M  17% /boot
tmpfs                              198M     0  198M   0% /run/user/1000
/dev/rbd0                          4.9G   20M  4.6G   1% /mnt

# Test writes
ubuntu@ceph-deploy:~/ceph-cluster$ sudo dd if=/dev/zero of=/mnt/ceph-test bs=1MB count=1024
1024+0 records in
1024+0 records out
1024000000 bytes (1.0 GB, 977 MiB) copied, 117.308 s, 8.7 MB/s
ubuntu@ceph-deploy:~/ceph-cluster$ sudo dd if=/dev/zero of=/tmp/ceph-test bs=1MB count=1024
1024+0 records in
1024+0 records out
1024000000 bytes (1.0 GB, 977 MiB) copied, 1.81747 s, 563 MB/s

# The throughput here is underwhelming, but production OSD nodes would normally use SSDs and 1/10 GbE (or faster) networking.
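
For a less ad-hoc measurement, rados bench can exercise the pool directly, and the test device should be unmounted and unmapped when finished (a minimal sketch):

# 4 MB write benchmark for 10 seconds; --no-cleanup keeps the objects so a read test could follow
sudo rados bench -p myrbd1 10 write -b 4M -t 16 --no-cleanup
# Remove the benchmark objects afterwards
sudo rados -p myrbd1 cleanup
# Unmount and unmap the test image
sudo umount /mnt
sudo rbd unmap /dev/rbd0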

7. References

https://blog.csdn.net/lhc121386/article/details/113488420

https://www.huaweicloud.com/articles/25d293d7b10848aff6f67861d6458fbd.html

https://zhuanlan.zhihu.com/p/386561535
