20210814 Day 1: Ceph Introduction and Installation

1. Ceph Overview

  Ceph is a distributed object store designed to provide excellent performance, reliability, and scalability. It is unique in delivering object, block, and file storage from a single unified system, while scaling to thousands of clients accessing petabytes to exabytes of data and beyond. It is well suited to unstructured data, and clients can use both current and legacy object interfaces to access the same data; it has been called the future of storage.

 

2. Ceph Features

2.1 Advantages of Ceph

  • CRUSH algorithm: Ceph abandons the traditional centralized metadata lookup scheme for addressing data and instead uses the CRUSH algorithm. Building on consistent hashing, CRUSH takes failure-domain isolation into account and can express replica placement rules for all kinds of workloads, such as cross-room or rack-aware placement. Ceph assigns a CRUSH ruleset to each storage pool; when a client stores or retrieves data in a pool, Ceph identifies the ruleset and the top-level bucket in that rule for storing and retrieving data. As Ceph evaluates the CRUSH rule it identifies the primary OSD holding a given PG, so the client can connect directly to the primary OSD to read and write data.
  • High availability: the administrator chooses the number of data replicas, and CRUSH can place them in separate failure domains, so the cluster tolerates many failure scenarios and automatically attempts parallel recovery. Replicas are strongly consistent and can span hosts, racks, rooms, and data centers, which makes the system safe and reliable. Storage nodes are self-managing and self-healing; there is no single point of failure, and fault tolerance is strong.
  • High scalability: Ceph differs from Swift, where every client read and write passes through proxy nodes that easily become a bottleneck as cluster concurrency grows. Ceph has no central controller node, so it scales out easily, and in theory its performance grows linearly with the number of disks.
  • Rich features: Ceph supports three access interfaces: object storage, block storage, and file system mounts, and all three can be used at the same time, because the object and block interfaces are built on the same underlying distributed store.
  • Unified storage: one system provides object, file, and block storage simultaneously.
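Real CRUSH is considerably more elaborate (hierarchical buckets, OSD weights, failure-domain rules), but the heart of its straw-style bucket selection fits in a few lines. The sketch below is purely illustrative: the function names and the md5-based hash are stand-ins, not Ceph's actual code.

```python
import hashlib

def straw(pg_id: int, osd_id: int) -> int:
    # Deterministic pseudo-random "straw length" for one (PG, OSD) pair.
    h = hashlib.md5(f"{pg_id}:{osd_id}".encode()).hexdigest()
    return int(h, 16)

def place_pg(pg_id: int, osds: list, replicas: int = 3) -> list:
    # Every OSD draws a straw; the longest straws win. The first winner
    # acts as the primary OSD, the rest hold the replicas.
    return sorted(osds, key=lambda o: straw(pg_id, o), reverse=True)[:replicas]

osds = [0, 1, 2, 3, 4]
acting_set = place_pg(pg_id=7, osds=osds)
```

The useful property: any client can recompute the same placement with no central lookup table, and removing an OSD only disturbs the PGs whose acting set actually contained it, because the straws of the surviving OSDs do not change.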

2.2 Disadvantages of Ceph

        Please ignore Ceph's disadvantages…

 

3. Architecture and Components

The official documentation is at: https://docs.ceph.com/en/nautilus/architecture/

3.1 The Ceph architecture:

  • At the bottom of Ceph sits RADOS, itself a distributed storage system; all of Ceph's storage functionality is implemented on top of RADOS. RADOS is written in C++, and its native librados API comes in C and C++ flavors. Ceph's upper layers call the local librados API, which in turn talks over sockets to the other nodes in the RADOS cluster to carry out operations.
  • RADOS exposes its functionality through the librados interface: an application only needs to call librados to operate on Ceph. On top of that, RADOS GW serves object storage and RBD serves block storage, both built on librados; CephFS is a kernel-space client that exposes a POSIX interface users can mount directly.
  • RADOS Gateway and RBD exist to provide higher-level, more application-friendly interfaces on top of the librados library. RADOS GW is a gateway offering RESTful APIs compatible with Amazon S3 and Swift for object-storage applications, while RBD provides a standard block-device interface, commonly used to create volumes for virtual machines; Red Hat has integrated the RBD driver into KVM/QEMU to improve VM I/O performance. Both are widely used in cloud computing today.
  • CephFS provides a POSIX interface that users mount directly from a client. As a kernel-space program it has no need for the user-space librados library; it interacts with RADOS through the kernel's networking stack.
  • RBD block devices expose block storage that can be mapped, formatted, and mounted on a server like an ordinary disk, and support snapshots.

3.2 How Ceph stores data:

(Data-placement diagram omitted; see the architecture page linked above.)

  • Regardless of the access method (object, block, or file mount), stored data is split into objects. The object size can be tuned by the administrator and is typically 2 MB or 4 MB. Every object gets a unique OID generated from ino and ono; these terms look intimidating but are actually simple: ino is the file's globally unique File ID, and ono is the stripe number. For example, a file with FileID A that is cut into two objects, numbered 0 and 1, yields the OIDs A0 and A1. The OID uniquely identifies each object and records which file it belongs to. Because all of Ceph's data is normalized into uniform objects, reads and writes are efficient. Objects are not stored directly on OSDs, though: objects are small, and a large cluster may hold anywhere from hundreds of thousands to tens of millions of them, so merely enumerating them to address data would be slow; moreover, if objects were mapped to OSDs with a fixed hash, the objects on a failed OSD could not migrate elsewhere (the fixed mapping function would not allow it). To solve both problems, Ceph introduces placement groups (PGs).
  • A PG is a logical concept: on a Linux system you can see objects directly, but never a PG. For data addressing it works like a database index: each object maps deterministically into one PG, so to find an object you first locate its PG and then search only within that PG, instead of scanning every object. Data migration also happens at PG granularity; Ceph does not move individual objects. How does an object map into a PG? Recall the OID: a static hash function over the OID produces a signature, and that signature modulo the number of PGs gives the PGID. Because of this design, the PG count directly determines how evenly data is distributed, so choosing a sensible number of PGs improves cluster performance and balances the data.
  • Finally, each PG is replicated according to the replica count the administrator configured, and the CRUSH algorithm stores the copies on different OSD nodes (in practice this means storing all of the PG's objects on those nodes). The first OSD is the primary; the rest are replicas.
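The ino/ono → OID → PGID chain described above is easy to model. A minimal sketch, with sha1 standing in for Ceph's real hash (Ceph actually uses its own rjenkins-based hash and a "stable mod" on the PG count, so the numbers here are illustrative only):

```python
import hashlib

def make_oid(ino: str, ono: int) -> str:
    # OID = file id + stripe number: file "A" cut in two -> "A0", "A1".
    return f"{ino}{ono}"

def pg_of(oid: str, pg_num: int) -> int:
    # Static hash of the OID, taken modulo the PG count, yields the PGID.
    digest = hashlib.sha1(oid.encode()).digest()
    return int.from_bytes(digest[:4], "big") % pg_num

pg_num = 128
oids = [make_oid("A", n) for n in range(2)]   # the two objects of file A
pgids = [pg_of(o, pg_num) for o in oids]
```

Because the mapping is a pure function of the OID and pg_num, every client computes the same PGID independently; CRUSH then maps that PGID to an acting set of OSDs. This is also why the PG count governs distribution evenness: with too few PGs, the modulo buckets objects very coarsely.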

 

4. Environment Preparation, Repository Setup, Installation, and Verification

This deployment runs on virtual machines: Ubuntu 18.04 for the cluster nodes and CentOS 7 for the test client, with Ceph at the current Pacific release. The plan is as follows (laptop resources are limited, so several roles are co-located on each node):

Node    Roles                      IP (11.x: public / 22.x: cluster)   CPU  Memory  Disks                OS
node01  deploy, mon, mgr, osd      192.168.11.210 / 192.168.22.210     2C   2G      two 40 GB OSD disks  Ubuntu 18.04
node02  mon, mgr, osd              192.168.11.220 / 192.168.22.220     2C   2G      two 40 GB OSD disks  Ubuntu 18.04
node03  mon, mgr, osd              192.168.11.230 / 192.168.22.230     2C   2G      two 40 GB OSD disks  Ubuntu 18.04
client  ceph-common package only   192.168.11.128                      \    \       \                    CentOS 7

4.1 Environment preparation (I won't spell out every step here, only the points to watch):

  • Keep time synchronized across all nodes;
  • Set hostnames and hosts-file resolution;
  • Set up passwordless SSH from the deploy host to all nodes;
  • Configure static addresses on each node's network interfaces.

4.2 Configure the OS and Ceph repositories:

On all nodes, use the Tsinghua University (or Aliyun) mirrors:

wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -


cat > /etc/apt/sources.list <<EOF
# Source (deb-src) mirrors are commented out by default to speed up apt update; uncomment them if needed
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-updates main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-updates main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-backports main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-backports main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-security main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-security main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic main
EOF

After changing the sources, be sure to run: # apt update

4.3 Installation steps:

1. Install the ceph-deploy tool on node01 (it will generate many files later, so create a cephCluster directory to work in):

root@node01:~# apt-cache madison ceph-deploy
ceph-deploy |      2.0.1 | https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic/main amd64 Packages
ceph-deploy |      2.0.1 | https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic/main i386 Packages
ceph-deploy | 1.5.38-0ubuntu1 | https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/universe amd64 Packages
ceph-deploy | 1.5.38-0ubuntu1 | https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/universe i386 Packages
root@node01:~# 
root@node01:~# ls
root@node01:~# mkdir cephCluster
root@node01:~# cd cephCluster/
root@node01:~/cephCluster# ls
root@node01:~/cephCluster# apt install ceph-deploy
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
  libpython-stdlib libpython2.7-minimal libpython2.7-stdlib python python-minimal python-pkg-resources python-setuptools python2.7 python2.7-minimal
Suggested packages:
  python-doc python-tk python-setuptools-doc python2.7-doc binutils binfmt-support
The following NEW packages will be installed:
  ceph-deploy libpython-stdlib libpython2.7-minimal libpython2.7-stdlib python python-minimal python-pkg-resources python-setuptools python2.7 python2.7-minimal
0 upgraded, 10 newly installed, 0 to remove and 157 not upgraded.
Need to get 4,521 kB of archives.
After this operation, 19.4 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates/main amd64 libpython2.7-minimal amd64 2.7.17-1~18.04ubuntu1.6 [335 kB]
Get:2 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates/main amd64 python2.7-minimal amd64 2.7.17-1~18.04ubuntu1.6 [1,291 kB]
Get:3 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 python-minimal amd64 2.7.15~rc1-1 [28.1 kB]
Get:4 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates/main amd64 libpython2.7-stdlib amd64 2.7.17-1~18.04ubuntu1.6 [1,917 kB]
Get:5 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates/main amd64 python2.7 amd64 2.7.17-1~18.04ubuntu1.6 [248 kB]
Get:6 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 libpython-stdlib amd64 2.7.15~rc1-1 [7,620 B]
Get:7 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 python amd64 2.7.15~rc1-1 [140 kB]
Get:8 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 python-pkg-resources all 39.0.1-2 [128 kB]
Get:9 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 python-setuptools all 39.0.1-2 [329 kB]
Get:10 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic/main amd64 ceph-deploy all 2.0.1 [97.2 kB]
Fetched 4,521 kB in 1s (3,953 kB/s)     
Selecting previously unselected package libpython2.7-minimal:amd64.
(Reading database ... 67125 files and directories currently installed.)
Preparing to unpack .../0-libpython2.7-minimal_2.7.17-1~18.04ubuntu1.6_amd64.deb ...
Unpacking libpython2.7-minimal:amd64 (2.7.17-1~18.04ubuntu1.6) ...
Selecting previously unselected package python2.7-minimal.
Preparing to unpack .../1-python2.7-minimal_2.7.17-1~18.04ubuntu1.6_amd64.deb ...
Unpacking python2.7-minimal (2.7.17-1~18.04ubuntu1.6) ...
Selecting previously unselected package python-minimal.
Preparing to unpack .../2-python-minimal_2.7.15~rc1-1_amd64.deb ...
Unpacking python-minimal (2.7.15~rc1-1) ...
Selecting previously unselected package libpython2.7-stdlib:amd64.
Preparing to unpack .../3-libpython2.7-stdlib_2.7.17-1~18.04ubuntu1.6_amd64.deb ...
Unpacking libpython2.7-stdlib:amd64 (2.7.17-1~18.04ubuntu1.6) ...
Selecting previously unselected package python2.7.
Preparing to unpack .../4-python2.7_2.7.17-1~18.04ubuntu1.6_amd64.deb ...
Unpacking python2.7 (2.7.17-1~18.04ubuntu1.6) ...
Selecting previously unselected package libpython-stdlib:amd64.
Preparing to unpack .../5-libpython-stdlib_2.7.15~rc1-1_amd64.deb ...
Unpacking libpython-stdlib:amd64 (2.7.15~rc1-1) ...
Setting up libpython2.7-minimal:amd64 (2.7.17-1~18.04ubuntu1.6) ...
Setting up python2.7-minimal (2.7.17-1~18.04ubuntu1.6) ...
Linking and byte-compiling packages for runtime python2.7...
Setting up python-minimal (2.7.15~rc1-1) ...
Selecting previously unselected package python.
(Reading database ... 67873 files and directories currently installed.)
Preparing to unpack .../python_2.7.15~rc1-1_amd64.deb ...
Unpacking python (2.7.15~rc1-1) ...
Selecting previously unselected package python-pkg-resources.
Preparing to unpack .../python-pkg-resources_39.0.1-2_all.deb ...
Unpacking python-pkg-resources (39.0.1-2) ...
Selecting previously unselected package python-setuptools.
Preparing to unpack .../python-setuptools_39.0.1-2_all.deb ...
Unpacking python-setuptools (39.0.1-2) ...
Selecting previously unselected package ceph-deploy.
Preparing to unpack .../ceph-deploy_2.0.1_all.deb ...
Unpacking ceph-deploy (2.0.1) ...
Setting up libpython2.7-stdlib:amd64 (2.7.17-1~18.04ubuntu1.6) ...
Setting up python2.7 (2.7.17-1~18.04ubuntu1.6) ...
Setting up libpython-stdlib:amd64 (2.7.15~rc1-1) ...
Setting up python (2.7.15~rc1-1) ...
Setting up python-pkg-resources (39.0.1-2) ...
Setting up python-setuptools (39.0.1-2) ...
Setting up ceph-deploy (2.0.1) ...
Processing triggers for mime-support (3.60ubuntu1) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
root@node01:~/cephCluster#

2. Initialize the cluster (node01 as the first mon node):

root@node01:~/cephCluster# ceph-deploy new --cluster-network 192.168.22.0/24 --public-network 192.168.11.0/24 node01
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy new --cluster-network 192.168.22.0/24 --public-network 192.168.11.0/24 node01
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f2a355a4e60>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  ssh_copykey                   : True
[ceph_deploy.cli][INFO  ]  mon                           : ['node01']
[ceph_deploy.cli][INFO  ]  func                          : <function new at 0x7f2a3285dad0>
[ceph_deploy.cli][INFO  ]  public_network                : 192.168.11.0/24
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  cluster_network               : 192.168.22.0/24
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  fsid                          : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[node01][DEBUG ] connected to host: node01 
[node01][DEBUG ] detect platform information from remote host
[node01][DEBUG ] detect machine type
[node01][DEBUG ] find the location of an executable
[node01][INFO  ] Running command: /bin/ip link show
[node01][INFO  ] Running command: /bin/ip addr show
[node01][DEBUG ] IP addresses found: [u'192.168.22.210', u'192.168.11.210']
[ceph_deploy.new][DEBUG ] Resolving host node01
[ceph_deploy.new][DEBUG ] Monitor node01 at 192.168.11.210
[ceph_deploy.new][DEBUG ] Monitor initial members are ['node01']
[ceph_deploy.new][DEBUG ] Monitor addrs are [u'192.168.11.210']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
root@node01:~/cephCluster# 
root@node01:~/cephCluster# ls
ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring
root@node01:~/cephCluster# 

3. Install python2 on all three mon nodes; without it, the mon installation and initialization below will complain that it is missing (only node02's output is shown):

root@node02:~#  apt install python2.7 -y
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
  python2.7-minimal
Suggested packages:
  python2.7-doc binfmt-support
The following NEW packages will be installed:
  python2.7 python2.7-minimal
0 upgraded, 2 newly installed, 0 to remove and 157 not upgraded.
Need to get 1,539 kB of archives.
After this operation, 4,163 kB of additional disk space will be used.
Get:1 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates/main amd64 python2.7-minimal amd64 2.7.17-1~18.04ubuntu1.6 [1,291 kB]
Get:2 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates/main amd64 python2.7 amd64 2.7.17-1~18.04ubuntu1.6 [248 kB]
Fetched 1,539 kB in 1s (2,341 kB/s)
Selecting previously unselected package python2.7-minimal.
(Reading database ... 69833 files and directories currently installed.)
Preparing to unpack .../python2.7-minimal_2.7.17-1~18.04ubuntu1.6_amd64.deb ...
Unpacking python2.7-minimal (2.7.17-1~18.04ubuntu1.6) ...
Selecting previously unselected package python2.7.
Preparing to unpack .../python2.7_2.7.17-1~18.04ubuntu1.6_amd64.deb ...
Unpacking python2.7 (2.7.17-1~18.04ubuntu1.6) ...
Setting up python2.7-minimal (2.7.17-1~18.04ubuntu1.6) ...
Linking and byte-compiling packages for runtime python2.7...
Setting up python2.7 (2.7.17-1~18.04ubuntu1.6) ...
Processing triggers for mime-support (3.60ubuntu1) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
root@node02:~# which python2
root@node02:~# python
python2.7   python3     python3.6   python3.6m  python3m    
root@node02:~#  ln -sv /usr/bin/python2.7 /usr/bin/python2
'/usr/bin/python2' -> '/usr/bin/python2.7'
root@node02:~# 

4. Install the ceph-mon package on all three mon nodes (only node01's output is shown):

root@node01:~# apt-cache madison ceph-mon
  ceph-mon | 16.2.5-1bionic | https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic/main amd64 Packages
  ceph-mon | 12.2.13-0ubuntu0.18.04.8 | https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates/main amd64 Packages
  ceph-mon | 12.2.13-0ubuntu0.18.04.4 | https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-security/main amd64 Packages
  ceph-mon | 12.2.4-0ubuntu1 | https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 Packages
root@node01:~# 
root@node01:~# apt install ceph-mon
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
  binutils binutils-common binutils-x86-64-linux-gnu ceph-base ceph-common ceph-fuse ceph-mds guile-2.0-libs ibverbs-providers libaio1 libbabeltrace1 libbinutils libcephfs2 libdw1 libgc1c2
  libgoogle-perftools4 libgsasl7 libibverbs1 libjaeger libkyotocabinet16v5 libleveldb1v5 libltdl7 liblttng-ust-ctl4 liblttng-ust0 liblua5.3-0 libmailutils5 libmysqlclient20 libnl-route-3-200 libntlm0 liboath0
  libopts25 libpython2.7 librabbitmq4 librados2 libradosstriper1 librbd1 librdkafka1 librdmacm1 librgw2 libsnappy1v5 libtcmalloc-minimal4 liburcu6 mailutils mailutils-common mysql-common ntp nvme-cli postfix
  python3-ceph-argparse python3-ceph-common python3-cephfs python3-prettytable python3-rados python3-rbd python3-rgw smartmontools sntp ssl-cert
Suggested packages:
  binutils-doc mailutils-mh mailutils-doc ntp-doc procmail postfix-mysql postfix-pgsql postfix-ldap postfix-pcre postfix-lmdb postfix-sqlite sasl2-bin dovecot-common resolvconf postfix-cdb postfix-doc
  gsmartcontrol smart-notifier openssl-blacklist
The following NEW packages will be installed:
  binutils binutils-common binutils-x86-64-linux-gnu ceph-base ceph-common ceph-fuse ceph-mds ceph-mon guile-2.0-libs ibverbs-providers libaio1 libbabeltrace1 libbinutils libcephfs2 libdw1 libgc1c2
  libgoogle-perftools4 libgsasl7 libibverbs1 libjaeger libkyotocabinet16v5 libleveldb1v5 libltdl7 liblttng-ust-ctl4 liblttng-ust0 liblua5.3-0 libmailutils5 libmysqlclient20 libnl-route-3-200 libntlm0 liboath0
  libopts25 libpython2.7 librabbitmq4 librados2 libradosstriper1 librbd1 librdkafka1 librdmacm1 librgw2 libsnappy1v5 libtcmalloc-minimal4 liburcu6 mailutils mailutils-common mysql-common ntp nvme-cli postfix
  python3-ceph-argparse python3-ceph-common python3-cephfs python3-prettytable python3-rados python3-rbd python3-rgw smartmontools sntp ssl-cert
0 upgraded, 59 newly installed, 0 to remove and 157 not upgraded.
Need to get 60.9 MB of archives.
After this operation, 273 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/universe amd64 libopts25 amd64 1:5.18.12-4 [58.2 kB]
Get:2 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates/universe amd64 ntp amd64 1:4.2.8p10+dfsg-5ubuntu7.3 [640 kB]
Get:3 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates/main amd64 binutils-common amd64 2.30-21ubuntu1~18.04.5 [197 kB]
Get:4 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates/main amd64 libbinutils amd64 2.30-21ubuntu1~18.04.5 [489 kB]
Get:5 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates/main amd64 binutils-x86-64-linux-gnu amd64 2.30-21ubuntu1~18.04.5 [1,839 kB]
Get:6 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates/main amd64 binutils amd64 2.30-21ubuntu1~18.04.5 [3,388 B]
Get:7 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic/main amd64 libjaeger amd64 16.2.5-1bionic [3,780 B]
Get:8 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 libnl-route-3-200 amd64 3.2.29-0ubuntu3 [146 kB]
Get:9 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates/main amd64 libibverbs1 amd64 17.1-1ubuntu0.2 [44.4 kB]
Get:10 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates/main amd64 liburcu6 amd64 0.10.1-1ubuntu1 [52.2 kB]
Get:11 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/universe amd64 liblttng-ust-ctl4 amd64 2.10.1-1 [80.8 kB]
Get:12 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/universe amd64 liblttng-ust0 amd64 2.10.1-1 [154 kB]
Get:13 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates/main amd64 librdmacm1 amd64 17.1-1ubuntu0.2 [56.1 kB]
Get:14 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic/main amd64 librados2 amd64 16.2.5-1bionic [3,175 kB]
Get:15 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates/main amd64 libaio1 amd64 0.3.110-5ubuntu0.1 [6,476 B]
Get:16 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic/main amd64 librbd1 amd64 16.2.5-1bionic [3,125 kB]
Get:17 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic/main amd64 libcephfs2 amd64 16.2.5-1bionic [671 kB]
Get:18 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic/main amd64 python3-rados amd64 16.2.5-1bionic [339 kB]
Get:19 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic/main amd64 python3-ceph-argparse all 16.2.5-1bionic [21.9 kB]
Get:20 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic/main amd64 python3-cephfs amd64 16.2.5-1bionic [177 kB]
Get:21 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic/main amd64 python3-ceph-common all 16.2.5-1bionic [30.8 kB]
Get:22 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 python3-prettytable all 0.7.2-3 [19.7 kB]
Get:23 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic/main amd64 python3-rbd amd64 16.2.5-1bionic [336 kB]
Get:24 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates/main amd64 liblua5.3-0 amd64 5.3.3-1ubuntu0.18.04.1 [115 kB]
Get:25 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates/universe amd64 librabbitmq4 amd64 0.8.0-1ubuntu0.18.04.2 [33.9 kB]
Get:26 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/universe amd64 librdkafka1 amd64 0.11.3-1build1 [293 kB]
Get:27 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic/main amd64 librgw2 amd64 16.2.5-1bionic [3,394 kB]
Get:28 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic/main amd64 python3-rgw amd64 16.2.5-1bionic [99.4 kB]
Get:29 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates/main amd64 libdw1 amd64 0.170-0.4ubuntu0.1 [203 kB]
Get:30 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 libbabeltrace1 amd64 1.5.5-1 [154 kB]
Get:31 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 libtcmalloc-minimal4 amd64 2.5-2.2ubuntu3 [91.6 kB]
Get:32 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 libgoogle-perftools4 amd64 2.5-2.2ubuntu3 [190 kB]
Get:33 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 libsnappy1v5 amd64 1.1.7-1 [16.0 kB]
Get:34 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 libleveldb1v5 amd64 1.20-2 [136 kB]
Get:35 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/universe amd64 liboath0 amd64 2.6.1-1 [44.7 kB]
Get:36 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic/main amd64 libradosstriper1 amd64 16.2.5-1bionic [415 kB]
Get:37 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic/main amd64 ceph-common amd64 16.2.5-1bionic [21.3 MB]
Get:38 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic/main amd64 ceph-base amd64 16.2.5-1bionic [5,630 kB]
Get:39 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic/main amd64 ceph-fuse amd64 16.2.5-1bionic [777 kB]
Get:40 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic/main amd64 ceph-mds amd64 16.2.5-1bionic [2,159 kB]
Get:41 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic/main amd64 ceph-mon amd64 16.2.5-1bionic [6,680 kB]
Get:42 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 libgc1c2 amd64 1:7.4.2-8ubuntu1 [81.8 kB]                                                                                                   
Get:43 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 libltdl7 amd64 2.4.6-2 [38.8 kB]                                                                                                            
Get:44 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates/main amd64 guile-2.0-libs amd64 2.0.13+1-5ubuntu0.1 [2,218 kB]                                                                                 
Get:45 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates/main amd64 ibverbs-providers amd64 17.1-1ubuntu0.2 [160 kB]                                                                                    
Get:46 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/universe amd64 libntlm0 amd64 1.4-8 [13.6 kB]                                                                                                          
Get:47 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/universe amd64 libgsasl7 amd64 1.8.0-8ubuntu3 [118 kB]                                                                                                 
Get:48 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/universe amd64 libkyotocabinet16v5 amd64 1.2.76-4.2 [292 kB]                                                                                           
Get:49 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/universe amd64 mailutils-common all 1:3.4-1 [269 kB]                                                                                                   
Get:50 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 mysql-common all 5.8+1.0.4 [7,308 B]                                                                                                        
Get:51 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates/main amd64 libmysqlclient20 amd64 5.7.35-0ubuntu0.18.04.1 [691 kB]                                                                             
Get:52 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates/main amd64 libpython2.7 amd64 2.7.17-1~18.04ubuntu1.6 [1,053 kB]                                                                               
Get:53 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/universe amd64 libmailutils5 amd64 1:3.4-1 [457 kB]                                                                                                    
Get:54 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 ssl-cert all 1.0.39 [17.0 kB]                                                                                                               
Get:55 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates/main amd64 postfix amd64 3.3.0-1ubuntu0.3 [1,148 kB]                                                                                           
Get:56 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/universe amd64 mailutils amd64 1:3.4-1 [140 kB]                                                                                                        
Get:57 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates/main amd64 nvme-cli amd64 1.5-1ubuntu1.1 [184 kB]                                                                                              
Get:58 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates/main amd64 smartmontools amd64 6.5+svn4324-1ubuntu0.1 [477 kB]                                                                                 
Get:59 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates/universe amd64 sntp amd64 1:4.2.8p10+dfsg-5ubuntu7.3 [86.5 kB]                                                                                 
Fetched 60.9 MB in 7s (8,850 kB/s)                                                                                                                                                                               
Extracting templates from packages: 100%
Preconfiguring packages ...
Selecting previously unselected package libopts25:amd64.
(Reading database ... 68224 files and directories currently installed.)
Preparing to unpack .../00-libopts25_1%3a5.18.12-4_amd64.deb ...
Unpacking libopts25:amd64 (1:5.18.12-4) ...
Selecting previously unselected package ntp.
Preparing to unpack .../01-ntp_1%3a4.2.8p10+dfsg-5ubuntu7.3_amd64.deb ...
Unpacking ntp (1:4.2.8p10+dfsg-5ubuntu7.3) ...
Selecting previously unselected package binutils-common:amd64.
Preparing to unpack .../02-binutils-common_2.30-21ubuntu1~18.04.5_amd64.deb ...
Unpacking binutils-common:amd64 (2.30-21ubuntu1~18.04.5) ...
Selecting previously unselected package libbinutils:amd64.
Preparing to unpack .../03-libbinutils_2.30-21ubuntu1~18.04.5_amd64.deb ...
Unpacking libbinutils:amd64 (2.30-21ubuntu1~18.04.5) ...
Selecting previously unselected package binutils-x86-64-linux-gnu.
Preparing to unpack .../04-binutils-x86-64-linux-gnu_2.30-21ubuntu1~18.04.5_amd64.deb ...
Unpacking binutils-x86-64-linux-gnu (2.30-21ubuntu1~18.04.5) ...
Selecting previously unselected package binutils.
Preparing to unpack .../05-binutils_2.30-21ubuntu1~18.04.5_amd64.deb ...
Unpacking binutils (2.30-21ubuntu1~18.04.5) ...
Selecting previously unselected package libjaeger.
Preparing to unpack .../06-libjaeger_16.2.5-1bionic_amd64.deb ...
Unpacking libjaeger (16.2.5-1bionic) ...
Selecting previously unselected package libnl-route-3-200:amd64.
Preparing to unpack .../07-libnl-route-3-200_3.2.29-0ubuntu3_amd64.deb ...
Unpacking libnl-route-3-200:amd64 (3.2.29-0ubuntu3) ...
Selecting previously unselected package libibverbs1:amd64.
Preparing to unpack .../08-libibverbs1_17.1-1ubuntu0.2_amd64.deb ...
Unpacking libibverbs1:amd64 (17.1-1ubuntu0.2) ...
Selecting previously unselected package liburcu6:amd64.
Preparing to unpack .../09-liburcu6_0.10.1-1ubuntu1_amd64.deb ...
Unpacking liburcu6:amd64 (0.10.1-1ubuntu1) ...
Selecting previously unselected package liblttng-ust-ctl4:amd64.
Preparing to unpack .../10-liblttng-ust-ctl4_2.10.1-1_amd64.deb ...
Unpacking liblttng-ust-ctl4:amd64 (2.10.1-1) ...
Selecting previously unselected package liblttng-ust0:amd64.
Preparing to unpack .../11-liblttng-ust0_2.10.1-1_amd64.deb ...
Unpacking liblttng-ust0:amd64 (2.10.1-1) ...
Selecting previously unselected package librdmacm1:amd64.
Preparing to unpack .../12-librdmacm1_17.1-1ubuntu0.2_amd64.deb ...
Unpacking librdmacm1:amd64 (17.1-1ubuntu0.2) ...
Selecting previously unselected package librados2.
Preparing to unpack .../13-librados2_16.2.5-1bionic_amd64.deb ...
Unpacking librados2 (16.2.5-1bionic) ...
Selecting previously unselected package libaio1:amd64.
Preparing to unpack .../14-libaio1_0.3.110-5ubuntu0.1_amd64.deb ...
Unpacking libaio1:amd64 (0.3.110-5ubuntu0.1) ...
Selecting previously unselected package librbd1.
Preparing to unpack .../15-librbd1_16.2.5-1bionic_amd64.deb ...
Unpacking librbd1 (16.2.5-1bionic) ...
Selecting previously unselected package libcephfs2.
Preparing to unpack .../16-libcephfs2_16.2.5-1bionic_amd64.deb ...
Unpacking libcephfs2 (16.2.5-1bionic) ...
Selecting previously unselected package python3-rados.
Preparing to unpack .../17-python3-rados_16.2.5-1bionic_amd64.deb ...
Unpacking python3-rados (16.2.5-1bionic) ...
Selecting previously unselected package python3-ceph-argparse.
Preparing to unpack .../18-python3-ceph-argparse_16.2.5-1bionic_all.deb ...
Unpacking python3-ceph-argparse (16.2.5-1bionic) ...
Selecting previously unselected package python3-cephfs.
Preparing to unpack .../19-python3-cephfs_16.2.5-1bionic_amd64.deb ...
Unpacking python3-cephfs (16.2.5-1bionic) ...
Selecting previously unselected package python3-ceph-common.
Preparing to unpack .../20-python3-ceph-common_16.2.5-1bionic_all.deb ...
Unpacking python3-ceph-common (16.2.5-1bionic) ...
Selecting previously unselected package python3-prettytable.
Preparing to unpack .../21-python3-prettytable_0.7.2-3_all.deb ...
Unpacking python3-prettytable (0.7.2-3) ...
Selecting previously unselected package python3-rbd.
Preparing to unpack .../22-python3-rbd_16.2.5-1bionic_amd64.deb ...
Unpacking python3-rbd (16.2.5-1bionic) ...
Selecting previously unselected package liblua5.3-0:amd64.
Preparing to unpack .../23-liblua5.3-0_5.3.3-1ubuntu0.18.04.1_amd64.deb ...
Unpacking liblua5.3-0:amd64 (5.3.3-1ubuntu0.18.04.1) ...
Selecting previously unselected package librabbitmq4:amd64.
Preparing to unpack .../24-librabbitmq4_0.8.0-1ubuntu0.18.04.2_amd64.deb ...
Unpacking librabbitmq4:amd64 (0.8.0-1ubuntu0.18.04.2) ...
Selecting previously unselected package librdkafka1:amd64.
Preparing to unpack .../25-librdkafka1_0.11.3-1build1_amd64.deb ...
Unpacking librdkafka1:amd64 (0.11.3-1build1) ...
Selecting previously unselected package librgw2.
Preparing to unpack .../26-librgw2_16.2.5-1bionic_amd64.deb ...
Unpacking librgw2 (16.2.5-1bionic) ...
Selecting previously unselected package python3-rgw.
Preparing to unpack .../27-python3-rgw_16.2.5-1bionic_amd64.deb ...
Unpacking python3-rgw (16.2.5-1bionic) ...
Selecting previously unselected package libdw1:amd64.
Preparing to unpack .../28-libdw1_0.170-0.4ubuntu0.1_amd64.deb ...
Unpacking libdw1:amd64 (0.170-0.4ubuntu0.1) ...
Selecting previously unselected package libbabeltrace1:amd64.
Preparing to unpack .../29-libbabeltrace1_1.5.5-1_amd64.deb ...
Unpacking libbabeltrace1:amd64 (1.5.5-1) ...
Selecting previously unselected package libtcmalloc-minimal4.
Preparing to unpack .../30-libtcmalloc-minimal4_2.5-2.2ubuntu3_amd64.deb ...
Unpacking libtcmalloc-minimal4 (2.5-2.2ubuntu3) ...
Selecting previously unselected package libgoogle-perftools4.
Preparing to unpack .../31-libgoogle-perftools4_2.5-2.2ubuntu3_amd64.deb ...
Unpacking libgoogle-perftools4 (2.5-2.2ubuntu3) ...
Selecting previously unselected package libsnappy1v5:amd64.
Preparing to unpack .../32-libsnappy1v5_1.1.7-1_amd64.deb ...
Unpacking libsnappy1v5:amd64 (1.1.7-1) ...
Selecting previously unselected package libleveldb1v5:amd64.
Preparing to unpack .../33-libleveldb1v5_1.20-2_amd64.deb ...
Unpacking libleveldb1v5:amd64 (1.20-2) ...
Selecting previously unselected package liboath0.
Preparing to unpack .../34-liboath0_2.6.1-1_amd64.deb ...
Unpacking liboath0 (2.6.1-1) ...
Selecting previously unselected package libradosstriper1.
Preparing to unpack .../35-libradosstriper1_16.2.5-1bionic_amd64.deb ...
Unpacking libradosstriper1 (16.2.5-1bionic) ...
Selecting previously unselected package ceph-common.
Preparing to unpack .../36-ceph-common_16.2.5-1bionic_amd64.deb ...
Unpacking ceph-common (16.2.5-1bionic) ...
Selecting previously unselected package ceph-base.
Preparing to unpack .../37-ceph-base_16.2.5-1bionic_amd64.deb ...
Unpacking ceph-base (16.2.5-1bionic) ...
Selecting previously unselected package ceph-fuse.
Preparing to unpack .../38-ceph-fuse_16.2.5-1bionic_amd64.deb ...
Unpacking ceph-fuse (16.2.5-1bionic) ...
Selecting previously unselected package ceph-mds.
Preparing to unpack .../39-ceph-mds_16.2.5-1bionic_amd64.deb ...
Unpacking ceph-mds (16.2.5-1bionic) ...
Selecting previously unselected package ceph-mon.
Preparing to unpack .../40-ceph-mon_16.2.5-1bionic_amd64.deb ...
Unpacking ceph-mon (16.2.5-1bionic) ...
Selecting previously unselected package libgc1c2:amd64.
Preparing to unpack .../41-libgc1c2_1%3a7.4.2-8ubuntu1_amd64.deb ...
Unpacking libgc1c2:amd64 (1:7.4.2-8ubuntu1) ...
Selecting previously unselected package libltdl7:amd64.
Preparing to unpack .../42-libltdl7_2.4.6-2_amd64.deb ...
Unpacking libltdl7:amd64 (2.4.6-2) ...
Selecting previously unselected package guile-2.0-libs:amd64.
Preparing to unpack .../43-guile-2.0-libs_2.0.13+1-5ubuntu0.1_amd64.deb ...
Unpacking guile-2.0-libs:amd64 (2.0.13+1-5ubuntu0.1) ...
Selecting previously unselected package ibverbs-providers:amd64.
Preparing to unpack .../44-ibverbs-providers_17.1-1ubuntu0.2_amd64.deb ...
Unpacking ibverbs-providers:amd64 (17.1-1ubuntu0.2) ...
Selecting previously unselected package libntlm0:amd64.
Preparing to unpack .../45-libntlm0_1.4-8_amd64.deb ...
Unpacking libntlm0:amd64 (1.4-8) ...
Selecting previously unselected package libgsasl7:amd64.
Preparing to unpack .../46-libgsasl7_1.8.0-8ubuntu3_amd64.deb ...
Unpacking libgsasl7:amd64 (1.8.0-8ubuntu3) ...
Selecting previously unselected package libkyotocabinet16v5:amd64.
Preparing to unpack .../47-libkyotocabinet16v5_1.2.76-4.2_amd64.deb ...
Unpacking libkyotocabinet16v5:amd64 (1.2.76-4.2) ...
Selecting previously unselected package mailutils-common.
Preparing to unpack .../48-mailutils-common_1%3a3.4-1_all.deb ...
Unpacking mailutils-common (1:3.4-1) ...
Selecting previously unselected package mysql-common.
Preparing to unpack .../49-mysql-common_5.8+1.0.4_all.deb ...
Unpacking mysql-common (5.8+1.0.4) ...
Selecting previously unselected package libmysqlclient20:amd64.
Preparing to unpack .../50-libmysqlclient20_5.7.35-0ubuntu0.18.04.1_amd64.deb ...
Unpacking libmysqlclient20:amd64 (5.7.35-0ubuntu0.18.04.1) ...
Selecting previously unselected package libpython2.7:amd64.
Preparing to unpack .../51-libpython2.7_2.7.17-1~18.04ubuntu1.6_amd64.deb ...
Unpacking libpython2.7:amd64 (2.7.17-1~18.04ubuntu1.6) ...
Selecting previously unselected package libmailutils5:amd64.
Preparing to unpack .../52-libmailutils5_1%3a3.4-1_amd64.deb ...
Unpacking libmailutils5:amd64 (1:3.4-1) ...
Selecting previously unselected package ssl-cert.
Preparing to unpack .../53-ssl-cert_1.0.39_all.deb ...
Unpacking ssl-cert (1.0.39) ...
Selecting previously unselected package postfix.
Preparing to unpack .../54-postfix_3.3.0-1ubuntu0.3_amd64.deb ...
Unpacking postfix (3.3.0-1ubuntu0.3) ...
Selecting previously unselected package mailutils.
Preparing to unpack .../55-mailutils_1%3a3.4-1_amd64.deb ...
Unpacking mailutils (1:3.4-1) ...
Selecting previously unselected package nvme-cli.
Preparing to unpack .../56-nvme-cli_1.5-1ubuntu1.1_amd64.deb ...
Unpacking nvme-cli (1.5-1ubuntu1.1) ...
Selecting previously unselected package smartmontools.
Preparing to unpack .../57-smartmontools_6.5+svn4324-1ubuntu0.1_amd64.deb ...
Unpacking smartmontools (6.5+svn4324-1ubuntu0.1) ...
Selecting previously unselected package sntp.
Preparing to unpack .../58-sntp_1%3a4.2.8p10+dfsg-5ubuntu7.3_amd64.deb ...
Unpacking sntp (1:4.2.8p10+dfsg-5ubuntu7.3) ...
Setting up librdkafka1:amd64 (0.11.3-1build1) ...
Setting up libdw1:amd64 (0.170-0.4ubuntu0.1) ...
Setting up python3-ceph-argparse (16.2.5-1bionic) ...
Setting up mysql-common (5.8+1.0.4) ...
update-alternatives: using /etc/mysql/my.cnf.fallback to provide /etc/mysql/my.cnf (my.cnf) in auto mode
Setting up libgc1c2:amd64 (1:7.4.2-8ubuntu1) ...
Setting up libnl-route-3-200:amd64 (3.2.29-0ubuntu3) ...
Setting up ssl-cert (1.0.39) ...
Setting up smartmontools (6.5+svn4324-1ubuntu0.1) ...
Created symlink /etc/systemd/system/multi-user.target.wants/smartd.service → /lib/systemd/system/smartd.service.
Setting up liburcu6:amd64 (0.10.1-1ubuntu1) ...
Setting up nvme-cli (1.5-1ubuntu1.1) ...
Setting up python3-prettytable (0.7.2-3) ...
Setting up binutils-common:amd64 (2.30-21ubuntu1~18.04.5) ...
Setting up liblttng-ust-ctl4:amd64 (2.10.1-1) ...
Setting up libtcmalloc-minimal4 (2.5-2.2ubuntu3) ...
Setting up libntlm0:amd64 (1.4-8) ...
Setting up python3-ceph-common (16.2.5-1bionic) ...
Setting up libgoogle-perftools4 (2.5-2.2ubuntu3) ...
Setting up libaio1:amd64 (0.3.110-5ubuntu0.1) ...
Setting up libsnappy1v5:amd64 (1.1.7-1) ...
Setting up libltdl7:amd64 (2.4.6-2) ...
Setting up libpython2.7:amd64 (2.7.17-1~18.04ubuntu1.6) ...
Setting up libopts25:amd64 (1:5.18.12-4) ...
Setting up libjaeger (16.2.5-1bionic) ...
Setting up libmysqlclient20:amd64 (5.7.35-0ubuntu0.18.04.1) ...
Setting up liboath0 (2.6.1-1) ...
Setting up librabbitmq4:amd64 (0.8.0-1ubuntu0.18.04.2) ...
Setting up liblttng-ust0:amd64 (2.10.1-1) ...
Setting up liblua5.3-0:amd64 (5.3.3-1ubuntu0.18.04.1) ...
Setting up libkyotocabinet16v5:amd64 (1.2.76-4.2) ...
Setting up libbabeltrace1:amd64 (1.5.5-1) ...
Setting up postfix (3.3.0-1ubuntu0.3) ...
Created symlink /etc/systemd/system/multi-user.target.wants/postfix.service → /lib/systemd/system/postfix.service.
Adding group `postfix' (GID 116) ...
Done.
Adding system user `postfix' (UID 111) ...
Adding new user `postfix' (UID 111) with group `postfix' ...
Not creating home directory `/var/spool/postfix'.
Creating /etc/postfix/dynamicmaps.cf
Adding group `postdrop' (GID 117) ...
Done.
setting myhostname: node01
setting alias maps
setting alias database
mailname is not a fully qualified domain name.  Not changing /etc/mailname.
setting destinations: $myhostname, node01, localhost.localdomain, , localhost
setting relayhost: 
setting mynetworks: 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
setting mailbox_size_limit: 0
setting recipient_delimiter: +
setting inet_interfaces: all
setting inet_protocols: all
/etc/aliases does not exist, creating it.
WARNING: /etc/aliases exists, but does not have a root alias.

Postfix (main.cf) is now set up with a default configuration.  If you need to 
make changes, edit /etc/postfix/main.cf (and others) as needed.  To view 
Postfix configuration values, see postconf(1).

After modifying main.cf, be sure to run 'service postfix reload'.

Running newaliases
Setting up mailutils-common (1:3.4-1) ...
Setting up libgsasl7:amd64 (1.8.0-8ubuntu3) ...
Setting up libibverbs1:amd64 (17.1-1ubuntu0.2) ...
Setting up sntp (1:4.2.8p10+dfsg-5ubuntu7.3) ...
Setting up libbinutils:amd64 (2.30-21ubuntu1~18.04.5) ...
Setting up ntp (1:4.2.8p10+dfsg-5ubuntu7.3) ...
Created symlink /etc/systemd/system/network-pre.target.wants/ntp-systemd-netif.path → /lib/systemd/system/ntp-systemd-netif.path.
Created symlink /etc/systemd/system/multi-user.target.wants/ntp.service → /lib/systemd/system/ntp.service.
ntp-systemd-netif.service is a disabled or a static unit, not starting it.
Setting up librdmacm1:amd64 (17.1-1ubuntu0.2) ...
Setting up libleveldb1v5:amd64 (1.20-2) ...
Setting up librados2 (16.2.5-1bionic) ...
Setting up libcephfs2 (16.2.5-1bionic) ...
Setting up ibverbs-providers:amd64 (17.1-1ubuntu0.2) ...
Setting up guile-2.0-libs:amd64 (2.0.13+1-5ubuntu0.1) ...
Setting up python3-rados (16.2.5-1bionic) ...
Setting up binutils-x86-64-linux-gnu (2.30-21ubuntu1~18.04.5) ...
Setting up libmailutils5:amd64 (1:3.4-1) ...
Setting up libradosstriper1 (16.2.5-1bionic) ...
Setting up python3-cephfs (16.2.5-1bionic) ...
Setting up librgw2 (16.2.5-1bionic) ...
Setting up ceph-fuse (16.2.5-1bionic) ...
Created symlink /etc/systemd/system/remote-fs.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target.
Created symlink /etc/systemd/system/ceph.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target.
Setting up librbd1 (16.2.5-1bionic) ...
Setting up mailutils (1:3.4-1) ...
update-alternatives: using /usr/bin/frm.mailutils to provide /usr/bin/frm (frm) in auto mode
update-alternatives: using /usr/bin/from.mailutils to provide /usr/bin/from (from) in auto mode
update-alternatives: using /usr/bin/messages.mailutils to provide /usr/bin/messages (messages) in auto mode
update-alternatives: using /usr/bin/movemail.mailutils to provide /usr/bin/movemail (movemail) in auto mode
update-alternatives: using /usr/bin/readmsg.mailutils to provide /usr/bin/readmsg (readmsg) in auto mode
update-alternatives: using /usr/bin/dotlock.mailutils to provide /usr/bin/dotlock (dotlock) in auto mode
update-alternatives: using /usr/bin/mail.mailutils to provide /usr/bin/mailx (mailx) in auto mode
Setting up binutils (2.30-21ubuntu1~18.04.5) ...
Setting up python3-rgw (16.2.5-1bionic) ...
Setting up python3-rbd (16.2.5-1bionic) ...
Setting up ceph-common (16.2.5-1bionic) ...
Adding group ceph....done
Adding system user ceph....done
Setting system user ceph properties....done
chown: cannot access '/var/log/ceph/*.log*': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /lib/systemd/system/ceph.target.
Created symlink /etc/systemd/system/multi-user.target.wants/rbdmap.service → /lib/systemd/system/rbdmap.service.
Setting up ceph-base (16.2.5-1bionic) ...
Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /lib/systemd/system/ceph-crash.service.
Setting up ceph-mds (16.2.5-1bionic) ...
Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target.
Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target.
Setting up ceph-mon (16.2.5-1bionic) ...
Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target.
Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target.
Processing triggers for systemd (237-3ubuntu10.42) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
Processing triggers for rsyslog (8.32.0-1ubuntu4) ...
Processing triggers for ufw (0.36-0ubuntu0.18.04.1) ...
Processing triggers for ureadahead (0.100.0-21) ...
Processing triggers for libc-bin (2.27-3ubuntu1.2) ...
root@node01:~#

5. Install Ceph on the three node servers. Run the following on the deploy host; the output is shown below:

root@node01:~/cephCluster# ceph-deploy install  --no-adjust-repos --nogpgcheck node01 node02 node03 
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy install --no-adjust-repos --nogpgcheck node01 node02 node03
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  testing                       : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fbf9515cc80>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  dev_commit                    : None
[ceph_deploy.cli][INFO  ]  install_mds                   : False
[ceph_deploy.cli][INFO  ]  stable                        : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  adjust_repos                  : False
[ceph_deploy.cli][INFO  ]  func                          : <function install at 0x7fbf95a0ea50>
[ceph_deploy.cli][INFO  ]  install_mgr                   : False
[ceph_deploy.cli][INFO  ]  install_all                   : False
[ceph_deploy.cli][INFO  ]  repo                          : False
[ceph_deploy.cli][INFO  ]  host                          : ['node01', 'node02', 'node03']
[ceph_deploy.cli][INFO  ]  install_rgw                   : False
[ceph_deploy.cli][INFO  ]  install_tests                 : False
[ceph_deploy.cli][INFO  ]  repo_url                      : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  install_osd                   : False
[ceph_deploy.cli][INFO  ]  version_kind                  : stable
[ceph_deploy.cli][INFO  ]  install_common                : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  dev                           : master
[ceph_deploy.cli][INFO  ]  nogpgcheck                    : True
[ceph_deploy.cli][INFO  ]  local_mirror                  : None
[ceph_deploy.cli][INFO  ]  release                       : None
[ceph_deploy.cli][INFO  ]  install_mon                   : False
[ceph_deploy.cli][INFO  ]  gpg_url                       : None
[ceph_deploy.install][DEBUG ] Installing stable version mimic on cluster ceph hosts node01 node02 node03
[ceph_deploy.install][DEBUG ] Detecting platform for host node01 ...
[node01][DEBUG ] connected to host: node01 
[node01][DEBUG ] detect platform information from remote host
[node01][DEBUG ] detect machine type
[ceph_deploy.install][INFO  ] Distro info: Ubuntu 18.04 bionic
[node01][INFO  ] installing Ceph on node01
[node01][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q update
[node01][DEBUG ] Hit:1 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic InRelease
[node01][DEBUG ] Hit:2 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates InRelease
[node01][DEBUG ] Hit:3 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-backports InRelease
[node01][DEBUG ] Hit:4 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-security InRelease
[node01][DEBUG ] Hit:5 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic InRelease
[node01][DEBUG ] Reading package lists...
[node01][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q --no-install-recommends install ca-certificates apt-transport-https
[node01][DEBUG ] Reading package lists...
[node01][DEBUG ] Building dependency tree...
[node01][DEBUG ] Reading state information...
[node01][DEBUG ] ca-certificates is already the newest version (20210119~18.04.1).
[node01][DEBUG ] apt-transport-https is already the newest version (1.6.14).
[node01][DEBUG ] 0 upgraded, 0 newly installed, 0 to remove and 156 not upgraded.
[node01][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q update
[node01][DEBUG ] Hit:1 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic InRelease
[node01][DEBUG ] Hit:2 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates InRelease
[node01][DEBUG ] Hit:3 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-backports InRelease
[node01][DEBUG ] Hit:4 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-security InRelease
[node01][DEBUG ] Hit:5 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic InRelease
[node01][DEBUG ] Reading package lists...
[node01][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q --no-install-recommends install ceph ceph-osd ceph-mds ceph-mon radosgw
[node01][DEBUG ] Reading package lists...
[node01][DEBUG ] Building dependency tree...
[node01][DEBUG ] Reading state information...
[node01][DEBUG ] ceph is already the newest version (16.2.5-1bionic).
[node01][DEBUG ] ceph-mds is already the newest version (16.2.5-1bionic).
[node01][DEBUG ] ceph-mon is already the newest version (16.2.5-1bionic).
[node01][DEBUG ] ceph-osd is already the newest version (16.2.5-1bionic).
[node01][DEBUG ] radosgw is already the newest version (16.2.5-1bionic).
[node01][DEBUG ] 0 upgraded, 0 newly installed, 0 to remove and 156 not upgraded.
[node01][INFO  ] Running command: ceph --version
[node01][DEBUG ] ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)
[ceph_deploy.install][DEBUG ] Detecting platform for host node02 ...
[node02][DEBUG ] connected to host: node02 
[node02][DEBUG ] detect platform information from remote host
[node02][DEBUG ] detect machine type
[ceph_deploy.install][INFO  ] Distro info: Ubuntu 18.04 bionic
[node02][INFO  ] installing Ceph on node02
[node02][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q update
[node02][DEBUG ] Hit:1 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic InRelease
[node02][DEBUG ] Hit:2 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates InRelease
[node02][DEBUG ] Hit:3 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-backports InRelease
[node02][DEBUG ] Hit:4 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-security InRelease
[node02][DEBUG ] Hit:5 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic InRelease
[node02][DEBUG ] Reading package lists...
[node02][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q --no-install-recommends install ca-certificates apt-transport-https
[node02][DEBUG ] Reading package lists...
[node02][DEBUG ] Building dependency tree...
[node02][DEBUG ] Reading state information...
[node02][DEBUG ] The following NEW packages will be installed:
[node02][DEBUG ]   apt-transport-https
[node02][DEBUG ] The following packages will be upgraded:
[node02][DEBUG ]   ca-certificates
[node02][DEBUG ] 1 upgraded, 1 newly installed, 0 to remove and 156 not upgraded.
[node02][DEBUG ] Need to get 151 kB of archives.
[node02][DEBUG ] After this operation, 153 kB of additional disk space will be used.
[node02][DEBUG ] Get:1 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates/main amd64 ca-certificates all 20210119~18.04.1 [147 kB]
[node02][DEBUG ] Get:2 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates/universe amd64 apt-transport-https all 1.6.14 [4,348 B]
[node02][DEBUG ] Preconfiguring packages ...
[node02][DEBUG ] Fetched 151 kB in 0s (555 kB/s)
(Reading database ... 69860 files and directories currently installed.)
[node02][DEBUG ] Preparing to unpack .../ca-certificates_20210119~18.04.1_all.deb ...
[node02][DEBUG ] Unpacking ca-certificates (20210119~18.04.1) over (20190110~18.04.1) ...
[node02][DEBUG ] Selecting previously unselected package apt-transport-https.
[node02][DEBUG ] Preparing to unpack .../apt-transport-https_1.6.14_all.deb ...
[node02][DEBUG ] Unpacking apt-transport-https (1.6.14) ...
[node02][DEBUG ] Setting up apt-transport-https (1.6.14) ...
[node02][DEBUG ] Setting up ca-certificates (20210119~18.04.1) ...
[node02][DEBUG ] Updating certificates in /etc/ssl/certs...
[node02][DEBUG ] 21 added, 19 removed; done.
[node02][DEBUG ] Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
[node02][DEBUG ] Processing triggers for ca-certificates (20210119~18.04.1) ...
[node02][DEBUG ] Updating certificates in /etc/ssl/certs...
[node02][DEBUG ] 0 added, 0 removed; done.
[node02][DEBUG ] Running hooks in /etc/ca-certificates/update.d...
[node02][DEBUG ] done.
[node02][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q update
[node02][DEBUG ] Hit:1 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic InRelease
[node02][DEBUG ] Hit:2 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates InRelease
[node02][DEBUG ] Hit:3 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-backports InRelease
[node02][DEBUG ] Hit:4 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-security InRelease
[node02][DEBUG ] Hit:5 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic InRelease
[node02][DEBUG ] Reading package lists...
[node02][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q --no-install-recommends install ceph ceph-osd ceph-mds ceph-mon radosgw
[node02][DEBUG ] Reading package lists...
[node02][DEBUG ] Building dependency tree...
[node02][DEBUG ] Reading state information...
[node02][DEBUG ] ceph-mds is already the newest version (16.2.5-1bionic).
[node02][DEBUG ] ceph-mds set to manually installed.
[node02][DEBUG ] ceph-mon is already the newest version (16.2.5-1bionic).
[node02][DEBUG ] The following additional packages will be installed:
[node02][DEBUG ]   ceph-mgr ceph-mgr-modules-core libjs-jquery python-pastedeploy-tpl
[node02][DEBUG ]   python3-bcrypt python3-bs4 python3-cherrypy3 python3-dateutil
[node02][DEBUG ]   python3-distutils python3-jwt python3-lib2to3 python3-logutils python3-mako
[node02][DEBUG ]   python3-markupsafe python3-paste python3-pastedeploy python3-pecan
[node02][DEBUG ]   python3-simplegeneric python3-singledispatch python3-tempita
[node02][DEBUG ]   python3-waitress python3-webob python3-webtest python3-werkzeug
[node02][DEBUG ] Suggested packages:
[node02][DEBUG ]   python3-influxdb python3-crypto python3-beaker python-mako-doc httpd-wsgi
[node02][DEBUG ]   libapache2-mod-python libapache2-mod-scgi libjs-mochikit python-pecan-doc
[node02][DEBUG ]   python-waitress-doc python-webob-doc python-webtest-doc ipython3
[node02][DEBUG ]   python3-lxml python3-termcolor python3-watchdog python-werkzeug-doc
[node02][DEBUG ] Recommended packages:
[node02][DEBUG ]   ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-k8sevents
[node02][DEBUG ]   ceph-mgr-cephadm javascript-common python3-lxml python3-routes
[node02][DEBUG ]   python3-simplejson python3-pastescript python3-pyinotify
[node02][DEBUG ] The following NEW packages will be installed:
[node02][DEBUG ]   ceph ceph-mgr ceph-mgr-modules-core ceph-osd libjs-jquery
[node02][DEBUG ]   python-pastedeploy-tpl python3-bcrypt python3-bs4 python3-cherrypy3
[node02][DEBUG ]   python3-dateutil python3-distutils python3-jwt python3-lib2to3
[node02][DEBUG ]   python3-logutils python3-mako python3-markupsafe python3-paste
[node02][DEBUG ]   python3-pastedeploy python3-pecan python3-simplegeneric
[node02][DEBUG ]   python3-singledispatch python3-tempita python3-waitress python3-webob
[node02][DEBUG ]   python3-webtest python3-werkzeug radosgw
[node02][DEBUG ] 0 upgraded, 27 newly installed, 0 to remove and 156 not upgraded.
[node02][DEBUG ] Need to get 38.7 MB of archives.
[node02][DEBUG ] After this operation, 172 MB of additional disk space will be used.
[node02][DEBUG ] Get:1 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 python3-dateutil all 2.6.1-1 [52.3 kB]
[node02][DEBUG ] Get:2 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic/main amd64 ceph-mgr-modules-core all 16.2.5-1bionic [186 kB]
[node02][DEBUG ] Get:3 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/universe amd64 python3-bcrypt amd64 3.1.4-2 [29.9 kB]
[node02][DEBUG ] Get:4 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/universe amd64 python3-cherrypy3 all 8.9.1-2 [160 kB]
[node02][DEBUG ] Get:5 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates/main amd64 python3-lib2to3 all 3.6.9-1~18.04 [77.4 kB]
[node02][DEBUG ] Get:6 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates/main amd64 python3-distutils all 3.6.9-1~18.04 [144 kB]
[node02][DEBUG ] Get:7 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 python3-jwt all 1.5.3+ds1-1 [15.9 kB]
[node02][DEBUG ] Get:8 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/universe amd64 python3-logutils all 0.3.3-5 [16.7 kB]
[node02][DEBUG ] Get:9 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 python3-markupsafe amd64 1.0-1build1 [13.5 kB]
[node02][DEBUG ] Get:10 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 python3-mako all 1.0.7+ds1-1 [59.3 kB]
[node02][DEBUG ] Get:11 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/universe amd64 python3-simplegeneric all 0.8.1-1 [11.5 kB]
[node02][DEBUG ] Get:12 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/universe amd64 python3-singledispatch all 3.4.0.3-2 [7,022 B]
[node02][DEBUG ] Get:13 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 python3-webob all 1:1.7.3-2fakesync1 [64.3 kB]
[node02][DEBUG ] Get:14 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 python3-bs4 all 4.6.0-1 [67.8 kB]
[node02][DEBUG ] Get:15 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 python3-waitress all 1.0.1-1 [53.4 kB]
[node02][DEBUG ] Get:16 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 python3-tempita all 0.5.2-2 [13.9 kB]
[node02][DEBUG ] Get:17 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 python3-paste all 2.0.3+dfsg-4ubuntu1 [456 kB]
[node02][DEBUG ] Get:18 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 python-pastedeploy-tpl all 1.5.2-4 [4,796 B]
[node02][DEBUG ] Get:19 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 python3-pastedeploy all 1.5.2-4 [13.4 kB]
[node02][DEBUG ] Get:20 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 python3-webtest all 2.0.28-1ubuntu1 [27.9 kB]
[node02][DEBUG ] Get:21 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/universe amd64 python3-pecan all 1.2.1-2 [86.1 kB]
[node02][DEBUG ] Get:22 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 libjs-jquery all 3.2.1-1 [152 kB]
[node02][DEBUG ] Get:23 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates/universe amd64 python3-werkzeug all 0.14.1+dfsg1-1ubuntu0.1 [174 kB]
[node02][DEBUG ] Get:24 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic/main amd64 ceph-mgr amd64 16.2.5-1bionic [1,399 kB]
[node02][DEBUG ] Get:25 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic/main amd64 ceph-osd amd64 16.2.5-1bionic [24.9 MB]
[node02][DEBUG ] Get:26 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic/main amd64 ceph amd64 16.2.5-1bionic [3,876 B]
[node02][DEBUG ] Get:27 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic/main amd64 radosgw amd64 16.2.5-1bionic [10.5 MB]
[node02][DEBUG ] Fetched 38.7 MB in 7s (5,799 kB/s)
[node02][DEBUG ] Selecting previously unselected package python3-dateutil.
(Reading database ... 69866 files and directories currently installed.)
[node02][DEBUG ] Preparing to unpack .../00-python3-dateutil_2.6.1-1_all.deb ...
[node02][DEBUG ] Unpacking python3-dateutil (2.6.1-1) ...
[node02][DEBUG ] Selecting previously unselected package ceph-mgr-modules-core.
[node02][DEBUG ] Preparing to unpack .../01-ceph-mgr-modules-core_16.2.5-1bionic_all.deb ...
[node02][DEBUG ] Unpacking ceph-mgr-modules-core (16.2.5-1bionic) ...
[node02][DEBUG ] Selecting previously unselected package python3-bcrypt.
[node02][DEBUG ] Preparing to unpack .../02-python3-bcrypt_3.1.4-2_amd64.deb ...
[node02][DEBUG ] Unpacking python3-bcrypt (3.1.4-2) ...
[node02][DEBUG ] Selecting previously unselected package python3-cherrypy3.
[node02][DEBUG ] Preparing to unpack .../03-python3-cherrypy3_8.9.1-2_all.deb ...
[node02][DEBUG ] Unpacking python3-cherrypy3 (8.9.1-2) ...
[node02][DEBUG ] Selecting previously unselected package python3-lib2to3.
[node02][DEBUG ] Preparing to unpack .../04-python3-lib2to3_3.6.9-1~18.04_all.deb ...
[node02][DEBUG ] Unpacking python3-lib2to3 (3.6.9-1~18.04) ...
[node02][DEBUG ] Selecting previously unselected package python3-distutils.
[node02][DEBUG ] Preparing to unpack .../05-python3-distutils_3.6.9-1~18.04_all.deb ...
[node02][DEBUG ] Unpacking python3-distutils (3.6.9-1~18.04) ...
[node02][DEBUG ] Selecting previously unselected package python3-jwt.
[node02][DEBUG ] Preparing to unpack .../06-python3-jwt_1.5.3+ds1-1_all.deb ...
[node02][DEBUG ] Unpacking python3-jwt (1.5.3+ds1-1) ...
[node02][DEBUG ] Selecting previously unselected package python3-logutils.
[node02][DEBUG ] Preparing to unpack .../07-python3-logutils_0.3.3-5_all.deb ...
[node02][DEBUG ] Unpacking python3-logutils (0.3.3-5) ...
[node02][DEBUG ] Selecting previously unselected package python3-markupsafe.
[node02][DEBUG ] Preparing to unpack .../08-python3-markupsafe_1.0-1build1_amd64.deb ...
[node02][DEBUG ] Unpacking python3-markupsafe (1.0-1build1) ...
[node02][DEBUG ] Selecting previously unselected package python3-mako.
[node02][DEBUG ] Preparing to unpack .../09-python3-mako_1.0.7+ds1-1_all.deb ...
[node02][DEBUG ] Unpacking python3-mako (1.0.7+ds1-1) ...
[node02][DEBUG ] Selecting previously unselected package python3-simplegeneric.
[node02][DEBUG ] Preparing to unpack .../10-python3-simplegeneric_0.8.1-1_all.deb ...
[node02][DEBUG ] Unpacking python3-simplegeneric (0.8.1-1) ...
[node02][DEBUG ] Selecting previously unselected package python3-singledispatch.
[node02][DEBUG ] Preparing to unpack .../11-python3-singledispatch_3.4.0.3-2_all.deb ...
[node02][DEBUG ] Unpacking python3-singledispatch (3.4.0.3-2) ...
[node02][DEBUG ] Selecting previously unselected package python3-webob.
[node02][DEBUG ] Preparing to unpack .../12-python3-webob_1%3a1.7.3-2fakesync1_all.deb ...
[node02][DEBUG ] Unpacking python3-webob (1:1.7.3-2fakesync1) ...
[node02][DEBUG ] Selecting previously unselected package python3-bs4.
[node02][DEBUG ] Preparing to unpack .../13-python3-bs4_4.6.0-1_all.deb ...
[node02][DEBUG ] Unpacking python3-bs4 (4.6.0-1) ...
[node02][DEBUG ] Selecting previously unselected package python3-waitress.
[node02][DEBUG ] Preparing to unpack .../14-python3-waitress_1.0.1-1_all.deb ...
[node02][DEBUG ] Unpacking python3-waitress (1.0.1-1) ...
[node02][DEBUG ] Selecting previously unselected package python3-tempita.
[node02][DEBUG ] Preparing to unpack .../15-python3-tempita_0.5.2-2_all.deb ...
[node02][DEBUG ] Unpacking python3-tempita (0.5.2-2) ...
[node02][DEBUG ] Selecting previously unselected package python3-paste.
[node02][DEBUG ] Preparing to unpack .../16-python3-paste_2.0.3+dfsg-4ubuntu1_all.deb ...
[node02][DEBUG ] Unpacking python3-paste (2.0.3+dfsg-4ubuntu1) ...
[node02][DEBUG ] Selecting previously unselected package python-pastedeploy-tpl.
[node02][DEBUG ] Preparing to unpack .../17-python-pastedeploy-tpl_1.5.2-4_all.deb ...
[node02][DEBUG ] Unpacking python-pastedeploy-tpl (1.5.2-4) ...
[node02][DEBUG ] Selecting previously unselected package python3-pastedeploy.
[node02][DEBUG ] Preparing to unpack .../18-python3-pastedeploy_1.5.2-4_all.deb ...
[node02][DEBUG ] Unpacking python3-pastedeploy (1.5.2-4) ...
[node02][DEBUG ] Selecting previously unselected package python3-webtest.
[node02][DEBUG ] Preparing to unpack .../19-python3-webtest_2.0.28-1ubuntu1_all.deb ...
[node02][DEBUG ] Unpacking python3-webtest (2.0.28-1ubuntu1) ...
[node02][DEBUG ] Selecting previously unselected package python3-pecan.
[node02][DEBUG ] Preparing to unpack .../20-python3-pecan_1.2.1-2_all.deb ...
[node02][DEBUG ] Unpacking python3-pecan (1.2.1-2) ...
[node02][DEBUG ] Selecting previously unselected package libjs-jquery.
[node02][DEBUG ] Preparing to unpack .../21-libjs-jquery_3.2.1-1_all.deb ...
[node02][DEBUG ] Unpacking libjs-jquery (3.2.1-1) ...
[node02][DEBUG ] Selecting previously unselected package python3-werkzeug.
[node02][DEBUG ] Preparing to unpack .../22-python3-werkzeug_0.14.1+dfsg1-1ubuntu0.1_all.deb ...
[node02][DEBUG ] Unpacking python3-werkzeug (0.14.1+dfsg1-1ubuntu0.1) ...
[node02][DEBUG ] Selecting previously unselected package ceph-mgr.
[node02][DEBUG ] Preparing to unpack .../23-ceph-mgr_16.2.5-1bionic_amd64.deb ...
[node02][DEBUG ] Unpacking ceph-mgr (16.2.5-1bionic) ...
[node02][DEBUG ] Selecting previously unselected package ceph-osd.
[node02][DEBUG ] Preparing to unpack .../24-ceph-osd_16.2.5-1bionic_amd64.deb ...
[node02][DEBUG ] Unpacking ceph-osd (16.2.5-1bionic) ...
[node02][DEBUG ] Selecting previously unselected package ceph.
[node02][DEBUG ] Preparing to unpack .../25-ceph_16.2.5-1bionic_amd64.deb ...
[node02][DEBUG ] Unpacking ceph (16.2.5-1bionic) ...
[node02][DEBUG ] Selecting previously unselected package radosgw.
[node02][DEBUG ] Preparing to unpack .../26-radosgw_16.2.5-1bionic_amd64.deb ...
[node02][DEBUG ] Unpacking radosgw (16.2.5-1bionic) ...
[node02][DEBUG ] Setting up python3-logutils (0.3.3-5) ...
[node02][DEBUG ] Setting up libjs-jquery (3.2.1-1) ...
[node02][DEBUG ] Setting up python3-werkzeug (0.14.1+dfsg1-1ubuntu0.1) ...
[node02][DEBUG ] Setting up ceph-osd (16.2.5-1bionic) ...
[node02][DEBUG ] Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target.
[node02][DEBUG ] Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target.
[node02][DEBUG ] Setting up python3-simplegeneric (0.8.1-1) ...
[node02][DEBUG ] Setting up python3-waitress (1.0.1-1) ...
[node02][DEBUG ] update-alternatives: using /usr/bin/waitress-serve-python3 to provide /usr/bin/waitress-serve (waitress-serve) in auto mode
[node02][DEBUG ] Setting up python3-tempita (0.5.2-2) ...
[node02][DEBUG ] Setting up python3-webob (1:1.7.3-2fakesync1) ...
[node02][DEBUG ] Setting up python3-bcrypt (3.1.4-2) ...
[node02][DEBUG ] Setting up python3-singledispatch (3.4.0.3-2) ...
[node02][DEBUG ] Setting up python3-cherrypy3 (8.9.1-2) ...
[node02][DEBUG ] Setting up python3-bs4 (4.6.0-1) ...
[node02][DEBUG ] Setting up python3-markupsafe (1.0-1build1) ...
[node02][DEBUG ] Setting up python3-paste (2.0.3+dfsg-4ubuntu1) ...
[node02][DEBUG ] Setting up python-pastedeploy-tpl (1.5.2-4) ...
[node02][DEBUG ] Setting up python3-lib2to3 (3.6.9-1~18.04) ...
[node02][DEBUG ] Setting up python3-distutils (3.6.9-1~18.04) ...
[node02][DEBUG ] Setting up python3-jwt (1.5.3+ds1-1) ...
[node02][DEBUG ] Setting up python3-dateutil (2.6.1-1) ...
[node02][DEBUG ] Setting up radosgw (16.2.5-1bionic) ...
[node02][DEBUG ] Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target.
[node02][DEBUG ] Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target.
[node02][DEBUG ] Setting up python3-mako (1.0.7+ds1-1) ...
[node02][DEBUG ] Setting up ceph-mgr-modules-core (16.2.5-1bionic) ...
[node02][DEBUG ] Setting up python3-pastedeploy (1.5.2-4) ...
[node02][DEBUG ] Setting up python3-webtest (2.0.28-1ubuntu1) ...
[node02][DEBUG ] Setting up python3-pecan (1.2.1-2) ...
[node02][DEBUG ] update-alternatives: using /usr/bin/python3-pecan to provide /usr/bin/pecan (pecan) in auto mode
[node02][DEBUG ] update-alternatives: using /usr/bin/python3-gunicorn_pecan to provide /usr/bin/gunicorn_pecan (gunicorn_pecan) in auto mode
[node02][DEBUG ] Setting up ceph-mgr (16.2.5-1bionic) ...
[node02][DEBUG ] Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target.
[node02][DEBUG ] Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target.
[node02][DEBUG ] Setting up ceph (16.2.5-1bionic) ...
[node02][DEBUG ] Processing triggers for systemd (237-3ubuntu10.42) ...
[node02][DEBUG ] Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
[node02][DEBUG ] Processing triggers for ureadahead (0.100.0-21) ...
[node02][DEBUG ] Processing triggers for libc-bin (2.27-3ubuntu1.2) ...
[node02][INFO  ] Running command: ceph --version
[node02][DEBUG ] ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)
[ceph_deploy.install][DEBUG ] Detecting platform for host node03 ...
[node03][DEBUG ] connected to host: node03 
[node03][DEBUG ] detect platform information from remote host
[node03][DEBUG ] detect machine type
[ceph_deploy.install][INFO  ] Distro info: Ubuntu 18.04 bionic
[node03][INFO  ] installing Ceph on node03
[node03][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q update
[node03][DEBUG ] Hit:1 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic InRelease
[node03][DEBUG ] Hit:2 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates InRelease
[node03][DEBUG ] Hit:3 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-backports InRelease
[node03][DEBUG ] Hit:4 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-security InRelease
[node03][DEBUG ] Hit:5 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic InRelease
[node03][DEBUG ] Reading package lists...
[node03][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q --no-install-recommends install ca-certificates apt-transport-https
[node03][DEBUG ] Reading package lists...
[node03][DEBUG ] Building dependency tree...
[node03][DEBUG ] Reading state information...
[node03][DEBUG ] The following NEW packages will be installed:
[node03][DEBUG ]   apt-transport-https
[node03][DEBUG ] The following packages will be upgraded:
[node03][DEBUG ]   ca-certificates
[node03][DEBUG ] 1 upgraded, 1 newly installed, 0 to remove and 156 not upgraded.
[node03][DEBUG ] Need to get 151 kB of archives.
[node03][DEBUG ] After this operation, 153 kB of additional disk space will be used.
[node03][DEBUG ] Get:1 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates/main amd64 ca-certificates all 20210119~18.04.1 [147 kB]
[node03][DEBUG ] Get:2 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates/universe amd64 apt-transport-https all 1.6.14 [4,348 B]
[node03][DEBUG ] Preconfiguring packages ...
[node03][DEBUG ] Fetched 151 kB in 0s (436 kB/s)
(Reading database ... 69860 files and directories currently installed.)
[node03][DEBUG ] Preparing to unpack .../ca-certificates_20210119~18.04.1_all.deb ...
[node03][DEBUG ] Unpacking ca-certificates (20210119~18.04.1) over (20190110~18.04.1) ...
[node03][DEBUG ] Selecting previously unselected package apt-transport-https.
[node03][DEBUG ] Preparing to unpack .../apt-transport-https_1.6.14_all.deb ...
[node03][DEBUG ] Unpacking apt-transport-https (1.6.14) ...
[node03][DEBUG ] Setting up apt-transport-https (1.6.14) ...
[node03][DEBUG ] Setting up ca-certificates (20210119~18.04.1) ...
[node03][DEBUG ] Updating certificates in /etc/ssl/certs...
[node03][DEBUG ] 21 added, 19 removed; done.
[node03][DEBUG ] Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
[node03][DEBUG ] Processing triggers for ca-certificates (20210119~18.04.1) ...
[node03][DEBUG ] Updating certificates in /etc/ssl/certs...
[node03][DEBUG ] 0 added, 0 removed; done.
[node03][DEBUG ] Running hooks in /etc/ca-certificates/update.d...
[node03][DEBUG ] done.
[node03][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q update
[node03][DEBUG ] Hit:1 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic InRelease
[node03][DEBUG ] Hit:2 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates InRelease
[node03][DEBUG ] Hit:3 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-backports InRelease
[node03][DEBUG ] Hit:4 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-security InRelease
[node03][DEBUG ] Hit:5 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic InRelease
[node03][DEBUG ] Reading package lists...
[node03][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q --no-install-recommends install ceph ceph-osd ceph-mds ceph-mon radosgw
[node03][DEBUG ] Reading package lists...
[node03][DEBUG ] Building dependency tree...
[node03][DEBUG ] Reading state information...
[node03][DEBUG ] ceph-mds is already the newest version (16.2.5-1bionic).
[node03][DEBUG ] ceph-mds set to manually installed.
[node03][DEBUG ] ceph-mon is already the newest version (16.2.5-1bionic).
[node03][DEBUG ] The following additional packages will be installed:
[node03][DEBUG ]   ceph-mgr ceph-mgr-modules-core libjs-jquery python-pastedeploy-tpl
[node03][DEBUG ]   python3-bcrypt python3-bs4 python3-cherrypy3 python3-dateutil
[node03][DEBUG ]   python3-distutils python3-jwt python3-lib2to3 python3-logutils python3-mako
[node03][DEBUG ]   python3-markupsafe python3-paste python3-pastedeploy python3-pecan
[node03][DEBUG ]   python3-simplegeneric python3-singledispatch python3-tempita
[node03][DEBUG ]   python3-waitress python3-webob python3-webtest python3-werkzeug
[node03][DEBUG ] Suggested packages:
[node03][DEBUG ]   python3-influxdb python3-crypto python3-beaker python-mako-doc httpd-wsgi
[node03][DEBUG ]   libapache2-mod-python libapache2-mod-scgi libjs-mochikit python-pecan-doc
[node03][DEBUG ]   python-waitress-doc python-webob-doc python-webtest-doc ipython3
[node03][DEBUG ]   python3-lxml python3-termcolor python3-watchdog python-werkzeug-doc
[node03][DEBUG ] Recommended packages:
[node03][DEBUG ]   ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-k8sevents
[node03][DEBUG ]   ceph-mgr-cephadm javascript-common python3-lxml python3-routes
[node03][DEBUG ]   python3-simplejson python3-pastescript python3-pyinotify
[node03][DEBUG ] The following NEW packages will be installed:
[node03][DEBUG ]   ceph ceph-mgr ceph-mgr-modules-core ceph-osd libjs-jquery
[node03][DEBUG ]   python-pastedeploy-tpl python3-bcrypt python3-bs4 python3-cherrypy3
[node03][DEBUG ]   python3-dateutil python3-distutils python3-jwt python3-lib2to3
[node03][DEBUG ]   python3-logutils python3-mako python3-markupsafe python3-paste
[node03][DEBUG ]   python3-pastedeploy python3-pecan python3-simplegeneric
[node03][DEBUG ]   python3-singledispatch python3-tempita python3-waitress python3-webob
[node03][DEBUG ]   python3-webtest python3-werkzeug radosgw
[node03][DEBUG ] 0 upgraded, 27 newly installed, 0 to remove and 156 not upgraded.
[node03][DEBUG ] Need to get 38.7 MB of archives.
[node03][DEBUG ] After this operation, 172 MB of additional disk space will be used.
[node03][DEBUG ] Get:1 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 python3-dateutil all 2.6.1-1 [52.3 kB]
[node03][DEBUG ] Get:2 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic/main amd64 ceph-mgr-modules-core all 16.2.5-1bionic [186 kB]
[node03][DEBUG ] Get:3 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/universe amd64 python3-bcrypt amd64 3.1.4-2 [29.9 kB]
[node03][DEBUG ] Get:4 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/universe amd64 python3-cherrypy3 all 8.9.1-2 [160 kB]
[node03][DEBUG ] Get:5 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates/main amd64 python3-lib2to3 all 3.6.9-1~18.04 [77.4 kB]
[node03][DEBUG ] Get:6 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates/main amd64 python3-distutils all 3.6.9-1~18.04 [144 kB]
[node03][DEBUG ] Get:7 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 python3-jwt all 1.5.3+ds1-1 [15.9 kB]
[node03][DEBUG ] Get:8 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/universe amd64 python3-logutils all 0.3.3-5 [16.7 kB]
[node03][DEBUG ] Get:9 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 python3-markupsafe amd64 1.0-1build1 [13.5 kB]
[node03][DEBUG ] Get:10 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 python3-mako all 1.0.7+ds1-1 [59.3 kB]
[node03][DEBUG ] Get:11 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/universe amd64 python3-simplegeneric all 0.8.1-1 [11.5 kB]
[node03][DEBUG ] Get:12 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/universe amd64 python3-singledispatch all 3.4.0.3-2 [7,022 B]
[node03][DEBUG ] Get:13 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 python3-webob all 1:1.7.3-2fakesync1 [64.3 kB]
[node03][DEBUG ] Get:14 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 python3-bs4 all 4.6.0-1 [67.8 kB]
[node03][DEBUG ] Get:15 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 python3-waitress all 1.0.1-1 [53.4 kB]
[node03][DEBUG ] Get:16 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 python3-tempita all 0.5.2-2 [13.9 kB]
[node03][DEBUG ] Get:17 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 python3-paste all 2.0.3+dfsg-4ubuntu1 [456 kB]
[node03][DEBUG ] Get:18 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 python-pastedeploy-tpl all 1.5.2-4 [4,796 B]
[node03][DEBUG ] Get:19 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 python3-pastedeploy all 1.5.2-4 [13.4 kB]
[node03][DEBUG ] Get:20 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 python3-webtest all 2.0.28-1ubuntu1 [27.9 kB]
[node03][DEBUG ] Get:21 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/universe amd64 python3-pecan all 1.2.1-2 [86.1 kB]
[node03][DEBUG ] Get:22 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/main amd64 libjs-jquery all 3.2.1-1 [152 kB]
[node03][DEBUG ] Get:23 https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic-updates/universe amd64 python3-werkzeug all 0.14.1+dfsg1-1ubuntu0.1 [174 kB]
[node03][DEBUG ] Get:24 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic/main amd64 ceph-mgr amd64 16.2.5-1bionic [1,399 kB]
[node03][DEBUG ] Get:25 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic/main amd64 ceph-osd amd64 16.2.5-1bionic [24.9 MB]
[node03][DEBUG ] Get:26 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic/main amd64 ceph amd64 16.2.5-1bionic [3,876 B]
[node03][DEBUG ] Get:27 https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic/main amd64 radosgw amd64 16.2.5-1bionic [10.5 MB]
[node03][DEBUG ] Fetched 38.7 MB in 8s (4,746 kB/s)
[node03][DEBUG ] Selecting previously unselected package python3-dateutil.
(Reading database ... 69866 files and directories currently installed.)
[node03][DEBUG ] Preparing to unpack .../00-python3-dateutil_2.6.1-1_all.deb ...
[node03][DEBUG ] Unpacking python3-dateutil (2.6.1-1) ...
[node03][DEBUG ] Selecting previously unselected package ceph-mgr-modules-core.
[node03][DEBUG ] Preparing to unpack .../01-ceph-mgr-modules-core_16.2.5-1bionic_all.deb ...
[node03][DEBUG ] Unpacking ceph-mgr-modules-core (16.2.5-1bionic) ...
[node03][DEBUG ] Selecting previously unselected package python3-bcrypt.
[node03][DEBUG ] Preparing to unpack .../02-python3-bcrypt_3.1.4-2_amd64.deb ...
[node03][DEBUG ] Unpacking python3-bcrypt (3.1.4-2) ...
[node03][DEBUG ] Selecting previously unselected package python3-cherrypy3.
[node03][DEBUG ] Preparing to unpack .../03-python3-cherrypy3_8.9.1-2_all.deb ...
[node03][DEBUG ] Unpacking python3-cherrypy3 (8.9.1-2) ...
[node03][DEBUG ] Selecting previously unselected package python3-lib2to3.
[node03][DEBUG ] Preparing to unpack .../04-python3-lib2to3_3.6.9-1~18.04_all.deb ...
[node03][DEBUG ] Unpacking python3-lib2to3 (3.6.9-1~18.04) ...
[node03][DEBUG ] Selecting previously unselected package python3-distutils.
[node03][DEBUG ] Preparing to unpack .../05-python3-distutils_3.6.9-1~18.04_all.deb ...
[node03][DEBUG ] Unpacking python3-distutils (3.6.9-1~18.04) ...
[node03][DEBUG ] Selecting previously unselected package python3-jwt.
[node03][DEBUG ] Preparing to unpack .../06-python3-jwt_1.5.3+ds1-1_all.deb ...
[node03][DEBUG ] Unpacking python3-jwt (1.5.3+ds1-1) ...
[node03][DEBUG ] Selecting previously unselected package python3-logutils.
[node03][DEBUG ] Preparing to unpack .../07-python3-logutils_0.3.3-5_all.deb ...
[node03][DEBUG ] Unpacking python3-logutils (0.3.3-5) ...
[node03][DEBUG ] Selecting previously unselected package python3-markupsafe.
[node03][DEBUG ] Preparing to unpack .../08-python3-markupsafe_1.0-1build1_amd64.deb ...
[node03][DEBUG ] Unpacking python3-markupsafe (1.0-1build1) ...
[node03][DEBUG ] Selecting previously unselected package python3-mako.
[node03][DEBUG ] Preparing to unpack .../09-python3-mako_1.0.7+ds1-1_all.deb ...
[node03][DEBUG ] Unpacking python3-mako (1.0.7+ds1-1) ...
[node03][DEBUG ] Selecting previously unselected package python3-simplegeneric.
[node03][DEBUG ] Preparing to unpack .../10-python3-simplegeneric_0.8.1-1_all.deb ...
[node03][DEBUG ] Unpacking python3-simplegeneric (0.8.1-1) ...
[node03][DEBUG ] Selecting previously unselected package python3-singledispatch.
[node03][DEBUG ] Preparing to unpack .../11-python3-singledispatch_3.4.0.3-2_all.deb ...
[node03][DEBUG ] Unpacking python3-singledispatch (3.4.0.3-2) ...
[node03][DEBUG ] Selecting previously unselected package python3-webob.
[node03][DEBUG ] Preparing to unpack .../12-python3-webob_1%3a1.7.3-2fakesync1_all.deb ...
[node03][DEBUG ] Unpacking python3-webob (1:1.7.3-2fakesync1) ...
[node03][DEBUG ] Selecting previously unselected package python3-bs4.
[node03][DEBUG ] Preparing to unpack .../13-python3-bs4_4.6.0-1_all.deb ...
[node03][DEBUG ] Unpacking python3-bs4 (4.6.0-1) ...
[node03][DEBUG ] Selecting previously unselected package python3-waitress.
[node03][DEBUG ] Preparing to unpack .../14-python3-waitress_1.0.1-1_all.deb ...
[node03][DEBUG ] Unpacking python3-waitress (1.0.1-1) ...
[node03][DEBUG ] Selecting previously unselected package python3-tempita.
[node03][DEBUG ] Preparing to unpack .../15-python3-tempita_0.5.2-2_all.deb ...
[node03][DEBUG ] Unpacking python3-tempita (0.5.2-2) ...
[node03][DEBUG ] Selecting previously unselected package python3-paste.
[node03][DEBUG ] Preparing to unpack .../16-python3-paste_2.0.3+dfsg-4ubuntu1_all.deb ...
[node03][DEBUG ] Unpacking python3-paste (2.0.3+dfsg-4ubuntu1) ...
[node03][DEBUG ] Selecting previously unselected package python-pastedeploy-tpl.
[node03][DEBUG ] Preparing to unpack .../17-python-pastedeploy-tpl_1.5.2-4_all.deb ...
[node03][DEBUG ] Unpacking python-pastedeploy-tpl (1.5.2-4) ...
[node03][DEBUG ] Selecting previously unselected package python3-pastedeploy.
[node03][DEBUG ] Preparing to unpack .../18-python3-pastedeploy_1.5.2-4_all.deb ...
[node03][DEBUG ] Unpacking python3-pastedeploy (1.5.2-4) ...
[node03][DEBUG ] Selecting previously unselected package python3-webtest.
[node03][DEBUG ] Preparing to unpack .../19-python3-webtest_2.0.28-1ubuntu1_all.deb ...
[node03][DEBUG ] Unpacking python3-webtest (2.0.28-1ubuntu1) ...
[node03][DEBUG ] Selecting previously unselected package python3-pecan.
[node03][DEBUG ] Preparing to unpack .../20-python3-pecan_1.2.1-2_all.deb ...
[node03][DEBUG ] Unpacking python3-pecan (1.2.1-2) ...
[node03][DEBUG ] Selecting previously unselected package libjs-jquery.
[node03][DEBUG ] Preparing to unpack .../21-libjs-jquery_3.2.1-1_all.deb ...
[node03][DEBUG ] Unpacking libjs-jquery (3.2.1-1) ...
[node03][DEBUG ] Selecting previously unselected package python3-werkzeug.
[node03][DEBUG ] Preparing to unpack .../22-python3-werkzeug_0.14.1+dfsg1-1ubuntu0.1_all.deb ...
[node03][DEBUG ] Unpacking python3-werkzeug (0.14.1+dfsg1-1ubuntu0.1) ...
[node03][DEBUG ] Selecting previously unselected package ceph-mgr.
[node03][DEBUG ] Preparing to unpack .../23-ceph-mgr_16.2.5-1bionic_amd64.deb ...
[node03][DEBUG ] Unpacking ceph-mgr (16.2.5-1bionic) ...
[node03][DEBUG ] Selecting previously unselected package ceph-osd.
[node03][DEBUG ] Preparing to unpack .../24-ceph-osd_16.2.5-1bionic_amd64.deb ...
[node03][DEBUG ] Unpacking ceph-osd (16.2.5-1bionic) ...
[node03][DEBUG ] Selecting previously unselected package ceph.
[node03][DEBUG ] Preparing to unpack .../25-ceph_16.2.5-1bionic_amd64.deb ...
[node03][DEBUG ] Unpacking ceph (16.2.5-1bionic) ...
[node03][DEBUG ] Selecting previously unselected package radosgw.
[node03][DEBUG ] Preparing to unpack .../26-radosgw_16.2.5-1bionic_amd64.deb ...
[node03][DEBUG ] Unpacking radosgw (16.2.5-1bionic) ...
[node03][DEBUG ] Setting up python3-logutils (0.3.3-5) ...
[node03][DEBUG ] Setting up libjs-jquery (3.2.1-1) ...
[node03][DEBUG ] Setting up python3-werkzeug (0.14.1+dfsg1-1ubuntu0.1) ...
[node03][DEBUG ] Setting up ceph-osd (16.2.5-1bionic) ...
[node03][DEBUG ] Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target.
[node03][DEBUG ] Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target.
[node03][DEBUG ] Setting up python3-simplegeneric (0.8.1-1) ...
[node03][DEBUG ] Setting up python3-waitress (1.0.1-1) ...
[node03][DEBUG ] update-alternatives: using /usr/bin/waitress-serve-python3 to provide /usr/bin/waitress-serve (waitress-serve) in auto mode
[node03][DEBUG ] Setting up python3-tempita (0.5.2-2) ...
[node03][DEBUG ] Setting up python3-webob (1:1.7.3-2fakesync1) ...
[node03][DEBUG ] Setting up python3-bcrypt (3.1.4-2) ...
[node03][DEBUG ] Setting up python3-singledispatch (3.4.0.3-2) ...
[node03][DEBUG ] Setting up python3-cherrypy3 (8.9.1-2) ...
[node03][DEBUG ] Setting up python3-bs4 (4.6.0-1) ...
[node03][DEBUG ] Setting up python3-markupsafe (1.0-1build1) ...
[node03][DEBUG ] Setting up python3-paste (2.0.3+dfsg-4ubuntu1) ...
[node03][DEBUG ] Setting up python-pastedeploy-tpl (1.5.2-4) ...
[node03][DEBUG ] Setting up python3-lib2to3 (3.6.9-1~18.04) ...
[node03][DEBUG ] Setting up python3-distutils (3.6.9-1~18.04) ...
[node03][DEBUG ] Setting up python3-jwt (1.5.3+ds1-1) ...
[node03][DEBUG ] Setting up python3-dateutil (2.6.1-1) ...
[node03][DEBUG ] Setting up radosgw (16.2.5-1bionic) ...
[node03][DEBUG ] Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target.
[node03][DEBUG ] Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target.
[node03][DEBUG ] Setting up python3-mako (1.0.7+ds1-1) ...
[node03][DEBUG ] Setting up ceph-mgr-modules-core (16.2.5-1bionic) ...
[node03][DEBUG ] Setting up python3-pastedeploy (1.5.2-4) ...
[node03][DEBUG ] Setting up python3-webtest (2.0.28-1ubuntu1) ...
[node03][DEBUG ] Setting up python3-pecan (1.2.1-2) ...
[node03][DEBUG ] update-alternatives: using /usr/bin/python3-pecan to provide /usr/bin/pecan (pecan) in auto mode
[node03][DEBUG ] update-alternatives: using /usr/bin/python3-gunicorn_pecan to provide /usr/bin/gunicorn_pecan (gunicorn_pecan) in auto mode
[node03][DEBUG ] Setting up ceph-mgr (16.2.5-1bionic) ...
[node03][DEBUG ] Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target.
[node03][DEBUG ] Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target.
[node03][DEBUG ] Setting up ceph (16.2.5-1bionic) ...
[node03][DEBUG ] Processing triggers for systemd (237-3ubuntu10.42) ...
[node03][DEBUG ] Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
[node03][DEBUG ] Processing triggers for ureadahead (0.100.0-21) ...
[node03][DEBUG ] Processing triggers for libc-bin (2.27-3ubuntu1.2) ...
[node03][INFO  ] Running command: ceph --version
[node03][DEBUG ] ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)
root@node01:~/cephCluster# 

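ceph-deploy ends each node's install by running `ceph --version`, as seen above; when installing across several nodes it is worth confirming they all report the same release. A minimal sketch of parsing that version line (pure string handling; the sample string is copied verbatim from the node03 output above):

```python
import re

def parse_ceph_version(line):
    """Split a `ceph --version` line into (version, commit sha, release name)."""
    m = re.match(r"ceph version (\S+) \(([0-9a-f]+)\) (\w+) \(stable\)", line)
    if not m:
        raise ValueError("unrecognized version line: %r" % line)
    return m.groups()

# Sample taken from the node03 log output above.
sample = "ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific (stable)"
version, sha, release = parse_ceph_version(sample)
print(version, release)   # 16.2.5 pacific
```

Running this check against the version line collected from every node (e.g. over ssh) catches a mixed-version install before the cluster is bootstrapped.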
6. Initialize the monitor, sync the configuration files, and check the cluster status, as follows:

root@node01:~/ceph-deploy# ceph-deploy mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mon create-initial
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create-initial
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f251070be60>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mon at 0x7f25106eb550>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  keyrings                      : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts node01
[ceph_deploy.mon][DEBUG ] detecting platform for host node01 ...
[node01][DEBUG ] connected to host: node01 
[node01][DEBUG ] detect platform information from remote host
[node01][DEBUG ] detect machine type
[node01][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: Ubuntu 20.04 focal
[node01][DEBUG ] determining if provided host has same hostname in remote
[node01][DEBUG ] get remote short hostname
[node01][DEBUG ] deploying mon to node01
[node01][DEBUG ] get remote short hostname
[node01][DEBUG ] remote hostname: node01
[node01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node01][DEBUG ] create the mon path if it does not exist
[node01][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-node01/done
[node01][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-node01/done
[node01][INFO  ] creating keyring file: /var/lib/ceph/tmp/ceph-node01.mon.keyring
[node01][DEBUG ] create the monitor keyring file
[node01][INFO  ] Running command: ceph-mon --cluster ceph --mkfs -i node01 --keyring /var/lib/ceph/tmp/ceph-node01.mon.keyring --setuser 64045 --setgroup 64045
[node01][INFO  ] unlinking keyring file /var/lib/ceph/tmp/ceph-node01.mon.keyring
[node01][DEBUG ] create a done file to avoid re-doing the mon deployment
[node01][DEBUG ] create the init path if it does not exist
[node01][INFO  ] Running command: systemctl enable ceph.target
[node01][INFO  ] Running command: systemctl enable ceph-mon@node01
[node01][WARNIN] Created symlink /etc/systemd/system/ceph-mon.target.wants/ceph-mon@node01.service → /lib/systemd/system/ceph-mon@.service.
[node01][INFO  ] Running command: systemctl start ceph-mon@node01
[node01][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node01.asok mon_status
[node01][DEBUG ] ********************************************************************************
[node01][DEBUG ] status for monitor: mon.node01
[node01][DEBUG ] {
[node01][DEBUG ]   "election_epoch": 3, 
[node01][DEBUG ]   "extra_probe_peers": [], 
[node01][DEBUG ]   "feature_map": {
[node01][DEBUG ]     "mon": [
[node01][DEBUG ]       {
[node01][DEBUG ]         "features": "0x3f01cfb9fffdffff", 
[node01][DEBUG ]         "num": 1, 
[node01][DEBUG ]         "release": "luminous"
[node01][DEBUG ]       }
[node01][DEBUG ]     ]
[node01][DEBUG ]   }, 
[node01][DEBUG ]   "features": {
[node01][DEBUG ]     "quorum_con": "4540138297136906239", 
[node01][DEBUG ]     "quorum_mon": [
[node01][DEBUG ]       "kraken", 
[node01][DEBUG ]       "luminous", 
[node01][DEBUG ]       "mimic", 
[node01][DEBUG ]       "osdmap-prune", 
[node01][DEBUG ]       "nautilus", 
[node01][DEBUG ]       "octopus", 
[node01][DEBUG ]       "pacific", 
[node01][DEBUG ]       "elector-pinging"
[node01][DEBUG ]     ], 
[node01][DEBUG ]     "required_con": "2449958747317026820", 
[node01][DEBUG ]     "required_mon": [
[node01][DEBUG ]       "kraken", 
[node01][DEBUG ]       "luminous", 
[node01][DEBUG ]       "mimic", 
[node01][DEBUG ]       "osdmap-prune", 
[node01][DEBUG ]       "nautilus", 
[node01][DEBUG ]       "octopus", 
[node01][DEBUG ]       "pacific", 
[node01][DEBUG ]       "elector-pinging"
[node01][DEBUG ]     ]
[node01][DEBUG ]   }, 
[node01][DEBUG ]   "monmap": {
[node01][DEBUG ]     "created": "2021-08-17T12:40:35.960075Z", 
[node01][DEBUG ]     "disallowed_leaders: ": "", 
[node01][DEBUG ]     "election_strategy": 1, 
[node01][DEBUG ]     "epoch": 1, 
[node01][DEBUG ]     "features": {
[node01][DEBUG ]       "optional": [], 
[node01][DEBUG ]       "persistent": [
[node01][DEBUG ]         "kraken", 
[node01][DEBUG ]         "luminous", 
[node01][DEBUG ]         "mimic", 
[node01][DEBUG ]         "osdmap-prune", 
[node01][DEBUG ]         "nautilus", 
[node01][DEBUG ]         "octopus", 
[node01][DEBUG ]         "pacific", 
[node01][DEBUG ]         "elector-pinging"
[node01][DEBUG ]       ]
[node01][DEBUG ]     }, 
[node01][DEBUG ]     "fsid": "9138c3cf-f529-4be6-ba84-97fcab59844b", 
[node01][DEBUG ]     "min_mon_release": 16, 
[node01][DEBUG ]     "min_mon_release_name": "pacific", 
[node01][DEBUG ]     "modified": "2021-08-17T12:40:35.960075Z", 
[node01][DEBUG ]     "mons": [
[node01][DEBUG ]       {
[node01][DEBUG ]         "addr": "192.168.11.210:6789/0", 
[node01][DEBUG ]         "crush_location": "{}", 
[node01][DEBUG ]         "name": "node01", 
[node01][DEBUG ]         "priority": 0, 
[node01][DEBUG ]         "public_addr": "192.168.11.210:6789/0", 
[node01][DEBUG ]         "public_addrs": {
[node01][DEBUG ]           "addrvec": [
[node01][DEBUG ]             {
[node01][DEBUG ]               "addr": "192.168.11.210:3300", 
[node01][DEBUG ]               "nonce": 0, 
[node01][DEBUG ]               "type": "v2"
[node01][DEBUG ]             }, 
[node01][DEBUG ]             {
[node01][DEBUG ]               "addr": "192.168.11.210:6789", 
[node01][DEBUG ]               "nonce": 0, 
[node01][DEBUG ]               "type": "v1"
[node01][DEBUG ]             }
[node01][DEBUG ]           ]
[node01][DEBUG ]         }, 
[node01][DEBUG ]         "rank": 0, 
[node01][DEBUG ]         "weight": 0
[node01][DEBUG ]       }
[node01][DEBUG ]     ], 
[node01][DEBUG ]     "stretch_mode": false
[node01][DEBUG ]   }, 
[node01][DEBUG ]   "name": "node01", 
[node01][DEBUG ]   "outside_quorum": [], 
[node01][DEBUG ]   "quorum": [
[node01][DEBUG ]     0
[node01][DEBUG ]   ], 
[node01][DEBUG ]   "quorum_age": 2, 
[node01][DEBUG ]   "rank": 0, 
[node01][DEBUG ]   "state": "leader", 
[node01][DEBUG ]   "stretch_mode": false, 
[node01][DEBUG ]   "sync_provider": []
[node01][DEBUG ] }
[node01][DEBUG ] ********************************************************************************
[node01][INFO  ] monitor: mon.node01 is running
[node01][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node01.asok mon_status
[ceph_deploy.mon][INFO  ] processing monitor mon.node01
[node01][DEBUG ] connected to host: node01 
[node01][DEBUG ] detect platform information from remote host
[node01][DEBUG ] detect machine type
[node01][DEBUG ] find the location of an executable
[node01][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node01.asok mon_status
[ceph_deploy.mon][INFO  ] mon.node01 monitor has reached quorum!
[ceph_deploy.mon][INFO  ] all initial monitors are running and have formed quorum
[ceph_deploy.mon][INFO  ] Running gatherkeys...
[ceph_deploy.gatherkeys][INFO  ] Storing keys in temp directory /tmp/tmp_zteqm
[node01][DEBUG ] connected to host: node01 
[node01][DEBUG ] detect platform information from remote host
[node01][DEBUG ] detect machine type
[node01][DEBUG ] get remote short hostname
[node01][DEBUG ] fetch remote file
[node01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.node01.asok mon_status
[node01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-node01/keyring auth get client.admin
[node01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-node01/keyring auth get client.bootstrap-mds
[node01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-node01/keyring auth get client.bootstrap-mgr
[node01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-node01/keyring auth get client.bootstrap-osd
[node01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-node01/keyring auth get client.bootstrap-rgw
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mgr.keyring
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO  ] Destroy temp directory /tmp/tmp_zteqm
root@node01:~/ceph-deploy# 
root@node01:~/ceph-deploy# ceph-deploy admin node01 node02 node03
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy admin node01 node02 node03
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7efd7a00df50>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['node01', 'node02', 'node03']
[ceph_deploy.cli][INFO  ]  func                          : <function admin at 0x7efd7a0e64d0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to node01
[node01][DEBUG ] connected to host: node01 
[node01][DEBUG ] detect platform information from remote host
[node01][DEBUG ] detect machine type
[node01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to node02
[node02][DEBUG ] connected to host: node02 
[node02][DEBUG ] detect platform information from remote host
[node02][DEBUG ] detect machine type
[node02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to node03
[node03][DEBUG ] connected to host: node03 
[node03][DEBUG ] detect platform information from remote host
[node03][DEBUG ] detect machine type
[node03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
root@node01:~/ceph-deploy# 
root@node01:~/ceph-deploy# ls /etc/ceph/
ceph.client.admin.keyring  ceph.conf  rbdmap  tmpOH7r8T
root@node01:~/ceph-deploy# 
root@node01:~/ceph-deploy# ceph -s
  cluster:
    id:     9138c3cf-f529-4be6-ba84-97fcab59844b
    health: HEALTH_WARN
            mon is allowing insecure global_id reclaim
 
  services:
    mon: 1 daemons, quorum node01 (age 7m)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     
 
root@node01:~/ceph-deploy# 
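The HEALTH_WARN above ("mon is allowing insecure global_id reclaim") is expected on a fresh Pacific cluster; it can be silenced once all clients are known to be patched for CVE-2021-20288. A hedged sketch — the command is printed rather than executed, drop the echo to apply it:

```shell
# Print the command that disables insecure global_id reclaim.
# Only apply it after confirming all clients are patched (CVE-2021-20288),
# otherwise older clients will be unable to authenticate.
echo "ceph config set mon auth_allow_insecure_global_id_reclaim false"
```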

7. Install and configure mgr high availability (one active, two standbys). On every mgr node, install the package with apt install ceph-mgr, then run the following on the ceph-deploy node:
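The per-node install can be scripted from the deploy host; a minimal sketch, assuming passwordless root ssh to node01..node03 as configured earlier (the commands are only printed here — drop the echo to execute them):

```shell
# Sketch: print the per-node ceph-mgr install command.
# Node names follow the node01..node03 layout used throughout this walkthrough;
# remove "echo" to actually run the installs over ssh.
for node in node01 node02 node03; do
  echo "ssh ${node} apt -y install ceph-mgr"
done
```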

root@node01:~/cephCluster# apt install ceph-mgr 
Reading package lists... Done
Building dependency tree       
Reading state information... Done
ceph-mgr is already the newest version (16.2.5-1bionic).
ceph-mgr set to manually installed.
0 upgraded, 0 newly installed, 0 to remove and 156 not upgraded.
root@node01:~/cephCluster# ceph-deploy mgr create node01 node02 node03
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mgr create node01 node02 node03
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  mgr                           : [('node01', 'node01'), ('node02', 'node02'), ('node03', 'node03')]
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f934f223c80>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mgr at 0x7f934f683150>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts node01:node01 node02:node02 node03:node03
[node01][DEBUG ] connected to host: node01 
[node01][DEBUG ] detect platform information from remote host
[node01][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO  ] Distro info: Ubuntu 18.04 bionic
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to node01
[node01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node01][WARNIN] mgr keyring does not exist yet, creating one
[node01][DEBUG ] create a keyring file
[node01][DEBUG ] create path recursively if it doesn't exist
[node01][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.node01 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-node01/keyring
[node01][INFO  ] Running command: systemctl enable ceph-mgr@node01
[node01][WARNIN] Created symlink /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@node01.service → /lib/systemd/system/ceph-mgr@.service.
[node01][INFO  ] Running command: systemctl start ceph-mgr@node01
[node01][INFO  ] Running command: systemctl enable ceph.target
[node02][DEBUG ] connected to host: node02 
[node02][DEBUG ] detect platform information from remote host
[node02][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO  ] Distro info: Ubuntu 18.04 bionic
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to node02
[node02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node02][WARNIN] mgr keyring does not exist yet, creating one
[node02][DEBUG ] create a keyring file
[node02][DEBUG ] create path recursively if it doesn't exist
[node02][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.node02 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-node02/keyring
[node02][INFO  ] Running command: systemctl enable ceph-mgr@node02
[node02][WARNIN] Created symlink /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@node02.service → /lib/systemd/system/ceph-mgr@.service.
[node02][INFO  ] Running command: systemctl start ceph-mgr@node02
[node02][INFO  ] Running command: systemctl enable ceph.target
[node03][DEBUG ] connected to host: node03 
[node03][DEBUG ] detect platform information from remote host
[node03][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO  ] Distro info: Ubuntu 18.04 bionic
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to node03
[node03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node03][WARNIN] mgr keyring does not exist yet, creating one
[node03][DEBUG ] create a keyring file
[node03][DEBUG ] create path recursively if it doesn't exist
[node03][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.node03 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-node03/keyring
[node03][INFO  ] Running command: systemctl enable ceph-mgr@node03
[node03][WARNIN] Created symlink /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@node03.service → /lib/systemd/system/ceph-mgr@.service.
[node03][INFO  ] Running command: systemctl start ceph-mgr@node03
[node03][INFO  ] Running command: systemctl enable ceph.target
root@node01:~/cephCluster# ceph -s
  cluster:
    id:     e0f0ae6f-ee6c-4f8c-ba19-939bddaa3ee3
    health: HEALTH_OK
 
  services:
    mon: 1 daemons, quorum node01 (age 27m)
    mgr: node01(active, since 4s), standbys: node02, node03
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     
 
root@node01:~/cephCluster# 

8. Configure mon high availability (make node02 and node03 mon nodes as well) and check the cluster status:
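The two mon-add invocations in the log below can be scripted; a minimal sketch that only prints the commands (drop the echo to run them on the deploy node; the addresses are the ones used in this environment):

```shell
# Sketch: print the ceph-deploy commands that add the two new monitors.
# Each spec is "hostname address"; "set --" splits it into $1 and $2.
for spec in "node02 192.168.11.220" "node03 192.168.11.230"; do
  set -- $spec
  echo "ceph-deploy mon add $1 --address $2"
done
```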

root@node01:~/cephCluster# ceph-deploy mon add node02 --address 192.168.11.220
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mon add node02 --address 192.168.11.220
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : add
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fe33d9750f0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  mon                           : ['node02']
[ceph_deploy.cli][INFO  ]  func                          : <function mon at 0x7fe33d952ad0>
[ceph_deploy.cli][INFO  ]  address                       : 192.168.11.220
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mon][INFO  ] ensuring configuration of new mon host: node02
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to node02
[node02][DEBUG ] connected to host: node02 
[node02][DEBUG ] detect platform information from remote host
[node02][DEBUG ] detect machine type
[node02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.mon][DEBUG ] Adding mon to cluster ceph, host node02
[ceph_deploy.mon][DEBUG ] using mon address via --address 192.168.11.220
[ceph_deploy.mon][DEBUG ] detecting platform for host node02 ...
[node02][DEBUG ] connected to host: node02 
[node02][DEBUG ] detect platform information from remote host
[node02][DEBUG ] detect machine type
[node02][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: Ubuntu 18.04 bionic
[node02][DEBUG ] determining if provided host has same hostname in remote
[node02][DEBUG ] get remote short hostname
[node02][DEBUG ] adding mon to node02
[node02][DEBUG ] get remote short hostname
[node02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node02][DEBUG ] create the mon path if it does not exist
[node02][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-node02/done
[node02][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-node02/done
[node02][INFO  ] creating keyring file: /var/lib/ceph/tmp/ceph-node02.mon.keyring
[node02][DEBUG ] create the monitor keyring file
[node02][INFO  ] Running command: ceph --cluster ceph mon getmap -o /var/lib/ceph/tmp/ceph.node02.monmap
[node02][WARNIN] got monmap epoch 1
[node02][INFO  ] Running command: ceph-mon --cluster ceph --mkfs -i node02 --monmap /var/lib/ceph/tmp/ceph.node02.monmap --keyring /var/lib/ceph/tmp/ceph-node02.mon.keyring --setuser 64045 --setgroup 64045
[node02][INFO  ] unlinking keyring file /var/lib/ceph/tmp/ceph-node02.mon.keyring
[node02][DEBUG ] create a done file to avoid re-doing the mon deployment
[node02][DEBUG ] create the init path if it does not exist
[node02][INFO  ] Running command: systemctl enable ceph.target
[node02][INFO  ] Running command: systemctl enable ceph-mon@node02
[node02][WARNIN] Created symlink /etc/systemd/system/ceph-mon.target.wants/ceph-mon@node02.service → /lib/systemd/system/ceph-mon@.service.
[node02][INFO  ] Running command: systemctl start ceph-mon@node02
[node02][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node02.asok mon_status
[node02][WARNIN] node02 is not defined in `mon initial members`
[node02][WARNIN] monitor node02 does not exist in monmap
[node02][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node02.asok mon_status
[node02][DEBUG ] ********************************************************************************
[node02][DEBUG ] status for monitor: mon.node02
[node02][DEBUG ] {
[node02][DEBUG ]   "election_epoch": 0, 
[node02][DEBUG ]   "extra_probe_peers": [], 
[node02][DEBUG ]   "feature_map": {
[node02][DEBUG ]     "mon": [
[node02][DEBUG ]       {
[node02][DEBUG ]         "features": "0x3f01cfb9fffdffff", 
[node02][DEBUG ]         "num": 1, 
[node02][DEBUG ]         "release": "luminous"
[node02][DEBUG ]       }
[node02][DEBUG ]     ]
[node02][DEBUG ]   }, 
[node02][DEBUG ]   "features": {
[node02][DEBUG ]     "quorum_con": "0", 
[node02][DEBUG ]     "quorum_mon": [], 
[node02][DEBUG ]     "required_con": "2449958197560098820", 
[node02][DEBUG ]     "required_mon": [
[node02][DEBUG ]       "kraken", 
[node02][DEBUG ]       "luminous", 
[node02][DEBUG ]       "mimic", 
[node02][DEBUG ]       "osdmap-prune", 
[node02][DEBUG ]       "nautilus", 
[node02][DEBUG ]       "octopus", 
[node02][DEBUG ]       "pacific", 
[node02][DEBUG ]       "elector-pinging"
[node02][DEBUG ]     ]
[node02][DEBUG ]   }, 
[node02][DEBUG ]   "monmap": {
[node02][DEBUG ]     "created": "2021-08-16T03:39:31.722967Z", 
[node02][DEBUG ]     "disallowed_leaders: ": "", 
[node02][DEBUG ]     "election_strategy": 1, 
[node02][DEBUG ]     "epoch": 1, 
[node02][DEBUG ]     "features": {
[node02][DEBUG ]       "optional": [], 
[node02][DEBUG ]       "persistent": [
[node02][DEBUG ]         "kraken", 
[node02][DEBUG ]         "luminous", 
[node02][DEBUG ]         "mimic", 
[node02][DEBUG ]         "osdmap-prune", 
[node02][DEBUG ]         "nautilus", 
[node02][DEBUG ]         "octopus", 
[node02][DEBUG ]         "pacific", 
[node02][DEBUG ]         "elector-pinging"
[node02][DEBUG ]       ]
[node02][DEBUG ]     }, 
[node02][DEBUG ]     "fsid": "e0f0ae6f-ee6c-4f8c-ba19-939bddaa3ee3", 
[node02][DEBUG ]     "min_mon_release": 16, 
[node02][DEBUG ]     "min_mon_release_name": "pacific", 
[node02][DEBUG ]     "modified": "2021-08-16T03:39:31.722967Z", 
[node02][DEBUG ]     "mons": [
[node02][DEBUG ]       {
[node02][DEBUG ]         "addr": "192.168.11.210:6789/0", 
[node02][DEBUG ]         "crush_location": "{}", 
[node02][DEBUG ]         "name": "node01", 
[node02][DEBUG ]         "priority": 0, 
[node02][DEBUG ]         "public_addr": "192.168.11.210:6789/0", 
[node02][DEBUG ]         "public_addrs": {
[node02][DEBUG ]           "addrvec": [
[node02][DEBUG ]             {
[node02][DEBUG ]               "addr": "192.168.11.210:3300", 
[node02][DEBUG ]               "nonce": 0, 
[node02][DEBUG ]               "type": "v2"
[node02][DEBUG ]             }, 
[node02][DEBUG ]             {
[node02][DEBUG ]               "addr": "192.168.11.210:6789", 
[node02][DEBUG ]               "nonce": 0, 
[node02][DEBUG ]               "type": "v1"
[node02][DEBUG ]             }
[node02][DEBUG ]           ]
[node02][DEBUG ]         }, 
[node02][DEBUG ]         "rank": 0, 
[node02][DEBUG ]         "weight": 0
[node02][DEBUG ]       }
[node02][DEBUG ]     ], 
[node02][DEBUG ]     "stretch_mode": false
[node02][DEBUG ]   }, 
[node02][DEBUG ]   "name": "node02", 
[node02][DEBUG ]   "outside_quorum": [], 
[node02][DEBUG ]   "quorum": [], 
[node02][DEBUG ]   "rank": -1, 
[node02][DEBUG ]   "state": "probing", 
[node02][DEBUG ]   "stretch_mode": false, 
[node02][DEBUG ]   "sync_provider": []
[node02][DEBUG ] }
[node02][DEBUG ] ********************************************************************************
[node02][INFO  ] monitor: mon.node02 is currently at the state of probing
root@node01:~/cephCluster# ceph-deploy mon add node03 --address 192.168.11.230
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mon add node03 --address 192.168.11.230
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : add
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f682323a0f0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  mon                           : ['node03']
[ceph_deploy.cli][INFO  ]  func                          : <function mon at 0x7f6823217ad0>
[ceph_deploy.cli][INFO  ]  address                       : 192.168.11.230
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mon][INFO  ] ensuring configuration of new mon host: node03
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to node03
[node03][DEBUG ] connected to host: node03 
[node03][DEBUG ] detect platform information from remote host
[node03][DEBUG ] detect machine type
[node03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.mon][DEBUG ] Adding mon to cluster ceph, host node03
[ceph_deploy.mon][DEBUG ] using mon address via --address 192.168.11.230
[ceph_deploy.mon][DEBUG ] detecting platform for host node03 ...
[node03][DEBUG ] connected to host: node03 
[node03][DEBUG ] detect platform information from remote host
[node03][DEBUG ] detect machine type
[node03][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: Ubuntu 18.04 bionic
[node03][DEBUG ] determining if provided host has same hostname in remote
[node03][DEBUG ] get remote short hostname
[node03][DEBUG ] adding mon to node03
[node03][DEBUG ] get remote short hostname
[node03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node03][DEBUG ] create the mon path if it does not exist
[node03][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-node03/done
[node03][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-node03/done
[node03][INFO  ] creating keyring file: /var/lib/ceph/tmp/ceph-node03.mon.keyring
[node03][DEBUG ] create the monitor keyring file
[node03][INFO  ] Running command: ceph --cluster ceph mon getmap -o /var/lib/ceph/tmp/ceph.node03.monmap
[node03][WARNIN] got monmap epoch 2
[node03][INFO  ] Running command: ceph-mon --cluster ceph --mkfs -i node03 --monmap /var/lib/ceph/tmp/ceph.node03.monmap --keyring /var/lib/ceph/tmp/ceph-node03.mon.keyring --setuser 64045 --setgroup 64045
[node03][INFO  ] unlinking keyring file /var/lib/ceph/tmp/ceph-node03.mon.keyring
[node03][DEBUG ] create a done file to avoid re-doing the mon deployment
[node03][DEBUG ] create the init path if it does not exist
[node03][INFO  ] Running command: systemctl enable ceph.target
[node03][INFO  ] Running command: systemctl enable ceph-mon@node03
[node03][WARNIN] Created symlink /etc/systemd/system/ceph-mon.target.wants/ceph-mon@node03.service → /lib/systemd/system/ceph-mon@.service.
[node03][INFO  ] Running command: systemctl start ceph-mon@node03
[node03][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node03.asok mon_status
[node03][WARNIN] node03 is not defined in `mon initial members`
[node03][WARNIN] monitor node03 does not exist in monmap
[node03][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node03.asok mon_status
[node03][DEBUG ] ********************************************************************************
[node03][DEBUG ] status for monitor: mon.node03
[node03][DEBUG ] {
[node03][DEBUG ]   "election_epoch": 0, 
[node03][DEBUG ]   "extra_probe_peers": [
[node03][DEBUG ]     {
[node03][DEBUG ]       "addrvec": [
[node03][DEBUG ]         {
[node03][DEBUG ]           "addr": "192.168.11.220:3300", 
[node03][DEBUG ]           "nonce": 0, 
[node03][DEBUG ]           "type": "v2"
[node03][DEBUG ]         }, 
[node03][DEBUG ]         {
[node03][DEBUG ]           "addr": "192.168.11.220:6789", 
[node03][DEBUG ]           "nonce": 0, 
[node03][DEBUG ]           "type": "v1"
[node03][DEBUG ]         }
[node03][DEBUG ]       ]
[node03][DEBUG ]     }
[node03][DEBUG ]   ], 
[node03][DEBUG ]   "feature_map": {
[node03][DEBUG ]     "mon": [
[node03][DEBUG ]       {
[node03][DEBUG ]         "features": "0x3f01cfb9fffdffff", 
[node03][DEBUG ]         "num": 1, 
[node03][DEBUG ]         "release": "luminous"
[node03][DEBUG ]       }
[node03][DEBUG ]     ]
[node03][DEBUG ]   }, 
[node03][DEBUG ]   "features": {
[node03][DEBUG ]     "quorum_con": "0", 
[node03][DEBUG ]     "quorum_mon": [], 
[node03][DEBUG ]     "required_con": "2449958197560098820", 
[node03][DEBUG ]     "required_mon": [
[node03][DEBUG ]       "kraken", 
[node03][DEBUG ]       "luminous", 
[node03][DEBUG ]       "mimic", 
[node03][DEBUG ]       "osdmap-prune", 
[node03][DEBUG ]       "nautilus", 
[node03][DEBUG ]       "octopus", 
[node03][DEBUG ]       "pacific", 
[node03][DEBUG ]       "elector-pinging"
[node03][DEBUG ]     ]
[node03][DEBUG ]   }, 
[node03][DEBUG ]   "monmap": {
[node03][DEBUG ]     "created": "2021-08-16T03:39:31.722967Z", 
[node03][DEBUG ]     "disallowed_leaders: ": "", 
[node03][DEBUG ]     "election_strategy": 1, 
[node03][DEBUG ]     "epoch": 2, 
[node03][DEBUG ]     "features": {
[node03][DEBUG ]       "optional": [], 
[node03][DEBUG ]       "persistent": [
[node03][DEBUG ]         "kraken", 
[node03][DEBUG ]         "luminous", 
[node03][DEBUG ]         "mimic", 
[node03][DEBUG ]         "osdmap-prune", 
[node03][DEBUG ]         "nautilus", 
[node03][DEBUG ]         "octopus", 
[node03][DEBUG ]         "pacific", 
[node03][DEBUG ]         "elector-pinging"
[node03][DEBUG ]       ]
[node03][DEBUG ]     }, 
[node03][DEBUG ]     "fsid": "e0f0ae6f-ee6c-4f8c-ba19-939bddaa3ee3", 
[node03][DEBUG ]     "min_mon_release": 16, 
[node03][DEBUG ]     "min_mon_release_name": "pacific", 
[node03][DEBUG ]     "modified": "2021-08-16T04:11:42.722901Z", 
[node03][DEBUG ]     "mons": [
[node03][DEBUG ]       {
[node03][DEBUG ]         "addr": "192.168.11.210:6789/0", 
[node03][DEBUG ]         "crush_location": "{}", 
[node03][DEBUG ]         "name": "node01", 
[node03][DEBUG ]         "priority": 0, 
[node03][DEBUG ]         "public_addr": "192.168.11.210:6789/0", 
[node03][DEBUG ]         "public_addrs": {
[node03][DEBUG ]           "addrvec": [
[node03][DEBUG ]             {
[node03][DEBUG ]               "addr": "192.168.11.210:3300", 
[node03][DEBUG ]               "nonce": 0, 
[node03][DEBUG ]               "type": "v2"
[node03][DEBUG ]             }, 
[node03][DEBUG ]             {
[node03][DEBUG ]               "addr": "192.168.11.210:6789", 
[node03][DEBUG ]               "nonce": 0, 
[node03][DEBUG ]               "type": "v1"
[node03][DEBUG ]             }
[node03][DEBUG ]           ]
[node03][DEBUG ]         }, 
[node03][DEBUG ]         "rank": 0, 
[node03][DEBUG ]         "weight": 0
[node03][DEBUG ]       }, 
[node03][DEBUG ]       {
[node03][DEBUG ]         "addr": "192.168.11.220:6789/0", 
[node03][DEBUG ]         "crush_location": "{}", 
[node03][DEBUG ]         "name": "node02", 
[node03][DEBUG ]         "priority": 0, 
[node03][DEBUG ]         "public_addr": "192.168.11.220:6789/0", 
[node03][DEBUG ]         "public_addrs": {
[node03][DEBUG ]           "addrvec": [
[node03][DEBUG ]             {
[node03][DEBUG ]               "addr": "192.168.11.220:3300", 
[node03][DEBUG ]               "nonce": 0, 
[node03][DEBUG ]               "type": "v2"
[node03][DEBUG ]             }, 
[node03][DEBUG ]             {
[node03][DEBUG ]               "addr": "192.168.11.220:6789", 
[node03][DEBUG ]               "nonce": 0, 
[node03][DEBUG ]               "type": "v1"
[node03][DEBUG ]             }
[node03][DEBUG ]           ]
[node03][DEBUG ]         }, 
[node03][DEBUG ]         "rank": 1, 
[node03][DEBUG ]         "weight": 0
[node03][DEBUG ]       }
[node03][DEBUG ]     ], 
[node03][DEBUG ]     "stretch_mode": false
[node03][DEBUG ]   }, 
[node03][DEBUG ]   "name": "node03", 
[node03][DEBUG ]   "outside_quorum": [], 
[node03][DEBUG ]   "quorum": [], 
[node03][DEBUG ]   "rank": -1, 
[node03][DEBUG ]   "state": "probing", 
[node03][DEBUG ]   "stretch_mode": false, 
[node03][DEBUG ]   "sync_provider": []
[node03][DEBUG ] }
[node03][DEBUG ] ********************************************************************************
[node03][INFO  ] monitor: mon.node03 is currently at the state of probing
root@node01:~/cephCluster# ceph -s
  cluster:
    id:     e0f0ae6f-ee6c-4f8c-ba19-939bddaa3ee3
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3
 
  services:
    mon: 3 daemons, quorum node01,node02,node03 (age 2s)
    mgr: node01(active, since 4m), standbys: node02, node03
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     
 
root@node01:~/cephCluster# 

9. On each of the three nodes there are two disks to add to the cluster. List the disks, zap them, add them as OSDs, and check the cluster status:
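The repetitive per-disk zap commands shown in the log below can be scripted; a sketch that only prints them (assumes each node exposes /dev/sdb and /dev/sdc as in the lsblk listing below — drop the echo to execute, noting that zap destroys all data on the disk):

```shell
# Sketch: print the zap command for both data disks on every node.
# WARNING: the real command destroys the partition table and data on the disk;
# remove "echo" only when you are sure of the device names.
for node in node01 node02 node03; do
  for disk in /dev/sdb /dev/sdc; do
    echo "ceph-deploy disk zap ${node} ${disk}"
  done
done
```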

root@node01:~/cephCluster# lsblk 
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0   40G  0 disk 
└─sda1   8:1    0   40G  0 part /
sdb      8:16   0   40G  0 disk 
sdc      8:32   0   40G  0 disk 
sr0     11:0    1  951M  0 rom  
root@node01:~/cephCluster# ceph-deploy disk --help
usage: ceph-deploy disk [-h] {zap,list} ...

Manage disks on a remote host.

positional arguments:
  {zap,list}
    zap       destroy existing data and filesystem on LV or partition
    list      List disk info from remote host(s)

optional arguments:
  -h, --help  show this help message and exit
root@node01:~/cephCluster# ceph-deploy disk list --help
usage: ceph-deploy disk list [-h] [--debug] HOST [HOST ...]

positional arguments:
  HOST        Remote HOST(s) to list OSDs from

optional arguments:
  -h, --help  show this help message and exit
  --debug     Enable debug mode on remote ceph-volume calls
root@node01:~/cephCluster# ceph-deploy disk list node01
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy disk list node01
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : list
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f97ac2bd0f0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  host                          : ['node01']
[ceph_deploy.cli][INFO  ]  func                          : <function disk at 0x7f97ac2912d0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[node01][DEBUG ] connected to host: node01 
[node01][DEBUG ] detect platform information from remote host
[node01][DEBUG ] detect machine type
[node01][DEBUG ] find the location of an executable
[node01][INFO  ] Running command: fdisk -l
[node01][INFO  ] Disk /dev/sda: 40 GiB, 42949672960 bytes, 83886080 sectors
[node01][INFO  ] Disk /dev/sdc: 40 GiB, 42949672960 bytes, 83886080 sectors
[node01][INFO  ] Disk /dev/sdb: 40 GiB, 42949672960 bytes, 83886080 sectors
root@node01:~/cephCluster# 
root@node01:~/cephCluster# ceph-deploy disk list node02
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy disk list node02
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : list
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f70f5d180f0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  host                          : ['node02']
[ceph_deploy.cli][INFO  ]  func                          : <function disk at 0x7f70f5cec2d0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[node02][DEBUG ] connected to host: node02 
[node02][DEBUG ] detect platform information from remote host
[node02][DEBUG ] detect machine type
[node02][DEBUG ] find the location of an executable
[node02][INFO  ] Running command: fdisk -l
[node02][INFO  ] Disk /dev/sda: 40 GiB, 42949672960 bytes, 83886080 sectors
[node02][INFO  ] Disk /dev/sdc: 40 GiB, 42949672960 bytes, 83886080 sectors
[node02][INFO  ] Disk /dev/sdb: 40 GiB, 42949672960 bytes, 83886080 sectors
root@node01:~/cephCluster# ceph-deploy disk list node03
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy disk list node03
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : list
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f9625f690f0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  host                          : ['node03']
[ceph_deploy.cli][INFO  ]  func                          : <function disk at 0x7f9625f3d2d0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[node03][DEBUG ] connected to host: node03 
[node03][DEBUG ] detect platform information from remote host
[node03][DEBUG ] detect machine type
[node03][DEBUG ] find the location of an executable
[node03][INFO  ] Running command: fdisk -l
[node03][INFO  ] Disk /dev/sda: 40 GiB, 42949672960 bytes, 83886080 sectors
[node03][INFO  ] Disk /dev/sdb: 40 GiB, 42949672960 bytes, 83886080 sectors
[node03][INFO  ] Disk /dev/sdc: 40 GiB, 42949672960 bytes, 83886080 sectors
root@node01:~/cephCluster# 
root@node01:~/cephCluster# ceph-deploy disk zap node01 /dev/sdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy disk zap node01 /dev/sdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : zap
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f61d3f4b0f0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  host                          : node01
[ceph_deploy.cli][INFO  ]  func                          : <function disk at 0x7f61d3f1f2d0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : ['/dev/sdb']
[ceph_deploy.osd][DEBUG ] zapping /dev/sdb on node01
[node01][DEBUG ] connected to host: node01 
[node01][DEBUG ] detect platform information from remote host
[node01][DEBUG ] detect machine type
[node01][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 18.04 bionic
[node01][DEBUG ] zeroing last few blocks of device
[node01][DEBUG ] find the location of an executable
[node01][INFO  ] Running command: /usr/sbin/ceph-volume lvm zap /dev/sdb
[node01][WARNIN] --> Zapping: /dev/sdb
[node01][WARNIN] --> --destroy was not specified, but zapping a whole device will remove the partition table
[node01][WARNIN] Running command: /bin/dd if=/dev/zero of=/dev/sdb bs=1M count=10 conv=fsync
[node01][WARNIN]  stderr: 10+0 records in
[node01][WARNIN] 10+0 records out
[node01][WARNIN]  stderr: 10485760 bytes (10 MB, 10 MiB) copied, 0.0375513 s, 279 MB/s
[node01][WARNIN] --> Zapping successful for: <Raw Device: /dev/sdb>
root@node01:~/cephCluster# ceph-deploy disk zap node01 /dev/sdc
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy disk zap node01 /dev/sdc
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : zap
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f3506a260f0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  host                          : node01
[ceph_deploy.cli][INFO  ]  func                          : <function disk at 0x7f35069fa2d0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : ['/dev/sdc']
[ceph_deploy.osd][DEBUG ] zapping /dev/sdc on node01
[node01][DEBUG ] connected to host: node01 
[node01][DEBUG ] detect platform information from remote host
[node01][DEBUG ] detect machine type
[node01][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 18.04 bionic
[node01][DEBUG ] zeroing last few blocks of device
[node01][DEBUG ] find the location of an executable
[node01][INFO  ] Running command: /usr/sbin/ceph-volume lvm zap /dev/sdc
[node01][WARNIN] --> Zapping: /dev/sdc
[node01][WARNIN] --> --destroy was not specified, but zapping a whole device will remove the partition table
[node01][WARNIN] Running command: /bin/dd if=/dev/zero of=/dev/sdc bs=1M count=10 conv=fsync
[node01][WARNIN]  stderr: 10+0 records in
[node01][WARNIN] 10+0 records out
[node01][WARNIN] 10485760 bytes (10 MB, 10 MiB) copied, 0.0129387 s, 810 MB/s
[node01][WARNIN] --> Zapping successful for: <Raw Device: /dev/sdc>
root@node01:~/cephCluster# ceph-deploy disk zap node02 /dev/sdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy disk zap node02 /dev/sdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : zap
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f2cb24600f0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  host                          : node02
[ceph_deploy.cli][INFO  ]  func                          : <function disk at 0x7f2cb24342d0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : ['/dev/sdb']
[ceph_deploy.osd][DEBUG ] zapping /dev/sdb on node02
[node02][DEBUG ] connected to host: node02 
[node02][DEBUG ] detect platform information from remote host
[node02][DEBUG ] detect machine type
[node02][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 18.04 bionic
[node02][DEBUG ] zeroing last few blocks of device
[node02][DEBUG ] find the location of an executable
[node02][INFO  ] Running command: /usr/sbin/ceph-volume lvm zap /dev/sdb
[node02][WARNIN] --> Zapping: /dev/sdb
[node02][WARNIN] --> --destroy was not specified, but zapping a whole device will remove the partition table
[node02][WARNIN] Running command: /bin/dd if=/dev/zero of=/dev/sdb bs=1M count=10 conv=fsync
[node02][WARNIN] --> Zapping successful for: <Raw Device: /dev/sdb>
root@node01:~/cephCluster# ceph-deploy disk zap node02 /dev/sdc
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy disk zap node02 /dev/sdc
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : zap
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f63928580f0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  host                          : node02
[ceph_deploy.cli][INFO  ]  func                          : <function disk at 0x7f639282c2d0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : ['/dev/sdc']
[ceph_deploy.osd][DEBUG ] zapping /dev/sdc on node02
[node02][DEBUG ] connected to host: node02 
[node02][DEBUG ] detect platform information from remote host
[node02][DEBUG ] detect machine type
[node02][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 18.04 bionic
[node02][DEBUG ] zeroing last few blocks of device
[node02][DEBUG ] find the location of an executable
[node02][INFO  ] Running command: /usr/sbin/ceph-volume lvm zap /dev/sdc
[node02][WARNIN] --> Zapping: /dev/sdc
[node02][WARNIN] --> --destroy was not specified, but zapping a whole device will remove the partition table
[node02][WARNIN] Running command: /bin/dd if=/dev/zero of=/dev/sdc bs=1M count=10 conv=fsync
[node02][WARNIN]  stderr: 10+0 records in
[node02][WARNIN] 10+0 records out
[node02][WARNIN]  stderr: 10485760 bytes (10 MB, 10 MiB) copied, 0.0527737 s, 199 MB/s
[node02][WARNIN] --> Zapping successful for: <Raw Device: /dev/sdc>
root@node01:~/cephCluster# ceph-deploy disk zap node03 /dev/sdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy disk zap node03 /dev/sdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : zap
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f995c7040f0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  host                          : node03
[ceph_deploy.cli][INFO  ]  func                          : <function disk at 0x7f995c6d82d0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : ['/dev/sdb']
[ceph_deploy.osd][DEBUG ] zapping /dev/sdb on node03
[node03][DEBUG ] connected to host: node03 
[node03][DEBUG ] detect platform information from remote host
[node03][DEBUG ] detect machine type
[node03][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 18.04 bionic
[node03][DEBUG ] zeroing last few blocks of device
[node03][DEBUG ] find the location of an executable
[node03][INFO  ] Running command: /usr/sbin/ceph-volume lvm zap /dev/sdb
[node03][WARNIN] --> Zapping: /dev/sdb
[node03][WARNIN] --> --destroy was not specified, but zapping a whole device will remove the partition table
[node03][WARNIN] Running command: /bin/dd if=/dev/zero of=/dev/sdb bs=1M count=10 conv=fsync
[node03][WARNIN] --> Zapping successful for: <Raw Device: /dev/sdb>
root@node01:~/cephCluster# ceph-deploy disk zap node03 /dev/sdc
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy disk zap node03 /dev/sdc
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : zap
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f92685dd0f0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  host                          : node03
[ceph_deploy.cli][INFO  ]  func                          : <function disk at 0x7f92685b12d0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : ['/dev/sdc']
[ceph_deploy.osd][DEBUG ] zapping /dev/sdc on node03
[node03][DEBUG ] connected to host: node03 
[node03][DEBUG ] detect platform information from remote host
[node03][DEBUG ] detect machine type
[node03][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 18.04 bionic
[node03][DEBUG ] zeroing last few blocks of device
[node03][DEBUG ] find the location of an executable
[node03][INFO  ] Running command: /usr/sbin/ceph-volume lvm zap /dev/sdc
[node03][WARNIN] --> Zapping: /dev/sdc
[node03][WARNIN] --> --destroy was not specified, but zapping a whole device will remove the partition table
[node03][WARNIN] Running command: /bin/dd if=/dev/zero of=/dev/sdc bs=1M count=10 conv=fsync
[node03][WARNIN]  stderr: 10+0 records in
[node03][WARNIN] 10+0 records out
[node03][WARNIN]  stderr: 10485760 bytes (10 MB, 10 MiB) copied, 0.0309384 s, 339 MB/s
[node03][WARNIN] --> Zapping successful for: <Raw Device: /dev/sdc>
root@node01:~/cephCluster# 
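The six zap invocations above all follow the same pattern (three hosts, two data disks each), so they can be scripted. A minimal dry-run sketch, which only prints the commands it would issue; remove the `echo` to execute for real, assuming `node01`–`node03` are reachable over passwordless SSH and carry the same device names:

```shell
# Dry run: print each "disk zap" command instead of executing it.
# Zapping destroys the partition table on the target disk, so the echo
# guard is deliberate -- drop it only when the host/device list is verified.
for host in node01 node02 node03; do
  for dev in /dev/sdb /dev/sdc; do
    echo "ceph-deploy disk zap $host $dev"
  done
done
```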
root@node01:~/cephCluster# ceph-deploy osd create node01 --data /dev/sdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create node01 --data /dev/sdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f3186ffe410>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  journal                       : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  host                          : node01
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7f318704c250>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.cli][INFO  ]  data                          : /dev/sdb
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/sdb
[node01][DEBUG ] connected to host: node01 
[node01][DEBUG ] detect platform information from remote host
[node01][DEBUG ] detect machine type
[node01][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 18.04 bionic
[ceph_deploy.osd][DEBUG ] Deploying osd to node01
[node01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node01][WARNIN] osd keyring does not exist yet, creating one
[node01][DEBUG ] create a keyring file
[node01][DEBUG ] find the location of an executable
[node01][INFO  ] Running command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdb
[node01][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[node01][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new bcbff69b-db8c-4cb0-98f4-3d70e4140f99
[node01][WARNIN] Running command: /sbin/vgcreate --force --yes ceph-d643d3c9-f8f6-4f03-8d3a-4d0e7d23d63f /dev/sdb
[node01][WARNIN]  stdout: Physical volume "/dev/sdb" successfully created.
[node01][WARNIN]  stdout: Volume group "ceph-d643d3c9-f8f6-4f03-8d3a-4d0e7d23d63f" successfully created
[node01][WARNIN] Running command: /sbin/lvcreate --yes -l 10239 -n osd-block-bcbff69b-db8c-4cb0-98f4-3d70e4140f99 ceph-d643d3c9-f8f6-4f03-8d3a-4d0e7d23d63f
[node01][WARNIN]  stdout: Logical volume "osd-block-bcbff69b-db8c-4cb0-98f4-3d70e4140f99" created.
[node01][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[node01][WARNIN] Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
[node01][WARNIN] --> Executable selinuxenabled not in PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin
[node01][WARNIN] Running command: /bin/chown -h ceph:ceph /dev/ceph-d643d3c9-f8f6-4f03-8d3a-4d0e7d23d63f/osd-block-bcbff69b-db8c-4cb0-98f4-3d70e4140f99
[node01][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-0
[node01][WARNIN] Running command: /bin/ln -s /dev/ceph-d643d3c9-f8f6-4f03-8d3a-4d0e7d23d63f/osd-block-bcbff69b-db8c-4cb0-98f4-3d70e4140f99 /var/lib/ceph/osd/ceph-0/block
[node01][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
[node01][WARNIN]  stderr: 2021-08-16T12:16:10.628+0800 7f3cde0cf700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[node01][WARNIN]  stderr: 
[node01][WARNIN]  stderr: 2021-08-16T12:16:10.628+0800 7f3cde0cf700 -1 AuthRegistry(0x7f3cd805b408) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[node01][WARNIN]  stderr: 
[node01][WARNIN]  stderr: got monmap epoch 3
[node01][WARNIN] Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-0/keyring --create-keyring --name osd.0 --add-key AQCJ5hlhwUyHMBAA+WOMxJKxj5+UvmAEkG1K/A==
[node01][WARNIN]  stdout: creating /var/lib/ceph/osd/ceph-0/keyring
[node01][WARNIN] added entity osd.0 auth(key=AQCJ5hlhwUyHMBAA+WOMxJKxj5+UvmAEkG1K/A==)
[node01][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
[node01][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
[node01][WARNIN] Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid bcbff69b-db8c-4cb0-98f4-3d70e4140f99 --setuser ceph --setgroup ceph
[node01][WARNIN]  stderr: 2021-08-16T12:16:10.872+0800 7f152d130f00 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
[node01][WARNIN] --> ceph-volume lvm prepare successful for: /dev/sdb
[node01][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
[node01][WARNIN] Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-d643d3c9-f8f6-4f03-8d3a-4d0e7d23d63f/osd-block-bcbff69b-db8c-4cb0-98f4-3d70e4140f99 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
[node01][WARNIN] Running command: /bin/ln -snf /dev/ceph-d643d3c9-f8f6-4f03-8d3a-4d0e7d23d63f/osd-block-bcbff69b-db8c-4cb0-98f4-3d70e4140f99 /var/lib/ceph/osd/ceph-0/block
[node01][WARNIN] Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
[node01][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-0
[node01][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
[node01][WARNIN] Running command: /bin/systemctl enable ceph-volume@lvm-0-bcbff69b-db8c-4cb0-98f4-3d70e4140f99
[node01][WARNIN]  stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-bcbff69b-db8c-4cb0-98f4-3d70e4140f99.service → /lib/systemd/system/ceph-volume@.service.
[node01][WARNIN] Running command: /bin/systemctl enable --runtime ceph-osd@0
[node01][WARNIN]  stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service → /lib/systemd/system/ceph-osd@.service.
[node01][WARNIN] Running command: /bin/systemctl start ceph-osd@0
[node01][WARNIN] --> ceph-volume lvm activate successful for osd ID: 0
[node01][WARNIN] --> ceph-volume lvm create successful for: /dev/sdb
[node01][INFO  ] checking OSD status...
[node01][DEBUG ] find the location of an executable
[node01][INFO  ] Running command: /usr/bin/ceph --cluster=ceph osd stat --format=json
[node01][WARNIN] there is 1 OSD down
[ceph_deploy.osd][DEBUG ] Host node01 is now ready for osd use.
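The "there is 1 OSD down" warning just above is normally transient: ceph-deploy polls `osd stat` immediately after `systemctl start ceph-osd@0`, before the daemon has finished booting and reported in to the monitors. The remaining nodes get the same `osd create` command; as a dry-run sketch (it only prints the commands; drop the `echo` to execute):

```shell
# Dry run: print the remaining "osd create" commands for node02/node03
# (drop the echo to execute; assumes /dev/sdb was zapped on each host).
for host in node02 node03; do
  echo "ceph-deploy osd create $host --data /dev/sdb"
done
```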
root@node01:~/cephCluster# ceph-deploy osd create node02 --data /dev/sdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create node02 --data /dev/sdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fabccfd7410>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  journal                       : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  host                          : node02
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7fabcd025250>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.cli][INFO  ]  data                          : /dev/sdb
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/sdb
[node02][DEBUG ] connected to host: node02 
[node02][DEBUG ] detect platform information from remote host
[node02][DEBUG ] detect machine type
[node02][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 18.04 bionic
[ceph_deploy.osd][DEBUG ] Deploying osd to node02
[node02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node02][WARNIN] osd keyring does not exist yet, creating one
[node02][DEBUG ] create a keyring file
[node02][DEBUG ] find the location of an executable
[node02][INFO  ] Running command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdb
[node02][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[node02][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new afb2d3d4-71a7-4eb5-afd8-723a530a94e5
[node02][WARNIN] Running command: /sbin/vgcreate --force --yes ceph-9f813d57-6c45-4a97-9625-a4b05b1dbe93 /dev/sdb
[node02][WARNIN]  stdout: Physical volume "/dev/sdb" successfully created.
[node02][WARNIN]  stdout: Volume group "ceph-9f813d57-6c45-4a97-9625-a4b05b1dbe93" successfully created
[node02][WARNIN] Running command: /sbin/lvcreate --yes -l 10239 -n osd-block-afb2d3d4-71a7-4eb5-afd8-723a530a94e5 ceph-9f813d57-6c45-4a97-9625-a4b05b1dbe93
[node02][WARNIN]  stdout: Logical volume "osd-block-afb2d3d4-71a7-4eb5-afd8-723a530a94e5" created.
[node02][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[node02][WARNIN] Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
[node02][WARNIN] --> Executable selinuxenabled not in PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin
[node02][WARNIN] Running command: /bin/chown -h ceph:ceph /dev/ceph-9f813d57-6c45-4a97-9625-a4b05b1dbe93/osd-block-afb2d3d4-71a7-4eb5-afd8-723a530a94e5
[node02][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-0
[node02][WARNIN] Running command: /bin/ln -s /dev/ceph-9f813d57-6c45-4a97-9625-a4b05b1dbe93/osd-block-afb2d3d4-71a7-4eb5-afd8-723a530a94e5 /var/lib/ceph/osd/ceph-1/block
[node02][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
[node02][WARNIN]  stderr: 2021-08-16T12:16:27.442+0800 7feaafa66700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[node02][WARNIN]  stderr: 
[node02][WARNIN]  stderr: 2021-08-16T12:16:27.442+0800 7feaafa66700 -1 AuthRegistry(0x7feaa805b408) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[node02][WARNIN]  stderr: 
[node02][WARNIN]  stderr: got monmap epoch 3
[node02][WARNIN] Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-1/keyring --create-keyring --name osd.1 --add-key AQCa5hlhJa2oKRAAgRTh4T/yVJbiAqbBwT9wyw==
[node02][WARNIN]  stdout: creating /var/lib/ceph/osd/ceph-1/keyring
[node02][WARNIN] added entity osd.1 auth(key=AQCa5hlhJa2oKRAAgRTh4T/yVJbiAqbBwT9wyw==)
[node02][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
[node02][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
[node02][WARNIN] Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid afb2d3d4-71a7-4eb5-afd8-723a530a94e5 --setuser ceph --setgroup ceph
[node02][WARNIN]  stderr: 2021-08-16T12:16:27.702+0800 7f1b7dd17f00 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
[node02][WARNIN] --> ceph-volume lvm prepare successful for: /dev/sdb
[node02][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
[node02][WARNIN] Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-9f813d57-6c45-4a97-9625-a4b05b1dbe93/osd-block-afb2d3d4-71a7-4eb5-afd8-723a530a94e5 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
[node02][WARNIN] Running command: /bin/ln -snf /dev/ceph-9f813d57-6c45-4a97-9625-a4b05b1dbe93/osd-block-afb2d3d4-71a7-4eb5-afd8-723a530a94e5 /var/lib/ceph/osd/ceph-1/block
[node02][WARNIN] Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
[node02][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-0
[node02][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
[node02][WARNIN] Running command: /bin/systemctl enable ceph-volume@lvm-1-afb2d3d4-71a7-4eb5-afd8-723a530a94e5
[node02][WARNIN]  stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-1-afb2d3d4-71a7-4eb5-afd8-723a530a94e5.service → /lib/systemd/system/ceph-volume@.service.
[node02][WARNIN] Running command: /bin/systemctl enable --runtime ceph-osd@1
[node02][WARNIN]  stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@1.service → /lib/systemd/system/ceph-osd@.service.
[node02][WARNIN] Running command: /bin/systemctl start ceph-osd@1
[node02][WARNIN] --> ceph-volume lvm activate successful for osd ID: 1
[node02][WARNIN] --> ceph-volume lvm create successful for: /dev/sdb
[node02][INFO  ] checking OSD status...
[node02][DEBUG ] find the location of an executable
[node02][INFO  ] Running command: /usr/bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host node02 is now ready for osd use.
root@node01:~/cephCluster# ceph-deploy osd create node03 --data /dev/sdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create node03 --data /dev/sdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f62eaed9410>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  journal                       : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  host                          : node03
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7f62eaf27250>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.cli][INFO  ]  data                          : /dev/sdb
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/sdb
[node03][DEBUG ] connected to host: node03 
[node03][DEBUG ] detect platform information from remote host
[node03][DEBUG ] detect machine type
[node03][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 18.04 bionic
[ceph_deploy.osd][DEBUG ] Deploying osd to node03
[node03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node03][WARNIN] osd keyring does not exist yet, creating one
[node03][DEBUG ] create a keyring file
[node03][DEBUG ] find the location of an executable
[node03][INFO  ] Running command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdb
[node03][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[node03][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 9abec957-8a23-4beb-8fe9-f5965b448a64
[node03][WARNIN] Running command: /sbin/vgcreate --force --yes ceph-f753ec43-265d-4da2-b76c-eab5b1df7961 /dev/sdb
[node03][WARNIN]  stdout: Physical volume "/dev/sdb" successfully created.
[node03][WARNIN]  stdout: Volume group "ceph-f753ec43-265d-4da2-b76c-eab5b1df7961" successfully created
[node03][WARNIN] Running command: /sbin/lvcreate --yes -l 10239 -n osd-block-9abec957-8a23-4beb-8fe9-f5965b448a64 ceph-f753ec43-265d-4da2-b76c-eab5b1df7961
[node03][WARNIN]  stdout: Logical volume "osd-block-9abec957-8a23-4beb-8fe9-f5965b448a64" created.
[node03][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[node03][WARNIN] Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
[node03][WARNIN] --> Executable selinuxenabled not in PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin
[node03][WARNIN] Running command: /bin/chown -h ceph:ceph /dev/ceph-f753ec43-265d-4da2-b76c-eab5b1df7961/osd-block-9abec957-8a23-4beb-8fe9-f5965b448a64
[node03][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-0
[node03][WARNIN] Running command: /bin/ln -s /dev/ceph-f753ec43-265d-4da2-b76c-eab5b1df7961/osd-block-9abec957-8a23-4beb-8fe9-f5965b448a64 /var/lib/ceph/osd/ceph-2/block
[node03][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
[node03][WARNIN]  stderr: 2021-08-16T12:16:41.790+0800 7fc23497f700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[node03][WARNIN]  stderr: 
[node03][WARNIN]  stderr: 2021-08-16T12:16:41.790+0800 7fc23497f700 -1 AuthRegistry(0x7fc23005b408) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[node03][WARNIN]  stderr: got monmap epoch 3
[node03][WARNIN] Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-2/keyring --create-keyring --name osd.2 --add-key AQCo5hlhus81ORAAt2PHDjbNooKkR+Ulkn3nLw==
[node03][WARNIN]  stdout: creating /var/lib/ceph/osd/ceph-2/keyring
[node03][WARNIN] added entity osd.2 auth(key=AQCo5hlhus81ORAAt2PHDjbNooKkR+Ulkn3nLw==)
[node03][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
[node03][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
[node03][WARNIN] Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 9abec957-8a23-4beb-8fe9-f5965b448a64 --setuser ceph --setgroup ceph
[node03][WARNIN]  stderr: 2021-08-16T12:16:42.026+0800 7fd9f9eaaf00 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
[node03][WARNIN] --> ceph-volume lvm prepare successful for: /dev/sdb
[node03][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
[node03][WARNIN] Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-f753ec43-265d-4da2-b76c-eab5b1df7961/osd-block-9abec957-8a23-4beb-8fe9-f5965b448a64 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
[node03][WARNIN] Running command: /bin/ln -snf /dev/ceph-f753ec43-265d-4da2-b76c-eab5b1df7961/osd-block-9abec957-8a23-4beb-8fe9-f5965b448a64 /var/lib/ceph/osd/ceph-2/block
[node03][WARNIN] Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
[node03][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-0
[node03][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
[node03][WARNIN] Running command: /bin/systemctl enable ceph-volume@lvm-2-9abec957-8a23-4beb-8fe9-f5965b448a64
[node03][WARNIN]  stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-2-9abec957-8a23-4beb-8fe9-f5965b448a64.service → /lib/systemd/system/ceph-volume@.service.
[node03][WARNIN] Running command: /bin/systemctl enable --runtime ceph-osd@2
[node03][WARNIN]  stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@2.service → /lib/systemd/system/ceph-osd@.service.
[node03][WARNIN] Running command: /bin/systemctl start ceph-osd@2
[node03][WARNIN] --> ceph-volume lvm activate successful for osd ID: 2
[node03][WARNIN] --> ceph-volume lvm create successful for: /dev/sdb
[node03][INFO  ] checking OSD status...
[node03][DEBUG ] find the location of an executable
[node03][INFO  ] Running command: /usr/bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host node03 is now ready for osd use.
root@node01:~/cephCluster# 
root@node01:~/cephCluster# 
root@node01:~/cephCluster# ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME        STATUS  REWEIGHT  PRI-AFF
-1         0.11728  root default                              
-3         0.03909      host node01                           
 0    hdd  0.03909          osd.0        up   1.00000  1.00000
-5         0.03909      host node02                           
 1    hdd  0.03909          osd.1        up   1.00000  1.00000
-7         0.03909      host node03                           
 2    hdd  0.03909          osd.2        up   1.00000  1.00000
root@node01:~/cephCluster# 
root@node01:~/cephCluster# 
root@node01:~/cephCluster# ceph-deploy osd create node01 --data /dev/sdc
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create node01 --data /dev/sdc
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f209fccb410>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  journal                       : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  host                          : node01
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7f209fd19250>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.cli][INFO  ]  data                          : /dev/sdc
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/sdc
[node01][DEBUG ] connected to host: node01 
[node01][DEBUG ] detect platform information from remote host
[node01][DEBUG ] detect machine type
[node01][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 18.04 bionic
[ceph_deploy.osd][DEBUG ] Deploying osd to node01
[node01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node01][DEBUG ] find the location of an executable
[node01][INFO  ] Running command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdc
[node01][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[node01][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 100d91e8-03f6-497f-9aa4-ba51d10cd302
[node01][WARNIN] Running command: /sbin/vgcreate --force --yes ceph-5139df37-7ae6-472d-a0be-092b5d8aaaa6 /dev/sdc
[node01][WARNIN]  stdout: Physical volume "/dev/sdc" successfully created.
[node01][WARNIN]  stdout: Volume group "ceph-5139df37-7ae6-472d-a0be-092b5d8aaaa6" successfully created
[node01][WARNIN] Running command: /sbin/lvcreate --yes -l 10239 -n osd-block-100d91e8-03f6-497f-9aa4-ba51d10cd302 ceph-5139df37-7ae6-472d-a0be-092b5d8aaaa6
[node01][WARNIN]  stdout: Logical volume "osd-block-100d91e8-03f6-497f-9aa4-ba51d10cd302" created.
[node01][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[node01][WARNIN] Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-3
[node01][WARNIN] --> Executable selinuxenabled not in PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin
[node01][WARNIN] Running command: /bin/chown -h ceph:ceph /dev/ceph-5139df37-7ae6-472d-a0be-092b5d8aaaa6/osd-block-100d91e8-03f6-497f-9aa4-ba51d10cd302
[node01][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-1
[node01][WARNIN] Running command: /bin/ln -s /dev/ceph-5139df37-7ae6-472d-a0be-092b5d8aaaa6/osd-block-100d91e8-03f6-497f-9aa4-ba51d10cd302 /var/lib/ceph/osd/ceph-3/block
[node01][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-3/activate.monmap
[node01][WARNIN]  stderr: 2021-08-16T12:17:02.796+0800 7f27c8197700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[node01][WARNIN]  stderr: 
[node01][WARNIN]  stderr: 2021-08-16T12:17:02.796+0800 7f27c8197700 -1 AuthRegistry(0x7f27c005b408) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[node01][WARNIN]  stderr: 
[node01][WARNIN]  stderr: got monmap epoch 3
[node01][WARNIN] Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-3/keyring --create-keyring --name osd.3 --add-key AQC+5hlh3Z+JABAAymqhxfYXXEPa5Za9G/z3Gw==
[node01][WARNIN]  stdout: creating /var/lib/ceph/osd/ceph-3/keyring
[node01][WARNIN] added entity osd.3 auth(key=AQC+5hlh3Z+JABAAymqhxfYXXEPa5Za9G/z3Gw==)
[node01][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3/keyring
[node01][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3/
[node01][WARNIN] Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 3 --monmap /var/lib/ceph/osd/ceph-3/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-3/ --osd-uuid 100d91e8-03f6-497f-9aa4-ba51d10cd302 --setuser ceph --setgroup ceph
[node01][WARNIN]  stderr: 2021-08-16T12:17:03.032+0800 7f04aadb4f00 -1 bluestore(/var/lib/ceph/osd/ceph-3/) _read_fsid unparsable uuid
[node01][WARNIN] --> ceph-volume lvm prepare successful for: /dev/sdc
[node01][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
[node01][WARNIN] Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-5139df37-7ae6-472d-a0be-092b5d8aaaa6/osd-block-100d91e8-03f6-497f-9aa4-ba51d10cd302 --path /var/lib/ceph/osd/ceph-3 --no-mon-config
[node01][WARNIN] Running command: /bin/ln -snf /dev/ceph-5139df37-7ae6-472d-a0be-092b5d8aaaa6/osd-block-100d91e8-03f6-497f-9aa4-ba51d10cd302 /var/lib/ceph/osd/ceph-3/block
[node01][WARNIN] Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-3/block
[node01][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-1
[node01][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
[node01][WARNIN] Running command: /bin/systemctl enable ceph-volume@lvm-3-100d91e8-03f6-497f-9aa4-ba51d10cd302
[node01][WARNIN]  stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-3-100d91e8-03f6-497f-9aa4-ba51d10cd302.service → /lib/systemd/system/ceph-volume@.service.
[node01][WARNIN] Running command: /bin/systemctl enable --runtime ceph-osd@3
[node01][WARNIN]  stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@3.service → /lib/systemd/system/ceph-osd@.service.
[node01][WARNIN] Running command: /bin/systemctl start ceph-osd@3
[node01][WARNIN] --> ceph-volume lvm activate successful for osd ID: 3
[node01][WARNIN] --> ceph-volume lvm create successful for: /dev/sdc
[node01][INFO  ] checking OSD status...
[node01][DEBUG ] find the location of an executable
[node01][INFO  ] Running command: /usr/bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host node01 is now ready for osd use.
root@node01:~/cephCluster# ceph-deploy osd create node02 --data /dev/sdc
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create node02 --data /dev/sdc
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f9692725410>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  journal                       : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  host                          : node02
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7f9692773250>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.cli][INFO  ]  data                          : /dev/sdc
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/sdc
[node02][DEBUG ] connected to host: node02 
[node02][DEBUG ] detect platform information from remote host
[node02][DEBUG ] detect machine type
[node02][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 18.04 bionic
[ceph_deploy.osd][DEBUG ] Deploying osd to node02
[node02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node02][DEBUG ] find the location of an executable
[node02][INFO  ] Running command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdc
[node02][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[node02][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new f404d6af-f43a-451f-a704-4a6ee4aadf2f
[node02][WARNIN] Running command: /sbin/vgcreate --force --yes ceph-8bd233b6-47bd-4e87-9ea0-de5b052045c1 /dev/sdc
[node02][WARNIN]  stdout: Physical volume "/dev/sdc" successfully created.
[node02][WARNIN]  stdout: Volume group "ceph-8bd233b6-47bd-4e87-9ea0-de5b052045c1" successfully created
[node02][WARNIN] Running command: /sbin/lvcreate --yes -l 10239 -n osd-block-f404d6af-f43a-451f-a704-4a6ee4aadf2f ceph-8bd233b6-47bd-4e87-9ea0-de5b052045c1
[node02][WARNIN]  stdout: Logical volume "osd-block-f404d6af-f43a-451f-a704-4a6ee4aadf2f" created.
[node02][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[node02][WARNIN] Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-4
[node02][WARNIN] --> Executable selinuxenabled not in PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin
[node02][WARNIN] Running command: /bin/chown -h ceph:ceph /dev/ceph-8bd233b6-47bd-4e87-9ea0-de5b052045c1/osd-block-f404d6af-f43a-451f-a704-4a6ee4aadf2f
[node02][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-1
[node02][WARNIN] Running command: /bin/ln -s /dev/ceph-8bd233b6-47bd-4e87-9ea0-de5b052045c1/osd-block-f404d6af-f43a-451f-a704-4a6ee4aadf2f /var/lib/ceph/osd/ceph-4/block
[node02][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-4/activate.monmap
[node02][WARNIN]  stderr: 2021-08-16T12:17:17.622+0800 7fb5fdfc3700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[node02][WARNIN]  stderr: 
[node02][WARNIN]  stderr: 2021-08-16T12:17:17.622+0800 7fb5fdfc3700 -1 AuthRegistry(0x7fb5f805b408) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[node02][WARNIN]  stderr: 
[node02][WARNIN]  stderr: got monmap epoch 3
[node02][WARNIN] Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-4/keyring --create-keyring --name osd.4 --add-key AQDM5hlhKd57MxAA4UxRvyLy8H+/B15HjWRr2A==
[node02][WARNIN]  stdout: creating /var/lib/ceph/osd/ceph-4/keyring
[node02][WARNIN] added entity osd.4 auth(key=AQDM5hlhKd57MxAA4UxRvyLy8H+/B15HjWRr2A==)
[node02][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4/keyring
[node02][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4/
[node02][WARNIN] Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 4 --monmap /var/lib/ceph/osd/ceph-4/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-4/ --osd-uuid f404d6af-f43a-451f-a704-4a6ee4aadf2f --setuser ceph --setgroup ceph
[node02][WARNIN]  stderr: 2021-08-16T12:17:17.850+0800 7f3e60d01f00 -1 bluestore(/var/lib/ceph/osd/ceph-4/) _read_fsid unparsable uuid
[node02][WARNIN] --> ceph-volume lvm prepare successful for: /dev/sdc
[node02][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4
[node02][WARNIN] Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-8bd233b6-47bd-4e87-9ea0-de5b052045c1/osd-block-f404d6af-f43a-451f-a704-4a6ee4aadf2f --path /var/lib/ceph/osd/ceph-4 --no-mon-config
[node02][WARNIN] Running command: /bin/ln -snf /dev/ceph-8bd233b6-47bd-4e87-9ea0-de5b052045c1/osd-block-f404d6af-f43a-451f-a704-4a6ee4aadf2f /var/lib/ceph/osd/ceph-4/block
[node02][WARNIN] Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-4/block
[node02][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-1
[node02][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4
[node02][WARNIN] Running command: /bin/systemctl enable ceph-volume@lvm-4-f404d6af-f43a-451f-a704-4a6ee4aadf2f
[node02][WARNIN]  stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-4-f404d6af-f43a-451f-a704-4a6ee4aadf2f.service → /lib/systemd/system/ceph-volume@.service.
[node02][WARNIN] Running command: /bin/systemctl enable --runtime ceph-osd@4
[node02][WARNIN]  stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@4.service → /lib/systemd/system/ceph-osd@.service.
[node02][WARNIN] Running command: /bin/systemctl start ceph-osd@4
[node02][WARNIN] --> ceph-volume lvm activate successful for osd ID: 4
[node02][WARNIN] --> ceph-volume lvm create successful for: /dev/sdc
[node02][INFO  ] checking OSD status...
[node02][DEBUG ] find the location of an executable
[node02][INFO  ] Running command: /usr/bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host node02 is now ready for osd use.
root@node01:~/cephCluster# ceph-deploy osd create node03 --data /dev/sdc
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create node03 --data /dev/sdc
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f8b3f7fe410>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  journal                       : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  host                          : node03
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7f8b3f84c250>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.cli][INFO  ]  data                          : /dev/sdc
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/sdc
[node03][DEBUG ] connected to host: node03 
[node03][DEBUG ] detect platform information from remote host
[node03][DEBUG ] detect machine type
[node03][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 18.04 bionic
[ceph_deploy.osd][DEBUG ] Deploying osd to node03
[node03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node03][DEBUG ] find the location of an executable
[node03][INFO  ] Running command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdc
[node03][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[node03][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 25916c5d-843d-4ac9-838c-6e838d6ca53b
[node03][WARNIN] Running command: /sbin/vgcreate --force --yes ceph-52b1685b-f82e-4711-b2a7-a8fdaf53630d /dev/sdc
[node03][WARNIN]  stdout: Physical volume "/dev/sdc" successfully created.
[node03][WARNIN]  stdout: Volume group "ceph-52b1685b-f82e-4711-b2a7-a8fdaf53630d" successfully created
[node03][WARNIN] Running command: /sbin/lvcreate --yes -l 10239 -n osd-block-25916c5d-843d-4ac9-838c-6e838d6ca53b ceph-52b1685b-f82e-4711-b2a7-a8fdaf53630d
[node03][WARNIN]  stdout: Logical volume "osd-block-25916c5d-843d-4ac9-838c-6e838d6ca53b" created.
[node03][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[node03][WARNIN] Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-5
[node03][WARNIN] --> Executable selinuxenabled not in PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin
[node03][WARNIN] Running command: /bin/chown -h ceph:ceph /dev/ceph-52b1685b-f82e-4711-b2a7-a8fdaf53630d/osd-block-25916c5d-843d-4ac9-838c-6e838d6ca53b
[node03][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-1
[node03][WARNIN] Running command: /bin/ln -s /dev/ceph-52b1685b-f82e-4711-b2a7-a8fdaf53630d/osd-block-25916c5d-843d-4ac9-838c-6e838d6ca53b /var/lib/ceph/osd/ceph-5/block
[node03][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-5/activate.monmap
[node03][WARNIN]  stderr: 2021-08-16T12:17:31.343+0800 7fa852e4d700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[node03][WARNIN]  stderr: 
[node03][WARNIN]  stderr: 2021-08-16T12:17:31.343+0800 7fa852e4d700 -1 AuthRegistry(0x7fa84c05b408) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[node03][WARNIN]  stderr: 
[node03][WARNIN]  stderr: got monmap epoch 3
[node03][WARNIN] Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-5/keyring --create-keyring --name osd.5 --add-key AQDa5hlhjvQ7IRAAU9gsPCdsJOdrjNR8H7gl/g==
[node03][WARNIN]  stdout: creating /var/lib/ceph/osd/ceph-5/keyring
[node03][WARNIN] added entity osd.5 auth(key=AQDa5hlhjvQ7IRAAU9gsPCdsJOdrjNR8H7gl/g==)
[node03][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5/keyring
[node03][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5/
[node03][WARNIN] Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 5 --monmap /var/lib/ceph/osd/ceph-5/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-5/ --osd-uuid 25916c5d-843d-4ac9-838c-6e838d6ca53b --setuser ceph --setgroup ceph
[node03][WARNIN]  stderr: 2021-08-16T12:17:31.575+0800 7ff48b24ff00 -1 bluestore(/var/lib/ceph/osd/ceph-5/) _read_fsid unparsable uuid
[node03][WARNIN] --> ceph-volume lvm prepare successful for: /dev/sdc
[node03][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5
[node03][WARNIN] Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-52b1685b-f82e-4711-b2a7-a8fdaf53630d/osd-block-25916c5d-843d-4ac9-838c-6e838d6ca53b --path /var/lib/ceph/osd/ceph-5 --no-mon-config
[node03][WARNIN] Running command: /bin/ln -snf /dev/ceph-52b1685b-f82e-4711-b2a7-a8fdaf53630d/osd-block-25916c5d-843d-4ac9-838c-6e838d6ca53b /var/lib/ceph/osd/ceph-5/block
[node03][WARNIN] Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-5/block
[node03][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-1
[node03][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5
[node03][WARNIN] Running command: /bin/systemctl enable ceph-volume@lvm-5-25916c5d-843d-4ac9-838c-6e838d6ca53b
[node03][WARNIN]  stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-5-25916c5d-843d-4ac9-838c-6e838d6ca53b.service → /lib/systemd/system/ceph-volume@.service.
[node03][WARNIN] Running command: /bin/systemctl enable --runtime ceph-osd@5
[node03][WARNIN]  stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@5.service → /lib/systemd/system/ceph-osd@.service.
[node03][WARNIN] Running command: /bin/systemctl start ceph-osd@5
[node03][WARNIN] --> ceph-volume lvm activate successful for osd ID: 5
[node03][WARNIN] --> ceph-volume lvm create successful for: /dev/sdc
[node03][INFO  ] checking OSD status...
[node03][DEBUG ] find the location of an executable
[node03][INFO  ] Running command: /usr/bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host node03 is now ready for osd use.
root@node01:~/cephCluster# 
root@node01:~/cephCluster# ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME        STATUS  REWEIGHT  PRI-AFF
-1         0.23456  root default                              
-3         0.07819      host node01                           
 0    hdd  0.03909          osd.0        up   1.00000  1.00000
 3    hdd  0.03909          osd.3        up   1.00000  1.00000
-5         0.07819      host node02                           
 1    hdd  0.03909          osd.1        up   1.00000  1.00000
 4    hdd  0.03909          osd.4        up   1.00000  1.00000
-7         0.07819      host node03                           
 2    hdd  0.03909          osd.2        up   1.00000  1.00000
 5    hdd  0.03909          osd.5        up   1.00000  1.00000
root@node01:~/cephCluster# 
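A note on the WEIGHT column in the `ceph osd tree` output above: an OSD's CRUSH weight defaults to its capacity expressed in TiB, so the ~0.039 values correspond to the 40 GiB disks used in this lab (Ceph's own fixed-point rounding is why it displays 0.03909 rather than the exact quotient). A quick sanity-check sketch:

```shell
# CRUSH weight defaults to device capacity in TiB.
# For a 40 GiB disk: 40 / 1024 = 0.0390625, close to the 0.03909 shown
# by `ceph osd tree` (Ceph applies its own internal rounding).
size_gib=40
weight=$(awk -v g="$size_gib" 'BEGIN { printf "%.5f", g / 1024 }')
echo "approx CRUSH weight for a ${size_gib} GiB OSD: $weight"
```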
root@node01:~/cephCluster# ceph -s
  cluster:
    id:     e0f0ae6f-ee6c-4f8c-ba19-939bddaa3ee3
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum node01,node02,node03 (age 29s)
    mgr: node01(active, since 15m), standbys: node02, node03
    osd: 6 osds: 6 up (since 24s), 6 in (since 5m)
 
  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   33 MiB used, 240 GiB / 240 GiB avail
    pgs:     1 active+clean
 
root@node01:~/cephCluster# 

At this point, the cluster setup is complete.

4.4 Create a block device and mount it on a client for verification

1. On the cluster, create the block device (create a storage pool, then an image). The steps are as follows:

root@node01:~/cephCluster# ceph osd pool create mypool1 32 32
pool 'mypool1' created
root@node01:~/cephCluster# ceph osd pool application enable mypool1 rbd
enabled application 'rbd' on pool 'mypool1'
root@node01:~/cephCluster# rbd pool init -p mypool1
root@node01:~/cephCluster#
root@node01:~/cephCluster# rbd create myimg1 --size 1G --pool mypool1 --image-format 2 --image-feature layering
root@node01:~/cephCluster# rbd -p mypool1 ls
myimg1
root@node01:~/cephCluster# rbd -p mypool1 info --image myimg1
rbd image 'myimg1':
    size 1 GiB in 256 objects
    order 22 (4 MiB objects)
    snapshot_count: 0
    id: 143d2228227d
    block_name_prefix: rbd_data.143d2228227d
    format: 2
    features: layering
    op_features: 
    flags: 
    create_timestamp: Mon Aug 16 12:46:29 2021
    access_timestamp: Mon Aug 16 12:46:29 2021
    modify_timestamp: Mon Aug 16 12:46:29 2021
root@node01:~/cephCluster# 
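The `32 32` in `ceph osd pool create mypool1 32 32` sets pg_num and pgp_num. A common rule of thumb (a general guideline, not something stated in this log) targets roughly 100 PGs per OSD across all pools, rounded up to a power of two. A sketch of that calculation for this 6-OSD, 3-replica cluster, assuming the OSDs will eventually be shared by about 10 pools:

```shell
# Rule-of-thumb PG sizing: (num_osds * 100 / replica_count), divided among
# the pools sharing the OSDs, then rounded up to a power of two.
osds=6 replicas=3 pools=10
raw=$(( osds * 100 / replicas / pools ))   # 20 PGs per pool before rounding
pg=1
while [ "$pg" -lt "$raw" ]; do pg=$(( pg * 2 )); done
echo "suggested pg_num per pool: $pg"
```

With these assumed numbers the result is 32, which matches the value used above; with fewer pools the target per pool would be larger.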

2. Copy the Ceph configuration and keyring files to the client (for convenience, the admin keyring is used directly here). The session looks like this:

root@node01:~/cephCluster# ls
ceph.bootstrap-mds.keyring  ceph.bootstrap-mgr.keyring  ceph.bootstrap-osd.keyring  ceph.bootstrap-rgw.keyring  ceph.client.admin.keyring  ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring
root@node01:~/cephCluster# scp ceph.conf ceph.client.admin.keyring root@192.168.11.128:/etc/ceph/
The authenticity of host '192.168.11.128 (192.168.11.128)' can't be established.
ECDSA key fingerprint is SHA256:Q3ViKzORXE6BJdbk+QiRhoPl86r5+oJqIFxd5P0At/s.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.11.128' (ECDSA) to the list of known hosts.
root@192.168.11.128's password: 
ceph.conf                                                                                                                                                                       100%  265   137.3KB/s   00:00    
ceph.client.admin.keyring                                                                                                                                                       100%  151    90.6KB/s   00:00    
root@node01:~/cephCluster# 

3. Log in to the client, install the Ceph client packages, then map and mount the image (format it, mount it, and write a file to verify). The session looks like this:

[root@bogon ~]# rbd -p mypool1 map myimg1
/dev/rbd0
[root@bogon ~]# mkfs.xfs /dev/rbd0
Discarding blocks...Done.
meta-data=/dev/rbd0              isize=512    agcount=8, agsize=32768 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=262144, imaxpct=25
         =                       sunit=1024   swidth=1024 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@bogon ~]# mount /dev/rbd0 /mnt/
[root@bogon ~]# df -hT
Filesystem     Type      Size  Used Avail Use% Mounted on
devtmpfs       devtmpfs  980M     0  980M   0% /dev
tmpfs          tmpfs     991M     0  991M   0% /dev/shm
tmpfs          tmpfs     991M   18M  973M   2% /run
tmpfs          tmpfs     991M     0  991M   0% /sys/fs/cgroup
/dev/sda2      xfs        17G  2.6G   15G  16% /
/dev/sda1      xfs      1014M  132M  883M  13% /boot
tmpfs          tmpfs     199M     0  199M   0% /run/user/0
/dev/rbd0      xfs      1014M   33M  982M   4% /mnt
[root@bogon ~]# cp /etc/passwd /mnt/
[root@bogon ~]# cd /mnt/
[root@bogon mnt]# ls
passwd
[root@bogon mnt]# head passwd 
root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/sbin/nologin
daemon:x:2:2:daemon:/sbin:/sbin/nologin
adm:x:3:4:adm:/var/adm:/sbin/nologin
lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin
sync:x:5:0:sync:/sbin:/bin/sync
shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown
halt:x:7:0:halt:/sbin:/sbin/halt
mail:x:8:12:mail:/var/spool/mail:/sbin/nologin
operator:x:11:0:operator:/root:/sbin/nologin
[root@bogon mnt]# 

If you spot anything I got wrong, please don't hesitate to point it out. Thanks!

One more note:

Unmapping a block device works as shown below (there are no screenshots of the process anywhere in this post, so here is a full session instead):

root@node01:~# rbd  --help|grep unmap
    device unmap (unmap)              Unmap a rbd device.
root@node01:~# 
root@node01:~# 
root@node01:~# rbd showmapped 
root@node01:~# rbd -p mypool1 map test.img
/dev/rbd0
root@node01:~# rbd showmapped 
id  pool     namespace  image     snap  device   
0   mypool1             test.img  -     /dev/rbd0
root@node01:~# mkfs.xfs /dev/rbd0
meta-data=/dev/rbd0              isize=512    agcount=8, agsize=32768 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=262144, imaxpct=25
         =                       sunit=16     swidth=16 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=16 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
root@node01:~# mount /dev/rbd1 /mnt/
mount: /mnt: special device /dev/rbd1 does not exist.
root@node01:~# mount /dev/rbd0 /mnt/
root@node01:~# cp /etc/passwd /mnt/
root@node01:~# tail /mnt/passwd 
uuidd:x:107:112::/run/uuidd:/usr/sbin/nologin
tcpdump:x:108:113::/nonexistent:/usr/sbin/nologin
landscape:x:109:115::/var/lib/landscape:/usr/sbin/nologin
pollinate:x:110:1::/var/cache/pollinate:/bin/false
usbmux:x:111:46:usbmux daemon,,,:/var/lib/usbmux:/usr/sbin/nologin
sshd:x:112:65534::/run/sshd:/usr/sbin/nologin
systemd-coredump:x:999:999:systemd Core Dumper:/:/usr/sbin/nologin
vmuser:x:1000:1000:vmuser:/home/vmuser:/bin/bash
lxd:x:998:100::/var/snap/lxd/common/lxd:/bin/false
ceph:x:64045:64045:Ceph storage service:/var/lib/ceph:/usr/sbin/nologin
root@node01:~# umount /dev/rbd0
root@node01:~# rbd showmapped 
id  pool     namespace  image     snap  device   
0   mypool1             test.img  -     /dev/rbd0
root@node01:~# rbd unmap /dev/rbd0
root@node01:~# rbd showmapped 
root@node01:~# 
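When several images are mapped at once, the device to pass to `rbd unmap` can be pulled out of `rbd showmapped` output instead of being typed by hand. A small parsing sketch (the sample text below is copied from the session above; in practice you would pipe in the live command output):

```shell
# Extract the device for a given image from `rbd showmapped` output.
# Note: the namespace column is empty here, so the image is field 3.
showmapped='id  pool     namespace  image     snap  device
0   mypool1             test.img  -     /dev/rbd0'
dev=$(printf '%s\n' "$showmapped" \
  | awk '$2 == "mypool1" && $3 == "test.img" { print $NF }')
echo "device to unmap: $dev"
# then: umount "$dev" && rbd unmap "$dev"
```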

 

posted @ 2021-08-16 13:20  zheng-weimin