Ceph Installation
Ceph Deployment Architecture
Three hosts, all running CentOS:
- node1:192.168.122.157 ceph-deploy mon osd
- node2:192.168.122.58 mon osd
- node3:192.168.122.54 mon osd
Ceph version: Mimic
Preparation
1) Disable the firewall and SELinux
systemctl stop firewalld
systemctl disable firewalld
//Alternatively, leave the firewall running and open only the required ports: Ceph Monitors communicate on port **6789** by default, and OSDs use ports in the **6800-7300** range
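If you keep firewalld running, opening those ports can be sketched as follows (run on every node; assumes the default public zone, adjust to your environment):

```shell
# allow the monitor port
firewall-cmd --zone=public --permanent --add-port=6789/tcp
# allow the port range used by OSDs (and other Ceph daemons)
firewall-cmd --zone=public --permanent --add-port=6800-7300/tcp
# apply the permanent rules to the running firewall
firewall-cmd --reload
```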
vim /etc/selinux/config
SELINUX=disabled
Then reboot for the SELinux change to take effect (or run setenforce 0 to disable it immediately without rebooting)
2) Update /etc/hosts
vim /etc/hosts
192.168.122.157 node1
192.168.122.58 node2
192.168.122.54 node3
3) Set up passwordless SSH login (example below)
[root@node1 ~]#ssh-keygen
[root@node1 ~]#ssh-copy-id -i .ssh/id_rsa.pub node2 //the password is required only the first time
4) Install the NTP service and synchronize time
[root@node1 ~]# yum install ntp -y
[root@node1 ~]# systemctl start ntpd
[root@node1 ~]# systemctl enable ntpd
Have node2 and node3 synchronize time from node1 via a cron job:
[root@node2 ~]#crontab -e
*/5 * * * * ntpdate 192.168.122.157
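To confirm that node2 and node3 actually track node1 (the clock skew warning later in this walkthrough is exactly what happens before the clocks converge), the offset can be checked by hand:

```shell
# on node2/node3: query node1 without setting the clock; prints the current offset
ntpdate -q 192.168.122.157
# on node1: list the upstream peers ntpd is synchronizing with
ntpq -p
```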
Deployment
1) Install ceph-deploy on node1
[root@node1 ~]# yum install ceph-deploy -y
2) Add the Ceph repository
[root@node1 ~]#export CEPH_DEPLOY_REPO_URL=http://mirrors.163.com/ceph/rpm-mimic/el7/ //a mirror inside China is noticeably faster than the official repository
[root@node1 ~]#export CEPH_DEPLOY_GPG_URL=http://mirrors.163.com/ceph/keys/release.asc
[root@node1 ~]# vim /etc/yum.repos.d/ceph.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.163.com/ceph/rpm-mimic/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.163.com/ceph/keys/release.asc
3) Create a cluster
[root@node1 ~]# cd /opt
[root@node1 opt]# mkdir cluster && cd cluster
[root@node1 cluster]# ceph-deploy new node1 node2 node3
4) Install Ceph
[root@node1 cluster]# ceph-deploy install --release mimic node1 node2 node3
5) Create the initial monitors and distribute the admin key to each node
[root@node1 cluster]# ceph-deploy mon create-initial //when this finishes, the keyrings are generated in the current directory
[root@node1 cluster]# ls
ceph.bootstrap-mds.keyring ceph.bootstrap-rgw.keyring ceph-deploy-ceph.log
ceph.bootstrap-mgr.keyring ceph.client.admin.keyring ceph.log
ceph.bootstrap-osd.keyring ceph.conf ceph.mon.keyring
[root@node1 cluster]# ceph-deploy admin node1 node2 node3 //distribute the admin keyring to each node
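Once the keyring has been distributed, any of the three nodes should be able to query the cluster; a quick check from node1:

```shell
# succeeds only if /etc/ceph/ceph.client.admin.keyring
# was copied to node2 by the previous command
ssh node2 ceph health
```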
6) Add OSDs
[root@node1 cluster]# ceph-deploy osd create node1 --data /dev/vdb
[root@node1 cluster]# ceph-deploy osd create node1 --data /dev/vdc
//repeat for each data disk on every node; with BlueStore (the default in Mimic), the whole device passed to --data is consumed by the OSD
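The repetition above can be written as a loop. This sketch assumes every node has the same two data disks, /dev/vdb and /dev/vdc, giving six OSDs in total:

```shell
# create one BlueStore OSD per disk on each host
for host in node1 node2 node3; do
  for dev in /dev/vdb /dev/vdc; do
    ceph-deploy osd create "$host" --data "$dev"
  done
done
```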
7) Add monitors on node2 and node3 (only needed if they were not listed in ceph-deploy new; here they already joined during mon create-initial, so this is a no-op)
[root@node1 cluster]# ceph-deploy mon add node2
[root@node1 cluster]# ceph-deploy mon add node3
8) Add the manager daemon (mgr)
//check the cluster status
[root@node1 cluster]# ceph status
  cluster:
    id:     4e1947bd-23ae-4828-ba5d-0e09779ced22
    health: HEALTH_WARN
            no active mgr
            clock skew detected on mon.node2, mon.node1

  services:
    mon: 3 daemons, quorum node3,node2,node1
    mgr: no daemons active
    osd: 6 osds: 6 up, 6 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
//the HEALTH_WARN shows "no active mgr", so deploy a manager daemon; the clock skew warning clears once NTP converges
[root@node1 cluster]# ceph-deploy mgr create node1
//check the cluster again
[root@node1 cluster]# ceph -s
  cluster:
    id:     4e1947bd-23ae-4828-ba5d-0e09779ced22
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum node3,node2,node1
    mgr: node1(active)
    osd: 6 osds: 6 up, 6 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   6.0 GiB used, 54 GiB / 60 GiB avail
    pgs:
At this point, the basic Ceph deployment is complete.
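As an optional smoke test, you can push one object through RADOS and read it back. This is a sketch: the pool name and PG count are arbitrary, and deleting a pool additionally requires mon_allow_pool_delete=true in the monitor configuration:

```shell
# create a small test pool with 64 placement groups
ceph osd pool create test 64
# store a file as object obj1, then read it back to stdout
echo hello > /tmp/hello.txt
rados -p test put obj1 /tmp/hello.txt
rados -p test get obj1 -
# clean up (needs mon_allow_pool_delete=true)
ceph osd pool delete test test --yes-i-really-really-mean-it
```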