Ceph Installation
Doc: http://docs.ceph.com/docs/master/start/
In the documentation, the admin node, mon node, and osd nodes sit on separate hosts.
In our environment the admin node and mon node share a host. All-in-one installs are also described online.
Node type | IP addr | hostname |
---|---|---|
Mon/admin | 10.254.4.3 | controller-1 |
OSD | 10.254.4.4 | controller-2 |
OSD | 10.254.4.7 | controller-3 |
OS: CentOS 7
Install layout:
- admin node
- ceph nodes (mon node and osd nodes)
The basic flow: install ceph-deploy on the admin node, then use passwordless ssh into the ceph nodes to install the mon and osds.
Preparation
http://docs.ceph.com/docs/master/start/quick-start-preflight/
Update the yum repo on the admin node and install ceph-deploy
```
[root@controller-1 ~]# cat /etc/yum.repos.d/ceph.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-firefly/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
[root@controller-1 ~]# sudo yum update && sudo yum install ceph-deploy
```
Add a deploy user named ceph on each of the ceph nodes
```
[root@controller-3 ~]# sudo useradd -d /home/ceph -m ceph
[root@controller-3 ~]# passwd ceph    # password: ceph
[root@controller-3 ~]# echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
ceph ALL = (root) NOPASSWD:ALL
[root@controller-3 ~]# sudo chmod 0440 /etc/sudoers.d/ceph
```
Generate an SSH key on the admin node. The docs say not to use the root user, so create the ceph user on the admin node as well.
```
[root@controller-1 ~]# sudo useradd -d /home/ceph -m ceph
[root@controller-1 ~]# passwd ceph
[root@controller-1 ~]# su - ceph
[ceph@controller-1 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/ceph/.ssh/id_rsa):
Created directory '/home/ceph/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/ceph/.ssh/id_rsa.
Your public key has been saved in /home/ceph/.ssh/id_rsa.pub.
The key fingerprint is:
ca:bb:5d:93:08:5b:7a:81:dd:3e:c9:c6:50:0a:d9:13 ceph@controller-1
The key's randomart image is:
+--[ RSA 2048]----+
|        E        |
|       o .       |
|      o o .      |
|       + =       |
|      o S .      |
|     . * B o     |
|      = o @      |
|       + o o     |
|        o..      |
+-----------------+
```
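The same key can also be generated non-interactively. A minimal sketch, written to a temporary directory instead of ~/.ssh so it is safe to try anywhere:

```shell
# Non-interactive equivalent of the ssh-keygen session above:
# -N '' sets an empty passphrase, -f the output path, -q suppresses the banner.
# A temp dir stands in for /home/ceph/.ssh here.
KEYDIR=$(mktemp -d)
ssh-keygen -t rsa -N '' -f "$KEYDIR/id_rsa" -q
ls "$KEYDIR"
```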
Copy the admin node's ssh key to the ceph nodes
```
[ceph@controller-1 ~]$ ssh-copy-id ceph@controller-1
[ceph@controller-1 ~]$ ssh-copy-id ceph@controller-2
[ceph@controller-1 ~]$ ssh-copy-id ceph@controller-3
```
Create this file on the admin node:
```
[ceph@controller-1 ~]$ cat ~/.ssh/config
Host controller-1
    Hostname controller-1
    User ceph
Host controller-2
    Hostname controller-2
    User ceph
Host controller-3
    Hostname controller-3
    User ceph
```
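With more nodes, those Host blocks can be generated in a loop rather than typed by hand. A sketch writing to a temp file (on the admin node the target would be ~/.ssh/config; hostnames are the ones from the table above):

```shell
# Emit one Host block per cluster node.
CFG=$(mktemp)
for host in controller-1 controller-2 controller-3; do
  printf 'Host %s\n    Hostname %s\n    User ceph\n' "$host" "$host" >> "$CFG"
done
cat "$CFG"
```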
The mon node communicates on port 6789; osd nodes use ports 6800:7300. Open them in iptables (note that on CentOS 7 rules added this way do not survive a reboot unless saved, e.g. via the iptables-services package):
```
[root@controller-1 ~]# sudo iptables -A INPUT -p tcp --dport 6789 -j ACCEPT
[root@controller-2 ~]# sudo iptables -A INPUT -p tcp -m multiport --ports 6800:7300 -m comment --comment "osd nodes" -j ACCEPT
[root@controller-3 ~]# sudo iptables -A INPUT -p tcp -m multiport --ports 6800:7300 -m comment --comment "osd nodes" -j ACCEPT
```
Modify sudoers on each node
```
[root@controller-2 ~]# grep requi /etc/sudoers
#Defaults    requiretty
Defaults:ceph !requiretty
```
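The requiretty change can be scripted as well. A sketch against a scratch copy of sudoers (on a real node, make the edit through visudo and check it with visudo -c):

```shell
# Work on a scratch copy of /etc/sudoers for illustration.
SUDOERS=$(mktemp)
printf 'Defaults    requiretty\n' > "$SUDOERS"
# Comment out the global requiretty and exempt the ceph user, as shown above.
sed -i 's/^Defaults\([[:space:]]*requiretty\)/#Defaults\1/' "$SUDOERS"
printf 'Defaults:ceph !requiretty\n' >> "$SUDOERS"
grep requi "$SUDOERS"
```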
On the ceph nodes, disable SELinux enforcement and install the priorities plugin (setenforce 0 lasts only until reboot; edit /etc/selinux/config to make it permanent):
```
[root@controller-2 ~]# sudo setenforce 0
[root@controller-2 ~]# sudo yum install yum-plugin-priorities -y
```
Installation
Log in to the admin node, switch to the ceph user, and run the following steps.
Create a working directory
```
[ceph@controller-1 ~]$ mkdir my-cluster; cd my-cluster
```
Create the configuration file
```
[ceph@controller-1 my-cluster]$ ceph-deploy new controller-1
[ceph@controller-1 my-cluster]$ ls -ltr
total 12
-rw-------. 1 ceph ceph   73 Nov 10 12:47 ceph.mon.keyring
-rw-rw-r--. 1 ceph ceph 3744 Nov 10 12:47 ceph.log
-rw-rw-r--. 1 ceph ceph  232 Nov 10 12:47 ceph.conf
```
Change the default number of replicas and add the public network address
```
[ceph@controller-1 my-cluster]$ cat ~/my-cluster/ceph.conf
[global]
fsid = 1c9f72d3-3ebc-465b-97a4-2784f2db1db3
mon_initial_members = controller-1
mon_host = 10.254.4.3
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
osd pool default size = 2        <=== added
public network = 10.254.4.3/24   <=== added
```
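The two extra settings can be appended non-interactively. A sketch against a sample copy of the generated file (abbreviated; fsid and addresses are the ones ceph-deploy produced above):

```shell
# Abbreviated sample of the file ceph-deploy new generates, then the two
# settings from the text appended to the end of [global].
cat > ceph.conf.sample <<'EOF'
[global]
fsid = 1c9f72d3-3ebc-465b-97a4-2784f2db1db3
mon_initial_members = controller-1
mon_host = 10.254.4.3
EOF
printf 'osd pool default size = 2\npublic network = 10.254.4.3/24\n' >> ceph.conf.sample
cat ceph.conf.sample
```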
Install Ceph
```
# Rename the repo file first, otherwise ceph-deploy errors out: http://tracker.ceph.com/issues/12694
[ceph@controller-1 my-cluster]$ sudo mv /etc/yum.repos.d/ceph.repo /etc/yum.repos.d/ceph-deploy.repo
[ceph@controller-1 my-cluster]$ ceph-deploy install controller-1 controller-2 controller-3
[ceph@controller-1 my-cluster]$ ceph-deploy mon create-initial
```
Add OSDs
You can create directories on the OSD nodes so each OSD uses a directory rather than a whole disk:

```
[ceph@controller-2 ~]$ mkdir -p /home/ceph/osd0
[ceph@controller-3 ~]$ mkdir -p /home/ceph/osd1
```
```
[ceph@controller-1 my-cluster]$ ceph-deploy osd prepare controller-2:/home/ceph/osd0 controller-3:/home/ceph/osd1
```
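With more OSD nodes, the node:path arguments can be built in a loop. A sketch that only echoes the resulting command (hostnames and paths are the ones from above), so it runs without a cluster:

```shell
# Build the node:path argument list for ceph-deploy osd prepare.
ARGS=""
i=0
for node in controller-2 controller-3; do
  ARGS="$ARGS $node:/home/ceph/osd$i"
  i=$((i + 1))
done
# Echo instead of executing ceph-deploy.
echo "ceph-deploy osd prepare$ARGS"
```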
Activate the OSDs
```
[ceph@controller-1 my-cluster]$ ceph-deploy osd activate controller-2:/home/ceph/osd0 controller-3:/home/ceph/osd1
```
Use ceph-deploy to copy the configuration file and admin key to your admin node and your Ceph Nodes so that you can use the ceph CLI without having to specify the monitor address and ceph.client.admin.keyring each time you execute a command.
```
[ceph@controller-1 my-cluster]$ ceph-deploy admin controller-1 controller-2 controller-3
```
Fix the keyring permissions on each node
```
[ceph@controller-3 ~]$ sudo chmod +r /etc/ceph/ceph.client.admin.keyring
[ceph@controller-3 ~]$ ceph health
HEALTH_OK
```
That completes the basic installation. The links below also cover adding mon/osd nodes and using the ceph cluster:
Follow-up docs
start/stop: http://docs.ceph.com/docs/master/rados/operations/operating/#running-ceph-with-sysvinit
- sudo /etc/init.d/ceph -a start
- sudo /etc/init.d/ceph stop osd
- sudo /etc/init.d/ceph start osd.0
monitor a cluster: http://docs.ceph.com/docs/master/rados/operations/monitoring/
monitor osd/pg: http://docs.ceph.com/docs/master/rados/operations/monitoring-osd-pg/
user management: http://docs.ceph.com/docs/master/rados/operations/user-management/
ceph rbd integration with openstack: http://docs.ceph.com/docs/master/rbd/rbd-openstack/
Unfortunately, after finishing the install I found that the 10.254.x.x network is unreachable from the VMs, so the IPs had to change to 10.134.x.x. After some searching I found no way to change the mon/osd IPs in place, so the only option was to reinstall.
Remove the data (the quick start's "starting over" section also runs ceph-deploy forgetkeys at this point, to discard the locally generated keys):
```
[ceph@controller-1 ~]$ ceph-deploy purge controller-1 controller-2 controller-3
[ceph@controller-1 ~]$ ceph-deploy purgedata controller-1 controller-2 controller-3
```
When reinstalling, note that the argument to `ceph-deploy new xxxxx` must be the hostname of the host where ceph-mon will run; otherwise it errors out, as happened here. So I ran the following:
Create the cluster
```
[ceph@controller-1 my-cluster3]$ ceph-deploy new controller-1
```
Modify ceph.conf
```
[ceph@controller-1 my-cluster3]$ cat ceph.conf
[global]
fsid = d3752df9-221d-43c7-8cf5-f39061a630da
mon_initial_members = controller-1   <=== unchanged
mon_host = 10.134.1.3                <=== new IP
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
osd pool default size = 2            <=== added
public network = 10.134.1.3/24       <=== added
```
Install Ceph
```
[ceph@controller-1 my-cluster3]$ ceph-deploy install ceph-mon ceph-osd0 ceph-osd1
```
Install the mon
```
[ceph@controller-1 my-cluster3]$ ceph-deploy mon create-initial
[ceph@controller-1 my-cluster3]$ netstat -an | grep 6789
tcp    0    0 10.134.1.3:6789    0.0.0.0:*    LISTEN
```
Install the OSDs
```
[ceph@controller-2 ~]$ rm -rf osd0; mkdir osd0
[ceph@controller-3 ~]$ rm -rf osd1; mkdir osd1
[ceph@controller-1 my-cluster3]$ ceph-deploy osd prepare ceph-osd0:/home/ceph/osd0 ceph-osd1:/home/ceph/osd1
[ceph@controller-1 my-cluster3]$ ceph-deploy osd activate ceph-osd0:/home/ceph/osd0 ceph-osd1:/home/ceph/osd1
```
Copy the keys
```
[ceph@controller-1 my-cluster3]$ ceph-deploy admin ceph-mon ceph-osd0 ceph-osd1
[ceph@controller-1 my-cluster3]$ sudo chmod +r /etc/ceph/ceph.client.admin.keyring
```