Deploying a Ceph Cluster with ceph-deploy
192.168.0.210  deploy
192.168.0.211  node1
192.168.0.212  node2
192.168.0.213  node3
I. Basic environment preparation
(1) Disable firewalld
systemctl stop firewalld
systemctl disable firewalld
(2) Disable SELinux:
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
(3) Disable NetworkManager
systemctl disable NetworkManager && systemctl stop NetworkManager
(4) Add the hostname-to-IP mappings:
vim /etc/hosts

192.168.0.210 deploy
192.168.0.211 node1
192.168.0.212 node2
192.168.0.213 node3
(5) Set the hostname on each node:
hostnamectl set-hostname deploy
hostnamectl set-hostname node1
hostnamectl set-hostname node2
hostnamectl set-hostname node3
(6) Sync time over the network and set the timezone
yum install chrony -y
systemctl restart chronyd.service && systemctl enable chronyd.service
cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
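A quick check that time sync is actually working (assuming chronyd is using its default pool servers):

chronyc sources -v      # lists the NTP sources and their sync state
timedatectl             # confirms the Asia/Shanghai timezone took effect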
(7) Raise the file descriptor limits
for i in deploy node1 node2 node3;do ssh $i "echo 'ulimit -SHn 102400' >> /etc/rc.local";done

cat >> /etc/security/limits.conf << EOF
* soft nofile 65535
* hard nofile 65535
EOF
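The limits.conf change only applies to new login sessions; a quick check over a fresh SSH session:

ssh node1 "ulimit -n"   # should print 65535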
(8) Kernel parameter tuning
for i in deploy node1 node2 node3;do ssh $i "echo 'vm.swappiness = 0' >> /etc/sysctl.conf";done
for i in deploy node1 node2 node3;do ssh $i "echo 'kernel.pid_max = 4194303' >> /etc/sysctl.conf";done
for i in deploy node1 node2 node3;do ssh $i "sysctl -p";done
(9) On deploy, set up passwordless SSH login to node1, node2, and node3
ssh-keygen    # generate a key pair first if one does not already exist
for host in node{1..3}; do ssh-copy-id $host;done
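A quick loop to confirm passwordless login works before ceph-deploy relies on it:

for host in node{1..3}; do ssh $host hostname; done   # should print node1..node3 with no password prompts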
(10) read_ahead: improve disk read performance by prefetching data into memory ahead of requests
echo "8192" > /sys/block/sda/queue/read_ahead_kb
(11) I/O scheduler: use noop for SSDs, and deadline for SATA/SAS disks
echo "deadline" >/sys/block/sd[x]/queue/scheduler echo "noop" >/sys/block/sd[x]/queue/scheduler
II. Configure the yum repositories on all nodes
vim /etc/yum.repos.d/ceph.repo

[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

[ceph-source]
name=Ceph source package
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

for i in deploy node1 node2 node3;do scp /etc/yum.repos.d/ceph.repo root@$i:/etc/yum.repos.d/;done
for i in deploy node1 node2 node3;do scp /root/centos.repo root@$i:/etc/yum.repos.d/;done
for i in deploy node1 node2 node3;do ssh $i "yum clean all";done
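A quick sanity check that every node now sees the Nautilus repositories:

for i in deploy node1 node2 node3;do ssh $i "yum repolist | grep -i ceph";done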
III. Install the ceph-deploy tool on the deploy node
If you run into trouble at any point and want to start over, the following commands clear the configuration:
ceph-deploy purgedata {ceph-node} [{ceph-node}]
ceph-deploy forgetkeys
To remove the Ceph packages as well, use:
ceph-deploy purge {ceph-node} [{ceph-node}]
If you run purge, you must reinstall Ceph.
Install on the admin (deploy) node:
yum install -y epel-release
yum install -y ceph-deploy
1. Install the Ceph packages on all storage nodes (node1, node2, node3)
for i in node1 node2 node3;do ssh $i "yum install -y epel-release";done
for i in node1 node2 node3;do ssh $i "yum install -y ceph ceph-radosgw";done
2. Create the cluster
[root@deploy ~]# mkdir /etc/ceph
[root@deploy ~]# cd /etc/ceph
### Create the cluster
[root@deploy ceph]# ceph-deploy new --public-network 192.168.0.0/24 --cluster-network 192.168.0.0/24 node1 node2 node3
Change the default replica count in the Ceph configuration file from 3 to 2, so the cluster can reach the active + clean state with only two OSDs, by adding osd_pool_default_size = 2 to the [global] section. The resulting ceph.conf:
[global]
fsid = aca2b777-962a-4f7b-8663-20e0c1e30bc4
mon_initial_members = node1, node2, node3
mon_host = 192.168.0.211, 192.168.0.212, 192.168.0.213
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public_network = 192.168.0.0/24
cluster_network = 192.168.0.0/24
osd_pool_default_size = 2
3. Deploy the initial monitor(s) and gather all the keys
[root@deploy ceph]# ceph-deploy --overwrite-conf mon create-initial
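If the monitors formed quorum, ceph-deploy writes the admin and bootstrap keyrings into the working directory; the listing should look roughly like this:

[root@deploy ceph]# ls
ceph.bootstrap-mds.keyring  ceph.bootstrap-osd.keyring  ceph.client.admin.keyring  ceph.conf
ceph.bootstrap-mgr.keyring  ceph.bootstrap-rgw.keyring  ceph-deploy-ceph.log       ceph.mon.keyring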
4. Use ceph-deploy to copy the configuration file and admin key to the admin node and the Ceph nodes, so that you no longer need to specify the monitor address and ceph.client.admin.keyring every time you run a Ceph command.
[root@deploy ceph]# ceph-deploy admin node1 node2 node3
ceph-deploy must be able to reach the local admin host (admin-node) by hostname. If necessary, edit /etc/hosts and add the admin host's name.
Make sure you have read permission on ceph.client.admin.keyring:
sudo chmod +r /etc/ceph/ceph.client.admin.keyring
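With the keyring distributed and readable, any node can now query the cluster directly:

ssh node1 "ceph -s"    # expect HEALTH_OK, or HEALTH_WARN until the OSDs are added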
5. Add the OSDs
[root@deploy ceph]# ceph-deploy osd create node1 --data /dev/sdb
[root@deploy ceph]# ceph-deploy osd create node2 --data /dev/sdb
[root@deploy ceph]# ceph-deploy osd create node3 --data /dev/sdb
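If /dev/sdb held an old partition table or a previous Ceph deployment, wipe it first with disk zap; afterwards, confirm all three OSDs came up:

[root@deploy ceph]# ceph-deploy disk zap node1 /dev/sdb    # repeat for node2 and node3 if needed
[root@deploy ceph]# ceph osd tree                          # each OSD should show status "up"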
6. Create the mgr daemons
[root@deploy ceph]# ceph-deploy mgr create node1 node2 node3
Push the configuration file to all nodes:
[root@deploy ceph]# ceph-deploy --overwrite-conf config push deploy node1 node2 node3
IV. Enable the Ceph Dashboard
1. Install the Dashboard package (on every mgr node)
yum install -y ceph-mgr-dashboard
2. Enable the module
ceph mgr module enable dashboard
3. Disable SSL
ceph config set mgr mgr/dashboard/ssl false
4. Configure the listen address
ceph config set mgr mgr/dashboard/server_addr 0.0.0.0
Note
The listen address here must be 0.0.0.0 rather than a specific IP: the dashboard then binds to all local addresses, both IPv4 and IPv6, and IPv6 must not be disabled.
5. Configure the listen port
ceph config set mgr mgr/dashboard/server_port 9400
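The dashboard settings can be verified against the config store:

ceph config dump | grep dashboard    # should show ssl=false, server_addr=0.0.0.0, server_port=9400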
6. Create a user and set a password
Write the password to a file:
echo "123456" >password ###此时admin的密码为:123456
Then run:
ceph dashboard ac-user-create admin -i password administrator
7. Restart the module so the configuration takes effect
ceph mgr module disable dashboard
ceph mgr module enable dashboard
8. Look up the dashboard URL in the output of ceph mgr services
[root@deploy ceph]# ceph mgr services
{
    "dashboard": "http://node1:9400/"
}
9. Enable Prometheus monitoring
[root@deploy ceph]# ceph mgr module enable prometheus
[root@deploy ceph]# ceph mgr services
{
    "dashboard": "http://node1:9400/",
    "prometheus": "http://node1:9283/"
}
Configure Prometheus to scrape the exporter. Edit the scrape configuration and add the following job:

vim /opt/prometheus/prometheus1.yml
  - job_name: 'ceph'
    static_configs:
      - targets: ['192.168.0.211:9283']
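Before reloading Prometheus, it is worth checking that the mgr exporter answers (assuming node1, 192.168.0.211, holds the active mgr):

curl -s http://192.168.0.211:9283/metrics | head -n 5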