Ceph Setup Guide
Hardware Environment
Three CentOS 7 machines. Size the data disks according to your needs.
Software Environment
Disable SELinux
# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
# setenforce 0
Disable firewalld
# systemctl stop firewalld
# systemctl disable firewalld
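An optional sanity check that the firewall is really stopped and will stay off after a reboot:
# systemctl is-enabled firewalld
# firewall-cmd --state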
Configure the yum repositories on every node
# yum clean all
# rm -rf /etc/yum.repos.d/*.repo
# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.163.com/repo/Centos-7.repo
# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
# sed -i '/aliyuncs/d' /etc/yum.repos.d/CentOS-Base.repo
# sed -i '/aliyuncs/d' /etc/yum.repos.d/epel.repo
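Optionally, rebuild the yum cache afterwards so the new mirrors are picked up right away:
# yum makecache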
Synchronize time
Install NTP
# yum -y install ntp ntpdate
On node1, edit the configuration file
# vim /etc/ntp.conf
driftfile /var/lib/ntp/drift
restrict default nomodify
restrict <your network address> mask 255.255.255.0 nomodify
server 127.127.1.0
fudge 127.127.1.0 stratum 10
includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys
disable monitor
Start the service
# systemctl start ntpd
On the other two nodes, run
# ntpdate node1
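Optionally, enable ntpd at boot on node1 and verify it is serving time; ntpdate on the other nodes should report only a small offset against node1:
# systemctl enable ntpd
# ntpq -p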
Add a cron job
# crontab -e
*/10 * * * * /usr/sbin/ntpdate node1
Set the hostname on each node
# hostname node1
# echo node1 > /etc/hostname
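On CentOS 7, hostnamectl does both of these steps in one command; this is simply an equivalent alternative (adjust the name per node):
# hostnamectl set-hostname node1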
Edit /etc/hosts on each node
# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.50.1 node1
192.168.50.2 node2
192.168.50.3 node3
Configure passwordless SSH login between the nodes
# ssh-keygen -t rsa -P ''
# ssh-copy-id node1
# ssh-copy-id node2
# ssh-copy-id node3
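A quick way to confirm key-based login works from the deploy node is to run a remote command against each host; it should print each hostname without prompting for a password:
# for h in node1 node2 node3; do ssh $h hostname; done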
Ceph Deployment
Add the Ceph repository (confirm it before installing; the repo must be set up again after a purge)
# vim /etc/yum.repos.d/ceph.repo
[ceph]
name=ceph
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/x86_64/
gpgcheck=0
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/noarch/
gpgcheck=0
(Optional) This step makes sure ceph-deploy uses the 163 mirror
CentOS:
# export CEPH_DEPLOY_REPO_URL=http://mirrors.163.com/ceph/rpm-jewel/el7
# export CEPH_DEPLOY_GPG_URL=http://mirrors.163.com/ceph/keys/release.asc
Install ceph-deploy on node1
# yum install ceph-deploy -y
Create a working directory
# mkdir ~/ceph-cluster
# cd ~/ceph-cluster/
Create the cluster
# ceph-deploy new node1 node2 node3
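ceph-deploy new writes ceph.conf and an initial keyring into the current directory. With a single flat network like the one in /etc/hosts above, it can be worth pinning public_network in ceph.conf before installing; the 192.168.50.0/24 subnet below is taken from the hosts file and is only an example:
# echo "public_network = 192.168.50.0/24" >> ~/ceph-cluster/ceph.conf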
Install Ceph
# ceph-deploy install node1 node2 node3
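To confirm the packages landed on every node, the installed version can be checked, for example:
# ceph --version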
Initialize the monitors
# ceph-deploy mon create-initial
Create the OSDs
# ceph-deploy --overwrite-conf osd create node1:/dev/sdb node2:/dev/sdb node3:/dev/sdb
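If the data disks have been used before, the create step can fail on leftover partition tables; in that case the disks can be wiped first and the create re-run. A hedged example, assuming /dev/sdb is the spare disk on every node:
# ceph-deploy disk zap node1:/dev/sdb node2:/dev/sdb node3:/dev/sdb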
Push the admin config and keyring to all nodes
# ceph-deploy --overwrite-conf admin node1 node2 node3
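The upstream quick start also suggests making the pushed admin keyring readable so the ceph CLI works for non-root users as well (optional when everything runs as root); on each node:
# chmod +r /etc/ceph/ceph.client.admin.keyring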
Check the cluster health
# ceph health
HEALTH_OK
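For a more detailed view than ceph health, the overall status and the OSD layout can be inspected with:
# ceph -s
# ceph osd tree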
Note: if anything goes wrong, everything can be torn down and started over
# ceph-deploy purge node1 node2 node3
# ceph-deploy purgedata node1 node2 node3
# ceph-deploy forgetkeys
Using Ceph
Create an MDS
# ceph-deploy mds create node1
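To check that the MDS daemon registered with the cluster (it stays in standby until a filesystem exists):
# ceph mds stat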
Create the data pool and metadata pool
# ceph osd pool create cephfs_data 128 128
# ceph osd pool create cephfs_metadata 128 128
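A PG count of 128 roughly follows the commonly cited rule of thumb of (number of OSDs x 100) / replica count, rounded up to a power of two (3 x 100 / 3 = 100, rounded up to 128); with only 3 OSDs a smaller value such as 64 per pool would also be reasonable. The resulting pool settings can be inspected with:
# ceph osd dump | grep pool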
Create the CephFS filesystem
# ceph fs new cephfs cephfs_metadata cephfs_data
List the CephFS filesystems
# ceph fs ls
Create a mount point
# cd /mnt && mkdir cephfs_mnt
Extract the admin key
# ceph auth get-key client.admin -o /etc/ceph/adminkey
Mount the filesystem
# mount -t ceph node1:/ /mnt/cephfs_mnt -o name=admin,secretfile=/etc/ceph/adminkey
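A quick write test confirms the mount is usable; to make it persistent across reboots, an equivalent entry can be added to /etc/fstab with the same name/secretfile options plus _netdev:
# df -h /mnt/cephfs_mnt
# touch /mnt/cephfs_mnt/testfile && ls -l /mnt/cephfs_mnt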