centos-7.4_ceph-12.2.4 deployment
Preface:
The main steps for installing Ceph Luminous on CentOS 7.4 are the following:
1. Install the CentOS 7.4 system and configure the network interfaces
2. Do the pre-installation environment configuration
3. Install Ceph, configure the mon, mgr, and osd components, and create pools; this completes the Ceph installation and deployment
Install the system and configure the network interfaces; make sure the system can reach the Internet, or add a local installation source:
Pre-installation environment configuration
Set the hostname:
# hostnamectl set-hostname $HOSTNAME    (replace $HOSTNAME with the node's hostname)
# yum -y install vim
Add a local hosts entry:
# vim /etc/hosts
1.1.1.12 $HOSTNAME
Add the EPEL repository:
# vim /etc/yum.repos.d/epel.repo
[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
baseurl=http://mirrors.aliyun.com/epel/7/$basearch
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
[epel-debuginfo]
name=Extra Packages for Enterprise Linux 7 - $basearch - Debug
baseurl=http://mirrors.aliyun.com/epel/7/$basearch/debug
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-7&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=0
[epel-source]
name=Extra Packages for Enterprise Linux 7 - $basearch - Source
baseurl=http://mirrors.aliyun.com/epel/7/SRPMS
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-source-7&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=0
Add the Ceph 12 (Luminous) repository:
# vim /etc/yum.repos.d/ceph.repo
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/x86_64/
gpgcheck=0
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/noarch/
gpgcheck=0
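With the EPEL and Ceph repositories in place, rebuilding the yum metadata cache is optional but avoids stale metadata later:
# yum clean all
# yum makecache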
Speed up SSH logins:
# vim /etc/ssh/sshd_config
GSSAPIAuthentication no
UseDNS no
# systemctl restart sshd
Adjust the SELinux policy:
# vim /etc/selinux/config
SELINUX=disabled
# setenforce 0
Stop and disable services that can interfere with the deployment:
# systemctl stop NetworkManager.service
# systemctl disable NetworkManager.service
# systemctl stop firewalld.service
# systemctl disable firewalld.service
# systemctl stop postfix.service
# systemctl disable postfix.service
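A quick check that SELinux is no longer enforcing (it should report Permissive now, and Disabled after the next reboot):
# getenforce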
Installation and deployment:
Install Ceph:
List the ceph package versions that are available for installation:
# yum --showduplicates list ceph | expand
Install ceph-12.2.4:
# vim /etc/yum.conf
exclude=*12.2.7* *12.2.6* *12.2.5*    (this keeps the listed versions from being installed; by default yum installs the newest available version)
Install:
# yum -y install ceph
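After the installation it is worth confirming that the pinned version was actually installed; with the exclude line above it should report 12.2.4:
# ceph --version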
Add the Ceph configuration file:
# cd /etc/ceph
# vim ceph.conf
[global]
fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
mon initial members = $HOSTNAME    (name of the mon instance)
mon host = 1.1.1.12
public network = 1.1.1.0/24
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd journal size = 1024
osd pool default size = 1
osd pool default min size = 1
osd pool default pg num = 333
osd pool default pgp num = 333
osd crush chooseleaf type = 1
mon osd max split count = 10000
mon max pg per osd = 10000
rbd_default_features = 1
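The fsid above is only an example; a new cluster id can be generated with uuidgen, and the same value must then be reused in the monmaptool command in the mon setup below:
# uuidgen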
Configure the mon service:
# sudo -u ceph ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
# ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'
# ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd'
# ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
# ceph-authtool /tmp/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
# monmaptool --create --add $HOSTNAME 1.1.1.12 --fsid a7f64266-0894-4f1e-a635-d0aeaca0e993 /etc/ceph/monmap
# sudo -u ceph mkdir /var/lib/ceph/mon/ceph-$HOSTNAME
# sudo -u ceph ceph-mon --mkfs -i $HOSTNAME --monmap /etc/ceph/monmap --keyring /tmp/ceph.mon.keyring
# ll /var/lib/ceph/mon/ceph-$HOSTNAME/
total 8
-rw------- 1 ceph ceph 77 May 20 05:26 keyring
-rw-r--r-- 1 ceph ceph 8 May 20 05:26 kv_backend
drwxr-xr-x 2 ceph ceph 112 May 20 05:26 store.db
# touch /var/lib/ceph/mon/ceph-$HOSTNAME/{done,upstart}
# systemctl enable ceph.target
# systemctl enable ceph-mon.target
# systemctl enable ceph-mon@$HOSTNAME
# systemctl start ceph-mon@$HOSTNAME
# tail -f /var/log/messages
May 20 05:31:52 ceph systemd: Started Ceph cluster monitor daemon.
May 20 05:31:52 ceph systemd: Starting Ceph cluster monitor daemon... # this shows the mon started successfully
# ll /var/run/ceph/
total 0
srwxr-xr-x 1 ceph ceph 0 May 20 05:31 ceph-mon.$HOSTNAME.asok
# ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.$HOSTNAME.asok mon_status    # check the detailed mon status
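Once the mon is up, it should also answer normal client commands, since the admin keyring is already in /etc/ceph; for example:
# ceph -s    (a single mon should be shown in quorum; osd and mgr are not deployed yet)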
Configure the mgr service:
# sudo -u ceph mkdir -p /var/lib/ceph/mgr/ceph-$HOSTNAME
# ceph auth get-or-create "mgr.$HOSTNAME" mon 'allow profile mgr' osd 'allow *' mds 'allow *' -o /var/lib/ceph/mgr/ceph-$HOSTNAME/keyring
# chown -R ceph:ceph /var/lib/ceph/mgr/ceph-$HOSTNAME/keyring
# systemctl enable ceph-mgr.target
# systemctl enable ceph-mgr@$HOSTNAME
# systemctl start ceph-mgr@$HOSTNAME.service
# tail -f /var/log/messages
May 20 05:42:25 ceph systemd: Started Ceph cluster manager daemon.
May 20 05:42:25 ceph systemd: Starting Ceph cluster manager daemon... # this shows the mgr started successfully
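As a further check, the mgr map should now list this host as the active manager:
# ceph mgr dump    (look for this host as the active mgr)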
Configure the osd service (newer releases provide several ways to deploy OSDs; they are listed one by one below):
Commands that may be useful if problems come up during the deployments below:
Wipe the start of the disk and re-create the partition label:
dd if=/dev/zero of=/dev/sdb bs=1M count=10
parted /dev/sdb -s mklabel gpt
Rescan the SCSI bus so a VM recognizes new disks without a reboot:
echo "- - -" > /sys/class/scsi_host/host{0..2}/scan
Reload the systemd configuration:
systemctl daemon-reload
-----------------------------------------------------------------------------------------
Method 1 (this follows the older deployment style and uses the whole disk as a single FileStore data partition; its performance and behavior still need to be verified):
# vim create_osd.sh
#!/bin/bash
# Usage: bash create_osd.sh <disk>, e.g. "sdb" (bare device name without /dev/)
disk=$1
UUID=$(uuidgen)
OSD_SECRET=$(ceph-authtool --gen-print-key)
# Register a new OSD with the cluster and capture the OSD id it was assigned
ID=$(echo "{\"cephx_secret\": \"$OSD_SECRET\"}" | \
ceph osd new $UUID -i - \
-n client.bootstrap-osd -k /var/lib/ceph/bootstrap-osd/ceph.keyring)
# Format the whole disk as XFS and mount it as the OSD data directory
mkdir /var/lib/ceph/osd/ceph-$ID
mkfs.xfs /dev/${disk}
mount /dev/${disk} /var/lib/ceph/osd/ceph-$ID
# Write the OSD keyring, initialize the data directory, and fix ownership
ceph-authtool --create-keyring /var/lib/ceph/osd/ceph-$ID/keyring \
--name osd.$ID --add-key $OSD_SECRET
ceph-osd -i $ID --mkfs --osd-uuid $UUID
chown -R ceph:ceph /var/lib/ceph/osd/ceph-$ID
# Enable and start the OSD service
systemctl enable ceph-osd@$ID
systemctl start ceph-osd@$ID
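A usage sketch, assuming the spare data disk is /dev/sdb (pass the bare device name, and run the script as root since it calls ceph, mkfs.xfs, and systemctl):
# bash create_osd.sh sdb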
--------------------------------------
Method 2 deploys a BlueStore OSD with ceph-disk; it splits the disk into four partitions, and the size of each partition can be defined in the configuration file (ceph-disk defaults to FileStore):
ceph-disk prepare --bluestore /dev/sdc --block.db /dev/sdc --block.wal /dev/sdc
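ceph-disk normally triggers activation through udev once prepare finishes; if the OSD does not come up on its own, it can be activated manually (assuming the data partition ended up as /dev/sdc1):
# ceph-disk activate /dev/sdc1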
The main BlueStore-related options, shown here in the context of a complete ceph.conf, are:
[global]
bluestore block create = true
bluestore block db size = 67108864
bluestore block db create = true
bluestore block wal size = 134217728
bluestore block wal create = true
[osd]
enable experimental unrecoverable data corrupting features = bluestore rocksdb
osd objectstore = bluestore
bluestore fsck on mount = true
[mon.$HOSTNAME]
host = $HOSTNAME
mon addr = 1.1.1.12:6789
[mgr]
mgr data = /var/lib/ceph/mgr/$cluster-$name
[mgr.$HOSTNAME]
key = "AQBcHf9aOJyoGBAAGgujgi67SBrImyrTaHy3vw=="
caps mds = "allow *"
caps mon = "allow profile mgr"
caps osd = "allow *"
[osd.0]
host = $HOSTNAME
osd data = /var/lib/ceph/osd/ceph-0
The following entries do not have to be defined, or they can be added after the deployment is complete:
# bluestore block db path = /dev/disk/by-partlabel/osd-device-0-db
# bluestore block wal path = /dev/disk/by-partlabel/osd-device-0-wal
# bluestore block path = /dev/disk/by-partlabel/osd-device-0-block
------------------------------------------
Method 3 uses ceph-volume, the tool currently recommended upstream; it works on top of LVM and deploys BlueStore by default:
ceph-volume lvm create --data /dev/sdb
or:
ceph-volume lvm prepare --data /dev/sdb
ceph-volume lvm list
ceph-volume lvm activate {ID} {FSID}
filestore:
ceph-volume lvm create --filestore --data /dev/sdb --journal /dev/sdb
or:
ceph-volume lvm prepare --filestore --data {data-path} --journal {journal-path}
ceph-volume lvm list
ceph-volume lvm activate --filestore {ID} {FSID}
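Whichever of the three methods is used, the resulting OSDs can be checked afterwards with:
# ceph osd tree
# ceph -s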
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
After the deployment is complete, the following Ceph commands are commonly used:
# systemctl stop ceph\*.service ceph\*.target    (stop all Ceph services on this node)
# systemctl start ceph.target    (start the Ceph services)
# systemctl start ceph-osd@{0..2}    (start a group of OSDs)
# systemctl stop ceph-osd.target    (stop all OSDs on the current server)
# ceph osd pool create volumes 2048 2048 replicated    (create a storage pool; see the note on choosing pg_num below)
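On choosing pg_num: a commonly cited rule of thumb from the Ceph documentation is roughly (number of OSDs × 100) / replica size, rounded up to the nearest power of two. For example, with 6 OSDs and size 3 that gives 6 × 100 / 3 = 200, so 256 would be a reasonable starting point (testpool below is just a hypothetical pool name):
# ceph osd pool create testpool 256 256 replicated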
Common errors encountered in day-to-day use:
# ceph osd df    (this fails because the client key does not have mgr caps)
Error EACCES: access denied' does your client key have mgr caps? See http://docs.ceph.com/docs/master/mgr/administrator/#client-authentication
# ceph auth caps client.admin osd 'allow *' mds 'allow *' mon 'allow *' mgr 'allow *'
updated caps for client.admin
The pool replica count cannot be satisfied (a single-host cluster cannot place the default size of 3 when replicas are spread across hosts):
# ceph -s
cluster:
id: 46c8f0f7-c585-4e32-99b5-aaa5d50cb703
health: HEALTH_WARN
Reduced data availability: 2048 pgs inactive
Degraded data redundancy: 2048 pgs undersized
services:
mon: 1 daemons, quorum ceph128-06
mgr: ceph128-06(active)
osd: 6 osds: 6 up, 6 in
data:
pools: 3 pools, 2560 pgs
objects: 0 objects, 0 bytes
usage: 6384 MB used, 8931 GB / 8938 GB avail
pgs: 100.000% pgs not active
2560 undersized+peered
# ceph osd dump|grep pool
pool 1 'volumes' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2048 pgp_num 2048 last_change 29 flags hashpspool stripe_width 0
pool 2 'backups' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 256 pgp_num 256 last_change 34 flags hashpspool stripe_width 0
pool 3 'images' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 256 pgp_num 256 last_change 37 flags hashpspool stripe_width 0
# for i in `ceph osd pool ls`;do ceph osd pool set $i size 1 && ceph osd pool set $i min_size 1;done    (set a reasonable replica count; size 1 is only acceptable for a single-node test)
The newly created pools have not been tagged with an application:
# ceph -s
cluster:
id: 46c8f0f7-c585-4e32-99b5-aaa5d50cb703
health: HEALTH_WARN
application not enabled on 3 pool(s)
services:
mon: 1 daemons, quorum ceph128-06
mgr: ceph128-06(active)
osd: 6 osds: 6 up, 6 in
data:
pools: 3 pools, 2560 pgs
objects: 15 objects, 192 kB
usage: 6417 MB used, 8931 GB / 8938 GB avail
pgs: 2560 active+clean
io:
client: 383 B/s rd, 8685 B/s wr, 0 op/s rd, 0 op/s wr
# for i in `ceph osd pool ls`;do ceph osd pool application enable $i rbd;done
# for i in `ceph osd pool ls`;do rbd pool init $i;done
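Once the application tag is enabled and the pools are initialized, the warning should clear; this can be verified with:
# ceph health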
Feature-set error when creating and mapping an RBD image (this error does not appear if rbd_default_features is defined in the configuration file, as above):
# rbd create -s 10T volumes/test
# rbd info volumes/test
rbd image 'test':
size 10240 GB in 2621440 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.375674b0dc51
format: 2
features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
flags:
create_timestamp: Sat Jul 28 03:02:36 2018
# rbd map volumes/test
rbd: sysfs write failed
RBD image feature set mismatch. Try disabling features unsupported by the kernel with "rbd feature disable".
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (6) No such device or address
# vim /etc/ceph/ceph.conf
rbd_default_features = 1
# rbd rm volumes/test
# rbd create -s 10T volumes/test
# rbd info volumes/test
rbd image 'test':
size 10240 GB in 2621440 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.375b74b0dc51
format: 2
features: layering
flags:
create_timestamp: Sat Jul 28 03:08:36 2018
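With only the layering feature enabled, the image should now map without the feature-set error; the assigned device can be listed afterwards (the device name, e.g. /dev/rbd0, may vary):
# rbd map volumes/test
# rbd showmapped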