Installing a Ceph Cluster with ceph-deploy

Deployment diagram

Host resources:

  • Each Ceph host has two NICs: one carries public network traffic, the other carries cluster network traffic.
  • Each Ceph host has four disks: one system disk, one cache disk, and two data disks.

Checking the environment:

NIC information

$ ip -4 a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    inet 10.0.10.187/24 brd 10.0.10.255 scope global noprefixroute ens32
       valid_lft forever preferred_lft forever
3: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    inet 192.168.32.187/24 brd 192.168.32.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever

Disk information

$ lsblk 
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda       8:0    0   50G  0 disk 
├─sda1    8:1    0  200M  0 part /boot
├─sda2    8:2    0    4G  0 part [SWAP]
└─sda3    8:3    0   40G  0 part /
sdb       8:16   0   50G  0 disk 
sdc       8:32   0   50G  0 disk 
sr0      11:0    1 1024M  0 rom  
nvme0n1 259:0    0   20G  0 disk 

Environment preparation

Disable the firewall and SELinux

sudo systemctl stop firewalld
sudo systemctl disable firewalld
sudo setenforce 0
sudo sed -ri 's/^(SELINUX)=.*/\1=disabled/' /etc/selinux/config
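
To confirm both changes took effect, a quick check (the output shown is what you should see after the commands above):

$ getenforce
Permissive
$ systemctl is-active firewalld
inactive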

Set the hostnames (run each command on its corresponding host)

sudo hostnamectl set-hostname ceph01.ecloud.com
sudo hostnamectl set-hostname ceph02.ecloud.com
sudo hostnamectl set-hostname ceph03.ecloud.com

Note: the hostname must be set on every Ceph host, otherwise installing ceph-mon will fail.

Set up name resolution

cat << EOF | sudo tee -a /etc/hosts > /dev/null
192.168.32.187 ceph01 ceph01.ecloud.com
192.168.32.188 ceph02 ceph02.ecloud.com
192.168.32.189 ceph03 ceph03.ecloud.com
EOF
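
A quick sanity check that all three names resolve (getent accepts multiple keys):

$ getent hosts ceph01 ceph02 ceph03
192.168.32.187  ceph01 ceph01.ecloud.com
192.168.32.188  ceph02 ceph02.ecloud.com
192.168.32.189  ceph03 ceph03.ecloud.com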

Passwordless SSH from the client to the Ceph hosts

ssh-keygen -P '' -f ~/.ssh/id_rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub ops@ceph01
ssh-copy-id -i ~/.ssh/id_rsa.pub ops@ceph02
ssh-copy-id -i ~/.ssh/id_rsa.pub ops@ceph03
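
To verify that passwordless login works, a small loop from the client (using the ops account set up above):

for h in ceph01 ceph02 ceph03; do ssh ops@$h hostname; done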

Set up time synchronization

sudo yum install -y chrony

sudo vim /etc/chrony.conf
# server 0.centos.pool.ntp.org iburst
# server 1.centos.pool.ntp.org iburst
# server 2.centos.pool.ntp.org iburst
# server 3.centos.pool.ntp.org iburst
server ntp.aliyun.com iburst

sudo systemctl restart chronyd
chronyc sources
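
The same NTP change has to be made on every Ceph host. A hedged sketch that applies it over SSH from the client (assumes the ops account from above; ssh -t is used in case sudo requires a TTY):

for h in ceph01 ceph02 ceph03; do
  ssh -t ops@$h "sudo sed -i 's/^server /#server /' /etc/chrony.conf \
    && echo 'server ntp.aliyun.com iburst' | sudo tee -a /etc/chrony.conf \
    && sudo systemctl restart chronyd"
done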

Set up the required yum repositories

sudo curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

cat << EOM | sudo tee /etc/yum.repos.d/ceph.repo > /dev/null
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch/
enabled=1
gpgcheck=0
 
[ceph-x86_64]
name=Ceph x86_64 packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/x86_64/
enabled=1
gpgcheck=0
EOM

sudo yum clean all && sudo yum makecache
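
The ceph-deploy install step later runs with --no-adjust-repos, which assumes these repo files already exist on every Ceph host. If they were created only on the client, a hedged sketch to distribute them:

for h in ceph01 ceph02 ceph03; do
  scp /etc/yum.repos.d/{epel,ceph}.repo ops@$h:/tmp/
  ssh -t ops@$h 'sudo mv /tmp/{epel,ceph}.repo /etc/yum.repos.d/ && sudo yum clean all && sudo yum makecache'
done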

Using the cache disk to accelerate the data disks

See the separate article on using bcache.
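
For orientation only, a minimal sketch of the idea, run on every Ceph host. Everything here is an assumption rather than part of that article: it presumes a kernel with bcache support (the stock CentOS 7 kernel may need replacing), that bcache-tools is installed, and a hypothetical NVMe layout in which p1-p4 are reserved for the WAL/DB partitions used later and p5 is the cache partition; the sizes are placeholders for this 20 GB lab disk.

# Partition the cache disk: p1-p4 for WAL/DB (used in the OSD step), p5 for bcache (sizes are assumptions)
sudo sgdisk -n 1:0:+1G -n 2:0:+2G -n 3:0:+1G -n 4:0:+2G -n 5:0:0 /dev/nvme0n1
# Register sdb and sdc as backing devices; they appear as /dev/bcache0 and /dev/bcache1
sudo make-bcache -B /dev/sdb /dev/sdc
# Register the hypothetical p5 as the cache device
sudo make-bcache -C /dev/nvme0n1p5
# Attach the cache set to both backing devices
cset=$(sudo bcache-super-show /dev/nvme0n1p5 | awk '/cset.uuid/{print $2}')
echo "$cset" | sudo tee /sys/block/bcache0/bcache/attach /sys/block/bcache1/bcache/attach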

Installing the Ceph cluster

Install the deployment tool

sudo yum install -y ceph-deploy python-setuptools python-devel
ceph-deploy --version

Create the working directory

All of the following ceph-deploy commands are run from the client host.

mkdir cluster
cd cluster/

Generate the initial configuration

ceph-deploy new --public-network 192.168.32.0/24 --cluster-network 10.0.10.0/24 ceph01 ceph02 ceph03
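
This writes ceph.conf, a monitor keyring, and log files into the current directory. The generated ceph.conf should look roughly like the following (the fsid is cluster-specific; the one shown is taken from the verification output at the end):

[global]
fsid = 4795484c-592e-41e4-acc2-02a8ce5a5699
public_network = 192.168.32.0/24
cluster_network = 10.0.10.0/24
mon_initial_members = ceph01, ceph02, ceph03
mon_host = 192.168.32.187,192.168.32.188,192.168.32.189
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx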

Install the Ceph packages

ceph-deploy install --no-adjust-repos ceph01 ceph02 ceph03

Deploy the mon service

ceph-deploy mon create-initial
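
If this completes successfully, the working directory should now contain the admin keyring plus the bootstrap keyrings gathered from the monitors, something like:

$ ls *.keyring
ceph.bootstrap-mds.keyring  ceph.bootstrap-osd.keyring  ceph.client.admin.keyring
ceph.bootstrap-mgr.keyring  ceph.bootstrap-rgw.keyring  ceph.mon.keyring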

Push the admin keys

ceph-deploy admin ceph01 ceph02 ceph03
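
The pushed /etc/ceph/ceph.client.admin.keyring is readable only by root, so ceph commands on the Ceph hosts need sudo. As an optional convenience (an assumption, not part of the original steps), the ops user can be granted read access:

for h in ceph01 ceph02 ceph03; do
  ssh -t ops@$h 'sudo setfacl -m u:ops:r /etc/ceph/ceph.client.admin.keyring'
done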

Deploy the mgr service

ceph-deploy mgr create ceph01 ceph02 ceph03

Deploy the osd service

ceph-deploy osd create --data /dev/bcache0 --block-wal /dev/nvme0n1p1 --block-db /dev/nvme0n1p2 ceph01
ceph-deploy osd create --data /dev/bcache0 --block-wal /dev/nvme0n1p1 --block-db /dev/nvme0n1p2 ceph02
ceph-deploy osd create --data /dev/bcache0 --block-wal /dev/nvme0n1p1 --block-db /dev/nvme0n1p2 ceph03
ceph-deploy osd create --data /dev/bcache1 --block-wal /dev/nvme0n1p3 --block-db /dev/nvme0n1p4 ceph01
ceph-deploy osd create --data /dev/bcache1 --block-wal /dev/nvme0n1p3 --block-db /dev/nvme0n1p4 ceph02
ceph-deploy osd create --data /dev/bcache1 --block-wal /dev/nvme0n1p3 --block-db /dev/nvme0n1p4 ceph03

Parameters:

  • --data: the data device; it must be a raw, unused device.
  • --block-wal: the WAL device (the bluestore flag; --journal applies only to filestore). A WAL partition larger than 10 GB is generally more than enough.
  • --block-db: the DB device; each DB partition should be no smaller than 4% of the capacity of its data disk.
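
Once all six OSDs are created, they can be checked from any host holding the admin keyring; all six should report up with the expected CRUSH weights:

sudo ceph osd tree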

Deploy the mds service

ceph-deploy mds create ceph01
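
With no CephFS filesystem defined, the MDS stays in standby (visible as "1 up:standby" in the verification below). If CephFS is wanted, a minimal hedged sketch; the pool names and PG counts are illustrative assumptions:

sudo ceph osd pool create cephfs_metadata 16
sudo ceph osd pool create cephfs_data 64
sudo ceph fs new cephfs cephfs_metadata cephfs_data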

Deploy the rgw service

ceph-deploy rgw create ceph02 ceph03
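
By default the Nautilus RGW (civetweb) listens on port 7480, so a quick check from any host:

$ curl http://ceph02:7480
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult ...

An anonymous request returns an empty bucket listing, which is enough to confirm the gateway is answering.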

Verification

$ ceph -s
  cluster:
    id:     4795484c-592e-41e4-acc2-02a8ce5a5699
    health: HEALTH_WARN
            mons are allowing insecure global_id reclaim
 
  services:
    mon: 3 daemons, quorum ceph01,ceph02,ceph03 (age 14m)
    mgr: ceph02(active, since 14m), standbys: ceph01, ceph03
    mds:  1 up:standby
    osd: 6 osds: 6 up (since 2m), 6 in (since 2m)
    rgw: 2 daemons active (ceph02, ceph03)
 
  task status:
 
  data:
    pools:   4 pools, 128 pgs
    objects: 188 objects, 1.6 KiB
    usage:   17 GiB used, 294 GiB / 311 GiB avail
    pgs:     128 active+clean
 
  io:
    client:   63 KiB/s rd, 0 B/s wr, 84 op/s rd, 56 op/s wr

In total: 3 mons, 3 mgrs, 6 OSDs, 1 MDS, and 2 RGWs.

Resolving the "mons are allowing insecure global_id reclaim" warning

ceph config set mon auth_allow_insecure_global_id_reclaim false
sudo systemctl restart ceph-mon@ceph01.service