OpenStack + Ceph Installation and Configuration - 00 - Ceph Installation and Deployment (cephadm)



Download cephadm directly from GitHub. Do not install it with yum install cephadm; the packaged version causes problems. After downloading, make it executable:

wget https://github.com/ceph/ceph/raw/quincy/src/cephadm/cephadm
chmod +x cephadm

Install python3, podman, and lvm2:

yum install python3 podman lvm2 -y

Install the Ceph repository:

./cephadm add-repo --release quincy

 

Following the official documentation, adjust the Ceph repository file:

[root@node-1 ~]# vim /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph $basearch
baseurl=https://download.ceph.com/rpm-quincy/el8/$basearch
enabled=1
gpgcheck=1
priority=2
gpgkey=https://download.ceph.com/keys/release.gpg
[Ceph-noarch]
name=Ceph noarch
baseurl=https://download.ceph.com/rpm-quincy/el8/noarch
enabled=1
gpgcheck=1
priority=2
gpgkey=https://download.ceph.com/keys/release.gpg
[Ceph-source]
name=Ceph SRPMS
baseurl=https://download.ceph.com/rpm-quincy/el8/SRPMS
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.gpg

Copy cephadm to /usr/bin for convenience:

cp cephadm /usr/bin/

Install ceph-common:

cephadm install ceph-common

Bootstrap the Ceph cluster:

# cephadm bootstrap --mon-ip 172.16.1.81
...
Ceph Dashboard is now available at:
URL: https://node-1:8443/
User: admin
Password: cephpassword
Enabling client.admin keyring and conf on hosts with "admin" label
You can access the Ceph CLI with:
sudo /usr/sbin/cephadm shell --fsid e015acc8-db34-11ec-b1ba-34735a9b2994 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
Please consider enabling telemetry to help improve Ceph:
ceph telemetry on
For more information see:
https://docs.ceph.com/docs/pacific/mgr/telemetry/
Bootstrap complete.

The bootstrap process accomplishes the following:

  • Creates a monitor and manager daemon for the new cluster on the local host.
  • Generates a new SSH key for the Ceph cluster and adds it to the root user's /root/.ssh/authorized_keys file.
  • Writes a copy of the public key to /etc/ceph/ceph.pub.
  • Writes a minimal configuration file to /etc/ceph/ceph.conf; this file is required to communicate with the new cluster.
  • Writes a copy of the client.admin administrative (privileged!) secret key to /etc/ceph/ceph.client.admin.keyring.
  • Adds the _admin label to the bootstrap host. By default, any host with this label will (also) get a copy of /etc/ceph/ceph.conf and /etc/ceph/ceph.client.admin.keyring.
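A quick way to confirm the bootstrap artifacts from the list above were written is to check for the three default file paths (a minimal sketch; on a host that has not been bootstrapped they will all report missing):

```shell
# Sanity check: the three files cephadm bootstrap writes to /etc/ceph.
for f in /etc/ceph/ceph.conf /etc/ceph/ceph.pub /etc/ceph/ceph.client.admin.keyring; do
  if [ -e "$f" ]; then echo "present: $f"; else echo "missing: $f"; fi
done
```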

A problem encountered:

# cephadm shell
ERROR: Cannot infer an fsid, one must be specified: ['ca1b4f1e-d296-11ec-906d-16fc54415087', 'e015acc8-db34-11ec-b1ba-34735a9b2994']

This happens because a cluster had been installed before; removing the old cluster's data resolves it:

# Check the current cluster's fsid
[root@node-1: Tue May 24 16:01:00 CST 2022 /etc/ceph 50]
# cat ceph.conf
# minimal ceph.conf for e015acc8-db34-11ec-b1ba-34735a9b2994
[global]
fsid = e015acc8-d334-11ec-b1ba-34735a9112c34
mon_host = [v2:172.16.1.81:3300/0,v1:172.16.1.81:6789/0]
# Remove the old cluster's data
[root@node-1: Tue May 24 16:01:12 CST 2022 /etc/ceph 51]
# cd /var/lib/ceph
[root@node-1: Tue May 24 16:02:26 CST 2022 /var/lib/ceph 52]
# ll
total 4
drwx------ 10 polkitd polkitd 255 May 13 16:33 ca1b4f1e-d296-11ec-906d-16fc54415087
drwx------ 10 polkitd polkitd 4096 May 24 15:45 e015acc8-d334-11ec-bc3a-34735a9112c34
[root@node-1: Tue May 24 16:02:27 CST 2022 /var/lib/ceph 53]
# rm -rf ca1b4f1e-d296-11ec-906d-16fc54415087/
# cephadm shell
Inferring fsid e015acc8-db34-11ec-b1ba-34735a9b2994
Using recent ceph image quay.io/ceph/ceph@sha256:2b72fe64f3994d5723ec84f424cf40962332adc2239e608ea2e4954548d4ed25
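When deciding which fsid directory to delete, it helps to pull the live cluster's fsid out of /etc/ceph/ceph.conf and keep that one. A small sketch of the extraction (the sample file below is written to a hypothetical /tmp path so it can run anywhere; on a real node point awk at /etc/ceph/ceph.conf):

```shell
# Write a sample minimal ceph.conf, then extract its fsid with awk.
cat > /tmp/sample-ceph.conf <<'EOF'
# minimal ceph.conf for e015acc8-db34-11ec-b1ba-34735a9b2994
[global]
fsid = e015acc8-db34-11ec-b1ba-34735a9b2994
mon_host = [v2:172.16.1.81:3300/0,v1:172.16.1.81:6789/0]
EOF
# Split on " = " and print the value of the fsid line.
awk -F' = ' '/^fsid/ {print $2}' /tmp/sample-ceph.conf
```

Any fsid directory under /var/lib/ceph that does not match this value belongs to a stale cluster.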

 

Inspect the running containers with podman ps:

[root@node-1 .ssh]# podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f811f8be3ef9 quay.io/ceph/ceph:v17 -n mon.node-... About an hour ago Up About an hour ago ceph-a9d511ee-e570-11ec-9535-34735a9b2994-mon-node-1
8efcbead499b quay.io/ceph/ceph:v17 -n mgr.node-... About an hour ago Up About an hour ago ceph-a9d511ee-e570-11ec-9535-34735a9b2994-mgr-node-1-ygtqie
bcff496ea3e7 quay.io/ceph/ceph@sha256:629fda3151cdca8163e2b8b47af23c69b41b829689d9e085bbd2539c046b0eac -n client.crash.c... About an hour ago Up About an hour ago ceph-a9d511ee-e570-11ec-9535-34735a9b2994-crash-node-1
f0e9e4902f01 quay.io/prometheus/node-exporter:v1.3.1 --no-collector.ti... About an hour ago Up About an hour ago ceph-a9d511ee-e570-11ec-9535-34735a9b2994-node-exporter-node-1
fdbbbf76fb8e quay.io/ceph/ceph-grafana:8.3.5 /bin/bash About an hour ago Up About an hour ago ceph-a9d511ee-e570-11ec-9535-34735a9b2994-grafana-node-1
0659d796b9d3 quay.io/prometheus/alertmanager:v0.23.0 --cluster.listen-... 6 minutes ago Up 6 minutes ago ceph-a9d511ee-e570-11ec-9535-34735a9b2994-alertmanager-node-1
0fbb8494a4c7 quay.io/prometheus/prometheus:v2.33.4 --config.file=/et... 6 minutes ago Up 6 minutes ago ceph-a9d511ee-e570-11ec-9535-34735a9b2994-prometheus-node-1
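The container names above follow the pattern ceph-<fsid>-<daemon>-<host>. A sketch of pulling the daemon type back out of such names with sed (the sample names are inlined from the listing above, so no running podman is assumed):

```shell
# Extract the daemon type (mon, mgr, crash, ...) from cephadm container names.
names='ceph-a9d511ee-e570-11ec-9535-34735a9b2994-mon-node-1
ceph-a9d511ee-e570-11ec-9535-34735a9b2994-mgr-node-1-ygtqie
ceph-a9d511ee-e570-11ec-9535-34735a9b2994-crash-node-1'
# The fsid is all hex digits and hyphens, so strip it and keep the daemon name.
echo "$names" | sed -E 's/^ceph-[0-9a-f-]+-(mon|mgr|crash|osd)-.*/\1/'
```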

All of the installation steps above are performed on every node.

From the primary node, install the cluster's SSH key on the other nodes:

[root@node-1 .ssh]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@node-2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/etc/ceph/ceph.pub"
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'root@node-2'"
and check to make sure that only the key(s) you wanted were added.

Add the other nodes to the cluster:

[root@node-1 .ssh]# ceph orch host add node-2 172.16.1.82
Added host 'node-2' with addr '172.16.1.82'
[root@node-1 .ssh]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@node-3
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/etc/ceph/ceph.pub"
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'root@node-3'"
and check to make sure that only the key(s) you wanted were added.
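The per-node key copy and host registration can be scripted in one loop. A dry-run sketch that only prints the commands (remove the echo to actually execute them; node names and IPs are the ones used in this article):

```shell
# Dry run: generate the key-distribution and host-add commands for node-2/node-3.
for i in 2 3; do
  echo "ssh-copy-id -f -i /etc/ceph/ceph.pub root@node-$i"
  echo "ceph orch host add node-$i 172.16.1.8$i"
done
```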

Add the OSDs:

[root@node-1 ~]# ceph orch daemon add osd node-1:/dev/sdb
Created osd(s) 0 on host 'node-1'
[root@node-1 ~]# ceph orch daemon add osd node-2:/dev/sdb
Created osd(s) 1 on host 'node-2'
[root@node-1 ~]# ceph orch daemon add osd node-3:/dev/sdb
Created osd(s) 2 on host 'node-3'
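Adding the same device on every host can likewise be looped. A dry-run sketch that prints one command per host (drop the echo to execute; /dev/sdb is the device used throughout this article):

```shell
# Dry run: one "ceph orch daemon add osd" command per host for /dev/sdb.
for host in node-1 node-2 node-3; do
  echo "ceph orch daemon add osd $host:/dev/sdb"
done
```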

If the following error appears, the device was already an OSD in an old cluster:

[root@node-1 ~]# ceph orch daemon add osd node-1:/dev/sdb
Created no osd(s) on host node-1; already created?

The disks need to be zapped; run this separately on each host:

[root@node-1 ~]# cephadm zap-osds --fsid a9d511ee-e570-11ec-9535-34735a9b2994 --force
Using ceph image with id '132aa60d26c9' and tag 'v17' created on 2022-05-10 13:18:50 +0000 UTC
quay.io/ceph/ceph@sha256:629fda3151cdca8163e2b8b47af23c69b41b829689d9e085bbd2539c046b0eac
Zapping /dev/sdb...
[root@node-1 ~]# ceph orch daemon add osd node-1:/dev/sdb
Created osd(s) 0 on host 'node-1'

The OSD is now added successfully.

Once the OSDs are added, they show up on the dashboard.

 

Attached below is the client installation script; it is much the same as the primary-node setup:

# cat CephClientInstall.sh 
#!/bin/bash

MyIP=$(ifconfig ens1f0 |grep 10.8.1 |awk '{print $2}')
MyHostname=$(hostname)
LogFile="./CephClientInstall.log"


ColorOK="%-100s\033[32m%s\033[0m\n"
ColorError="%-100s\033[31m%s\033[0m\n"
ColorChapter="\033[33m%s\033[0m\n"
ColorCmdError="\033[31m%s\033[0m\n"

printf $ColorChapter "IP=$MyIP"
printf $ColorChapter "HOSTNAME=$MyHostname"


printf $ColorChapter "[Confirm] Please enter \"yes\" or \"no\""
read YesOrNo

if [[ $YesOrNo = "yes" ]]
then
    printf $ColorChapter "[Confirmed] Starting Ceph installation"
else
    printf $ColorChapter "[Rejected] Aborting Ceph installation"
    exit 1
fi



function CheckStatus(){
    ServiceName=$1
    ServiceStatus=$(systemctl status $1 |grep Active |awk '{print $2}')
    if [[ $ServiceStatus == "active" ]]
    then
        printf $ColorOK "Check service status: [$ServiceName]" " [running]"
        return 0
    else
        printf $ColorError "Check service status: [$ServiceName]" " [not running]"
        return 1
    fi
}

function InstallService(){
    ServiceName=$1
    yum list installed $ServiceName  >> $LogFile 2>&1
    if [[ $? == 0 ]]
    then
        printf $ColorOK "Check whether service is installed: [$ServiceName]" "    [already installed]"
        return 0
    else
        yum install -y $1  >> $LogFile 2>&1
        if [[ $? == 0 ]]
        then
            printf $ColorOK "Install service: [$ServiceName]" "[install succeeded]"
        else
            printf $ColorError "Install service: [$ServiceName]" "[install failed]"
            exit 1
        fi
    fi
}



function StartService(){
    ServiceName=$1
    CheckStatus $ServiceName
    if [[ $? != 0 ]]
    then
        systemctl start $ServiceName >>$LogFile 2>&1
        if [[ $? == 0 ]]
        then
            printf $ColorOK "Start service [$ServiceName]" "[succeeded]"
            return 0
        else
            printf $ColorError "Start service [$ServiceName]" "[failed]"
            exit 1
        fi
    fi
}

function DoSomething(){
    cmd=$1
    $cmd  >>$LogFile 2>&1
    if [[ $? != 0 ]]
    then
        printf $ColorError "Command execution" "[failed]"
        printf $ColorCmdError "Command: [$cmd]"
        exit 1
    fi
}

printf $ColorChapter "[1] Upgrade the system"
#DoSomething "setenforce 0" # Best to set SELinux to disabled in advance and reboot the server, to avoid assorted problems
#DoSomething "scp node-1:/etc/selinux/config /etc/selinux/config"
DoSomething "yum update -y"

InstallService "chrony"
StartService "chronyd"


printf $ColorChapter "[2] Install python3, podman, lvm2"
InstallService "python3"
InstallService "podman"
InstallService "lvm2"

printf $ColorChapter "[3] Install Ceph"
DoSomething "scp node-1:/root/Client/ceph/cephadm /usr/bin"  # downloaded in advance with: wget -O /root/Client/ceph/cephadm https://github.com/ceph/ceph/raw/quincy/src/cephadm/cephadm
DoSomething "cephadm add-repo --release quincy"
DoSomething "scp node-1:/etc/yum.repos.d/ceph.repo /etc/yum.repos.d/ceph.repo"
DoSomething "cephadm install ceph-common"

printf $ColorChapter "[4] Configure from the primary node and add the OSD"
DoSomething "ssh node-1 ssh-copy-id -f -i /etc/ceph/ceph.pub root@$MyHostname"
DoSomething "ssh node-1 ceph orch host add $MyHostname $MyIP"
DoSomething "ssh node-1 ceph orch daemon add osd $MyHostname:/dev/sdb" 

printf $ColorChapter "[5] Configuration complete"
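The script's two core helpers can be exercised standalone, without systemd services or a live cluster. A small sketch (the systemctl output and the log path below are inlined samples, not from a real host) of the Active-line parsing used by CheckStatus and the log-and-fail pattern used by DoSomething:

```shell
# CheckStatus parses the "Active:" line of systemctl output; reproduce that
# grep/awk pipeline against an inlined sample instead of a live service.
sample='chronyd.service - NTP client/server
   Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled)
   Active: active (running) since Tue 2022-05-24 16:00:00 CST'
status=$(echo "$sample" | grep Active | awk '{print $2}')
echo "status=$status"

# DoSomething runs a command, appends its output to a log, and fails fast.
# Using "$@" instead of an unquoted $cmd keeps arguments with spaces intact.
LogFile=/tmp/CephClientInstall-demo.log
: > "$LogFile"
Demo_DoSomething(){
    "$@" >>"$LogFile" 2>&1
    if [ $? -ne 0 ]; then
        echo "command failed: $*"
        exit 1
    fi
}
Demo_DoSomething echo "hello ceph"
grep -q "hello ceph" "$LogFile" && echo "logged ok"
```

Note that the original DoSomething passes the whole command as a single word-split string ($cmd), which breaks on quoted arguments; the "$@" form above avoids that.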

 

posted @ 2023-03-20 14:01 by 苦逼挨踢男