Ceph Operations - File Storage

一 Abstract

This article covers deploying a CephFS (Ceph file storage) client on CentOS 8.1.

二 Environment

(一) Ceph server information

Ceph: 14.2.15

[root@ceph001 ~]# ceph version
ceph version 14.2.15 (afdd217ae5fb1ed3f60e16bd62357ca58cc650e5) nautilus (stable)
[root@ceph001 ~]#

OS: CentOS 7.6

[root@ceph001 ~]# cat /etc/centos-release
CentOS Linux release 7.6.1810 (Core)
[root@ceph001 ~]# uname -a
Linux ceph001 3.10.0-957.el7.x86_64 #1 SMP Thu Nov 8 23:39:32 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
[root@ceph001 ~]#

(二) Ceph client information

Ceph: 14.2.15
OS: CentOS 8.1
[root@cephclient ~]# ceph --version
ceph version 14.2.15 (afdd217ae5fb1ed3f60e16bd62357ca58cc650e5) nautilus (stable)
[root@cephclient ~]# cat /etc/centos-release
CentOS Linux release 8.1.1911 (Core)
[root@cephclient ~]# uname -a
Linux cephclient.novalocal 4.18.0-147.el8.x86_64 #1 SMP Wed Dec 4 21:51:45 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
[root@cephclient ~]#

三 File Storage Operations

(一) Deploy CephFS

3.1.1 Deploy CephFS with ceph-deploy

Run on the deployment node as the cephadmin user:

[cephadmin@ceph001 cephcluster]$ ceph-deploy mds create ceph001 ceph002 ceph003
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy mds create ceph001 ceph002 ceph003
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fe39399da28>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mds at 0x7fe393befe60>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  mds                           : [('ceph001', 'ceph001'), ('ceph002', 'ceph002'), ('ceph003', 'ceph003')]
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mds][DEBUG ] Deploying mds, cluster ceph hosts ceph001:ceph001 ceph002:ceph002 ceph003:ceph003

After the deployment succeeds, output like the following can be seen for each node:

[ceph001][INFO  ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.ceph001 osd allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-ceph001/keyring
[ceph001][INFO  ] Running command: sudo systemctl enable ceph-mds@ceph001
[ceph001][WARNIN] Created symlink from /etc/systemd/system/ceph-mds.target.wants/ceph-mds@ceph001.service to /usr/lib/systemd/system/ceph-mds@.service.
[ceph001][INFO  ] Running command: sudo systemctl start ceph-mds@ceph001
[ceph001][INFO  ] Running command: sudo systemctl enable ceph.target
[ceph002][DEBUG ] connection detected need for sudo
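
As a quick verification (not captured in the original log), the MDS service status can be checked on each node, and the cluster should report the new daemons as standby; the hostnames below follow the cluster above.

[cephadmin@ceph001 cephcluster]$ sudo systemctl status ceph-mds@ceph001
[cephadmin@ceph001 cephcluster]$ ceph mds stat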

3.1.2 Create the CephFS pools

Check whether the OSD disks are HDDs or SSDs; if performance requirements are high, SSDs can be used for the metadata pool (see the sketch after the osd tree output below).


[root@ceph001 ~]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME        STATUS REWEIGHT PRI-AFF
-1       0.14639 root default
-3       0.04880     host ceph001
 0   hdd 0.04880         osd.0        up  1.00000 1.00000
-5       0.04880     host ceph002
 1   hdd 0.04880         osd.1        up  1.00000 1.00000
-7       0.04880     host ceph003
 2   hdd 0.04880         osd.2        up  1.00000 1.00000
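
This cluster only has HDD OSDs, so both pools will land on HDDs. As a rough sketch of the SSD option mentioned above: if SSD OSDs were present (device class ssd in ceph osd tree), a dedicated CRUSH rule could be created and applied to the metadata pool once it exists. The rule name ssd_rule is just an example.

ceph osd crush rule create-replicated ssd_rule default host ssd    # replicated rule restricted to the ssd device class
ceph osd pool set cephfs_metadata crush_rule ssd_rule              # move the metadata pool onto that rule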

Create the data pool and the metadata pool separately. Note that I used both the root user and the cephadmin user here; ideally only the cephadmin user should be used.

[root@ceph001 ~]# ceph osd pool create cephfs_data 64
pool 'cephfs_data' created
[root@ceph001 ~]# ceph -s
  cluster:
    id:     69002794-cf45-49fa-8849-faadae48544f
    health: HEALTH_WARN
            application not enabled on 1 pool(s)

  services:
    mon: 3 daemons, quorum ceph001,ceph002,ceph003 (age 21h)
    mgr: ceph002(active, since 20h), standbys: ceph003, ceph001
    mds:  3 up:standby
    osd: 3 osds: 3 up (since 21h), 3 in (since 21h)

  data:
    pools:   2 pools, 128 pgs
    objects: 42 objects, 116 MiB
    usage:   3.4 GiB used, 147 GiB / 150 GiB avail
    pgs:     128 active+clean

[root@ceph001 ~]# su - cephadmin
Last login: Tue Dec  1 14:16:31 CST 2020 on pts/0
[cephadmin@ceph001 ~]$ ceph osd pool create cephfs_metadata 64
pool 'cephfs_metadata' created
[cephadmin@ceph001 ~]$
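
The HEALTH_WARN "application not enabled on 1 pool(s)" shown above is expected at this point: the freshly created cephfs_data pool carries no application tag yet. It clears once ceph fs new associates the pools with CephFS in the next step; if it lingers, the tag can also be set by hand, roughly as follows.

ceph osd pool application enable cephfs_data cephfs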

3.1.3 Create the file system

Command format: ceph fs new <file system name> <metadata pool> <data pool>

ceph fs new <fs_name> <metadata_pool> <data_pool>

$ ceph fs new cephfs cephfs_metadata cephfs_data # create the file system

[cephadmin@ceph001 cephcluster]$ ceph fs new cephfs cephfs_metadata cephfs_data
new fs with metadata pool 3 and data pool 2
[cephadmin@ceph001 cephcluster]$

3.1.4 Verify

[cephadmin@ceph001 cephcluster]$ ceph mds stat
cephfs:1 {0=ceph003=up:active} 2 up:standby
[cephadmin@ceph001 cephcluster]$ ceph osd pool ls
rbd
cephfs_data
cephfs_metadata
[cephadmin@ceph001 cephcluster]$ ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
[cephadmin@ceph001 cephcluster]$
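
For a more detailed view (active MDS ranks, standbys, and per-pool usage), ceph fs status can also be run; its output was not captured in the original session, so only the command is shown here.

[cephadmin@ceph001 cephcluster]$ ceph fs status cephfs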

3.1.5 Create a user

Create a user (optional, since keys were already generated during deployment, but we prefer to define a dedicated, non-admin account).

Run the following command in the /home/cephadmin/cephcluster directory to generate ceph.client.cephfs.keyring:

[cephadmin@ceph001 cephcluster]$ pwd
/home/cephadmin/cephcluster
[cephadmin@ceph001 cephcluster]$ ceph auth get-or-create client.cephfs mon 'allow r' mds 'allow r, allow rw path=/' osd 'allow rw pool=cephfs_data' -o ceph.client.cephfs.keyring
[cephadmin@ceph001 cephcluster]$ ll
total 156
-rw------- 1 cephadmin cephadmin    113 Nov 30 17:17 ceph.bootstrap-mds.keyring
-rw------- 1 cephadmin cephadmin    113 Nov 30 17:17 ceph.bootstrap-mgr.keyring
-rw------- 1 cephadmin cephadmin    113 Nov 30 17:17 ceph.bootstrap-osd.keyring
-rw------- 1 cephadmin cephadmin    113 Nov 30 17:17 ceph.bootstrap-rgw.keyring
-rw------- 1 cephadmin cephadmin    151 Nov 30 17:17 ceph.client.admin.keyring
-rw-rw-r-- 1 cephadmin cephadmin     64 Dec  1 15:11 ceph.client.cephfs.keyring
-rw-rw-r-- 1 cephadmin cephadmin     61 Dec  1 09:45 ceph.client.rbd.keyring
-rw-rw-r-- 1 cephadmin cephadmin    313 Nov 30 17:09 ceph.conf
-rw-rw-r-- 1 cephadmin cephadmin    247 Nov 30 17:00 ceph.conf.bak.orig
-rw-rw-r-- 1 cephadmin cephadmin 115251 Dec  1 14:16 ceph-deploy-ceph.log
-rw------- 1 cephadmin cephadmin     73 Nov 30 16:50 ceph.mon.keyring
[cephadmin@ceph001 cephcluster]$
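
The capabilities granted to the new user can be double-checked with ceph auth get (a verification step added here, not part of the original log):

[cephadmin@ceph001 cephcluster]$ ceph auth get client.cephfs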

(二) Mount CephFS

A client can mount CephFS in two ways: with the Linux kernel driver, or with ceph-fuse.
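
Only the ceph-fuse method is demonstrated below. For reference, a kernel-driver mount would look roughly like the following sketch; it assumes the client.cephfs user created in 3.1.5, and /etc/ceph/cephfs.secret is a placeholder file containing only the key value (not the whole keyring).

ceph auth get-key client.cephfs > /etc/ceph/cephfs.secret    # on a node with the admin keyring, then copy the file to the client
mkdir -p /mnt/cephfs
mount -t ceph ceph001:6789:/ /mnt/cephfs -o name=cephfs,secretfile=/etc/ceph/cephfs.secret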

3.2.1 Mount CephFS with ceph-fuse

First install ceph-fuse; for configuring yum and related preparation, refer to the previous article in this series, Ceph Operations - Block Storage.

3.2.1.1 Download and install the ceph-fuse package

Download:

[root@cephclient ~]# yum -y install --downloadonly --downloaddir=/root/software/ceph-fusecentos8/ ceph-fuse

Install:

[root@cephclient ceph-fusecentos8]# yum -y install  ceph-fuse

3.2.1.2 Mount the directory

First copy the generated keyring from the server to the client:

[cephadmin@ceph001 cephcluster]$ pwd
/home/cephadmin/cephcluster
[cephadmin@ceph001 cephcluster]$ scp ceph.client.cephfs.keyring root@172.31.185.211:/etc/ceph/
root@172.31.185.211's password:
ceph.client.cephfs.keyring                                                                                   100%   64    13.1KB/s   00:00
[cephadmin@ceph001 cephcluster]$

Also copy ceph.conf to /etc/ceph/ on the client.

It was already copied to this machine earlier, so that step is omitted here.

Run the mount on the client:

[root@cephclient ~]# mkdir /mnt/cephfs
[root@cephclient ~]# ceph-fuse --keyring /etc/ceph/ceph.client.cephfs.keyring --name client.cephfs -m ceph001:6789 /mnt/cephfs
ceph-fuse[24671]: starting ceph client
2020-12-01 18:02:34.065 7ff1b6c121c0 -1 init, newargv = 0x55d0027051b0 newargc=9
ceph-fuse[24671]: starting fuse
[root@cephclient ~]#
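
After the mount succeeds, it can be verified with a quick check (not in the original output):

[root@cephclient ~]# df -h /mnt/cephfs
[root@cephclient ~]# stat -f /mnt/cephfs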

Configure automatic mounting at boot by adding the following entry to /etc/fstab:


none /mnt/cephfs fuse.ceph ceph.id=cephfs,_netdev,defaults 0 0
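
To test the fstab entry without rebooting, unmount the directory and remount it from /etc/fstab (a quick check, run as root on the client):

[root@cephclient ~]# umount /mnt/cephfs
[root@cephclient ~]# mount /mnt/cephfs
[root@cephclient ~]# df -h /mnt/cephfs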
