Distributed Storage with Ceph (Part 6): Using CephFS

Deploying the MDS Service

Install ceph-mds

root@ceph-mgr-01:~# apt -y install ceph-mds

Create the MDS service

ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy mds create ceph-mgr-01
[ceph_deploy.conf][DEBUG ] found configuration file at: /var/lib/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mds create ceph-mgr-01
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f82357dae60>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mds at 0x7f82357b9350>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  mds                           : [('ceph-mgr-01', 'ceph-mgr-01')]
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mds][DEBUG ] Deploying mds, cluster ceph hosts ceph-mgr-01:ceph-mgr-01
ceph@ceph-mgr-01's password: 
[ceph-mgr-01][DEBUG ] connection detected need for sudo
ceph@ceph-mgr-01's password: 
sudo: unable to resolve host ceph-mgr-01
[ceph-mgr-01][DEBUG ] connected to host: ceph-mgr-01 
[ceph-mgr-01][DEBUG ] detect platform information from remote host
[ceph-mgr-01][DEBUG ] detect machine type
[ceph_deploy.mds][INFO  ] Distro info: Ubuntu 18.04 bionic
[ceph_deploy.mds][DEBUG ] remote host will use systemd
[ceph_deploy.mds][DEBUG ] deploying mds bootstrap to ceph-mgr-01
[ceph-mgr-01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-mgr-01][WARNIN] mds keyring does not exist yet, creating one
[ceph-mgr-01][DEBUG ] create a keyring file
[ceph-mgr-01][DEBUG ] create path if it doesn't exist
[ceph-mgr-01][INFO  ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.ceph-mgr-01 osd allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-ceph-mgr-01/keyring
[ceph-mgr-01][INFO  ] Running command: sudo systemctl enable ceph-mds@ceph-mgr-01
[ceph-mgr-01][WARNIN] Created symlink /etc/systemd/system/ceph-mds.target.wants/ceph-mds@ceph-mgr-01.service → /lib/systemd/system/ceph-mds@.service.
[ceph-mgr-01][INFO  ] Running command: sudo systemctl start ceph-mds@ceph-mgr-01
[ceph-mgr-01][INFO  ] Running command: sudo systemctl enable ceph.target
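
A quick post-deployment check (a minimal sketch; the unit name follows the ceph-mds@<hostname> pattern from the log above, and the daemon reports as standby until a filesystem exists):
root@ceph-mgr-01:~# systemctl status ceph-mds@ceph-mgr-01
ceph@ceph-deploy:~/ceph-cluster$ ceph mds stat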

Creating the CephFS Metadata and Data Pools

Create the metadata pool

ceph@ceph-deploy:~/ceph-cluster$ ceph osd pool create cephfs-metadata 32 32
pool 'cephfs-metadata' created

Create the data pool

ceph@ceph-deploy:~/ceph-cluster$ ceph osd pool create cephfs-data 64 64
pool 'cephfs-data' created
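
To confirm both pools and their PG counts before building the filesystem, a minimal verification sketch:
ceph@ceph-deploy:~/ceph-cluster$ ceph osd pool ls detail | grep cephfs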

Check the Ceph cluster status

ceph@ceph-deploy:~/ceph-cluster$ ceph -s 
  cluster:
    id:     6e521054-1532-4bc8-9971-7f8ae93e8430
    health: HEALTH_WARN
            3 daemons have recently crashed
 
  services:
    mon: 3 daemons, quorum ceph-mon-01,ceph-mon-02,ceph-mon-03 (age 7m)
    mgr: ceph-mgr-01(active, since 16m), standbys: ceph-mgr-02
    mds: 1/1 daemons up
    osd: 9 osds: 9 up (since 44h), 9 in (since 44h)
 
  data:
    volumes: 1/1 healthy
    pools:   4 pools, 161 pgs
    objects: 43 objects, 24 MiB
    usage:   1.4 GiB used, 179 GiB / 180 GiB avail
    pgs:     161 active+clean

Creating the CephFS Filesystem

Command format for creating a CephFS filesystem

ceph@ceph-deploy:~/ceph-cluster$ ceph   fs new  -h

 General usage: 

usage: ceph [-h] [-c CEPHCONF] [-i INPUT_FILE] [-o OUTPUT_FILE]
            [--setuser SETUSER] [--setgroup SETGROUP] [--id CLIENT_ID]
            [--name CLIENT_NAME] [--cluster CLUSTER]
            [--admin-daemon ADMIN_SOCKET] [-s] [-w] [--watch-debug]
            [--watch-info] [--watch-sec] [--watch-warn] [--watch-error]
            [-W WATCH_CHANNEL] [--version] [--verbose] [--concise]
            [-f {json,json-pretty,xml,xml-pretty,plain,yaml}]
            [--connect-timeout CLUSTER_TIMEOUT] [--block] [--period PERIOD]

Ceph administration tool

optional arguments:
  -h, --help            request mon help
  -c CEPHCONF, --conf CEPHCONF
                        ceph configuration file
  -i INPUT_FILE, --in-file INPUT_FILE
                        input file, or "-" for stdin
  -o OUTPUT_FILE, --out-file OUTPUT_FILE
                        output file, or "-" for stdout
  --setuser SETUSER     set user file permission
  --setgroup SETGROUP   set group file permission
  --id CLIENT_ID, --user CLIENT_ID
                        client id for authentication
  --name CLIENT_NAME, -n CLIENT_NAME
                        client name for authentication
  --cluster CLUSTER     cluster name
  --admin-daemon ADMIN_SOCKET
                        submit admin-socket commands ("help" for help)
  -s, --status          show cluster status
  -w, --watch           watch live cluster changes
  --watch-debug         watch debug events
  --watch-info          watch info events
  --watch-sec           watch security events
  --watch-warn          watch warn events
  --watch-error         watch error events
  -W WATCH_CHANNEL, --watch-channel WATCH_CHANNEL
                        watch live cluster changes on a specific channel
                        (e.g., cluster, audit, cephadm, or '*' for all)
  --version, -v         display version
  --verbose             make verbose
  --concise             make less verbose
  -f {json,json-pretty,xml,xml-pretty,plain,yaml}, --format {json,json-pretty,xml,xml-pretty,plain,yaml}
  --connect-timeout CLUSTER_TIMEOUT
                        set a timeout for connecting to the cluster
  --block               block until completion (scrub and deep-scrub only)
  --period PERIOD, -p PERIOD
                        polling period, default 1.0 second (for polling
                        commands only)

 Local commands: 
 

ping <mon.id>           Send simple presence/life test to a mon
                        <mon.id> may be 'mon.*' for all mons
daemon {type.id|path} <cmd>
                        Same as --admin-daemon, but auto-find admin socket
daemonperf {type.id | path} [stat-pats] [priority] [<interval>] [<count>]
daemonperf {type.id | path} list|ls [stat-pats] [priority]
                        Get selected perf stats from daemon/admin socket
                        Optional shell-glob comma-delim match string stat-pats
                        Optional selection priority (can abbreviate name):
                         critical, interesting, useful, noninteresting, debug
                        List shows a table of all available stats
                        Run <count> times (default forever),
                         once per <interval> seconds (default 1)
    

 Monitor commands: 
 
fs new <fs_name> <metadata> <data> [--force] [--allow-dangerous-metadata-overlay]                       make new filesystem using named pools <metadata> and <data>

Create the CephFS filesystem

  • A given data pool can only be used by one CephFS filesystem.

ceph@ceph-deploy:~/ceph-cluster$ ceph fs new wgscephfs cephfs-metadata cephfs-data
new fs with metadata pool 7 and data pool 8

View the created CephFS filesystem

ceph@ceph-deploy:~/ceph-cluster$ ceph fs ls
name: wgscephfs, metadata pool: cephfs-metadata, data pools: [cephfs-data ]

Check the status of a specific CephFS filesystem

ceph@ceph-deploy:~/ceph-cluster$ ceph fs status wgscephfs
wgscephfs - 0 clients
=========
RANK  STATE       MDS         ACTIVITY     DNS    INOS   DIRS   CAPS  
 0    active  ceph-mgr-01  Reqs:    0 /s    10     13     12      0   
      POOL         TYPE     USED  AVAIL  
cephfs-metadata  metadata  96.0k  56.2G  
  cephfs-data      data       0   56.2G  
MDS version: ceph version 16.2.6 (ee28fb57e47e9f88813e24bbf4c14496ca299d31) pacific (stable)

Enabling Multiple Filesystems

  • Each CephFS filesystem you create needs its own new data pool (a sketch of creating a second filesystem follows the command below).

ceph@ceph-deploy:~/ceph-cluster$ ceph fs flag set enable_multiple true
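
With the flag enabled, each additional filesystem is built from its own pools. A hedged sketch using the names that appear later in this post (wgscephfs01, cephfs-metadata01, cephfs-data02):
ceph@ceph-deploy:~/ceph-cluster$ ceph osd pool create cephfs-metadata01 32 32
ceph@ceph-deploy:~/ceph-cluster$ ceph osd pool create cephfs-data02 64 64
ceph@ceph-deploy:~/ceph-cluster$ ceph fs new wgscephfs01 cephfs-metadata01 cephfs-data02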

Verify the CephFS service status

ceph@ceph-deploy:~/ceph-cluster$ ceph mds stat
wgscephfs:1 {0=ceph-mgr-01=up:active}

Creating a Client Account

Create the account

ceph@ceph-deploy:~/ceph-cluster$ ceph auth add client.wgs mon 'allow rw' mds 'allow rw' osd 'allow rwx pool=cephfs-data'
added key for client.wgs
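
As an alternative to ceph auth add, recent releases can generate a client capability scoped to one filesystem with ceph fs authorize; a hedged sketch, not the method used in the rest of this post:
ceph@ceph-deploy:~/ceph-cluster$ ceph fs authorize wgscephfs client.wgs / rw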

Verify the account information

ceph@ceph-deploy:~/ceph-cluster$ ceph auth get client.wgs
[client.wgs]
        key = AQCrhk5htve9AxAAED3UAwf2P/5YFjBPVoNayw==
        caps mds = "allow rw"
        caps mon = "allow rw"
        caps osd = "allow rwx pool=cephfs-data"
exported keyring for client.wgs

Create the user keyring file

ceph@ceph-deploy:~/ceph-cluster$ ceph auth get client.wgs -o ceph.client.wgs.keyring
exported keyring for client.wgs

Create the key file

ceph@ceph-deploy:~/ceph-cluster$ ceph auth print-key client.wgs > wgs.key
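
Both files contain the secret, so tightening their permissions before copying them out is a reasonable precaution (a minimal sketch):
ceph@ceph-deploy:~/ceph-cluster$ chmod 600 ceph.client.wgs.keyring wgs.key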

Verify the user keyring file

ceph@ceph-deploy:~/ceph-cluster$ cat ceph.client.wgs.keyring 
[client.wgs]
        key = AQCrhk5htve9AxAAED3UAwf2P/5YFjBPVoNayw==
        caps mds = "allow rw"
        caps mon = "allow rw"
        caps osd = "allow rwx pool=cephfs-data"

Installing the Ceph Client

Client ceph-client-centos7-01

Configure the repository

[root@ceph-client-centos7-01 ~]# yum -y install epel-release
[root@ceph-client-centos7-01 ~]# yum -y install https://mirrors.aliyun.com/ceph/rpm-octopus/el7/noarch/ceph-release-1-1.el7.noarch.rpm

Install ceph-common

[root@ceph-client-centos7-01 ~]# yum -y install ceph-common

Client ceph-client-ubuntu20.04-01

Configure the repository

root@ceph-client-ubuntu20.04-01:~# wget -q -O- 'https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc' | sudo apt-key add -
OK
root@ceph-client-ubuntu20.04-01:~# echo "deb https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific $(lsb_release -cs) main" >> /etc/apt/sources.list
root@ceph-client-ubuntu20.04-01:~# apt -y update && apt -y upgrade

Install ceph-common

root@ceph-client-ubuntu20.04-01:~# apt -y install ceph-common

Distribute the client authentication files

ceph@ceph-deploy:~/ceph-cluster$ scp ceph.conf  ceph.client.wgs.keyring wgs.key  root@ceph-client-ubuntu20.04-01:/etc/ceph
ceph@ceph-deploy:~/ceph-cluster$ scp ceph.conf  ceph.client.wgs.keyring wgs.key  root@ceph-client-centos7-01:/etc/ceph

Verifying Client Permissions

Client ceph-client-centos7-01

[root@ceph-client-centos7-01 ~]# ceph --id wgs -s
  cluster:
    id:     6e521054-1532-4bc8-9971-7f8ae93e8430
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph-mon-01,ceph-mon-02,ceph-mon-03 (age 6h)
    mgr: ceph-mgr-01(active, since 17h), standbys: ceph-mgr-02
    mds: 1/1 daemons up
    osd: 9 osds: 9 up (since 2d), 9 in (since 2d)
 
  data:
    volumes: 1/1 healthy
    pools:   4 pools, 161 pgs
    objects: 43 objects, 24 MiB
    usage:   1.4 GiB used, 179 GiB / 180 GiB avail
    pgs:     161 active+clean

Client ceph-client-ubuntu20.04-01

root@ceph-client-ubuntu20.04-01:~# ceph --id wgs -s
  cluster:
    id:     6e521054-1532-4bc8-9971-7f8ae93e8430
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph-mon-01,ceph-mon-02,ceph-mon-03 (age 6h)
    mgr: ceph-mgr-01(active, since 17h), standbys: ceph-mgr-02
    mds: 1/1 daemons up
    osd: 9 osds: 9 up (since 2d), 9 in (since 2d)
 
  data:
    volumes: 1/1 healthy
    pools:   4 pools, 161 pgs
    objects: 43 objects, 24 MiB
    usage:   1.4 GiB used, 179 GiB / 180 GiB avail
    pgs:     161 active+clean

Mounting CephFS in Kernel Space (Recommended)

Verify that the clients can mount CephFS

Verify client ceph-client-centos7-01

[root@ceph-client-centos7-01 ~]# stat /sbin/mount.ceph
  File: ‘/sbin/mount.ceph’
  Size: 195512          Blocks: 384        IO Block: 4096   regular file
Device: fd01h/64769d    Inode: 51110858    Links: 1
Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2021-09-25 11:52:07.544069156 +0800
Modify: 2021-08-06 01:48:44.000000000 +0800
Change: 2021-09-23 13:57:21.674953501 +0800
 Birth: -

Verify client ceph-client-ubuntu20.04-01

root@ceph-client-ubuntu20.04-01:~# stat /sbin/mount.ceph
  File: /sbin/mount.ceph
  Size: 260520          Blocks: 512        IO Block: 4096   regular file
Device: fc02h/64514d    Inode: 402320      Links: 1
Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2021-09-25 11:54:38.642951083 +0800
Modify: 2021-09-16 22:38:17.000000000 +0800
Change: 2021-09-22 18:01:23.708934550 +0800
 Birth: -
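
Besides the mount.ceph helper, the kernel ceph module must be available on the client; a quick check (a minimal sketch, the same on both clients):
root@ceph-client-ubuntu20.04-01:~# modprobe ceph && lsmod | grep '^ceph'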

Mounting CephFS on the Client with a Key File

Command format for mounting CephFS with a key file (recommended)

mount -t ceph {mon01:socket},{mon02:socket},{mon03:socket}:/ {mount-point}  -o name={name},secretfile={key_path}
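
When more than one CephFS filesystem exists, the target filesystem can be selected with a mount option; a hedged sketch (mds_namespace on older kernels, fs= on newer ones):
mount -t ceph {mon01:socket},{mon02:socket},{mon03:socket}:/ {mount-point}  -o name={name},secretfile={key_path},mds_namespace={fs_name}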

Mount CephFS on client ceph-client-centos7-01

[root@ceph-client-centos7-01 ~]# mkdir /data/cephfs-data
[root@ceph-client-centos7-01 ~]# mount -t ceph 172.16.10.148:6789,172.16.10.110:6789,172.16.10.182:6789:/ /data/cephfs-data -o name=wgs,secretfile=/etc/ceph/wgs.key
[root@ceph-client-centos7-01 ~]# df -TH
Filesystem                                                 Type      Size  Used Avail Use% Mounted on
devtmpfs                                                   devtmpfs  2.0G     0  2.0G   0% /dev
tmpfs                                                      tmpfs     2.0G     0  2.0G   0% /dev/shm
tmpfs                                                      tmpfs     2.0G  194M  1.9G  10% /run
tmpfs                                                      tmpfs     2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/vda1                                                  xfs        22G  5.9G   16G  28% /
/dev/vdb                                                   xfs       215G  9.1G  206G   5% /data
tmpfs                                                      tmpfs     399M     0  399M   0% /run/user/1000
tmpfs                                                      tmpfs     399M     0  399M   0% /run/user/1003
172.16.10.148:6789,172.16.10.110:6789,172.16.10.182:6789:/ ceph       61G     0   61G   0% /data/cephfs-data
[root@ceph-client-centos7-01 ~]# stat -f /data/cephfs-data/
  File: "/data/cephfs-data/"
    ID: de6f23f7f8cf0cfc Namelen: 255     Type: ceph
Block size: 4194304    Fundamental block size: 4194304
Blocks: Total: 14397      Free: 14397      Available: 14397
Inodes: Total: 1          Free: -1

Mount CephFS on client ceph-client-ubuntu20.04-01

root@ceph-client-ubuntu20.04-01:~# mkdir /data/cephfs-data
root@ceph-client-ubuntu20.04-01:~# mount -t ceph 172.16.10.148:6789,172.16.10.110:6789,172.16.10.182:6789:/ /data/cephfs-data -o name=wgs,secretfile=/etc/ceph/wgs.key
root@ceph-client-ubuntu20.04-01:~# df -TH
Filesystem                                                 Type      Size  Used Avail Use% Mounted on
udev                                                       devtmpfs  4.1G     0  4.1G   0% /dev
tmpfs                                                      tmpfs     815M  1.1M  814M   1% /run
/dev/vda2                                                  ext4       22G   13G  7.9G  61% /
tmpfs                                                      tmpfs     4.1G     0  4.1G   0% /dev/shm
tmpfs                                                      tmpfs     5.3M     0  5.3M   0% /run/lock
tmpfs                                                      tmpfs     4.1G     0  4.1G   0% /sys/fs/cgroup
/dev/vdb                                                   ext4      528G   19G  483G   4% /data
172.16.10.148:6789,172.16.10.110:6789,172.16.10.182:6789:/ ceph       61G     0   61G   0% /data/cephfs-data
root@ceph-client-ubuntu20.04-01:~# stat -f /data/cephfs-data/
  File: "/data/cephfs-data/"
    ID: de6f23f7f8cf0cfc Namelen: 255     Type: ceph
Block size: 4194304    Fundamental block size: 4194304
Blocks: Total: 14397      Free: 14397      Available: 14397
Inodes: Total: 1          Free: -1

Mounting CephFS on the Client with the Key Value

Command format for mounting CephFS with the key value

Mount the root directory of the CephFS filesystem
mount -t ceph {mon01:socket},{mon02:socket},{mon03:socket}:/ {mount-point}  -o name={name},secret={value}
Mount a subdirectory of the CephFS filesystem
mount -t ceph {mon01:socket},{mon02:socket},{mon03:socket}:/{subvolume/dir1/dir2} {mount-point}  -o name={name},secret={value}

View the key

ceph@ceph-deploy:~/ceph-cluster$ cat wgs.key 
AQCrhk5htve9AxAAED3UAwf2P/5YFjBPVoNayw==

Mount CephFS on client ceph-client-centos7-01

[root@ceph-client-centos7-01 ~]# mkdir /data/cephfs-data
[root@ceph-client-centos7-01 ~]# mount -t ceph 172.16.10.148:6789,172.16.10.110:6789,172.16.10.182:6789:/ /data/cephfs-data -o name=wgs,secret=AQCrhk5htve9AxAAED3UAwf2P/5YFjBPVoNayw==
[root@ceph-client-centos7-01 ~]# df -TH
Filesystem                                                 Type      Size  Used Avail Use% Mounted on
devtmpfs                                                   devtmpfs  2.0G     0  2.0G   0% /dev
tmpfs                                                      tmpfs     2.0G     0  2.0G   0% /dev/shm
tmpfs                                                      tmpfs     2.0G  194M  1.9G  10% /run
tmpfs                                                      tmpfs     2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/vda1                                                  xfs        22G  5.9G   16G  28% /
/dev/vdb                                                   xfs       215G  9.1G  206G   5% /data
tmpfs                                                      tmpfs     399M     0  399M   0% /run/user/1000
tmpfs                                                      tmpfs     399M     0  399M   0% /run/user/1003
172.16.10.148:6789,172.16.10.110:6789,172.16.10.182:6789:/ ceph       61G     0   61G   0% /data/cephfs-data
[root@ceph-client-centos7-01 ~]# stat -f /data/cephfs-data/
  File: "/data/cephfs-data/"
    ID: de6f23f7f8cf0cfc Namelen: 255     Type: ceph
Block size: 4194304    Fundamental block size: 4194304
Blocks: Total: 14397      Free: 14397      Available: 14397
Inodes: Total: 1          Free: -1

Mount CephFS on client ceph-client-ubuntu20.04-01

root@ceph-client-ubuntu20.04-01:~# mkdir /data/cephfs-data
root@ceph-client-ubuntu20.04-01:~# mount -t ceph 172.16.10.148:6789,172.16.10.110:6789,172.16.10.182:6789:/ /data/cephfs-data -o name=wgs,secret=AQCrhk5htve9AxAAED3UAwf2P/5YFjBPVoNayw==
root@ceph-client-ubuntu20.04-01:~# df -TH
Filesystem                                                 Type      Size  Used Avail Use% Mounted on
udev                                                       devtmpfs  4.1G     0  4.1G   0% /dev
tmpfs                                                      tmpfs     815M  1.1M  814M   1% /run
/dev/vda2                                                  ext4       22G   13G  7.9G  61% /
tmpfs                                                      tmpfs     4.1G     0  4.1G   0% /dev/shm
tmpfs                                                      tmpfs     5.3M     0  5.3M   0% /run/lock
tmpfs                                                      tmpfs     4.1G     0  4.1G   0% /sys/fs/cgroup
/dev/vdb                                                   ext4      528G   19G  483G   4% /data
172.16.10.148:6789,172.16.10.110:6789,172.16.10.182:6789:/ ceph       61G     0   61G   0% /data/cephfs-data
root@ceph-client-ubuntu20.04-01:~# stat -f /data/cephfs-data/
  File: "/data/cephfs-data/"
    ID: de6f23f7f8cf0cfc Namelen: 255     Type: ceph
Block size: 4194304    Fundamental block size: 4194304
Blocks: Total: 14397      Free: 14397      Available: 14397
Inodes: Total: 1          Free: -1

Writing Data from the Clients and Verifying

Write data from client ceph-client-centos7-01

[root@ceph-client-centos7-01 ~]# cd /data/cephfs-data/
[root@ceph-client-centos7-01 cephfs-data]# echo "ceph-client-centos7-01" > ceph-client-centos7-01
[root@ceph-client-centos7-01 cephfs-data]# ls -l
total 1
-rw-r--r-- 1 root root 23 Sep 25 12:28 ceph-client-centos7-01

Verify the shared data from client ceph-client-ubuntu20.04-01

root@ceph-client-ubuntu20.04-01:~# cd /data/cephfs-data/
root@ceph-client-ubuntu20.04-01:/data/cephfs-data# ls -l
total 1
-rw-r--r-- 1 root root 23 Sep 25 12:28 ceph-client-centos7-01
root@ceph-client-ubuntu20.04-01:/data/cephfs-data# cat ceph-client-centos7-01 
ceph-client-centos7-01

Unmounting CephFS on the Clients

Unmount CephFS on client ceph-client-centos7-01

[root@ceph-client-centos7-01 ~]# umount /data/cephfs-data/

Unmount CephFS on client ceph-client-ubuntu20.04-01

root@ceph-client-ubuntu20.04-01:~# umount /data/cephfs-data

Mounting CephFS at Boot on the Clients

Mount CephFS at boot on client ceph-client-centos7-01

[root@ceph-client-centos7-01 ~]# cat /etc/fstab 
172.16.10.148:6789,172.16.10.110:6789,172.16.10.182:6789:/     /data/cephfs-data    ceph    defaults,name=wgs,secretfile=/etc/ceph/wgs.key,noatime,_netdev    0       2

Mount CephFS at boot on client ceph-client-ubuntu20.04-01

root@ceph-client-ubuntu20.04-01:~# cat /etc/fstab 
172.16.10.148:6789,172.16.10.110:6789,172.16.10.182:6789:/     /data/cephfs-data    ceph    defaults,name=wgs,secretfile=/etc/ceph/wgs.key,noatime,_netdev    0       2
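
Either fstab entry can be checked without a reboot (a minimal sketch, run on the respective client after adding the line):
[root@ceph-client-centos7-01 ~]# mount -a && df -h /data/cephfs-data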

Mounting CephFS in User Space

  • Starting with Ceph 10.x (Jewel), use at least a 4.x kernel. On older kernels, use the FUSE client instead of the kernel client.

Configure Repositories on the Clients

Configure the yum repository on client ceph-client-centos7-01

[root@ceph-client-centos7-01 ~]# yum -y install epel-release
[root@ceph-client-centos7-01 ~]# yum -y install https://mirrors.aliyun.com/ceph/rpm-octopus/el7/noarch/ceph-release-1-1.el7.noarch.rpm

Add the repository on client ceph-client-ubuntu20.04-01

root@ceph-client-ubuntu20.04-01:~# wget -q -O- 'https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc' | sudo apt-key add -
OK
root@ceph-client-ubuntu20.04-01:~# echo "deb https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific $(lsb_release -cs) main" >> /etc/apt/sources.list
root@ceph-client-ubuntu20.04-01:~# apt -y update && apt -y upgrade

Install ceph-fuse on the Clients

Install ceph-fuse on client ceph-client-centos7-01

[root@ceph-client-centos7-01 ~]# yum -y install ceph-common ceph-fuse

Install ceph-fuse on client ceph-client-ubuntu20.04-01

root@ceph-client-ubuntu20.04-01:~# apt -y install ceph-common fuse

Distribute the authentication files to the clients

ceph@ceph-deploy:~/ceph-cluster$ scp ceph.conf  ceph.client.wgs.keyring  root@ceph-client-ubuntu20.04-01:/etc/ceph
ceph@ceph-deploy:~/ceph-cluster$ scp ceph.conf  ceph.client.wgs.keyring  root@ceph-client-centos7-01:/etc/ceph

Mounting CephFS on the Clients with ceph-fuse

ceph-fuse usage

root@ceph-client-centos7-01:~# ceph-fuse -h
usage: ceph-fuse [-n client.username] [-m mon-ip-addr:mon-port] <mount point> [OPTIONS]
  --client_mountpoint/-r <sub_directory>
                    use sub_directory as the mounted root, rather than the full Ceph tree.

usage: ceph-fuse mountpoint [options]

general options:
    -o opt,[opt...]        mount options
    -h   --help            print help
    -V   --version         print version

FUSE options:
    -d   -o debug          enable debug output (implies -f)
    -f                     foreground operation
    -s                     disable multi-threaded operation

  --conf/-c FILE    read configuration from the given configuration file
  --id ID           set ID portion of my name
  --name/-n TYPE.ID set name
  --cluster NAME    set cluster name (default: ceph)
  --setuser USER    set uid to user or uid (and gid to user's gid)
  --setgroup GROUP  set gid to group or gid
  --version         show version and quit
  -o opt,[opt...]        mount options.

  -d   Run in the foreground, send all log output to stderr, and enable FUSE debugging (-o debug).
  -c ceph.conf, --conf=ceph.conf   Use ceph.conf during startup to determine the monitor addresses instead of the default /etc/ceph/ceph.conf.
  -m monaddress[:port]   Connect to the specified monitor (instead of looking it up via ceph.conf).
  -n client.{cephx-username}   Name of the CephX user whose key is used for the mount.
  -k <path-to-keyring>   Path to the keyring; useful when it is not in a standard location.
  --client_mountpoint/-r root_directory   Use root_directory as the mounted root, rather than the full Ceph tree.
  -f   Run in the foreground. Do not generate a pid file.
  -s   Disable multi-threaded operation.

Usage examples
ceph-fuse --id {name} -m {mon01:socket},{mon02:socket},{mon03:socket} {mountpoint}
Mount a specific directory of the CephFS filesystem
ceph-fuse --id wgs -r /path/to/dir /data/cephfs-data
Specify the path to the user keyring file
ceph-fuse --id wgs -k /path/to/keyring /data/cephfs-data
Select which CephFS filesystem to mount when multiple filesystems exist
ceph-fuse --id wgs --client_fs mycephfs2 /data/cephfs-data

Mount CephFS with ceph-fuse on client ceph-client-centos7-01

[root@ceph-client-centos7-01 ~]# ceph-fuse --id wgs -m 172.16.10.148:6789,172.16.10.110:6789,172.16.10.182:6789 /data/cephfs-data
ceph-fuse[8979]: starting ceph client
2021-09-25T14:24:32.258+0800 7f2934e9df40 -1 init, newargv = 0x556e4ebb1300 newargc=9
ceph-fuse[8979]: starting fuse
[root@ceph-client-centos7-01 ~]# df -TH
Filesystem     Type            Size  Used Avail Use% Mounted on
devtmpfs       devtmpfs        2.0G     0  2.0G   0% /dev
tmpfs          tmpfs           2.0G     0  2.0G   0% /dev/shm
tmpfs          tmpfs           2.0G  194M  1.9G  10% /run
tmpfs          tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/vda1      xfs              22G  6.0G   16G  28% /
/dev/vdb       xfs             215G  9.2G  206G   5% /data
tmpfs          tmpfs           399M     0  399M   0% /run/user/1000
tmpfs          tmpfs           399M     0  399M   0% /run/user/1003
ceph-fuse      fuse.ceph-fuse   61G     0   61G   0% /data/cephfs-data

Mount CephFS with ceph-fuse on client ceph-client-ubuntu20.04-01

root@ceph-client-ubuntu20.4-01:~# ceph-fuse --id wgs -m 172.16.10.148:6789,172.16.10.110:6789,172.16.10.182:6789 /data/cephfs-data
2021-09-25T14:26:17.664+0800 7f2473c04080 -1 init, newargv = 0x560939e8b8c0 newargc=15
ceph-fuse[8696]: starting ceph client
ceph-fuse[8696]: starting fuse
root@ceph-client-ubuntu20.4-01:~# df -TH
Filesystem     Type            Size  Used Avail Use% Mounted on
udev           devtmpfs        4.1G     0  4.1G   0% /dev
tmpfs          tmpfs           815M  1.1M  814M   1% /run
/dev/vda2      ext4             22G   13G  7.9G  61% /
tmpfs          tmpfs           4.1G     0  4.1G   0% /dev/shm
tmpfs          tmpfs           5.3M     0  5.3M   0% /run/lock
tmpfs          tmpfs           4.1G     0  4.1G   0% /sys/fs/cgroup
/dev/vdb       ext4            528G   21G  480G   5% /data
tmpfs          tmpfs           815M     0  815M   0% /run/user/1001
ceph-fuse      fuse.ceph-fuse   61G     0   61G   0% /data/cephfs-data

Writing Data from the Clients and Verifying

Write data from client ceph-client-centos7-01

[root@ceph-client-centos7-01 ~]# cd /data/cephfs-data/
[root@ceph-client-centos7-01 cephfs-data]#  mkdir -pv test/test1
mkdir: created directory 'test'
mkdir: created directory 'test/test1'

Verify the data from client ceph-client-ubuntu20.04-01

root@ceph-client-ubuntu20.4-01:~# cd /data/cephfs-data/
root@ceph-client-ubuntu20.4-01:/data/cephfs-data# tree .
.
└── test
    └── test1

2 directories, 0 files

Unmounting CephFS on the Clients

Unmount CephFS on client ceph-client-centos7-01

[root@ceph-client-centos7-01 ~]# umount /data/cephfs-data/

Unmount CephFS on client ceph-client-ubuntu20.04-01

root@ceph-client-ubuntu20.04-01:~# umount /data/cephfs-data

Mounting CephFS at Boot on the Clients (ceph-fuse)

Mount CephFS at boot on client ceph-client-centos7-01

[root@ceph-client-centos7-01 ~]# cat /etc/fstab 
none    /data/cephfs-data  fuse.ceph ceph.id=wgs,ceph.conf=/etc/ceph/ceph.conf,_netdev,defaults  0 0

Mount CephFS at boot on client ceph-client-ubuntu20.04-01

root@ceph-client-ubuntu20.04-01:~# cat /etc/fstab 
none    /data/cephfs-data  fuse.ceph ceph.id=wgs,ceph.conf=/etc/ceph/ceph.conf,_netdev,defaults  0 0

Removing a CephFS Filesystem (Multiple Filesystems)

View the CephFS filesystem information

ceph@ceph-deploy:~/ceph-cluster$ ceph fs ls
name: wgscephfs, metadata pool: cephfs-metadata, data pools: [cephfs-data ]
name: wgscephfs01, metadata pool: cephfs-metadata01, data pools: [cephfs-data02 ]

Check whether the CephFS filesystem is currently mounted

ceph@ceph-deploy:~/ceph-cluster$ ceph fs status wgscephfs
wgscephfs - 1 clients
=========
RANK  STATE       MDS         ACTIVITY     DNS    INOS   DIRS   CAPS  
 0    active  ceph-mgr-01  Reqs:    0 /s    13     15     14      2   
      POOL         TYPE     USED  AVAIL  
cephfs-metadata  metadata   216k  56.2G  
  cephfs-data      data       0   56.2G  
MDS version: ceph version 16.2.6 (ee28fb57e47e9f88813e24bbf4c14496ca299d31) pacific (stable)

Find the clients that have CephFS mounted

ceph@ceph-deploy:~/ceph-cluster$ ceph tell mds.0 client ls
2021-09-25T18:30:09.856+0800 7f0fbdffb700  0 client.1094242 ms_handle_reset on v2:172.16.10.225:6802/1724959904
2021-09-25T18:30:09.884+0800 7f0fbdffb700  0 client.1084719 ms_handle_reset on v2:172.16.10.225:6802/1724959904
[
    {
        "id": 1094171,
        "entity": {
            "name": {
                "type": "client",
                "num": 1094171
            },
            "addr": {
                "type": "v1",
                "addr": "172.16.0.126:0",
                "nonce": 1257114724
            }
        },
        "state": "open",
        "num_leases": 0,
        "num_caps": 2,
        "request_load_avg": 0,
        "uptime": 274.89986021499999,
        "requests_in_flight": 0,
        "num_completed_requests": 0,
        "num_completed_flushes": 0,
        "reconnecting": false,
        "recall_caps": {
            "value": 0,
            "halflife": 60
        },
        "release_caps": {
            "value": 0,
            "halflife": 60
        },
        "recall_caps_throttle": {
            "value": 0,
            "halflife": 1.5
        },
        "recall_caps_throttle2o": {
            "value": 0,
            "halflife": 0.5
        },
        "session_cache_liveness": {
            "value": 1.6026981127772033,
            "halflife": 300
        },
        "cap_acquisition": {
            "value": 0,
            "halflife": 10
        },
        "delegated_inos": [],
        "inst": "client.1094171 v1:172.16.0.126:0/1257114724",
        "completed_requests": [],
        "prealloc_inos": [],
        "client_metadata": {
            "client_features": {
                "feature_bits": "0x00000000000001ff"
            },
            "metric_spec": {
                "metric_flags": {
                    "feature_bits": "0x"
                }
            },
            "entity_id": "wgs",
            "hostname": "bj2d-prod-eth-star-boot-03",
            "kernel_version": "4.19.0-1.el7.ucloud.x86_64",
            "root": "/"
        }
    }
]

Unmounting CephFS on the Client

The client unmounts CephFS on its own

[root@ceph-client-ubuntu20.04-01 ~]# umount /data/cephfs-data/

Manually evicting the client

Evict the client by its unique ID

ceph@ceph-deploy:~/ceph-cluster$ ceph tell mds.0 client evict id=1094171
2021-09-25T18:31:02.895+0800 7fc5eeffd700  0 client.1094254 ms_handle_reset on v2:172.16.10.225:6802/1724959904
2021-09-25T18:31:03.671+0800 7fc5eeffd700  0 client.1084740 ms_handle_reset on v2:172.16.10.225:6802/1724959904
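
Evicting a client also places its address on the OSD blocklist for a period. A hedged sketch for inspecting and, if necessary, clearing it (take the exact address from the ls output):
ceph@ceph-deploy:~/ceph-cluster$ ceph osd blocklist ls
ceph@ceph-deploy:~/ceph-cluster$ ceph osd blocklist rm {addr-from-ls-output}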

Check the mount point state on the client after eviction

root@ceph-client-ubuntu20.04-01:~# ls -l /data/
ls: cannot access '/data/cephfs-data': Permission denied
total 32
d????????? ? ?    ?        ?            ? cephfs-data

Recovery on the client

root@ceph-client-ubuntu20.04-01:~# umount /data/cephfs-data

Verify the mount point state on the client

root@ceph-client-ubuntu20.04-01:~# ls -l
total 4
drwxr-xr-x  2 root   root      6 Sep 25 11:46 cephfs-data

Remove the CephFS filesystem

ceph@ceph-deploy:~/ceph-cluster$ ceph fs rm wgscephfs01 --yes-i-really-mean-it

Verify the filesystem removal

ceph@ceph-deploy:~/ceph-cluster$ ceph fs ls
name: wgscephfs, metadata pool: cephfs-metadata, data pools: [cephfs-data ]

Removing a CephFS Filesystem (Single Filesystem)

View the CephFS filesystem information

ceph@ceph-deploy:~/ceph-cluster$ ceph fs ls
name: wgscephfs, metadata pool: cephfs-metadata, data pools: [cephfs-data ]

Check whether the CephFS filesystem is currently mounted

ceph@ceph-deploy:~/ceph-cluster$ ceph fs status wgscephfs
wgscephfs - 1 clients
=========
RANK  STATE       MDS         ACTIVITY     DNS    INOS   DIRS   CAPS  
 0    active  ceph-mgr-01  Reqs:    0 /s    13     15     14      2   
      POOL         TYPE     USED  AVAIL  
cephfs-metadata  metadata   216k  56.2G  
  cephfs-data      data       0   56.2G  
MDS version: ceph version 16.2.6 (ee28fb57e47e9f88813e24bbf4c14496ca299d31) pacific (stable)

Find the clients that have CephFS mounted

ceph@ceph-deploy:~/ceph-cluster$ ceph tell mds.0 client ls
2021-09-25T18:30:09.856+0800 7f0fbdffb700  0 client.1094242 ms_handle_reset on v2:172.16.10.225:6802/1724959904
2021-09-25T18:30:09.884+0800 7f0fbdffb700  0 client.1084719 ms_handle_reset on v2:172.16.10.225:6802/1724959904
[
    {
        "id": 1094171,
        "entity": {
            "name": {
                "type": "client",
                "num": 1094171
            },
            "addr": {
                "type": "v1",
                "addr": "172.16.0.126:0",
                "nonce": 1257114724
            }
        },
        "state": "open",
        "num_leases": 0,
        "num_caps": 2,
        "request_load_avg": 0,
        "uptime": 274.89986021499999,
        "requests_in_flight": 0,
        "num_completed_requests": 0,
        "num_completed_flushes": 0,
        "reconnecting": false,
        "recall_caps": {
            "value": 0,
            "halflife": 60
        },
        "release_caps": {
            "value": 0,
            "halflife": 60
        },
        "recall_caps_throttle": {
            "value": 0,
            "halflife": 1.5
        },
        "recall_caps_throttle2o": {
            "value": 0,
            "halflife": 0.5
        },
        "session_cache_liveness": {
            "value": 1.6026981127772033,
            "halflife": 300
        },
        "cap_acquisition": {
            "value": 0,
            "halflife": 10
        },
        "delegated_inos": [],
        "inst": "client.1094171 v1:172.16.0.126:0/1257114724",
        "completed_requests": [],
        "prealloc_inos": [],
        "client_metadata": {
            "client_features": {
                "feature_bits": "0x00000000000001ff"
            },
            "metric_spec": {
                "metric_flags": {
                    "feature_bits": "0x"
                }
            },
            "entity_id": "wgs",
            "hostname": "bj2d-prod-eth-star-boot-03",
            "kernel_version": "4.19.0-1.el7.ucloud.x86_64",
            "root": "/"
        }
    }
]

Unmounting CephFS on the Client

The client unmounts CephFS on its own

[root@ceph-client-ubuntu20.04-01 ~]# umount /data/cephfs-data/

Manually evicting the client

Evict the client by its unique ID

ceph@ceph-deploy:~/ceph-cluster$ ceph tell mds.0 client evict id=1094171
2021-09-25T18:31:02.895+0800 7fc5eeffd700  0 client.1094254 ms_handle_reset on v2:172.16.10.225:6802/1724959904
2021-09-25T18:31:03.671+0800 7fc5eeffd700  0 client.1084740 ms_handle_reset on v2:172.16.10.225:6802/1724959904

Check the mount point state on the client after eviction

root@ceph-client-ubuntu20.04-01:~# ls -l /data/
ls: cannot access '/data/cephfs-data': Permission denied
total 32
d????????? ? ?    ?        ?            ? cephfs-data

Recovery on the client

root@ceph-client-ubuntu20.04-01:~# umount /data/cephfs-data

Verify the mount point state on the client

root@ceph-client-ubuntu20.04-01:~# ls -l
total 4
drwxr-xr-x  2 root   root      6 Sep 25 11:46 cephfs-data

Remove the CephFS filesystem

Check the CephFS service status

ceph@ceph-deploy:~/ceph-cluster$ ceph mds stat
wgscephfs:1 {0=ceph-mgr-01=up:active}

Take the CephFS filesystem offline

ceph@ceph-deploy:~/ceph-cluster$ ceph fs fail wgscephfs
wgscephfs marked not joinable; MDS cannot join the cluster. All MDS ranks marked failed.

Check the Ceph cluster status

ceph@ceph-deploy:~/ceph-cluster$ ceph -s
  cluster:
    id:     6e521054-1532-4bc8-9971-7f8ae93e8430
    health: HEALTH_ERR
            1 filesystem is degraded
            1 filesystem is offline
 
  services:
    mon: 3 daemons, quorum ceph-mon-01,ceph-mon-02,ceph-mon-03 (age 97m)
    mgr: ceph-mgr-01(active, since 25h), standbys: ceph-mgr-02
    mds: 0/1 daemons up (1 failed), 1 standby
    osd: 9 osds: 9 up (since 2d), 9 in (since 2d)
 
  data:
    volumes: 0/1 healthy, 1 failed
    pools:   6 pools, 257 pgs
    objects: 44 objects, 24 MiB
    usage:   1.4 GiB used, 179 GiB / 180 GiB avail
    pgs:     257 active+clean
 

Check the CephFS service status

ceph@ceph-deploy:~/ceph-cluster$ ceph mds stat
wgscephfs:0/1 1 up:standby, 1 failed

Remove the CephFS filesystem

ceph@ceph-deploy:~/ceph-cluster$ ceph fs rm wgscephfs --yes-i-really-mean-it

View the CephFS filesystems

ceph@ceph-deploy:~/ceph-cluster$ ceph fs ls
No filesystems enabled

Check the Ceph cluster status

ceph@ceph-deploy:~/ceph-cluster$ ceph -s
  cluster:
    id:     6e521054-1532-4bc8-9971-7f8ae93e8430
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph-mon-01,ceph-mon-02,ceph-mon-03 (age 9m)
    mgr: ceph-mgr-01(active, since 25h), standbys: ceph-mgr-02
    osd: 9 osds: 9 up (since 2d), 9 in (since 2d)
 
  data:
    pools:   6 pools, 257 pgs
    objects: 43 objects, 24 MiB
    usage:   1.4 GiB used, 179 GiB / 180 GiB avail
    pgs:     257 active+clean

Check the CephFS service status

ceph@ceph-deploy:~/ceph-cluster$ ceph mds stat
 1 up:standby
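
If the backing pools are no longer needed either, they can be removed as well; a hedged sketch, which relies on mon_allow_pool_delete = true as set in this cluster's ceph.conf:
ceph@ceph-deploy:~/ceph-cluster$ ceph osd pool rm cephfs-metadata cephfs-metadata --yes-i-really-really-mean-it
ceph@ceph-deploy:~/ceph-cluster$ ceph osd pool rm cephfs-data cephfs-data --yes-i-really-really-mean-it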

Ceph MDS High Availability

Introduction to Ceph MDS high availability

  • The Ceph MDS is the access entry point for CephFS, so it needs to provide both high performance and redundancy.

Ceph MDS high availability architecture

  • Two active and two standby MDS daemons.

Commonly used Ceph MDS configuration options

  • mds_standby_replay: when true, standby-replay mode is enabled and the standby MDS continuously replays the journal of the active MDS, so a failover can happen quickly if the active MDS goes down; when false, the standby only synchronizes after the active MDS fails, which means a short interruption (see the sketch after this list).
  • mds_standby_for_name: restrict this MDS daemon to acting as a standby for the MDS with the given name.
  • mds_standby_for_rank: restrict this MDS daemon to acting as a standby for the given rank; when several CephFS filesystems exist, mds_standby_for_fscid can be used in addition to pick the filesystem.
  • mds_standby_for_fscid: the CephFS filesystem ID; it works together with mds_standby_for_rank. If mds_standby_for_rank is set, the standby is pinned to that rank of the given filesystem; otherwise it applies to all ranks of that filesystem.
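
Note that in recent releases standby-replay is enabled per filesystem rather than per daemon; a hedged sketch of the filesystem-level equivalent:
ceph@ceph-deploy:~/ceph-cluster$ ceph fs set wgscephfs allow_standby_replay true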

Current MDS service status

ceph@ceph-deploy:~/ceph-cluster$ ceph mds stat
wgscephfs:1 {0=ceph-mgr-01=up:active}

Adding MDS Servers

Install ceph-mds

root@ceph-mgr-02:~# apt -y install ceph-mds
root@ceph-mon-01:~# apt -y install ceph-mds
root@ceph-mon-02:~# apt -y install ceph-mds
root@ceph-mon-03:~# apt -y install ceph-mds

Create the MDS services

ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy mds create ceph-mgr-02 ceph-mon-01 ceph-mon-02 ceph-mon-03

Check the current MDS service status

ceph@ceph-deploy:~/ceph-cluster$ ceph fs status 
wgscephfs - 0 clients
=========
RANK  STATE       MDS         ACTIVITY     DNS    INOS   DIRS   CAPS  
 0    active  ceph-mgr-01  Reqs:    0 /s    10     13     12      0   
      POOL         TYPE     USED  AVAIL  
cephfs-metadata  metadata  96.0k  56.2G  
  cephfs-data      data       0   56.2G  
STANDBY MDS  
ceph-mgr-02  
ceph-mon-02  
ceph-mon-03  
ceph-mon-01  
MDS version: ceph version 16.2.6 (ee28fb57e47e9f88813e24bbf4c14496ca299d31) pacific (stable)

Check the current Ceph cluster status

ceph@ceph-deploy:~/ceph-cluster$ ceph -s
  cluster:
    id:     6e521054-1532-4bc8-9971-7f8ae93e8430
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph-mon-01,ceph-mon-02,ceph-mon-03 (age 65m)
    mgr: ceph-mgr-01(active, since 26h), standbys: ceph-mgr-02
    mds: 1/1 daemons up, 4 standby
    osd: 9 osds: 9 up (since 2d), 9 in (since 2d)
 
  data:
    volumes: 1/1 healthy
    pools:   4 pools, 161 pgs
    objects: 43 objects, 24 MiB
    usage:   1.4 GiB used, 179 GiB / 180 GiB avail
    pgs:     161 active+clean
 

Check the current CephFS filesystem state

ceph@ceph-deploy:~/ceph-cluster$ ceph fs get wgscephfs
Filesystem 'wgscephfs' (8)
fs_name wgscephfs
epoch   39
flags   13
created 2021-09-25T20:01:13.237645+0800
modified        2021-09-25T20:05:16.835799+0800
tableserver     0
root    0
session_timeout 60
session_autoclose       300
max_file_size   1099511627776
required_client_features        {}
last_failure    0
last_failure_osd_epoch  0
compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds 1
in      0
up      {0=1104195}
failed
damaged
stopped
data_pools      [14]
metadata_pool   13
inline_data     disabled
balancer
standby_count_wanted    1
[mds.ceph-mgr-01{0:1104195} state up:active seq 113 addr [v2:172.16.10.225:6802/3134604779,v1:172.16.10.225:6803/3134604779]]

Removing an MDS Service

  • The MDS automatically notifies the Ceph monitors that it is shutting down. This allows the monitors to fail over instantly to an available standby, if one exists. It is unnecessary to use administrative commands to trigger this failover, e.g. ceph mds fail <name>.

Stop the MDS service

root@ceph-mon-03:~# systemctl stop ceph-mds@ceph-mon-03

Check the current MDS state

ceph@ceph-deploy:~/ceph-cluster$ ceph fs status
wgscephfs - 0 clients
=========
RANK  STATE       MDS         ACTIVITY     DNS    INOS   DIRS   CAPS  
 0    active  ceph-mgr-02  Reqs:    0 /s    10     13     12      0   
 1    active  ceph-mgr-01  Reqs:    0 /s    10     13     11      0   
      POOL         TYPE     USED  AVAIL  
cephfs-metadata  metadata   168k  56.2G  
  cephfs-data      data       0   56.2G  
STANDBY MDS  
ceph-mon-02  
ceph-mon-01  
MDS version: ceph version 16.2.6 (ee28fb57e47e9f88813e24bbf4c14496ca299d31) pacific (stable)

Remove /var/lib/ceph/mds/ceph-${id}

root@ceph-mon-03:~# rm -rf /var/lib/ceph/mds/ceph-ceph-mon-03
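
It is also reasonable to disable the systemd unit so the removed daemon does not come back after a reboot (a minimal sketch):
root@ceph-mon-03:~# systemctl disable ceph-mds@ceph-mon-03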

Set the Number of Active MDS Daemons

# Set the maximum number of simultaneously active MDS daemons to 2
ceph@ceph-deploy:~/ceph-cluster$ ceph fs set wgscephfs max_mds 2

Check the current MDS state

ceph@ceph-deploy:~/ceph-cluster$ ceph fs status
wgscephfs - 0 clients
=========
RANK  STATE       MDS         ACTIVITY     DNS    INOS   DIRS   CAPS  
 0    active  ceph-mgr-02  Reqs:    0 /s    10     13     12      0   
 1    active  ceph-mgr-01  Reqs:    0 /s    10     13     11      0   
      POOL         TYPE     USED  AVAIL  
cephfs-metadata  metadata   168k  56.2G  
  cephfs-data      data       0   56.2G  
STANDBY MDS  
ceph-mon-02  
ceph-mon-01  
MDS version: ceph version 16.2.6 (ee28fb57e47e9f88813e24bbf4c14496ca299d31) pacific (stable)

Tuning MDS High Availability

  • Currently ceph-mgr-01 and ceph-mgr-02 are in the active state.
  • Set ceph-mgr-02 as the standby for ceph-mgr-01.
  • Set ceph-mon-02 as the standby for ceph-mon-01.

ceph@ceph-deploy:~/ceph-cluster$ cat ceph.conf 
[global]
fsid = 6e521054-1532-4bc8-9971-7f8ae93e8430
public_network = 172.16.10.0/24
cluster_network = 172.16.10.0/24
mon_initial_members = ceph-mon-01
mon_host = 172.16.10.148
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
mon_allow_pool_delete = true

mon clock drift allowed = 2
mon clock drift warn backoff = 30
[mds.ceph-mon-01]
mds_standby_for_name = ceph-mgr-01
mds_standby_replay = true
[mds.ceph-mon-02]
mds_standby_for_name = ceph-mgr-02
mds_standby_replay = true

Distribute the configuration file

ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy --overwrite-conf config push ceph-mgr-01
ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy --overwrite-conf config push ceph-mgr-02
ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy --overwrite-conf config push ceph-mon-01
ceph@ceph-deploy:~/ceph-cluster$ ceph-deploy --overwrite-conf config push ceph-mon-02

Restart the MDS services

root@ceph-mon-02:~# systemctl restart ceph-mds@ceph-mon-02
root@ceph-mon-01:~# systemctl restart ceph-mds@ceph-mon-01
root@ceph-mgr-02:~# systemctl restart ceph-mds@ceph-mgr-02
root@ceph-mgr-01:~# systemctl restart ceph-mds@ceph-mgr-01

MDS high-availability state of the Ceph cluster

ceph@ceph-deploy:~/ceph-cluster$ ceph fs status
wgscephfs - 0 clients
=========
RANK  STATE       MDS         ACTIVITY     DNS    INOS   DIRS   CAPS  
 0    active  ceph-mgr-02  Reqs:    0 /s    10     13     12      0   
 1    active  ceph-mon-01  Reqs:    0 /s    10     13     11      0   
      POOL         TYPE     USED  AVAIL  
cephfs-metadata  metadata   168k  56.2G  
  cephfs-data      data       0   56.2G  
STANDBY MDS  
ceph-mon-02  
ceph-mgr-01  
MDS version: ceph version 16.2.6 (ee28fb57e47e9f88813e24bbf4c14496ca299d31) pacific (stable)

Verify the one-to-one active/standby pairing

  • ceph-mgr-02 and ceph-mgr-01 alternate between the active and standby states.
  • ceph-mon-02 and ceph-mon-01 alternate between the active and standby states.

Exporting CephFS as NFS via Ganesha

  • https://docs.ceph.com/en/pacific/cephfs/nfs/

Requirements

  • A Ceph filesystem from Luminous or a later release.
  • On the NFS server host: 'libcephfs2' from Luminous or later, plus the 'nfs-ganesha' and 'nfs-ganesha-ceph' packages (Ganesha v2.5 or later).
  • The NFS-Ganesha server host must be connected to the Ceph public network.
  • NFS-Ganesha 3.5 or a later stable release is recommended together with Pacific (16.2.x) or a later stable Ceph release.
  • Install nfs-ganesha and nfs-ganesha-ceph on a node where CephFS is deployed.

Install the Ganesha Service on a Node Running ceph-mds

Check the available Ganesha version

root@ceph-mgr-01:~# apt-cache madison nfs-ganesha-ceph
nfs-ganesha-ceph |    2.6.0-2 | http://mirrors.ucloud.cn/ubuntu bionic/universe amd64 Packages
nfs-ganesha |    2.6.0-2 | http://mirrors.ucloud.cn/ubuntu bionic/universe Sources

Install the Ganesha service

root@ceph-mgr-01:~# apt -y install  nfs-ganesha-ceph nfs-ganesha

Ganesha configuration

  • https://github.com/nfs-ganesha/nfs-ganesha/blob/next/src/config_samples/ceph.conf
root@ceph-mgr-01:~# mv /etc/ganesha/ganesha.conf /etc/ganesha/ganesha.conf.back
root@ceph-mgr-01:~# vi /etc/ganesha/ganesha.conf
NFS_CORE_PARAM
{
        # Ganesha can lift the NFS grace period early if NLM is disabled.
        Enable_NLM = false;

        # rquotad doesn't add any value here. CephFS doesn't support per-uid
        # quotas anyway.
        Enable_RQUOTA = false;

        # In this configuration, we're just exporting NFSv4. In practice, it's
        # best to use NFSv4.1+ to get the benefit of sessions.
        Protocols = 4;
}

EXPORT
{
        # Export Id (mandatory, each EXPORT must have a unique Export_Id)
        Export_Id = 77;

        # Exported path (mandatory)
        Path = /;

        # Pseudo Path (required for NFS v4)
        Pseudo = /cephfs-test;

        # Time out attribute cache entries immediately
        Attr_Expiration_Time = 0;

        # We're only interested in NFSv4 in this configuration
        Protocols = 4;

        # NFSv4 does not allow UDP transport
        Transports = TCP;

        # Time out attribute cache entries immediately
        Attr_Expiration_Time = 0;

        # setting for root Squash
        Squash="No_root_squash";

        # Required for access (default is None)
        # Could use CLIENT blocks instead
        Access_Type = RW;

        # Exporting FSAL
        FSAL {
                Name = CEPH;
                hostname="172.16.10.225";  # IP address of the current node
        }
}

LOG {
    # default log level
    Default_Log_Level = WARN;
}

Manage the Ganesha service

root@ceph-mgr-01:~# systemctl restart nfs-ganesha
root@ceph-mgr-01:~# systemctl status nfs-ganesha
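
A quick way to confirm the export is being served (a minimal sketch; NFSv4 listens on TCP port 2049):
root@ceph-mgr-01:~# ss -tnlp | grep 2049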

Ganesha Client Setup

Ubuntu

root@ceph-client-ubuntu18.04-01:~# apt -y install nfs-common

CentOS

[root@ceph-client-centos7-01 ~]# yum install -y nfs-utils

Client Mounts

Mount on the client via the ceph protocol

root@ceph-client-ubuntu20.04-01:~# mount -t ceph 172.16.10.148:6789,172.16.10.110:6789,172.16.10.182:6789:/ /data/cephfs-data -o name=wgs,secretfile=/etc/ceph/wgs.key
root@ceph-client-ubuntu20.04-01:~# df -TH
Filesystem                                                 Type      Size  Used Avail Use% Mounted on
udev                                                       devtmpfs  4.1G     0  4.1G   0% /dev
tmpfs                                                      tmpfs     815M  1.1M  814M   1% /run
/dev/vda2                                                  ext4       22G   18G  2.1G  90% /
tmpfs                                                      tmpfs     4.1G     0  4.1G   0% /dev/shm
tmpfs                                                      tmpfs     5.3M     0  5.3M   0% /run/lock
tmpfs                                                      tmpfs     4.1G     0  4.1G   0% /sys/fs/cgroup
/dev/vdb                                                   ext4      528G   32G  470G   7% /data
tmpfs                                                      tmpfs     815M     0  815M   0% /run/user/1001
172.16.10.148:6789,172.16.10.110:6789,172.16.10.182:6789:/ ceph       61G     0   61G   0% /data/cephfs-data

Write test data

root@ceph-client-ubuntu20.04-01:~# cd /data/cephfs-data/
root@ceph-client-ubuntu20.04-01:/data/cephfs-data# echo "mount nfs" > nfs.txt
root@ceph-client-ubuntu20.04-01:/data/cephfs-data# ls -l
total 1
-rw-r--r-- 1 root root 10 Sep 26 22:08 nfs.txt

Mount on the client via NFS

[root@ceph-client-centos7-01 ~]# mount -t nfs -o nfsvers=4.1,proto=tcp 172.16.10.225:/cephfs-test /data/cephfs-data/
[root@ceph-client-centos7-01 ~]# df -TH
Filesystem                 Type      Size  Used Avail Use% Mounted on
udev                       devtmpfs  2.1G     0  2.1G   0% /dev
tmpfs                      tmpfs     412M   51M  361M  13% /run
/dev/vda1                  ext4      106G  4.1G   97G   5% /
tmpfs                      tmpfs     2.1G     0  2.1G   0% /dev/shm
tmpfs                      tmpfs     5.3M     0  5.3M   0% /run/lock
tmpfs                      tmpfs     2.1G     0  2.1G   0% /sys/fs/cgroup
/dev/vdb                   ext4      106G   16G   85G  16% /data
tmpfs                      tmpfs     412M     0  412M   0% /run/user/1003
172.16.10.225:/cephfs-test nfs4       61G     0   61G   0% /data/cephfs-data

Verify the data over the NFS mount

[root@ceph-client-centos7-01 ~]# ls -l /data/cephfs-data/
total 1
-rw-r--r-- 1 root root 10 Sep 26 22:08 nfs.txt
[root@ceph-client-centos7-01 ~]# cat /data/cephfs-data/nfs.txt 
mount nfs

Client Unmount

Ubuntu

root@ceph-client-ubuntu20.04-01:~# umount /data/cephfs-data

CentOS

[root@ceph-client-centos7-01 ~]# umount /data/cephfs-data

References

https://docs.ceph.com/en/pacific/cephfs/
