Ceph-RBD
1. Create an RBD
Create a storage pool
$ ceph osd pool create myrbd 64 64
- myrbd: the pool name
- 64: the number of placement groups (PGs)
- 64: the number of PGPs (placement groups for placement); PGP controls how the data in the PGs is combined and placed onto OSDs, and is generally set equal to the PG count
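A common rule of thumb for sizing pg_num (a sketch, not an official formula for every Ceph release) is roughly (number of OSDs × 100) / replica count, rounded to a power of two. The OSD count (8) and replica count (3) below are assumptions taken from the cluster status shown later on this page:

```shell
# Hedged rule-of-thumb PG sizing: (OSDs * 100) / replicas, rounded to the
# nearest power of two. 8 OSDs and 3 replicas are assumptions based on the
# ceph -s output shown later in this page.
osds=8
replicas=3
target=$(( osds * 100 / replicas ))   # 266 for this cluster
pg=1
while [ $(( pg * 2 )) -le "$target" ]; do
  pg=$(( pg * 2 ))                    # largest power of two <= target
done
# round up if the next power of two is closer to the target
if [ $(( target - pg )) -gt $(( pg * 2 - target )) ]; then
  pg=$(( pg * 2 ))
fi
echo "suggested pg_num: $pg"
```

The 64/64 used above is a smaller, also-valid choice for a test pool; pg_num and pgp_num should normally match.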
Enable the rbd application on the pool
$ ceph osd pool application enable myrbd rbd
Initialize the pool
$ rbd pool init -p myrbd
Create and verify an image
An RBD pool cannot be used as a block device directly; you must first create an image in the pool, and that image is what gets used as the block device.
Create an image
$ rbd create myimg --size 5G --pool myrbd
Create an image with explicitly chosen features (some client kernels do not support all of the default image features, so choose features to match your kernel; features can also be turned off on an existing image later with rbd feature disable)
$ rbd create myimg1 --size 5G --pool myrbd --image-format 2 --image-feature layering
List and inspect images
$ rbd ls --pool myrbd
myimg
myimg1
$ rbd --image myimg --pool myrbd info
rbd image 'myimg':
        size 5 GiB in 1280 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: 5e3ac13f7b1c
        block_name_prefix: rbd_data.5e3ac13f7b1c
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        op_features:
        flags:
        create_timestamp: Tue Aug 24 02:27:04 2021
        access_timestamp: Tue Aug 24 02:27:04 2021
        modify_timestamp: Tue Aug 24 02:27:04 2021
Install ceph-common on the client (my client runs Ubuntu 16.04, which can only install the Nautilus build of ceph-common)
# Add the Nautilus apt source
$ echo "deb https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-nautilus xenial main" >> /etc/apt/sources.list
# Install ceph-common
$ apt-get update && apt-get install ceph-common
# On the deploy node, copy the admin keyring and cluster config file to the client
$ scp ceph.client.admin.keyring ceph.conf root@172.16.143.175:/etc/ceph
# Verify the client can reach the cluster: run ceph -s on the client
$ ceph -s
  cluster:
    id:     6e278817-8019-4a06-82b3-b4d24d7dd743
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph-mon1,ceph-mon2,ceph-mon3 (age 20h)
    mgr: ceph-mgr1(active, since 4d), standbys: ceph-mgr2
    osd: 8 osds: 8 up (since 22h), 8 in (since 22h)

  data:
    pools:   2 pools, 65 pgs
    objects: 7 objects, 405 B
    usage:   49 MiB used, 800 GiB / 800 GiB avail
    pgs:     65 active+clean
Upgrade the client kernel if needed (left to the reader; older kernels may not support all image features)
# Map the image on the client
$ rbd map myrbd/myimg1
# Check the result
$ lsblk
NAME                      MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
rbd0                      252:0    0     5G  0 disk
sr0                        11:0    1  1024M  0 rom
rbd1                      252:16   0     5G  0 disk
sda                         8:0    0   200G  0 disk
├─sda2                      8:2    0     1K  0 part
├─sda5                      8:5    0 199.3G  0 part
│ ├─ubuntu--vg-swap_1     253:1    0   980M  0 lvm
│ │ └─cryptswap1          253:2    0 979.5M  0 crypt [SWAP]
│ └─ubuntu--vg-root       253:0    0 198.3G  0 lvm   /
└─sda1                      8:1    0   731M  0 part  /boot
# Format the device
$ mkfs.xfs /dev/rbd0
# Mount it
$ mount /dev/rbd0 /root/data/
# Confirm the mount
$ df -h
Filesystem                   Size  Used Avail Use% Mounted on
udev                         3.9G     0  3.9G   0% /dev
tmpfs                        798M   19M  779M   3% /run
/dev/mapper/ubuntu--vg-root  196G  7.1G  179G   4% /
tmpfs                        3.9G     0  3.9G   0% /dev/shm
tmpfs                        5.0M     0  5.0M   0% /run/lock
tmpfs                        3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/sda1                    720M  176M  508M  26% /boot
tmpfs                        798M     0  798M   0% /run/user/0
overlay                      196G  7.1G  179G   4% /var/lib/docker/overlay2/437540fd59af26b759cbb0d529c7441988af7b5481de9bc526048c303ba2b42f/merged
overlay                      196G  7.1G  179G   4% /var/lib/docker/overlay2/0a97fd71931b39f5080102748bd8ead60883fe5ca1dc293ed2d16eb6a388e8ed/merged
overlay                      196G  7.1G  179G   4% /var/lib/docker/overlay2/3aac1806df6c62da13c7bb91160d1324b66a4939f69a7e7dc262a3d9928bb788/merged
shm                           64M     0   64M   0% /var/lib/docker/containers/ae6549bfee1770dfb6b86cd0b3a4d2dcea9cd5219868f6f7539dda359974d4f7/mounts/shm
shm                           64M     0   64M   0% /var/lib/docker/containers/4f0ab7d32749845510d351701584b50ce9e340724714c6e5f3564413febc11bb/mounts/shm
shm                           64M   20K   64M   1% /var/lib/docker/containers/ea0deff0be06419cc7ace76f1b810ca18df3ec2ac6d9c19155643e940a31971f/mounts/shm
overlay                      196G  7.1G  179G   4% /var/lib/docker/overlay2/7c40806e67270cf5f395ea30c606e51e846e292ecc0c22a8aa2b2ff215fc1e49/merged
shm                           64M     0   64M   0% /var/lib/docker/containers/889c671b74d8356116f6dbeaf709e06399105f0b6ce41141d4530187b1a275fb/mounts/shm
/dev/rbd0                    5.0G   38M  5.0G   1% /root/data
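The map and mount above do not survive a reboot. A minimal sketch of making them persistent, assuming ceph-common ships the rbdmap service (the paths and mount options here are illustrative and should be checked against your distribution):

```
# /etc/ceph/rbdmap -- one image per line: pool/image  map-options
myrbd/myimg1 id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

# /etc/fstab -- noauto + _netdev lets the rbdmap service mount after mapping
/dev/rbd/myrbd/myimg1 /root/data xfs noauto,_netdev 0 0
```

Then enable the service (e.g. systemctl enable rbdmap) so the image is mapped and mounted at boot.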