Ceph V4.0 Environment Setup and Recommendations (3)

Linux OS Ceph Configuration (Minimum)
1) Ceph OSD server:
Volume Storage: 1x storage drive per daemon
block.db: Optional but recommended; 1x SSD, NVMe, or Optane partition or logical volume per daemon. Size it at 4% of block.data for BlueStore.
block.wal: Optional; 1x SSD, NVMe, or Optane partition or logical volume per daemon. Use a small size, for example 10 GB, and only if it is faster than the block.db device.
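The 4% block.db sizing rule above is simple arithmetic; the sketch below works it through for an assumed 4 TB data drive (the drive size and the device paths in the comments are illustrative, not from the original):

```shell
# Assumed example: a 4 TB (4096 GB) HDD used as block.data.
DATA_GB=4096

# block.db should be sized at ~4% of block.data for BlueStore.
DB_GB=$(( DATA_GB * 4 / 100 ))
echo "block.db: ${DB_GB} GB"   # 4% of 4096 GB = 163 GB (integer division)

# A small fixed block.wal, used only if its device is faster than block.db's.
WAL_GB=10
echo "block.wal: ${WAL_GB} GB"

# The OSD itself would then be created with ceph-volume, e.g.
# (hypothetical device paths):
#   ceph-volume lvm create --bluestore --data /dev/sdb \
#       --block.db /dev/nvme0n1p1 --block.wal /dev/nvme0n1p2
```

The exact partition or logical-volume layout depends on how many OSDs share each SSD/NVMe device; the percentage rule is applied per daemon.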

2) ceph-mon:
Monitor Disk: Optional; 1x SSD disk for the monitor's rocksdb data.

3) ceph-mgr:
4) ceph-radosgw:
5) ceph-mds:
Disk: 2 MB per daemon, plus any space required for logging, which might vary depending on the configured log levels.

Containerized Ceph Configuration (Minimum):
1) Ceph-osd-container:
OSD Storage: 1x storage drive per OSD container; cannot be shared with the OS disk.
block.db: Optional but recommended; 1x SSD, NVMe, or Optane partition or logical volume per daemon. Size it at 4% of block.data for BlueStore.
block.wal: Optional; 1x SSD, NVMe, or Optane partition or logical volume per daemon. Use a small size, for example 10 GB, and only if it is faster than the block.db device.
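Whether a candidate block.wal device really is faster than the block.db device can be sanity-checked with small synchronous writes, which approximate the WAL write pattern. A rough sketch using dd, where DB_DIR and WAL_DIR are assumed mount points (they default to /tmp here so the script is harmless to run as-is):

```shell
# Hypothetical mount points of the filesystems backed by the candidate
# block.db and block.wal devices; override to point at the real ones.
DB_DIR=${DB_DIR:-/tmp}
WAL_DIR=${WAL_DIR:-/tmp}

for target in "$DB_DIR" "$WAL_DIR"; do
    echo "== $target =="
    # 8 MiB of 4 KiB synchronous writes; the lower the elapsed time
    # reported by dd, the faster the device for WAL-style I/O.
    dd if=/dev/zero of="$target/wal_probe" bs=4k count=2048 oflag=dsync 2>&1 | tail -1
    rm -f "$target/wal_probe"
done
```

If the WAL candidate is not measurably faster, skip the separate block.wal and let BlueStore keep the WAL on the block.db device.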

2) Ceph-mon-container:
Monitor Disk: Optional; 1x SSD disk for the monitor's rocksdb data.

3) Ceph-mgr-container
4) ceph-radosgw-container
5) ceph-mds-container

Ceph Client Architecture:

Reference:
https://software.intel.com/content/www/us/en/develop/articles/using-intel-optane-technology-with-ceph-to-build-high-performance-oltp-solutions.html

posted @ 2021-03-01 15:56  Arcing