Ceph Pool Management

Pools

Pools are logical partitions for storing objects.

When you first deploy a cluster without creating a pool, Ceph uses the default pools for storing data. A pool provides you with:

  • Resilience: You can set how many OSDs are allowed to fail without losing data. For replicated pools, this is the desired number of copies/replicas of an object. A typical configuration stores an object and one additional copy (i.e., size = 2), but you can determine the number of copies/replicas. For erasure coded pools, it is the number of coding chunks (i.e., m=2 in the erasure code profile); see the profile example after this list.

  • Placement Groups: You can set the number of placement groups for the pool. A typical configuration uses approximately 100 placement groups per OSD to provide optimal balancing without using up too many computing resources. When setting up multiple pools, be careful to ensure you set a reasonable number of placement groups for both the pool and the cluster as a whole.

  • CRUSH Rules: When you store data in a pool, placement of the object and its replicas (or chunks for erasure coded pools) in your cluster is governed by CRUSH rules. You can create a custom CRUSH rule for your pool if the default rule is not appropriate for your use case.

  • Snapshots: When you create snapshots with ceph osd pool mksnap, you effectively take a snapshot of a particular pool.
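
For example, a matching erasure code profile with two coding chunks could be created before the pool itself; the profile name myprofile and the host failure domain here are illustrative assumptions, not defaults:

ceph osd erasure-code-profile set myprofile k=2 m=2 crush-failure-domain=host
ceph osd erasure-code-profile get myprofile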

To organize data into pools, you can list, create, and remove pools. You can also view the utilization statistics for each pool.

List Pools

To list your cluster’s pools, execute:

ceph osd lspools
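
If you need more than the pool names (pool ID, replication size, CRUSH rule, pg_num, flags and so on), the following should also work on recent releases:

ceph osd pool ls detail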

Create a Pool

Before creating pools, refer to the Pool, PG and CRUSH Config Reference. Ideally, you should override the default value for the number of placement groups in your Ceph configuration file, as the default is NOT ideal. For details on placement group numbers, refer to Setting the Number of Placement Groups.

Note

Starting with Luminous, all pools need to be associated with the application that will be using the pool. See Associate Pool to Application below for more information.

For example:

osd pool default pg num = 100
osd pool default pgp num = 100

To create a pool, execute:

ceph osd pool create {pool-name} [{pg-num} [{pgp-num}]] [replicated] \
     [crush-rule-name] [expected-num-objects]
ceph osd pool create {pool-name} [{pg-num} [{pgp-num}]] erasure \
     [erasure-code-profile] [crush-rule-name] [expected-num-objects] [--autoscale-mode=<on,off,warn>]
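
As a concrete sketch (the pool names, PG counts and profile name below are hypothetical, not recommendations):

ceph osd pool create mypool 128 128 replicated
ceph osd pool create ecpool 128 128 erasure myprofile

The erasure code profile (myprofile here) must already exist; see the profile example earlier in this post.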

Associate Pool to Application

Pools need to be associated with an application before use. Pools that will be used with CephFS or pools that are automatically created by RGW are automatically associated. Pools that are intended for use with RBD should be initialized using the rbd tool (see Block Device Commands for more information).
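
For instance, a pool intended for RBD (the pool name rbdpool is hypothetical) would typically be initialized with:

rbd pool init rbdpool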

For other cases, you can manually associate a free-form application name with a pool:

ceph osd pool application enable {pool-name} {application-name}

Note

CephFS uses the application name cephfs, RBD uses the application name rbd, and RGW uses the application name rgw.
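
As an illustration, tagging a hypothetical pool mypool with a made-up application name myapp would look like:

ceph osd pool application enable mypool myapp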

Set Pool Quotas

You can set pool quotas for the maximum number of bytes and/or the maximum number of objects per pool.

ceph osd pool set-quota {pool-name} [max_objects {obj-count}] [max_bytes {bytes}]

For example:

ceph osd pool set-quota data max_objects 10000

To remove a quota, set its value to 0.
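
For example, to clear the object quota set on the data pool above:

ceph osd pool set-quota data max_objects 0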

Delete a Pool

To delete a pool, execute:

ceph osd pool delete {pool-name} [{pool-name} --yes-i-really-really-mean-it]

To remove a pool, the mon_allow_pool_delete flag must be set to true in the Monitor’s configuration; otherwise the monitors will refuse to remove the pool.

See Monitor Configuration for more information.
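
As a sketch, on releases with the centralized configuration database (Mimic and later) the flag could be enabled, the pool removed, and the flag turned back off; the pool name mypool is hypothetical:

ceph config set mon mon_allow_pool_delete true
ceph osd pool delete mypool mypool --yes-i-really-really-mean-it
ceph config set mon mon_allow_pool_delete false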

If you created your own CRUSH rules for a pool, you should consider removing them when you no longer need the pool:

ceph osd pool get {pool-name} crush_rule

If the rule was “123”, for example, you can check the other pools like so:

ceph osd dump | grep "^pool" | grep "crush_rule 123"

If no other pools use that custom rule, then it’s safe to delete that rule from the cluster.
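
To map the rule ID from the dump back to a rule name and remove it, something along these lines should work:

ceph osd crush rule dump        # find the rule_name whose rule_id is 123
ceph osd crush rule rm {rule-name}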

If you created users with permissions strictly for a pool that no longer exists, you should consider deleting those users too:

ceph auth ls | grep -C 5 {pool-name}
ceph auth del {user}

Rename a Pool

To rename a pool, execute:

ceph osd pool rename {current-pool-name} {new-pool-name}

If you rename a pool and you have per-pool capabilities for an authenticated user, you must update the user’s capabilities (i.e., caps) with the new pool name.
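
A hedged sketch of such an update (the user name client.myuser and the capability strings are assumptions; note that ceph auth caps replaces the user’s full set of caps, so all of them must be restated):

ceph auth caps client.myuser mon 'allow r' osd 'allow rw pool={new-pool-name}'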

Show Pool Statistics

To show a pool’s utilization statistics, execute:

rados df

Additionally, to obtain I/O information for a specific pool or for all pools, execute:

ceph osd pool stats [{pool-name}]

Make a Snapshot of a Pool

To make a snapshot of a pool, execute:

ceph osd pool mksnap {pool-name} {snap-name}

Remove a Snapshot of a Pool

To remove a snapshot of a pool, execute:

ceph osd pool rmsnap {pool-name} {snap-name}

Set Pool Values

To set a value for a pool, execute the following:

ceph osd pool set {pool-name} {key} {value}
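
For example, to raise the number of placement groups on a hypothetical pool (on releases before Nautilus you would also need to raise pgp_num to match):

ceph osd pool set mypool pg_num 128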


Get Pool Values

To get a value from a pool, execute the following:

ceph osd pool get {pool-name} {key}
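
For example, to read back the replication size and placement group count of a hypothetical pool:

ceph osd pool get mypool size
ceph osd pool get mypool pg_num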


Set the Number of Object Replicas

To set the number of object replicas on a replicated pool, execute the following:

ceph osd pool set {poolname} size {num-replicas}

Important

The {num-replicas} includes the object itself. If you want the object and two copies of the object for a total of three instances of the object, specify 3.

For example:

ceph osd pool set data size 3

You may execute this command for each pool. Note that an object might accept I/O in degraded mode with fewer than the pool’s size replicas. To set a minimum number of replicas required for I/O, use the min_size setting:

ceph osd pool set data min_size 2

This ensures that no object in the data pool will receive I/O with fewer than min_size replicas.
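
To confirm that both values took effect, they can be read back with the get command shown earlier:

ceph osd pool get data size
ceph osd pool get data min_size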

Get the Number of Object Replicas

To get the number of object replicas, execute the following:

ceph osd dump | grep 'replicated size'

Ceph will list the pools, with the replicated size attribute highlighted. By default, Ceph creates two replicas of an object (a total of three copies, or a size of 3).
