General usage:
==============
usage: ceph [-h] [-c CEPHCONF] [-i INPUT_FILE] [-o OUTPUT_FILE]
            [--setuser SETUSER] [--setgroup SETGROUP] [--id CLIENT_ID]
            [--name CLIENT_NAME] [--cluster CLUSTER]
            [--admin-daemon ADMIN_SOCKET] [-s] [-w] [--watch-debug]
            [--watch-info] [--watch-sec] [--watch-warn] [--watch-error]
            [--watch-channel {cluster,audit,*}] [--version] [--verbose]
            [--concise] [-f {json,json-pretty,xml,xml-pretty,plain}]
            [--connect-timeout CLUSTER_TIMEOUT] [--block] [--period PERIOD]

Ceph administration tool

optional arguments:
  -h, --help            request mon help
  -c CEPHCONF, --conf CEPHCONF
                        ceph configuration file
  -i INPUT_FILE, --in-file INPUT_FILE
                        input file, or "-" for stdin
  -o OUTPUT_FILE, --out-file OUTPUT_FILE
                        output file, or "-" for stdout
  --setuser SETUSER     set user file permission
  --setgroup SETGROUP   set group file permission
  --id CLIENT_ID, --user CLIENT_ID
                        client id for authentication
  --name CLIENT_NAME, -n CLIENT_NAME
                        client name for authentication
  --cluster CLUSTER     cluster name
  --admin-daemon ADMIN_SOCKET
                        submit admin-socket commands ("help" for help)
  -s, --status          show cluster status
  -w, --watch           watch live cluster changes
  --watch-debug         watch debug events
  --watch-info          watch info events
  --watch-sec           watch security events
  --watch-warn          watch warn events
  --watch-error         watch error events
  --watch-channel {cluster,audit,*}
                        which log channel to follow when using -w/--watch. One
                        of ['cluster', 'audit', '*']
  --version, -v         display version
  --verbose             make verbose
  --concise             make less verbose
  -f {json,json-pretty,xml,xml-pretty,plain}, --format {json,json-pretty,xml,xml-pretty,plain}
                        output format
  --connect-timeout CLUSTER_TIMEOUT
                        set a timeout for connecting to the cluster
  --block               block until completion (scrub and deep-scrub only)
  --period PERIOD, -p PERIOD
                        polling period, default 1.0 second (for polling
                        commands only)
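
For example, a few common ways to combine these global options (illustrative only; the client id and timeout below are placeholders for whatever your deployment uses):

    ceph -s                                       # one-shot cluster status
    ceph -w --watch-channel audit                 # follow the audit log channel live
    ceph -f json-pretty osd dump                  # OSD map in machine-readable form
    ceph --id admin --connect-timeout 10 osd stat # authenticate as client.admin; give up after 10s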

Local commands:
===============

ping <mon.id>           Send simple presence/life test to a mon
                        <mon.id> may be 'mon.*' for all mons
daemon {type.id|path} <cmd>
                        Same as --admin-daemon, but auto-find admin socket
daemonperf {type.id | path} [stat-pats] [priority] [<interval>] [<count>]
daemonperf {type.id | path} list|ls [stat-pats] [priority]
                        Get selected perf stats from daemon/admin socket
                        Optional shell-glob comma-delim match string stat-pats
                        Optional selection priority (can abbreviate name):
                         critical, interesting, useful, noninteresting, debug
                        List shows a table of all available stats
                        Run <count> times (default forever),
                         once per <interval> seconds (default 1)
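
For example (osd.0 is a placeholder; daemon and daemonperf talk to a local daemon's admin socket, so they must run on the host where that daemon lives):

    ceph ping 'mon.*'                 # presence/life test against every monitor
    ceph daemon osd.0 help            # list the admin-socket commands osd.0 accepts
    ceph daemon osd.0 config show     # dump the daemon's runtime configuration
    ceph daemonperf osd.0 list        # table of perf counters available for polling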
    

Monitor commands:
=================
osd blacklist add|rm <EntityAddr> {<float[0.0-]>}                                     add (optionally until <expire> seconds from now) or remove <addr> from blacklist
osd blacklist clear                                                                   clear all blacklisted clients
osd blacklist ls                                                                      show blacklisted clients
osd blocked-by                                                                        print histogram of which OSDs are blocking their peers
osd count-metadata <property>                                                         count OSDs by metadata field property
osd crush add <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]                  add or update crushmap position and weight for <name> with <weight> and location 
                                                                                       <args>
osd crush add-bucket <name> <type> {<args> [<args>...]}                               add no-parent (probably root) crush bucket <name> of type <type> to location <args>
osd crush class create <class>                                                        create crush device class <class>
osd crush class ls                                                                    list all crush device classes
osd crush class ls-osd <class>                                                        list all osds belonging to the specific <class>
osd crush class rename <srcname> <dstname>                                            rename crush device class <srcname> to <dstname>
osd crush class rm <class>                                                            remove crush device class <class>
osd crush create-or-move <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]       create entry or move existing entry for <name> <weight> at/to location <args>
osd crush dump                                                                        dump crush map
osd crush get-device-class <ids> [<ids>...]                                           get classes of specified osd(s) <id> [<id>...]
osd crush get-tunable straw_calc_version                                              get crush tunable <tunable>
osd crush link <name> <args> [<args>...]                                              link existing entry for <name> under location <args>
osd crush ls <node>                                                                   list items beneath a node in the CRUSH tree
osd crush move <name> <args> [<args>...]                                              move existing entry for <name> to location <args>
osd crush rename-bucket <srcname> <dstname>                                           rename bucket <srcname> to <dstname>
osd crush reweight <name> <float[0.0-]>                                               change <name>'s weight to <weight> in crush map
osd crush reweight-all                                                                recalculate the weights for the tree to ensure they sum correctly
osd crush reweight-subtree <name> <float[0.0-]>                                       change all leaf items beneath <name> to <weight> in crush map
osd crush rm <name> {<ancestor>}                                                      remove <name> from crush map (everywhere, or just at <ancestor>)
osd crush rm-device-class <ids> [<ids>...]                                            remove class of the osd(s) <id> [<id>...], or use <all|any> to remove all
osd crush rule create-erasure <name> {<profile>}                                      create crush rule <name> for erasure coded pool created with <profile> (default 
                                                                                       default)
osd crush rule create-replicated <name> <root> <type> {<class>}                       create crush rule <name> for replicated pool to start from <root>, replicate across 
                                                                                       buckets of type <type>, use devices of type <class> (ssd or hdd)
osd crush rule create-simple <name> <root> <type> {firstn|indep}                      create crush rule <name> to start from <root>, replicate across buckets of type 
                                                                                       <type>, using a choose mode of <firstn|indep> (default firstn; indep best for 
                                                                                       erasure pools)
osd crush rule dump {<name>}                                                          dump crush rule <name> (default all)
osd crush rule ls                                                                     list crush rules
osd crush rule ls-by-class <class>                                                    list all crush rules that reference the same <class>
osd crush rule rename <srcname> <dstname>                                             rename crush rule <srcname> to <dstname>
osd crush rule rm <name>                                                              remove crush rule <name>
osd crush set <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]                  update crushmap position and weight for <name> to <weight> with location <args>
osd crush set {<int>}                                                                 set crush map from input file
osd crush set-all-straw-buckets-to-straw2                                             convert all current CRUSH straw buckets to use the straw2 algorithm
osd crush set-device-class <class> <ids> [<ids>...]                                   set the <class> of the osd(s) <id> [<id>...], or use <all|any> to set all
osd crush set-tunable straw_calc_version <int>                                        set crush tunable <tunable> to <value>
osd crush show-tunables                                                               show current crush tunables
osd crush swap-bucket <source> <dest> {--yes-i-really-mean-it}                        swap existing bucket contents from (orphan) bucket <source> to <dest>
osd crush tree {--show-shadow}                                                        dump crush buckets and items in a tree view
osd crush tunables legacy|argonaut|bobtail|firefly|hammer|jewel|optimal|default       set crush tunables values to <profile>
osd crush unlink <name> {<ancestor>}                                                  unlink <name> from crush map (everywhere, or just at <ancestor>)
osd crush weight-set create <poolname> flat|positional                                create a weight-set for a given pool
osd crush weight-set create-compat                                                    create a default backward-compatible weight-set
osd crush weight-set dump                                                             dump crush weight sets
osd crush weight-set ls                                                               list crush weight sets
osd crush weight-set reweight <poolname> <item> <float[0.0-]> [<float[0.0-]>...]      set weight for an item (bucket or osd) in a pool's weight-set
osd crush weight-set reweight-compat <item> <float[0.0-]> [<float[0.0-]>...]          set weight for an item (bucket or osd) in the backward-compatible weight-set
osd crush weight-set rm <poolname>                                                    remove the weight-set for a given pool
osd crush weight-set rm-compat                                                        remove the backward-compatible weight-set
osd deep-scrub <who>                                                                  initiate deep scrub on osd <who>, or use <all|any> to deep scrub all
osd destroy <osdname (id|osd.id)> {--force} {--yes-i-really-mean-it}                  mark osd as being destroyed. Keeps the ID intact (allowing reuse), but removes cephx 
                                                                                       keys, config-key data and lockbox keys, rendering data permanently unreadable.
osd df {plain|tree} {class|name} {<filter>}                                           show OSD utilization
osd down <ids> [<ids>...]                                                             set osd(s) <id> [<id>...] down, or use <any|all> to set all osds down
osd dump {<int[0-]>}                                                                  print summary of OSD map
osd erasure-code-profile get <name>                                                   get erasure code profile <name>
osd erasure-code-profile ls                                                           list all erasure code profiles
osd erasure-code-profile rm <name>                                                    remove erasure code profile <name>
osd erasure-code-profile set <name> {<profile> [<profile>...]} {--force}              create erasure code profile <name> with [<key[=value]> ...] pairs. Add a --force at 
                                                                                       the end to override an existing profile (VERY DANGEROUS)
osd find <osdname (id|osd.id)>                                                        find osd <id> in the CRUSH map and show its location
osd force-create-pg <pgid> {--yes-i-really-mean-it}                                   force creation of pg <pgid>
osd get-require-min-compat-client                                                     get the minimum client version we will maintain compatibility with
osd getcrushmap {<int[0-]>}                                                           get CRUSH map
osd getmap {<int[0-]>}                                                                get OSD map
osd getmaxosd                                                                         show largest OSD id
osd in <ids> [<ids>...]                                                               set osd(s) <id> [<id>...] in, can use <any|all> to automatically set all previously 
                                                                                       out osds in
osd last-stat-seq <osdname (id|osd.id)>                                               get the last pg stats sequence number reported for this osd
osd lost <osdname (id|osd.id)> {--yes-i-really-mean-it}                               mark osd as permanently lost. THIS DESTROYS DATA IF NO MORE REPLICAS EXIST, BE 
                                                                                       CAREFUL
osd ls {<int[0-]>}                                                                    show all OSD ids
osd ls-tree {<int[0-]>} <name>                                                        show OSD ids under bucket <name> in the CRUSH map
osd map <poolname> <objectname> {<nspace>}                                            find pg for <object> in <pool> with [namespace]
osd metadata {<osdname (id|osd.id)>}                                                  fetch metadata for osd {id} (default all)
osd new <uuid> {<osdname (id|osd.id)>}                                                Create a new OSD. If supplied, the `id` to be replaced needs to exist and have been 
                                                                                       previously destroyed. Reads secrets from JSON file via `-i <file>` (see man page).
osd numa-status                                                                       show NUMA status of OSDs
osd ok-to-stop <ids> [<ids>...]                                                       check whether osd(s) can be safely stopped without reducing immediate data 
                                                                                       availability
osd out <ids> [<ids>...]                                                              set osd(s) <id> [<id>...] out, or use <any|all> to set all osds out
osd pause                                                                             pause osd
osd perf                                                                              print dump of OSD perf summary stats
osd pg-temp <pgid> {<osdname (id|osd.id)> [<osdname (id|osd.id)>...]}                 set pg_temp mapping pgid:[<id> [<id>...]] (developers only)
osd pg-upmap <pgid> <osdname (id|osd.id)> [<osdname (id|osd.id)>...]                  set pg_upmap mapping <pgid>:[<id> [<id>...]] (developers only)
osd pg-upmap-items <pgid> <osdname (id|osd.id)> [<osdname (id|osd.id)>...]            set pg_upmap_items mapping <pgid>:{<id> to <id>, [...]} (developers only)
osd pool application disable <poolname> <app> {--yes-i-really-mean-it}                disables use of an application <app> on pool <poolname>
osd pool application enable <poolname> <app> {--yes-i-really-mean-it}                 enable use of an application <app> [cephfs,rbd,rgw] on pool <poolname>
osd pool application get {<poolname>} {<app>} {<key>}                                 get value of key <key> of application <app> on pool <poolname>
osd pool application rm <poolname> <app> <key>                                        removes application <app> metadata key <key> on pool <poolname>
osd pool application set <poolname> <app> <key> <value>                               sets application <app> metadata key <key> to <value> on pool <poolname>
osd pool autoscale-status                                                             report on pool pg_num sizing recommendation and intent
osd pool cancel-force-backfill <poolname> [<poolname>...]                             restore normal recovery priority of specified pool <who>
osd pool cancel-force-recovery <poolname> [<poolname>...]                             restore normal recovery priority of specified pool <who>
osd pool create <poolname> <int[0-]> {<int[0-]>} {replicated|erasure} {<erasure_code_ create pool
 profile>} {<rule>} {<int>} {<int>} {<int[0-]>} {<int[0-]>} {<float[0.0-1.0]>}        
osd pool deep-scrub <poolname> [<poolname>...]                                        initiate deep-scrub on pool <who>
osd pool force-backfill <poolname> [<poolname>...]                                    force backfill of specified pool <who> first
osd pool force-recovery <poolname> [<poolname>...]                                    force recovery of specified pool <who> first
osd pool get <poolname> size|min_size|pg_num|pgp_num|crush_rule|hashpspool|nodelete|  get pool parameter <var>
 nopgchange|nosizechange|write_fadvise_dontneed|noscrub|nodeep-scrub|hit_set_type|    
 hit_set_period|hit_set_count|hit_set_fpp|use_gmt_hitset|target_max_objects|target_   
 max_bytes|cache_target_dirty_ratio|cache_target_dirty_high_ratio|cache_target_full_  
 ratio|cache_min_flush_age|cache_min_evict_age|erasure_code_profile|min_read_recency_ 
 for_promote|all|min_write_recency_for_promote|fast_read|hit_set_grade_decay_rate|    
 hit_set_search_last_n|scrub_min_interval|scrub_max_interval|deep_scrub_interval|     
 recovery_priority|recovery_op_priority|scrub_priority|compression_mode|compression_  
 algorithm|compression_required_ratio|compression_max_blob_size|compression_min_blob_ 
 size|csum_type|csum_min_block|csum_max_block|allow_ec_overwrites|fingerprint_        
 algorithm|pg_autoscale_mode|pg_autoscale_bias|pg_num_min|target_size_bytes|target_   
 size_ratio                                                                           
osd pool get-quota <poolname>                                                         obtain object or byte limits for pool
osd pool ls {detail}                                                                  list pools
osd pool mksnap <poolname> <snap>                                                     make snapshot <snap> in <pool>
osd pool rename <poolname> <poolname>                                                 rename <srcpool> to <destpool>
osd pool repair <poolname> [<poolname>...]                                            initiate repair on pool <who>
osd pool rm <poolname> {<poolname>} {--yes-i-really-really-mean-it} {--yes-i-really-  remove pool
 really-mean-it-not-faking}                                                           
osd pool rmsnap <poolname> <snap>                                                     remove snapshot <snap> from <pool>
osd pool scrub <poolname> [<poolname>...]                                             initiate scrub on pool <who>
osd pool set <poolname> size|min_size|pg_num|pgp_num|pgp_num_actual|crush_rule|       set pool parameter <var> to <val>
 hashpspool|nodelete|nopgchange|nosizechange|write_fadvise_dontneed|noscrub|nodeep-   
 scrub|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|use_gmt_hitset|target_   
 max_bytes|target_max_objects|cache_target_dirty_ratio|cache_target_dirty_high_ratio| 
 cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|min_read_recency_    
 for_promote|min_write_recency_for_promote|fast_read|hit_set_grade_decay_rate|hit_    
 set_search_last_n|scrub_min_interval|scrub_max_interval|deep_scrub_interval|         
 recovery_priority|recovery_op_priority|scrub_priority|compression_mode|compression_  
 algorithm|compression_required_ratio|compression_max_blob_size|compression_min_blob_ 
 size|csum_type|csum_min_block|csum_max_block|allow_ec_overwrites|fingerprint_        
 algorithm|pg_autoscale_mode|pg_autoscale_bias|pg_num_min|target_size_bytes|target_   
 size_ratio <val> {--yes-i-really-mean-it}                                            
osd pool set-quota <poolname> max_objects|max_bytes <val>                             set object or byte limit on pool
osd pool stats {<poolname>}                                                           obtain stats from all pools, or from specified pool
osd primary-affinity <osdname (id|osd.id)> <float[0.0-1.0]>                           adjust osd primary-affinity from 0.0 <= <weight> <= 1.0
osd primary-temp <pgid> <osdname (id|osd.id)>                                         set primary_temp mapping pgid:<id>|-1 (developers only)
osd purge <osdname (id|osd.id)> {--force} {--yes-i-really-mean-it}                    purge all osd data from the monitors including the OSD id and CRUSH position
osd purge-new <osdname (id|osd.id)> {--yes-i-really-mean-it}                          purge all traces of an OSD that was partially created but never started
osd repair <who>                                                                      initiate repair on osd <who>, or use <all|any> to repair all
osd require-osd-release luminous|mimic|nautilus {--yes-i-really-mean-it}              set the minimum allowed OSD release to participate in the cluster
osd reweight <osdname (id|osd.id)> <float[0.0-1.0]>                                   reweight osd to 0.0 < <weight> < 1.0
osd reweight-by-pg {<int>} {<float>} {<int>} {<poolname> [<poolname>...]}             reweight OSDs by PG distribution [overload-percentage-for-consideration, default 120]
osd reweight-by-utilization {<int>} {<float>} {<int>} {--no-increasing}               reweight OSDs by utilization [overload-percentage-for-consideration, default 120]
osd reweightn <weights>                                                               reweight osds with {<id>: <weight>,...}
osd rm-pg-upmap <pgid>                                                                clear pg_upmap mapping for <pgid> (developers only)
osd rm-pg-upmap-items <pgid>                                                          clear pg_upmap_items mapping for <pgid> (developers only)
osd safe-to-destroy <ids> [<ids>...]                                                  check whether osd(s) can be safely destroyed without reducing data durability
osd scrub <who>                                                                       initiate scrub on osd <who>, or use <all|any> to scrub all
osd set full|pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub|   set <key>
 nodeep-scrub|notieragent|nosnaptrim|pglog_hardlimit {--yes-i-really-mean-it}         
osd set-backfillfull-ratio <float[0.0-1.0]>                                           set usage ratio at which OSDs are marked too full to backfill
osd set-full-ratio <float[0.0-1.0]>                                                   set usage ratio at which OSDs are marked full
osd set-group <flags> <who> [<who>...]                                                set <flags> for batch osds or crush nodes, <flags> must be a comma-separated subset 
                                                                                       of {noup,nodown,noin,noout}
osd set-nearfull-ratio <float[0.0-1.0]>                                               set usage ratio at which OSDs are marked near-full
osd set-require-min-compat-client <version> {--yes-i-really-mean-it}                  set the minimum client version we will maintain compatibility with
osd setcrushmap {<int>}                                                               set crush map from input file
osd setmaxosd <int[0-]>                                                               set new maximum osd value
osd stat                                                                              print summary of OSD map
osd status {<bucket>}                                                                 Show the status of OSDs within a bucket, or all
osd test-reweight-by-pg {<int>} {<float>} {<int>} {<poolname> [<poolname>...]}        dry run of reweight OSDs by PG distribution [overload-percentage-for-consideration, 
                                                                                       default 120]
osd test-reweight-by-utilization {<int>} {<float>} {<int>} {--no-increasing}          dry run of reweight OSDs by utilization [overload-percentage-for-consideration, 
                                                                                       default 120]
osd tier add <poolname> <poolname> {--force-nonempty}                                 add the tier <tierpool> (the second one) to base pool <pool> (the first one)
osd tier add-cache <poolname> <poolname> <int[0-]>                                    add a cache <tierpool> (the second one) of size <size> to existing pool <pool> (the 
                                                                                       first one)
osd tier cache-mode <poolname> none|writeback|forward|readonly|readforward|proxy|     specify the caching mode for cache tier <pool>
 readproxy {--yes-i-really-mean-it}                                                   
osd tier rm <poolname> <poolname>                                                     remove the tier <tierpool> (the second one) from base pool <pool> (the first one)
osd tier rm-overlay <poolname>                                                        remove the overlay pool for base pool <pool>
osd tier set-overlay <poolname> <poolname>                                            set the overlay pool for base pool <pool> to be <overlaypool>
osd tree {<int[0-]>} {up|down|in|out|destroyed [up|down|in|out|destroyed...]}         print OSD tree
osd tree-from {<int[0-]>} <bucket> {up|down|in|out|destroyed [up|down|in|out|         print OSD tree in bucket
 destroyed...]}                                                                       
osd unpause                                                                           unpause osd
osd unset full|pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub| unset <key>
 nodeep-scrub|notieragent|nosnaptrim                                                  
osd unset-group <flags> <who> [<who>...]                                              unset <flags> for batch osds or crush nodes, <flags> must be a comma-separated 
                                                                                       subset of {noup,nodown,noin,noout}
osd utilization                                                                       get basic pg distribution stats
osd versions                                                                          check running versions of OSDs
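
A sketch of the crush workflow using the commands above (the host name node2, the weights, and the rule name fast are placeholders for your own topology):

    ceph osd crush add osd.5 1.0 host=node2 root=default          # place osd.5 in the map with weight 1.0
    ceph osd crush reweight osd.5 0.5                             # halve its crush weight
    ceph osd crush rule create-replicated fast default host ssd   # replicated rule limited to ssd devices
    ceph osd crush tree --show-shadow                             # verify the result, including shadow trees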
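
Similarly for pools (mypool and the PG count are examples, not recommendations; size pg_num for your cluster, or rely on pg_autoscale_mode):

    ceph osd pool create mypool 128                         # replicated pool with 128 placement groups
    ceph osd pool application enable mypool rbd             # tag the pool for rbd before first use
    ceph osd pool set mypool size 3                         # keep three replicas
    ceph osd pool set-quota mypool max_bytes 107374182400   # 100 GiB byte quota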
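
And two common operational patterns built from the osd flag and lifecycle commands (osd id 5 is a placeholder; treat this as a sketch and verify each step on your own cluster):

    # maintenance window: stop the cluster from marking down OSDs out while a host reboots
    ceph osd set noout
    # ... reboot the host ...
    ceph osd unset noout

    # permanently retire an OSD
    ceph osd out 5                             # begin migrating its PGs elsewhere
    ceph osd safe-to-destroy 5                 # after recovery finishes, confirm durability is unaffected
    ceph osd purge 5 --yes-i-really-mean-it    # drop its cephx keys, id, and crush entry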