
Preface:

Ceph installation (quick start):
1, preflight (install the ceph-deploy repository and tools)
2, Ceph storage cluster quick start
3, block device quick start
4, Ceph file system quick start
5, Ceph object storage quick start

(1,preflight)

Ceph is a distributed object store and file system designed to provide excellent performance, reliability and scalability.
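The preflight step amounts to adding the Ceph repository on the admin node and installing ceph-deploy. A minimal sketch for CentOS 6 with the firefly release; the baseurl and gpgkey match the URLs that appear in the install log further below, while the repo file name is a conventional assumption:

sudo vi /etc/yum.repos.d/ceph.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-firefly/el6/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

sudo yum update && sudo yum install ceph-deploy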

 

(2,ceph storage cluster quick start)

If at any point you run into trouble and you want to start over, execute the following to purge the configuration:

ceph-deploy purgedata {ceph-node} [{ceph-node}]
ceph-deploy forgetkeys

To purge the Ceph packages too, you may also execute:

ceph-deploy purge {ceph-node} [{ceph-node}]
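Instantiated for this walkthrough's single node ruiy.cc, that would be:

ceph-deploy purgedata ruiy.cc
ceph-deploy forgetkeys
ceph-deploy purge ruiy.cc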

(The "Error in sys.exitfunc:" lines that ceph-deploy prints throughout the logs below are harmless interpreter-exit noise from ceph-deploy on Python 2.6 / CentOS 6 and can be ignored.)

  ceph-deploy new ruiy.cc
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/aceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.19): /usr/bin/ceph-deploy new ruiy.cc
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[ruiy.cc][DEBUG ] connection detected need for sudo
[ruiy.cc][DEBUG ] connected to host: ruiy.cc
[ruiy.cc][DEBUG ] detect platform information from remote host
[ruiy.cc][DEBUG ] detect machine type
[ruiy.cc][DEBUG ] find the location of an executable
[ruiy.cc][INFO  ] Running command: sudo /sbin/ip link show
[ruiy.cc][INFO  ] Running command: sudo /sbin/ip addr show
[ruiy.cc][DEBUG ] IP addresses found: ['10.128.129.216']
[ceph_deploy.new][DEBUG ] Resolving host ruiy.cc
[ceph_deploy.new][DEBUG ] Monitor ruiy at 10.128.129.216
[ceph_deploy.new][DEBUG ] Monitor initial members are ['ruiy']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['10.128.129.216']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
Error in sys.exitfunc:
[aceph@ruiy my-cluster]$
[aceph@ruiy my-cluster]$ ls
ceph.conf  ceph.log  ceph.mon.keyring
[aceph@ruiy my-cluster]$
[aceph@ruiy my-cluster]$ vi ceph.conf
[aceph@ruiy my-cluster]$ ceph-deploy install ruiy.cc
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/aceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.19): /usr/bin/ceph-deploy install ruiy.cc
[ceph_deploy.install][DEBUG ] Installing stable version firefly on cluster ceph hosts ruiy.cc
[ceph_deploy.install][DEBUG ] Detecting platform for host ruiy.cc ...
[ruiy.cc][DEBUG ] connection detected need for sudo
[ruiy.cc][DEBUG ] connected to host: ruiy.cc
[ruiy.cc][DEBUG ] detect platform information from remote host
[ruiy.cc][DEBUG ] detect machine type
[ceph_deploy.install][INFO  ] Distro info: CentOS 6.4 Final
[ruiy.cc][INFO  ] installing ceph on ruiy.cc
[ruiy.cc][INFO  ] Running command: sudo yum clean all
[ruiy.cc][DEBUG ] Loaded plugins: fastestmirror, security
[ruiy.cc][DEBUG ] Cleaning repos: base ceph-noarch epel extras updates
[ruiy.cc][DEBUG ] Cleaning up Everything
[ruiy.cc][DEBUG ] Cleaning up list of fastest mirrors
[ruiy.cc][INFO  ] adding EPEL repository
[ruiy.cc][INFO  ] Running command: sudo yum -y install epel-release
[ruiy.cc][DEBUG ] Loaded plugins: fastestmirror, security
[ruiy.cc][DEBUG ] Determining fastest mirrors
[ruiy.cc][DEBUG ]  * base: mirrors.btte.net
[ruiy.cc][DEBUG ]  * extras: mirrors.btte.net
[ruiy.cc][DEBUG ]  * updates: mirrors.btte.net
[ruiy.cc][DEBUG ] Setting up Install Process
[ruiy.cc][DEBUG ] Package epel-release-6-8.noarch already installed and latest version
[ruiy.cc][DEBUG ] Nothing to do
[ruiy.cc][INFO  ] Running command: sudo yum -y install yum-plugin-priorities
[ruiy.cc][DEBUG ] Loaded plugins: fastestmirror, security
[ruiy.cc][DEBUG ] Loading mirror speeds from cached hostfile
[ruiy.cc][DEBUG ]  * base: mirrors.btte.net
[ruiy.cc][DEBUG ]  * extras: mirrors.btte.net
[ruiy.cc][DEBUG ]  * updates: mirrors.btte.net
[ruiy.cc][DEBUG ] Setting up Install Process
[ruiy.cc][DEBUG ] Resolving Dependencies
[ruiy.cc][DEBUG ] --> Running transaction check
[ruiy.cc][DEBUG ] ---> Package yum-plugin-priorities.noarch 0:1.1.30-30.el6 will be installed
[ruiy.cc][DEBUG ] --> Finished Dependency Resolution
[ruiy.cc][DEBUG ]
[ruiy.cc][DEBUG ] Dependencies Resolved
[ruiy.cc][DEBUG ]
[ruiy.cc][DEBUG ] ================================================================================
[ruiy.cc][DEBUG ]  Package                     Arch         Version              Repository  Size
[ruiy.cc][DEBUG ] ================================================================================
[ruiy.cc][DEBUG ] Installing:
[ruiy.cc][DEBUG ]  yum-plugin-priorities       noarch       1.1.30-30.el6        base        25 k
[ruiy.cc][DEBUG ]
[ruiy.cc][DEBUG ] Transaction Summary
[ruiy.cc][DEBUG ] ================================================================================
[ruiy.cc][DEBUG ] Install       1 Package(s)
[ruiy.cc][DEBUG ]
[ruiy.cc][DEBUG ] Total download size: 25 k
[ruiy.cc][DEBUG ] Installed size: 28 k
[ruiy.cc][DEBUG ] Downloading Packages:
[ruiy.cc][DEBUG ] Running rpm_check_debug
[ruiy.cc][DEBUG ] Running Transaction Test
[ruiy.cc][DEBUG ] Transaction Test Succeeded
[ruiy.cc][DEBUG ] Running Transaction
  Installing : yum-plugin-priorities-1.1.30-30.el6.noarch                   1/1
  Verifying  : yum-plugin-priorities-1.1.30-30.el6.noarch                   1/1
[ruiy.cc][DEBUG ]
[ruiy.cc][DEBUG ] Installed:
[ruiy.cc][DEBUG ]   yum-plugin-priorities.noarch 0:1.1.30-30.el6
[ruiy.cc][DEBUG ]
[ruiy.cc][DEBUG ] Complete!
[ruiy.cc][INFO  ] Running command: sudo rpm --import https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
[ruiy.cc][INFO  ] Running command: sudo rpm -Uvh --replacepkgs http://ceph.com/rpm-firefly/el6/noarch/ceph-release-1-0.el6.noarch.rpm
[ruiy.cc][DEBUG ] Retrieving http://ceph.com/rpm-firefly/el6/noarch/ceph-release-1-0.el6.noarch.rpm
[ruiy.cc][DEBUG ] Preparing...                ##################################################
[ruiy.cc][DEBUG ] ceph-release                ##################################################
[ruiy.cc][WARNIN] ensuring that /etc/yum.repos.d/ceph.repo contains a high priority
[ruiy.cc][WARNIN] altered ceph.repo priorities to contain: priority=1
[ruiy.cc][INFO  ] Running command: sudo yum -y install ceph
[ruiy.cc][DEBUG ] Loaded plugins: fastestmirror, priorities, security
[ruiy.cc][DEBUG ] Loading mirror speeds from cached hostfile
[ruiy.cc][DEBUG ]  * base: mirrors.btte.net
[ruiy.cc][DEBUG ]  * extras: mirrors.btte.net
[ruiy.cc][DEBUG ]  * updates: mirrors.btte.net

ceph-deploy mon create-initial (equivalent to running the two discrete steps:

ceph-deploy mon create ruiy.cc (create the initial monitors)
ceph-deploy gatherkeys ruiy.cc (gather the keys)

)

Ruiy tips: steps to install and configure a storage cluster using the ceph-deploy tool:

1, ceph-deploy new ruiy.cc (create the ceph cluster)

Edit ceph.conf to adjust the OSD (object storage daemon) defaults:
osd pool default size = ?  (number of object replicas; lower it for a small test cluster)
If the ceph server has multiple NICs, also set: public network = ip-address/netmask
(a concrete ceph.conf sketch follows this list)

2, ceph-deploy install ruiy.cc (install the ceph packages)

3, ceph-deploy mon create-initial
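For a single-node test cluster like this one, a minimal sketch of the [global] additions to ceph.conf; the concrete values are illustrative assumptions, not taken from the logs:

[global]
osd pool default size = 2
public network = 10.128.129.0/24

(osd pool default size = 2 keeps two replicas, enough once a second OSD is added; the subnet should match the NIC Ceph should use, here guessed from the monitor address 10.128.129.216.)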


ceph-deploy mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/aceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.19): /usr/bin/ceph-deploy mon create-initial
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ruiy
[ceph_deploy.mon][DEBUG ] detecting platform for host ruiy ...
[ruiy][DEBUG ] connection detected need for sudo
[ruiy][DEBUG ] connected to host: ruiy
[ruiy][DEBUG ] detect platform information from remote host
[ruiy][DEBUG ] detect machine type
[ceph_deploy.mon][INFO  ] distro info: CentOS 6.4 Final
[ruiy][DEBUG ] determining if provided host has same hostname in remote
[ruiy][DEBUG ] get remote short hostname
[ruiy][DEBUG ] deploying mon to ruiy
[ruiy][DEBUG ] get remote short hostname
[ruiy][DEBUG ] remote hostname: ruiy
[ruiy][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ruiy][DEBUG ] create the mon path if it does not exist
[ruiy][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ruiy/done
[ruiy][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-ruiy/done
[ruiy][INFO  ] creating keyring file: /var/lib/ceph/tmp/ceph-ruiy.mon.keyring
[ruiy][DEBUG ] create the monitor keyring file
[ruiy][INFO  ] Running command: sudo ceph-mon --cluster ceph --mkfs -i ruiy --keyring /var/lib/ceph/tmp/ceph-ruiy.mon.keyring
[ruiy][DEBUG ] ceph-mon: mon.noname-a 10.128.129.216:6789/0 is local, renaming to mon.ruiy
[ruiy][DEBUG ] ceph-mon: set fsid to 4bfa6016-cf13-4a1e-a6dd-b856713c275f
[ruiy][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-ruiy for mon.ruiy
[ruiy][INFO  ] unlinking keyring file /var/lib/ceph/tmp/ceph-ruiy.mon.keyring
[ruiy][DEBUG ] create a done file to avoid re-doing the mon deployment
[ruiy][DEBUG ] create the init path if it does not exist
[ruiy][DEBUG ] locating the `service` executable...
[ruiy][INFO  ] Running command: sudo /sbin/service ceph -c /etc/ceph/ceph.conf start mon.ruiy
[ruiy][DEBUG ] === mon.ruiy ===
[ruiy][DEBUG ] Starting Ceph mon.ruiy on ruiy...
[ruiy][DEBUG ] Starting ceph-create-keys on ruiy...
[ruiy][WARNIN] No data was received after 7 seconds, disconnecting...
[ruiy][INFO  ] Running command: sudo chkconfig ceph on
[ruiy][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ruiy.asok mon_status
[ruiy][DEBUG ] ********************************************************************************
[ruiy][DEBUG ] status for monitor: mon.ruiy
[ruiy][DEBUG ] {
[ruiy][DEBUG ]   "election_epoch": 2,
[ruiy][DEBUG ]   "extra_probe_peers": [],
[ruiy][DEBUG ]   "monmap": {
[ruiy][DEBUG ]     "created": "0.000000",
[ruiy][DEBUG ]     "epoch": 1,
[ruiy][DEBUG ]     "fsid": "4bfa6016-cf13-4a1e-a6dd-b856713c275f",
[ruiy][DEBUG ]     "modified": "0.000000",
[ruiy][DEBUG ]     "mons": [
[ruiy][DEBUG ]       {
[ruiy][DEBUG ]         "addr": "10.128.129.216:6789/0",
[ruiy][DEBUG ]         "name": "ruiy",
[ruiy][DEBUG ]         "rank": 0
[ruiy][DEBUG ]       }
[ruiy][DEBUG ]     ]
[ruiy][DEBUG ]   },
[ruiy][DEBUG ]   "name": "ruiy",
[ruiy][DEBUG ]   "outside_quorum": [],
[ruiy][DEBUG ]   "quorum": [
[ruiy][DEBUG ]     0
[ruiy][DEBUG ]   ],
[ruiy][DEBUG ]   "rank": 0,
[ruiy][DEBUG ]   "state": "leader",
[ruiy][DEBUG ]   "sync_provider": []
[ruiy][DEBUG ] }
[ruiy][DEBUG ] ********************************************************************************
[ruiy][INFO  ] monitor: mon.ruiy is running
[ruiy][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ruiy.asok mon_status
[ceph_deploy.mon][INFO  ] processing monitor mon.ruiy
[ruiy][DEBUG ] connection detected need for sudo
[ruiy][DEBUG ] connected to host: ruiy
[ruiy][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ruiy.asok mon_status
[ceph_deploy.mon][INFO  ] mon.ruiy monitor has reached quorum!
[ceph_deploy.mon][INFO  ] all initial monitors are running and have formed quorum
[ceph_deploy.mon][INFO  ] Running gatherkeys...
[ceph_deploy.gatherkeys][DEBUG ] Checking ruiy for /etc/ceph/ceph.client.admin.keyring
[ruiy][DEBUG ] connection detected need for sudo
[ruiy][DEBUG ] connected to host: ruiy
[ruiy][DEBUG ] detect platform information from remote host
[ruiy][DEBUG ] detect machine type
[ruiy][DEBUG ] fetch remote file
[ceph_deploy.gatherkeys][DEBUG ] Got ceph.client.admin.keyring key from ruiy.
[ceph_deploy.gatherkeys][DEBUG ] Have ceph.mon.keyring
[ceph_deploy.gatherkeys][DEBUG ] Checking ruiy for /var/lib/ceph/bootstrap-osd/ceph.keyring
[ruiy][DEBUG ] connection detected need for sudo
[ruiy][DEBUG ] connected to host: ruiy
[ruiy][DEBUG ] detect platform information from remote host
[ruiy][DEBUG ] detect machine type
[ruiy][DEBUG ] fetch remote file
[ceph_deploy.gatherkeys][DEBUG ] Got ceph.bootstrap-osd.keyring key from ruiy.
[ceph_deploy.gatherkeys][DEBUG ] Checking ruiy for /var/lib/ceph/bootstrap-mds/ceph.keyring
[ruiy][DEBUG ] connection detected need for sudo
[ruiy][DEBUG ] connected to host: ruiy
[ruiy][DEBUG ] detect platform information from remote host
[ruiy][DEBUG ] detect machine type
[ruiy][DEBUG ] fetch remote file
[ceph_deploy.gatherkeys][DEBUG ] Got ceph.bootstrap-mds.keyring key from ruiy.
Error in sys.exitfunc:

 ceph-deploy osd prepare ruiy.cc:/var/local/osd0
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/aceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.19): /usr/bin/ceph-deploy osd prepare ruiy.cc:/var/local/osd0
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ruiy.cc:/var/local/osd0:
[ruiy.cc][DEBUG ] connection detected need for sudo
[ruiy.cc][DEBUG ] connected to host: ruiy.cc
[ruiy.cc][DEBUG ] detect platform information from remote host
[ruiy.cc][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.4 Final
[ceph_deploy.osd][DEBUG ] Deploying osd to ruiy.cc
[ruiy.cc][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ruiy.cc][INFO  ] Running command: sudo udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host ruiy.cc disk /var/local/osd0 journal None activate False
[ruiy.cc][INFO  ] Running command: sudo ceph-disk -v prepare --fs-type xfs --cluster ceph -- /var/local/osd0
[ruiy.cc][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[ruiy.cc][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[ruiy.cc][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[ruiy.cc][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[ruiy.cc][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[ruiy.cc][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[ruiy.cc][WARNIN] DEBUG:ceph-disk:Preparing osd data dir /var/local/osd0
[ruiy.cc][INFO  ] checking OSD status...
[ruiy.cc][INFO  ] Running command: sudo ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ruiy.cc is now ready for osd use.
Error in sys.exitfunc:
ceph-deploy osd activate ruiy.cc:/var/local/osd0
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/aceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.19): /usr/bin/ceph-deploy osd activate ruiy.cc:/var/local/osd0
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks ruiy.cc:/var/local/osd0:
[ruiy.cc][DEBUG ] connection detected need for sudo
[ruiy.cc][DEBUG ] connected to host: ruiy.cc
[ruiy.cc][DEBUG ] detect platform information from remote host
[ruiy.cc][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.4 Final
[ceph_deploy.osd][DEBUG ] activating host ruiy.cc disk /var/local/osd0
[ceph_deploy.osd][DEBUG ] will use init type: sysvinit
[ruiy.cc][INFO  ] Running command: sudo ceph-disk -v activate --mark-init sysvinit --mount /var/local/osd0
[ruiy.cc][DEBUG ] === osd.0 ===
[ruiy.cc][DEBUG ] Starting Ceph osd.0 on ruiy...
[ruiy.cc][DEBUG ] starting osd.0 at :/0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
[ruiy.cc][WARNIN] DEBUG:ceph-disk:Cluster uuid is 4bfa6016-cf13-4a1e-a6dd-b856713c275f
[ruiy.cc][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[ruiy.cc][WARNIN] DEBUG:ceph-disk:Cluster name is ceph
[ruiy.cc][WARNIN] DEBUG:ceph-disk:OSD uuid is 158f43d4-4af5-4942-ad03-a304d949ab0c
[ruiy.cc][WARNIN] DEBUG:ceph-disk:Allocating OSD id...
[ruiy.cc][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd create --concise 158f43d4-4af5-4942-ad03-a304d949ab0c
[ruiy.cc][WARNIN] DEBUG:ceph-disk:OSD id is 0
[ruiy.cc][WARNIN] DEBUG:ceph-disk:Initializing OSD...
[ruiy.cc][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/local/osd0/activate.monmap
[ruiy.cc][WARNIN] got monmap epoch 1
[ruiy.cc][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /var/local/osd0/activate.monmap --osd-data /var/local/osd0 --osd-journal /var/local/osd0/journal --osd-uuid 158f43d4-4af5-4942-ad03-a304d949ab0c --keyring /var/local/osd0/keyring
[ruiy.cc][WARNIN] 2014-11-13 19:09:15.249281 7ffc46e7e7a0 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
[ruiy.cc][WARNIN] 2014-11-13 19:09:15.536649 7ffc46e7e7a0 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
[ruiy.cc][WARNIN] 2014-11-13 19:09:15.552636 7ffc46e7e7a0 -1 filestore(/var/local/osd0) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
[ruiy.cc][WARNIN] 2014-11-13 19:09:15.667557 7ffc46e7e7a0 -1 created object store /var/local/osd0 journal /var/local/osd0/journal for osd.0 fsid 4bfa6016-cf13-4a1e-a6dd-b856713c275f
[ruiy.cc][WARNIN] 2014-11-13 19:09:15.667754 7ffc46e7e7a0 -1 auth: error reading file: /var/local/osd0/keyring: can't open /var/local/osd0/keyring: (2) No such file or directory
[ruiy.cc][WARNIN] 2014-11-13 19:09:15.668489 7ffc46e7e7a0 -1 created new key in keyring /var/local/osd0/keyring
[ruiy.cc][WARNIN] DEBUG:ceph-disk:Marking with init system sysvinit
[ruiy.cc][WARNIN] DEBUG:ceph-disk:Authorizing OSD key...
[ruiy.cc][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring auth add osd.0 -i /var/local/osd0/keyring osd allow * mon allow profile osd
[ruiy.cc][WARNIN] added key for osd.0
[ruiy.cc][WARNIN] DEBUG:ceph-disk:ceph osd.0 data dir is ready at /var/local/osd0
[ruiy.cc][WARNIN] DEBUG:ceph-disk:Creating symlink /var/lib/ceph/osd/ceph-0 -> /var/local/osd0
[ruiy.cc][WARNIN] DEBUG:ceph-disk:Starting ceph osd.0...
[ruiy.cc][WARNIN] INFO:ceph-disk:Running command: /sbin/service ceph --cluster ceph start osd.0
[ruiy.cc][WARNIN] create-or-move updating item name 'osd.0' weight 0.02 at location {host=ruiy,root=default} to crush map
[ruiy.cc][INFO  ] checking OSD status...
[ruiy.cc][INFO  ] Running command: sudo ceph --cluster=ceph osd stat --format=json
[ruiy.cc][INFO  ] Running command: sudo chkconfig ceph on
Error in sys.exitfunc:
[aceph@ruiy my-cluster]$
ceph-deploy admin ruiy.cc
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/aceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.19): /usr/bin/ceph-deploy admin ruiy.cc
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ruiy.cc
[ruiy.cc][DEBUG ] connection detected need for sudo
[ruiy.cc][DEBUG ] connected to host: ruiy.cc
[ruiy.cc][DEBUG ] detect platform information from remote host
[ruiy.cc][DEBUG ] detect machine type
[ruiy.cc][DEBUG ] get remote short hostname
[ruiy.cc][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
Error in sys.exitfunc:
[aceph@ruiy my-cluster]$ ls -lha /etc/ceph/ceph.client.admin.keyring
-rw-------. 1 root root 63 11月 13 19:10 /etc/ceph/ceph.client.admin.keyring
[aceph@ruiy my-cluster]$ ls
ceph.bootstrap-mds.keyring  ceph.bootstrap-osd.keyring  ceph.client.admin.keyring  ceph.conf  ceph.log  ceph.mon.keyring
[aceph@ruiy my-cluster]$ ls -lha ceph.client.admin.keyring
-rw-rw-r--. 1 aceph aceph 63 11月 13 18:56 ceph.client.admin.keyring
[aceph@ruiy my-cluster]$ sudo chmod +r /etc/ceph/ceph.client.admin.keyring
[aceph@ruiy my-cluster]$ eph health
-bash: eph: command not found
[aceph@ruiy my-cluster]$ ceph health
HEALTH_OK
[aceph@ruiy my-cluster]$ pwd
/home/aceph/my-cluster
[aceph@ruiy my-cluster]$
RADOS (Reliable Autonomic Distributed Object Store) is the scalable foundation beneath the Ceph Block Device, the Ceph Filesystem, and the Ceph Object Store.

Expanding your Ceph storage cluster

Glossary:

mds: Ceph metadata server
mon: Ceph monitor
osd: object storage daemon (stores data in RADOS)

After a change such as adding an OSD, placement group states temporarily change from active+clean to active with some degraded objects while data migrates.

To store object data in the Ceph storage cluster, a Ceph client must:
1, set an object name
2, specify a pool

(Ceph storage cluster exercise) storing/retrieving object data; Ceph computes the object's location rather than looking it up:
1, map the object to a placement group
2, assign the placement group to a Ceph OSD daemon dynamically
3, find the object location (locate the target object)
4, using the object name and pool name

1.store an object:
rados put {object-name} {file-path} --pool=data
2.verify the ceph storage cluster stored the object:
rados -p data ls
3.identify the object location:
ceph osd map {pool-name} {object-name}
e.g. ceph osd map data ruiy-object  (osd = object storage daemon)
4.remove the object from the ceph storage cluster:
rados rm {object-name} --pool=data
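A worked run of the four steps on this cluster (the file name is an illustrative assumption; the object name and data pool come from the example above):

echo 'hello ceph' > testfile.txt
rados put ruiy-object testfile.txt --pool=data
rados -p data ls
ceph osd map data ruiy-object
rados rm ruiy-object --pool=data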

Expanding the Ceph storage cluster
1, adding an OSD

ssh ruiy.cc
sudo mkdir /var/local/osd1
exit

prepare the OSD (object storage daemon):
ceph-deploy osd prepare {ceph-node}:/path/to/directory

activate the OSD:
ceph-deploy osd activate {ceph-node}:/path/to/directory
(Once you have added your new OSD, Ceph will begin rebalancing the cluster by migrating placement groups to the new OSD; observe this process with ceph -w.)
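For the /var/local/osd1 directory created above, the concrete commands on this cluster are:

ceph-deploy osd prepare ruiy.cc:/var/local/osd1
ceph-deploy osd activate ruiy.cc:/var/local/osd1
ceph -w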

2, add a metadata server
To use CephFS, you need at least one metadata server. Execute the following to create one:
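On this single-node cluster, the standard ceph-deploy subcommand for that is:

ceph-deploy mds create ruiy.cc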
