Configuring Ceph as the OpenStack Storage Backend
1. Install ceph-common and make sure the node can reach the Ceph backend
Add ceph.repo:
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1
Install ceph-common:
yum install ceph-common
Copy ceph.conf, ceph.client.admin.keyring, and hosts (the Ceph node address mappings) from a Ceph backend machine into /etc/ceph/, then run "ceph --help" and check whether the output is complete. A complete output includes entries such as:
...
Monitor commands:
 auth add <entity> {<caps> [<caps>...]}    add auth info for <entity> from input file,
                                           or random key if no input is given, and/or
                                           any caps specified in the command
 auth caps <entity> <caps> [<caps>...]     update caps for <name> from caps
...
This confirms the machine can connect to the Ceph backend.
2. Create the Ceph pools and the cinder user from the command line
ceph osd pool create volumes 128
ceph osd pool create images 128
ceph osd pool create vms 128
Glance uses images, Cinder uses volumes, and Nova uses vms. For simplicity we create only one user, cinder, with access to all three pools:
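The pools above are created with 128 placement groups each. The post does not explain the choice, but a commonly cited sizing guideline (an assumption here, not from the original) is pg_num ≈ (number of OSDs × 100) / replica count, rounded up to the next power of two. A minimal sketch with a hypothetical helper name:

```python
def suggest_pg_num(num_osds, replicas=3, target_pgs_per_osd=100):
    """Suggest a placement-group count for a pool: spread roughly
    target_pgs_per_osd PGs over each OSD, divided by the replica
    count, rounded up to the next power of two."""
    raw = num_osds * target_pgs_per_osd / replicas
    pg = 1
    while pg < raw:
        pg *= 2
    return pg

# A 3-OSD cluster with 3 replicas lands on 128, matching the pools above.
print(suggest_pg_num(3))
```

This is only a starting-point heuristic; too many PGs per OSD increases memory and peering load, so the value is normally tuned per cluster.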
ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rwx pool=images'
Then copy the client.cinder keyring to /etc/ceph, as follows:
[root@ss05 ~]# ceph auth get client.cinder
exported keyring for client.cinder
[client.cinder]
	key = AQCBHc9X7x2NLBAA2nKjpTpUaH5eHv6+FqvZog==
	caps mon = "allow r"
	caps osd = "allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rwx pool=images"
# Copy this output into /etc/ceph/ceph.client.cinder.keyring
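Keyring files use an INI-like layout, so the key can be pulled out programmatically, for example when scripting the copy step. A sketch using Python's standard configparser; the helper name and sample text are illustrative, not part of the original post:

```python
import configparser

def read_keyring_key(text, entity="client.cinder"):
    """Extract the base64 key for the given entity from Ceph
    keyring text, which uses INI-style [client.<name>] sections."""
    cp = configparser.ConfigParser()
    cp.read_string(text)
    return cp[entity]["key"]

sample = """\
[client.cinder]
key = AQCBHc9X7x2NLBAA2nKjpTpUaH5eHv6+FqvZog==
caps mon = "allow r"
"""

print(read_keyring_key(sample))
```

The same function works on the contents of /etc/ceph/ceph.client.cinder.keyring once the file is in place.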
3. Configure the libvirt secret; run this on every node that runs nova-compute
Generate a UUID; it will be used later in the OpenStack configuration:
uuidgen
457eb676-33da-42ec-9a8c-9293d545c337
Create a secret.xml file:
<secret ephemeral='no' private='no'>
  <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
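When repeating this step across many compute nodes, the file can also be generated programmatically rather than hand-edited. A sketch with Python's standard library; the function name is hypothetical:

```python
import uuid
import xml.etree.ElementTree as ET

def build_secret_xml(secret_uuid, name="client.cinder secret"):
    """Build the libvirt <secret> document shown above, with
    ephemeral/private set to 'no' and a ceph usage block."""
    root = ET.Element("secret", ephemeral="no", private="no")
    ET.SubElement(root, "uuid").text = secret_uuid
    usage = ET.SubElement(root, "usage", type="ceph")
    ET.SubElement(usage, "name").text = name
    return ET.tostring(root, encoding="unicode")

# str(uuid.uuid4()) is the programmatic equivalent of running uuidgen.
print(build_secret_xml(str(uuid.uuid4())))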
Define the secret:
virsh secret-define --file secret.xml
Secret 457eb676-33da-42ec-9a8c-9293d545c337 created
Set the secret's key:
[root@ss05 ~]# ceph auth print-key client.cinder
AQCBHc9X7x2NLBAA2nKjpTpUaH5eHv6+FqvZog==
[root@ss05 ~]# virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 --base64 AQCBHc9X7x2NLBAA2nKjpTpUaH5eHv6+FqvZog==
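virsh secret-set-value fails unhelpfully if the key is mangled in transit (a stray newline, a truncated paste), so it can be worth sanity-checking the key first. A sketch, assuming the usual CephX key layout of 28 decoded bytes (2-byte type, 8-byte creation timestamp, 2-byte length, 16-byte AES secret); the helper name is illustrative:

```python
import base64

def check_ceph_key(key_b64):
    """Return True if key_b64 looks like a valid CephX key:
    strict base64 that decodes to exactly 28 bytes."""
    try:
        raw = base64.b64decode(key_b64, validate=True)
    except Exception:
        return False
    return len(raw) == 28

# The key from the console session above passes the check.
print(check_ceph_key("AQCBHc9X7x2NLBAA2nKjpTpUaH5eHv6+FqvZog=="))
```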
4. Configure OpenStack to use Ceph as the backend
Configure cinder.conf:
[DEFAULT]
# debug = True
# verbose = True
rpc_backend = rabbit
auth_strategy = keystone
my_ip = *.*.*.*
glance_api_servers = http://controller:9292
enabled_backends = rbd
transport_url = rabbit://openstack:openstack@controller

[database]
connection = mysql+pymysql://cinder:cinder@controller/cinder

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = cinder

[oslo_concurrency]
lock_path = /var/lock/cinder

[rbd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
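A missing option in the [rbd] section (typically rbd_secret_uuid) is a common reason cinder-volume fails to come up. Since cinder.conf is standard INI, a quick preflight check can be scripted; the helper name and option list below are an illustration of the idea, not an official tool:

```python
import configparser

# Options the [rbd] backend section above relies on.
REQUIRED_RBD_OPTS = ["volume_driver", "rbd_pool", "rbd_ceph_conf",
                     "rbd_user", "rbd_secret_uuid"]

def missing_rbd_options(conf_text):
    """Return the required [rbd] options absent from conf_text."""
    cp = configparser.ConfigParser()
    cp.read_string(conf_text)
    if "rbd" not in cp:
        return list(REQUIRED_RBD_OPTS)
    return [o for o in REQUIRED_RBD_OPTS if o not in cp["rbd"]]

sample = """\
[DEFAULT]
enabled_backends = rbd
[rbd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
"""

# An empty list means every required option is present.
print(missing_rbd_options(sample))
```

Running the same check against /etc/cinder/cinder.conf before restarting the service catches typos early.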
Restart the service:
service openstack-cinder-volume restart
Configure nova-compute.conf:
[DEFAULT]
compute_driver = libvirt.LibvirtDriver

[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
Restart the service:
service openstack-nova-compute restart
Configure glance-api.conf:
[paste_deploy]
flavor = keystone

[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = cinder
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8
Restart the service:
service openstack-glance-api restart
5. If the nova-compute log reports that the rbd protocol is unsupported, qemu must be rebuilt
See: http://www.cnblogs.com/hurongpu/p/8514002.html
posted on 2018-03-15 16:37 by carrot_hrp