Install and configure the Keystone identity service on the controller node

Controller node IP: 192.168.81.11

yum install centos-release-openstack-newton -y
yum update
yum install mariadb mariadb-server python-PyMySQL -y

vim /etc/my.cnf.d/openstack.cnf
[mysqld] 
bind-address=192.168.81.11
default-storage-engine=innodb
innodb_file_per_table
collation-server=utf8_general_ci
character-set-server=utf8

Start MariaDB and set the root password
systemctl enable mariadb
systemctl start mariadb
mysqladmin -uroot password 123456

Install the message queue on the controller node

firewall-cmd --add-port=5672/tcp --permanent    # RabbitMQ uses port 5672
firewall-cmd --reload
(Alternatively, it is simplest to just stop firewalld: systemctl stop firewalld)

yum install rabbitmq-server -y

hostnamectl set-hostname controller    # change the hostname, otherwise RabbitMQ may fail to start
init 6                                 # a reboot of the server is recommended after renaming it

systemctl enable rabbitmq-server
systemctl start rabbitmq-server

rabbitmqctl add_user openstack 123456
rabbitmqctl list_users
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
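
As an optional sanity check, confirm that the broker is running and listening on the port opened above:

rabbitmqctl status | head       # broker status summary
ss -tlnp | grep 5672            # the RabbitMQ listener should appear here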

Install and configure Keystone

Configure the database

mysql -uroot -p
mysql> create database keystone;     # create the Keystone database
mysql> grant all privileges on keystone.* to keystone@localhost identified by '123456' with grant option;
mysql> grant all privileges on keystone.* to keystone@'%' identified by '123456' with grant option;
mysql> exit;

Generate a random token

openssl rand -hex 24
3a11081eb34bf14262fdb496d5a2975f2b434d11424e0ef7

Install the packages

yum install openstack-keystone httpd mod_wsgi \
python-openstackclient memcached python-memcached

Edit /etc/keystone/keystone.conf and add the following under the corresponding [ ] sections:

[DEFAULT]
admin_token = 3a11081eb34bf14262fdb496d5a2975f2b434d11424e0ef7

[database]
connection = mysql+pymysql://keystone:123456@192.168.81.11/keystone

[memcache]
servers = 127.0.0.1:11211

[token]
provider = fernet

Then sync the database with the Keystone management command to create the tables:

keystone-manage db_sync

Next, initialize the fernet keys:

keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
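
A quick check, assuming the database password and default paths configured above, that the tables and the fernet key repository were actually created:

mysql -ukeystone -p123456 keystone -e "SHOW TABLES;" | head    # Keystone tables should be listed
ls -l /etc/keystone/fernet-keys/                               # keys 0 and 1 should be present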

Set up the HTTP server on the controller node

Edit /etc/httpd/conf/httpd.conf:

ServerName 192.168.81.11

Create /etc/httpd/conf.d/wsgi-keystone.conf to serve Keystone, and add the following:

Listen 5000
Listen 35357

<VirtualHost *:5000>
WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-public
WSGIScriptAlias / /usr/bin/keystone-wsgi-public
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
ErrorLogFormat "%{cu}t %M"
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone-access.log combined

<Directory /usr/bin>
Require all granted
</Directory>
</VirtualHost>
<VirtualHost *:35357>
WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-admin
WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
ErrorLogFormat "%{cu}t %M"
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone-access.log combined
<Directory /usr/bin>
Require all granted
</Directory>
</VirtualHost>

Start the httpd and memcached services

systemctl enable httpd
systemctl enable memcached

setenforce 0            # put SELinux in permissive mode, otherwise httpd may fail to start

systemctl start httpd
systemctl start memcached
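
Optionally, confirm that Apache is serving Keystone on both ports before continuing (addresses as configured above):

ss -tlnp | grep -E ':5000|:35357'                   # both listeners should be present
curl -s http://192.168.81.11:5000/v3 | head -c 200  # should return a small JSON version document
echo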

Create the service and API endpoints

Before creating the Keystone service and endpoints, export a few environment variables; the OpenStack client reads them automatically to access the API:

export OS_TOKEN=3a11081eb34bf14262fdb496d5a2975f2b434d11424e0ef7
export OS_URL=http://192.168.81.11:35357/v3
export OS_IDENTITY_API_VERSION=3
# OS_TOKEN is the Keystone admin token generated earlier

Create the service entity that provides identity authentication:

openstack service create --name keystone --description "OpenStack Identity" identity

On success, the output looks similar to:

+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Identity               |
| enabled     | True                             |
| id          | 4ddaae90388b4ebc9d252ec2252d8d10 |
| name        | keystone                         |
| type        | identity                         |
+-------------+----------------------------------+

So that the APIs and the service have an authentication mechanism, Keystone also needs an API endpoint catalog entry, which determines how other services are reached. Create the endpoints as follows:

# Create the public API endpoint
$ openstack endpoint create --region RegionOne identity public http://192.168.81.11:5000/v3

# Create the internal API endpoint
$ openstack endpoint create --region RegionOne identity internal http://192.168.81.11:5000/v3

# Create the admin API endpoint
$ openstack endpoint create --region RegionOne identity admin http://192.168.81.11:35357/v3
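
Still with the OS_TOKEN environment variables loaded, the newly registered service and endpoints can be listed to confirm they exist:

openstack service list
openstack endpoint list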

Then create the default domain:

openstack domain create --description "Default Domain" default
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Default Domain                   |
| enabled     | True                             |
| id          | fb9492511bd1426a861ccbf7ff1d4d9f |
| name        | default                          |
+-------------+----------------------------------+

Every OpenStack component installed later will likewise need one or more services plus API endpoint catalog entries.

Create the Keystone admin and user accounts

The identity service authorizes access through a combination of domains, projects, roles, and users. Most deployments need an administrator role, so use the OpenStack client to create an admin user plus a service project that all OpenStack components use to communicate with each other:

# Create the admin project
$ openstack project create --domain default  --description "Admin Project" admin

# Create the admin user
$ openstack user create --domain default \
--password 123456 --email admin@example.com admin

# Create the admin role
$ openstack role create admin

# Add the admin role to the admin project and user
$ openstack role add --project admin --user admin admin

# Create the service project
$ openstack project create --domain default --description "Service Project" service

Next, create a demo project and user for later permission checks:

# Create the demo project
$ openstack project create --domain default --description "Demo Project" demo

# Create the demo user
$ openstack user create --domain default --password 123456 --email demo@example.com demo

# Create the user role
$ openstack role create user

# Add the user role to the demo project and user
$ openstack role add --project demo --user demo user

# The commands above can be repeated to create additional projects and users; see the sketch below.
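
For example, a minimal sketch of scripting that repetition; the project names (proj1, proj2), user names, and the password here are only placeholders:

for p in proj1 proj2; do
    openstack project create --domain default --description "$p Project" "$p"
    openstack user create --domain default --password 123456 "${p}-user"
    openstack role add --project "$p" --user "${p}-user" user
done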

Verify the Keystone service

Before installing any other service, make sure Keystone works without errors. First unset the environment variables exported above:

unset OS_TOKEN OS_URL

Verify the service directly against the v3 API. v3 adds domain support, so projects and users in different domains can use the same name; here we authenticate against the default domain:

openstack --os-auth-url http://192.168.81.11:35357/v3 \
--os-project-domain-name default --os-user-domain-name default \
--os-project-name admin --os-username admin token issue
# default is the domain used when no domain is specified.

On success, the output looks similar to:

+------------+-------------------------------------------------------------+
| Field      | Value                                                       |
+------------+-------------------------------------------------------------+
| expires    | 2017-01-09 08:33:22+00:00                                   |
| id         | gAAAAABYczzDCC4KVYYSJQF5J5grPgrmqyRGrty178PzOaKTP-YrlTH14P_ |
|            | a3VCSS6GvMgdWGJbgBoDs1esitC_zvfe4SDyz1tKEq30GjLc0LeiG_yhZ1j |
|            | gXFLbTgIOz58_a5XrT3n8_rRB7diImQl8XIX3Ip-                    |
|            | tnMtOPeyKiLDlwRjV3sLxu4p4                                   |
| project_id | a84ed1f6ae5d433ca1f84396424eae8c                            |
| user_id    | 22a65abb1c314690b6509e71d1bcca86                            |
+------------+-------------------------------------------------------------+

Next, verify that the permissions are set correctly, starting with the admin user:

openstack --os-auth-url http://192.168.81.11:35357/v3 \
--os-project-domain-name default \
--os-user-domain-name default \
--os-project-name admin \
--os-username admin project list

On success, the output looks similar to:

+----------------------------------+---------+
| ID                               | Name    |
+----------------------------------+---------+
| 9d9695050f4241d6945ee97248df3350 | demo    |
| a84ed1f6ae5d433ca1f84396424eae8c | admin   |
| b10182da8bb44dffa958017c815216d3 | service |
+----------------------------------+---------+

Then verify access as the demo user by requesting a token from the v3 API:

openstack --os-auth-url http://192.168.81.11:5000/v3 \
--os-project-domain-name default --os-user-domain-name default \
--os-project-name demo --os-username demo token issue

On success, the output looks similar to:

+------------+-------------------------------------------------------------+
| Field      | Value                                                       |
+------------+-------------------------------------------------------------+
| expires    | 2017-01-09 08:40:33+00:00                                   |
| id         | gAAAAABYcz5xHmogpErbWQuawqqEciwd4FC28yMiWskqiqGmfZBy-f_NKOR |
|            | fQEeBa6QBIPIumZrbCfEktHym4KWvlTRIBC3g8j955bni5NDzJZHlR1GOl_ |
|            | YAgA9HKPJNqelr69waNPBT2VCW8IvhvdFdILGZkHXZZVuzpT_Tf-        |
|            | oKQjbjUVSp3UA                                               |
| project_id | 9d9695050f4241d6945ee97248df3350                            |
| user_id    | 74f41c0ad19540c6ac2c64cbed0afd4c                            |
+------------+-------------------------------------------------------------+
# Note: the port changes from 35357 to 5000 here, simply to distinguish the admin URL from the public URL.

Finally, call an admin-only API as the demo user:

openstack --os-auth-url http://192.168.81.11:5000/v3 \
--os-project-domain-name default \
--os-user-domain-name default \
--os-project-name demo \
--os-username demo user list

If the permissions are set correctly, the request is denied with an error similar to the following (this 403 is the expected result, since demo does not hold the admin role):

You are not authorized to perform the requested action: identity:list_users (HTTP 403) (Request-ID: req-cb7e4df6-5e2a-4a0e-b263-cb3c16ec488d)

If all of the above behaves as expected, Keystone is running correctly.

Switch users with rc scripts

Later installation steps switch between different users to verify services, so create small scripts that export the relevant environment variables for each user. First create these two files:

touch ~/admin-openrc ~/demo-openrc

Edit admin-openrc and add the following:

export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_AUTH_URL=http://192.168.81.11:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

Edit demo-openrc and add the following:

export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=123456
export OS_AUTH_URL=http://192.168.81.11:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

Then load the environment variables:

source admin-openrc

After that, the OpenStack client can be used without the basic connection arguments, for example:

openstack token issue
+------------+-------------------------------------------------------------+
| Field      | Value                                                       |
+------------+-------------------------------------------------------------+
| expires    | 2017-01-09 09:00:47+00:00                                   |
| id         | gAAAAABYc0Mv-1wUq0jSCtoJ_bVycrQz7rieIy3BmeAcSDXEcikv2EJZNqE |
|            | kDNs30gG0e_JFvNEf6J1aTy73IHS0h5_sCFY_A_y3atp-B_Bks0AJ3KzmOf |
|            | fqpVkF2Qg9SeHbR3XjFlhQeQD-m3-rO0IroJGQ_E7shQ0XbOMW1A6VnzOH_ |
|            | uZObVE                                                      |
| project_id | a84ed1f6ae5d433ca1f84396424eae8c                            |
| user_id    | 22a65abb1c314690b6509e71d1bcca86                            |
+------------+-------------------------------------------------------------+

The Keystone part of the deployment is complete.

Deploy Swift

The Swift proxy service is installed on the controller node.

Unlike the other services, Swift does not use any database on the controller node; instead, SQLite databases live on each storage node.
Start by creating the service and API endpoints. First load the admin environment variables:

source admin-openrc

Then create the Swift user, service, and API endpoints as follows:

# Create the swift user
$ openstack user create --domain default \
--password 123456 --email swift@example.com swift

# Add the admin role to the swift user in the service project
$ openstack role add --project service --user swift admin

# Create the Swift service
$ openstack service create --name swift  --description "OpenStack Object Storage" object-store

# Create the Swift v1 public endpoint
$ openstack endpoint create --region RegionOne \
object-store public http://192.168.81.11:8080/v1/AUTH_%\(tenant_id\)s

# Create the Swift v1 internal endpoint
$ openstack endpoint create --region RegionOne \
object-store internal http://192.168.81.11:8080/v1/AUTH_%\(tenant_id\)s

# Create the Swift v1 admin endpoint
$ openstack endpoint create --region RegionOne \
object-store admin http://192.168.81.11:8080/v1

Before configuring Swift, install the required dependencies and OpenStack service packages:

yum install openstack-swift-proxy python-swiftclient \
python-keystoneclient python-keystonemiddleware memcached -y

After installation, create the directory that holds the configuration files:

mkdir -p /etc/swift

Download proxy-server.conf from the network:

curl -o /etc/swift/proxy-server.conf \
https://git.openstack.org/cgit/openstack/swift/plain/etc/proxy-server.conf-sample?h=stable/newton

Edit /etc/swift/proxy-server.conf and set the following under the corresponding [ ] sections:

[DEFAULT]
bind_port = 8080
user = swift
swift_dir = /etc/swift

[pipeline:main]
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server
# remove the original tempurl and tempauth entries here, and add authtoken and keystoneauth
[app:proxy-server]
...
use = egg:swift#proxy
account_autocreate = true

[filter:keystoneauth]
...
use = egg:swift#keystoneauth
operator_roles = admin,user

[filter:authtoken]
...
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
auth_uri = http://192.168.81.11:5000
auth_url = http://192.168.81.11:35357
memcached_servers = 192.168.81.11:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = swift
password = 123456
delay_auth_decision = True

[filter:cache]
...
use = egg:swift#memcache
memcache_servers = 192.168.81.11:11211

For the other options, refer to the Deployment Guide.
== Make sure no line begins with a leading space. ==
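
If editing the large sample file by hand feels error-prone, the same [filter:authtoken] values can be applied non-interactively with crudini, assuming the tool is available (e.g. yum install crudini) and using the addresses and password configured above:

crudini --set /etc/swift/proxy-server.conf filter:authtoken auth_uri http://192.168.81.11:5000
crudini --set /etc/swift/proxy-server.conf filter:authtoken auth_url http://192.168.81.11:35357
crudini --set /etc/swift/proxy-server.conf filter:authtoken memcached_servers 192.168.81.11:11211
crudini --set /etc/swift/proxy-server.conf filter:authtoken auth_type password
crudini --set /etc/swift/proxy-server.conf filter:authtoken project_name service
crudini --set /etc/swift/proxy-server.conf filter:authtoken username swift
crudini --set /etc/swift/proxy-server.conf filter:authtoken password 123456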

Storage Node - storage node IP: 192.168.81.12

With all of the Swift services on the controller installed and configured, next set up the storage node that actually stores the data; it provides the account, container, and object services.

Storage node preparation

Before configuring anything, install the packages Swift depends on:
yum install centos-release-openstack-newton
yum update
yum install xfsprogs rsync

Format the three disks attached to this machine for use as Swift storage:

mkfs.xfs -f /dev/sdb 
mkfs.xfs -f /dev/sdc
mkfs.xfs -f /dev/sdd
# repeat for any additional disks (a scripted alternative is sketched after the mount commands below)

== Make sure there are at least 3 disks here, otherwise the rebalance step later will not work properly. ==

Next, create the mount point directories:

mkdir -p /srv/node/sd{b,c,d}

Configure /etc/fstab so the partitions are mounted automatically at boot:

vim /etc/fstab
...
/dev/sdb /srv/node/sdb xfs     noatime,nodiratime,nobarrier,logbufs=8 0 2
/dev/sdc /srv/node/sdc xfs     noatime,nodiratime,nobarrier,logbufs=8 0 2
/dev/sdd /srv/node/sdd xfs     noatime,nodiratime,nobarrier,logbufs=8 0 2

Mount the partitions manually for now:

mount /srv/node/sdb
mount /srv/node/sdc
mount /srv/node/sdd
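
As a scripted alternative to the manual per-disk steps above (mkfs, mount point, fstab entry, mount), a minimal sketch assuming the same sdb/sdc/sdd device names and mount options:

for d in sdb sdc sdd; do
    mkfs.xfs -f /dev/$d
    mkdir -p /srv/node/$d
    echo "/dev/$d /srv/node/$d xfs noatime,nodiratime,nobarrier,logbufs=8 0 2" >> /etc/fstab
    mount /srv/node/$d
done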

Configure rsync by adding the following:

vim /etc/rsyncd.conf
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 192.168.81.12    # (to be confirmed; other local IPs appear to work as well)

[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock

Edit /etc/default/rsync to enable the rsync daemon (note: this file is used on Debian/Ubuntu; on CentOS 7 it does not exist and this step can be skipped, since the systemd unit below handles it):

RSYNC_ENABLE=true

Enable the rsync daemon at boot and start it:

systemctl enable rsyncd
systemctl start rsyncd
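
With the daemon running, the exported modules can be listed from any node to confirm the configuration, assuming the address set above:

rsync rsync://192.168.81.12/    # should list the account, container, and object modules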

Install and configure the storage packages

Install the packages:

yum install openstack-swift-account openstack-swift-container \
openstack-swift-object

After installation, download the Swift account, container, and object server configuration files from the network:

# Account server
$ sudo curl -o /etc/swift/account-server.conf \
https://git.openstack.org/cgit/openstack/swift/plain/etc/account-server.conf-sample?h=stable/newton

# Container server
$ sudo curl -o /etc/swift/container-server.conf \
https://git.openstack.org/cgit/openstack/swift/plain/etc/container-server.conf-sample?h=stable/newton

# Object server
$ sudo curl -o /etc/swift/object-server.conf \
https://git.openstack.org/cgit/openstack/swift/plain/etc/object-server.conf-sample?h=stable/newton

First edit /etc/swift/account-server.conf and set the following under the corresponding [ ] sections:

[DEFAULT]
...
bind_port = 6002
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = True

[pipeline:main]
pipeline = healthcheck recon account-server

[filter:recon]
...
use = egg:swift#recon
recon_cache_path = /var/cache/swift

Edit /etc/swift/container-server.conf and set the following under the corresponding [ ] sections:

[DEFAULT]
...
bind_port = 6001
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = true

[pipeline:main]
pipeline = healthcheck recon container-server

[filter:recon]
...
use = egg:swift#recon
recon_cache_path = /var/cache/swift

Edit /etc/swift/object-server.conf and set the following under the corresponding [ ] sections:

[DEFAULT]
...
bind_port = 6000
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = true

[pipeline:main]
pipeline = healthcheck recon object-server

[filter:recon]
...
use = egg:swift#recon
recon_cache_path = /var/cache/swift
recon_lock_path = /var/lock

After all of the above, make sure Swift can access the mount directories:

chown -R swift:swift /srv/node

Then create a directory for Swift to use as a cache:

mkdir -p /var/cache/swift
chown -R root:swift /var/cache/swift
chmod -R 775 /var/cache/swift

Create and distribute the initial rings

Before this stage, make sure Swift has been installed and configured on the controller and on every storage node. If so, ==go back to the controller node== and change into the /etc/swift directory for the following steps.

cd /etc/swift

Create the account ring

The account server uses the account ring to maintain lists of containers. First create an account.builder file with the following command (the arguments 10 3 1 mean 2^10 partitions, 3 replicas, and a minimum of 1 hour between moves of a partition):

swift-ring-builder account.builder create 10 3 1

Then add each storage node's account server to the ring:

# Storage node 1 (192.168.81.12): sdb, sdc, sdd
swift-ring-builder account.builder \
add --region 1 --zone 1 --ip 192.168.81.12 \
--port 6002 --device sdb --weight 100
swift-ring-builder account.builder \
add --region 1 --zone 1 --ip 192.168.81.12 \
--port 6002 --device sdc --weight 100
swift-ring-builder account.builder \
add --region 1 --zone 1 --ip 192.168.81.12 \
--port 6002 --device sdd --weight 100

# Storage node 2 (192.168.81.13): sdb, sdc (deploy only if you have a second storage node)
swift-ring-builder account.builder \
add --region 1 --zone 2 --ip 192.168.81.13 \
--port 6002 --device sdb --weight 100
swift-ring-builder account.builder \
add --region 1 --zone 2 --ip 192.168.81.13 \
--port 6002 --device sdc --weight 100

# and so on; a scripted version is sketched below
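
The per-device add commands can also be scripted; a sketch for the first storage node, assuming the devices and weights used above:

for d in sdb sdc sdd; do
    swift-ring-builder account.builder add \
        --region 1 --zone 1 --ip 192.168.81.12 --port 6002 --device $d --weight 100
done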

Then verify the contents of the ring with:

swift-ring-builder account.builder

If everything looks right, rebalance the ring:

swift-ring-builder account.builder rebalance
# this produces the account.ring.gz file

Create the container ring

The container server uses the container ring to maintain lists of objects. Create a container.builder file with the following command:

swift-ring-builder container.builder create 10 3 1

Then add each storage node's container server to the ring:

# Storage node 1 (192.168.81.12): sdb, sdc, sdd
swift-ring-builder container.builder add --region 1 --zone 1 --ip 192.168.81.12 \
--port 6001 --device sdb --weight 100

swift-ring-builder container.builder add --region 1 --zone 1 --ip 192.168.81.12 \
--port 6001 --device sdc --weight 100
swift-ring-builder container.builder add --region 1 --zone 1 --ip 192.168.81.12 \
--port 6001 --device sdd --weight 100
.....
(deploy the following only if you actually have a second storage node)
# Storage node 2 (192.168.81.13): sdb, sdc
swift-ring-builder container.builder add --region 1 --zone 2 --ip 192.168.81.13 \
--port 6001 --device sdb --weight 100

swift-ring-builder container.builder add --region 1 --zone 2 --ip 192.168.81.13 \
--port 6001 --device sdc --weight 100
# and so on

Then verify with:

swift-ring-builder container.builder

If everything looks right, rebalance the ring:

swift-ring-builder container.builder rebalance
# this produces the container.ring.gz file

Create the object ring

The object server uses the object ring to maintain lists of object locations on the local devices. Create an object.builder file with the following command:

swift-ring-builder object.builder create 10 3 1

Then add each storage node's object server to the ring:

# Storage node 1 (192.168.81.12): sdb, sdc, sdd
swift-ring-builder object.builder add --region 1 --zone 1 --ip 192.168.81.12 \
--port 6000 --device sdb --weight 100
swift-ring-builder object.builder add --region 1 --zone 1 --ip 192.168.81.12 \
--port 6000 --device sdc --weight 100
swift-ring-builder object.builder add --region 1 --zone 1 --ip 192.168.81.12 \
--port 6000 --device sdd --weight 100

............
# Storage node 2 (192.168.81.13): sdb, sdc
swift-ring-builder object.builder add --region 1 --zone 2 --ip 192.168.81.13 \
--port 6000 --device sdb --weight 100
swift-ring-builder object.builder add --region 1 --zone 2 --ip 192.168.81.13 \
--port 6000 --device sdc --weight 100
# and so on

Then verify with:

swift-ring-builder object.builder

If everything looks right, rebalance the ring:

swift-ring-builder object.builder rebalance
# this produces the object.ring.gz file

Distribute the rings to the storage nodes

Next, copy all of the rings built above to /etc/swift on every storage node and on every node that runs the Swift proxy service:

scp account.ring.gz container.ring.gz object.ring.gz root@192.168.81.12:/etc/swift
# repeat for any additional storage nodes

Finish the installation

If all of the steps above went smoothly, move on to the final stage. Still on the controller node, fetch the swift.conf file:

curl -o /etc/swift/swift.conf \
 https://git.openstack.org/cgit/openstack/swift/plain/etc/swift.conf-sample?h=stable/newton

Edit /etc/swift/swift.conf and set the path hash prefix and suffix in the [swift-hash] section:

[swift-hash]
...
swift_hash_path_suffix = 1505cb4249801981da86
swift_hash_path_prefix = 42da359c6af55b2e3f7d

# these values can be generated with: openssl rand -hex 10

[storage-policy:0]
...
name = Policy-0
default = yes

Then copy /etc/swift/swift.conf to /etc/swift/ on every storage node and on every other node running the Swift proxy service:

scp /etc/swift/swift.conf root@192.168.81.12:/etc/swift
#scp /etc/swift/swift.conf root@192.168.81.13:/etc/swift

On all nodes, make sure /etc/swift has the correct ownership:

chown -R swift:swift /etc/swift
# the official documentation uses root:swift

Then, on the controller node, enable and start memcached and the Swift proxy service:

systemctl enable openstack-swift-proxy memcached
systemctl start openstack-swift-proxy memcached

Check the status of the openstack-swift-proxy service:

systemctl status -l openstack-swift-proxy

If it fails to start with errors similar to the following:

liberasurecode[2403]: liberasurecode_instance_create: dynamic linking error libJerasure.so.2: cannot open shared object file: No such file or directory
liberasurecode[2403]: liberasurecode_instance_create: dynamic linking error libisal.so.2: cannot open shared object file: No such file or directory
liberasurecode[2403]: liberasurecode_instance_create: dynamic linking error libshss.so.1: cannot open shared object file: No such file or directory

== If openstack-swift-proxy fails to start with these errors, follow the fix in the last section of this article, then come back here and continue. ==

Then, ==on all storage nodes==, enable all of the Swift services at boot:

systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service \
openstack-swift-account-reaper.service openstack-swift-account-replicator.service

systemctl enable openstack-swift-container.service \
openstack-swift-container-auditor.service openstack-swift-container-replicator.service \
  openstack-swift-container-updater.service

systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service \
openstack-swift-object-replicator.service openstack-swift-object-updater.service

Then, ==on all storage nodes==, start all of the Swift services:

systemctl start openstack-swift-account.service openstack-swift-account-auditor.service \
openstack-swift-account-reaper.service openstack-swift-account-replicator.service

systemctl start openstack-swift-container.service \
openstack-swift-container-auditor.service openstack-swift-container-replicator.service \
openstack-swift-container-updater.service

systemctl start openstack-swift-object.service openstack-swift-object-auditor.service \
 openstack-swift-object-replicator.service openstack-swift-object-updater.service
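
A quick way to confirm that everything came up on a storage node (a sketch; service names as enabled above):

for s in account container object; do
    systemctl is-active openstack-swift-$s openstack-swift-$s-auditor openstack-swift-$s-replicator
done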

== Check the status of these services; if they fail with the same errors as openstack-swift-proxy on the controller, apply the fix described in the last section of this article. ==

Verify the Swift service

First go back to the controller node and add the Swift API version environment variable to admin-openrc and demo-openrc:

 echo "export OS_AUTH_VERSION=3" | tee -a ~/admin-openrc ~/demo-openrc

# Swift needs to use the v3 identity API for access, so make sure this is set correctly.

Then load the admin environment variables to verify the service:

source ~/admin-openrc

Check the service status with the swift client:

swift -V 3 stat
Account: AUTH_aa2829b38026474ea4048d4adc807806
Containers: 0
Objects: 0
Bytes: 0
X-Put-Timestamp: 1435852736.76235
Connection: keep-alive
X-Timestamp: 1435852736.76235
X-Trans-Id: tx47d3a78a45fe491eafb27-0055955fc0
  Content-Type: text/plain; charset=utf-8

Then upload a file with the swift client, for example:

swift -V 3 upload admin-container xxxxxx.xxx

List all containers with the swift client, for example:

swift -V 3 list

Then download the file with the swift client, for example:

swift -V 3 download admin-container xxxxxx.xxx
xxxxx.xxx [auth 0.235s, headers 0.400s, total 0.420s, 0.020 MB/

If something goes wrong, adjust the configuration files and try running swift-init all start on the storage node.

The swift-proxy and swift deployment is complete.

Install the Dashboard on the controller node

The OpenStack Dashboard is a web management front end built on top of the OpenStack components; the project is named Horizon.

Install the dashboard:

yum install openstack-dashboard

Edit the /etc/openstack-dashboard/local_settings configuration file and change the following:

OPENSTACK_HOST = "127.0.0.1"     # IP of the host providing the OpenStack services
ALLOWED_HOSTS = ['*', ]           # allow all hosts to access the dashboard
# have the dashboard use memcached for session caching
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
     'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
     'LOCATION': '127.0.0.1:11211',
    }
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST         # use identity API version 3
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True   # enable multi-domain support
# set the API versions
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"   # set the default Keystone domain
TIME_ZONE = "Asia/Shanghai"       # set the time zone

Restart the httpd and memcached services:

systemctl restart httpd memcached
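
Before opening a browser, a quick local check that the dashboard responds (it normally answers with an HTTP 200 or a redirect to the login page):

curl -sI http://127.0.0.1/dashboard/ | head -1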

To verify, browse to http://<controller node IP>/dashboard, log in as admin with the corresponding password, and click Project -> Object Store -> Containers. A screenshot is shown below:


[Screenshot: swift_dashboard.jpg]

The Swift deployment is complete.

Fixing the openstack-swift-proxy startup errors

If you see errors like the following:

liberasurecode[2403]: liberasurecode_instance_create: dynamic linking error libJerasure.so.2: cannot open shared object file: No such file or directory
liberasurecode[2403]: liberasurecode_instance_create: dynamic linking error libisal.so.2: cannot open shared object file: No such file or directory
liberasurecode[2403]: liberasurecode_instance_create: dynamic linking error libshss.so.1: cannot open shared object file: No such file or directory

your CentOS 7 system is missing these libraries; they can be built and installed from source.
First make sure the basic build toolchain is installed:

yum install gcc gcc-c++ make automake autoconf libtool yasm

Build and install the gf-complete library (download it first):

unzip gf-complete.zip
cd gf-complete.git
./autogen.sh
./configure
make
make install

Build and install the jerasure library (download it first):

unzip jerasure.zip
cd jerasure.git
autoreconf --force --install
./configure && make && make install

Build and install the liberasurecode library (download it first):

unzip liberasurecode-master.zip
cd liberasurecode-master
./autogen.sh
./configure && make && make install

Copy the built libraries to /usr/lib64:

cp --backup /usr/local/lib/*.so.* /usr/lib64

Build and install the libisal library (download it first):

tar xvf libisal_2.17.0.orig.tar.xz
cd libisal_2.17.0
./autogen.sh
./configure --prefix=/usr --libdir=/usr/lib64
make && make install
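
After installing the libraries, it may also be necessary to refresh the dynamic linker cache so the new .so files are found:

ldconfig
ldconfig -p | grep -Ei 'jerasure|isal|erasurecode'   # confirm the libraries are now resolvable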

Then try restarting openstack-swift-proxy on the controller node, or the Swift services on the storage nodes.

Good LUCK!

 
