OpenStack Installation and Deployment

1. Cloud Computing

1.1 What Is Cloud Computing?

Cloud computing is implemented through virtualization. The idea is to consume computing resources the way you consume tap water, paying only for what you actually use.
More formally, cloud computing is a pay-per-use model that provides convenient, on-demand network access to a shared pool of configurable computing resources (networks, servers, storage, applications, and services). These resources can be provisioned rapidly with minimal management effort and minimal interaction with the service provider.

1.2 A Brief Overview of OpenStack

Official documentation for OpenStack (Mitaka release)

1.2.1 Components

  • Glance: image service
  • Nova: compute service
  • Neutron: networking service
  • Horizon: dashboard
  • Keystone: identity (authentication) service
  • MariaDB: database
  • RabbitMQ: message queue
  • Memcached: token cache

1.3 Deploying the OpenStack (Mitaka) Base Environment

1.3.1 Prepare the Environment

Hosts: two
Roles: controller node, compute node
Memory: controller node -- 4 GB; compute node -- 2 GB
CPU: virtualization extensions enabled
IP: controller node -- 10.0.0.11; compute node -- 10.0.0.31

1.3.2 Upload and Extract the Installation Package

On all nodes:

[root@controller ~]# cd /opt/
[root@controller opt]# ll
total 241672
-rw-r--r-- 1 root root 247468369 Nov  7 17:41 openstack_rpm.tar.gz
[root@controller opt]# tar xf openstack_rpm.tar.gz

1.3.3 Create a Local OpenStack Yum Repository

On all nodes:

cat >/etc/yum.repos.d/openstack.repo<<EOF
[openstack]
name=openstack
baseurl=file:///opt/repo
gpgcheck=0
EOF

[root@controller ~]# yum makecache
Loaded plugins: fastestmirror
openstack                                                              | 2.9 kB  00:00:00    
yum                                                                    | 3.6 kB  00:00:00    
(1/3): openstack/filelists_db                                          | 465 kB  00:00:00    
(2/3): openstack/other_db                                              | 211 kB  00:00:00    
(3/3): openstack/primary_db                                            | 398 kB  00:00:00    
Loading mirror speeds from cached hostfile
Metadata Cache Created

1.3.4 NTP

1. Install the package (all nodes):

yum -y install chrony

2. Edit the configuration file.
On the controller node:

sed -i '/^server/d' /etc/chrony.conf
sed -i '2a server time1.aliyun.com iburst\nallow 10/8' /etc/chrony.conf

On the compute node:

sed -i '/^server/d' /etc/chrony.conf
sed -i '2a server 10.0.0.11 iburst' /etc/chrony.conf
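The sed edits above can be rehearsed on a scratch copy before touching the real /etc/chrony.conf. A minimal sketch (the sample file content below is abbreviated, not the full stock chrony.conf):

```shell
# Work on a scratch copy so the real config stays untouched
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
# Use public servers from the pool.ntp.org project.
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
driftfile /var/lib/chrony/drift
EOF

# Same edits as on the controller node:
sed -i '/^server/d' "$tmp"                                     # drop the stock servers
sed -i '2a server time1.aliyun.com iburst\nallow 10/8' "$tmp"  # add ours plus the allow rule

cat "$tmp"
```

On the compute node the second sed would append `server 10.0.0.11 iburst` instead, pointing chrony at the controller.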

3. Start NTP.
On all nodes:

systemctl start chronyd.service 
systemctl enable chronyd.service

1.3.5 Install the OpenStack Packages

On all nodes.
Note:
The official guide includes a yum upgrade step. Running it upgrades the installed packages and eventually produces errors, so do not execute it here.

1. Install the OpenStack client:

yum -y install python-openstackclient

2. RHEL and CentOS enable SELinux by default. Install the openstack-selinux package to automatically manage security policies for the OpenStack services:

yum -y install openstack-selinux

1.3.6 SQL Database

1. Install the packages (controller node):

yum -y install mariadb mariadb-server python2-PyMySQL

2. Create and edit /etc/my.cnf.d/openstack.cnf:

cat >/etc/my.cnf.d/openstack.cnf<<EOF
[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
EOF

3. Start the service:

systemctl start mariadb.service 
systemctl enable mariadb.service

4. Secure the database service by running the mysql_secure_installation script. In particular, choose a suitable password for the database root account.

mysql_secure_installation

Sample session:

[root@controller ~]# mysql_secure_installation

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user.  If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none): 
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.

Set root password? [Y/n] n
... skipping.

By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] y
... Success!

Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] y
... Success!

By default, MariaDB comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] y
- Dropping test database...
... Success!
- Removing privileges on test database...
... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] y
... Success!

Cleaning up...

All done!  If you've completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!

1.3.7 Message Queue

On the controller node.
1. Install the package:

yum -y install rabbitmq-server

2. Start the service:

systemctl start rabbitmq-server.service
systemctl enable rabbitmq-server.service

3. Stop the mail service (optional):

systemctl stop postfix.service 
systemctl disable postfix.service

4. Add the openstack user:

rabbitmqctl add_user openstack RABBIT_PASS

Replace RABBIT_PASS with a suitable password.
5. Grant the openstack user configure, write, and read permissions:

rabbitmqctl set_permissions openstack ".*" ".*" ".*"

1.3.8 Memcached

1. Install the packages:

yum -y install memcached python-memcached

2. Start the service:

systemctl start memcached.service
systemctl enable memcached.service

3. Edit the configuration file.
Listen on all addresses:

sed -i 's#127.0.0.1#0.0.0.0#g' /etc/sysconfig/memcached

4. Restart the service:

systemctl restart memcached.service

1.4 Identity Service

1.4.1 Identity Service Overview

The OpenStack Identity service provides a single point of integration for managing authentication, authorization, and the service catalog. The other OpenStack services use it as a common unified API. In addition, services that provide user information but are not part of OpenStack (such as an LDAP service) can be integrated into a pre-existing infrastructure.
To benefit from the Identity service, the other OpenStack services must work with it. When an OpenStack service receives a request from a user, it asks the Identity service to verify that the user is authorized to make the request.
The Identity service contains these components:

  • Server
    A centralized server provides authentication and authorization services through a RESTful interface.
  • Drivers
    Drivers, or service back ends, are integrated into the centralized server. They are used to access identity information in repositories external to OpenStack, which may already exist in the infrastructure where OpenStack is deployed (for example, a SQL database or an LDAP server).
  • Modules
    Middleware modules run in the address space of the OpenStack components that use the Identity service. These modules intercept service requests, extract user credentials, and send them to the central server for authorization. Integration between the middleware modules and the OpenStack components uses the Python Web Server Gateway Interface (WSGI).

When you install the Identity service, you must register each service in your OpenStack installation with it. The Identity service can then track which OpenStack services are installed and where they are located on the network.

Service catalog: one entry per service
Authentication: domains, projects, users
Authorization: roles (admin, user)

1.4.2 Deployment

Prerequisites
1. Connect to the database server as the root user, using the database client:

mysql -uroot -p

2. Create the keystone database:

CREATE DATABASE keystone;

3. Grant proper access to the keystone database:

GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
  IDENTIFIED BY 'KEYSTONE_DBPASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
  IDENTIFIED BY 'KEYSTONE_DBPASS';

4. Exit the database client:

exit

Install and configure the components
1. Install the keystone packages:

yum -y install openstack-keystone httpd mod_wsgi

2. Edit the /etc/keystone/keystone.conf file.
Install the configuration helper package:

yum install openstack-utils -y

Automated configuration:

cp /etc/keystone/keystone.conf{,.bak}
egrep -v "^$|#" /etc/keystone/keystone.conf.bak > /etc/keystone/keystone.conf

openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token  ADMIN_TOKEN
openstack-config --set /etc/keystone/keystone.conf database connection  mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
openstack-config --set /etc/keystone/keystone.conf token provider  fernet

openstack-config --set rewrites a key in place if it already exists and adds it if it does not.
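For illustration, the update-or-add behavior can be mimicked with a small hypothetical helper (ini_set below is not the real openstack-config implementation; it ignores section boundaries and only handles the flat key = value layout used throughout this guide):

```shell
# Hypothetical stand-in for "openstack-config --set FILE SECTION KEY VALUE":
# rewrite the key if it is already present, append it under the section otherwise.
ini_set() {
    file=$1 section=$2 key=$3 value=$4
    if grep -q "^$key" "$file"; then
        sed -i "s|^$key.*|$key = $value|" "$file"
    else
        sed -i "/^\[$section\]/a $key = $value" "$file"
    fi
}

conf=$(mktemp)
printf '[DEFAULT]\nadmin_token = CHANGEME\n' > "$conf"

ini_set "$conf" DEFAULT admin_token ADMIN_TOKEN  # existing key: value rewritten
ini_set "$conf" DEFAULT verbose true             # missing key: appended to the section
cat "$conf"
```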

3. Populate the Identity service database:

su -s /bin/sh -c "keystone-manage db_sync" keystone

4. Verify:

[root@controller ~]# mysql keystone -e "show tables"
+------------------------+
| Tables_in_keystone     |
+------------------------+
| access_token           |
| assignment             |
| config_register        |
| consumer               |
| credential             |
| domain                 |
| endpoint               |
| endpoint_group         |
| federated_user         |
| federation_protocol    |
| group                  |
| id_mapping             |
| identity_provider      |
| idp_remote_ids         |
| implied_role           |
| local_user             |
| mapping                |
| migrate_version        |
| password               |
| policy                 |
| policy_association     |
| project                |
| project_endpoint       |
| project_endpoint_group |
| region                 |
| request_token          |
| revocation_event       |
| role                   |
| sensitive_config       |
| service                |
| service_provider       |
| token                  |
| trust                  |
| trust_role             |
| user                   |
| user_group_membership  |
| whitelisted_config     |
+------------------------+

If the tables are listed, the synchronization succeeded.

5. Initialize the Fernet keys:

keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

Configure the Apache HTTP server
1. Edit the /etc/httpd/conf/httpd.conf file and configure the ServerName option to reference the controller node:

echo "ServerName controller" >> /etc/httpd/conf/httpd.conf

2. Create the /etc/httpd/conf.d/wsgi-keystone.conf file:

echo 'Listen 5000
Listen 35357

<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLogFormat "%{cu}t %M"
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined

    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>

<VirtualHost *:35357>
    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-admin
    WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLogFormat "%{cu}t %M"
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined

    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>' >/etc/httpd/conf.d/wsgi-keystone.conf

3. Start the service:

systemctl start httpd.service
systemctl enable httpd.service

1.4.3 Create the Service Entity and API Endpoints

The Identity service provides a catalog of services and their locations. Each service that you add to your OpenStack environment requires a service entity and several API endpoints in the catalog.

Prerequisites
By default, the Identity service database contains no information to support conventional authentication and catalog services. You must use the temporary authentication token created during the Identity service installation to initialize the service entity and API endpoints.

1. Configure the authentication token:

export OS_TOKEN=ADMIN_TOKEN

2. Configure the endpoint URL:

export OS_URL=http://controller:35357/v3

3. Configure the Identity API version:

export OS_IDENTITY_API_VERSION=3

Create the service entity and API endpoints
1. Create the service entity for the Identity service:

openstack service create \
--name keystone --description "OpenStack Identity" identity

Verify:

[root@controller ~]# openstack service list
+----------------------------------+----------+----------+
| ID                               | Name     | Type     |
+----------------------------------+----------+----------+
| 7aa4213d87d149a2a4c36a1e411c66ed | keystone | identity |
+----------------------------------+----------+----------+

2. Create the Identity service API endpoints.
OpenStack uses three API endpoint variants for each service: admin, internal, and public. By default, the admin API endpoint allows modifying users and tenants, while the public and internal APIs do not. In a production environment, the variants may reside on separate networks that serve different types of users for security reasons. For instance, the public API network might be visible from the Internet so that customers can manage their own clouds; the admin API network might be restricted to operators within the organization that manages the cloud infrastructure; and the internal API network might be restricted to the hosts that run the OpenStack services. OpenStack also supports multiple regions for scalability.

openstack endpoint create --region RegionOne identity public http://controller:5000/v3 
openstack endpoint create --region RegionOne identity internal http://controller:5000/v3
openstack endpoint create --region RegionOne identity admin http://controller:35357/v3

Verify:

[root@controller ~]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------+
| ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                        |
+----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------+
| 0028baddd234499693496d7657e55275 | RegionOne | keystone     | identity     | True    | internal  | http://controller:5000/v3  |
| 758f70b29ad7408f95f66e1c9b7ee658 | RegionOne | keystone     | identity     | True    | admin     | http://controller:35357/v3 |
| ea1280cb7d064c41961e1baf11de5eb5 | RegionOne | keystone     | identity     | True    | public    | http://controller:5000/v3  |
+----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------+

1.4.4 Create a Domain, Projects, Users, and Roles

The Identity service provides authentication services for each OpenStack service.
1. Create the default domain:

openstack domain create --description "Default Domain" default

Verify:

[root@controller ~]# openstack domain list
+----------------------------------+---------+---------+----------------+
| ID                               | Name    | Enabled | Description    |
+----------------------------------+---------+---------+----------------+
| 34a1e83eefe249db9c18b3b436244b92 | default | True    | Default Domain |
+----------------------------------+---------+---------+----------------+

2. Create the administrative project, user, and role.

  • Create the admin project:
openstack project create --domain default --description "Admin Project" admin

Verify:

[root@controller ~]# openstack project list
+----------------------------------+-------+
| ID                               | Name  |
+----------------------------------+-------+
| 02c4969e5da04b868f61d15a7e6406fb | admin |
+----------------------------------+-------+
  • Create the admin user:
openstack user create --domain default --password ADMIN_PASS admin

Verify:

[root@controller ~]# openstack user list
+----------------------------------+-------+
| ID                               | Name  |
+----------------------------------+-------+
| 3aa215bb57954097b92395d8e59b691a | admin |
+----------------------------------+-------+

To delete a user:

openstack user delete ID
  • Create the admin role:
openstack role create admin

Verify:

[root@controller ~]# openstack role list
+----------------------------------+-------+
| ID                               | Name  |
+----------------------------------+-------+
| 286b41ad4b614e988ad3a283ed71790c | admin |
+----------------------------------+-------+
  • Add the admin role to the admin project and user:
openstack role add --project admin --user admin admin

3. Create the service project:

openstack project create --domain default --description "Service Project" service

4. Regular (non-admin) tasks should use an unprivileged project and user. As an example, create the demo project and user.

  • Create the demo project:
openstack project create --domain default --description "Demo Project" demo

Do not repeat this step when creating additional users for this project.

  • Create the demo user:
openstack user create --domain default --password DEMO_PASS demo
  • Create the user role:
openstack role create user
  • Add the user role to the demo project and user:
openstack role add --project demo --user demo user

1.4.5 Verify Operation

1. Unset the OS_TOKEN and OS_URL environment variables:

unset OS_TOKEN OS_URL

2. As the admin user, request an authentication token:

openstack --os-auth-url http://controller:35357/v3 \
--os-project-domain-name default --os-user-domain-name default --os-project-name admin \
--os-username admin token issue

3. As the demo user, request an authentication token:

openstack --os-auth-url http://controller:5000/v3 \
--os-project-domain-name default --os-user-domain-name default \
--os-project-name demo --os-username demo token issue

1.4.6 Create OpenStack Client Environment Scripts

Create the admin script:

[root@controller ~]# cat >~/admin-openrc<<EOF
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
[root@controller ~]# source admin-openrc

Create the demo-openrc file and add the following content:

export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=DEMO_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
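Both scripts are plain sequences of export statements, so switching identities is just a matter of sourcing a different file. A self-contained sanity check using a throwaway copy of demo-openrc:

```shell
# Write a throwaway copy of demo-openrc and source it into the current shell
rc=$(mktemp)
cat > "$rc" <<'EOF'
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=DEMO_PASS
export OS_AUTH_URL=http://controller:5000/v3
EOF

. "$rc"
echo "openstack client would authenticate as $OS_USERNAME against $OS_AUTH_URL"
```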

1.5 Image Service

1.5.1 Install and Configure

Prerequisites
1. Create the database:

mysql -u root -p

CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
  IDENTIFIED BY 'GLANCE_DBPASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
  IDENTIFIED BY 'GLANCE_DBPASS';

2. Create the service credentials:

#Create the glance user
openstack user create --domain default --password GLANCE_PASS glance
#Add the admin role to the glance user and service project
openstack role add --project service --user glance admin
#Create the glance service entity
openstack service create --name glance --description "OpenStack Image" image

3. Create the Image service API endpoints:

openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292

Install and configure the components
1. Install the package:

yum -y install openstack-glance

2. Edit the /etc/glance/glance-api.conf file:

cp /etc/glance/glance-api.conf{,.bak}
grep '^[a-Z\[]' /etc/glance/glance-api.conf.bak >/etc/glance/glance-api.conf
openstack-config --set /etc/glance/glance-api.conf  database  connection  mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
openstack-config --set /etc/glance/glance-api.conf  glance_store stores  file,http
openstack-config --set /etc/glance/glance-api.conf  glance_store default_store  file
openstack-config --set /etc/glance/glance-api.conf  glance_store filesystem_store_datadir  /var/lib/glance/images/
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken auth_uri  http://controller:5000
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken auth_url  http://controller:35357
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken memcached_servers  controller:11211
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken auth_type  password
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken project_domain_name  default
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken user_domain_name  default
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken project_name  service
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken username  glance
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken password  GLANCE_PASS
openstack-config --set /etc/glance/glance-api.conf  paste_deploy flavor  keystone

3. Edit the /etc/glance/glance-registry.conf file:

cp /etc/glance/glance-registry.conf{,.bak}
grep '^[a-Z\[]' /etc/glance/glance-registry.conf.bak > /etc/glance/glance-registry.conf
openstack-config --set /etc/glance/glance-registry.conf  database  connection  mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken auth_uri  http://controller:5000
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken auth_url  http://controller:35357
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken memcached_servers  controller:11211
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken auth_type  password
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken project_domain_name  default
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken user_domain_name  default
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken project_name  service
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken username  glance
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken password  GLANCE_PASS
openstack-config --set /etc/glance/glance-registry.conf  paste_deploy flavor  keystone

4. Populate the Image service database:

su -s /bin/sh -c "glance-manage db_sync" glance

5. Start the Image services and configure them to start at boot:

systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start openstack-glance-api.service openstack-glance-registry.service

1.5.2 Verify

1. Download the image:

wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img

2. Upload the image to the Image service using the QCOW2 disk format and bare container format, with public visibility so that all projects can access it:

openstack image create "cirros" \
  --file cirros-0.3.4-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --public
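Glance trusts the --disk-format flag rather than inspecting the file, so it is worth confirming that the image really is QCOW2 before uploading. A QCOW2 file begins with the four magic bytes 0x51 0x46 0x49 0xfb ("QFI" plus 0xfb); a small sketch of that check (the sample file below is synthetic, standing in for the real cirros image):

```shell
# A QCOW2 image begins with the magic bytes "QFI" followed by 0xfb
is_qcow2() {
    [ "$(head -c 4 "$1" | od -An -tx1 | tr -d ' \n')" = "514649fb" ]
}

# Synthetic stand-in for cirros-0.3.4-x86_64-disk.img (just the magic header)
img=$(mktemp)
printf 'QFI\373' > "$img"

if is_qcow2 "$img"; then
    echo "qcow2 magic found, safe to pass --disk-format qcow2"
fi
```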

Check the image store:

[root@controller ~]# ll /var/lib/glance/images/
total 12980
-rw-r----- 1 glance glance 13287936 Dec 12 15:27 b05b13fa-512b-48c3-9522-82f69f3ac91b

3. Confirm the upload and verify its attributes:

[root@controller ~]# openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| b05b13fa-512b-48c3-9522-82f69f3ac91b | cirros | active |
+--------------------------------------+--------+--------+

1.6 Compute Service

1.6.1 Compute Service Overview

The OpenStack Compute service consists of the following components:

  • nova-api service
    Accepts and responds to end-user compute API calls. The service supports the OpenStack Compute API, the Amazon EC2 API, and a special admin API for privileged users to perform administrative actions. It enforces some policies and initiates most orchestration activities, such as running an instance.
  • nova-api-metadata service
    Accepts metadata requests from instances. The nova-api-metadata service is generally used in multi-host mode with nova-network installations.
  • nova-compute service
    A worker daemon that creates and terminates virtual machine instances through hypervisor APIs. For example:
    • XenAPI for XenServer/XCP
    • libvirt for KVM or QEMU
    • VMwareAPI for VMware
      Processing is fairly complex. Basically, the daemon accepts actions from the queue, performs a series of system commands such as launching a KVM instance, and then updates the instance's state in the database.
  • nova-scheduler service
    Takes a virtual machine instance request from the queue and determines on which compute server host it runs.
  • nova-conductor module
    Mediates interactions between the nova-compute service and the database. It eliminates direct access to the cloud database by the nova-compute service. The nova-conductor module scales horizontally; however, do not deploy it on nodes where the nova-compute service runs.
  • nova-cert module
    A server daemon that serves the Nova Cert service for X509 certificates, used to generate certificates for euca-bundle-image. It is only needed for the EC2 API.
  • nova-network worker daemon
    Similar to the nova-compute service, it accepts networking tasks from the queue and manipulates the network, performing tasks such as setting up bridging interfaces or changing iptables rules.
  • nova-consoleauth daemon
    Authorizes tokens for users that console proxies provide; see nova-novncproxy and nova-xvpvncproxy. This service must be running for console proxies to work. In a cluster configuration, you can run proxies of either type against a single nova-consoleauth service.
  • nova-novncproxy daemon
    Provides a proxy for accessing running instances through a VNC connection. Supports browser-based novnc clients.
  • nova-spicehtml5proxy daemon
    Provides a proxy for accessing running instances through a SPICE connection. Supports browser-based HTML5 clients.
  • nova-xvpvncproxy daemon
    Provides a proxy for accessing running instances through a VNC connection. Supports an OpenStack-specific Java client.
  • nova-cert daemon
    Manages X509 certificates.
  • nova client
    Enables users to submit commands as a tenant administrator or end user.
  • The queue
    A central hub for passing messages between daemons. Usually implemented with RabbitMQ (http://www.rabbitmq.com/), but other AMQP message queues such as ZeroMQ (http://www.zeromq.org/) can also be used.
  • SQL database
    Stores most build-time and run-time states for a cloud infrastructure, including:
    • Available instance types
    • Instances in use
    • Available networks
    • Projects

1.6.2 Install and Configure the Controller Node

Prerequisites
1. Create the databases:

mysql -u root -p

#Create the nova_api and nova databases
CREATE DATABASE nova_api;
CREATE DATABASE nova;
#Grant proper access to the databases
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
  IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
  IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
  IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
  IDENTIFIED BY 'NOVA_DBPASS';

2. Create the service credentials:

#Create the nova user
openstack user create --domain default --password NOVA_PASS nova
#Add the admin role to the nova user
openstack role add --project service --user nova admin
#Create the nova service entity
openstack service create --name nova --description "OpenStack Compute" compute

3. Create the Compute service API endpoints:

openstack endpoint create --region RegionOne \
  compute public http://controller:8774/v2.1/%\(tenant_id\)s
openstack endpoint create --region RegionOne \
  compute internal http://controller:8774/v2.1/%\(tenant_id\)s
openstack endpoint create --region RegionOne \
  compute admin http://controller:8774/v2.1/%\(tenant_id\)s
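The escaped %\(tenant_id\)s in these URLs is a Python-style placeholder that keystone stores verbatim; the backslashes only keep the shell from interpreting the parentheses. A sketch of the substitution the client performs at request time (the project ID below is made up):

```shell
# What the catalog actually stores (no shell escapes once saved):
url='http://controller:8774/v2.1/%(tenant_id)s'

# At request time the placeholder is replaced with the caller's project ID:
tenant_id=02c4969e5da04b868f61d15a7e6406fb
resolved=$(printf '%s' "$url" | sed "s/%(tenant_id)s/$tenant_id/")
echo "$resolved"
```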

Install and configure the components
1. Install the packages:

yum -y install openstack-nova-api openstack-nova-conductor \
  openstack-nova-console openstack-nova-novncproxy \
  openstack-nova-scheduler

2. Edit the /etc/nova/nova.conf file:

cp /etc/nova/nova.conf{,.bak}
grep '^[a-Z\[]' /etc/nova/nova.conf.bak >/etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf  DEFAULT enabled_apis  osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf  DEFAULT rpc_backend  rabbit
openstack-config --set /etc/nova/nova.conf  DEFAULT auth_strategy  keystone
openstack-config --set /etc/nova/nova.conf  DEFAULT my_ip  10.0.0.11
openstack-config --set /etc/nova/nova.conf  DEFAULT use_neutron  True
openstack-config --set /etc/nova/nova.conf  DEFAULT firewall_driver  nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf  api_database connection  mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
openstack-config --set /etc/nova/nova.conf  database  connection  mysql+pymysql://nova:NOVA_DBPASS@controller/nova
openstack-config --set /etc/nova/nova.conf  glance api_servers  http://controller:9292
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  auth_uri  http://controller:5000
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  auth_url  http://controller:35357
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  memcached_servers  controller:11211
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  auth_type  password
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  project_domain_name  default
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  user_domain_name  default
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  project_name  service
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  username  nova
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  password  NOVA_PASS
openstack-config --set /etc/nova/nova.conf  oslo_concurrency lock_path  /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf  oslo_messaging_rabbit  rabbit_host  controller
openstack-config --set /etc/nova/nova.conf  oslo_messaging_rabbit  rabbit_userid  openstack
openstack-config --set /etc/nova/nova.conf  oslo_messaging_rabbit  rabbit_password  RABBIT_PASS
openstack-config --set /etc/nova/nova.conf  vnc vncserver_listen  '$my_ip'
openstack-config --set /etc/nova/nova.conf  vnc vncserver_proxyclient_address  '$my_ip'

3. Populate the Compute databases:

su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage db sync" nova

4. Start the Compute services and configure them to start at boot:

systemctl enable openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service

systemctl start openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service

1.6.3 Install and Configure the Compute Node

Install and configure the components
1. Install the package:

yum -y install openstack-nova-compute

2. Edit the /etc/nova/nova.conf file:

yum install openstack-utils -y
cp /etc/nova/nova.conf{,.bak}
grep '^[a-Z\[]' /etc/nova/nova.conf.bak >/etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf  DEFAULT enabled_apis  osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf  DEFAULT rpc_backend  rabbit
openstack-config --set /etc/nova/nova.conf  DEFAULT auth_strategy  keystone
openstack-config --set /etc/nova/nova.conf  DEFAULT my_ip  10.0.0.31
openstack-config --set /etc/nova/nova.conf  DEFAULT use_neutron  True
openstack-config --set /etc/nova/nova.conf  DEFAULT firewall_driver  nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf  glance api_servers  http://controller:9292
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  auth_uri  http://controller:5000
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  auth_url  http://controller:35357
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  memcached_servers  controller:11211
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  auth_type  password
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  project_domain_name  default
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  user_domain_name  default
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  project_name  service
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  username  nova
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  password  NOVA_PASS
openstack-config --set /etc/nova/nova.conf  oslo_concurrency lock_path  /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf  oslo_messaging_rabbit  rabbit_host  controller
openstack-config --set /etc/nova/nova.conf  oslo_messaging_rabbit  rabbit_userid  openstack
openstack-config --set /etc/nova/nova.conf  oslo_messaging_rabbit  rabbit_password  RABBIT_PASS
openstack-config --set /etc/nova/nova.conf  vnc enabled  True
openstack-config --set /etc/nova/nova.conf  vnc vncserver_listen  0.0.0.0
openstack-config --set /etc/nova/nova.conf  vnc vncserver_proxyclient_address  '$my_ip'
openstack-config --set /etc/nova/nova.conf  vnc novncproxy_base_url  http://controller:6080/vnc_auto.html

Finalize the installation
1. Determine whether your compute node supports hardware acceleration for virtual machines:

egrep -c '(vmx|svm)' /proc/cpuinfo

If this command returns a value of one or greater, the node supports hardware acceleration; if it returns zero, configure libvirt to use QEMU instead of KVM.

2. Start the Compute service and its dependencies, and configure them to start at boot:

systemctl start libvirtd.service openstack-nova-compute.service
systemctl enable libvirtd.service openstack-nova-compute.service
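The hardware-acceleration check from step 1 is often scripted straight into node setup. A sketch of turning the count into a libvirt virt_type choice (virt_type mirrors the nova.conf option of the same name; applying it is shown as an echo only):

```shell
# Count CPU flags for Intel VT-x (vmx) or AMD-V (svm)
count=$(egrep -c '(vmx|svm)' /proc/cpuinfo 2>/dev/null) || :
[ -n "$count" ] || count=0   # no /proc/cpuinfo: treat as unsupported

if [ "$count" -ge 1 ]; then
    virt_type=kvm     # hardware acceleration available
else
    virt_type=qemu    # no acceleration: fall back to plain emulation
fi

echo "would run: openstack-config --set /etc/nova/nova.conf libvirt virt_type $virt_type"
```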

Verify
Run the following commands on the controller node.
List the service components to confirm that each process started and registered successfully:

[root@controller ~]# openstack compute service list
+----+------------------+------------+----------+---------+-------+----------------------------+
| Id | Binary           | Host       | Zone     | Status  | State | Updated At                 |
+----+------------------+------------+----------+---------+-------+----------------------------+
|  1 | nova-conductor   | controller | internal | enabled | up    | 2017-12-12T09:01:05.000000 |
|  2 | nova-consoleauth | controller | internal | enabled | up    | 2017-12-12T09:01:05.000000 |
|  3 | nova-scheduler   | controller | internal | enabled | up    | 2017-12-12T09:01:05.000000 |
|  6 | nova-compute     | compute1   | nova     | enabled | up    | 2017-12-12T09:01:03.000000 |
+----+------------------+------------+----------+---------+-------+----------------------------+

1.7 Networking Service

1.7.1 Networking Service Overview

  • neutron-server
    Accepts and routes API requests to the appropriate OpenStack Networking plug-in for action.
  • OpenStack Networking plug-ins and agents
    Plug and unplug ports, create networks and subnets, and provide IP addressing. The plug-ins and agents differ depending on the vendor and technologies used; OpenStack Networking ships with plug-ins and agents for Cisco virtual and physical switches, NEC OpenFlow products, Open vSwitch, Linux bridging, and the VMware NSX product.
    The common agents are L3 (layer 3), DHCP (dynamic host IP addressing), and a plug-in agent.
  • Messaging queue
    Used by most OpenStack Networking installations to route information between the neutron-server and the various agents. It also acts as a database to store networking state for certain plug-ins.

1.7.2 Install and Configure the Controller Node

Prerequisites
1. Create the database:

mysql -u root -p

#Create the neutron database
CREATE DATABASE neutron;
#Grant proper access to the neutron database, replacing NEUTRON_DBPASS with a suitable password
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
  IDENTIFIED BY 'NEUTRON_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
  IDENTIFIED BY 'NEUTRON_DBPASS';

2. Create the service credentials:

#Create the neutron user
openstack user create --domain default --password NEUTRON_PASS neutron
#Add the admin role to the neutron user
openstack role add --project service --user neutron admin
#Create the neutron service entity
openstack service create --name neutron --description "OpenStack Networking" network

3. Create the Networking service API endpoints:

openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696

Networking Option 1: Provider networks
1. Install the components:

yum -y install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables

2. Configure the server component.
Edit the /etc/neutron/neutron.conf file:

cp /etc/neutron/neutron.conf{,.bak}
grep '^[a-Z\[]' /etc/neutron/neutron.conf.bak >/etc/neutron/neutron.conf
openstack-config --set /etc/neutron/neutron.conf  DEFAULT core_plugin  ml2
openstack-config --set /etc/neutron/neutron.conf  DEFAULT service_plugins
openstack-config --set /etc/neutron/neutron.conf  DEFAULT rpc_backend  rabbit
openstack-config --set /etc/neutron/neutron.conf  DEFAULT auth_strategy  keystone
openstack-config --set /etc/neutron/neutron.conf  DEFAULT notify_nova_on_port_status_changes  True
openstack-config --set /etc/neutron/neutron.conf  DEFAULT notify_nova_on_port_data_changes  True
openstack-config --set /etc/neutron/neutron.conf  database connection  mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken auth_uri  http://controller:5000
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken auth_url  http://controller:35357
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken memcached_servers  controller:11211
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken auth_type  password
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken project_domain_name  default
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken user_domain_name  default
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken project_name  service
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken username  neutron
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken password  NEUTRON_PASS
openstack-config --set /etc/neutron/neutron.conf  nova auth_url  http://controller:35357
openstack-config --set /etc/neutron/neutron.conf  nova auth_type  password 
openstack-config --set /etc/neutron/neutron.conf  nova project_domain_name  default
openstack-config --set /etc/neutron/neutron.conf  nova user_domain_name  default
openstack-config --set /etc/neutron/neutron.conf  nova region_name  RegionOne
openstack-config --set /etc/neutron/neutron.conf  nova project_name  service
openstack-config --set /etc/neutron/neutron.conf  nova username  nova
openstack-config --set /etc/neutron/neutron.conf  nova password  NOVA_PASS
openstack-config --set /etc/neutron/neutron.conf  oslo_concurrency lock_path  /var/lib/neutron/tmp
openstack-config --set /etc/neutron/neutron.conf  oslo_messaging_rabbit rabbit_host  controller
openstack-config --set /etc/neutron/neutron.conf  oslo_messaging_rabbit rabbit_userid  openstack
openstack-config --set /etc/neutron/neutron.conf  oslo_messaging_rabbit rabbit_password  RABBIT_PASS
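
The `cp FILE{,.bak}` plus `grep '^[a-Z\[]'` pattern used above keeps a backup and then rewrites the config with only section headers and active settings, dropping comments and blank lines. A self-contained sketch of the idiom against a throwaway file (`/tmp/demo.conf` is made up for illustration; `[a-zA-Z[]` is a locale-safe spelling of the `[a-Z\[]` range):

```shell
# Keep a backup copy, then retain only lines starting with a letter or '[',
# i.e. section headers and real key = value lines.
cat > /tmp/demo.conf <<'EOF'
# a comment
[DEFAULT]
core_plugin = ml2

# another comment
[database]
connection = sqlite://
EOF
cp /tmp/demo.conf /tmp/demo.conf.bak
grep '^[a-zA-Z[]' /tmp/demo.conf.bak > /tmp/demo.conf
cat /tmp/demo.conf
```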

3. Configure the Modular Layer 2 (ML2) plug-in
Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file:

cp /etc/neutron/plugins/ml2/ml2_conf.ini{,.bak}
grep '^[a-Z\[]' /etc/neutron/plugins/ml2/ml2_conf.ini.bak >/etc/neutron/plugins/ml2/ml2_conf.ini
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini  ml2 type_drivers  flat,vlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini  ml2 tenant_network_types 
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini  ml2 mechanism_drivers  linuxbridge
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini  ml2 extension_drivers  port_security
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini  ml2_type_flat flat_networks  provider
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini  securitygroup enable_ipset  True
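
openstack-config (from openstack-utils) is a thin wrapper around crudini: each `--set FILE section key value` call creates the section if needed and writes one key. If the tool is unavailable, the same edit can be sketched with Python's configparser (the `/tmp/ml2_demo.ini` path here is illustrative, not the real config file):

```shell
# Equivalent of: openstack-config --set /tmp/ml2_demo.ini ml2 type_drivers flat,vlan
python3 - <<'EOF'
import configparser

path = '/tmp/ml2_demo.ini'
conf = configparser.ConfigParser()
conf.read(path)                    # no error if the file does not exist yet
if not conf.has_section('ml2'):
    conf.add_section('ml2')
conf.set('ml2', 'type_drivers', 'flat,vlan')
with open(path, 'w') as f:
    conf.write(f)
print(open(path).read(), end='')
EOF
```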

4. Configure the Linux bridge agent
Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file:

cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
grep '^[a-Z\[]' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak >/etc/neutron/plugins/ml2/linuxbridge_agent.ini
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini  linux_bridge physical_interface_mappings  provider:eth0
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini  securitygroup enable_security_group  True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini  securitygroup firewall_driver  neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini  vxlan enable_vxlan  False
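
The `provider:eth0` mapping assumes the provider network is attached to an interface named eth0; on hosts with predictable NIC names (ens33, eno1, ...) substitute the real name. Listing /sys/class/net shows the interfaces actually present:

```shell
# Every Linux host exposes its network interfaces here; 'lo' is always present.
ls /sys/class/net
```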

5. Configure the DHCP agent
Edit the /etc/neutron/dhcp_agent.ini file:

openstack-config --set /etc/neutron/dhcp_agent.ini  DEFAULT interface_driver neutron.agent.linux.interface.BridgeInterfaceDriver
openstack-config --set /etc/neutron/dhcp_agent.ini  DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set /etc/neutron/dhcp_agent.ini  DEFAULT enable_isolated_metadata true

6. Configure the metadata agent
Edit the /etc/neutron/metadata_agent.ini file:

openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_ip  controller
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret  METADATA_SECRET
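
Replace METADATA_SECRET with a real shared secret; the same value must also be set on the nova side (`metadata_proxy_shared_secret` below). Any sufficiently random string works — one convenient way to generate a 20-character hex secret, assuming openssl is installed:

```shell
# 10 random bytes rendered as 20 hex characters.
SECRET=$(openssl rand -hex 10)
echo "$SECRET"
```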

7. Configure Compute (nova) to use Networking
Edit the /etc/nova/nova.conf file:

openstack-config --set /etc/nova/nova.conf  neutron url  http://controller:9696
openstack-config --set /etc/nova/nova.conf  neutron auth_url  http://controller:35357
openstack-config --set /etc/nova/nova.conf  neutron auth_type  password
openstack-config --set /etc/nova/nova.conf  neutron project_domain_name  default
openstack-config --set /etc/nova/nova.conf  neutron user_domain_name  default
openstack-config --set /etc/nova/nova.conf  neutron region_name  RegionOne
openstack-config --set /etc/nova/nova.conf  neutron project_name  service
openstack-config --set /etc/nova/nova.conf  neutron username  neutron
openstack-config --set /etc/nova/nova.conf  neutron password  NEUTRON_PASS
openstack-config --set /etc/nova/nova.conf  neutron service_metadata_proxy  True
openstack-config --set /etc/nova/nova.conf  neutron metadata_proxy_shared_secret  METADATA_SECRET

8. Finalize the installation
The Networking service initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file /etc/neutron/plugins/ml2/ml2_conf.ini:

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

Populate the database:

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

Restart the Compute API service:

systemctl restart openstack-nova-api.service

Start the Networking services and configure them to start when the system boots:

systemctl start neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service

systemctl enable neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service

Verify operation — three agents should be alive on the controller node:

[root@controller ~]# neutron agent-list
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host       | availability_zone | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| 0fa62af0-d535-454e-9db0-29e280d8d529 | Metadata agent     | controller |                   | :-)   | True           | neutron-metadata-agent    |
| 5b454683-9e44-4d1e-ac74-67343cdfee27 | DHCP agent         | controller | nova              | :-)   | True           | neutron-dhcp-agent        |
| bfe83c89-2f3c-43c8-866e-8130157a70f5 | Linux bridge agent | controller |                   | :-)   | True           | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+

1.7.3 Install and configure the compute node

1. Install the components

yum install openstack-neutron-linuxbridge ebtables ipset -y

2. Configure the common component
Edit the /etc/neutron/neutron.conf file:

cp /etc/neutron/neutron.conf{,.bak}
grep '^[a-Z\[]' /etc/neutron/neutron.conf.bak >/etc/neutron/neutron.conf
openstack-config --set /etc/neutron/neutron.conf  DEFAULT rpc_backend  rabbit
openstack-config --set /etc/neutron/neutron.conf  DEFAULT auth_strategy  keystone
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken auth_uri  http://controller:5000
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken auth_url  http://controller:35357
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken memcached_servers  controller:11211
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken auth_type  password
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken project_domain_name  default
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken user_domain_name  default
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken project_name  service
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken username  neutron
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken password  NEUTRON_PASS
openstack-config --set /etc/neutron/neutron.conf  oslo_concurrency lock_path  /var/lib/neutron/tmp
openstack-config --set /etc/neutron/neutron.conf  oslo_messaging_rabbit rabbit_host  controller
openstack-config --set /etc/neutron/neutron.conf  oslo_messaging_rabbit rabbit_userid  openstack
openstack-config --set /etc/neutron/neutron.conf  oslo_messaging_rabbit rabbit_password  RABBIT_PASS

Configure the networking option (provider networks)
1. Configure the Linux bridge agent
Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file:

cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
grep '^[a-Z\[]' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak >/etc/neutron/plugins/ml2/linuxbridge_agent.ini
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini  linux_bridge physical_interface_mappings  provider:eth0
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini  securitygroup enable_security_group  True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini  securitygroup firewall_driver  neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini  vxlan enable_vxlan  False

3. Configure Compute to use Networking
Edit the /etc/nova/nova.conf file:

openstack-config --set /etc/nova/nova.conf  neutron url  http://controller:9696
openstack-config --set /etc/nova/nova.conf  neutron auth_url  http://controller:35357
openstack-config --set /etc/nova/nova.conf  neutron auth_type  password
openstack-config --set /etc/nova/nova.conf  neutron project_domain_name  default
openstack-config --set /etc/nova/nova.conf  neutron user_domain_name  default
openstack-config --set /etc/nova/nova.conf  neutron region_name  RegionOne
openstack-config --set /etc/nova/nova.conf  neutron project_name  service
openstack-config --set /etc/nova/nova.conf  neutron username  neutron
openstack-config --set /etc/nova/nova.conf  neutron password  NEUTRON_PASS

4. Finalize the installation
Restart the Compute service:

systemctl restart openstack-nova-compute.service

Start the Linux bridge agent and configure it to start when the system boots:

systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service

1.7.4 Verify operation

Run these commands on the controller node. The list should now show four agents: three on the controller node and one on the compute node.

[root@controller ~]# neutron agent-list
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host       | availability_zone | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| 0fa62af0-d535-454e-9db0-29e280d8d529 | Metadata agent     | controller |                   | :-)   | True           | neutron-metadata-agent    |
| 2b5ac0d4-0d14-4748-9faa-65192b18c643 | Linux bridge agent | compute1   |                   | :-)   | True           | neutron-linuxbridge-agent |
| 5b454683-9e44-4d1e-ac74-67343cdfee27 | DHCP agent         | controller | nova              | :-)   | True           | neutron-dhcp-agent        |
| bfe83c89-2f3c-43c8-866e-8130157a70f5 | Linux bridge agent | controller |                   | :-)   | True           | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+

1.8 Dashboard

Install and configure the Dashboard on the controller node.

1.8.1 Install and configure the components

1. Install the package

yum install openstack-dashboard -y

2. Edit the /etc/openstack-dashboard/local_settings file. The effective (non-comment, non-blank) settings after editing look like this:

[root@controller ~]# egrep -v "#|^$" /etc/openstack-dashboard/local_settings
import os
from django.utils.translation import ugettext_lazy as _
from openstack_dashboard import exceptions
from openstack_dashboard.settings import HORIZON_CONFIG
DEBUG = False
TEMPLATE_DEBUG = DEBUG
WEBROOT = '/dashboard/'
ALLOWED_HOSTS = ['*', ]
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
    "compute": 2,
}
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'default'
LOCAL_PATH = '/tmp'
SECRET_KEY='65941f1393ea1c265ad7'
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    },
}
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
OPENSTACK_HOST = "controller"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_KEYSTONE_BACKEND = {
    'name': 'native',
    'can_edit_user': True,
    'can_edit_group': True,
    'can_edit_project': True,
    'can_edit_domain': True,
    'can_edit_role': True,
}
OPENSTACK_HYPERVISOR_FEATURES = {
    'can_set_mount_point': False,
    'can_set_password': False,
    'requires_keypair': False,
}
OPENSTACK_CINDER_FEATURES = {
    'enable_backup': False,
}
OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': False,
    'enable_quotas': False,
    'enable_ipv6': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
    'default_ipv4_subnet_pool_label': None,
    'default_ipv6_subnet_pool_label': None,
    'profile_support': None,
    'supported_provider_types': ['*'],
    'supported_vnic_types': ['*'],
}
OPENSTACK_HEAT_STACK = {
    'enable_user_pass': True,
}
IMAGE_CUSTOM_PROPERTY_TITLES = {
    "architecture": _("Architecture"),
    "kernel_id": _("Kernel ID"),
    "ramdisk_id": _("Ramdisk ID"),
    "image_state": _("Euca2ools state"),
    "project_id": _("Project ID"),
    "image_type": _("Image Type"),
}
IMAGE_RESERVED_CUSTOM_PROPERTIES = []
API_RESULT_LIMIT = 1000
API_RESULT_PAGE_SIZE = 20
SWIFT_FILE_TRANSFER_CHUNK_SIZE = 512 * 1024
DROPDOWN_MAX_ITEMS = 30
TIME_ZONE = "Asia/Shanghai"
POLICY_FILES_PATH = '/etc/openstack-dashboard'
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'null': {
            'level': 'DEBUG',
            'class': 'logging.NullHandler',
        },
        'console': {
            'level': 'INFO',
            'class': 'logging.StreamHandler',
        },
    },
    'loggers': {
        'django.db.backends': {
            'handlers': ['null'],
            'propagate': False,
        },
        'requests': {
            'handlers': ['null'],
            'propagate': False,
        },
        'horizon': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'openstack_dashboard': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'novaclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'cinderclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'keystoneclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'glanceclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'neutronclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'heatclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'ceilometerclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'swiftclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'openstack_auth': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'nose.plugins.manager': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'django': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'iso8601': {
            'handlers': ['null'],
            'propagate': False,
        },
        'scss': {
            'handlers': ['null'],
            'propagate': False,
        },
    },
}
SECURITY_GROUP_RULES = {
    'all_tcp': {
        'name': _('All TCP'),
        'ip_protocol': 'tcp',
        'from_port': '1',
        'to_port': '65535',
    },
    'all_udp': {
        'name': _('All UDP'),
        'ip_protocol': 'udp',
        'from_port': '1',
        'to_port': '65535',
    },
    'all_icmp': {
        'name': _('All ICMP'),
        'ip_protocol': 'icmp',
        'from_port': '-1',
        'to_port': '-1',
    },
    'ssh': {
        'name': 'SSH',
        'ip_protocol': 'tcp',
        'from_port': '22',
        'to_port': '22',
    },
    'smtp': {
        'name': 'SMTP',
        'ip_protocol': 'tcp',
        'from_port': '25',
        'to_port': '25',
    },
    'dns': {
        'name': 'DNS',
        'ip_protocol': 'tcp',
        'from_port': '53',
        'to_port': '53',
    },
    'http': {
        'name': 'HTTP',
        'ip_protocol': 'tcp',
        'from_port': '80',
        'to_port': '80',
    },
    'pop3': {
        'name': 'POP3',
        'ip_protocol': 'tcp',
        'from_port': '110',
        'to_port': '110',
    },
    'imap': {
        'name': 'IMAP',
        'ip_protocol': 'tcp',
        'from_port': '143',
        'to_port': '143',
    },
    'ldap': {
        'name': 'LDAP',
        'ip_protocol': 'tcp',
        'from_port': '389',
        'to_port': '389',
    },
    'https': {
        'name': 'HTTPS',
        'ip_protocol': 'tcp',
        'from_port': '443',
        'to_port': '443',
    },
    'smtps': {
        'name': 'SMTPS',
        'ip_protocol': 'tcp',
        'from_port': '465',
        'to_port': '465',
    },
    'imaps': {
        'name': 'IMAPS',
        'ip_protocol': 'tcp',
        'from_port': '993',
        'to_port': '993',
    },
    'pop3s': {
        'name': 'POP3S',
        'ip_protocol': 'tcp',
        'from_port': '995',
        'to_port': '995',
    },
    'ms_sql': {
        'name': 'MS SQL',
        'ip_protocol': 'tcp',
        'from_port': '1433',
        'to_port': '1433',
    },
    'mysql': {
        'name': 'MYSQL',
        'ip_protocol': 'tcp',
        'from_port': '3306',
        'to_port': '3306',
    },
    'rdp': {
        'name': 'RDP',
        'ip_protocol': 'tcp',
        'from_port': '3389',
        'to_port': '3389',
    },
}
REST_API_REQUIRED_SETTINGS = ['OPENSTACK_HYPERVISOR_FEATURES',
                              'LAUNCH_INSTANCE_DEFAULTS']
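
local_settings is plain Python, so a stray quote or comma takes the whole dashboard down after the restart below. A cheap pre-flight check is to byte-compile the file; sketched here against a small stand-in fragment in /tmp rather than the real file:

```shell
# Write a fragment in the same format as local_settings, then verify it parses.
cat > /tmp/local_settings_check.py <<'EOF'
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    },
}
EOF
python3 -m py_compile /tmp/local_settings_check.py && echo "syntax OK"
```

The same `python3 -m py_compile` check can be run against /etc/openstack-dashboard/local_settings before restarting httpd.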

3. Finalize the installation

systemctl restart httpd.service memcached.service

4. Verify operation
Open http://10.0.0.11/dashboard in a browser.

1.9 ALL IN ONE

Configure the controller node so that it also runs the Compute service:

yum install openstack-nova-compute.noarch -y

openstack-config --set /etc/nova/nova.conf  vnc enabled  True
openstack-config --set /etc/nova/nova.conf  vnc vncserver_listen  0.0.0.0
openstack-config --set /etc/nova/nova.conf  vnc vncserver_proxyclient_address  '$my_ip'
openstack-config --set /etc/nova/nova.conf  vnc novncproxy_base_url  http://controller:6080/vnc_auto.html
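
Note the single quotes around '$my_ip' above: the literal string $my_ip must end up in nova.conf so that oslo.config substitutes the node's my_ip option at runtime; double quotes would let the shell expand it to an empty string first. A minimal demonstration of the difference:

```shell
# Single quotes preserve the literal text; double quotes trigger shell expansion.
literal='$my_ip'
expanded="$my_ip"          # $my_ip is unset in this shell, so this is empty
echo "literal=$literal expanded=$expanded"
```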

systemctl start libvirtd.service openstack-nova-compute.service 
systemctl enable libvirtd.service openstack-nova-compute.service 
posted @ 2020-10-12 17:08 忘川的彼岸