OpenStack Base Environment Configuration

1. Network interface configuration

[root@controller ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens33 
TYPE=Ethernet
BOOTPROTO=none
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=10.0.0.11
GATEWAY=10.0.0.2
DNS1=114.114.114.114
[root@controller ~]# 

Disable SELinux

[root@controller ~]# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
[root@controller ~]# setenforce 0
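
A quick sanity check that SELinux is really off (Permissive immediately after setenforce 0, Disabled after the next reboot):

getenforce    ## should print Permissive now, Disabled after a reboot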

Configure the yum repositories

echo '[local]
name=local
baseurl=file:///mnt
gpgcheck=0

[openstack]
name=openstack
baseurl=file:///opt/repo
gpgcheck=0' >/etc/yum.repos.d/local.repo
echo 'mount /dev/cdrom /mnt' >>/etc/rc.local ## mount the ISO automatically at boot
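
Before installing anything, it is worth mounting the ISO for the current session and confirming that both repositories resolve. A minimal check, assuming the ISO is in the virtual drive and /opt/repo is already populated:

mount /dev/cdrom /mnt    ## rc.local only takes effect on the next boot
yum clean all            ## drop stale metadata
yum repolist             ## 'local' and 'openstack' should both show a package count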

 

一. Basic Configuration

1. Time synchronization

Install chrony on both the controller and compute nodes:

[root@controller ~]# yum install -y chrony
[root@compute1 ~]# yum install -y chrony

Server-side configuration (controller)

[root@controller ~]# cat /etc/chrony.conf 
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server ntp1.aliyun.com iburst  ### sync time from the Aliyun NTP server

# Record the rate at which the system clock gains/losses time.
driftfile /var/lib/chrony/drift

# Allow the system clock to be stepped in the first three updates
# if its offset is larger than 1 second.
makestep 1.0 3

# Enable kernel synchronization of the real-time clock (RTC).
rtcsync

# Enable hardware timestamping on all interfaces that support it.
#hwtimestamp *

# Increase the minimum number of selectable sources required to adjust
# the system clock.
#minsources 2

# Allow NTP client access from local network.
#allow 192.168.0.0/16
allow 10.0.0.0/24   ### allow clients on the local subnet to sync

# Serve time even if not synchronized to a time source.
#local stratum 10

# Specify file containing keys for NTP authentication.
#keyfile /etc/chrony.keys

# Specify directory for log files.
logdir /var/log/chrony

# Select which information is logged.
#log measurements statistics tracking
[root@controller ~]# 

Client-side configuration (compute)

[root@compute1 ~]# cat /etc/chrony.conf 
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 10.0.0.11 iburst  ### point the client at the controller

# Record the rate at which the system clock gains/losses time.
driftfile /var/lib/chrony/drift

# Allow the system clock to be stepped in the first three updates
# if its offset is larger than 1 second.
makestep 1.0 3

# Enable kernel synchronization of the real-time clock (RTC).
rtcsync

# Enable hardware timestamping on all interfaces that support it.
#hwtimestamp *

# Increase the minimum number of selectable sources required to adjust
# the system clock.
#minsources 2

# Allow NTP client access from local network.
#allow 192.168.0.0/16

# Serve time even if not synchronized to a time source.
#local stratum 10

# Specify file containing keys for NTP authentication.
#keyfile /etc/chrony.keys

# Specify directory for log files.
logdir /var/log/chrony

# Select which information is logged.
#log measurements statistics tracking
[root@compute1 ~]# 

Restart the service on both nodes after configuration:

[root@controller ~]# systemctl restart chronyd
[root@compute1 ~]# systemctl restart chronyd
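
To check that compute1 is actually syncing from the controller, chronyc can list the sources; 10.0.0.11 should eventually carry the ^* (selected source) marker:

[root@compute1 ~]# chronyc sources -v    ## ^* in front of 10.0.0.11 means sync is working
[root@compute1 ~]# chronyc tracking      ## shows the current offset from the reference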

Note: if the firewall is enabled on the time server, clients will fail to sync; either disable the firewall or allow the NTP port through.
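
In a lab setup the simplest option is to disable firewalld on the server; alternatively, open only the NTP service. Either of the following should work (pick one):

systemctl stop firewalld && systemctl disable firewalld                 ## option 1: turn the firewall off (lab only)
firewall-cmd --permanent --add-service=ntp && firewall-cmd --reload     ## option 2: allow NTP (UDP 123) through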

2. Install the OpenStack client and openstack-selinux

First configure a network yum source:

https://developer.aliyun.com/mirror

Install on all nodes:

[root@controller ~]# yum install -y python-openstackclient openstack-selinux
[root@compute1 ~]# yum install -y python-openstackclient openstack-selinux

3. Install the database on the controller node

[root@controller ~]# yum install -y mariadb mariadb-server python2-PyMySQL

Configure:

echo '[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8' >/etc/my.cnf.d/openstack.cnf

Start and enable:

[root@controller ~]# systemctl start mariadb
[root@controller ~]# systemctl enable mariadb

Secure the database installation:

[root@controller ~]# mysql_secure_installation

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user.  If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none): 
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.

Set root password? [Y/n] n
 ... skipping.

By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] y
 ... Success!

Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] y
 ... Success!

By default, MariaDB comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] y
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] y
 ... Success!

Cleaning up...

All done!  If you've completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!
[root@controller ~]# 

4. Install the message queue (RabbitMQ)

[root@controller ~]#  yum install rabbitmq-server -y

Enable and start:

[root@controller ~]# systemctl enable rabbitmq-server.service
[root@controller ~]# systemctl start rabbitmq-server.service

Add the openstack user:

[root@controller ~]# rabbitmqctl add_user openstack RABBIT_PASS

Grant the openstack user configure, write, and read permissions:

[root@controller ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
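
The user and its permissions can be verified right away with standard rabbitmqctl subcommands:

[root@controller ~]# rabbitmqctl list_users          ## 'openstack' should be listed
[root@controller ~]# rabbitmqctl list_permissions    ## openstack should show ".*" ".*" ".*" on vhost /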

RabbitMQ now listens on two ports (25672 and 5672):

[root@controller ~]# netstat -lntup
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:25672           0.0.0.0:*               LISTEN      24498/beam.smp    ### used for RabbitMQ clustering
tcp        0      0 0.0.0.0:4369            0.0.0.0:*               LISTEN      1/systemd           
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      9537/sshd           
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      11063/master        
tcp6       0      0 :::5672                 :::*                    LISTEN      24498/beam.smp     ### used by AMQP clients
tcp6       0      0 :::3306                 :::*                    LISTEN      24313/mysqld        
tcp6       0      0 :::22                   :::*                    LISTEN      9537/sshd           
tcp6       0      0 ::1:25                  :::*                    LISTEN      11063/master        
udp        0      0 0.0.0.0:123             0.0.0.0:*                           23856/chronyd       
udp        0      0 127.0.0.1:323           0.0.0.0:*                           23856/chronyd       
udp6       0      0 ::1:323                 :::*                                23856/chronyd  

Enable the RabbitMQ management plugin, which is handy for monitoring later:

[root@controller ~]# rabbitmq-plugins enable rabbitmq_management

This adds one more listening port (15672):

[root@controller ~]# netstat -lntup
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:25672           0.0.0.0:*               LISTEN      24498/beam.smp      
tcp        0      0 0.0.0.0:4369            0.0.0.0:*               LISTEN      1/systemd           
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      9537/sshd           
tcp        0      0 0.0.0.0:15672           0.0.0.0:*               LISTEN      24498/beam.smp      
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      11063/master        
tcp6       0      0 :::5672                 :::*                    LISTEN      24498/beam.smp      
tcp6       0      0 :::3306                 :::*                    LISTEN      24313/mysqld        
tcp6       0      0 :::22                   :::*                    LISTEN      9537/sshd           
tcp6       0      0 ::1:25                  :::*                    LISTEN      11063/master        
udp        0      0 0.0.0.0:123             0.0.0.0:*                           23856/chronyd       
udp        0      0 127.0.0.1:323           0.0.0.0:*                           23856/chronyd       
udp6       0      0 ::1:323                 :::*                                23856/chronyd  

Log in to the RabbitMQ web UI on port 15672: http://10.0.0.11:15672
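
Note that recent RabbitMQ releases restrict the default guest account to localhost, so log in to the UI as openstack / RABBIT_PASS. The management REST API can also be probed from the shell, e.g.:

curl -u openstack:RABBIT_PASS http://10.0.0.11:15672/api/whoami    ## should return the user as JSON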

5. Install memcached on the controller node to cache tokens

[root@controller ~]# yum install -y memcached python-memcached 

Modify the configuration:

[root@controller ~]# sed -i 's#127.0.0.1#10.0.0.11#g' /etc/sysconfig/memcached
[root@controller ~]# cat /etc/sysconfig/memcached 
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 10.0.0.11,::1"

Enable and start:

[root@controller ~]# systemctl enable memcached.service
[root@controller ~]# systemctl start  memcached.service

Check the listening ports:

[root@controller ~]# systemctl restart  memcached.service
[root@controller ~]# netstat -lntup
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:25672           0.0.0.0:*               LISTEN      24498/beam.smp      
tcp        0      0 10.0.0.11:11211         0.0.0.0:*               LISTEN      25732/memcached     
tcp        0      0 0.0.0.0:4369            0.0.0.0:*               LISTEN      1/systemd           
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      9537/sshd           
tcp        0      0 0.0.0.0:15672           0.0.0.0:*               LISTEN      24498/beam.smp      
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      11063/master        
tcp6       0      0 :::5672                 :::*                    LISTEN      24498/beam.smp      
tcp6       0      0 :::3306                 :::*                    LISTEN      24313/mysqld        
tcp6       0      0 ::1:11211               :::*                    LISTEN      25732/memcached     
tcp6       0      0 :::22                   :::*                    LISTEN      9537/sshd           
tcp6       0      0 ::1:25                  :::*                    LISTEN      11063/master        
udp        0      0 10.0.0.11:11211         0.0.0.0:*                           25732/memcached     
udp        0      0 0.0.0.0:123             0.0.0.0:*                           23856/chronyd       
udp        0      0 127.0.0.1:323           0.0.0.0:*                           23856/chronyd       
udp6       0      0 ::1:11211               :::*                                25732/memcached     
udp6       0      0 ::1:323                 :::*                                23856/chronyd 
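
memcached can be smoke-tested with memcached-tool, which ships in the memcached package (the exact stats output varies by version):

[root@controller ~]# memcached-tool 10.0.0.11:11211 stats | head    ## prints server statistics if memcached answers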

二. Identity Service (Keystone)

Run the following on the controller node.

1. Log in to the database, create the keystone database, and grant privileges

MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';
[root@controller ~]# mysql
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 2
Server version: 10.1.20-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> CREATE DATABASE keystone;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> exit
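
A quick way to confirm the grants work is to log in over TCP as the keystone user (the hostname controller is assumed to resolve, as in the connection strings used later; empty output just means no tables yet):

mysql -h controller -u keystone -pKEYSTONE_DBPASS keystone -e 'show tables;'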

2. Install the packages

[root@controller ~]# yum install -y openstack-keystone httpd mod_wsgi 

3. Edit the configuration file

Back up the configuration file:

[root@controller ~]# cp /etc/keystone/keystone.conf /etc/keystone/keystone.conf.bak
[root@controller ~]# grep -Ev '^$|#' /etc/keystone/keystone.conf.bak >/etc/keystone/keystone.conf

a. Edit the configuration file manually

[root@controller ~]# vi /etc/keystone/keystone.conf

[DEFAULT]
admin_token = ADMIN_TOKEN
[assignment]
[auth]
[cache]
[catalog]
[cors]
[cors.subdomain]
[credential]
[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
[domain_config]
[endpoint_filter]
[endpoint_policy]
[eventlet_server]
[eventlet_server_ssl]
[federation]
[fernet_tokens]
[identity]
[identity_mapping]
[kvs]
[ldap]
[matchmaker_redis]
[memcache]
[oauth1]
[os_inherit]
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[paste_deploy]
[policy]
[resource]
[revoke]
[role]
[saml]
[shadow_users]
[signing]
[ssl]
[token]
provider = fernet

b. Install the OpenStack configuration-editing tool

[root@controller ~]# yum install openstack-utils -y

Modify the configuration with the tool:

[root@controller ~]# openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token ADMIN_TOKEN
[root@controller ~]# openstack-config --set /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
[root@controller ~]# openstack-config --set /etc/keystone/keystone.conf token provider  fernet
[root@controller ~]# cat /etc/keystone/keystone.conf
[DEFAULT]
admin_token = ADMIN_TOKEN
[assignment]
[auth]
[cache]
[catalog]
[cors]
[cors.subdomain]
[credential]
[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
[domain_config]
[endpoint_filter]
[endpoint_policy]
[eventlet_server]
[eventlet_server_ssl]
[federation]
[fernet_tokens]
[identity]
[identity_mapping]
[kvs]
[ldap]
[matchmaker_redis]
[memcache]
[oauth1]
[os_inherit]
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[paste_deploy]
[policy]
[resource]
[revoke]
[role]
[saml]
[shadow_users]
[signing]
[ssl]
[token]
provider = fernet
[tokenless_auth]
[trust]

4. Sync the database

Before syncing, the keystone database has no tables:

[root@controller ~]# mysql keystone -e 'show tables;'

Sync:

[root@controller ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone

After syncing, the tables are present:

[root@controller ~]# mysql keystone -e 'show tables;'
+------------------------+
| Tables_in_keystone     |
+------------------------+
| access_token           |
| assignment             |
| config_register        |
| consumer               |
| credential             |
| domain                 |
| endpoint               |
| endpoint_group         |
| federated_user         |
| federation_protocol    |
| group                  |
| id_mapping             |
| identity_provider      |
| idp_remote_ids         |
| implied_role           |
| local_user             |
| mapping                |
| migrate_version        |
| password               |
| policy                 |
| policy_association     |
| project                |
| project_endpoint       |
| project_endpoint_group |
| region                 |
| request_token          |
| revocation_event       |
| role                   |
| sensitive_config       |
| service                |
| service_provider       |
| token                  |
| trust                  |
| trust_role             |
| user                   |
| user_group_membership  |
| whitelisted_config     |
+------------------------+
[root@controller ~]# 

5. Initialize the fernet keys

[root@controller ~]# ll /etc/keystone/
total 104
-rw-r-----. 1 root     keystone  2303 Feb  1  2017 default_catalog.templates
-rw-r-----. 1 root     keystone   661 Dec 20 10:42 keystone.conf
-rw-r-----. 1 root     root     73101 Dec 20 10:19 keystone.conf.bak
-rw-r-----. 1 root     keystone  2400 Feb  1  2017 keystone-paste.ini
-rw-r-----. 1 root     keystone  1046 Feb  1  2017 logging.conf
-rw-r-----. 1 keystone keystone  9699 Feb  1  2017 policy.json
-rw-r-----. 1 keystone keystone   665 Feb  1  2017 sso_callback_template.html
[root@controller ~]# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
[root@controller ~]# ll /etc/keystone/
total 104
-rw-r-----. 1 root     keystone  2303 Feb  1  2017 default_catalog.templates
drwx------. 2 keystone keystone    24 Dec 20 10:58 fernet-keys
-rw-r-----. 1 root     keystone   661 Dec 20 10:42 keystone.conf
-rw-r-----. 1 root     root     73101 Dec 20 10:19 keystone.conf.bak
-rw-r-----. 1 root     keystone  2400 Feb  1  2017 keystone-paste.ini
-rw-r-----. 1 root     keystone  1046 Feb  1  2017 logging.conf
-rw-r-----. 1 keystone keystone  9699 Feb  1  2017 policy.json
-rw-r-----. 1 keystone keystone   665 Feb  1  2017 sso_callback_template.html

6. Configure httpd

Set the ServerName to avoid slow startup:

[root@controller ~]# echo "ServerName controller" >>/etc/httpd/conf/httpd.conf 

Create the WSGI virtual-host configuration:

echo 'Listen 5000
Listen 35357

<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLogFormat "%{cu}t %M"
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined

    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>

<VirtualHost *:35357>
    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-admin
    WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLogFormat "%{cu}t %M"
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined

    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>' >/etc/httpd/conf.d/wsgi-keystone.conf
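
Before starting httpd it is worth validating the syntax of the new virtual hosts with the standard Apache check:

[root@controller ~]# httpd -t    ## should print 'Syntax OK'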

Start httpd:

[root@controller ~]# systemctl enable httpd.service
[root@controller ~]# systemctl start httpd.service
[root@controller ~]# netstat -lntup
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:25672           0.0.0.0:*               LISTEN      9517/beam.smp       
tcp        0      0 10.0.0.11:3306          0.0.0.0:*               LISTEN      20963/mysqld        
tcp        0      0 10.0.0.11:11211         0.0.0.0:*               LISTEN      9519/memcached      
tcp        0      0 0.0.0.0:4369            0.0.0.0:*               LISTEN      1/systemd           
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      9536/sshd           
tcp        0      0 0.0.0.0:15672           0.0.0.0:*               LISTEN      9517/beam.smp       
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      9941/master         
tcp6       0      0 :::5000                 :::*                    LISTEN      22202/httpd         
tcp6       0      0 :::5672                 :::*                    LISTEN      9517/beam.smp       
tcp6       0      0 ::1:11211               :::*                    LISTEN      9519/memcached      
tcp6       0      0 :::80                   :::*                    LISTEN      22202/httpd         
tcp6       0      0 :::22                   :::*                    LISTEN      9536/sshd           
tcp6       0      0 ::1:25                  :::*                    LISTEN      9941/master         
tcp6       0      0 :::35357                :::*                    LISTEN      22202/httpd         
udp        0      0 10.0.0.11:11211         0.0.0.0:*                           9519/memcached      
udp        0      0 0.0.0.0:123             0.0.0.0:*                           8744/chronyd        
udp        0      0 127.0.0.1:323           0.0.0.0:*                           8744/chronyd        
udp6       0      0 ::1:11211               :::*                                9519/memcached      
udp6       0      0 ::1:323                 :::*                                8744/chronyd        
[root@controller ~]# 

7. Register keystone itself in keystone

[root@controller ~]# export OS_TOKEN=ADMIN_TOKEN
[root@controller ~]# export OS_URL=http://controller:35357/v3
[root@controller ~]# export OS_IDENTITY_API_VERSION=3
[root@controller ~]# env | grep OS
HOSTNAME=controller
OS_IDENTITY_API_VERSION=3
OS_TOKEN=ADMIN_TOKEN
OS_URL=http://controller:35357/v3

8. Create the service and API endpoints

[root@controller ~]# openstack service create --name keystone --description "OpenStack Identity" identity
[root@controller ~]# openstack endpoint create --region RegionOne identity public http://controller:5000/v3
[root@controller ~]# openstack endpoint create --region RegionOne identity internal http://controller:5000/v3
[root@controller ~]# openstack endpoint create --region RegionOne identity admin http://controller:35357/v3
[root@controller ~]# openstack service create --name keystone --description "OpenStack Identity" identity
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Identity               |
| enabled     | True                             |
| id          | b530209fdb8f4d29b687792106fe1314 |
| name        | keystone                         |
| type        | identity                         |
+-------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne identity public http://controller:5000/v3
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | d0f6303bb2084b9a8592925c501ff233 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | b530209fdb8f4d29b687792106fe1314 |
| service_name | keystone                         |
| service_type | identity                         |
| url          | http://controller:5000/v3        |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne identity internal http://controller:5000/v3
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | d7cce885611c40828ded55bd30762e8a |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | b530209fdb8f4d29b687792106fe1314 |
| service_name | keystone                         |
| service_type | identity                         |
| url          | http://controller:5000/v3        |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne identity admin http://controller:35357/v3
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 5b8f6a9fb42547bc9766d80d97146694 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | b530209fdb8f4d29b687792106fe1314 |
| service_name | keystone                         |
| service_type | identity                         |
| url          | http://controller:35357/v3       |
+--------------+----------------------------------+
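
While the token credentials are still exported, the catalog can be double-checked; all three identity endpoints should be listed:

[root@controller ~]# openstack service list
[root@controller ~]# openstack endpoint list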

9. Create a domain, project, user, and role

[root@controller ~]# openstack domain create --description "Default Domain" default    ### create the default domain
[root@controller ~]# openstack project create --domain default --description "Admin Project" admin    ### create the admin project in the default domain
[root@controller ~]# openstack user create --domain default --password ADMIN_PASS admin    ### create the admin user in the default domain
[root@controller ~]# openstack role create admin    ### create the admin role

Associate the project, role, and user:

[root@controller ~]# openstack role add --project admin --user admin admin

Note: this adds the admin role to the admin user on the admin project.

10. Create the service project, which will later hold the service accounts for glance, nova, and neutron

[root@controller ~]# openstack project create --domain default --description "Service Project" service
[root@controller ~]# openstack project create --domain default --description "Service Project" service
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Service Project                  |
| domain_id   | c9f9dbbfdfe34c45b9e18b1aca5aea1c |
| enabled     | True                             |
| id          | 2c07141fc8af4ebfa08956209b0a5b8e |
| is_domain   | False                            |
| name        | service                          |
| parent_id   | c9f9dbbfdfe34c45b9e18b1aca5aea1c |
+-------------+----------------------------------+
[root@controller ~]# 

11. Unset the environment variables

[root@controller ~]# env | grep OS
HOSTNAME=controller
OS_IDENTITY_API_VERSION=3
OS_TOKEN=ADMIN_TOKEN
OS_URL=http://controller:35357/v3

a. Unset the token variables in the current shell:

[root@controller ~]# unset OS_TOKEN OS_URL

b. Log out and log back in; the current session's environment variables are cleared automatically.

12. Request a token as the newly created user

Configure the environment variables again:

[root@controller ~]# export OS_PROJECT_DOMAIN_NAME=default
[root@controller ~]# export OS_DOMAIN_NAME=default ## not strictly required; it works around the problem noted below. If a command fails, check the environment variables for stray characters
[root@controller ~]# export OS_USER_DOMAIN_NAME=default
[root@controller ~]# export OS_PROJECT_NAME=admin
[root@controller ~]# export OS_USERNAME=admin
[root@controller ~]# export OS_PASSWORD=ADMIN_PASS
[root@controller ~]# export OS_AUTH_URL=http://controller:35357/v3
[root@controller ~]# export OS_IDENTITY_API_VERSION=3
[root@controller ~]# export OS_IMAGE_API_VERSION=2
[root@controller ~]# env | grep OS
HOSTNAME=controller
OS_IMAGE_API_VERSION=2
OS_PROJECT_NAME=admin
OS_IDENTITY_API_VERSION=3
OS_USER_DOMAIN_NAME=default
OS_PASSWORD=ADMIN_PASS
OS_AUTH_URL=http://controller:35357/v3
OS_USERNAME=admin
OS_PROJECT_DOMAIN_NAME=default

Verify by requesting a token:

[root@controller keystone]# openstack token issue
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                                                                                                                   |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2019-12-20T09:14:39.000000Z                                                                                                                                                             |
| id         | gAAAAABd_ILvWUSJcQbZN8QlZh8SoD_mEdIZ9-8w3JiDQJRlaS9H_bFp5UDefimg1EozBFsovz4Nh4bEE_-xz44YhQVy5vXQYFDPSvMHBqzzaxrjQVDTsmUB75H0oahUJLV_Nr6ptfFnKf5Q0GquaOgPZrXNKXQgHv7IXYJIF35vKRsKKi9FNuM |
| project_id | 45f70e0011bb4c09985709c1a5dccd0d                                                                                                                                                        |
| user_id    | d1ec935819424b6db22198b528834b4e                                                                                                                                                        |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
[root@controller keystone]# 
Problem: the command fails with an authentication error:

[root@controller ~]# openstack token issue
The request you have made requires authentication. (HTTP 401) (Request-ID: req-16681189-ed64-4bc7-b6b3-b29fd02284ab)

Solution: check the keystone log under /var/log/keystone/:

  2019-12-20 16:02:13.115 9663 ERROR keystone.auth.plugins.core [req-ffc8ea83-61b9-4cc5-9a1c-35ecca60252d - - - - -] Could not find domain: default

[root@controller keystone]# export OS_DOMAIN_NAME=default

Create a script so the environment variables do not have to be re-exported at every login:

[root@controller ~]# vi admin-openrc
export OS_PROJECT_DOMAIN_NAME=default
export OS_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
[root@controller ~]# source admin-openrc

To run the script automatically at every login, add it to root's .bashrc:

[root@controller ~]# cat .bashrc 
# .bashrc

# User specific aliases and functions

alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'

# Source global definitions
if [ -f /etc/bashrc ]; then
    . /etc/bashrc
fi
source admin-openrc
[root@controller ~]# 
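
One caveat: source admin-openrc uses a relative path, so it only works when the shell starts in /root. An absolute path in .bashrc is more robust:

source /root/admin-openrc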

三. Image Service (Glance)

1. Create the database and grant privileges

Log in to the database, create the glance database, and grant privileges to the glance user:

MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS'; 
[root@controller ~]# mysql
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 3
Server version: 10.1.20-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> CREATE DATABASE glance;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS'; 
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]>
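
As with keystone, the grant can be verified by logging in as the glance user before moving on (empty output simply means no tables yet):

mysql -h controller -u glance -pGLANCE_DBPASS glance -e 'show tables;'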

2. Create the glance user in keystone

[root@controller ~]# openstack user create --domain default --password GLANCE_PASS glance
[root@controller ~]# openstack role add --project service --user glance admin
[root@controller ~]# openstack user create --domain default --password GLANCE_PASS glance
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | c9f9dbbfdfe34c45b9e18b1aca5aea1c |
| enabled   | True                             |
| id        | 69c339ae836d448fb9dae8dca4b3b623 |
| name      | glance                           |
+-----------+----------------------------------+
[root@controller ~]# openstack role add --project service --user glance admin

Check the role assignments:

[root@controller ~]# openstack role assignment list
+----------------------+----------------------+-------+----------------------+--------+-----------+
| Role                 | User                 | Group | Project              | Domain | Inherited |
+----------------------+----------------------+-------+----------------------+--------+-----------+
| d63b6106e1a042228b12 | 69c339ae836d448fb9da |       | 2c07141fc8af4ebfa089 |        | False     |
| 2d9d7bd4532c         | e8dca4b3b623         |       | 56209b0a5b8e         |        |           |
| d63b6106e1a042228b12 | d1ec935819424b6db221 |       | 45f70e0011bb4c099857 |        | False     |
| 2d9d7bd4532c         | 98b528834b4e         |       | 09c1a5dccd0d         |        |           |
+----------------------+----------------------+-------+----------------------+--------+-----------+
[root@controller ~]# openstack role list
+----------------------------------+-------+
| ID                               | Name  |
+----------------------------------+-------+
| d63b6106e1a042228b122d9d7bd4532c | admin |
+----------------------------------+-------+
[root@controller ~]# openstack user list
+----------------------------------+--------+
| ID                               | Name   |
+----------------------------------+--------+
| 69c339ae836d448fb9dae8dca4b3b623 | glance |
| d1ec935819424b6db22198b528834b4e | admin  |
+----------------------------------+--------+
[root@controller ~]# openstack project list
+----------------------------------+---------+
| ID                               | Name    |
+----------------------------------+---------+
| 2c07141fc8af4ebfa08956209b0a5b8e | service |
| 45f70e0011bb4c09985709c1a5dccd0d | admin   |
+----------------------------------+---------+
[root@controller ~]# 

3. Create the glance service in keystone and register the API endpoints

[root@controller ~]# openstack service create --name glance --description "OpenStack Image" image
[root@controller ~]# openstack endpoint create --region RegionOne image public http://controller:9292
[root@controller ~]# openstack endpoint create --region RegionOne image internal http://controller:9292
[root@controller ~]# openstack endpoint create --region RegionOne image admin http://controller:9292
[root@controller ~]# openstack service create --name glance --description "OpenStack Image" image
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Image                  |
| enabled     | True                             |
| id          | 0fa74611fd2c46f78e52b20231180273 |
| name        | glance                           |
| type        | image                            |
+-------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne image public http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | dd6181342d84437884988fd0f5739d21 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 0fa74611fd2c46f78e52b20231180273 |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne image internal http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 55b689aac921492f9052c1a58c029a59 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 0fa74611fd2c46f78e52b20231180273 |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne image admin http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | ca1c5de468434cacb6ee94828351e517 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 0fa74611fd2c46f78e52b20231180273 |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+
[root@controller ~]# 

4. Install the glance packages

[root@controller ~]# yum install -y openstack-glance

5. Edit the configuration files

Edit /etc/glance/glance-api.conf:

[root@controller ~]# cp /etc/glance/glance-api.conf{,.bak} 
[root@controller ~]# grep '^[a-z\[]' /etc/glance/glance-api.conf.bak >/etc/glance/glance-api.conf

Configure:

openstack-config --set /etc/glance/glance-api.conf database connection mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
openstack-config --set /etc/glance/glance-api.conf glance_store stores file,http
openstack-config --set /etc/glance/glance-api.conf glance_store default_store file
openstack-config --set /etc/glance/glance-api.conf glance_store filesystem_store_datadir /var/lib/glance/images/
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_type password
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken password GLANCE_PASS
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone

[root@controller glance]# cat /etc/glance/glance-api.conf
[DEFAULT]
[cors]
[cors.subdomain]
[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
[image_format]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = GLANCE_PASS
[matchmaker_redis]
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_policy]
[paste_deploy]
flavor = keystone
[profiler]
[store_type_location_strategy]
[task]
[taskflow_executor]
[root@controller glance]# 

Edit /etc/glance/glance-registry.conf:

[root@controller ~]# cp /etc/glance/glance-registry.conf{,.bak}  
[root@controller ~]# grep '^[a-z\[]' /etc/glance/glance-registry.conf.bak > /etc/glance/glance-registry.conf

Configure:

openstack-config --set /etc/glance/glance-registry.conf database connection mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_type password
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken password GLANCE_PASS
openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone

Verify the MD5 checksums:

[root@controller glance]# md5sum /etc/glance/glance-api.conf
3e1a4234c133eda11b413788e001cba3 /etc/glance/glance-api.conf
[root@controller glance]# md5sum /etc/glance/glance-registry.conf
46acabd81a65b924256f56fe34d90b8f /etc/glance/glance-registry.conf

[root@controller glance]# cat /etc/glance/glance-registry.conf
[DEFAULT]
[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
[glance_store]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = GLANCE_PASS
[matchmaker_redis]
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_policy]
[paste_deploy]
flavor = keystone
[profiler]

Note: the same changes can also be made by editing the configuration files with vi; the resulting MD5 checksums should match the values shown above.


6. Sync the database

[root@controller ~]# su -s /bin/sh -c "glance-manage db_sync" glance
[root@controller glance]# mysql glance -e "show tables;"
+----------------------------------+
| Tables_in_glance                 |
+----------------------------------+
| artifact_blob_locations          |
| artifact_blobs                   |
| artifact_dependencies            |
| artifact_properties              |
| artifact_tags                    |
| artifacts                        |
| image_locations                  |
| image_members                    |
| image_properties                 |
| image_tags                       |
| images                           |
| metadef_namespace_resource_types |
| metadef_namespaces               |
| metadef_objects                  |
| metadef_properties               |
| metadef_resource_types           |
| metadef_tags                     |
| migrate_version                  |
| task_info                        |
| tasks                            |
+----------------------------------+

7. Start the glance services

[root@controller ~]# systemctl enable openstack-glance-api.service openstack-glance-registry.service
[root@controller ~]# systemctl start openstack-glance-api.service openstack-glance-registry.service

Check the ports:

[root@controller ~]# netstat -lntup
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:9191            0.0.0.0:*               LISTEN      20041/python2       
tcp        0      0 0.0.0.0:25672           0.0.0.0:*               LISTEN      9284/beam.smp       
tcp        0      0 10.0.0.11:3306          0.0.0.0:*               LISTEN      9572/mysqld         
tcp        0      0 10.0.0.11:11211         0.0.0.0:*               LISTEN      9258/memcached      
tcp        0      0 0.0.0.0:9292            0.0.0.0:*               LISTEN      20040/python2       
tcp        0      0 0.0.0.0:4369            0.0.0.0:*               LISTEN      1/systemd           
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      9291/sshd           
tcp        0      0 0.0.0.0:15672           0.0.0.0:*               LISTEN      9284/beam.smp       
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      9689/master         
tcp6       0      0 :::5672                 :::*                    LISTEN      9284/beam.smp       
tcp6       0      0 :::5000                 :::*                    LISTEN      9281/httpd          
tcp6       0      0 ::1:11211               :::*                    LISTEN      9258/memcached      
tcp6       0      0 :::80                   :::*                    LISTEN      9281/httpd          
tcp6       0      0 :::22                   :::*                    LISTEN      9291/sshd           
tcp6       0      0 ::1:25                  :::*                    LISTEN      9689/master         
tcp6       0      0 :::35357                :::*                    LISTEN      9281/httpd          
udp        0      0 10.0.0.11:11211         0.0.0.0:*                           9258/memcached      
udp        0      0 0.0.0.0:123             0.0.0.0:*                           8694/chronyd        
udp        0      0 127.0.0.1:323           0.0.0.0:*                           8694/chronyd        
udp6       0      0 ::1:11211               :::*                                9258/memcached      
udp6       0      0 ::1:323                 :::*                                8694/chronyd  

8. Verify the service
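
The command below assumes cirros-0.3.4-x86_64-disk.img is already in the current directory; if not, it can be fetched first (internet access required; this is the standard CirrOS download URL):

wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img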

[root@controller ~]# openstack image create "cirros" \
> --file cirros-0.3.4-x86_64-disk.img \
> --disk-format qcow2 --container-format bare \
> --public
[root@controller ~]# openstack image create "cirros" \
> --file cirros-0.3.4-x86_64-disk.img \
> --disk-format qcow2 --container-format bare \
> --public
+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| checksum         | ee1eca47dc88f4879d8a229cc70a07c6                     |
| container_format | bare                                                 |
| created_at       | 2019-12-22T06:35:47Z                                 |
| disk_format      | qcow2                                                |
| file             | /v2/images/4492225b-eb11-4705-90dd-46c8e8cfe238/file |
| id               | 4492225b-eb11-4705-90dd-46c8e8cfe238                 |
| min_disk         | 0                                                    |
| min_ram          | 0                                                    |
| name             | cirros                                               |
| owner            | 45f70e0011bb4c09985709c1a5dccd0d                     |
| protected        | False                                                |
| schema           | /v2/schemas/image                                    |
| size             | 13287936                                             |
| status           | active                                               |
| tags             |                                                      |
| updated_at       | 2019-12-22T06:35:48Z                                 |
| virtual_size     | None                                                 |
| visibility       | public                                               |
+------------------+------------------------------------------------------+
[root@controller ~]# openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 4492225b-eb11-4705-90dd-46c8e8cfe238 | cirros | active |
+--------------------------------------+--------+--------+
[root@controller ~]#

Tip: when deleting an image, the corresponding records in the database tables need to go as well; the image tables can be listed with:

[root@controller ~]# mysql glance -e "show tables"| grep images
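
For reference, deleting through the API keeps the store and the database consistent; cleaning up the test image would look like:

openstack image delete cirros    ## removes the image file and its database records
openstack image list             ## cirros should no longer appear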

四. Nova Compute Service

Install the services on the controller node.

1. Create the databases and grant privileges

MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
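
Both grants can be sanity-checked the same way as before (both databases stay empty until the sync step below):

mysql -h controller -u nova -pNOVA_DBPASS nova_api -e 'show tables;'
mysql -h controller -u nova -pNOVA_DBPASS nova -e 'show tables;'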

2. Create the nova service user in keystone and assign the admin role

[root@controller ~]# openstack user create --domain default --password NOVA_PASS nova
[root@controller ~]# openstack role add --project service --user nova admin
[root@controller ~]# openstack user create --domain default --password NOVA_PASS nova
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | c9f9dbbfdfe34c45b9e18b1aca5aea1c |
| enabled   | True                             |
| id        | fc1e90dcf4de4e69b4a0f7353281bb3e |
| name      | nova                             |
+-----------+----------------------------------+
[root@controller ~]# openstack role add --project service --user nova admin

3. Create the service in keystone and register the API endpoints

[root@controller ~]# openstack service create --name nova --description "OpenStack Compute" compute
[root@controller ~]# openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1/%\(tenant_id\)s
[root@controller ~]# openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1/%\(tenant_id\)s
[root@controller ~]# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1/%\(tenant_id\)s
[root@controller ~]# openstack service create --name nova --description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Compute                |
| enabled     | True                             |
| id          | bb74eb99fdf043c082378ddda4c4b3d6 |
| name        | nova                             |
| type        | compute                          |
+-------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1/%\(tenant_id\)s
+--------------+-------------------------------------------+
| Field        | Value                                     |
+--------------+-------------------------------------------+
| enabled      | True                                      |
| id           | 40ac81e4976f4ebd84d233b38e3f3a58          |
| interface    | public                                    |
| region       | RegionOne                                 |
| region_id    | RegionOne                                 |
| service_id   | bb74eb99fdf043c082378ddda4c4b3d6          |
| service_name | nova                                      |
| service_type | compute                                   |
| url          | http://controller:8774/v2.1/%(tenant_id)s |
+--------------+-------------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1/%\(tenant_id\)s
+--------------+-------------------------------------------+
| Field        | Value                                     |
+--------------+-------------------------------------------+
| enabled      | True                                      |
| id           | fefa53bb492e4ffb86af076dfa711961          |
| interface    | internal                                  |
| region       | RegionOne                                 |
| region_id    | RegionOne                                 |
| service_id   | bb74eb99fdf043c082378ddda4c4b3d6          |
| service_name | nova                                      |
| service_type | compute                                   |
| url          | http://controller:8774/v2.1/%(tenant_id)s |
+--------------+-------------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1/%\(tenant_id\)s
+--------------+-------------------------------------------+
| Field        | Value                                     |
+--------------+-------------------------------------------+
| enabled      | True                                      |
| id           | 014d1119aaeb4f06b617957b4ca58410          |
| interface    | admin                                     |
| region       | RegionOne                                 |
| region_id    | RegionOne                                 |
| service_id   | bb74eb99fdf043c082378ddda4c4b3d6          |
| service_name | nova                                      |
| service_type | compute                                   |
| url          | http://controller:8774/v2.1/%(tenant_id)s |
+--------------+-------------------------------------------+
[root@controller ~]# 

4. Install the packages

[root@controller ~]# yum install -y openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler 

5. Edit the configuration file

[root@controller ~]# cp /etc/nova/nova.conf{,.bak}
[root@controller ~]# ls /etc/nova/nova.conf*
/etc/nova/nova.conf  /etc/nova/nova.conf.bak
[root@controller ~]# grep '^[a-z\[]' /etc/nova/nova.conf.bak > /etc/nova/nova.conf

openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 10.0.0.11
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron True
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf api_database connection mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
openstack-config --set /etc/nova/nova.conf database connection mysql+pymysql://nova:NOVA_DBPASS@controller/nova
openstack-config --set /etc/nova/nova.conf glance api_servers http://controller:9292
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password NOVA_PASS
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_host controller
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_password RABBIT_PASS
openstack-config --set /etc/nova/nova.conf vnc vncserver_listen '$my_ip'
openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address '$my_ip'

Verify the checksum:

[root@controller ~]# md5sum /etc/nova/nova.conf
47ded61fdd1a79ab91bdb37ce59ef192 /etc/nova/nova.conf

6. Sync the databases

[root@controller ~]# su -s /bin/sh -c "nova-manage api_db sync" nova
[root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova
[root@controller ~]# mysql nova_api -e 'show tables;'
+--------------------+
| Tables_in_nova_api |
+--------------------+
| build_requests     |
| cell_mappings      |
| flavor_extra_specs |
| flavor_projects    |
| flavors            |
| host_mappings      |
| instance_mappings  |
| migrate_version    |
| request_specs      |
+--------------------+
[root@controller ~]# mysql nova -e 'show tables;'
+--------------------------------------------+
| Tables_in_nova                             |
+--------------------------------------------+
| agent_builds                               |
| aggregate_hosts                            |
| aggregate_metadata                         |
| aggregates                                 |
| allocations                                |
| block_device_mapping                       |
| bw_usage_cache                             |
| cells                                      |
| certificates                               |
| compute_nodes                              |
| console_pools                              |
| consoles                                   |
| dns_domains                                |
| fixed_ips                                  |
| floating_ips                               |
| instance_actions                           |
| instance_actions_events                    |
| instance_extra                             |
| instance_faults                            |
| instance_group_member                      |
| instance_group_policy                      |
| instance_groups                            |
| instance_id_mappings                       |
| instance_info_caches                       |
| instance_metadata                          |
| instance_system_metadata                   |
| instance_type_extra_specs                  |
| instance_type_projects                     |
| instance_types                             |
| instances                                  |
| inventories                                |
| key_pairs                                  |
| migrate_version                            |
| migrations                                 |
| networks                                   |
| pci_devices                                |
| project_user_quotas                        |
| provider_fw_rules                          |
| quota_classes                              |
| quota_usages                               |
| quotas                                     |
| reservations                               |
| resource_provider_aggregates               |
| resource_providers                         |
| s3_images                                  |
| security_group_default_rules               |
| security_group_instance_association        |
| security_group_rules                       |
| security_groups                            |
| services                                   |
| shadow_agent_builds                        |
| shadow_aggregate_hosts                     |
| shadow_aggregate_metadata                  |
| shadow_aggregates                          |
| shadow_block_device_mapping                |
| shadow_bw_usage_cache                      |
| shadow_cells                               |
| shadow_certificates                        |
| shadow_compute_nodes                       |
| shadow_console_pools                       |
| shadow_consoles                            |
| shadow_dns_domains                         |
| shadow_fixed_ips                           |
| shadow_floating_ips                        |
| shadow_instance_actions                    |
| shadow_instance_actions_events             |
| shadow_instance_extra                      |
| shadow_instance_faults                     |
| shadow_instance_group_member               |
| shadow_instance_group_policy               |
| shadow_instance_groups                     |
| shadow_instance_id_mappings                |
| shadow_instance_info_caches                |
| shadow_instance_metadata                   |
| shadow_instance_system_metadata            |
| shadow_instance_type_extra_specs           |
| shadow_instance_type_projects              |
| shadow_instance_types                      |
| shadow_instances                           |
| shadow_key_pairs                           |
| shadow_migrate_version                     |
| shadow_migrations                          |
| shadow_networks                            |
| shadow_pci_devices                         |
| shadow_project_user_quotas                 |
| shadow_provider_fw_rules                   |
| shadow_quota_classes                       |
| shadow_quota_usages                        |
| shadow_quotas                              |
| shadow_reservations                        |
| shadow_s3_images                           |
| shadow_security_group_default_rules        |
| shadow_security_group_instance_association |
| shadow_security_group_rules                |
| shadow_security_groups                     |
| shadow_services                            |
| shadow_snapshot_id_mappings                |
| shadow_snapshots                           |
| shadow_task_log                            |
| shadow_virtual_interfaces                  |
| shadow_volume_id_mappings                  |
| shadow_volume_usage_cache                  |
| snapshot_id_mappings                       |
| snapshots                                  |
| tags                                       |
| task_log                                   |
| virtual_interfaces                         |
| volume_id_mappings                         |
| volume_usage_cache                         |
+--------------------------------------------+

7. Start the services

[root@controller ~]# systemctl enable openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
[root@controller ~]# systemctl start openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
Problem:
The services fail to start. From the journal:
[root@controller ~]# journalctl -xe

  Dec 22 20:40:27 controller sudo[19958]: pam_unix(sudo:account): helper binary execve failed: Permission denied
  Dec 22 20:40:27 controller sudo[19955]: nova : PAM account management error: Authentication service cannot retrieve authentic

The nova-api error log:
2019-12-22 19:14:01.435 18041 CRITICAL nova [-] ProcessExecutionError: Unexpected error while running command.
Command: sudo nova-rootwrap /etc/nova/rootwrap.conf iptables-save -c
Exit code: 1
Stdout: u''
Stderr: u'sudo: PAM account management error: Authentication service cannot retrieve authentication info\n'
2019-12-22 19:14:01.435 18041 ERROR nova Traceback (most recent call last):
2019-12-22 19:14:01.435 18041 ERROR nova   File "/usr/bin/nova-api", line 10, in <module>
2019-12-22 19:14:01.435 18041 ERROR nova     sys.exit(main())
2019-12-22 19:14:01.435 18041 ERROR nova   File "/usr/lib/python2.7/site-packages/nova/cmd/api.py", line 57, in main
2019-12-22 19:14:01.435 18041 ERROR nova     server = service.WSGIService(api, use_ssl=should_use_ssl)
2019-12-22 19:14:01.435 18041 ERROR nova   File "/usr/lib/python2.7/site-packages/nova/service.py", line 358, in __init__
2019-12-22 19:14:01.435 18041 ERROR nova     self.manager = self._get_manager()
2019-12-22 19:14:01.435 18041 ERROR nova   File "/usr/lib/python2.7/site-packages/nova/service.py", line 415, in _get_manager
2019-12-22 19:14:01.435 18041 ERROR nova     return manager_class()
2019-12-22 19:14:01.435 18041 ERROR nova   File "/usr/lib/python2.7/site-packages/nova/api/manager.py", line 30, in __init__
2019-12-22 19:14:01.435 18041 ERROR nova     self.network_driver.metadata_accept()
2019-12-22 19:14:01.435 18041 ERROR nova   File "/usr/lib/python2.7/site-packages/nova/network/linux_net.py", line 706, in metadata_accept
2019-12-22 19:14:01.435 18041 ERROR nova     iptables_manager.apply()
2019-12-22 19:14:01.435 18041 ERROR nova   File "/usr/lib/python2.7/site-packages/nova/network/linux_net.py", line 446, in apply
2019-12-22 19:14:01.435 18041 ERROR nova     self._apply()
2019-12-22 19:14:01.435 18041 ERROR nova   File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 271, in inner
2019-12-22 19:14:01.435 18041 ERROR nova     return f(*args, **kwargs)
2019-12-22 19:14:01.435 18041 ERROR nova   File "/usr/lib/python2.7/site-packages/nova/network/linux_net.py", line 466, in _apply
2019-12-22 19:14:01.435 18041 ERROR nova     attempts=5)
2019-12-22 19:14:01.435 18041 ERROR nova   File "/usr/lib/python2.7/site-packages/nova/network/linux_net.py", line 1267, in _execute
2019-12-22 19:14:01.435 18041 ERROR nova     return utils.execute(*cmd, **kwargs)
2019-12-22 19:14:01.435 18041 ERROR nova   File "/usr/lib/python2.7/site-packages/nova/utils.py", line 388, in execute
2019-12-22 19:14:01.435 18041 ERROR nova     return RootwrapProcessHelper().execute(*cmd, **kwargs)
2019-12-22 19:14:01.435 18041 ERROR nova   File "/usr/lib/python2.7/site-packages/nova/utils.py", line 271, in execute
2019-12-22 19:14:01.435 18041 ERROR nova     return processutils.execute(*cmd, **kwargs)
2019-12-22 19:14:01.435 18041 ERROR nova   File "/usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py", line 389, in execute
2019-12-22 19:14:01.435 18041 ERROR nova     cmd=sanitized_cmd)
2019-12-22 19:14:01.435 18041 ERROR nova ProcessExecutionError: Unexpected error while running command.
2019-12-22 19:14:01.435 18041 ERROR nova Command: sudo nova-rootwrap /etc/nova/rootwrap.conf iptables-save -c
2019-12-22 19:14:01.435 18041 ERROR nova Exit code: 1
2019-12-22 19:14:01.435 18041 ERROR nova Stdout: u''
2019-12-22 19:14:01.435 18041 ERROR nova Stderr: u'sudo: PAM account management error: Authentication service cannot retrieve authentication info\n'
Solution:
Disable SELinux by editing /etc/selinux/config:
[root@controller ~]# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
[root@controller ~]# setenforce 0
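
To confirm the change took effect before retrying the services, check the current SELinux mode (expected output after setenforce 0):

[root@controller ~]# getenforce
Permissive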

8. Verify the services

[root@controller nova]# nova service-list
Problem: running the verification command fails with
ERROR (AuthorizationFailure): Authentication cannot be scoped to multiple targets. Pick one of: project, domain, trust or unscope
Cause: the environment variables are misconfigured.
The original configuration:
[root@controller ~]# env | grep OS
HOSTNAME=controller
OS_IMAGE_API_VERSION=2
OS_PROJECT_NAME=admin
OS_IDENTITY_API_VERSION=3
OS_USER_DOMIAN_NAME=default  ## variable name misspelled (should be OS_USER_DOMAIN_NAME)
OS_PASSWORD=ADMIN_PASS
OS_DOMAIN_NAME=default  ## scoping to both a domain and a project is what triggers the "multiple targets" error
OS_AUTH_URL=http://controller:35357/v3
OS_USERNAME=admin
OS_PROJECT_DOMAIN_NAME=default

The corrected configuration:
[root@controller ~]# env | grep OS
HOSTNAME=controller
OS_USER_DOMAIN_NAME=default
OS_IMAGE_API_VERSION=2
OS_PROJECT_NAME=admin
OS_IDENTITY_API_VERSION=3
OS_PASSWORD=ADMIN_PASS
OS_AUTH_URL=http://controller:35357/v3
OS_USERNAME=admin
OS_PROJECT_DOMAIN_NAME=default
Fix the admin-openrc script accordingly and log back in so the corrected environment variables take effect.
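
For reference, a corrected admin-openrc consistent with the variables above might look like the sketch below (the file name and passwords follow this guide's conventions; adjust to your environment):

export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2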

Result:

[root@controller ~]# nova service-list
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host       | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-conductor   | controller | internal | enabled | up    | 2019-12-23T01:30:56.000000 | -               |
| 2  | nova-consoleauth | controller | internal | enabled | up    | 2019-12-23T01:30:56.000000 | -               |
| 3  | nova-scheduler   | controller | internal | enabled | up    | 2019-12-23T01:30:56.000000 | -               |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
Of the two remaining services, nova-api is confirmed working by the API responding normally; the other, nova-novncproxy, can be checked as follows:
[root@controller ~]# netstat -lntup|grep 6080
tcp        0      0 0.0.0.0:6080            0.0.0.0:*               LISTEN      9475/python2        
[root@controller ~]# ps -ef | grep 9475
nova       9475      1  0 09:23 ?        00:00:01 /usr/bin/python2 /usr/bin/nova-novncproxy --web /usr/share/novnc/
root      10919  10631  0 09:47 pts/0    00:00:00 grep --color=auto 9475

Installing services on the compute node

1. Install the packages

[root@compute1 ~]# yum  install -y openstack-nova-compute openstack-utils.noarch

2. Edit the configuration file

[root@compute1 ~]# cp /etc/nova/nova.conf{,.bak}
[root@compute1 ~]# grep -Ev '^$|#' /etc/nova/nova.conf.bak >/etc/nova/nova.conf

openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 10.0.0.31
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron True
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf glance api_servers http://controller:9292
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password NOVA_PASS
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_host controller
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_password RABBIT_PASS
openstack-config --set /etc/nova/nova.conf vnc enabled True
openstack-config --set /etc/nova/nova.conf vnc vncserver_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address '$my_ip'
openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://controller:6080/vnc_auto.html
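
The main difference from the controller is the [vnc] section, which the commands above should render as follows; vncserver_listen is 0.0.0.0 here so the novncproxy on the controller can reach the instance consoles:

[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html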

3. Start the services

[root@compute1 ~]# systemctl enable libvirtd.service openstack-nova-compute.service
[root@compute1 ~]# systemctl start libvirtd.service openstack-nova-compute.service

4. Verify the service

Check the nova service status on the controller node:

[root@controller ~]# nova service-list
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host       | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-conductor   | controller | internal | enabled | up    | 2019-12-23T06:28:53.000000 | -               |
| 2  | nova-consoleauth | controller | internal | enabled | up    | 2019-12-23T06:28:50.000000 | -               |
| 3  | nova-scheduler   | controller | internal | enabled | up    | 2019-12-23T06:28:51.000000 | -               |
| 6  | nova-compute     | compute1   | nova     | enabled | up    | 2019-12-23T06:28:48.000000 | -               |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+

V. neutron network service

Installing and configuring on the controller node

1. Create the database and grant privileges

MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';
[root@controller ~]# mysql
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 44
Server version: 10.1.20-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> CREATE DATABASE neutron;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
Query OK, 0 rows affected (0.01 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> exit
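Before moving on, you can double-check the grants with a quick query (a sketch; the two rows correspond to the GRANT statements above):

[root@controller ~]# mysql -e "SELECT user,host FROM mysql.user WHERE user='neutron';"
+---------+-----------+
| user    | host      |
+---------+-----------+
| neutron | %         |
| neutron | localhost |
+---------+-----------+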

2. Create the service user in keystone

[root@controller ~]# openstack user create --domain default --password NEUTRON_PASS neutron
[root@controller ~]# openstack role add --project service --user neutron admin
[root@controller ~]# openstack user create --domain default --password NEUTRON_PASS neutron
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | c9f9dbbfdfe34c45b9e18b1aca5aea1c |
| enabled   | True                             |
| id        | 25e38bd3c4cb43069ae6f02024b00f1f |
| name      | neutron                          |
+-----------+----------------------------------+
[root@controller ~]# openstack role add --project service --user neutron admin

3. Create the service and register the API endpoints in keystone

[root@controller ~]# openstack service create --name neutron --description "OpenStack Networking" network
[root@controller ~]# openstack endpoint create --region RegionOne network public http://controller:9696
[root@controller ~]# openstack endpoint create --region RegionOne network internal http://controller:9696
[root@controller ~]# openstack endpoint create --region RegionOne network admin http://controller:9696
[root@controller ~]# openstack service create --name neutron --description "OpenStack Networking" network
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Networking             |
| enabled     | True                             |
| id          | 23811beac4d34f438aeededbe1542041 |
| name        | neutron                          |
| type        | network                          |
+-------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne network public http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | fd99523437624934bec3b31c7f222679 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 23811beac4d34f438aeededbe1542041 |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne network internal http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 7a71f08bc75d47378e9cd09bb9114dca |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 23811beac4d34f438aeededbe1542041 |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne network admin http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 0a650a177ab14af79a389821c3cb4d67 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 23811beac4d34f438aeededbe1542041 |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+

4. Install the service packages

[root@controller ~]# yum install -y openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables 

5. Edit the configuration files

Add and bring up a second NIC (ens37):

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.11  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::389d:e340:ea17:3a30  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:67:46:37  txqueuelen 1000  (Ethernet)
        RX packets 22405  bytes 7134527 (6.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 16055  bytes 4999491 (4.7 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens37: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.10.100.11  netmask 255.255.255.0  broadcast 10.10.100.255
        ether 00:0c:29:67:46:41  txqueuelen 1000  (Ethernet)
        RX packets 53  bytes 9300 (9.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 52  bytes 8782 (8.5 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 165658  bytes 48334681 (46.0 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 165658  bytes 48334681 (46.0 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@controller ~]# nmcli con show
NAME                UUID                                  TYPE      DEVICE 
ens33               5b7edfad-37ad-4254-a1d9-660686d0f9d7  ethernet  ens33  
Wired connection 1  2fe07932-584b-33ee-9e22-abef21281914  ethernet  ens37  
[root@controller ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens37 
NAME=ens37
DEVICE=ens37
TYPE=Ethernet
ONBOOT="yes"
BOOTPROTO="none"
UUID=2fe07932-584b-33ee-9e22-abef21281914
HWADDR=00:0C:29:67:46:41
[root@controller ~]#

Edit the configuration files:

a: Edit /etc/neutron/neutron.conf
cp /etc/neutron/neutron.conf{,.bak}
grep '^[a-z\[]' /etc/neutron/neutron.conf.bak >/etc/neutron/neutron.conf

openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins
openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes True
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes True
openstack-config --set /etc/neutron/neutron.conf database connection mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password NEUTRON_PASS
openstack-config --set /etc/neutron/neutron.conf nova auth_url http://controller:35357
openstack-config --set /etc/neutron/neutron.conf nova auth_type password
openstack-config --set /etc/neutron/neutron.conf nova project_domain_name default
openstack-config --set /etc/neutron/neutron.conf nova user_domain_name default
openstack-config --set /etc/neutron/neutron.conf nova region_name RegionOne
openstack-config --set /etc/neutron/neutron.conf nova project_name service
openstack-config --set /etc/neutron/neutron.conf nova username nova
openstack-config --set /etc/neutron/neutron.conf nova password NOVA_PASS
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_host controller
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_password RABBIT_PASS

b: Edit /etc/neutron/plugins/ml2/ml2_conf.ini
cp /etc/neutron/plugins/ml2/ml2_conf.ini{,.bak}
grep '^[a-z\[]' /etc/neutron/plugins/ml2/ml2_conf.ini.bak > /etc/neutron/plugins/ml2/ml2_conf.ini

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,vlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers linuxbridge
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers port_security
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks provider
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset True
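
The resulting ml2_conf.ini should read roughly as follows. Note that tenant_network_types is deliberately set empty: this deployment uses provider (flat) networks only, so no tenant network type is configured:

[ml2]
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[securitygroup]
enable_ipset = True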

c: Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini
cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
grep '^[a-z\[]' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak > /etc/neutron/plugins/ml2/linuxbridge_agent.ini

openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings provider:ens33
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan False
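
A mistyped value in this file is exactly what causes the agent startup failure described in step 7 below, so it is worth verifying the [vxlan] section after writing it:

[root@controller ~]# grep -A1 '^\[vxlan\]' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[vxlan]
enable_vxlan = False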

d: Edit /etc/neutron/dhcp_agent.ini
cp /etc/neutron/dhcp_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/dhcp_agent.ini.bak >/etc/neutron/dhcp_agent.ini

openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.BridgeInterfaceDriver
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata true

e: Edit /etc/neutron/metadata_agent.ini
cp /etc/neutron/metadata_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/metadata_agent.ini.bak >/etc/neutron/metadata_agent.ini

openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_ip controller
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret METADATA_SECRET

f: Edit /etc/nova/nova.conf

openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password NEUTRON_PASS
openstack-config --set /etc/nova/nova.conf neutron service_metadata_proxy True
openstack-config --set /etc/nova/nova.conf neutron metadata_proxy_shared_secret METADATA_SECRET
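
The metadata agent only works if the shared secret in metadata_agent.ini matches the one written into nova.conf above. A quick cross-check (both lines should show the same METADATA_SECRET value):

[root@controller ~]# grep metadata_proxy_shared_secret /etc/neutron/metadata_agent.ini /etc/nova/nova.conf
/etc/neutron/metadata_agent.ini:metadata_proxy_shared_secret = METADATA_SECRET
/etc/nova/nova.conf:metadata_proxy_shared_secret = METADATA_SECRET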

6. Initialize the database

Create a symlink:

[root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
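
Verify the link points at the ML2 configuration (neutron-server reads its plugin settings through this path; your timestamp will differ):

[root@controller ~]# ls -l /etc/neutron/plugin.ini
lrwxrwxrwx 1 root root 37 Dec 25 10:00 /etc/neutron/plugin.ini -> /etc/neutron/plugins/ml2/ml2_conf.ini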

Sync the database:

[root@controller ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
[root@controller ~]# su -s /bash/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
su: failed to execute /bash/sh: No such file or directory
[root@controller ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
No handlers could be found for logger "oslo_config.cfg"
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
  Running upgrade for neutron ...
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
INFO  [alembic.runtime.migration] Running upgrade  -> kilo, kilo_initial
INFO  [alembic.runtime.migration] Running upgrade kilo -> 354db87e3225, nsxv_vdr_metadata.py
INFO  [alembic.runtime.migration] Running upgrade 354db87e3225 -> 599c6a226151, neutrodb_ipam
INFO  [alembic.runtime.migration] Running upgrade 599c6a226151 -> 52c5312f6baf, Initial operations in support of address scopes
INFO  [alembic.runtime.migration] Running upgrade 52c5312f6baf -> 313373c0ffee, Flavor framework
INFO  [alembic.runtime.migration] Running upgrade 313373c0ffee -> 8675309a5c4f, network_rbac
INFO  [alembic.runtime.migration] Running upgrade 8675309a5c4f -> 45f955889773, quota_usage
INFO  [alembic.runtime.migration] Running upgrade 45f955889773 -> 26c371498592, subnetpool hash
INFO  [alembic.runtime.migration] Running upgrade 26c371498592 -> 1c844d1677f7, add order to dnsnameservers
INFO  [alembic.runtime.migration] Running upgrade 1c844d1677f7 -> 1b4c6e320f79, address scope support in subnetpool
INFO  [alembic.runtime.migration] Running upgrade 1b4c6e320f79 -> 48153cb5f051, qos db changes
INFO  [alembic.runtime.migration] Running upgrade 48153cb5f051 -> 9859ac9c136, quota_reservations
INFO  [alembic.runtime.migration] Running upgrade 9859ac9c136 -> 34af2b5c5a59, Add dns_name to Port
INFO  [alembic.runtime.migration] Running upgrade 34af2b5c5a59 -> 59cb5b6cf4d, Add availability zone
INFO  [alembic.runtime.migration] Running upgrade 59cb5b6cf4d -> 13cfb89f881a, add is_default to subnetpool
INFO  [alembic.runtime.migration] Running upgrade 13cfb89f881a -> 32e5974ada25, Add standard attribute table
INFO  [alembic.runtime.migration] Running upgrade 32e5974ada25 -> ec7fcfbf72ee, Add network availability zone
INFO  [alembic.runtime.migration] Running upgrade ec7fcfbf72ee -> dce3ec7a25c9, Add router availability zone
INFO  [alembic.runtime.migration] Running upgrade dce3ec7a25c9 -> c3a73f615e4, Add ip_version to AddressScope
INFO  [alembic.runtime.migration] Running upgrade c3a73f615e4 -> 659bf3d90664, Add tables and attributes to support external DNS integration
INFO  [alembic.runtime.migration] Running upgrade 659bf3d90664 -> 1df244e556f5, add_unique_ha_router_agent_port_bindings
INFO  [alembic.runtime.migration] Running upgrade 1df244e556f5 -> 19f26505c74f, Auto Allocated Topology - aka Get-Me-A-Network
INFO  [alembic.runtime.migration] Running upgrade 19f26505c74f -> 15be73214821, add dynamic routing model data
INFO  [alembic.runtime.migration] Running upgrade 15be73214821 -> b4caf27aae4, add_bgp_dragent_model_data
INFO  [alembic.runtime.migration] Running upgrade b4caf27aae4 -> 15e43b934f81, rbac_qos_policy
INFO  [alembic.runtime.migration] Running upgrade 15e43b934f81 -> 31ed664953e6, Add resource_versions row to agent table
INFO  [alembic.runtime.migration] Running upgrade 31ed664953e6 -> 2f9e956e7532, tag support
INFO  [alembic.runtime.migration] Running upgrade 2f9e956e7532 -> 3894bccad37f, add_timestamp_to_base_resources
INFO  [alembic.runtime.migration] Running upgrade 3894bccad37f -> 0e66c5227a8a, Add desc to standard attr table
INFO  [alembic.runtime.migration] Running upgrade kilo -> 30018084ec99, Initial no-op Liberty contract rule.
INFO  [alembic.runtime.migration] Running upgrade 30018084ec99 -> 4ffceebfada, network_rbac
INFO  [alembic.runtime.migration] Running upgrade 4ffceebfada -> 5498d17be016, Drop legacy OVS and LB plugin tables
INFO  [alembic.runtime.migration] Running upgrade 5498d17be016 -> 2a16083502f3, Metaplugin removal
INFO  [alembic.runtime.migration] Running upgrade 2a16083502f3 -> 2e5352a0ad4d, Add missing foreign keys
INFO  [alembic.runtime.migration] Running upgrade 2e5352a0ad4d -> 11926bcfe72d, add geneve ml2 type driver
INFO  [alembic.runtime.migration] Running upgrade 11926bcfe72d -> 4af11ca47297, Drop cisco monolithic tables
INFO  [alembic.runtime.migration] Running upgrade 4af11ca47297 -> 1b294093239c, Drop embrane plugin table
INFO  [alembic.runtime.migration] Running upgrade 1b294093239c -> 8a6d8bdae39, standardattributes migration
INFO  [alembic.runtime.migration] Running upgrade 8a6d8bdae39 -> 2b4c2465d44b, DVR sheduling refactoring
INFO  [alembic.runtime.migration] Running upgrade 2b4c2465d44b -> e3278ee65050, Drop NEC plugin tables
INFO  [alembic.runtime.migration] Running upgrade e3278ee65050 -> c6c112992c9, rbac_qos_policy
INFO  [alembic.runtime.migration] Running upgrade c6c112992c9 -> 5ffceebfada, network_rbac_external
INFO  [alembic.runtime.migration] Running upgrade 5ffceebfada -> 4ffceebfcdc, standard_desc
  OK
[root@controller ~]# mysql neutron -e "show tables;"
+-----------------------------------------+
| Tables_in_neutron                       |
+-----------------------------------------+
| address_scopes                          |
| agents                                  |
| alembic_version                         |
| allowedaddresspairs                     |
| arista_provisioned_nets                 |
| arista_provisioned_tenants              |
| arista_provisioned_vms                  |
| auto_allocated_topologies               |
| bgp_peers                               |
| bgp_speaker_dragent_bindings            |
| bgp_speaker_network_bindings            |
| bgp_speaker_peer_bindings               |
| bgp_speakers                            |
| brocadenetworks                         |
| brocadeports                            |
| cisco_csr_identifier_map                |
| cisco_hosting_devices                   |
| cisco_ml2_apic_contracts                |
| cisco_ml2_apic_host_links               |
| cisco_ml2_apic_names                    |
| cisco_ml2_n1kv_network_bindings         |
| cisco_ml2_n1kv_network_profiles         |
| cisco_ml2_n1kv_policy_profiles          |
| cisco_ml2_n1kv_port_bindings            |
| cisco_ml2_n1kv_profile_bindings         |
| cisco_ml2_n1kv_vlan_allocations         |
| cisco_ml2_n1kv_vxlan_allocations        |
| cisco_ml2_nexus_nve                     |
| cisco_ml2_nexusport_bindings            |
| cisco_port_mappings                     |
| cisco_router_mappings                   |
| consistencyhashes                       |
| default_security_group                  |
| dnsnameservers                          |
| dvr_host_macs                           |
| externalnetworks                        |
| extradhcpopts                           |
| firewall_policies                       |
| firewall_rules                          |
| firewalls                               |
| flavors                                 |
| flavorserviceprofilebindings            |
| floatingipdnses                         |
| floatingips                             |
| ha_router_agent_port_bindings           |
| ha_router_networks                      |
| ha_router_vrid_allocations              |
| healthmonitors                          |
| ikepolicies                             |
| ipallocationpools                       |
| ipallocations                           |
| ipamallocationpools                     |
| ipamallocations                         |
| ipamavailabilityranges                  |
| ipamsubnets                             |
| ipavailabilityranges                    |
| ipsec_site_connections                  |
| ipsecpeercidrs                          |
| ipsecpolicies                           |
| lsn                                     |
| lsn_port                                |
| maclearningstates                       |
| members                                 |
| meteringlabelrules                      |
| meteringlabels                          |
| ml2_brocadenetworks                     |
| ml2_brocadeports                        |
| ml2_dvr_port_bindings                   |
| ml2_flat_allocations                    |
| ml2_geneve_allocations                  |
| ml2_geneve_endpoints                    |
| ml2_gre_allocations                     |
| ml2_gre_endpoints                       |
| ml2_network_segments                    |
| ml2_nexus_vxlan_allocations             |
| ml2_nexus_vxlan_mcast_groups            |
| ml2_port_binding_levels                 |
| ml2_port_bindings                       |
| ml2_ucsm_port_profiles                  |
| ml2_vlan_allocations                    |
| ml2_vxlan_allocations                   |
| ml2_vxlan_endpoints                     |
| multi_provider_networks                 |
| networkconnections                      |
| networkdhcpagentbindings                |
| networkdnsdomains                       |
| networkgatewaydevicereferences          |
| networkgatewaydevices                   |
| networkgateways                         |
| networkqueuemappings                    |
| networkrbacs                            |
| networks                                |
| networksecuritybindings                 |
| neutron_nsx_network_mappings            |
| neutron_nsx_port_mappings               |
| neutron_nsx_router_mappings             |
| neutron_nsx_security_group_mappings     |
| nexthops                                |
| nsxv_edge_dhcp_static_bindings          |
| nsxv_edge_vnic_bindings                 |
| nsxv_firewall_rule_bindings             |
| nsxv_internal_edges                     |
| nsxv_internal_networks                  |
| nsxv_port_index_mappings                |
| nsxv_port_vnic_mappings                 |
| nsxv_router_bindings                    |
| nsxv_router_ext_attributes              |
| nsxv_rule_mappings                      |
| nsxv_security_group_section_mappings    |
| nsxv_spoofguard_policy_network_mappings |
| nsxv_tz_network_bindings                |
| nsxv_vdr_dhcp_bindings                  |
| nuage_net_partition_router_mapping      |
| nuage_net_partitions                    |
| nuage_provider_net_bindings             |
| nuage_subnet_l2dom_mapping              |
| poolloadbalanceragentbindings           |
| poolmonitorassociations                 |
| pools                                   |
| poolstatisticss                         |
| portbindingports                        |
| portdnses                               |
| portqueuemappings                       |
| ports                                   |
| portsecuritybindings                    |
| providerresourceassociations            |
| qos_bandwidth_limit_rules               |
| qos_network_policy_bindings             |
| qos_policies                            |
| qos_port_policy_bindings                |
| qospolicyrbacs                          |
| qosqueues                               |
| quotas                                  |
| quotausages                             |
| reservations                            |
| resourcedeltas                          |
| router_extra_attributes                 |
| routerl3agentbindings                   |
| routerports                             |
| routerroutes                            |
| routerrules                             |
| routers                                 |
| securitygroupportbindings               |
| securitygrouprules                      |
| securitygroups                          |
| serviceprofiles                         |
| sessionpersistences                     |
| standardattributes                      |
| subnetpoolprefixes                      |
| subnetpools                             |
| subnetroutes                            |
| subnets                                 |
| tags                                    |
| tz_network_bindings                     |
| vcns_router_bindings                    |
| vips                                    |
| vpnservices                             |
+-----------------------------------------+
[root@controller ~]# 

7. Start the services

Restart the nova service:

[root@controller ~]# systemctl restart openstack-nova-api.service

Start the neutron services:

[root@controller ~]# systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
[root@controller ~]# systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
Problem:
neutron-linuxbridge-agent.service fails to start.

[root@controller ~]# more /var/log/neutron/linuxbridge-agent.log
2019-12-25 10:20:19.007 10957 ERROR neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent [-] Tunneling cannot be enabled without the local_ip bound to an interface on the host. Please configure local_ip None on the host interface to be used for tunneling and restart the agent.

Solution: edit the file directly with vi and paste in the configuration from the official documentation. The likely cause is a character typo made while entering the openstack-config commands, which left tunneling effectively enabled without a local_ip.

8. Check the services

[root@controller ~]# neutron agent-list
+---------------------+--------------------+------------+-------------------+-------+----------------+------------------------+
| id                  | agent_type         | host       | availability_zone | alive | admin_state_up | binary                 |
+---------------------+--------------------+------------+-------------------+-------+----------------+------------------------+
| 174e3d3d-abab-4f10- | Linux bridge agent | controller |                   | :-)   | True           | neutron-linuxbridge-   |
| a0c7-95179a1eac2c   |                    |            |                   |       |                | agent                  |
| c9581486-ada1-425a- | Metadata agent     | controller |                   | :-)   | True           | neutron-metadata-agent |
| be85-1f44fbc6e5bc   |                    |            |                   |       |                |                        |
| daae304c-b211-449c- | DHCP agent         | controller | nova              | :-)   | True           | neutron-dhcp-agent     |
| 9e5f-5b94ecc8e765   |                    |            |                   |       |                |                        |
+---------------------+--------------------+------------+-------------------+-------+----------------+------------------------+
[root@controller ~]# 

 

Installing and configuring neutron on the compute node

1. Install the packages

[root@compute1 ~]# yum install -y openstack-neutron-linuxbridge ebtables ipset

2. Edit the configuration files

cp /etc/neutron/neutron.conf{,.bak}
grep '^[a-z\[]' /etc/neutron/neutron.conf.bak >/etc/neutron/neutron.conf

openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password NEUTRON_PASS
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_host controller
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_password RABBIT_PASS

Checksum verification:

[root@compute1 ~]# md5sum /etc/neutron/neutron.conf
77ffab503797be5063c06e8b956d6ed0 /etc/neutron/neutron.conf

Pull the linuxbridge_agent.ini file directly from the controller node:
[root@compute1 ~]# scp -rp 10.0.0.11:/etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini
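
Reusing the controller's file only works because compute1's provider NIC is also named ens33. If the interface name differs on your compute node, adjust the mapping after copying (eth0 below is a hypothetical example):

openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings provider:eth0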

Update the nova configuration file:

openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password NEUTRON_PASS

3. Start the services

Restart nova:

[root@compute1 ~]# systemctl restart openstack-nova-compute.service

Start the neutron service:

[root@compute1 ~]# systemctl enable neutron-linuxbridge-agent.service
[root@compute1 ~]# systemctl start neutron-linuxbridge-agent.service

On the controller node, check that the compute node's network agent registered automatically:

[root@controller ~]# neutron agent-list
+---------------------+--------------------+------------+-------------------+-------+----------------+------------------------+
| id                  | agent_type         | host       | availability_zone | alive | admin_state_up | binary                 |
+---------------------+--------------------+------------+-------------------+-------+----------------+------------------------+
| 174e3d3d-abab-4f10- | Linux bridge agent | controller |                   | :-)   | True           | neutron-linuxbridge-   |
| a0c7-95179a1eac2c   |                    |            |                   |       |                | agent                  |
| 706bb03b-bb17-47e7- | Linux bridge agent | compute1   |                   | :-)   | True           | neutron-linuxbridge-   |
| b650-75d518c1bfdf   |                    |            |                   |       |                | agent                  |
| c9581486-ada1-425a- | Metadata agent     | controller |                   | :-)   | True           | neutron-metadata-agent |
| be85-1f44fbc6e5bc   |                    |            |                   |       |                |                        |
| daae304c-b211-449c- | DHCP agent         | controller | nova              | :-)   | True           | neutron-dhcp-agent     |
| 9e5f-5b94ecc8e765   |                    |            |                   |       |                |                        |
+---------------------+--------------------+------------+-------------------+-------+----------------+------------------------+

VI. horizon dashboard installation

1. Horizon can be installed on a standalone host; in this example it is installed on the compute node.

[root@compute1 ~]# yum install -y openstack-dashboard

2. Configuration

[root@compute1 ~]# cat /etc/openstack-dashboard/local_settings
# -*- coding: utf-8 -*-

import os

from django.utils.translation import ugettext_lazy as _


from openstack_dashboard import exceptions
from openstack_dashboard.settings import HORIZON_CONFIG

DEBUG = False
TEMPLATE_DEBUG = DEBUG


# WEBROOT is the location relative to Webserver root
# should end with a slash.
WEBROOT = '/dashboard/'
#LOGIN_URL = WEBROOT + 'auth/login/'
#LOGOUT_URL = WEBROOT + 'auth/logout/'
#
# LOGIN_REDIRECT_URL can be used as an alternative for
# HORIZON_CONFIG.user_home, if user_home is not set.
# Do not set it to '/home/', as this will cause circular redirect loop
#LOGIN_REDIRECT_URL = WEBROOT

# If horizon is running in production (DEBUG is False), set this
# with the list of host/domain names that the application can serve.
# For more information see:
# https://docs.djangoproject.com/en/dev/ref/settings/#allowed-hosts
ALLOWED_HOSTS = ['*', ]

# Set SSL proxy settings:
# Pass this header from the proxy after terminating the SSL,
# and don't forget to strip it from the client's request.
# For more information see:
# https://docs.djangoproject.com/en/1.8/ref/settings/#secure-proxy-ssl-header
#SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')

# If Horizon is being served through SSL, then uncomment the following two
# settings to better secure the cookies from security exploits
#CSRF_COOKIE_SECURE = True
#SESSION_COOKIE_SECURE = True

# The absolute path to the directory where message files are collected.
# The message file must have a .json file extension. When the user logins to
# horizon, the message files collected are processed and displayed to the user.
#MESSAGES_PATH=None

# Overrides for OpenStack API versions. Use this setting to force the
# OpenStack dashboard to use a specific API version for a given service API.
# Versions specified here should be integers or floats, not strings.
# NOTE: The version should be formatted as it appears in the URL for the
# service API. For example, The identity service APIs have inconsistent
# use of the decimal point, so valid options would be 2.0 or 3.
OPENSTACK_API_VERSIONS = {
#    "data-processing": 1.1,
    "identity": 3,
    "image": 2,
    "volume": 2,
    "compute": 2,
}

# Set this to True if running on multi-domain model. When this is enabled, it
# will require user to enter the Domain name in addition to username for login.
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

# Overrides the default domain used when running on single-domain model
# with Keystone V3. All entities will be created in the default domain.
# NOTE: This value must be the ID of the default domain, NOT the name.
# Also, you will most likely have a value in the keystone policy file like this
#    "cloud_admin": "rule:admin_required and domain_id:<your domain id>"
# This value must match the domain id specified there.
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'default'

# Set this to True to enable panels that provide the ability for users to
# manage Identity Providers (IdPs) and establish a set of rules to map
# federation protocol attributes to Identity API attributes.
# This extension requires v3.0+ of the Identity API.
#OPENSTACK_KEYSTONE_FEDERATION_MANAGEMENT = False

# Set Console type:
# valid options are "AUTO"(default), "VNC", "SPICE", "RDP", "SERIAL" or None
# Set to None explicitly if you want to deactivate the console.
#CONSOLE_TYPE = "AUTO"

# If provided, a "Report Bug" link will be displayed in the site header
# which links to the value of this setting (ideally a URL containing
# information on how to report issues).
#HORIZON_CONFIG["bug_url"] = "http://bug-report.example.com"

# Show backdrop element outside the modal, do not close the modal
# after clicking on backdrop.
#HORIZON_CONFIG["modal_backdrop"] = "static"

# Specify a regular expression to validate user passwords.
#HORIZON_CONFIG["password_validator"] = {
#    "regex": '.*',
#    "help_text": _("Your password does not meet the requirements."),
#}

# Disable simplified floating IP address management for deployments with
# multiple floating IP pools or complex network requirements.
#HORIZON_CONFIG["simple_ip_management"] = False

# Turn off browser autocompletion for forms including the login form and
# the database creation workflow if so desired.
#HORIZON_CONFIG["password_autocomplete"] = "off"

# Setting this to True will disable the reveal button for password fields,
# including on the login form.
#HORIZON_CONFIG["disable_password_reveal"] = False

LOCAL_PATH = '/tmp'

# Set custom secret key:
# You can either set it to a specific value or you can let horizon generate a
# default secret key that is unique on this machine, e.i. regardless of the
# amount of Python WSGI workers (if used behind Apache+mod_wsgi): However,
# there may be situations where you would want to set this explicitly, e.g.
# when multiple dashboard instances are distributed on different machines
# (usually behind a load-balancer). Either you have to make sure that a session
# gets all requests routed to the same dashboard instance or you set the same
# SECRET_KEY for all of them.
SECRET_KEY='65941f1393ea1c265ad7'

# We recommend you use memcached for development; otherwise after every reload
# of the django development server, you will have to login again. To use
# memcached set CACHES to something like
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    },
}

#CACHES = {
#    'default': {
#        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
#    },
#}

# Send email to the console by default
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
# Or send them to /dev/null
#EMAIL_BACKEND = 'django.core.mail.backends.dummy.EmailBackend'

# Configure these for your outgoing email host
#EMAIL_HOST = 'smtp.my-company.com'
#EMAIL_PORT = 25
#EMAIL_HOST_USER = 'djangomail'
#EMAIL_HOST_PASSWORD = 'top-secret!'

# For multiple regions uncomment this configuration, and add (endpoint, title).
#AVAILABLE_REGIONS = [
#    ('http://cluster1.example.com:5000/v2.0', 'cluster1'),
#    ('http://cluster2.example.com:5000/v2.0', 'cluster2'),
#]

OPENSTACK_HOST = "controller"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

# Enables keystone web single-sign-on if set to True.
#WEBSSO_ENABLED = False

# Determines which authentication choice to show as default.
#WEBSSO_INITIAL_CHOICE = "credentials"

# The list of authentication mechanisms which include keystone
# federation protocols and identity provider/federation protocol
# mapping keys (WEBSSO_IDP_MAPPING). Current supported protocol
# IDs are 'saml2' and 'oidc'  which represent SAML 2.0, OpenID
# Connect respectively.
# Do not remove the mandatory credentials mechanism.
# Note: The last two tuples are sample mapping keys to a identity provider
# and federation protocol combination (WEBSSO_IDP_MAPPING).
#WEBSSO_CHOICES = (
#    ("credentials", _("Keystone Credentials")),
#    ("oidc", _("OpenID Connect")),
#    ("saml2", _("Security Assertion Markup Language")),
#    ("acme_oidc", "ACME - OpenID Connect"),
#    ("acme_saml2", "ACME - SAML2"),
#)

# A dictionary of specific identity provider and federation protocol
# combinations. From the selected authentication mechanism, the value
# will be looked up as keys in the dictionary. If a match is found,
# it will redirect the user to a identity provider and federation protocol
# specific WebSSO endpoint in keystone, otherwise it will use the value
# as the protocol_id when redirecting to the WebSSO by protocol endpoint.
# NOTE: The value is expected to be a tuple formatted as: (<idp_id>, <protocol_id>).
#WEBSSO_IDP_MAPPING = {
#    "acme_oidc": ("acme", "oidc"),
#    "acme_saml2": ("acme", "saml2"),
#}

# Disable SSL certificate checks (useful for self-signed certificates):
#OPENSTACK_SSL_NO_VERIFY = True

# The CA certificate to use to verify SSL connections
#OPENSTACK_SSL_CACERT = '/path/to/cacert.pem'

# The OPENSTACK_KEYSTONE_BACKEND settings can be used to identify the
# capabilities of the auth backend for Keystone.
# If Keystone has been configured to use LDAP as the auth backend then set
# can_edit_user to False and name to 'ldap'.
#
# TODO(tres): Remove these once Keystone has an API to identify auth backend.
OPENSTACK_KEYSTONE_BACKEND = {
    'name': 'native',
    'can_edit_user': True,
    'can_edit_group': True,
    'can_edit_project': True,
    'can_edit_domain': True,
    'can_edit_role': True,
}

# Setting this to True, will add a new "Retrieve Password" action on instance,
# allowing Admin session password retrieval/decryption.
#OPENSTACK_ENABLE_PASSWORD_RETRIEVE = False

# The Launch Instance user experience has been significantly enhanced.
# You can choose whether to enable the new launch instance experience,
# the legacy experience, or both. The legacy experience will be removed
# in a future release, but is available as a temporary backup setting to ensure
# compatibility with existing deployments. Further development will not be
# done on the legacy experience. Please report any problems with the new
# experience via the Launchpad tracking system.
#
# Toggle LAUNCH_INSTANCE_LEGACY_ENABLED and LAUNCH_INSTANCE_NG_ENABLED to
# determine the experience to enable.  Set them both to true to enable
# both.
#LAUNCH_INSTANCE_LEGACY_ENABLED = True
#LAUNCH_INSTANCE_NG_ENABLED = False

# A dictionary of settings which can be used to provide the default values for
# properties found in the Launch Instance modal.
#LAUNCH_INSTANCE_DEFAULTS = {
#    'config_drive': False,
#}

# The Xen Hypervisor has the ability to set the mount point for volumes
# attached to instances (other Hypervisors currently do not). Setting
# can_set_mount_point to True will add the option to set the mount point
# from the UI.
OPENSTACK_HYPERVISOR_FEATURES = {
    'can_set_mount_point': False,
    'can_set_password': False,
    'requires_keypair': False,
}

# The OPENSTACK_CINDER_FEATURES settings can be used to enable optional
# services provided by cinder that is not exposed by its extension API.
OPENSTACK_CINDER_FEATURES = {
    'enable_backup': False,
}

# The OPENSTACK_NEUTRON_NETWORK settings can be used to enable optional
# services provided by neutron. Options currently available are load
# balancer service, security groups, quotas, VPN service.
OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': False,
    'enable_quotas': False,
    'enable_ipv6': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,

    # Neutron can be configured with a default Subnet Pool to be used for IPv4
    # subnet-allocation. Specify the label you wish to display in the Address
    # pool selector on the create subnet step if you want to use this feature.
    'default_ipv4_subnet_pool_label': None,

    # Neutron can be configured with a default Subnet Pool to be used for IPv6
    # subnet-allocation. Specify the label you wish to display in the Address
    # pool selector on the create subnet step if you want to use this feature.
    # You must set this to enable IPv6 Prefix Delegation in a PD-capable
    # environment.
    'default_ipv6_subnet_pool_label': None,

    # The profile_support option is used to detect if an external router can be
    # configured via the dashboard. When using specific plugins the
    # profile_support can be turned on if needed.
    'profile_support': None,
    #'profile_support': 'cisco',

    # Set which provider network types are supported. Only the network types
    # in this list will be available to choose from when creating a network.
    # Network types include local, flat, vlan, gre, and vxlan.
    'supported_provider_types': ['*'],

    # Set which VNIC types are supported for port binding. Only the VNIC
    # types in this list will be available to choose from when creating a
    # port.
    # VNIC types include 'normal', 'macvtap' and 'direct'.
    # Set to empty list or None to disable VNIC type selection.
    'supported_vnic_types': ['*'],
}

# The OPENSTACK_HEAT_STACK settings can be used to disable password
# field required while launching the stack.
OPENSTACK_HEAT_STACK = {
    'enable_user_pass': True,
}

# The OPENSTACK_IMAGE_BACKEND settings can be used to customize features
# in the OpenStack Dashboard related to the Image service, such as the list
# of supported image formats.
#OPENSTACK_IMAGE_BACKEND = {
#    'image_formats': [
#        ('', _('Select format')),
#        ('aki', _('AKI - Amazon Kernel Image')),
#        ('ami', _('AMI - Amazon Machine Image')),
#        ('ari', _('ARI - Amazon Ramdisk Image')),
#        ('docker', _('Docker')),
#        ('iso', _('ISO - Optical Disk Image')),
#        ('ova', _('OVA - Open Virtual Appliance')),
#        ('qcow2', _('QCOW2 - QEMU Emulator')),
#        ('raw', _('Raw')),
#        ('vdi', _('VDI - Virtual Disk Image')),
#        ('vhd', _('VHD - Virtual Hard Disk')),
#        ('vmdk', _('VMDK - Virtual Machine Disk')),
#    ],
#}

# The IMAGE_CUSTOM_PROPERTY_TITLES settings is used to customize the titles for
# image custom property attributes that appear on image detail pages.
IMAGE_CUSTOM_PROPERTY_TITLES = {
    "architecture": _("Architecture"),
    "kernel_id": _("Kernel ID"),
    "ramdisk_id": _("Ramdisk ID"),
    "image_state": _("Euca2ools state"),
    "project_id": _("Project ID"),
    "image_type": _("Image Type"),
}

# The IMAGE_RESERVED_CUSTOM_PROPERTIES setting is used to specify which image
# custom properties should not be displayed in the Image Custom Properties
# table.
IMAGE_RESERVED_CUSTOM_PROPERTIES = []

# OPENSTACK_ENDPOINT_TYPE specifies the endpoint type to use for the endpoints
# in the Keystone service catalog. Use this setting when Horizon is running
# external to the OpenStack environment. The default is 'publicURL'.
#OPENSTACK_ENDPOINT_TYPE = "publicURL"

# SECONDARY_ENDPOINT_TYPE specifies the fallback endpoint type to use in the
# case that OPENSTACK_ENDPOINT_TYPE is not present in the endpoints
# in the Keystone service catalog. Use this setting when Horizon is running
# external to the OpenStack environment. The default is None.  This
# value should differ from OPENSTACK_ENDPOINT_TYPE if used.
#SECONDARY_ENDPOINT_TYPE = "publicURL"

# The number of objects (Swift containers/objects or images) to display
# on a single page before providing a paging element (a "more" link)
# to paginate results.
API_RESULT_LIMIT = 1000
API_RESULT_PAGE_SIZE = 20

# The size of chunk in bytes for downloading objects from Swift
SWIFT_FILE_TRANSFER_CHUNK_SIZE = 512 * 1024

# Specify a maximum number of items to display in a dropdown.
DROPDOWN_MAX_ITEMS = 30

# The timezone of the server. This should correspond with the timezone
# of your entire OpenStack installation, and hopefully be in UTC.
TIME_ZONE = "Asia/Shanghai"

# When launching an instance, the menu of available flavors is
# sorted by RAM usage, ascending. If you would like a different sort order,
# you can provide another flavor attribute as sorting key. Alternatively, you
# can provide a custom callback method to use for sorting. You can also provide
# a flag for reverse sort. For more info, see
# http://docs.python.org/2/library/functions.html#sorted
#CREATE_INSTANCE_FLAVOR_SORT = {
#    'key': 'name',
#     # or
#    'key': my_awesome_callback_method,
#    'reverse': False,
#}

# Set this to True to display an 'Admin Password' field on the Change Password
# form to verify that it is indeed the admin logged-in who wants to change
# the password.
#ENFORCE_PASSWORD_CHECK = False

# Modules that provide /auth routes that can be used to handle different types
# of user authentication. Add auth plugins that require extra route handling to
# this list.
#AUTHENTICATION_URLS = [
#    'openstack_auth.urls',
#]

# The Horizon Policy Enforcement engine uses these values to load per service
# policy rule files. The content of these files should match the files the
# OpenStack services are using to determine role based access control in the
# target installation.

# Path to directory containing policy.json files
POLICY_FILES_PATH = '/etc/openstack-dashboard'

# Map of local copy of service policy files.
# Please insure that your identity policy file matches the one being used on
# your keystone servers. There is an alternate policy file that may be used
# in the Keystone v3 multi-domain case, policy.v3cloudsample.json.
# This file is not included in the Horizon repository by default but can be
# found at
# http://git.openstack.org/cgit/openstack/keystone/tree/etc/ \
# policy.v3cloudsample.json
# Having matching policy files on the Horizon and Keystone servers is essential
# for normal operation. This holds true for all services and their policy files.
#POLICY_FILES = {
#    'identity': 'keystone_policy.json',
#    'compute': 'nova_policy.json',
#    'volume': 'cinder_policy.json',
#    'image': 'glance_policy.json',
#    'orchestration': 'heat_policy.json',
#    'network': 'neutron_policy.json',
#    'telemetry': 'ceilometer_policy.json',
#}

# TODO: (david-lyle) remove when plugins support adding settings.
# Note: Only used when trove-dashboard plugin is configured to be used by
# Horizon.
# Trove user and database extension support. By default support for
# creating users and databases on database instances is turned on.
# To disable these extensions set the permission here to something
# unusable such as ["!"].
#TROVE_ADD_USER_PERMS = []
#TROVE_ADD_DATABASE_PERMS = []

# Change this patch to the appropriate list of tuples containing
# a key, label and static directory containing two files:
# _variables.scss and _styles.scss
#AVAILABLE_THEMES = [
#    ('default', 'Default', 'themes/default'),
#    ('material', 'Material', 'themes/material'),
#]

LOGGING = {
    'version': 1,
    # When set to True this will disable all logging except
    # for loggers specified in this configuration dictionary. Note that
    # if nothing is specified here and disable_existing_loggers is True,
    # django.db.backends will still log unless it is disabled explicitly.
    'disable_existing_loggers': False,
    'handlers': {
        'null': {
            'level': 'DEBUG',
            'class': 'logging.NullHandler',
        },
        'console': {
            # Set the level to "DEBUG" for verbose output logging.
            'level': 'INFO',
            'class': 'logging.StreamHandler',
        },
    },
    'loggers': {
        # Logging from django.db.backends is VERY verbose, send to null
        # by default.
        'django.db.backends': {
            'handlers': ['null'],
            'propagate': False,
        },
        'requests': {
            'handlers': ['null'],
            'propagate': False,
        },
        'horizon': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'openstack_dashboard': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'novaclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'cinderclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'keystoneclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'glanceclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'neutronclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'heatclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'ceilometerclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'swiftclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'openstack_auth': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'nose.plugins.manager': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'django': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'iso8601': {
            'handlers': ['null'],
            'propagate': False,
        },
        'scss': {
            'handlers': ['null'],
            'propagate': False,
        },
    },
}

# 'direction' should not be specified for all_tcp/udp/icmp.
# It is specified in the form.
SECURITY_GROUP_RULES = {
    'all_tcp': {
        'name': _('All TCP'),
        'ip_protocol': 'tcp',
        'from_port': '1',
        'to_port': '65535',
    },
    'all_udp': {
        'name': _('All UDP'),
        'ip_protocol': 'udp',
        'from_port': '1',
        'to_port': '65535',
    },
    'all_icmp': {
        'name': _('All ICMP'),
        'ip_protocol': 'icmp',
        'from_port': '-1',
        'to_port': '-1',
    },
    'ssh': {
        'name': 'SSH',
        'ip_protocol': 'tcp',
        'from_port': '22',
        'to_port': '22',
    },
    'smtp': {
        'name': 'SMTP',
        'ip_protocol': 'tcp',
        'from_port': '25',
        'to_port': '25',
    },
    'dns': {
        'name': 'DNS',
        'ip_protocol': 'tcp',
        'from_port': '53',
        'to_port': '53',
    },
    'http': {
        'name': 'HTTP',
        'ip_protocol': 'tcp',
        'from_port': '80',
        'to_port': '80',
    },
    'pop3': {
        'name': 'POP3',
        'ip_protocol': 'tcp',
        'from_port': '110',
        'to_port': '110',
    },
    'imap': {
        'name': 'IMAP',
        'ip_protocol': 'tcp',
        'from_port': '143',
        'to_port': '143',
    },
    'ldap': {
        'name': 'LDAP',
        'ip_protocol': 'tcp',
        'from_port': '389',
        'to_port': '389',
    },
    'https': {
        'name': 'HTTPS',
        'ip_protocol': 'tcp',
        'from_port': '443',
        'to_port': '443',
    },
    'smtps': {
        'name': 'SMTPS',
        'ip_protocol': 'tcp',
        'from_port': '465',
        'to_port': '465',
    },
    'imaps': {
        'name': 'IMAPS',
        'ip_protocol': 'tcp',
        'from_port': '993',
        'to_port': '993',
    },
    'pop3s': {
        'name': 'POP3S',
        'ip_protocol': 'tcp',
        'from_port': '995',
        'to_port': '995',
    },
    'ms_sql': {
        'name': 'MS SQL',
        'ip_protocol': 'tcp',
        'from_port': '1433',
        'to_port': '1433',
    },
    'mysql': {
        'name': 'MYSQL',
        'ip_protocol': 'tcp',
        'from_port': '3306',
        'to_port': '3306',
    },
    'rdp': {
        'name': 'RDP',
        'ip_protocol': 'tcp',
        'from_port': '3389',
        'to_port': '3389',
    },
}

# Deprecation Notice:
#
# The setting FLAVOR_EXTRA_KEYS has been deprecated.
# Please load extra spec metadata into the Glance Metadata Definition Catalog.
#
# The sample quota definitions can be found in:
# <glance_source>/etc/metadefs/compute-quota.json
#
# The metadata definition catalog supports CLI and API:
#  $glance --os-image-api-version 2 help md-namespace-import
#  $glance-manage db_load_metadefs <directory_with_definition_files>
#
# See Metadata Definitions on: http://docs.openstack.org/developer/glance/

# TODO: (david-lyle) remove when plugins support settings natively
# Note: This is only used when the Sahara plugin is configured and enabled
# for use in Horizon.
# Indicate to the Sahara data processing service whether or not
# automatic floating IP allocation is in effect.  If it is not
# in effect, the user will be prompted to choose a floating IP
# pool for use in their cluster.  False by default.  You would want
# to set this to True if you were running Nova Networking with
# auto_assign_floating_ip = True.
#SAHARA_AUTO_IP_ALLOCATION_ENABLED = False

# The hash algorithm to use for authentication tokens. This must
# match the hash algorithm that the identity server and the
# auth_token middleware are using. Allowed values are the
# algorithms supported by Python's hashlib library.
#OPENSTACK_TOKEN_HASH_ALGORITHM = 'md5'

# Hashing tokens from Keystone keeps the Horizon session data smaller, but it
# doesn't work in some cases when using PKI tokens.  Uncomment this value and
# set it to False if using PKI tokens and there are 401 errors due to token
# hashing.
#OPENSTACK_TOKEN_HASH_ENABLED = True

# AngularJS requires some settings to be made available to
# the client side. Some settings are required by in-tree / built-in horizon
# features. These settings must be added to REST_API_REQUIRED_SETTINGS in the
# form of ['SETTING_1','SETTING_2'], etc.
#
# You may remove settings from this list for security purposes, but do so at
# the risk of breaking a built-in horizon feature. These settings are required
# for horizon to function properly. Only remove them if you know what you
# are doing. These settings may in the future be moved to be defined within
# the enabled panel configuration.
# You should not add settings to this list for out of tree extensions.
# See: https://wiki.openstack.org/wiki/Horizon/RESTAPI
REST_API_REQUIRED_SETTINGS = ['OPENSTACK_HYPERVISOR_FEATURES',
                              'LAUNCH_INSTANCE_DEFAULTS']

# Additional settings can be made available to the client side for
# extensibility by specifying them in REST_API_ADDITIONAL_SETTINGS
# !! Please use extreme caution as the settings are transferred via HTTP/S
# and are not encrypted on the browser. This is an experimental API and
# may be deprecated in the future without notice.
#REST_API_ADDITIONAL_SETTINGS = []

# DISALLOW_IFRAME_EMBED can be used to prevent Horizon from being embedded
# within an iframe. Legacy browsers are still vulnerable to a Cross-Frame
# Scripting (XFS) vulnerability, so this option allows extra security hardening
# where iframes are not used in deployment. Default setting is True.
# For more information see:
# http://tinyurl.com/anticlickjack
#DISALLOW_IFRAME_EMBED = True
[root@compute1 ~]# 

Modify the parameters in the /etc/httpd/conf.d/openstack-dashboard.conf configuration file

[root@compute1 ~]# vi /etc/httpd/conf.d/openstack-dashboard.conf 

WSGIDaemonProcess dashboard
WSGIProcessGroup dashboard
WSGISocketPrefix run/wsgi
WSGIApplicationGroup %{GLOBAL}   ### add this parameter
WSGIScriptAlias /dashboard /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
Alias /dashboard/static /usr/share/openstack-dashboard/static

<Directory /usr/share/openstack-dashboard/openstack_dashboard/wsgi>
  Options All
  AllowOverride All
  Require all granted
</Directory>

<Directory /usr/share/openstack-dashboard/static>
  Options All
  AllowOverride All
  Require all granted
</Directory>

3. Start the httpd service

[root@compute1 ~]# systemctl start httpd
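
As an optional sanity check (an addition here, assuming the dashboard host is 10.0.0.31), confirm that httpd is running and that the /dashboard URL answers:

[root@compute1 ~]# systemctl status httpd
[root@compute1 ~]# curl -I http://10.0.0.31/dashboard   ### expect a 200 or a redirect to the login page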

7. Launch an Instance

Create a network

[root@controller ~]# neutron net-create --share --provider:physical_network provider --provider:network_type flat ar
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| availability_zone_hints   |                                      |
| availability_zones        |                                      |
| created_at                | 2019-12-26T03:30:40                  |
| description               |                                      |
| id                        | 974bf2fb-90d6-499c-8157-971a09f46106 |
| ipv4_address_scope        |                                      |
| ipv6_address_scope        |                                      |
| mtu                       | 1500                                 |
| name                      | ar                                   |
| port_security_enabled     | True                                 |
| provider:network_type     | flat                                 |
| provider:physical_network | provider                             |
| provider:segmentation_id  |                                      |
| router:external           | False                                |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tags                      |                                      |
| tenant_id                 | 45f70e0011bb4c09985709c1a5dccd0d     |
| updated_at                | 2019-12-26T03:30:40                  |
+---------------------------+--------------------------------------+
[root@controller ~]# neutron subnet-create --name artest --allocation-pool start=10.0.0.101,end=10.0.0.250 --dns-nameserver 114.114.114.114 --gateway 10.0.0.2 ar 10.0.0.0/24
Created a new subnet:
+-------------------+----------------------------------------------+
| Field             | Value                                        |
+-------------------+----------------------------------------------+
| allocation_pools  | {"start": "10.0.0.101", "end": "10.0.0.250"} |
| cidr              | 10.0.0.0/24                                  |
| created_at        | 2019-12-26T03:33:30                          |
| description       |                                              |
| dns_nameservers   | 114.114.114.114                              |
| enable_dhcp       | True                                         |
| gateway_ip        | 10.0.0.2                                     |
| host_routes       |                                              |
| id                | 74489b2a-755d-4f75-9c57-b9bda2d98eb6         |
| ip_version        | 4                                            |
| ipv6_address_mode |                                              |
| ipv6_ra_mode      |                                              |
| name              | artest                                       |
| network_id        | 974bf2fb-90d6-499c-8157-971a09f46106         |
| subnetpool_id     |                                              |
| tenant_id         | 45f70e0011bb4c09985709c1a5dccd0d             |
| updated_at        | 2019-12-26T03:33:30                          |
+-------------------+----------------------------------------------+
[root@controller ~]# 

Create a flavor (instance hardware profile)

[root@controller ~]# openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
+----------------------------+---------+
| Field                      | Value   |
+----------------------------+---------+
| OS-FLV-DISABLED:disabled   | False   |
| OS-FLV-EXT-DATA:ephemeral  | 0       |
| disk                       | 1       |
| id                         | 0       |
| name                       | m1.nano |
| os-flavor-access:is_public | True    |
| ram                        | 64      |
| rxtx_factor                | 1.0     |
| swap                       |         |
| vcpus                      | 1       |
+----------------------------+---------+
[root@controller ~]# 
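
Optionally confirm the flavor was registered (this check is an addition, not part of the original transcript):

[root@controller ~]# openstack flavor list   ### m1.nano should appear with 64 MB RAM and a 1 GB disk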

Create a key pair

[root@controller ~]# ssh-keygen -q -N "" -f ~/.ssh/id_rsa

Upload the public key to OpenStack

[root@controller ~]# openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
+-------------+-------------------------------------------------+
| Field       | Value                                           |
+-------------+-------------------------------------------------+
| fingerprint | ca:d1:a8:f5:b8:8d:0d:c4:9f:d5:11:df:f1:48:e9:24 |
| name        | mykey                                           |
| user_id     | d1ec935819424b6db22198b528834b4e                |
+-------------+-------------------------------------------------+
[root@controller ~]# 
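
Optionally verify the upload (an extra check, not in the original):

[root@controller ~]# openstack keypair list   ### mykey should be listed with the fingerprint shown above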

Create security group rules

[root@controller ~]# openstack security group rule create --proto icmp default
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| id                    | 12926d8a-8d5e-4423-a0fc-de1f03453341 |
| ip_protocol           | icmp                                 |
| ip_range              | 0.0.0.0/0                            |
| parent_group_id       | fcbec411-99c5-4735-b418-aebd051746a5 |
| port_range            |                                      |
| remote_security_group |                                      |
+-----------------------+--------------------------------------+
[root@controller ~]# openstack security group rule create --proto tcp --dst-port 22 default
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| id                    | 331ef464-d27c-4356-8e74-a50f11855e84 |
| ip_protocol           | tcp                                  |
| ip_range              | 0.0.0.0/0                            |
| parent_group_id       | fcbec411-99c5-4735-b418-aebd051746a5 |
| port_range            | 22:22                                |
| remote_security_group |                                      |
+-----------------------+--------------------------------------+
[root@controller ~]# 
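
Optionally list the rules to confirm both were added; the exact listing command can vary by client version, so treat this as a sketch:

[root@controller ~]# openstack security group rule list default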

Launch an instance

Check the network ID

[root@controller ~]# neutron net-list
+--------------------------------------+------+--------------------------------------------------+
| id                                   | name | subnets                                          |
+--------------------------------------+------+--------------------------------------------------+
| 974bf2fb-90d6-499c-8157-971a09f46106 | ar   | 74489b2a-755d-4f75-9c57-b9bda2d98eb6 10.0.0.0/24 |
+--------------------------------------+------+--------------------------------------------------+

Launch the instance

[root@controller ~]# openstack server create --flavor m1.nano --image cirros --nic net-id=974bf2fb-90d6-499c-8157-971a09f46106 --security-group default --key-name mykey ar
+--------------------------------------+-----------------------------------------------+
| Field                                | Value                                         |
+--------------------------------------+-----------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                        |
| OS-EXT-AZ:availability_zone          |                                               |
| OS-EXT-SRV-ATTR:host                 | None                                          |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | None                                          |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000001                             |
| OS-EXT-STS:power_state               | 0                                             |
| OS-EXT-STS:task_state                | scheduling                                    |
| OS-EXT-STS:vm_state                  | building                                      |
| OS-SRV-USG:launched_at               | None                                          |
| OS-SRV-USG:terminated_at             | None                                          |
| accessIPv4                           |                                               |
| accessIPv6                           |                                               |
| addresses                            |                                               |
| adminPass                            | Njw5ScVWj9mn                                  |
| config_drive                         |                                               |
| created                              | 2019-12-26T05:44:38Z                          |
| flavor                               | m1.nano (0)                                   |
| hostId                               |                                               |
| id                                   | 70217016-d9b9-4f28-8876-42af8a51c3d7          |
| image                                | cirros (4492225b-eb11-4705-90dd-46c8e8cfe238) |
| key_name                             | mykey                                         |
| name                                 | ar                                            |
| os-extended-volumes:volumes_attached | []                                            |
| progress                             | 0                                             |
| project_id                           | 45f70e0011bb4c09985709c1a5dccd0d              |
| properties                           |                                               |
| security_groups                      | [{u'name': u'default'}]                       |
| status                               | BUILD                                         |
| updated                              | 2019-12-26T05:44:39Z                          |
| user_id                              | d1ec935819424b6db22198b528834b4e              |
+--------------------------------------+-----------------------------------------------+
[root@controller ~]# 

Launching the instance failed:
[root@controller nova]# nova list
+--------------------------------------+------+--------+------------+-------------+----------+
| ID                                   | Name | Status | Task State | Power State | Networks |
+--------------------------------------+------+--------+------------+-------------+----------+
| 70217016-d9b9-4f28-8876-42af8a51c3d7 | ar   | ERROR  | -          | NOSTATE     |          |
+--------------------------------------+------+--------+------------+-------------+----------+

Problem:
Message:
No valid host was found. There are not enough hosts available.
Code:
500
Details:
File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 404, in build_instances
    context, request_spec, filter_properties)
File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 448, in _schedule_instances
    hosts = self.scheduler_client.select_destinations(context, spec_obj)
File "/usr/lib/python2.7/site-packages/nova/scheduler/utils.py", line 372, in wrapped
    return func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 51, in select_destinations
    return self.queryclient.select_destinations(context, spec_obj)
File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
    return getattr(self.instance, __name)(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/nova/scheduler/client/query.py", line 32, in select_destinations
    return self.scheduler_rpcapi.select_destinations(context, spec_obj)
File "/usr/lib/python2.7/site-packages/nova/scheduler/rpcapi.py", line 121, in select_destinations
    return cctxt.call(ctxt, 'select_destinations', **msg_args)
File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 158, in call
    retry=self.retry)
File "/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line 90, in _send
    timeout=timeout, retry=retry)
File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 470, in send
    retry=retry)
File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 461, in _send
    raise result

nova log:

2019-12-30 10:37:08.006 7195 ERROR nova.compute.manager [req-909c7b7c-4986-496f-b8d6-e9f71adcb77d d1ec935819424b6db22198b528834b4e 45f70e0011bb4c09985709c1a5dccd0d - - -] Instance failed network setup after 1 attempt(s)
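
"No valid host was found" means the scheduler could not place the instance on any compute node. As a troubleshooting sketch (these checks are an addition, not from the original log), confirm the compute service and the bridge agent are alive, and restart them on the compute node if they are not:

[root@controller ~]# nova service-list      ### nova-compute on compute1 should be 'up'
[root@controller ~]# neutron agent-list     ### the Linux bridge agent should show ':-)'
[root@compute1 ~]# systemctl restart openstack-nova-compute neutron-linuxbridge-agent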

8. Scale Out Compute Nodes

1. Clone the machine, configure the network, and update the hostname and hosts file

[root@compute2 ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.32  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::1534:7f05:3d6a:9287  prefixlen 64  scopeid 0x20<link>
        inet6 fe80::25e8:8754:cb81:68c8  prefixlen 64  scopeid 0x20<link>
        inet6 fe80::389d:e340:ea17:3a30  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:0a:ba:2c  txqueuelen 1000  (Ethernet)
        RX packets 154  bytes 14563 (14.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 173  bytes 24071 (23.5 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 4  bytes 272 (272.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 4  bytes 272 (272.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@compute2 ~]# hostname
compute2
[root@compute2 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6


10.0.0.11   controller
10.0.0.31   compute1
10.0.0.32   compute2

[root@compute2 ~]# 

2. Configure the yum repositories

[root@compute2 opt]# scp -rp 10.0.0.31:/opt/openstack_rpm.tar.gz .
[root@compute2 opt]# tar -xf openstack_rpm.tar.gz
[root@compute2 opt]# mount /dev/cdrom /mnt/
[root@compute2 opt]# scp -rp 10.0.0.31:/etc/yum.repos.d/local.repo /etc/yum.repos.d/
[root@compute2 ~]# echo 'mount /dev/cdrom /mnt' >>/etc/rc.local 
[root@compute2 ~]# chmod +x /etc/rc.local
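
Optionally confirm the repositories resolve before installing anything (an extra check, not in the original):

[root@compute2 ~]# yum repolist   ### both the 'local' and 'openstack' repos should report packages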

3. Configure time synchronization

[root@compute2 ~]# yum install -y chrony

Configure and restart the service

[root@compute2 ~]# vi /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 10.0.0.11 iburst
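
After editing, restart chronyd and verify that compute2 actually syncs from the controller (the verification step is an addition; '^*' marks the selected source):

[root@compute2 ~]# systemctl restart chronyd
[root@compute2 ~]# chronyc sources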

Disable services

Disable the firewall
[root@compute2 neutron]# systemctl disable firewalld
Disable selinux   ### this must be disabled, otherwise neutron-linuxbridge-agent.service cannot start
[root@compute2 neutron]# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
[root@compute2 neutron]# setenforce 0


4. Install the openstack client and openstack-selinux

[root@compute2 ~]# yum install -y python-openstackclient openstack-selinux

5. Install nova-compute

[root@compute2 ~]# yum install -y openstack-nova-compute openstack-utils.noarch

Configuration

openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 10.0.0.32
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron True
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf glance api_servers http://controller:9292
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password NOVA_PASS
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_host controller
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_password RABBIT_PASS
openstack-config --set /etc/nova/nova.conf vnc enabled True
openstack-config --set /etc/nova/nova.conf vnc vncserver_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address '$my_ip'
openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://controller:6080/vnc_auto.html
openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password NEUTRON_PASS
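
Because openstack-config fails silently on a mistyped section or key, it is worth spot-checking a value back (openstack-config also supports --get; this check is an addition):

[root@compute2 ~]# openstack-config --get /etc/nova/nova.conf DEFAULT my_ip   ### should print 10.0.0.32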

6. Install neutron-linuxbridge-agent

[root@compute2 ~]# yum install -y openstack-neutron-linuxbridge ebtables ipset -y
[root@compute2 ~]# cp /etc/neutron/neutron.conf{,.bak}
[root@compute2 ~]# grep -Ev '^$|#' /etc/neutron/neutron.conf.bak >/etc/neutron/neutron.conf
openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password NEUTRON_PASS
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_host controller
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_password RABBIT_PASS
[root@compute2 ~]# cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
[root@compute2 ~]# grep -Ev '^[a-z\[]' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak >/etc/neutron/plugins/ml2/linuxbridge_agent.ini
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings provider:ens33
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan False
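
The same spot-check applies here; enable_vxlan and the securitygroup keys are the ones most often mistyped (an optional addition):

[root@compute2 ~]# openstack-config --get /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan   ### should print False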

7. Start the services

[root@compute2 ~]# systemctl start libvirtd openstack-nova-compute neutron-linuxbridge-agent
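
To survive a reboot, also enable the units (an addition; the original transcript only starts them):

[root@compute2 ~]# systemctl enable libvirtd openstack-nova-compute neutron-linuxbridge-agent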

Verify the services from the controller node

[root@controller ~]# nova service-list
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host       | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-conductor   | controller | internal | enabled | up    | 2020-01-02T02:14:54.000000 | -               |
| 2  | nova-consoleauth | controller | internal | enabled | up    | 2020-01-02T02:14:54.000000 | -               |
| 3  | nova-scheduler   | controller | internal | enabled | up    | 2020-01-02T02:14:54.000000 | -               |
| 6  | nova-compute     | compute1   | nova     | enabled | up    | 2020-01-02T02:14:57.000000 | -               |
| 7  | nova-compute     | compute2   | nova     | enabled | up    | 2020-01-02T02:14:55.000000 | -               |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+

[root@controller ~]# neutron agent-list
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host       | availability_zone | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| 174e3d3d-abab-4f10-a0c7-95179a1eac2c | Linux bridge agent | controller |                   | :-)   | True           | neutron-linuxbridge-agent |
| 4393af99-20f9-4db2-a1b5-8734803e46e9 | Linux bridge agent | compute2   |                   | :-)   | True           | neutron-linuxbridge-agent |
| 706bb03b-bb17-47e7-b650-75d518c1bfdf | Linux bridge agent | compute1   |                   | :-)   | True           | neutron-linuxbridge-agent |
| c9581486-ada1-425a-be85-1f44fbc6e5bc | Metadata agent     | controller |                   | :-)   | True           | neutron-metadata-agent    |
| daae304c-b211-449c-9e5f-5b94ecc8e765 | DHCP agent         | controller | nova              | :-)   | True           | neutron-dhcp-agent        |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+

9. Install and Configure cinder

(1) Install the management components on the controller node

1. Create the database and grant privileges

CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'CINDER_DBPASS';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'CINDER_DBPASS';
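
A quick check that the grants work, using the CINDER_DBPASS set above (this verification is an addition):

[root@controller ~]# mysql -h controller -u cinder -pCINDER_DBPASS -e 'show databases;'   ### the cinder database should be listed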

2. Create the service user in keystone and assign the admin role

[root@controller ~]# openstack user create --domain default --password CINDER_PASS cinder
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | 917bdd3127104dcebd75c613a13045a4 |
| enabled   | True                             |
| id        | b7817a23bc3945bb991ffa744c3698d2 |
| name      | cinder                           |
+-----------+----------------------------------+
[root@controller ~]# openstack role add --project service --user cinder admin

3. Create the services in keystone and register the API endpoints

[root@controller ~]# openstack service create --name cinder --description "OpenStack Block Storage" volume
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | c2e4aac098cc4197871fe4baec73c5cf |
| name        | cinder                           |
| type        | volume                           |
+-------------+----------------------------------+
[root@controller ~]# openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | 49e8cbb4ff8848aaad9ff4c367802b8a |
| name        | cinderv2                         |
| type        | volumev2                         |
+-------------+----------------------------------+

[root@controller ~]# openstack endpoint create --region RegionOne volume public http://controller:8776/v1/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field        | Value                                   |
+--------------+-----------------------------------------+
| enabled      | True                                    |
| id           | 4aad90024ed54f9e8a321a1387a5d718        |
| interface    | public                                  |
| region       | RegionOne                               |
| region_id    | RegionOne                               |
| service_id   | c2e4aac098cc4197871fe4baec73c5cf        |
| service_name | cinder                                  |
| service_type | volume                                  |
| url          | http://controller:8776/v1/%(tenant_id)s |
+--------------+-----------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne volume internal http://controller:8776/v1/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field        | Value                                   |
+--------------+-----------------------------------------+
| enabled      | True                                    |
| id           | a8c8d1a79a43469bb921774562c68f6c        |
| interface    | internal                                |
| region       | RegionOne                               |
| region_id    | RegionOne                               |
| service_id   | c2e4aac098cc4197871fe4baec73c5cf        |
| service_name | cinder                                  |
| service_type | volume                                  |
| url          | http://controller:8776/v1/%(tenant_id)s |
+--------------+-----------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne volume admin http://controller:8776/v1/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field        | Value                                   |
+--------------+-----------------------------------------+
| enabled      | True                                    |
| id           | 210897960a4a46b68baa8e3438545d1a        |
| interface    | admin                                   |
| region       | RegionOne                               |
| region_id    | RegionOne                               |
| service_id   | c2e4aac098cc4197871fe4baec73c5cf        |
| service_name | cinder                                  |
| service_type | volume                                  |
| url          | http://controller:8776/v1/%(tenant_id)s |
+--------------+-----------------------------------------+

[root@controller ~]# openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field        | Value                                   |
+--------------+-----------------------------------------+
| enabled      | True                                    |
| id           | 38d1c767995a41d38d62d67a2dace19d        |
| interface    | public                                  |
| region       | RegionOne                               |
| region_id    | RegionOne                               |
| service_id   | 49e8cbb4ff8848aaad9ff4c367802b8a        |
| service_name | cinderv2                                |
| service_type | volumev2                                |
| url          | http://controller:8776/v2/%(tenant_id)s |
+--------------+-----------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field        | Value                                   |
+--------------+-----------------------------------------+
| enabled      | True                                    |
| id           | 40a199028925404ab128717794e9708b        |
| interface    | internal                                |
| region       | RegionOne                               |
| region_id    | RegionOne                               |
| service_id   | 49e8cbb4ff8848aaad9ff4c367802b8a        |
| service_name | cinderv2                                |
| service_type | volumev2                                |
| url          | http://controller:8776/v2/%(tenant_id)s |
+--------------+-----------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field        | Value                                   |
+--------------+-----------------------------------------+
| enabled      | True                                    |
| id           | 0b1f16407152495c9fd44b42a0335d21        |
| interface    | admin                                   |
| region       | RegionOne                               |
| region_id    | RegionOne                               |
| service_id   | 49e8cbb4ff8848aaad9ff4c367802b8a        |
| service_name | cinderv2                                |
| service_type | volumev2                                |
| url          | http://controller:8776/v2/%(tenant_id)s |
+--------------+-----------------------------------------+
[root@controller ~]# 
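
Optionally confirm that all six endpoints were registered (an extra check, not in the original):

[root@controller ~]# openstack endpoint list | grep cinder   ### expect three v1 and three v2 entries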

4. Install the cinder package

[root@controller ~]# yum install -y openstack-cinder

5. Modify the configuration file

  cp /etc/cinder/cinder.conf{,.bak}
  grep -Ev '^$|#' /etc/cinder/cinder.conf.bak >/etc/cinder/cinder.conf

openstack-config --set /etc/cinder/cinder.conf  DEFAULT  rpc_backend  rabbit
openstack-config --set /etc/cinder/cinder.conf  DEFAULT  auth_strategy  keystone
openstack-config --set /etc/cinder/cinder.conf  DEFAULT  my_ip  10.0.0.11
openstack-config --set /etc/cinder/cinder.conf  database connection  mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
openstack-config --set /etc/cinder/cinder.conf  keystone_authtoken  auth_uri  http://controller:5000
openstack-config --set /etc/cinder/cinder.conf  keystone_authtoken  auth_url  http://controller:35357
openstack-config --set /etc/cinder/cinder.conf  keystone_authtoken  memcached_servers  controller:11211
openstack-config --set /etc/cinder/cinder.conf  keystone_authtoken  auth_type  password
openstack-config --set /etc/cinder/cinder.conf  keystone_authtoken  project_domain_name  default
openstack-config --set /etc/cinder/cinder.conf  keystone_authtoken  user_domain_name  default
openstack-config --set /etc/cinder/cinder.conf  keystone_authtoken  project_name  service
openstack-config --set /etc/cinder/cinder.conf  keystone_authtoken  username  cinder
openstack-config --set /etc/cinder/cinder.conf  keystone_authtoken  password  CINDER_PASS
openstack-config --set /etc/cinder/cinder.conf  oslo_concurrency  lock_path  /var/lib/cinder/tmp
openstack-config --set /etc/cinder/cinder.conf  oslo_messaging_rabbit  rabbit_host  controller
openstack-config --set /etc/cinder/cinder.conf  oslo_messaging_rabbit  rabbit_userid  openstack
openstack-config --set /etc/cinder/cinder.conf  oslo_messaging_rabbit  rabbit_password  RABBIT_PASS

6. Sync the database

[root@controller ~]# su -s /bin/sh -c "cinder-manage db sync" cinder
Option "logdir" from group "DEFAULT" is deprecated. Use option "log-dir" from group "DEFAULT".
2020-01-11 16:09:45.109 6959 WARNING py.warnings [-] /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:241: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning

2020-01-11 16:09:45.241 6959 INFO migrate.versioning.api [-] 0 -> 1... 
2020-01-11 16:09:45.372 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:45.373 6959 INFO migrate.versioning.api [-] 1 -> 2... 
2020-01-11 16:09:45.403 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:45.403 6959 INFO migrate.versioning.api [-] 2 -> 3... 
2020-01-11 16:09:45.430 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:45.430 6959 INFO migrate.versioning.api [-] 3 -> 4... 
2020-01-11 16:09:45.547 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:45.547 6959 INFO migrate.versioning.api [-] 4 -> 5... 
2020-01-11 16:09:45.563 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:45.563 6959 INFO migrate.versioning.api [-] 5 -> 6... 
2020-01-11 16:09:45.579 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:45.579 6959 INFO migrate.versioning.api [-] 6 -> 7... 
2020-01-11 16:09:45.598 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:45.598 6959 INFO migrate.versioning.api [-] 7 -> 8... 
2020-01-11 16:09:45.609 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:45.609 6959 INFO migrate.versioning.api [-] 8 -> 9... 
2020-01-11 16:09:45.629 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:45.629 6959 INFO migrate.versioning.api [-] 9 -> 10... 
2020-01-11 16:09:45.644 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:45.644 6959 INFO migrate.versioning.api [-] 10 -> 11... 
2020-01-11 16:09:45.666 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:45.666 6959 INFO migrate.versioning.api [-] 11 -> 12... 
2020-01-11 16:09:45.682 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:45.683 6959 INFO migrate.versioning.api [-] 12 -> 13... 
2020-01-11 16:09:45.740 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:45.740 6959 INFO migrate.versioning.api [-] 13 -> 14... 
2020-01-11 16:09:45.780 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:45.780 6959 INFO migrate.versioning.api [-] 14 -> 15... 
2020-01-11 16:09:45.795 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:45.795 6959 INFO migrate.versioning.api [-] 15 -> 16... 
2020-01-11 16:09:45.817 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:45.817 6959 INFO migrate.versioning.api [-] 16 -> 17... 
2020-01-11 16:09:45.876 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:45.876 6959 INFO migrate.versioning.api [-] 17 -> 18... 
2020-01-11 16:09:45.903 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:45.904 6959 INFO migrate.versioning.api [-] 18 -> 19... 
2020-01-11 16:09:45.929 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:45.929 6959 INFO migrate.versioning.api [-] 19 -> 20... 
2020-01-11 16:09:45.943 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:45.943 6959 INFO migrate.versioning.api [-] 20 -> 21... 
2020-01-11 16:09:45.987 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:45.987 6959 INFO migrate.versioning.api [-] 21 -> 22... 
2020-01-11 16:09:46.001 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:46.001 6959 INFO migrate.versioning.api [-] 22 -> 23... 
2020-01-11 16:09:46.014 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:46.014 6959 INFO migrate.versioning.api [-] 23 -> 24... 
2020-01-11 16:09:46.046 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:46.046 6959 INFO migrate.versioning.api [-] 24 -> 25... 
2020-01-11 16:09:46.132 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:46.133 6959 INFO migrate.versioning.api [-] 25 -> 26... 
2020-01-11 16:09:46.143 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:46.143 6959 INFO migrate.versioning.api [-] 26 -> 27... 
2020-01-11 16:09:46.162 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:46.162 6959 INFO migrate.versioning.api [-] 27 -> 28... 
2020-01-11 16:09:46.167 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:46.168 6959 INFO migrate.versioning.api [-] 28 -> 29... 
2020-01-11 16:09:46.173 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:46.173 6959 INFO migrate.versioning.api [-] 29 -> 30... 
2020-01-11 16:09:46.178 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:46.178 6959 INFO migrate.versioning.api [-] 30 -> 31... 
2020-01-11 16:09:46.183 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:46.184 6959 INFO migrate.versioning.api [-] 31 -> 32... 
2020-01-11 16:09:46.207 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:46.208 6959 INFO migrate.versioning.api [-] 32 -> 33... 
2020-01-11 16:09:46.247 6959 WARNING py.warnings [-] /usr/lib64/python2.7/site-packages/sqlalchemy/sql/schema.py:2999: SAWarning: Table 'encryption' specifies columns 'volume_type_id' as primary_key=True, not matching locally specified columns 'encryption_id'; setting the current primary key columns to 'encryption_id'. This warning may become an exception in a future release
  ", ".join("'%s'" % c.name for c in self.columns)

2020-01-11 16:09:46.269 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:46.269 6959 INFO migrate.versioning.api [-] 33 -> 34... 
2020-01-11 16:09:46.289 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:46.289 6959 INFO migrate.versioning.api [-] 34 -> 35... 
2020-01-11 16:09:46.309 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:46.309 6959 INFO migrate.versioning.api [-] 35 -> 36... 
2020-01-11 16:09:46.336 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:46.337 6959 INFO migrate.versioning.api [-] 36 -> 37... 
2020-01-11 16:09:46.351 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:46.352 6959 INFO migrate.versioning.api [-] 37 -> 38... 
2020-01-11 16:09:46.366 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:46.366 6959 INFO migrate.versioning.api [-] 38 -> 39... 
2020-01-11 16:09:46.382 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:46.383 6959 INFO migrate.versioning.api [-] 39 -> 40... 
2020-01-11 16:09:46.445 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:46.445 6959 INFO migrate.versioning.api [-] 40 -> 41... 
2020-01-11 16:09:46.456 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:46.457 6959 INFO migrate.versioning.api [-] 41 -> 42... 
2020-01-11 16:09:46.461 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:46.461 6959 INFO migrate.versioning.api [-] 42 -> 43... 
2020-01-11 16:09:46.467 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:46.467 6959 INFO migrate.versioning.api [-] 43 -> 44... 
2020-01-11 16:09:46.471 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:46.472 6959 INFO migrate.versioning.api [-] 44 -> 45... 
2020-01-11 16:09:46.478 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:46.479 6959 INFO migrate.versioning.api [-] 45 -> 46... 
2020-01-11 16:09:46.483 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:46.483 6959 INFO migrate.versioning.api [-] 46 -> 47... 
2020-01-11 16:09:46.495 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:46.495 6959 INFO migrate.versioning.api [-] 47 -> 48... 
2020-01-11 16:09:46.509 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:46.509 6959 INFO migrate.versioning.api [-] 48 -> 49... 
2020-01-11 16:09:46.533 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:46.534 6959 INFO migrate.versioning.api [-] 49 -> 50... 
2020-01-11 16:09:46.551 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:46.552 6959 INFO migrate.versioning.api [-] 50 -> 51... 
2020-01-11 16:09:46.565 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:46.565 6959 INFO migrate.versioning.api [-] 51 -> 52... 
2020-01-11 16:09:46.590 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:46.590 6959 INFO migrate.versioning.api [-] 52 -> 53... 
2020-01-11 16:09:46.692 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:46.692 6959 INFO migrate.versioning.api [-] 53 -> 54... 
2020-01-11 16:09:46.709 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:46.710 6959 INFO migrate.versioning.api [-] 54 -> 55... 
2020-01-11 16:09:46.733 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:46.733 6959 INFO migrate.versioning.api [-] 55 -> 56... 
2020-01-11 16:09:46.737 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:46.737 6959 INFO migrate.versioning.api [-] 56 -> 57... 
2020-01-11 16:09:46.741 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:46.742 6959 INFO migrate.versioning.api [-] 57 -> 58... 
2020-01-11 16:09:46.748 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:46.748 6959 INFO migrate.versioning.api [-] 58 -> 59... 
2020-01-11 16:09:46.753 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:46.753 6959 INFO migrate.versioning.api [-] 59 -> 60... 
2020-01-11 16:09:46.758 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:46.758 6959 INFO migrate.versioning.api [-] 60 -> 61... 
2020-01-11 16:09:46.780 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:46.780 6959 INFO migrate.versioning.api [-] 61 -> 62... 
2020-01-11 16:09:46.802 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:46.802 6959 INFO migrate.versioning.api [-] 62 -> 63... 
2020-01-11 16:09:46.806 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:46.807 6959 INFO migrate.versioning.api [-] 63 -> 64... 
2020-01-11 16:09:46.822 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:46.822 6959 INFO migrate.versioning.api [-] 64 -> 65... 
2020-01-11 16:09:46.853 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:46.853 6959 INFO migrate.versioning.api [-] 65 -> 66... 
2020-01-11 16:09:46.982 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:46.982 6959 INFO migrate.versioning.api [-] 66 -> 67... 
2020-01-11 16:09:46.995 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:46.996 6959 INFO migrate.versioning.api [-] 67 -> 68... 
2020-01-11 16:09:47.001 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:47.001 6959 INFO migrate.versioning.api [-] 68 -> 69... 
2020-01-11 16:09:47.007 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:47.007 6959 INFO migrate.versioning.api [-] 69 -> 70... 
2020-01-11 16:09:47.012 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:47.012 6959 INFO migrate.versioning.api [-] 70 -> 71... 
2020-01-11 16:09:47.016 6959 INFO migrate.versioning.api [-] done
2020-01-11 16:09:47.016 6959 INFO migrate.versioning.api [-] 71 -> 72... 
2020-01-11 16:09:47.021 6959 INFO migrate.versioning.api [-] done

7. Edit the /etc/nova/nova.conf configuration file

openstack-config --set /etc/nova/nova.conf cinder os_region_name RegionOne
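This is exactly what the command writes into nova.conf; after it runs, the [cinder] section should contain:

[cinder]
os_region_name = RegionOne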

Restart the nova service
[root@controller ~]# systemctl restart openstack-nova-api.service

8. Start the cinder services

[root@controller ~]# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
[root@controller ~]# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service

Verify the status

[root@controller ~]# cinder service-list
+------------------+------------+------+---------+-------+----------------------------+-----------------+
|      Binary      |    Host    | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller | nova | enabled |   up  | 2020-01-11T08:19:36.000000 |        -        |
+------------------+------------+------+---------+-------+----------------------------+-----------------+

(2) Install the storage node

In this example, the storage node is installed on the same machine as a compute node (compute2).

1. Prerequisite: install and start lvm2

[root@compute2 ~]# yum install -y lvm2
[root@compute2 ~]# systemctl enable lvm2-lvmetad.service
[root@compute2 ~]# systemctl start lvm2-lvmetad.service

2. Add two disks to compute2 and rescan the SCSI bus so the system detects them

[root@compute2 ~]# ll /sys/class/scsi_host/
total 0
lrwxrwxrwx 1 root root 0 Jan 11 14:47 host0 -> ../../devices/pci0000:00/0000:00:07.1/ata1/host0/scsi_host/host0
lrwxrwxrwx 1 root root 0 Jan 11 14:47 host1 -> ../../devices/pci0000:00/0000:00:07.1/ata2/host1/scsi_host/host1
lrwxrwxrwx 1 root root 0 Jan 11 14:47 host2 -> ../../devices/pci0000:00/0000:00:10.0/host2/scsi_host/host2

[root@compute2 ~]# echo "- - -" > /sys/class/scsi_host/host0/scan
[root@compute2 ~]# echo "- - -" > /sys/class/scsi_host/host1/scan 
[root@compute2 ~]# echo "- - -" > /sys/class/scsi_host/host2/scan
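If the disks were hot-added, a quick check that they are now visible (assuming they enumerate as sdb and sdc, as used below):

[root@compute2 ~]# lsblk    ##the two new disks should appear as sdb and sdc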

3. Create the PVs and VGs

[root@compute2 ~]# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created.
[root@compute2 ~]# pvcreate /dev/sdc
  Physical volume "/dev/sdc" successfully created.
[root@compute2 ~]# vgcreate cinder-ssd /dev/sdb
  Volume group "cinder-ssd" successfully created
[root@compute2 ~]# vgcreate cinder-sata /dev/sdc
  Volume group "cinder-sata" successfully created
[root@compute2 ~]# 

4. Edit the LVM configuration file /etc/lvm/lvm.conf and insert a line below line 130:

filter = [ "a/sdb/", "a/sdc/","r/.*/" ]
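As a one-liner, assuming the devices section sits at line 130 as in the stock CentOS 7 file (verify first with grep -n, since line numbers vary by version):

[root@compute2 ~]# sed -i '130a filter = [ "a/sdb/", "a/sdc/", "r/.*/" ]' /etc/lvm/lvm.conf

The a/.../ entries accept sdb and sdc, and the trailing r/.*/ rejects every other device. Note that this also rejects sda: because the OS disk here uses LVM (the centos VG), lvs will print "rejected by a filter" warnings like the ones shown later; adding "a/sda/" to the filter would silence them.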

5. With LVM configured, install the Cinder packages on the storage node (as sketched below), then edit /etc/cinder/cinder.conf:
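A sketch of the package installation, following the upstream install guide for this release (package names assumed):

[root@compute2 ~]# yum install -y openstack-cinder targetcli python-keystone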

[root@compute2 ~]# cat /etc/cinder/cinder.conf
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 10.0.0.31
glance_api_servers = http://controller:9292
enabled_backends = ssd,sata
[BACKEND]
[BRCD_FABRIC_EXAMPLE]
[CISCO_FABRIC_EXAMPLE]
[COORDINATION]
[FC-ZONE-MANAGER]
[KEYMGR]
[cors]
[cors.subdomain]
[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = CINDER_PASS
[matchmaker_redis]
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[ssl]
[ssd]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-ssd
iscsi_protocol = iscsi
iscsi_helper = lioadm
volume_backend_name = ssd
[sata]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-sata
iscsi_protocol = iscsi
iscsi_helper = lioadm
volume_backend_name = sata
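The two backend sections can also be written non-interactively with openstack-config, in the same style used earlier (a sketch with the same values):

openstack-config --set /etc/cinder/cinder.conf DEFAULT enabled_backends ssd,sata
openstack-config --set /etc/cinder/cinder.conf ssd volume_driver cinder.volume.drivers.lvm.LVMVolumeDriver
openstack-config --set /etc/cinder/cinder.conf ssd volume_group cinder-ssd
openstack-config --set /etc/cinder/cinder.conf ssd iscsi_protocol iscsi
openstack-config --set /etc/cinder/cinder.conf ssd iscsi_helper lioadm
openstack-config --set /etc/cinder/cinder.conf ssd volume_backend_name ssd
##repeat for the [sata] section with volume_group cinder-sata and volume_backend_name sata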

6. Start the services

[root@compute2 ~]# systemctl enable openstack-cinder-volume.service target.service
[root@compute2 ~]# systemctl start openstack-cinder-volume.service target.service

7. Check the service status from the controller node

[root@controller ~]# cinder service-list
+------------------+---------------+------+---------+-------+----------------------------+-----------------+
|      Binary      |      Host     | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+---------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler |   controller  | nova | enabled |   up  | 2020-01-12T07:38:45.000000 |        -        |
|  cinder-volume   | compute2@sata | nova | enabled |   up  | 2020-01-12T07:38:46.000000 |        -        |
|  cinder-volume   |  compute2@ssd | nova | enabled |   up  | 2020-01-12T07:38:46.000000 |        -        |
+------------------+---------------+------+---------+-------+----------------------------+-----------------+

8. Create a test volume via the web UI

List the logical volumes on the compute node configured with the LVM backend:

[root@compute2 ~]# lvs
  WARNING: Device for PV rGMvFc-rXVA-mw7B-vy7k-Bb2G-heoa-bWyVrP not found or rejected by a filter.
  WARNING: Device for PV rGMvFc-rXVA-mw7B-vy7k-Bb2G-heoa-bWyVrP not found or rejected by a filter.
  Couldn't find device with uuid rGMvFc-rXVA-mw7B-vy7k-Bb2G-heoa-bWyVrP.
  WARNING: Couldn't find all devices for LV centos/swap while checking used and assumed devices.
  WARNING: Couldn't find all devices for LV centos/home while checking used and assumed devices.
  WARNING: Couldn't find all devices for LV centos/root while checking used and assumed devices.
  LV                                          VG         Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  home                                        centos     -wi-ao--p- <26.80g                                                    
  root                                        centos     -wi-ao--p-  50.00g                                                    
  swap                                        centos     -wi-ao--p-   2.00g                                                    
  volume-4568d9ed-bd5b-4746-a1c5-c6a6e76b6714 cinder-ssd -wi-a-----   2.00g                                                    
  volume-5cc415ac-6324-49d3-b9ae-c161d803a664 cinder-ssd -wi-a-----   3.00g

9. To let users control where new volumes land, create volume types so that each volume can be scheduled onto the backend it is meant for

In the web UI, go to Admin - Volumes - Volume Types (a CLI sketch follows).
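The CLI equivalent (a sketch; the type names are chosen to match the volume_backend_name values configured above):

[root@controller ~]# cinder type-create ssd
[root@controller ~]# cinder type-key ssd set volume_backend_name=ssd
[root@controller ~]# cinder type-create sata
[root@controller ~]# cinder type-key sata set volume_backend_name=sata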

Create another volume

Check the logical volumes on the storage node

[root@compute2 ~]# lvs
  WARNING: Device for PV rGMvFc-rXVA-mw7B-vy7k-Bb2G-heoa-bWyVrP not found or rejected by a filter.
  WARNING: Device for PV rGMvFc-rXVA-mw7B-vy7k-Bb2G-heoa-bWyVrP not found or rejected by a filter.
  Couldn't find device with uuid rGMvFc-rXVA-mw7B-vy7k-Bb2G-heoa-bWyVrP.
  WARNING: Couldn't find all devices for LV centos/swap while checking used and assumed devices.
  WARNING: Couldn't find all devices for LV centos/home while checking used and assumed devices.
  WARNING: Couldn't find all devices for LV centos/root while checking used and assumed devices.
  LV                                          VG          Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  home                                        centos      -wi-ao--p- <26.80g                                                    
  root                                        centos      -wi-ao--p-  50.00g                                                    
  swap                                        centos      -wi-ao--p-   2.00g                                                    
  volume-c521fd62-b564-4149-9454-73477b52f96f cinder-sata -wi-a-----   1.00g                                                    
  volume-36e22c79-e0d5-4c84-b0a6-85caa944685e cinder-ssd  -wi-a-----   2.00g     

Attach the volume to an instance

10. Add a flat network

Add a NIC to the VMs and set it to a LAN segment

1. Edit the network configuration

[root@controller network-scripts]# cat ifcfg-ens37
TYPE=Ethernet
BOOTPROTO=none
NAME=ens37
DEVICE=ens37
ONBOOT=yes
IPADDR=172.16.0.11
NETMASK=255.255.255.0
Note:
After configuring the new NIC, do not restart the whole network service — that would disrupt the other interfaces. Instead, bring just the new interface down and up:
[root@controller network-scripts]# ifdown ens37
[root@controller network-scripts]# ifup ens37

2. Modify the Neutron network configuration

Controller node

a. File /etc/neutron/plugins/ml2/ml2_conf.ini

[root@controller ~]# vi /etc/neutron/plugins/ml2/ml2_conf.ini
...
[ml2_type_flat]
flat_networks = provider,net172_16_0
...

b. File /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[root@controller ~]# vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini 
...
[linux_bridge]
physical_interface_mappings = provider:ens33,net172_16_0:ens37
...
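The same two edits expressed with openstack-config (a sketch, same values as above):

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks provider,net172_16_0
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings provider:ens33,net172_16_0:ens37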

After modifying the two files above, restart the neutron-server and neutron-linuxbridge-agent services

[root@controller ~]# systemctl restart neutron-server.service neutron-linuxbridge-agent.service

Compute nodes

Modify /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
...
physical_interface_mappings = provider:ens33,net172_16_0:ens37
...

Restart the neutron-linuxbridge-agent service

[root@compute2 ~]# systemctl restart neutron-linuxbridge-agent.service

Apply the same change on the other compute nodes

3. Create the new network

Create the network:

[root@controller ~]# neutron net-create --shared --provider:physical_network net172_16_0 --provider:network_type flat net172_16_x
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| availability_zone_hints   |                                      |
| availability_zones        |                                      |
| created_at                | 2020-01-13T03:12:36                  |
| description               |                                      |
| id                        | 87c526e6-8b10-4d97-9877-e3b16da806f8 |
| ipv4_address_scope        |                                      |
| ipv6_address_scope        |                                      |
| mtu                       | 1500                                 |
| name                      | net172_16_x                          |
| port_security_enabled     | True                                 |
| provider:network_type     | flat                                 |
| provider:physical_network | net172_16_0                          |
| provider:segmentation_id  |                                      |
| router:external           | False                                |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tags                      |                                      |
| tenant_id                 | e31a1b3d60ce47a8922f74a39c29f44e     |
| updated_at                | 2020-01-13T03:12:36                  |
+---------------------------+--------------------------------------+

Create a subnet on the network:

[root@controller ~]# neutron subnet-create --name net172_16_0 \
>  --allocation-pool start=172.16.0.101,end=172.16.0.250 \
>  --dns-nameserver 114.114.114.114 --gateway 172.16.0.2 \
>  net172_16_x 172.16.0.0/24
Created a new subnet:
+-------------------+--------------------------------------------------+
| Field             | Value                                            |
+-------------------+--------------------------------------------------+
| allocation_pools  | {"start": "172.16.0.101", "end": "172.16.0.250"} |
| cidr              | 172.16.0.0/24                                    |
| created_at        | 2020-01-13T03:19:51                              |
| description       |                                                  |
| dns_nameservers   | 114.114.114.114                                  |
| enable_dhcp       | True                                             |
| gateway_ip        | 172.16.0.2                                       |
| host_routes       |                                                  |
| id                | 40828b6e-bf33-48ef-8266-8f6e80833419             |
| ip_version        | 4                                                |
| ipv6_address_mode |                                                  |
| ipv6_ra_mode      |                                                  |
| name              | net172_16_0                                      |
| network_id        | 87c526e6-8b10-4d97-9877-e3b16da806f8             |
| subnetpool_id     |                                                  |
| tenant_id         | e31a1b3d60ce47a8922f74a39c29f44e                 |
| updated_at        | 2020-01-13T03:19:51                              |
+-------------------+--------------------------------------------------+
[root@controller ~]# 

11. Connect cinder-volume to backend storage

1. Install NFS

In this example, the NFS service is installed on 10.0.0.32

[root@compute3 ~]# yum install -y nfs-utils

Edit the exports file

[root@compute3 ~]# cat /etc/exports
/data 10.0.0.0/24(rw,async,no_root_squash,no_all_squash) 172.16.0.0/24(ro)

Create the directory

[root@compute3 ~]# mkdir /data

Start the services

[root@compute3 ~]# systemctl start rpcbind
[root@compute3 ~]# systemctl start nfs
[root@compute3 ~]# systemctl enable rpcbind
[root@compute3 ~]# systemctl enable nfs

From another node, verify that the NFS export list is visible

[root@compute2 ~]# showmount -e 172.16.0.32
Export list for 172.16.0.32:
/data 172.16.0.0/24,10.0.0.0/24

NFS configuration is now complete

2. Connect Cinder to NFS

Configure on the storage node

Edit /etc/cinder/cinder.conf

[root@compute2 ~]# vi /etc/cinder/cinder.conf
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 10.0.0.31
glance_api_servers = http://controller:9292
enabled_backends = ssd,sata,nfs
[nfs]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config = /etc/cinder/nfs_shares
volume_backend_name = nfs

Manually create the /etc/cinder/nfs_shares mount configuration file

[root@compute2 ~]# vi /etc/cinder/nfs_shares
10.0.0.32:/data
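The cinder-volume service must be able to read this file; a common follow-up step (an assumption here, mirroring the upstream NFS backend guide) is to restrict it to the cinder group:

[root@compute2 ~]# chown root:cinder /etc/cinder/nfs_shares
[root@compute2 ~]# chmod 640 /etc/cinder/nfs_shares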

Restart the openstack-cinder-volume service

[root@compute2 ~]# systemctl restart openstack-cinder-volume.service

On the controller node, check that the nfs backend now appears in the service list

[root@controller ~]# cinder service-list
+------------------+---------------+------+---------+-------+----------------------------+-----------------+
|      Binary      |      Host     | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+---------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler |   controller  | nova | enabled |   up  | 2020-01-13T06:42:41.000000 |        -        |
|  cinder-volume   |  compute2@nfs | nova | enabled |   up  | 2020-01-13T06:42:46.000000 |        -        |
|  cinder-volume   | compute2@sata | nova | enabled |   up  | 2020-01-13T06:42:45.000000 |        -        |
|  cinder-volume   |  compute2@ssd | nova | enabled |   up  | 2020-01-13T06:42:45.000000 |        -        |
+------------------+---------------+------+---------+-------+----------------------------+-----------------+

3. Add a volume type via the web UI and set its extra-spec key/value (a CLI sketch follows)
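A CLI sketch, matching the backend name configured above:

[root@controller ~]# cinder type-create nfs
[root@controller ~]# cinder type-key nfs set volume_backend_name=nfs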

12. Configure layer-3 networking

1. Delete all instances on the platform

2. Add a NIC on a 172.16.1.0/24 LAN segment to the VM on every node

Configure the new NIC's config file; do the same on the compute nodes

[root@controller network-scripts]# cp ifcfg-ens33 ifcfg-ens38
[root@controller network-scripts]# vi ifcfg-ens38
[root@controller network-scripts]# cat ifcfg-ens38
TYPE=Ethernet
BOOTPROTO=none
NAME=ens38
DEVICE=ens38
ONBOOT=yes
IPADDR=172.16.1.11
NETMASK=255.255.255.0

3. Modify the controller node configuration

a. Edit /etc/neutron/neutron.conf

[DEFAULT]
core_plugin = ml2
service_plugins =

Change it to:

[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True

b. Edit /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge

Change it to:

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population

And add:

[ml2_type_vxlan]
vni_ranges = 1:1000000

c. Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[vxlan]
enable_vxlan = False

Change it to:

[vxlan]
enable_vxlan = True
local_ip = 172.16.1.11
l2_population = True

d. Edit /etc/neutron/l3_agent.ini, adding the following under [DEFAULT]

[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
external_network_bridge =

Restart the Neutron services, then start neutron-l3-agent

systemctl restart neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service

Start and enable the neutron-l3-agent service

systemctl start neutron-l3-agent.service
systemctl enable neutron-l3-agent.service
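To confirm the agent registered, list the agents on the controller; the L3 agent row should show alive (output omitted here):

[root@controller ~]# neutron agent-list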

4. Modify the compute node configuration

Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[vxlan]
enable_vxlan = False

Change it to:

[vxlan]
enable_vxlan = True
local_ip = 172.16.1.31
l2_population = True

Restart the service

systemctl restart neutron-linuxbridge-agent.service

Apply the same change on the other compute nodes, remembering to set each node's own tunnel IP (local_ip)

5. In the admin panel, edit the 10.0.0.x network and mark it as an external network

6. Create the network

7. Enable the router feature in the web UI

Edit /etc/openstack-dashboard/local_settings

OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': True,
    ... 

Restart the httpd service

systemctl restart httpd

Refresh the dashboard page

8. Create a router; an external network must be selected

The network topology view now shows a router attached to the external network; click the router and add an interface (a CLI sketch follows)
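The same steps from the CLI (a sketch; router1 is an illustrative name, and the external/private network names must match your own):

[root@controller ~]# neutron router-create router1
[root@controller ~]# neutron router-gateway-set router1 <external-net>
[root@controller ~]# neutron router-interface-add router1 <private-subnet>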

Launch an instance to test the private network

Open the console

Instances on the private network can reach the external network through the router

To allow access from the external network to a machine on the internal network, associate a floating IP with the instance (a CLI sketch follows)
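A sketch of the floating-IP association (names illustrative):

[root@controller ~]# neutron floatingip-create <external-net>
[root@controller ~]# openstack server add floating ip <instance> <floating-ip>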

 

You can then connect to that address with an SSH client such as Xshell to log in to the VM on the internal network

 
