7.1 Controller node configuration


0. Configure the OpenStack release yum repository:

yum install centos-release-openstack-rocky

1. Install the OpenStack client:

yum install python-openstackclient

yum install openstack-selinux

# openstack-selinux manages the SELinux security policies for OpenStack services;

2. Install the database:

Most OpenStack services use an SQL database to store information. The database usually runs on the controller node.

(1) Install:

yum install mariadb mariadb-server python2-PyMySQL

(2) Configure the service to use the management IP address of the controller node:

vim /etc/my.cnf.d/openstack.cnf

[mysqld]

bind-address = controller


default-storage-engine = innodb

innodb_file_per_table=1

max_connections = 4096

collation-server = utf8_general_ci

character-set-server = utf8

(3) Enable the service to start at boot:

systemctl enable mariadb.service

systemctl start mariadb.service

(4) Run the script to secure the database service:

mysql_secure_installation

# Note: the database root password is 123456;
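
As an optional sanity check (not part of the original steps), the root password set above can be verified and the server settings inspected with the standard mysql client; the password 123456 is the one noted above:

mysql -uroot -p123456 -e "SHOW DATABASES;"
mysql -uroot -p123456 -e "SHOW VARIABLES LIKE 'character_set_server';"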


3. Install the message queue:

OpenStack uses a message queue to coordinate operations and status information among services. The message queue service usually runs on the controller node.

(1) Install:

yum install rabbitmq-server

(2) Enable the service to start at boot:

systemctl enable rabbitmq-server.service

systemctl start rabbitmq-server.service

(3) Add the openstack user:

rabbitmqctl add_user openstack openstack

(4) Permit configuration, write, and read access for the openstack user:

rabbitmqctl set_permissions openstack ".*" ".*" ".*"
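
Optionally, the user and its permissions can be double-checked with the stock rabbitmqctl sub-commands below; the openstack user should appear with ".*" for configure, write, and read:

rabbitmqctl list_users
rabbitmqctl list_permissions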

(5) Enable the web management interface for the message queue:

rabbitmq-plugins enable rabbitmq_management

# enables the RabbitMQ web management plugin;

lsof -i:15672

# check the port that the RabbitMQ management interface listens on;

http://10.0.0.11:15672/

Log in with user: guest, password: guest


4. Install memcached:

The Identity service authentication mechanism used by the services relies on Memcached to cache tokens. The memcached service usually runs on the controller node.

For production deployments, we recommend enabling a combination of firewalling, authentication, and encryption to secure it.

(1) Install:

yum install memcached python-memcached

(2) Configure the service to use the management IP address of the controller node:

vim /etc/sysconfig/memcached

PORT="11211"

USER="memcached"

MAXCONN="1024"

CACHESIZE="64"

OPTIONS="-l 127.0.0.1,::1,controller"

(3) Enable the service to start at boot:

systemctl enable memcached.service

systemctl start memcached.service
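
As a quick optional check that memcached is listening on port 11211 on the configured addresses (memcached-tool ships with the memcached package on CentOS; skip that line if it is not present):

ss -tnlp | grep 11211
memcached-tool controller:11211 stats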


5. Install keystone:

The OpenStack Identity service provides a single point of integration for managing authentication, authorization, and a catalog of services.

The Identity service is typically the first service a user interacts with. Once authenticated, an end user can use their identity to access other OpenStack services.

Likewise, other OpenStack services leverage the Identity service to verify that users are who they claim to be and to discover the other services in the deployment.

The Identity service can also integrate with external user management systems such as LDAP.

Users and services can locate other services using the service catalog, which is managed by the Identity service. As the name implies, the service catalog is a collection of the services available in an OpenStack deployment.

Each service can have one or more endpoints, and each endpoint is one of three types: admin, internal, or public. In a production environment, the different endpoint types may, for security reasons, reside on separate networks exposed to different classes of users. For example, the public API network might be visible from the Internet so customers can manage their clouds.

The admin API network might be restricted to operators within the organization that manages the cloud infrastructure, and the internal API network might be restricted to the hosts that run OpenStack services.

In addition, OpenStack supports multiple regions for scalability; this deployment uses a single region, RegionOne. Together, the regions, services, and endpoints created within the Identity service make up the service catalog for the deployment.

Each OpenStack service in the deployment needs a service entry with its corresponding endpoints stored in the Identity service. This can be done after the Identity service has been installed and configured.

The Identity service contains the following components:

Server:

A centralized server provides authentication and authorization services using a RESTful interface.

Drivers:

Drivers, or service back ends, are integrated into the central server. They are used to access identity information in repositories external to OpenStack, and may already exist in the infrastructure where OpenStack is deployed (for example, an SQL database or an LDAP server).

Modules:

Middleware modules run in the address space of the OpenStack components that use the Identity service. These modules intercept service requests, extract user credentials, and send them to the central server for authorization. The integration between the middleware modules and OpenStack components uses the Python Web Server Gateway Interface.

(1) Prerequisites:

1) Create the keystone database:

mysql -uroot -p123456

MariaDB [(none)]>CREATE DATABASE keystone;

2) Grant proper access to the keystone database:

MariaDB [(none)]> grant all on keystone.* to 'keystone'@'localhost' identified by 'keystone';

MariaDB [(none)]> grant all on keystone.* to 'keystone'@'%' identified by 'keystone';

3) Create the databases and grants for the other services here as well:

MariaDB [(none)]> create database glance;


MariaDB [(none)]> create database nova;

MariaDB [(none)]> create database nova_api;

MariaDB [(none)]>CREATE DATABASE nova_cell0;

MariaDB [(none)]>CREATE DATABASE placement;


MariaDB [(none)]> create database neutron;


MariaDB [(none)]> create database cinder;


MariaDB [(none)]> grant all on glance.* to 'glance'@'localhost' identified by 'glance';

MariaDB [(none)]> grant all on glance.* to 'glance'@'%' identified by 'glance';


MariaDB [(none)]> grant all on nova.* to 'nova'@'localhost' identified by 'nova';

MariaDB [(none)]> grant all on nova.* to 'nova'@'%' identified by 'nova';

MariaDB [(none)]> grant all on nova_api.* to 'nova'@'localhost' identified by 'nova';

MariaDB [(none)]> grant all on nova_api.* to 'nova'@'%' identified by 'nova';

MariaDB [(none)]>GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';

MariaDB [(none)]>GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'nova';

MariaDB [(none)]>GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'placement';

MariaDB [(none)]>GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'placement';


MariaDB [(none)]> grant all on neutron.* to 'neutron'@'localhost' identified by 'neutron';

MariaDB [(none)]> grant all on neutron.* to 'neutron'@'%' identified by 'neutron';


MariaDB [(none)]> grant all on cinder.* to 'cinder'@'localhost' identified by 'cinder';

MariaDB [(none)]> grant all on cinder.* to 'cinder'@'%' identified by 'cinder';

(2) Install:

yum install openstack-keystone httpd mod_wsgi

# Note: installing keystone creates a keystone system user;

(3) Edit the configuration file:

vim /etc/keystone/keystone.conf

[database]

connection = mysql+pymysql://keystone:keystone@controller/keystone

[token]  # token settings;

provider = fernet

driver = memcache

[memcache]  # memcache settings;

servers = controller:11211
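
Optionally, the same grep trick this guide uses later for glance can be applied here to review only the active (non-comment) settings:

grep '^[a-z]' /etc/keystone/keystone.conf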

(4) Populate the Identity service database:

su -s /bin/sh -c "keystone-manage db_sync" keystone

# run the sync as the keystone user rather than root, otherwise file permission problems are likely;

mysql -h 10.0.0.11 -ukeystone -pkeystone -e "use keystone;show tables;"

# verify that the keystone database has been populated with tables;

(5) Initialize the Fernet key repositories:

keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

# Note: these two commands create the key repositories (fernet-keys and credential-keys) under /etc/keystone/;

(6) Bootstrap the keystone admin identity:

Bootstrapping creates an admin project, role, and user; the admin user is a member of the admin project and holds the admin role.

keystone-manage bootstrap --bootstrap-password admin \

--bootstrap-admin-url http://controller:5000/v3/ \

--bootstrap-internal-url http://controller:5000/v3/ \

--bootstrap-public-url http://controller:5000/v3/ \

--bootstrap-region-id RegionOne

mysql -h 10.0.0.11 -ukeystone -pkeystone -e "use keystone;select * from endpoint\G;"

# verify that the bootstrap created the endpoints successfully;

(7) Configure the Apache service:

1) Edit Apache's httpd.conf:

vim /etc/httpd/conf/httpd.conf

ServerName controller:80

2) Create a symlink to the keystone WSGI configuration file so that httpd manages the keystone service:

ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

3) Enable httpd to start at boot:

systemctl enable httpd.service

systemctl start httpd.service
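
As an optional check before continuing, confirm that keystone is answering behind httpd on port 5000; a JSON version document in the response means the WSGI application is being served:

curl http://controller:5000/v3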

(8) Create OpenStack client environment scripts:

1) Create the admin user script:

mkdir -p /scripts

cd /scripts/

vim admin-openstack.sh

#!/bin/sh

export OS_PROJECT_DOMAIN_NAME=Default

export OS_USER_DOMAIN_NAME=Default

export OS_PROJECT_NAME=admin

export OS_USERNAME=admin

export OS_PASSWORD=admin

export OS_AUTH_URL=http://controller:5000/v3

export OS_IDENTITY_API_VERSION=3

export OS_IMAGE_API_VERSION=2

. /scripts/admin-openstack.sh

2) List users and related information:

openstack user list

+----------------------------------+-------+

| ID | Name |

+----------------------------------+-------+

| 76d29ec2124945cb94dda6be494dea1f | admin |

+----------------------------------+-------+

openstack role list

+----------------------------------+--------+

| ID | Name |

+----------------------------------+--------+

| 6d36f33fa1ae4a9a80938133a464f181 | reader |

| ac8442475c974108bc1ca6ee66bf1f66 | admin |

| ceca361cebbf4de09c82c413258405a4 | member |

+----------------------------------+--------+

openstack project list

+----------------------------------+-------+

| ID | Name |

+----------------------------------+-------+

| 52f289ecdd844a86aa8401c3c7d1de74 | admin |

+----------------------------------+-------+

openstack service list

+----------------------------------+----------+----------+

| ID | Name | Type |

+----------------------------------+----------+----------+

| cef0253674b34940993a492796b02fe9 | keystone | identity |

+----------------------------------+----------+----------+

openstack endpoint list

+----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------+

| ID | Region | Service Name | Service Type | Enabled | Interface | URL |

+----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------+

| 3d22c4e88764427ea5a72879c95085cd | RegionOne | keystone | identity | True | internal | http://controller:5000/v3/ |

| 79bcd31245044782aef8789839c4e7a3 | RegionOne | keystone | identity | True | public | http://controller:5000/v3/ |

| dad770f9fe84420c88f4b5cb35201082 | RegionOne | keystone | identity | True | admin | http://controller:5000/v3/ |

+----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------+

3) Verify the admin user:

openstack token issue

+------------+----------------------------------------------------------------------------+
| Field      | Value                                                                      |
+------------+----------------------------------------------------------------------------+
| expires    | 2019-03-28T08:12:11+0000                                                   |
| id         | gAAAAABcnHPLKywDYdpUKrCsVdq9G9F-2ZRRQla4tEpVC6e934cGOFCY9pR99PqYaQ-Cm4cr9g |
|            | v3gQ8wQTlm2Q0jiRMSCDBfkoLuJldqtc6yJQXDuCmHTzN3mqApvyVpJ8cPgZcQCUN1BrzgV7hJ |
|            | 761TMtyx3UykDmEzGACF43VoU4GSNbgyP5o                                        |
| project_id | 52f289ecdd844a86aa8401c3c7d1de74                                           |
| user_id    | 76d29ec2124945cb94dda6be494dea1f                                           |
+------------+----------------------------------------------------------------------------+

(9) Create a domain, projects, users, and roles, then verify:

1) Create a domain:

keystone already ships with a default domain named "default", so creating one is optional;

openstack domain create --description "An Example Domain" lc

2) Create the demo project, the demo user, and the user role, then grant the role to the demo user for demonstration purposes:

A. Create the demo project:

openstack project create --domain default --description "Demo Project" demo

B. Create the demo user:

openstack user create --domain default --password-prompt demo

User Password:  # the password is demo

Repeat User Password:

C. Create the user role:

openstack role create user

D. Add the demo user to the demo project with the user role:

openstack role add --project demo --user demo user
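
Optionally, the assignment can be confirmed with the role assignment listing sub-command of the openstack client:

openstack role assignment list --project demo --user demo --names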

3) Create the service project and a user for each service, then grant each user the admin role:

A. Create the service project:

openstack project create --domain default --description "Service Project" service

B. Create the service users and grant roles:

openstack user create --domain default --password-prompt glance

openstack role add --project service --user glance admin


openstack user create --domain default --password-prompt nova

openstack role add --project service --user nova admin

openstack user create --domain default --password-prompt placement

openstack role add --project service --user placement admin


openstack user create --domain default --password-prompt neutron

openstack role add --project service --user neutron admin


openstack user create --domain default --password-prompt cinder

openstack role add --project service --user cinder admin

# Note: each password is set to the name of the corresponding service;

# To delete a service-related object: openstack user/role/project/service delete <id>

# If a service entry is wrong, delete its endpoints first, then delete the service, and finally recreate the service;

# To review the objects: openstack user/role/project list

4) Verify the roles:

Example: the demo user is verified here; the other users can be verified in the same way:

A. Create the script:

cat /scripts/demo-openstack.sh

#!/bin/sh

export OS_PROJECT_DOMAIN_NAME=Default

export OS_USER_DOMAIN_NAME=Default

export OS_PROJECT_NAME=demo

# project the user belongs to;

export OS_USERNAME=demo

# user name;

export OS_PASSWORD=demo

# user password;

export OS_AUTH_URL=http://controller:5000/v3

# authentication URL and API version;

export OS_IDENTITY_API_VERSION=3

# Identity API version;

export OS_IMAGE_API_VERSION=2

# Image API version used by glance;

B. Verify:

. /scripts/demo-openstack.sh

openstack token issue

+------------+----------------------------------------------------------------------------+
| Field      | Value                                                                      |
+------------+----------------------------------------------------------------------------+
| expires    | 2019-03-28T08:19:27+0000                                                   |
| id         | gAAAAABcn24niTabG_OdoMIeEijpufqn1dKDXSm_fGyyOwVTBqNmAbdkZyAh_7xWSE9nSAALJP |
|            | pjthQ32ptEzIssqj7vTvdpMMHXfUD6L0JIr9vqxHA0brat1hq6ULHcL25oCnzbW4Ui20CfqCWD |
|            | j_9ZYOTFTpESPSY23-khOowOurDngXMHoCk                                        |
| project_id | 7c669159485646e08448dedeb506fa2c                                           |
| user_id    | 94c1b49ceb5a40e6b207a9f0a6af2833                                           |
+------------+----------------------------------------------------------------------------+

# Note: output like the above means the verification succeeded;


6. Install glance (the Image service that stores and manages the images used to create KVM virtual machines):

glance-api  # accepts image create, delete, and read requests; listens on port 9292 (the externally facing service)

By default, glance stores images under /var/lib/glance/images/;

glance-registry  # the image registry service; writes image metadata to the MySQL database and listens on port 9191;

(1) Prerequisites (steps 1 and 2 below were already completed while installing keystone):

1) Create the glance database and glance user in MariaDB and grant appropriate access;

2) Create the keystone service credentials for glance: a glance user added to the service project with the admin role;

3) Source the admin environment variables:

. /scripts/admin-openstack.sh

4) Create the glance service entity:

openstack service create --name glance --description "OpenStack Image" image

5) Create the glance API endpoints (used to access glance):

openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292

6) List the services and endpoints:

openstack service list

openstack endpoint list

(2) Install glance:

yum install openstack-glance -y

(3) Edit the glance configuration files:

1) Edit /etc/glance/glance-api.conf:

[database]

connection = mysql+pymysql://glance:glance@controller/glance

[keystone_authtoken]

www_authenticate_uri = http://controller:5000

auth_url = http://controller:5000

memcached_servers = controller:11211

auth_type = password

project_domain_name = Default

user_domain_name = Default

project_name = service

username = glance

password = glance


[paste_deploy]

flavor = keystone

# keystone authentication settings;


[glance_store]

stores = file,http

default_store = file

filesystem_store_datadir = /var/lib/glance/images/

# glance image storage path settings;

# Note: use grep '^[a-z]' /etc/glance/glance-api.conf to review the active (non-comment) settings;

2) Edit /etc/glance/glance-registry.conf:

[database]

connection = mysql+pymysql://glance:glance@controller/glance

[keystone_authtoken]

www_authenticate_uri = http://controller:5000

auth_url = http://controller:5000

memcached_servers = controller:11211

auth_type = password

project_domain_name = Default

user_domain_name = Default

project_name = service

username = glance

password = glance


[paste_deploy]

flavor = keystone

(4) Populate the database:

su -s /bin/sh -c "glance-manage db_sync" glance

# Note: warnings during this step can be ignored;

(5) Verify that the glance database was populated:

mysql -h controller -uglance -pglance -e "use glance;show tables"

(6) Start the glance api and registry services and enable them at boot:

systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start openstack-glance-api.service openstack-glance-registry.service

systemctl list-unit-files | grep enable | egrep "openstack-glance-api|openstack-glance-registry"

openstack-glance-api.service enabled

openstack-glance-registry.service enabled

# confirm that openstack-glance-api and openstack-glance-registry are enabled at boot;

(7) Verify the glance configuration:

1) Method 1:

openstack image list  # equivalently: glance image-list

# Note: this lists the glance images; if it completes without errors, the glance service was set up successfully;

2) Method 2:

A. Download the official test image (only about 13 MB):

mkdir -p /tools

cd /tools

wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img

B. Upload the image to glance:

openstack image create "cirros" \

--file cirros-0.3.5-x86_64-disk.img \

--disk-format qcow2 --container-format bare \

--public

C. List the glance images:

openstack image list

+--------------------------------------+--------+--------+

| ID | Name | Status |

+--------------------------------------+--------+--------+

| a036ec33-6df8-45ec-adbe-4b0ac189dc8c | cirros | active |

+--------------------------------------+--------+--------+

ls -l /var/lib/glance/images/

-rw-r----- 1 glance glance 13267968 Mar 28 17:46 a036ec33-6df8-45ec-adbe-4b0ac189dc8c

du -sh /var/lib/glance/images/

13M /var/lib/glance/images/
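
Optionally, the metadata recorded for the uploaded image (disk format, size, visibility, checksum) can be inspected with:

openstack image show cirros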


7. Install nova:

(1) Nova component overview:

The components that must already be configured are MySQL, keystone, and the message queue;

(2) Prerequisites (steps 1 and 2 below were already completed while installing keystone):

1) Create the nova, nova_api, nova_cell0, and placement databases and the nova and placement users in MariaDB, and grant appropriate access;

2) Create the keystone service credentials: nova and placement users added to the service project with the admin role;

3) Source the admin environment variables:

. /scripts/admin-openstack.sh

4) Create the nova and placement service entities:

openstack service create --name nova --description "OpenStack Compute" compute


openstack service create --name placement --description "Placement API" placement

5) Create the nova API and placement service endpoints:

openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1


openstack endpoint create --region RegionOne placement public http://controller:8778

openstack endpoint create --region RegionOne placement internal http://controller:8778

openstack endpoint create --region RegionOne placement admin http://controller:8778

6) List the services and endpoints:

openstack service list

+----------------------------------+-----------+-----------+

| ID | Name | Type |

+----------------------------------+-----------+-----------+

| b4c227a999fb4a7ca3774ff0ff353f88 | placement | placement |

| cef0253674b34940993a492796b02fe9 | keystone | identity |

| d74812eea565405b8e65274209d5fbcd | glance | image |

| ebf280bbd9874282af5a9fedc16641bb | nova | compute |

+----------------------------------+-----------+-----------+

openstack endpoint list

(3) Install nova:

yum install openstack-nova-api openstack-nova-conductor openstack-nova-console \
openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api

openstack-nova-api  # external API access;

openstack-nova-conductor  # middleware through which nova accesses the database;

openstack-nova-console  # nova console services;

openstack-nova-novncproxy  # VNC proxy;

openstack-nova-scheduler  # schedules virtual machines;

openstack-nova-placement-api  # tracks the inventory and usage of each resource provider

(4) Edit the configuration file:

vim /etc/nova/nova.conf

[DEFAULT]

enabled_apis = osapi_compute,metadata

# enable only the compute and metadata APIs;

transport_url = rabbit://openstack:openstack@controller

# RabbitMQ message queue connection;

use_neutron = true

# use neutron to manage networking;

firewall_driver = nova.virt.firewall.NoopFirewallDriver

# disable nova's own firewall driver in favor of neutron's;

[api_database]

connection = mysql+pymysql://nova:nova@controller/nova_api

[database]

connection = mysql+pymysql://nova:nova@controller/nova

[placement_database]

connection = mysql+pymysql://placement:placement@controller/placement

# database connections;

[api]

auth_strategy = keystone

[keystone_authtoken]

auth_url = http://controller:5000/v3

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = nova

password = nova

# Identity service access;

[vnc]

enabled = true

server_listen = 0.0.0.0

# address the VNC server listens on;

server_proxyclient_address = controller

# address used by the VNC proxy client;

[glance]

api_servers = http://controller:9292

# glance API location;


[oslo_concurrency]

lock_path = /var/lib/nova/tmp

# lock path;


[placement]

region_name = RegionOne

project_domain_name = Default

project_name = service

auth_type = password

user_domain_name = Default

auth_url = http://controller:5000/v3

username = placement

password = placement
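
As with the glance configuration, the effective (non-comment) settings in nova.conf can optionally be reviewed before moving on:

grep '^[a-z]' /etc/nova/nova.conf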

(5) Due to a packaging bug, enable access to the Placement API by adding the following to the Placement API's Apache configuration (/etc/httpd/conf.d/00-nova-placement-api.conf):

<Directory /usr/bin>

<IfVersion >= 2.4>

Require all granted

</IfVersion>

<IfVersion < 2.4>

Order allow,deny

Allow from all

</IfVersion>

</Directory>

(6) Restart the httpd service:

systemctl restart httpd

(7) Populate the databases:

1) Populate the nova_api and placement databases (the two databases contain the same tables):

su -s /bin/sh -c "nova-manage api_db sync" nova

2) Register the nova_cell0 database and create the cell1 cell:

su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova

e92cce3a-5fee-4e90-b7fe-1ef7fdfa6c69

3) Populate the nova database:

su -s /bin/sh -c "nova-manage db sync" nova

# Note: this sync prints warnings that can be ignored;

4) Verify that cell0 and cell1 are registered correctly:

su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova

+-------+--------------------------------------+------------------------------------+-------------------------------------------------+----------+
| Name  | UUID                                 | Transport URL                      | Database Connection                             | Disabled |
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+----------+
| cell0 | 00000000-0000-0000-0000-000000000000 | none:/                             | mysql+pymysql://nova:****@controller/nova_cell0 | False    |
| cell1 | e92cce3a-5fee-4e90-b7fe-1ef7fdfa6c69 | rabbit://openstack:****@controller | mysql+pymysql://nova:****@controller/nova       | False    |
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+----------+

(8) Verify that the database sync succeeded:

mysql -h controller -unova -pnova -e "use nova_api;show tables;"

mysql -h controller -uplacement -pplacement -e "use placement;show tables;"

mysql -h controller -unova -pnova -e "use nova;show tables;"

mysql -h controller -unova -pnova -e "use nova_cell0;show tables;"

# Note: tables being present means the sync worked; nova_api and placement share the same schema, as do nova and nova_cell0;

(9) Start the nova services and enable them at boot:

systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service \

openstack-nova-scheduler.service openstack-nova-conductor.service \

openstack-nova-novncproxy.service

systemctl start openstack-nova-api.service openstack-nova-consoleauth.service \

openstack-nova-scheduler.service openstack-nova-conductor.service \

openstack-nova-novncproxy.service

(10) Verify that the nova services are healthy:

openstack host list

+------------+-------------+----------+

| Host Name | Service | Zone |

+------------+-------------+----------+

| controller | consoleauth | internal |

| controller | conductor | internal |

| controller | scheduler | internal |

+------------+-------------+----------+

openstack compute service list

+--------------------------------------+------------------+------------+----------+---------+-------+----------------------------+-----------------+-------------+

| Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason | Forced down |

+--------------------------------------+------------------+------------+----------+---------+-------+----------------------------+-----------------+-------------+

| 7a8173f6-7d86-464b-aa6e-ed8565c22ab7 | nova-consoleauth | controller | internal | enabled | up | 2019-03-29T04:25:10.000000 | - | False |

| 2434732c-7f4a-4120-b843-ba887db3bc2f | nova-conductor | controller | internal | enabled | up | 2019-03-29T04:25:11.000000 | - | False |

| 5ff41bfb-db15-4b3d-b800-64f5b9a17a72 | nova-scheduler | controller | internal | enabled | up | 2019-03-29T04:25:11.000000 | - | False |

+--------------------------------------+------------------+------------+----------+---------+-------+----------------------------+-----------------+-------------+
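
Optionally, nova also ships a nova-status utility that sanity-checks the cells and Placement setup; at this stage, before any compute node has been added, some of its checks may report warnings, which is expected:

nova-status upgrade check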

8. Neutron service:

(1) Prerequisites (steps 1 and 2 below were already completed while installing keystone):

1) Create the neutron database and neutron user in MariaDB and grant appropriate access;

2) Create the keystone service credentials for neutron: a neutron user added to the service project with the admin role;

3) Source the admin environment variables:

. /scripts/admin-openstack.sh

4) Create the neutron service entity:

openstack service create --name neutron --description "OpenStack Networking" network

5) Create the neutron API endpoints:

openstack endpoint create --region RegionOne network public http://controller:9696

openstack endpoint create --region RegionOne network internal http://controller:9696

openstack endpoint create --region RegionOne network admin http://controller:9696

6) Review:

openstack service list

+----------------------------------+-----------+-----------+

| ID | Name | Type |

+----------------------------------+-----------+-----------+

| 30191f3b3aa94e7eb734836480306c08 | neutron | network |

| b4c227a999fb4a7ca3774ff0ff353f88 | placement | placement |

| cef0253674b34940993a492796b02fe9 | keystone | identity |

| d74812eea565405b8e65274209d5fbcd | glance | image |

| ebf280bbd9874282af5a9fedc16641bb | nova | compute |

+----------------------------------+-----------+-----------+

openstack endpoint list

(2) Configure networking option 1, provider networks:

1) Install the packages:

yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables

2) Configure the server component:

vim /etc/neutron/neutron.conf

[database]

connection = mysql+pymysql://neutron:neutron@controller/neutron

# database connection;


[DEFAULT]

core_plugin = ml2

service_plugins =

# enable the ML2 plug-in and disable additional plug-ins;

transport_url = rabbit://openstack:openstack@controller

# RabbitMQ message queue connection;

auth_strategy = keystone

# keystone authentication;

notify_nova_on_port_status_changes = true

notify_nova_on_port_data_changes = true

# notify nova of network topology changes;


[keystone_authtoken]

www_authenticate_uri = http://controller:5000

auth_url = http://controller:5000

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = neutron

password = neutron

# Identity service access;


[nova]

auth_url = http://controller:5000

auth_type = password

project_domain_name = default

user_domain_name = default

region_name = RegionOne

project_name = service

username = nova

password = nova

# credentials used for the network change notifications to nova;


[oslo_concurrency]

lock_path = /var/lib/neutron/tmp

# lock path;

3) Configure the ML2 (Modular Layer 2) plug-in:

vim /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]

type_drivers = flat,vlan

# enable flat and VLAN networks;

tenant_network_types =

# disable self-service networks;

mechanism_drivers = linuxbridge

# enable the Linux bridge mechanism;

# Warning: after the ML2 plug-in is configured, removing values from type_drivers can lead to database inconsistency and require re-syncing the database.

extension_drivers = port_security

# enable the port security extension driver;


[ml2_type_flat]

flat_networks = provider

# configure the provider virtual network as a flat network;

[securitygroup]

enable_ipset = true

# enable ipset to improve the efficiency of security group rules;

4) Configure the Linux bridge agent (layer 2):

vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]

physical_interface_mappings = provider:eth0


[vxlan]

enable_vxlan = false

# disable VXLAN overlay networks;


[securitygroup]

enable_security_group = true

firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

# enable security groups and configure the Linux bridge iptables firewall driver;


The following kernel parameters are set to 1 automatically when neutron-linuxbridge-agent.service starts, to ensure the Linux kernel supports bridge filtering:

sysctl net.bridge.bridge-nf-call-iptables

sysctl net.bridge.bridge-nf-call-ip6tables
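
If those sysctl commands report 0 or complain that the keys do not exist, the values can be set by hand; this is a sketch that assumes the br_netfilter module is available in the running kernel:

modprobe br_netfilter
sysctl -w net.bridge.bridge-nf-call-iptables=1
sysctl -w net.bridge.bridge-nf-call-ip6tables=1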

5) Configure the DHCP agent:

vim /etc/neutron/dhcp_agent.ini

[DEFAULT]

interface_driver = linuxbridge

dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq

enable_isolated_metadata = true

# configure the Linux bridge interface driver and the Dnsmasq DHCP driver,

# and enable isolated metadata so that instances on provider networks can reach the metadata service over the network;

(3) Configure the metadata agent:

vim /etc/neutron/metadata_agent.ini

[DEFAULT]

nova_metadata_host = controller

metadata_proxy_shared_secret = lc
# configure the metadata host and the shared secret;

(4) Configure the Compute service to use the Networking service:

vim /etc/nova/nova.conf

[neutron]

url = http://controller:9696

auth_url = http://controller:5000

auth_type = password

project_domain_name = default

user_domain_name = default

region_name = RegionOne

project_name = service

username = neutron

password = neutron

service_metadata_proxy = true

metadata_proxy_shared_secret = lc

# the shared secret must match the one configured for the neutron metadata agent;

(5) Create the symlink:

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

(6) Populate the database:

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

Verify that the database sync succeeded:

mysql -hcontroller -uneutron -pneutron -e "use neutron;show tables"

(7) Restart the Compute API service:

systemctl restart openstack-nova-api.service

(8) Start the neutron services and enable them at boot:

systemctl enable neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service

systemctl start neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service

(9) Verify:

openstack network agent list

+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+

| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |

+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+

| 1210dc27-0620-49d4-850e-2d3c86cf6a43 | DHCP agent | controller | nova | :-) | UP | neutron-dhcp-agent |

| 2aed088c-e3a4-4714-a63d-3056eabddafa | Linux bridge agent | controller | None | :-) | UP | neutron-linuxbridge-agent |

| 2ccc602e-29d4-46b2-a501-19a17a6a9b8f | Metadata agent | controller | None | :-) | UP | neutron-metadata-agent |

+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+

openstack compute service list

+----+------------------+------------+----------+---------+-------+----------------------------+

| ID | Binary | Host | Zone | Status | State | Updated At |

+----+------------------+------------+----------+---------+-------+----------------------------+

| 1 | nova-consoleauth | controller | internal | enabled | up | 2019-03-30T09:56:26.000000 |

| 2 | nova-conductor | controller | internal | enabled | up | 2019-03-30T09:56:31.000000 |

| 3 | nova-scheduler | controller | internal | enabled | up | 2019-03-30T09:56:25.000000 |

| 6 | nova-compute | compute1 | nova | enabled | up | 2019-03-30T09:56:26.000000 |

+----+------------------+------------+----------+---------+-------+----------------------------+

brctl show

bridge name bridge id STP enabled interfaces

brqc148981c-3a 8000.000c29e416df no eth0

tap8c4ff3d7-3e

route -n

Kernel IP routing table

Destination Gateway Genmask Flags Metric Ref Use Iface

0.0.0.0 10.0.0.253 0.0.0.0 UG 99 0 0 brqc148981c-3a

10.0.0.0 0.0.0.0 255.255.255.0 U 0 0 0 brqc148981c-3a

172.16.1.0 0.0.0.0 255.255.255.0 U 101 0 0 eth1








