Nova Service Deployment
I: Understanding the basic concepts of the Nova service
1: Role
Nova manages cloud instances (virtual machines) over their entire life cycle:
it is responsible for creating, deleting, starting, and stopping instances.
2: Nova component architecture
1) Nova modules
nova-api: receives and responds to external requests; it is the only entry point for managing Nova.
nova-scheduler: the scheduling service; using the resource information from Placement, it selects a host on which to create the virtual machine and then passes the request on to the compute module.
nova-compute: the core module, responsible for creating virtual machines and allocating resources; it creates and manages them through third-party hypervisors and provides no virtualization capability of its own.
nova-conductor: maintains the connection to the database; the other Nova components interact with the database through this module.
2) Nova's cell management model
The compute nodes in OpenStack are divided into cells for management. Each cell has its own message queue and database, except the special cell0, which has only a database and no message queue.
The top (API) layer contains the interface (api) and scheduler modules.
cell1 handles instance creation and management and contains the compute and conductor modules; as compute nodes are added, more cells (cell2, cell3, ...) can be created alongside it.
Nova uses three databases in total: nova_api, nova_cell0, and nova.
nova_api: stores global information, such as cell definitions and instance flavor (template) data.
nova_cell0: stores instances whose scheduling failed, so they can be managed centrally.
nova: serves all the other cells and stores information about the instances in them.
3: Basic Nova workflow
Flow:
Step 1: nova-api receives the user's request to create an instance and places it on the message queue.
Step 2: nova-conductor takes the request from the queue, fetches the cell and other related information from the database, and puts the request together with that data back on the queue.
Step 3: nova-scheduler picks up the request and data, works with the Placement component to choose a host for the instance, and hands the request back to the queue for nova-compute to process.
Step 4: nova-compute takes the request from the queue and talks to Glance, Neutron, and Cinder to obtain the image, network, and block storage resources; once they are ready, it calls the underlying virtualization program (e.g. KVM) through the hypervisor driver to create the virtual machine.
Interaction between Nova and Placement:
nova-compute reports its resource inventory and usage to Placement, which records it in its database. When a creation request arrives through nova-api, nova-scheduler (reaching the database with nova-conductor's help) asks Placement for the hosts that can satisfy the request, picks one, and sends the request on to that host's nova-compute to create the virtual machine; Placement's records are then updated to reflect the new allocation.
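As a toy illustration of the selection step in the flow above (not real Nova code; the host names and free-RAM numbers are made up), the scheduler's job boils down to ranking the candidate hosts that Placement reports and picking the best one:

```shell
# Toy model: each line is "host free_ram_mb", as if reported by Placement.
# One simple weighing strategy: pick the host with the most free RAM.
candidates="compute1 2048
compute2 8192
compute3 4096"

chosen=$(printf '%s\n' "$candidates" | sort -k2 -rn | head -n1 | awk '{print $1}')
echo "scheduler would pick: $chosen"
```

The real scheduler runs a chain of filters (enough RAM, enough disk, availability zone, ...) before weighing, but the shape of the decision is the same.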
II: Installing and configuring the Nova service on the controller node
1: Install the Nova packages
[root@controller ~]# yum -y install openstack-nova-api openstack-nova-conductor openstack-nova-scheduler openstack-nova-novncproxy
openstack-nova-api: Nova's interface module for the outside world
openstack-nova-conductor: the Nova conductor service, which provides database access
openstack-nova-scheduler: the Nova scheduling service, which chooses the host on which an instance will be created
openstack-nova-novncproxy: the proxy module for Nova's virtual network consoles, which lets users reach instances over VNC
Check the nova user and groups created by the installation:
[root@controller ~]# cat /etc/passwd|grep nova
nova:x:162:162:OpenStack Nova Daemons:/var/lib/nova:/sbin/nologin
[root@controller ~]# cat /etc/group|grep nova
nobody:x:99:nova
nova:x:162:nova
[root@controller ~]#
2: Create the databases and grant privileges
MariaDB [(none)]> create database nova_api;
Query OK, 1 row affected (0.000 sec)
MariaDB [(none)]> create database nova_cell0;
Query OK, 1 row affected (0.001 sec)
MariaDB [(none)]> create database nova;
Query OK, 1 row affected (0.000 sec)
# Grant the nova user full privileges on these databases for both local and remote logins
MariaDB [(none)]> grant all privileges on nova_api.* to 'nova'@'localhost' identified by '000000';
Query OK, 0 rows affected (0.001 sec)
MariaDB [(none)]> grant all privileges on nova_api.* to 'nova'@'%' identified by '000000';
Query OK, 0 rows affected (0.000 sec)
MariaDB [(none)]> grant all privileges on nova_cell0.* to 'nova'@'%' identified by '000000';
Query OK, 0 rows affected (0.000 sec)
MariaDB [(none)]> grant all privileges on nova_cell0.* to 'nova'@'localhost' identified by '000000';
Query OK, 0 rows affected (0.000 sec)
MariaDB [(none)]> grant all privileges on nova.* to 'nova'@'localhost' identified by '000000';
Query OK, 0 rows affected (0.000 sec)
MariaDB [(none)]> grant all privileges on nova.* to 'nova'@'%' identified by '000000';
Query OK, 0 rows affected (0.000 sec)
MariaDB [(none)]>
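The six GRANT statements differ only in the database and host, so they can also be generated with a loop instead of typed one by one (a sketch; it assumes the same 000000 password used throughout this document):

```shell
# Emit one GRANT per (database, host) pair; feed the result to mysql if desired.
sql=$(for db in nova_api nova_cell0 nova; do
    for host in localhost '%'; do
        echo "grant all privileges on ${db}.* to 'nova'@'${host}' identified by '000000';"
    done
done)
printf '%s\n' "$sql"
```

For example, `printf '%s\n' "$sql" | mysql -u root -p` on the controller would apply all six grants in one step.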
3: Modify the configuration file
/etc/nova/nova.conf
1) Strip the comments and blank lines
[root@controller nova]# cp nova.conf nova.bak
[root@controller nova]# ls
api-paste.ini  nova.bak  nova.conf  policy.json  release  rootwrap.conf
[root@controller nova]# grep -Ev '^$|#' nova.bak > nova.conf
[root@controller nova]# cat nova.conf
[DEFAULT]
[api]
[api_database]
[barbican]
[cache]
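The `grep -Ev '^$|#'` filter removes every empty line and every line containing a `#`. It can be tried on a scratch file first (the file names below are just examples):

```shell
# Build a small sample config, then strip comments and blank lines
# exactly the way the document does for nova.conf.
cat > /tmp/sample.conf <<'EOF'
# a comment
[DEFAULT]

key = value
EOF
grep -Ev '^$|#' /tmp/sample.conf > /tmp/sample.clean
cat /tmp/sample.clean   # only "[DEFAULT]" and "key = value" remain
```

Note that the pattern also deletes lines that merely contain an inline `#`, which is one more reason to keep the nova.bak copy around.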
2) Edit the configuration options
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://rabbitmq:000000@controller:5672
my_ip = 192.168.10.10
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api]
auth_strategy = keystone

[api_database]
connection = mysql+pymysql://nova:000000@controller/nova_api

[database]
connection = mysql+pymysql://nova:000000@controller/nova

[glance]
api_servers = http://controller:9292

[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = project
username = nova
password = 000000

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = project
username = placement
password = 000000
region_name = RegionOne

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
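As an optional alternative to hand-editing, the same options can be set non-interactively with the crudini tool, if it happens to be installed (it is not required by this procedure). A few examples:

```shell
# Each crudini call sets one key in one section of the ini file.
crudini --set /etc/nova/nova.conf DEFAULT my_ip 192.168.10.10
crudini --set /etc/nova/nova.conf api auth_strategy keystone
crudini --set /etc/nova/nova.conf vnc enabled true
```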
3) Initialize the Nova databases
1: Initialize the nova_api database
[root@controller nova]# su nova -s /bin/sh -c "nova-manage api_db sync"
2: Create the cell1 cell; this cell will use the nova database
[root@controller nova]# su nova -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1"
[root@controller nova]#
3: Map cell0 (which uses the nova_cell0 database), so that cell0's table structure will match that of the nova database
[root@controller nova]# su nova -s /bin/sh -c "nova-manage cell_v2 map_cell0"
[root@controller nova]#
4: Initialize the nova database; because of the mapping, the same tables are also created in cell0's database
[root@controller nova]# su nova -s /bin/sh -c "nova-manage db sync"
5: Verify that both cells are registered
[root@controller nova]# nova-manage cell_v2 list_cells
+-------+--------------------------------------+----------------------------------------+-------------------------------------------------+----------+
| Name  | UUID                                 | Transport URL                          | Database Connection                             | Disabled |
+-------+--------------------------------------+----------------------------------------+-------------------------------------------------+----------+
| cell0 | 00000000-0000-0000-0000-000000000000 | none:/                                 | mysql+pymysql://nova:****@controller/nova_cell0 | False    |
| cell1 | 596c65ab-eda3-4f79-9da7-59dee6bf6e65 | rabbit://rabbitmq:****@controller:5672 | mysql+pymysql://nova:****@controller/nova       | False    |
+-------+--------------------------------------+----------------------------------------+-------------------------------------------------+----------+
[root@controller nova]#
III: Nova component initialization
1: Create the nova user and assign it the admin role
openstack user create --domain default --password 000000 nova
openstack role add --project project --user nova admin
2: Create the Nova service and its service endpoints
openstack service create --name nova --description "OpenStack Compute" compute
openstack endpoint create --region RegionOne nova public http://controller:8774/v2.1
openstack endpoint create --region RegionOne nova internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne nova admin http://controller:8774/v2.1
3: Start the Nova services on the controller node
systemctl start openstack-nova-api openstack-nova-scheduler openstack-nova-conductor openstack-nova-novncproxy
systemctl enable openstack-nova-api openstack-nova-scheduler openstack-nova-conductor openstack-nova-novncproxy
4: Check the listening ports
[root@controller ~]# netstat -pant |grep 8774
tcp        0      0 0.0.0.0:8774      0.0.0.0:*      LISTEN      4051/python2
[root@controller ~]# netstat -pant |grep 8775
tcp        0      0 0.0.0.0:8775      0.0.0.0:*      LISTEN      4051/python2
[root@controller ~]# netstat -pant |grep 8778
tcp6       0      0 :::8778           :::*           LISTEN      1103/httpd
[root@controller ~]#
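The individual checks can also be rolled into one loop. A small sketch using `ss` (the modern replacement for `netstat`); `check_port` is a hypothetical helper name, not a standard command:

```shell
# check_port PORT: succeed if some TCP socket is listening on PORT.
check_port() {
    ss -tln | awk -v p="$1" '$4 ~ (":" p "$") {found=1} END {exit !found}'
}

for port in 8774 8775 8778 6080; do
    if check_port "$port"; then
        echo "port $port: listening"
    else
        echo "port $port: no listener"
    fi
done
```

(6080 is the noVNC proxy port used later for console access.)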
5: View the compute service list
[root@controller ~]# openstack compute service list
+----+----------------+------------+----------+---------+-------+----------------------------+
| ID | Binary         | Host       | Zone     | Status  | State | Updated At                 |
+----+----------------+------------+----------+---------+-------+----------------------------+
|  5 | nova-conductor | controller | internal | enabled | up    | 2023-11-16T14:09:11.000000 |
|  6 | nova-scheduler | controller | internal | enabled | up    | 2023-11-16T14:09:13.000000 |
+----+----------------+------------+----------+---------+-------+----------------------------+
[root@controller ~]#
Both services are in the up state, as they should be under normal conditions.
IV: Installing and configuring the Nova service on the compute node
1: Install the Nova package
[root@compute /]# yum -y install openstack-nova-compute
[root@compute /]# cat /etc/passwd|grep nova
nova:x:162:162:OpenStack Nova Daemons:/var/lib/nova:/sbin/nologin
[root@compute /]# cat /etc/group|grep nova
nobody:x:99:nova
qemu:x:107:nova
libvirt:x:988:nova
nova:x:162:nova
[root@compute /]#
2: Modify the configuration file
cp nova.conf nova.bak
grep -Ev '^$|#' nova.bak > nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://rabbitmq:000000@controller:5672
my_ip = 192.168.10.20
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api]
auth_strategy = keystone

[api_database]
connection = mysql+pymysql://nova:000000@controller/nova_api

[database]
connection = mysql+pymysql://nova:000000@controller/nova

[glance]
api_servers = http://controller:9292

[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = project
username = nova
password = 000000

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = project
username = placement
password = 000000
region_name = RegionOne

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
novncproxy_base_url = http://192.168.10.10:6080/vnc_auto.html

[libvirt]
virt_type = qemu
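`virt_type = qemu` forces pure software emulation, which is the safe choice when the compute node is itself a virtual machine; on physical hardware with VT-x/AMD-V, kvm is much faster. A small sketch for deciding which value to use (the variable names are just for illustration):

```shell
# Count hardware-virtualization CPU flags; 0 means no acceleration is available.
accel=$(grep -Ec '(vmx|svm)' /proc/cpuinfo || true)
if [ "$accel" -eq 0 ]; then
    virt_type=qemu   # no VT-x/AMD-V: fall back to full emulation
else
    virt_type=kvm    # hardware acceleration available
fi
echo "suggested [libvirt] virt_type = $virt_type"
```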
3: Start the Nova services on the compute node
[root@compute nova]# systemctl start openstack-nova-compute libvirtd
[root@compute nova]# systemctl enable openstack-nova-compute libvirtd
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-compute.service to /usr/lib/systemd/system/openstack-nova-compute.service.
[root@compute nova]#
V: Discover the compute node and check the services
1: Discover the compute node
[root@controller ~]# su nova -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose"
Found 2 cell mappings.
Getting computes from cell 'cell1': 596c65ab-eda3-4f79-9da7-59dee6bf6e65
Checking host mapping for compute host 'compute': 9877c5d8-be86-4db5-9541-e834da5ff7fd
Creating host mapping for compute host 'compute': 9877c5d8-be86-4db5-9541-e834da5ff7fd
Found 1 unmapped computes in cell: 596c65ab-eda3-4f79-9da7-59dee6bf6e65
Skipping cell0 since it does not contain hosts.
[root@controller ~]#
2: Enable automatic discovery by adding the following to /etc/nova/nova.conf on the controller (the option is read by the scheduler service)
[scheduler]
discover_hosts_in_cells_interval = 60
Restart the services so the change takes effect
[root@controller nova]# systemctl restart openstack-nova-api openstack-nova-scheduler
[root@controller nova]#
3: Verify the Nova service
1: View the compute service list
[root@controller nova]# openstack compute service list
+----+----------------+------------+----------+---------+-------+----------------------------+
| ID | Binary         | Host       | Zone     | Status  | State | Updated At                 |
+----+----------------+------------+----------+---------+-------+----------------------------+
|  5 | nova-conductor | controller | internal | enabled | up    | 2023-11-16T14:37:11.000000 |
|  6 | nova-scheduler | controller | internal | enabled | up    | 2023-11-16T14:37:13.000000 |
|  7 | nova-compute   | compute    | nova     | enabled | up    | 2023-11-16T14:37:09.000000 |
+----+----------------+------------+----------+---------+-------+----------------------------+
2: View the compute endpoints
[root@controller nova]# openstack catalog list
+-----------+-----------+-----------------------------------------+
| Name      | Type      | Endpoints                               |
+-----------+-----------+-----------------------------------------+
| placement | placement | RegionOne                               |
|           |           |   admin: http://controller:8778         |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:8778      |
|           |           | RegionOne                               |
|           |           |   public: http://controller:8778        |
|           |           |                                         |
| nova      | compute   | RegionOne                               |
|           |           |   admin: http://controller:8774/v2.1    |
|           |           | RegionOne
3: Check with the nova-status upgrade check tool
[root@controller nova]# nova-status upgrade check
+--------------------------------+
| Upgrade Check Results          |
+--------------------------------+
| Check: Cells v2                |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Placement API           |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Ironic Flavor Migration |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Cinder API              |
| Result: Success                |
| Details: None                  |
+--------------------------------+
[root@controller nova]#