1. Introduction to Nova
2. How the Nova Components Work Together
- A client (an OpenStack end user or another program) sends a request to the API (nova-api): "Create a VM for me."
- After some necessary processing, the API sends a message to Messaging (RabbitMQ): "Have the Scheduler create a VM."
- The Scheduler (nova-scheduler) picks up the API's message from Messaging, runs its scheduling algorithm, and selects node A from the available compute nodes.
- The Scheduler sends a message to Messaging: "Create this VM on compute node A."
- Compute (nova-compute) on compute node A picks up the Scheduler's message from Messaging and starts the VM on the node's hypervisor.
- Whenever Compute needs to query or update the database while the VM is being created, it sends a message to Conductor (nova-conductor) via Messaging; Conductor handles all database access.
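All of these hand-offs travel over RabbitMQ, so once the services are running you can watch the message flow on the controller. A minimal sketch, assuming RabbitMQ lives on the controller as in this deployment:

# list the Nova RPC queues and how many messages are waiting in each
rabbitmqctl list_queues name messages | egrep 'scheduler|conductor|compute'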
3. Installing and Deploying Nova
Controller node
1. Create the Nova databases and grant the nova user local and remote access
mysql -u root -p
CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
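It is worth confirming that the nova user can really reach all three databases before moving on. A minimal check, assuming the NOVA_DBPASS password set above:

# each command should list nova, nova_api and nova_cell0 without "Access denied"
mysql -u nova -pNOVA_DBPASS -h localhost -e "SHOW DATABASES;"
mysql -u nova -pNOVA_DBPASS -h controller -e "SHOW DATABASES;"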
2. Create the nova user
① Create the nova user
source openrc
openstack user create --domain default --password=nova nova
② Grant the nova user the admin role in the service project
openstack role add --project service --user nova admin
③ Create the nova service
openstack service create --name nova --description "OpenStack Compute" compute
④ Create the nova service endpoints
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
3. Create the placement user (Placement tracks resource usage)
① Create the placement user
openstack user create --domain default --password=placement placement
② Grant the placement user the admin role in the service project
openstack role add --project service --user placement admin
③ Create the placement service
openstack service create --name placement --description "Placement API" placement
④ Create the placement service endpoints
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
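At this point both services should be registered in Keystone. A quick sanity check (the --service filter is optional; it just narrows the endpoint list):

openstack user list                          # nova and placement should appear
openstack service list                       # compute and placement services
openstack endpoint list --service compute    # three endpoints on port 8774
openstack endpoint list --service placement  # three endpoints on port 8778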
4. Install Nova
① Install the Nova service packages
yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api
② Edit the Nova configuration file (back up the original first)
cp /etc/nova/nova.conf /etc/nova/nova.conf.bak
[DEFAULT]
my_ip=192.168.42.120                  # controller node IP
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver   # firewall driver
enabled_apis=osapi_compute,metadata
transport_url = rabbit://openstack:admin@controller       # RabbitMQ connection URL
[api]
auth_strategy = keystone
[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
[barbican]
[cache]
[cells]
[cinder]
#os_region_name = RegionOne
[cloudpipe]
[conductor]
[console]
[consoleauth]
[cors]
[cors.subdomain]
[crypto]
[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://controller:9292
[guestfs]
[healthcheck]
[hyperv]
[image_file_url]
[ironic]
[key_manager]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
[libvirt]
#virt_type=qemu
[matchmaker_redis]
[metrics]
[mks]
[neutron]
#url = http://controller:9696
#auth_url = http://controller:35357
#auth_type = password
#project_domain_name = default
#user_domain_name = default
#region_name = RegionOne
#project_name = service
#username = neutron
#password = neutron
#service_metadata_proxy = true
#metadata_proxy_shared_secret = METADATA_SECRET
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path=/var/lib/nova/tmp           # where temporary files are kept
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
os_region_name = RegionOne
auth_type = password
auth_url = http://controller:35357/v3 # authentication endpoint
project_name = service
project_domain_name = Default
username = placement
password = placement
user_domain_name = Default
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[ssl]
[trusted_computing]
[upgrade_levels]
[vendordata_dynamic_auth]
[vmware]
[vnc]
# console access to instances
enabled=true
vncserver_listen=$my_ip                   # listen IP address
vncserver_proxyclient_address=$my_ip      # address used by the VNC proxy to reach this node
#novncproxy_base_url = http://172.16.254.63:6080/vnc_auto.html
[workarounds]
[wsgi]
[xenserver]
[xvp]
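After editing, it helps to review only the options that are actually set, and to diff against the backup made above:

egrep -v '^[[:space:]]*(#|$)' /etc/nova/nova.conf   # show section headers and active options only
diff /etc/nova/nova.conf.bak /etc/nova/nova.conf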
③ Allow httpd to access the Placement API (append the following block to the end of the file)
vim /etc/httpd/conf.d/00-nova-placement-api.conf

<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>
④ Restart the httpd service
systemctl restart httpd
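Once httpd is back up, the Placement API should answer on port 8778; its unauthenticated root URL returns a small JSON document listing the supported API versions, which is a quick sign the WSGI application is wired up correctly:

curl -s http://controller:8778
# expected: a JSON body describing the available Placement API versions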
⑤ Sync the api database and map the cell0 database
su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
⑥ Create the cell1 cell (the command prints the new cell's UUID)
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
⑦ Sync the nova database
su -s /bin/sh -c "nova-manage db sync" nova
Expected output (this deprecation warning is harmless):
/usr/lib/python2.7/site-packages/pymysql/cursors.py:170: Warning: (1831, u'Duplicate index `block_device_mapping_instance_uuid_virtual_name_device_name_idx`. This is deprecated and will be disallowed in a future release.')
result = self._query(query)
⑧ List the registered cells
nova-manage cell_v2 list_cells
⑨ Start the services and enable them at boot
systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
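A quick way to confirm all five controller services came up; anything reported as "failed" or "inactive" here usually points back at nova.conf or RabbitMQ:

for s in api consoleauth scheduler conductor novncproxy; do
    echo -n "openstack-nova-$s: "
    systemctl is-active openstack-nova-$s
done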
Error:
ERROR: Could not access cell0. Has the nova_api database been created? Has the nova_cell0 database been created? Has "nova-manage api_db sync" been run? Has "nova-manage cell_v2 map_cell0" been run? Is [api_database]/connection set in nova.conf? Is the cell0 database connection URL correct? Error: (_mysql_exceptions.OperationalError) (1045, "Access denied for user 'nova'@'localhost' (using password: YES)")
This is a database problem: the nova user cannot authenticate. Drop the databases, recreate them, re-grant the privileges (or fix the password), and retry.
Or the error:
Failed to parse /etc/nova/nova.conf: at /etc/nova/nova.conf:148,
This means nova.conf contains a syntax error (here at line 148); fix the file and restart the services.
Error:
openstack compute service list shows services in the down state, or openstack hypervisor list shows nothing.
This is usually because libvirtd and openstack-nova-compute are not running on the compute node; if they refuse to start, check whether rabbitmq-server is running on the controller.
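A hedged way to confirm that diagnosis, assuming the host names used in this guide (controller and compute) and SSH access to both:

ssh compute    'systemctl is-active libvirtd openstack-nova-compute'
ssh controller 'systemctl is-active rabbitmq-server'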
Compute node
All of the OpenStack databases live on the controller node; the other nodes do not need databases of their own.
① Install the Nova package (resolve the qemu dependencies below first, then install with yum)
qemu-img-ev-2.9.0-16.el7_4.8.1.x86_64.rpm qemu-kvm-ev-2.9.0-16.el7_4.8.1.x86_64.rpm qemu-kvm-common-ev-2.9.0-16.el7_4.8.1.x86_64.rpm
yum install openstack-nova-compute
② Edit the Nova configuration file (back up the original first)
cp /etc/nova/nova.conf /etc/nova/nova.conf.bak
[DEFAULT]
my_ip=192.168.253.194                 # IP of the management NIC
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
enabled_apis=osapi_compute,metadata
transport_url = rabbit://openstack:admin@controller
[api]
#auth_strategy = keystone
[api_database]
#connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
[barbican]
[cache]
[cells]
[cinder]
#os_region_name = RegionOne
[cloudpipe]
[conductor]
[console]
[consoleauth]
[cors]
[cors.subdomain]
[crypto]
[database]
#connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://controller:9292
[guestfs]
[healthcheck]
[hyperv]
[image_file_url]
[ironic]
[key_manager]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
[libvirt]
virt_type=qemu                        # virtualization backend to use
[matchmaker_redis]
[metrics]
[mks]
[neutron]
#url = http://controller:9696
#auth_url = http://controller:35357
#auth_type = password
#project_domain_name = default
#user_domain_name = default
#region_name = RegionOne
#project_name = service
#username = neutron
#password = neutron
#service_metadata_proxy = true
#metadata_proxy_shared_secret = METADATA_SECRET
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path=/var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
os_region_name = RegionOne
auth_type = password
auth_url = http://controller:35357/v3
project_name = service
project_domain_name = Default
username = placement
password = placement
user_domain_name = Default
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[ssl]
[trusted_computing]
[upgrade_levels]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled=true
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$my_ip
novncproxy_base_url = http://192.168.253.135:6080/vnc_auto.html   # controller node address
[workarounds]
[wsgi]
[xenserver]
[xvp]
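Before starting the services, it helps to confirm this node can actually reach the controller services referenced above (RabbitMQ, Keystone, Glance, Placement and the noVNC proxy). A minimal sketch using bash's /dev/tcp:

for port in 5672 5000 35357 9292 8778 6080; do
    # try to open a TCP connection to the controller on each port
    timeout 2 bash -c "cat < /dev/null > /dev/tcp/controller/$port" \
        && echo "controller:$port reachable" \
        || echo "controller:$port NOT reachable"
done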
③ Check whether hardware virtualization is enabled on the compute node
egrep -c '(vmx|svm)' /proc/cpuinfo
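If the count is 0, the CPU does not expose VT-x/AMD-V and the [libvirt] section must keep virt_type=qemu (full emulation); otherwise kvm can be used. A small sketch of that decision:

count=$(egrep -c '(vmx|svm)' /proc/cpuinfo)
if [ "$count" -eq 0 ]; then
    echo "no hardware virtualization: keep virt_type=qemu in [libvirt]"
else
    echo "hardware virtualization available: virt_type=kvm can be used"
fi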
④ Start the services and enable them at boot
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service   # if they fail to start, check the logs and fix the specific error
Error:
If you see errors mentioning port 5672, restart rabbitmq-server (and re-enable its plugins) on the controller node.
Note: enable the controller services to start at boot; otherwise, if that node is accidentally shut down, services left stopped will break the steps that follow.
⑤ List the hypervisors managed by OpenStack
openstack hypervisor list
Error:
A wall of python2.7 errors; check whether all the required ports are listening, for example 9292 (glance-api):
^CTraceback (most recent call last):
  File "/usr/bin/openstack", line 10, in <module>
    sys.exit(main())
  File "/usr/lib/python2.7/site-packages/openstackclient/shell.py", line 211, in main
    return OpenStackShell().run(argv)
  File "/usr/lib/python2.7/site-packages/osc_lib/shell.py", line 135, in run
    ret_val = super(OpenStackShell, self).run(argv)
Back on the controller node
⑥ Discover the compute node and map it into the cell database (run on the controller node)
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Expected output:
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting compute nodes from cell 'cell1': d1ae544d-ba14-4621-8ae8-6c978ed0288f
Found 0 computes in cell: d1ae544d-ba14-4621-8ae8-6c978ed0288f
⑦ On the controller node, check the state of the compute services
openstack compute service list   # Status "enabled" means the service is enabled; State "up" means it is running
Error:
2019-06-07 20:14:16.562 33654 ERROR oslo_service.service AccessRefused: (0, 0): (403) ACCESS_REFUSED - Login was refused using authentication mechanism AMQPLAIN. For details see the broker logfile.
This means the openstack user is missing in RabbitMQ or lacks permissions.
Check the existing users with rabbitmqctl list_users,
then grant the user administrator rights; if it is already an administrator, reset the openstack password instead.
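If the user really is missing, it can be recreated with the password used in transport_url (admin in this guide). A minimal sketch of the usual RabbitMQ commands:

rabbitmqctl add_user openstack admin                     # user and password from transport_url
rabbitmqctl set_permissions openstack ".*" ".*" ".*"     # configure/write/read on all resources
rabbitmqctl set_user_tags openstack administrator        # tag it as an administrator
# to reset the password of an existing user instead:
# rabbitmqctl change_password openstack admin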
openstack image list        # list the images
nova-status upgrade check   # check the state of the Nova deployment
Both should report success / "up".
Error:
No compute nodes were found:
No host mappings found but there are compute nodes. Run command 'nova-manage cell_v2 simple_cell_setup' and then retry.
Re-discover the compute nodes by running:
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Error:
Placement fails even though port 8778 is listening: the [placement] section of nova.conf is misconfigured; check it for typos.
[How to remove a compute node]
Note: stop the VMs on the node and back them up before removing it.
Check the OpenStack compute service list:
[root@controller ~]# openstack compute service list
+----+------------------+------------+----------+---------+-------+----------------------------+
| ID | Binary           | Host       | Zone     | Status  | State | Updated At                 |
+----+------------------+------------+----------+---------+-------+----------------------------+
|  1 | nova-conductor   | controller | internal | enabled | up    | 2019-06-05T07:31:14.000000 |
|  2 | nova-consoleauth | controller | internal | enabled | up    | 2019-06-05T07:31:14.000000 |
|  3 | nova-scheduler   | controller | internal | enabled | up    | 2019-06-05T07:31:15.000000 |
|  6 | nova-compute     | compute    | nova     | enabled | up    | 2019-06-05T07:31:10.000000 |
|  7 | nova-compute     | controller | nova     | enabled | up    | 2019-06-05T07:31:14.000000 |
|  8 | nova-compute     | storage    | nova     | enabled | up    | 2019-06-05T07:31:09.000000 |
+----+------------------+------------+----------+---------+-------+----------------------------+
Check the OpenStack host list:
[root@controller ~]# openstack host list
+------------+-------------+----------+
| Host Name | Service | Zone |
+------------+-------------+----------+
| controller | conductor | internal |
| controller | consoleauth | internal |
| controller | scheduler | internal |
| compute | compute | nova |
| controller | compute | nova |
| storage | compute | nova |
+------------+-------------+----------+
To remove the nova-compute service on the storage host, its Status needs to become "disabled" and its State "down".
First stop nova-compute and libvirtd on the storage node:
[root@storage network-scripts]# systemctl stop libvirtd.service openstack-nova-compute.service
Check the service list again on the controller node:
[root@controller ~]# openstack compute service list
+----+------------------+------------+----------+----------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+----+------------------+------------+----------+----------+-------+----------------------------+
| 1 | nova-conductor | controller | internal | enabled | up | 2019-06-05T07:40:54.000000 |
| 2 | nova-consoleauth | controller | internal | enabled | up | 2019-06-05T07:40:54.000000 |
| 3 | nova-scheduler | controller | internal | enabled | up | 2019-06-05T07:40:55.000000 |
| 6 | nova-compute | compute | nova | enabled | up | 2019-06-05T07:41:00.000000 |
| 7 | nova-compute | controller | nova | enabled | up | 2019-06-05T07:40:54.000000 |
| 8 | nova-compute | storage | nova | disabled | down | 2019-06-05T07:39:30.000000 |
+----+------------------+------------+----------+----------+-------+----------------------------+
If the service does not show as disabled/down after the step above, disable it explicitly:
[root@controller ~]# nova service-disable storage nova-compute
+---------+--------------+----------+
| Host | Binary | Status |
+---------+--------------+----------+
| storage | nova-compute | disabled |
+---------+--------------+----------+
[root@controller ~]# nova service-list
+----+------------------+------------+----------+----------+-------+----------------------------+-----------------+
| Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+----+------------------+------------+----------+----------+-------+----------------------------+-----------------+
| 1 | nova-conductor | controller | internal | enabled | up | 2019-06-05T07:45:24.000000 | - |
| 2 | nova-consoleauth | controller | internal | enabled | up | 2019-06-05T07:45:25.000000 | - |
| 3 | nova-scheduler | controller | internal | enabled | up | 2019-06-05T07:45:25.000000 | - |
| 6 | nova-compute | compute | nova | enabled | up | 2019-06-05T07:45:20.000000 | - |
| 7 | nova-compute | controller | nova | enabled | up | 2019-06-05T07:45:24.000000 | - |
| 8 | nova-compute | storage | nova | disabled | down | 2019-06-05T07:44:04.000000 | - |
+----+------------------+------------+----------+----------+-------+----------------------------+-----------------+
Delete the storage row from the nova.services table in the database:
MariaDB [nova]> delete from nova.services where host="storage";
Query OK, 1 row affected (0.00 sec)
MariaDB [nova]> select host from nova.services;
+------------+
| host |
+------------+
| 0.0.0.0 |
| 0.0.0.0 |
| compute |
| controller |
| controller |
| controller |
| controller |
+------------+
7 rows in set (0.00 sec)
Delete the storage row from the compute_nodes table:
MariaDB [nova]> delete from compute_nodes where host="storage";
Query OK, 1 row affected (0.00 sec)
MariaDB [nova]> select host from compute_nodes;
+------------+
| host |
+------------+
| compute |
| controller |
+------------+
2 rows in set (0.00 sec)
Check the hypervisor_hostname column of compute_nodes to confirm storage is gone:
MariaDB [nova]> select hypervisor_hostname from compute_nodes;
+---------------------+
| hypervisor_hostname |
+---------------------+
| compute |
| controller |
+---------------------+
2 rows in set (0.01 sec)
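Depending on the release, the cells v2 layer may also keep a host mapping for the removed node in the nova_api database. A hedged check, with the cleanup only if a storage row actually shows up:

mysql -u root -p -e "SELECT host FROM nova_api.host_mappings;"
# if a 'storage' row is still listed:
# mysql -u root -p -e "DELETE FROM nova_api.host_mappings WHERE host='storage';"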
Verify that storage no longer appears in the host list:
[root@controller ~]# openstack host list
+------------+-------------+----------+
| Host Name | Service | Zone |
+------------+-------------+----------+
| controller | conductor | internal |
| controller | consoleauth | internal |
| controller | scheduler | internal |
| compute | compute | nova |
| controller | compute | nova |
+------------+-------------+----------+
Verify that storage no longer appears in the compute service list:
[root@controller ~]# openstack compute service list
+----+------------------+------------+----------+---------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+----+------------------+------------+----------+---------+-------+----------------------------+
| 1 | nova-conductor | controller | internal | enabled | up | 2019-06-05T08:00:44.000000 |
| 2 | nova-consoleauth | controller | internal | enabled | up | 2019-06-05T08:00:45.000000 |
| 3 | nova-scheduler | controller | internal | enabled | up | 2019-06-05T08:00:45.000000 |
| 6 | nova-compute | compute | nova | enabled | up | 2019-06-05T08:00:50.000000 |
| 7 | nova-compute | controller | nova | enabled | up | 2019-06-05T08:00:44.000000 |
+----+------------------+------------+----------+---------+-------+----------------------------+