Chapter 5: OpenStack Compute Service (Nova)
Introduction to Nova Components
- Nova compute service
- Nova API
- Nova Dashboard
- KVM
Installing the Services on Control Node node1
Installing and Configuring Nova on the Controller Node
1. Install the packages:
[root@linux-node1 images]# yum install openstack-nova-api openstack-nova-conductor \
> openstack-nova-console openstack-nova-novncproxy \
> openstack-nova-scheduler -y
Check which databases we created earlier:
[root@linux-node1 images]# mysql -u root -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 12
Server version: 10.1.20-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| glance             |
| information_schema |
| keystone           |
| mysql              |
| neutron            |
| nova               |
| performance_schema |
+--------------------+
7 rows in set (0.01 sec)

MariaDB [(none)]>
Next, create the nova_api database:
MariaDB [(none)]> CREATE DATABASE nova_api;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| glance             |
| information_schema |
| keystone           |
| mysql              |
| neutron            |
| nova               |
| nova_api           |
| performance_schema |
+--------------------+
8 rows in set (0.00 sec)
Grant the proper privileges on the database:
MariaDB [(none)]> grant all on nova_api.* to nova@localhost identified by 'nova';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> grant all on nova_api.* to nova@'%' identified by 'nova';
Query OK, 0 rows affected (0.00 sec)
Flush the privileges:
MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.00 sec)
2. Edit the /etc/nova/nova.conf file and complete the following steps:
- In the [api_database] and [database] sections, configure database access:
[api_database]
connection = mysql+pymysql://nova:nova@192.168.1.11/nova_api

[database]
connection = mysql+pymysql://nova:nova@192.168.1.11/nova
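These connection strings follow the SQLAlchemy URL format `driver://user:password@host/database`. One detail worth knowing: if the password ever contains reserved characters such as `@` or `/`, it must be percent-encoded before being embedded in the URL. A minimal sketch (the user, password, and host values simply mirror this guide):

```python
from urllib.parse import quote_plus

def build_db_url(user, password, host, database, driver="mysql+pymysql"):
    """Build a SQLAlchemy-style connection URL, percent-encoding the password."""
    return "%s://%s:%s@%s/%s" % (driver, user, quote_plus(password), host, database)

# The URL used in [api_database] above
print(build_db_url("nova", "nova", "192.168.1.11", "nova_api"))
# A password containing '@' would be encoded as %40 so it cannot be
# confused with the user/host separator
print(build_db_url("nova", "p@ss", "192.168.1.11", "nova"))
```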
Verify that the nova user can connect to the database:
[root@linux-node1 images]# mysql -h 192.168.1.11 -u nova -pnova
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 14
Server version: 10.1.20-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| nova               |
| nova_api           |
+--------------------+
3 rows in set (0.00 sec)
3. Populate the databases:
[root@linux-node1 images]# su -s /bin/sh -c "nova-manage api_db sync" nova
[root@linux-node1 images]# su -s /bin/sh -c "nova-manage db sync" nova
/usr/lib/python2.7/site-packages/pymysql/cursors.py:166: Warning: (1831, u'Duplicate index `block_device_mapping_instance_uuid_virtual_name_device_name_idx`. This is deprecated and will be disallowed in a future release.')
  result = self._query(query)
/usr/lib/python2.7/site-packages/pymysql/cursors.py:166: Warning: (1831, u'Duplicate index `uniq_instances0uuid`. This is deprecated and will be disallowed in a future release.')
  result = self._query(query)
Once all of the databases have been synchronized, verify the result (the duplicate-index warnings above can be safely ignored):
[root@linux-node1 images]# mysql -h 192.168.1.11 -u nova -pnova -e "use nova;show tables;"
Tables_in_nova
--------------
agent_builds
aggregate_hosts
aggregate_metadata
aggregates
allocations
block_device_mapping
bw_usage_cache
cells
certificates
compute_nodes
console_pools
consoles
dns_domains
fixed_ips
floating_ips
instance_actions
instance_actions_events
instance_extra
instance_faults
instance_group_member
instance_group_policy
instance_groups
instance_id_mappings
instance_info_caches
instance_metadata
instance_system_metadata
instance_type_extra_specs
instance_type_projects
instance_types
instances
inventories
key_pairs
migrate_version
migrations
networks
pci_devices
project_user_quotas
provider_fw_rules
quota_classes
quota_usages
quotas
reservations
resource_provider_aggregates
resource_providers
s3_images
security_group_default_rules
security_group_instance_association
security_group_rules
security_groups
services
shadow_agent_builds
shadow_aggregate_hosts
shadow_aggregate_metadata
shadow_aggregates
shadow_block_device_mapping
shadow_bw_usage_cache
shadow_cells
shadow_certificates
shadow_compute_nodes
shadow_console_pools
shadow_consoles
shadow_dns_domains
shadow_fixed_ips
shadow_floating_ips
shadow_instance_actions
shadow_instance_actions_events
shadow_instance_extra
shadow_instance_faults
shadow_instance_group_member
shadow_instance_group_policy
shadow_instance_groups
shadow_instance_id_mappings
shadow_instance_info_caches
shadow_instance_metadata
shadow_instance_system_metadata
shadow_instance_type_extra_specs
shadow_instance_type_projects
shadow_instance_types
shadow_instances
shadow_key_pairs
shadow_migrate_version
shadow_migrations
shadow_networks
shadow_pci_devices
shadow_project_user_quotas
shadow_provider_fw_rules
shadow_quota_classes
shadow_quota_usages
shadow_quotas
shadow_reservations
shadow_s3_images
shadow_security_group_default_rules
shadow_security_group_instance_association
shadow_security_group_rules
shadow_security_groups
shadow_services
shadow_snapshot_id_mappings
shadow_snapshots
shadow_task_log
shadow_virtual_interfaces
shadow_volume_id_mappings
shadow_volume_usage_cache
snapshot_id_mappings
snapshots
tags
task_log
virtual_interfaces
volume_id_mappings
volume_usage_cache

[root@linux-node1 images]# mysql -h 192.168.1.11 -u nova -pnova -e "use nova_api;show tables;"
Tables_in_nova_api
------------------
build_requests
cell_mappings
flavor_extra_specs
flavor_projects
flavors
host_mappings
instance_mappings
migrate_version
request_specs
With the database access configured, next configure access to the RabbitMQ message queue:
- Edit the [DEFAULT] and [oslo_messaging_rabbit] sections of /etc/nova/nova.conf:
[DEFAULT]
auth_strategy = keystone

[oslo_messaging_rabbit]
rabbit_userid = openstack
rabbit_password = openstack
- In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:
[DEFAULT]
rpc_backend = rabbit

[keystone_authtoken]
auth_uri = http://192.168.1.11:5000
auth_url = http://192.168.1.11:35357
memcached_servers = 192.168.1.11:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
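Edits like these can also be scripted instead of made by hand. Below is a rough sketch using Python's standard configparser with the same section and option names as above; note it is only illustrative (this guide edits the file manually, and configparser discards the comments that ship in the stock nova.conf, so in practice a targeted tool is preferable):

```python
import configparser

cfg = configparser.ConfigParser()
# In a real run you would first load the existing file:
#   cfg.read("/etc/nova/nova.conf")
cfg["DEFAULT"]["rpc_backend"] = "rabbit"
cfg["keystone_authtoken"] = {
    "auth_uri": "http://192.168.1.11:5000",
    "auth_url": "http://192.168.1.11:35357",
    "memcached_servers": "192.168.1.11:11211",
    "auth_type": "password",
    "project_domain_name": "default",
    "user_domain_name": "default",
    "project_name": "service",
    "username": "nova",
    "password": "nova",
}
# ...and then write it back out:
#   with open("/etc/nova/nova.conf", "w") as f:
#       cfg.write(f)
print(cfg["keystone_authtoken"]["auth_type"])
```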
- In the [DEFAULT] section, enable only the compute and metadata APIs:
[DEFAULT]
enabled_apis=osapi_compute,metadata
- In the [DEFAULT] section, my_ip would normally be set to the IP address of the controller node's management interface:
[DEFAULT]
my_ip = 127.0.0.1    # this guide keeps the default value
- In the [DEFAULT] section, enable support for the Networking service:
[DEFAULT]
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
- In the [vnc] section, configure the VNC proxy to use the management interface IP address of the controller node:
[vnc]
vncserver_listen = 192.168.1.11
vncserver_proxyclient_address = 192.168.1.11
- In the [glance] section, configure the location of the Image service API:
[glance]
api_servers=http://192.168.1.11:9292
- In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
lock_path=/var/lib/nova/tmp
Finalize the Installation
- Start the Compute services and configure them to start when the system boots:
[root@linux-node1 ~]# systemctl enable openstack-nova-api.service \
> openstack-nova-consoleauth.service openstack-nova-scheduler.service \
> openstack-nova-conductor.service openstack-nova-novncproxy.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-api.service to /usr/lib/systemd/system/openstack-nova-api.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-consoleauth.service to /usr/lib/systemd/system/openstack-nova-consoleauth.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-scheduler.service to /usr/lib/systemd/system/openstack-nova-scheduler.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-conductor.service to /usr/lib/systemd/system/openstack-nova-conductor.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-novncproxy.service to /usr/lib/systemd/system/openstack-nova-novncproxy.service.
[root@linux-node1 ~]# systemctl start openstack-nova-api.service \
> openstack-nova-consoleauth.service openstack-nova-scheduler.service \
> openstack-nova-conductor.service openstack-nova-novncproxy.service
Check that the services are listening:
[root@linux-node1 nova]# netstat -nltp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address      Foreign Address  State   PID/Program name
tcp        0      0 0.0.0.0:8774       0.0.0.0:*        LISTEN  6716/python2
tcp        0      0 0.0.0.0:8775       0.0.0.0:*        LISTEN  6716/python2
tcp        0      0 0.0.0.0:9191       0.0.0.0:*        LISTEN  3536/python2
tcp        0      0 0.0.0.0:25672      0.0.0.0:*        LISTEN  844/beam
tcp        0      0 0.0.0.0:3306       0.0.0.0:*        LISTEN  1183/mysqld
tcp        0      0 127.0.0.1:11211    0.0.0.0:*        LISTEN  836/memcached
tcp        0      0 0.0.0.0:9292       0.0.0.0:*        LISTEN  4237/python2
tcp        0      0 0.0.0.0:4369       0.0.0.0:*        LISTEN  1/systemd
tcp        0      0 0.0.0.0:22         0.0.0.0:*        LISTEN  839/sshd
tcp        0      0 0.0.0.0:15672      0.0.0.0:*        LISTEN  844/beam
tcp        0      0 127.0.0.1:25       0.0.0.0:*        LISTEN  1439/master
tcp        0      0 0.0.0.0:6080       0.0.0.0:*        LISTEN  6720/python2
tcp6       0      0 :::5672            :::*             LISTEN  844/beam
tcp6       0      0 :::5000            :::*             LISTEN  838/httpd
tcp6       0      0 ::1:11211          :::*             LISTEN  836/memcached
tcp6       0      0 :::80              :::*             LISTEN  838/httpd
tcp6       0      0 :::22              :::*             LISTEN  839/sshd
tcp6       0      0 ::1:25             :::*             LISTEN  1439/master
tcp6       0      0 :::35357           :::*             LISTEN  838/httpd
Check the log files:
[root@linux-node1 ~]# cd /var/log/nova
[root@linux-node1 nova]# ls
nova-api.log        nova-conductor.log  nova-consoleauth.log
nova-manage.log     # log from the database synchronization
nova-novncproxy.log nova-scheduler.log
[root@linux-node1 nova]# tail nova-api.log
2017-06-23 03:04:06.283 6716 INFO oslo_service.periodic_task [-] Skipping periodic task _periodic_update_dns because its interval is negative
Finally, register the Nova service in Keystone.
Register the Nova Service
1. Create the nova service entity:
[root@linux-node1 ~]# source admin-openstack.sh
[root@linux-node1 ~]# openstack service create --name nova \
> --description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Compute                |
| enabled     | True                             |
| id          | 976981ab6bef4582b49ac0b406fb1a25 |
| name        | nova                             |
| type        | compute                          |
+-------------+----------------------------------+
2. Create the Compute service API endpoints:
[root@linux-node1 ~]# openstack endpoint create --region RegionOne \
> compute public http://192.168.1.11:8774/v2.1/%\(tenant_id\)s
+--------------+---------------------------------------------+
| Field        | Value                                       |
+--------------+---------------------------------------------+
| enabled      | True                                        |
| id           | b1c181db93dd461bb11f39888b3c653b            |
| interface    | public                                      |
| region       | RegionOne                                   |
| region_id    | RegionOne                                   |
| service_id   | 976981ab6bef4582b49ac0b406fb1a25            |
| service_name | nova                                        |
| service_type | compute                                     |
| url          | http://192.168.1.11:8774/v2.1/%(tenant_id)s |
+--------------+---------------------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne \
> compute internal http://192.168.1.11:8774/v2.1/%\(tenant_id\)s
+--------------+---------------------------------------------+
| Field        | Value                                       |
+--------------+---------------------------------------------+
| enabled      | True                                        |
| id           | 0306699fe1d240848babc3b41d0be4e3            |
| interface    | internal                                    |
| region       | RegionOne                                   |
| region_id    | RegionOne                                   |
| service_id   | 976981ab6bef4582b49ac0b406fb1a25            |
| service_name | nova                                        |
| service_type | compute                                     |
| url          | http://192.168.1.11:8774/v2.1/%(tenant_id)s |
+--------------+---------------------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne \
> compute admin http://192.168.1.11:8774/v2.1/%\(tenant_id\)s
+--------------+---------------------------------------------+
| Field        | Value                                       |
+--------------+---------------------------------------------+
| enabled      | True                                        |
| id           | 1d87e06bff044ae88c50e5487485c3f9            |
| interface    | admin                                       |
| region       | RegionOne                                   |
| region_id    | RegionOne                                   |
| service_id   | 976981ab6bef4582b49ac0b406fb1a25            |
| service_name | nova                                        |
| service_type | compute                                     |
| url          | http://192.168.1.11:8774/v2.1/%(tenant_id)s |
+--------------+---------------------------------------------+
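A note on the `%\(tenant_id\)s` part of these commands: the backslashes only stop the shell from interpreting the parentheses; the string stored in Keystone is `%(tenant_id)s`, a Python %-style placeholder that is filled in with the caller's project ID at request time. A quick sketch of that substitution (the sample project ID is made up):

```python
# The endpoint URL as stored in Keystone (shell escaping removed)
template = "http://192.168.1.11:8774/v2.1/%(tenant_id)s"

# A hypothetical project (tenant) ID, as it would appear in a Keystone token
url = template % {"tenant_id": "3a7f2a8b9c0d4e5f8a1b2c3d4e5f6a7b"}
print(url)
# http://192.168.1.11:8774/v2.1/3a7f2a8b9c0d4e5f8a1b2c3d4e5f6a7b
```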
Verify the Installation
Check the OpenStack service and endpoint lists:
[root@linux-node1 ~]# openstack service list
+----------------------------------+----------+----------+
| ID                               | Name     | Type     |
+----------------------------------+----------+----------+
| 78e6f8140aa344e0abbc41ca7d21d9ed | keystone | identity |
| 976981ab6bef4582b49ac0b406fb1a25 | nova     | compute  |
| c29f2863d89047b997c721cdb51e77cb | glance   | image    |
+----------------------------------+----------+----------+
[root@linux-node1 ~]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------------------+
| ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                                         |
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------------------+
| 0306699fe1d240848babc3b41d0be4e3 | RegionOne | nova         | compute      | True    | internal  | http://192.168.1.11:8774/v2.1/%(tenant_id)s |
| 1d87e06bff044ae88c50e5487485c3f9 | RegionOne | nova         | compute      | True    | admin     | http://192.168.1.11:8774/v2.1/%(tenant_id)s |
| 353d9c2b13ec4f5d8e3d51abe7ca6ee2 | RegionOne | keystone     | identity     | True    | internal  | http://192.168.1.11:5000/v3                 |
| 84952464ca3644da82907fae74453c99 | RegionOne | keystone     | identity     | True    | public    | http://192.168.1.11:5000/v3                 |
| 9529f8eba1ce4b27bff51a13b7371d51 | RegionOne | glance       | image        | True    | public    | http://192.168.1.11:9292                    |
| b1c181db93dd461bb11f39888b3c653b | RegionOne | nova         | compute      | True    | public    | http://192.168.1.11:8774/v2.1/%(tenant_id)s |
| bcafb0d3927f4307bfcc96f9f8882211 | RegionOne | glance       | image        | True    | admin     | http://192.168.1.11:9292                    |
| f3b9aae336cb4f478b95ad7c77431580 | RegionOne | glance       | image        | True    | internal  | http://192.168.1.11:9292                    |
| f56fc2678a13414cbe94b6fea506d13c | RegionOne | keystone     | identity     | True    | admin     | http://192.168.1.11:35357/v3                |
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------------------+
Check that the Nova service hosts are up:
[root@linux-node1 ~]# openstack host list
+-------------------------+-------------+----------+
| Host Name               | Service     | Zone     |
+-------------------------+-------------+----------+
| linux-node1.example.com | consoleauth | internal |
| linux-node1.example.com | conductor   | internal |
| linux-node1.example.com | scheduler   | internal |
+-------------------------+-------------+----------+
Installing and Configuring Nova on the Compute Node
Prerequisites
Enable the OpenStack repository
- On CentOS, the "extras" repository provides the RPM package that enables the OpenStack repository. CentOS enables the "extras" repository by default, so you can simply install the package that enables the OpenStack repository.
[root@linux-node2 ~]# yum install centos-release-openstack-mitaka
Loaded plugins: fastestmirror
Install the OpenStack client:
[root@linux-node2 ~]# yum install python-openstackclient
Loaded plugins: fastestmirror
SELinux is enabled by default on CentOS. Install the openstack-selinux package to automatically manage the security policies for OpenStack services:
[root@linux-node2 ~]# yum install openstack-selinux
Loaded plugins: fastestmirror
Note: this section assumes you have already configured the first compute node step by step following the guide above. If you want to configure additional compute nodes, each one requires a unique IP address.
Controller node: node1
Compute node: node2
1. On node1, copy /etc/nova/nova.conf to the /opt/ directory on node2:
[root@linux-node1 ~]# scp /etc/nova/nova.conf 192.168.1.12:/opt/
The authenticity of host '192.168.1.12 (192.168.1.12)' can't be established.
ECDSA key fingerprint is dd:2f:eb:53:71:72:42:94:02:1c:91:a1:bd:ec:a1:44.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.1.12' (ECDSA) to the list of known hosts.
root@192.168.1.12's password:
nova.conf                                     100%  180KB 180.1KB/s   00:00
2. Compare node2's stock nova.conf with the copy brought over from node1:
[root@linux-node2 opt]# diff /etc/nova/nova.conf /opt/nova.conf
c2
<
---
> rpc_backend = rabbit
267c267
< #enabled_apis=osapi_compute,metadata
---
> enabled_apis=osapi_compute,metadata
1593a1594
> firewall_driver = nova.virt.firewall.NoopFirewallDriver
1684c1685
< #use_neutron=false
---
> use_neutron=true
2168c2169
< #connection=mysql://nova:nova@localhost/nova
---
> connection= mysql+pymysql://nova:nova@192.168.1.11/nova_api
3128c3129
< #connection=<None>
---
> connection= mysql+pymysql://nova:nova@192.168.1.11/nova
3354c3355
< #api_servers=<None>
---
> api_servers=http://192.168.1.11:9292
3523a3525,3537
> auth_uri = http://192.168.1.11:5000
> auth_url = http://192.168.1.11:35357
> memcached_servers = 192.168.1.11:11211
> auth_type = password
> project_domain_name = default
> user_domain_name = default
> project_name = service
> username = nova
> password = nova
>
>
>
4298c4312,4313
< #lock_path=/var/lib/nova/tmp
---
> lock_path=/var/lib/nova/tmp
>
4449c4464
< #rabbit_host=localhost
---
> rabbit_host= 192.168.1.11
4467c4482
< #rabbit_userid=guest
---
> rabbit_userid=openstack
4471c4486
< #rabbit_password=guest
---
> rabbit_password=openstack
5418c5433
< #vncserver_listen=127.0.0.1
---
> vncserver_listen=192.168.1.11
5442c5457
< #vncserver_proxyclient_address=127.0.0.1
---
> vncserver_proxyclient_address=192.168.1.11
3. Move node2's original nova.conf to /usr/local/src/ as a backup:
[root@linux-node2 opt]# mv /etc/nova/nova.conf /usr/local/src/
4. Move /opt/nova.conf into node2's /etc/nova/ directory:
[root@linux-node2 nova]# mv /opt/nova.conf .
5. Make the following changes to /etc/nova/nova.conf:
a. Remove the database connection lines.
Explanation: compute nodes do not connect to the databases directly; database access goes through nova-conductor.
b. In the [vnc] section, enable the following options:
enabled = true
keymap = en-us
vncserver_listen = 192.168.1.12               # may also be set to 0.0.0.0, which is used for VM migration
vncserver_proxyclient_address = 192.168.1.12
together with:
novncproxy_base_url=http://192.168.1.11:6080/vnc_auto.html
Explanation: this is the address of the novncproxy service on the controller node.
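When a console is later requested for an instance, Nova hands back this base URL with a one-time token appended as a query parameter. A sketch of how the final console URL is composed (the token value here is made up for illustration):

```python
from urllib.parse import urlencode

# The base URL configured in nova.conf above
base_url = "http://192.168.1.11:6080/vnc_auto.html"

# A hypothetical one-time console token issued by nova-consoleauth
token = "8b83c47e-0c5f-4a2b-9f3a-111111111111"

console_url = "%s?%s" % (base_url, urlencode({"token": token}))
print(console_url)
# http://192.168.1.11:6080/vnc_auto.html?token=8b83c47e-0c5f-4a2b-9f3a-111111111111
```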
Finalize the Installation
1. Determine whether your compute node supports hardware acceleration for virtual machines:
[root@linux-node2 nova]# grep vmx /proc/cpuinfo
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts nopl xtopology tsc_reliable nonstop_tsc aperfmperf eagerfpu pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm ida arat pln pts dtherm hwp hwp_noitfy hwp_act_window hwp_epp tpr_shadow vnmi ept vpid fsgsbase smep xsaveopt xsavec xgetbv1
(to get a count instead of the matching lines, use grep -c vmx /proc/cpuinfo)
Explanation:
If the command reports one or more matches, your compute node supports hardware acceleration and no additional configuration is required.
If it reports zero matches, your compute node does not support hardware acceleration, and you must configure libvirt to use QEMU instead of KVM.
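The check above can also be expressed as a small helper that counts CPUs whose flags include vmx (Intel VT-x) or svm (AMD-V). A sketch using a made-up /proc/cpuinfo excerpt rather than the real file:

```python
def count_accelerated_cpus(cpuinfo_text):
    """Count 'flags' lines in /proc/cpuinfo content advertising vmx or svm."""
    count = 0
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags or "svm" in flags:
                count += 1
    return count

# Hypothetical two-CPU excerpt; on a real node you would pass
# open("/proc/cpuinfo").read() instead.
sample = """processor : 0
flags : fpu vme pni vmx ssse3
processor : 1
flags : fpu vme pni vmx ssse3
"""
print(count_accelerated_cpus(sample))  # 2 -> hardware acceleration available
```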
- In that case, edit the [libvirt] section of the /etc/nova/nova.conf file as follows:
[libvirt]
virt_type = qemu    # the default is kvm; use qemu only when hardware acceleration is unavailable
2. Start the Compute service and its dependencies, and configure them to start automatically when the system boots:
[root@linux-node2 nova]# systemctl enable libvirtd.service openstack-nova-compute.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-compute.service to /usr/lib/systemd/system/openstack-nova-compute.service.
[root@linux-node2 nova]# systemctl start libvirtd.service openstack-nova-compute.service
Job for openstack-nova-compute.service failed because the control process exited with error code. See "systemctl status openstack-nova-compute.service" and "journalctl -xe" for details.
Troubleshooting:
[root@linux-node2 nova]# journalctl -xe
Jun 23 07:16:40 linux-node2.example.com systemd[1]: Failed to start OpenStack Nova Compute Server.
Check the file permissions under /etc/nova on node2 and note that nova.conf is owned by root:root, so the nova user cannot read it:
[root@linux-node2 nova]# ll
total 224
-rw-r----- 1 root nova   3673 Mar 22 06:14 api-paste.ini
-rw-r----- 1 root root 184422 Jun 23 07:21 nova.conf
-rw-r----- 1 root nova  27914 Mar 22 06:14 policy.json
-rw-r--r-- 1 root root     72 May 23 18:43 release
-rw-r----- 1 root nova    966 Mar 22 06:13 rootwrap.conf
Compare with the permissions on node1:
[root@linux-node1 nova]# ll
total 224
-rw-r----- 1 root nova   3673 Mar 22 06:14 api-paste.ini
-rw-r----- 1 root nova 184421 Jun 23 03:03 nova.conf
-rw-r----- 1 root nova  27914 Mar 22 06:14 policy.json
-rw-r--r-- 1 root root     72 May 23 18:43 release
-rw-r----- 1 root nova    966 Mar 22 06:13 rootwrap.conf
Fix the ownership of /etc/nova/nova.conf on node2:
[root@linux-node2 nova]# chown root:nova /etc/nova/nova.conf
[root@linux-node2 nova]# ll
total 224
-rw-r----- 1 root nova   3673 Mar 22 06:14 api-paste.ini
-rw-r----- 1 root nova 184422 Jun 23 07:21 nova.conf
-rw-r----- 1 root nova  27914 Mar 22 06:14 policy.json
-rw-r--r-- 1 root root     72 May 23 18:43 release
-rw-r----- 1 root nova    966 Mar 22 06:13 rootwrap.conf
Restart the services:
[root@linux-node2 nova]# systemctl start libvirtd.service openstack-nova-compute.service
[root@linux-node2 nova]# ps aux | grep nova
nova      4926 70.0  3.1 306076 59948 ?      Rs   07:35   0:00 /usr/bin/python2 /usr/bin/nova-compute
root      4934  0.0  0.0 112664   972 pts/0  R+   07:35   0:00 grep --color=auto nova
And check the log:
[root@linux-node2 nova]# cd /var/log/nova/
[root@linux-node2 nova]# ll
total 960
-rw-r--r-- 1 nova nova 506310 Jun 23 07:33 nova-compute.log
Verify the Nova Installation
1. List the service components to verify that each process was started and registered successfully:
[root@linux-node2 ~]# source admin-openstack.sh
[root@linux-node2 ~]# openstack compute service list
+----+------------------+-------------------------+----------+---------+-------+----------------------------+
| Id | Binary           | Host                    | Zone     | Status  | State | Updated At                 |
+----+------------------+-------------------------+----------+---------+-------+----------------------------+
| 1  | nova-consoleauth | linux-node1.example.com | internal | enabled | up    | 2017-06-23T12:48:36.000000 |
| 2  | nova-conductor   | linux-node1.example.com | internal | enabled | up    | 2017-06-23T12:48:37.000000 |
| 3  | nova-scheduler   | linux-node1.example.com | internal | enabled | up    | 2017-06-23T12:48:34.000000 |
| 6  | nova-compute     | linux-node2.example.com | nova     | enabled | up    | 2017-06-23T12:48:34.000000 |
+----+------------------+-------------------------+----------+---------+-------+----------------------------+
The output should show three service components enabled on the controller node and one enabled on the compute node.
Check that the connection between Nova and Glance works:
[root@linux-node2 ~]# nova image-list
+--------------------------------------+--------+--------+--------+
| ID                                   | Name   | Status | Server |
+--------------------------------------+--------+--------+--------+
| e44ea6d8-5a32-4bd3-8768-e606018db5ce | cirros | ACTIVE |        |
+--------------------------------------+--------+--------+--------+