Summary of problems encountered in OpenStack operations
1. Cold migration and resize (flavor change)
# 1. Configure passwordless SSH trust for the nova user between compute nodes
usermod -s /bin/bash nova
echo "NOVA_PASS"|passwd --stdin nova
su - nova
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
ssh-copy-id nova@compute01
ssh-copy-id nova@compute02
# 2. Allow resize and migration to the same host
openstack-config --set /etc/nova/nova.conf DEFAULT allow_resize_to_same_host true
openstack-config --set /etc/nova/nova.conf DEFAULT allow_migrate_to_same_host true
openstack-config --set /etc/nova/nova.conf DEFAULT resize_confirm_window 1
openstack-config --set /etc/nova/nova.conf DEFAULT scheduler_default_filters \
RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
# 3. Restart the compute service on the compute nodes
systemctl restart openstack-nova-compute.service
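# With the settings above in place, a resize or cold migration can be exercised roughly
# as follows (a sketch; <server> and m1.large are placeholders, and older clients use
# "openstack server resize --confirm" instead of the "resize confirm" subcommand):
openstack server resize --flavor m1.large <server>
openstack server resize confirm <server>
# a cold migration also lands in VERIFY_RESIZE and is confirmed the same way:
openstack server migrate <server>
openstack server resize confirm <server>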
2. Restore instance state after a physical host reboots
openstack-config --set /etc/nova/nova.conf DEFAULT resume_guests_state_on_host_boot true
systemctl restart openstack-nova-compute.service
3. Build of instance aborted: Volume did not finish being created even after we waited 191 seconds or 61 attempts. And its status is downloading.
# Fix: nova.conf has a parameter that controls block-device allocation retries, block_device_allocate_retries; raising it extends the wait time. Its default is 60, which corresponds to the "61 attempts" in the failure message above. Setting it to a larger value, e.g. 180, keeps Nova from timing out while the volume is still being created, which resolves the problem. After changing it, restart the Nova services for the change to take effect.
openstack-config --set /etc/nova/nova.conf DEFAULT block_device_allocate_retries 180
systemctl restart openstack-nova-compute.service
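# Note: the delay between those attempts comes from block_device_allocate_retries_interval
# (3 seconds by default), so the total wait is roughly retries x interval; if volume
# creation is genuinely slow the interval can be raised too (5 is just an example value):
openstack-config --set /etc/nova/nova.conf DEFAULT block_device_allocate_retries_interval 5
systemctl restart openstack-nova-compute.service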
4. Nova scheduler: Host has more disk space than database expected.
# Fix: the host is short on schedulable capacity; this can be addressed with overcommit ratios, i.e. by adjusting the cpu_allocation_ratio, ram_allocation_ratio and disk_allocation_ratio parameters.
openstack-config --set /etc/nova/nova.conf DEFAULT cpu_allocation_ratio 4
openstack-config --set /etc/nova/nova.conf DEFAULT ram_allocation_ratio 1.5
openstack-config --set /etc/nova/nova.conf DEFAULT disk_allocation_ratio 2
openstack-config --set /etc/nova/nova.conf DEFAULT reserved_host_memory_mb 2048
openstack-config --set /etc/nova/nova.conf DEFAULT reserved_host_disk_mb 20480
systemctl restart openstack-nova-compute.service
# Supplement: checking for hyper-threading:
# 1. Number of logical CPUs:
grep -c processor /proc/cpuinfo
# 2. Number of physical CPUs:
grep 'physical id' /proc/cpuinfo |sort -u|wc -l
# 3. siblings = number of logical CPUs per physical CPU
grep 'siblings' /proc/cpuinfo
# 4. cpu cores = number of cores per physical CPU
grep 'cpu cores' /proc/cpuinfo
# If siblings equals cpu cores, hyper-threading is not supported or not enabled.
# If siblings is twice cpu cores, hyper-threading is supported and enabled.
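# Before and after tuning the ratios it can help to check what each hypervisor currently
# reports (read-only commands; compute01 is a placeholder hostname):
openstack hypervisor stats show
openstack hypervisor show compute01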
5. Failed to allocate the network(s), not rescheduling.
# Fix: caused by a timeout while waiting for Neutron VIF plugging events.
openstack-config --set /etc/nova/nova.conf DEFAULT vif_plugging_is_fatal false
openstack-config --set /etc/nova/nova.conf DEFAULT vif_plugging_timeout 0
systemctl restart openstack-nova-compute.service
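# Disabling vif_plugging_is_fatal hides real Neutron failures; an alternative sketch is
# to keep it fatal and only raise the timeout (600 is an arbitrary example value):
openstack-config --set /etc/nova/nova.conf DEFAULT vif_plugging_is_fatal true
openstack-config --set /etc/nova/nova.conf DEFAULT vif_plugging_timeout 600
systemctl restart openstack-nova-compute.service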
6. AMQPLAIN login refused: user 'openstack' - invalid credentials.
# Fix: usually caused by a wrong RabbitMQ username/password. Check the cell_mappings table in the nova_api database to see which RabbitMQ credentials are recorded there.
mysql -uroot -p
MariaDB [(none)]> use nova_api;
MariaDB [nova_api]> select transport_url from cell_mappings where name="cell1";
MariaDB [nova_api]> \q
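# If the transport_url credentials are the intended ones, resetting the RabbitMQ password
# to match is usually the simplest fix (RABBIT_PASS is a placeholder); otherwise update
# transport_url in /etc/nova/nova.conf and the cell mapping to match RabbitMQ:
rabbitmqctl change_password openstack RABBIT_PASS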
7. Allow additional MAC/IP addresses on an instance's bridged port (same principle as allowing a Keepalived VIP)
neutron port-list
neutron port-show b9d47bd7-04e7-4bba-8c6a-7bcae212407f
neutron port-update b9d47bd7-04e7-4bba-8c6a-7bcae212407f --allowed-address-pairs ip_address=172.18.1.0/24,mac_address=fa:16:3e:aa:15:a0
neutron port-update --no-allowed-address-pairs b9d47bd7-04e7-4bba-8c6a-7bcae212407f
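# On releases where the neutron CLI is deprecated, the same can be done with the openstack
# client (the port UUID, subnet and MAC below are the ones from this example):
openstack port set --allowed-address ip-address=172.18.1.0/24,mac-address=fa:16:3e:aa:15:a0 b9d47bd7-04e7-4bba-8c6a-7bcae212407f
openstack port show b9d47bd7-04e7-4bba-8c6a-7bcae212407f -c allowed_address_pairs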
8. UnicodeEncodeError: 'ascii' codec can't encode characters in position 257-260: ordinal not in range(128)
# Symptom: live migration fails and the nova-compute log on the compute node reports: UnicodeEncodeError: 'ascii' codec can't encode characters in position 257-260: ordinal not in range(128). Cause: Python 2.7's default encoding is not utf-8. Fix: set utf-8 as the default via sitecustomize.py:
cat >/usr/lib/python2.7/site-packages/sitecustomize.py<<EOF
import sys
reload(sys)
sys.setdefaultencoding('utf8')
EOF
systemctl restart openstack-nova-compute.service
9. Fix instances left in a wrong state
openstack server list --all-projects --host compute01 |awk '{print $2}' |grep -Ev '^$|ID' |xargs -n1 nova reset-state --active
openstack server list --all-projects --host compute01 |awk '{print $2}' |grep -Ev '^$|ID' |xargs -n1 openstack server stop
openstack server list --all-projects --host compute01 |awk '{print $2}' |grep -Ev '^$|ID' |xargs -n1 openstack server start
10. Instances fail to boot because of Ceph RBD exclusive locks
for i in $(rbd ls -p volumes); do rbd feature disable volumes/$i exclusive-lock object-map fast-diff deep-flatten; done
for i in $(rbd ls -p vms); do rbd feature disable vms/$i exclusive-lock object-map fast-diff deep-flatten; done
for i in $(rbd ls -p volumesSSD); do rbd feature disable volumesSSD/$i exclusive-lock object-map fast-diff deep-flatten; done
openstack server list --all-projects --host compute01 |awk '{print $2}' |grep -Ev '^$|ID' |xargs -n1 openstack server reboot --hard
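# Before disabling features it is worth checking whether a dead client still holds the
# exclusive lock; a sketch for a single image (the image name, lock id and locker are
# hypothetical and come from your own "rbd lock ls" output):
rbd lock ls volumes/volume-xxxx
rbd lock rm volumes/volume-xxxx "auto 139643345607856" client.24145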
# References
https://blog.csdn.net/xiaoquqi/article/details/119338817
https://blog.csdn.net/weixin_40579389/article/details/120875351
https://blog.51cto.com/u_13788458/2756828
https://www.136.la/nginx/show-162487.html
https://www.likecs.com/show-278361.html
11. Resetting RabbitMQ / RabbitMQ fails to start
# On controller01, controller02 and controller03: stop the service, wipe the mnesia data, restart
systemctl stop rabbitmq-server.service
rm -rf /var/lib/rabbitmq/mnesia/*
systemctl restart rabbitmq-server.service
# On controller02 and controller03: join the cluster
rabbitmqctl stop_app
rabbitmqctl join_cluster --ram rabbit@controller01
rabbitmqctl start_app
# On controller01: enable the management web plugin, create the user, grant permissions, check cluster status
rabbitmq-plugins enable rabbitmq_management
rabbitmqctl add_user openstack RABBIT_PASS
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
rabbitmqctl cluster_status
rabbitmqctl list_queues
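# If the deployment used mirrored classic queues, the HA policy is wiped together with the
# mnesia data and has to be recreated as well (a common ha-all sketch):
rabbitmqctl set_policy ha-all "^" '{"ha-mode":"all"}'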
12. Delete instances stuck in BUILD state from the database
# Cause: dirty database records left behind when message-queue communication failed during cell mapping or scheduling
DELETE FROM nova.instance_extra WHERE instance_extra.instance_uuid = '$UUID';
DELETE FROM nova.instance_faults WHERE instance_faults.instance_uuid = '$UUID';
DELETE FROM nova.instance_id_mappings WHERE instance_id_mappings.uuid = '$UUID';
DELETE FROM nova.instance_info_caches WHERE instance_info_caches.instance_uuid = '$UUID';
DELETE FROM nova.instance_system_metadata WHERE instance_system_metadata.instance_uuid = '$UUID';
DELETE FROM nova.security_group_instance_association WHERE security_group_instance_association.instance_uuid = '$UUID';
DELETE FROM nova.block_device_mapping WHERE block_device_mapping.instance_uuid = '$UUID';
DELETE FROM nova.fixed_ips WHERE fixed_ips.instance_uuid = '$UUID';
DELETE FROM nova.instance_actions_events WHERE instance_actions_events.action_id in (SELECT id from nova.instance_actions where instance_actions.instance_uuid = '$UUID');
DELETE FROM nova.instance_actions WHERE instance_actions.instance_uuid = '$UUID';
DELETE FROM nova.virtual_interfaces WHERE virtual_interfaces.instance_uuid = '$UUID';
DELETE FROM nova.instances WHERE instances.uuid = '$UUID';
# If a leftover build request exists, also clean it up (build_requests matches on instance_uuid):
# DELETE FROM nova_api.build_requests WHERE instance_uuid = '$UUID';
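# A minimal sketch of how '$UUID' gets substituted in practice: run the statements from a
# shell heredoc with the UUID in a variable (the UUID below is a hypothetical example, and
# mysql will still prompt for the root password):
UUID=6f9f1c5b-1111-2222-3333-444444444444
mysql -uroot -p <<EOF
DELETE FROM nova.instance_faults WHERE instance_uuid = '$UUID';
DELETE FROM nova.instances WHERE uuid = '$UUID';
EOF
# run the full list of statements above in the given order, keeping nova.instances last.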
13. Unable to detach and delete a volume
# Find the stuck volume and reset its state to available
openstack volume list --all-projects |grep dffd4456-29b5-41e4-b3b7-7b00a1b3a313
cinder reset-state dffd4456-29b5-41e4-b3b7-7b00a1b3a313 --state available
# Step A: get the volume endpoint URL
cinder --debug show dffd4456-29b5-41e4-b3b7-7b00a1b3a313
# Endpoint: http://172.28.8.20:8776/v3/5145855bb46c4f129073172fb982660e/volumes/dffd4456-29b5-41e4-b3b7-7b00a1b3a313
# Step B: get a token
openstack token issue
# Token: gAAAAABial5wnniQjH-iM8Y10H1li5r0GzyzEJXo4iSuDHYc4S82cuunjyKmFCZJZw3uLzEvtFGGMZ77QMkAZMKNWyq1NVFY3Lr9QgZXrh6PetBWAMCN4YMt7fLDt-IUXKx-1dWFvIZLwVvpC8Ky4S9vuMTMRT7NTM3WwkJtDE5bPLgaRixuZXc
# Step C: query the database for the volume's attachment id
mysql -uroot -p
>use cinder
>select * from volume_attachment where volume_id='dffd4456-29b5-41e4-b3b7-7b00a1b3a313';
# Attachment id: 8edfc42e-eb4e-4405-b0c4-f35cf2c00bfe
# Combine the results of steps A, B and C into an API request that detaches the volume
curl -g -i \
-X POST http://172.28.8.20:8776/v3/5145855bb46c4f129073172fb982660e/volumes/dffd4456-29b5-41e4-b3b7-7b00a1b3a313/action \
-H "User-Agent: python-cinderclient" \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-H "X-Auth-Token: gAAAAABial5wnniQjH-iM8Y10H1li5r0GzyzEJXo4iSuDHYc4S82cuunjyKmFCZJZw3uLzEvtFGGMZ77QMkAZMKNWyq1NVFY3Lr9QgZXrh6PetBWAMCN4YMt7fLDt-IUXKx-1dWFvIZLwVvpC8Ky4S9vuMTMRT7NTM3WwkJtDE5bPLgaRixuZXc" \
-d '{"os-detach": {"attachment_id": "8edfc42e-eb4e-4405-b0c4-f35cf2c00bfe"}}'
# After the detach succeeds, delete the volume
openstack volume delete dffd4456-29b5-41e4-b3b7-7b00a1b3a313
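# On newer releases (volume API microversion 3.27+) the raw curl call can usually be
# replaced by the cinder attachment commands (a sketch using the ids from this example):
cinder --os-volume-api-version 3.27 attachment-list --volume-id dffd4456-29b5-41e4-b3b7-7b00a1b3a313
cinder --os-volume-api-version 3.27 attachment-delete 8edfc42e-eb4e-4405-b0c4-f35cf2c00bfe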
14. VMware Workstation
# vmware-hgfsclient: list the shared folders
# vmhgfs-fuse: mount a shared folder
vmhgfs-fuse .host:/OpenStack /mnt -o subtype=vmhgfs-fuse,allow_other
echo ".host:/OpenStack /mnt/hgfs fuse.vmhgfs-fuse allow_other,defaults 0 0" >>/etc/fstab
# To make DHCP hand out fresh leases, delete the lease files on the client (guest) and on the DHCP server (VMware on the host) respectively
find / -type f -name "dhclient-*.lease" -exec rm -f {} \;
del C:\ProgramData\VMware\vmnetdhcp.leases
15. Use growpart to grow a partition and then extend the logical volume
# If the physical disk backing the logical volume is partitioned and the disk has been grown online, there are two ways to extend the LV:
# 1) create a new partition and extend the LV onto it; 2) grow the last existing partition and then extend the LV.
# The second approach is shown below:
curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum install cloud-utils-growpart xfsprogs -y
# Which partition to grow: 2 means the second partition of sda
growpart /dev/sda 2
partprobe
# Error: unexpected output in sfdisk --version [sfdisk from util-linux 2.23.2] (the version string was printed in a non-English locale)
# Fix:
export LC_ALL=en_US.UTF-8
# 错误提示:no tools available to resize disk with 'gpt'
# FAILED: failed to get a resizer for id ''
# Fix:
yum install -y gdisk
pvresize /dev/sda2
lvextend -l +100%FREE /dev/centos/root
xfs_growfs /dev/centos/root
# For ext4 filesystems, use resize2fs /dev/centos/root instead of xfs_growfs
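# A quick verification sketch after the resize:
lsblk /dev/sda
vgs && lvs
df -hT /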
16. Virtual disk repair
List the instance's disks and load the nbd module
virsh domblklist <instance_UUID>
rmmod nbd
modprobe nbd max_part=16
Case 1: the virtual disk is a qcow2 image file
qemu-img check <path_to_qcow2_file>
qemu-nbd -c /dev/nbd0 <path_to_qcow2_file>
lsblk -f
xfs_repair /dev/nbd0p1
qemu-nbd -d /dev/nbd0
Case 2: the virtual disk is a Ceph RBD image
mon_host=$(grep mon_host /etc/ceph/ceph.conf | awk -F= '{print $2}' | tr -d ' ' | awk -F, '{print $1}')
echo $mon_host
qemu-nbd -c /dev/nbd0 -f raw rbd:<pool_name>/<image_name>:mon_host=${mon_host}:id=cinder:keyring=/etc/ceph/ceph.client.cinder.keyring
lsblk -f
xfs_repair /dev/nbd0p1
qemu-nbd -d /dev/nbd0
Case 3: use guestmount to mount the VM disk's filesystem in order to work on its files
yum install libguestfs-tools -y
virsh destroy <instance_UUID>
guestmount -d instance-xxx -m /dev/sda1 /mnt
umount /mnt
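# If it is unclear which device to pass to -m, the guest's filesystems can be listed first
# (a sketch; instance-xxx as above):
virt-filesystems --long -d instance-xxx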
17. The console connection to an instance on a compute node shows nothing
# Configure the kernel boot parameters in grub (GRUB_CMDLINE_LINUX in /etc/default/grub inside the guest)
GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200n8"
# Enable and start the serial-console login service
systemctl enable serial-getty@ttyS0.service
systemctl start serial-getty@ttyS0.service
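# The GRUB_CMDLINE_LINUX line above only takes effect after grub.cfg is regenerated and the
# guest is rebooted (a sketch assuming a BIOS-booted CentOS 7 guest); afterwards the serial
# output also appears in the instance console log (<server> is a placeholder):
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot
openstack console log show <server>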
On the difference between configuring the Linux serial console via systemd (systemctl) and via grub:
https://jiruiwu.pixnet.net/blog/post/357336576 (copy and paste into the browser address bar)
18. Keyboard input does not work in the VNC console
If the instance's libvirt XML contains the following element:
<input type='keyboard' bus='usb'>
<address type='usb' bus='0' port='2'/>
</input>
then you cannot type (and therefore cannot log in) through VNC; removing the element fixes it.
The code that builds this element lives in nova.virt.libvirt.driver.LibvirtDriver._guest_add_keyboard_device.
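# To check whether a given instance actually carries the element, dump its live XML on the
# compute node (instance-00000001 is a hypothetical domain name):
virsh dumpxml instance-00000001 | grep -A2 "input type='keyboard'"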
Author: wanghongwei
Copyright: this work is licensed under CC BY-NC-ND 4.0. Commercial reposting requires the author's permission; non-commercial reposting must include a link to the original and this notice.