The unresponsive libvirtd service under CloudStack (a final reckoning)

Problem origin: the libvirtd service becomes unresponsive under CloudStack.

It's time to settle this once and for all!

Per the environment described in the previous post, we wandered down a dead end with cloudstack-agent; back on track now, let's look at qemu-kvm!

We know that in OpenStack and CloudStack many configuration parameters affect storage read/write efficiency, for example the I/O settings in CloudStack under "Service Offerings" -> "Disk Offerings" (the defaults are recommended). Ultimately they all influence performance by tuning the underlying QEMU/KVM parameters, so studying QEMU/KVM itself is the key to solving this problem!
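For a concrete feel of what those offerings tune, libvirt exposes the same per-disk I/O caps through virsh blkdeviotune. A minimal sketch (the domain name kvm1 and device vda match the XML later in this post; the numeric limits are made-up examples):

# query the current throttling of disk vda in guest kvm1
virsh blkdeviotune kvm1 vda
# cap it at 1000 IOPS and ~50 MB/s total (illustrative values only)
virsh blkdeviotune kvm1 vda --total-iops-sec 1000 --total-bytes-sec 52428800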

1. How to solve the lock contention problem in QEMU/KVM

Traditional QEMU-KVM is constrained by the big-qemu-lock mechanism; to break through the system's IOPS ceiling, virtio-blk-data-plane (x-data-plane) was introduced.
From "x-data-plane feature in QEMU/KVM":
A global lock is used to synchronize between different threads, and this is also the case in QEMU/KVM (http://wiki.qemu.org/main_page). However, that lock can become a point of contention, dragging down the performance and scalability of the whole system. To solve this problem in QEMU/KVM, the x-data-plane feature was designed and implemented. The high-level idea: "I/O requests are handled by a dedicated IOThread rather than the QEMU main loop threads, so that there is no lock contention between the I/O threads and the other QEMU main loop threads."

Start the x-data-plane function in QEMU. Older QEMU versions do not seem to have this feature; my tests were done on v2.2.0. My libvirt XML configuration is shown below.

Note: currently only the raw image format supports virtio-blk-data-plane; the price of that performance is giving up the snapshot feature.
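Before enabling it, you can confirm the disk image really is raw. A quick check (the image path is the one used in the XML below):

qemu-img info /home/images/kvm1.img    # must report "file format: raw"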

<domain type='kvm' id='2' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>kvm1</name>
  <uuid>8e9c4603-c4b5-fa41-b251-1dc4ffe1872c</uuid>
  <memory unit='KiB'>4194304</memory>
  <currentMemory unit='KiB'>4194304</currentMemory>
  <vcpu placement='static'>5</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='1'/>
    <vcpupin vcpu='2' cpuset='2'/>
    <vcpupin vcpu='3' cpuset='3'/>
    <vcpupin vcpu='4' cpuset='4'/>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.0'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/bin/kvm-spice</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <source file='/home/images/kvm1.img'/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </disk>
    <disk type='block' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
      <alias name='ide0-1-0'/>
      <address type='drive' controller='0' bus='1' target='0' unit='0'/>
    </disk>
    <controller type='usb' index='0'>
      <alias name='usb0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='scsi' index='0'>
      <alias name='scsi0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <controller type='ide' index='0'>
      <alias name='ide0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <interface type='network'>
      <mac address='52:54:00:01:ab:ca'/>
      <source network='default'/>
      <target dev='vnet0'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/11'/>
      <target port='0'/>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/11'>
      <source path='/dev/pts/11'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='vnc' port='5900' autoport='yes' listen='127.0.0.1'>
      <listen type='address' address='127.0.0.1'/>
    </graphics>
    <video>
      <model type='cirrus' vram='9216' heads='1'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </memballoon>
  </devices>
  <seclabel type='none'/>
  <qemu:commandline>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.virtio-disk0.scsi=off'/>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.virtio-disk0.config-wce=off'/>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.virtio-disk0.x-data-plane=on'/>
  </qemu:commandline>
</domain>
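After starting the domain, it's worth confirming that the properties injected via <qemu:commandline> actually reached the qemu process. A minimal sketch, assuming the domain kvm1 defined above:

virsh start kvm1
ps -ef | grep '[x]-data-plane=on'    # the guest's qemu process line should appear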

2. CloudStack version evolution

CloudStack 4.6 and earlier (on CentOS/RHEL 6.3-6.9) required libvirt 0.9.4 or higher and QEMU/KVM 1.0 or higher; with QEMU/KVM that old, x-data-plane support was out of the question.
As of May 2018, the latest Apache CloudStack release is 4.11.0.0, requiring libvirt 1.2.0 or higher and QEMU/KVM 2.0 or higher; with QEMU/KVM that new, x-data-plane support becomes possible. I verified, however, that the disk offering parameters in CloudStack 4.11.0.0 still expose no x-data-plane option. Let's hope a later release adds it!

Then again, gaining x-data-plane means losing the ability to use the qcow2 image format, which hardly seems worth the trade.
It's a toss-up, and hard to say whether this is CloudStack's fault or a problem shared by all the stacks!
So, is there a compromise or a stopgap solution? The answer: yes!

3. Fixing it with operations rather than an upgrade!

The hands-on procedure is as follows:

service libvirtd stop
sh /root/bin/cpulimit.sh libvirtd &    # run in the background -- the script below loops forever
libvirtd -d
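To make the workaround survive a host reboot, one option is to launch the watchdog from /etc/rc.local. A sketch for a CentOS 6 style init (the log path is my own choice):

cat >> /etc/rc.local <<'EOF'
nohup sh /root/bin/cpulimit.sh libvirtd >/var/log/cpulimit.log 2>&1 &
EOF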

The complete script (/root/bin/cpulimit.sh) is attached below:

#!/bin/bash
# cpulimit.sh -- watchdog that confines a runaway daemon (here: libvirtd)
# to 10% of one CPU core via the cgroup cpu controller. Must run as root;
# on CentOS 6 the cgroup hierarchy is mounted at /cgroup.
PROGNAME=`basename $0`
cgroupDir='/cgroup/'
pDir=$1

if [ -z "${pDir}" ]; then
    echo "usage: $PROGNAME <process-name>, e.g. $PROGNAME libvirtd"
    exit 1
fi

# Create the cgroup if needed and apply the cap:
# cfs_quota_us / cfs_period_us = 10000 / 100000 = 10% of a single CPU.
function quota()
{
    if [ ! -d ${cgroupDir}/cpu/${pDir} ]; then
        mkdir ${cgroupDir}/cpu/${pDir}
    fi
    cd ${cgroupDir}/cpu/${pDir}
    echo 100000 > cpu.cfs_period_us
    echo 10000  > cpu.cfs_quota_us
    echo 1024   > cpu.shares
    # the tasks file accepts only one PID per write
    for p in $Pid; do
        echo $p > tasks
    done
    cat tasks
}

# Remove the cgroup (cgdelete comes from the libcgroup package).
function unquota()
{
    cgdelete cpu:/$pDir
}

# Main loop: (re)apply the quota whenever the target process appears
# or its PID changes, e.g. after the daemon is restarted.
while true
do
    Pid=`ps ax | grep $pDir | grep -vE "grep|$PROGNAME" | awk '{print $1}'`
    if [ -z "${Pid}" ]; then
        echo "$pDir service not started."
        sleep 60
        continue
    fi
    limitPid=`cat ${cgroupDir}/cpu/${pDir}/tasks 2>/dev/null | tr -d ' '`
    if [ -z "${limitPid}" ]; then
        echo "tasks is null"
        echo "start cpulimit"
        quota
        sleep 60
        continue
    fi
    if [ "$Pid" != "$limitPid" ]; then
        echo "start cpulimit"
        unquota
        quota
    else
        echo "no change"
    fi
    sleep 600
done

exit 0
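Once the watchdog has placed libvirtd into the cgroup, a quick sanity check (paths follow the script's cgroupDir):

cat /cgroup/cpu/libvirtd/tasks              # should list the libvirtd PID
cat /cgroup/cpu/libvirtd/cpu.cfs_quota_us   # 10000, i.e. 10% of one core
top -p `pidof libvirtd`                     # %CPU should now hover around 10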

At this point, the deadlocked libvirtd process's resource usage is effectively capped, and system stability is preserved!

posted @ 2018-05-17 20:53 火罐儿