Table of Contents
Common Xen problems.
- 1.1. How to limit the number of dom0 CPUs?
- 1.2. How to prevent Xen from rebooting on panic?
- 1.3. How to escape from a VM console?
- 1.4. What is the domain builder in the guest configuration file?
- 1.5. How to access virtual console tty0 in vncviewer?
- 1.6. How many VIFs can I use in an HVM guest?
- 1.7. How many VBDs can I use in an HVM guest?
- 2.1. How to set up Kdump for Xen?
- 2.2. How to enable automatic core dumps of Xen guests?
- 2.3. How to set up gdb for Xen guest debugging?
- 2.4. How to set the virtual guest clock?
- 2.5. How to connect to the serial console of an HVM guest?
- 2.6. How to access a VM graphical console by VNC?
- 2.7. How to allow root login on the Xen console for a PV guest?
- 2.8. How to enable VFB support in Xen?
Further Xen HOWTOs.
2.1. How to set up Kdump for Xen?

Follow these steps to set up Kdump for Xen debugging in RHEL5.
Append the following parameter to the Xen boot command line:
crashkernel=128M@32M
Comment out the following lines in /etc/init.d/kdump:
# MEM_RESERVED=`echo $KDUMP_COMMANDLINE | grep "crashkernel=[0-9]\+[MmKkGg]@[0-9]\+[MmGgKk]"`
# if [ -z "$MEM_RESERVED" ]
# then
#     $LOGGER "No crashkernel parameter specified for running kernel"
#     return 1
# fi
Change KDUMP_KERNELVER in /etc/sysconfig/kdump:
KDUMP_KERNELVER="2.6.18-8.0.0.4.1.el5"
Reboot.
Run:
# chkconfig --level 2345 kdump on
# service kdump start
Testing:
# echo "c" >/proc/sysrq-trigger
The machine crashes, the vmcore is captured, and the system reboots.
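The KDUMP_KERNELVER edit above can be scripted. A minimal sketch, assuming the RHEL5 file layout; the helper name is mine, and you should try it on a copy of /etc/sysconfig/kdump first:

```shell
# Sketch: set KDUMP_KERNELVER in a kdump sysconfig-style file.
# Usage: set_kdump_kernelver FILE VERSION
set_kdump_kernelver() {
    # Rewrite the KDUMP_KERNELVER line in place with the given version.
    sed -i "s/^KDUMP_KERNELVER=.*/KDUMP_KERNELVER=\"$2\"/" "$1"
}
```

For example: set_kdump_kernelver /etc/sysconfig/kdump 2.6.18-8.0.0.4.1.el5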
Reference
The Xen Port of Kexec/Kdump - A short introduction and status report.
XenKdumpAnalysis - How to get xen whole-machine dump image and analyse it.
2.2. How to enable automatic core dumps of Xen guests?

Xendump is a facility for capturing vmcore dumps from Xen guests. It is built into the Xen hypervisor. To configure Xendump, follow the steps described below:
Edit /etc/xen/xend-config.sxp and change the following line:
#(enable-dump no)
to:
(enable-dump yes)
Restart the xen daemon:
# service xend restart
Testing:
Set the following in the VM config file:
on_crash = 'restart'
Start the vm with:
# xm create vm.cfg
Run the following commands within a para-virtualized (PV) Xen guest:
# sysctl -w kernel.panic=1
# sysctl -w kernel.panic_on_oops=1
# echo "c" >/proc/sysrq-trigger
Note
Right now, Xendump can be configured to capture vmcore dumps of para-virtualized (PV) Xen guests automatically upon a crash. However, vmcore dumps from fully-virtualized (FV) Xen guests can only be taken manually by running the xm dump-core command.
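The enable-dump edit above is a one-line substitution. A sketch (the helper name is mine; back up the file before running it against /etc/xen/xend-config.sxp, and restart xend afterwards):

```shell
# Sketch: flip "(enable-dump no)" -- commented or not -- to
# "(enable-dump yes)" in a xend-config.sxp-style file.
enable_xendump() {
    sed -i 's/^#\{0,1\}(enable-dump no)/(enable-dump yes)/' "$1"
}
```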
2.3. How to set up gdb for Xen guest debugging

Follow these steps to set up gdbserver for Xen.
Build the GDB server:
$ cd tools/debugger/gdb/
$ ./gdbbuild
Copy ./gdb-6.2.1-linux-i386-xen/gdb/gdbserver/gdbserver-xen to your test machine (dom0).
On your test machine, run:
gdbserver-xen 127.0.0.1:9999 --attach $domid
In another terminal of your test machine, run:
gdb /path/to/vmlinux-syms-2.6.xx-xenU
From within the gdb client session:
(gdb) directory /path/to/linux-2.6.xx-xenU [*]
(gdb) target remote 127.0.0.1:9999
(gdb) bt
(gdb) disass
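The client-side steps can be collected into a single gdb invocation. A hypothetical dry-run helper (the function name is mine): it only prints the command built from the steps above, so you can review the vmlinux path, source directory, and port (defaulting to 9999) before running it:

```shell
# Print, do not run, the gdb command for the session above.
# Usage: xen_gdb_cmd VMLINUX SRCDIR [PORT]
xen_gdb_cmd() {
    printf 'gdb %s -ex "directory %s" -ex "target remote 127.0.0.1:%s" -ex bt\n' \
        "$1" "$2" "${3:-9999}"
}
```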
2.4. How to set the virtual guest clock?

By default, the clocks in a Linux VM are synchronized to the clock running on the control domain and cannot be independently changed (any attempt to set or modify the time in a guest will fail). This mode is a convenient default, since only the control domain needs to run the NTP service to keep accurate time across all VMs.
Paravirtualized guests may also perform their own system clock management:
Add the following lines to /etc/sysctl.conf and reboot the system:
# Set independent wall clock time
xen.independent_wallclock = 1
You can temporarily override the setting for the current session in the proc filesystem. For example, as root run the following command on the guest:
# echo 1 > /proc/sys/xen/independent_wallclock
Pass "independent_wallclock=1" as a boot parameter to the VM.
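The /etc/sysctl.conf step above can be made idempotent with a small helper. A sketch (the function name and the optional file argument are mine, added so it can be tried safely on a copy):

```shell
# Sketch: persist independent wallclock for a PV guest (run inside the guest).
# Usage: set_independent_wallclock [FILE]  (FILE defaults to /etc/sysctl.conf)
set_independent_wallclock() {
    f="${1:-/etc/sysctl.conf}"
    # Append the setting only if it is not already present.
    grep -q '^xen\.independent_wallclock' "$f" 2>/dev/null || \
        printf '# Set independent wall clock time\nxen.independent_wallclock = 1\n' >> "$f"
}
```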
Note
This setting does not apply to hardware virtualized guests.
2.5. How to connect to the serial console of an HVM guest?

It is easy to connect to the serial console of an HVM guest. You should:
Add serial = 'pty' to the VM configuration file vm.cfg.
Add the following lines to /boot/grub/grub.conf of the VM:
serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1
terminal --timeout=10 serial console
Also add the kernel parameters "console=tty0 console=ttyS0,115200n8". This tells Linux to print logs on both tty0 and ttyS0. The result looks like this:
kernel /boot/vmlinuz-2.6.18-8.el5 ro root=LABEL=/ console=tty0 console=ttyS0,115200n8
Add ttyS0 to /etc/securetty of the VM.
Add the following line to /etc/inittab of the VM:
co:2345:respawn:/sbin/agetty ttyS0 115200 vt100-nav
Then execute the following command to start the domain, and you'll get the serial console output:
# xm create -c vm.cfg
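The guest-side securetty/inittab edits above can be sketched as one idempotent helper (the function name and the overridable file paths are mine, so it can be tried on copies before touching the real guest files):

```shell
# Sketch: apply the serial-console login edits inside the HVM guest.
# Usage: enable_hvm_serial_login [SECURETTY] [INITTAB]
enable_hvm_serial_login() {
    securetty="${1:-/etc/securetty}"
    inittab="${2:-/etc/inittab}"
    # Allow root login on ttyS0 (only if not already listed).
    grep -q '^ttyS0$' "$securetty" 2>/dev/null || echo 'ttyS0' >> "$securetty"
    # Spawn a getty on the serial line (only if not already configured).
    grep -q 'agetty ttyS0' "$inittab" 2>/dev/null || \
        echo 'co:2345:respawn:/sbin/agetty ttyS0 115200 vt100-nav' >> "$inittab"
}
```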
2.6. How to access a VM graphical console by VNC?

To run vncviewer on Dom0, log in to Dom0 with:
$ ssh -X hostname
If you add:
vncconsole=1
to an HVM guest config file, a vncviewer session will start up when you create that domain.
If you want to set up VNC access to the host computer (Dom0) for any remote computer, edit the /etc/xen/xend-config.sxp file and set:
(vnc-listen '0.0.0.0')
The same option can be applied in a VM's own configuration file (as vnclisten="0.0.0.0") to override the xend global setting.
Setting vnclisten to 0.0.0.0 makes the VNC server listen on all network interfaces, so any remote host can connect to the VM framebuffers. This may compromise security on the host machine.
VNC access can be restricted by setting vnclisten to a specific local IP address in the /etc/xen/xend-config.sxp file, so the server only listens on that interface.
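A sketch of the listen-address edit (the helper name is mine; it assumes the sxp-style "(vnc-listen '...')" line found in a default xend-config.sxp — back up the file first):

```shell
# Sketch: point vnc-listen at a given address in a xend-config.sxp-style file.
# Usage: set_vnc_listen FILE ADDRESS
set_vnc_listen() {
    # Rewrite the (possibly commented-out) vnc-listen line in place.
    sed -i "s/^#\{0,1\}(vnc-listen .*/(vnc-listen '$2')/" "$1"
}
```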
2.7. How to allow root login on the Xen console for a PV guest?

To allow root to log in on the Xen console of a para-virtualized machine, you should:
Add xvc0 to /etc/securetty of the VM.
Add the following line to /etc/inittab of the VM:
co:2345:respawn:/sbin/agetty xvc0 9600 vt100-nav
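The two edits above can be sketched as an idempotent helper (the function name and the overridable paths are mine, for safe testing on copies inside the PV guest):

```shell
# Sketch: allow root login on the xvc0 Xen console inside the PV guest.
# Usage: allow_root_on_xvc0 [SECURETTY] [INITTAB]
allow_root_on_xvc0() {
    securetty="${1:-/etc/securetty}"
    inittab="${2:-/etc/inittab}"
    grep -q '^xvc0$' "$securetty" 2>/dev/null || echo 'xvc0' >> "$securetty"
    grep -q 'agetty xvc0' "$inittab" 2>/dev/null || \
        echo 'co:2345:respawn:/sbin/agetty xvc0 9600 vt100-nav' >> "$inittab"
}
```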
2.8. How to enable VFB support in Xen?

For PV guests, the configuration options look like:
vfb = ["type=vnc,vncunused=1,vnclisten=0.0.0.0,vncpasswd=passwd"]
Start the VM; you will then see a process like:
/usr/lib64/xen/bin/xen-vncfb --unused --listen 0.0.0.0 --domid 13 --title xen_el5_x86_64_para
You can kill this process and rerun /usr/lib64/xen/bin/xen-vncfb.
For HVM guests, the configuration options look like:
vnc=1 vncunused=1 vnclisten="0.0.0.0" vncpasswd="passwd" vncconsole=1
Start the VM; you will then see a process like:
/usr/lib64/xen/bin/qemu-dm -d 9 -vcpus 1 -boot c -serial pty -acpi -domain-name xen_el5_x86_64_hvm -net nic,vlan=1,macaddr=00:16:3e:5a:af:2a,model=rtl8139 -net tap,vlan=1,bridge=xenbr0 -vnc 0.0.0.0:0 -vncunused -vncviewer
Do not kill this process; killing it will destroy the VM.
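Since the PV xen-vncfb process can be killed and relaunched, a hypothetical dry-run helper (name mine) that prints the relaunch command shown above for a given domain id and title, so you can review it before running:

```shell
# Print, do not run, the xen-vncfb relaunch command for a PV guest.
# Usage: vncfb_cmd DOMID TITLE
vncfb_cmd() {
    printf '/usr/lib64/xen/bin/xen-vncfb --unused --listen 0.0.0.0 --domid %s --title %s\n' \
        "$1" "$2"
}
```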