public inbox for kvm@vger.kernel.org
* guests hang when running concurrently
@ 2011-08-16 10:15 Paolo Greppi
From: Paolo Greppi @ 2011-08-16 10:15 UTC (permalink / raw)
  To: kvm

Hi there,

I'm seeing an issue where launching more than one guest concurrently causes 
them all to hang within 1 to 60 minutes, even when there is no activity.
By hanging I mean that:
- each guest takes 100% CPU
- guests do not respond to ssh, ACPI shutdown/restart etc.
- the libvirt daemon does not respond
This only happens with Linux guests: Windows guests are OK.

     * What cpu model (examples: Intel Core Duo, Intel Core 2 Duo, AMD 
Opteron 2210). See /proc/cpuinfo if you're not sure.
Intel(R) Xeon(R) CPU           E3113  @ 3.00GHz
     * What kvm version you are using. If you're using git directly, 
provide the output of 'git describe'.
0.12.5
     * The host kernel version
debian 6 squeeze
     * What host kernel arch you are using (i386 or x86_64)
x86_64
     * What guest you are using, including OS type (Linux, Windows, 
Solaris, etc.), bitness (32 or 64), kernel version
debian 6 squeeze x86
debian 6 squeeze AMD64
debian 7 wheezy x86
debian 7 wheezy AMD64
     * The qemu command line you are using to start the guest
/usr/bin/kvm -S -M pc-0.12 -cpu qemu32 -enable-kvm -m 1024 -smp 
1,sockets=1,cores=1,threads=1 -name deb6_32_dev -uuid 
aef29541-a4d6-41cb-ab47-549c310ffe0e -nodefaults -chardev 
socket,id=monitor,path=/var/lib/libvirt/qemu/deb6_32_dev.monitor,server,nowait 
-mon chardev=monitor,mode=readline -rtc base=utc -boot c -drive 
file=/dev/vg0/lv23,if=none,id=drive-virtio-disk0,boot=on,format=raw 
-device 
virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0 
-drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw 
-device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 
-device 
virtio-net-pci,vlan=0,id=net0,mac=00:16:36:26:d8:40,bus=pci.0,addr=0x3 
-net tap,fd=48,vlan=0,name=hostnet0 -chardev pty,id=serial0 -device 
isa-serial,chardev=serial0 -usb -device usb-tablet,id=input0 -vnc 
127.0.0.1:17 -k it -vga vmware -device 
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
/usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 1536 -smp 
1,sockets=1,cores=1,threads=1 -name deb6_64_dev -uuid 
66633ccf-3201-2f90-d125-8726885f1b72 -nodefaults -chardev 
socket,id=monitor,path=/var/lib/libvirt/qemu/deb6_64_dev.monitor,server,nowait 
-mon chardev=monitor,mode=readline -rtc base=utc -boot c -drive 
file=/dev/vg0/lv26,if=none,id=drive-virtio-disk0,boot=on,format=raw 
-device 
virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0 
-drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw 
-device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 
-device 
virtio-net-pci,vlan=0,id=net0,mac=00:16:36:7c:31:7c,bus=pci.0,addr=0x3 
-net tap,fd=48,vlan=0,name=hostnet0 -chardev pty,id=serial0 -device 
isa-serial,chardev=serial0 -usb -device usb-tablet,id=input0 -vnc 
127.0.0.1:19 -k it -vga vmware -device 
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
/usr/bin/kvm -S -M pc-0.12 -cpu qemu32 -enable-kvm -m 1024 -smp 
1,sockets=1,cores=1,threads=1 -name deb7_32_dev -uuid 
ffb1f6e5-bb67-2ed7-1d45-cf69fc61f9cd -nodefaults -chardev 
socket,id=monitor,path=/var/lib/libvirt/qemu/deb7_32_dev.monitor,server,nowait 
-mon chardev=monitor,mode=readline -rtc base=utc -boot c -drive 
file=/dev/vg0/lv27,if=none,id=drive-virtio-disk0,boot=on,format=raw 
-device 
virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0 
-drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw 
-device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 
-device 
virtio-net-pci,vlan=0,id=net0,mac=00:16:36:68:7d:27,bus=pci.0,addr=0x3 
-net tap,fd=48,vlan=0,name=hostnet0 -chardev pty,id=serial0 -device 
isa-serial,chardev=serial0 -usb -device usb-tablet,id=input0 -vnc 
127.0.0.1:21 -k it -vga vmware -device 
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
/usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 1536 -smp 
1,sockets=1,cores=1,threads=1 -name deb7_64_dev -uuid 
c5b596b0-dd0c-1d92-9786-17d36eff774d -nodefaults -chardev 
socket,id=monitor,path=/var/lib/libvirt/qemu/deb7_64_dev.monitor,server,nowait 
-mon chardev=monitor,mode=readline -rtc base=utc -boot c -drive 
file=/dev/vg0/lv28,if=none,id=drive-virtio-disk0,boot=on,format=raw 
-device 
virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0 
-drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw 
-device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 
-device 
virtio-net-pci,vlan=0,id=net0,mac=00:16:36:e2:5a:64,bus=pci.0,addr=0x3 
-net tap,fd=48,vlan=0,name=hostnet0 -chardev pty,id=serial0 -device 
isa-serial,chardev=serial0 -usb -device usb-tablet,id=input0 -vnc 
127.0.0.1:23 -k it -vga vmware -device 
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
     * Whether the problem goes away if using the -no-kvm-irqchip or 
-no-kvm-pit switch.
No, the problem does not go away if kvm is started with the 
-no-kvm-irqchip switch.
     * Whether the problem also appears with the -no-kvm switch.
Yes, the problem also appears if kvm is started with the -no-kvm switch; 
it just takes longer to show up.

I tried the following without finding any clue:
- looking at the libvirt logs
- monitoring the kernel diagnostics on the guests with netconsole
- leaving an ssh shell connected to the guest with top running at a high 
refresh rate, hoping to see some process pick up CPU
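For reference, the netconsole monitoring was set up roughly as sketched 
below; the addresses, ports, interface name and MAC are placeholders, not 
the actual values from this setup:

```shell
# Sketch of a netconsole setup for watching a guest's kernel messages
# remotely (all addresses/ports/MAC below are placeholders).
#
# In the guest (as root), stream kernel messages to 192.168.1.1:6666:
#   modprobe netconsole \
#     netconsole=6665@192.168.1.10/eth0,6666@192.168.1.1/00:16:36:00:00:01
#
# On the collecting machine, listen for the UDP log stream:
#   nc -l -u 6666
#
# Build the module parameter string here so its syntax
# (src-port@src-ip/dev,dst-port@dst-ip/dst-mac) is visible:
src_port=6665; src_ip=192.168.1.10; dev=eth0
dst_port=6666; dst_ip=192.168.1.1; dst_mac=00:16:36:00:00:01
param="netconsole=${src_port}@${src_ip}/${dev},${dst_port}@${dst_ip}/${dst_mac}"
echo "$param"
```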

The strace output for the kvm processes always shows the following as the 
last system call (complete traces available on request):
futex(0x858560, FUTEX_WAIT_PRIVATE, 2, NULL <unfinished ...>
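For what it's worth, when a process sits in futex() like this, dumping the 
kernel-side stack of each of its threads can show which lock it is waiting 
on. A minimal sketch (PID would be the hung kvm process; it uses its own 
pid here only so the snippet is self-contained):

```shell
# Dump the kernel stack of every thread of $PID.  Reading
# /proc/<pid>/task/<tid>/stack usually requires root, so fall back
# gracefully when it is not readable.  PID=$$ is just for illustration.
PID=$$
threads=0
for t in /proc/"$PID"/task/*/; do
    echo "== thread $(basename "$t") =="
    cat "${t}stack" 2>/dev/null || echo "  (need root to read kernel stack)"
    threads=$((threads + 1))
done
echo "inspected $threads thread(s)"
```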

I also traced with trace-cmd (complete traces available on request); the 
last 10 lines of the report, filtered for the pid of the last kvm instance 
to hang, are:
   <...>-31376 [001] 383558.375239: kvm_apic_accept_irq:  apicid 0 vec 81 (LowPrio|edge)
   <...>-31376 [001] 383558.958556: kvm_set_irq:          gsi 26 level 1 source 0
   <...>-31376 [001] 383558.958556: kvm_msi_set_irq:      dst 1 vec 61 (LowPrio|logical|edge|rh)
   <...>-31376 [001] 383558.958557: kvm_apic_accept_irq:  apicid 0 vec 97 (LowPrio|edge)
   <...>-31376 [001] 383558.958602: kvm_set_irq:          gsi 26 level 1 source 0
   <...>-31376 [001] 383558.958602: kvm_msi_set_irq:      dst 1 vec 61 (LowPrio|logical|edge|rh)
   <...>-31376 [001] 383558.958602: kvm_apic_accept_irq:  apicid 0 vec 97 (LowPrio|edge)
   <...>-31376 [001] 383558.958623: kvm_set_irq:          gsi 26 level 1 source 0
   <...>-31376 [001] 383558.958623: kvm_msi_set_irq:      dst 1 vec 61 (LowPrio|logical|edge|rh)
   <...>-31376 [001] 383558.958623: kvm_apic_accept_irq:  apicid 0 vec 97 (LowPrio|edge)
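For reference, a trace like the one above can be captured roughly as 
follows; this is a sketch (trace-cmd needs root and the kernel's kvm 
tracepoints, and the pid filter matches the 31376 instance above), guarded 
so it skips cleanly where trace-cmd is unavailable:

```shell
# Record kvm tracepoints for a few seconds, then show the tail of the
# report filtered for one qemu/kvm pid (31376 here, from the trace above).
# Skips cleanly when trace-cmd is missing or we are not root.
status=skipped
if command -v trace-cmd >/dev/null 2>&1 && [ "$(id -u)" = 0 ]; then
    trace-cmd record -e kvm -o /tmp/kvm-trace.dat sleep 5
    trace-cmd report -i /tmp/kvm-trace.dat | grep -- '-31376 ' | tail -10
    status=recorded
fi
echo "trace: $status"
```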

TIA,

Paolo
