* [Qemu-devel] QEMU/KVM performance gets worse - high load - high interrupts - high context switches
From: Gerhard Wiesinger @ 2015-12-08 9:39 UTC
To: qemu-devel
Hello,
Yesterday I looked at my munin statistics on my KVM host and saw that
performance is getting worse: load is increasing, and interrupts and
context switches are high and rising. The VMs and applications didn't
change in a way that would explain this.
You can find the graphs at: http://www.wiesinger.com/tmp/kvm/
I guess the last spike was the upgrade from FC22 to FC23, or a kernel
update; it was even lower on older versions.
To me it looks like the high interrupt rate and context switches are
the root cause. Interrupts inside each VM are <100, so with 10 VMs I'd
expect about 1000 plus the baseload, i.e. <2000 on the host; see the
statistics below.
All VMs use virtio for disk and network except one (IDE/rtl8139).
# Host as well as all guests (except 2 VMs):
uname -a
Linux kvm 4.2.6-301.fc23.x86_64 #1 SMP Fri Nov 20 22:22:41 UTC 2015
x86_64 x86_64 x86_64 GNU/Linux
qemu-system-x86-2.4.1-1.fc23.x86_64
Platform:
All VMs use the pc-i440fx-2.4 machine type (I upgraded yesterday from
pc-i440fx-2.3 without any change).
Any ideas? Is anyone having the same issues?
Ciao,
Gerhard
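
For reference, the statistics below were collected roughly like this (a
sketch; the 5-second sampling interval is an assumption):

# on the host: memory, swap, I/O, interrupts and context switches every 5 seconds
vmstat 5
# interrupt counts inside a guest can be checked the same way, or directly:
cat /proc/interrupts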
kvm: no VM running
 r  b   swpd    free   buff   cache   si   so   bi   bo   in   cs us sy  id wa st
 0  0      0 3308516 102408 3798568    0    0    0   12  197  679  0  0  99  0  0
 0  0      0 3308516 102416 3798564    0    0    0   42  197  914  0  0  99  1  0
 0  0      0 3308516 102416 3798568    0    0    0    0  190  791  0  0 100  0  0
 2  0      0 3308484 102416 3798568    0    0    0    0  129  440  0  0 100  0  0
kvm: 2 VMs running
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd    free   buff   cache   si   so   bi   bo   in   cs us sy  id wa st
 1  0      0 2641464 103052 3814700    0    0    0    0 2715 5648  3  2  95  0  0
 0  0      0 2641340 103052 3814700    0    0    0    0 2601 5555  1  2  97  0  0
 1  0      0 2641308 103052 3814700    0    0    0    5 2687 5708  3  2  95  0  0
 0  0      0 2640620 103060 3814628    0    0    0   30 2779 5756  4  3  93  1  0
 0  0      0 2640644 103060 3814636    0    0    0    0 2436 5364  1  2  97  0  0
 1  0      0 2640520 103060 3814636    0    0    0  119 2734 5975  3  2  95  0  0
kvm: all 10 VMs running
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd    free   buff   cache   si   so   bi   bo   in    cs us sy  id wa st
 1  0      0   60408  78892 3371984    0    0    0   85 9015 17357  4  9  87  0  0
 2  0      0   60408  78892 3371968    0    0    0   47 9375 17797  9  9  82  0  0
 0  0      0   60472  78892 3372092    0    0   40   60 8882 17343  4  8  86  1  0
 1  0      0   60316  78892 3372080    0    0    0   59 8863 17517  4  8  87  0  0
 0  0      0   59540  78900 3372092    0    0    0   55 9135 17796  8  9  81  1  0
 0  0      0   59168  78900 3372112    0    0    0   51 8931 17484  4  9  87  0  0
cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 15
model name : Intel(R) Core(TM)2 Quad CPU @ 2.66GHz
stepping : 7
* Re: [Qemu-devel] QEMU/KVM performance gets worse - high load - high interrupts - high context switches
From: Gerhard Wiesinger @ 2015-12-11 8:03 UTC
To: qemu-devel
Any comments?
Ciao,
Gerhard
On 08.12.2015 10:39, Gerhard Wiesinger wrote:
> [...]
* Re: [Qemu-devel] QEMU/KVM performance gets worse - high load - high interrupts - high context switches
From: Gerhard Wiesinger @ 2016-01-09 16:46 UTC
To: qemu-devel
On 08.12.2015 10:39, Gerhard Wiesinger wrote:
> [...]
OK, I found what the problem is:
Analysis via:
1.) kvm_stat
2.) /usr/bin/perf record -p <PID of qemu>
/usr/bin/perf report -i perf.data > perf-report.txt
cat perf-report.txt
# Overhead  Command          Shared Object       Symbol
# ........  ...............  .................   ..........................................
#
    15.75%  qemu-system-x86  [kernel.kallsyms]   [k] __fget
     8.33%  qemu-system-x86  [kernel.kallsyms]   [k] _raw_spin_lock_irqsave
     7.54%  qemu-system-x86  [kernel.kallsyms]   [k] fput
     6.61%  qemu-system-x86  [kernel.kallsyms]   [k] do_sys_poll
     3.60%  qemu-system-x86  [kernel.kallsyms]   [k] __pollwait
     2.20%  qemu-system-x86  [kernel.kallsyms]   [k] _raw_write_unlock_irqrestore
     2.09%  qemu-system-x86  libpthread-2.22.so  [.] pthread_mutex_lock
...
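
To narrow down which device model is causing the exits, the KVM
tracepoints can be used as well (a sketch, not what was run here; the
10-second window is arbitrary):

# record KVM exit events system-wide for 10 seconds, then summarize
/usr/bin/perf record -e 'kvm:kvm_exit' -a -- sleep 10
/usr/bin/perf report --stdio | head -n 30
# or watch exit/injection counters live
kvm_stat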
I also found:
1.) https://bugzilla.redhat.com/show_bug.cgi?id=949547
2.) https://www.kraxel.org/blog/2014/03/qemu-and-usb-tablet-cpu-consumtion/
After reading that I did the following:
# On 10 Linux VMs I removed:
# 1.) The serial device itself
# 2.) The virtio-serial PCI controller
# 3.) The USB mouse tablet
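
A sketch of how these devices can be removed with libvirt (the domain
name "vm01" is a placeholder; which XML elements are present depends on
the individual guest configuration):

# dump the domain XML; delete the <serial>/<console> devices, the
# <controller type='virtio-serial'> (plus any <channel> attached to it) and
# the <input type='tablet' bus='usb'/> element; then redefine and restart the VM
virsh dumpxml vm01 > vm01.xml
# ... edit vm01.xml ...
virsh define vm01.xml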
# Positive consequences via munin monitoring:
# Reduced fork rate: 40 => 13
# process states: running 15 => <1
# CPU temperature: (core dependent) 65-70°C => 56-64°C
# CPU usage: system: 47% => 15%, user: 76% => 50%
# Context Switches: 20k => 7.5k
# Interrupts: 16k => 9k
# Load average: 2.8 => 1
=> back at the level of one year ago!
Any idea why the serial device/PCI controller and the USB mouse tablet
consume so much CPU on latest kernel and/or qemu?
Does anyone have the same experience?
Thanks.
Ciao,
Gerhard
* Re: [Qemu-devel] QEMU/KVM performance gets worse - high load - high interrupts - high context switches
From: Paolo Bonzini @ 2016-01-11 8:22 UTC
To: Gerhard Wiesinger, qemu-devel
On 09/01/2016 17:46, Gerhard Wiesinger wrote:
>
> # Positive consequences via munin monitoring:
> # Reduced fork rate: 40 => 13
> # process states: running 15 => <1
> # CPU temperature: (core dependent) 65-70°C => 56-64°C
> # CPU usage: system: 47% => 15%, user: 76% => 50%
> # Context Switches: 20k => 7.5k
> # Interrupts: 16k => 9k
> # Load average: 2.8 => 1
>
> => back at the level of one year ago!
>
> Any idea why the serial device/PCI controller and the USB mouse tablet
> consume so much CPU on latest kernel and/or qemu?
For USB, it's possible that you're not using the USB autosuspend
feature? (As explained in Gerd's blog post, for Microsoft OSes you may
need to fiddle with the registry).
For virtio-serial, I have no idea.
Paolo
* Re: [Qemu-devel] QEMU/KVM performance gets worse - high load - high interrupts - high context switches
From: Gerhard Wiesinger @ 2016-01-11 21:04 UTC
To: Paolo Bonzini, qemu-devel
On 11.01.2016 09:22, Paolo Bonzini wrote:
>
> On 09/01/2016 17:46, Gerhard Wiesinger wrote:
>> # Positive consequences via munin monitoring:
>> # Reduced fork rate: 40 => 13
>> # process states: running 15 => <1
>> # CPU temperature: (core dependent) 65-70°C => 56-64°C
>> # CPU usage: system: 47% => 15%, user: 76% => 50%
>> # Context Switches: 20k => 7.5k
>> # Interrupts: 16k => 9k
>> # Load average: 2.8 => 1
>>
>> => back at the level of one year ago!
>>
>> Any idea why the serial device/PCI controller and the USB mouse tablet
>> consume so much CPU on latest kernel and/or qemu?
> For USB, it's possible that you're not using the USB autosuspend
> feature? (As explained in Gerd's blog post, for Microsoft OSes you may
> need to fiddle with the registry).
>
> For virtio-serial, I have no idea.
Linux VMs: all VMs on this KVM host are Linux VMs, so as per the blog
post the udev rules for USB autosuspend should apply. Most of the
effect came from virtio-serial, though, not from the USB tablet.
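
For reference, the udev approach from the blog post boils down to
switching the tablet's power/control attribute to "auto"; a sketch (the
QEMU tablet USB ID 0627:0001 and the sysfs path are assumptions):

# /etc/udev/rules.d/90-qemu-tablet-autosuspend.rules
ACTION=="add", SUBSYSTEM=="usb", ATTR{idVendor}=="0627", ATTR{idProduct}=="0001", TEST=="power/control", ATTR{power/control}="auto"
# or ad hoc inside the guest (device path is a placeholder):
echo auto > /sys/bus/usb/devices/1-1/power/control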
For another KVM host with a Windows 7 VM: I tried to apply the blog
post, but there is no option "Allow the computer to turn off the device
to save power".
Nevertheless, CPU usage could be reduced by removing the USB tablet
from the Win7 VM.
Ciao,
Gerhard