From: Andrew Theurer <habanero@linux.vnet.ibm.com>
To: kvm-devel <kvm@vger.kernel.org>
Subject: KVM performance vs. Xen
Date: Wed, 29 Apr 2009 09:41:50 -0500
Message-ID: <49F8672E.5080507@linux.vnet.ibm.com>
I wanted to share some performance data comparing KVM and Xen in a
more complex situation than the usual single-guest tests:
heterogeneous server consolidation.
The Workload:
The workload simulates consolidating many servers onto a single host.
There are 3 active server types: web, imap, and app (j2ee). In
addition, there are "helper" servers which are also consolidated: a db
server, which backs the app server, and an nfs server, which backs the
web server (a portion of the docroot is nfs mounted). There is also
one other server that is simply idle. These 6 servers make up one set.
The first 3 server types are sent client requests, which in turn may
generate requests to the db and nfs helper servers. The request rate
is throttled to produce a fixed amount of work, so utilization on the
host is increased by adding more sets of these servers. The clients
that send the requests also enforce a response time requirement, which
is monitored. All of the following results passed the response time
requirements.
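For concreteness, the nfs dependency of the web guests looks something
like this (export and mount paths here are made up for illustration,
not the exact ones used):

    # on the nfs helper guest, /etc/exports:
    /export/docroot *(ro,sync)

    # on each web guest, mount the shared portion of the docroot:
    mount -t nfs nfs-helper:/export/docroot /var/www/html/shared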
The host hardware:
A 2-socket, 8-core Nehalem with SMT, EPT enabled, lots of disks, and
4 x 1 Gb Ethernet adapters.
The host software:
Both Xen and KVM use the same host Linux OS, SLES11. KVM uses the
2.6.27.19-5-default kernel and Xen uses the 2.6.27.19-5-xen kernel. I
have tried 2.6.29 for KVM, but the results were actually worse. The
KVM modules are rebuilt from kvm-85, and qemu is also from kvm-85.
The Xen version is "3.3.1_18546_12-3.1".
The guest software:
All guests run RedHat 5.3. The same disk images are used for both
hypervisors, but with different kernels: Xen uses the RedHat Xen
kernel, and KVM uses 2.6.29 with all paravirt build options enabled.
Both use PV I/O drivers. Software used: Apache, PHP, Java, Glassfish,
Postgresql, and Dovecot.
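For the curious, "all paravirt options" plus PV I/O amounts to a
2.6.29 guest config along these lines (not an exhaustive list):

    CONFIG_PARAVIRT_GUEST=y
    CONFIG_PARAVIRT=y
    CONFIG_KVM_CLOCK=y
    CONFIG_KVM_GUEST=y
    CONFIG_VIRTIO=y
    CONFIG_VIRTIO_PCI=y
    CONFIG_VIRTIO_BLK=y
    CONFIG_VIRTIO_NET=y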
Hypervisor configurations:
- Xen guests use "phy:" devices for disks
- KVM guests use "-drive" for disks, with cache=none
- KVM guests are backed with large pages
- Memory and CPU sizings differ per guest type, but a given guest's
  sizings are the same for Xen and KVM (examples below)
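Concretely, per guest this means something like the following (device
paths, sizes, and counts are illustrative, not the exact values used):

    # Xen guest config: phy:-backed disk
    disk = [ 'phy:/dev/vg0/guest1,xvda,w' ]

    # KVM: back guest RAM with hugetlbfs, bypass the host page cache
    # on the disk, and use virtio for disk and net
    mount -t hugetlbfs hugetlbfs /hugepages
    echo 12288 > /proc/sys/vm/nr_hugepages
    qemu-system-x86_64 -m 2048 -smp 2 \
        -drive file=/dev/vg0/guest1,if=virtio,cache=none \
        -mem-path /hugepages \
        -net nic,model=virtio -net tap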
The test run configuration:
Four sets of servers are used, so that's 24 guests total (20 active
ones, 4 idle ones).
Test Results:
Throughput is equal in these tests, as the clients throttle the work
(assuming you don't run out of a resource on the host). What's
telling is the CPU used to do the same amount of work:

Xen: 52.85%
KVM: 66.93%

So KVM requires 66.93/52.85 = 26.6% more CPU to do the same amount of
work. Here's the KVM breakdown:
total   user   nice  system   irq  softirq  guest
66.90   7.20   0.00   12.94  0.35     3.39  43.02
Comparing all non-guest busy time (66.90 - 43.02 = 23.88) to guest
time, that's a 23.88/43.02 = 55% overhead for virtualization. I
certainly don't expect it to be 0, but 55% seems a bit high. So,
what's the reason for this overhead? At the bottom is oprofile output
of the top functions for KVM. Some observations:
1) I'm seeing about 2.3% in scheduler functions [that I recognize].
Does that seem a bit excessive?
2) cpu_physical_memory_rw due to not using preadv/pwritev?
3) vmx_[save|load]_host_state: I take it this is from guest switches?
We have 180,000 context switches a second. Is this more than expected?
I wonder if schedstats can show why we context switch (need to let
someone else run, yielded, waiting on io, etc.); a first cut at
checking this is sketched below.
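A cheap way to start, without full schedstats, is the per-task
counters, which at least separate "blocked/yielded" from "preempted"
(the pid selection here is illustrative):

    # voluntary (blocked, yielded) vs. involuntary (preempted)
    # context switches for one qemu process:
    grep ctxt_switches /proc/$(pidof qemu-system-x86_64 | awk '{print $1}')/status

    # system-wide scheduler stats, if CONFIG_SCHEDSTATS=y:
    cat /proc/schedstat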
samples     %        image                        symbol name
385914891 61.3122 kvm-intel.ko vmx_vcpu_run
11413793 1.8134 libc-2.9.so /lib64/libc-2.9.so
8943054 1.4208 qemu-system-x86_64 cpu_physical_memory_rw
6877593 1.0927 kvm.ko kvm_arch_vcpu_ioctl_run
6469799 1.0279 qemu-system-x86_64 phys_page_find_alloc
5080474 0.8072 vmlinux-2.6.27.19-5-default copy_user_generic_string
4154467 0.6600 kvm-intel.ko __vmx_load_host_state
3991060 0.6341 vmlinux-2.6.27.19-5-default schedule
3455331 0.5490 kvm-intel.ko vmx_save_host_state
2582344 0.4103 vmlinux-2.6.27.19-5-default find_busiest_group
2509543 0.3987 qemu-system-x86_64 main_loop_wait
2457476 0.3904 vmlinux-2.6.27.19-5-default kfree
2395296 0.3806 kvm.ko kvm_set_irq
2385298 0.3790 vmlinux-2.6.27.19-5-default fget_light
2229755 0.3543 vmlinux-2.6.27.19-5-default __switch_to
2178739 0.3461 bnx2.ko bnx2_rx_int
2156418 0.3426 vmlinux-2.6.27.19-5-default complete_signal
1854497 0.2946 qemu-system-x86_64 virtqueue_get_head
1833823 0.2913 vmlinux-2.6.27.19-5-default try_to_wake_up
1816954 0.2887 qemu-system-x86_64 cpu_physical_memory_map
1776548 0.2822 oprofiled find_kernel_image
1737294 0.2760 vmlinux-2.6.27.19-5-default kmem_cache_alloc
1662346 0.2641 qemu-system-x86_64 virtqueue_avail_bytes
1651070 0.2623 vmlinux-2.6.27.19-5-default do_select
1643139 0.2611 vmlinux-2.6.27.19-5-default update_curr
1640495 0.2606 vmlinux-2.6.27.19-5-default kmem_cache_free
1606493 0.2552 libpthread-2.9.so pthread_mutex_lock
1549536 0.2462 qemu-system-x86_64 lduw_phys
1535539 0.2440 vmlinux-2.6.27.19-5-default tg_shares_up
1438468 0.2285 vmlinux-2.6.27.19-5-default mwait_idle
1316461 0.2092 vmlinux-2.6.27.19-5-default __down_read
1282486 0.2038 vmlinux-2.6.27.19-5-default native_read_tsc
1226069 0.1948 oprofiled odb_update_node
1224551 0.1946 vmlinux-2.6.27.19-5-default sched_clock_cpu
1222684 0.1943 tun.ko tun_chr_aio_read
1194034 0.1897 vmlinux-2.6.27.19-5-default task_rq_lock
1186129 0.1884 kvm.ko x86_decode_insn
1131644 0.1798 bnx2.ko bnx2_start_xmit
1115575 0.1772 vmlinux-2.6.27.19-5-default enqueue_hrtimer
1044329 0.1659 vmlinux-2.6.27.19-5-default native_sched_clock
988546 0.1571 vmlinux-2.6.27.19-5-default fput
980615 0.1558 vmlinux-2.6.27.19-5-default __up_read
942270 0.1497 qemu-system-x86_64 kvm_run
925076 0.1470 kvm-intel.ko vmcs_writel
889220 0.1413 vmlinux-2.6.27.19-5-default dev_queue_xmit
884786 0.1406 kvm.ko kvm_apic_has_interrupt
880421 0.1399 librt-2.9.so /lib64/librt-2.9.so
880306 0.1399 vmlinux-2.6.27.19-5-default nf_iterate
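As an aside, rolling the flat profile up by image makes the qemu vs.
kernel vs. module split easier to see. With the report above saved to
a file (profile.txt here is just an example name), something like:

    # sum the % column (field 2) per image (field 3)
    awk 'NR > 1 { pct[$3] += $2 }
         END { for (i in pct) printf "%7.2f  %s\n", pct[i], i }' \
        profile.txt | sort -rn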
-Andrew Theurer