From: Marcelo Tosatti <mtosatti@redhat.com>
To: Avi Kivity <avi@redhat.com>
Cc: Andrew Theurer <habanero@linux.vnet.ibm.com>,
kvm-devel <kvm@vger.kernel.org>
Subject: Re: KVM performance vs. Xen
Date: Thu, 30 Apr 2009 13:41:16 -0300 [thread overview]
Message-ID: <20090430164116.GA10422@amt.cnet> (raw)
In-Reply-To: <49F967AE.4040905@redhat.com>
On Thu, Apr 30, 2009 at 11:56:14AM +0300, Avi Kivity wrote:
> Andrew Theurer wrote:
>> Comparing guest time to all other busy time, that's a 23.88/43.02 = 55%
>> overhead for virtualization. I certainly don't expect it to be 0, but
>> 55% seems a bit high. So, what's the reason for this overhead? At the
>> bottom is oprofile output of top functions for KVM. Some observations:
>>
>> 1) I'm seeing about 2.3% in scheduler functions [that I recognize].
>> Does that seem a bit excessive?
>
> Yes, it is. If there is a lot of I/O, this might be due to the thread
> pool used for I/O.
>
>> 2) cpu_physical_memory_rw due to not using preadv/pwritev?
>
> I think both virtio-net and virtio-blk use memcpy().
>
>> 3) vmx_[save|load]_host_state: I take it this is from guest switches?
>
> These are called when you context-switch from a guest, and, much more
> frequently, when you enter qemu.
>
>> We have 180,000 context switches a second. Is this more than expected?
>
>
> Way more. Across 16 logical cpus, this is >10,000 cs/sec/cpu.
>
>> I wonder if schedstats can show why we context switch (need to let
>> someone else run, yielded, waiting on io, etc).
>>
>
> Yes, there is a scheduler tracer, though I have no idea how to operate it.
>
> Do you have kvm_stat logs?
In case the kvm_stat logs don't shed enough light, this should help.
Documentation/trace/ftrace.txt:
sched_switch
------------
This tracer simply records schedule switches. Here is an example
of how to use it.
# echo sched_switch > /debug/tracing/current_tracer
# echo 1 > /debug/tracing/tracing_enabled
# sleep 1
# echo 0 > /debug/tracing/tracing_enabled
# cat /debug/tracing/trace
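Once a trace has been captured, a quick way to see which tasks account for the switches is to count the `==>` switch records per outgoing task. A minimal sketch; the here-doc holds sample lines in the sched_switch layout shown in ftrace.txt, standing in for real `/debug/tracing/trace` output:

```shell
# Count context switches per outgoing task in sched_switch output.
# The here-doc is sample data only; on a live system, pipe
# `cat /debug/tracing/trace` into the awk command instead.
cat <<'EOF' | awk '/==>/ { print $1 }' | sort | uniq -c | sort -rn
            bash-4251  [01] 10152.583854:   4251:120:R   + 4263:120:S
            bash-4251  [01] 10152.583855:   4251:120:R ==> 4263:120:R
           sleep-4263  [01] 10152.583886:   4263:120:S ==> 4251:120:R
            bash-4251  [01] 10152.584013:   4251:120:S ==> 2707:120:R
EOF
```

Comparing the resulting counts against the qemu and vcpu thread pids should show whether the 180,000 cs/sec are concentrated in the vcpu threads or elsewhere (e.g. the I/O thread pool).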