From: Andrew Theurer <habanero@linux.vnet.ibm.com>
To: Avi Kivity <avi@redhat.com>
Cc: Gleb Natapov <gleb@redhat.com>, kvm@vger.kernel.org
Subject: Re: [PATCH] don't call adjust_vmx_controls() second time
Date: Mon, 31 Aug 2009 08:05:26 -0500
Message-ID: <4A9BCA96.8060505@linux.vnet.ibm.com>
In-Reply-To: <4A9A3F7D.1000009@redhat.com>
Avi Kivity wrote:
> On 08/27/2009 11:42 PM, Andrew Theurer wrote:
>> On Thu, 2009-08-27 at 19:21 +0300, Avi Kivity wrote:
>>
>>> On 08/27/2009 06:41 PM, Gleb Natapov wrote:
>>>
>>>> Don't call adjust_vmx_controls() twice for the same control.
>>>> It restores options that were dropped earlier.
>>>>
>>>>
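For anyone following along, here is a rough sketch of why the second call is harmful. This is a simplified stand-in for adjust_vmx_controls(), not the actual kvm code: the real function reads the VMX capability MSR with rdmsr(), whereas here the allowed-0/allowed-1 masks are plain arguments.

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-in for kvm's adjust_vmx_controls().  The real
 * function rdmsr()s the capability MSR; here the masks are passed in. */
static int adjust_controls(uint32_t ctl_min, uint32_t ctl_opt,
                           uint32_t allowed0, uint32_t allowed1,
                           uint32_t *result)
{
	uint32_t ctl = ctl_min | ctl_opt;  /* start from everything wanted */

	ctl &= allowed1;        /* clear bits the CPU can't set to 1 */
	ctl |= allowed0;        /* set bits the CPU can't clear to 0 */

	if (ctl_min & ~ctl)     /* a required bit didn't survive */
		return -1;
	*result = ctl;
	return 0;
}
```

The problem: if the caller drops an optional bit from *result after the first call (say, because a conflicting feature was chosen), calling the function again with the same ctl_opt ORs the dropped bit right back in.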
>>> Applied, thanks. Andrew, if you rerun your benchmark atop kvm.git
>>> 'next' branch, I believe you will see dramatically better results.
>>>
>> Yes! CPU is much lower:
>> user nice system irq softirq guest idle iowait
>> 5.81 0.00 9.48 0.08 1.04 21.32 57.86 4.41
>>
>> previous CPU:
>> user nice system irq softirq guest idle iowait
>> 5.67 0.00 11.64 0.09 1.05 31.90 46.06 3.59
>>
>>
>
> How does it compare to the other hypervisor now?
My original results for the other hypervisor were a little inaccurate. They
mistakenly used 2-vcpu guests. New runs with 1-vcpu guests (as used in
kvm) have slightly lower CPU utilization. Anyway, here's the breakdown:

                          CPU percent   more CPU
kvm-master/qemu-kvm-87:         50.15        78%
kvm-next/qemu-kvm-87:           37.73        34%
>
>> new oprofile:
>>
>>
>>> samples % app name symbol name
>>> 885444 53.2905 kvm-intel.ko vmx_vcpu_run
>>>
>
> guest mode = good
>
>>> 38090 2.2924 qemu-system-x86_64 cpu_physical_memory_rw
>>> 34764 2.0923 qemu-system-x86_64 phys_page_find_alloc
>>> 14730 0.8865 qemu-system-x86_64 qemu_get_ram_ptr
>>> 10814 0.6508 vmlinux-2.6.31-rc5-autokern1 copy_user_generic_string
>>> 10871 0.6543 qemu-system-x86_64 virtqueue_get_head
>>> 8557 0.5150 qemu-system-x86_64 virtqueue_avail_bytes
>>> 7173 0.4317 qemu-system-x86_64 lduw_phys
>>> 4122 0.2481 qemu-system-x86_64 ldl_phys
>>> 3339 0.2010 qemu-system-x86_64 virtqueue_num_heads
>>> 4129 0.2485 libpthread-2.5.so pthread_mutex_lock
>>>
>>>
>
> virtio and related qemu overhead: 8.2%.
>
>>> 25278 1.5214 vmlinux-2.6.31-rc5-autokern1 native_write_msr_safe
>>> 12278 0.7390 vmlinux-2.6.31-rc5-autokern1 native_read_msr_safe
>>>
>
> This will be reduced if we move virtio to kernel context.
Are there plans to move that to the kernel for disk, too?
>>> 12380 0.7451 vmlinux-2.6.31-rc5-autokern1 native_set_debugreg
>>> 3550 0.2137 vmlinux-2.6.31-rc5-autokern1 native_get_debugreg
>>>
>
> A lot less than before, but still annoying.
>
>>> 4631 0.2787 vmlinux-2.6.31-rc5-autokern1 mwait_idle
>>>
>
> idle=halt may improve this, mwait is slow.
I can try idle=halt on the host. I actually assumed it would be using
that, but I'll check.
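(For the archives: idle=halt is a kernel boot parameter, so it goes on the host kernel's command line. The grub entry below is illustrative, not taken from my setup.)

```
# /boot/grub/menu.lst -- append idle=halt to the host kernel line
title  2.6.31-rc5 (idle=halt)
kernel /vmlinuz-2.6.31-rc5-autokern1 ro root=/dev/sda1 idle=halt
```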
Thanks,
-Andrew
Thread overview: 6+ messages
2009-08-27 15:41 [PATCH] don't call adjust_vmx_controls() second time Gleb Natapov
2009-08-27 16:21 ` Avi Kivity
2009-08-27 20:42 ` Andrew Theurer
2009-08-30 8:59 ` Avi Kivity
2009-08-31 13:05 ` Andrew Theurer [this message]
2009-08-31 13:52 ` Avi Kivity