* [PATCH] don't call adjust_vmx_controls() second time
@ 2009-08-27 15:41 Gleb Natapov
2009-08-27 16:21 ` Avi Kivity
0 siblings, 1 reply; 6+ messages in thread
From: Gleb Natapov @ 2009-08-27 15:41 UTC (permalink / raw)
To: avi; +Cc: kvm
Don't call adjust_vmx_controls() two times for the same control.
It restores options that were dropped earlier.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 6b57eed..78101dd 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -1262,12 +1262,9 @@ static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf)
if (_cpu_based_2nd_exec_control & SECONDARY_EXEC_ENABLE_EPT) {
/* CR3 accesses and invlpg don't need to cause VM Exits when EPT
enabled */
- min &= ~(CPU_BASED_CR3_LOAD_EXITING |
- CPU_BASED_CR3_STORE_EXITING |
- CPU_BASED_INVLPG_EXITING);
- if (adjust_vmx_controls(min, opt, MSR_IA32_VMX_PROCBASED_CTLS,
- &_cpu_based_exec_control) < 0)
- return -EIO;
+ _cpu_based_exec_control &= ~(CPU_BASED_CR3_LOAD_EXITING |
+ CPU_BASED_CR3_STORE_EXITING |
+ CPU_BASED_INVLPG_EXITING);
rdmsr(MSR_IA32_VMX_EPT_VPID_CAP,
vmx_capability.ept, vmx_capability.vpid);
}
--
Gleb.
* Re: [PATCH] don't call adjust_vmx_controls() second time
2009-08-27 15:41 [PATCH] don't call adjust_vmx_controls() second time Gleb Natapov
@ 2009-08-27 16:21 ` Avi Kivity
2009-08-27 20:42 ` Andrew Theurer
0 siblings, 1 reply; 6+ messages in thread
From: Avi Kivity @ 2009-08-27 16:21 UTC (permalink / raw)
To: Gleb Natapov; +Cc: kvm, Andrew Theurer
On 08/27/2009 06:41 PM, Gleb Natapov wrote:
> Don't call adjust_vmx_controls() two times for the same control.
> It restores options that were dropped earlier.
>
Applied, thanks. Andrew, if you rerun your benchmark atop the kvm.git
'next' branch, I believe you will see dramatically better results.
--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.
* Re: [PATCH] don't call adjust_vmx_controls() second time
2009-08-27 16:21 ` Avi Kivity
@ 2009-08-27 20:42 ` Andrew Theurer
2009-08-30 8:59 ` Avi Kivity
0 siblings, 1 reply; 6+ messages in thread
From: Andrew Theurer @ 2009-08-27 20:42 UTC (permalink / raw)
To: Avi Kivity; +Cc: Gleb Natapov, kvm
On Thu, 2009-08-27 at 19:21 +0300, Avi Kivity wrote:
> On 08/27/2009 06:41 PM, Gleb Natapov wrote:
> > Don't call adjust_vmx_controls() two times for the same control.
> > It restores options that were dropped earlier.
> >
>
> Applied, thanks. Andrew, if you rerun your benchmark atop kvm.git
> 'next' branch, I believe you will see dramatically better results.
Yes! CPU is much lower:
user nice system irq softirq guest idle iowait
5.81 0.00 9.48 0.08 1.04 21.32 57.86 4.41
previous CPU:
user nice system irq softirq guest idle iowait
5.67 0.00 11.64 0.09 1.05 31.90 46.06 3.59
new oprofile:
> samples % app name symbol name
> 885444 53.2905 kvm-intel.ko vmx_vcpu_run
> 38090 2.2924 qemu-system-x86_64 cpu_physical_memory_rw
> 34764 2.0923 qemu-system-x86_64 phys_page_find_alloc
> 25278 1.5214 vmlinux-2.6.31-rc5-autokern1 native_write_msr_safe
> 18205 1.0957 libc-2.5.so memcpy
> 14730 0.8865 qemu-system-x86_64 qemu_get_ram_ptr
> 14189 0.8540 kvm.ko kvm_arch_vcpu_ioctl_run
> 12380 0.7451 vmlinux-2.6.31-rc5-autokern1 native_set_debugreg
> 12278 0.7390 vmlinux-2.6.31-rc5-autokern1 native_read_msr_safe
> 10871 0.6543 qemu-system-x86_64 virtqueue_get_head
> 10814 0.6508 vmlinux-2.6.31-rc5-autokern1 copy_user_generic_string
> 9080 0.5465 vmlinux-2.6.31-rc5-autokern1 fget_light
> 9015 0.5426 vmlinux-2.6.31-rc5-autokern1 schedule
> 8557 0.5150 qemu-system-x86_64 virtqueue_avail_bytes
> 7805 0.4697 vmlinux-2.6.31-rc5-autokern1 do_select
> 7173 0.4317 qemu-system-x86_64 lduw_phys
> 7019 0.4224 qemu-system-x86_64 main_loop_wait
> 6979 0.4200 vmlinux-2.6.31-rc5-autokern1 audit_syscall_exit
> 5571 0.3353 vmlinux-2.6.31-rc5-autokern1 kfree
> 5170 0.3112 vmlinux-2.6.31-rc5-autokern1 audit_syscall_entry
> 5086 0.3061 vmlinux-2.6.31-rc5-autokern1 fput
> 4631 0.2787 vmlinux-2.6.31-rc5-autokern1 mwait_idle
> 4584 0.2759 kvm.ko kvm_load_guest_fpu
> 4491 0.2703 vmlinux-2.6.31-rc5-autokern1 system_call
> 4461 0.2685 vmlinux-2.6.31-rc5-autokern1 __switch_to
> 4431 0.2667 kvm.ko kvm_put_guest_fpu
> 4371 0.2631 vmlinux-2.6.31-rc5-autokern1 __down_read
> 4290 0.2582 qemu-system-x86_64 kvm_run
> 4218 0.2539 vmlinux-2.6.31-rc5-autokern1 getnstimeofday
> 4129 0.2485 libpthread-2.5.so pthread_mutex_lock
> 4122 0.2481 qemu-system-x86_64 ldl_phys
> 4100 0.2468 vmlinux-2.6.31-rc5-autokern1 do_vfs_ioctl
> 3811 0.2294 kvm.ko find_highest_vector
> 3593 0.2162 vmlinux-2.6.31-rc5-autokern1 unroll_tree_refs
> 3560 0.2143 vmlinux-2.6.31-rc5-autokern1 try_to_wake_up
> 3550 0.2137 vmlinux-2.6.31-rc5-autokern1 native_get_debugreg
> 3506 0.2110 kvm-intel.ko vmcs_writel
> 3487 0.2099 vmlinux-2.6.31-rc5-autokern1 task_rq_lock
> 3434 0.2067 vmlinux-2.6.31-rc5-autokern1 __up_read
> 3368 0.2027 librt-2.5.so clock_gettime
> 3339 0.2010 qemu-system-x86_64 virtqueue_num_heads
>
Thanks very much for the fix!
-Andrew
* Re: [PATCH] don't call adjust_vmx_controls() second time
2009-08-27 20:42 ` Andrew Theurer
@ 2009-08-30 8:59 ` Avi Kivity
2009-08-31 13:05 ` Andrew Theurer
0 siblings, 1 reply; 6+ messages in thread
From: Avi Kivity @ 2009-08-30 8:59 UTC (permalink / raw)
To: habanero; +Cc: Gleb Natapov, kvm
On 08/27/2009 11:42 PM, Andrew Theurer wrote:
> On Thu, 2009-08-27 at 19:21 +0300, Avi Kivity wrote:
>
>> On 08/27/2009 06:41 PM, Gleb Natapov wrote:
>>
>>> Don't call adjust_vmx_controls() two times for the same control.
>>> It restores options that were dropped earlier.
>>>
>>>
>> Applied, thanks. Andrew, if you rerun your benchmark atop kvm.git
>> 'next' branch, I believe you will see dramatically better results.
>>
> Yes! CPU is much lower:
> user nice system irq softirq guest idle iowait
> 5.81 0.00 9.48 0.08 1.04 21.32 57.86 4.41
>
> previous CPU:
> user nice system irq softirq guest idle iowait
> 5.67 0.00 11.64 0.09 1.05 31.90 46.06 3.59
>
>
How does it compare to the other hypervisor now?
> new oprofile:
>
>
>> samples % app name symbol name
>> 885444 53.2905 kvm-intel.ko vmx_vcpu_run
>>
guest mode = good
>> 38090 2.2924 qemu-system-x86_64 cpu_physical_memory_rw
>> 34764 2.0923 qemu-system-x86_64 phys_page_find_alloc
>> 14730 0.8865 qemu-system-x86_64 qemu_get_ram_ptr
>> 10814 0.6508 vmlinux-2.6.31-rc5-autokern1 copy_user_generic_string
>> 10871 0.6543 qemu-system-x86_64 virtqueue_get_head
>> 8557 0.5150 qemu-system-x86_64 virtqueue_avail_bytes
>> 7173 0.4317 qemu-system-x86_64 lduw_phys
>> 4122 0.2481 qemu-system-x86_64 ldl_phys
>> 3339 0.2010 qemu-system-x86_64 virtqueue_num_heads
>> 4129 0.2485 libpthread-2.5.so pthread_mutex_lock
>>
>>
virtio and related qemu overhead: 8.2%.
>> 25278 1.5214 vmlinux-2.6.31-rc5-autokern1 native_write_msr_safe
>> 12278 0.7390 vmlinux-2.6.31-rc5-autokern1 native_read_msr_safe
>>
This will be reduced if we move virtio to kernel context.
>> 12380 0.7451 vmlinux-2.6.31-rc5-autokern1 native_set_debugreg
>> 3550 0.2137 vmlinux-2.6.31-rc5-autokern1 native_get_debugreg
>>
A lot less than before, but still annoying.
>> 4631 0.2787 vmlinux-2.6.31-rc5-autokern1 mwait_idle
>>
idle=halt may improve this, mwait is slow.
--
error compiling committee.c: too many arguments to function
* Re: [PATCH] don't call adjust_vmx_controls() second time
2009-08-30 8:59 ` Avi Kivity
@ 2009-08-31 13:05 ` Andrew Theurer
2009-08-31 13:52 ` Avi Kivity
0 siblings, 1 reply; 6+ messages in thread
From: Andrew Theurer @ 2009-08-31 13:05 UTC (permalink / raw)
To: Avi Kivity; +Cc: Gleb Natapov, kvm
Avi Kivity wrote:
> On 08/27/2009 11:42 PM, Andrew Theurer wrote:
>> On Thu, 2009-08-27 at 19:21 +0300, Avi Kivity wrote:
>>
>>> On 08/27/2009 06:41 PM, Gleb Natapov wrote:
>>>
>>>> Don't call adjust_vmx_controls() two times for the same control.
>>>> It restores options that were dropped earlier.
>>>>
>>>>
>>> Applied, thanks. Andrew, if you rerun your benchmark atop kvm.git
>>> 'next' branch, I believe you will see dramatically better results.
>>>
>> Yes! CPU is much lower:
>> user nice system irq softirq guest idle iowait
>> 5.81 0.00 9.48 0.08 1.04 21.32 57.86 4.41
>>
>> previous CPU:
>> user nice system irq softirq guest idle iowait
>> 5.67 0.00 11.64 0.09 1.05 31.90 46.06 3.59
>>
>>
>
> How does it compare to the other hypervisor now?
My original results for the other hypervisor were a little inaccurate: they
mistakenly used 2-vcpu guests. New runs with 1-vcpu guests (as used in
kvm) have slightly lower CPU utilization. Anyway, here's the breakdown:
                        CPU%   more CPU
kvm-master/qemu-kvm-87: 50.15  78%
kvm-next/qemu-kvm-87:   37.73  34%
>
>> new oprofile:
>>
>>
>>> samples % app name symbol name
>>> 885444 53.2905 kvm-intel.ko vmx_vcpu_run
>>>
>
> guest mode = good
>
>>> 38090 2.2924 qemu-system-x86_64 cpu_physical_memory_rw
>>> 34764 2.0923 qemu-system-x86_64 phys_page_find_alloc
>>> 14730 0.8865 qemu-system-x86_64 qemu_get_ram_ptr
>>> 10814 0.6508 vmlinux-2.6.31-rc5-autokern1 copy_user_generic_string
>>> 10871 0.6543 qemu-system-x86_64 virtqueue_get_head
>>> 8557 0.5150 qemu-system-x86_64 virtqueue_avail_bytes
>>> 7173 0.4317 qemu-system-x86_64 lduw_phys
>>> 4122 0.2481 qemu-system-x86_64 ldl_phys
>>> 3339 0.2010 qemu-system-x86_64 virtqueue_num_heads
>>> 4129 0.2485 libpthread-2.5.so pthread_mutex_lock
>>>
>>>
>
> virtio and related qemu overhead: 8.2%.
>
>>> 25278 1.5214 vmlinux-2.6.31-rc5-autokern1 native_write_msr_safe
>>> 12278 0.7390 vmlinux-2.6.31-rc5-autokern1 native_read_msr_safe
>>>
>
> This will be reduced if we move virtio to kernel context.
Are there plans to move that to kernel for disk, too?
>>> 12380 0.7451 vmlinux-2.6.31-rc5-autokern1 native_set_debugreg
>>> 3550 0.2137 vmlinux-2.6.31-rc5-autokern1 native_get_debugreg
>>>
>
> A lot less than before, but still annoying.
>
>>> 4631 0.2787 vmlinux-2.6.31-rc5-autokern1 mwait_idle
>>>
>
> idle=halt may improve this, mwait is slow.
I can try idle=halt on the host. I actually assumed it would be using
that, but I'll check.
Thanks,
-Andrew
* Re: [PATCH] don't call adjust_vmx_controls() second time
2009-08-31 13:05 ` Andrew Theurer
@ 2009-08-31 13:52 ` Avi Kivity
0 siblings, 0 replies; 6+ messages in thread
From: Avi Kivity @ 2009-08-31 13:52 UTC (permalink / raw)
To: Andrew Theurer; +Cc: Gleb Natapov, kvm
On 08/31/2009 04:05 PM, Andrew Theurer wrote:
>> How does it compare to the other hypervisor now?
>
>
> My original results for other hypervisor were a little inaccurate.
> They mistakenly used 2 vcpu guests. New runs with 1 vcpu guests (as
> used in kvm) have slightly lower CPU utilization. Anyway, here's the
> breakdown:
>
> CPU percent more CPU
> kvm-master/qemu-kvm-87: 50.15 78%
> kvm-next/qemu-kvm-87: 37.73 34%
>
Much better, though still a lot of work to do.
>>>> 25278 1.5214 vmlinux-2.6.31-rc5-autokern1 native_write_msr_safe
>>>> 12278 0.7390 vmlinux-2.6.31-rc5-autokern1 native_read_msr_safe
>>
>> This will be reduced if we move virtio to kernel context.
>
> Are there plans to move that to kernel for disk, too?
We don't know whether disk or net contributed to this. If it turns out
that vhost-blk makes sense, we'll do it.
--
error compiling committee.c: too many arguments to function