From mboxrd@z Thu Jan 1 00:00:00 1970
From: Andrew Theurer
Subject: Re: [PATCH] don't call adjust_vmx_controls() second time
Date: Mon, 31 Aug 2009 08:05:26 -0500
Message-ID: <4A9BCA96.8060505@linux.vnet.ibm.com>
References: <20090827154130.GR30093@redhat.com> <4A96B297.5090003@redhat.com> <1251405750.9683.110.camel@twinturbo.austin.ibm.com> <4A9A3F7D.1000009@redhat.com>
In-Reply-To: <4A9A3F7D.1000009@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
To: Avi Kivity
Cc: Gleb Natapov, kvm@vger.kernel.org

Avi Kivity wrote:
> On 08/27/2009 11:42 PM, Andrew Theurer wrote:
>> On Thu, 2009-08-27 at 19:21 +0300, Avi Kivity wrote:
>>
>>> On 08/27/2009 06:41 PM, Gleb Natapov wrote:
>>>
>>>> Don't call adjust_vmx_controls() two times for the same control.
>>>> It restores options that were dropped earlier.
>>>>
>>> Applied, thanks. Andrew, if you rerun your benchmark atop kvm.git
>>> 'next' branch, I believe you will see dramatically better results.
>>>
>> Yes!
>> CPU is much lower:
>>
>> user  nice  system  irq   softirq  guest  idle   iowait
>> 5.81  0.00   9.48   0.08   1.04    21.32  57.86   4.41
>>
>> previous CPU:
>>
>> user  nice  system  irq   softirq  guest  idle   iowait
>> 5.67  0.00  11.64   0.09   1.05    31.90  46.06   3.59
>>
>
> How does it compare to the other hypervisor now?

My original results for the other hypervisor were a little inaccurate:
they mistakenly used 2-vcpu guests. New runs with 1-vcpu guests (as used
for kvm) have slightly lower CPU utilization. Anyway, here's the
breakdown:

                         CPU percent   more CPU
kvm-master/qemu-kvm-87:    50.15         78%
kvm-next/qemu-kvm-87:      37.73         34%

>
>> new oprofile:
>>
>>> samples  %        app name                      symbol name
>>> 885444   53.2905  kvm-intel.ko                  vmx_vcpu_run
>
> guest mode = good
>
>>> 38090     2.2924  qemu-system-x86_64            cpu_physical_memory_rw
>>> 34764     2.0923  qemu-system-x86_64            phys_page_find_alloc
>>> 14730     0.8865  qemu-system-x86_64            qemu_get_ram_ptr
>>> 10814     0.6508  vmlinux-2.6.31-rc5-autokern1  copy_user_generic_string
>>> 10871     0.6543  qemu-system-x86_64            virtqueue_get_head
>>> 8557      0.5150  qemu-system-x86_64            virtqueue_avail_bytes
>>> 7173      0.4317  qemu-system-x86_64            lduw_phys
>>> 4122      0.2481  qemu-system-x86_64            ldl_phys
>>> 3339      0.2010  qemu-system-x86_64            virtqueue_num_heads
>>> 4129      0.2485  libpthread-2.5.so             pthread_mutex_lock
>
> virtio and related qemu overhead: 8.2%.
>
>>> 25278     1.5214  vmlinux-2.6.31-rc5-autokern1  native_write_msr_safe
>>> 12278     0.7390  vmlinux-2.6.31-rc5-autokern1  native_read_msr_safe
>
> This will be reduced too if we move virtio to kernel context.

Are there plans to move that to the kernel for disk, too?

>>> 12380     0.7451  vmlinux-2.6.31-rc5-autokern1  native_set_debugreg
>>> 3550      0.2137  vmlinux-2.6.31-rc5-autokern1  native_get_debugreg
>
> A lot less than before, but still annoying.
>
>>> 4631      0.2787  vmlinux-2.6.31-rc5-autokern1  mwait_idle
>
> idle=halt may improve this; mwait is slow.

I can try idle=halt on the host. I actually assumed it would be using
that, but I'll check.

Thanks,

-Andrew