From: Avi Kivity
Subject: Re: [PATCH 0/3] KVM: VMX: Support hosted VMM coexistence.
Date: Tue, 23 Mar 2010 10:58:26 +0200
Message-ID: <4BA882B2.8050303@redhat.com>
References: <4BA222B7.6030008@redhat.com> <4BA87015.7070503@redhat.com>
To: "Xu, Dongxiao"
Cc: kvm@vger.kernel.org, Marcelo Tosatti

On 03/23/2010 10:33 AM, Xu, Dongxiao wrote:
>
>> Did you measure workloads that exit to userspace very often?
>>
>> Also, what about future processors? My understanding is that the
>> manual recommends keeping things cached; the above description is
>> for sleep states.
>>
> I measured performance using a kernel build in a guest. I launched 6
> guests; 5 of them plus the host run while(1) loops, and the remaining
> guest does the kernel build. The CPU overcommit is 7:1, and the vcpu
> schedule frequency is about 15k/sec. I tested this on the new Intel
> processors I have at hand, and the performance difference is small.

The 15k/sec context switches are distributed among 7 entities, so the
guest you are measuring sees about 2k switches/sec. If each switch
costs an extra 1 microsecond, that is 2k x 1 us = 2 ms of overhead per
second, or a 0.2% impact on the kernel build. But an extra microsecond
per switch is way too high a cost for some workloads (the per-switch
sequence that adds this cost is sketched at the end of this message).

Can you measure the impact directly? kvm/user/test/x86/vmexit.c has a
test called inl_pmtimer that measures exit-to-userspace costs (a
sketch of the measurement idea also follows this message). Please run
it with and without the patch.

btw, what about VPIDs? They are a global resource. How do you ensure
no VPID conflicts between the two VMMs?

>>>> Is that the only motivation? It seems like an odd use-case. If
>>>> there were no performance impact (current or future), I wouldn't
>>>> mind, but the design of VMPTRLD/VMCLEAR/VMXON/VMXOFF seems to
>>>> indicate that we want to keep a VMCS loaded on the processor as
>>>> much as possible.
>>>>
>>> I just used KVM and VMware Workstation 7 for testing this patchset.
>>>
>>> Through this new usage of VMPTRLD/VMCLEAR/VMXON/VMXOFF, we can make
>>> hosted VMMs work separately without impacting each other.
>>>
>> What I am questioning is whether a significant number of users want
>> to run kvm in parallel with another hypervisor.
>>
> At least this approach gives users the option to run VMMs in parallel
> without significant performance loss. Consider this scenario: a
> server has already deployed VMware software, but some new customers
> want to use KVM; this patch lets them meet that requirement.

For server workloads, VMware users will run ESX, and you can't run kvm
on ESX. If someone wants to evaluate kvm or vmware on a workstation,
they can shut down the other product. I simply don't see a scenario
where running both concurrently would be worth even a small
performance loss.

--
error compiling committee.c: too many arguments to function
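
P.S. For reference, a minimal sketch of the inl_pmtimer measurement
idea, not the actual vmexit.c code: the port number (assuming the
PIIX4 ACPI PM timer at 0xb008), the iteration count, and the helper
names are all assumptions. The PM timer is emulated by qemu in
userspace, so every inl() from it forces an exit to userspace, which
is exactly the path being costed. This runs inside the guest, in a
bare-metal-style test harness:

/* Average the cost of an exit-to-userspace round trip by timing a
 * batch of port reads that the kernel-side VMM cannot handle itself. */
#include <stdint.h>

#define PM_TIMER_PORT 0xb008    /* assumption: PIIX4 PM timer I/O port */
#define ITERATIONS    (1 << 16)

static inline uint32_t inl(uint16_t port)
{
	uint32_t val;

	asm volatile("inl %1, %0" : "=a"(val) : "Nd"(port));
	return val;
}

static inline uint64_t rdtsc(void)
{
	uint32_t lo, hi;

	asm volatile("rdtsc" : "=a"(lo), "=d"(hi));
	return ((uint64_t)hi << 32) | lo;
}

uint64_t avg_userspace_exit_cycles(void)
{
	uint64_t start, end;
	int i;

	start = rdtsc();
	for (i = 0; i < ITERATIONS; i++)
		inl(PM_TIMER_PORT);	/* each read exits to userspace */
	end = rdtsc();

	return (end - start) / ITERATIONS;  /* average cycles per exit */
}

Run with and without the patch, the difference between the two
averages should approximate the extra per-exit cost the patch adds.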
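
P.P.S. For readers without the patch in front of them, the coexistence
scheme reduces to taking and releasing VMX ownership around every vcpu
switch, instead of keeping VMXON and a current VMCS resident on the
CPU. A rough sketch of that per-switch sequence; the helper names are
modeled on vmx.c primitives but are not guaranteed to match the actual
patch:

struct vmcs;
struct vcpu_vmx {
	struct vmcs *vmcs;
};

/* assumed helpers, in the spirit of arch/x86/kvm/vmx.c */
extern void kvm_cpu_vmxon(unsigned long vmxon_region_pa);
extern void kvm_cpu_vmxoff(void);
extern void vmcs_load(struct vmcs *vmcs);
extern void vmcs_clear(struct vmcs *vmcs);
extern unsigned long this_cpu_vmxon_region_pa(void);

static void vmx_sched_in(struct vcpu_vmx *vmx)
{
	kvm_cpu_vmxon(this_cpu_vmxon_region_pa()); /* enter VMX operation */
	vmcs_load(vmx->vmcs);                      /* VMPTRLD */
}

static void vmx_sched_out(struct vcpu_vmx *vmx)
{
	vmcs_clear(vmx->vmcs);  /* VMCLEAR: flush VMCS state to memory */
	kvm_cpu_vmxoff();       /* VMXOFF: free the CPU for another VMM */
}

The extra VMXON/VMXOFF pair plus the VMCLEAR writeback on every switch
is the cost being debated above; a VMM that keeps its VMCS current
across switches pays none of it.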