From mboxrd@z Thu Jan 1 00:00:00 1970
From: Gleb Natapov
Subject: Re: [PATCH 05/10] KVM: nVMX: Allocate shadow vmcs
Date: Wed, 17 Apr 2013 17:44:56 +0300
Message-ID: <20130417144456.GA8997@redhat.com>
References: <1366199437-abelg@il.ibm.com>
 <20130417115310.B0D2D3806E7@moren.haifa.ibm.com>
 <20130417141028.GJ1682@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: dongxiao.xu@intel.com, jun.nakajima@intel.com, kvm@vger.kernel.org,
 nadav@harel.org.il, owasserm@redhat.com
To: Abel Gordon
Return-path:
Received: from mx1.redhat.com ([209.132.183.28]:35617 "EHLO mx1.redhat.com"
 rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S966624Ab3DQOpM
 (ORCPT ); Wed, 17 Apr 2013 10:45:12 -0400
Content-Disposition: inline
In-Reply-To:
Sender: kvm-owner@vger.kernel.org
List-ID:

On Wed, Apr 17, 2013 at 05:41:19PM +0300, Abel Gordon wrote:
> 
> 
> Gleb Natapov wrote on 17/04/2013 05:10:28 PM:
> 
> > On Wed, Apr 17, 2013 at 02:53:10PM +0300, Abel Gordon wrote:
> > > Allocate a shadow vmcs used by the processor to shadow part of the
> > > fields stored in the software-defined VMCS12 (let L1 access fields
> > > without causing exits). Note we keep a shadow vmcs only for the
> > > current vmcs12. Once a vmcs12 becomes non-current, its shadow vmcs
> > > is released.
> > >
> > > Signed-off-by: Abel Gordon
> > > ---
> > >  arch/x86/kvm/vmx.c |   15 +++++++++++++++
> > >  1 file changed, 15 insertions(+)
> > >
> > > --- .before/arch/x86/kvm/vmx.c	2013-04-17 14:20:50.000000000 +0300
> > > +++ .after/arch/x86/kvm/vmx.c	2013-04-17 14:20:50.000000000 +0300
> > > @@ -355,6 +355,7 @@ struct nested_vmx {
> > >  	/* The host-usable pointer to the above */
> > >  	struct page *current_vmcs12_page;
> > >  	struct vmcs12 *current_vmcs12;
> > > +	struct vmcs *current_shadow_vmcs;
> > >
> > >  	/* vmcs02_list cache of VMCSs recently used to run L2 guests */
> > >  	struct list_head vmcs02_pool;
> > > @@ -5980,6 +5981,7 @@ static int handle_vmptrld(struct kvm_vcp
> > >  	gva_t gva;
> > >  	gpa_t vmptr;
> > >  	struct x86_exception e;
> > > +	struct vmcs *shadow_vmcs;
> > >
> > >  	if (!nested_vmx_check_permission(vcpu))
> > >  		return 1;
> > > @@ -6026,6 +6028,19 @@ static int handle_vmptrld(struct kvm_vcp
> > >  		vmx->nested.current_vmptr = vmptr;
> > >  		vmx->nested.current_vmcs12 = new_vmcs12;
> > >  		vmx->nested.current_vmcs12_page = page;
> > > +		if (enable_shadow_vmcs) {
> > > +			shadow_vmcs = alloc_vmcs();
> > The next patch frees vmx->nested.current_shadow_vmcs a couple of lines
> > above. What about reusing the previous page instead of allocating a new
> > one each time?
> 
> Yes, we could have a single shadow vmcs per L1 vcpu that is used to shadow
> multiple L2 vcpus. I preferred not to do that because I didn't want to
> share the same page (physical vmcs) for different vmcs12s. However, this is
> not an issue because we overwrite the shadowed fields every time we sync
> the content.
> It's your call. If you prefer to re-use, I'll send a new version that
> does that. Please confirm.

Yes, I prefer to reuse explicitly. There is a big chance that the same
page will be reused even if you go through freeing and allocation.

--
			Gleb.
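[Editor's note] The reuse pattern agreed on above (allocate the shadow vmcs once, hand back the same page on every VMPTRLD, free it once at teardown) can be sketched in self-contained user-space C. All names below (`nested_state`, `nested_init`, `nested_free`, the `handle_vmptrld` stub) are illustrative stand-ins, not the kernel's actual API; the real code lives in arch/x86/kvm/vmx.c and allocates through alloc_vmcs().

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for a 4 KiB hardware VMCS page. */
struct vmcs { unsigned char data[4096]; };

/* Hypothetical nested state: the shadow vmcs is allocated once and reused. */
struct nested_state {
	struct vmcs *current_shadow_vmcs;
};

/* Allocate the shadow vmcs a single time, when nested VMX is set up. */
static int nested_init(struct nested_state *n)
{
	n->current_shadow_vmcs = calloc(1, sizeof(struct vmcs));
	return n->current_shadow_vmcs ? 0 : -1;
}

/*
 * On each VMPTRLD, reuse the existing page instead of free+alloc.
 * Sharing one physical page across vmcs12s is safe because the shadowed
 * fields are overwritten on every sync, so stale contents never leak.
 */
static struct vmcs *handle_vmptrld(struct nested_state *n)
{
	memset(n->current_shadow_vmcs, 0, sizeof(struct vmcs));
	return n->current_shadow_vmcs; /* same page every time */
}

/* Free the page once, at nested VMX teardown. */
static void nested_free(struct nested_state *n)
{
	free(n->current_shadow_vmcs);
	n->current_shadow_vmcs = NULL;
}
```

Compared with the per-VMPTRLD alloc/free in the patch as posted, this removes an allocation failure path from VMPTRLD handling and avoids churning a page the allocator would likely hand back anyway, which is the point Gleb makes above.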