From: Marcelo Tosatti
Subject: Re: [PATCH 4/4 v4] KVM: VMX: VMXON/VMXOFF usage changes.
Date: Tue, 11 May 2010 21:43:20 -0300
Message-ID: <20100512004320.GA20553@amt.cnet>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Cc: "kvm@vger.kernel.org", Avi Kivity, Alexander Graf
To: "Xu, Dongxiao"

On Tue, May 11, 2010 at 06:29:48PM +0800, Xu, Dongxiao wrote:
> From: Dongxiao Xu
> 
> The SDM suggests that VMXON should be called before VMPTRLD, and
> VMXOFF should be called after VMCLEAR.
> 
> Therefore, in the VMM-coexistence case, we should first call VMXON
> before any VMCS operation, and then call VMXOFF after the operation
> is done.
> 
> Signed-off-by: Dongxiao Xu
> ---
>  arch/x86/kvm/vmx.c |   38 +++++++++++++++++++++++++++++++-------
>  1 files changed, 31 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index c536b9d..dbd47a7 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -168,6 +168,8 @@ static inline struct vcpu_vmx *to_vmx(struct kvm_vcpu *vcpu)
>  
>  static int init_rmode(struct kvm *kvm);
>  static u64 construct_eptp(unsigned long root_hpa);
> +static void kvm_cpu_vmxon(u64 addr);
> +static void kvm_cpu_vmxoff(void);
>  
>  static DEFINE_PER_CPU(struct vmcs *, vmxarea);
>  static DEFINE_PER_CPU(struct vmcs *, current_vmcs);
> @@ -786,8 +788,11 @@ static void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>  {
>  	struct vcpu_vmx *vmx = to_vmx(vcpu);
>  	u64 tsc_this, delta, new_offset;
> +	u64 phys_addr = __pa(per_cpu(vmxarea, cpu));
>  
> -	if (vmm_exclusive && vcpu->cpu != cpu)
> +	if (!vmm_exclusive)
> +		kvm_cpu_vmxon(phys_addr);
> +	else if (vcpu->cpu != cpu)
>  		vcpu_clear(vmx);
>  
>  	if (per_cpu(current_vmcs, cpu) != vmx->vmcs) {
> @@ -833,8 +838,10 @@ static void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>  static void vmx_vcpu_put(struct kvm_vcpu *vcpu)
>  {
>  	__vmx_load_host_state(to_vmx(vcpu));
> -	if (!vmm_exclusive)
> +	if (!vmm_exclusive) {
>  		__vcpu_clear(to_vmx(vcpu));
> +		kvm_cpu_vmxoff();
> +	}
>  }
>  
>  static void vmx_fpu_activate(struct kvm_vcpu *vcpu)
> @@ -1257,9 +1264,11 @@ static int hardware_enable(void *garbage)
>  		       FEATURE_CONTROL_LOCKED |
>  		       FEATURE_CONTROL_VMXON_ENABLED);
>  	write_cr4(read_cr4() | X86_CR4_VMXE); /* FIXME: not cpu hotplug safe */
> -	kvm_cpu_vmxon(phys_addr);
> 
> -	ept_sync_global();
> +	if (vmm_exclusive) {
> +		kvm_cpu_vmxon(phys_addr);
> +		ept_sync_global();
> +	}
> 
>  	return 0;

The documentation recommends an all-context INVEPT after execution of
VMXON and prior to execution of VMXOFF. Is it not necessary here?
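
For concreteness, something along these lines is what I have in mind for
the !vmm_exclusive path. This is only an untested sketch, not a request
for this exact code: it reuses the helpers this patch already introduces
(kvm_cpu_vmxon/kvm_cpu_vmxoff, ept_sync_global, __vcpu_clear), and the
placement of the INVEPT is my assumption; whether it is actually required
there is exactly the question above.

	static void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
	{
		struct vcpu_vmx *vmx = to_vmx(vcpu);
		u64 phys_addr = __pa(per_cpu(vmxarea, cpu));

		if (!vmm_exclusive) {
			kvm_cpu_vmxon(phys_addr);
			/* flush guest-physical mappings that may have been
			 * cached while another VMM was in VMX operation;
			 * ept_sync_global() is a no-op without INVEPT
			 * global support */
			ept_sync_global();
		} else if (vcpu->cpu != cpu)
			vcpu_clear(vmx);
		...
	}

	static void vmx_vcpu_put(struct kvm_vcpu *vcpu)
	{
		__vmx_load_host_state(to_vmx(vcpu));
		if (!vmm_exclusive) {
			/* VMCLEAR the current VMCS before leaving VMX
			 * operation with VMXOFF */
			__vcpu_clear(to_vmx(vcpu));
			kvm_cpu_vmxoff();
		}
	}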