From: Marcelo Tosatti <mtosatti@redhat.com>
To: "Xu, Dongxiao" <dongxiao.xu@intel.com>
Cc: "kvm@vger.kernel.org" <kvm@vger.kernel.org>,
Avi Kivity <avi@redhat.com>, Alexander Graf <agraf@suse.de>
Subject: Re: [PATCH 4/4 v4] KVM: VMX: VMXON/VMXOFF usage changes.
Date: Tue, 11 May 2010 21:43:20 -0300
Message-ID: <20100512004320.GA20553@amt.cnet>
In-Reply-To: <D5AB6E638E5A3E4B8F4406B113A5A19A1E6E8854@shsmsx501.ccr.corp.intel.com>
On Tue, May 11, 2010 at 06:29:48PM +0800, Xu, Dongxiao wrote:
> From: Dongxiao Xu <dongxiao.xu@intel.com>
>
> The SDM suggests that VMXON should be called before VMPTRLD, and
> VMXOFF should be called after VMCLEAR.
>
> Therefore, in the VMM coexistence case, we should first call VMXON
> before any VMCS operation, and then call VMXOFF after the
> operation is done.
>
> Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
> ---
> arch/x86/kvm/vmx.c | 38 +++++++++++++++++++++++++++++++-------
> 1 files changed, 31 insertions(+), 7 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index c536b9d..dbd47a7 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -168,6 +168,8 @@ static inline struct vcpu_vmx *to_vmx(struct kvm_vcpu *vcpu)
> 
>  static int init_rmode(struct kvm *kvm);
>  static u64 construct_eptp(unsigned long root_hpa);
> +static void kvm_cpu_vmxon(u64 addr);
> +static void kvm_cpu_vmxoff(void);
> 
>  static DEFINE_PER_CPU(struct vmcs *, vmxarea);
>  static DEFINE_PER_CPU(struct vmcs *, current_vmcs);
> @@ -786,8 +788,11 @@ static void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>  {
>          struct vcpu_vmx *vmx = to_vmx(vcpu);
>          u64 tsc_this, delta, new_offset;
> +        u64 phys_addr = __pa(per_cpu(vmxarea, cpu));
> 
> -        if (vmm_exclusive && vcpu->cpu != cpu)
> +        if (!vmm_exclusive)
> +                kvm_cpu_vmxon(phys_addr);
> +        else if (vcpu->cpu != cpu)
>                  vcpu_clear(vmx);
> 
>          if (per_cpu(current_vmcs, cpu) != vmx->vmcs) {
> @@ -833,8 +838,10 @@ static void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>  static void vmx_vcpu_put(struct kvm_vcpu *vcpu)
>  {
>          __vmx_load_host_state(to_vmx(vcpu));
> -        if (!vmm_exclusive)
> +        if (!vmm_exclusive) {
>                  __vcpu_clear(to_vmx(vcpu));
> +                kvm_cpu_vmxoff();
> +        }
>  }
> 
>  static void vmx_fpu_activate(struct kvm_vcpu *vcpu)
> @@ -1257,9 +1264,11 @@ static int hardware_enable(void *garbage)
>                         FEATURE_CONTROL_LOCKED |
>                         FEATURE_CONTROL_VMXON_ENABLED);
>          write_cr4(read_cr4() | X86_CR4_VMXE); /* FIXME: not cpu hotplug safe */
> -        kvm_cpu_vmxon(phys_addr);
> 
> -        ept_sync_global();
> +        if (vmm_exclusive) {
> +                kvm_cpu_vmxon(phys_addr);
> +                ept_sync_global();
> +        }
> 
>          return 0;

The documentation recommends executing INVEPT all-context after VMXON and
before VMXOFF. Is it not necessary here?
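
For illustration only, here is a minimal sketch of what the vmm-coexistence
load/put paths might look like if INVEPT all-context (ept_sync_global())
were issued right after VMXON and right before VMXOFF, as the comment above
suggests. The helper names vmx_vcpu_load_nonexclusive() and
vmx_vcpu_put_nonexclusive() are hypothetical; only the functions already
shown in the patch are assumed to exist, and whether the extra flushes are
actually required is exactly the open question here.

/*
 * Hypothetical sketch (not the merged code): the non-exclusive paths
 * with INVEPT all-context after VMXON and before VMXOFF. Helper names
 * are made up for illustration; kvm_cpu_vmxon(), kvm_cpu_vmxoff(),
 * ept_sync_global(), __vcpu_clear() and __vmx_load_host_state() are
 * the functions used in the patch above.
 */
static void vmx_vcpu_load_nonexclusive(struct kvm_vcpu *vcpu, int cpu)
{
        u64 phys_addr = __pa(per_cpu(vmxarea, cpu));

        kvm_cpu_vmxon(phys_addr);       /* enter VMX operation on this CPU */
        ept_sync_global();              /* INVEPT all-context after VMXON */
        /* ... VMPTRLD and the rest of vmx_vcpu_load() ... */
}

static void vmx_vcpu_put_nonexclusive(struct kvm_vcpu *vcpu)
{
        __vmx_load_host_state(to_vmx(vcpu));
        __vcpu_clear(to_vmx(vcpu));     /* VMCLEAR the loaded VMCS */
        ept_sync_global();              /* INVEPT all-context before VMXOFF */
        kvm_cpu_vmxoff();               /* leave VMX operation */
}

The flush on the put side would invalidate EPT-derived TLB entries before
the CPU leaves VMX operation; on the load side it would presumably mirror
what hardware_enable() already does for the vmm_exclusive case.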