From: Avi Kivity <avi@redhat.com>
To: "Xu, Dongxiao" <dongxiao.xu@intel.com>
Cc: "kvm@vger.kernel.org" <kvm@vger.kernel.org>,
Marcelo Tosatti <mtosatti@redhat.com>
Subject: Re: [PATCH 0/3] KVM: VMX: Support hosted VMM coexistence.
Date: Thu, 18 Mar 2010 14:55:19 +0200 [thread overview]
Message-ID: <4BA222B7.6030008@redhat.com> (raw)
In-Reply-To: <D5AB6E638E5A3E4B8F4406B113A5A19A1D525CD5@shsmsx501.ccr.corp.intel.com>
On 03/18/2010 11:49 AM, Xu, Dongxiao wrote:
> VMX: Support for coexistence of KVM and other hosted VMMs.
>
> The following NOTE is picked up from Intel SDM 3B 27.3 chapter,
> MANAGING VMCS REGIONS AND POINTERS.
>
> ----------------------
> NOTE
> As noted in Section 21.1, the processor may optimize VMX operation
> by maintaining the state of an active VMCS (one for which VMPTRLD
> has been executed) on the processor. Before relinquishing control to
> other system software that may, without informing the VMM, remove
> power from the processor (e.g., for transitions to S3 or S4) or leave
> VMX operation, a VMM must VMCLEAR all active VMCSs. This ensures
> that all VMCS data cached by the processor are flushed to memory
> and that no other software can corrupt the current VMM's VMCS data.
> It is also recommended that the VMM execute VMXOFF after such
> executions of VMCLEAR.
> ----------------------
>
> Currently, VMCLEAR is called at VCPU migration. To support hosted
> VMM coexistence, this patch modifies the VMCLEAR/VMPTRLD and
> VMXON/VMXOFF usage. VMCLEAR is now called when a VCPU is
> scheduled out of a physical CPU, while VMPTRLD is called when a VCPU
> is scheduled onto a physical CPU. This approach also eliminates
> the IPI mechanism that the original VMCLEAR required. As suggested
> by the SDM, VMXOFF is called after VMCLEAR, and VMXON is called
> before VMPTRLD.
>
My worry is that newer processors will cache more and more VMCS contents
on-chip, so the VMCLEAR/VMXOFF will cause a greater loss with newer
processors.
> With this patchset, KVM and VMware Workstation 7 can launch
> separate guests, and they work well alongside each other. Besides, I
> measured the performance of this patch: there is no visible
> performance loss according to the test results.
>
Is that the only motivation? It seems like an odd use-case. If there
was no performance impact (current or future), I wouldn't mind, but the
design of VMPTRLD/VMCLEAR/VMXON/VMXOFF seems to indicate that we want to
keep a VMCS loaded as much as possible on the processor.
--
error compiling committee.c: too many arguments to function
Thread overview: 10+ messages
2010-03-18 9:49 [PATCH 0/3] KVM: VMX: Support hosted VMM coexistence Xu, Dongxiao
2010-03-18 10:36 ` Alexander Graf
2010-03-18 12:55 ` Avi Kivity [this message]
2010-03-23 4:01 ` Xu, Dongxiao
2010-03-23 7:39 ` Avi Kivity
2010-03-23 8:33 ` Xu, Dongxiao
2010-03-23 8:58 ` Avi Kivity
2010-03-23 9:12 ` Alexander Graf
2010-03-18 13:51 ` Avi Kivity
2010-03-18 14:27 ` Avi Kivity