public inbox for kvm@vger.kernel.org
From: Alexander Graf <agraf@suse.de>
To: "Xu, Dongxiao" <dongxiao.xu@intel.com>
Cc: "kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	Avi Kivity <avi@redhat.com>,
	Marcelo Tosatti <mtosatti@redhat.com>
Subject: Re: [PATCH 0/3] KVM: VMX: Support hosted VMM coexistence.
Date: Thu, 18 Mar 2010 11:36:36 +0100	[thread overview]
Message-ID: <4BA20234.6010005@suse.de> (raw)
In-Reply-To: <D5AB6E638E5A3E4B8F4406B113A5A19A1D525CD5@shsmsx501.ccr.corp.intel.com>

Xu, Dongxiao wrote:
> VMX: Support for coexistence of KVM and other hosted VMMs. 
>
> The following NOTE is picked up from Intel SDM 3B 27.3 chapter, 
> MANAGING VMCS REGIONS AND POINTERS.
>
> ----------------------
> NOTE
> As noted in Section 21.1, the processor may optimize VMX operation
> by maintaining the state of an active VMCS (one for which VMPTRLD
> has been executed) on the processor. Before relinquishing control to
> other system software that may, without informing the VMM, remove
> power from the processor (e.g., for transitions to S3 or S4) or leave
> VMX operation, a VMM must VMCLEAR all active VMCSs. This ensures
> that all VMCS data cached by the processor are flushed to memory
> and that no other software can corrupt the current VMM's VMCS data.
> It is also recommended that the VMM execute VMXOFF after such
> executions of VMCLEAR.
> ----------------------
>
> Currently, VMCLEAR is called only at VCPU migration. To support
> hosted VMM coexistence, this patch modifies the VMCLEAR/VMPTRLD
> and VMXON/VMXOFF usage. VMCLEAR is now called when a VCPU is
> scheduled out of a physical CPU, and VMPTRLD when a VCPU is
> scheduled onto a physical CPU. This approach also eliminates the
> IPI mechanism needed by the original VMCLEAR. As the SDM suggests,
> VMXOFF is called after VMCLEAR, and VMXON is called before
> VMPTRLD.
>
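The ordering described above can be sketched as follows. This is not the actual KVM code: real VMX instructions are replaced by bookkeeping on a per-CPU structure so the logic can be followed (and compiled) in user space, and the names `cpu_state`, `vcpu_sched_in` and `vcpu_sched_out` are invented for illustration.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct vmcs { int dummy; };

struct cpu_state {
	bool vmxon;               /* CPU is in VMX operation */
	struct vmcs *loaded_vmcs; /* VMCS made active via VMPTRLD */
};

/* Stand-ins for the VMXON/VMXOFF/VMPTRLD/VMCLEAR instructions. */
static void vmxon(struct cpu_state *c)  { c->vmxon = true; }
static void vmxoff(struct cpu_state *c)
{
	assert(c->loaded_vmcs == NULL); /* SDM: VMCLEAR before VMXOFF */
	c->vmxon = false;
}
static void vmptrld(struct cpu_state *c, struct vmcs *v)
{
	assert(c->vmxon);               /* VMXON must come first */
	c->loaded_vmcs = v;
}
static void vmclear(struct cpu_state *c, struct vmcs *v)
{
	/* Flushes cached VMCS state to memory; VMCS no longer active. */
	if (c->loaded_vmcs == v)
		c->loaded_vmcs = NULL;
}

/* Scheduled in: enter VMX operation, then load this VCPU's VMCS. */
static void vcpu_sched_in(struct cpu_state *c, struct vmcs *v)
{
	vmxon(c);
	vmptrld(c, v);
}

/* Scheduled out: clear the VMCS, then leave VMX operation, so another
 * hosted VMM (or an S3/S4 transition) finds no active VMCS on the CPU. */
static void vcpu_sched_out(struct cpu_state *c, struct vmcs *v)
{
	vmclear(c, v);
	vmxoff(c);
}
```

Because the VMCS is always cleared on the same CPU it was loaded on, no cross-CPU IPI is needed to flush it at migration time.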
> With this patchset, KVM and VMware Workstation 7 can launch
> separate guests, and they work well with each other. I also
> measured the performance impact of this patch; there is no visible
> performance loss according to the test results.
>
> The following performance results were obtained on a host with 8 cores.
>  
> 1. vConsolidate benchmarks on KVM
>   
> Test Round	WebBench	SPECjbb	SysBench	LoadSim	GEOMEAN 
> 1 W/O patch 	2,614.72 	28,053.09 	1,108.41 	16.30 		1,072.95 
>    W/ patch 	2,691.55 	28,145.71 	1,128.41 	16.47 		1,089.28 
> 2 W/O patch 	2,642.39 	28,104.79 	1,096.99 	17.79 		1,097.19 
>    W/ patch 	2,699.25 	28,092.62 	1,116.10 	15.54 		1,070.98 
> 3 W/O patch 	2,571.58 	28,131.17 	1,108.43 	16.39 		1,070.70 
>    W/ patch 	2,627.89 	28,090.19 	1,110.94 	17.00 		1,086.57 
>
> Average
> W/O patch 	2,609.56 	28,096.35 	1,104.61 	16.83 		1,080.28 
> W/ patch 	2,672.90 	28,109.51 	1,118.48 	16.34 		1,082.28 
>
> 2. CPU overcommitment tests for KVM
>
> A) Run 8 while(1) loops in the host, pinned to the 8 cores.
> B) Launch 6 guests, each with 8 VCPUs, pinning each VCPU to one core.
> C) Among the 6 guests, 5 run 8 while(1) loops each.
> D) The remaining guest does a kernel build ("make -j9") on a ramdisk.
>
> In this case, the overcommitment ratio for each core is 7:1
> (8 host loops plus 6*8 VCPUs gives 56 runnable threads on 8 cores).
> The VCPU schedule frequency across all cores totals ~15k/sec.
> I record the kernel build time.
>  
> When computing the average, the first round is treated as a
> warm-up and is not counted in the final result.
>  
> Kernel Build Time (second) 
> Round 		w/o patch 	w/ patch 
> 1 		541 		501 
> 2 		488 		490 
> 3 		488 		492 
> 4 		492 		493 
> 5 		489 		491 
> 6 		494 		487 
> 7 		497 		494 
> 8 		492 		492 
> 9 		493 		496 
> 10 		492 		495 
> 11 		490 		496 
> 12 		489 		494 
> 13 		489 		490 
> 14 		490 		491 
> 15 		494 		497 
> 16 		495 		496 
> 17 		496 		496 
> 18 		493 		492 
> 19 		493 		500 
> 20 		490 		499 
>
> Average 	491.79 	493.74
>   

So the general message here is:

It does get slower, but not by much.


I think this should be a module option. By default we can probably go
with the non-coexist behavior. If users really want to run two VMMs on
the same host, they can always flip the module parameter.
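A minimal sketch of such a switch, under the assumption of a parameter name like `vmm_coexistence` (hypothetical; the eventual KVM code may differ). In the real module this would be declared with module_param() in vmx.c; here it is a plain flag so the selection logic can be exercised in user space:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical module parameter; defaults to the non-coexist behavior.
 * Kernel-side this would be: module_param(vmm_coexistence, bool, 0444); */
static bool vmm_coexistence = false;

/* Decide whether a VCPU's VMCS must be VMCLEARed when it is merely
 * scheduled out, rather than only when it migrates to another CPU. */
static bool need_vmclear_on_sched_out(bool migrating)
{
	if (vmm_coexistence)
		return true;    /* always clear: another VMM may run next */
	return migrating;       /* default: clear only on CPU migration */
}
```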


Alex



Thread overview: 10+ messages
2010-03-18  9:49 [PATCH 0/3] KVM: VMX: Support hosted VMM coexistence Xu, Dongxiao
2010-03-18 10:36 ` Alexander Graf [this message]
2010-03-18 12:55 ` Avi Kivity
2010-03-23  4:01   ` Xu, Dongxiao
2010-03-23  7:39     ` Avi Kivity
2010-03-23  8:33       ` Xu, Dongxiao
2010-03-23  8:58         ` Avi Kivity
2010-03-23  9:12           ` Alexander Graf
2010-03-18 13:51 ` Avi Kivity
2010-03-18 14:27   ` Avi Kivity
