From: Avi Kivity
To: "Nadav Har'El"
Cc: kvm@vger.kernel.org
Subject: Re: [PATCH 0/24] Nested VMX, v5
Date: Sun, 17 Oct 2010 15:35:51 +0200
Message-ID: <4CBAFBB7.4030402@redhat.com>
In-Reply-To: <20101017123914.GA14069@fermat.math.technion.ac.il>
References: <1276431753-nyh@il.ibm.com> <4C1621E5.5040201@redhat.com>
 <20100614130341.GA4455@fermat.math.technion.ac.il>
 <4C174F36.2060008@redhat.com>
 <20101017120310.GA12274@fermat.math.technion.ac.il>
 <4CBAE7D2.2050602@redhat.com>
 <20101017123914.GA14069@fermat.math.technion.ac.il>

On 10/17/2010 02:39 PM, Nadav Har'El wrote:
> On Sun, Oct 17, 2010, Avi Kivity wrote about "Re: [PATCH 0/24] Nested VMX, v5":
> > > patch. In short, try running the L0 kernel with the "nosmp" option,
> >
> > What are the problems with smp?
>
> Unfortunately, there appears to be a bug which causes KVM with nested VMX
> to hang when SMP is enabled, even if you don't try to use more than one
> CPU for the guest. I still need to debug this to figure out why.

Well, that seems pretty critical.

> > > give the "-cpu host" option to qemu,
> >
> > Why is this needed?
>
> Qemu has a list of cpu types, and for each type it lists its features.
> The problem is that Qemu doesn't list the "VMX" feature for any of the
> CPUs, even those that have it (like core 2 duo). I have a trivial patch
> to qemu which adds the "VMX" feature to those CPUs, and which is harmless
> even if KVM doesn't support nested VMX (qemu drops features which KVM
> doesn't support). But until I send such a patch to qemu, the easiest
> workaround is just to use "-cpu host", which will (among other things)
> tell qemu to emulate a machine that has vmx, just like the host does.
>
> (I also explained this in the intro to v6 of the patch.)

Ok.  I think we can get that patch merged, just so you don't have to
re-explain it over and over again.  Please post it to qemu-devel.

> > > and the "nested=1 ept=0 vpid=0" options to the kvm-intel module in L0.
> >
> > Why are those needed?  It seems trivial to support a non-ept guest on
> > an ept host - all you do is switch cr3 during vmentry and vmexit.
>
> nested=1 is needed because you asked for it *not* to be the default :-)
>
> You're right, ept=1 on the host *could* be supported even before nested
> ept is supported (this is the mode we called "shadow on ept" in the
> paper). But at the moment, I believe it doesn't work correctly. I'll add
> making this case work to my TODO list.
>
> I'm not sure why vpid=0 is needed (but I verified that you get a failed
> entry if you don't use it). I understood that there was some discussion
> about the proper way to do nested vpid, and that in the meantime it isn't
> supported; but I agree that it should have been possible to use vpid
> normally to run L1s while avoiding it when running L2s. Again, I'll need
> to debug this issue to understand how difficult it would be to fix.

My feeling is that the smp and vpid failures are due to bugs.  vpid=0 in
particular forces a tlb flush on every exit, which might mask your true
bug.  smp might be due to host vcpu migration.  Are we vmclearing the
right vmcs?
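To make that suspicion concrete - what follows is only a sketch of the
invariant, with invented names, not the real kvm code: a vmcs that was
last VMPTRLDed on one physical cpu must be VMCLEARed there before it can
be loaded on another cpu.  With nested vmx there are *two* vmcs's per
vcpu (vmcs01 for running L1, vmcs02 for running L2), and both have to
follow the vcpu around:

/* Sketch only - invented names, not the real kvm code. */
struct vmx_vcpu_state {
        struct vmcs *vmcs01;    /* vmcs used to run L1 */
        struct vmcs *vmcs02;    /* vmcs used to run L2 */
        int last_cpu;           /* pcpu they were last loaded on, -1 if none */
};

static void do_vmclear(void *vmcs)
{
        vmcs_clear(vmcs);       /* VMCLEAR: flush vmcs state back to memory */
}

/* Called when the vcpu is scheduled in on physical cpu 'cpu'. */
static void nested_vmcs_migrate(struct vmx_vcpu_state *v, int cpu)
{
        if (v->last_cpu != -1 && v->last_cpu != cpu) {
                /* clear *both* vmcs's on the old cpu, not just the
                 * currently active one */
                smp_call_function_single(v->last_cpu, do_vmclear,
                                         v->vmcs01, 1);
                smp_call_function_single(v->last_cpu, do_vmclear,
                                         v->vmcs02, 1);
        }
        v->last_cpu = cpu;
        vmcs_load(v->vmcs01);   /* VMPTRLD on the new cpu */
}

If only the vmcs that happens to be active at migration time gets
cleared, a later VMPTRLD of the other one - still live on the old cpu -
would explain exactly this kind of hang.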
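Incidentally, so the qemu side doesn't need re-explaining either: I
assume your patch is essentially just ORing CPUID_EXT_VMX into the cpu
models that have it on real hardware, in builtin_x86_defs in
target-i386/cpuid.c.  Something like this (from memory and untested, so
the exact feature lists may differ):

--- a/target-i386/cpuid.c
+++ b/target-i386/cpuid.c
@@ (the core2duo entry; coreduo and friends would get the same) @@
-        .ext_features = CPUID_EXT_SSE3 | CPUID_EXT_SSSE3 | CPUID_EXT_CX16,
+        .ext_features = CPUID_EXT_SSE3 | CPUID_EXT_SSSE3 | CPUID_EXT_CX16 |
+                        CPUID_EXT_VMX,

As you say, qemu masks out features the kernel doesn't report, so this
should be harmless even where nested vmx is unavailable.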
ept=1 may not be due to a bug per se, but my feeling is that it should be
very easy to implement.  In particular, nsvm started out on npt (but not
nnpt) and had issues with shadow-on-shadow (IIRC).

-- 
error compiling committee.c: too many arguments to function