From mboxrd@z Thu Jan 1 00:00:00 1970
From: Gleb Natapov
Subject: Re: List of unaccessible x86 states
Date: Tue, 20 Oct 2009 21:31:46 +0200
Message-ID: <20091020193146.GF8278@redhat.com>
References: <4ADDB49B.3010101@siemens.com>
 <5D3F39A4-0532-4027-8D71-87FE9BCA1C27@suse.de>
 <4ADDBD19.6040107@siemens.com>
 <20091020134811.GO29477@redhat.com>
 <20091020185501.GD8278@redhat.com>
 <32ACB0C3-1607-4D4F-A085-25D1AA1FB255@suse.de>
 <20091020190958.GE8278@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: Jan Kiszka , oritw@il.ibm.com, kvm-devel , Avi Kivity , Marcelo Tosatti
To: Alexander Graf
Return-path: 
Received: from mx1.redhat.com ([209.132.183.28]:47286 "EHLO mx1.redhat.com"
 rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
 id S1751025AbZJTTbs (ORCPT ); Tue, 20 Oct 2009 15:31:48 -0400
Content-Disposition: inline
In-Reply-To: 
Sender: kvm-owner@vger.kernel.org
List-ID: 

On Tue, Oct 20, 2009 at 09:23:22PM +0200, Alexander Graf wrote:
> 
> On 20.10.2009, at 21:09, Gleb Natapov wrote:
> 
> >On Tue, Oct 20, 2009 at 08:59:48PM +0200, Alexander Graf wrote:
> >>
> >>On 20.10.2009, at 20:55, Gleb Natapov wrote:
> >>
> >>>On Tue, Oct 20, 2009 at 03:51:02PM +0200, Alexander Graf wrote:
> >>>>
> >>>>On 20.10.2009, at 15:48, Gleb Natapov wrote:
> >>>>
> >>>>>On Tue, Oct 20, 2009 at 03:41:57PM +0200, Alexander Graf wrote:
> >>>>>>
> >>>>>>On 20.10.2009, at 15:37, Jan Kiszka wrote:
> >>>>>>
> >>>>>>>Alexander Graf wrote:
> >>>>>>>>On 20.10.2009, at 15:01, Jan Kiszka wrote:
> >>>>>>>>
> >>>>>>>>>Hi all,
> >>>>>>>>>
> >>>>>>>>>as the list of yet user-inaccessible x86 states is a bit
> >>>>>>>>>volatile ATM, this is an attempt to collect the precise
> >>>>>>>>>requirements for additional state fields. Once everyone feels
> >>>>>>>>>the list is complete, we can decide how to partition it into
> >>>>>>>>>one or more substates for the new KVM_GET/SET_VCPU_STATE
> >>>>>>>>>interface.
> >>>>>>>>>
> >>>>>>>>>What I read so far (or tried to patch already):
> >>>>>>>>>
> >>>>>>>>>- nmi_masked
> >>>>>>>>>- nmi_pending
> >>>>>>>>>- nmi_injected
> >>>>>>>>>- kvm_queued_exception (whole struct content)
> >>>>>>>>>- KVM_REQ_TRIPLE_FAULT (from vcpu.requests)
> >>>>>>>>>
> >>>>>>>>>Unclear points (for me) from the last discussion:
> >>>>>>>>>
> >>>>>>>>>- sipi_vector
> >>>>>>>>>- MCE (covered via kvm_queued_exception, or does it
> >>>>>>>>>  require more?)
> >>>>>>>>>
> >>>>>>>>>Please extend or correct the list as required.
> >>>>>>>>
> >>>>>>>>hflags. Qemu supports GIF, kvm supports GIF, but neither side
> >>>>>>>>knows how to sync it.
> >>>>>>>
> >>>>>>>BTW, GIF is related to svm nesting, right?
> >>>>>>
> >>>>>>Yes and no. It's an architecture addition that came with
> >>>>>>SVM, yes.
> >>>>>>
> >>>>>>The problem is that I don't want to support migrating while in a
> >>>>>Why not?
> >>>>
> >>>>Because then we'd have to transfer the whole host cpu cache and the
> >>>>merged intercept bitmaps to userspace as well. That's just too many
> >>>>internals to expose IMHO.
> >>>>
> >>>But the amount of information is constant no matter how many l2
> >>>guests there are. Correct? We can expose it as a separate substate.
> >>
> >>Or we can just not migrate while in a nested guest :-). Which will
> >>make everything a lot easier.
> >>
> >Suppose we have an l2 guest that handles interrupts/nmis by itself;
> >how can we force it to exit?
> 
> If the nested hypervisor doesn't intercept INTR we don't support it
> anyway.
> 
Why? I looked at the code briefly, and it looks like we just inject the
interrupt as usual instead of doing a nested exit if l2 does not
intercept INTR. Have I misinterpreted the code? Even if I have, why not
support it?

> >I don't think requiring a certain cpu state before
> >migration is the right thing to do. What if the user paused a VM and
> >then decided to migrate?
> 
> So pausing has to make it go out of nested guest context too?
Probably.
> Then we're not in the nested guest context, right? :)
> 
> >Or the VM was paused automatically because of a shortage of disk
> >space, and management wants to migrate the VM to another host with a
> >bigger disk?
> 
> Same as before.
What do you mean?
> 
> > Really, pushing the whole nesting state over is not a good idea.
> 
Maybe we should just disallow migration with a nested guest running
then? Cross-vendor migration is not possible anyway.

-- 
Gleb.