kvm.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
From: Sheng Yang <sheng@linux.intel.com>
To: Avi Kivity <avi@redhat.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>, kvm@vger.kernel.org
Subject: Re: [PATCH] KVM: x86: XSAVE/XRSTOR live migration support
Date: Mon, 31 May 2010 19:21:10 +0800	[thread overview]
Message-ID: <201005311921.10566.sheng@linux.intel.com> (raw)
In-Reply-To: <4BFE4337.4010502@redhat.com>

On Thursday 27 May 2010 18:02:31 Avi Kivity wrote:
> On 05/27/2010 12:48 PM, Sheng Yang wrote:
> > This patch enable save/restore of xsave state.
> > 
> > Signed-off-by: Sheng Yang<sheng@linux.intel.com>
> > ---
> > 
> >   arch/x86/include/asm/kvm.h |   29 ++++++++++++++++
> >   arch/x86/kvm/x86.c         |   79
> >   ++++++++++++++++++++++++++++++++++++++++++++ include/linux/kvm.h      
> >    |    6 +++
> 
> Documentation/kvm/api.txt +++++++++++++
> 
> > +/* for KVM_CAP_XSAVE */
> > +struct kvm_xsave {
> > +	struct {
> > +		__u16 cwd;
> > +		__u16 swd;
> > +		__u16 twd;
> > +		__u16 fop;
> > +		__u64 rip;
> > +		__u64 rdp;
> > +		__u32 mxcsr;
> > +		__u32 mxcsr_mask;
> > +		__u32 st_space[32];
> > +		__u32 xmm_space[64];
> > +		__u32 padding[12];
> > +		__u32 sw_reserved[12];
> > +	} i387;
> > +	struct {
> > +		__u64 xstate_bv;
> > +		__u64 reserved1[2];
> > +		__u64 reserved2[5];
> > +	} xsave_hdr;
> > +	struct {
> > +		__u32 ymmh_space[64];
> > +	} ymmh;
> > +	__u64 xcr0;
> > +	__u32 padding[256];
> > +};
> 
> Need to reserve way more space here for future xsave growth.  I think at
> least 4K.  LRB wa 32x512bit = 1K (though it probably isn't a candidate
> for vmx).  Would be good to get an opinion from your processor architects.
> 
> I don't think we need to detail the contents of the structures since
> they're described by the SDM; so we can have just a large array that is
> 1:1 with the xsave as saved by the fpu.

I think we can reserve one page here. But one big array make it harder to work 
with QEmu CPUState. Do we need lots of marcos in QEmu to parse the array? Also 
it's easier to transfer get/set_fpu to get/set_xsave interface using current 
structure I think.

--
regards
Yang, Sheng

> 
> If we do that then xcr0 needs to be in a separate structure, say
> kvm_xcr, with a flags field and reserved space of its own for future xcr
> growth.
> 
> > @@ -2363,6 +2366,59 @@ static int kvm_vcpu_ioctl_x86_set_debugregs(struct
> > kvm_vcpu *vcpu,
> > 
> >   	return 0;
> >   
> >   }
> > 
> > +static void kvm_vcpu_ioctl_x86_get_xsave(struct kvm_vcpu *vcpu,
> > +					struct kvm_xsave *guest_xsave)
> > +{
> > +	struct xsave_struct *xsave =&vcpu->arch.guest_fpu.state->xsave;
> > +
> > +	if (!cpu_has_xsave)
> > +		return;
> 
> Hm, it would be nice to make it backward compatible and return the
> legacy fpu instead.  I think the layouts are compatible?
> 
> > +
> > +	guest_xsave->i387.cwd = xsave->i387.cwd;
> > +	guest_xsave->i387.swd = xsave->i387.swd;
> > +	guest_xsave->i387.twd = xsave->i387.twd;
> > +	guest_xsave->i387.fop = xsave->i387.fop;
> > +	guest_xsave->i387.rip = xsave->i387.rip;
> > +	guest_xsave->i387.rdp = xsave->i387.rdp;
> > +	memcpy(guest_xsave->i387.st_space, xsave->i387.st_space, 128);
> > +	memcpy(guest_xsave->i387.xmm_space, xsave->i387.xmm_space,
> > +			sizeof guest_xsave->i387.xmm_space);
> > +
> > +	guest_xsave->xsave_hdr.xstate_bv = xsave->xsave_hdr.xstate_bv;
> > +	memcpy(guest_xsave->ymmh.ymmh_space, xsave->ymmh.ymmh_space,
> > +			sizeof xsave->ymmh.ymmh_space);
> 
> And we can do a big memcpy here.  But we need to limit it to what the
> host actually allocated.
> 
> > +
> > +	guest_xsave->xcr0 = vcpu->arch.xcr0;
> > +}
> > +
> > 
> >   long kvm_arch_vcpu_ioctl(struct file *filp,
> >   
> >   			 unsigned int ioctl, unsigned long arg)
> >   
> >   {
> > 
> > @@ -2564,6 +2620,29 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
> > 
> >   		r = kvm_vcpu_ioctl_x86_set_debugregs(vcpu,&dbgregs);
> >   		break;
> >   	
> >   	}
> > 
> > +	case KVM_GET_XSAVE: {
> > +		struct kvm_xsave xsave;
> 
> Too big for stack (especially if we reserve room for growth).
> 
> > diff --git a/include/linux/kvm.h b/include/linux/kvm.h
> > index 23ea022..5006761 100644
> > --- a/include/linux/kvm.h
> > +++ b/include/linux/kvm.h
> > @@ -524,6 +524,9 @@ struct kvm_enable_cap {
> > 
> >   #define KVM_CAP_PPC_OSI 52
> >   #define KVM_CAP_PPC_UNSET_IRQ 53
> >   #define KVM_CAP_ENABLE_CAP 54
> > 
> > +#ifdef __KVM_HAVE_XSAVE
> > +#define KVM_CAP_XSAVE 55
> > +#endif
> 
> Might make sense to have a separate KVM_CAP_XCR, just for consistency.

  parent reply	other threads:[~2010-05-31 11:21 UTC|newest]

Thread overview: 6+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2010-05-27  9:48 [PATCH] KVM: x86: XSAVE/XRSTOR live migration support Sheng Yang
2010-05-27 10:02 ` Avi Kivity
2010-05-27 10:33   ` Sheng Yang
2010-05-27 11:34     ` Avi Kivity
2010-05-31 11:21   ` Sheng Yang [this message]
2010-05-31 11:26     ` Avi Kivity

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=201005311921.10566.sheng@linux.intel.com \
    --to=sheng@linux.intel.com \
    --cc=avi@redhat.com \
    --cc=kvm@vger.kernel.org \
    --cc=mtosatti@redhat.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).