From: Avi Kivity <avi@redhat.com>
To: "Cui, Dexuan" <dexuan.cui@intel.com>
Cc: "kvm@vger.kernel.org" <kvm@vger.kernel.org>,
"Yang, Sheng" <sheng.yang@intel.com>
Subject: Re: [PATCH 1/1] KVM: X86: add the support of XSAVE/XRSTOR to guest
Date: Thu, 06 May 2010 11:14:18 +0300 [thread overview]
Message-ID: <4BE27A5A.2020903@redhat.com> (raw)
In-Reply-To: <1865303E0DED764181A9D882DEF65FB61A82B51D4A@shsmsx502.ccr.corp.intel.com>
On 05/06/2010 07:23 AM, Cui, Dexuan wrote:
>
>>> + goto err;
>>> + vcpu->arch.guest_xstate_mask = new_bv;
>>> + xsetbv(XCR_XFEATURE_ENABLED_MASK, vcpu->arch.guest_xstate_mask);
>>>
>>>
>> Can't we run with the host xcr0? isn't it guaranteed to be a superset
>> of the guest xcr0?
>>
> I agree host xcr0 is a superset of guest xcr0.
> In the case guest xcr0 has less bits set than host xcr0, I suppose running with guest
> xcr0 would be a bit faster.
However, switching xcr0 may be slow. That's our experience with msrs.
Can you measure its latency?
Can also be avoided if guest and host xcr0 match.
> If you think simplying the code by removing the field
> guest_xstate_mask is more important, we can do it.
>
Well we need guest_xstate_mask, it's the guest's xcr0, no?
btw, it needs save/restore for live migration, as well as save/restore
for the new fpu state.
>>> + skip_emulated_instruction(vcpu);
>>> + return 1;
>>> +err:
>>> + kvm_inject_gp(vcpu, 0);
>>>
>>>
>> Need to #UD in some circumstances.
>>
> I don't think so. When the CPU doesn't suport XSAVE, or guest doesn't set guest
> CR4.OSXSAVE, guest's attempt of exectuing xsetbv would get a #UD immediately
> and no VMexit would occur.
>
Ok.
>>> @@ -62,6 +64,7 @@
>>> (~(unsigned long)(X86_CR4_VME | X86_CR4_PVI | X86_CR4_TSD |
>>> X86_CR4_DE\ | X86_CR4_PSE | X86_CR4_PAE | X86_CR4_MCE \
>>> | X86_CR4_PGE | X86_CR4_PCE | X86_CR4_OSFXSR \
>>> + | (cpu_has_xsave ? X86_CR4_OSXSAVE : 0) \
>>> | X86_CR4_OSXMMEXCPT | X86_CR4_VMXE))
>>>
>>>
>> It also depends on the guest cpuid value. Please add it outside the
>>
> If cpu_has_xsave is true, we always present xsave to guest via cpuid, so I
> think the code is correct here.
>
We don't pass all host cpuid values to the guest. We expose them to
userspace via KVM_GET_SUPPORTED_CPUID, and then userspace decides what
cpuid to present to the guest via KVM_SET_CPUID2. So the guest may run
with fewer features than the host.
> I saw the 2 patches you sent. They (like the new APIs fpu_alloc/fpu_free) will simplify
> the implementation of kvm xsave support. Thanks a lot!
>
Thanks. To simplify things, please separate host xsave support
(switching to the fpu api) and guest xsave support (cpuid, xsetbv, new
ioctls) into separate patches. In fact, guest xsave support is probably
best split into patches as well.
--
error compiling committee.c: too many arguments to function
next prev parent reply other threads:[~2010-05-06 8:14 UTC|newest]
Thread overview: 7+ messages / expand[flat|nested] mbox.gz Atom feed top
2010-04-29 5:22 [PATCH 1/1] KVM: X86: add the support of XSAVE/XRSTOR to guest Dexuan Cui
2010-05-02 14:13 ` Avi Kivity
2010-05-06 4:23 ` Cui, Dexuan
2010-05-06 8:14 ` Avi Kivity [this message]
2010-05-06 14:20 ` Cui, Dexuan
2010-05-06 19:45 ` Avi Kivity
2010-05-06 19:49 ` Avi Kivity
Reply instructions:
You may reply publicly to this message via plain-text email
using any one of the following methods:
* Save the following mbox file, import it into your mail client,
and reply-to-all from there: mbox
Avoid top-posting and favor interleaved quoting:
https://en.wikipedia.org/wiki/Posting_style#Interleaved_style
* Reply using the --to, --cc, and --in-reply-to
switches of git-send-email(1):
git send-email \
--in-reply-to=4BE27A5A.2020903@redhat.com \
--to=avi@redhat.com \
--cc=dexuan.cui@intel.com \
--cc=kvm@vger.kernel.org \
--cc=sheng.yang@intel.com \
/path/to/YOUR_REPLY
https://kernel.org/pub/software/scm/git/docs/git-send-email.html
* If your mail client supports setting the In-Reply-To header
via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line
before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).