From: Paolo Bonzini <pbonzini@redhat.com>
To: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
wanpeng.li@linux.intel.com, namit@cs.technion.ac.il,
hpa@linux.intel.com, Fenghua Yu <fenghua.yu@intel.com>
Subject: Re: [CFT PATCH v2 2/2] KVM: x86: support XSAVES usage in the host
Date: Wed, 26 Nov 2014 14:57:04 +0100
Message-ID: <5475DC30.4010104@redhat.com>
In-Reply-To: <20141126135322.GA4887@potion.brq.redhat.com>
On 26/11/2014 14:53, Radim Krčmář wrote:
> > > get_xsave = native_xrstor(guest_xsave); xsave(aligned_userspace_buffer)
> > > set_xsave = xrstor(aligned_userspace_buffer); native_xsave(guest_xsave)
> > >
> > > Could that work?
> >
> > It could, though it is more like
> >
> >   get_fpu()
> >   native_xrstor(guest_xsave)
> >   xsave(buffer)
> >   put_fpu()
> >
> > and vice versa.  Also, the userspace buffer is most likely not aligned,
> > so you need some kind of bounce buffer.  It can be done if the CPUID
> > turns out to be a bottleneck; apart from that it'd most likely be slower.
> Yeah, it was mostly making this code more future-proof ... it is easier
> to convince xsave.h to export its structures if CPUID is the problem.
> (I still see some hope for Linux, so performance isn't my primary goal.)
>
> I'm quite interested in CPUID now though, so I'll try to benchmark it,
> someday.
I'm not sure which one is more future-proof. :)  I wonder if native_xrstor
could become a problem the day XRSTORS actually sets/restores MSRs, as the
processor documentation promises.  We do not need XRSTORS to pass those
MSRs to userspace via KVM_GET/SET_XSAVE, because we have KVM_GET/SET_MSR
for that, but it may cause problems if get_xsave uses XRSTORS and thus sets
the MSRs to unanticipated values.  Difficult to say without more
information on Intel's plans.
Paolo
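
For reference, a minimal sketch of the get_xsave conversion discussed
above, assuming hypothetical get_fpu()/put_fpu(), native_xrstor() and
xsave_standard() helpers standing in for the raw instructions; the
essential detail is the 64-byte-aligned bounce buffer, since the userspace
buffer handed to KVM_GET_XSAVE carries no alignment guarantee:

    /*
     * Rough sketch only, not the real KVM code.  The helpers below follow
     * the pseudo-code quoted in this thread and are hypothetical wrappers
     * around the raw instructions; the point is the format conversion
     * through a 64-byte-aligned bounce buffer.
     */
    #include <linux/kernel.h>
    #include <linux/slab.h>
    #include <linux/uaccess.h>

    /* hypothetical helpers, matching the pseudo-code above */
    void get_fpu(void);
    void put_fpu(void);
    void native_xrstor(void *compacted_image);       /* XRSTORS of guest state */
    void xsave_standard(void *area, unsigned int size); /* XSAVE, standard format */

    static int get_xsave_to_user(void *guest_xsave, void __user *ubuf,
                                 unsigned int size)
    {
            void *bounce, *aligned;
            int ret = 0;

            /* XSAVE needs a 64-byte-aligned save area, so bounce the data. */
            bounce = kmalloc(size + 63, GFP_KERNEL);
            if (!bounce)
                    return -ENOMEM;
            aligned = PTR_ALIGN(bounce, 64);

            get_fpu();
            native_xrstor(guest_xsave);     /* load compacted (XSAVES) image */
            xsave_standard(aligned, size);  /* re-save in standard XSAVE format */
            put_fpu();

            if (copy_to_user(ubuf, aligned, size))
                    ret = -EFAULT;

            kfree(bounce);
            return ret;
    }

The set_xsave direction would mirror this: copy from userspace into the
aligned buffer, XRSTOR it, then save back into the guest image with XSAVES.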