From: Dave Hansen <dave.hansen@linux.intel.com>
To: Keno Fischer <keno@juliacomputing.com>
Cc: "Linux Kernel Mailing List" <linux-kernel@vger.kernel.org>,
"Thomas Gleixner" <tglx@linutronix.de>,
"Ingo Molnar" <mingo@redhat.com>,
x86@kernel.org, "H. Peter Anvin" <hpa@zytor.com>,
"Borislav Petkov" <bp@suse.de>,
"Andi Kleen" <andi@firstfloor.org>,
"Paolo Bonzini" <pbonzini@redhat.com>,
"Radim Krčmář" <rkrcmar@redhat.com>,
"Kyle Huey" <khuey@kylehuey.com>,
"Robert O'Callahan" <robert@ocallahan.org>
Subject: Re: [RFC PATCH] x86/arch_prctl: Add ARCH_SET_XCR0 to mask XCR0 per-thread
Date: Mon, 18 Jun 2018 09:16:19 -0700
Message-ID: <eaf86ae1-75f3-b49c-781c-33ebac532eb2@linux.intel.com>
In-Reply-To: <CABV8kRyWFJN1r8HtNrDGMCEDUEiCS=p0n8xucO9T5B1AvCDZVA@mail.gmail.com>
On 06/18/2018 08:13 AM, Keno Fischer wrote:
>>> 4) Catch the fault thrown by xsaves/xrestors in this situation, update
>>> XCR0, redo the xsaves/restores, put XCR0 back and continue
>>> execution after the faulting instruction.
>>
>> I'm worried about the kernel pieces that go digging in the XSAVE data
>> getting confused more than the hardware getting confused.
>
> So you prefer this option? If so, I can try to have a go at implementing it
> this way and seeing if I run into any trouble.

No, I'm saying that depending on faults is not a viable solution.  We
are not guaranteed to get faults in all the cases you would need to fix
up.  XSAVE*/XRSTOR* are not even *called* in some of those cases.
>>> At least currently, it is my understanding that `xfeatures_mask` only has
>>> user features, am I mistaken about that?
>>
>> We're slowly adding supervisor support. I think accounting for
>> supervisor features is a requirement for any new XSAVE code.
>
> Sure, I don't think this is in any way incompatible with that (though
> probably also informs that we want to keep the memory layout the
> same if possible).

I think you've tried to simplify your implementation by ignoring
awkward cases, like holes in the XSAVE buffer layout.  However, the
existing implementation actually *does* handle those things, and we've
spent a significant amount of time ensuring that it works, despite the
fact that today you can't buy an off-the-shelf CPU that creates a hole
without help from a hypervisor.
Thread overview:
2018-06-17 0:33 [RFC PATCH] x86/arch_prctl: Add ARCH_SET_XCR0 to mask XCR0 per-thread Keno Fischer
2018-06-17 16:35 ` Andi Kleen
2018-06-17 16:48 ` Keno Fischer
2018-06-17 18:22 ` Keno Fischer
2018-06-18 16:58 ` Andi Kleen
2018-06-18 17:50 ` Keno Fischer
2018-06-19 13:43 ` Andi Kleen
2018-06-18 12:47 ` Dave Hansen
2018-06-18 14:42 ` Keno Fischer
2018-06-18 15:04 ` Dave Hansen
2018-06-18 15:13 ` Keno Fischer
2018-06-18 16:16 ` Dave Hansen [this message]
2018-06-18 17:22 ` Keno Fischer
2018-06-18 17:29 ` Dave Hansen
2018-06-18 17:43 ` Dave Hansen
2018-06-18 18:16 ` Keno Fischer