From: Dave Hansen <dave.hansen@linux.intel.com>
To: Kees Cook <keescook@chromium.org>, "H. Peter Anvin" <hpa@zytor.com>
Cc: LKML <linux-kernel@vger.kernel.org>,
"Rik van Riel" <riel@redhat.com>,
"Andy Lutomirski" <luto@kernel.org>,
"Thomas Gleixner" <tglx@linutronix.de>,
"Ingo Molnar" <mingo@redhat.com>,
"x86@kernel.org" <x86@kernel.org>,
"Paolo Bonzini" <pbonzini@redhat.com>,
"Radim Krčmář" <rkrcmar@redhat.com>,
"Peter Zijlstra" <peterz@infradead.org>,
"Yu-cheng Yu" <yu-cheng.yu@intel.com>,
"Masahiro Yamada" <yamada.masahiro@socionext.com>,
"Borislav Petkov" <bp@suse.de>,
"Christian Borntraeger" <borntraeger@de.ibm.com>,
"Thomas Garnier" <thgarnie@google.com>,
"Brian Gerst" <brgerst@gmail.com>,
"He Chen" <he.chen@linux.intel.com>,
"Mathias Krause" <minipli@googlemail.com>,
"Fenghua Yu" <fenghua.yu@intel.com>,
"Piotr Luc" <piotr.luc@intel.com>, "Kyle Huey" <me@kylehuey.com>,
"Len Brown" <len.brown@intel.com>, KVM <kvm@vger.kernel.org>,
"kernel-hardening@lists.openwall.com"
<kernel-hardening@lists.openwall.com>
Subject: [kernel-hardening] Re: [PATCH] x86/fpu: move FPU state into separate cache
Date: Wed, 29 Mar 2017 14:19:37 -0700 [thread overview]
Message-ID: <ca888ec2-e2e6-3600-3e39-c18e61e0c735@linux.intel.com> (raw)
In-Reply-To: <CAGXu5jLQfS6gK2MyetWPjyJDOg8NdACXsPXLt7OasQE03VUwPg@mail.gmail.com>
On 03/29/2017 02:09 PM, Kees Cook wrote:
> They're adjacent already, which poses a problem for the struct layout
> randomization plugin, since adjacency may no longer be true (after
> layout randomization). This adjacency (or not) isn't really the
> problem: it's that FPU state size is only known at runtime. Another
> solution would be to have FPU state be a fixed size...
We don't want that. It varies from a couple hundred bytes to ~3k on
newer CPUs. We don't want to eat an extra 2.5k per task on the older
processors.
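
The runtime sizing works because CPUID leaf 0xD reports the XSAVE area
size for the features enabled in XCR0, and the kernel sizes every task
allocation to exactly that (fpu_kernel_xstate_size feeding
arch_task_struct_size), with the FPU state kept as the *last* member so
the tail can grow -- which is precisely the layout invariant that
randomization would break. A minimal userspace sketch of the same
pattern, assuming GCC/Clang's <cpuid.h>; "struct task" and xsave_size()
are illustrative names here, not the kernel's:

#include <cpuid.h>
#include <stdio.h>
#include <stdlib.h>

/* Fixed header plus a trailing save area whose size is only known
 * at runtime -- the member must stay last for this to work. */
struct task {
	int pid;			/* fixed-layout fields ... */
	unsigned char fpu_state[];	/* flexible array, sized at boot */
};

/* CPUID.(EAX=0DH, ECX=0):EBX = size of the XSAVE area for the
 * feature set currently enabled in XCR0. */
static size_t xsave_size(void)
{
	unsigned int eax, ebx, ecx, edx;

	if (!__get_cpuid_count(0x0d, 0, &eax, &ebx, &ecx, &edx))
		return 512;		/* fallback: legacy FXSAVE frame */
	return ebx;
}

int main(void)
{
	size_t fpu_size = xsave_size();
	/* One allocation covers header + tail, analogous to the
	 * kernel's arch_task_struct_size-sized task_struct slab. */
	struct task *t = calloc(1, sizeof(*t) + fpu_size);

	if (!t)
		return 1;
	printf("XSAVE area: %zu bytes per task\n", fpu_size);
	free(t);
	return 0;
}

On an SSE-only machine this reports something near the legacy 576-byte
frame; with AVX-512 enabled it is closer to 2.7k, which is the per-task
delta being discussed above.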
Thread overview: 13+ messages
2017-03-29 20:39 [kernel-hardening] [PATCH] x86/fpu: move FPU state into separate cache Kees Cook
2017-03-29 20:45 ` [kernel-hardening] " H. Peter Anvin
2017-03-29 21:09 ` Kees Cook
2017-03-29 21:19 ` Dave Hansen [this message]
2017-03-29 21:30 ` Linus Torvalds
2017-03-29 21:35 ` Andy Lutomirski
2017-03-29 21:41 ` Linus Torvalds
2017-03-29 22:28 ` hpa
2017-03-29 23:56 ` Linus Torvalds
2017-03-30 1:50 ` Kees Cook
2017-03-29 21:35 ` Linus Torvalds
2017-03-31 4:59 ` kbuild test robot
2017-03-31 5:57 ` kbuild test robot