From: Sarah Newman <srn@prgmr.com>
To: "H. Peter Anvin" <hpa@zytor.com>, David Vrabel <david.vrabel@citrix.com>
Cc: Suresh Siddha <sbsiddha@gmail.com>,
Thomas Gleixner <tglx@linutronix.de>,
Ingo Molnar <mingo@redhat.com>,
linux-kernel@vger.kernel.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCHv1] x86: don't schedule when handling #NM exception
Date: Sun, 16 Mar 2014 21:12:13 -0700
Message-ID: <5326761D.5000905@prgmr.com>
In-Reply-To: <53266F56.9030909@zytor.com>

On 03/16/2014 08:43 PM, H. Peter Anvin wrote:
> On 03/16/2014 08:35 PM, Sarah Newman wrote:
>> Can you please review my patch first? It's only enabled when absolutely required.
>
> It doesn't help. It means you're running on Xen, and you will have
> processes subjected to random SIGKILL because they happen to touch the
> FPU when the atomic pool is low.
>
> However, there is probably a happy medium: you don't actually need eager
> FPU restore, you just need eager FPU *allocation*. We have been
> intending to allocate the FPU state at task creation time for eagerfpu,
> and Suresh Siddha has already produced such a patch; it just needs some
> minor fixups due to an __init failure.
>
> http://lkml.kernel.org/r/1391325599.6481.5.camel@europa
>
> In the Xen case we could turn on eager allocation but not eager fpu. In
> fact, it might be justified to *always* do eager allocation...

Unconditional eager allocation works. Can Xen users count on this being
included and applied to the stable kernels?

Thanks,
Sarah
Thread overview: 28+ messages
2014-03-10 16:17 [PATCHv1] x86: don't schedule when handling #NM exception David Vrabel
2014-03-10 16:40 ` H. Peter Anvin
2014-03-10 17:15 ` David Vrabel
2014-03-10 17:25 ` H. Peter Anvin
2014-03-17 3:13 ` Sarah Newman
2014-03-17 3:32 ` [PATCH] x86, fpu, xen: Allocate fpu state for xen pv based on PVABI behavior Sarah Newman
2014-03-17 3:33 ` [PATCHv1] x86: don't schedule when handling #NM exception H. Peter Anvin
2014-03-17 3:35 ` [Xen-devel] " Sarah Newman
2014-03-17 3:43 ` H. Peter Anvin
2014-03-17 4:12 ` Sarah Newman [this message]
2014-03-17 4:23 ` H. Peter Anvin
2014-03-20 0:00 ` Greg Kroah-Hartman
2014-03-20 2:29 ` H. Peter Anvin
2014-03-17 13:29 ` David Vrabel
2014-03-19 13:21 ` Konrad Rzeszutek Wilk
2014-03-19 15:02 ` H. Peter Anvin
2014-06-23 13:08 ` Konrad Rzeszutek Wilk
2015-03-05 22:08 ` H. Peter Anvin
2015-03-06 11:46 ` [PATCHv4] x86, fpu: remove the logic of non-eager fpu mem allocation at the first usage David Vrabel
2014-03-17 12:19 ` [Xen-devel] [PATCHv1] x86: don't schedule when handling #NM exception George Dunlap
2014-03-17 16:55 ` H. Peter Anvin
2014-03-17 17:05 ` Jan Beulich
2014-03-17 17:12 ` H. Peter Anvin
2014-03-18 8:14 ` Ingo Molnar
2014-03-17 17:14 ` George Dunlap
2014-03-18 18:17 ` Sarah Newman
2014-03-18 18:27 ` H. Peter Anvin
2014-03-10 16:45 ` H. Peter Anvin