public inbox for linux-kernel@vger.kernel.org
From: Frederic Weisbecker <fweisbec@gmail.com>
To: Andy Lutomirski <luto@amacapital.net>
Cc: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"H. Peter Anvin" <hpa@zytor.com>, X86 ML <x86@kernel.org>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Richard Weinberger <richard.weinberger@gmail.com>,
	Ingo Molnar <mingo@kernel.org>
Subject: Re: vmalloced stacks on x86_64?
Date: Sun, 26 Oct 2014 21:29:47 +0100	[thread overview]
Message-ID: <20141026202943.GA9871@lerouge> (raw)
In-Reply-To: <CALCETrWyvXZCocPgcbatFvphgBuGbKhsMvYJNuy9EFc4kM-TGw@mail.gmail.com>

On Sat, Oct 25, 2014 at 10:49:25PM -0700, Andy Lutomirski wrote:
> On Oct 25, 2014 9:11 PM, "Frederic Weisbecker" <fweisbec@gmail.com> wrote:
> >
> > 2014-10-25 2:22 GMT+02:00 Andy Lutomirski <luto@amacapital.net>:
> > > Is there any good reason not to use vmalloc for x86_64 stacks?
> > >
> > > The tricky bits I've thought of are:
> > >
> > >  - On any context switch, we probably need to probe the new stack
> > > before switching to it.  That way, if it's going to fault due to an
> > > out-of-sync pgd, we still have a stack available to handle the fault.
> >
> > Would that prevent any further faults on a vmalloc'ed kernel
> > stack? We would need to ensure that pre-faulting, say, the first byte
> > is enough to sync the whole new stack; otherwise we risk another
> > fault later, and some places really can't fault safely.
> >
> 
> I think so.  The vmalloc faults only happen when the entire top-level
> page table entry is missing, and those cover giant swaths of address
> space.
> 
> I don't know whether the vmalloc code guarantees not to span a pmd
> (pud? why couldn't these be called pte0, pte1, pte2, etc.?) boundary.

So dereferencing stack[0] is probably enough for 8KB worth of stack. I think
we have vmalloc_sync_all(), but I heard this only works on x86-64.
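
For illustration, here is a rough sketch of what such a pre-switch probe could
look like. The probe_new_stack() helper below is made up for this mail, not
actual kernel code; it just shows the idea of touching the new stack so that a
missing pgd entry gets synced in while we still run on a stack that is known
to be mapped:

	#include <linux/sched.h>

	static inline void probe_new_stack(struct task_struct *next)
	{
		/*
		 * Read one byte of the new task's stack.  If the stack sits
		 * in the vmalloc area and the current page tables lack the
		 * corresponding top-level entry, this triggers the vmalloc
		 * fault now, while the old (mapped) stack is still in use.
		 */
		volatile char *base = next->stack;

		(void)*base;
	}

	/* Would be called right before switching to the new task's stack. */

This assumes the whole stack lives under a single top-level entry, which is
what makes one read of stack[0] sufficient.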

Too bad we don't have a universal solution; I have that problem with per-cpu
allocated memory faulting at random places. I hit at least two places where it
got harmful: context tracking and perf callchains. We fixed the latter using
open-coded per-cpu allocation. I still haven't found a solution for context
tracking.
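
To illustrate what "open-coded per-cpu allocation" means here: instead of
going through __alloc_percpu(), allocate one kmalloc'ed buffer per possible
CPU and reach it through a plain array indexed by CPU number. The names below
(cpu_buf, alloc_cpu_buffers()) are invented for this sketch and are not the
actual perf callchain code:

	#include <linux/slab.h>
	#include <linux/cpumask.h>
	#include <linux/threads.h>
	#include <linux/errno.h>

	static void *cpu_buf[NR_CPUS];

	static int alloc_cpu_buffers(size_t size)
	{
		int cpu;

		/*
		 * kmalloc memory comes from the kernel linear mapping, whose
		 * top-level entries are present in every pgd, so touching
		 * these buffers from NMI or other fragile contexts never
		 * raises a vmalloc fault.
		 */
		for_each_possible_cpu(cpu) {
			cpu_buf[cpu] = kzalloc(size, GFP_KERNEL);
			if (!cpu_buf[cpu])
				return -ENOMEM; /* caller unwinds the rest */
		}
		return 0;
	}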


Thread overview: 14+ messages
2014-10-25  0:22 vmalloced stacks on x86_64? Andy Lutomirski
2014-10-25  2:38 ` H. Peter Anvin
2014-10-25  4:42   ` Andy Lutomirski
2014-10-26 16:46   ` Eric Dumazet
2014-10-25  9:15 ` Ingo Molnar
2014-10-25 16:05   ` Andy Lutomirski
2014-10-25 22:26 ` Richard Weinberger
2014-10-25 23:16   ` Andy Lutomirski
2014-10-25 23:31     ` Richard Weinberger
2014-10-26 18:16     ` Linus Torvalds
2014-10-26  4:11 ` Frederic Weisbecker
2014-10-26  5:49   ` Andy Lutomirski
2014-10-26 20:29     ` Frederic Weisbecker [this message]
2014-10-27  1:12       ` Andy Lutomirski
