From: Jeremy Fitzhardinge <jeremy@goop.org>
To: Vegard Nossum <vegard.nossum@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>,
Pekka Enberg <penberg@cs.helsinki.fi>,
linux-kernel@vger.kernel.org, Pekka Paalanen <pq@iki.fi>
Subject: Re: [PATCH] kmemcheck: SMP support
Date: Fri, 23 May 2008 17:13:47 +0100 [thread overview]
Message-ID: <4836ED3B.4050808@goop.org> (raw)
In-Reply-To: <19f34abd0805230830o3af93956h8152de3f6e350a09@mail.gmail.com>
Vegard Nossum wrote:
> On Fri, May 23, 2008 at 5:06 PM, Ingo Molnar <mingo@elte.hu> wrote:
>
>> Vegard, wanna have a look at introducing per CPU kernel pagetables? I
>> tried that once in the past and it wasn't too horrible. (the patches are
>> gone though) We could do it before bringing other CPUs online, i.e. much
>> of the really yucky boot time pagetable juggling phase would be over
>> already. Hm?
>>
>
> Ingo.
>
> It really doesn't matter how easy it was for you.
>
> You're one of the x86 maintainers.
>
> And I think you're forgetting how hard these things are for a newbie.
> I don't even know which one comes first of pmds and puds.
>
> Per-cpu page tables sound about on the same scale as, say,
> rewriting the VM or some other major subsystem. Epic!
>
No, I don't think it would really be all that bad, if you just make the
kernel parts of the pagetable percpu; userspace might be a bit
trickier. But kernel mappings change sufficiently rarely that keeping
them all in sync isn't a huge problem.
If your requirement is that you want to be able to set page permissions
on kernel mappings on a per-cpu basis, then it might be easiest to do it
on-demand. Start off with a single shared pagetable, and when you want
to make a per-cpu page protection change, clone the pagetable from the
root down to the page you're changing and then do your update.
There are certainly enough hooking places in which you could implement it
without much effect on the core kernel code.
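The on-demand scheme above can be sketched as a toy model. This is a minimal illustration only: the names (`pgd_t`, `NCPUS`, `set_flags_percpu`, and so on) are invented for this sketch, the table is flattened to two levels rather than the kernel's real pgd/pud/pmd/pte hierarchy, and error handling and locking are omitted. All CPUs start out sharing one pagetable, and a per-CPU permission change clones only the path from the root down to the affected leaf page:

```c
/* Toy clone-on-write pagetable model; all type and function names
 * here are hypothetical, not the kernel's real API. */
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define NCPUS      4
#define PGD_SLOTS  8   /* top-level entries       */
#define PTE_SLOTS  8   /* entries per leaf page   */

typedef struct { int flags[PTE_SLOTS]; int shared; } pte_page_t;
typedef struct { pte_page_t *pte[PGD_SLOTS]; int shared; } pgd_t;

static pgd_t *cpu_pgd[NCPUS];   /* each CPU's root pointer */

/* Build one shared pagetable and point every CPU at it. */
static void init_shared(void)
{
    pgd_t *pgd = calloc(1, sizeof(*pgd));
    pgd->shared = 1;
    for (int i = 0; i < PGD_SLOTS; i++) {
        pgd->pte[i] = calloc(1, sizeof(pte_page_t));
        pgd->pte[i]->shared = 1;
    }
    for (int c = 0; c < NCPUS; c++)
        cpu_pgd[c] = pgd;
}

/* Change one page's flags for a single CPU, cloning the path from
 * the root down so the other CPUs keep the shared view intact. */
static void set_flags_percpu(int cpu, int slot, int idx, int flags)
{
    if (cpu_pgd[cpu]->shared) {            /* clone the root */
        pgd_t *priv = malloc(sizeof(*priv));
        memcpy(priv, cpu_pgd[cpu], sizeof(*priv));
        priv->shared = 0;
        cpu_pgd[cpu] = priv;
    }
    if (cpu_pgd[cpu]->pte[slot]->shared) { /* clone the leaf page */
        pte_page_t *priv = malloc(sizeof(*priv));
        memcpy(priv, cpu_pgd[cpu]->pte[slot], sizeof(*priv));
        priv->shared = 0;
        cpu_pgd[cpu]->pte[slot] = priv;
    }
    cpu_pgd[cpu]->pte[slot]->flags[idx] = flags;
}

static int get_flags(int cpu, int slot, int idx)
{
    return cpu_pgd[cpu]->pte[slot]->flags[idx];
}
```

The point of the design is visible in the sharing pattern afterwards: only the root and the one touched leaf are duplicated for the modifying CPU, while every untouched leaf page remains shared by all CPUs, so the memory cost scales with the number of per-CPU changes rather than with the size of the address space.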
> I'm glad to hear from you, though.
>
> Pekka suggested that per-cpu page tables might help NUMA systems too.
> Does that sound right to you? Would anybody else benefit from having
> per-CPU page tables? If not, I have a hard time believing it will ever
> get merged.
>
> (Oh. mmio-trace might. But that's also a hacking tool and doesn't really count.)
>
I'd need to think about it a bit, but it's possible that 64-bit Xen might
be able to take advantage of it. Yeah, it could be useful.
J
Thread overview: 15+ messages
2008-05-23 14:17 [PATCH] kmemcheck: SMP support Vegard Nossum
2008-05-23 15:06 ` Ingo Molnar
2008-05-23 15:30 ` Vegard Nossum
2008-05-23 16:13 ` Jeremy Fitzhardinge [this message]
2008-05-26 9:11 ` Ingo Molnar
2008-05-26 9:29 ` Avi Kivity
2008-05-23 15:40 ` Jeremy Fitzhardinge
2008-05-23 15:51 ` Vegard Nossum
2008-05-23 17:12 ` Jan Kiszka
2008-05-23 17:32 ` Vegard Nossum
2008-05-23 17:54 ` Jan Kiszka
2008-05-23 20:54 ` Jeremy Fitzhardinge
2008-05-23 16:09 ` Johannes Weiner
2008-05-23 17:10 ` Vegard Nossum
[not found] ` <19f34abd0805230719j1ce0e2eje6da7c1f963fdf75@mail.gmail.com>
2008-05-25 14:30 ` Fwd: " Pekka Paalanen