From: Tejun Heo <tj@kernel.org>
To: "H. Peter Anvin" <hpa@zytor.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>,
Ingo Molnar <mingo@elte.hu>, Thomas Gleixner <tglx@linutronix.de>,
x86@kernel.org,
Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
Jeremy Fitzhardinge <jeremy@goop.org>,
cpw@sgi.com
Subject: Re: #tj-percpu has been rebased
Date: Sat, 14 Feb 2009 11:10:26 +0900
Message-ID: <49962812.8030902@kernel.org>
In-Reply-To: <49962413.9020101@zytor.com>

Hello,

H. Peter Anvin wrote:
> Okay, let's think about this a bit.
>
> At least for x86, there are two cases:
>
> - 32 bits. The vmalloc area is *extremely* constrained, and has the
> same class of fragmentation issues as main memory. In fact, it might
> have *more*, since the allocations there tend to be larger.

We can go for smaller chunks, but I don't really see any perfect
solution here. If a machine is doing 16-way SMP on 32-bit, it's not
going to scale very well anyway.
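
To put rough numbers on it, each chunk has to reserve one contiguous
stretch of nr_cpus * unit_size in vmalloc space, so the cpu count
multiplies straight into the footprint. A back-of-the-envelope,
userspace-compilable sketch (the unit size is illustrative; 128MB is
the usual x86-32 vmalloc area with the default split):

#include <stdio.h>

int main(void)
{
        unsigned long unit_size = 64UL << 10;     /* 64k per-cpu unit, illustrative */
        unsigned long vmalloc_size = 128UL << 20; /* typical 32-bit vmalloc area */
        unsigned int nr_cpus;

        for (nr_cpus = 2; nr_cpus <= 16; nr_cpus <<= 1) {
                /* one chunk covers every cpu's unit back to back */
                unsigned long chunk = nr_cpus * unit_size;

                printf("%2u cpus: %4luk per chunk, at most %lu chunks\n",
                       nr_cpus, chunk >> 10, vmalloc_size / chunk);
        }
        return 0;
}

At 16 CPUs even a modest 64k unit already pins 1MB of contiguous
vmalloc space per chunk, and that's before fragmentation enters the
picture.
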
> - 64 bits. At this point, we have with current memory sizes(*) an
> astronomically large virtual space. Here we have no real problem
> allocating linearly in virtual space, either by giving each CPU some
> very large hunk of virtual address space (which means each percpu area
> is contiguous in virtual space) or by doing large contiguous allocations
> out of another range.
>
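
On 64-bit, the hunk-per-cpu variant at least keeps the translation
trivial: a constant-stride add, so each cpu's area stays contiguous
in virtual space. A simplified userspace sketch (the base address,
the 1GB stride, and the names are all made up):

#include <stdio.h>

#define CPU_STRIDE (1UL << 30)  /* illustrative: 1GB of VA per cpu */

/* made-up base of cpu0's hunk */
static unsigned long pcpu_base = 0xffffe90000000000UL;

/* cpu N's copy of a percpu object is base + N * stride + offset */
static inline unsigned long ptr_for_cpu(unsigned long offset,
                                        unsigned int cpu)
{
        return pcpu_base + (unsigned long)cpu * CPU_STRIDE + offset;
}

int main(void)
{
        unsigned int cpu;

        for (cpu = 0; cpu < 4; cpu++)
                printf("cpu%u copy of offset 0x40 lives at 0x%lx\n",
                       cpu, ptr_for_cpu(0x40, cpu));
        return 0;
}
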
> At first glance there doesn't seem to me to be any advantage to
> interleaving the CPUs. Quite the contrary, it seems to utterly
> preclude ever using PMD mappings as a win, since (a) you'd be
> allocating real memory for CPUs which aren't actually there and
> (b) you'd have the wrong NUMA associativity.
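
If I'm reading (a) right, the worry is that a single PMD-sized (2MB)
mapping would span units belonging to many cpus. A userspace sketch of
the contrast (all sizes illustrative):

#include <stdio.h>

#define NR_CPUS    16
#define UNIT_SIZE  (64UL << 10)   /* interleaving granularity */
#define STRIDE     (256UL << 20)  /* per-cpu hunk in the contiguous layout */
#define PMD_SIZE   (2UL << 20)

/* which cpu's data lives at offset 'off' into the percpu region? */
static unsigned long cpu_contig(unsigned long off)
{
        return off / STRIDE;
}

static unsigned long cpu_ileave(unsigned long off)
{
        return (off / UNIT_SIZE) % NR_CPUS;
}

int main(void)
{
        unsigned long off;

        /* walk the first PMD-sized (2MB) window of the region */
        for (off = 0; off < PMD_SIZE; off += UNIT_SIZE)
                printf("off %#8lx: contiguous -> cpu%lu, interleaved -> cpu%lu\n",
                       off, cpu_contig(off), cpu_ileave(off));
        return 0;
}

In the contiguous layout the whole 2MB window belongs to cpu0, so one
large page from cpu0's node backs it cleanly; interleaved, the same
window touches all 16 cpus.
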

For (a), we can do the hotplug online/offline thing for dynamic areas
if necessary. For (b), why would it have the wrong NUMA associativity?
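
The virtual layout doesn't dictate where the backing pages come from;
cpu N's unit can be populated from cpu N's node either way. Roughly
(hand-waving sketch, not against any particular tree; pcpu_map_page()
is a made-up placeholder and error unwinding is omitted):

#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/topology.h>

static int pcpu_populate_unit(unsigned int cpu, unsigned int nr_pages)
{
        unsigned int i;

        for (i = 0; i < nr_pages; i++) {
                /* node-local backing regardless of VA interleaving */
                struct page *page = alloc_pages_node(cpu_to_node(cpu),
                                                     GFP_KERNEL, 0);
                if (!page)
                        return -ENOMEM;
                /* placeholder for installing the page into the unit's
                 * vmalloc-space mapping */
                pcpu_map_page(cpu, i, page);
        }
        return 0;
}
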
Thanks.
--
tejun
Thread overview: 20+ messages
2009-01-30 17:05 #tj-percpu has been rebased Tejun Heo
2009-01-31 5:46 ` Tejun Heo
2009-01-31 13:28 ` Ingo Molnar
2009-02-02 9:04 ` Rusty Russell
2009-02-04 3:18 ` Tejun Heo
2009-02-12 3:37 ` Tejun Heo
2009-02-12 3:44 ` Tejun Heo
2009-02-13 20:58 ` Rusty Russell
2009-02-13 21:17 ` Jeremy Fitzhardinge
2009-02-13 22:59 ` H. Peter Anvin
2009-02-14 0:45 ` Tejun Heo
2009-02-14 1:53 ` H. Peter Anvin
2009-02-14 2:10 ` Tejun Heo [this message]
2009-02-16 7:23 ` Rusty Russell
2009-02-16 17:28 ` H. Peter Anvin
2009-02-16 23:22 ` Rusty Russell
2009-02-16 23:28 ` H. Peter Anvin
2009-02-18 4:25 ` Rusty Russell
2009-02-18 6:40 ` H. Peter Anvin
2009-02-18 7:11 ` Rusty Russell