From: William Lee Irwin III <wli@holomorphy.com>
To: Paul Mackerras <paulus@samba.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>,
Andrew Morton <akpm@digeo.com>,
dipankar@in.ibm.com, linux-kernel@vger.kernel.org,
Bartlomiej Zolnierkiewicz <B.Zolnierkiewicz@elka.pw.edu.pl>
Subject: Re: [PATCH] kmalloc_percpu
Date: Tue, 6 May 2003 22:19:01 -0700
Message-ID: <20030507051901.GY8978@holomorphy.com>
In-Reply-To: <16056.37397.694764.303333@argo.ozlabs.ibm.com>
William Lee Irwin III writes:
>> Same address mapped differently on different cpus is what I thought
>> you meant. It does make sense, and besides, it only really matters
>> when the thing is being switched in, so I think it's not such a big
>> deal. e.g. mark per-thread mm context with the cpu it was prepped for,
>> if they don't match at load-time then reset the kernel pmd's pgd entry
>> in the per-thread pgd at the top level. x86 blows away the TLB at the
On Wed, May 07, 2003 at 02:56:53PM +1000, Paul Mackerras wrote:
> Having to have a pgdir per thread would be a bit sucky, wouldn't it?
Not as bad as it initially sounds: on non-PAE i386 a pgd is 4KB and would
hurt, but on PAE i386 it's only 32B and can be shoehorned into, say,
thread_info. The rest is just a per-cpu kernel pmd and proper handling of
vmalloc faults (which are already handled properly for non-PAE
vmallocspace). There might be other reasons to do it, like reducing the
virtualspace overhead of the atomic kmap area, but it's not really time yet.
On Wed, May 07, 2003 at 02:56:53PM +1000, Paul Mackerras wrote:
> On PPCs with the hash-table based MMU, if we wanted to do different
> mappings of the same address on different CPUs, we would have to have
> a separate hash table for each CPU, which would chew up a lot of
> memory. On PPC64 machines with logical partitioning, I don't think
> the hypervisor would let you have a separate hash table for each CPU.
> On the flip side, PPC can afford a register to point to a per-cpu data
> area more easily than x86 can.
Well, if it were ever done, it'd presumably have to be abstracted so the
mechanism isn't exposed to core code. The arch-code insulation appears to
be there to keep one going, though not necessarily the accessors. Probably
the only reason to think about it seriously is that the per-cpu arithmetic
shows up as a disincentive on the register-starved FPOS's I'm stuck on.
William Lee Irwin III writes:
>> The vmallocspace bit is easier, though the virtualspace reservation
>> could get uncomfortably large depending on how much is crammed in there.
>> That can go node-local also. I guess it has some runtime arithmetic
>> overhead vs. the per-cpu TLB entries in exchange for less complex code.
On Wed, May 07, 2003 at 02:56:53PM +1000, Paul Mackerras wrote:
> I was thinking of something like 64kB per cpu times 32 cpus = 2MB.
> Anyway, 32-bit machines with > 8 cpus are a pretty rare corner case.
> On 64-bit machines we have enough virtual space to give each cpu
> gigabytes of per-cpu data if we want to.
2MB of vmallocspace is doable; it'd need to be bigger, or per-something
besides cpus, to hurt.
-- wli
Thread overview: 57+ messages
2003-05-05 8:08 [PATCH] kmalloc_percpu Rusty Russell
2003-05-05 8:47 ` Andrew Morton
2003-05-06 0:47 ` Rusty Russell
2003-05-06 1:52 ` Andrew Morton
2003-05-06 2:11 ` David S. Miller
2003-05-06 4:08 ` Rusty Russell
2003-05-06 3:40 ` David S. Miller
2003-05-06 5:02 ` Andrew Morton
2003-05-06 4:16 ` David S. Miller
2003-05-06 5:48 ` Andrew Morton
2003-05-06 5:35 ` David S. Miller
2003-05-06 6:55 ` Andrew Morton
2003-05-06 5:57 ` David S. Miller
2003-05-06 7:22 ` Andrew Morton
2003-05-06 6:15 ` David S. Miller
2003-05-06 7:34 ` Andrew Morton
2003-05-06 8:42 ` William Lee Irwin III
2003-05-06 14:38 ` Martin J. Bligh
2003-05-06 7:20 ` Dipankar Sarma
2003-05-06 8:28 ` Rusty Russell
2003-05-06 8:47 ` Andrew Morton
2003-05-07 1:57 ` Rusty Russell
2003-05-07 2:41 ` William Lee Irwin III
2003-05-07 4:03 ` Paul Mackerras
2003-05-07 4:22 ` William Lee Irwin III
2003-05-07 4:56 ` Paul Mackerras
2003-05-07 5:19 ` William Lee Irwin III [this message]
2003-05-07 4:10 ` Martin J. Bligh
2003-05-07 12:13 ` William Lee Irwin III
2003-05-07 4:15 ` Rusty Russell
2003-05-07 5:37 ` Andrew Morton
2003-05-08 0:53 ` Rusty Russell
2003-05-06 14:41 ` Martin J. Bligh
2003-05-06 6:42 ` Andrew Morton
2003-05-06 5:39 ` David S. Miller
2003-05-06 6:57 ` Andrew Morton
2003-05-06 7:25 ` Jens Axboe
2003-05-06 10:41 ` Ingo Oeser
2003-05-06 16:05 ` Bryan O'Sullivan
2003-05-06 8:06 ` Rusty Russell
2003-05-06 5:03 ` Dipankar Sarma
2003-05-06 4:28 ` Andrew Morton
2003-05-06 3:37 ` David S. Miller
2003-05-06 4:11 ` Rusty Russell
2003-05-06 5:07 ` Ravikiran G Thirumalai
2003-05-06 8:03 ` Rusty Russell
2003-05-06 9:23 ` David S. Miller
2003-05-06 9:34 ` Ravikiran G Thirumalai
2003-05-06 9:38 ` Dipankar Sarma
2003-05-07 2:14 ` Rusty Russell
2003-05-07 5:51 ` Ravikiran G Thirumalai
2003-05-07 6:16 ` Rusty Russell
2003-05-08 7:42 ` Ravikiran G Thirumalai
2003-05-08 7:47 ` Rusty Russell
-- strict thread matches above, loose matches on Subject: below --
2002-10-31 16:06 [patch] kmalloc_percpu Ravikiran G Thirumalai
2002-11-01 8:33 ` Rusty Russell
2002-11-05 16:00 ` Dipankar Sarma