linuxppc-dev.lists.ozlabs.org archive mirror
From: Steven Rostedt <rostedt@goodmis.org>
To: Rusty Russell <rusty@rustcorp.com.au>
Cc: Andrew Morton <akpm@osdl.org>,
	linux-mips@linux-mips.org,
	David Mosberger-Tang <davidm@hpl.hp.com>,
	linux-ia64@vger.kernel.org,
	Martin Mares <mj@atrey.karlin.mff.cuni.cz>,
	spyro@f2s.com, Joe Taylor <joe@tensilica.com>,
	Andi Kleen <ak@suse.de>,
	linuxppc-dev@ozlabs.org, Paul Mackerras <paulus@samba.org>,
	benedict.gaster@superh.com, bjornw@axis.com,
	Ingo Molnar <mingo@elte.hu>,
	Nick Piggin <nickpiggin@yahoo.com.au>,
	grundler@parisc-linux.org, starvik@axis.com,
	Linus Torvalds <torvalds@osdl.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	rth@twiddle.net, Chris Zankel <chris@zankel.net>,
	tony.luck@intel.com, LKML <linux-kernel@vger.kernel.org>,
	ralf@linux-mips.org, Marc Gauthier <marc@tensilica.com>,
	lethal@linux-sh.org, schwidefsky@de.ibm.com, linux390@de.ibm.com,
	davem@davemloft.net, parisc-linux@parisc-linux.org
Subject: Re: [PATCH 00/05] robust per_cpu allocation for modules
Date: Mon, 17 Apr 2006 07:33:36 -0400	[thread overview]
Message-ID: <1145273616.27828.21.camel@localhost.localdomain> (raw)
In-Reply-To: <1145256445.28600.34.camel@localhost.localdomain>

On Mon, 2006-04-17 at 16:47 +1000, Rusty Russell wrote:

> 
> The arch would allocate a virtual memory hole for each CPU, and map
> pages as required (this is the simplest of several potential schemes).
> This gives the "same space between CPUs" property which is required for
> the ptr + per-cpu-offset scheme.  An arch would supply functions like:
> 
> 	/* Returns address of new memory chunk(s)
>          * (add __per_cpu_offset to get virtual addresses). */
> 	unsigned long alloc_percpu_memory(unsigned long *size);
> 
> 	/* Set by ia64 to reserve the first chunk for percpu vars
> 	 * in modules only. */
> 	#define __MODULE_RESERVE_FIRST_CHUNK
> 
> And an allocator would work on top of these.
> 
> I'm glad someone is looking at this again!

Hi Rusty, thanks for the input.

Arnd Bergmann also suggested doing the same thing.  I slept on it last
night, and I'm starting to like it more and more.  At least it seems a
better solution than anything I've come up with so far.

I'll start playing around a little and see what I can do with it.  I
also need to get some other work done, so it may take a month or two to
show results.  Hopefully I'll have another patch set in June or July
that will be more acceptable.

I'd like to thank all those that responded with ideas and criticisms.
It's been very helpful.

-- Steve

Thread overview: 31+ messages
2006-04-14 21:18 [PATCH 00/05] robust per_cpu allocation for modules Steven Rostedt
2006-04-14 22:06 ` Andrew Morton
2006-04-14 22:12   ` Steven Rostedt
2006-04-14 22:12 ` Chen, Kenneth W
2006-04-15  3:10 ` [PATCH 00/08] robust per_cpu allocation for modules - V2 Steven Rostedt
2006-04-15  5:32 ` [PATCH 00/05] robust per_cpu allocation for modules Nick Piggin
2006-04-15 20:17   ` Steven Rostedt
2006-04-16  2:47     ` Nick Piggin
2006-04-16  3:53       ` Steven Rostedt
2006-04-16  7:02         ` Paul Mackerras
2006-04-16 13:40           ` Steven Rostedt
2006-04-16 14:03             ` Sam Ravnborg
2006-04-16 15:34             ` Arnd Bergmann
2006-04-16 18:03               ` Tony Luck
2006-04-17  0:45               ` Steven Rostedt
2006-04-17  2:07                 ` Arnd Bergmann
2006-04-17  2:17                   ` Steven Rostedt
2006-04-17 20:06               ` Ravikiran G Thirumalai
2006-04-17  6:47             ` Rusty Russell
2006-04-17 11:33               ` Steven Rostedt [this message]
2006-04-16  7:06         ` Nick Piggin
2006-04-16 16:06           ` Steven Rostedt
2006-04-17 17:10           ` Andi Kleen
2006-04-17 16:55   ` Christoph Lameter
2006-04-17 22:02     ` Ravikiran G Thirumalai
2006-04-17 23:44       ` Steven Rostedt
2006-04-17 23:48         ` Christoph Lameter
2006-04-18  1:51           ` Steven Rostedt
2006-04-18  6:42         ` Nick Piggin
2006-04-18 12:47           ` Steven Rostedt
2006-04-16  6:35 ` Paul Mackerras
