public inbox for linux-kernel@vger.kernel.org
From: Andrew Morton <akpm@osdl.org>
To: Martin Peschke <mp3@de.ibm.com>
Cc: linux-kernel@vger.kernel.org
Subject: Re: [Patch] statistics infrastructure - update 10
Date: Thu, 13 Jul 2006 07:43:06 -0700	[thread overview]
Message-ID: <20060713074306.22e13848.akpm@osdl.org> (raw)
In-Reply-To: <44B62A9B.7000707@de.ibm.com>

On Thu, 13 Jul 2006 13:12:27 +0200
Martin Peschke <mp3@de.ibm.com> wrote:

> > I'd suggest that you:
> > 
> > - Create a new __alloc_percpu_mask(size_t size, cpumask_t cpus)
> > 
> > - Make that function use your newly added
> > 
> > 	percpu_data_populate(struct percpu_data *p, int cpu, size_t size, gfp_t gfp);
> > 
> > 	(maybe put `size' into 'struct percpu_data'?)
> > 
> > - implement __alloc_percpu() as __alloc_percpu_mask(size, cpu_possible_map)
> 
> That gets at the root of the problem. I will take a shot at it.
> (It will take until next week, though - pretty warm outside...)
> 
> A question:
> For symmetry's sake, should I add __free_percpu_mask(), which would
> put NULL where __alloc_percpu_mask() has put a valid address earlier?
> Otherwise, per_cpu_ptr() would return !NULL for an object released
> during cpu hotunplug handling.
> Or is this not an issue, because some cpu mask indicates that the cpu
> is offline anyway and the pointer's contents are never dereferenced?

Sure, we need a way of freeing a cpu's storage and of zapping that CPU's
slot.  Whether that's mask-based or just operates on a single CPU is
debatable.  Probably the latter, given the do-it-at-hotplug-time usage
model.


It could be argued that the whole idea is wrong - that we're putting
restrictions upon the implementation of alloc_percpu().  After all, an
implementation at present could do

alloc_percpu(size):
	size = roundup(size, L1_CACHE_SIZE);
	ret = kmalloc(size*NR_CPUS + sizeof(int), GFP_KERNEL);
	*(int *)ret = size;
	return ret;

per_cpu_ptr(ptr, cpu):
	(void *)((char *)ptr + sizeof(int) + *(int *)ptr * cpu)

or whatever.  The API additions which are being proposed here make that
impossible.  Or at least, more complex and slower.

Is it reasonable to assume that all implementations will, for all time,
include at least one layer of indirection?  After all, the above would be a
feasible implementation for non-NUMA SMP.  It's just as many derefs though.

hmm.  1k of memory isn't much.  How much memory will all this _really_ save?


Thread overview: 7+ messages
2006-07-12 12:27 [Patch] statistics infrastructure - update 10 Martin Peschke
2006-07-12 16:10 ` Andrew Morton
2006-07-12 16:45   ` Martin Peschke
2006-07-13  8:00     ` Andrew Morton
2006-07-13 11:12       ` Martin Peschke
2006-07-13 14:43         ` Andrew Morton [this message]
2006-07-24 17:15           ` Martin Peschke
