public inbox for linux-kernel@vger.kernel.org
From: "Jan Beulich" <JBeulich@novell.com>
To: "Andrew Morton" <akpm@osdl.org>
Cc: <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH] tvec_bases too large for per-cpu data
Date: Mon, 23 Jan 2006 11:31:11 +0100	[thread overview]
Message-ID: <43D4BE7F.76F0.0078.0@novell.com> (raw)
In-Reply-To: <20060120232500.07f0803a.akpm@osdl.org>

>>> Andrew Morton <akpm@osdl.org> 21.01.06 08:25:00 >>>
>"Jan Beulich" <JBeulich@novell.com> wrote:
>>
>> The biggest arch-independent consumer is tvec_bases (over 4k on 32-bit
>> archs, over 8k on 64-bit ones), which now gets converted to use
>> dynamically allocated memory instead.
>
>ho hum, another pointer hop.
>
>Did you consider using alloc_percpu()?

I did, but I saw drawbacks with that (most notably that all instances are allocated at
once, possibly wasting a lot of memory).

>The patch does trickery in init_timers_cpu() which, from my reading, defers
>the actual per-cpu allocation until the second CPU comes online. 
>Presumably because of some ordering issue which you discovered.  Readers of
>the code need to know what that issue was.

No, I don't see any trickery there (on-demand allocation in CPU_UP_PREPARE is done
elsewhere in very similar ways), and I also didn't see any ordering issues. Hence I
didn't see any need to explain this in detail.

>And boot_tvec_bases will always be used for the BP, and hence one slot in
>the per-cpu array will forever be unused.  Until the BP is taken down and
>brought back up, in which case it will suddenly start to use a dynamically
>allocated structure.

Why? Each slot is allocated at most once, and the BP's is never allocated: it continues
to use the static one even when brought down and back up.
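To illustrate the intended scheme, here is a minimal userspace sketch (malloc() stands in
for kmalloc(), and the structure contents are placeholders; only the names
init_timers_cpu, boot_tvec_bases and tvec_bases follow the patch, the rest is
illustrative):

```c
#include <stdlib.h>
#include <string.h>

#define NR_CPUS 4

/* Stand-in for tvec_base_t; the real structure is several KB
 * (over 4k on 32-bit archs, over 8k on 64-bit ones). */
typedef struct { int cpu; long vecs[512]; } tvec_base_t;

/* Statically allocated base, used by the boot processor only. */
static tvec_base_t boot_tvec_bases;

/* Per-CPU pointers; NULL until that CPU is first brought up. */
static tvec_base_t *tvec_bases[NR_CPUS];

/* CPU_UP_PREPARE path: allocate on demand, and only if this
 * CPU's slot has never been populated before. */
static int init_timers_cpu(int cpu)
{
	if (!tvec_bases[cpu]) {
		if (cpu == 0) {
			/* The BP keeps the static base forever. */
			tvec_bases[cpu] = &boot_tvec_bases;
		} else {
			/* kmalloc() in the kernel; malloc() here. */
			tvec_bases[cpu] = malloc(sizeof(tvec_base_t));
			if (!tvec_bases[cpu])
				return -1;
			memset(tvec_bases[cpu], 0, sizeof(tvec_base_t));
		}
		tvec_bases[cpu]->cpu = cpu;
	}
	return 0;
}
```

So the static instance is reached only through the cpu == 0 slot, and no slot is ever
allocated twice.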

>But all of this modification was unchangelogged and is uncommented, so I'm
>somewhat guessing here.  Please always ensure that tricksy things like this
>have complete covering comments.
>
>Also, the new code would appear to leak one tvec_base_t per cpu-unplugging?

Not really: the structure is re-used the next time a CPU with the same ID gets brought
up. That is, compared with the current situation there is generally less memory wasted,
unless all NR_CPUS are brought up and then one or more brought down again, in which case
the amount of space wasted would be equal (neglecting the slack space resulting from
kmalloc's way of allocating).
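To make the re-use point concrete, a userspace sketch (the names up_prepare and cpu_dead
are mine, standing in for the CPU_UP_PREPARE and CPU_DEAD notifier cases; calloc() stands
in for kmalloc()):

```c
#include <stdlib.h>

#define NR_CPUS 4

typedef struct { long vecs[512]; } tvec_base_t;	/* stand-in for the real base */
static tvec_base_t *tvec_bases[NR_CPUS];

/* CPU_UP_PREPARE path: allocate only on the first onlining of this ID. */
static tvec_base_t *up_prepare(int cpu)
{
	if (!tvec_bases[cpu])
		tvec_bases[cpu] = calloc(1, sizeof(tvec_base_t));
	return tvec_bases[cpu];
}

/* CPU_DEAD path: the base is deliberately not freed; it is kept
 * around for the next time a CPU with this ID comes up. */
static void cpu_dead(int cpu)
{
	(void)cpu;
}
```

Bringing the same CPU ID down and back up hands back the very same pointer, so nothing
leaks per unplug cycle.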

Jan


Thread overview: 11+ messages
2006-01-18 13:11 [PATCH] tvec_bases too large for per-cpu data Jan Beulich
2006-01-21  7:25 ` Andrew Morton
2006-01-23 10:31   ` Jan Beulich [this message]
2006-01-23 10:57     ` Andrew Morton
2006-01-24  8:33       ` Jan Beulich
2006-01-24  8:58         ` Andrew Morton
2006-01-24 14:46           ` [PATCH] [SMP] reduce size of percpudata, and make sure per_cpu(object, not_possible_cpu) cause an invalid memory reference Eric Dumazet
2006-01-24 14:53             ` Andi Kleen
2006-02-01  9:21             ` [PATCH] [SMP] __GENERIC_PER_CPU changes Eric Dumazet
2006-01-30  8:43       ` [PATCH] tvec_bases too large for per-cpu data Jan Beulich
2006-01-31 22:27         ` Andrew Morton
