From: Andrew Morton <akpm@digeo.com>
To: dipankar@in.ibm.com
Cc: Ravikiran G Thirumalai <kiran@in.ibm.com>,
linux-kernel@vger.kernel.org,
Rusty Russell <rusty@rustcorp.com.au>
Subject: Re: [patch] kmalloc_percpu -- 2 of 2
Date: Thu, 05 Dec 2002 12:02:51 -0800
Message-ID: <3DEFB0EB.9893DB9@digeo.com>
In-Reply-To: <20021205162329.A12588@in.ibm.com>

Dipankar Sarma wrote:
>
> Hi Andrew,
>
> On Wed, Dec 04, 2002 at 08:32:58PM -0800, Andrew Morton wrote:
> > Where in the kernel is such a large number of 4-, 8- or 16-byte
> > objects being used?
>
> Well, kernel objects may not be that small, but one would expect
> the per-cpu parts of the kernel objects to be sometimes small, often down to
> a couple of counters counting statistics.

Sorry, "one would expect" is not sufficient grounds for incorporating a
new allocator.  As far as I can tell, all the proposed users are in
fact allocating decent-sized aggregates, and that will remain the usual
case.

The code exists, great.  We can pull it in when there is a demonstrated
need for it.  But until that need is shown, this is overdesign.

> >
> > The slab allocator will support caches right down to 1024 x 4-byte
> > objects per page. Why is that not appropriate?
>
> Well, if you allocated 4-byte objects directly from the slab allocator,
> you aren't guaranteed to *not* share a cache line with another object
> modified by a different cpu.
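
To make that concrete, a minimal sketch (hypothetical driver code
against the 2.5-era slab API; "ctr_cache" and the counters are
invented for illustration):

#include <linux/init.h>
#include <linux/slab.h>

static kmem_cache_t *ctr_cache;

static int __init ctr_init(void)
{
        u32 *reads, *writes;

        /* A cache of bare 4-byte objects: consecutive allocations
         * will usually sit on the same cache line. */
        ctr_cache = kmem_cache_create("ctrs", sizeof(u32), 0, 0,
                                      NULL, NULL);
        if (!ctr_cache)
                return -ENOMEM;

        reads  = kmem_cache_alloc(ctr_cache, GFP_KERNEL); /* bumped by CPU0 */
        writes = kmem_cache_alloc(ctr_cache, GFP_KERNEL); /* bumped by CPU1 */

        /* If the two objects share a line, every increment bounces
         * that line between the two CPUs' caches. */
        return (reads && writes) ? 0 : -ENOMEM;
}
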
If that's a problem it can be addressed in the slab head arrays - make
sure that they are always filled and emptied in multiple-of-cacheline-sized
units for objects which are smaller than a cacheline. That benefits all
slab users.
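
Something like the following sketch of the idea (not the actual
mm/slab.c code; "batch_for" is an invented name):

#include <asm/cache.h>          /* L1_CACHE_BYTES */

/* Round the per-CPU head-array transfer batch up to a whole number
 * of cache lines' worth of objects, so that a line of small objects
 * is never split between two CPUs' arrays. */
static int batch_for(int objsize, int batchcount)
{
        int per_line;

        if (objsize >= L1_CACHE_BYTES)
                return batchcount;
        per_line = L1_CACHE_BYTES / objsize;
        return ((batchcount + per_line - 1) / per_line) * per_line;
}
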
> >
> > Sorry, but you have what is basically a brand new allocator in
> > there, and we need a very good reason for including it. I'd like
> > to know what that reason is, please.
>
> The reason is concern about per-cpu allocation for small per-CPU
> parts (typically counters) of objects. If a driver has two counters
> counting reads and writes, you don't want to eat up a whole cacheline
> for them for each CPU per instance of the device.
>
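
Spelled out, that worry looks like this (a hypothetical layout,
assuming 64-byte cache lines and a 32-CPU box; "rw_ctr" is invented):

#include <linux/cache.h>        /* ____cacheline_aligned */
#include <linux/threads.h>      /* NR_CPUS */
#include <linux/types.h>

/* One cacheline-aligned per-CPU slot per counter: 4 bytes used per
 * 64-byte line.  With 2 counters and 32 CPUs that is
 * 2 x 64 x 32 = 4096 bytes per device instance, nearly all padding. */
struct rw_ctr {
        u32 count;
} ____cacheline_aligned;

static struct rw_ctr reads[NR_CPUS], writes[NR_CPUS];
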
I don't buy it.

- If the driver has two counters per device then the storage is
  infinitesimal.

- If it has multiple counters per device (always the case) then
  the driver will aggregate them anyway (see the sketch below).

I am not aware of any situation in which a driver has a large (or
even medium) number of small, discrete counters of this nature,
certainly not enough to justify a new allocator.
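
For example (a hypothetical driver, not code from any patch in this
thread):

#include <linux/cache.h>
#include <linux/threads.h>

/* The usual pattern: aggregate the statistics into one per-CPU
 * struct, a decent-sized object that keeps each CPU's counters on
 * that CPU's own cache line(s). */
struct foo_stats {
        unsigned long rx_packets;
        unsigned long tx_packets;
        unsigned long rx_errors;
        unsigned long tx_errors;
} ____cacheline_aligned;

static struct foo_stats foo_stats[NR_CPUS];     /* one set per device */
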
I'd suggest that you drop the new allocator until a compelling
need for it (in real, live 2.5/2.6 code) has been demonstrated.