From: Nick Piggin <nickpiggin@yahoo.com.au>
To: Christoph Lameter <clameter@sgi.com>
Cc: Matt Mackall <mpm@selenic.com>,
akpm@linux-foundation.org, David Miller <davem@davemloft.net>,
linux-kernel@vger.kernel.org
Subject: Re: + fix-spellings-of-slab-allocator-section-in-init-kconfig.patch added to -mm tree
Date: Wed, 09 May 2007 13:04:09 +1000
Message-ID: <46413A29.1000506@yahoo.com.au>
In-Reply-To: <Pine.LNX.4.64.0705081954300.19976@schroedinger.engr.sgi.com>

Christoph Lameter wrote:
> On Wed, 9 May 2007, Nick Piggin wrote:
>>For small systems, I would not be surprised if that was less space
>>efficient, even just looking at kmalloc caches in isolation. Or do you
>>have numbers to support your conclusion?
>
>
> No, I do not have any numbers beyond the efficiency calculations based on
> whole slabs. We would have to do some experiments to figure out how much
> space is actually wasted through partial slabs.
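
Something quick and dirty along these lines might give a first-order
number: sum up the allocated-but-unused object slots reported by
/proc/slabinfo. This is only a userspace sketch, assuming the slabinfo
2.x column layout (name, active_objs, num_objs, objsize); SLUB here may
only export its counters under /sys/kernel/slab/, in which case the
parsing would have to change.

/* sketch: estimate memory sitting in unused slab object slots */
#include <stdio.h>
#include <string.h>

int main(void)
{
	FILE *f = fopen("/proc/slabinfo", "r");
	char line[512], name[64];
	unsigned long active, total, objsize, waste = 0;

	if (!f) {
		perror("/proc/slabinfo");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		/* skip the version line and the column header */
		if (line[0] == '#' || !strncmp(line, "slabinfo", 8))
			continue;
		if (sscanf(line, "%63s %lu %lu %lu",
			   name, &active, &total, &objsize) != 4)
			continue;
		/* free slots we are paying for, mostly in partial slabs */
		waste += (total - active) * objsize;
	}
	fclose(f);
	printf("~%lu KB in unused object slots\n", waste / 1024);
	return 0;
}
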
>
> If you just do straight allocation on a UP system, then there is at most
> one partial slab per slabcache with SLUB.
>
> The situation becomes different with allocation and frees. Then we may
> have lots of partial slabs that we allocate from.

Yeah, but even then I think the SLUB approach is a very nice one for a
general-purpose system. Don't get me wrong, SLOB definitely is not good
for that :)
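
To make the alloc-then-free point a bit more concrete, here is a toy
model (pure userspace; the slab geometry below is made up and has
nothing to do with the real allocators): fill slabs sequentially, then
free every second object, and every slab ends up partial.

#include <stdio.h>

#define OBJS_PER_SLAB	32
#define NR_OBJS		1024
#define NR_SLABS	(NR_OBJS / OBJS_PER_SLAB)

int main(void)
{
	int used[NR_SLABS] = { 0 };
	int i, full = 0, partial = 0, empty = 0;

	for (i = 0; i < NR_OBJS; i++)		/* straight allocation */
		used[i / OBJS_PER_SLAB]++;
	for (i = 0; i < NR_OBJS; i += 2)	/* free every other object */
		used[i / OBJS_PER_SLAB]--;

	for (i = 0; i < NR_SLABS; i++) {
		if (used[i] == OBJS_PER_SLAB)
			full++;
		else if (used[i])
			partial++;
		else
			empty++;
	}
	printf("full %d, partial %d, empty %d\n", full, partial, empty);
	return 0;
}

That prints "full 0, partial 32, empty 0": all 32 slabs stay pinned even
though half the objects are gone. How well SLUB or SLOB copes with that
kind of residue on a tiny machine is exactly the thing we would have to
measure.
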
> But the SLOB approach will also have holes to manage. So I do not see
> how this could be a benefit unless you only have a few precious pages
> and you need to put multiple object sizes into them. A 4M system still
> has 1000 pages.

Right, and it takes a long, long time to do anything on my 4G system ;)
But that 4MB system might not even have 50 pages that you'd want to
use for slab.
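
Back of the envelope, with purely illustrative numbers (the kernel
footprint here is a guess, not a measurement):

	4 MB / 4 KB per page                        = 1024 pages
	kernel text/data/bss, say ~2 MB             ~= -512 pages
	mem_map, page tables, boot-time allocations ~=  -50 pages
	                                               ----------
	left for user space, page cache and slab    ~=  460 pages

and slab has to share that remainder with everything else, so a few
dozen pages for all the kmalloc caches together is probably closer to
reality than a thousand.
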
--
SUSE Labs, Novell Inc.

Thread overview: 31+ messages
[not found] <200705082302.l48N2KrZ004229@shell0.pdx.osdl.net>
2007-05-09 0:23 ` + fix-spellings-of-slab-allocator-section-in-init-kconfig.patch added to -mm tree Matt Mackall
2007-05-09 0:32 ` Alan Cox
2007-05-09 0:33 ` Matt Mackall
2007-05-09 0:43 ` Christoph Lameter
2007-05-09 0:51 ` Christoph Lameter
2007-05-09 1:27 ` Matt Mackall
2007-05-09 1:32 ` Christoph Lameter
2007-05-09 1:51 ` David Miller
2007-05-09 1:53 ` Christoph Lameter
2007-05-09 1:55 ` David Miller
2007-05-09 1:57 ` Christoph Lameter
2007-05-09 2:06 ` David Miller
2007-05-09 2:10 ` Nick Piggin
2007-05-09 2:20 ` Christoph Lameter
2007-05-09 2:02 ` Nick Piggin
2007-05-09 2:56 ` Matt Mackall
2007-05-09 3:18 ` Nick Piggin
2007-05-09 3:27 ` Christoph Lameter
2007-05-09 3:47 ` Nick Piggin
2007-05-10 0:42 ` Andrew Morton
2007-05-10 1:00 ` Nick Piggin
2007-05-10 2:27 ` Matt Mackall
2007-05-09 2:19 ` Matt Mackall
2007-05-09 2:24 ` Christoph Lameter
2007-05-09 2:43 ` Nick Piggin
2007-05-09 2:57 ` Christoph Lameter
2007-05-09 3:04 ` Nick Piggin [this message]
2007-05-09 3:08 ` Christoph Lameter
2007-05-09 3:25 ` Matt Mackall
2007-05-09 3:16 ` Matt Mackall
2007-05-09 3:24 ` Christoph Lameter