From: Andi Kleen <ak@suse.de>
To: Eric Dumazet <dada1@cosmosbay.com>
Cc: linux-kernel@vger.kernel.org
Subject: Re: [PATCH] Shrinks sizeof(files_struct) and better layout
Date: Wed, 4 Jan 2006 12:58:38 +0100
Message-ID: <200601041258.38408.ak@suse.de>
In-Reply-To: <43BBB487.8030704@cosmosbay.com>
On Wednesday 04 January 2006 12:41, Eric Dumazet wrote:
> > The overhead of the kmem_cache_t by itself is negligible.
>
> This seems to be a common misconception among kernel devs (even the best ones, Andi :) )
It used to be true, at some point at least :/
>
> On SMP (and/or NUMA) machines, the overhead of a kmem_cache_t is *big*.
>
> See enable_cpucache() in mm/slab.c for how 'limit' is determined:
>
> 	if (cachep->objsize > 131072)
> 		limit = 1;
> 	else if (cachep->objsize > PAGE_SIZE)
> 		limit = 8;
> 	else if (cachep->objsize > 1024)
> 		limit = 24;
> 	else if (cachep->objsize > 256)
> 		limit = 54;
> 	else
> 		limit = 120;
>
> On a 64-bit machine, 120 * sizeof(void *) = 120 * 8 = 960 bytes.
>
> So for small objects (<= 256 bytes), you end up with sizeof(array_cache) =
> 1024 bytes per CPU.
Hmm - in theory it could be tuned down for SMT siblings, which don't really
need separate arrays because they share caches. But I don't know how many
complications that would add to the slab code.
>
> With 16 CPUs: 16 * 1024 = 16 KB, plus all the other kmem_cache structures
> (and if you have a lot of memory nodes, those can get *very* big too).
>
> If you know that no more than 100 objects are used in 99% of setups, then a
> dedicated cache is overkill; even pinning 100 pages because of extreme
> fragmentation would be better.
A system with 16 memory nodes should have more than 100 processes, but ok.
>
> Maybe we can introduce an ultra-basic memory allocator for such objects
> (without CPU caches or node caches), so that the memory overhead stays small.
> Hitting a spinlock at thread creation/deletion time is not that time-critical.
Might be a good idea, yes. There used to be a "simp" allocator for this long
ago, but it was removed because it had other issues. That was before slab
even got the per-CPU/node support.
-Andi
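
For illustration, a minimal userspace sketch of the kind of allocator Eric
proposes: one global spinlock and a singly linked free list, with no per-CPU
or per-node caches. It uses pthreads to stay self-contained; all names and
the malloc() fallback are assumptions, not taken from slab or the old simp
allocator:

	/*
	 * Minimal "ultra-basic" allocator sketch: a single spinlock
	 * protects a singly linked free list of fixed-size objects.
	 * No per-CPU or per-node caching, so the fixed overhead is
	 * just this structure.
	 */
	#include <pthread.h>
	#include <stdlib.h>

	struct simple_cache {
		pthread_spinlock_t lock;
		void *freelist;		/* singly linked list of free objects */
		size_t objsize;		/* must be >= sizeof(void *) */
	};

	static void simple_cache_init(struct simple_cache *c, size_t objsize)
	{
		pthread_spin_init(&c->lock, PTHREAD_PROCESS_PRIVATE);
		c->freelist = NULL;
		c->objsize = objsize < sizeof(void *) ? sizeof(void *) : objsize;
	}

	static void *simple_alloc(struct simple_cache *c)
	{
		void *obj;

		pthread_spin_lock(&c->lock);
		obj = c->freelist;
		if (obj)
			c->freelist = *(void **)obj;	/* pop the free list */
		pthread_spin_unlock(&c->lock);

		/* Fall back to the general allocator when the list is empty. */
		return obj ? obj : malloc(c->objsize);
	}

	static void simple_free(struct simple_cache *c, void *obj)
	{
		pthread_spin_lock(&c->lock);
		*(void **)obj = c->freelist;		/* push onto the free list */
		c->freelist = obj;
		pthread_spin_unlock(&c->lock);
	}

	int main(void)
	{
		struct simple_cache c;

		simple_cache_init(&c, 64);
		void *a = simple_alloc(&c);
		simple_free(&c, a);
		void *b = simple_alloc(&c);	/* reuses the object just freed */
		simple_free(&c, b);
		return 0;
	}

The point of the sketch is the trade-off Eric describes: every allocation and
free takes the one lock, but for objects allocated only at thread
creation/deletion time that serialization is cheap compared to ~1 KB of
array_cache per CPU.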