From: Manfred Spraul <manfred@colorfullife.com>
To: Badari Pulavarty <pbadari@us.ibm.com>
Cc: Andrew Morton <akpm@osdl.org>, linux-mm@kvack.org
Subject: Re: slab fragmentation ?
Date: Tue, 05 Oct 2004 19:58:07 +0200
Message-ID: <4162E0AF.4000704@colorfullife.com>
In-Reply-To: <1096987570.12861.122.camel@dyn318077bld.beaverton.ibm.com>
Badari Pulavarty wrote:
>Here is the /proc/slabinfo output collected every 1 second while
>running the scsi-debug test. I enabled STATS and DEBUG.
Ok, thanks.
Before test:
size-40        2088   9760    64   61    1 : tunables   32   16    8 : slabdata   160   160     0 : globalstat   3324  2010   160     0     0     0   173 : cpustat   5675   213   3945     2
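(For reference, the columns should be the ones from the slabinfo 2.0 header - I'm listing them from memory, so double-check against the "# name ..." line at the top of your /proc/slabinfo:

	name <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab>
	 : tunables <limit> <batchcount> <sharedfactor>
	 : slabdata <active_slabs> <num_slabs> <sharedavail>
	 : globalstat <allocs> <maxobjs> <grown> <reaped> <errors> <maxfreeable> <freelimit>
	 : cpustat <allochit> <allocmiss> <freehit> <freemiss>)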
2nd value of cpustat (ALLOCMISS): 213 calls to cache_alloc_refill.
First value of slabdata: 160 slabs.
Sane.
2nd value of globalstat: a maximum of 2010 objects allocated.
First value of the size-40 line: 2088 objects active right now.
Sane, too.
After a few seconds:
size-40        4582  31110    64   61    1 : tunables   32   16    8 : slabdata   510   510     0 : globalstat   5468  4085   510     0     0     0   173 : cpustat   7924   347   4247     2
First value of slabdata: 510 slabs around.
Second value of cpustat: a total of 347 cache_alloc_refill calls.
Huh? Very odd. Each call of cache_alloc_refill causes at most one
cache_grow, and a cache_grow creates exactly one slab - yet the cache
went from 160 to 510 slabs (350 new ones) on only 347 - 213 = 134
additional cache_alloc_refill calls.
2nd value of globalstat: a maximum of 4085 objects allocated.
First value of the size-40 line: 4582 objects active.
Huh? More active objects than the kmem_cache_alloc/kmalloc statistics
can account for?
Could you add a printk into kmem_cache_alloc_node()? Perhaps with a
dump_stack() or something like that. I'd bet that someone calls
kmem_cache_alloc_node(). Probably indirectly through alloc_percpu() -
hch recently broke the public interface.
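Untested, but something along these lines near the top of
kmem_cache_alloc_node() in mm/slab.c should be enough to show the
callers (cachep->name and nodeid are the interesting bits):

	/* debug only: report every caller of kmem_cache_alloc_node() */
	printk(KERN_DEBUG "kmem_cache_alloc_node: cache %s, node %d\n",
			cachep->name, nodeid);
	dump_stack();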
Hmm. init_disk_stats() uses alloc_percpu(). What are you testing -
creating lots of disks with scsi-debug? If that ends up in
kmem_cache_alloc_node(), then I know what happens.
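The path I suspect, written down from memory, so take the exact details
with a grain of salt:

	init_disk_stats()
	  -> alloc_percpu()
	    -> __alloc_percpu() in mm/slab.c, which allocates each cpu's
	       chunk with kmem_cache_alloc_node(..., cpu_to_node(cpu))
	      -> kmem_cache_alloc_node() always builds a brand new slab
	         (a streamlined copy of cache_grow()), never looks at the
	         partial slabs and never updates the statistics counters.

That would explain both the extra slabs and the missing statistics.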
The fix would be simple: kmem_cache_alloc_node must walk through the
list of partial slabs and check if it finds a slab from the correct
node. If it does, then just use that slab instead of allocating a new
one. And statistics must be added to kmem_cache_alloc_node - I forgot
that when I wrote the function.
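Untested sketch of what I mean, against the single slabs_partial list we
have today; slab_nodeid() is only a placeholder for whatever "which node
does this slab's memory live on" lookup we end up with (e.g. via
virt_to_page(slabp->s_mem)), and the statistics hooks are left out:

	/*
	 * Sketch only, not a patch: try to take an object from an existing
	 * partial slab that already lives on the requested node before
	 * falling back to growing a new slab.
	 */
	static void *alloc_from_partial_on_node(kmem_cache_t *cachep, int nodeid)
	{
		struct slab *slabp;
		void *objp = NULL;

		spin_lock_irq(&cachep->spinlock);
		list_for_each_entry(slabp, &cachep->lists.slabs_partial, list) {
			if (slab_nodeid(slabp) != nodeid)	/* placeholder helper */
				continue;
			/* same object extraction as cache_alloc_refill() */
			objp = slabp->s_mem + slabp->free * cachep->objsize;
			slabp->inuse++;
			slabp->free = slab_bufctl(slabp)[slabp->free];
			if (slabp->free == BUFCTL_END)
				list_move(&slabp->list, &cachep->lists.slabs_full);
			break;
		}
		spin_unlock_irq(&cachep->spinlock);
		return objp;
	}

kmem_cache_alloc_node() would call something like this first and only
fall back to its current "grow a fresh slab" path when it returns NULL.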
--
Manfred
Thread overview: 19+ messages
2004-09-29 23:36 slab fragmentation ? Badari Pulavarty
2004-09-30 3:41 ` Andrew Morton
2004-09-30 4:52 ` badari
2004-09-30 14:49 ` Martin J. Bligh
2004-09-30 14:48 ` Badari Pulavarty
2004-10-03 6:04 ` Manfred Spraul
2004-10-04 15:51 ` Badari Pulavarty
2004-10-04 16:08 ` Manfred Spraul
2004-10-04 17:37 ` Badari Pulavarty
2004-10-05 14:46 ` Badari Pulavarty
2004-10-05 17:58 ` Manfred Spraul [this message]
2004-10-05 18:27 ` Badari Pulavarty
2004-10-05 18:49 ` Manfred Spraul
2004-10-05 18:47 ` Badari Pulavarty
2004-10-05 21:13 ` Badari Pulavarty
2004-10-05 22:11 ` Chen, Kenneth W
2004-10-05 22:18 ` Chen, Kenneth W
2004-10-06 14:58 ` Badari Pulavarty
2004-10-09 14:28 ` Manfred Spraul