From: "Ted Ts'o" <tytso@mit.edu>
To: David Rientjes <rientjes@google.com>
Cc: Pekka Enberg <penberg@kernel.org>,
Linus Torvalds <torvalds@linux-foundation.org>,
linux-kernel@vger.kernel.org, Christoph Lameter <cl@linux.com>
Subject: Re: [PATCH v2 2/2] SLUB: Mark merged slab caches in /proc/slabinfo
Date: Wed, 15 Sep 2010 18:25:09 -0400 [thread overview]
Message-ID: <20100915222509.GE3730@thunk.org> (raw)
In-Reply-To: <alpine.DEB.2.00.1009151322370.29425@chino.kir.corp.google.com>
On Wed, Sep 15, 2010 at 01:33:07PM -0700, David Rientjes wrote:
> I'd love to have per-cache statistics that we could export without the
> cost of the extra memory from fragmented partial slabs. You'd have to do
> this for every cache even if it's a "superslab", though, to avoid a branch
> in the fastpath to find the cpu slab. I'm not sure if Pekka and Christoph
> will be happy with the allocation of kmem_cache structures for mergable
> caches and the increment of the statistic in the fastpath.
I agree, it would be cleaner if we could separate out the data
structures which are used for accounting for the number of objects
allocated and reclaimed for each object type, and then have a separate
data structure which is used for dealing with the pages used by those
slabs that have been merged together.
All I can say is I hope the merging code is intelligent. We recently
had a problem where we were wasting huge amounts of memory because we
were allocating large numbers of the ext4_group_info structure,
which was 132 bytes, and for which kmalloc() used a size-256 slab ---
and the wasted memory was enough to cause OOMs in a critical
(unfortunately statically sized) container when the disks got large
enough and numerous enough. The fix was to use a separate cache just
for these 132-byte objects, and not to use kmalloc().
I would be really annoyed if we switched to a slab allocator which did
merging, and then found that the said slab allocator helpfully merged
the 132-byte slab cache and the size-256 slab into a single slab
cache, on the grounds that it thought it would save memory... (I
guess I'm just really really nervous about merging happening behind my
back, and I really like having the per-object type allocation
statistics.)
- Ted