From: Vladimir Davydov <vdavydov@parallels.com>
To: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Christoph Lameter <cl@gentwo.org>,
akpm@linux-foundation.org, rientjes@google.com,
penberg@kernel.org, hannes@cmpxchg.org, mhocko@suse.cz,
linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH -mm v2 8/8] slab: make dead memcg caches discard free slabs immediately
Date: Thu, 12 Jun 2014 14:02:32 +0400 [thread overview]
Message-ID: <20140612100231.GA19221@esperanza> (raw)
In-Reply-To: <20140612065345.GD19918@js1304-P5Q-DELUXE>
On Thu, Jun 12, 2014 at 03:53:45PM +0900, Joonsoo Kim wrote:
> On Thu, Jun 12, 2014 at 01:24:34AM +0400, Vladimir Davydov wrote:
> > On Tue, Jun 10, 2014 at 07:18:34PM +0400, Vladimir Davydov wrote:
> > > On Tue, Jun 10, 2014 at 09:26:19AM -0500, Christoph Lameter wrote:
> > > > On Tue, 10 Jun 2014, Vladimir Davydov wrote:
> > > >
> > > > > Frankly, I incline to shrinking dead SLAB caches periodically from
> > > > > cache_reap too, because it looks neater and less intrusive to me. Also
> > > > > it has zero performance impact, which is nice.
> > > > >
> > > > > However, Christoph proposed to disable per cpu arrays for dead caches,
> > > > > similarly to SLUB, and I decided to give it a try, just to see the end
> > > > > code we'd have with it.
> > > > >
> > > > > I'm still not quite sure which way we should choose though...
> > > >
> > > > Which one is cleaner?
> > >
> > > To shrink dead caches aggressively, we only need to modify cache_reap
> > > (see https://lkml.org/lkml/2014/5/30/271).
> >
> > Hmm, reap_alien, which is called from cache_reap to shrink per node
> > alien object arrays, only processes one node at a time. That means with
> > the patch I gave a link to above it will take up to
> > (REAPTIMEOUT_AC*nr_online_nodes) seconds to destroy a virtually empty
> > dead cache, which may be quite long on large machines. Of course, we can
> > make reap_alien walk over all alien caches of the current node, but that
> > will probably hurt performance...
>
> Hmm, maybe we only have a few objects on other nodes, don't we?
I think so, but even those few objects will prevent the cache from
being destroyed until they are reaped, which may take a long time.
> BTW, I have a question about cache_reap(). If there are many kmemcg
> users, we would have a lot of slab caches, and just traversing the
> slab cache list could take some time. Is that not a problem?
This may be a problem. Since a cache stays alive while it has at
least one active object, there may be throngs of dead caches on the
list; in fact, their number isn't even bounded by the number of
memcgs. This can slow down cache reaping and result in noticeable
memory pressure. It will also delay the destruction of dead caches,
making the situation even worse. And we can't simply remove dead
caches from the list, because then they would never be reaped...
OTOH, if we disable per-cpu arrays for dead caches, we won't have to
reap them and can therefore remove them from the slab_caches list. The
number of caches on the list will then be bounded by the number of
memcgs multiplied by a constant. Although that may still be quite
large, at least it will be predictable: the more kmem-active memcgs
you have, the more memory you need, which sounds reasonable to me.
Regarding the slowdown introduced by disabling per-cpu arrays, I
guess it shouldn't be critical: since dead caches are never allocated
from, the number of kfree's left after death is quite limited.
So, everything isn't that straightforward yet...
I think I'll try to simplify the patch that disables per cpu arrays for
dead caches and send implementations of both approaches with their pros
and cons outlined in the next iteration, so that we can compare them
side by side.
Thanks.