From: Vladimir Davydov <vdavydov@parallels.com>
To: Johannes Weiner <hannes@cmpxchg.org>
Cc: akpm@linux-foundation.org, mhocko@suse.cz, cl@linux.com,
glommer@gmail.com, linux-mm@kvack.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH -mm 0/8] memcg: reparent kmem on css offline
Date: Wed, 9 Jul 2014 11:25:59 +0400 [thread overview]
Message-ID: <20140709072559.GE6685@esperanza> (raw)
In-Reply-To: <20140708220519.GB29639@cmpxchg.org>
On Tue, Jul 08, 2014 at 06:05:19PM -0400, Johannes Weiner wrote:
> On Mon, Jul 07, 2014 at 07:40:08PM +0400, Vladimir Davydov wrote:
> > On Mon, Jul 07, 2014 at 10:25:06AM -0400, Johannes Weiner wrote:
> > > You could then reap dead slab caches as part of the regular per-memcg
> > > slab scanning in reclaim, without having to resort to auxiliary lists,
> > > vmpressure events etc.
> >
> > Do you mean adding a per memcg shrinker that will call kmem_cache_shrink
> > for all memcg caches on memcg/global pressure?
> >
> > Actually I recently made dead caches self-destructive at the cost of
> > slowing down kfrees to dead caches (see
> > https://www.lwn.net/Articles/602330/, it's already in the mmotm tree) so
> > no dead cache reaping is necessary. Do you think if we need it now?
> >
> > > I think it would save us a lot of code and complexity. You want
> > > per-memcg slab scanning *anyway*, all we'd have to change in the
> > > existing code would be to pin the css until the LRUs and kmem caches
> > > are truly empty, and switch mem_cgroup_iter() to css_tryget().
> > >
> > > Would this make sense to you?
> >
> > Hmm, interesting. Thank you for such a thorough explanation.
> >
> > One question. Do we still need to free mem_cgroup->kmemcg_id on css
> > offline so that it can be reused by new kmem-active cgroups (currently
> > we don't)?
> >
> > If we won't free it the root_cache->memcg_params->memcg_arrays may
> > become really huge due to lots of dead css holding the id.
>
> We only need the O(1) access of the array for allocation - not frees
> and reclaim, right?
Yes.
> So with your self-destruct code, can we prune caches of dead css and
> then just remove them from the array? Or move them from the array to
> a per-memcg linked list that can be scanned on memcg memory pressure?
This shouldn't be a problem. Will do that.
Actually, I now doubt if we need self-destruct at all. I don't really
like it, because its implementations is rather ugly, and, what is worse,
it slows down kfree for dead caches noticeably. SLAB maintainers doesn't
seem to be fond of it either. May be, we'd better drop this in favour of
shrinking dead caches on memory pressure?
Then *empty* dead caches will be pending until memory pressure reaps
them, which looks a bit strange, because there's absolutely no reason to
keep them for so long. However, the code will be simpler then, and
kfrees to dead caches will proceed at the same speed as to active
caches.
Thanks.
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
next prev parent reply other threads:[~2014-07-09 7:26 UTC|newest]
Thread overview: 17+ messages / expand[flat|nested] mbox.gz Atom feed top
2014-07-07 12:00 [PATCH -mm 0/8] memcg: reparent kmem on css offline Vladimir Davydov
2014-07-07 12:00 ` [PATCH -mm 1/8] memcg: add pointer from memcg_cache_params to owner cache Vladimir Davydov
2014-07-07 12:00 ` [PATCH -mm 2/8] memcg: keep all children of each root cache on a list Vladimir Davydov
2014-07-07 15:24 ` Christoph Lameter
2014-07-07 15:45 ` Vladimir Davydov
2014-07-07 12:00 ` [PATCH -mm 3/8] slab: guarantee unique kmem cache naming Vladimir Davydov
2014-07-07 12:00 ` [PATCH -mm 4/8] slub: remove kmemcg id from create_unique_id Vladimir Davydov
2014-07-07 12:00 ` [PATCH -mm 5/8] memcg: rework non-slab kmem pages charge path Vladimir Davydov
2014-07-07 12:00 ` [PATCH -mm 6/8] memcg: introduce kmem context Vladimir Davydov
2014-07-07 12:00 ` [PATCH -mm 7/8] memcg: move some kmem definitions upper Vladimir Davydov
2014-07-07 12:00 ` [PATCH -mm 8/8] memcg: reparent kmem context on memcg offline Vladimir Davydov
2014-07-07 14:25 ` [PATCH -mm 0/8] memcg: reparent kmem on css offline Johannes Weiner
2014-07-07 15:40 ` Vladimir Davydov
2014-07-08 22:05 ` Johannes Weiner
2014-07-09 7:25 ` Vladimir Davydov [this message]
2014-07-07 17:14 ` Vladimir Davydov
2014-07-08 22:19 ` Johannes Weiner
Reply instructions:
You may reply publicly to this message via plain-text email
using any one of the following methods:
* Save the following mbox file, import it into your mail client,
and reply-to-all from there: mbox
Avoid top-posting and favor interleaved quoting:
https://en.wikipedia.org/wiki/Posting_style#Interleaved_style
* Reply using the --to, --cc, and --in-reply-to
switches of git-send-email(1):
git send-email \
--in-reply-to=20140709072559.GE6685@esperanza \
--to=vdavydov@parallels.com \
--cc=akpm@linux-foundation.org \
--cc=cl@linux.com \
--cc=glommer@gmail.com \
--cc=hannes@cmpxchg.org \
--cc=linux-kernel@vger.kernel.org \
--cc=linux-mm@kvack.org \
--cc=mhocko@suse.cz \
/path/to/YOUR_REPLY
https://kernel.org/pub/software/scm/git/docs/git-send-email.html
* If your mail client supports setting the In-Reply-To header
via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line
before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).