linux-mm.kvack.org archive mirror
From: Vladimir Davydov <vdavydov@parallels.com>
To: Christoph Lameter <cl@gentwo.org>
Cc: akpm@linux-foundation.org, hannes@cmpxchg.org, mhocko@suse.cz,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH -mm 8/8] slab: reap dead memcg caches aggressively
Date: Wed, 4 Jun 2014 00:18:19 +0400	[thread overview]
Message-ID: <20140603201817.GE6013@esperanza> (raw)
In-Reply-To: <alpine.DEB.2.10.1406021019350.2987@gentwo.org>

On Mon, Jun 02, 2014 at 10:24:09AM -0500, Christoph Lameter wrote:
> On Sat, 31 May 2014, Vladimir Davydov wrote:
> 
> > > You can use an approach similar to the one in SLUB. Reduce the size of
> > > the per-cpu object array to zero. Then SLAB will always fall back to its
> > > slow path in cache_flusharray(), where you may be able to do something
> > > with less of an impact on performance.
> >
> > In contrast to SLUB, for SLAB this will slow down kfree significantly.
> 
> But that is only when you want to destroy a cache. This is similar.

When we want to destroy a memcg cache, there can still be a huge number
of objects allocated from it, e.g. gigabytes of inodes and dentries.
That's why I think we should avoid any performance degradation on the
free path if possible.

> 
> > The fast path for SLAB is just putting an object into a per-cpu array,
> > while the slow path requires taking a per-node lock, which is much slower
> > even with no contention. There can still be lots of objects in a dead
> > memcg cache (e.g. hundreds of megabytes of dcache), so such a performance
> > degradation is not acceptable, IMO.
> 
> I am not sure there is such a stark difference from SLUB. SLUB also takes
> the per-node lock when necessary to handle freeing, especially if you zap
> the per-cpu partial slab pages.

Hmm, for SLUB we will only take the node lock when inserting a slab on
the partial list, while for SLAB disabling the per-cpu arrays will result
in taking the lock on every object free. So if there are only a few
objects per slab, the difference won't be huge; otherwise the slowdown
will be noticeable for SLAB, but not for SLUB.
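
To make the locking difference concrete, here is a rough sketch of the
free path in question. This is an illustration only, not the real
mm/slab.c code; this_cpu_array(), cache_node() and flush_to_slabs() are
hypothetical stand-ins:

/*
 * Illustration only -- not mm/slab.c.  this_cpu_array(), cache_node()
 * and flush_to_slabs() are hypothetical stand-ins.
 */
struct array_cache {
	unsigned int avail;	/* objects currently cached on this cpu */
	unsigned int limit;	/* capacity; 0 would disable the fast path */
	void *entry[];
};

static void slab_free_sketch(struct kmem_cache *cachep, void *objp)
{
	struct array_cache *ac = this_cpu_array(cachep);

	if (ac->avail < ac->limit) {
		/* Fast path: no locks, just stash the object per cpu. */
		ac->entry[ac->avail++] = objp;
		return;
	}

	/*
	 * Slow path, roughly what cache_flusharray() does: give objects
	 * back to their slabs under the per-node list_lock.  With
	 * limit == 0 every kfree() ends up here, i.e. one lock round-trip
	 * per freed object.  SLUB only needs the node lock when the freed
	 * object forces its slab page onto the partial list, which is at
	 * most once per page rather than once per object.
	 */
	spin_lock(&cache_node(cachep)->list_lock);
	flush_to_slabs(cachep, ac);
	spin_unlock(&cache_node(cachep)->list_lock);
}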

I'm not sure we should prefer one approach over the other, though. I
just think that since we already have periodic reaping for SLAB, why not
employ it for reaping dead memcg caches too, provided it doesn't
obfuscate the code? Anyway, if you think we can neglect the performance
degradation that would result from disabling the per-cpu arrays for
SLAB, I can give it a try.
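
Roughly, what I have in mind is the following (a sketch with hypothetical
helpers, not the actual patch): the periodic reaper keeps its usual
gentle behaviour for live caches, but drains dead memcg caches
completely, so the kfree() fast path is left alone:

/*
 * Sketch of the idea, not the patch itself.  memcg_cache_dead(),
 * drain_cpu_arrays(), drop_free_slabs() and reap_partial() are
 * hypothetical helpers.
 */
static void cache_reap_one_sketch(struct kmem_cache *cachep)
{
	if (memcg_cache_dead(cachep)) {
		/*
		 * Dead cache: flush all per-cpu cached objects back to
		 * their slabs and release every empty slab, so the cache
		 * can go away as soon as its last object is freed.
		 */
		drain_cpu_arrays(cachep);
		drop_free_slabs(cachep);
		return;
	}
	/* Live cache: keep the usual partial, low-impact reaping. */
	reap_partial(cachep);
}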

Thanks.



Thread overview: 38+ messages
2014-05-30 13:51 [PATCH -mm 0/8] memcg/slab: reintroduce dead cache self-destruction Vladimir Davydov
2014-05-30 13:51 ` [PATCH -mm 1/8] memcg: cleanup memcg_cache_params refcnt usage Vladimir Davydov
2014-05-30 14:31   ` Christoph Lameter
2014-05-30 13:51 ` [PATCH -mm 2/8] memcg: destroy kmem caches when last slab is freed Vladimir Davydov
2014-05-30 14:32   ` Christoph Lameter
2014-05-30 13:51 ` [PATCH -mm 3/8] memcg: mark caches that belong to offline memcgs as dead Vladimir Davydov
2014-05-30 14:33   ` Christoph Lameter
2014-05-30 13:51 ` [PATCH -mm 4/8] slub: never fail kmem_cache_shrink Vladimir Davydov
2014-05-30 14:46   ` Christoph Lameter
2014-05-31 10:18     ` Vladimir Davydov
2014-06-02 15:13       ` Christoph Lameter
2014-05-30 13:51 ` [PATCH -mm 5/8] slab: remove kmem_cache_shrink retval Vladimir Davydov
2014-05-30 14:49   ` Christoph Lameter
2014-05-31 10:27     ` Vladimir Davydov
2014-06-02 15:16       ` Christoph Lameter
2014-06-03  9:06         ` Vladimir Davydov
2014-06-03 14:48           ` Christoph Lameter
2014-06-03 19:00             ` Vladimir Davydov
2014-05-30 13:51 ` [PATCH -mm 6/8] slub: do not use cmpxchg for adding cpu partials when irqs disabled Vladimir Davydov
2014-05-30 13:51 ` [PATCH -mm 7/8] slub: make dead caches discard free slabs immediately Vladimir Davydov
2014-05-30 14:57   ` Christoph Lameter
2014-05-31 11:04     ` Vladimir Davydov
2014-06-02  4:24       ` Joonsoo Kim
2014-06-02 11:47         ` Vladimir Davydov
2014-06-02 14:03           ` Joonsoo Kim
2014-06-02 15:17             ` Christoph Lameter
2014-06-03  8:16             ` Vladimir Davydov
2014-06-04  8:53               ` Joonsoo Kim
2014-06-04  9:47                 ` Vladimir Davydov
2014-05-30 13:51 ` [PATCH -mm 8/8] slab: reap dead memcg caches aggressively Vladimir Davydov
2014-05-30 15:01   ` Christoph Lameter
2014-05-31 11:19     ` Vladimir Davydov
2014-06-02 15:24       ` Christoph Lameter
2014-06-03 20:18         ` Vladimir Davydov [this message]
2014-06-02  4:41   ` Joonsoo Kim
2014-06-02 12:10     ` Vladimir Davydov
2014-06-02 14:01       ` Joonsoo Kim
2014-06-03  8:21         ` Vladimir Davydov
