From: Vladimir Davydov <vdavydov@virtuozzo.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Christoph Lameter <cl@linux.com>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>,
	Pekka Enberg <penberg@kernel.org>,
	David Rientjes <rientjes@google.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Michal Hocko <mhocko@suse.cz>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH -mm v2 3/3] slub: make dead caches discard free slabs immediately
Date: Fri, 1 Apr 2016 13:55:40 +0300
Message-ID: <20160401105539.GA6610@esperanza>
In-Reply-To: <20160401090441.GD12845@twins.programming.kicks-ass.net>

On Fri, Apr 01, 2016 at 11:04:41AM +0200, Peter Zijlstra wrote:
> On Wed, Jan 28, 2015 at 07:22:51PM +0300, Vladimir Davydov wrote:
> > +++ b/mm/slub.c
> > @@ -2007,6 +2007,7 @@ static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
> >  	int pages;
> >  	int pobjects;
> >  
> > +	preempt_disable();
> >  	do {
> >  		pages = 0;
> >  		pobjects = 0;
> > @@ -2040,6 +2041,14 @@ static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
> >  
> >  	} while (this_cpu_cmpxchg(s->cpu_slab->partial, oldpage, page)
> >  								!= oldpage);
> > +	if (unlikely(!s->cpu_partial)) {
> > +		unsigned long flags;
> > +
> > +		local_irq_save(flags);
> > +		unfreeze_partials(s, this_cpu_ptr(s->cpu_slab));
> > +		local_irq_restore(flags);
> > +	}
> > +	preempt_enable();
> >  #endif
> >  }
> >  
> > @@ -3369,7 +3378,7 @@ EXPORT_SYMBOL(kfree);
> >   * being allocated from last increasing the chance that the last objects
> >   * are freed in them.
> >   */
> > -int __kmem_cache_shrink(struct kmem_cache *s)
> > +int __kmem_cache_shrink(struct kmem_cache *s, bool deactivate)
> >  {
> >  	int node;
> >  	int i;
> > @@ -3381,14 +3390,26 @@ int __kmem_cache_shrink(struct kmem_cache *s)
> >  	unsigned long flags;
> >  	int ret = 0;
> >  
> > +	if (deactivate) {
> > +		/*
> > +		 * Disable empty slabs caching. Used to avoid pinning offline
> > +		 * memory cgroups by kmem pages that can be freed.
> > +		 */
> > +		s->cpu_partial = 0;
> > +		s->min_partial = 0;
> > +
> > +		/*
> > +		 * s->cpu_partial is checked locklessly (see put_cpu_partial),
> > +		 * so we have to make sure the change is visible.
> > +		 */
> > +		kick_all_cpus_sync();
> > +	}
> 
> Argh! what the heck! and without a single mention in the changelog.

This function is only called when a memory cgroup is removed, which is
a rather rare event. I didn't think it would cause any pain. Sorry.

> 
> Why are you spraying IPIs across the entire machine? Why isn't
> synchronize_sched() good enough, that would allow you to get rid of the
> local_irq_save/restore as well.

synchronize_sched() is slower. Calling it for every per-memcg kmem cache
would slow down cleanup on cgroup removal. That cleanup is async, though,
so I'm not sure it would actually be a problem. I think we can try
replacing kick_all_cpus_sync() with synchronize_sched() here.
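Just to make it concrete, here is roughly what the deactivate branch would
look like then (only a sketch, not a tested patch, and it assumes the async
cgroup cleanup path can tolerate the grace period latency):

	if (deactivate) {
		/*
		 * Disable empty slabs caching, as in the patch above.
		 */
		s->cpu_partial = 0;
		s->min_partial = 0;

		/*
		 * put_cpu_partial() reads ->cpu_partial with preemption
		 * disabled, so once synchronize_sched() returns, all cpus
		 * have left any such section that could still see the old
		 * value, and later calls will observe cpu_partial == 0.
		 */
		synchronize_sched();
	}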

Regarding local_irq_save/restore - synchronize_sched() wouldn't allow us
to get rid of them, because unfreeze_partials() must be called with irqs
disabled.

Come to think of it, kick_all_cpus_sync() is used as a memory barrier
here, to make sure that once it has finished all cpus will use the new
->cpu_partial value. That makes me wonder if we could replace it with a
simple smp_mb(). I mean, this_cpu_cmpxchg(), which put_cpu_partial() uses
to add a page to the per-cpu partial list, must issue a full memory
barrier (am I correct?), so we have two possibilities here:

Case 1: smp_mb() is called before this_cpu_cmpxchg() is called on another
cpu executing put_cpu_partial(). In this case, put_cpu_partial() will see
cpu_partial == 0 and hence call the second unfreeze_partials(), flushing
the per-cpu partial list.

Case 2: smp_mb() is called after this_cpu_cmpxchg(). Then
__kmem_cache_shrink() -> flush_all() -> has_cpu_slab() should see (thanks
to the barriers) that there is a slab on a per-cpu list and so flush it
(provided it hasn't already been flushed by put_cpu_partial()).

In any case, after __kmem_cache_shrink() has finished, we are guaranteed
not to have any slabs on per-cpu partial lists.
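To illustrate the idea (again, only a sketch, and it assumes
this_cpu_cmpxchg() really does imply a full barrier), the pairing I have
in mind on the __kmem_cache_shrink() side is:

	if (deactivate) {
		s->cpu_partial = 0;
		s->min_partial = 0;

		/*
		 * Pairs with the barrier implied by this_cpu_cmpxchg() in
		 * put_cpu_partial(): either that cmpxchg sees
		 * cpu_partial == 0 and the second unfreeze_partials()
		 * flushes the per-cpu partial list (case 1), or
		 * flush_all() below sees the page it queued and flushes
		 * it (case 2).
		 */
		smp_mb();
	}

	flush_all(s);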

Does it make sense?

Thanks,
Vladimir
