linux-mm.kvack.org archive mirror
From: Vladimir Davydov <vdavydov@parallels.com>
To: Johannes Weiner <hannes@cmpxchg.org>
Cc: mhocko@suse.cz, akpm@linux-foundation.org, glommer@gmail.com,
	cl@linux-foundation.org, penberg@kernel.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	devel@openvz.org
Subject: Re: [PATCH RFC -mm v2 1/3] memcg, slab: do not schedule cache destruction when last page goes away
Date: Fri, 18 Apr 2014 20:05:58 +0400
Message-ID: <53514D66.1010607@parallels.com>
In-Reply-To: <20140418134122.GB26283@cmpxchg.org>

On 04/18/2014 05:41 PM, Johannes Weiner wrote:
>> Thus we have a piece of code that works only when we explicitly call
>> kmem_cache_shrink, but which complicates the whole picture a lot.
>> Moreover, it is in fact racy: kmem_cache_shrink may free the last
>> slab, and thus schedule cache destruction, before it finishes checking
>> that the cache is empty, which can lead to a use-after-free.
>
> Can't this still happen when the last object free races with css
> destruction?

AFAIU, yes, it can still happen, but we have fewer places to fix now. I'm
planning to sort this out by rearranging the operations inside
kmem_cache_free so that we do not touch the cache after we've
decremented memcg_cache_params::nr_pages and thereby made the cache
potentially destroyable.
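
Roughly, what I have in mind is something like the following (a
simplified user-space sketch, not the actual kernel code: 'cache',
'nr_pages' and 'destroy_cache' here are made-up stand-ins for
kmem_cache, memcg_cache_params::nr_pages and the real destruction
path):

  #include <stdatomic.h>
  #include <stdlib.h>

  struct cache {
          atomic_int nr_pages;    /* stand-in for memcg_cache_params::nr_pages */
          /* ... freelists, statistics, etc. ... */
  };

  static void destroy_cache(struct cache *c)
  {
          free(c);                /* the cache is gone after this */
  }

  /* Called when a slab page belonging to the cache is freed. */
  static void cache_page_freed(struct cache *c)
  {
          /*
           * All accesses to *c must happen before the decrement: once
           * nr_pages drops to zero, the cache may be destroyed
           * concurrently, so *c must not be touched afterwards.
           */
          /* ... update freelists, statistics, etc. here ... */

          if (atomic_fetch_sub(&c->nr_pages, 1) == 1)
                  destroy_cache(c);       /* we freed the last page */
  }

  int main(void)
  {
          struct cache *c = malloc(sizeof(*c));

          if (!c)
                  return 1;
          atomic_init(&c->nr_pages, 1);
          cache_page_freed(c);            /* last page goes away */
          return 0;
  }

The point is simply that the decrement becomes the very last access to
the cache on the free path, so whoever sees nr_pages hit zero can
destroy the cache without racing with a freer.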

Or, if we could reparent individual slabs as you proposed earlier, we
wouldn't have to worry about this at all any more, nor about per-memcg
cache destruction.
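
(To make it concrete, here is a very rough sketch of what I understand
by reparenting; the structures are made up for illustration and
locking is omitted:)

  #include <linux/list.h>

  struct memcg_cache {
          struct list_head slabs;         /* slab pages owned by this cache */
          struct memcg_cache *parent;     /* cache of the parent memcg */
  };

  /*
   * On css offline, hand all slab pages over to the parent's cache, so
   * that the child cache can be destroyed right away instead of
   * lingering until its last object is freed.
   */
  static void reparent_slabs(struct memcg_cache *child)
  {
          list_splice_init(&child->slabs, &child->parent->slabs);
  }

Then the child cache could be zapped right at css offline, and the
nr_pages bookkeeping above would simply go away.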

Thanks.

Thread overview: 15+ messages
2014-04-18  8:04 [PATCH RFC -mm v2 0/3] kmemcg: simplify work-flow (was "memcg-vs-slab cleanup") Vladimir Davydov
2014-04-18  8:04 ` [PATCH RFC -mm v2 1/3] memcg, slab: do not schedule cache destruction when last page goes away Vladimir Davydov
2014-04-18 13:41   ` Johannes Weiner
2014-04-18 16:05     ` Vladimir Davydov [this message]
2014-04-18  8:04 ` [PATCH RFC -mm v2 2/3] memcg, slab: merge memcg_{bind,release}_pages to memcg_{un}charge_slab Vladimir Davydov
2014-04-18 13:44   ` Johannes Weiner
2014-04-18 16:07     ` Vladimir Davydov
2014-04-18  8:04 ` [PATCH RFC -mm v2 3/3] memcg, slab: simplify synchronization scheme Vladimir Davydov
2014-04-18 14:17   ` Johannes Weiner
2014-04-18 16:08     ` Vladimir Davydov
2014-04-18 18:26       ` Johannes Weiner
2014-04-18  8:08 ` [PATCH RFC -mm v2 0/3] kmemcg: simplify work-flow (was "memcg-vs-slab cleanup") Vladimir Davydov
2014-04-18 13:23 ` Johannes Weiner
2014-04-18 16:04   ` Vladimir Davydov
2014-04-20 10:32 ` Vladimir Davydov
