From: Michal Hocko <mhocko@suse.cz>
To: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>,
Li Zefan <lizefan@huawei.com>,
Andrew Morton <akpm@linux-foundation.org>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] hugetlb/cgroup: Simplify pre_destroy callback
Date: Thu, 19 Jul 2012 13:42:28 +0200 [thread overview]
Message-ID: <20120719114228.GD2864@tiehlicka.suse.cz> (raw)
In-Reply-To: <87r4s8f0v9.fsf@skywalker.in.ibm.com>
On Thu 19-07-12 16:56:18, Aneesh Kumar K.V wrote:
> Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> writes:
>
> >>>>>
> >>>>> We test RES_USAGE before taking hugetlb_lock. What prevents some other
> >>>>> thread from increasing RES_USAGE after that test?
> >>>>>
> >>>>> After walking the list we test RES_USAGE after dropping hugetlb_lock.
> >>>>> What prevents another thread from incrementing RES_USAGE before that
> >>>>> test, triggering the BUG?
> >>>>
> >>>> IIUC, core cgroup will prevent a new task from being added to the cgroup
> >>>> while we are in pre_destroy. Since we already check that the cgroup doesn't
> >>>> have any tasks, RES_USAGE cannot increase during pre_destroy.
> >>>>
> >>>
> >>>
> >>> You're wrong here. We release cgroup_lock before calling pre_destroy and
> >>> reacquire the lock afterwards, so a task can be attached to the cgroup in
> >>> that interval.
> >>>
> >>
> >> But that means rmdir can be racy, right? What happens if a task got
> >> added, allocated a few pages, and then moved out? We would still have a
> >> task count of 0 but a few pages left, which we failed to move to the
> >> parent cgroup.
> >>
> >
> > That's a problem, even if it's very unlikely.
> > I'd like to look into it and fix the race in the cgroup layer,
> > but I'm sorry, I'm a bit busy these days...
> >
>
> How about moving that mutex_unlock(&cgroup_mutex) into the memcg callback?
> Could that be a patch for 3.5?
Bahh, I have just posted a follow-up on the mm-commits email about exactly
this. Sorry, I missed that the discussion was still ongoing. I have also
posted something that I think should help. Can we follow up on that one,
or should I post the patch here as well?
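
To make the window concrete, here is a minimal userspace sketch of the
check-then-act race being discussed: the "cgroup has no tasks" check
passes, the lock is dropped (modelling cgroup_rmdir() releasing
cgroup_mutex around pre_destroy), and a task attaches, charges pages,
and detaches before the lock is retaken. All names below are
illustrative stand-ins, not kernel APIs, and the interleaving is forced
with a join so the outcome is deterministic:

#include <assert.h>
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int task_count;	/* stands in for the cgroup's task count */
static int res_usage;	/* stands in for hugetlb RES_USAGE */

/* A task attaches during the window, charges pages, then moves away. */
static void *interloper(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&lock);
	task_count++;		/* task attaches */
	res_usage += 2;		/* ... charges two huge pages */
	task_count--;		/* ... and moves to another cgroup */
	pthread_mutex_unlock(&lock);
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_mutex_lock(&lock);
	assert(task_count == 0);	/* "cgroup has no tasks" check */
	pthread_mutex_unlock(&lock);	/* window opens: models dropping
					 * cgroup_mutex around pre_destroy */

	pthread_create(&t, NULL, interloper, NULL);
	pthread_join(&t, NULL);

	pthread_mutex_lock(&lock);	/* window closes */
	printf("task_count=%d res_usage=%d\n", task_count, res_usage);
	/* task_count is 0 again, but res_usage was left non-zero */
	pthread_mutex_unlock(&lock);
	return 0;
}

Build with "cc -pthread"; it prints task_count=0 res_usage=2, i.e. an
"empty" cgroup with leftover charges, which is exactly the state Aneesh
describes above: task count 0 but pages that were never moved to the
parent cgroup.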
>
> -aneesh
>
>
--
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9
Czech Republic
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org
Thread overview: 9+ messages
2012-07-18 5:34 [PATCH] hugetlb/cgroup: Simplify pre_destroy callback Aneesh Kumar K.V
2012-07-18 7:24 ` Wanpeng Li
2012-07-18 21:26 ` Andrew Morton
2012-07-19 2:55 ` Aneesh Kumar K.V
2012-07-19 6:59 ` Li Zefan
2012-07-19 9:41 ` Aneesh Kumar K.V
2012-07-19 10:25 ` Kamezawa Hiroyuki
2012-07-19 11:26 ` Aneesh Kumar K.V
2012-07-19 11:42 ` Michal Hocko [this message]