From: Michal Hocko <mhocko@suse.cz>
To: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>,
linux-mm@kvack.org, Balbir Singh <bsingharora@gmail.com>,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH 3/4] memcg: get rid of percpu_charge_mutex lock
Date: Fri, 22 Jul 2011 11:41:26 +0200 [thread overview]
Message-ID: <20110722094126.GD4004@tiehlicka.suse.cz> (raw)
In-Reply-To: <20110722092759.9be9078f.nishimura@mxp.nes.nec.co.jp>
On Fri 22-07-11 09:27:59, Daisuke Nishimura wrote:
> On Thu, 21 Jul 2011 14:42:23 +0200
> Michal Hocko <mhocko@suse.cz> wrote:
>
> > On Thu 21-07-11 13:47:04, Michal Hocko wrote:
> > > On Thu 21-07-11 19:30:51, KAMEZAWA Hiroyuki wrote:
> > > > On Thu, 21 Jul 2011 09:58:24 +0200
> > > > Michal Hocko <mhocko@suse.cz> wrote:
> > [...]
> > > > > --- a/mm/memcontrol.c
> > > > > +++ b/mm/memcontrol.c
> > > > > @@ -2166,7 +2165,8 @@ static void drain_all_stock(struct mem_cgroup *root_mem, bool sync)
> > > > > 
> > > > >  	for_each_online_cpu(cpu) {
> > > > >  		struct memcg_stock_pcp *stock = &per_cpu(memcg_stock, cpu);
> > > > > -		if (test_bit(FLUSHING_CACHED_CHARGE, &stock->flags))
> > > > > +		if (root_mem == stock->cached &&
> > > > > +				test_bit(FLUSHING_CACHED_CHARGE, &stock->flags))
> > > > >  			flush_work(&stock->work);
> > > >
> > > > This new check doesn't handle hierarchy, does it?
> > > > css_is_ancestor() will be required if you do this check.
> > >
> > > Yes, you are right. Will fix it. I will add a helper for the check.
> >
> > Here is the patch with the helper. The above will then read
> > if (mem_cgroup_same_or_subtree(root_mem, stock->cached))
> >
> I welcome this new helper function, but it can be used in
> memcg_oom_wake_function() and mem_cgroup_under_move() too, can't it?
Sure. Incremental patch (I will fold it into the one above):
---
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 8dbb9d6..64569c7 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1416,10 +1416,9 @@ static bool mem_cgroup_under_move(struct mem_cgroup *mem)
 	to = mc.to;
 	if (!from)
 		goto unlock;
-	if (from == mem || to == mem
-		|| (mem->use_hierarchy && css_is_ancestor(&from->css, &mem->css))
-		|| (mem->use_hierarchy && css_is_ancestor(&to->css, &mem->css)))
-		ret = true;
+
+	ret = mem_cgroup_same_or_subtree(mem, from)
+		|| mem_cgroup_same_or_subtree(mem, to);
 unlock:
 	spin_unlock(&mc.lock);
 	return ret;
@@ -1906,25 +1905,20 @@ struct oom_wait_info {
 static int memcg_oom_wake_function(wait_queue_t *wait,
 	unsigned mode, int sync, void *arg)
 {
-	struct mem_cgroup *wake_mem = (struct mem_cgroup *)arg;
+	struct mem_cgroup *wake_mem = (struct mem_cgroup *)arg,
+			*oom_wait_mem;
 	struct oom_wait_info *oom_wait_info;
 	oom_wait_info = container_of(wait, struct oom_wait_info, wait);
+	oom_wait_mem = oom_wait_info->mem;
 
-	if (oom_wait_info->mem == wake_mem)
-		goto wakeup;
-	/* if no hierarchy, no match */
-	if (!oom_wait_info->mem->use_hierarchy || !wake_mem->use_hierarchy)
-		return 0;
 	/*
 	 * Both of oom_wait_info->mem and wake_mem are stable under us.
 	 * Then we can use css_is_ancestor without taking care of RCU.
 	 */
-	if (!css_is_ancestor(&oom_wait_info->mem->css, &wake_mem->css) &&
-	    !css_is_ancestor(&wake_mem->css, &oom_wait_info->mem->css))
+	if (!mem_cgroup_same_or_subtree(oom_wait_mem, wake_mem)
+			&& !mem_cgroup_same_or_subtree(wake_mem, oom_wait_mem))
 		return 0;
-
-wakeup:
 	return autoremove_wake_function(wait, mode, sync, arg);
 }
--
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9
Czech Republic
2011-07-21 9:41 [PATCH 0/4] memcg: cleanup per-cpu charge caches + fix unnecessary reclaim if there are still cached charges Michal Hocko
2011-07-21 7:38 ` [PATCH 1/4] memcg: do not try to drain per-cpu caches without pages Michal Hocko
2011-07-21 10:12 ` KAMEZAWA Hiroyuki
2011-07-21 11:36 ` Michal Hocko
2011-07-21 23:44 ` KAMEZAWA Hiroyuki
2011-07-22 9:19 ` Michal Hocko
2011-07-22 9:28 ` KAMEZAWA Hiroyuki
2011-07-22 9:58 ` Michal Hocko
2011-07-22 10:23 ` Michal Hocko
2011-07-21 7:50 ` [PATCH 2/4] memcg: unify sync and async per-cpu charge cache draining Michal Hocko
2011-07-21 10:25 ` KAMEZAWA Hiroyuki
2011-07-21 11:36 ` Michal Hocko
2011-07-21 7:58 ` [PATCH 3/4] memcg: get rid of percpu_charge_mutex lock Michal Hocko
2011-07-21 10:30 ` KAMEZAWA Hiroyuki
2011-07-21 11:47 ` Michal Hocko
2011-07-21 12:42 ` Michal Hocko
2011-07-21 23:49 ` KAMEZAWA Hiroyuki
2011-07-22 9:21 ` Michal Hocko
2011-07-22 0:27 ` Daisuke Nishimura
2011-07-22 9:41 ` Michal Hocko [this message]
2011-07-21 8:28 ` [PATCH 4/4] memcg: prevent from reclaiming if there are per-cpu cached charges Michal Hocko
2011-07-21 10:54 ` KAMEZAWA Hiroyuki
2011-07-21 12:30 ` Michal Hocko
2011-07-21 23:56 ` KAMEZAWA Hiroyuki
2011-07-22 0:18 ` KAMEZAWA Hiroyuki
2011-07-22 9:54 ` Michal Hocko