From: Michal Hocko <mhocko@suse.cz>
To: Tejun Heo <htejun@gmail.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Ying Han <yinghan@google.com>,
	Glauber Costa <glommer@parallels.com>
Subject: Re: [RFC 2/5] memcg: rework mem_cgroup_iter to use cgroup iterators
Date: Thu, 15 Nov 2012 17:15:04 +0100	[thread overview]
Message-ID: <20121115161504.GF11990@dhcp22.suse.cz> (raw)
In-Reply-To: <20121115153124.GD7306@mtj.dyndns.org>

On Thu 15-11-12 07:31:24, Tejun Heo wrote:
> Hello, Michal.
> 
> On Thu, Nov 15, 2012 at 04:12:55PM +0100, Michal Hocko wrote:
> > > Because I'd like to consider the next functions an implementation
> > > detail, and having iterations structured as loops tends to read
> > > better and be less error-prone.  e.g. when you use next functions
> > > directly, it's way easier to circumvent locking requirements in a
> > > way which isn't very obvious.
> > 
> > The whole point of mem_cgroup_iter is to hide all the complexity
> > behind memcg iteration. Memcg code uses for_each_mem_cgroup_tree
> > for the !reclaim case and mem_cgroup_iter otherwise.
> > 
> > > So, unless it messes up the code too much (and I can't see why it
> > > would), I'd much prefer if memcg used for_each_*() macros.
> > 
> > As I said, this would mean that the current mem_cgroup_iter code
> > would have to be inverted, which doesn't simplify it much. I'd
> > rather hide all the gross details inside the memcg iterator.
> > Or am I still missing your suggestion?
> 
> One way or the other, I don't think the code complexity would change
> much.  Again, I'd much *prefer* if memcg used what other controllers
> would be using, but that's a preference and if necessary we can keep
> the next functions as exposed APIs. 

Yes please.

> I think the issue I have is that I can't see much technical
> justification for that. If the code becomes much simpler by choosing
> one over the other, sure, but is that the case here?

Yes, and I've tried to say that already. Memcg needs iteration that is
hierarchy aware, css-refcounting aware and concurrent-reclaim aware
(per-zone, per-priority). All of that is currently hidden inside
mem_cgroup_iter, so the caller doesn't have to care about any of it,
which is what keeps shrink_zone largely oblivious of memcg.
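
To illustrate, this is roughly how the current callers look (a
condensed sketch; most of the reclaim logic is elided):

	/* !reclaim case: a thin wrapper around the iterator */
	#define for_each_mem_cgroup_tree(iter, root)		\
		for (iter = mem_cgroup_iter(root, NULL, NULL);	\
		     iter != NULL;				\
		     iter = mem_cgroup_iter(root, iter, NULL))

	/* reclaim case: the walk is driven by a per-zone per-priority
	 * cookie so that concurrent reclaimers interleave */
	static void shrink_zone(struct zone *zone, struct scan_control *sc)
	{
		struct mem_cgroup_reclaim_cookie reclaim = {
			.zone = zone,
			.priority = sc->priority,
		};
		struct mem_cgroup *root = sc->target_mem_cgroup;
		struct mem_cgroup *memcg;

		memcg = mem_cgroup_iter(root, NULL, &reclaim);
		do {
			/* shrink this group's lruvec for the zone here */
			memcg = mem_cgroup_iter(root, memcg, &reclaim);
		} while (memcg);
	}

All of the RCU, css_tryget/css_put and cached-position handling lives
behind mem_cgroup_iter, which is exactly the point.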

cgroup_for_each_descendant_pre is not suitable, at the very least
because it doesn't provide a way to start the walk at a selected node
(which, in the memcg case, is shared per-zone per-priority).
Even if cgroup_for_each_descendant_pre had a start parameter, there
would still be a lot of housekeeping left for the callers to handle
(css_tryget to start with, updating the cached position, not to
mention the use_hierarchy thingy or mem_cgroup_disabled).
We also try to pollute mm/vmscan.c as little as possible, so we
definitely do not want to bring all of this into shrink_zone.
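
For comparison, here is a very rough sketch of what each walker would
have to open-code on top of the raw cgroup iterator (illustrative
only: walk_one is a made-up name, the start-node problem above is
ignored, and the cached-position update is reduced to a comment):

	static struct mem_cgroup *walk_one(struct mem_cgroup *root)
	{
		struct cgroup *pos;
		struct mem_cgroup *memcg = NULL;

		if (mem_cgroup_disabled())
			return NULL;
		/* !root->use_hierarchy would need special casing:
		 * only root itself may be visited then */

		rcu_read_lock();
		cgroup_for_each_descendant_pre(pos, root->css.cgroup) {
			memcg = mem_cgroup_from_cont(pos);
			/* the group may be going away under us */
			if (css_tryget(&memcg->css))
				break;
			memcg = NULL;
		}
		rcu_read_unlock();

		/* the caller would still have to publish the new cached
		 * per-zone per-priority position and css_put() the
		 * reference once it is done with the group */
		return memcg;
	}

Multiply that by every reclaim path and it should be clear why I don't
want any of it in mm/vmscan.c.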

Exposing all of that sounds like too much of a hassle, so I would
really like to stay with mem_cgroup_iter and simplify it gradually
until it can go away (if that is possible at all).

> Isn't it mostly just about where to put the same things?

Unfortunately not. If it were, we wouldn't have grown our own iterator
in the first place.

> If so, what would be the rationale for requiring a different
> interface?

Does the above explain it?

-- 
Michal Hocko
SUSE Labs

Thread overview: 31+ messages
2012-11-13 15:30 [RFC] rework mem_cgroup iterator Michal Hocko
2012-11-13 15:30 ` [RFC 1/5] memcg: synchronize per-zone iterator access by a spinlock Michal Hocko
2012-11-14  0:03   ` Kamezawa Hiroyuki
2012-11-13 15:30 ` [RFC 2/5] memcg: rework mem_cgroup_iter to use cgroup iterators Michal Hocko
2012-11-13 16:14   ` Tejun Heo
2012-11-14  8:51     ` Michal Hocko
2012-11-14 18:52       ` Tejun Heo
2012-11-15  9:51         ` Michal Hocko
2012-11-15 14:47           ` Tejun Heo
2012-11-15 15:12             ` Michal Hocko
2012-11-15 15:31               ` Tejun Heo
2012-11-15 16:15                 ` Michal Hocko [this message]
2012-11-14  0:20   ` Kamezawa Hiroyuki
2012-11-14 10:10     ` Michal Hocko
2012-11-15  4:12       ` Kamezawa Hiroyuki
2012-11-15  9:52         ` Michal Hocko
2012-11-19 14:05       ` Michal Hocko
2012-11-19 15:11   ` Michal Hocko
2012-11-13 15:30 ` [RFC 3/5] memcg: simplify mem_cgroup_iter Michal Hocko
2012-11-13 15:30 ` [RFC 4/5] memcg: clean up mem_cgroup_iter Michal Hocko
2012-11-13 15:30 ` [RFC 5/5] cgroup: remove css_get_next Michal Hocko
2012-11-14  0:13 ` [RFC] rework mem_cgroup iterator Kamezawa Hiroyuki
2012-11-14  1:55 ` Li Zefan
2012-11-14  8:36   ` Michal Hocko
2012-11-14 18:30     ` Tejun Heo
2012-11-15  2:12   ` Kamezawa Hiroyuki
2012-11-14 16:17 ` Glauber Costa
2012-11-14  8:40   ` Michal Hocko
2012-11-14 18:41   ` Tejun Heo
2012-11-15  2:44     ` Glauber Costa
2012-11-14 18:46       ` Tejun Heo
