public inbox for linux-kernel@vger.kernel.org
From: Roman Gushchin <guro@fb.com>
To: Tejun Heo <tj@kernel.org>
Cc: <cgroups@vger.kernel.org>, Zefan Li <lizefan@huawei.com>,
	Waiman Long <longman@redhat.com>,
	Johannes Weiner <hannes@cmpxchg.org>, <kernel-team@fb.com>,
	<linux-doc@vger.kernel.org>, <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH] cgroup: add cgroup.stat interface with basic hierarchy stats
Date: Fri, 28 Jul 2017 14:01:55 +0100	[thread overview]
Message-ID: <20170728130155.GA13728@castle.dhcp.TheFacebook.com> (raw)
In-Reply-To: <20170727162243.GF742618@devbig577.frc2.facebook.com>

On Thu, Jul 27, 2017 at 12:22:43PM -0400, Tejun Heo wrote:
> Hello,
> 
> On Thu, Jul 27, 2017 at 05:14:20PM +0100, Roman Gushchin wrote:
> > Add a cgroup.stat interface to the base cgroup control files
> > with the following metrics:
> > 
> > nr_descendants		total number of descendant cgroups
> > nr_dying_descendants	total number of dying descendant cgroups
> > max_descendant_depth	maximum descent depth below the current cgroup
> 
> Yeah, this'd be great to have.  Some comments below.
> 
> > +  cgroup.stat
> > +	A read-only flat-keyed file with the following entries:
> > +
> > +	  nr_descendants
> > +		Total number of descendant cgroups.
> > +
> > +	  nr_dying_descendants
> > +		Total number of dying descendant cgroups.
> 
> Can you please go into more detail on what's going on with dying
> descendants here?

Sure.
Don't we plan to describe the cgroup/css lifecycle in detail
in a separate section?

> > +static int cgroup_stats_show(struct seq_file *seq, void *v)
> > +{
> > +	struct cgroup_subsys_state *css;
> > +	unsigned long total = 0;
> > +	unsigned long offline = 0;
> > +	int max_level = 0;
> > +
> > +	rcu_read_lock();
> > +	css_for_each_descendant_pre(css, seq_css(seq)) {
> > +		if (css == seq_css(seq))
> > +			continue;
> > +		++total;
> 
> Let's do post increment for consistency.

Ok.

> 
> > +		if (!(css->flags & CSS_ONLINE))
> > +			++offline;
> > +		if (css->cgroup->level > max_level)
> > +			max_level = css->cgroup->level;
> > +	}
> > +	rcu_read_unlock();
> 
> I wonder whether we want to keep these counters in sync instead of
> trying to gather the number on read.  Walking all descendants can get
> expensive pretty quickly and things like nr_descendants will be useful
> for other purposes too.

Ok, given that I'm working on adding the ability to set limits
on the cgroup hierarchy, keeping these counters in sync seems very
reasonable. I'll implement this and post the updated patch as part
of a patchset.

Thanks!

Thread overview: 7+ messages
2017-07-27 16:14 [PATCH] cgroup: add cgroup.stat interface with basic hierarchy stats Roman Gushchin
2017-07-27 16:22 ` Tejun Heo
2017-07-28 13:01   ` Roman Gushchin [this message]
2017-07-28 15:09     ` Tejun Heo
2017-07-27 17:38 ` Waiman Long
2017-07-27 17:46   ` Roman Gushchin
2017-07-27 17:55     ` Waiman Long
