public inbox for linux-kernel@vger.kernel.org
From: Tejun Heo <tj@kernel.org>
To: "Waskiewicz Jr, Peter P" <peter.p.waskiewicz.jr@intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Li Zefan <lizefan@huawei.com>,
	"containers@lists.linux-foundation.org" 
	<containers@lists.linux-foundation.org>,
	"cgroups@vger.kernel.org" <cgroups@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 0/4] x86: Add Cache QoS Monitoring (CQM) support
Date: Sat, 4 Jan 2014 17:50:58 -0500	[thread overview]
Message-ID: <20140104225058.GC24306@htj.dyndns.org> (raw)
In-Reply-To: <1388875369.9761.25.camel@ppwaskie-mobl.amr.corp.intel.com>

Hello,

On Sat, Jan 04, 2014 at 10:43:00PM +0000, Waskiewicz Jr, Peter P wrote:
> Simply put, when we want to allocate an RMID for monitoring httpd
> traffic, we can create a new child in the subsystem hierarchy, and
> assign the httpd processes to it.  Then the RMID can be assigned to the
> subsystem, and each process inherits that RMID.  So instead of dealing
> with assigning an RMID to each and every process, we can leverage the
> existing cgroup mechanisms for grouping processes and their children to
> a group, and they inherit the RMID.

Here's one thing that I don't get, possibly because I'm not
understanding the processor feature too well.  Why does the processor
have to be aware of the grouping?  i.e. why can't it be done
per-process and then aggregated?  Is there something inherent about
the monitored events which requires such peculiarity?  Or is it that
accessing the stats data is noticeably expensive to do per context
switch?

> Please let me know if this is a better explanation, and gives a better
> picture of why we decided to approach the implementation this way.  Also
> note that this feature, Cache QoS Monitoring, is the first in a series
> of Platform QoS Monitoring features that will be coming.  So this isn't
> a one-off feature, so however this first piece gets accepted, we want to
> make sure it's easy to expand and not impact userspace tools repeatedly
> (if possible).

In general, I'm quite strongly opposed to using cgroup as an
arbitrary grouping mechanism for anything other than resource control,
especially given that we're moving away from multiple hierarchies.

Thanks.

-- 
tejun


Thread overview: 35+ messages
2014-01-03 20:34 [PATCH 0/4] x86: Add Cache QoS Monitoring (CQM) support Peter P Waskiewicz Jr
2014-01-03 20:34 ` [PATCH 1/4] x86: Add support for Cache QoS Monitoring (CQM) detection Peter P Waskiewicz Jr
2014-01-03 20:34 ` [PATCH 2/4] x86: Add Cache QoS Monitoring support to x86 perf uncore Peter P Waskiewicz Jr
2014-01-03 20:34 ` [PATCH 3/4] cgroup: Add new cacheqos cgroup subsys to support Cache QoS Monitoring Peter P Waskiewicz Jr
2014-01-03 20:34 ` [PATCH 4/4] Documentation: Add documentation for cacheqos cgroup Peter P Waskiewicz Jr
2014-01-04 16:10 ` [PATCH 0/4] x86: Add Cache QoS Monitoring (CQM) support Tejun Heo
2014-01-04 22:43   ` Waskiewicz Jr, Peter P
2014-01-04 22:50     ` Tejun Heo [this message]
2014-01-05  5:23       ` Waskiewicz Jr, Peter P
2014-01-06 11:16         ` Peter Zijlstra
2014-01-06 16:34           ` Waskiewicz Jr, Peter P
2014-01-06 16:41             ` Peter Zijlstra
2014-01-06 16:47               ` Waskiewicz Jr, Peter P
2014-01-06 17:53                 ` Peter Zijlstra
2014-01-06 18:05                   ` Waskiewicz Jr, Peter P
2014-01-06 18:06                 ` Peter Zijlstra
2014-01-06 20:10                   ` Waskiewicz Jr, Peter P
2014-01-06 21:26                     ` Peter Zijlstra
2014-01-06 21:48                       ` Waskiewicz Jr, Peter P
2014-01-06 22:12                         ` Peter Zijlstra
2014-01-06 22:45                           ` Waskiewicz Jr, Peter P
2014-01-07  8:34                             ` Peter Zijlstra
2014-01-07 15:15                               ` Waskiewicz Jr, Peter P
2014-01-07 21:12                                 ` Peter Zijlstra
2014-01-10 18:55                                   ` Waskiewicz Jr, Peter P
2014-01-13  7:55                                     ` Peter Zijlstra
2014-01-14 17:58                                       ` H. Peter Anvin
2014-01-27 17:34                                         ` Peter Zijlstra
2014-02-18 17:29                                           ` Waskiewicz Jr, Peter P
2014-02-18 19:35                                             ` Peter Zijlstra
2014-02-18 19:54                                               ` Waskiewicz Jr, Peter P
2014-02-20 16:58                                                 ` Peter Zijlstra
2014-01-14 20:46                                       ` Waskiewicz Jr, Peter P
2014-01-06 11:08 ` Peter Zijlstra
2014-01-06 16:42   ` Waskiewicz Jr, Peter P
