From: "Luck, Tony" <tony.luck@intel.com>
To: Stephane Eranian <eranian@google.com>
Cc: David Carrillo-Cisneros <davidcc@google.com>,
Thomas Gleixner <tglx@linutronix.de>,
Vikas Shivappa <vikas.shivappa@linux.intel.com>,
"Shivappa, Vikas" <vikas.shivappa@intel.com>,
linux-kernel <linux-kernel@vger.kernel.org>, x86 <x86@kernel.org>,
"hpa@zytor.com" <hpa@zytor.com>, Ingo Molnar <mingo@kernel.org>,
Peter Zijlstra <peterz@infradead.org>,
"Shankar, Ravi V" <ravi.v.shankar@intel.com>,
"Yu, Fenghua" <fenghua.yu@intel.com>,
"Kleen, Andi" <andi.kleen@intel.com>,
"Anvin, H Peter" <h.peter.anvin@intel.com>
Subject: Re: [PATCH 00/12] Cqm2: Intel Cache quality monitoring fixes
Date: Tue, 7 Feb 2017 10:52:04 -0800
Message-ID: <20170207185203.GA19819@intel.com>
In-Reply-To: <CABPqkBQW80CFY7PLjDO_EKRrr0TA+tu3zwoSU7tnL7DgdwV+Wg@mail.gmail.com>

On Tue, Feb 07, 2017 at 12:08:09AM -0800, Stephane Eranian wrote:
> Hi,
>
> I wanted to take a few steps back and look at the overall goals for
> cache monitoring.
> From the various threads and discussion, my understanding is as follows.
>
> I think the design must ensure that the following usage models can be monitored:
> - the allocations in your CAT partitions
> - the allocations from a task (inclusive of children tasks)
> - the allocations from a group of tasks (inclusive of children tasks)
> - the allocations from a CPU
> - the allocations from a group of CPUs
>
> All cases but the first one (CAT) are natural usage, so I want to
> describe the CAT case in more detail.
> The goal, as I understand it, is to monitor what is going on inside
> the CAT partition to detect whether it saturates or whether it has
> room to "breathe". Let's take a simple example.

By "natural usage" you mean "like perf(1) provides for other events"?
But we are trying to figure out requirements here ... what data do
people need to manage caches and memory bandwidth? From this
perspective, monitoring a CAT group is a natural first choice ... did
we provision this group with too much or too little cache?

From that starting point I can see that a possible next step, when
finding that a CAT group has too small a cache, is to drill down to
find out how the tasks in the group are using the cache. Armed with
that information you could move tasks that hog too much cache (and
are believed to be streaming through memory) into a different CAT
group.
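
As a concrete sketch of that workflow via the resctrl filesystem: the
allocation side below matches the interface merged in v4.10, but the
group name, cache mask, and the occupancy read at the end are
hypothetical here, since how (and whether) to expose monitoring is
exactly what this thread is debating.

```shell
# Mount the resctrl filesystem (requires root and RDT-capable hardware).
mount -t resctrl resctrl /sys/fs/resctrl

# Create a CAT group with a deliberately small cache mask for tasks
# believed to be streaming through memory ("streamers" and the mask
# value are made-up examples).
mkdir /sys/fs/resctrl/streamers
echo "L3:0=3" > /sys/fs/resctrl/streamers/schemata

# Move the suspected cache hog into the restricted group.
echo $STREAMING_PID > /sys/fs/resctrl/streamers/tasks

# Hypothetical: check how much room the original group gained
# (this mon_data layout is an assumption, not a merged interface).
cat /sys/fs/resctrl/mon_data/mon_L3_00/llc_occupancy
```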

What I'm not seeing is how drilling down to CPUs helps you.
Say you have CPUs=CPU0,CPU1 in the CAT group and you collect data
that shows that 75% of the cache occupancy is attributed to CPU0 and
only 25% to CPU1. What can you do with this information to improve
things?
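
For concreteness, the 75/25 split above would come out of per-CPU
occupancy counts like these (the byte values are invented; the real
hardware counts per RMID, not per CPU, which is part of why per-CPU
reporting is awkward):

```shell
# Hypothetical per-CPU occupancy readings, in bytes.
cpu0=$((768 * 1024))
cpu1=$((256 * 1024))
total=$((cpu0 + cpu1))

# Attribute the group's occupancy as percentage shares.
echo "CPU0: $((100 * cpu0 / total))%  CPU1: $((100 * cpu1 / total))%"
# prints: CPU0: 75%  CPU1: 25%
```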

If it is deemed too complex (from a kernel code perspective) to
implement per-CPU reporting, how bad a loss would that be?

-Tony