From: Peter Zijlstra <peterz@infradead.org>
To: "Luck, Tony" <tony.luck@intel.com>
Cc: Vikas Shivappa <vikas.shivappa@linux.intel.com>,
"Shivappa, Vikas" <vikas.shivappa@intel.com>,
"x86@kernel.org" <x86@kernel.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"hpa@zytor.com" <hpa@zytor.com>,
"tglx@linutronix.de" <tglx@linutronix.de>,
"mingo@kernel.org" <mingo@kernel.org>,
"Shankar, Ravi V" <ravi.v.shankar@intel.com>,
"Yu, Fenghua" <fenghua.yu@intel.com>,
"davidcc@google.com" <davidcc@google.com>,
"Stephane Eranian (eranian@google.com)" <eranian@google.com>
Subject: Re: [PATCH 2/3] perf/x86/mbm: Fix mbm counting for RMID reuse
Date: Wed, 11 May 2016 09:23:47 +0200 [thread overview]
Message-ID: <20160511072347.GF3193@twins.programming.kicks-ass.net> (raw)
In-Reply-To: <3908561D78D1C84285E8C5FCA982C28F3A0C496C@ORSMSX114.amr.corp.intel.com>
On Tue, May 10, 2016 at 04:39:39PM +0000, Luck, Tony wrote:
> >> (3) Also we may not want to count at every sched_in and sched_out
> >> because the MSR reads involve quite a bit of overhead.
> >
> > Every single other PMU driver just does this; why are you special?
>
> They just have to read a register. We have to write the IA32_QM_EVTSEL MSR
> and then read from the IA32_QM_CTR MSR ... if we are tracking both local
> and total bandwidth, we have to repeat the wrmsr/rdmsr pair again to get the
> other counter. That seems like it will noticeably affect the system if we do it
> on every sched_in and sched_out.
Right; but Vikas didn't say that, did he ;-). He just mentioned the MSR read.
Also; I don't think you actually have to do it on every sched event,
only when the event<->rmid association changes. As long as the
event<->rmid association doesn't change, you can forgo updates.
> But the more we make this complicated, the more I think that we should not
> go through the pain of stealing/recycling RMIDs and just limit the number of
> things that can be simultaneously monitored. If someone tries to monitor one
> more thing when all the RMIDs are in use, we should just error out with
> -ERUNOUTOFRMIDSTRYAGAINLATER (maybe -EAGAIN???)
Possibly; but I would like to minimize churn at this point to let the
Google guys get their patches in shape. They seem to have definite ideas
about that as well :-)
Thread overview: 7+ messages
2016-05-06 23:44 [PATCH V2 0/3] Urgent fixes for Intel CQM/MBM counting Vikas Shivappa
2016-05-06 23:44 ` [PATCH 1/3] perf/x86/cqm,mbm: Store cqm,mbm count for all events when RMID is recycled Vikas Shivappa
2016-05-06 23:44 ` [PATCH 2/3] perf/x86/mbm: Fix mbm counting for RMID reuse Vikas Shivappa
2016-05-10 12:15 ` Peter Zijlstra
2016-05-10 16:39 ` Luck, Tony
2016-05-11 7:23 ` Peter Zijlstra [this message]
2016-05-06 23:44 ` [PATCH 3/3] perf/x86/mbm: Fix mbm counting during RMID recycling Vikas Shivappa