From: Peter Zijlstra <peterz@infradead.org>
To: Vikas Shivappa <vikas.shivappa@linux.intel.com>
Cc: tony.luck@intel.com, ravi.v.shankar@intel.com,
fenghua.yu@intel.com, vikas.shivappa@intel.com, x86@kernel.org,
linux-kernel@vger.kernel.org, hpa@zytor.com, tglx@linutronix.de,
mingo@kernel.org, h.peter.anvin@intel.com
Subject: Re: [PATCH 1/4] perf/x86/cqm,mbm: Store cqm,mbm count for all events when RMID is recycled
Date: Mon, 25 Apr 2016 11:20:35 +0200
Message-ID: <20160425092035.GH3430@twins.programming.kicks-ass.net>
In-Reply-To: <1461371241-4258-2-git-send-email-vikas.shivappa@linux.intel.com>
On Fri, Apr 22, 2016 at 05:27:18PM -0700, Vikas Shivappa wrote:
> During RMID recycling, when an event loses its RMID we saved the counter
> for the group leader, but the counter was not being saved for the other
> events in the event group. This could lead to a situation where, if two
> perf instances are counting the same PID, one of them would not see the
> updated count that the other instance is seeing. This patch fixes the
> issue by saving the count for all the events in the same event group.
> @@ -486,14 +495,21 @@ static u32 intel_cqm_xchg_rmid(struct perf_event *group, u32 rmid)
> * If our RMID is being deallocated, perform a read now.
> */
> if (__rmid_valid(old_rmid) && !__rmid_valid(rmid)) {
>
> + rr = __init_rr(old_rmid, group->attr.config, 0);
> cqm_mask_call(&rr);
> local64_set(&group->count, atomic64_read(&rr.value));
> + list_for_each_entry(event, head, hw.cqm_group_entry) {
> + if (event->hw.is_group_event) {
> +
> + evttype = event->attr.config;
> + rr = __init_rr(old_rmid, evttype, 0);
> +
> + cqm_mask_call(&rr);
> + local64_set(&event->count,
> + atomic64_read(&rr.value));
Randomly indent much?
> + }
> + }
> }
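That is, properly indented (and with the inner test inverted to cut one
level of nesting) the new hunk would read something like the below -- an
untested sketch of the same logic, relying on the event, head and
old_rmid declarations already present in intel_cqm_xchg_rmid():

        if (__rmid_valid(old_rmid) && !__rmid_valid(rmid)) {
                struct rmid_read rr;

                /* Read the leader's count off the dying RMID. */
                rr = __init_rr(old_rmid, group->attr.config, 0);
                cqm_mask_call(&rr);
                local64_set(&group->count, atomic64_read(&rr.value));

                /* ... and do the same for every other event in the group. */
                list_for_each_entry(event, head, hw.cqm_group_entry) {
                        if (!event->hw.is_group_event)
                                continue;

                        rr = __init_rr(old_rmid, event->attr.config, 0);
                        cqm_mask_call(&rr);
                        local64_set(&event->count, atomic64_read(&rr.value));
                }
        }

Inverting the is_group_event test keeps the loop body flat, which is the
usual style for these list walks.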
Thread overview: 21+ messages
2016-04-23 0:27 [PATCH V1 0/4] Urgent fixes for Intel CQM/MBM counting Vikas Shivappa
2016-04-23 0:27 ` [PATCH 1/4] perf/x86/cqm,mbm: Store cqm,mbm count for all events when RMID is recycled Vikas Shivappa
2016-04-25 9:20   ` Peter Zijlstra [this message]
2016-04-25 16:26     ` Vikas Shivappa
2016-04-23 0:27 ` [PATCH 2/4] perf/x86/mbm: Store bytes counted for mbm during recycle Vikas Shivappa
2016-04-25 9:13   ` Peter Zijlstra
2016-04-25 18:04     ` Vikas Shivappa
2016-04-25 20:02       ` Peter Zijlstra
2016-04-25 21:12         ` Vikas Shivappa
2016-05-03 7:46           ` Peter Zijlstra
2016-05-04 0:00             ` Vikas Shivappa
2016-04-23 0:27 ` [PATCH 3/4] perf/x86/mbm: Fix mbm counting when RMIDs are reused Vikas Shivappa
2016-04-25 9:16   ` Peter Zijlstra
2016-04-25 16:44     ` Vikas Shivappa
2016-04-25 20:05       ` Peter Zijlstra
2016-04-25 21:43         ` Vikas Shivappa
2016-04-25 21:49           ` Vikas Shivappa
2016-04-23 0:27 ` [PATCH 4/4] perf/x86/cqm: Support cqm/mbm only for perf events Vikas Shivappa
2016-04-25 9:18   ` Peter Zijlstra
2016-04-25 16:23     ` Luck, Tony
2016-04-25 20:08       ` Peter Zijlstra