* [PATCH] perf/x86/intel: Fix: Use u64 for limit_period
@ 2018-03-01 17:54 kan.liang
  2018-03-09  9:07 ` [tip:perf/core] perf/x86/intel: Fix large period handling on Broadwell CPUs tip-bot for Kan Liang
From: kan.liang @ 2018-03-01 17:54 UTC
  To: peterz, mingo, linux-kernel; +Cc: ak, Kan Liang

From: Kan Liang <kan.liang@linux.intel.com>

A large fixed period can be truncated on Broadwell. For example,
perf record -e cycles -c 10000000000 requests a fixed period of
0x2540BE400, but the period that is finally applied is 0x540BE400.

This happens because the limit_period callback takes and returns an
'unsigned', which is 32 bits wide, so the upper 32 bits of the period
are truncated.
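
As a standalone illustration (not part of this patch; the function and
variable names below are made up), passing a 64-bit period through a
32-bit 'unsigned' parameter drops the upper 32 bits:

  #include <stdio.h>

  /* Illustration only: a 32-bit 'unsigned' parameter silently drops
   * the upper 32 bits of a large period value.
   */
  static unsigned truncating_limit_period(unsigned left)
  {
          return left;
  }

  int main(void)
  {
          unsigned long long period = 10000000000ULL;   /* 0x2540BE400 */
          unsigned long long applied = truncating_limit_period(period);

          printf("requested: 0x%llx\n", period);   /* 0x2540be400 */
          printf("applied:   0x%llx\n", applied);  /* 0x540be400 */
          return 0;
  }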

The issue was introduced by commit 294fe0f52a44 ("perf/x86/intel: Add
INST_RETIRED.ALL workarounds").

Although 'left' is an s64, its value is always positive when the
limit_period callback is invoked. bdw_limit_period() only modifies the
lowest 6 bits and never touches the upper 32 bits, so it is safe to
replace 'unsigned' with u64.
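
For reference, here is a minimal user-space sketch of that rounding,
assuming the workaround simply clears the lowest 6 bits (the exact
in-kernel implementation may differ); with a u64 prototype the upper
32 bits pass through unchanged:

  #include <stdint.h>

  typedef uint64_t u64;

  /* Sketch only: clear the lowest 6 bits of the requested period, as
   * described above; bits 6-63, including the upper 32 bits, are
   * returned unchanged, which is why widening the prototype to u64
   * is safe.
   */
  static u64 sketch_bdw_limit_period(u64 left)
  {
          return left & ~0x3fULL;
  }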

Fixes: 294fe0f52a44 ("perf/x86/intel: Add INST_RETIRED.ALL workarounds")
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
---
 arch/x86/events/intel/core.c | 2 +-
 arch/x86/events/perf_event.h | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 916b6e6..8e722e4 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -3222,7 +3222,7 @@ glp_get_event_constraints(struct cpu_hw_events *cpuc, int idx,
  * Therefore the effective (average) period matches the requested period,
  * despite coarser hardware granularity.
  */
-static unsigned bdw_limit_period(struct perf_event *event, unsigned left)
+static u64 bdw_limit_period(struct perf_event *event, u64 left)
 {
 	if ((event->hw.config & INTEL_ARCH_EVENT_MASK) ==
 			X86_CONFIG(.event=0xc0, .umask=0x01)) {
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index b5352f1..810013d 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -561,7 +561,7 @@ struct x86_pmu {
 	struct x86_pmu_quirk *quirks;
 	int		perfctr_second_write;
 	bool		late_ack;
-	unsigned	(*limit_period)(struct perf_event *event, unsigned l);
+	u64		(*limit_period)(struct perf_event *event, u64 l);
 
 	/*
 	 * sysfs attrs
-- 
2.4.11
