public inbox for linux-kernel@vger.kernel.org
* [PATCH RFC] perf: x86: Improve accuracy of perf/sched clock
@ 2015-07-24 13:39 Adrian Hunter
  2015-07-24 15:03 ` Peter Zijlstra
  0 siblings, 1 reply; 2+ messages in thread
From: Adrian Hunter @ 2015-07-24 13:39 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar
  Cc: Arnaldo Carvalho de Melo, Andy Lutomirski, Thomas Gleixner,
	linux-kernel, Stephane Eranian, Andi Kleen

When the TSC is stable, the perf/sched clock is based on it.
However, the conversion from cycles to nanoseconds
is not as accurate as it could be: because
CYC2NS_SCALE_FACTOR is 10, the accuracy is +/- 1/2048.

While I am not aware of any compelling reason to make the
perf/sched clock more accurate, I am not so sure there is
much reason for it to be inaccurate either.  Hence this RFC.

The change is to calculate the maximum shift that
still results in a 32-bit multiplier.  For example,
all frequencies over 1 GHz will have a shift of 32,
making the accuracy of the conversion +/- 1/(2^33).

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 arch/x86/kernel/tsc.c | 26 ++++++++++++++------------
 1 file changed, 14 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
index 505449700e0c..8e6e6b584db8 100644
--- a/arch/x86/kernel/tsc.c
+++ b/arch/x86/kernel/tsc.c
@@ -215,14 +215,14 @@ static inline unsigned long long cycles_2_ns(unsigned long long cyc)
 
 	if (likely(data == tail)) {
 		ns = data->cyc2ns_offset;
-		ns += mul_u64_u32_shr(cyc, data->cyc2ns_mul, CYC2NS_SCALE_FACTOR);
+		ns += mul_u64_u32_shr(cyc, data->cyc2ns_mul, data->cyc2ns_shift);
 	} else {
 		data->__count++;
 
 		barrier();
 
 		ns = data->cyc2ns_offset;
-		ns += mul_u64_u32_shr(cyc, data->cyc2ns_mul, CYC2NS_SCALE_FACTOR);
+		ns += mul_u64_u32_shr(cyc, data->cyc2ns_mul, data->cyc2ns_shift);
 
 		barrier();
 
@@ -239,6 +239,8 @@ static void set_cyc2ns_scale(unsigned long cpu_khz, int cpu)
 	unsigned long long tsc_now, ns_now;
 	struct cyc2ns_data *data;
 	unsigned long flags;
+	u64 mult;
+	u32 shft = 32;
 
 	local_irq_save(flags);
 	sched_clock_idle_sleep_event();
@@ -251,17 +253,17 @@ static void set_cyc2ns_scale(unsigned long cpu_khz, int cpu)
 	rdtscll(tsc_now);
 	ns_now = cycles_2_ns(tsc_now);
 
-	/*
-	 * Compute a new multiplier as per the above comment and ensure our
-	 * time function is continuous; see the comment near struct
-	 * cyc2ns_data.
-	 */
-	data->cyc2ns_mul =
-		DIV_ROUND_CLOSEST(NSEC_PER_MSEC << CYC2NS_SCALE_FACTOR,
-				  cpu_khz);
-	data->cyc2ns_shift = CYC2NS_SCALE_FACTOR;
+	mult = (u64)NSEC_PER_MSEC << 32;
+	mult += cpu_khz / 2;
+	do_div(mult, cpu_khz);
+	while (mult > U32_MAX) {
+		mult >>= 1;
+		shft -= 1;
+	}
+	data->cyc2ns_mul = mult;
+	data->cyc2ns_shift = shft;
 	data->cyc2ns_offset = ns_now -
-		mul_u64_u32_shr(tsc_now, data->cyc2ns_mul, CYC2NS_SCALE_FACTOR);
+		mul_u64_u32_shr(tsc_now, data->cyc2ns_mul, data->cyc2ns_shift);
 
 	cyc2ns_write_end(cpu, data);
 
-- 
1.9.1

