From: Adrian Hunter
Date: Mon, 17 Aug 2015 10:34:03 +0300
To: Peter Zijlstra
Cc: Adrian Hunter, Ingo Molnar, Arnaldo Carvalho de Melo, Andy Lutomirski, Thomas Gleixner, linux-kernel@vger.kernel.org, Stephane Eranian, Andi Kleen
Subject: Re: [PATCH V2] perf: x86: Improve accuracy of perf/sched clock

On 29/07/15 00:14, Adrian Hunter wrote:
> When TSC is stable, perf/sched clock is based on it.
> However, the conversion from cycles to nanoseconds
> is not as accurate as it could be: because
> CYC2NS_SCALE_FACTOR is 10, the accuracy is +/- 1/2048.
>
> The change is to calculate the maximum shift that
> results in a multiplier that is still a 32-bit number.
> For example, all frequencies over 1 GHz will have
> a shift of 32, making the accuracy of the conversion
> +/- 1/(2^33).
>
> Signed-off-by: Adrian Hunter

Is this OK?
> ---
>  arch/x86/kernel/tsc.c | 33 ++++++++++++++++++++-------------
>  1 file changed, 20 insertions(+), 13 deletions(-)
>
> diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
> index 7437b41f6a47..e7085bcfb06b 100644
> --- a/arch/x86/kernel/tsc.c
> +++ b/arch/x86/kernel/tsc.c
> @@ -167,21 +167,21 @@ static void cyc2ns_write_end(int cpu, struct cyc2ns_data *data)
>   * ns = cycles * cyc2ns_scale / SC
>   *
>   * And since SC is a constant power of two, we can convert the div
> - * into a shift.
> + * into a shift. The larger SC is, the more accurate the conversion, but
> + * cyc2ns_scale needs to be a 32-bit value so that 32-bit multiplication
> + * (64-bit result) can be used. So start by trying SC = 2^32, reducing
> + * until the criteria are met.
>   *
> - * We can use khz divisor instead of mhz to keep a better precision, since
> - * cyc2ns_scale is limited to 10^6 * 2^10, which fits in 32 bits.
> + * We can use khz divisor instead of mhz to keep a better precision.
>   * (mathieu.desnoyers@polymtl.ca)
>   *
>   * -johnstul@us.ibm.com "math is hard, lets go shopping!"
>   */
>
> -#define CYC2NS_SCALE_FACTOR 10 /* 2^10, carefully chosen */
> -
>  static void cyc2ns_data_init(struct cyc2ns_data *data)
>  {
>  	data->cyc2ns_mul = 0;
> -	data->cyc2ns_shift = CYC2NS_SCALE_FACTOR;
> +	data->cyc2ns_shift = 0;
>  	data->cyc2ns_offset = 0;
>  	data->__count = 0;
>  }
> @@ -215,14 +215,14 @@ static inline unsigned long long cycles_2_ns(unsigned long long cyc)
>
>  	if (likely(data == tail)) {
>  		ns = data->cyc2ns_offset;
> -		ns += mul_u64_u32_shr(cyc, data->cyc2ns_mul, CYC2NS_SCALE_FACTOR);
> +		ns += mul_u64_u32_shr(cyc, data->cyc2ns_mul, data->cyc2ns_shift);
>  	} else {
>  		data->__count++;
>
>  		barrier();
>
>  		ns = data->cyc2ns_offset;
> -		ns += mul_u64_u32_shr(cyc, data->cyc2ns_mul, CYC2NS_SCALE_FACTOR);
> +		ns += mul_u64_u32_shr(cyc, data->cyc2ns_mul, data->cyc2ns_shift);
>
>  		barrier();
>
> @@ -239,6 +239,8 @@ static void set_cyc2ns_scale(unsigned long cpu_khz, int cpu)
>  	unsigned long long tsc_now, ns_now;
>  	struct cyc2ns_data *data;
>  	unsigned long flags;
> +	u64 mult;
> +	u32 shft = 32;
>
>  	local_irq_save(flags);
>  	sched_clock_idle_sleep_event();
> @@ -256,12 +258,17 @@ static void set_cyc2ns_scale(unsigned long cpu_khz, int cpu)
>  	 * time function is continuous; see the comment near struct
>  	 * cyc2ns_data.
>  	 */
> -	data->cyc2ns_mul =
> -		DIV_ROUND_CLOSEST(NSEC_PER_MSEC << CYC2NS_SCALE_FACTOR,
> -				  cpu_khz);
> -	data->cyc2ns_shift = CYC2NS_SCALE_FACTOR;
> +	mult = (u64)NSEC_PER_MSEC << 32;
> +	mult += cpu_khz / 2;
> +	do_div(mult, cpu_khz);
> +	while (mult > U32_MAX) {
> +		mult >>= 1;
> +		shft -= 1;
> +	}
> +	data->cyc2ns_mul = mult;
> +	data->cyc2ns_shift = shft;
>  	data->cyc2ns_offset = ns_now -
> -		mul_u64_u32_shr(tsc_now, data->cyc2ns_mul, CYC2NS_SCALE_FACTOR);
> +		mul_u64_u32_shr(tsc_now, data->cyc2ns_mul, data->cyc2ns_shift);
>
>  	cyc2ns_write_end(cpu, data);
>