From mboxrd@z Thu Jan  1 00:00:00 1970
From: Andi Kleen
To: linux-kernel@vger.kernel.org
Cc: x86@kernel.org, a.p.zijlstra@chello.nl, eranian@google.com,
	acme@redhat.com, Andi Kleen
Subject: [PATCH 16/31] perf, x86: Support full width counting on Haswell
Date: Thu, 27 Sep 2012 21:31:21 -0700
Message-Id: <1348806696-31170-17-git-send-email-andi@firstfloor.org>
X-Mailer: git-send-email 1.7.7.6
In-Reply-To: <1348806696-31170-1-git-send-email-andi@firstfloor.org>
References: <1348806696-31170-1-git-send-email-andi@firstfloor.org>

From: Andi Kleen

Haswell has a new alternative MSR range for perfctrs that allows
writing the full counter width.  Enable this range if the hardware
reports it using a new capability bit.  (The legacy perfctr MSR range
only accepts writes to the low 31 bits; the value is sign-extended to
the full counter width.)

This lowers the overhead of perf stat slightly because it needs to
take fewer interrupts to accumulate the counter value.  It also avoids
some problems with TSX aborting when the end of the counter range is
reached.

Signed-off-by: Andi Kleen
---
 arch/x86/include/asm/msr-index.h       |    3 +++
 arch/x86/kernel/cpu/perf_event.h       |    1 +
 arch/x86/kernel/cpu/perf_event_intel.c |    6 ++++++
 3 files changed, 10 insertions(+), 0 deletions(-)

diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 957ec87..cbf344f 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -121,6 +121,9 @@
 #define MSR_P6_EVNTSEL0			0x00000186
 #define MSR_P6_EVNTSEL1			0x00000187
 
+/* Alternative perfctr range with full access. */
+#define MSR_IA32_PMC0			0x000004c1
+
 /* AMD64 MSRs. Not complete. See the architecture manual for a more
    complete list. */
 
diff --git a/arch/x86/kernel/cpu/perf_event.h b/arch/x86/kernel/cpu/perf_event.h
index 8200c69..8550601 100644
--- a/arch/x86/kernel/cpu/perf_event.h
+++ b/arch/x86/kernel/cpu/perf_event.h
@@ -282,6 +282,7 @@ union perf_capabilities {
 		u64	pebs_arch_reg:1;
 		u64	pebs_format:4;
 		u64	smm_freeze:1;
+		u64	fw_write:1;
 	};
 	u64	capabilities;
 };
diff --git a/arch/x86/kernel/cpu/perf_event_intel.c b/arch/x86/kernel/cpu/perf_event_intel.c
index cd48669..e302186 100644
--- a/arch/x86/kernel/cpu/perf_event_intel.c
+++ b/arch/x86/kernel/cpu/perf_event_intel.c
@@ -2188,5 +2188,11 @@ __init int intel_pmu_init(void)
 		}
 	}
 
+	/* Support full width counters using alternative MSR range */
+	if (x86_pmu.intel_cap.fw_write) {
+		x86_pmu.max_period = x86_pmu.cntval_mask;
+		x86_pmu.perfctr = MSR_IA32_PMC0;
+	}
+
 	return 0;
 }
-- 
1.7.7.6
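
For reference, the fw_write bit added to union perf_capabilities above
lands at bit 13 of IA32_PERF_CAPABILITIES (MSR 0x345), after
lbr_format:6, pebs_trap:1, pebs_arch_reg:1, pebs_format:4 and
smm_freeze:1.  A minimal user-space sketch (not part of the patch) that
checks the same bit through the msr driver; it assumes root and a
loaded msr module (modprobe msr), and checks only CPU 0:

/*
 * fw_write_check.c - report whether this CPU advertises full-width
 * counter writes (IA32_PERF_CAPABILITIES bit 13).
 *
 * Build/run: gcc -o fw_write_check fw_write_check.c && sudo ./fw_write_check
 */
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>

#define MSR_IA32_PERF_CAPABILITIES	0x345
#define PERF_CAP_FW_WRITE		(1ULL << 13)	/* fw_write above */

int main(void)
{
	uint64_t caps;
	int fd = open("/dev/cpu/0/msr", O_RDONLY);

	if (fd < 0) {
		perror("open /dev/cpu/0/msr");
		return 1;
	}
	/* The msr driver interprets the file offset as the MSR index. */
	if (pread(fd, &caps, sizeof(caps), MSR_IA32_PERF_CAPABILITIES)
	    != sizeof(caps)) {
		perror("rdmsr IA32_PERF_CAPABILITIES");
		close(fd);
		return 1;
	}
	close(fd);
	printf("full width counter writes: %ssupported\n",
	       (caps & PERF_CAP_FW_WRITE) ? "" : "not ");
	return 0;
}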
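
On the max_period change: with legacy writes the kernel caps the
period at 2^31 - 1, since anything larger cannot be written back
reliably, while with fw_write the cap becomes cntval_mask (2^48 - 1 on
Haswell's 48-bit counters).  Each time the programmed period expires,
the counter overflows and takes a PMI, so a larger maximum period means
fewer interrupts for the same total count.  A rough sketch of that
arithmetic, with an illustrative (not measured) event count:

/*
 * period_math.c - illustrative only: overflow interrupts needed to
 * accumulate a large event count with the legacy 31-bit maximum
 * period versus the full 48-bit width of a Haswell counter.
 */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t events = 1ULL << 40;		/* hypothetical event count */
	uint64_t legacy = (1ULL << 31) - 1;	/* old x86_pmu.max_period */
	uint64_t fw     = (1ULL << 48) - 1;	/* cntval_mask on Haswell */

	printf("legacy writes:     ~%llu interrupts\n",
	       (unsigned long long)(events / legacy + 1));
	printf("full width writes: ~%llu interrupts\n",
	       (unsigned long long)(events / fw + 1));
	return 0;
}

For 2^40 events this prints roughly 513 interrupts for the legacy path
versus a single one with full-width writes.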