Date: Mon, 29 Feb 2016 15:39:35 +0000
From: Will Deacon
To: Jan Glauber
Cc: Mark Rutland, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, marc.zyngier@arm.com
Subject: Re: [PATCH v4 4/5] arm64/perf: Enable PMCR long cycle counter bit
Message-ID: <20160229153934.GB14848@arm.com>
In-Reply-To: <467597048eda3004bd69f1fbe3981aab111e00dd.1455810755.git.jglauber@cavium.com>

Hi Jan,

I've queued this lot on my perf/updates branch, but I just noticed an
oddity whilst dealing with some potential conflicts with the kvm tree.

On Thu, Feb 18, 2016 at 05:50:13PM +0100, Jan Glauber wrote:
> With the long cycle counter bit (LC) disabled the cycle counter is not
> working on ThunderX SOC (ThunderX only implements Aarch64).
> Also, according to documentation LC == 0 is deprecated.
>
> To keep the code simple the patch does not introduce 64 bit wide counter
> functions. Instead, writing the cycle counter always sets the upper
> 32 bits so overflow interrupts are generated as before.
>
> Original patch from Andrew Pinski
>
> Signed-off-by: Jan Glauber
> ---
>  arch/arm64/kernel/perf_event.c | 21 ++++++++++++++++-----
>  1 file changed, 16 insertions(+), 5 deletions(-)
>
> diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
> index 0ed05f6..c68fa98 100644
> --- a/arch/arm64/kernel/perf_event.c
> +++ b/arch/arm64/kernel/perf_event.c
> @@ -405,6 +405,7 @@ static const struct attribute_group *armv8_pmuv3_attr_groups[] = {
>  #define ARMV8_PMCR_D		(1 << 3) /* CCNT counts every 64th cpu cycle */
>  #define ARMV8_PMCR_X		(1 << 4) /* Export to ETM */
>  #define ARMV8_PMCR_DP		(1 << 5) /* Disable CCNT if non-invasive debug */
> +#define ARMV8_PMCR_LC		(1 << 6) /* Overflow on 64 bit cycle counter */
>  #define ARMV8_PMCR_N_SHIFT	11	 /* Number of counters supported */
>  #define ARMV8_PMCR_N_MASK	0x1f
>  #define ARMV8_PMCR_MASK		0x3f	 /* Mask for writable bits */

You haven't extended this mask to cover the LC bit, so it will be
ignored by armv8pmu_pmcr_write afaict. How did you test this?

I can easily update the mask, but it would be good to know that it
doesn't end up causing a breakage.

Will