Date: Fri, 28 Jul 2023 14:33:34 +0100
From: Mark Rutland
To: Xu Yang
Cc: frank.li@nxp.com, will@kernel.org, shawnguo@kernel.org, s.hauer@pengutronix.de, kernel@pengutronix.de, linux-imx@nxp.com, linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH v2 1/3] perf/imx_ddr: speed up overflow frequency of cycle counter
References: <20230713103758.2627269-1-xu.yang_2@nxp.com>
In-Reply-To: <20230713103758.2627269-1-xu.yang_2@nxp.com>

On Thu, Jul 13, 2023 at 06:37:56PM +0800, Xu Yang wrote:
> For i.MX8MP, we cannot ensure that the cycle counter overflows at least
> 4 times as often as the other events. Since the byte counters count for
> any configured event, they overflow more often, and when a byte counter
> overflows, the related counters stop, since they all share COUNTER_CNTL.
> We can speed up the overflow frequency of the cycle counter by setting
> its counter parameter (CP) field. This way, the byte counters no longer
> stop counting when no interrupt has arrived, and they can be fetched or
> updated from each cycle counter overflow interrupt.
>
> Signed-off-by: Xu Yang
>
> ---
> Changes in v2:
> - improve if condition
> ---
>  drivers/perf/fsl_imx8_ddr_perf.c | 15 +++++++++++++++
>  1 file changed, 15 insertions(+)
>
> diff --git a/drivers/perf/fsl_imx8_ddr_perf.c b/drivers/perf/fsl_imx8_ddr_perf.c
> index 5222ba1e79d0..039069756bbc 100644
> --- a/drivers/perf/fsl_imx8_ddr_perf.c
> +++ b/drivers/perf/fsl_imx8_ddr_perf.c
> @@ -28,6 +28,8 @@
>  #define CNTL_CLEAR_MASK		0xFFFFFFFD
>  #define CNTL_OVER_MASK		0xFFFFFFFE
>
> +#define CNTL_CP_SHIFT		16
> +#define CNTL_CP_MASK		(0xFF << CNTL_CP_SHIFT)
>  #define CNTL_CSV_SHIFT		24
>  #define CNTL_CSV_MASK		(0xFFU << CNTL_CSV_SHIFT)
>
> @@ -427,6 +429,19 @@ static void ddr_perf_counter_enable(struct ddr_pmu *pmu, int config,
>  		writel(0, pmu->base + reg);
>  		val = CNTL_EN | CNTL_CLEAR;
>  		val |= FIELD_PREP(CNTL_CSV_MASK, config);
> +
> +		/*
> +		 * Workaround for i.MX8MP:
> +		 * Common counters and byte counters share the same COUNTER_CNTL,
> +		 * and byte counters could overflow before cycle counter. Need set
> +		 * counter parameter(CP) of cycle counter to give it initial value
> +		 * which can speed up cycle counter overflow frequency.
> +		 */

From the comments on patch 2, it sounds like this "counter parameter" sets
bits [31:28] of the counter value, is that correct?

Assuming so, could we please update this comment to say:

	/*
	 * On i.MX8MP we need to bias the cycle counter to overflow more often.
	 * We do this by initializing bits [31:28] of the counter value via the
	 * COUNTER_CTRL Counter Parameter (CP) field.
	 *
	 * See ddr_perf_counter_enable() for more details.
	 */

Thanks,
Mark.
> +		if (pmu->devtype_data->quirks & DDR_CAP_AXI_ID_FILTER_ENHANCED) {
> +			if (counter == EVENT_CYCLES_COUNTER)
> +				val |= FIELD_PREP(CNTL_CP_MASK, 0xf0);
> +		}
> +
>  		writel(val, pmu->base + reg);
>  	} else {
>  		/* Disable counter */
> --
> 2.34.1
>

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel