From: Thierry Reding <thierry.reding@gmail.com>
To: Sumit Gupta <sumitg@nvidia.com>
Cc: treding@nvidia.com, krzysztof.kozlowski@linaro.org,
	dmitry.osipenko@collabora.com, viresh.kumar@linaro.org,
	rafael@kernel.org, jonathanh@nvidia.com, robh+dt@kernel.org,
	linux-kernel@vger.kernel.org, linux-tegra@vger.kernel.org,
	linux-pm@vger.kernel.org, devicetree@vger.kernel.org,
	sanjayc@nvidia.com, ksitaraman@nvidia.com, ishah@nvidia.com,
	bbasu@nvidia.com
Subject: Re: [Patch v1 06/10] arm64: tegra: Add cpu OPP tables and interconnects property
Date: Mon, 16 Jan 2023 17:29:02 +0100
Message-ID: <Y8V7TkAiEWizF70l@orome>
In-Reply-To: <20221220160240.27494-7-sumitg@nvidia.com>

On Tue, Dec 20, 2022 at 09:32:36PM +0530, Sumit Gupta wrote:
> Add the OPP tables and interconnects properties required to scale the
> DDR frequency for better performance. Each operating point entry in an
> OPP table maps a CPU frequency to the bandwidth to be requested per MC
> channel. A separate table is added for each cluster, even though the
> table data is identical, because the bandwidth request is made per
> cluster: the OPP framework creates a single icc path if a table marked
> 'opp-shared' is shared among all clusters. In our case the OPP table
> contents are the same, but the MC client ID argument in each cluster's
> interconnects property differs, which yields a distinct icc path for
> every cluster.
> 
> Signed-off-by: Sumit Gupta <sumitg@nvidia.com>
> ---
>  arch/arm64/boot/dts/nvidia/tegra234.dtsi | 276 +++++++++++++++++++++++
>  1 file changed, 276 insertions(+)
> 
> diff --git a/arch/arm64/boot/dts/nvidia/tegra234.dtsi b/arch/arm64/boot/dts/nvidia/tegra234.dtsi
> index eaf05ee9acd1..ed7d0f7da431 100644
> --- a/arch/arm64/boot/dts/nvidia/tegra234.dtsi
> +++ b/arch/arm64/boot/dts/nvidia/tegra234.dtsi
> @@ -2840,6 +2840,9 @@
>  
>  			enable-method = "psci";
>  
> +			operating-points-v2 = <&cl0_opp_tbl>;
> +			interconnects = <&mc TEGRA_ICC_MC_CPU_CLUSTER0 &emc>;
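
For reference, each entry in these per-cluster tables pairs a CPU
frequency with the bandwidth to request per MC channel, roughly of the
following shape (the node name and the frequency/bandwidth values here
are illustrative, not taken from the patch):

cl0_opp_tbl: opp-table-cluster0 {
	compatible = "operating-points-v2";

	opp-1190400000 {
		opp-hz = /bits/ 64 <1190400000>;
		opp-peak-kBps = <816000>;
	};

	/* ... one such entry per supported CPU frequency ... */
};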

I dislike how this muddies the waters between hardware and software
description. We don't have a hardware client ID for the CPU clusters, so
there's no good way to describe this in a hardware-centric way. We used
to have MPCORE read and write clients for this, but as far as I know
they applied to the entire CCPLEX rather than to individual clusters. It
would be interesting to know what the BPMP does underneath; perhaps that
could give some indication as to what a better hardware value for this
would be.

Failing that, I wonder if a combination of icc_node_create() and
icc_get() can be used for this type of "virtual node" special case.
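
Something like the following rough, untested sketch of that idea (the
function and parameter names are made up for illustration; presumably
the virtual node would be registered by the memory controller's
interconnect provider, which would pick the global IDs for the cluster
and EMC nodes):

#include <linux/err.h>
#include <linux/interconnect.h>
#include <linux/interconnect-provider.h>

static struct icc_path *tegra_cpu_cluster_icc_path(struct device *dev,
						   struct icc_provider *provider,
						   int cluster_id, int emc_id)
{
	struct icc_node *node;
	int err;

	/* Create a software-only source node representing the CPU cluster. */
	node = icc_node_create(cluster_id);
	if (IS_ERR(node))
		return ERR_CAST(node);

	/* Register the node with the memory controller's provider. */
	icc_node_add(node, provider);

	/* Link it to the EMC node so the core can route a path through it. */
	err = icc_link_create(node, emc_id);
	if (err) {
		icc_node_destroy(node->id);
		return ERR_PTR(err);
	}

	/* Look up the path by global node IDs, bypassing the DT lookup. */
	return icc_get(dev, cluster_id, emc_id);
}

The cpufreq driver could then call icc_set_bw() on the returned path
whenever the cluster's frequency changes.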

Thierry


Thread overview: 53+ messages
2022-12-20 16:02 [Patch v1 00/10] Tegra234 Memory interconnect support Sumit Gupta
2022-12-20 16:02 ` [Patch v1 01/10] memory: tegra: add interconnect support for DRAM scaling in Tegra234 Sumit Gupta
2022-12-20 18:05   ` Dmitry Osipenko
2022-12-21  7:53     ` Sumit Gupta
2022-12-20 18:06   ` Dmitry Osipenko
2022-12-21  7:54     ` Sumit Gupta
2022-12-20 18:07   ` Dmitry Osipenko
2022-12-21  8:05     ` Sumit Gupta
2022-12-21 16:44       ` Dmitry Osipenko
2023-01-17 13:03         ` Sumit Gupta
2022-12-20 18:10   ` Dmitry Osipenko
2022-12-21  9:35     ` Sumit Gupta
2022-12-21 16:43       ` Dmitry Osipenko
2023-01-13 12:15         ` Sumit Gupta
2022-12-21  0:55   ` Dmitry Osipenko
2022-12-21  8:07     ` Sumit Gupta
2022-12-21 16:54   ` Dmitry Osipenko
2023-01-13 12:25     ` Sumit Gupta
2022-12-21 19:17   ` Dmitry Osipenko
2022-12-21 19:20   ` Dmitry Osipenko
2022-12-22 15:56     ` Dmitry Osipenko
2023-01-13 12:35       ` Sumit Gupta
2023-01-13 12:40     ` Sumit Gupta
2022-12-21 19:43   ` Dmitry Osipenko
2022-12-22 11:32   ` Krzysztof Kozlowski
2023-03-06 19:28     ` Sumit Gupta
2022-12-20 16:02 ` [Patch v1 02/10] memory: tegra: adding iso mc clients for Tegra234 Sumit Gupta
2022-12-20 16:02 ` [Patch v1 03/10] memory: tegra: add pcie " Sumit Gupta
2022-12-22 11:33   ` Krzysztof Kozlowski
2023-01-13 14:51     ` Sumit Gupta
2022-12-20 16:02 ` [Patch v1 04/10] memory: tegra: add support for software mc clients in Tegra234 Sumit Gupta
2022-12-22 11:36   ` Krzysztof Kozlowski
2023-03-06 19:41     ` Sumit Gupta
2022-12-20 16:02 ` [Patch v1 05/10] dt-bindings: tegra: add icc ids for dummy MC clients Sumit Gupta
2022-12-22 11:29   ` Krzysztof Kozlowski
2023-01-13 14:44     ` Sumit Gupta
2023-01-13 17:11   ` Krzysztof Kozlowski
2022-12-20 16:02 ` [Patch v1 06/10] arm64: tegra: Add cpu OPP tables and interconnects property Sumit Gupta
2023-01-16 16:29   ` Thierry Reding [this message]
2022-12-20 16:02 ` [Patch v1 07/10] cpufreq: Add Tegra234 to cpufreq-dt-platdev blocklist Sumit Gupta
2022-12-21  5:01   ` Viresh Kumar
2022-12-20 16:02 ` [Patch v1 08/10] cpufreq: tegra194: add OPP support and set bandwidth Sumit Gupta
2022-12-22 15:46   ` Dmitry Osipenko
2023-01-13 13:50     ` Sumit Gupta
2023-01-16 12:16       ` Dmitry Osipenko
2023-01-19 10:26         ` Thierry Reding
2023-01-19 13:01           ` Dmitry Osipenko
2023-02-06 13:31             ` Sumit Gupta
2022-12-20 16:02 ` [Patch v1 09/10] memory: tegra: get number of enabled mc channels Sumit Gupta
2022-12-22 11:37   ` Krzysztof Kozlowski
2023-01-13 15:04     ` Sumit Gupta
2023-01-16 16:30       ` Thierry Reding
2022-12-20 16:02 ` [Patch v1 10/10] memory: tegra: make cluster bw request a multiple of mc_channels Sumit Gupta
