From: Sumit Gupta <sumitg@nvidia.com>
To: Dmitry Osipenko <digetx@gmail.com>, <treding@nvidia.com>,
	<krzysztof.kozlowski@linaro.org>, <dmitry.osipenko@collabora.com>,
	<viresh.kumar@linaro.org>, <rafael@kernel.org>,
	<jonathanh@nvidia.com>, <robh+dt@kernel.org>,
	<linux-kernel@vger.kernel.org>, <linux-tegra@vger.kernel.org>,
	<linux-pm@vger.kernel.org>, <devicetree@vger.kernel.org>
Cc: <sanjayc@nvidia.com>, <ksitaraman@nvidia.com>, <ishah@nvidia.com>,
	<bbasu@nvidia.com>, Sumit Gupta <sumitg@nvidia.com>,
	Rajkumar Kasirajan <rkasirajan@nvidia.com>
Subject: Re: [Patch v1 08/10] cpufreq: tegra194: add OPP support and set bandwidth
Date: Fri, 13 Jan 2023 19:20:41 +0530
Message-ID: <8e6d7dd3-1bdc-ee4b-0c1e-1ae9cd8e4f29@nvidia.com>
In-Reply-To: <4e3e4485-ba22-eb47-fb95-e8d626160bc6@gmail.com>



On 22/12/22 21:16, Dmitry Osipenko wrote:
> 20.12.2022 19:02, Sumit Gupta wrote:
>> Add support to use the OPP table from DT in the Tegra194 cpufreq driver.
>> Tegra SoCs receive the frequency lookup table (LUT) from BPMP-FW.
>> Cross-check the OPPs present in DT against the LUT from BPMP-FW and
>> enable only those DT OPPs which are also present in the LUT.
>>
>> The OPP table in DT has a CPU frequency to bandwidth mapping, where
>> the bandwidth value is per MC channel. DRAM bandwidth depends on the
>> number of MC channels, which can vary as per the boot configuration.
>> This per-channel bandwidth from the OPP table will later be converted
>> by the MC driver to the final bandwidth value by multiplying it with
>> the number of channels before sending the request to BPMP-FW.
>>
>> If the OPP table is not present in DT, then use the LUT from BPMP-FW
>> directly as the frequency table and do not do the DRAM frequency
>> scaling, which is the same as the current behavior.
>>
>> Now that the CPU frequency table is controlled through the OPP table
>> in DT, keeping fewer entries in the table will create fewer frequency
>> steps and let it scale fast to high frequencies when required.
> 
> It's not exactly clear what you're doing here. Are you going to scale
> memory BW based on CPU freq? If yes, then this is wrong because CPU freq
> is independent of the memory subsystem.
> 
> All Tegra30+ SoCs have an ACTMON hardware unit that monitors CPU memory
> activity, and CPU memory BW should be scaled based on the CPU memory
> events counter. We have the ACTMON devfreq driver for older SoCs. I have
> no clue how ACTMON can be accessed on T186+; perhaps there should be a
> BPMP-FW API for that.
> 

Yes, we are scaling the memory BW based on the CPU frequency.
I referred to the patch set below for the previous generation of Tegra
SoCs which you mentioned and tried to trace the history.

https://patchwork.ozlabs.org/project/linux-tegra/patch/1418719298-25314-3-git-send-email-tomeu.vizoso@collabora.com/

In the newer Tegra SoCs, ACTMON counter control and usage have been moved
to BPMP-FW, where only the 'MCALL' counter is used and 'MCCPU' is not.
Using the ACTMON counter was a reactive way to scale the frequency, which
is less effective due to averaging over a time period. We are now using a
proactive approach where clients state their bandwidth needs, which helps
achieve better performance.
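
For illustration only, here is a minimal sketch of how that proactive flow
could look in the cpufreq driver. This is not the actual patch:
freq_in_lut() is a hypothetical helper that checks whether a frequency is
present in the BPMP-FW LUT, and the sketch assumes the DT OPP entries carry
"opp-peak-kBps" values with an interconnect path to the MC already set up
for the CPU device.

#include <linux/err.h>
#include <linux/pm_opp.h>

/* Keep only those DT OPPs whose frequency is also in the BPMP-FW LUT. */
static void tegra_cpu_filter_opps(struct device *cpu_dev,
                                  const unsigned long *lut, int lut_len)
{
        struct dev_pm_opp *opp;
        unsigned long freq = 0;

        while (!IS_ERR(opp = dev_pm_opp_find_freq_ceil(cpu_dev, &freq))) {
                if (!freq_in_lut(lut, lut_len, freq)) /* hypothetical helper */
                        dev_pm_opp_disable(cpu_dev, freq);
                dev_pm_opp_put(opp);
                freq++;
        }
}

/*
 * On a CPU frequency change, program the matching OPP so that its
 * per-MC-channel opp-peak-kBps value is applied to the interconnect path.
 * The MC driver then multiplies it by the number of enabled channels
 * before sending the final bandwidth request to BPMP-FW.
 */
static int tegra_cpu_set_bw(struct device *cpu_dev, unsigned long freq_hz)
{
        struct dev_pm_opp *opp;
        int ret;

        opp = dev_pm_opp_find_freq_ceil(cpu_dev, &freq_hz);
        if (IS_ERR(opp))
                return PTR_ERR(opp);

        ret = dev_pm_opp_set_opp(cpu_dev, opp);
        dev_pm_opp_put(opp);

        return ret;
}

In the driver, something like tegra_cpu_set_bw() would be called from the
frequency-set path alongside the request that changes the CPU clock, so the
bandwidth vote follows the CPU frequency instead of reacting to ACTMON
averages.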
