From: Dmitry Osipenko <dmitry.osipenko@collabora.com>
To: Thierry Reding <thierry.reding@gmail.com>
Cc: Sumit Gupta <sumitg@nvidia.com>,
Dmitry Osipenko <digetx@gmail.com>,
treding@nvidia.com, krzysztof.kozlowski@linaro.org,
viresh.kumar@linaro.org, rafael@kernel.org, jonathanh@nvidia.com,
robh+dt@kernel.org, linux-kernel@vger.kernel.org,
linux-tegra@vger.kernel.org, linux-pm@vger.kernel.org,
devicetree@vger.kernel.org, sanjayc@nvidia.com,
ksitaraman@nvidia.com, ishah@nvidia.com, bbasu@nvidia.com,
Rajkumar Kasirajan <rkasirajan@nvidia.com>
Subject: Re: [Patch v1 08/10] cpufreq: tegra194: add OPP support and set bandwidth
Date: Thu, 19 Jan 2023 16:01:34 +0300
Message-ID: <65e596c9-a472-d0b8-ec2f-6935d647ddb2@collabora.com>
In-Reply-To: <Y8kay0/AmjqhU2jO@orome>
On 1/19/23 13:26, Thierry Reding wrote:
> On Mon, Jan 16, 2023 at 03:16:48PM +0300, Dmitry Osipenko wrote:
>> On 1/13/23 16:50, Sumit Gupta wrote:
>>>
>>>
>>> On 22/12/22 21:16, Dmitry Osipenko wrote:
>>>>
>>>> 20.12.2022 19:02, Sumit Gupta пишет:
>>>>> Add support to use the OPP table from DT in the Tegra194 cpufreq driver.
>>>>> Tegra SoCs receive the frequency lookup table (LUT) from BPMP-FW.
>>>>> Cross-check the OPPs present in DT against the LUT from BPMP-FW
>>>>> and enable only those DT OPPs which are also present in the LUT.
>>>>>
>>>>> The OPP table in DT has a CPU frequency to bandwidth mapping where
>>>>> the bandwidth value is per MC channel. DRAM bandwidth depends on the
>>>>> number of MC channels, which can vary as per the boot configuration.
>>>>> This per-channel bandwidth from the OPP table will later be converted
>>>>> by the MC driver to the final bandwidth value by multiplying it with
>>>>> the number of channels before sending the request to BPMP-FW.
>>>>>
>>>>> If the OPP table is not present in DT, then use the LUT from BPMP-FW
>>>>> directly as the frequency table and do not do DRAM frequency scaling,
>>>>> which is the same as the current behavior.
>>>>>
>>>>> Now the CPU frequency table is controlled through the OPP table in
>>>>> DT. Keeping fewer entries in the table will create fewer frequency
>>>>> steps and scale faster to high frequencies if required.
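
To make the cross-check described above concrete: it presumably boils
down to something like the sketch below, built on the generic OPP
helpers. The function and parameter names here (cross_check_dt_opps,
lut_freqs, lut_len) are illustrative guesses, not the actual patch code,
and both frequency lists are assumed to be in Hz.

#include <linux/err.h>
#include <linux/pm_opp.h>

/* Disable every DT OPP whose frequency is absent from the BPMP-FW LUT,
 * so that only frequencies known to the firmware remain selectable.
 */
static void cross_check_dt_opps(struct device *cpu_dev,
                                const unsigned long *lut_freqs,
                                unsigned int lut_len)
{
        struct dev_pm_opp *opp;
        unsigned long freq;

        for (freq = 0;
             !IS_ERR(opp = dev_pm_opp_find_freq_ceil(cpu_dev, &freq));
             freq++) {
                bool in_lut = false;
                unsigned int i;

                for (i = 0; i < lut_len; i++) {
                        if (lut_freqs[i] == freq) {
                                in_lut = true;
                                break;
                        }
                }

                if (!in_lut)
                        dev_pm_opp_disable(cpu_dev, freq);

                dev_pm_opp_put(opp);
        }
}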
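Similarly, the per-channel to total bandwidth conversion in the MC
driver presumably amounts to a single multiply before the interconnect
request goes out towards BPMP-FW. Another rough sketch; request_dram_bw,
path and num_channels are placeholders, not names taken from the series:

#include <linux/interconnect.h>
#include <linux/types.h>

/* The OPP table carries bandwidth per MC channel; scale it up to the
 * full DRAM bandwidth before handing the request to BPMP-FW.
 */
static int request_dram_bw(struct icc_path *path,
                           u32 per_channel_peak_kbps,
                           unsigned int num_channels)
{
        u32 total_peak_kbps = per_channel_peak_kbps * num_channels;

        /* avg_bw left at 0; only the peak request matters here */
        return icc_set_bw(path, 0, total_peak_kbps);
}
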
>>>>
>>>> It's not exactly clear what you're doing here. Are you going to scale
>>>> memory BW based on CPU freq? If yes, then this is wrong because CPU freq
>>>> is independent of the memory subsystem.
>>>>
>>>> All Tegra30+ SoCs have an ACTMON hardware unit that monitors CPU memory
>>>> activity, and CPU memory BW should be scaled based on the CPU memory
>>>> events counter. We have an ACTMON devfreq driver for older SoCs. I have
>>>> no clue how ACTMON can be accessed on T186+; perhaps there should be a
>>>> BPMP FW API for that.
>>>>
>>>
>>> Yes, we are scaling the memory BW based on CPU freq.
>>> I referred to the patch set below for the previous generation of Tegra
>>> SoCs which you mentioned, and tried to trace the history.
>>>
>>> https://patchwork.ozlabs.org/project/linux-tegra/patch/1418719298-25314-3-git-send-email-tomeu.vizoso@collabora.com/
>>>
>>> In newer Tegra SoCs, ACTMON counter control and usage has been moved to
>>> BPMP-FW, where only the 'MCALL' counter is used and 'MCCPU' is not used.
>>> Using the ACTMON counter was a reactive way to scale the frequency, which
>>> is less effective due to averaging over a time period.
>>> We are now using a proactive way where clients tell their bandwidth
>>> needs to help achieve better performance.
>>
>> You don't know what bandwidth the CPU needs; you're trying to guess it.
>>
>> It would be a bad decision to use cpufreq for memory bandwidth scaling.
>> You'll be wasting memory power 90% of the time because cpufreq has no
>> relation to the DRAM; your heuristics will be wrong and won't do
>> anything good compared to using ACTMON. The L2 CPU cache + memory
>> prefetching hides memory from the CPU. And cpufreq should be less
>> reactive than ACTMON in general.
>>
>> Scaling memory freq based on cpufreq is what the downstream NV kernel did
>> 10+ years ago for the oldest Tegra generations. Today upstream has all
>> the necessary infrastructure for doing memory bandwidth scaling properly,
>> and we are even using h/w memory counters on T20. It's strange that you
>> want to bring this downstream archaism to modern upstream for the latest
>> Tegra generations.
>>
>> If you can skip the BPMP-FW and use ACTMON directly from the kernel, then
>> that's what I suggest doing.
>
> After talking to a few people, it turns out that BPMP is already using
> ACTMON internally to do the actual scaling of the EMC frequency (or the
> CPU's contribution to that). So BPMP will use ACTMON counters to monitor
> the effective memory load of the CPU and adjust the EMC frequency. The
> bandwidth request that we generate from the cpufreq driver is more of a
> guideline for the maximum bandwidth we might consume.
Our kernel ACTMON driver uses cpufreq for guiding the EMC freq. Driving
the EMC rate solely based on cpufreq would not be a good decision. So does
this mean you're now going to extend the ACTMON driver with BPMP support?
--
Best regards,
Dmitry
Thread overview: 53+ messages
2022-12-20 16:02 [Patch v1 00/10] Tegra234 Memory interconnect support Sumit Gupta
2022-12-20 16:02 ` [Patch v1 01/10] memory: tegra: add interconnect support for DRAM scaling in Tegra234 Sumit Gupta
2022-12-20 18:05 ` Dmitry Osipenko
2022-12-21 7:53 ` Sumit Gupta
2022-12-20 18:06 ` Dmitry Osipenko
2022-12-21 7:54 ` Sumit Gupta
2022-12-20 18:07 ` Dmitry Osipenko
2022-12-21 8:05 ` Sumit Gupta
2022-12-21 16:44 ` Dmitry Osipenko
2023-01-17 13:03 ` Sumit Gupta
2022-12-20 18:10 ` Dmitry Osipenko
2022-12-21 9:35 ` Sumit Gupta
2022-12-21 16:43 ` Dmitry Osipenko
2023-01-13 12:15 ` Sumit Gupta
2022-12-21 0:55 ` Dmitry Osipenko
2022-12-21 8:07 ` Sumit Gupta
2022-12-21 16:54 ` Dmitry Osipenko
2023-01-13 12:25 ` Sumit Gupta
2022-12-21 19:17 ` Dmitry Osipenko
2022-12-21 19:20 ` Dmitry Osipenko
2022-12-22 15:56 ` Dmitry Osipenko
2023-01-13 12:35 ` Sumit Gupta
2023-01-13 12:40 ` Sumit Gupta
2022-12-21 19:43 ` Dmitry Osipenko
2022-12-22 11:32 ` Krzysztof Kozlowski
2023-03-06 19:28 ` Sumit Gupta
2022-12-20 16:02 ` [Patch v1 02/10] memory: tegra: adding iso mc clients for Tegra234 Sumit Gupta
2022-12-20 16:02 ` [Patch v1 03/10] memory: tegra: add pcie " Sumit Gupta
2022-12-22 11:33 ` Krzysztof Kozlowski
2023-01-13 14:51 ` Sumit Gupta
2022-12-20 16:02 ` [Patch v1 04/10] memory: tegra: add support for software mc clients in Tegra234 Sumit Gupta
2022-12-22 11:36 ` Krzysztof Kozlowski
2023-03-06 19:41 ` Sumit Gupta
2022-12-20 16:02 ` [Patch v1 05/10] dt-bindings: tegra: add icc ids for dummy MC clients Sumit Gupta
2022-12-22 11:29 ` Krzysztof Kozlowski
2023-01-13 14:44 ` Sumit Gupta
2023-01-13 17:11 ` Krzysztof Kozlowski
2022-12-20 16:02 ` [Patch v1 06/10] arm64: tegra: Add cpu OPP tables and interconnects property Sumit Gupta
2023-01-16 16:29 ` Thierry Reding
2022-12-20 16:02 ` [Patch v1 07/10] cpufreq: Add Tegra234 to cpufreq-dt-platdev blocklist Sumit Gupta
2022-12-21 5:01 ` Viresh Kumar
2022-12-20 16:02 ` [Patch v1 08/10] cpufreq: tegra194: add OPP support and set bandwidth Sumit Gupta
2022-12-22 15:46 ` Dmitry Osipenko
2023-01-13 13:50 ` Sumit Gupta
2023-01-16 12:16 ` Dmitry Osipenko
2023-01-19 10:26 ` Thierry Reding
2023-01-19 13:01 ` Dmitry Osipenko [this message]
2023-02-06 13:31 ` Sumit Gupta
2022-12-20 16:02 ` [Patch v1 09/10] memory: tegra: get number of enabled mc channels Sumit Gupta
2022-12-22 11:37 ` Krzysztof Kozlowski
2023-01-13 15:04 ` Sumit Gupta
2023-01-16 16:30 ` Thierry Reding
2022-12-20 16:02 ` [Patch v1 10/10] memory: tegra: make cluster bw request a multiple of mc_channels Sumit Gupta