From: Sumit Gupta <sumitg@nvidia.com>
To: "zhenglifeng (A)" <zhenglifeng1@huawei.com>,
rafael@kernel.org, viresh.kumar@linaro.org, lenb@kernel.org,
robert.moore@intel.com, corbet@lwn.net, pierre.gondois@arm.com,
rdunlap@infradead.org, ray.huang@amd.com, gautham.shenoy@amd.com,
mario.limonciello@amd.com, perry.yuan@amd.com,
ionela.voinescu@arm.com, zhanjie9@hisilicon.com,
linux-pm@vger.kernel.org, linux-acpi@vger.kernel.org,
linux-doc@vger.kernel.org, acpica-devel@lists.linux.dev,
linux-kernel@vger.kernel.org
Cc: linux-tegra@vger.kernel.org, treding@nvidia.com,
jonathanh@nvidia.com, vsethi@nvidia.com, ksitaraman@nvidia.com,
sanjayc@nvidia.com, nhartman@nvidia.com, bbasu@nvidia.com,
sumitg@nvidia.com
Subject: Re: [PATCH v5 08/11] cpufreq: CPPC: sync policy limits when updating min/max_perf
Date: Thu, 8 Jan 2026 19:23:54 +0530
Message-ID: <2e0e7b5d-e424-4a45-9783-178a1af24ccc@nvidia.com>
In-Reply-To: <9ea62a14-46a1-4238-97ed-aeabf9f3ab77@huawei.com>

On 25/12/25 19:26, zhenglifeng (A) wrote:
> On 2025/12/23 20:13, Sumit Gupta wrote:
>> When min_perf or max_perf is updated via sysfs in autonomous mode, the
>> policy frequency limits should also be updated to reflect the new
>> performance bounds.
>>
>> Add @update_policy parameter to cppc_cpufreq_set_mperf_limit() to
>> control whether policy constraints are synced with HW registers.
>> The policy is updated only when autonomous selection is enabled to
>> keep SW limits in sync with HW.
>>
>> This ensures that scaling_min_freq and scaling_max_freq values remain
>> consistent with the actual min/max_perf register values when operating
>> in autonomous mode.
>>
>> Signed-off-by: Sumit Gupta <sumitg@nvidia.com>
>> ---
>> drivers/cpufreq/cppc_cpufreq.c | 35 ++++++++++++++++++++++++++--------
>> 1 file changed, 27 insertions(+), 8 deletions(-)
>>
>> diff --git a/drivers/cpufreq/cppc_cpufreq.c b/drivers/cpufreq/cppc_cpufreq.c
>> index 1f8825006940..0202c7b823e6 100644
>> --- a/drivers/cpufreq/cppc_cpufreq.c
>> +++ b/drivers/cpufreq/cppc_cpufreq.c
>> @@ -544,14 +544,20 @@ static void populate_efficiency_class(void)
>> * cppc_cpufreq_set_mperf_limit - Set min/max performance limit
>> * @policy: cpufreq policy
>> * @val: performance value to set
>> + * @update_policy: whether to update policy constraints
>> * @is_min: true for min_perf, false for max_perf
>> + *
>> + * When @update_policy is true, updates cpufreq policy frequency limits.
>> + * @update_policy is false during cpu_init when policy isn't fully set up.
>> */
>> static int cppc_cpufreq_set_mperf_limit(struct cpufreq_policy *policy, u64 val,
>> - bool is_min)
>> + bool update_policy, bool is_min)
>> {
>> struct cppc_cpudata *cpu_data = policy->driver_data;
>> struct cppc_perf_caps *caps = &cpu_data->perf_caps;
>> unsigned int cpu = policy->cpu;
>> + struct freq_qos_request *req;
>> + unsigned int freq;
>> u32 perf;
>> int ret;
>>
>> @@ -571,15 +577,26 @@ static int cppc_cpufreq_set_mperf_limit(struct cpufreq_policy *policy, u64 val,
>> else
>> cpu_data->perf_ctrls.max_perf = perf;
>>
>> + if (update_policy) {
>> + freq = cppc_perf_to_khz(caps, perf);
>> + req = is_min ? policy->min_freq_req : policy->max_freq_req;
>> +
>> + ret = freq_qos_update_request(req, freq);
>> + if (ret < 0) {
>> + pr_warn("Failed to update %s_freq constraint for CPU%d: %d\n",
>> + is_min ? "min" : "max", cpu, ret);
>> + return ret;
>> + }
>> + }
>> +
> OK. Now I see the necessity of extracting this function. But why not use
> freq_khz as an input parameter and convert it to perf in this function,
> since you need the freq here?
That would still require calling cppc_perf_to_khz() so that the policy
reflects what the HW actually delivers. Otherwise, the two could become
asymmetric.
Also, the clamping is done on perf values. So, if the user provides a
very high freq value, that raw value would be passed to freq_qos while
the HW register holds the clamped perf value, which doesn't match the
qos constraint.
Either way, the conversion chain is:
  freq_to_perf -> clamp perf -> set perf -> perf_to_freq -> set qos
It's just a matter of where we place the logic.
Thank you,
Sumit Gupta
>> return 0;
>> }
>>
>> -#define cppc_cpufreq_set_min_perf(policy, val) \
>> - cppc_cpufreq_set_mperf_limit(policy, val, true)
>> -
>> -#define cppc_cpufreq_set_max_perf(policy, val) \
>> - cppc_cpufreq_set_mperf_limit(policy, val, false)
>> +#define cppc_cpufreq_set_min_perf(policy, val, update_policy) \
>> + cppc_cpufreq_set_mperf_limit(policy, val, update_policy, true)
>>
>> +#define cppc_cpufreq_set_max_perf(policy, val, update_policy) \
>> + cppc_cpufreq_set_mperf_limit(policy, val, update_policy, false)
>> static struct cppc_cpudata *cppc_cpufreq_get_cpu_data(unsigned int cpu)
>> {
>> struct cppc_cpudata *cpu_data;
>> @@ -988,7 +1005,8 @@ static ssize_t store_min_perf(struct cpufreq_policy *policy, const char *buf,
>> perf = cppc_khz_to_perf(&cpu_data->perf_caps, freq_khz);
>>
>> guard(mutex)(&cppc_cpufreq_update_autosel_config_lock);
>> - ret = cppc_cpufreq_set_min_perf(policy, perf);
>> + ret = cppc_cpufreq_set_min_perf(policy, perf,
>> + cpu_data->perf_ctrls.auto_sel);
>> if (ret)
>> return ret;
>>
>> @@ -1045,7 +1063,8 @@ static ssize_t store_max_perf(struct cpufreq_policy *policy, const char *buf,
>> perf = cppc_khz_to_perf(&cpu_data->perf_caps, freq_khz);
>>
>> guard(mutex)(&cppc_cpufreq_update_autosel_config_lock);
>> - ret = cppc_cpufreq_set_max_perf(policy, perf);
>> + ret = cppc_cpufreq_set_max_perf(policy, perf,
>> + cpu_data->perf_ctrls.auto_sel);
>> if (ret)
>> return ret;
>>