From: Sumit Gupta <sumitg@nvidia.com>
To: "zhenglifeng (A)" <zhenglifeng1@huawei.com>
Cc: rafael@kernel.org, viresh.kumar@linaro.org,
pierre.gondois@arm.com, ionela.voinescu@arm.com, lenb@kernel.org,
robert.moore@intel.com, corbet@lwn.net, rdunlap@infradead.org,
ray.huang@amd.com, gautham.shenoy@amd.com,
mario.limonciello@amd.com, perry.yuan@amd.com,
zhanjie9@hisilicon.com, linux-pm@vger.kernel.org,
linux-acpi@vger.kernel.org, linux-doc@vger.kernel.org,
acpica-devel@lists.linux.dev, linux-kernel@vger.kernel.org,
linux-tegra@vger.kernel.org, treding@nvidia.com,
jonathanh@nvidia.com, vsethi@nvidia.com, ksitaraman@nvidia.com,
sanjayc@nvidia.com, nhartman@nvidia.com, bbasu@nvidia.com,
sumitg@nvidia.com
Subject: Re: [PATCH v6 5/9] ACPI: CPPC: Extend cppc_set_epp_perf() for FFH/SystemMemory
Date: Tue, 27 Jan 2026 16:47:51 +0530 [thread overview]
Message-ID: <abf9889b-d5e7-4c4b-b166-a5dde2425ab8@nvidia.com> (raw)
In-Reply-To: <7e86cdbe-f16c-4fe8-92c5-e6fb89f49811@huawei.com>
>>> On 2026/1/20 22:56, Sumit Gupta wrote:
>>>> Extend cppc_set_epp_perf() to write both auto_sel and energy_perf
>>>> registers when they are in FFH or SystemMemory address space.
>>>>
>>>> This keeps the behavior consistent with PCC case where both registers
>>>> are already updated together, but was missing for FFH/SystemMemory.
>>>>
>>>> Signed-off-by: Sumit Gupta <sumitg@nvidia.com>
>>>> ---
>>>> drivers/acpi/cppc_acpi.c | 24 +++++++++++++++++++++---
>>>> 1 file changed, 21 insertions(+), 3 deletions(-)
>>>>
>>>> diff --git a/drivers/acpi/cppc_acpi.c b/drivers/acpi/cppc_acpi.c
>>>> index de35aeb07833..45c6bd6ec24b 100644
>>>> --- a/drivers/acpi/cppc_acpi.c
>>>> +++ b/drivers/acpi/cppc_acpi.c
>>>> @@ -1562,6 +1562,8 @@ int cppc_set_epp_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls, bool enable)
>>>> struct cpc_register_resource *auto_sel_reg;
>>>> struct cpc_desc *cpc_desc = per_cpu(cpc_desc_ptr, cpu);
>>>> struct cppc_pcc_data *pcc_ss_data = NULL;
>>>> + bool autosel_ffh_sysmem;
>>>> + bool epp_ffh_sysmem;
>>>> int ret;
>>>>
>>>> if (!cpc_desc) {
>>>> @@ -1572,6 +1574,11 @@ int cppc_set_epp_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls, bool enable)
>>>> auto_sel_reg = &cpc_desc->cpc_regs[AUTO_SEL_ENABLE];
>>>> epp_set_reg = &cpc_desc->cpc_regs[ENERGY_PERF];
>>>>
>>>> + epp_ffh_sysmem = CPC_SUPPORTED(epp_set_reg) &&
>>>> + (CPC_IN_FFH(epp_set_reg) || CPC_IN_SYSTEM_MEMORY(epp_set_reg));
>>>> + autosel_ffh_sysmem = CPC_SUPPORTED(auto_sel_reg) &&
>>>> + (CPC_IN_FFH(auto_sel_reg) || CPC_IN_SYSTEM_MEMORY(auto_sel_reg));
>>>> +
>>>> if (CPC_IN_PCC(epp_set_reg) || CPC_IN_PCC(auto_sel_reg)) {
>>>> if (pcc_ss_id < 0) {
>>>> pr_debug("Invalid pcc_ss_id for CPU:%d\n", cpu);
>>>> @@ -1597,11 +1604,22 @@ int cppc_set_epp_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls, bool enable)
>>>> ret = send_pcc_cmd(pcc_ss_id, CMD_WRITE);
>>>> up_write(&pcc_ss_data->pcc_lock);
>>>> } else if (osc_cpc_flexible_adr_space_confirmed &&
>>>> - CPC_SUPPORTED(epp_set_reg) && CPC_IN_FFH(epp_set_reg)) {
>>>> - ret = cpc_write(cpu, epp_set_reg, perf_ctrls->energy_perf);
>>>> + (epp_ffh_sysmem || autosel_ffh_sysmem)) {
>>>> + if (autosel_ffh_sysmem) {
>>>> + ret = cpc_write(cpu, auto_sel_reg, enable);
>>>> + if (ret)
>>>> + return ret;
>>>> + }
>>>> +
>>>> + if (epp_ffh_sysmem) {
>>>> + ret = cpc_write(cpu, epp_set_reg,
>>>> + perf_ctrls->energy_perf);
>>>> + if (ret)
>>>> + return ret;
>>>> + }
>>> Don't know if such a scenario exists, but if one of them is in PCC and the
>>> other is in FFH or system memory, only the one in PCC will be updated
>>> based on your modifications.
>> The current code handles mixed cases correctly.
>> When either register is in PCC, the first if block executes and calls
>> cpc_write() for both registers. cpc_write() internally handles
>> each register's address space type (PCC, FFH, or SystemMemory).
> Yes, I was wrong.
>
> According to the first if block, cpc_write() is OK to be called for a
> register not in PCC. So it looks like this 'else if' is unnecessary. Only
> CPC_SUPPORTED needs to be checked before calling cpc_write(), doesn't it?
Yes. Once the 'osc_cpc_flexible_adr_space_confirmed' check is removed,
cppc_set_epp_perf() can be simplified to just call cpc_write() for the
supported registers and only do PCC handling when needed.
As Pierre suggested [1], I will send a separate patch set for this
cleanup after the current one.
[1]
https://lore.kernel.org/all/c3fd7249-3cba-43e9-85c6-eadd711c0527@nvidia.com/
Thank you,
Sumit Gupta
....
Thread overview: 38+ messages (latest: 2026-01-27 11:18 UTC)
2026-01-20 14:56 [PATCH v6 0/9] Enhanced autonomous selection and improvements Sumit Gupta
2026-01-20 14:56 ` [PATCH v6 1/9] cpufreq: CPPC: Add generic helpers for sysfs show/store Sumit Gupta
2026-01-22 8:27 ` zhenglifeng (A)
2026-01-27 16:24 ` Rafael J. Wysocki
2026-01-27 19:01 ` Sumit Gupta
2026-01-27 20:17 ` Rafael J. Wysocki
2026-01-20 14:56 ` [PATCH v6 2/9] ACPI: CPPC: Clean up cppc_perf_caps and cppc_perf_ctrls structs Sumit Gupta
2026-01-22 8:28 ` zhenglifeng (A)
2026-01-27 16:27 ` Rafael J. Wysocki
2026-01-27 19:11 ` Sumit Gupta
2026-01-20 14:56 ` [PATCH v6 3/9] ACPI: CPPC: Rename EPP constants for clarity Sumit Gupta
2026-01-22 8:31 ` zhenglifeng (A)
2026-01-20 14:56 ` [PATCH v6 4/9] ACPI: CPPC: Add cppc_get_perf() API to read performance controls Sumit Gupta
2026-01-22 8:56 ` zhenglifeng (A)
2026-01-22 11:30 ` Pierre Gondois
2026-01-22 11:42 ` zhenglifeng (A)
2026-01-24 20:05 ` Sumit Gupta
2026-01-24 20:19 ` Sumit Gupta
2026-01-26 11:20 ` Pierre Gondois
2026-01-27 11:08 ` Sumit Gupta
2026-01-20 14:56 ` [PATCH v6 5/9] ACPI: CPPC: Extend cppc_set_epp_perf() for FFH/SystemMemory Sumit Gupta
2026-01-22 9:18 ` zhenglifeng (A)
2026-01-24 20:08 ` Sumit Gupta
2026-01-26 8:10 ` zhenglifeng (A)
2026-01-27 11:17 ` Sumit Gupta [this message]
2026-01-20 14:56 ` [PATCH v6 6/9] ACPI: CPPC: add APIs and sysfs interface for min/max_perf Sumit Gupta
2026-01-22 11:36 ` Pierre Gondois
2026-01-24 20:32 ` Sumit Gupta
2026-01-26 10:51 ` Pierre Gondois
2026-01-27 11:22 ` Sumit Gupta
2026-01-22 12:35 ` zhenglifeng (A)
2026-01-24 20:52 ` Sumit Gupta
2026-01-20 14:56 ` [PATCH v6 7/9] ACPI: CPPC: add APIs and sysfs interface for perf_limited Sumit Gupta
2026-01-22 11:51 ` Pierre Gondois
2026-01-24 21:04 ` Sumit Gupta
2026-01-26 11:23 ` Pierre Gondois
2026-01-20 14:56 ` [PATCH v6 8/9] cpufreq: CPPC: Add sysfs for min/max_perf and perf_limited Sumit Gupta
2026-01-20 14:56 ` [PATCH v6 9/9] cpufreq: CPPC: Update cached perf_ctrls on sysfs write Sumit Gupta