* [PATCH v5 00/19] amd-pstate cleanups
@ 2025-02-26 7:49 Mario Limonciello
2025-02-26 7:49 ` [PATCH v5 01/19] cpufreq/amd-pstate: Invalidate cppc_req_cached during suspend Mario Limonciello
` (18 more replies)
0 siblings, 19 replies; 27+ messages in thread
From: Mario Limonciello @ 2025-02-26 7:49 UTC (permalink / raw)
To: Gautham R . Shenoy, Perry Yuan
Cc: Dhananjay Ugwekar, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
open list:CPU FREQUENCY SCALING FRAMEWORK, Mario Limonciello
From: Mario Limonciello <mario.limonciello@amd.com>
This series overhauls locking and drops many unnecessarily cached
variables.
Debugging messages are also dropped in favor of more ftrace coverage.
This series is based on the superm1/linux.git bleeding-edge branch.
Mario Limonciello (19):
cpufreq/amd-pstate: Invalidate cppc_req_cached during suspend
cpufreq/amd-pstate: Show a warning when a CPU fails to setup
cpufreq/amd-pstate: Drop min and max cached frequencies
cpufreq/amd-pstate: Move perf values into a union
cpufreq/amd-pstate: Overhaul locking
cpufreq/amd-pstate: Drop `cppc_cap1_cached`
cpufreq/amd-pstate-ut: Use _free macro to free put policy
cpufreq/amd-pstate-ut: Allow lowest nonlinear and lowest to be the
same
cpufreq/amd-pstate-ut: Drop SUCCESS and FAIL enums
cpufreq/amd-pstate-ut: Run on all of the correct CPUs
cpufreq/amd-pstate-ut: Adjust variable scope
cpufreq/amd-pstate: Replace all AMD_CPPC_* macros with masks
cpufreq/amd-pstate: Cache CPPC request in shared mem case too
cpufreq/amd-pstate: Move all EPP tracing into *_update_perf and
*_set_epp functions
cpufreq/amd-pstate: Update cppc_req_cached for shared mem EPP writes
cpufreq/amd-pstate: Drop debug statements for policy setting
cpufreq/amd-pstate: Rework CPPC enabling
cpufreq/amd-pstate: Stop caching EPP
cpufreq/amd-pstate: Drop actions in amd_pstate_epp_cpu_offline()
arch/x86/include/asm/msr-index.h | 20 +-
arch/x86/kernel/acpi/cppc.c | 4 +-
drivers/cpufreq/amd-pstate-trace.h | 13 +-
drivers/cpufreq/amd-pstate-ut.c | 211 +++++------
drivers/cpufreq/amd-pstate.c | 574 +++++++++++++----------------
drivers/cpufreq/amd-pstate.h | 63 ++--
6 files changed, 395 insertions(+), 490 deletions(-)
--
2.43.0
* [PATCH v5 01/19] cpufreq/amd-pstate: Invalidate cppc_req_cached during suspend
2025-02-26 7:49 [PATCH v5 00/19] amd-pstate cleanups Mario Limonciello
@ 2025-02-26 7:49 ` Mario Limonciello
2025-02-26 7:49 ` [PATCH v5 02/19] cpufreq/amd-pstate: Show a warning when a CPU fails to setup Mario Limonciello
` (17 subsequent siblings)
18 siblings, 0 replies; 27+ messages in thread
From: Mario Limonciello @ 2025-02-26 7:49 UTC (permalink / raw)
To: Gautham R . Shenoy, Perry Yuan
Cc: Dhananjay Ugwekar, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
open list:CPU FREQUENCY SCALING FRAMEWORK, Mario Limonciello,
Dhananjay Ugwekar, Miroslav Pavleski
From: Mario Limonciello <mario.limonciello@amd.com>
During resume it's possible the firmware didn't restore the CPPC request
MSR, while the kernel still believes the cached value matches it. This
leads to incorrect performance after resume from suspend.
To fix the issue, invalidate the cached value at suspend so that during
resume the MSR is rewritten from the limits saved in the cache.
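For context, a simplified sketch of the caching pattern the fix relies
on (not the literal driver code): requests are only written out when
they differ from the cache, so a zeroed cache forces the first
post-resume request to reach the MSR.

	/* simplified; the driver skips redundant MSR writes */
	if (value != READ_ONCE(cpudata->cppc_req_cached)) {
		wrmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, value);
		WRITE_ONCE(cpudata->cppc_req_cached, value);
	}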
Reviewed-by: Gautham R. Shenoy <gautham.shenoy@amd.com>
Reviewed-by: Dhananjay Ugwekar <dhananjay.ugwekar@amd.com>
Reported-by: Miroslav Pavleski <miroslav@pavleski.net>
Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217931
Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
---
drivers/cpufreq/amd-pstate.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
index dc38ef4c8f725..a093389a8fe3e 100644
--- a/drivers/cpufreq/amd-pstate.c
+++ b/drivers/cpufreq/amd-pstate.c
@@ -1611,7 +1611,7 @@ static int amd_pstate_epp_reenable(struct cpufreq_policy *policy)
max_perf, policy->boost_enabled);
}
- return amd_pstate_update_perf(cpudata, 0, 0, max_perf, cpudata->epp_cached, false);
+ return amd_pstate_epp_update_limit(policy);
}
static int amd_pstate_epp_cpu_online(struct cpufreq_policy *policy)
@@ -1660,6 +1660,9 @@ static int amd_pstate_epp_suspend(struct cpufreq_policy *policy)
if (cppc_state != AMD_PSTATE_ACTIVE)
return 0;
+ /* invalidate to ensure it's rewritten during resume */
+ cpudata->cppc_req_cached = 0;
+
/* set this flag to avoid setting core offline*/
cpudata->suspended = true;
--
2.43.0
* [PATCH v5 02/19] cpufreq/amd-pstate: Show a warning when a CPU fails to setup
2025-02-26 7:49 [PATCH v5 00/19] amd-pstate cleanups Mario Limonciello
2025-02-26 7:49 ` [PATCH v5 01/19] cpufreq/amd-pstate: Invalidate cppc_req_cached during suspend Mario Limonciello
@ 2025-02-26 7:49 ` Mario Limonciello
2025-03-18 7:58 ` Paul Menzel
2025-02-26 7:49 ` [PATCH v5 03/19] cpufreq/amd-pstate: Drop min and max cached frequencies Mario Limonciello
` (16 subsequent siblings)
18 siblings, 1 reply; 27+ messages in thread
From: Mario Limonciello @ 2025-02-26 7:49 UTC (permalink / raw)
To: Gautham R . Shenoy, Perry Yuan
Cc: Dhananjay Ugwekar, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
open list:CPU FREQUENCY SCALING FRAMEWORK, Mario Limonciello,
Dhananjay Ugwekar
From: Mario Limonciello <mario.limonciello@amd.com>
I came across a system where MSR_AMD_CPPC_CAP1 isn't populated for some
CPUs. This is unexpected behavior that is most likely a BIOS bug. In
the event it happens, I'd like users to report bugs so the issue can be
properly root-caused and fixed.
Reviewed-by: Gautham R. Shenoy <gautham.shenoy@amd.com>
Reviewed-by: Dhananjay Ugwekar <dhananjay.ugwekar@amd.com>
Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
---
drivers/cpufreq/amd-pstate.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
index a093389a8fe3e..1b98f5d76894d 100644
--- a/drivers/cpufreq/amd-pstate.c
+++ b/drivers/cpufreq/amd-pstate.c
@@ -1034,6 +1034,7 @@ static int amd_pstate_cpu_init(struct cpufreq_policy *policy)
free_cpudata2:
freq_qos_remove_request(&cpudata->req[0]);
free_cpudata1:
+ pr_warn("Failed to initialize CPU %d: %d\n", policy->cpu, ret);
kfree(cpudata);
return ret;
}
@@ -1527,6 +1528,7 @@ static int amd_pstate_epp_cpu_init(struct cpufreq_policy *policy)
return 0;
free_cpudata1:
+ pr_warn("Failed to initialize CPU %d: %d\n", policy->cpu, ret);
kfree(cpudata);
return ret;
}
--
2.43.0
* [PATCH v5 03/19] cpufreq/amd-pstate: Drop min and max cached frequencies
2025-02-26 7:49 [PATCH v5 00/19] amd-pstate cleanups Mario Limonciello
2025-02-26 7:49 ` [PATCH v5 01/19] cpufreq/amd-pstate: Invalidate cppc_req_cached during suspend Mario Limonciello
2025-02-26 7:49 ` [PATCH v5 02/19] cpufreq/amd-pstate: Show a warning when a CPU fails to setup Mario Limonciello
@ 2025-02-26 7:49 ` Mario Limonciello
2025-02-26 7:49 ` [PATCH v5 04/19] cpufreq/amd-pstate: Move perf values into a union Mario Limonciello
` (15 subsequent siblings)
18 siblings, 0 replies; 27+ messages in thread
From: Mario Limonciello @ 2025-02-26 7:49 UTC (permalink / raw)
To: Gautham R . Shenoy, Perry Yuan
Cc: Dhananjay Ugwekar, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
open list:CPU FREQUENCY SCALING FRAMEWORK, Mario Limonciello,
Dhananjay Ugwekar
From: Mario Limonciello <mario.limonciello@amd.com>
Use the perf_to_freq helpers to calculate these on the fly instead.
As the members are no longer cached, add an extra check to
amd_pstate_epp_update_limit() to avoid unnecessary calls to
amd_pstate_update_min_max_limit().
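For reference, the helper scales linearly through the nominal operating
point; the example values in the comment below are illustrative only,
not taken from any specific part:

	static inline u32 perf_to_freq(struct amd_cpudata *cpudata, u8 perf_val)
	{
		return DIV_ROUND_UP_ULL((u64)cpudata->nominal_freq * perf_val,
					cpudata->nominal_perf);
	}

	/* e.g. with nominal_freq = 2800000 kHz and nominal_perf = 160,
	 * perf_to_freq(cpudata, 40) returns 700000 kHz
	 */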
Reviewed-by: Gautham R. Shenoy <gautham.shenoy@amd.com>
Reviewed-by: Dhananjay Ugwekar <dhananjay.ugwekar@amd.com>
Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
---
v5:
* add tag
* Update commit message
v4:
* Avoid some unnecessary changes to amd_pstate_init_freq()
* Add tag
v3:
* Fix calc error for min_freq
v2:
* Keep cached limits
---
drivers/cpufreq/amd-pstate-ut.c | 14 +++++------
drivers/cpufreq/amd-pstate.c | 43 +++++++++------------------------
drivers/cpufreq/amd-pstate.h | 9 ++-----
3 files changed, 20 insertions(+), 46 deletions(-)
diff --git a/drivers/cpufreq/amd-pstate-ut.c b/drivers/cpufreq/amd-pstate-ut.c
index 3a0a380c3590c..445278cf40b61 100644
--- a/drivers/cpufreq/amd-pstate-ut.c
+++ b/drivers/cpufreq/amd-pstate-ut.c
@@ -214,14 +214,14 @@ static void amd_pstate_ut_check_freq(u32 index)
break;
cpudata = policy->driver_data;
- if (!((cpudata->max_freq >= cpudata->nominal_freq) &&
+ if (!((policy->cpuinfo.max_freq >= cpudata->nominal_freq) &&
(cpudata->nominal_freq > cpudata->lowest_nonlinear_freq) &&
- (cpudata->lowest_nonlinear_freq > cpudata->min_freq) &&
- (cpudata->min_freq > 0))) {
+ (cpudata->lowest_nonlinear_freq > policy->cpuinfo.min_freq) &&
+ (policy->cpuinfo.min_freq > 0))) {
amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL;
pr_err("%s cpu%d max=%d >= nominal=%d > lowest_nonlinear=%d > min=%d > 0, the formula is incorrect!\n",
- __func__, cpu, cpudata->max_freq, cpudata->nominal_freq,
- cpudata->lowest_nonlinear_freq, cpudata->min_freq);
+ __func__, cpu, policy->cpuinfo.max_freq, cpudata->nominal_freq,
+ cpudata->lowest_nonlinear_freq, policy->cpuinfo.min_freq);
goto skip_test;
}
@@ -233,13 +233,13 @@ static void amd_pstate_ut_check_freq(u32 index)
}
if (cpudata->boost_supported) {
- if ((policy->max == cpudata->max_freq) ||
+ if ((policy->max == policy->cpuinfo.max_freq) ||
(policy->max == cpudata->nominal_freq))
amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_PASS;
else {
amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL;
pr_err("%s cpu%d policy_max=%d should be equal cpu_max=%d or cpu_nominal=%d !\n",
- __func__, cpu, policy->max, cpudata->max_freq,
+ __func__, cpu, policy->max, policy->cpuinfo.max_freq,
cpudata->nominal_freq);
goto skip_test;
}
diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
index 1b98f5d76894d..fb28b27558882 100644
--- a/drivers/cpufreq/amd-pstate.c
+++ b/drivers/cpufreq/amd-pstate.c
@@ -717,7 +717,7 @@ static int amd_pstate_cpu_boost_update(struct cpufreq_policy *policy, bool on)
int ret = 0;
nominal_freq = READ_ONCE(cpudata->nominal_freq);
- max_freq = READ_ONCE(cpudata->max_freq);
+ max_freq = perf_to_freq(cpudata, READ_ONCE(cpudata->highest_perf));
if (on)
policy->cpuinfo.max_freq = max_freq;
@@ -923,13 +923,10 @@ static int amd_pstate_init_freq(struct amd_cpudata *cpudata)
nominal_freq *= 1000;
WRITE_ONCE(cpudata->nominal_freq, nominal_freq);
- WRITE_ONCE(cpudata->min_freq, min_freq);
max_freq = perf_to_freq(cpudata, cpudata->highest_perf);
lowest_nonlinear_freq = perf_to_freq(cpudata, cpudata->lowest_nonlinear_perf);
-
WRITE_ONCE(cpudata->lowest_nonlinear_freq, lowest_nonlinear_freq);
- WRITE_ONCE(cpudata->max_freq, max_freq);
/**
* Below values need to be initialized correctly, otherwise driver will fail to load
@@ -954,9 +951,9 @@ static int amd_pstate_init_freq(struct amd_cpudata *cpudata)
static int amd_pstate_cpu_init(struct cpufreq_policy *policy)
{
- int min_freq, max_freq, ret;
- struct device *dev;
struct amd_cpudata *cpudata;
+ struct device *dev;
+ int ret;
/*
* Resetting PERF_CTL_MSR will put the CPU in P0 frequency,
@@ -987,17 +984,11 @@ static int amd_pstate_cpu_init(struct cpufreq_policy *policy)
if (ret)
goto free_cpudata1;
- min_freq = READ_ONCE(cpudata->min_freq);
- max_freq = READ_ONCE(cpudata->max_freq);
-
policy->cpuinfo.transition_latency = amd_pstate_get_transition_latency(policy->cpu);
policy->transition_delay_us = amd_pstate_get_transition_delay_us(policy->cpu);
- policy->min = min_freq;
- policy->max = max_freq;
-
- policy->cpuinfo.min_freq = min_freq;
- policy->cpuinfo.max_freq = max_freq;
+ policy->cpuinfo.min_freq = policy->min = perf_to_freq(cpudata, cpudata->lowest_perf);
+ policy->cpuinfo.max_freq = policy->max = perf_to_freq(cpudata, cpudata->highest_perf);
policy->boost_enabled = READ_ONCE(cpudata->boost_supported);
@@ -1021,9 +1012,6 @@ static int amd_pstate_cpu_init(struct cpufreq_policy *policy)
goto free_cpudata2;
}
- cpudata->max_limit_freq = max_freq;
- cpudata->min_limit_freq = min_freq;
-
policy->driver_data = cpudata;
if (!current_pstate_driver->adjust_perf)
@@ -1081,14 +1069,10 @@ static int amd_pstate_cpu_suspend(struct cpufreq_policy *policy)
static ssize_t show_amd_pstate_max_freq(struct cpufreq_policy *policy,
char *buf)
{
- int max_freq;
struct amd_cpudata *cpudata = policy->driver_data;
- max_freq = READ_ONCE(cpudata->max_freq);
- if (max_freq < 0)
- return max_freq;
- return sysfs_emit(buf, "%u\n", max_freq);
+ return sysfs_emit(buf, "%u\n", perf_to_freq(cpudata, READ_ONCE(cpudata->highest_perf)));
}
static ssize_t show_amd_pstate_lowest_nonlinear_freq(struct cpufreq_policy *policy,
@@ -1446,10 +1430,10 @@ static bool amd_pstate_acpi_pm_profile_undefined(void)
static int amd_pstate_epp_cpu_init(struct cpufreq_policy *policy)
{
- int min_freq, max_freq, ret;
struct amd_cpudata *cpudata;
struct device *dev;
u64 value;
+ int ret;
/*
* Resetting PERF_CTL_MSR will put the CPU in P0 frequency,
@@ -1480,19 +1464,13 @@ static int amd_pstate_epp_cpu_init(struct cpufreq_policy *policy)
if (ret)
goto free_cpudata1;
- min_freq = READ_ONCE(cpudata->min_freq);
- max_freq = READ_ONCE(cpudata->max_freq);
-
- policy->cpuinfo.min_freq = min_freq;
- policy->cpuinfo.max_freq = max_freq;
+ policy->cpuinfo.min_freq = policy->min = perf_to_freq(cpudata, cpudata->lowest_perf);
+ policy->cpuinfo.max_freq = policy->max = perf_to_freq(cpudata, cpudata->highest_perf);
/* It will be updated by governor */
policy->cur = policy->cpuinfo.min_freq;
policy->driver_data = cpudata;
- policy->min = policy->cpuinfo.min_freq;
- policy->max = policy->cpuinfo.max_freq;
-
policy->boost_enabled = READ_ONCE(cpudata->boost_supported);
/*
@@ -1550,7 +1528,8 @@ static int amd_pstate_epp_update_limit(struct cpufreq_policy *policy)
struct amd_cpudata *cpudata = policy->driver_data;
u8 epp;
- amd_pstate_update_min_max_limit(policy);
+ if (policy->min != cpudata->min_limit_freq || policy->max != cpudata->max_limit_freq)
+ amd_pstate_update_min_max_limit(policy);
if (cpudata->policy == CPUFREQ_POLICY_PERFORMANCE)
epp = 0;
diff --git a/drivers/cpufreq/amd-pstate.h b/drivers/cpufreq/amd-pstate.h
index 19d405c6d805e..0149933692458 100644
--- a/drivers/cpufreq/amd-pstate.h
+++ b/drivers/cpufreq/amd-pstate.h
@@ -46,8 +46,6 @@ struct amd_aperf_mperf {
* @max_limit_perf: Cached value of the performance corresponding to policy->max
* @min_limit_freq: Cached value of policy->min (in khz)
* @max_limit_freq: Cached value of policy->max (in khz)
- * @max_freq: the frequency (in khz) that mapped to highest_perf
- * @min_freq: the frequency (in khz) that mapped to lowest_perf
* @nominal_freq: the frequency (in khz) that mapped to nominal_perf
* @lowest_nonlinear_freq: the frequency (in khz) that mapped to lowest_nonlinear_perf
* @cur: Difference of Aperf/Mperf/tsc count between last and current sample
@@ -77,11 +75,8 @@ struct amd_cpudata {
u8 prefcore_ranking;
u8 min_limit_perf;
u8 max_limit_perf;
- u32 min_limit_freq;
- u32 max_limit_freq;
-
- u32 max_freq;
- u32 min_freq;
+ u32 min_limit_freq;
+ u32 max_limit_freq;
u32 nominal_freq;
u32 lowest_nonlinear_freq;
--
2.43.0
* [PATCH v5 04/19] cpufreq/amd-pstate: Move perf values into a union
2025-02-26 7:49 [PATCH v5 00/19] amd-pstate cleanups Mario Limonciello
` (2 preceding siblings ...)
2025-02-26 7:49 ` [PATCH v5 03/19] cpufreq/amd-pstate: Drop min and max cached frequencies Mario Limonciello
@ 2025-02-26 7:49 ` Mario Limonciello
2025-02-27 12:59 ` kernel test robot
2025-03-01 7:02 ` Dhananjay Ugwekar
2025-02-26 7:49 ` [PATCH v5 05/19] cpufreq/amd-pstate: Overhaul locking Mario Limonciello
` (14 subsequent siblings)
18 siblings, 2 replies; 27+ messages in thread
From: Mario Limonciello @ 2025-02-26 7:49 UTC (permalink / raw)
To: Gautham R . Shenoy, Perry Yuan
Cc: Dhananjay Ugwekar, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
open list:CPU FREQUENCY SCALING FRAMEWORK, Mario Limonciello
From: Mario Limonciello <mario.limonciello@amd.com>
By storing perf values in a union, all the writes and reads can
be done atomically, removing the need for some concurrency protections.
While making this change, also drop the cached frequency values and use
inline helpers to calculate them on demand from the perf values.
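Since all of the perf fields pack into a single u64, a plain
READ_ONCE()/WRITE_ONCE() of the union gives a consistent snapshot
without locking. A minimal sketch of the read-modify-write pattern used
throughout the series:

	union perf_cached perf = READ_ONCE(cpudata->perf);

	perf.min_limit_perf = freq_to_perf(perf, cpudata->nominal_freq, policy->min);
	perf.max_limit_perf = freq_to_perf(perf, cpudata->nominal_freq, policy->max);
	WRITE_ONCE(cpudata->perf, perf);	/* one 64-bit store publishes all fields */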
Reviewed-by: Gautham R. Shenoy <gautham.shenoy@amd.com>
Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
---
v5:
* Use suggestion for union docstring
* Flush out for quirk case
* Use perf variable in unit test
v4:
* Rebase on earlier changes
v3:
* Pick up tag
v2:
* cache perf variable in unit tests
* Drop unnecessary check from amd_pstate_update_min_max_limit()
* Consistency with READ_ONCE()
* Drop unneeded policy checks
* add kdoc
---
drivers/cpufreq/amd-pstate-ut.c | 18 +--
drivers/cpufreq/amd-pstate.c | 205 ++++++++++++++++++--------------
drivers/cpufreq/amd-pstate.h | 51 +++++---
3 files changed, 158 insertions(+), 116 deletions(-)
diff --git a/drivers/cpufreq/amd-pstate-ut.c b/drivers/cpufreq/amd-pstate-ut.c
index 445278cf40b61..5f6a92a816e61 100644
--- a/drivers/cpufreq/amd-pstate-ut.c
+++ b/drivers/cpufreq/amd-pstate-ut.c
@@ -129,6 +129,7 @@ static void amd_pstate_ut_check_perf(u32 index)
struct cppc_perf_caps cppc_perf;
struct cpufreq_policy *policy = NULL;
struct amd_cpudata *cpudata = NULL;
+ union perf_cached cur_perf;
for_each_possible_cpu(cpu) {
policy = cpufreq_cpu_get(cpu);
@@ -162,19 +163,20 @@ static void amd_pstate_ut_check_perf(u32 index)
lowest_perf = AMD_CPPC_LOWEST_PERF(cap1);
}
- if (highest_perf != READ_ONCE(cpudata->highest_perf) && !cpudata->hw_prefcore) {
+ cur_perf = READ_ONCE(cpudata->perf);
+ if (highest_perf != cur_perf.highest_perf && !cpudata->hw_prefcore) {
pr_err("%s cpu%d highest=%d %d highest perf doesn't match\n",
- __func__, cpu, highest_perf, cpudata->highest_perf);
+ __func__, cpu, highest_perf, cur_perf.highest_perf);
goto skip_test;
}
- if ((nominal_perf != READ_ONCE(cpudata->nominal_perf)) ||
- (lowest_nonlinear_perf != READ_ONCE(cpudata->lowest_nonlinear_perf)) ||
- (lowest_perf != READ_ONCE(cpudata->lowest_perf))) {
+ if (nominal_perf != cur_perf.nominal_perf ||
+ (lowest_nonlinear_perf != cur_perf.lowest_nonlinear_perf) ||
+ (lowest_perf != cur_perf.lowest_perf)) {
amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL;
pr_err("%s cpu%d nominal=%d %d lowest_nonlinear=%d %d lowest=%d %d, they should be equal!\n",
- __func__, cpu, nominal_perf, cpudata->nominal_perf,
- lowest_nonlinear_perf, cpudata->lowest_nonlinear_perf,
- lowest_perf, cpudata->lowest_perf);
+ __func__, cpu, nominal_perf, cur_perf.nominal_perf,
+ lowest_nonlinear_perf, cur_perf.lowest_nonlinear_perf,
+ lowest_perf, cur_perf.lowest_perf);
goto skip_test;
}
diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
index fb28b27558882..bd8bcda4e6eb0 100644
--- a/drivers/cpufreq/amd-pstate.c
+++ b/drivers/cpufreq/amd-pstate.c
@@ -142,18 +142,17 @@ static struct quirk_entry quirk_amd_7k62 = {
.lowest_freq = 550,
};
-static inline u8 freq_to_perf(struct amd_cpudata *cpudata, unsigned int freq_val)
+static inline u8 freq_to_perf(union perf_cached perf, u32 nominal_freq, unsigned int freq_val)
{
- u32 perf_val = DIV_ROUND_UP_ULL((u64)freq_val * cpudata->nominal_perf,
- cpudata->nominal_freq);
+ u32 perf_val = DIV_ROUND_UP_ULL((u64)freq_val * perf.nominal_perf, nominal_freq);
- return (u8)clamp(perf_val, cpudata->lowest_perf, cpudata->highest_perf);
+ return (u8)clamp(perf_val, perf.lowest_perf, perf.highest_perf);
}
-static inline u32 perf_to_freq(struct amd_cpudata *cpudata, u8 perf_val)
+static inline u32 perf_to_freq(union perf_cached perf, u32 nominal_freq, u8 perf_val)
{
- return DIV_ROUND_UP_ULL((u64)cpudata->nominal_freq * perf_val,
- cpudata->nominal_perf);
+ return DIV_ROUND_UP_ULL((u64)nominal_freq * perf_val,
+ perf.nominal_perf);
}
static int __init dmi_matched_7k62_bios_bug(const struct dmi_system_id *dmi)
@@ -347,7 +346,9 @@ static int amd_pstate_set_energy_pref_index(struct cpufreq_policy *policy,
}
if (trace_amd_pstate_epp_perf_enabled()) {
- trace_amd_pstate_epp_perf(cpudata->cpu, cpudata->highest_perf,
+ union perf_cached perf = READ_ONCE(cpudata->perf);
+
+ trace_amd_pstate_epp_perf(cpudata->cpu, perf.highest_perf,
epp,
FIELD_GET(AMD_CPPC_MIN_PERF_MASK, cpudata->cppc_req_cached),
FIELD_GET(AMD_CPPC_MAX_PERF_MASK, cpudata->cppc_req_cached),
@@ -425,6 +426,7 @@ static inline int amd_pstate_cppc_enable(bool enable)
static int msr_init_perf(struct amd_cpudata *cpudata)
{
+ union perf_cached perf = READ_ONCE(cpudata->perf);
u64 cap1, numerator;
int ret = rdmsrl_safe_on_cpu(cpudata->cpu, MSR_AMD_CPPC_CAP1,
@@ -436,19 +438,21 @@ static int msr_init_perf(struct amd_cpudata *cpudata)
if (ret)
return ret;
- WRITE_ONCE(cpudata->highest_perf, numerator);
- WRITE_ONCE(cpudata->max_limit_perf, numerator);
- WRITE_ONCE(cpudata->nominal_perf, AMD_CPPC_NOMINAL_PERF(cap1));
- WRITE_ONCE(cpudata->lowest_nonlinear_perf, AMD_CPPC_LOWNONLIN_PERF(cap1));
- WRITE_ONCE(cpudata->lowest_perf, AMD_CPPC_LOWEST_PERF(cap1));
+ perf.highest_perf = numerator;
+ perf.max_limit_perf = numerator;
+ perf.min_limit_perf = AMD_CPPC_LOWEST_PERF(cap1);
+ perf.nominal_perf = AMD_CPPC_NOMINAL_PERF(cap1);
+ perf.lowest_nonlinear_perf = AMD_CPPC_LOWNONLIN_PERF(cap1);
+ perf.lowest_perf = AMD_CPPC_LOWEST_PERF(cap1);
+ WRITE_ONCE(cpudata->perf, perf);
WRITE_ONCE(cpudata->prefcore_ranking, AMD_CPPC_HIGHEST_PERF(cap1));
- WRITE_ONCE(cpudata->min_limit_perf, AMD_CPPC_LOWEST_PERF(cap1));
return 0;
}
static int shmem_init_perf(struct amd_cpudata *cpudata)
{
struct cppc_perf_caps cppc_perf;
+ union perf_cached perf = READ_ONCE(cpudata->perf);
u64 numerator;
int ret = cppc_get_perf_caps(cpudata->cpu, &cppc_perf);
@@ -459,14 +463,14 @@ static int shmem_init_perf(struct amd_cpudata *cpudata)
if (ret)
return ret;
- WRITE_ONCE(cpudata->highest_perf, numerator);
- WRITE_ONCE(cpudata->max_limit_perf, numerator);
- WRITE_ONCE(cpudata->nominal_perf, cppc_perf.nominal_perf);
- WRITE_ONCE(cpudata->lowest_nonlinear_perf,
- cppc_perf.lowest_nonlinear_perf);
- WRITE_ONCE(cpudata->lowest_perf, cppc_perf.lowest_perf);
+ perf.highest_perf = numerator;
+ perf.max_limit_perf = numerator;
+ perf.min_limit_perf = cppc_perf.lowest_perf;
+ perf.nominal_perf = cppc_perf.nominal_perf;
+ perf.lowest_nonlinear_perf = cppc_perf.lowest_nonlinear_perf;
+ perf.lowest_perf = cppc_perf.lowest_perf;
+ WRITE_ONCE(cpudata->perf, perf);
WRITE_ONCE(cpudata->prefcore_ranking, cppc_perf.highest_perf);
- WRITE_ONCE(cpudata->min_limit_perf, cppc_perf.lowest_perf);
if (cppc_state == AMD_PSTATE_ACTIVE)
return 0;
@@ -549,14 +553,14 @@ static void amd_pstate_update(struct amd_cpudata *cpudata, u8 min_perf,
u8 des_perf, u8 max_perf, bool fast_switch, int gov_flags)
{
struct cpufreq_policy *policy __free(put_cpufreq_policy) = cpufreq_cpu_get(cpudata->cpu);
- u8 nominal_perf = READ_ONCE(cpudata->nominal_perf);
+ union perf_cached perf = READ_ONCE(cpudata->perf);
if (!policy)
return;
des_perf = clamp_t(u8, des_perf, min_perf, max_perf);
- policy->cur = perf_to_freq(cpudata, des_perf);
+ policy->cur = perf_to_freq(perf, cpudata->nominal_freq, des_perf);
if ((cppc_state == AMD_PSTATE_GUIDED) && (gov_flags & CPUFREQ_GOV_DYNAMIC_SWITCHING)) {
min_perf = des_perf;
@@ -565,7 +569,7 @@ static void amd_pstate_update(struct amd_cpudata *cpudata, u8 min_perf,
/* limit the max perf when core performance boost feature is disabled */
if (!cpudata->boost_supported)
- max_perf = min_t(u8, nominal_perf, max_perf);
+ max_perf = min_t(u8, perf.nominal_perf, max_perf);
if (trace_amd_pstate_perf_enabled() && amd_pstate_sample(cpudata)) {
trace_amd_pstate_perf(min_perf, des_perf, max_perf, cpudata->freq,
@@ -602,39 +606,41 @@ static int amd_pstate_verify(struct cpufreq_policy_data *policy_data)
return 0;
}
-static int amd_pstate_update_min_max_limit(struct cpufreq_policy *policy)
+static void amd_pstate_update_min_max_limit(struct cpufreq_policy *policy)
{
- u8 max_limit_perf, min_limit_perf;
struct amd_cpudata *cpudata = policy->driver_data;
+ union perf_cached perf = READ_ONCE(cpudata->perf);
- max_limit_perf = freq_to_perf(cpudata, policy->max);
- min_limit_perf = freq_to_perf(cpudata, policy->min);
+ perf.max_limit_perf = freq_to_perf(perf, cpudata->nominal_freq, policy->max);
+ perf.min_limit_perf = freq_to_perf(perf, cpudata->nominal_freq, policy->min);
if (cpudata->policy == CPUFREQ_POLICY_PERFORMANCE)
- min_limit_perf = min(cpudata->nominal_perf, max_limit_perf);
+ perf.min_limit_perf = min(perf.nominal_perf, perf.max_limit_perf);
- WRITE_ONCE(cpudata->max_limit_perf, max_limit_perf);
- WRITE_ONCE(cpudata->min_limit_perf, min_limit_perf);
WRITE_ONCE(cpudata->max_limit_freq, policy->max);
WRITE_ONCE(cpudata->min_limit_freq, policy->min);
-
- return 0;
+ WRITE_ONCE(cpudata->perf, perf);
}
static int amd_pstate_update_freq(struct cpufreq_policy *policy,
unsigned int target_freq, bool fast_switch)
{
struct cpufreq_freqs freqs;
- struct amd_cpudata *cpudata = policy->driver_data;
+ struct amd_cpudata *cpudata;
+ union perf_cached perf;
u8 des_perf;
+ cpudata = policy->driver_data;
+
if (policy->min != cpudata->min_limit_freq || policy->max != cpudata->max_limit_freq)
amd_pstate_update_min_max_limit(policy);
+ perf = READ_ONCE(cpudata->perf);
+
freqs.old = policy->cur;
freqs.new = target_freq;
- des_perf = freq_to_perf(cpudata, target_freq);
+ des_perf = freq_to_perf(perf, cpudata->nominal_freq, target_freq);
WARN_ON(fast_switch && !policy->fast_switch_enabled);
/*
@@ -645,8 +651,8 @@ static int amd_pstate_update_freq(struct cpufreq_policy *policy,
if (!fast_switch)
cpufreq_freq_transition_begin(policy, &freqs);
- amd_pstate_update(cpudata, cpudata->min_limit_perf, des_perf,
- cpudata->max_limit_perf, fast_switch,
+ amd_pstate_update(cpudata, perf.min_limit_perf, des_perf,
+ perf.max_limit_perf, fast_switch,
policy->governor->flags);
if (!fast_switch)
@@ -675,9 +681,10 @@ static void amd_pstate_adjust_perf(unsigned int cpu,
unsigned long target_perf,
unsigned long capacity)
{
- u8 max_perf, min_perf, des_perf, cap_perf, min_limit_perf;
+ u8 max_perf, min_perf, des_perf, cap_perf;
struct cpufreq_policy *policy __free(put_cpufreq_policy) = cpufreq_cpu_get(cpu);
struct amd_cpudata *cpudata;
+ union perf_cached perf;
if (!policy)
return;
@@ -687,8 +694,8 @@ static void amd_pstate_adjust_perf(unsigned int cpu,
if (policy->min != cpudata->min_limit_freq || policy->max != cpudata->max_limit_freq)
amd_pstate_update_min_max_limit(policy);
- cap_perf = READ_ONCE(cpudata->highest_perf);
- min_limit_perf = READ_ONCE(cpudata->min_limit_perf);
+ perf = READ_ONCE(cpudata->perf);
+ cap_perf = perf.highest_perf;
des_perf = cap_perf;
if (target_perf < capacity)
@@ -699,10 +706,10 @@ static void amd_pstate_adjust_perf(unsigned int cpu,
else
min_perf = cap_perf;
- if (min_perf < min_limit_perf)
- min_perf = min_limit_perf;
+ if (min_perf < perf.min_limit_perf)
+ min_perf = perf.min_limit_perf;
- max_perf = cpudata->max_limit_perf;
+ max_perf = perf.max_limit_perf;
if (max_perf < min_perf)
max_perf = min_perf;
@@ -713,11 +720,12 @@ static void amd_pstate_adjust_perf(unsigned int cpu,
static int amd_pstate_cpu_boost_update(struct cpufreq_policy *policy, bool on)
{
struct amd_cpudata *cpudata = policy->driver_data;
+ union perf_cached perf = READ_ONCE(cpudata->perf);
u32 nominal_freq, max_freq;
int ret = 0;
nominal_freq = READ_ONCE(cpudata->nominal_freq);
- max_freq = perf_to_freq(cpudata, READ_ONCE(cpudata->highest_perf));
+ max_freq = perf_to_freq(perf, cpudata->nominal_freq, perf.highest_perf);
if (on)
policy->cpuinfo.max_freq = max_freq;
@@ -888,30 +896,30 @@ static u32 amd_pstate_get_transition_latency(unsigned int cpu)
}
/*
- * amd_pstate_init_freq: Initialize the max_freq, min_freq,
- * nominal_freq and lowest_nonlinear_freq for
- * the @cpudata object.
+ * amd_pstate_init_freq: Initialize the nominal_freq and lowest_nonlinear_freq
+ * for the @cpudata object.
*
- * Requires: highest_perf, lowest_perf, nominal_perf and
- * lowest_nonlinear_perf members of @cpudata to be
- * initialized.
+ * Requires: all perf members of @cpudata to be initialized.
*
- * Returns 0 on success, non-zero value on failure.
+ * Returns 0 on success, non-zero value on failure.
*/
static int amd_pstate_init_freq(struct amd_cpudata *cpudata)
{
- int ret;
- u32 min_freq, max_freq;
- u32 nominal_freq, lowest_nonlinear_freq;
+ u32 min_freq, max_freq, nominal_freq, lowest_nonlinear_freq;
struct cppc_perf_caps cppc_perf;
+ union perf_cached perf;
+ int ret;
ret = cppc_get_perf_caps(cpudata->cpu, &cppc_perf);
if (ret)
return ret;
+ perf = READ_ONCE(cpudata->perf);
- if (quirks && quirks->lowest_freq)
+ if (quirks && quirks->lowest_freq) {
min_freq = quirks->lowest_freq;
- else
+ perf.lowest_perf = freq_to_perf(perf, nominal_freq, min_freq);
+ WRITE_ONCE(cpudata->perf, perf);
+ } else
min_freq = cppc_perf.lowest_freq;
if (quirks && quirks->nominal_freq)
@@ -924,8 +932,8 @@ static int amd_pstate_init_freq(struct amd_cpudata *cpudata)
WRITE_ONCE(cpudata->nominal_freq, nominal_freq);
- max_freq = perf_to_freq(cpudata, cpudata->highest_perf);
- lowest_nonlinear_freq = perf_to_freq(cpudata, cpudata->lowest_nonlinear_perf);
+ max_freq = perf_to_freq(perf, nominal_freq, perf.highest_perf);
+ lowest_nonlinear_freq = perf_to_freq(perf, nominal_freq, perf.lowest_nonlinear_perf);
WRITE_ONCE(cpudata->lowest_nonlinear_freq, lowest_nonlinear_freq);
/**
@@ -952,6 +960,7 @@ static int amd_pstate_init_freq(struct amd_cpudata *cpudata)
static int amd_pstate_cpu_init(struct cpufreq_policy *policy)
{
struct amd_cpudata *cpudata;
+ union perf_cached perf;
struct device *dev;
int ret;
@@ -987,8 +996,14 @@ static int amd_pstate_cpu_init(struct cpufreq_policy *policy)
policy->cpuinfo.transition_latency = amd_pstate_get_transition_latency(policy->cpu);
policy->transition_delay_us = amd_pstate_get_transition_delay_us(policy->cpu);
- policy->cpuinfo.min_freq = policy->min = perf_to_freq(cpudata, cpudata->lowest_perf);
- policy->cpuinfo.max_freq = policy->max = perf_to_freq(cpudata, cpudata->highest_perf);
+ perf = READ_ONCE(cpudata->perf);
+
+ policy->cpuinfo.min_freq = policy->min = perf_to_freq(perf,
+ cpudata->nominal_freq,
+ perf.lowest_perf);
+ policy->cpuinfo.max_freq = policy->max = perf_to_freq(perf,
+ cpudata->nominal_freq,
+ perf.highest_perf);
policy->boost_enabled = READ_ONCE(cpudata->boost_supported);
@@ -1069,23 +1084,27 @@ static int amd_pstate_cpu_suspend(struct cpufreq_policy *policy)
static ssize_t show_amd_pstate_max_freq(struct cpufreq_policy *policy,
char *buf)
{
- struct amd_cpudata *cpudata = policy->driver_data;
+ struct amd_cpudata *cpudata;
+ union perf_cached perf;
+ cpudata = policy->driver_data;
+ perf = READ_ONCE(cpudata->perf);
- return sysfs_emit(buf, "%u\n", perf_to_freq(cpudata, READ_ONCE(cpudata->highest_perf)));
+ return sysfs_emit(buf, "%u\n",
+ perf_to_freq(perf, cpudata->nominal_freq, perf.highest_perf));
}
static ssize_t show_amd_pstate_lowest_nonlinear_freq(struct cpufreq_policy *policy,
char *buf)
{
- int freq;
- struct amd_cpudata *cpudata = policy->driver_data;
+ struct amd_cpudata *cpudata;
+ union perf_cached perf;
- freq = READ_ONCE(cpudata->lowest_nonlinear_freq);
- if (freq < 0)
- return freq;
+ cpudata = policy->driver_data;
+ perf = READ_ONCE(cpudata->perf);
- return sysfs_emit(buf, "%u\n", freq);
+ return sysfs_emit(buf, "%u\n",
+ perf_to_freq(perf, cpudata->nominal_freq, perf.lowest_nonlinear_perf));
}
/*
@@ -1095,12 +1114,11 @@ static ssize_t show_amd_pstate_lowest_nonlinear_freq(struct cpufreq_policy *poli
static ssize_t show_amd_pstate_highest_perf(struct cpufreq_policy *policy,
char *buf)
{
- u8 perf;
- struct amd_cpudata *cpudata = policy->driver_data;
+ struct amd_cpudata *cpudata;
- perf = READ_ONCE(cpudata->highest_perf);
+ cpudata = policy->driver_data;
- return sysfs_emit(buf, "%u\n", perf);
+ return sysfs_emit(buf, "%u\n", cpudata->perf.highest_perf);
}
static ssize_t show_amd_pstate_prefcore_ranking(struct cpufreq_policy *policy,
@@ -1431,6 +1449,7 @@ static bool amd_pstate_acpi_pm_profile_undefined(void)
static int amd_pstate_epp_cpu_init(struct cpufreq_policy *policy)
{
struct amd_cpudata *cpudata;
+ union perf_cached perf;
struct device *dev;
u64 value;
int ret;
@@ -1464,8 +1483,15 @@ static int amd_pstate_epp_cpu_init(struct cpufreq_policy *policy)
if (ret)
goto free_cpudata1;
- policy->cpuinfo.min_freq = policy->min = perf_to_freq(cpudata, cpudata->lowest_perf);
- policy->cpuinfo.max_freq = policy->max = perf_to_freq(cpudata, cpudata->highest_perf);
+ perf = READ_ONCE(cpudata->perf);
+
+ policy->cpuinfo.min_freq = policy->min = perf_to_freq(perf,
+ cpudata->nominal_freq,
+ perf.lowest_perf);
+ policy->cpuinfo.max_freq = policy->max = perf_to_freq(perf,
+ cpudata->nominal_freq,
+ perf.highest_perf);
+
/* It will be updated by governor */
policy->cur = policy->cpuinfo.min_freq;
@@ -1526,6 +1552,7 @@ static void amd_pstate_epp_cpu_exit(struct cpufreq_policy *policy)
static int amd_pstate_epp_update_limit(struct cpufreq_policy *policy)
{
struct amd_cpudata *cpudata = policy->driver_data;
+ union perf_cached perf;
u8 epp;
if (policy->min != cpudata->min_limit_freq || policy->max != cpudata->max_limit_freq)
@@ -1536,15 +1563,16 @@ static int amd_pstate_epp_update_limit(struct cpufreq_policy *policy)
else
epp = READ_ONCE(cpudata->epp_cached);
+ perf = READ_ONCE(cpudata->perf);
if (trace_amd_pstate_epp_perf_enabled()) {
- trace_amd_pstate_epp_perf(cpudata->cpu, cpudata->highest_perf, epp,
- cpudata->min_limit_perf,
- cpudata->max_limit_perf,
+ trace_amd_pstate_epp_perf(cpudata->cpu, perf.highest_perf, epp,
+ perf.min_limit_perf,
+ perf.max_limit_perf,
policy->boost_enabled);
}
- return amd_pstate_update_perf(cpudata, cpudata->min_limit_perf, 0U,
- cpudata->max_limit_perf, epp, false);
+ return amd_pstate_update_perf(cpudata, perf.min_limit_perf, 0U,
+ perf.max_limit_perf, epp, false);
}
static int amd_pstate_epp_set_policy(struct cpufreq_policy *policy)
@@ -1576,20 +1604,18 @@ static int amd_pstate_epp_set_policy(struct cpufreq_policy *policy)
static int amd_pstate_epp_reenable(struct cpufreq_policy *policy)
{
struct amd_cpudata *cpudata = policy->driver_data;
- u8 max_perf;
+ union perf_cached perf = READ_ONCE(cpudata->perf);
int ret;
ret = amd_pstate_cppc_enable(true);
if (ret)
pr_err("failed to enable amd pstate during resume, return %d\n", ret);
- max_perf = READ_ONCE(cpudata->highest_perf);
-
if (trace_amd_pstate_epp_perf_enabled()) {
- trace_amd_pstate_epp_perf(cpudata->cpu, cpudata->highest_perf,
+ trace_amd_pstate_epp_perf(cpudata->cpu, perf.highest_perf,
cpudata->epp_cached,
FIELD_GET(AMD_CPPC_MIN_PERF_MASK, cpudata->cppc_req_cached),
- max_perf, policy->boost_enabled);
+ perf.highest_perf, policy->boost_enabled);
}
return amd_pstate_epp_update_limit(policy);
@@ -1613,22 +1639,21 @@ static int amd_pstate_epp_cpu_online(struct cpufreq_policy *policy)
static int amd_pstate_epp_cpu_offline(struct cpufreq_policy *policy)
{
struct amd_cpudata *cpudata = policy->driver_data;
- u8 min_perf;
+ union perf_cached perf = READ_ONCE(cpudata->perf);
if (cpudata->suspended)
return 0;
- min_perf = READ_ONCE(cpudata->lowest_perf);
-
guard(mutex)(&amd_pstate_limits_lock);
if (trace_amd_pstate_epp_perf_enabled()) {
- trace_amd_pstate_epp_perf(cpudata->cpu, cpudata->highest_perf,
+ trace_amd_pstate_epp_perf(cpudata->cpu, perf.highest_perf,
AMD_CPPC_EPP_BALANCE_POWERSAVE,
- min_perf, min_perf, policy->boost_enabled);
+ perf.lowest_perf, perf.lowest_perf,
+ policy->boost_enabled);
}
- return amd_pstate_update_perf(cpudata, min_perf, 0, min_perf,
+ return amd_pstate_update_perf(cpudata, perf.lowest_perf, 0, perf.lowest_perf,
AMD_CPPC_EPP_BALANCE_POWERSAVE, false);
}
diff --git a/drivers/cpufreq/amd-pstate.h b/drivers/cpufreq/amd-pstate.h
index 0149933692458..83532a0079a81 100644
--- a/drivers/cpufreq/amd-pstate.h
+++ b/drivers/cpufreq/amd-pstate.h
@@ -13,6 +13,36 @@
/*********************************************************************
* AMD P-state INTERFACE *
*********************************************************************/
+
+/**
+ * union perf_cached - A union to cache performance-related data.
+ * @highest_perf: the maximum performance an individual processor may reach,
+ * assuming ideal conditions
+ * For platforms that support the preferred core feature, the highest_perf value maybe
+ * configured to any value in the range 166-255 by the firmware (because the preferred
+ * core ranking is encoded in the highest_perf value). To maintain consistency across
+ * all platforms, we split the highest_perf and preferred core ranking values into
+ * cpudata->perf.highest_perf and cpudata->prefcore_ranking.
+ * @nominal_perf: the maximum sustained performance level of the processor,
+ * assuming ideal operating conditions
+ * @lowest_nonlinear_perf: the lowest performance level at which nonlinear power
+ * savings are achieved
+ * @lowest_perf: the absolute lowest performance level of the processor
+ * @min_limit_perf: Cached value of the performance corresponding to policy->min
+ * @max_limit_perf: Cached value of the performance corresponding to policy->max
+ */
+union perf_cached {
+ struct {
+ u8 highest_perf;
+ u8 nominal_perf;
+ u8 lowest_nonlinear_perf;
+ u8 lowest_perf;
+ u8 min_limit_perf;
+ u8 max_limit_perf;
+ };
+ u64 val;
+};
+
/**
* struct amd_aperf_mperf
* @aperf: actual performance frequency clock count
@@ -30,20 +60,9 @@ struct amd_aperf_mperf {
* @cpu: CPU number
* @req: constraint request to apply
* @cppc_req_cached: cached performance request hints
- * @highest_perf: the maximum performance an individual processor may reach,
- * assuming ideal conditions
- * For platforms that do not support the preferred core feature, the
- * highest_pef may be configured with 166 or 255, to avoid max frequency
- * calculated wrongly. we take the fixed value as the highest_perf.
- * @nominal_perf: the maximum sustained performance level of the processor,
- * assuming ideal operating conditions
- * @lowest_nonlinear_perf: the lowest performance level at which nonlinear power
- * savings are achieved
- * @lowest_perf: the absolute lowest performance level of the processor
+ * @perf: cached performance-related data
* @prefcore_ranking: the preferred core ranking, the higher value indicates a higher
* priority.
- * @min_limit_perf: Cached value of the performance corresponding to policy->min
- * @max_limit_perf: Cached value of the performance corresponding to policy->max
* @min_limit_freq: Cached value of policy->min (in khz)
* @max_limit_freq: Cached value of policy->max (in khz)
* @nominal_freq: the frequency (in khz) that mapped to nominal_perf
@@ -68,13 +87,9 @@ struct amd_cpudata {
struct freq_qos_request req[2];
u64 cppc_req_cached;
- u8 highest_perf;
- u8 nominal_perf;
- u8 lowest_nonlinear_perf;
- u8 lowest_perf;
+ union perf_cached perf;
+
u8 prefcore_ranking;
- u8 min_limit_perf;
- u8 max_limit_perf;
u32 min_limit_freq;
u32 max_limit_freq;
u32 nominal_freq;
--
2.43.0
* [PATCH v5 05/19] cpufreq/amd-pstate: Overhaul locking
2025-02-26 7:49 [PATCH v5 00/19] amd-pstate cleanups Mario Limonciello
` (3 preceding siblings ...)
2025-02-26 7:49 ` [PATCH v5 04/19] cpufreq/amd-pstate: Move perf values into a union Mario Limonciello
@ 2025-02-26 7:49 ` Mario Limonciello
2025-02-26 7:49 ` [PATCH v5 06/19] cpufreq/amd-pstate: Drop `cppc_cap1_cached` Mario Limonciello
` (13 subsequent siblings)
18 siblings, 0 replies; 27+ messages in thread
From: Mario Limonciello @ 2025-02-26 7:49 UTC (permalink / raw)
To: Gautham R . Shenoy, Perry Yuan
Cc: Dhananjay Ugwekar, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
open list:CPU FREQUENCY SCALING FRAMEWORK, Mario Limonciello,
Dhananjay Ugwekar
From: Mario Limonciello <mario.limonciello@amd.com>
amd_pstate_cpu_boost_update() and refresh_frequency_limits() both
update the policy state and have nothing to do with the amd-pstate
driver itself.
A global "limits" lock doesn't make sense because each CPU's policy can
be changed independently. Whenever a CPU's values change, they are
written atomically to the per-CPU perf member. Drop the per-CPU
locking cases.
The remaining "global" driver lock is kept to ensure that only one
entity can change driver modes at a given time.
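As a sketch of the surviving usage: guard() from <linux/cleanup.h>
holds the mutex for the remainder of the enclosing scope and drops it
automatically on any return path, keeping mode transitions serialized:

	if (mode_state_machine[cppc_state][mode_idx]) {
		guard(mutex)(&amd_pstate_driver_lock);
		return mode_state_machine[cppc_state][mode_idx](mode_idx);
	}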
Reviewed-by: Gautham R. Shenoy <gautham.shenoy@amd.com>
Reviewed-by: Dhananjay Ugwekar <dhananjay.ugwekar@amd.com>
Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
---
v5:
* Add tag
---
drivers/cpufreq/amd-pstate.c | 13 +++----------
1 file changed, 3 insertions(+), 10 deletions(-)
diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
index bd8bcda4e6eb0..95b77cf145174 100644
--- a/drivers/cpufreq/amd-pstate.c
+++ b/drivers/cpufreq/amd-pstate.c
@@ -196,7 +196,6 @@ static inline int get_mode_idx_from_str(const char *str, size_t size)
return -EINVAL;
}
-static DEFINE_MUTEX(amd_pstate_limits_lock);
static DEFINE_MUTEX(amd_pstate_driver_lock);
static u8 msr_get_epp(struct amd_cpudata *cpudata)
@@ -752,7 +751,6 @@ static int amd_pstate_set_boost(struct cpufreq_policy *policy, int state)
pr_err("Boost mode is not supported by this processor or SBIOS\n");
return -EOPNOTSUPP;
}
- guard(mutex)(&amd_pstate_driver_lock);
ret = amd_pstate_cpu_boost_update(policy, state);
refresh_frequency_limits(policy);
@@ -1176,8 +1174,6 @@ static ssize_t store_energy_performance_preference(
if (ret < 0)
return -EINVAL;
- guard(mutex)(&amd_pstate_limits_lock);
-
ret = amd_pstate_set_energy_pref_index(policy, ret);
return ret ? ret : count;
@@ -1350,8 +1346,10 @@ int amd_pstate_update_status(const char *buf, size_t size)
if (mode_idx < 0 || mode_idx >= AMD_PSTATE_MAX)
return -EINVAL;
- if (mode_state_machine[cppc_state][mode_idx])
+ if (mode_state_machine[cppc_state][mode_idx]) {
+ guard(mutex)(&amd_pstate_driver_lock);
return mode_state_machine[cppc_state][mode_idx](mode_idx);
+ }
return 0;
}
@@ -1372,7 +1370,6 @@ static ssize_t status_store(struct device *a, struct device_attribute *b,
char *p = memchr(buf, '\n', count);
int ret;
- guard(mutex)(&amd_pstate_driver_lock);
ret = amd_pstate_update_status(buf, p ? p - buf : count);
return ret < 0 ? ret : count;
@@ -1644,8 +1641,6 @@ static int amd_pstate_epp_cpu_offline(struct cpufreq_policy *policy)
if (cpudata->suspended)
return 0;
- guard(mutex)(&amd_pstate_limits_lock);
-
if (trace_amd_pstate_epp_perf_enabled()) {
trace_amd_pstate_epp_perf(cpudata->cpu, perf.highest_perf,
AMD_CPPC_EPP_BALANCE_POWERSAVE,
@@ -1685,8 +1680,6 @@ static int amd_pstate_epp_resume(struct cpufreq_policy *policy)
struct amd_cpudata *cpudata = policy->driver_data;
if (cpudata->suspended) {
- guard(mutex)(&amd_pstate_limits_lock);
-
/* enable amd pstate from suspend state*/
amd_pstate_epp_reenable(policy);
--
2.43.0
* [PATCH v5 06/19] cpufreq/amd-pstate: Drop `cppc_cap1_cached`
2025-02-26 7:49 [PATCH v5 00/19] amd-pstate cleanups Mario Limonciello
` (4 preceding siblings ...)
2025-02-26 7:49 ` [PATCH v5 05/19] cpufreq/amd-pstate: Overhaul locking Mario Limonciello
@ 2025-02-26 7:49 ` Mario Limonciello
2025-02-26 7:49 ` [PATCH v5 07/19] cpufreq/amd-pstate-ut: Use _free macro to free put policy Mario Limonciello
` (12 subsequent siblings)
18 siblings, 0 replies; 27+ messages in thread
From: Mario Limonciello @ 2025-02-26 7:49 UTC (permalink / raw)
To: Gautham R . Shenoy, Perry Yuan
Cc: Dhananjay Ugwekar, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
open list:CPU FREQUENCY SCALING FRAMEWORK, Mario Limonciello,
Dhananjay Ugwekar
From: Mario Limonciello <mario.limonciello@amd.com>
The `cppc_cap1_cached` variable isn't used at all, so there is no
need to read it at initialization for each CPU.
Reviewed-by: Gautham R. Shenoy <gautham.shenoy@amd.com>
Reviewed-by: Dhananjay Ugwekar <dhananjay.ugwekar@amd.com>
Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
---
drivers/cpufreq/amd-pstate.c | 5 -----
drivers/cpufreq/amd-pstate.h | 2 --
2 files changed, 7 deletions(-)
diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
index 95b77cf145174..2cec8d7d92f51 100644
--- a/drivers/cpufreq/amd-pstate.c
+++ b/drivers/cpufreq/amd-pstate.c
@@ -1514,11 +1514,6 @@ static int amd_pstate_epp_cpu_init(struct cpufreq_policy *policy)
if (ret)
return ret;
WRITE_ONCE(cpudata->cppc_req_cached, value);
-
- ret = rdmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_CAP1, &value);
- if (ret)
- return ret;
- WRITE_ONCE(cpudata->cppc_cap1_cached, value);
}
ret = amd_pstate_set_epp(cpudata, cpudata->epp_default);
if (ret)
diff --git a/drivers/cpufreq/amd-pstate.h b/drivers/cpufreq/amd-pstate.h
index 83532a0079a81..1557e1afea79c 100644
--- a/drivers/cpufreq/amd-pstate.h
+++ b/drivers/cpufreq/amd-pstate.h
@@ -76,7 +76,6 @@ struct amd_aperf_mperf {
* AMD P-State driver supports preferred core featue.
* @epp_cached: Cached CPPC energy-performance preference value
* @policy: Cpufreq policy value
- * @cppc_cap1_cached Cached MSR_AMD_CPPC_CAP1 register value
*
* The amd_cpudata is key private data for each CPU thread in AMD P-State, and
* represents all the attributes and goals that AMD P-State requests at runtime.
@@ -105,7 +104,6 @@ struct amd_cpudata {
/* EPP feature related attributes*/
u8 epp_cached;
u32 policy;
- u64 cppc_cap1_cached;
bool suspended;
u8 epp_default;
};
--
2.43.0
* [PATCH v5 07/19] cpufreq/amd-pstate-ut: Use _free macro to free put policy
2025-02-26 7:49 [PATCH v5 00/19] amd-pstate cleanups Mario Limonciello
` (5 preceding siblings ...)
2025-02-26 7:49 ` [PATCH v5 06/19] cpufreq/amd-pstate: Drop `cppc_cap1_cached` Mario Limonciello
@ 2025-02-26 7:49 ` Mario Limonciello
2025-02-26 7:49 ` [PATCH v5 08/19] cpufreq/amd-pstate-ut: Allow lowest nonlinear and lowest to be the same Mario Limonciello
` (11 subsequent siblings)
18 siblings, 0 replies; 27+ messages in thread
From: Mario Limonciello @ 2025-02-26 7:49 UTC (permalink / raw)
To: Gautham R . Shenoy, Perry Yuan
Cc: Dhananjay Ugwekar, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
open list:CPU FREQUENCY SCALING FRAMEWORK, Mario Limonciello,
Dhananjay Ugwekar
From: Mario Limonciello <mario.limonciello@amd.com>
Using a scoped cleanup macro simplifies cleanup code.
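A minimal sketch of the pattern, assuming the put_cpufreq_policy
cleanup class from cpufreq.h: the annotated pointer has
cpufreq_cpu_put() called on it automatically when it goes out of
scope, so early returns need no goto labels:

	struct cpufreq_policy *policy __free(put_cpufreq_policy) = cpufreq_cpu_get(cpu);

	if (!policy)
		return;
	/* use policy; no explicit cpufreq_cpu_put() needed on any path */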
Reviewed-by: Dhananjay Ugwekar <dhananjay.ugwekar@amd.com>
Reviewed-by: Gautham R. Shenoy <gautham.shenoy@amd.com>
Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
---
drivers/cpufreq/amd-pstate-ut.c | 33 ++++++++++++++-------------------
1 file changed, 14 insertions(+), 19 deletions(-)
diff --git a/drivers/cpufreq/amd-pstate-ut.c b/drivers/cpufreq/amd-pstate-ut.c
index 5f6a92a816e61..e02672e67380a 100644
--- a/drivers/cpufreq/amd-pstate-ut.c
+++ b/drivers/cpufreq/amd-pstate-ut.c
@@ -26,6 +26,7 @@
#include <linux/module.h>
#include <linux/moduleparam.h>
#include <linux/fs.h>
+#include <linux/cleanup.h>
#include <acpi/cppc_acpi.h>
@@ -127,11 +128,12 @@ static void amd_pstate_ut_check_perf(u32 index)
u32 highest_perf = 0, nominal_perf = 0, lowest_nonlinear_perf = 0, lowest_perf = 0;
u64 cap1 = 0;
struct cppc_perf_caps cppc_perf;
- struct cpufreq_policy *policy = NULL;
struct amd_cpudata *cpudata = NULL;
union perf_cached cur_perf;
for_each_possible_cpu(cpu) {
+ struct cpufreq_policy *policy __free(put_cpufreq_policy) = NULL;
+
policy = cpufreq_cpu_get(cpu);
if (!policy)
break;
@@ -142,7 +144,7 @@ static void amd_pstate_ut_check_perf(u32 index)
if (ret) {
amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL;
pr_err("%s cppc_get_perf_caps ret=%d error!\n", __func__, ret);
- goto skip_test;
+ return;
}
highest_perf = cppc_perf.highest_perf;
@@ -154,7 +156,7 @@ static void amd_pstate_ut_check_perf(u32 index)
if (ret) {
amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL;
pr_err("%s read CPPC_CAP1 ret=%d error!\n", __func__, ret);
- goto skip_test;
+ return;
}
highest_perf = AMD_CPPC_HIGHEST_PERF(cap1);
@@ -167,7 +169,7 @@ static void amd_pstate_ut_check_perf(u32 index)
if (highest_perf != cur_perf.highest_perf && !cpudata->hw_prefcore) {
pr_err("%s cpu%d highest=%d %d highest perf doesn't match\n",
__func__, cpu, highest_perf, cur_perf.highest_perf);
- goto skip_test;
+ return;
}
if (nominal_perf != cur_perf.nominal_perf ||
(lowest_nonlinear_perf != cur_perf.lowest_nonlinear_perf) ||
@@ -177,7 +179,7 @@ static void amd_pstate_ut_check_perf(u32 index)
__func__, cpu, nominal_perf, cur_perf.nominal_perf,
lowest_nonlinear_perf, cur_perf.lowest_nonlinear_perf,
lowest_perf, cur_perf.lowest_perf);
- goto skip_test;
+ return;
}
if (!((highest_perf >= nominal_perf) &&
@@ -188,15 +190,11 @@ static void amd_pstate_ut_check_perf(u32 index)
pr_err("%s cpu%d highest=%d >= nominal=%d > lowest_nonlinear=%d > lowest=%d > 0, the formula is incorrect!\n",
__func__, cpu, highest_perf, nominal_perf,
lowest_nonlinear_perf, lowest_perf);
- goto skip_test;
+ return;
}
- cpufreq_cpu_put(policy);
}
amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_PASS;
- return;
-skip_test:
- cpufreq_cpu_put(policy);
}
/*
@@ -207,10 +205,11 @@ static void amd_pstate_ut_check_perf(u32 index)
static void amd_pstate_ut_check_freq(u32 index)
{
int cpu = 0;
- struct cpufreq_policy *policy = NULL;
struct amd_cpudata *cpudata = NULL;
for_each_possible_cpu(cpu) {
+ struct cpufreq_policy *policy __free(put_cpufreq_policy) = NULL;
+
policy = cpufreq_cpu_get(cpu);
if (!policy)
break;
@@ -224,14 +223,14 @@ static void amd_pstate_ut_check_freq(u32 index)
pr_err("%s cpu%d max=%d >= nominal=%d > lowest_nonlinear=%d > min=%d > 0, the formula is incorrect!\n",
__func__, cpu, policy->cpuinfo.max_freq, cpudata->nominal_freq,
cpudata->lowest_nonlinear_freq, policy->cpuinfo.min_freq);
- goto skip_test;
+ return;
}
if (cpudata->lowest_nonlinear_freq != policy->min) {
amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL;
pr_err("%s cpu%d cpudata_lowest_nonlinear_freq=%d policy_min=%d, they should be equal!\n",
__func__, cpu, cpudata->lowest_nonlinear_freq, policy->min);
- goto skip_test;
+ return;
}
if (cpudata->boost_supported) {
@@ -243,20 +242,16 @@ static void amd_pstate_ut_check_freq(u32 index)
pr_err("%s cpu%d policy_max=%d should be equal cpu_max=%d or cpu_nominal=%d !\n",
__func__, cpu, policy->max, policy->cpuinfo.max_freq,
cpudata->nominal_freq);
- goto skip_test;
+ return;
}
} else {
amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL;
pr_err("%s cpu%d must support boost!\n", __func__, cpu);
- goto skip_test;
+ return;
}
- cpufreq_cpu_put(policy);
}
amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_PASS;
- return;
-skip_test:
- cpufreq_cpu_put(policy);
}
static int amd_pstate_set_mode(enum amd_pstate_mode mode)
--
2.43.0
* [PATCH v5 08/19] cpufreq/amd-pstate-ut: Allow lowest nonlinear and lowest to be the same
2025-02-26 7:49 [PATCH v5 00/19] amd-pstate cleanups Mario Limonciello
` (6 preceding siblings ...)
2025-02-26 7:49 ` [PATCH v5 07/19] cpufreq/amd-pstate-ut: Use _free macro to free put policy Mario Limonciello
@ 2025-02-26 7:49 ` Mario Limonciello
2025-02-26 7:49 ` [PATCH v5 09/19] cpufreq/amd-pstate-ut: Drop SUCCESS and FAIL enums Mario Limonciello
` (10 subsequent siblings)
18 siblings, 0 replies; 27+ messages in thread
From: Mario Limonciello @ 2025-02-26 7:49 UTC (permalink / raw)
To: Gautham R . Shenoy, Perry Yuan
Cc: Dhananjay Ugwekar, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
open list:CPU FREQUENCY SCALING FRAMEWORK, Mario Limonciello,
Dhananjay Ugwekar
From: Mario Limonciello <mario.limonciello@amd.com>
Several Ryzen AI processors report the exact same value for lowest
nonlinear perf and lowest perf. Loosen the unit tests to allow this
scenario.
Reviewed-by: Gautham R. Shenoy <gautham.shenoy@amd.com>
Reviewed-by: Dhananjay Ugwekar <dhananjay.ugwekar@amd.com>
Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
---
v5:
* Add tag
---
drivers/cpufreq/amd-pstate-ut.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/cpufreq/amd-pstate-ut.c b/drivers/cpufreq/amd-pstate-ut.c
index e02672e67380a..b3693a4c28d26 100644
--- a/drivers/cpufreq/amd-pstate-ut.c
+++ b/drivers/cpufreq/amd-pstate-ut.c
@@ -184,7 +184,7 @@ static void amd_pstate_ut_check_perf(u32 index)
if (!((highest_perf >= nominal_perf) &&
(nominal_perf > lowest_nonlinear_perf) &&
- (lowest_nonlinear_perf > lowest_perf) &&
+ (lowest_nonlinear_perf >= lowest_perf) &&
(lowest_perf > 0))) {
amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL;
pr_err("%s cpu%d highest=%d >= nominal=%d > lowest_nonlinear=%d > lowest=%d > 0, the formula is incorrect!\n",
@@ -217,7 +217,7 @@ static void amd_pstate_ut_check_freq(u32 index)
if (!((policy->cpuinfo.max_freq >= cpudata->nominal_freq) &&
(cpudata->nominal_freq > cpudata->lowest_nonlinear_freq) &&
- (cpudata->lowest_nonlinear_freq > policy->cpuinfo.min_freq) &&
+ (cpudata->lowest_nonlinear_freq >= policy->cpuinfo.min_freq) &&
(policy->cpuinfo.min_freq > 0))) {
amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL;
pr_err("%s cpu%d max=%d >= nominal=%d > lowest_nonlinear=%d > min=%d > 0, the formula is incorrect!\n",
--
2.43.0
* [PATCH v5 09/19] cpufreq/amd-pstate-ut: Drop SUCCESS and FAIL enums
2025-02-26 7:49 [PATCH v5 00/19] amd-pstate cleanups Mario Limonciello
` (7 preceding siblings ...)
2025-02-26 7:49 ` [PATCH v5 08/19] cpufreq/amd-pstate-ut: Allow lowest nonlinear and lowest to be the same Mario Limonciello
@ 2025-02-26 7:49 ` Mario Limonciello
2025-02-26 7:49 ` [PATCH v5 10/19] cpufreq/amd-pstate-ut: Run on all of the correct CPUs Mario Limonciello
` (9 subsequent siblings)
18 siblings, 0 replies; 27+ messages in thread
From: Mario Limonciello @ 2025-02-26 7:49 UTC (permalink / raw)
To: Gautham R . Shenoy, Perry Yuan
Cc: Dhananjay Ugwekar, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
open list:CPU FREQUENCY SCALING FRAMEWORK, Mario Limonciello,
Dhananjay Ugwekar
From: Mario Limonciello <mario.limonciello@amd.com>
The enums are effectively used as booleans and don't convey the
return value of the failing call.
Instead of using enums, switch to returning the actual return
code from the unit test.
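A hypothetical runner loop, for illustration only (the runner change is
outside the hunks shown here), showing how the failing call's error
code becomes visible in the log:

	for (i = 0; i < ARRAY_SIZE(amd_pstate_ut_cases); i++) {
		int ret = amd_pstate_ut_cases[i].func(i);

		if (ret)
			pr_err("%s test failed: %d\n",
			       amd_pstate_ut_cases[i].name, ret);
	}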
Reviewed-by: Gautham R. Shenoy <gautham.shenoy@amd.com>
Reviewed-by: Dhananjay Ugwekar <dhananjay.ugwekar@amd.com>
Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
---
v5:
* Add tag
---
drivers/cpufreq/amd-pstate-ut.c | 143 ++++++++++++--------------------
1 file changed, 55 insertions(+), 88 deletions(-)
diff --git a/drivers/cpufreq/amd-pstate-ut.c b/drivers/cpufreq/amd-pstate-ut.c
index b3693a4c28d26..cd9a472e8dc3c 100644
--- a/drivers/cpufreq/amd-pstate-ut.c
+++ b/drivers/cpufreq/amd-pstate-ut.c
@@ -32,30 +32,20 @@
#include "amd-pstate.h"
-/*
- * Abbreviations:
- * amd_pstate_ut: used as a shortform for AMD P-State unit test.
- * It helps to keep variable names smaller, simpler
- */
-enum amd_pstate_ut_result {
- AMD_PSTATE_UT_RESULT_PASS,
- AMD_PSTATE_UT_RESULT_FAIL,
-};
struct amd_pstate_ut_struct {
const char *name;
- void (*func)(u32 index);
- enum amd_pstate_ut_result result;
+ int (*func)(u32 index);
};
/*
* Kernel module for testing the AMD P-State unit test
*/
-static void amd_pstate_ut_acpi_cpc_valid(u32 index);
-static void amd_pstate_ut_check_enabled(u32 index);
-static void amd_pstate_ut_check_perf(u32 index);
-static void amd_pstate_ut_check_freq(u32 index);
-static void amd_pstate_ut_check_driver(u32 index);
+static int amd_pstate_ut_acpi_cpc_valid(u32 index);
+static int amd_pstate_ut_check_enabled(u32 index);
+static int amd_pstate_ut_check_perf(u32 index);
+static int amd_pstate_ut_check_freq(u32 index);
+static int amd_pstate_ut_check_driver(u32 index);
static struct amd_pstate_ut_struct amd_pstate_ut_cases[] = {
{"amd_pstate_ut_acpi_cpc_valid", amd_pstate_ut_acpi_cpc_valid },
@@ -78,51 +68,46 @@ static bool get_shared_mem(void)
/*
* check the _CPC object is present in SBIOS.
*/
-static void amd_pstate_ut_acpi_cpc_valid(u32 index)
+static int amd_pstate_ut_acpi_cpc_valid(u32 index)
{
- if (acpi_cpc_valid())
- amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_PASS;
- else {
- amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL;
+ if (!acpi_cpc_valid()) {
pr_err("%s the _CPC object is not present in SBIOS!\n", __func__);
+ return -EINVAL;
}
+
+ return 0;
}
-static void amd_pstate_ut_pstate_enable(u32 index)
+/*
+ * check if amd pstate is enabled
+ */
+static int amd_pstate_ut_check_enabled(u32 index)
{
- int ret = 0;
u64 cppc_enable = 0;
+ int ret;
+
+ if (get_shared_mem())
+ return 0;
ret = rdmsrl_safe(MSR_AMD_CPPC_ENABLE, &cppc_enable);
if (ret) {
- amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL;
pr_err("%s rdmsrl_safe MSR_AMD_CPPC_ENABLE ret=%d error!\n", __func__, ret);
- return;
+ return ret;
}
- if (cppc_enable)
- amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_PASS;
- else {
- amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL;
+
+ if (!cppc_enable) {
pr_err("%s amd pstate must be enabled!\n", __func__);
+ return -EINVAL;
}
-}
-/*
- * check if amd pstate is enabled
- */
-static void amd_pstate_ut_check_enabled(u32 index)
-{
- if (get_shared_mem())
- amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_PASS;
- else
- amd_pstate_ut_pstate_enable(index);
+ return 0;
}
/*
* check if performance values are reasonable.
* highest_perf >= nominal_perf > lowest_nonlinear_perf > lowest_perf > 0
*/
-static void amd_pstate_ut_check_perf(u32 index)
+static int amd_pstate_ut_check_perf(u32 index)
{
int cpu = 0, ret = 0;
u32 highest_perf = 0, nominal_perf = 0, lowest_nonlinear_perf = 0, lowest_perf = 0;
@@ -142,9 +127,8 @@ static void amd_pstate_ut_check_perf(u32 index)
if (get_shared_mem()) {
ret = cppc_get_perf_caps(cpu, &cppc_perf);
if (ret) {
- amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL;
pr_err("%s cppc_get_perf_caps ret=%d error!\n", __func__, ret);
- return;
+ return ret;
}
highest_perf = cppc_perf.highest_perf;
@@ -154,9 +138,8 @@ static void amd_pstate_ut_check_perf(u32 index)
} else {
ret = rdmsrl_safe_on_cpu(cpu, MSR_AMD_CPPC_CAP1, &cap1);
if (ret) {
- amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL;
pr_err("%s read CPPC_CAP1 ret=%d error!\n", __func__, ret);
- return;
+ return ret;
}
highest_perf = AMD_CPPC_HIGHEST_PERF(cap1);
@@ -169,32 +152,30 @@ static void amd_pstate_ut_check_perf(u32 index)
if (highest_perf != cur_perf.highest_perf && !cpudata->hw_prefcore) {
pr_err("%s cpu%d highest=%d %d highest perf doesn't match\n",
__func__, cpu, highest_perf, cur_perf.highest_perf);
- return;
+ return -EINVAL;
}
if (nominal_perf != cur_perf.nominal_perf ||
(lowest_nonlinear_perf != cur_perf.lowest_nonlinear_perf) ||
(lowest_perf != cur_perf.lowest_perf)) {
- amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL;
pr_err("%s cpu%d nominal=%d %d lowest_nonlinear=%d %d lowest=%d %d, they should be equal!\n",
__func__, cpu, nominal_perf, cur_perf.nominal_perf,
lowest_nonlinear_perf, cur_perf.lowest_nonlinear_perf,
lowest_perf, cur_perf.lowest_perf);
- return;
+ return -EINVAL;
}
if (!((highest_perf >= nominal_perf) &&
(nominal_perf > lowest_nonlinear_perf) &&
(lowest_nonlinear_perf >= lowest_perf) &&
(lowest_perf > 0))) {
- amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL;
pr_err("%s cpu%d highest=%d >= nominal=%d > lowest_nonlinear=%d > lowest=%d > 0, the formula is incorrect!\n",
__func__, cpu, highest_perf, nominal_perf,
lowest_nonlinear_perf, lowest_perf);
- return;
+ return -EINVAL;
}
}
- amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_PASS;
+ return 0;
}
/*
@@ -202,7 +183,7 @@ static void amd_pstate_ut_check_perf(u32 index)
* max_freq >= nominal_freq > lowest_nonlinear_freq > min_freq > 0
* check max freq when set support boost mode.
*/
-static void amd_pstate_ut_check_freq(u32 index)
+static int amd_pstate_ut_check_freq(u32 index)
{
int cpu = 0;
struct amd_cpudata *cpudata = NULL;
@@ -219,39 +200,33 @@ static void amd_pstate_ut_check_freq(u32 index)
(cpudata->nominal_freq > cpudata->lowest_nonlinear_freq) &&
(cpudata->lowest_nonlinear_freq >= policy->cpuinfo.min_freq) &&
(policy->cpuinfo.min_freq > 0))) {
- amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL;
pr_err("%s cpu%d max=%d >= nominal=%d > lowest_nonlinear=%d > min=%d > 0, the formula is incorrect!\n",
__func__, cpu, policy->cpuinfo.max_freq, cpudata->nominal_freq,
cpudata->lowest_nonlinear_freq, policy->cpuinfo.min_freq);
- return;
+ return -EINVAL;
}
if (cpudata->lowest_nonlinear_freq != policy->min) {
- amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL;
pr_err("%s cpu%d cpudata_lowest_nonlinear_freq=%d policy_min=%d, they should be equal!\n",
__func__, cpu, cpudata->lowest_nonlinear_freq, policy->min);
- return;
+ return -EINVAL;
}
if (cpudata->boost_supported) {
- if ((policy->max == policy->cpuinfo.max_freq) ||
- (policy->max == cpudata->nominal_freq))
- amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_PASS;
- else {
- amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL;
+ if ((policy->max != policy->cpuinfo.max_freq) &&
+ (policy->max != cpudata->nominal_freq)) {
pr_err("%s cpu%d policy_max=%d should be equal cpu_max=%d or cpu_nominal=%d !\n",
__func__, cpu, policy->max, policy->cpuinfo.max_freq,
cpudata->nominal_freq);
- return;
+ return -EINVAL;
}
} else {
- amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL;
pr_err("%s cpu%d must support boost!\n", __func__, cpu);
- return;
+ return -EINVAL;
}
}
- amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_PASS;
+ return 0;
}
static int amd_pstate_set_mode(enum amd_pstate_mode mode)
@@ -263,32 +238,28 @@ static int amd_pstate_set_mode(enum amd_pstate_mode mode)
return amd_pstate_update_status(mode_str, strlen(mode_str));
}
-static void amd_pstate_ut_check_driver(u32 index)
+static int amd_pstate_ut_check_driver(u32 index)
{
enum amd_pstate_mode mode1, mode2 = AMD_PSTATE_DISABLE;
- int ret;
for (mode1 = AMD_PSTATE_DISABLE; mode1 < AMD_PSTATE_MAX; mode1++) {
- ret = amd_pstate_set_mode(mode1);
+ int ret = amd_pstate_set_mode(mode1);
if (ret)
- goto out;
+ return ret;
for (mode2 = AMD_PSTATE_DISABLE; mode2 < AMD_PSTATE_MAX; mode2++) {
if (mode1 == mode2)
continue;
ret = amd_pstate_set_mode(mode2);
- if (ret)
- goto out;
+ if (ret) {
+ pr_err("%s: failed to update status for %s->%s\n", __func__,
+ amd_pstate_get_mode_string(mode1),
+ amd_pstate_get_mode_string(mode2));
+ return ret;
+ }
}
}
-out:
- if (ret)
- pr_warn("%s: failed to update status for %s->%s: %d\n", __func__,
- amd_pstate_get_mode_string(mode1),
- amd_pstate_get_mode_string(mode2), ret);
-
- amd_pstate_ut_cases[index].result = ret ?
- AMD_PSTATE_UT_RESULT_FAIL :
- AMD_PSTATE_UT_RESULT_PASS;
+
+ return 0;
}
static int __init amd_pstate_ut_init(void)
@@ -296,16 +267,12 @@ static int __init amd_pstate_ut_init(void)
u32 i = 0, arr_size = ARRAY_SIZE(amd_pstate_ut_cases);
for (i = 0; i < arr_size; i++) {
- amd_pstate_ut_cases[i].func(i);
- switch (amd_pstate_ut_cases[i].result) {
- case AMD_PSTATE_UT_RESULT_PASS:
+ int ret = amd_pstate_ut_cases[i].func(i);
+
+ if (ret)
+ pr_err("%-4d %-20s\t fail: %d!\n", i+1, amd_pstate_ut_cases[i].name, ret);
+ else
pr_info("%-4d %-20s\t success!\n", i+1, amd_pstate_ut_cases[i].name);
- break;
- case AMD_PSTATE_UT_RESULT_FAIL:
- default:
- pr_info("%-4d %-20s\t fail!\n", i+1, amd_pstate_ut_cases[i].name);
- break;
- }
}
return 0;
--
2.43.0
^ permalink raw reply related [flat|nested] 27+ messages in thread
* [PATCH v5 10/19] cpufreq/amd-pstate-ut: Run on all of the correct CPUs
2025-02-26 7:49 [PATCH v5 00/19] amd-pstate cleanups Mario Limonciello
` (8 preceding siblings ...)
2025-02-26 7:49 ` [PATCH v5 09/19] cpufreq/amd-pstate-ut: Drop SUCCESS and FAIL enums Mario Limonciello
@ 2025-02-26 7:49 ` Mario Limonciello
2025-02-26 7:49 ` [PATCH v5 11/19] cpufreq/amd-pstate-ut: Adjust variable scope Mario Limonciello
` (8 subsequent siblings)
18 siblings, 0 replies; 27+ messages in thread
From: Mario Limonciello @ 2025-02-26 7:49 UTC (permalink / raw)
To: Gautham R . Shenoy, Perry Yuan
Cc: Dhananjay Ugwekar, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
open list:CPU FREQUENCY SCALING FRAMEWORK, Mario Limonciello,
Dhananjay Ugwekar
From: Mario Limonciello <mario.limonciello@amd.com>
If a CPU is missing a policy or has been offlined, the unit test
is skipped for the rest of the CPUs on the system.
Instead, iterate over online CPUs and skip any missing policies so
that the rest of them can still be tested.
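A toy model of the control-flow change (plain C with assumed shapes,
not the kernel's cpufreq API): skipping a gap with continue keeps
testing the remaining CPUs, where the old break abandoned them.

#include <stdbool.h>
#include <stdio.h>

int main(void)
{
	/* stand-in for per-CPU policy presence; cpu2 has no policy */
	bool has_policy[] = { true, true, false, true, true };
	int tested = 0;

	for (int cpu = 0; cpu < 5; cpu++) {
		if (!has_policy[cpu])
			continue;	/* the old 'break' left cpu3/cpu4 untested */
		printf("testing cpu%d\n", cpu);
		tested++;
	}
	printf("tested %d CPUs\n", tested);	/* 4 instead of 2 */
	return 0;
}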
Reviewed-by: Gautham R. Shenoy <gautham.shenoy@amd.com>
Reviewed-by: Dhananjay Ugwekar <dhananjay.ugwekar@amd.com>
Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
---
v5:
* Add tag
v3:
* Only check online CPUs
* Update commit message
---
drivers/cpufreq/amd-pstate-ut.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/cpufreq/amd-pstate-ut.c b/drivers/cpufreq/amd-pstate-ut.c
index cd9a472e8dc3c..2ab3017d7a0bb 100644
--- a/drivers/cpufreq/amd-pstate-ut.c
+++ b/drivers/cpufreq/amd-pstate-ut.c
@@ -116,12 +116,12 @@ static int amd_pstate_ut_check_perf(u32 index)
struct amd_cpudata *cpudata = NULL;
union perf_cached cur_perf;
- for_each_possible_cpu(cpu) {
+ for_each_online_cpu(cpu) {
struct cpufreq_policy *policy __free(put_cpufreq_policy) = NULL;
policy = cpufreq_cpu_get(cpu);
if (!policy)
- break;
+ continue;
cpudata = policy->driver_data;
if (get_shared_mem()) {
@@ -188,12 +188,12 @@ static int amd_pstate_ut_check_freq(u32 index)
int cpu = 0;
struct amd_cpudata *cpudata = NULL;
- for_each_possible_cpu(cpu) {
+ for_each_online_cpu(cpu) {
struct cpufreq_policy *policy __free(put_cpufreq_policy) = NULL;
policy = cpufreq_cpu_get(cpu);
if (!policy)
- break;
+ continue;
cpudata = policy->driver_data;
if (!((policy->cpuinfo.max_freq >= cpudata->nominal_freq) &&
--
2.43.0
^ permalink raw reply related [flat|nested] 27+ messages in thread
* [PATCH v5 11/19] cpufreq/amd-pstate-ut: Adjust variable scope
2025-02-26 7:49 [PATCH v5 00/19] amd-pstate cleanups Mario Limonciello
` (9 preceding siblings ...)
2025-02-26 7:49 ` [PATCH v5 10/19] cpufreq/amd-pstate-ut: Run on all of the correct CPUs Mario Limonciello
@ 2025-02-26 7:49 ` Mario Limonciello
2025-02-26 7:49 ` [PATCH v5 12/19] cpufreq/amd-pstate: Replace all AMD_CPPC_* macros with masks Mario Limonciello
` (7 subsequent siblings)
18 siblings, 0 replies; 27+ messages in thread
From: Mario Limonciello @ 2025-02-26 7:49 UTC (permalink / raw)
To: Gautham R . Shenoy, Perry Yuan
Cc: Dhananjay Ugwekar, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
open list:CPU FREQUENCY SCALING FRAMEWORK, Mario Limonciello,
Dhananjay Ugwekar
From: Mario Limonciello <mario.limonciello@amd.com>
In amd_pstate_ut_check_freq() and amd_pstate_ut_check_perf() the cpudata
variable is only needed in the scope of the for loop. Move it there.
Reviewed-by: Gautham R. Shenoy <gautham.shenoy@amd.com>
Reviewed-by: Dhananjay Ugwekar <dhananjay.ugwekar@amd.com>
Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
---
v5:
* Apply to amd_pstate_ut_check_perf() too
* Add tag
---
drivers/cpufreq/amd-pstate-ut.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/cpufreq/amd-pstate-ut.c b/drivers/cpufreq/amd-pstate-ut.c
index 2ab3017d7a0bb..edc1475989e3d 100644
--- a/drivers/cpufreq/amd-pstate-ut.c
+++ b/drivers/cpufreq/amd-pstate-ut.c
@@ -113,11 +113,11 @@ static int amd_pstate_ut_check_perf(u32 index)
u32 highest_perf = 0, nominal_perf = 0, lowest_nonlinear_perf = 0, lowest_perf = 0;
u64 cap1 = 0;
struct cppc_perf_caps cppc_perf;
- struct amd_cpudata *cpudata = NULL;
union perf_cached cur_perf;
for_each_online_cpu(cpu) {
struct cpufreq_policy *policy __free(put_cpufreq_policy) = NULL;
+ struct amd_cpudata *cpudata;
policy = cpufreq_cpu_get(cpu);
if (!policy)
@@ -186,10 +186,10 @@ static int amd_pstate_ut_check_perf(u32 index)
static int amd_pstate_ut_check_freq(u32 index)
{
int cpu = 0;
- struct amd_cpudata *cpudata = NULL;
for_each_online_cpu(cpu) {
struct cpufreq_policy *policy __free(put_cpufreq_policy) = NULL;
+ struct amd_cpudata *cpudata;
policy = cpufreq_cpu_get(cpu);
if (!policy)
--
2.43.0
^ permalink raw reply related [flat|nested] 27+ messages in thread
* [PATCH v5 12/19] cpufreq/amd-pstate: Replace all AMD_CPPC_* macros with masks
2025-02-26 7:49 [PATCH v5 00/19] amd-pstate cleanups Mario Limonciello
` (10 preceding siblings ...)
2025-02-26 7:49 ` [PATCH v5 11/19] cpufreq/amd-pstate-ut: Adjust variable scope Mario Limonciello
@ 2025-02-26 7:49 ` Mario Limonciello
2025-02-26 7:49 ` [PATCH v5 13/19] cpufreq/amd-pstate: Cache CPPC request in shared mem case too Mario Limonciello
` (6 subsequent siblings)
18 siblings, 0 replies; 27+ messages in thread
From: Mario Limonciello @ 2025-02-26 7:49 UTC (permalink / raw)
To: Gautham R . Shenoy, Perry Yuan
Cc: Dhananjay Ugwekar, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
open list:CPU FREQUENCY SCALING FRAMEWORK, Mario Limonciello,
Dhananjay Ugwekar
From: Mario Limonciello <mario.limonciello@amd.com>
Bitfield masks are easier to follow and less error prone.
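A self-contained illustration of the difference (local reimplementations
of GENMASK()/FIELD_GET() so it builds in userspace; the real definitions
live in the kernel headers): the field's position and width are stated
once, in the named mask, instead of at every call site.

#include <stdint.h>
#include <stdio.h>

#define GENMASK32(h, l)		((~0u << (l)) & (~0u >> (31 - (h))))
#define FIELD_GET32(mask, reg)	(((reg) & (mask)) / ((mask) & ~((mask) << 1)))

#define CPPC_NOMINAL_PERF_MASK	GENMASK32(23, 16)

int main(void)
{
	uint32_t cap1 = 0x00aa5500;

	/* old style: shift and width repeated (and easy to get wrong) */
	uint32_t nominal_old = (cap1 >> 16) & 0xff;
	/* new style: one named mask carries both position and width */
	uint32_t nominal_new = FIELD_GET32(CPPC_NOMINAL_PERF_MASK, cap1);

	printf("old=%u new=%u\n", (unsigned)nominal_old, (unsigned)nominal_new);
	return 0;
}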
Reviewed-by: Dhananjay Ugwekar <dhananjay.ugwekar@amd.com>
Reviewed-by: Gautham R. Shenoy <gautham.shenoy@amd.com>
Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
---
v3:
* Add tag
* Add missing includes
v2:
* Add a comment in msr-index.h
* Pick up tag
---
arch/x86/include/asm/msr-index.h | 20 +++++++++++---------
arch/x86/kernel/acpi/cppc.c | 4 +++-
drivers/cpufreq/amd-pstate-ut.c | 9 +++++----
drivers/cpufreq/amd-pstate.c | 16 ++++++----------
4 files changed, 25 insertions(+), 24 deletions(-)
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index c84930610c7e6..cfcc49e5cf925 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -701,15 +701,17 @@
#define MSR_AMD_CPPC_REQ 0xc00102b3
#define MSR_AMD_CPPC_STATUS 0xc00102b4
-#define AMD_CPPC_LOWEST_PERF(x) (((x) >> 0) & 0xff)
-#define AMD_CPPC_LOWNONLIN_PERF(x) (((x) >> 8) & 0xff)
-#define AMD_CPPC_NOMINAL_PERF(x) (((x) >> 16) & 0xff)
-#define AMD_CPPC_HIGHEST_PERF(x) (((x) >> 24) & 0xff)
-
-#define AMD_CPPC_MAX_PERF(x) (((x) & 0xff) << 0)
-#define AMD_CPPC_MIN_PERF(x) (((x) & 0xff) << 8)
-#define AMD_CPPC_DES_PERF(x) (((x) & 0xff) << 16)
-#define AMD_CPPC_ENERGY_PERF_PREF(x) (((x) & 0xff) << 24)
+/* Masks for use with MSR_AMD_CPPC_CAP1 */
+#define AMD_CPPC_LOWEST_PERF_MASK GENMASK(7, 0)
+#define AMD_CPPC_LOWNONLIN_PERF_MASK GENMASK(15, 8)
+#define AMD_CPPC_NOMINAL_PERF_MASK GENMASK(23, 16)
+#define AMD_CPPC_HIGHEST_PERF_MASK GENMASK(31, 24)
+
+/* Masks for use with MSR_AMD_CPPC_REQ */
+#define AMD_CPPC_MAX_PERF_MASK GENMASK(7, 0)
+#define AMD_CPPC_MIN_PERF_MASK GENMASK(15, 8)
+#define AMD_CPPC_DES_PERF_MASK GENMASK(23, 16)
+#define AMD_CPPC_EPP_PERF_MASK GENMASK(31, 24)
/* AMD Performance Counter Global Status and Control MSRs */
#define MSR_AMD64_PERF_CNTR_GLOBAL_STATUS 0xc0000300
diff --git a/arch/x86/kernel/acpi/cppc.c b/arch/x86/kernel/acpi/cppc.c
index d745dd586303c..77bfb846490c0 100644
--- a/arch/x86/kernel/acpi/cppc.c
+++ b/arch/x86/kernel/acpi/cppc.c
@@ -4,6 +4,8 @@
* Copyright (c) 2016, Intel Corporation.
*/
+#include <linux/bitfield.h>
+
#include <acpi/cppc_acpi.h>
#include <asm/msr.h>
#include <asm/processor.h>
@@ -149,7 +151,7 @@ int amd_get_highest_perf(unsigned int cpu, u32 *highest_perf)
if (ret)
goto out;
- val = AMD_CPPC_HIGHEST_PERF(val);
+ val = FIELD_GET(AMD_CPPC_HIGHEST_PERF_MASK, val);
} else {
ret = cppc_get_highest_perf(cpu, &val);
if (ret)
diff --git a/drivers/cpufreq/amd-pstate-ut.c b/drivers/cpufreq/amd-pstate-ut.c
index edc1475989e3d..e671bc7d15508 100644
--- a/drivers/cpufreq/amd-pstate-ut.c
+++ b/drivers/cpufreq/amd-pstate-ut.c
@@ -22,6 +22,7 @@
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#include <linux/bitfield.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/moduleparam.h>
@@ -142,10 +143,10 @@ static int amd_pstate_ut_check_perf(u32 index)
return ret;
}
- highest_perf = AMD_CPPC_HIGHEST_PERF(cap1);
- nominal_perf = AMD_CPPC_NOMINAL_PERF(cap1);
- lowest_nonlinear_perf = AMD_CPPC_LOWNONLIN_PERF(cap1);
- lowest_perf = AMD_CPPC_LOWEST_PERF(cap1);
+ highest_perf = FIELD_GET(AMD_CPPC_HIGHEST_PERF_MASK, cap1);
+ nominal_perf = FIELD_GET(AMD_CPPC_NOMINAL_PERF_MASK, cap1);
+ lowest_nonlinear_perf = FIELD_GET(AMD_CPPC_LOWNONLIN_PERF_MASK, cap1);
+ lowest_perf = FIELD_GET(AMD_CPPC_LOWEST_PERF_MASK, cap1);
}
cur_perf = READ_ONCE(cpudata->perf);
diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
index 2cec8d7d92f51..c2260bbee4eb7 100644
--- a/drivers/cpufreq/amd-pstate.c
+++ b/drivers/cpufreq/amd-pstate.c
@@ -89,11 +89,6 @@ static bool cppc_enabled;
static bool amd_pstate_prefcore = true;
static struct quirk_entry *quirks;
-#define AMD_CPPC_MAX_PERF_MASK GENMASK(7, 0)
-#define AMD_CPPC_MIN_PERF_MASK GENMASK(15, 8)
-#define AMD_CPPC_DES_PERF_MASK GENMASK(23, 16)
-#define AMD_CPPC_EPP_PERF_MASK GENMASK(31, 24)
-
/*
* AMD Energy Preference Performance (EPP)
* The EPP is used in the CCLK DPM controller to drive
@@ -439,12 +434,13 @@ static int msr_init_perf(struct amd_cpudata *cpudata)
perf.highest_perf = numerator;
perf.max_limit_perf = numerator;
- perf.min_limit_perf = AMD_CPPC_LOWEST_PERF(cap1);
- perf.nominal_perf = AMD_CPPC_NOMINAL_PERF(cap1);
- perf.lowest_nonlinear_perf = AMD_CPPC_LOWNONLIN_PERF(cap1);
- perf.lowest_perf = AMD_CPPC_LOWEST_PERF(cap1);
+ perf.min_limit_perf = FIELD_GET(AMD_CPPC_LOWEST_PERF_MASK, cap1);
+ perf.nominal_perf = FIELD_GET(AMD_CPPC_NOMINAL_PERF_MASK, cap1);
+ perf.lowest_nonlinear_perf = FIELD_GET(AMD_CPPC_LOWNONLIN_PERF_MASK, cap1);
+ perf.lowest_perf = FIELD_GET(AMD_CPPC_LOWEST_PERF_MASK, cap1);
WRITE_ONCE(cpudata->perf, perf);
- WRITE_ONCE(cpudata->prefcore_ranking, AMD_CPPC_HIGHEST_PERF(cap1));
+ WRITE_ONCE(cpudata->prefcore_ranking, FIELD_GET(AMD_CPPC_HIGHEST_PERF_MASK, cap1));
+
return 0;
}
--
2.43.0
^ permalink raw reply related [flat|nested] 27+ messages in thread
* [PATCH v5 13/19] cpufreq/amd-pstate: Cache CPPC request in shared mem case too
2025-02-26 7:49 [PATCH v5 00/19] amd-pstate cleanups Mario Limonciello
` (11 preceding siblings ...)
2025-02-26 7:49 ` [PATCH v5 12/19] cpufreq/amd-pstate: Replace all AMD_CPPC_* macros with masks Mario Limonciello
@ 2025-02-26 7:49 ` Mario Limonciello
2025-02-26 7:49 ` [PATCH v5 14/19] cpufreq/amd-pstate: Move all EPP tracing into *_update_perf and *_set_epp functions Mario Limonciello
` (5 subsequent siblings)
18 siblings, 0 replies; 27+ messages in thread
From: Mario Limonciello @ 2025-02-26 7:49 UTC (permalink / raw)
To: Gautham R . Shenoy, Perry Yuan
Cc: Dhananjay Ugwekar, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
open list:CPU FREQUENCY SCALING FRAMEWORK, Mario Limonciello,
Dhananjay Ugwekar
From: Mario Limonciello <mario.limonciello@amd.com>
In order to prevent a potentially unnecessary write in
shmem_update_perf(), cache the request in the cppc_req_cached variable
that is normally only used for the MSR case.
This adds symmetry to the code and potentially avoids extra writes.
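A userspace sketch of the pattern this adopts (field layout matching the
AMD_CPPC_*_MASK definitions; the counter stands in for cppc_set_perf()):
rebuild the full request word, and only take the expensive path when it
differs from the cached previous value.

#include <stdint.h>
#include <stdio.h>

static uint64_t cppc_req_cached;	/* per-CPU in the real driver */
static int expensive_writes;

static void update_perf(uint8_t min_perf, uint8_t des_perf,
			uint8_t max_perf, uint8_t epp)
{
	uint64_t value, prev;

	value = prev = cppc_req_cached;
	value &= ~0xffffffffULL;
	value |= (uint64_t)max_perf;		/* bits  7:0  */
	value |= (uint64_t)min_perf << 8;	/* bits 15:8  */
	value |= (uint64_t)des_perf << 16;	/* bits 23:16 */
	value |= (uint64_t)epp << 24;		/* bits 31:24 */

	if (value == prev)
		return;			/* nothing changed: skip the write */

	expensive_writes++;		/* stands in for cppc_set_perf() */
	cppc_req_cached = value;
}

int main(void)
{
	update_perf(10, 0, 200, 128);
	update_perf(10, 0, 200, 128);	/* identical request: elided */
	printf("writes issued: %d\n", expensive_writes);	/* 1 */
	return 0;
}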
Reviewed-by: Dhananjay Ugwekar <dhananjay.ugwekar@amd.com>
Reviewed-by: Gautham R. Shenoy <gautham.shenoy@amd.com>
Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
---
drivers/cpufreq/amd-pstate.c | 22 +++++++++++++++++++++-
1 file changed, 21 insertions(+), 1 deletion(-)
diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
index c2260bbee4eb7..0c686af5e062d 100644
--- a/drivers/cpufreq/amd-pstate.c
+++ b/drivers/cpufreq/amd-pstate.c
@@ -496,6 +496,8 @@ static int shmem_update_perf(struct amd_cpudata *cpudata, u8 min_perf,
u8 des_perf, u8 max_perf, u8 epp, bool fast_switch)
{
struct cppc_perf_ctrls perf_ctrls;
+ u64 value, prev;
+ int ret;
if (cppc_state == AMD_PSTATE_ACTIVE) {
int ret = shmem_set_epp(cpudata, epp);
@@ -504,11 +506,29 @@ static int shmem_update_perf(struct amd_cpudata *cpudata, u8 min_perf,
return ret;
}
+ value = prev = READ_ONCE(cpudata->cppc_req_cached);
+
+ value &= ~(AMD_CPPC_MAX_PERF_MASK | AMD_CPPC_MIN_PERF_MASK |
+ AMD_CPPC_DES_PERF_MASK | AMD_CPPC_EPP_PERF_MASK);
+ value |= FIELD_PREP(AMD_CPPC_MAX_PERF_MASK, max_perf);
+ value |= FIELD_PREP(AMD_CPPC_DES_PERF_MASK, des_perf);
+ value |= FIELD_PREP(AMD_CPPC_MIN_PERF_MASK, min_perf);
+ value |= FIELD_PREP(AMD_CPPC_EPP_PERF_MASK, epp);
+
+ if (value == prev)
+ return 0;
+
perf_ctrls.max_perf = max_perf;
perf_ctrls.min_perf = min_perf;
perf_ctrls.desired_perf = des_perf;
- return cppc_set_perf(cpudata->cpu, &perf_ctrls);
+ ret = cppc_set_perf(cpudata->cpu, &perf_ctrls);
+ if (ret)
+ return ret;
+
+ WRITE_ONCE(cpudata->cppc_req_cached, value);
+
+ return 0;
}
static inline bool amd_pstate_sample(struct amd_cpudata *cpudata)
--
2.43.0
^ permalink raw reply related [flat|nested] 27+ messages in thread
* [PATCH v5 14/19] cpufreq/amd-pstate: Move all EPP tracing into *_update_perf and *_set_epp functions
2025-02-26 7:49 [PATCH v5 00/19] amd-pstate cleanups Mario Limonciello
` (12 preceding siblings ...)
2025-02-26 7:49 ` [PATCH v5 13/19] cpufreq/amd-pstate: Cache CPPC request in shared mem case too Mario Limonciello
@ 2025-02-26 7:49 ` Mario Limonciello
2025-02-26 7:49 ` [PATCH v5 15/19] cpufreq/amd-pstate: Update cppc_req_cached for shared mem EPP writes Mario Limonciello
` (4 subsequent siblings)
18 siblings, 0 replies; 27+ messages in thread
From: Mario Limonciello @ 2025-02-26 7:49 UTC (permalink / raw)
To: Gautham R . Shenoy, Perry Yuan
Cc: Dhananjay Ugwekar, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
open list:CPU FREQUENCY SCALING FRAMEWORK, Mario Limonciello,
Dhananjay Ugwekar
From: Mario Limonciello <mario.limonciello@amd.com>
The EPP tracing is done by the caller today, but this precludes
including information about whether the CPPC request has actually changed.
Move it into the update_perf and set_epp functions and include information
about whether the request has changed from the last one.
amd_pstate_update_perf() and amd_pstate_set_epp() now require the policy
as an argument instead of the cpudata.
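Roughly, the shape this moves to (a sketch in plain C; printf() stands in
for the trace_amd_pstate_epp_perf() event and the word layout is
simplified): one trace site that sees both the old and the new request,
so it can report whether anything changed.

#include <stdint.h>
#include <stdio.h>

static uint64_t req_cached;

static void update_perf(uint64_t value)
{
	uint64_t prev = req_cached;

	/* the single choke point can compare old and new */
	printf("epp_perf: value=%#llx changed=%d\n",
	       (unsigned long long)value, value != prev);

	if (value == prev)
		return;
	req_cached = value;
}

int main(void)
{
	update_perf(0x80c800f0);
	update_perf(0x80c800f0);	/* traced with changed=0 */
	return 0;
}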
Reviewed-by: Dhananjay Ugwekar <dhananjay.ugwekar@amd.com>
Reviewed-by: Gautham R. Shenoy <gautham.shenoy@amd.com>
Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
---
v4:
* Drop unused variables
v3:
* Add tag
* Update commit message
---
drivers/cpufreq/amd-pstate-trace.h | 13 +++-
drivers/cpufreq/amd-pstate.c | 118 +++++++++++++++++------------
2 files changed, 80 insertions(+), 51 deletions(-)
diff --git a/drivers/cpufreq/amd-pstate-trace.h b/drivers/cpufreq/amd-pstate-trace.h
index f457d4af2c62e..32e1bdc588c52 100644
--- a/drivers/cpufreq/amd-pstate-trace.h
+++ b/drivers/cpufreq/amd-pstate-trace.h
@@ -90,7 +90,8 @@ TRACE_EVENT(amd_pstate_epp_perf,
u8 epp,
u8 min_perf,
u8 max_perf,
- bool boost
+ bool boost,
+ bool changed
),
TP_ARGS(cpu_id,
@@ -98,7 +99,8 @@ TRACE_EVENT(amd_pstate_epp_perf,
epp,
min_perf,
max_perf,
- boost),
+ boost,
+ changed),
TP_STRUCT__entry(
__field(unsigned int, cpu_id)
@@ -107,6 +109,7 @@ TRACE_EVENT(amd_pstate_epp_perf,
__field(u8, min_perf)
__field(u8, max_perf)
__field(bool, boost)
+ __field(bool, changed)
),
TP_fast_assign(
@@ -116,15 +119,17 @@ TRACE_EVENT(amd_pstate_epp_perf,
__entry->min_perf = min_perf;
__entry->max_perf = max_perf;
__entry->boost = boost;
+ __entry->changed = changed;
),
- TP_printk("cpu%u: [%hhu<->%hhu]/%hhu, epp=%hhu, boost=%u",
+ TP_printk("cpu%u: [%hhu<->%hhu]/%hhu, epp=%hhu, boost=%u, changed=%u",
(unsigned int)__entry->cpu_id,
(u8)__entry->min_perf,
(u8)__entry->max_perf,
(u8)__entry->highest_perf,
(u8)__entry->epp,
- (bool)__entry->boost
+ (bool)__entry->boost,
+ (bool)__entry->changed
)
);
diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
index 0c686af5e062d..66b61ce124e21 100644
--- a/drivers/cpufreq/amd-pstate.c
+++ b/drivers/cpufreq/amd-pstate.c
@@ -228,9 +228,10 @@ static u8 shmem_get_epp(struct amd_cpudata *cpudata)
return FIELD_GET(AMD_CPPC_EPP_PERF_MASK, epp);
}
-static int msr_update_perf(struct amd_cpudata *cpudata, u8 min_perf,
+static int msr_update_perf(struct cpufreq_policy *policy, u8 min_perf,
u8 des_perf, u8 max_perf, u8 epp, bool fast_switch)
{
+ struct amd_cpudata *cpudata = policy->driver_data;
u64 value, prev;
value = prev = READ_ONCE(cpudata->cppc_req_cached);
@@ -242,6 +243,18 @@ static int msr_update_perf(struct amd_cpudata *cpudata, u8 min_perf,
value |= FIELD_PREP(AMD_CPPC_MIN_PERF_MASK, min_perf);
value |= FIELD_PREP(AMD_CPPC_EPP_PERF_MASK, epp);
+ if (trace_amd_pstate_epp_perf_enabled()) {
+ union perf_cached perf = READ_ONCE(cpudata->perf);
+
+ trace_amd_pstate_epp_perf(cpudata->cpu,
+ perf.highest_perf,
+ epp,
+ min_perf,
+ max_perf,
+ policy->boost_enabled,
+ value != prev);
+ }
+
if (value == prev)
return 0;
@@ -256,24 +269,26 @@ static int msr_update_perf(struct amd_cpudata *cpudata, u8 min_perf,
}
WRITE_ONCE(cpudata->cppc_req_cached, value);
- WRITE_ONCE(cpudata->epp_cached, epp);
+ if (epp != cpudata->epp_cached)
+ WRITE_ONCE(cpudata->epp_cached, epp);
return 0;
}
DEFINE_STATIC_CALL(amd_pstate_update_perf, msr_update_perf);
-static inline int amd_pstate_update_perf(struct amd_cpudata *cpudata,
+static inline int amd_pstate_update_perf(struct cpufreq_policy *policy,
u8 min_perf, u8 des_perf,
u8 max_perf, u8 epp,
bool fast_switch)
{
- return static_call(amd_pstate_update_perf)(cpudata, min_perf, des_perf,
+ return static_call(amd_pstate_update_perf)(policy, min_perf, des_perf,
max_perf, epp, fast_switch);
}
-static int msr_set_epp(struct amd_cpudata *cpudata, u8 epp)
+static int msr_set_epp(struct cpufreq_policy *policy, u8 epp)
{
+ struct amd_cpudata *cpudata = policy->driver_data;
u64 value, prev;
int ret;
@@ -281,6 +296,19 @@ static int msr_set_epp(struct amd_cpudata *cpudata, u8 epp)
value &= ~AMD_CPPC_EPP_PERF_MASK;
value |= FIELD_PREP(AMD_CPPC_EPP_PERF_MASK, epp);
+ if (trace_amd_pstate_epp_perf_enabled()) {
+ union perf_cached perf = cpudata->perf;
+
+ trace_amd_pstate_epp_perf(cpudata->cpu, perf.highest_perf,
+ epp,
+ FIELD_GET(AMD_CPPC_MIN_PERF_MASK,
+ cpudata->cppc_req_cached),
+ FIELD_GET(AMD_CPPC_MAX_PERF_MASK,
+ cpudata->cppc_req_cached),
+ policy->boost_enabled,
+ value != prev);
+ }
+
if (value == prev)
return 0;
@@ -299,15 +327,29 @@ static int msr_set_epp(struct amd_cpudata *cpudata, u8 epp)
DEFINE_STATIC_CALL(amd_pstate_set_epp, msr_set_epp);
-static inline int amd_pstate_set_epp(struct amd_cpudata *cpudata, u8 epp)
+static inline int amd_pstate_set_epp(struct cpufreq_policy *policy, u8 epp)
{
- return static_call(amd_pstate_set_epp)(cpudata, epp);
+ return static_call(amd_pstate_set_epp)(policy, epp);
}
-static int shmem_set_epp(struct amd_cpudata *cpudata, u8 epp)
+static int shmem_set_epp(struct cpufreq_policy *policy, u8 epp)
{
- int ret;
+ struct amd_cpudata *cpudata = policy->driver_data;
struct cppc_perf_ctrls perf_ctrls;
+ int ret;
+
+ if (trace_amd_pstate_epp_perf_enabled()) {
+ union perf_cached perf = cpudata->perf;
+
+ trace_amd_pstate_epp_perf(cpudata->cpu, perf.highest_perf,
+ epp,
+ FIELD_GET(AMD_CPPC_MIN_PERF_MASK,
+ cpudata->cppc_req_cached),
+ FIELD_GET(AMD_CPPC_MAX_PERF_MASK,
+ cpudata->cppc_req_cached),
+ policy->boost_enabled,
+ epp != cpudata->epp_cached);
+ }
if (epp == cpudata->epp_cached)
return 0;
@@ -339,17 +381,7 @@ static int amd_pstate_set_energy_pref_index(struct cpufreq_policy *policy,
return -EBUSY;
}
- if (trace_amd_pstate_epp_perf_enabled()) {
- union perf_cached perf = READ_ONCE(cpudata->perf);
-
- trace_amd_pstate_epp_perf(cpudata->cpu, perf.highest_perf,
- epp,
- FIELD_GET(AMD_CPPC_MIN_PERF_MASK, cpudata->cppc_req_cached),
- FIELD_GET(AMD_CPPC_MAX_PERF_MASK, cpudata->cppc_req_cached),
- policy->boost_enabled);
- }
-
- return amd_pstate_set_epp(cpudata, epp);
+ return amd_pstate_set_epp(policy, epp);
}
static inline int msr_cppc_enable(bool enable)
@@ -492,15 +524,16 @@ static inline int amd_pstate_init_perf(struct amd_cpudata *cpudata)
return static_call(amd_pstate_init_perf)(cpudata);
}
-static int shmem_update_perf(struct amd_cpudata *cpudata, u8 min_perf,
+static int shmem_update_perf(struct cpufreq_policy *policy, u8 min_perf,
u8 des_perf, u8 max_perf, u8 epp, bool fast_switch)
{
+ struct amd_cpudata *cpudata = policy->driver_data;
struct cppc_perf_ctrls perf_ctrls;
u64 value, prev;
int ret;
if (cppc_state == AMD_PSTATE_ACTIVE) {
- int ret = shmem_set_epp(cpudata, epp);
+ int ret = shmem_set_epp(policy, epp);
if (ret)
return ret;
@@ -515,6 +548,18 @@ static int shmem_update_perf(struct amd_cpudata *cpudata, u8 min_perf,
value |= FIELD_PREP(AMD_CPPC_MIN_PERF_MASK, min_perf);
value |= FIELD_PREP(AMD_CPPC_EPP_PERF_MASK, epp);
+ if (trace_amd_pstate_epp_perf_enabled()) {
+ union perf_cached perf = READ_ONCE(cpudata->perf);
+
+ trace_amd_pstate_epp_perf(cpudata->cpu,
+ perf.highest_perf,
+ epp,
+ min_perf,
+ max_perf,
+ policy->boost_enabled,
+ value != prev);
+ }
+
if (value == prev)
return 0;
@@ -592,7 +637,7 @@ static void amd_pstate_update(struct amd_cpudata *cpudata, u8 min_perf,
cpudata->cpu, fast_switch);
}
- amd_pstate_update_perf(cpudata, min_perf, des_perf, max_perf, 0, fast_switch);
+ amd_pstate_update_perf(policy, min_perf, des_perf, max_perf, 0, fast_switch);
}
static int amd_pstate_verify(struct cpufreq_policy_data *policy_data)
@@ -1531,7 +1576,7 @@ static int amd_pstate_epp_cpu_init(struct cpufreq_policy *policy)
return ret;
WRITE_ONCE(cpudata->cppc_req_cached, value);
}
- ret = amd_pstate_set_epp(cpudata, cpudata->epp_default);
+ ret = amd_pstate_set_epp(policy, cpudata->epp_default);
if (ret)
return ret;
@@ -1572,14 +1617,8 @@ static int amd_pstate_epp_update_limit(struct cpufreq_policy *policy)
epp = READ_ONCE(cpudata->epp_cached);
perf = READ_ONCE(cpudata->perf);
- if (trace_amd_pstate_epp_perf_enabled()) {
- trace_amd_pstate_epp_perf(cpudata->cpu, perf.highest_perf, epp,
- perf.min_limit_perf,
- perf.max_limit_perf,
- policy->boost_enabled);
- }
- return amd_pstate_update_perf(cpudata, perf.min_limit_perf, 0U,
+ return amd_pstate_update_perf(policy, perf.min_limit_perf, 0U,
perf.max_limit_perf, epp, false);
}
@@ -1611,20 +1650,12 @@ static int amd_pstate_epp_set_policy(struct cpufreq_policy *policy)
static int amd_pstate_epp_reenable(struct cpufreq_policy *policy)
{
- struct amd_cpudata *cpudata = policy->driver_data;
- union perf_cached perf = READ_ONCE(cpudata->perf);
int ret;
ret = amd_pstate_cppc_enable(true);
if (ret)
pr_err("failed to enable amd pstate during resume, return %d\n", ret);
- if (trace_amd_pstate_epp_perf_enabled()) {
- trace_amd_pstate_epp_perf(cpudata->cpu, perf.highest_perf,
- cpudata->epp_cached,
- FIELD_GET(AMD_CPPC_MIN_PERF_MASK, cpudata->cppc_req_cached),
- perf.highest_perf, policy->boost_enabled);
- }
return amd_pstate_epp_update_limit(policy);
}
@@ -1652,14 +1683,7 @@ static int amd_pstate_epp_cpu_offline(struct cpufreq_policy *policy)
if (cpudata->suspended)
return 0;
- if (trace_amd_pstate_epp_perf_enabled()) {
- trace_amd_pstate_epp_perf(cpudata->cpu, perf.highest_perf,
- AMD_CPPC_EPP_BALANCE_POWERSAVE,
- perf.lowest_perf, perf.lowest_perf,
- policy->boost_enabled);
- }
-
- return amd_pstate_update_perf(cpudata, perf.lowest_perf, 0, perf.lowest_perf,
+ return amd_pstate_update_perf(policy, perf.lowest_perf, 0, perf.lowest_perf,
AMD_CPPC_EPP_BALANCE_POWERSAVE, false);
}
--
2.43.0
^ permalink raw reply related [flat|nested] 27+ messages in thread
* [PATCH v5 15/19] cpufreq/amd-pstate: Update cppc_req_cached for shared mem EPP writes
2025-02-26 7:49 [PATCH v5 00/19] amd-pstate cleanups Mario Limonciello
` (13 preceding siblings ...)
2025-02-26 7:49 ` [PATCH v5 14/19] cpufreq/amd-pstate: Move all EPP tracing into *_update_perf and *_set_epp functions Mario Limonciello
@ 2025-02-26 7:49 ` Mario Limonciello
2025-02-26 7:49 ` [PATCH v5 16/19] cpufreq/amd-pstate: Drop debug statements for policy setting Mario Limonciello
` (3 subsequent siblings)
18 siblings, 0 replies; 27+ messages in thread
From: Mario Limonciello @ 2025-02-26 7:49 UTC (permalink / raw)
To: Gautham R . Shenoy, Perry Yuan
Cc: Dhananjay Ugwekar, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
open list:CPU FREQUENCY SCALING FRAMEWORK, Mario Limonciello,
Dhananjay Ugwekar
From: Mario Limonciello <mario.limonciello@amd.com>
On EPP-only writes, update the cached variable so that the min/max
performance controls don't need to be updated again.
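The idea in miniature (local mask, not the kernel macros): fold the
freshly written EPP into the cached request word, so a later min/max
update compares against the real state instead of a stale one.

#include <stdint.h>
#include <stdio.h>

#define EPP_MASK	0xff000000u

static uint32_t req_cached;

static void set_epp(uint8_t epp)
{
	/* the EPP write itself goes to firmware in the real driver */
	req_cached = (req_cached & ~EPP_MASK) | ((uint32_t)epp << 24);
}

int main(void)
{
	req_cached = 0x00c800f0;	/* min/max/des already programmed */
	set_epp(0x80);
	printf("req=%#x\n", (unsigned)req_cached);	/* 0x80c800f0 */
	return 0;
}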
Reviewed-by: Dhananjay Ugwekar <dhananjay.ugwekar@amd.com>
Reviewed-by: Gautham R. Shenoy <gautham.shenoy@amd.com>
Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
---
drivers/cpufreq/amd-pstate.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
index 66b61ce124e21..55b6231e6a092 100644
--- a/drivers/cpufreq/amd-pstate.c
+++ b/drivers/cpufreq/amd-pstate.c
@@ -336,6 +336,7 @@ static int shmem_set_epp(struct cpufreq_policy *policy, u8 epp)
{
struct amd_cpudata *cpudata = policy->driver_data;
struct cppc_perf_ctrls perf_ctrls;
+ u64 value;
int ret;
if (trace_amd_pstate_epp_perf_enabled()) {
@@ -362,6 +363,11 @@ static int shmem_set_epp(struct cpufreq_policy *policy, u8 epp)
}
WRITE_ONCE(cpudata->epp_cached, epp);
+ value = READ_ONCE(cpudata->cppc_req_cached);
+ value &= ~AMD_CPPC_EPP_PERF_MASK;
+ value |= FIELD_PREP(AMD_CPPC_EPP_PERF_MASK, epp);
+ WRITE_ONCE(cpudata->cppc_req_cached, value);
+
return ret;
}
--
2.43.0
^ permalink raw reply related [flat|nested] 27+ messages in thread
* [PATCH v5 16/19] cpufreq/amd-pstate: Drop debug statements for policy setting
2025-02-26 7:49 [PATCH v5 00/19] amd-pstate cleanups Mario Limonciello
` (14 preceding siblings ...)
2025-02-26 7:49 ` [PATCH v5 15/19] cpufreq/amd-pstate: Update cppc_req_cached for shared mem EPP writes Mario Limonciello
@ 2025-02-26 7:49 ` Mario Limonciello
2025-02-26 7:49 ` [PATCH v5 17/19] cpufreq/amd-pstate: Rework CPPC enabling Mario Limonciello
` (2 subsequent siblings)
18 siblings, 0 replies; 27+ messages in thread
From: Mario Limonciello @ 2025-02-26 7:49 UTC (permalink / raw)
To: Gautham R . Shenoy, Perry Yuan
Cc: Dhananjay Ugwekar, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
open list:CPU FREQUENCY SCALING FRAMEWORK, Mario Limonciello,
Dhananjay Ugwekar
From: Mario Limonciello <mario.limonciello@amd.com>
Trace events now exist for all amd-pstate modes and will output
information right before programming the hardware. This makes the
existing debug statements unnecessary overhead. Drop them.
Reviewed-by: Dhananjay Ugwekar <dhananjay.ugwekar@amd.com>
Reviewed-by: Gautham R. Shenoy <gautham.shenoy@amd.com>
Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
---
drivers/cpufreq/amd-pstate.c | 4 ----
1 file changed, 4 deletions(-)
diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
index 55b6231e6a092..f0d9ee62cb30d 100644
--- a/drivers/cpufreq/amd-pstate.c
+++ b/drivers/cpufreq/amd-pstate.c
@@ -667,7 +667,6 @@ static int amd_pstate_verify(struct cpufreq_policy_data *policy_data)
}
cpufreq_verify_within_cpu_limits(policy_data);
- pr_debug("policy_max =%d, policy_min=%d\n", policy_data->max, policy_data->min);
return 0;
}
@@ -1636,9 +1635,6 @@ static int amd_pstate_epp_set_policy(struct cpufreq_policy *policy)
if (!policy->cpuinfo.max_freq)
return -ENODEV;
- pr_debug("set_policy: cpuinfo.max %u policy->max %u\n",
- policy->cpuinfo.max_freq, policy->max);
-
cpudata->policy = policy->policy;
ret = amd_pstate_epp_update_limit(policy);
--
2.43.0
^ permalink raw reply related [flat|nested] 27+ messages in thread
* [PATCH v5 17/19] cpufreq/amd-pstate: Rework CPPC enabling
2025-02-26 7:49 [PATCH v5 00/19] amd-pstate cleanups Mario Limonciello
` (15 preceding siblings ...)
2025-02-26 7:49 ` [PATCH v5 16/19] cpufreq/amd-pstate: Drop debug statements for policy setting Mario Limonciello
@ 2025-02-26 7:49 ` Mario Limonciello
2025-03-01 7:03 ` Dhananjay Ugwekar
2025-03-04 5:08 ` Gautham R. Shenoy
2025-02-26 7:49 ` [PATCH v5 18/19] cpufreq/amd-pstate: Stop caching EPP Mario Limonciello
2025-02-26 7:49 ` [PATCH v5 19/19] cpufreq/amd-pstate: Drop actions in amd_pstate_epp_cpu_offline() Mario Limonciello
18 siblings, 2 replies; 27+ messages in thread
From: Mario Limonciello @ 2025-02-26 7:49 UTC (permalink / raw)
To: Gautham R . Shenoy, Perry Yuan
Cc: Dhananjay Ugwekar, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
open list:CPU FREQUENCY SCALING FRAMEWORK, Mario Limonciello
From: Mario Limonciello <mario.limonciello@amd.com>
The CPPC enable register is configured as "write once"; that is,
any future writes don't actually do anything.
Because of this, all the cleanup paths that currently exist for
CPPC disable are ineffective.
Rework CPPC enable to only enable after all the CAP registers have
been read to avoid enabling CPPC on CPUs with invalid _CPC or
unpopulated MSRs.
As the register is write once, remove all cleanup paths as well.
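A toy model of the write-once behaviour that motivates this (plain C;
the bool models the latched hardware bit, not real MSR access): once
set, further writes are ignored, so any "disable" path is dead code and
enabling can simply wait until the caps have been validated.

#include <stdbool.h>
#include <stdio.h>

static bool cppc_latched;	/* models the write-once enable bit */

static void cppc_write_enable(bool enable)
{
	if (cppc_latched)
		return;		/* hardware ignores all later writes */
	if (enable)
		cppc_latched = true;
}

int main(void)
{
	cppc_write_enable(true);
	cppc_write_enable(false);	/* the old cleanup path: a no-op */
	printf("enabled=%d\n", cppc_latched);	/* still 1 */
	return 0;
}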
Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
---
v5:
* Drop unnecessary extra code in shmem_cppc_enable()
* Remove redundant tracing in store_energy_performance_preference()
* Add missing call to amd_pstate_cppc_enable() in passive case
* Leave cpudata->suspended alone in amd_pstate_epp_cpu_online()
* Drop spurious whitespace
v4:
* Remove unnecessary amd_pstate_update_perf() call during online
* Remove unnecessary if (ret) ret.
* Drop amd_pstate_cpu_resume()
* Drop unnecessary derefs
v3:
* Fixup for suspend/resume issue
---
drivers/cpufreq/amd-pstate.c | 179 +++++++----------------------------
1 file changed, 35 insertions(+), 144 deletions(-)
diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
index f0d9ee62cb30d..89e6d32223c9b 100644
--- a/drivers/cpufreq/amd-pstate.c
+++ b/drivers/cpufreq/amd-pstate.c
@@ -85,7 +85,6 @@ static struct cpufreq_driver *current_pstate_driver;
static struct cpufreq_driver amd_pstate_driver;
static struct cpufreq_driver amd_pstate_epp_driver;
static int cppc_state = AMD_PSTATE_UNDEFINED;
-static bool cppc_enabled;
static bool amd_pstate_prefcore = true;
static struct quirk_entry *quirks;
@@ -371,89 +370,21 @@ static int shmem_set_epp(struct cpufreq_policy *policy, u8 epp)
return ret;
}
-static int amd_pstate_set_energy_pref_index(struct cpufreq_policy *policy,
- int pref_index)
+static inline int msr_cppc_enable(struct cpufreq_policy *policy)
{
- struct amd_cpudata *cpudata = policy->driver_data;
- u8 epp;
-
- if (!pref_index)
- epp = cpudata->epp_default;
- else
- epp = epp_values[pref_index];
-
- if (epp > 0 && cpudata->policy == CPUFREQ_POLICY_PERFORMANCE) {
- pr_debug("EPP cannot be set under performance policy\n");
- return -EBUSY;
- }
-
- return amd_pstate_set_epp(policy, epp);
-}
-
-static inline int msr_cppc_enable(bool enable)
-{
- int ret, cpu;
- unsigned long logical_proc_id_mask = 0;
-
- /*
- * MSR_AMD_CPPC_ENABLE is write-once, once set it cannot be cleared.
- */
- if (!enable)
- return 0;
-
- if (enable == cppc_enabled)
- return 0;
-
- for_each_present_cpu(cpu) {
- unsigned long logical_id = topology_logical_package_id(cpu);
-
- if (test_bit(logical_id, &logical_proc_id_mask))
- continue;
-
- set_bit(logical_id, &logical_proc_id_mask);
-
- ret = wrmsrl_safe_on_cpu(cpu, MSR_AMD_CPPC_ENABLE,
- enable);
- if (ret)
- return ret;
- }
-
- cppc_enabled = enable;
- return 0;
+ return wrmsrl_safe_on_cpu(policy->cpu, MSR_AMD_CPPC_ENABLE, 1);
}
-static int shmem_cppc_enable(bool enable)
+static int shmem_cppc_enable(struct cpufreq_policy *policy)
{
- int cpu, ret = 0;
- struct cppc_perf_ctrls perf_ctrls;
-
- if (enable == cppc_enabled)
- return 0;
-
- for_each_present_cpu(cpu) {
- ret = cppc_set_enable(cpu, enable);
- if (ret)
- return ret;
-
- /* Enable autonomous mode for EPP */
- if (cppc_state == AMD_PSTATE_ACTIVE) {
- /* Set desired perf as zero to allow EPP firmware control */
- perf_ctrls.desired_perf = 0;
- ret = cppc_set_perf(cpu, &perf_ctrls);
- if (ret)
- return ret;
- }
- }
-
- cppc_enabled = enable;
- return ret;
+ return cppc_set_enable(policy->cpu, 1);
}
DEFINE_STATIC_CALL(amd_pstate_cppc_enable, msr_cppc_enable);
-static inline int amd_pstate_cppc_enable(bool enable)
+static inline int amd_pstate_cppc_enable(struct cpufreq_policy *policy)
{
- return static_call(amd_pstate_cppc_enable)(enable);
+ return static_call(amd_pstate_cppc_enable)(policy);
}
static int msr_init_perf(struct amd_cpudata *cpudata)
@@ -1069,6 +1000,10 @@ static int amd_pstate_cpu_init(struct cpufreq_policy *policy)
cpudata->nominal_freq,
perf.highest_perf);
+ ret = amd_pstate_cppc_enable(policy);
+ if (ret)
+ goto free_cpudata1;
+
policy->boost_enabled = READ_ONCE(cpudata->boost_supported);
/* It will be updated by governor */
@@ -1116,28 +1051,6 @@ static void amd_pstate_cpu_exit(struct cpufreq_policy *policy)
kfree(cpudata);
}
-static int amd_pstate_cpu_resume(struct cpufreq_policy *policy)
-{
- int ret;
-
- ret = amd_pstate_cppc_enable(true);
- if (ret)
- pr_err("failed to enable amd-pstate during resume, return %d\n", ret);
-
- return ret;
-}
-
-static int amd_pstate_cpu_suspend(struct cpufreq_policy *policy)
-{
- int ret;
-
- ret = amd_pstate_cppc_enable(false);
- if (ret)
- pr_err("failed to disable amd-pstate during suspend, return %d\n", ret);
-
- return ret;
-}
-
/* Sysfs attributes */
/*
@@ -1229,8 +1142,10 @@ static ssize_t show_energy_performance_available_preferences(
static ssize_t store_energy_performance_preference(
struct cpufreq_policy *policy, const char *buf, size_t count)
{
+ struct amd_cpudata *cpudata = policy->driver_data;
char str_preference[21];
ssize_t ret;
+ u8 epp;
ret = sscanf(buf, "%20s", str_preference);
if (ret != 1)
@@ -1240,7 +1155,17 @@ static ssize_t store_energy_performance_preference(
if (ret < 0)
return -EINVAL;
- ret = amd_pstate_set_energy_pref_index(policy, ret);
+ if (!ret)
+ epp = cpudata->epp_default;
+ else
+ epp = epp_values[ret];
+
+ if (epp > 0 && policy->policy == CPUFREQ_POLICY_PERFORMANCE) {
+ pr_debug("EPP cannot be set under performance policy\n");
+ return -EBUSY;
+ }
+
+ ret = amd_pstate_set_epp(policy, epp);
return ret ? ret : count;
}
@@ -1273,7 +1198,6 @@ static ssize_t show_energy_performance_preference(
static void amd_pstate_driver_cleanup(void)
{
- amd_pstate_cppc_enable(false);
cppc_state = AMD_PSTATE_DISABLE;
current_pstate_driver = NULL;
}
@@ -1307,14 +1231,6 @@ static int amd_pstate_register_driver(int mode)
cppc_state = mode;
- ret = amd_pstate_cppc_enable(true);
- if (ret) {
- pr_err("failed to enable cppc during amd-pstate driver registration, return %d\n",
- ret);
- amd_pstate_driver_cleanup();
- return ret;
- }
-
/* at least one CPU supports CPB */
current_pstate_driver->boost_enabled = cpu_feature_enabled(X86_FEATURE_CPB);
@@ -1554,11 +1470,15 @@ static int amd_pstate_epp_cpu_init(struct cpufreq_policy *policy)
policy->cpuinfo.max_freq = policy->max = perf_to_freq(perf,
cpudata->nominal_freq,
perf.highest_perf);
+ policy->driver_data = cpudata;
+
+ ret = amd_pstate_cppc_enable(policy);
+ if (ret)
+ goto free_cpudata1;
/* It will be updated by governor */
policy->cur = policy->cpuinfo.min_freq;
- policy->driver_data = cpudata;
policy->boost_enabled = READ_ONCE(cpudata->boost_supported);
@@ -1650,31 +1570,11 @@ static int amd_pstate_epp_set_policy(struct cpufreq_policy *policy)
return 0;
}
-static int amd_pstate_epp_reenable(struct cpufreq_policy *policy)
-{
- int ret;
-
- ret = amd_pstate_cppc_enable(true);
- if (ret)
- pr_err("failed to enable amd pstate during resume, return %d\n", ret);
-
-
- return amd_pstate_epp_update_limit(policy);
-}
-
static int amd_pstate_epp_cpu_online(struct cpufreq_policy *policy)
{
- struct amd_cpudata *cpudata = policy->driver_data;
- int ret;
-
- pr_debug("AMD CPU Core %d going online\n", cpudata->cpu);
+ pr_debug("AMD CPU Core %d going online\n", policy->cpu);
- ret = amd_pstate_epp_reenable(policy);
- if (ret)
- return ret;
- cpudata->suspended = false;
-
- return 0;
+ return amd_pstate_cppc_enable(policy);
}
static int amd_pstate_epp_cpu_offline(struct cpufreq_policy *policy)
@@ -1692,11 +1592,6 @@ static int amd_pstate_epp_cpu_offline(struct cpufreq_policy *policy)
static int amd_pstate_epp_suspend(struct cpufreq_policy *policy)
{
struct amd_cpudata *cpudata = policy->driver_data;
- int ret;
-
- /* avoid suspending when EPP is not enabled */
- if (cppc_state != AMD_PSTATE_ACTIVE)
- return 0;
/* invalidate to ensure it's rewritten during resume */
cpudata->cppc_req_cached = 0;
@@ -1704,11 +1599,6 @@ static int amd_pstate_epp_suspend(struct cpufreq_policy *policy)
/* set this flag to avoid setting core offline*/
cpudata->suspended = true;
- /* disable CPPC in lowlevel firmware */
- ret = amd_pstate_cppc_enable(false);
- if (ret)
- pr_err("failed to suspend, return %d\n", ret);
-
return 0;
}
@@ -1717,8 +1607,12 @@ static int amd_pstate_epp_resume(struct cpufreq_policy *policy)
struct amd_cpudata *cpudata = policy->driver_data;
if (cpudata->suspended) {
+ int ret;
+
/* enable amd pstate from suspend state*/
- amd_pstate_epp_reenable(policy);
+ ret = amd_pstate_epp_update_limit(policy);
+ if (ret)
+ return ret;
cpudata->suspended = false;
}
@@ -1733,8 +1627,6 @@ static struct cpufreq_driver amd_pstate_driver = {
.fast_switch = amd_pstate_fast_switch,
.init = amd_pstate_cpu_init,
.exit = amd_pstate_cpu_exit,
- .suspend = amd_pstate_cpu_suspend,
- .resume = amd_pstate_cpu_resume,
.set_boost = amd_pstate_set_boost,
.update_limits = amd_pstate_update_limits,
.name = "amd-pstate",
@@ -1901,7 +1793,6 @@ static int __init amd_pstate_init(void)
global_attr_free:
cpufreq_unregister_driver(current_pstate_driver);
- amd_pstate_cppc_enable(false);
return ret;
}
device_initcall(amd_pstate_init);
--
2.43.0
^ permalink raw reply related [flat|nested] 27+ messages in thread
* [PATCH v5 18/19] cpufreq/amd-pstate: Stop caching EPP
2025-02-26 7:49 [PATCH v5 00/19] amd-pstate cleanups Mario Limonciello
` (16 preceding siblings ...)
2025-02-26 7:49 ` [PATCH v5 17/19] cpufreq/amd-pstate: Rework CPPC enabling Mario Limonciello
@ 2025-02-26 7:49 ` Mario Limonciello
2025-02-26 7:49 ` [PATCH v5 19/19] cpufreq/amd-pstate: Drop actions in amd_pstate_epp_cpu_offline() Mario Limonciello
18 siblings, 0 replies; 27+ messages in thread
From: Mario Limonciello @ 2025-02-26 7:49 UTC (permalink / raw)
To: Gautham R . Shenoy, Perry Yuan
Cc: Dhananjay Ugwekar, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
open list:CPU FREQUENCY SCALING FRAMEWORK, Mario Limonciello,
Dhananjay Ugwekar
From: Mario Limonciello <mario.limonciello@amd.com>
EPP values are cached per CPU in the cpudata structure. This is needless,
though, because they are also cached in the CPPC request variable.
Drop the separate cache for EPP values and always reference the CPPC
request variable when needed.
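In miniature (a local helper mirroring what FIELD_GET() does, with a
layout matching AMD_CPPC_EPP_PERF_MASK): the EPP is recovered from the
cached request word on demand, so there is no second copy that can fall
out of sync.

#include <stdint.h>
#include <stdio.h>

#define EPP_SHIFT	24
#define EPP_MASK	(0xffu << EPP_SHIFT)

static uint32_t cppc_req_cached = 0x80c800f0;

static uint8_t current_epp(void)
{
	return (cppc_req_cached & EPP_MASK) >> EPP_SHIFT;
}

int main(void)
{
	printf("epp=%#x\n", (unsigned)current_epp());	/* 0x80 */
	return 0;
}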
Reviewed-by: Dhananjay Ugwekar <dhananjay.ugwekar@amd.com>
Reviewed-by: Gautham R. Shenoy <gautham.shenoy@amd.com>
Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
---
v5:
* Rebase on earlier patch changes
---
drivers/cpufreq/amd-pstate.c | 19 ++++++++++---------
drivers/cpufreq/amd-pstate.h | 1 -
2 files changed, 10 insertions(+), 10 deletions(-)
diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
index 89e6d32223c9b..24a1f9e129b61 100644
--- a/drivers/cpufreq/amd-pstate.c
+++ b/drivers/cpufreq/amd-pstate.c
@@ -268,8 +268,6 @@ static int msr_update_perf(struct cpufreq_policy *policy, u8 min_perf,
}
WRITE_ONCE(cpudata->cppc_req_cached, value);
- if (epp != cpudata->epp_cached)
- WRITE_ONCE(cpudata->epp_cached, epp);
return 0;
}
@@ -318,7 +316,6 @@ static int msr_set_epp(struct cpufreq_policy *policy, u8 epp)
}
/* update both so that msr_update_perf() can effectively check */
- WRITE_ONCE(cpudata->epp_cached, epp);
WRITE_ONCE(cpudata->cppc_req_cached, value);
return ret;
@@ -335,9 +332,12 @@ static int shmem_set_epp(struct cpufreq_policy *policy, u8 epp)
{
struct amd_cpudata *cpudata = policy->driver_data;
struct cppc_perf_ctrls perf_ctrls;
+ u8 epp_cached;
u64 value;
int ret;
+
+ epp_cached = FIELD_GET(AMD_CPPC_EPP_PERF_MASK, cpudata->cppc_req_cached);
if (trace_amd_pstate_epp_perf_enabled()) {
union perf_cached perf = cpudata->perf;
@@ -348,10 +348,10 @@ static int shmem_set_epp(struct cpufreq_policy *policy, u8 epp)
FIELD_GET(AMD_CPPC_MAX_PERF_MASK,
cpudata->cppc_req_cached),
policy->boost_enabled,
- epp != cpudata->epp_cached);
+ epp != epp_cached);
}
- if (epp == cpudata->epp_cached)
+ if (epp == epp_cached)
return 0;
perf_ctrls.energy_perf = epp;
@@ -360,7 +360,6 @@ static int shmem_set_epp(struct cpufreq_policy *policy, u8 epp)
pr_debug("failed to set energy perf value (%d)\n", ret);
return ret;
}
- WRITE_ONCE(cpudata->epp_cached, epp);
value = READ_ONCE(cpudata->cppc_req_cached);
value &= ~AMD_CPPC_EPP_PERF_MASK;
@@ -1174,9 +1173,11 @@ static ssize_t show_energy_performance_preference(
struct cpufreq_policy *policy, char *buf)
{
struct amd_cpudata *cpudata = policy->driver_data;
- u8 preference;
+ u8 preference, epp;
+
+ epp = FIELD_GET(AMD_CPPC_EPP_PERF_MASK, cpudata->cppc_req_cached);
- switch (cpudata->epp_cached) {
+ switch (epp) {
case AMD_CPPC_EPP_PERFORMANCE:
preference = EPP_INDEX_PERFORMANCE;
break;
@@ -1539,7 +1540,7 @@ static int amd_pstate_epp_update_limit(struct cpufreq_policy *policy)
if (cpudata->policy == CPUFREQ_POLICY_PERFORMANCE)
epp = 0;
else
- epp = READ_ONCE(cpudata->epp_cached);
+ epp = FIELD_GET(AMD_CPPC_EPP_PERF_MASK, cpudata->cppc_req_cached);
perf = READ_ONCE(cpudata->perf);
diff --git a/drivers/cpufreq/amd-pstate.h b/drivers/cpufreq/amd-pstate.h
index 1557e1afea79c..fbe1c08d3f061 100644
--- a/drivers/cpufreq/amd-pstate.h
+++ b/drivers/cpufreq/amd-pstate.h
@@ -102,7 +102,6 @@ struct amd_cpudata {
bool hw_prefcore;
/* EPP feature related attributes*/
- u8 epp_cached;
u32 policy;
bool suspended;
u8 epp_default;
--
2.43.0
^ permalink raw reply related [flat|nested] 27+ messages in thread
* [PATCH v5 19/19] cpufreq/amd-pstate: Drop actions in amd_pstate_epp_cpu_offline()
2025-02-26 7:49 [PATCH v5 00/19] amd-pstate cleanups Mario Limonciello
` (17 preceding siblings ...)
2025-02-26 7:49 ` [PATCH v5 18/19] cpufreq/amd-pstate: Stop caching EPP Mario Limonciello
@ 2025-02-26 7:49 ` Mario Limonciello
2025-03-04 5:11 ` Gautham R. Shenoy
18 siblings, 1 reply; 27+ messages in thread
From: Mario Limonciello @ 2025-02-26 7:49 UTC (permalink / raw)
To: Gautham R . Shenoy, Perry Yuan
Cc: Dhananjay Ugwekar, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
open list:CPU FREQUENCY SCALING FRAMEWORK, Mario Limonciello,
Dhananjay Ugwekar
From: Mario Limonciello <mario.limonciello@amd.com>
When a CPU goes offline there is no need to change the CPPC request,
because the CPU will already enter the deepest C-state it supports.
Actually changing the CPPC request when it goes offline messes up the
cached values and can lead to the wrong values being restored when
it comes back online.
Instead, drop the actions and, if the CPU comes back online, let
amd_pstate_epp_set_policy() restore it to the expected values.
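The hazard in miniature (plain C; the constants are made up, only the
shape matters): if the offline path rewrites the request, the cached
copy no longer holds the user's limits, and a restore-from-cache on
online brings back the wrong values.

#include <stdint.h>
#include <stdio.h>

static uint32_t req_cached = 0x80c800f0;	/* user's tuned request */

static void old_offline_path(void)
{
	req_cached = 0x000f000f;	/* forced powersave clobbers it */
}

int main(void)
{
	old_offline_path();
	/* online resume restores from the cache: user settings lost */
	printf("restored=%#x (wanted 0x80c800f0)\n", (unsigned)req_cached);
	return 0;
}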
Reviewed-by: Dhananjay Ugwekar <dhananjay.ugwekar@amd.com>
Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
---
v5:
* Reword commit message
* Add tag
---
drivers/cpufreq/amd-pstate.c | 9 +--------
1 file changed, 1 insertion(+), 8 deletions(-)
diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
index 24a1f9e129b61..4a364ae9b56a1 100644
--- a/drivers/cpufreq/amd-pstate.c
+++ b/drivers/cpufreq/amd-pstate.c
@@ -1580,14 +1580,7 @@ static int amd_pstate_epp_cpu_online(struct cpufreq_policy *policy)
static int amd_pstate_epp_cpu_offline(struct cpufreq_policy *policy)
{
- struct amd_cpudata *cpudata = policy->driver_data;
- union perf_cached perf = READ_ONCE(cpudata->perf);
-
- if (cpudata->suspended)
- return 0;
-
- return amd_pstate_update_perf(policy, perf.lowest_perf, 0, perf.lowest_perf,
- AMD_CPPC_EPP_BALANCE_POWERSAVE, false);
+ return 0;
}
static int amd_pstate_epp_suspend(struct cpufreq_policy *policy)
--
2.43.0
^ permalink raw reply related [flat|nested] 27+ messages in thread
* Re: [PATCH v5 04/19] cpufreq/amd-pstate: Move perf values into a union
2025-02-26 7:49 ` [PATCH v5 04/19] cpufreq/amd-pstate: Move perf values into a union Mario Limonciello
@ 2025-02-27 12:59 ` kernel test robot
2025-02-27 20:09 ` Mario Limonciello
2025-03-01 7:02 ` Dhananjay Ugwekar
1 sibling, 1 reply; 27+ messages in thread
From: kernel test robot @ 2025-02-27 12:59 UTC (permalink / raw)
To: Mario Limonciello, Gautham R . Shenoy, Perry Yuan
Cc: llvm, oe-kbuild-all, Dhananjay Ugwekar,
(open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)), linux-pm,
Mario Limonciello
Hi Mario,
kernel test robot noticed the following build warnings:
[auto build test WARNING on amd-pstate/bleeding-edge]
[cannot apply to rafael-pm/linux-next rafael-pm/bleeding-edge tip/x86/core amd-pstate/linux-next linus/master v6.14-rc4 next-20250227]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Mario-Limonciello/cpufreq-amd-pstate-Invalidate-cppc_req_cached-during-suspend/20250226-155545
base: https://git.kernel.org/pub/scm/linux/kernel/git/superm1/linux.git bleeding-edge
patch link: https://lore.kernel.org/r/20250226074934.1667721-5-superm1%40kernel.org
patch subject: [PATCH v5 04/19] cpufreq/amd-pstate: Move perf values into a union
config: i386-buildonly-randconfig-003-20250227 (https://download.01.org/0day-ci/archive/20250227/202502272001.nafS0qXq-lkp@intel.com/config)
compiler: clang version 19.1.7 (https://github.com/llvm/llvm-project cd708029e0b2869e80abe31ddb175f7c35361f90)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250227/202502272001.nafS0qXq-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202502272001.nafS0qXq-lkp@intel.com/
All warnings (new ones prefixed by >>):
In file included from drivers/cpufreq/amd-pstate.c:51:
In file included from drivers/cpufreq/amd-pstate-trace.h:15:
In file included from include/linux/trace_events.h:6:
In file included from include/linux/ring_buffer.h:5:
In file included from include/linux/mm.h:2224:
include/linux/vmstat.h:504:43: warning: arithmetic between different enumeration types ('enum zone_stat_item' and 'enum numa_stat_item') [-Wenum-enum-conversion]
504 | return vmstat_text[NR_VM_ZONE_STAT_ITEMS +
| ~~~~~~~~~~~~~~~~~~~~~ ^
505 | item];
| ~~~~
include/linux/vmstat.h:511:43: warning: arithmetic between different enumeration types ('enum zone_stat_item' and 'enum numa_stat_item') [-Wenum-enum-conversion]
511 | return vmstat_text[NR_VM_ZONE_STAT_ITEMS +
| ~~~~~~~~~~~~~~~~~~~~~ ^
512 | NR_VM_NUMA_EVENT_ITEMS +
| ~~~~~~~~~~~~~~~~~~~~~~
>> drivers/cpufreq/amd-pstate.c:914:41: warning: variable 'nominal_freq' is uninitialized when used here [-Wuninitialized]
914 | perf.lowest_perf = freq_to_perf(perf, nominal_freq, min_freq);
| ^~~~~~~~~~~~
drivers/cpufreq/amd-pstate.c:902:38: note: initialize the variable 'nominal_freq' to silence this warning
902 | u32 min_freq, max_freq, nominal_freq, lowest_nonlinear_freq;
| ^
| = 0
3 warnings generated.
vim +/nominal_freq +914 drivers/cpufreq/amd-pstate.c
891
892 /*
893 * amd_pstate_init_freq: Initialize the nominal_freq and lowest_nonlinear_freq
894 * for the @cpudata object.
895 *
896 * Requires: all perf members of @cpudata to be initialized.
897 *
898 * Returns 0 on success, non-zero value on failure.
899 */
900 static int amd_pstate_init_freq(struct amd_cpudata *cpudata)
901 {
902 u32 min_freq, max_freq, nominal_freq, lowest_nonlinear_freq;
903 struct cppc_perf_caps cppc_perf;
904 union perf_cached perf;
905 int ret;
906
907 ret = cppc_get_perf_caps(cpudata->cpu, &cppc_perf);
908 if (ret)
909 return ret;
910 perf = READ_ONCE(cpudata->perf);
911
912 if (quirks && quirks->lowest_freq) {
913 min_freq = quirks->lowest_freq;
> 914 perf.lowest_perf = freq_to_perf(perf, nominal_freq, min_freq);
915 WRITE_ONCE(cpudata->perf, perf);
916 } else
917 min_freq = cppc_perf.lowest_freq;
918
919 if (quirks && quirks->nominal_freq)
920 nominal_freq = quirks->nominal_freq;
921 else
922 nominal_freq = cppc_perf.nominal_freq;
923
924 min_freq *= 1000;
925 nominal_freq *= 1000;
926
927 WRITE_ONCE(cpudata->nominal_freq, nominal_freq);
928
929 max_freq = perf_to_freq(perf, nominal_freq, perf.highest_perf);
930 lowest_nonlinear_freq = perf_to_freq(perf, nominal_freq, perf.lowest_nonlinear_perf);
931 WRITE_ONCE(cpudata->lowest_nonlinear_freq, lowest_nonlinear_freq);
932
933 /**
934 * Below values need to be initialized correctly, otherwise driver will fail to load
935 * max_freq is calculated according to (nominal_freq * highest_perf)/nominal_perf
936 * lowest_nonlinear_freq is a value between [min_freq, nominal_freq]
937 * Check _CPC in ACPI table objects if any values are incorrect
938 */
939 if (min_freq <= 0 || max_freq <= 0 || nominal_freq <= 0 || min_freq > max_freq) {
940 pr_err("min_freq(%d) or max_freq(%d) or nominal_freq(%d) value is incorrect\n",
941 min_freq, max_freq, nominal_freq);
942 return -EINVAL;
943 }
944
945 if (lowest_nonlinear_freq <= min_freq || lowest_nonlinear_freq > nominal_freq) {
946 pr_err("lowest_nonlinear_freq(%d) value is out of range [min_freq(%d), nominal_freq(%d)]\n",
947 lowest_nonlinear_freq, min_freq, nominal_freq);
948 return -EINVAL;
949 }
950
951 return 0;
952 }
953
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v5 04/19] cpufreq/amd-pstate: Move perf values into a union
2025-02-27 12:59 ` kernel test robot
@ 2025-02-27 20:09 ` Mario Limonciello
0 siblings, 0 replies; 27+ messages in thread
From: Mario Limonciello @ 2025-02-27 20:09 UTC (permalink / raw)
To: kernel test robot, Gautham R . Shenoy, Perry Yuan
Cc: llvm, oe-kbuild-all, Dhananjay Ugwekar,
(open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)), linux-pm,
Mario Limonciello
On 2/27/2025 06:59, kernel test robot wrote:
> Hi Mario,
>
> kernel test robot noticed the following build warnings:
>
> [auto build test WARNING on amd-pstate/bleeding-edge]
> [cannot apply to rafael-pm/linux-next rafael-pm/bleeding-edge tip/x86/core amd-pstate/linux-next linus/master v6.14-rc4 next-20250227]
> [If your patch is applied to the wrong git tree, kindly drop us a note.
> And when submitting patch, we suggest to use '--base' as documented in
> https://git-scm.com/docs/git-format-patch#_base_tree_information]
>
> url: https://github.com/intel-lab-lkp/linux/commits/Mario-Limonciello/cpufreq-amd-pstate-Invalidate-cppc_req_cached-during-suspend/20250226-155545
> base: https://git.kernel.org/pub/scm/linux/kernel/git/superm1/linux.git bleeding-edge
> patch link: https://lore.kernel.org/r/20250226074934.1667721-5-superm1%40kernel.org
> patch subject: [PATCH v5 04/19] cpufreq/amd-pstate: Move perf values into a union
> config: i386-buildonly-randconfig-003-20250227 (https://download.01.org/0day-ci/archive/20250227/202502272001.nafS0qXq-lkp@intel.com/config)
> compiler: clang version 19.1.7 (https://github.com/llvm/llvm-project cd708029e0b2869e80abe31ddb175f7c35361f90)
> reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250227/202502272001.nafS0qXq-lkp@intel.com/reproduce)
>
> If you fix the issue in a separate patch/commit (i.e. not just a new version of
> the same patch/commit), kindly add following tags
> | Reported-by: kernel test robot <lkp@intel.com>
> | Closes: https://lore.kernel.org/oe-kbuild-all/202502272001.nafS0qXq-lkp@intel.com/
>
> All warnings (new ones prefixed by >>):
>
> In file included from drivers/cpufreq/amd-pstate.c:51:
> In file included from drivers/cpufreq/amd-pstate-trace.h:15:
> In file included from include/linux/trace_events.h:6:
> In file included from include/linux/ring_buffer.h:5:
> In file included from include/linux/mm.h:2224:
> include/linux/vmstat.h:504:43: warning: arithmetic between different enumeration types ('enum zone_stat_item' and 'enum numa_stat_item') [-Wenum-enum-conversion]
> 504 | return vmstat_text[NR_VM_ZONE_STAT_ITEMS +
> | ~~~~~~~~~~~~~~~~~~~~~ ^
> 505 | item];
> | ~~~~
> include/linux/vmstat.h:511:43: warning: arithmetic between different enumeration types ('enum zone_stat_item' and 'enum numa_stat_item') [-Wenum-enum-conversion]
> 511 | return vmstat_text[NR_VM_ZONE_STAT_ITEMS +
> | ~~~~~~~~~~~~~~~~~~~~~ ^
> 512 | NR_VM_NUMA_EVENT_ITEMS +
> | ~~~~~~~~~~~~~~~~~~~~~~
>>> drivers/cpufreq/amd-pstate.c:914:41: warning: variable 'nominal_freq' is uninitialized when used here [-Wuninitialized]
> 914 | perf.lowest_perf = freq_to_perf(perf, nominal_freq, min_freq);
> | ^~~~~~~~~~~~
> drivers/cpufreq/amd-pstate.c:902:38: note: initialize the variable 'nominal_freq' to silence this warning
> 902 | u32 min_freq, max_freq, nominal_freq, lowest_nonlinear_freq;
> | ^
> | = 0
> 3 warnings generated.
>
>
> vim +/nominal_freq +914 drivers/cpufreq/amd-pstate.c
>
> 891
> 892 /*
> 893 * amd_pstate_init_freq: Initialize the nominal_freq and lowest_nonlinear_freq
> 894 * for the @cpudata object.
> 895 *
> 896 * Requires: all perf members of @cpudata to be initialized.
> 897 *
> 898 * Returns 0 on success, non-zero value on failure.
> 899 */
> 900 static int amd_pstate_init_freq(struct amd_cpudata *cpudata)
> 901 {
> 902 u32 min_freq, max_freq, nominal_freq, lowest_nonlinear_freq;
> 903 struct cppc_perf_caps cppc_perf;
> 904 union perf_cached perf;
> 905 int ret;
> 906
> 907 ret = cppc_get_perf_caps(cpudata->cpu, &cppc_perf);
> 908 if (ret)
> 909 return ret;
> 910 perf = READ_ONCE(cpudata->perf);
> 911
> 912 if (quirks && quirks->lowest_freq) {
> 913 min_freq = quirks->lowest_freq;
> > 914 perf.lowest_perf = freq_to_perf(perf, nominal_freq, min_freq);
> 915 WRITE_ONCE(cpudata->perf, perf);
> 916 } else
> 917 min_freq = cppc_perf.lowest_freq;
> 918
> 919 if (quirks && quirks->nominal_freq)
> 920 nominal_freq = quirks->nominal_freq;
> 921 else
> 922 nominal_freq = cppc_perf.nominal_freq;
> 923
> 924 min_freq *= 1000;
> 925 nominal_freq *= 1000;
> 926
> 927 WRITE_ONCE(cpudata->nominal_freq, nominal_freq);
> 928
> 929 max_freq = perf_to_freq(perf, nominal_freq, perf.highest_perf);
> 930 lowest_nonlinear_freq = perf_to_freq(perf, nominal_freq, perf.lowest_nonlinear_perf);
> 931 WRITE_ONCE(cpudata->lowest_nonlinear_freq, lowest_nonlinear_freq);
> 932
> 933 /**
> 934 * Below values need to be initialized correctly, otherwise driver will fail to load
> 935 * max_freq is calculated according to (nominal_freq * highest_perf)/nominal_perf
> 936 * lowest_nonlinear_freq is a value between [min_freq, nominal_freq]
> 937 * Check _CPC in ACPI table objects if any values are incorrect
> 938 */
> 939 if (min_freq <= 0 || max_freq <= 0 || nominal_freq <= 0 || min_freq > max_freq) {
> 940 pr_err("min_freq(%d) or max_freq(%d) or nominal_freq(%d) value is incorrect\n",
> 941 min_freq, max_freq, nominal_freq);
> 942 return -EINVAL;
> 943 }
> 944
> 945 if (lowest_nonlinear_freq <= min_freq || lowest_nonlinear_freq > nominal_freq) {
> 946 pr_err("lowest_nonlinear_freq(%d) value is out of range [min_freq(%d), nominal_freq(%d)]\n",
> 947 lowest_nonlinear_freq, min_freq, nominal_freq);
> 948 return -EINVAL;
> 949 }
> 950
> 951 return 0;
> 952 }
> 953
>
The series is getting close (I think just one more patch needs review).
So if there is no other feedback requiring fixes for the series, I will
squash this in to fix the issue when the series is merged.
diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
index bd8bcda4e6eb0..034ee40681b4c 100644
--- a/drivers/cpufreq/amd-pstate.c
+++ b/drivers/cpufreq/amd-pstate.c
@@ -915,6 +915,12 @@ static int amd_pstate_init_freq(struct amd_cpudata *cpudata)
 		return ret;
 	perf = READ_ONCE(cpudata->perf);
 
+	if (quirks && quirks->nominal_freq)
+		nominal_freq = quirks->nominal_freq;
+	else
+		nominal_freq = cppc_perf.nominal_freq;
+	nominal_freq *= 1000;
+
 	if (quirks && quirks->lowest_freq) {
 		min_freq = quirks->lowest_freq;
 		perf.lowest_perf = freq_to_perf(perf, nominal_freq, min_freq);
@@ -922,13 +928,7 @@ static int amd_pstate_init_freq(struct amd_cpudata *cpudata)
 	} else
 		min_freq = cppc_perf.lowest_freq;
 
-	if (quirks && quirks->nominal_freq)
-		nominal_freq = quirks->nominal_freq;
-	else
-		nominal_freq = cppc_perf.nominal_freq;
-
 	min_freq *= 1000;
-	nominal_freq *= 1000;
 
 	WRITE_ONCE(cpudata->nominal_freq, nominal_freq);
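
For readers following the thread, the start of amd_pstate_init_freq() after
squashing this fix would read roughly as below (an editorial sketch assembled
from the hunks above; the merged commit may differ in context or whitespace):

```
static int amd_pstate_init_freq(struct amd_cpudata *cpudata)
{
	u32 min_freq, max_freq, nominal_freq, lowest_nonlinear_freq;
	struct cppc_perf_caps cppc_perf;
	union perf_cached perf;
	int ret;

	ret = cppc_get_perf_caps(cpudata->cpu, &cppc_perf);
	if (ret)
		return ret;
	perf = READ_ONCE(cpudata->perf);

	/* nominal_freq is now fully computed before freq_to_perf() uses it */
	if (quirks && quirks->nominal_freq)
		nominal_freq = quirks->nominal_freq;
	else
		nominal_freq = cppc_perf.nominal_freq;
	nominal_freq *= 1000;

	if (quirks && quirks->lowest_freq) {
		min_freq = quirks->lowest_freq;
		perf.lowest_perf = freq_to_perf(perf, nominal_freq, min_freq);
		WRITE_ONCE(cpudata->perf, perf);
	} else
		min_freq = cppc_perf.lowest_freq;

	min_freq *= 1000;

	WRITE_ONCE(cpudata->nominal_freq, nominal_freq);
	/* ... remainder unchanged from the listing above ... */
}
```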
^ permalink raw reply related [flat|nested] 27+ messages in thread
* Re: [PATCH v5 04/19] cpufreq/amd-pstate: Move perf values into a union
2025-02-26 7:49 ` [PATCH v5 04/19] cpufreq/amd-pstate: Move perf values into a union Mario Limonciello
2025-02-27 12:59 ` kernel test robot
@ 2025-03-01 7:02 ` Dhananjay Ugwekar
1 sibling, 0 replies; 27+ messages in thread
From: Dhananjay Ugwekar @ 2025-03-01 7:02 UTC (permalink / raw)
To: Mario Limonciello, Gautham R . Shenoy, Perry Yuan
Cc: open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
open list:CPU FREQUENCY SCALING FRAMEWORK, Mario Limonciello
On 2/26/2025 1:19 PM, Mario Limonciello wrote:
> From: Mario Limonciello <mario.limonciello@amd.com>
>
> By storing perf values in a union, all the writes and reads can
> be done atomically, removing the need for some concurrency protections.
>
> While making this change, also drop the cached frequency values,
> using inline helpers to calculate them on demand from the perf values.
>
LGTM now,
Reviewed-by: Dhananjay Ugwekar <dhananjay.ugwekar@amd.com>
Thanks,
Dhananjay
> Reviewed-by: Gautham R. Shenoy <gautham.shenoy@amd.com>
> Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
> ---
> v5:
> * Use suggestion for union docstring
> * Flush out for quirk case
> * Use perf variable in unit test
> v4:
> * Rebase on earlier changes
> v3:
> * Pick up tag
> v2:
> * cache perf variable in unit tests
> * Drop unnecessary check from amd_pstate_update_min_max_limit()
> * Consistency with READ_ONCE()
> * Drop unneeded policy checks
> * add kdoc
> ---
> drivers/cpufreq/amd-pstate-ut.c | 18 +--
> drivers/cpufreq/amd-pstate.c | 205 ++++++++++++++++++--------------
> drivers/cpufreq/amd-pstate.h | 51 +++++---
> 3 files changed, 158 insertions(+), 116 deletions(-)
>
> diff --git a/drivers/cpufreq/amd-pstate-ut.c b/drivers/cpufreq/amd-pstate-ut.c
> index 445278cf40b61..5f6a92a816e61 100644
> --- a/drivers/cpufreq/amd-pstate-ut.c
> +++ b/drivers/cpufreq/amd-pstate-ut.c
> @@ -129,6 +129,7 @@ static void amd_pstate_ut_check_perf(u32 index)
> struct cppc_perf_caps cppc_perf;
> struct cpufreq_policy *policy = NULL;
> struct amd_cpudata *cpudata = NULL;
> + union perf_cached cur_perf;
>
> for_each_possible_cpu(cpu) {
> policy = cpufreq_cpu_get(cpu);
> @@ -162,19 +163,20 @@ static void amd_pstate_ut_check_perf(u32 index)
> lowest_perf = AMD_CPPC_LOWEST_PERF(cap1);
> }
>
> - if (highest_perf != READ_ONCE(cpudata->highest_perf) && !cpudata->hw_prefcore) {
> + cur_perf = READ_ONCE(cpudata->perf);
> + if (highest_perf != cur_perf.highest_perf && !cpudata->hw_prefcore) {
> pr_err("%s cpu%d highest=%d %d highest perf doesn't match\n",
> - __func__, cpu, highest_perf, cpudata->highest_perf);
> + __func__, cpu, highest_perf, cur_perf.highest_perf);
> goto skip_test;
> }
> - if ((nominal_perf != READ_ONCE(cpudata->nominal_perf)) ||
> - (lowest_nonlinear_perf != READ_ONCE(cpudata->lowest_nonlinear_perf)) ||
> - (lowest_perf != READ_ONCE(cpudata->lowest_perf))) {
> + if (nominal_perf != cur_perf.nominal_perf ||
> + (lowest_nonlinear_perf != cur_perf.lowest_nonlinear_perf) ||
> + (lowest_perf != cur_perf.lowest_perf)) {
> amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL;
> pr_err("%s cpu%d nominal=%d %d lowest_nonlinear=%d %d lowest=%d %d, they should be equal!\n",
> - __func__, cpu, nominal_perf, cpudata->nominal_perf,
> - lowest_nonlinear_perf, cpudata->lowest_nonlinear_perf,
> - lowest_perf, cpudata->lowest_perf);
> + __func__, cpu, nominal_perf, cur_perf.nominal_perf,
> + lowest_nonlinear_perf, cur_perf.lowest_nonlinear_perf,
> + lowest_perf, cur_perf.lowest_perf);
> goto skip_test;
> }
>
> diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
> index fb28b27558882..bd8bcda4e6eb0 100644
> --- a/drivers/cpufreq/amd-pstate.c
> +++ b/drivers/cpufreq/amd-pstate.c
> @@ -142,18 +142,17 @@ static struct quirk_entry quirk_amd_7k62 = {
> .lowest_freq = 550,
> };
>
> -static inline u8 freq_to_perf(struct amd_cpudata *cpudata, unsigned int freq_val)
> +static inline u8 freq_to_perf(union perf_cached perf, u32 nominal_freq, unsigned int freq_val)
> {
> - u32 perf_val = DIV_ROUND_UP_ULL((u64)freq_val * cpudata->nominal_perf,
> - cpudata->nominal_freq);
> + u32 perf_val = DIV_ROUND_UP_ULL((u64)freq_val * perf.nominal_perf, nominal_freq);
>
> - return (u8)clamp(perf_val, cpudata->lowest_perf, cpudata->highest_perf);
> + return (u8)clamp(perf_val, perf.lowest_perf, perf.highest_perf);
> }
>
> -static inline u32 perf_to_freq(struct amd_cpudata *cpudata, u8 perf_val)
> +static inline u32 perf_to_freq(union perf_cached perf, u32 nominal_freq, u8 perf_val)
> {
> - return DIV_ROUND_UP_ULL((u64)cpudata->nominal_freq * perf_val,
> - cpudata->nominal_perf);
> + return DIV_ROUND_UP_ULL((u64)nominal_freq * perf_val,
> + perf.nominal_perf);
> }
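
As a quick numeric sanity check of these two helpers (illustrative values
only, not from any particular platform), the conversions round-trip:

```
/*
 * Hypothetical platform: perf.nominal_perf = 140, nominal_freq = 2800000 kHz.
 *
 *   freq_to_perf(perf, 2800000, 1400000)
 *     = DIV_ROUND_UP(1400000 * 140, 2800000) = 70
 *       (then clamped to [perf.lowest_perf, perf.highest_perf])
 *
 *   perf_to_freq(perf, 2800000, 70)
 *     = DIV_ROUND_UP(2800000 * 70, 140) = 1400000 kHz
 */
```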
>
> static int __init dmi_matched_7k62_bios_bug(const struct dmi_system_id *dmi)
> @@ -347,7 +346,9 @@ static int amd_pstate_set_energy_pref_index(struct cpufreq_policy *policy,
> }
>
> if (trace_amd_pstate_epp_perf_enabled()) {
> - trace_amd_pstate_epp_perf(cpudata->cpu, cpudata->highest_perf,
> + union perf_cached perf = READ_ONCE(cpudata->perf);
> +
> + trace_amd_pstate_epp_perf(cpudata->cpu, perf.highest_perf,
> epp,
> FIELD_GET(AMD_CPPC_MIN_PERF_MASK, cpudata->cppc_req_cached),
> FIELD_GET(AMD_CPPC_MAX_PERF_MASK, cpudata->cppc_req_cached),
> @@ -425,6 +426,7 @@ static inline int amd_pstate_cppc_enable(bool enable)
>
> static int msr_init_perf(struct amd_cpudata *cpudata)
> {
> + union perf_cached perf = READ_ONCE(cpudata->perf);
> u64 cap1, numerator;
>
> int ret = rdmsrl_safe_on_cpu(cpudata->cpu, MSR_AMD_CPPC_CAP1,
> @@ -436,19 +438,21 @@ static int msr_init_perf(struct amd_cpudata *cpudata)
> if (ret)
> return ret;
>
> - WRITE_ONCE(cpudata->highest_perf, numerator);
> - WRITE_ONCE(cpudata->max_limit_perf, numerator);
> - WRITE_ONCE(cpudata->nominal_perf, AMD_CPPC_NOMINAL_PERF(cap1));
> - WRITE_ONCE(cpudata->lowest_nonlinear_perf, AMD_CPPC_LOWNONLIN_PERF(cap1));
> - WRITE_ONCE(cpudata->lowest_perf, AMD_CPPC_LOWEST_PERF(cap1));
> + perf.highest_perf = numerator;
> + perf.max_limit_perf = numerator;
> + perf.min_limit_perf = AMD_CPPC_LOWEST_PERF(cap1);
> + perf.nominal_perf = AMD_CPPC_NOMINAL_PERF(cap1);
> + perf.lowest_nonlinear_perf = AMD_CPPC_LOWNONLIN_PERF(cap1);
> + perf.lowest_perf = AMD_CPPC_LOWEST_PERF(cap1);
> + WRITE_ONCE(cpudata->perf, perf);
> WRITE_ONCE(cpudata->prefcore_ranking, AMD_CPPC_HIGHEST_PERF(cap1));
> - WRITE_ONCE(cpudata->min_limit_perf, AMD_CPPC_LOWEST_PERF(cap1));
> return 0;
> }
>
> static int shmem_init_perf(struct amd_cpudata *cpudata)
> {
> struct cppc_perf_caps cppc_perf;
> + union perf_cached perf = READ_ONCE(cpudata->perf);
> u64 numerator;
>
> int ret = cppc_get_perf_caps(cpudata->cpu, &cppc_perf);
> @@ -459,14 +463,14 @@ static int shmem_init_perf(struct amd_cpudata *cpudata)
> if (ret)
> return ret;
>
> - WRITE_ONCE(cpudata->highest_perf, numerator);
> - WRITE_ONCE(cpudata->max_limit_perf, numerator);
> - WRITE_ONCE(cpudata->nominal_perf, cppc_perf.nominal_perf);
> - WRITE_ONCE(cpudata->lowest_nonlinear_perf,
> - cppc_perf.lowest_nonlinear_perf);
> - WRITE_ONCE(cpudata->lowest_perf, cppc_perf.lowest_perf);
> + perf.highest_perf = numerator;
> + perf.max_limit_perf = numerator;
> + perf.min_limit_perf = cppc_perf.lowest_perf;
> + perf.nominal_perf = cppc_perf.nominal_perf;
> + perf.lowest_nonlinear_perf = cppc_perf.lowest_nonlinear_perf;
> + perf.lowest_perf = cppc_perf.lowest_perf;
> + WRITE_ONCE(cpudata->perf, perf);
> WRITE_ONCE(cpudata->prefcore_ranking, cppc_perf.highest_perf);
> - WRITE_ONCE(cpudata->min_limit_perf, cppc_perf.lowest_perf);
>
> if (cppc_state == AMD_PSTATE_ACTIVE)
> return 0;
> @@ -549,14 +553,14 @@ static void amd_pstate_update(struct amd_cpudata *cpudata, u8 min_perf,
> u8 des_perf, u8 max_perf, bool fast_switch, int gov_flags)
> {
> struct cpufreq_policy *policy __free(put_cpufreq_policy) = cpufreq_cpu_get(cpudata->cpu);
> - u8 nominal_perf = READ_ONCE(cpudata->nominal_perf);
> + union perf_cached perf = READ_ONCE(cpudata->perf);
>
> if (!policy)
> return;
>
> des_perf = clamp_t(u8, des_perf, min_perf, max_perf);
>
> - policy->cur = perf_to_freq(cpudata, des_perf);
> + policy->cur = perf_to_freq(perf, cpudata->nominal_freq, des_perf);
>
> if ((cppc_state == AMD_PSTATE_GUIDED) && (gov_flags & CPUFREQ_GOV_DYNAMIC_SWITCHING)) {
> min_perf = des_perf;
> @@ -565,7 +569,7 @@ static void amd_pstate_update(struct amd_cpudata *cpudata, u8 min_perf,
>
> /* limit the max perf when core performance boost feature is disabled */
> if (!cpudata->boost_supported)
> - max_perf = min_t(u8, nominal_perf, max_perf);
> + max_perf = min_t(u8, perf.nominal_perf, max_perf);
>
> if (trace_amd_pstate_perf_enabled() && amd_pstate_sample(cpudata)) {
> trace_amd_pstate_perf(min_perf, des_perf, max_perf, cpudata->freq,
> @@ -602,39 +606,41 @@ static int amd_pstate_verify(struct cpufreq_policy_data *policy_data)
> return 0;
> }
>
> -static int amd_pstate_update_min_max_limit(struct cpufreq_policy *policy)
> +static void amd_pstate_update_min_max_limit(struct cpufreq_policy *policy)
> {
> - u8 max_limit_perf, min_limit_perf;
> struct amd_cpudata *cpudata = policy->driver_data;
> + union perf_cached perf = READ_ONCE(cpudata->perf);
>
> - max_limit_perf = freq_to_perf(cpudata, policy->max);
> - min_limit_perf = freq_to_perf(cpudata, policy->min);
> + perf.max_limit_perf = freq_to_perf(perf, cpudata->nominal_freq, policy->max);
> + perf.min_limit_perf = freq_to_perf(perf, cpudata->nominal_freq, policy->min);
>
> if (cpudata->policy == CPUFREQ_POLICY_PERFORMANCE)
> - min_limit_perf = min(cpudata->nominal_perf, max_limit_perf);
> + perf.min_limit_perf = min(perf.nominal_perf, perf.max_limit_perf);
>
> - WRITE_ONCE(cpudata->max_limit_perf, max_limit_perf);
> - WRITE_ONCE(cpudata->min_limit_perf, min_limit_perf);
> WRITE_ONCE(cpudata->max_limit_freq, policy->max);
> WRITE_ONCE(cpudata->min_limit_freq, policy->min);
> -
> - return 0;
> + WRITE_ONCE(cpudata->perf, perf);
> }
>
> static int amd_pstate_update_freq(struct cpufreq_policy *policy,
> unsigned int target_freq, bool fast_switch)
> {
> struct cpufreq_freqs freqs;
> - struct amd_cpudata *cpudata = policy->driver_data;
> + struct amd_cpudata *cpudata;
> + union perf_cached perf;
> u8 des_perf;
>
> + cpudata = policy->driver_data;
> +
> if (policy->min != cpudata->min_limit_freq || policy->max != cpudata->max_limit_freq)
> amd_pstate_update_min_max_limit(policy);
>
> + perf = READ_ONCE(cpudata->perf);
> +
> freqs.old = policy->cur;
> freqs.new = target_freq;
>
> - des_perf = freq_to_perf(cpudata, target_freq);
> + des_perf = freq_to_perf(perf, cpudata->nominal_freq, target_freq);
>
> WARN_ON(fast_switch && !policy->fast_switch_enabled);
> /*
> @@ -645,8 +651,8 @@ static int amd_pstate_update_freq(struct cpufreq_policy *policy,
> if (!fast_switch)
> cpufreq_freq_transition_begin(policy, &freqs);
>
> - amd_pstate_update(cpudata, cpudata->min_limit_perf, des_perf,
> - cpudata->max_limit_perf, fast_switch,
> + amd_pstate_update(cpudata, perf.min_limit_perf, des_perf,
> + perf.max_limit_perf, fast_switch,
> policy->governor->flags);
>
> if (!fast_switch)
> @@ -675,9 +681,10 @@ static void amd_pstate_adjust_perf(unsigned int cpu,
> unsigned long target_perf,
> unsigned long capacity)
> {
> - u8 max_perf, min_perf, des_perf, cap_perf, min_limit_perf;
> + u8 max_perf, min_perf, des_perf, cap_perf;
> struct cpufreq_policy *policy __free(put_cpufreq_policy) = cpufreq_cpu_get(cpu);
> struct amd_cpudata *cpudata;
> + union perf_cached perf;
>
> if (!policy)
> return;
> @@ -687,8 +694,8 @@ static void amd_pstate_adjust_perf(unsigned int cpu,
> if (policy->min != cpudata->min_limit_freq || policy->max != cpudata->max_limit_freq)
> amd_pstate_update_min_max_limit(policy);
>
> - cap_perf = READ_ONCE(cpudata->highest_perf);
> - min_limit_perf = READ_ONCE(cpudata->min_limit_perf);
> + perf = READ_ONCE(cpudata->perf);
> + cap_perf = perf.highest_perf;
>
> des_perf = cap_perf;
> if (target_perf < capacity)
> @@ -699,10 +706,10 @@ static void amd_pstate_adjust_perf(unsigned int cpu,
> else
> min_perf = cap_perf;
>
> - if (min_perf < min_limit_perf)
> - min_perf = min_limit_perf;
> + if (min_perf < perf.min_limit_perf)
> + min_perf = perf.min_limit_perf;
>
> - max_perf = cpudata->max_limit_perf;
> + max_perf = perf.max_limit_perf;
> if (max_perf < min_perf)
> max_perf = min_perf;
>
> @@ -713,11 +720,12 @@ static void amd_pstate_adjust_perf(unsigned int cpu,
> static int amd_pstate_cpu_boost_update(struct cpufreq_policy *policy, bool on)
> {
> struct amd_cpudata *cpudata = policy->driver_data;
> + union perf_cached perf = READ_ONCE(cpudata->perf);
> u32 nominal_freq, max_freq;
> int ret = 0;
>
> nominal_freq = READ_ONCE(cpudata->nominal_freq);
> - max_freq = perf_to_freq(cpudata, READ_ONCE(cpudata->highest_perf));
> + max_freq = perf_to_freq(perf, cpudata->nominal_freq, perf.highest_perf);
>
> if (on)
> policy->cpuinfo.max_freq = max_freq;
> @@ -888,30 +896,30 @@ static u32 amd_pstate_get_transition_latency(unsigned int cpu)
> }
>
> /*
> - * amd_pstate_init_freq: Initialize the max_freq, min_freq,
> - * nominal_freq and lowest_nonlinear_freq for
> - * the @cpudata object.
> + * amd_pstate_init_freq: Initialize the nominal_freq and lowest_nonlinear_freq
> + * for the @cpudata object.
> *
> - * Requires: highest_perf, lowest_perf, nominal_perf and
> - * lowest_nonlinear_perf members of @cpudata to be
> - * initialized.
> + * Requires: all perf members of @cpudata to be initialized.
> *
> - * Returns 0 on success, non-zero value on failure.
> + * Returns 0 on success, non-zero value on failure.
> */
> static int amd_pstate_init_freq(struct amd_cpudata *cpudata)
> {
> - int ret;
> - u32 min_freq, max_freq;
> - u32 nominal_freq, lowest_nonlinear_freq;
> + u32 min_freq, max_freq, nominal_freq, lowest_nonlinear_freq;
> struct cppc_perf_caps cppc_perf;
> + union perf_cached perf;
> + int ret;
>
> ret = cppc_get_perf_caps(cpudata->cpu, &cppc_perf);
> if (ret)
> return ret;
> + perf = READ_ONCE(cpudata->perf);
>
> - if (quirks && quirks->lowest_freq)
> + if (quirks && quirks->lowest_freq) {
> min_freq = quirks->lowest_freq;
> - else
> + perf.lowest_perf = freq_to_perf(perf, nominal_freq, min_freq);
> + WRITE_ONCE(cpudata->perf, perf);
> + } else
> min_freq = cppc_perf.lowest_freq;
>
> if (quirks && quirks->nominal_freq)
> @@ -924,8 +932,8 @@ static int amd_pstate_init_freq(struct amd_cpudata *cpudata)
>
> WRITE_ONCE(cpudata->nominal_freq, nominal_freq);
>
> - max_freq = perf_to_freq(cpudata, cpudata->highest_perf);
> - lowest_nonlinear_freq = perf_to_freq(cpudata, cpudata->lowest_nonlinear_perf);
> + max_freq = perf_to_freq(perf, nominal_freq, perf.highest_perf);
> + lowest_nonlinear_freq = perf_to_freq(perf, nominal_freq, perf.lowest_nonlinear_perf);
> WRITE_ONCE(cpudata->lowest_nonlinear_freq, lowest_nonlinear_freq);
>
> /**
> @@ -952,6 +960,7 @@ static int amd_pstate_init_freq(struct amd_cpudata *cpudata)
> static int amd_pstate_cpu_init(struct cpufreq_policy *policy)
> {
> struct amd_cpudata *cpudata;
> + union perf_cached perf;
> struct device *dev;
> int ret;
>
> @@ -987,8 +996,14 @@ static int amd_pstate_cpu_init(struct cpufreq_policy *policy)
> policy->cpuinfo.transition_latency = amd_pstate_get_transition_latency(policy->cpu);
> policy->transition_delay_us = amd_pstate_get_transition_delay_us(policy->cpu);
>
> - policy->cpuinfo.min_freq = policy->min = perf_to_freq(cpudata, cpudata->lowest_perf);
> - policy->cpuinfo.max_freq = policy->max = perf_to_freq(cpudata, cpudata->highest_perf);
> + perf = READ_ONCE(cpudata->perf);
> +
> + policy->cpuinfo.min_freq = policy->min = perf_to_freq(perf,
> + cpudata->nominal_freq,
> + perf.lowest_perf);
> + policy->cpuinfo.max_freq = policy->max = perf_to_freq(perf,
> + cpudata->nominal_freq,
> + perf.highest_perf);
>
> policy->boost_enabled = READ_ONCE(cpudata->boost_supported);
>
> @@ -1069,23 +1084,27 @@ static int amd_pstate_cpu_suspend(struct cpufreq_policy *policy)
> static ssize_t show_amd_pstate_max_freq(struct cpufreq_policy *policy,
> char *buf)
> {
> - struct amd_cpudata *cpudata = policy->driver_data;
> + struct amd_cpudata *cpudata;
> + union perf_cached perf;
>
> + cpudata = policy->driver_data;
> + perf = READ_ONCE(cpudata->perf);
>
> - return sysfs_emit(buf, "%u\n", perf_to_freq(cpudata, READ_ONCE(cpudata->highest_perf)));
> + return sysfs_emit(buf, "%u\n",
> + perf_to_freq(perf, cpudata->nominal_freq, perf.highest_perf));
> }
>
> static ssize_t show_amd_pstate_lowest_nonlinear_freq(struct cpufreq_policy *policy,
> char *buf)
> {
> - int freq;
> - struct amd_cpudata *cpudata = policy->driver_data;
> + struct amd_cpudata *cpudata;
> + union perf_cached perf;
>
> - freq = READ_ONCE(cpudata->lowest_nonlinear_freq);
> - if (freq < 0)
> - return freq;
> + cpudata = policy->driver_data;
> + perf = READ_ONCE(cpudata->perf);
>
> - return sysfs_emit(buf, "%u\n", freq);
> + return sysfs_emit(buf, "%u\n",
> + perf_to_freq(perf, cpudata->nominal_freq, perf.lowest_nonlinear_perf));
> }
>
> /*
> @@ -1095,12 +1114,11 @@ static ssize_t show_amd_pstate_lowest_nonlinear_freq(struct cpufreq_policy *poli
> static ssize_t show_amd_pstate_highest_perf(struct cpufreq_policy *policy,
> char *buf)
> {
> - u8 perf;
> - struct amd_cpudata *cpudata = policy->driver_data;
> + struct amd_cpudata *cpudata;
>
> - perf = READ_ONCE(cpudata->highest_perf);
> + cpudata = policy->driver_data;
>
> - return sysfs_emit(buf, "%u\n", perf);
> + return sysfs_emit(buf, "%u\n", cpudata->perf.highest_perf);
> }
>
> static ssize_t show_amd_pstate_prefcore_ranking(struct cpufreq_policy *policy,
> @@ -1431,6 +1449,7 @@ static bool amd_pstate_acpi_pm_profile_undefined(void)
> static int amd_pstate_epp_cpu_init(struct cpufreq_policy *policy)
> {
> struct amd_cpudata *cpudata;
> + union perf_cached perf;
> struct device *dev;
> u64 value;
> int ret;
> @@ -1464,8 +1483,15 @@ static int amd_pstate_epp_cpu_init(struct cpufreq_policy *policy)
> if (ret)
> goto free_cpudata1;
>
> - policy->cpuinfo.min_freq = policy->min = perf_to_freq(cpudata, cpudata->lowest_perf);
> - policy->cpuinfo.max_freq = policy->max = perf_to_freq(cpudata, cpudata->highest_perf);
> + perf = READ_ONCE(cpudata->perf);
> +
> + policy->cpuinfo.min_freq = policy->min = perf_to_freq(perf,
> + cpudata->nominal_freq,
> + perf.lowest_perf);
> + policy->cpuinfo.max_freq = policy->max = perf_to_freq(perf,
> + cpudata->nominal_freq,
> + perf.highest_perf);
> +
> /* It will be updated by governor */
> policy->cur = policy->cpuinfo.min_freq;
>
> @@ -1526,6 +1552,7 @@ static void amd_pstate_epp_cpu_exit(struct cpufreq_policy *policy)
> static int amd_pstate_epp_update_limit(struct cpufreq_policy *policy)
> {
> struct amd_cpudata *cpudata = policy->driver_data;
> + union perf_cached perf;
> u8 epp;
>
> if (policy->min != cpudata->min_limit_freq || policy->max != cpudata->max_limit_freq)
> @@ -1536,15 +1563,16 @@ static int amd_pstate_epp_update_limit(struct cpufreq_policy *policy)
> else
> epp = READ_ONCE(cpudata->epp_cached);
>
> + perf = READ_ONCE(cpudata->perf);
> if (trace_amd_pstate_epp_perf_enabled()) {
> - trace_amd_pstate_epp_perf(cpudata->cpu, cpudata->highest_perf, epp,
> - cpudata->min_limit_perf,
> - cpudata->max_limit_perf,
> + trace_amd_pstate_epp_perf(cpudata->cpu, perf.highest_perf, epp,
> + perf.min_limit_perf,
> + perf.max_limit_perf,
> policy->boost_enabled);
> }
>
> - return amd_pstate_update_perf(cpudata, cpudata->min_limit_perf, 0U,
> - cpudata->max_limit_perf, epp, false);
> + return amd_pstate_update_perf(cpudata, perf.min_limit_perf, 0U,
> + perf.max_limit_perf, epp, false);
> }
>
> static int amd_pstate_epp_set_policy(struct cpufreq_policy *policy)
> @@ -1576,20 +1604,18 @@ static int amd_pstate_epp_set_policy(struct cpufreq_policy *policy)
> static int amd_pstate_epp_reenable(struct cpufreq_policy *policy)
> {
> struct amd_cpudata *cpudata = policy->driver_data;
> - u8 max_perf;
> + union perf_cached perf = READ_ONCE(cpudata->perf);
> int ret;
>
> ret = amd_pstate_cppc_enable(true);
> if (ret)
> pr_err("failed to enable amd pstate during resume, return %d\n", ret);
>
> - max_perf = READ_ONCE(cpudata->highest_perf);
> -
> if (trace_amd_pstate_epp_perf_enabled()) {
> - trace_amd_pstate_epp_perf(cpudata->cpu, cpudata->highest_perf,
> + trace_amd_pstate_epp_perf(cpudata->cpu, perf.highest_perf,
> cpudata->epp_cached,
> FIELD_GET(AMD_CPPC_MIN_PERF_MASK, cpudata->cppc_req_cached),
> - max_perf, policy->boost_enabled);
> + perf.highest_perf, policy->boost_enabled);
> }
>
> return amd_pstate_epp_update_limit(policy);
> @@ -1613,22 +1639,21 @@ static int amd_pstate_epp_cpu_online(struct cpufreq_policy *policy)
> static int amd_pstate_epp_cpu_offline(struct cpufreq_policy *policy)
> {
> struct amd_cpudata *cpudata = policy->driver_data;
> - u8 min_perf;
> + union perf_cached perf = READ_ONCE(cpudata->perf);
>
> if (cpudata->suspended)
> return 0;
>
> - min_perf = READ_ONCE(cpudata->lowest_perf);
> -
> guard(mutex)(&amd_pstate_limits_lock);
>
> if (trace_amd_pstate_epp_perf_enabled()) {
> - trace_amd_pstate_epp_perf(cpudata->cpu, cpudata->highest_perf,
> + trace_amd_pstate_epp_perf(cpudata->cpu, perf.highest_perf,
> AMD_CPPC_EPP_BALANCE_POWERSAVE,
> - min_perf, min_perf, policy->boost_enabled);
> + perf.lowest_perf, perf.lowest_perf,
> + policy->boost_enabled);
> }
>
> - return amd_pstate_update_perf(cpudata, min_perf, 0, min_perf,
> + return amd_pstate_update_perf(cpudata, perf.lowest_perf, 0, perf.lowest_perf,
> AMD_CPPC_EPP_BALANCE_POWERSAVE, false);
> }
>
> diff --git a/drivers/cpufreq/amd-pstate.h b/drivers/cpufreq/amd-pstate.h
> index 0149933692458..83532a0079a81 100644
> --- a/drivers/cpufreq/amd-pstate.h
> +++ b/drivers/cpufreq/amd-pstate.h
> @@ -13,6 +13,36 @@
> /*********************************************************************
> * AMD P-state INTERFACE *
> *********************************************************************/
> +
> +/**
> + * union perf_cached - A union to cache performance-related data.
> + * @highest_perf: the maximum performance an individual processor may reach,
> + * assuming ideal conditions
> + * For platforms that support the preferred core feature, the highest_perf value may be
> + * configured to any value in the range 166-255 by the firmware (because the preferred
> + * core ranking is encoded in the highest_perf value). To maintain consistency across
> + * all platforms, we split the highest_perf and preferred core ranking values into
> + * cpudata->perf.highest_perf and cpudata->prefcore_ranking.
> + * @nominal_perf: the maximum sustained performance level of the processor,
> + * assuming ideal operating conditions
> + * @lowest_nonlinear_perf: the lowest performance level at which nonlinear power
> + * savings are achieved
> + * @lowest_perf: the absolute lowest performance level of the processor
> + * @min_limit_perf: Cached value of the performance corresponding to policy->min
> + * @max_limit_perf: Cached value of the performance corresponding to policy->max
> + */
> +union perf_cached {
> + struct {
> + u8 highest_perf;
> + u8 nominal_perf;
> + u8 lowest_nonlinear_perf;
> + u8 lowest_perf;
> + u8 min_limit_perf;
> + u8 max_limit_perf;
> + };
> + u64 val;
> +};
> +
> /**
> * struct amd_aperf_mperf
> * @aperf: actual performance frequency clock count
> @@ -30,20 +60,9 @@ struct amd_aperf_mperf {
> * @cpu: CPU number
> * @req: constraint request to apply
> * @cppc_req_cached: cached performance request hints
> - * @highest_perf: the maximum performance an individual processor may reach,
> - * assuming ideal conditions
> - * For platforms that do not support the preferred core feature, the
> - * highest_pef may be configured with 166 or 255, to avoid max frequency
> - * calculated wrongly. we take the fixed value as the highest_perf.
> - * @nominal_perf: the maximum sustained performance level of the processor,
> - * assuming ideal operating conditions
> - * @lowest_nonlinear_perf: the lowest performance level at which nonlinear power
> - * savings are achieved
> - * @lowest_perf: the absolute lowest performance level of the processor
> + * @perf: cached performance-related data
> * @prefcore_ranking: the preferred core ranking, the higher value indicates a higher
> * priority.
> - * @min_limit_perf: Cached value of the performance corresponding to policy->min
> - * @max_limit_perf: Cached value of the performance corresponding to policy->max
> * @min_limit_freq: Cached value of policy->min (in khz)
> * @max_limit_freq: Cached value of policy->max (in khz)
> * @nominal_freq: the frequency (in khz) that mapped to nominal_perf
> @@ -68,13 +87,9 @@ struct amd_cpudata {
> struct freq_qos_request req[2];
> u64 cppc_req_cached;
>
> - u8 highest_perf;
> - u8 nominal_perf;
> - u8 lowest_nonlinear_perf;
> - u8 lowest_perf;
> + union perf_cached perf;
> +
> u8 prefcore_ranking;
> - u8 min_limit_perf;
> - u8 max_limit_perf;
> u32 min_limit_freq;
> u32 max_limit_freq;
> u32 nominal_freq;
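
Tying the union definition back to the commit message: because the six u8
fields overlay a single u64, the limit updates in this patch become a plain
read-modify-write of one value. A minimal sketch of the access pattern (an
editorial illustration distilled from the hunks above, not a quote from the
patch):

```
union perf_cached perf = READ_ONCE(cpudata->perf);	/* one 64-bit load */

perf.min_limit_perf = freq_to_perf(perf, cpudata->nominal_freq, policy->min);
perf.max_limit_perf = freq_to_perf(perf, cpudata->nominal_freq, policy->max);

WRITE_ONCE(cpudata->perf, perf);			/* one 64-bit store */
```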
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v5 17/19] cpufreq/amd-pstate: Rework CPPC enabling
2025-02-26 7:49 ` [PATCH v5 17/19] cpufreq/amd-pstate: Rework CPPC enabling Mario Limonciello
@ 2025-03-01 7:03 ` Dhananjay Ugwekar
2025-03-04 5:08 ` Gautham R. Shenoy
1 sibling, 0 replies; 27+ messages in thread
From: Dhananjay Ugwekar @ 2025-03-01 7:03 UTC (permalink / raw)
To: Mario Limonciello, Gautham R . Shenoy, Perry Yuan
Cc: open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
open list:CPU FREQUENCY SCALING FRAMEWORK, Mario Limonciello
On 2/26/2025 1:19 PM, Mario Limonciello wrote:
> From: Mario Limonciello <mario.limonciello@amd.com>
>
> The CPPC enable register is configured as "write once". That is,
> any future writes don't actually do anything.
>
> Because of this, all the cleanup paths that currently exist for
> CPPC disable are non-effective.
>
> Rework CPPC enable to only enable after all the CAP registers have
> been read to avoid enabling CPPC on CPUs with invalid _CPC or
> unpopulated MSRs.
>
> As the register is write once, remove all cleanup paths as well.
>
LGTM now,
Reviewed-by: Dhananjay Ugwekar <dhananjay.ugwekar@amd.com>
Thanks,
Dhananjay
> Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
> ---
> v5:
> * Drop unnecessary extra code in shmem_cppc_enable()
> * Remove redundant tracing in store_energy_performance_preference()
> * Add missing call to amd_pstate_cppc_enable() in passive case
> * Leave cpudata->suspended alone in amd_pstate_epp_cpu_online()
> * Drop spurious whitespace
> v4:
> * Remove unnecessary amd_pstate_update_perf() call during online
> * Remove unnecessary if (ret) ret.
> * Drop amd_pstate_cpu_resume()
> * Drop unnecessary derefs
> v3:
> * Fixup for suspend/resume issue
> ---
> drivers/cpufreq/amd-pstate.c | 179 +++++++----------------------------
> 1 file changed, 35 insertions(+), 144 deletions(-)
>
> diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
> index f0d9ee62cb30d..89e6d32223c9b 100644
> --- a/drivers/cpufreq/amd-pstate.c
> +++ b/drivers/cpufreq/amd-pstate.c
> @@ -85,7 +85,6 @@ static struct cpufreq_driver *current_pstate_driver;
> static struct cpufreq_driver amd_pstate_driver;
> static struct cpufreq_driver amd_pstate_epp_driver;
> static int cppc_state = AMD_PSTATE_UNDEFINED;
> -static bool cppc_enabled;
> static bool amd_pstate_prefcore = true;
> static struct quirk_entry *quirks;
>
> @@ -371,89 +370,21 @@ static int shmem_set_epp(struct cpufreq_policy *policy, u8 epp)
> return ret;
> }
>
> -static int amd_pstate_set_energy_pref_index(struct cpufreq_policy *policy,
> - int pref_index)
> +static inline int msr_cppc_enable(struct cpufreq_policy *policy)
> {
> - struct amd_cpudata *cpudata = policy->driver_data;
> - u8 epp;
> -
> - if (!pref_index)
> - epp = cpudata->epp_default;
> - else
> - epp = epp_values[pref_index];
> -
> - if (epp > 0 && cpudata->policy == CPUFREQ_POLICY_PERFORMANCE) {
> - pr_debug("EPP cannot be set under performance policy\n");
> - return -EBUSY;
> - }
> -
> - return amd_pstate_set_epp(policy, epp);
> -}
> -
> -static inline int msr_cppc_enable(bool enable)
> -{
> - int ret, cpu;
> - unsigned long logical_proc_id_mask = 0;
> -
> - /*
> - * MSR_AMD_CPPC_ENABLE is write-once, once set it cannot be cleared.
> - */
> - if (!enable)
> - return 0;
> -
> - if (enable == cppc_enabled)
> - return 0;
> -
> - for_each_present_cpu(cpu) {
> - unsigned long logical_id = topology_logical_package_id(cpu);
> -
> - if (test_bit(logical_id, &logical_proc_id_mask))
> - continue;
> -
> - set_bit(logical_id, &logical_proc_id_mask);
> -
> - ret = wrmsrl_safe_on_cpu(cpu, MSR_AMD_CPPC_ENABLE,
> - enable);
> - if (ret)
> - return ret;
> - }
> -
> - cppc_enabled = enable;
> - return 0;
> + return wrmsrl_safe_on_cpu(policy->cpu, MSR_AMD_CPPC_ENABLE, 1);
> }
>
> -static int shmem_cppc_enable(bool enable)
> +static int shmem_cppc_enable(struct cpufreq_policy *policy)
> {
> - int cpu, ret = 0;
> - struct cppc_perf_ctrls perf_ctrls;
> -
> - if (enable == cppc_enabled)
> - return 0;
> -
> - for_each_present_cpu(cpu) {
> - ret = cppc_set_enable(cpu, enable);
> - if (ret)
> - return ret;
> -
> - /* Enable autonomous mode for EPP */
> - if (cppc_state == AMD_PSTATE_ACTIVE) {
> - /* Set desired perf as zero to allow EPP firmware control */
> - perf_ctrls.desired_perf = 0;
> - ret = cppc_set_perf(cpu, &perf_ctrls);
> - if (ret)
> - return ret;
> - }
> - }
> -
> - cppc_enabled = enable;
> - return ret;
> + return cppc_set_enable(policy->cpu, 1);
> }
>
> DEFINE_STATIC_CALL(amd_pstate_cppc_enable, msr_cppc_enable);
>
> -static inline int amd_pstate_cppc_enable(bool enable)
> +static inline int amd_pstate_cppc_enable(struct cpufreq_policy *policy)
> {
> - return static_call(amd_pstate_cppc_enable)(enable);
> + return static_call(amd_pstate_cppc_enable)(policy);
> }
>
> static int msr_init_perf(struct amd_cpudata *cpudata)
> @@ -1069,6 +1000,10 @@ static int amd_pstate_cpu_init(struct cpufreq_policy *policy)
> cpudata->nominal_freq,
> perf.highest_perf);
>
> + ret = amd_pstate_cppc_enable(policy);
> + if (ret)
> + goto free_cpudata1;
> +
> policy->boost_enabled = READ_ONCE(cpudata->boost_supported);
>
> /* It will be updated by governor */
> @@ -1116,28 +1051,6 @@ static void amd_pstate_cpu_exit(struct cpufreq_policy *policy)
> kfree(cpudata);
> }
>
> -static int amd_pstate_cpu_resume(struct cpufreq_policy *policy)
> -{
> - int ret;
> -
> - ret = amd_pstate_cppc_enable(true);
> - if (ret)
> - pr_err("failed to enable amd-pstate during resume, return %d\n", ret);
> -
> - return ret;
> -}
> -
> -static int amd_pstate_cpu_suspend(struct cpufreq_policy *policy)
> -{
> - int ret;
> -
> - ret = amd_pstate_cppc_enable(false);
> - if (ret)
> - pr_err("failed to disable amd-pstate during suspend, return %d\n", ret);
> -
> - return ret;
> -}
> -
> /* Sysfs attributes */
>
> /*
> @@ -1229,8 +1142,10 @@ static ssize_t show_energy_performance_available_preferences(
> static ssize_t store_energy_performance_preference(
> struct cpufreq_policy *policy, const char *buf, size_t count)
> {
> + struct amd_cpudata *cpudata = policy->driver_data;
> char str_preference[21];
> ssize_t ret;
> + u8 epp;
>
> ret = sscanf(buf, "%20s", str_preference);
> if (ret != 1)
> @@ -1240,7 +1155,17 @@ static ssize_t store_energy_performance_preference(
> if (ret < 0)
> return -EINVAL;
>
> - ret = amd_pstate_set_energy_pref_index(policy, ret);
> + if (!ret)
> + epp = cpudata->epp_default;
> + else
> + epp = epp_values[ret];
> +
> + if (epp > 0 && policy->policy == CPUFREQ_POLICY_PERFORMANCE) {
> + pr_debug("EPP cannot be set under performance policy\n");
> + return -EBUSY;
> + }
> +
> + ret = amd_pstate_set_epp(policy, epp);
>
> return ret ? ret : count;
> }
> @@ -1273,7 +1198,6 @@ static ssize_t show_energy_performance_preference(
>
> static void amd_pstate_driver_cleanup(void)
> {
> - amd_pstate_cppc_enable(false);
> cppc_state = AMD_PSTATE_DISABLE;
> current_pstate_driver = NULL;
> }
> @@ -1307,14 +1231,6 @@ static int amd_pstate_register_driver(int mode)
>
> cppc_state = mode;
>
> - ret = amd_pstate_cppc_enable(true);
> - if (ret) {
> - pr_err("failed to enable cppc during amd-pstate driver registration, return %d\n",
> - ret);
> - amd_pstate_driver_cleanup();
> - return ret;
> - }
> -
> /* at least one CPU supports CPB */
> current_pstate_driver->boost_enabled = cpu_feature_enabled(X86_FEATURE_CPB);
>
> @@ -1554,11 +1470,15 @@ static int amd_pstate_epp_cpu_init(struct cpufreq_policy *policy)
> policy->cpuinfo.max_freq = policy->max = perf_to_freq(perf,
> cpudata->nominal_freq,
> perf.highest_perf);
> + policy->driver_data = cpudata;
> +
> + ret = amd_pstate_cppc_enable(policy);
> + if (ret)
> + goto free_cpudata1;
>
> /* It will be updated by governor */
> policy->cur = policy->cpuinfo.min_freq;
>
> - policy->driver_data = cpudata;
>
> policy->boost_enabled = READ_ONCE(cpudata->boost_supported);
>
> @@ -1650,31 +1570,11 @@ static int amd_pstate_epp_set_policy(struct cpufreq_policy *policy)
> return 0;
> }
>
> -static int amd_pstate_epp_reenable(struct cpufreq_policy *policy)
> -{
> - int ret;
> -
> - ret = amd_pstate_cppc_enable(true);
> - if (ret)
> - pr_err("failed to enable amd pstate during resume, return %d\n", ret);
> -
> -
> - return amd_pstate_epp_update_limit(policy);
> -}
> -
> static int amd_pstate_epp_cpu_online(struct cpufreq_policy *policy)
> {
> - struct amd_cpudata *cpudata = policy->driver_data;
> - int ret;
> -
> - pr_debug("AMD CPU Core %d going online\n", cpudata->cpu);
> + pr_debug("AMD CPU Core %d going online\n", policy->cpu);
>
> - ret = amd_pstate_epp_reenable(policy);
> - if (ret)
> - return ret;
> - cpudata->suspended = false;
> -
> - return 0;
> + return amd_pstate_cppc_enable(policy);
> }
>
> static int amd_pstate_epp_cpu_offline(struct cpufreq_policy *policy)
> @@ -1692,11 +1592,6 @@ static int amd_pstate_epp_cpu_offline(struct cpufreq_policy *policy)
> static int amd_pstate_epp_suspend(struct cpufreq_policy *policy)
> {
> struct amd_cpudata *cpudata = policy->driver_data;
> - int ret;
> -
> - /* avoid suspending when EPP is not enabled */
> - if (cppc_state != AMD_PSTATE_ACTIVE)
> - return 0;
>
> /* invalidate to ensure it's rewritten during resume */
> cpudata->cppc_req_cached = 0;
> @@ -1704,11 +1599,6 @@ static int amd_pstate_epp_suspend(struct cpufreq_policy *policy)
> /* set this flag to avoid setting core offline*/
> cpudata->suspended = true;
>
> - /* disable CPPC in lowlevel firmware */
> - ret = amd_pstate_cppc_enable(false);
> - if (ret)
> - pr_err("failed to suspend, return %d\n", ret);
> -
> return 0;
> }
>
> @@ -1717,8 +1607,12 @@ static int amd_pstate_epp_resume(struct cpufreq_policy *policy)
> struct amd_cpudata *cpudata = policy->driver_data;
>
> if (cpudata->suspended) {
> + int ret;
> +
> /* enable amd pstate from suspend state*/
> - amd_pstate_epp_reenable(policy);
> + ret = amd_pstate_epp_update_limit(policy);
> + if (ret)
> + return ret;
>
> cpudata->suspended = false;
> }
> @@ -1733,8 +1627,6 @@ static struct cpufreq_driver amd_pstate_driver = {
> .fast_switch = amd_pstate_fast_switch,
> .init = amd_pstate_cpu_init,
> .exit = amd_pstate_cpu_exit,
> - .suspend = amd_pstate_cpu_suspend,
> - .resume = amd_pstate_cpu_resume,
> .set_boost = amd_pstate_set_boost,
> .update_limits = amd_pstate_update_limits,
> .name = "amd-pstate",
> @@ -1901,7 +1793,6 @@ static int __init amd_pstate_init(void)
>
> global_attr_free:
> cpufreq_unregister_driver(current_pstate_driver);
> - amd_pstate_cppc_enable(false);
> return ret;
> }
> device_initcall(amd_pstate_init);
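
A note on the DEFINE_STATIC_CALL() hunk above: msr_cppc_enable() is the
default target, and on shared-memory (non-MSR) systems the driver retargets
the call at init time. That retargeting is not in the quoted hunks; a sketch
of the usual pattern follows (the feature-flag check here is an assumption,
following how amd-pstate selects its other MSR/shmem backends):

```
/* Sketch: select the shmem backend when the CPPC MSRs are not available */
if (!cpu_feature_enabled(X86_FEATURE_CPPC))
	static_call_update(amd_pstate_cppc_enable, shmem_cppc_enable);
```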
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v5 17/19] cpufreq/amd-pstate: Rework CPPC enabling
2025-02-26 7:49 ` [PATCH v5 17/19] cpufreq/amd-pstate: Rework CPPC enabling Mario Limonciello
2025-03-01 7:03 ` Dhananjay Ugwekar
@ 2025-03-04 5:08 ` Gautham R. Shenoy
1 sibling, 0 replies; 27+ messages in thread
From: Gautham R. Shenoy @ 2025-03-04 5:08 UTC (permalink / raw)
To: Mario Limonciello
Cc: Perry Yuan, Dhananjay Ugwekar,
open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
open list:CPU FREQUENCY SCALING FRAMEWORK, Mario Limonciello
On Wed, Feb 26, 2025 at 01:49:32AM -0600, Mario Limonciello wrote:
> From: Mario Limonciello <mario.limonciello@amd.com>
>
> The CPPC enable register is configured as "write once". That is,
> any future writes don't actually do anything.
>
> Because of this, all the cleanup paths that currently exist for
> CPPC disable are non-effective.
>
> Rework CPPC enable to only enable after all the CAP registers have
> been read to avoid enabling CPPC on CPUs with invalid _CPC or
> unpopulated MSRs.
>
> As the register is write once, remove all cleanup paths as well.
>
> Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
> ---
> v5:
> * Drop unnecessary extra code in shmem_cppc_enable()
> * Remove redundant tracing in store_energy_performance_preference()
> * Add missing call to amd_pstate_cppc_enable() in passive case
> * Leave cpudata->suspended alone in amd_pstate_epp_cpu_online()
> * Drop spurious whitespace
This version looks good to me.
Reviewed-by: Gautham R. Shenoy <gautham.shenoy@amd.com>
--
Thanks and Regards
gautham.
> v4:
> * Remove unnecessary amd_pstate_update_perf() call during online
> * Remove unnecessary if (ret) ret.
> * Drop amd_pstate_cpu_resume()
> * Drop unnecessary derefs
> v3:
> * Fixup for suspend/resume issue
> ---
> drivers/cpufreq/amd-pstate.c | 179 +++++++----------------------------
> 1 file changed, 35 insertions(+), 144 deletions(-)
>
> diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
> index f0d9ee62cb30d..89e6d32223c9b 100644
> --- a/drivers/cpufreq/amd-pstate.c
> +++ b/drivers/cpufreq/amd-pstate.c
> @@ -85,7 +85,6 @@ static struct cpufreq_driver *current_pstate_driver;
> static struct cpufreq_driver amd_pstate_driver;
> static struct cpufreq_driver amd_pstate_epp_driver;
> static int cppc_state = AMD_PSTATE_UNDEFINED;
> -static bool cppc_enabled;
> static bool amd_pstate_prefcore = true;
> static struct quirk_entry *quirks;
>
> @@ -371,89 +370,21 @@ static int shmem_set_epp(struct cpufreq_policy *policy, u8 epp)
> return ret;
> }
>
> -static int amd_pstate_set_energy_pref_index(struct cpufreq_policy *policy,
> - int pref_index)
> +static inline int msr_cppc_enable(struct cpufreq_policy *policy)
> {
> - struct amd_cpudata *cpudata = policy->driver_data;
> - u8 epp;
> -
> - if (!pref_index)
> - epp = cpudata->epp_default;
> - else
> - epp = epp_values[pref_index];
> -
> - if (epp > 0 && cpudata->policy == CPUFREQ_POLICY_PERFORMANCE) {
> - pr_debug("EPP cannot be set under performance policy\n");
> - return -EBUSY;
> - }
> -
> - return amd_pstate_set_epp(policy, epp);
> -}
> -
> -static inline int msr_cppc_enable(bool enable)
> -{
> - int ret, cpu;
> - unsigned long logical_proc_id_mask = 0;
> -
> - /*
> - * MSR_AMD_CPPC_ENABLE is write-once, once set it cannot be cleared.
> - */
> - if (!enable)
> - return 0;
> -
> - if (enable == cppc_enabled)
> - return 0;
> -
> - for_each_present_cpu(cpu) {
> - unsigned long logical_id = topology_logical_package_id(cpu);
> -
> - if (test_bit(logical_id, &logical_proc_id_mask))
> - continue;
> -
> - set_bit(logical_id, &logical_proc_id_mask);
> -
> - ret = wrmsrl_safe_on_cpu(cpu, MSR_AMD_CPPC_ENABLE,
> - enable);
> - if (ret)
> - return ret;
> - }
> -
> - cppc_enabled = enable;
> - return 0;
> + return wrmsrl_safe_on_cpu(policy->cpu, MSR_AMD_CPPC_ENABLE, 1);
> }
>
> -static int shmem_cppc_enable(bool enable)
> +static int shmem_cppc_enable(struct cpufreq_policy *policy)
> {
> - int cpu, ret = 0;
> - struct cppc_perf_ctrls perf_ctrls;
> -
> - if (enable == cppc_enabled)
> - return 0;
> -
> - for_each_present_cpu(cpu) {
> - ret = cppc_set_enable(cpu, enable);
> - if (ret)
> - return ret;
> -
> - /* Enable autonomous mode for EPP */
> - if (cppc_state == AMD_PSTATE_ACTIVE) {
> - /* Set desired perf as zero to allow EPP firmware control */
> - perf_ctrls.desired_perf = 0;
> - ret = cppc_set_perf(cpu, &perf_ctrls);
> - if (ret)
> - return ret;
> - }
> - }
> -
> - cppc_enabled = enable;
> - return ret;
> + return cppc_set_enable(policy->cpu, 1);
> }
>
> DEFINE_STATIC_CALL(amd_pstate_cppc_enable, msr_cppc_enable);
>
> -static inline int amd_pstate_cppc_enable(bool enable)
> +static inline int amd_pstate_cppc_enable(struct cpufreq_policy *policy)
> {
> - return static_call(amd_pstate_cppc_enable)(enable);
> + return static_call(amd_pstate_cppc_enable)(policy);
> }
>
> static int msr_init_perf(struct amd_cpudata *cpudata)
> @@ -1069,6 +1000,10 @@ static int amd_pstate_cpu_init(struct cpufreq_policy *policy)
> cpudata->nominal_freq,
> perf.highest_perf);
>
> + ret = amd_pstate_cppc_enable(policy);
> + if (ret)
> + goto free_cpudata1;
> +
> policy->boost_enabled = READ_ONCE(cpudata->boost_supported);
>
> /* It will be updated by governor */
> @@ -1116,28 +1051,6 @@ static void amd_pstate_cpu_exit(struct cpufreq_policy *policy)
> kfree(cpudata);
> }
>
> -static int amd_pstate_cpu_resume(struct cpufreq_policy *policy)
> -{
> - int ret;
> -
> - ret = amd_pstate_cppc_enable(true);
> - if (ret)
> - pr_err("failed to enable amd-pstate during resume, return %d\n", ret);
> -
> - return ret;
> -}
> -
> -static int amd_pstate_cpu_suspend(struct cpufreq_policy *policy)
> -{
> - int ret;
> -
> - ret = amd_pstate_cppc_enable(false);
> - if (ret)
> - pr_err("failed to disable amd-pstate during suspend, return %d\n", ret);
> -
> - return ret;
> -}
> -
> /* Sysfs attributes */
>
> /*
> @@ -1229,8 +1142,10 @@ static ssize_t show_energy_performance_available_preferences(
> static ssize_t store_energy_performance_preference(
> struct cpufreq_policy *policy, const char *buf, size_t count)
> {
> + struct amd_cpudata *cpudata = policy->driver_data;
> char str_preference[21];
> ssize_t ret;
> + u8 epp;
>
> ret = sscanf(buf, "%20s", str_preference);
> if (ret != 1)
> @@ -1240,7 +1155,17 @@ static ssize_t store_energy_performance_preference(
> if (ret < 0)
> return -EINVAL;
>
> - ret = amd_pstate_set_energy_pref_index(policy, ret);
> + if (!ret)
> + epp = cpudata->epp_default;
> + else
> + epp = epp_values[ret];
> +
> + if (epp > 0 && policy->policy == CPUFREQ_POLICY_PERFORMANCE) {
> + pr_debug("EPP cannot be set under performance policy\n");
> + return -EBUSY;
> + }
> +
> + ret = amd_pstate_set_epp(policy, epp);
>
> return ret ? ret : count;
> }
> @@ -1273,7 +1198,6 @@ static ssize_t show_energy_performance_preference(
>
> static void amd_pstate_driver_cleanup(void)
> {
> - amd_pstate_cppc_enable(false);
> cppc_state = AMD_PSTATE_DISABLE;
> current_pstate_driver = NULL;
> }
> @@ -1307,14 +1231,6 @@ static int amd_pstate_register_driver(int mode)
>
> cppc_state = mode;
>
> - ret = amd_pstate_cppc_enable(true);
> - if (ret) {
> - pr_err("failed to enable cppc during amd-pstate driver registration, return %d\n",
> - ret);
> - amd_pstate_driver_cleanup();
> - return ret;
> - }
> -
> /* at least one CPU supports CPB */
> current_pstate_driver->boost_enabled = cpu_feature_enabled(X86_FEATURE_CPB);
>
> @@ -1554,11 +1470,15 @@ static int amd_pstate_epp_cpu_init(struct cpufreq_policy *policy)
> policy->cpuinfo.max_freq = policy->max = perf_to_freq(perf,
> cpudata->nominal_freq,
> perf.highest_perf);
> + policy->driver_data = cpudata;
> +
> + ret = amd_pstate_cppc_enable(policy);
> + if (ret)
> + goto free_cpudata1;
>
> /* It will be updated by governor */
> policy->cur = policy->cpuinfo.min_freq;
>
> - policy->driver_data = cpudata;
>
> policy->boost_enabled = READ_ONCE(cpudata->boost_supported);
>
> @@ -1650,31 +1570,11 @@ static int amd_pstate_epp_set_policy(struct cpufreq_policy *policy)
> return 0;
> }
>
> -static int amd_pstate_epp_reenable(struct cpufreq_policy *policy)
> -{
> - int ret;
> -
> - ret = amd_pstate_cppc_enable(true);
> - if (ret)
> - pr_err("failed to enable amd pstate during resume, return %d\n", ret);
> -
> -
> - return amd_pstate_epp_update_limit(policy);
> -}
> -
> static int amd_pstate_epp_cpu_online(struct cpufreq_policy *policy)
> {
> - struct amd_cpudata *cpudata = policy->driver_data;
> - int ret;
> -
> - pr_debug("AMD CPU Core %d going online\n", cpudata->cpu);
> + pr_debug("AMD CPU Core %d going online\n", policy->cpu);
>
> - ret = amd_pstate_epp_reenable(policy);
> - if (ret)
> - return ret;
> - cpudata->suspended = false;
> -
> - return 0;
> + return amd_pstate_cppc_enable(policy);
> }
>
> static int amd_pstate_epp_cpu_offline(struct cpufreq_policy *policy)
> @@ -1692,11 +1592,6 @@ static int amd_pstate_epp_cpu_offline(struct cpufreq_policy *policy)
> static int amd_pstate_epp_suspend(struct cpufreq_policy *policy)
> {
> struct amd_cpudata *cpudata = policy->driver_data;
> - int ret;
> -
> - /* avoid suspending when EPP is not enabled */
> - if (cppc_state != AMD_PSTATE_ACTIVE)
> - return 0;
>
> /* invalidate to ensure it's rewritten during resume */
> cpudata->cppc_req_cached = 0;
> @@ -1704,11 +1599,6 @@ static int amd_pstate_epp_suspend(struct cpufreq_policy *policy)
> /* set this flag to avoid setting core offline*/
> cpudata->suspended = true;
>
> - /* disable CPPC in lowlevel firmware */
> - ret = amd_pstate_cppc_enable(false);
> - if (ret)
> - pr_err("failed to suspend, return %d\n", ret);
> -
> return 0;
> }
>
> @@ -1717,8 +1607,12 @@ static int amd_pstate_epp_resume(struct cpufreq_policy *policy)
> struct amd_cpudata *cpudata = policy->driver_data;
>
> if (cpudata->suspended) {
> + int ret;
> +
> /* enable amd pstate from suspend state*/
> - amd_pstate_epp_reenable(policy);
> + ret = amd_pstate_epp_update_limit(policy);
> + if (ret)
> + return ret;
>
> cpudata->suspended = false;
> }
> @@ -1733,8 +1627,6 @@ static struct cpufreq_driver amd_pstate_driver = {
> .fast_switch = amd_pstate_fast_switch,
> .init = amd_pstate_cpu_init,
> .exit = amd_pstate_cpu_exit,
> - .suspend = amd_pstate_cpu_suspend,
> - .resume = amd_pstate_cpu_resume,
> .set_boost = amd_pstate_set_boost,
> .update_limits = amd_pstate_update_limits,
> .name = "amd-pstate",
> @@ -1901,7 +1793,6 @@ static int __init amd_pstate_init(void)
>
> global_attr_free:
> cpufreq_unregister_driver(current_pstate_driver);
> - amd_pstate_cppc_enable(false);
> return ret;
> }
> device_initcall(amd_pstate_init);
> --
> 2.43.0
>
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v5 19/19] cpufreq/amd-pstate: Drop actions in amd_pstate_epp_cpu_offline()
2025-02-26 7:49 ` [PATCH v5 19/19] cpufreq/amd-pstate: Drop actions in amd_pstate_epp_cpu_offline() Mario Limonciello
@ 2025-03-04 5:11 ` Gautham R. Shenoy
0 siblings, 0 replies; 27+ messages in thread
From: Gautham R. Shenoy @ 2025-03-04 5:11 UTC (permalink / raw)
To: Mario Limonciello
Cc: Perry Yuan, Dhananjay Ugwekar,
open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
open list:CPU FREQUENCY SCALING FRAMEWORK, Mario Limonciello
On Wed, Feb 26, 2025 at 01:49:34AM -0600, Mario Limonciello wrote:
> From: Mario Limonciello <mario.limonciello@amd.com>
>
> When the CPU goes offline, there is no need to change the CPPC request
> because the CPU will already go into the deepest C-state it supports.
>
> Actually changing the CPPC request when it goes offline messes up the
> cached values and can lead to the wrong values being restored when
> it comes back.
>
> Instead drop the actions and if the CPU comes back online let
> amd_pstate_epp_set_policy() restore it to expected values.
>
> Reviewed-by: Dhananjay Ugwekar <dhananjay.ugwekar@amd.com>
> Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
> ---
> v5:
> * Reword commit message
> * Add tag
> ---
> drivers/cpufreq/amd-pstate.c | 9 +--------
> 1 file changed, 1 insertion(+), 8 deletions(-)
>
> diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
> index 24a1f9e129b61..4a364ae9b56a1 100644
> --- a/drivers/cpufreq/amd-pstate.c
> +++ b/drivers/cpufreq/amd-pstate.c
> @@ -1580,14 +1580,7 @@ static int amd_pstate_epp_cpu_online(struct cpufreq_policy *policy)
>
> static int amd_pstate_epp_cpu_offline(struct cpufreq_policy *policy)
> {
> - struct amd_cpudata *cpudata = policy->driver_data;
> - union perf_cached perf = READ_ONCE(cpudata->perf);
> -
> - if (cpudata->suspended)
> - return 0;
> -
> - return amd_pstate_update_perf(policy, perf.lowest_perf, 0, perf.lowest_perf,
> - AMD_CPPC_EPP_BALANCE_POWERSAVE, false);
> + return 0;
LGTM
Reviewed-by: Gautham R. Shenoy <gautham.shenoy@amd.com>
--
Thanks and Regards
gautham.
> }
>
> static int amd_pstate_epp_suspend(struct cpufreq_policy *policy)
> --
> 2.43.0
>
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v5 02/19] cpufreq/amd-pstate: Show a warning when a CPU fails to setup
2025-02-26 7:49 ` [PATCH v5 02/19] cpufreq/amd-pstate: Show a warning when a CPU fails to setup Mario Limonciello
@ 2025-03-18 7:58 ` Paul Menzel
0 siblings, 0 replies; 27+ messages in thread
From: Paul Menzel @ 2025-03-18 7:58 UTC (permalink / raw)
To: Mario Limonciello
Cc: Gautham R . Shenoy, Perry Yuan, Dhananjay Ugwekar, linux-kernel,
linux-pm, Mario Limonciello
Dear Mario,
Thank you for the patch.
Am 26.02.25 um 08:49 schrieb Mario Limonciello:
> From: Mario Limonciello <mario.limonciello@amd.com>
>
> I came across a system where MSR_AMD_CPPC_CAP1 isn't populated for
> some CPUs. This is an unexpected behavior that is most likely a
> BIOS bug. In the event it happens, I'd like users to report bugs
> so the issue can be properly root caused and fixed.
>
> Reviewed-by: Gautham R. Shenoy <gautham.shenoy@amd.com>
> Reviewed-by: Dhananjay Ugwekar <dhananjay.ugwekar@amd.com>
> Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
> ---
> drivers/cpufreq/amd-pstate.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
> index a093389a8fe3e..1b98f5d76894d 100644
> --- a/drivers/cpufreq/amd-pstate.c
> +++ b/drivers/cpufreq/amd-pstate.c
> @@ -1034,6 +1034,7 @@ static int amd_pstate_cpu_init(struct cpufreq_policy *policy)
> free_cpudata2:
> freq_qos_remove_request(&cpudata->req[0]);
> free_cpudata1:
> + pr_warn("Failed to initialize CPU %d: %d\n", policy->cpu, ret);
From a user/operator point of view, having a recommended action in the
log message would help a lot, as I am not able to judge the consequences
or where to turn. So, I'd propose:

    Failed to initialize CPU %d: %d. This is likely a firmware error, and
    should be reported to the vendor.
The Linux kernel also has some macros. From `include/linux/printk.h`:
```
/*
 * FW_BUG
 * Add this to a message where you are sure the firmware is buggy or behaves
 * really stupid or out of spec. Be aware that the responsible BIOS developer
 * should be able to fix this issue or at least get a concrete idea of the
 * problem by reading your message without the need of looking at the kernel
 * code.
 *
 * Use it for definite and high priority BIOS bugs.
 *
 * FW_WARN
 * Use it for not that clear (e.g. could the kernel messed up things already?)
 * and medium priority BIOS bugs.
 *
 * FW_INFO
 * Use this one if you want to tell the user or vendor about something
 * suspicious, but generally harmless related to the firmware.
 *
 * Use it for information or very low priority BIOS bugs.
 */
#define FW_BUG "[Firmware Bug]: "
#define FW_WARN "[Firmware Warn]: "
#define FW_INFO "[Firmware Info]: "
```
For ACPI:

```
drivers/acpi/acpica/acutils.h:#define ACPI_MSG_BIOS_ERROR "Firmware Error (ACPI): "
```
> kfree(cpudata);
> return ret;
> }
> @@ -1527,6 +1528,7 @@ static int amd_pstate_epp_cpu_init(struct cpufreq_policy *policy)
> return 0;
>
> free_cpudata1:
> + pr_warn("Failed to initialize CPU %d: %d\n", policy->cpu, ret);
> kfree(cpudata);
> return ret;
> }
Kind regards,
Paul
^ permalink raw reply [flat|nested] 27+ messages in thread
Thread overview: 27+ messages
2025-02-26 7:49 [PATCH v5 00/19] amd-pstate cleanups Mario Limonciello
2025-02-26 7:49 ` [PATCH v5 01/19] cpufreq/amd-pstate: Invalidate cppc_req_cached during suspend Mario Limonciello
2025-02-26 7:49 ` [PATCH v5 02/19] cpufreq/amd-pstate: Show a warning when a CPU fails to setup Mario Limonciello
2025-03-18 7:58 ` Paul Menzel
2025-02-26 7:49 ` [PATCH v5 03/19] cpufreq/amd-pstate: Drop min and max cached frequencies Mario Limonciello
2025-02-26 7:49 ` [PATCH v5 04/19] cpufreq/amd-pstate: Move perf values into a union Mario Limonciello
2025-02-27 12:59 ` kernel test robot
2025-02-27 20:09 ` Mario Limonciello
2025-03-01 7:02 ` Dhananjay Ugwekar
2025-02-26 7:49 ` [PATCH v5 05/19] cpufreq/amd-pstate: Overhaul locking Mario Limonciello
2025-02-26 7:49 ` [PATCH v5 06/19] cpufreq/amd-pstate: Drop `cppc_cap1_cached` Mario Limonciello
2025-02-26 7:49 ` [PATCH v5 07/19] cpufreq/amd-pstate-ut: Use _free macro to free put policy Mario Limonciello
2025-02-26 7:49 ` [PATCH v5 08/19] cpufreq/amd-pstate-ut: Allow lowest nonlinear and lowest to be the same Mario Limonciello
2025-02-26 7:49 ` [PATCH v5 09/19] cpufreq/amd-pstate-ut: Drop SUCCESS and FAIL enums Mario Limonciello
2025-02-26 7:49 ` [PATCH v5 10/19] cpufreq/amd-pstate-ut: Run on all of the correct CPUs Mario Limonciello
2025-02-26 7:49 ` [PATCH v5 11/19] cpufreq/amd-pstate-ut: Adjust variable scope Mario Limonciello
2025-02-26 7:49 ` [PATCH v5 12/19] cpufreq/amd-pstate: Replace all AMD_CPPC_* macros with masks Mario Limonciello
2025-02-26 7:49 ` [PATCH v5 13/19] cpufreq/amd-pstate: Cache CPPC request in shared mem case too Mario Limonciello
2025-02-26 7:49 ` [PATCH v5 14/19] cpufreq/amd-pstate: Move all EPP tracing into *_update_perf and *_set_epp functions Mario Limonciello
2025-02-26 7:49 ` [PATCH v5 15/19] cpufreq/amd-pstate: Update cppc_req_cached for shared mem EPP writes Mario Limonciello
2025-02-26 7:49 ` [PATCH v5 16/19] cpufreq/amd-pstate: Drop debug statements for policy setting Mario Limonciello
2025-02-26 7:49 ` [PATCH v5 17/19] cpufreq/amd-pstate: Rework CPPC enabling Mario Limonciello
2025-03-01 7:03 ` Dhananjay Ugwekar
2025-03-04 5:08 ` Gautham R. Shenoy
2025-02-26 7:49 ` [PATCH v5 18/19] cpufreq/amd-pstate: Stop caching EPP Mario Limonciello
2025-02-26 7:49 ` [PATCH v5 19/19] cpufreq/amd-pstate: Drop actions in amd_pstate_epp_cpu_offline() Mario Limonciello
2025-03-04 5:11 ` Gautham R. Shenoy