* [PATCH RFC 00/11] x86/msr: Reduce MSR access interfaces
@ 2026-04-28 10:41 Juergen Gross
2026-04-28 10:42 ` [PATCH RFC 06/11] x86/msr: Switch all callers of rdmsrq_safe_on_cpu() to use rdmsr_safe_on_cpu() Juergen Gross
2026-04-28 10:42 ` [PATCH RFC 08/11] x86/msr: Switch all callers of wrmsrq_safe_on_cpu() to use wrmsr_safe_on_cpu() Juergen Gross
0 siblings, 2 replies; 3+ messages in thread
From: Juergen Gross @ 2026-04-28 10:41 UTC (permalink / raw)
To: linux-kernel, x86, linux-edac, linux-pm, linux-hwmon,
linux-perf-users, platform-driver-x86, linux-acpi, virtualization
Cc: Juergen Gross, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, H. Peter Anvin, Tony Luck, Rafael J. Wysocki,
Viresh Kumar, Guenter Roeck, Daniel Lezcano, Zhang Rui,
Lukasz Luba, Peter Zijlstra, Arnaldo Carvalho de Melo,
Namhyung Kim, Mark Rutland, Alexander Shishkin, Jiri Olsa,
Ian Rogers, Adrian Hunter, James Clark, Huang Rui,
Mario Limonciello, Perry Yuan, K Prateek Nayak,
Srinivas Pandruvada, Len Brown, Hans de Goede, Ilpo Järvinen,
Ajay Kaher, Alexey Makhalov, Broadcom internal kernel review list
This series is the result of the feedback I received on my first
attempt to rework the MSR access functions [1].
I have still followed the idea to:
- Reduce the number of MSR access functions by keeping only the ones
taking 64-bit values (instead of the dual 32-bit ones).
- Use inline functions instead of macros for rdmsr*(), removing the
hard-to-read cases where macro parameters named the variables
receiving the results.
One piece of feedback was NOT to rename the access functions, which I
have avoided in this new approach.
The first 8 patches are a complete set achieving primarily the first
point above for the *_on_cpu() functions.
Patch 9 prepares switching the CPU-local MSR access functions to end
up with only rdmsr(), rdmsr_safe(), wrmsr() and wrmsr_safe() (all
taking 64-bit values and implemented as inline functions). For this
purpose the already existing functions/macros are overloaded via
macros to accept both variants (64-bit and dual 32-bit values) during
the phase of switching the different subsystems to the new scheme.
This avoids having to either patch all users of the current functions
in a single patch (as done in the first 8 patches), or having to use
intermediate function names which would need to be patched again at
the end. Either of those would result in patches very hard to review
due to their size.
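As a rough illustration (this is NOT the actual patch code; all names
here are invented for the sketch), the transitional overloading via
macros could dispatch on the argument count, so that both the 64-bit
and the legacy dual 32-bit call forms keep compiling during the
conversion:

```c
#include <stdint.h>

typedef uint64_t u64;
typedef uint32_t u32;

/* Stand-in for the real RDMSR instruction (illustration only). */
static inline u64 rdmsr_u64(u32 msr)
{
	(void)msr;
	return 0x123456789abcdef0ULL;
}

/* Old dual 32-bit form, expressed on top of the 64-bit read. */
#define rdmsr_u32pair(msr, lo, hi) do {		\
		u64 __v = rdmsr_u64(msr);	\
		(lo) = (u32)__v;		\
		(hi) = (u32)(__v >> 32);	\
	} while (0)

/* Transitional overload: select the implementation by argument count
 * (1 argument -> 64-bit variant, 3 arguments -> dual 32-bit variant). */
#define _rdmsr_sel(_1, _2, _3, name, ...) name
#define rdmsr(...) \
	_rdmsr_sel(__VA_ARGS__, rdmsr_u32pair, _unused, rdmsr_u64)(__VA_ARGS__)
```

With this in place, both `val = rdmsr(msr);` and the legacy
`rdmsr(msr, lo, hi);` compile, so subsystems can be converted one at a
time and the compatibility macro dropped at the end.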
The last 2 patches are examples of how such subsystem switches would
look.
So far all of this is compile-tested only.
[1]: https://lore.kernel.org/lkml/20260420091634.128787-1-jgross@suse.com/
Juergen Gross (11):
x86/msr: Switch rdmsr_on_cpu() to return a 64-bit quantity
x86/msr: Switch all callers of rdmsrq_on_cpu() to use rdmsr_on_cpu()
x86/msr: Switch wrmsr_on_cpu() to use a 64-bit quantity
x86/msr: Switch all callers of wrmsrq_on_cpu() to use wrmsr_on_cpu()
x86/msr: Switch rdmsr_safe_on_cpu() to return a 64-bit quantity
x86/msr: Switch all callers of rdmsrq_safe_on_cpu() to use
rdmsr_safe_on_cpu()
x86/msr: Switch wrmsr_safe_on_cpu() to use a 64-bit quantity
x86/msr: Switch all callers of wrmsrq_safe_on_cpu() to use
wrmsr_safe_on_cpu()
x86/msr: Add macros for preparing to switch rdmsr/wrmsr interfaces
x86/events: Switch core parts to use 64-bit rdmsr/wrmsr() variants
x86/cpu/mce: Switch code to use 64-bit rdmsr/wrmsr() variants
arch/x86/events/core.c | 42 ++++----
arch/x86/events/intel/ds.c | 11 +-
arch/x86/events/intel/pt.c | 2 +-
arch/x86/events/intel/uncore_discovery.c | 2 +-
arch/x86/events/intel/uncore_snbep.c | 2 +-
arch/x86/events/msr.c | 2 +-
arch/x86/events/perf_event.h | 26 ++---
arch/x86/events/probe.c | 2 +-
arch/x86/events/rapl.c | 8 +-
arch/x86/include/asm/msr.h | 90 +++++++++-------
arch/x86/include/asm/paravirt.h | 6 +-
arch/x86/kernel/acpi/cppc.c | 8 +-
arch/x86/kernel/cpu/intel_epb.c | 8 +-
arch/x86/kernel/cpu/mce/amd.c | 101 +++++++++---------
arch/x86/kernel/cpu/mce/core.c | 18 ++--
arch/x86/kernel/cpu/mce/inject.c | 40 +++----
arch/x86/kernel/cpu/mce/intel.c | 32 +++---
arch/x86/kernel/cpu/mce/p5.c | 16 +--
arch/x86/kernel/cpu/mce/winchip.c | 10 +-
arch/x86/kernel/cpu/microcode/intel.c | 2 +-
arch/x86/kernel/msr.c | 8 +-
arch/x86/lib/msr-smp.c | 79 ++------------
drivers/cpufreq/acpi-cpufreq.c | 4 +-
drivers/cpufreq/amd-pstate-ut.c | 2 +-
drivers/cpufreq/amd-pstate.c | 21 ++--
drivers/cpufreq/amd_freq_sensitivity.c | 4 +-
drivers/cpufreq/intel_pstate.c | 64 +++++------
drivers/cpufreq/p4-clockmod.c | 32 +++---
drivers/cpufreq/speedstep-centrino.c | 27 ++---
drivers/hwmon/coretemp.c | 44 ++++----
drivers/hwmon/via-cputemp.c | 16 +--
drivers/platform/x86/amd/hfi/hfi.c | 4 +-
.../intel/speed_select_if/isst_if_common.c | 13 ++-
.../intel/uncore-frequency/uncore-frequency.c | 12 +--
drivers/powercap/intel_rapl_msr.c | 2 +-
drivers/thermal/intel/intel_tcc.c | 43 ++++----
drivers/thermal/intel/x86_pkg_temp_thermal.c | 22 ++--
37 files changed, 387 insertions(+), 438 deletions(-)
--
2.53.0
* [PATCH RFC 06/11] x86/msr: Switch all callers of rdmsrq_safe_on_cpu() to use rdmsr_safe_on_cpu()
2026-04-28 10:41 [PATCH RFC 00/11] x86/msr: Reduce MSR access interfaces Juergen Gross
@ 2026-04-28 10:42 ` Juergen Gross
2026-04-28 10:42 ` [PATCH RFC 08/11] x86/msr: Switch all callers of wrmsrq_safe_on_cpu() to use wrmsr_safe_on_cpu() Juergen Gross
1 sibling, 0 replies; 3+ messages in thread
From: Juergen Gross @ 2026-04-28 10:42 UTC (permalink / raw)
To: linux-kernel, x86, linux-perf-users, linux-acpi, linux-pm,
platform-driver-x86
Cc: Juergen Gross, Peter Zijlstra, Ingo Molnar,
Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
James Clark, Thomas Gleixner, Borislav Petkov, Dave Hansen,
H. Peter Anvin, Rafael J. Wysocki, Len Brown, Huang Rui,
Mario Limonciello, Perry Yuan, K Prateek Nayak, Viresh Kumar,
Srinivas Pandruvada, Hans de Goede, Ilpo Järvinen
Now that rdmsr_safe_on_cpu() has the same interface as
rdmsrq_safe_on_cpu(), the callers of rdmsrq_safe_on_cpu() can be
switched to rdmsr_safe_on_cpu() and rdmsrq_safe_on_cpu() can be
removed.
Signed-off-by: Juergen Gross <jgross@suse.com>
---
arch/x86/events/intel/pt.c | 2 +-
arch/x86/events/intel/uncore_discovery.c | 2 +-
arch/x86/include/asm/msr.h | 5 -----
arch/x86/kernel/acpi/cppc.c | 6 +++---
arch/x86/lib/msr-smp.c | 10 ----------
drivers/cpufreq/amd-pstate-ut.c | 2 +-
drivers/cpufreq/amd-pstate.c | 3 +--
drivers/cpufreq/intel_pstate.c | 6 +++---
.../x86/intel/speed_select_if/isst_if_common.c | 4 ++--
drivers/powercap/intel_rapl_msr.c | 2 +-
10 files changed, 13 insertions(+), 29 deletions(-)
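The cpc_read_ffh() hunk below masks a bit-field out of the 64-bit
value after the read; with the 64-bit interface that extraction works
directly on the returned quantity. A self-contained sketch of the
pattern (GENMASK_ULL simplified here, helper name invented):

```c
#include <stdint.h>

typedef uint64_t u64;

/* Simplified stand-in for the kernel's GENMASK_ULL(h, l):
 * a mask with bits l..h set (assumes 0 <= l <= h <= 63). */
#define GENMASK_ULL(h, l) \
	((~0ULL >> (63 - (h) + (l))) << (l))

/* Roughly what cpc_read_ffh() does with 'val' after
 * rdmsr_safe_on_cpu() has filled it in. */
static u64 msr_extract_field(u64 val, unsigned int bit_offset,
			     unsigned int bit_width)
{
	u64 mask = GENMASK_ULL(bit_offset + bit_width - 1, bit_offset);

	return (val & mask) >> bit_offset;
}
```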
diff --git a/arch/x86/events/intel/pt.c b/arch/x86/events/intel/pt.c
index b5726b50e77d..7c92146b06ea 100644
--- a/arch/x86/events/intel/pt.c
+++ b/arch/x86/events/intel/pt.c
@@ -1840,7 +1840,7 @@ static __init int pt_init(void)
for_each_online_cpu(cpu) {
u64 ctl;
- ret = rdmsrq_safe_on_cpu(cpu, MSR_IA32_RTIT_CTL, &ctl);
+ ret = rdmsr_safe_on_cpu(cpu, MSR_IA32_RTIT_CTL, &ctl);
if (!ret && (ctl & RTIT_CTL_TRACEEN))
prior_warn++;
}
diff --git a/arch/x86/events/intel/uncore_discovery.c b/arch/x86/events/intel/uncore_discovery.c
index 583cbd06b9b8..0853a9e02fda 100644
--- a/arch/x86/events/intel/uncore_discovery.c
+++ b/arch/x86/events/intel/uncore_discovery.c
@@ -405,7 +405,7 @@ static bool uncore_discovery_msr(struct uncore_discovery_domain *domain)
if (__test_and_set_bit(die, die_mask))
continue;
- if (rdmsrq_safe_on_cpu(cpu, domain->discovery_base, &base))
+ if (rdmsr_safe_on_cpu(cpu, domain->discovery_base, &base))
continue;
if (!base)
diff --git a/arch/x86/include/asm/msr.h b/arch/x86/include/asm/msr.h
index b3b43bc04b69..f2d14c670140 100644
--- a/arch/x86/include/asm/msr.h
+++ b/arch/x86/include/asm/msr.h
@@ -262,7 +262,6 @@ void rdmsr_on_cpus(const struct cpumask *mask, u32 msr_no, struct msr __percpu *
void wrmsr_on_cpus(const struct cpumask *mask, u32 msr_no, struct msr __percpu *msrs);
int rdmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 *q);
int wrmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h);
-int rdmsrq_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 *q);
int wrmsrq_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 q);
int rdmsr_safe_regs_on_cpu(unsigned int cpu, u32 regs[8]);
int wrmsr_safe_regs_on_cpu(unsigned int cpu, u32 regs[8]);
@@ -295,10 +294,6 @@ static inline int wrmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h)
{
return wrmsr_safe(msr_no, l, h);
}
-static inline int rdmsrq_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 *q)
-{
- return rdmsrq_safe(msr_no, q);
-}
static inline int wrmsrq_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 q)
{
return wrmsrq_safe(msr_no, q);
diff --git a/arch/x86/kernel/acpi/cppc.c b/arch/x86/kernel/acpi/cppc.c
index d7c8ef1e354d..576319dcbbbf 100644
--- a/arch/x86/kernel/acpi/cppc.c
+++ b/arch/x86/kernel/acpi/cppc.c
@@ -49,7 +49,7 @@ int cpc_read_ffh(int cpunum, struct cpc_reg *reg, u64 *val)
{
int err;
- err = rdmsrq_safe_on_cpu(cpunum, reg->address, val);
+ err = rdmsr_safe_on_cpu(cpunum, reg->address, val);
if (!err) {
u64 mask = GENMASK_ULL(reg->bit_offset + reg->bit_width - 1,
reg->bit_offset);
@@ -65,7 +65,7 @@ int cpc_write_ffh(int cpunum, struct cpc_reg *reg, u64 val)
u64 rd_val;
int err;
- err = rdmsrq_safe_on_cpu(cpunum, reg->address, &rd_val);
+ err = rdmsr_safe_on_cpu(cpunum, reg->address, &rd_val);
if (!err) {
u64 mask = GENMASK_ULL(reg->bit_offset + reg->bit_width - 1,
reg->bit_offset);
@@ -147,7 +147,7 @@ int amd_get_highest_perf(unsigned int cpu, u32 *highest_perf)
int ret;
if (cpu_feature_enabled(X86_FEATURE_CPPC)) {
- ret = rdmsrq_safe_on_cpu(cpu, MSR_AMD_CPPC_CAP1, &val);
+ ret = rdmsr_safe_on_cpu(cpu, MSR_AMD_CPPC_CAP1, &val);
if (ret)
goto out;
diff --git a/arch/x86/lib/msr-smp.c b/arch/x86/lib/msr-smp.c
index 0dc3921e0259..fa22ac662c1d 100644
--- a/arch/x86/lib/msr-smp.c
+++ b/arch/x86/lib/msr-smp.c
@@ -186,16 +186,6 @@ int wrmsrq_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 q)
}
EXPORT_SYMBOL(wrmsrq_safe_on_cpu);
-int rdmsrq_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 *q)
-{
- int err;
-
- err = rdmsr_safe_on_cpu(cpu, msr_no, q);
-
- return err;
-}
-EXPORT_SYMBOL(rdmsrq_safe_on_cpu);
-
/*
* These variants are significantly slower, but allows control over
* the entire 32-bit GPR set.
diff --git a/drivers/cpufreq/amd-pstate-ut.c b/drivers/cpufreq/amd-pstate-ut.c
index aa8a464fab47..8700c076b762 100644
--- a/drivers/cpufreq/amd-pstate-ut.c
+++ b/drivers/cpufreq/amd-pstate-ut.c
@@ -170,7 +170,7 @@ static int amd_pstate_ut_check_perf(u32 index)
lowest_nonlinear_perf = cppc_perf.lowest_nonlinear_perf;
lowest_perf = cppc_perf.lowest_perf;
} else {
- ret = rdmsrq_safe_on_cpu(cpu, MSR_AMD_CPPC_CAP1, &cap1);
+ ret = rdmsr_safe_on_cpu(cpu, MSR_AMD_CPPC_CAP1, &cap1);
if (ret) {
pr_err("%s read CPPC_CAP1 ret=%d error!\n", __func__, ret);
return ret;
diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
index 543b34006918..d1eee3cd8f9b 100644
--- a/drivers/cpufreq/amd-pstate.c
+++ b/drivers/cpufreq/amd-pstate.c
@@ -471,8 +471,7 @@ static int msr_init_perf(struct amd_cpudata *cpudata)
u64 cap1, numerator, cppc_req;
u8 min_perf;
- int ret = rdmsrq_safe_on_cpu(cpudata->cpu, MSR_AMD_CPPC_CAP1,
- &cap1);
+ int ret = rdmsr_safe_on_cpu(cpudata->cpu, MSR_AMD_CPPC_CAP1, &cap1);
if (ret)
return ret;
diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
index 08214a0561e7..da196539affe 100644
--- a/drivers/cpufreq/intel_pstate.c
+++ b/drivers/cpufreq/intel_pstate.c
@@ -2178,13 +2178,13 @@ static int core_get_tdp_ratio(int cpu, u64 plat_info)
int err;
/* Get the TDP level (0, 1, 2) to get ratios */
- err = rdmsrq_safe_on_cpu(cpu, MSR_CONFIG_TDP_CONTROL, &tdp_ctrl);
+ err = rdmsr_safe_on_cpu(cpu, MSR_CONFIG_TDP_CONTROL, &tdp_ctrl);
if (err)
return err;
/* TDP MSR are continuous starting at 0x648 */
tdp_msr = MSR_CONFIG_TDP_NOMINAL + (tdp_ctrl & 0x03);
- err = rdmsrq_safe_on_cpu(cpu, tdp_msr, &tdp_ratio);
+ err = rdmsr_safe_on_cpu(cpu, tdp_msr, &tdp_ratio);
if (err)
return err;
@@ -2221,7 +2221,7 @@ static int core_get_max_pstate(int cpu)
return tdp_ratio;
}
- err = rdmsrq_safe_on_cpu(cpu, MSR_TURBO_ACTIVATION_RATIO, &tar);
+ err = rdmsr_safe_on_cpu(cpu, MSR_TURBO_ACTIVATION_RATIO, &tar);
if (!err) {
int tar_levels;
diff --git a/drivers/platform/x86/intel/speed_select_if/isst_if_common.c b/drivers/platform/x86/intel/speed_select_if/isst_if_common.c
index 1c48bf6d5457..b15a798454dc 100644
--- a/drivers/platform/x86/intel/speed_select_if/isst_if_common.c
+++ b/drivers/platform/x86/intel/speed_select_if/isst_if_common.c
@@ -511,8 +511,8 @@ static long isst_if_msr_cmd_req(u8 *cmd_ptr, int *write_only, int resume)
} else {
u64 data;
- ret = rdmsrq_safe_on_cpu(msr_cmd->logical_cpu,
- msr_cmd->msr, &data);
+ ret = rdmsr_safe_on_cpu(msr_cmd->logical_cpu,
+ msr_cmd->msr, &data);
if (!ret) {
msr_cmd->data = data;
*write_only = 0;
diff --git a/drivers/powercap/intel_rapl_msr.c b/drivers/powercap/intel_rapl_msr.c
index a34543e66446..a6bdbe44c8dd 100644
--- a/drivers/powercap/intel_rapl_msr.c
+++ b/drivers/powercap/intel_rapl_msr.c
@@ -180,7 +180,7 @@ static int rapl_msr_read_raw(int cpu, struct reg_action *ra, bool pmu_ctx)
goto out;
}
- if (rdmsrq_safe_on_cpu(cpu, ra->reg.msr, &ra->value)) {
+ if (rdmsr_safe_on_cpu(cpu, ra->reg.msr, &ra->value)) {
pr_debug("failed to read msr 0x%x on cpu %d\n", ra->reg.msr, cpu);
return -EIO;
}
--
2.53.0
* [PATCH RFC 08/11] x86/msr: Switch all callers of wrmsrq_safe_on_cpu() to use wrmsr_safe_on_cpu()
2026-04-28 10:41 [PATCH RFC 00/11] x86/msr: Reduce MSR access interfaces Juergen Gross
2026-04-28 10:42 ` [PATCH RFC 06/11] x86/msr: Switch all callers of rdmsrq_safe_on_cpu() to use rdmsr_safe_on_cpu() Juergen Gross
@ 2026-04-28 10:42 ` Juergen Gross
1 sibling, 0 replies; 3+ messages in thread
From: Juergen Gross @ 2026-04-28 10:42 UTC (permalink / raw)
To: linux-kernel, x86, linux-acpi, linux-pm, platform-driver-x86
Cc: Juergen Gross, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, H. Peter Anvin, Rafael J. Wysocki, Len Brown,
Huang Rui, Mario Limonciello, Perry Yuan, K Prateek Nayak,
Viresh Kumar, Srinivas Pandruvada, Hans de Goede,
Ilpo Järvinen
Now that wrmsr_safe_on_cpu() has the same interface as
wrmsrq_safe_on_cpu(), the callers of wrmsrq_safe_on_cpu() can be
switched to wrmsr_safe_on_cpu() and wrmsrq_safe_on_cpu() can be
removed.
Signed-off-by: Juergen Gross <jgross@suse.com>
---
arch/x86/include/asm/msr.h | 5 -----
arch/x86/kernel/acpi/cppc.c | 2 +-
arch/x86/lib/msr-smp.c | 16 ----------------
drivers/cpufreq/amd-pstate.c | 2 +-
.../x86/intel/speed_select_if/isst_if_common.c | 9 ++++-----
5 files changed, 6 insertions(+), 28 deletions(-)
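The calls below pass the 64-bit value straight through; with a dual
32-bit write interface each caller had to split the value by hand
first. A minimal sketch of that split, which the 64-bit interface
makes unnecessary (stub names invented for illustration):

```c
#include <stdint.h>

typedef uint64_t u64;
typedef uint32_t u32;

static u64 last_written;	/* records the value "written" by the stub */

/* Stand-in for the 64-bit write, as in wrmsr_safe_on_cpu(cpu, msr, q). */
static int wrmsr_u64_stub(u32 msr, u64 q)
{
	(void)msr;
	last_written = q;
	return 0;
}

/* What a caller of a dual 32-bit interface has to do by hand. */
static int wrmsr_u32pair_stub(u32 msr, u32 lo, u32 hi)
{
	return wrmsr_u64_stub(msr, ((u64)hi << 32) | lo);
}
```

Converting a caller thus drops the lo/hi split: instead of
`wrmsr_u32pair_stub(msr, (u32)val, (u32)(val >> 32))` a single
`wrmsr_u64_stub(msr, val)` suffices.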
diff --git a/arch/x86/include/asm/msr.h b/arch/x86/include/asm/msr.h
index cb14ede8f587..a5596d268053 100644
--- a/arch/x86/include/asm/msr.h
+++ b/arch/x86/include/asm/msr.h
@@ -262,7 +262,6 @@ void rdmsr_on_cpus(const struct cpumask *mask, u32 msr_no, struct msr __percpu *
void wrmsr_on_cpus(const struct cpumask *mask, u32 msr_no, struct msr __percpu *msrs);
int rdmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 *q);
int wrmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 q);
-int wrmsrq_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 q);
int rdmsr_safe_regs_on_cpu(unsigned int cpu, u32 regs[8]);
int wrmsr_safe_regs_on_cpu(unsigned int cpu, u32 regs[8]);
#else /* CONFIG_SMP */
@@ -294,10 +293,6 @@ static inline int wrmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 q)
{
return wrmsrq_safe(msr_no, q);
}
-static inline int wrmsrq_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 q)
-{
- return wrmsrq_safe(msr_no, q);
-}
static inline int rdmsr_safe_regs_on_cpu(unsigned int cpu, u32 regs[8])
{
return rdmsr_safe_regs(regs);
diff --git a/arch/x86/kernel/acpi/cppc.c b/arch/x86/kernel/acpi/cppc.c
index 576319dcbbbf..9f75762622e7 100644
--- a/arch/x86/kernel/acpi/cppc.c
+++ b/arch/x86/kernel/acpi/cppc.c
@@ -74,7 +74,7 @@ int cpc_write_ffh(int cpunum, struct cpc_reg *reg, u64 val)
val &= mask;
rd_val &= ~mask;
rd_val |= val;
- err = wrmsrq_safe_on_cpu(cpunum, reg->address, rd_val);
+ err = wrmsr_safe_on_cpu(cpunum, reg->address, rd_val);
}
return err;
}
diff --git a/arch/x86/lib/msr-smp.c b/arch/x86/lib/msr-smp.c
index b2859435f4af..9ae9ff11f1f1 100644
--- a/arch/x86/lib/msr-smp.c
+++ b/arch/x86/lib/msr-smp.c
@@ -169,22 +169,6 @@ int wrmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 q)
}
EXPORT_SYMBOL(wrmsr_safe_on_cpu);
-int wrmsrq_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 q)
-{
- int err;
- struct msr_info rv;
-
- memset(&rv, 0, sizeof(rv));
-
- rv.msr_no = msr_no;
- rv.reg.q = q;
-
- err = smp_call_function_single(cpu, __wrmsr_safe_on_cpu, &rv, 1);
-
- return err ? err : rv.err;
-}
-EXPORT_SYMBOL(wrmsrq_safe_on_cpu);
-
/*
* These variants are significantly slower, but allows control over
* the entire 32-bit GPR set.
diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
index d1eee3cd8f9b..8f3d776836c3 100644
--- a/drivers/cpufreq/amd-pstate.c
+++ b/drivers/cpufreq/amd-pstate.c
@@ -450,7 +450,7 @@ static int shmem_set_epp(struct cpufreq_policy *policy, u8 epp)
static inline int msr_cppc_enable(struct cpufreq_policy *policy)
{
- return wrmsrq_safe_on_cpu(policy->cpu, MSR_AMD_CPPC_ENABLE, 1);
+ return wrmsr_safe_on_cpu(policy->cpu, MSR_AMD_CPPC_ENABLE, 1);
}
static int shmem_cppc_enable(struct cpufreq_policy *policy)
diff --git a/drivers/platform/x86/intel/speed_select_if/isst_if_common.c b/drivers/platform/x86/intel/speed_select_if/isst_if_common.c
index b15a798454dc..9d730e6f155d 100644
--- a/drivers/platform/x86/intel/speed_select_if/isst_if_common.c
+++ b/drivers/platform/x86/intel/speed_select_if/isst_if_common.c
@@ -192,8 +192,8 @@ void isst_resume_common(void)
if (cb->registered)
isst_mbox_resume_command(cb, sst_cmd);
} else {
- wrmsrq_safe_on_cpu(sst_cmd->cpu, sst_cmd->cmd,
- sst_cmd->data);
+ wrmsr_safe_on_cpu(sst_cmd->cpu, sst_cmd->cmd,
+ sst_cmd->data);
}
}
}
@@ -500,9 +500,8 @@ static long isst_if_msr_cmd_req(u8 *cmd_ptr, int *write_only, int resume)
if (!capable(CAP_SYS_ADMIN))
return -EPERM;
- ret = wrmsrq_safe_on_cpu(msr_cmd->logical_cpu,
- msr_cmd->msr,
- msr_cmd->data);
+ ret = wrmsr_safe_on_cpu(msr_cmd->logical_cpu,
+ msr_cmd->msr, msr_cmd->data);
*write_only = 1;
if (!ret && !resume)
ret = isst_store_cmd(0, msr_cmd->msr,
--
2.53.0