* [PATCH RFC 00/11] x86/msr: Reduce MSR access interfaces
@ 2026-04-28 10:41 Juergen Gross
2026-04-28 10:41 ` [PATCH RFC 02/11] x86/msr: Switch all callers of rdmsrq_on_cpu() to use rdmsr_on_cpu() Juergen Gross
` (3 more replies)
0 siblings, 4 replies; 5+ messages in thread
From: Juergen Gross @ 2026-04-28 10:41 UTC (permalink / raw)
To: linux-kernel, x86, linux-edac, linux-pm, linux-hwmon,
linux-perf-users, platform-driver-x86, linux-acpi, virtualization
Cc: Juergen Gross, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, H. Peter Anvin, Tony Luck, Rafael J. Wysocki,
Viresh Kumar, Guenter Roeck, Daniel Lezcano, Zhang Rui,
Lukasz Luba, Peter Zijlstra, Arnaldo Carvalho de Melo,
Namhyung Kim, Mark Rutland, Alexander Shishkin, Jiri Olsa,
Ian Rogers, Adrian Hunter, James Clark, Huang Rui,
Mario Limonciello, Perry Yuan, K Prateek Nayak,
Srinivas Pandruvada, Len Brown, Hans de Goede, Ilpo Järvinen,
Ajay Kaher, Alexey Makhalov, Broadcom internal kernel review list
After my first attempt to rework the MSR access functions [1], this
series is the result of the feedback I received.
I have kept following the idea to:
- Reduce the number of MSR access functions by keeping only the ones
taking 64-bit values (instead of the dual 32-bit ones).
- Try to use inline functions instead of macros for rdmsr*(), removing
the hard-to-read cases where macro parameters named the variables
receiving the results.
One piece of feedback was NOT to rename the access functions, which my
new approach avoids.
The first 8 patches are a complete set achieving, in particular, the
first point above for the *_on_cpu() functions.
Patch 9 prepares switching the CPU-local MSR access functions so that
in the end only rdmsr(), rdmsr_safe(), wrmsr() and wrmsr_safe() remain
(all taking 64-bit values and implemented as inline functions). For
this purpose the already existing functions/macros are overloaded via
macros to accept both variants (64-bit and dual 32-bit values) during
the phase of switching the individual subsystems to the new scheme.
This avoids having either to patch all users of the current functions
in one patch (as done in the first 8 patches), or to use intermediate
function names which would need to be patched again at the end. Either
way the resulting patches would be very hard to review due to their
size.
The last 2 patches are examples of how switching a subsystem to the
new scheme would look.
Up to now all of this is compile-tested only.
[1]: https://lore.kernel.org/lkml/20260420091634.128787-1-jgross@suse.com/
Juergen Gross (11):
x86/msr: Switch rdmsr_on_cpu() to return a 64-bit quantity
x86/msr: Switch all callers of rdmsrq_on_cpu() to use rdmsr_on_cpu()
x86/msr: Switch wrmsr_on_cpu() to use a 64-bit quantity
x86/msr: Switch all callers of wrmsrq_on_cpu() to use wrmsr_on_cpu()
x86/msr: Switch rdmsr_safe_on_cpu() to return a 64-bit quantity
x86/msr: Switch all callers of rdmsrq_safe_on_cpu() to use
rdmsr_safe_on_cpu()
x86/msr: Switch wrmsr_safe_on_cpu() to use a 64-bit quantity
x86/msr: Switch all callers of wrmsrq_safe_on_cpu() to use
wrmsr_safe_on_cpu()
x86/msr: Add macros for preparing to switch rdmsr/wrmsr interfaces
x86/events: Switch core parts to use 64-bit rdmsr/wrmsr() variants
x86/cpu/mce: Switch code to use 64-bit rdmsr/wrmsr() variants
arch/x86/events/core.c | 42 ++++----
arch/x86/events/intel/ds.c | 11 +-
arch/x86/events/intel/pt.c | 2 +-
arch/x86/events/intel/uncore_discovery.c | 2 +-
arch/x86/events/intel/uncore_snbep.c | 2 +-
arch/x86/events/msr.c | 2 +-
arch/x86/events/perf_event.h | 26 ++---
arch/x86/events/probe.c | 2 +-
arch/x86/events/rapl.c | 8 +-
arch/x86/include/asm/msr.h | 90 +++++++++-------
arch/x86/include/asm/paravirt.h | 6 +-
arch/x86/kernel/acpi/cppc.c | 8 +-
arch/x86/kernel/cpu/intel_epb.c | 8 +-
arch/x86/kernel/cpu/mce/amd.c | 101 +++++++++---------
arch/x86/kernel/cpu/mce/core.c | 18 ++--
arch/x86/kernel/cpu/mce/inject.c | 40 +++----
arch/x86/kernel/cpu/mce/intel.c | 32 +++---
arch/x86/kernel/cpu/mce/p5.c | 16 +--
arch/x86/kernel/cpu/mce/winchip.c | 10 +-
arch/x86/kernel/cpu/microcode/intel.c | 2 +-
arch/x86/kernel/msr.c | 8 +-
arch/x86/lib/msr-smp.c | 79 ++------------
drivers/cpufreq/acpi-cpufreq.c | 4 +-
drivers/cpufreq/amd-pstate-ut.c | 2 +-
drivers/cpufreq/amd-pstate.c | 21 ++--
drivers/cpufreq/amd_freq_sensitivity.c | 4 +-
drivers/cpufreq/intel_pstate.c | 64 +++++------
drivers/cpufreq/p4-clockmod.c | 32 +++---
drivers/cpufreq/speedstep-centrino.c | 27 ++---
drivers/hwmon/coretemp.c | 44 ++++----
drivers/hwmon/via-cputemp.c | 16 +--
drivers/platform/x86/amd/hfi/hfi.c | 4 +-
.../intel/speed_select_if/isst_if_common.c | 13 ++-
.../intel/uncore-frequency/uncore-frequency.c | 12 +--
drivers/powercap/intel_rapl_msr.c | 2 +-
drivers/thermal/intel/intel_tcc.c | 43 ++++----
drivers/thermal/intel/x86_pkg_temp_thermal.c | 22 ++--
37 files changed, 387 insertions(+), 438 deletions(-)
--
2.53.0
* [PATCH RFC 02/11] x86/msr: Switch all callers of rdmsrq_on_cpu() to use rdmsr_on_cpu()
2026-04-28 10:41 [PATCH RFC 00/11] x86/msr: Reduce MSR access interfaces Juergen Gross
@ 2026-04-28 10:41 ` Juergen Gross
2026-04-28 10:41 ` [PATCH RFC 03/11] x86/msr: Switch wrmsr_on_cpu() to use a 64-bit quantity Juergen Gross
` (2 subsequent siblings)
3 siblings, 0 replies; 5+ messages in thread
From: Juergen Gross @ 2026-04-28 10:41 UTC (permalink / raw)
To: linux-kernel, x86, linux-perf-users, linux-edac, linux-pm,
platform-driver-x86
Cc: Juergen Gross, Peter Zijlstra, Ingo Molnar,
Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
James Clark, Thomas Gleixner, Borislav Petkov, Dave Hansen,
H. Peter Anvin, Tony Luck, Rafael J. Wysocki, Viresh Kumar,
Huang Rui, Mario Limonciello, Perry Yuan, K Prateek Nayak,
Srinivas Pandruvada, Len Brown, Hans de Goede, Ilpo Järvinen
Now that rdmsr_on_cpu() has the same interface as rdmsrq_on_cpu(), the
callers of rdmsrq_on_cpu() can be switched to rdmsr_on_cpu() and
rdmsrq_on_cpu() can be removed.
At the same time, switch the only user of rdmsrl_on_cpu() to
rdmsr_on_cpu() and drop rdmsrl_on_cpu() as well.
Signed-off-by: Juergen Gross <jgross@suse.com>
---
arch/x86/events/intel/uncore_snbep.c | 2 +-
arch/x86/include/asm/msr.h | 7 ------
arch/x86/kernel/cpu/intel_epb.c | 4 ++--
arch/x86/kernel/cpu/mce/inject.c | 4 ++--
arch/x86/kernel/cpu/microcode/intel.c | 2 +-
arch/x86/lib/msr-smp.c | 15 -------------
drivers/cpufreq/acpi-cpufreq.c | 4 ++--
drivers/cpufreq/amd-pstate.c | 8 +++----
drivers/cpufreq/intel_pstate.c | 22 +++++++++----------
.../intel/uncore-frequency/uncore-frequency.c | 6 ++---
10 files changed, 26 insertions(+), 48 deletions(-)
diff --git a/arch/x86/events/intel/uncore_snbep.c b/arch/x86/events/intel/uncore_snbep.c
index 215d33e260ed..fee94698b611 100644
--- a/arch/x86/events/intel/uncore_snbep.c
+++ b/arch/x86/events/intel/uncore_snbep.c
@@ -3695,7 +3695,7 @@ static int skx_msr_cpu_bus_read(int cpu, u64 *topology)
{
u64 msr_value;
- if (rdmsrq_on_cpu(cpu, SKX_MSR_CPU_BUS_NUMBER, &msr_value) ||
+ if (rdmsr_on_cpu(cpu, SKX_MSR_CPU_BUS_NUMBER, &msr_value) ||
!(msr_value & SKX_MSR_CPU_BUS_VALID_BIT))
return -ENXIO;
diff --git a/arch/x86/include/asm/msr.h b/arch/x86/include/asm/msr.h
index fcdaeddf4337..8c96fc5c6169 100644
--- a/arch/x86/include/asm/msr.h
+++ b/arch/x86/include/asm/msr.h
@@ -258,7 +258,6 @@ int msr_clear_bit(u32 msr, u8 bit);
#ifdef CONFIG_SMP
int rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u64 *q);
int wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h);
-int rdmsrq_on_cpu(unsigned int cpu, u32 msr_no, u64 *q);
int wrmsrq_on_cpu(unsigned int cpu, u32 msr_no, u64 q);
void rdmsr_on_cpus(const struct cpumask *mask, u32 msr_no, struct msr __percpu *msrs);
void wrmsr_on_cpus(const struct cpumask *mask, u32 msr_no, struct msr __percpu *msrs);
@@ -279,11 +278,6 @@ static inline int wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h)
wrmsr(msr_no, l, h);
return 0;
}
-static inline int rdmsrq_on_cpu(unsigned int cpu, u32 msr_no, u64 *q)
-{
- rdmsrq(msr_no, *q);
- return 0;
-}
static inline int wrmsrq_on_cpu(unsigned int cpu, u32 msr_no, u64 q)
{
wrmsrq(msr_no, q);
@@ -329,7 +323,6 @@ static inline int wrmsr_safe_regs_on_cpu(unsigned int cpu, u32 regs[8])
/* Compatibility wrappers: */
#define rdmsrl(msr, val) rdmsrq(msr, val)
#define wrmsrl(msr, val) wrmsrq(msr, val)
-#define rdmsrl_on_cpu(cpu, msr, q) rdmsrq_on_cpu(cpu, msr, q)
#endif /* __ASSEMBLER__ */
#endif /* _ASM_X86_MSR_H */
diff --git a/arch/x86/kernel/cpu/intel_epb.c b/arch/x86/kernel/cpu/intel_epb.c
index 2c56f8730f59..cb5a3c299f26 100644
--- a/arch/x86/kernel/cpu/intel_epb.c
+++ b/arch/x86/kernel/cpu/intel_epb.c
@@ -139,7 +139,7 @@ static ssize_t energy_perf_bias_show(struct device *dev,
u64 epb;
int ret;
- ret = rdmsrq_on_cpu(cpu, MSR_IA32_ENERGY_PERF_BIAS, &epb);
+ ret = rdmsr_on_cpu(cpu, MSR_IA32_ENERGY_PERF_BIAS, &epb);
if (ret < 0)
return ret;
@@ -161,7 +161,7 @@ static ssize_t energy_perf_bias_store(struct device *dev,
else if (kstrtou64(buf, 0, &val) || val > MAX_EPB)
return -EINVAL;
- ret = rdmsrq_on_cpu(cpu, MSR_IA32_ENERGY_PERF_BIAS, &epb);
+ ret = rdmsr_on_cpu(cpu, MSR_IA32_ENERGY_PERF_BIAS, &epb);
if (ret < 0)
return ret;
diff --git a/arch/x86/kernel/cpu/mce/inject.c b/arch/x86/kernel/cpu/mce/inject.c
index fa13a8a4946b..78649651c987 100644
--- a/arch/x86/kernel/cpu/mce/inject.c
+++ b/arch/x86/kernel/cpu/mce/inject.c
@@ -590,7 +590,7 @@ static int inj_bank_set(void *data, u64 val)
u64 cap;
/* Get bank count on target CPU so we can handle non-uniform values. */
- rdmsrq_on_cpu(m->extcpu, MSR_IA32_MCG_CAP, &cap);
+ rdmsr_on_cpu(m->extcpu, MSR_IA32_MCG_CAP, &cap);
n_banks = cap & MCG_BANKCNT_MASK;
if (val >= n_banks) {
@@ -614,7 +614,7 @@ static int inj_bank_set(void *data, u64 val)
if (cpu_feature_enabled(X86_FEATURE_SMCA)) {
u64 ipid;
- if (rdmsrq_on_cpu(m->extcpu, MSR_AMD64_SMCA_MCx_IPID(val), &ipid)) {
+ if (rdmsr_on_cpu(m->extcpu, MSR_AMD64_SMCA_MCx_IPID(val), &ipid)) {
pr_err("Error reading IPID on CPU%d\n", m->extcpu);
return -EINVAL;
}
diff --git a/arch/x86/kernel/cpu/microcode/intel.c b/arch/x86/kernel/cpu/microcode/intel.c
index 37ac4afe0972..b05e751ffcca 100644
--- a/arch/x86/kernel/cpu/microcode/intel.c
+++ b/arch/x86/kernel/cpu/microcode/intel.c
@@ -660,7 +660,7 @@ static void stage_microcode(void)
pkg_id = topology_logical_package_id(cpu);
- err = rdmsrq_on_cpu(cpu, MSR_IA32_MCU_STAGING_MBOX_ADDR, &mmio_pa);
+ err = rdmsr_on_cpu(cpu, MSR_IA32_MCU_STAGING_MBOX_ADDR, &mmio_pa);
if (WARN_ON_ONCE(err))
return;
diff --git a/arch/x86/lib/msr-smp.c b/arch/x86/lib/msr-smp.c
index 6e04aabda863..7c96f003bfe0 100644
--- a/arch/x86/lib/msr-smp.c
+++ b/arch/x86/lib/msr-smp.c
@@ -46,21 +46,6 @@ int rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u64 *q)
}
EXPORT_SYMBOL(rdmsr_on_cpu);
-int rdmsrq_on_cpu(unsigned int cpu, u32 msr_no, u64 *q)
-{
- int err;
- struct msr_info rv;
-
- memset(&rv, 0, sizeof(rv));
-
- rv.msr_no = msr_no;
- err = smp_call_function_single(cpu, __rdmsr_on_cpu, &rv, 1);
- *q = rv.reg.q;
-
- return err;
-}
-EXPORT_SYMBOL(rdmsrq_on_cpu);
-
int wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h)
{
int err;
diff --git a/drivers/cpufreq/acpi-cpufreq.c b/drivers/cpufreq/acpi-cpufreq.c
index 21639d9ac753..43bf1c21c4ca 100644
--- a/drivers/cpufreq/acpi-cpufreq.c
+++ b/drivers/cpufreq/acpi-cpufreq.c
@@ -79,11 +79,11 @@ static bool boost_state(unsigned int cpu)
case X86_VENDOR_INTEL:
case X86_VENDOR_CENTAUR:
case X86_VENDOR_ZHAOXIN:
- rdmsrq_on_cpu(cpu, MSR_IA32_MISC_ENABLE, &msr);
+ rdmsr_on_cpu(cpu, MSR_IA32_MISC_ENABLE, &msr);
return !(msr & MSR_IA32_MISC_ENABLE_TURBO_DISABLE);
case X86_VENDOR_HYGON:
case X86_VENDOR_AMD:
- rdmsrq_on_cpu(cpu, MSR_K7_HWCR, &msr);
+ rdmsr_on_cpu(cpu, MSR_K7_HWCR, &msr);
return !(msr & MSR_K7_HWCR_CPB_DIS);
}
return false;
diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
index 453084c67327..a6fc22f770c3 100644
--- a/drivers/cpufreq/amd-pstate.c
+++ b/drivers/cpufreq/amd-pstate.c
@@ -208,7 +208,7 @@ static u8 msr_get_epp(struct amd_cpudata *cpudata)
u64 value;
int ret;
- ret = rdmsrq_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, &value);
+ ret = rdmsr_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, &value);
if (ret < 0) {
pr_debug("Could not retrieve energy perf value (%d)\n", ret);
return ret;
@@ -382,7 +382,7 @@ static int amd_pstate_init_floor_perf(struct cpufreq_policy *policy)
if (!cpu_feature_enabled(X86_FEATURE_CPPC_PERF_PRIO))
return 0;
- ret = rdmsrq_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ2, &value);
+ ret = rdmsr_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ2, &value);
if (ret) {
pr_err("failed to read CPPC REQ2 value. Error (%d)\n", ret);
return ret;
@@ -480,7 +480,7 @@ static int msr_init_perf(struct amd_cpudata *cpudata)
if (ret)
return ret;
- ret = rdmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, &cppc_req);
+ ret = rdmsr_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, &cppc_req);
if (ret)
return ret;
@@ -881,7 +881,7 @@ static int amd_pstate_init_boost_support(struct amd_cpudata *cpudata)
goto exit_err;
}
- ret = rdmsrq_on_cpu(cpudata->cpu, MSR_K7_HWCR, &boost_val);
+ ret = rdmsr_on_cpu(cpudata->cpu, MSR_K7_HWCR, &boost_val);
if (ret) {
pr_err_once("failed to read initial CPU boost state!\n");
ret = -EIO;
diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
index 1292da53e5fc..e5b30a53c49a 100644
--- a/drivers/cpufreq/intel_pstate.c
+++ b/drivers/cpufreq/intel_pstate.c
@@ -632,8 +632,8 @@ static s16 intel_pstate_get_epp(struct cpudata *cpu_data, u64 hwp_req_data)
* MSR_HWP_REQUEST, so need to read and get EPP.
*/
if (!hwp_req_data) {
- epp = rdmsrq_on_cpu(cpu_data->cpu, MSR_HWP_REQUEST,
- &hwp_req_data);
+ epp = rdmsr_on_cpu(cpu_data->cpu, MSR_HWP_REQUEST,
+ &hwp_req_data);
if (epp)
return epp;
}
@@ -886,7 +886,7 @@ static ssize_t show_base_frequency(struct cpufreq_policy *policy, char *buf)
if (ratio <= 0) {
u64 cap;
- rdmsrq_on_cpu(policy->cpu, MSR_HWP_CAPABILITIES, &cap);
+ rdmsr_on_cpu(policy->cpu, MSR_HWP_CAPABILITIES, &cap);
ratio = HWP_GUARANTEED_PERF(cap);
}
@@ -1187,7 +1187,7 @@ static void __intel_pstate_get_hwp_cap(struct cpudata *cpu)
{
u64 cap;
- rdmsrq_on_cpu(cpu->cpu, MSR_HWP_CAPABILITIES, &cap);
+ rdmsr_on_cpu(cpu->cpu, MSR_HWP_CAPABILITIES, &cap);
WRITE_ONCE(cpu->hwp_cap_cached, cap);
cpu->pstate.max_pstate = HWP_GUARANTEED_PERF(cap);
cpu->pstate.turbo_pstate = HWP_HIGHEST_PERF(cap);
@@ -1269,7 +1269,7 @@ static void intel_pstate_hwp_set(unsigned int cpu)
if (cpu_data->policy == CPUFREQ_POLICY_PERFORMANCE)
min = max;
- rdmsrq_on_cpu(cpu, MSR_HWP_REQUEST, &value);
+ rdmsr_on_cpu(cpu, MSR_HWP_REQUEST, &value);
value &= ~HWP_MIN_PERF(~0L);
value |= HWP_MIN_PERF(min);
@@ -2156,7 +2156,7 @@ static int core_get_min_pstate(int cpu)
{
u64 value;
- rdmsrq_on_cpu(cpu, MSR_PLATFORM_INFO, &value);
+ rdmsr_on_cpu(cpu, MSR_PLATFORM_INFO, &value);
return (value >> 40) & 0xFF;
}
@@ -2164,7 +2164,7 @@ static int core_get_max_pstate_physical(int cpu)
{
u64 value;
- rdmsrq_on_cpu(cpu, MSR_PLATFORM_INFO, &value);
+ rdmsr_on_cpu(cpu, MSR_PLATFORM_INFO, &value);
return (value >> 8) & 0xFF;
}
@@ -2209,7 +2209,7 @@ static int core_get_max_pstate(int cpu)
int tdp_ratio;
int err;
- rdmsrq_on_cpu(cpu, MSR_PLATFORM_INFO, &plat_info);
+ rdmsr_on_cpu(cpu, MSR_PLATFORM_INFO, &plat_info);
max_pstate = (plat_info >> 8) & 0xFF;
tdp_ratio = core_get_tdp_ratio(cpu, plat_info);
@@ -2241,7 +2241,7 @@ static int core_get_turbo_pstate(int cpu)
u64 value;
int nont, ret;
- rdmsrq_on_cpu(cpu, MSR_TURBO_RATIO_LIMIT, &value);
+ rdmsr_on_cpu(cpu, MSR_TURBO_RATIO_LIMIT, &value);
nont = core_get_max_pstate(cpu);
ret = (value) & 255;
if (ret <= nont)
@@ -2264,7 +2264,7 @@ static int knl_get_turbo_pstate(int cpu)
u64 value;
int nont, ret;
- rdmsrq_on_cpu(cpu, MSR_TURBO_RATIO_LIMIT, &value);
+ rdmsr_on_cpu(cpu, MSR_TURBO_RATIO_LIMIT, &value);
nont = core_get_max_pstate(cpu);
ret = (((value) >> 8) & 0xFF);
if (ret <= nont)
@@ -3318,7 +3318,7 @@ static int intel_cpufreq_cpu_init(struct cpufreq_policy *policy)
intel_pstate_get_hwp_cap(cpu);
- rdmsrq_on_cpu(cpu->cpu, MSR_HWP_REQUEST, &value);
+ rdmsr_on_cpu(cpu->cpu, MSR_HWP_REQUEST, &value);
WRITE_ONCE(cpu->hwp_req_cached, value);
cpu->epp_cached = intel_pstate_get_epp(cpu, value);
diff --git a/drivers/platform/x86/intel/uncore-frequency/uncore-frequency.c b/drivers/platform/x86/intel/uncore-frequency/uncore-frequency.c
index 667f2c8b9594..b9878a4d391b 100644
--- a/drivers/platform/x86/intel/uncore-frequency/uncore-frequency.c
+++ b/drivers/platform/x86/intel/uncore-frequency/uncore-frequency.c
@@ -52,7 +52,7 @@ static int uncore_read_control_freq(struct uncore_data *data, unsigned int *valu
if (data->control_cpu < 0)
return -ENXIO;
- ret = rdmsrq_on_cpu(data->control_cpu, MSR_UNCORE_RATIO_LIMIT, &cap);
+ ret = rdmsr_on_cpu(data->control_cpu, MSR_UNCORE_RATIO_LIMIT, &cap);
if (ret)
return ret;
@@ -77,7 +77,7 @@ static int uncore_write_control_freq(struct uncore_data *data, unsigned int inpu
if (data->control_cpu < 0)
return -ENXIO;
- ret = rdmsrq_on_cpu(data->control_cpu, MSR_UNCORE_RATIO_LIMIT, &cap);
+ ret = rdmsr_on_cpu(data->control_cpu, MSR_UNCORE_RATIO_LIMIT, &cap);
if (ret)
return ret;
@@ -106,7 +106,7 @@ static int uncore_read_freq(struct uncore_data *data, unsigned int *freq)
if (data->control_cpu < 0)
return -ENXIO;
- ret = rdmsrq_on_cpu(data->control_cpu, MSR_UNCORE_PERF_STATUS, &ratio);
+ ret = rdmsr_on_cpu(data->control_cpu, MSR_UNCORE_PERF_STATUS, &ratio);
if (ret)
return ret;
--
2.53.0
* [PATCH RFC 03/11] x86/msr: Switch wrmsr_on_cpu() to use a 64-bit quantity
2026-04-28 10:41 [PATCH RFC 00/11] x86/msr: Reduce MSR access interfaces Juergen Gross
2026-04-28 10:41 ` [PATCH RFC 02/11] x86/msr: Switch all callers of rdmsrq_on_cpu() to use rdmsr_on_cpu() Juergen Gross
@ 2026-04-28 10:41 ` Juergen Gross
2026-04-28 10:42 ` [PATCH RFC 06/11] x86/msr: Switch all callers of rdmsrq_safe_on_cpu() to use rdmsr_safe_on_cpu() Juergen Gross
2026-04-28 10:42 ` [PATCH RFC 10/11] x86/events: Switch core parts to use 64-bit rdmsr/wrmsr() variants Juergen Gross
3 siblings, 0 replies; 5+ messages in thread
From: Juergen Gross @ 2026-04-28 10:41 UTC (permalink / raw)
To: linux-kernel, x86, linux-perf-users, linux-edac, linux-pm
Cc: Juergen Gross, Peter Zijlstra, Ingo Molnar,
Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
James Clark, Thomas Gleixner, Borislav Petkov, Dave Hansen,
H. Peter Anvin, Tony Luck, Rafael J. Wysocki, Viresh Kumar,
Daniel Lezcano, Zhang Rui, Lukasz Luba
In order to prepare for retiring wrmsrq_on_cpu(), switch wrmsr_on_cpu()
to the same interface as wrmsrq_on_cpu().
Switch all wrmsr_on_cpu() callers to use the new interface.
Signed-off-by: Juergen Gross <jgross@suse.com>
---
arch/x86/events/intel/ds.c | 11 ++++-------
arch/x86/include/asm/msr.h | 8 ++++----
arch/x86/kernel/cpu/mce/inject.c | 2 +-
arch/x86/lib/msr-smp.c | 5 ++---
drivers/cpufreq/p4-clockmod.c | 4 ++--
drivers/cpufreq/speedstep-centrino.c | 4 ++--
drivers/thermal/intel/x86_pkg_temp_thermal.c | 2 +-
7 files changed, 16 insertions(+), 20 deletions(-)
diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index 7f0d515c07c5..06d6d06c7a75 100644
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -780,9 +780,7 @@ void init_debug_store_on_cpu(int cpu)
if (!ds)
return;
- wrmsr_on_cpu(cpu, MSR_IA32_DS_AREA,
- (u32)((u64)(unsigned long)ds),
- (u32)((u64)(unsigned long)ds >> 32));
+ wrmsr_on_cpu(cpu, MSR_IA32_DS_AREA, (u64)(unsigned long)ds);
}
void fini_debug_store_on_cpu(int cpu)
@@ -790,7 +788,7 @@ void fini_debug_store_on_cpu(int cpu)
if (!per_cpu(cpu_hw_events, cpu).ds)
return;
- wrmsr_on_cpu(cpu, MSR_IA32_DS_AREA, 0, 0);
+ wrmsr_on_cpu(cpu, MSR_IA32_DS_AREA, 0);
}
static DEFINE_PER_CPU(void *, insn_buffer);
@@ -1095,8 +1093,7 @@ void init_arch_pebs_on_cpu(int cpu)
* contiguous physical buffer (__alloc_pages_node() with order)
*/
arch_pebs_base = virt_to_phys(cpuc->pebs_vaddr) | PEBS_BUFFER_SHIFT;
- wrmsr_on_cpu(cpu, MSR_IA32_PEBS_BASE, (u32)arch_pebs_base,
- (u32)(arch_pebs_base >> 32));
+ wrmsr_on_cpu(cpu, MSR_IA32_PEBS_BASE, arch_pebs_base);
x86_pmu.pebs_active = 1;
}
@@ -1105,7 +1102,7 @@ inline void fini_arch_pebs_on_cpu(int cpu)
if (!x86_pmu.arch_pebs)
return;
- wrmsr_on_cpu(cpu, MSR_IA32_PEBS_BASE, 0, 0);
+ wrmsr_on_cpu(cpu, MSR_IA32_PEBS_BASE, 0);
}
/*
diff --git a/arch/x86/include/asm/msr.h b/arch/x86/include/asm/msr.h
index 8c96fc5c6169..a004440b4c0a 100644
--- a/arch/x86/include/asm/msr.h
+++ b/arch/x86/include/asm/msr.h
@@ -257,7 +257,7 @@ int msr_clear_bit(u32 msr, u8 bit);
#ifdef CONFIG_SMP
int rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u64 *q);
-int wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h);
+int wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u64 q);
int wrmsrq_on_cpu(unsigned int cpu, u32 msr_no, u64 q);
void rdmsr_on_cpus(const struct cpumask *mask, u32 msr_no, struct msr __percpu *msrs);
void wrmsr_on_cpus(const struct cpumask *mask, u32 msr_no, struct msr __percpu *msrs);
@@ -273,9 +273,9 @@ static inline int rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u64 *q)
rdmsrq(msr_no, *q);
return 0;
}
-static inline int wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h)
+static inline int wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u64 q)
{
- wrmsr(msr_no, l, h);
+ wrmsrq(msr_no, q);
return 0;
}
static inline int wrmsrq_on_cpu(unsigned int cpu, u32 msr_no, u64 q)
@@ -291,7 +291,7 @@ static inline void rdmsr_on_cpus(const struct cpumask *m, u32 msr_no,
static inline void wrmsr_on_cpus(const struct cpumask *m, u32 msr_no,
struct msr __percpu *msrs)
{
- wrmsr_on_cpu(0, msr_no, raw_cpu_read(msrs->l), raw_cpu_read(msrs->h));
+ wrmsrq_on_cpu(0, msr_no, raw_cpu_read(msrs->q));
}
static inline int rdmsr_safe_on_cpu(unsigned int cpu, u32 msr_no,
u32 *l, u32 *h)
diff --git a/arch/x86/kernel/cpu/mce/inject.c b/arch/x86/kernel/cpu/mce/inject.c
index 78649651c987..2d75098211b3 100644
--- a/arch/x86/kernel/cpu/mce/inject.c
+++ b/arch/x86/kernel/cpu/mce/inject.c
@@ -327,7 +327,7 @@ static int toggle_hw_mce_inject(unsigned int cpu, bool enable)
enable ? (val.l |= BIT(18)) : (val.l &= ~BIT(18));
- err = wrmsr_on_cpu(cpu, MSR_K7_HWCR, val.l, val.h);
+ err = wrmsr_on_cpu(cpu, MSR_K7_HWCR, val.q);
if (err)
pr_err("%s: error writing HWCR\n", __func__);
diff --git a/arch/x86/lib/msr-smp.c b/arch/x86/lib/msr-smp.c
index 7c96f003bfe0..0b4f3c4e4f82 100644
--- a/arch/x86/lib/msr-smp.c
+++ b/arch/x86/lib/msr-smp.c
@@ -46,7 +46,7 @@ int rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u64 *q)
}
EXPORT_SYMBOL(rdmsr_on_cpu);
-int wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h)
+int wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u64 q)
{
int err;
struct msr_info rv;
@@ -54,8 +54,7 @@ int wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h)
memset(&rv, 0, sizeof(rv));
rv.msr_no = msr_no;
- rv.reg.l = l;
- rv.reg.h = h;
+ rv.reg.q = q;
err = smp_call_function_single(cpu, __wrmsr_on_cpu, &rv, 1);
return err;
diff --git a/drivers/cpufreq/p4-clockmod.c b/drivers/cpufreq/p4-clockmod.c
index 393c4a5d2021..409c0210e48a 100644
--- a/drivers/cpufreq/p4-clockmod.c
+++ b/drivers/cpufreq/p4-clockmod.c
@@ -68,7 +68,7 @@ static int cpufreq_p4_setdc(unsigned int cpu, unsigned int newstate)
rdmsr_on_cpu(cpu, MSR_IA32_THERM_CONTROL, &val.q);
if (newstate == DC_DISABLE) {
pr_debug("CPU#%d disabling modulation\n", cpu);
- wrmsr_on_cpu(cpu, MSR_IA32_THERM_CONTROL, val.l & ~(1<<4), val.h);
+ wrmsr_on_cpu(cpu, MSR_IA32_THERM_CONTROL, val.q & ~(1ULL << 4));
} else {
pr_debug("CPU#%d setting duty cycle to %d%%\n",
cpu, ((125 * newstate) / 10));
@@ -79,7 +79,7 @@ static int cpufreq_p4_setdc(unsigned int cpu, unsigned int newstate)
*/
val.l = (val.l & ~14);
val.l = val.l | (1<<4) | ((newstate & 0x7)<<1);
- wrmsr_on_cpu(cpu, MSR_IA32_THERM_CONTROL, val.l, val.h);
+ wrmsr_on_cpu(cpu, MSR_IA32_THERM_CONTROL, val.q);
}
return 0;
diff --git a/drivers/cpufreq/speedstep-centrino.c b/drivers/cpufreq/speedstep-centrino.c
index b74c85128377..121cddb1430f 100644
--- a/drivers/cpufreq/speedstep-centrino.c
+++ b/drivers/cpufreq/speedstep-centrino.c
@@ -475,7 +475,7 @@ static int centrino_target(struct cpufreq_policy *policy, unsigned int index)
oldmsr.l |= msr;
}
- wrmsr_on_cpu(good_cpu, MSR_IA32_PERF_CTL, oldmsr.l, oldmsr.h);
+ wrmsr_on_cpu(good_cpu, MSR_IA32_PERF_CTL, oldmsr.q);
if (policy->shared_type == CPUFREQ_SHARED_TYPE_ANY)
break;
@@ -491,7 +491,7 @@ static int centrino_target(struct cpufreq_policy *policy, unsigned int index)
*/
for_each_cpu(j, covered_cpus)
- wrmsr_on_cpu(j, MSR_IA32_PERF_CTL, oldmsr.l, oldmsr.h);
+ wrmsr_on_cpu(j, MSR_IA32_PERF_CTL, oldmsr.q);
}
retval = 0;
diff --git a/drivers/thermal/intel/x86_pkg_temp_thermal.c b/drivers/thermal/intel/x86_pkg_temp_thermal.c
index fc7dbba4f9ca..e52d35015486 100644
--- a/drivers/thermal/intel/x86_pkg_temp_thermal.c
+++ b/drivers/thermal/intel/x86_pkg_temp_thermal.c
@@ -169,7 +169,7 @@ sys_set_trip_temp(struct thermal_zone_device *tzd,
}
return wrmsr_on_cpu(zonedev->cpu, MSR_IA32_PACKAGE_THERM_INTERRUPT,
- v.l, v.h);
+ v.q);
}
/* Thermal zone callback registry */
--
2.53.0
* [PATCH RFC 06/11] x86/msr: Switch all callers of rdmsrq_safe_on_cpu() to use rdmsr_safe_on_cpu()
2026-04-28 10:41 [PATCH RFC 00/11] x86/msr: Reduce MSR access interfaces Juergen Gross
2026-04-28 10:41 ` [PATCH RFC 02/11] x86/msr: Switch all callers of rdmsrq_on_cpu() to use rdmsr_on_cpu() Juergen Gross
2026-04-28 10:41 ` [PATCH RFC 03/11] x86/msr: Switch wrmsr_on_cpu() to use a 64-bit quantity Juergen Gross
@ 2026-04-28 10:42 ` Juergen Gross
2026-04-28 10:42 ` [PATCH RFC 10/11] x86/events: Switch core parts to use 64-bit rdmsr/wrmsr() variants Juergen Gross
3 siblings, 0 replies; 5+ messages in thread
From: Juergen Gross @ 2026-04-28 10:42 UTC (permalink / raw)
To: linux-kernel, x86, linux-perf-users, linux-acpi, linux-pm,
platform-driver-x86
Cc: Juergen Gross, Peter Zijlstra, Ingo Molnar,
Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
James Clark, Thomas Gleixner, Borislav Petkov, Dave Hansen,
H. Peter Anvin, Rafael J. Wysocki, Len Brown, Huang Rui,
Mario Limonciello, Perry Yuan, K Prateek Nayak, Viresh Kumar,
Srinivas Pandruvada, Hans de Goede, Ilpo Järvinen
Now that rdmsr_safe_on_cpu() has the same interface as
rdmsrq_safe_on_cpu(), the callers of rdmsrq_safe_on_cpu() can be
switched to rdmsr_safe_on_cpu() and rdmsrq_safe_on_cpu() can be
removed.
Signed-off-by: Juergen Gross <jgross@suse.com>
---
arch/x86/events/intel/pt.c | 2 +-
arch/x86/events/intel/uncore_discovery.c | 2 +-
arch/x86/include/asm/msr.h | 5 -----
arch/x86/kernel/acpi/cppc.c | 6 +++---
arch/x86/lib/msr-smp.c | 10 ----------
drivers/cpufreq/amd-pstate-ut.c | 2 +-
drivers/cpufreq/amd-pstate.c | 3 +--
drivers/cpufreq/intel_pstate.c | 6 +++---
.../x86/intel/speed_select_if/isst_if_common.c | 4 ++--
drivers/powercap/intel_rapl_msr.c | 2 +-
10 files changed, 13 insertions(+), 29 deletions(-)
diff --git a/arch/x86/events/intel/pt.c b/arch/x86/events/intel/pt.c
index b5726b50e77d..7c92146b06ea 100644
--- a/arch/x86/events/intel/pt.c
+++ b/arch/x86/events/intel/pt.c
@@ -1840,7 +1840,7 @@ static __init int pt_init(void)
for_each_online_cpu(cpu) {
u64 ctl;
- ret = rdmsrq_safe_on_cpu(cpu, MSR_IA32_RTIT_CTL, &ctl);
+ ret = rdmsr_safe_on_cpu(cpu, MSR_IA32_RTIT_CTL, &ctl);
if (!ret && (ctl & RTIT_CTL_TRACEEN))
prior_warn++;
}
diff --git a/arch/x86/events/intel/uncore_discovery.c b/arch/x86/events/intel/uncore_discovery.c
index 583cbd06b9b8..0853a9e02fda 100644
--- a/arch/x86/events/intel/uncore_discovery.c
+++ b/arch/x86/events/intel/uncore_discovery.c
@@ -405,7 +405,7 @@ static bool uncore_discovery_msr(struct uncore_discovery_domain *domain)
if (__test_and_set_bit(die, die_mask))
continue;
- if (rdmsrq_safe_on_cpu(cpu, domain->discovery_base, &base))
+ if (rdmsr_safe_on_cpu(cpu, domain->discovery_base, &base))
continue;
if (!base)
diff --git a/arch/x86/include/asm/msr.h b/arch/x86/include/asm/msr.h
index b3b43bc04b69..f2d14c670140 100644
--- a/arch/x86/include/asm/msr.h
+++ b/arch/x86/include/asm/msr.h
@@ -262,7 +262,6 @@ void rdmsr_on_cpus(const struct cpumask *mask, u32 msr_no, struct msr __percpu *
void wrmsr_on_cpus(const struct cpumask *mask, u32 msr_no, struct msr __percpu *msrs);
int rdmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 *q);
int wrmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h);
-int rdmsrq_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 *q);
int wrmsrq_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 q);
int rdmsr_safe_regs_on_cpu(unsigned int cpu, u32 regs[8]);
int wrmsr_safe_regs_on_cpu(unsigned int cpu, u32 regs[8]);
@@ -295,10 +294,6 @@ static inline int wrmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h)
{
return wrmsr_safe(msr_no, l, h);
}
-static inline int rdmsrq_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 *q)
-{
- return rdmsrq_safe(msr_no, q);
-}
static inline int wrmsrq_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 q)
{
return wrmsrq_safe(msr_no, q);
diff --git a/arch/x86/kernel/acpi/cppc.c b/arch/x86/kernel/acpi/cppc.c
index d7c8ef1e354d..576319dcbbbf 100644
--- a/arch/x86/kernel/acpi/cppc.c
+++ b/arch/x86/kernel/acpi/cppc.c
@@ -49,7 +49,7 @@ int cpc_read_ffh(int cpunum, struct cpc_reg *reg, u64 *val)
{
int err;
- err = rdmsrq_safe_on_cpu(cpunum, reg->address, val);
+ err = rdmsr_safe_on_cpu(cpunum, reg->address, val);
if (!err) {
u64 mask = GENMASK_ULL(reg->bit_offset + reg->bit_width - 1,
reg->bit_offset);
@@ -65,7 +65,7 @@ int cpc_write_ffh(int cpunum, struct cpc_reg *reg, u64 val)
u64 rd_val;
int err;
- err = rdmsrq_safe_on_cpu(cpunum, reg->address, &rd_val);
+ err = rdmsr_safe_on_cpu(cpunum, reg->address, &rd_val);
if (!err) {
u64 mask = GENMASK_ULL(reg->bit_offset + reg->bit_width - 1,
reg->bit_offset);
@@ -147,7 +147,7 @@ int amd_get_highest_perf(unsigned int cpu, u32 *highest_perf)
int ret;
if (cpu_feature_enabled(X86_FEATURE_CPPC)) {
- ret = rdmsrq_safe_on_cpu(cpu, MSR_AMD_CPPC_CAP1, &val);
+ ret = rdmsr_safe_on_cpu(cpu, MSR_AMD_CPPC_CAP1, &val);
if (ret)
goto out;
diff --git a/arch/x86/lib/msr-smp.c b/arch/x86/lib/msr-smp.c
index 0dc3921e0259..fa22ac662c1d 100644
--- a/arch/x86/lib/msr-smp.c
+++ b/arch/x86/lib/msr-smp.c
@@ -186,16 +186,6 @@ int wrmsrq_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 q)
}
EXPORT_SYMBOL(wrmsrq_safe_on_cpu);
-int rdmsrq_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 *q)
-{
- int err;
-
- err = rdmsr_safe_on_cpu(cpu, msr_no, q);
-
- return err;
-}
-EXPORT_SYMBOL(rdmsrq_safe_on_cpu);
-
/*
* These variants are significantly slower, but allows control over
* the entire 32-bit GPR set.
diff --git a/drivers/cpufreq/amd-pstate-ut.c b/drivers/cpufreq/amd-pstate-ut.c
index aa8a464fab47..8700c076b762 100644
--- a/drivers/cpufreq/amd-pstate-ut.c
+++ b/drivers/cpufreq/amd-pstate-ut.c
@@ -170,7 +170,7 @@ static int amd_pstate_ut_check_perf(u32 index)
lowest_nonlinear_perf = cppc_perf.lowest_nonlinear_perf;
lowest_perf = cppc_perf.lowest_perf;
} else {
- ret = rdmsrq_safe_on_cpu(cpu, MSR_AMD_CPPC_CAP1, &cap1);
+ ret = rdmsr_safe_on_cpu(cpu, MSR_AMD_CPPC_CAP1, &cap1);
if (ret) {
pr_err("%s read CPPC_CAP1 ret=%d error!\n", __func__, ret);
return ret;
diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
index 543b34006918..d1eee3cd8f9b 100644
--- a/drivers/cpufreq/amd-pstate.c
+++ b/drivers/cpufreq/amd-pstate.c
@@ -471,8 +471,7 @@ static int msr_init_perf(struct amd_cpudata *cpudata)
u64 cap1, numerator, cppc_req;
u8 min_perf;
- int ret = rdmsrq_safe_on_cpu(cpudata->cpu, MSR_AMD_CPPC_CAP1,
- &cap1);
+ int ret = rdmsr_safe_on_cpu(cpudata->cpu, MSR_AMD_CPPC_CAP1, &cap1);
if (ret)
return ret;
diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
index 08214a0561e7..da196539affe 100644
--- a/drivers/cpufreq/intel_pstate.c
+++ b/drivers/cpufreq/intel_pstate.c
@@ -2178,13 +2178,13 @@ static int core_get_tdp_ratio(int cpu, u64 plat_info)
int err;
/* Get the TDP level (0, 1, 2) to get ratios */
- err = rdmsrq_safe_on_cpu(cpu, MSR_CONFIG_TDP_CONTROL, &tdp_ctrl);
+ err = rdmsr_safe_on_cpu(cpu, MSR_CONFIG_TDP_CONTROL, &tdp_ctrl);
if (err)
return err;
/* TDP MSR are continuous starting at 0x648 */
tdp_msr = MSR_CONFIG_TDP_NOMINAL + (tdp_ctrl & 0x03);
- err = rdmsrq_safe_on_cpu(cpu, tdp_msr, &tdp_ratio);
+ err = rdmsr_safe_on_cpu(cpu, tdp_msr, &tdp_ratio);
if (err)
return err;
@@ -2221,7 +2221,7 @@ static int core_get_max_pstate(int cpu)
return tdp_ratio;
}
- err = rdmsrq_safe_on_cpu(cpu, MSR_TURBO_ACTIVATION_RATIO, &tar);
+ err = rdmsr_safe_on_cpu(cpu, MSR_TURBO_ACTIVATION_RATIO, &tar);
if (!err) {
int tar_levels;
diff --git a/drivers/platform/x86/intel/speed_select_if/isst_if_common.c b/drivers/platform/x86/intel/speed_select_if/isst_if_common.c
index 1c48bf6d5457..b15a798454dc 100644
--- a/drivers/platform/x86/intel/speed_select_if/isst_if_common.c
+++ b/drivers/platform/x86/intel/speed_select_if/isst_if_common.c
@@ -511,8 +511,8 @@ static long isst_if_msr_cmd_req(u8 *cmd_ptr, int *write_only, int resume)
} else {
u64 data;
- ret = rdmsrq_safe_on_cpu(msr_cmd->logical_cpu,
- msr_cmd->msr, &data);
+ ret = rdmsr_safe_on_cpu(msr_cmd->logical_cpu,
+ msr_cmd->msr, &data);
if (!ret) {
msr_cmd->data = data;
*write_only = 0;
diff --git a/drivers/powercap/intel_rapl_msr.c b/drivers/powercap/intel_rapl_msr.c
index a34543e66446..a6bdbe44c8dd 100644
--- a/drivers/powercap/intel_rapl_msr.c
+++ b/drivers/powercap/intel_rapl_msr.c
@@ -180,7 +180,7 @@ static int rapl_msr_read_raw(int cpu, struct reg_action *ra, bool pmu_ctx)
goto out;
}
- if (rdmsrq_safe_on_cpu(cpu, ra->reg.msr, &ra->value)) {
+ if (rdmsr_safe_on_cpu(cpu, ra->reg.msr, &ra->value)) {
pr_debug("failed to read msr 0x%x on cpu %d\n", ra->reg.msr, cpu);
return -EIO;
}
--
2.53.0
* [PATCH RFC 10/11] x86/events: Switch core parts to use 64-bit rdmsr/wrmsr() variants
2026-04-28 10:41 [PATCH RFC 00/11] x86/msr: Reduce MSR access interfaces Juergen Gross
` (2 preceding siblings ...)
2026-04-28 10:42 ` [PATCH RFC 06/11] x86/msr: Switch all callers of rdmsrq_safe_on_cpu() to use rdmsr_safe_on_cpu() Juergen Gross
@ 2026-04-28 10:42 ` Juergen Gross
3 siblings, 0 replies; 5+ messages in thread
From: Juergen Gross @ 2026-04-28 10:42 UTC (permalink / raw)
To: linux-kernel, x86, linux-perf-users
Cc: Juergen Gross, Peter Zijlstra, Ingo Molnar,
Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
James Clark, Thomas Gleixner, Borislav Petkov, Dave Hansen,
H. Peter Anvin
Switch the core parts of the x86 events subsystem to use the new
64-bit forms of rdmsr(), rdmsr_safe(), wrmsr() and wrmsr_safe().
Signed-off-by: Juergen Gross <jgross@suse.com>
---
arch/x86/events/core.c | 42 ++++++++++++++++++------------------
arch/x86/events/msr.c | 2 +-
arch/x86/events/perf_event.h | 26 +++++++++++-----------
arch/x86/events/probe.c | 2 +-
arch/x86/events/rapl.c | 8 +++----
5 files changed, 40 insertions(+), 40 deletions(-)
diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 810ab21ffd99..dc75b9537ab5 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -279,7 +279,7 @@ bool check_hw_exists(struct pmu *pmu, unsigned long *cntr_mask,
*/
for_each_set_bit(i, cntr_mask, X86_PMC_IDX_MAX) {
reg = x86_pmu_config_addr(i);
- ret = rdmsrq_safe(reg, &val);
+ ret = rdmsr_safe(reg, &val);
if (ret)
goto msr_fail;
if (val & ARCH_PERFMON_EVENTSEL_ENABLE) {
@@ -293,7 +293,7 @@ bool check_hw_exists(struct pmu *pmu, unsigned long *cntr_mask,
if (*(u64 *)fixed_cntr_mask) {
reg = MSR_ARCH_PERFMON_FIXED_CTR_CTRL;
- ret = rdmsrq_safe(reg, &val);
+ ret = rdmsr_safe(reg, &val);
if (ret)
goto msr_fail;
for_each_set_bit(i, fixed_cntr_mask, X86_PMC_IDX_MAX) {
@@ -324,11 +324,11 @@ bool check_hw_exists(struct pmu *pmu, unsigned long *cntr_mask,
* (qemu/kvm) that don't trap on the MSR access and always return 0s.
*/
reg = x86_pmu_event_addr(reg_safe);
- if (rdmsrq_safe(reg, &val))
+ if (rdmsr_safe(reg, &val))
goto msr_fail;
val ^= 0xffffUL;
- ret = wrmsrq_safe(reg, val);
- ret |= rdmsrq_safe(reg, &val_new);
+ ret = wrmsr_safe(reg, val);
+ ret |= rdmsr_safe(reg, &val_new);
if (ret || val != val_new)
goto msr_fail;
@@ -713,13 +713,13 @@ void x86_pmu_disable_all(void)
if (!test_bit(idx, cpuc->active_mask))
continue;
- rdmsrq(x86_pmu_config_addr(idx), val);
+ val = rdmsr(x86_pmu_config_addr(idx));
if (!(val & ARCH_PERFMON_EVENTSEL_ENABLE))
continue;
val &= ~ARCH_PERFMON_EVENTSEL_ENABLE;
- wrmsrq(x86_pmu_config_addr(idx), val);
+ wrmsr(x86_pmu_config_addr(idx), val);
if (is_counter_pair(hwc))
- wrmsrq(x86_pmu_config_addr(idx + 1), 0);
+ wrmsr(x86_pmu_config_addr(idx + 1), 0);
}
}
@@ -1446,14 +1446,14 @@ int x86_perf_event_set_period(struct perf_event *event)
*/
local64_set(&hwc->prev_count, (u64)-left);
- wrmsrq(hwc->event_base, (u64)(-left) & x86_pmu.cntval_mask);
+ wrmsr(hwc->event_base, (u64)(-left) & x86_pmu.cntval_mask);
/*
* Sign extend the Merge event counter's upper 16 bits since
* we currently declare a 48-bit counter width
*/
if (is_counter_pair(hwc))
- wrmsrq(x86_pmu_event_addr(idx + 1), 0xffff);
+ wrmsr(x86_pmu_event_addr(idx + 1), 0xffff);
perf_event_update_userpage(event);
@@ -1575,10 +1575,10 @@ void perf_event_print_debug(void)
return;
if (x86_pmu.version >= 2) {
- rdmsrq(MSR_CORE_PERF_GLOBAL_CTRL, ctrl);
- rdmsrq(MSR_CORE_PERF_GLOBAL_STATUS, status);
- rdmsrq(MSR_CORE_PERF_GLOBAL_OVF_CTRL, overflow);
- rdmsrq(MSR_ARCH_PERFMON_FIXED_CTR_CTRL, fixed);
+ ctrl = rdmsr(MSR_CORE_PERF_GLOBAL_CTRL);
+ status = rdmsr(MSR_CORE_PERF_GLOBAL_STATUS);
+ overflow = rdmsr(MSR_CORE_PERF_GLOBAL_OVF_CTRL);
+ fixed = rdmsr(MSR_ARCH_PERFMON_FIXED_CTR_CTRL);
pr_info("\n");
pr_info("CPU#%d: ctrl: %016llx\n", cpu, ctrl);
@@ -1586,19 +1586,19 @@ void perf_event_print_debug(void)
pr_info("CPU#%d: overflow: %016llx\n", cpu, overflow);
pr_info("CPU#%d: fixed: %016llx\n", cpu, fixed);
if (pebs_constraints) {
- rdmsrq(MSR_IA32_PEBS_ENABLE, pebs);
+ pebs = rdmsr(MSR_IA32_PEBS_ENABLE);
pr_info("CPU#%d: pebs: %016llx\n", cpu, pebs);
}
if (x86_pmu.lbr_nr) {
- rdmsrq(MSR_IA32_DEBUGCTLMSR, debugctl);
+ debugctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
pr_info("CPU#%d: debugctl: %016llx\n", cpu, debugctl);
}
}
pr_info("CPU#%d: active: %016llx\n", cpu, *(u64 *)cpuc->active_mask);
for_each_set_bit(idx, cntr_mask, X86_PMC_IDX_MAX) {
- rdmsrq(x86_pmu_config_addr(idx), pmc_ctrl);
- rdmsrq(x86_pmu_event_addr(idx), pmc_count);
+ pmc_ctrl = rdmsr(x86_pmu_config_addr(idx));
+ pmc_count = rdmsr(x86_pmu_event_addr(idx));
prev_left = per_cpu(pmc_prev_left[idx], cpu);
@@ -1612,7 +1612,7 @@ void perf_event_print_debug(void)
for_each_set_bit(idx, fixed_cntr_mask, X86_PMC_IDX_MAX) {
if (fixed_counter_disabled(idx, cpuc->pmu))
continue;
- rdmsrq(x86_pmu_fixed_ctr_addr(idx), pmc_count);
+ pmc_count = rdmsr(x86_pmu_fixed_ctr_addr(idx));
pr_info("CPU#%d: fixed-PMC%d count: %016llx\n",
cpu, idx, pmc_count);
@@ -2560,9 +2560,9 @@ void perf_clear_dirty_counters(void)
if (!test_bit(i - INTEL_PMC_IDX_FIXED, hybrid(cpuc->pmu, fixed_cntr_mask)))
continue;
- wrmsrq(x86_pmu_fixed_ctr_addr(i - INTEL_PMC_IDX_FIXED), 0);
+ wrmsr(x86_pmu_fixed_ctr_addr(i - INTEL_PMC_IDX_FIXED), 0);
} else {
- wrmsrq(x86_pmu_event_addr(i), 0);
+ wrmsr(x86_pmu_event_addr(i), 0);
}
}
diff --git a/arch/x86/events/msr.c b/arch/x86/events/msr.c
index 76d6418c5055..476069924e6b 100644
--- a/arch/x86/events/msr.c
+++ b/arch/x86/events/msr.c
@@ -158,7 +158,7 @@ static inline u64 msr_read_counter(struct perf_event *event)
u64 now;
if (event->hw.event_base)
- rdmsrq(event->hw.event_base, now);
+ now = rdmsr(event->hw.event_base);
else
now = rdtsc_ordered();
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index fad87d3c8b2c..e5f189c91f82 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -1271,16 +1271,16 @@ static inline void __x86_pmu_enable_event(struct hw_perf_event *hwc,
u64 disable_mask = __this_cpu_read(cpu_hw_events.perf_ctr_virt_mask);
if (hwc->extra_reg.reg)
- wrmsrq(hwc->extra_reg.reg, hwc->extra_reg.config);
+ wrmsr(hwc->extra_reg.reg, hwc->extra_reg.config);
/*
* Add enabled Merge event on next counter
* if large increment event being enabled on this counter
*/
if (is_counter_pair(hwc))
- wrmsrq(x86_pmu_config_addr(hwc->idx + 1), x86_pmu.perf_ctr_pair_en);
+ wrmsr(x86_pmu_config_addr(hwc->idx + 1), x86_pmu.perf_ctr_pair_en);
- wrmsrq(hwc->config_base, (hwc->config | enable_mask) & ~disable_mask);
+ wrmsr(hwc->config_base, (hwc->config | enable_mask) & ~disable_mask);
}
void x86_pmu_enable_all(int added);
@@ -1296,10 +1296,10 @@ static inline void x86_pmu_disable_event(struct perf_event *event)
u64 disable_mask = __this_cpu_read(cpu_hw_events.perf_ctr_virt_mask);
struct hw_perf_event *hwc = &event->hw;
- wrmsrq(hwc->config_base, hwc->config & ~disable_mask);
+ wrmsr(hwc->config_base, hwc->config & ~disable_mask);
if (is_counter_pair(hwc))
- wrmsrq(x86_pmu_config_addr(hwc->idx + 1), 0);
+ wrmsr(x86_pmu_config_addr(hwc->idx + 1), 0);
}
void x86_pmu_enable_event(struct perf_event *event);
@@ -1473,12 +1473,12 @@ static __always_inline void __amd_pmu_lbr_disable(void)
{
u64 dbg_ctl, dbg_extn_cfg;
- rdmsrq(MSR_AMD_DBG_EXTN_CFG, dbg_extn_cfg);
- wrmsrq(MSR_AMD_DBG_EXTN_CFG, dbg_extn_cfg & ~DBG_EXTN_CFG_LBRV2EN);
+ dbg_extn_cfg = rdmsr(MSR_AMD_DBG_EXTN_CFG);
+ wrmsr(MSR_AMD_DBG_EXTN_CFG, dbg_extn_cfg & ~DBG_EXTN_CFG_LBRV2EN);
if (cpu_feature_enabled(X86_FEATURE_AMD_LBR_PMC_FREEZE)) {
- rdmsrq(MSR_IA32_DEBUGCTLMSR, dbg_ctl);
- wrmsrq(MSR_IA32_DEBUGCTLMSR, dbg_ctl & ~DEBUGCTLMSR_FREEZE_LBRS_ON_PMI);
+ dbg_ctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
+ wrmsr(MSR_IA32_DEBUGCTLMSR, dbg_ctl & ~DEBUGCTLMSR_FREEZE_LBRS_ON_PMI);
}
}
@@ -1619,21 +1619,21 @@ static inline bool intel_pmu_has_bts(struct perf_event *event)
static __always_inline void __intel_pmu_pebs_disable_all(void)
{
- wrmsrq(MSR_IA32_PEBS_ENABLE, 0);
+ wrmsr(MSR_IA32_PEBS_ENABLE, 0);
}
static __always_inline void __intel_pmu_arch_lbr_disable(void)
{
- wrmsrq(MSR_ARCH_LBR_CTL, 0);
+ wrmsr(MSR_ARCH_LBR_CTL, 0);
}
static __always_inline void __intel_pmu_lbr_disable(void)
{
u64 debugctl;
- rdmsrq(MSR_IA32_DEBUGCTLMSR, debugctl);
+ debugctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
debugctl &= ~(DEBUGCTLMSR_LBR | DEBUGCTLMSR_FREEZE_LBRS_ON_PMI);
- wrmsrq(MSR_IA32_DEBUGCTLMSR, debugctl);
+ wrmsr(MSR_IA32_DEBUGCTLMSR, debugctl);
}
int intel_pmu_save_and_restart(struct perf_event *event);
diff --git a/arch/x86/events/probe.c b/arch/x86/events/probe.c
index bb719d0d3f0b..ac53bb5ba869 100644
--- a/arch/x86/events/probe.c
+++ b/arch/x86/events/probe.c
@@ -45,7 +45,7 @@ perf_msr_probe(struct perf_msr *msr, int cnt, bool zero, void *data)
if (msr[bit].test && !msr[bit].test(bit, data))
continue;
/* Virt sucks; you cannot tell if a R/O MSR is present :/ */
- if (rdmsrq_safe(msr[bit].msr, &val))
+ if (rdmsr_safe(msr[bit].msr, &val))
continue;
mask = msr[bit].mask;
diff --git a/arch/x86/events/rapl.c b/arch/x86/events/rapl.c
index 8ed03c32f560..4ac91ed65fde 100644
--- a/arch/x86/events/rapl.c
+++ b/arch/x86/events/rapl.c
@@ -193,7 +193,7 @@ static inline unsigned int get_rapl_pmu_idx(int cpu, int scope)
static inline u64 rapl_read_counter(struct perf_event *event)
{
u64 raw;
- rdmsrq(event->hw.event_base, raw);
+ raw = rdmsr(event->hw.event_base);
return raw;
}
@@ -222,7 +222,7 @@ static u64 rapl_event_update(struct perf_event *event)
prev_raw_count = local64_read(&hwc->prev_count);
do {
- rdmsrq(event->hw.event_base, new_raw_count);
+ new_raw_count = rdmsr(event->hw.event_base);
} while (!local64_try_cmpxchg(&hwc->prev_count,
&prev_raw_count, new_raw_count));
@@ -611,8 +611,8 @@ static int rapl_check_hw_unit(void)
u64 msr_rapl_power_unit_bits;
int i;
- /* protect rdmsrq() to handle virtualization */
- if (rdmsrq_safe(rapl_model->msr_power_unit, &msr_rapl_power_unit_bits))
+ /* protect rdmsr() to handle virtualization */
+ if (rdmsr_safe(rapl_model->msr_power_unit, &msr_rapl_power_unit_bits))
return -1;
for (i = 0; i < NR_RAPL_PKG_DOMAINS; i++)
rapl_pkg_hw_unit[i] = (msr_rapl_power_unit_bits >> 8) & 0x1FULL;
--
2.53.0
Thread overview: 5+ messages
2026-04-28 10:41 [PATCH RFC 00/11] x86/msr: Reduce MSR access interfaces Juergen Gross
2026-04-28 10:41 ` [PATCH RFC 02/11] x86/msr: Switch all callers of rdmsrq_on_cpu() to use rdmsr_on_cpu() Juergen Gross
2026-04-28 10:41 ` [PATCH RFC 03/11] x86/msr: Switch wrmsr_on_cpu() to use a 64-bit quantity Juergen Gross
2026-04-28 10:42 ` [PATCH RFC 06/11] x86/msr: Switch all callers of rdmsrq_safe_on_cpu() to use rdmsr_safe_on_cpu() Juergen Gross
2026-04-28 10:42 ` [PATCH RFC 10/11] x86/events: Switch core parts to use 64-bit rdmsr/wrmsr() variants Juergen Gross