* [PATCH RFC 00/11] x86/msr: Reduce MSR access interfaces
@ 2026-04-28 10:41 Juergen Gross
2026-04-28 10:41 ` [PATCH RFC 01/11] x86/msr: Switch rdmsr_on_cpu() to return a 64-bit quantity Juergen Gross
` (7 more replies)
0 siblings, 8 replies; 9+ messages in thread
From: Juergen Gross @ 2026-04-28 10:41 UTC (permalink / raw)
To: linux-kernel, x86, linux-edac, linux-pm, linux-hwmon,
linux-perf-users, platform-driver-x86, linux-acpi, virtualization
Cc: Juergen Gross, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, H. Peter Anvin, Tony Luck, Rafael J. Wysocki,
Viresh Kumar, Guenter Roeck, Daniel Lezcano, Zhang Rui,
Lukasz Luba, Peter Zijlstra, Arnaldo Carvalho de Melo,
Namhyung Kim, Mark Rutland, Alexander Shishkin, Jiri Olsa,
Ian Rogers, Adrian Hunter, James Clark, Huang Rui,
Mario Limonciello, Perry Yuan, K Prateek Nayak,
Srinivas Pandruvada, Len Brown, Hans de Goede, Ilpo Järvinen,
Ajay Kaher, Alexey Makhalov, Broadcom internal kernel review list
After my first attempt to rework the MSR access functions [1], this is
the result of the feedback I got.
I have still followed the idea to:
- Reduce the number of MSR access functions by keeping only the ones
taking 64-bit values (instead of the dual 32-bit ones).
- Use inline functions instead of macros for rdmsr*(), removing the
hard-to-read cases where parameters specified the variables for
the results.
One piece of feedback was NOT to rename the access functions, which the
new approach avoids.
The first 8 patches form a complete set achieving, in particular, the
first point above for the *_on_cpu() functions.
Patch 9 prepares switching the CPU-local MSR access functions to end
up with only rdmsr(), rdmsr_safe(), wrmsr() and wrmsr_safe() (all
taking 64-bit values and implemented as inline functions). For this
purpose the already existing functions/macros are overloaded via macros
to accept both variants (64-bit and dual 32-bit values) during the
phase of switching the different subsystems to the new scheme. This
avoids having to either patch all users of the current functions in a
single patch (as done in the first 8 patches), or having to use
intermediate function names which would need to be patched again at
the end. Either way the resulting patches would be very hard to review
due to their size.
The last 2 patches are examples of how switching individual subsystems
would look.
So far all of this is compile-tested only.
[1]: https://lore.kernel.org/lkml/20260420091634.128787-1-jgross@suse.com/
Juergen Gross (11):
x86/msr: Switch rdmsr_on_cpu() to return a 64-bit quantity
x86/msr: Switch all callers of rdmsrq_on_cpu() to use rdmsr_on_cpu()
x86/msr: Switch wrmsr_on_cpu() to use a 64-bit quantity
x86/msr: Switch all callers of wrmsrq_on_cpu() to use wrmsr_on_cpu()
x86/msr: Switch rdmsr_safe_on_cpu() to return a 64-bit quantity
x86/msr: Switch all callers of rdmsrq_safe_on_cpu() to use
rdmsr_safe_on_cpu()
x86/msr: Switch wrmsr_safe_on_cpu() to use a 64-bit quantity
x86/msr: Switch all callers of wrmsrq_safe_on_cpu() to use
wrmsr_safe_on_cpu()
x86/msr: Add macros for preparing to switch rdmsr/wrmsr interfaces
x86/events: Switch core parts to use 64-bit rdmsr/wrmsr() variants
x86/cpu/mce: Switch code to use 64-bit rdmsr/wrmsr() variants
arch/x86/events/core.c | 42 ++++----
arch/x86/events/intel/ds.c | 11 +-
arch/x86/events/intel/pt.c | 2 +-
arch/x86/events/intel/uncore_discovery.c | 2 +-
arch/x86/events/intel/uncore_snbep.c | 2 +-
arch/x86/events/msr.c | 2 +-
arch/x86/events/perf_event.h | 26 ++---
arch/x86/events/probe.c | 2 +-
arch/x86/events/rapl.c | 8 +-
arch/x86/include/asm/msr.h | 90 +++++++++-------
arch/x86/include/asm/paravirt.h | 6 +-
arch/x86/kernel/acpi/cppc.c | 8 +-
arch/x86/kernel/cpu/intel_epb.c | 8 +-
arch/x86/kernel/cpu/mce/amd.c | 101 +++++++++---------
arch/x86/kernel/cpu/mce/core.c | 18 ++--
arch/x86/kernel/cpu/mce/inject.c | 40 +++----
arch/x86/kernel/cpu/mce/intel.c | 32 +++---
arch/x86/kernel/cpu/mce/p5.c | 16 +--
arch/x86/kernel/cpu/mce/winchip.c | 10 +-
arch/x86/kernel/cpu/microcode/intel.c | 2 +-
arch/x86/kernel/msr.c | 8 +-
arch/x86/lib/msr-smp.c | 79 ++------------
drivers/cpufreq/acpi-cpufreq.c | 4 +-
drivers/cpufreq/amd-pstate-ut.c | 2 +-
drivers/cpufreq/amd-pstate.c | 21 ++--
drivers/cpufreq/amd_freq_sensitivity.c | 4 +-
drivers/cpufreq/intel_pstate.c | 64 +++++------
drivers/cpufreq/p4-clockmod.c | 32 +++---
drivers/cpufreq/speedstep-centrino.c | 27 ++---
drivers/hwmon/coretemp.c | 44 ++++----
drivers/hwmon/via-cputemp.c | 16 +--
drivers/platform/x86/amd/hfi/hfi.c | 4 +-
.../intel/speed_select_if/isst_if_common.c | 13 ++-
.../intel/uncore-frequency/uncore-frequency.c | 12 +--
drivers/powercap/intel_rapl_msr.c | 2 +-
drivers/thermal/intel/intel_tcc.c | 43 ++++----
drivers/thermal/intel/x86_pkg_temp_thermal.c | 22 ++--
37 files changed, 387 insertions(+), 438 deletions(-)
--
2.53.0
* [PATCH RFC 01/11] x86/msr: Switch rdmsr_on_cpu() to return a 64-bit quantity
From: Juergen Gross @ 2026-04-28 10:41 UTC (permalink / raw)
To: linux-kernel, x86, linux-edac, linux-pm, linux-hwmon
Cc: Juergen Gross, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, H. Peter Anvin, Tony Luck, Rafael J. Wysocki,
Viresh Kumar, Guenter Roeck, Daniel Lezcano, Zhang Rui,
Lukasz Luba
In order to prepare for retiring rdmsrq_on_cpu(), switch rdmsr_on_cpu()
to have the same interface as rdmsrq_on_cpu().
Switch all rdmsr_on_cpu() callers to use the new interface.
Signed-off-by: Juergen Gross <jgross@suse.com>
---
arch/x86/include/asm/msr.h | 8 ++---
arch/x86/kernel/cpu/mce/amd.c | 6 ++--
arch/x86/kernel/cpu/mce/inject.c | 8 ++---
arch/x86/lib/msr-smp.c | 5 ++-
drivers/cpufreq/amd_freq_sensitivity.c | 4 +--
drivers/cpufreq/p4-clockmod.c | 32 ++++++++++----------
drivers/cpufreq/speedstep-centrino.c | 27 +++++++++--------
drivers/hwmon/coretemp.c | 12 ++++----
drivers/thermal/intel/x86_pkg_temp_thermal.c | 22 ++++++++------
9 files changed, 63 insertions(+), 61 deletions(-)
diff --git a/arch/x86/include/asm/msr.h b/arch/x86/include/asm/msr.h
index 9c2ea29e12a9..fcdaeddf4337 100644
--- a/arch/x86/include/asm/msr.h
+++ b/arch/x86/include/asm/msr.h
@@ -256,7 +256,7 @@ int msr_set_bit(u32 msr, u8 bit);
int msr_clear_bit(u32 msr, u8 bit);
#ifdef CONFIG_SMP
-int rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h);
+int rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u64 *q);
int wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h);
int rdmsrq_on_cpu(unsigned int cpu, u32 msr_no, u64 *q);
int wrmsrq_on_cpu(unsigned int cpu, u32 msr_no, u64 q);
@@ -269,9 +269,9 @@ int wrmsrq_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 q);
int rdmsr_safe_regs_on_cpu(unsigned int cpu, u32 regs[8]);
int wrmsr_safe_regs_on_cpu(unsigned int cpu, u32 regs[8]);
#else /* CONFIG_SMP */
-static inline int rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h)
+static inline int rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u64 *q)
{
- rdmsr(msr_no, *l, *h);
+ rdmsrq(msr_no, *q);
return 0;
}
static inline int wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h)
@@ -292,7 +292,7 @@ static inline int wrmsrq_on_cpu(unsigned int cpu, u32 msr_no, u64 q)
static inline void rdmsr_on_cpus(const struct cpumask *m, u32 msr_no,
struct msr __percpu *msrs)
{
- rdmsr_on_cpu(0, msr_no, raw_cpu_ptr(&msrs->l), raw_cpu_ptr(&msrs->h));
+ rdmsr_on_cpu(0, msr_no, raw_cpu_ptr(&msrs->q));
}
static inline void wrmsr_on_cpus(const struct cpumask *m, u32 msr_no,
struct msr __percpu *msrs)
diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c
index 6605a0224659..580e90e74e9e 100644
--- a/arch/x86/kernel/cpu/mce/amd.c
+++ b/arch/x86/kernel/cpu/mce/amd.c
@@ -969,13 +969,13 @@ store_threshold_limit(struct threshold_block *b, const char *buf, size_t size)
static ssize_t show_error_count(struct threshold_block *b, char *buf)
{
- u32 lo, hi;
+ struct msr val;
/* CPU might be offline by now */
- if (rdmsr_on_cpu(b->cpu, b->address, &lo, &hi))
+ if (rdmsr_on_cpu(b->cpu, b->address, &val.q))
return -ENODEV;
- return sprintf(buf, "%u\n", ((hi & THRESHOLD_MAX) -
+ return sprintf(buf, "%u\n", ((val.h & THRESHOLD_MAX) -
(THRESHOLD_MAX - b->threshold_limit)));
}
diff --git a/arch/x86/kernel/cpu/mce/inject.c b/arch/x86/kernel/cpu/mce/inject.c
index d02c4f556cd0..fa13a8a4946b 100644
--- a/arch/x86/kernel/cpu/mce/inject.c
+++ b/arch/x86/kernel/cpu/mce/inject.c
@@ -316,18 +316,18 @@ static struct notifier_block inject_nb = {
*/
static int toggle_hw_mce_inject(unsigned int cpu, bool enable)
{
- u32 l, h;
+ struct msr val;
int err;
- err = rdmsr_on_cpu(cpu, MSR_K7_HWCR, &l, &h);
+ err = rdmsr_on_cpu(cpu, MSR_K7_HWCR, &val.q);
if (err) {
pr_err("%s: error reading HWCR\n", __func__);
return err;
}
- enable ? (l |= BIT(18)) : (l &= ~BIT(18));
+ enable ? (val.l |= BIT(18)) : (val.l &= ~BIT(18));
- err = wrmsr_on_cpu(cpu, MSR_K7_HWCR, l, h);
+ err = wrmsr_on_cpu(cpu, MSR_K7_HWCR, val.l, val.h);
if (err)
pr_err("%s: error writing HWCR\n", __func__);
diff --git a/arch/x86/lib/msr-smp.c b/arch/x86/lib/msr-smp.c
index b8f63419e6ae..6e04aabda863 100644
--- a/arch/x86/lib/msr-smp.c
+++ b/arch/x86/lib/msr-smp.c
@@ -31,7 +31,7 @@ static void __wrmsr_on_cpu(void *info)
wrmsr(rv->msr_no, reg->l, reg->h);
}
-int rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h)
+int rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u64 *q)
{
int err;
struct msr_info rv;
@@ -40,8 +40,7 @@ int rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h)
rv.msr_no = msr_no;
err = smp_call_function_single(cpu, __rdmsr_on_cpu, &rv, 1);
- *l = rv.reg.l;
- *h = rv.reg.h;
+ *q = rv.reg.q;
return err;
}
diff --git a/drivers/cpufreq/amd_freq_sensitivity.c b/drivers/cpufreq/amd_freq_sensitivity.c
index 13fed4b9e02b..63896478dcab 100644
--- a/drivers/cpufreq/amd_freq_sensitivity.c
+++ b/drivers/cpufreq/amd_freq_sensitivity.c
@@ -52,9 +52,9 @@ static unsigned int amd_powersave_bias_target(struct cpufreq_policy *policy,
return freq_next;
rdmsr_on_cpu(policy->cpu, MSR_AMD64_FREQ_SENSITIVITY_ACTUAL,
- &actual.l, &actual.h);
+ &actual.q);
rdmsr_on_cpu(policy->cpu, MSR_AMD64_FREQ_SENSITIVITY_REFERENCE,
- &reference.l, &reference.h);
+ &reference.q);
actual.h &= 0x00ffffff;
reference.h &= 0x00ffffff;
diff --git a/drivers/cpufreq/p4-clockmod.c b/drivers/cpufreq/p4-clockmod.c
index 69c19233fcd4..393c4a5d2021 100644
--- a/drivers/cpufreq/p4-clockmod.c
+++ b/drivers/cpufreq/p4-clockmod.c
@@ -51,24 +51,24 @@ static unsigned int cpufreq_p4_get(unsigned int cpu);
static int cpufreq_p4_setdc(unsigned int cpu, unsigned int newstate)
{
- u32 l, h;
+ struct msr val;
if ((newstate > DC_DISABLE) || (newstate == DC_RESV))
return -EINVAL;
- rdmsr_on_cpu(cpu, MSR_IA32_THERM_STATUS, &l, &h);
+ rdmsr_on_cpu(cpu, MSR_IA32_THERM_STATUS, &val.q);
- if (l & 0x01)
+ if (val.l & 0x01)
pr_debug("CPU#%d currently thermal throttled\n", cpu);
if (has_N44_O17_errata[cpu] &&
(newstate == DC_25PT || newstate == DC_DFLT))
newstate = DC_38PT;
- rdmsr_on_cpu(cpu, MSR_IA32_THERM_CONTROL, &l, &h);
+ rdmsr_on_cpu(cpu, MSR_IA32_THERM_CONTROL, &val.q);
if (newstate == DC_DISABLE) {
pr_debug("CPU#%d disabling modulation\n", cpu);
- wrmsr_on_cpu(cpu, MSR_IA32_THERM_CONTROL, l & ~(1<<4), h);
+ wrmsr_on_cpu(cpu, MSR_IA32_THERM_CONTROL, val.l & ~(1<<4), val.h);
} else {
pr_debug("CPU#%d setting duty cycle to %d%%\n",
cpu, ((125 * newstate) / 10));
@@ -77,9 +77,9 @@ static int cpufreq_p4_setdc(unsigned int cpu, unsigned int newstate)
* bits 3-1 : duty cycle
* bit 0 : reserved
*/
- l = (l & ~14);
- l = l | (1<<4) | ((newstate & 0x7)<<1);
- wrmsr_on_cpu(cpu, MSR_IA32_THERM_CONTROL, l, h);
+ val.l = (val.l & ~14);
+ val.l = val.l | (1<<4) | ((newstate & 0x7)<<1);
+ wrmsr_on_cpu(cpu, MSR_IA32_THERM_CONTROL, val.l, val.h);
}
return 0;
@@ -205,18 +205,18 @@ static int cpufreq_p4_cpu_init(struct cpufreq_policy *policy)
static unsigned int cpufreq_p4_get(unsigned int cpu)
{
- u32 l, h;
+ struct msr val;
- rdmsr_on_cpu(cpu, MSR_IA32_THERM_CONTROL, &l, &h);
+ rdmsr_on_cpu(cpu, MSR_IA32_THERM_CONTROL, &val.q);
- if (l & 0x10) {
- l = l >> 1;
- l &= 0x7;
+ if (val.l & 0x10) {
+ val.l = val.l >> 1;
+ val.l &= 0x7;
} else
- l = DC_DISABLE;
+ val.l = DC_DISABLE;
- if (l != DC_DISABLE)
- return stock_freq * l / 8;
+ if (val.l != DC_DISABLE)
+ return stock_freq * val.l / 8;
return stock_freq;
}
diff --git a/drivers/cpufreq/speedstep-centrino.c b/drivers/cpufreq/speedstep-centrino.c
index 3e6e85a92212..b74c85128377 100644
--- a/drivers/cpufreq/speedstep-centrino.c
+++ b/drivers/cpufreq/speedstep-centrino.c
@@ -322,11 +322,11 @@ static unsigned extract_clock(unsigned msr, unsigned int cpu, int failsafe)
/* Return the current CPU frequency in kHz */
static unsigned int get_cur_freq(unsigned int cpu)
{
- unsigned l, h;
+ struct msr val;
unsigned clock_freq;
- rdmsr_on_cpu(cpu, MSR_IA32_PERF_STATUS, &l, &h);
- clock_freq = extract_clock(l, cpu, 0);
+ rdmsr_on_cpu(cpu, MSR_IA32_PERF_STATUS, &val.q);
+ clock_freq = extract_clock(val.l, cpu, 0);
if (unlikely(clock_freq == 0)) {
/*
@@ -335,8 +335,8 @@ static unsigned int get_cur_freq(unsigned int cpu)
* P-state transition (like TM2). Get the last freq set
* in PERF_CTL.
*/
- rdmsr_on_cpu(cpu, MSR_IA32_PERF_CTL, &l, &h);
- clock_freq = extract_clock(l, cpu, 1);
+ rdmsr_on_cpu(cpu, MSR_IA32_PERF_CTL, &val.q);
+ clock_freq = extract_clock(val.l, cpu, 1);
}
return clock_freq;
}
@@ -417,7 +417,8 @@ static void centrino_cpu_exit(struct cpufreq_policy *policy)
*/
static int centrino_target(struct cpufreq_policy *policy, unsigned int index)
{
- unsigned int msr, oldmsr = 0, h = 0, cpu = policy->cpu;
+ unsigned int msr, cpu = policy->cpu;
+ struct msr oldmsr = { .q = 0 };
int retval = 0;
unsigned int j, first_cpu;
struct cpufreq_frequency_table *op_points;
@@ -459,22 +460,22 @@ static int centrino_target(struct cpufreq_policy *policy, unsigned int index)
msr = op_points->driver_data;
if (first_cpu) {
- rdmsr_on_cpu(good_cpu, MSR_IA32_PERF_CTL, &oldmsr, &h);
- if (msr == (oldmsr & 0xffff)) {
+ rdmsr_on_cpu(good_cpu, MSR_IA32_PERF_CTL, &oldmsr.q);
+ if (msr == (oldmsr.l & 0xffff)) {
pr_debug("no change needed - msr was and needs "
- "to be %x\n", oldmsr);
+ "to be %x\n", oldmsr.l);
retval = 0;
goto out;
}
first_cpu = 0;
/* all but 16 LSB are reserved, treat them with care */
- oldmsr &= ~0xffff;
+ oldmsr.l &= ~0xffff;
msr &= 0xffff;
- oldmsr |= msr;
+ oldmsr.l |= msr;
}
- wrmsr_on_cpu(good_cpu, MSR_IA32_PERF_CTL, oldmsr, h);
+ wrmsr_on_cpu(good_cpu, MSR_IA32_PERF_CTL, oldmsr.l, oldmsr.h);
if (policy->shared_type == CPUFREQ_SHARED_TYPE_ANY)
break;
@@ -490,7 +491,7 @@ static int centrino_target(struct cpufreq_policy *policy, unsigned int index)
*/
for_each_cpu(j, covered_cpus)
- wrmsr_on_cpu(j, MSR_IA32_PERF_CTL, oldmsr, h);
+ wrmsr_on_cpu(j, MSR_IA32_PERF_CTL, oldmsr.l, oldmsr.h);
}
retval = 0;
diff --git a/drivers/hwmon/coretemp.c b/drivers/hwmon/coretemp.c
index 6a0d94711ead..fa02960ffff5 100644
--- a/drivers/hwmon/coretemp.c
+++ b/drivers/hwmon/coretemp.c
@@ -356,15 +356,15 @@ static ssize_t show_label(struct device *dev,
static ssize_t show_crit_alarm(struct device *dev,
struct device_attribute *devattr, char *buf)
{
- u32 eax, edx;
+ struct msr val;
struct temp_data *tdata = container_of(devattr, struct temp_data,
sd_attrs[ATTR_CRIT_ALARM]);
mutex_lock(&tdata->update_lock);
- rdmsr_on_cpu(tdata->cpu, tdata->status_reg, &eax, &edx);
+ rdmsr_on_cpu(tdata->cpu, tdata->status_reg, &val.q);
mutex_unlock(&tdata->update_lock);
- return sprintf(buf, "%d\n", (eax >> 5) & 1);
+ return sprintf(buf, "%d\n", (val.l >> 5) & 1);
}
static ssize_t show_tjmax(struct device *dev,
@@ -398,7 +398,7 @@ static ssize_t show_ttarget(struct device *dev,
static ssize_t show_temp(struct device *dev,
struct device_attribute *devattr, char *buf)
{
- u32 eax, edx;
+ struct msr val;
struct temp_data *tdata = container_of(devattr, struct temp_data, sd_attrs[ATTR_TEMP]);
int tjmax;
@@ -407,14 +407,14 @@ static ssize_t show_temp(struct device *dev,
tjmax = get_tjmax(tdata, dev);
/* Check whether the time interval has elapsed */
if (time_after(jiffies, tdata->last_updated + HZ)) {
- rdmsr_on_cpu(tdata->cpu, tdata->status_reg, &eax, &edx);
+ rdmsr_on_cpu(tdata->cpu, tdata->status_reg, &val.q);
/*
* Ignore the valid bit. In all observed cases the register
* value is either low or zero if the valid bit is 0.
* Return it instead of reporting an error which doesn't
* really help at all.
*/
- tdata->temp = tjmax - ((eax >> 16) & 0xff) * 1000;
+ tdata->temp = tjmax - ((val.l >> 16) & 0xff) * 1000;
tdata->last_updated = jiffies;
}
diff --git a/drivers/thermal/intel/x86_pkg_temp_thermal.c b/drivers/thermal/intel/x86_pkg_temp_thermal.c
index 540109761f0a..fc7dbba4f9ca 100644
--- a/drivers/thermal/intel/x86_pkg_temp_thermal.c
+++ b/drivers/thermal/intel/x86_pkg_temp_thermal.c
@@ -125,8 +125,9 @@ sys_set_trip_temp(struct thermal_zone_device *tzd,
{
struct zone_device *zonedev = thermal_zone_device_priv(tzd);
unsigned int trip_index = THERMAL_TRIP_PRIV_TO_INT(trip->priv);
- u32 l, h, mask, shift, intr;
+ u32 mask, shift, intr;
int tj_max, val, ret;
+ struct msr v;
if (temp == THERMAL_TEMP_INVALID)
temp = 0;
@@ -142,7 +143,7 @@ sys_set_trip_temp(struct thermal_zone_device *tzd,
return -EINVAL;
ret = rdmsr_on_cpu(zonedev->cpu, MSR_IA32_PACKAGE_THERM_INTERRUPT,
- &l, &h);
+ &v.q);
if (ret < 0)
return ret;
@@ -155,20 +156,20 @@ sys_set_trip_temp(struct thermal_zone_device *tzd,
shift = THERM_SHIFT_THRESHOLD0;
intr = THERM_INT_THRESHOLD0_ENABLE;
}
- l &= ~mask;
+ v.l &= ~mask;
/*
* When users space sets a trip temperature == 0, which is indication
* that, it is no longer interested in receiving notifications.
*/
if (!temp) {
- l &= ~intr;
+ v.l &= ~intr;
} else {
- l |= val << shift;
- l |= intr;
+ v.l |= val << shift;
+ v.l |= intr;
}
return wrmsr_on_cpu(zonedev->cpu, MSR_IA32_PACKAGE_THERM_INTERRUPT,
- l, h);
+ v.l, v.h);
}
/* Thermal zone callback registry */
@@ -277,7 +278,8 @@ static int pkg_temp_thermal_trips_init(int cpu, int tj_max,
struct thermal_trip *trips, int num_trips)
{
unsigned long thres_reg_value;
- u32 mask, shift, eax, edx;
+ u32 mask, shift;
+ struct msr val;
int ret, i;
for (i = 0; i < num_trips; i++) {
@@ -291,11 +293,11 @@ static int pkg_temp_thermal_trips_init(int cpu, int tj_max,
}
ret = rdmsr_on_cpu(cpu, MSR_IA32_PACKAGE_THERM_INTERRUPT,
- &eax, &edx);
+ &val.q);
if (ret < 0)
return ret;
- thres_reg_value = (eax & mask) >> shift;
+ thres_reg_value = (val.l & mask) >> shift;
trips[i].temperature = thres_reg_value ?
tj_max - thres_reg_value * 1000 : THERMAL_TEMP_INVALID;
--
2.53.0
* [PATCH RFC 02/11] x86/msr: Switch all callers of rdmsrq_on_cpu() to use rdmsr_on_cpu()
From: Juergen Gross @ 2026-04-28 10:41 UTC (permalink / raw)
To: linux-kernel, x86, linux-perf-users, linux-edac, linux-pm,
platform-driver-x86
Cc: Juergen Gross, Peter Zijlstra, Ingo Molnar,
Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
James Clark, Thomas Gleixner, Borislav Petkov, Dave Hansen,
H. Peter Anvin, Tony Luck, Rafael J. Wysocki, Viresh Kumar,
Huang Rui, Mario Limonciello, Perry Yuan, K Prateek Nayak,
Srinivas Pandruvada, Len Brown, Hans de Goede, Ilpo Järvinen
Now that rdmsr_on_cpu() has the same interface as rdmsrq_on_cpu(), the
callers of rdmsrq_on_cpu() can be switched to rdmsr_on_cpu(), and
rdmsrq_on_cpu() can be removed.
At the same time, switch the only user of rdmsrl_on_cpu() to
rdmsr_on_cpu() and drop rdmsrl_on_cpu(), too.
Signed-off-by: Juergen Gross <jgross@suse.com>
---
arch/x86/events/intel/uncore_snbep.c | 2 +-
arch/x86/include/asm/msr.h | 7 ------
arch/x86/kernel/cpu/intel_epb.c | 4 ++--
arch/x86/kernel/cpu/mce/inject.c | 4 ++--
arch/x86/kernel/cpu/microcode/intel.c | 2 +-
arch/x86/lib/msr-smp.c | 15 -------------
drivers/cpufreq/acpi-cpufreq.c | 4 ++--
drivers/cpufreq/amd-pstate.c | 8 +++----
drivers/cpufreq/intel_pstate.c | 22 +++++++++----------
.../intel/uncore-frequency/uncore-frequency.c | 6 ++---
10 files changed, 26 insertions(+), 48 deletions(-)
diff --git a/arch/x86/events/intel/uncore_snbep.c b/arch/x86/events/intel/uncore_snbep.c
index 215d33e260ed..fee94698b611 100644
--- a/arch/x86/events/intel/uncore_snbep.c
+++ b/arch/x86/events/intel/uncore_snbep.c
@@ -3695,7 +3695,7 @@ static int skx_msr_cpu_bus_read(int cpu, u64 *topology)
{
u64 msr_value;
- if (rdmsrq_on_cpu(cpu, SKX_MSR_CPU_BUS_NUMBER, &msr_value) ||
+ if (rdmsr_on_cpu(cpu, SKX_MSR_CPU_BUS_NUMBER, &msr_value) ||
!(msr_value & SKX_MSR_CPU_BUS_VALID_BIT))
return -ENXIO;
diff --git a/arch/x86/include/asm/msr.h b/arch/x86/include/asm/msr.h
index fcdaeddf4337..8c96fc5c6169 100644
--- a/arch/x86/include/asm/msr.h
+++ b/arch/x86/include/asm/msr.h
@@ -258,7 +258,6 @@ int msr_clear_bit(u32 msr, u8 bit);
#ifdef CONFIG_SMP
int rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u64 *q);
int wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h);
-int rdmsrq_on_cpu(unsigned int cpu, u32 msr_no, u64 *q);
int wrmsrq_on_cpu(unsigned int cpu, u32 msr_no, u64 q);
void rdmsr_on_cpus(const struct cpumask *mask, u32 msr_no, struct msr __percpu *msrs);
void wrmsr_on_cpus(const struct cpumask *mask, u32 msr_no, struct msr __percpu *msrs);
@@ -279,11 +278,6 @@ static inline int wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h)
wrmsr(msr_no, l, h);
return 0;
}
-static inline int rdmsrq_on_cpu(unsigned int cpu, u32 msr_no, u64 *q)
-{
- rdmsrq(msr_no, *q);
- return 0;
-}
static inline int wrmsrq_on_cpu(unsigned int cpu, u32 msr_no, u64 q)
{
wrmsrq(msr_no, q);
@@ -329,7 +323,6 @@ static inline int wrmsr_safe_regs_on_cpu(unsigned int cpu, u32 regs[8])
/* Compatibility wrappers: */
#define rdmsrl(msr, val) rdmsrq(msr, val)
#define wrmsrl(msr, val) wrmsrq(msr, val)
-#define rdmsrl_on_cpu(cpu, msr, q) rdmsrq_on_cpu(cpu, msr, q)
#endif /* __ASSEMBLER__ */
#endif /* _ASM_X86_MSR_H */
diff --git a/arch/x86/kernel/cpu/intel_epb.c b/arch/x86/kernel/cpu/intel_epb.c
index 2c56f8730f59..cb5a3c299f26 100644
--- a/arch/x86/kernel/cpu/intel_epb.c
+++ b/arch/x86/kernel/cpu/intel_epb.c
@@ -139,7 +139,7 @@ static ssize_t energy_perf_bias_show(struct device *dev,
u64 epb;
int ret;
- ret = rdmsrq_on_cpu(cpu, MSR_IA32_ENERGY_PERF_BIAS, &epb);
+ ret = rdmsr_on_cpu(cpu, MSR_IA32_ENERGY_PERF_BIAS, &epb);
if (ret < 0)
return ret;
@@ -161,7 +161,7 @@ static ssize_t energy_perf_bias_store(struct device *dev,
else if (kstrtou64(buf, 0, &val) || val > MAX_EPB)
return -EINVAL;
- ret = rdmsrq_on_cpu(cpu, MSR_IA32_ENERGY_PERF_BIAS, &epb);
+ ret = rdmsr_on_cpu(cpu, MSR_IA32_ENERGY_PERF_BIAS, &epb);
if (ret < 0)
return ret;
diff --git a/arch/x86/kernel/cpu/mce/inject.c b/arch/x86/kernel/cpu/mce/inject.c
index fa13a8a4946b..78649651c987 100644
--- a/arch/x86/kernel/cpu/mce/inject.c
+++ b/arch/x86/kernel/cpu/mce/inject.c
@@ -590,7 +590,7 @@ static int inj_bank_set(void *data, u64 val)
u64 cap;
/* Get bank count on target CPU so we can handle non-uniform values. */
- rdmsrq_on_cpu(m->extcpu, MSR_IA32_MCG_CAP, &cap);
+ rdmsr_on_cpu(m->extcpu, MSR_IA32_MCG_CAP, &cap);
n_banks = cap & MCG_BANKCNT_MASK;
if (val >= n_banks) {
@@ -614,7 +614,7 @@ static int inj_bank_set(void *data, u64 val)
if (cpu_feature_enabled(X86_FEATURE_SMCA)) {
u64 ipid;
- if (rdmsrq_on_cpu(m->extcpu, MSR_AMD64_SMCA_MCx_IPID(val), &ipid)) {
+ if (rdmsr_on_cpu(m->extcpu, MSR_AMD64_SMCA_MCx_IPID(val), &ipid)) {
pr_err("Error reading IPID on CPU%d\n", m->extcpu);
return -EINVAL;
}
diff --git a/arch/x86/kernel/cpu/microcode/intel.c b/arch/x86/kernel/cpu/microcode/intel.c
index 37ac4afe0972..b05e751ffcca 100644
--- a/arch/x86/kernel/cpu/microcode/intel.c
+++ b/arch/x86/kernel/cpu/microcode/intel.c
@@ -660,7 +660,7 @@ static void stage_microcode(void)
pkg_id = topology_logical_package_id(cpu);
- err = rdmsrq_on_cpu(cpu, MSR_IA32_MCU_STAGING_MBOX_ADDR, &mmio_pa);
+ err = rdmsr_on_cpu(cpu, MSR_IA32_MCU_STAGING_MBOX_ADDR, &mmio_pa);
if (WARN_ON_ONCE(err))
return;
diff --git a/arch/x86/lib/msr-smp.c b/arch/x86/lib/msr-smp.c
index 6e04aabda863..7c96f003bfe0 100644
--- a/arch/x86/lib/msr-smp.c
+++ b/arch/x86/lib/msr-smp.c
@@ -46,21 +46,6 @@ int rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u64 *q)
}
EXPORT_SYMBOL(rdmsr_on_cpu);
-int rdmsrq_on_cpu(unsigned int cpu, u32 msr_no, u64 *q)
-{
- int err;
- struct msr_info rv;
-
- memset(&rv, 0, sizeof(rv));
-
- rv.msr_no = msr_no;
- err = smp_call_function_single(cpu, __rdmsr_on_cpu, &rv, 1);
- *q = rv.reg.q;
-
- return err;
-}
-EXPORT_SYMBOL(rdmsrq_on_cpu);
-
int wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h)
{
int err;
diff --git a/drivers/cpufreq/acpi-cpufreq.c b/drivers/cpufreq/acpi-cpufreq.c
index 21639d9ac753..43bf1c21c4ca 100644
--- a/drivers/cpufreq/acpi-cpufreq.c
+++ b/drivers/cpufreq/acpi-cpufreq.c
@@ -79,11 +79,11 @@ static bool boost_state(unsigned int cpu)
case X86_VENDOR_INTEL:
case X86_VENDOR_CENTAUR:
case X86_VENDOR_ZHAOXIN:
- rdmsrq_on_cpu(cpu, MSR_IA32_MISC_ENABLE, &msr);
+ rdmsr_on_cpu(cpu, MSR_IA32_MISC_ENABLE, &msr);
return !(msr & MSR_IA32_MISC_ENABLE_TURBO_DISABLE);
case X86_VENDOR_HYGON:
case X86_VENDOR_AMD:
- rdmsrq_on_cpu(cpu, MSR_K7_HWCR, &msr);
+ rdmsr_on_cpu(cpu, MSR_K7_HWCR, &msr);
return !(msr & MSR_K7_HWCR_CPB_DIS);
}
return false;
diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
index 453084c67327..a6fc22f770c3 100644
--- a/drivers/cpufreq/amd-pstate.c
+++ b/drivers/cpufreq/amd-pstate.c
@@ -208,7 +208,7 @@ static u8 msr_get_epp(struct amd_cpudata *cpudata)
u64 value;
int ret;
- ret = rdmsrq_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, &value);
+ ret = rdmsr_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, &value);
if (ret < 0) {
pr_debug("Could not retrieve energy perf value (%d)\n", ret);
return ret;
@@ -382,7 +382,7 @@ static int amd_pstate_init_floor_perf(struct cpufreq_policy *policy)
if (!cpu_feature_enabled(X86_FEATURE_CPPC_PERF_PRIO))
return 0;
- ret = rdmsrq_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ2, &value);
+ ret = rdmsr_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ2, &value);
if (ret) {
pr_err("failed to read CPPC REQ2 value. Error (%d)\n", ret);
return ret;
@@ -480,7 +480,7 @@ static int msr_init_perf(struct amd_cpudata *cpudata)
if (ret)
return ret;
- ret = rdmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, &cppc_req);
+ ret = rdmsr_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, &cppc_req);
if (ret)
return ret;
@@ -881,7 +881,7 @@ static int amd_pstate_init_boost_support(struct amd_cpudata *cpudata)
goto exit_err;
}
- ret = rdmsrq_on_cpu(cpudata->cpu, MSR_K7_HWCR, &boost_val);
+ ret = rdmsr_on_cpu(cpudata->cpu, MSR_K7_HWCR, &boost_val);
if (ret) {
pr_err_once("failed to read initial CPU boost state!\n");
ret = -EIO;
diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
index 1292da53e5fc..e5b30a53c49a 100644
--- a/drivers/cpufreq/intel_pstate.c
+++ b/drivers/cpufreq/intel_pstate.c
@@ -632,8 +632,8 @@ static s16 intel_pstate_get_epp(struct cpudata *cpu_data, u64 hwp_req_data)
* MSR_HWP_REQUEST, so need to read and get EPP.
*/
if (!hwp_req_data) {
- epp = rdmsrq_on_cpu(cpu_data->cpu, MSR_HWP_REQUEST,
- &hwp_req_data);
+ epp = rdmsr_on_cpu(cpu_data->cpu, MSR_HWP_REQUEST,
+ &hwp_req_data);
if (epp)
return epp;
}
@@ -886,7 +886,7 @@ static ssize_t show_base_frequency(struct cpufreq_policy *policy, char *buf)
if (ratio <= 0) {
u64 cap;
- rdmsrq_on_cpu(policy->cpu, MSR_HWP_CAPABILITIES, &cap);
+ rdmsr_on_cpu(policy->cpu, MSR_HWP_CAPABILITIES, &cap);
ratio = HWP_GUARANTEED_PERF(cap);
}
@@ -1187,7 +1187,7 @@ static void __intel_pstate_get_hwp_cap(struct cpudata *cpu)
{
u64 cap;
- rdmsrq_on_cpu(cpu->cpu, MSR_HWP_CAPABILITIES, &cap);
+ rdmsr_on_cpu(cpu->cpu, MSR_HWP_CAPABILITIES, &cap);
WRITE_ONCE(cpu->hwp_cap_cached, cap);
cpu->pstate.max_pstate = HWP_GUARANTEED_PERF(cap);
cpu->pstate.turbo_pstate = HWP_HIGHEST_PERF(cap);
@@ -1269,7 +1269,7 @@ static void intel_pstate_hwp_set(unsigned int cpu)
if (cpu_data->policy == CPUFREQ_POLICY_PERFORMANCE)
min = max;
- rdmsrq_on_cpu(cpu, MSR_HWP_REQUEST, &value);
+ rdmsr_on_cpu(cpu, MSR_HWP_REQUEST, &value);
value &= ~HWP_MIN_PERF(~0L);
value |= HWP_MIN_PERF(min);
@@ -2156,7 +2156,7 @@ static int core_get_min_pstate(int cpu)
{
u64 value;
- rdmsrq_on_cpu(cpu, MSR_PLATFORM_INFO, &value);
+ rdmsr_on_cpu(cpu, MSR_PLATFORM_INFO, &value);
return (value >> 40) & 0xFF;
}
@@ -2164,7 +2164,7 @@ static int core_get_max_pstate_physical(int cpu)
{
u64 value;
- rdmsrq_on_cpu(cpu, MSR_PLATFORM_INFO, &value);
+ rdmsr_on_cpu(cpu, MSR_PLATFORM_INFO, &value);
return (value >> 8) & 0xFF;
}
@@ -2209,7 +2209,7 @@ static int core_get_max_pstate(int cpu)
int tdp_ratio;
int err;
- rdmsrq_on_cpu(cpu, MSR_PLATFORM_INFO, &plat_info);
+ rdmsr_on_cpu(cpu, MSR_PLATFORM_INFO, &plat_info);
max_pstate = (plat_info >> 8) & 0xFF;
tdp_ratio = core_get_tdp_ratio(cpu, plat_info);
@@ -2241,7 +2241,7 @@ static int core_get_turbo_pstate(int cpu)
u64 value;
int nont, ret;
- rdmsrq_on_cpu(cpu, MSR_TURBO_RATIO_LIMIT, &value);
+ rdmsr_on_cpu(cpu, MSR_TURBO_RATIO_LIMIT, &value);
nont = core_get_max_pstate(cpu);
ret = (value) & 255;
if (ret <= nont)
@@ -2264,7 +2264,7 @@ static int knl_get_turbo_pstate(int cpu)
u64 value;
int nont, ret;
- rdmsrq_on_cpu(cpu, MSR_TURBO_RATIO_LIMIT, &value);
+ rdmsr_on_cpu(cpu, MSR_TURBO_RATIO_LIMIT, &value);
nont = core_get_max_pstate(cpu);
ret = (((value) >> 8) & 0xFF);
if (ret <= nont)
@@ -3318,7 +3318,7 @@ static int intel_cpufreq_cpu_init(struct cpufreq_policy *policy)
intel_pstate_get_hwp_cap(cpu);
- rdmsrq_on_cpu(cpu->cpu, MSR_HWP_REQUEST, &value);
+ rdmsr_on_cpu(cpu->cpu, MSR_HWP_REQUEST, &value);
WRITE_ONCE(cpu->hwp_req_cached, value);
cpu->epp_cached = intel_pstate_get_epp(cpu, value);
diff --git a/drivers/platform/x86/intel/uncore-frequency/uncore-frequency.c b/drivers/platform/x86/intel/uncore-frequency/uncore-frequency.c
index 667f2c8b9594..b9878a4d391b 100644
--- a/drivers/platform/x86/intel/uncore-frequency/uncore-frequency.c
+++ b/drivers/platform/x86/intel/uncore-frequency/uncore-frequency.c
@@ -52,7 +52,7 @@ static int uncore_read_control_freq(struct uncore_data *data, unsigned int *valu
if (data->control_cpu < 0)
return -ENXIO;
- ret = rdmsrq_on_cpu(data->control_cpu, MSR_UNCORE_RATIO_LIMIT, &cap);
+ ret = rdmsr_on_cpu(data->control_cpu, MSR_UNCORE_RATIO_LIMIT, &cap);
if (ret)
return ret;
@@ -77,7 +77,7 @@ static int uncore_write_control_freq(struct uncore_data *data, unsigned int inpu
if (data->control_cpu < 0)
return -ENXIO;
- ret = rdmsrq_on_cpu(data->control_cpu, MSR_UNCORE_RATIO_LIMIT, &cap);
+ ret = rdmsr_on_cpu(data->control_cpu, MSR_UNCORE_RATIO_LIMIT, &cap);
if (ret)
return ret;
@@ -106,7 +106,7 @@ static int uncore_read_freq(struct uncore_data *data, unsigned int *freq)
if (data->control_cpu < 0)
return -ENXIO;
- ret = rdmsrq_on_cpu(data->control_cpu, MSR_UNCORE_PERF_STATUS, &ratio);
+ ret = rdmsr_on_cpu(data->control_cpu, MSR_UNCORE_PERF_STATUS, &ratio);
if (ret)
return ret;
--
2.53.0
* [PATCH RFC 03/11] x86/msr: Switch wrmsr_on_cpu() to use a 64-bit quantity
From: Juergen Gross @ 2026-04-28 10:41 UTC (permalink / raw)
To: linux-kernel, x86, linux-perf-users, linux-edac, linux-pm
Cc: Juergen Gross, Peter Zijlstra, Ingo Molnar,
Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
James Clark, Thomas Gleixner, Borislav Petkov, Dave Hansen,
H. Peter Anvin, Tony Luck, Rafael J. Wysocki, Viresh Kumar,
Daniel Lezcano, Zhang Rui, Lukasz Luba
In order to prepare retiring wrmsrq_on_cpu(), switch wrmsr_on_cpu() to
have the same interface as wrmsrq_on_cpu().
Switch all wrmsr_on_cpu() callers to use the new interface.
Signed-off-by: Juergen Gross <jgross@suse.com>
---
arch/x86/events/intel/ds.c | 11 ++++-------
arch/x86/include/asm/msr.h | 8 ++++----
arch/x86/kernel/cpu/mce/inject.c | 2 +-
arch/x86/lib/msr-smp.c | 5 ++---
drivers/cpufreq/p4-clockmod.c | 4 ++--
drivers/cpufreq/speedstep-centrino.c | 4 ++--
drivers/thermal/intel/x86_pkg_temp_thermal.c | 2 +-
7 files changed, 16 insertions(+), 20 deletions(-)
diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index 7f0d515c07c5..06d6d06c7a75 100644
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -780,9 +780,7 @@ void init_debug_store_on_cpu(int cpu)
if (!ds)
return;
- wrmsr_on_cpu(cpu, MSR_IA32_DS_AREA,
- (u32)((u64)(unsigned long)ds),
- (u32)((u64)(unsigned long)ds >> 32));
+ wrmsr_on_cpu(cpu, MSR_IA32_DS_AREA, (u64)(unsigned long)ds);
}
void fini_debug_store_on_cpu(int cpu)
@@ -790,7 +788,7 @@ void fini_debug_store_on_cpu(int cpu)
if (!per_cpu(cpu_hw_events, cpu).ds)
return;
- wrmsr_on_cpu(cpu, MSR_IA32_DS_AREA, 0, 0);
+ wrmsr_on_cpu(cpu, MSR_IA32_DS_AREA, 0);
}
static DEFINE_PER_CPU(void *, insn_buffer);
@@ -1095,8 +1093,7 @@ void init_arch_pebs_on_cpu(int cpu)
* contiguous physical buffer (__alloc_pages_node() with order)
*/
arch_pebs_base = virt_to_phys(cpuc->pebs_vaddr) | PEBS_BUFFER_SHIFT;
- wrmsr_on_cpu(cpu, MSR_IA32_PEBS_BASE, (u32)arch_pebs_base,
- (u32)(arch_pebs_base >> 32));
+ wrmsr_on_cpu(cpu, MSR_IA32_PEBS_BASE, arch_pebs_base);
x86_pmu.pebs_active = 1;
}
@@ -1105,7 +1102,7 @@ inline void fini_arch_pebs_on_cpu(int cpu)
if (!x86_pmu.arch_pebs)
return;
- wrmsr_on_cpu(cpu, MSR_IA32_PEBS_BASE, 0, 0);
+ wrmsr_on_cpu(cpu, MSR_IA32_PEBS_BASE, 0);
}
/*
diff --git a/arch/x86/include/asm/msr.h b/arch/x86/include/asm/msr.h
index 8c96fc5c6169..a004440b4c0a 100644
--- a/arch/x86/include/asm/msr.h
+++ b/arch/x86/include/asm/msr.h
@@ -257,7 +257,7 @@ int msr_clear_bit(u32 msr, u8 bit);
#ifdef CONFIG_SMP
int rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u64 *q);
-int wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h);
+int wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u64 q);
int wrmsrq_on_cpu(unsigned int cpu, u32 msr_no, u64 q);
void rdmsr_on_cpus(const struct cpumask *mask, u32 msr_no, struct msr __percpu *msrs);
void wrmsr_on_cpus(const struct cpumask *mask, u32 msr_no, struct msr __percpu *msrs);
@@ -273,9 +273,9 @@ static inline int rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u64 *q)
rdmsrq(msr_no, *q);
return 0;
}
-static inline int wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h)
+static inline int wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u64 q)
{
- wrmsr(msr_no, l, h);
+ wrmsrq(msr_no, q);
return 0;
}
static inline int wrmsrq_on_cpu(unsigned int cpu, u32 msr_no, u64 q)
@@ -291,7 +291,7 @@ static inline void rdmsr_on_cpus(const struct cpumask *m, u32 msr_no,
static inline void wrmsr_on_cpus(const struct cpumask *m, u32 msr_no,
struct msr __percpu *msrs)
{
- wrmsr_on_cpu(0, msr_no, raw_cpu_read(msrs->l), raw_cpu_read(msrs->h));
+ wrmsrq_on_cpu(0, msr_no, raw_cpu_read(msrs->q));
}
static inline int rdmsr_safe_on_cpu(unsigned int cpu, u32 msr_no,
u32 *l, u32 *h)
diff --git a/arch/x86/kernel/cpu/mce/inject.c b/arch/x86/kernel/cpu/mce/inject.c
index 78649651c987..2d75098211b3 100644
--- a/arch/x86/kernel/cpu/mce/inject.c
+++ b/arch/x86/kernel/cpu/mce/inject.c
@@ -327,7 +327,7 @@ static int toggle_hw_mce_inject(unsigned int cpu, bool enable)
enable ? (val.l |= BIT(18)) : (val.l &= ~BIT(18));
- err = wrmsr_on_cpu(cpu, MSR_K7_HWCR, val.l, val.h);
+ err = wrmsr_on_cpu(cpu, MSR_K7_HWCR, val.q);
if (err)
pr_err("%s: error writing HWCR\n", __func__);
diff --git a/arch/x86/lib/msr-smp.c b/arch/x86/lib/msr-smp.c
index 7c96f003bfe0..0b4f3c4e4f82 100644
--- a/arch/x86/lib/msr-smp.c
+++ b/arch/x86/lib/msr-smp.c
@@ -46,7 +46,7 @@ int rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u64 *q)
}
EXPORT_SYMBOL(rdmsr_on_cpu);
-int wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h)
+int wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u64 q)
{
int err;
struct msr_info rv;
@@ -54,8 +54,7 @@ int wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h)
memset(&rv, 0, sizeof(rv));
rv.msr_no = msr_no;
- rv.reg.l = l;
- rv.reg.h = h;
+ rv.reg.q = q;
err = smp_call_function_single(cpu, __wrmsr_on_cpu, &rv, 1);
return err;
diff --git a/drivers/cpufreq/p4-clockmod.c b/drivers/cpufreq/p4-clockmod.c
index 393c4a5d2021..409c0210e48a 100644
--- a/drivers/cpufreq/p4-clockmod.c
+++ b/drivers/cpufreq/p4-clockmod.c
@@ -68,7 +68,7 @@ static int cpufreq_p4_setdc(unsigned int cpu, unsigned int newstate)
rdmsr_on_cpu(cpu, MSR_IA32_THERM_CONTROL, &val.q);
if (newstate == DC_DISABLE) {
pr_debug("CPU#%d disabling modulation\n", cpu);
- wrmsr_on_cpu(cpu, MSR_IA32_THERM_CONTROL, val.l & ~(1<<4), val.h);
+ wrmsr_on_cpu(cpu, MSR_IA32_THERM_CONTROL, val.q & ~(1ULL << 4));
} else {
pr_debug("CPU#%d setting duty cycle to %d%%\n",
cpu, ((125 * newstate) / 10));
@@ -79,7 +79,7 @@ static int cpufreq_p4_setdc(unsigned int cpu, unsigned int newstate)
*/
val.l = (val.l & ~14);
val.l = val.l | (1<<4) | ((newstate & 0x7)<<1);
- wrmsr_on_cpu(cpu, MSR_IA32_THERM_CONTROL, val.l, val.h);
+ wrmsr_on_cpu(cpu, MSR_IA32_THERM_CONTROL, val.q);
}
return 0;
diff --git a/drivers/cpufreq/speedstep-centrino.c b/drivers/cpufreq/speedstep-centrino.c
index b74c85128377..121cddb1430f 100644
--- a/drivers/cpufreq/speedstep-centrino.c
+++ b/drivers/cpufreq/speedstep-centrino.c
@@ -475,7 +475,7 @@ static int centrino_target(struct cpufreq_policy *policy, unsigned int index)
oldmsr.l |= msr;
}
- wrmsr_on_cpu(good_cpu, MSR_IA32_PERF_CTL, oldmsr.l, oldmsr.h);
+ wrmsr_on_cpu(good_cpu, MSR_IA32_PERF_CTL, oldmsr.q);
if (policy->shared_type == CPUFREQ_SHARED_TYPE_ANY)
break;
@@ -491,7 +491,7 @@ static int centrino_target(struct cpufreq_policy *policy, unsigned int index)
*/
for_each_cpu(j, covered_cpus)
- wrmsr_on_cpu(j, MSR_IA32_PERF_CTL, oldmsr.l, oldmsr.h);
+ wrmsr_on_cpu(j, MSR_IA32_PERF_CTL, oldmsr.q);
}
retval = 0;
diff --git a/drivers/thermal/intel/x86_pkg_temp_thermal.c b/drivers/thermal/intel/x86_pkg_temp_thermal.c
index fc7dbba4f9ca..e52d35015486 100644
--- a/drivers/thermal/intel/x86_pkg_temp_thermal.c
+++ b/drivers/thermal/intel/x86_pkg_temp_thermal.c
@@ -169,7 +169,7 @@ sys_set_trip_temp(struct thermal_zone_device *tzd,
}
return wrmsr_on_cpu(zonedev->cpu, MSR_IA32_PACKAGE_THERM_INTERRUPT,
- v.l, v.h);
+ v.q);
}
/* Thermal zone callback registry */
--
2.53.0
* [PATCH RFC 04/11] x86/msr: Switch all callers of wrmsrq_on_cpu() to use wrmsr_on_cpu()
2026-04-28 10:41 [PATCH RFC 00/11] x86/msr: Reduce MSR access interfaces Juergen Gross
` (2 preceding siblings ...)
2026-04-28 10:41 ` [PATCH RFC 03/11] x86/msr: Switch wrmsr_on_cpu() to use a 64-bit quantity Juergen Gross
@ 2026-04-28 10:41 ` Juergen Gross
2026-04-28 10:41 ` [PATCH RFC 05/11] x86/msr: Switch rdmsr_safe_on_cpu() to return a 64-bit quantity Juergen Gross
` (3 subsequent siblings)
7 siblings, 0 replies; 9+ messages in thread
From: Juergen Gross @ 2026-04-28 10:41 UTC (permalink / raw)
To: linux-kernel, x86, linux-pm, platform-driver-x86
Cc: Juergen Gross, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, H. Peter Anvin, Huang Rui, Mario Limonciello,
Perry Yuan, K Prateek Nayak, Rafael J. Wysocki, Viresh Kumar,
Srinivas Pandruvada, Len Brown, Hans de Goede, Ilpo Järvinen
Now that wrmsr_on_cpu() has the same interface as wrmsrq_on_cpu(), the
callers of wrmsrq_on_cpu() can be switched to wrmsr_on_cpu() and
wrmsrq_on_cpu() can be removed.
Signed-off-by: Juergen Gross <jgross@suse.com>
---
arch/x86/include/asm/msr.h | 8 +----
arch/x86/kernel/cpu/intel_epb.c | 4 +--
arch/x86/lib/msr-smp.c | 16 ---------
drivers/cpufreq/amd-pstate.c | 8 ++---
drivers/cpufreq/intel_pstate.c | 36 +++++++++----------
drivers/platform/x86/amd/hfi/hfi.c | 4 +--
.../intel/uncore-frequency/uncore-frequency.c | 6 ++--
7 files changed, 30 insertions(+), 52 deletions(-)
diff --git a/arch/x86/include/asm/msr.h b/arch/x86/include/asm/msr.h
index a004440b4c0a..c0a3bfba6b56 100644
--- a/arch/x86/include/asm/msr.h
+++ b/arch/x86/include/asm/msr.h
@@ -258,7 +258,6 @@ int msr_clear_bit(u32 msr, u8 bit);
#ifdef CONFIG_SMP
int rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u64 *q);
int wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u64 q);
-int wrmsrq_on_cpu(unsigned int cpu, u32 msr_no, u64 q);
void rdmsr_on_cpus(const struct cpumask *mask, u32 msr_no, struct msr __percpu *msrs);
void wrmsr_on_cpus(const struct cpumask *mask, u32 msr_no, struct msr __percpu *msrs);
int rdmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h);
@@ -278,11 +277,6 @@ static inline int wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u64 q)
wrmsrq(msr_no, q);
return 0;
}
-static inline int wrmsrq_on_cpu(unsigned int cpu, u32 msr_no, u64 q)
-{
- wrmsrq(msr_no, q);
- return 0;
-}
static inline void rdmsr_on_cpus(const struct cpumask *m, u32 msr_no,
struct msr __percpu *msrs)
{
@@ -291,7 +285,7 @@ static inline void rdmsr_on_cpus(const struct cpumask *m, u32 msr_no,
static inline void wrmsr_on_cpus(const struct cpumask *m, u32 msr_no,
struct msr __percpu *msrs)
{
- wrmsrq_on_cpu(0, msr_no, raw_cpu_read(msrs->q));
+ wrmsr_on_cpu(0, msr_no, raw_cpu_read(msrs->q));
}
static inline int rdmsr_safe_on_cpu(unsigned int cpu, u32 msr_no,
u32 *l, u32 *h)
diff --git a/arch/x86/kernel/cpu/intel_epb.c b/arch/x86/kernel/cpu/intel_epb.c
index cb5a3c299f26..7533f47bf63d 100644
--- a/arch/x86/kernel/cpu/intel_epb.c
+++ b/arch/x86/kernel/cpu/intel_epb.c
@@ -165,8 +165,8 @@ static ssize_t energy_perf_bias_store(struct device *dev,
if (ret < 0)
return ret;
- ret = wrmsrq_on_cpu(cpu, MSR_IA32_ENERGY_PERF_BIAS,
- (epb & ~EPB_MASK) | val);
+ ret = wrmsr_on_cpu(cpu, MSR_IA32_ENERGY_PERF_BIAS,
+ (epb & ~EPB_MASK) | val);
if (ret < 0)
return ret;
diff --git a/arch/x86/lib/msr-smp.c b/arch/x86/lib/msr-smp.c
index 0b4f3c4e4f82..42d42641f2aa 100644
--- a/arch/x86/lib/msr-smp.c
+++ b/arch/x86/lib/msr-smp.c
@@ -61,22 +61,6 @@ int wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u64 q)
}
EXPORT_SYMBOL(wrmsr_on_cpu);
-int wrmsrq_on_cpu(unsigned int cpu, u32 msr_no, u64 q)
-{
- int err;
- struct msr_info rv;
-
- memset(&rv, 0, sizeof(rv));
-
- rv.msr_no = msr_no;
- rv.reg.q = q;
-
- err = smp_call_function_single(cpu, __wrmsr_on_cpu, &rv, 1);
-
- return err;
-}
-EXPORT_SYMBOL(wrmsrq_on_cpu);
-
static void __rwmsr_on_cpus(const struct cpumask *mask, u32 msr_no,
struct msr __percpu *msrs,
void (*msr_func) (void *info))
diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
index a6fc22f770c3..543b34006918 100644
--- a/drivers/cpufreq/amd-pstate.c
+++ b/drivers/cpufreq/amd-pstate.c
@@ -271,7 +271,7 @@ static int msr_update_perf(struct cpufreq_policy *policy, u8 min_perf,
if (fast_switch) {
wrmsrq(MSR_AMD_CPPC_REQ, value);
} else {
- int ret = wrmsrq_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, value);
+ int ret = wrmsr_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, value);
if (ret)
return ret;
@@ -319,7 +319,7 @@ static int msr_set_epp(struct cpufreq_policy *policy, u8 epp)
if (value == prev)
return 0;
- ret = wrmsrq_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, value);
+ ret = wrmsr_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, value);
if (ret) {
pr_err("failed to set energy perf value (%d)\n", ret);
return ret;
@@ -357,7 +357,7 @@ static int amd_pstate_set_floor_perf(struct cpufreq_policy *policy, u8 perf)
goto out_trace;
}
- ret = wrmsrq_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ2, value);
+ ret = wrmsr_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ2, value);
if (ret) {
changed = false;
pr_err("failed to set CPPC REQ2 value. Error (%d)\n", ret);
@@ -900,7 +900,7 @@ static int amd_pstate_init_boost_support(struct amd_cpudata *cpudata)
static void amd_perf_ctl_reset(unsigned int cpu)
{
- wrmsrq_on_cpu(cpu, MSR_AMD_PERF_CTL, 0);
+ wrmsr_on_cpu(cpu, MSR_AMD_PERF_CTL, 0);
}
#define CPPC_MAX_PERF U8_MAX
diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
index e5b30a53c49a..08214a0561e7 100644
--- a/drivers/cpufreq/intel_pstate.c
+++ b/drivers/cpufreq/intel_pstate.c
@@ -736,7 +736,7 @@ static int intel_pstate_set_epp(struct cpudata *cpu, u32 epp)
* function, so it cannot run in parallel with the update below.
*/
WRITE_ONCE(cpu->hwp_req_cached, value);
- ret = wrmsrq_on_cpu(cpu->cpu, MSR_HWP_REQUEST, value);
+ ret = wrmsr_on_cpu(cpu->cpu, MSR_HWP_REQUEST, value);
if (!ret)
cpu->epp_cached = epp;
@@ -1315,7 +1315,7 @@ static void intel_pstate_hwp_set(unsigned int cpu)
skip_epp:
WRITE_ONCE(cpu_data->hwp_req_cached, value);
- wrmsrq_on_cpu(cpu, MSR_HWP_REQUEST, value);
+ wrmsr_on_cpu(cpu, MSR_HWP_REQUEST, value);
}
static void intel_pstate_disable_hwp_interrupt(struct cpudata *cpudata);
@@ -1362,7 +1362,7 @@ static void intel_pstate_hwp_offline(struct cpudata *cpu)
if (boot_cpu_has(X86_FEATURE_HWP_EPP))
value |= HWP_ENERGY_PERF_PREFERENCE(HWP_EPP_POWERSAVE);
- wrmsrq_on_cpu(cpu->cpu, MSR_HWP_REQUEST, value);
+ wrmsr_on_cpu(cpu->cpu, MSR_HWP_REQUEST, value);
mutex_lock(&hybrid_capacity_lock);
@@ -1411,7 +1411,7 @@ static void intel_pstate_hwp_enable(struct cpudata *cpudata);
static void intel_pstate_hwp_reenable(struct cpudata *cpu)
{
intel_pstate_hwp_enable(cpu);
- wrmsrq_on_cpu(cpu->cpu, MSR_HWP_REQUEST, READ_ONCE(cpu->hwp_req_cached));
+ wrmsr_on_cpu(cpu->cpu, MSR_HWP_REQUEST, READ_ONCE(cpu->hwp_req_cached));
}
static int intel_pstate_suspend(struct cpufreq_policy *policy)
@@ -1919,7 +1919,7 @@ static void intel_pstate_notify_work(struct work_struct *work)
hybrid_update_capacity(cpudata);
}
- wrmsrq_on_cpu(cpudata->cpu, MSR_HWP_STATUS, 0);
+ wrmsr_on_cpu(cpudata->cpu, MSR_HWP_STATUS, 0);
}
static DEFINE_RAW_SPINLOCK(hwp_notify_lock);
@@ -1969,8 +1969,8 @@ static void intel_pstate_disable_hwp_interrupt(struct cpudata *cpudata)
if (!cpu_feature_enabled(X86_FEATURE_HWP_NOTIFY))
return;
- /* wrmsrq_on_cpu has to be outside spinlock as this can result in IPC */
- wrmsrq_on_cpu(cpudata->cpu, MSR_HWP_INTERRUPT, 0x00);
+ /* wrmsr_on_cpu has to be outside spinlock as this can result in IPC */
+ wrmsr_on_cpu(cpudata->cpu, MSR_HWP_INTERRUPT, 0x00);
raw_spin_lock_irq(&hwp_notify_lock);
cancel_work = cpumask_test_and_clear_cpu(cpudata->cpu, &hwp_intr_enable_mask);
@@ -1997,9 +1997,9 @@ static void intel_pstate_enable_hwp_interrupt(struct cpudata *cpudata)
if (cpu_feature_enabled(X86_FEATURE_HWP_HIGHEST_PERF_CHANGE))
interrupt_mask |= HWP_HIGHEST_PERF_CHANGE_REQ;
- /* wrmsrq_on_cpu has to be outside spinlock as this can result in IPC */
- wrmsrq_on_cpu(cpudata->cpu, MSR_HWP_INTERRUPT, interrupt_mask);
- wrmsrq_on_cpu(cpudata->cpu, MSR_HWP_STATUS, 0);
+ /* wrmsr_on_cpu has to be outside spinlock as this can result in IPC */
+ wrmsr_on_cpu(cpudata->cpu, MSR_HWP_INTERRUPT, interrupt_mask);
+ wrmsr_on_cpu(cpudata->cpu, MSR_HWP_STATUS, 0);
}
}
@@ -2038,9 +2038,9 @@ static void intel_pstate_hwp_enable(struct cpudata *cpudata)
{
/* First disable HWP notification interrupt till we activate again */
if (boot_cpu_has(X86_FEATURE_HWP_NOTIFY))
- wrmsrq_on_cpu(cpudata->cpu, MSR_HWP_INTERRUPT, 0x00);
+ wrmsr_on_cpu(cpudata->cpu, MSR_HWP_INTERRUPT, 0x00);
- wrmsrq_on_cpu(cpudata->cpu, MSR_PM_ENABLE, 0x1);
+ wrmsr_on_cpu(cpudata->cpu, MSR_PM_ENABLE, 0x1);
intel_pstate_enable_hwp_interrupt(cpudata);
@@ -2306,8 +2306,8 @@ static void intel_pstate_set_pstate(struct cpudata *cpu, int pstate)
* the CPU being updated, so force the register update to run on the
* right CPU.
*/
- wrmsrq_on_cpu(cpu->cpu, MSR_IA32_PERF_CTL,
- pstate_funcs.get_val(cpu, pstate));
+ wrmsr_on_cpu(cpu->cpu, MSR_IA32_PERF_CTL,
+ pstate_funcs.get_val(cpu, pstate));
}
static void intel_pstate_set_min_pstate(struct cpudata *cpu)
@@ -3164,7 +3164,7 @@ static void intel_cpufreq_hwp_update(struct cpudata *cpu, u32 min, u32 max,
if (fast_switch)
wrmsrq(MSR_HWP_REQUEST, value);
else
- wrmsrq_on_cpu(cpu->cpu, MSR_HWP_REQUEST, value);
+ wrmsr_on_cpu(cpu->cpu, MSR_HWP_REQUEST, value);
}
static void intel_cpufreq_perf_ctl_update(struct cpudata *cpu,
@@ -3174,8 +3174,8 @@ static void intel_cpufreq_perf_ctl_update(struct cpudata *cpu,
wrmsrq(MSR_IA32_PERF_CTL,
pstate_funcs.get_val(cpu, target_pstate));
else
- wrmsrq_on_cpu(cpu->cpu, MSR_IA32_PERF_CTL,
- pstate_funcs.get_val(cpu, target_pstate));
+ wrmsr_on_cpu(cpu->cpu, MSR_IA32_PERF_CTL,
+ pstate_funcs.get_val(cpu, target_pstate));
}
static int intel_cpufreq_update_pstate(struct cpufreq_policy *policy,
@@ -3385,7 +3385,7 @@ static int intel_cpufreq_suspend(struct cpufreq_policy *policy)
* written by it may not be suitable.
*/
value &= ~HWP_DESIRED_PERF(~0L);
- wrmsrq_on_cpu(cpu->cpu, MSR_HWP_REQUEST, value);
+ wrmsr_on_cpu(cpu->cpu, MSR_HWP_REQUEST, value);
WRITE_ONCE(cpu->hwp_req_cached, value);
}
diff --git a/drivers/platform/x86/amd/hfi/hfi.c b/drivers/platform/x86/amd/hfi/hfi.c
index 83863a5e0fbc..580fbc3648bf 100644
--- a/drivers/platform/x86/amd/hfi/hfi.c
+++ b/drivers/platform/x86/amd/hfi/hfi.c
@@ -260,11 +260,11 @@ static int amd_hfi_set_state(unsigned int cpu, bool state)
{
int ret;
- ret = wrmsrq_on_cpu(cpu, MSR_AMD_WORKLOAD_CLASS_CONFIG, state ? 1 : 0);
+ ret = wrmsr_on_cpu(cpu, MSR_AMD_WORKLOAD_CLASS_CONFIG, state ? 1 : 0);
if (ret)
return ret;
- return wrmsrq_on_cpu(cpu, MSR_AMD_WORKLOAD_HRST, 0x1);
+ return wrmsr_on_cpu(cpu, MSR_AMD_WORKLOAD_HRST, 0x1);
}
/**
diff --git a/drivers/platform/x86/intel/uncore-frequency/uncore-frequency.c b/drivers/platform/x86/intel/uncore-frequency/uncore-frequency.c
index b9878a4d391b..c4c24a355854 100644
--- a/drivers/platform/x86/intel/uncore-frequency/uncore-frequency.c
+++ b/drivers/platform/x86/intel/uncore-frequency/uncore-frequency.c
@@ -89,7 +89,7 @@ static int uncore_write_control_freq(struct uncore_data *data, unsigned int inpu
cap |= FIELD_PREP(UNCORE_MIN_RATIO_MASK, input);
}
- ret = wrmsrq_on_cpu(data->control_cpu, MSR_UNCORE_RATIO_LIMIT, cap);
+ ret = wrmsr_on_cpu(data->control_cpu, MSR_UNCORE_RATIO_LIMIT, cap);
if (ret)
return ret;
@@ -213,8 +213,8 @@ static int uncore_pm_notify(struct notifier_block *nb, unsigned long mode,
if (!data || !data->valid || !data->stored_uncore_data)
return 0;
- wrmsrq_on_cpu(data->control_cpu, MSR_UNCORE_RATIO_LIMIT,
- data->stored_uncore_data);
+ wrmsr_on_cpu(data->control_cpu, MSR_UNCORE_RATIO_LIMIT,
+ data->stored_uncore_data);
}
break;
default:
--
2.53.0
* [PATCH RFC 05/11] x86/msr: Switch rdmsr_safe_on_cpu() to return a 64-bit quantity
2026-04-28 10:41 [PATCH RFC 00/11] x86/msr: Reduce MSR access interfaces Juergen Gross
` (3 preceding siblings ...)
2026-04-28 10:41 ` [PATCH RFC 04/11] x86/msr: Switch all callers of wrmsrq_on_cpu() to use wrmsr_on_cpu() Juergen Gross
@ 2026-04-28 10:41 ` Juergen Gross
2026-04-28 10:42 ` [PATCH RFC 06/11] x86/msr: Switch all callers of rdmsrq_safe_on_cpu() to use rdmsr_safe_on_cpu() Juergen Gross
` (2 subsequent siblings)
7 siblings, 0 replies; 9+ messages in thread
From: Juergen Gross @ 2026-04-28 10:41 UTC (permalink / raw)
To: linux-kernel, x86, linux-hwmon, linux-pm
Cc: Juergen Gross, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, H. Peter Anvin, Guenter Roeck, Rafael J. Wysocki,
Daniel Lezcano, Zhang Rui, Lukasz Luba
In order to prepare retiring rdmsrq_safe_on_cpu(), switch
rdmsr_safe_on_cpu() to have the same interface as rdmsrq_safe_on_cpu().
Switch all rdmsr_safe_on_cpu() callers to use the new interface.
Signed-off-by: Juergen Gross <jgross@suse.com>
---
arch/x86/include/asm/msr.h | 7 +++--
arch/x86/kernel/msr.c | 4 +--
arch/x86/lib/msr-smp.c | 9 +++----
drivers/hwmon/coretemp.c | 32 +++++++++++------------
drivers/hwmon/via-cputemp.c | 16 ++++++------
drivers/thermal/intel/intel_tcc.c | 43 ++++++++++++++++---------------
6 files changed, 54 insertions(+), 57 deletions(-)
diff --git a/arch/x86/include/asm/msr.h b/arch/x86/include/asm/msr.h
index c0a3bfba6b56..b3b43bc04b69 100644
--- a/arch/x86/include/asm/msr.h
+++ b/arch/x86/include/asm/msr.h
@@ -260,7 +260,7 @@ int rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u64 *q);
int wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u64 q);
void rdmsr_on_cpus(const struct cpumask *mask, u32 msr_no, struct msr __percpu *msrs);
void wrmsr_on_cpus(const struct cpumask *mask, u32 msr_no, struct msr __percpu *msrs);
-int rdmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h);
+int rdmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 *q);
int wrmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h);
int rdmsrq_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 *q);
int wrmsrq_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 q);
@@ -287,10 +287,9 @@ static inline void wrmsr_on_cpus(const struct cpumask *m, u32 msr_no,
{
wrmsr_on_cpu(0, msr_no, raw_cpu_read(msrs->q));
}
-static inline int rdmsr_safe_on_cpu(unsigned int cpu, u32 msr_no,
- u32 *l, u32 *h)
+static inline int rdmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 *q)
{
- return rdmsr_safe(msr_no, l, h);
+ return rdmsrq_safe(msr_no, q);
}
static inline int wrmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h)
{
diff --git a/arch/x86/kernel/msr.c b/arch/x86/kernel/msr.c
index 4469c784eaa0..c9429a718810 100644
--- a/arch/x86/kernel/msr.c
+++ b/arch/x86/kernel/msr.c
@@ -53,7 +53,7 @@ static ssize_t msr_read(struct file *file, char __user *buf,
size_t count, loff_t *ppos)
{
u32 __user *tmp = (u32 __user *) buf;
- u32 data[2];
+ u64 data;
u32 reg = *ppos;
int cpu = iminor(file_inode(file));
int err = 0;
@@ -63,7 +63,7 @@ static ssize_t msr_read(struct file *file, char __user *buf,
return -EINVAL; /* Invalid chunk size */
for (; count; count -= 8) {
- err = rdmsr_safe_on_cpu(cpu, reg, &data[0], &data[1]);
+ err = rdmsr_safe_on_cpu(cpu, reg, &data);
if (err)
break;
if (copy_to_user(tmp, &data, 8)) {
diff --git a/arch/x86/lib/msr-smp.c b/arch/x86/lib/msr-smp.c
index 42d42641f2aa..0dc3921e0259 100644
--- a/arch/x86/lib/msr-smp.c
+++ b/arch/x86/lib/msr-smp.c
@@ -131,7 +131,7 @@ static void __wrmsr_safe_on_cpu(void *info)
rv->err = wrmsr_safe(rv->msr_no, rv->reg.l, rv->reg.h);
}
-int rdmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h)
+int rdmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 *q)
{
struct msr_info_completion rv;
call_single_data_t csd;
@@ -148,8 +148,7 @@ int rdmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h)
wait_for_completion(&rv.done);
err = rv.msr.err;
}
- *l = rv.msr.reg.l;
- *h = rv.msr.reg.h;
+ *q = rv.msr.reg.q;
return err;
}
@@ -189,11 +188,9 @@ EXPORT_SYMBOL(wrmsrq_safe_on_cpu);
int rdmsrq_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 *q)
{
- u32 low, high;
int err;
- err = rdmsr_safe_on_cpu(cpu, msr_no, &low, &high);
- *q = (u64)high << 32 | low;
+ err = rdmsr_safe_on_cpu(cpu, msr_no, q);
return err;
}
diff --git a/drivers/hwmon/coretemp.c b/drivers/hwmon/coretemp.c
index fa02960ffff5..506e79eb4d76 100644
--- a/drivers/hwmon/coretemp.c
+++ b/drivers/hwmon/coretemp.c
@@ -169,7 +169,7 @@ static int adjust_tjmax(struct cpuinfo_x86 *c, u32 id, struct device *dev)
int tjmax_ee = 85000;
int usemsr_ee = 1;
int err;
- u32 eax, edx;
+ u64 val;
int i;
u16 devfn = PCI_DEVFN(0, 0);
struct pci_dev *host_bridge = pci_get_domain_bus_and_slot(0, 0, devfn);
@@ -220,14 +220,14 @@ static int adjust_tjmax(struct cpuinfo_x86 *c, u32 id, struct device *dev)
* http://softwarecommunity.intel.com/Wiki/Mobility/720.htm
* For Core2 cores, check MSR 0x17, bit 28 1 = Mobile CPU
*/
- err = rdmsr_safe_on_cpu(id, 0x17, &eax, &edx);
+ err = rdmsr_safe_on_cpu(id, 0x17, &val);
if (err) {
dev_warn(dev,
"Unable to access MSR 0x17, assuming desktop"
" CPU\n");
usemsr_ee = 0;
} else if (c->x86_vfm < INTEL_CORE2_PENRYN &&
- !(eax & 0x10000000)) {
+ !(val & 0x10000000)) {
/*
* Trust bit 28 up to Penryn, I could not find any
* documentation on that; if you happen to know
@@ -235,8 +235,8 @@ static int adjust_tjmax(struct cpuinfo_x86 *c, u32 id, struct device *dev)
*/
usemsr_ee = 0;
} else {
- /* Platform ID bits 52:50 (EDX starts at bit 32) */
- platform_id = (edx >> 18) & 0x7;
+ /* Platform ID bits 52:50 */
+ platform_id = (val >> 50) & 0x7;
/*
* Mobile Penryn CPU seems to be platform ID 7 or 5
@@ -255,12 +255,12 @@ static int adjust_tjmax(struct cpuinfo_x86 *c, u32 id, struct device *dev)
}
if (usemsr_ee) {
- err = rdmsr_safe_on_cpu(id, 0xee, &eax, &edx);
+ err = rdmsr_safe_on_cpu(id, 0xee, &val);
if (err) {
dev_warn(dev,
"Unable to access MSR 0xEE, for Tjmax, left"
" at default\n");
- } else if (eax & 0x40000000) {
+ } else if (val & 0x40000000) {
tjmax = tjmax_ee;
}
} else if (tjmax == 100000) {
@@ -278,7 +278,7 @@ static int get_tjmax(struct temp_data *tdata, struct device *dev)
{
struct cpuinfo_x86 *c = &cpu_data(tdata->cpu);
int err;
- u32 eax, edx;
+ u64 msrval;
u32 val;
/* use static tjmax once it is set */
@@ -289,11 +289,11 @@ static int get_tjmax(struct temp_data *tdata, struct device *dev)
* A new feature of current Intel(R) processors, the
* IA32_TEMPERATURE_TARGET contains the TjMax value
*/
- err = rdmsr_safe_on_cpu(tdata->cpu, MSR_IA32_TEMPERATURE_TARGET, &eax, &edx);
+ err = rdmsr_safe_on_cpu(tdata->cpu, MSR_IA32_TEMPERATURE_TARGET, &msrval);
if (err) {
dev_warn_once(dev, "Unable to read TjMax from CPU %u\n", tdata->cpu);
} else {
- val = (eax >> 16) & 0xff;
+ val = (msrval >> 16) & 0xff;
if (val)
return val * 1000;
}
@@ -314,7 +314,7 @@ static int get_tjmax(struct temp_data *tdata, struct device *dev)
static int get_ttarget(struct temp_data *tdata, struct device *dev)
{
- u32 eax, edx;
+ u64 val;
int tjmax, ttarget_offset, ret;
/*
@@ -324,14 +324,14 @@ static int get_ttarget(struct temp_data *tdata, struct device *dev)
if (tdata->tjmax)
return -ENODEV;
- ret = rdmsr_safe_on_cpu(tdata->cpu, MSR_IA32_TEMPERATURE_TARGET, &eax, &edx);
+ ret = rdmsr_safe_on_cpu(tdata->cpu, MSR_IA32_TEMPERATURE_TARGET, &val);
if (ret)
return ret;
- tjmax = (eax >> 16) & 0xff;
+ tjmax = (val >> 16) & 0xff;
/* Read the still undocumented bits 8:15 of IA32_TEMPERATURE_TARGET. */
- ttarget_offset = (eax >> 8) & 0xff;
+ ttarget_offset = (val >> 8) & 0xff;
return (tjmax - ttarget_offset) * 1000;
}
@@ -560,7 +560,7 @@ static int create_core_data(struct platform_device *pdev, unsigned int cpu,
struct temp_data *tdata;
struct platform_data *pdata = platform_get_drvdata(pdev);
struct cpuinfo_x86 *c = &cpu_data(cpu);
- u32 eax, edx;
+ u64 val;
int err;
if (!housekeeping_cpu(cpu, HK_TYPE_MISC))
@@ -571,7 +571,7 @@ static int create_core_data(struct platform_device *pdev, unsigned int cpu,
return -ENOMEM;
/* Test if we can access the status register */
- err = rdmsr_safe_on_cpu(cpu, tdata->status_reg, &eax, &edx);
+ err = rdmsr_safe_on_cpu(cpu, tdata->status_reg, &val);
if (err)
goto err;
diff --git a/drivers/hwmon/via-cputemp.c b/drivers/hwmon/via-cputemp.c
index a5c03ed59c1f..e239e0a388f7 100644
--- a/drivers/hwmon/via-cputemp.c
+++ b/drivers/hwmon/via-cputemp.c
@@ -65,28 +65,28 @@ static ssize_t temp_show(struct device *dev, struct device_attribute *devattr,
char *buf)
{
struct via_cputemp_data *data = dev_get_drvdata(dev);
- u32 eax, edx;
+ u64 val;
int err;
- err = rdmsr_safe_on_cpu(data->id, data->msr_temp, &eax, &edx);
+ err = rdmsr_safe_on_cpu(data->id, data->msr_temp, &val);
if (err)
return -EAGAIN;
- return sprintf(buf, "%lu\n", ((unsigned long)eax & 0xffffff) * 1000);
+ return sprintf(buf, "%lu\n", ((unsigned long)val & 0xffffff) * 1000);
}
static ssize_t cpu0_vid_show(struct device *dev,
struct device_attribute *devattr, char *buf)
{
struct via_cputemp_data *data = dev_get_drvdata(dev);
- u32 eax, edx;
+ u64 val;
int err;
- err = rdmsr_safe_on_cpu(data->id, data->msr_vid, &eax, &edx);
+ err = rdmsr_safe_on_cpu(data->id, data->msr_vid, &val);
if (err)
return -EAGAIN;
- return sprintf(buf, "%d\n", vid_from_reg(~edx & 0x7f, data->vrm));
+ return sprintf(buf, "%d\n", vid_from_reg(~(val >> 32) & 0x7f, data->vrm));
}
static SENSOR_DEVICE_ATTR_RO(temp1_input, temp, SHOW_TEMP);
@@ -112,7 +112,7 @@ static int via_cputemp_probe(struct platform_device *pdev)
struct via_cputemp_data *data;
struct cpuinfo_x86 *c = &cpu_data(pdev->id);
int err;
- u32 eax, edx;
+ u64 val;
data = devm_kzalloc(&pdev->dev, sizeof(struct via_cputemp_data),
GFP_KERNEL);
@@ -143,7 +143,7 @@ static int via_cputemp_probe(struct platform_device *pdev)
}
/* test if we can access the TEMPERATURE MSR */
- err = rdmsr_safe_on_cpu(data->id, data->msr_temp, &eax, &edx);
+ err = rdmsr_safe_on_cpu(data->id, data->msr_temp, &val);
if (err) {
dev_err(&pdev->dev,
"Unable to access TEMPERATURE MSR, giving up\n");
diff --git a/drivers/thermal/intel/intel_tcc.c b/drivers/thermal/intel/intel_tcc.c
index ab61fb122937..9a8f2f101efc 100644
--- a/drivers/thermal/intel/intel_tcc.c
+++ b/drivers/thermal/intel/intel_tcc.c
@@ -181,17 +181,17 @@ static u32 get_temp_mask(bool pkg)
*/
int intel_tcc_get_tjmax(int cpu)
{
- u32 low, high;
+ struct msr msrval;
int val, err;
if (cpu < 0)
- err = rdmsr_safe(MSR_IA32_TEMPERATURE_TARGET, &low, &high);
+ err = rdmsr_safe(MSR_IA32_TEMPERATURE_TARGET, &msrval.l, &msrval.h);
else
- err = rdmsr_safe_on_cpu(cpu, MSR_IA32_TEMPERATURE_TARGET, &low, &high);
+ err = rdmsr_safe_on_cpu(cpu, MSR_IA32_TEMPERATURE_TARGET, &msrval.q);
if (err)
return err;
- val = (low >> 16) & 0xff;
+ val = (msrval.l >> 16) & 0xff;
return val ? val : -ENODATA;
}
@@ -208,17 +208,17 @@ EXPORT_SYMBOL_NS_GPL(intel_tcc_get_tjmax, "INTEL_TCC");
*/
int intel_tcc_get_offset(int cpu)
{
- u32 low, high;
+ struct msr val;
int err;
if (cpu < 0)
- err = rdmsr_safe(MSR_IA32_TEMPERATURE_TARGET, &low, &high);
+ err = rdmsr_safe(MSR_IA32_TEMPERATURE_TARGET, &val.l, &val.h);
else
- err = rdmsr_safe_on_cpu(cpu, MSR_IA32_TEMPERATURE_TARGET, &low, &high);
+ err = rdmsr_safe_on_cpu(cpu, MSR_IA32_TEMPERATURE_TARGET, &val.q);
if (err)
return err;
- return (low >> 24) & intel_tcc_temp_masks.tcc_offset;
+ return (val.l >> 24) & intel_tcc_temp_masks.tcc_offset;
}
EXPORT_SYMBOL_NS_GPL(intel_tcc_get_offset, "INTEL_TCC");
@@ -235,7 +235,7 @@ EXPORT_SYMBOL_NS_GPL(intel_tcc_get_offset, "INTEL_TCC");
int intel_tcc_set_offset(int cpu, int offset)
{
- u32 low, high;
+ struct msr val;
int err;
if (!intel_tcc_temp_masks.tcc_offset)
@@ -245,23 +245,23 @@ int intel_tcc_set_offset(int cpu, int offset)
return -EINVAL;
if (cpu < 0)
- err = rdmsr_safe(MSR_IA32_TEMPERATURE_TARGET, &low, &high);
+ err = rdmsr_safe(MSR_IA32_TEMPERATURE_TARGET, &val.l, &val.h);
else
- err = rdmsr_safe_on_cpu(cpu, MSR_IA32_TEMPERATURE_TARGET, &low, &high);
+ err = rdmsr_safe_on_cpu(cpu, MSR_IA32_TEMPERATURE_TARGET, &val.q);
if (err)
return err;
/* MSR Locked */
- if (low & BIT(31))
+ if (val.l & BIT(31))
return -EPERM;
- low &= ~(intel_tcc_temp_masks.tcc_offset << 24);
- low |= offset << 24;
+ val.l &= ~(intel_tcc_temp_masks.tcc_offset << 24);
+ val.l |= offset << 24;
if (cpu < 0)
- return wrmsr_safe(MSR_IA32_TEMPERATURE_TARGET, low, high);
+ return wrmsr_safe(MSR_IA32_TEMPERATURE_TARGET, val.l, val.h);
else
- return wrmsr_safe_on_cpu(cpu, MSR_IA32_TEMPERATURE_TARGET, low, high);
+ return wrmsr_safe_on_cpu(cpu, MSR_IA32_TEMPERATURE_TARGET, val.l, val.h);
}
EXPORT_SYMBOL_NS_GPL(intel_tcc_set_offset, "INTEL_TCC");
@@ -279,7 +279,8 @@ EXPORT_SYMBOL_NS_GPL(intel_tcc_set_offset, "INTEL_TCC");
int intel_tcc_get_temp(int cpu, int *temp, bool pkg)
{
u32 msr = pkg ? MSR_IA32_PACKAGE_THERM_STATUS : MSR_IA32_THERM_STATUS;
- u32 low, high, mask;
+ u32 mask;
+ struct msr val;
int tjmax, err;
tjmax = intel_tcc_get_tjmax(cpu);
@@ -287,19 +288,19 @@ int intel_tcc_get_temp(int cpu, int *temp, bool pkg)
return tjmax;
if (cpu < 0)
- err = rdmsr_safe(msr, &low, &high);
+ err = rdmsr_safe(msr, &val.l, &val.h);
else
- err = rdmsr_safe_on_cpu(cpu, msr, &low, &high);
+ err = rdmsr_safe_on_cpu(cpu, msr, &val.q);
if (err)
return err;
/* Temperature is beyond the valid thermal sensor range */
- if (!(low & BIT(31)))
+ if (!(val.l & BIT(31)))
return -ENODATA;
mask = get_temp_mask(pkg);
- *temp = tjmax - ((low >> 16) & mask);
+ *temp = tjmax - ((val.l >> 16) & mask);
return 0;
}
--
2.53.0

* [PATCH RFC 06/11] x86/msr: Switch all callers of rdmsrq_safe_on_cpu() to use rdmsr_safe_on_cpu()
2026-04-28 10:41 [PATCH RFC 00/11] x86/msr: Reduce MSR access interfaces Juergen Gross
` (4 preceding siblings ...)
2026-04-28 10:41 ` [PATCH RFC 05/11] x86/msr: Switch rdmsr_safe_on_cpu() to return a 64-bit quantity Juergen Gross
@ 2026-04-28 10:42 ` Juergen Gross
2026-04-28 10:42 ` [PATCH RFC 07/11] x86/msr: Switch wrmsr_safe_on_cpu() to use a 64-bit quantity Juergen Gross
2026-04-28 10:42 ` [PATCH RFC 08/11] x86/msr: Switch all callers of wrmsrq_safe_on_cpu() to use wrmsr_safe_on_cpu() Juergen Gross
7 siblings, 0 replies; 9+ messages in thread
From: Juergen Gross @ 2026-04-28 10:42 UTC (permalink / raw)
To: linux-kernel, x86, linux-perf-users, linux-acpi, linux-pm,
platform-driver-x86
Cc: Juergen Gross, Peter Zijlstra, Ingo Molnar,
Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
James Clark, Thomas Gleixner, Borislav Petkov, Dave Hansen,
H. Peter Anvin, Rafael J. Wysocki, Len Brown, Huang Rui,
Mario Limonciello, Perry Yuan, K Prateek Nayak, Viresh Kumar,
Srinivas Pandruvada, Hans de Goede, Ilpo Järvinen
Now that rdmsr_safe_on_cpu() has the same interface as
rdmsrq_safe_on_cpu(), the callers of rdmsrq_safe_on_cpu() can be
switched to rdmsr_safe_on_cpu() and rdmsrq_safe_on_cpu() can be
removed.
Signed-off-by: Juergen Gross <jgross@suse.com>
---
arch/x86/events/intel/pt.c | 2 +-
arch/x86/events/intel/uncore_discovery.c | 2 +-
arch/x86/include/asm/msr.h | 5 -----
arch/x86/kernel/acpi/cppc.c | 6 +++---
arch/x86/lib/msr-smp.c | 10 ----------
drivers/cpufreq/amd-pstate-ut.c | 2 +-
drivers/cpufreq/amd-pstate.c | 3 +--
drivers/cpufreq/intel_pstate.c | 6 +++---
.../x86/intel/speed_select_if/isst_if_common.c | 4 ++--
drivers/powercap/intel_rapl_msr.c | 2 +-
10 files changed, 13 insertions(+), 29 deletions(-)
diff --git a/arch/x86/events/intel/pt.c b/arch/x86/events/intel/pt.c
index b5726b50e77d..7c92146b06ea 100644
--- a/arch/x86/events/intel/pt.c
+++ b/arch/x86/events/intel/pt.c
@@ -1840,7 +1840,7 @@ static __init int pt_init(void)
for_each_online_cpu(cpu) {
u64 ctl;
- ret = rdmsrq_safe_on_cpu(cpu, MSR_IA32_RTIT_CTL, &ctl);
+ ret = rdmsr_safe_on_cpu(cpu, MSR_IA32_RTIT_CTL, &ctl);
if (!ret && (ctl & RTIT_CTL_TRACEEN))
prior_warn++;
}
diff --git a/arch/x86/events/intel/uncore_discovery.c b/arch/x86/events/intel/uncore_discovery.c
index 583cbd06b9b8..0853a9e02fda 100644
--- a/arch/x86/events/intel/uncore_discovery.c
+++ b/arch/x86/events/intel/uncore_discovery.c
@@ -405,7 +405,7 @@ static bool uncore_discovery_msr(struct uncore_discovery_domain *domain)
if (__test_and_set_bit(die, die_mask))
continue;
- if (rdmsrq_safe_on_cpu(cpu, domain->discovery_base, &base))
+ if (rdmsr_safe_on_cpu(cpu, domain->discovery_base, &base))
continue;
if (!base)
diff --git a/arch/x86/include/asm/msr.h b/arch/x86/include/asm/msr.h
index b3b43bc04b69..f2d14c670140 100644
--- a/arch/x86/include/asm/msr.h
+++ b/arch/x86/include/asm/msr.h
@@ -262,7 +262,6 @@ void rdmsr_on_cpus(const struct cpumask *mask, u32 msr_no, struct msr __percpu *
void wrmsr_on_cpus(const struct cpumask *mask, u32 msr_no, struct msr __percpu *msrs);
int rdmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 *q);
int wrmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h);
-int rdmsrq_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 *q);
int wrmsrq_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 q);
int rdmsr_safe_regs_on_cpu(unsigned int cpu, u32 regs[8]);
int wrmsr_safe_regs_on_cpu(unsigned int cpu, u32 regs[8]);
@@ -295,10 +294,6 @@ static inline int wrmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h)
{
return wrmsr_safe(msr_no, l, h);
}
-static inline int rdmsrq_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 *q)
-{
- return rdmsrq_safe(msr_no, q);
-}
static inline int wrmsrq_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 q)
{
return wrmsrq_safe(msr_no, q);
diff --git a/arch/x86/kernel/acpi/cppc.c b/arch/x86/kernel/acpi/cppc.c
index d7c8ef1e354d..576319dcbbbf 100644
--- a/arch/x86/kernel/acpi/cppc.c
+++ b/arch/x86/kernel/acpi/cppc.c
@@ -49,7 +49,7 @@ int cpc_read_ffh(int cpunum, struct cpc_reg *reg, u64 *val)
{
int err;
- err = rdmsrq_safe_on_cpu(cpunum, reg->address, val);
+ err = rdmsr_safe_on_cpu(cpunum, reg->address, val);
if (!err) {
u64 mask = GENMASK_ULL(reg->bit_offset + reg->bit_width - 1,
reg->bit_offset);
@@ -65,7 +65,7 @@ int cpc_write_ffh(int cpunum, struct cpc_reg *reg, u64 val)
u64 rd_val;
int err;
- err = rdmsrq_safe_on_cpu(cpunum, reg->address, &rd_val);
+ err = rdmsr_safe_on_cpu(cpunum, reg->address, &rd_val);
if (!err) {
u64 mask = GENMASK_ULL(reg->bit_offset + reg->bit_width - 1,
reg->bit_offset);
@@ -147,7 +147,7 @@ int amd_get_highest_perf(unsigned int cpu, u32 *highest_perf)
int ret;
if (cpu_feature_enabled(X86_FEATURE_CPPC)) {
- ret = rdmsrq_safe_on_cpu(cpu, MSR_AMD_CPPC_CAP1, &val);
+ ret = rdmsr_safe_on_cpu(cpu, MSR_AMD_CPPC_CAP1, &val);
if (ret)
goto out;
diff --git a/arch/x86/lib/msr-smp.c b/arch/x86/lib/msr-smp.c
index 0dc3921e0259..fa22ac662c1d 100644
--- a/arch/x86/lib/msr-smp.c
+++ b/arch/x86/lib/msr-smp.c
@@ -186,16 +186,6 @@ int wrmsrq_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 q)
}
EXPORT_SYMBOL(wrmsrq_safe_on_cpu);
-int rdmsrq_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 *q)
-{
- int err;
-
- err = rdmsr_safe_on_cpu(cpu, msr_no, q);
-
- return err;
-}
-EXPORT_SYMBOL(rdmsrq_safe_on_cpu);
-
/*
* These variants are significantly slower, but allows control over
* the entire 32-bit GPR set.
diff --git a/drivers/cpufreq/amd-pstate-ut.c b/drivers/cpufreq/amd-pstate-ut.c
index aa8a464fab47..8700c076b762 100644
--- a/drivers/cpufreq/amd-pstate-ut.c
+++ b/drivers/cpufreq/amd-pstate-ut.c
@@ -170,7 +170,7 @@ static int amd_pstate_ut_check_perf(u32 index)
lowest_nonlinear_perf = cppc_perf.lowest_nonlinear_perf;
lowest_perf = cppc_perf.lowest_perf;
} else {
- ret = rdmsrq_safe_on_cpu(cpu, MSR_AMD_CPPC_CAP1, &cap1);
+ ret = rdmsr_safe_on_cpu(cpu, MSR_AMD_CPPC_CAP1, &cap1);
if (ret) {
pr_err("%s read CPPC_CAP1 ret=%d error!\n", __func__, ret);
return ret;
diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
index 543b34006918..d1eee3cd8f9b 100644
--- a/drivers/cpufreq/amd-pstate.c
+++ b/drivers/cpufreq/amd-pstate.c
@@ -471,8 +471,7 @@ static int msr_init_perf(struct amd_cpudata *cpudata)
u64 cap1, numerator, cppc_req;
u8 min_perf;
- int ret = rdmsrq_safe_on_cpu(cpudata->cpu, MSR_AMD_CPPC_CAP1,
- &cap1);
+ int ret = rdmsr_safe_on_cpu(cpudata->cpu, MSR_AMD_CPPC_CAP1, &cap1);
if (ret)
return ret;
diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
index 08214a0561e7..da196539affe 100644
--- a/drivers/cpufreq/intel_pstate.c
+++ b/drivers/cpufreq/intel_pstate.c
@@ -2178,13 +2178,13 @@ static int core_get_tdp_ratio(int cpu, u64 plat_info)
int err;
/* Get the TDP level (0, 1, 2) to get ratios */
- err = rdmsrq_safe_on_cpu(cpu, MSR_CONFIG_TDP_CONTROL, &tdp_ctrl);
+ err = rdmsr_safe_on_cpu(cpu, MSR_CONFIG_TDP_CONTROL, &tdp_ctrl);
if (err)
return err;
/* TDP MSR are continuous starting at 0x648 */
tdp_msr = MSR_CONFIG_TDP_NOMINAL + (tdp_ctrl & 0x03);
- err = rdmsrq_safe_on_cpu(cpu, tdp_msr, &tdp_ratio);
+ err = rdmsr_safe_on_cpu(cpu, tdp_msr, &tdp_ratio);
if (err)
return err;
@@ -2221,7 +2221,7 @@ static int core_get_max_pstate(int cpu)
return tdp_ratio;
}
- err = rdmsrq_safe_on_cpu(cpu, MSR_TURBO_ACTIVATION_RATIO, &tar);
+ err = rdmsr_safe_on_cpu(cpu, MSR_TURBO_ACTIVATION_RATIO, &tar);
if (!err) {
int tar_levels;
diff --git a/drivers/platform/x86/intel/speed_select_if/isst_if_common.c b/drivers/platform/x86/intel/speed_select_if/isst_if_common.c
index 1c48bf6d5457..b15a798454dc 100644
--- a/drivers/platform/x86/intel/speed_select_if/isst_if_common.c
+++ b/drivers/platform/x86/intel/speed_select_if/isst_if_common.c
@@ -511,8 +511,8 @@ static long isst_if_msr_cmd_req(u8 *cmd_ptr, int *write_only, int resume)
} else {
u64 data;
- ret = rdmsrq_safe_on_cpu(msr_cmd->logical_cpu,
- msr_cmd->msr, &data);
+ ret = rdmsr_safe_on_cpu(msr_cmd->logical_cpu,
+ msr_cmd->msr, &data);
if (!ret) {
msr_cmd->data = data;
*write_only = 0;
diff --git a/drivers/powercap/intel_rapl_msr.c b/drivers/powercap/intel_rapl_msr.c
index a34543e66446..a6bdbe44c8dd 100644
--- a/drivers/powercap/intel_rapl_msr.c
+++ b/drivers/powercap/intel_rapl_msr.c
@@ -180,7 +180,7 @@ static int rapl_msr_read_raw(int cpu, struct reg_action *ra, bool pmu_ctx)
goto out;
}
- if (rdmsrq_safe_on_cpu(cpu, ra->reg.msr, &ra->value)) {
+ if (rdmsr_safe_on_cpu(cpu, ra->reg.msr, &ra->value)) {
pr_debug("failed to read msr 0x%x on cpu %d\n", ra->reg.msr, cpu);
return -EIO;
}
--
2.53.0
* [PATCH RFC 07/11] x86/msr: Switch wrmsr_safe_on_cpu() to use a 64-bit quantity
2026-04-28 10:41 [PATCH RFC 00/11] x86/msr: Reduce MSR access interfaces Juergen Gross
` (5 preceding siblings ...)
2026-04-28 10:42 ` [PATCH RFC 06/11] x86/msr: Switch all callers of rdmsrq_safe_on_cpu() to use rdmsr_safe_on_cpu() Juergen Gross
@ 2026-04-28 10:42 ` Juergen Gross
2026-04-28 10:42 ` [PATCH RFC 08/11] x86/msr: Switch all callers of wrmsrq_safe_on_cpu() to use wrmsr_safe_on_cpu() Juergen Gross
7 siblings, 0 replies; 9+ messages in thread
From: Juergen Gross @ 2026-04-28 10:42 UTC (permalink / raw)
To: linux-kernel, x86, linux-pm
Cc: Juergen Gross, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, H. Peter Anvin, Rafael J. Wysocki, Daniel Lezcano,
Zhang Rui, Lukasz Luba
In order to prepare for retiring wrmsrq_safe_on_cpu(), switch
wrmsr_safe_on_cpu() to have the same interface as wrmsrq_safe_on_cpu().
Switch all wrmsr_safe_on_cpu() callers to use the new interface.
Signed-off-by: Juergen Gross <jgross@suse.com>
---
arch/x86/include/asm/msr.h | 6 +++---
arch/x86/kernel/msr.c | 4 ++--
arch/x86/lib/msr-smp.c | 5 ++---
drivers/thermal/intel/intel_tcc.c | 2 +-
4 files changed, 8 insertions(+), 9 deletions(-)
diff --git a/arch/x86/include/asm/msr.h b/arch/x86/include/asm/msr.h
index f2d14c670140..cb14ede8f587 100644
--- a/arch/x86/include/asm/msr.h
+++ b/arch/x86/include/asm/msr.h
@@ -261,7 +261,7 @@ int wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u64 q);
void rdmsr_on_cpus(const struct cpumask *mask, u32 msr_no, struct msr __percpu *msrs);
void wrmsr_on_cpus(const struct cpumask *mask, u32 msr_no, struct msr __percpu *msrs);
int rdmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 *q);
-int wrmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h);
+int wrmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 q);
int wrmsrq_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 q);
int rdmsr_safe_regs_on_cpu(unsigned int cpu, u32 regs[8]);
int wrmsr_safe_regs_on_cpu(unsigned int cpu, u32 regs[8]);
@@ -290,9 +290,9 @@ static inline int rdmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 *q)
{
return rdmsrq_safe(msr_no, q);
}
-static inline int wrmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h)
+static inline int wrmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 q)
{
- return wrmsr_safe(msr_no, l, h);
+ return wrmsrq_safe(msr_no, q);
}
static inline int wrmsrq_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 q)
{
diff --git a/arch/x86/kernel/msr.c b/arch/x86/kernel/msr.c
index c9429a718810..db4b5c07ba22 100644
--- a/arch/x86/kernel/msr.c
+++ b/arch/x86/kernel/msr.c
@@ -109,7 +109,7 @@ static ssize_t msr_write(struct file *file, const char __user *buf,
size_t count, loff_t *ppos)
{
const u32 __user *tmp = (const u32 __user *)buf;
- u32 data[2];
+ u64 data;
u32 reg = *ppos;
int cpu = iminor(file_inode(file));
int err = 0;
@@ -134,7 +134,7 @@ static ssize_t msr_write(struct file *file, const char __user *buf,
add_taint(TAINT_CPU_OUT_OF_SPEC, LOCKDEP_STILL_OK);
- err = wrmsr_safe_on_cpu(cpu, reg, data[0], data[1]);
+ err = wrmsr_safe_on_cpu(cpu, reg, data);
if (err)
break;
diff --git a/arch/x86/lib/msr-smp.c b/arch/x86/lib/msr-smp.c
index fa22ac662c1d..b2859435f4af 100644
--- a/arch/x86/lib/msr-smp.c
+++ b/arch/x86/lib/msr-smp.c
@@ -154,7 +154,7 @@ int rdmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 *q)
}
EXPORT_SYMBOL(rdmsr_safe_on_cpu);
-int wrmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h)
+int wrmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 q)
{
int err;
struct msr_info rv;
@@ -162,8 +162,7 @@ int wrmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h)
memset(&rv, 0, sizeof(rv));
rv.msr_no = msr_no;
- rv.reg.l = l;
- rv.reg.h = h;
+ rv.reg.q = q;
err = smp_call_function_single(cpu, __wrmsr_safe_on_cpu, &rv, 1);
return err ? err : rv.err;
diff --git a/drivers/thermal/intel/intel_tcc.c b/drivers/thermal/intel/intel_tcc.c
index 9a8f2f101efc..8c80f9bfbea4 100644
--- a/drivers/thermal/intel/intel_tcc.c
+++ b/drivers/thermal/intel/intel_tcc.c
@@ -261,7 +261,7 @@ int intel_tcc_set_offset(int cpu, int offset)
if (cpu < 0)
return wrmsr_safe(MSR_IA32_TEMPERATURE_TARGET, val.l, val.h);
else
- return wrmsr_safe_on_cpu(cpu, MSR_IA32_TEMPERATURE_TARGET, val.l, val.h);
+ return wrmsr_safe_on_cpu(cpu, MSR_IA32_TEMPERATURE_TARGET, val.q);
}
EXPORT_SYMBOL_NS_GPL(intel_tcc_set_offset, "INTEL_TCC");
--
2.53.0
* [PATCH RFC 08/11] x86/msr: Switch all callers of wrmsrq_safe_on_cpu() to use wrmsr_safe_on_cpu()
2026-04-28 10:41 [PATCH RFC 00/11] x86/msr: Reduce MSR access interfaces Juergen Gross
` (6 preceding siblings ...)
2026-04-28 10:42 ` [PATCH RFC 07/11] x86/msr: Switch wrmsr_safe_on_cpu() to use a 64-bit quantity Juergen Gross
@ 2026-04-28 10:42 ` Juergen Gross
7 siblings, 0 replies; 9+ messages in thread
From: Juergen Gross @ 2026-04-28 10:42 UTC (permalink / raw)
To: linux-kernel, x86, linux-acpi, linux-pm, platform-driver-x86
Cc: Juergen Gross, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, H. Peter Anvin, Rafael J. Wysocki, Len Brown,
Huang Rui, Mario Limonciello, Perry Yuan, K Prateek Nayak,
Viresh Kumar, Srinivas Pandruvada, Hans de Goede,
Ilpo Järvinen
Now that wrmsr_safe_on_cpu() has the same interface as
wrmsrq_safe_on_cpu(), the callers of wrmsrq_safe_on_cpu() can be
switched to wrmsr_safe_on_cpu() and wrmsrq_safe_on_cpu() can be
removed.
Signed-off-by: Juergen Gross <jgross@suse.com>
---
arch/x86/include/asm/msr.h | 5 -----
arch/x86/kernel/acpi/cppc.c | 2 +-
arch/x86/lib/msr-smp.c | 16 ----------------
drivers/cpufreq/amd-pstate.c | 2 +-
.../x86/intel/speed_select_if/isst_if_common.c | 9 ++++-----
5 files changed, 6 insertions(+), 28 deletions(-)
diff --git a/arch/x86/include/asm/msr.h b/arch/x86/include/asm/msr.h
index cb14ede8f587..a5596d268053 100644
--- a/arch/x86/include/asm/msr.h
+++ b/arch/x86/include/asm/msr.h
@@ -262,7 +262,6 @@ void rdmsr_on_cpus(const struct cpumask *mask, u32 msr_no, struct msr __percpu *
void wrmsr_on_cpus(const struct cpumask *mask, u32 msr_no, struct msr __percpu *msrs);
int rdmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 *q);
int wrmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 q);
-int wrmsrq_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 q);
int rdmsr_safe_regs_on_cpu(unsigned int cpu, u32 regs[8]);
int wrmsr_safe_regs_on_cpu(unsigned int cpu, u32 regs[8]);
#else /* CONFIG_SMP */
@@ -294,10 +293,6 @@ static inline int wrmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 q)
{
return wrmsrq_safe(msr_no, q);
}
-static inline int wrmsrq_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 q)
-{
- return wrmsrq_safe(msr_no, q);
-}
static inline int rdmsr_safe_regs_on_cpu(unsigned int cpu, u32 regs[8])
{
return rdmsr_safe_regs(regs);
diff --git a/arch/x86/kernel/acpi/cppc.c b/arch/x86/kernel/acpi/cppc.c
index 576319dcbbbf..9f75762622e7 100644
--- a/arch/x86/kernel/acpi/cppc.c
+++ b/arch/x86/kernel/acpi/cppc.c
@@ -74,7 +74,7 @@ int cpc_write_ffh(int cpunum, struct cpc_reg *reg, u64 val)
val &= mask;
rd_val &= ~mask;
rd_val |= val;
- err = wrmsrq_safe_on_cpu(cpunum, reg->address, rd_val);
+ err = wrmsr_safe_on_cpu(cpunum, reg->address, rd_val);
}
return err;
}
diff --git a/arch/x86/lib/msr-smp.c b/arch/x86/lib/msr-smp.c
index b2859435f4af..9ae9ff11f1f1 100644
--- a/arch/x86/lib/msr-smp.c
+++ b/arch/x86/lib/msr-smp.c
@@ -169,22 +169,6 @@ int wrmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 q)
}
EXPORT_SYMBOL(wrmsr_safe_on_cpu);
-int wrmsrq_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 q)
-{
- int err;
- struct msr_info rv;
-
- memset(&rv, 0, sizeof(rv));
-
- rv.msr_no = msr_no;
- rv.reg.q = q;
-
- err = smp_call_function_single(cpu, __wrmsr_safe_on_cpu, &rv, 1);
-
- return err ? err : rv.err;
-}
-EXPORT_SYMBOL(wrmsrq_safe_on_cpu);
-
/*
* These variants are significantly slower, but allows control over
* the entire 32-bit GPR set.
diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
index d1eee3cd8f9b..8f3d776836c3 100644
--- a/drivers/cpufreq/amd-pstate.c
+++ b/drivers/cpufreq/amd-pstate.c
@@ -450,7 +450,7 @@ static int shmem_set_epp(struct cpufreq_policy *policy, u8 epp)
static inline int msr_cppc_enable(struct cpufreq_policy *policy)
{
- return wrmsrq_safe_on_cpu(policy->cpu, MSR_AMD_CPPC_ENABLE, 1);
+ return wrmsr_safe_on_cpu(policy->cpu, MSR_AMD_CPPC_ENABLE, 1);
}
static int shmem_cppc_enable(struct cpufreq_policy *policy)
diff --git a/drivers/platform/x86/intel/speed_select_if/isst_if_common.c b/drivers/platform/x86/intel/speed_select_if/isst_if_common.c
index b15a798454dc..9d730e6f155d 100644
--- a/drivers/platform/x86/intel/speed_select_if/isst_if_common.c
+++ b/drivers/platform/x86/intel/speed_select_if/isst_if_common.c
@@ -192,8 +192,8 @@ void isst_resume_common(void)
if (cb->registered)
isst_mbox_resume_command(cb, sst_cmd);
} else {
- wrmsrq_safe_on_cpu(sst_cmd->cpu, sst_cmd->cmd,
- sst_cmd->data);
+ wrmsr_safe_on_cpu(sst_cmd->cpu, sst_cmd->cmd,
+ sst_cmd->data);
}
}
}
@@ -500,9 +500,8 @@ static long isst_if_msr_cmd_req(u8 *cmd_ptr, int *write_only, int resume)
if (!capable(CAP_SYS_ADMIN))
return -EPERM;
- ret = wrmsrq_safe_on_cpu(msr_cmd->logical_cpu,
- msr_cmd->msr,
- msr_cmd->data);
+ ret = wrmsr_safe_on_cpu(msr_cmd->logical_cpu,
+ msr_cmd->msr, msr_cmd->data);
*write_only = 1;
if (!ret && !resume)
ret = isst_store_cmd(0, msr_cmd->msr,
--
2.53.0