Linux virtualization list
* [PATCH RFC 00/11] x86/msr: Reduce MSR access interfaces
@ 2026-04-28 10:41 Juergen Gross
  2026-04-28 10:42 ` [PATCH RFC 09/11] x86/msr: Add macros for preparing to switch rdmsr/wrmsr interfaces Juergen Gross
From: Juergen Gross @ 2026-04-28 10:41 UTC (permalink / raw)
  To: linux-kernel, x86, linux-edac, linux-pm, linux-hwmon,
	linux-perf-users, platform-driver-x86, linux-acpi, virtualization
  Cc: Juergen Gross, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, H. Peter Anvin, Tony Luck, Rafael J. Wysocki,
	Viresh Kumar, Guenter Roeck, Daniel Lezcano, Zhang Rui,
	Lukasz Luba, Peter Zijlstra, Arnaldo Carvalho de Melo,
	Namhyung Kim, Mark Rutland, Alexander Shishkin, Jiri Olsa,
	Ian Rogers, Adrian Hunter, James Clark, Huang Rui,
	Mario Limonciello, Perry Yuan, K Prateek Nayak,
	Srinivas Pandruvada, Len Brown, Hans de Goede, Ilpo Järvinen,
	Ajay Kaher, Alexey Makhalov, Broadcom internal kernel review list

After my first attempt to rework the MSR access functions [1], this
series is the result of incorporating the feedback I received.

I have kept the basic idea:

- Reduce the number of MSR access functions by keeping only the ones
  taking 64-bit values (instead of the dual 32-bit ones).

- Use inline functions instead of macros for rdmsr*(), removing the
  hard-to-read cases where macro parameters named the variables
  receiving the results.

One piece of feedback was NOT to rename the access functions, and the
new approach avoids doing so.

The first 8 patches are a complete set achieving, in particular, the
first point above for the *_on_cpu() functions.

Patch 9 prepares switching the CPU-local MSR access functions so that
in the end only rdmsr(), rdmsr_safe(), wrmsr() and wrmsr_safe() remain
(all taking 64-bit values and implemented as inline functions). For
this purpose the existing functions/macros are overloaded via macros
to accept both variants (64-bit and dual 32-bit values) while the
different subsystems are being switched to the new scheme. This avoids
having to either convert all users of the current functions in a
single patch (as done in the first 8 patches), or to use intermediate
function names which would need to be patched again at the end. The
resulting patches would be very hard to review due to their size.
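The overloading works by dispatching on the number of macro arguments.
A minimal standalone sketch (the helper names mirror the kernel's
CONCATENATE()/COUNT_ARGS() from <linux/args.h>, but the MSR access
here is faked for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* stand-ins for the kernel's CONCATENATE()/COUNT_ARGS() helpers */
#define __CONCAT2(a, b)		a##b
#define CONCATENATE(a, b)	__CONCAT2(a, b)
#define __COUNT_ARGS(_1, _2, _3, n, ...) n
#define COUNT_ARGS(...)		__COUNT_ARGS(__VA_ARGS__, 3, 2, 1, 0)

static uint64_t fake_msr;	/* fake register instead of a real MSR */

/* new 64-bit form: val = rdmsr(msr) */
static inline uint64_t __rdmsr_1(uint32_t msr)
{
	(void)msr;
	return fake_msr;
}

/* legacy dual-u32 form: rdmsr(msr, low, high) */
#define __rdmsr_3(msr, low, high)			\
do {							\
	uint64_t __val = __rdmsr_1(msr);		\
	(low) = (uint32_t)__val;			\
	(high) = (uint32_t)(__val >> 32);		\
} while (0)

/* one name, dispatched on the number of arguments */
#define rdmsr(...)	CONCATENATE(__rdmsr_, COUNT_ARGS(__VA_ARGS__))(__VA_ARGS__)
```

Both rdmsr(msr) and rdmsr(msr, low, high) then resolve to the matching
implementation, so callers can be converted one subsystem at a time.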

The last 2 patches are examples of how the per-subsystem switches
would look.

So far all of this is compile-tested only.

[1]: https://lore.kernel.org/lkml/20260420091634.128787-1-jgross@suse.com/

Juergen Gross (11):
  x86/msr: Switch rdmsr_on_cpu() to return a 64-bit quantity
  x86/msr: Switch all callers of rdmsrq_on_cpu() to use rdmsr_on_cpu()
  x86/msr: Switch wrmsr_on_cpu() to use a 64-bit quantity
  x86/msr: Switch all callers of wrmsrq_on_cpu() to use wrmsr_on_cpu()
  x86/msr: Switch rdmsr_safe_on_cpu() to return a 64-bit quantity
  x86/msr: Switch all callers of rdmsrq_safe_on_cpu() to use
    rdmsr_safe_on_cpu()
  x86/msr: Switch wrmsr_safe_on_cpu() to use a 64-bit quantity
  x86/msr: Switch all callers of wrmsrq_safe_on_cpu() to use
    wrmsr_safe_on_cpu()
  x86/msr: Add macros for preparing to switch rdmsr/wrmsr interfaces
  x86/events: Switch core parts to use 64-bit rdmsr/wrmsr() variants
  x86/cpu/mce: Switch code to use 64-bit rdmsr/wrmsr() variants

 arch/x86/events/core.c                        |  42 ++++----
 arch/x86/events/intel/ds.c                    |  11 +-
 arch/x86/events/intel/pt.c                    |   2 +-
 arch/x86/events/intel/uncore_discovery.c      |   2 +-
 arch/x86/events/intel/uncore_snbep.c          |   2 +-
 arch/x86/events/msr.c                         |   2 +-
 arch/x86/events/perf_event.h                  |  26 ++---
 arch/x86/events/probe.c                       |   2 +-
 arch/x86/events/rapl.c                        |   8 +-
 arch/x86/include/asm/msr.h                    |  90 +++++++++-------
 arch/x86/include/asm/paravirt.h               |   6 +-
 arch/x86/kernel/acpi/cppc.c                   |   8 +-
 arch/x86/kernel/cpu/intel_epb.c               |   8 +-
 arch/x86/kernel/cpu/mce/amd.c                 | 101 +++++++++---------
 arch/x86/kernel/cpu/mce/core.c                |  18 ++--
 arch/x86/kernel/cpu/mce/inject.c              |  40 +++----
 arch/x86/kernel/cpu/mce/intel.c               |  32 +++---
 arch/x86/kernel/cpu/mce/p5.c                  |  16 +--
 arch/x86/kernel/cpu/mce/winchip.c             |  10 +-
 arch/x86/kernel/cpu/microcode/intel.c         |   2 +-
 arch/x86/kernel/msr.c                         |   8 +-
 arch/x86/lib/msr-smp.c                        |  79 ++------------
 drivers/cpufreq/acpi-cpufreq.c                |   4 +-
 drivers/cpufreq/amd-pstate-ut.c               |   2 +-
 drivers/cpufreq/amd-pstate.c                  |  21 ++--
 drivers/cpufreq/amd_freq_sensitivity.c        |   4 +-
 drivers/cpufreq/intel_pstate.c                |  64 +++++------
 drivers/cpufreq/p4-clockmod.c                 |  32 +++---
 drivers/cpufreq/speedstep-centrino.c          |  27 ++---
 drivers/hwmon/coretemp.c                      |  44 ++++----
 drivers/hwmon/via-cputemp.c                   |  16 +--
 drivers/platform/x86/amd/hfi/hfi.c            |   4 +-
 .../intel/speed_select_if/isst_if_common.c    |  13 ++-
 .../intel/uncore-frequency/uncore-frequency.c |  12 +--
 drivers/powercap/intel_rapl_msr.c             |   2 +-
 drivers/thermal/intel/intel_tcc.c             |  43 ++++----
 drivers/thermal/intel/x86_pkg_temp_thermal.c  |  22 ++--
 37 files changed, 387 insertions(+), 438 deletions(-)

-- 
2.53.0



* [PATCH RFC 09/11] x86/msr: Add macros for preparing to switch rdmsr/wrmsr interfaces
  2026-04-28 10:41 [PATCH RFC 00/11] x86/msr: Reduce MSR access interfaces Juergen Gross
@ 2026-04-28 10:42 ` Juergen Gross
From: Juergen Gross @ 2026-04-28 10:42 UTC (permalink / raw)
  To: linux-kernel, x86, virtualization
  Cc: Juergen Gross, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, H. Peter Anvin, Ajay Kaher, Alexey Makhalov,
	Broadcom internal kernel review list

In order to prepare switching the rdmsr(), rdmsr_safe(), wrmsr() and
wrmsr_safe() interfaces to 64-bit values instead of 32-bit pairs, add
macros which call different implementations depending on the number of
parameters passed.

This makes it possible to keep the same function/macro names as today
while doing the interface switch per component instead of in one go.

At the same time switch the rdmsr-related interfaces to inline
functions, avoiding macros that modify variables passed as
parameters.

The helper macros will be removed when all users of the current
interfaces have been switched to the new ones.
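The safe variants dispatch the same way. A minimal sketch of the two
rdmsr_safe() forms coexisting under one name (simplified stand-ins
with a faked MSR read returning an error for unknown registers, not
the actual kernel code):

```c
#include <assert.h>
#include <stdint.h>

#define __CONCAT2(a, b)		a##b
#define CONCATENATE(a, b)	__CONCAT2(a, b)
#define __COUNT_ARGS(_1, _2, _3, n, ...) n
#define COUNT_ARGS(...)		__COUNT_ARGS(__VA_ARGS__, 3, 2, 1, 0)

/* simulated checked read: only "MSR" 0x10 exists in this sketch */
static inline int read_msr_checked(uint32_t msr, uint64_t *val)
{
	if (msr != 0x10)
		return -1;	/* stands in for the #GP fixup path */
	*val = 0xdeadbeefcafef00dULL;
	return 0;
}

/* new 64-bit form: err = rdmsr_safe(msr, &val64) */
static inline int __rdmsr_safe_2(uint32_t msr, uint64_t *p)
{
	return read_msr_checked(msr, p);
}

/* legacy dual-u32 form: err = rdmsr_safe(msr, low, high) */
#define __rdmsr_safe_3(msr, low, high)			\
({							\
	uint64_t __val = 0;				\
	int __err = read_msr_checked((msr), &__val);	\
	(low) = (uint32_t)__val;			\
	(high) = (uint32_t)(__val >> 32);		\
	__err;						\
})

#define rdmsr_safe(...) \
	CONCATENATE(__rdmsr_safe_, COUNT_ARGS(__VA_ARGS__))(__VA_ARGS__)
```

The legacy form keeps its error return via a GCC statement expression,
matching the kernel's existing rdmsr_safe() macro style.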

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/include/asm/msr.h      | 46 +++++++++++++++++++++++++++++----
 arch/x86/include/asm/paravirt.h |  6 ++---
 2 files changed, 44 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/asm/msr.h b/arch/x86/include/asm/msr.h
index a5596d268053..4dd181aedb00 100644
--- a/arch/x86/include/asm/msr.h
+++ b/arch/x86/include/asm/msr.h
@@ -12,6 +12,7 @@
 #include <uapi/asm/msr.h>
 #include <asm/shared/msr.h>
 
+#include <linux/args.h>
 #include <linux/types.h>
 #include <linux/percpu.h>
 
@@ -179,14 +180,14 @@ static inline u64 native_read_pmc(int counter)
  * pointer indirection), this allows gcc to optimize better
  */
 
-#define rdmsr(msr, low, high)					\
+#define __rdmsr_3(msr, low, high)				\
 do {								\
 	u64 __val = native_read_msr((msr));			\
 	(void)((low) = (u32)__val);				\
 	(void)((high) = (u32)(__val >> 32));			\
 } while (0)
 
-static inline void wrmsr(u32 msr, u32 low, u32 high)
+static inline void __wrmsr_3(u32 msr, u32 low, u32 high)
 {
 	native_write_msr(msr, (u64)high << 32 | low);
 }
@@ -206,7 +207,7 @@ static inline int wrmsrq_safe(u32 msr, u64 val)
 }
 
 /* rdmsr with exception handling */
-#define rdmsr_safe(msr, low, high)				\
+#define __rdmsr_safe_3(msr, low, high)				\
 ({								\
 	u64 __val;						\
 	int __err = native_read_msr_safe((msr), &__val);	\
@@ -243,13 +244,48 @@ static __always_inline void wrmsrns(u32 msr, u64 val)
 }
 
 /*
- * Dual u32 version of wrmsrq_safe():
+ * Dual u32 versions of wrmsr_safe():
  */
-static inline int wrmsr_safe(u32 msr, u32 low, u32 high)
+static __always_inline int __wrmsr_safe_3(u32 msr, u32 low, u32 high)
 {
 	return wrmsrq_safe(msr, (u64)high << 32 | low);
 }
 
+/*
+ * u64 versions of rdmsr/wrmsr[_safe]():
+ */
+static __always_inline u64 __rdmsr_1(u32 msr)
+{
+	u64 val;
+
+	rdmsrq(msr, val);
+
+	return val;
+}
+
+static __always_inline void __wrmsr_2(u32 msr, u64 val)
+{
+	wrmsrq(msr, val);
+}
+
+static __always_inline int __rdmsr_safe_2(u32 msr, u64 *p)
+{
+	return rdmsrq_safe(msr, p);
+}
+
+static __always_inline int __wrmsr_safe_2(u32 msr, u64 val)
+{
+	return wrmsrq_safe(msr, val);
+}
+
+/*
+ * Macros for selecting u64 or dual u32 versions of rdmsr/wrmsr[_safe]():
+ */
+#define rdmsr(...) CONCATENATE(__rdmsr_, COUNT_ARGS(__VA_ARGS__))(__VA_ARGS__)
+#define wrmsr(...) CONCATENATE(__wrmsr_, COUNT_ARGS(__VA_ARGS__))(__VA_ARGS__)
+#define rdmsr_safe(...) CONCATENATE(__rdmsr_safe_, COUNT_ARGS(__VA_ARGS__))(__VA_ARGS__)
+#define wrmsr_safe(...) CONCATENATE(__wrmsr_safe_, COUNT_ARGS(__VA_ARGS__))(__VA_ARGS__)
+
 struct msr __percpu *msrs_alloc(void);
 void msrs_free(struct msr __percpu *msrs);
 int msr_set_bit(u32 msr, u8 bit);
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index cdfe4007443e..359fbc09f132 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -150,14 +150,14 @@ static inline int paravirt_write_msr_safe(u32 msr, u64 val)
 	return PVOP_CALL2(int, pv_ops, cpu.write_msr_safe, msr, val);
 }
 
-#define rdmsr(msr, val1, val2)			\
+#define __rdmsr_3(msr, val1, val2)		\
 do {						\
 	u64 _l = paravirt_read_msr(msr);	\
 	val1 = (u32)_l;				\
 	val2 = _l >> 32;			\
 } while (0)
 
-static __always_inline void wrmsr(u32 msr, u32 low, u32 high)
+static __always_inline void __wrmsr_3(u32 msr, u32 low, u32 high)
 {
 	paravirt_write_msr(msr, (u64)high << 32 | low);
 }
@@ -178,7 +178,7 @@ static inline int wrmsrq_safe(u32 msr, u64 val)
 }
 
 /* rdmsr with exception handling */
-#define rdmsr_safe(msr, a, b)				\
+#define __rdmsr_safe_3(msr, a, b)				\
 ({							\
 	u64 _l;						\
 	int _err = paravirt_read_msr_safe((msr), &_l);	\
-- 
2.53.0


