public inbox for linux-kernel@vger.kernel.org
* [PATCH RFC 0/6] x86/msr: Rename MSR access functions
@ 2026-04-20  9:16 Juergen Gross
  2026-04-20  9:16 ` [PATCH RFC 1/6] x86/msr: Rename msr_read() and msr_write() Juergen Gross
                   ` (6 more replies)
  0 siblings, 7 replies; 20+ messages in thread
From: Juergen Gross @ 2026-04-20  9:16 UTC (permalink / raw)
  To: linux-kernel, x86, linux-perf-users, linux-edac
  Cc: Juergen Gross, H. Peter Anvin, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, Peter Zijlstra,
	Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
	Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
	James Clark, Tony Luck

This is an RFC for renaming the main MSR access functions. The main
motivations for doing so are:

- Prepare for widespread use of WRMSRNS by making it easily visible
  whether an MSR write is serializing or not. This forces new users of
  MSR writes to decide whether a serializing write is actually needed.
  With today's MSR write interfaces, most new users would probably end
  up with the serializing form, incurring a needless performance hit.

- Use functions instead of macros for accessing MSRs, which avoids
  modifying variables passed as parameters.

- Eliminate multiple accessors doing exactly the same thing (e.g.
  rdmsrl() and rdmsrq()).

- Instead of naming functions after the underlying instruction
  mnemonics, use a common name space (msr_*()).

This series contains a prerequisite patch removing a name collision
with the new access functions, 3 patches introducing the new access
functions, and 2 example patches converting users from the old
functions to the new ones.

I have selected the 2 example patches based on the code changes they
require:

Patch 5 is very simple, as all of its changes are trivial. It uses a
mixture of serializing and non-serializing MSR writes, but the
non-serializing form is used only when another MSR write follows in
the same function, so each function doing MSR writes still has
serializing semantics overall (it might be possible to relax that
later, but I wanted to be conservative for now).

Patch 6 is a little less simple, as there are a couple of cases where
the new functions are not direct replacements for the old interfaces.
The new functions only work on 64-bit values, so where two 32-bit
values are needed, "struct msr" is used as a conversion layer. I
considered adding macros for the same purpose, but in the end this
seemed the simpler and more readable solution.

If the general idea is accepted, I'll convert the rest of the users
(that will probably be a rather large, but mostly trivial, series).

As this series is an RFC, I have done only basic compile testing.

Juergen Gross (6):
  x86/msr: Rename msr_read() and msr_write()
  x86/msr: Create a new minimal set of local MSR access functions
  x86/msr: Create a new minimal set of inter-CPU MSR access functions
  x86/msr: Rename the *_safe_regs[_on_cpu]() MSR functions
  x86/events: Switch core parts to use new MSR access functions
  x86/cpu/mce: Switch code to use new MSR access functions

 arch/x86/events/core.c            |  42 ++++++-------
 arch/x86/events/msr.c             |   2 +-
 arch/x86/events/perf_event.h      |  26 ++++----
 arch/x86/events/probe.c           |   2 +-
 arch/x86/events/rapl.c            |   8 +--
 arch/x86/include/asm/msr.h        |  87 ++++++++++++++++++++++---
 arch/x86/kernel/cpu/amd.c         |   4 +-
 arch/x86/kernel/cpu/mce/amd.c     | 101 +++++++++++++++---------------
 arch/x86/kernel/cpu/mce/core.c    |  18 +++---
 arch/x86/kernel/cpu/mce/inject.c  |  40 ++++++------
 arch/x86/kernel/cpu/mce/intel.c   |  32 +++++-----
 arch/x86/kernel/cpu/mce/p5.c      |  16 ++---
 arch/x86/kernel/cpu/mce/winchip.c |  10 +--
 arch/x86/kernel/msr.c             |  16 ++---
 arch/x86/lib/msr-reg-export.c     |   4 +-
 arch/x86/lib/msr-reg.S            |  16 ++---
 arch/x86/lib/msr-smp.c            |  20 +++---
 arch/x86/lib/msr.c                |  12 ++--
 18 files changed, 263 insertions(+), 193 deletions(-)

-- 
2.53.0


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [PATCH RFC 1/6] x86/msr: Rename msr_read() and msr_write()
  2026-04-20  9:16 [PATCH RFC 0/6] x86/msr: Rename MSR access functions Juergen Gross
@ 2026-04-20  9:16 ` Juergen Gross
  2026-04-20  9:16 ` [PATCH RFC 2/6] x86/msr: Create a new minimal set of local MSR access functions Juergen Gross
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 20+ messages in thread
From: Juergen Gross @ 2026-04-20  9:16 UTC (permalink / raw)
  To: linux-kernel, x86
  Cc: Juergen Gross, H. Peter Anvin, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen

Rename the existing msr_read() and msr_write() functions to
msr_do_read() and msr_do_write(), as the original names will be used
for new MSR access functions in the future.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/kernel/msr.c | 12 ++++++------
 arch/x86/lib/msr.c    | 12 ++++++------
 2 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kernel/msr.c b/arch/x86/kernel/msr.c
index 4469c784eaa0..43791746103c 100644
--- a/arch/x86/kernel/msr.c
+++ b/arch/x86/kernel/msr.c
@@ -49,8 +49,8 @@ enum allow_write_msrs {
 
 static enum allow_write_msrs allow_writes = MSR_WRITES_DEFAULT;
 
-static ssize_t msr_read(struct file *file, char __user *buf,
-			size_t count, loff_t *ppos)
+static ssize_t msr_do_read(struct file *file, char __user *buf,
+			   size_t count, loff_t *ppos)
 {
 	u32 __user *tmp = (u32 __user *) buf;
 	u32 data[2];
@@ -105,8 +105,8 @@ static int filter_write(u32 reg)
 	return 0;
 }
 
-static ssize_t msr_write(struct file *file, const char __user *buf,
-			 size_t count, loff_t *ppos)
+static ssize_t msr_do_write(struct file *file, const char __user *buf,
+			    size_t count, loff_t *ppos)
 {
 	const u32 __user *tmp = (const u32 __user *)buf;
 	u32 data[2];
@@ -227,8 +227,8 @@ static int msr_open(struct inode *inode, struct file *file)
 static const struct file_operations msr_fops = {
 	.owner = THIS_MODULE,
 	.llseek = no_seek_end_llseek,
-	.read = msr_read,
-	.write = msr_write,
+	.read = msr_do_read,
+	.write = msr_do_write,
 	.open = msr_open,
 	.unlocked_ioctl = msr_ioctl,
 	.compat_ioctl = msr_ioctl,
diff --git a/arch/x86/lib/msr.c b/arch/x86/lib/msr.c
index dfdd1da89f36..be6c34666743 100644
--- a/arch/x86/lib/msr.c
+++ b/arch/x86/lib/msr.c
@@ -28,7 +28,7 @@ void msrs_free(struct msr __percpu *msrs)
 EXPORT_SYMBOL(msrs_free);
 
 /**
- * msr_read - Read an MSR with error handling
+ * msr_do_read - Read an MSR with error handling
  * @msr: MSR to read
  * @m: value to read into
  *
@@ -37,7 +37,7 @@ EXPORT_SYMBOL(msrs_free);
  *
  * Return: %0 for success, otherwise an error code
  */
-static int msr_read(u32 msr, struct msr *m)
+static int msr_do_read(u32 msr, struct msr *m)
 {
 	int err;
 	u64 val;
@@ -50,14 +50,14 @@ static int msr_read(u32 msr, struct msr *m)
 }
 
 /**
- * msr_write - Write an MSR with error handling
+ * msr_do_write - Write an MSR with error handling
  *
  * @msr: MSR to write
  * @m: value to write
  *
  * Return: %0 for success, otherwise an error code
  */
-static int msr_write(u32 msr, struct msr *m)
+static int msr_do_write(u32 msr, struct msr *m)
 {
 	return wrmsrq_safe(msr, m->q);
 }
@@ -70,7 +70,7 @@ static inline int __flip_bit(u32 msr, u8 bit, bool set)
 	if (bit > 63)
 		return err;
 
-	err = msr_read(msr, &m);
+	err = msr_do_read(msr, &m);
 	if (err)
 		return err;
 
@@ -83,7 +83,7 @@ static inline int __flip_bit(u32 msr, u8 bit, bool set)
 	if (m1.q == m.q)
 		return 0;
 
-	err = msr_write(msr, &m1);
+	err = msr_do_write(msr, &m1);
 	if (err)
 		return err;
 
-- 
2.53.0



* [PATCH RFC 2/6] x86/msr: Create a new minimal set of local MSR access functions
  2026-04-20  9:16 [PATCH RFC 0/6] x86/msr: Rename MSR access functions Juergen Gross
  2026-04-20  9:16 ` [PATCH RFC 1/6] x86/msr: Rename msr_read() and msr_write() Juergen Gross
@ 2026-04-20  9:16 ` Juergen Gross
  2026-04-20  9:16 ` [PATCH RFC 3/6] x86/msr: Create a new minimal set of inter-CPU " Juergen Gross
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 20+ messages in thread
From: Juergen Gross @ 2026-04-20  9:16 UTC (permalink / raw)
  To: linux-kernel, x86
  Cc: Juergen Gross, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, H. Peter Anvin

Today there are two sets of MSR access functions (apart from the
low-level ones): one uses 64-bit values, the other a pair of 32-bit
values for the MSR contents. The read variants are macros, while the
write variants are proper inline functions.

In order to prepare for non-serializing variants of the write
functions, create a complete set of MSR functions using a proper
name space ("msr_*") without the 32-bit pair variants.

Name the write variants explicitly msr_write_[safe_]ser() and
msr_write_[safe_]noser() in order to make it very clear whether
the serializing or the non-serializing variant is meant.

For now the new set is based on the old wrmsr*() and rdmsr*()
functions; once all users have been switched to the new functions,
the old wrmsr*() and rdmsr*() ones will be dropped.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/include/asm/msr.h | 38 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 38 insertions(+)

diff --git a/arch/x86/include/asm/msr.h b/arch/x86/include/asm/msr.h
index 9c2ea29e12a9..cc21c8699e23 100644
--- a/arch/x86/include/asm/msr.h
+++ b/arch/x86/include/asm/msr.h
@@ -227,6 +227,44 @@ static __always_inline u64 rdpmc(int counter)
 
 #endif	/* !CONFIG_PARAVIRT_XXL */
 
+/*
+ * New set of MSR access functions. New code should use those instead of the
+ * legacy wrmsr*() and rdmsr*() ones.
+ */
+static __always_inline u64 msr_read(u32 msr)
+{
+	u64 val;
+
+	rdmsrq(msr, val);
+
+	return val;
+}
+
+static __always_inline int msr_read_safe(u32 msr, u64 *val)
+{
+	return rdmsrq_safe(msr, val);
+}
+
+static __always_inline void msr_write_ser(u32 msr, u64 val)
+{
+	wrmsrq(msr, val);
+}
+
+static __always_inline int msr_write_safe_ser(u32 msr, u64 val)
+{
+	return wrmsrq_safe(msr, val);
+}
+
+static __always_inline void msr_write_noser(u32 msr, u64 val)
+{
+	wrmsrq(msr, val);
+}
+
+static __always_inline int msr_write_safe_noser(u32 msr, u64 val)
+{
+	return wrmsrq_safe(msr, val);
+}
+
 /* Instruction opcode for WRMSRNS supported in binutils >= 2.40 */
 #define ASM_WRMSRNS _ASM_BYTES(0x0f,0x01,0xc6)
 
-- 
2.53.0



* [PATCH RFC 3/6] x86/msr: Create a new minimal set of inter-CPU MSR access functions
  2026-04-20  9:16 [PATCH RFC 0/6] x86/msr: Rename MSR access functions Juergen Gross
  2026-04-20  9:16 ` [PATCH RFC 1/6] x86/msr: Rename msr_read() and msr_write() Juergen Gross
  2026-04-20  9:16 ` [PATCH RFC 2/6] x86/msr: Create a new minimal set of local MSR access functions Juergen Gross
@ 2026-04-20  9:16 ` Juergen Gross
  2026-04-20  9:16 ` [PATCH RFC 4/6] x86/msr: Rename the *_safe_regs[_on_cpu]() MSR functions Juergen Gross
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 20+ messages in thread
From: Juergen Gross @ 2026-04-20  9:16 UTC (permalink / raw)
  To: linux-kernel, x86
  Cc: Juergen Gross, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, H. Peter Anvin

As with the local MSR access functions, create a set of inter-CPU MSR
access functions using the "msr_" name space. For writing MSRs of
other CPUs there is no need for serializing and non-serializing
variants.

Base the new functions on the old ones for now; once all callers have
been switched to use only the new functions, the old ones can be
deleted.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/include/asm/msr.h | 32 ++++++++++++++++++++++++++++++++
 1 file changed, 32 insertions(+)

diff --git a/arch/x86/include/asm/msr.h b/arch/x86/include/asm/msr.h
index cc21c8699e23..74e87b2b39fd 100644
--- a/arch/x86/include/asm/msr.h
+++ b/arch/x86/include/asm/msr.h
@@ -369,5 +369,37 @@ static inline int wrmsr_safe_regs_on_cpu(unsigned int cpu, u32 regs[8])
 #define wrmsrl(msr, val) wrmsrq(msr, val)
 #define rdmsrl_on_cpu(cpu, msr, q) rdmsrq_on_cpu(cpu, msr, q)
 
+static inline int msr_read_on_cpu(unsigned int cpu, u32 msr, u64 *val)
+{
+	return rdmsrq_on_cpu(cpu, msr, val);
+}
+
+static inline int msr_read_safe_on_cpu(unsigned int cpu, u32 msr, u64 *val)
+{
+	return rdmsrq_safe_on_cpu(cpu, msr, val);
+}
+
+static inline int msr_write_on_cpu(unsigned int cpu, u32 msr, u64 val)
+{
+	return wrmsrq_on_cpu(cpu, msr, val);
+}
+
+static inline int msr_write_safe_on_cpu(unsigned int cpu, u32 msr, u64 val)
+{
+	return wrmsrq_safe_on_cpu(cpu, msr, val);
+}
+
+static inline void msr_read_on_cpus(const struct cpumask *cpus, u32 msr,
+				    struct msr __percpu *msrs)
+{
+	rdmsr_on_cpus(cpus, msr, msrs);
+}
+
+static inline void msr_write_on_cpus(const struct cpumask *cpus, u32 msr,
+				     struct msr __percpu *msrs)
+{
+	wrmsr_on_cpus(cpus, msr, msrs);
+}
+
 #endif /* __ASSEMBLER__ */
 #endif /* _ASM_X86_MSR_H */
-- 
2.53.0



* [PATCH RFC 4/6] x86/msr: Rename the *_safe_regs[_on_cpu]() MSR functions
  2026-04-20  9:16 [PATCH RFC 0/6] x86/msr: Rename MSR access functions Juergen Gross
                   ` (2 preceding siblings ...)
  2026-04-20  9:16 ` [PATCH RFC 3/6] x86/msr: Create a new minimal set of inter-CPU " Juergen Gross
@ 2026-04-20  9:16 ` Juergen Gross
  2026-04-20  9:16 ` [PATCH RFC 5/6] x86/events: Switch core parts to use new MSR access functions Juergen Gross
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 20+ messages in thread
From: Juergen Gross @ 2026-04-20  9:16 UTC (permalink / raw)
  To: linux-kernel, x86
  Cc: Juergen Gross, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, H. Peter Anvin

Move the functions rdmsr_safe_regs[_on_cpu]() and
wrmsr_safe_regs[_on_cpu]() into the "msr_" name space and change all
callers accordingly.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/include/asm/msr.h    | 17 ++++++++---------
 arch/x86/kernel/cpu/amd.c     |  4 ++--
 arch/x86/kernel/msr.c         |  4 ++--
 arch/x86/lib/msr-reg-export.c |  4 ++--
 arch/x86/lib/msr-reg.S        | 16 ++++++++--------
 arch/x86/lib/msr-smp.c        | 20 ++++++++++----------
 6 files changed, 32 insertions(+), 33 deletions(-)

diff --git a/arch/x86/include/asm/msr.h b/arch/x86/include/asm/msr.h
index 74e87b2b39fd..9b3d16b2eb61 100644
--- a/arch/x86/include/asm/msr.h
+++ b/arch/x86/include/asm/msr.h
@@ -156,9 +156,6 @@ static inline int notrace native_write_msr_safe(u32 msr, u64 val)
 	return err;
 }
 
-extern int rdmsr_safe_regs(u32 regs[8]);
-extern int wrmsr_safe_regs(u32 regs[8]);
-
 static inline u64 native_read_pmc(int counter)
 {
 	EAX_EDX_DECLARE_ARGS(val, low, high);
@@ -292,6 +289,8 @@ struct msr __percpu *msrs_alloc(void);
 void msrs_free(struct msr __percpu *msrs);
 int msr_set_bit(u32 msr, u8 bit);
 int msr_clear_bit(u32 msr, u8 bit);
+int msr_read_safe_regs(u32 regs[8]);
+int msr_write_safe_regs(u32 regs[8]);
 
 #ifdef CONFIG_SMP
 int rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h);
@@ -304,8 +303,8 @@ int rdmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h);
 int wrmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h);
 int rdmsrq_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 *q);
 int wrmsrq_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 q);
-int rdmsr_safe_regs_on_cpu(unsigned int cpu, u32 regs[8]);
-int wrmsr_safe_regs_on_cpu(unsigned int cpu, u32 regs[8]);
+int msr_read_safe_regs_on_cpu(unsigned int cpu, u32 regs[8]);
+int msr_write_safe_regs_on_cpu(unsigned int cpu, u32 regs[8]);
 #else  /*  CONFIG_SMP  */
 static inline int rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h)
 {
@@ -354,13 +353,13 @@ static inline int wrmsrq_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 q)
 {
 	return wrmsrq_safe(msr_no, q);
 }
-static inline int rdmsr_safe_regs_on_cpu(unsigned int cpu, u32 regs[8])
+static inline int msr_read_safe_regs_on_cpu(unsigned int cpu, u32 regs[8])
 {
-	return rdmsr_safe_regs(regs);
+	return msr_read_safe_regs(regs);
 }
-static inline int wrmsr_safe_regs_on_cpu(unsigned int cpu, u32 regs[8])
+static inline int msr_write_safe_regs_on_cpu(unsigned int cpu, u32 regs[8])
 {
-	return wrmsr_safe_regs(regs);
+	return msr_write_safe_regs(regs);
 }
 #endif  /* CONFIG_SMP */
 
diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index 2d9ae6ab1701..7266fcfcf448 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -45,7 +45,7 @@ static inline int rdmsrq_amd_safe(unsigned msr, u64 *p)
 	gprs[1] = msr;
 	gprs[7] = 0x9c5a203a;
 
-	err = rdmsr_safe_regs(gprs);
+	err = msr_read_safe_regs(gprs);
 
 	*p = gprs[0] | ((u64)gprs[2] << 32);
 
@@ -64,7 +64,7 @@ static inline int wrmsrq_amd_safe(unsigned msr, u64 val)
 	gprs[2] = val >> 32;
 	gprs[7] = 0x9c5a203a;
 
-	return wrmsr_safe_regs(gprs);
+	return msr_write_safe_regs(gprs);
 }
 
 /*
diff --git a/arch/x86/kernel/msr.c b/arch/x86/kernel/msr.c
index 43791746103c..e3e71e3ba59f 100644
--- a/arch/x86/kernel/msr.c
+++ b/arch/x86/kernel/msr.c
@@ -162,7 +162,7 @@ static long msr_ioctl(struct file *file, unsigned int ioc, unsigned long arg)
 			err = -EFAULT;
 			break;
 		}
-		err = rdmsr_safe_regs_on_cpu(cpu, regs);
+		err = msr_read_safe_regs_on_cpu(cpu, regs);
 		if (err)
 			break;
 		if (copy_to_user(uregs, &regs, sizeof(regs)))
@@ -188,7 +188,7 @@ static long msr_ioctl(struct file *file, unsigned int ioc, unsigned long arg)
 
 		add_taint(TAINT_CPU_OUT_OF_SPEC, LOCKDEP_STILL_OK);
 
-		err = wrmsr_safe_regs_on_cpu(cpu, regs);
+		err = msr_write_safe_regs_on_cpu(cpu, regs);
 		if (err)
 			break;
 		if (copy_to_user(uregs, &regs, sizeof(regs)))
diff --git a/arch/x86/lib/msr-reg-export.c b/arch/x86/lib/msr-reg-export.c
index 876b4168ab0a..c3da46f0581b 100644
--- a/arch/x86/lib/msr-reg-export.c
+++ b/arch/x86/lib/msr-reg-export.c
@@ -2,5 +2,5 @@
 #include <linux/export.h>
 #include <asm/msr.h>
 
-EXPORT_SYMBOL(rdmsr_safe_regs);
-EXPORT_SYMBOL(wrmsr_safe_regs);
+EXPORT_SYMBOL(msr_read_safe_regs);
+EXPORT_SYMBOL(msr_write_safe_regs);
diff --git a/arch/x86/lib/msr-reg.S b/arch/x86/lib/msr-reg.S
index 5ef8494896e8..ccb9e3a962f4 100644
--- a/arch/x86/lib/msr-reg.S
+++ b/arch/x86/lib/msr-reg.S
@@ -12,8 +12,8 @@
  * reg layout: u32 gprs[eax, ecx, edx, ebx, esp, ebp, esi, edi]
  *
  */
-.macro op_safe_regs op
-SYM_TYPED_FUNC_START(\op\()_safe_regs)
+.macro op_safe_regs name op
+SYM_TYPED_FUNC_START(\name)
 	pushq %rbx
 	pushq %r12
 	movq	%rdi, %r10	/* Save pointer */
@@ -42,13 +42,13 @@ SYM_TYPED_FUNC_START(\op\()_safe_regs)
 	jmp     2b
 
 	_ASM_EXTABLE(1b, 3b)
-SYM_FUNC_END(\op\()_safe_regs)
+SYM_FUNC_END(\name)
 .endm
 
 #else /* X86_32 */
 
-.macro op_safe_regs op
-SYM_FUNC_START(\op\()_safe_regs)
+.macro op_safe_regs name op
+SYM_FUNC_START(\name)
 	pushl %ebx
 	pushl %ebp
 	pushl %esi
@@ -84,11 +84,11 @@ SYM_FUNC_START(\op\()_safe_regs)
 	jmp     2b
 
 	_ASM_EXTABLE(1b, 3b)
-SYM_FUNC_END(\op\()_safe_regs)
+SYM_FUNC_END(\name)
 .endm
 
 #endif
 
-op_safe_regs rdmsr
-op_safe_regs wrmsr
+op_safe_regs msr_read_safe_regs rdmsr
+op_safe_regs msr_write_safe_regs wrmsr
 
diff --git a/arch/x86/lib/msr-smp.c b/arch/x86/lib/msr-smp.c
index b8f63419e6ae..21bb1aee2af7 100644
--- a/arch/x86/lib/msr-smp.c
+++ b/arch/x86/lib/msr-smp.c
@@ -236,42 +236,42 @@ EXPORT_SYMBOL(rdmsrq_safe_on_cpu);
  * These variants are significantly slower, but allows control over
  * the entire 32-bit GPR set.
  */
-static void __rdmsr_safe_regs_on_cpu(void *info)
+static void __msr_read_safe_regs(void *info)
 {
 	struct msr_regs_info *rv = info;
 
-	rv->err = rdmsr_safe_regs(rv->regs);
+	rv->err = msr_read_safe_regs(rv->regs);
 }
 
-static void __wrmsr_safe_regs_on_cpu(void *info)
+static void __msr_write_safe_regs(void *info)
 {
 	struct msr_regs_info *rv = info;
 
-	rv->err = wrmsr_safe_regs(rv->regs);
+	rv->err = msr_write_safe_regs(rv->regs);
 }
 
-int rdmsr_safe_regs_on_cpu(unsigned int cpu, u32 regs[8])
+int msr_read_safe_regs_on_cpu(unsigned int cpu, u32 regs[8])
 {
 	int err;
 	struct msr_regs_info rv;
 
 	rv.regs   = regs;
 	rv.err    = -EIO;
-	err = smp_call_function_single(cpu, __rdmsr_safe_regs_on_cpu, &rv, 1);
+	err = smp_call_function_single(cpu, __msr_read_safe_regs, &rv, 1);
 
 	return err ? err : rv.err;
 }
-EXPORT_SYMBOL(rdmsr_safe_regs_on_cpu);
+EXPORT_SYMBOL(msr_read_safe_regs_on_cpu);
 
-int wrmsr_safe_regs_on_cpu(unsigned int cpu, u32 regs[8])
+int msr_write_safe_regs_on_cpu(unsigned int cpu, u32 regs[8])
 {
 	int err;
 	struct msr_regs_info rv;
 
 	rv.regs = regs;
 	rv.err  = -EIO;
-	err = smp_call_function_single(cpu, __wrmsr_safe_regs_on_cpu, &rv, 1);
+	err = smp_call_function_single(cpu, __msr_write_safe_regs, &rv, 1);
 
 	return err ? err : rv.err;
 }
-EXPORT_SYMBOL(wrmsr_safe_regs_on_cpu);
+EXPORT_SYMBOL(msr_write_safe_regs_on_cpu);
-- 
2.53.0



* [PATCH RFC 5/6] x86/events: Switch core parts to use new MSR access functions
  2026-04-20  9:16 [PATCH RFC 0/6] x86/msr: Rename MSR access functions Juergen Gross
                   ` (3 preceding siblings ...)
  2026-04-20  9:16 ` [PATCH RFC 4/6] x86/msr: Rename the *_safe_regs[_on_cpu]() MSR functions Juergen Gross
@ 2026-04-20  9:16 ` Juergen Gross
  2026-04-20  9:16 ` [PATCH RFC 6/6] x86/cpu/mce: Switch code " Juergen Gross
  2026-04-20 11:35 ` [PATCH RFC 0/6] x86/msr: Rename " Peter Zijlstra
  6 siblings, 0 replies; 20+ messages in thread
From: Juergen Gross @ 2026-04-20  9:16 UTC (permalink / raw)
  To: linux-kernel, x86, linux-perf-users
  Cc: Juergen Gross, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
	Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
	James Clark, Thomas Gleixner, Borislav Petkov, Dave Hansen,
	H. Peter Anvin

Switch the core parts of the x86 events subsystem to use the new
msr_*() functions instead of the rdmsr*()/wrmsr*() ones.

Use msr_write_noser() when another MSR write follows later in the
same function, and msr_write_ser() for the last MSR write in a
function.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/events/core.c       | 42 ++++++++++++++++++------------------
 arch/x86/events/msr.c        |  2 +-
 arch/x86/events/perf_event.h | 26 +++++++++++-----------
 arch/x86/events/probe.c      |  2 +-
 arch/x86/events/rapl.c       |  8 +++----
 5 files changed, 40 insertions(+), 40 deletions(-)

diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 810ab21ffd99..c15e0d1a6658 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -279,7 +279,7 @@ bool check_hw_exists(struct pmu *pmu, unsigned long *cntr_mask,
 	 */
 	for_each_set_bit(i, cntr_mask, X86_PMC_IDX_MAX) {
 		reg = x86_pmu_config_addr(i);
-		ret = rdmsrq_safe(reg, &val);
+		ret = msr_read_safe(reg, &val);
 		if (ret)
 			goto msr_fail;
 		if (val & ARCH_PERFMON_EVENTSEL_ENABLE) {
@@ -293,7 +293,7 @@ bool check_hw_exists(struct pmu *pmu, unsigned long *cntr_mask,
 
 	if (*(u64 *)fixed_cntr_mask) {
 		reg = MSR_ARCH_PERFMON_FIXED_CTR_CTRL;
-		ret = rdmsrq_safe(reg, &val);
+		ret = msr_read_safe(reg, &val);
 		if (ret)
 			goto msr_fail;
 		for_each_set_bit(i, fixed_cntr_mask, X86_PMC_IDX_MAX) {
@@ -324,11 +324,11 @@ bool check_hw_exists(struct pmu *pmu, unsigned long *cntr_mask,
 	 * (qemu/kvm) that don't trap on the MSR access and always return 0s.
 	 */
 	reg = x86_pmu_event_addr(reg_safe);
-	if (rdmsrq_safe(reg, &val))
+	if (msr_read_safe(reg, &val))
 		goto msr_fail;
 	val ^= 0xffffUL;
-	ret = wrmsrq_safe(reg, val);
-	ret |= rdmsrq_safe(reg, &val_new);
+	ret = msr_write_safe_noser(reg, val);
+	ret |= msr_read_safe(reg, &val_new);
 	if (ret || val != val_new)
 		goto msr_fail;
 
@@ -713,13 +713,13 @@ void x86_pmu_disable_all(void)
 
 		if (!test_bit(idx, cpuc->active_mask))
 			continue;
-		rdmsrq(x86_pmu_config_addr(idx), val);
+		val = msr_read(x86_pmu_config_addr(idx));
 		if (!(val & ARCH_PERFMON_EVENTSEL_ENABLE))
 			continue;
 		val &= ~ARCH_PERFMON_EVENTSEL_ENABLE;
-		wrmsrq(x86_pmu_config_addr(idx), val);
+		msr_write_noser(x86_pmu_config_addr(idx), val);
 		if (is_counter_pair(hwc))
-			wrmsrq(x86_pmu_config_addr(idx + 1), 0);
+			msr_write_noser(x86_pmu_config_addr(idx + 1), 0);
 	}
 }
 
@@ -1446,14 +1446,14 @@ int x86_perf_event_set_period(struct perf_event *event)
 	 */
 	local64_set(&hwc->prev_count, (u64)-left);
 
-	wrmsrq(hwc->event_base, (u64)(-left) & x86_pmu.cntval_mask);
+	msr_write_noser(hwc->event_base, (u64)(-left) & x86_pmu.cntval_mask);
 
 	/*
 	 * Sign extend the Merge event counter's upper 16 bits since
 	 * we currently declare a 48-bit counter width
 	 */
 	if (is_counter_pair(hwc))
-		wrmsrq(x86_pmu_event_addr(idx + 1), 0xffff);
+		msr_write_noser(x86_pmu_event_addr(idx + 1), 0xffff);
 
 	perf_event_update_userpage(event);
 
@@ -1575,10 +1575,10 @@ void perf_event_print_debug(void)
 		return;
 
 	if (x86_pmu.version >= 2) {
-		rdmsrq(MSR_CORE_PERF_GLOBAL_CTRL, ctrl);
-		rdmsrq(MSR_CORE_PERF_GLOBAL_STATUS, status);
-		rdmsrq(MSR_CORE_PERF_GLOBAL_OVF_CTRL, overflow);
-		rdmsrq(MSR_ARCH_PERFMON_FIXED_CTR_CTRL, fixed);
+		ctrl = msr_read(MSR_CORE_PERF_GLOBAL_CTRL);
+		status = msr_read(MSR_CORE_PERF_GLOBAL_STATUS);
+		overflow = msr_read(MSR_CORE_PERF_GLOBAL_OVF_CTRL);
+		fixed = msr_read(MSR_ARCH_PERFMON_FIXED_CTR_CTRL);
 
 		pr_info("\n");
 		pr_info("CPU#%d: ctrl:       %016llx\n", cpu, ctrl);
@@ -1586,19 +1586,19 @@ void perf_event_print_debug(void)
 		pr_info("CPU#%d: overflow:   %016llx\n", cpu, overflow);
 		pr_info("CPU#%d: fixed:      %016llx\n", cpu, fixed);
 		if (pebs_constraints) {
-			rdmsrq(MSR_IA32_PEBS_ENABLE, pebs);
+			pebs = msr_read(MSR_IA32_PEBS_ENABLE);
 			pr_info("CPU#%d: pebs:       %016llx\n", cpu, pebs);
 		}
 		if (x86_pmu.lbr_nr) {
-			rdmsrq(MSR_IA32_DEBUGCTLMSR, debugctl);
+			debugctl = msr_read(MSR_IA32_DEBUGCTLMSR);
 			pr_info("CPU#%d: debugctl:   %016llx\n", cpu, debugctl);
 		}
 	}
 	pr_info("CPU#%d: active:     %016llx\n", cpu, *(u64 *)cpuc->active_mask);
 
 	for_each_set_bit(idx, cntr_mask, X86_PMC_IDX_MAX) {
-		rdmsrq(x86_pmu_config_addr(idx), pmc_ctrl);
-		rdmsrq(x86_pmu_event_addr(idx), pmc_count);
+		pmc_ctrl = msr_read(x86_pmu_config_addr(idx));
+		pmc_count = msr_read(x86_pmu_event_addr(idx));
 
 		prev_left = per_cpu(pmc_prev_left[idx], cpu);
 
@@ -1612,7 +1612,7 @@ void perf_event_print_debug(void)
 	for_each_set_bit(idx, fixed_cntr_mask, X86_PMC_IDX_MAX) {
 		if (fixed_counter_disabled(idx, cpuc->pmu))
 			continue;
-		rdmsrq(x86_pmu_fixed_ctr_addr(idx), pmc_count);
+		pmc_count = msr_read(x86_pmu_fixed_ctr_addr(idx));
 
 		pr_info("CPU#%d: fixed-PMC%d count: %016llx\n",
 			cpu, idx, pmc_count);
@@ -2560,9 +2560,9 @@ void perf_clear_dirty_counters(void)
 			if (!test_bit(i - INTEL_PMC_IDX_FIXED, hybrid(cpuc->pmu, fixed_cntr_mask)))
 				continue;
 
-			wrmsrq(x86_pmu_fixed_ctr_addr(i - INTEL_PMC_IDX_FIXED), 0);
+			msr_write_noser(x86_pmu_fixed_ctr_addr(i - INTEL_PMC_IDX_FIXED), 0);
 		} else {
-			wrmsrq(x86_pmu_event_addr(i), 0);
+			msr_write_noser(x86_pmu_event_addr(i), 0);
 		}
 	}
 
diff --git a/arch/x86/events/msr.c b/arch/x86/events/msr.c
index 76d6418c5055..09d5b2808727 100644
--- a/arch/x86/events/msr.c
+++ b/arch/x86/events/msr.c
@@ -158,7 +158,7 @@ static inline u64 msr_read_counter(struct perf_event *event)
 	u64 now;
 
 	if (event->hw.event_base)
-		rdmsrq(event->hw.event_base, now);
+		now = msr_read(event->hw.event_base);
 	else
 		now = rdtsc_ordered();
 
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index fad87d3c8b2c..cce2e7b67c01 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -1271,16 +1271,16 @@ static inline void __x86_pmu_enable_event(struct hw_perf_event *hwc,
 	u64 disable_mask = __this_cpu_read(cpu_hw_events.perf_ctr_virt_mask);
 
 	if (hwc->extra_reg.reg)
-		wrmsrq(hwc->extra_reg.reg, hwc->extra_reg.config);
+		msr_write_noser(hwc->extra_reg.reg, hwc->extra_reg.config);
 
 	/*
 	 * Add enabled Merge event on next counter
 	 * if large increment event being enabled on this counter
 	 */
 	if (is_counter_pair(hwc))
-		wrmsrq(x86_pmu_config_addr(hwc->idx + 1), x86_pmu.perf_ctr_pair_en);
+		msr_write_noser(x86_pmu_config_addr(hwc->idx + 1), x86_pmu.perf_ctr_pair_en);
 
-	wrmsrq(hwc->config_base, (hwc->config | enable_mask) & ~disable_mask);
+	msr_write_ser(hwc->config_base, (hwc->config | enable_mask) & ~disable_mask);
 }
 
 void x86_pmu_enable_all(int added);
@@ -1296,10 +1296,10 @@ static inline void x86_pmu_disable_event(struct perf_event *event)
 	u64 disable_mask = __this_cpu_read(cpu_hw_events.perf_ctr_virt_mask);
 	struct hw_perf_event *hwc = &event->hw;
 
-	wrmsrq(hwc->config_base, hwc->config & ~disable_mask);
+	msr_write_ser(hwc->config_base, hwc->config & ~disable_mask);
 
 	if (is_counter_pair(hwc))
-		wrmsrq(x86_pmu_config_addr(hwc->idx + 1), 0);
+		msr_write_ser(x86_pmu_config_addr(hwc->idx + 1), 0);
 }
 
 void x86_pmu_enable_event(struct perf_event *event);
@@ -1473,12 +1473,12 @@ static __always_inline void __amd_pmu_lbr_disable(void)
 {
 	u64 dbg_ctl, dbg_extn_cfg;
 
-	rdmsrq(MSR_AMD_DBG_EXTN_CFG, dbg_extn_cfg);
-	wrmsrq(MSR_AMD_DBG_EXTN_CFG, dbg_extn_cfg & ~DBG_EXTN_CFG_LBRV2EN);
+	dbg_extn_cfg = msr_read(MSR_AMD_DBG_EXTN_CFG);
+	msr_write_ser(MSR_AMD_DBG_EXTN_CFG, dbg_extn_cfg & ~DBG_EXTN_CFG_LBRV2EN);
 
 	if (cpu_feature_enabled(X86_FEATURE_AMD_LBR_PMC_FREEZE)) {
-		rdmsrq(MSR_IA32_DEBUGCTLMSR, dbg_ctl);
-		wrmsrq(MSR_IA32_DEBUGCTLMSR, dbg_ctl & ~DEBUGCTLMSR_FREEZE_LBRS_ON_PMI);
+		dbg_ctl = msr_read(MSR_IA32_DEBUGCTLMSR);
+		msr_write_ser(MSR_IA32_DEBUGCTLMSR, dbg_ctl & ~DEBUGCTLMSR_FREEZE_LBRS_ON_PMI);
 	}
 }
 
@@ -1619,21 +1619,21 @@ static inline bool intel_pmu_has_bts(struct perf_event *event)
 
 static __always_inline void __intel_pmu_pebs_disable_all(void)
 {
-	wrmsrq(MSR_IA32_PEBS_ENABLE, 0);
+	msr_write_ser(MSR_IA32_PEBS_ENABLE, 0);
 }
 
 static __always_inline void __intel_pmu_arch_lbr_disable(void)
 {
-	wrmsrq(MSR_ARCH_LBR_CTL, 0);
+	msr_write_ser(MSR_ARCH_LBR_CTL, 0);
 }
 
 static __always_inline void __intel_pmu_lbr_disable(void)
 {
 	u64 debugctl;
 
-	rdmsrq(MSR_IA32_DEBUGCTLMSR, debugctl);
+	debugctl = msr_read(MSR_IA32_DEBUGCTLMSR);
 	debugctl &= ~(DEBUGCTLMSR_LBR | DEBUGCTLMSR_FREEZE_LBRS_ON_PMI);
-	wrmsrq(MSR_IA32_DEBUGCTLMSR, debugctl);
+	msr_write_ser(MSR_IA32_DEBUGCTLMSR, debugctl);
 }
 
 int intel_pmu_save_and_restart(struct perf_event *event);
diff --git a/arch/x86/events/probe.c b/arch/x86/events/probe.c
index bb719d0d3f0b..85d591fab26c 100644
--- a/arch/x86/events/probe.c
+++ b/arch/x86/events/probe.c
@@ -45,7 +45,7 @@ perf_msr_probe(struct perf_msr *msr, int cnt, bool zero, void *data)
 			if (msr[bit].test && !msr[bit].test(bit, data))
 				continue;
 			/* Virt sucks; you cannot tell if a R/O MSR is present :/ */
-			if (rdmsrq_safe(msr[bit].msr, &val))
+			if (msr_read_safe(msr[bit].msr, &val))
 				continue;
 
 			mask = msr[bit].mask;
diff --git a/arch/x86/events/rapl.c b/arch/x86/events/rapl.c
index 8ed03c32f560..bb9ecf78fd90 100644
--- a/arch/x86/events/rapl.c
+++ b/arch/x86/events/rapl.c
@@ -193,7 +193,7 @@ static inline unsigned int get_rapl_pmu_idx(int cpu, int scope)
 static inline u64 rapl_read_counter(struct perf_event *event)
 {
 	u64 raw;
-	rdmsrq(event->hw.event_base, raw);
+	raw = msr_read(event->hw.event_base);
 	return raw;
 }
 
@@ -222,7 +222,7 @@ static u64 rapl_event_update(struct perf_event *event)
 
 	prev_raw_count = local64_read(&hwc->prev_count);
 	do {
-		rdmsrq(event->hw.event_base, new_raw_count);
+		new_raw_count = msr_read(event->hw.event_base);
 	} while (!local64_try_cmpxchg(&hwc->prev_count,
 				      &prev_raw_count, new_raw_count));
 
@@ -611,8 +611,8 @@ static int rapl_check_hw_unit(void)
 	u64 msr_rapl_power_unit_bits;
 	int i;
 
-	/* protect rdmsrq() to handle virtualization */
-	if (rdmsrq_safe(rapl_model->msr_power_unit, &msr_rapl_power_unit_bits))
+	/* protect msr_read() to handle virtualization */
+	if (msr_read_safe(rapl_model->msr_power_unit, &msr_rapl_power_unit_bits))
 		return -1;
 	for (i = 0; i < NR_RAPL_PKG_DOMAINS; i++)
 		rapl_pkg_hw_unit[i] = (msr_rapl_power_unit_bits >> 8) & 0x1FULL;
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH RFC 6/6] x86/cpu/mce: Switch code to use new MSR access functions
  2026-04-20  9:16 [PATCH RFC 0/6] x86/msr: Rename MSR access functions Juergen Gross
                   ` (4 preceding siblings ...)
  2026-04-20  9:16 ` [PATCH RFC 5/6] x86/events: Switch core parts to use new MSR access functions Juergen Gross
@ 2026-04-20  9:16 ` Juergen Gross
  2026-04-20 11:35 ` [PATCH RFC 0/6] x86/msr: Rename " Peter Zijlstra
  6 siblings, 0 replies; 20+ messages in thread
From: Juergen Gross @ 2026-04-20  9:16 UTC (permalink / raw)
  To: linux-kernel, x86, linux-edac
  Cc: Juergen Gross, Tony Luck, Borislav Petkov, Thomas Gleixner,
	Ingo Molnar, Dave Hansen, H. Peter Anvin

Switch the x86 MCE code to use the new msr_*() functions instead
of the rdmsr*()/wrmsr*() ones.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/kernel/cpu/mce/amd.c     | 101 +++++++++++++++---------------
 arch/x86/kernel/cpu/mce/core.c    |  18 +++---
 arch/x86/kernel/cpu/mce/inject.c  |  40 ++++++------
 arch/x86/kernel/cpu/mce/intel.c   |  32 +++++-----
 arch/x86/kernel/cpu/mce/p5.c      |  16 ++---
 arch/x86/kernel/cpu/mce/winchip.c |  10 +--
 6 files changed, 109 insertions(+), 108 deletions(-)

diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c
index 6605a0224659..56cdf9c46d92 100644
--- a/arch/x86/kernel/cpu/mce/amd.c
+++ b/arch/x86/kernel/cpu/mce/amd.c
@@ -280,11 +280,11 @@ static void smca_configure(unsigned int bank, unsigned int cpu)
 	u8 *bank_counts = this_cpu_ptr(smca_bank_counts);
 	const struct smca_hwid *s_hwid;
 	unsigned int i, hwid_mcatype;
-	u32 high, low;
+	struct msr val;
 	u32 smca_config = MSR_AMD64_SMCA_MCx_CONFIG(bank);
 
 	/* Set appropriate bits in MCA_CONFIG */
-	if (!rdmsr_safe(smca_config, &low, &high)) {
+	if (!msr_read_safe(smca_config, &val.q)) {
 		/*
 		 * OS is required to set the MCAX bit to acknowledge that it is
 		 * now using the new MSR ranges and new registers under each
@@ -294,7 +294,7 @@ static void smca_configure(unsigned int bank, unsigned int cpu)
 		 *
 		 * MCA_CONFIG[MCAX] is bit 32 (0 in the high portion of the MSR.)
 		 */
-		high |= BIT(0);
+		val.h |= BIT(0);
 
 		/*
 		 * SMCA sets the Deferred Error Interrupt type per bank.
@@ -307,9 +307,9 @@ static void smca_configure(unsigned int bank, unsigned int cpu)
 		 * APIC based interrupt. First, check that no interrupt has been
 		 * set.
 		 */
-		if ((low & BIT(5)) && !((high >> 5) & 0x3) && data->dfr_intr_en) {
+		if ((val.l & BIT(5)) && !((val.h >> 5) & 0x3) && data->dfr_intr_en) {
 			__set_bit(bank, data->dfr_intr_banks);
-			high |= BIT(5);
+			val.h |= BIT(5);
 		}
 
 		/*
@@ -324,33 +324,33 @@ static void smca_configure(unsigned int bank, unsigned int cpu)
 		 * The OS should set this to inform the platform that the OS is ready
 		 * to handle the MCA Thresholding interrupt.
 		 */
-		if ((low & BIT(10)) && data->thr_intr_en) {
+		if ((val.l & BIT(10)) && data->thr_intr_en) {
 			__set_bit(bank, data->thr_intr_banks);
-			high |= BIT(8);
+			val.h |= BIT(8);
 		}
 
-		this_cpu_ptr(mce_banks_array)[bank].lsb_in_status = !!(low & BIT(8));
+		this_cpu_ptr(mce_banks_array)[bank].lsb_in_status = !!(val.l & BIT(8));
 
-		if (low & MCI_CONFIG_PADDRV)
+		if (val.l & MCI_CONFIG_PADDRV)
 			this_cpu_ptr(smca_banks)[bank].paddrv = 1;
 
-		wrmsr(smca_config, low, high);
+		msr_write_ser(smca_config, val.q);
 	}
 
-	if (rdmsr_safe(MSR_AMD64_SMCA_MCx_IPID(bank), &low, &high)) {
+	if (msr_read_safe(MSR_AMD64_SMCA_MCx_IPID(bank), &val.q)) {
 		pr_warn("Failed to read MCA_IPID for bank %d\n", bank);
 		return;
 	}
 
-	hwid_mcatype = HWID_MCATYPE(high & MCI_IPID_HWID,
-				    (high & MCI_IPID_MCATYPE) >> 16);
+	hwid_mcatype = HWID_MCATYPE(val.h & MCI_IPID_HWID,
+				    (val.h & MCI_IPID_MCATYPE) >> 16);
 
 	for (i = 0; i < ARRAY_SIZE(smca_hwid_mcatypes); i++) {
 		s_hwid = &smca_hwid_mcatypes[i];
 
 		if (hwid_mcatype == s_hwid->hwid_mcatype) {
 			this_cpu_ptr(smca_banks)[bank].hwid = s_hwid;
-			this_cpu_ptr(smca_banks)[bank].id = low;
+			this_cpu_ptr(smca_banks)[bank].id = val.l;
 			this_cpu_ptr(smca_banks)[bank].sysfs_id = bank_counts[s_hwid->bank_type]++;
 			break;
 		}
@@ -432,50 +432,50 @@ static bool lvt_off_valid(struct threshold_block *b, int apic, u32 lo, u32 hi)
 static void threshold_restart_block(void *_tr)
 {
 	struct thresh_restart *tr = _tr;
-	u32 hi, lo;
+	struct msr val;
 
 	/* sysfs write might race against an offline operation */
 	if (!this_cpu_read(threshold_banks) && !tr->set_lvt_off)
 		return;
 
-	rdmsr(tr->b->address, lo, hi);
+	val.q = msr_read(tr->b->address);
 
 	/*
 	 * Reset error count and overflow bit.
 	 * This is done during init or after handling an interrupt.
 	 */
-	if (hi & MASK_OVERFLOW_HI || tr->set_lvt_off) {
-		hi &= ~(MASK_ERR_COUNT_HI | MASK_OVERFLOW_HI);
-		hi |= THRESHOLD_MAX - tr->b->threshold_limit;
+	if (val.h & MASK_OVERFLOW_HI || tr->set_lvt_off) {
+		val.h &= ~(MASK_ERR_COUNT_HI | MASK_OVERFLOW_HI);
+		val.h |= THRESHOLD_MAX - tr->b->threshold_limit;
 	} else if (tr->old_limit) {	/* change limit w/o reset */
-		int new_count = (hi & THRESHOLD_MAX) +
+		int new_count = (val.h & THRESHOLD_MAX) +
 		    (tr->old_limit - tr->b->threshold_limit);
 
-		hi = (hi & ~MASK_ERR_COUNT_HI) |
+		val.h = (val.h & ~MASK_ERR_COUNT_HI) |
 		    (new_count & THRESHOLD_MAX);
 	}
 
 	/* clear IntType */
-	hi &= ~MASK_INT_TYPE_HI;
+	val.h &= ~MASK_INT_TYPE_HI;
 
 	if (!tr->b->interrupt_capable)
 		goto done;
 
 	if (tr->set_lvt_off) {
-		if (lvt_off_valid(tr->b, tr->lvt_off, lo, hi)) {
+		if (lvt_off_valid(tr->b, tr->lvt_off, val.l, val.h)) {
 			/* set new lvt offset */
-			hi &= ~MASK_LVTOFF_HI;
-			hi |= tr->lvt_off << 20;
+			val.h &= ~MASK_LVTOFF_HI;
+			val.h |= tr->lvt_off << 20;
 		}
 	}
 
 	if (tr->b->interrupt_enable)
-		hi |= INT_TYPE_APIC;
+		val.h |= INT_TYPE_APIC;
 
  done:
 
-	hi |= MASK_COUNT_EN_HI;
-	wrmsr(tr->b->address, lo, hi);
+	val.h |= MASK_COUNT_EN_HI;
+	msr_write_ser(tr->b->address, val.q);
 }
 
 static void threshold_restart_bank(unsigned int bank, bool intr_en)
@@ -658,12 +658,12 @@ static void disable_err_thresholding(struct cpuinfo_x86 *c, unsigned int bank)
 		return;
 	}
 
-	rdmsrq(MSR_K7_HWCR, hwcr);
+	hwcr = msr_read(MSR_K7_HWCR);
 
 	/* McStatusWrEn has to be set */
 	need_toggle = !(hwcr & BIT(18));
 	if (need_toggle)
-		wrmsrq(MSR_K7_HWCR, hwcr | BIT(18));
+		msr_write_ser(MSR_K7_HWCR, hwcr | BIT(18));
 
 	/* Clear CntP bit safely */
 	for (i = 0; i < num_msrs; i++)
@@ -671,7 +671,7 @@ static void disable_err_thresholding(struct cpuinfo_x86 *c, unsigned int bank)
 
 	/* restore old settings */
 	if (need_toggle)
-		wrmsrq(MSR_K7_HWCR, hwcr);
+		msr_write_ser(MSR_K7_HWCR, hwcr);
 }
 
 static void amd_apply_cpu_quirks(struct cpuinfo_x86 *c)
@@ -710,7 +710,7 @@ static void smca_enable_interrupt_vectors(void)
 	if (!mce_flags.smca || !mce_flags.succor)
 		return;
 
-	if (rdmsrq_safe(MSR_CU_DEF_ERR, &mca_intr_cfg))
+	if (msr_read_safe(MSR_CU_DEF_ERR, &mca_intr_cfg))
 		return;
 
 	offset = (mca_intr_cfg & SMCA_THR_LVT_OFF) >> 12;
@@ -726,7 +726,8 @@ static void smca_enable_interrupt_vectors(void)
 void mce_amd_feature_init(struct cpuinfo_x86 *c)
 {
 	unsigned int bank, block, cpu = smp_processor_id();
-	u32 low = 0, high = 0, address = 0;
+	struct msr val = { .q = 0 };
+	u32 address = 0;
 	int offset = -1;
 
 	amd_apply_cpu_quirks(c);
@@ -746,21 +747,21 @@ void mce_amd_feature_init(struct cpuinfo_x86 *c)
 		disable_err_thresholding(c, bank);
 
 		for (block = 0; block < NR_BLOCKS; ++block) {
-			address = get_block_address(address, low, high, bank, block, cpu);
+			address = get_block_address(address, val.l, val.h, bank, block, cpu);
 			if (!address)
 				break;
 
-			if (rdmsr_safe(address, &low, &high))
+			if (msr_read_safe(address, &val.q))
 				break;
 
-			if (!(high & MASK_VALID_HI))
+			if (!(val.h & MASK_VALID_HI))
 				continue;
 
-			if (!(high & MASK_CNTP_HI)  ||
-			     (high & MASK_LOCKED_HI))
+			if (!(val.h & MASK_CNTP_HI)  ||
+			     (val.h & MASK_LOCKED_HI))
 				continue;
 
-			offset = prepare_threshold_block(bank, block, address, offset, high);
+			offset = prepare_threshold_block(bank, block, address, offset, val.h);
 		}
 	}
 }
@@ -969,13 +970,13 @@ store_threshold_limit(struct threshold_block *b, const char *buf, size_t size)
 
 static ssize_t show_error_count(struct threshold_block *b, char *buf)
 {
-	u32 lo, hi;
+	struct msr val;
 
 	/* CPU might be offline by now */
-	if (rdmsr_on_cpu(b->cpu, b->address, &lo, &hi))
+	if (msr_read_on_cpu(b->cpu, b->address, &val.q))
 		return -ENODEV;
 
-	return sprintf(buf, "%u\n", ((hi & THRESHOLD_MAX) -
+	return sprintf(buf, "%u\n", ((val.h & THRESHOLD_MAX) -
 				     (THRESHOLD_MAX - b->threshold_limit)));
 }
 
@@ -1083,24 +1084,24 @@ static int allocate_threshold_blocks(unsigned int cpu, struct threshold_bank *tb
 				     u32 address)
 {
 	struct threshold_block *b = NULL;
-	u32 low, high;
+	struct msr val;
 	int err;
 
 	if ((bank >= this_cpu_read(mce_num_banks)) || (block >= NR_BLOCKS))
 		return 0;
 
-	if (rdmsr_safe(address, &low, &high))
+	if (msr_read_safe(address, &val.q))
 		return 0;
 
-	if (!(high & MASK_VALID_HI)) {
+	if (!(val.h & MASK_VALID_HI)) {
 		if (block)
 			goto recurse;
 		else
 			return 0;
 	}
 
-	if (!(high & MASK_CNTP_HI)  ||
-	     (high & MASK_LOCKED_HI))
+	if (!(val.h & MASK_CNTP_HI)  ||
+	     (val.h & MASK_LOCKED_HI))
 		goto recurse;
 
 	b = kzalloc_obj(struct threshold_block);
@@ -1112,7 +1113,7 @@ static int allocate_threshold_blocks(unsigned int cpu, struct threshold_bank *tb
 	b->cpu			= cpu;
 	b->address		= address;
 	b->interrupt_enable	= 0;
-	b->interrupt_capable	= lvt_interrupt_supported(bank, high);
+	b->interrupt_capable	= lvt_interrupt_supported(bank, val.h);
 	b->threshold_limit	= get_thr_limit();
 
 	if (b->interrupt_capable) {
@@ -1124,13 +1125,13 @@ static int allocate_threshold_blocks(unsigned int cpu, struct threshold_bank *tb
 
 	list_add(&b->miscj, &tb->miscj);
 
-	mce_threshold_block_init(b, (high & MASK_LVTOFF_HI) >> 20);
+	mce_threshold_block_init(b, (val.h & MASK_LVTOFF_HI) >> 20);
 
 	err = kobject_init_and_add(&b->kobj, &threshold_ktype, tb->kobj, get_name(cpu, bank, b));
 	if (err)
 		goto out_free;
 recurse:
-	address = get_block_address(address, low, high, bank, ++block, cpu);
+	address = get_block_address(address, val.l, val.h, bank, ++block, cpu);
 	if (!address)
 		return 0;
 
diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index 8dd424ac5de8..acd73d96cc01 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -1871,7 +1871,7 @@ static void __mcheck_cpu_cap_init(void)
 	u64 cap;
 	u8 b;
 
-	rdmsrq(MSR_IA32_MCG_CAP, cap);
+	cap = msr_read(MSR_IA32_MCG_CAP);
 
 	b = cap & MCG_BANKCNT_MASK;
 
@@ -1890,9 +1890,9 @@ static void __mcheck_cpu_init_generic(void)
 {
 	u64 cap;
 
-	rdmsrq(MSR_IA32_MCG_CAP, cap);
+	cap = msr_read(MSR_IA32_MCG_CAP);
 	if (cap & MCG_CTL_P)
-		wrmsr(MSR_IA32_MCG_CTL, 0xffffffff, 0xffffffff);
+		msr_write_ser(MSR_IA32_MCG_CTL, ~0ULL);
 }
 
 static void __mcheck_cpu_init_prepare_banks(void)
@@ -1919,10 +1919,10 @@ static void __mcheck_cpu_init_prepare_banks(void)
 		if (!b->init)
 			continue;
 
-		wrmsrq(mca_msr_reg(i, MCA_CTL), b->ctl);
-		wrmsrq(mca_msr_reg(i, MCA_STATUS), 0);
+		msr_write_ser(mca_msr_reg(i, MCA_CTL), b->ctl);
+		msr_write_ser(mca_msr_reg(i, MCA_STATUS), 0);
 
-		rdmsrq(mca_msr_reg(i, MCA_CTL), msrval);
+		msrval = msr_read(mca_msr_reg(i, MCA_CTL));
 		b->init = !!msrval;
 	}
 }
@@ -2240,7 +2240,7 @@ void mca_bsp_init(struct cpuinfo_x86 *c)
 	if (mce_flags.smca)
 		smca_bsp_init();
 
-	rdmsrq(MSR_IA32_MCG_CAP, cap);
+	cap = msr_read(MSR_IA32_MCG_CAP);
 
 	/* Use accurate RIP reporting if available. */
 	if ((cap & MCG_EXT_P) && MCG_EXT_CNT(cap) >= 9)
@@ -2420,7 +2420,7 @@ static void mce_disable_error_reporting(void)
 		struct mce_bank *b = &mce_banks[i];
 
 		if (b->init)
-			wrmsrq(mca_msr_reg(i, MCA_CTL), 0);
+			msr_write_ser(mca_msr_reg(i, MCA_CTL), 0);
 	}
 	return;
 }
@@ -2776,7 +2776,7 @@ static void mce_reenable_cpu(void)
 		struct mce_bank *b = &mce_banks[i];
 
 		if (b->init)
-			wrmsrq(mca_msr_reg(i, MCA_CTL), b->ctl);
+			msr_write_ser(mca_msr_reg(i, MCA_CTL), b->ctl);
 	}
 }
 
diff --git a/arch/x86/kernel/cpu/mce/inject.c b/arch/x86/kernel/cpu/mce/inject.c
index d02c4f556cd0..157fb0777bd2 100644
--- a/arch/x86/kernel/cpu/mce/inject.c
+++ b/arch/x86/kernel/cpu/mce/inject.c
@@ -316,18 +316,18 @@ static struct notifier_block inject_nb = {
  */
 static int toggle_hw_mce_inject(unsigned int cpu, bool enable)
 {
-	u32 l, h;
+	struct msr val;
 	int err;
 
-	err = rdmsr_on_cpu(cpu, MSR_K7_HWCR, &l, &h);
+	err = msr_read_on_cpu(cpu, MSR_K7_HWCR, &val.q);
 	if (err) {
 		pr_err("%s: error reading HWCR\n", __func__);
 		return err;
 	}
 
-	enable ? (l |= BIT(18)) : (l &= ~BIT(18));
+	enable ? (val.l |= BIT(18)) : (val.l &= ~BIT(18));
 
-	err = wrmsr_on_cpu(cpu, MSR_K7_HWCR, l, h);
+	err = msr_write_on_cpu(cpu, MSR_K7_HWCR, val.q);
 	if (err)
 		pr_err("%s: error writing HWCR\n", __func__);
 
@@ -476,27 +476,27 @@ static void prepare_msrs(void *info)
 	struct mce m = *(struct mce *)info;
 	u8 b = m.bank;
 
-	wrmsrq(MSR_IA32_MCG_STATUS, m.mcgstatus);
+	msr_write_ser(MSR_IA32_MCG_STATUS, m.mcgstatus);
 
 	if (boot_cpu_has(X86_FEATURE_SMCA)) {
 		if (m.inject_flags == DFR_INT_INJ) {
-			wrmsrq(MSR_AMD64_SMCA_MCx_DESTAT(b), m.status);
-			wrmsrq(MSR_AMD64_SMCA_MCx_DEADDR(b), m.addr);
+			msr_write_ser(MSR_AMD64_SMCA_MCx_DESTAT(b), m.status);
+			msr_write_ser(MSR_AMD64_SMCA_MCx_DEADDR(b), m.addr);
 		} else {
-			wrmsrq(MSR_AMD64_SMCA_MCx_STATUS(b), m.status);
-			wrmsrq(MSR_AMD64_SMCA_MCx_ADDR(b), m.addr);
+			msr_write_ser(MSR_AMD64_SMCA_MCx_STATUS(b), m.status);
+			msr_write_ser(MSR_AMD64_SMCA_MCx_ADDR(b), m.addr);
 		}
 
-		wrmsrq(MSR_AMD64_SMCA_MCx_SYND(b), m.synd);
+		msr_write_ser(MSR_AMD64_SMCA_MCx_SYND(b), m.synd);
 
 		if (m.misc)
-			wrmsrq(MSR_AMD64_SMCA_MCx_MISC(b), m.misc);
+			msr_write_ser(MSR_AMD64_SMCA_MCx_MISC(b), m.misc);
 	} else {
-		wrmsrq(MSR_IA32_MCx_STATUS(b), m.status);
-		wrmsrq(MSR_IA32_MCx_ADDR(b), m.addr);
+		msr_write_ser(MSR_IA32_MCx_STATUS(b), m.status);
+		msr_write_ser(MSR_IA32_MCx_ADDR(b), m.addr);
 
 		if (m.misc)
-			wrmsrq(MSR_IA32_MCx_MISC(b), m.misc);
+			msr_write_ser(MSR_IA32_MCx_MISC(b), m.misc);
 	}
 }
 
@@ -590,7 +590,7 @@ static int inj_bank_set(void *data, u64 val)
 	u64 cap;
 
 	/* Get bank count on target CPU so we can handle non-uniform values. */
-	rdmsrq_on_cpu(m->extcpu, MSR_IA32_MCG_CAP, &cap);
+	msr_read_on_cpu(m->extcpu, MSR_IA32_MCG_CAP, &cap);
 	n_banks = cap & MCG_BANKCNT_MASK;
 
 	if (val >= n_banks) {
@@ -614,7 +614,7 @@ static int inj_bank_set(void *data, u64 val)
 	if (cpu_feature_enabled(X86_FEATURE_SMCA)) {
 		u64 ipid;
 
-		if (rdmsrq_on_cpu(m->extcpu, MSR_AMD64_SMCA_MCx_IPID(val), &ipid)) {
+		if (msr_read_on_cpu(m->extcpu, MSR_AMD64_SMCA_MCx_IPID(val), &ipid)) {
 			pr_err("Error reading IPID on CPU%d\n", m->extcpu);
 			return -EINVAL;
 		}
@@ -742,15 +742,15 @@ static void check_hw_inj_possible(void)
 		u64 status = MCI_STATUS_VAL, ipid;
 
 		/* Check whether bank is populated */
-		rdmsrq(MSR_AMD64_SMCA_MCx_IPID(bank), ipid);
+		ipid = msr_read(MSR_AMD64_SMCA_MCx_IPID(bank));
 		if (!ipid)
 			continue;
 
 		toggle_hw_mce_inject(cpu, true);
 
-		wrmsrq_safe(mca_msr_reg(bank, MCA_STATUS), status);
-		rdmsrq_safe(mca_msr_reg(bank, MCA_STATUS), &status);
-		wrmsrq_safe(mca_msr_reg(bank, MCA_STATUS), 0);
+		msr_write_safe_ser(mca_msr_reg(bank, MCA_STATUS), status);
+		msr_read_safe(mca_msr_reg(bank, MCA_STATUS), &status);
+		msr_write_safe_ser(mca_msr_reg(bank, MCA_STATUS), 0);
 
 		if (!status) {
 			hw_injection_possible = false;
diff --git a/arch/x86/kernel/cpu/mce/intel.c b/arch/x86/kernel/cpu/mce/intel.c
index 4655223ba560..cd24b55c6e0b 100644
--- a/arch/x86/kernel/cpu/mce/intel.c
+++ b/arch/x86/kernel/cpu/mce/intel.c
@@ -94,7 +94,7 @@ static bool cmci_supported(int *banks)
 	if (!boot_cpu_has(X86_FEATURE_APIC) || lapic_get_maxlvt() < 6)
 		return false;
 
-	rdmsrq(MSR_IA32_MCG_CAP, cap);
+	cap = msr_read(MSR_IA32_MCG_CAP);
 	*banks = min_t(unsigned, MAX_NR_BANKS, cap & MCG_BANKCNT_MASK);
 	return !!(cap & MCG_CMCI_P);
 }
@@ -106,7 +106,7 @@ static bool lmce_supported(void)
 	if (mca_cfg.lmce_disabled)
 		return false;
 
-	rdmsrq(MSR_IA32_MCG_CAP, tmp);
+	tmp = msr_read(MSR_IA32_MCG_CAP);
 
 	/*
 	 * LMCE depends on recovery support in the processor. Hence both
@@ -123,7 +123,7 @@ static bool lmce_supported(void)
 	 * WARN if the MSR isn't locked as init_ia32_feat_ctl() unconditionally
 	 * locks the MSR in the event that it wasn't already locked by BIOS.
 	 */
-	rdmsrq(MSR_IA32_FEAT_CTL, tmp);
+	tmp = msr_read(MSR_IA32_FEAT_CTL);
 	if (WARN_ON_ONCE(!(tmp & FEAT_CTL_LOCKED)))
 		return false;
 
@@ -141,9 +141,9 @@ static void cmci_set_threshold(int bank, int thresh)
 	u64 val;
 
 	raw_spin_lock_irqsave(&cmci_discover_lock, flags);
-	rdmsrq(MSR_IA32_MCx_CTL2(bank), val);
+	val = msr_read(MSR_IA32_MCx_CTL2(bank));
 	val &= ~MCI_CTL2_CMCI_THRESHOLD_MASK;
-	wrmsrq(MSR_IA32_MCx_CTL2(bank), val | thresh);
+	msr_write_ser(MSR_IA32_MCx_CTL2(bank), val | thresh);
 	raw_spin_unlock_irqrestore(&cmci_discover_lock, flags);
 }
 
@@ -184,7 +184,7 @@ static bool cmci_skip_bank(int bank, u64 *val)
 	if (test_bit(bank, mce_banks_ce_disabled))
 		return true;
 
-	rdmsrq(MSR_IA32_MCx_CTL2(bank), *val);
+	*val = msr_read(MSR_IA32_MCx_CTL2(bank));
 
 	/* Already owned by someone else? */
 	if (*val & MCI_CTL2_CMCI_EN) {
@@ -232,8 +232,8 @@ static void cmci_claim_bank(int bank, u64 val, int bios_zero_thresh, int *bios_w
 	struct mca_storm_desc *storm = this_cpu_ptr(&storm_desc);
 
 	val |= MCI_CTL2_CMCI_EN;
-	wrmsrq(MSR_IA32_MCx_CTL2(bank), val);
-	rdmsrq(MSR_IA32_MCx_CTL2(bank), val);
+	msr_write_ser(MSR_IA32_MCx_CTL2(bank), val);
+	val = msr_read(MSR_IA32_MCx_CTL2(bank));
 
 	/* If the enable bit did not stick, this bank should be polled. */
 	if (!(val & MCI_CTL2_CMCI_EN)) {
@@ -324,9 +324,9 @@ static void __cmci_disable_bank(int bank)
 
 	if (!test_bit(bank, this_cpu_ptr(mce_banks_owned)))
 		return;
-	rdmsrq(MSR_IA32_MCx_CTL2(bank), val);
+	val = msr_read(MSR_IA32_MCx_CTL2(bank));
 	val &= ~MCI_CTL2_CMCI_EN;
-	wrmsrq(MSR_IA32_MCx_CTL2(bank), val);
+	msr_write_ser(MSR_IA32_MCx_CTL2(bank), val);
 	__clear_bit(bank, this_cpu_ptr(mce_banks_owned));
 
 	if ((val & MCI_CTL2_CMCI_THRESHOLD_MASK) == CMCI_STORM_THRESHOLD)
@@ -430,10 +430,10 @@ void intel_init_lmce(void)
 	if (!lmce_supported())
 		return;
 
-	rdmsrq(MSR_IA32_MCG_EXT_CTL, val);
+	val = msr_read(MSR_IA32_MCG_EXT_CTL);
 
 	if (!(val & MCG_EXT_CTL_LMCE_EN))
-		wrmsrq(MSR_IA32_MCG_EXT_CTL, val | MCG_EXT_CTL_LMCE_EN);
+		msr_write_ser(MSR_IA32_MCG_EXT_CTL, val | MCG_EXT_CTL_LMCE_EN);
 }
 
 void intel_clear_lmce(void)
@@ -443,9 +443,9 @@ void intel_clear_lmce(void)
 	if (!lmce_supported())
 		return;
 
-	rdmsrq(MSR_IA32_MCG_EXT_CTL, val);
+	val = msr_read(MSR_IA32_MCG_EXT_CTL);
 	val &= ~MCG_EXT_CTL_LMCE_EN;
-	wrmsrq(MSR_IA32_MCG_EXT_CTL, val);
+	msr_write_ser(MSR_IA32_MCG_EXT_CTL, val);
 }
 
 /*
@@ -460,10 +460,10 @@ static void intel_imc_init(struct cpuinfo_x86 *c)
 	case INTEL_SANDYBRIDGE_X:
 	case INTEL_IVYBRIDGE_X:
 	case INTEL_HASWELL_X:
-		if (rdmsrq_safe(MSR_ERROR_CONTROL, &error_control))
+		if (msr_read_safe(MSR_ERROR_CONTROL, &error_control))
 			return;
 		error_control |= 2;
-		wrmsrq_safe(MSR_ERROR_CONTROL, error_control);
+		msr_write_safe_ser(MSR_ERROR_CONTROL, error_control);
 		break;
 	}
 }
diff --git a/arch/x86/kernel/cpu/mce/p5.c b/arch/x86/kernel/cpu/mce/p5.c
index 2272ad53fc33..973b98a90649 100644
--- a/arch/x86/kernel/cpu/mce/p5.c
+++ b/arch/x86/kernel/cpu/mce/p5.c
@@ -23,16 +23,16 @@ int mce_p5_enabled __read_mostly;
 /* Machine check handler for Pentium class Intel CPUs: */
 noinstr void pentium_machine_check(struct pt_regs *regs)
 {
-	u32 loaddr, hi, lotype;
+	struct msr addr, type;
 
 	instrumentation_begin();
-	rdmsr(MSR_IA32_P5_MC_ADDR, loaddr, hi);
-	rdmsr(MSR_IA32_P5_MC_TYPE, lotype, hi);
+	addr.q = msr_read(MSR_IA32_P5_MC_ADDR);
+	type.q = msr_read(MSR_IA32_P5_MC_TYPE);
 
 	pr_emerg("CPU#%d: Machine Check Exception:  0x%8X (type 0x%8X).\n",
-		 smp_processor_id(), loaddr, lotype);
+		 smp_processor_id(), addr.l, type.l);
 
-	if (lotype & (1<<5)) {
+	if (type.l & (1<<5)) {
 		pr_emerg("CPU#%d: Possible thermal failure (CPU on fire ?).\n",
 			 smp_processor_id());
 	}
@@ -44,7 +44,7 @@ noinstr void pentium_machine_check(struct pt_regs *regs)
 /* Set up machine check reporting for processors with Intel style MCE: */
 void intel_p5_mcheck_init(struct cpuinfo_x86 *c)
 {
-	u32 l, h;
+	u64 val;
 
 	/* Default P5 to off as its often misconnected: */
 	if (!mce_p5_enabled)
@@ -55,8 +55,8 @@ void intel_p5_mcheck_init(struct cpuinfo_x86 *c)
 		return;
 
 	/* Read registers before enabling: */
-	rdmsr(MSR_IA32_P5_MC_ADDR, l, h);
-	rdmsr(MSR_IA32_P5_MC_TYPE, l, h);
+	val = msr_read(MSR_IA32_P5_MC_ADDR);
+	val = msr_read(MSR_IA32_P5_MC_TYPE);
 	pr_info("Intel old style machine check architecture supported.\n");
 
 	/* Enable MCE: */
diff --git a/arch/x86/kernel/cpu/mce/winchip.c b/arch/x86/kernel/cpu/mce/winchip.c
index 6c99f2941909..425927c9dd5e 100644
--- a/arch/x86/kernel/cpu/mce/winchip.c
+++ b/arch/x86/kernel/cpu/mce/winchip.c
@@ -28,12 +28,12 @@ noinstr void winchip_machine_check(struct pt_regs *regs)
 /* Set up machine check reporting on the Winchip C6 series */
 void winchip_mcheck_init(struct cpuinfo_x86 *c)
 {
-	u32 lo, hi;
+	struct msr val;
 
-	rdmsr(MSR_IDT_FCR1, lo, hi);
-	lo |= (1<<2);	/* Enable EIERRINT (int 18 MCE) */
-	lo &= ~(1<<4);	/* Enable MCE */
-	wrmsr(MSR_IDT_FCR1, lo, hi);
+	val.q = msr_read(MSR_IDT_FCR1);
+	val.l |= (1<<2);	/* Enable EIERRINT (int 18 MCE) */
+	val.l &= ~(1<<4);	/* Enable MCE */
+	msr_write_ser(MSR_IDT_FCR1, val.q);
 
 	cr4_set_bits(X86_CR4_MCE);
 
-- 
2.53.0



* Re: [PATCH RFC 0/6] x86/msr: Rename MSR access functions
  2026-04-20  9:16 [PATCH RFC 0/6] x86/msr: Rename MSR access functions Juergen Gross
                   ` (5 preceding siblings ...)
  2026-04-20  9:16 ` [PATCH RFC 6/6] x86/cpu/mce: Switch code " Juergen Gross
@ 2026-04-20 11:35 ` Peter Zijlstra
  2026-04-20 11:41   ` Peter Zijlstra
  2026-04-20 11:49   ` Jürgen Groß
  6 siblings, 2 replies; 20+ messages in thread
From: Peter Zijlstra @ 2026-04-20 11:35 UTC (permalink / raw)
  To: Juergen Gross
  Cc: linux-kernel, x86, linux-perf-users, linux-edac, H. Peter Anvin,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
	Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
	James Clark, Tony Luck

On Mon, Apr 20, 2026 at 11:16:28AM +0200, Juergen Gross wrote:

> - Use functions instead of macros for accessing MSRs, which will drop
>   modifying variables passed as a parameter.
> 
> - Eliminate multiple accessors doing exactly the same thing (e.g.
>   rdmsrl() and rdmsrq()).

So far so sane.

> - Instead of having function names based on the underlying instruction
>   mnemonics, have functions of a common name space (msr_*()).

Not sure on this one. The whole msr_{read,write}_{safe,noser}() thing is
a royal pain. Also 'noser' reads to me as the noun that goes with 'to
nose' [he that noses (around), like baker: he that bakes].

I would much rather we just stick to the mnemonics here. All of this
really is about wrapping single instructions, no need to make it an
unreadable mess.


* Re: [PATCH RFC 0/6] x86/msr: Rename MSR access functions
  2026-04-20 11:35 ` [PATCH RFC 0/6] x86/msr: Rename " Peter Zijlstra
@ 2026-04-20 11:41   ` Peter Zijlstra
  2026-04-20 11:51     ` Jürgen Groß
  2026-04-20 11:49   ` Jürgen Groß
  1 sibling, 1 reply; 20+ messages in thread
From: Peter Zijlstra @ 2026-04-20 11:41 UTC (permalink / raw)
  To: Juergen Gross
  Cc: linux-kernel, x86, linux-perf-users, linux-edac, H. Peter Anvin,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
	Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
	James Clark, Tony Luck

On Mon, Apr 20, 2026 at 01:35:12PM +0200, Peter Zijlstra wrote:
> On Mon, Apr 20, 2026 at 11:16:28AM +0200, Juergen Gross wrote:
> 
> > - Use functions instead of macros for accessing MSRs, which will drop
> >   modifying variables passed as a parameter.
> > 
> > - Eliminate multiple accessors doing exactly the same thing (e.g.
> >   rdmsrl() and rdmsrq()).
> 
> So far so sane.
> 
> > - Instead of having function names based on the underlying instruction
> >   mnemonics, have functions of a common name space (msr_*()).
> 
> Not sure on this one. The whole msr_{read,write}_{safe,noser}() thing is
> a royal pain. Also 'noser' reads to me as the noun that goes with 'to
> nose' [he that noses (around), like baker: he that bakes].
> 
> I would much rather we just stick to the mnemonics here. All of this
> really is about wrapping single instructions, no need to make it an
> unreadable mess.

Also, the _safe suffix should just go away. All MSR accessors should be
'safe'.


* Re: [PATCH RFC 0/6] x86/msr: Rename MSR access functions
  2026-04-20 11:35 ` [PATCH RFC 0/6] x86/msr: Rename " Peter Zijlstra
  2026-04-20 11:41   ` Peter Zijlstra
@ 2026-04-20 11:49   ` Jürgen Groß
  2026-04-20 12:33     ` Peter Zijlstra
  1 sibling, 1 reply; 20+ messages in thread
From: Jürgen Groß @ 2026-04-20 11:49 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linux-kernel, x86, linux-perf-users, linux-edac, H. Peter Anvin,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
	Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
	James Clark, Tony Luck



On 20.04.26 13:35, Peter Zijlstra wrote:
> On Mon, Apr 20, 2026 at 11:16:28AM +0200, Juergen Gross wrote:
> 
>> - Use functions instead of macros for accessing MSRs, which will drop
>>    modifying variables passed as a parameter.
>>
>> - Eliminate multiple accessors doing exactly the same thing (e.g.
>>    rdmsrl() and rdmsrq()).
> 
> So far so sane.
> 
>> - Instead of having function names based on the underlying instruction
>>    mnemonics, have functions of a common name space (msr_*()).
> 
> Not sure on this one. The whole msr_{read,write}_{safe,noser}() thing is
> a royal pain. Also 'noser' reads to me as the noun that goes with 'to
> nose' [he that noses (around), like baker: he that bakes].

Naming is hard. :-)

What about s/ser/sync/ then?

> I would much rather we just stick to the mnemonics here. All of this
> really is about wrapping single instructions, no need to make it an
> unreadable mess.

I'm pretty sure most of the wrmsr*() use cases could switch to the
non-serializing variants. The problem is that not making the serializing
aspect visible in the function name will probably result in most new
instances still using the serializing variant even where the
non-serializing one would suffice.

Many of those use cases would suffer even more, as they would then not use
the immediate form of WRMSRNS, wasting the additional benefits of that
instruction.


Juergen



* Re: [PATCH RFC 0/6] x86/msr: Rename MSR access functions
  2026-04-20 11:41   ` Peter Zijlstra
@ 2026-04-20 11:51     ` Jürgen Groß
  2026-04-20 13:44       ` Sean Christopherson
  0 siblings, 1 reply; 20+ messages in thread
From: Jürgen Groß @ 2026-04-20 11:51 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linux-kernel, x86, linux-perf-users, linux-edac, H. Peter Anvin,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
	Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
	James Clark, Tony Luck



On 20.04.26 13:41, Peter Zijlstra wrote:
> On Mon, Apr 20, 2026 at 01:35:12PM +0200, Peter Zijlstra wrote:
>> On Mon, Apr 20, 2026 at 11:16:28AM +0200, Juergen Gross wrote:
>>
>>> - Use functions instead of macros for accessing MSRs, which will drop
>>>    modifying variables passed as a parameter.
>>>
>>> - Eliminate multiple accessors doing exactly the same thing (e.g.
>>>    rdmsrl() and rdmsrq()).
>>
>> So far so sane.
>>
>>> - Instead of having function names based on the underlying instruction
>>>    mnemonics, have functions of a common name space (msr_*()).
>>
>> Not sure on this one. The whole msr_{read,write}_{safe,noser}() thing is
>> a royal pain. Also 'noser' reads to me as the noun that goes with 'to
>> nose' [he that noses (around), like baker: he that bakes].
>>
>> I would much rather we just stick to the mnemonics here. All of this
>> really is about wrapping single instructions, no need to make it an
>> unreadable mess.
> 
> Also, the _safe suffix should just go away. All MSR accessors should be
> 'safe'.

That would be fine by me, but I'd like to have some confirmation this is
really the route to go.


Juergen



* Re: [PATCH RFC 0/6] x86/msr: Rename MSR access functions
  2026-04-20 11:49   ` Jürgen Groß
@ 2026-04-20 12:33     ` Peter Zijlstra
  2026-04-20 13:01       ` Jürgen Groß
  0 siblings, 1 reply; 20+ messages in thread
From: Peter Zijlstra @ 2026-04-20 12:33 UTC (permalink / raw)
  To: Jürgen Groß
  Cc: linux-kernel, x86, linux-perf-users, linux-edac, H. Peter Anvin,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
	Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
	James Clark, Tony Luck


On Mon, Apr 20, 2026 at 01:49:02PM +0200, Jürgen Groß wrote:
> On 20.04.26 13:35, Peter Zijlstra wrote:
> > On Mon, Apr 20, 2026 at 11:16:28AM +0200, Juergen Gross wrote:
> > 
> > > - Use functions instead of macros for accessing MSRs, which will drop
> > >    modifying variables passed as a parameter.
> > > 
> > > - Eliminate multiple accessors doing exactly the same thing (e.g.
> > >    rdmsrl() and rdmsrq()).
> > 
> > So far so sane.
> > 
> > > - Instead of having function names based on the underlying instruction
> > >    mnemonics, have functions of a common name space (msr_*()).
> > 
> > Not sure on this one. The whole msr_{read,write}_{safe,noser}() thing is
> > a royal pain. Also 'noser' reads to me as the noun that goes with 'to
> > nose' [he that noses (around), like baker: he that bakes].
> 
> Naming is hard. :-)
> 
> What about s/ser/sync/ then?
> 
> > I would much rather we just stick to the mnemonics here. All of this
> > really is about wrapping single instructions, no need to make it an
> > unreadable mess.
> 
> I'm pretty sure most of the wrmsr*() use cases could switch to the
> non-serializing variants. The problem is that not making the serializing
> aspect visible in the function name will probably result in most new
> instances still using the serializing variant even where the
> non-serializing one would suffice.
> 
> Many of those use cases would suffer even more, as they would then not use
> the immediate form of WRMSRNS, wasting the additional benefits of that
> instruction.

I'm confused; if we have a wrmsrns() function, it could check whether the msr
argument is a constant and use the immediate form, no?

That is, we have the following instructions: RDMSR, WRMSR and WRMSRNS,
and we should have the exact same functions:

	val = rdmsr(msr);
	wrmsr(msr, val);
	wrmsrns(msr, val);

The only interesting question is what to do with the 'safe' aspect. The
instruction takes a fault, we do the extable, but rdmsr() above already
has a return value, so that can't be used.

One option is, like uaccess and the proposed overflow, to use
labels like:

	val = rdmsr(msr, label);

And then, even though the wrmsr*() functions have the return available,
do we want to be consistent and do:

	wrmsr(msr, val, label);
	wrmsrns(msr, val, label);

rather than be inconsistent and have them return a boolean for
success?

What am I missing?
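
In userspace terms, the label idea could be mocked up roughly like this (a
hypothetical sketch only: a small array stands in for the MSR space, a bounds
check simulates the #GP fault, and a GCC statement expression with goto stands
in for the real asm goto plus extable machinery; all names are invented):

```c
#include <stdint.h>

/* Hypothetical mock -- NOT the kernel implementation.  Reading an
 * index outside the array simulates a faulting MSR access. */
static uint64_t fake_msrs[4] = { 0x1000, 0x2000, 0x3000, 0x4000 };

/* rdmsr_goto(msr, label) evaluates to the MSR value, or jumps to
 * @label on fault -- mirroring "val = rdmsr(msr, label)" above. */
#define rdmsr_goto(msr, label)		\
({					\
	uint32_t __m = (msr);		\
	if (__m >= 4)			\
		goto label;		\
	fake_msrs[__m];			\
})

/* Example caller: fall back to a default value when the read faults. */
static uint64_t read_or_default(uint32_t msr, uint64_t def)
{
	uint64_t val;

	val = rdmsr_goto(msr, fault);
	return val;
fault:
	return def;
}
```

The point of the shape is that the fault path becomes ordinary control flow at
the call site instead of an error code that has to be checked.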


* Re: [PATCH RFC 0/6] x86/msr: Rename MSR access functions
  2026-04-20 12:33     ` Peter Zijlstra
@ 2026-04-20 13:01       ` Jürgen Groß
  2026-04-20 13:10         ` Peter Zijlstra
  0 siblings, 1 reply; 20+ messages in thread
From: Jürgen Groß @ 2026-04-20 13:01 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linux-kernel, x86, linux-perf-users, linux-edac, H. Peter Anvin,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
	Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
	James Clark, Tony Luck


On 20.04.26 14:33, Peter Zijlstra wrote:
> On Mon, Apr 20, 2026 at 01:49:02PM +0200, Jürgen Groß wrote:
>> On 20.04.26 13:35, Peter Zijlstra wrote:
>>> On Mon, Apr 20, 2026 at 11:16:28AM +0200, Juergen Gross wrote:
>>>
>>>> - Use functions instead of macros for accessing MSRs, which will drop
>>>>     modifying variables passed as a parameter.
>>>>
>>>> - Eliminate multiple accessors doing exactly the same thing (e.g.
>>>>     rdmsrl() and rdmsrq()).
>>>
>>> So far so sane.
>>>
>>>> - Instead of having function names based on the underlying instruction
>>>>     mnemonics, have functions of a common name space (msr_*()).
>>>
>>> Not sure on this one. The whole msr_{read,write}_{safe,noser}() thing is
>>> a royal pain. Also 'noser' reads to me as the noun that goes with 'to
>>> nose' [he that noses (around), like baker: he that bakes].
>>
>> Naming is hard. :-)
>>
>> What about s/ser/sync/ then?
>>
>>> I would much rather we just stick to the mnemonics here. All of this
>>> really is about wrapping single instructions, no need to make it an
>>> unreadable mess.
>>
>> I'm pretty sure most of the wrmsr*() use cases could switch to the non
>> serializing variants. The problem not making the serializing aspect visible
>> in the function name will probably result in most new instances still using
>> the serializing variant instead of the probably possible non serializing one.
>>
>> Many of those use cases will even suffer more, as they won't use the
>> immediate form of WRMSRNS then, which would waste the additional benefits of
>> that instruction.
> 
> I'm confused, if we have a wrmsrns() function, that could see if the msr
> argument was a constant and use the immediate form, no?
> 
> That is, we have the following instructions: RDMSR, WRMSR, WRMSRNS
> And we should have the exact same functions:
> 
> 	val = rdmsr(msr);
> 	wrmsr(msr, val);
> 	wrmsrns(msr, val);

People tend to copy similar code, sometimes using older kernels as the source.

So even if wrmsrns() would be fine (and, as a result, the better choice), they
will more likely end up using wrmsr() instead.

Using new function names that imply the exact semantics (serializing vs.
non-serializing) will make it more likely that the correct one is used.

> The only interesting question is what to do with the 'safe' aspect. The
> instruction takes a fault, we do the extable, but rdmsr() above already
> has a return value, so that can't be used.
> 
> One option is to, like uaccess and the proposed overflow, is to use
> labels like:
> 
> 	val = rdmsr(msr, label);
> 
> And then, even though the wrmsr*() functions have the return available,
> do we want to be consistent and do:
> 
> 	wrmsr(msr, val, label);
> 	wrmsrns(msr, val, label);
> 
> rather than be inconsistent and have them have a boolean return for
> success.
> 
> What am I missing?

I like the idea of using a label, but it would require macros instead of
functions, so this is trading one aspect against another. I'm not sure
which is the better trade-off here.

An alternative might be to switch rdmsr() to the interface used by rdmsr_safe(),
i.e. let all the accessors return a bool for success/failure and use a pointer
for the MSR value in rdmsr().


Juergen


* Re: [PATCH RFC 0/6] x86/msr: Rename MSR access functions
  2026-04-20 13:01       ` Jürgen Groß
@ 2026-04-20 13:10         ` Peter Zijlstra
  2026-04-20 13:23           ` Jürgen Groß
  2026-04-20 13:36           ` Sean Christopherson
  0 siblings, 2 replies; 20+ messages in thread
From: Peter Zijlstra @ 2026-04-20 13:10 UTC (permalink / raw)
  To: Jürgen Groß
  Cc: linux-kernel, x86, linux-perf-users, linux-edac, H. Peter Anvin,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
	Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
	James Clark, Tony Luck

On Mon, Apr 20, 2026 at 03:01:31PM +0200, Jürgen Groß wrote:
> On 20.04.26 14:33, Peter Zijlstra wrote:

> > That is, we have the following instructions: RDMSR, WRMSR, WRMSRNS
> > And we should have the exact same functions:
> > 
> > 	val = rdmsr(msr);
> > 	wrmsr(msr, val);
> > 	wrmsrns(msr, val);
> 
> People tend to copy similar code, maybe using older kernels as the source.
> 
> So even if wrmsrns() would be fine (and, resulting from that, better), they
> will more likely end up using wrmsr() instead.
> 
> Using new function names implying the exact semantics (serializing vs.
> non-serializing) will make it more likely the correct one is being used.

You cannot fix stupid. If you want friction, the label thing will
ensure 'old' code doesn't compile and will need fixing.

Also, if wrmsrns() really is faster, the performance folks will finger
'incorrect' wrmsr() usage sooner or later.

> > The only interesting question is what to do with the 'safe' aspect. The
> > instruction takes a fault, we do the extable, but rdmsr() above already
> > has a return value, so that can't be used.
> > 
> > One option is to, like uaccess and the proposed overflow, is to use
> > labels like:
> > 
> > 	val = rdmsr(msr, label);
> > 
> > And then, even though the wrmsr*() functions have the return available,
> > do we want to be consistent and do:
> > 
> > 	wrmsr(msr, val, label);
> > 	wrmsrns(msr, val, label);
> > 
> > rather than be inconsistent and have them have a boolean return for
> > success.
> > 
> > What am I missing?
> 
> I like the idea to use a label, but this would result in the need to use
> macros instead of functions. So this is trading one aspect against another.
> I'm not sure which is the better one here.
> 
> An alternative might be to switch rdmsr() to the interface used by rdmsr_safe(),
> i.e. let all the accessors return a bool for success/failure and use a pointer
> for the MSR value in rdmsr().

Yes, either way around works. Perhaps that is 'better' because mostly we
don't care about the faults since we've checked the 'feature' earlier.

It's just inconvenient to have return-in-argument crud, but whatever ;-)


* Re: [PATCH RFC 0/6] x86/msr: Rename MSR access functions
  2026-04-20 13:10         ` Peter Zijlstra
@ 2026-04-20 13:23           ` Jürgen Groß
  2026-04-20 13:36           ` Sean Christopherson
  1 sibling, 0 replies; 20+ messages in thread
From: Jürgen Groß @ 2026-04-20 13:23 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linux-kernel, x86, linux-perf-users, linux-edac, H. Peter Anvin,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
	Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
	James Clark, Tony Luck


On 20.04.26 15:10, Peter Zijlstra wrote:
> On Mon, Apr 20, 2026 at 03:01:31PM +0200, Jürgen Groß wrote:
>> On 20.04.26 14:33, Peter Zijlstra wrote:
> 
>>> That is, we have the following instructions: RDMSR, WRMSR, WRMSRNS
>>> And we should have the exact same functions:
>>>
>>> 	val = rdmsr(msr);
>>> 	wrmsr(msr, val);
>>> 	wrmsrns(msr, val);
>>
>> People tend to copy similar code, maybe using older kernels as the source.
>>
>> So even if wrmsrns() would be fine (and, resulting from that, better), they
>> will more likely end up using wrmsr() instead.
>>
>> Using new function names implying the exact semantics (serializing vs.
>> non-serializing) will make it more likely the correct one is being used.
> 
> You cannot fix stupid. If you want friction, the label thing will
> ensure 'old' code doesn't compile and will need fixing.
> 
> Also, if wrmsrns() really is faster, the performance folks will finger
> 'incorrect' wrmsr() usage sooner or later.

Fine with me. :-)

> 
>>> The only interesting question is what to do with the 'safe' aspect. The
>>> instruction takes a fault, we do the extable, but rdmsr() above already
>>> has a return value, so that can't be used.
>>>
>>> One option is to, like uaccess and the proposed overflow, is to use
>>> labels like:
>>>
>>> 	val = rdmsr(msr, label);
>>>
>>> And then, even though the wrmsr*() functions have the return available,
>>> do we want to be consistent and do:
>>>
>>> 	wrmsr(msr, val, label);
>>> 	wrmsrns(msr, val, label);
>>>
>>> rather than be inconsistent and have them have a boolean return for
>>> success.
>>>
>>> What am I missing?
>>
>> I like the idea to use a label, but this would result in the need to use
>> macros instead of functions. So this is trading one aspect against another.
>> I'm not sure which is the better one here.
>>
>> An alternative might be to switch rdmsr() to the interface used by rdmsr_safe(),
>> i.e. let all the accessors return a bool for success/failure and use a pointer
>> for the MSR value in rdmsr().
> 
> Yes, either way around works. Perhaps that is 'better' because mostly we
> don't care about the faults since we've checked the 'feature' earlier.
> 
> Its just inconvenient to have return in argument crud, but whatever ;-)

Okay, so would you be fine with the following plan?

- drop *_safe() variants, switch the "normal" ones to "safe" semantics.
- make all interfaces return a bool (true == success, false == failure)
- switch all interfaces to use 64-bit values, drop the 32-bit low/high
   split variants
- use always_inline functions for all local MSR accessors

This will result in rdmsr(), wrmsr() and wrmsrns() as the only available
MSR access functions (plus the related *_on_cpu[s]() ones).
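
A userspace sketch of what that plan's interface might look like (hypothetical
mock only: a four-entry array stands in for the MSR space, indices >= 4
simulate a fault, and the trailing underscores avoid clashing with the real
names):

```c
#include <stdint.h>
#include <stdbool.h>

/* Mock of the planned interface: 64-bit values only, "safe"
 * semantics everywhere, bool return for success/failure. */
static uint64_t fake_msrs[4];	/* indices >= 4 simulate a #GP */

static inline bool rdmsr_(uint32_t msr, uint64_t *val)
{
	if (msr >= 4)
		return false;
	*val = fake_msrs[msr];
	return true;
}

static inline bool wrmsr_(uint32_t msr, uint64_t val)
{
	if (msr >= 4)
		return false;
	fake_msrs[msr] = val;
	return true;
}

/* wrmsrns_() would emit WRMSRNS instead of WRMSR; in a userspace
 * mock the two are behaviourally identical. */
static inline bool wrmsrns_(uint32_t msr, uint64_t val)
{
	return wrmsr_(msr, val);
}
```

With this shape every caller sees the same three functions, and the only
difference between the write variants is the serializing behaviour.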


Juergen


* Re: [PATCH RFC 0/6] x86/msr: Rename MSR access functions
  2026-04-20 13:10         ` Peter Zijlstra
  2026-04-20 13:23           ` Jürgen Groß
@ 2026-04-20 13:36           ` Sean Christopherson
  2026-04-20 13:57             ` Jürgen Groß
  1 sibling, 1 reply; 20+ messages in thread
From: Sean Christopherson @ 2026-04-20 13:36 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Jürgen Groß, linux-kernel, x86, linux-perf-users,
	linux-edac, H. Peter Anvin, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, Arnaldo Carvalho de Melo,
	Namhyung Kim, Mark Rutland, Alexander Shishkin, Jiri Olsa,
	Ian Rogers, Adrian Hunter, James Clark, Tony Luck

On Mon, Apr 20, 2026, Peter Zijlstra wrote:
> On Mon, Apr 20, 2026 at 03:01:31PM +0200, Jürgen Groß wrote:
> > On 20.04.26 14:33, Peter Zijlstra wrote:
> > > The only interesting question is what to do with the 'safe' aspect. The
> > > instruction takes a fault, we do the extable, but rdmsr() above already
> > > has a return value, so that can't be used.
> > > 
> > > One option is to, like uaccess and the proposed overflow, is to use
> > > labels like:
> > > 
> > > 	val = rdmsr(msr, label);
> > > 
> > > And then, even though the wrmsr*() functions have the return available,
> > > do we want to be consistent and do:
> > > 
> > > 	wrmsr(msr, val, label);
> > > 	wrmsrns(msr, val, label);
> > > 
> > > rather than be inconsistent and have them have a boolean return for
> > > success.
> > > 
> > > What am I missing?
> > 
> > I like the idea to use a label, but this would result in the need to use
> > macros instead of functions. So this is trading one aspect against another.
> > I'm not sure which is the better one here.
> > 
> > An alternative might be to switch rdmsr() to the interface used by rdmsr_safe(),
> > i.e. let all the accessors return a bool for success/failure and use a pointer
> > for the MSR value in rdmsr().
> 
> Yes, either way around works. Perhaps that is 'better' because mostly we
> don't care about the faults since we've checked the 'feature' earlier.
> 
> Its just inconvenient to have return in argument crud, but whatever ;-)

Why not do both?  There are definitely flows where one is obviously more readable
than the other.  E.g. if the RDMSR is being fed right back into a WRMSR, the
out-param version requires a local variable.  And it can be visually jarring if
the surrounding code is a bunch of "val = xyz" expressions.

On the other hand, the out-param with a 0/-errno return can be very useful too,
e.g. when wrapping the RDMSR in a multi-expression if-statement:

	if (rdmsrq_safe(MSR_IA32_CR_PAT, &host_pat) ||
	    (host_pat & GENMASK(2, 0)) != 6) {

That avoids having to assign with '=' inside the if-statement, and avoids
having to define a label.
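
As a userspace illustration of that pattern (a hypothetical mock: the MSR
number, the fault condition, and the PAT value are all invented for the
example, loosely modelled on the MSR_IA32_CR_PAT check quoted above):

```c
#include <stdint.h>
#include <stdbool.h>

#define GENMASK(h, l)	(((~0ULL) >> (63 - (h))) & ((~0ULL) << (l)))

/* Mock of rdmsrq_safe(): 0 on success, -EIO (-5) on "fault".
 * Only one fake MSR exists in this sketch. */
static int fake_rdmsrq_safe(uint32_t msr, uint64_t *val)
{
	if (msr != 0x277)
		return -5;
	*val = 0x0007040600070406ULL;	/* PAT0 == 6 (write-back) */
	return 0;
}

/* Mirrors the quoted multi-expression if-statement: the read and
 * the value check share one condition, no label needed. */
static bool host_pat_ok(uint32_t msr)
{
	uint64_t host_pat;

	if (fake_rdmsrq_safe(msr, &host_pat) ||
	    (host_pat & GENMASK(2, 0)) != 6)
		return false;
	return true;
}
```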

It would be trivial to add a wrapper around the label version; the hard part is
just the naming :-)


* Re: [PATCH RFC 0/6] x86/msr: Rename MSR access functions
  2026-04-20 11:51     ` Jürgen Groß
@ 2026-04-20 13:44       ` Sean Christopherson
  2026-04-20 14:04         ` Jürgen Groß
  0 siblings, 1 reply; 20+ messages in thread
From: Sean Christopherson @ 2026-04-20 13:44 UTC (permalink / raw)
  To: Jürgen Groß
  Cc: Peter Zijlstra, linux-kernel, x86, linux-perf-users, linux-edac,
	H. Peter Anvin, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
	Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
	James Clark, Tony Luck

On Mon, Apr 20, 2026, Jürgen Groß wrote:
> On 20.04.26 13:41, Peter Zijlstra wrote:
> > On Mon, Apr 20, 2026 at 01:35:12PM +0200, Peter Zijlstra wrote:
> > > On Mon, Apr 20, 2026 at 11:16:28AM +0200, Juergen Gross wrote:
> > > 
> > > > - Use functions instead of macros for accessing MSRs, which will drop
> > > >    modifying variables passed as a parameter.
> > > > 
> > > > - Eliminate multiple accessors doing exactly the same thing (e.g.
> > > >    rdmsrl() and rdmsrq()).
> > > 
> > > So far so sane.
> > > 
> > > > - Instead of having function names based on the underlying instruction
> > > >    mnemonics, have functions of a common name space (msr_*()).
> > > 
> > > Not sure on this one. The whole msr_{read,write}_{safe,noser}() thing is
> > > a royal pain. Also 'noser' reads to me as the noun that goes with 'to
> > > nose' [he that noses (around), like baker: he that bakes].
> > > 
> > > I would much rather we just stick to the mnemonics here. All of this
> > > really is about wrapping single instructions, no need to make it an
> > > unreadable mess.
> > 
> > Also, the _safe suffix should just go away. All MSR accessors should be
> > 'safe'.
> 
> That would be fine by me, but I'd like to have some confirmation this is
> really the route to go.

I don't care what the suffix is called, or if there is even a suffix, but there
needs to be a way for the caller to communicate that it wants to handle faults,
so that the "caller isn't going to handle a fault" case generates a WARN if the
access does #GP.

If we make rdmsr() return a u64, then we can vacate __rdmsr() and use that for
the version with a label.

E.g. something like this if we provide both the out-param and label variants:

static __always_inline u64 rdmsr(u32 msr)
{
	<this version WARNs on fault>
}

#define __rdmsr(msr, label)				\
({							\
	u64 __val;					\
							\
	<this version jumps to @label on fault>		\
	__val;						\
})

static __always_inline int rdmsr_p(u32 msr, u64 *val)
{
	<this version zeros *val on fault and returns 0/-errno>
}


* Re: [PATCH RFC 0/6] x86/msr: Rename MSR access functions
  2026-04-20 13:36           ` Sean Christopherson
@ 2026-04-20 13:57             ` Jürgen Groß
  0 siblings, 0 replies; 20+ messages in thread
From: Jürgen Groß @ 2026-04-20 13:57 UTC (permalink / raw)
  To: Sean Christopherson, Peter Zijlstra
  Cc: linux-kernel, x86, linux-perf-users, linux-edac, H. Peter Anvin,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
	Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
	James Clark, Tony Luck


On 20.04.26 15:36, Sean Christopherson wrote:
> On Mon, Apr 20, 2026, Peter Zijlstra wrote:
>> On Mon, Apr 20, 2026 at 03:01:31PM +0200, Jürgen Groß wrote:
>>> On 20.04.26 14:33, Peter Zijlstra wrote:
>>>> The only interesting question is what to do with the 'safe' aspect. The
>>>> instruction takes a fault, we do the extable, but rdmsr() above already
>>>> has a return value, so that can't be used.
>>>>
>>>> One option is to, like uaccess and the proposed overflow, is to use
>>>> labels like:
>>>>
>>>> 	val = rdmsr(msr, label);
>>>>
>>>> And then, even though the wrmsr*() functions have the return available,
>>>> do we want to be consistent and do:
>>>>
>>>> 	wrmsr(msr, val, label);
>>>> 	wrmsrns(msr, val, label);
>>>>
>>>> rather than be inconsistent and have them have a boolean return for
>>>> success.
>>>>
>>>> What am I missing?
>>>
>>> I like the idea to use a label, but this would result in the need to use
>>> macros instead of functions. So this is trading one aspect against another.
>>> I'm not sure which is the better one here.
>>>
>>> An alternative might be to switch rdmsr() to the interface used by rdmsr_safe(),
>>> i.e. let all the accessors return a bool for success/failure and use a pointer
>>> for the MSR value in rdmsr().
>>
>> Yes, either way around works. Perhaps that is 'better' because mostly we
>> don't care about the faults since we've checked the 'feature' earlier.
>>
>> Its just inconvenient to have return in argument crud, but whatever ;-)
> 
> Why not do both?  There are definitely flows where one is obviously more readable
> than the the other.  E.g. if the RDMSR is being fed right back into a WRMSR, the
> out-param version requires a local variable.  And it can be visually jarring if
> the surrounding code is a bunch of "val = xyz" expressions.

I looked through my search results for wrmsrq() and rdmsrq() use cases and
couldn't find such an instance. But maybe I have overlooked one, or you have
a patch pending that uses this pattern?

> On the other hand, the outparam with a 0/-errno return can be very useful too,
> e.g. when wrapping the RDMSR in a multi-expression if-statement:
> 
> 	if (rdmsrq_safe(MSR_IA32_CR_PAT, &host_pat) ||
> 	    (host_pat & GENMASK(2, 0)) != 6) {
> 
> As it avoids having to assign with '=' in the if-statement, and avoids having to
> define a label.
> 
> It would be trivial to add a wrapper around the label version, the hard part is
> just the naming :-)

Indeed, naming is hard.


Juergen


* Re: [PATCH RFC 0/6] x86/msr: Rename MSR access functions
  2026-04-20 13:44       ` Sean Christopherson
@ 2026-04-20 14:04         ` Jürgen Groß
  2026-04-20 15:34           ` H. Peter Anvin
  0 siblings, 1 reply; 20+ messages in thread
From: Jürgen Groß @ 2026-04-20 14:04 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Peter Zijlstra, linux-kernel, x86, linux-perf-users, linux-edac,
	H. Peter Anvin, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
	Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
	James Clark, Tony Luck


On 20.04.26 15:44, Sean Christopherson wrote:
> On Mon, Apr 20, 2026, Jürgen Groß wrote:
>> On 20.04.26 13:41, Peter Zijlstra wrote:
>>> On Mon, Apr 20, 2026 at 01:35:12PM +0200, Peter Zijlstra wrote:
>>>> On Mon, Apr 20, 2026 at 11:16:28AM +0200, Juergen Gross wrote:
>>>>
>>>>> - Use functions instead of macros for accessing MSRs, which will drop
>>>>>     modifying variables passed as a parameter.
>>>>>
>>>>> - Eliminate multiple accessors doing exactly the same thing (e.g.
>>>>>     rdmsrl() and rdmsrq()).
>>>>
>>>> So far so sane.
>>>>
>>>>> - Instead of having function names based on the underlying instruction
>>>>>     mnemonics, have functions of a common name space (msr_*()).
>>>>
>>>> Not sure on this one. The whole msr_{read,write}_{safe,noser}() thing is
>>>> a royal pain. Also 'noser' reads to me as the noun that goes with 'to
>>>> nose' [he that noses (around), like baker: he that bakes].
>>>>
>>>> I would much rather we just stick to the mnemonics here. All of this
>>>> really is about wrapping single instructions, no need to make it an
>>>> unreadable mess.
>>>
>>> Also, the _safe suffix should just go away. All MSR accessors should be
>>> 'safe'.
>>
>> That would be fine by me, but I'd like to have some confirmation this is
>> really the route to go.
> 
> I don't care what the suffix is called, or if there is even a suffix, but there
> needs to be a way for the caller to communicate that it wants to handle faults,
> so that the "caller isn't going to handle a fault" case generates a WARN if the
> access does #GP.

This could be handled by just switching parameters:

Let rdmsr() return a u64 and use a pointer parameter for 0/-errno. If that
parameter is NULL, we can do the WARN() on error.
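
A hypothetical userspace sketch of that parameter-switching idea (mock only:
the array simulates the MSR space and a counter stands in for WARN() firing;
all names are invented):

```c
#include <stddef.h>
#include <stdint.h>

static uint64_t fake_msrs[4] = { 10, 11, 12, 13 };
static int warn_count;	/* counts simulated WARN() hits */

/* rdmsr_() returns the value directly; @err is optional.  A NULL
 * @err means the caller will not handle faults, so a fault takes
 * the WARN path instead of reporting an error code. */
static uint64_t rdmsr_(uint32_t msr, int *err)
{
	if (msr >= 4) {			/* simulated #GP */
		if (err)
			*err = -5;	/* -EIO */
		else
			warn_count++;	/* WARN() in the kernel */
		return 0;
	}
	if (err)
		*err = 0;
	return fake_msrs[msr];
}
```

The appeal is that the common "we already checked the feature bit" callers can
pass NULL and keep the plain `val = rdmsr_(msr, NULL)` shape.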

> If we make rdmsr() return a u64, then we can vacate __rdmsr() and use that for
> the version with a label.
> 
> E.g. something like this if we provide both the out-param and label variants:
> 
> static __always_inline u64 rdmsr(u32 msr)
> {
> 	<this version WARNs on fault>
> }
> 
> #define __rdmsr(msr, label)
> ({
> 	u64 __val;
> 
> 	<this version jumps to @label on fault>
> 	__val;
> })
> 
> static __always_inline int rdmsr_p(u32 msr, u64 *val)
> {
> 	<this version zeros *val on fault and returns 0/-errno>
> }

Thinking of paravirt support, I'm not really sure the label variant is
something I'd like to do. It is possible, but it would certainly not be
more readable. :-)


Juergen


* Re: [PATCH RFC 0/6] x86/msr: Rename MSR access functions
  2026-04-20 14:04         ` Jürgen Groß
@ 2026-04-20 15:34           ` H. Peter Anvin
  0 siblings, 0 replies; 20+ messages in thread
From: H. Peter Anvin @ 2026-04-20 15:34 UTC (permalink / raw)
  To: Jürgen Groß, Sean Christopherson
  Cc: Peter Zijlstra, linux-kernel, x86, linux-perf-users, linux-edac,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland,
	Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
	James Clark, Tony Luck

On April 20, 2026 7:04:16 AM PDT, "Jürgen Groß" <jgross@suse.com> wrote:
>On 20.04.26 15:44, Sean Christopherson wrote:
>> On Mon, Apr 20, 2026, Jürgen Groß wrote:
>>> On 20.04.26 13:41, Peter Zijlstra wrote:
>>>> On Mon, Apr 20, 2026 at 01:35:12PM +0200, Peter Zijlstra wrote:
>>>>> On Mon, Apr 20, 2026 at 11:16:28AM +0200, Juergen Gross wrote:
>>>>> 
>>>>>> - Use functions instead of macros for accessing MSRs, which will drop
>>>>>>     modifying variables passed as a parameter.
>>>>>> 
>>>>>> - Eliminate multiple accessors doing exactly the same thing (e.g.
>>>>>>     rdmsrl() and rdmsrq()).
>>>>> 
>>>>> So far so sane.
>>>>> 
>>>>>> - Instead of having function names based on the underlying instruction
>>>>>>     mnemonics, have functions of a common name space (msr_*()).
>>>>> 
>>>>> Not sure on this one. The whole msr_{read,write}_{safe,noser}() thing is
>>>>> a royal pain. Also 'noser' reads to me as the noun that goes with 'to
>>>>> nose' [he that noses (around), like baker: he that bakes].
>>>>> 
>>>>> I would much rather we just stick to the mnemonics here. All of this
>>>>> really is about wrapping single instructions, no need to make it an
>>>>> unreadable mess.
>>>> 
>>>> Also, the _safe suffix should just go away. All MSR accessors should be
>>>> 'safe'.
>>> 
>>> That would be fine by me, but I'd like to have some confirmation this is
>>> really the route to go.
>> 
>> I don't care what the suffix is called, or if there is even a suffix, but there
>> needs to be a way for the caller to communicate that it wants to handle faults,
>> so that the "caller isn't going to handle a fault" case generates a WARN if the
>> access does #GP.
>
>This could be handled by just switching parameters:
>
>Let rdmsr() return a u64 and use a pointer parameter for 0/-errno. If that
>parameter is NULL we can do the WARN() on error.
>
>> If we make rdmsr() return a u64, then we can vacate __rdmsr() and use that for
>> the version with a label.
>> 
>> E.g. something like this if we provide both the out-param and label variants:
>> 
>> static __always_inline u64 rdmsr(u32 msr)
>> {
>> 	<this version WARNs on fault>
>> }
>> 
>> #define __rdmsr(msr, label)
>> ({
>> 	u64 __val;
>> 
>> 	<this version jumps to @label on fault>
>> 	__val;
>> })
>> 
>> static __always_inline int rdmsr_p(u32 msr, u64 *val)
>> {
>> 	<this version zeros *val on fault and returns 0/-errno>
>> }
>
>Thinking of paravirt support I'm not really sure the label variant is
>something I'd like to do. It is possible, but it would certainly not be
>more readable. :-)
>
>
>Juergen

Can we freaking let PV burn in hell please?



Thread overview: 20+ messages (links below jump to the message on this page)
2026-04-20  9:16 [PATCH RFC 0/6] x86/msr: Rename MSR access functions Juergen Gross
2026-04-20  9:16 ` [PATCH RFC 1/6] x86/msr: Rename msr_read() and msr_write() Juergen Gross
2026-04-20  9:16 ` [PATCH RFC 2/6] x86/msr: Create a new minimal set of local MSR access functions Juergen Gross
2026-04-20  9:16 ` [PATCH RFC 3/6] x86/msr: Create a new minimal set of inter-CPU " Juergen Gross
2026-04-20  9:16 ` [PATCH RFC 4/6] x86/msr: Rename the *_safe_regs[_on_cpu]() MSR functions Juergen Gross
2026-04-20  9:16 ` [PATCH RFC 5/6] x86/events: Switch core parts to use new MSR access functions Juergen Gross
2026-04-20  9:16 ` [PATCH RFC 6/6] x86/cpu/mce: Switch code " Juergen Gross
2026-04-20 11:35 ` [PATCH RFC 0/6] x86/msr: Rename " Peter Zijlstra
2026-04-20 11:41   ` Peter Zijlstra
2026-04-20 11:51     ` Jürgen Groß
2026-04-20 13:44       ` Sean Christopherson
2026-04-20 14:04         ` Jürgen Groß
2026-04-20 15:34           ` H. Peter Anvin
2026-04-20 11:49   ` Jürgen Groß
2026-04-20 12:33     ` Peter Zijlstra
2026-04-20 13:01       ` Jürgen Groß
2026-04-20 13:10         ` Peter Zijlstra
2026-04-20 13:23           ` Jürgen Groß
2026-04-20 13:36           ` Sean Christopherson
2026-04-20 13:57             ` Jürgen Groß
