* [patch V3 00/12] uaccess: Provide and use scopes for user masked access
@ 2025-10-17 10:08 Thomas Gleixner
  2025-10-17 10:08 ` [patch V3 01/12] ARM: uaccess: Implement missing __get_user_asm_dword() Thomas Gleixner
                   ` (13 more replies)
  0 siblings, 14 replies; 40+ messages in thread
From: Thomas Gleixner @ 2025-10-17 10:08 UTC (permalink / raw)
  To: LKML
  Cc: kernel test robot, Russell King, linux-arm-kernel, Linus Torvalds,
	x86, Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
	Christophe Leroy, linuxppc-dev, Paul Walmsley, Palmer Dabbelt,
	linux-riscv, Heiko Carstens, Christian Borntraeger, Sven Schnelle,
	linux-s390, Mathieu Desnoyers, Andrew Cooper, Julia Lawall,
	Nicolas Palix, Peter Zijlstra, Darren Hart, Davidlohr Bueso,
	André Almeida, Alexander Viro, Christian Brauner, Jan Kara,
	linux-fsdevel

This is a follow-up on the V2 feedback:

   https://lore.kernel.org/20250916163004.674341701@linutronix.de

The main concern over the V2 implementation was the requirement to have
the code within the macro itself.

The main reason for that was the issue with ASM GOTO within an auto cleanup
scope. Clang refuses to build when the ASM GOTO label is outside of the
scope, and GCC silently miscompiles the code and misses the cleanup.

After some back and forth discussion Linus suggested to put the local label
workaround into the user access functions themselves.

The second reason for having this construct was to make the potential
modification of the pointer (when the architecture supports masking) scope
local, as that preserves the original pointer for the failure path.

Andrew thankfully pointed me to nested for() loops and after some head
scratching I managed to get all of it hidden in that construct.

So now the scoped access looks like this:

	scoped_masked_user_read_access(ptr, efault) {
	        // @ptr is aliased. An eventual mask modification is scope local
		unsafe_get_user(val, ptr, efault);
		...
	}
	return 0;
efault:
        // @ptr is unmodified
	do_stuff(ptr);
	return -EFAULT;
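
The nested for() construct boils down to a scope which executes exactly
once and runs its cleanup no matter how it is left. A minimal user space
sketch of the idiom (simplified for illustration only; the real macros are
in patch 7 and use the kernel's cleanup class infrastructure):

	#include <stdbool.h>
	#include <stdio.h>

	static void unlock(int *token) { printf("cleanup %d\n", *token); }

	/* Leaving the scope with break/goto/return still runs unlock(),
	 * because the compiler invokes the cleanup attribute. */
	#define scoped_demo(tok)					\
		for (bool done = false; !done; done = true)		\
			for (int tok __attribute__((cleanup(unlock))) = 1; \
			     !done; done = true)

	int demo(bool fail)
	{
		scoped_demo(tok) {
			if (fail)
				return -1;	/* cleanup still runs */
		}
		return 0;
	}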


Changes vs. V2:

    - Fix the unsigned long long pointer issue in ARM get_user() -
      Christophe, Russell

    - Provide a generic workaround for the ASM GOTO issue and convert the
      affected architecture code over - Linus

    - Reimplement the scoped cleanup magic with nested for() loops - Andrew

    - Provide variants with size provided by the caller - Mathieu

    - Add get/put_user_masked() helpers for single read/write access

    - Fix up the usage in futex, x86 and select

    - A clumsy attempt to implement a coccinelle checker which catches
      access mismatches, e.g. unsafe_put_user() inside a
      scoped_masked_user_read_access() region. That needs more thought and
      more coccinelle foo and is just there for discussion.

The series is based on v6.18-rc1 and also available from git:

    git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git uaccess/masked

Thanks,

	tglx
---
Thomas Gleixner (12):
      ARM: uaccess: Implement missing __get_user_asm_dword()
      uaccess: Provide ASM GOTO safe wrappers for unsafe_*_user()
      x86/uaccess: Use unsafe wrappers for ASM GOTO
      powerpc/uaccess: Use unsafe wrappers for ASM GOTO
      riscv/uaccess: Use unsafe wrappers for ASM GOTO
      s390/uaccess: Use unsafe wrappers for ASM GOTO
      uaccess: Provide scoped masked user access regions
      uaccess: Provide put/get_user_masked()
      coccinelle: misc: Add scoped_masked_$MODE_access() checker script
      futex: Convert to scoped masked user access
      x86/futex: Convert to scoped masked user access
      select: Convert to scoped masked user access

---
 arch/arm/include/asm/uaccess.h               |   26 ++
 arch/powerpc/include/asm/uaccess.h           |    8 
 arch/riscv/include/asm/uaccess.h             |    8 
 arch/s390/include/asm/uaccess.h              |    4 
 arch/x86/include/asm/futex.h                 |   75 ++----
 arch/x86/include/asm/uaccess.h               |   12 -
 fs/select.c                                  |   12 -
 include/linux/uaccess.h                      |  313 ++++++++++++++++++++++++++-
 kernel/futex/futex.h                         |   37 ---
 scripts/coccinelle/misc/scoped_uaccess.cocci |  108 +++++++++
 10 files changed, 497 insertions(+), 106 deletions(-)


* [patch V3 01/12] ARM: uaccess: Implement missing __get_user_asm_dword()
  2025-10-17 10:08 [patch V3 00/12] uaccess: Provide and use scopes for user masked access Thomas Gleixner
@ 2025-10-17 10:08 ` Thomas Gleixner
  2025-10-17 12:36   ` Mathieu Desnoyers
  2025-10-17 10:08 ` [patch V3 02/12] uaccess: Provide ASM GOTO safe wrappers for unsafe_*_user() Thomas Gleixner
                   ` (12 subsequent siblings)
  13 siblings, 1 reply; 40+ messages in thread
From: Thomas Gleixner @ 2025-10-17 10:08 UTC (permalink / raw)
  To: LKML
  Cc: kernel test robot, Russell King, linux-arm-kernel, Linus Torvalds,
	x86, Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
	Christophe Leroy, linuxppc-dev, Paul Walmsley, Palmer Dabbelt,
	linux-riscv, Heiko Carstens, Christian Borntraeger, Sven Schnelle,
	linux-s390, Mathieu Desnoyers, Andrew Cooper, Julia Lawall,
	Nicolas Palix, Peter Zijlstra, Darren Hart, Davidlohr Bueso,
	André Almeida, Alexander Viro, Christian Brauner, Jan Kara,
	linux-fsdevel

When CONFIG_CPU_SPECTRE=n, get_user() is missing the 8-byte ASM variant for
no good reason. This prevents using get_user(u64) in generic code.

Implement it as a sequence of two 4-byte reads with LE/BE awareness, and
make the type of the intermediate variable to read into (unsigned long or
unsigned long long) dependent on the target type.

The __long_type() macro and idea were lifted from PowerPC. Thanks to
Christophe for pointing it out.

Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Russell King <linux@armlinux.org.uk>
Cc: linux-arm-kernel@lists.infradead.org
Closes: https://lore.kernel.org/oe-kbuild-all/202509120155.pFgwfeUD-lkp@intel.com/
---
V2a: Solve the *ptr issue vs. unsigned long long - Russell/Christophe
V2: New patch to fix the 0-day fallout
---
 arch/arm/include/asm/uaccess.h |   26 +++++++++++++++++++++++++-
 1 file changed, 25 insertions(+), 1 deletion(-)

--- a/arch/arm/include/asm/uaccess.h
+++ b/arch/arm/include/asm/uaccess.h
@@ -283,10 +283,17 @@ extern int __put_user_8(void *, unsigned
 	__gu_err;							\
 })
 
+/*
+ * This is a type: either unsigned long, if the argument fits into
+ * that type, or otherwise unsigned long long.
+ */
+#define __long_type(x) \
+	__typeof__(__builtin_choose_expr(sizeof(x) > sizeof(0UL), 0ULL, 0UL))
+
 #define __get_user_err(x, ptr, err, __t)				\
 do {									\
 	unsigned long __gu_addr = (unsigned long)(ptr);			\
-	unsigned long __gu_val;						\
+	__long_type(x) __gu_val;					\
 	unsigned int __ua_flags;					\
 	__chk_user_ptr(ptr);						\
 	might_fault();							\
@@ -295,6 +302,7 @@ do {									\
 	case 1:	__get_user_asm_byte(__gu_val, __gu_addr, err, __t); break;	\
 	case 2:	__get_user_asm_half(__gu_val, __gu_addr, err, __t); break;	\
 	case 4:	__get_user_asm_word(__gu_val, __gu_addr, err, __t); break;	\
+	case 8:	__get_user_asm_dword(__gu_val, __gu_addr, err, __t); break;	\
 	default: (__gu_val) = __get_user_bad();				\
 	}								\
 	uaccess_restore(__ua_flags);					\
@@ -353,6 +361,22 @@ do {									\
 #define __get_user_asm_word(x, addr, err, __t)			\
 	__get_user_asm(x, addr, err, "ldr" __t)
 
+#ifdef __ARMEB__
+#define __WORD0_OFFS	4
+#define __WORD1_OFFS	0
+#else
+#define __WORD0_OFFS	0
+#define __WORD1_OFFS	4
+#endif
+
+#define __get_user_asm_dword(x, addr, err, __t)				\
+	({								\
+	unsigned long __w0, __w1;					\
+	__get_user_asm(__w0, addr + __WORD0_OFFS, err, "ldr" __t);	\
+	__get_user_asm(__w1, addr + __WORD1_OFFS, err, "ldr" __t);	\
+	(x) = ((u64)__w1 << 32) | (u64) __w0;				\
+})
+
 #define __put_user_switch(x, ptr, __err, __fn)				\
 	do {								\
 		const __typeof__(*(ptr)) __user *__pu_ptr = (ptr);	\
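
With the 8-byte variant in place, a generic call site can fetch a u64 via
get_user() on 32-bit ARM as well. A hypothetical example (illustrative
only, not part of the patch; 'uaddr' is a made-up user pointer):

	u64 val;

	if (get_user(val, (u64 __user *)uaddr))
		return -EFAULT;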



* [patch V3 02/12] uaccess: Provide ASM GOTO safe wrappers for unsafe_*_user()
  2025-10-17 10:08 [patch V3 00/12] uaccess: Provide and use scopes for user masked access Thomas Gleixner
  2025-10-17 10:08 ` [patch V3 01/12] ARM: uaccess: Implement missing __get_user_asm_dword() Thomas Gleixner
@ 2025-10-17 10:08 ` Thomas Gleixner
  2025-10-17 12:43   ` Mathieu Desnoyers
  2025-10-17 10:09 ` [patch V3 03/12] x86/uaccess: Use unsafe wrappers for ASM GOTO Thomas Gleixner
                   ` (11 subsequent siblings)
  13 siblings, 1 reply; 40+ messages in thread
From: Thomas Gleixner @ 2025-10-17 10:08 UTC (permalink / raw)
  To: LKML
  Cc: Linus Torvalds, kernel test robot, Russell King, linux-arm-kernel,
	x86, Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
	Christophe Leroy, linuxppc-dev, Paul Walmsley, Palmer Dabbelt,
	linux-riscv, Heiko Carstens, Christian Borntraeger, Sven Schnelle,
	linux-s390, Mathieu Desnoyers, Andrew Cooper, Julia Lawall,
	Nicolas Palix, Peter Zijlstra, Darren Hart, Davidlohr Bueso,
	André Almeida, Alexander Viro, Christian Brauner, Jan Kara,
	linux-fsdevel

ASM GOTO is miscompiled by GCC when it is used inside an auto cleanup scope:

bool foo(u32 __user *p, u32 val)
{
	scoped_guard(pagefault)
		unsafe_put_user(val, p, efault);
	return true;
efault:
	return false;
}

 e80:	e8 00 00 00 00       	call   e85 <foo+0x5>
 e85:	65 48 8b 05 00 00 00 00 mov    %gs:0x0(%rip),%rax
 e8d:	83 80 04 14 00 00 01 	addl   $0x1,0x1404(%rax)   // pf_disable++
 e94:	89 37                	mov    %esi,(%rdi)
 e96:	83 a8 04 14 00 00 01 	subl   $0x1,0x1404(%rax)   // pf_disable--
 e9d:	b8 01 00 00 00       	mov    $0x1,%eax           // success
 ea2:	e9 00 00 00 00       	jmp    ea7 <foo+0x27>      // ret
 ea7:	31 c0                	xor    %eax,%eax           // fail
 ea9:	e9 00 00 00 00       	jmp    eae <foo+0x2e>      // ret

which is broken as it leaks the pagefault disable counter on failure.

Clang at least fails the build.

Linus suggested to add a local label into the macro scope and let that
jump to the actual caller supplied error label.

	do {							\
		__label__ local_label;				\
		arch_unsafe_get_user(x, ptr, local_label);	\
		if (0) {					\
	local_label:						\
			goto label;				\
		}						\
	} while (0)

That works for both GCC and clang.

clang:

 c80:	0f 1f 44 00 00       	   nopl   0x0(%rax,%rax,1)	
 c85:	65 48 8b 0c 25 00 00 00 00 mov    %gs:0x0,%rcx
 c8e:	ff 81 04 14 00 00    	   incl   0x1404(%rcx)	   // pf_disable++
 c94:	31 c0                	   xor    %eax,%eax        // set retval to false
 c96:	89 37                      mov    %esi,(%rdi)      // write
 c98:	b0 01                	   mov    $0x1,%al         // set retval to true
 c9a:	ff 89 04 14 00 00    	   decl   0x1404(%rcx)     // pf_disable--
 ca0:	2e e9 00 00 00 00    	   cs jmp ca6 <foo+0x26>   // ret

The exception table entry correctly points to c9a.

GCC:

 f70:   e8 00 00 00 00          call   f75 <baz+0x5>
 f75:   65 48 8b 05 00 00 00 00 mov    %gs:0x0(%rip),%rax
 f7d:   83 80 04 14 00 00 01    addl   $0x1,0x1404(%rax)  // pf_disable++
 f84:   8b 17                   mov    (%rdi),%edx
 f86:   89 16                   mov    %edx,(%rsi)
 f88:   83 a8 04 14 00 00 01    subl   $0x1,0x1404(%rax) // pf_disable--
 f8f:   b8 01 00 00 00          mov    $0x1,%eax         // success
 f94:   e9 00 00 00 00          jmp    f99 <baz+0x29>    // ret
 f99:   83 a8 04 14 00 00 01    subl   $0x1,0x1404(%rax) // pf_disable--
 fa0:   31 c0                   xor    %eax,%eax         // fail
 fa2:   e9 00 00 00 00          jmp    fa7 <baz+0x37>    // ret

The exception table entry correctly points to f99.

So both compilers optimize out the extra goto and emit correct and
efficient code.

Provide a generic wrapper to do that, to avoid modifying all of the affected
architecture specific implementations with that workaround.

The only change required for architectures is to rename unsafe_*_user() to
arch_unsafe_*_user(). That's done in subsequent changes.

Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 include/linux/uaccess.h |   72 +++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 68 insertions(+), 4 deletions(-)

--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -518,7 +518,34 @@ long strncpy_from_user_nofault(char *dst
 		long count);
 long strnlen_user_nofault(const void __user *unsafe_addr, long count);
 
-#ifndef __get_kernel_nofault
+#ifdef arch_get_kernel_nofault
+/*
+ * Wrap the architecture implementation so that @label can be outside of a
+ * cleanup() scope. A regular C goto works correctly, but ASM goto does
+ * not. Clang rejects such an attempt, but GCC silently emits buggy code.
+ */
+#define __get_kernel_nofault(dst, src, type, label)		\
+do {								\
+	__label__ local_label;					\
+	arch_get_kernel_nofault(dst, src, type, local_label);	\
+	if (0) {						\
+	local_label:						\
+		goto label;					\
+	}							\
+} while (0)
+
+#define __put_kernel_nofault(dst, src, type, label)		\
+do {								\
+	__label__ local_label;					\
+	arch_put_kernel_nofault(dst, src, type, local_label);	\
+	if (0) {						\
+	local_label:						\
+		goto label;					\
+	}							\
+} while (0)
+
+#elif !defined(__get_kernel_nofault) /* arch_get_kernel_nofault */
+
 #define __get_kernel_nofault(dst, src, type, label)	\
 do {							\
 	type __user *p = (type __force __user *)(src);	\
@@ -535,7 +562,8 @@ do {							\
 	if (__put_user(data, p))			\
 		goto label;				\
 } while (0)
-#endif
+
+#endif  /* !__get_kernel_nofault */
 
 /**
  * get_kernel_nofault(): safely attempt to read from a location
@@ -549,7 +577,42 @@ do {							\
 	copy_from_kernel_nofault(&(val), __gk_ptr, sizeof(val));\
 })
 
-#ifndef user_access_begin
+#ifdef user_access_begin
+
+#ifdef arch_unsafe_get_user
+/*
+ * Wrap the architecture implementation so that @label can be outside of a
+ * cleanup() scope. A regular C goto works correctly, but ASM goto does
+ * not. Clang rejects such an attempt, but GCC silently emits buggy code.
+ *
+ * Some architectures use internal local labels already, but this extra
+ * indirection here is harmless because the compiler optimizes it out
+ * completely in any case. This construct just ensures that the ASM GOTO
+ * target is always in the local scope. The C goto 'label' works correctly
+ * when leaving a cleanup() scope.
+ */
+#define unsafe_get_user(x, ptr, label)			\
+do {							\
+	__label__ local_label;				\
+	arch_unsafe_get_user(x, ptr, local_label);	\
+	if (0) {					\
+	local_label:					\
+		goto label;				\
+	}						\
+} while (0)
+
+#define unsafe_put_user(x, ptr, label)			\
+do {							\
+	__label__ local_label;				\
+	arch_unsafe_put_user(x, ptr, local_label);	\
+	if (0) {					\
+	local_label:					\
+		goto label;				\
+	}						\
+} while (0)
+#endif /* arch_unsafe_get_user */
+
+#else /* user_access_begin */
 #define user_access_begin(ptr,len) access_ok(ptr, len)
 #define user_access_end() do { } while (0)
 #define unsafe_op_wrap(op, err) do { if (unlikely(op)) goto err; } while (0)
@@ -559,7 +622,8 @@ do {							\
 #define unsafe_copy_from_user(d,s,l,e) unsafe_op_wrap(__copy_from_user(d,s,l),e)
 static inline unsigned long user_access_save(void) { return 0UL; }
 static inline void user_access_restore(unsigned long flags) { }
-#endif
+#endif /* !user_access_begin */
+
 #ifndef user_write_access_begin
 #define user_write_access_begin user_access_begin
 #define user_write_access_end user_access_end



* [patch V3 03/12] x86/uaccess: Use unsafe wrappers for ASM GOTO
  2025-10-17 10:08 [patch V3 00/12] uaccess: Provide and use scopes for user masked access Thomas Gleixner
  2025-10-17 10:08 ` [patch V3 01/12] ARM: uaccess: Implement missing __get_user_asm_dword() Thomas Gleixner
  2025-10-17 10:08 ` [patch V3 02/12] uaccess: Provide ASM GOTO safe wrappers for unsafe_*_user() Thomas Gleixner
@ 2025-10-17 10:09 ` Thomas Gleixner
  2025-10-17 10:09 ` [patch V3 04/12] powerpc/uaccess: " Thomas Gleixner
                   ` (10 subsequent siblings)
  13 siblings, 0 replies; 40+ messages in thread
From: Thomas Gleixner @ 2025-10-17 10:09 UTC (permalink / raw)
  To: LKML
  Cc: x86, kernel test robot, Russell King, linux-arm-kernel,
	Linus Torvalds, Madhavan Srinivasan, Michael Ellerman,
	Nicholas Piggin, Christophe Leroy, linuxppc-dev, Paul Walmsley,
	Palmer Dabbelt, linux-riscv, Heiko Carstens,
	Christian Borntraeger, Sven Schnelle, linux-s390,
	Mathieu Desnoyers, Andrew Cooper, Julia Lawall, Nicolas Palix,
	Peter Zijlstra, Darren Hart, Davidlohr Bueso, André Almeida,
	Alexander Viro, Christian Brauner, Jan Kara, linux-fsdevel

ASM GOTO is miscompiled by GCC when it is used inside an auto cleanup scope:

bool foo(u32 __user *p, u32 val)
{
	scoped_guard(pagefault)
		unsafe_put_user(val, p, efault);
	return true;
efault:
	return false;
}

It ends up leaking the pagefault disable counter in the fault path. Clang
at least fails the build.

Rename unsafe_*_user() to arch_unsafe_*_user() which makes the generic
uaccess header wrap it with a local label that makes both compilers emit
correct code. Same for the kernel_nofault() variants.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: x86@kernel.org
---
 arch/x86/include/asm/uaccess.h |   12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -528,18 +528,18 @@ static __must_check __always_inline bool
 #define user_access_save()	smap_save()
 #define user_access_restore(x)	smap_restore(x)
 
-#define unsafe_put_user(x, ptr, label)	\
+#define arch_unsafe_put_user(x, ptr, label)	\
 	__put_user_size((__typeof__(*(ptr)))(x), (ptr), sizeof(*(ptr)), label)
 
 #ifdef CONFIG_CC_HAS_ASM_GOTO_OUTPUT
-#define unsafe_get_user(x, ptr, err_label)					\
+#define arch_unsafe_get_user(x, ptr, err_label)					\
 do {										\
 	__inttype(*(ptr)) __gu_val;						\
 	__get_user_size(__gu_val, (ptr), sizeof(*(ptr)), err_label);		\
 	(x) = (__force __typeof__(*(ptr)))__gu_val;				\
 } while (0)
 #else // !CONFIG_CC_HAS_ASM_GOTO_OUTPUT
-#define unsafe_get_user(x, ptr, err_label)					\
+#define arch_unsafe_get_user(x, ptr, err_label)					\
 do {										\
 	int __gu_err;								\
 	__inttype(*(ptr)) __gu_val;						\
@@ -618,11 +618,11 @@ do {									\
 } while (0)
 
 #ifdef CONFIG_CC_HAS_ASM_GOTO_OUTPUT
-#define __get_kernel_nofault(dst, src, type, err_label)			\
+#define arch_get_kernel_nofault(dst, src, type, err_label)		\
 	__get_user_size(*((type *)(dst)), (__force type __user *)(src),	\
 			sizeof(type), err_label)
 #else // !CONFIG_CC_HAS_ASM_GOTO_OUTPUT
-#define __get_kernel_nofault(dst, src, type, err_label)			\
+#define arch_get_kernel_nofault(dst, src, type, err_label)			\
 do {									\
 	int __kr_err;							\
 									\
@@ -633,7 +633,7 @@ do {									\
 } while (0)
 #endif // CONFIG_CC_HAS_ASM_GOTO_OUTPUT
 
-#define __put_kernel_nofault(dst, src, type, err_label)			\
+#define arch_put_kernel_nofault(dst, src, type, err_label)		\
 	__put_user_size(*((type *)(src)), (__force type __user *)(dst),	\
 			sizeof(type), err_label)
 



* [patch V3 04/12] powerpc/uaccess: Use unsafe wrappers for ASM GOTO
  2025-10-17 10:08 [patch V3 00/12] uaccess: Provide and use scopes for user masked access Thomas Gleixner
                   ` (2 preceding siblings ...)
  2025-10-17 10:09 ` [patch V3 03/12] x86/uaccess: Use unsafe wrappers for ASM GOTO Thomas Gleixner
@ 2025-10-17 10:09 ` Thomas Gleixner
  2025-10-17 10:09 ` [patch V3 05/12] riscv/uaccess: " Thomas Gleixner
                   ` (9 subsequent siblings)
  13 siblings, 0 replies; 40+ messages in thread
From: Thomas Gleixner @ 2025-10-17 10:09 UTC (permalink / raw)
  To: LKML
  Cc: Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
	Christophe Leroy, linuxppc-dev, kernel test robot, Russell King,
	linux-arm-kernel, Linus Torvalds, x86, Paul Walmsley,
	Palmer Dabbelt, linux-riscv, Heiko Carstens,
	Christian Borntraeger, Sven Schnelle, linux-s390,
	Mathieu Desnoyers, Andrew Cooper, Julia Lawall, Nicolas Palix,
	Peter Zijlstra, Darren Hart, Davidlohr Bueso, André Almeida,
	Alexander Viro, Christian Brauner, Jan Kara, linux-fsdevel

ASM GOTO is miscompiled by GCC when it is used inside an auto cleanup scope:

bool foo(u32 __user *p, u32 val)
{
	scoped_guard(pagefault)
		unsafe_put_user(val, p, efault);
	return true;
efault:
	return false;
}

It ends up leaking the pagefault disable counter in the fault path. Clang
at least fails the build.

Rename unsafe_*_user() to arch_unsafe_*_user() which makes the generic
uaccess header wrap it with a local label that makes both compilers emit
correct code. Same for the kernel_nofault() variants.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Madhavan Srinivasan <maddy@linux.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: linuxppc-dev@lists.ozlabs.org
---
 arch/powerpc/include/asm/uaccess.h |    8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -451,7 +451,7 @@ user_write_access_begin(const void __use
 #define user_write_access_begin	user_write_access_begin
 #define user_write_access_end		prevent_current_write_to_user
 
-#define unsafe_get_user(x, p, e) do {					\
+#define arch_unsafe_get_user(x, p, e) do {			\
 	__long_type(*(p)) __gu_val;				\
 	__typeof__(*(p)) __user *__gu_addr = (p);		\
 								\
@@ -459,7 +459,7 @@ user_write_access_begin(const void __use
 	(x) = (__typeof__(*(p)))__gu_val;			\
 } while (0)
 
-#define unsafe_put_user(x, p, e) \
+#define arch_unsafe_put_user(x, p, e)				\
 	__put_user_size_goto((__typeof__(*(p)))(x), (p), sizeof(*(p)), e)
 
 #define unsafe_copy_from_user(d, s, l, e) \
@@ -504,11 +504,11 @@ do {									\
 		unsafe_put_user(*(u8*)(_src + _i), (u8 __user *)(_dst + _i), e); \
 } while (0)
 
-#define __get_kernel_nofault(dst, src, type, err_label)			\
+#define arch_get_kernel_nofault(dst, src, type, err_label)		\
 	__get_user_size_goto(*((type *)(dst)),				\
 		(__force type __user *)(src), sizeof(type), err_label)
 
-#define __put_kernel_nofault(dst, src, type, err_label)			\
+#define arch_put_kernel_nofault(dst, src, type, err_label)		\
 	__put_user_size_goto(*((type *)(src)),				\
 		(__force type __user *)(dst), sizeof(type), err_label)
 



* [patch V3 05/12] riscv/uaccess: Use unsafe wrappers for ASM GOTO
  2025-10-17 10:08 [patch V3 00/12] uaccess: Provide and use scopes for user masked access Thomas Gleixner
                   ` (3 preceding siblings ...)
  2025-10-17 10:09 ` [patch V3 04/12] powerpc/uaccess: " Thomas Gleixner
@ 2025-10-17 10:09 ` Thomas Gleixner
  2025-10-17 10:09 ` [patch V3 06/12] s390/uaccess: " Thomas Gleixner
                   ` (8 subsequent siblings)
  13 siblings, 0 replies; 40+ messages in thread
From: Thomas Gleixner @ 2025-10-17 10:09 UTC (permalink / raw)
  To: LKML
  Cc: Paul Walmsley, Palmer Dabbelt, linux-riscv, kernel test robot,
	Russell King, linux-arm-kernel, Linus Torvalds, x86,
	Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
	Christophe Leroy, linuxppc-dev, Heiko Carstens,
	Christian Borntraeger, Sven Schnelle, linux-s390,
	Mathieu Desnoyers, Andrew Cooper, Julia Lawall, Nicolas Palix,
	Peter Zijlstra, Darren Hart, Davidlohr Bueso, André Almeida,
	Alexander Viro, Christian Brauner, Jan Kara, linux-fsdevel

ASM GOTO is miscompiled by GCC when it is used inside an auto cleanup scope:

bool foo(u32 __user *p, u32 val)
{
	scoped_guard(pagefault)
		unsafe_put_user(val, p, efault);
	return true;
efault:
	return false;
}

It ends up leaking the pagefault disable counter in the fault path. Clang
at least fails the build.

Rename unsafe_*_user() to arch_unsafe_*_user() which makes the generic
uaccess header wrap it with a local label that makes both compilers emit
correct code. Same for the kernel_nofault() variants.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Paul Walmsley <pjw@kernel.org>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: linux-riscv@lists.infradead.org
---
 arch/riscv/include/asm/uaccess.h |    8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

--- a/arch/riscv/include/asm/uaccess.h
+++ b/arch/riscv/include/asm/uaccess.h
@@ -437,10 +437,10 @@ unsigned long __must_check clear_user(vo
 		__clear_user(untagged_addr(to), n) : n;
 }
 
-#define __get_kernel_nofault(dst, src, type, err_label)			\
+#define arch_get_kernel_nofault(dst, src, type, err_label)			\
 	__get_user_nocheck(*((type *)(dst)), (__force __user type *)(src), err_label)
 
-#define __put_kernel_nofault(dst, src, type, err_label)			\
+#define arch_put_kernel_nofault(dst, src, type, err_label)			\
 	__put_user_nocheck(*((type *)(src)), (__force __user type *)(dst), err_label)
 
 static __must_check __always_inline bool user_access_begin(const void __user *ptr, size_t len)
@@ -460,10 +460,10 @@ static inline void user_access_restore(u
  * We want the unsafe accessors to always be inlined and use
  * the error labels - thus the macro games.
  */
-#define unsafe_put_user(x, ptr, label)					\
+#define arch_unsafe_put_user(x, ptr, label)				\
 	__put_user_nocheck(x, (ptr), label)
 
-#define unsafe_get_user(x, ptr, label)	do {				\
+#define arch_unsafe_get_user(x, ptr, label)	do {			\
 	__inttype(*(ptr)) __gu_val;					\
 	__get_user_nocheck(__gu_val, (ptr), label);			\
 	(x) = (__force __typeof__(*(ptr)))__gu_val;			\



* [patch V3 06/12] s390/uaccess: Use unsafe wrappers for ASM GOTO
  2025-10-17 10:08 [patch V3 00/12] uaccess: Provide and use scopes for user masked access Thomas Gleixner
                   ` (4 preceding siblings ...)
  2025-10-17 10:09 ` [patch V3 05/12] riscv/uaccess: " Thomas Gleixner
@ 2025-10-17 10:09 ` Thomas Gleixner
  2025-10-17 10:09 ` [patch V3 07/12] uaccess: Provide scoped masked user access regions Thomas Gleixner
                   ` (7 subsequent siblings)
  13 siblings, 0 replies; 40+ messages in thread
From: Thomas Gleixner @ 2025-10-17 10:09 UTC (permalink / raw)
  To: LKML
  Cc: Heiko Carstens, Christian Borntraeger, Sven Schnelle, linux-s390,
	kernel test robot, Russell King, linux-arm-kernel, Linus Torvalds,
	x86, Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
	Christophe Leroy, linuxppc-dev, Paul Walmsley, Palmer Dabbelt,
	linux-riscv, Mathieu Desnoyers, Andrew Cooper, Julia Lawall,
	Nicolas Palix, Peter Zijlstra, Darren Hart, Davidlohr Bueso,
	André Almeida, Alexander Viro, Christian Brauner, Jan Kara,
	linux-fsdevel

ASM GOTO is miscompiled by GCC when it is used inside an auto cleanup scope:

bool foo(u32 __user *p, u32 val)
{
	scoped_guard(pagefault)
		unsafe_put_user(val, p, efault);
	return true;
efault:
	return false;
}

It ends up leaking the pagefault disable counter in the fault path. Clang
at least fails the build.

S390 is not affected for unsafe_*_user() as it already uses its own local
labels, but __get/put_kernel_nofault() lack that.

Rename them to arch_*_kernel_nofault() which makes the generic uaccess
header wrap it with a local label that makes both compilers emit correct
code.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Christian Borntraeger <borntraeger@linux.ibm.com>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: linux-s390@vger.kernel.org
---
 arch/s390/include/asm/uaccess.h |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

--- a/arch/s390/include/asm/uaccess.h
+++ b/arch/s390/include/asm/uaccess.h
@@ -468,8 +468,8 @@ do {									\
 
 #endif /* CONFIG_CC_HAS_ASM_GOTO_OUTPUT && CONFIG_CC_HAS_ASM_AOR_FORMAT_FLAGS */
 
-#define __get_kernel_nofault __mvc_kernel_nofault
-#define __put_kernel_nofault __mvc_kernel_nofault
+#define arch_get_kernel_nofault __mvc_kernel_nofault
+#define arch_put_kernel_nofault __mvc_kernel_nofault
 
 void __cmpxchg_user_key_called_with_bad_pointer(void);
 



* [patch V3 07/12] uaccess: Provide scoped masked user access regions
  2025-10-17 10:08 [patch V3 00/12] uaccess: Provide and use scopes for user masked access Thomas Gleixner
                   ` (5 preceding siblings ...)
  2025-10-17 10:09 ` [patch V3 06/12] s390/uaccess: " Thomas Gleixner
@ 2025-10-17 10:09 ` Thomas Gleixner
  2025-10-17 11:08   ` Andrew Cooper
                     ` (2 more replies)
  2025-10-17 10:09 ` [patch V3 08/12] uaccess: Provide put/get_user_masked() Thomas Gleixner
                   ` (6 subsequent siblings)
  13 siblings, 3 replies; 40+ messages in thread
From: Thomas Gleixner @ 2025-10-17 10:09 UTC (permalink / raw)
  To: LKML
  Cc: Christophe Leroy, Mathieu Desnoyers, Andrew Cooper,
	Linus Torvalds, kernel test robot, Russell King, linux-arm-kernel,
	x86, Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
	linuxppc-dev, Paul Walmsley, Palmer Dabbelt, linux-riscv,
	Heiko Carstens, Christian Borntraeger, Sven Schnelle, linux-s390,
	Julia Lawall, Nicolas Palix, Peter Zijlstra, Darren Hart,
	Davidlohr Bueso, André Almeida, Alexander Viro,
	Christian Brauner, Jan Kara, linux-fsdevel

User space access regions are tedious and require similar code patterns all
over the place:

     	if (!user_read_access_begin(from, sizeof(*from)))
		return -EFAULT;
	unsafe_get_user(val, from, Efault);
	user_read_access_end();
	return 0;
Efault:
	user_read_access_end();
	return -EFAULT;

This got worse with the recent addition of masked user access, which
optimizes the speculation prevention:

	if (can_do_masked_user_access())
		from = masked_user_read_access_begin((from));
	else if (!user_read_access_begin(from, sizeof(*from)))
		return -EFAULT;
	unsafe_get_user(val, from, Efault);
	user_read_access_end();
	return 0;
Efault:
	user_read_access_end();
	return -EFAULT;

There have been issues with using the wrong user_*_access_end() variant in
the error path and other typical copy & paste problems, e.g. using the
wrong fault label in the user accessor, which ends up invoking the wrong
access end variant.

These patterns beg for scopes with automatic cleanup. The resulting outcome
is:
    	scoped_masked_user_read_access(from, Efault)
		unsafe_get_user(val, from, Efault);
	return 0;
  Efault:
	return -EFAULT;

The scope guarantees the proper cleanup for the access mode is invoked both
in the success and the failure (fault) path.

The scoped_masked_user_$MODE_access() macros are implemented as
self-terminating nested for() loops. Thanks to Andrew Cooper for pointing
me at them. The scope can therefore be left with 'break', 'goto' and
'return'. Even 'continue' "works" due to the self termination mechanism.
Both GCC and clang optimize the convoluted macro maze out completely, and
the above example results with clang in:

 b80:	f3 0f 1e fa          	       endbr64
 b84:	48 b8 ef cd ab 89 67 45 23 01  movabs $0x123456789abcdef,%rax
 b8e:	48 39 c7    	               cmp    %rax,%rdi
 b91:	48 0f 47 f8          	       cmova  %rax,%rdi
 b95:	90                   	       nop
 b96:	90                   	       nop
 b97:	90                   	       nop
 b98:	31 c9                	       xor    %ecx,%ecx
 b9a:	8b 07                	       mov    (%rdi),%eax
 b9c:	89 06                	       mov    %eax,(%rsi)
 b9e:	85 c9                	       test   %ecx,%ecx
 ba0:	0f 94 c0             	       sete   %al
 ba3:	90                   	       nop
 ba4:	90                   	       nop
 ba5:	90                   	       nop
 ba6:	c3                   	       ret

Which looks as compact as it gets. The NOPs are placeholders for STAC/CLAC.
GCC emits the fault path separately:

 bf0:	f3 0f 1e fa          	       endbr64
 bf4:	48 b8 ef cd ab 89 67 45 23 01  movabs $0x123456789abcdef,%rax
 bfe:	48 39 c7             	       cmp    %rax,%rdi
 c01:	48 0f 47 f8          	       cmova  %rax,%rdi
 c05:	90                   	       nop
 c06:	90                   	       nop
 c07:	90                   	       nop
 c08:	31 d2                	       xor    %edx,%edx
 c0a:	8b 07                	       mov    (%rdi),%eax
 c0c:	89 06                	       mov    %eax,(%rsi)
 c0e:	85 d2                	       test   %edx,%edx
 c10:	75 09                	       jne    c1b <afoo+0x2b>
 c12:	90                   	       nop
 c13:	90                   	       nop
 c14:	90                   	       nop
 c15:	b8 01 00 00 00       	       mov    $0x1,%eax
 c1a:	c3                   	       ret
 c1b:	90                   	       nop
 c1c:	90                   	       nop
 c1d:	90                   	       nop
 c1e:	31 c0                	       xor    %eax,%eax
 c20:	c3                   	       ret


The fault labels for the scoped*() macros and the fault labels for the
actual user space accessors can be shared and must be placed outside of the
scope.

If masked user access is enabled on an architecture, then the pointer
handed in to scoped_masked_user_$MODE_access() can be modified to point to
a guaranteed faulting user address. This modification is only scope local
as the pointer is aliased inside the scope. When the scope is left the
alias is no longer in effect. IOW, the original pointer value is preserved
so it can be used e.g. for fixup or diagnostic purposes in the fault path.
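
A sketch of the aliasing behaviour (illustrative only; do_stuff() is a
placeholder as in the cover letter):

	scoped_masked_user_read_access(uptr, Efault) {
		/* @uptr may alias a masked, guaranteed faulting address */
		unsafe_get_user(val, uptr, Efault);
	}
	return 0;
Efault:
	/* @uptr has its original, unmodified value again */
	do_stuff(uptr);
	return -EFAULT;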

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
---
V3: Make it a nested for() loop
    Get rid of the code in macro parameters - Linus
    Provide sized variants - Mathieu
V2: Remove the shady wrappers around the opening and use scopes with automatic cleanup
---
 include/linux/uaccess.h |  197 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 197 insertions(+)

--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -2,6 +2,7 @@
 #ifndef __LINUX_UACCESS_H__
 #define __LINUX_UACCESS_H__
 
+#include <linux/cleanup.h>
 #include <linux/fault-inject-usercopy.h>
 #include <linux/instrumented.h>
 #include <linux/minmax.h>
@@ -35,9 +36,17 @@
 
 #ifdef masked_user_access_begin
  #define can_do_masked_user_access() 1
+# ifndef masked_user_write_access_begin
+#  define masked_user_write_access_begin masked_user_access_begin
+# endif
+# ifndef masked_user_read_access_begin
+#  define masked_user_read_access_begin masked_user_access_begin
+# endif
 #else
  #define can_do_masked_user_access() 0
  #define masked_user_access_begin(src) NULL
+ #define masked_user_read_access_begin(src) NULL
+ #define masked_user_write_access_begin(src) NULL
  #define mask_user_address(src) (src)
 #endif
 
@@ -633,6 +642,194 @@ static inline void user_access_restore(u
 #define user_read_access_end user_access_end
 #endif
 
+/* Define RW variant so the below _mode macro expansion works */
+#define masked_user_rw_access_begin(u)	masked_user_access_begin(u)
+#define user_rw_access_begin(u, s)	user_access_begin(u, s)
+#define user_rw_access_end()		user_access_end()
+
+/* Scoped user access */
+#define USER_ACCESS_GUARD(_mode)					\
+static __always_inline void __user *					\
+class_masked_user_##_mode##_begin(void __user *ptr)			\
+{									\
+	return ptr;							\
+}									\
+									\
+static __always_inline void						\
+class_masked_user_##_mode##_end(void __user *ptr)			\
+{									\
+	user_##_mode##_access_end();					\
+}									\
+									\
+DEFINE_CLASS(masked_user_ ##_mode## _access, void __user *,		\
+	     class_masked_user_##_mode##_end(_T),			\
+	     class_masked_user_##_mode##_begin(ptr), void __user *ptr)	\
+									\
+static __always_inline class_masked_user_##_mode##_access_t		\
+class_masked_user_##_mode##_access_ptr(void __user *scope)		\
+{									\
+	return scope;							\
+}
+
+USER_ACCESS_GUARD(read)
+USER_ACCESS_GUARD(write)
+USER_ACCESS_GUARD(rw)
+#undef USER_ACCESS_GUARD
+
+/**
+ * __scoped_user_access_begin - Start the masked user access
+ * @_mode:	The mode of the access class (read, write, rw)
+ * @_uptr:	The pointer to access user space memory
+ * @_size:	Size of the access
+ * @_elbl:	Error label to goto when the access region is rejected.
+ *
+ * Internal helper for __scoped_masked_user_access(). Don't use directly.
+ */
+#define __scoped_user_access_begin(_mode, _uptr, _size, _elbl)		\
+({									\
+	typeof((_uptr)) ____ret;					\
+									\
+	if (can_do_masked_user_access()) {				\
+		____ret = masked_user_##_mode##_access_begin((_uptr));	\
+	} else {							\
+		____ret = _uptr;					\
+		if (!user_##_mode##_access_begin(_uptr, (_size)))	\
+			goto _elbl;					\
+	}								\
+	____ret;							\
+})
+
+/**
+ * __scoped_masked_user_access - Open a scope for masked user access
+ * @_mode:	The mode of the access class (read, write, rw)
+ * @_uptr:	The pointer to access user space memory
+ * @_size:	Size of the access
+ * @_elbl:	Error label to goto when the access region is rejected. It
+ *		must be placed outside the scope.
+ *
+ * If the user access function inside the scope requires a fault label, it
+ * can use @_elbl or a different label outside the scope, which requires
+ * that user access implemented with ASM GOTO has been properly
+ * wrapped. See unsafe_get_user() for reference.
+ *
+ *	scoped_masked_user_rw_access(ptr, efault) {
+ *		unsafe_get_user(rval, &ptr->rval, efault);
+ *		unsafe_put_user(wval, &ptr->wval, efault);
+ *	}
+ *	return 0;
+ *  efault:
+ *	return -EFAULT;
+ *
+ * The scope is internally implemented as an auto-terminating nested for()
+ * loop, which can be left with 'return', 'break' and 'goto' at any
+ * point.
+ *
+ * When the scope is left, user_##@_mode##_access_end() is automatically
+ * invoked.
+ *
+ * When the architecture supports masked user access and the access region
+ * determined by @_uptr and @_size is not a valid user space address,
+ * i.e. not below TASK_SIZE, the scope sets the pointer to a guaranteed
+ * faulting user space address and does not terminate early. This
+ * optimizes for the good case and lets the performance uncritical bad
+ * case go through the fault.
+ *
+ * The eventual modification of the pointer is limited to the scope.
+ * Outside of the scope the original pointer value is unmodified, so that
+ * it is available for diagnostic purposes in an out of scope fault path.
+ *
+ * Nesting scoped masked user access into a masked user access scope is
+ * invalid and fails the build. Nesting into other guards, e.g. pagefault
+ * is safe.
+ *
+ * Don't use directly. Use scoped_masked_user_$MODE_access() instead.
+ */
+#define __scoped_masked_user_access(_mode, _uptr, _size, _elbl)					\
+for (bool ____stop = false; !____stop; ____stop = true)						\
+	for (typeof((_uptr)) _tmpptr = __scoped_user_access_begin(_mode, _uptr, _size, _elbl);	\
+	     !____stop; ____stop = true)							\
+		for (CLASS(masked_user_##_mode##_access, scope) (_tmpptr); !____stop;		\
+		     ____stop = true)					\
+			/* Force modified pointer usage within the scope */			\
+			for (const typeof((_uptr)) _uptr = _tmpptr; !____stop; ____stop = true)	\
+				if (1)
+
+/**
+ * scoped_masked_user_read_access_size - Start a scoped user read access with given size
+ * @_usrc:	Pointer to the user space address to read from
+ * @_size:	Size of the access starting from @_usrc
+ * @_elbl:	Error label to goto when the access region is rejected.
+ *
+ * For further information see __scoped_masked_user_access() above.
+ */
+#define scoped_masked_user_read_access_size(_usrc, _size, _elbl)		\
+	__scoped_masked_user_access(read, (_usrc), (_size), _elbl)
+
+/**
+ * scoped_masked_user_read_access - Start a scoped user read access
+ * @_usrc:	Pointer to the user space address to read from
+ * @_elbl:	Error label to goto when the access region is rejected.
+ *
+ * The size of the access starting from @_usrc is determined via sizeof(*@_usrc).
+ *
+ * For further information see __scoped_masked_user_access() above.
+ */
+#define scoped_masked_user_read_access(_usrc, _elbl)				\
+	scoped_masked_user_read_access_size((_usrc), sizeof(*(_usrc)), _elbl)
+
+/**
+ * scoped_masked_user_read_end - End a scoped user read access
+ *
+ * Ends the scope opened with scoped_masked_user_read_access[_size]()
+ */
+#define scoped_masked_user_read_end()	__scoped_masked_user_end()
+
+/**
+ * scoped_masked_user_write_access_size - Start a scoped user write access with given size
+ * @_udst:	Pointer to the user space address to write to
+ * @_size:	Size of the access starting from @_udst
+ * @_elbl:	Error label to goto when the access region is rejected.
+ *
+ * For further information see __scoped_masked_user_access() above.
+ */
+#define scoped_masked_user_write_access_size(_udst, _size, _elbl)		\
+	__scoped_masked_user_access(write, (_udst),  (_size), _elbl)
+
+/**
+ * scoped_masked_user_write_access - Start a scoped user write access
+ * @_udst:	Pointer to the user space address to write to
+ * @_elbl:	Error label to goto when the access region is rejected.
+ *
+ * The size of the access starting from @_udst is determined via sizeof(*@_udst).
+ *
+ * For further information see __scoped_masked_user_access() above.
+ */
+#define scoped_masked_user_write_access(_udst, _elbl)				\
+	scoped_masked_user_write_access_size((_udst), sizeof(*(_udst)), _elbl)
+
+/**
+ * scoped_masked_user_rw_access_size - Start a scoped user read/write access with given size
+ * @_uptr:	Pointer to the user space address to read from and write to
+ * @_size:	Size of the access starting from @_uptr
+ * @_elbl:	Error label to goto when the access region is rejected.
+ *
+ * For further information see __scoped_masked_user_access() above.
+ */
+#define scoped_masked_user_rw_access_size(_uptr, _size, _elbl)			\
+	__scoped_masked_user_access(rw, (_uptr), (_size), _elbl)
+
+/**
+ * scoped_masked_user_rw_access - Start a scoped user read/write access
+ * @_uptr:	Pointer to the user space address to read from and write to
+ * @_elbl:	Error label to goto when the access region is rejected.
+ *
+ * The size of the access starting from @_uptr is determined via sizeof(*@_uptr).
+ *
+ * For further information see __scoped_masked_user_access() above.
+ */
+#define scoped_masked_user_rw_access(_uptr, _elbl)				\
+	scoped_masked_user_rw_access_size((_uptr), sizeof(*(_uptr)), _elbl)
+
 #ifdef CONFIG_HARDENED_USERCOPY
 void __noreturn usercopy_abort(const char *name, const char *detail,
 			       bool to_user, unsigned long offset,



* [patch V3 08/12] uaccess: Provide put/get_user_masked()
  2025-10-17 10:08 [patch V3 00/12] uaccess: Provide and use scopes for user masked access Thomas Gleixner
                   ` (6 preceding siblings ...)
  2025-10-17 10:09 ` [patch V3 07/12] uaccess: Provide scoped masked user access regions Thomas Gleixner
@ 2025-10-17 10:09 ` Thomas Gleixner
  2025-10-17 13:41   ` Mathieu Desnoyers
  2025-10-17 10:09 ` [patch V3 09/12] [RFC] coccinelle: misc: Add scoped_masked_$MODE_access() checker script Thomas Gleixner
                   ` (5 subsequent siblings)
  13 siblings, 1 reply; 40+ messages in thread
From: Thomas Gleixner @ 2025-10-17 10:09 UTC (permalink / raw)
  To: LKML
  Cc: kernel test robot, Russell King, linux-arm-kernel, Linus Torvalds,
	x86, Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
	Christophe Leroy, linuxppc-dev, Paul Walmsley, Palmer Dabbelt,
	linux-riscv, Heiko Carstens, Christian Borntraeger, Sven Schnelle,
	linux-s390, Mathieu Desnoyers, Andrew Cooper, Julia Lawall,
	Nicolas Palix, Peter Zijlstra, Darren Hart, Davidlohr Bueso,
	André Almeida, Alexander Viro, Christian Brauner, Jan Kara,
	linux-fsdevel

Provide convenience wrappers around scoped masked user access similar to
put/get_user(), which reduce the usage sites to:

	if (!get_user_masked(val, ptr))
		return -EFAULT;
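
A hypothetical read-modify-write sequence built on top of these wrappers
(FUTEX_WAITERS is used here purely as an example bit):

	u32 val;

	if (!get_user_masked(val, uaddr))
		return -EFAULT;
	val |= FUTEX_WAITERS;
	if (!put_user_masked(val, uaddr))
		return -EFAULT;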

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 include/linux/uaccess.h |   44 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 44 insertions(+)

--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -830,6 +830,50 @@ for (bool ____stop = false; !____stop; _
 #define scoped_masked_user_rw_access(_uptr, _elbl)				\
 	scoped_masked_user_rw_access_size((_uptr), sizeof(*(_uptr)), _elbl)
 
+/**
+ * get_user_masked - Read user data with masked access
+ * @_val:	The variable to store the value read from user memory
+ * @_usrc:	Pointer to the user space memory to read from
+ *
+ * Return: true if successful, false when faulted
+ */
+#define get_user_masked(_val, _usrc)				\
+({								\
+	__label__ efault;					\
+	typeof((_usrc)) _tmpsrc	= (_usrc);			\
+	bool ____ret = true;					\
+								\
+	scoped_masked_user_read_access(_tmpsrc, efault)		\
+		unsafe_get_user(_val, _tmpsrc, efault);		\
+	if (0) {						\
+	efault:							\
+		____ret = false;				\
+	}							\
+	____ret;						\
+})
+
+/**
+ * put_user_masked - Write to user memory with masked access
+ * @_val:	The value to write
+ * @_udst:	Pointer to the user space memory to write to
+ *
+ * Return: true if successful, false when faulted
+ */
+#define put_user_masked(_val, _udst)				\
+({								\
+	__label__ efault;					\
+	typeof((_udst)) _tmpdst	= (_udst);			\
+	bool ____ret = true;					\
+								\
+	scoped_masked_user_write_access(_tmpdst, efault)	\
+		unsafe_put_user(_val, _tmpdst, efault);		\
+	if (0) {						\
+	efault:							\
+		____ret = false;				\
+	}							\
+	____ret;						\
+})
+
 #ifdef CONFIG_HARDENED_USERCOPY
 void __noreturn usercopy_abort(const char *name, const char *detail,
 			       bool to_user, unsigned long offset,



* [patch V3 09/12] [RFC] coccinelle: misc: Add scoped_masked_$MODE_access() checker script
  2025-10-17 10:08 [patch V3 00/12] uaccess: Provide and use scopes for user masked access Thomas Gleixner
                   ` (7 preceding siblings ...)
  2025-10-17 10:09 ` [patch V3 08/12] uaccess: Provide put/get_user_masked() Thomas Gleixner
@ 2025-10-17 10:09 ` Thomas Gleixner
  2025-10-17 10:51   ` Julia Lawall
  2025-10-17 10:09 ` [patch V3 10/12] futex: Convert to scoped masked user access Thomas Gleixner
                   ` (4 subsequent siblings)
  13 siblings, 1 reply; 40+ messages in thread
From: Thomas Gleixner @ 2025-10-17 10:09 UTC (permalink / raw)
  To: LKML
  Cc: Julia Lawall, Nicolas Palix, kernel test robot, Russell King,
	linux-arm-kernel, Linus Torvalds, x86, Madhavan Srinivasan,
	Michael Ellerman, Nicholas Piggin, Christophe Leroy, linuxppc-dev,
	Paul Walmsley, Palmer Dabbelt, linux-riscv, Heiko Carstens,
	Christian Borntraeger, Sven Schnelle, linux-s390,
	Mathieu Desnoyers, Andrew Cooper, Peter Zijlstra, Darren Hart,
	Davidlohr Bueso, André Almeida, Alexander Viro,
	Christian Brauner, Jan Kara, linux-fsdevel

A common mistake in user access code is that the wrong access mode is
selected for starting the user access section. As most architectures map
the Read and Write modes to ReadWrite, this often goes unnoticed for quite
some time.

Aside from that, the scoped user access mechanism requires that the actual
accessor macros use the same pointer that was handed in to start the
scope, because the pointer can be modified by the scope begin mechanism if
the architecture supports masking.

Add a basic (and incomplete) coccinelle script to check for the common
issues. The error output is:

kernel/futex/futex.h:303:2-17: ERROR: Invalid pointer for unsafe_put_user(p) in scoped_masked_user_write_access(to)
kernel/futex/futex.h:292:2-17: ERROR: Invalid access mode unsafe_get_user() in scoped_masked_user_write_access()
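
For reference, a snippet which triggers both of the above errors
(illustrative, mirroring the patterns the script checks for):

	scoped_masked_user_write_access(to, Efault) {
		unsafe_get_user(val, from, Efault);	/* wrong access mode */
		unsafe_put_user(val, p, Efault);	/* 'p' is not the scope pointer 'to' */
	}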

Not-Yet-Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Julia Lawall <Julia.Lawall@inria.fr>
Cc: Nicolas Palix <nicolas.palix@imag.fr>
---
 scripts/coccinelle/misc/scoped_uaccess.cocci |  108 +++++++++++++++++++++++++++
 1 file changed, 108 insertions(+)

--- /dev/null
+++ b/scripts/coccinelle/misc/scoped_uaccess.cocci
@@ -0,0 +1,108 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/// Validate scoped_masked_user*access() scopes
+///
+// Confidence: Zero
+// Options: --no-includes --include-headers
+
+virtual context
+virtual report
+virtual org
+
+@initialize:python@
+@@
+
+scopemap = {
+  'scoped_masked_user_read_access_size'  : 'scoped_masked_user_read_access',
+  'scoped_masked_user_write_access_size' : 'scoped_masked_user_write_access',
+  'scoped_masked_user_rw_access_size'    : 'scoped_masked_user_rw_access',
+}
+
+# Most common accessors. Incomplete list
+noaccessmap = {
+  'scoped_masked_user_read_access'       : ('unsafe_put_user', 'unsafe_copy_to_user'),
+  'scoped_masked_user_write_access'      : ('unsafe_get_user', 'unsafe_copy_from_user'),
+}
+
+# Most common accessors. Incomplete list
+ptrmap = {
+  'unsafe_put_user'			 : 1,
+  'unsafe_get_user'			 : 1,
+  'unsafe_copy_to_user'		 	 : 0,
+  'unsafe_copy_from_user'		 : 0,
+}
+
+print_mode = None
+
+def pr_err(pos, msg):
+   if print_mode == 'R':
+      coccilib.report.print_report(pos[0], msg)
+   elif print_mode == 'O':
+      cocci.print_main(msg, pos)
+
+@r0 depends on report || org@
+iterator name scoped_masked_user_read_access,
+	      scoped_masked_user_read_access_size,
+	      scoped_masked_user_write_access,
+	      scoped_masked_user_write_access_size,
+	      scoped_masked_user_rw_access,
+	      scoped_masked_user_rw_access_size;
+iterator scope;
+statement S;
+@@
+
+(
+(
+scoped_masked_user_read_access(...) S
+|
+scoped_masked_user_read_access_size(...) S
+|
+scoped_masked_user_write_access(...) S
+|
+scoped_masked_user_write_access_size(...) S
+|
+scoped_masked_user_rw_access(...) S
+|
+scoped_masked_user_rw_access_size(...) S
+)
+&
+scope(...) S
+)
+
+@script:python depends on r0 && report@
+@@
+print_mode = 'R'
+
+@script:python depends on r0 && org@
+@@
+print_mode = 'O'
+
+@r1@
+expression sp, a0, a1;
+iterator r0.scope;
+identifier ac;
+position p;
+@@
+
+  scope(sp,...) {
+    <+...
+    ac@p(a0, a1, ...);
+    ...+>
+  }
+
+@script:python@
+pos << r1.p;
+scope << r0.scope;
+ac << r1.ac;
+sp << r1.sp;
+a0 << r1.a0;
+a1 << r1.a1;
+@@
+
+scope = scopemap.get(scope, scope)
+if ac in noaccessmap.get(scope, []):
+   pr_err(pos, 'ERROR: Invalid access mode %s() in %s()' %(ac, scope))
+
+if ac in ptrmap:
+   ap = (a0, a1)[ptrmap[ac]]
+   if sp != ap.lstrip('&').split('->')[0].strip():
+      pr_err(pos, 'ERROR: Invalid pointer for %s(%s) in %s(%s)' %(ac, ap, scope, sp))



* [patch V3 10/12] futex: Convert to scoped masked user access
  2025-10-17 10:08 [patch V3 00/12] uaccess: Provide and use scopes for user masked access Thomas Gleixner
                   ` (8 preceding siblings ...)
  2025-10-17 10:09 ` [patch V3 09/12] [RFC] coccinelle: misc: Add scoped_masked_$MODE_access() checker script Thomas Gleixner
@ 2025-10-17 10:09 ` Thomas Gleixner
  2025-10-17 10:09 ` [patch V3 11/12] x86/futex: " Thomas Gleixner
                   ` (3 subsequent siblings)
  13 siblings, 0 replies; 40+ messages in thread
From: Thomas Gleixner @ 2025-10-17 10:09 UTC (permalink / raw)
  To: LKML
  Cc: Peter Zijlstra, Darren Hart, Davidlohr Bueso, André Almeida,
	kernel test robot, Russell King, linux-arm-kernel, Linus Torvalds,
	x86, Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
	Christophe Leroy, linuxppc-dev, Paul Walmsley, Palmer Dabbelt,
	linux-riscv, Heiko Carstens, Christian Borntraeger, Sven Schnelle,
	linux-s390, Mathieu Desnoyers, Andrew Cooper, Julia Lawall,
	Nicolas Palix, Alexander Viro, Christian Brauner, Jan Kara,
	linux-fsdevel

From: Thomas Gleixner <tglx@linutronix.de>

Replace the open coded implementation with the new get/put_user_masked()
helpers.

No functional change intended.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Darren Hart <dvhart@infradead.org>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: "André Almeida" <andrealmeid@igalia.com>
---
V3: Adapt to scope changes
V2: Convert to scoped variant
---
 kernel/futex/futex.h |   37 ++-----------------------------------
 1 file changed, 2 insertions(+), 35 deletions(-)
---
--- a/kernel/futex/futex.h
+++ b/kernel/futex/futex.h
@@ -285,48 +285,15 @@ static inline int futex_cmpxchg_value_lo
  * This does a plain atomic user space read, and the user pointer has
  * already been verified earlier by get_futex_key() to be both aligned
  * and actually in user space, just like futex_atomic_cmpxchg_inatomic().
- *
- * We still want to avoid any speculation, and while __get_user() is
- * the traditional model for this, it's actually slower than doing
- * this manually these days.
- *
- * We could just have a per-architecture special function for it,
- * the same way we do futex_atomic_cmpxchg_inatomic(), but rather
- * than force everybody to do that, write it out long-hand using
- * the low-level user-access infrastructure.
- *
- * This looks a bit overkill, but generally just results in a couple
- * of instructions.
  */
 static __always_inline int futex_get_value(u32 *dest, u32 __user *from)
 {
-	u32 val;
-
-	if (can_do_masked_user_access())
-		from = masked_user_access_begin(from);
-	else if (!user_read_access_begin(from, sizeof(*from)))
-		return -EFAULT;
-	unsafe_get_user(val, from, Efault);
-	user_read_access_end();
-	*dest = val;
-	return 0;
-Efault:
-	user_read_access_end();
-	return -EFAULT;
+	return get_user_masked(*dest, from) ? 0 : -EFAULT;
 }
 
 static __always_inline int futex_put_value(u32 val, u32 __user *to)
 {
-	if (can_do_masked_user_access())
-		to = masked_user_access_begin(to);
-	else if (!user_write_access_begin(to, sizeof(*to)))
-		return -EFAULT;
-	unsafe_put_user(val, to, Efault);
-	user_write_access_end();
-	return 0;
-Efault:
-	user_write_access_end();
-	return -EFAULT;
+	return put_user_masked(val, to) ? 0 : -EFAULT;
 }
 
 static inline int futex_get_value_locked(u32 *dest, u32 __user *from)



* [patch V3 11/12] x86/futex: Convert to scoped masked user access
  2025-10-17 10:08 [patch V3 00/12] uaccess: Provide and use scopes for user masked access Thomas Gleixner
                   ` (9 preceding siblings ...)
  2025-10-17 10:09 ` [patch V3 10/12] futex: Convert to scoped masked user access Thomas Gleixner
@ 2025-10-17 10:09 ` Thomas Gleixner
  2025-10-17 13:37   ` Andrew Cooper
  2025-10-17 10:09 ` [patch V3 12/12] select: " Thomas Gleixner
                   ` (2 subsequent siblings)
  13 siblings, 1 reply; 40+ messages in thread
From: Thomas Gleixner @ 2025-10-17 10:09 UTC (permalink / raw)
  To: LKML
  Cc: x86, kernel test robot, Russell King, linux-arm-kernel,
	Linus Torvalds, Madhavan Srinivasan, Michael Ellerman,
	Nicholas Piggin, Christophe Leroy, linuxppc-dev, Paul Walmsley,
	Palmer Dabbelt, linux-riscv, Heiko Carstens,
	Christian Borntraeger, Sven Schnelle, linux-s390,
	Mathieu Desnoyers, Andrew Cooper, Julia Lawall, Nicolas Palix,
	Peter Zijlstra, Darren Hart, Davidlohr Bueso, André Almeida,
	Alexander Viro, Christian Brauner, Jan Kara, linux-fsdevel

Replace the open coded implementation with the scoped masked user access
guards.

No functional change intended.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: x86@kernel.org
---
V3: Adapt to scope changes
V2: Convert to scoped masked access
    Use RW access functions - Christophe
---
 arch/x86/include/asm/futex.h |   75 ++++++++++++++++++-------------------------
 1 file changed, 33 insertions(+), 42 deletions(-)
---
--- a/arch/x86/include/asm/futex.h
+++ b/arch/x86/include/asm/futex.h
@@ -46,38 +46,31 @@ do {								\
 } while(0)
 
 static __always_inline int arch_futex_atomic_op_inuser(int op, int oparg, int *oval,
-		u32 __user *uaddr)
+						       u32 __user *uaddr)
 {
-	if (can_do_masked_user_access())
-		uaddr = masked_user_access_begin(uaddr);
-	else if (!user_access_begin(uaddr, sizeof(u32)))
-		return -EFAULT;
-
-	switch (op) {
-	case FUTEX_OP_SET:
-		unsafe_atomic_op1("xchgl %0, %2", oval, uaddr, oparg, Efault);
-		break;
-	case FUTEX_OP_ADD:
-		unsafe_atomic_op1(LOCK_PREFIX "xaddl %0, %2", oval,
-				   uaddr, oparg, Efault);
-		break;
-	case FUTEX_OP_OR:
-		unsafe_atomic_op2("orl %4, %3", oval, uaddr, oparg, Efault);
-		break;
-	case FUTEX_OP_ANDN:
-		unsafe_atomic_op2("andl %4, %3", oval, uaddr, ~oparg, Efault);
-		break;
-	case FUTEX_OP_XOR:
-		unsafe_atomic_op2("xorl %4, %3", oval, uaddr, oparg, Efault);
-		break;
-	default:
-		user_access_end();
-		return -ENOSYS;
+	scoped_masked_user_rw_access(uaddr, Efault) {
+		switch (op) {
+		case FUTEX_OP_SET:
+			unsafe_atomic_op1("xchgl %0, %2", oval, uaddr, oparg, Efault);
+			break;
+		case FUTEX_OP_ADD:
+			unsafe_atomic_op1(LOCK_PREFIX "xaddl %0, %2", oval, uaddr, oparg, Efault);
+			break;
+		case FUTEX_OP_OR:
+			unsafe_atomic_op2("orl %4, %3", oval, uaddr, oparg, Efault);
+			break;
+		case FUTEX_OP_ANDN:
+			unsafe_atomic_op2("andl %4, %3", oval, uaddr, ~oparg, Efault);
+			break;
+		case FUTEX_OP_XOR:
+			unsafe_atomic_op2("xorl %4, %3", oval, uaddr, oparg, Efault);
+			break;
+		default:
+			return -ENOSYS;
+		}
 	}
-	user_access_end();
 	return 0;
 Efault:
-	user_access_end();
 	return -EFAULT;
 }
 
@@ -86,21 +79,19 @@ static inline int futex_atomic_cmpxchg_i
 {
 	int ret = 0;
 
-	if (can_do_masked_user_access())
-		uaddr = masked_user_access_begin(uaddr);
-	else if (!user_access_begin(uaddr, sizeof(u32)))
-		return -EFAULT;
-	asm volatile("\n"
-		"1:\t" LOCK_PREFIX "cmpxchgl %3, %2\n"
-		"2:\n"
-		_ASM_EXTABLE_TYPE_REG(1b, 2b, EX_TYPE_EFAULT_REG, %0) \
-		: "+r" (ret), "=a" (oldval), "+m" (*uaddr)
-		: "r" (newval), "1" (oldval)
-		: "memory"
-	);
-	user_access_end();
-	*uval = oldval;
+	scoped_masked_user_rw_access(uaddr, Efault) {
+		asm volatile("\n"
+			     "1:\t" LOCK_PREFIX "cmpxchgl %3, %2\n"
+			     "2:\n"
+			     _ASM_EXTABLE_TYPE_REG(1b, 2b, EX_TYPE_EFAULT_REG, %0) \
+			     : "+r" (ret), "=a" (oldval), "+m" (*uaddr)
+			     : "r" (newval), "1" (oldval)
+			     : "memory");
+		*uval = oldval;
+	}
 	return ret;
+Efault:
+	return -EFAULT;
 }
 
 #endif


^ permalink raw reply	[flat|nested] 40+ messages in thread

* [patch V3 12/12] select: Convert to scoped masked user access
  2025-10-17 10:08 [patch V3 00/12] uaccess: Provide and use scopes for user masked access Thomas Gleixner
                   ` (10 preceding siblings ...)
  2025-10-17 10:09 ` [patch V3 11/12] x86/futex: " Thomas Gleixner
@ 2025-10-17 10:09 ` Thomas Gleixner
  2025-10-17 10:35   ` Peter Zijlstra
  2025-10-17 10:37 ` [patch V3 00/12] uaccess: Provide and use scopes for user masked access Peter Zijlstra
  2025-10-17 12:25 ` Mathieu Desnoyers
  13 siblings, 1 reply; 40+ messages in thread
From: Thomas Gleixner @ 2025-10-17 10:09 UTC (permalink / raw)
  To: LKML
  Cc: Alexander Viro, Christian Brauner, Jan Kara, linux-fsdevel,
	kernel test robot, Russell King, linux-arm-kernel, Linus Torvalds,
	x86, Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
	Christophe Leroy, linuxppc-dev, Paul Walmsley, Palmer Dabbelt,
	linux-riscv, Heiko Carstens, Christian Borntraeger, Sven Schnelle,
	linux-s390, Mathieu Desnoyers, Andrew Cooper, Julia Lawall,
	Nicolas Palix, Peter Zijlstra, Darren Hart, Davidlohr Bueso,
	André Almeida

From: Thomas Gleixner <tglx@linutronix.de>

Replace the open coded implementation with the scoped masked user access
guard.

No functional change intended.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Jan Kara <jack@suse.cz>
Cc: linux-fsdevel@vger.kernel.org
---
V3: Adapt to scope changes
---
 fs/select.c |   12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)
---
--- a/fs/select.c
+++ b/fs/select.c
@@ -776,17 +776,13 @@ static inline int get_sigset_argpack(str
 {
 	// the path is hot enough for overhead of copy_from_user() to matter
 	if (from) {
-		if (can_do_masked_user_access())
-			from = masked_user_access_begin(from);
-		else if (!user_read_access_begin(from, sizeof(*from)))
-			return -EFAULT;
-		unsafe_get_user(to->p, &from->p, Efault);
-		unsafe_get_user(to->size, &from->size, Efault);
-		user_read_access_end();
+		scoped_masked_user_rw_access(from, Efault) {
+			unsafe_get_user(to->p, &from->p, Efault);
+			unsafe_get_user(to->size, &from->size, Efault);
+		}
 	}
 	return 0;
 Efault:
-	user_read_access_end();
 	return -EFAULT;
 }
 


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [patch V3 12/12] select: Convert to scoped masked user access
  2025-10-17 10:09 ` [patch V3 12/12] select: " Thomas Gleixner
@ 2025-10-17 10:35   ` Peter Zijlstra
  2025-10-17 11:12     ` Thomas Gleixner
  0 siblings, 1 reply; 40+ messages in thread
From: Peter Zijlstra @ 2025-10-17 10:35 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Alexander Viro, Christian Brauner, Jan Kara, linux-fsdevel,
	kernel test robot, Russell King, linux-arm-kernel, Linus Torvalds,
	x86, Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
	Christophe Leroy, linuxppc-dev, Paul Walmsley, Palmer Dabbelt,
	linux-riscv, Heiko Carstens, Christian Borntraeger, Sven Schnelle,
	linux-s390, Mathieu Desnoyers, Andrew Cooper, Julia Lawall,
	Nicolas Palix, Darren Hart, Davidlohr Bueso, André Almeida

On Fri, Oct 17, 2025 at 12:09:18PM +0200, Thomas Gleixner wrote:
> From: Thomas Gleixner <tglx@linutronix.de>
> 
> Replace the open coded implementation with the scoped masked user access
> guard.
> 
> No functional change intended.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Alexander Viro <viro@zeniv.linux.org.uk>
> Cc: Christian Brauner <brauner@kernel.org>
> Cc: Jan Kara <jack@suse.cz>
> Cc: linux-fsdevel@vger.kernel.org
> ---
> V3: Adapt to scope changes
> ---
>  fs/select.c |   12 ++++--------
>  1 file changed, 4 insertions(+), 8 deletions(-)
> ---
> --- a/fs/select.c
> +++ b/fs/select.c
> @@ -776,17 +776,13 @@ static inline int get_sigset_argpack(str
>  {
>  	// the path is hot enough for overhead of copy_from_user() to matter
>  	if (from) {
> -		if (can_do_masked_user_access())
> -			from = masked_user_access_begin(from);
> -		else if (!user_read_access_begin(from, sizeof(*from)))
> -			return -EFAULT;
> -		unsafe_get_user(to->p, &from->p, Efault);
> -		unsafe_get_user(to->size, &from->size, Efault);
> -		user_read_access_end();
> +		scoped_masked_user_rw_access(from, Efault) {

Should this not be: scoped_masked_user_read_access() ?

> +			unsafe_get_user(to->p, &from->p, Efault);
> +			unsafe_get_user(to->size, &from->size, Efault);
> +		}
>  	}
>  	return 0;
>  Efault:
> -	user_read_access_end();
>  	return -EFAULT;
>  }
>  
> 

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [patch V3 00/12] uaccess: Provide and use scopes for user masked access
  2025-10-17 10:08 [patch V3 00/12] uaccess: Provide and use scopes for user masked access Thomas Gleixner
                   ` (11 preceding siblings ...)
  2025-10-17 10:09 ` [patch V3 12/12] select: " Thomas Gleixner
@ 2025-10-17 10:37 ` Peter Zijlstra
  2025-10-17 10:50   ` Andrew Cooper
  2025-10-17 12:25 ` Mathieu Desnoyers
  13 siblings, 1 reply; 40+ messages in thread
From: Peter Zijlstra @ 2025-10-17 10:37 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, kernel test robot, Russell King, linux-arm-kernel,
	Linus Torvalds, x86, Madhavan Srinivasan, Michael Ellerman,
	Nicholas Piggin, Christophe Leroy, linuxppc-dev, Paul Walmsley,
	Palmer Dabbelt, linux-riscv, Heiko Carstens,
	Christian Borntraeger, Sven Schnelle, linux-s390,
	Mathieu Desnoyers, Andrew Cooper, Julia Lawall, Nicolas Palix,
	Darren Hart, Davidlohr Bueso, André Almeida, Alexander Viro,
	Christian Brauner, Jan Kara, linux-fsdevel

On Fri, Oct 17, 2025 at 12:08:54PM +0200, Thomas Gleixner wrote:

> After some back and forth discussion Linus suggested to put the local label
> workaround into the user access functions themself.

> Andrew thankfully pointed me to nested for() loops and after some head
> scratching I managed to get all of it hidden in that construct.

These two hacks are awesome and made my day; thanks!

> Thomas Gleixner (12):
>       ARM: uaccess: Implement missing __get_user_asm_dword()
>       uaccess: Provide ASM GOTO safe wrappers for unsafe_*_user()
>       x86/uaccess: Use unsafe wrappers for ASM GOTO
>       powerpc/uaccess: Use unsafe wrappers for ASM GOTO
>       riscv/uaccess: Use unsafe wrappers for ASM GOTO
>       s390/uaccess: Use unsafe wrappers for ASM GOTO
>       uaccess: Provide scoped masked user access regions
>       uaccess: Provide put/get_user_masked()
>       futex: Convert to scoped masked user access
>       x86/futex: Convert to scoped masked user access

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>

>       select: Convert to scoped masked user access

Basically okay on this one too, except see the comment.

>       coccinelle: misc: Add scoped_masked_$MODE_access() checker script

And that's just pure magic; I can't seem to get my head around
coccinelle syntax.

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [patch V3 00/12] uaccess: Provide and use scopes for user masked access
  2025-10-17 10:37 ` [patch V3 00/12] uaccess: Provide and use scopes for user masked access Peter Zijlstra
@ 2025-10-17 10:50   ` Andrew Cooper
  0 siblings, 0 replies; 40+ messages in thread
From: Andrew Cooper @ 2025-10-17 10:50 UTC (permalink / raw)
  To: Peter Zijlstra, Thomas Gleixner
  Cc: LKML, kernel test robot, Russell King, linux-arm-kernel,
	Linus Torvalds, x86, Madhavan Srinivasan, Michael Ellerman,
	Nicholas Piggin, Christophe Leroy, linuxppc-dev, Paul Walmsley,
	Palmer Dabbelt, linux-riscv, Heiko Carstens,
	Christian Borntraeger, Sven Schnelle, linux-s390,
	Mathieu Desnoyers, Andrew Cooper, Julia Lawall, Nicolas Palix,
	Darren Hart, Davidlohr Bueso, André Almeida, Alexander Viro,
	Christian Brauner, Jan Kara, linux-fsdevel

On 17/10/2025 11:37 am, Peter Zijlstra wrote:
> On Fri, Oct 17, 2025 at 12:08:54PM +0200, Thomas Gleixner wrote:
>> Andrew thankfully pointed me to nested for() loops and after some head
>> scratching I managed to get all of it hidden in that construct.
> These two hacks are awesome and made my day; thanks!

It's a new four loop paradigm, brought to you by a silly restriction in C99.

~Andrew

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [patch V3 09/12] [RFC] coccinelle: misc: Add scoped_masked_$MODE_access() checker script
  2025-10-17 10:09 ` [patch V3 09/12] [RFC] coccinelle: misc: Add scoped_masked_$MODE_access() checker script Thomas Gleixner
@ 2025-10-17 10:51   ` Julia Lawall
  0 siblings, 0 replies; 40+ messages in thread
From: Julia Lawall @ 2025-10-17 10:51 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Julia Lawall, Nicolas Palix, kernel test robot,
	Russell King, linux-arm-kernel, Linus Torvalds, x86,
	Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
	Christophe Leroy, linuxppc-dev, Paul Walmsley, Palmer Dabbelt,
	linux-riscv, Heiko Carstens, Christian Borntraeger, Sven Schnelle,
	linux-s390, Mathieu Desnoyers, Andrew Cooper, Peter Zijlstra,
	Darren Hart, Davidlohr Bueso, André Almeida, Alexander Viro,
	Christian Brauner, Jan Kara, linux-fsdevel



On Fri, 17 Oct 2025, Thomas Gleixner wrote:

> A common mistake in user access code is that the wrong access mode is
> selected for starting the user access section. As most architectures map
> Read and Write modes to ReadWrite this goes often unnoticed for quite some
> time.
>
> Aside from that, the scoped user access mechanism requires that the same
> pointer is used for the actual accessor macros that was handed in to start
> the scope because the pointer can be modified by the scope begin mechanism
> if the architecture supports masking.
>
> Add a basic (and incomplete) coccinelle script to check for the common
> issues. The error output is:
>
> kernel/futex/futex.h:303:2-17: ERROR: Invalid pointer for unsafe_put_user(p) in scoped_masked_user_write_access(to)
> kernel/futex/futex.h:292:2-17: ERROR: Invalid access mode unsafe_get_user() in scoped_masked_user_write_access()
>
> Not-Yet-Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Julia Lawall <Julia.Lawall@inria.fr>
> Cc: Nicolas Palix <nicolas.palix@imag.fr>
> ---
>  scripts/coccinelle/misc/scoped_uaccess.cocci |  108 +++++++++++++++++++++++++++
>  1 file changed, 108 insertions(+)
>
> --- /dev/null
> +++ b/scripts/coccinelle/misc/scoped_uaccess.cocci
> @@ -0,0 +1,108 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/// Validate scoped_masked_user*access() scopes
> +///
> +// Confidence: Zero
> +// Options: --no-includes --include-headers
> +
> +virtual context
> +virtual report
> +virtual org
> +
> +@initialize:python@
> +@@
> +
> +scopemap = {
> +  'scoped_masked_user_read_access_size'  : 'scoped_masked_user_read_access',
> +  'scoped_masked_user_write_access_size' : 'scoped_masked_user_write_access',
> +  'scoped_masked_user_rw_access_size'    : 'scoped_masked_user_rw_access',
> +}
> +
> +# Most common accessors. Incomplete list
> +noaccessmap = {
> +  'scoped_masked_user_read_access'       : ('unsafe_put_user', 'unsafe_copy_to_user'),
> +  'scoped_masked_user_write_access'      : ('unsafe_get_user', 'unsafe_copy_from_user'),
> +}
> +
> +# Most common accessors. Incomplete list
> +ptrmap = {
> +  'unsafe_put_user'			 : 1,
> +  'unsafe_get_user'			 : 1,
> +  'unsafe_copy_to_user'		 	 : 0,
> +  'unsafe_copy_from_user'		 : 0,
> +}
> +
> +print_mode = None
> +
> +def pr_err(pos, msg):
> +   if print_mode == 'R':
> +      coccilib.report.print_report(pos[0], msg)
> +   elif print_mode == 'O':
> +      cocci.print_main(msg, pos)
> +
> +@r0 depends on report || org@
> +iterator name scoped_masked_user_read_access,
> +	      scoped_masked_user_read_access_size,
> +	      scoped_masked_user_write_access,
> +	      scoped_masked_user_write_access_size,
> +	      scoped_masked_user_rw_access,
> +	      scoped_masked_user_rw_access_size;
> +iterator scope;
> +statement S;
> +@@
> +
> +(
> +(
> +scoped_masked_user_read_access(...) S
> +|
> +scoped_masked_user_read_access_size(...) S
> +|
> +scoped_masked_user_write_access(...) S
> +|
> +scoped_masked_user_write_access_size(...) S
> +|
> +scoped_masked_user_rw_access(...) S
> +|
> +scoped_masked_user_rw_access_size(...) S
> +)
> +&
> +scope(...) S
> +)
> +
> +@script:python depends on r0 && report@
> +@@
> +print_mode = 'R'
> +
> +@script:python depends on r0 && org@
> +@@
> +print_mode = 'O'
> +
> +@r1@
> +expression sp, a0, a1;
> +iterator r0.scope;
> +identifier ac;
> +position p;
> +@@
> +
> +  scope(sp,...) {
> +    <+...
> +    ac@p(a0, a1, ...);
> +    ...+>
> +  }

This will be more efficient and equally useful with <... ...>
Using + requires that there is at least one match, which has a cost.
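
I.e., the same rule with the non-+ variant (sketch):

  scope(sp,...) {
    <...
    ac@p(a0, a1, ...);
    ...>
  }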

julia


> +
> +@script:python@
> +pos << r1.p;
> +scope << r0.scope;
> +ac << r1.ac;
> +sp << r1.sp;
> +a0 << r1.a0;
> +a1 << r1.a1;
> +@@
> +
> +scope = scopemap.get(scope, scope)
> +if ac in noaccessmap.get(scope, []):
> +   pr_err(pos, 'ERROR: Invalid access mode %s() in %s()' %(ac, scope))
> +
> +if ac in ptrmap:
> +   ap = (a0, a1)[ptrmap[ac]]
> +   if sp != ap.lstrip('&').split('->')[0].strip():
> +      pr_err(pos, 'ERROR: Invalid pointer for %s(%s) in %s(%s)' %(ac, ap, scope, sp))
>
>

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [patch V3 07/12] uaccess: Provide scoped masked user access regions
  2025-10-17 10:09 ` [patch V3 07/12] uaccess: Provide scoped masked user access regions Thomas Gleixner
@ 2025-10-17 11:08   ` Andrew Cooper
  2025-10-17 11:21     ` Thomas Gleixner
  2025-10-17 11:25     ` Peter Zijlstra
  2025-10-17 13:23   ` Mathieu Desnoyers
  2025-10-20 18:28   ` David Laight
  2 siblings, 2 replies; 40+ messages in thread
From: Andrew Cooper @ 2025-10-17 11:08 UTC (permalink / raw)
  To: Thomas Gleixner, LKML
  Cc: Christophe Leroy, Mathieu Desnoyers, Andrew Cooper,
	Linus Torvalds, kernel test robot, Russell King, linux-arm-kernel,
	x86, Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
	linuxppc-dev, Paul Walmsley, Palmer Dabbelt, linux-riscv,
	Heiko Carstens, Christian Borntraeger, Sven Schnelle, linux-s390,
	Julia Lawall, Nicolas Palix, Peter Zijlstra, Darren Hart,
	Davidlohr Bueso, André Almeida, Alexander Viro,
	Christian Brauner, Jan Kara, linux-fsdevel

On 17/10/2025 11:09 am, Thomas Gleixner wrote:
> --- a/include/linux/uaccess.h
> +++ b/include/linux/uaccess.h
> +#define __scoped_masked_user_access(_mode, _uptr, _size, _elbl)					\
> +for (bool ____stop = false; !____stop; ____stop = true)						\
> +	for (typeof((_uptr)) _tmpptr = __scoped_user_access_begin(_mode, _uptr, _size, _elbl);	\
> +	     !____stop; ____stop = true)							\
> +		for (CLASS(masked_user_##_mode##_access, scope) (_tmpptr); !____stop;		\
> +		     ____stop = true)					\
> +			/* Force modified pointer usage within the scope */			\
> +			for (const typeof((_uptr)) _uptr = _tmpptr; !____stop; ____stop = true)	\
> +				if (1)
> +

Truly a thing of beauty.  At least the end user experience is nice.

One thing to be aware of is that:

    scoped_masked_user_rw_access(ptr, efault) {
        unsafe_get_user(rval, &ptr->rval, efault);
        unsafe_put_user(wval, &ptr->wval, efault);
    } else {
        // unreachable
    }

will compile.  Instead, I think you want the final line of the macro to
be "if (0) {} else" to prevent this.


While we're on the subject, can we find some C standards people to lobby.

C2Y has a proposal to introduce "if (int foo =" syntax to generalise the
for() loop special case.  Can we please see about fixing the restriction
of only allowing a single type per loop?   This example could be a
single loop if it weren't for that restriction.
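
I.e., being able to write something like this (not valid C today,
because a for() init-clause may only declare objects of one type):

    for (bool stop = false, typeof(_uptr) _uptr = _tmpptr; !stop; stop = true)

instead of one nested loop per type.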

Thanks,

~Andrew

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [patch V3 12/12] select: Convert to scoped masked user access
  2025-10-17 10:35   ` Peter Zijlstra
@ 2025-10-17 11:12     ` Thomas Gleixner
  0 siblings, 0 replies; 40+ messages in thread
From: Thomas Gleixner @ 2025-10-17 11:12 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: LKML, Alexander Viro, Christian Brauner, Jan Kara, linux-fsdevel,
	kernel test robot, Russell King, linux-arm-kernel, Linus Torvalds,
	x86, Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
	Christophe Leroy, linuxppc-dev, Paul Walmsley, Palmer Dabbelt,
	linux-riscv, Heiko Carstens, Christian Borntraeger, Sven Schnelle,
	linux-s390, Mathieu Desnoyers, Andrew Cooper, Julia Lawall,
	Nicolas Palix, Darren Hart, Davidlohr Bueso, André Almeida

On Fri, Oct 17 2025 at 12:35, Peter Zijlstra wrote:
> On Fri, Oct 17, 2025 at 12:09:18PM +0200, Thomas Gleixner wrote:
>> -		unsafe_get_user(to->p, &from->p, Efault);
>> -		unsafe_get_user(to->size, &from->size, Efault);
>> -		user_read_access_end();
>> +		scoped_masked_user_rw_access(from, Efault) {
>
> Should this not be: scoped_masked_user_read_access() ?

Of course. The wonders of copy & pasta ....

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [patch V3 07/12] uaccess: Provide scoped masked user access regions
  2025-10-17 11:08   ` Andrew Cooper
@ 2025-10-17 11:21     ` Thomas Gleixner
  2025-10-17 11:29       ` Andrew Cooper
  2025-10-17 11:25     ` Peter Zijlstra
  1 sibling, 1 reply; 40+ messages in thread
From: Thomas Gleixner @ 2025-10-17 11:21 UTC (permalink / raw)
  To: Andrew Cooper, LKML
  Cc: Christophe Leroy, Mathieu Desnoyers, Andrew Cooper,
	Linus Torvalds, kernel test robot, Russell King, linux-arm-kernel,
	x86, Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
	linuxppc-dev, Paul Walmsley, Palmer Dabbelt, linux-riscv,
	Heiko Carstens, Christian Borntraeger, Sven Schnelle, linux-s390,
	Julia Lawall, Nicolas Palix, Peter Zijlstra, Darren Hart,
	Davidlohr Bueso, André Almeida, Alexander Viro,
	Christian Brauner, Jan Kara, linux-fsdevel

On Fri, Oct 17 2025 at 12:08, Andrew Cooper wrote:

> On 17/10/2025 11:09 am, Thomas Gleixner wrote:
>> --- a/include/linux/uaccess.h
>> +++ b/include/linux/uaccess.h
>> +#define __scoped_masked_user_access(_mode, _uptr, _size, _elbl)					\
>> +for (bool ____stop = false; !____stop; ____stop = true)						\
>> +	for (typeof((_uptr)) _tmpptr = __scoped_user_access_begin(_mode, _uptr, _size, _elbl);	\
>> +	     !____stop; ____stop = true)							\
>> +		for (CLASS(masked_user_##_mode##_access, scope) (_tmpptr); !____stop;		\
>> +		     ____stop = true)					\
>> +			/* Force modified pointer usage within the scope */			\
>> +			for (const typeof((_uptr)) _uptr = _tmpptr; !____stop; ____stop = true)	\
>> +				if (1)
>> +
>
> Truly a thing of beauty.  At least the end user experience is nice.
>
> One thing to be aware of is that:
>
>     scoped_masked_user_rw_access(ptr, efault) {
>         unsafe_get_user(rval, &ptr->rval, efault);
>         unsafe_put_user(wval, &ptr->wval, efault);
>     } else {
>         // unreachable
>     }
>
> will compile.  Instead, I think you want the final line of the macro to
> be "if (0) {} else" to prevent this.

Duh. yes. But I can just remove the 'if (1)' completely. That's a
leftover from some earlier iteration of this.

> While we're on the subject, can we find some C standards people to lobby.
>
> C2Y has a proposal to introduce "if (int foo =" syntax to generalise the
> for() loop special case.  Can we please see about fixing the restriction
> of only allowing a single type per loop?   This example could be a
> single loop if it weren't for that restriction.

That'd be nice. But we can't have nice things, can we?

Thanks,

        tglx

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [patch V3 07/12] uaccess: Provide scoped masked user access regions
  2025-10-17 11:08   ` Andrew Cooper
  2025-10-17 11:21     ` Thomas Gleixner
@ 2025-10-17 11:25     ` Peter Zijlstra
  1 sibling, 0 replies; 40+ messages in thread
From: Peter Zijlstra @ 2025-10-17 11:25 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Thomas Gleixner, LKML, Christophe Leroy, Mathieu Desnoyers,
	Andrew Cooper, Linus Torvalds, kernel test robot, Russell King,
	linux-arm-kernel, x86, Madhavan Srinivasan, Michael Ellerman,
	Nicholas Piggin, linuxppc-dev, Paul Walmsley, Palmer Dabbelt,
	linux-riscv, Heiko Carstens, Christian Borntraeger, Sven Schnelle,
	linux-s390, Julia Lawall, Nicolas Palix, Darren Hart,
	Davidlohr Bueso, André Almeida, Alexander Viro,
	Christian Brauner, Jan Kara, linux-fsdevel

On Fri, Oct 17, 2025 at 12:08:24PM +0100, Andrew Cooper wrote:
> On 17/10/2025 11:09 am, Thomas Gleixner wrote:
> > --- a/include/linux/uaccess.h
> > +++ b/include/linux/uaccess.h
> > +#define __scoped_masked_user_access(_mode, _uptr, _size, _elbl)					\
> > +for (bool ____stop = false; !____stop; ____stop = true)						\
> > +	for (typeof((_uptr)) _tmpptr = __scoped_user_access_begin(_mode, _uptr, _size, _elbl);	\
> > +	     !____stop; ____stop = true)							\
> > +		for (CLASS(masked_user_##_mode##_access, scope) (_tmpptr); !____stop;		\
> > +		     ____stop = true)					\
> > +			/* Force modified pointer usage within the scope */			\
> > +			for (const typeof((_uptr)) _uptr = _tmpptr; !____stop; ____stop = true)	\
> > +				if (1)
> > +
> 
> Truly a thing of beauty.  At least the end user experience is nice.
> 
> One thing to be aware of is that:
> 
>     scoped_masked_user_rw_access(ptr, efault) {
>         unsafe_get_user(rval, &ptr->rval, efault);
>         unsafe_put_user(wval, &ptr->wval, efault);
>     } else {
>         // unreachable
>     }
> 
> will compile.  Instead, I think you want the final line of the macro to
> be "if (0) {} else" to prevent this.
> 
> 
> While we're on the subject, can we find some C standards people to lobby.
> 
> C2Y has a proposal to introduce "if (int foo =" syntax to generalise the
> for() loop special case.  Can we please see about fixing the restriction
> of only allowing a single type per loop?   This example could be a
> single loop if it weren't for that restriction.

So elsewhere, Linus suggested to use a struct to get around that. See
for example this lovely thing:

  https://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git/commit/?h=locking/core&id=1bc5d8cefd0d9768dc03c83140dd54c552bea470
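
The gist of it, roughly (made-up names):

	for (struct { bool done; u32 __user *ptr; } _s = { false, uaddr };
	     !_s.done; _s.done = true)
		...

i.e. one anonymous struct smuggles several "loop variables" of
different types through the single declaration C does allow.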


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [patch V3 07/12] uaccess: Provide scoped masked user access regions
  2025-10-17 11:21     ` Thomas Gleixner
@ 2025-10-17 11:29       ` Andrew Cooper
  0 siblings, 0 replies; 40+ messages in thread
From: Andrew Cooper @ 2025-10-17 11:29 UTC (permalink / raw)
  To: Thomas Gleixner, LKML
  Cc: Christophe Leroy, Mathieu Desnoyers, Andrew Cooper,
	Linus Torvalds, kernel test robot, Russell King, linux-arm-kernel,
	x86, Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
	linuxppc-dev, Paul Walmsley, Palmer Dabbelt, linux-riscv,
	Heiko Carstens, Christian Borntraeger, Sven Schnelle, linux-s390,
	Julia Lawall, Nicolas Palix, Peter Zijlstra, Darren Hart,
	Davidlohr Bueso, André Almeida, Alexander Viro,
	Christian Brauner, Jan Kara, linux-fsdevel

On 17/10/2025 12:21 pm, Thomas Gleixner wrote:
> On Fri, Oct 17 2025 at 12:08, Andrew Cooper wrote:
>
>> On 17/10/2025 11:09 am, Thomas Gleixner wrote:
>>> --- a/include/linux/uaccess.h
>>> +++ b/include/linux/uaccess.h
>>> +#define __scoped_masked_user_access(_mode, _uptr, _size, _elbl)					\
>>> +for (bool ____stop = false; !____stop; ____stop = true)						\
>>> +	for (typeof((_uptr)) _tmpptr = __scoped_user_access_begin(_mode, _uptr, _size, _elbl);	\
>>> +	     !____stop; ____stop = true)							\
>>> +		for (CLASS(masked_user_##_mode##_access, scope) (_tmpptr); !____stop;		\
>>> +		     ____stop = true)					\
>>> +			/* Force modified pointer usage within the scope */			\
>>> +			for (const typeof((_uptr)) _uptr = _tmpptr; !____stop; ____stop = true)	\
>>> +				if (1)
>>> +
>> Truly a thing of beauty.  At least the end user experience is nice.
>>
>> One thing to be aware of is that:
>>
>>     scoped_masked_user_rw_access(ptr, efault) {
>>         unsafe_get_user(rval, &ptr->rval, efault);
>>         unsafe_put_user(wval, &ptr->wval, efault);
>>     } else {
>>         // unreachable
>>     }
>>
>> will compile.  Instead, I think you want the final line of the macro to
>> be "if (0) {} else" to prevent this.
> Duh. yes. But I can just remove the 'if (1)' completely. That's a
> leftover from some earlier iteration of this.

Oh, of course.  That works too.

>
>> While we're on the subject, can we find some C standards people to lobby.
>>
>> C2Y has a proposal to introduce "if (int foo =" syntax to generalise the
>> for() loop special case.  Can we please see about fixing the restriction
>> of only allowing a single type per loop?   This example could be a
>> single loop if it weren't for that restriction.
> That'd be nice. But we can't have nice things, can we?

Well, the worst they can say is no :)

~Andrew

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [patch V3 00/12] uaccess: Provide and use scopes for user masked access
  2025-10-17 10:08 [patch V3 00/12] uaccess: Provide and use scopes for user masked access Thomas Gleixner
                   ` (12 preceding siblings ...)
  2025-10-17 10:37 ` [patch V3 00/12] uaccess: Provide and use scopes for user masked access Peter Zijlstra
@ 2025-10-17 12:25 ` Mathieu Desnoyers
  13 siblings, 0 replies; 40+ messages in thread
From: Mathieu Desnoyers @ 2025-10-17 12:25 UTC (permalink / raw)
  To: Thomas Gleixner, LKML
  Cc: kernel test robot, Russell King, linux-arm-kernel, Linus Torvalds,
	x86, Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
	Christophe Leroy, linuxppc-dev, Paul Walmsley, Palmer Dabbelt,
	linux-riscv, Heiko Carstens, Christian Borntraeger, Sven Schnelle,
	linux-s390, Andrew Cooper, Julia Lawall, Nicolas Palix,
	Peter Zijlstra, Darren Hart, Davidlohr Bueso, André Almeida,
	Alexander Viro, Christian Brauner, Jan Kara, linux-fsdevel

On 2025-10-17 06:08, Thomas Gleixner wrote:
> This is a follow up on the V2 feedback:
> 
>     https://lore.kernel.org/20250916163004.674341701@linutronix.de
> 
> The main concern over the V2 implementation was the requirement to have
> the code within the macro itself.
> 
> The main reason for that was the issue with ASM GOTO within a auto cleanup
> scope. Clang refuses to build when the ASM GOTO label is outside of the
> scope and GCC silently miscompiles the code and misses the cleanup.
> 
> After some back and forth discussion Linus suggested to put the local label
> workaround into the user access functions themself.
> 
> The second reason for having this construct was to make the potential
> modification of the pointer (when the architecture supports masking) scope
> local, as that preserves the original pointer for the failure path.
> 
> Andrew thankfully pointed me to nested for() loops and after some head
> scratching I managed to get all of it hidden in that construct.
> 
> So now the scoped access looks like this:
> 
> 	scoped_masked_user_read_access(ptr, efault) {
> 	        // @ptr is aliased. An eventual mask modification is scope local
> 		unsafe_get_user(val, ptr, efault);
> 		...
> 	}

Now we're talking! It indeed looks much more like C now. I'll go review
the implementation.

Thanks,

Mathieu


> 	return 0;
> efault:
>          // @ptr is unmodified
> 	do_stuff(ptr);
> 	return -EFAULT;
> 
> 
> Changes vs. V2:
> 
>      - Fix the unsigned long long pointer issue in ARM get_user() -
>        Christophe, Russell
> 
>      - Provide a generic workaround for the ASM GOTO issue and convert the
>        affected architecture code over - Linus
> 
>      - Reimplement the scoped cleanup magic with nested for() loops - Andrew
> 
>      - Provide variants with size provided by the caller - Mathieu
> 
>      - Add get/put_user_masked() helpers for single read/write access
> 
>      - Fixup the usage in futex, x86. select
> 
>      - A clumsy attempt to implement a coccinelle checker which catches
>        access mismatches, e.g. unsafe_put_user() inside a
>        scoped_masked_user_read_access() region. That needs more thought and
>        more coccinelle foo and is just there for discussion.
> 
> The series is based on v6.18-rc1 and also available from git:
> 
>      git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git uaccess/masked
> 
> Thanks,
> 
> 	tglx
> ---
> Thomas Gleixner (12):
>        ARM: uaccess: Implement missing __get_user_asm_dword()
>        uaccess: Provide ASM GOTO safe wrappers for unsafe_*_user()
>        x86/uaccess: Use unsafe wrappers for ASM GOTO
>        powerpc/uaccess: Use unsafe wrappers for ASM GOTO
>        riscv/uaccess: Use unsafe wrappers for ASM GOTO
>        s390/uaccess: Use unsafe wrappers for ASM GOTO
>        uaccess: Provide scoped masked user access regions
>        uaccess: Provide put/get_user_masked()
>        coccinelle: misc: Add scoped_masked_$MODE_access() checker script
>        futex: Convert to scoped masked user access
>        x86/futex: Convert to scoped masked user access
>        select: Convert to scoped masked user access
> 
> ---
>   arch/arm/include/asm/uaccess.h               |   26 ++
>   arch/powerpc/include/asm/uaccess.h           |    8
>   arch/riscv/include/asm/uaccess.h             |    8
>   arch/s390/include/asm/uaccess.h              |    4
>   arch/x86/include/asm/futex.h                 |   75 ++----
>   arch/x86/include/asm/uaccess.h               |   12 -
>   fs/select.c                                  |   12 -
>   include/linux/uaccess.h                      |  313 ++++++++++++++++++++++++++-
>   kernel/futex/futex.h                         |   37 ---
>   scripts/coccinelle/misc/scoped_uaccess.cocci |  108 +++++++++
>   10 files changed, 497 insertions(+), 106 deletions(-)


-- 
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [patch V3 01/12] ARM: uaccess: Implement missing __get_user_asm_dword()
  2025-10-17 10:08 ` [patch V3 01/12] ARM: uaccess: Implement missing __get_user_asm_dword() Thomas Gleixner
@ 2025-10-17 12:36   ` Mathieu Desnoyers
  0 siblings, 0 replies; 40+ messages in thread
From: Mathieu Desnoyers @ 2025-10-17 12:36 UTC (permalink / raw)
  To: Thomas Gleixner, LKML
  Cc: kernel test robot, Russell King, linux-arm-kernel, Linus Torvalds,
	x86, Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
	Christophe Leroy, linuxppc-dev, Paul Walmsley, Palmer Dabbelt,
	linux-riscv, Heiko Carstens, Christian Borntraeger, Sven Schnelle,
	linux-s390, Andrew Cooper, Julia Lawall, Nicolas Palix,
	Peter Zijlstra, Darren Hart, Davidlohr Bueso, André Almeida,
	Alexander Viro, Christian Brauner, Jan Kara, linux-fsdevel

On 2025-10-17 06:08, Thomas Gleixner wrote:
> When CONFIG_CPU_SPECTRE=n then get_user() is missing the 8 byte ASM variant
> for no real good reason. This prevents using get_user(u64) in generic code.
> 
> Implement it as a sequence of two 4-byte reads with LE/BE awareness and
> make the unsigned long (or long long) type for the intermediate variable to
> read into dependent on the target type.
> 
> The __long_type() macro and idea was lifted from PowerPC. Thanks to
> Christophe for pointing it out.
> 
> Reported-by: kernel test robot <lkp@intel.com>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Russell King <linux@armlinux.org.uk>
> Cc: linux-arm-kernel@lists.infradead.org
> Closes: https://lore.kernel.org/oe-kbuild-all/202509120155.pFgwfeUD-lkp@intel.com/
> ---
> V2a: Solve the *ptr issue vs. unsigned long long - Russell/Christophe
> V2: New patch to fix the 0-day fallout
> ---
>   arch/arm/include/asm/uaccess.h |   26 +++++++++++++++++++++++++-
>   1 file changed, 25 insertions(+), 1 deletion(-)
> 
> --- a/arch/arm/include/asm/uaccess.h
> +++ b/arch/arm/include/asm/uaccess.h
> @@ -283,10 +283,17 @@ extern int __put_user_8(void *, unsigned
>   	__gu_err;							\
>   })
>   
> +/*
> + * This is a type: either unsigned long, if the argument fits into
> + * that type, or otherwise unsigned long long.
> + */
> +#define __long_type(x) \
> +	__typeof__(__builtin_choose_expr(sizeof(x) > sizeof(0UL), 0ULL, 0UL))
> +
>   #define __get_user_err(x, ptr, err, __t)				\
>   do {									\
>   	unsigned long __gu_addr = (unsigned long)(ptr);			\
> -	unsigned long __gu_val;						\
> +	__long_type(x) __gu_val;					\
>   	unsigned int __ua_flags;					\
>   	__chk_user_ptr(ptr);						\
>   	might_fault();							\
> @@ -295,6 +302,7 @@ do {									\
>   	case 1:	__get_user_asm_byte(__gu_val, __gu_addr, err, __t); break;	\
>   	case 2:	__get_user_asm_half(__gu_val, __gu_addr, err, __t); break;	\
>   	case 4:	__get_user_asm_word(__gu_val, __gu_addr, err, __t); break;	\
> +	case 8:	__get_user_asm_dword(__gu_val, __gu_addr, err, __t); break;	\
>   	default: (__gu_val) = __get_user_bad();				\
>   	}								\
>   	uaccess_restore(__ua_flags);					\
> @@ -353,6 +361,22 @@ do {									\
>   #define __get_user_asm_word(x, addr, err, __t)			\
>   	__get_user_asm(x, addr, err, "ldr" __t)
>   
> +#ifdef __ARMEB__
> +#define __WORD0_OFFS	4
> +#define __WORD1_OFFS	0
> +#else
> +#define __WORD0_OFFS	0
> +#define __WORD1_OFFS	4
> +#endif
> +
> +#define __get_user_asm_dword(x, addr, err, __t)				\
> +	({								\
> +	unsigned long __w0, __w1;					\
> +	__get_user_asm(__w0, addr + __WORD0_OFFS, err, "ldr" __t);	\
> +	__get_user_asm(__w1, addr + __WORD1_OFFS, err, "ldr" __t);	\
> +	(x) = ((u64)__w1 << 32) | (u64) __w0;				\
> +})

If we look at __get_user_asm_half, it always loads the lower addresses
first (__gu_addr), and then loads the following address (__gu_addr + 1).

This new code for dword flips the order of word accesses between BE and
LE, which means that on BE we're reading the second word and then moving
back one word.

I'm not sure whether it matters or not, but I'm pointing it out in case
it matters in terms of hardware memory access pattern.

Also we end up with __get_user_asm_{half,dword} that effectively do the
same tricks in very different ways, so it would be good to come up with
a unified pattern.
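
Something along these lines, perhaps (untested sketch, keeping both
loads in ascending address order and doing the endian word selection
in C):

	#define __get_user_asm_dword(x, addr, err, __t)		\
	({								\
		unsigned long __w0, __w1;				\
		__get_user_asm(__w0, addr, err, "ldr" __t);		\
		__get_user_asm(__w1, addr + 4, err, "ldr" __t);		\
		if (IS_ENABLED(CONFIG_CPU_BIG_ENDIAN))			\
			(x) = ((u64)__w0 << 32) | __w1;			\
		else							\
			(x) = ((u64)__w1 << 32) | __w0;			\
	})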

Thanks,

Mathieu


> +
>   #define __put_user_switch(x, ptr, __err, __fn)				\
>   	do {								\
>   		const __typeof__(*(ptr)) __user *__pu_ptr = (ptr);	\
> 


-- 
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [patch V3 02/12] uaccess: Provide ASM GOTO safe wrappers for unsafe_*_user()
  2025-10-17 10:08 ` [patch V3 02/12] uaccess: Provide ASM GOTO safe wrappers for unsafe_*_user() Thomas Gleixner
@ 2025-10-17 12:43   ` Mathieu Desnoyers
  2025-10-17 12:48     ` Mathieu Desnoyers
  0 siblings, 1 reply; 40+ messages in thread
From: Mathieu Desnoyers @ 2025-10-17 12:43 UTC (permalink / raw)
  To: Thomas Gleixner, LKML
  Cc: Linus Torvalds, kernel test robot, Russell King, linux-arm-kernel,
	x86, Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
	Christophe Leroy, linuxppc-dev, Paul Walmsley, Palmer Dabbelt,
	linux-riscv, Heiko Carstens, Christian Borntraeger, Sven Schnelle,
	linux-s390, Andrew Cooper, Julia Lawall, Nicolas Palix,
	Peter Zijlstra, Darren Hart, Davidlohr Bueso, André Almeida,
	Alexander Viro, Christian Brauner, Jan Kara, linux-fsdevel

On 2025-10-17 06:08, Thomas Gleixner wrote:
> ASM GOTO is miscompiled by GCC when it is used inside a auto cleanup scope:
> 
> bool foo(u32 __user *p, u32 val)
> {
> 	scoped_guard(pagefault)
> 		unsafe_put_user(val, p, efault);
> 	return true;
> efault:
> 	return false;
> }
> 
>   e80:	e8 00 00 00 00       	call   e85 <foo+0x5>
>   e85:	65 48 8b 05 00 00 00 00 mov    %gs:0x0(%rip),%rax
>   e8d:	83 80 04 14 00 00 01 	addl   $0x1,0x1404(%rax)   // pf_disable++
>   e94:	89 37                	mov    %esi,(%rdi)
>   e96:	83 a8 04 14 00 00 01 	subl   $0x1,0x1404(%rax)   // pf_disable--
>   e9d:	b8 01 00 00 00       	mov    $0x1,%eax           // success
>   ea2:	e9 00 00 00 00       	jmp    ea7 <foo+0x27>      // ret
>   ea7:	31 c0                	xor    %eax,%eax           // fail
>   ea9:	e9 00 00 00 00       	jmp    eae <foo+0x2e>      // ret
> 
> which is broken as it leaks the pagefault disable counter on failure.
> 
> Clang at least fails the build.
> 
> Linus suggested to add a local label into the macro scope and let that
> jump to the actual caller supplied error label.
> 
>         	__label__ local_label;                                  \
>          arch_unsafe_get_user(x, ptr, local_label);              \
> 	if (0) {                                                \
> 	local_label:                                            \
> 		goto label;                                     \
> 
> That works for both GCC and clang.
> 
> clang:
> 
>   c80:	0f 1f 44 00 00       	   nopl   0x0(%rax,%rax,1)	
>   c85:	65 48 8b 0c 25 00 00 00 00 mov    %gs:0x0,%rcx
>   c8e:	ff 81 04 14 00 00    	   incl   0x1404(%rcx)	   // pf_disable++
>   c94:	31 c0                	   xor    %eax,%eax        // set retval to false
>   c96:	89 37                      mov    %esi,(%rdi)      // write
>   c98:	b0 01                	   mov    $0x1,%al         // set retval to true
>   c9a:	ff 89 04 14 00 00    	   decl   0x1404(%rcx)     // pf_disable--
>   ca0:	2e e9 00 00 00 00    	   cs jmp ca6 <foo+0x26>   // ret
> 
> The exception table entry points correctly to c9a
> 
> GCC:
> 
>   f70:   e8 00 00 00 00          call   f75 <baz+0x5>
>   f75:   65 48 8b 05 00 00 00 00 mov    %gs:0x0(%rip),%rax
>   f7d:   83 80 04 14 00 00 01    addl   $0x1,0x1404(%rax)  // pf_disable++
>   f84:   8b 17                   mov    (%rdi),%edx
>   f86:   89 16                   mov    %edx,(%rsi)
>   f88:   83 a8 04 14 00 00 01    subl   $0x1,0x1404(%rax) // pf_disable--
>   f8f:   b8 01 00 00 00          mov    $0x1,%eax         // success
>   f94:   e9 00 00 00 00          jmp    f99 <baz+0x29>    // ret
>   f99:   83 a8 04 14 00 00 01    subl   $0x1,0x1404(%rax) // pf_disable--
>   fa0:   31 c0                   xor    %eax,%eax         // fail
>   fa2:   e9 00 00 00 00          jmp    fa7 <baz+0x37>    // ret
> 
> The exception table entry points correctly to f99
> 
> So both compilers optimize out the extra goto and emit correct and
> efficient code.
> 
> Provide a generic wrapper to do that to avoid modifying all the affected
> architecture specific implementation with that workaround.
> 
> The only change required for architectures is to rename unsafe_*_user() to
> arch_unsafe_*_user(). That's done in subsequent changes.
> 
> Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

Adding a link to a gcc bugzilla entry may be relevant here. Ditto for
clang if there is a bug tracker entry for this.

Other than that:

Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>

> ---
>   include/linux/uaccess.h |   72 +++++++++++++++++++++++++++++++++++++++++++++---
>   1 file changed, 68 insertions(+), 4 deletions(-)
> 
> --- a/include/linux/uaccess.h
> +++ b/include/linux/uaccess.h
> @@ -518,7 +518,34 @@ long strncpy_from_user_nofault(char *dst
>   		long count);
>   long strnlen_user_nofault(const void __user *unsafe_addr, long count);
>   
> -#ifndef __get_kernel_nofault
> +#ifdef arch_get_kernel_nofault
> +/*
> + * Wrap the architecture implementation so that @label can be outside of a
> + * cleanup() scope. A regular C goto works correctly, but ASM goto does
> + * not. Clang rejects such an attempt, but GCC silently emits buggy code.
> + */
> +#define __get_kernel_nofault(dst, src, type, label)		\
> +do {								\
> +	__label__ local_label;					\
> +	arch_get_kernel_nofault(dst, src, type, local_label);	\
> +	if (0) {						\
> +	local_label:						\
> +		goto label;					\
> +	}							\
> +} while (0)
> +
> +#define __put_kernel_nofault(dst, src, type, label)		\
> +do {								\
> +	__label__ local_label;					\
> +	arch_put_kernel_nofault(dst, src, type, local_label);	\
> +	if (0) {						\
> +	local_label:						\
> +		goto label;					\
> +	}							\
> +} while (0)
> +
> +#elif !defined(__get_kernel_nofault) /* arch_get_kernel_nofault */
> +
>   #define __get_kernel_nofault(dst, src, type, label)	\
>   do {							\
>   	type __user *p = (type __force __user *)(src);	\
> @@ -535,7 +562,8 @@ do {							\
>   	if (__put_user(data, p))			\
>   		goto label;				\
>   } while (0)
> -#endif
> +
> +#endif  /* !__get_kernel_nofault */
>   
>   /**
>    * get_kernel_nofault(): safely attempt to read from a location
> @@ -549,7 +577,42 @@ do {							\
>   	copy_from_kernel_nofault(&(val), __gk_ptr, sizeof(val));\
>   })
>   
> -#ifndef user_access_begin
> +#ifdef user_access_begin
> +
> +#ifdef arch_unsafe_get_user
> +/*
> + * Wrap the architecture implementation so that @label can be outside of a
> + * cleanup() scope. A regular C goto works correctly, but ASM goto does
> + * not. Clang rejects such an attempt, but GCC silently emits buggy code.
> + *
> + * Some architectures use internal local labels already, but this extra
> + * indirection here is harmless because the compiler optimizes it out
> + * completely in any case. This construct just ensures that the ASM GOTO
> + * target is always in the local scope. The C goto 'label' works correctly
> + * when leaving a cleanup() scope.
> + */
> +#define unsafe_get_user(x, ptr, label)			\
> +do {							\
> +	__label__ local_label;				\
> +	arch_unsafe_get_user(x, ptr, local_label);	\
> +	if (0) {					\
> +	local_label:					\
> +		goto label;				\
> +	}						\
> +} while (0)
> +
> +#define unsafe_put_user(x, ptr, label)			\
> +do {							\
> +	__label__ local_label;				\
> +	arch_unsafe_put_user(x, ptr, local_label);	\
> +	if (0) {					\
> +	local_label:					\
> +		goto label;				\
> +	}						\
> +} while (0)
> +#endif /* arch_unsafe_get_user */
> +
> +#else /* user_access_begin */
>   #define user_access_begin(ptr,len) access_ok(ptr, len)
>   #define user_access_end() do { } while (0)
>   #define unsafe_op_wrap(op, err) do { if (unlikely(op)) goto err; } while (0)
> @@ -559,7 +622,8 @@ do {							\
>   #define unsafe_copy_from_user(d,s,l,e) unsafe_op_wrap(__copy_from_user(d,s,l),e)
>   static inline unsigned long user_access_save(void) { return 0UL; }
>   static inline void user_access_restore(unsigned long flags) { }
> -#endif
> +#endif /* !user_access_begin */
> +
>   #ifndef user_write_access_begin
>   #define user_write_access_begin user_access_begin
>   #define user_write_access_end user_access_end
> 


-- 
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [patch V3 02/12] uaccess: Provide ASM GOTO safe wrappers for unsafe_*_user()
  2025-10-17 12:43   ` Mathieu Desnoyers
@ 2025-10-17 12:48     ` Mathieu Desnoyers
  0 siblings, 0 replies; 40+ messages in thread
From: Mathieu Desnoyers @ 2025-10-17 12:48 UTC (permalink / raw)
  To: Thomas Gleixner, LKML
  Cc: Linus Torvalds, kernel test robot, Russell King, linux-arm-kernel,
	x86, Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
	Christophe Leroy, linuxppc-dev, Paul Walmsley, Palmer Dabbelt,
	linux-riscv, Heiko Carstens, Christian Borntraeger, Sven Schnelle,
	linux-s390, Andrew Cooper, Julia Lawall, Nicolas Palix,
	Peter Zijlstra, Darren Hart, Davidlohr Bueso, André Almeida,
	Alexander Viro, Christian Brauner, Jan Kara, linux-fsdevel

On 2025-10-17 08:43, Mathieu Desnoyers wrote:
[... remembering to trim ...]

>>   long strnlen_user_nofault(const void __user *unsafe_addr, long count);
>> -#ifndef __get_kernel_nofault
>> +#ifdef arch_get_kernel_nofault

One more thing:

This is removing the leading __ from __get_kernel_nofault, which
AFAIR has a defined semantic about needing explicit checks. Is that
on purpose ? IOW, would you rather have __arch_get_kernel_nofault ?

Thanks,

Mathieu


-- 
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [patch V3 07/12] uaccess: Provide scoped masked user access regions
  2025-10-17 10:09 ` [patch V3 07/12] uaccess: Provide scoped masked user access regions Thomas Gleixner
  2025-10-17 11:08   ` Andrew Cooper
@ 2025-10-17 13:23   ` Mathieu Desnoyers
  2025-10-20 18:28   ` David Laight
  2 siblings, 0 replies; 40+ messages in thread
From: Mathieu Desnoyers @ 2025-10-17 13:23 UTC (permalink / raw)
  To: Thomas Gleixner, LKML
  Cc: Christophe Leroy, Andrew Cooper, Linus Torvalds,
	kernel test robot, Russell King, linux-arm-kernel, x86,
	Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
	linuxppc-dev, Paul Walmsley, Palmer Dabbelt, linux-riscv,
	Heiko Carstens, Christian Borntraeger, Sven Schnelle, linux-s390,
	Julia Lawall, Nicolas Palix, Peter Zijlstra, Darren Hart,
	Davidlohr Bueso, André Almeida, Alexander Viro,
	Christian Brauner, Jan Kara, linux-fsdevel

On 2025-10-17 06:09, Thomas Gleixner wrote:

> +/**
> + * __scoped_user_access_begin - Start the masked user access
> + * @_mode:	The mode of the access class (read, write, rw)
> + * @_uptr:	The pointer to access user space memory
> + * @_size:	Size of the access
> + * @_elbl:	Error label to goto when the access region is rejected.
> + *
> + * Internal helper for __scoped_masked_user_access(). Don't use directly
> + */

^ general comment about ending sentences with '.' across this patch
(nit).


> +#define __scoped_user_access_begin(_mode, _uptr, _size, _elbl)		\
> +({									\
> +	typeof((_uptr)) ____ret;					\
> +									\
> +	if (can_do_masked_user_access()) {				\
> +		____ret = masked_user_##_mode##_access_begin((_uptr));	\

I don't think the extra () are needed here, or is there something
special happening within this macro that requires it ?

> +	} else {							\
> +		____ret = _uptr;					\
> +		if (!user_##_mode##_access_begin(_uptr, (_size)))	\

likewise around _size.

> +*/
> +#define __scoped_masked_user_access(_mode, _uptr, _size, _elbl)					\
> +for (bool ____stop = false; !____stop; ____stop = true)						\
> +	for (typeof((_uptr)) _tmpptr = __scoped_user_access_begin(_mode, _uptr, _size, _elbl);	\

The extra () around _uptr seems useless.

> +	     !____stop; ____stop = true)							\
> +		for (CLASS(masked_user_##_mode##_access, scope) (_tmpptr); !____stop;		\

Removing the space before (_tmpptr) would make it clearer that it
behaves as arguments to CLASS(masked_user_##_mode##_access, scope),
similarly to what is done in cleanup.h:scoped_class().

Nesting those constructs will cause variables to be hidden by inner
definitions. I recommend using __UNIQUE_ID() to make sure the "stop" and
"tmpptr" variables don't clash with external ones rather than trying to
solve the issue with a random amount of leading underscores.
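
E.g. the usual two-level expansion (sketch only):

#define __scoped_masked_user_access(_mode, _uptr, _size, _elbl)	\
	___scoped_masked_user_access(_mode, _uptr, _size, _elbl,	\
				     __UNIQUE_ID(stop), __UNIQUE_ID(tmpptr))

with the current macro body moved into the three-underscore variant and
taking the generated identifiers as extra parameters.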

> +		     ____stop = true)					\
> +			/* Force modified pointer usage within the scope */			\
> +			for (const typeof((_uptr)) _uptr = _tmpptr; !____stop; ____stop = true)	\

I'm puzzled that it does not trigger compiler warnings as it shadows
_uptr if _uptr is a variable defined outside of this scope (probably
because -Wshadow is not part of the normal kernel build flags).

> +				if (1)
> +

^ can be removed (as pointed out by someone else already).

[...]
> +#define scoped_masked_user_read_access_size(_usrc, _size, _elbl)		\
> +	__scoped_masked_user_access(read, (_usrc), (_size), _elbl)

Useless () around _usrc and _size.


> +#define scoped_masked_user_read_access(_usrc, _elbl)				\
> +	scoped_masked_user_read_access_size((_usrc), sizeof(*(_usrc)), _elbl)

() around the first argument are useless.


> +#define scoped_masked_user_write_access_size(_udst, _size, _elbl)		\
> +	__scoped_masked_user_access(write, (_udst),  (_size), _elbl)

Useless () around _udst and _size.

> + */
> +#define scoped_masked_user_write_access(_udst, _elbl)				\
> +	scoped_masked_user_write_access_size((_udst), sizeof(*(_udst)), _elbl)

() around the first argument are useless.

> +#define scoped_masked_user_rw_access_size(_uptr, _size, _elbl)			\
> +	__scoped_masked_user_access(rw, (_uptr), (_size), _elbl)

Useless () around _uptr and _size.

> +#define scoped_masked_user_rw_access(_uptr, _elbl)				\
> +	scoped_masked_user_rw_access_size((_uptr), sizeof(*(_uptr)), _elbl)

() around the first argument are useless.

Thanks,

Mathieu


-- 
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [patch V3 11/12] x86/futex: Convert to scoped masked user access
  2025-10-17 10:09 ` [patch V3 11/12] x86/futex: " Thomas Gleixner
@ 2025-10-17 13:37   ` Andrew Cooper
  0 siblings, 0 replies; 40+ messages in thread
From: Andrew Cooper @ 2025-10-17 13:37 UTC (permalink / raw)
  To: Thomas Gleixner, LKML
  Cc: x86, kernel test robot, Russell King, linux-arm-kernel,
	Linus Torvalds, Madhavan Srinivasan, Michael Ellerman,
	Nicholas Piggin, Christophe Leroy, linuxppc-dev, Paul Walmsley,
	Palmer Dabbelt, linux-riscv, Heiko Carstens,
	Christian Borntraeger, Sven Schnelle, linux-s390,
	Mathieu Desnoyers, Andrew Cooper, Julia Lawall, Nicolas Palix,
	Peter Zijlstra, Darren Hart, Davidlohr Bueso, André Almeida,
	Alexander Viro, Christian Brauner, Jan Kara, linux-fsdevel

On 17/10/2025 11:09 am, Thomas Gleixner wrote:
> @@ -86,21 +79,19 @@ static inline int futex_atomic_cmpxchg_i
>  {
>  	int ret = 0;
>  
> -	if (can_do_masked_user_access())
> -		uaddr = masked_user_access_begin(uaddr);
> -	else if (!user_access_begin(uaddr, sizeof(u32)))
> -		return -EFAULT;
> -	asm volatile("\n"
> -		"1:\t" LOCK_PREFIX "cmpxchgl %3, %2\n"
> -		"2:\n"
> -		_ASM_EXTABLE_TYPE_REG(1b, 2b, EX_TYPE_EFAULT_REG, %0) \
> -		: "+r" (ret), "=a" (oldval), "+m" (*uaddr)
> -		: "r" (newval), "1" (oldval)
> -		: "memory"
> -	);
> -	user_access_end();
> -	*uval = oldval;
> +	scoped_masked_user_rw_access(uaddr, Efault) {
> +		asm volatile("\n"
> +			     "1:\t" LOCK_PREFIX "cmpxchgl %3, %2\n"
> +			     "2:\n"
> +			     _ASM_EXTABLE_TYPE_REG(1b, 2b, EX_TYPE_EFAULT_REG, %0) \
> +			     : "+r" (ret), "=a" (oldval), "+m" (*uaddr)
> +			     : "r" (newval), "1" (oldval)
> +			     : "memory");

Minor points, but as you're rewriting this, it wants to be asm_inline
volatile.

There's also a useless line continuation on the end of the ASM_EXTABLE
which can be dropped.

~Andrew

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [patch V3 08/12] uaccess: Provide put/get_user_masked()
  2025-10-17 10:09 ` [patch V3 08/12] uaccess: Provide put/get_user_masked() Thomas Gleixner
@ 2025-10-17 13:41   ` Mathieu Desnoyers
  2025-10-17 13:45     ` Mathieu Desnoyers
  0 siblings, 1 reply; 40+ messages in thread
From: Mathieu Desnoyers @ 2025-10-17 13:41 UTC (permalink / raw)
  To: Thomas Gleixner, LKML
  Cc: kernel test robot, Russell King, linux-arm-kernel, Linus Torvalds,
	x86, Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
	Christophe Leroy, linuxppc-dev, Paul Walmsley, Palmer Dabbelt,
	linux-riscv, Heiko Carstens, Christian Borntraeger, Sven Schnelle,
	linux-s390, Andrew Cooper, Julia Lawall, Nicolas Palix,
	Peter Zijlstra, Darren Hart, Davidlohr Bueso, André Almeida,
	Alexander Viro, Christian Brauner, Jan Kara, linux-fsdevel

On 2025-10-17 06:09, Thomas Gleixner wrote:
> Provide conveniance wrappers around scoped masked user access similiar to

convenience
similar

> +/**
> + * get_user_masked - Read user data with masked access
> + * @_val:	The variable to store the value read from user memory
> + * @_usrc:	Pointer to the user space memory to read from
> + *
> + * Return: true if successful, false when faulted

^ '.' (or not) across sentences. Your choice, but it's irregular across
the series.

> + */
> +#define get_user_masked(_val, _usrc)				\
> +({								\
> +	__label__ efault;					\
> +	typeof((_usrc)) _tmpsrc	= (_usrc);			\

Remove extra () around _usrc in typeof.

UNIQUE_ID for _tmpsrc ?

> +	bool ____ret = true;					\

Why so many underscores ? If there are nesting concerns,
it may be a sign that UNIQUE_ID is needed.


> + */
> +#define put_user_masked(_val, _udst)				\
> +({								\
> +	__label__ efault;					\
> +	typeof((_udst)) _tmpdst	= (_udst);			\

Remove extra () around _udst in typeof.

UNIQUE_ID for _tmpdst ?

> +	bool ____ret = true;	

UNIQUE_ID for ____ret ?

Thanks,

Mathieu

-- 
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [patch V3 08/12] uaccess: Provide put/get_user_masked()
  2025-10-17 13:41   ` Mathieu Desnoyers
@ 2025-10-17 13:45     ` Mathieu Desnoyers
  2025-10-20  6:50       ` Thomas Gleixner
  0 siblings, 1 reply; 40+ messages in thread
From: Mathieu Desnoyers @ 2025-10-17 13:45 UTC (permalink / raw)
  To: Thomas Gleixner, LKML
  Cc: kernel test robot, Russell King, linux-arm-kernel, Linus Torvalds,
	x86, Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
	Christophe Leroy, linuxppc-dev, Paul Walmsley, Palmer Dabbelt,
	linux-riscv, Heiko Carstens, Christian Borntraeger, Sven Schnelle,
	linux-s390, Andrew Cooper, Julia Lawall, Nicolas Palix,
	Peter Zijlstra, Darren Hart, Davidlohr Bueso, André Almeida,
	Alexander Viro, Christian Brauner, Jan Kara, linux-fsdevel

On 2025-10-17 09:41, Mathieu Desnoyers wrote:
[...]
>> +/**
>> + * get_user_masked - Read user data with masked access
[...]
>> + * Return: true if successful, false when faulted

I notice that __get_user() and get_user() return -EFAULT
on error, 0 on success. I suspect the reversed logic
will be rather confusing.
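
Callers will end up with both of these patterns in tree:

	if (get_user(val, uptr))		/* nonzero means fail */
		return -EFAULT;

	if (!get_user_masked(val, uptr))	/* false means fail */
		return -EFAULT;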

Thanks,

Mathieu

-- 
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [patch V3 08/12] uaccess: Provide put/get_user_masked()
  2025-10-17 13:45     ` Mathieu Desnoyers
@ 2025-10-20  6:50       ` Thomas Gleixner
  0 siblings, 0 replies; 40+ messages in thread
From: Thomas Gleixner @ 2025-10-20  6:50 UTC (permalink / raw)
  To: Mathieu Desnoyers, LKML
  Cc: kernel test robot, Russell King, linux-arm-kernel, Linus Torvalds,
	x86, Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
	Christophe Leroy, linuxppc-dev, Paul Walmsley, Palmer Dabbelt,
	linux-riscv, Heiko Carstens, Christian Borntraeger, Sven Schnelle,
	linux-s390, Andrew Cooper, Julia Lawall, Nicolas Palix,
	Peter Zijlstra, Darren Hart, Davidlohr Bueso, André Almeida,
	Alexander Viro, Christian Brauner, Jan Kara, linux-fsdevel

On Fri, Oct 17 2025 at 09:45, Mathieu Desnoyers wrote:
> On 2025-10-17 09:41, Mathieu Desnoyers wrote:
> [...]
>>> +/**
>>> + * get_user_masked - Read user data with masked access
> [...]
>>> + * Return: true if successful, false when faulted
>
> I notice that __get_user() and get_user() return -EFAULT
> on error, 0 on success. I suspect the reversed logic
> will be rather confusing.

In most cases the return value of those is treated as a boolean
success/fail indicator. It's pretty confusing when a boolean return
value indicates success with 'false'. It's way more intuitive to read:

       if (!do_something())
       		goto fail;

Just because we have this historic return '0' on success and error code
on fail concept does not mean we have to proliferate it forever.

In fact a lot of that 'hand an error code through the callchain' is just
pointless as in many cases that error code is completely ignored and
just used for a boolean decision at the end. So there is no point to
pretend that those error codes are meaningful except for places where
they need to be returned to user space.

Thanks,

        tglx


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [patch V3 07/12] uaccess: Provide scoped masked user access regions
  2025-10-17 10:09 ` [patch V3 07/12] uaccess: Provide scoped masked user access regions Thomas Gleixner
  2025-10-17 11:08   ` Andrew Cooper
  2025-10-17 13:23   ` Mathieu Desnoyers
@ 2025-10-20 18:28   ` David Laight
  2025-10-21 14:29     ` Thomas Gleixner
  2 siblings, 1 reply; 40+ messages in thread
From: David Laight @ 2025-10-20 18:28 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Christophe Leroy, Mathieu Desnoyers, Andrew Cooper,
	Linus Torvalds, kernel test robot, Russell King, linux-arm-kernel,
	x86, Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
	linuxppc-dev, Paul Walmsley, Palmer Dabbelt, linux-riscv,
	Heiko Carstens, Christian Borntraeger, Sven Schnelle, linux-s390,
	Julia Lawall, Nicolas Palix, Peter Zijlstra, Darren Hart,
	Davidlohr Bueso, André Almeida, Alexander Viro,
	Christian Brauner, Jan Kara, linux-fsdevel

On Fri, 17 Oct 2025 12:09:08 +0200 (CEST)
Thomas Gleixner <tglx@linutronix.de> wrote:

> User space access regions are tedious and require similar code patterns all
> over the place:
> 
>      	if (!user_read_access_begin(from, sizeof(*from)))
> 		return -EFAULT;
> 	unsafe_get_user(val, from, Efault);
> 	user_read_access_end();
> 	return 0;
> Efault:
> 	user_read_access_end();
> 	return -EFAULT;
> 
> This got worse with the recent addition of masked user access, which
> optimizes the speculation prevention:
> 
> 	if (can_do_masked_user_access())
> 		from = masked_user_read_access_begin((from));
> 	else if (!user_read_access_begin(from, sizeof(*from)))
> 		return -EFAULT;
> 	unsafe_get_user(val, from, Efault);
> 	user_read_access_end();
> 	return 0;
> Efault:
> 	user_read_access_end();
> 	return -EFAULT;
> 
> There have been issues with using the wrong user_*_access_end() variant in
> the error path and other typical Copy&Pasta problems, e.g. using the wrong
> fault label in the user accessor which ends up using the wrong access end
> variant.
> 
> These patterns beg for scopes with automatic cleanup. The resulting outcome
> is:
>     	scoped_masked_user_read_access(from, Efault)
> 		unsafe_get_user(val, from, Efault);
> 	return 0;
>   Efault:
> 	return -EFAULT;

That definitely looks better than the earlier versions.
Even if the implementation looks like an entry in the obfuscated C competition.

I don't think you need the 'masked' in that name.
Since it works in all cases.

(I don't like the word 'masked' at all, not sure where it came from.
Probably because the first version used logical operators.
'Masking' a user address ought to be the operation of removing high-order
address bits that the hardware is treating as 'don't care'.
The canonical operation here is uaddr = min(uaddr, guard_page) - likely to be
a conditional move.
I think that s/masked/sanitised/ would make more sense (the patch to do
that isn't very big at the moment). I might post it.)
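
As a sketch, the canonical operation in C is just:

        /* clamp to the base of the guard page; compiles to a cmov */
        uaddr = uaddr < guard_page ? uaddr : guard_page;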

> 
> The scope guarantees the proper cleanup for the access mode is invoked both
> in the success and the failure (fault) path.
> 
> The scoped_masked_user_$MODE_access() macros are implemented as self
> terminating nested for() loops. Thanks to Andrew Cooper for pointing me at
> them. The scope can therefore be left with 'break', 'goto' and 'return'.
> Even 'continue' "works" due to the self termination mechanism. Both GCC and
> clang optimize all the convoluted macro maze out and the above results with
> clang in:
> 
>  b80:	f3 0f 1e fa          	       endbr64
>  b84:	48 b8 ef cd ab 89 67 45 23 01  movabs $0x123456789abcdef,%rax
>  b8e:	48 39 c7    	               cmp    %rax,%rdi
>  b91:	48 0f 47 f8          	       cmova  %rax,%rdi
>  b95:	90                   	       nop
>  b96:	90                   	       nop
>  b97:	90                   	       nop
>  b98:	31 c9                	       xor    %ecx,%ecx
>  b9a:	8b 07                	       mov    (%rdi),%eax
>  b9c:	89 06                	       mov    %eax,(%rsi)
>  b9e:	85 c9                	       test   %ecx,%ecx
>  ba0:	0f 94 c0             	       sete   %al
>  ba3:	90                   	       nop
>  ba4:	90                   	       nop
>  ba5:	90                   	       nop
>  ba6:	c3                   	       ret
> 
> Which looks as compact as it gets. The NOPs are placeholders for STAC/CLAC.
> GCC emits the fault path separately:
> 
>  bf0:	f3 0f 1e fa          	       endbr64
>  bf4:	48 b8 ef cd ab 89 67 45 23 01  movabs $0x123456789abcdef,%rax
>  bfe:	48 39 c7             	       cmp    %rax,%rdi
>  c01:	48 0f 47 f8          	       cmova  %rax,%rdi
>  c05:	90                   	       nop
>  c06:	90                   	       nop
>  c07:	90                   	       nop
>  c08:	31 d2                	       xor    %edx,%edx
>  c0a:	8b 07                	       mov    (%rdi),%eax
>  c0c:	89 06                	       mov    %eax,(%rsi)
>  c0e:	85 d2                	       test   %edx,%edx
>  c10:	75 09                	       jne    c1b <afoo+0x2b>
>  c12:	90                   	       nop
>  c13:	90                   	       nop
>  c14:	90                   	       nop
>  c15:	b8 01 00 00 00       	       mov    $0x1,%eax
>  c1a:	c3                   	       ret
>  c1b:	90                   	       nop
>  c1c:	90                   	       nop
>  c1d:	90                   	       nop
>  c1e:	31 c0                	       xor    %eax,%eax
>  c20:	c3                   	       ret
> 
> 
> The fault labels for the scoped*() macros and the fault labels for the
> actual user space accessors can be shared and must be placed outside of the
> scope.
> 
> If masked user access is enabled on an architecture, then the pointer
> handed in to scoped_masked_user_$MODE_access() can be modified to point to
> a guaranteed faulting user address. This modification is only scope local
> as the pointer is aliased inside the scope. When the scope is left the
> alias is no longer in effect. IOW the original pointer value is preserved
> so it can be used e.g. for fixup or diagnostic purposes in the fault path.

I think you need to add (in the kerndoc somewhere):

There is no requirement to do the accesses in strict memory order
(or to access the lowest address first).
The only constraint is that gaps must be significantly less than 4k.

Basically the architectures have to support code accessing uptr[4]
before uptr[0] (so using ~0 as the 'bad address' isn't a good idea).
Otherwise you have to go through 'hoops' to double check that all code
accesses the first member of a structure before the second one.
(I've looked through likely users of this and something like poll
or epoll does the 2nd access first - and it isn't obvious.)
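
For illustration, with hypothetical variables (a long __user *uptr),
this is the kind of pattern that has to remain valid:

        long lo, hi;

        scoped_masked_user_read_access_size(uptr, 5 * sizeof(*uptr), efault) {
                unsafe_get_user(hi, &uptr[4], efault);  /* higher address first */
                unsafe_get_user(lo, &uptr[0], efault);
        }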

There always has to be a guard page at the top of valid user addresses.
Otherwise sequential accesses run into kernel space.
So the code just has to generate the base of the guard page for kernel
addresses (see the horrid ppc code for cpu that have broken conditional move).

> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: Linus Torvalds <torvalds@linux-foundation.org>
> ---
> V3: Make it a nested for() loop
>     Get rid of the code in macro parameters - Linus
>     Provide sized variants - Mathieu
> V2: Remove the shady wrappers around the opening and use scopes with automatic cleanup
> ---
>  include/linux/uaccess.h |  197 ++++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 197 insertions(+)
> 
> --- a/include/linux/uaccess.h
> +++ b/include/linux/uaccess.h
> @@ -2,6 +2,7 @@
>  #ifndef __LINUX_UACCESS_H__
>  #define __LINUX_UACCESS_H__
>  
> +#include <linux/cleanup.h>
>  #include <linux/fault-inject-usercopy.h>
>  #include <linux/instrumented.h>
>  #include <linux/minmax.h>
> @@ -35,9 +36,17 @@
>  
>  #ifdef masked_user_access_begin
>   #define can_do_masked_user_access() 1
> +# ifndef masked_user_write_access_begin
> +#  define masked_user_write_access_begin masked_user_access_begin
> +# endif
> +# ifndef masked_user_read_access_begin
> +#  define masked_user_read_access_begin masked_user_access_begin
> +#endif
>  #else
>   #define can_do_masked_user_access() 0
>   #define masked_user_access_begin(src) NULL
> + #define masked_user_read_access_begin(src) NULL
> + #define masked_user_write_access_begin(src) NULL
>   #define mask_user_address(src) (src)
>  #endif
>  
> @@ -633,6 +642,194 @@ static inline void user_access_restore(u
>  #define user_read_access_end user_access_end
>  #endif
>  
> +/* Define RW variant so the below _mode macro expansion works */
> +#define masked_user_rw_access_begin(u)	masked_user_access_begin(u)
> +#define user_rw_access_begin(u, s)	user_access_begin(u, s)
> +#define user_rw_access_end()		user_access_end()
> +
> +/* Scoped user access */
> +#define USER_ACCESS_GUARD(_mode)					\
> +static __always_inline void __user *					\
> +class_masked_user_##_mode##_begin(void __user *ptr)			\
> +{									\
> +	return ptr;							\
> +}									\
> +									\
> +static __always_inline void						\
> +class_masked_user_##_mode##_end(void __user *ptr)			\
> +{									\
> +	user_##_mode##_access_end();					\
> +}									\
> +									\
> +DEFINE_CLASS(masked_user_ ##_mode## _access, void __user *,		\
> +	     class_masked_user_##_mode##_end(_T),			\
> +	     class_masked_user_##_mode##_begin(ptr), void __user *ptr)	\
> +									\
> +static __always_inline class_masked_user_##_mode##_access_t		\
> +class_masked_user_##_mode##_access_ptr(void __user *scope)		\
> +{									\
> +	return scope;							\
> +}
> +
> +USER_ACCESS_GUARD(read)
> +USER_ACCESS_GUARD(write)
> +USER_ACCESS_GUARD(rw)
> +#undef USER_ACCESS_GUARD
> +
> +/**
> + * __scoped_user_access_begin - Start the masked user access
> + * @_mode:	The mode of the access class (read, write, rw)
> + * @_uptr:	The pointer to access user space memory
> + * @_size:	Size of the access
> + * @_elbl:	Error label to goto when the access region is rejected.
> + *
> + * Internal helper for __scoped_masked_user_access(). Don't use directly
> + */
> +#define __scoped_user_access_begin(_mode, _uptr, _size, _elbl)		\
> +({									\
> +	typeof((_uptr)) ____ret;					\
> +									\
> +	if (can_do_masked_user_access()) {				\
> +		____ret = masked_user_##_mode##_access_begin((_uptr));	\
> +	} else {							\
> +		____ret = _uptr;					\
> +		if (!user_##_mode##_access_begin(_uptr, (_size)))	\
> +			goto _elbl;					\
> +	}								\
> +	____ret;							\
> +})
> +
> +/**
> + * __scoped_masked_user_access - Open a scope for masked user access
> + * @_mode:	The mode of the access class (read, write, rw)
> + * @_uptr:	The pointer to access user space memory
> + * @_size:	Size of the access
> + * @_elbl:	Error label to goto when the access region is rejected. It
> + *		must be placed outside the scope.
> + *
> + * If the user access function inside the scope requires a fault label, it
> + * can use @_elbl or a different label outside the scope, which requires
> + * that user access which is implemented with ASM GOTO has been properly
> + * wrapped. See unsafe_get_user() for reference.
> + *
> + *	scoped_masked_user_rw_access(ptr, efault) {
> + *		unsafe_get_user(rval, &ptr->rval, efault);
> + *		unsafe_put_user(wval, &ptr->wval, efault);
> + *	}
> + *	return 0;
> + *  efault:
> + *	return -EFAULT;
> + *
> + * The scope is internally implemented as an autoterminating nested for()
> + * loop, which can be left with 'return', 'break' and 'goto' at any
> + * point.
> + *
> + * When the scope is left user_##@_mode##_access_end() is automatically
> + * invoked.
> + *
> + * When the architecture supports masked user access and the access region
> + * which is determined by @_uptr and @_size is not a valid user space
> + * address, i.e. not below TASK_SIZE, the scope sets the pointer to a
> + * faulting user space address and does not terminate early. This optimizes
> + * for the good case and lets the performance-uncritical bad case go
> + * through the fault.
> + *
> + * The eventual modification of the pointer is limited to the scope.
> + * Outside of the scope the original pointer value is unmodified, so that
> + * the original pointer value is available for diagnostic purposes in an
> + * out of scope fault path.
> + *
> + * Nesting scoped masked user access into a masked user access scope is
> + * invalid and fails the build. Nesting into other guards, e.g. pagefault
> + * is safe.
> + *
> + * Don't use directly. Use the scoped_masked_user_$MODE_access() instead.
> +*/
> +#define __scoped_masked_user_access(_mode, _uptr, _size, _elbl)					\
> +for (bool ____stop = false; !____stop; ____stop = true)						\
> +	for (typeof((_uptr)) _tmpptr = __scoped_user_access_begin(_mode, _uptr, _size, _elbl);	\

Can you use 'auto' instead of typeof() ?

> +	     !____stop; ____stop = true)							\
> +		for (CLASS(masked_user_##_mode##_access, scope) (_tmpptr); !____stop;		\
> +		     ____stop = true)					\
> +			/* Force modified pointer usage within the scope */			\
> +			for (const typeof((_uptr)) _uptr = _tmpptr; !____stop; ____stop = true)	\

gcc 15.1 also seems to support 'const auto _uptr = _tmpptr;'

	David

> +				if (1)
> +
> +/**
> + * scoped_masked_user_read_access_size - Start a scoped user read access with given size
> + * @_usrc:	Pointer to the user space address to read from
> + * @_size:	Size of the access starting from @_usrc
> + * @_elbl:	Error label to goto when the access region is rejected.
> + *
> + * For further information see __scoped_masked_user_access() above.
> + */
> +#define scoped_masked_user_read_access_size(_usrc, _size, _elbl)		\
> +	__scoped_masked_user_access(read, (_usrc), (_size), _elbl)
> +
> +/**
> + * scoped_masked_user_read_access - Start a scoped user read access
> + * @_usrc:	Pointer to the user space address to read from
> + * @_elbl:	Error label to goto when the access region is rejected.
> + *
> + * The size of the access starting from @_usrc is determined via sizeof(*@_usrc).
> + *
> + * For further information see __scoped_masked_user_access() above.
> + */
> +#define scoped_masked_user_read_access(_usrc, _elbl)				\
> +	scoped_masked_user_read_access_size((_usrc), sizeof(*(_usrc)), _elbl)
> +
> +/**
> + * scoped_masked_user_read_end - End a scoped user read access
> + *
> + * Ends the scope opened with scoped_masked_user_read_access[_size]()
> + */
> +#define scoped_masked_user_read_end()	__scoped_masked_user_end()
> +
> +/**
> + * scoped_masked_user_write_access_size - Start a scoped user write access with given size
> + * @_udst:	Pointer to the user space address to write to
> + * @_size:	Size of the access starting from @_udst
> + * @_elbl:	Error label to goto when the access region is rejected.
> + *
> + * For further information see __scoped_masked_user_access() above.
> + */
> +#define scoped_masked_user_write_access_size(_udst, _size, _elbl)		\
> +	__scoped_masked_user_access(write, (_udst),  (_size), _elbl)
> +
> +/**
> + * scoped_masked_user_write_access - Start a scoped user write access
> + * @_udst:	Pointer to the user space address to write to
> + * @_elbl:	Error label to goto when the access region is rejected.
> + *
> + * The size of the access starting from @_udst is determined via sizeof(*@_udst).
> + *
> + * For further information see __scoped_masked_user_access() above.
> + */
> +#define scoped_masked_user_write_access(_udst, _elbl)				\
> +	scoped_masked_user_write_access_size((_udst), sizeof(*(_udst)), _elbl)
> +
> +/**
> + * scoped_masked_user_rw_access_size - Start a scoped user read/write access with given size
> + * @_uptr:	Pointer to the user space address to read from and write to
> + * @_size:	Size of the access starting from @_uptr
> + * @_elbl:	Error label to goto when the access region is rejected.
> + *
> + * For further information see __scoped_masked_user_access() above.
> + */
> +#define scoped_masked_user_rw_access_size(_uptr, _size, _elbl)			\
> +	__scoped_masked_user_access(rw, (_uptr), (_size), _elbl)
> +
> +/**
> + * scoped_masked_user_rw_access - Start a scoped user read/write access
> + * @_uptr:	Pointer to the user space address to read from and write to
> + * @_elbl:	Error label to goto when the access region is rejected.
> + *
> + * The size of the access starting from @_uptr is determined via sizeof(*@_uptr).
> + *
> + * For further information see __scoped_masked_user_access() above.
> + */
> +#define scoped_masked_user_rw_access(_uptr, _elbl)				\
> +	scoped_masked_user_rw_access_size((_uptr), sizeof(*(_uptr)), _elbl)
> +
>  #ifdef CONFIG_HARDENED_USERCOPY
>  void __noreturn usercopy_abort(const char *name, const char *detail,
>  			       bool to_user, unsigned long offset,
> 
> 


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [patch V3 07/12] uaccess: Provide scoped masked user access regions
  2025-10-20 18:28   ` David Laight
@ 2025-10-21 14:29     ` Thomas Gleixner
  2025-10-21 14:42       ` Thomas Gleixner
                         ` (3 more replies)
  0 siblings, 4 replies; 40+ messages in thread
From: Thomas Gleixner @ 2025-10-21 14:29 UTC (permalink / raw)
  To: David Laight
  Cc: LKML, Christophe Leroy, Mathieu Desnoyers, Andrew Cooper,
	Linus Torvalds, kernel test robot, Russell King, linux-arm-kernel,
	x86, Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
	linuxppc-dev, Paul Walmsley, Palmer Dabbelt, linux-riscv,
	Heiko Carstens, Christian Borntraeger, Sven Schnelle, linux-s390,
	Julia Lawall, Nicolas Palix, Peter Zijlstra, Darren Hart,
	Davidlohr Bueso, André Almeida, Alexander Viro,
	Christian Brauner, Jan Kara, linux-fsdevel

On Mon, Oct 20 2025 at 19:28, David Laight wrote:
> On Fri, 17 Oct 2025 12:09:08 +0200 (CEST)
> Thomas Gleixner <tglx@linutronix.de> wrote:
> That definitely looks better than the earlier versions.
> Even if the implementation looks like an entry in the obfuscated C
> competition.

It has too many characters for that. The contest variant would be:

for(u8 s=0;!s;s=1)for(typeof(u) t= S(m,u,s,e);!s;s=1)for(C(u##m##a,c)(t);!s;s=1)for(const typeof(u) u=t;!s;s=1)

> I don't think you need the 'masked' in that name.
> Since it works in all cases.
>
> (I don't like the word 'masked' at all, not sure where it came from.

It's what Linus named it and I did not think about the name much so far.

> Probably because the first version used logical operators.
> 'Masking' a user address ought to be the operation of removing high-order
> address bits that the hardware is treating as 'don't care'.
> The canonical operation here is uaddr = min(uaddr, guard_page) - likely to be
> a conditional move.

That's how it's implemented for x86:

>>  b84:	48 b8 ef cd ab 89 67 45 23 01  movabs $0x123456789abcdef,%rax
>>  b8e:	48 39 c7    	               cmp    %rax,%rdi
>>  b91:	48 0f 47 f8          	       cmova  %rax,%rdi

0x123456789abcdef is a compile-time placeholder for $USR_PTR_MAX which is
replaced during early boot by the real topmost user space address. See below.

> I think that s/masked/sanitised/ would make more sense (the patch to do
> that isn't very big at the moment). I might post it.)

The real point is that it is optimized. It does not have to use the
speculation fence if the architecture supports "masking" because the CPU
can't speculate on the input address as the actual read/write address
depends on the cmova. That's similar to the array_index_nospec() magic
which masks the input index unconditionally so it's in the valid range
before it can be used for speculatively accessing the array.
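
For reference, the array case from include/linux/nospec.h is used like:

        idx = array_index_nospec(idx, ARRAY_SIZE(arr));
        val = arr[idx];         /* idx is forced in range even speculatively */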

So yes, the naming is a bit awkward.

In principle most places which use user_$MODE_access_begin() could
benefit from that. Also under the hood the scope magic actually falls
back to that when the architecture does not support the "masked"
variant.

So simply naming it scoped_user_$MODE_access() is probably the least
confusing of all.

>> If masked user access is enabled on an architecture, then the pointer
>> handed in to scoped_masked_user_$MODE_access() can be modified to point to
>> a guaranteed faulting user address. This modification is only scope local
>> as the pointer is aliased inside the scope. When the scope is left the
>> alias is no longer in effect. IOW the original pointer value is preserved
>> so it can be used e.g. for fixup or diagnostic purposes in the fault path.
>
> I think you need to add (in the kerndoc somewhere):
>
> There is no requirement to do the accesses in strict memory order
> (or to access the lowest address first).
> The only constraint is that gaps must be significantly less than 4k.

The requirement is that the access is not spilling over into the kernel
address space, which means:

       USR_PTR_MAX <= address < (1UL << 63)

USR_PTR_MAX on x86 is either
            (1UL << 47) - PAGE_SIZE (4-level page tables)
         or (1UL << 57) - PAGE_SIZE (5-level page tables)

Which means at least ~8 EiB of unmapped space in both cases.

The access order does not matter at all.

>> +#define __scoped_masked_user_access(_mode, _uptr, _size, _elbl)					\
>> +for (bool ____stop = false; !____stop; ____stop = true)						\
>> +	for (typeof((_uptr)) _tmpptr = __scoped_user_access_begin(_mode, _uptr, _size, _elbl);	\
>
> Can you use 'auto' instead of typeof() ?

Compilers are mightily unhappy about that unless I do typecasting on the
assignment, which is not really buying anything.

>> +	     !____stop; ____stop = true)							\
>> +		for (CLASS(masked_user_##_mode##_access, scope) (_tmpptr); !____stop;		\
>> +		     ____stop = true)					\
>> +			/* Force modified pointer usage within the scope */			\
>> +			for (const typeof((_uptr)) _uptr = _tmpptr; !____stop; ____stop = true)	\
>
> gcc 15.1 also seems to support 'const auto _uptr = _tmpptr;'

Older compilers not so much.

Thanks,

        tglx

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [patch V3 07/12] uaccess: Provide scoped masked user access regions
  2025-10-21 14:29     ` Thomas Gleixner
@ 2025-10-21 14:42       ` Thomas Gleixner
  2025-10-21 20:52         ` David Laight
  2025-10-21 14:44       ` Peter Zijlstra
                         ` (2 subsequent siblings)
  3 siblings, 1 reply; 40+ messages in thread
From: Thomas Gleixner @ 2025-10-21 14:42 UTC (permalink / raw)
  To: David Laight
  Cc: LKML, Christophe Leroy, Mathieu Desnoyers, Andrew Cooper,
	Linus Torvalds, kernel test robot, Russell King, linux-arm-kernel,
	x86, Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
	linuxppc-dev, Paul Walmsley, Palmer Dabbelt, linux-riscv,
	Heiko Carstens, Christian Borntraeger, Sven Schnelle, linux-s390,
	Julia Lawall, Nicolas Palix, Peter Zijlstra, Darren Hart,
	Davidlohr Bueso, André Almeida, Alexander Viro,
	Christian Brauner, Jan Kara, linux-fsdevel

On Tue, Oct 21 2025 at 16:29, Thomas Gleixner wrote:
> On Mon, Oct 20 2025 at 19:28, David Laight wrote:
>> There is no requirement to do the accesses in strict memory order
>> (or to access the lowest address first).
>> The only constraint is that gaps must be significantly less than 4k.
>
> The requirement is that the access is not spilling over into the kernel
> address space, which means:
>
> >        USR_PTR_MAX <= address < (1UL << 63)
>
> USR_PTR_MAX on x86 is either
> >             (1UL << 47) - PAGE_SIZE (4-level page tables)
> >          or (1UL << 57) - PAGE_SIZE (5-level page tables)
>
> Which means at least ~8 EiB of unmapped space in both cases.
>
> The access order does not matter at all.

I just noticed that LAM reduces that gap to one page, but then the
kernel has an 8 EiB gap right at the kernel/user boundary, which means
even in the LAM case an access with less than 8EiB offset from
USR_PTR_MAX is guaranteed to fault and not to be able to speculatively
access actual kernel memory.

Thanks,

        tglx

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [patch V3 07/12] uaccess: Provide scoped masked user access regions
  2025-10-21 14:29     ` Thomas Gleixner
  2025-10-21 14:42       ` Thomas Gleixner
@ 2025-10-21 14:44       ` Peter Zijlstra
  2025-10-21 15:06       ` Linus Torvalds
  2025-10-21 18:55       ` David Laight
  3 siblings, 0 replies; 40+ messages in thread
From: Peter Zijlstra @ 2025-10-21 14:44 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: David Laight, LKML, Christophe Leroy, Mathieu Desnoyers,
	Andrew Cooper, Linus Torvalds, kernel test robot, Russell King,
	linux-arm-kernel, x86, Madhavan Srinivasan, Michael Ellerman,
	Nicholas Piggin, linuxppc-dev, Paul Walmsley, Palmer Dabbelt,
	linux-riscv, Heiko Carstens, Christian Borntraeger, Sven Schnelle,
	linux-s390, Julia Lawall, Nicolas Palix, Darren Hart,
	Davidlohr Bueso, André Almeida, Alexander Viro,
	Christian Brauner, Jan Kara, linux-fsdevel

On Tue, Oct 21, 2025 at 04:29:58PM +0200, Thomas Gleixner wrote:

> So simply naming it scoped_user_$MODE_access() is probably the least
> confusing of all.

Yeah, that makes sense.

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [patch V3 07/12] uaccess: Provide scoped masked user access regions
  2025-10-21 14:29     ` Thomas Gleixner
  2025-10-21 14:42       ` Thomas Gleixner
  2025-10-21 14:44       ` Peter Zijlstra
@ 2025-10-21 15:06       ` Linus Torvalds
  2025-10-21 15:45         ` Thomas Gleixner
  2025-10-21 18:55       ` David Laight
  3 siblings, 1 reply; 40+ messages in thread
From: Linus Torvalds @ 2025-10-21 15:06 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: David Laight, LKML, Christophe Leroy, Mathieu Desnoyers,
	Andrew Cooper, kernel test robot, Russell King, linux-arm-kernel,
	x86, Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
	linuxppc-dev, Paul Walmsley, Palmer Dabbelt, linux-riscv,
	Heiko Carstens, Christian Borntraeger, Sven Schnelle, linux-s390,
	Julia Lawall, Nicolas Palix, Peter Zijlstra, Darren Hart,
	Davidlohr Bueso, André Almeida, Alexander Viro,
	Christian Brauner, Jan Kara, linux-fsdevel

On Tue, 21 Oct 2025 at 04:30, Thomas Gleixner <tglx@linutronix.de> wrote:
>
> On Mon, Oct 20 2025 at 19:28, David Laight wrote:
> >
> > (I don't like the word 'masked' at all, not sure where it came from.
>
> It's what Linus named it and I did not think about the name much so far.

The original implementation was a mask application, so it made sense
at the time.

We could still change it since there aren't that many users, but I'm
not sure what would be a better name...

                   Linus

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [patch V3 07/12] uaccess: Provide scoped masked user access regions
  2025-10-21 15:06       ` Linus Torvalds
@ 2025-10-21 15:45         ` Thomas Gleixner
  2025-10-21 15:51           ` Linus Torvalds
  0 siblings, 1 reply; 40+ messages in thread
From: Thomas Gleixner @ 2025-10-21 15:45 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: David Laight, LKML, Christophe Leroy, Mathieu Desnoyers,
	Andrew Cooper, kernel test robot, Russell King, linux-arm-kernel,
	x86, Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
	linuxppc-dev, Paul Walmsley, Palmer Dabbelt, linux-riscv,
	Heiko Carstens, Christian Borntraeger, Sven Schnelle, linux-s390,
	Julia Lawall, Nicolas Palix, Peter Zijlstra, Darren Hart,
	Davidlohr Bueso, André Almeida, Alexander Viro,
	Christian Brauner, Jan Kara, linux-fsdevel

On Tue, Oct 21 2025 at 05:06, Linus Torvalds wrote:
> On Tue, 21 Oct 2025 at 04:30, Thomas Gleixner <tglx@linutronix.de> wrote:
>>
>> On Mon, Oct 20 2025 at 19:28, David Laight wrote:
>> >
>> > (I don't like the word 'masked' at all, not sure where it came from.
>>
>> It's what Linus named it and I did not think about the name much so far.
>
> The original implementation was a mask application, so it made sense
> at the time.
>
> We could still change it since there aren't that many users, but I'm
> not sure what would be a better name...

I couldn't come up with something sensible for the architecture side.

But for the scope guards I think the simple scoped_user_$MODE_access()
is fine as for the usage site it's just a user access, no?

The scope magic resolves either to the "masked" variant or to the
regular user_access_begin() + speculation barrier, depending on
architecture support. But that's an implementation detail hidden under
the hood of the scope...() macros.

Thanks,

        tglx

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [patch V3 07/12] uaccess: Provide scoped masked user access regions
  2025-10-21 15:45         ` Thomas Gleixner
@ 2025-10-21 15:51           ` Linus Torvalds
  0 siblings, 0 replies; 40+ messages in thread
From: Linus Torvalds @ 2025-10-21 15:51 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: David Laight, LKML, Christophe Leroy, Mathieu Desnoyers,
	Andrew Cooper, kernel test robot, Russell King, linux-arm-kernel,
	x86, Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
	linuxppc-dev, Paul Walmsley, Palmer Dabbelt, linux-riscv,
	Heiko Carstens, Christian Borntraeger, Sven Schnelle, linux-s390,
	Julia Lawall, Nicolas Palix, Peter Zijlstra, Darren Hart,
	Davidlohr Bueso, André Almeida, Alexander Viro,
	Christian Brauner, Jan Kara, linux-fsdevel

On Tue, 21 Oct 2025 at 05:46, Thomas Gleixner <tglx@linutronix.de> wrote:
>
> On Tue, Oct 21 2025 at 05:06, Linus Torvalds wrote:
> >
> > We could still change it since there aren't that many users, but I'm
> > not sure what would be a better name...
>
> I couldn't come up with something sensible for the architecture side.
>
> But for the scope guards I think the simple scoped_user_$MODE_access()
> is fine as for the usage site it's just a user access, no?

Ack.

               Linus

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [patch V3 07/12] uaccess: Provide scoped masked user access regions
  2025-10-21 14:29     ` Thomas Gleixner
                         ` (2 preceding siblings ...)
  2025-10-21 15:06       ` Linus Torvalds
@ 2025-10-21 18:55       ` David Laight
  3 siblings, 0 replies; 40+ messages in thread
From: David Laight @ 2025-10-21 18:55 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Christophe Leroy, Mathieu Desnoyers, Andrew Cooper,
	Linus Torvalds, kernel test robot, Russell King, linux-arm-kernel,
	x86, Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
	linuxppc-dev, Paul Walmsley, Palmer Dabbelt, linux-riscv,
	Heiko Carstens, Christian Borntraeger, Sven Schnelle, linux-s390,
	Julia Lawall, Nicolas Palix, Peter Zijlstra, Darren Hart,
	Davidlohr Bueso, André Almeida, Alexander Viro,
	Christian Brauner, Jan Kara, linux-fsdevel

On Tue, 21 Oct 2025 16:29:58 +0200
Thomas Gleixner <tglx@linutronix.de> wrote:

> On Mon, Oct 20 2025 at 19:28, David Laight wrote:
> > On Fri, 17 Oct 2025 12:09:08 +0200 (CEST)
> > Thomas Gleixner <tglx@linutronix.de> wrote:
> > That definitely looks better than the earlier versions.
> > Even if the implementation looks like an entry in the obfuscated C
> > competition.  
> 
> It has too many characters for that. The contest variant would be:
> 
> for(u8 s=0;!s;s=1)for(typeof(u) t= S(m,u,s,e);!s;s=1)for(C(u##m##a,c)(t);!s;s=1)for(const typeof(u) u=t;!s;s=1)
> 
> > I don't think you need the 'masked' in that name.
> > Since it works in all cases.
> >
> > (I don't like the word 'masked' at all, not sure where it came from.  
> 
> It's what Linus named it and I did not think about the name much so far.
> 
> > Probably because the first version used logical operators.
> > 'Masking' a user address ought to be the operation of removing high-order
> > address bits that the hardware is treating as 'don't care'.
> > The canonical operation here is uaddr = min(uaddr, guard_page) - likely to be
> > a conditional move.  
> 
> That's how it's implemented for x86:

I know - I suggested using cmov.

> 
> >>  b84:	48 b8 ef cd ab 89 67 45 23 01  movabs $0x123456789abcdef,%rax
> >>  b8e:	48 39 c7    	               cmp    %rax,%rdi
> >>  b91:	48 0f 47 f8          	       cmova  %rax,%rdi  
> 
> 0x123456789abcdef is a compile time placeholder for $USR_PTR_MAX which is
> replaced during early boot by the real user space topmost address. See below.
> 
> > I think that s/masked/sanitised/ would make more sense (the patch to do
> > that isn't very big at the moment). I might post it.)  
> 
> The real point is that it is optimized. It does not have to use the
> speculation fence if the architecture supports "masking" because the CPU
> can't speculate on the input address as the actual read/write address
> depends on the cmova. That's similar to the array_nospec() magic which
> masks the input index unconditionally so it's in the valid range before
> it can be used for speculatively accessing the array.
> 
> So yes, the naming is a bit awkward.
> 
> In principle most places which use user_$MODE_access_begin() could
> benefit from that. Also under the hood the scope magic actually falls
> back to that when the architecture does not support the "masked"
> variant.
> 
> So simply naming it scoped_user_$MODE_access() is probably the least
> confusing of all.
> 
> >> If masked user access is enabled on an architecture, then the pointer
> >> handed in to scoped_masked_user_$MODE_access() can be modified to point to
> >> a guaranteed faulting user address. This modification is only scope local
> >> as the pointer is aliased inside the scope. When the scope is left the
>> alias is no longer in effect. IOW the original pointer value is preserved
> >> so it can be used e.g. for fixup or diagnostic purposes in the fault path.  
> >
> > I think you need to add (in the kerndoc somewhere):
> >
> > There is no requirement to do the accesses in strict memory order
> > (or to access the lowest address first).
> > The only constraint is that gaps must be significantly less than 4k.  
> 
> The requirement is that the access is not spilling over into the kernel
> address space, which means:
> 
>        USR_PTR_MAX <= address < (1UL << 63)
> 
> USR_PTR_MAX on x86 is either
>             (1UL << 47) - PAGE_SIZE (4-level page tables)
>          or (1UL << 57) - PAGE_SIZE (5-level page tables)
> 
> Which means at least ~8 EiB of unmapped space in both cases.
> 
> The access order does not matter at all.

But consider the original x86-64 version.
While it relied on the guard page for accesses that started with a user
address, kernel addresses were converted to ~0.
While a byte access at ~0 fails because it isn't mapped, an access
at 'addr + 4' wraps to the bottom of userspace which can be mapped.
So the first access had to be at the requested address, although
subsequent ones only have to be 'reasonably sequential'.
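
To make the wrap concrete: ~0 + 4 truncates to 3 in 64 bits, i.e. a low
user address that may well be mapped.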

Not all code that is an obvious candidate for this code accesses
the base address first.
So it is best to require that the implementations allow for this,
and then explicitly document that it is allowed behaviour.

The ppc patches do convert kernel addresses to the base on an
invalid page - so they are fine.
I've not seen patches for other architectures.

32bit x86 has a suitable guard page, but the code really needs 'cmov'
and the recent removal of old CPUs (including the 486) didn't quite
go that far.


> 
> >> +#define __scoped_masked_user_access(_mode, _uptr, _size, _elbl)					\

Thinking about it, there is no need for a leading _ on #define parameter names.
It is only variables defined inside the #define that have 'issues' if the
caller passes in the same name.

> >> +for (bool ____stop = false; !____stop; ____stop = true)						\
> >> +	for (typeof((_uptr)) _tmpptr = __scoped_user_access_begin(_mode, _uptr, _size, _elbl);	\  
> >
> > Can you use 'auto' instead of typeof() ?  
> 
> Compilers are mightily unhappy about that unless I do typecasting on the
> assignment, which is not really buying anything.

ok - I did a very quick check and thought it might work.

If you can't use auto for the third definition, then I think tmpptr can be 'void __user *'.

	David

> 
> >> +	     !____stop; ____stop = true)							\
> >> +		for (CLASS(masked_user_##_mode##_access, scope) (_tmpptr); !____stop;		\
> >> +		     ____stop = true)					\
> >> +			/* Force modified pointer usage within the scope */			\
> >> +			for (const typeof((_uptr)) _uptr = _tmpptr; !____stop; ____stop = true)	\  
> >
> > gcc 15.1 also seems to support 'const auto _uptr = _tmpptr;'  
> 
> Older compilers not so much.
> 
> Thanks,
> 
>         tglx


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [patch V3 07/12] uaccess: Provide scoped masked user access regions
  2025-10-21 14:42       ` Thomas Gleixner
@ 2025-10-21 20:52         ` David Laight
  0 siblings, 0 replies; 40+ messages in thread
From: David Laight @ 2025-10-21 20:52 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Christophe Leroy, Mathieu Desnoyers, Andrew Cooper,
	Linus Torvalds, kernel test robot, Russell King, linux-arm-kernel,
	x86, Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
	linuxppc-dev, Paul Walmsley, Palmer Dabbelt, linux-riscv,
	Heiko Carstens, Christian Borntraeger, Sven Schnelle, linux-s390,
	Julia Lawall, Nicolas Palix, Peter Zijlstra, Darren Hart,
	Davidlohr Bueso, André Almeida, Alexander Viro,
	Christian Brauner, Jan Kara, linux-fsdevel

On Tue, 21 Oct 2025 16:42:22 +0200
Thomas Gleixner <tglx@linutronix.de> wrote:

> On Tue, Oct 21 2025 at 16:29, Thomas Gleixner wrote:
> > On Mon, Oct 20 2025 at 19:28, David Laight wrote:  
> >> There is no requirement to do the accesses in strict memory order
> >> (or to access the lowest address first).
> >> The only constraint is that gaps must be significantly less than 4k.  
> >
> > The requirement is that the access is not spilling over into the kernel
> > address space, which means:
> >
> >        USR_PTR_MAX <= address < (1UL << 63)
> >
> > USR_PTR_MAX on x86 is either
> >             (1UL << 47) - PAGE_SIZE (4-level page tables)
> >          or (1UL << 57) - PAGE_SIZE (5-level page tables)
> >
> > Which means at least ~8 EiB of unmapped space in both cases.
> >
> > The access order does not matter at all.  
> 
> I just noticed that LAM reduces that gap to one page, but then the
> kernel has an 8 EiB gap right at the kernel/user boundary, which means
> even in the LAM case an access with less than 8EiB offset from
> USR_PTR_MAX is guaranteed to fault and not to be able to speculatively
> access actual kernel memory.

It wouldn't be a speculative access, it would be a real access.
But 4k (e.g. a single page) is plenty for 'reasonably sequential'.

Pretty much the only thing that has to be disallowed is a reverse
order memcpy() (or one that accesses the last bytes first) for
copy_to/from_user() if the length parameter is ignored completely.
Linus wasn't brave enough to remove it from the current version
of access_ok().

I do wonder if any other CPUs have the same architectural issues
that required the guard page between user and kernel on 32bit x86.
(One is a system call at the end of the last page.)

LAM is one reason why 'masked_user_access' is such a bad name.

	David 

> 
> Thanks,
> 
>         tglx


^ permalink raw reply	[flat|nested] 40+ messages in thread

end of thread, other threads:[~2025-10-21 20:53 UTC | newest]

Thread overview: 40+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-10-17 10:08 [patch V3 00/12] uaccess: Provide and use scopes for user masked access Thomas Gleixner
2025-10-17 10:08 ` [patch V3 01/12] ARM: uaccess: Implement missing __get_user_asm_dword() Thomas Gleixner
2025-10-17 12:36   ` Mathieu Desnoyers
2025-10-17 10:08 ` [patch V3 02/12] uaccess: Provide ASM GOTO safe wrappers for unsafe_*_user() Thomas Gleixner
2025-10-17 12:43   ` Mathieu Desnoyers
2025-10-17 12:48     ` Mathieu Desnoyers
2025-10-17 10:09 ` [patch V3 03/12] x86/uaccess: Use unsafe wrappers for ASM GOTO Thomas Gleixner
2025-10-17 10:09 ` [patch V3 04/12] powerpc/uaccess: " Thomas Gleixner
2025-10-17 10:09 ` [patch V3 05/12] riscv/uaccess: " Thomas Gleixner
2025-10-17 10:09 ` [patch V3 06/12] s390/uaccess: " Thomas Gleixner
2025-10-17 10:09 ` [patch V3 07/12] uaccess: Provide scoped masked user access regions Thomas Gleixner
2025-10-17 11:08   ` Andrew Cooper
2025-10-17 11:21     ` Thomas Gleixner
2025-10-17 11:29       ` Andrew Cooper
2025-10-17 11:25     ` Peter Zijlstra
2025-10-17 13:23   ` Mathieu Desnoyers
2025-10-20 18:28   ` David Laight
2025-10-21 14:29     ` Thomas Gleixner
2025-10-21 14:42       ` Thomas Gleixner
2025-10-21 20:52         ` David Laight
2025-10-21 14:44       ` Peter Zijlstra
2025-10-21 15:06       ` Linus Torvalds
2025-10-21 15:45         ` Thomas Gleixner
2025-10-21 15:51           ` Linus Torvalds
2025-10-21 18:55       ` David Laight
2025-10-17 10:09 ` [patch V3 08/12] uaccess: Provide put/get_user_masked() Thomas Gleixner
2025-10-17 13:41   ` Mathieu Desnoyers
2025-10-17 13:45     ` Mathieu Desnoyers
2025-10-20  6:50       ` Thomas Gleixner
2025-10-17 10:09 ` [patch V3 09/12] [RFC] coccinelle: misc: Add scoped_masked_$MODE_access() checker script Thomas Gleixner
2025-10-17 10:51   ` Julia Lawall
2025-10-17 10:09 ` [patch V3 10/12] futex: Convert to scoped masked user access Thomas Gleixner
2025-10-17 10:09 ` [patch V3 11/12] x86/futex: " Thomas Gleixner
2025-10-17 13:37   ` Andrew Cooper
2025-10-17 10:09 ` [patch V3 12/12] select: " Thomas Gleixner
2025-10-17 10:35   ` Peter Zijlstra
2025-10-17 11:12     ` Thomas Gleixner
2025-10-17 10:37 ` [patch V3 00/12] uaccess: Provide and use scopes for user masked access Peter Zijlstra
2025-10-17 10:50   ` Andrew Cooper
2025-10-17 12:25 ` Mathieu Desnoyers

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).