* [patch V4 01/12] ARM: uaccess: Implement missing __get_user_asm_dword()
2025-10-22 12:49 [patch V4 00/12] uaccess: Provide and use scopes for user access Thomas Gleixner
@ 2025-10-22 12:49 ` Thomas Gleixner
2025-10-22 12:49 ` [patch V4 02/12] uaccess: Provide ASM GOTO safe wrappers for unsafe_*_user() Thomas Gleixner
` (11 subsequent siblings)
12 siblings, 0 replies; 21+ messages in thread
From: Thomas Gleixner @ 2025-10-22 12:49 UTC (permalink / raw)
To: LKML
Cc: kernel test robot, Russell King, linux-arm-kernel, Linus Torvalds,
x86, Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, linuxppc-dev, Paul Walmsley, Palmer Dabbelt,
linux-riscv, Heiko Carstens, Christian Borntraeger, Sven Schnelle,
linux-s390, Mathieu Desnoyers, Andrew Cooper, David Laight,
Julia Lawall, Nicolas Palix, Peter Zijlstra, Darren Hart,
Davidlohr Bueso, André Almeida, Alexander Viro,
Christian Brauner, Jan Kara, linux-fsdevel
When CONFIG_CPU_SPECTRE=n, get_user() is missing the 8-byte ASM variant
for no good reason. This prevents using get_user(u64) in generic code.
Implement it as a sequence of two 4-byte reads with LE/BE awareness, and
make the type of the intermediate variable to read into (unsigned long or
unsigned long long) dependent on the target type.
The __long_type() macro and the idea were lifted from PowerPC. Thanks to
Christophe for pointing it out.
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Russell King <linux@armlinux.org.uk>
Cc: linux-arm-kernel@lists.infradead.org
Closes: https://lore.kernel.org/oe-kbuild-all/202509120155.pFgwfeUD-lkp@intel.com/
---
V2a: Solve the *ptr issue vs. unsigned long long - Russell/Christophe
V2: New patch to fix the 0-day fallout
---
arch/arm/include/asm/uaccess.h | 26 +++++++++++++++++++++++++-
1 file changed, 25 insertions(+), 1 deletion(-)
--- a/arch/arm/include/asm/uaccess.h
+++ b/arch/arm/include/asm/uaccess.h
@@ -283,10 +283,17 @@ extern int __put_user_8(void *, unsigned
__gu_err; \
})
+/*
+ * This is a type: either unsigned long, if the argument fits into
+ * that type, or otherwise unsigned long long.
+ */
+#define __long_type(x) \
+ __typeof__(__builtin_choose_expr(sizeof(x) > sizeof(0UL), 0ULL, 0UL))
+
#define __get_user_err(x, ptr, err, __t) \
do { \
unsigned long __gu_addr = (unsigned long)(ptr); \
- unsigned long __gu_val; \
+ __long_type(x) __gu_val; \
unsigned int __ua_flags; \
__chk_user_ptr(ptr); \
might_fault(); \
@@ -295,6 +302,7 @@ do { \
case 1: __get_user_asm_byte(__gu_val, __gu_addr, err, __t); break; \
case 2: __get_user_asm_half(__gu_val, __gu_addr, err, __t); break; \
case 4: __get_user_asm_word(__gu_val, __gu_addr, err, __t); break; \
+ case 8: __get_user_asm_dword(__gu_val, __gu_addr, err, __t); break; \
default: (__gu_val) = __get_user_bad(); \
} \
uaccess_restore(__ua_flags); \
@@ -353,6 +361,22 @@ do { \
#define __get_user_asm_word(x, addr, err, __t) \
__get_user_asm(x, addr, err, "ldr" __t)
+#ifdef __ARMEB__
+#define __WORD0_OFFS 4
+#define __WORD1_OFFS 0
+#else
+#define __WORD0_OFFS 0
+#define __WORD1_OFFS 4
+#endif
+
+#define __get_user_asm_dword(x, addr, err, __t) \
+ ({ \
+ unsigned long __w0, __w1; \
+ __get_user_asm(__w0, addr + __WORD0_OFFS, err, "ldr" __t); \
+ __get_user_asm(__w1, addr + __WORD1_OFFS, err, "ldr" __t); \
+ (x) = ((u64)__w1 << 32) | (u64) __w0; \
+})
+
#define __put_user_switch(x, ptr, __err, __fn) \
do { \
const __typeof__(*(ptr)) __user *__pu_ptr = (ptr); \
* [patch V4 02/12] uaccess: Provide ASM GOTO safe wrappers for unsafe_*_user()
2025-10-22 12:49 [patch V4 00/12] uaccess: Provide and use scopes for user access Thomas Gleixner
2025-10-22 12:49 ` [patch V4 01/12] ARM: uaccess: Implement missing __get_user_asm_dword() Thomas Gleixner
@ 2025-10-22 12:49 ` Thomas Gleixner
2025-10-22 12:49 ` [patch V4 03/12] x86/uaccess: Use unsafe wrappers for ASM GOTO Thomas Gleixner
` (10 subsequent siblings)
12 siblings, 0 replies; 21+ messages in thread
From: Thomas Gleixner @ 2025-10-22 12:49 UTC (permalink / raw)
To: LKML
Cc: Linus Torvalds, kernel test robot, Russell King, linux-arm-kernel,
x86, Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, linuxppc-dev, Paul Walmsley, Palmer Dabbelt,
linux-riscv, Heiko Carstens, Christian Borntraeger, Sven Schnelle,
linux-s390, Mathieu Desnoyers, Andrew Cooper, David Laight,
Julia Lawall, Nicolas Palix, Peter Zijlstra, Darren Hart,
Davidlohr Bueso, André Almeida, Alexander Viro,
Christian Brauner, Jan Kara, linux-fsdevel
ASM GOTO is miscompiled by GCC when it is used inside an auto cleanup scope:
bool foo(u32 __user *p, u32 val)
{
scoped_guard(pagefault)
unsafe_put_user(val, p, efault);
return true;
efault:
return false;
}
e80: e8 00 00 00 00 call e85 <foo+0x5>
e85: 65 48 8b 05 00 00 00 00 mov %gs:0x0(%rip),%rax
e8d: 83 80 04 14 00 00 01 addl $0x1,0x1404(%rax) // pf_disable++
e94: 89 37 mov %esi,(%rdi)
e96: 83 a8 04 14 00 00 01 subl $0x1,0x1404(%rax) // pf_disable--
e9d: b8 01 00 00 00 mov $0x1,%eax // success
ea2: e9 00 00 00 00 jmp ea7 <foo+0x27> // ret
ea7: 31 c0 xor %eax,%eax // fail
ea9: e9 00 00 00 00 jmp eae <foo+0x2e> // ret
which is broken as it leaks the pagefault disable counter on failure.
Clang at least fails the build.
Linus suggested adding a local label inside the macro scope and letting
that jump to the actual caller-supplied error label:
__label__ local_label; \
arch_unsafe_get_user(x, ptr, local_label); \
if (0) { \
local_label: \
goto label; \
} \
That works for both GCC and clang.
clang:
c80: 0f 1f 44 00 00 nopl 0x0(%rax,%rax,1)
c85: 65 48 8b 0c 25 00 00 00 00 mov %gs:0x0,%rcx
c8e: ff 81 04 14 00 00 incl 0x1404(%rcx) // pf_disable++
c94: 31 c0 xor %eax,%eax // set retval to false
c96: 89 37 mov %esi,(%rdi) // write
c98: b0 01 mov $0x1,%al // set retval to true
c9a: ff 89 04 14 00 00 decl 0x1404(%rcx) // pf_disable--
ca0: 2e e9 00 00 00 00 cs jmp ca6 <foo+0x26> // ret
The exception table entry points correctly to c9a
GCC:
f70: e8 00 00 00 00 call f75 <baz+0x5>
f75: 65 48 8b 05 00 00 00 00 mov %gs:0x0(%rip),%rax
f7d: 83 80 04 14 00 00 01 addl $0x1,0x1404(%rax) // pf_disable++
f84: 8b 17 mov (%rdi),%edx
f86: 89 16 mov %edx,(%rsi)
f88: 83 a8 04 14 00 00 01 subl $0x1,0x1404(%rax) // pf_disable--
f8f: b8 01 00 00 00 mov $0x1,%eax // success
f94: e9 00 00 00 00 jmp f99 <baz+0x29> // ret
f99: 83 a8 04 14 00 00 01 subl $0x1,0x1404(%rax) // pf_disable--
fa0: 31 c0 xor %eax,%eax // fail
fa2: e9 00 00 00 00 jmp fa7 <baz+0x37> // ret
The exception table entry points correctly to f99
So both compilers optimize out the extra goto and emit correct and
efficient code.
Provide a generic wrapper to do that, to avoid modifying all the affected
architecture-specific implementations with this workaround.
The only change required for architectures is to rename unsafe_*_user() to
arch_unsafe_*_user(). That's done in subsequent changes.
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
include/linux/uaccess.h | 72 +++++++++++++++++++++++++++++++++++++++++++++---
1 file changed, 68 insertions(+), 4 deletions(-)
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -518,7 +518,34 @@ long strncpy_from_user_nofault(char *dst
long count);
long strnlen_user_nofault(const void __user *unsafe_addr, long count);
-#ifndef __get_kernel_nofault
+#ifdef arch_get_kernel_nofault
+/*
+ * Wrap the architecture implementation so that @label can be outside of a
+ * cleanup() scope. A regular C goto works correctly, but ASM goto does
+ * not. Clang rejects such an attempt, but GCC silently emits buggy code.
+ */
+#define __get_kernel_nofault(dst, src, type, label) \
+do { \
+ __label__ local_label; \
+ arch_get_kernel_nofault(dst, src, type, local_label); \
+ if (0) { \
+ local_label: \
+ goto label; \
+ } \
+} while (0)
+
+#define __put_kernel_nofault(dst, src, type, label) \
+do { \
+ __label__ local_label; \
+ arch_put_kernel_nofault(dst, src, type, local_label); \
+ if (0) { \
+ local_label: \
+ goto label; \
+ } \
+} while (0)
+
+#elif !defined(__get_kernel_nofault) /* arch_get_kernel_nofault */
+
#define __get_kernel_nofault(dst, src, type, label) \
do { \
type __user *p = (type __force __user *)(src); \
@@ -535,7 +562,8 @@ do { \
if (__put_user(data, p)) \
goto label; \
} while (0)
-#endif
+
+#endif /* !__get_kernel_nofault */
/**
* get_kernel_nofault(): safely attempt to read from a location
@@ -549,7 +577,42 @@ do { \
copy_from_kernel_nofault(&(val), __gk_ptr, sizeof(val));\
})
-#ifndef user_access_begin
+#ifdef user_access_begin
+
+#ifdef arch_unsafe_get_user
+/*
+ * Wrap the architecture implementation so that @label can be outside of a
+ * cleanup() scope. A regular C goto works correctly, but ASM goto does
+ * not. Clang rejects such an attempt, but GCC silently emits buggy code.
+ *
+ * Some architectures use internal local labels already, but this extra
+ * indirection here is harmless because the compiler optimizes it out
+ * completely in any case. This construct just ensures that the ASM GOTO
+ * target is always in the local scope. The C goto 'label' works correctly
+ * when leaving a cleanup() scope.
+ */
+#define unsafe_get_user(x, ptr, label) \
+do { \
+ __label__ local_label; \
+ arch_unsafe_get_user(x, ptr, local_label); \
+ if (0) { \
+ local_label: \
+ goto label; \
+ } \
+} while (0)
+
+#define unsafe_put_user(x, ptr, label) \
+do { \
+ __label__ local_label; \
+ arch_unsafe_put_user(x, ptr, local_label); \
+ if (0) { \
+ local_label: \
+ goto label; \
+ } \
+} while (0)
+#endif /* arch_unsafe_get_user */
+
+#else /* user_access_begin */
#define user_access_begin(ptr,len) access_ok(ptr, len)
#define user_access_end() do { } while (0)
#define unsafe_op_wrap(op, err) do { if (unlikely(op)) goto err; } while (0)
@@ -559,7 +622,8 @@ do { \
#define unsafe_copy_from_user(d,s,l,e) unsafe_op_wrap(__copy_from_user(d,s,l),e)
static inline unsigned long user_access_save(void) { return 0UL; }
static inline void user_access_restore(unsigned long flags) { }
-#endif
+#endif /* !user_access_begin */
+
#ifndef user_write_access_begin
#define user_write_access_begin user_access_begin
#define user_write_access_end user_access_end
* [patch V4 03/12] x86/uaccess: Use unsafe wrappers for ASM GOTO
2025-10-22 12:49 [patch V4 00/12] uaccess: Provide and use scopes for user access Thomas Gleixner
2025-10-22 12:49 ` [patch V4 01/12] ARM: uaccess: Implement missing __get_user_asm_dword() Thomas Gleixner
2025-10-22 12:49 ` [patch V4 02/12] uaccess: Provide ASM GOTO safe wrappers for unsafe_*_user() Thomas Gleixner
@ 2025-10-22 12:49 ` Thomas Gleixner
2025-10-22 12:49 ` [patch V4 04/12] powerpc/uaccess: " Thomas Gleixner
` (9 subsequent siblings)
12 siblings, 0 replies; 21+ messages in thread
From: Thomas Gleixner @ 2025-10-22 12:49 UTC (permalink / raw)
To: LKML
Cc: x86, kernel test robot, Russell King, linux-arm-kernel,
Linus Torvalds, Madhavan Srinivasan, Michael Ellerman,
Nicholas Piggin, Christophe Leroy, linuxppc-dev, Paul Walmsley,
Palmer Dabbelt, linux-riscv, Heiko Carstens,
Christian Borntraeger, Sven Schnelle, linux-s390,
Mathieu Desnoyers, Andrew Cooper, David Laight, Julia Lawall,
Nicolas Palix, Peter Zijlstra, Darren Hart, Davidlohr Bueso,
André Almeida, Alexander Viro, Christian Brauner, Jan Kara,
linux-fsdevel
ASM GOTO is miscompiled by GCC when it is used inside an auto cleanup scope:
bool foo(u32 __user *p, u32 val)
{
scoped_guard(pagefault)
unsafe_put_user(val, p, efault);
return true;
efault:
return false;
}
It ends up leaking the pagefault disable counter in the fault path. clang
at least fails the build.
Rename unsafe_*_user() to arch_unsafe_*_user() which makes the generic
uaccess header wrap it with a local label that makes both compilers emit
correct code. Same for the kernel_nofault() variants.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: x86@kernel.org
---
arch/x86/include/asm/uaccess.h | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -528,18 +528,18 @@ static __must_check __always_inline bool
#define user_access_save() smap_save()
#define user_access_restore(x) smap_restore(x)
-#define unsafe_put_user(x, ptr, label) \
+#define arch_unsafe_put_user(x, ptr, label) \
__put_user_size((__typeof__(*(ptr)))(x), (ptr), sizeof(*(ptr)), label)
#ifdef CONFIG_CC_HAS_ASM_GOTO_OUTPUT
-#define unsafe_get_user(x, ptr, err_label) \
+#define arch_unsafe_get_user(x, ptr, err_label) \
do { \
__inttype(*(ptr)) __gu_val; \
__get_user_size(__gu_val, (ptr), sizeof(*(ptr)), err_label); \
(x) = (__force __typeof__(*(ptr)))__gu_val; \
} while (0)
#else // !CONFIG_CC_HAS_ASM_GOTO_OUTPUT
-#define unsafe_get_user(x, ptr, err_label) \
+#define arch_unsafe_get_user(x, ptr, err_label) \
do { \
int __gu_err; \
__inttype(*(ptr)) __gu_val; \
@@ -618,11 +618,11 @@ do { \
} while (0)
#ifdef CONFIG_CC_HAS_ASM_GOTO_OUTPUT
-#define __get_kernel_nofault(dst, src, type, err_label) \
+#define arch_get_kernel_nofault(dst, src, type, err_label) \
__get_user_size(*((type *)(dst)), (__force type __user *)(src), \
sizeof(type), err_label)
#else // !CONFIG_CC_HAS_ASM_GOTO_OUTPUT
-#define __get_kernel_nofault(dst, src, type, err_label) \
+#define arch_get_kernel_nofault(dst, src, type, err_label) \
do { \
int __kr_err; \
\
@@ -633,7 +633,7 @@ do { \
} while (0)
#endif // CONFIG_CC_HAS_ASM_GOTO_OUTPUT
-#define __put_kernel_nofault(dst, src, type, err_label) \
+#define arch_put_kernel_nofault(dst, src, type, err_label) \
__put_user_size(*((type *)(src)), (__force type __user *)(dst), \
sizeof(type), err_label)
* [patch V4 04/12] powerpc/uaccess: Use unsafe wrappers for ASM GOTO
2025-10-22 12:49 [patch V4 00/12] uaccess: Provide and use scopes for user access Thomas Gleixner
` (2 preceding siblings ...)
2025-10-22 12:49 ` [patch V4 03/12] x86/uaccess: Use unsafe wrappers for ASM GOTO Thomas Gleixner
@ 2025-10-22 12:49 ` Thomas Gleixner
2025-10-22 12:49 ` [patch V4 05/12] riscv/uaccess: " Thomas Gleixner
` (8 subsequent siblings)
12 siblings, 0 replies; 21+ messages in thread
From: Thomas Gleixner @ 2025-10-22 12:49 UTC (permalink / raw)
To: LKML
Cc: Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, linuxppc-dev, kernel test robot, Russell King,
linux-arm-kernel, Linus Torvalds, x86, Paul Walmsley,
Palmer Dabbelt, linux-riscv, Heiko Carstens,
Christian Borntraeger, Sven Schnelle, linux-s390,
Mathieu Desnoyers, Andrew Cooper, David Laight, Julia Lawall,
Nicolas Palix, Peter Zijlstra, Darren Hart, Davidlohr Bueso,
André Almeida, Alexander Viro, Christian Brauner, Jan Kara,
linux-fsdevel
ASM GOTO is miscompiled by GCC when it is used inside an auto cleanup scope:
bool foo(u32 __user *p, u32 val)
{
scoped_guard(pagefault)
unsafe_put_user(val, p, efault);
return true;
efault:
return false;
}
It ends up leaking the pagefault disable counter in the fault path. clang
at least fails the build.
Rename unsafe_*_user() to arch_unsafe_*_user() which makes the generic
uaccess header wrap it with a local label that makes both compilers emit
correct code. Same for the kernel_nofault() variants.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Madhavan Srinivasan <maddy@linux.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: linuxppc-dev@lists.ozlabs.org
---
arch/powerpc/include/asm/uaccess.h | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -451,7 +451,7 @@ user_write_access_begin(const void __use
#define user_write_access_begin user_write_access_begin
#define user_write_access_end prevent_current_write_to_user
-#define unsafe_get_user(x, p, e) do { \
+#define arch_unsafe_get_user(x, p, e) do { \
__long_type(*(p)) __gu_val; \
__typeof__(*(p)) __user *__gu_addr = (p); \
\
@@ -459,7 +459,7 @@ user_write_access_begin(const void __use
(x) = (__typeof__(*(p)))__gu_val; \
} while (0)
-#define unsafe_put_user(x, p, e) \
+#define arch_unsafe_put_user(x, p, e) \
__put_user_size_goto((__typeof__(*(p)))(x), (p), sizeof(*(p)), e)
#define unsafe_copy_from_user(d, s, l, e) \
@@ -504,11 +504,11 @@ do { \
unsafe_put_user(*(u8*)(_src + _i), (u8 __user *)(_dst + _i), e); \
} while (0)
-#define __get_kernel_nofault(dst, src, type, err_label) \
+#define arch_get_kernel_nofault(dst, src, type, err_label) \
__get_user_size_goto(*((type *)(dst)), \
(__force type __user *)(src), sizeof(type), err_label)
-#define __put_kernel_nofault(dst, src, type, err_label) \
+#define arch_put_kernel_nofault(dst, src, type, err_label) \
__put_user_size_goto(*((type *)(src)), \
(__force type __user *)(dst), sizeof(type), err_label)
* [patch V4 05/12] riscv/uaccess: Use unsafe wrappers for ASM GOTO
2025-10-22 12:49 [patch V4 00/12] uaccess: Provide and use scopes for user access Thomas Gleixner
` (3 preceding siblings ...)
2025-10-22 12:49 ` [patch V4 04/12] powerpc/uaccess: " Thomas Gleixner
@ 2025-10-22 12:49 ` Thomas Gleixner
2025-10-22 12:49 ` [patch V4 06/12] s390/uaccess: " Thomas Gleixner
` (7 subsequent siblings)
12 siblings, 0 replies; 21+ messages in thread
From: Thomas Gleixner @ 2025-10-22 12:49 UTC (permalink / raw)
To: LKML
Cc: Paul Walmsley, Palmer Dabbelt, linux-riscv, kernel test robot,
Russell King, linux-arm-kernel, Linus Torvalds, x86,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, linuxppc-dev, Heiko Carstens,
Christian Borntraeger, Sven Schnelle, linux-s390,
Mathieu Desnoyers, Andrew Cooper, David Laight, Julia Lawall,
Nicolas Palix, Peter Zijlstra, Darren Hart, Davidlohr Bueso,
André Almeida, Alexander Viro, Christian Brauner, Jan Kara,
linux-fsdevel
ASM GOTO is miscompiled by GCC when it is used inside an auto cleanup scope:
bool foo(u32 __user *p, u32 val)
{
scoped_guard(pagefault)
unsafe_put_user(val, p, efault);
return true;
efault:
return false;
}
It ends up leaking the pagefault disable counter in the fault path. clang
at least fails the build.
Rename unsafe_*_user() to arch_unsafe_*_user() which makes the generic
uaccess header wrap it with a local label that makes both compilers emit
correct code. Same for the kernel_nofault() variants.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Paul Walmsley <pjw@kernel.org>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: linux-riscv@lists.infradead.org
---
arch/riscv/include/asm/uaccess.h | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
--- a/arch/riscv/include/asm/uaccess.h
+++ b/arch/riscv/include/asm/uaccess.h
@@ -437,10 +437,10 @@ unsigned long __must_check clear_user(vo
__clear_user(untagged_addr(to), n) : n;
}
-#define __get_kernel_nofault(dst, src, type, err_label) \
+#define arch_get_kernel_nofault(dst, src, type, err_label) \
__get_user_nocheck(*((type *)(dst)), (__force __user type *)(src), err_label)
-#define __put_kernel_nofault(dst, src, type, err_label) \
+#define arch_put_kernel_nofault(dst, src, type, err_label) \
__put_user_nocheck(*((type *)(src)), (__force __user type *)(dst), err_label)
static __must_check __always_inline bool user_access_begin(const void __user *ptr, size_t len)
@@ -460,10 +460,10 @@ static inline void user_access_restore(u
* We want the unsafe accessors to always be inlined and use
* the error labels - thus the macro games.
*/
-#define unsafe_put_user(x, ptr, label) \
+#define arch_unsafe_put_user(x, ptr, label) \
__put_user_nocheck(x, (ptr), label)
-#define unsafe_get_user(x, ptr, label) do { \
+#define arch_unsafe_get_user(x, ptr, label) do { \
__inttype(*(ptr)) __gu_val; \
__get_user_nocheck(__gu_val, (ptr), label); \
(x) = (__force __typeof__(*(ptr)))__gu_val; \
* [patch V4 06/12] s390/uaccess: Use unsafe wrappers for ASM GOTO
2025-10-22 12:49 [patch V4 00/12] uaccess: Provide and use scopes for user access Thomas Gleixner
` (4 preceding siblings ...)
2025-10-22 12:49 ` [patch V4 05/12] riscv/uaccess: " Thomas Gleixner
@ 2025-10-22 12:49 ` Thomas Gleixner
2025-10-22 15:00 ` Heiko Carstens
2025-10-22 12:49 ` [patch V4 07/12] uaccess: Provide scoped user access regions Thomas Gleixner
` (6 subsequent siblings)
12 siblings, 1 reply; 21+ messages in thread
From: Thomas Gleixner @ 2025-10-22 12:49 UTC (permalink / raw)
To: LKML
Cc: Heiko Carstens, Christian Borntraeger, Sven Schnelle, linux-s390,
kernel test robot, Russell King, linux-arm-kernel, Linus Torvalds,
x86, Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, linuxppc-dev, Paul Walmsley, Palmer Dabbelt,
linux-riscv, Mathieu Desnoyers, Andrew Cooper, David Laight,
Julia Lawall, Nicolas Palix, Peter Zijlstra, Darren Hart,
Davidlohr Bueso, André Almeida, Alexander Viro,
Christian Brauner, Jan Kara, linux-fsdevel
ASM GOTO is miscompiled by GCC when it is used inside an auto cleanup scope:
bool foo(u32 __user *p, u32 val)
{
scoped_guard(pagefault)
unsafe_put_user(val, p, efault);
return true;
efault:
return false;
}
It ends up leaking the pagefault disable counter in the fault path. clang
at least fails the build.
S390 is not affected for unsafe_*_user() as it uses its own local label
already, but __get/put_kernel_nofault() lack that.
Rename them to arch_*_kernel_nofault() which makes the generic uaccess
header wrap it with a local label that makes both compilers emit correct
code.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Christian Borntraeger <borntraeger@linux.ibm.com>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: linux-s390@vger.kernel.org
---
arch/s390/include/asm/uaccess.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
--- a/arch/s390/include/asm/uaccess.h
+++ b/arch/s390/include/asm/uaccess.h
@@ -468,8 +468,8 @@ do { \
#endif /* CONFIG_CC_HAS_ASM_GOTO_OUTPUT && CONFIG_CC_HAS_ASM_AOR_FORMAT_FLAGS */
-#define __get_kernel_nofault __mvc_kernel_nofault
-#define __put_kernel_nofault __mvc_kernel_nofault
+#define arch_get_kernel_nofault __mvc_kernel_nofault
+#define arch_put_kernel_nofault __mvc_kernel_nofault
void __cmpxchg_user_key_called_with_bad_pointer(void);
* Re: [patch V4 06/12] s390/uaccess: Use unsafe wrappers for ASM GOTO
2025-10-22 12:49 ` [patch V4 06/12] s390/uaccess: " Thomas Gleixner
@ 2025-10-22 15:00 ` Heiko Carstens
0 siblings, 0 replies; 21+ messages in thread
From: Heiko Carstens @ 2025-10-22 15:00 UTC (permalink / raw)
To: Thomas Gleixner
Cc: LKML, Christian Borntraeger, Sven Schnelle, linux-s390,
kernel test robot, Russell King, linux-arm-kernel, Linus Torvalds,
x86, Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, linuxppc-dev, Paul Walmsley, Palmer Dabbelt,
linux-riscv, Mathieu Desnoyers, Andrew Cooper, David Laight,
Julia Lawall, Nicolas Palix, Peter Zijlstra, Darren Hart,
Davidlohr Bueso, André Almeida, Alexander Viro,
Christian Brauner, Jan Kara, linux-fsdevel
On Wed, Oct 22, 2025 at 02:49:09PM +0200, Thomas Gleixner wrote:
> ASM GOTO is miscompiled by GCC when it is used inside a auto cleanup scope:
>
> bool foo(u32 __user *p, u32 val)
> {
> scoped_guard(pagefault)
> unsafe_put_user(val, p, efault);
> return true;
> efault:
> return false;
> }
>
> It ends up leaking the pagefault disable counter in the fault path. clang
> at least fails the build.
>
> S390 is not affected for unsafe_*_user() as it uses it's own local label
> already, but __get/put_kernel_nofault() lack that.
>
> Rename them to arch_*_kernel_nofault() which makes the generic uaccess
> header wrap it with a local label that makes both compilers emit correct
> code.
>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Heiko Carstens <hca@linux.ibm.com>
> Cc: Christian Borntraeger <borntraeger@linux.ibm.com>
> Cc: Sven Schnelle <svens@linux.ibm.com>
> Cc: linux-s390@vger.kernel.org
> ---
> arch/s390/include/asm/uaccess.h | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
Acked-by: Heiko Carstens <hca@linux.ibm.com>
* [patch V4 07/12] uaccess: Provide scoped user access regions
2025-10-22 12:49 [patch V4 00/12] uaccess: Provide and use scopes for user access Thomas Gleixner
` (5 preceding siblings ...)
2025-10-22 12:49 ` [patch V4 06/12] s390/uaccess: " Thomas Gleixner
@ 2025-10-22 12:49 ` Thomas Gleixner
2025-10-22 14:20 ` David Laight
2025-10-22 12:49 ` [patch V4 08/12] uaccess: Provide put/get_user_scoped() Thomas Gleixner
` (5 subsequent siblings)
12 siblings, 1 reply; 21+ messages in thread
From: Thomas Gleixner @ 2025-10-22 12:49 UTC (permalink / raw)
To: LKML
Cc: Christophe Leroy, Mathieu Desnoyers, Andrew Cooper,
Linus Torvalds, David Laight, kernel test robot, Russell King,
linux-arm-kernel, x86, Madhavan Srinivasan, Michael Ellerman,
Nicholas Piggin, linuxppc-dev, Paul Walmsley, Palmer Dabbelt,
linux-riscv, Heiko Carstens, Christian Borntraeger, Sven Schnelle,
linux-s390, Julia Lawall, Nicolas Palix, Peter Zijlstra,
Darren Hart, Davidlohr Bueso, André Almeida, Alexander Viro,
Christian Brauner, Jan Kara, linux-fsdevel
User space access regions are tedious and require similar code patterns all
over the place:
if (!user_read_access_begin(from, sizeof(*from)))
return -EFAULT;
unsafe_get_user(val, from, Efault);
user_read_access_end();
return 0;
Efault:
user_read_access_end();
return -EFAULT;
This got worse with the recent addition of masked user access, which
optimizes the speculation prevention:
if (can_do_masked_user_access())
from = masked_user_read_access_begin((from));
else if (!user_read_access_begin(from, sizeof(*from)))
return -EFAULT;
unsafe_get_user(val, from, Efault);
user_read_access_end();
return 0;
Efault:
user_read_access_end();
return -EFAULT;
There have been issues with using the wrong user_*_access_end() variant in
the error path, and other typical copy & paste problems, e.g. using the
wrong fault label in a user space accessor, which ends up invoking the
wrong access end variant.
These patterns beg for scopes with automatic cleanup. The resulting outcome
is:
scoped_user_read_access(from, Efault)
unsafe_get_user(val, from, Efault);
return 0;
Efault:
return -EFAULT;
The scope guarantees the proper cleanup for the access mode is invoked both
in the success and the failure (fault) path.
The scoped_user_$MODE_access() macros are implemented as self-terminating
nested for() loops. Thanks to Andrew Cooper for pointing me at them. The
scope can therefore be left with 'break', 'goto' and 'return'. Even
'continue' "works" due to the self-termination mechanism. Both GCC and
clang optimize the convoluted macro maze out completely, and with clang
the above results in:
b80: f3 0f 1e fa endbr64
b84: 48 b8 ef cd ab 89 67 45 23 01 movabs $0x123456789abcdef,%rax
b8e: 48 39 c7 cmp %rax,%rdi
b91: 48 0f 47 f8 cmova %rax,%rdi
b95: 90 nop
b96: 90 nop
b97: 90 nop
b98: 31 c9 xor %ecx,%ecx
b9a: 8b 07 mov (%rdi),%eax
b9c: 89 06 mov %eax,(%rsi)
b9e: 85 c9 test %ecx,%ecx
ba0: 0f 94 c0 sete %al
ba3: 90 nop
ba4: 90 nop
ba5: 90 nop
ba6: c3 ret
Which is about as compact as it gets. The NOPs are placeholders for
STAC/CLAC. GCC emits the fault path separately:
bf0: f3 0f 1e fa endbr64
bf4: 48 b8 ef cd ab 89 67 45 23 01 movabs $0x123456789abcdef,%rax
bfe: 48 39 c7 cmp %rax,%rdi
c01: 48 0f 47 f8 cmova %rax,%rdi
c05: 90 nop
c06: 90 nop
c07: 90 nop
c08: 31 d2 xor %edx,%edx
c0a: 8b 07 mov (%rdi),%eax
c0c: 89 06 mov %eax,(%rsi)
c0e: 85 d2 test %edx,%edx
c10: 75 09 jne c1b <afoo+0x2b>
c12: 90 nop
c13: 90 nop
c14: 90 nop
c15: b8 01 00 00 00 mov $0x1,%eax
c1a: c3 ret
c1b: 90 nop
c1c: 90 nop
c1d: 90 nop
c1e: 31 c0 xor %eax,%eax
c20: c3 ret
The fault labels for the scoped*() macros and the fault labels for the
actual user space accessors can be shared and must be placed outside of the
scope.
If masked user access is enabled on an architecture, then the pointer
handed to scoped_user_$MODE_access() can be modified to point to a
guaranteed faulting user address. This modification is only scope local, as
the pointer is aliased inside the scope. When the scope is left, the alias
is no longer in effect. IOW, the original pointer value is preserved so it
can be used e.g. for fixup or diagnostic purposes in the fault path.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: David Laight <david.laight.linux@gmail.com>
---
V4: Remove the _masked_ naming as it's actually confusing - David
Remove underscores and make _tmpptr void - David
Add comment about access size and range - David
Shorten local variables and remove a few unneeded brackets - Mathieu
V3: Make it a nested for() loop
Get rid of the code in macro parameters - Linus
Provide sized variants - Mathieu
V2: Remove the shady wrappers around the opening and use scopes with automatic cleanup
---
include/linux/uaccess.h | 192 ++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 192 insertions(+)
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -2,6 +2,7 @@
#ifndef __LINUX_UACCESS_H__
#define __LINUX_UACCESS_H__
+#include <linux/cleanup.h>
#include <linux/fault-inject-usercopy.h>
#include <linux/instrumented.h>
#include <linux/minmax.h>
@@ -35,9 +36,17 @@
#ifdef masked_user_access_begin
#define can_do_masked_user_access() 1
+# ifndef masked_user_write_access_begin
+# define masked_user_write_access_begin masked_user_access_begin
+# endif
+# ifndef masked_user_read_access_begin
+# define masked_user_read_access_begin masked_user_access_begin
+# endif
#else
#define can_do_masked_user_access() 0
#define masked_user_access_begin(src) NULL
+ #define masked_user_read_access_begin(src) NULL
+ #define masked_user_write_access_begin(src) NULL
#define mask_user_address(src) (src)
#endif
@@ -633,6 +642,189 @@ static inline void user_access_restore(u
#define user_read_access_end user_access_end
#endif
+/* Define RW variant so the below _mode macro expansion works */
+#define masked_user_rw_access_begin(u) masked_user_access_begin(u)
+#define user_rw_access_begin(u, s) user_access_begin(u, s)
+#define user_rw_access_end() user_access_end()
+
+/* Scoped user access */
+#define USER_ACCESS_GUARD(_mode) \
+static __always_inline void __user * \
+class_user_##_mode##_begin(void __user *ptr) \
+{ \
+ return ptr; \
+} \
+ \
+static __always_inline void \
+class_user_##_mode##_end(void __user *ptr) \
+{ \
+ user_##_mode##_access_end(); \
+} \
+ \
+DEFINE_CLASS(user_ ##_mode## _access, void __user *, \
+ class_user_##_mode##_end(_T), \
+ class_user_##_mode##_begin(ptr), void __user *ptr) \
+ \
+static __always_inline class_user_##_mode##_access_t \
+class_user_##_mode##_access_ptr(void __user *scope) \
+{ \
+ return scope; \
+}
+
+USER_ACCESS_GUARD(read)
+USER_ACCESS_GUARD(write)
+USER_ACCESS_GUARD(rw)
+#undef USER_ACCESS_GUARD
+
+/**
+ * __scoped_user_access_begin - Start a scoped user access
+ * @mode: The mode of the access class (read, write, rw)
+ * @uptr: The pointer to access user space memory
+ * @size: Size of the access
+ * @elbl: Error label to goto when the access region is rejected
+ *
+ * Internal helper for __scoped_user_access(). Don't use directly
+ */
+#define __scoped_user_access_begin(mode, uptr, size, elbl) \
+({ \
+ typeof(uptr) __retptr; \
+ \
+ if (can_do_masked_user_access()) { \
+ __retptr = masked_user_##mode##_access_begin(uptr); \
+ } else { \
+ __retptr = uptr; \
+ if (!user_##mode##_access_begin(uptr, size)) \
+ goto elbl; \
+ } \
+ __retptr; \
+})
+
+/**
+ * __scoped_user_access - Open a scope for user access
+ * @mode: The mode of the access class (read, write, rw)
+ * @uptr: The pointer to access user space memory
+ * @size: Size of the access
+ * @elbl: Error label to goto when the access region is rejected. It
+ * must be placed outside the scope
+ *
+ * If the user access function inside the scope requires a fault label, it
+ * can use @elbl or a different label outside the scope, which requires
+ * that user accessors implemented with ASM GOTO have been properly
+ * wrapped. See unsafe_get_user() for reference.
+ *
+ * scoped_user_rw_access(ptr, efault) {
+ * unsafe_get_user(rval, &ptr->rval, efault);
+ * unsafe_put_user(wval, &ptr->wval, efault);
+ * }
+ * return 0;
+ * efault:
+ * return -EFAULT;
+ *
+ * The scope is internally implemented as an auto-terminating nested for()
+ * loop, which can be left with 'return', 'break' and 'goto' at any
+ * point.
+ *
+ * When the scope is left, user_##@_mode##_access_end() is automatically
+ * invoked.
+ *
+ * When the architecture supports masked user access and the access region
+ * which is determined by @uptr and @size is not a valid user space
+ * address, i.e. not below TASK_SIZE, the scope sets the pointer to a
+ * faulting user space address and does not terminate early. This
+ * optimizes for the good case and lets the performance-uncritical bad
+ * case go through the fault.
+ *
+ * The eventual modification of the pointer is limited to the scope.
+ * Outside of the scope the original pointer value is unmodified, so that
+ * the original pointer value is available for diagnostic purposes in an
+ * out of scope fault path.
+ *
+ * Nesting scoped user access into a user access scope is invalid and fails
+ * the build. Nesting into other guards, e.g. pagefault, is safe.
+ *
+ * The masked variant does not check the size of the access and relies on a
+ * mapping hole (e.g. guard page) to catch an out of range pointer, the
+ * first access to user memory inside the scope has to be within
+ * @uptr ... @uptr + PAGE_SIZE - 1
+ *
+ * Don't use directly. Use scoped_user_$MODE_access() instead.
+ */
+#define __scoped_user_access(mode, uptr, size, elbl) \
+for (bool done = false; !done; done = true) \
+ for (void __user *_tmpptr = __scoped_user_access_begin(mode, uptr, size, elbl); \
+ !done; done = true) \
+ for (CLASS(user_##mode##_access, scope)(_tmpptr); !done; done = true) \
+ /* Force modified pointer usage within the scope */ \
+ for (const typeof(uptr) uptr = _tmpptr; !done; done = true)
+
+/**
+ * scoped_user_read_access_size - Start a scoped user read access with given size
+ * @usrc: Pointer to the user space address to read from
+ * @size: Size of the access starting from @usrc
+ * @elbl: Error label to goto when the access region is rejected
+ *
+ * For further information see __scoped_user_access() above.
+ */
+#define scoped_user_read_access_size(usrc, size, elbl) \
+ __scoped_user_access(read, usrc, size, elbl)
+
+/**
+ * scoped_user_read_access - Start a scoped user read access
+ * @usrc: Pointer to the user space address to read from
+ * @elbl: Error label to goto when the access region is rejected
+ *
+ * The size of the access starting from @usrc is determined via sizeof(*@usrc).
+ *
+ * For further information see __scoped_user_access() above.
+ */
+#define scoped_user_read_access(usrc, elbl) \
+ scoped_user_read_access_size(usrc, sizeof(*(usrc)), elbl)
+
+/**
+ * scoped_user_write_access_size - Start a scoped user write access with given size
+ * @udst: Pointer to the user space address to write to
+ * @size: Size of the access starting from @udst
+ * @elbl: Error label to goto when the access region is rejected
+ *
+ * For further information see __scoped_user_access() above.
+ */
+#define scoped_user_write_access_size(udst, size, elbl) \
+ __scoped_user_access(write, udst, size, elbl)
+
+/**
+ * scoped_user_write_access - Start a scoped user write access
+ * @udst: Pointer to the user space address to write to
+ * @elbl: Error label to goto when the access region is rejected
+ *
+ * The size of the access starting from @udst is determined via sizeof(*@udst).
+ *
+ * For further information see __scoped_user_access() above.
+ */
+#define scoped_user_write_access(udst, elbl) \
+ scoped_user_write_access_size(udst, sizeof(*(udst)), elbl)
+
+/**
+ * scoped_user_rw_access_size - Start a scoped user read/write access with given size
+ * @uptr: Pointer to the user space address to read from and write to
+ * @size: Size of the access starting from @uptr
+ * @elbl: Error label to goto when the access region is rejected
+ *
+ * For further information see __scoped_user_access() above.
+ */
+#define scoped_user_rw_access_size(uptr, size, elbl) \
+ __scoped_user_access(rw, uptr, size, elbl)
+
+/**
+ * scoped_user_rw_access - Start a scoped user read/write access
+ * @uptr: Pointer to the user space address to read from and write to
+ * @elbl: Error label to goto when the access region is rejected
+ *
+ * The size of the access starting from @uptr is determined via sizeof(*@uptr).
+ *
+ * For further information see __scoped_user_access() above.
+ */
+#define scoped_user_rw_access(uptr, elbl) \
+ scoped_user_rw_access_size(uptr, sizeof(*(uptr)), elbl)
+
#ifdef CONFIG_HARDENED_USERCOPY
void __noreturn usercopy_abort(const char *name, const char *detail,
bool to_user, unsigned long offset,
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [patch V4 07/12] uaccess: Provide scoped user access regions
2025-10-22 12:49 ` [patch V4 07/12] uaccess: Provide scoped user access regions Thomas Gleixner
@ 2025-10-22 14:20 ` David Laight
2025-10-22 14:23 ` Peter Zijlstra
0 siblings, 1 reply; 21+ messages in thread
From: David Laight @ 2025-10-22 14:20 UTC (permalink / raw)
To: Thomas Gleixner
Cc: LKML, Christophe Leroy, Mathieu Desnoyers, Andrew Cooper,
Linus Torvalds, kernel test robot, Russell King, linux-arm-kernel,
x86, Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
linuxppc-dev, Paul Walmsley, Palmer Dabbelt, linux-riscv,
Heiko Carstens, Christian Borntraeger, Sven Schnelle, linux-s390,
Julia Lawall, Nicolas Palix, Peter Zijlstra, Darren Hart,
Davidlohr Bueso, André Almeida, Alexander Viro,
Christian Brauner, Jan Kara, linux-fsdevel
On Wed, 22 Oct 2025 14:49:10 +0200 (CEST)
Thomas Gleixner <tglx@linutronix.de> wrote:
> User space access regions are tedious and require similar code patterns all
> over the place:
>
> if (!user_read_access_begin(from, sizeof(*from)))
> return -EFAULT;
> unsafe_get_user(val, from, Efault);
> user_read_access_end();
> return 0;
> Efault:
> user_read_access_end();
> return -EFAULT;
>
> This got worse with the recent addition of masked user access, which
> optimizes the speculation prevention:
>
> if (can_do_masked_user_access())
> from = masked_user_read_access_begin((from));
> else if (!user_read_access_begin(from, sizeof(*from)))
> return -EFAULT;
> unsafe_get_user(val, from, Efault);
> user_read_access_end();
> return 0;
> Efault:
> user_read_access_end();
> return -EFAULT;
>
> There have been issues with using the wrong user_*_access_end() variant in
> the error path and other typical Copy&Pasta problems, e.g. using the wrong
> fault label in the user accessor which ends up using the wrong access
> variant.
>
> These patterns beg for scopes with automatic cleanup. The resulting outcome
> is:
> scoped_user_read_access(from, Efault)
> unsafe_get_user(val, from, Efault);
> return 0;
> Efault:
> return -EFAULT;
>
> The scope guarantees the proper cleanup for the access mode is invoked both
> in the success and the failure (fault) path.
>
> The scoped_user_$MODE_access() macros are implemented as self terminating
> nested for() loops. Thanks to Andrew Cooper for pointing me at them. The
> scope can therefore be left with 'break', 'goto' and 'return'. Even
> 'continue' "works" due to the self termination mechanism.
I think that 'feature' should be marked as a 'bug', consider code like:
for (; len >= sizeof(*uaddr); uaddr++, len -= sizeof(*uaddr)) {
scoped_user_read_access(uaddr, Efault) {
int frag_len;
unsafe_get_user(frag_len, &uaddr->len, Efault);
if (!frag_len)
break;
...
}
...
}
The expectation would be that the 'break' applies to the visible 'for' loop.
But you need a 'goto' to escape from the visible loop.
Someone who groks the static checkers might want to try to detect
continue/break in those loops.
David
* Re: [patch V4 07/12] uaccess: Provide scoped user access regions
2025-10-22 14:20 ` David Laight
@ 2025-10-22 14:23 ` Peter Zijlstra
0 siblings, 0 replies; 21+ messages in thread
From: Peter Zijlstra @ 2025-10-22 14:23 UTC (permalink / raw)
To: David Laight
Cc: Thomas Gleixner, LKML, Christophe Leroy, Mathieu Desnoyers,
Andrew Cooper, Linus Torvalds, kernel test robot, Russell King,
linux-arm-kernel, x86, Madhavan Srinivasan, Michael Ellerman,
Nicholas Piggin, linuxppc-dev, Paul Walmsley, Palmer Dabbelt,
linux-riscv, Heiko Carstens, Christian Borntraeger, Sven Schnelle,
linux-s390, Julia Lawall, Nicolas Palix, Darren Hart,
Davidlohr Bueso, André Almeida, Alexander Viro,
Christian Brauner, Jan Kara, linux-fsdevel
On Wed, Oct 22, 2025 at 03:20:06PM +0100, David Laight wrote:
> I think that 'feature' should be marked as a 'bug', consider code like:
> for (; len >= sizeof(*uaddr); uaddr++, len -= sizeof(*uaddr)) {
> scoped_user_read_access(uaddr, Efault) {
> int frag_len;
> unsafe_get_user(frag_len, &uaddr->len, Efault);
> if (!frag_len)
> break;
> ...
> }
> ...
> }
>
All the scoped_* () things are for loops. break is a valid way to
terminate them.
* [patch V4 08/12] uaccess: Provide put/get_user_scoped()
2025-10-22 12:49 [patch V4 00/12] uaccess: Provide and use scopes for user access Thomas Gleixner
` (6 preceding siblings ...)
2025-10-22 12:49 ` [patch V4 07/12] uaccess: Provide scoped user access regions Thomas Gleixner
@ 2025-10-22 12:49 ` Thomas Gleixner
2025-10-22 12:49 ` [patch V4 09/12] [RFC] coccinelle: misc: Add scoped_$MODE_access() checker script Thomas Gleixner
` (4 subsequent siblings)
12 siblings, 0 replies; 21+ messages in thread
From: Thomas Gleixner @ 2025-10-22 12:49 UTC (permalink / raw)
To: LKML
Cc: kernel test robot, Russell King, linux-arm-kernel, Linus Torvalds,
x86, Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, linuxppc-dev, Paul Walmsley, Palmer Dabbelt,
linux-riscv, Heiko Carstens, Christian Borntraeger, Sven Schnelle,
linux-s390, Mathieu Desnoyers, Andrew Cooper, David Laight,
Julia Lawall, Nicolas Palix, Peter Zijlstra, Darren Hart,
Davidlohr Bueso, André Almeida, Alexander Viro,
Christian Brauner, Jan Kara, linux-fsdevel
Provide convenience wrappers around scoped user access similar to
put/get_user(), which reduce the usage sites to:
if (!get_user_scoped(val, ptr))
return -EFAULT;
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
V4: Rename to scoped
---
include/linux/uaccess.h | 44 ++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 44 insertions(+)
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -825,6 +825,50 @@ for (bool done = false; !done; done = tr
#define scoped_user_rw_access(uptr, elbl) \
scoped_user_rw_access_size(uptr, sizeof(*(uptr)), elbl)
+/**
+ * get_user_scoped - Read user data with scoped access
+ * @val: The variable to store the value read from user memory
+ * @usrc: Pointer to the user space memory to read from
+ *
+ * Return: true if successful, false when faulted
+ */
+#define get_user_scoped(val, usrc) \
+({ \
+ __label__ efault; \
+ typeof(usrc) _tmpsrc = usrc; \
+ bool _ret = true; \
+ \
+ scoped_user_read_access(_tmpsrc, efault) \
+ unsafe_get_user(val, _tmpsrc, efault); \
+ if (0) { \
+ efault: \
+ _ret = false; \
+ } \
+ _ret; \
+})
+
+/**
+ * put_user_scoped - Write to user memory with scoped access
+ * @val: The value to write
+ * @udst: Pointer to the user space memory to write to
+ *
+ * Return: true if successful, false when faulted
+ */
+#define put_user_scoped(val, udst) \
+({ \
+ __label__ efault; \
+ typeof(udst) _tmpdst = udst; \
+ bool _ret = true; \
+ \
+ scoped_user_write_access(_tmpdst, efault) \
+ unsafe_put_user(val, _tmpdst, efault); \
+ if (0) { \
+ efault: \
+ _ret = false; \
+ } \
+ _ret; \
+})
+
#ifdef CONFIG_HARDENED_USERCOPY
void __noreturn usercopy_abort(const char *name, const char *detail,
bool to_user, unsigned long offset,
* [patch V4 09/12] [RFC] coccinelle: misc: Add scoped_$MODE_access() checker script
2025-10-22 12:49 [patch V4 00/12] uaccess: Provide and use scopes for user access Thomas Gleixner
` (7 preceding siblings ...)
2025-10-22 12:49 ` [patch V4 08/12] uaccess: Provide put/get_user_scoped() Thomas Gleixner
@ 2025-10-22 12:49 ` Thomas Gleixner
2025-10-22 12:49 ` [patch V4 10/12] futex: Convert to scoped user access Thomas Gleixner
` (3 subsequent siblings)
12 siblings, 0 replies; 21+ messages in thread
From: Thomas Gleixner @ 2025-10-22 12:49 UTC (permalink / raw)
To: LKML
Cc: Julia Lawall, Nicolas Palix, kernel test robot, Russell King,
linux-arm-kernel, Linus Torvalds, x86, Madhavan Srinivasan,
Michael Ellerman, Nicholas Piggin, Christophe Leroy, linuxppc-dev,
Paul Walmsley, Palmer Dabbelt, linux-riscv, Heiko Carstens,
Christian Borntraeger, Sven Schnelle, linux-s390,
Mathieu Desnoyers, Andrew Cooper, David Laight, Peter Zijlstra,
Darren Hart, Davidlohr Bueso, André Almeida, Alexander Viro,
Christian Brauner, Jan Kara, linux-fsdevel
A common mistake in user access code is that the wrong access mode is
selected for starting the user access section. As most architectures map
Read and Write modes to ReadWrite, this often goes unnoticed for quite some
time.
Aside from that, the scoped user access mechanism requires that the same
pointer is used for the actual accessor macros that was handed in to start
the scope because the pointer can be modified by the scope begin mechanism
if the architecture supports masking.
Add a basic (and incomplete) coccinelle script to check for the common
issues. The error output is:
kernel/futex/futex.h:303:2-17: ERROR: Invalid pointer for unsafe_put_user(p) in scoped_user_write_access(to)
kernel/futex/futex.h:292:2-17: ERROR: Invalid access mode unsafe_get_user() in scoped_user_write_access()
Not-Yet-Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Julia Lawall <Julia.Lawall@inria.fr>
Cc: Nicolas Palix <nicolas.palix@imag.fr>
---
scripts/coccinelle/misc/scoped_uaccess.cocci | 108 +++++++++++++++++++++++++++
1 file changed, 108 insertions(+)
--- /dev/null
+++ b/scripts/coccinelle/misc/scoped_uaccess.cocci
@@ -0,0 +1,108 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/// Validate scoped_masked_user*access() scopes
+///
+// Confidence: Zero
+// Options: --no-includes --include-headers
+
+virtual context
+virtual report
+virtual org
+
+@initialize:python@
+@@
+
+scopemap = {
+ 'scoped_user_read_access_size' : 'scoped_user_read_access',
+ 'scoped_user_write_access_size' : 'scoped_user_write_access',
+ 'scoped_user_rw_access_size' : 'scoped_user_rw_access',
+}
+
+# Most common accessors. Incomplete list
+noaccessmap = {
+ 'scoped_user_read_access' : ('unsafe_put_user', 'unsafe_copy_to_user'),
+ 'scoped_user_write_access' : ('unsafe_get_user', 'unsafe_copy_from_user'),
+}
+
+# Most common accessors. Incomplete list
+ptrmap = {
+ 'unsafe_put_user' : 1,
+ 'unsafe_get_user' : 1,
+ 'unsafe_copy_to_user' : 0,
+ 'unsafe_copy_from_user' : 0,
+}
+
+print_mode = None
+
+def pr_err(pos, msg):
+ if print_mode == 'R':
+ coccilib.report.print_report(pos[0], msg)
+ elif print_mode == 'O':
+ cocci.print_main(msg, pos)
+
+@r0 depends on report || org@
+iterator name scoped_user_read_access,
+ scoped_user_read_access_size,
+ scoped_user_write_access,
+ scoped_user_write_access_size,
+ scoped_user_rw_access,
+ scoped_user_rw_access_size;
+iterator scope;
+statement S;
+@@
+
+(
+(
+scoped_user_read_access(...) S
+|
+scoped_user_read_access_size(...) S
+|
+scoped_user_write_access(...) S
+|
+scoped_user_write_access_size(...) S
+|
+scoped_user_rw_access(...) S
+|
+scoped_user_rw_access_size(...) S
+)
+&
+scope(...) S
+)
+
+@script:python depends on r0 && report@
+@@
+print_mode = 'R'
+
+@script:python depends on r0 && org@
+@@
+print_mode = 'O'
+
+@r1@
+expression sp, a0, a1;
+iterator r0.scope;
+identifier ac;
+position p;
+@@
+
+ scope(sp,...) {
+ <...
+ ac@p(a0, a1, ...);
+ ...>
+ }
+
+@script:python@
+pos << r1.p;
+scope << r0.scope;
+ac << r1.ac;
+sp << r1.sp;
+a0 << r1.a0;
+a1 << r1.a1;
+@@
+
+scope = scopemap.get(scope, scope)
+if ac in noaccessmap.get(scope, []):
+ pr_err(pos, 'ERROR: Invalid access mode %s() in %s()' %(ac, scope))
+
+if ac in ptrmap:
+ ap = (a0, a1)[ptrmap[ac]]
+ if sp != ap.lstrip('&').split('->')[0].strip():
+ pr_err(pos, 'ERROR: Invalid pointer for %s(%s) in %s(%s)' %(ac, ap, scope, sp))
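[Not part of the patch: the pointer comparison in the final python rule can be exercised stand-alone. pointer_matches() is an illustrative rename of the inline expression above.]

```python
def pointer_matches(scope_ptr: str, accessor_arg: str) -> bool:
    """Model of the scriptlet's check: strip an address-of operator and
    any member dereference from the accessor argument, then compare the
    result with the pointer the scope was opened with."""
    return scope_ptr == accessor_arg.lstrip('&').split('->')[0].strip()

# '&uaddr->len' refers back to the scope pointer 'uaddr'
print(pointer_matches('uaddr', '&uaddr->len'))   # True
# 'other' does not, which triggers the ERROR report
print(pointer_matches('uaddr', '&other->len'))   # False
```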
* [patch V4 10/12] futex: Convert to scoped user access
2025-10-22 12:49 [patch V4 00/12] uaccess: Provide and use scopes for user access Thomas Gleixner
` (8 preceding siblings ...)
2025-10-22 12:49 ` [patch V4 09/12] [RFC] coccinelle: misc: Add scoped_$MODE_access() checker script Thomas Gleixner
@ 2025-10-22 12:49 ` Thomas Gleixner
2025-10-22 15:16 ` Linus Torvalds
2025-10-22 12:49 ` [patch V4 11/12] x86/futex: " Thomas Gleixner
` (2 subsequent siblings)
12 siblings, 1 reply; 21+ messages in thread
From: Thomas Gleixner @ 2025-10-22 12:49 UTC (permalink / raw)
To: LKML
Cc: Peter Zijlstra, Darren Hart, Davidlohr Bueso, André Almeida,
kernel test robot, Russell King, linux-arm-kernel, Linus Torvalds,
x86, Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, linuxppc-dev, Paul Walmsley, Palmer Dabbelt,
linux-riscv, Heiko Carstens, Christian Borntraeger, Sven Schnelle,
linux-s390, Mathieu Desnoyers, Andrew Cooper, David Laight,
Julia Lawall, Nicolas Palix, Alexander Viro, Christian Brauner,
Jan Kara, linux-fsdevel
From: Thomas Gleixner <tglx@linutronix.de>
Replace the open coded implementation with the new get/put_user_scoped()
helpers.
No functional change intended
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Darren Hart <dvhart@infradead.org>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: "André Almeida" <andrealmeid@igalia.com>
---
V4: Rename once moar
V3: Adapt to scope changes
V2: Convert to scoped variant
---
kernel/futex/futex.h | 37 ++-----------------------------------
1 file changed, 2 insertions(+), 35 deletions(-)
---
--- a/kernel/futex/futex.h
+++ b/kernel/futex/futex.h
@@ -285,48 +285,15 @@ static inline int futex_cmpxchg_value_lo
* This does a plain atomic user space read, and the user pointer has
* already been verified earlier by get_futex_key() to be both aligned
* and actually in user space, just like futex_atomic_cmpxchg_inatomic().
- *
- * We still want to avoid any speculation, and while __get_user() is
- * the traditional model for this, it's actually slower than doing
- * this manually these days.
- *
- * We could just have a per-architecture special function for it,
- * the same way we do futex_atomic_cmpxchg_inatomic(), but rather
- * than force everybody to do that, write it out long-hand using
- * the low-level user-access infrastructure.
- *
- * This looks a bit overkill, but generally just results in a couple
- * of instructions.
*/
static __always_inline int futex_get_value(u32 *dest, u32 __user *from)
{
- u32 val;
-
- if (can_do_masked_user_access())
- from = masked_user_access_begin(from);
- else if (!user_read_access_begin(from, sizeof(*from)))
- return -EFAULT;
- unsafe_get_user(val, from, Efault);
- user_read_access_end();
- *dest = val;
- return 0;
-Efault:
- user_read_access_end();
- return -EFAULT;
+ return get_user_scoped(*dest, from) ? 0 : -EFAULT;
}
static __always_inline int futex_put_value(u32 val, u32 __user *to)
{
- if (can_do_masked_user_access())
- to = masked_user_access_begin(to);
- else if (!user_write_access_begin(to, sizeof(*to)))
- return -EFAULT;
- unsafe_put_user(val, to, Efault);
- user_write_access_end();
- return 0;
-Efault:
- user_write_access_end();
- return -EFAULT;
+ return put_user_scoped(val, to) ? 0 : -EFAULT;
}
static inline int futex_get_value_locked(u32 *dest, u32 __user *from)
* Re: [patch V4 10/12] futex: Convert to scoped user access
2025-10-22 12:49 ` [patch V4 10/12] futex: Convert to scoped user access Thomas Gleixner
@ 2025-10-22 15:16 ` Linus Torvalds
2025-10-23 18:44 ` Thomas Gleixner
0 siblings, 1 reply; 21+ messages in thread
From: Linus Torvalds @ 2025-10-22 15:16 UTC (permalink / raw)
To: Thomas Gleixner
Cc: LKML, Peter Zijlstra, Darren Hart, Davidlohr Bueso,
André Almeida, kernel test robot, Russell King,
linux-arm-kernel, x86, Madhavan Srinivasan, Michael Ellerman,
Nicholas Piggin, Christophe Leroy, linuxppc-dev, Paul Walmsley,
Palmer Dabbelt, linux-riscv, Heiko Carstens,
Christian Borntraeger, Sven Schnelle, linux-s390,
Mathieu Desnoyers, Andrew Cooper, David Laight, Julia Lawall,
Nicolas Palix, Alexander Viro, Christian Brauner, Jan Kara,
linux-fsdevel
On Wed, 22 Oct 2025 at 02:49, Thomas Gleixner <tglx@linutronix.de> wrote:
>
> From: Thomas Gleixner <tglx@linutronix.de>
>
> Replace the open coded implementation with the new get/put_user_scoped()
> helpers.
Well, "scoped" here makes no sense in the name, since it isn't scoped
in any way, it just uses the scoped helpers.
I also wonder if we should just get rid of the futex_get/put_value()
macros entirely. I did those masked user access things there long ago
because that code used "__get_user()" and "__put_user()", and I was
removing those helpers and making it match the pattern elsewhere, but
I do wonder if there is any advantage left to them at all.
On x86, just using "get_user()" and "put_user()" should work fine now.
Yes, they check the address, but these days *those* helpers use that
masked user address trick too, so there is no real cost to it.
The only cost would be the out-of-line function call, I think. Maybe
that is a sufficiently big cost here.
Linus
* Re: [patch V4 10/12] futex: Convert to scoped user access
2025-10-22 15:16 ` Linus Torvalds
@ 2025-10-23 18:44 ` Thomas Gleixner
2025-10-23 19:26 ` Linus Torvalds
0 siblings, 1 reply; 21+ messages in thread
From: Thomas Gleixner @ 2025-10-23 18:44 UTC (permalink / raw)
To: Linus Torvalds
Cc: LKML, Peter Zijlstra, Darren Hart, Davidlohr Bueso,
André Almeida, kernel test robot, Russell King,
linux-arm-kernel, x86, Madhavan Srinivasan, Michael Ellerman,
Nicholas Piggin, Christophe Leroy, linuxppc-dev, Paul Walmsley,
Palmer Dabbelt, linux-riscv, Heiko Carstens,
Christian Borntraeger, Sven Schnelle, linux-s390,
Mathieu Desnoyers, Andrew Cooper, David Laight, Julia Lawall,
Nicolas Palix, Alexander Viro, Christian Brauner, Jan Kara,
linux-fsdevel
On Wed, Oct 22 2025 at 05:16, Linus Torvalds wrote:
> On Wed, 22 Oct 2025 at 02:49, Thomas Gleixner <tglx@linutronix.de> wrote:
>>
>> From: Thomas Gleixner <tglx@linutronix.de>
>>
>> Replace the open coded implementation with the new get/put_user_scoped()
>> helpers.
>
> Well, "scoped" here makes no sense in the name, since it isn't scoped
> in any way, it just uses the scoped helpers.
I know. Did not come up with a sensible name so far.
> I also wonder if we should just get rid of the futex_get/put_value()
> macros entirely. I did those masked user access things them long ago
> because that code used "__get_user()" and "__put_user()", and I was
> removing those helpers and making it match the pattern elsewhere, but
> I do wonder if there is any advantage left to them all.
>
> On x86, just using "get_user()" and "put_user()" should work fine now.
> Yes, they check the address, but these days *those* helpers use that
> masked user address trick too, so there is no real cost to it.
>
> The only cost would be the out-of-line function call, I think. Maybe
> that is a sufficiently big cost here.
I'll have a look at the usage sites.
But as you said "out-of-line function call", it occurred to me that these
helpers might just be named get/put_user_inline(). Hmm?
Thanks,
tglx
* Re: [patch V4 10/12] futex: Convert to scoped user access
2025-10-23 18:44 ` Thomas Gleixner
@ 2025-10-23 19:26 ` Linus Torvalds
2025-10-23 21:14 ` David Laight
0 siblings, 1 reply; 21+ messages in thread
From: Linus Torvalds @ 2025-10-23 19:26 UTC (permalink / raw)
To: Thomas Gleixner
Cc: LKML, Peter Zijlstra, Darren Hart, Davidlohr Bueso,
André Almeida, kernel test robot, Russell King,
linux-arm-kernel, x86, Madhavan Srinivasan, Michael Ellerman,
Nicholas Piggin, Christophe Leroy, linuxppc-dev, Paul Walmsley,
Palmer Dabbelt, linux-riscv, Heiko Carstens,
Christian Borntraeger, Sven Schnelle, linux-s390,
Mathieu Desnoyers, Andrew Cooper, David Laight, Julia Lawall,
Nicolas Palix, Alexander Viro, Christian Brauner, Jan Kara,
linux-fsdevel
On Thu, 23 Oct 2025 at 08:44, Thomas Gleixner <tglx@linutronix.de> wrote:
>
> But as you said out-of-line function call it occured to me that these
> helpers might be just named get/put_user_inline(). Hmm?
Yeah, with a comment that clearly says "you need to have actual
performance numbers for why this needs to be inlined" for people to
use it.
Linus
* Re: [patch V4 10/12] futex: Convert to scoped user access
2025-10-23 19:26 ` Linus Torvalds
@ 2025-10-23 21:14 ` David Laight
0 siblings, 0 replies; 21+ messages in thread
From: David Laight @ 2025-10-23 21:14 UTC (permalink / raw)
To: Linus Torvalds
Cc: Thomas Gleixner, LKML, Peter Zijlstra, Darren Hart,
Davidlohr Bueso, André Almeida, kernel test robot,
Russell King, linux-arm-kernel, x86, Madhavan Srinivasan,
Michael Ellerman, Nicholas Piggin, Christophe Leroy, linuxppc-dev,
Paul Walmsley, Palmer Dabbelt, linux-riscv, Heiko Carstens,
Christian Borntraeger, Sven Schnelle, linux-s390,
Mathieu Desnoyers, Andrew Cooper, Julia Lawall, Nicolas Palix,
Alexander Viro, Christian Brauner, Jan Kara, linux-fsdevel
On Thu, 23 Oct 2025 09:26:12 -1000
Linus Torvalds <torvalds@linux-foundation.org> wrote:
> On Thu, 23 Oct 2025 at 08:44, Thomas Gleixner <tglx@linutronix.de> wrote:
> >
> > But as you said out-of-line function call it occured to me that these
> > helpers might be just named get/put_user_inline(). Hmm?
>
> Yeah, with a comment that clearly says "you need to have actual
> performance numbers for why this needs to be inlined" for people to
> use it.
Avoiding an extra clac/stac pair for two accesses might be enough.
But for a single access it might be hard to justify.
(Even if 'return' instructions are horribly painful.
Although anyone chasing performance is probably using a local system
and just disables all that 'stuff'.)
>
> Linus
* [patch V4 11/12] x86/futex: Convert to scoped user access
2025-10-22 12:49 [patch V4 00/12] uaccess: Provide and use scopes for user access Thomas Gleixner
` (9 preceding siblings ...)
2025-10-22 12:49 ` [patch V4 10/12] futex: Convert to scoped user access Thomas Gleixner
@ 2025-10-22 12:49 ` Thomas Gleixner
2025-10-22 12:49 ` [patch V4 12/12] select: " Thomas Gleixner
2025-10-22 13:28 ` [patch V4 00/12] uaccess: Provide and use scopes for " Peter Zijlstra
12 siblings, 0 replies; 21+ messages in thread
From: Thomas Gleixner @ 2025-10-22 12:49 UTC (permalink / raw)
To: LKML
Cc: x86, kernel test robot, Russell King, linux-arm-kernel,
Linus Torvalds, Madhavan Srinivasan, Michael Ellerman,
Nicholas Piggin, Christophe Leroy, linuxppc-dev, Paul Walmsley,
Palmer Dabbelt, linux-riscv, Heiko Carstens,
Christian Borntraeger, Sven Schnelle, linux-s390,
Mathieu Desnoyers, Andrew Cooper, David Laight, Julia Lawall,
Nicolas Palix, Peter Zijlstra, Darren Hart, Davidlohr Bueso,
André Almeida, Alexander Viro, Christian Brauner, Jan Kara,
linux-fsdevel
Replace the open coded implementation with the scoped user access
guards.
No functional change intended.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: x86@kernel.org
---
V4: Rename once more
Use asm_inline - Andrew
V3: Adapt to scope changes
V2: Convert to scoped masked access
Use RW access functions - Christophe
---
arch/x86/include/asm/futex.h | 75 ++++++++++++++++++-------------------------
1 file changed, 33 insertions(+), 42 deletions(-)
---
--- a/arch/x86/include/asm/futex.h
+++ b/arch/x86/include/asm/futex.h
@@ -46,38 +46,31 @@ do { \
} while(0)
static __always_inline int arch_futex_atomic_op_inuser(int op, int oparg, int *oval,
- u32 __user *uaddr)
+ u32 __user *uaddr)
{
- if (can_do_masked_user_access())
- uaddr = masked_user_access_begin(uaddr);
- else if (!user_access_begin(uaddr, sizeof(u32)))
- return -EFAULT;
-
- switch (op) {
- case FUTEX_OP_SET:
- unsafe_atomic_op1("xchgl %0, %2", oval, uaddr, oparg, Efault);
- break;
- case FUTEX_OP_ADD:
- unsafe_atomic_op1(LOCK_PREFIX "xaddl %0, %2", oval,
- uaddr, oparg, Efault);
- break;
- case FUTEX_OP_OR:
- unsafe_atomic_op2("orl %4, %3", oval, uaddr, oparg, Efault);
- break;
- case FUTEX_OP_ANDN:
- unsafe_atomic_op2("andl %4, %3", oval, uaddr, ~oparg, Efault);
- break;
- case FUTEX_OP_XOR:
- unsafe_atomic_op2("xorl %4, %3", oval, uaddr, oparg, Efault);
- break;
- default:
- user_access_end();
- return -ENOSYS;
+ scoped_user_rw_access(uaddr, Efault) {
+ switch (op) {
+ case FUTEX_OP_SET:
+ unsafe_atomic_op1("xchgl %0, %2", oval, uaddr, oparg, Efault);
+ break;
+ case FUTEX_OP_ADD:
+ unsafe_atomic_op1(LOCK_PREFIX "xaddl %0, %2", oval, uaddr, oparg, Efault);
+ break;
+ case FUTEX_OP_OR:
+ unsafe_atomic_op2("orl %4, %3", oval, uaddr, oparg, Efault);
+ break;
+ case FUTEX_OP_ANDN:
+ unsafe_atomic_op2("andl %4, %3", oval, uaddr, ~oparg, Efault);
+ break;
+ case FUTEX_OP_XOR:
+ unsafe_atomic_op2("xorl %4, %3", oval, uaddr, oparg, Efault);
+ break;
+ default:
+ return -ENOSYS;
+ }
}
- user_access_end();
return 0;
Efault:
- user_access_end();
return -EFAULT;
}
@@ -86,21 +79,19 @@ static inline int futex_atomic_cmpxchg_i
{
int ret = 0;
- if (can_do_masked_user_access())
- uaddr = masked_user_access_begin(uaddr);
- else if (!user_access_begin(uaddr, sizeof(u32)))
- return -EFAULT;
- asm volatile("\n"
- "1:\t" LOCK_PREFIX "cmpxchgl %3, %2\n"
- "2:\n"
- _ASM_EXTABLE_TYPE_REG(1b, 2b, EX_TYPE_EFAULT_REG, %0) \
- : "+r" (ret), "=a" (oldval), "+m" (*uaddr)
- : "r" (newval), "1" (oldval)
- : "memory"
- );
- user_access_end();
- *uval = oldval;
+ scoped_user_rw_access(uaddr, Efault) {
+ asm_inline volatile("\n"
+ "1:\t" LOCK_PREFIX "cmpxchgl %3, %2\n"
+ "2:\n"
+ _ASM_EXTABLE_TYPE_REG(1b, 2b, EX_TYPE_EFAULT_REG, %0)
+ : "+r" (ret), "=a" (oldval), "+m" (*uaddr)
+ : "r" (newval), "1" (oldval)
+ : "memory");
+ *uval = oldval;
+ }
return ret;
+Efault:
+ return -EFAULT;
}
#endif
* [patch V4 12/12] select: Convert to scoped user access
2025-10-22 12:49 [patch V4 00/12] uaccess: Provide and use scopes for user access Thomas Gleixner
` (10 preceding siblings ...)
2025-10-22 12:49 ` [patch V4 11/12] x86/futex: " Thomas Gleixner
@ 2025-10-22 12:49 ` Thomas Gleixner
2025-10-22 13:28 ` [patch V4 00/12] uaccess: Provide and use scopes for " Peter Zijlstra
12 siblings, 0 replies; 21+ messages in thread
From: Thomas Gleixner @ 2025-10-22 12:49 UTC (permalink / raw)
To: LKML
Cc: Alexander Viro, Christian Brauner, Jan Kara, linux-fsdevel,
kernel test robot, Russell King, linux-arm-kernel, Linus Torvalds,
x86, Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, linuxppc-dev, Paul Walmsley, Palmer Dabbelt,
linux-riscv, Heiko Carstens, Christian Borntraeger, Sven Schnelle,
linux-s390, Mathieu Desnoyers, Andrew Cooper, David Laight,
Julia Lawall, Nicolas Palix, Peter Zijlstra, Darren Hart,
Davidlohr Bueso, André Almeida
From: Thomas Gleixner <tglx@linutronix.de>
Replace the open coded implementation with the scoped user access guard.
No functional change intended.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Jan Kara <jack@suse.cz>
Cc: linux-fsdevel@vger.kernel.org
---
V4: Use read guard - Peterz
Rename once more
V3: Adopt to scope changes
---
fs/select.c | 12 ++++--------
1 file changed, 4 insertions(+), 8 deletions(-)
---
--- a/fs/select.c
+++ b/fs/select.c
@@ -776,17 +776,13 @@ static inline int get_sigset_argpack(str
{
// the path is hot enough for overhead of copy_from_user() to matter
if (from) {
- if (can_do_masked_user_access())
- from = masked_user_access_begin(from);
- else if (!user_read_access_begin(from, sizeof(*from)))
- return -EFAULT;
- unsafe_get_user(to->p, &from->p, Efault);
- unsafe_get_user(to->size, &from->size, Efault);
- user_read_access_end();
+ scoped_user_read_access(from, Efault) {
+ unsafe_get_user(to->p, &from->p, Efault);
+ unsafe_get_user(to->size, &from->size, Efault);
+ }
}
return 0;
Efault:
- user_read_access_end();
return -EFAULT;
}
* Re: [patch V4 00/12] uaccess: Provide and use scopes for user access
2025-10-22 12:49 [patch V4 00/12] uaccess: Provide and use scopes for user access Thomas Gleixner
2025-10-22 12:49 ` [patch V4 12/12] select: " Thomas Gleixner
@ 2025-10-22 13:28 ` Peter Zijlstra
From: Peter Zijlstra @ 2025-10-22 13:28 UTC (permalink / raw)
To: Thomas Gleixner
Cc: LKML, kernel test robot, Russell King, linux-arm-kernel,
Linus Torvalds, x86, Madhavan Srinivasan, Michael Ellerman,
Nicholas Piggin, Christophe Leroy, linuxppc-dev, Paul Walmsley,
Palmer Dabbelt, linux-riscv, Heiko Carstens,
Christian Borntraeger, Sven Schnelle, linux-s390,
Mathieu Desnoyers, Andrew Cooper, David Laight, Julia Lawall,
Nicolas Palix, Darren Hart, Davidlohr Bueso, André Almeida,
Alexander Viro, Christian Brauner, Jan Kara, linux-fsdevel
On Wed, Oct 22, 2025 at 02:49:02PM +0200, Thomas Gleixner wrote:
> Thomas Gleixner (12):
> ARM: uaccess: Implement missing __get_user_asm_dword()
> uaccess: Provide ASM GOTO safe wrappers for unsafe_*_user()
> x86/uaccess: Use unsafe wrappers for ASM GOTO
> powerpc/uaccess: Use unsafe wrappers for ASM GOTO
> riscv/uaccess: Use unsafe wrappers for ASM GOTO
> s390/uaccess: Use unsafe wrappers for ASM GOTO
> uaccess: Provide scoped user access regions
> uaccess: Provide put/get_user_scoped()
> futex: Convert to scoped user access
> x86/futex: Convert to scoped user access
> select: Convert to scoped user access
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>