From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: "Madhavan Srinivasan" <maddy@linux.ibm.com>,
	"Michael Ellerman" <mpe@ellerman.id.au>,
	"Nicholas Piggin" <npiggin@gmail.com>,
	"Christophe Leroy" <christophe.leroy@csgroup.eu>,
	linuxppc-dev@lists.ozlabs.org,
	"kernel test robot" <lkp@intel.com>,
	"Russell King" <linux@armlinux.org.uk>,
	linux-arm-kernel@lists.infradead.org,
	"Linus Torvalds" <torvalds@linux-foundation.org>,
	x86@kernel.org, "Paul Walmsley" <pjw@kernel.org>,
	"Palmer Dabbelt" <palmer@dabbelt.com>,
	linux-riscv@lists.infradead.org,
	"Heiko Carstens" <hca@linux.ibm.com>,
	"Christian Borntraeger" <borntraeger@linux.ibm.com>,
	"Sven Schnelle" <svens@linux.ibm.com>,
	linux-s390@vger.kernel.org,
	"Mathieu Desnoyers" <mathieu.desnoyers@efficios.com>,
	"Andrew Cooper" <andrew.cooper3@citrix.com>,
	"Julia Lawall" <Julia.Lawall@inria.fr>,
	"Nicolas Palix" <nicolas.palix@imag.fr>,
	"Peter Zijlstra" <peterz@infradead.org>,
	"Darren Hart" <dvhart@infradead.org>,
	"Davidlohr Bueso" <dave@stgolabs.net>,
	"André Almeida" <andrealmeid@igalia.com>,
	"Alexander Viro" <viro@zeniv.linux.org.uk>,
	"Christian Brauner" <brauner@kernel.org>,
	"Jan Kara" <jack@suse.cz>,
	linux-fsdevel@vger.kernel.org
Subject: [patch V3 04/12] powerpc/uaccess: Use unsafe wrappers for ASM GOTO
Date: Fri, 17 Oct 2025 12:09:02 +0200 (CEST)	[thread overview]
Message-ID: <20251017093030.064701062@linutronix.de> (raw)
In-Reply-To: 20251017085938.150569636@linutronix.de

ASM GOTO is miscompiled by GCC when it is used inside an auto cleanup scope:

bool foo(u32 __user *p, u32 val)
{
	scoped_guard(pagefault)
		unsafe_put_user(val, p, efault);
	return true;
efault:
	return false;
}

GCC ends up leaking the pagefault disable counter in the fault path because
the cleanup is skipped when the ASM GOTO jumps straight to the efault label.
Clang at least fails the build instead of miscompiling.

Rename unsafe_*_user() to arch_unsafe_*_user(), which lets the generic
uaccess header wrap it with a local label so that both compilers emit
correct code. Do the same for the kernel_nofault() variants.
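
For reference, a minimal sketch of the local label wrapping done by the
generic uaccess header (the actual macro introduced in patch 02/12 may
differ in naming and detail):

#define unsafe_get_user(x, p, e)				\
do {								\
	__label__ local_fault;					\
	/* The ASM GOTO now targets the local label ... */	\
	arch_unsafe_get_user(x, p, local_fault);		\
	break;							\
local_fault:							\
	/*							\
	 * ... and the jump out of the cleanup scope is an	\
	 * ordinary goto, which both GCC and clang compile	\
	 * correctly, so the scoped pagefault guard is		\
	 * released on the fault path as well.			\
	 */							\
	goto e;							\
} while (0)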

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Madhavan Srinivasan <maddy@linux.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: linuxppc-dev@lists.ozlabs.org
---
 arch/powerpc/include/asm/uaccess.h |    8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -451,7 +451,7 @@ user_write_access_begin(const void __use
 #define user_write_access_begin	user_write_access_begin
 #define user_write_access_end		prevent_current_write_to_user
 
-#define unsafe_get_user(x, p, e) do {					\
+#define arch_unsafe_get_user(x, p, e) do {			\
 	__long_type(*(p)) __gu_val;				\
 	__typeof__(*(p)) __user *__gu_addr = (p);		\
 								\
@@ -459,7 +459,7 @@ user_write_access_begin(const void __use
 	(x) = (__typeof__(*(p)))__gu_val;			\
 } while (0)
 
-#define unsafe_put_user(x, p, e) \
+#define arch_unsafe_put_user(x, p, e)				\
 	__put_user_size_goto((__typeof__(*(p)))(x), (p), sizeof(*(p)), e)
 
 #define unsafe_copy_from_user(d, s, l, e) \
@@ -504,11 +504,11 @@ do {									\
 		unsafe_put_user(*(u8*)(_src + _i), (u8 __user *)(_dst + _i), e); \
 } while (0)
 
-#define __get_kernel_nofault(dst, src, type, err_label)			\
+#define arch_get_kernel_nofault(dst, src, type, err_label)		\
 	__get_user_size_goto(*((type *)(dst)),				\
 		(__force type __user *)(src), sizeof(type), err_label)
 
-#define __put_kernel_nofault(dst, src, type, err_label)			\
+#define arch_put_kernel_nofault(dst, src, type, err_label)		\
 	__put_user_size_goto(*((type *)(src)),				\
 		(__force type __user *)(dst), sizeof(type), err_label)
 


Thread overview: 40+ messages
2025-10-17 10:08 [patch V3 00/12] uaccess: Provide and use scopes for user masked access Thomas Gleixner
2025-10-17 10:08 ` [patch V3 01/12] ARM: uaccess: Implement missing __get_user_asm_dword() Thomas Gleixner
2025-10-17 12:36   ` Mathieu Desnoyers
2025-10-17 10:08 ` [patch V3 02/12] uaccess: Provide ASM GOTO safe wrappers for unsafe_*_user() Thomas Gleixner
2025-10-17 12:43   ` Mathieu Desnoyers
2025-10-17 12:48     ` Mathieu Desnoyers
2025-10-17 10:09 ` [patch V3 03/12] x86/uaccess: Use unsafe wrappers for ASM GOTO Thomas Gleixner
2025-10-17 10:09 ` Thomas Gleixner [this message]
2025-10-17 10:09 ` [patch V3 05/12] riscv/uaccess: " Thomas Gleixner
2025-10-17 10:09 ` [patch V3 06/12] s390/uaccess: " Thomas Gleixner
2025-10-17 10:09 ` [patch V3 07/12] uaccess: Provide scoped masked user access regions Thomas Gleixner
2025-10-17 11:08   ` Andrew Cooper
2025-10-17 11:21     ` Thomas Gleixner
2025-10-17 11:29       ` Andrew Cooper
2025-10-17 11:25     ` Peter Zijlstra
2025-10-17 13:23   ` Mathieu Desnoyers
2025-10-20 18:28   ` David Laight
2025-10-21 14:29     ` Thomas Gleixner
2025-10-21 14:42       ` Thomas Gleixner
2025-10-21 20:52         ` David Laight
2025-10-21 14:44       ` Peter Zijlstra
2025-10-21 15:06       ` Linus Torvalds
2025-10-21 15:45         ` Thomas Gleixner
2025-10-21 15:51           ` Linus Torvalds
2025-10-21 18:55       ` David Laight
2025-10-17 10:09 ` [patch V3 08/12] uaccess: Provide put/get_user_masked() Thomas Gleixner
2025-10-17 13:41   ` Mathieu Desnoyers
2025-10-17 13:45     ` Mathieu Desnoyers
2025-10-20  6:50       ` Thomas Gleixner
2025-10-17 10:09 ` [patch V3 09/12] [RFC] coccinelle: misc: Add scoped_masked_$MODE_access() checker script Thomas Gleixner
2025-10-17 10:51   ` Julia Lawall
2025-10-17 10:09 ` [patch V3 10/12] futex: Convert to scoped masked user access Thomas Gleixner
2025-10-17 10:09 ` [patch V3 11/12] x86/futex: " Thomas Gleixner
2025-10-17 13:37   ` Andrew Cooper
2025-10-17 10:09 ` [patch V3 12/12] select: " Thomas Gleixner
2025-10-17 10:35   ` Peter Zijlstra
2025-10-17 11:12     ` Thomas Gleixner
2025-10-17 10:37 ` [patch V3 00/12] uaccess: Provide and use scopes for user masked access Peter Zijlstra
2025-10-17 10:50   ` Andrew Cooper
2025-10-17 12:25 ` Mathieu Desnoyers
