public inbox for linuxppc-dev@ozlabs.org
* [PATCH v2 1/2] powerpc: fix KUAP warning in VMX usercopy path
@ 2026-02-28 13:53 Sayali Patil
  2026-02-28 13:53 ` [PATCH v2 2/2] powerpc/selftests/copyloops: extend selftest to exercise __copy_tofrom_user_power7_vmx Sayali Patil
  2026-03-02 11:12 ` [PATCH v2 1/2] powerpc: fix KUAP warning in VMX usercopy path Christophe Leroy (CS GROUP)
  0 siblings, 2 replies; 8+ messages in thread
From: Sayali Patil @ 2026-02-28 13:53 UTC (permalink / raw)
  To: linuxppc-dev, maddy
  Cc: aboorvad, sshegde, chleroy, riteshh, hbathini, ming.lei, csander,
	czhong, venkat88

On powerpc with PREEMPT_FULL or PREEMPT_LAZY and function tracing enabled,
KUAP warnings can be triggered from the VMX usercopy path under memory
stress workloads.

KUAP requires that no subfunctions are called once userspace access has
been enabled. The existing VMX copy implementation violates this
requirement by invoking enter_vmx_usercopy() from the assembly path after
userspace access has already been enabled. If preemption occurs
in this window, the AMR state may not be preserved correctly,
leading to unexpected userspace access state and resulting in
KUAP warnings.

Fix this by restructuring the VMX usercopy flow so that VMX selection
and VMX state management are centralized in raw_copy_tofrom_user(),
which is invoked by the raw_copy_{to,from,in}_user() wrappers.

Introduce a usercopy_mode enum to describe the copy direction
(IN, FROM, TO) and use it to derive the required KUAP permissions.
Userspace access is now enabled and disabled through common helpers
based on the selected mode, ensuring that the correct read/write
permissions are applied consistently.

The new flow is:

  - raw_copy_{to,from,in}_user() calls raw_copy_tofrom_user()
  - raw_copy_tofrom_user() decides whether to use the VMX path
    based on size and CPU capability
  - enter_vmx_usercopy() is called before userspace access is enabled
  - Userspace access is enabled as per the usercopy mode
    and the VMX copy is performed
  - Userspace access is disabled as per the usercopy mode
  - exit_vmx_usercopy() is called
  - The base copy routine is used as a fallback if the VMX copy faults

With this change, the VMX assembly routines no longer perform VMX state
management or call helper functions; they only implement the
copy operations.
The previous feature-section based VMX selection inside
__copy_tofrom_user_power7() is removed, and a dedicated
__copy_tofrom_user_power7_vmx() entry point is introduced.

This ensures correct KUAP ordering, avoids subfunction calls
while KUAP is unlocked, and eliminates the warnings while preserving
the VMX fast path.

Fixes: de78a9c42a79 ("powerpc: Add a framework for Kernel Userspace Access Protection")
Reported-by: Shrikanth Hegde <sshegde@linux.ibm.com>
Closes: https://lore.kernel.org/all/20260109064917.777587-2-sshegde@linux.ibm.com/
Suggested-by: Christophe Leroy <chleroy@kernel.org>
Co-developed-by: Aboorva Devarajan <aboorvad@linux.ibm.com>
Signed-off-by: Aboorva Devarajan <aboorvad@linux.ibm.com>
Signed-off-by: Sayali Patil <sayalip@linux.ibm.com>
---

v1->v2
  - Updated as per the review comments.
  - Centralized VMX usercopy handling in __copy_tofrom_user_vmx() in
    arch/powerpc/lib/vmx-helper.c.
  - Introduced a usercopy_mode enum to describe the copy direction
    (IN, FROM, TO) and derive the required KUAP permissions, avoiding
    duplication across the different usercopy paths.

v1: https://lore.kernel.org/all/20260217124457.89219-1-sayalip@linux.ibm.com/

---
 arch/powerpc/include/asm/uaccess.h | 95 ++++++++++++++++++++++++------
 arch/powerpc/lib/copyuser_64.S     |  1 +
 arch/powerpc/lib/copyuser_power7.S | 45 +++++---------
 arch/powerpc/lib/vmx-helper.c      | 26 ++++++++
 4 files changed, 119 insertions(+), 48 deletions(-)

diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index ba1d878c3f40..63d6eb8b004e 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -15,6 +15,9 @@
 #define TASK_SIZE_MAX		TASK_SIZE_USER64
 #endif
 
+/* Threshold above which VMX copy path is used */
+#define VMX_COPY_THRESHOLD 3328
+
 #include <asm-generic/access_ok.h>
 
 /*
@@ -326,40 +329,96 @@ do {								\
 extern unsigned long __copy_tofrom_user(void __user *to,
 		const void __user *from, unsigned long size);
 
-#ifdef __powerpc64__
-static inline unsigned long
-raw_copy_in_user(void __user *to, const void __user *from, unsigned long n)
+enum usercopy_mode {
+	USERCOPY_IN,
+	USERCOPY_FROM,
+	USERCOPY_TO,
+};
+
+unsigned long __copy_tofrom_user_vmx(void __user *to, const void __user *from,
+				unsigned long size, enum usercopy_mode mode);
+
+unsigned long __copy_tofrom_user_base(void __user *to,
+		const void __user *from, unsigned long size);
+
+unsigned long __copy_tofrom_user_power7_vmx(void __user *to,
+		const void __user *from, unsigned long size);
+
+
+static inline bool will_use_vmx(unsigned long n)
+{
+	return IS_ENABLED(CONFIG_ALTIVEC) &&
+		cpu_has_feature(CPU_FTR_VMX_COPY) &&
+		n > VMX_COPY_THRESHOLD;
+}
+
+static inline void raw_copy_allow(void __user *to, enum usercopy_mode mode)
+{
+	switch (mode) {
+	case USERCOPY_IN:
+		allow_user_access(to, KUAP_READ_WRITE);
+		break;
+	case USERCOPY_FROM:
+		allow_user_access(NULL, KUAP_READ);
+		break;
+	case USERCOPY_TO:
+		allow_user_access(to, KUAP_WRITE);
+		break;
+	}
+}
+
+static inline void raw_copy_prevent(enum usercopy_mode mode)
+{
+	switch (mode) {
+	case USERCOPY_IN:
+		prevent_user_access(KUAP_READ_WRITE);
+		break;
+	case USERCOPY_FROM:
+		prevent_user_access(KUAP_READ);
+		break;
+	case USERCOPY_TO:
+		prevent_user_access(KUAP_WRITE);
+		break;
+	}
+}
+
+static inline unsigned long raw_copy_tofrom_user(void __user *to,
+		const void __user *from, unsigned long n,
+		enum usercopy_mode mode)
 {
 	unsigned long ret;
 
-	barrier_nospec();
-	allow_user_access(to, KUAP_READ_WRITE);
+	if (will_use_vmx(n))
+		return __copy_tofrom_user_vmx(to, from,	n, mode);
+
+	raw_copy_allow(to, mode);
 	ret = __copy_tofrom_user(to, from, n);
-	prevent_user_access(KUAP_READ_WRITE);
+	raw_copy_prevent(mode);
 	return ret;
+
+}
+
+#ifdef __powerpc64__
+static inline unsigned long
+raw_copy_in_user(void __user *to, const void __user *from, unsigned long n)
+{
+	barrier_nospec();
+	return raw_copy_tofrom_user(to, from, n, USERCOPY_IN);
 }
 #endif /* __powerpc64__ */
 
 static inline unsigned long raw_copy_from_user(void *to,
 		const void __user *from, unsigned long n)
 {
-	unsigned long ret;
-
-	allow_user_access(NULL, KUAP_READ);
-	ret = __copy_tofrom_user((__force void __user *)to, from, n);
-	prevent_user_access(KUAP_READ);
-	return ret;
+	return raw_copy_tofrom_user((__force void __user *)to, from,
+					n, USERCOPY_FROM);
 }
 
 static inline unsigned long
 raw_copy_to_user(void __user *to, const void *from, unsigned long n)
 {
-	unsigned long ret;
-
-	allow_user_access(to, KUAP_WRITE);
-	ret = __copy_tofrom_user(to, (__force const void __user *)from, n);
-	prevent_user_access(KUAP_WRITE);
-	return ret;
+	return raw_copy_tofrom_user(to, (__force const void __user *)from,
+					n, USERCOPY_TO);
 }
 
 unsigned long __arch_clear_user(void __user *addr, unsigned long size);
diff --git a/arch/powerpc/lib/copyuser_64.S b/arch/powerpc/lib/copyuser_64.S
index 9af969d2cc0c..25a99108caff 100644
--- a/arch/powerpc/lib/copyuser_64.S
+++ b/arch/powerpc/lib/copyuser_64.S
@@ -562,3 +562,4 @@ exc;	std	r10,32(3)
 	li	r5,4096
 	b	.Ldst_aligned
 EXPORT_SYMBOL(__copy_tofrom_user)
+EXPORT_SYMBOL(__copy_tofrom_user_base)
diff --git a/arch/powerpc/lib/copyuser_power7.S b/arch/powerpc/lib/copyuser_power7.S
index 8474c682a178..17dbcfbae25f 100644
--- a/arch/powerpc/lib/copyuser_power7.S
+++ b/arch/powerpc/lib/copyuser_power7.S
@@ -5,13 +5,9 @@
  *
  * Author: Anton Blanchard <anton@au.ibm.com>
  */
+#include <linux/export.h>
 #include <asm/ppc_asm.h>
 
-#ifndef SELFTEST_CASE
-/* 0 == don't use VMX, 1 == use VMX */
-#define SELFTEST_CASE	0
-#endif
-
 #ifdef __BIG_ENDIAN__
 #define LVS(VRT,RA,RB)		lvsl	VRT,RA,RB
 #define VPERM(VRT,VRA,VRB,VRC)	vperm	VRT,VRA,VRB,VRC
@@ -47,10 +43,14 @@
 	ld	r15,STK_REG(R15)(r1)
 	ld	r14,STK_REG(R14)(r1)
 .Ldo_err3:
-	bl	CFUNC(exit_vmx_usercopy)
+	ld      r6,STK_REG(R31)(r1)	/* original destination pointer */
+	ld      r5,STK_REG(R29)(r1)	/* original number of bytes */
+	subf    r7,r6,r3		/* #bytes copied */
+	subf    r3,r7,r5		/* #bytes not copied in r3 */
 	ld	r0,STACKFRAMESIZE+16(r1)
 	mtlr	r0
-	b	.Lexit
+	addi    r1,r1,STACKFRAMESIZE
+	blr
 #endif /* CONFIG_ALTIVEC */
 
 .Ldo_err2:
@@ -74,7 +74,6 @@
 
 _GLOBAL(__copy_tofrom_user_power7)
 	cmpldi	r5,16
-	cmpldi	cr1,r5,3328
 
 	std	r3,-STACKFRAMESIZE+STK_REG(R31)(r1)
 	std	r4,-STACKFRAMESIZE+STK_REG(R30)(r1)
@@ -82,12 +81,6 @@ _GLOBAL(__copy_tofrom_user_power7)
 
 	blt	.Lshort_copy
 
-#ifdef CONFIG_ALTIVEC
-test_feature = SELFTEST_CASE
-BEGIN_FTR_SECTION
-	bgt	cr1,.Lvmx_copy
-END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
-#endif
 
 .Lnonvmx_copy:
 	/* Get the source 8B aligned */
@@ -263,23 +256,14 @@ err1;	stb	r0,0(r3)
 15:	li	r3,0
 	blr
 
-.Lunwind_stack_nonvmx_copy:
-	addi	r1,r1,STACKFRAMESIZE
-	b	.Lnonvmx_copy
-
-.Lvmx_copy:
 #ifdef CONFIG_ALTIVEC
+_GLOBAL(__copy_tofrom_user_power7_vmx)
 	mflr	r0
 	std	r0,16(r1)
 	stdu	r1,-STACKFRAMESIZE(r1)
-	bl	CFUNC(enter_vmx_usercopy)
-	cmpwi	cr1,r3,0
-	ld	r0,STACKFRAMESIZE+16(r1)
-	ld	r3,STK_REG(R31)(r1)
-	ld	r4,STK_REG(R30)(r1)
-	ld	r5,STK_REG(R29)(r1)
-	mtlr	r0
 
+	std     r3,STK_REG(R31)(r1)
+	std     r5,STK_REG(R29)(r1)
 	/*
 	 * We prefetch both the source and destination using enhanced touch
 	 * instructions. We use a stream ID of 0 for the load side and
@@ -300,8 +284,6 @@ err1;	stb	r0,0(r3)
 
 	DCBT_SETUP_STREAMS(r6, r7, r9, r10, r8)
 
-	beq	cr1,.Lunwind_stack_nonvmx_copy
-
 	/*
 	 * If source and destination are not relatively aligned we use a
 	 * slower permute loop.
@@ -478,7 +460,8 @@ err3;	lbz	r0,0(r4)
 err3;	stb	r0,0(r3)
 
 15:	addi	r1,r1,STACKFRAMESIZE
-	b	CFUNC(exit_vmx_usercopy)	/* tail call optimise */
+	li	r3,0
+	blr
 
 .Lvmx_unaligned_copy:
 	/* Get the destination 16B aligned */
@@ -681,5 +664,7 @@ err3;	lbz	r0,0(r4)
 err3;	stb	r0,0(r3)
 
 15:	addi	r1,r1,STACKFRAMESIZE
-	b	CFUNC(exit_vmx_usercopy)	/* tail call optimise */
+	li	r3,0
+	blr
+EXPORT_SYMBOL(__copy_tofrom_user_power7_vmx)
 #endif /* CONFIG_ALTIVEC */
diff --git a/arch/powerpc/lib/vmx-helper.c b/arch/powerpc/lib/vmx-helper.c
index 54340912398f..35080885204b 100644
--- a/arch/powerpc/lib/vmx-helper.c
+++ b/arch/powerpc/lib/vmx-helper.c
@@ -10,6 +10,32 @@
 #include <linux/hardirq.h>
 #include <asm/switch_to.h>
 
+unsigned long __copy_tofrom_user_vmx(void __user *to, const void __user *from,
+			unsigned long size, enum usercopy_mode mode)
+{
+	unsigned long ret;
+
+	if (!enter_vmx_usercopy()) {
+		raw_copy_allow(to, mode);
+		ret = __copy_tofrom_user(to, from, size);
+		raw_copy_prevent(mode);
+		return ret;
+	}
+
+	raw_copy_allow(to, mode);
+	ret = __copy_tofrom_user_power7_vmx(to, from, size);
+	raw_copy_prevent(mode);
+	exit_vmx_usercopy();
+	if (unlikely(ret)) {
+		raw_copy_allow(to, mode);
+		ret = __copy_tofrom_user_base(to, from, size);
+		raw_copy_prevent(mode);
+	}
+
+	return ret;
+}
+EXPORT_SYMBOL(__copy_tofrom_user_vmx);
+
 int enter_vmx_usercopy(void)
 {
 	if (in_interrupt())
-- 
2.52.0




* [PATCH v2 2/2] powerpc/selftests/copyloops: extend selftest to exercise __copy_tofrom_user_power7_vmx
  2026-02-28 13:53 [PATCH v2 1/2] powerpc: fix KUAP warning in VMX usercopy path Sayali Patil
@ 2026-02-28 13:53 ` Sayali Patil
  2026-03-02 11:12 ` [PATCH v2 1/2] powerpc: fix KUAP warning in VMX usercopy path Christophe Leroy (CS GROUP)
  1 sibling, 0 replies; 8+ messages in thread
From: Sayali Patil @ 2026-02-28 13:53 UTC (permalink / raw)
  To: linuxppc-dev, maddy
  Cc: aboorvad, sshegde, chleroy, riteshh, hbathini, ming.lei, csander,
	czhong, venkat88

The new powerpc VMX fast path (__copy_tofrom_user_power7_vmx) is not
exercised by the existing copyloops selftests. Update the selftest to
exercise the VMX variant, ensuring the VMX copy path is validated.

Changes include:
  - A new copyuser_p7_vmx build target compiles copyuser_power7.S with
    COPY_LOOP=test___copy_tofrom_user_power7_vmx and -D VMX_TEST.
  - Inclusion of ../utils.c to provide get_auxv_entry() for hardware
    feature detection.
  - At runtime, the test skips execution if Altivec is not available.
  - Copy sizes above VMX_COPY_THRESHOLD are used to ensure the VMX
    path is taken.

This enables validation of the VMX fast path without affecting systems
that do not support Altivec.

Signed-off-by: Sayali Patil <sayalip@linux.ibm.com>
---

v1->v2
No changes in the patch.

v1: https://lore.kernel.org/all/20260217124457.89219-2-sayalip@linux.ibm.com/

---

 .../selftests/powerpc/copyloops/.gitignore        |  4 ++--
 .../testing/selftests/powerpc/copyloops/Makefile  | 11 ++++++++---
 tools/testing/selftests/powerpc/copyloops/stubs.S |  8 --------
 .../selftests/powerpc/copyloops/validate.c        | 15 ++++++++++++++-
 4 files changed, 24 insertions(+), 14 deletions(-)

diff --git a/tools/testing/selftests/powerpc/copyloops/.gitignore b/tools/testing/selftests/powerpc/copyloops/.gitignore
index 7283e8b07b75..80d4270a71ac 100644
--- a/tools/testing/selftests/powerpc/copyloops/.gitignore
+++ b/tools/testing/selftests/powerpc/copyloops/.gitignore
@@ -2,8 +2,8 @@
 copyuser_64_t0
 copyuser_64_t1
 copyuser_64_t2
-copyuser_p7_t0
-copyuser_p7_t1
+copyuser_p7
+copyuser_p7_vmx
 memcpy_64_t0
 memcpy_64_t1
 memcpy_64_t2
diff --git a/tools/testing/selftests/powerpc/copyloops/Makefile b/tools/testing/selftests/powerpc/copyloops/Makefile
index 42940f92d832..0c8efb0bddeb 100644
--- a/tools/testing/selftests/powerpc/copyloops/Makefile
+++ b/tools/testing/selftests/powerpc/copyloops/Makefile
@@ -1,6 +1,6 @@
 # SPDX-License-Identifier: GPL-2.0
 TEST_GEN_PROGS := copyuser_64_t0 copyuser_64_t1 copyuser_64_t2 \
-		copyuser_p7_t0 copyuser_p7_t1 \
+		copyuser_p7 copyuser_p7_vmx \
 		memcpy_64_t0 memcpy_64_t1 memcpy_64_t2 \
 		memcpy_p7_t0 memcpy_p7_t1 copy_mc_64 \
 		copyuser_64_exc_t0 copyuser_64_exc_t1 copyuser_64_exc_t2 \
@@ -28,10 +28,15 @@ $(OUTPUT)/copyuser_64_t%:	copyuser_64.S $(EXTRA_SOURCES)
 		-D SELFTEST_CASE=$(subst copyuser_64_t,,$(notdir $@)) \
 		-o $@ $^
 
-$(OUTPUT)/copyuser_p7_t%:	copyuser_power7.S $(EXTRA_SOURCES)
+$(OUTPUT)/copyuser_p7:	copyuser_power7.S $(EXTRA_SOURCES)
 	$(CC) $(CPPFLAGS) $(CFLAGS) \
 		-D COPY_LOOP=test___copy_tofrom_user_power7 \
-		-D SELFTEST_CASE=$(subst copyuser_p7_t,,$(notdir $@)) \
+		-o $@ $^
+
+$(OUTPUT)/copyuser_p7_vmx:       copyuser_power7.S $(EXTRA_SOURCES) ../utils.c
+	$(CC) $(CPPFLAGS) $(CFLAGS) \
+		-D COPY_LOOP=test___copy_tofrom_user_power7_vmx \
+		-D VMX_TEST \
 		-o $@ $^
 
 # Strictly speaking, we only need the memcpy_64 test cases for big-endian
diff --git a/tools/testing/selftests/powerpc/copyloops/stubs.S b/tools/testing/selftests/powerpc/copyloops/stubs.S
index ec8bcf2bf1c2..3a9cb8c9a3ee 100644
--- a/tools/testing/selftests/powerpc/copyloops/stubs.S
+++ b/tools/testing/selftests/powerpc/copyloops/stubs.S
@@ -1,13 +1,5 @@
 #include <asm/ppc_asm.h>
 
-FUNC_START(enter_vmx_usercopy)
-	li	r3,1
-	blr
-
-FUNC_START(exit_vmx_usercopy)
-	li	r3,0
-	blr
-
 FUNC_START(enter_vmx_ops)
 	li	r3,1
 	blr
diff --git a/tools/testing/selftests/powerpc/copyloops/validate.c b/tools/testing/selftests/powerpc/copyloops/validate.c
index 0f6873618552..fb822534fbe9 100644
--- a/tools/testing/selftests/powerpc/copyloops/validate.c
+++ b/tools/testing/selftests/powerpc/copyloops/validate.c
@@ -12,6 +12,10 @@
 #define BUFLEN (MAX_LEN+MAX_OFFSET+2*MIN_REDZONE)
 #define POISON 0xa5
 
+#ifdef VMX_TEST
+#define VMX_COPY_THRESHOLD 3328
+#endif
+
 unsigned long COPY_LOOP(void *to, const void *from, unsigned long size);
 
 static void do_one(char *src, char *dst, unsigned long src_off,
@@ -81,8 +85,12 @@ int test_copy_loop(void)
 	/* Fill with sequential bytes */
 	for (i = 0; i < BUFLEN; i++)
 		fill[i] = i & 0xff;
-
+#ifdef VMX_TEST
+	/* Force sizes above kernel VMX threshold (3328) */
+	for (len = VMX_COPY_THRESHOLD + 1; len < MAX_LEN; len++) {
+#else
 	for (len = 1; len < MAX_LEN; len++) {
+#endif
 		for (src_off = 0; src_off < MAX_OFFSET; src_off++) {
 			for (dst_off = 0; dst_off < MAX_OFFSET; dst_off++) {
 				do_one(src, dst, src_off, dst_off, len,
@@ -96,5 +104,10 @@ int test_copy_loop(void)
 
 int main(void)
 {
+#ifdef VMX_TEST
+	/* Skip if Altivec not present */
+	SKIP_IF_MSG(!have_hwcap(PPC_FEATURE_HAS_ALTIVEC), "ALTIVEC not supported");
+#endif
+
 	return test_harness(test_copy_loop, str(COPY_LOOP));
 }
-- 
2.52.0




* Re: [PATCH v2 1/2] powerpc: fix KUAP warning in VMX usercopy path
  2026-02-28 13:53 [PATCH v2 1/2] powerpc: fix KUAP warning in VMX usercopy path Sayali Patil
  2026-02-28 13:53 ` [PATCH v2 2/2] powerpc/selftests/copyloops: extend selftest to exercise __copy_tofrom_user_power7_vmx Sayali Patil
@ 2026-03-02 11:12 ` Christophe Leroy (CS GROUP)
  2026-03-03  9:19   ` Sayali Patil
  1 sibling, 1 reply; 8+ messages in thread
From: Christophe Leroy (CS GROUP) @ 2026-03-02 11:12 UTC (permalink / raw)
  To: Sayali Patil, linuxppc-dev, maddy
  Cc: aboorvad, sshegde, riteshh, hbathini, ming.lei, csander, czhong,
	venkat88

Hi Sayali,

On 28/02/2026 at 14:53, Sayali Patil wrote:
> On powerpc with PREEMPT_FULL or PREEMPT_LAZY and function tracing enabled,
> KUAP warnings can be triggered from the VMX usercopy path under memory
> stress workloads.
> 
> KUAP requires that no subfunctions are called once userspace access has
> been enabled. The existing VMX copy implementation violates this
> requirement by invoking enter_vmx_usercopy() from the assembly path after
> userspace access has already been enabled. If preemption occurs
> in this window, the AMR state may not be preserved correctly,
> leading to unexpected userspace access state and resulting in
> KUAP warnings.
> 
> Fix this by restructuring the VMX usercopy flow so that VMX selection
> and VMX state management are centralized in raw_copy_tofrom_user(),
> which is invoked by the raw_copy_{to,from,in}_user() wrappers.
> 
> Introduce a usercopy_mode enum to describe the copy direction
> (IN, FROM, TO) and use it to derive the required KUAP permissions.
> Userspace access is now enabled and disabled through common helpers
> based on the selected mode, ensuring that the correct read/write
> permissions are applied consistently.
> 
>   The new flow is:
> 
>    - raw_copy_{to,from,in}_user() calls raw_copy_tofrom_user()
>    - raw_copy_tofrom_user() decides whether to use the VMX path
>      based on size and CPU capability
>    - Call enter_vmx_usercopy() before enabling userspace access
>    - Enable userspace access as per the usercopy mode
>      and perform the VMX copy
>    - Disable userspace access as per the usercopy mode
>    - Call exit_vmx_usercopy()
>    - Fall back to the base copy routine if the VMX copy faults
> 
> With this change, the VMX assembly routines no longer perform VMX state
> management or call helper functions; they only implement the
> copy operations.
> The previous feature-section based VMX selection inside
> __copy_tofrom_user_power7() is removed, and a dedicated
> __copy_tofrom_user_power7_vmx() entry point is introduced.
> 
> This ensures correct KUAP ordering, avoids subfunction calls
> while KUAP is unlocked, and eliminates the warnings while preserving
> the VMX fast path.
> 
> Fixes: de78a9c42a79 ("powerpc: Add a framework for Kernel Userspace Access Protection")
> Reported-by: Shrikanth Hegde <sshegde@linux.ibm.com>
> Closes: https://lore.kernel.org/all/20260109064917.777587-2-sshegde@linux.ibm.com/
> Suggested-by: Christophe Leroy <chleroy@kernel.org>
> Co-developed-by: Aboorva Devarajan <aboorvad@linux.ibm.com>
> Signed-off-by: Aboorva Devarajan <aboorvad@linux.ibm.com>
> Signed-off-by: Sayali Patil <sayalip@linux.ibm.com>
> ---
> 
> v1->v2
>    - Updated as per the review comments.
>    - Centralized VMX usercopy handling in __copy_tofrom_user_vmx() in
>      arch/powerpc/lib/vmx-helper.c.
>    - Introduced a usercopy_mode enum to describe the copy direction
>      (IN, FROM, TO) and derive the required KUAP permissions, avoiding
>      duplication across the different usercopy paths.

I like the reduction of duplication you propose but I can't see the 
added value of that enum, what about:

diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index 63d6eb8b004e..14a3219db838 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -329,12 +329,6 @@ do {								\
  extern unsigned long __copy_tofrom_user(void __user *to,
  		const void __user *from, unsigned long size);

-enum usercopy_mode {
-	USERCOPY_IN,
-	USERCOPY_FROM,
-	USERCOPY_TO,
-};
-
-unsigned long __copy_tofrom_user_vmx(void __user *to, const void __user *from,
-				unsigned long size, enum usercopy_mode mode);
+unsigned long __copy_tofrom_user_vmx(void __user *to, const void __user *from,
+				unsigned long size, unsigned long dir);

@@ -352,48 +346,18 @@ static inline bool will_use_vmx(unsigned long n)
  		n > VMX_COPY_THRESHOLD;
  }

-static inline void raw_copy_allow(void __user *to, enum usercopy_mode mode)
-{
-	switch (mode) {
-	case USERCOPY_IN:
-		allow_user_access(to, KUAP_READ_WRITE);
-		break;
-	case USERCOPY_FROM:
-		allow_user_access(NULL, KUAP_READ);
-		break;
-	case USERCOPY_TO:
-		allow_user_access(to, KUAP_WRITE);
-		break;
-	}
-}
-
-static inline void raw_copy_prevent(enum usercopy_mode mode)
-{
-	switch (mode) {
-	case USERCOPY_IN:
-		prevent_user_access(KUAP_READ_WRITE);
-		break;
-	case USERCOPY_FROM:
-		prevent_user_access(KUAP_READ);
-		break;
-	case USERCOPY_TO:
-		prevent_user_access(KUAP_WRITE);
-		break;
-	}
-}
-
  static inline unsigned long raw_copy_tofrom_user(void __user *to,
  		const void __user *from, unsigned long n,
-		enum usercopy_mode mode)
+		unsigned long dir)
  {
  	unsigned long ret;

  	if (will_use_vmx(n))
-		return __copy_tofrom_user_vmx(to, from,	n, mode);
+		return __copy_tofrom_user_vmx(to, from, n, dir);

-	raw_copy_allow(to, mode);
+	allow_user_access(to, dir);
  	ret = __copy_tofrom_user(to, from, n);
-	raw_copy_prevent(mode);
+	prevent_user_access(dir);
  	return ret;

  }
@@ -403,22 +367,20 @@ static inline unsigned long
  raw_copy_in_user(void __user *to, const void __user *from, unsigned long n)
  {
  	barrier_nospec();
-	return raw_copy_tofrom_user(to, from, n, USERCOPY_IN);
+	return raw_copy_tofrom_user(to, from, n, KUAP_READ_WRITE);
  }
  #endif /* __powerpc64__ */

  static inline unsigned long raw_copy_from_user(void *to,
  		const void __user *from, unsigned long n)
  {
-	return raw_copy_tofrom_user((__force void __user *)to, from,
-					n, USERCOPY_FROM);
+	return raw_copy_tofrom_user((__force void __user *)to, from, n, KUAP_READ);
  }

  static inline unsigned long
  raw_copy_to_user(void __user *to, const void *from, unsigned long n)
  {
-	return raw_copy_tofrom_user(to, (__force const void __user *)from,
-					n, USERCOPY_TO);
+	return raw_copy_tofrom_user(to, (__force const void __user *)from, n, KUAP_WRITE);
  }

  unsigned long __arch_clear_user(void __user *addr, unsigned long size);
diff --git a/arch/powerpc/lib/vmx-helper.c b/arch/powerpc/lib/vmx-helper.c
index 35080885204b..4610f7153fd9 100644
--- a/arch/powerpc/lib/vmx-helper.c
+++ b/arch/powerpc/lib/vmx-helper.c
@@ -11,25 +11,25 @@
  #include <asm/switch_to.h>

  unsigned long __copy_tofrom_user_vmx(void __user *to, const void __user *from,
-			unsigned long size, enum usercopy_mode mode)
+			unsigned long size, unsigned long dir)
  {
  	unsigned long ret;

  	if (!enter_vmx_usercopy()) {
-		raw_copy_allow(to, mode);
+		allow_user_access(to, dir);
  		ret = __copy_tofrom_user(to, from, size);
-		raw_copy_prevent(mode);
+		prevent_user_access(dir);
  		return ret;
  	}

-	raw_copy_allow(to, mode);
+	allow_user_access(to, dir);
  	ret = __copy_tofrom_user_power7_vmx(to, from, size);
-	raw_copy_prevent(mode);
+	prevent_user_access(dir);
  	exit_vmx_usercopy();
  	if (unlikely(ret)) {
-		raw_copy_allow(to, mode);
+		allow_user_access(to, dir);
  		ret = __copy_tofrom_user_base(to, from, size);
-		raw_copy_prevent(mode);
+		prevent_user_access(dir);
  	}

  	return ret;



Christophe


> 
> v1: https://lore.kernel.org/all/20260217124457.89219-1-sayalip@linux.ibm.com/
> 
> ---
>   arch/powerpc/include/asm/uaccess.h | 95 ++++++++++++++++++++++++------
>   arch/powerpc/lib/copyuser_64.S     |  1 +
>   arch/powerpc/lib/copyuser_power7.S | 45 +++++---------
>   arch/powerpc/lib/vmx-helper.c      | 26 ++++++++
>   4 files changed, 119 insertions(+), 48 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
> index ba1d878c3f40..63d6eb8b004e 100644
> --- a/arch/powerpc/include/asm/uaccess.h
> +++ b/arch/powerpc/include/asm/uaccess.h
> @@ -15,6 +15,9 @@
>   #define TASK_SIZE_MAX		TASK_SIZE_USER64
>   #endif
>   
> +/* Threshold above which VMX copy path is used */
> +#define VMX_COPY_THRESHOLD 3328
> +
>   #include <asm-generic/access_ok.h>
>   
>   /*
> @@ -326,40 +329,96 @@ do {								\
>   extern unsigned long __copy_tofrom_user(void __user *to,
>   		const void __user *from, unsigned long size);
>   
> -#ifdef __powerpc64__
> -static inline unsigned long
> -raw_copy_in_user(void __user *to, const void __user *from, unsigned long n)
> +enum usercopy_mode {
> +	USERCOPY_IN,
> +	USERCOPY_FROM,
> +	USERCOPY_TO,
> +};
> +
> +unsigned long __copy_tofrom_user_vmx(void __user *to, const void __user *from,
> +				unsigned long size, enum usercopy_mode mode);
> +
> +unsigned long __copy_tofrom_user_base(void __user *to,
> +		const void __user *from, unsigned long size);
> +
> +unsigned long __copy_tofrom_user_power7_vmx(void __user *to,
> +		const void __user *from, unsigned long size);
> +
> +
> +static inline bool will_use_vmx(unsigned long n)
> +{
> +	return IS_ENABLED(CONFIG_ALTIVEC) &&
> +		cpu_has_feature(CPU_FTR_VMX_COPY) &&
> +		n > VMX_COPY_THRESHOLD;
> +}
> +
> +static inline void raw_copy_allow(void __user *to, enum usercopy_mode mode)
> +{
> +	switch (mode) {
> +	case USERCOPY_IN:
> +		allow_user_access(to, KUAP_READ_WRITE);
> +		break;
> +	case USERCOPY_FROM:
> +		allow_user_access(NULL, KUAP_READ);
> +		break;
> +	case USERCOPY_TO:
> +		allow_user_access(to, KUAP_WRITE);
> +		break;
> +	}
> +}
> +
> +static inline void raw_copy_prevent(enum usercopy_mode mode)
> +{
> +	switch (mode) {
> +	case USERCOPY_IN:
> +		prevent_user_access(KUAP_READ_WRITE);
> +		break;
> +	case USERCOPY_FROM:
> +		prevent_user_access(KUAP_READ);
> +		break;
> +	case USERCOPY_TO:
> +		prevent_user_access(KUAP_WRITE);
> +		break;
> +	}
> +}
> +
> +static inline unsigned long raw_copy_tofrom_user(void __user *to,
> +		const void __user *from, unsigned long n,
> +		enum usercopy_mode mode)
>   {
>   	unsigned long ret;
>   
> -	barrier_nospec();
> -	allow_user_access(to, KUAP_READ_WRITE);
> +	if (will_use_vmx(n))
> +		return __copy_tofrom_user_vmx(to, from,	n, mode);
> +
> +	raw_copy_allow(to, mode);
>   	ret = __copy_tofrom_user(to, from, n);
> -	prevent_user_access(KUAP_READ_WRITE);
> +	raw_copy_prevent(mode);
>   	return ret;
> +
> +}
> +
> +#ifdef __powerpc64__
> +static inline unsigned long
> +raw_copy_in_user(void __user *to, const void __user *from, unsigned long n)
> +{
> +	barrier_nospec();
> +	return raw_copy_tofrom_user(to, from, n, USERCOPY_IN);
>   }
>   #endif /* __powerpc64__ */
>   
>   static inline unsigned long raw_copy_from_user(void *to,
>   		const void __user *from, unsigned long n)
>   {
> -	unsigned long ret;
> -
> -	allow_user_access(NULL, KUAP_READ);
> -	ret = __copy_tofrom_user((__force void __user *)to, from, n);
> -	prevent_user_access(KUAP_READ);
> -	return ret;
> +	return raw_copy_tofrom_user((__force void __user *)to, from,
> +					n, USERCOPY_FROM);
>   }
>   
>   static inline unsigned long
>   raw_copy_to_user(void __user *to, const void *from, unsigned long n)
>   {
> -	unsigned long ret;
> -
> -	allow_user_access(to, KUAP_WRITE);
> -	ret = __copy_tofrom_user(to, (__force const void __user *)from, n);
> -	prevent_user_access(KUAP_WRITE);
> -	return ret;
> +	return raw_copy_tofrom_user(to, (__force const void __user *)from,
> +					n, USERCOPY_TO);
>   }
>   
>   unsigned long __arch_clear_user(void __user *addr, unsigned long size);
> diff --git a/arch/powerpc/lib/copyuser_64.S b/arch/powerpc/lib/copyuser_64.S
> index 9af969d2cc0c..25a99108caff 100644
> --- a/arch/powerpc/lib/copyuser_64.S
> +++ b/arch/powerpc/lib/copyuser_64.S
> @@ -562,3 +562,4 @@ exc;	std	r10,32(3)
>   	li	r5,4096
>   	b	.Ldst_aligned
>   EXPORT_SYMBOL(__copy_tofrom_user)
> +EXPORT_SYMBOL(__copy_tofrom_user_base)
> diff --git a/arch/powerpc/lib/copyuser_power7.S b/arch/powerpc/lib/copyuser_power7.S
> index 8474c682a178..17dbcfbae25f 100644
> --- a/arch/powerpc/lib/copyuser_power7.S
> +++ b/arch/powerpc/lib/copyuser_power7.S
> @@ -5,13 +5,9 @@
>    *
>    * Author: Anton Blanchard <anton@au.ibm.com>
>    */
> +#include <linux/export.h>
>   #include <asm/ppc_asm.h>
>   
> -#ifndef SELFTEST_CASE
> -/* 0 == don't use VMX, 1 == use VMX */
> -#define SELFTEST_CASE	0
> -#endif
> -
>   #ifdef __BIG_ENDIAN__
>   #define LVS(VRT,RA,RB)		lvsl	VRT,RA,RB
>   #define VPERM(VRT,VRA,VRB,VRC)	vperm	VRT,VRA,VRB,VRC
> @@ -47,10 +43,14 @@
>   	ld	r15,STK_REG(R15)(r1)
>   	ld	r14,STK_REG(R14)(r1)
>   .Ldo_err3:
> -	bl	CFUNC(exit_vmx_usercopy)
> +	ld      r6,STK_REG(R31)(r1)	/* original destination pointer */
> +	ld      r5,STK_REG(R29)(r1)	/* original number of bytes */
> +	subf    r7,r6,r3		/* #bytes copied */
> +	subf    r3,r7,r5		/* #bytes not copied in r3 */
>   	ld	r0,STACKFRAMESIZE+16(r1)
>   	mtlr	r0
> -	b	.Lexit
> +	addi    r1,r1,STACKFRAMESIZE
> +	blr
>   #endif /* CONFIG_ALTIVEC */
>   
>   .Ldo_err2:
> @@ -74,7 +74,6 @@
>   
>   _GLOBAL(__copy_tofrom_user_power7)
>   	cmpldi	r5,16
> -	cmpldi	cr1,r5,3328
>   
>   	std	r3,-STACKFRAMESIZE+STK_REG(R31)(r1)
>   	std	r4,-STACKFRAMESIZE+STK_REG(R30)(r1)
> @@ -82,12 +81,6 @@ _GLOBAL(__copy_tofrom_user_power7)
>   
>   	blt	.Lshort_copy
>   
> -#ifdef CONFIG_ALTIVEC
> -test_feature = SELFTEST_CASE
> -BEGIN_FTR_SECTION
> -	bgt	cr1,.Lvmx_copy
> -END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
> -#endif
>   
>   .Lnonvmx_copy:
>   	/* Get the source 8B aligned */
> @@ -263,23 +256,14 @@ err1;	stb	r0,0(r3)
>   15:	li	r3,0
>   	blr
>   
> -.Lunwind_stack_nonvmx_copy:
> -	addi	r1,r1,STACKFRAMESIZE
> -	b	.Lnonvmx_copy
> -
> -.Lvmx_copy:
>   #ifdef CONFIG_ALTIVEC
> +_GLOBAL(__copy_tofrom_user_power7_vmx)
>   	mflr	r0
>   	std	r0,16(r1)
>   	stdu	r1,-STACKFRAMESIZE(r1)
> -	bl	CFUNC(enter_vmx_usercopy)
> -	cmpwi	cr1,r3,0
> -	ld	r0,STACKFRAMESIZE+16(r1)
> -	ld	r3,STK_REG(R31)(r1)
> -	ld	r4,STK_REG(R30)(r1)
> -	ld	r5,STK_REG(R29)(r1)
> -	mtlr	r0
>   
> +	std     r3,STK_REG(R31)(r1)
> +	std     r5,STK_REG(R29)(r1)
>   	/*
>   	 * We prefetch both the source and destination using enhanced touch
>   	 * instructions. We use a stream ID of 0 for the load side and
> @@ -300,8 +284,6 @@ err1;	stb	r0,0(r3)
>   
>   	DCBT_SETUP_STREAMS(r6, r7, r9, r10, r8)
>   
> -	beq	cr1,.Lunwind_stack_nonvmx_copy
> -
>   	/*
>   	 * If source and destination are not relatively aligned we use a
>   	 * slower permute loop.
> @@ -478,7 +460,8 @@ err3;	lbz	r0,0(r4)
>   err3;	stb	r0,0(r3)
>   
>   15:	addi	r1,r1,STACKFRAMESIZE
> -	b	CFUNC(exit_vmx_usercopy)	/* tail call optimise */
> +	li r3,0
> +	blr
>   
>   .Lvmx_unaligned_copy:
>   	/* Get the destination 16B aligned */
> @@ -681,5 +664,7 @@ err3;	lbz	r0,0(r4)
>   err3;	stb	r0,0(r3)
>   
>   15:	addi	r1,r1,STACKFRAMESIZE
> -	b	CFUNC(exit_vmx_usercopy)	/* tail call optimise */
> +	li r3,0
> +	blr
> +EXPORT_SYMBOL(__copy_tofrom_user_power7_vmx)
>   #endif /* CONFIG_ALTIVEC */
> diff --git a/arch/powerpc/lib/vmx-helper.c b/arch/powerpc/lib/vmx-helper.c
> index 54340912398f..35080885204b 100644
> --- a/arch/powerpc/lib/vmx-helper.c
> +++ b/arch/powerpc/lib/vmx-helper.c
> @@ -10,6 +10,32 @@
>   #include <linux/hardirq.h>
>   #include <asm/switch_to.h>
>   
> +unsigned long __copy_tofrom_user_vmx(void __user *to, const void __user *from,
> +			unsigned long size, enum usercopy_mode mode)
> +{
> +	unsigned long ret;
> +
> +	if (!enter_vmx_usercopy()) {
> +		raw_copy_allow(to, mode);
> +		ret = __copy_tofrom_user(to, from, size);
> +		raw_copy_prevent(mode);
> +		return ret;
> +	}
> +
> +	raw_copy_allow(to, mode);
> +	ret = __copy_tofrom_user_power7_vmx(to, from, size);
> +	raw_copy_prevent(mode);
> +	exit_vmx_usercopy();
> +	if (unlikely(ret)) {
> +		raw_copy_allow(to, mode);
> +		ret = __copy_tofrom_user_base(to, from, size);
> +		raw_copy_prevent(mode);
> +	}
> +
> +	return ret;
> +}
> +EXPORT_SYMBOL(__copy_tofrom_user_vmx);
> +
>   int enter_vmx_usercopy(void)
>   {
>   	if (in_interrupt())



^ permalink raw reply related	[flat|nested] 8+ messages in thread

* Re: [PATCH v2 1/2] powerpc: fix KUAP warning in VMX usercopy path
  2026-03-02 11:12 ` [PATCH v2 1/2] powerpc: fix KUAP warning in VMX usercopy path Christophe Leroy (CS GROUP)
@ 2026-03-03  9:19   ` Sayali Patil
  2026-03-03 14:57     ` Christophe Leroy (CS GROUP)
  0 siblings, 1 reply; 8+ messages in thread
From: Sayali Patil @ 2026-03-03  9:19 UTC (permalink / raw)
  To: Christophe Leroy (CS GROUP), linuxppc-dev, maddy
  Cc: aboorvad, sshegde, riteshh, hbathini, ming.lei, csander, czhong,
	venkat88



On 02/03/26 16:42, Christophe Leroy (CS GROUP) wrote:
> Hi Sayali,
>
> Le 28/02/2026 à 14:53, Sayali Patil a écrit :
>> On powerpc with PREEMPT_FULL or PREEMPT_LAZY and function tracing 
>> enabled,
>> KUAP warnings can be triggered from the VMX usercopy path under memory
>> stress workloads.
>>
>> KUAP requires that no subfunctions are called once userspace access has
>> been enabled. The existing VMX copy implementation violates this
>> requirement by invoking enter_vmx_usercopy() from the assembly path 
>> after
>> userspace access has already been enabled. If preemption occurs
>> in this window, the AMR state may not be preserved correctly,
>> leading to unexpected userspace access state and resulting in
>> KUAP warnings.
>>
>> Fix this by restructuring the VMX usercopy flow so that VMX selection
>> and VMX state management are centralized in raw_copy_tofrom_user(),
>> which is invoked by the raw_copy_{to,from,in}_user() wrappers.
>>
>> Introduce a usercopy_mode enum to describe the copy direction
>> (IN, FROM, TO) and use it to derive the required KUAP permissions.
>> Userspace access is now enabled and disabled through common helpers
>> based on the selected mode, ensuring that the correct read/write
>> permissions are applied consistently.
>>
>>   The new flow is:
>>
>>    - raw_copy_{to,from,in}_user() calls raw_copy_tofrom_user()
>>    - raw_copy_tofrom_user() decides whether to use the VMX path
>>      based on size and CPU capability
>>    - Call enter_vmx_usercopy() before enabling userspace access
>>    - Enable userspace access as per the usercopy mode
>>      and perform the VMX copy
>>    - Disable userspace access as per the usercopy mode
>>    - Call exit_vmx_usercopy()
>>    - Fall back to the base copy routine if the VMX copy faults
>>
>> With this change, the VMX assembly routines no longer perform VMX state
>> management or call helper functions; they only implement the
>> copy operations.
>> The previous feature-section based VMX selection inside
>> __copy_tofrom_user_power7() is removed, and a dedicated
>> __copy_tofrom_user_power7_vmx() entry point is introduced.
>>
>> This ensures correct KUAP ordering, avoids subfunction calls
>> while KUAP is unlocked, and eliminates the warnings while preserving
>> the VMX fast path.
>>
>> Fixes: de78a9c42a79 ("powerpc: Add a framework for Kernel Userspace 
>> Access Protection")
>> Reported-by: Shrikanth Hegde <sshegde@linux.ibm.com>
>> Closes: 
>> https://lore.kernel.org/all/20260109064917.777587-2-sshegde@linux.ibm.com/
>> Suggested-by: Christophe Leroy <chleroy@kernel.org>
>> Co-developed-by: Aboorva Devarajan <aboorvad@linux.ibm.com>
>> Signed-off-by: Aboorva Devarajan <aboorvad@linux.ibm.com>
>> Signed-off-by: Sayali Patil <sayalip@linux.ibm.com>
>> ---
>>
>> v1->v2
>>    - Updated as per the review comments.
>>    - Centralized VMX usercopy handling in __copy_tofrom_user_vmx() in
>>      arch/powerpc/lib/vmx-helper.c.
>>    - Introduced a usercopy_mode enum to describe the copy direction
>>      (IN, FROM, TO) and derive the required KUAP permissions, avoiding
>>      duplication across the different usercopy paths.
>
> I like the reduction of duplication you propose, but I can't see the
> added value of that enum. What about:
>
> diff --git a/arch/powerpc/include/asm/uaccess.h 
> b/arch/powerpc/include/asm/uaccess.h
> index 63d6eb8b004e..14a3219db838 100644
> --- a/arch/powerpc/include/asm/uaccess.h
> +++ b/arch/powerpc/include/asm/uaccess.h
> @@ -329,12 +329,6 @@ do {                                \
>  extern unsigned long __copy_tofrom_user(void __user *to,
>          const void __user *from, unsigned long size);
>
> -enum usercopy_mode {
> -    USERCOPY_IN,
> -    USERCOPY_FROM,
> -    USERCOPY_TO,
> -};
> -
>  unsigned long __copy_tofrom_user_vmx(void __user *to, const void 
> __user *from,
>                  unsigned long size, enum usercopy_mode mode);
>
> @@ -352,48 +346,18 @@ static inline bool will_use_vmx(unsigned long n)
>          n > VMX_COPY_THRESHOLD;
>  }
>
> -static inline void raw_copy_allow(void __user *to, enum usercopy_mode 
> mode)
> -{
> -    switch (mode) {
> -    case USERCOPY_IN:
> -        allow_user_access(to, KUAP_READ_WRITE);
> -        break;
> -    case USERCOPY_FROM:
> -        allow_user_access(NULL, KUAP_READ);
> -        break;
> -    case USERCOPY_TO:
> -        allow_user_access(to, KUAP_WRITE);
> -        break;
> -    }
> -}
> -
> -static inline void raw_copy_prevent(enum usercopy_mode mode)
> -{
> -    switch (mode) {
> -    case USERCOPY_IN:
> -        prevent_user_access(KUAP_READ_WRITE);
> -        break;
> -    case USERCOPY_FROM:
> -        prevent_user_access(KUAP_READ);
> -        break;
> -    case USERCOPY_TO:
> -        prevent_user_access(KUAP_WRITE);
> -        break;
> -    }
> -}
> -
>  static inline unsigned long raw_copy_tofrom_user(void __user *to,
>          const void __user *from, unsigned long n,
> -        enum usercopy_mode mode)
> +        unsigned long dir)
>  {
>      unsigned long ret;
>
>      if (will_use_vmx(n))
>          return __copy_tofrom_user_vmx(to, from,    n, mode);
>
> -    raw_copy_allow(to, mode);
> +    allow_user_access(to, dir);
>      ret = __copy_tofrom_user(to, from, n);
> -    raw_copy_prevent(mode);
> +    prevent_user_access(dir);
>      return ret;
>
>  }
> @@ -403,22 +367,20 @@ static inline unsigned long
>  raw_copy_in_user(void __user *to, const void __user *from, unsigned 
> long n)
>  {
>      barrier_nospec();
> -    return raw_copy_tofrom_user(to, from, n, USERCOPY_IN);
> +    return raw_copy_tofrom_user(to, from, n, KUAP_READ_WRITE);
>  }
>  #endif /* __powerpc64__ */
>
>  static inline unsigned long raw_copy_from_user(void *to,
>          const void __user *from, unsigned long n)
>  {
> -    return raw_copy_tofrom_user((__force void __user *)to, from,
> -                    n, USERCOPY_FROM);
> +    return raw_copy_tofrom_user((__force void __user *)to, from, n, 
> KUAP_READ);
>  }
>
>  static inline unsigned long
>  raw_copy_to_user(void __user *to, const void *from, unsigned long n)
>  {
> -    return raw_copy_tofrom_user(to, (__force const void __user *)from,
> -                    n, USERCOPY_TO);
> +    return raw_copy_tofrom_user(to, (__force const void __user 
> *)from, n, KUAP_WRITE);
>  }
>
>  unsigned long __arch_clear_user(void __user *addr, unsigned long size);
> diff --git a/arch/powerpc/lib/vmx-helper.c 
> b/arch/powerpc/lib/vmx-helper.c
> index 35080885204b..4610f7153fd9 100644
> --- a/arch/powerpc/lib/vmx-helper.c
> +++ b/arch/powerpc/lib/vmx-helper.c
> @@ -11,25 +11,25 @@
>  #include <asm/switch_to.h>
>
>  unsigned long __copy_tofrom_user_vmx(void __user *to, const void 
> __user *from,
> -            unsigned long size, enum usercopy_mode mode)
> +            unsigned long size, unsigned long dir)
>  {
>      unsigned long ret;
>
>      if (!enter_vmx_usercopy()) {
> -        raw_copy_allow(to, mode);
> +        allow_user_access(to, dir);
>          ret = __copy_tofrom_user(to, from, size);
> -        raw_copy_prevent(mode);
> +        prevent_user_access(dir);
>          return ret;
>      }
>
> -    raw_copy_allow(to, mode);
> +    allow_user_access(to, dir);
>      ret = __copy_tofrom_user_power7_vmx(to, from, size);
> -    raw_copy_prevent(mode);
> +    prevent_user_access(dir);
>      exit_vmx_usercopy();
>      if (unlikely(ret)) {
> -        raw_copy_allow(to, mode);
> +        allow_user_access(to, dir);
>          ret = __copy_tofrom_user_base(to, from, size);
> -        raw_copy_prevent(mode);
> +        prevent_user_access(dir);
>      }
>
>      return ret;
>
>
>
> Christophe
>
>
Hi Christophe,
Thanks for the review.
With the suggested change, we are hitting a compilation error.

The issue is related to how KUAP enforces the access direction.
allow_user_access() contains:

BUILD_BUG_ON(!__builtin_constant_p(dir));

which requires that the access direction is a compile-time constant.
If we pass a runtime value (for example, an unsigned long function
parameter), the __builtin_constant_p() check fails and triggers the
following build error.

Error:
In function 'allow_user_access', inlined from '__copy_tofrom_user_vmx' 
at arch/powerpc/lib/vmx-helper.c:19:3:
BUILD_BUG_ON failed: !__builtin_constant_p(dir) 706


The previous implementation worked because allow_user_access() was
invoked with the compile-time constants KUAP_READ, KUAP_WRITE and
KUAP_READ_WRITE, which satisfied the __builtin_constant_p() requirement.
So the function must be called with a compile-time constant to satisfy
the KUAP checks.

Please let me know if you would prefer a different approach.

Regards,
Sayali


>>
>> v1: 
>> https://lore.kernel.org/all/20260217124457.89219-1-sayalip@linux.ibm.com/
>>
>> ---
>>   arch/powerpc/include/asm/uaccess.h | 95 ++++++++++++++++++++++++------
>>   arch/powerpc/lib/copyuser_64.S     |  1 +
>>   arch/powerpc/lib/copyuser_power7.S | 45 +++++---------
>>   arch/powerpc/lib/vmx-helper.c      | 26 ++++++++
>>   4 files changed, 119 insertions(+), 48 deletions(-)
>>
>> diff --git a/arch/powerpc/include/asm/uaccess.h 
>> b/arch/powerpc/include/asm/uaccess.h
>> index ba1d878c3f40..63d6eb8b004e 100644
>> --- a/arch/powerpc/include/asm/uaccess.h
>> +++ b/arch/powerpc/include/asm/uaccess.h
>> @@ -15,6 +15,9 @@
>>   #define TASK_SIZE_MAX        TASK_SIZE_USER64
>>   #endif
>>   +/* Threshold above which VMX copy path is used */
>> +#define VMX_COPY_THRESHOLD 3328
>> +
>>   #include <asm-generic/access_ok.h>
>>     /*
>> @@ -326,40 +329,96 @@ do {                                \
>>   extern unsigned long __copy_tofrom_user(void __user *to,
>>           const void __user *from, unsigned long size);
>>   -#ifdef __powerpc64__
>> -static inline unsigned long
>> -raw_copy_in_user(void __user *to, const void __user *from, unsigned 
>> long n)
>> +enum usercopy_mode {
>> +    USERCOPY_IN,
>> +    USERCOPY_FROM,
>> +    USERCOPY_TO,
>> +};
>> +
>> +unsigned long __copy_tofrom_user_vmx(void __user *to, const void 
>> __user *from,
>> +                unsigned long size, enum usercopy_mode mode);
>> +
>> +unsigned long __copy_tofrom_user_base(void __user *to,
>> +        const void __user *from, unsigned long size);
>> +
>> +unsigned long __copy_tofrom_user_power7_vmx(void __user *to,
>> +        const void __user *from, unsigned long size);
>> +
>> +
>> +static inline bool will_use_vmx(unsigned long n)
>> +{
>> +    return IS_ENABLED(CONFIG_ALTIVEC) &&
>> +        cpu_has_feature(CPU_FTR_VMX_COPY) &&
>> +        n > VMX_COPY_THRESHOLD;
>> +}
>> +
>> +static inline void raw_copy_allow(void __user *to, enum 
>> usercopy_mode mode)
>> +{
>> +    switch (mode) {
>> +    case USERCOPY_IN:
>> +        allow_user_access(to, KUAP_READ_WRITE);
>> +        break;
>> +    case USERCOPY_FROM:
>> +        allow_user_access(NULL, KUAP_READ);
>> +        break;
>> +    case USERCOPY_TO:
>> +        allow_user_access(to, KUAP_WRITE);
>> +        break;
>> +    }
>> +}
>> +
>> +static inline void raw_copy_prevent(enum usercopy_mode mode)
>> +{
>> +    switch (mode) {
>> +    case USERCOPY_IN:
>> +        prevent_user_access(KUAP_READ_WRITE);
>> +        break;
>> +    case USERCOPY_FROM:
>> +        prevent_user_access(KUAP_READ);
>> +        break;
>> +    case USERCOPY_TO:
>> +        prevent_user_access(KUAP_WRITE);
>> +        break;
>> +    }
>> +}
>> +
>> +static inline unsigned long raw_copy_tofrom_user(void __user *to,
>> +        const void __user *from, unsigned long n,
>> +        enum usercopy_mode mode)
>>   {
>>       unsigned long ret;
>>   -    barrier_nospec();
>> -    allow_user_access(to, KUAP_READ_WRITE);
>> +    if (will_use_vmx(n))
>> +        return __copy_tofrom_user_vmx(to, from,    n, mode);
>> +
>> +    raw_copy_allow(to, mode);
>>       ret = __copy_tofrom_user(to, from, n);
>> -    prevent_user_access(KUAP_READ_WRITE);
>> +    raw_copy_prevent(mode);
>>       return ret;
>> +
>> +}
>> +
>> +#ifdef __powerpc64__
>> +static inline unsigned long
>> +raw_copy_in_user(void __user *to, const void __user *from, unsigned 
>> long n)
>> +{
>> +    barrier_nospec();
>> +    return raw_copy_tofrom_user(to, from, n, USERCOPY_IN);
>>   }
>>   #endif /* __powerpc64__ */
>>     static inline unsigned long raw_copy_from_user(void *to,
>>           const void __user *from, unsigned long n)
>>   {
>> -    unsigned long ret;
>> -
>> -    allow_user_access(NULL, KUAP_READ);
>> -    ret = __copy_tofrom_user((__force void __user *)to, from, n);
>> -    prevent_user_access(KUAP_READ);
>> -    return ret;
>> +    return raw_copy_tofrom_user((__force void __user *)to, from,
>> +                    n, USERCOPY_FROM);
>>   }
>>     static inline unsigned long
>>   raw_copy_to_user(void __user *to, const void *from, unsigned long n)
>>   {
>> -    unsigned long ret;
>> -
>> -    allow_user_access(to, KUAP_WRITE);
>> -    ret = __copy_tofrom_user(to, (__force const void __user *)from, n);
>> -    prevent_user_access(KUAP_WRITE);
>> -    return ret;
>> +    return raw_copy_tofrom_user(to, (__force const void __user *)from,
>> +                    n, USERCOPY_TO);
>>   }
>>     unsigned long __arch_clear_user(void __user *addr, unsigned long 
>> size);
>> diff --git a/arch/powerpc/lib/copyuser_64.S 
>> b/arch/powerpc/lib/copyuser_64.S
>> index 9af969d2cc0c..25a99108caff 100644
>> --- a/arch/powerpc/lib/copyuser_64.S
>> +++ b/arch/powerpc/lib/copyuser_64.S
>> @@ -562,3 +562,4 @@ exc;    std    r10,32(3)
>>       li    r5,4096
>>       b    .Ldst_aligned
>>   EXPORT_SYMBOL(__copy_tofrom_user)
>> +EXPORT_SYMBOL(__copy_tofrom_user_base)
>> diff --git a/arch/powerpc/lib/copyuser_power7.S 
>> b/arch/powerpc/lib/copyuser_power7.S
>> index 8474c682a178..17dbcfbae25f 100644
>> --- a/arch/powerpc/lib/copyuser_power7.S
>> +++ b/arch/powerpc/lib/copyuser_power7.S
>> @@ -5,13 +5,9 @@
>>    *
>>    * Author: Anton Blanchard <anton@au.ibm.com>
>>    */
>> +#include <linux/export.h>
>>   #include <asm/ppc_asm.h>
>>   -#ifndef SELFTEST_CASE
>> -/* 0 == don't use VMX, 1 == use VMX */
>> -#define SELFTEST_CASE    0
>> -#endif
>> -
>>   #ifdef __BIG_ENDIAN__
>>   #define LVS(VRT,RA,RB)        lvsl    VRT,RA,RB
>>   #define VPERM(VRT,VRA,VRB,VRC)    vperm    VRT,VRA,VRB,VRC
>> @@ -47,10 +43,14 @@
>>       ld    r15,STK_REG(R15)(r1)
>>       ld    r14,STK_REG(R14)(r1)
>>   .Ldo_err3:
>> -    bl    CFUNC(exit_vmx_usercopy)
>> +    ld      r6,STK_REG(R31)(r1)    /* original destination pointer */
>> +    ld      r5,STK_REG(R29)(r1)    /* original number of bytes */
>> +    subf    r7,r6,r3        /* #bytes copied */
>> +    subf    r3,r7,r5        /* #bytes not copied in r3 */
>>       ld    r0,STACKFRAMESIZE+16(r1)
>>       mtlr    r0
>> -    b    .Lexit
>> +    addi    r1,r1,STACKFRAMESIZE
>> +    blr
>>   #endif /* CONFIG_ALTIVEC */
>>     .Ldo_err2:
>> @@ -74,7 +74,6 @@
>>     _GLOBAL(__copy_tofrom_user_power7)
>>       cmpldi    r5,16
>> -    cmpldi    cr1,r5,3328
>>         std    r3,-STACKFRAMESIZE+STK_REG(R31)(r1)
>>       std    r4,-STACKFRAMESIZE+STK_REG(R30)(r1)
>> @@ -82,12 +81,6 @@ _GLOBAL(__copy_tofrom_user_power7)
>>         blt    .Lshort_copy
>>   -#ifdef CONFIG_ALTIVEC
>> -test_feature = SELFTEST_CASE
>> -BEGIN_FTR_SECTION
>> -    bgt    cr1,.Lvmx_copy
>> -END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
>> -#endif
>>     .Lnonvmx_copy:
>>       /* Get the source 8B aligned */
>> @@ -263,23 +256,14 @@ err1;    stb    r0,0(r3)
>>   15:    li    r3,0
>>       blr
>>   -.Lunwind_stack_nonvmx_copy:
>> -    addi    r1,r1,STACKFRAMESIZE
>> -    b    .Lnonvmx_copy
>> -
>> -.Lvmx_copy:
>>   #ifdef CONFIG_ALTIVEC
>> +_GLOBAL(__copy_tofrom_user_power7_vmx)
>>       mflr    r0
>>       std    r0,16(r1)
>>       stdu    r1,-STACKFRAMESIZE(r1)
>> -    bl    CFUNC(enter_vmx_usercopy)
>> -    cmpwi    cr1,r3,0
>> -    ld    r0,STACKFRAMESIZE+16(r1)
>> -    ld    r3,STK_REG(R31)(r1)
>> -    ld    r4,STK_REG(R30)(r1)
>> -    ld    r5,STK_REG(R29)(r1)
>> -    mtlr    r0
>>   +    std     r3,STK_REG(R31)(r1)
>> +    std     r5,STK_REG(R29)(r1)
>>       /*
>>        * We prefetch both the source and destination using enhanced 
>> touch
>>        * instructions. We use a stream ID of 0 for the load side and
>> @@ -300,8 +284,6 @@ err1;    stb    r0,0(r3)
>>         DCBT_SETUP_STREAMS(r6, r7, r9, r10, r8)
>>   -    beq    cr1,.Lunwind_stack_nonvmx_copy
>> -
>>       /*
>>        * If source and destination are not relatively aligned we use a
>>        * slower permute loop.
>> @@ -478,7 +460,8 @@ err3;    lbz    r0,0(r4)
>>   err3;    stb    r0,0(r3)
>>     15:    addi    r1,r1,STACKFRAMESIZE
>> -    b    CFUNC(exit_vmx_usercopy)    /* tail call optimise */
>> +    li r3,0
>> +    blr
>>     .Lvmx_unaligned_copy:
>>       /* Get the destination 16B aligned */
>> @@ -681,5 +664,7 @@ err3;    lbz    r0,0(r4)
>>   err3;    stb    r0,0(r3)
>>     15:    addi    r1,r1,STACKFRAMESIZE
>> -    b    CFUNC(exit_vmx_usercopy)    /* tail call optimise */
>> +    li r3,0
>> +    blr
>> +EXPORT_SYMBOL(__copy_tofrom_user_power7_vmx)
>>   #endif /* CONFIG_ALTIVEC */
>> diff --git a/arch/powerpc/lib/vmx-helper.c 
>> b/arch/powerpc/lib/vmx-helper.c
>> index 54340912398f..35080885204b 100644
>> --- a/arch/powerpc/lib/vmx-helper.c
>> +++ b/arch/powerpc/lib/vmx-helper.c
>> @@ -10,6 +10,32 @@
>>   #include <linux/hardirq.h>
>>   #include <asm/switch_to.h>
>>   +unsigned long __copy_tofrom_user_vmx(void __user *to, const void 
>> __user *from,
>> +            unsigned long size, enum usercopy_mode mode)
>> +{
>> +    unsigned long ret;
>> +
>> +    if (!enter_vmx_usercopy()) {
>> +        raw_copy_allow(to, mode);
>> +        ret = __copy_tofrom_user(to, from, size);
>> +        raw_copy_prevent(mode);
>> +        return ret;
>> +    }
>> +
>> +    raw_copy_allow(to, mode);
>> +    ret = __copy_tofrom_user_power7_vmx(to, from, size);
>> +    raw_copy_prevent(mode);
>> +    exit_vmx_usercopy();
>> +    if (unlikely(ret)) {
>> +        raw_copy_allow(to, mode);
>> +        ret = __copy_tofrom_user_base(to, from, size);
>> +        raw_copy_prevent(mode);
>> +    }
>> +
>> +    return ret;
>> +}
>> +EXPORT_SYMBOL(__copy_tofrom_user_vmx);
>> +
>>   int enter_vmx_usercopy(void)
>>   {
>>       if (in_interrupt())
>



* Re: [PATCH v2 1/2] powerpc: fix KUAP warning in VMX usercopy path
  2026-03-03  9:19   ` Sayali Patil
@ 2026-03-03 14:57     ` Christophe Leroy (CS GROUP)
  2026-03-03 15:10       ` Christophe Leroy (CS GROUP)
  0 siblings, 1 reply; 8+ messages in thread
From: Christophe Leroy (CS GROUP) @ 2026-03-03 14:57 UTC (permalink / raw)
  To: Sayali Patil, linuxppc-dev, maddy
  Cc: aboorvad, sshegde, riteshh, hbathini, ming.lei, csander, czhong,
	venkat88

Hi,

Le 03/03/2026 à 10:19, Sayali Patil a écrit :
> 
> On 02/03/26 16:42, Christophe Leroy (CS GROUP) wrote:
>> Hi Sayali,
>>
>> Le 28/02/2026 à 14:53, Sayali Patil a écrit :
>>> On powerpc with PREEMPT_FULL or PREEMPT_LAZY and function tracing 
>>> enabled,
>>> KUAP warnings can be triggered from the VMX usercopy path under memory
>>> stress workloads.
>>>
>>> KUAP requires that no subfunctions are called once userspace access has
>>> been enabled. The existing VMX copy implementation violates this
>>> requirement by invoking enter_vmx_usercopy() from the assembly path 
>>> after
>>> userspace access has already been enabled. If preemption occurs
>>> in this window, the AMR state may not be preserved correctly,
>>> leading to unexpected userspace access state and resulting in
>>> KUAP warnings.
>>>
>>> Fix this by restructuring the VMX usercopy flow so that VMX selection
>>> and VMX state management are centralized in raw_copy_tofrom_user(),
>>> which is invoked by the raw_copy_{to,from,in}_user() wrappers.
>>>
>>> Introduce a usercopy_mode enum to describe the copy direction
>>> (IN, FROM, TO) and use it to derive the required KUAP permissions.
>>> Userspace access is now enabled and disabled through common helpers
>>> based on the selected mode, ensuring that the correct read/write
>>> permissions are applied consistently.
>>>
>>>   The new flow is:
>>>
>>>    - raw_copy_{to,from,in}_user() calls raw_copy_tofrom_user()
>>>    - raw_copy_tofrom_user() decides whether to use the VMX path
>>>      based on size and CPU capability
>>>    - Call enter_vmx_usercopy() before enabling userspace access
>>>    - Enable userspace access as per the usercopy mode
>>>      and perform the VMX copy
>>>    - Disable userspace access as per the usercopy mode
>>>    - Call exit_vmx_usercopy()
>>>    - Fall back to the base copy routine if the VMX copy faults
>>>
>>> With this change, the VMX assembly routines no longer perform VMX state
>>> management or call helper functions; they only implement the
>>> copy operations.
>>> The previous feature-section based VMX selection inside
>>> __copy_tofrom_user_power7() is removed, and a dedicated
>>> __copy_tofrom_user_power7_vmx() entry point is introduced.
>>>
>>> This ensures correct KUAP ordering, avoids subfunction calls
>>> while KUAP is unlocked, and eliminates the warnings while preserving
>>> the VMX fast path.
>>>
>>> Fixes: de78a9c42a79 ("powerpc: Add a framework for Kernel Userspace 
>>> Access Protection")
>>> Reported-by: Shrikanth Hegde <sshegde@linux.ibm.com>
>>> Closes: https://lore.kernel.org/all/20260109064917.777587-2-sshegde@linux.ibm.com/
>>> Suggested-by: Christophe Leroy <chleroy@kernel.org>
>>> Co-developed-by: Aboorva Devarajan <aboorvad@linux.ibm.com>
>>> Signed-off-by: Aboorva Devarajan <aboorvad@linux.ibm.com>
>>> Signed-off-by: Sayali Patil <sayalip@linux.ibm.com>
>>> ---
>>>
>>> v1->v2
>>>    - Updated as per the review comments.
>>>    - Centralized VMX usercopy handling in __copy_tofrom_user_vmx() in
>>>      arch/powerpc/lib/vmx-helper.c.
>>>    - Introduced a usercopy_mode enum to describe the copy direction
>>>      (IN, FROM, TO) and derive the required KUAP permissions, avoiding
>>>      duplication across the different usercopy paths.
>>
>> I like the reduction of duplication you propose, but I can't see the
>> added value of that enum. What about:
>>
>> diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
>> index 63d6eb8b004e..14a3219db838 100644
>> --- a/arch/powerpc/include/asm/uaccess.h
>> +++ b/arch/powerpc/include/asm/uaccess.h
>> @@ -329,12 +329,6 @@ do {                                \
>>  extern unsigned long __copy_tofrom_user(void __user *to,
>>          const void __user *from, unsigned long size);
>>
>> -enum usercopy_mode {
>> -    USERCOPY_IN,
>> -    USERCOPY_FROM,
>> -    USERCOPY_TO,
>> -};
>> -
>>  unsigned long __copy_tofrom_user_vmx(void __user *to, const void 
>> __user *from,
>>                  unsigned long size, enum usercopy_mode mode);
>>
>> @@ -352,48 +346,18 @@ static inline bool will_use_vmx(unsigned long n)
>>          n > VMX_COPY_THRESHOLD;
>>  }
>>
>> -static inline void raw_copy_allow(void __user *to, enum usercopy_mode 
>> mode)
>> -{
>> -    switch (mode) {
>> -    case USERCOPY_IN:
>> -        allow_user_access(to, KUAP_READ_WRITE);
>> -        break;
>> -    case USERCOPY_FROM:
>> -        allow_user_access(NULL, KUAP_READ);
>> -        break;
>> -    case USERCOPY_TO:
>> -        allow_user_access(to, KUAP_WRITE);
>> -        break;
>> -    }
>> -}
>> -
>> -static inline void raw_copy_prevent(enum usercopy_mode mode)
>> -{
>> -    switch (mode) {
>> -    case USERCOPY_IN:
>> -        prevent_user_access(KUAP_READ_WRITE);
>> -        break;
>> -    case USERCOPY_FROM:
>> -        prevent_user_access(KUAP_READ);
>> -        break;
>> -    case USERCOPY_TO:
>> -        prevent_user_access(KUAP_WRITE);
>> -        break;
>> -    }
>> -}
>> -
>>  static inline unsigned long raw_copy_tofrom_user(void __user *to,
>>          const void __user *from, unsigned long n,
>> -        enum usercopy_mode mode)
>> +        unsigned long dir)
>>  {
>>      unsigned long ret;
>>
>>      if (will_use_vmx(n))
>>          return __copy_tofrom_user_vmx(to, from,    n, mode);
>>
>> -    raw_copy_allow(to, mode);
>> +    allow_user_access(to, dir);
>>      ret = __copy_tofrom_user(to, from, n);
>> -    raw_copy_prevent(mode);
>> +    prevent_user_access(dir);
>>      return ret;
>>
>>  }
>> @@ -403,22 +367,20 @@ static inline unsigned long
>>  raw_copy_in_user(void __user *to, const void __user *from, unsigned 
>> long n)
>>  {
>>      barrier_nospec();
>> -    return raw_copy_tofrom_user(to, from, n, USERCOPY_IN);
>> +    return raw_copy_tofrom_user(to, from, n, KUAP_READ_WRITE);
>>  }
>>  #endif /* __powerpc64__ */
>>
>>  static inline unsigned long raw_copy_from_user(void *to,
>>          const void __user *from, unsigned long n)
>>  {
>> -    return raw_copy_tofrom_user((__force void __user *)to, from,
>> -                    n, USERCOPY_FROM);
>> +    return raw_copy_tofrom_user((__force void __user *)to, from, n, 
>> KUAP_READ);
>>  }
>>
>>  static inline unsigned long
>>  raw_copy_to_user(void __user *to, const void *from, unsigned long n)
>>  {
>> -    return raw_copy_tofrom_user(to, (__force const void __user *)from,
>> -                    n, USERCOPY_TO);
>> +    return raw_copy_tofrom_user(to, (__force const void __user 
>> *)from, n, KUAP_WRITE);
>>  }
>>
>>  unsigned long __arch_clear_user(void __user *addr, unsigned long size);
>> diff --git a/arch/powerpc/lib/vmx-helper.c b/arch/powerpc/lib/vmx-helper.c
>> index 35080885204b..4610f7153fd9 100644
>> --- a/arch/powerpc/lib/vmx-helper.c
>> +++ b/arch/powerpc/lib/vmx-helper.c
>> @@ -11,25 +11,25 @@
>>  #include <asm/switch_to.h>
>>
>>  unsigned long __copy_tofrom_user_vmx(void __user *to, const void 
>> __user *from,
>> -            unsigned long size, enum usercopy_mode mode)
>> +            unsigned long size, unsigned long dir)
>>  {
>>      unsigned long ret;
>>
>>      if (!enter_vmx_usercopy()) {
>> -        raw_copy_allow(to, mode);
>> +        allow_user_access(to, dir);
>>          ret = __copy_tofrom_user(to, from, size);
>> -        raw_copy_prevent(mode);
>> +        prevent_user_access(dir);
>>          return ret;
>>      }
>>
>> -    raw_copy_allow(to, mode);
>> +    allow_user_access(to, dir);
>>      ret = __copy_tofrom_user_power7_vmx(to, from, size);
>> -    raw_copy_prevent(mode);
>> +    prevent_user_access(dir);
>>      exit_vmx_usercopy();
>>      if (unlikely(ret)) {
>> -        raw_copy_allow(to, mode);
>> +        allow_user_access(to, dir);
>>          ret = __copy_tofrom_user_base(to, from, size);
>> -        raw_copy_prevent(mode);
>> +        prevent_user_access(dir);
>>      }
>>
>>      return ret;
>>
>>
>>
>> Christophe
>>
>>
> Hi Christophe,
> Thanks for the review.
> With the suggested change, we are hitting a compilation error.
> 
> The issue is related to how KUAP enforces the access direction.
> allow_user_access() contains:
> 
> BUILD_BUG_ON(!__builtin_constant_p(dir));
> 
> which requires that the access direction is a compile-time constant.
> If we pass a runtime value (for example, an unsigned long function
> parameter), the __builtin_constant_p() check fails and triggers the
> following build error.
> 
> Error:
> In function 'allow_user_access', inlined from '__copy_tofrom_user_vmx' 
> at arch/powerpc/lib/vmx-helper.c:19:3:
> BUILD_BUG_ON failed: !__builtin_constant_p(dir) 706
> 
> 
> The previous implementation worked because allow_user_access() was
> invoked with the compile-time constants KUAP_READ, KUAP_WRITE and
> KUAP_READ_WRITE, which satisfied the __builtin_constant_p() requirement.
> So the function must be called with a compile-time constant to satisfy
> the KUAP checks.
> 
> Please let me know if you would prefer a different approach.
> 

Ah, right, I missed that. The problem should only be in vmx-helper.c

What about:

diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index 63d6eb8b004e..1e05f66fa647 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -329,14 +329,8 @@ do {								\
  extern unsigned long __copy_tofrom_user(void __user *to,
  		const void __user *from, unsigned long size);

-enum usercopy_mode {
-	USERCOPY_IN,
-	USERCOPY_FROM,
-	USERCOPY_TO,
-};
-
  unsigned long __copy_tofrom_user_vmx(void __user *to, const void __user *from,
-				unsigned long size, enum usercopy_mode mode);
+				unsigned long size, unsigned long dir);

  unsigned long __copy_tofrom_user_base(void __user *to,
  		const void __user *from, unsigned long size);
@@ -345,55 +339,25 @@ unsigned long __copy_tofrom_user_power7_vmx(void __user *to,
  		const void __user *from, unsigned long size);


-static inline bool will_use_vmx(unsigned long n)
+static __always_inline bool will_use_vmx(unsigned long n)
  {
  	return IS_ENABLED(CONFIG_ALTIVEC) &&
  		cpu_has_feature(CPU_FTR_VMX_COPY) &&
  		n > VMX_COPY_THRESHOLD;
  }

-static inline void raw_copy_allow(void __user *to, enum usercopy_mode mode)
-{
-	switch (mode) {
-	case USERCOPY_IN:
-		allow_user_access(to, KUAP_READ_WRITE);
-		break;
-	case USERCOPY_FROM:
-		allow_user_access(NULL, KUAP_READ);
-		break;
-	case USERCOPY_TO:
-		allow_user_access(to, KUAP_WRITE);
-		break;
-	}
-}
-
-static inline void raw_copy_prevent(enum usercopy_mode mode)
-{
-	switch (mode) {
-	case USERCOPY_IN:
-		prevent_user_access(KUAP_READ_WRITE);
-		break;
-	case USERCOPY_FROM:
-		prevent_user_access(KUAP_READ);
-		break;
-	case USERCOPY_TO:
-		prevent_user_access(KUAP_WRITE);
-		break;
-	}
-}
-
-static inline unsigned long raw_copy_tofrom_user(void __user *to,
+static __always_inline unsigned long raw_copy_tofrom_user(void __user *to,
  		const void __user *from, unsigned long n,
-		enum usercopy_mode mode)
+		unsigned long dir)
  {
  	unsigned long ret;

  	if (will_use_vmx(n))
-		return __copy_tofrom_user_vmx(to, from,	n, mode);
+		return __copy_tofrom_user_vmx(to, from,	n, dir);

-	raw_copy_allow(to, mode);
+	allow_user_access(to, dir);
  	ret = __copy_tofrom_user(to, from, n);
-	raw_copy_prevent(mode);
+	prevent_user_access(dir);
  	return ret;

  }
@@ -403,22 +367,20 @@ static inline unsigned long
  raw_copy_in_user(void __user *to, const void __user *from, unsigned long n)
  {
  	barrier_nospec();
-	return raw_copy_tofrom_user(to, from, n, USERCOPY_IN);
+	return raw_copy_tofrom_user(to, from, n, KUAP_READ_WRITE);
  }
  #endif /* __powerpc64__ */

  static inline unsigned long raw_copy_from_user(void *to,
  		const void __user *from, unsigned long n)
  {
-	return raw_copy_tofrom_user((__force void __user *)to, from,
-					n, USERCOPY_FROM);
+	return raw_copy_tofrom_user((__force void __user *)to, from, n, KUAP_READ);
  }

  static inline unsigned long
  raw_copy_to_user(void __user *to, const void *from, unsigned long n)
  {
-	return raw_copy_tofrom_user(to, (__force const void __user *)from,
-					n, USERCOPY_TO);
+	return raw_copy_tofrom_user(to, (__force const void __user *)from, n, KUAP_WRITE);
  }

  unsigned long __arch_clear_user(void __user *addr, unsigned long size);
diff --git a/arch/powerpc/lib/vmx-helper.c b/arch/powerpc/lib/vmx-helper.c
index 35080885204b..5fc6b2997158 100644
--- a/arch/powerpc/lib/vmx-helper.c
+++ b/arch/powerpc/lib/vmx-helper.c
@@ -10,26 +10,49 @@
  #include <linux/hardirq.h>
  #include <asm/switch_to.h>

+static void __allow_user_access(void __user *to, unsigned long dir)
+{
+	if (dir == KUAP_READ)
+		allow_user_access(to, KUAP_READ);
+	else if (dir == KUAP_WRITE)
+		allow_user_access(to, KUAP_WRITE);
+	else if (dir == KUAP_READ_WRITE)
+		allow_user_access(to, KUAP_READ_WRITE);
+}
+
+static void __prevent_user_access(unsigned long dir)
+{
+	if (dir == KUAP_READ)
+		prevent_user_access(KUAP_READ);
+	else if (dir == KUAP_WRITE)
+		prevent_user_access(KUAP_WRITE);
+	else if (dir == KUAP_READ_WRITE)
+		prevent_user_access(KUAP_READ_WRITE);
+}
+
  unsigned long __copy_tofrom_user_vmx(void __user *to, const void __user *from,
-			unsigned long size, enum usercopy_mode mode)
+			unsigned long size, unsigned long dir)
  {
  	unsigned long ret;

+	if (WARN_ON_ONCE(!dir || dir > KUAP_READ_WRITE))
+		return size;
+
  	if (!enter_vmx_usercopy()) {
-		raw_copy_allow(to, mode);
+		__allow_user_access(to, dir);
  		ret = __copy_tofrom_user(to, from, size);
-		raw_copy_prevent(mode);
+		__prevent_user_access(dir);
  		return ret;
  	}

-	raw_copy_allow(to, mode);
+	__allow_user_access(to, dir);
  	ret = __copy_tofrom_user_power7_vmx(to, from, size);
-	raw_copy_prevent(mode);
+	__prevent_user_access(dir);
  	exit_vmx_usercopy();
  	if (unlikely(ret)) {
-		raw_copy_allow(to, mode);
+		__allow_user_access(to, dir);
  		ret = __copy_tofrom_user_base(to, from, size);
-		raw_copy_prevent(mode);
+		__prevent_user_access(dir);
  	}

  	return ret;


Christophe



* Re: [PATCH v2 1/2] powerpc: fix KUAP warning in VMX usercopy path
  2026-03-03 14:57     ` Christophe Leroy (CS GROUP)
@ 2026-03-03 15:10       ` Christophe Leroy (CS GROUP)
  2026-03-03 15:17         ` Christophe Leroy (CS GROUP)
  0 siblings, 1 reply; 8+ messages in thread
From: Christophe Leroy (CS GROUP) @ 2026-03-03 15:10 UTC (permalink / raw)
  To: Sayali Patil, linuxppc-dev, maddy
  Cc: aboorvad, sshegde, riteshh, hbathini, ming.lei, csander, czhong,
	venkat88

Hi Again,

Le 03/03/2026 à 15:57, Christophe Leroy (CS GROUP) a écrit :
> Hi,
> 
> Le 03/03/2026 à 10:19, Sayali Patil a écrit :
>>
>> On 02/03/26 16:42, Christophe Leroy (CS GROUP) wrote:
>>>
>> Hi Christophe,
>> Thanks for the review.
>> With the suggested change, we are hitting a compilation error.
>>
>> The issue is related to how KUAP enforces the access direction.
>> allow_user_access() contains:
>>
>> BUILD_BUG_ON(!__builtin_constant_p(dir));
>>
>> which requires that the access direction is a compile-time constant.
>> If we pass a runtime value (for example, an unsigned long), the
>> __builtin_constant_p() check fails and triggers the following build 
>> error.
>>
>> Error:
>> In function 'allow_user_access', inlined from '__copy_tofrom_user_vmx' 
>> at arch/powerpc/lib/vmx-helper.c:19:3:
>> BUILD_BUG_ON failed: !__builtin_constant_p(dir) 706
>>
>>
>> The previous implementation worked because allow_user_access() was 
>> invoked with enum
>> constants (READ, WRITE, READ_WRITE), which satisfied the 
>> __builtin_constant_p() requirement.
>> So in this case, the function must be called with a compile-time 
>> constant to satisfy KUAP.
>>
>> Please let me know if you would prefer a different approach.
>>
> 
> Ah, right, I missed that. The problem should only be in vmx-helper.c
> 

Thinking about it once more, I realised that powerpc does not define 
INLINE_COPY_FROM_USER nor INLINE_COPY_TO_USER.

This means that raw_copy_from_user() and raw_copy_to_user() will in 
reality not be called much. Therefore __copy_tofrom_user_vmx() could 
remain in uaccess.h as static __always_inline although it requires 
exporting enter_vmx_usercopy() and exit_vmx_usercopy().

Christophe



* Re: [PATCH v2 1/2] powerpc: fix KUAP warning in VMX usercopy path
  2026-03-03 15:10       ` Christophe Leroy (CS GROUP)
@ 2026-03-03 15:17         ` Christophe Leroy (CS GROUP)
  2026-03-04  5:40           ` Sayali Patil
  0 siblings, 1 reply; 8+ messages in thread
From: Christophe Leroy (CS GROUP) @ 2026-03-03 15:17 UTC (permalink / raw)
  To: Sayali Patil, linuxppc-dev, maddy
  Cc: aboorvad, sshegde, riteshh, hbathini, ming.lei, csander, czhong,
	venkat88

Hi once more,

Le 03/03/2026 à 16:10, Christophe Leroy (CS GROUP) a écrit :
> Hi Again,
> 
> Le 03/03/2026 à 15:57, Christophe Leroy (CS GROUP) a écrit :
>> Hi,
>>
>> Le 03/03/2026 à 10:19, Sayali Patil a écrit :
>>>
>>> On 02/03/26 16:42, Christophe Leroy (CS GROUP) wrote:
>>>>
>>> Hi Christophe,
>>> Thanks for the review.
>>> With the suggested change, we are hitting a compilation error.
>>>
>>> The issue is related to how KUAP enforces the access direction.
>>> allow_user_access() contains:
>>>
>>> BUILD_BUG_ON(!__builtin_constant_p(dir));
>>>
>>> which requires that the access direction is a compile-time constant.
>>> If we pass a runtime value (for example, an unsigned long), the
>>> __builtin_constant_p() check fails and triggers the following build 
>>> error.
>>>
>>> Error:
>>> In function 'allow_user_access', inlined from 
>>> '__copy_tofrom_user_vmx' at arch/powerpc/lib/vmx-helper.c:19:3:
>>> BUILD_BUG_ON failed: !__builtin_constant_p(dir) 706
>>>
>>>
>>> The previous implementation worked because allow_user_access() was 
>>> invoked with enum
>>> constants (READ, WRITE, READ_WRITE), which satisfied the 
>>> __builtin_constant_p() requirement.
>>> So in this case, the function must be called with a compile-time 
>>> constant to satisfy KUAP.
>>>
>>> Please let me know if you would prefer a different approach.
>>>
>>
>> Ah, right, I missed that. The problem should only be in vmx-helper.c
>>
> 
> Thinking about it once more, I realised that powerpc does not define 
> INLINE_COPY_FROM_USER nor INLINE_COPY_TO_USER.
> 
> This means that raw_copy_from_user() and raw_copy_to_user() will in 
> reality not be called much. Therefore __copy_tofrom_user_vmx() could 
> remain in uaccess.h as static __always_inline although it requires 
> exporting enter_vmx_usercopy() and exit_vmx_usercopy().

That would result in something like:

static __always_inline bool will_use_vmx(unsigned long n)
{
	return IS_ENABLED(CONFIG_ALTIVEC) && cpu_has_feature(CPU_FTR_VMX_COPY) &&
	       n > VMX_COPY_THRESHOLD;
}

static __always_inline unsigned long
raw_copy_tofrom_user(void __user *to, const void __user *from, unsigned 
long n, unsigned long dir)
{
	unsigned long ret;

	if (will_use_vmx(n) && enter_vmx_usercopy()) {
		allow_user_access(to, dir);
		ret = __copy_tofrom_user_power7_vmx(to, from, n);
		prevent_user_access(dir);
		exit_vmx_usercopy();

		if (unlikely(ret)) {
			allow_user_access(to, dir);
			ret = __copy_tofrom_user_base(to, from, n);
			prevent_user_access(dir);
		}
		return ret;
	}
	allow_user_access(to, dir);
	ret = __copy_tofrom_user(to, from, n);
	prevent_user_access(dir);
	return ret;
}


Christophe



* Re: [PATCH v2 1/2] powerpc: fix KUAP warning in VMX usercopy path
  2026-03-03 15:17         ` Christophe Leroy (CS GROUP)
@ 2026-03-04  5:40           ` Sayali Patil
  0 siblings, 0 replies; 8+ messages in thread
From: Sayali Patil @ 2026-03-04  5:40 UTC (permalink / raw)
  To: Christophe Leroy (CS GROUP), linuxppc-dev, maddy
  Cc: aboorvad, sshegde, riteshh, hbathini, ming.lei, csander, czhong,
	venkat88



On 03/03/26 20:47, Christophe Leroy (CS GROUP) wrote:
> Hi once more,
>
> Le 03/03/2026 à 16:10, Christophe Leroy (CS GROUP) a écrit :
>> Hi Again,
>>
>> Le 03/03/2026 à 15:57, Christophe Leroy (CS GROUP) a écrit :
>>> Hi,
>>>
>>> Le 03/03/2026 à 10:19, Sayali Patil a écrit :
>>>>
>>>> On 02/03/26 16:42, Christophe Leroy (CS GROUP) wrote:
>>>>>
>>>> Hi Christophe,
>>>> Thanks for the review.
>>>> With the suggested change, we are hitting a compilation error.
>>>>
>>>> The issue is related to how KUAP enforces the access direction.
>>>> allow_user_access() contains:
>>>>
>>>> BUILD_BUG_ON(!__builtin_constant_p(dir));
>>>>
>>>> which requires that the access direction is a compile-time constant.
>>>> If we pass a runtime value (for example, an unsigned long), the
>>>> __builtin_constant_p() check fails and triggers the following build 
>>>> error.
>>>>
>>>> Error:
>>>> In function 'allow_user_access', inlined from 
>>>> '__copy_tofrom_user_vmx' at arch/powerpc/lib/vmx-helper.c:19:3:
>>>> BUILD_BUG_ON failed: !__builtin_constant_p(dir) 706
>>>>
>>>>
>>>> The previous implementation worked because allow_user_access() was 
>>>> invoked with enum
>>>> constants (READ, WRITE, READ_WRITE), which satisfied the 
>>>> __builtin_constant_p() requirement.
>>>> So in this case, the function must be called with a compile-time 
>>>> constant to satisfy KUAP.
>>>>
>>>> Please let me know if you would prefer a different approach.
>>>>
>>>
>>> Ah, right, I missed that. The problem should only be in vmx-helper.c
>>>
>>
>> Thinking about it once more, I realised that powerpc does not define 
>> INLINE_COPY_FROM_USER nor INLINE_COPY_TO_USER.
>>
>> This means that raw_copy_from_user() and raw_copy_to_user() will in 
>> reality not be called much. Therefore __copy_tofrom_user_vmx() could 
>> remain in uaccess.h as static __always_inline although it requires 
>> exporting enter_vmx_usercopy() and exit_vmx_usercopy().
>
> That would result in something like:
>
> static __always_inline bool will_use_vmx(unsigned long n)
> {
>     return IS_ENABLED(CONFIG_ALTIVEC) && 
> cpu_has_feature(CPU_FTR_VMX_COPY) &&
>            n > VMX_COPY_THRESHOLD;
> }
>
> static __always_inline unsigned long
> raw_copy_tofrom_user(void __user *to, const void __user *from, 
> unsigned long n, unsigned long dir)
> {
>     unsigned long ret;
>
>     if (will_use_vmx(n) && enter_vmx_usercopy()) {
>         allow_user_access(to, dir);
>         ret = __copy_tofrom_user_power7_vmx(to, from, n);
>         prevent_user_access(dir);
>         exit_vmx_usercopy();
>
>         if (unlikely(ret)) {
>             allow_user_access(to, dir);
>             ret = __copy_tofrom_user_base(to, from, n);
>             prevent_user_access(dir);
>         }
>         return ret;
>     }
>     allow_user_access(to, dir);
>     ret = __copy_tofrom_user(to, from, n);
>     prevent_user_access(dir);
>     return ret;
> }
>
>
> Christophe
>
Thanks Christophe for the review and suggestions. We have incorporated
these changes in v3.

v3: 
https://lore.kernel.org/all/20260304053506.118632-1-sayalip@linux.ibm.com/




Thread overview: 8+ messages
2026-02-28 13:53 [PATCH v2 1/2] powerpc: fix KUAP warning in VMX usercopy path Sayali Patil
2026-02-28 13:53 ` [PATCH v2 2/2] powerpc/selftests/copyloops: extend selftest to exercise __copy_tofrom_user_power7_vmx Sayali Patil
2026-03-02 11:12 ` [PATCH v2 1/2] powerpc: fix KUAP warning in VMX usercopy path Christophe Leroy (CS GROUP)
2026-03-03  9:19   ` Sayali Patil
2026-03-03 14:57     ` Christophe Leroy (CS GROUP)
2026-03-03 15:10       ` Christophe Leroy (CS GROUP)
2026-03-03 15:17         ` Christophe Leroy (CS GROUP)
2026-03-04  5:40           ` Sayali Patil
