linux-mm.kvack.org archive mirror
* [PATCH v2 00/10] powerpc: Implement masked user access
@ 2025-08-22  9:57 Christophe Leroy
  2025-08-22  9:57 ` [PATCH v2 01/10] iter: Avoid barrier_nospec() in copy_from_user_iter() Christophe Leroy
                   ` (9 more replies)
  0 siblings, 10 replies; 19+ messages in thread
From: Christophe Leroy @ 2025-08-22  9:57 UTC (permalink / raw)
  To: Michael Ellerman, Nicholas Piggin, Madhavan Srinivasan,
	Alexander Viro, Christian Brauner, Jan Kara, Thomas Gleixner,
	Ingo Molnar, Peter Zijlstra, Darren Hart, Davidlohr Bueso,
	Andre Almeida, Andrew Morton, David Laight, Dave Hansen,
	Linus Torvalds, Daniel Borkmann
  Cc: Christophe Leroy, linux-kernel, linuxppc-dev, linux-fsdevel,
	linux-mm, linux-block

Masked user access avoids the address/size verification by access_ok().
Although its main purpose is to skip the speculation in the
verification of the user address and size, and hence avoid the need for
speculation mitigation, it also has the advantage of reducing the number
of instructions needed, so it also benefits platforms that don't
need speculation mitigation, especially when the size of the copy is
not known at build time.
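
As an illustration, the typical conversion (this is what patch 1 does
to copy_from_user_iter()) replaces the access_ok() check by a clamp of
the user address:

	if (can_do_masked_user_access())
		from = mask_user_address(from);	/* clamp, no check */
	else if (!access_ok(from, len))
		return res;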

Patches 1, 2 and 4 clean up redundant barrier_nospec() calls
introduced by commit 74e19ef0ff80 ("uaccess: Add speculation barrier
to copy_from_user()"). To do that, a speculation barrier is added to
copy_from_user_iter() so that the barrier in powerpc raw_copy_from_user(),
which is redundant with the one in copy_from_user(), can be removed. To
avoid impacting x86, copy_from_user_iter() is first converted to using
masked user access.

Patch 3 adds masked_user_read_access_begin() and
masked_user_write_access_begin() to match user_read_access_end()
and user_write_access_end().

Patches 5, 6 and 7 clean up powerpc uaccess functions.

Patches 8 and 9 prepare powerpc/32 for the necessary gap at the top
of userspace.

The last patch implements masked user access.

Changes in v2:
- Converted copy_from_user_iter() to use masked user access.
- Cleaned up powerpc uaccess functions to minimise code duplication
  when adding masked user access.
- Automated TASK_SIZE calculation to minimise the use of BUILD_BUG_ON().
- Tried to make some commit messages clearer based on feedback on
  version 1 of the series.

Christophe Leroy (10):
  iter: Avoid barrier_nospec() in copy_from_user_iter()
  uaccess: Add speculation barrier to copy_from_user_iter()
  uaccess: Add masked_user_{read/write}_access_begin
  powerpc/uaccess: Move barrier_nospec() out of
    allow_read_{from/write}_user()
  powerpc/uaccess: Remove unused size and from parameters from
    allow_access_user()
  powerpc/uaccess: Remove
    {allow/prevent}_{read/write/read_write}_{from/to/}_user()
  powerpc/uaccess: Refactor user_{read/write/}_access_begin()
  powerpc/32s: Fix segments setup when TASK_SIZE is not a multiple of
    256M
  powerpc/32: Automatically adapt TASK_SIZE based on constraints
  powerpc/uaccess: Implement masked user access

 arch/powerpc/Kconfig                          |   3 +-
 arch/powerpc/include/asm/barrier.h            |   2 +-
 arch/powerpc/include/asm/book3s/32/kup.h      |   3 +-
 arch/powerpc/include/asm/book3s/32/mmu-hash.h |   5 +-
 arch/powerpc/include/asm/book3s/32/pgtable.h  |   4 -
 arch/powerpc/include/asm/book3s/64/kup.h      |   6 +-
 arch/powerpc/include/asm/kup.h                |  52 +------
 arch/powerpc/include/asm/nohash/32/kup-8xx.h  |   3 +-
 arch/powerpc/include/asm/nohash/32/mmu-8xx.h  |   4 -
 arch/powerpc/include/asm/nohash/kup-booke.h   |   3 +-
 arch/powerpc/include/asm/task_size_32.h       |  28 +++-
 arch/powerpc/include/asm/uaccess.h            | 134 +++++++++++++-----
 arch/powerpc/kernel/asm-offsets.c             |   2 +-
 arch/powerpc/kernel/head_book3s_32.S          |   6 +-
 arch/powerpc/mm/book3s32/mmu.c                |   4 +-
 arch/powerpc/mm/mem.c                         |   2 -
 arch/powerpc/mm/nohash/8xx.c                  |   2 -
 arch/powerpc/mm/ptdump/segment_regs.c         |   2 +-
 fs/select.c                                   |   2 +-
 include/linux/uaccess.h                       |   7 +
 kernel/futex/futex.h                          |   4 +-
 lib/iov_iter.c                                |  22 ++-
 lib/strncpy_from_user.c                       |   2 +-
 lib/strnlen_user.c                            |   2 +-
 24 files changed, 172 insertions(+), 132 deletions(-)

-- 
2.49.0


* [PATCH v2 01/10] iter: Avoid barrier_nospec() in copy_from_user_iter()
  2025-08-22  9:57 [PATCH v2 00/10] powerpc: Implement masked user access Christophe Leroy
@ 2025-08-22  9:57 ` Christophe Leroy
  2025-08-22  9:57 ` [PATCH v2 02/10] uaccess: Add speculation barrier to copy_from_user_iter() Christophe Leroy
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 19+ messages in thread
From: Christophe Leroy @ 2025-08-22  9:57 UTC (permalink / raw)
  To: Michael Ellerman, Nicholas Piggin, Madhavan Srinivasan,
	Alexander Viro, Christian Brauner, Jan Kara, Thomas Gleixner,
	Ingo Molnar, Peter Zijlstra, Darren Hart, Davidlohr Bueso,
	Andre Almeida, Andrew Morton, David Laight, Dave Hansen,
	Linus Torvalds, Daniel Borkmann
  Cc: Christophe Leroy, linux-kernel, linuxppc-dev, linux-fsdevel,
	linux-mm, linux-block

The following patch will add the missing barrier_nospec() to
copy_from_user_iter().

Avoid it for architectures supporting masked user accesses, the
same way as was done for copy_from_user() by commit 0fc810ae3ae1
("x86/uaccess: Avoid barrier_nospec() in 64-bit copy_from_user()").

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
v2: New in v2
---
 lib/iov_iter.c | 16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index f9193f952f49..48bd0cbce8c2 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -49,12 +49,16 @@ size_t copy_from_user_iter(void __user *iter_from, size_t progress,
 
 	if (should_fail_usercopy())
 		return len;
-	if (access_ok(iter_from, len)) {
-		to += progress;
-		instrument_copy_from_user_before(to, iter_from, len);
-		res = raw_copy_from_user(to, iter_from, len);
-		instrument_copy_from_user_after(to, iter_from, len, res);
-	}
+	if (can_do_masked_user_access())
+		iter_from = mask_user_address(iter_from);
+	else if (!access_ok(iter_from, len))
+		return res;
+
+	to += progress;
+	instrument_copy_from_user_before(to, iter_from, len);
+	res = raw_copy_from_user(to, iter_from, len);
+	instrument_copy_from_user_after(to, iter_from, len, res);
+
 	return res;
 }
 
-- 
2.49.0


* [PATCH v2 02/10] uaccess: Add speculation barrier to copy_from_user_iter()
  2025-08-22  9:57 [PATCH v2 00/10] powerpc: Implement masked user access Christophe Leroy
  2025-08-22  9:57 ` [PATCH v2 01/10] iter: Avoid barrier_nospec() in copy_from_user_iter() Christophe Leroy
@ 2025-08-22  9:57 ` Christophe Leroy
  2025-08-22 13:46   ` Linus Torvalds
  2025-08-22  9:57 ` [PATCH v2 03/10] uaccess: Add masked_user_{read/write}_access_begin Christophe Leroy
                   ` (7 subsequent siblings)
  9 siblings, 1 reply; 19+ messages in thread
From: Christophe Leroy @ 2025-08-22  9:57 UTC (permalink / raw)
  To: Michael Ellerman, Nicholas Piggin, Madhavan Srinivasan,
	Alexander Viro, Christian Brauner, Jan Kara, Thomas Gleixner,
	Ingo Molnar, Peter Zijlstra, Darren Hart, Davidlohr Bueso,
	Andre Almeida, Andrew Morton, David Laight, Dave Hansen,
	Linus Torvalds, Daniel Borkmann
  Cc: Christophe Leroy, linux-kernel, linuxppc-dev, linux-fsdevel,
	linux-mm, linux-block

The result of "access_ok()" can be mis-speculated, so that you can
end up speculatively executing:

	if (access_ok(from, size))
		// Right here

For the same reason as done in copy_from_user() by
commit 74e19ef0ff80 ("uaccess: Add speculation barrier to
copy_from_user()"), add a speculation barrier to copy_from_user_iter().

See commit 74e19ef0ff80 ("uaccess: Add speculation barrier to
copy_from_user()") for more details.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
 lib/iov_iter.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index 48bd0cbce8c2..8d08b3435174 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -49,11 +49,19 @@ size_t copy_from_user_iter(void __user *iter_from, size_t progress,
 
 	if (should_fail_usercopy())
 		return len;
-	if (can_do_masked_user_access())
+	if (can_do_masked_user_access()) {
 		iter_from = mask_user_address(iter_from);
-	else if (!access_ok(iter_from, len))
-		return res;
+	} else {
+		if (!access_ok(iter_from, len))
+			return res;
 
+		/*
+		 * Ensure that bad access_ok() speculation will not
+		 * lead to nasty side effects *after* the copy is
+		 * finished:
+		 */
+		barrier_nospec();
+	}
 	to += progress;
 	instrument_copy_from_user_before(to, iter_from, len);
 	res = raw_copy_from_user(to, iter_from, len);
-- 
2.49.0


* [PATCH v2 03/10] uaccess: Add masked_user_{read/write}_access_begin
  2025-08-22  9:57 [PATCH v2 00/10] powerpc: Implement masked user access Christophe Leroy
  2025-08-22  9:57 ` [PATCH v2 01/10] iter: Avoid barrier_nospec() in copy_from_user_iter() Christophe Leroy
  2025-08-22  9:57 ` [PATCH v2 02/10] uaccess: Add speculation barrier to copy_from_user_iter() Christophe Leroy
@ 2025-08-22  9:57 ` Christophe Leroy
  2025-08-24 15:08   ` Thomas Gleixner
  2025-08-22  9:58 ` [PATCH v2 04/10] powerpc/uaccess: Move barrier_nospec() out of allow_read_{from/write}_user() Christophe Leroy
                   ` (6 subsequent siblings)
  9 siblings, 1 reply; 19+ messages in thread
From: Christophe Leroy @ 2025-08-22  9:57 UTC (permalink / raw)
  To: Michael Ellerman, Nicholas Piggin, Madhavan Srinivasan,
	Alexander Viro, Christian Brauner, Jan Kara, Thomas Gleixner,
	Ingo Molnar, Peter Zijlstra, Darren Hart, Davidlohr Bueso,
	Andre Almeida, Andrew Morton, David Laight, Dave Hansen,
	Linus Torvalds, Daniel Borkmann
  Cc: Christophe Leroy, linux-kernel, linuxppc-dev, linux-fsdevel,
	linux-mm, linux-block

Although masked_user_access_begin() is currently only used for reading
data from user memory, introduce masked_user_read_access_begin()
and masked_user_write_access_begin() in order to match
user_read_access_begin() and user_write_access_begin().

That means masked_user_read_access_begin() is used when user memory is
exclusively read during the window, masked_user_write_access_begin()
is used when user memory is exclusively written during the window, and
masked_user_access_begin() remains and is used when both reads and
writes are performed during the open window. Each of them is expected
to be terminated by the matching user_read_access_end(),
user_write_access_end() or user_access_end().

Have them default to masked_user_access_begin() when they are
not defined.
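
As an example, futex_get_value() below ends up with the following
pairing, where the read-only window opened by
masked_user_read_access_begin() is closed by user_read_access_end():

	if (can_do_masked_user_access())
		from = masked_user_read_access_begin(from);
	else if (!user_read_access_begin(from, sizeof(*from)))
		return -EFAULT;
	unsafe_get_user(val, from, Efault);
	user_read_access_end();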

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
v2: Added more explanations in the commit message following comments received.
---
 fs/select.c             | 2 +-
 include/linux/uaccess.h | 7 +++++++
 kernel/futex/futex.h    | 4 ++--
 lib/strncpy_from_user.c | 2 +-
 lib/strnlen_user.c      | 2 +-
 5 files changed, 12 insertions(+), 5 deletions(-)

diff --git a/fs/select.c b/fs/select.c
index 082cf60c7e23..36db0359388c 100644
--- a/fs/select.c
+++ b/fs/select.c
@@ -777,7 +777,7 @@ static inline int get_sigset_argpack(struct sigset_argpack *to,
 	// the path is hot enough for overhead of copy_from_user() to matter
 	if (from) {
 		if (can_do_masked_user_access())
-			from = masked_user_access_begin(from);
+			from = masked_user_read_access_begin(from);
 		else if (!user_read_access_begin(from, sizeof(*from)))
 			return -EFAULT;
 		unsafe_get_user(to->p, &from->p, Efault);
diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
index 1beb5b395d81..aa48d5415d32 100644
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -41,6 +41,13 @@
  #define mask_user_address(src) (src)
 #endif
 
+#ifndef masked_user_write_access_begin
+#define masked_user_write_access_begin masked_user_access_begin
+#endif
+#ifndef masked_user_read_access_begin
+#define masked_user_read_access_begin masked_user_access_begin
+#endif
+
 /*
  * Architectures should provide two primitives (raw_copy_{to,from}_user())
  * and get rid of their private instances of copy_{to,from}_user() and
diff --git a/kernel/futex/futex.h b/kernel/futex/futex.h
index 2cd57096c38e..a1120a318c18 100644
--- a/kernel/futex/futex.h
+++ b/kernel/futex/futex.h
@@ -303,7 +303,7 @@ static __always_inline int futex_get_value(u32 *dest, u32 __user *from)
 	u32 val;
 
 	if (can_do_masked_user_access())
-		from = masked_user_access_begin(from);
+		from = masked_user_read_access_begin(from);
 	else if (!user_read_access_begin(from, sizeof(*from)))
 		return -EFAULT;
 	unsafe_get_user(val, from, Efault);
@@ -318,7 +318,7 @@ static __always_inline int futex_get_value(u32 *dest, u32 __user *from)
 static __always_inline int futex_put_value(u32 val, u32 __user *to)
 {
 	if (can_do_masked_user_access())
-		to = masked_user_access_begin(to);
+		to = masked_user_write_access_begin(to);
 	else if (!user_write_access_begin(to, sizeof(*to)))
 		return -EFAULT;
 	unsafe_put_user(val, to, Efault);
diff --git a/lib/strncpy_from_user.c b/lib/strncpy_from_user.c
index 6dc234913dd5..5bb752ff7c61 100644
--- a/lib/strncpy_from_user.c
+++ b/lib/strncpy_from_user.c
@@ -126,7 +126,7 @@ long strncpy_from_user(char *dst, const char __user *src, long count)
 	if (can_do_masked_user_access()) {
 		long retval;
 
-		src = masked_user_access_begin(src);
+		src = masked_user_read_access_begin(src);
 		retval = do_strncpy_from_user(dst, src, count, count);
 		user_read_access_end();
 		return retval;
diff --git a/lib/strnlen_user.c b/lib/strnlen_user.c
index 6e489f9e90f1..4a6574b67f82 100644
--- a/lib/strnlen_user.c
+++ b/lib/strnlen_user.c
@@ -99,7 +99,7 @@ long strnlen_user(const char __user *str, long count)
 	if (can_do_masked_user_access()) {
 		long retval;
 
-		str = masked_user_access_begin(str);
+		str = masked_user_read_access_begin(str);
 		retval = do_strnlen_user(str, count, count);
 		user_read_access_end();
 		return retval;
-- 
2.49.0


* [PATCH v2 04/10] powerpc/uaccess: Move barrier_nospec() out of allow_read_{from/write}_user()
  2025-08-22  9:57 [PATCH v2 00/10] powerpc: Implement masked user access Christophe Leroy
                   ` (2 preceding siblings ...)
  2025-08-22  9:57 ` [PATCH v2 03/10] uaccess: Add masked_user_{read/write}_access_begin Christophe Leroy
@ 2025-08-22  9:58 ` Christophe Leroy
  2025-08-22  9:58 ` [PATCH v2 05/10] powerpc/uaccess: Remove unused size and from parameters from allow_access_user() Christophe Leroy
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 19+ messages in thread
From: Christophe Leroy @ 2025-08-22  9:58 UTC (permalink / raw)
  To: Michael Ellerman, Nicholas Piggin, Madhavan Srinivasan,
	Alexander Viro, Christian Brauner, Jan Kara, Thomas Gleixner,
	Ingo Molnar, Peter Zijlstra, Darren Hart, Davidlohr Bueso,
	Andre Almeida, Andrew Morton, David Laight, Dave Hansen,
	Linus Torvalds, Daniel Borkmann
  Cc: Christophe Leroy, linux-kernel, linuxppc-dev, linux-fsdevel,
	linux-mm, linux-block

Commit 74e19ef0ff80 ("uaccess: Add speculation barrier to
copy_from_user()") added a redundant barrier_nospec() in
copy_from_user(), because powerpc is already calling
barrier_nospec() in allow_read_from_user() and
allow_read_write_user(). But on other architectures that
call to barrier_nospec() was missing. So change powerpc
instead of reverting the above commit and having to fix
other architectures one by one. This is now possible
because barrier_nospec() has also been added in
copy_from_user_iter().

Move barrier_nospec() out of allow_read_from_user() and
allow_read_write_user(). This will also allow reuse of those
functions when implementing masked user access which doesn't
require barrier_nospec().

Don't add it back in raw_copy_from_user(), as its callers
copy_from_user() and copy_from_user_iter() already have the barrier.

Fixes: 74e19ef0ff80 ("uaccess: Add speculation barrier to copy_from_user()")
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
 arch/powerpc/include/asm/kup.h     | 2 --
 arch/powerpc/include/asm/uaccess.h | 4 ++++
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/include/asm/kup.h b/arch/powerpc/include/asm/kup.h
index 2bb03d941e3e..6737416dde9f 100644
--- a/arch/powerpc/include/asm/kup.h
+++ b/arch/powerpc/include/asm/kup.h
@@ -134,7 +134,6 @@ static __always_inline void kuap_assert_locked(void)
 
 static __always_inline void allow_read_from_user(const void __user *from, unsigned long size)
 {
-	barrier_nospec();
 	allow_user_access(NULL, from, size, KUAP_READ);
 }
 
@@ -146,7 +145,6 @@ static __always_inline void allow_write_to_user(void __user *to, unsigned long s
 static __always_inline void allow_read_write_user(void __user *to, const void __user *from,
 						  unsigned long size)
 {
-	barrier_nospec();
 	allow_user_access(to, from, size, KUAP_READ_WRITE);
 }
 
diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index 4f5a46a77fa2..3987a5c33558 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -301,6 +301,7 @@ do {								\
 	__typeof__(sizeof(*(ptr))) __gu_size = sizeof(*(ptr));	\
 								\
 	might_fault();					\
+	barrier_nospec();					\
 	allow_read_from_user(__gu_addr, __gu_size);		\
 	__get_user_size_allowed(__gu_val, __gu_addr, __gu_size, __gu_err);	\
 	prevent_read_from_user(__gu_addr, __gu_size);		\
@@ -329,6 +330,7 @@ raw_copy_in_user(void __user *to, const void __user *from, unsigned long n)
 {
 	unsigned long ret;
 
+	barrier_nospec();
 	allow_read_write_user(to, from, n);
 	ret = __copy_tofrom_user(to, from, n);
 	prevent_read_write_user(to, from, n);
@@ -415,6 +417,7 @@ static __must_check __always_inline bool user_access_begin(const void __user *pt
 
 	might_fault();
 
+	barrier_nospec();
 	allow_read_write_user((void __user *)ptr, ptr, len);
 	return true;
 }
@@ -431,6 +434,7 @@ user_read_access_begin(const void __user *ptr, size_t len)
 
 	might_fault();
 
+	barrier_nospec();
 	allow_read_from_user(ptr, len);
 	return true;
 }
-- 
2.49.0


* [PATCH v2 05/10] powerpc/uaccess: Remove unused size and from parameters from allow_access_user()
  2025-08-22  9:57 [PATCH v2 00/10] powerpc: Implement masked user access Christophe Leroy
                   ` (3 preceding siblings ...)
  2025-08-22  9:58 ` [PATCH v2 04/10] powerpc/uaccess: Move barrier_nospec() out of allow_read_{from/write}_user() Christophe Leroy
@ 2025-08-22  9:58 ` Christophe Leroy
  2025-08-22  9:58 ` [PATCH v2 06/10] powerpc/uaccess: Remove {allow/prevent}_{read/write/read_write}_{from/to/}_user() Christophe Leroy
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 19+ messages in thread
From: Christophe Leroy @ 2025-08-22  9:58 UTC (permalink / raw)
  To: Michael Ellerman, Nicholas Piggin, Madhavan Srinivasan,
	Alexander Viro, Christian Brauner, Jan Kara, Thomas Gleixner,
	Ingo Molnar, Peter Zijlstra, Darren Hart, Davidlohr Bueso,
	Andre Almeida, Andrew Morton, David Laight, Dave Hansen,
	Linus Torvalds, Daniel Borkmann
  Cc: Christophe Leroy, linux-kernel, linuxppc-dev, linux-fsdevel,
	linux-mm, linux-block

Since commit 16132529cee5 ("powerpc/32s: Rework Kernel Userspace
Access Protection") the size parameter is unused on all platforms.

And the 'from' parameter has never been used.

Remove them.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
v2: Also remove 'from' param.
---
 arch/powerpc/include/asm/book3s/32/kup.h     | 3 +--
 arch/powerpc/include/asm/book3s/64/kup.h     | 6 ++----
 arch/powerpc/include/asm/kup.h               | 9 ++++-----
 arch/powerpc/include/asm/nohash/32/kup-8xx.h | 3 +--
 arch/powerpc/include/asm/nohash/kup-booke.h  | 3 +--
 5 files changed, 9 insertions(+), 15 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/32/kup.h b/arch/powerpc/include/asm/book3s/32/kup.h
index 4e14a5427a63..6718b7e40eef 100644
--- a/arch/powerpc/include/asm/book3s/32/kup.h
+++ b/arch/powerpc/include/asm/book3s/32/kup.h
@@ -97,8 +97,7 @@ static __always_inline unsigned long __kuap_get_and_assert_locked(void)
 }
 #define __kuap_get_and_assert_locked __kuap_get_and_assert_locked
 
-static __always_inline void allow_user_access(void __user *to, const void __user *from,
-					      u32 size, unsigned long dir)
+static __always_inline void allow_user_access(void __user *to, unsigned long dir)
 {
 	BUILD_BUG_ON(!__builtin_constant_p(dir));
 
diff --git a/arch/powerpc/include/asm/book3s/64/kup.h b/arch/powerpc/include/asm/book3s/64/kup.h
index 497a7bd31ecc..3b8706007fa1 100644
--- a/arch/powerpc/include/asm/book3s/64/kup.h
+++ b/arch/powerpc/include/asm/book3s/64/kup.h
@@ -353,8 +353,7 @@ __bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
 	return (regs->amr & AMR_KUAP_BLOCK_READ) == AMR_KUAP_BLOCK_READ;
 }
 
-static __always_inline void allow_user_access(void __user *to, const void __user *from,
-					      unsigned long size, unsigned long dir)
+static __always_inline void allow_user_access(void __user *to, unsigned long dir)
 {
 	unsigned long thread_amr = 0;
 
@@ -383,8 +382,7 @@ static __always_inline unsigned long get_kuap(void)
 
 static __always_inline void set_kuap(unsigned long value) { }
 
-static __always_inline void allow_user_access(void __user *to, const void __user *from,
-					      unsigned long size, unsigned long dir)
+static __always_inline void allow_user_access(void __user *to, unsigned long dir)
 { }
 
 #endif /* !CONFIG_PPC_KUAP */
diff --git a/arch/powerpc/include/asm/kup.h b/arch/powerpc/include/asm/kup.h
index 6737416dde9f..da5f5b47cca0 100644
--- a/arch/powerpc/include/asm/kup.h
+++ b/arch/powerpc/include/asm/kup.h
@@ -72,8 +72,7 @@ static __always_inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned
  * platforms.
  */
 #ifndef CONFIG_PPC_BOOK3S_64
-static __always_inline void allow_user_access(void __user *to, const void __user *from,
-					      unsigned long size, unsigned long dir) { }
+static __always_inline void allow_user_access(void __user *to, unsigned long dir) { }
 static __always_inline void prevent_user_access(unsigned long dir) { }
 static __always_inline unsigned long prevent_user_access_return(void) { return 0UL; }
 static __always_inline void restore_user_access(unsigned long flags) { }
@@ -134,18 +133,18 @@ static __always_inline void kuap_assert_locked(void)
 
 static __always_inline void allow_read_from_user(const void __user *from, unsigned long size)
 {
-	allow_user_access(NULL, from, size, KUAP_READ);
+	allow_user_access(NULL, KUAP_READ);
 }
 
 static __always_inline void allow_write_to_user(void __user *to, unsigned long size)
 {
-	allow_user_access(to, NULL, size, KUAP_WRITE);
+	allow_user_access(to, KUAP_WRITE);
 }
 
 static __always_inline void allow_read_write_user(void __user *to, const void __user *from,
 						  unsigned long size)
 {
-	allow_user_access(to, from, size, KUAP_READ_WRITE);
+	allow_user_access(to, KUAP_READ_WRITE);
 }
 
 static __always_inline void prevent_read_from_user(const void __user *from, unsigned long size)
diff --git a/arch/powerpc/include/asm/nohash/32/kup-8xx.h b/arch/powerpc/include/asm/nohash/32/kup-8xx.h
index 46bc5925e5fd..86621fee746d 100644
--- a/arch/powerpc/include/asm/nohash/32/kup-8xx.h
+++ b/arch/powerpc/include/asm/nohash/32/kup-8xx.h
@@ -49,8 +49,7 @@ static __always_inline void uaccess_end_8xx(void)
 	    "i"(SPRN_MD_AP), "r"(MD_APG_KUAP), "i"(MMU_FTR_KUAP) : "memory");
 }
 
-static __always_inline void allow_user_access(void __user *to, const void __user *from,
-					      unsigned long size, unsigned long dir)
+static __always_inline void allow_user_access(void __user *to, unsigned long dir)
 {
 	uaccess_begin_8xx(MD_APG_INIT);
 }
diff --git a/arch/powerpc/include/asm/nohash/kup-booke.h b/arch/powerpc/include/asm/nohash/kup-booke.h
index 0c7c3258134c..a8fab0349704 100644
--- a/arch/powerpc/include/asm/nohash/kup-booke.h
+++ b/arch/powerpc/include/asm/nohash/kup-booke.h
@@ -73,8 +73,7 @@ static __always_inline void uaccess_end_booke(void)
 	    "i"(SPRN_PID), "r"(0), "i"(MMU_FTR_KUAP) : "memory");
 }
 
-static __always_inline void allow_user_access(void __user *to, const void __user *from,
-					      unsigned long size, unsigned long dir)
+static __always_inline void allow_user_access(void __user *to, unsigned long dir)
 {
 	uaccess_begin_booke(current->thread.pid);
 }
-- 
2.49.0


* [PATCH v2 06/10] powerpc/uaccess: Remove {allow/prevent}_{read/write/read_write}_{from/to/}_user()
  2025-08-22  9:57 [PATCH v2 00/10] powerpc: Implement masked user access Christophe Leroy
                   ` (4 preceding siblings ...)
  2025-08-22  9:58 ` [PATCH v2 05/10] powerpc/uaccess: Remove unused size and from parameters from allow_access_user() Christophe Leroy
@ 2025-08-22  9:58 ` Christophe Leroy
  2025-08-22  9:58 ` [PATCH v2 07/10] powerpc/uaccess: Refactor user_{read/write/}_access_begin() Christophe Leroy
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 19+ messages in thread
From: Christophe Leroy @ 2025-08-22  9:58 UTC (permalink / raw)
  To: Michael Ellerman, Nicholas Piggin, Madhavan Srinivasan,
	Alexander Viro, Christian Brauner, Jan Kara, Thomas Gleixner,
	Ingo Molnar, Peter Zijlstra, Darren Hart, Davidlohr Bueso,
	Andre Almeida, Andrew Morton, David Laight, Dave Hansen,
	Linus Torvalds, Daniel Borkmann
  Cc: Christophe Leroy, linux-kernel, linuxppc-dev, linux-fsdevel,
	linux-mm, linux-block

The following six functions have become simple single-line functions
that no longer add much value:
- allow_read_from_user()
- allow_write_to_user()
- allow_read_write_user()
- prevent_read_from_user()
- prevent_write_to_user()
- prevent_read_write_user()

Directly call allow_user_access() and prevent_user_access() instead:
it doesn't reduce readability and it removes unnecessary intermediate
functions.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
v2: New
---
 arch/powerpc/include/asm/kup.h     | 47 ------------------------------
 arch/powerpc/include/asm/uaccess.h | 30 +++++++++----------
 2 files changed, 15 insertions(+), 62 deletions(-)

diff --git a/arch/powerpc/include/asm/kup.h b/arch/powerpc/include/asm/kup.h
index da5f5b47cca0..892ad06bdd8c 100644
--- a/arch/powerpc/include/asm/kup.h
+++ b/arch/powerpc/include/asm/kup.h
@@ -131,53 +131,6 @@ static __always_inline void kuap_assert_locked(void)
 		kuap_get_and_assert_locked();
 }
 
-static __always_inline void allow_read_from_user(const void __user *from, unsigned long size)
-{
-	allow_user_access(NULL, KUAP_READ);
-}
-
-static __always_inline void allow_write_to_user(void __user *to, unsigned long size)
-{
-	allow_user_access(to, KUAP_WRITE);
-}
-
-static __always_inline void allow_read_write_user(void __user *to, const void __user *from,
-						  unsigned long size)
-{
-	allow_user_access(to, KUAP_READ_WRITE);
-}
-
-static __always_inline void prevent_read_from_user(const void __user *from, unsigned long size)
-{
-	prevent_user_access(KUAP_READ);
-}
-
-static __always_inline void prevent_write_to_user(void __user *to, unsigned long size)
-{
-	prevent_user_access(KUAP_WRITE);
-}
-
-static __always_inline void prevent_read_write_user(void __user *to, const void __user *from,
-						    unsigned long size)
-{
-	prevent_user_access(KUAP_READ_WRITE);
-}
-
-static __always_inline void prevent_current_access_user(void)
-{
-	prevent_user_access(KUAP_READ_WRITE);
-}
-
-static __always_inline void prevent_current_read_from_user(void)
-{
-	prevent_user_access(KUAP_READ);
-}
-
-static __always_inline void prevent_current_write_to_user(void)
-{
-	prevent_user_access(KUAP_WRITE);
-}
-
 #endif /* !__ASSEMBLY__ */
 
 #endif /* _ASM_POWERPC_KUAP_H_ */
diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index 3987a5c33558..698996f34891 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -45,14 +45,14 @@
 	do {							\
 		__label__ __pu_failed;				\
 								\
-		allow_write_to_user(__pu_addr, __pu_size);	\
+		allow_user_access(__pu_addr, KUAP_WRITE);	\
 		__put_user_size_goto(__pu_val, __pu_addr, __pu_size, __pu_failed);	\
-		prevent_write_to_user(__pu_addr, __pu_size);	\
+		prevent_user_access(KUAP_WRITE);		\
 		__pu_err = 0;					\
 		break;						\
 								\
 __pu_failed:							\
-		prevent_write_to_user(__pu_addr, __pu_size);	\
+		prevent_user_access(KUAP_WRITE);		\
 		__pu_err = -EFAULT;				\
 	} while (0);						\
 								\
@@ -302,9 +302,9 @@ do {								\
 								\
 	might_fault();					\
 	barrier_nospec();					\
-	allow_read_from_user(__gu_addr, __gu_size);		\
+	allow_user_access(NULL, KUAP_READ);		\
 	__get_user_size_allowed(__gu_val, __gu_addr, __gu_size, __gu_err);	\
-	prevent_read_from_user(__gu_addr, __gu_size);		\
+	prevent_user_access(KUAP_READ);				\
 	(x) = (__typeof__(*(ptr)))__gu_val;			\
 								\
 	__gu_err;						\
@@ -331,9 +331,9 @@ raw_copy_in_user(void __user *to, const void __user *from, unsigned long n)
 	unsigned long ret;
 
 	barrier_nospec();
-	allow_read_write_user(to, from, n);
+	allow_user_access(to, KUAP_READ_WRITE);
 	ret = __copy_tofrom_user(to, from, n);
-	prevent_read_write_user(to, from, n);
+	prevent_user_access(KUAP_READ_WRITE);
 	return ret;
 }
 #endif /* __powerpc64__ */
@@ -343,9 +343,9 @@ static inline unsigned long raw_copy_from_user(void *to,
 {
 	unsigned long ret;
 
-	allow_read_from_user(from, n);
+	allow_user_access(NULL, KUAP_READ);
 	ret = __copy_tofrom_user((__force void __user *)to, from, n);
-	prevent_read_from_user(from, n);
+	prevent_user_access(KUAP_READ);
 	return ret;
 }
 
@@ -354,9 +354,9 @@ raw_copy_to_user(void __user *to, const void *from, unsigned long n)
 {
 	unsigned long ret;
 
-	allow_write_to_user(to, n);
+	allow_user_access(to, KUAP_WRITE);
 	ret = __copy_tofrom_user(to, (__force const void __user *)from, n);
-	prevent_write_to_user(to, n);
+	prevent_user_access(KUAP_WRITE);
 	return ret;
 }
 
@@ -367,9 +367,9 @@ static inline unsigned long __clear_user(void __user *addr, unsigned long size)
 	unsigned long ret;
 
 	might_fault();
-	allow_write_to_user(addr, size);
+	allow_user_access(addr, KUAP_WRITE);
 	ret = __arch_clear_user(addr, size);
-	prevent_write_to_user(addr, size);
+	prevent_user_access(KUAP_WRITE);
 	return ret;
 }
 
@@ -397,9 +397,9 @@ copy_mc_to_user(void __user *to, const void *from, unsigned long n)
 {
 	if (check_copy_size(from, n, true)) {
 		if (access_ok(to, n)) {
-			allow_write_to_user(to, n);
+			allow_user_access(to, KUAP_WRITE);
 			n = copy_mc_generic((void __force *)to, from, n);
-			prevent_write_to_user(to, n);
+			prevent_user_access(KUAP_WRITE);
 		}
 	}
 
-- 
2.49.0


* [PATCH v2 07/10] powerpc/uaccess: Refactor user_{read/write/}_access_begin()
  2025-08-22  9:57 [PATCH v2 00/10] powerpc: Implement masked user access Christophe Leroy
                   ` (5 preceding siblings ...)
  2025-08-22  9:58 ` [PATCH v2 06/10] powerpc/uaccess: Remove {allow/prevent}_{read/write/read_write}_{from/to/}_user() Christophe Leroy
@ 2025-08-22  9:58 ` Christophe Leroy
  2025-08-22  9:58 ` [PATCH v2 08/10] powerpc/32s: Fix segments setup when TASK_SIZE is not a multiple of 256M Christophe Leroy
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 19+ messages in thread
From: Christophe Leroy @ 2025-08-22  9:58 UTC (permalink / raw)
  To: Michael Ellerman, Nicholas Piggin, Madhavan Srinivasan,
	Alexander Viro, Christian Brauner, Jan Kara, Thomas Gleixner,
	Ingo Molnar, Peter Zijlstra, Darren Hart, Davidlohr Bueso,
	Andre Almeida, Andrew Morton, David Laight, Dave Hansen,
	Linus Torvalds, Daniel Borkmann
  Cc: Christophe Leroy, linux-kernel, linuxppc-dev, linux-fsdevel,
	linux-mm, linux-block

user_read_access_begin(), user_write_access_begin() and
user_access_begin() are now very similar. Create a common
__user_access_begin() that takes the direction as a parameter.

In order to avoid a warning with the conditional call of
barrier_nospec(), which is sometimes an empty macro, change that
macro to a do {} while (0).
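
As a sketch of the warning being avoided, with an empty macro the
conditional call would expand to an empty statement, which compilers
typically flag (e.g. -Wempty-body):

	#define barrier_nospec()	/* empty macro */

	if (dir & KUAP_READ)
		barrier_nospec();	/* expands to 'if (...) ;' */

The do {} while (0) form keeps the body non-empty.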

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
v2: New
---
 arch/powerpc/include/asm/barrier.h |  2 +-
 arch/powerpc/include/asm/uaccess.h | 46 +++++++++---------------------
 2 files changed, 14 insertions(+), 34 deletions(-)

diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
index b95b666f0374..7acbf27cac6c 100644
--- a/arch/powerpc/include/asm/barrier.h
+++ b/arch/powerpc/include/asm/barrier.h
@@ -102,7 +102,7 @@ do {									\
 
 #else /* !CONFIG_PPC_BARRIER_NOSPEC */
 #define barrier_nospec_asm
-#define barrier_nospec()
+#define barrier_nospec()	do {} while (0)
 #endif /* CONFIG_PPC_BARRIER_NOSPEC */
 
 /*
diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index 698996f34891..49254f7d9069 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -410,50 +410,30 @@ copy_mc_to_user(void __user *to, const void *from, unsigned long n)
 extern long __copy_from_user_flushcache(void *dst, const void __user *src,
 		unsigned size);
 
-static __must_check __always_inline bool user_access_begin(const void __user *ptr, size_t len)
+static __must_check __always_inline bool __user_access_begin(const void __user *ptr, size_t len,
+							     unsigned long dir)
 {
 	if (unlikely(!access_ok(ptr, len)))
 		return false;
 
 	might_fault();
 
-	barrier_nospec();
-	allow_read_write_user((void __user *)ptr, ptr, len);
+	if (dir & KUAP_READ)
+		barrier_nospec();
+	allow_user_access((void __user *)ptr, dir);
 	return true;
 }
-#define user_access_begin	user_access_begin
-#define user_access_end		prevent_current_access_user
-#define user_access_save	prevent_user_access_return
-#define user_access_restore	restore_user_access
 
-static __must_check __always_inline bool
-user_read_access_begin(const void __user *ptr, size_t len)
-{
-	if (unlikely(!access_ok(ptr, len)))
-		return false;
+#define user_access_begin(p, l)		__user_access_begin(p, l, KUAP_READ_WRITE)
+#define user_read_access_begin(p, l)	__user_access_begin(p, l, KUAP_READ)
+#define user_write_access_begin(p, l)	__user_access_begin(p, l, KUAP_WRITE)
 
-	might_fault();
-
-	barrier_nospec();
-	allow_read_from_user(ptr, len);
-	return true;
-}
-#define user_read_access_begin	user_read_access_begin
-#define user_read_access_end		prevent_current_read_from_user
+#define user_access_end()		prevent_user_access(KUAP_READ_WRITE)
+#define user_read_access_end()		prevent_user_access(KUAP_READ)
+#define user_write_access_end()		prevent_user_access(KUAP_WRITE)
 
-static __must_check __always_inline bool
-user_write_access_begin(const void __user *ptr, size_t len)
-{
-	if (unlikely(!access_ok(ptr, len)))
-		return false;
-
-	might_fault();
-
-	allow_write_to_user((void __user *)ptr, len);
-	return true;
-}
-#define user_write_access_begin	user_write_access_begin
-#define user_write_access_end		prevent_current_write_to_user
+#define user_access_save	prevent_user_access_return
+#define user_access_restore	restore_user_access
 
 #define unsafe_get_user(x, p, e) do {					\
 	__long_type(*(p)) __gu_val;				\
-- 
2.49.0


* [PATCH v2 08/10] powerpc/32s: Fix segments setup when TASK_SIZE is not a multiple of 256M
  2025-08-22  9:57 [PATCH v2 00/10] powerpc: Implement masked user access Christophe Leroy
                   ` (6 preceding siblings ...)
  2025-08-22  9:58 ` [PATCH v2 07/10] powerpc/uaccess: Refactor user_{read/write/}_access_begin() Christophe Leroy
@ 2025-08-22  9:58 ` Christophe Leroy
  2025-08-22  9:58 ` [PATCH v2 09/10] powerpc/32: Automatically adapt TASK_SIZE based on constraints Christophe Leroy
  2025-08-22  9:58 ` [PATCH v2 10/10] powerpc/uaccess: Implement masked user access Christophe Leroy
  9 siblings, 0 replies; 19+ messages in thread
From: Christophe Leroy @ 2025-08-22  9:58 UTC (permalink / raw)
  To: Michael Ellerman, Nicholas Piggin, Madhavan Srinivasan,
	Alexander Viro, Christian Brauner, Jan Kara, Thomas Gleixner,
	Ingo Molnar, Peter Zijlstra, Darren Hart, Davidlohr Bueso,
	Andre Almeida, Andrew Morton, David Laight, Dave Hansen,
	Linus Torvalds, Daniel Borkmann
  Cc: Christophe Leroy, linux-kernel, linuxppc-dev, linux-fsdevel,
	linux-mm, linux-block

For book3s/32 it is assumed that TASK_SIZE is a multiple of 256 Mbytes,
but Kconfig allows any value for TASK_SIZE.

In all relevant calculations, round TASK_SIZE up to the next 256 Mbytes
boundary.

Also use ASM_CONST() in the definition of TASK_SIZE to ensure it is
seen as an unsigned constant.
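
For example, with an illustrative TASK_SIZE of 0xb4000000, which is
not a multiple of 256M:

	ALIGN(0xb4000000, SZ_256M) = 0xc0000000
	NUM_USER_SEGMENTS = 0xc0000000 >> 28 = 12

so the partially used twelfth segment is still handled as a user
segment.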

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
 arch/powerpc/include/asm/book3s/32/mmu-hash.h | 5 ++++-
 arch/powerpc/include/asm/task_size_32.h       | 2 +-
 arch/powerpc/kernel/asm-offsets.c             | 2 +-
 arch/powerpc/kernel/head_book3s_32.S          | 6 +++---
 arch/powerpc/mm/book3s32/mmu.c                | 2 +-
 arch/powerpc/mm/ptdump/segment_regs.c         | 2 +-
 6 files changed, 11 insertions(+), 8 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/32/mmu-hash.h b/arch/powerpc/include/asm/book3s/32/mmu-hash.h
index 78c6a5fde1d6..df00be5b4044 100644
--- a/arch/powerpc/include/asm/book3s/32/mmu-hash.h
+++ b/arch/powerpc/include/asm/book3s/32/mmu-hash.h
@@ -192,12 +192,15 @@ extern s32 patch__hash_page_B, patch__hash_page_C;
 extern s32 patch__flush_hash_A0, patch__flush_hash_A1, patch__flush_hash_A2;
 extern s32 patch__flush_hash_B;
 
+#include <linux/sizes.h>
+#include <linux/align.h>
+
 #include <asm/reg.h>
 #include <asm/task_size_32.h>
 
 static __always_inline void update_user_segment(u32 n, u32 val)
 {
-	if (n << 28 < TASK_SIZE)
+	if (n << 28 < ALIGN(TASK_SIZE, SZ_256M))
 		mtsr(val + n * 0x111, n << 28);
 }
 
diff --git a/arch/powerpc/include/asm/task_size_32.h b/arch/powerpc/include/asm/task_size_32.h
index de7290ee770f..30edc21f71fb 100644
--- a/arch/powerpc/include/asm/task_size_32.h
+++ b/arch/powerpc/include/asm/task_size_32.h
@@ -6,7 +6,7 @@
 #error User TASK_SIZE overlaps with KERNEL_START address
 #endif
 
-#define TASK_SIZE (CONFIG_TASK_SIZE)
+#define TASK_SIZE ASM_CONST(CONFIG_TASK_SIZE)
 
 /*
  * This decides where the kernel will search for a free chunk of vm space during
diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index b3048f6d3822..2c7fadddae4a 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -330,7 +330,7 @@ int main(void)
 
 #ifndef CONFIG_PPC64
 	DEFINE(TASK_SIZE, TASK_SIZE);
-	DEFINE(NUM_USER_SEGMENTS, TASK_SIZE>>28);
+	DEFINE(NUM_USER_SEGMENTS, ALIGN(TASK_SIZE, SZ_256M) >> 28);
 #endif /* ! CONFIG_PPC64 */
 
 	/* datapage offsets for use by vdso */
diff --git a/arch/powerpc/kernel/head_book3s_32.S b/arch/powerpc/kernel/head_book3s_32.S
index cb2bca76be53..c1779455ea32 100644
--- a/arch/powerpc/kernel/head_book3s_32.S
+++ b/arch/powerpc/kernel/head_book3s_32.S
@@ -420,7 +420,7 @@ InstructionTLBMiss:
 	lwz	r2,0(r2)		/* get pmd entry */
 #ifdef CONFIG_EXECMEM
 	rlwinm	r3, r0, 4, 0xf
-	subi	r3, r3, (TASK_SIZE >> 28) & 0xf
+	subi	r3, r3, NUM_USER_SEGMENTS
 #endif
 	rlwinm.	r2,r2,0,0,19		/* extract address of pte page */
 	beq-	InstructionAddressInvalid	/* return if no mapping */
@@ -475,7 +475,7 @@ DataLoadTLBMiss:
 	lwz	r2,0(r1)		/* get pmd entry */
 	rlwinm	r3, r0, 4, 0xf
 	rlwinm.	r2,r2,0,0,19		/* extract address of pte page */
-	subi	r3, r3, (TASK_SIZE >> 28) & 0xf
+	subi	r3, r3, NUM_USER_SEGMENTS
 	beq-	2f			/* bail if no mapping */
 1:	rlwimi	r2,r0,22,20,29		/* insert next 10 bits of address */
 	lwz	r2,0(r2)		/* get linux-style pte */
@@ -554,7 +554,7 @@ DataStoreTLBMiss:
 	lwz	r2,0(r1)		/* get pmd entry */
 	rlwinm	r3, r0, 4, 0xf
 	rlwinm.	r2,r2,0,0,19		/* extract address of pte page */
-	subi	r3, r3, (TASK_SIZE >> 28) & 0xf
+	subi	r3, r3, NUM_USER_SEGMENTS
 	beq-	2f			/* bail if no mapping */
 1:
 	rlwimi	r2,r0,22,20,29		/* insert next 10 bits of address */
diff --git a/arch/powerpc/mm/book3s32/mmu.c b/arch/powerpc/mm/book3s32/mmu.c
index be9c4106e22f..afc9b5cac5a6 100644
--- a/arch/powerpc/mm/book3s32/mmu.c
+++ b/arch/powerpc/mm/book3s32/mmu.c
@@ -225,7 +225,7 @@ int mmu_mark_initmem_nx(void)
 
 	BUILD_BUG_ON(ALIGN_DOWN(MODULES_VADDR, SZ_256M) < TASK_SIZE);
 
-	for (i = TASK_SIZE >> 28; i < 16; i++) {
+	for (i = ALIGN(TASK_SIZE, SZ_256M) >> 28; i < 16; i++) {
 		/* Do not set NX on VM space for modules */
 		if (is_module_segment(i << 28))
 			continue;
diff --git a/arch/powerpc/mm/ptdump/segment_regs.c b/arch/powerpc/mm/ptdump/segment_regs.c
index 9df3af8d481f..c06704b18a2c 100644
--- a/arch/powerpc/mm/ptdump/segment_regs.c
+++ b/arch/powerpc/mm/ptdump/segment_regs.c
@@ -31,7 +31,7 @@ static int sr_show(struct seq_file *m, void *v)
 	int i;
 
 	seq_puts(m, "---[ User Segments ]---\n");
-	for (i = 0; i < TASK_SIZE >> 28; i++)
+	for (i = 0; i < ALIGN(TASK_SIZE, SZ_256M) >> 28; i++)
 		seg_show(m, i);
 
 	seq_puts(m, "\n---[ Kernel Segments ]---\n");
-- 
2.49.0


* [PATCH v2 09/10] powerpc/32: Automatically adapt TASK_SIZE based on constraints
  2025-08-22  9:57 [PATCH v2 00/10] powerpc: Implement masked user access Christophe Leroy
                   ` (7 preceding siblings ...)
  2025-08-22  9:58 ` [PATCH v2 08/10] powerpc/32s: Fix segments setup when TASK_SIZE is not a multiple of 256M Christophe Leroy
@ 2025-08-22  9:58 ` Christophe Leroy
  2025-08-22 12:04   ` David Laight
  2025-08-22  9:58 ` [PATCH v2 10/10] powerpc/uaccess: Implement masked user access Christophe Leroy
  9 siblings, 1 reply; 19+ messages in thread
From: Christophe Leroy @ 2025-08-22  9:58 UTC (permalink / raw)
  To: Michael Ellerman, Nicholas Piggin, Madhavan Srinivasan,
	Alexander Viro, Christian Brauner, Jan Kara, Thomas Gleixner,
	Ingo Molnar, Peter Zijlstra, Darren Hart, Davidlohr Bueso,
	Andre Almeida, Andrew Morton, David Laight, Dave Hansen,
	Linus Torvalds, Daniel Borkmann
  Cc: Christophe Leroy, linux-kernel, linuxppc-dev, linux-fsdevel,
	linux-mm, linux-block

Currently, TASK_SIZE can be customized by the user via Kconfig,
but it is not possible to check all constraints in Kconfig. Impossible
setups are detected at compile time with BUILD_BUG() but that leads
to build failure when setting crazy values. It is not a problem on its
own because the user will usually either use the default value or set
a well-thought-out value. However, build robots generate crazy random
configs that lead to build failures, and build robots see it as a
regression every time a patch adds such a constraint.

So instead of failing the build when the custom TASK_SIZE is too
big, just adjust it to the maximum possible value matching the setup.

Several architectures already calculate TASK_SIZE based on other
parameters and options.

In order to do so, move MODULES_VADDR calculation into task_size_32.h
and ensure that:
- On book3s/32, userspace and module area have their own segments (256M)
- On 8xx, userspace has its own full PGDIR entries (4M)

TASK_SIZE is then guaranteed to be correct, so remove the related
BUILD_BUG()s.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
 arch/powerpc/Kconfig                         |  3 +--
 arch/powerpc/include/asm/book3s/32/pgtable.h |  4 ---
 arch/powerpc/include/asm/nohash/32/mmu-8xx.h |  4 ---
 arch/powerpc/include/asm/task_size_32.h      | 26 ++++++++++++++++++++
 arch/powerpc/mm/book3s32/mmu.c               |  2 --
 arch/powerpc/mm/mem.c                        |  2 --
 arch/powerpc/mm/nohash/8xx.c                 |  2 --
 7 files changed, 27 insertions(+), 16 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 93402a1d9c9f..74e514577ee5 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -1296,9 +1296,8 @@ config TASK_SIZE_BOOL
 	  Say N here unless you know what you are doing.
 
 config TASK_SIZE
-	hex "Size of user task space" if TASK_SIZE_BOOL
+	hex "Size of maximum user task space" if TASK_SIZE_BOOL
 	default "0x80000000" if PPC_8xx
-	default "0xb0000000" if PPC_BOOK3S_32 && EXECMEM
 	default "0xc0000000"
 
 config MODULES_SIZE_BOOL
diff --git a/arch/powerpc/include/asm/book3s/32/pgtable.h b/arch/powerpc/include/asm/book3s/32/pgtable.h
index 92d21c6faf1e..d02d50ca0387 100644
--- a/arch/powerpc/include/asm/book3s/32/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/32/pgtable.h
@@ -195,10 +195,6 @@ void unmap_kernel_page(unsigned long va);
 #define VMALLOC_END	ioremap_bot
 #endif
 
-#define MODULES_END	ALIGN_DOWN(PAGE_OFFSET, SZ_256M)
-#define MODULES_SIZE	(CONFIG_MODULES_SIZE * SZ_1M)
-#define MODULES_VADDR	(MODULES_END - MODULES_SIZE)
-
 #ifndef __ASSEMBLY__
 #include <linux/sched.h>
 #include <linux/threads.h>
diff --git a/arch/powerpc/include/asm/nohash/32/mmu-8xx.h b/arch/powerpc/include/asm/nohash/32/mmu-8xx.h
index 2986f9ba40b8..866574655ffe 100644
--- a/arch/powerpc/include/asm/nohash/32/mmu-8xx.h
+++ b/arch/powerpc/include/asm/nohash/32/mmu-8xx.h
@@ -170,10 +170,6 @@
 
 #define mmu_linear_psize	MMU_PAGE_8M
 
-#define MODULES_END	PAGE_OFFSET
-#define MODULES_SIZE	(CONFIG_MODULES_SIZE * SZ_1M)
-#define MODULES_VADDR	(MODULES_END - MODULES_SIZE)
-
 #ifndef __ASSEMBLY__
 
 #include <linux/mmdebug.h>
diff --git a/arch/powerpc/include/asm/task_size_32.h b/arch/powerpc/include/asm/task_size_32.h
index 30edc21f71fb..42a64bbd1964 100644
--- a/arch/powerpc/include/asm/task_size_32.h
+++ b/arch/powerpc/include/asm/task_size_32.h
@@ -2,11 +2,37 @@
 #ifndef _ASM_POWERPC_TASK_SIZE_32_H
 #define _ASM_POWERPC_TASK_SIZE_32_H
 
+#include <linux/sizes.h>
+
 #if CONFIG_TASK_SIZE > CONFIG_KERNEL_START
 #error User TASK_SIZE overlaps with KERNEL_START address
 #endif
 
+#ifdef CONFIG_PPC_8xx
+#define MODULES_END	ASM_CONST(CONFIG_PAGE_OFFSET)
+#define MODULES_SIZE	(CONFIG_MODULES_SIZE * SZ_1M)
+#define MODULES_VADDR	(MODULES_END - MODULES_SIZE)
+#define MODULES_BASE	(MODULES_VADDR & ~(UL(SZ_4M) - 1))
+#define USER_TOP	MODULES_BASE
+#endif
+
+#ifdef CONFIG_PPC_BOOK3S_32
+#define MODULES_END	(ASM_CONST(CONFIG_PAGE_OFFSET) & ~(UL(SZ_256M) - 1))
+#define MODULES_SIZE	(CONFIG_MODULES_SIZE * SZ_1M)
+#define MODULES_VADDR	(MODULES_END - MODULES_SIZE)
+#define MODULES_BASE	(MODULES_VADDR & ~(UL(SZ_256M) - 1))
+#define USER_TOP	MODULES_BASE
+#endif
+
+#ifndef USER_TOP
+#define USER_TOP	ASM_CONST(CONFIG_PAGE_OFFSET)
+#endif
+
+#if CONFIG_TASK_SIZE < USER_TOP
 #define TASK_SIZE ASM_CONST(CONFIG_TASK_SIZE)
+#else
+#define TASK_SIZE USER_TOP
+#endif
 
 /*
  * This decides where the kernel will search for a free chunk of vm space during
diff --git a/arch/powerpc/mm/book3s32/mmu.c b/arch/powerpc/mm/book3s32/mmu.c
index afc9b5cac5a6..35ef3a117d3f 100644
--- a/arch/powerpc/mm/book3s32/mmu.c
+++ b/arch/powerpc/mm/book3s32/mmu.c
@@ -223,8 +223,6 @@ int mmu_mark_initmem_nx(void)
 
 	update_bats();
 
-	BUILD_BUG_ON(ALIGN_DOWN(MODULES_VADDR, SZ_256M) < TASK_SIZE);
-
 	for (i = ALIGN(TASK_SIZE, SZ_256M) >> 28; i < 16; i++) {
 		/* Do not set NX on VM space for modules */
 		if (is_module_segment(i << 28))
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 3ddbfdbfa941..bc0f1a9eb0bc 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -401,8 +401,6 @@ struct execmem_info __init *execmem_arch_setup(void)
 #ifdef MODULES_VADDR
 	unsigned long limit = (unsigned long)_etext - SZ_32M;
 
-	BUILD_BUG_ON(TASK_SIZE > MODULES_VADDR);
-
 	/* First try within 32M limit from _etext to avoid branch trampolines */
 	if (MODULES_VADDR < PAGE_OFFSET && MODULES_END > limit) {
 		start = limit;
diff --git a/arch/powerpc/mm/nohash/8xx.c b/arch/powerpc/mm/nohash/8xx.c
index ab1505cf42bf..a9d3f4729ead 100644
--- a/arch/powerpc/mm/nohash/8xx.c
+++ b/arch/powerpc/mm/nohash/8xx.c
@@ -209,8 +209,6 @@ void __init setup_initial_memory_limit(phys_addr_t first_memblock_base,
 
 	/* 8xx can only access 32MB at the moment */
 	memblock_set_current_limit(min_t(u64, first_memblock_size, SZ_32M));
-
-	BUILD_BUG_ON(ALIGN_DOWN(MODULES_VADDR, PGDIR_SIZE) < TASK_SIZE);
 }
 
 int pud_clear_huge(pud_t *pud)
-- 
2.49.0


* [PATCH v2 10/10] powerpc/uaccess: Implement masked user access
  2025-08-22  9:57 [PATCH v2 00/10] powerpc: Implement masked user access Christophe Leroy
                   ` (8 preceding siblings ...)
  2025-08-22  9:58 ` [PATCH v2 09/10] powerpc/32: Automatically adapt TASK_SIZE based on constraints Christophe Leroy
@ 2025-08-22  9:58 ` Christophe Leroy
  2025-08-25  9:04   ` Gabriel Paubert
  9 siblings, 1 reply; 19+ messages in thread
From: Christophe Leroy @ 2025-08-22  9:58 UTC (permalink / raw)
  To: Michael Ellerman, Nicholas Piggin, Madhavan Srinivasan,
	Alexander Viro, Christian Brauner, Jan Kara, Thomas Gleixner,
	Ingo Molnar, Peter Zijlstra, Darren Hart, Davidlohr Bueso,
	Andre Almeida, Andrew Morton, David Laight, Dave Hansen,
	Linus Torvalds, Daniel Borkmann
  Cc: Christophe Leroy, linux-kernel, linuxppc-dev, linux-fsdevel,
	linux-mm, linux-block

Masked user access avoids the address/size verification by access_ok().
Although its main purpose is to skip the speculation in the
verification of the user address and size, and hence avoid the need for
speculation mitigation, it also has the advantage of reducing the number
of instructions required, so it even benefits platforms that don't
need speculation mitigation, especially when the size of the copy is
not known at build time.

So implement masked user access on powerpc. The only requirement is
to have a memory gap that faults between the top of user space and the
real start of the kernel area.

On 64-bit platforms the address space is divided as follows:

	0xffffffffffffffff	+------------------+
				|                  |
				|   kernel space   |
				|                  |
	0xc000000000000000	+------------------+  <== PAGE_OFFSET
				|//////////////////|
				|//////////////////|
	0x8000000000000000	|//////////////////|
				|//////////////////|
				|//////////////////|
	0x0010000000000000	+------------------+  <== TASK_SIZE_MAX
				|                  |
				|    user space    |
				|                  |
	0x0000000000000000	+------------------+

The kernel is always above 0x8000000000000000 and user space always
below, with a gap in-between. This leads to a 4-instruction sequence:

  80:	7c 69 1b 78 	mr      r9,r3
  84:	7c 63 fe 76 	sradi   r3,r3,63
  88:	7d 29 18 78 	andc    r9,r9,r3
  8c:	79 23 00 4c 	rldimi  r3,r9,0,1

This sequence leaves r3 unmodified when it is below 0x8000000000000000
and clamps it to 0x8000000000000000 if it is above.
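
A worked example of what mask_user_address_simple() computes:

	mask = (long)addr >> 63;	/* 0 for user, ~0UL for kernel */

	addr = 0x00001234567890ab: mask = 0, the address is unchanged
	addr = 0xc000000000000000: mask = ~0UL, (addr & ~mask) = 0 and
	                           OR-ing in the top bit gives
	                           0x8000000000000000, which faults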

On 32 bits it is more tricky: in theory user space can go up to
0xbfffffff while the kernel will usually start at 0xc0000000, so a gap
needs to be added in-between. Although in theory a single 4k page
would suffice, it is easier and more efficient to enforce a 128k gap
below the kernel, as it simplifies the masking.

e500 has the isel instruction, which allows selecting one value or
the other without a branch, and that instruction is not speculative,
so use it. Although GCC usually generates code using that instruction,
it is safer to use inline assembly to be sure. The result is:

  14:	3d 20 bf fe 	lis     r9,-16386
  18:	7c 03 48 40 	cmplw   r3,r9
  1c:	7c 69 18 5e 	iselgt  r3,r9,r3

On the other platforms, when kernel space is above 0x80000000 and user
space is below it, the logic in mask_user_address_simple() leads to a
3-instruction sequence:

  14:	7c 69 fe 70 	srawi   r9,r3,31
  18:	7c 63 48 78 	andc    r3,r3,r9
  1c:	51 23 00 00 	rlwimi  r3,r9,0,0,0

This is the default on powerpc 8xx.

When the limit between user space and kernel space is not 0x80000000,
mask_user_address_32() is used and a 6-instruction sequence is
generated:

  24:	54 69 7c 7e 	srwi    r9,r3,17
  28:	21 29 57 ff 	subfic  r9,r9,22527
  2c:	7d 29 fe 70 	srawi   r9,r9,31
  30:	75 2a b0 00 	andis.  r10,r9,45056
  34:	7c 63 48 78 	andc    r3,r3,r9
  38:	7c 63 53 78 	or      r3,r3,r10
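
The 22527 constant above corresponds to TASK_SIZE = 0xb0000000
(TASK_SIZE >> 17 = 22528); the sequence computes:

	mask = (long)((TASK_SIZE >> 17) - 1 - (addr >> 17)) >> 31;

	addr <  TASK_SIZE: the subtraction is >= 0, mask = 0 and the
	                   address is returned unchanged
	addr >= TASK_SIZE: the subtraction is negative, mask = ~0UL and
	                   the address is replaced by TASK_SIZE, which
	                   faults thanks to the gap below the kernel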

The constraint is that TASK_SIZE must be aligned to 128K in order to
get the optimal number of instructions.

When CONFIG_PPC_BARRIER_NOSPEC is not defined, fall back on the
test-based masking, as it is quicker than the 6-instruction sequence
but not quicker than the 3-instruction sequences above.

As an example, although barrier_nospec() is a no-op on the 8xx, this
change has the following impact on strncpy_from_user(): the length of
the function is reduced from 488 to 340 bytes.

Start of the function with the patch:

00000000 <strncpy_from_user>:
   0:	7c ab 2b 79 	mr.     r11,r5
   4:	40 81 01 48 	ble     14c <strncpy_from_user+0x14c>
   8:	7c 89 fe 70 	srawi   r9,r4,31
   c:	7c 84 48 78 	andc    r4,r4,r9
  10:	51 24 00 00 	rlwimi  r4,r9,0,0,0
  14:	94 21 ff f0 	stwu    r1,-16(r1)
  18:	3d 20 dc 00 	lis     r9,-9216
  1c:	7d 3a c3 a6 	mtspr   794,r9
  20:	2f 8b 00 03 	cmpwi   cr7,r11,3
  24:	40 9d 00 b8 	ble     cr7,dc <strncpy_from_user+0xdc>
...

Start of the function without the patch:

00000000 <strncpy_from_user>:
   0:	7c a0 2b 79 	mr.     r0,r5
   4:	40 81 01 10 	ble     114 <strncpy_from_user+0x114>
   8:	2f 84 00 00 	cmpwi   cr7,r4,0
   c:	41 9c 01 30 	blt     cr7,13c <strncpy_from_user+0x13c>
  10:	3d 20 80 00 	lis     r9,-32768
  14:	7d 24 48 50 	subf    r9,r4,r9
  18:	7f 80 48 40 	cmplw   cr7,r0,r9
  1c:	7c 05 03 78 	mr      r5,r0
  20:	41 9d 01 00 	bgt     cr7,120 <strncpy_from_user+0x120>
  24:	3d 20 80 00 	lis     r9,-32768
  28:	7d 25 48 50 	subf    r9,r5,r9
  2c:	7f 84 48 40 	cmplw   cr7,r4,r9
  30:	38 e0 ff f2 	li      r7,-14
  34:	41 9d 00 e4 	bgt     cr7,118 <strncpy_from_user+0x118>
  38:	94 21 ff e0 	stwu    r1,-32(r1)
  3c:	3d 20 dc 00 	lis     r9,-9216
  40:	7d 3a c3 a6 	mtspr   794,r9
  44:	2b 85 00 03 	cmplwi  cr7,r5,3
  48:	40 9d 01 6c 	ble     cr7,1b4 <strncpy_from_user+0x1b4>
...
 118:	7c e3 3b 78 	mr      r3,r7
 11c:	4e 80 00 20 	blr
 120:	7d 25 4b 78 	mr      r5,r9
 124:	3d 20 80 00 	lis     r9,-32768
 128:	7d 25 48 50 	subf    r9,r5,r9
 12c:	7f 84 48 40 	cmplw   cr7,r4,r9
 130:	38 e0 ff f2 	li      r7,-14
 134:	41 bd ff e4 	bgt     cr7,118 <strncpy_from_user+0x118>
 138:	4b ff ff 00 	b       38 <strncpy_from_user+0x38>
 13c:	38 e0 ff f2 	li      r7,-14
 140:	4b ff ff d8 	b       118 <strncpy_from_user+0x118>
...

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
v2: Added 'likely()' to the test in mask_user_address_fallback()
---
 arch/powerpc/include/asm/task_size_32.h |  6 +-
 arch/powerpc/include/asm/uaccess.h      | 78 +++++++++++++++++++++++++
 2 files changed, 81 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/include/asm/task_size_32.h b/arch/powerpc/include/asm/task_size_32.h
index 42a64bbd1964..725ddbf06217 100644
--- a/arch/powerpc/include/asm/task_size_32.h
+++ b/arch/powerpc/include/asm/task_size_32.h
@@ -13,7 +13,7 @@
 #define MODULES_SIZE	(CONFIG_MODULES_SIZE * SZ_1M)
 #define MODULES_VADDR	(MODULES_END - MODULES_SIZE)
 #define MODULES_BASE	(MODULES_VADDR & ~(UL(SZ_4M) - 1))
-#define USER_TOP	MODULES_BASE
+#define USER_TOP	(MODULES_BASE - SZ_4M)
 #endif
 
 #ifdef CONFIG_PPC_BOOK3S_32
@@ -21,11 +21,11 @@
 #define MODULES_SIZE	(CONFIG_MODULES_SIZE * SZ_1M)
 #define MODULES_VADDR	(MODULES_END - MODULES_SIZE)
 #define MODULES_BASE	(MODULES_VADDR & ~(UL(SZ_256M) - 1))
-#define USER_TOP	MODULES_BASE
+#define USER_TOP	(MODULES_BASE - SZ_4M)
 #endif
 
 #ifndef USER_TOP
-#define USER_TOP	ASM_CONST(CONFIG_PAGE_OFFSET)
+#define USER_TOP	((ASM_CONST(CONFIG_PAGE_OFFSET) - SZ_128K) & ~(UL(SZ_128K) - 1))
 #endif
 
 #if CONFIG_TASK_SIZE < USER_TOP
diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index 49254f7d9069..0b8e8ed37a14 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -2,6 +2,8 @@
 #ifndef _ARCH_POWERPC_UACCESS_H
 #define _ARCH_POWERPC_UACCESS_H
 
+#include <linux/sizes.h>
+
 #include <asm/processor.h>
 #include <asm/page.h>
 #include <asm/extable.h>
@@ -435,6 +437,82 @@ static __must_check __always_inline bool __user_access_begin(const void __user *
 #define user_access_save	prevent_user_access_return
 #define user_access_restore	restore_user_access
 
+/*
+ * Masking the user address is an alternative to a conditional
+ * user_access_begin that can avoid the fencing. This only works
+ * for dense accesses starting at the address.
+ */
+static inline void __user *mask_user_address_simple(const void __user *ptr)
+{
+	unsigned long addr = (unsigned long)ptr;
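+	/* mask is all ones when the top address bit is set, zero otherwise */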
+	unsigned long mask = (unsigned long)((long)addr >> (BITS_PER_LONG - 1));
+
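+	/* keep addr as-is when the top bit is clear, else clamp to 1UL << (BITS_PER_LONG - 1) */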
+	addr = ((addr & ~mask) & (~0UL >> 1)) | (mask & (1UL << (BITS_PER_LONG - 1)));
+
+	return (void __user *)addr;
+}
+
+static inline void __user *mask_user_address_isel(const void __user *ptr)
+{
+	unsigned long addr;
+
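+	/* branchless select: addr = (ptr > TASK_SIZE) ? TASK_SIZE : ptr */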
+	asm("cmplw %1, %2; iselgt %0, %2, %1" : "=r"(addr) : "r"(ptr), "r"(TASK_SIZE) : "cr0");
+
+	return (void __user *)addr;
+}
+
+/* TASK_SIZE is a multiple of 128K, so addresses can be compared after a right shift by 17 */
+static inline void __user *mask_user_address_32(const void __user *ptr)
+{
+	unsigned long addr = (unsigned long)ptr;
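+	/* mask is all ones when addr >= TASK_SIZE (compared in 128K units), zero otherwise */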
+	unsigned long mask = (unsigned long)((long)((TASK_SIZE >> 17) - 1 - (addr >> 17)) >> 31);
+
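+	/* keep addr when it is below TASK_SIZE, else replace it with TASK_SIZE */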
+	addr = (addr & ~mask) | (TASK_SIZE & mask);
+
+	return (void __user *)addr;
+}
+
+static inline void __user *mask_user_address_fallback(const void __user *ptr)
+{
+	unsigned long addr = (unsigned long)ptr;
+
+	return (void __user *)(likely(addr < TASK_SIZE) ? addr : TASK_SIZE);
+}
+
+static inline void __user *mask_user_address(const void __user *ptr)
+{
+#ifdef MODULES_VADDR
+	const unsigned long border = MODULES_VADDR;
+#else
+	const unsigned long border = PAGE_OFFSET;
+#endif
+
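+	/*
+	 * mask_user_address_simple() clamps to 2G on 32-bit, so it is only
+	 * usable when user space ends at or below 2G and the kernel border
+	 * sits at or above it.
+	 */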
+	if (IS_ENABLED(CONFIG_PPC64))
+		return mask_user_address_simple(ptr);
+	if (IS_ENABLED(CONFIG_E500))
+		return mask_user_address_isel(ptr);
+	if (TASK_SIZE <= UL(SZ_2G) && border >= UL(SZ_2G))
+		return mask_user_address_simple(ptr);
+	if (IS_ENABLED(CONFIG_PPC_BARRIER_NOSPEC))
+		return mask_user_address_32(ptr);
+	return mask_user_address_fallback(ptr);
+}
+
+static __always_inline void __user *__masked_user_access_begin(const void __user *p,
+							       unsigned long dir)
+{
+	void __user *ptr = mask_user_address(p);
+
+	might_fault();
+	allow_user_access(ptr, dir);
+
+	return ptr;
+}
+
+#define masked_user_access_begin(p) __masked_user_access_begin(p, KUAP_READ_WRITE)
+#define masked_user_read_access_begin(p) __masked_user_access_begin(p, KUAP_READ)
+#define masked_user_write_access_begin(p) __masked_user_access_begin(p, KUAP_WRITE)
+
 #define unsafe_get_user(x, p, e) do {					\
 	__long_type(*(p)) __gu_val;				\
 	__typeof__(*(p)) __user *__gu_addr = (p);		\
-- 
2.49.0



^ permalink raw reply related	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 09/10] powerpc/32: Automatically adapt TASK_SIZE based on constraints
  2025-08-22  9:58 ` [PATCH v2 09/10] powerpc/32: Automatically adapt TASK_SIZE based on constraints Christophe Leroy
@ 2025-08-22 12:04   ` David Laight
  0 siblings, 0 replies; 19+ messages in thread
From: David Laight @ 2025-08-22 12:04 UTC (permalink / raw)
  To: Christophe Leroy
  Cc: Michael Ellerman, Nicholas Piggin, Madhavan Srinivasan,
	Alexander Viro, Christian Brauner, Jan Kara, Thomas Gleixner,
	Ingo Molnar, Peter Zijlstra, Darren Hart, Davidlohr Bueso,
	Andre Almeida, Andrew Morton, Dave Hansen, Linus Torvalds,
	Daniel Borkmann, linux-kernel, linuxppc-dev, linux-fsdevel,
	linux-mm, linux-block

On Fri, 22 Aug 2025 11:58:05 +0200
Christophe Leroy <christophe.leroy@csgroup.eu> wrote:

> For the time being, TASK_SIZE can be customized by the user via Kconfig
> but it is not possible to check all constraints in Kconfig. Impossible
> setups are detected at compile time with BUILD_BUG() but that leads
> to build failure when setting crazy values. It is not a problem on its
> own because the user will usually either use the default value or set
> a well-thought-out value. However, build robots generate crazy random
> configs that lead to build failures, and they see it as a
> regression every time a patch adds such a constraint.
> 
> So instead of failing the build when the custom TASK_SIZE is too
> big, just adjust it to the maximum possible value matching the setup.
> 
> Several architectures already calculate TASK_SIZE based on other
> parameters and options.
> 
> In order to do so, move MODULES_VADDR calculation into task_size_32.h
> and ensure that:
> - On book3s/32, userspace and module area have their own segments (256M)
> - On 8xx, userspace has its own full PGDIR entries (4M)
> 
> Then TASK_SIZE is garantied to be correct so remove related
                    ^ guaranteed

> BUILD_BUG()s.


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 02/10] uaccess: Add speculation barrier to copy_from_user_iter()
  2025-08-22  9:57 ` [PATCH v2 02/10] uaccess: Add speculation barrier to copy_from_user_iter() Christophe Leroy
@ 2025-08-22 13:46   ` Linus Torvalds
  2025-08-22 14:11     ` Giorgi Tchankvetadze
  2025-08-22 18:53     ` David Laight
  0 siblings, 2 replies; 19+ messages in thread
From: Linus Torvalds @ 2025-08-22 13:46 UTC (permalink / raw)
  To: Christophe Leroy
  Cc: Michael Ellerman, Nicholas Piggin, Madhavan Srinivasan,
	Alexander Viro, Christian Brauner, Jan Kara, Thomas Gleixner,
	Ingo Molnar, Peter Zijlstra, Darren Hart, Davidlohr Bueso,
	Andre Almeida, Andrew Morton, David Laight, Dave Hansen,
	Daniel Borkmann, linux-kernel, linuxppc-dev, linux-fsdevel,
	linux-mm, linux-block

On Fri, 22 Aug 2025 at 05:58, Christophe Leroy
<christophe.leroy@csgroup.eu> wrote:
>
> The results of "access_ok()" can be mis-speculated.  The result is that
> you can end up speculatively:
>
>         if (access_ok(from, size))
>                 // Right here

I actually think that we should probably just make access_ok() itself do this.

We don't have *that* many users since we have been de-emphasizing the
"check ahead of time" model, and any that are performance-critical can
these days be turned into masked addresses.
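
The caller side then collapses to something like this (roughly the
pattern lib/strncpy_from_user.c already uses; sketch, names as in that
file):

	if (can_do_masked_user_access())
		src = masked_user_access_begin(src);
	else if (!user_read_access_begin(src, max))
		return -EFAULT;
	retval = do_strncpy_from_user(dst, src, count, max);
	user_read_access_end();
	return retval;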

As it is, now we're in the situation that careful places - like
_inline_copy_from_user(), and with your patch  copy_from_user_iter() -
do this by hand and are ugly as a result, and lazy and
probably incorrect places don't do it at all.

That said, I don't object to this patch and maybe we should do that
access_ok() change later and independently of any powerpc work.

                 Linus


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 02/10] uaccess: Add speculation barrier to copy_from_user_iter()
  2025-08-22 13:46   ` Linus Torvalds
@ 2025-08-22 14:11     ` Giorgi Tchankvetadze
  2025-08-22 18:53     ` David Laight
  1 sibling, 0 replies; 19+ messages in thread
From: Giorgi Tchankvetadze @ 2025-08-22 14:11 UTC (permalink / raw)
  To: torvalds
  Cc: akpm, andrealmeid, brauner, christophe.leroy, daniel, dave.hansen,
	dave, david.laight.linux, dvhart, jack, linux-block,
	linux-fsdevel, linux-kernel, linux-mm, linuxppc-dev, maddy, mingo,
	mpe, npiggin, peterz, tglx, viro

So we can use a speculation barrier and fix the problem locally?


On 8/22/2025 5:52 PM, Linus Torvalds wrote:
> On Fri, 22 Aug 2025 at 05:58, Christophe Leroy
> <christophe.leroy@csgroup.eu> wrote:
> > The results of "access_ok()" can be mis-speculated.  The result is that
> > you can end up speculatively:
> >
> >         if (access_ok(from, size))
> >                 // Right here
> I actually think that we should probably just make access_ok() itself do this.
> 
> We don't have *that* many users since we have been de-emphasizing the
> "check ahead of time" model, and any that are performance-critical can
> these days be turned into masked addresses.
> 
> As it is, now we're in the situation that careful places - like
> _inline_copy_from_user(), and with your patch  copy_from_user_iter() -
> do this by hand and are ugly as a result, and lazy and
> probably incorrect places don't do it at all.
> 
> That said, I don't object to this patch and maybe we should do that
> access_ok() change later and independently of any powerpc work.
> 
>                   Linus
> 




^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 02/10] uaccess: Add speculation barrier to copy_from_user_iter()
  2025-08-22 13:46   ` Linus Torvalds
  2025-08-22 14:11     ` Giorgi Tchankvetadze
@ 2025-08-22 18:53     ` David Laight
  1 sibling, 0 replies; 19+ messages in thread
From: David Laight @ 2025-08-22 18:53 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Christophe Leroy, Michael Ellerman, Nicholas Piggin,
	Madhavan Srinivasan, Alexander Viro, Christian Brauner, Jan Kara,
	Thomas Gleixner, Ingo Molnar, Peter Zijlstra, Darren Hart,
	Davidlohr Bueso, Andre Almeida, Andrew Morton, Dave Hansen,
	Daniel Borkmann, linux-kernel, linuxppc-dev, linux-fsdevel,
	linux-mm, linux-block

On Fri, 22 Aug 2025 09:46:37 -0400
Linus Torvalds <torvalds@linux-foundation.org> wrote:

> On Fri, 22 Aug 2025 at 05:58, Christophe Leroy
> <christophe.leroy@csgroup.eu> wrote:
> >
> > The results of "access_ok()" can be mis-speculated.  The result is that
> > you can end up speculatively:
> >
> >         if (access_ok(from, size))
> >                 // Right here
> 
> I actually think that we should probably just make access_ok() itself do this.

You'd need to re-introduce the read/write parameter.
And you'd want it to be compile time.
Although going through the code changing them to read_access_ok()
and write_access_ok() would probably leave you with a lot fewer calls.

> We don't have *that* many users since we have been de-emphasizing the
> "check ahead of time" model, and any that are performance-critical can
> these days be turned into masked addresses.

Or aim to allocate a guard page on all archs, support 'masked' access
on all of them, and then just delete access_ok().
That'll make it look less ugly.
Perhaps not this week though :-)

	David

> 
> As it is, now we're in the situation that careful places - like
> _inline_copy_from_user(), and with your patch  copy_from_user_iter() -
> do maybe wethis by hand and are ugly as a result, and lazy and
> probably incorrect places don't do it at all.
> 
> That said, I don't object to this patch and maybe we should do that
> access_ok() change later and independently of any powerpc work.
> 
>                  Linus



^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 03/10] uaccess: Add masked_user_{read/write}_access_begin
  2025-08-22  9:57 ` [PATCH v2 03/10] uaccess: Add masked_user_{read/write}_access_begin Christophe Leroy
@ 2025-08-24 15:08   ` Thomas Gleixner
  0 siblings, 0 replies; 19+ messages in thread
From: Thomas Gleixner @ 2025-08-24 15:08 UTC (permalink / raw)
  To: Christophe Leroy, Michael Ellerman, Nicholas Piggin,
	Madhavan Srinivasan, Alexander Viro, Christian Brauner, Jan Kara,
	Ingo Molnar, Peter Zijlstra, Darren Hart, Davidlohr Bueso,
	Andre Almeida, Andrew Morton, David Laight, Dave Hansen,
	Linus Torvalds, Daniel Borkmann
  Cc: Christophe Leroy, linux-kernel, linuxppc-dev, linux-fsdevel,
	linux-mm, linux-block

On Fri, Aug 22 2025 at 11:57, Christophe Leroy wrote:

> Although masked_user_access_begin() is only to be used when reading
> data from user at the moment, introduce masked_user_read_access_begin()
> and masked_user_write_access_begin() in order to match
> user_read_access_begin() and user_write_access_begin().
>
> That means masked_user_read_access_begin() is used when user memory is
> exclusively read during the window, masked_user_write_access_begin()
> is used when user memory is exclusively written during the window,
> masked_user_access_begin() remains and is used when both reads and
> writes are performed during the open window. Each of them is expected
> to be terminated by the matching user_read_access_end(),
> user_write_access_end() and user_access_end().
>
> Have them default to masked_user_access_begin() when they are
> not defined.

Have you seen:

    https://lore.kernel.org/all/20250813151939.601040635@linutronix.de



^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 10/10] powerpc/uaccess: Implement masked user access
  2025-08-22  9:58 ` [PATCH v2 10/10] powerpc/uaccess: Implement masked user access Christophe Leroy
@ 2025-08-25  9:04   ` Gabriel Paubert
  2025-08-25  9:40     ` Christophe Leroy
  0 siblings, 1 reply; 19+ messages in thread
From: Gabriel Paubert @ 2025-08-25  9:04 UTC (permalink / raw)
  To: Christophe Leroy
  Cc: Michael Ellerman, Nicholas Piggin, Madhavan Srinivasan,
	Alexander Viro, Christian Brauner, Jan Kara, Thomas Gleixner,
	Ingo Molnar, Peter Zijlstra, Darren Hart, Davidlohr Bueso,
	Andre Almeida, Andrew Morton, David Laight, Dave Hansen,
	Linus Torvalds, Daniel Borkmann, linux-kernel, linuxppc-dev,
	linux-fsdevel, linux-mm, linux-block


Hi Christophe,

On Fri, Aug 22, 2025 at 11:58:06AM +0200, Christophe Leroy wrote:
> Masked user access avoids the address/size verification by access_ok().
> Although its main purpose is to skip the speculation in the
> verification of user address and size, hence avoiding the need for
> speculation mitigation, it also has the advantage of reducing the
> number of instructions required, so it even benefits platforms that
> don't need speculation mitigation, especially when the size of the
> copy is not known at build time.
> 
> So implement masked user access on powerpc. The only requirement is
> to have a memory gap that faults between the top of user space and the
> real start of kernel area.
> 
> On 64 bits platforms the address space is divided that way:
> 
> 	0xffffffffffffffff	+------------------+
> 				|                  |
> 				|   kernel space   |
> 				|                  |
> 	0xc000000000000000	+------------------+  <== PAGE_OFFSET
> 				|//////////////////|
> 				|//////////////////|
> 	0x8000000000000000	|//////////////////|
> 				|//////////////////|
> 				|//////////////////|
> 	0x0010000000000000	+------------------+  <== TASK_SIZE_MAX
> 				|                  |
> 				|    user space    |
> 				|                  |
> 	0x0000000000000000	+------------------+
> 
> The kernel is always above 0x8000000000000000 and user space always
> below, with a gap in between. This leads to a 4-instruction sequence:
> 
>   80:	7c 69 1b 78 	mr      r9,r3
>   84:	7c 63 fe 76 	sradi   r3,r3,63
>   88:	7d 29 18 78 	andc    r9,r9,r3
>   8c:	79 23 00 4c 	rldimi  r3,r9,0,1
> 
> This sequence leaves r3 unmodified when it is below 0x8000000000000000
> and clamps it to 0x8000000000000000 if it is above.
> 

This comment looks wrong: the second instruction converts r3 into the
replicated sign bit of the address ((addr >= 0) ? 0 : -1) when treating
the address as signed. After that, the code only modifies the MSB of r3,
so I don't see how r3 could be unchanged from the original value...

OTOH, I believe the following 3 instructions sequence would work,
input address (a) in r3, scratch value (tmp) in r9, both intptr_t:

	sradi r9,r3,63	; tmp = (a >= 0) ? 0L : -1L;
	andc r3,r3,r9   ; a = a & ~tmp; (equivalently a = (a >= 0) ? a : 0)
	rldimi r3,r9,0,1 ; copy MSB of tmp to MSB of a 

But maybe I goofed...

Gabriel

 
 



^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 10/10] powerpc/uaccess: Implement masked user access
  2025-08-25  9:04   ` Gabriel Paubert
@ 2025-08-25  9:40     ` Christophe Leroy
  2025-08-25 10:18       ` Gabriel Paubert
  0 siblings, 1 reply; 19+ messages in thread
From: Christophe Leroy @ 2025-08-25  9:40 UTC (permalink / raw)
  To: Gabriel Paubert
  Cc: Michael Ellerman, Nicholas Piggin, Madhavan Srinivasan,
	Alexander Viro, Christian Brauner, Jan Kara, Thomas Gleixner,
	Ingo Molnar, Peter Zijlstra, Darren Hart, Davidlohr Bueso,
	Andre Almeida, Andrew Morton, David Laight, Dave Hansen,
	Linus Torvalds, Daniel Borkmann, linux-kernel, linuxppc-dev,
	linux-fsdevel, linux-mm, linux-block

Hi Gabriel,

On 25/08/2025 at 11:04, Gabriel Paubert wrote:
> Hi Christophe,
> 
> On Fri, Aug 22, 2025 at 11:58:06AM +0200, Christophe Leroy wrote:
>> Masked user access avoids the address/size verification by access_ok().
>> Although its main purpose is to skip the speculation in the
>> verification of user address and size, hence avoiding the need for
>> speculation mitigation, it also has the advantage of reducing the
>> number of instructions required, so it even benefits platforms that
>> don't need speculation mitigation, especially when the size of the
>> copy is not known at build time.
>>
>> So implement masked user access on powerpc. The only requirement is
>> to have a memory gap that faults between the top of user space and the
>> real start of kernel area.
>>
>> On 64 bits platforms the address space is divided that way:
>>
>>        0xffffffffffffffff      +------------------+
>>                                |                  |
>>                                |   kernel space   |
>>                                |                  |
>>        0xc000000000000000      +------------------+  <== PAGE_OFFSET
>>                                |//////////////////|
>>                                |//////////////////|
>>        0x8000000000000000      |//////////////////|
>>                                |//////////////////|
>>                                |//////////////////|
>>        0x0010000000000000      +------------------+  <== TASK_SIZE_MAX
>>                                |                  |
>>                                |    user space    |
>>                                |                  |
>>        0x0000000000000000      +------------------+
>>
>> The kernel is always above 0x8000000000000000 and user space always
>> below, with a gap in between. This leads to a 4-instruction sequence:
>>
>>    80: 7c 69 1b 78     mr      r9,r3
>>    84: 7c 63 fe 76     sradi   r3,r3,63
>>    88: 7d 29 18 78     andc    r9,r9,r3
>>    8c: 79 23 00 4c     rldimi  r3,r9,0,1
>>
>> This sequence leaves r3 unmodified when it is below 0x8000000000000000
>> and clamps it to 0x8000000000000000 if it is above.
>>
> 
> This comment looks wrong: the second instruction converts r3 into the
> replicated sign bit of the address ((addr >= 0) ? 0 : -1) when treating
> the address as signed. After that, the code only modifies the MSB of r3,
> so I don't see how r3 could be unchanged from the original value...

Unless I'm missing something, the above rldimi leaves the MSB of r3
unmodified and replaces all other bits with the corresponding bits of r9.

This is the code generated by GCC for the following:

	unsigned long mask = (unsigned long)((long)addr >> 63);

	addr = ((addr & ~mask) & (~0UL >> 1)) | (mask & (1UL << 63));
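
Concretely: addr = 0xc000000000000000 gives mask = ~0UL, so the result
is 0x8000000000000000; any addr below 2^63 gives mask = 0 and comes back
unchanged.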


> 
> OTOH, I believe the following 3 instructions sequence would work,
> input address (a) in r3, scratch value (tmp) in r9, both intptr_t:
> 
>          sradi r9,r3,63  ; tmp = (a >= 0) ? 0L : -1L;
>          andc r3,r3,r9   ; a = a & ~tmp; (equivalently a = (a >= 0) ? a : 0)
>          rldimi r3,r9,0,1 ; copy MSB of tmp to MSB of a
> 
> But maybe I goofed...
> 

From my understanding of rldimi, your proposed code would:
- Keep r3 unmodified when it is above 0x8000000000000000
- Set r3 to 0x7fffffffffffffff when it is below 0x8000000000000000

Extract of ppc64 ABI:

rldimi RA,RS,SH,MB

The contents of register RS are rotated left SH bits (64-bit rotate).
A mask is generated having 1-bits from bit MB
through bit 63 − SH and 0-bits elsewhere. The rotated
data are inserted into register RA under control of the
generated mask.




^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 10/10] powerpc/uaccess: Implement masked user access
  2025-08-25  9:40     ` Christophe Leroy
@ 2025-08-25 10:18       ` Gabriel Paubert
  0 siblings, 0 replies; 19+ messages in thread
From: Gabriel Paubert @ 2025-08-25 10:18 UTC (permalink / raw)
  To: Christophe Leroy
  Cc: Michael Ellerman, Nicholas Piggin, Madhavan Srinivasan,
	Alexander Viro, Christian Brauner, Jan Kara, Thomas Gleixner,
	Ingo Molnar, Peter Zijlstra, Darren Hart, Davidlohr Bueso,
	Andre Almeida, Andrew Morton, David Laight, Dave Hansen,
	Linus Torvalds, Daniel Borkmann, linux-kernel, linuxppc-dev,
	linux-fsdevel, linux-mm, linux-block

On Mon, Aug 25, 2025 at 11:40:48AM +0200, Christophe Leroy wrote:
> Hi Gabriel,
> 
> On 25/08/2025 at 11:04, Gabriel Paubert wrote:
> > Hi Christophe,
> > 
> > On Fri, Aug 22, 2025 at 11:58:06AM +0200, Christophe Leroy wrote:
> > > Masked user access avoids the address/size verification by access_ok().
> > > Although its main purpose is to skip the speculation in the
> > > verification of user address and size, hence avoiding the need for
> > > speculation mitigation, it also has the advantage of reducing the
> > > number of instructions required, so it even benefits platforms that
> > > don't need speculation mitigation, especially when the size of the
> > > copy is not known at build time.
> > > 
> > > So implement masked user access on powerpc. The only requirement is
> > > to have a memory gap that faults between the top of user space and the
> > > real start of kernel area.
> > > 
> > > On 64 bits platforms the address space is divided that way:
> > > 
> > >        0xffffffffffffffff      +------------------+
> > >                                |                  |
> > >                                |   kernel space   |
> > >                                |                  |
> > >        0xc000000000000000      +------------------+  <== PAGE_OFFSET
> > >                                |//////////////////|
> > >                                |//////////////////|
> > >        0x8000000000000000      |//////////////////|
> > >                                |//////////////////|
> > >                                |//////////////////|
> > >        0x0010000000000000      +------------------+  <== TASK_SIZE_MAX
> > >                                |                  |
> > >                                |    user space    |
> > >                                |                  |
> > >        0x0000000000000000      +------------------+
> > > 
> > > The kernel is always above 0x8000000000000000 and user space always
> > > below, with a gap in between. This leads to a 4-instruction sequence:
> > > 
> > >    80: 7c 69 1b 78     mr      r9,r3
> > >    84: 7c 63 fe 76     sradi   r3,r3,63
> > >    88: 7d 29 18 78     andc    r9,r9,r3
> > >    8c: 79 23 00 4c     rldimi  r3,r9,0,1
> > > 
> > > This sequence leaves r3 unmodified when it is below 0x8000000000000000
> > > and clamps it to 0x8000000000000000 if it is above.
> > > 
> > 
> > This comment looks wrong: the second instruction converts r3 into the
> > replicated sign bit of the address ((addr >= 0) ? 0 : -1) when treating
> > the address as signed. After that, the code only modifies the MSB of r3,
> > so I don't see how r3 could be unchanged from the original value...
> 
> Unless I'm missing something, the above rldimi leaves the MSB of r3
> unmodified and replaces all other bits with the corresponding bits of r9.
> 
> This is the code generated by GCC for the following:
> 
> 	unsigned long mask = (unsigned long)((long)addr >> 63);
> 
> 	addr = ((addr & ~mask) & (~0UL >> 1)) | (mask & (1UL << 63));
> 
> 
> > 
> > OTOH, I believe the following 3 instructions sequence would work,
> > input address (a) in r3, scratch value (tmp) in r9, both intptr_t:
> > 
> >          sradi r9,r3,63  ; tmp = (a >= 0) ? 0L : -1L;
> >          andc r3,r3,r9   ; a = a & ~tmp; (equivalently a = (a >= 0) ? a : 0)
> >          rldimi r3,r9,0,1 ; copy MSB of tmp to MSB of a
> > 
> > But maybe I goofed...
> > 
> 
> From my understanding of rldimi, your proposed code would:
> - Keep r3 unmodified when it is above 0x8000000000000000
> - Set r3 to 0x7fffffffffffffff when it is below 0x8000000000000000
> 
> Extract of ppc64 ABI:
> 
> rldimi RA,RS,SH,MB
> 
> The contents of register RS are rotated left SH bits (64-bit rotate).
> A mask is generated having 1-bits from bit MB
> through bit 63 − SH and 0-bits elsewhere. The rotated
> data are inserted into register RA under control of the
> generated mask.

Sorry, you are right, I got the polarity of the mask reversed in my
head.


Once again I may goof, but I believe that the following sequence
would work:

	sradi r9,r3,63
	andc r3,r3,r9
	rldimi r3,r9,63,0  ; insert LSB of r9 into MSB of r3 (SH=63 rotates the LSB into bit 0)

Cheers,
Gabriel




^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2025-08-25 10:19 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed
-- links below jump to the message on this page --
2025-08-22  9:57 [PATCH v2 00/10] powerpc: Implement masked user access Christophe Leroy
2025-08-22  9:57 ` [PATCH v2 01/10] iter: Avoid barrier_nospec() in copy_from_user_iter() Christophe Leroy
2025-08-22  9:57 ` [PATCH v2 02/10] uaccess: Add speculation barrier to copy_from_user_iter() Christophe Leroy
2025-08-22 13:46   ` Linus Torvalds
2025-08-22 14:11     ` Giorgi Tchankvetadze
2025-08-22 18:53     ` David Laight
2025-08-22  9:57 ` [PATCH v2 03/10] uaccess: Add masked_user_{read/write}_access_begin Christophe Leroy
2025-08-24 15:08   ` Thomas Gleixner
2025-08-22  9:58 ` [PATCH v2 04/10] powerpc/uaccess: Move barrier_nospec() out of allow_read_{from/write}_user() Christophe Leroy
2025-08-22  9:58 ` [PATCH v2 05/10] powerpc/uaccess: Remove unused size and from parameters from allow_access_user() Christophe Leroy
2025-08-22  9:58 ` [PATCH v2 06/10] powerpc/uaccess: Remove {allow/prevent}_{read/write/read_write}_{from/to/}_user() Christophe Leroy
2025-08-22  9:58 ` [PATCH v2 07/10] powerpc/uaccess: Refactor user_{read/write/}_access_begin() Christophe Leroy
2025-08-22  9:58 ` [PATCH v2 08/10] powerpc/32s: Fix segments setup when TASK_SIZE is not a multiple of 256M Christophe Leroy
2025-08-22  9:58 ` [PATCH v2 09/10] powerpc/32: Automatically adapt TASK_SIZE based on constraints Christophe Leroy
2025-08-22 12:04   ` David Laight
2025-08-22  9:58 ` [PATCH v2 10/10] powerpc/uaccess: Implement masked user access Christophe Leroy
2025-08-25  9:04   ` Gabriel Paubert
2025-08-25  9:40     ` Christophe Leroy
2025-08-25 10:18       ` Gabriel Paubert

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).