public inbox for linux-kernel@vger.kernel.org
* [RFC PATCH] arm64: signal: preserve si_addr for addresses in the VA hole
@ 2026-02-24 13:55 Nirmoy Das
  2026-02-24 20:07 ` Catalin Marinas
  0 siblings, 1 reply; 2+ messages in thread
From: Nirmoy Das @ 2026-02-24 13:55 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon; +Cc: linux-arm-kernel, linux-kernel, Nirmoy Das

When userspace accesses an address in the "hole" between the user and
kernel virtual address spaces, the kernel delivers SIGSEGV with si_addr
set to the faulting address. However, untagged_addr() uses
sign_extend64() to canonicalize the address, which corrupts hole
addresses and makes debugging difficult, as userspace cannot see the
actual faulting value.

Fix this by stripping the TBI top byte only for addresses that fall
within the valid user range (below TASK_SIZE) after masking. For hole
addresses, preserve the full original address, including any tag bits.

Closes: https://bugzilla.kernel.org/show_bug.cgi?id=220750
Signed-off-by: Nirmoy Das <nirmoyd@nvidia.com>
---
 arch/arm64/include/asm/signal.h | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/signal.h b/arch/arm64/include/asm/signal.h
index ef449f5f4ba8..ca7ff6e5cd2f 100644
--- a/arch/arm64/include/asm/signal.h
+++ b/arch/arm64/include/asm/signal.h
@@ -3,6 +3,7 @@
 #define __ARM64_ASM_SIGNAL_H
 
 #include <asm/memory.h>
+#include <asm/processor.h>
 #include <uapi/asm/signal.h>
 #include <uapi/asm/siginfo.h>
 
@@ -10,6 +11,8 @@ static inline void __user *arch_untagged_si_addr(void __user *addr,
 						 unsigned long sig,
 						 unsigned long si_code)
 {
+	unsigned long masked;
+
 	/*
 	 * For historical reasons, all bits of the fault address are exposed as
 	 * address bits for watchpoint exceptions. New architectures should
@@ -18,7 +21,16 @@ static inline void __user *arch_untagged_si_addr(void __user *addr,
 	if (sig == SIGTRAP && si_code == TRAP_BRKPT)
 		return addr;
 
-	return untagged_addr(addr);
+	/*
+	 * Strip tag bits only for valid user addresses. For addresses
+	 * in the VA hole, preserve the original value so userspace can
+	 * see the actual faulting address for debugging.
+	 */
+	masked = (unsigned long)addr & ((1UL << 56) - 1);
+	if (masked >= TASK_SIZE)
+		return addr;
+
+	return (void __user *)masked;
 }
 #define arch_untagged_si_addr arch_untagged_si_addr
 
-- 
2.43.0



* Re: [RFC PATCH] arm64: signal: preserve si_addr for addresses in the VA hole
  2026-02-24 13:55 [RFC PATCH] arm64: signal: preserve si_addr for addresses in the VA hole Nirmoy Das
@ 2026-02-24 20:07 ` Catalin Marinas
  0 siblings, 0 replies; 2+ messages in thread
From: Catalin Marinas @ 2026-02-24 20:07 UTC (permalink / raw)
  To: Nirmoy Das; +Cc: Will Deacon, linux-arm-kernel, linux-kernel

On Tue, Feb 24, 2026 at 05:55:03AM -0800, Nirmoy Das wrote:
> When userspace accesses an address in the "hole" between the user and
> kernel virtual address spaces, the kernel delivers SIGSEGV with si_addr
> set to the faulting address. However, untagged_addr() uses
> sign_extend64() to canonicalize the address, which corrupts hole
> addresses and makes debugging difficult, as userspace cannot see the
> actual faulting value.
> 
> Fix this by stripping the TBI top byte only for addresses that fall
> within the valid user range (below TASK_SIZE) after masking. For hole
> addresses, preserve the full original address, including any tag bits.

From an architecture perspective, TTBRx selection is done based on bit
55. If TBI is enabled for one of the TTBRx ranges, bits 63:56 of the
address are ignored for the translation. Since TBI is always on for the
user, it makes sense to always ignore these bits. You just need to be
aware that the byte is sign-extended from bit 55.

How does it help with debugging to know the top byte, since it's
ignored by the hardware anyway?

> diff --git a/arch/arm64/include/asm/signal.h b/arch/arm64/include/asm/signal.h
> index ef449f5f4ba8..ca7ff6e5cd2f 100644
> --- a/arch/arm64/include/asm/signal.h
> +++ b/arch/arm64/include/asm/signal.h
> @@ -3,6 +3,7 @@
>  #define __ARM64_ASM_SIGNAL_H
>  
>  #include <asm/memory.h>
> +#include <asm/processor.h>
>  #include <uapi/asm/signal.h>
>  #include <uapi/asm/siginfo.h>
>  
> @@ -10,6 +11,8 @@ static inline void __user *arch_untagged_si_addr(void __user *addr,
>  						 unsigned long sig,
>  						 unsigned long si_code)
>  {
> +	unsigned long masked;
> +
>  	/*
>  	 * For historical reasons, all bits of the fault address are exposed as
>  	 * address bits for watchpoint exceptions. New architectures should
> @@ -18,7 +21,16 @@ static inline void __user *arch_untagged_si_addr(void __user *addr,
>  	if (sig == SIGTRAP && si_code == TRAP_BRKPT)
>  		return addr;
>  
> -	return untagged_addr(addr);
> +	/*
> +	 * Strip tag bits only for valid user addresses. For addresses
> +	 * in the VA hole, preserve the original value so userspace can
> +	 * see the actual faulting address for debugging.
> +	 */
> +	masked = (unsigned long)addr & ((1UL << 56) - 1);
> +	if (masked >= TASK_SIZE)
> +		return addr;

This doesn't make much sense architecturally. In the worst case, I'd
keep the top byte only if bit 55 is set (not in relation to TASK_SIZE).
But even in this case, I don't see what problem it solves. That top bit
doesn't give you any useful information and, with MTE on, TBI1 is also
on.

-- 
Catalin

