* [PATCH 0/4] ARM: stable backports
@ 2026-05-11 13:53 Sebastian Andrzej Siewior
2026-05-11 13:53 ` [PATCH 1/4] ARM: group is_permission_fault() with is_translation_fault() Sebastian Andrzej Siewior
` (5 more replies)
0 siblings, 6 replies; 7+ messages in thread
From: Sebastian Andrzej Siewior @ 2026-05-11 13:53 UTC (permalink / raw)
To: stable
Cc: Bryan Brattlof, Daniel Wagner, Jan Kiszka, cip-dev,
nobuhiro.iwamatsu.x90, pavel, Russell King,
Sebastian Andrzej Siewior
This is a backport of ARM-related fixes. It applies cleanly to v6.18
and v6.12. I have an updated batch for v6.6 and v6.1 because this
series does not apply cleanly there.
#1 and #2 are prerequisites for #3.
I can't tell the origin of #3 (fix hash_name() fault). It might have
been there since the beginning of time.
#4 (fix branch predictor hardening) fixes commit f5fe12b1eaee2 ("ARM:
spectre-v2: harden user aborts in kernel space") which is v4.20-rc2.
If there are no objections I would post the v6.6 version once this is
accepted and then rebase the PREEMPT_RT bits on top of this.
Russell King (Oracle) (4):
ARM: group is_permission_fault() with is_translation_fault()
ARM: allow __do_kernel_fault() to report execution of memory faults
ARM: fix hash_name() fault
ARM: fix branch predictor hardening
arch/arm/mm/alignment.c | 6 ++-
arch/arm/mm/fault.c | 100 ++++++++++++++++++++++++++++++----------
2 files changed, 80 insertions(+), 26 deletions(-)
--
2.53.0
^ permalink raw reply [flat|nested] 7+ messages in thread
* [PATCH 1/4] ARM: group is_permission_fault() with is_translation_fault()
2026-05-11 13:53 [PATCH 0/4] ARM: stable backports Sebastian Andrzej Siewior
@ 2026-05-11 13:53 ` Sebastian Andrzej Siewior
2026-05-11 13:53 ` [PATCH 2/4] ARM: allow __do_kernel_fault() to report execution of memory faults Sebastian Andrzej Siewior
` (4 subsequent siblings)
5 siblings, 0 replies; 7+ messages in thread
From: Sebastian Andrzej Siewior @ 2026-05-11 13:53 UTC (permalink / raw)
To: stable
Cc: Bryan Brattlof, Daniel Wagner, Jan Kiszka, cip-dev,
nobuhiro.iwamatsu.x90, pavel, Russell King,
Sebastian Andrzej Siewior
From: "Russell King (Oracle)" <rmk+kernel@armlinux.org.uk>
commit dea20281ac88226615761c570c8ff7adc18e6ac2 upstream.
Group is_permission_fault() with is_translation_fault(), which is
needed to use is_permission_fault() in __do_kernel_fault(). As
this is static inline, there is no need for this to be under
CONFIG_MMU.
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
arch/arm/mm/fault.c | 26 +++++++++++++-------------
1 file changed, 13 insertions(+), 13 deletions(-)
diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
index 2bc828a1940c0..f87f353e5a8b0 100644
--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -128,6 +128,19 @@ static inline bool is_translation_fault(unsigned int fsr)
return false;
}
+static inline bool is_permission_fault(unsigned int fsr)
+{
+ int fs = fsr_fs(fsr);
+#ifdef CONFIG_ARM_LPAE
+ if ((fs & FS_MMU_NOLL_MASK) == FS_PERM_NOLL)
+ return true;
+#else
+ if (fs == FS_L1_PERM || fs == FS_L2_PERM)
+ return true;
+#endif
+ return false;
+}
+
static void die_kernel_fault(const char *msg, struct mm_struct *mm,
unsigned long addr, unsigned int fsr,
struct pt_regs *regs)
@@ -225,19 +238,6 @@ void do_bad_area(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
}
#ifdef CONFIG_MMU
-static inline bool is_permission_fault(unsigned int fsr)
-{
- int fs = fsr_fs(fsr);
-#ifdef CONFIG_ARM_LPAE
- if ((fs & FS_MMU_NOLL_MASK) == FS_PERM_NOLL)
- return true;
-#else
- if (fs == FS_L1_PERM || fs == FS_L2_PERM)
- return true;
-#endif
- return false;
-}
-
#ifdef CONFIG_CPU_TTBR0_PAN
static inline bool ttbr0_usermode_access_allowed(struct pt_regs *regs)
{
--
2.53.0
* [PATCH 2/4] ARM: allow __do_kernel_fault() to report execution of memory faults
2026-05-11 13:53 [PATCH 0/4] ARM: stable backports Sebastian Andrzej Siewior
2026-05-11 13:53 ` [PATCH 1/4] ARM: group is_permission_fault() with is_translation_fault() Sebastian Andrzej Siewior
@ 2026-05-11 13:53 ` Sebastian Andrzej Siewior
2026-05-11 13:53 ` [PATCH 3/4] ARM: fix hash_name() fault Sebastian Andrzej Siewior
` (3 subsequent siblings)
5 siblings, 0 replies; 7+ messages in thread
From: Sebastian Andrzej Siewior @ 2026-05-11 13:53 UTC (permalink / raw)
To: stable
Cc: Bryan Brattlof, Daniel Wagner, Jan Kiszka, cip-dev,
nobuhiro.iwamatsu.x90, pavel, Russell King, Xie Yuanbin,
Sebastian Andrzej Siewior
From: "Russell King (Oracle)" <rmk+kernel@armlinux.org.uk>
commit 40b466db1dffb41f0529035c59c5739636d0e5b8 upstream.
Allow __do_kernel_fault() to detect the execution of memory, so we can
provide the same fault message as do_page_fault() would do. This is
required when we split the kernel address fault handling from the
main do_page_fault() code path.
Reviewed-by: Xie Yuanbin <xieyuanbin1@huawei.com>
Tested-by: Xie Yuanbin <xieyuanbin1@huawei.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
arch/arm/mm/fault.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
index f87f353e5a8b0..192c8ab196dba 100644
--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -175,6 +175,8 @@ __do_kernel_fault(struct mm_struct *mm, unsigned long addr, unsigned int fsr,
*/
if (addr < PAGE_SIZE) {
msg = "NULL pointer dereference";
+ } else if (is_permission_fault(fsr) && fsr & FSR_LNX_PF) {
+ msg = "execution of memory";
} else {
if (is_translation_fault(fsr) &&
kfence_handle_page_fault(addr, is_write_fault(fsr), regs))
--
2.53.0
* [PATCH 3/4] ARM: fix hash_name() fault
2026-05-11 13:53 [PATCH 0/4] ARM: stable backports Sebastian Andrzej Siewior
2026-05-11 13:53 ` [PATCH 1/4] ARM: group is_permission_fault() with is_translation_fault() Sebastian Andrzej Siewior
2026-05-11 13:53 ` [PATCH 2/4] ARM: allow __do_kernel_fault() to report execution of memory faults Sebastian Andrzej Siewior
@ 2026-05-11 13:53 ` Sebastian Andrzej Siewior
2026-05-11 13:53 ` [PATCH 4/4] ARM: fix branch predictor hardening Sebastian Andrzej Siewior
` (2 subsequent siblings)
5 siblings, 0 replies; 7+ messages in thread
From: Sebastian Andrzej Siewior @ 2026-05-11 13:53 UTC (permalink / raw)
To: stable
Cc: Bryan Brattlof, Daniel Wagner, Jan Kiszka, cip-dev,
nobuhiro.iwamatsu.x90, pavel, Russell King, Zizhi Wo, Xie Yuanbin,
Sebastian Andrzej Siewior
From: "Russell King (Oracle)" <rmk+kernel@armlinux.org.uk>
commit 7733bc7d299d682f2723dc38fc7f370b9bf973e9 upstream.
Zizhi Wo reports:
"During the execution of hash_name()->load_unaligned_zeropad(), a
potential memory access beyond the PAGE boundary may occur. For
example, when the filename length is near the PAGE_SIZE boundary.
This triggers a page fault, which leads to a call to
do_page_fault()->mmap_read_trylock(). If we can't acquire the lock,
we have to fall back to the mmap_read_lock() path, which calls
might_sleep(). This breaks RCU semantics because path lookup occurs
under an RCU read-side critical section."
This is seen with CONFIG_DEBUG_ATOMIC_SLEEP=y and CONFIG_KFENCE=y.
Kernel addresses (with the exception of the vectors/kuser helper
page) do not have VMAs associated with them. If the vectors/kuser
helper page faults, then there are two possibilities:
1. if the fault happened while in kernel mode, then we're basically
dead, because the CPU won't be able to vector through this page
to handle the fault.
2. if the fault happened while in user mode, that means the page was
protected from user access, and we want to fault anyway.
Thus, we can handle kernel addresses from any context entirely
separately without going anywhere near the mmap lock. This gives us
an entirely non-sleeping path for all kernel mode kernel address
faults.
As we handle the kernel address faults before interrupts are enabled,
this change has the side effect of improving the branch predictor
hardening, but does not completely solve the issue.
Reported-by: Zizhi Wo <wozizhi@huaweicloud.com>
Reported-by: Xie Yuanbin <xieyuanbin1@huawei.com>
Link: https://lore.kernel.org/r/20251126090505.3057219-1-wozizhi@huaweicloud.com
Reviewed-by: Xie Yuanbin <xieyuanbin1@huawei.com>
Tested-by: Xie Yuanbin <xieyuanbin1@huawei.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
arch/arm/mm/fault.c | 35 +++++++++++++++++++++++++++++++++++
1 file changed, 35 insertions(+)
diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
index 192c8ab196dba..0e5b4bc7b2176 100644
--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -261,6 +261,35 @@ static inline bool ttbr0_usermode_access_allowed(struct pt_regs *regs)
}
#endif
+static int __kprobes
+do_kernel_address_page_fault(struct mm_struct *mm, unsigned long addr,
+ unsigned int fsr, struct pt_regs *regs)
+{
+ if (user_mode(regs)) {
+ /*
+ * Fault from user mode for a kernel space address. User mode
+ * should not be faulting in kernel space, which includes the
+ * vector/khelper page. Send a SIGSEGV.
+ */
+ __do_user_fault(addr, fsr, SIGSEGV, SEGV_MAPERR, regs);
+ } else {
+ /*
+ * Fault from kernel mode. Enable interrupts if they were
+ * enabled in the parent context. Section (upper page table)
+ * translation faults are handled via do_translation_fault(),
+ * so we will only get here for a non-present kernel space
+ * PTE or PTE permission fault. This may happen in exceptional
+ * circumstances and need the fixup tables to be walked.
+ */
+ if (interrupts_enabled(regs))
+ local_irq_enable();
+
+ __do_kernel_fault(mm, addr, fsr, regs);
+ }
+
+ return 0;
+}
+
static int __kprobes
do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
{
@@ -274,6 +303,12 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
if (kprobe_page_fault(regs, fsr))
return 0;
+ /*
+ * Handle kernel addresses faults separately, which avoids touching
+ * the mmap lock from contexts that are not able to sleep.
+ */
+ if (addr >= TASK_SIZE)
+ return do_kernel_address_page_fault(mm, addr, fsr, regs);
/* Enable interrupts if they were enabled in the parent context. */
if (interrupts_enabled(regs))
--
2.53.0
* [PATCH 4/4] ARM: fix branch predictor hardening
2026-05-11 13:53 [PATCH 0/4] ARM: stable backports Sebastian Andrzej Siewior
` (2 preceding siblings ...)
2026-05-11 13:53 ` [PATCH 3/4] ARM: fix hash_name() fault Sebastian Andrzej Siewior
@ 2026-05-11 13:53 ` Sebastian Andrzej Siewior
2026-05-12 8:00 ` [PATCH 0/4] ARM: stable backports Pavel Machek
2026-05-13 13:51 ` Bryan Brattlof
5 siblings, 0 replies; 7+ messages in thread
From: Sebastian Andrzej Siewior @ 2026-05-11 13:53 UTC (permalink / raw)
To: stable
Cc: Bryan Brattlof, Daniel Wagner, Jan Kiszka, cip-dev,
nobuhiro.iwamatsu.x90, pavel, Russell King, Xie Yuanbin,
Sebastian Andrzej Siewior
From: "Russell King (Oracle)" <rmk+kernel@armlinux.org.uk>
commit fd2dee1c6e2256f726ba33fd3083a7be0efc80d3 upstream.
__do_user_fault() may be called with indeterminate interrupt enable
state, which means we may be preemptible at this point. This causes
problems when calling harden_branch_predictor(), for example when
called from a data abort via do_alignment_fault()->do_bad_area().
Move harden_branch_predictor() out of __do_user_fault() and into the
calling contexts.
Moving it into do_kernel_address_page_fault(), we can be sure that
interrupts will be disabled here.
Converting do_translation_fault() to use do_kernel_address_page_fault()
rather than do_bad_area() means that we keep branch predictor handling
for translation faults. Interrupts will also be disabled at this call
site.
do_sect_fault() needs special handling, so detect user mode accesses
to kernel-addresses, and add an explicit call to branch predictor
hardening.
Finally, add branch predictor hardening to do_alignment() for the
faulting case (user mode accessing kernel addresses) before interrupts
are enabled.
This should cover all cases where harden_branch_predictor() is called,
ensuring that it is always called with interrupts disabled and that it
is called early in each call path.
Reviewed-by: Xie Yuanbin <xieyuanbin1@huawei.com>
Tested-by: Xie Yuanbin <xieyuanbin1@huawei.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
arch/arm/mm/alignment.c | 6 +++++-
arch/arm/mm/fault.c | 39 ++++++++++++++++++++++++++-------------
2 files changed, 31 insertions(+), 14 deletions(-)
diff --git a/arch/arm/mm/alignment.c b/arch/arm/mm/alignment.c
index 3c6ddb1afdc46..812380f30ae36 100644
--- a/arch/arm/mm/alignment.c
+++ b/arch/arm/mm/alignment.c
@@ -19,10 +19,11 @@
#include <linux/init.h>
#include <linux/sched/signal.h>
#include <linux/uaccess.h>
+#include <linux/unaligned.h>
#include <asm/cp15.h>
#include <asm/system_info.h>
-#include <linux/unaligned.h>
+#include <asm/system_misc.h>
#include <asm/opcodes.h>
#include "fault.h"
@@ -809,6 +810,9 @@ do_alignment(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
int thumb2_32b = 0;
int fault;
+ if (addr >= TASK_SIZE && user_mode(regs))
+ harden_branch_predictor();
+
if (interrupts_enabled(regs))
local_irq_enable();
diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
index 0e5b4bc7b2176..ed4330cc3f4e6 100644
--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -198,9 +198,6 @@ __do_user_fault(unsigned long addr, unsigned int fsr, unsigned int sig,
{
struct task_struct *tsk = current;
- if (addr > TASK_SIZE)
- harden_branch_predictor();
-
#ifdef CONFIG_DEBUG_USER
if (((user_debug & UDBG_SEGV) && (sig == SIGSEGV)) ||
((user_debug & UDBG_BUS) && (sig == SIGBUS))) {
@@ -269,8 +266,10 @@ do_kernel_address_page_fault(struct mm_struct *mm, unsigned long addr,
/*
* Fault from user mode for a kernel space address. User mode
* should not be faulting in kernel space, which includes the
- * vector/khelper page. Send a SIGSEGV.
+ * vector/khelper page. Handle the branch predictor hardening
+ * while interrupts are still disabled, then send a SIGSEGV.
*/
+ harden_branch_predictor();
__do_user_fault(addr, fsr, SIGSEGV, SEGV_MAPERR, regs);
} else {
/*
@@ -485,16 +484,20 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
* We enter here because the first level page table doesn't contain
* a valid entry for the address.
*
- * If the address is in kernel space (>= TASK_SIZE), then we are
- * probably faulting in the vmalloc() area.
+ * If this is a user address (addr < TASK_SIZE), we handle this as a
+ * normal page fault. This leaves the remainder of the function to handle
+ * kernel address translation faults.
*
- * If the init_task's first level page tables contains the relevant
- * entry, we copy the it to this task. If not, we send the process
- * a signal, fixup the exception, or oops the kernel.
+ * Since user mode is not permitted to access kernel addresses, pass these
+ * directly to do_kernel_address_page_fault() to handle.
*
- * NOTE! We MUST NOT take any locks for this case. We may be in an
- * interrupt or a critical region, and should only copy the information
- * from the master page table, nothing more.
+ * Otherwise, we're probably faulting in the vmalloc() area, so try to fix
+ * that up. Note that we must not take any locks or enable interrupts in
+ * this case.
+ *
+ * If vmalloc() fixup fails, that means the non-leaf page tables did not
+ * contain an entry for this address, so handle this via
+ * do_kernel_address_page_fault().
*/
#ifdef CONFIG_MMU
static int __kprobes
@@ -560,7 +563,8 @@ do_translation_fault(unsigned long addr, unsigned int fsr,
return 0;
bad_area:
- do_bad_area(addr, fsr, regs);
+ do_kernel_address_page_fault(current->mm, addr, fsr, regs);
+
return 0;
}
#else /* CONFIG_MMU */
@@ -580,7 +584,16 @@ do_translation_fault(unsigned long addr, unsigned int fsr,
static int
do_sect_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
{
+ /*
+ * If this is a kernel address, but from user mode, then userspace
+ * is trying bad stuff. Invoke the branch predictor handling.
+ * Interrupts are disabled here.
+ */
+ if (addr >= TASK_SIZE && user_mode(regs))
+ harden_branch_predictor();
+
do_bad_area(addr, fsr, regs);
+
return 0;
}
#endif /* CONFIG_ARM_LPAE */
--
2.53.0
* Re: [PATCH 0/4] ARM: stable backports
2026-05-11 13:53 [PATCH 0/4] ARM: stable backports Sebastian Andrzej Siewior
` (3 preceding siblings ...)
2026-05-11 13:53 ` [PATCH 4/4] ARM: fix branch predictor hardening Sebastian Andrzej Siewior
@ 2026-05-12 8:00 ` Pavel Machek
2026-05-13 13:51 ` Bryan Brattlof
5 siblings, 0 replies; 7+ messages in thread
From: Pavel Machek @ 2026-05-12 8:00 UTC (permalink / raw)
To: Sebastian Andrzej Siewior
Cc: stable, Bryan Brattlof, Daniel Wagner, Jan Kiszka, cip-dev,
nobuhiro.iwamatsu.x90, pavel, Russell King
Hi!
> This is a backport of ARM-related fixes. It applies cleanly to v6.18
> and v6.12. I have an updated batch for v6.6 and v6.1 because this
> series does not apply cleanly there.
>
> #1 and #2 are prerequisites for #3.
People often use stable-dep-of: header for that.
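[Editor's note: the trailer Pavel refers to is added to the commit message of
each prerequisite patch, naming the fix that depends on it. Using the hashes
from this series, for patches #1 and #2 it would look roughly like:]

```
Stable-dep-of: 7733bc7d299d ("ARM: fix hash_name() fault")
```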
> I can't tell the origin of #3 (fix hash_name() fault). It might have
> been there since the beginning of time.
>
> #4 (fix branch predictor hardening) fixes commit f5fe12b1eaee2 ("ARM:
> spectre-v2: harden user aborts in kernel space") which is v4.20-rc2.
>
> If there are no objections I would post the v6.6 version once this is
> accepted and then rebase the PREEMPT_RT bits on top of this.
I don't see anything obviously wrong with the series.
Reviewed-by: Pavel Machek <pavel@nabladev.com>
Best regards,
Pavel
* Re: [PATCH 0/4] ARM: stable backports
2026-05-11 13:53 [PATCH 0/4] ARM: stable backports Sebastian Andrzej Siewior
` (4 preceding siblings ...)
2026-05-12 8:00 ` [PATCH 0/4] ARM: stable backports Pavel Machek
@ 2026-05-13 13:51 ` Bryan Brattlof
5 siblings, 0 replies; 7+ messages in thread
From: Bryan Brattlof @ 2026-05-13 13:51 UTC (permalink / raw)
To: Sebastian Andrzej Siewior
Cc: stable, Daniel Wagner, Jan Kiszka, cip-dev, nobuhiro.iwamatsu.x90,
pavel, Russell King
Thank you Sebastian!
On May 11, 2026 thus sayeth Sebastian Andrzej Siewior:
> This is a backport of ARM-related fixes. It applies cleanly to v6.18
> and v6.12. I have an updated batch for v6.6 and v6.1 because this
> series does not apply cleanly there.
>
> #1 and #2 are prerequisites for #3.
>
> I can't tell the origin of #3 (fix hash_name() fault). It might have
> been there since the beginning of time.
>
> #4 (fix branch predictor hardening) fixes commit f5fe12b1eaee2 ("ARM:
> spectre-v2: harden user aborts in kernel space") which is v4.20-rc2.
>
> If there are no objections I would post the v6.6 version once this is
> accepted and then rebase the PREEMPT_RT bits on top of this.
>
> Russell King (Oracle) (4):
> ARM: group is_permission_fault() with is_translation_fault()
> ARM: allow __do_kernel_fault() to report execution of memory faults
> ARM: fix hash_name() fault
> ARM: fix branch predictor hardening
>
> arch/arm/mm/alignment.c | 6 ++-
> arch/arm/mm/fault.c | 100 ++++++++++++++++++++++++++++++----------
> 2 files changed, 80 insertions(+), 26 deletions(-)
Reviewed-by: Bryan Brattlof <bb@ti.com>
~Bryan