From: Thomas Gleixner <tglx@kernel.org>
To: LKML <linux-kernel@vger.kernel.org>
Cc: "Mathieu Desnoyers" <mathieu.desnoyers@efficios.com>,
"Andrè Almeida" <andrealmeid@igalia.com>,
"Sebastian Andrzej Siewior" <bigeasy@linutronix.de>,
"Carlos O'Donell" <carlos@redhat.com>,
"Peter Zijlstra" <peterz@infradead.org>,
"Florian Weimer" <fweimer@redhat.com>,
"Rich Felker" <dalias@aerifal.cx>,
"Torvald Riegel" <triegel@redhat.com>,
"Darren Hart" <dvhart@infradead.org>,
"Ingo Molnar" <mingo@kernel.org>,
"Davidlohr Bueso" <dave@stgolabs.net>,
"Arnd Bergmann" <arnd@arndb.de>,
"Liam R . Howlett" <Liam.Howlett@oracle.com>,
"Uros Bizjak" <ubizjak@gmail.com>,
"Thomas Weißschuh" <linux@weissschuh.net>
Subject: [patch V3 10/14] futex: Provide infrastructure to plug the non-contended robust futex unlock race
Date: Mon, 30 Mar 2026 14:02:51 +0200 [thread overview]
Message-ID: <20260330120117.742090889@kernel.org> (raw)
In-Reply-To: <20260330114212.927686587@kernel.org>
When the FUTEX_ROBUST_UNLOCK mechanism is used for unlocking (PI-)futexes,
then the unlock sequence in user space looks like this:
 1)	robust_list_set_op_pending(mutex);
 2)	robust_list_remove(mutex);

	lval = gettid();
 3)	if (atomic_try_cmpxchg(&mutex->lock, &lval, 0))
 4)		robust_list_clear_op_pending();
	else
 5)		sys_futex(OP | FUTEX_ROBUST_UNLOCK, ....);
That still leaves a minimal race window between #3 and #4 where the mutex
could be acquired by some other task, which observes that it is the last
user and:
1) unmaps the mutex memory
2) maps a different file, which ends up covering the same address
When the original task then exits before reaching #5, the kernel robust
list handling observes the pending op entry and tries to fix up user space.
If the newly mapped data contains the TID of the exiting thread at the
address of the mutex/futex, the kernel sets the owner died bit in that
memory and thereby corrupts unrelated data.
On x86 this boils down to this simplified assembly sequence:

	mov	%esi,%eax		// Load TID into EAX
	xor	%ecx,%ecx		// Set ECX to 0
 #3	lock cmpxchg %ecx,(%rdi)	// Try the TID -> 0 transition
.Lstart:
	jnz	.Lend
 #4	movq	%rcx,(%rdx)		// Clear list_op_pending
.Lend:
If the cmpxchg() succeeds, but the task is interrupted before it can clear
list_op_pending in the robust list head (#4), and it then crashes in a
signal handler or gets killed, it ends up in do_exit() and subsequently in
the robust list handling, which might run into the unmap/map issue
described above.
This is only relevant when user space was interrupted and a signal is
pending. The fix-up has to be done before signal delivery is attempted
because:
 1) The signal might be fatal so get_signal() ends up in do_exit()
 2) The signal handler might crash or the task is killed before returning
    from the handler. At that point the instruction pointer in pt_regs is
    no longer the instruction pointer of the initially interrupted unlock
    sequence.
The right place to handle this is in __exit_to_user_mode_loop() before
invoking arch_do_signal_or_restart(), as this obviously covers both
scenarios.
As this is only relevant when the task was interrupted in user space, this
is tied to RSEQ and the generic entry code, as RSEQ keeps track of user
space interrupts unconditionally even if the task does not have an RSEQ
region installed. That makes the decision very lightweight:

	if (current->rseq.event.user_irq && within(regs, csr->unlock_ip_range))
		futex_fixup_robust_unlock(regs, csr);
futex_fixup_robust_unlock() then invokes an architecture-specific function
to return the pending op pointer or NULL. The function evaluates the
register content to decide whether the pending op pointer in the robust
list head needs to be cleared.
Assuming the above unlock sequence, on x86 this decision is the trivial
evaluation of the zero flag:

	return regs->flags & X86_EFLAGS_ZF ? regs->dx : NULL;
Other architectures might need to do more complex evaluations due to LLSC,
but the approach is valid in general. The size of the pointer is determined
from the matching range struct, which covers both 32-bit and 64-bit builds
including COMPAT.
The unlock sequence is going to be placed in the VDSO so that the kernel
can keep everything synchronized, especially the register usage. The
resulting code sequence for user space is:
	if (__vdso_futex_robust_list$SZ_try_unlock(lock, tid, &pending_op) != tid)
		err = sys_futex($OP | FUTEX_ROBUST_UNLOCK, ....);
Both the VDSO unlock and the kernel side unlock ensure that the pending_op
pointer is always cleared when the lock becomes unlocked.
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
---
V3: Fixup conversion leftover which was lost on the devel machine
V2: Convert to the struct range storage and simplify the fixup logic
---
include/linux/futex.h | 39 ++++++++++++++++++++++++++++++++++++-
include/vdso/futex.h | 52 ++++++++++++++++++++++++++++++++++++++++++++++++++
kernel/entry/common.c | 9 +++++---
kernel/futex/core.c | 18 +++++++++++++++++
4 files changed, 114 insertions(+), 4 deletions(-)
--- a/include/linux/futex.h
+++ b/include/linux/futex.h
@@ -105,7 +105,41 @@ static inline int futex_hash_free(struct
#endif /* !CONFIG_FUTEX */
#ifdef CONFIG_FUTEX_ROBUST_UNLOCK
+#include <asm/futex_robust.h>
+
void futex_reset_cs_ranges(struct futex_mm_data *fd);
+void __futex_fixup_robust_unlock(struct pt_regs *regs, struct futex_unlock_cs_range *csr);
+
+static inline bool futex_within_robust_unlock(struct pt_regs *regs,
+					      struct futex_unlock_cs_range *csr)
+{
+	unsigned long ip = instruction_pointer(regs);
+
+	return ip >= csr->start_ip && ip < csr->start_ip + csr->len;
+}
+
+static inline void futex_fixup_robust_unlock(struct pt_regs *regs)
+{
+	struct futex_unlock_cs_range *csr;
+
+	/*
+	 * Avoid dereferencing current->mm if not returning from interrupt.
+	 * current->rseq.event is going to be used subsequently, so bringing the
+	 * cache line in is not a big deal.
+	 */
+	if (!current->rseq.event.user_irq)
+		return;
+
+	csr = current->mm->futex.unlock.cs_ranges;
+
+	/* The loop is optimized out for !COMPAT */
+	for (int r = 0; r < FUTEX_ROBUST_MAX_CS_RANGES; r++, csr++) {
+		if (unlikely(futex_within_robust_unlock(regs, csr))) {
+			__futex_fixup_robust_unlock(regs, csr);
+			return;
+		}
+	}
+}
static inline void futex_set_vdso_cs_range(struct futex_mm_data *fd, unsigned int idx,
unsigned long vdso, unsigned long start,
@@ -115,7 +149,10 @@ static inline void futex_set_vdso_cs_ran
fd->unlock.cs_ranges[idx].len = end - start;
fd->unlock.cs_ranges[idx].pop_size32 = sz32;
}
-#endif /* CONFIG_FUTEX_ROBUST_UNLOCK */
+#else /* CONFIG_FUTEX_ROBUST_UNLOCK */
+static inline void futex_fixup_robust_unlock(struct pt_regs *regs) { }
+#endif /* !CONFIG_FUTEX_ROBUST_UNLOCK */
+
#if defined(CONFIG_FUTEX_PRIVATE_HASH) || defined(CONFIG_FUTEX_ROBUST_UNLOCK)
void futex_mm_init(struct mm_struct *mm);
--- /dev/null
+++ b/include/vdso/futex.h
@@ -0,0 +1,52 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _VDSO_FUTEX_H
+#define _VDSO_FUTEX_H
+
+#include <uapi/linux/types.h>
+
+/**
+ * __vdso_futex_robust_list64_try_unlock - Try to unlock an uncontended robust futex
+ * with a 64-bit pending op pointer
+ * @lock: Pointer to the futex lock object
+ * @tid: The TID of the calling task
+ * @pop:	Pointer to the task's robust_list_head::list_op_pending
+ *
+ * Return: The content of *@lock. On success this is the same as @tid.
+ *
+ * The function implements:
+ * if (atomic_try_cmpxchg(lock, &tid, 0))
+ *		*pop = NULL;
+ * return tid;
+ *
+ * There is a race between a successful unlock and clearing the pending op
+ * pointer in the robust list head. If the calling task is interrupted in the
+ * race window and has to handle a (fatal) signal on return to user space then
+ * the kernel handles the clearing of @pending_op before attempting to deliver
+ * the signal. That ensures that a task cannot exit with a potentially invalid
+ * pending op pointer.
+ *
+ * User space uses it in the following way:
+ *
+ * if (__vdso_futex_robust_list64_try_unlock(lock, tid, &pending_op) != tid)
+ * err = sys_futex($OP | FUTEX_ROBUST_UNLOCK,....);
+ *
+ * If the unlock attempt fails due to the FUTEX_WAITERS bit set in the lock,
+ * then the syscall does the unlock, clears the pending op pointer and wakes the
+ * requested number of waiters.
+ */
+__u32 __vdso_futex_robust_list64_try_unlock(__u32 *lock, __u32 tid, __u64 *pop);
+
+/**
+ * __vdso_futex_robust_list32_try_unlock - Try to unlock an uncontended robust futex
+ * with a 32-bit pending op pointer
+ * @lock: Pointer to the futex lock object
+ * @tid: The TID of the calling task
+ * @pop:	Pointer to the task's robust_list_head::list_op_pending
+ *
+ * Return: The content of *@lock. On success this is the same as @tid.
+ *
+ * Same as __vdso_futex_robust_list64_try_unlock() just with a 32-bit @pop pointer.
+ */
+__u32 __vdso_futex_robust_list32_try_unlock(__u32 *lock, __u32 tid, __u32 *pop);
+
+#endif
--- a/kernel/entry/common.c
+++ b/kernel/entry/common.c
@@ -1,11 +1,12 @@
// SPDX-License-Identifier: GPL-2.0
-#include <linux/irq-entry-common.h>
-#include <linux/resume_user_mode.h>
+#include <linux/futex.h>
#include <linux/highmem.h>
+#include <linux/irq-entry-common.h>
#include <linux/jump_label.h>
#include <linux/kmsan.h>
#include <linux/livepatch.h>
+#include <linux/resume_user_mode.h>
#include <linux/tick.h>
/* Workaround to allow gradual conversion of architecture code */
@@ -60,8 +61,10 @@ static __always_inline unsigned long __e
if (ti_work & _TIF_PATCH_PENDING)
klp_update_patch_state(current);
- if (ti_work & (_TIF_SIGPENDING | _TIF_NOTIFY_SIGNAL))
+	if (ti_work & (_TIF_SIGPENDING | _TIF_NOTIFY_SIGNAL)) {
+		futex_fixup_robust_unlock(regs);
 		arch_do_signal_or_restart(regs);
+	}
if (ti_work & _TIF_NOTIFY_RESUME)
resume_user_mode_work(regs);
--- a/kernel/futex/core.c
+++ b/kernel/futex/core.c
@@ -46,6 +46,8 @@
#include <linux/slab.h>
#include <linux/vmalloc.h>
+#include <vdso/futex.h>
+
#include "futex.h"
#include "../locking/rtmutex_common.h"
@@ -1447,6 +1449,22 @@ bool futex_robust_list_clear_pending(voi
return robust_list_clear_pending(pop);
}
+#ifdef CONFIG_FUTEX_ROBUST_UNLOCK
+void __futex_fixup_robust_unlock(struct pt_regs *regs, struct futex_unlock_cs_range *csr)
+{
+	/*
+	 * arch_futex_robust_unlock_get_pop() returns the list pending op pointer
+	 * from @regs if the try_cmpxchg() succeeded.
+	 */
+	void __user *pop = arch_futex_robust_unlock_get_pop(regs);
+
+	if (!pop)
+		return;
+
+	futex_robust_list_clear_pending(pop, csr->pop_size32 ? FLAGS_ROBUST_LIST32 : 0);
+}
+#endif /* CONFIG_FUTEX_ROBUST_UNLOCK */
+
static void futex_cleanup(struct task_struct *tsk)
{
if (unlikely(tsk->futex.robust_list)) {
Thread overview: 27+ messages
2026-03-30 12:01 [patch V3 00/14] futex: Address the robust futex unlock race for real Thomas Gleixner
2026-03-30 12:02 ` [patch V3 01/14] futex: Move futex task related data into a struct Thomas Gleixner
2026-03-30 12:02 ` [patch V3 02/14] futex: Make futex_mm_init() void Thomas Gleixner
2026-03-30 12:02 ` [patch V3 03/14] futex: Move futex related mm_struct data into a struct Thomas Gleixner
2026-03-30 15:23 ` Alexander Kuleshov
2026-03-30 12:02 ` [patch V3 04/14] futex: Provide UABI defines for robust list entry modifiers Thomas Gleixner
2026-03-30 12:02 ` [patch V3 05/14] uaccess: Provide unsafe_atomic_store_release_user() Thomas Gleixner
2026-03-30 13:33 ` Mark Rutland
2026-03-30 12:02 ` [patch V3 06/14] x86: Select ARCH_MEMORY_ORDER_TOS Thomas Gleixner
2026-03-30 13:34 ` Mark Rutland
2026-03-30 19:48 ` Thomas Gleixner
2026-03-30 12:02 ` [patch V3 07/14] futex: Cleanup UAPI defines Thomas Gleixner
2026-03-30 12:02 ` [patch V3 08/14] futex: Add support for unlocking robust futexes Thomas Gleixner
2026-03-30 12:02 ` [patch V3 09/14] futex: Add robust futex unlock IP range Thomas Gleixner
2026-03-30 12:02 ` Thomas Gleixner [this message]
2026-03-30 12:02 ` [patch V3 11/14] x86/vdso: Prepare for robust futex unlock support Thomas Gleixner
2026-03-30 12:03 ` [patch V3 12/14] x86/vdso: Implement __vdso_futex_robust_try_unlock() Thomas Gleixner
2026-03-30 12:03 ` [patch V3 13/14] Documentation: futex: Add a note about robust list race condition Thomas Gleixner
2026-03-30 12:03 ` [patch V3 14/14] selftests: futex: Add tests for robust release operations Thomas Gleixner
2026-03-30 13:45 ` [patch V3 00/14] futex: Address the robust futex unlock race for real Mark Rutland
2026-03-30 13:51 ` Peter Zijlstra
2026-03-30 19:36 ` Thomas Gleixner
2026-03-31 14:12 ` Mark Rutland
2026-03-31 12:59 ` André Almeida
2026-03-31 13:03 ` Sebastian Andrzej Siewior
2026-03-31 14:13 ` Mark Rutland
2026-03-31 15:22 ` Thomas Gleixner