From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 20 Mar 2026 00:24:51 +0100
Message-ID: <20260319231239.749930268@kernel.org>
User-Agent: quilt/0.68
From: Thomas Gleixner
To: LKML
Cc: Mathieu Desnoyers, André Almeida, Sebastian Andrzej Siewior,
    Carlos O'Donell, Peter Zijlstra, Florian Weimer, Rich Felker,
    Torvald Riegel, Darren Hart, Ingo Molnar, Davidlohr Bueso,
    Arnd Bergmann, "Liam R. Howlett", Uros Bizjak, Thomas Weißschuh
Subject: [patch v2 09/11] futex: Provide infrastructure to plug the non contended robust futex unlock race
References: <20260319225224.853416463@kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

When the FUTEX_ROBUST_UNLOCK mechanism is used for unlocking (PI-)futexes,
then the unlock sequence in user space looks like this:

  1)	robust_list_set_op_pending(mutex);
  2)	robust_list_remove(mutex);
	lval = gettid();
  3)	if (atomic_try_cmpxchg(&mutex->lock, lval, 0))
  4)		robust_list_clear_op_pending();
	else
  5)		sys_futex(OP | FUTEX_ROBUST_UNLOCK, ....);

That still leaves a minimal race window between #3 and #4 where the mutex
could be acquired by some other task, which observes that it is the last
user and:

  1) unmaps the mutex memory
  2) maps a different file, which ends up covering the same address

When the original task then exits before reaching #4, the kernel robust
list handling observes the pending op entry and tries to fix up user
space. In case the newly mapped data contains the TID of the exiting
thread at the address of the mutex/futex, the kernel will set the owner
died bit in that memory and therefore corrupt unrelated data.
On X86 this boils down to this simplified assembly sequence:

	   mov	%esi,%eax		// Load TID into EAX
	   xor	%ecx,%ecx		// Set ECX to 0
   #3	   lock cmpxchg %ecx,(%rdi)	// Try the TID -> 0 transition
   .Lstart:
	   jnz	.Lend
   #4	   movq	%rcx,(%rdx)		// Clear list_op_pending
   .Lend:

If the cmpxchg() succeeds and the task is interrupted before it can clear
list_op_pending in the robust list head (#4) and the task crashes in a
signal handler or gets killed, then it ends up in do_exit() and
subsequently in the robust list handling, which then might run into the
unmap/map issue described above.

This is only relevant when user space was interrupted and a signal is
pending. The fix-up has to be done before signal delivery is attempted
because:

  1) The signal might be fatal, so get_signal() ends up in do_exit()

  2) The signal handler might crash or the task is killed before
     returning from the handler. At that point the instruction pointer
     in pt_regs is no longer the instruction pointer of the initially
     interrupted unlock sequence.

The right place to handle this is in __exit_to_user_mode_loop() before
invoking arch_do_signal_or_restart() as this obviously covers both
scenarios.

As this is only relevant when the task was interrupted in user space,
this is tied to RSEQ and the generic entry code, as RSEQ keeps track of
user space interrupts unconditionally even if the task does not have a
RSEQ region installed. That makes the decision very lightweight:

	if (current->rseq.user_irq && within(regs, csr->unlock_ip_range))
		futex_fixup_robust_unlock(regs, csr);

futex_fixup_robust_unlock() then invokes an architecture specific
function to return the pending op pointer or NULL. The function
evaluates the register content to decide whether the pending op pointer
in the robust list head needs to be cleared. Assuming the above unlock
sequence, on x86 this decision is the trivial evaluation of the zero
flag:

	return regs->eflags & X86_EFLAGS_ZF ? regs->dx : NULL;

Other architectures might need to do more complex evaluations due to
LLSC, but the approach is valid in general. The size of the pointer is
determined from the matching range struct, which covers both 32-bit and
64-bit builds including COMPAT.

The unlock sequence is going to be placed in the VDSO so that the kernel
can keep everything synchronized, especially the register usage. The
resulting code sequence for user space is:

	if (__vdso_futex_robust_list$SZ_try_unlock(lock, tid, &pending_op) != tid)
		err = sys_futex($OP | FUTEX_ROBUST_UNLOCK, ....);

Both the VDSO unlock and the kernel side unlock ensure that the
pending_op pointer is always cleared when the lock becomes unlocked.

Signed-off-by: Thomas Gleixner
---
V2: Convert to the struct range storage and simplify the fixup logic
---
 include/linux/futex.h |   42 +++++++++++++++++++++++++++++++++++++++-
 include/vdso/futex.h  |   52 ++++++++++++++++++++++++++++++++++++++++++++++++++
 kernel/entry/common.c |    9 +++++---
 kernel/futex/core.c   |   14 +++++++++++++
 4 files changed, 113 insertions(+), 4 deletions(-)

--- a/include/linux/futex.h
+++ b/include/linux/futex.h
@@ -110,7 +110,47 @@ static inline int futex_hash_allocate_de
 }
 static inline int futex_hash_free(struct mm_struct *mm) { return 0; }
 static inline int futex_mm_init(struct mm_struct *mm) { return 0; }
+#endif /* !CONFIG_FUTEX */
 
-#endif
+#ifdef CONFIG_FUTEX_ROBUST_UNLOCK
+#include
+
+void __futex_fixup_robust_unlock(struct pt_regs *regs, struct futex_unlock_cs_range *csr);
+
+static inline bool futex_within_robust_unlock(struct pt_regs *regs,
+					      struct futex_unlock_cs_range *csr)
+{
+	unsigned long ip = instruction_pointer(regs);
+
+	return ip >= csr->start_ip && ip < csr->end_ip;
+}
+
+static inline void futex_fixup_robust_unlock(struct pt_regs *regs)
+{
+	struct futex_unlock_cs_range *csr;
+
+	/*
+	 * Avoid dereferencing current->mm if not returning from interrupt.
+	 * current->rseq.event is going to be used anyway in the exit to
+	 * user code, so bringing it in is not a big deal.
+	 */
+	if (!current->rseq.event.user_irq)
+		return;
+
+	csr = current->mm->futex.unlock_cs_ranges;
+	if (unlikely(futex_within_robust_unlock(regs, csr))) {
+		__futex_fixup_robust_unlock(regs, csr);
+		return;
+	}
+
+	/* Multi sized robust lists are only supported with CONFIG_COMPAT */
+	if (IS_ENABLED(CONFIG_COMPAT) && current->mm->futex.unlock_cs_num_ranges == 2) {
+		if (unlikely(futex_within_robust_unlock(regs, ++csr)))
+			__futex_fixup_robust_unlock(regs, csr);
+	}
+}
+#else /* CONFIG_FUTEX_ROBUST_UNLOCK */
+static inline void futex_fixup_robust_unlock(struct pt_regs *regs) {}
+#endif /* !CONFIG_FUTEX_ROBUST_UNLOCK */
 #endif
--- /dev/null
+++ b/include/vdso/futex.h
@@ -0,0 +1,52 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _VDSO_FUTEX_H
+#define _VDSO_FUTEX_H
+
+#include
+
+/**
+ * __vdso_futex_robust_list64_try_unlock - Try to unlock an uncontended robust futex
+ *					   with a 64-bit pending op pointer
+ * @lock:	Pointer to the futex lock object
+ * @tid:	The TID of the calling task
+ * @pop:	Pointer to the task's robust_list_head::list_pending_op
+ *
+ * Return: The content of *@lock. On success this is the same as @tid.
+ *
+ * The function implements:
+ *	if (atomic_try_cmpxchg(lock, &tid, 0))
+ *		*pop = NULL;
+ *	return tid;
+ *
+ * There is a race between a successful unlock and clearing the pending op
+ * pointer in the robust list head. If the calling task is interrupted in the
+ * race window and has to handle a (fatal) signal on return to user space then
+ * the kernel handles the clearing of @pop before attempting to deliver the
+ * signal. That ensures that a task cannot exit with a potentially invalid
+ * pending op pointer.
+ *
+ * User space uses it in the following way:
+ *
+ *	if (__vdso_futex_robust_list64_try_unlock(lock, tid, &pending_op) != tid)
+ *		err = sys_futex($OP | FUTEX_ROBUST_UNLOCK, ....);
+ *
+ * If the unlock attempt fails due to the FUTEX_WAITERS bit set in the lock,
+ * then the syscall does the unlock, clears the pending op pointer and wakes the
+ * requested number of waiters.
+ */
+__u32 __vdso_futex_robust_list64_try_unlock(__u32 *lock, __u32 tid, __u64 *pop);
+
+/**
+ * __vdso_futex_robust_list32_try_unlock - Try to unlock an uncontended robust futex
+ *					   with a 32-bit pending op pointer
+ * @lock:	Pointer to the futex lock object
+ * @tid:	The TID of the calling task
+ * @pop:	Pointer to the task's robust_list_head::list_pending_op
+ *
+ * Return: The content of *@lock. On success this is the same as @tid.
+ *
+ * Same as __vdso_futex_robust_list64_try_unlock() just with a 32-bit @pop pointer.
+ */
+__u32 __vdso_futex_robust_list32_try_unlock(__u32 *lock, __u32 tid, __u32 *pop);
+
+#endif
--- a/kernel/entry/common.c
+++ b/kernel/entry/common.c
@@ -1,11 +1,12 @@
 // SPDX-License-Identifier: GPL-2.0
-#include
-#include
+#include
 #include
+#include
 #include
 #include
 #include
+#include
 #include
 
 /* Workaround to allow gradual conversion of architecture code */
@@ -60,8 +61,10 @@ static __always_inline unsigned long __e
 	if (ti_work & _TIF_PATCH_PENDING)
 		klp_update_patch_state(current);
 
-	if (ti_work & (_TIF_SIGPENDING | _TIF_NOTIFY_SIGNAL))
+	if (ti_work & (_TIF_SIGPENDING | _TIF_NOTIFY_SIGNAL)) {
+		futex_fixup_robust_unlock(regs);
 		arch_do_signal_or_restart(regs);
+	}
 
 	if (ti_work & _TIF_NOTIFY_RESUME)
 		resume_user_mode_work(regs);
--- a/kernel/futex/core.c
+++ b/kernel/futex/core.c
@@ -46,6 +46,8 @@
 #include
 #include
 
+#include
+
 #include "futex.h"
 #include "../locking/rtmutex_common.h"
@@ -1447,6 +1449,18 @@ bool futex_robust_list_clear_pending(voi
 	return robust_list_clear_pending(pop);
 }
 
+#ifdef CONFIG_FUTEX_ROBUST_UNLOCK
+void __futex_fixup_robust_unlock(struct pt_regs *regs, struct futex_unlock_cs_range *csr)
+{
+	void __user *pop = arch_futex_robust_unlock_get_pop(regs);
+
+	if (!pop)
+		return;
+
+	futex_robust_list_clear_pending(pop, csr->cs_pop_size32 ? FLAGS_ROBUST_LIST32 : 0);
+}
+#endif /* CONFIG_FUTEX_ROBUST_UNLOCK */
+
 static void futex_cleanup(struct task_struct *tsk)
 {
 	if (unlikely(tsk->futex.robust_list)) {