From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 20 Mar 2026 00:25:00 +0100
Message-ID: <20260319231239.882002347@kernel.org>
User-Agent: quilt/0.68
From: Thomas Gleixner
To: LKML
Cc: Mathieu Desnoyers, André Almeida, Sebastian Andrzej Siewior,
    Carlos O'Donell, Peter Zijlstra, Florian Weimer, Rich Felker,
    Torvald Riegel, Darren Hart, Ingo Molnar, Davidlohr Bueso,
    Arnd Bergmann, "Liam R. Howlett", Uros Bizjak, Thomas Weißschuh
Subject: [patch v2 11/11] x86/vdso: Implement __vdso_futex_robust_try_unlock()
References: <20260319225224.853416463@kernel.org>

When the FUTEX_ROBUST_UNLOCK mechanism is used for unlocking (PI-)futexes,
then the unlock sequence in user space looks like this:

  1) robust_list_set_op_pending(mutex);
  2) robust_list_remove(mutex);
     lval = gettid();
  3) if (atomic_try_cmpxchg(&mutex->lock, lval, 0))
  4)         robust_list_clear_op_pending();
     else
  5)         sys_futex(OP,...FUTEX_ROBUST_UNLOCK);

That still leaves a minimal race window between #3 and #4 in which the
mutex can be acquired by some other task, which observes that it is the
last user and:

  1) unmaps the mutex memory
  2) maps a different file, which ends up covering the same address

If the original task then exits before reaching #5, the kernel robust list
handling observes the pending op entry and tries to fix up user space. In
case the newly mapped data contains the TID of the exiting thread at the
address of the mutex/futex, the kernel sets the owner died bit in that
memory and thereby corrupts unrelated data.

Provide a VDSO function which exposes the critical section window in the
VDSO symbol table. The resulting addresses are updated in the task's mm
when the VDSO is (re)mapped.
The core code detects when a task was interrupted within the critical
section and is about to deliver a signal. It then invokes an architecture
specific function which determines whether the pending op pointer has to be
cleared or not.

The unlock assembly sequence on 64-bit is:

	mov %esi,%eax			// Load TID into EAX
	xor %ecx,%ecx			// Set ECX to 0
	lock cmpxchg %ecx,(%rdi)	// Try the TID -> 0 transition
  .Lstart:
	jnz .Lend
	movq %rcx,(%rdx)		// Clear list_op_pending
  .Lend:
	ret

So the decision can simply be based on the ZF state in regs->flags. The
pending op pointer is always in DX independent of the build mode (32/64-bit)
to make the pending op pointer retrieval uniform. The size of the pointer is
stored in the matching critical section range struct and the core code
retrieves it from there, so the pointer retrieval function does not have to
care. It is bit-size independent:

	return regs->flags & X86_EFLAGS_ZF ? regs->dx : NULL;

There are two entry points to handle the different robust list pending op
pointer sizes:

	__vdso_futex_robust_list64_try_unlock()
	__vdso_futex_robust_list32_try_unlock()

The 32-bit VDSO provides only __vdso_futex_robust_list32_try_unlock(). The
64-bit VDSO always provides __vdso_futex_robust_list64_try_unlock() and,
when COMPAT is enabled, also the list32 variant, which is required to
support multi-size robust list pointers used by gaming emulators.

The unlock function is inspired by an idea from Mathieu Desnoyers.

Signed-off-by: Thomas Gleixner
Link: https://lore.kernel.org/20260311185409.1988269-1-mathieu.desnoyers@efficios.com
--
V2: Provide different entry points - Florian
    Use __u32 and __x86_64__ - Thomas
    Use private labels - Thomas
    Optimize assembly - Uros
    Split the functions up now that ranges are supported in the core and
    document the actual assembly.
---
 arch/x86/Kconfig                         |    1 
 arch/x86/entry/vdso/common/vfutex.c      |   76 +++++++++++++++++++++++++++++++
 arch/x86/entry/vdso/vdso32/Makefile      |    5 +-
 arch/x86/entry/vdso/vdso32/vdso32.lds.S  |    3 +
 arch/x86/entry/vdso/vdso32/vfutex.c      |    1 
 arch/x86/entry/vdso/vdso64/Makefile      |    7 +-
 arch/x86/entry/vdso/vdso64/vdso64.lds.S  |    4 +
 arch/x86/entry/vdso/vdso64/vdsox32.lds.S |    4 +
 arch/x86/entry/vdso/vdso64/vfutex.c      |    1 
 arch/x86/include/asm/futex_robust.h      |   19 +++++++
 10 files changed, 116 insertions(+), 5 deletions(-)

--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -238,6 +238,7 @@ config X86
 	select HAVE_EFFICIENT_UNALIGNED_ACCESS
 	select HAVE_EISA			if X86_32
 	select HAVE_EXIT_THREAD
+	select HAVE_FUTEX_ROBUST_UNLOCK
 	select HAVE_GENERIC_TIF_BITS
 	select HAVE_GUP_FAST
 	select HAVE_FENTRY			if X86_64 || DYNAMIC_FTRACE
--- /dev/null
+++ b/arch/x86/entry/vdso/common/vfutex.c
@@ -0,0 +1,76 @@
+// SPDX-License-Identifier: GPL-2.0-only
+#include
+
+/*
+ * Assembly template for the try unlock functions. The basic functionality is:
+ *
+ *	mov %esi, %eax			Move the TID into EAX
+ *	xor %ecx, %ecx			Clear ECX
+ *	lock cmpxchgl %ecx, (%rdi)	Attempt the TID -> 0 transition
+ * .Lcs_start:				Start of the critical section
+ *	jnz .Lcs_end			If cmpxchg failed jump to the end
+ * .Lcs_success:			Start of the success section
+ *	movq %rcx, (%rdx)		Set the pending op pointer to 0
+ * .Lcs_end:				End of the critical section
+ *
+ * .Lcs_start and .Lcs_end establish the critical section range. .Lcs_success
+ * is technically not required, but there for illustration, debugging and
+ * testing.
+ *
+ * When CONFIG_COMPAT is enabled then the 64-bit VDSO provides two functions.
+ * One for the regular 64-bit sized pending operation pointer and one for a
+ * 32-bit sized pointer to support gaming emulators.
+ *
+ * The 32-bit VDSO provides only the one for 32-bit sized pointers.
+ */
+#define __stringify_1(x...)	#x
+#define __stringify(x...)	__stringify_1(x)
+
+#define LABEL(name, which)	__stringify(name##_futex_try_unlock_cs_##which:)
+
+#define JNZ_END(name)		"jnz " __stringify(name) "_futex_try_unlock_cs_end\n"
+
+#define CLEAR_POPQ		"movq %[zero], %a[pop]\n"
+#define CLEAR_POPL		"movl %k[zero], %a[pop]\n"
+
+#define futex_robust_try_unlock(name, clear_pop, __lock, __tid, __pop)	\
+({									\
+	asm volatile (							\
+		"					\n"		\
+		" lock cmpxchgl %k[zero], %a[lock]	\n"		\
+		"					\n"		\
+		LABEL(name, start)					\
+		"					\n"		\
+		JNZ_END(name)						\
+		"					\n"		\
+		LABEL(name, success)					\
+		"					\n"		\
+		clear_pop						\
+		"					\n"		\
+		LABEL(name, end)					\
+		: [tid] "+&a" (__tid)					\
+		: [lock] "D" (__lock),					\
+		  [pop] "d" (__pop),					\
+		  [zero] "S" (0UL)					\
+		: "memory"						\
+	);								\
+	__tid;								\
+})
+
+#ifdef __x86_64__
+__u32 __vdso_futex_robust_list64_try_unlock(__u32 *lock, __u32 tid, __u64 *pop)
+{
+	return futex_robust_try_unlock(x86_64, CLEAR_POPQ, lock, tid, pop);
+}
+
+#ifdef CONFIG_COMPAT
+__u32 __vdso_futex_robust_list32_try_unlock(__u32 *lock, __u32 tid, __u32 *pop)
+{
+	return futex_robust_try_unlock(x86_64_compat, CLEAR_POPL, lock, tid, pop);
+}
+#endif /* CONFIG_COMPAT */
+#else /* __x86_64__ */
+__u32 __vdso_futex_robust_list32_try_unlock(__u32 *lock, __u32 tid, __u32 *pop)
+{
+	return futex_robust_try_unlock(x86_32, CLEAR_POPL, lock, tid, pop);
+}
+#endif /* !__x86_64__ */
--- a/arch/x86/entry/vdso/vdso32/Makefile
+++ b/arch/x86/entry/vdso/vdso32/Makefile
@@ -7,8 +7,9 @@ vdsos-y := 32

 # Files to link into the vDSO:
-vobjs-y			:= note.o vclock_gettime.o vgetcpu.o
-vobjs-y			+= system_call.o sigreturn.o
+vobjs-y					:= note.o vclock_gettime.o vgetcpu.o
+vobjs-y					+= system_call.o sigreturn.o
+vobjs-$(CONFIG_FUTEX_ROBUST_UNLOCK)	+= vfutex.o

 # Compilation flags
 flags-y := -DBUILD_VDSO32 -m32 -mregparm=0
--- a/arch/x86/entry/vdso/vdso32/vdso32.lds.S
+++ b/arch/x86/entry/vdso/vdso32/vdso32.lds.S
@@ -30,6 +30,9 @@ VERSION
 		__vdso_clock_gettime64;
 		__vdso_clock_getres_time64;
 		__vdso_getcpu;
+#ifdef CONFIG_FUTEX_ROBUST_UNLOCK
+		__vdso_futex_robust_list32_try_unlock;
+#endif
 	};

 	LINUX_2.5 {
--- /dev/null
+++ b/arch/x86/entry/vdso/vdso32/vfutex.c
@@ -0,0 +1 @@
+#include "common/vfutex.c"
--- a/arch/x86/entry/vdso/vdso64/Makefile
+++ b/arch/x86/entry/vdso/vdso64/Makefile
@@ -8,9 +8,10 @@ vdsos-y := 64
 vdsos-$(CONFIG_X86_X32_ABI) += x32

 # Files to link into the vDSO:
-vobjs-y			:= note.o vclock_gettime.o vgetcpu.o
-vobjs-y			+= vgetrandom.o vgetrandom-chacha.o
-vobjs-$(CONFIG_X86_SGX)	+= vsgx.o
+vobjs-y					:= note.o vclock_gettime.o vgetcpu.o
+vobjs-y					+= vgetrandom.o vgetrandom-chacha.o
+vobjs-$(CONFIG_X86_SGX)			+= vsgx.o
+vobjs-$(CONFIG_FUTEX_ROBUST_UNLOCK)	+= vfutex.o

 # Compilation flags
 flags-y := -DBUILD_VDSO64 -m64 -mcmodel=small
--- a/arch/x86/entry/vdso/vdso64/vdso64.lds.S
+++ b/arch/x86/entry/vdso/vdso64/vdso64.lds.S
@@ -32,6 +32,10 @@ VERSION {
 #endif
 		getrandom;
 		__vdso_getrandom;
+#ifdef CONFIG_FUTEX_ROBUST_UNLOCK
+		__vdso_futex_robust_list64_try_unlock;
+		__vdso_futex_robust_list32_try_unlock;
+#endif
 	local: *;
 	};
 }
--- a/arch/x86/entry/vdso/vdso64/vdsox32.lds.S
+++ b/arch/x86/entry/vdso/vdso64/vdsox32.lds.S
@@ -22,6 +22,10 @@ VERSION {
 		__vdso_getcpu;
 		__vdso_time;
 		__vdso_clock_getres;
+#ifdef CONFIG_FUTEX_ROBUST_UNLOCK
+		__vdso_futex_robust_list64_try_unlock;
+		__vdso_futex_robust_list32_try_unlock;
+#endif
 	local: *;
 	};
 }
--- /dev/null
+++ b/arch/x86/entry/vdso/vdso64/vfutex.c
@@ -0,0 +1 @@
+#include "common/vfutex.c"
--- /dev/null
+++ b/arch/x86/include/asm/futex_robust.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_X86_FUTEX_ROBUST_H
+#define _ASM_X86_FUTEX_ROBUST_H
+
+#include
+
+static __always_inline void __user *x86_futex_robust_unlock_get_pop(struct pt_regs *regs)
+{
+	/*
+	 * If ZF is set then the cmpxchg succeeded and the pending op pointer
+	 * needs to be cleared.
+	 */
+	return regs->flags & X86_EFLAGS_ZF ? (void __user *)regs->dx : NULL;
+}
+
+#define arch_futex_robust_unlock_get_pop(regs)	\
+	x86_futex_robust_unlock_get_pop(regs)
+
+#endif /* _ASM_X86_FUTEX_ROBUST_H */