From: Thomas Gleixner
To: Mark Rutland
Cc: LKML, Mathieu Desnoyers, André Almeida, Sebastian Andrzej Siewior,
 Carlos O'Donell, Peter Zijlstra, Florian Weimer, Rich Felker,
 Torvald Riegel, Darren Hart, Ingo Molnar, Davidlohr Bueso,
 Arnd Bergmann, "Liam R. Howlett", Uros Bizjak, Thomas Weißschuh
Subject: Re: [patch V3 00/14] futex: Address the robust futex unlock race for real
References: <20260330114212.927686587@kernel.org>
Date: Mon, 30 Mar 2026 21:36:44 +0200
Message-ID: <87pl4l9kj7.ffs@tglx>
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Mar 30 2026 at 14:45, Mark Rutland wrote:
> On Mon, Mar 30, 2026 at 02:01:58PM +0200, Thomas Gleixner wrote:
>> User space can't solve this problem without help from the kernel. This
>> series provides the kernel side infrastructure to help it along:
>>
>> 1) Combined unlock, pointer clearing, wake-up for the contended case
>>
>> 2) VDSO based unlock and pointer clearing helpers with a fix-up function
>>    in the kernel when user space was interrupted within the critical
>>    section.
>
> I see the vdso bits in this series are specific to x86. Do other
> architectures need something here?

Yes.

> I might be missing some context; I'm not sure whether that's not
> necessary or just not implemented by this series, and so I'm not sure
> whether arm64 folk and other need to go dig into this.

The VDSO functions __vdso_futex_robust_list64_try_unlock() and
__vdso_futex_robust_list32_try_unlock() are architecture specific.
The scheme in x86 ASM is:

	mov	%esi,%eax		// Load TID into EAX
	xor	%ecx,%ecx		// Set ECX to 0
	lock	cmpxchg	%ecx,(%rdi)	// Try the TID -> 0 transition
.Lstart:
	jnz	.Lend
	movq	%rcx,(%rdx)		// Clear list_op_pending
.Lend:
	ret

.Lstart is the start of the critical section, .Lend the end. These two
addresses need to be retrieved from the VDSO when the VDSO is mapped to
user space and stored in mm::futex:unlock::cs_ranges[]. See patch
11/14.

If the cmpxchg was successful, then the pending pointer has to be
cleared when user space was interrupted before reaching .Lend. So
.Lstart has to be immediately after the instruction which did the
compare exchange, and the architecture needs to have its ASM variant
and the helper function which tells the generic code whether the
pointer has to be cleared or not. On x86 that is:

	return regs->flags & X86_EFLAGS_ZF ? (void __user *)regs->dx : NULL;

as the result of CMPXCHG is in the Zero Flag and the pointer is in
[ER]DX. The former is defined by the ISA, the latter is enforced by the
ASM constraints and has to be kept in sync between the VDSO ASM and the
evaluation helper.

Do you see a problem with that on AARGH64?

Thanks,

        tglx