From: Thomas Gleixner
To: Mathieu Desnoyers, André Almeida
Cc: linux-kernel@vger.kernel.org, Mathieu Desnoyers, Carlos O'Donell,
	Sebastian Andrzej Siewior, Peter Zijlstra, Florian Weimer,
	Rich Felker, Torvald Riegel, Darren Hart, Ingo Molnar,
	Davidlohr Bueso, Arnd Bergmann, "Liam R. Howlett"
Subject: Re: [RFC PATCH] futex: Introduce __vdso_robust_futex_unlock
In-Reply-To: <20260311185409.1988269-1-mathieu.desnoyers@efficios.com>
References: <20260311185409.1988269-1-mathieu.desnoyers@efficios.com>
Date: Thu, 12 Mar 2026 23:23:08 +0100
Message-ID: <87eclopu0j.ffs@tglx>

On Wed, Mar 11 2026 at 14:54, Mathieu Desnoyers wrote:
> +u32 __vdso_robust_futex_unlock(u32 *uaddr, uintptr_t *op_pending_addr)
> +{
> +	u32 val = 0;
> +
> +	/*
> +	 * Within the ip range identified by the futex exception table,
> +	 * the register "eax" contains the value loaded by xchg. This is
> +	 * expected by futex_vdso_exception() to check whether waiters
> +	 * need to be woken up. This register state is transferred to
> +	 * bit 1 (NEED_WAKEUP) of *op_pending_addr before the ip range
> +	 * ends.
> +	 */
> +	asm volatile ( _ASM_VDSO_EXTABLE_FUTEX_HANDLE(1f, 3f)
> +		/* Exchange uaddr (store-release). */
> +		"xchg %[uaddr], %[val]\n\t"
> +		"1:\n\t"
> +		/* Test if FUTEX_WAITERS (0x80000000) is set. */
> +		"test %[val], %[val]\n\t"
> +		"js 2f\n\t"
> +		/* Clear *op_pending_addr if there are no waiters. */
> +		ASM_PTR_SET "$0, %[op_pending_addr]\n\t"
> +		"jmp 3f\n\t"
> +		"2:\n\t"
> +		/* Set bit 1 (NEED_WAKEUP) in *op_pending_addr.
> +		 */
> +		ASM_PTR_BIT_SET "$1, %[op_pending_addr]\n\t"
> +		"3:\n\t"
> +		: [val] "+a" (val),
> +		  [uaddr] "+m" (*uaddr)
> +		: [op_pending_addr] "m" (*op_pending_addr)
> +		: "memory");

TBH, all of this is completely overengineered and tasteless bloat. The
exact same thing can be achieved by doing the obvious:

struct robust_list_head2 {
	struct robust_list_head	rhead;
	u32			unlock_val;
};

// User space
unlock(futex)
{
	struct robust_list_head2 *h = ....;

	h->unlock_val = 0;
	h->rhead.list_op_pending = .... | FUTEX_ROBUST_UNLOCK;
	xchg(futex->uval, h->unlock_val);
	if (h->unlock_val & FUTEX_WAITERS)
		syscall(FUTEX, &futex->uval, FUTEX_WAKE, ....);
	h->rhead.list_op_pending = NULL;
}

And then the kernel robust list code does:

	if (fetch_robust_entry(&pending, &head->list_op_pending, &pip))
		return;

	if (pending & FUTEX_ROBUST_UNLOCK_PENDING) {
		if (get_user(unlock_val, &head_v2->unlock_val))
			return;
	}

	.....

	if (!pending)
		return;

	/*
	 * If userspace unlocked the futex already, but did not manage
	 * to clear the pending pointer, then the futex is no longer
	 * owned by the task and might have been freed already.
	 *
	 * As the dying task is not the owner anymore, there is no need
	 * to access the futex and to set the OWNERDEAD bit; just wake
	 * up a waiter in case the task died before doing so.
	 *
	 * That wakeup might be spurious, but that's harmless as all
	 * futex users must be able to handle spurious wakeups
	 * correctly.
	 */
	if (unlock_val) {
		if (unlock_val & FUTEX_WAITERS)
			futex_wake(pending + offset, ....);
		return;
	}

No? If you do it cleverly, you can extend the existing code with
minimally intrusive changes.

But yeah, no ASM, no VDSO, no signal magic, no architecture EXTABLE
mess, no architecture specific hackery, too generic and not convoluted
enough, seriously?

And replying to your other mail right here:

> My aim is to use this vDSO as a replacement for atomic xchg and atomic
> cmpxchg within library code.
> I am trying to make the transition as
> straightforward as possible considering that this is a design bug
> fix.

Absolutely not for the price of creating a completely incomprehensible
and unjustified mess in the kernel when it can be done with a trivial
new interface, which just extends the existing one by the missing
functionality in a generic way.

Thanks,

        tglx