From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 02 Apr 2026 17:21:26 +0200
Message-ID: <20260402151939.934927432@kernel.org>
User-Agent: quilt/0.68
From: Thomas Gleixner
To: LKML
Cc: Mathieu Desnoyers, André Almeida, Sebastian Andrzej Siewior,
    Carlos O'Donell, Peter Zijlstra, Florian Weimer, Rich Felker,
    Torvald Riegel, Darren Hart, Ingo Molnar, Davidlohr Bueso,
    Arnd Bergmann, "Liam R . Howlett", Uros Bizjak, Thomas Weißschuh
Subject: [patch V4 05/14] uaccess: Provide unsafe_atomic_store_release_user()
References: <20260402151131.876492985@kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

The upcoming support for unlocking robust futexes in the kernel requires
store release semantics. Syscalls do not imply memory ordering on all
architectures, so the unlock operation requires a barrier. This barrier
can be avoided on architectures where a store implies release semantics,
such as x86.

Provide a generic version with an smp_mb() before the unsafe_put_user(),
which can be overridden by architectures.

Also provide an ARCH_MEMORY_ORDER_TSO Kconfig option, which can be
selected by architectures with Total Store Order (TSO), where a store
implies release semantics, so that the smp_mb() in the generic
implementation can be avoided. When the option is set, a barrier() is
used instead of smp_mb(). The compiler barrier is not required for the
use case at hand, but it makes the macro future proof for other usage by
preventing the compiler from reordering the store.

Signed-off-by: Thomas Gleixner
Reviewed-by: André Almeida
---
V4: Rename it really ....
    Add a barrier when TSO=y
V3: Rename to CONFIG_ARCH_MEMORY_ORDER_TSO - Peter
V2: New patch
---
 arch/Kconfig            |  4 ++++
 include/linux/uaccess.h | 11 +++++++++++
 2 files changed, 15 insertions(+)

--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -403,6 +403,10 @@ config ARCH_32BIT_OFF_T
 config ARCH_32BIT_USTAT_F_TINODE
 	bool
 
+# Selected by architectures with Total Store Order (TSO)
+config ARCH_MEMORY_ORDER_TSO
+	bool
+
 config HAVE_ASM_MODVERSIONS
 	bool
 	help
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -644,6 +644,17 @@ static inline void user_access_restore(u
 #define user_read_access_end	user_access_end
 #endif
 
+#ifndef unsafe_atomic_store_release_user
+# define unsafe_atomic_store_release_user(val, uptr, elbl)	\
+	do {							\
+		if (!IS_ENABLED(CONFIG_ARCH_MEMORY_ORDER_TSO))	\
+			smp_mb();				\
+		else						\
+			barrier();				\
+		unsafe_put_user(val, uptr, elbl);		\
+	} while (0)
+#endif
+
 /* Define RW variant so the below _mode macro expansion works */
 #define masked_user_rw_access_begin(u)	masked_user_access_begin(u)
 #define user_rw_access_begin(u, s)	user_access_begin(u, s)