Date: Mon, 30 Mar 2026 14:02:25 +0200
Message-ID: <20260330120117.398431176@kernel.org>
From: Thomas Gleixner
To: LKML
Cc: Mathieu Desnoyers, André Almeida, Sebastian Andrzej Siewior,
    Carlos O'Donell, Peter Zijlstra, Florian Weimer, Rich Felker,
    Torvald Riegel, Darren Hart, Ingo Molnar, Davidlohr Bueso,
    Arnd Bergmann, "Liam R. Howlett", Uros Bizjak, Thomas Weißschuh
Subject: [patch V3 05/14] uaccess: Provide unsafe_atomic_store_release_user()
References: <20260330114212.927686587@kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

The upcoming support for unlocking robust futexes in the kernel requires
store release semantics. Syscalls do not imply memory ordering on all
architectures, so the unlock operation requires a barrier. This barrier
can be avoided when stores imply release semantics, as on x86.

Provide a generic version with an smp_mb() before the unsafe_put_user(),
which can be overridden by architectures.

Also provide an ARCH_MEMORY_ORDER_TSO Kconfig option, which can be
selected by architectures with Total Store Order (TSO), where a store
implies release semantics, so that the smp_mb() in the generic
implementation can be avoided.
Signed-off-by: Thomas Gleixner
Reviewed-by: André Almeida
---
V3: Rename to CONFIG_ARCH_MEMORY_ORDER_TSO - Peter
V2: New patch
---
 arch/Kconfig            | 4 ++++
 include/linux/uaccess.h | 9 +++++++++
 2 files changed, 13 insertions(+)

--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -403,6 +403,10 @@ config ARCH_32BIT_OFF_T
 config ARCH_32BIT_USTAT_F_TINODE
 	bool
 
+# Selected by architectures with Total Store Order (TSO)
+config ARCH_MEMORY_ORDER_TSO
+	bool
+
 config HAVE_ASM_MODVERSIONS
 	bool
 	help
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -644,6 +644,15 @@ static inline void user_access_restore(u
 #define user_read_access_end user_access_end
 #endif
 
+#ifndef unsafe_atomic_store_release_user
+# define unsafe_atomic_store_release_user(val, uptr, elbl)		\
+	do {								\
+		if (!IS_ENABLED(CONFIG_ARCH_MEMORY_ORDER_TSO))		\
+			smp_mb();					\
+		unsafe_put_user(val, uptr, elbl);			\
+	} while (0)
+#endif
+
 /* Define RW variant so the below _mode macro expansion works */
 #define masked_user_rw_access_begin(u) masked_user_access_begin(u)
 #define user_rw_access_begin(u, s) user_access_begin(u, s)