public inbox for rust-for-linux@vger.kernel.org
 help / color / mirror / Atom feed
* [GIT PULL] [PATCH 00/13] Rust atomic changes for v7.1
@ 2026-03-03 20:16 Boqun Feng
  2026-03-03 20:16 ` [PATCH 01/13] rust: sync: atomic: Remove bound `T: Sync` for `Atomic::from_ptr()` Boqun Feng
                   ` (13 more replies)
  0 siblings, 14 replies; 15+ messages in thread
From: Boqun Feng @ 2026-03-03 20:16 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Will Deacon, Mark Rutland, Miguel Ojeda, Boqun Feng, Gary Guo,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
	Trevor Gross, Danilo Krummrich, Thomas Gleixner, Ingo Molnar,
	rust-for-linux, linux-kernel

Hi Peter,

Please pull these Rust atomic changes for v7.1 into tip/locking/core.
The major changes are atomic pointer support and a boolean-like
AtomicFlag type (backed by a byte if the architecture supports
efficient xchg/cmpxchg on bytes, otherwise by 4 bytes). Thanks!

Regards,
Boqun


The following changes since commit 3dcef70e41ab13483803c536ddea8d5f1803ee25:

  ww-mutex: Fix the ww_acquire_ctx function annotations (2026-02-27 16:40:20 +0100)

are available in the Git repository at:

  git://git.kernel.org/pub/scm/linux/kernel/git/boqun/linux.git tags/rust-atomic.20260303a

for you to fetch changes up to 68d1c8ac7f0b1f0de92a803b9b71090fd1b86d17:

  rust: atomic: Update a safety comment in impl of `fetch_add()` (2026-03-03 11:55:57 -0800)

----------------------------------------------------------------
Rust atomic changes for v7.1

* Add Atomic<ptr> support.
* Add an AtomicFlag type for boolean-like usage with
  architecture-specific performance optimization.
* Add unsafe atomic operations over raw pointers.
* Add `fetch_sub()` for atomic types.
* Documentation and example improvements.
-----BEGIN PGP SIGNATURE-----

iQFFBAABCAAvFiEEj5IosQTPz8XU1wRHSXnow7UH+rgFAmmnPU0RHGJvcXVuQGtl
cm5lbC5vcmcACgkQSXnow7UH+rgKrgf/UNZb0CmIG7d2jN1GsTHwYa8disAGlWFk
KOSTMNn83WICVhIIqUqrRcvSzR0FLwpp1jOH0lMYzZlfxQBOIoVc82xXD2SLLjAa
2VS/vknOitxAsChFceKs7w+hcQD168xSbDqo/dSxI/KO+OMQUxLqTW0zKTVYZhij
JIfv57Nv1331J+gnwici6/q3cBqP14Hv968cZ5Dw8tqWJMpMuqJPQLsgKg5um6Y0
hzpgXLkkB8Vg02qku/YdkcBFCvxWz5CifOpmLWNum+B82emELHmKhpOUdticuWof
iUkqygu4Un+QAcKb+8LG3L30UW3GBw4kEHpb357jc/EqZHvhX5aEYg==
=CYTx
-----END PGP SIGNATURE-----

----------------------------------------------------------------
Andreas Hindborg (3):
      rust: sync: atomic: Add fetch_sub()
      rust: sync: atomic: Update documentation for `fetch_add()`
      rust: atomic: Update a safety comment in impl of `fetch_add()`

Boqun Feng (7):
      rust: sync: atomic: Remove bound `T: Sync` for `Atomic::from_ptr()`
      rust: helpers: Generify the definitions of rust_helper_*_{read,set}*
      rust: helpers: Generify the definitions of rust_helper_*_xchg*
      rust: helpers: Generify the definitions of rust_helper_*_cmpxchg*
      rust: sync: atomic: Clarify the need of CONFIG_ARCH_SUPPORTS_ATOMIC_RMW
      rust: sync: atomic: Add Atomic<*{mut,const} T> support
      rust: sync: atomic: Add atomic operation helpers over raw pointers

FUJITA Tomonori (3):
      rust: sync: atomic: Add example for Atomic::get_mut()
      rust: sync: atomic: Add performance-optimal Flag type for atomic booleans
      rust: list: Use AtomicFlag in AtomicTracker

 rust/helpers/atomic_ext.c            | 158 ++++++------------
 rust/kernel/list/arc.rs              |   8 +-
 rust/kernel/sync/atomic.rs           | 310 +++++++++++++++++++++++++++++++++--
 rust/kernel/sync/atomic/internal.rs  |  44 +++--
 rust/kernel/sync/atomic/predefine.rs | 109 ++++++++++++
 5 files changed, 496 insertions(+), 133 deletions(-)

^ permalink raw reply	[flat|nested] 15+ messages in thread

* [PATCH 01/13] rust: sync: atomic: Remove bound `T: Sync` for `Atomic::from_ptr()`
  2026-03-03 20:16 [GIT PULL] [PATCH 00/13] Rust atomic changes for v7.1 Boqun Feng
@ 2026-03-03 20:16 ` Boqun Feng
  2026-03-03 20:16 ` [PATCH 02/13] rust: sync: atomic: Add example for Atomic::get_mut() Boqun Feng
                   ` (12 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Boqun Feng @ 2026-03-03 20:16 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Will Deacon, Mark Rutland, Miguel Ojeda, Boqun Feng, Gary Guo,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
	Trevor Gross, Danilo Krummrich, Thomas Gleixner, Ingo Molnar,
	rust-for-linux, linux-kernel, Boqun Feng

From: Boqun Feng <boqun.feng@gmail.com>

Originally, `Atomic::from_ptr()` required `T` to be `Sync`, because I
thought having the ability to do `from_ptr()` meant multiple
`&Atomic<T>`s shared by different threads, which was identical (or
similar) to multiple `&T`s shared by different threads. Hence `T` was
required to be `Sync`. However, this is not true, since `&Atomic<T>` is
not the same as `&T`. Moreover, having this bound makes `Atomic::<*mut
T>::from_ptr()` impossible, which is definitely not intended. Therefore
remove the `T: Sync` bound.
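
The distinction can be sketched in userspace Rust (an illustrative
analogue using core's atomics, not the kernel API): sharing an atomic
pointer slot across contexts is fine even though the raw pointer type
itself is not `Sync`, because every shared access goes through an
atomic operation and dereferencing still requires `unsafe`:

```rust
use std::sync::atomic::{AtomicPtr, Ordering};

// Reads the value behind an atomically shared pointer slot.
fn read_shared(slot: &AtomicPtr<i32>) -> i32 {
    let p = slot.load(Ordering::Acquire);
    // SAFETY: the caller guarantees `p` points to a live i32
    // with no concurrent writes through it.
    unsafe { *p }
}

fn main() {
    let mut value = 42i32;
    // AtomicPtr<i32> is Sync even though *mut i32 is not: all
    // shared accesses are atomic, and dereferencing the stored
    // pointer still requires `unsafe`.
    let slot = AtomicPtr::new(&mut value as *mut i32);
    assert_eq!(read_shared(&slot), 42);
}
```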

[boqun: Fix title typo spotted by Alice & Gary]

Fixes: 29c32c405e53 ("rust: sync: atomic: Add generic atomics")
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Reviewed-by: Gary Guo <gary@garyguo.net>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://patch.msgid.link/20260120115207.55318-2-boqun.feng@gmail.com
---
 rust/kernel/sync/atomic.rs | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index 4aebeacb961a..296b25e83bbb 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -204,10 +204,7 @@ pub const fn new(v: T) -> Self {
     /// // no data race.
     /// unsafe { Atomic::from_ptr(foo_a_ptr) }.store(2, Release);
     /// ```
-    pub unsafe fn from_ptr<'a>(ptr: *mut T) -> &'a Self
-    where
-        T: Sync,
-    {
+    pub unsafe fn from_ptr<'a>(ptr: *mut T) -> &'a Self {
         // CAST: `T` and `Atomic<T>` have the same size, alignment and bit validity.
         // SAFETY: Per function safety requirement, `ptr` is a valid pointer and the object will
         // live long enough. It's safe to return a `&Atomic<T>` because function safety requirement
-- 
2.50.1 (Apple Git-155)


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH 02/13] rust: sync: atomic: Add example for Atomic::get_mut()
  2026-03-03 20:16 [GIT PULL] [PATCH 00/13] Rust atomic changes for v7.1 Boqun Feng
  2026-03-03 20:16 ` [PATCH 01/13] rust: sync: atomic: Remove bound `T: Sync` for `Atomic::from_ptr()` Boqun Feng
@ 2026-03-03 20:16 ` Boqun Feng
  2026-03-03 20:16 ` [PATCH 03/13] rust: helpers: Generify the definitions of rust_helper_*_{read,set}* Boqun Feng
                   ` (11 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Boqun Feng @ 2026-03-03 20:16 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Will Deacon, Mark Rutland, Miguel Ojeda, Boqun Feng, Gary Guo,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
	Trevor Gross, Danilo Krummrich, Thomas Gleixner, Ingo Molnar,
	rust-for-linux, linux-kernel, FUJITA Tomonori

From: FUJITA Tomonori <fujita.tomonori@gmail.com>

Add an example for Atomic::get_mut(). No functional change.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@gmail.com>
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Reviewed-by: Gary Guo <gary@garyguo.net>
Signed-off-by: Boqun Feng <boqun@kernel.org>
Link: https://patch.msgid.link/20260128123313.3850604-1-tomo@aliasing.net
---
 rust/kernel/sync/atomic.rs | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index 296b25e83bbb..e262b0cb53ae 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -232,6 +232,17 @@ pub const fn as_ptr(&self) -> *mut T {
     /// Returns a mutable reference to the underlying atomic `T`.
     ///
     /// This is safe because the mutable reference of the atomic `T` guarantees exclusive access.
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// use kernel::sync::atomic::{Atomic, Relaxed};
+    ///
+    /// let mut atomic_val = Atomic::new(0u32);
+    /// let val_mut = atomic_val.get_mut();
+    /// *val_mut = 101;
+    /// assert_eq!(101, atomic_val.load(Relaxed));
+    /// ```
     pub fn get_mut(&mut self) -> &mut T {
         // CAST: `T` and `T::Repr` has the same size and alignment per the safety requirement of
         // `AtomicType`, and per the type invariants `self.0` is a valid `T`, therefore the casting
-- 
2.50.1 (Apple Git-155)


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH 03/13] rust: helpers: Generify the definitions of rust_helper_*_{read,set}*
  2026-03-03 20:16 [GIT PULL] [PATCH 00/13] Rust atomic changes for v7.1 Boqun Feng
  2026-03-03 20:16 ` [PATCH 01/13] rust: sync: atomic: Remove bound `T: Sync` for `Atomic::from_ptr()` Boqun Feng
  2026-03-03 20:16 ` [PATCH 02/13] rust: sync: atomic: Add example for Atomic::get_mut() Boqun Feng
@ 2026-03-03 20:16 ` Boqun Feng
  2026-03-03 20:16 ` [PATCH 04/13] rust: helpers: Generify the definitions of rust_helper_*_xchg* Boqun Feng
                   ` (10 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Boqun Feng @ 2026-03-03 20:16 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Will Deacon, Mark Rutland, Miguel Ojeda, Boqun Feng, Gary Guo,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
	Trevor Gross, Danilo Krummrich, Thomas Gleixner, Ingo Molnar,
	rust-for-linux, linux-kernel, Boqun Feng

From: Boqun Feng <boqun.feng@gmail.com>

To support atomic pointers, more {read,set} helpers will be introduced.
Hence define macros that generate these helpers, to ease the
introduction of future helpers.

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://patch.msgid.link/20260117122243.24404-2-boqun.feng@gmail.com
---
 rust/helpers/atomic_ext.c | 53 +++++++++++++++++----------------------
 1 file changed, 23 insertions(+), 30 deletions(-)

diff --git a/rust/helpers/atomic_ext.c b/rust/helpers/atomic_ext.c
index 7d0c2bd340da..f471c1ff123d 100644
--- a/rust/helpers/atomic_ext.c
+++ b/rust/helpers/atomic_ext.c
@@ -4,45 +4,38 @@
 #include <asm/rwonce.h>
 #include <linux/atomic.h>
 
-__rust_helper s8 rust_helper_atomic_i8_read(s8 *ptr)
-{
-	return READ_ONCE(*ptr);
-}
-
-__rust_helper s8 rust_helper_atomic_i8_read_acquire(s8 *ptr)
-{
-	return smp_load_acquire(ptr);
-}
-
-__rust_helper s16 rust_helper_atomic_i16_read(s16 *ptr)
-{
-	return READ_ONCE(*ptr);
+#define GEN_READ_HELPER(tname, type)						\
+__rust_helper type rust_helper_atomic_##tname##_read(type *ptr)			\
+{										\
+	return READ_ONCE(*ptr);							\
 }
 
-__rust_helper s16 rust_helper_atomic_i16_read_acquire(s16 *ptr)
-{
-	return smp_load_acquire(ptr);
+#define GEN_SET_HELPER(tname, type)						\
+__rust_helper void rust_helper_atomic_##tname##_set(type *ptr, type val)	\
+{										\
+	WRITE_ONCE(*ptr, val);							\
 }
 
-__rust_helper void rust_helper_atomic_i8_set(s8 *ptr, s8 val)
-{
-	WRITE_ONCE(*ptr, val);
+#define GEN_READ_ACQUIRE_HELPER(tname, type)					\
+__rust_helper type rust_helper_atomic_##tname##_read_acquire(type *ptr)		\
+{										\
+	return smp_load_acquire(ptr);						\
 }
 
-__rust_helper void rust_helper_atomic_i8_set_release(s8 *ptr, s8 val)
-{
-	smp_store_release(ptr, val);
+#define GEN_SET_RELEASE_HELPER(tname, type)					\
+__rust_helper void rust_helper_atomic_##tname##_set_release(type *ptr, type val)\
+{										\
+	smp_store_release(ptr, val);						\
 }
 
-__rust_helper void rust_helper_atomic_i16_set(s16 *ptr, s16 val)
-{
-	WRITE_ONCE(*ptr, val);
-}
+#define GEN_READ_SET_HELPERS(tname, type)					\
+	GEN_READ_HELPER(tname, type)						\
+	GEN_SET_HELPER(tname, type)						\
+	GEN_READ_ACQUIRE_HELPER(tname, type)					\
+	GEN_SET_RELEASE_HELPER(tname, type)					\
 
-__rust_helper void rust_helper_atomic_i16_set_release(s16 *ptr, s16 val)
-{
-	smp_store_release(ptr, val);
-}
+GEN_READ_SET_HELPERS(i8, s8)
+GEN_READ_SET_HELPERS(i16, s16)
 
 /*
  * xchg helpers depend on ARCH_SUPPORTS_ATOMIC_RMW and on the
-- 
2.50.1 (Apple Git-155)


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH 04/13] rust: helpers: Generify the definitions of rust_helper_*_xchg*
  2026-03-03 20:16 [GIT PULL] [PATCH 00/13] Rust atomic changes for v7.1 Boqun Feng
                   ` (2 preceding siblings ...)
  2026-03-03 20:16 ` [PATCH 03/13] rust: helpers: Generify the definitions of rust_helper_*_{read,set}* Boqun Feng
@ 2026-03-03 20:16 ` Boqun Feng
  2026-03-03 20:16 ` [PATCH 05/13] rust: helpers: Generify the definitions of rust_helper_*_cmpxchg* Boqun Feng
                   ` (9 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Boqun Feng @ 2026-03-03 20:16 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Will Deacon, Mark Rutland, Miguel Ojeda, Boqun Feng, Gary Guo,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
	Trevor Gross, Danilo Krummrich, Thomas Gleixner, Ingo Molnar,
	rust-for-linux, linux-kernel, Boqun Feng

From: Boqun Feng <boqun.feng@gmail.com>

To support atomic pointers, more xchg helpers will be introduced.
Hence define macros that generate these helpers, to ease the
introduction of future helpers.

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://patch.msgid.link/20260117122243.24404-3-boqun.feng@gmail.com
---
 rust/helpers/atomic_ext.c | 48 ++++++++++-----------------------------
 1 file changed, 12 insertions(+), 36 deletions(-)

diff --git a/rust/helpers/atomic_ext.c b/rust/helpers/atomic_ext.c
index f471c1ff123d..c5f665bbe785 100644
--- a/rust/helpers/atomic_ext.c
+++ b/rust/helpers/atomic_ext.c
@@ -44,45 +44,21 @@ GEN_READ_SET_HELPERS(i16, s16)
  * The architectures that currently support Rust (x86_64, armv7,
  * arm64, riscv, and loongarch) satisfy these requirements.
  */
-__rust_helper s8 rust_helper_atomic_i8_xchg(s8 *ptr, s8 new)
-{
-	return xchg(ptr, new);
-}
-
-__rust_helper s16 rust_helper_atomic_i16_xchg(s16 *ptr, s16 new)
-{
-	return xchg(ptr, new);
-}
-
-__rust_helper s8 rust_helper_atomic_i8_xchg_acquire(s8 *ptr, s8 new)
-{
-	return xchg_acquire(ptr, new);
-}
-
-__rust_helper s16 rust_helper_atomic_i16_xchg_acquire(s16 *ptr, s16 new)
-{
-	return xchg_acquire(ptr, new);
-}
-
-__rust_helper s8 rust_helper_atomic_i8_xchg_release(s8 *ptr, s8 new)
-{
-	return xchg_release(ptr, new);
-}
-
-__rust_helper s16 rust_helper_atomic_i16_xchg_release(s16 *ptr, s16 new)
-{
-	return xchg_release(ptr, new);
+#define GEN_XCHG_HELPER(tname, type, suffix)					\
+__rust_helper type								\
+rust_helper_atomic_##tname##_xchg##suffix(type *ptr, type new)			\
+{										\
+	return xchg##suffix(ptr, new);					\
 }
 
-__rust_helper s8 rust_helper_atomic_i8_xchg_relaxed(s8 *ptr, s8 new)
-{
-	return xchg_relaxed(ptr, new);
-}
+#define GEN_XCHG_HELPERS(tname, type)						\
+	GEN_XCHG_HELPER(tname, type, )						\
+	GEN_XCHG_HELPER(tname, type, _acquire)					\
+	GEN_XCHG_HELPER(tname, type, _release)					\
+	GEN_XCHG_HELPER(tname, type, _relaxed)					\
 
-__rust_helper s16 rust_helper_atomic_i16_xchg_relaxed(s16 *ptr, s16 new)
-{
-	return xchg_relaxed(ptr, new);
-}
+GEN_XCHG_HELPERS(i8, s8)
+GEN_XCHG_HELPERS(i16, s16)
 
 /*
  * try_cmpxchg helpers depend on ARCH_SUPPORTS_ATOMIC_RMW and on the
-- 
2.50.1 (Apple Git-155)


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH 05/13] rust: helpers: Generify the definitions of rust_helper_*_cmpxchg*
  2026-03-03 20:16 [GIT PULL] [PATCH 00/13] Rust atomic changes for v7.1 Boqun Feng
                   ` (3 preceding siblings ...)
  2026-03-03 20:16 ` [PATCH 04/13] rust: helpers: Generify the definitions of rust_helper_*_xchg* Boqun Feng
@ 2026-03-03 20:16 ` Boqun Feng
  2026-03-03 20:16 ` [PATCH 06/13] rust: sync: atomic: Clarify the need of CONFIG_ARCH_SUPPORTS_ATOMIC_RMW Boqun Feng
                   ` (8 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Boqun Feng @ 2026-03-03 20:16 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Will Deacon, Mark Rutland, Miguel Ojeda, Boqun Feng, Gary Guo,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
	Trevor Gross, Danilo Krummrich, Thomas Gleixner, Ingo Molnar,
	rust-for-linux, linux-kernel, Boqun Feng

From: Boqun Feng <boqun.feng@gmail.com>

To support atomic pointers, more cmpxchg helpers will be introduced.
Hence define macros that generate these helpers, to ease the
introduction of future helpers.

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://patch.msgid.link/20260117122243.24404-4-boqun.feng@gmail.com
---
 rust/helpers/atomic_ext.c | 48 ++++++++++-----------------------------
 1 file changed, 12 insertions(+), 36 deletions(-)

diff --git a/rust/helpers/atomic_ext.c b/rust/helpers/atomic_ext.c
index c5f665bbe785..240218e2e708 100644
--- a/rust/helpers/atomic_ext.c
+++ b/rust/helpers/atomic_ext.c
@@ -67,42 +67,18 @@ GEN_XCHG_HELPERS(i16, s16)
  * The architectures that currently support Rust (x86_64, armv7,
  * arm64, riscv, and loongarch) satisfy these requirements.
  */
-__rust_helper bool rust_helper_atomic_i8_try_cmpxchg(s8 *ptr, s8 *old, s8 new)
-{
-	return try_cmpxchg(ptr, old, new);
-}
-
-__rust_helper bool rust_helper_atomic_i16_try_cmpxchg(s16 *ptr, s16 *old, s16 new)
-{
-	return try_cmpxchg(ptr, old, new);
-}
-
-__rust_helper bool rust_helper_atomic_i8_try_cmpxchg_acquire(s8 *ptr, s8 *old, s8 new)
-{
-	return try_cmpxchg_acquire(ptr, old, new);
-}
-
-__rust_helper bool rust_helper_atomic_i16_try_cmpxchg_acquire(s16 *ptr, s16 *old, s16 new)
-{
-	return try_cmpxchg_acquire(ptr, old, new);
-}
-
-__rust_helper bool rust_helper_atomic_i8_try_cmpxchg_release(s8 *ptr, s8 *old, s8 new)
-{
-	return try_cmpxchg_release(ptr, old, new);
-}
-
-__rust_helper bool rust_helper_atomic_i16_try_cmpxchg_release(s16 *ptr, s16 *old, s16 new)
-{
-	return try_cmpxchg_release(ptr, old, new);
+#define GEN_TRY_CMPXCHG_HELPER(tname, type, suffix)				\
+__rust_helper bool								\
+rust_helper_atomic_##tname##_try_cmpxchg##suffix(type *ptr, type *old, type new)\
+{										\
+	return try_cmpxchg##suffix(ptr, old, new);				\
 }
 
-__rust_helper bool rust_helper_atomic_i8_try_cmpxchg_relaxed(s8 *ptr, s8 *old, s8 new)
-{
-	return try_cmpxchg_relaxed(ptr, old, new);
-}
+#define GEN_TRY_CMPXCHG_HELPERS(tname, type)					\
+	GEN_TRY_CMPXCHG_HELPER(tname, type, )					\
+	GEN_TRY_CMPXCHG_HELPER(tname, type, _acquire)				\
+	GEN_TRY_CMPXCHG_HELPER(tname, type, _release)				\
+	GEN_TRY_CMPXCHG_HELPER(tname, type, _relaxed)				\
 
-__rust_helper bool rust_helper_atomic_i16_try_cmpxchg_relaxed(s16 *ptr, s16 *old, s16 new)
-{
-	return try_cmpxchg_relaxed(ptr, old, new);
-}
+GEN_TRY_CMPXCHG_HELPERS(i8, s8)
+GEN_TRY_CMPXCHG_HELPERS(i16, s16)
-- 
2.50.1 (Apple Git-155)


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH 06/13] rust: sync: atomic: Clarify the need of CONFIG_ARCH_SUPPORTS_ATOMIC_RMW
  2026-03-03 20:16 [GIT PULL] [PATCH 00/13] Rust atomic changes for v7.1 Boqun Feng
                   ` (4 preceding siblings ...)
  2026-03-03 20:16 ` [PATCH 05/13] rust: helpers: Generify the definitions of rust_helper_*_cmpxchg* Boqun Feng
@ 2026-03-03 20:16 ` Boqun Feng
  2026-03-03 20:16 ` [PATCH 07/13] rust: sync: atomic: Add Atomic<*{mut,const} T> support Boqun Feng
                   ` (7 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Boqun Feng @ 2026-03-03 20:16 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Will Deacon, Mark Rutland, Miguel Ojeda, Boqun Feng, Gary Guo,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
	Trevor Gross, Danilo Krummrich, Thomas Gleixner, Ingo Molnar,
	rust-for-linux, linux-kernel, Boqun Feng, Dirk Behme

From: Boqun Feng <boqun.feng@gmail.com>

Currently, since all the architectures that support Rust have
CONFIG_ARCH_SUPPORTS_ATOMIC_RMW selected, the helpers for atomic
load/store on i8 and i16 rely on CONFIG_ARCH_SUPPORTS_ATOMIC_RMW=y.
This is generally fine, since most architectures support it.

The plan for CONFIG_ARCH_SUPPORTS_ATOMIC_RMW=n architectures is to add
their (probably lock-based) atomic load/store for i8 and i16 as the
counterparts of their atomic_{read,set}() and atomic64_{read,set}()
when they plan to support Rust.

Hence use a static_assert!() to check this and remind our future selves
of the need for these helpers. This is clearer than the #[cfg] on the
impl blocks of i8 and i16.
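
The static-assert idea can be sketched in plain Rust (a userspace
analogue; the constant standing in for the kernel's cfg!() predicate
and the macro name are illustrative):

```rust
// A compile-time check that fails the build with a message,
// rather than silently compiling an impl out behind #[cfg].
macro_rules! static_assert {
    ($cond:expr, $msg:literal) => {
        const _: () = assert!($cond, $msg);
    };
}

// Stand-in for cfg!(CONFIG_ARCH_SUPPORTS_ATOMIC_RMW); in the
// kernel this value comes from the build configuration.
const ARCH_SUPPORTS_ATOMIC_RMW: bool = true;

static_assert!(
    ARCH_SUPPORTS_ATOMIC_RMW,
    "atomic i8/i16 load/store helpers need native atomic RmW"
);

fn main() {
    // If the assertion above had failed, this program would not
    // have compiled at all.
    println!("build-time check passed");
}
```

Flipping the constant to `false` turns the missing-helper situation
into a build error with an explanatory message, which is the reminder
the commit is after.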

Suggested-by: Dirk Behme <dirk.behme@gmail.com>
Suggested-by: Benno Lossin <lossin@kernel.org>
Reviewed-by: Gary Guo <gary@garyguo.net>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://patch.msgid.link/20260120140503.62804-2-boqun.feng@gmail.com
---
 rust/kernel/sync/atomic/internal.rs | 19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/rust/kernel/sync/atomic/internal.rs b/rust/kernel/sync/atomic/internal.rs
index 0dac58bca2b3..ef516bcb02ee 100644
--- a/rust/kernel/sync/atomic/internal.rs
+++ b/rust/kernel/sync/atomic/internal.rs
@@ -37,16 +37,23 @@ pub trait AtomicImpl: Sized + Send + Copy + private::Sealed {
     type Delta;
 }
 
-// The current helpers of load/store uses `{WRITE,READ}_ONCE()` hence the atomicity is only
-// guaranteed against read-modify-write operations if the architecture supports native atomic RmW.
-#[cfg(CONFIG_ARCH_SUPPORTS_ATOMIC_RMW)]
+// The current helpers of load/store of atomic `i8` and `i16` use `{WRITE,READ}_ONCE()` hence the
+// atomicity is only guaranteed against read-modify-write operations if the architecture supports
+// native atomic RmW.
+//
+// In the future when a CONFIG_ARCH_SUPPORTS_ATOMIC_RMW=n architecture plans to support Rust, the
+// load/store helpers that guarantee atomicity against RmW operations (usually via a lock) need to
+// be added.
+crate::static_assert!(
+    cfg!(CONFIG_ARCH_SUPPORTS_ATOMIC_RMW),
+    "The current implementation of atomic i8/i16/ptr relies on the architecture being \
+    ARCH_SUPPORTS_ATOMIC_RMW"
+);
+
 impl AtomicImpl for i8 {
     type Delta = Self;
 }
 
-// The current helpers of load/store uses `{WRITE,READ}_ONCE()` hence the atomicity is only
-// guaranteed against read-modify-write operations if the architecture supports native atomic RmW.
-#[cfg(CONFIG_ARCH_SUPPORTS_ATOMIC_RMW)]
 impl AtomicImpl for i16 {
     type Delta = Self;
 }
-- 
2.50.1 (Apple Git-155)


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH 07/13] rust: sync: atomic: Add Atomic<*{mut,const} T> support
  2026-03-03 20:16 [GIT PULL] [PATCH 00/13] Rust atomic changes for v7.1 Boqun Feng
                   ` (5 preceding siblings ...)
  2026-03-03 20:16 ` [PATCH 06/13] rust: sync: atomic: Clarify the need of CONFIG_ARCH_SUPPORTS_ATOMIC_RMW Boqun Feng
@ 2026-03-03 20:16 ` Boqun Feng
  2026-03-03 20:16 ` [PATCH 08/13] rust: sync: atomic: Add performance-optimal Flag type for atomic booleans Boqun Feng
                   ` (6 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Boqun Feng @ 2026-03-03 20:16 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Will Deacon, Mark Rutland, Miguel Ojeda, Boqun Feng, Gary Guo,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
	Trevor Gross, Danilo Krummrich, Thomas Gleixner, Ingo Molnar,
	rust-for-linux, linux-kernel, Boqun Feng, FUJITA Tomonori

From: Boqun Feng <boqun.feng@gmail.com>

Atomic pointers are an important building block of synchronization
algorithms, e.g. RCU, hence provide support for them.

Note that instead of relying on atomic_long or the implementation of
`Atomic<usize>`, a new set of helpers (atomic_ptr_*) is introduced
specifically for atomic pointers. This is because ptr2int casting
would lose the provenance of a pointer, and even though in theory
there are a few tricks by which the provenance can be restored, the
implementation is still simpler if C provides atomic pointers
directly. The side effects of this approach are that we don't have
the arithmetic and logical operations for pointers yet, and that the
current implementation only works on ARCH_SUPPORTS_ATOMIC_RMW
architectures, but these are implementation issues and can be
addressed later.
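
The two approaches can be contrasted in userspace Rust (illustrative,
not the kernel API): an integer-typed atomic round-trips the address,
but under strict-provenance semantics the restored pointer's
provenance is no longer tracked, whereas a pointer-typed atomic keeps
it intact:

```rust
use std::sync::atomic::{AtomicPtr, AtomicUsize, Ordering};

fn main() {
    let mut v = 42i32;
    let p = &mut v as *mut i32;

    // ptr2int approach: the address survives, but under strict
    // provenance the `as usize as *mut` round-trip discards the
    // pointer's provenance.
    let as_int = AtomicUsize::new(p as usize);
    let restored = as_int.load(Ordering::Relaxed) as *mut i32;

    // Pointer-typed atomic: the pointer is stored as a pointer,
    // so its provenance is preserved.
    let as_ptr = AtomicPtr::new(p);
    let loaded = as_ptr.load(Ordering::Relaxed);

    assert_eq!(restored as usize, loaded as usize);
    // SAFETY: `loaded` is the pointer stored above and `v` is live.
    assert_eq!(unsafe { *loaded }, 42);
}
```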

Reviewed-by: Gary Guo <gary@garyguo.net>
Reviewed-by: FUJITA Tomonori <fujita.tomonori@gmail.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://patch.msgid.link/20260120140503.62804-3-boqun.feng@gmail.com
---
 rust/helpers/atomic_ext.c            |  3 ++
 rust/kernel/sync/atomic.rs           | 12 +++++++-
 rust/kernel/sync/atomic/internal.rs  | 24 +++++++++------
 rust/kernel/sync/atomic/predefine.rs | 46 ++++++++++++++++++++++++++++
 4 files changed, 75 insertions(+), 10 deletions(-)

diff --git a/rust/helpers/atomic_ext.c b/rust/helpers/atomic_ext.c
index 240218e2e708..c267d5190529 100644
--- a/rust/helpers/atomic_ext.c
+++ b/rust/helpers/atomic_ext.c
@@ -36,6 +36,7 @@ __rust_helper void rust_helper_atomic_##tname##_set_release(type *ptr, type val)
 
 GEN_READ_SET_HELPERS(i8, s8)
 GEN_READ_SET_HELPERS(i16, s16)
+GEN_READ_SET_HELPERS(ptr, const void *)
 
 /*
  * xchg helpers depend on ARCH_SUPPORTS_ATOMIC_RMW and on the
@@ -59,6 +60,7 @@ rust_helper_atomic_##tname##_xchg##suffix(type *ptr, type new)			\
 
 GEN_XCHG_HELPERS(i8, s8)
 GEN_XCHG_HELPERS(i16, s16)
+GEN_XCHG_HELPERS(ptr, const void *)
 
 /*
  * try_cmpxchg helpers depend on ARCH_SUPPORTS_ATOMIC_RMW and on the
@@ -82,3 +84,4 @@ rust_helper_atomic_##tname##_try_cmpxchg##suffix(type *ptr, type *old, type new)
 
 GEN_TRY_CMPXCHG_HELPERS(i8, s8)
 GEN_TRY_CMPXCHG_HELPERS(i16, s16)
+GEN_TRY_CMPXCHG_HELPERS(ptr, const void *)
diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index e262b0cb53ae..f4c3ab15c8a7 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -51,6 +51,10 @@
 #[repr(transparent)]
 pub struct Atomic<T: AtomicType>(AtomicRepr<T::Repr>);
 
+// SAFETY: `Atomic<T>` is safe to transfer between execution contexts because of the safety
+// requirement of `AtomicType`.
+unsafe impl<T: AtomicType> Send for Atomic<T> {}
+
 // SAFETY: `Atomic<T>` is safe to share among execution contexts because all accesses are atomic.
 unsafe impl<T: AtomicType> Sync for Atomic<T> {}
 
@@ -68,6 +72,11 @@ unsafe impl<T: AtomicType> Sync for Atomic<T> {}
 ///
 /// - [`Self`] must have the same size and alignment as [`Self::Repr`].
 /// - [`Self`] must be [round-trip transmutable] to  [`Self::Repr`].
+/// - [`Self`] must be safe to transfer between execution contexts, if it's [`Send`], this is
+///   automatically satisfied. The exception is pointer types that are even though marked as
+///   `!Send` (e.g. raw pointers and [`NonNull<T>`]) but requiring `unsafe` to do anything
+///   meaningful on them. This is because transferring pointer values between execution contexts is
+///   safe as long as the actual `unsafe` dereferencing is justified.
 ///
 /// Note that this is more relaxed than requiring the bi-directional transmutability (i.e.
 /// [`transmute()`] is always sound between `U` and `T`) because of the support for atomic
@@ -108,7 +117,8 @@ unsafe impl<T: AtomicType> Sync for Atomic<T> {}
 /// [`transmute()`]: core::mem::transmute
 /// [round-trip transmutable]: AtomicType#round-trip-transmutability
 /// [Examples]: AtomicType#examples
-pub unsafe trait AtomicType: Sized + Send + Copy {
+/// [`NonNull<T>`]: core::ptr::NonNull
+pub unsafe trait AtomicType: Sized + Copy {
     /// The backing atomic implementation type.
     type Repr: AtomicImpl;
 }
diff --git a/rust/kernel/sync/atomic/internal.rs b/rust/kernel/sync/atomic/internal.rs
index ef516bcb02ee..e301db4eaf91 100644
--- a/rust/kernel/sync/atomic/internal.rs
+++ b/rust/kernel/sync/atomic/internal.rs
@@ -7,6 +7,7 @@
 use crate::bindings;
 use crate::macros::paste;
 use core::cell::UnsafeCell;
+use ffi::c_void;
 
 mod private {
     /// Sealed trait marker to disable customized impls on atomic implementation traits.
@@ -14,10 +15,11 @@ pub trait Sealed {}
 }
 
 // The C side supports atomic primitives only for `i32` and `i64` (`atomic_t` and `atomic64_t`),
-// while the Rust side also layers provides atomic support for `i8` and `i16`
-// on top of lower-level C primitives.
+// while the Rust side also provides atomic support for `i8`, `i16` and `*const c_void` on top of
+// lower-level C primitives.
 impl private::Sealed for i8 {}
 impl private::Sealed for i16 {}
+impl private::Sealed for *const c_void {}
 impl private::Sealed for i32 {}
 impl private::Sealed for i64 {}
 
@@ -26,10 +28,10 @@ impl private::Sealed for i64 {}
 /// This trait is sealed, and only types that map directly to the C side atomics
 /// or can be implemented with lower-level C primitives are allowed to implement this:
 ///
-/// - `i8` and `i16` are implemented with lower-level C primitives.
+/// - `i8`, `i16` and `*const c_void` are implemented with lower-level C primitives.
 /// - `i32` map to `atomic_t`
 /// - `i64` map to `atomic64_t`
-pub trait AtomicImpl: Sized + Send + Copy + private::Sealed {
+pub trait AtomicImpl: Sized + Copy + private::Sealed {
     /// The type of the delta in arithmetic or logical operations.
     ///
     /// For example, in `atomic_add(ptr, v)`, it's the type of `v`. Usually it's the same type of
@@ -37,9 +39,9 @@ pub trait AtomicImpl: Sized + Send + Copy + private::Sealed {
     type Delta;
 }
 
-// The current helpers of load/store of atomic `i8` and `i16` use `{WRITE,READ}_ONCE()` hence the
-// atomicity is only guaranteed against read-modify-write operations if the architecture supports
-// native atomic RmW.
+// The current helpers of load/store of atomic `i8`, `i16` and pointers use `{WRITE,READ}_ONCE()`
+// hence the atomicity is only guaranteed against read-modify-write operations if the architecture
+// supports native atomic RmW.
 //
 // In the future when a CONFIG_ARCH_SUPPORTS_ATOMIC_RMW=n architecture plans to support Rust, the
 // load/store helpers that guarantee atomicity against RmW operations (usually via a lock) need to
@@ -58,6 +60,10 @@ impl AtomicImpl for i16 {
     type Delta = Self;
 }
 
+impl AtomicImpl for *const c_void {
+    type Delta = isize;
+}
+
 // `atomic_t` implements atomic operations on `i32`.
 impl AtomicImpl for i32 {
     type Delta = Self;
@@ -269,7 +275,7 @@ macro_rules! declare_and_impl_atomic_methods {
 }
 
 declare_and_impl_atomic_methods!(
-    [ i8 => atomic_i8, i16 => atomic_i16, i32 => atomic, i64 => atomic64 ]
+    [ i8 => atomic_i8, i16 => atomic_i16, *const c_void => atomic_ptr, i32 => atomic, i64 => atomic64 ]
     /// Basic atomic operations
     pub trait AtomicBasicOps {
         /// Atomic read (load).
@@ -287,7 +293,7 @@ fn set[release](a: &AtomicRepr<Self>, v: Self) {
 );
 
 declare_and_impl_atomic_methods!(
-    [ i8 => atomic_i8, i16 => atomic_i16, i32 => atomic, i64 => atomic64 ]
+    [ i8 => atomic_i8, i16 => atomic_i16, *const c_void => atomic_ptr, i32 => atomic, i64 => atomic64 ]
     /// Exchange and compare-and-exchange atomic operations
     pub trait AtomicExchangeOps {
         /// Atomic exchange.
diff --git a/rust/kernel/sync/atomic/predefine.rs b/rust/kernel/sync/atomic/predefine.rs
index 67a0406d3ea4..6f2c60529b64 100644
--- a/rust/kernel/sync/atomic/predefine.rs
+++ b/rust/kernel/sync/atomic/predefine.rs
@@ -4,6 +4,7 @@
 
 use crate::static_assert;
 use core::mem::{align_of, size_of};
+use ffi::c_void;
 
 // Ensure size and alignment requirements are checked.
 static_assert!(size_of::<bool>() == size_of::<i8>());
@@ -28,6 +29,26 @@ unsafe impl super::AtomicType for i16 {
     type Repr = i16;
 }
 
+// SAFETY:
+//
+// - `*mut T` has the same size and alignment with `*const c_void`, and is round-trip
+//   transmutable to `*const c_void`.
+// - `*mut T` is safe to transfer between execution contexts. See the safety requirement of
+//   [`AtomicType`].
+unsafe impl<T: Sized> super::AtomicType for *mut T {
+    type Repr = *const c_void;
+}
+
+// SAFETY:
+//
+// - `*const T` has the same size and alignment with `*const c_void`, and is round-trip
+//   transmutable to `*const c_void`.
+// - `*const T` is safe to transfer between execution contexts. See the safety requirement of
+//   [`AtomicType`].
+unsafe impl<T: Sized> super::AtomicType for *const T {
+    type Repr = *const c_void;
+}
+
 // SAFETY: `i32` has the same size and alignment with itself, and is round-trip transmutable to
 // itself.
 unsafe impl super::AtomicType for i32 {
@@ -226,4 +247,29 @@ fn atomic_bool_tests() {
         assert_eq!(false, x.load(Relaxed));
         assert_eq!(Ok(false), x.cmpxchg(false, true, Full));
     }
+
+    #[test]
+    fn atomic_ptr_tests() {
+        let mut v = 42;
+        let mut u = 43;
+        let x = Atomic::new(&raw mut v);
+
+        assert_eq!(x.load(Acquire), &raw mut v);
+        assert_eq!(x.cmpxchg(&raw mut u, &raw mut u, Relaxed), Err(&raw mut v));
+        assert_eq!(x.cmpxchg(&raw mut v, &raw mut u, Relaxed), Ok(&raw mut v));
+        assert_eq!(x.load(Relaxed), &raw mut u);
+
+        let x = Atomic::new(&raw const v);
+
+        assert_eq!(x.load(Acquire), &raw const v);
+        assert_eq!(
+            x.cmpxchg(&raw const u, &raw const u, Relaxed),
+            Err(&raw const v)
+        );
+        assert_eq!(
+            x.cmpxchg(&raw const v, &raw const u, Relaxed),
+            Ok(&raw const v)
+        );
+        assert_eq!(x.load(Relaxed), &raw const u);
+    }
 }
-- 
2.50.1 (Apple Git-155)



* [PATCH 08/13] rust: sync: atomic: Add performance-optimal Flag type for atomic booleans
  2026-03-03 20:16 [GIT PULL] [PATCH 00/13] Rust atomic changes for v7.1 Boqun Feng
                   ` (6 preceding siblings ...)
  2026-03-03 20:16 ` [PATCH 07/13] rust: sync: atomic: Add Atomic<*{mut,const} T> support Boqun Feng
@ 2026-03-03 20:16 ` Boqun Feng
  2026-03-03 20:16 ` [PATCH 09/13] rust: list: Use AtomicFlag in AtomicTracker Boqun Feng
                   ` (5 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Boqun Feng @ 2026-03-03 20:16 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Will Deacon, Mark Rutland, Miguel Ojeda, Boqun Feng, Gary Guo,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
	Trevor Gross, Danilo Krummrich, Thomas Gleixner, Ingo Molnar,
	rust-for-linux, linux-kernel, FUJITA Tomonori

From: FUJITA Tomonori <fujita.tomonori@gmail.com>

Add an AtomicFlag type for boolean flags.

Document when AtomicFlag is generally preferable to Atomic<bool>: in
particular, when RMW operations such as xchg()/cmpxchg() may be used
and minimizing memory usage is not the top priority. On some
architectures without byte-sized RMW instructions, Atomic<bool> can be
slower for RMW operations.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@gmail.com>
Reviewed-by: Gary Guo <gary@garyguo.net>
Signed-off-by: Boqun Feng <boqun@kernel.org>
Link: https://patch.msgid.link/20260129122622.3896144-2-tomo@aliasing.net
---
 rust/kernel/sync/atomic.rs           | 125 +++++++++++++++++++++++++++
 rust/kernel/sync/atomic/predefine.rs |  17 ++++
 2 files changed, 142 insertions(+)

diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index f4c3ab15c8a7..f80cebce5bc1 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -578,3 +578,128 @@ pub fn fetch_add<Rhs, Ordering: ordering::Ordering>(&self, v: Rhs, _: Ordering)
         unsafe { from_repr(ret) }
     }
 }
+
+#[cfg(any(CONFIG_X86_64, CONFIG_UML, CONFIG_ARM, CONFIG_ARM64))]
+#[repr(C)]
+#[derive(Clone, Copy)]
+struct Flag {
+    bool_field: bool,
+}
+
+/// # Invariants
+///
+/// `padding` must be all zeroes.
+#[cfg(not(any(CONFIG_X86_64, CONFIG_UML, CONFIG_ARM, CONFIG_ARM64)))]
+#[repr(C, align(4))]
+#[derive(Clone, Copy)]
+struct Flag {
+    #[cfg(target_endian = "big")]
+    padding: [u8; 3],
+    bool_field: bool,
+    #[cfg(target_endian = "little")]
+    padding: [u8; 3],
+}
+
+impl Flag {
+    #[inline(always)]
+    const fn new(b: bool) -> Self {
+        // INVARIANT: `padding` is all zeroes.
+        Self {
+            bool_field: b,
+            #[cfg(not(any(CONFIG_X86_64, CONFIG_UML, CONFIG_ARM, CONFIG_ARM64)))]
+            padding: [0; 3],
+        }
+    }
+}
+
+// SAFETY: `Flag` and `Repr` have the same size and alignment, and `Flag` is round-trip
+// transmutable to the selected representation (`i8` or `i32`).
+unsafe impl AtomicType for Flag {
+    #[cfg(any(CONFIG_X86_64, CONFIG_UML, CONFIG_ARM, CONFIG_ARM64))]
+    type Repr = i8;
+    #[cfg(not(any(CONFIG_X86_64, CONFIG_UML, CONFIG_ARM, CONFIG_ARM64)))]
+    type Repr = i32;
+}
+
+/// An atomic flag type intended to be backed by a performance-optimal integer type.
+///
+/// The backing integer type is an implementation detail; it may vary by architecture and change
+/// in the future.
+///
+/// [`AtomicFlag`] is generally preferable to [`Atomic<bool>`] when you need read-modify-write
+/// (RMW) operations (e.g. [`Atomic::xchg()`]/[`Atomic::cmpxchg()`]) or when [`Atomic<bool>`] does
+/// not save memory due to padding. On some architectures that do not support byte-sized atomic
+/// RMW operations, RMW operations on [`Atomic<bool>`] are slower.
+///
+/// If you only use [`Atomic::load()`]/[`Atomic::store()`], [`Atomic<bool>`] is fine.
+///
+/// # Examples
+///
+/// ```
+/// use kernel::sync::atomic::{AtomicFlag, Relaxed};
+///
+/// let flag = AtomicFlag::new(false);
+/// assert_eq!(false, flag.load(Relaxed));
+/// flag.store(true, Relaxed);
+/// assert_eq!(true, flag.load(Relaxed));
+/// ```
+pub struct AtomicFlag(Atomic<Flag>);
+
+impl AtomicFlag {
+    /// Creates a new atomic flag.
+    #[inline(always)]
+    pub const fn new(b: bool) -> Self {
+        Self(Atomic::new(Flag::new(b)))
+    }
+
+    /// Returns a mutable reference to the underlying flag as a [`bool`].
+    ///
+    /// This is safe because the mutable reference of the atomic flag guarantees exclusive access.
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// use kernel::sync::atomic::{AtomicFlag, Relaxed};
+    ///
+    /// let mut atomic_flag = AtomicFlag::new(false);
+    /// assert_eq!(false, atomic_flag.load(Relaxed));
+    /// *atomic_flag.get_mut() = true;
+    /// assert_eq!(true, atomic_flag.load(Relaxed));
+    /// ```
+    #[inline(always)]
+    pub fn get_mut(&mut self) -> &mut bool {
+        &mut self.0.get_mut().bool_field
+    }
+
+    /// Loads the value from the atomic flag.
+    #[inline(always)]
+    pub fn load<Ordering: ordering::AcquireOrRelaxed>(&self, o: Ordering) -> bool {
+        self.0.load(o).bool_field
+    }
+
+    /// Stores a value to the atomic flag.
+    #[inline(always)]
+    pub fn store<Ordering: ordering::ReleaseOrRelaxed>(&self, v: bool, o: Ordering) {
+        self.0.store(Flag::new(v), o);
+    }
+
+    /// Stores a value to the atomic flag and returns the previous value.
+    #[inline(always)]
+    pub fn xchg<Ordering: ordering::Ordering>(&self, new: bool, o: Ordering) -> bool {
+        self.0.xchg(Flag::new(new), o).bool_field
+    }
+
+    /// Store a value to the atomic flag if the current value is equal to `old`.
+    #[inline(always)]
+    pub fn cmpxchg<Ordering: ordering::Ordering>(
+        &self,
+        old: bool,
+        new: bool,
+        o: Ordering,
+    ) -> Result<bool, bool> {
+        match self.0.cmpxchg(Flag::new(old), Flag::new(new), o) {
+            Ok(_) => Ok(old),
+            Err(f) => Err(f.bool_field),
+        }
+    }
+}
diff --git a/rust/kernel/sync/atomic/predefine.rs b/rust/kernel/sync/atomic/predefine.rs
index 6f2c60529b64..ceb3caed9784 100644
--- a/rust/kernel/sync/atomic/predefine.rs
+++ b/rust/kernel/sync/atomic/predefine.rs
@@ -272,4 +272,21 @@ fn atomic_ptr_tests() {
         );
         assert_eq!(x.load(Relaxed), &raw const u);
     }
+
+    #[test]
+    fn atomic_flag_tests() {
+        let mut flag = AtomicFlag::new(false);
+
+        assert_eq!(false, flag.load(Relaxed));
+
+        *flag.get_mut() = true;
+        assert_eq!(true, flag.load(Relaxed));
+
+        assert_eq!(true, flag.xchg(false, Relaxed));
+        assert_eq!(false, flag.load(Relaxed));
+
+        *flag.get_mut() = true;
+        assert_eq!(Ok(true), flag.cmpxchg(true, false, Full));
+        assert_eq!(false, flag.load(Relaxed));
+    }
 }
-- 
2.50.1 (Apple Git-155)



* [PATCH 09/13] rust: list: Use AtomicFlag in AtomicTracker
  2026-03-03 20:16 [GIT PULL] [PATCH 00/13] Rust atomic changes for v7.1 Boqun Feng
                   ` (7 preceding siblings ...)
  2026-03-03 20:16 ` [PATCH 08/13] rust: sync: atomic: Add performance-optimal Flag type for atomic booleans Boqun Feng
@ 2026-03-03 20:16 ` Boqun Feng
  2026-03-03 20:16 ` [PATCH 10/13] rust: sync: atomic: Add atomic operation helpers over raw pointers Boqun Feng
                   ` (4 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Boqun Feng @ 2026-03-03 20:16 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Will Deacon, Mark Rutland, Miguel Ojeda, Boqun Feng, Gary Guo,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
	Trevor Gross, Danilo Krummrich, Thomas Gleixner, Ingo Molnar,
	rust-for-linux, linux-kernel, FUJITA Tomonori

From: FUJITA Tomonori <fujita.tomonori@gmail.com>

Make AtomicTracker use AtomicFlag instead of Atomic<bool> to avoid
slow byte-sized RMWs on architectures that don't support them.

Reviewed-by: Gary Guo <gary@garyguo.net>
Signed-off-by: FUJITA Tomonori <fujita.tomonori@gmail.com>
Signed-off-by: Boqun Feng <boqun@kernel.org>
Link: https://patch.msgid.link/20260129122622.3896144-3-tomo@aliasing.net
---
 rust/kernel/list/arc.rs | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/rust/kernel/list/arc.rs b/rust/kernel/list/arc.rs
index 2282f33913ee..5e84f500a3fe 100644
--- a/rust/kernel/list/arc.rs
+++ b/rust/kernel/list/arc.rs
@@ -6,7 +6,7 @@
 
 use crate::alloc::{AllocError, Flags};
 use crate::prelude::*;
-use crate::sync::atomic::{ordering, Atomic};
+use crate::sync::atomic::{ordering, AtomicFlag};
 use crate::sync::{Arc, ArcBorrow, UniqueArc};
 use core::marker::PhantomPinned;
 use core::ops::Deref;
@@ -469,7 +469,7 @@ impl<T, U, const ID: u64> core::ops::DispatchFromDyn<ListArc<U, ID>> for ListArc
 /// If the boolean is `false`, then there is no [`ListArc`] for this value.
 #[repr(transparent)]
 pub struct AtomicTracker<const ID: u64 = 0> {
-    inner: Atomic<bool>,
+    inner: AtomicFlag,
     // This value needs to be pinned to justify the INVARIANT: comment in `AtomicTracker::new`.
     _pin: PhantomPinned,
 }
@@ -480,12 +480,12 @@ pub fn new() -> impl PinInit<Self> {
         // INVARIANT: Pin-init initializers can't be used on an existing `Arc`, so this value will
         // not be constructed in an `Arc` that already has a `ListArc`.
         Self {
-            inner: Atomic::new(false),
+            inner: AtomicFlag::new(false),
             _pin: PhantomPinned,
         }
     }
 
-    fn project_inner(self: Pin<&mut Self>) -> &mut Atomic<bool> {
+    fn project_inner(self: Pin<&mut Self>) -> &mut AtomicFlag {
         // SAFETY: The `inner` field is not structurally pinned, so we may obtain a mutable
         // reference to it even if we only have a pinned reference to `self`.
         unsafe { &mut Pin::into_inner_unchecked(self).inner }
-- 
2.50.1 (Apple Git-155)



* [PATCH 10/13] rust: sync: atomic: Add atomic operation helpers over raw pointers
  2026-03-03 20:16 [GIT PULL] [PATCH 00/13] Rust atomic changes for v7.1 Boqun Feng
                   ` (8 preceding siblings ...)
  2026-03-03 20:16 ` [PATCH 09/13] rust: list: Use AtomicFlag in AtomicTracker Boqun Feng
@ 2026-03-03 20:16 ` Boqun Feng
  2026-03-03 20:16 ` [PATCH 11/13] rust: sync: atomic: Add fetch_sub() Boqun Feng
                   ` (3 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Boqun Feng @ 2026-03-03 20:16 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Will Deacon, Mark Rutland, Miguel Ojeda, Boqun Feng, Gary Guo,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
	Trevor Gross, Danilo Krummrich, Thomas Gleixner, Ingo Molnar,
	rust-for-linux, linux-kernel, Boqun Feng

From: Boqun Feng <boqun.feng@gmail.com>

In order to synchronize with C code or external memory, atomic operations
over raw pointers are needed. Although `Atomic::from_ptr()` already
provides a way to obtain a `&Atomic<T>`, it is more convenient to have
helpers that directly perform atomic operations on raw pointers. Hence a
few are added, each essentially an `Atomic::from_ptr().op()` wrapper.

Note on naming: since `atomic_xchg()` and `atomic_cmpxchg()` would
conflict with the names of the 32-bit C atomic xchg/cmpxchg, those
helpers are named just `xchg()` and `cmpxchg()`. For `atomic_load()` and
`atomic_store()`, the 32-bit C counterparts are `atomic_read()` and
`atomic_set()`, so the `atomic_` prefix is kept.
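A rough std-Rust analogue of the pattern these helpers wrap, using std's `AtomicI32::from_ptr` (stable since Rust 1.75). The kernel API is generic over `AtomicType` and uses compile-time ordering types, so treat this purely as a sketch of the `Atomic::from_ptr().op()` shape:

```rust
use std::sync::atomic::{AtomicI32, Ordering};

// SAFETY contract (mirroring the kernel helpers): `ptr` must be valid and
// properly aligned, and any concurrent access must itself be atomic.
unsafe fn atomic_load(ptr: *mut i32) -> i32 {
    // Reinterpret the raw pointer as a shared atomic, then load through it.
    unsafe { AtomicI32::from_ptr(ptr) }.load(Ordering::Acquire)
}

// Same contract as above; returns the previous value.
unsafe fn xchg(ptr: *mut i32, new: i32) -> i32 {
    unsafe { AtomicI32::from_ptr(ptr) }.swap(new, Ordering::SeqCst)
}

fn main() {
    let mut v = 42i32;
    let p = &mut v as *mut i32;

    // SAFETY: `p` is valid, aligned, and there is no concurrent access.
    assert_eq!(unsafe { atomic_load(p) }, 42);
    assert_eq!(unsafe { xchg(p, 7) }, 42);
    assert_eq!(unsafe { atomic_load(p) }, 7);
}
```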

[boqun: Fix typo spotted by Alice and fix broken sentence spotted by
Gary]

Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Reviewed-by: Gary Guo <gary@garyguo.net>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://patch.msgid.link/20260120115207.55318-3-boqun.feng@gmail.com
---
 rust/kernel/sync/atomic.rs           | 104 +++++++++++++++++++++++++++
 rust/kernel/sync/atomic/predefine.rs |  46 ++++++++++++
 2 files changed, 150 insertions(+)

diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index f80cebce5bc1..1bb1fc2be177 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -703,3 +703,107 @@ pub fn cmpxchg<Ordering: ordering::Ordering>(
         }
     }
 }
+
+/// Atomic load over raw pointers.
+///
+/// This function provides a short-cut of `Atomic::from_ptr().load(..)`, and can be used to work
+/// with C side on synchronizations:
+///
+/// - `atomic_load(.., Relaxed)` maps to `READ_ONCE()` when used for inter-thread communication.
+/// - `atomic_load(.., Acquire)` maps to `smp_load_acquire()`.
+///
+/// # Safety
+///
+/// - `ptr` is a valid pointer to `T` and aligned to `align_of::<T>()`.
+/// - If there is a concurrent store from kernel (C or Rust), it has to be atomic.
+#[doc(alias("READ_ONCE", "smp_load_acquire"))]
+#[inline(always)]
+pub unsafe fn atomic_load<T: AtomicType, Ordering: ordering::AcquireOrRelaxed>(
+    ptr: *mut T,
+    o: Ordering,
+) -> T
+where
+    T::Repr: AtomicBasicOps,
+{
+    // SAFETY: Per the function safety requirement, `ptr` is valid and aligned to
+    // `align_of::<T>()`, and all concurrent stores from kernel are atomic, hence no data race per
+    // LKMM.
+    unsafe { Atomic::from_ptr(ptr) }.load(o)
+}
+
+/// Atomic store over raw pointers.
+///
+/// This function provides a short-cut of `Atomic::from_ptr().store(..)`, and can be used to work
+/// with C side on synchronizations:
+///
+/// - `atomic_store(.., Relaxed)` maps to `WRITE_ONCE()` when used for inter-thread communication.
+/// - `atomic_store(.., Release)` maps to `smp_store_release()`.
+///
+/// # Safety
+///
+/// - `ptr` is a valid pointer to `T` and aligned to `align_of::<T>()`.
+/// - If there is a concurrent access from kernel (C or Rust), it has to be atomic.
+#[doc(alias("WRITE_ONCE", "smp_store_release"))]
+#[inline(always)]
+pub unsafe fn atomic_store<T: AtomicType, Ordering: ordering::ReleaseOrRelaxed>(
+    ptr: *mut T,
+    v: T,
+    o: Ordering,
+) where
+    T::Repr: AtomicBasicOps,
+{
+    // SAFETY: Per the function safety requirement, `ptr` is valid and aligned to
+    // `align_of::<T>()`, and all concurrent accesses from kernel are atomic, hence no data race
+    // per LKMM.
+    unsafe { Atomic::from_ptr(ptr) }.store(v, o);
+}
+
+/// Atomic exchange over raw pointers.
+///
+/// This function provides a short-cut of `Atomic::from_ptr().xchg(..)`, and can be used to work
+/// with C side on synchronizations.
+///
+/// # Safety
+///
+/// - `ptr` is a valid pointer to `T` and aligned to `align_of::<T>()`.
+/// - If there is a concurrent access from kernel (C or Rust), it has to be atomic.
+#[inline(always)]
+pub unsafe fn xchg<T: AtomicType, Ordering: ordering::Ordering>(
+    ptr: *mut T,
+    new: T,
+    o: Ordering,
+) -> T
+where
+    T::Repr: AtomicExchangeOps,
+{
+    // SAFETY: Per the function safety requirement, `ptr` is valid and aligned to
+    // `align_of::<T>()`, and all concurrent accesses from kernel are atomic, hence no data race
+    // per LKMM.
+    unsafe { Atomic::from_ptr(ptr) }.xchg(new, o)
+}
+
+/// Atomic compare and exchange over raw pointers.
+///
+/// This function provides a short-cut of `Atomic::from_ptr().cmpxchg(..)`, and can be used to work
+/// with C side on synchronizations.
+///
+/// # Safety
+///
+/// - `ptr` is a valid pointer to `T` and aligned to `align_of::<T>()`.
+/// - If there is a concurrent access from kernel (C or Rust), it has to be atomic.
+#[doc(alias("try_cmpxchg"))]
+#[inline(always)]
+pub unsafe fn cmpxchg<T: AtomicType, Ordering: ordering::Ordering>(
+    ptr: *mut T,
+    old: T,
+    new: T,
+    o: Ordering,
+) -> Result<T, T>
+where
+    T::Repr: AtomicExchangeOps,
+{
+    // SAFETY: Per the function safety requirement, `ptr` is valid and aligned to
+    // `align_of::<T>()`, and all concurrent accesses from kernel are atomic, hence no data race
+    // per LKMM.
+    unsafe { Atomic::from_ptr(ptr) }.cmpxchg(old, new, o)
+}
diff --git a/rust/kernel/sync/atomic/predefine.rs b/rust/kernel/sync/atomic/predefine.rs
index ceb3caed9784..1d53834fcb12 100644
--- a/rust/kernel/sync/atomic/predefine.rs
+++ b/rust/kernel/sync/atomic/predefine.rs
@@ -178,6 +178,14 @@ fn atomic_basic_tests() {
 
             assert_eq!(v, x.load(Relaxed));
         });
+
+        for_each_type!(42 in [i8, i16, i32, i64, u32, u64, isize, usize] |v| {
+            let x = Atomic::new(v);
+            let ptr = x.as_ptr();
+
+            // SAFETY: `ptr` is a valid pointer and no concurrent access.
+            assert_eq!(v, unsafe { atomic_load(ptr, Relaxed) });
+        });
     }
 
     #[test]
@@ -188,6 +196,17 @@ fn atomic_acquire_release_tests() {
             x.store(v, Release);
             assert_eq!(v, x.load(Acquire));
         });
+
+        for_each_type!(42 in [i8, i16, i32, i64, u32, u64, isize, usize] |v| {
+            let x = Atomic::new(0);
+            let ptr = x.as_ptr();
+
+            // SAFETY: `ptr` is a valid pointer and no concurrent access.
+            unsafe { atomic_store(ptr, v, Release) };
+
+            // SAFETY: `ptr` is a valid pointer and no concurrent access.
+            assert_eq!(v, unsafe { atomic_load(ptr, Acquire) });
+        });
     }
 
     #[test]
@@ -201,6 +220,18 @@ fn atomic_xchg_tests() {
             assert_eq!(old, x.xchg(new, Full));
             assert_eq!(new, x.load(Relaxed));
         });
+
+        for_each_type!(42 in [i8, i16, i32, i64, u32, u64, isize, usize] |v| {
+            let x = Atomic::new(v);
+            let ptr = x.as_ptr();
+
+            let old = v;
+            let new = v + 1;
+
+            // SAFETY: `ptr` is a valid pointer and no concurrent access.
+            assert_eq!(old, unsafe { xchg(ptr, new, Full) });
+            assert_eq!(new, x.load(Relaxed));
+        });
     }
 
     #[test]
@@ -216,6 +247,21 @@ fn atomic_cmpxchg_tests() {
             assert_eq!(Ok(old), x.cmpxchg(old, new, Relaxed));
             assert_eq!(new, x.load(Relaxed));
         });
+
+        for_each_type!(42 in [i8, i16, i32, i64, u32, u64, isize, usize] |v| {
+            let x = Atomic::new(v);
+            let ptr = x.as_ptr();
+
+            let old = v;
+            let new = v + 1;
+
+            // SAFETY: `ptr` is a valid pointer and no concurrent access.
+            assert_eq!(Err(old), unsafe { cmpxchg(ptr, new, new, Full) });
+            assert_eq!(old, x.load(Relaxed));
+            // SAFETY: `ptr` is a valid pointer and no concurrent access.
+            assert_eq!(Ok(old), unsafe { cmpxchg(ptr, old, new, Relaxed) });
+            assert_eq!(new, x.load(Relaxed));
+        });
     }
 
     #[test]
-- 
2.50.1 (Apple Git-155)



* [PATCH 11/13] rust: sync: atomic: Add fetch_sub()
  2026-03-03 20:16 [GIT PULL] [PATCH 00/13] Rust atomic changes for v7.1 Boqun Feng
                   ` (9 preceding siblings ...)
  2026-03-03 20:16 ` [PATCH 10/13] rust: sync: atomic: Add atomic operation helpers over raw pointers Boqun Feng
@ 2026-03-03 20:16 ` Boqun Feng
  2026-03-03 20:17 ` [PATCH 12/13] rust: sync: atomic: Update documentation for `fetch_add()` Boqun Feng
                   ` (2 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Boqun Feng @ 2026-03-03 20:16 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Will Deacon, Mark Rutland, Miguel Ojeda, Boqun Feng, Gary Guo,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
	Trevor Gross, Danilo Krummrich, Thomas Gleixner, Ingo Molnar,
	rust-for-linux, linux-kernel

From: Andreas Hindborg <a.hindborg@kernel.org>

Add `Atomic::fetch_sub()`, with implementation and documentation in line
with the existing `Atomic::fetch_add()`.
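The documented semantics (return the old value; update via `wrapping_sub`) can be checked against std's analogous `AtomicI32::fetch_sub`, which behaves the same way:

```rust
use std::sync::atomic::{AtomicI32, Ordering};

fn main() {
    // fetch_sub atomically updates the value to `old.wrapping_sub(v)` and
    // returns the value from before the update.
    let x = AtomicI32::new(42);
    assert_eq!(x.fetch_sub(12, Ordering::SeqCst), 42);
    assert_eq!(x.load(Ordering::Relaxed), 30);

    // Wrapping behavior at the boundary, matching `wrapping_sub`.
    let y = AtomicI32::new(i32::MIN);
    assert_eq!(y.fetch_sub(1, Ordering::SeqCst), i32::MIN);
    assert_eq!(y.load(Ordering::Relaxed), i32::MAX);
}
```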

Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
Signed-off-by: Boqun Feng <boqun@kernel.org>
Link: https://patch.msgid.link/20260220-atomic-sub-v3-1-e63cbed1d2aa@kernel.org
---
 rust/kernel/sync/atomic.rs          | 43 +++++++++++++++++++++++++++++
 rust/kernel/sync/atomic/internal.rs |  5 ++++
 2 files changed, 48 insertions(+)

diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index 1bb1fc2be177..545a8d37ba78 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -577,6 +577,49 @@ pub fn fetch_add<Rhs, Ordering: ordering::Ordering>(&self, v: Rhs, _: Ordering)
         // SAFETY: `ret` comes from reading `self.0`, which is a valid `T` per type invariants.
         unsafe { from_repr(ret) }
     }
+
+    /// Atomic fetch and subtract.
+    ///
+    /// Atomically updates `*self` to `(*self).wrapping_sub(v)`, and returns the value of `*self`
+    /// before the update.
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// use kernel::sync::atomic::{Atomic, Acquire, Full, Relaxed};
+    ///
+    /// let x = Atomic::new(42);
+    /// assert_eq!(42, x.load(Relaxed));
+    /// assert_eq!(42, x.fetch_sub(12, Acquire));
+    /// assert_eq!(30, x.load(Relaxed));
+    ///
+    /// let x = Atomic::new(42);
+    /// assert_eq!(42, x.load(Relaxed));
+    /// assert_eq!(42, x.fetch_sub(12, Full));
+    /// assert_eq!(30, x.load(Relaxed));
+    /// ```
+    #[inline(always)]
+    pub fn fetch_sub<Rhs, Ordering: ordering::Ordering>(&self, v: Rhs, _: Ordering) -> T
+    where
+        // Types that support addition also support subtraction.
+        T: AtomicAdd<Rhs>,
+    {
+        let v = T::rhs_into_delta(v);
+
+        // INVARIANT: `self.0` is a valid `T` after `atomic_fetch_sub*()` due to safety requirement
+        // of `AtomicAdd`.
+        let ret = {
+            match Ordering::TYPE {
+                OrderingType::Full => T::Repr::atomic_fetch_sub(&self.0, v),
+                OrderingType::Acquire => T::Repr::atomic_fetch_sub_acquire(&self.0, v),
+                OrderingType::Release => T::Repr::atomic_fetch_sub_release(&self.0, v),
+                OrderingType::Relaxed => T::Repr::atomic_fetch_sub_relaxed(&self.0, v),
+            }
+        };
+
+        // SAFETY: `ret` comes from reading `self.0`, which is a valid `T` per type invariants.
+        unsafe { from_repr(ret) }
+    }
 }
 
 #[cfg(any(CONFIG_X86_64, CONFIG_UML, CONFIG_ARM, CONFIG_ARM64))]
diff --git a/rust/kernel/sync/atomic/internal.rs b/rust/kernel/sync/atomic/internal.rs
index e301db4eaf91..b762dbdf6d18 100644
--- a/rust/kernel/sync/atomic/internal.rs
+++ b/rust/kernel/sync/atomic/internal.rs
@@ -340,5 +340,10 @@ fn fetch_add[acquire, release, relaxed](a: &AtomicRepr<Self>, v: Self::Delta) ->
             // SAFETY: `a.as_ptr()` is valid and properly aligned.
             unsafe { bindings::#call(v, a.as_ptr().cast()) }
         }
+
+        fn fetch_sub[acquire, release, relaxed](a: &AtomicRepr<Self>, v: Self::Delta) -> Self {
+            // SAFETY: `a.as_ptr()` guarantees the returned pointer is valid and properly aligned.
+            unsafe { bindings::#call(v, a.as_ptr().cast()) }
+        }
     }
 );
-- 
2.50.1 (Apple Git-155)



* [PATCH 12/13] rust: sync: atomic: Update documentation for `fetch_add()`
  2026-03-03 20:16 [GIT PULL] [PATCH 00/13] Rust atomic changes for v7.1 Boqun Feng
                   ` (10 preceding siblings ...)
  2026-03-03 20:16 ` [PATCH 11/13] rust: sync: atomic: Add fetch_sub() Boqun Feng
@ 2026-03-03 20:17 ` Boqun Feng
  2026-03-03 20:17 ` [PATCH 13/13] rust: atomic: Update a safety comment in impl of `fetch_add()` Boqun Feng
  2026-03-09 15:54 ` [GIT PULL] [PATCH 00/13] Rust atomic changes for v7.1 Boqun Feng
  13 siblings, 0 replies; 15+ messages in thread
From: Boqun Feng @ 2026-03-03 20:17 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Will Deacon, Mark Rutland, Miguel Ojeda, Boqun Feng, Gary Guo,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
	Trevor Gross, Danilo Krummrich, Thomas Gleixner, Ingo Molnar,
	rust-for-linux, linux-kernel

From: Andreas Hindborg <a.hindborg@kernel.org>

The documentation for `fetch_add()` does not indicate that the original
value is returned by `fetch_add()`. Update the documentation so this is
clear.

Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
Signed-off-by: Boqun Feng <boqun@kernel.org>
Link: https://patch.msgid.link/20260220-atomic-sub-v3-2-e63cbed1d2aa@kernel.org
---
 rust/kernel/sync/atomic.rs | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index 545a8d37ba78..9cd009d57e35 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -545,16 +545,14 @@ pub fn add<Rhs>(&self, v: Rhs, _: ordering::Relaxed)
     /// use kernel::sync::atomic::{Atomic, Acquire, Full, Relaxed};
     ///
     /// let x = Atomic::new(42);
-    ///
     /// assert_eq!(42, x.load(Relaxed));
-    ///
-    /// assert_eq!(54, { x.fetch_add(12, Acquire); x.load(Relaxed) });
+    /// assert_eq!(42, x.fetch_add(12, Acquire));
+    /// assert_eq!(54, x.load(Relaxed));
     ///
     /// let x = Atomic::new(42);
-    ///
     /// assert_eq!(42, x.load(Relaxed));
-    ///
-    /// assert_eq!(54, { x.fetch_add(12, Full); x.load(Relaxed) } );
+    /// assert_eq!(42, x.fetch_add(12, Full));
+    /// assert_eq!(54, x.load(Relaxed));
     /// ```
     #[inline(always)]
     pub fn fetch_add<Rhs, Ordering: ordering::Ordering>(&self, v: Rhs, _: Ordering) -> T
-- 
2.50.1 (Apple Git-155)



* [PATCH 13/13] rust: atomic: Update a safety comment in impl of `fetch_add()`
  2026-03-03 20:16 [GIT PULL] [PATCH 00/13] Rust atomic changes for v7.1 Boqun Feng
                   ` (11 preceding siblings ...)
  2026-03-03 20:17 ` [PATCH 12/13] rust: sync: atomic: Update documentation for `fetch_add()` Boqun Feng
@ 2026-03-03 20:17 ` Boqun Feng
  2026-03-09 15:54 ` [GIT PULL] [PATCH 00/13] Rust atomic changes for v7.1 Boqun Feng
  13 siblings, 0 replies; 15+ messages in thread
From: Boqun Feng @ 2026-03-03 20:17 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Will Deacon, Mark Rutland, Miguel Ojeda, Boqun Feng, Gary Guo,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
	Trevor Gross, Danilo Krummrich, Thomas Gleixner, Ingo Molnar,
	rust-for-linux, linux-kernel

From: Andreas Hindborg <a.hindborg@kernel.org>

The safety comment used in the implementation of `fetch_add()` could be
read as merely asserting that something is true without justifying it.
Update the safety comment to include the justification.

Suggested-by: Miguel Ojeda <ojeda@kernel.org>
Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: Boqun Feng <boqun@kernel.org>
Link: https://patch.msgid.link/20260220-atomic-sub-v3-3-e63cbed1d2aa@kernel.org
---
 rust/kernel/sync/atomic/internal.rs | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/rust/kernel/sync/atomic/internal.rs b/rust/kernel/sync/atomic/internal.rs
index b762dbdf6d18..ad810c2172ec 100644
--- a/rust/kernel/sync/atomic/internal.rs
+++ b/rust/kernel/sync/atomic/internal.rs
@@ -337,7 +337,7 @@ fn add[](a: &AtomicRepr<Self>, v: Self::Delta) {
         /// Atomically updates `*a` to `(*a).wrapping_add(v)`, and returns the value of `*a`
         /// before the update.
         fn fetch_add[acquire, release, relaxed](a: &AtomicRepr<Self>, v: Self::Delta) -> Self {
-            // SAFETY: `a.as_ptr()` is valid and properly aligned.
+            // SAFETY: `a.as_ptr()` guarantees the returned pointer is valid and properly aligned.
             unsafe { bindings::#call(v, a.as_ptr().cast()) }
         }
 
-- 
2.50.1 (Apple Git-155)



* Re: [GIT PULL] [PATCH 00/13] Rust atomic changes for v7.1
  2026-03-03 20:16 [GIT PULL] [PATCH 00/13] Rust atomic changes for v7.1 Boqun Feng
                   ` (12 preceding siblings ...)
  2026-03-03 20:17 ` [PATCH 13/13] rust: atomic: Update a safety comment in impl of `fetch_add()` Boqun Feng
@ 2026-03-09 15:54 ` Boqun Feng
  13 siblings, 0 replies; 15+ messages in thread
From: Boqun Feng @ 2026-03-09 15:54 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Will Deacon, Mark Rutland, Miguel Ojeda, Gary Guo,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
	Trevor Gross, Danilo Krummrich, Thomas Gleixner, Ingo Molnar,
	rust-for-linux, linux-kernel

On Tue, Mar 03, 2026 at 12:16:48PM -0800, Boqun Feng wrote:
> Hi Peter,
> 
> Please pull these changes of Rust atomic in v7.1 into tip/locking/core.
> Major changes are the atomic pointer support and a boolean-like
> AtomicFlag type (using a byte if arch support efficient xchg/cmpxchg
> over bytes otherwise 4 bytes). Thanks!
> 

Gentle ping ;-) Thanks!

Regards,
Boqun

> Regards,
> Boqun
> 
> 
> The following changes since commit 3dcef70e41ab13483803c536ddea8d5f1803ee25:
> 
>   ww-mutex: Fix the ww_acquire_ctx function annotations (2026-02-27 16:40:20 +0100)
> 
> are available in the Git repository at:
> 
>   git://git.kernel.org/pub/scm/linux/kernel/git/boqun/linux.git tags/rust-atomic.20260303a
> 
> for you to fetch changes up to 68d1c8ac7f0b1f0de92a803b9b71090fd1b86d17:
> 
>   rust: atomic: Update a safety comment in impl of `fetch_add()` (2026-03-03 11:55:57 -0800)
> 
> ----------------------------------------------------------------
> Rust atomic changes for v7.1
> 
> * Add Atomic<ptr> support.
> * Add an AtomicFlag type for boolean-like usage with
>   architecture-specific performance optimization.
> * Add unsafe atomic operations over raw pointers.
> * Add `fetch_sub()` for atomic types.
> * Documentation and example improvements.
> -----BEGIN PGP SIGNATURE-----
[...]
