public inbox for rcu@vger.kernel.org
* [PATCH 0/5] rust: sync: Atomic pointer and RCU
@ 2026-01-17 12:22 Boqun Feng
  2026-01-17 12:22 ` [PATCH 1/5] rust: helpers: Generify the definitions of rust_helper_*_{read,set}* Boqun Feng
                   ` (4 more replies)
  0 siblings, 5 replies; 20+ messages in thread
From: Boqun Feng @ 2026-01-17 12:22 UTC (permalink / raw)
  To: rust-for-linux, linux-kernel, rcu
  Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
	Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
	Danilo Krummrich, Will Deacon, Peter Zijlstra, Mark Rutland,
	Paul E. McKenney, Frederic Weisbecker, Neeraj Upadhyay,
	Joel Fernandes, Josh Triplett, Uladzislau Rezki, Steven Rostedt,
	Mathieu Desnoyers, Lai Jiangshan, Zqiang, FUJITA Tomonori

Hi,

This is a respin of my previous RCU pointer patch [1]. An RCU protected
pointer maps to a "struct foo __rcu *" on the C side. Although RCU has
its own API, fundamentally it is a pointer that is operated on
atomically, hence Rust's atomic (pointer) API provides the necessary
atomicity and ordering.

Asynchronous reclaim is not part of the current implementation, but it
should be easy to add.

[1]: https://lore.kernel.org/rust-for-linux/20250421164221.1121805-13-boqun.feng@gmail.com/

Regards,
Boqun

Boqun Feng (5):
  rust: helpers: Generify the definitions of rust_helper_*_{read,set}*
  rust: helpers: Generify the definitions of rust_helper_*_xchg*
  rust: helpers: Generify the definitions of rust_helper_*_cmpxchg*
  rust: sync: atomic: Add Atomic<*mut T> support
  rust: sync: rcu: Add RCU protected pointer

 rust/helpers/atomic_ext.c            | 158 +++++--------
 rust/kernel/sync/atomic.rs           |  12 +-
 rust/kernel/sync/atomic/internal.rs  |  21 +-
 rust/kernel/sync/atomic/predefine.rs |  23 ++
 rust/kernel/sync/rcu.rs              | 326 ++++++++++++++++++++++++++-
 5 files changed, 427 insertions(+), 113 deletions(-)

-- 
2.51.0


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [PATCH 1/5] rust: helpers: Generify the definitions of rust_helper_*_{read,set}*
  2026-01-17 12:22 [PATCH 0/5] rust: sync: Atomic pointer and RCU Boqun Feng
@ 2026-01-17 12:22 ` Boqun Feng
  2026-01-17 12:22 ` [PATCH 2/5] rust: helpers: Generify the definitions of rust_helper_*_xchg* Boqun Feng
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 20+ messages in thread
From: Boqun Feng @ 2026-01-17 12:22 UTC (permalink / raw)
  To: rust-for-linux, linux-kernel, rcu
  Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
	Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
	Danilo Krummrich, Will Deacon, Peter Zijlstra, Mark Rutland,
	Paul E. McKenney, Frederic Weisbecker, Neeraj Upadhyay,
	Joel Fernandes, Josh Triplett, Uladzislau Rezki, Steven Rostedt,
	Mathieu Desnoyers, Lai Jiangshan, Zqiang, FUJITA Tomonori

To support atomic pointers, more {read,set} helpers will be introduced,
hence define macros that generate these helpers to ease the introduction
of future ones.

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 rust/helpers/atomic_ext.c | 53 +++++++++++++++++----------------------
 1 file changed, 23 insertions(+), 30 deletions(-)

diff --git a/rust/helpers/atomic_ext.c b/rust/helpers/atomic_ext.c
index 7d0c2bd340da..f471c1ff123d 100644
--- a/rust/helpers/atomic_ext.c
+++ b/rust/helpers/atomic_ext.c
@@ -4,45 +4,38 @@
 #include <asm/rwonce.h>
 #include <linux/atomic.h>
 
-__rust_helper s8 rust_helper_atomic_i8_read(s8 *ptr)
-{
-	return READ_ONCE(*ptr);
-}
-
-__rust_helper s8 rust_helper_atomic_i8_read_acquire(s8 *ptr)
-{
-	return smp_load_acquire(ptr);
-}
-
-__rust_helper s16 rust_helper_atomic_i16_read(s16 *ptr)
-{
-	return READ_ONCE(*ptr);
+#define GEN_READ_HELPER(tname, type)						\
+__rust_helper type rust_helper_atomic_##tname##_read(type *ptr)			\
+{										\
+	return READ_ONCE(*ptr);							\
 }
 
-__rust_helper s16 rust_helper_atomic_i16_read_acquire(s16 *ptr)
-{
-	return smp_load_acquire(ptr);
+#define GEN_SET_HELPER(tname, type)						\
+__rust_helper void rust_helper_atomic_##tname##_set(type *ptr, type val)	\
+{										\
+	WRITE_ONCE(*ptr, val);							\
 }
 
-__rust_helper void rust_helper_atomic_i8_set(s8 *ptr, s8 val)
-{
-	WRITE_ONCE(*ptr, val);
+#define GEN_READ_ACQUIRE_HELPER(tname, type)					\
+__rust_helper type rust_helper_atomic_##tname##_read_acquire(type *ptr)		\
+{										\
+	return smp_load_acquire(ptr);						\
 }
 
-__rust_helper void rust_helper_atomic_i8_set_release(s8 *ptr, s8 val)
-{
-	smp_store_release(ptr, val);
+#define GEN_SET_RELEASE_HELPER(tname, type)					\
+__rust_helper void rust_helper_atomic_##tname##_set_release(type *ptr, type val)\
+{										\
+	smp_store_release(ptr, val);						\
 }
 
-__rust_helper void rust_helper_atomic_i16_set(s16 *ptr, s16 val)
-{
-	WRITE_ONCE(*ptr, val);
-}
+#define GEN_READ_SET_HELPERS(tname, type)					\
+	GEN_READ_HELPER(tname, type)						\
+	GEN_SET_HELPER(tname, type)						\
+	GEN_READ_ACQUIRE_HELPER(tname, type)					\
+	GEN_SET_RELEASE_HELPER(tname, type)					\
 
-__rust_helper void rust_helper_atomic_i16_set_release(s16 *ptr, s16 val)
-{
-	smp_store_release(ptr, val);
-}
+GEN_READ_SET_HELPERS(i8, s8)
+GEN_READ_SET_HELPERS(i16, s16)
 
 /*
  * xchg helpers depend on ARCH_SUPPORTS_ATOMIC_RMW and on the
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 2/5] rust: helpers: Generify the definitions of rust_helper_*_xchg*
  2026-01-17 12:22 [PATCH 0/5] rust: sync: Atomic pointer and RCU Boqun Feng
  2026-01-17 12:22 ` [PATCH 1/5] rust: helpers: Generify the definitions of rust_helper_*_{read,set}* Boqun Feng
@ 2026-01-17 12:22 ` Boqun Feng
  2026-01-17 12:22 ` [PATCH 3/5] rust: helpers: Generify the definitions of rust_helper_*_cmpxchg* Boqun Feng
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 20+ messages in thread
From: Boqun Feng @ 2026-01-17 12:22 UTC (permalink / raw)
  To: rust-for-linux, linux-kernel, rcu
  Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
	Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
	Danilo Krummrich, Will Deacon, Peter Zijlstra, Mark Rutland,
	Paul E. McKenney, Frederic Weisbecker, Neeraj Upadhyay,
	Joel Fernandes, Josh Triplett, Uladzislau Rezki, Steven Rostedt,
	Mathieu Desnoyers, Lai Jiangshan, Zqiang, FUJITA Tomonori

To support atomic pointers, more xchg helpers will be introduced, hence
define macros that generate these helpers to ease the introduction of
future ones.

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 rust/helpers/atomic_ext.c | 48 ++++++++++-----------------------------
 1 file changed, 12 insertions(+), 36 deletions(-)

diff --git a/rust/helpers/atomic_ext.c b/rust/helpers/atomic_ext.c
index f471c1ff123d..c5f665bbe785 100644
--- a/rust/helpers/atomic_ext.c
+++ b/rust/helpers/atomic_ext.c
@@ -44,45 +44,21 @@ GEN_READ_SET_HELPERS(i16, s16)
  * The architectures that currently support Rust (x86_64, armv7,
  * arm64, riscv, and loongarch) satisfy these requirements.
  */
-__rust_helper s8 rust_helper_atomic_i8_xchg(s8 *ptr, s8 new)
-{
-	return xchg(ptr, new);
-}
-
-__rust_helper s16 rust_helper_atomic_i16_xchg(s16 *ptr, s16 new)
-{
-	return xchg(ptr, new);
-}
-
-__rust_helper s8 rust_helper_atomic_i8_xchg_acquire(s8 *ptr, s8 new)
-{
-	return xchg_acquire(ptr, new);
-}
-
-__rust_helper s16 rust_helper_atomic_i16_xchg_acquire(s16 *ptr, s16 new)
-{
-	return xchg_acquire(ptr, new);
-}
-
-__rust_helper s8 rust_helper_atomic_i8_xchg_release(s8 *ptr, s8 new)
-{
-	return xchg_release(ptr, new);
-}
-
-__rust_helper s16 rust_helper_atomic_i16_xchg_release(s16 *ptr, s16 new)
-{
-	return xchg_release(ptr, new);
+#define GEN_XCHG_HELPER(tname, type, suffix)					\
+__rust_helper type								\
+rust_helper_atomic_##tname##_xchg##suffix(type *ptr, type new)			\
+{										\
+	return xchg##suffix(ptr, new);					\
 }
 
-__rust_helper s8 rust_helper_atomic_i8_xchg_relaxed(s8 *ptr, s8 new)
-{
-	return xchg_relaxed(ptr, new);
-}
+#define GEN_XCHG_HELPERS(tname, type)						\
+	GEN_XCHG_HELPER(tname, type, )						\
+	GEN_XCHG_HELPER(tname, type, _acquire)					\
+	GEN_XCHG_HELPER(tname, type, _release)					\
+	GEN_XCHG_HELPER(tname, type, _relaxed)					\
 
-__rust_helper s16 rust_helper_atomic_i16_xchg_relaxed(s16 *ptr, s16 new)
-{
-	return xchg_relaxed(ptr, new);
-}
+GEN_XCHG_HELPERS(i8, s8)
+GEN_XCHG_HELPERS(i16, s16)
 
 /*
  * try_cmpxchg helpers depend on ARCH_SUPPORTS_ATOMIC_RMW and on the
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 3/5] rust: helpers: Generify the definitions of rust_helper_*_cmpxchg*
  2026-01-17 12:22 [PATCH 0/5] rust: sync: Atomic pointer and RCU Boqun Feng
  2026-01-17 12:22 ` [PATCH 1/5] rust: helpers: Generify the definitions of rust_helper_*_{read,set}* Boqun Feng
  2026-01-17 12:22 ` [PATCH 2/5] rust: helpers: Generify the definitions of rust_helper_*_xchg* Boqun Feng
@ 2026-01-17 12:22 ` Boqun Feng
  2026-01-17 12:22 ` [PATCH 4/5] rust: sync: atomic: Add Atomic<*mut T> support Boqun Feng
  2026-01-17 12:22 ` [PATCH 5/5] rust: sync: rcu: Add RCU protected pointer Boqun Feng
  4 siblings, 0 replies; 20+ messages in thread
From: Boqun Feng @ 2026-01-17 12:22 UTC (permalink / raw)
  To: rust-for-linux, linux-kernel, rcu
  Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
	Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
	Danilo Krummrich, Will Deacon, Peter Zijlstra, Mark Rutland,
	Paul E. McKenney, Frederic Weisbecker, Neeraj Upadhyay,
	Joel Fernandes, Josh Triplett, Uladzislau Rezki, Steven Rostedt,
	Mathieu Desnoyers, Lai Jiangshan, Zqiang, FUJITA Tomonori

To support atomic pointers, more cmpxchg helpers will be introduced,
hence define macros that generate these helpers to ease the introduction
of future ones.

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 rust/helpers/atomic_ext.c | 48 ++++++++++-----------------------------
 1 file changed, 12 insertions(+), 36 deletions(-)

diff --git a/rust/helpers/atomic_ext.c b/rust/helpers/atomic_ext.c
index c5f665bbe785..240218e2e708 100644
--- a/rust/helpers/atomic_ext.c
+++ b/rust/helpers/atomic_ext.c
@@ -67,42 +67,18 @@ GEN_XCHG_HELPERS(i16, s16)
  * The architectures that currently support Rust (x86_64, armv7,
  * arm64, riscv, and loongarch) satisfy these requirements.
  */
-__rust_helper bool rust_helper_atomic_i8_try_cmpxchg(s8 *ptr, s8 *old, s8 new)
-{
-	return try_cmpxchg(ptr, old, new);
-}
-
-__rust_helper bool rust_helper_atomic_i16_try_cmpxchg(s16 *ptr, s16 *old, s16 new)
-{
-	return try_cmpxchg(ptr, old, new);
-}
-
-__rust_helper bool rust_helper_atomic_i8_try_cmpxchg_acquire(s8 *ptr, s8 *old, s8 new)
-{
-	return try_cmpxchg_acquire(ptr, old, new);
-}
-
-__rust_helper bool rust_helper_atomic_i16_try_cmpxchg_acquire(s16 *ptr, s16 *old, s16 new)
-{
-	return try_cmpxchg_acquire(ptr, old, new);
-}
-
-__rust_helper bool rust_helper_atomic_i8_try_cmpxchg_release(s8 *ptr, s8 *old, s8 new)
-{
-	return try_cmpxchg_release(ptr, old, new);
-}
-
-__rust_helper bool rust_helper_atomic_i16_try_cmpxchg_release(s16 *ptr, s16 *old, s16 new)
-{
-	return try_cmpxchg_release(ptr, old, new);
+#define GEN_TRY_CMPXCHG_HELPER(tname, type, suffix)				\
+__rust_helper bool								\
+rust_helper_atomic_##tname##_try_cmpxchg##suffix(type *ptr, type *old, type new)\
+{										\
+	return try_cmpxchg##suffix(ptr, old, new);				\
 }
 
-__rust_helper bool rust_helper_atomic_i8_try_cmpxchg_relaxed(s8 *ptr, s8 *old, s8 new)
-{
-	return try_cmpxchg_relaxed(ptr, old, new);
-}
+#define GEN_TRY_CMPXCHG_HELPERS(tname, type)					\
+	GEN_TRY_CMPXCHG_HELPER(tname, type, )					\
+	GEN_TRY_CMPXCHG_HELPER(tname, type, _acquire)				\
+	GEN_TRY_CMPXCHG_HELPER(tname, type, _release)				\
+	GEN_TRY_CMPXCHG_HELPER(tname, type, _relaxed)				\
 
-__rust_helper bool rust_helper_atomic_i16_try_cmpxchg_relaxed(s16 *ptr, s16 *old, s16 new)
-{
-	return try_cmpxchg_relaxed(ptr, old, new);
-}
+GEN_TRY_CMPXCHG_HELPERS(i8, s8)
+GEN_TRY_CMPXCHG_HELPERS(i16, s16)
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 4/5] rust: sync: atomic: Add Atomic<*mut T> support
  2026-01-17 12:22 [PATCH 0/5] rust: sync: Atomic pointer and RCU Boqun Feng
                   ` (2 preceding siblings ...)
  2026-01-17 12:22 ` [PATCH 3/5] rust: helpers: Generify the definitions of rust_helper_*_cmpxchg* Boqun Feng
@ 2026-01-17 12:22 ` Boqun Feng
  2026-01-17 17:03   ` Gary Guo
                     ` (2 more replies)
  2026-01-17 12:22 ` [PATCH 5/5] rust: sync: rcu: Add RCU protected pointer Boqun Feng
  4 siblings, 3 replies; 20+ messages in thread
From: Boqun Feng @ 2026-01-17 12:22 UTC (permalink / raw)
  To: rust-for-linux, linux-kernel, rcu
  Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
	Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
	Danilo Krummrich, Will Deacon, Peter Zijlstra, Mark Rutland,
	Paul E. McKenney, Frederic Weisbecker, Neeraj Upadhyay,
	Joel Fernandes, Josh Triplett, Uladzislau Rezki, Steven Rostedt,
	Mathieu Desnoyers, Lai Jiangshan, Zqiang, FUJITA Tomonori

Atomic pointers are an important building block of synchronization
algorithms such as RCU, hence provide support for them.

Note that instead of relying on atomic_long or the implementation of
`Atomic<usize>`, a new set of helpers (atomic_ptr_*) is introduced
specifically for atomic pointers. This is because a pointer-to-integer
cast loses the provenance of the pointer, and even though in theory a
few tricks could restore it, the implementation is simpler if C provides
atomic pointers directly. The side effects of this approach are that
arithmetic and logical operations on pointers are not available yet, and
that the current implementation only works on ARCH_SUPPORTS_ATOMIC_RMW
architectures; these are implementation issues and can be addressed
later.

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 rust/helpers/atomic_ext.c            |  3 +++
 rust/kernel/sync/atomic.rs           | 12 +++++++++++-
 rust/kernel/sync/atomic/internal.rs  | 21 +++++++++++++++------
 rust/kernel/sync/atomic/predefine.rs | 23 +++++++++++++++++++++++
 4 files changed, 52 insertions(+), 7 deletions(-)

diff --git a/rust/helpers/atomic_ext.c b/rust/helpers/atomic_ext.c
index 240218e2e708..c267d5190529 100644
--- a/rust/helpers/atomic_ext.c
+++ b/rust/helpers/atomic_ext.c
@@ -36,6 +36,7 @@ __rust_helper void rust_helper_atomic_##tname##_set_release(type *ptr, type val)
 
 GEN_READ_SET_HELPERS(i8, s8)
 GEN_READ_SET_HELPERS(i16, s16)
+GEN_READ_SET_HELPERS(ptr, const void *)
 
 /*
  * xchg helpers depend on ARCH_SUPPORTS_ATOMIC_RMW and on the
@@ -59,6 +60,7 @@ rust_helper_atomic_##tname##_xchg##suffix(type *ptr, type new)			\
 
 GEN_XCHG_HELPERS(i8, s8)
 GEN_XCHG_HELPERS(i16, s16)
+GEN_XCHG_HELPERS(ptr, const void *)
 
 /*
  * try_cmpxchg helpers depend on ARCH_SUPPORTS_ATOMIC_RMW and on the
@@ -82,3 +84,4 @@ rust_helper_atomic_##tname##_try_cmpxchg##suffix(type *ptr, type *old, type new)
 
 GEN_TRY_CMPXCHG_HELPERS(i8, s8)
 GEN_TRY_CMPXCHG_HELPERS(i16, s16)
+GEN_TRY_CMPXCHG_HELPERS(ptr, const void *)
diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index 4aebeacb961a..4d2a5228c2e4 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -51,6 +51,10 @@
 #[repr(transparent)]
 pub struct Atomic<T: AtomicType>(AtomicRepr<T::Repr>);
 
+// SAFETY: `Atomic<T>` is safe to transfer between execution contexts because of the safety
+// requirement of `AtomicType`.
+unsafe impl<T: AtomicType> Send for Atomic<T> {}
+
 // SAFETY: `Atomic<T>` is safe to share among execution contexts because all accesses are atomic.
 unsafe impl<T: AtomicType> Sync for Atomic<T> {}
 
@@ -68,6 +72,11 @@ unsafe impl<T: AtomicType> Sync for Atomic<T> {}
 ///
 /// - [`Self`] must have the same size and alignment as [`Self::Repr`].
 /// - [`Self`] must be [round-trip transmutable] to  [`Self::Repr`].
+/// - [`Self`] must be safe to transfer between execution contexts; if it is [`Send`], this is
+///   automatically satisfied. The exception is pointer types (e.g. raw pointers and
+///   [`NonNull<T>`]) that are marked `!Send` yet require `unsafe` to do anything meaningful
+///   with them: transferring pointer values between execution contexts is safe as long as the
+///   actual `unsafe` dereference is justified.
 ///
 /// Note that this is more relaxed than requiring the bi-directional transmutability (i.e.
 /// [`transmute()`] is always sound between `U` and `T`) because of the support for atomic
@@ -108,7 +117,8 @@ unsafe impl<T: AtomicType> Sync for Atomic<T> {}
 /// [`transmute()`]: core::mem::transmute
 /// [round-trip transmutable]: AtomicType#round-trip-transmutability
 /// [Examples]: AtomicType#examples
-pub unsafe trait AtomicType: Sized + Send + Copy {
+/// [`NonNull<T>`]: core::ptr::NonNull
+pub unsafe trait AtomicType: Sized + Copy {
     /// The backing atomic implementation type.
     type Repr: AtomicImpl;
 }
diff --git a/rust/kernel/sync/atomic/internal.rs b/rust/kernel/sync/atomic/internal.rs
index 0dac58bca2b3..93f5a7846645 100644
--- a/rust/kernel/sync/atomic/internal.rs
+++ b/rust/kernel/sync/atomic/internal.rs
@@ -7,6 +7,7 @@
 use crate::bindings;
 use crate::macros::paste;
 use core::cell::UnsafeCell;
+use ffi::c_void;
 
 mod private {
     /// Sealed trait marker to disable customized impls on atomic implementation traits.
@@ -14,10 +15,11 @@ pub trait Sealed {}
 }
 
 // The C side supports atomic primitives only for `i32` and `i64` (`atomic_t` and `atomic64_t`),
-// while the Rust side also layers provides atomic support for `i8` and `i16`
-// on top of lower-level C primitives.
+// while the Rust side also provides atomic support for `i8`, `i16` and `*const c_void` on top of
+// lower-level C primitives.
 impl private::Sealed for i8 {}
 impl private::Sealed for i16 {}
+impl private::Sealed for *const c_void {}
 impl private::Sealed for i32 {}
 impl private::Sealed for i64 {}
 
@@ -26,10 +28,10 @@ impl private::Sealed for i64 {}
 /// This trait is sealed, and only types that map directly to the C side atomics
 /// or can be implemented with lower-level C primitives are allowed to implement this:
 ///
-/// - `i8` and `i16` are implemented with lower-level C primitives.
+/// - `i8`, `i16` and `*const c_void` are implemented with lower-level C primitives.
 /// - `i32` map to `atomic_t`
 /// - `i64` map to `atomic64_t`
-pub trait AtomicImpl: Sized + Send + Copy + private::Sealed {
+pub trait AtomicImpl: Sized + Copy + private::Sealed {
     /// The type of the delta in arithmetic or logical operations.
     ///
     /// For example, in `atomic_add(ptr, v)`, it's the type of `v`. Usually it's the same type of
@@ -51,6 +53,13 @@ impl AtomicImpl for i16 {
     type Delta = Self;
 }
 
+// The current load/store helpers use `{WRITE,READ}_ONCE()`, hence atomicity against
+// read-modify-write operations is only guaranteed if the architecture supports native atomic RMW.
+#[cfg(CONFIG_ARCH_SUPPORTS_ATOMIC_RMW)]
+impl AtomicImpl for *const c_void {
+    type Delta = isize;
+}
+
 // `atomic_t` implements atomic operations on `i32`.
 impl AtomicImpl for i32 {
     type Delta = Self;
@@ -262,7 +271,7 @@ macro_rules! declare_and_impl_atomic_methods {
 }
 
 declare_and_impl_atomic_methods!(
-    [ i8 => atomic_i8, i16 => atomic_i16, i32 => atomic, i64 => atomic64 ]
+    [ i8 => atomic_i8, i16 => atomic_i16, *const c_void => atomic_ptr, i32 => atomic, i64 => atomic64 ]
     /// Basic atomic operations
     pub trait AtomicBasicOps {
         /// Atomic read (load).
@@ -280,7 +289,7 @@ fn set[release](a: &AtomicRepr<Self>, v: Self) {
 );
 
 declare_and_impl_atomic_methods!(
-    [ i8 => atomic_i8, i16 => atomic_i16, i32 => atomic, i64 => atomic64 ]
+    [ i8 => atomic_i8, i16 => atomic_i16, *const c_void => atomic_ptr, i32 => atomic, i64 => atomic64 ]
     /// Exchange and compare-and-exchange atomic operations
     pub trait AtomicExchangeOps {
         /// Atomic exchange.
diff --git a/rust/kernel/sync/atomic/predefine.rs b/rust/kernel/sync/atomic/predefine.rs
index 42067c6a266c..1a4670d225b5 100644
--- a/rust/kernel/sync/atomic/predefine.rs
+++ b/rust/kernel/sync/atomic/predefine.rs
@@ -4,6 +4,7 @@
 
 use crate::static_assert;
 use core::mem::{align_of, size_of};
+use ffi::c_void;
 
 // Ensure size and alignment requirements are checked.
 static_assert!(size_of::<bool>() == size_of::<i8>());
@@ -28,6 +29,16 @@ unsafe impl super::AtomicType for i16 {
     type Repr = i16;
 }
 
+// SAFETY:
+//
+// - `*mut T` has the same size and alignment as `*const c_void`, and is round-trip
+//   transmutable to `*const c_void`.
+// - `*mut T` is safe to transfer between execution contexts. See the safety requirement of
+//   [`AtomicType`].
+unsafe impl<T: Sized> super::AtomicType for *mut T {
+    type Repr = *const c_void;
+}
+
 // SAFETY: `i32` has the same size and alignment with itself, and is round-trip transmutable to
 // itself.
 unsafe impl super::AtomicType for i32 {
@@ -215,4 +226,16 @@ fn atomic_bool_tests() {
         assert_eq!(false, x.load(Relaxed));
         assert_eq!(Ok(false), x.cmpxchg(false, true, Full));
     }
+
+    #[test]
+    fn atomic_ptr_tests() {
+        let mut v = 42;
+        let mut u = 43;
+        let x = Atomic::new(&raw mut v);
+
+        assert_eq!(x.load(Acquire), &raw mut v);
+        assert_eq!(x.cmpxchg(&raw mut u, &raw mut u, Relaxed), Err(&raw mut v));
+        assert_eq!(x.cmpxchg(&raw mut v, &raw mut u, Relaxed), Ok(&raw mut v));
+        assert_eq!(x.load(Relaxed), &raw mut u);
+    }
 }
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 5/5] rust: sync: rcu: Add RCU protected pointer
  2026-01-17 12:22 [PATCH 0/5] rust: sync: Atomic pointer and RCU Boqun Feng
                   ` (3 preceding siblings ...)
  2026-01-17 12:22 ` [PATCH 4/5] rust: sync: atomic: Add Atomic<*mut T> support Boqun Feng
@ 2026-01-17 12:22 ` Boqun Feng
  2026-01-18  8:28   ` Dirk Behme
  4 siblings, 1 reply; 20+ messages in thread
From: Boqun Feng @ 2026-01-17 12:22 UTC (permalink / raw)
  To: rust-for-linux, linux-kernel, rcu
  Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
	Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
	Danilo Krummrich, Will Deacon, Peter Zijlstra, Mark Rutland,
	Paul E. McKenney, Frederic Weisbecker, Neeraj Upadhyay,
	Joel Fernandes, Josh Triplett, Uladzislau Rezki, Steven Rostedt,
	Mathieu Desnoyers, Lai Jiangshan, Zqiang, FUJITA Tomonori

An RCU protected pointer is an atomic pointer that can be loaded and
dereferenced by multiple RCU readers, while only one updater/writer can
change the value (usually following a read-copy-update pattern).

This is useful where data is read-mostly. The rationale of this patch is
to provide a proof of concept of how RCU should be exposed to the Rust
world; it also serves as an example of atomic usage.

Similar mechanisms like ArcSwap [1] are already widely used.

Provide a `Rcu<P>` type with an atomic pointer implementation. `P` has
to be a `ForeignOwnable`, which means the ownership of an object can be
represented by a pointer-sized value.

`Rcu::dereference()` requires an RCU Guard, which means dereferencing is
only valid under RCU read lock protection.

`Rcu::copy_update()` is the operation for updaters; it requires a
`Pin<&mut Self>` for exclusive access, since RCU updaters are normally
exclusive with each other.
A lot of RCU functionality, including asynchronous freeing (call_rcu()
and kfree_rcu()), is still missing and is left as future work.

Also, we still need language changes like field projection [2] to
provide better ergonomics.

Acknowledgment: this work is based on a lot of productive discussions
and hard work from others, these are the ones I can remember (sorry if I
forgot your contribution):

* Wedson started the work on RCU field projection and Benno followed it
  up and had been working on it as a more general language feature.
  Also, Gary's field-projection repo [3] has been used as an example for
  related discussions.

* During Kangrejos 2023 [4], Gary, Benno and Alice provided a lot of
  feedback on the talk from Paul and me: "If you want to use RCU in
  Rust for Linux kernel..."

* During a recent discussion among Benno, Paul and me, Benno suggested
  using `Pin<&mut>` to guarantee exclusive access for updater
  operations.

Link: https://crates.io/crates/arc-swap [1]
Link: https://rust-lang.zulipchat.com/#narrow/channel/213817-t-lang/topic/Field.20Projections/near/474648059 [2]
Link: https://github.com/nbdd0121/field-projection [3]
Link: https://kangrejos.com/2023 [4]
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 rust/kernel/sync/rcu.rs | 326 +++++++++++++++++++++++++++++++++++++++-
 1 file changed, 325 insertions(+), 1 deletion(-)

diff --git a/rust/kernel/sync/rcu.rs b/rust/kernel/sync/rcu.rs
index a32bef6e490b..28bbccaa2e5e 100644
--- a/rust/kernel/sync/rcu.rs
+++ b/rust/kernel/sync/rcu.rs
@@ -4,7 +4,23 @@
 //!
 //! C header: [`include/linux/rcupdate.h`](srctree/include/linux/rcupdate.h)
 
-use crate::{bindings, types::NotThreadSafe};
+use crate::bindings;
+use crate::{
+    sync::atomic::{
+        Atomic,
+        Relaxed,
+        Release, //
+    },
+    types::{
+        ForeignOwnable,
+        NotThreadSafe, //
+    },
+};
+use core::{
+    marker::PhantomData,
+    pin::Pin,
+    ptr::NonNull, //
+};
 
 /// Evidence that the RCU read side lock is held on the current thread/CPU.
 ///
@@ -50,3 +66,311 @@ fn drop(&mut self) {
 pub fn read_lock() -> Guard {
     Guard::new()
 }
+
+use crate::types::Opaque;
+
+/// A temporary `UnsafePinned` [1] that provides a way to opt out of the usual aliasing rules
+/// for mutable references.
+///
+/// # Invariants
+///
+/// `self.0` is always properly initialized.
+///
+/// [1]: https://doc.rust-lang.org/std/pin/struct.UnsafePinned.html
+struct UnsafePinned<T>(Opaque<T>);
+
+impl<T> UnsafePinned<T> {
+    const fn new(value: T) -> Self {
+        // INVARIANTS: `value` is initialized.
+        Self(Opaque::new(value))
+    }
+
+    const fn get(&self) -> *mut T {
+        self.0.get()
+    }
+}
+
+// SAFETY: `UnsafePinned` is safe to transfer between execution contexts as long as `T` is `Send`.
+unsafe impl<T: Send> Send for UnsafePinned<T> {}
+// SAFETY: `UnsafePinned` is safe to share between execution contexts as long as `T` is `Sync`.
+unsafe impl<T: Sync> Sync for UnsafePinned<T> {}
+
+/// An RCU protected pointer; the pointed-to object is protected by RCU.
+///
+/// # Invariants
+///
+/// Either the pointer is null, or it points to a return value of
+/// [`ForeignOwnable::into_foreign()`] and the atomic variable exclusively owns the pointer.
+pub struct Rcu<P: ForeignOwnable>(
+    UnsafePinned<Atomic<*mut crate::ffi::c_void>>,
+    PhantomData<P>,
+);
+
+/// A pointer that has been unpublished, but hasn't waited for a grace period yet.
+///
+/// The pointed-to object may still have existing RCU readers, therefore a grace period is needed
+/// before freeing the object.
+///
+/// # Invariants
+///
+/// The pointer has to be a return value of [`ForeignOwnable::into_foreign`] and [`Self`]
+/// exclusively owns the pointer.
+pub struct RcuOld<P: ForeignOwnable>(NonNull<crate::ffi::c_void>, PhantomData<P>);
+
+impl<P: ForeignOwnable> Drop for RcuOld<P> {
+    fn drop(&mut self) {
+        // SAFETY: As long as called in a sleepable context, which should be checked by klint,
+        // `synchronize_rcu()` is safe to call.
+        unsafe {
+            bindings::synchronize_rcu();
+        }
+
+        // SAFETY: `self.0` is a return value of `P::into_foreign()`, so it's safe to call
+        // `from_foreign()` on it. Plus, the above `synchronize_rcu()` guarantees no existing
+        // `ForeignOwnable::borrow()` anymore.
+        let p: P = unsafe { P::from_foreign(self.0.as_ptr()) };
+        drop(p);
+    }
+}
+
+impl<P: ForeignOwnable> Rcu<P> {
+    /// Creates a new RCU pointer.
+    pub fn new(p: P) -> Self {
+        // INVARIANTS: The return value of `p.into_foreign()` is directly stored in the atomic
+        // variable.
+        Self(
+            UnsafePinned::new(Atomic::new(p.into_foreign())),
+            PhantomData,
+        )
+    }
+
+    fn as_atomic(&self) -> &Atomic<*mut crate::ffi::c_void> {
+        // SAFETY: Per the type invariants of `UnsafePinned`, `self.0.get()` points to an
+        // initialized `Atomic`.
+        unsafe { &*self.0.get() }
+    }
+
+    fn as_atomic_mut_pinned(self: Pin<&mut Self>) -> &Atomic<*mut crate::ffi::c_void> {
+        self.into_ref().get_ref().as_atomic()
+    }
+
+    /// Dereferences the protected object.
+    ///
+    /// Returns `Some(b)`, where `b` is a reference-like borrowed type, if the pointer is not null,
+    /// otherwise returns `None`.
+    ///
+    /// # Examples
+    ///
+    /// ```rust
+    /// # use kernel::alloc::{flags, KBox};
+    /// use kernel::sync::rcu::{self, Rcu};
+    ///
+    /// let x = Rcu::new(KBox::new(100i32, flags::GFP_KERNEL)?);
+    ///
+    /// let g = rcu::read_lock();
+    /// // Read in under RCU read lock protection.
+    /// let v = x.dereference(&g);
+    ///
+    /// assert_eq!(v, Some(&100i32));
+    ///
+    /// # Ok::<(), Error>(())
+    /// ```
+    ///
+    /// Note that the borrowed access can outlive the reference to the [`Rcu<P>`]: as long as
+    /// the RCU read lock is held, the pointed-to object remains valid.
+    ///
+    /// In the following case, the main thread is responsible for the ownership of `shared`,
+    /// i.e. it will drop it eventually, and a work item can temporarily access `shared` via
+    /// `cloned`, but the use of the dereferenced object doesn't depend on `cloned` still existing.
+    ///
+    /// ```rust
+    /// # use kernel::alloc::{flags, KBox};
+    /// # use kernel::workqueue::system;
+    /// # use kernel::sync::{Arc, atomic::{Atomic, Acquire, Release}};
+    /// use kernel::sync::rcu::{self, Rcu};
+    ///
+    /// struct Config {
+    ///     a: i32,
+    ///     b: i32,
+    ///     c: i32,
+    /// }
+    ///
+    /// let config = KBox::new(Config { a: 1, b: 2, c: 3 }, flags::GFP_KERNEL)?;
+    ///
+    /// let shared = Arc::new(Rcu::new(config), flags::GFP_KERNEL)?;
+    /// let cloned = shared.clone();
+    ///
+    /// // Use atomic to simulate a special refcounting.
+    /// static FLAG: Atomic<i32> = Atomic::new(0);
+    ///
+    /// system().try_spawn(flags::GFP_KERNEL, move || {
+    ///     let g = rcu::read_lock();
+    ///     let v = cloned.dereference(&g).unwrap();
+    ///     drop(cloned); // release reference to `shared`.
+    ///     FLAG.store(1, Release);
+    ///
+    ///     // but still need to access `v`.
+    ///     assert_eq!(v.a, 1);
+    ///     drop(g);
+    /// });
+    ///
+    /// // Wait until `cloned` dropped.
+    /// while FLAG.load(Acquire) == 0 {
+    ///     // SAFETY: Sleep should be safe.
+    ///     unsafe { kernel::bindings::schedule(); }
+    /// }
+    ///
+    /// drop(shared);
+    ///
+    /// # Ok::<(), Error>(())
+    /// ```
+    pub fn dereference<'rcu>(&self, _rcu_guard: &'rcu Guard) -> Option<P::Borrowed<'rcu>> {
+        // Ordering: Address dependency pairs with the `store(Release)` in copy_update().
+        let ptr = self.as_atomic().load(Relaxed);
+
+        if !ptr.is_null() {
+            // SAFETY:
+            // - Since `ptr` is not null, it has to be a return value of `P::into_foreign()`.
+            // - The returned `Borrowed<'rcu>` cannot outlive the RCU Guard, which guarantees the
+            //   return value will only be used under RCU read lock, and the RCU read lock prevents
+            //   a grace period that the drop of `RcuOld` or `Rcu` is waiting for from completing,
+            //   therefore no `from_foreign()` will be called for `ptr` as long as `Borrowed` exists.
+            //
+            //      CPU 0                                       CPU 1
+            //      =====                                       =====
+            //      { `x` is a reference to Rcu<Box<i32>> }
+            //      let g = rcu::read_lock();
+            //
+            //      if let Some(b) = x.dereference(&g) {
+            //      // drop(g); cannot be done, since `b` is still alive.
+            //
+            //                                              if let Some(old) = x.replace(...) {
+            //                                                  // `x` is null now.
+            //          println!("{}", b);
+            //      }
+            //                                                  drop(old):
+            //                                                    synchronize_rcu();
+            //      drop(g);
+            //                                                    // a grace period passed.
+            //                                                    // No `Borrowed` exists now.
+            //                                                    from_foreign(...);
+            //                                              }
+            Some(unsafe { P::borrow(ptr) })
+        } else {
+            None
+        }
+    }
+
+    /// Read, copy and update the pointer with a new value.
+    ///
+    /// Returns `None` if the pointer's old value is null, otherwise returns `Some(old)`, where old
+    /// is a [`RcuOld`] which can be used to free the old object eventually.
+    ///
+    /// The `Pin<&mut Self>` is needed because this function needs exclusive access to
+    /// [`Rcu<P>`], otherwise two `copy_update()`s may get the same old object and cause a double
+    /// free. Using `Pin<&mut Self>` lets the type system enforce the exclusive access that the C
+    /// side requires.
+    ///
+    /// Also this has to be `Pin` because a `&mut Self` would allow users to call `swap()` safely,
+    /// which would break the atomicity. A [`Rcu<P>`] should be structurally pinned in the struct
+    /// that contains it.
+    ///
+    /// Note that `Pin<&mut Self>` cannot assume noalias on `self.0` here because `self.0` is an
+    /// [`UnsafePinned`].
+    ///
+    /// [`UnsafePinned`]: https://doc.rust-lang.org/std/pin/struct.UnsafePinned.html
+    pub fn copy_update<F>(self: Pin<&mut Self>, f: F) -> Option<RcuOld<P>>
+    where
+        F: FnOnce(Option<P::Borrowed<'_>>) -> Option<P>,
+    {
+        let inner = self.as_atomic_mut_pinned();
+
+        // step 1: COPY, or more generally, initializing `new` based on `old`.
+        // Ordering: Address dependency pairs with the `store(Release)` in copy_update().
+        let old_ptr = NonNull::new(inner.load(Relaxed));
+
+        let old = old_ptr.map(|nonnull| {
+            // SAFETY: Per type invariants `old_ptr` has to be a value returned by a previous
+            // `into_foreign()`, and the exclusive reference `self` guarantees that `from_foreign()`
+            // has not been called.
+            unsafe { P::borrow(nonnull.as_ptr()) }
+        });
+
+        let new = f(old);
+
+        // step 2: UPDATE.
+        if let Some(new) = new {
+            let new_ptr = new.into_foreign();
+            // Ordering: Pairs with the address dependency in `dereference()` and
+            // `copy_update()`.
+            // INVARIANTS: `new.into_foreign()` is directly stored into the atomic variable.
+            inner.store(new_ptr, Release);
+        } else {
+            // Ordering: Setting to a null pointer doesn't need to be Release.
+            // INVARIANTS: The atomic variable is set to be null.
+            inner.store(core::ptr::null_mut(), Relaxed);
+        }
+
+        // INVARIANTS: The exclusive reference guarantees that the ownership of a previous
+        // `into_foreign()` is transferred to the `RcuOld`.
+        Some(RcuOld(old_ptr?, PhantomData))
+    }
+
+    /// Replaces the pointer with new value.
+    ///
+    /// Returns `None` if the pointer's old value is null, otherwise returns `Some(old)`, where old
+    /// is a [`RcuOld`] which can be used to free the old object eventually.
+    ///
+    /// # Examples
+    ///
+    /// ```rust
+    /// use core::pin::pin;
+    /// # use kernel::alloc::{flags, KBox};
+    /// use kernel::sync::rcu::{self, Rcu};
+    ///
+    /// let mut x = pin!(Rcu::new(KBox::new(100i32, flags::GFP_KERNEL)?));
+    /// let q = KBox::new(101i32, flags::GFP_KERNEL)?;
+    ///
+    /// // Read in under RCU read lock protection.
+    /// let g = rcu::read_lock();
+    /// let v = x.dereference(&g);
+    ///
+    /// // Replace with a new object.
+    /// let old = x.as_mut().replace(q);
+    ///
+    /// assert!(old.is_some());
+    ///
+    /// // `v` should still read the old value.
+    /// assert_eq!(v, Some(&100i32));
+    ///
+    /// // New readers should get the new value.
+    /// assert_eq!(x.dereference(&g), Some(&101i32));
+    ///
+    /// drop(g);
+    ///
+    /// // Can free the object outside the read-side critical section.
+    /// drop(old);
+    /// # Ok::<(), Error>(())
+    /// ```
+    pub fn replace(self: Pin<&mut Self>, new: P) -> Option<RcuOld<P>> {
+        self.copy_update(|_| Some(new))
+    }
+}
+
+impl<P: ForeignOwnable> Drop for Rcu<P> {
+    fn drop(&mut self) {
+        let ptr = self.as_atomic().load(Relaxed);
+        if !ptr.is_null() {
+            // SAFETY: As long as called in a sleepable context, which should be checked by klint,
+            // `synchronize_rcu()` is safe to call.
+            unsafe {
+                bindings::synchronize_rcu();
+            }
+
+            // SAFETY: `self.0` is a return value of `P::into_foreign()`, so it's safe to call
+            // `from_foreign()` on it. Plus, the above `synchronize_rcu()` guarantees no existing
+            // `ForeignOwnable::borrow()` anymore.
+            drop(unsafe { P::from_foreign(ptr) });
+        }
+    }
+}
-- 
2.51.0



* Re: [PATCH 4/5] rust: sync: atomic: Add Atomic<*mut T> support
  2026-01-17 12:22 ` [PATCH 4/5] rust: sync: atomic: Add Atomic<*mut T> support Boqun Feng
@ 2026-01-17 17:03   ` Gary Guo
  2026-01-18  4:19     ` Boqun Feng
  2026-01-18  8:38   ` Dirk Behme
  2026-01-19  3:09   ` FUJITA Tomonori
  2 siblings, 1 reply; 20+ messages in thread
From: Gary Guo @ 2026-01-17 17:03 UTC (permalink / raw)
  To: Boqun Feng, rust-for-linux, linux-kernel, rcu
  Cc: Miguel Ojeda, Gary Guo, Björn Roy Baron, Benno Lossin,
	Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich,
	Will Deacon, Peter Zijlstra, Mark Rutland, Paul E. McKenney,
	Frederic Weisbecker, Neeraj Upadhyay, Joel Fernandes,
	Josh Triplett, Uladzislau Rezki, Steven Rostedt,
	Mathieu Desnoyers, Lai Jiangshan, Zqiang, FUJITA Tomonori

On Sat Jan 17, 2026 at 12:22 PM GMT, Boqun Feng wrote:
> Atomic pointer support is an important piece of synchronization
> algorithm, e.g. RCU, hence provide the support for that.
>
> Note that instead of relying on atomic_long or the implementation of
> `Atomic<usize>`, a new set of helpers (atomic_ptr_*) is introduced for
> atomic pointer specifically, this is because ptr2int casting would
> lose the provenance of a pointer and even though in theory there are a
> few tricks the provenance can be restored, it'll still be a simpler
> implementation if C could provide atomic pointers directly. The side
> effects of this approach are: we don't have the arithmetic and logical
> operations for pointers yet and the current implementation only works
> on ARCH_SUPPORTS_ATOMIC_RMW architectures, but these are implementation
> issues and can be added later.
>
> Signed-off-by: Boqun Feng <boqun.feng@gmail.com>

I am happy that this is now using dedicated helpers for pointers, and not going
through an intermediate integer which can lose provenance.
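The provenance point can be illustrated with a small userland sketch (plain `std` Rust, not kernel code; the function name is illustrative): casting a pointer through `usize` and back only works via "exposed provenance", which the optimizer has to treat conservatively, whereas keeping the value as a pointer avoids the issue entirely.

```rust
// Hypothetical userland sketch of the ptr -> int -> ptr round trip that
// the dedicated pointer helpers avoid.
fn roundtrip() -> i32 {
    let mut v = 7i32;
    let p: *mut i32 = &mut v;
    let bits = p as usize; // provenance of `p` is "exposed" here
    let q = bits as *mut i32; // re-acquires the exposed provenance
    // SAFETY: `q` points to `v`, which is alive and not aliased elsewhere.
    unsafe { *q += 1 };
    v
}

fn main() {
    assert_eq!(roundtrip(), 8);
}
```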

Some feedbacks below, but in general LGTM.

Reviewed-by: Gary Guo <gary@garyguo.net>

> ---
>  rust/helpers/atomic_ext.c            |  3 +++
>  rust/kernel/sync/atomic.rs           | 12 +++++++++++-
>  rust/kernel/sync/atomic/internal.rs  | 21 +++++++++++++++------
>  rust/kernel/sync/atomic/predefine.rs | 23 +++++++++++++++++++++++
>  4 files changed, 52 insertions(+), 7 deletions(-)
>
> diff --git a/rust/helpers/atomic_ext.c b/rust/helpers/atomic_ext.c
> index 240218e2e708..c267d5190529 100644
> --- a/rust/helpers/atomic_ext.c
> +++ b/rust/helpers/atomic_ext.c
> @@ -36,6 +36,7 @@ __rust_helper void rust_helper_atomic_##tname##_set_release(type *ptr, type val)
>  
>  GEN_READ_SET_HELPERS(i8, s8)
>  GEN_READ_SET_HELPERS(i16, s16)
> +GEN_READ_SET_HELPERS(ptr, const void *)
>  
>  /*
>   * xchg helpers depend on ARCH_SUPPORTS_ATOMIC_RMW and on the
> @@ -59,6 +60,7 @@ rust_helper_atomic_##tname##_xchg##suffix(type *ptr, type new)			\
>  
>  GEN_XCHG_HELPERS(i8, s8)
>  GEN_XCHG_HELPERS(i16, s16)
> +GEN_XCHG_HELPERS(ptr, const void *)
>  
>  /*
>   * try_cmpxchg helpers depend on ARCH_SUPPORTS_ATOMIC_RMW and on the
> @@ -82,3 +84,4 @@ rust_helper_atomic_##tname##_try_cmpxchg##suffix(type *ptr, type *old, type new)
>  
>  GEN_TRY_CMPXCHG_HELPERS(i8, s8)
>  GEN_TRY_CMPXCHG_HELPERS(i16, s16)
> +GEN_TRY_CMPXCHG_HELPERS(ptr, const void *)
> diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
> index 4aebeacb961a..4d2a5228c2e4 100644
> --- a/rust/kernel/sync/atomic.rs
> +++ b/rust/kernel/sync/atomic.rs
> @@ -51,6 +51,10 @@
>  #[repr(transparent)]
>  pub struct Atomic<T: AtomicType>(AtomicRepr<T::Repr>);
>  
> +// SAFETY: `Atomic<T>` is safe to transfer between execution contexts because of the safety
> +// requirement of `AtomicType`.
> +unsafe impl<T: AtomicType> Send for Atomic<T> {}
> +
>  // SAFETY: `Atomic<T>` is safe to share among execution contexts because all accesses are atomic.
>  unsafe impl<T: AtomicType> Sync for Atomic<T> {}
>  
> @@ -68,6 +72,11 @@ unsafe impl<T: AtomicType> Sync for Atomic<T> {}
>  ///
>  /// - [`Self`] must have the same size and alignment as [`Self::Repr`].
>  /// - [`Self`] must be [round-trip transmutable] to  [`Self::Repr`].
> +/// - [`Self`] must be safe to transfer between execution contexts, if it's [`Send`], this is
> +///   automatically satisfied. The exception is pointer types that are even though marked as
> +///   `!Send` (e.g. raw pointers and [`NonNull<T>`]) but requiring `unsafe` to do anything
> +///   meaningful on them. This is because transferring pointer values between execution contexts is
> +///   safe as long as the actual `unsafe` dereferencing is justified.

I think the discussion about `Send` on pointers should be moved to the `impl<T>
AtomicType for *mut T` side.

>  ///
>  /// Note that this is more relaxed than requiring the bi-directional transmutability (i.e.
>  /// [`transmute()`] is always sound between `U` and `T`) because of the support for atomic
> @@ -108,7 +117,8 @@ unsafe impl<T: AtomicType> Sync for Atomic<T> {}
>  /// [`transmute()`]: core::mem::transmute
>  /// [round-trip transmutable]: AtomicType#round-trip-transmutability
>  /// [Examples]: AtomicType#examples
> -pub unsafe trait AtomicType: Sized + Send + Copy {
> +/// [`NonNull<T>`]: core::ptr::NonNull
> +pub unsafe trait AtomicType: Sized + Copy {
>      /// The backing atomic implementation type.
>      type Repr: AtomicImpl;
>  }
> diff --git a/rust/kernel/sync/atomic/internal.rs b/rust/kernel/sync/atomic/internal.rs
> index 0dac58bca2b3..93f5a7846645 100644
> --- a/rust/kernel/sync/atomic/internal.rs
> +++ b/rust/kernel/sync/atomic/internal.rs
> @@ -7,6 +7,7 @@
>  use crate::bindings;
>  use crate::macros::paste;
>  use core::cell::UnsafeCell;
> +use ffi::c_void;
>  
>  mod private {
>      /// Sealed trait marker to disable customized impls on atomic implementation traits.
> @@ -14,10 +15,11 @@ pub trait Sealed {}
>  }
>  
>  // The C side supports atomic primitives only for `i32` and `i64` (`atomic_t` and `atomic64_t`),
> -// while the Rust side also layers provides atomic support for `i8` and `i16`
> -// on top of lower-level C primitives.
> +// while the Rust side also provides atomic support for `i8`, `i16` and `*const c_void` on top of
> +// lower-level C primitives.
>  impl private::Sealed for i8 {}
>  impl private::Sealed for i16 {}
> +impl private::Sealed for *const c_void {}
>  impl private::Sealed for i32 {}
>  impl private::Sealed for i64 {}
>  
> @@ -26,10 +28,10 @@ impl private::Sealed for i64 {}
>  /// This trait is sealed, and only types that map directly to the C side atomics
>  /// or can be implemented with lower-level C primitives are allowed to implement this:
>  ///
> -/// - `i8` and `i16` are implemented with lower-level C primitives.
> +/// - `i8`, `i16` and `*const c_void` are implemented with lower-level C primitives.
>  /// - `i32` map to `atomic_t`
>  /// - `i64` map to `atomic64_t`
> -pub trait AtomicImpl: Sized + Send + Copy + private::Sealed {
> +pub trait AtomicImpl: Sized + Copy + private::Sealed {
>      /// The type of the delta in arithmetic or logical operations.
>      ///
>      /// For example, in `atomic_add(ptr, v)`, it's the type of `v`. Usually it's the same type of
> @@ -51,6 +53,13 @@ impl AtomicImpl for i16 {
>      type Delta = Self;
>  }
>  
> +// The current helpers of load/store uses `{WRITE,READ}_ONCE()` hence the atomicity is only
> +// guaranteed against read-modify-write operations if the architecture supports native atomic RmW.
> +#[cfg(CONFIG_ARCH_SUPPORTS_ATOMIC_RMW)]
> +impl AtomicImpl for *const c_void {
> +    type Delta = isize;
> +}
> +
>  // `atomic_t` implements atomic operations on `i32`.
>  impl AtomicImpl for i32 {
>      type Delta = Self;
> @@ -262,7 +271,7 @@ macro_rules! declare_and_impl_atomic_methods {
>  }
>  
>  declare_and_impl_atomic_methods!(
> -    [ i8 => atomic_i8, i16 => atomic_i16, i32 => atomic, i64 => atomic64 ]
> +    [ i8 => atomic_i8, i16 => atomic_i16, *const c_void => atomic_ptr, i32 => atomic, i64 => atomic64 ]
>      /// Basic atomic operations
>      pub trait AtomicBasicOps {
>          /// Atomic read (load).
> @@ -280,7 +289,7 @@ fn set[release](a: &AtomicRepr<Self>, v: Self) {
>  );
>  
>  declare_and_impl_atomic_methods!(
> -    [ i8 => atomic_i8, i16 => atomic_i16, i32 => atomic, i64 => atomic64 ]
> +    [ i8 => atomic_i8, i16 => atomic_i16, *const c_void => atomic_ptr, i32 => atomic, i64 => atomic64 ]
>      /// Exchange and compare-and-exchange atomic operations
>      pub trait AtomicExchangeOps {
>          /// Atomic exchange.
> diff --git a/rust/kernel/sync/atomic/predefine.rs b/rust/kernel/sync/atomic/predefine.rs
> index 42067c6a266c..1a4670d225b5 100644
> --- a/rust/kernel/sync/atomic/predefine.rs
> +++ b/rust/kernel/sync/atomic/predefine.rs
> @@ -4,6 +4,7 @@
>  
>  use crate::static_assert;
>  use core::mem::{align_of, size_of};
> +use ffi::c_void;
>  
>  // Ensure size and alignment requirements are checked.
>  static_assert!(size_of::<bool>() == size_of::<i8>());
> @@ -28,6 +29,16 @@ unsafe impl super::AtomicType for i16 {
>      type Repr = i16;
>  }
>  
> +// SAFETY:
> +//
> +// - `*mut T` has the same size and alignment with `*const c_void`, and is round-trip
> +//   transmutable to `*const c_void`.
> +// - `*mut T` is safe to transfer between execution contexts. See the safety requirement of
> +//   [`AtomicType`].
> +unsafe impl<T: Sized> super::AtomicType for *mut T {
> +    type Repr = *const c_void;
> +}

How about *const T?

> +
>  // SAFETY: `i32` has the same size and alignment with itself, and is round-trip transmutable to
>  // itself.
>  unsafe impl super::AtomicType for i32 {
> @@ -215,4 +226,16 @@ fn atomic_bool_tests() {
>          assert_eq!(false, x.load(Relaxed));
>          assert_eq!(Ok(false), x.cmpxchg(false, true, Full));
>      }
> +
> +    #[test]
> +    fn atomic_ptr_tests() {
> +        let mut v = 42;
> +        let mut u = 43;
> +        let x = Atomic::new(&raw mut v);
> +
> +        assert_eq!(x.load(Acquire), &raw mut v);
> +        assert_eq!(x.cmpxchg(&raw mut u, &raw mut u, Relaxed), Err(&raw mut v));
> +        assert_eq!(x.cmpxchg(&raw mut v, &raw mut u, Relaxed), Ok(&raw mut v));
> +        assert_eq!(x.load(Relaxed), &raw mut u);
> +    }
>  }



* Re: [PATCH 4/5] rust: sync: atomic: Add Atomic<*mut T> support
  2026-01-17 17:03   ` Gary Guo
@ 2026-01-18  4:19     ` Boqun Feng
  2026-01-18 15:39       ` Gary Guo
  2026-01-20 12:37       ` Alice Ryhl
  0 siblings, 2 replies; 20+ messages in thread
From: Boqun Feng @ 2026-01-18  4:19 UTC (permalink / raw)
  To: Gary Guo
  Cc: rust-for-linux, linux-kernel, rcu, Miguel Ojeda,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
	Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra,
	Mark Rutland, Paul E. McKenney, Frederic Weisbecker,
	Neeraj Upadhyay, Joel Fernandes, Josh Triplett, Uladzislau Rezki,
	Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, Zqiang,
	FUJITA Tomonori

On Sat, Jan 17, 2026 at 05:03:15PM +0000, Gary Guo wrote:
> On Sat Jan 17, 2026 at 12:22 PM GMT, Boqun Feng wrote:
> > Atomic pointer support is an important piece of synchronization
> > algorithm, e.g. RCU, hence provide the support for that.
> >
> > Note that instead of relying on atomic_long or the implementation of
> > `Atomic<usize>`, a new set of helpers (atomic_ptr_*) is introduced for
> > atomic pointer specifically, this is because ptr2int casting would
> > lose the provenance of a pointer and even though in theory there are a
> > few tricks the provenance can be restored, it'll still be a simpler
> > implementation if C could provide atomic pointers directly. The side
> > effects of this approach are: we don't have the arithmetic and logical
> > operations for pointers yet and the current implementation only works
> > on ARCH_SUPPORTS_ATOMIC_RMW architectures, but these are implementation
> > issues and can be added later.
> >
> > Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> 
> I am happy that this is now using dedicated helpers for pointers, and not going
> through an intermediate integer which can lose provenance.
> 
> Some feedbacks below, but in general LGTM.
> 
> Reviewed-by: Gary Guo <gary@garyguo.net>
> 

Thanks!

> > ---
> > diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
[...]
> > index 4aebeacb961a..4d2a5228c2e4 100644
> > --- a/rust/kernel/sync/atomic.rs
> > +++ b/rust/kernel/sync/atomic.rs
> > @@ -51,6 +51,10 @@
> >  #[repr(transparent)]
> >  pub struct Atomic<T: AtomicType>(AtomicRepr<T::Repr>);
> >  
> > +// SAFETY: `Atomic<T>` is safe to transfer between execution contexts because of the safety
> > +// requirement of `AtomicType`.
> > +unsafe impl<T: AtomicType> Send for Atomic<T> {}
> > +
> >  // SAFETY: `Atomic<T>` is safe to share among execution contexts because all accesses are atomic.
> >  unsafe impl<T: AtomicType> Sync for Atomic<T> {}
> >  
> > @@ -68,6 +72,11 @@ unsafe impl<T: AtomicType> Sync for Atomic<T> {}
> >  ///
> >  /// - [`Self`] must have the same size and alignment as [`Self::Repr`].
> >  /// - [`Self`] must be [round-trip transmutable] to  [`Self::Repr`].
> > +/// - [`Self`] must be safe to transfer between execution contexts, if it's [`Send`], this is
> > +///   automatically satisfied. The exception is pointer types that are even though marked as
> > +///   `!Send` (e.g. raw pointers and [`NonNull<T>`]) but requiring `unsafe` to do anything
> > +///   meaningful on them. This is because transferring pointer values between execution contexts is
> > +///   safe as long as the actual `unsafe` dereferencing is justified.
> 
> I think the discussion about `Send` on pointers should be moved to the `impl<T>
> AtomicType for *mut T` side.
> 

The reason I put something here was to answer the potential question
"why don't you require AtomicType being a subtrait of Send?", that's
more of a question for people who read about `AtomicType`, so I figured
we need some explanation. But I'm fine if you think we should move some
of the comments to the impl block, or we duplicate some. Although I
don't think the current version is worse. Considering we do:

    /// - [`Self`] must have the same size and alignment as [`Self::Repr`].
    /// - [`Self`] must be [round-trip transmutable] to  [`Self::Repr`].
    /// - [`Self`] must be safe to transfer between execution contexts, if it's [`Send`], this is
    ///   automatically satisfied.

for AtomicType, I'm not sure someone reading about `AtomicType` would have
everything they need to understand why it's not `: Send`.

[...]
> > +// SAFETY:
> > +//
> > +// - `*mut T` has the same size and alignment with `*const c_void`, and is round-trip
> > +//   transmutable to `*const c_void`.
> > +// - `*mut T` is safe to transfer between execution contexts. See the safety requirement of
> > +//   [`AtomicType`].
> > +unsafe impl<T: Sized> super::AtomicType for *mut T {
> > +    type Repr = *const c_void;
> > +}
> 
> How about *const T?
> 

In general I want to avoid const raw pointers since they provide very
little extra compared to mut raw pointers. For compiler optimization,
provenance is more important than the "const vs mut" modifier; for
dereference, it's unsafe anyway and users need to provide reasoning
(including knowing the provenance and what other accesses may happen to
the same address), so I feel the type difference of "*const T" vs
"*mut T" doesn't do anything extra either.

Think about it: in Rust std, there are only two pointer types, and both
map to "*mut T": NonNull<T> (as_ptr() returns a `*mut T`) and
AtomicPtr<T> (as_ptr() returns a `*mut *mut T`). And there is no type
like NonNullConst<T> or AtomicConstPtr<T>. This is a hint to me that we
may not need to support `*const T` in most cases.

But maybe I'm missing something? If you have a good reason, we can
obviously add the support for `*const T`.
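The std analogy above can be demonstrated with a small userland sketch (plain `std`, not kernel code; the function name is illustrative): `AtomicPtr<T>` only speaks `*mut T`, yet a `*const T` value can still be stored by casting mutability, which preserves provenance, unlike an integer cast.

```rust
use std::sync::atomic::{AtomicPtr, Ordering};

// Hypothetical sketch: storing a `*const T` through `AtomicPtr<T>`.
fn load_through_atomic() -> i32 {
    let v = 42i32;
    let p: *const i32 = &v;
    // `cast_mut()` changes only the type, not the provenance.
    let a = AtomicPtr::new(p.cast_mut());
    let loaded: *const i32 = a.load(Ordering::Relaxed);
    // SAFETY: `loaded` points to `v`, which is alive for this scope.
    unsafe { *loaded }
}

fn main() {
    assert_eq!(load_through_atomic(), 42);
}
```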

Regards,
Boqun

> > +
> >  // SAFETY: `i32` has the same size and alignment with itself, and is round-trip transmutable to
> >  // itself.
> >  unsafe impl super::AtomicType for i32 {
> > @@ -215,4 +226,16 @@ fn atomic_bool_tests() {
> >          assert_eq!(false, x.load(Relaxed));
> >          assert_eq!(Ok(false), x.cmpxchg(false, true, Full));
> >      }
> > +
> > +    #[test]
> > +    fn atomic_ptr_tests() {
> > +        let mut v = 42;
> > +        let mut u = 43;
> > +        let x = Atomic::new(&raw mut v);
> > +
> > +        assert_eq!(x.load(Acquire), &raw mut v);
> > +        assert_eq!(x.cmpxchg(&raw mut u, &raw mut u, Relaxed), Err(&raw mut v));
> > +        assert_eq!(x.cmpxchg(&raw mut v, &raw mut u, Relaxed), Ok(&raw mut v));
> > +        assert_eq!(x.load(Relaxed), &raw mut u);
> > +    }
> >  }
> 


* Re: [PATCH 5/5] rust: sync: rcu: Add RCU protected pointer
  2026-01-17 12:22 ` [PATCH 5/5] rust: sync: rcu: Add RCU protected pointer Boqun Feng
@ 2026-01-18  8:28   ` Dirk Behme
  2026-01-19  1:03     ` Boqun Feng
  0 siblings, 1 reply; 20+ messages in thread
From: Dirk Behme @ 2026-01-18  8:28 UTC (permalink / raw)
  To: Boqun Feng, rust-for-linux, linux-kernel, rcu
  Cc: Miguel Ojeda, Gary Guo, Björn Roy Baron, Benno Lossin,
	Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich,
	Will Deacon, Peter Zijlstra, Mark Rutland, Paul E. McKenney,
	Frederic Weisbecker, Neeraj Upadhyay, Joel Fernandes,
	Josh Triplett, Uladzislau Rezki, Steven Rostedt,
	Mathieu Desnoyers, Lai Jiangshan, Zqiang, FUJITA Tomonori

On 17.01.26 13:22, Boqun Feng wrote:
> RCU protected pointers are an atomic pointer that can be loaded and
> dereferenced by multiple RCU readers, but only one updater/writer can
> change the value (following a read-copy-update pattern usually).
> 
> This is useful in the case where data is read-mostly. The rationale of
> this patch is to provide a proof of concept on how RCU should be exposed
> to the Rust world, and it also serves as an example for atomic usage.
> 
> Similar mechanisms like ArcSwap [1] are already widely used.
> 
> Provide a `Rcu<P>` type with an atomic pointer implementation. `P` has
> to be a `ForeignOwnable`, which means the ownership of a object can be
> represented by a pointer-size value.
> 
> `Rcu::dereference()` requires a RCU Guard, which means dereferencing is
> only valid under RCU read lock protection.
> 
> `Rcu::copy_update()` is the operation for updaters, it requires a
> `Pin<&mut Self>` for exclusive accesses, since RCU updaters are normally
> exclusive with each other.
> 
> A lot of RCU functionalities including asynchronously free (call_rcu()
> and kfree_rcu()) are still missing, and will be the future work.
> 
> Also, we still need language changes like field projection [2] to
> provide better ergonomics.
> 
> Acknowledgment: this work is based on a lot of productive discussions
> and hard work from others, these are the ones I can remember (sorry if I
> forgot your contribution):
> 
> * Wedson started the work on RCU field projection and Benno followed it
>   up and had been working on it as a more general language feature.
>   Also, Gary's field-projection repo [3] has been used as an example for
>   related discussions.
> 
> * During Kangrejos 2023 [4], Gary, Benno and Alice provided a lot of
>   feedbacks on the talk from Paul and me: "If you want to use RCU in
>   Rust for Linux kernel..."
> 
> * During a recent discussion among Benno, Paul and me, Benno suggested
>   using `Pin<&mut>` to guarantee the exclusive access on updater
>   operations.
> 
> Link: https://crates.io/crates/arc-swap [1]
> Link: https://rust-lang.zulipchat.com/#narrow/channel/213817-t-lang/topic/Field.20Projections/near/474648059 [2]
> Link: https://github.com/nbdd0121/field-projection [3]
> Link: https://kangrejos.com/2023 [4]
> Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> ---
>  rust/kernel/sync/rcu.rs | 326 +++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 325 insertions(+), 1 deletion(-)
> 
> diff --git a/rust/kernel/sync/rcu.rs b/rust/kernel/sync/rcu.rs
> index a32bef6e490b..28bbccaa2e5e 100644
> --- a/rust/kernel/sync/rcu.rs
> +++ b/rust/kernel/sync/rcu.rs
> @@ -4,7 +4,23 @@
>  //!
>  //! C header: [`include/linux/rcupdate.h`](srctree/include/linux/rcupdate.h)
>  
> -use crate::{bindings, types::NotThreadSafe};
> +use crate::bindings;
> +use crate::{
> +    sync::atomic::{
> +        Atomic,
> +        Relaxed,
> +        Release, //
> +    },
> +    types::{
> +        ForeignOwnable,
> +        NotThreadSafe, //
> +    },
> +};
> +use core::{
> +    marker::PhantomData,
> +    pin::Pin,
> +    ptr::NonNull, //
> +};
>  
>  /// Evidence that the RCU read side lock is held on the current thread/CPU.
>  ///
> @@ -50,3 +66,311 @@ fn drop(&mut self) {
>  pub fn read_lock() -> Guard {
>      Guard::new()
>  }
> +
> +use crate::types::Opaque;
> +
> +/// A temporary `UnsafePinned` [1] that provides a way to opt-out typical alias rules for mutable
> +/// references.
> +///
> +/// # Invariants
> +///
> +/// `self.0` is always properly initialized.
> +///
> +/// [1]: https://doc.rust-lang.org/std/pin/struct.UnsafePinned.html
> +struct UnsafePinned<T>(Opaque<T>);
> +
> +impl<T> UnsafePinned<T> {
> +    const fn new(value: T) -> Self {
> +        // INVARIANTS: `value` is initialized.
> +        Self(Opaque::new(value))
> +    }
> +
> +    const fn get(&self) -> *mut T {
> +        self.0.get()
> +    }
> +}
> +
> +// SAFETY: `UnsafePinned` is safe to transfer between execution contexts as long as `T` is `Send`.
> +unsafe impl<T: Send> Send for UnsafePinned<T> {}
> +// SAFETY: `UnsafePinned` is safe to shared between execution contexts as long as `T` is `Sync`.
> +unsafe impl<T: Sync> Sync for UnsafePinned<T> {}
> +
> +/// An RCU protected pointer, the pointed object is protected by RCU.
> +///
> +/// # Invariants
> +///
> +/// Either the pointer is null, or it points to a return value of
> +/// [`ForeignOwnable::into_foreign()`] and the atomic variable exclusively owns the pointer.
> +pub struct Rcu<P: ForeignOwnable>(
> +    UnsafePinned<Atomic<*mut crate::ffi::c_void>>,
> +    PhantomData<P>,
> +);
> +
> +/// A pointer that has been unpublished, but hasn't waited for a grace period yet.
> +///
> +/// The pointed object may still have an existing RCU reader. Therefore a grace period is needed to
> +/// free the object.
> +///
> +/// # Invariants
> +///
> +/// The pointer has to be a return value of [`ForeignOwnable::into_foreign`] and [`Self`]
> +/// exclusively owns the pointer.
> +pub struct RcuOld<P: ForeignOwnable>(NonNull<crate::ffi::c_void>, PhantomData<P>);
> +
> +impl<P: ForeignOwnable> Drop for RcuOld<P> {
> +    fn drop(&mut self) {
> +        // SAFETY: As long as called in a sleepable context, which should be checked by klint,
> +        // `synchronize_rcu()` is safe to call.
> +        unsafe {
> +            bindings::synchronize_rcu();
> +        }
> +
> +        // SAFETY: `self.0` is a return value of `P::into_foreign()`, so it's safe to call
> +        // `from_foreign()` on it. Plus, the above `synchronize_rcu()` guarantees no existing
> +        // `ForeignOwnable::borrow()` anymore.
> +        let p: P = unsafe { P::from_foreign(self.0.as_ptr()) };
> +        drop(p);
> +    }
> +}
> +
> +impl<P: ForeignOwnable> Rcu<P> {
> +    /// Creates a new RCU pointer.
> +    pub fn new(p: P) -> Self {
> +        // INVARIANTS: The return value of `p.into_foreign()` is directly stored in the atomic
> +        // variable.
> +        Self(
> +            UnsafePinned::new(Atomic::new(p.into_foreign())),
> +            PhantomData,
> +        )
> +    }
> +
> +    fn as_atomic(&self) -> &Atomic<*mut crate::ffi::c_void> {
> +        // SAFETY: Per type invariants of `UnsafePinned`, `self.0.get()` points to an initialized
> +        // `&Atomic`.
> +        unsafe { &*self.0.get() }
> +    }
> +
> +    fn as_atomic_mut_pinned(self: Pin<&mut Self>) -> &Atomic<*mut crate::ffi::c_void> {
> +        self.into_ref().get_ref().as_atomic()
> +    }
> +
> +    /// Dereferences the protected object.
> +    ///
> +    /// Returns `Some(b)`, where `b` is a reference-like borrowed type, if the pointer is not null,
> +    /// otherwise returns `None`.
> +    ///
> +    /// # Examples
> +    ///
> +    /// ```rust
> +    /// # use kernel::alloc::{flags, KBox};
> +    /// use kernel::sync::rcu::{self, Rcu};
> +    ///
> +    /// let x = Rcu::new(KBox::new(100i32, flags::GFP_KERNEL)?);
> +    ///
> +    /// let g = rcu::read_lock();
> +    /// // Read in under RCU read lock protection.
> +    /// let v = x.dereference(&g);
> +    ///
> +    /// assert_eq!(v, Some(&100i32));
> +    ///
> +    /// # Ok::<(), Error>(())
> +    /// ```
> +    ///
> +    /// Note the borrowed access can outlive the reference of the [`Rcu<P>`], this is because as
> +    /// long as the RCU read lock is held, the pointed object should remain valid.
> +    ///
> +    /// In the following case, the main thread is responsible for the ownership of `shared`, i.e. it
> +    /// will drop it eventually, and a work item can temporarily access the `shared` via `cloned`,
> +    /// but the use of the dereferenced object doesn't depend on `cloned`'s existence.
> +    ///
> +    /// ```rust
> +    /// # use kernel::alloc::{flags, KBox};
> +    /// # use kernel::workqueue::system;
> +    /// # use kernel::sync::{Arc, atomic::{Atomic, Acquire, Release}};
> +    /// use kernel::sync::rcu::{self, Rcu};
> +    ///
> +    /// struct Config {
> +    ///     a: i32,
> +    ///     b: i32,
> +    ///     c: i32,
> +    /// }
> +    ///
> +    /// let config = KBox::new(Config { a: 1, b: 2, c: 3 }, flags::GFP_KERNEL)?;
> +    ///
> +    /// let shared = Arc::new(Rcu::new(config), flags::GFP_KERNEL)?;
> +    /// let cloned = shared.clone();
> +    ///
> +    /// // Use atomic to simulate a special refcounting.
> +    /// static FLAG: Atomic<i32> = Atomic::new(0);
> +    ///
> +    /// system().try_spawn(flags::GFP_KERNEL, move || {
> +    ///     let g = rcu::read_lock();
> +    ///     let v = cloned.dereference(&g).unwrap();
> +    ///     drop(cloned); // release reference to `shared`.
> +    ///     FLAG.store(1, Release);
> +    ///
> +    ///     // but still need to access `v`.
> +    ///     assert_eq!(v.a, 1);
> +    ///     drop(g);
> +    /// });
> +    ///
> +    /// // Wait until `cloned` is dropped.
> +    /// while FLAG.load(Acquire) == 0 {
> +    ///     // SAFETY: Sleep should be safe.
> +    ///     unsafe { kernel::bindings::schedule(); }
> +    /// }
> +    ///
> +    /// drop(shared);
> +    ///
> +    /// # Ok::<(), Error>(())
> +    /// ```
> +    pub fn dereference<'rcu>(&self, _rcu_guard: &'rcu Guard) -> Option<P::Borrowed<'rcu>> {
> +        // Ordering: Address dependency pairs with the `store(Release)` in copy_update().
> +        let ptr = self.as_atomic().load(Relaxed);
> +
> +        if !ptr.is_null() {
> +            // SAFETY:


Would it be an option to take an early return here and with this drop
one indentation level of the larger `SAFETY` comment?

if ptr.is_null() {
   return None;
}

// SAFETY:
...
Some(unsafe { P::borrow(ptr) })

Same for `Drop for Rcu<P>` below.


> +            // - Since `ptr` is not null, it has to be a return value of `P::into_foreign()`.
> +            // - The returned `Borrowed<'rcu>` cannot outlive the RCU Guar, this guarantees the

Guar -> Guard

> +            //   return value will only be used under RCU read lock, and the RCU read lock prevents
> +            //   the pass of a grace period that the drop of `RcuOld` or `Rcu` is waiting for,
> +            //   therefore no `from_foreign()` will be called for `ptr` as long as `Borrowed` exists.
> +            //
> +            //      CPU 0                                       CPU 1
> +            //      =====                                       =====
> +            //      { `x` is a reference to Rcu<Box<i32>> }
> +            //      let g = rcu::read_lock();
> +            //
> +            //      if let Some(b) = x.dereference(&g) {
> +            //      // drop(g); cannot be done, since `b` is still alive.
> +            //
> +            //                                              if let Some(old) = x.replace(...) {
> +            //                                                  // `x` is null now.
> +            //          println!("{}", b);
> +            //      }
> +            //                                                  drop(old):
> +            //                                                    synchronize_rcu();
> +            //      drop(g);
> +            //                                                    // a grace period passed.
> +            //                                                    // No `Borrowed` exists now.
> +            //                                                    from_foreign(...);
> +            //                                              }
> +            Some(unsafe { P::borrow(ptr) })
> +        } else {
> +            None
> +        }
> +    }
> +
> +    /// Read, copy and update the pointer with new value.
> +    ///
> +    /// Returns `None` if the pointer's old value is null, otherwise returns `Some(old)`, where old
> +    /// is a [`RcuOld`] which can be used to free the old object eventually.
> +    ///
> +    /// The `Pin<&mut Self>` is needed because this function needs exclusive access to
> +    /// [`Rcu<P>`]; otherwise two `copy_update()`s may get the same old object and double-free
> +    /// it. Using `Pin<&mut Self>` provides the exclusive access that the C side requires,
> +    /// enforced by the type system.
> +    ///
> +    /// Also this has to be `Pin` because a `&mut Self` would allow users to `swap()` safely,
> +    /// which would break the atomicity. A [`Rcu<P>`] should be structurally pinned in the
> +    /// struct that contains it.
> +    ///
> +    /// Note that `Pin<&mut Self>` cannot assume noalias on `self.0` here, because `self.0` is an
> +    /// [`UnsafePinned`].
> +    ///
> +    /// [`UnsafePinned`]: https://doc.rust-lang.org/std/pin/struct.UnsafePinned.html
> +    pub fn copy_update<F>(self: Pin<&mut Self>, f: F) -> Option<RcuOld<P>>
> +    where
> +        F: FnOnce(Option<P::Borrowed<'_>>) -> Option<P>,
> +    {
> +        let inner = self.as_atomic_mut_pinned();
> +
> +        // step 1: COPY, or more generally, initializing `new` based on `old`.
> +        // Ordering: Address dependency pairs with the `store(Release)` in copy_update().
> +        let old_ptr = NonNull::new(inner.load(Relaxed));
> +
> +        let old = old_ptr.map(|nonnull| {
> +            // SAFETY: Per type invariants `old_ptr` has to be a value return by a previous
> +            // `into_foreign()`, and the exclusive reference `self` guarantees that `from_foreign()`
> +            // has not been called.
> +            unsafe { P::borrow(nonnull.as_ptr()) }
> +        });
> +
> +        let new = f(old);
> +
> +        // step 2: UPDATE.
> +        if let Some(new) = new {
> +            let new_ptr = new.into_foreign();
> +            // Ordering: Pairs with the address dependency in `dereference()` and
> +            // `copy_update()`.
> +            // INVARIANTS: `new.into_foreign()` is directly stored into the atomic variable.
> +            inner.store(new_ptr, Release);
> +        } else {
> +            // Ordering: Setting to a null pointer doesn't need to be Release.
> +            // INVARIANTS: The atomic variable is set to be null.
> +            inner.store(core::ptr::null_mut(), Relaxed);
> +        }
> +
> +        // INVARIANTS: The exclusive reference guarantees that the ownership of a previous
> +        // `into_foreign()` is transferred to the `RcuOld`.
> +        Some(RcuOld(old_ptr?, PhantomData))
> +    }
> +
> +    /// Replaces the pointer with new value.
> +    ///
> +    /// Returns `None` if the pointer's old value is null, otherwise returns `Some(old)`, where old
> +    /// is a [`RcuOld`] which can be used to free the old object eventually.
> +    ///
> +    /// # Examples
> +    ///
> +    /// ```rust
> +    /// use core::pin::pin;
> +    /// # use kernel::alloc::{flags, KBox};
> +    /// use kernel::sync::rcu::{self, Rcu};
> +    ///
> +    /// let mut x = pin!(Rcu::new(KBox::new(100i32, flags::GFP_KERNEL)?));
> +    /// let q = KBox::new(101i32, flags::GFP_KERNEL)?;
> +    ///
> +    /// // Read under RCU read lock protection.
> +    /// let g = rcu::read_lock();
> +    /// let v = x.dereference(&g);
> +    ///
> +    /// // Replace with a new object.
> +    /// let old = x.as_mut().replace(q);
> +    ///
> +    /// assert!(old.is_some());
> +    ///
> +    /// // `v` should still read the old value.
> +    /// assert_eq!(v, Some(&100i32));
> +    ///
> +    /// // New readers should get the new value.
> +    /// assert_eq!(x.dereference(&g), Some(&101i32));
> +    ///
> +    /// drop(g);
> +    ///
> +    /// // Can free the object outside the read-side critical section.
> +    /// drop(old);
> +    /// # Ok::<(), Error>(())
> +    /// ```
> +    pub fn replace(self: Pin<&mut Self>, new: P) -> Option<RcuOld<P>> {
> +        self.copy_update(|_| Some(new))
> +    }
> +}
> +
> +impl<P: ForeignOwnable> Drop for Rcu<P> {
> +    fn drop(&mut self) {
> +        let ptr = self.as_atomic().load(Relaxed);
> +        if !ptr.is_null() {
> +            // SAFETY: As long as called in a sleepable context, which should be checked by klint,
> +            // `synchronize_rcu()` is safe to call.
> +            unsafe {
> +                bindings::synchronize_rcu();
> +            }
> +
> +            // SAFETY: `self.0` is a return value of `P::into_foreign()`, so it's safe to call
> +            // `from_foreign()` on it. Plus, the above `synchronize_rcu()` guarantees no existing
> +            // `ForeignOwnable::borrow()` anymore.
> +            drop(unsafe { P::from_foreign(ptr) });
> +        }
> +    }
> +}


Best regards

Dirk

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 4/5] rust: sync: atomic: Add Atomic<*mut T> support
  2026-01-17 12:22 ` [PATCH 4/5] rust: sync: atomic: Add Atomic<*mut T> support Boqun Feng
  2026-01-17 17:03   ` Gary Guo
@ 2026-01-18  8:38   ` Dirk Behme
  2026-01-18 14:57     ` Boqun Feng
  2026-01-19  3:09   ` FUJITA Tomonori
  2 siblings, 1 reply; 20+ messages in thread
From: Dirk Behme @ 2026-01-18  8:38 UTC (permalink / raw)
  To: Boqun Feng, rust-for-linux, linux-kernel, rcu
  Cc: Miguel Ojeda, Gary Guo, Björn Roy Baron, Benno Lossin,
	Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich,
	Will Deacon, Peter Zijlstra, Mark Rutland, Paul E. McKenney,
	Frederic Weisbecker, Neeraj Upadhyay, Joel Fernandes,
	Josh Triplett, Uladzislau Rezki, Steven Rostedt,
	Mathieu Desnoyers, Lai Jiangshan, Zqiang, FUJITA Tomonori

On 17.01.26 13:22, Boqun Feng wrote:
> Atomic pointer support is an important piece of synchronization
> algorithm, e.g. RCU, hence provide the support for that.
> 
> Note that instead of relying on atomic_long or the implementation of
> `Atomic<usize>`, a new set of helpers (atomic_ptr_*) is introduced for
> atomic pointer specifically, this is because ptr2int casting would
> lose the provenance of a pointer and even though in theory there are a
> few tricks the provenance can be restored, it'll still be a simpler
> implementation if C could provide atomic pointers directly. The side
> effects of this approach are: we don't have the arithmetic and logical
> operations for pointers yet and the current implementation only works
> on ARCH_SUPPORTS_ATOMIC_RMW architectures, but these are implementation
> issues and can be added later.
> 
> Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> ---
>  rust/helpers/atomic_ext.c            |  3 +++
>  rust/kernel/sync/atomic.rs           | 12 +++++++++++-
>  rust/kernel/sync/atomic/internal.rs  | 21 +++++++++++++++------
>  rust/kernel/sync/atomic/predefine.rs | 23 +++++++++++++++++++++++
>  4 files changed, 52 insertions(+), 7 deletions(-)
> 
> diff --git a/rust/helpers/atomic_ext.c b/rust/helpers/atomic_ext.c
> index 240218e2e708..c267d5190529 100644
> --- a/rust/helpers/atomic_ext.c
> +++ b/rust/helpers/atomic_ext.c
> @@ -36,6 +36,7 @@ __rust_helper void rust_helper_atomic_##tname##_set_release(type *ptr, type val)
>  
>  GEN_READ_SET_HELPERS(i8, s8)
>  GEN_READ_SET_HELPERS(i16, s16)
> +GEN_READ_SET_HELPERS(ptr, const void *)
>  
>  /*
>   * xchg helpers depend on ARCH_SUPPORTS_ATOMIC_RMW and on the
> @@ -59,6 +60,7 @@ rust_helper_atomic_##tname##_xchg##suffix(type *ptr, type new)			\
>  
>  GEN_XCHG_HELPERS(i8, s8)
>  GEN_XCHG_HELPERS(i16, s16)
> +GEN_XCHG_HELPERS(ptr, const void *)
>  
>  /*
>   * try_cmpxchg helpers depend on ARCH_SUPPORTS_ATOMIC_RMW and on the
> @@ -82,3 +84,4 @@ rust_helper_atomic_##tname##_try_cmpxchg##suffix(type *ptr, type *old, type new)
>  
>  GEN_TRY_CMPXCHG_HELPERS(i8, s8)
>  GEN_TRY_CMPXCHG_HELPERS(i16, s16)
> +GEN_TRY_CMPXCHG_HELPERS(ptr, const void *)
> diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
> index 4aebeacb961a..4d2a5228c2e4 100644
> --- a/rust/kernel/sync/atomic.rs
> +++ b/rust/kernel/sync/atomic.rs
> @@ -51,6 +51,10 @@
>  #[repr(transparent)]
>  pub struct Atomic<T: AtomicType>(AtomicRepr<T::Repr>);
>  
> +// SAFETY: `Atomic<T>` is safe to transfer between execution contexts because of the safety
> +// requirement of `AtomicType`.
> +unsafe impl<T: AtomicType> Send for Atomic<T> {}
> +
>  // SAFETY: `Atomic<T>` is safe to share among execution contexts because all accesses are atomic.
>  unsafe impl<T: AtomicType> Sync for Atomic<T> {}
>  
> @@ -68,6 +72,11 @@ unsafe impl<T: AtomicType> Sync for Atomic<T> {}
>  ///
>  /// - [`Self`] must have the same size and alignment as [`Self::Repr`].
>  /// - [`Self`] must be [round-trip transmutable] to  [`Self::Repr`].
> +/// - [`Self`] must be safe to transfer between execution contexts. If it's [`Send`], this is
> +///   automatically satisfied. The exception is pointer types (e.g. raw pointers and
> +///   [`NonNull<T>`]) that are marked `!Send` but require `unsafe` to do anything meaningful
> +///   with them. This is because transferring pointer values between execution contexts is
> +///   safe as long as the actual `unsafe` dereferencing is justified.
>  ///
>  /// Note that this is more relaxed than requiring the bi-directional transmutability (i.e.
>  /// [`transmute()`] is always sound between `U` and `T`) because of the support for atomic
> @@ -108,7 +117,8 @@ unsafe impl<T: AtomicType> Sync for Atomic<T> {}
>  /// [`transmute()`]: core::mem::transmute
>  /// [round-trip transmutable]: AtomicType#round-trip-transmutability
>  /// [Examples]: AtomicType#examples
> -pub unsafe trait AtomicType: Sized + Send + Copy {
> +/// [`NonNull<T>`]: core::ptr::NonNull
> +pub unsafe trait AtomicType: Sized + Copy {
>      /// The backing atomic implementation type.
>      type Repr: AtomicImpl;
>  }
> diff --git a/rust/kernel/sync/atomic/internal.rs b/rust/kernel/sync/atomic/internal.rs
> index 0dac58bca2b3..93f5a7846645 100644
> --- a/rust/kernel/sync/atomic/internal.rs
> +++ b/rust/kernel/sync/atomic/internal.rs
> @@ -7,6 +7,7 @@
>  use crate::bindings;
>  use crate::macros::paste;
>  use core::cell::UnsafeCell;
> +use ffi::c_void;
>  
>  mod private {
>      /// Sealed trait marker to disable customized impls on atomic implementation traits.
> @@ -14,10 +15,11 @@ pub trait Sealed {}
>  }
>  
>  // The C side supports atomic primitives only for `i32` and `i64` (`atomic_t` and `atomic64_t`),
> -// while the Rust side also layers provides atomic support for `i8` and `i16`
> -// on top of lower-level C primitives.
> +// while the Rust side also provides atomic support for `i8`, `i16` and `*const c_void` on top of
> +// lower-level C primitives.
>  impl private::Sealed for i8 {}
>  impl private::Sealed for i16 {}
> +impl private::Sealed for *const c_void {}
>  impl private::Sealed for i32 {}
>  impl private::Sealed for i64 {}
>  
> @@ -26,10 +28,10 @@ impl private::Sealed for i64 {}
>  /// This trait is sealed, and only types that map directly to the C side atomics
>  /// or can be implemented with lower-level C primitives are allowed to implement this:
>  ///
> -/// - `i8` and `i16` are implemented with lower-level C primitives.
> +/// - `i8`, `i16` and `*const c_void` are implemented with lower-level C primitives.
>  /// - `i32` map to `atomic_t`
>  /// - `i64` map to `atomic64_t`
> -pub trait AtomicImpl: Sized + Send + Copy + private::Sealed {
> +pub trait AtomicImpl: Sized + Copy + private::Sealed {
>      /// The type of the delta in arithmetic or logical operations.
>      ///
>      /// For example, in `atomic_add(ptr, v)`, it's the type of `v`. Usually it's the same type of
> @@ -51,6 +53,13 @@ impl AtomicImpl for i16 {
>      type Delta = Self;
>  }
>  
> +// The current helpers of load/store uses `{WRITE,READ}_ONCE()` hence the atomicity is only

uses -> use ?

> +// guaranteed against read-modify-write operations if the architecture supports native atomic RmW.
> +#[cfg(CONFIG_ARCH_SUPPORTS_ATOMIC_RMW)]
> +impl AtomicImpl for *const c_void {
> +    type Delta = isize;
> +}

Are all users of this guarded with CONFIG_ARCH_SUPPORTS_ATOMIC_RMW as
well? Or do we want (need?) to cover the
non-CONFIG_ARCH_SUPPORTS_ATOMIC_RMW cases where someone tries to use
this as well?

Best regards

Dirk




^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 4/5] rust: sync: atomic: Add Atomic<*mut T> support
  2026-01-18  8:38   ` Dirk Behme
@ 2026-01-18 14:57     ` Boqun Feng
  2026-01-18 15:05       ` Boqun Feng
  0 siblings, 1 reply; 20+ messages in thread
From: Boqun Feng @ 2026-01-18 14:57 UTC (permalink / raw)
  To: Dirk Behme
  Cc: rust-for-linux, linux-kernel, rcu, Miguel Ojeda, Gary Guo,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
	Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra,
	Mark Rutland, Paul E. McKenney, Frederic Weisbecker,
	Neeraj Upadhyay, Joel Fernandes, Josh Triplett, Uladzislau Rezki,
	Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, Zqiang,
	FUJITA Tomonori

On Sun, Jan 18, 2026 at 09:38:36AM +0100, Dirk Behme wrote:
[...]
> >  
> > +// The current helpers of load/store uses `{WRITE,READ}_ONCE()` hence the atomicity is only
> 
> uses -> use ?
> 

Will fix, thank you!

> > +// guaranteed against read-modify-write operations if the architecture supports native atomic RmW.
> > +#[cfg(CONFIG_ARCH_SUPPORTS_ATOMIC_RMW)]
> > +impl AtomicImpl for *const c_void {
> > +    type Delta = isize;
> > +}
> 
> Are all users of this guarded with CONFIG_ARCH_SUPPORTS_ATOMIC_RMW as
> well? Or do we want (need?) to cover the

No, the users don't need to guard with CONFIG_ARCH_SUPPORTS_ATOMIC_RMW.
The purpose of this #[cfg] is to avoid surprises when an arch with
CONFIG_ARCH_SUPPORTS_ATOMIC_RMW=n gains Rust support; when that happens,
we need to add support for i8/i16/ptr to the helpers.

> non-CONFIG_ARCH_SUPPORTS_ATOMIC_RMW cases where someone tries to use

Note that these arches are very rare, so we might not have any problem
for a while.

Regards,
Boqun

> this as well?
> 
> Best regards
> 
> Dirk
> 
> 
> 

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 4/5] rust: sync: atomic: Add Atomic<*mut T> support
  2026-01-18 14:57     ` Boqun Feng
@ 2026-01-18 15:05       ` Boqun Feng
  2026-01-18 19:59         ` Benno Lossin
  0 siblings, 1 reply; 20+ messages in thread
From: Boqun Feng @ 2026-01-18 15:05 UTC (permalink / raw)
  To: Dirk Behme
  Cc: rust-for-linux, linux-kernel, rcu, Miguel Ojeda, Gary Guo,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
	Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra,
	Mark Rutland, Paul E. McKenney, Frederic Weisbecker,
	Neeraj Upadhyay, Joel Fernandes, Josh Triplett, Uladzislau Rezki,
	Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, Zqiang,
	FUJITA Tomonori

On Sun, Jan 18, 2026 at 10:57:37PM +0800, Boqun Feng wrote:
> On Sun, Jan 18, 2026 at 09:38:36AM +0100, Dirk Behme wrote:
> [...]
> > >  
> > > +// The current helpers of load/store uses `{WRITE,READ}_ONCE()` hence the atomicity is only
> > 
> > uses -> use ?
> > 
> 
> Will fix, thank you!
> 
> > > +// guaranteed against read-modify-write operations if the architecture supports native atomic RmW.
> > > +#[cfg(CONFIG_ARCH_SUPPORTS_ATOMIC_RMW)]
> > > +impl AtomicImpl for *const c_void {
> > > +    type Delta = isize;
> > > +}
> > 
> > Are all users of this guarded with CONFIG_ARCH_SUPPORTS_ATOMIC_RMW as
> > well? Or do we want (need?) to cover the
> 
> No, the users don't need to guard with CONFIG_ARCH_SUPPORTS_ATOMIC_RMW.
> The purpose of this #[cfg] is to avoid surprises when an arch with
> CONFIG_ARCH_SUPPORTS_ATOMIC_RMW=n gains Rust support; when that happens,
> we need to add support for i8/i16/ptr to the helpers.
> 

Hmm... I guess at this moment, I probably should do

#[cfg(not(CONFIG_ARCH_SUPPORTS_ATOMIC_RMW))]
static_assert!(false,
               "Support of architectures that don't have native atomic needs to implement helpers in atomic_ext.c");

I can add a patch in the next version if it looks good to everyone.

Regards,
Boqun

> > non-CONFIG_ARCH_SUPPORTS_ATOMIC_RMW cases where someone tries to use
> 
> Note that these arches are very rare, so we might not have any problem
> for a while.
> 
> Regards,
> Boqun
> 
> > this as well?
> > 
> > Best regards
> > 
> > Dirk
> > 
> > 
> > 

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 4/5] rust: sync: atomic: Add Atomic<*mut T> support
  2026-01-18  4:19     ` Boqun Feng
@ 2026-01-18 15:39       ` Gary Guo
  2026-01-20 11:57         ` Boqun Feng
  2026-01-20 12:37       ` Alice Ryhl
  1 sibling, 1 reply; 20+ messages in thread
From: Gary Guo @ 2026-01-18 15:39 UTC (permalink / raw)
  To: Boqun Feng, Gary Guo
  Cc: rust-for-linux, linux-kernel, rcu, Miguel Ojeda,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
	Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra,
	Mark Rutland, Paul E. McKenney, Frederic Weisbecker,
	Neeraj Upadhyay, Joel Fernandes, Josh Triplett, Uladzislau Rezki,
	Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, Zqiang,
	FUJITA Tomonori

On Sun Jan 18, 2026 at 4:19 AM GMT, Boqun Feng wrote:
> On Sat, Jan 17, 2026 at 05:03:15PM +0000, Gary Guo wrote:
>> On Sat Jan 17, 2026 at 12:22 PM GMT, Boqun Feng wrote:
>> > Atomic pointer support is an important piece of synchronization
>> > algorithm, e.g. RCU, hence provide the support for that.
>> >
>> > Note that instead of relying on atomic_long or the implementation of
>> > `Atomic<usize>`, a new set of helpers (atomic_ptr_*) is introduced for
>> > atomic pointer specifically, this is because ptr2int casting would
>> > lose the provenance of a pointer and even though in theory there are a
>> > few tricks the provenance can be restored, it'll still be a simpler
>> > implementation if C could provide atomic pointers directly. The side
>> > effects of this approach are: we don't have the arithmetic and logical
>> > operations for pointers yet and the current implementation only works
>> > on ARCH_SUPPORTS_ATOMIC_RMW architectures, but these are implementation
>> > issues and can be added later.
>> >
>> > Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
>> 
>> I am happy that this is now using dedicated helpers for pointers, and not going
>> through an intermediate integer which can lose provenance.
>> 
>> Some feedbacks below, but in general LGTM.
>> 
>> Reviewed-by: Gary Guo <gary@garyguo.net>
>> 
>
> Thanks!
>
>> > ---
>> > diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
> [...]
>> > index 4aebeacb961a..4d2a5228c2e4 100644
>> > --- a/rust/kernel/sync/atomic.rs
>> > +++ b/rust/kernel/sync/atomic.rs
>> > @@ -51,6 +51,10 @@
>> >  #[repr(transparent)]
>> >  pub struct Atomic<T: AtomicType>(AtomicRepr<T::Repr>);
>> >  
>> > +// SAFETY: `Atomic<T>` is safe to transfer between execution contexts because of the safety
>> > +// requirement of `AtomicType`.
>> > +unsafe impl<T: AtomicType> Send for Atomic<T> {}
>> > +
>> >  // SAFETY: `Atomic<T>` is safe to share among execution contexts because all accesses are atomic.
>> >  unsafe impl<T: AtomicType> Sync for Atomic<T> {}
>> >  
>> > @@ -68,6 +72,11 @@ unsafe impl<T: AtomicType> Sync for Atomic<T> {}
>> >  ///
>> >  /// - [`Self`] must have the same size and alignment as [`Self::Repr`].
>> >  /// - [`Self`] must be [round-trip transmutable] to  [`Self::Repr`].
>> > +/// - [`Self`] must be safe to transfer between execution contexts. If it's [`Send`], this is
>> > +///   automatically satisfied. The exception is pointer types (e.g. raw pointers and
>> > +///   [`NonNull<T>`]) that are marked `!Send` but require `unsafe` to do anything meaningful
>> > +///   with them. This is because transferring pointer values between execution contexts is
>> > +///   safe as long as the actual `unsafe` dereferencing is justified.
>> 
>> I think the discussion about `Send` on pointers should be moved to the `impl<T>
>> AtomicType for *mut T` side.
>> 
>
> The reason I put something here was to answer the potential question
> "why don't you require AtomicType being a subtrait of Send?", that's
> more of a question for people who read about `AtomicType`, so I figured
> we need some explanation. But I'm fine if you think we should move some
> of the comments to the impl block, or we duplicate some. Although I
> don't think the current version is worse. Considering we do:
>
>     /// - [`Self`] must have the same size and alignment as [`Self::Repr`].
>     /// - [`Self`] must be [round-trip transmutable] to  [`Self::Repr`].
>     /// - [`Self`] must be safe to transfer between execution contexts. If it's [`Send`], this is
>     ///   automatically satisfied.
>
> for AtomicType, I'm not sure someone reading about `AtomicType` would
> have everything they need to understand why it's not `: Send`.

Ok.

>
> [...]
>> > +// SAFETY:
>> > +//
>> > +// - `*mut T` has the same size and alignment with `*const c_void`, and is round-trip
>> > +//   transmutable to `*const c_void`.
>> > +// - `*mut T` is safe to transfer between execution contexts. See the safety requirement of
>> > +//   [`AtomicType`].
>> > +unsafe impl<T: Sized> super::AtomicType for *mut T {
>> > +    type Repr = *const c_void;
>> > +}
>> 
>> How about *const T?
>> 
>
> In general I want to avoid const raw pointers since they provide very
> little extra compared to mut raw pointers. For compiler optimization,
> provenance is more important than the "const vs mut" modifier; for
> dereference, it's unsafe anyway and users need to provide reasoning
> (including knowing the provenance and what other accesses may happen to
> the same address), so I feel the type difference of "*const T" vs
> "*mut T" doesn't do anything extra either.
>
> Think about it: in Rust std, there are only two pointer types that map
> to "*mut T": NonNull<T> (as_ptr() returns a `*mut T`) and AtomicPtr<T>
> (as_ptr() returns a `*mut *mut T`), and there is no type like
> NonNullConst<T> or AtomicConstPtr<T>. This is a hint to me that we may
> not need to support `*const T` in most cases.

Actually `NonNull` is internally `*const T`, because it's covariant, unlike
`*mut T` which is invariant.

Now, for atomics, it's less likely that you actually want covariance. So this
difference matters less.
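
For illustration, a minimal userspace sketch of the variance difference
(editor's sketch under stated assumptions, not part of the patch; only
`core::ptr::NonNull` is used):

```rust
use core::ptr::NonNull;

// `NonNull<T>` is covariant in `T`: a `NonNull<&'static i32>` coerces to
// `NonNull<&'a i32>` for any shorter lifetime `'a`. The equivalent
// function written with `*mut &'static i32 -> *mut &'a i32` would not
// compile, because `*mut T` is invariant in `T`.
fn shorten<'a>(p: NonNull<&'static i32>) -> NonNull<&'a i32> {
    p
}

fn main() {
    static X: i32 = 42;
    let mut r: &'static i32 = &X;
    let p = NonNull::from(&mut r);
    let q = shorten(p);
    // SAFETY: `q` still points at `r`, which is live in this scope.
    assert_eq!(**unsafe { q.as_ref() }, 42);
}
```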

>
> But maybe I'm missing something? If you have a good reason, we can
> obviously add the support for `*const T`.

It just feels that it is somewhat inconsistent. There's no good motivation right
now. I am fine to leave it out and add when needed.

Best,
Gary

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 4/5] rust: sync: atomic: Add Atomic<*mut T> support
  2026-01-18 15:05       ` Boqun Feng
@ 2026-01-18 19:59         ` Benno Lossin
  2026-01-19  0:57           ` Boqun Feng
  0 siblings, 1 reply; 20+ messages in thread
From: Benno Lossin @ 2026-01-18 19:59 UTC (permalink / raw)
  To: Boqun Feng, Dirk Behme
  Cc: rust-for-linux, linux-kernel, rcu, Miguel Ojeda, Gary Guo,
	Björn Roy Baron, Andreas Hindborg, Alice Ryhl, Trevor Gross,
	Danilo Krummrich, Will Deacon, Peter Zijlstra, Mark Rutland,
	Paul E. McKenney, Frederic Weisbecker, Neeraj Upadhyay,
	Joel Fernandes, Josh Triplett, Uladzislau Rezki, Steven Rostedt,
	Mathieu Desnoyers, Lai Jiangshan, Zqiang, FUJITA Tomonori

On Sun Jan 18, 2026 at 4:05 PM CET, Boqun Feng wrote:
> On Sun, Jan 18, 2026 at 10:57:37PM +0800, Boqun Feng wrote:
>> On Sun, Jan 18, 2026 at 09:38:36AM +0100, Dirk Behme wrote:
>> [...]
>> > >  
>> > > +// The current helpers of load/store uses `{WRITE,READ}_ONCE()` hence the atomicity is only
>> > 
>> > uses -> use ?
>> > 
>> 
>> Will fix, thank you!
>> 
>> > > +// guaranteed against read-modify-write operations if the architecture supports native atomic RmW.
>> > > +#[cfg(CONFIG_ARCH_SUPPORTS_ATOMIC_RMW)]
>> > > +impl AtomicImpl for *const c_void {
>> > > +    type Delta = isize;
>> > > +}
>> > 
>> > Are all users of this guarded with CONFIG_ARCH_SUPPORTS_ATOMIC_RMW as
>> > well? Or do we want (need?) to cover the
>> 
>> No, the users don't need to guard with CONFIG_ARCH_SUPPORTS_ATOMIC_RMW.
>> The purpose of this #[cfg] is to avoid surprises when an arch with
>> CONFIG_ARCH_SUPPORTS_ATOMIC_RMW=n gains Rust support; when that happens,
>> we need to add support for i8/i16/ptr to the helpers.
>> 
>
> Hmm... I guess at this moment, I probably should do
>
> #[cfg(not(CONFIG_ARCH_SUPPORTS_ATOMIC_RMW))]
> static_assert!(false,
>                "Support of architectures that don't have native atomic needs to implement helpers in atomic_ext.c");
>
> I can add a patch in the next version if it looks good to everyone.

I think this is a great idea!

By the way, did you know about `cfg!` [1]?:

    static_assert!(
        cfg!(CONFIG_ARCH_SUPPORTS_ATOMIC_RMW),
        "Support of architectures that don't have native atomic needs to implement helpers in atomic_ext.c",
    );


[1]: https://doc.rust-lang.org/core/macro.cfg.html
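
Outside the kernel crate, the same compile-time check can be sketched with a
plain `const` assertion (a hedged approximation: `static_assert!` lowers to
roughly this, and the `cfg!` predicate below is a stand-in for
CONFIG_ARCH_SUPPORTS_ATOMIC_RMW):

```rust
// `cfg!(...)` expands to a `const` bool, so the assertion is evaluated at
// compile time and fails the build when the predicate is false.
const _: () = assert!(
    cfg!(any(unix, windows)), // stand-in predicate for the example
    "Architectures without native atomic RmW need helpers in atomic_ext.c"
);

fn main() {
    // Reaching runtime means the compile-time assertion held.
    println!("compile-time check passed");
}
```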

Cheers,
Benno

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 4/5] rust: sync: atomic: Add Atomic<*mut T> support
  2026-01-18 19:59         ` Benno Lossin
@ 2026-01-19  0:57           ` Boqun Feng
  0 siblings, 0 replies; 20+ messages in thread
From: Boqun Feng @ 2026-01-19  0:57 UTC (permalink / raw)
  To: Benno Lossin
  Cc: Dirk Behme, rust-for-linux, linux-kernel, rcu, Miguel Ojeda,
	Gary Guo, Björn Roy Baron, Andreas Hindborg, Alice Ryhl,
	Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra,
	Mark Rutland, Paul E. McKenney, Frederic Weisbecker,
	Neeraj Upadhyay, Joel Fernandes, Josh Triplett, Uladzislau Rezki,
	Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, Zqiang,
	FUJITA Tomonori

On Sun, Jan 18, 2026 at 08:59:09PM +0100, Benno Lossin wrote:
> On Sun Jan 18, 2026 at 4:05 PM CET, Boqun Feng wrote:
> > On Sun, Jan 18, 2026 at 10:57:37PM +0800, Boqun Feng wrote:
> >> On Sun, Jan 18, 2026 at 09:38:36AM +0100, Dirk Behme wrote:
> >> [...]
> >> > >  
> >> > > +// The current helpers of load/store uses `{WRITE,READ}_ONCE()` hence the atomicity is only
> >> > 
> >> > uses -> use ?
> >> > 
> >> 
> >> Will fix, thank you!
> >> 
> >> > > +// guaranteed against read-modify-write operations if the architecture supports native atomic RmW.
> >> > > +#[cfg(CONFIG_ARCH_SUPPORTS_ATOMIC_RMW)]
> >> > > +impl AtomicImpl for *const c_void {
> >> > > +    type Delta = isize;
> >> > > +}
> >> > 
> >> > Are all users of this guarded with CONFIG_ARCH_SUPPORTS_ATOMIC_RMW as
> >> > well? Or do we want (need?) to cover the
> >> 
> >> No, the users don't need to guard with CONFIG_ARCH_SUPPORTS_ATOMIC_RMW.
> >> The purpose of this #[cfg] is to avoid surprises when an arch with
> >> CONFIG_ARCH_SUPPORTS_ATOMIC_RMW=n gains Rust support; when that happens,
> >> we need to add support for i8/i16/ptr to the helpers.
> >> 
> >
> > Hmm... I guess at this moment, I probably should do
> >
> > #[cfg(not(CONFIG_ARCH_SUPPORTS_ATOMIC_RMW))]
> > static_assert!(false,
> >                "Support of architectures that don't have native atomic needs to implement helpers in atomic_ext.c");
> >
> > I can add a patch in the next version if it looks good to everyone.
> 
> I think this is a great idea!
> 
> By the way, did you know about `cfg!` [1]?:
> 
>     static_assert!(
>         cfg!(CONFIG_ARCH_SUPPORTS_ATOMIC_RMW),
>         "Support of architectures that don't have native atomic needs to implement helpers in atomic_ext.c",
>     );
> 

Even better! Thanks. I've added a new patch that converts the existing
#[cfg] on impl blocks to this. I've added Suggested-by of both you and
Dirk ;-)

Regards,
Boqun

> 
> [1]: https://doc.rust-lang.org/core/macro.cfg.html
> 
> Cheers,
> Benno

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 5/5] rust: sync: rcu: Add RCU protected pointer
  2026-01-18  8:28   ` Dirk Behme
@ 2026-01-19  1:03     ` Boqun Feng
  0 siblings, 0 replies; 20+ messages in thread
From: Boqun Feng @ 2026-01-19  1:03 UTC (permalink / raw)
  To: Dirk Behme
  Cc: rust-for-linux, linux-kernel, rcu, Miguel Ojeda, Gary Guo,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
	Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra,
	Mark Rutland, Paul E. McKenney, Frederic Weisbecker,
	Neeraj Upadhyay, Joel Fernandes, Josh Triplett, Uladzislau Rezki,
	Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, Zqiang,
	FUJITA Tomonori

On Sun, Jan 18, 2026 at 09:28:35AM +0100, Dirk Behme wrote:
[...]
> > +    /// ```
> > +    pub fn dereference<'rcu>(&self, _rcu_guard: &'rcu Guard) -> Option<P::Borrowed<'rcu>> {
> > +        // Ordering: Address dependency pairs with the `store(Release)` in copy_update().
> > +        let ptr = self.as_atomic().load(Relaxed);
> > +
> > +        if !ptr.is_null() {
> > +            // SAFETY:
> 
> 
> Would it be an option to take an early return here and with this drop
> one indentation level of the larger `SAFETY` comment?
> 
> if ptr.is_null() {
>    return None;
> }
> 
> // SAFETY:
> ...
> Some(unsafe { P::borrow(ptr) })
> 
> Same for `Drop for Rcu<P>` below.
> 
> 

Makes sense.
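
A standalone sketch of the early-return shape, with a plain raw pointer
standing in for the kernel's `Rcu<P>`/`Guard` types (a hypothetical
simplification for illustration, not the actual patch code):

```rust
// Early-return on null, so the SAFETY comment and the unsafe block sit at
// the function's top indentation level, as Dirk suggested.
fn dereference(ptr: *const i32) -> Option<i32> {
    if ptr.is_null() {
        return None;
    }
    // SAFETY: the caller guarantees `ptr` is valid whenever it is non-null.
    Some(unsafe { *ptr })
}

fn main() {
    let x = 42;
    assert_eq!(dereference(&x), Some(42));
    assert_eq!(dereference(std::ptr::null()), None);
    println!("ok");
}
```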

> > +            // - Since `ptr` is not null, so it has to be a return value of `P::into_foreign()`.
> > +            // - The returned `Borrowed<'rcu>` cannot outlive the RCU Guar, this guarantees the
> 
> Guar -> Guard
> 

Fixed! Thank you!

Regards,
Boqun


* Re: [PATCH 4/5] rust: sync: atomic: Add Atomic<*mut T> support
  2026-01-17 12:22 ` [PATCH 4/5] rust: sync: atomic: Add Atomic<*mut T> support Boqun Feng
  2026-01-17 17:03   ` Gary Guo
  2026-01-18  8:38   ` Dirk Behme
@ 2026-01-19  3:09   ` FUJITA Tomonori
  2 siblings, 0 replies; 20+ messages in thread
From: FUJITA Tomonori @ 2026-01-19  3:09 UTC (permalink / raw)
  To: boqun.feng
  Cc: rust-for-linux, linux-kernel, rcu, ojeda, gary, bjorn3_gh, lossin,
	a.hindborg, aliceryhl, tmgross, dakr, will, peterz, mark.rutland,
	paulmck, frederic, neeraj.upadhyay, joelagnelf, josh, urezki,
	rostedt, mathieu.desnoyers, jiangshanlai, qiang.zhang,
	fujita.tomonori

On Sat, 17 Jan 2026 20:22:42 +0800
Boqun Feng <boqun.feng@gmail.com> wrote:

> Atomic pointer support is an important piece of many synchronization
> algorithms, e.g. RCU, hence provide support for it.
> 
> Note that instead of relying on atomic_long or the implementation of
> `Atomic<usize>`, a new set of helpers (atomic_ptr_*) is introduced
> specifically for atomic pointers. This is because ptr2int casting would
> lose the provenance of a pointer, and even though in theory there are a
> few tricks by which the provenance can be restored, the implementation
> is still simpler if C can provide atomic pointers directly. The side
> effects of this approach are that we don't have the arithmetic and
> logical operations for pointers yet, and the current implementation only
> works on ARCH_SUPPORTS_ATOMIC_RMW architectures, but these are
> implementation issues and can be addressed later.
> 
> Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> ---
>  rust/helpers/atomic_ext.c            |  3 +++
>  rust/kernel/sync/atomic.rs           | 12 +++++++++++-
>  rust/kernel/sync/atomic/internal.rs  | 21 +++++++++++++++------
>  rust/kernel/sync/atomic/predefine.rs | 23 +++++++++++++++++++++++
>  4 files changed, 52 insertions(+), 7 deletions(-)

Reviewed-by: FUJITA Tomonori <fujita.tomonori@gmail.com>
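
As a userspace analogue of the provenance point in the commit message:
std's `AtomicPtr` stores the pointer value directly, so no int<->ptr round
trip (and hence no provenance loss) is involved. A minimal sketch, with
illustrative names that are not from the patch:

```rust
use std::sync::atomic::{AtomicPtr, Ordering};

// Store a pointer atomically and read it back; the pointer keeps its
// provenance because it is never cast through an integer.
fn publish_and_read(val: &mut i32) -> i32 {
    let slot = AtomicPtr::new(std::ptr::null_mut::<i32>());
    // Publish with Release; the Acquire load below pairs with it.
    slot.store(val, Ordering::Release);
    let p = slot.load(Ordering::Acquire);
    // SAFETY: `p` was just stored from a live `&mut i32`, so it is valid.
    unsafe { *p }
}

fn main() {
    let mut x = 42;
    assert_eq!(publish_and_read(&mut x), 42);
    println!("ok");
}
```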


* Re: [PATCH 4/5] rust: sync: atomic: Add Atomic<*mut T> support
  2026-01-18 15:39       ` Gary Guo
@ 2026-01-20 11:57         ` Boqun Feng
  0 siblings, 0 replies; 20+ messages in thread
From: Boqun Feng @ 2026-01-20 11:57 UTC (permalink / raw)
  To: Gary Guo
  Cc: rust-for-linux, linux-kernel, rcu, Miguel Ojeda,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
	Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra,
	Mark Rutland, Paul E. McKenney, Frederic Weisbecker,
	Neeraj Upadhyay, Joel Fernandes, Josh Triplett, Uladzislau Rezki,
	Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, Zqiang,
	FUJITA Tomonori

On Sun, Jan 18, 2026 at 03:39:40PM +0000, Gary Guo wrote:
[...]
> >> 
> >> How about *const T?
> >> 
> >
> > In general I want to avoid const raw pointers since they provide very
> > little extra compared to mut raw pointers. For compiler optimization,
> > provenance is more important than the "const vs mut" modifier; for
> > dereference, it's unsafe anyway and users need to provide reasoning
> > (including knowing the provenance and that other accesses may happen to
> > the same address), so I feel the type difference of "*const T" vs
> > "*mut T" doesn't do anything extra either.
> >
> > Think about it: in Rust std, there are only two pointer types that map
> > to "*mut T": NonNull<T> (as_ptr() returns a `*mut T`) and AtomicPtr<T>
> > (as_ptr() returns a `*mut *mut T`). And there is no type like
> > NonNullConst<T> or AtomicConstPtr<T>. This is a hint to me that we may
> > not need to support `*const T` in most cases.
> 
> Actually `NonNull` is internally `*const T`, because it's covariant, unlike
> `*mut T` which is invariant.
> 

Ah, right!

> Now, for atomics, it's less likely that you actually want covariance. So this
> difference matters less.
> 

Agreed.

> >
> > But maybe I'm missing something? If you have a good reason, we can
> > obviously add the support for `*const T`.
> 
> It just feels somewhat inconsistent. There's no good motivation right
> now; I am fine with leaving it out and adding it when needed.
> 

Yeah, we can also add it later. Thanks!

Regards,
Boqun

> Best,
> Gary


* Re: [PATCH 4/5] rust: sync: atomic: Add Atomic<*mut T> support
  2026-01-18  4:19     ` Boqun Feng
  2026-01-18 15:39       ` Gary Guo
@ 2026-01-20 12:37       ` Alice Ryhl
  2026-01-20 14:07         ` Boqun Feng
  1 sibling, 1 reply; 20+ messages in thread
From: Alice Ryhl @ 2026-01-20 12:37 UTC (permalink / raw)
  To: Boqun Feng
  Cc: Gary Guo, rust-for-linux, linux-kernel, rcu, Miguel Ojeda,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg,
	Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra,
	Mark Rutland, Paul E. McKenney, Frederic Weisbecker,
	Neeraj Upadhyay, Joel Fernandes, Josh Triplett, Uladzislau Rezki,
	Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, Zqiang,
	FUJITA Tomonori

On Sun, Jan 18, 2026 at 12:19:35PM +0800, Boqun Feng wrote:
> On Sat, Jan 17, 2026 at 05:03:15PM +0000, Gary Guo wrote:
> > On Sat Jan 17, 2026 at 12:22 PM GMT, Boqun Feng wrote:
> > > +// SAFETY:
> > > +//
> > > +// - `*mut T` has the same size and alignment with `*const c_void`, and is round-trip
> > > +//   transmutable to `*const c_void`.
> > > +// - `*mut T` is safe to transfer between execution contexts. See the safety requirement of
> > > +//   [`AtomicType`].
> > > +unsafe impl<T: Sized> super::AtomicType for *mut T {
> > > +    type Repr = *const c_void;
> > > +}
> > 
> > How about *const T?
> > 
> 
> In general I want to avoid const raw pointers since they provide very
> little extra compared to mut raw pointers. For compiler optimization,
> provenance is more important than the "const vs mut" modifier; for
> dereference, it's unsafe anyway and users need to provide reasoning
> (including knowing the provenance and that other accesses may happen to
> the same address), so I feel the type difference of "*const T" vs
> "*mut T" doesn't do anything extra either.
> 
> Think about it: in Rust std, there are only two pointer types that map
> to "*mut T": NonNull<T> (as_ptr() returns a `*mut T`) and AtomicPtr<T>
> (as_ptr() returns a `*mut *mut T`). And there is no type like
> NonNullConst<T> or AtomicConstPtr<T>. This is a hint to me that we may
> not need to support `*const T` in most cases.
> 
> But maybe I'm missing something? If you have a good reason, we can
> obviously add the support for `*const T`.

It was pretty inconvenient in:
https://lore.kernel.org/all/20260117-upgrade-poll-v1-1-179437b7bd49@google.com/
since I had to cast_mut() in a bunch of places.
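
A small userspace illustration of that friction, using std's `AtomicPtr`
as a stand-in (a hypothetical example, not code from the linked patch):

```rust
use std::sync::atomic::{AtomicPtr, Ordering};

// With only a *mut-typed atomic available, code that naturally holds
// *const T has to cast on the way in and again on the way out.
fn round_trip(p: *const i32) -> *const i32 {
    let a = AtomicPtr::new(p.cast_mut()); // cast needed going in
    a.load(Ordering::Relaxed).cast_const() // and again coming out
}

fn main() {
    let x = 7;
    let back = round_trip(&x);
    // SAFETY: `back` round-tripped from a live `&i32`, so it is valid.
    assert_eq!(unsafe { *back }, 7);
    println!("ok");
}
```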

Alice


* Re: [PATCH 4/5] rust: sync: atomic: Add Atomic<*mut T> support
  2026-01-20 12:37       ` Alice Ryhl
@ 2026-01-20 14:07         ` Boqun Feng
  0 siblings, 0 replies; 20+ messages in thread
From: Boqun Feng @ 2026-01-20 14:07 UTC (permalink / raw)
  To: Alice Ryhl
  Cc: Gary Guo, rust-for-linux, linux-kernel, rcu, Miguel Ojeda,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg,
	Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra,
	Mark Rutland, Paul E. McKenney, Frederic Weisbecker,
	Neeraj Upadhyay, Joel Fernandes, Josh Triplett, Uladzislau Rezki,
	Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, Zqiang,
	FUJITA Tomonori

On Tue, Jan 20, 2026 at 12:37:03PM +0000, Alice Ryhl wrote:
> On Sun, Jan 18, 2026 at 12:19:35PM +0800, Boqun Feng wrote:
> > On Sat, Jan 17, 2026 at 05:03:15PM +0000, Gary Guo wrote:
> > > On Sat Jan 17, 2026 at 12:22 PM GMT, Boqun Feng wrote:
> > > > +// SAFETY:
> > > > +//
> > > > +// - `*mut T` has the same size and alignment with `*const c_void`, and is round-trip
> > > > +//   transmutable to `*const c_void`.
> > > > +// - `*mut T` is safe to transfer between execution contexts. See the safety requirement of
> > > > +//   [`AtomicType`].
> > > > +unsafe impl<T: Sized> super::AtomicType for *mut T {
> > > > +    type Repr = *const c_void;
> > > > +}
> > > 
> > > How about *const T?
> > > 
> > 
> > In general I want to avoid const raw pointers since they provide very
> > little extra compared to mut raw pointers. For compiler optimization,
> > provenance is more important than the "const vs mut" modifier; for
> > dereference, it's unsafe anyway and users need to provide reasoning
> > (including knowing the provenance and that other accesses may happen to
> > the same address), so I feel the type difference of "*const T" vs
> > "*mut T" doesn't do anything extra either.
> > 
> > Think about it: in Rust std, there are only two pointer types that map
> > to "*mut T": NonNull<T> (as_ptr() returns a `*mut T`) and AtomicPtr<T>
> > (as_ptr() returns a `*mut *mut T`). And there is no type like
> > NonNullConst<T> or AtomicConstPtr<T>. This is a hint to me that we may
> > not need to support `*const T` in most cases.
> > 
> > But maybe I'm missing something? If you have a good reason, we can
> > obviously add the support for `*const T`.
> 
> It was pretty inconvenient in:
> https://lore.kernel.org/all/20260117-upgrade-poll-v1-1-179437b7bd49@google.com/
> since I had to cast_mut() a bunch of places.
> 

Let's add it then ;-)

https://lore.kernel.org/rust-for-linux/20260120140503.62804-1-boqun.feng@gmail.com/

Regards,
Boqun

> Alice


Thread overview: 20+ messages:
2026-01-17 12:22 [PATCH 0/5] rust: sync: Atomic pointer and RCU Boqun Feng
2026-01-17 12:22 ` [PATCH 1/5] rust: helpers: Generify the definitions of rust_helper_*_{read,set}* Boqun Feng
2026-01-17 12:22 ` [PATCH 2/5] rust: helpers: Generify the definitions of rust_helper_*_xchg* Boqun Feng
2026-01-17 12:22 ` [PATCH 3/5] rust: helpers: Generify the definitions of rust_helper_*_cmpxchg* Boqun Feng
2026-01-17 12:22 ` [PATCH 4/5] rust: sync: atomic: Add Atomic<*mut T> support Boqun Feng
2026-01-17 17:03   ` Gary Guo
2026-01-18  4:19     ` Boqun Feng
2026-01-18 15:39       ` Gary Guo
2026-01-20 11:57         ` Boqun Feng
2026-01-20 12:37       ` Alice Ryhl
2026-01-20 14:07         ` Boqun Feng
2026-01-18  8:38   ` Dirk Behme
2026-01-18 14:57     ` Boqun Feng
2026-01-18 15:05       ` Boqun Feng
2026-01-18 19:59         ` Benno Lossin
2026-01-19  0:57           ` Boqun Feng
2026-01-19  3:09   ` FUJITA Tomonori
2026-01-17 12:22 ` [PATCH 5/5] rust: sync: rcu: Add RCU protected pointer Boqun Feng
2026-01-18  8:28   ` Dirk Behme
2026-01-19  1:03     ` Boqun Feng
