* [GIT PULL][PATCH 00/36] Rust synchronization changes for v7.0
@ 2026-01-11 11:57 Boqun Feng
2026-01-11 11:57 ` [PATCH 01/36] rust: sync: Refactor static_lock_class!() macro Boqun Feng
` (4 more replies)
0 siblings, 5 replies; 39+ messages in thread
From: Boqun Feng @ 2026-01-11 11:57 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar
Cc: rust-for-linux, linux-kernel, Will Deacon, Mark Rutland,
Thomas Gleixner, Miguel Ojeda, Gary Guo, Alice Ryhl,
Andreas Hindborg, Benno Lossin, Danilo Krummrich, Boqun Feng
Peter,
Please pull the following Rust synchronization changes for v7.0 into
tip/locking/core; you can find the details of the changes in the git tag
message below.
Thanks!
Regards,
Boqun
----------------------------------------------------------------
The following changes since commit a45026cef17d1080c985adf28234d6c8475ad66f:
locking/local_lock: Include more missing headers (2026-01-08 11:21:57 +0100)
are available in the Git repository at:
https://git.kernel.org/pub/scm/linux/kernel/git/boqun/linux.git/ tags/rust-sync-7.0
for you to fetch changes up to ccf9e070116a81d29aae30db501d562c8efd1ed8:
rust: sync: Inline various lock related methods (2026-01-10 10:53:46 +0800)
----------------------------------------------------------------
Rust synchronization changes for v7.0:
- Add support for Atomic<i8>, Atomic<i16> and Atomic<bool>, and replace most
usages of Rust's native AtomicBool with Atomic<bool>; the remaining
conversions will require Atomic<Flag>
- Clean up LockClassKey and improve its docs
- Add the missing Send and Sync trait impls for SetOnce
- Make ARef Unpin, as it is supposed to be
- Add __rust_helper to a few Rust helpers as a preparation for helper LTO
- Inline various lock-related methods to avoid additional function calls
-----BEGIN PGP SIGNATURE-----
iQEzBAABCAAdFiEEj5IosQTPz8XU1wRHSXnow7UH+rgFAmljjIkACgkQSXnow7UH
+rizPQgAi5rdVoIpjN9BaQtWVHcAwBhbD7WhboxDhsSdEl3yaw0E7OLML5IyupLP
BUsrI5BAhwUaIpE/4PT9RePLCOeFqCKfz9eyQpb6uEwLVKcx8WESrItrlStqK8dG
lAZEV07SwAWq/ARsgI02LZnyDQxxBrX8Q4FKZgglpaBxieVXmQjekcSF2W6s3lka
qWXB7MU38D3DZjKr6Lpp8BjdI7qTNInEZDGtRPncIId+4Jj7V5IpEX/NThyrDLp1
M0UzXOMzexIfeSm3oz95II6R+GeDpruI6pN8QDtljaTL0Al5/z5yO8Zj9KIPGAl4
9JRUJ0pNVrAUljjJ4ap8hIMPlOWqjw==
=JOZ1
-----END PGP SIGNATURE-----
----------------------------------------------------------------
Alice Ryhl (16):
rust: sync: Refactor static_lock_class!() macro
rust: sync: Clean up LockClassKey and its docs
rust: sync: Implement Unpin for ARef
rust: barrier: Add __rust_helper to helpers
rust: blk: Add __rust_helper to helpers
rust: completion: Add __rust_helper to helpers
rust: cpu: Add __rust_helper to helpers
rust: processor: Add __rust_helper to helpers
rust: rcu: Add __rust_helper to helpers
rust: refcount: Add __rust_helper to helpers
rust: sync: Add __rust_helper to helpers
rust: task: Add __rust_helper to helpers
rust: time: Add __rust_helper to helpers
rust: wait: Add __rust_helper to helpers
rust: helpers: Move #define __rust_helper out of atomic.c
rust: sync: Inline various lock related methods
Boqun Feng (1):
arch: um/x86: Select ARCH_SUPPORTS_ATOMIC_RMW for UML_X86
FUJITA Tomonori (19):
rust: sync: set_once: Implement Send and Sync
rust: helpers: Add i8/i16 atomic_read_acquire/atomic_set_release helpers
rust: helpers: Add i8/i16 relaxed atomic helpers
rust: helpers: Add i8/i16 atomic xchg helpers
rust: helpers: Add i8/i16 atomic xchg_acquire helpers
rust: helpers: Add i8/i16 atomic xchg_release helpers
rust: helpers: Add i8/i16 atomic xchg_relaxed helpers
rust: helpers: Add i8/i16 atomic try_cmpxchg helpers
rust: helpers: Add i8/i16 atomic try_cmpxchg_acquire helpers
rust: helpers: Add i8/i16 atomic try_cmpxchg_release helpers
rust: helpers: Add i8/i16 atomic try_cmpxchg_relaxed helpers
rust: sync: atomic: Prepare AtomicOps macros for i8/i16 support
rust: sync: atomic: Add i8/i16 load and store support
rust: sync: atomic: Add store_release/load_acquire tests
rust: sync: atomic: Add i8/i16 xchg and cmpxchg support
rust: sync: atomic: Add atomic bool support via i8 representation
rust: sync: atomic: Add atomic bool tests
rust: list: Switch to kernel::sync atomic primitives
rust_binder: Switch to kernel::sync atomic primitives
arch/x86/um/Kconfig | 1 +
drivers/android/binder/rust_binder_main.rs | 20 ++---
drivers/android/binder/stats.rs | 8 +-
drivers/android/binder/thread.rs | 24 +++--
drivers/android/binder/transaction.rs | 16 ++--
rust/helpers/atomic.c | 7 +-
rust/helpers/atomic_ext.c | 139 +++++++++++++++++++++++++++++
rust/helpers/barrier.c | 6 +-
rust/helpers/blk.c | 4 +-
rust/helpers/completion.c | 2 +-
rust/helpers/cpu.c | 2 +-
rust/helpers/helpers.c | 3 +
rust/helpers/mutex.c | 13 +--
rust/helpers/processor.c | 2 +-
rust/helpers/rcu.c | 4 +-
rust/helpers/refcount.c | 10 +--
rust/helpers/signal.c | 2 +-
rust/helpers/spinlock.c | 13 +--
rust/helpers/sync.c | 4 +-
rust/helpers/task.c | 24 ++---
rust/helpers/time.c | 14 +--
rust/helpers/wait.c | 2 +-
rust/kernel/list/arc.rs | 14 ++-
rust/kernel/sync.rs | 74 +++++++++++----
rust/kernel/sync/aref.rs | 3 +
rust/kernel/sync/atomic/internal.rs | 114 ++++++++++++++++++-----
rust/kernel/sync/atomic/predefine.rs | 55 +++++++++++-
rust/kernel/sync/lock.rs | 7 ++
rust/kernel/sync/lock/global.rs | 2 +
rust/kernel/sync/lock/mutex.rs | 5 ++
rust/kernel/sync/lock/spinlock.rs | 5 ++
rust/kernel/sync/set_once.rs | 8 ++
scripts/atomic/gen-rust-atomic-helpers.sh | 5 --
33 files changed, 462 insertions(+), 150 deletions(-)
create mode 100644 rust/helpers/atomic_ext.c
^ permalink raw reply [flat|nested] 39+ messages in thread
* [PATCH 01/36] rust: sync: Refactor static_lock_class!() macro
From: Boqun Feng @ 2026-01-11 11:57 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar
Cc: rust-for-linux, linux-kernel, Will Deacon, Mark Rutland,
Thomas Gleixner, Miguel Ojeda, Gary Guo, Alice Ryhl,
Andreas Hindborg, Benno Lossin, Danilo Krummrich, Daniel Almeida,
Boqun Feng
From: Alice Ryhl <aliceryhl@google.com>
By introducing a new_static() constructor, the macro does not need to go
through MaybeUninit::uninit().assume_init(), which is a pattern that is
best avoided when possible.
The safety comment not only requires that the value is leaked, but also
that it is stored in the right portion of memory. This is so that the
lockdep static_obj() check will succeed when using this constructor. One
could argue that lockdep detects this scenario, so that safety
requirement isn't needed. However, it simplifies matters to require that
static_obj() will succeed and it's not a burdensome requirement on the
caller.
Suggested-by: Benno Lossin <lossin@kernel.org>
Reviewed-by: Daniel Almeida <daniel.almeida@collabora.com>
Reviewed-by: Benno Lossin <lossin@kernel.org>
Signed-off-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://patch.msgid.link/20250811-lock-class-key-cleanup-v3-1-b12967ee1ca2@google.com
---
rust/kernel/sync.rs | 24 ++++++++++++++++++------
1 file changed, 18 insertions(+), 6 deletions(-)
diff --git a/rust/kernel/sync.rs b/rust/kernel/sync.rs
index 5df87e2bd212..1dfbee8e9d00 100644
--- a/rust/kernel/sync.rs
+++ b/rust/kernel/sync.rs
@@ -45,6 +45,21 @@ pub struct LockClassKey {
unsafe impl Sync for LockClassKey {}
impl LockClassKey {
+ /// Initializes a statically allocated lock class key.
+ ///
+ /// This is usually used indirectly through the [`static_lock_class!`] macro.
+ ///
+ /// # Safety
+ ///
+ /// * Before using the returned value, it must be pinned in a static memory location.
+ /// * The destructor must never run on the returned `LockClassKey`.
+ #[doc(hidden)]
+ pub const unsafe fn new_static() -> Self {
+ LockClassKey {
+ inner: Opaque::uninit(),
+ }
+ }
+
/// Initializes a dynamically allocated lock class key. In the common case of using a
/// statically allocated lock class key, the static_lock_class! macro should be used instead.
///
@@ -101,12 +116,9 @@ fn drop(self: Pin<&mut Self>) {
macro_rules! static_lock_class {
() => {{
static CLASS: $crate::sync::LockClassKey =
- // Lockdep expects uninitialized memory when it's handed a statically allocated `struct
- // lock_class_key`.
- //
- // SAFETY: `LockClassKey` transparently wraps `Opaque` which permits uninitialized
- // memory.
- unsafe { ::core::mem::MaybeUninit::uninit().assume_init() };
+ // SAFETY: The returned `LockClassKey` is stored in static memory and we pin it. Drop
+ // never runs on a static global.
+ unsafe { $crate::sync::LockClassKey::new_static() };
$crate::prelude::Pin::static_ref(&CLASS)
}};
}
--
2.51.0
* [PATCH 02/36] rust: sync: Clean up LockClassKey and its docs
From: Boqun Feng @ 2026-01-11 11:57 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar
Cc: rust-for-linux, linux-kernel, Will Deacon, Mark Rutland,
Thomas Gleixner, Miguel Ojeda, Gary Guo, Alice Ryhl,
Andreas Hindborg, Benno Lossin, Danilo Krummrich, Daniel Almeida,
Boqun Feng
From: Alice Ryhl <aliceryhl@google.com>
Several aspects of the code and documentation for this type are
incomplete, and several items are hidden from the docs. Thus, clean it
up and make the rendered HTML docs easier to read.
Reviewed-by: Daniel Almeida <daniel.almeida@collabora.com>
Signed-off-by: Alice Ryhl <aliceryhl@google.com>
Reviewed-by: Benno Lossin <lossin@kernel.org>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://patch.msgid.link/20250811-lock-class-key-cleanup-v3-2-b12967ee1ca2@google.com
---
rust/kernel/sync.rs | 54 +++++++++++++++++++++++++++++++++------------
1 file changed, 40 insertions(+), 14 deletions(-)
diff --git a/rust/kernel/sync.rs b/rust/kernel/sync.rs
index 1dfbee8e9d00..b10e576221ff 100644
--- a/rust/kernel/sync.rs
+++ b/rust/kernel/sync.rs
@@ -32,7 +32,9 @@
pub use refcount::Refcount;
pub use set_once::SetOnce;
-/// Represents a lockdep class. It's a wrapper around C's `lock_class_key`.
+/// Represents a lockdep class.
+///
+/// Wraps the kernel's `struct lock_class_key`.
#[repr(transparent)]
#[pin_data(PinnedDrop)]
pub struct LockClassKey {
@@ -40,6 +42,10 @@ pub struct LockClassKey {
inner: Opaque<bindings::lock_class_key>,
}
+// SAFETY: Unregistering a lock class key from a different thread than where it was registered is
+// allowed.
+unsafe impl Send for LockClassKey {}
+
// SAFETY: `bindings::lock_class_key` is designed to be used concurrently from multiple threads and
// provides its own synchronization.
unsafe impl Sync for LockClassKey {}
@@ -47,28 +53,31 @@ unsafe impl Sync for LockClassKey {}
impl LockClassKey {
/// Initializes a statically allocated lock class key.
///
- /// This is usually used indirectly through the [`static_lock_class!`] macro.
+ /// This is usually used indirectly through the [`static_lock_class!`] macro. See its
+ /// documentation for more information.
///
/// # Safety
///
/// * Before using the returned value, it must be pinned in a static memory location.
/// * The destructor must never run on the returned `LockClassKey`.
- #[doc(hidden)]
pub const unsafe fn new_static() -> Self {
LockClassKey {
inner: Opaque::uninit(),
}
}
- /// Initializes a dynamically allocated lock class key. In the common case of using a
- /// statically allocated lock class key, the static_lock_class! macro should be used instead.
+ /// Initializes a dynamically allocated lock class key.
+ ///
+ /// In the common case of using a statically allocated lock class key, the
+ /// [`static_lock_class!`] macro should be used instead.
///
/// # Examples
+ ///
/// ```
- /// # use kernel::alloc::KBox;
- /// # use kernel::types::ForeignOwnable;
- /// # use kernel::sync::{LockClassKey, SpinLock};
- /// # use pin_init::stack_pin_init;
+ /// use kernel::alloc::KBox;
+ /// use kernel::types::ForeignOwnable;
+ /// use kernel::sync::{LockClassKey, SpinLock};
+ /// use pin_init::stack_pin_init;
///
/// let key = KBox::pin_init(LockClassKey::new_dynamic(), GFP_KERNEL)?;
/// let key_ptr = key.into_foreign();
@@ -86,7 +95,6 @@ impl LockClassKey {
/// // SAFETY: We dropped `num`, the only use of the key, so the result of the previous
/// // `borrow` has also been dropped. Thus, it's safe to use from_foreign.
/// unsafe { drop(<Pin<KBox<LockClassKey>> as ForeignOwnable>::from_foreign(key_ptr)) };
- ///
/// # Ok::<(), Error>(())
/// ```
pub fn new_dynamic() -> impl PinInit<Self> {
@@ -96,7 +104,10 @@ pub fn new_dynamic() -> impl PinInit<Self> {
})
}
- pub(crate) fn as_ptr(&self) -> *mut bindings::lock_class_key {
+ /// Returns a raw pointer to the inner C struct.
+ ///
+ /// It is up to the caller to use the raw pointer correctly.
+ pub fn as_ptr(&self) -> *mut bindings::lock_class_key {
self.inner.get()
}
}
@@ -104,14 +115,28 @@ pub(crate) fn as_ptr(&self) -> *mut bindings::lock_class_key {
#[pinned_drop]
impl PinnedDrop for LockClassKey {
fn drop(self: Pin<&mut Self>) {
- // SAFETY: self.as_ptr was registered with lockdep and self is pinned, so the address
- // hasn't changed. Thus, it's safe to pass to unregister.
+ // SAFETY: `self.as_ptr()` was registered with lockdep and `self` is pinned, so the address
+ // hasn't changed. Thus, it's safe to pass it to unregister.
unsafe { bindings::lockdep_unregister_key(self.as_ptr()) }
}
}
/// Defines a new static lock class and returns a pointer to it.
-#[doc(hidden)]
+///
+/// # Examples
+///
+/// ```
+/// use kernel::c_str;
+/// use kernel::sync::{static_lock_class, Arc, SpinLock};
+///
+/// fn new_locked_int() -> Result<Arc<SpinLock<u32>>> {
+/// Arc::pin_init(SpinLock::new(
+/// 42,
+/// c_str!("new_locked_int"),
+/// static_lock_class!(),
+/// ), GFP_KERNEL)
+/// }
+/// ```
#[macro_export]
macro_rules! static_lock_class {
() => {{
@@ -122,6 +147,7 @@ macro_rules! static_lock_class {
$crate::prelude::Pin::static_ref(&CLASS)
}};
}
+pub use static_lock_class;
/// Returns the given string, if one is provided, otherwise generates one based on the source code
/// location.
--
2.51.0
* [PATCH 03/36] rust: sync: set_once: Implement Send and Sync
From: Boqun Feng @ 2026-01-11 12:01 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar
Cc: rust-for-linux, linux-kernel, Will Deacon, Mark Rutland,
Thomas Gleixner, Miguel Ojeda, Gary Guo, Alice Ryhl,
Andreas Hindborg, Benno Lossin, Danilo Krummrich, FUJITA Tomonori,
Boqun Feng
From: FUJITA Tomonori <fujita.tomonori@gmail.com>
Implement Send and Sync for SetOnce<T> to allow it to be used across
thread boundaries.
Send: SetOnce<T> can be transferred across threads when T: Send, as
the contained value is also transferred and will be dropped on the
destination thread.
Sync: SetOnce<T> can be shared across threads when T: Sync, as
as_ref() provides shared references &T and atomic operations ensure
proper synchronization. Since the inner T may be dropped on any
thread, we also require T: Send.
Signed-off-by: FUJITA Tomonori <fujita.tomonori@gmail.com>
Reviewed-by: Andreas Hindborg <a.hindborg@kernel.org>
Reviewed-by: Gary Guo <gary@garyguo.net>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://patch.msgid.link/20251216000901.221375-1-fujita.tomonori@gmail.com
---
rust/kernel/sync/set_once.rs | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/rust/kernel/sync/set_once.rs b/rust/kernel/sync/set_once.rs
index bdba601807d8..139cef05e935 100644
--- a/rust/kernel/sync/set_once.rs
+++ b/rust/kernel/sync/set_once.rs
@@ -123,3 +123,11 @@ fn drop(&mut self) {
}
}
}
+
+// SAFETY: `SetOnce` can be transferred across thread boundaries iff the data it contains can.
+unsafe impl<T: Send> Send for SetOnce<T> {}
+
+// SAFETY: `SetOnce` synchronises access to the inner value via atomic operations,
+// so shared references are safe when `T: Sync`. Since the inner `T` may be dropped
+// on any thread, we also require `T: Send`.
+unsafe impl<T: Send + Sync> Sync for SetOnce<T> {}
--
2.51.0
* [PATCH 04/36] rust: sync: Implement Unpin for ARef
From: Boqun Feng @ 2026-01-11 12:01 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar
Cc: rust-for-linux, linux-kernel, Will Deacon, Mark Rutland,
Thomas Gleixner, Miguel Ojeda, Gary Guo, Alice Ryhl,
Andreas Hindborg, Benno Lossin, Danilo Krummrich, Daniel Almeida,
Alexandre Courbot, Boqun Feng
From: Alice Ryhl <aliceryhl@google.com>
The default implementation of Unpin for ARef<T> is conditional on T
being Unpin due to its PhantomData<T> field. However, this is overly
strict as pointers to T are legal to move even if T itself cannot move.
Since commit 66f1ea83d9f8 ("rust: lock: Add a Pin<&mut T> accessor")
this causes build failures when combined with a Mutex that contains an
ARef<T> field, because almost any type that ARef is used with is !Unpin.
Reviewed-by: Daniel Almeida <daniel.almeida@collabora.com>
Signed-off-by: Alice Ryhl <aliceryhl@google.com>
Reviewed-by: Alexandre Courbot <acourbot@nvidia.com>
Reviewed-by: Benno Lossin <lossin@kernel.org>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://patch.msgid.link/20251218-unpin-for-aref-v2-1-30d77129cbc6@google.com
---
rust/kernel/sync/aref.rs | 3 +++
1 file changed, 3 insertions(+)
diff --git a/rust/kernel/sync/aref.rs b/rust/kernel/sync/aref.rs
index 0d24a0432015..0616c0353c2b 100644
--- a/rust/kernel/sync/aref.rs
+++ b/rust/kernel/sync/aref.rs
@@ -83,6 +83,9 @@ unsafe impl<T: AlwaysRefCounted + Sync + Send> Send for ARef<T> {}
// example, when the reference count reaches zero and `T` is dropped.
unsafe impl<T: AlwaysRefCounted + Sync + Send> Sync for ARef<T> {}
+// Even if `T` is pinned, pointers to `T` can still move.
+impl<T: AlwaysRefCounted> Unpin for ARef<T> {}
+
impl<T: AlwaysRefCounted> ARef<T> {
/// Creates a new instance of [`ARef`].
///
--
2.51.0
* [PATCH 05/36] rust: helpers: Add i8/i16 atomic_read_acquire/atomic_set_release helpers
From: Boqun Feng @ 2026-01-11 12:01 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar
Cc: rust-for-linux, linux-kernel, Will Deacon, Mark Rutland,
Thomas Gleixner, Miguel Ojeda, Gary Guo, Alice Ryhl,
Andreas Hindborg, Benno Lossin, Danilo Krummrich, FUJITA Tomonori,
Joel Fernandes, Boqun Feng
From: FUJITA Tomonori <fujita.tomonori@gmail.com>
Add helper functions to expose smp_load_acquire() and
smp_store_release() for i8 and i16 types.
The smp_load_acquire() and smp_store_release() macros require type
information (sizeof) to generate appropriate architecture-specific
memory ordering instructions. Therefore, separate helper functions are
needed for each type size.
These helpers expose different symbol names than their C counterparts,
so they are split into atomic_ext.c instead of atomic.c. The symbol
names, atomic_[i8|i16]_read_acquire and atomic_[i8|i16]_set_release,
make the Rust/C interface clear and consistent with i32/i64.
These helpers will be used by the upcoming Atomic<i8> and Atomic<i16>
implementation to provide proper Acquire/Release semantics across all
architectures.
[boqun: Rename the functions from {load,store} to {read,set} to avoid
future adjustment]
Signed-off-by: FUJITA Tomonori <fujita.tomonori@gmail.com>
Reviewed-by: Gary Guo <gary@garyguo.net>
Reviewed-by: Joel Fernandes <joelagnelf@nvidia.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://patch.msgid.link/20251211113826.1299077-2-fujita.tomonori@gmail.com
---
rust/helpers/atomic_ext.c | 23 +++++++++++++++++++++++
rust/helpers/helpers.c | 1 +
2 files changed, 24 insertions(+)
create mode 100644 rust/helpers/atomic_ext.c
diff --git a/rust/helpers/atomic_ext.c b/rust/helpers/atomic_ext.c
new file mode 100644
index 000000000000..1fb624147aa4
--- /dev/null
+++ b/rust/helpers/atomic_ext.c
@@ -0,0 +1,23 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <asm/barrier.h>
+
+__rust_helper s8 rust_helper_atomic_i8_read_acquire(s8 *ptr)
+{
+ return smp_load_acquire(ptr);
+}
+
+__rust_helper s16 rust_helper_atomic_i16_read_acquire(s16 *ptr)
+{
+ return smp_load_acquire(ptr);
+}
+
+__rust_helper void rust_helper_atomic_i8_set_release(s8 *ptr, s8 val)
+{
+ smp_store_release(ptr, val);
+}
+
+__rust_helper void rust_helper_atomic_i16_set_release(s16 *ptr, s16 val)
+{
+ smp_store_release(ptr, val);
+}
diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c
index 79c72762ad9c..15d75578f459 100644
--- a/rust/helpers/helpers.c
+++ b/rust/helpers/helpers.c
@@ -8,6 +8,7 @@
*/
#include "atomic.c"
+#include "atomic_ext.c"
#include "auxiliary.c"
#include "barrier.c"
#include "binder.c"
--
2.51.0
* [PATCH 06/36] rust: helpers: Add i8/i16 relaxed atomic helpers
From: Boqun Feng @ 2026-01-11 12:01 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar
Cc: rust-for-linux, linux-kernel, Will Deacon, Mark Rutland,
Thomas Gleixner, Miguel Ojeda, Gary Guo, Alice Ryhl,
Andreas Hindborg, Benno Lossin, Danilo Krummrich, FUJITA Tomonori,
Joel Fernandes, Boqun Feng
From: FUJITA Tomonori <fujita.tomonori@gmail.com>
Add READ_ONCE/WRITE_ONCE based helpers for i8 and i16 types to support
relaxed atomic operations in Rust.
While relaxed operations could be implemented purely in Rust using
read_volatile() and write_volatile(), using C's READ_ONCE() and
WRITE_ONCE() macros ensures complete consistency with the kernel
memory model.
These helpers expose different symbol names than their C counterparts,
so they are split into atomic_ext.c instead of atomic.c. The names make
the Rust/C interface clear and consistent with i32/i64.
[boqun: Rename the functions from {load,store} to {read,set} to avoid
future adjustment]
Signed-off-by: FUJITA Tomonori <fujita.tomonori@gmail.com>
Reviewed-by: Gary Guo <gary@garyguo.net>
Reviewed-by: Joel Fernandes <joelagnelf@nvidia.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://patch.msgid.link/20251211113826.1299077-3-fujita.tomonori@gmail.com
---
rust/helpers/atomic_ext.c | 21 +++++++++++++++++++++
1 file changed, 21 insertions(+)
diff --git a/rust/helpers/atomic_ext.c b/rust/helpers/atomic_ext.c
index 1fb624147aa4..02e05b4246ae 100644
--- a/rust/helpers/atomic_ext.c
+++ b/rust/helpers/atomic_ext.c
@@ -1,22 +1,43 @@
// SPDX-License-Identifier: GPL-2.0
#include <asm/barrier.h>
+#include <asm/rwonce.h>
+
+__rust_helper s8 rust_helper_atomic_i8_read(s8 *ptr)
+{
+ return READ_ONCE(*ptr);
+}
__rust_helper s8 rust_helper_atomic_i8_read_acquire(s8 *ptr)
{
return smp_load_acquire(ptr);
}
+__rust_helper s16 rust_helper_atomic_i16_read(s16 *ptr)
+{
+ return READ_ONCE(*ptr);
+}
+
__rust_helper s16 rust_helper_atomic_i16_read_acquire(s16 *ptr)
{
return smp_load_acquire(ptr);
}
+__rust_helper void rust_helper_atomic_i8_set(s8 *ptr, s8 val)
+{
+ WRITE_ONCE(*ptr, val);
+}
+
__rust_helper void rust_helper_atomic_i8_set_release(s8 *ptr, s8 val)
{
smp_store_release(ptr, val);
}
+__rust_helper void rust_helper_atomic_i16_set(s16 *ptr, s16 val)
+{
+ WRITE_ONCE(*ptr, val);
+}
+
__rust_helper void rust_helper_atomic_i16_set_release(s16 *ptr, s16 val)
{
smp_store_release(ptr, val);
--
2.51.0
* [PATCH 07/36] rust: helpers: Add i8/i16 atomic xchg helpers
From: Boqun Feng @ 2026-01-11 12:01 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar
Cc: rust-for-linux, linux-kernel, Will Deacon, Mark Rutland,
Thomas Gleixner, Miguel Ojeda, Gary Guo, Alice Ryhl,
Andreas Hindborg, Benno Lossin, Danilo Krummrich, FUJITA Tomonori,
Boqun Feng
From: FUJITA Tomonori <fujita.tomonori@gmail.com>
Add i8/i16 atomic xchg helpers that call the xchg() macro, which
implements atomic exchange using architecture-specific instructions.
[boqun: Use xchg() instead of raw_xchg()]
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: FUJITA Tomonori <fujita.tomonori@gmail.com>
Reviewed-by: Gary Guo <gary@garyguo.net>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://patch.msgid.link/20251223062140.938325-2-fujita.tomonori@gmail.com
---
rust/helpers/atomic_ext.c | 18 ++++++++++++++++++
1 file changed, 18 insertions(+)
diff --git a/rust/helpers/atomic_ext.c b/rust/helpers/atomic_ext.c
index 02e05b4246ae..3136255a84c6 100644
--- a/rust/helpers/atomic_ext.c
+++ b/rust/helpers/atomic_ext.c
@@ -2,6 +2,7 @@
#include <asm/barrier.h>
#include <asm/rwonce.h>
+#include <linux/atomic.h>
__rust_helper s8 rust_helper_atomic_i8_read(s8 *ptr)
{
@@ -42,3 +43,20 @@ __rust_helper void rust_helper_atomic_i16_set_release(s16 *ptr, s16 val)
{
smp_store_release(ptr, val);
}
+
+/*
+ * xchg helpers depend on ARCH_SUPPORTS_ATOMIC_RMW and on the
+ * architecture providing xchg() support for i8 and i16.
+ *
+ * The architectures that currently support Rust (x86_64, armv7,
+ * arm64, riscv, and loongarch) satisfy these requirements.
+ */
+__rust_helper s8 rust_helper_atomic_i8_xchg(s8 *ptr, s8 new)
+{
+ return xchg(ptr, new);
+}
+
+__rust_helper s16 rust_helper_atomic_i16_xchg(s16 *ptr, s16 new)
+{
+ return xchg(ptr, new);
+}
--
2.51.0
* [PATCH 08/36] rust: helpers: Add i8/i16 atomic xchg_acquire helpers
From: Boqun Feng @ 2026-01-11 12:01 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar
Cc: rust-for-linux, linux-kernel, Will Deacon, Mark Rutland,
Thomas Gleixner, Miguel Ojeda, Gary Guo, Alice Ryhl,
Andreas Hindborg, Benno Lossin, Danilo Krummrich, FUJITA Tomonori,
Boqun Feng
From: FUJITA Tomonori <fujita.tomonori@gmail.com>
Add i8/i16 atomic xchg_acquire helpers that call the xchg_acquire()
macro, which implements atomic exchange with acquire ordering using
architecture-specific instructions.
[boqun: Use xchg_acquire() instead of raw_xchg_acquire()]
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: FUJITA Tomonori <fujita.tomonori@gmail.com>
Reviewed-by: Gary Guo <gary@garyguo.net>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://patch.msgid.link/20251223062140.938325-3-fujita.tomonori@gmail.com
---
rust/helpers/atomic_ext.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/rust/helpers/atomic_ext.c b/rust/helpers/atomic_ext.c
index 3136255a84c6..177bb3603e5f 100644
--- a/rust/helpers/atomic_ext.c
+++ b/rust/helpers/atomic_ext.c
@@ -60,3 +60,13 @@ __rust_helper s16 rust_helper_atomic_i16_xchg(s16 *ptr, s16 new)
{
return xchg(ptr, new);
}
+
+__rust_helper s8 rust_helper_atomic_i8_xchg_acquire(s8 *ptr, s8 new)
+{
+ return xchg_acquire(ptr, new);
+}
+
+__rust_helper s16 rust_helper_atomic_i16_xchg_acquire(s16 *ptr, s16 new)
+{
+ return xchg_acquire(ptr, new);
+}
--
2.51.0
* [PATCH 09/36] rust: helpers: Add i8/i16 atomic xchg_release helpers
From: Boqun Feng @ 2026-01-11 12:01 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar
Cc: rust-for-linux, linux-kernel, Will Deacon, Mark Rutland,
Thomas Gleixner, Miguel Ojeda, Gary Guo, Alice Ryhl,
Andreas Hindborg, Benno Lossin, Danilo Krummrich, FUJITA Tomonori,
Boqun Feng
From: FUJITA Tomonori <fujita.tomonori@gmail.com>
Add i8/i16 atomic xchg_release helpers that call the xchg_release()
macro, which implements atomic exchange with release ordering using
architecture-specific instructions.
[boqun: Use xchg_release() instead of raw_xchg_release()]
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: FUJITA Tomonori <fujita.tomonori@gmail.com>
Reviewed-by: Gary Guo <gary@garyguo.net>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://patch.msgid.link/20251223062140.938325-4-fujita.tomonori@gmail.com
---
rust/helpers/atomic_ext.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/rust/helpers/atomic_ext.c b/rust/helpers/atomic_ext.c
index 177bb3603e5f..2b976a7ad3d7 100644
--- a/rust/helpers/atomic_ext.c
+++ b/rust/helpers/atomic_ext.c
@@ -70,3 +70,13 @@ __rust_helper s16 rust_helper_atomic_i16_xchg_acquire(s16 *ptr, s16 new)
{
return xchg_acquire(ptr, new);
}
+
+__rust_helper s8 rust_helper_atomic_i8_xchg_release(s8 *ptr, s8 new)
+{
+ return xchg_release(ptr, new);
+}
+
+__rust_helper s16 rust_helper_atomic_i16_xchg_release(s16 *ptr, s16 new)
+{
+ return xchg_release(ptr, new);
+}
--
2.51.0
* [PATCH 10/36] rust: helpers: Add i8/i16 atomic xchg_relaxed helpers
From: Boqun Feng @ 2026-01-11 12:01 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar
Cc: rust-for-linux, linux-kernel, Will Deacon, Mark Rutland,
Thomas Gleixner, Miguel Ojeda, Gary Guo, Alice Ryhl,
Andreas Hindborg, Benno Lossin, Danilo Krummrich, FUJITA Tomonori,
Boqun Feng
From: FUJITA Tomonori <fujita.tomonori@gmail.com>
Add i8/i16 atomic xchg_relaxed helpers that call the xchg_relaxed()
macro, which implements atomic xchg_relaxed using
architecture-specific instructions.
[boqun: Use xchg_relaxed() instead of raw_xchg_relaxed()]
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: FUJITA Tomonori <fujita.tomonori@gmail.com>
Reviewed-by: Gary Guo <gary@garyguo.net>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://patch.msgid.link/20251223062140.938325-5-fujita.tomonori@gmail.com
---
rust/helpers/atomic_ext.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/rust/helpers/atomic_ext.c b/rust/helpers/atomic_ext.c
index 2b976a7ad3d7..76e392c39c7b 100644
--- a/rust/helpers/atomic_ext.c
+++ b/rust/helpers/atomic_ext.c
@@ -80,3 +80,13 @@ __rust_helper s16 rust_helper_atomic_i16_xchg_release(s16 *ptr, s16 new)
{
return xchg_release(ptr, new);
}
+
+__rust_helper s8 rust_helper_atomic_i8_xchg_relaxed(s8 *ptr, s8 new)
+{
+ return xchg_relaxed(ptr, new);
+}
+
+__rust_helper s16 rust_helper_atomic_i16_xchg_relaxed(s16 *ptr, s16 new)
+{
+ return xchg_relaxed(ptr, new);
+}
--
2.51.0
^ permalink raw reply related [flat|nested] 39+ messages in thread
* [PATCH 11/36] rust: helpers: Add i8/i16 atomic try_cmpxchg helpers
2026-01-11 12:01 ` [PATCH 03/36] rust: sync: set_once: Implement Send and Sync Boqun Feng
` (6 preceding siblings ...)
2026-01-11 12:01 ` [PATCH 10/36] rust: helpers: Add i8/i16 atomic xchg_relaxed helpers Boqun Feng
@ 2026-01-11 12:01 ` Boqun Feng
2026-01-11 12:01 ` [PATCH 12/36] rust: helpers: Add i8/i16 atomic try_cmpxchg_acquire helpers Boqun Feng
` (24 subsequent siblings)
32 siblings, 0 replies; 39+ messages in thread
From: Boqun Feng @ 2026-01-11 12:01 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar
Cc: rust-for-linux, linux-kernel, Will Deacon, Mark Rutland,
Thomas Gleixner, Miguel Ojeda, Gary Guo, Alice Ryhl,
Andreas Hindborg, Benno Lossin, Danilo Krummrich, FUJITA Tomonori,
Boqun Feng
From: FUJITA Tomonori <fujita.tomonori@gmail.com>
Add i8/i16 atomic try_cmpxchg helpers that call the try_cmpxchg()
macro, which implements atomic try_cmpxchg using
architecture-specific instructions.
[boqun: Add comments explaining CONFIG_ARCH_SUPPORTS_ATOMIC_RMW and use
try_cmpxchg() instead of raw_try_cmpxchg()]
Signed-off-by: FUJITA Tomonori <fujita.tomonori@gmail.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://patch.msgid.link/20251227115951.1424458-2-fujita.tomonori@gmail.com
---
rust/helpers/atomic_ext.c | 17 +++++++++++++++++
1 file changed, 17 insertions(+)
diff --git a/rust/helpers/atomic_ext.c b/rust/helpers/atomic_ext.c
index 76e392c39c7b..5ee127f1cc80 100644
--- a/rust/helpers/atomic_ext.c
+++ b/rust/helpers/atomic_ext.c
@@ -90,3 +90,20 @@ __rust_helper s16 rust_helper_atomic_i16_xchg_relaxed(s16 *ptr, s16 new)
{
return xchg_relaxed(ptr, new);
}
+
+/*
+ * try_cmpxchg helpers depend on ARCH_SUPPORTS_ATOMIC_RMW and on the
+ * architecture providing try_cmpxchg() support for i8 and i16.
+ *
+ * The architectures that currently support Rust (x86_64, armv7,
+ * arm64, riscv, and loongarch) satisfy these requirements.
+ */
+__rust_helper bool rust_helper_atomic_i8_try_cmpxchg(s8 *ptr, s8 *old, s8 new)
+{
+ return try_cmpxchg(ptr, old, new);
+}
+
+__rust_helper bool rust_helper_atomic_i16_try_cmpxchg(s16 *ptr, s16 *old, s16 new)
+{
+ return try_cmpxchg(ptr, old, new);
+}
--
2.51.0
^ permalink raw reply related [flat|nested] 39+ messages in thread
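[Editor's note: the kernel's try_cmpxchg() returns a bool and, on failure, updates *old to the value actually observed, so retry loops need no extra load. A hedged userspace sketch of that contract using Rust's std `compare_exchange` (the name `try_cmpxchg_i8` is invented for illustration, not the kernel API):]

```rust
use std::sync::atomic::{AtomicI8, Ordering};

// Mimic the kernel contract: return true on success; on failure,
// write the observed value back through `old`.
fn try_cmpxchg_i8(a: &AtomicI8, old: &mut i8, new: i8) -> bool {
    match a.compare_exchange(*old, new, Ordering::SeqCst, Ordering::Relaxed) {
        Ok(_) => true,
        Err(cur) => {
            *old = cur; // refresh the caller's expectation
            false
        }
    }
}

fn main() {
    let a = AtomicI8::new(3);
    let mut old = 0; // stale expectation
    assert!(!try_cmpxchg_i8(&a, &mut old, 9)); // fails: 3 != 0
    assert_eq!(old, 3); // failure refreshed `old`
    assert!(try_cmpxchg_i8(&a, &mut old, 9)); // retry now succeeds
    assert_eq!(a.load(Ordering::Relaxed), 9);
}
```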
* [PATCH 12/36] rust: helpers: Add i8/i16 atomic try_cmpxchg_acquire helpers
2026-01-11 12:01 ` [PATCH 03/36] rust: sync: set_once: Implement Send and Sync Boqun Feng
` (7 preceding siblings ...)
2026-01-11 12:01 ` [PATCH 11/36] rust: helpers: Add i8/i16 atomic try_cmpxchg helpers Boqun Feng
@ 2026-01-11 12:01 ` Boqun Feng
2026-01-11 12:01 ` [PATCH 13/36] rust: helpers: Add i8/i16 atomic try_cmpxchg_release helpers Boqun Feng
` (23 subsequent siblings)
32 siblings, 0 replies; 39+ messages in thread
From: Boqun Feng @ 2026-01-11 12:01 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar
Cc: rust-for-linux, linux-kernel, Will Deacon, Mark Rutland,
Thomas Gleixner, Miguel Ojeda, Gary Guo, Alice Ryhl,
Andreas Hindborg, Benno Lossin, Danilo Krummrich, FUJITA Tomonori,
Boqun Feng
From: FUJITA Tomonori <fujita.tomonori@gmail.com>
Add i8/i16 atomic try_cmpxchg_acquire helpers that call the
try_cmpxchg_acquire() macro, which implements atomic
try_cmpxchg_acquire using architecture-specific instructions.
[boqun: Use try_cmpxchg_acquire() instead of raw_try_cmpxchg_acquire()]
Signed-off-by: FUJITA Tomonori <fujita.tomonori@gmail.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://patch.msgid.link/20251227115951.1424458-3-fujita.tomonori@gmail.com
---
rust/helpers/atomic_ext.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/rust/helpers/atomic_ext.c b/rust/helpers/atomic_ext.c
index 5ee127f1cc80..b6efec14e5b3 100644
--- a/rust/helpers/atomic_ext.c
+++ b/rust/helpers/atomic_ext.c
@@ -107,3 +107,13 @@ __rust_helper bool rust_helper_atomic_i16_try_cmpxchg(s16 *ptr, s16 *old, s16 ne
{
return try_cmpxchg(ptr, old, new);
}
+
+__rust_helper bool rust_helper_atomic_i8_try_cmpxchg_acquire(s8 *ptr, s8 *old, s8 new)
+{
+ return try_cmpxchg_acquire(ptr, old, new);
+}
+
+__rust_helper bool rust_helper_atomic_i16_try_cmpxchg_acquire(s16 *ptr, s16 *old, s16 new)
+{
+ return try_cmpxchg_acquire(ptr, old, new);
+}
--
2.51.0
^ permalink raw reply related [flat|nested] 39+ messages in thread
* [PATCH 13/36] rust: helpers: Add i8/i16 atomic try_cmpxchg_release helpers
2026-01-11 12:01 ` [PATCH 03/36] rust: sync: set_once: Implement Send and Sync Boqun Feng
` (8 preceding siblings ...)
2026-01-11 12:01 ` [PATCH 12/36] rust: helpers: Add i8/i16 atomic try_cmpxchg_acquire helpers Boqun Feng
@ 2026-01-11 12:01 ` Boqun Feng
2026-01-11 12:01 ` [PATCH 14/36] rust: helpers: Add i8/i16 atomic try_cmpxchg_relaxed helpers Boqun Feng
` (22 subsequent siblings)
32 siblings, 0 replies; 39+ messages in thread
From: Boqun Feng @ 2026-01-11 12:01 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar
Cc: rust-for-linux, linux-kernel, Will Deacon, Mark Rutland,
Thomas Gleixner, Miguel Ojeda, Gary Guo, Alice Ryhl,
Andreas Hindborg, Benno Lossin, Danilo Krummrich, FUJITA Tomonori,
Boqun Feng
From: FUJITA Tomonori <fujita.tomonori@gmail.com>
Add i8/i16 atomic try_cmpxchg_release helpers that call the
try_cmpxchg_release() macro, which implements atomic
try_cmpxchg_release using architecture-specific instructions.
[boqun: Use try_cmpxchg_release() instead of raw_try_cmpxchg_release()]
Signed-off-by: FUJITA Tomonori <fujita.tomonori@gmail.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://patch.msgid.link/20251227115951.1424458-4-fujita.tomonori@gmail.com
---
rust/helpers/atomic_ext.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/rust/helpers/atomic_ext.c b/rust/helpers/atomic_ext.c
index b6efec14e5b3..962ea05dfb9c 100644
--- a/rust/helpers/atomic_ext.c
+++ b/rust/helpers/atomic_ext.c
@@ -117,3 +117,13 @@ __rust_helper bool rust_helper_atomic_i16_try_cmpxchg_acquire(s16 *ptr, s16 *old
{
return try_cmpxchg_acquire(ptr, old, new);
}
+
+__rust_helper bool rust_helper_atomic_i8_try_cmpxchg_release(s8 *ptr, s8 *old, s8 new)
+{
+ return try_cmpxchg_release(ptr, old, new);
+}
+
+__rust_helper bool rust_helper_atomic_i16_try_cmpxchg_release(s16 *ptr, s16 *old, s16 new)
+{
+ return try_cmpxchg_release(ptr, old, new);
+}
--
2.51.0
^ permalink raw reply related [flat|nested] 39+ messages in thread
* [PATCH 14/36] rust: helpers: Add i8/i16 atomic try_cmpxchg_relaxed helpers
2026-01-11 12:01 ` [PATCH 03/36] rust: sync: set_once: Implement Send and Sync Boqun Feng
` (9 preceding siblings ...)
2026-01-11 12:01 ` [PATCH 13/36] rust: helpers: Add i8/i16 atomic try_cmpxchg_release helpers Boqun Feng
@ 2026-01-11 12:01 ` Boqun Feng
2026-01-11 12:01 ` [PATCH 15/36] rust: sync: atomic: Prepare AtomicOps macros for i8/i16 support Boqun Feng
` (21 subsequent siblings)
32 siblings, 0 replies; 39+ messages in thread
From: Boqun Feng @ 2026-01-11 12:01 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar
Cc: rust-for-linux, linux-kernel, Will Deacon, Mark Rutland,
Thomas Gleixner, Miguel Ojeda, Gary Guo, Alice Ryhl,
Andreas Hindborg, Benno Lossin, Danilo Krummrich, FUJITA Tomonori,
Boqun Feng
From: FUJITA Tomonori <fujita.tomonori@gmail.com>
Add i8/i16 atomic try_cmpxchg_relaxed helpers that call the
try_cmpxchg_relaxed() macro, which implements atomic
try_cmpxchg_relaxed using architecture-specific instructions.
[boqun: Use try_cmpxchg_relaxed() instead of raw_try_cmpxchg_relaxed()]
Signed-off-by: FUJITA Tomonori <fujita.tomonori@gmail.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://patch.msgid.link/20251227115951.1424458-5-fujita.tomonori@gmail.com
---
rust/helpers/atomic_ext.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/rust/helpers/atomic_ext.c b/rust/helpers/atomic_ext.c
index 962ea05dfb9c..7d0c2bd340da 100644
--- a/rust/helpers/atomic_ext.c
+++ b/rust/helpers/atomic_ext.c
@@ -127,3 +127,13 @@ __rust_helper bool rust_helper_atomic_i16_try_cmpxchg_release(s16 *ptr, s16 *old
{
return try_cmpxchg_release(ptr, old, new);
}
+
+__rust_helper bool rust_helper_atomic_i8_try_cmpxchg_relaxed(s8 *ptr, s8 *old, s8 new)
+{
+ return try_cmpxchg_relaxed(ptr, old, new);
+}
+
+__rust_helper bool rust_helper_atomic_i16_try_cmpxchg_relaxed(s16 *ptr, s16 *old, s16 new)
+{
+ return try_cmpxchg_relaxed(ptr, old, new);
+}
--
2.51.0
^ permalink raw reply related [flat|nested] 39+ messages in thread
* [PATCH 15/36] rust: sync: atomic: Prepare AtomicOps macros for i8/i16 support
2026-01-11 12:01 ` [PATCH 03/36] rust: sync: set_once: Implement Send and Sync Boqun Feng
` (10 preceding siblings ...)
2026-01-11 12:01 ` [PATCH 14/36] rust: helpers: Add i8/i16 atomic try_cmpxchg_relaxed helpers Boqun Feng
@ 2026-01-11 12:01 ` Boqun Feng
2026-01-11 12:01 ` [PATCH 16/36] arch: um/x86: Select ARCH_SUPPORTS_ATOMIC_RMW for UML_X86 Boqun Feng
` (20 subsequent siblings)
32 siblings, 0 replies; 39+ messages in thread
From: Boqun Feng @ 2026-01-11 12:01 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar
Cc: rust-for-linux, linux-kernel, Will Deacon, Mark Rutland,
Thomas Gleixner, Miguel Ojeda, Gary Guo, Alice Ryhl,
Andreas Hindborg, Benno Lossin, Danilo Krummrich, FUJITA Tomonori,
Boqun Feng
From: FUJITA Tomonori <fujita.tomonori@gmail.com>
Rework the internal AtomicOps macro plumbing to generate per-type
implementations from a mapping list.
Capture the trait definition once and reuse it for both declaration
and per-type impl expansion to reduce duplication and keep future
extensions simple.
This is a preparatory refactor for enabling i8/i16 atomics cleanly.
Signed-off-by: FUJITA Tomonori <fujita.tomonori@gmail.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://patch.msgid.link/20251228120546.1602275-2-fujita.tomonori@gmail.com
---
rust/kernel/sync/atomic/internal.rs | 85 ++++++++++++++++++++++-------
1 file changed, 66 insertions(+), 19 deletions(-)
diff --git a/rust/kernel/sync/atomic/internal.rs b/rust/kernel/sync/atomic/internal.rs
index 6fdd8e59f45b..41b4ce2935e3 100644
--- a/rust/kernel/sync/atomic/internal.rs
+++ b/rust/kernel/sync/atomic/internal.rs
@@ -156,16 +156,17 @@ fn [< atomic_ $func >]($($arg: $arg_type,)*) $(-> $ret)? {
}
}
-// Delcares $ops trait with methods and implements the trait for `i32` and `i64`.
-macro_rules! declare_and_impl_atomic_methods {
- ($(#[$attr:meta])* $pub:vis trait $ops:ident {
- $(
- $(#[doc=$doc:expr])*
- fn $func:ident [$($variant:ident),*]($($arg_sig:tt)*) $( -> $ret:ty)? {
- $unsafe:tt { bindings::#call($($arg:tt)*) }
- }
- )*
- }) => {
+macro_rules! declare_atomic_ops_trait {
+ (
+ $(#[$attr:meta])* $pub:vis trait $ops:ident {
+ $(
+ $(#[doc=$doc:expr])*
+ fn $func:ident [$($variant:ident),*]($($arg_sig:tt)*) $( -> $ret:ty)? {
+ $unsafe:tt { bindings::#call($($arg:tt)*) }
+ }
+ )*
+ }
+ ) => {
$(#[$attr])*
$pub trait $ops: AtomicImpl {
$(
@@ -175,21 +176,25 @@ fn $func:ident [$($variant:ident),*]($($arg_sig:tt)*) $( -> $ret:ty)? {
);
)*
}
+ }
+}
- impl $ops for i32 {
+macro_rules! impl_atomic_ops_for_one {
+ (
+ $ty:ty => $ctype:ident,
+ $(#[$attr:meta])* $pub:vis trait $ops:ident {
$(
- impl_atomic_method!(
- (atomic) $func[$($variant)*]($($arg_sig)*) $(-> $ret)? {
- $unsafe { call($($arg)*) }
- }
- );
+ $(#[doc=$doc:expr])*
+ fn $func:ident [$($variant:ident),*]($($arg_sig:tt)*) $( -> $ret:ty)? {
+ $unsafe:tt { bindings::#call($($arg:tt)*) }
+ }
)*
}
-
- impl $ops for i64 {
+ ) => {
+ impl $ops for $ty {
$(
impl_atomic_method!(
- (atomic64) $func[$($variant)*]($($arg_sig)*) $(-> $ret)? {
+ ($ctype) $func[$($variant)*]($($arg_sig)*) $(-> $ret)? {
$unsafe { call($($arg)*) }
}
);
@@ -198,7 +203,47 @@ impl $ops for i64 {
}
}
+// Declares $ops trait with methods and implements the trait.
+macro_rules! declare_and_impl_atomic_methods {
+ (
+ [ $($map:tt)* ]
+ $(#[$attr:meta])* $pub:vis trait $ops:ident { $($body:tt)* }
+ ) => {
+ declare_and_impl_atomic_methods!(
+ @with_ops_def
+ [ $($map)* ]
+ ( $(#[$attr])* $pub trait $ops { $($body)* } )
+ );
+ };
+
+ (@with_ops_def [ $($map:tt)* ] ( $($ops_def:tt)* )) => {
+ declare_atomic_ops_trait!( $($ops_def)* );
+
+ declare_and_impl_atomic_methods!(
+ @munch
+ [ $($map)* ]
+ ( $($ops_def)* )
+ );
+ };
+
+ (@munch [] ( $($ops_def:tt)* )) => {};
+
+ (@munch [ $ty:ty => $ctype:ident $(, $($rest:tt)*)? ] ( $($ops_def:tt)* )) => {
+ impl_atomic_ops_for_one!(
+ $ty => $ctype,
+ $($ops_def)*
+ );
+
+ declare_and_impl_atomic_methods!(
+ @munch
+ [ $($($rest)*)? ]
+ ( $($ops_def)* )
+ );
+ };
+}
+
declare_and_impl_atomic_methods!(
+ [ i32 => atomic, i64 => atomic64 ]
/// Basic atomic operations
pub trait AtomicBasicOps {
/// Atomic read (load).
@@ -216,6 +261,7 @@ fn set[release](a: &AtomicRepr<Self>, v: Self) {
);
declare_and_impl_atomic_methods!(
+ [ i32 => atomic, i64 => atomic64 ]
/// Exchange and compare-and-exchange atomic operations
pub trait AtomicExchangeOps {
/// Atomic exchange.
@@ -243,6 +289,7 @@ fn try_cmpxchg[acquire, release, relaxed](
);
declare_and_impl_atomic_methods!(
+ [ i32 => atomic, i64 => atomic64 ]
/// Atomic arithmetic operations
pub trait AtomicArithmeticOps {
/// Atomic add (wrapping).
--
2.51.0
^ permalink raw reply related [flat|nested] 39+ messages in thread
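[Editor's note: the `@munch` rules above recurse over a `ty => ctype` mapping list, emitting one trait impl per entry. A hedged, self-contained sketch of the same tt-munching pattern — the `Describe` trait and `impl_describe_for_all!` macro are invented for illustration, not the kernel code:]

```rust
// Minimal tt-muncher in the style of the reworked
// declare_and_impl_atomic_methods!(): recurse over a `ty => name`
// mapping list, emitting one impl per entry.
trait Describe {
    fn describe() -> &'static str;
}

macro_rules! impl_describe_for_all {
    // Base case: empty mapping list, nothing left to emit.
    ([]) => {};
    // Take one `ty => name` pair, emit its impl, recurse on the rest.
    ([ $ty:ty => $name:literal $(, $($rest:tt)*)? ]) => {
        impl Describe for $ty {
            fn describe() -> &'static str { $name }
        }
        impl_describe_for_all!([ $($($rest)*)? ]);
    };
}

impl_describe_for_all!([ i8 => "atomic_i8", i16 => "atomic_i16", i32 => "atomic" ]);

fn main() {
    assert_eq!(<i8 as Describe>::describe(), "atomic_i8");
    assert_eq!(<i16 as Describe>::describe(), "atomic_i16");
    assert_eq!(<i32 as Describe>::describe(), "atomic");
}
```

Extending the mapping list (as patch 17 does for the real macro) then adds a new impl without touching the trait body.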
* [PATCH 16/36] arch: um/x86: Select ARCH_SUPPORTS_ATOMIC_RMW for UML_X86
2026-01-11 12:01 ` [PATCH 03/36] rust: sync: set_once: Implement Send and Sync Boqun Feng
` (11 preceding siblings ...)
2026-01-11 12:01 ` [PATCH 15/36] rust: sync: atomic: Prepare AtomicOps macros for i8/i16 support Boqun Feng
@ 2026-01-11 12:01 ` Boqun Feng
2026-01-11 12:01 ` [PATCH 17/36] rust: sync: atomic: Add i8/i16 load and store support Boqun Feng
` (19 subsequent siblings)
32 siblings, 0 replies; 39+ messages in thread
From: Boqun Feng @ 2026-01-11 12:01 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar
Cc: rust-for-linux, linux-kernel, Will Deacon, Mark Rutland,
Thomas Gleixner, Miguel Ojeda, Gary Guo, Alice Ryhl,
Andreas Hindborg, Benno Lossin, Danilo Krummrich, Boqun Feng,
FUJITA Tomonori, Richard Weinberger
um on UML_X86 uses the native x86 atomic instructions, so atomics on
UML_X86 support native atomic RmW just as x86 does; hence select
ARCH_SUPPORTS_ATOMIC_RMW.
Reviewed-by: FUJITA Tomonori <fujita.tomonori@gmail.com>
Acked-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://patch.msgid.link/20260106034034.60074-1-boqun.feng@gmail.com
---
arch/x86/um/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/x86/um/Kconfig b/arch/x86/um/Kconfig
index bdd7c8e39b01..44b12e45f9a0 100644
--- a/arch/x86/um/Kconfig
+++ b/arch/x86/um/Kconfig
@@ -9,6 +9,7 @@ endmenu
config UML_X86
def_bool y
select ARCH_USE_QUEUED_RWLOCKS
+ select ARCH_SUPPORTS_ATOMIC_RMW
select ARCH_USE_QUEUED_SPINLOCKS
select DCACHE_WORD_ACCESS
select HAVE_EFFICIENT_UNALIGNED_ACCESS
--
2.51.0
^ permalink raw reply related [flat|nested] 39+ messages in thread
* [PATCH 17/36] rust: sync: atomic: Add i8/i16 load and store support
2026-01-11 12:01 ` [PATCH 03/36] rust: sync: set_once: Implement Send and Sync Boqun Feng
` (12 preceding siblings ...)
2026-01-11 12:01 ` [PATCH 16/36] arch: um/x86: Select ARCH_SUPPORTS_ATOMIC_RMW for UML_X86 Boqun Feng
@ 2026-01-11 12:01 ` Boqun Feng
2026-01-11 12:01 ` [PATCH 18/36] rust: sync: atomic: Add store_release/load_acquire tests Boqun Feng
` (18 subsequent siblings)
32 siblings, 0 replies; 39+ messages in thread
From: Boqun Feng @ 2026-01-11 12:01 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar
Cc: rust-for-linux, linux-kernel, Will Deacon, Mark Rutland,
Thomas Gleixner, Miguel Ojeda, Gary Guo, Alice Ryhl,
Andreas Hindborg, Benno Lossin, Danilo Krummrich, FUJITA Tomonori,
Joel Fernandes, Boqun Feng
From: FUJITA Tomonori <fujita.tomonori@gmail.com>
Add atomic operation support for i8 and i16 types using volatile
read/write and smp_load_acquire/smp_store_release helpers.
[boqun: Adjust [1] to avoid introduction of
impl_atomic_only_load_and_store_ops!() in the middle]
Signed-off-by: FUJITA Tomonori <fujita.tomonori@gmail.com>
Reviewed-by: Gary Guo <gary@garyguo.net>
Reviewed-by: Joel Fernandes <joelagnelf@nvidia.com>
Link: https://lore.kernel.org/all/20251228120546.1602275-1-fujita.tomonori@gmail.com/ [1]
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://patch.msgid.link/20251211113826.1299077-4-fujita.tomonori@gmail.com
---
rust/kernel/sync/atomic/internal.rs | 25 +++++++++++++++++++------
rust/kernel/sync/atomic/predefine.rs | 14 +++++++++++++-
2 files changed, 32 insertions(+), 7 deletions(-)
diff --git a/rust/kernel/sync/atomic/internal.rs b/rust/kernel/sync/atomic/internal.rs
index 41b4ce2935e3..1b2a7933bc14 100644
--- a/rust/kernel/sync/atomic/internal.rs
+++ b/rust/kernel/sync/atomic/internal.rs
@@ -13,17 +13,22 @@ mod private {
pub trait Sealed {}
}
-// `i32` and `i64` are only supported atomic implementations.
+// The C side supports atomic primitives only for `i32` and `i64` (`atomic_t` and `atomic64_t`),
+// while the Rust side also layers atomic support for `i8` and `i16`
+// on top of lower-level C primitives.
+impl private::Sealed for i8 {}
+impl private::Sealed for i16 {}
impl private::Sealed for i32 {}
impl private::Sealed for i64 {}
/// A marker trait for types that implement atomic operations with C side primitives.
///
-/// This trait is sealed, and only types that have directly mapping to the C side atomics should
-/// impl this:
+/// This trait is sealed, and only types that map directly to the C side atomics
+/// or can be implemented with lower-level C primitives are allowed to implement this:
///
-/// - `i32` maps to `atomic_t`.
-/// - `i64` maps to `atomic64_t`.
+/// - `i8` and `i16` are implemented with lower-level C primitives.
+/// - `i32` maps to `atomic_t`.
+/// - `i64` maps to `atomic64_t`.
pub trait AtomicImpl: Sized + Send + Copy + private::Sealed {
/// The type of the delta in arithmetic or logical operations.
///
@@ -32,6 +37,14 @@ pub trait AtomicImpl: Sized + Send + Copy + private::Sealed {
type Delta;
}
+impl AtomicImpl for i8 {
+ type Delta = Self;
+}
+
+impl AtomicImpl for i16 {
+ type Delta = Self;
+}
+
// `atomic_t` implements atomic operations on `i32`.
impl AtomicImpl for i32 {
type Delta = Self;
@@ -243,7 +256,7 @@ macro_rules! declare_and_impl_atomic_methods {
}
declare_and_impl_atomic_methods!(
- [ i32 => atomic, i64 => atomic64 ]
+ [ i8 => atomic_i8, i16 => atomic_i16, i32 => atomic, i64 => atomic64 ]
/// Basic atomic operations
pub trait AtomicBasicOps {
/// Atomic read (load).
diff --git a/rust/kernel/sync/atomic/predefine.rs b/rust/kernel/sync/atomic/predefine.rs
index 45a17985cda4..09b357be59b8 100644
--- a/rust/kernel/sync/atomic/predefine.rs
+++ b/rust/kernel/sync/atomic/predefine.rs
@@ -5,6 +5,18 @@
use crate::static_assert;
use core::mem::{align_of, size_of};
+// SAFETY: `i8` has the same size and alignment with itself, and is round-trip transmutable to
+// itself.
+unsafe impl super::AtomicType for i8 {
+ type Repr = i8;
+}
+
+// SAFETY: `i16` has the same size and alignment with itself, and is round-trip transmutable to
+// itself.
+unsafe impl super::AtomicType for i16 {
+ type Repr = i16;
+}
+
// SAFETY: `i32` has the same size and alignment with itself, and is round-trip transmutable to
// itself.
unsafe impl super::AtomicType for i32 {
@@ -118,7 +130,7 @@ macro_rules! for_each_type {
#[test]
fn atomic_basic_tests() {
- for_each_type!(42 in [i32, i64, u32, u64, isize, usize] |v| {
+ for_each_type!(42 in [i8, i16, i32, i64, u32, u64, isize, usize] |v| {
let x = Atomic::new(v);
assert_eq!(v, x.load(Relaxed));
--
2.51.0
^ permalink raw reply related [flat|nested] 39+ messages in thread
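[Editor's note: the smp_store_release/smp_load_acquire pairing the new i8/i16 load/store helpers build on can be illustrated in userspace with Rust's std atomics. A hedged message-passing sketch, assuming the invented helper `publish_and_consume` — not the kernel API:]

```rust
use std::sync::atomic::{AtomicI16, AtomicI8, Ordering};
use std::sync::Arc;
use std::thread;

// The release store on the narrow i8 flag publishes the relaxed i16
// data store; the acquire load on the reader side observes it.
fn publish_and_consume() -> i16 {
    let data = Arc::new(AtomicI16::new(0));
    let ready = Arc::new(AtomicI8::new(0));

    let (d, r) = (Arc::clone(&data), Arc::clone(&ready));
    let writer = thread::spawn(move || {
        d.store(42, Ordering::Relaxed);
        r.store(1, Ordering::Release); // pairs with the acquire load below
    });

    // Spin until the flag is observed with acquire ordering.
    while ready.load(Ordering::Acquire) == 0 {
        std::hint::spin_loop();
    }
    let v = data.load(Ordering::Relaxed); // guaranteed to see 42
    writer.join().unwrap();
    v
}

fn main() {
    assert_eq!(publish_and_consume(), 42);
}
```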
* [PATCH 18/36] rust: sync: atomic: Add store_release/load_acquire tests
2026-01-11 12:01 ` [PATCH 03/36] rust: sync: set_once: Implement Send and Sync Boqun Feng
` (13 preceding siblings ...)
2026-01-11 12:01 ` [PATCH 17/36] rust: sync: atomic: Add i8/i16 load and store support Boqun Feng
@ 2026-01-11 12:01 ` Boqun Feng
2026-01-11 12:01 ` [PATCH 19/36] rust: sync: atomic: Add i8/i16 xchg and cmpxchg support Boqun Feng
` (17 subsequent siblings)
32 siblings, 0 replies; 39+ messages in thread
From: Boqun Feng @ 2026-01-11 12:01 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar
Cc: rust-for-linux, linux-kernel, Will Deacon, Mark Rutland,
Thomas Gleixner, Miguel Ojeda, Gary Guo, Alice Ryhl,
Andreas Hindborg, Benno Lossin, Danilo Krummrich, FUJITA Tomonori,
Joel Fernandes, Boqun Feng
From: FUJITA Tomonori <fujita.tomonori@gmail.com>
Add minimum store_release/load_acquire tests.
Signed-off-by: FUJITA Tomonori <fujita.tomonori@gmail.com>
Reviewed-by: Gary Guo <gary@garyguo.net>
Reviewed-by: Joel Fernandes <joelagnelf@nvidia.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://patch.msgid.link/20251211113826.1299077-5-fujita.tomonori@gmail.com
---
rust/kernel/sync/atomic/predefine.rs | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/rust/kernel/sync/atomic/predefine.rs b/rust/kernel/sync/atomic/predefine.rs
index 09b357be59b8..51e9df0cf56e 100644
--- a/rust/kernel/sync/atomic/predefine.rs
+++ b/rust/kernel/sync/atomic/predefine.rs
@@ -137,6 +137,16 @@ fn atomic_basic_tests() {
});
}
+ #[test]
+ fn atomic_acquire_release_tests() {
+ for_each_type!(42 in [i8, i16, i32, i64, u32, u64, isize, usize] |v| {
+ let x = Atomic::new(0);
+
+ x.store(v, Release);
+ assert_eq!(v, x.load(Acquire));
+ });
+ }
+
#[test]
fn atomic_xchg_tests() {
for_each_type!(42 in [i32, i64, u32, u64, isize, usize] |v| {
--
2.51.0
^ permalink raw reply related [flat|nested] 39+ messages in thread
* [PATCH 19/36] rust: sync: atomic: Add i8/i16 xchg and cmpxchg support
2026-01-11 12:01 ` [PATCH 03/36] rust: sync: set_once: Implement Send and Sync Boqun Feng
` (14 preceding siblings ...)
2026-01-11 12:01 ` [PATCH 18/36] rust: sync: atomic: Add store_release/load_acquire tests Boqun Feng
@ 2026-01-11 12:01 ` Boqun Feng
2026-01-11 12:01 ` [PATCH 20/36] rust: sync: atomic: Add atomic bool support via i8 representation Boqun Feng
` (16 subsequent siblings)
32 siblings, 0 replies; 39+ messages in thread
From: Boqun Feng @ 2026-01-11 12:01 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar
Cc: rust-for-linux, linux-kernel, Will Deacon, Mark Rutland,
Thomas Gleixner, Miguel Ojeda, Gary Guo, Alice Ryhl,
Andreas Hindborg, Benno Lossin, Danilo Krummrich, FUJITA Tomonori,
Boqun Feng
From: FUJITA Tomonori <fujita.tomonori@gmail.com>
Add atomic xchg and cmpxchg operation support for i8 and i16 types
with tests.
Note that the current implementation of
Atomic::<{i8,i16}>::{load,store}() is READ_ONCE()/WRITE_ONCE()-based,
so atomicity between load/store and xchg/cmpxchg is only guaranteed
if the architecture has native RmW support; hence i8/i16 implement
AtomicImpl only when CONFIG_ARCH_SUPPORTS_ATOMIC_RMW=y.
[boqun: Make i8/i16 AtomicImpl only when
CONFIG_ARCH_SUPPORTS_ATOMIC_RMW=y]
Signed-off-by: FUJITA Tomonori <fujita.tomonori@gmail.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://patch.msgid.link/20251228120546.1602275-4-fujita.tomonori@gmail.com
---
rust/kernel/sync/atomic/internal.rs | 8 +++++++-
rust/kernel/sync/atomic/predefine.rs | 4 ++--
2 files changed, 9 insertions(+), 3 deletions(-)
diff --git a/rust/kernel/sync/atomic/internal.rs b/rust/kernel/sync/atomic/internal.rs
index 1b2a7933bc14..0dac58bca2b3 100644
--- a/rust/kernel/sync/atomic/internal.rs
+++ b/rust/kernel/sync/atomic/internal.rs
@@ -37,10 +37,16 @@ pub trait AtomicImpl: Sized + Send + Copy + private::Sealed {
type Delta;
}
+// The current load/store helpers use `{WRITE,READ}_ONCE()`, hence the atomicity is only
+// guaranteed against read-modify-write operations if the architecture supports native atomic RmW.
+#[cfg(CONFIG_ARCH_SUPPORTS_ATOMIC_RMW)]
impl AtomicImpl for i8 {
type Delta = Self;
}
+// The current load/store helpers use `{WRITE,READ}_ONCE()`, hence the atomicity is only
+// guaranteed against read-modify-write operations if the architecture supports native atomic RmW.
+#[cfg(CONFIG_ARCH_SUPPORTS_ATOMIC_RMW)]
impl AtomicImpl for i16 {
type Delta = Self;
}
@@ -274,7 +280,7 @@ fn set[release](a: &AtomicRepr<Self>, v: Self) {
);
declare_and_impl_atomic_methods!(
- [ i32 => atomic, i64 => atomic64 ]
+ [ i8 => atomic_i8, i16 => atomic_i16, i32 => atomic, i64 => atomic64 ]
/// Exchange and compare-and-exchange atomic operations
pub trait AtomicExchangeOps {
/// Atomic exchange.
diff --git a/rust/kernel/sync/atomic/predefine.rs b/rust/kernel/sync/atomic/predefine.rs
index 51e9df0cf56e..248d26555ccf 100644
--- a/rust/kernel/sync/atomic/predefine.rs
+++ b/rust/kernel/sync/atomic/predefine.rs
@@ -149,7 +149,7 @@ fn atomic_acquire_release_tests() {
#[test]
fn atomic_xchg_tests() {
- for_each_type!(42 in [i32, i64, u32, u64, isize, usize] |v| {
+ for_each_type!(42 in [i8, i16, i32, i64, u32, u64, isize, usize] |v| {
let x = Atomic::new(v);
let old = v;
@@ -162,7 +162,7 @@ fn atomic_xchg_tests() {
#[test]
fn atomic_cmpxchg_tests() {
- for_each_type!(42 in [i32, i64, u32, u64, isize, usize] |v| {
+ for_each_type!(42 in [i8, i16, i32, i64, u32, u64, isize, usize] |v| {
let x = Atomic::new(v);
let old = v;
--
2.51.0
^ permalink raw reply related [flat|nested] 39+ messages in thread
* [PATCH 20/36] rust: sync: atomic: Add atomic bool support via i8 representation
2026-01-11 12:01 ` [PATCH 03/36] rust: sync: set_once: Implement Send and Sync Boqun Feng
` (15 preceding siblings ...)
2026-01-11 12:01 ` [PATCH 19/36] rust: sync: atomic: Add i8/i16 xchg and cmpxchg support Boqun Feng
@ 2026-01-11 12:01 ` Boqun Feng
2026-01-11 12:01 ` [PATCH 21/36] rust: sync: atomic: Add atomic bool tests Boqun Feng
` (15 subsequent siblings)
32 siblings, 0 replies; 39+ messages in thread
From: Boqun Feng @ 2026-01-11 12:01 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar
Cc: rust-for-linux, linux-kernel, Will Deacon, Mark Rutland,
Thomas Gleixner, Miguel Ojeda, Gary Guo, Alice Ryhl,
Andreas Hindborg, Benno Lossin, Danilo Krummrich, FUJITA Tomonori,
Boqun Feng
From: FUJITA Tomonori <fujita.tomonori@gmail.com>
Add `bool` support, i.e. `Atomic<bool>`, using `i8` as its underlying
representation.
Rust specifies that `bool` has size 1 and alignment 1 [1], so it
matches `i8` on layout; keep `static_assert!()` checks to enforce this
assumption at build time.
[boqun: Remove the unnecessary impl AtomicImpl for bool]
Link: https://doc.rust-lang.org/reference/types/boolean.html [1]
Signed-off-by: FUJITA Tomonori <fujita.tomonori@gmail.com>
Reviewed-by: Gary Guo <gary@garyguo.net>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://patch.msgid.link/20260101034922.2020334-2-fujita.tomonori@gmail.com
---
rust/kernel/sync/atomic/predefine.rs | 11 +++++++++++
1 file changed, 11 insertions(+)
diff --git a/rust/kernel/sync/atomic/predefine.rs b/rust/kernel/sync/atomic/predefine.rs
index 248d26555ccf..3fc99174b086 100644
--- a/rust/kernel/sync/atomic/predefine.rs
+++ b/rust/kernel/sync/atomic/predefine.rs
@@ -5,6 +5,17 @@
use crate::static_assert;
use core::mem::{align_of, size_of};
+// Ensure size and alignment requirements are checked.
+static_assert!(size_of::<bool>() == size_of::<i8>());
+static_assert!(align_of::<bool>() == align_of::<i8>());
+
+// SAFETY: `bool` has the same size and alignment as `i8`, and Rust guarantees that `bool` has
+// only two valid bit patterns: 0 (false) and 1 (true). Those are valid `i8` values, so `bool` is
+// round-trip transmutable to `i8`.
+unsafe impl super::AtomicType for bool {
+ type Repr = i8;
+}
+
// SAFETY: `i8` has the same size and alignment with itself, and is round-trip transmutable to
// itself.
unsafe impl super::AtomicType for i8 {
--
2.51.0
^ permalink raw reply related [flat|nested] 39+ messages in thread
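[Editor's note: the layout facts the patch relies on — `bool` is one byte, one-byte aligned, with only the bit patterns 0 and 1 — can be checked in plain Rust. A hedged sketch; the conversion helpers `bool_to_i8`/`i8_to_bool` are invented for illustration, not the kernel's transmute path:]

```rust
use std::mem::{align_of, size_of};

// `bool` round-trips through `i8`: `false` is 0 and `true` is 1.
fn bool_to_i8(b: bool) -> i8 {
    b as i8
}

fn i8_to_bool(v: i8) -> bool {
    // Only 0 and 1 ever come back from an `Atomic<bool>`'s repr.
    v != 0
}

fn main() {
    // The same checks the patch's static_assert!() enforces at build time.
    assert_eq!(size_of::<bool>(), size_of::<i8>());
    assert_eq!(align_of::<bool>(), align_of::<i8>());

    assert_eq!(bool_to_i8(true), 1);
    assert_eq!(bool_to_i8(false), 0);
    assert!(i8_to_bool(1));
    assert!(!i8_to_bool(0));
}
```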
* [PATCH 21/36] rust: sync: atomic: Add atomic bool tests
2026-01-11 12:01 ` [PATCH 03/36] rust: sync: set_once: Implement Send and Sync Boqun Feng
` (16 preceding siblings ...)
2026-01-11 12:01 ` [PATCH 20/36] rust: sync: atomic: Add atomic bool support via i8 representation Boqun Feng
@ 2026-01-11 12:01 ` Boqun Feng
2026-01-11 12:01 ` [PATCH 22/36] rust: list: Switch to kernel::sync atomic primitives Boqun Feng
` (14 subsequent siblings)
32 siblings, 0 replies; 39+ messages in thread
From: Boqun Feng @ 2026-01-11 12:01 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar
Cc: rust-for-linux, linux-kernel, Will Deacon, Mark Rutland,
Thomas Gleixner, Miguel Ojeda, Gary Guo, Alice Ryhl,
Andreas Hindborg, Benno Lossin, Danilo Krummrich, FUJITA Tomonori,
Boqun Feng
From: FUJITA Tomonori <fujita.tomonori@gmail.com>
Add tests for Atomic<bool> operations.
Atomic<bool> does not fit into the existing u8/16/32/64 tests, so
introduce a dedicated test for it.
Signed-off-by: FUJITA Tomonori <fujita.tomonori@gmail.com>
Reviewed-by: Gary Guo <gary@garyguo.net>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://patch.msgid.link/20260101034922.2020334-3-fujita.tomonori@gmail.com
---
rust/kernel/sync/atomic/predefine.rs | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/rust/kernel/sync/atomic/predefine.rs b/rust/kernel/sync/atomic/predefine.rs
index 3fc99174b086..42067c6a266c 100644
--- a/rust/kernel/sync/atomic/predefine.rs
+++ b/rust/kernel/sync/atomic/predefine.rs
@@ -199,4 +199,20 @@ fn atomic_arithmetic_tests() {
assert_eq!(v + 25, x.load(Relaxed));
});
}
+
+ #[test]
+ fn atomic_bool_tests() {
+ let x = Atomic::new(false);
+
+ assert_eq!(false, x.load(Relaxed));
+ x.store(true, Relaxed);
+ assert_eq!(true, x.load(Relaxed));
+
+ assert_eq!(true, x.xchg(false, Relaxed));
+ assert_eq!(false, x.load(Relaxed));
+
+ assert_eq!(Err(false), x.cmpxchg(true, true, Relaxed));
+ assert_eq!(false, x.load(Relaxed));
+ assert_eq!(Ok(false), x.cmpxchg(false, true, Full));
+ }
}
--
2.51.0
^ permalink raw reply related [flat|nested] 39+ messages in thread
* [PATCH 22/36] rust: list: Switch to kernel::sync atomic primitives
2026-01-11 12:01 ` [PATCH 03/36] rust: sync: set_once: Implement Send and Sync Boqun Feng
` (17 preceding siblings ...)
2026-01-11 12:01 ` [PATCH 21/36] rust: sync: atomic: Add atomic bool tests Boqun Feng
@ 2026-01-11 12:01 ` Boqun Feng
2026-01-11 12:01 ` [PATCH 23/36] rust_binder: " Boqun Feng
` (13 subsequent siblings)
32 siblings, 0 replies; 39+ messages in thread
From: Boqun Feng @ 2026-01-11 12:01 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar
Cc: rust-for-linux, linux-kernel, Will Deacon, Mark Rutland,
Thomas Gleixner, Miguel Ojeda, Gary Guo, Alice Ryhl,
Andreas Hindborg, Benno Lossin, Danilo Krummrich, FUJITA Tomonori,
Boqun Feng
From: FUJITA Tomonori <fujita.tomonori@gmail.com>
Convert uses of `AtomicBool` to `Atomic<bool>`.
Note that the `compare_exchange()` migration simplifies to
`cmpxchg()`, since `cmpxchg()` provides relaxed ordering on
failure, making the explicit failure ordering argument unnecessary.
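As a standard-library sketch of what the conversion removes (using `std::sync::atomic` here, not the kernel's `Atomic<T>` API): `compare_exchange()` takes separate success and failure orderings, and the failure ordering being dropped is exactly the `Relaxed` that the kernel's `cmpxchg()` supplies implicitly.

```rust
use std::sync::atomic::{AtomicBool, Ordering};

fn main() {
    let flag = AtomicBool::new(false);

    // Before the conversion: both orderings are spelled out; the
    // Relaxed failure ordering is the boilerplate the patch drops.
    let claimed = flag
        .compare_exchange(false, true, Ordering::Acquire, Ordering::Relaxed)
        .is_ok();
    assert!(claimed);

    // A second attempt fails and returns the current value (true),
    // observed under the (Relaxed) failure ordering.
    assert_eq!(
        flag.compare_exchange(false, true, Ordering::Acquire, Ordering::Relaxed),
        Err(true)
    );
}
```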
Signed-off-by: FUJITA Tomonori <fujita.tomonori@gmail.com>
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://patch.msgid.link/20251230093718.1852322-3-fujita.tomonori@gmail.com
---
rust/kernel/list/arc.rs | 14 ++++++--------
1 file changed, 6 insertions(+), 8 deletions(-)
diff --git a/rust/kernel/list/arc.rs b/rust/kernel/list/arc.rs
index d92bcf665c89..2282f33913ee 100644
--- a/rust/kernel/list/arc.rs
+++ b/rust/kernel/list/arc.rs
@@ -6,11 +6,11 @@
use crate::alloc::{AllocError, Flags};
use crate::prelude::*;
+use crate::sync::atomic::{ordering, Atomic};
use crate::sync::{Arc, ArcBorrow, UniqueArc};
use core::marker::PhantomPinned;
use core::ops::Deref;
use core::pin::Pin;
-use core::sync::atomic::{AtomicBool, Ordering};
/// Declares that this type has some way to ensure that there is exactly one `ListArc` instance for
/// this id.
@@ -469,7 +469,7 @@ impl<T, U, const ID: u64> core::ops::DispatchFromDyn<ListArc<U, ID>> for ListArc
/// If the boolean is `false`, then there is no [`ListArc`] for this value.
#[repr(transparent)]
pub struct AtomicTracker<const ID: u64 = 0> {
- inner: AtomicBool,
+ inner: Atomic<bool>,
// This value needs to be pinned to justify the INVARIANT: comment in `AtomicTracker::new`.
_pin: PhantomPinned,
}
@@ -480,12 +480,12 @@ pub fn new() -> impl PinInit<Self> {
// INVARIANT: Pin-init initializers can't be used on an existing `Arc`, so this value will
// not be constructed in an `Arc` that already has a `ListArc`.
Self {
- inner: AtomicBool::new(false),
+ inner: Atomic::new(false),
_pin: PhantomPinned,
}
}
- fn project_inner(self: Pin<&mut Self>) -> &mut AtomicBool {
+ fn project_inner(self: Pin<&mut Self>) -> &mut Atomic<bool> {
// SAFETY: The `inner` field is not structurally pinned, so we may obtain a mutable
// reference to it even if we only have a pinned reference to `self`.
unsafe { &mut Pin::into_inner_unchecked(self).inner }
@@ -500,7 +500,7 @@ unsafe fn on_create_list_arc_from_unique(self: Pin<&mut Self>) {
unsafe fn on_drop_list_arc(&self) {
// INVARIANT: We just dropped a ListArc, so the boolean should be false.
- self.inner.store(false, Ordering::Release);
+ self.inner.store(false, ordering::Release);
}
}
@@ -514,8 +514,6 @@ unsafe impl<const ID: u64> TryNewListArc<ID> for AtomicTracker<ID> {
fn try_new_list_arc(&self) -> bool {
// INVARIANT: If this method returns true, then the boolean used to be false, and is no
// longer false, so it is okay for the caller to create a new [`ListArc`].
- self.inner
- .compare_exchange(false, true, Ordering::Acquire, Ordering::Relaxed)
- .is_ok()
+ self.inner.cmpxchg(false, true, ordering::Acquire).is_ok()
}
}
--
2.51.0
* [PATCH 23/36] rust_binder: Switch to kernel::sync atomic primitives
2026-01-11 12:01 ` [PATCH 03/36] rust: sync: set_once: Implement Send and Sync Boqun Feng
` (18 preceding siblings ...)
2026-01-11 12:01 ` [PATCH 22/36] rust: list: Switch to kernel::sync atomic primitives Boqun Feng
@ 2026-01-11 12:01 ` Boqun Feng
2026-01-11 12:01 ` [PATCH 24/36] rust: barrier: Add __rust_helper to helpers Boqun Feng
` (12 subsequent siblings)
32 siblings, 0 replies; 39+ messages in thread
From: Boqun Feng @ 2026-01-11 12:01 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar
Cc: rust-for-linux, linux-kernel, Will Deacon, Mark Rutland,
Thomas Gleixner, Miguel Ojeda, Gary Guo, Alice Ryhl,
Andreas Hindborg, Benno Lossin, Danilo Krummrich, FUJITA Tomonori,
Boqun Feng
From: FUJITA Tomonori <fujita.tomonori@gmail.com>
Convert uses of `AtomicBool`, `AtomicUsize`, and `AtomicU32` to their
`Atomic<bool>`, `Atomic<usize>`, and `Atomic<u32>` counterparts.
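The `next_debug_id()` pattern touched by this conversion, a static atomic counter bumped with a relaxed `fetch_add`, can be sketched with the standard library's `AtomicUsize` standing in for the kernel's `Atomic<usize>`:

```rust
use std::sync::atomic::{AtomicUsize, Ordering::Relaxed};

// A monotonically increasing debug id: only atomicity of the
// increment matters, not ordering against other memory accesses,
// hence Relaxed. fetch_add returns the value *before* the add.
fn next_debug_id() -> usize {
    static NEXT_DEBUG_ID: AtomicUsize = AtomicUsize::new(0);
    NEXT_DEBUG_ID.fetch_add(1, Relaxed)
}

fn main() {
    assert_eq!(next_debug_id(), 0);
    assert_eq!(next_debug_id(), 1);
}
```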
Signed-off-by: FUJITA Tomonori <fujita.tomonori@gmail.com>
Acked-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://patch.msgid.link/20251230093718.1852322-4-fujita.tomonori@gmail.com
---
drivers/android/binder/rust_binder_main.rs | 20 ++++++++----------
drivers/android/binder/stats.rs | 8 ++++----
drivers/android/binder/thread.rs | 24 ++++++++++------------
drivers/android/binder/transaction.rs | 16 +++++++--------
4 files changed, 32 insertions(+), 36 deletions(-)
diff --git a/drivers/android/binder/rust_binder_main.rs b/drivers/android/binder/rust_binder_main.rs
index c79a9e742240..47bfb114cabb 100644
--- a/drivers/android/binder/rust_binder_main.rs
+++ b/drivers/android/binder/rust_binder_main.rs
@@ -18,6 +18,7 @@
prelude::*,
seq_file::SeqFile,
seq_print,
+ sync::atomic::{ordering::Relaxed, Atomic},
sync::poll::PollTable,
sync::Arc,
task::Pid,
@@ -28,10 +29,7 @@
use crate::{context::Context, page_range::Shrinker, process::Process, thread::Thread};
-use core::{
- ptr::NonNull,
- sync::atomic::{AtomicBool, AtomicUsize, Ordering},
-};
+use core::ptr::NonNull;
mod allocation;
mod context;
@@ -90,9 +88,9 @@ fn default() -> Self {
}
fn next_debug_id() -> usize {
- static NEXT_DEBUG_ID: AtomicUsize = AtomicUsize::new(0);
+ static NEXT_DEBUG_ID: Atomic<usize> = Atomic::new(0);
- NEXT_DEBUG_ID.fetch_add(1, Ordering::Relaxed)
+ NEXT_DEBUG_ID.fetch_add(1, Relaxed)
}
/// Provides a single place to write Binder return values via the
@@ -215,7 +213,7 @@ fn arc_pin_init(init: impl PinInit<T>) -> Result<DLArc<T>, kernel::error::Error>
struct DeliverCode {
code: u32,
- skip: AtomicBool,
+ skip: Atomic<bool>,
}
kernel::list::impl_list_arc_safe! {
@@ -226,7 +224,7 @@ impl DeliverCode {
fn new(code: u32) -> Self {
Self {
code,
- skip: AtomicBool::new(false),
+ skip: Atomic::new(false),
}
}
@@ -235,7 +233,7 @@ fn new(code: u32) -> Self {
/// This is used instead of removing it from the work list, since `LinkedList::remove` is
/// unsafe, whereas this method is not.
fn skip(&self) {
- self.skip.store(true, Ordering::Relaxed);
+ self.skip.store(true, Relaxed);
}
}
@@ -245,7 +243,7 @@ fn do_work(
_thread: &Thread,
writer: &mut BinderReturnWriter<'_>,
) -> Result<bool> {
- if !self.skip.load(Ordering::Relaxed) {
+ if !self.skip.load(Relaxed) {
writer.write_code(self.code)?;
}
Ok(true)
@@ -259,7 +257,7 @@ fn should_sync_wakeup(&self) -> bool {
fn debug_print(&self, m: &SeqFile, prefix: &str, _tprefix: &str) -> Result<()> {
seq_print!(m, "{}", prefix);
- if self.skip.load(Ordering::Relaxed) {
+ if self.skip.load(Relaxed) {
seq_print!(m, "(skipped) ");
}
if self.code == defs::BR_TRANSACTION_COMPLETE {
diff --git a/drivers/android/binder/stats.rs b/drivers/android/binder/stats.rs
index 037002651941..ab75e9561cbf 100644
--- a/drivers/android/binder/stats.rs
+++ b/drivers/android/binder/stats.rs
@@ -5,7 +5,7 @@
//! Keep track of statistics for binder_logs.
use crate::defs::*;
-use core::sync::atomic::{AtomicU32, Ordering::Relaxed};
+use kernel::sync::atomic::{ordering::Relaxed, Atomic};
use kernel::{ioctl::_IOC_NR, seq_file::SeqFile, seq_print};
const BC_COUNT: usize = _IOC_NR(BC_REPLY_SG) as usize + 1;
@@ -14,14 +14,14 @@
pub(crate) static GLOBAL_STATS: BinderStats = BinderStats::new();
pub(crate) struct BinderStats {
- bc: [AtomicU32; BC_COUNT],
- br: [AtomicU32; BR_COUNT],
+ bc: [Atomic<u32>; BC_COUNT],
+ br: [Atomic<u32>; BR_COUNT],
}
impl BinderStats {
pub(crate) const fn new() -> Self {
#[expect(clippy::declare_interior_mutable_const)]
- const ZERO: AtomicU32 = AtomicU32::new(0);
+ const ZERO: Atomic<u32> = Atomic::new(0);
Self {
bc: [ZERO; BC_COUNT],
diff --git a/drivers/android/binder/thread.rs b/drivers/android/binder/thread.rs
index 1a8e6fdc0dc4..82264db06507 100644
--- a/drivers/android/binder/thread.rs
+++ b/drivers/android/binder/thread.rs
@@ -15,6 +15,7 @@
security,
seq_file::SeqFile,
seq_print,
+ sync::atomic::{ordering::Relaxed, Atomic},
sync::poll::{PollCondVar, PollTable},
sync::{Arc, SpinLock},
task::Task,
@@ -34,10 +35,7 @@
BinderReturnWriter, DArc, DLArc, DTRWrap, DeliverCode, DeliverToRead,
};
-use core::{
- mem::size_of,
- sync::atomic::{AtomicU32, Ordering},
-};
+use core::mem::size_of;
/// Stores the layout of the scatter-gather entries. This is used during the `translate_objects`
/// call and is discarded when it returns.
@@ -273,8 +271,8 @@ struct InnerThread {
impl InnerThread {
fn new() -> Result<Self> {
fn next_err_id() -> u32 {
- static EE_ID: AtomicU32 = AtomicU32::new(0);
- EE_ID.fetch_add(1, Ordering::Relaxed)
+ static EE_ID: Atomic<u32> = Atomic::new(0);
+ EE_ID.fetch_add(1, Relaxed)
}
Ok(Self {
@@ -1537,7 +1535,7 @@ pub(crate) fn release(self: &Arc<Self>) {
#[pin_data]
struct ThreadError {
- error_code: AtomicU32,
+ error_code: Atomic<u32>,
#[pin]
links_track: AtomicTracker,
}
@@ -1545,18 +1543,18 @@ struct ThreadError {
impl ThreadError {
fn try_new() -> Result<DArc<Self>> {
DTRWrap::arc_pin_init(pin_init!(Self {
- error_code: AtomicU32::new(BR_OK),
+ error_code: Atomic::new(BR_OK),
links_track <- AtomicTracker::new(),
}))
.map(ListArc::into_arc)
}
fn set_error_code(&self, code: u32) {
- self.error_code.store(code, Ordering::Relaxed);
+ self.error_code.store(code, Relaxed);
}
fn is_unused(&self) -> bool {
- self.error_code.load(Ordering::Relaxed) == BR_OK
+ self.error_code.load(Relaxed) == BR_OK
}
}
@@ -1566,8 +1564,8 @@ fn do_work(
_thread: &Thread,
writer: &mut BinderReturnWriter<'_>,
) -> Result<bool> {
- let code = self.error_code.load(Ordering::Relaxed);
- self.error_code.store(BR_OK, Ordering::Relaxed);
+ let code = self.error_code.load(Relaxed);
+ self.error_code.store(BR_OK, Relaxed);
writer.write_code(code)?;
Ok(true)
}
@@ -1583,7 +1581,7 @@ fn debug_print(&self, m: &SeqFile, prefix: &str, _tprefix: &str) -> Result<()> {
m,
"{}transaction error: {}\n",
prefix,
- self.error_code.load(Ordering::Relaxed)
+ self.error_code.load(Relaxed)
);
Ok(())
}
diff --git a/drivers/android/binder/transaction.rs b/drivers/android/binder/transaction.rs
index 4bd3c0e417eb..2273a8e9d01c 100644
--- a/drivers/android/binder/transaction.rs
+++ b/drivers/android/binder/transaction.rs
@@ -2,11 +2,11 @@
// Copyright (C) 2025 Google LLC.
-use core::sync::atomic::{AtomicBool, Ordering};
use kernel::{
prelude::*,
seq_file::SeqFile,
seq_print,
+ sync::atomic::{ordering::Relaxed, Atomic},
sync::{Arc, SpinLock},
task::Kuid,
time::{Instant, Monotonic},
@@ -33,7 +33,7 @@ pub(crate) struct Transaction {
pub(crate) to: Arc<Process>,
#[pin]
allocation: SpinLock<Option<Allocation>>,
- is_outstanding: AtomicBool,
+ is_outstanding: Atomic<bool>,
code: u32,
pub(crate) flags: u32,
data_size: usize,
@@ -105,7 +105,7 @@ pub(crate) fn new(
offsets_size: trd.offsets_size as _,
data_address,
allocation <- kernel::new_spinlock!(Some(alloc.success()), "Transaction::new"),
- is_outstanding: AtomicBool::new(false),
+ is_outstanding: Atomic::new(false),
txn_security_ctx_off,
oneway_spam_detected,
start_time: Instant::now(),
@@ -145,7 +145,7 @@ pub(crate) fn new_reply(
offsets_size: trd.offsets_size as _,
data_address: alloc.ptr,
allocation <- kernel::new_spinlock!(Some(alloc.success()), "Transaction::new"),
- is_outstanding: AtomicBool::new(false),
+ is_outstanding: Atomic::new(false),
txn_security_ctx_off: None,
oneway_spam_detected,
start_time: Instant::now(),
@@ -215,8 +215,8 @@ pub(crate) fn find_from(&self, thread: &Thread) -> Option<&DArc<Transaction>> {
pub(crate) fn set_outstanding(&self, to_process: &mut ProcessInner) {
// No race because this method is only called once.
- if !self.is_outstanding.load(Ordering::Relaxed) {
- self.is_outstanding.store(true, Ordering::Relaxed);
+ if !self.is_outstanding.load(Relaxed) {
+ self.is_outstanding.store(true, Relaxed);
to_process.add_outstanding_txn();
}
}
@@ -227,8 +227,8 @@ fn drop_outstanding_txn(&self) {
// destructor, which is guaranteed to not race with any other operations on the
// transaction. It also cannot race with `set_outstanding`, since submission happens
// before delivery.
- if self.is_outstanding.load(Ordering::Relaxed) {
- self.is_outstanding.store(false, Ordering::Relaxed);
+ if self.is_outstanding.load(Relaxed) {
+ self.is_outstanding.store(false, Relaxed);
self.to.drop_outstanding_txn();
}
}
--
2.51.0
* [PATCH 24/36] rust: barrier: Add __rust_helper to helpers
2026-01-11 12:01 ` [PATCH 03/36] rust: sync: set_once: Implement Send and Sync Boqun Feng
` (19 preceding siblings ...)
2026-01-11 12:01 ` [PATCH 23/36] rust_binder: " Boqun Feng
@ 2026-01-11 12:01 ` Boqun Feng
2026-01-11 12:01 ` [PATCH 25/36] rust: blk: " Boqun Feng
` (11 subsequent siblings)
32 siblings, 0 replies; 39+ messages in thread
From: Boqun Feng @ 2026-01-11 12:01 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar
Cc: rust-for-linux, linux-kernel, Will Deacon, Mark Rutland,
Thomas Gleixner, Miguel Ojeda, Gary Guo, Alice Ryhl,
Andreas Hindborg, Benno Lossin, Danilo Krummrich, Boqun Feng
From: Alice Ryhl <aliceryhl@google.com>
This is needed to inline these helpers into Rust code.
Reviewed-by: Boqun Feng <boqun.feng@gmail.com>
Reviewed-by: Gary Guo <gary@garyguo.net>
Signed-off-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://patch.msgid.link/20260105-define-rust-helper-v2-1-51da5f454a67@google.com
---
rust/helpers/barrier.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/rust/helpers/barrier.c b/rust/helpers/barrier.c
index cdf28ce8e511..fed8853745c8 100644
--- a/rust/helpers/barrier.c
+++ b/rust/helpers/barrier.c
@@ -2,17 +2,17 @@
#include <asm/barrier.h>
-void rust_helper_smp_mb(void)
+__rust_helper void rust_helper_smp_mb(void)
{
smp_mb();
}
-void rust_helper_smp_wmb(void)
+__rust_helper void rust_helper_smp_wmb(void)
{
smp_wmb();
}
-void rust_helper_smp_rmb(void)
+__rust_helper void rust_helper_smp_rmb(void)
{
smp_rmb();
}
--
2.51.0
* [PATCH 25/36] rust: blk: Add __rust_helper to helpers
2026-01-11 12:01 ` [PATCH 03/36] rust: sync: set_once: Implement Send and Sync Boqun Feng
` (20 preceding siblings ...)
2026-01-11 12:01 ` [PATCH 24/36] rust: barrier: Add __rust_helper to helpers Boqun Feng
@ 2026-01-11 12:01 ` Boqun Feng
2026-01-11 12:01 ` [PATCH 26/36] rust: completion: " Boqun Feng
` (10 subsequent siblings)
32 siblings, 0 replies; 39+ messages in thread
From: Boqun Feng @ 2026-01-11 12:01 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar
Cc: rust-for-linux, linux-kernel, Will Deacon, Mark Rutland,
Thomas Gleixner, Miguel Ojeda, Gary Guo, Alice Ryhl,
Andreas Hindborg, Benno Lossin, Danilo Krummrich, Boqun Feng
From: Alice Ryhl <aliceryhl@google.com>
This is needed to inline these helpers into Rust code.
Reviewed-by: Boqun Feng <boqun.feng@gmail.com>
Reviewed-by: Gary Guo <gary@garyguo.net>
Signed-off-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://patch.msgid.link/20260105-define-rust-helper-v2-2-51da5f454a67@google.com
---
rust/helpers/blk.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/rust/helpers/blk.c b/rust/helpers/blk.c
index cc9f4e6a2d23..20c512e46a7a 100644
--- a/rust/helpers/blk.c
+++ b/rust/helpers/blk.c
@@ -3,12 +3,12 @@
#include <linux/blk-mq.h>
#include <linux/blkdev.h>
-void *rust_helper_blk_mq_rq_to_pdu(struct request *rq)
+__rust_helper void *rust_helper_blk_mq_rq_to_pdu(struct request *rq)
{
return blk_mq_rq_to_pdu(rq);
}
-struct request *rust_helper_blk_mq_rq_from_pdu(void *pdu)
+__rust_helper struct request *rust_helper_blk_mq_rq_from_pdu(void *pdu)
{
return blk_mq_rq_from_pdu(pdu);
}
--
2.51.0
* [PATCH 26/36] rust: completion: Add __rust_helper to helpers
2026-01-11 12:01 ` [PATCH 03/36] rust: sync: set_once: Implement Send and Sync Boqun Feng
` (21 preceding siblings ...)
2026-01-11 12:01 ` [PATCH 25/36] rust: blk: " Boqun Feng
@ 2026-01-11 12:01 ` Boqun Feng
2026-01-11 12:02 ` [PATCH 27/36] rust: cpu: " Boqun Feng
` (9 subsequent siblings)
32 siblings, 0 replies; 39+ messages in thread
From: Boqun Feng @ 2026-01-11 12:01 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar
Cc: rust-for-linux, linux-kernel, Will Deacon, Mark Rutland,
Thomas Gleixner, Miguel Ojeda, Gary Guo, Alice Ryhl,
Andreas Hindborg, Benno Lossin, Danilo Krummrich, Boqun Feng
From: Alice Ryhl <aliceryhl@google.com>
This is needed to inline these helpers into Rust code.
Reviewed-by: Boqun Feng <boqun.feng@gmail.com>
Reviewed-by: Gary Guo <gary@garyguo.net>
Signed-off-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://patch.msgid.link/20260105-define-rust-helper-v2-5-51da5f454a67@google.com
---
rust/helpers/completion.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/rust/helpers/completion.c b/rust/helpers/completion.c
index b2443262a2ae..0126767cc3be 100644
--- a/rust/helpers/completion.c
+++ b/rust/helpers/completion.c
@@ -2,7 +2,7 @@
#include <linux/completion.h>
-void rust_helper_init_completion(struct completion *x)
+__rust_helper void rust_helper_init_completion(struct completion *x)
{
init_completion(x);
}
--
2.51.0
* [PATCH 27/36] rust: cpu: Add __rust_helper to helpers
2026-01-11 12:01 ` [PATCH 03/36] rust: sync: set_once: Implement Send and Sync Boqun Feng
` (22 preceding siblings ...)
2026-01-11 12:01 ` [PATCH 26/36] rust: completion: " Boqun Feng
@ 2026-01-11 12:02 ` Boqun Feng
2026-01-11 12:02 ` [PATCH 28/36] rust: processor: " Boqun Feng
` (8 subsequent siblings)
32 siblings, 0 replies; 39+ messages in thread
From: Boqun Feng @ 2026-01-11 12:02 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar
Cc: rust-for-linux, linux-kernel, Will Deacon, Mark Rutland,
Thomas Gleixner, Miguel Ojeda, Gary Guo, Alice Ryhl,
Andreas Hindborg, Benno Lossin, Danilo Krummrich, Boqun Feng
From: Alice Ryhl <aliceryhl@google.com>
This is needed to inline these helpers into Rust code.
Reviewed-by: Boqun Feng <boqun.feng@gmail.com>
Reviewed-by: Gary Guo <gary@garyguo.net>
Signed-off-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://patch.msgid.link/20260105-define-rust-helper-v2-6-51da5f454a67@google.com
---
rust/helpers/cpu.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/rust/helpers/cpu.c b/rust/helpers/cpu.c
index 824e0adb19d4..5759349b2c88 100644
--- a/rust/helpers/cpu.c
+++ b/rust/helpers/cpu.c
@@ -2,7 +2,7 @@
#include <linux/smp.h>
-unsigned int rust_helper_raw_smp_processor_id(void)
+__rust_helper unsigned int rust_helper_raw_smp_processor_id(void)
{
return raw_smp_processor_id();
}
--
2.51.0
* [PATCH 28/36] rust: processor: Add __rust_helper to helpers
2026-01-11 12:01 ` [PATCH 03/36] rust: sync: set_once: Implement Send and Sync Boqun Feng
` (23 preceding siblings ...)
2026-01-11 12:02 ` [PATCH 27/36] rust: cpu: " Boqun Feng
@ 2026-01-11 12:02 ` Boqun Feng
2026-01-11 12:02 ` [PATCH 29/36] rust: rcu: " Boqun Feng
` (7 subsequent siblings)
32 siblings, 0 replies; 39+ messages in thread
From: Boqun Feng @ 2026-01-11 12:02 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar
Cc: rust-for-linux, linux-kernel, Will Deacon, Mark Rutland,
Thomas Gleixner, Miguel Ojeda, Gary Guo, Alice Ryhl,
Andreas Hindborg, Benno Lossin, Danilo Krummrich, Boqun Feng
From: Alice Ryhl <aliceryhl@google.com>
This is needed to inline these helpers into Rust code.
Reviewed-by: Boqun Feng <boqun.feng@gmail.com>
Reviewed-by: Gary Guo <gary@garyguo.net>
Signed-off-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://patch.msgid.link/20260105-define-rust-helper-v2-13-51da5f454a67@google.com
---
rust/helpers/processor.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/rust/helpers/processor.c b/rust/helpers/processor.c
index d41355e14d6e..76fadbb647c5 100644
--- a/rust/helpers/processor.c
+++ b/rust/helpers/processor.c
@@ -2,7 +2,7 @@
#include <linux/processor.h>
-void rust_helper_cpu_relax(void)
+__rust_helper void rust_helper_cpu_relax(void)
{
cpu_relax();
}
--
2.51.0
* [PATCH 29/36] rust: rcu: Add __rust_helper to helpers
2026-01-11 12:01 ` [PATCH 03/36] rust: sync: set_once: Implement Send and Sync Boqun Feng
` (24 preceding siblings ...)
2026-01-11 12:02 ` [PATCH 28/36] rust: processor: " Boqun Feng
@ 2026-01-11 12:02 ` Boqun Feng
2026-01-11 12:02 ` [PATCH 30/36] rust: refcount: " Boqun Feng
` (6 subsequent siblings)
32 siblings, 0 replies; 39+ messages in thread
From: Boqun Feng @ 2026-01-11 12:02 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar
Cc: rust-for-linux, linux-kernel, Will Deacon, Mark Rutland,
Thomas Gleixner, Miguel Ojeda, Gary Guo, Alice Ryhl,
Andreas Hindborg, Benno Lossin, Danilo Krummrich, Boqun Feng,
Joel Fernandes (NVIDIA)
From: Alice Ryhl <aliceryhl@google.com>
This is needed to inline these helpers into Rust code.
Reviewed-by: Boqun Feng <boqun.feng@gmail.com>
Reviewed-by: Gary Guo <gary@garyguo.net>
Signed-off-by: Alice Ryhl <aliceryhl@google.com>
Reviewed-by: Joel Fernandes (NVIDIA) <joel@joelfernandes.org>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://patch.msgid.link/20260105-define-rust-helper-v2-16-51da5f454a67@google.com
---
rust/helpers/rcu.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/rust/helpers/rcu.c b/rust/helpers/rcu.c
index f1cec6583513..481274c05857 100644
--- a/rust/helpers/rcu.c
+++ b/rust/helpers/rcu.c
@@ -2,12 +2,12 @@
#include <linux/rcupdate.h>
-void rust_helper_rcu_read_lock(void)
+__rust_helper void rust_helper_rcu_read_lock(void)
{
rcu_read_lock();
}
-void rust_helper_rcu_read_unlock(void)
+__rust_helper void rust_helper_rcu_read_unlock(void)
{
rcu_read_unlock();
}
--
2.51.0
* [PATCH 30/36] rust: refcount: Add __rust_helper to helpers
2026-01-11 12:01 ` [PATCH 03/36] rust: sync: set_once: Implement Send and Sync Boqun Feng
` (25 preceding siblings ...)
2026-01-11 12:02 ` [PATCH 29/36] rust: rcu: " Boqun Feng
@ 2026-01-11 12:02 ` Boqun Feng
2026-01-11 12:02 ` [PATCH 31/36] rust: sync: " Boqun Feng
` (5 subsequent siblings)
32 siblings, 0 replies; 39+ messages in thread
From: Boqun Feng @ 2026-01-11 12:02 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar
Cc: rust-for-linux, linux-kernel, Will Deacon, Mark Rutland,
Thomas Gleixner, Miguel Ojeda, Gary Guo, Alice Ryhl,
Andreas Hindborg, Benno Lossin, Danilo Krummrich, Boqun Feng
From: Alice Ryhl <aliceryhl@google.com>
This is needed to inline these helpers into Rust code.
Reviewed-by: Boqun Feng <boqun.feng@gmail.com>
Reviewed-by: Gary Guo <gary@garyguo.net>
Signed-off-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://patch.msgid.link/20260105-define-rust-helper-v2-17-51da5f454a67@google.com
---
rust/helpers/refcount.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/rust/helpers/refcount.c b/rust/helpers/refcount.c
index d175898ad7b8..36334a674ee4 100644
--- a/rust/helpers/refcount.c
+++ b/rust/helpers/refcount.c
@@ -2,27 +2,27 @@
#include <linux/refcount.h>
-refcount_t rust_helper_REFCOUNT_INIT(int n)
+__rust_helper refcount_t rust_helper_REFCOUNT_INIT(int n)
{
return (refcount_t)REFCOUNT_INIT(n);
}
-void rust_helper_refcount_set(refcount_t *r, int n)
+__rust_helper void rust_helper_refcount_set(refcount_t *r, int n)
{
refcount_set(r, n);
}
-void rust_helper_refcount_inc(refcount_t *r)
+__rust_helper void rust_helper_refcount_inc(refcount_t *r)
{
refcount_inc(r);
}
-void rust_helper_refcount_dec(refcount_t *r)
+__rust_helper void rust_helper_refcount_dec(refcount_t *r)
{
refcount_dec(r);
}
-bool rust_helper_refcount_dec_and_test(refcount_t *r)
+__rust_helper bool rust_helper_refcount_dec_and_test(refcount_t *r)
{
return refcount_dec_and_test(r);
}
--
2.51.0
* [PATCH 31/36] rust: sync: Add __rust_helper to helpers
2026-01-11 12:01 ` [PATCH 03/36] rust: sync: set_once: Implement Send and Sync Boqun Feng
` (26 preceding siblings ...)
2026-01-11 12:02 ` [PATCH 30/36] rust: refcount: " Boqun Feng
@ 2026-01-11 12:02 ` Boqun Feng
2026-01-11 12:02 ` [PATCH 32/36] rust: task: " Boqun Feng
` (4 subsequent siblings)
32 siblings, 0 replies; 39+ messages in thread
From: Boqun Feng @ 2026-01-11 12:02 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar
Cc: rust-for-linux, linux-kernel, Will Deacon, Mark Rutland,
Thomas Gleixner, Miguel Ojeda, Gary Guo, Alice Ryhl,
Andreas Hindborg, Benno Lossin, Danilo Krummrich, Boqun Feng
From: Alice Ryhl <aliceryhl@google.com>
This is needed to inline these helpers into Rust code.
Reviewed-by: Boqun Feng <boqun.feng@gmail.com>
Reviewed-by: Gary Guo <gary@garyguo.net>
Signed-off-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://patch.msgid.link/20260105-define-rust-helper-v2-20-51da5f454a67@google.com
---
rust/helpers/mutex.c | 13 +++++++------
rust/helpers/spinlock.c | 13 +++++++------
rust/helpers/sync.c | 4 ++--
3 files changed, 16 insertions(+), 14 deletions(-)
diff --git a/rust/helpers/mutex.c b/rust/helpers/mutex.c
index e487819125f0..1b07d6e64299 100644
--- a/rust/helpers/mutex.c
+++ b/rust/helpers/mutex.c
@@ -2,28 +2,29 @@
#include <linux/mutex.h>
-void rust_helper_mutex_lock(struct mutex *lock)
+__rust_helper void rust_helper_mutex_lock(struct mutex *lock)
{
mutex_lock(lock);
}
-int rust_helper_mutex_trylock(struct mutex *lock)
+__rust_helper int rust_helper_mutex_trylock(struct mutex *lock)
{
return mutex_trylock(lock);
}
-void rust_helper___mutex_init(struct mutex *mutex, const char *name,
- struct lock_class_key *key)
+__rust_helper void rust_helper___mutex_init(struct mutex *mutex,
+ const char *name,
+ struct lock_class_key *key)
{
__mutex_init(mutex, name, key);
}
-void rust_helper_mutex_assert_is_held(struct mutex *mutex)
+__rust_helper void rust_helper_mutex_assert_is_held(struct mutex *mutex)
{
lockdep_assert_held(mutex);
}
-void rust_helper_mutex_destroy(struct mutex *lock)
+__rust_helper void rust_helper_mutex_destroy(struct mutex *lock)
{
mutex_destroy(lock);
}
diff --git a/rust/helpers/spinlock.c b/rust/helpers/spinlock.c
index 42c4bf01a23e..4d13062cf253 100644
--- a/rust/helpers/spinlock.c
+++ b/rust/helpers/spinlock.c
@@ -2,8 +2,9 @@
#include <linux/spinlock.h>
-void rust_helper___spin_lock_init(spinlock_t *lock, const char *name,
- struct lock_class_key *key)
+__rust_helper void rust_helper___spin_lock_init(spinlock_t *lock,
+ const char *name,
+ struct lock_class_key *key)
{
#ifdef CONFIG_DEBUG_SPINLOCK
# if defined(CONFIG_PREEMPT_RT)
@@ -16,22 +17,22 @@ void rust_helper___spin_lock_init(spinlock_t *lock, const char *name,
#endif /* CONFIG_DEBUG_SPINLOCK */
}
-void rust_helper_spin_lock(spinlock_t *lock)
+__rust_helper void rust_helper_spin_lock(spinlock_t *lock)
{
spin_lock(lock);
}
-void rust_helper_spin_unlock(spinlock_t *lock)
+__rust_helper void rust_helper_spin_unlock(spinlock_t *lock)
{
spin_unlock(lock);
}
-int rust_helper_spin_trylock(spinlock_t *lock)
+__rust_helper int rust_helper_spin_trylock(spinlock_t *lock)
{
return spin_trylock(lock);
}
-void rust_helper_spin_assert_is_held(spinlock_t *lock)
+__rust_helper void rust_helper_spin_assert_is_held(spinlock_t *lock)
{
lockdep_assert_held(lock);
}
diff --git a/rust/helpers/sync.c b/rust/helpers/sync.c
index ff7e68b48810..82d6aff73b04 100644
--- a/rust/helpers/sync.c
+++ b/rust/helpers/sync.c
@@ -2,12 +2,12 @@
#include <linux/lockdep.h>
-void rust_helper_lockdep_register_key(struct lock_class_key *k)
+__rust_helper void rust_helper_lockdep_register_key(struct lock_class_key *k)
{
lockdep_register_key(k);
}
-void rust_helper_lockdep_unregister_key(struct lock_class_key *k)
+__rust_helper void rust_helper_lockdep_unregister_key(struct lock_class_key *k)
{
lockdep_unregister_key(k);
}
--
2.51.0
* [PATCH 32/36] rust: task: Add __rust_helper to helpers
2026-01-11 12:01 ` [PATCH 03/36] rust: sync: set_once: Implement Send and Sync Boqun Feng
` (27 preceding siblings ...)
2026-01-11 12:02 ` [PATCH 31/36] rust: sync: " Boqun Feng
@ 2026-01-11 12:02 ` Boqun Feng
2026-01-11 12:02 ` [PATCH 33/36] rust: time: " Boqun Feng
` (3 subsequent siblings)
32 siblings, 0 replies; 39+ messages in thread
From: Boqun Feng @ 2026-01-11 12:02 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar
Cc: rust-for-linux, linux-kernel, Will Deacon, Mark Rutland,
Thomas Gleixner, Miguel Ojeda, Gary Guo, Alice Ryhl,
Andreas Hindborg, Benno Lossin, Danilo Krummrich, Boqun Feng
From: Alice Ryhl <aliceryhl@google.com>
This is needed to inline these helpers into Rust code.
Reviewed-by: Boqun Feng <boqun.feng@gmail.com>
Reviewed-by: Gary Guo <gary@garyguo.net>
Signed-off-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://patch.msgid.link/20260105-define-rust-helper-v2-21-51da5f454a67@google.com
---
rust/helpers/signal.c | 2 +-
rust/helpers/task.c | 24 ++++++++++++------------
2 files changed, 13 insertions(+), 13 deletions(-)
diff --git a/rust/helpers/signal.c b/rust/helpers/signal.c
index 1a6bbe9438e2..85111186cf3d 100644
--- a/rust/helpers/signal.c
+++ b/rust/helpers/signal.c
@@ -2,7 +2,7 @@
#include <linux/sched/signal.h>
-int rust_helper_signal_pending(struct task_struct *t)
+__rust_helper int rust_helper_signal_pending(struct task_struct *t)
{
return signal_pending(t);
}
diff --git a/rust/helpers/task.c b/rust/helpers/task.c
index 2c85bbc2727e..c0e1a06ede78 100644
--- a/rust/helpers/task.c
+++ b/rust/helpers/task.c
@@ -3,60 +3,60 @@
#include <linux/kernel.h>
#include <linux/sched/task.h>
-void rust_helper_might_resched(void)
+__rust_helper void rust_helper_might_resched(void)
{
might_resched();
}
-struct task_struct *rust_helper_get_current(void)
+__rust_helper struct task_struct *rust_helper_get_current(void)
{
return current;
}
-void rust_helper_get_task_struct(struct task_struct *t)
+__rust_helper void rust_helper_get_task_struct(struct task_struct *t)
{
get_task_struct(t);
}
-void rust_helper_put_task_struct(struct task_struct *t)
+__rust_helper void rust_helper_put_task_struct(struct task_struct *t)
{
put_task_struct(t);
}
-kuid_t rust_helper_task_uid(struct task_struct *task)
+__rust_helper kuid_t rust_helper_task_uid(struct task_struct *task)
{
return task_uid(task);
}
-kuid_t rust_helper_task_euid(struct task_struct *task)
+__rust_helper kuid_t rust_helper_task_euid(struct task_struct *task)
{
return task_euid(task);
}
#ifndef CONFIG_USER_NS
-uid_t rust_helper_from_kuid(struct user_namespace *to, kuid_t uid)
+__rust_helper uid_t rust_helper_from_kuid(struct user_namespace *to, kuid_t uid)
{
return from_kuid(to, uid);
}
#endif /* CONFIG_USER_NS */
-bool rust_helper_uid_eq(kuid_t left, kuid_t right)
+__rust_helper bool rust_helper_uid_eq(kuid_t left, kuid_t right)
{
return uid_eq(left, right);
}
-kuid_t rust_helper_current_euid(void)
+__rust_helper kuid_t rust_helper_current_euid(void)
{
return current_euid();
}
-struct user_namespace *rust_helper_current_user_ns(void)
+__rust_helper struct user_namespace *rust_helper_current_user_ns(void)
{
return current_user_ns();
}
-pid_t rust_helper_task_tgid_nr_ns(struct task_struct *tsk,
- struct pid_namespace *ns)
+__rust_helper pid_t rust_helper_task_tgid_nr_ns(struct task_struct *tsk,
+ struct pid_namespace *ns)
{
return task_tgid_nr_ns(tsk, ns);
}
--
2.51.0
* [PATCH 33/36] rust: time: Add __rust_helper to helpers
2026-01-11 12:01 ` [PATCH 03/36] rust: sync: set_once: Implement Send and Sync Boqun Feng
` (28 preceding siblings ...)
2026-01-11 12:02 ` [PATCH 32/36] rust: task: " Boqun Feng
@ 2026-01-11 12:02 ` Boqun Feng
2026-01-11 12:02 ` [PATCH 34/36] rust: wait: " Boqun Feng
` (2 subsequent siblings)
32 siblings, 0 replies; 39+ messages in thread
From: Boqun Feng @ 2026-01-11 12:02 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar
Cc: rust-for-linux, linux-kernel, Will Deacon, Mark Rutland,
Thomas Gleixner, Miguel Ojeda, Gary Guo, Alice Ryhl,
Andreas Hindborg, Benno Lossin, Danilo Krummrich, Boqun Feng
From: Alice Ryhl <aliceryhl@google.com>
This is needed to inline these helpers into Rust code.
Reviewed-by: Boqun Feng <boqun.feng@gmail.com>
Reviewed-by: Gary Guo <gary@garyguo.net>
Signed-off-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://patch.msgid.link/20260105-define-rust-helper-v2-22-51da5f454a67@google.com
---
rust/helpers/time.c | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/rust/helpers/time.c b/rust/helpers/time.c
index 67a36ccc3ec4..32f495970493 100644
--- a/rust/helpers/time.c
+++ b/rust/helpers/time.c
@@ -4,37 +4,37 @@
#include <linux/ktime.h>
#include <linux/timekeeping.h>
-void rust_helper_fsleep(unsigned long usecs)
+__rust_helper void rust_helper_fsleep(unsigned long usecs)
{
fsleep(usecs);
}
-ktime_t rust_helper_ktime_get_real(void)
+__rust_helper ktime_t rust_helper_ktime_get_real(void)
{
return ktime_get_real();
}
-ktime_t rust_helper_ktime_get_boottime(void)
+__rust_helper ktime_t rust_helper_ktime_get_boottime(void)
{
return ktime_get_boottime();
}
-ktime_t rust_helper_ktime_get_clocktai(void)
+__rust_helper ktime_t rust_helper_ktime_get_clocktai(void)
{
return ktime_get_clocktai();
}
-s64 rust_helper_ktime_to_us(const ktime_t kt)
+__rust_helper s64 rust_helper_ktime_to_us(const ktime_t kt)
{
return ktime_to_us(kt);
}
-s64 rust_helper_ktime_to_ms(const ktime_t kt)
+__rust_helper s64 rust_helper_ktime_to_ms(const ktime_t kt)
{
return ktime_to_ms(kt);
}
-void rust_helper_udelay(unsigned long usec)
+__rust_helper void rust_helper_udelay(unsigned long usec)
{
udelay(usec);
}
--
2.51.0
* [PATCH 34/36] rust: wait: Add __rust_helper to helpers
2026-01-11 12:01 ` [PATCH 03/36] rust: sync: set_once: Implement Send and Sync Boqun Feng
` (29 preceding siblings ...)
2026-01-11 12:02 ` [PATCH 33/36] rust: time: " Boqun Feng
@ 2026-01-11 12:02 ` Boqun Feng
2026-01-11 12:02 ` [PATCH 35/36] rust: helpers: Move #define __rust_helper out of atomic.c Boqun Feng
2026-01-11 12:02 ` [PATCH 36/36] rust: sync: Inline various lock related methods Boqun Feng
32 siblings, 0 replies; 39+ messages in thread
From: Boqun Feng @ 2026-01-11 12:02 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar
Cc: rust-for-linux, linux-kernel, Will Deacon, Mark Rutland,
Thomas Gleixner, Miguel Ojeda, Gary Guo, Alice Ryhl,
Andreas Hindborg, Benno Lossin, Danilo Krummrich, Boqun Feng
From: Alice Ryhl <aliceryhl@google.com>
This is needed to inline these helpers into Rust code.
Reviewed-by: Boqun Feng <boqun.feng@gmail.com>
Reviewed-by: Gary Guo <gary@garyguo.net>
Signed-off-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://patch.msgid.link/20260105-define-rust-helper-v2-25-51da5f454a67@google.com
---
rust/helpers/wait.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/rust/helpers/wait.c b/rust/helpers/wait.c
index ae48e33d9da3..2dde1e451780 100644
--- a/rust/helpers/wait.c
+++ b/rust/helpers/wait.c
@@ -2,7 +2,7 @@
#include <linux/wait.h>
-void rust_helper_init_wait(struct wait_queue_entry *wq_entry)
+__rust_helper void rust_helper_init_wait(struct wait_queue_entry *wq_entry)
{
init_wait(wq_entry);
}
--
2.51.0
* [PATCH 35/36] rust: helpers: Move #define __rust_helper out of atomic.c
2026-01-11 12:01 ` [PATCH 03/36] rust: sync: set_once: Implement Send and Sync Boqun Feng
` (30 preceding siblings ...)
2026-01-11 12:02 ` [PATCH 34/36] rust: wait: " Boqun Feng
@ 2026-01-11 12:02 ` Boqun Feng
2026-01-11 12:02 ` [PATCH 36/36] rust: sync: Inline various lock related methods Boqun Feng
32 siblings, 0 replies; 39+ messages in thread
From: Boqun Feng @ 2026-01-11 12:02 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar
Cc: rust-for-linux, linux-kernel, Will Deacon, Mark Rutland,
Thomas Gleixner, Miguel Ojeda, Gary Guo, Alice Ryhl,
Andreas Hindborg, Benno Lossin, Danilo Krummrich, Boqun Feng
From: Alice Ryhl <aliceryhl@google.com>
In order to support inline helpers [1], we need to have __rust_helper
defined for all helper files. Currently we are lucky that atomic.c is
the first file included in helpers.c, but this is fragile. Thus, move
the definition to helpers.c.
[boqun: Reword the commit message and apply file hash changes]
Link: https://lore.kernel.org/r/20260105-define-rust-helper-v2-0-51da5f454a67@google.com [1]
Signed-off-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://patch.msgid.link/20260107-move-rust_helper-define-v1-1-4109d58ef275@google.com
---
rust/helpers/atomic.c | 7 +------
rust/helpers/helpers.c | 2 ++
scripts/atomic/gen-rust-atomic-helpers.sh | 5 -----
3 files changed, 3 insertions(+), 11 deletions(-)
diff --git a/rust/helpers/atomic.c b/rust/helpers/atomic.c
index cf06b7ef9a1c..4b24eceef5fc 100644
--- a/rust/helpers/atomic.c
+++ b/rust/helpers/atomic.c
@@ -11,11 +11,6 @@
#include <linux/atomic.h>
-// TODO: Remove this after INLINE_HELPERS support is added.
-#ifndef __rust_helper
-#define __rust_helper
-#endif
-
__rust_helper int
rust_helper_atomic_read(const atomic_t *v)
{
@@ -1037,4 +1032,4 @@ rust_helper_atomic64_dec_if_positive(atomic64_t *v)
}
#endif /* _RUST_ATOMIC_API_H */
-// 615a0e0c98b5973a47fe4fa65e92935051ca00ed
+// e4edb6174dd42a265284958f00a7cea7ddb464b1
diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c
index 15d75578f459..a3c42e51f00a 100644
--- a/rust/helpers/helpers.c
+++ b/rust/helpers/helpers.c
@@ -7,6 +7,8 @@
* Sorted alphabetically.
*/
+#define __rust_helper
+
#include "atomic.c"
#include "atomic_ext.c"
#include "auxiliary.c"
diff --git a/scripts/atomic/gen-rust-atomic-helpers.sh b/scripts/atomic/gen-rust-atomic-helpers.sh
index 45b1e100ed7c..a3732153af29 100755
--- a/scripts/atomic/gen-rust-atomic-helpers.sh
+++ b/scripts/atomic/gen-rust-atomic-helpers.sh
@@ -47,11 +47,6 @@ cat << EOF
#include <linux/atomic.h>
-// TODO: Remove this after INLINE_HELPERS support is added.
-#ifndef __rust_helper
-#define __rust_helper
-#endif
-
EOF
grep '^[a-z]' "$1" | while read name meta args; do
--
2.51.0
* [PATCH 36/36] rust: sync: Inline various lock related methods
2026-01-11 12:01 ` [PATCH 03/36] rust: sync: set_once: Implement Send and Sync Boqun Feng
` (31 preceding siblings ...)
2026-01-11 12:02 ` [PATCH 35/36] rust: helpers: Move #define __rust_helper out of atomic.c Boqun Feng
@ 2026-01-11 12:02 ` Boqun Feng
32 siblings, 0 replies; 39+ messages in thread
From: Boqun Feng @ 2026-01-11 12:02 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar
Cc: rust-for-linux, linux-kernel, Will Deacon, Mark Rutland,
Thomas Gleixner, Miguel Ojeda, Gary Guo, Alice Ryhl,
Andreas Hindborg, Benno Lossin, Danilo Krummrich, Daniel Almeida,
Boqun Feng
From: Alice Ryhl <aliceryhl@google.com>
While debugging a different issue [1], the following relocation was
noticed in the rust_binder.ko file:
R_AARCH64_CALL26 _RNvXNtNtNtCsdfZWD8DztAw_6kernel4sync4lock8spinlockNtB2_15SpinLockBackendNtB4_7Backend6unlock
This relocation (and a similar one for lock) occurred many times
throughout the module. That is not really useful because all this
function does is call spin_unlock(), so what we actually want here is
that a direct call to spin_unlock() is generated instead of a call to
this wrapper method.
Thus, mark these methods inline.
[boqun: Reword the commit message a bit]
Link: https://lore.kernel.org/p/20251111-binder-fix-list-remove-v1-0-8ed14a0da63d@google.com
Signed-off-by: Alice Ryhl <aliceryhl@google.com>
Reviewed-by: Gary Guo <gary@garyguo.net>
Reviewed-by: Daniel Almeida <daniel.almeida@collabora.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://patch.msgid.link/20251218-inline-lock-unlock-v2-1-fbadac8bd61b@google.com
---
rust/kernel/sync/lock.rs | 7 +++++++
rust/kernel/sync/lock/global.rs | 2 ++
rust/kernel/sync/lock/mutex.rs | 5 +++++
rust/kernel/sync/lock/spinlock.rs | 5 +++++
4 files changed, 19 insertions(+)
diff --git a/rust/kernel/sync/lock.rs b/rust/kernel/sync/lock.rs
index 46a57d1fc309..10b6b5e9b024 100644
--- a/rust/kernel/sync/lock.rs
+++ b/rust/kernel/sync/lock.rs
@@ -156,6 +156,7 @@ impl<B: Backend> Lock<(), B> {
/// the whole lifetime of `'a`.
///
/// [`State`]: Backend::State
+ #[inline]
pub unsafe fn from_raw<'a>(ptr: *mut B::State) -> &'a Self {
// SAFETY:
// - By the safety contract `ptr` must point to a valid initialised instance of `B::State`
@@ -169,6 +170,7 @@ pub unsafe fn from_raw<'a>(ptr: *mut B::State) -> &'a Self {
impl<T: ?Sized, B: Backend> Lock<T, B> {
/// Acquires the lock and gives the caller access to the data protected by it.
+ #[inline]
pub fn lock(&self) -> Guard<'_, T, B> {
// SAFETY: The constructor of the type calls `init`, so the existence of the object proves
// that `init` was called.
@@ -182,6 +184,7 @@ pub fn lock(&self) -> Guard<'_, T, B> {
/// Returns a guard that can be used to access the data protected by the lock if successful.
// `Option<T>` is not `#[must_use]` even if `T` is, thus the attribute is needed here.
#[must_use = "if unused, the lock will be immediately unlocked"]
+ #[inline]
pub fn try_lock(&self) -> Option<Guard<'_, T, B>> {
// SAFETY: The constructor of the type calls `init`, so the existence of the object proves
// that `init` was called.
@@ -275,6 +278,7 @@ pub fn as_mut(&mut self) -> Pin<&mut T> {
impl<T: ?Sized, B: Backend> core::ops::Deref for Guard<'_, T, B> {
type Target = T;
+ #[inline]
fn deref(&self) -> &Self::Target {
// SAFETY: The caller owns the lock, so it is safe to deref the protected data.
unsafe { &*self.lock.data.get() }
@@ -285,6 +289,7 @@ impl<T: ?Sized, B: Backend> core::ops::DerefMut for Guard<'_, T, B>
where
T: Unpin,
{
+ #[inline]
fn deref_mut(&mut self) -> &mut Self::Target {
// SAFETY: The caller owns the lock, so it is safe to deref the protected data.
unsafe { &mut *self.lock.data.get() }
@@ -292,6 +297,7 @@ fn deref_mut(&mut self) -> &mut Self::Target {
}
impl<T: ?Sized, B: Backend> Drop for Guard<'_, T, B> {
+ #[inline]
fn drop(&mut self) {
// SAFETY: The caller owns the lock, so it is safe to unlock it.
unsafe { B::unlock(self.lock.state.get(), &self.state) };
@@ -304,6 +310,7 @@ impl<'a, T: ?Sized, B: Backend> Guard<'a, T, B> {
/// # Safety
///
/// The caller must ensure that it owns the lock.
+ #[inline]
pub unsafe fn new(lock: &'a Lock<T, B>, state: B::GuardState) -> Self {
// SAFETY: The caller can only hold the lock if `Backend::init` has already been called.
unsafe { B::assert_is_held(lock.state.get()) };
diff --git a/rust/kernel/sync/lock/global.rs b/rust/kernel/sync/lock/global.rs
index eab48108a4ae..aecbdc34738f 100644
--- a/rust/kernel/sync/lock/global.rs
+++ b/rust/kernel/sync/lock/global.rs
@@ -77,6 +77,7 @@ pub unsafe fn init(&'static self) {
}
/// Lock this global lock.
+ #[inline]
pub fn lock(&'static self) -> GlobalGuard<B> {
GlobalGuard {
inner: self.inner.lock(),
@@ -84,6 +85,7 @@ pub fn lock(&'static self) -> GlobalGuard<B> {
}
/// Try to lock this global lock.
+ #[inline]
pub fn try_lock(&'static self) -> Option<GlobalGuard<B>> {
Some(GlobalGuard {
inner: self.inner.try_lock()?,
diff --git a/rust/kernel/sync/lock/mutex.rs b/rust/kernel/sync/lock/mutex.rs
index 581cee7ab842..cda0203efefb 100644
--- a/rust/kernel/sync/lock/mutex.rs
+++ b/rust/kernel/sync/lock/mutex.rs
@@ -102,6 +102,7 @@ unsafe impl super::Backend for MutexBackend {
type State = bindings::mutex;
type GuardState = ();
+ #[inline]
unsafe fn init(
ptr: *mut Self::State,
name: *const crate::ffi::c_char,
@@ -112,18 +113,21 @@ unsafe fn init(
unsafe { bindings::__mutex_init(ptr, name, key) }
}
+ #[inline]
unsafe fn lock(ptr: *mut Self::State) -> Self::GuardState {
// SAFETY: The safety requirements of this function ensure that `ptr` points to valid
// memory, and that it has been initialised before.
unsafe { bindings::mutex_lock(ptr) };
}
+ #[inline]
unsafe fn unlock(ptr: *mut Self::State, _guard_state: &Self::GuardState) {
// SAFETY: The safety requirements of this function ensure that `ptr` is valid and that the
// caller is the owner of the mutex.
unsafe { bindings::mutex_unlock(ptr) };
}
+ #[inline]
unsafe fn try_lock(ptr: *mut Self::State) -> Option<Self::GuardState> {
// SAFETY: The `ptr` pointer is guaranteed to be valid and initialized before use.
let result = unsafe { bindings::mutex_trylock(ptr) };
@@ -135,6 +139,7 @@ unsafe fn try_lock(ptr: *mut Self::State) -> Option<Self::GuardState> {
}
}
+ #[inline]
unsafe fn assert_is_held(ptr: *mut Self::State) {
// SAFETY: The `ptr` pointer is guaranteed to be valid and initialized before use.
unsafe { bindings::mutex_assert_is_held(ptr) }
diff --git a/rust/kernel/sync/lock/spinlock.rs b/rust/kernel/sync/lock/spinlock.rs
index d7be38ccbdc7..ef76fa07ca3a 100644
--- a/rust/kernel/sync/lock/spinlock.rs
+++ b/rust/kernel/sync/lock/spinlock.rs
@@ -101,6 +101,7 @@ unsafe impl super::Backend for SpinLockBackend {
type State = bindings::spinlock_t;
type GuardState = ();
+ #[inline]
unsafe fn init(
ptr: *mut Self::State,
name: *const crate::ffi::c_char,
@@ -111,18 +112,21 @@ unsafe fn init(
unsafe { bindings::__spin_lock_init(ptr, name, key) }
}
+ #[inline]
unsafe fn lock(ptr: *mut Self::State) -> Self::GuardState {
// SAFETY: The safety requirements of this function ensure that `ptr` points to valid
// memory, and that it has been initialised before.
unsafe { bindings::spin_lock(ptr) }
}
+ #[inline]
unsafe fn unlock(ptr: *mut Self::State, _guard_state: &Self::GuardState) {
// SAFETY: The safety requirements of this function ensure that `ptr` is valid and that the
// caller is the owner of the spinlock.
unsafe { bindings::spin_unlock(ptr) }
}
+ #[inline]
unsafe fn try_lock(ptr: *mut Self::State) -> Option<Self::GuardState> {
// SAFETY: The `ptr` pointer is guaranteed to be valid and initialized before use.
let result = unsafe { bindings::spin_trylock(ptr) };
@@ -134,6 +138,7 @@ unsafe fn try_lock(ptr: *mut Self::State) -> Option<Self::GuardState> {
}
}
+ #[inline]
unsafe fn assert_is_held(ptr: *mut Self::State) {
// SAFETY: The `ptr` pointer is guaranteed to be valid and initialized before use.
unsafe { bindings::spin_assert_is_held(ptr) }
--
2.51.0
* Re: [GIT PULL][PATCH 00/36] Rust synchronization changes for v7.0
2026-01-11 11:57 [GIT PULL][PATCH 00/36] Rust synchronization changes for v7.0 Boqun Feng
` (2 preceding siblings ...)
2026-01-11 12:01 ` [PATCH 03/36] rust: sync: set_once: Implement Send and Sync Boqun Feng
@ 2026-01-13 10:10 ` Boqun Feng
2026-01-13 10:43 ` Peter Zijlstra
4 siblings, 0 replies; 39+ messages in thread
From: Boqun Feng @ 2026-01-13 10:10 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar
Cc: rust-for-linux, linux-kernel, Will Deacon, Mark Rutland,
Thomas Gleixner, Miguel Ojeda, Gary Guo, Alice Ryhl,
Andreas Hindborg, Benno Lossin, Danilo Krummrich
On Sun, Jan 11, 2026 at 07:57:32PM +0800, Boqun Feng wrote:
> Peter,
>
Kindly ping ;-)
Regards,
Boqun
> Please pull the following changes of Rust synchronization for 7.0 into
> tip/locking/core, you can find the details of the changes in the git tag
> message below.
>
> Thanks!
>
> Regards,
> Boqun
>
> ----------------------------------------------------------------
>
> The following changes since commit a45026cef17d1080c985adf28234d6c8475ad66f:
>
> locking/local_lock: Include more missing headers (2026-01-08 11:21:57 +0100)
>
> are available in the Git repository at:
>
> https://git.kernel.org/pub/scm/linux/kernel/git/boqun/linux.git/ tags/rust-sync-7.0
>
> for you to fetch changes up to ccf9e070116a81d29aae30db501d562c8efd1ed8:
>
> rust: sync: Inline various lock related methods (2026-01-10 10:53:46 +0800)
>
> ----------------------------------------------------------------
> Rust synchronization changes for v7.0:
>
> - Add support for Atomic<i8/i16/bool> and replace most Rust native AtomicBool
> usages with Atomic<bool>, and further switching will require Atomic<Flag>
> - Clean up LockClassKey and improve its docs
> - Add missing Send and Sync trait impl for SetOnce
> - Make ARef Unpin as it is supposed to be
> - Add __rust_helper to a few Rust helpers as a preparation for helper LTO
> - Inline various lock related functions to avoid additional function calls.
> -----BEGIN PGP SIGNATURE-----
>
> iQEzBAABCAAdFiEEj5IosQTPz8XU1wRHSXnow7UH+rgFAmljjIkACgkQSXnow7UH
> +rizPQgAi5rdVoIpjN9BaQtWVHcAwBhbD7WhboxDhsSdEl3yaw0E7OLML5IyupLP
> BUsrI5BAhwUaIpE/4PT9RePLCOeFqCKfz9eyQpb6uEwLVKcx8WESrItrlStqK8dG
> lAZEV07SwAWq/ARsgI02LZnyDQxxBrX8Q4FKZgglpaBxieVXmQjekcSF2W6s3lka
> qWXB7MU38D3DZjKr6Lpp8BjdI7qTNInEZDGtRPncIId+4Jj7V5IpEX/NThyrDLp1
> M0UzXOMzexIfeSm3oz95II6R+GeDpruI6pN8QDtljaTL0Al5/z5yO8Zj9KIPGAl4
> 9JRUJ0pNVrAUljjJ4ap8hIMPlOWqjw==
> =JOZ1
> -----END PGP SIGNATURE-----
>
> ----------------------------------------------------------------
[...]
* Re: [GIT PULL][PATCH 00/36] Rust synchronization changes for v7.0
2026-01-11 11:57 [GIT PULL][PATCH 00/36] Rust synchronization changes for v7.0 Boqun Feng
` (3 preceding siblings ...)
2026-01-13 10:10 ` [GIT PULL][PATCH 00/36] Rust synchronization changes for v7.0 Boqun Feng
@ 2026-01-13 10:43 ` Peter Zijlstra
4 siblings, 0 replies; 39+ messages in thread
From: Peter Zijlstra @ 2026-01-13 10:43 UTC (permalink / raw)
To: Boqun Feng
Cc: Ingo Molnar, rust-for-linux, linux-kernel, Will Deacon,
Mark Rutland, Thomas Gleixner, Miguel Ojeda, Gary Guo, Alice Ryhl,
Andreas Hindborg, Benno Lossin, Danilo Krummrich
On Sun, Jan 11, 2026 at 07:57:32PM +0800, Boqun Feng wrote:
> Peter,
>
> Please pull the following changes of Rust synchronization for 7.0 into
> tip/locking/core, you can find the details of the changes in the git tag
> message below.
Done! Thanks!
end of thread, other threads:[~2026-01-13 10:43 UTC | newest]
Thread overview: 39+ messages
2026-01-11 11:57 [GIT PULL][PATCH 00/36] Rust synchronization changes for v7.0 Boqun Feng
2026-01-11 11:57 ` [PATCH 01/36] rust: sync: Refactor static_lock_class!() macro Boqun Feng
2026-01-11 11:57 ` [PATCH 02/36] rust: sync: Clean up LockClassKey and its docs Boqun Feng
2026-01-11 12:01 ` [PATCH 03/36] rust: sync: set_once: Implement Send and Sync Boqun Feng
2026-01-11 12:01 ` [PATCH 04/36] rust: sync: Implement Unpin for ARef Boqun Feng
2026-01-11 12:01 ` [PATCH 05/36] rust: helpers: Add i8/i16 atomic_read_acquire/atomic_set_release helpers Boqun Feng
2026-01-11 12:01 ` [PATCH 06/36] rust: helpers: Add i8/i16 relaxed atomic helpers Boqun Feng
2026-01-11 12:01 ` [PATCH 07/36] rust: helpers: Add i8/i16 atomic xchg helpers Boqun Feng
2026-01-11 12:01 ` [PATCH 08/36] rust: helpers: Add i8/i16 atomic xchg_acquire helpers Boqun Feng
2026-01-11 12:01 ` [PATCH 09/36] rust: helpers: Add i8/i16 atomic xchg_release helpers Boqun Feng
2026-01-11 12:01 ` [PATCH 10/36] rust: helpers: Add i8/i16 atomic xchg_relaxed helpers Boqun Feng
2026-01-11 12:01 ` [PATCH 11/36] rust: helpers: Add i8/i16 atomic try_cmpxchg helpers Boqun Feng
2026-01-11 12:01 ` [PATCH 12/36] rust: helpers: Add i8/i16 atomic try_cmpxchg_acquire helpers Boqun Feng
2026-01-11 12:01 ` [PATCH 13/36] rust: helpers: Add i8/i16 atomic try_cmpxchg_release helpers Boqun Feng
2026-01-11 12:01 ` [PATCH 14/36] rust: helpers: Add i8/i16 atomic try_cmpxchg_relaxed helpers Boqun Feng
2026-01-11 12:01 ` [PATCH 15/36] rust: sync: atomic: Prepare AtomicOps macros for i8/i16 support Boqun Feng
2026-01-11 12:01 ` [PATCH 16/36] arch: um/x86: Select ARCH_SUPPORTS_ATOMIC_RMW for UML_X86 Boqun Feng
2026-01-11 12:01 ` [PATCH 17/36] rust: sync: atomic: Add i8/i16 load and store support Boqun Feng
2026-01-11 12:01 ` [PATCH 18/36] rust: sync: atomic: Add store_release/load_acquire tests Boqun Feng
2026-01-11 12:01 ` [PATCH 19/36] rust: sync: atomic: Add i8/i16 xchg and cmpxchg support Boqun Feng
2026-01-11 12:01 ` [PATCH 20/36] rust: sync: atomic: Add atomic bool support via i8 representation Boqun Feng
2026-01-11 12:01 ` [PATCH 21/36] rust: sync: atomic: Add atomic bool tests Boqun Feng
2026-01-11 12:01 ` [PATCH 22/36] rust: list: Switch to kernel::sync atomic primitives Boqun Feng
2026-01-11 12:01 ` [PATCH 23/36] rust_binder: " Boqun Feng
2026-01-11 12:01 ` [PATCH 24/36] rust: barrier: Add __rust_helper to helpers Boqun Feng
2026-01-11 12:01 ` [PATCH 25/36] rust: blk: " Boqun Feng
2026-01-11 12:01 ` [PATCH 26/36] rust: completion: " Boqun Feng
2026-01-11 12:02 ` [PATCH 27/36] rust: cpu: " Boqun Feng
2026-01-11 12:02 ` [PATCH 28/36] rust: processor: " Boqun Feng
2026-01-11 12:02 ` [PATCH 29/36] rust: rcu: " Boqun Feng
2026-01-11 12:02 ` [PATCH 30/36] rust: refcount: " Boqun Feng
2026-01-11 12:02 ` [PATCH 31/36] rust: sync: " Boqun Feng
2026-01-11 12:02 ` [PATCH 32/36] rust: task: " Boqun Feng
2026-01-11 12:02 ` [PATCH 33/36] rust: time: " Boqun Feng
2026-01-11 12:02 ` [PATCH 34/36] rust: wait: " Boqun Feng
2026-01-11 12:02 ` [PATCH 35/36] rust: helpers: Move #define __rust_helper out of atomic.c Boqun Feng
2026-01-11 12:02 ` [PATCH 36/36] rust: sync: Inline various lock related methods Boqun Feng
2026-01-13 10:10 ` [GIT PULL][PATCH 00/36] Rust synchronization changes for v7.0 Boqun Feng
2026-01-13 10:43 ` Peter Zijlstra