* [PATCH v1 0/2] rust: sync: Add AtomicFlag type
@ 2026-01-28 11:51 FUJITA Tomonori
2026-01-28 11:51 ` [PATCH v1 1/2] rust: sync: atomic: Add performance-optimal Flag type for atomic booleans FUJITA Tomonori
2026-01-28 11:52 ` [PATCH v1 2/2] rust: list: Use AtomicFlag in AtomicTracker FUJITA Tomonori
0 siblings, 2 replies; 9+ messages in thread
From: FUJITA Tomonori @ 2026-01-28 11:51 UTC (permalink / raw)
To: boqun.feng, ojeda, peterz, will
Cc: a.hindborg, aliceryhl, bjorn3_gh, dakr, gary, lossin,
mark.rutland, tmgross, rust-for-linux, FUJITA Tomonori
From: FUJITA Tomonori <fujita.tomonori@gmail.com>
This series adds AtomicFlag and switches the list atomic tracker to
use it.
Unlike the previous design, we avoid exposing Atomic<Flag> and always
use AtomicFlag.
FUJITA Tomonori (2):
rust: sync: atomic: Add performance-optimal Flag type for atomic
booleans
rust: list: Use AtomicFlag in AtomicTracker
rust/kernel/list/arc.rs | 8 +-
rust/kernel/sync/atomic.rs | 121 +++++++++++++++++++++++++++
rust/kernel/sync/atomic/predefine.rs | 17 ++++
3 files changed, 142 insertions(+), 4 deletions(-)
base-commit: 6583920e15fc567109e1c64ca58c917f52f40736
--
2.43.0
^ permalink raw reply [flat|nested] 9+ messages in thread
* [PATCH v1 1/2] rust: sync: atomic: Add performance-optimal Flag type for atomic booleans
2026-01-28 11:51 [PATCH v1 0/2] rust: sync: Add AtomicFlag type FUJITA Tomonori
@ 2026-01-28 11:51 ` FUJITA Tomonori
2026-01-28 13:41 ` Gary Guo
2026-01-28 11:52 ` [PATCH v1 2/2] rust: list: Use AtomicFlag in AtomicTracker FUJITA Tomonori
1 sibling, 1 reply; 9+ messages in thread
From: FUJITA Tomonori @ 2026-01-28 11:51 UTC (permalink / raw)
To: boqun.feng, ojeda, peterz, will
Cc: a.hindborg, aliceryhl, bjorn3_gh, dakr, gary, lossin,
mark.rutland, tmgross, rust-for-linux, FUJITA Tomonori
From: FUJITA Tomonori <fujita.tomonori@gmail.com>
Add AtomicFlag type for boolean flags.
Document when AtomicFlag is generally preferable to Atomic<bool>: in
particular, when RMW operations such as xchg()/cmpxchg() may be used
and minimizing memory usage is not the top priority. On some
architectures without byte-sized RMW instructions, Atomic<bool> can be
slower for RMW operations.
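As a rough userspace illustration of the trade-off (a sketch only: the `WordFlag` name and `AtomicU32` backing are illustrative stand-ins for the kernel's `Atomic<T>` API, not this patch's implementation), backing a boolean with a word-sized atomic keeps RMW operations on word-sized instructions:

```rust
use std::sync::atomic::{AtomicU32, Ordering};

// A word-sized boolean flag: the bool is backed by a 32-bit atomic, so
// swap()/compare_exchange() compile to word-sized RMW instructions even
// on architectures without byte-sized atomic RMW support.
struct WordFlag(AtomicU32);

impl WordFlag {
    const fn new(b: bool) -> Self {
        Self(AtomicU32::new(b as u32))
    }

    fn load(&self) -> bool {
        self.0.load(Ordering::Relaxed) != 0
    }

    fn xchg(&self, new: bool) -> bool {
        // swap() is a single word-sized RMW; a byte-sized flag could
        // instead require an LL/SC loop with shifting and masking.
        self.0.swap(new as u32, Ordering::Relaxed) != 0
    }
}
```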
Signed-off-by: FUJITA Tomonori <fujita.tomonori@gmail.com>
---
rust/kernel/sync/atomic.rs | 121 +++++++++++++++++++++++++++
rust/kernel/sync/atomic/predefine.rs | 17 ++++
2 files changed, 138 insertions(+)
diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index 4aebeacb961a..7d06193709c0 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -560,3 +560,124 @@ pub fn fetch_add<Rhs, Ordering: ordering::Ordering>(&self, v: Rhs, _: Ordering)
unsafe { from_repr(ret) }
}
}
+
+/// # Invariants
+///
+/// `padding` must be all zeroes.
+#[cfg(not(any(CONFIG_X86_64, CONFIG_UML, CONFIG_ARM, CONFIG_ARM64)))]
+#[repr(C, align(4))]
+#[derive(Clone, Copy)]
+struct Flag {
+ bool_field: bool,
+ padding: [u8; 3],
+}
+
+#[cfg(not(any(CONFIG_X86_64, CONFIG_UML, CONFIG_ARM, CONFIG_ARM64)))]
+impl Flag {
+ #[inline(always)]
+ const fn new(b: bool) -> Self {
+ // INVARIANT: `padding` is all zeroes.
+ Self {
+ bool_field: b,
+ padding: [0; 3],
+ }
+ }
+}
+
+// SAFETY: `Flag` and `i32` have the same size and alignment, and it's round-trip
+// transmutable to `i32`.
+#[cfg(not(any(CONFIG_X86_64, CONFIG_UML, CONFIG_ARM, CONFIG_ARM64)))]
+unsafe impl AtomicType for Flag {
+ type Repr = i32;
+}
+
+macro_rules! atomic_flag_doc {
+ () => {
+ concat!(
+ "An atomic flag type intended to be backed by a performance-optimal integer type.\n\n",
+ "The backing integer type is an implementation detail; it may vary by architecture and change\n",
+ "in the future.\n\n",
+ "[`AtomicFlag`] is generally preferable to [`Atomic<bool>`] when you need read-modify-write\n",
+ "(RMW) operations (e.g. [`Atomic::xchg()`]/[`Atomic::cmpxchg()`]) or when [`Atomic<bool>`] does\n",
+ "not save memory due to padding. On some architectures that do not support byte-sized atomic\n",
+ "RMW operations, RMW operations on [`Atomic<bool>`] are slower.\n\n",
+ "If you only use [`Atomic::load()`]/[`Atomic::store()`], [`Atomic<bool>`] is fine.\n\n",
+ "# Examples\n\n",
+ "```\n",
+ "use kernel::sync::atomic::{Atomic, AtomicFlag, Relaxed};\n\n",
+ "let flag = AtomicFlag::new(false);\n",
+ "assert_eq!(false, flag.load(Relaxed));\n",
+ "flag.store(true, Relaxed);\n",
+ "assert_eq!(true, flag.load(Relaxed));\n",
+ "```\n"
+ )
+ };
+}
+
+#[cfg(not(any(CONFIG_X86_64, CONFIG_UML, CONFIG_ARM, CONFIG_ARM64)))]
+#[doc = atomic_flag_doc!()]
+pub struct AtomicFlag(Atomic<Flag>);
+
+#[cfg(any(CONFIG_X86_64, CONFIG_UML, CONFIG_ARM, CONFIG_ARM64))]
+#[doc = atomic_flag_doc!()]
+pub type AtomicFlag = Atomic<bool>;
+
+#[cfg(not(any(CONFIG_X86_64, CONFIG_UML, CONFIG_ARM, CONFIG_ARM64)))]
+impl AtomicFlag {
+ /// Creates a new atomic flag.
+ #[inline(always)]
+ pub const fn new(b: bool) -> Self {
+ Self(Atomic::new(Flag::new(b)))
+ }
+
+ /// Returns a mutable reference to the underlying flag as a [`bool`].
+ ///
+ /// This is safe because a mutable reference to the atomic flag guarantees exclusive access.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// use kernel::sync::atomic::{AtomicFlag, Relaxed};
+ ///
+ /// let mut atomic_flag = AtomicFlag::new(false);
+ /// assert_eq!(false, atomic_flag.load(Relaxed));
+ /// *atomic_flag.get_mut() = true;
+ /// assert_eq!(true, atomic_flag.load(Relaxed));
+ /// ```
+ #[inline(always)]
+ pub fn get_mut(&mut self) -> &mut bool {
+ &mut self.0.get_mut().bool_field
+ }
+
+ /// Loads the value from the atomic flag.
+ #[inline(always)]
+ pub fn load<Ordering: ordering::AcquireOrRelaxed>(&self, o: Ordering) -> bool {
+ self.0.load(o).bool_field
+ }
+
+ /// Stores a value to the atomic flag.
+ #[inline(always)]
+ pub fn store<Ordering: ordering::ReleaseOrRelaxed>(&self, v: bool, o: Ordering) {
+ self.0.store(Flag::new(v), o);
+ }
+
+ /// Stores a value to the atomic flag and returns the previous value.
+ #[inline(always)]
+ pub fn xchg<Ordering: ordering::Ordering>(&self, new: bool, o: Ordering) -> bool {
+ self.0.xchg(Flag::new(new), o).bool_field
+ }
+
+ /// Stores a value to the atomic flag if the current value is equal to `old`.
+ #[inline(always)]
+ pub fn cmpxchg<Ordering: ordering::Ordering>(
+ &self,
+ old: bool,
+ new: bool,
+ o: Ordering,
+ ) -> Result<bool, bool> {
+ match self.0.cmpxchg(Flag::new(old), Flag::new(new), o) {
+ Ok(_) => Ok(old),
+ Err(f) => Err(f.bool_field),
+ }
+ }
+}
diff --git a/rust/kernel/sync/atomic/predefine.rs b/rust/kernel/sync/atomic/predefine.rs
index 42067c6a266c..d14e10544dcf 100644
--- a/rust/kernel/sync/atomic/predefine.rs
+++ b/rust/kernel/sync/atomic/predefine.rs
@@ -215,4 +215,21 @@ fn atomic_bool_tests() {
assert_eq!(false, x.load(Relaxed));
assert_eq!(Ok(false), x.cmpxchg(false, true, Full));
}
+
+ #[test]
+ fn atomic_flag_tests() {
+ let mut flag = AtomicFlag::new(false);
+
+ assert_eq!(false, flag.load(Relaxed));
+
+ *flag.get_mut() = true;
+ assert_eq!(true, flag.load(Relaxed));
+
+ assert_eq!(true, flag.xchg(false, Relaxed));
+ assert_eq!(false, flag.load(Relaxed));
+
+ *flag.get_mut() = true;
+ assert_eq!(Ok(true), flag.cmpxchg(true, false, Full));
+ assert_eq!(false, flag.load(Relaxed));
+ }
}
--
2.43.0
* [PATCH v1 2/2] rust: list: Use AtomicFlag in AtomicTracker
2026-01-28 11:51 [PATCH v1 0/2] rust: sync: Add AtomicFlag type FUJITA Tomonori
2026-01-28 11:51 ` [PATCH v1 1/2] rust: sync: atomic: Add performance-optimal Flag type for atomic booleans FUJITA Tomonori
@ 2026-01-28 11:52 ` FUJITA Tomonori
1 sibling, 0 replies; 9+ messages in thread
From: FUJITA Tomonori @ 2026-01-28 11:52 UTC (permalink / raw)
To: boqun.feng, ojeda, peterz, will
Cc: a.hindborg, aliceryhl, bjorn3_gh, dakr, gary, lossin,
mark.rutland, tmgross, rust-for-linux, FUJITA Tomonori
From: FUJITA Tomonori <fujita.tomonori@gmail.com>
Make AtomicTracker use AtomicFlag instead of Atomic<bool> to avoid
slow byte-sized RMW operations on architectures that lack native
byte-sized atomic instructions.
Reviewed-by: Gary Guo <gary@garyguo.net>
Signed-off-by: FUJITA Tomonori <fujita.tomonori@gmail.com>
---
rust/kernel/list/arc.rs | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/rust/kernel/list/arc.rs b/rust/kernel/list/arc.rs
index 2282f33913ee..5e84f500a3fe 100644
--- a/rust/kernel/list/arc.rs
+++ b/rust/kernel/list/arc.rs
@@ -6,7 +6,7 @@
use crate::alloc::{AllocError, Flags};
use crate::prelude::*;
-use crate::sync::atomic::{ordering, Atomic};
+use crate::sync::atomic::{ordering, AtomicFlag};
use crate::sync::{Arc, ArcBorrow, UniqueArc};
use core::marker::PhantomPinned;
use core::ops::Deref;
@@ -469,7 +469,7 @@ impl<T, U, const ID: u64> core::ops::DispatchFromDyn<ListArc<U, ID>> for ListArc
/// If the boolean is `false`, then there is no [`ListArc`] for this value.
#[repr(transparent)]
pub struct AtomicTracker<const ID: u64 = 0> {
- inner: Atomic<bool>,
+ inner: AtomicFlag,
// This value needs to be pinned to justify the INVARIANT: comment in `AtomicTracker::new`.
_pin: PhantomPinned,
}
@@ -480,12 +480,12 @@ pub fn new() -> impl PinInit<Self> {
// INVARIANT: Pin-init initializers can't be used on an existing `Arc`, so this value will
// not be constructed in an `Arc` that already has a `ListArc`.
Self {
- inner: Atomic::new(false),
+ inner: AtomicFlag::new(false),
_pin: PhantomPinned,
}
}
- fn project_inner(self: Pin<&mut Self>) -> &mut Atomic<bool> {
+ fn project_inner(self: Pin<&mut Self>) -> &mut AtomicFlag {
// SAFETY: The `inner` field is not structurally pinned, so we may obtain a mutable
// reference to it even if we only have a pinned reference to `self`.
unsafe { &mut Pin::into_inner_unchecked(self).inner }
--
2.43.0
* Re: [PATCH v1 1/2] rust: sync: atomic: Add performance-optimal Flag type for atomic booleans
2026-01-28 11:51 ` [PATCH v1 1/2] rust: sync: atomic: Add performance-optimal Flag type for atomic booleans FUJITA Tomonori
@ 2026-01-28 13:41 ` Gary Guo
2026-01-28 17:19 ` Boqun Feng
2026-01-29 0:40 ` FUJITA Tomonori
0 siblings, 2 replies; 9+ messages in thread
From: Gary Guo @ 2026-01-28 13:41 UTC (permalink / raw)
To: FUJITA Tomonori, boqun.feng, ojeda, peterz, will
Cc: a.hindborg, aliceryhl, bjorn3_gh, dakr, gary, lossin,
mark.rutland, tmgross, rust-for-linux, FUJITA Tomonori
On Wed Jan 28, 2026 at 11:51 AM GMT, FUJITA Tomonori wrote:
> From: FUJITA Tomonori <fujita.tomonori@gmail.com>
>
> Add AtomicFlag type for boolean flags.
>
> Document when AtomicFlag is generally preferable to Atomic<bool>: in
> particular, when RMW operations such as xchg()/cmpxchg() may be used
> and minimizing memory usage is not the top priority. On some
> architectures without byte-sized RMW instructions, Atomic<bool> can be
> slower for RMW operations.
>
> Signed-off-by: FUJITA Tomonori <fujita.tomonori@gmail.com>
> ---
> rust/kernel/sync/atomic.rs | 121 +++++++++++++++++++++++++++
> rust/kernel/sync/atomic/predefine.rs | 17 ++++
> 2 files changed, 138 insertions(+)
>
> diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
> index 4aebeacb961a..7d06193709c0 100644
> --- a/rust/kernel/sync/atomic.rs
> +++ b/rust/kernel/sync/atomic.rs
> @@ -560,3 +560,124 @@ pub fn fetch_add<Rhs, Ordering: ordering::Ordering>(&self, v: Rhs, _: Ordering)
> unsafe { from_repr(ret) }
> }
> }
> +
> +/// # Invariants
> +///
> +/// `padding` must be all zeroes.
> +#[cfg(not(any(CONFIG_X86_64, CONFIG_UML, CONFIG_ARM, CONFIG_ARM64)))]
This config repeats too much.
I think we should probably just not let `AtomicFlag` alias `Atomic<u8>` (this
has the benefit of creating a type mismatch, so code cannot rely on the alias
on x86 and then fail to compile on, say, RV).
This way the `struct Flag`, `struct AtomicFlag` and `impl AtomicFlag` would
always exist and the config would be needed in far fewer places (plus, you
wouldn't need the macro to avoid duplicating the docs).
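A rough userspace sketch of that shape (names hypothetical; `AtomicU32` stands in for the kernel's `Atomic<T>`, and the per-architecture `CONFIG_*` selection of the backing type is elided): the newtype exists on every configuration, so `AtomicFlag` can never unify with a plain atomic type and representation-dependent code fails to compile everywhere:

```rust
use std::sync::atomic::{AtomicU32, Ordering};

// The newtype always exists; in the kernel only the backing
// representation would be selected by CONFIG_* cfgs (fixed to a
// 32-bit word in this sketch).
pub struct AtomicFlag(AtomicU32);

impl AtomicFlag {
    pub const fn new(b: bool) -> Self {
        Self(AtomicU32::new(b as u32))
    }

    pub fn load(&self) -> bool {
        self.0.load(Ordering::Relaxed) != 0
    }

    pub fn store(&self, v: bool) {
        self.0.store(v as u32, Ordering::Relaxed);
    }
}

// Because `AtomicFlag` is a distinct type on every architecture, callers
// cannot accidentally treat it as the raw atomic integer on one arch and
// then fail to compile on another.
```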
> +#[repr(C, align(4))]
> +#[derive(Clone, Copy)]
> +struct Flag {
> + bool_field: bool,
> + padding: [u8; 3],
You probably still want, on big-endian platforms, to put the padding first, so
the generated instructions use small immediates (0 and 1) instead of 0 and
0x01000000.
Best,
Gary
> +}
> +
> +#[cfg(not(any(CONFIG_X86_64, CONFIG_UML, CONFIG_ARM, CONFIG_ARM64)))]
> +impl Flag {
> + #[inline(always)]
> + const fn new(b: bool) -> Self {
> + // INVARIANT: `padding` is all zeroes.
> + Self {
> + bool_field: b,
> + padding: [0; 3],
> + }
> + }
> +}
> +
> +// SAFETY: `Flag` and `i32` have the same size and alignment, and it's round-trip
> +// transmutable to `i32`.
> +#[cfg(not(any(CONFIG_X86_64, CONFIG_UML, CONFIG_ARM, CONFIG_ARM64)))]
> +unsafe impl AtomicType for Flag {
> + type Repr = i32;
> +}
> +
> +macro_rules! atomic_flag_doc {
> + () => {
> + concat!(
> + "An atomic flag type intended to be backed by a performance-optimal integer type.\n\n",
> + "The backing integer type is an implementation detail; it may vary by architecture and change\n",
> + "in the future.\n\n",
> + "[`AtomicFlag`] is generally preferable to [`Atomic<bool>`] when you need read-modify-write\n",
> + "(RMW) operations (e.g. [`Atomic::xchg()`]/[`Atomic::cmpxchg()`]) or when [`Atomic<bool>`] does\n",
> + "not save memory due to padding. On some architectures that do not support byte-sized atomic\n",
> + "RMW operations, RMW operations on [`Atomic<bool>`] are slower.\n\n",
> + "If you only use [`Atomic::load()`]/[`Atomic::store()`], [`Atomic<bool>`] is fine.\n\n",
> + "# Examples\n\n",
> + "```\n",
> + "use kernel::sync::atomic::{Atomic, AtomicFlag, Relaxed};\n\n",
> + "let flag = AtomicFlag::new(false);\n",
> + "assert_eq!(false, flag.load(Relaxed));\n",
> + "flag.store(true, Relaxed);\n",
> + "assert_eq!(true, flag.load(Relaxed));\n",
> + "```\n"
> + )
> + };
> +}
> +
> +#[cfg(not(any(CONFIG_X86_64, CONFIG_UML, CONFIG_ARM, CONFIG_ARM64)))]
> +#[doc = atomic_flag_doc!()]
> +pub struct AtomicFlag(Atomic<Flag>);
> +
> +#[cfg(any(CONFIG_X86_64, CONFIG_UML, CONFIG_ARM, CONFIG_ARM64))]
> +#[doc = atomic_flag_doc!()]
> +pub type AtomicFlag = Atomic<bool>;
> +
> +#[cfg(not(any(CONFIG_X86_64, CONFIG_UML, CONFIG_ARM, CONFIG_ARM64)))]
> +impl AtomicFlag {
> + /// Creates a new atomic flag.
> + #[inline(always)]
> + pub const fn new(b: bool) -> Self {
> + Self(Atomic::new(Flag::new(b)))
> + }
> +
> + /// Returns a mutable reference to the underlying flag as a [`bool`].
> + ///
> + /// This is safe because the mutable reference of the atomic flag guarantees exclusive access.
> + ///
> + /// # Examples
> + ///
> + /// ```
> + /// use kernel::sync::atomic::{AtomicFlag, Relaxed};
> + ///
> + /// let mut atomic_flag = AtomicFlag::new(false);
> + /// assert_eq!(false, atomic_flag.load(Relaxed));
> + /// *atomic_flag.get_mut() = true;
> + /// assert_eq!(true, atomic_flag.load(Relaxed));
> + /// ```
> + #[inline(always)]
> + pub fn get_mut(&mut self) -> &mut bool {
> + &mut self.0.get_mut().bool_field
> + }
> +
> + /// Loads the value from the atomic flag.
> + #[inline(always)]
> + pub fn load<Ordering: ordering::AcquireOrRelaxed>(&self, o: Ordering) -> bool {
> + self.0.load(o).bool_field
> + }
> +
> + /// Stores a value to the atomic flag.
> + #[inline(always)]
> + pub fn store<Ordering: ordering::ReleaseOrRelaxed>(&self, v: bool, o: Ordering) {
> + self.0.store(Flag::new(v), o);
> + }
> +
> + /// Stores a value to the atomic flag and returns the previous value.
> + #[inline(always)]
> + pub fn xchg<Ordering: ordering::Ordering>(&self, new: bool, o: Ordering) -> bool {
> + self.0.xchg(Flag::new(new), o).bool_field
> + }
> +
> + /// Store a value to the atomic flag if the current value is equal to `old`.
> + #[inline(always)]
> + pub fn cmpxchg<Ordering: ordering::Ordering>(
> + &self,
> + old: bool,
> + new: bool,
> + o: Ordering,
> + ) -> Result<bool, bool> {
> + match self.0.cmpxchg(Flag::new(old), Flag::new(new), o) {
> + Ok(_) => Ok(old),
> + Err(f) => Err(f.bool_field),
> + }
> + }
> +}
> diff --git a/rust/kernel/sync/atomic/predefine.rs b/rust/kernel/sync/atomic/predefine.rs
> index 42067c6a266c..d14e10544dcf 100644
> --- a/rust/kernel/sync/atomic/predefine.rs
> +++ b/rust/kernel/sync/atomic/predefine.rs
> @@ -215,4 +215,21 @@ fn atomic_bool_tests() {
> assert_eq!(false, x.load(Relaxed));
> assert_eq!(Ok(false), x.cmpxchg(false, true, Full));
> }
> +
> + #[test]
> + fn atomic_flag_tests() {
> + let mut flag = AtomicFlag::new(false);
> +
> + assert_eq!(false, flag.load(Relaxed));
> +
> + *flag.get_mut() = true;
> + assert_eq!(true, flag.load(Relaxed));
> +
> + assert_eq!(true, flag.xchg(false, Relaxed));
> + assert_eq!(false, flag.load(Relaxed));
> +
> + *flag.get_mut() = true;
> + assert_eq!(Ok(true), flag.cmpxchg(true, false, Full));
> + assert_eq!(false, flag.load(Relaxed));
> + }
> }
* Re: [PATCH v1 1/2] rust: sync: atomic: Add performance-optimal Flag type for atomic booleans
2026-01-28 13:41 ` Gary Guo
@ 2026-01-28 17:19 ` Boqun Feng
2026-01-28 18:07 ` Boqun Feng
2026-01-28 23:22 ` FUJITA Tomonori
2026-01-29 0:40 ` FUJITA Tomonori
1 sibling, 2 replies; 9+ messages in thread
From: Boqun Feng @ 2026-01-28 17:19 UTC (permalink / raw)
To: Gary Guo
Cc: FUJITA Tomonori, boqun.feng, ojeda, peterz, will, a.hindborg,
aliceryhl, bjorn3_gh, dakr, lossin, mark.rutland, tmgross,
rust-for-linux, FUJITA Tomonori
On Wed, Jan 28, 2026 at 01:41:41PM +0000, Gary Guo wrote:
> On Wed Jan 28, 2026 at 11:51 AM GMT, FUJITA Tomonori wrote:
> > From: FUJITA Tomonori <fujita.tomonori@gmail.com>
> >
> > Add AtomicFlag type for boolean flags.
> >
> > Document when AtomicFlag is generally preferable to Atomic<bool>: in
> > particular, when RMW operations such as xchg()/cmpxchg() may be used
> > and minimizing memory usage is not the top priority. On some
> > architectures without byte-sized RMW instructions, Atomic<bool> can be
> > slower for RMW operations.
> >
> > Signed-off-by: FUJITA Tomonori <fujita.tomonori@gmail.com>
> > ---
> > rust/kernel/sync/atomic.rs | 121 +++++++++++++++++++++++++++
> > rust/kernel/sync/atomic/predefine.rs | 17 ++++
> > 2 files changed, 138 insertions(+)
> >
> > diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
> > index 4aebeacb961a..7d06193709c0 100644
> > --- a/rust/kernel/sync/atomic.rs
> > +++ b/rust/kernel/sync/atomic.rs
> > @@ -560,3 +560,124 @@ pub fn fetch_add<Rhs, Ordering: ordering::Ordering>(&self, v: Rhs, _: Ordering)
> > unsafe { from_repr(ret) }
> > }
> > }
> > +
> > +/// # Invariants
> > +///
> > +/// `padding` must be all zeroes.
> > +#[cfg(not(any(CONFIG_X86_64, CONFIG_UML, CONFIG_ARM, CONFIG_ARM64)))]
>
> This config repeats too much.
>
> I think probably we should just not let `AtomicFlag` alias `Atomic<u8>` (this
> has the benefit of creating a type mismatch, so code cannot rely on this on x86
> and fail to compile on, say, RV).
>
> This way the `struct Flag`, `struct AtomicFlag` and `impl AtomicFlag` would
> always exist and the config is only needed for much fewer times (plus, you don't
> need to macro to avoid duplicating docs).
>
You probably still need configs for `#[repr(align(_))]` in that case, or two
definitions of `Flag` anyway. But yes, the duplicate docs can be avoided, as
can some of the impl blocks.
BTW, while we are at it, maybe we should list the arches that don't support
byte-wise atomic instructions here instead of the ones that do, i.e.
#[cfg(not(any(CONFIG_RISCV, CONFIG_LOONGARCH)))]
#[repr(C)]
#[derive(Clone, Copy)]
struct Flag {
bool_flag: bool,
}
#[cfg(any(CONFIG_RISCV, CONFIG_LOONGARCH))]
#[repr(C, align(4))]
#[derive(Clone, Copy)]
struct Flag {
#[cfg(target_endian = "big")]
padding: [u8; 3],
bool_flag: bool,
#[cfg(target_endian = "little")]
padding: [u8; 3],
}
and we should do
unsafe impl AtomicType for Flag {
#[cfg(any(CONFIG_RISCV, CONFIG_LOONGARCH))]
type Repr = i32;
#[cfg(not(any(CONFIG_RISCV, CONFIG_LOONGARCH)))]
type Repr = i8;
}
as well.
> > +#[repr(C, align(4))]
> > +#[derive(Clone, Copy)]
> > +struct Flag {
> > + bool_field: bool,
> > + padding: [u8; 3],
>
> You probably still want, on big endian platforms, put padding first, so the
> generated instruction uses small immediates (0 & 1) instead of 0 & 0x01000000.
>
> Best,
> Gary
>
> > +}
> > +
> > +#[cfg(not(any(CONFIG_X86_64, CONFIG_UML, CONFIG_ARM, CONFIG_ARM64)))]
> > +impl Flag {
> > + #[inline(always)]
> > + const fn new(b: bool) -> Self {
> > + // INVARIANT: `padding` is all zeroes.
> > + Self {
> > + bool_field: b,
> > + padding: [0; 3],
> > + }
> > + }
> > +}
> > +
> > +// SAFETY: `Flag` and `i32` have the same size and alignment, and it's round-trip
> > +// transmutable to `i32`.
> > +#[cfg(not(any(CONFIG_X86_64, CONFIG_UML, CONFIG_ARM, CONFIG_ARM64)))]
> > +unsafe impl AtomicType for Flag {
> > + type Repr = i32;
> > +}
> > +
> > +macro_rules! atomic_flag_doc {
> > + () => {
> > + concat!(
> > > + "An atomic flag type intended to be backed by a performance-optimal integer type.\n\n",
> > + "The backing integer type is an implementation detail; it may vary by architecture and change\n",
> > + "in the future.\n\n",
> > + "[`AtomicFlag`] is generally preferable to [`Atomic<bool>`] when you need read-modify-write\n",
> > + "(RMW) operations (e.g. [`Atomic::xchg()`]/[`Atomic::cmpxchg()`]) or when [`Atomic<bool>`] does\n",
> > + "not save memory due to padding. On some architectures that do not support byte-sized atomic\n",
> > + "RMW operations, RMW operations on [`Atomic<bool>`] are slower.\n\n",
> > + "If you only use [`Atomic::load()`]/[`Atomic::store()`], [`Atomic<bool>`] is fine.\n\n",
> > + "# Examples\n\n",
> > + "```\n",
> > + "use kernel::sync::atomic::{Atomic, AtomicFlag, Relaxed};\n\n",
> > + "let flag = AtomicFlag::new(false);\n",
> > + "assert_eq!(false, flag.load(Relaxed));\n",
> > + "flag.store(true, Relaxed);\n",
> > + "assert_eq!(true, flag.load(Relaxed));\n",
> > + "```\n"
> > + )
> > + };
> > +}
> > +
> > +#[cfg(not(any(CONFIG_X86_64, CONFIG_UML, CONFIG_ARM, CONFIG_ARM64)))]
> > +#[doc = atomic_flag_doc!()]
> > +pub struct AtomicFlag(Atomic<Flag>);
> > +
> > +#[cfg(any(CONFIG_X86_64, CONFIG_UML, CONFIG_ARM, CONFIG_ARM64))]
> > +#[doc = atomic_flag_doc!()]
> > +pub type AtomicFlag = Atomic<bool>;
> > +
> > +#[cfg(not(any(CONFIG_X86_64, CONFIG_UML, CONFIG_ARM, CONFIG_ARM64)))]
> > +impl AtomicFlag {
> > + /// Creates a new atomic flag.
> > + #[inline(always)]
> > + pub const fn new(b: bool) -> Self {
> > + Self(Atomic::new(Flag::new(b)))
> > + }
> > +
> > + /// Returns a mutable reference to the underlying flag as a [`bool`].
> > + ///
> > + /// This is safe because the mutable reference of the atomic flag guarantees exclusive access.
> > + ///
> > + /// # Examples
> > + ///
> > + /// ```
> > + /// use kernel::sync::atomic::{AtomicFlag, Relaxed};
> > + ///
> > + /// let mut atomic_flag = AtomicFlag::new(false);
> > + /// assert_eq!(false, atomic_flag.load(Relaxed));
> > + /// *atomic_flag.get_mut() = true;
> > + /// assert_eq!(true, atomic_flag.load(Relaxed));
> > + /// ```
> > + #[inline(always)]
> > + pub fn get_mut(&mut self) -> &mut bool {
> > + &mut self.0.get_mut().bool_field
> > + }
> > +
> > + /// Loads the value from the atomic flag.
> > + #[inline(always)]
> > + pub fn load<Ordering: ordering::AcquireOrRelaxed>(&self, o: Ordering) -> bool {
> > + self.0.load(o).bool_field
For load(), xchg() and cmpxchg(), I think we should not use
`.bool_field`, because we know that `padding` is always zero, but
compilers don't. Using `.bool_field` will make the compiler generate an
i32-to-i8 truncation or a bit-mask instruction to extract the boolean.
We need to implement `From<Flag>` for `bool` and use `.into()` here.
Regards,
Boqun
> > + }
> > +
> > + /// Stores a value to the atomic flag.
> > + #[inline(always)]
> > + pub fn store<Ordering: ordering::ReleaseOrRelaxed>(&self, v: bool, o: Ordering) {
> > + self.0.store(Flag::new(v), o);
> > + }
> > +
> > + /// Stores a value to the atomic flag and returns the previous value.
> > + #[inline(always)]
> > + pub fn xchg<Ordering: ordering::Ordering>(&self, new: bool, o: Ordering) -> bool {
> > + self.0.xchg(Flag::new(new), o).bool_field
> > + }
> > +
> > + /// Store a value to the atomic flag if the current value is equal to `old`.
> > + #[inline(always)]
> > + pub fn cmpxchg<Ordering: ordering::Ordering>(
> > + &self,
> > + old: bool,
> > + new: bool,
> > + o: Ordering,
> > + ) -> Result<bool, bool> {
> > + match self.0.cmpxchg(Flag::new(old), Flag::new(new), o) {
> > + Ok(_) => Ok(old),
> > + Err(f) => Err(f.bool_field),
> > + }
> > + }
> > +}
> > diff --git a/rust/kernel/sync/atomic/predefine.rs b/rust/kernel/sync/atomic/predefine.rs
> > index 42067c6a266c..d14e10544dcf 100644
> > --- a/rust/kernel/sync/atomic/predefine.rs
> > +++ b/rust/kernel/sync/atomic/predefine.rs
> > @@ -215,4 +215,21 @@ fn atomic_bool_tests() {
> > assert_eq!(false, x.load(Relaxed));
> > assert_eq!(Ok(false), x.cmpxchg(false, true, Full));
> > }
> > +
> > + #[test]
> > + fn atomic_flag_tests() {
> > + let mut flag = AtomicFlag::new(false);
> > +
> > + assert_eq!(false, flag.load(Relaxed));
> > +
> > + *flag.get_mut() = true;
> > + assert_eq!(true, flag.load(Relaxed));
> > +
> > + assert_eq!(true, flag.xchg(false, Relaxed));
> > + assert_eq!(false, flag.load(Relaxed));
> > +
> > + *flag.get_mut() = true;
> > + assert_eq!(Ok(true), flag.cmpxchg(true, false, Full));
> > + assert_eq!(false, flag.load(Relaxed));
> > + }
> > }
>
* Re: [PATCH v1 1/2] rust: sync: atomic: Add performance-optimal Flag type for atomic booleans
2026-01-28 17:19 ` Boqun Feng
@ 2026-01-28 18:07 ` Boqun Feng
2026-01-28 23:22 ` FUJITA Tomonori
1 sibling, 0 replies; 9+ messages in thread
From: Boqun Feng @ 2026-01-28 18:07 UTC (permalink / raw)
To: Gary Guo
Cc: FUJITA Tomonori, boqun.feng, ojeda, peterz, will, a.hindborg,
aliceryhl, bjorn3_gh, dakr, lossin, mark.rutland, tmgross,
rust-for-linux, FUJITA Tomonori
On Wed, Jan 28, 2026 at 09:19:14AM -0800, Boqun Feng wrote:
[...]
> > > + #[inline(always)]
> > > + pub fn get_mut(&mut self) -> &mut bool {
> > > + &mut self.0.get_mut().bool_field
> > > + }
> > > +
> > > + /// Loads the value from the atomic flag.
> > > + #[inline(always)]
> > > + pub fn load<Ordering: ordering::AcquireOrRelaxed>(&self, o: Ordering) -> bool {
> > > + self.0.load(o).bool_field
>
> For load(), xchg() and cmpxchg(), I think we should not use
> `.bool_field`, because we know that `padding` is always zero, but
> compilers don't. Using `.bool_field` will make the compiler generate i32
> to i8 or a bit mask instruction to get the boolean. We need to implement
> `From<Flag>` for `bool` and use `.into()` here.
>
Scratch this for a moment, I need to do some more investigation.
Regards,
Boqun
> Regards,
> Boqun
>
>
[..]
* Re: [PATCH v1 1/2] rust: sync: atomic: Add performance-optimal Flag type for atomic booleans
2026-01-28 17:19 ` Boqun Feng
2026-01-28 18:07 ` Boqun Feng
@ 2026-01-28 23:22 ` FUJITA Tomonori
2026-01-28 23:56 ` Boqun Feng
1 sibling, 1 reply; 9+ messages in thread
From: FUJITA Tomonori @ 2026-01-28 23:22 UTC (permalink / raw)
To: boqun
Cc: gary, tomo, boqun.feng, ojeda, peterz, will, a.hindborg,
aliceryhl, bjorn3_gh, dakr, lossin, mark.rutland, tmgross,
rust-for-linux, fujita.tomonori
On Wed, 28 Jan 2026 09:19:14 -0800
Boqun Feng <boqun@kernel.org> wrote:
> On Wed, Jan 28, 2026 at 01:41:41PM +0000, Gary Guo wrote:
>> On Wed Jan 28, 2026 at 11:51 AM GMT, FUJITA Tomonori wrote:
>> > From: FUJITA Tomonori <fujita.tomonori@gmail.com>
>> >
>> > Add AtomicFlag type for boolean flags.
>> >
>> > Document when AtomicFlag is generally preferable to Atomic<bool>: in
>> > particular, when RMW operations such as xchg()/cmpxchg() may be used
>> > and minimizing memory usage is not the top priority. On some
>> > architectures without byte-sized RMW instructions, Atomic<bool> can be
>> > slower for RMW operations.
>> >
>> > Signed-off-by: FUJITA Tomonori <fujita.tomonori@gmail.com>
>> > ---
>> > rust/kernel/sync/atomic.rs | 121 +++++++++++++++++++++++++++
>> > rust/kernel/sync/atomic/predefine.rs | 17 ++++
>> > 2 files changed, 138 insertions(+)
>> >
>> > diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
>> > index 4aebeacb961a..7d06193709c0 100644
>> > --- a/rust/kernel/sync/atomic.rs
>> > +++ b/rust/kernel/sync/atomic.rs
>> > @@ -560,3 +560,124 @@ pub fn fetch_add<Rhs, Ordering: ordering::Ordering>(&self, v: Rhs, _: Ordering)
>> > unsafe { from_repr(ret) }
>> > }
>> > }
>> > +
>> > +/// # Invariants
>> > +///
>> > +/// `padding` must be all zeroes.
>> > +#[cfg(not(any(CONFIG_X86_64, CONFIG_UML, CONFIG_ARM, CONFIG_ARM64)))]
>>
>> This config repeats too much.
>>
>> I think probably we should just not let `AtomicFlag` alias `Atomic<u8>` (this
>> has the benefit of creating a type mismatch, so code cannot rely on this on x86
>> and fail to compile on, say, RV).
>>
>> This way the `struct Flag`, `struct AtomicFlag` and `impl AtomicFlag` would
>> always exist and the config is only needed for much fewer times (plus, you don't
>> need to macro to avoid duplicating docs).
>>
>
> You probably still need configs for `#[repr(align(_))]` in that case or
> two definitions of `Flag` anyway. but yes the duplicate docs can be
> avoided, so are some impl blocks.
If we go with cfg_attr, we get something like the followings:
#[cfg_attr(any(CONFIG_X86_64, CONFIG_UML, CONFIG_ARM, CONFIG_ARM64), repr(C))]
#[cfg_attr(
not(any(CONFIG_X86_64, CONFIG_UML, CONFIG_ARM, CONFIG_ARM64)),
repr(C, align(4))
)]
#[derive(Clone, Copy)]
struct Flag {
#[cfg(all(
target_endian = "big",
not(any(CONFIG_X86_64, CONFIG_UML, CONFIG_ARM, CONFIG_ARM64))
))]
padding: [u8; 3],
bool_field: bool,
#[cfg(all(
target_endian = "little",
not(any(CONFIG_X86_64, CONFIG_UML, CONFIG_ARM, CONFIG_ARM64))
))]
padding: [u8; 3],
}
I think that having two definitions of `Flag` is more readable here.
> BTW, while we are at it, maybe we should use arches that don't support
> byte-wise atomic instructions here instead of the ones do, i.e.
>
> #[cfg(not(any(CONFIG_RISCV, CONFIG_LOONGARCH)))]
> #[repr(C)]
> #[derive(Clone, Copy)]
> struct Flag {
> bool_flag: bool,
> }
>
> #[cfg(any(CONFIG_RISCV, CONFIG_LOONGARCH))]
> #[repr(C, align(4))]
> #[derive(Clone, Copy)]
> struct Flag {
> #[cfg(target_endian = "big")]
> padding: [u8; 3],
> bool_flag: bool,
> #[cfg(target_endian = "little")]
> padding: [u8; 3],
> }
>
> and we should do
>
> unsafe impl AtomicType for Flag {
> #[cfg(any(CONFIG_RISCV, CONFIG_LOONGARCH))]
> type Repr = i32;
> #[cfg(not(any(CONFIG_RISCV, CONFIG_LOONGARCH)))]
> type Repr = i8;
> }
>
> as well.
Is the idea to list only arches without byte-wise atomics so new
byte-atomic-capable arches don't require updates?
* Re: [PATCH v1 1/2] rust: sync: atomic: Add performance-optimal Flag type for atomic booleans
2026-01-28 23:22 ` FUJITA Tomonori
@ 2026-01-28 23:56 ` Boqun Feng
0 siblings, 0 replies; 9+ messages in thread
From: Boqun Feng @ 2026-01-28 23:56 UTC (permalink / raw)
To: FUJITA Tomonori
Cc: gary, boqun.feng, ojeda, peterz, will, a.hindborg, aliceryhl,
bjorn3_gh, dakr, lossin, mark.rutland, tmgross, rust-for-linux,
fujita.tomonori
On Thu, Jan 29, 2026 at 08:22:43AM +0900, FUJITA Tomonori wrote:
> On Wed, 28 Jan 2026 09:19:14 -0800
> Boqun Feng <boqun@kernel.org> wrote:
>
> > On Wed, Jan 28, 2026 at 01:41:41PM +0000, Gary Guo wrote:
> >> On Wed Jan 28, 2026 at 11:51 AM GMT, FUJITA Tomonori wrote:
> >> > From: FUJITA Tomonori <fujita.tomonori@gmail.com>
> >> >
> >> > Add AtomicFlag type for boolean flags.
> >> >
> >> > Document when AtomicFlag is generally preferable to Atomic<bool>: in
> >> > particular, when RMW operations such as xchg()/cmpxchg() may be used
> >> > and minimizing memory usage is not the top priority. On some
> >> > architectures without byte-sized RMW instructions, Atomic<bool> can be
> >> > slower for RMW operations.
> >> >
> >> > Signed-off-by: FUJITA Tomonori <fujita.tomonori@gmail.com>
> >> > ---
> >> > rust/kernel/sync/atomic.rs | 121 +++++++++++++++++++++++++++
> >> > rust/kernel/sync/atomic/predefine.rs | 17 ++++
> >> > 2 files changed, 138 insertions(+)
> >> >
> >> > diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
> >> > index 4aebeacb961a..7d06193709c0 100644
> >> > --- a/rust/kernel/sync/atomic.rs
> >> > +++ b/rust/kernel/sync/atomic.rs
> >> > @@ -560,3 +560,124 @@ pub fn fetch_add<Rhs, Ordering: ordering::Ordering>(&self, v: Rhs, _: Ordering)
> >> > unsafe { from_repr(ret) }
> >> > }
> >> > }
> >> > +
> >> > +/// # Invariants
> >> > +///
> >> > +/// `padding` must be all zeroes.
> >> > +#[cfg(not(any(CONFIG_X86_64, CONFIG_UML, CONFIG_ARM, CONFIG_ARM64)))]
> >>
> >> This config repeats too much.
> >>
> >> I think probably we should just not let `AtomicFlag` alias `Atomic<u8>` (this
> >> has the benefit of creating a type mismatch, so code cannot rely on this on x86
> >> and fail to compile on, say, RV).
> >>
> >> This way the `struct Flag`, `struct AtomicFlag` and `impl AtomicFlag` would
> >> always exist and the config is only needed for much fewer times (plus, you don't
> >> need to macro to avoid duplicating docs).
> >>
> >
> > You probably still need configs for `#[repr(align(_))]` in that case or
> > two definitions of `Flag` anyway. but yes the duplicate docs can be
> > avoided, so are some impl blocks.
>
> If we go with cfg_attr, we get something like the followings:
>
> #[cfg_attr(any(CONFIG_X86_64, CONFIG_UML, CONFIG_ARM, CONFIG_ARM64), repr(C))]
> #[cfg_attr(
> not(any(CONFIG_X86_64, CONFIG_UML, CONFIG_ARM, CONFIG_ARM64)),
> repr(C, align(4))
> )]
> #[derive(Clone, Copy)]
> struct Flag {
> #[cfg(all(
> target_endian = "big",
> not(any(CONFIG_X86_64, CONFIG_UML, CONFIG_ARM, CONFIG_ARM64))
> ))]
> padding: [u8; 3],
> bool_field: bool,
> #[cfg(all(
> target_endian = "little",
> not(any(CONFIG_X86_64, CONFIG_UML, CONFIG_ARM, CONFIG_ARM64))
> ))]
> padding: [u8; 3],
> }
>
> I think that two definitions of `Flag` are more readable here.
>
Agreed ;-)
>
> > BTW, while we are at it, maybe we should use arches that don't support
> > byte-wise atomic instructions here instead of the ones do, i.e.
> >
> > #[cfg(not(any(CONFIG_RISCV, CONFIG_LOONGARCH)))]
> > #[repr(C)]
> > #[derive(Clone, Copy)]
> > struct Flag {
> > bool_flag: bool,
> > }
> >
> > #[cfg(any(CONFIG_RISCV, CONFIG_LOONGARCH))]
> > #[repr(C, align(4))]
> > #[derive(Clone, Copy)]
> > struct Flag {
> > #[cfg(target_endian = "big")]
> > padding: [u8; 3],
> > bool_flag: bool,
> > #[cfg(target_endian = "little")]
> > padding: [u8; 3],
> > }
> >
> > and we should do
> >
> > unsafe impl AtomicType for Flag {
> > #[cfg(any(CONFIG_RISCV, CONFIG_LOONGARCH))]
> > type Repr = i32;
> > #[cfg(not(any(CONFIG_RISCV, CONFIG_LOONGARCH)))]
> > type Repr = i8;
> > }
> >
> > as well.
>
> Is the idea to list only arches without byte-wise atomics so new
> byte-atomic-capable arches don't require updates?
Yes, but I guess I was wrong here. More arches are actually
byte-atomic-incapable, so what you had makes sense. Feel free to
ignore that.
Regards,
Boqun
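For reference, the padded layout under discussion can be sketched in
userspace Rust. This is a hypothetical stand-in, not the kernel code:
the `CONFIG_*` cfgs are dropped (only the `target_endian` cfg from the
thread is kept), so this sketch always uses the widened 4-byte variant
that byte-atomic-incapable arches would need:

```rust
// Userspace sketch of the widened Flag layout discussed above. The
// struct is padded and aligned to 4 bytes so an RMW operation can act
// on a whole 32-bit word instead of a single byte.
#[allow(dead_code)]
#[repr(C, align(4))]
#[derive(Clone, Copy)]
struct Flag {
    // On big-endian targets the padding comes first so the flag byte
    // sits at the low-order end of the 32-bit word.
    #[cfg(target_endian = "big")]
    padding: [u8; 3],
    bool_flag: bool,
    #[cfg(target_endian = "little")]
    padding: [u8; 3],
}
```

Under this layout, `size_of::<Flag>()` and `align_of::<Flag>()` are
both 4 on either endianness, which is the property the `Repr = i32`
variant of `AtomicType` relies on.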
* Re: [PATCH v1 1/2] rust: sync: atomic: Add performance-optimal Flag type for atomic booleans
2026-01-28 13:41 ` Gary Guo
2026-01-28 17:19 ` Boqun Feng
@ 2026-01-29 0:40 ` FUJITA Tomonori
1 sibling, 0 replies; 9+ messages in thread
From: FUJITA Tomonori @ 2026-01-29 0:40 UTC (permalink / raw)
To: gary
Cc: tomo, boqun.feng, ojeda, peterz, will, a.hindborg, aliceryhl,
bjorn3_gh, dakr, lossin, mark.rutland, tmgross, rust-for-linux,
fujita.tomonori
On Wed, 28 Jan 2026 13:41:41 +0000
"Gary Guo" <gary@garyguo.net> wrote:
> On Wed Jan 28, 2026 at 11:51 AM GMT, FUJITA Tomonori wrote:
>> From: FUJITA Tomonori <fujita.tomonori@gmail.com>
>>
>> Add AtomicFlag type for boolean flags.
>>
>> Document when AtomicFlag is generally preferable to Atomic<bool>: in
>> particular, when RMW operations such as xchg()/cmpxchg() may be used
>> and minimizing memory usage is not the top priority. On some
>> architectures without byte-sized RMW instructions, Atomic<bool> can be
>> slower for RMW operations.
>>
>> Signed-off-by: FUJITA Tomonori <fujita.tomonori@gmail.com>
>> ---
>> rust/kernel/sync/atomic.rs | 121 +++++++++++++++++++++++++++
>> rust/kernel/sync/atomic/predefine.rs | 17 ++++
>> 2 files changed, 138 insertions(+)
>>
>> diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
>> index 4aebeacb961a..7d06193709c0 100644
>> --- a/rust/kernel/sync/atomic.rs
>> +++ b/rust/kernel/sync/atomic.rs
>> @@ -560,3 +560,124 @@ pub fn fetch_add<Rhs, Ordering: ordering::Ordering>(&self, v: Rhs, _: Ordering)
>> unsafe { from_repr(ret) }
>> }
>> }
>> +
>> +/// # Invariants
>> +///
>> +/// `padding` must be all zeroes.
>> +#[cfg(not(any(CONFIG_X86_64, CONFIG_UML, CONFIG_ARM, CONFIG_ARM64)))]
>
> This config repeats too much.
>
> I think probably we should just not let `AtomicFlag` alias `Atomic<u8>` (this
> has the benefit of creating a type mismatch, so code cannot rely on this on x86
> and fail to compile on, say, RV).
>
> This way the `struct Flag`, `struct AtomicFlag` and `impl AtomicFlag` would
> always exist and the config is only needed for much fewer times (plus, you don't
> need to macro to avoid duplicating docs).
Agreed, I'll do that.
>> +#[repr(C, align(4))]
>> +#[derive(Clone, Copy)]
>> +struct Flag {
>> + bool_field: bool,
>> + padding: [u8; 3],
>
> You probably still want, on big endian platforms, to put padding first, so the
> generated instruction uses small immediates (0 & 1) instead of 0 & 0x01000000.
Thanks, I'll handle this in v2.
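The small-immediate point can be made concrete with a userspace sketch
(hypothetical stand-in for the kernel type, with the padding-placement
fix applied and an added helper `as_word()` that is not in the patch):

```rust
// With the flag byte at the low-order end of the 32-bit word (bool
// first on little-endian, padding first on big-endian), storing `true`
// into the word writes the value 1 rather than 0x01000000.
#[allow(dead_code)]
#[repr(C, align(4))]
#[derive(Clone, Copy)]
struct Flag {
    #[cfg(target_endian = "big")]
    padding: [u8; 3],
    bool_flag: bool,
    #[cfg(target_endian = "little")]
    padding: [u8; 3],
}

#[allow(dead_code)]
impl Flag {
    // View the whole padded struct as the u32 a word-sized RMW
    // instruction would operate on.
    fn as_word(self) -> u32 {
        // SAFETY: Flag is exactly 4 bytes, Copy, and has no
        // uninitialized bytes when `padding` is explicitly zeroed.
        unsafe { core::mem::transmute(self) }
    }
}
```

With this layout, `Flag { bool_flag: true, padding: [0; 3] }.as_word()`
is 1 on both endiannesses, so the compare/store immediates stay small.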
end of thread, other threads:[~2026-01-29 0:40 UTC | newest]
Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-01-28 11:51 [PATCH v1 0/2] rust: sync: Add AtomicFlag type FUJITA Tomonori
2026-01-28 11:51 ` [PATCH v1 1/2] rust: sync: atomic: Add performance-optimal Flag type for atomic booleans FUJITA Tomonori
2026-01-28 13:41 ` Gary Guo
2026-01-28 17:19 ` Boqun Feng
2026-01-28 18:07 ` Boqun Feng
2026-01-28 23:22 ` FUJITA Tomonori
2026-01-28 23:56 ` Boqun Feng
2026-01-29 0:40 ` FUJITA Tomonori
2026-01-28 11:52 ` [PATCH v1 2/2] rust: list: Use AtomicFlag in AtomicTracker FUJITA Tomonori