rust-for-linux.vger.kernel.org archive mirror
* [PATCH v1 0/3] rust: Add i8 and i16 atomic support
@ 2025-11-15  5:03 FUJITA Tomonori
  2025-11-15  5:03 ` [PATCH v1 1/3] rust: sync: Add smp_store_release/smp_load_acquire helpers FUJITA Tomonori
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: FUJITA Tomonori @ 2025-11-15  5:03 UTC (permalink / raw)
  To: boqun.feng, ojeda, alex.gaynor, peterz, will
  Cc: a.hindborg, aliceryhl, bjorn3_gh, dakr, gary, lossin,
	mark.rutland, rust-for-linux, tmgross

This adds Atomic<i8> and Atomic<i16> support, including
load/store(Relaxed) and load(Acquire)/store(Release) operations.

Relaxed operations can be implemented purely in Rust using
read_volatile() and write_volatile(), which are equivalent to C's
READ_ONCE()/WRITE_ONCE() macros.
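As a self-contained illustration of that equivalence (outside the kernel
tree, so the type below is a hypothetical stand-in for the kernel's
AtomicRepr rather than the real one), a relaxed load/store pair built on
read_volatile()/write_volatile() can be sketched like this:

```rust
use core::cell::UnsafeCell;

// Hypothetical stand-in for the kernel's `AtomicRepr<i8>`:
// interior-mutable storage with a raw-pointer accessor.
pub struct Repr(UnsafeCell<i8>);

// SAFETY: illustration only; the kernel type justifies `Sync` via its
// atomic access API.
unsafe impl Sync for Repr {}

impl Repr {
    pub const fn new(v: i8) -> Self {
        Repr(UnsafeCell::new(v))
    }

    /// Relaxed load: a volatile read, the Rust analogue of C's READ_ONCE().
    pub fn load_relaxed(&self) -> i8 {
        // SAFETY: the pointer comes from a live `UnsafeCell`, so it is
        // valid and properly aligned.
        unsafe { core::ptr::read_volatile(self.0.get()) }
    }

    /// Relaxed store: a volatile write, the Rust analogue of C's WRITE_ONCE().
    pub fn store_relaxed(&self, v: i8) {
        // SAFETY: as above.
        unsafe { core::ptr::write_volatile(self.0.get(), v) }
    }
}
```

Volatile accesses only guarantee the compiler will not tear, elide, or
fuse them; they impose no cross-CPU ordering, which is exactly the
Relaxed contract.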

load(Acquire)/store(Release) use the smp_load_acquire and
smp_store_release helpers, which internally use the appropriate
architecture-specific instructions.
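The ordering guarantee these helpers provide can be sketched in userland
Rust with core::sync::atomic as a stand-in for
smp_store_release()/smp_load_acquire() (the kernel does not use these
std types; this is only an analogue of the publish/consume pattern the
orderings exist for):

```rust
use std::sync::atomic::{AtomicBool, AtomicI8, Ordering};
use std::sync::Arc;
use std::thread;

// Writer publishes a payload, then sets a flag with Release ordering;
// the reader spins on the flag with Acquire ordering. Once the flag is
// observed as true, the payload write is guaranteed visible as well.
fn publish_then_consume() -> i8 {
    let data = Arc::new(AtomicI8::new(0));
    let ready = Arc::new(AtomicBool::new(false));

    let (d, r) = (Arc::clone(&data), Arc::clone(&ready));
    let writer = thread::spawn(move || {
        d.store(42, Ordering::Relaxed); // plain payload write
        r.store(true, Ordering::Release); // orders the payload before the flag
    });

    while !ready.load(Ordering::Acquire) {
        std::hint::spin_loop();
    }
    let v = data.load(Ordering::Relaxed);
    writer.join().unwrap();
    v
}
```

With Relaxed ordering on the flag instead, the reader could legally
observe the flag before the payload; the Acquire/Release pair is what
rules that out.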

FUJITA Tomonori (3):
  rust: sync: Add smp_store_release/smp_load_acquire helpers
  rust: sync: atomic: Add i8 and i16 support (load and store)
  rust: sync: atomic: Add store_release/load_acquire tests

 rust/helpers/barrier.c               | 24 ++++++++
 rust/kernel/sync/atomic/internal.rs  | 85 ++++++++++++++++++++++++++++
 rust/kernel/sync/atomic/predefine.rs | 24 +++++++-
 3 files changed, 132 insertions(+), 1 deletion(-)


base-commit: 7a0892d2836e12cc61b6823f888629a3eb64e268
-- 
2.43.0


^ permalink raw reply	[flat|nested] 8+ messages in thread

* [PATCH v1 1/3] rust: sync: Add smp_store_release/smp_load_acquire helpers
  2025-11-15  5:03 [PATCH v1 0/3] rust: Add i8 and i16 atomic support FUJITA Tomonori
@ 2025-11-15  5:03 ` FUJITA Tomonori
  2025-11-15  5:22   ` Boqun Feng
  2025-11-15  5:03 ` [PATCH v1 2/3] rust: sync: atomic: Add i8 and i16 support (load and store) FUJITA Tomonori
  2025-11-15  5:03 ` [PATCH v1 3/3] rust: sync: atomic: Add store_release/load_acquire tests FUJITA Tomonori
  2 siblings, 1 reply; 8+ messages in thread
From: FUJITA Tomonori @ 2025-11-15  5:03 UTC (permalink / raw)
  To: boqun.feng, ojeda, alex.gaynor, peterz, will
  Cc: a.hindborg, aliceryhl, bjorn3_gh, dakr, gary, lossin,
	mark.rutland, rust-for-linux, tmgross

Add smp_store_release/smp_load_acquire helpers to support
store_release/load_acquire for i8/i16.

Both are implemented as C macros, so they cannot be called from Rust
directly. The helper functions take a void pointer and therefore need
an explicit size argument, unlike the C versions.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@gmail.com>
---
 rust/helpers/barrier.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/rust/helpers/barrier.c b/rust/helpers/barrier.c
index cdf28ce8e511..b66ef792c7be 100644
--- a/rust/helpers/barrier.c
+++ b/rust/helpers/barrier.c
@@ -16,3 +16,27 @@ void rust_helper_smp_rmb(void)
 {
 	smp_rmb();
 }
+
+void rust_helper_smp_store_release(void *ptr, u64 val, size_t size)
+{
+	switch (size) {
+	case 1:
+		smp_store_release((u8 *)ptr, (u8)val);
+		break;
+	case 2:
+		smp_store_release((u16 *)ptr, (u16)val);
+		break;
+	}
+}
+
+void rust_helper_smp_load_acquire(void *ptr, void *val, size_t size)
+{
+	switch (size) {
+	case 1:
+		*(u8 *)val = smp_load_acquire((u8 *)ptr);
+		break;
+	case 2:
+		*(u16 *)val = smp_load_acquire((u16 *)ptr);
+		break;
+	}
+}
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 8+ messages in thread

* [PATCH v1 2/3] rust: sync: atomic: Add i8 and i16 support (load and store)
  2025-11-15  5:03 [PATCH v1 0/3] rust: Add i8 and i16 atomic support FUJITA Tomonori
  2025-11-15  5:03 ` [PATCH v1 1/3] rust: sync: Add smp_store_release/smp_load_acquire helpers FUJITA Tomonori
@ 2025-11-15  5:03 ` FUJITA Tomonori
  2025-11-15  5:03 ` [PATCH v1 3/3] rust: sync: atomic: Add store_release/load_acquire tests FUJITA Tomonori
  2 siblings, 0 replies; 8+ messages in thread
From: FUJITA Tomonori @ 2025-11-15  5:03 UTC (permalink / raw)
  To: boqun.feng, ojeda, alex.gaynor, peterz, will
  Cc: a.hindborg, aliceryhl, bjorn3_gh, dakr, gary, lossin,
	mark.rutland, rust-for-linux, tmgross

Add atomic operation support for i8 and i16 types using volatile
read/write and smp_load_acquire/smp_store_release helpers.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@gmail.com>
---
 rust/kernel/sync/atomic/internal.rs  | 85 ++++++++++++++++++++++++++++
 rust/kernel/sync/atomic/predefine.rs | 14 ++++-
 2 files changed, 98 insertions(+), 1 deletion(-)

diff --git a/rust/kernel/sync/atomic/internal.rs b/rust/kernel/sync/atomic/internal.rs
index 6fdd8e59f45b..d9c4edfec20d 100644
--- a/rust/kernel/sync/atomic/internal.rs
+++ b/rust/kernel/sync/atomic/internal.rs
@@ -263,3 +263,88 @@ fn fetch_add[acquire, release, relaxed](a: &AtomicRepr<Self>, v: Self::Delta) ->
         }
     }
 );
+
+impl private::Sealed for i8 {}
+impl private::Sealed for i16 {}
+
+impl AtomicImpl for i8 {
+    type Delta = Self;
+}
+
+impl AtomicImpl for i16 {
+    type Delta = Self;
+}
+
+trait VolatileOps: AtomicImpl {
+    /// Volatile read.
+    fn volatile_read(a: &AtomicRepr<Self>) -> Self;
+
+    /// Volatile write.
+    fn volatile_set(a: &AtomicRepr<Self>, v: Self);
+}
+
+macro_rules! impl_volatile_methods {
+    ($ty:ty) => {
+        impl VolatileOps for $ty {
+            #[inline(always)]
+            fn volatile_read(a: &AtomicRepr<Self>) -> Self {
+                // SAFETY: `a.as_ptr()` is valid and properly aligned.
+                unsafe { core::ptr::read_volatile(a.as_ptr()) }
+            }
+
+            #[inline(always)]
+            fn volatile_set(a: &AtomicRepr<Self>, v: Self) {
+                // SAFETY: `a.as_ptr()` is valid and properly aligned.
+                unsafe { core::ptr::write_volatile(a.as_ptr(), v) }
+            }
+        }
+    };
+}
+
+impl_volatile_methods!(i8);
+impl_volatile_methods!(i16);
+
+macro_rules! impl_atomic_basic_ops_by_volatile {
+    ($ty:ty) => {
+        impl AtomicBasicOps for $ty {
+            #[inline(always)]
+            fn atomic_set(a: &AtomicRepr<Self>, v: Self) {
+                <Self as VolatileOps>::volatile_set(a, v)
+            }
+
+            #[inline(always)]
+            fn atomic_set_release(a: &AtomicRepr<Self>, v: Self) {
+                // SAFETY: `a.as_ptr()` is valid and properly aligned.
+                unsafe {
+                    bindings::smp_store_release(
+                        a.as_ptr().cast(),
+                        v as u64,
+                        core::mem::size_of::<Self>(),
+                    )
+                }
+            }
+
+            #[inline(always)]
+            fn atomic_read(a: &AtomicRepr<Self>) -> Self {
+                <Self as VolatileOps>::volatile_read(a)
+            }
+
+            #[inline(always)]
+            fn atomic_read_acquire(a: &AtomicRepr<Self>) -> Self {
+                let mut ret: Self = 0;
+                // SAFETY: `a.as_ptr()` is valid and properly aligned.
+                unsafe {
+                    bindings::smp_load_acquire(
+                        a.as_ptr().cast(),
+                        core::ptr::from_mut(&mut ret).cast(),
+                        core::mem::size_of::<Self>(),
+                    )
+                }
+                ret
+            }
+        }
+    };
+}
+
+impl_atomic_basic_ops_by_volatile!(i8);
+impl_atomic_basic_ops_by_volatile!(i16);
diff --git a/rust/kernel/sync/atomic/predefine.rs b/rust/kernel/sync/atomic/predefine.rs
index 45a17985cda4..09b357be59b8 100644
--- a/rust/kernel/sync/atomic/predefine.rs
+++ b/rust/kernel/sync/atomic/predefine.rs
@@ -5,6 +5,18 @@
 use crate::static_assert;
 use core::mem::{align_of, size_of};
 
+// SAFETY: `i8` has the same size and alignment with itself, and is round-trip transmutable to
+// itself.
+unsafe impl super::AtomicType for i8 {
+    type Repr = i8;
+}
+
+// SAFETY: `i16` has the same size and alignment with itself, and is round-trip transmutable to
+// itself.
+unsafe impl super::AtomicType for i16 {
+    type Repr = i16;
+}
+
 // SAFETY: `i32` has the same size and alignment with itself, and is round-trip transmutable to
 // itself.
 unsafe impl super::AtomicType for i32 {
@@ -118,7 +130,7 @@ macro_rules! for_each_type {
 
     #[test]
     fn atomic_basic_tests() {
-        for_each_type!(42 in [i32, i64, u32, u64, isize, usize] |v| {
+        for_each_type!(42 in [i8, i16, i32, i64, u32, u64, isize, usize] |v| {
             let x = Atomic::new(v);
 
             assert_eq!(v, x.load(Relaxed));
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 8+ messages in thread

* [PATCH v1 3/3] rust: sync: atomic: Add store_release/load_acquire tests
  2025-11-15  5:03 [PATCH v1 0/3] rust: Add i8 and i16 atomic support FUJITA Tomonori
  2025-11-15  5:03 ` [PATCH v1 1/3] rust: sync: Add smp_store_release/smp_load_acquire helpers FUJITA Tomonori
  2025-11-15  5:03 ` [PATCH v1 2/3] rust: sync: atomic: Add i8 and i16 support (load and store) FUJITA Tomonori
@ 2025-11-15  5:03 ` FUJITA Tomonori
  2 siblings, 0 replies; 8+ messages in thread
From: FUJITA Tomonori @ 2025-11-15  5:03 UTC (permalink / raw)
  To: boqun.feng, ojeda, alex.gaynor, peterz, will
  Cc: a.hindborg, aliceryhl, bjorn3_gh, dakr, gary, lossin,
	mark.rutland, rust-for-linux, tmgross

Add minimal store_release/load_acquire tests.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@gmail.com>
---
 rust/kernel/sync/atomic/predefine.rs | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/rust/kernel/sync/atomic/predefine.rs b/rust/kernel/sync/atomic/predefine.rs
index 09b357be59b8..51e9df0cf56e 100644
--- a/rust/kernel/sync/atomic/predefine.rs
+++ b/rust/kernel/sync/atomic/predefine.rs
@@ -137,6 +137,16 @@ fn atomic_basic_tests() {
         });
     }
 
+    #[test]
+    fn atomic_acquire_release_tests() {
+        for_each_type!(42 in [i8, i16, i32, i64, u32, u64, isize, usize] |v| {
+            let x = Atomic::new(0);
+
+            x.store(v, Release);
+            assert_eq!(v, x.load(Acquire));
+        });
+    }
+
     #[test]
     fn atomic_xchg_tests() {
         for_each_type!(42 in [i32, i64, u32, u64, isize, usize] |v| {
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 8+ messages in thread

* Re: [PATCH v1 1/3] rust: sync: Add smp_store_release/smp_load_acquire helpers
  2025-11-15  5:03 ` [PATCH v1 1/3] rust: sync: Add smp_store_release/smp_load_acquire helpers FUJITA Tomonori
@ 2025-11-15  5:22   ` Boqun Feng
  2025-11-15  6:22     ` FUJITA Tomonori
  0 siblings, 1 reply; 8+ messages in thread
From: Boqun Feng @ 2025-11-15  5:22 UTC (permalink / raw)
  To: FUJITA Tomonori
  Cc: ojeda, alex.gaynor, peterz, will, a.hindborg, aliceryhl,
	bjorn3_gh, dakr, gary, lossin, mark.rutland, rust-for-linux,
	tmgross

Hi Tomo,

Thanks for taking a look at this.

On Sat, Nov 15, 2025 at 02:03:03PM +0900, FUJITA Tomonori wrote:
> Add smp_store_release/smp_load_acquire helpers to support
> store_release/load_acquire for i8/i16.
> 
> Both are implemented as C macros, so they cannot be called from Rust
> directly. The helper functions take a void pointer and therefore need
> an explicit size argument, unlike the C versions.
> 
> Signed-off-by: FUJITA Tomonori <fujita.tomonori@gmail.com>
> ---
>  rust/helpers/barrier.c | 24 ++++++++++++++++++++++++
>  1 file changed, 24 insertions(+)
> 
> diff --git a/rust/helpers/barrier.c b/rust/helpers/barrier.c

Could you just define helpers:

atomic_{i8,i16}_store{,_release}(), atomic_{i8,i16}_load{,_acquire}()

in a separate helper file? (e.g. rust/helpers/atomic_ext.c).

In this way, you could just use C to implement these and that'll make
the interface Rust/C more clear.

Eventually I will make the macro declare_and_impl_atomic_methods!()
support a customized list of types, but right now a dedicated macro that
generates all the AtomicBasicOps impls for i8/i16 is fine.

Regards,
Boqun

> index cdf28ce8e511..b66ef792c7be 100644
> --- a/rust/helpers/barrier.c
> +++ b/rust/helpers/barrier.c
> @@ -16,3 +16,27 @@ void rust_helper_smp_rmb(void)
>  {
>  	smp_rmb();
>  }
> +
> +void rust_helper_smp_store_release(void *ptr, u64 val, size_t size)
> +{
> +	switch (size) {
> +	case 1:
> +		smp_store_release((u8 *)ptr, (u8)val);
> +		break;
> +	case 2:
> +		smp_store_release((u16 *)ptr, (u16)val);
> +		break;
> +	}
> +}
> +
> +void rust_helper_smp_load_acquire(void *ptr, void *val, size_t size)
> +{
> +	switch (size) {
> +	case 1:
> +		*(u8 *)val = smp_load_acquire((u8 *)ptr);
> +		break;
> +	case 2:
> +		*(u16 *)val = smp_load_acquire((u16 *)ptr);
> +		break;
> +	}
> +}
> -- 
> 2.43.0
> 

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH v1 1/3] rust: sync: Add smp_store_release/smp_load_acquire helpers
  2025-11-15  5:22   ` Boqun Feng
@ 2025-11-15  6:22     ` FUJITA Tomonori
  2025-11-15  7:01       ` Boqun Feng
  0 siblings, 1 reply; 8+ messages in thread
From: FUJITA Tomonori @ 2025-11-15  6:22 UTC (permalink / raw)
  To: boqun.feng
  Cc: fujita.tomonori, ojeda, alex.gaynor, peterz, will, a.hindborg,
	aliceryhl, bjorn3_gh, dakr, gary, lossin, mark.rutland,
	rust-for-linux, tmgross

On Fri, 14 Nov 2025 21:22:28 -0800
Boqun Feng <boqun.feng@gmail.com> wrote:

> On Sat, Nov 15, 2025 at 02:03:03PM +0900, FUJITA Tomonori wrote:
>> Add smp_store_release/smp_load_acquire helpers to support
>> store_release/load_acquire for i8/i16.
>> 
>> Both are implemented as C macros, so they cannot be called from Rust
>> directly. The helper functions take a void pointer and therefore need
>> an explicit size argument, unlike the C versions.
>> 
>> Signed-off-by: FUJITA Tomonori <fujita.tomonori@gmail.com>
>> ---
>>  rust/helpers/barrier.c | 24 ++++++++++++++++++++++++
>>  1 file changed, 24 insertions(+)
>> 
>> diff --git a/rust/helpers/barrier.c b/rust/helpers/barrier.c
> 
> Could you just define helpers:
> 
> atomic_{i8,i16}_store{,_release}(), atomic_{i8,i16}_load{,_acquire}()
> 
> in a separate helper file? (e.g. rust/helpers/atomic_ext.c).

Of course, I will.


> Eventually I will make the macro declare_and_impl_atomic_methods!()
> support a customized list of types, but right now a dedicated macro that
> generates all the AtomicBasicOps impls for i8/i16 is fine.

Is this comment referring to the second patch?

Do you mean that I should remove the VolatileOps trait and implement
AtomicBasicOps for i8/i16 using a dedicated macro like
impl_only_atomic_load_and_store_ops?

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH v1 1/3] rust: sync: Add smp_store_release/smp_load_acquire helpers
  2025-11-15  6:22     ` FUJITA Tomonori
@ 2025-11-15  7:01       ` Boqun Feng
  2025-11-15 13:23         ` FUJITA Tomonori
  0 siblings, 1 reply; 8+ messages in thread
From: Boqun Feng @ 2025-11-15  7:01 UTC (permalink / raw)
  To: FUJITA Tomonori
  Cc: ojeda, alex.gaynor, peterz, will, a.hindborg, aliceryhl,
	bjorn3_gh, dakr, gary, lossin, mark.rutland, rust-for-linux,
	tmgross

On Sat, Nov 15, 2025 at 03:22:20PM +0900, FUJITA Tomonori wrote:
> On Fri, 14 Nov 2025 21:22:28 -0800
> Boqun Feng <boqun.feng@gmail.com> wrote:
> 
> > On Sat, Nov 15, 2025 at 02:03:03PM +0900, FUJITA Tomonori wrote:
> >> Add smp_store_release/smp_load_acquire helpers to support
> >> store_release/load_acquire for i8/i16.
> >> 
> >> Both are implemented as C macros, so they cannot be called from Rust
> >> directly. The helper functions take a void pointer and therefore need
> >> an explicit size argument, unlike the C versions.
> >> 
> >> Signed-off-by: FUJITA Tomonori <fujita.tomonori@gmail.com>
> >> ---
> >>  rust/helpers/barrier.c | 24 ++++++++++++++++++++++++
> >>  1 file changed, 24 insertions(+)
> >> 
> >> diff --git a/rust/helpers/barrier.c b/rust/helpers/barrier.c
> > 
> > Could you just define helpers:
> > 
> > atomic_{i8,i16}_store{,_release}(), atomic_{i8,i16}_load{,_acquire}()
> > 
> > in a separate helper file? (e.g. rust/helpers/atomic_ext.c).
> 
> Of course, I will.
> 
> 
> > Eventually I will make the macro declare_and_impl_atomic_methods!()
> > support a customized list of types, but right now a dedicated macro that
> > generates all the AtomicBasicOps impls for i8/i16 is fine.
> 
> Is this comment referring to the second patch?
> 

Yes. 

> Do you mean that I should remove the VolatileOps trait and implement
> AtomicBasicOps for i8/i16 using a dedicated macro like
> impl_only_atomic_load_and_store_ops?

Right, I would rather avoid using write_volatile() and read_volatile()
directly as atomic operations; instead, we should implement
atomic_*_{load,store}() with WRITE_ONCE() and READ_ONCE(), and have the
Rust code call them. This way we keep our kernel memory model bits on
the C side.

Regards,
Boqun
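
A rough Rust-side sketch of the shape Boqun describes: load/store-only
impls for i8 that delegate every ordering decision to a helper layer.
The helper names (atomic_i8_load() etc.) follow his suggestion but are
hypothetical; in the kernel they would be C functions in a file such as
rust/helpers/atomic_ext.c built on READ_ONCE()/WRITE_ONCE() and the
smp_* barriers. They are mocked with core::sync::atomic here so the
sketch compiles on its own:

```rust
use core::sync::atomic::{AtomicI8, Ordering};

// Hypothetical helper layer: in the kernel these would be C functions,
// keeping the kernel memory model bits on the C side. Mocked here with
// core::sync::atomic so the sketch is self-contained.
fn atomic_i8_load(p: &AtomicI8) -> i8 { p.load(Ordering::Relaxed) }
fn atomic_i8_load_acquire(p: &AtomicI8) -> i8 { p.load(Ordering::Acquire) }
fn atomic_i8_store(p: &AtomicI8, v: i8) { p.store(v, Ordering::Relaxed) }
fn atomic_i8_store_release(p: &AtomicI8, v: i8) { p.store(v, Ordering::Release) }

// Rust side: only load/store operations exist for i8 (no RMW ops), and
// each method is a thin call into the helper layer -- the shape a
// dedicated macro could generate for both i8 and i16.
trait LoadStoreOps: Sized {
    type Repr;
    fn read(a: &Self::Repr) -> Self;
    fn read_acquire(a: &Self::Repr) -> Self;
    fn set(a: &Self::Repr, v: Self);
    fn set_release(a: &Self::Repr, v: Self);
}

impl LoadStoreOps for i8 {
    type Repr = AtomicI8;
    fn read(a: &AtomicI8) -> i8 { atomic_i8_load(a) }
    fn read_acquire(a: &AtomicI8) -> i8 { atomic_i8_load_acquire(a) }
    fn set(a: &AtomicI8, v: i8) { atomic_i8_store(a, v) }
    fn set_release(a: &AtomicI8, v: i8) { atomic_i8_store_release(a, v) }
}
```

The point of the indirection is that the Rust trait impl carries no
ordering logic of its own; swapping the mocked helpers for real C
functions changes nothing on the Rust side.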

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH v1 1/3] rust: sync: Add smp_store_release/smp_load_acquire helpers
  2025-11-15  7:01       ` Boqun Feng
@ 2025-11-15 13:23         ` FUJITA Tomonori
  0 siblings, 0 replies; 8+ messages in thread
From: FUJITA Tomonori @ 2025-11-15 13:23 UTC (permalink / raw)
  To: boqun.feng
  Cc: fujita.tomonori, ojeda, alex.gaynor, peterz, will, a.hindborg,
	aliceryhl, bjorn3_gh, dakr, gary, lossin, mark.rutland,
	rust-for-linux, tmgross

On Fri, 14 Nov 2025 23:01:33 -0800
Boqun Feng <boqun.feng@gmail.com> wrote:

>> Do you mean that I should remove the VolatileOps trait and implement
>> AtomicBasicOps for i8/i16 using a dedicated macro like
>> impl_only_atomic_load_and_store_ops?
> 
> Right, I would rather avoid using write_volatile() and read_volatile()
> directly as atomic operations; instead, we should implement
> atomic_*_{load,store}() with WRITE_ONCE() and READ_ONCE(), and have the
> Rust code call them. This way we keep our kernel memory model bits on
> the C side.

Sounds good, will do in v2.


^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2025-11-15 13:23 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-11-15  5:03 [PATCH v1 0/3] rust: Add i8 and i16 atomic support FUJITA Tomonori
2025-11-15  5:03 ` [PATCH v1 1/3] rust: sync: Add smp_store_release/smp_load_acquire helpers FUJITA Tomonori
2025-11-15  5:22   ` Boqun Feng
2025-11-15  6:22     ` FUJITA Tomonori
2025-11-15  7:01       ` Boqun Feng
2025-11-15 13:23         ` FUJITA Tomonori
2025-11-15  5:03 ` [PATCH v1 2/3] rust: sync: atomic: Add i8 and i16 support (load and store) FUJITA Tomonori
2025-11-15  5:03 ` [PATCH v1 3/3] rust: sync: atomic: Add store_release/load_acquire tests FUJITA Tomonori
