* [PATCH v7 0/9] LKMM generic atomics in Rust
@ 2025-07-14  5:36 Boqun Feng
  2025-07-14  5:36 ` [PATCH v7 1/9] rust: Introduce atomic API helpers Boqun Feng
                   ` (8 more replies)
  0 siblings, 9 replies; 50+ messages in thread
From: Boqun Feng @ 2025-07-14  5:36 UTC (permalink / raw)
  To: linux-kernel, rust-for-linux, lkmm, linux-arch
  Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
	Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra,
	Mark Rutland, Wedson Almeida Filho, Viresh Kumar, Lyude Paul,
	Ingo Molnar, Mitchell Levy, Paul E. McKenney, Greg Kroah-Hartman,
	Linus Torvalds, Thomas Gleixner, Alan Stern

Hi all,

This is v7 of the LKMM atomics in Rust; you can find the previous
versions at:

v6: https://lore.kernel.org/rust-for-linux/20250710060052.11955-1-boqun.feng@gmail.com/
v5: https://lore.kernel.org/rust-for-linux/20250618164934.19817-1-boqun.feng@gmail.com/
v4: https://lore.kernel.org/rust-for-linux/20250609224615.27061-1-boqun.feng@gmail.com/
v3: https://lore.kernel.org/rust-for-linux/20250421164221.1121805-1-boqun.feng@gmail.com/
v2: https://lore.kernel.org/rust-for-linux/20241101060237.1185533-1-boqun.feng@gmail.com/
v1: https://lore.kernel.org/rust-for-linux/20240612223025.1158537-1-boqun.feng@gmail.com/
wip: https://lore.kernel.org/rust-for-linux/20240322233838.868874-1-boqun.feng@gmail.com/

I believe I have resolved most of the comments from the last version. I
would really like to see this land in v6.17-rc1, so please take a look.

Changes since v6:

- Changed the macro format of atomic/ops.rs and improved the safety
  requirements per the suggestion of Benno.
- Used fine-grained trait design like `AllowAtomicAdd` for arithmetic
  operations per the suggestion of Benno.
- Added an `isize_atomic_repr` internal type to reduce the #[cfg(_)]
  usage for implementing Atomic<{i,u}size> per the suggestion of Miguel.
- Made barrier functions always inline.
- Documentation and safety comment improvements. Thanks Benno!

Regards,
Boqun

Boqun Feng (9):
  rust: Introduce atomic API helpers
  rust: sync: Add basic atomic operation mapping framework
  rust: sync: atomic: Add ordering annotation types
  rust: sync: atomic: Add generic atomics
  rust: sync: atomic: Add atomic {cmp,}xchg operations
  rust: sync: atomic: Add the framework of arithmetic operations
  rust: sync: atomic: Add Atomic<u{32,64}>
  rust: sync: Add memory barriers
  rust: sync: atomic: Add Atomic<{usize,isize}>

 MAINTAINERS                               |    4 +-
 rust/helpers/atomic.c                     | 1040 +++++++++++++++++++++
 rust/helpers/barrier.c                    |   18 +
 rust/helpers/helpers.c                    |    2 +
 rust/kernel/sync.rs                       |    2 +
 rust/kernel/sync/atomic.rs                |  187 ++++
 rust/kernel/sync/atomic/generic.rs        |  573 ++++++++++++
 rust/kernel/sync/atomic/ops.rs            |  265 ++++++
 rust/kernel/sync/atomic/ordering.rs       |  109 +++
 rust/kernel/sync/barrier.rs               |   61 ++
 scripts/atomic/gen-atomics.sh             |    1 +
 scripts/atomic/gen-rust-atomic-helpers.sh |   67 ++
 12 files changed, 2328 insertions(+), 1 deletion(-)
 create mode 100644 rust/helpers/atomic.c
 create mode 100644 rust/helpers/barrier.c
 create mode 100644 rust/kernel/sync/atomic.rs
 create mode 100644 rust/kernel/sync/atomic/generic.rs
 create mode 100644 rust/kernel/sync/atomic/ops.rs
 create mode 100644 rust/kernel/sync/atomic/ordering.rs
 create mode 100644 rust/kernel/sync/barrier.rs
 create mode 100755 scripts/atomic/gen-rust-atomic-helpers.sh

-- 
2.39.5 (Apple Git-154)



* [PATCH v7 1/9] rust: Introduce atomic API helpers
  2025-07-14  5:36 [PATCH v7 0/9] LKMM generic atomics in Rust Boqun Feng
@ 2025-07-14  5:36 ` Boqun Feng
  2025-07-16  9:23   ` Greg Kroah-Hartman
  2025-07-14  5:36 ` [PATCH v7 2/9] rust: sync: Add basic atomic operation mapping framework Boqun Feng
                   ` (7 subsequent siblings)
  8 siblings, 1 reply; 50+ messages in thread
From: Boqun Feng @ 2025-07-14  5:36 UTC (permalink / raw)
  To: linux-kernel, rust-for-linux, lkmm, linux-arch
  Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
	Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra,
	Mark Rutland, Wedson Almeida Filho, Viresh Kumar, Lyude Paul,
	Ingo Molnar, Mitchell Levy, Paul E. McKenney, Greg Kroah-Hartman,
	Linus Torvalds, Thomas Gleixner, Alan Stern

In order to support LKMM atomics in Rust, add rust_helper_* functions
for the atomic APIs. These helpers ensure that the implementation of
LKMM atomics in Rust is the same as in C. This saves the maintenance
burden of having two similar atomic implementations in asm.
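
For example (a sketch, not part of this patch), once these helpers are
built into the kernel, Rust code can reach them through the generated
bindings with the `rust_helper_` prefix stripped; here `v` is a
hypothetical, valid `atomic_t` pointer:

	// Assumption: `v: *mut bindings::atomic_t` is aligned, valid for
	// reads, and all concurrent accesses to it are atomic.
	let val: i32 = unsafe { bindings::atomic_read(v) };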

Originally-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 rust/helpers/atomic.c                     | 1040 +++++++++++++++++++++
 rust/helpers/helpers.c                    |    1 +
 scripts/atomic/gen-atomics.sh             |    1 +
 scripts/atomic/gen-rust-atomic-helpers.sh |   67 ++
 4 files changed, 1109 insertions(+)
 create mode 100644 rust/helpers/atomic.c
 create mode 100755 scripts/atomic/gen-rust-atomic-helpers.sh

diff --git a/rust/helpers/atomic.c b/rust/helpers/atomic.c
new file mode 100644
index 000000000000..cf06b7ef9a1c
--- /dev/null
+++ b/rust/helpers/atomic.c
@@ -0,0 +1,1040 @@
+// SPDX-License-Identifier: GPL-2.0
+
+// Generated by scripts/atomic/gen-rust-atomic-helpers.sh
+// DO NOT MODIFY THIS FILE DIRECTLY
+
+/*
+ * This file provides helpers for the various atomic functions for Rust.
+ */
+#ifndef _RUST_ATOMIC_API_H
+#define _RUST_ATOMIC_API_H
+
+#include <linux/atomic.h>
+
+// TODO: Remove this after INLINE_HELPERS support is added.
+#ifndef __rust_helper
+#define __rust_helper
+#endif
+
+__rust_helper int
+rust_helper_atomic_read(const atomic_t *v)
+{
+	return atomic_read(v);
+}
+
+__rust_helper int
+rust_helper_atomic_read_acquire(const atomic_t *v)
+{
+	return atomic_read_acquire(v);
+}
+
+__rust_helper void
+rust_helper_atomic_set(atomic_t *v, int i)
+{
+	atomic_set(v, i);
+}
+
+__rust_helper void
+rust_helper_atomic_set_release(atomic_t *v, int i)
+{
+	atomic_set_release(v, i);
+}
+
+__rust_helper void
+rust_helper_atomic_add(int i, atomic_t *v)
+{
+	atomic_add(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_add_return(int i, atomic_t *v)
+{
+	return atomic_add_return(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_add_return_acquire(int i, atomic_t *v)
+{
+	return atomic_add_return_acquire(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_add_return_release(int i, atomic_t *v)
+{
+	return atomic_add_return_release(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_add_return_relaxed(int i, atomic_t *v)
+{
+	return atomic_add_return_relaxed(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_add(int i, atomic_t *v)
+{
+	return atomic_fetch_add(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_add_acquire(int i, atomic_t *v)
+{
+	return atomic_fetch_add_acquire(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_add_release(int i, atomic_t *v)
+{
+	return atomic_fetch_add_release(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_add_relaxed(int i, atomic_t *v)
+{
+	return atomic_fetch_add_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic_sub(int i, atomic_t *v)
+{
+	atomic_sub(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_sub_return(int i, atomic_t *v)
+{
+	return atomic_sub_return(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_sub_return_acquire(int i, atomic_t *v)
+{
+	return atomic_sub_return_acquire(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_sub_return_release(int i, atomic_t *v)
+{
+	return atomic_sub_return_release(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_sub_return_relaxed(int i, atomic_t *v)
+{
+	return atomic_sub_return_relaxed(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_sub(int i, atomic_t *v)
+{
+	return atomic_fetch_sub(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_sub_acquire(int i, atomic_t *v)
+{
+	return atomic_fetch_sub_acquire(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_sub_release(int i, atomic_t *v)
+{
+	return atomic_fetch_sub_release(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_sub_relaxed(int i, atomic_t *v)
+{
+	return atomic_fetch_sub_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic_inc(atomic_t *v)
+{
+	atomic_inc(v);
+}
+
+__rust_helper int
+rust_helper_atomic_inc_return(atomic_t *v)
+{
+	return atomic_inc_return(v);
+}
+
+__rust_helper int
+rust_helper_atomic_inc_return_acquire(atomic_t *v)
+{
+	return atomic_inc_return_acquire(v);
+}
+
+__rust_helper int
+rust_helper_atomic_inc_return_release(atomic_t *v)
+{
+	return atomic_inc_return_release(v);
+}
+
+__rust_helper int
+rust_helper_atomic_inc_return_relaxed(atomic_t *v)
+{
+	return atomic_inc_return_relaxed(v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_inc(atomic_t *v)
+{
+	return atomic_fetch_inc(v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_inc_acquire(atomic_t *v)
+{
+	return atomic_fetch_inc_acquire(v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_inc_release(atomic_t *v)
+{
+	return atomic_fetch_inc_release(v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_inc_relaxed(atomic_t *v)
+{
+	return atomic_fetch_inc_relaxed(v);
+}
+
+__rust_helper void
+rust_helper_atomic_dec(atomic_t *v)
+{
+	atomic_dec(v);
+}
+
+__rust_helper int
+rust_helper_atomic_dec_return(atomic_t *v)
+{
+	return atomic_dec_return(v);
+}
+
+__rust_helper int
+rust_helper_atomic_dec_return_acquire(atomic_t *v)
+{
+	return atomic_dec_return_acquire(v);
+}
+
+__rust_helper int
+rust_helper_atomic_dec_return_release(atomic_t *v)
+{
+	return atomic_dec_return_release(v);
+}
+
+__rust_helper int
+rust_helper_atomic_dec_return_relaxed(atomic_t *v)
+{
+	return atomic_dec_return_relaxed(v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_dec(atomic_t *v)
+{
+	return atomic_fetch_dec(v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_dec_acquire(atomic_t *v)
+{
+	return atomic_fetch_dec_acquire(v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_dec_release(atomic_t *v)
+{
+	return atomic_fetch_dec_release(v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_dec_relaxed(atomic_t *v)
+{
+	return atomic_fetch_dec_relaxed(v);
+}
+
+__rust_helper void
+rust_helper_atomic_and(int i, atomic_t *v)
+{
+	atomic_and(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_and(int i, atomic_t *v)
+{
+	return atomic_fetch_and(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_and_acquire(int i, atomic_t *v)
+{
+	return atomic_fetch_and_acquire(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_and_release(int i, atomic_t *v)
+{
+	return atomic_fetch_and_release(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_and_relaxed(int i, atomic_t *v)
+{
+	return atomic_fetch_and_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic_andnot(int i, atomic_t *v)
+{
+	atomic_andnot(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_andnot(int i, atomic_t *v)
+{
+	return atomic_fetch_andnot(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_andnot_acquire(int i, atomic_t *v)
+{
+	return atomic_fetch_andnot_acquire(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_andnot_release(int i, atomic_t *v)
+{
+	return atomic_fetch_andnot_release(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_andnot_relaxed(int i, atomic_t *v)
+{
+	return atomic_fetch_andnot_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic_or(int i, atomic_t *v)
+{
+	atomic_or(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_or(int i, atomic_t *v)
+{
+	return atomic_fetch_or(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_or_acquire(int i, atomic_t *v)
+{
+	return atomic_fetch_or_acquire(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_or_release(int i, atomic_t *v)
+{
+	return atomic_fetch_or_release(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_or_relaxed(int i, atomic_t *v)
+{
+	return atomic_fetch_or_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic_xor(int i, atomic_t *v)
+{
+	atomic_xor(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_xor(int i, atomic_t *v)
+{
+	return atomic_fetch_xor(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_xor_acquire(int i, atomic_t *v)
+{
+	return atomic_fetch_xor_acquire(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_xor_release(int i, atomic_t *v)
+{
+	return atomic_fetch_xor_release(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_xor_relaxed(int i, atomic_t *v)
+{
+	return atomic_fetch_xor_relaxed(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_xchg(atomic_t *v, int new)
+{
+	return atomic_xchg(v, new);
+}
+
+__rust_helper int
+rust_helper_atomic_xchg_acquire(atomic_t *v, int new)
+{
+	return atomic_xchg_acquire(v, new);
+}
+
+__rust_helper int
+rust_helper_atomic_xchg_release(atomic_t *v, int new)
+{
+	return atomic_xchg_release(v, new);
+}
+
+__rust_helper int
+rust_helper_atomic_xchg_relaxed(atomic_t *v, int new)
+{
+	return atomic_xchg_relaxed(v, new);
+}
+
+__rust_helper int
+rust_helper_atomic_cmpxchg(atomic_t *v, int old, int new)
+{
+	return atomic_cmpxchg(v, old, new);
+}
+
+__rust_helper int
+rust_helper_atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
+{
+	return atomic_cmpxchg_acquire(v, old, new);
+}
+
+__rust_helper int
+rust_helper_atomic_cmpxchg_release(atomic_t *v, int old, int new)
+{
+	return atomic_cmpxchg_release(v, old, new);
+}
+
+__rust_helper int
+rust_helper_atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)
+{
+	return atomic_cmpxchg_relaxed(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
+{
+	return atomic_try_cmpxchg(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
+{
+	return atomic_try_cmpxchg_acquire(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
+{
+	return atomic_try_cmpxchg_release(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
+{
+	return atomic_try_cmpxchg_relaxed(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic_sub_and_test(int i, atomic_t *v)
+{
+	return atomic_sub_and_test(i, v);
+}
+
+__rust_helper bool
+rust_helper_atomic_dec_and_test(atomic_t *v)
+{
+	return atomic_dec_and_test(v);
+}
+
+__rust_helper bool
+rust_helper_atomic_inc_and_test(atomic_t *v)
+{
+	return atomic_inc_and_test(v);
+}
+
+__rust_helper bool
+rust_helper_atomic_add_negative(int i, atomic_t *v)
+{
+	return atomic_add_negative(i, v);
+}
+
+__rust_helper bool
+rust_helper_atomic_add_negative_acquire(int i, atomic_t *v)
+{
+	return atomic_add_negative_acquire(i, v);
+}
+
+__rust_helper bool
+rust_helper_atomic_add_negative_release(int i, atomic_t *v)
+{
+	return atomic_add_negative_release(i, v);
+}
+
+__rust_helper bool
+rust_helper_atomic_add_negative_relaxed(int i, atomic_t *v)
+{
+	return atomic_add_negative_relaxed(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_add_unless(atomic_t *v, int a, int u)
+{
+	return atomic_fetch_add_unless(v, a, u);
+}
+
+__rust_helper bool
+rust_helper_atomic_add_unless(atomic_t *v, int a, int u)
+{
+	return atomic_add_unless(v, a, u);
+}
+
+__rust_helper bool
+rust_helper_atomic_inc_not_zero(atomic_t *v)
+{
+	return atomic_inc_not_zero(v);
+}
+
+__rust_helper bool
+rust_helper_atomic_inc_unless_negative(atomic_t *v)
+{
+	return atomic_inc_unless_negative(v);
+}
+
+__rust_helper bool
+rust_helper_atomic_dec_unless_positive(atomic_t *v)
+{
+	return atomic_dec_unless_positive(v);
+}
+
+__rust_helper int
+rust_helper_atomic_dec_if_positive(atomic_t *v)
+{
+	return atomic_dec_if_positive(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_read(const atomic64_t *v)
+{
+	return atomic64_read(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_read_acquire(const atomic64_t *v)
+{
+	return atomic64_read_acquire(v);
+}
+
+__rust_helper void
+rust_helper_atomic64_set(atomic64_t *v, s64 i)
+{
+	atomic64_set(v, i);
+}
+
+__rust_helper void
+rust_helper_atomic64_set_release(atomic64_t *v, s64 i)
+{
+	atomic64_set_release(v, i);
+}
+
+__rust_helper void
+rust_helper_atomic64_add(s64 i, atomic64_t *v)
+{
+	atomic64_add(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_add_return(s64 i, atomic64_t *v)
+{
+	return atomic64_add_return(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_add_return_acquire(s64 i, atomic64_t *v)
+{
+	return atomic64_add_return_acquire(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_add_return_release(s64 i, atomic64_t *v)
+{
+	return atomic64_add_return_release(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_add_return_relaxed(s64 i, atomic64_t *v)
+{
+	return atomic64_add_return_relaxed(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_add(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_add(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_add_acquire(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_add_acquire(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_add_release(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_add_release(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_add_relaxed(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_add_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic64_sub(s64 i, atomic64_t *v)
+{
+	atomic64_sub(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_sub_return(s64 i, atomic64_t *v)
+{
+	return atomic64_sub_return(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_sub_return_acquire(s64 i, atomic64_t *v)
+{
+	return atomic64_sub_return_acquire(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_sub_return_release(s64 i, atomic64_t *v)
+{
+	return atomic64_sub_return_release(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_sub_return_relaxed(s64 i, atomic64_t *v)
+{
+	return atomic64_sub_return_relaxed(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_sub(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_sub(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_sub_acquire(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_sub_acquire(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_sub_release(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_sub_release(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_sub_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic64_inc(atomic64_t *v)
+{
+	atomic64_inc(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_inc_return(atomic64_t *v)
+{
+	return atomic64_inc_return(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_inc_return_acquire(atomic64_t *v)
+{
+	return atomic64_inc_return_acquire(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_inc_return_release(atomic64_t *v)
+{
+	return atomic64_inc_return_release(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_inc_return_relaxed(atomic64_t *v)
+{
+	return atomic64_inc_return_relaxed(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_inc(atomic64_t *v)
+{
+	return atomic64_fetch_inc(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_inc_acquire(atomic64_t *v)
+{
+	return atomic64_fetch_inc_acquire(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_inc_release(atomic64_t *v)
+{
+	return atomic64_fetch_inc_release(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_inc_relaxed(atomic64_t *v)
+{
+	return atomic64_fetch_inc_relaxed(v);
+}
+
+__rust_helper void
+rust_helper_atomic64_dec(atomic64_t *v)
+{
+	atomic64_dec(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_dec_return(atomic64_t *v)
+{
+	return atomic64_dec_return(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_dec_return_acquire(atomic64_t *v)
+{
+	return atomic64_dec_return_acquire(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_dec_return_release(atomic64_t *v)
+{
+	return atomic64_dec_return_release(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_dec_return_relaxed(atomic64_t *v)
+{
+	return atomic64_dec_return_relaxed(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_dec(atomic64_t *v)
+{
+	return atomic64_fetch_dec(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_dec_acquire(atomic64_t *v)
+{
+	return atomic64_fetch_dec_acquire(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_dec_release(atomic64_t *v)
+{
+	return atomic64_fetch_dec_release(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_dec_relaxed(atomic64_t *v)
+{
+	return atomic64_fetch_dec_relaxed(v);
+}
+
+__rust_helper void
+rust_helper_atomic64_and(s64 i, atomic64_t *v)
+{
+	atomic64_and(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_and(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_and(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_and_acquire(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_and_acquire(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_and_release(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_and_release(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_and_relaxed(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_and_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic64_andnot(s64 i, atomic64_t *v)
+{
+	atomic64_andnot(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_andnot(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_andnot(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_andnot_acquire(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_andnot_release(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_andnot_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic64_or(s64 i, atomic64_t *v)
+{
+	atomic64_or(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_or(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_or(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_or_acquire(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_or_acquire(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_or_release(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_or_release(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_or_relaxed(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_or_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic64_xor(s64 i, atomic64_t *v)
+{
+	atomic64_xor(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_xor(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_xor(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_xor_acquire(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_xor_acquire(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_xor_release(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_xor_release(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v)
+{
+	return atomic64_fetch_xor_relaxed(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_xchg(atomic64_t *v, s64 new)
+{
+	return atomic64_xchg(v, new);
+}
+
+__rust_helper s64
+rust_helper_atomic64_xchg_acquire(atomic64_t *v, s64 new)
+{
+	return atomic64_xchg_acquire(v, new);
+}
+
+__rust_helper s64
+rust_helper_atomic64_xchg_release(atomic64_t *v, s64 new)
+{
+	return atomic64_xchg_release(v, new);
+}
+
+__rust_helper s64
+rust_helper_atomic64_xchg_relaxed(atomic64_t *v, s64 new)
+{
+	return atomic64_xchg_relaxed(v, new);
+}
+
+__rust_helper s64
+rust_helper_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
+{
+	return atomic64_cmpxchg(v, old, new);
+}
+
+__rust_helper s64
+rust_helper_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
+{
+	return atomic64_cmpxchg_acquire(v, old, new);
+}
+
+__rust_helper s64
+rust_helper_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
+{
+	return atomic64_cmpxchg_release(v, old, new);
+}
+
+__rust_helper s64
+rust_helper_atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new)
+{
+	return atomic64_cmpxchg_relaxed(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
+{
+	return atomic64_try_cmpxchg(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
+{
+	return atomic64_try_cmpxchg_acquire(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
+{
+	return atomic64_try_cmpxchg_release(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
+{
+	return atomic64_try_cmpxchg_relaxed(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic64_sub_and_test(s64 i, atomic64_t *v)
+{
+	return atomic64_sub_and_test(i, v);
+}
+
+__rust_helper bool
+rust_helper_atomic64_dec_and_test(atomic64_t *v)
+{
+	return atomic64_dec_and_test(v);
+}
+
+__rust_helper bool
+rust_helper_atomic64_inc_and_test(atomic64_t *v)
+{
+	return atomic64_inc_and_test(v);
+}
+
+__rust_helper bool
+rust_helper_atomic64_add_negative(s64 i, atomic64_t *v)
+{
+	return atomic64_add_negative(i, v);
+}
+
+__rust_helper bool
+rust_helper_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
+{
+	return atomic64_add_negative_acquire(i, v);
+}
+
+__rust_helper bool
+rust_helper_atomic64_add_negative_release(s64 i, atomic64_t *v)
+{
+	return atomic64_add_negative_release(i, v);
+}
+
+__rust_helper bool
+rust_helper_atomic64_add_negative_relaxed(s64 i, atomic64_t *v)
+{
+	return atomic64_add_negative_relaxed(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
+{
+	return atomic64_fetch_add_unless(v, a, u);
+}
+
+__rust_helper bool
+rust_helper_atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
+{
+	return atomic64_add_unless(v, a, u);
+}
+
+__rust_helper bool
+rust_helper_atomic64_inc_not_zero(atomic64_t *v)
+{
+	return atomic64_inc_not_zero(v);
+}
+
+__rust_helper bool
+rust_helper_atomic64_inc_unless_negative(atomic64_t *v)
+{
+	return atomic64_inc_unless_negative(v);
+}
+
+__rust_helper bool
+rust_helper_atomic64_dec_unless_positive(atomic64_t *v)
+{
+	return atomic64_dec_unless_positive(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_dec_if_positive(atomic64_t *v)
+{
+	return atomic64_dec_if_positive(v);
+}
+
+#endif /* _RUST_ATOMIC_API_H */
+// 615a0e0c98b5973a47fe4fa65e92935051ca00ed
diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c
index 16fa9bca5949..83e89f6a68fb 100644
--- a/rust/helpers/helpers.c
+++ b/rust/helpers/helpers.c
@@ -7,6 +7,7 @@
  * Sorted alphabetically.
  */
 
+#include "atomic.c"
 #include "auxiliary.c"
 #include "blk.c"
 #include "bug.c"
diff --git a/scripts/atomic/gen-atomics.sh b/scripts/atomic/gen-atomics.sh
index 5b98a8307693..02508d0d6fe4 100755
--- a/scripts/atomic/gen-atomics.sh
+++ b/scripts/atomic/gen-atomics.sh
@@ -11,6 +11,7 @@ cat <<EOF |
 gen-atomic-instrumented.sh      linux/atomic/atomic-instrumented.h
 gen-atomic-long.sh              linux/atomic/atomic-long.h
 gen-atomic-fallback.sh          linux/atomic/atomic-arch-fallback.h
+gen-rust-atomic-helpers.sh      ../rust/helpers/atomic.c
 EOF
 while read script header args; do
 	/bin/sh ${ATOMICDIR}/${script} ${ATOMICTBL} ${args} > ${LINUXDIR}/include/${header}
diff --git a/scripts/atomic/gen-rust-atomic-helpers.sh b/scripts/atomic/gen-rust-atomic-helpers.sh
new file mode 100755
index 000000000000..45b1e100ed7c
--- /dev/null
+++ b/scripts/atomic/gen-rust-atomic-helpers.sh
@@ -0,0 +1,67 @@
+#!/bin/sh
+# SPDX-License-Identifier: GPL-2.0
+
+ATOMICDIR=$(dirname $0)
+
+. ${ATOMICDIR}/atomic-tbl.sh
+
+#gen_proto_order_variant(meta, pfx, name, sfx, order, atomic, int, arg...)
+gen_proto_order_variant()
+{
+	local meta="$1"; shift
+	local pfx="$1"; shift
+	local name="$1"; shift
+	local sfx="$1"; shift
+	local order="$1"; shift
+	local atomic="$1"; shift
+	local int="$1"; shift
+
+	local atomicname="${atomic}_${pfx}${name}${sfx}${order}"
+
+	local ret="$(gen_ret_type "${meta}" "${int}")"
+	local params="$(gen_params "${int}" "${atomic}" "$@")"
+	local args="$(gen_args "$@")"
+	local retstmt="$(gen_ret_stmt "${meta}")"
+
+cat <<EOF
+__rust_helper ${ret}
+rust_helper_${atomicname}(${params})
+{
+	${retstmt}${atomicname}(${args});
+}
+
+EOF
+}
+
+cat << EOF
+// SPDX-License-Identifier: GPL-2.0
+
+// Generated by $0
+// DO NOT MODIFY THIS FILE DIRECTLY
+
+/*
+ * This file provides helpers for the various atomic functions for Rust.
+ */
+#ifndef _RUST_ATOMIC_API_H
+#define _RUST_ATOMIC_API_H
+
+#include <linux/atomic.h>
+
+// TODO: Remove this after INLINE_HELPERS support is added.
+#ifndef __rust_helper
+#define __rust_helper
+#endif
+
+EOF
+
+grep '^[a-z]' "$1" | while read name meta args; do
+	gen_proto "${meta}" "${name}" "atomic" "int" ${args}
+done
+
+grep '^[a-z]' "$1" | while read name meta args; do
+	gen_proto "${meta}" "${name}" "atomic64" "s64" ${args}
+done
+
+cat <<EOF
+#endif /* _RUST_ATOMIC_API_H */
+EOF
-- 
2.39.5 (Apple Git-154)



* [PATCH v7 2/9] rust: sync: Add basic atomic operation mapping framework
  2025-07-14  5:36 [PATCH v7 0/9] LKMM generic atomics in Rust Boqun Feng
  2025-07-14  5:36 ` [PATCH v7 1/9] rust: Introduce atomic API helpers Boqun Feng
@ 2025-07-14  5:36 ` Boqun Feng
  2025-07-14 10:03   ` Benno Lossin
  2025-07-14  5:36 ` [PATCH v7 3/9] rust: sync: atomic: Add ordering annotation types Boqun Feng
                   ` (6 subsequent siblings)
  8 siblings, 1 reply; 50+ messages in thread
From: Boqun Feng @ 2025-07-14  5:36 UTC (permalink / raw)
  To: linux-kernel, rust-for-linux, lkmm, linux-arch
  Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
	Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra,
	Mark Rutland, Wedson Almeida Filho, Viresh Kumar, Lyude Paul,
	Ingo Molnar, Mitchell Levy, Paul E. McKenney, Greg Kroah-Hartman,
	Linus Torvalds, Thomas Gleixner, Alan Stern

Preparation for the generic atomic implementation. To unify the
implementation of a generic method over `i32` and `i64`, the C side
atomic methods need to be grouped so that in a generic method, they can
be referred to as <type>::<method>; otherwise their parameters and
return values differ between `i32` and `i64`, which would require using
`transmute()` to unify the type into a `T`.
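
As a sketch (assuming the `AtomicHasBasicOps` trait added below), a
generic function can then call the underlying C implementation as
`T::atomic_read()` regardless of whether `T` is `i32` or `i64`:

	fn generic_read<T: AtomicHasBasicOps>(ptr: *mut T) -> T {
	    // SAFETY: Assumes `ptr` is aligned, valid for reads and free of
	    // data races, per the trait method's safety requirements.
	    unsafe { T::atomic_read(ptr) }
	}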

Introduce `AtomicImpl` to represent a basic type in Rust that has the
direct mapping to an atomic implementation from C. This trait is sealed,
and currently only `i32` and `i64` impl this.

Further, different methods are put into different `*Ops` trait groups.
This is for the future when smaller types like `i8`/`i16` are supported
but only with a limited set of APIs (e.g. only set(), load(), xchg() and
cmpxchg(), but no add() or sub() etc.); see the sketch below.
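
For example, hypothetical future `i16` support (not part of this
series) might only implement the basic and xchg trait groups:

	// Sketch only: `i16` is not an `AtomicImpl` in this series.
	// impl AtomicHasBasicOps for i16 { ... }  // set() and load()
	// impl AtomicHasXchgOps for i16 { ... }   // xchg() and cmpxchg()
	// No `AtomicHasArithmeticOps` impl, hence no add() or sub().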

Along with the atomic mod, documentation is also added for memory
models and data races.

Also bump my role to maintainer of ATOMIC INFRASTRUCTURE to reflect my
responsibility for the Rust atomic mod.

Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Reviewed-by: Benno Lossin <lossin@kernel.org>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
Benno, I actually followed your suggestion and put the safety
requirement inline, and I also realized I don't need to mention data
races, because the absence of data races is an implied safety
requirement. Note that, macro-wise, I enforced that only #[doc]
attributes can be put before `unsafe fn ..`, because this is the only
usage, and I don't think it's likely that we will want to support other
attributes. We can always add them later.

 MAINTAINERS                    |   4 +-
 rust/kernel/sync.rs            |   1 +
 rust/kernel/sync/atomic.rs     |  19 +++
 rust/kernel/sync/atomic/ops.rs | 265 +++++++++++++++++++++++++++++++++
 4 files changed, 288 insertions(+), 1 deletion(-)
 create mode 100644 rust/kernel/sync/atomic.rs
 create mode 100644 rust/kernel/sync/atomic/ops.rs

diff --git a/MAINTAINERS b/MAINTAINERS
index 0c1d245bf7b8..5eef524975ca 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -3894,7 +3894,7 @@ F:	drivers/input/touchscreen/atmel_mxt_ts.c
 ATOMIC INFRASTRUCTURE
 M:	Will Deacon <will@kernel.org>
 M:	Peter Zijlstra <peterz@infradead.org>
-R:	Boqun Feng <boqun.feng@gmail.com>
+M:	Boqun Feng <boqun.feng@gmail.com>
 R:	Mark Rutland <mark.rutland@arm.com>
 L:	linux-kernel@vger.kernel.org
 S:	Maintained
@@ -3903,6 +3903,8 @@ F:	arch/*/include/asm/atomic*.h
 F:	include/*/atomic*.h
 F:	include/linux/refcount.h
 F:	scripts/atomic/
+F:	rust/kernel/sync/atomic.rs
+F:	rust/kernel/sync/atomic/
 
 ATTO EXPRESSSAS SAS/SATA RAID SCSI DRIVER
 M:	Bradley Grove <linuxdrivers@attotech.com>
diff --git a/rust/kernel/sync.rs b/rust/kernel/sync.rs
index 36a719015583..b620027e0641 100644
--- a/rust/kernel/sync.rs
+++ b/rust/kernel/sync.rs
@@ -10,6 +10,7 @@
 use pin_init;
 
 mod arc;
+pub mod atomic;
 mod condvar;
 pub mod lock;
 mod locked_by;
diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
new file mode 100644
index 000000000000..c9c7c3617dd5
--- /dev/null
+++ b/rust/kernel/sync/atomic.rs
@@ -0,0 +1,19 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Atomic primitives.
+//!
+//! These primitives have the same semantics as their C counterparts, and the precise definitions of
+//! the semantics can be found at [`LKMM`]. Note that the Linux Kernel Memory (Consistency) Model is
+//! the only memory model for Rust code in the kernel, and Rust's own atomics should be avoided.
+//!
+//! # Data races
+//!
+//! [`LKMM`] atomics have different rules regarding data races:
+//!
+//! - A normal write from C side is treated as an atomic write if
+//!   CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=y.
+//! - Mixed-size atomic accesses don't cause data races.
+//!
+//! [`LKMM`]: srctree/tools/memory-model/
+
+pub mod ops;
diff --git a/rust/kernel/sync/atomic/ops.rs b/rust/kernel/sync/atomic/ops.rs
new file mode 100644
index 000000000000..1353dc748ef9
--- /dev/null
+++ b/rust/kernel/sync/atomic/ops.rs
@@ -0,0 +1,265 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Atomic implementations.
+//!
+//! Provides 1:1 mapping of atomic implementations.
+
+use crate::bindings;
+use crate::macros::paste;
+
+mod private {
+    /// Sealed trait marker to disable customized impls on atomic implementation traits.
+    pub trait Sealed {}
+}
+
+// `i32` and `i64` are the only supported atomic implementations.
+impl private::Sealed for i32 {}
+impl private::Sealed for i64 {}
+
+/// A marker trait for types that implement atomic operations with C side primitives.
+///
+/// This trait is sealed, and only types that have a direct mapping to the C side atomics should
+/// implement this:
+///
+/// - `i32` maps to `atomic_t`.
+/// - `i64` maps to `atomic64_t`.
+pub trait AtomicImpl: Sized + Send + Copy + private::Sealed {
+    /// The type of the delta in arithmetic or logical operations.
+    ///
+    /// For example, in `atomic_add(ptr, v)`, it's the type of `v`. Usually it's the same type as
+    /// [`Self`], but it may be different for the atomic pointer type.
+    type Delta;
+}
+
+// `atomic_t` implements atomic operations on `i32`.
+impl AtomicImpl for i32 {
+    type Delta = Self;
+}
+
+// `atomic64_t` implements atomic operations on `i64`.
+impl AtomicImpl for i64 {
+    type Delta = Self;
+}
+
+// This macro generates the function signature with the given argument list and return type.
+macro_rules! declare_atomic_method {
+    (
+        $(#[doc=$doc:expr])*
+        $func:ident($($arg:ident : $arg_type:ty),*) $(-> $ret:ty)?
+    ) => {
+        paste!(
+            $(#[doc = $doc])*
+            unsafe fn [< atomic_ $func >]($($arg: $arg_type,)*) $(-> $ret)?;
+        );
+    };
+    (
+        $(#[doc=$doc:expr])*
+        $func:ident [$variant:ident $($rest:ident)*]($($arg_sig:tt)*) $(-> $ret:ty)?
+    ) => {
+        paste!(
+            declare_atomic_method!(
+                $(#[doc = $doc])*
+                [< $func _ $variant >]($($arg_sig)*) $(-> $ret)?
+            );
+        );
+
+        declare_atomic_method!(
+            $(#[doc = $doc])*
+            $func [$($rest)*]($($arg_sig)*) $(-> $ret)?
+        );
+    };
+    (
+        $(#[doc=$doc:expr])*
+        $func:ident []($($arg_sig:tt)*) $(-> $ret:ty)?
+    ) => {
+        declare_atomic_method!(
+            $(#[doc = $doc])*
+            $func($($arg_sig)*) $(-> $ret)?
+        );
+    }
+}
+
+// This macro generates the function implementation with the given argument list and return type,
+// and it will replace the "call(...)" expression with "$ctype _ $func" to call the real C function.
+macro_rules! impl_atomic_method {
+    (
+        ($ctype:ident) $func:ident($($arg:ident: $arg_type:ty),*) $(-> $ret:ty)? {
+            call($($c_arg:expr),*)
+        }
+    ) => {
+        paste!(
+            #[inline(always)]
+            unsafe fn [< atomic_ $func >]($($arg: $arg_type,)*) $(-> $ret)? {
+                // SAFETY: Per function safety requirement, all pointers are aligned and valid, and
+                // accesses won't cause data race per LKMM.
+                unsafe { bindings::[< $ctype _ $func >]($($c_arg,)*) }
+            }
+        );
+    };
+    (
+        ($ctype:ident) $func:ident[$variant:ident $($rest:ident)*]($($arg_sig:tt)*) $(-> $ret:ty)? {
+            call($($arg:tt)*)
+        }
+    ) => {
+        paste!(
+            impl_atomic_method!(
+                ($ctype) [< $func _ $variant >]($($arg_sig)*) $( -> $ret)? {
+                    call($($arg)*)
+            }
+            );
+        );
+        impl_atomic_method!(
+            ($ctype) $func [$($rest)*]($($arg_sig)*) $( -> $ret)? {
+                call($($arg)*)
+            }
+        );
+    };
+    (
+        ($ctype:ident) $func:ident[]($($arg_sig:tt)*) $( -> $ret:ty)? {
+            call($($arg:tt)*)
+        }
+    ) => {
+        impl_atomic_method!(
+            ($ctype) $func($($arg_sig)*) $(-> $ret)? {
+                call($($arg)*)
+            }
+        );
+    }
+}
+
+// Declares the $ops trait with methods and implements the trait for `i32` and `i64`.
+macro_rules! declare_and_impl_atomic_methods {
+    ($(#[$attr:meta])* pub trait $ops:ident {
+        $(
+            $(#[doc=$doc:expr])*
+            unsafe fn $func:ident [$($variant:ident),*]($($arg_sig:tt)*) $( -> $ret:ty)? {
+                bindings::#call($($arg:tt)*)
+            }
+        )*
+    }) => {
+        $(#[$attr])*
+        pub trait $ops: AtomicImpl {
+            $(
+                declare_atomic_method!(
+                    $(#[doc=$doc])*
+                    $func[$($variant)*]($($arg_sig)*) $(-> $ret)?
+                );
+            )*
+        }
+
+        impl $ops for i32 {
+            $(
+                impl_atomic_method!(
+                    (atomic) $func[$($variant)*]($($arg_sig)*) $(-> $ret)? {
+                        call($($arg)*)
+                    }
+                );
+            )*
+        }
+
+        impl $ops for i64 {
+            $(
+                impl_atomic_method!(
+                    (atomic64) $func[$($variant)*]($($arg_sig)*) $(-> $ret)? {
+                        call($($arg)*)
+                    }
+                );
+            )*
+        }
+    }
+}
+
+declare_and_impl_atomic_methods!(
+    /// Basic atomic operations
+    pub trait AtomicHasBasicOps {
+        /// Atomic read (load).
+        ///
+        /// # Safety
+        /// - `ptr` is aligned to [`align_of::<Self>()`].
+        /// - `ptr` is valid for reads.
+        ///
+        /// [`align_of::<Self>()`]: core::mem::align_of
+        unsafe fn read[acquire](ptr: *mut Self) -> Self {
+            bindings::#call(ptr.cast())
+        }
+
+        /// Atomic set (store).
+        ///
+        /// # Safety
+        /// - `ptr` is aligned to [`align_of::<Self>()`].
+        /// - `ptr` is valid for writes.
+        ///
+        /// [`align_of::<Self>()`]: core::mem::align_of
+        unsafe fn set[release](ptr: *mut Self, v: Self) {
+            bindings::#call(ptr.cast(), v)
+        }
+    }
+);
+
+declare_and_impl_atomic_methods!(
+    /// Exchange and compare-and-exchange atomic operations
+    pub trait AtomicHasXchgOps {
+        /// Atomic exchange.
+        ///
+        /// Atomically updates `*ptr` to `v` and returns the old value.
+        ///
+        /// # Safety
+        /// - `ptr` is aligned to [`align_of::<Self>()`].
+        /// - `ptr` is valid for reads and writes.
+        ///
+        /// [`align_of::<Self>()`]: core::mem::align_of
+        unsafe fn xchg[acquire, release, relaxed](ptr: *mut Self, v: Self) -> Self {
+            bindings::#call(ptr.cast(), v)
+        }
+
+        /// Atomic compare and exchange.
+        ///
+        /// If `*ptr` == `*old`, atomically updates `*ptr` to `new`. Otherwise, `*ptr` is not
+        /// modified, and `*old` is updated to the current value of `*ptr`.
+        ///
+        /// Returns `true` if the update of `*ptr` occurred, `false` otherwise.
+        ///
+        /// # Safety
+        /// - `ptr` is aligned to [`align_of::<Self>()`].
+        /// - `ptr` is valid for reads and writes.
+        /// - `old` is aligned to [`align_of::<Self>()`].
+        /// - `old` is valid for reads and writes.
+        ///
+        /// [`align_of::<Self>()`]: core::mem::align_of
+        unsafe fn try_cmpxchg[acquire, release, relaxed](ptr: *mut Self, old: *mut Self, new: Self) -> bool {
+            bindings::#call(ptr.cast(), old, new)
+        }
+    }
+);
+
+declare_and_impl_atomic_methods!(
+    /// Atomic arithmetic operations
+    pub trait AtomicHasArithmeticOps {
+        /// Atomic add (wrapping).
+        ///
+        /// Atomically updates `*ptr` to `(*ptr).wrapping_add(v)`.
+        ///
+        /// # Safety
+        /// - `ptr` is aligned to [`align_of::<Self>()`].
+        /// - `ptr` is valid for reads and writes.
+        ///
+        /// [`align_of::<Self>()`]: core::mem::align_of
+        unsafe fn add[](ptr: *mut Self, v: Self::Delta) {
+            bindings::#call(v, ptr.cast())
+        }
+
+        /// Atomic fetch and add (wrapping).
+        ///
+        /// Atomically updates `*ptr` to `(*ptr).wrapping_add(v)`, and returns the value of `*ptr`
+        /// before the update.
+        ///
+        /// # Safety
+        /// - `ptr` is aligned to [`align_of::<Self>()`].
+        /// - `ptr` is valid for reads and writes.
+        ///
+        /// [`align_of::<Self>()`]: core::mem::align_of
+        unsafe fn fetch_add[acquire, release, relaxed](ptr: *mut Self, v: Self::Delta) -> Self {
+            bindings::#call(v, ptr.cast())
+        }
+    }
+);
-- 
2.39.5 (Apple Git-154)



* [PATCH v7 3/9] rust: sync: atomic: Add ordering annotation types
  2025-07-14  5:36 [PATCH v7 0/9] LKMM generic atomics in Rust Boqun Feng
  2025-07-14  5:36 ` [PATCH v7 1/9] rust: Introduce atomic API helpers Boqun Feng
  2025-07-14  5:36 ` [PATCH v7 2/9] rust: sync: Add basic atomic operation mapping framework Boqun Feng
@ 2025-07-14  5:36 ` Boqun Feng
  2025-07-14 10:10   ` Benno Lossin
  2025-07-14  5:36 ` [PATCH v7 4/9] rust: sync: atomic: Add generic atomics Boqun Feng
                   ` (5 subsequent siblings)
  8 siblings, 1 reply; 50+ messages in thread
From: Boqun Feng @ 2025-07-14  5:36 UTC (permalink / raw)
  To: linux-kernel, rust-for-linux, lkmm, linux-arch
  Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
	Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra,
	Mark Rutland, Wedson Almeida Filho, Viresh Kumar, Lyude Paul,
	Ingo Molnar, Mitchell Levy, Paul E. McKenney, Greg Kroah-Hartman,
	Linus Torvalds, Thomas Gleixner, Alan Stern

Preparation for atomic primitives. Instead of a suffix like _acquire, a
method parameter along with the corresponding generic parameter will be
used to specify the ordering of an atomic operation. For example,
atomic load() can be defined as:

	impl<T: ...> Atomic<T> {
	    pub fn load<O: AcquireOrRelaxed>(&self, _o: O) -> T { ... }
	}

and acquire users would do:

	let r = x.load(Acquire);

relaxed users:

	let r = x.load(Relaxed);

whereas doing the following:

	let r = x.load(Release);

will cause a compiler error.

Compared to suffixes, it's easier to tell what ordering variants an
operation has, and it also makes it easier to unify the implementation
of all ordering variants in one method via generics. The `TYPE`
associated const allows a generic function to pick up the particular
implementation specified by an ordering annotation.
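
As a sketch, a generic function can dispatch on `O::TYPE` at compile
time (the match arms below are placeholders):

	fn pick_impl<O: ordering::Any>(_: O) {
	    match O::TYPE {
	        OrderingType::Relaxed => { /* use the _relaxed variant */ }
	        OrderingType::Acquire => { /* use the _acquire variant */ }
	        OrderingType::Release => { /* use the _release variant */ }
	        OrderingType::Full => { /* use the fully-ordered variant */ }
	    }
	}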

Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
Benno, please take a good look and see if you want to provide your
Reviewed-by for this one. I didn't apply your Reviewed-by because I used
`ordering::Any` instead of `AnyOrdering`; I think you're OK with it [1],
but I could be wrong. Thanks!

[1]: https://lore.kernel.org/rust-for-linux/DB8M91D7KIT4.14W69YK7108ND@kernel.org/


 rust/kernel/sync/atomic.rs          |   3 +
 rust/kernel/sync/atomic/ordering.rs | 109 ++++++++++++++++++++++++++++
 2 files changed, 112 insertions(+)
 create mode 100644 rust/kernel/sync/atomic/ordering.rs

diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index c9c7c3617dd5..e80ac049f36b 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -17,3 +17,6 @@
 //! [`LKMM`]: srctree/tools/memory-model/
 
 pub mod ops;
+pub mod ordering;
+
+pub use ordering::{Acquire, Full, Relaxed, Release};
diff --git a/rust/kernel/sync/atomic/ordering.rs b/rust/kernel/sync/atomic/ordering.rs
new file mode 100644
index 000000000000..aea0a2bbb1b9
--- /dev/null
+++ b/rust/kernel/sync/atomic/ordering.rs
@@ -0,0 +1,109 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Memory orderings.
+//!
+//! The semantics of these orderings follows the [`LKMM`] definitions and rules.
+//!
+//! - [`Acquire`] provides ordering between the load part of the annotated operation and all the
+//!   following memory accesses, and if there is a store part, the store part has the [`Relaxed`]
+//!   ordering.
+//! - [`Release`] provides ordering between all the preceding memory accesses and the store part of
+//!   the annotated operation, and if there is a load part, the load part has the [`Relaxed`]
+//!   ordering.
+//! - [`Full`] means "fully-ordered", that is:
+//!   - It provides ordering between all the preceding memory accesses and the annotated operation.
+//!   - It provides ordering between the annotated operation and all the following memory accesses.
+//!   - It provides ordering between all the preceding memory accesses and all the following memory
+//!     accesses.
+//!   - All the orderings are the same strength as a full memory barrier (i.e. `smp_mb()`).
+//! - [`Relaxed`] provides no ordering except the dependency orderings. Dependency orderings are
+//!   described in "DEPENDENCY RELATIONS" in [`LKMM`]'s [`explanation`].
+//!
+//! [`LKMM`]: srctree/tools/memory-model/
+//! [`explanation`]: srctree/tools/memory-model/Documentation/explanation.txt
+
+/// The annotation type for relaxed memory ordering. For the description of relaxed memory
+/// ordering, see the [module-level documentation].
+///
+/// [module-level documentation]: crate::sync::atomic::ordering
+pub struct Relaxed;
+
+/// The annotation type for acquire memory ordering. For the description of acquire memory
+/// ordering, see the [module-level documentation].
+///
+/// [module-level documentation]: crate::sync::atomic::ordering
+pub struct Acquire;
+
+/// The annotation type for release memory ordering. For the description of release memory
+/// ordering, see the [module-level documentation].
+///
+/// [module-level documentation]: crate::sync::atomic::ordering
+pub struct Release;
+
+/// The annotation type for fully-ordered memory ordering. For the description of fully-ordered
+/// memory ordering, see the [module-level documentation].
+///
+/// [module-level documentation]: crate::sync::atomic::ordering
+pub struct Full;
+
+/// Describes the exact memory ordering.
+#[doc(hidden)]
+pub enum OrderingType {
+    /// Relaxed ordering.
+    Relaxed,
+    /// Acquire ordering.
+    Acquire,
+    /// Release ordering.
+    Release,
+    /// Fully-ordered.
+    Full,
+}
+
+mod internal {
+    /// Sealed trait, can be only implemented inside atomic mod.
+    pub trait Sealed {}
+
+    impl Sealed for super::Relaxed {}
+    impl Sealed for super::Acquire {}
+    impl Sealed for super::Release {}
+    impl Sealed for super::Full {}
+}
+
+/// The trait bound for annotating operations that support any ordering.
+pub trait Any: internal::Sealed {
+    /// Describes the exact memory ordering.
+    const TYPE: OrderingType;
+}
+
+impl Any for Relaxed {
+    const TYPE: OrderingType = OrderingType::Relaxed;
+}
+
+impl Any for Acquire {
+    const TYPE: OrderingType = OrderingType::Acquire;
+}
+
+impl Any for Release {
+    const TYPE: OrderingType = OrderingType::Release;
+}
+
+impl Any for Full {
+    const TYPE: OrderingType = OrderingType::Full;
+}
+
+/// The trait bound for operations that only support acquire or relaxed ordering.
+pub trait AcquireOrRelaxed: Any {}
+
+impl AcquireOrRelaxed for Acquire {}
+impl AcquireOrRelaxed for Relaxed {}
+
+/// The trait bound for operations that only support release or relaxed ordering.
+pub trait ReleaseOrRelaxed: Any {}
+
+impl ReleaseOrRelaxed for Release {}
+impl ReleaseOrRelaxed for Relaxed {}
+
+/// The trait bound for operations that only support relaxed ordering.
+pub trait RelaxedOnly: AcquireOrRelaxed + ReleaseOrRelaxed + Any {}
+
+impl RelaxedOnly for Relaxed {}
-- 
2.39.5 (Apple Git-154)



* [PATCH v7 4/9] rust: sync: atomic: Add generic atomics
  2025-07-14  5:36 [PATCH v7 0/9] LKMM generic atomics in Rust Boqun Feng
                   ` (2 preceding siblings ...)
  2025-07-14  5:36 ` [PATCH v7 3/9] rust: sync: atomic: Add ordering annotation types Boqun Feng
@ 2025-07-14  5:36 ` Boqun Feng
  2025-07-14 10:30   ` Benno Lossin
  2025-07-14  5:36 ` [PATCH v7 5/9] rust: sync: atomic: Add atomic {cmp,}xchg operations Boqun Feng
                   ` (4 subsequent siblings)
  8 siblings, 1 reply; 50+ messages in thread
From: Boqun Feng @ 2025-07-14  5:36 UTC (permalink / raw)
  To: linux-kernel, rust-for-linux, lkmm, linux-arch
  Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
	Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra,
	Mark Rutland, Wedson Almeida Filho, Viresh Kumar, Lyude Paul,
	Ingo Molnar, Mitchell Levy, Paul E. McKenney, Greg Kroah-Hartman,
	Linus Torvalds, Thomas Gleixner, Alan Stern

To allow using LKMM atomics from Rust code, a generic `Atomic<T>` is
added. Currently `T` needs to be Send + Copy because these are the
straightforward usages and all basic types support this.

Implement `AllowAtomic` for `i32` and `i64`; so far only the basic
operations load() and store() are introduced.
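
A usage sketch, based on the load()/store() API added in this patch:

	use kernel::sync::atomic::{Atomic, Acquire, Relaxed};

	let x = Atomic::new(42i32);
	assert_eq!(42, x.load(Relaxed));
	x.store(1, Relaxed);
	assert_eq!(1, x.load(Acquire));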

Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 rust/kernel/sync/atomic.rs         |  14 ++
 rust/kernel/sync/atomic/generic.rs | 285 +++++++++++++++++++++++++++++
 2 files changed, 299 insertions(+)
 create mode 100644 rust/kernel/sync/atomic/generic.rs

diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index e80ac049f36b..c5193c1c90fe 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -16,7 +16,21 @@
 //!
 //! [`LKMM`]: srctree/tools/memory-model/
 
+pub mod generic;
 pub mod ops;
 pub mod ordering;
 
+pub use generic::Atomic;
 pub use ordering::{Acquire, Full, Relaxed, Release};
+
+// SAFETY: `i32` has the same size and alignment as itself, and is round-trip transmutable to
+// itself.
+unsafe impl generic::AllowAtomic for i32 {
+    type Repr = i32;
+}
+
+// SAFETY: `i64` has the same size and alignment as itself, and is round-trip transmutable to
+// itself.
+unsafe impl generic::AllowAtomic for i64 {
+    type Repr = i64;
+}
diff --git a/rust/kernel/sync/atomic/generic.rs b/rust/kernel/sync/atomic/generic.rs
new file mode 100644
index 000000000000..b3e07328d857
--- /dev/null
+++ b/rust/kernel/sync/atomic/generic.rs
@@ -0,0 +1,285 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Generic atomic primitives.
+
+use super::ops::{AtomicHasBasicOps, AtomicImpl};
+use super::{ordering, ordering::OrderingType};
+use crate::build_error;
+use core::cell::UnsafeCell;
+
+/// A memory location which can be safely modified from multiple execution contexts.
+///
+/// This has the same size, alignment and bit validity as the underlying type `T`.
+///
+/// The atomic operations are implemented in a way that is fully compatible with the [Linux Kernel
+/// Memory (Consistency) Model][LKMM], hence they should be modeled as the corresponding
+/// [`LKMM`][LKMM] atomic primitives. With the help of [`Atomic::from_ptr()`] and
+/// [`Atomic::as_ptr()`], this provides a way to interact with [C-side atomic operations]
+/// (including those without the `atomic` prefix, e.g. `READ_ONCE()`, `WRITE_ONCE()`,
+/// `smp_load_acquire()` and `smp_store_release()`).
+///
+/// [LKMM]: srctree/tools/memory-model/
+/// [C-side atomic operations]: srctree/Documentation/atomic_t.txt
+#[repr(transparent)]
+pub struct Atomic<T: AllowAtomic>(UnsafeCell<T>);
+
+// SAFETY: `Atomic<T>` is safe to share among execution contexts because all accesses are atomic.
+unsafe impl<T: AllowAtomic> Sync for Atomic<T> {}
+
+/// Types that support basic atomic operations.
+///
+/// # Round-trip transmutability
+///
+/// `T` is round-trip transmutable to `U` if and only if both of these properties hold:
+///
+/// - Any valid bit pattern for `T` is also a valid bit pattern for `U`.
+/// - Transmuting (e.g. using [`transmute()`]) a value of type `T` to `U` and then to `T` again
+///   yields a value that is in all aspects equivalent to the original value.
+///
+/// # Safety
+///
+/// - [`Self`] must have the same size and alignment as [`Self::Repr`].
+/// - [`Self`] must be [round-trip transmutable] to [`Self::Repr`].
+///
+/// Note that this is more relaxed than requiring bi-directional transmutability (i.e.
+/// [`transmute()`] is always sound from `U` to `T`) because of the support for atomic variables
+/// over unit-only enums; see [Examples].
+///
+/// # Limitations
+///
+/// Because C primitives are used to implement the atomic operations, and a C function requires a
+/// valid object of a type to operate on (i.e. no `MaybeUninit<_>`), only types with no
+/// uninitialized bits can be passed at the Rust <-> C surface. As a result, types like `(u8,
+/// u16)` (a tuple with a `MaybeUninit` hole in it) are currently not supported. Note that
+/// technically these types could be supported if some APIs were removed for them and the inner
+/// implementation were tweaked, but the justification for supporting such a type is not strong
+/// enough at the moment. This should be resolved if there is an implementation of
+/// `MaybeUninit<i32>` as `AtomicImpl`.
+///
+/// # Examples
+///
+/// A unit-only enum that implements [`AllowAtomic`]:
+///
+/// ```
+/// use kernel::sync::atomic::{generic::AllowAtomic, Atomic, Relaxed};
+///
+/// #[derive(Clone, Copy, PartialEq, Eq)]
+/// #[repr(i32)]
+/// enum State {
+///     Uninit = 0,
+///     Working = 1,
+///     Done = 2,
+/// };
+///
+/// // SAFETY: `State` and `i32` have the same size and alignment, and `State` is round-trip
+/// // transmutable to `i32`.
+/// unsafe impl AllowAtomic for State {
+///     type Repr = i32;
+/// }
+///
+/// let s = Atomic::new(State::Uninit);
+///
+/// assert_eq!(State::Uninit, s.load(Relaxed));
+/// ```
+/// [`transmute()`]: core::mem::transmute
+/// [round-trip transmutable]: AllowAtomic#round-trip-transmutability
+/// [Examples]: AllowAtomic#examples
+pub unsafe trait AllowAtomic: Sized + Send + Copy {
+    /// The backing atomic implementation type.
+    type Repr: AtomicImpl;
+}
+
+#[inline(always)]
+const fn into_repr<T: AllowAtomic>(v: T) -> T::Repr {
+    // SAFETY: Per the safety requirement of `AllowAtomic`, the transmute operation is sound.
+    unsafe { core::mem::transmute_copy(&v) }
+}
+
+/// # Safety
+///
+/// `r` must be a valid bit pattern of `T`.
+#[inline(always)]
+const unsafe fn from_repr<T: AllowAtomic>(r: T::Repr) -> T {
+    // SAFETY: Per the safety requirement of the function, the transmute operation is sound.
+    unsafe { core::mem::transmute_copy(&r) }
+}
+
+impl<T: AllowAtomic> Atomic<T> {
+    /// Creates a new atomic `T`.
+    pub const fn new(v: T) -> Self {
+        Self(UnsafeCell::new(v))
+    }
+
+    /// Creates a reference to an atomic `T` from a pointer to `T`.
+    ///
+    /// # Safety
+    ///
+    /// - `ptr` is aligned to `align_of::<T>()`.
+    /// - `ptr` is valid for reads and writes for `'a`.
+    /// - For the duration of `'a`, other accesses to `*ptr` must not cause data races (defined
+    ///   by [`LKMM`]) against atomic operations on the returned reference. Note that if all other
+    ///   accesses are atomic, then this safety requirement is trivially fulfilled.
+    ///
+    /// [`LKMM`]: srctree/tools/memory-model
+    ///
+    /// # Examples
+    ///
+    /// Using [`Atomic::from_ptr()`] combined with [`Atomic::load()`] or [`Atomic::store()`] can
+    /// achieve the same functionality as `READ_ONCE()`/`smp_load_acquire()` or
+    /// `WRITE_ONCE()`/`smp_store_release()` on the C side:
+    ///
+    /// ```
+    /// # use kernel::types::Opaque;
+    /// use kernel::sync::atomic::{Atomic, Relaxed, Release};
+    ///
+    /// // Assume there is a C struct `foo`.
+    /// mod cbindings {
+    ///     #[repr(C)]
+    ///     pub(crate) struct foo {
+    ///         pub(crate) a: i32,
+    ///         pub(crate) b: i32
+    ///     }
+    /// }
+    ///
+    /// let tmp = Opaque::new(cbindings::foo { a: 1, b: 2 });
+    ///
+    /// // struct foo *foo_ptr = ..;
+    /// let foo_ptr = tmp.get();
+    ///
+    /// // SAFETY: `foo_ptr` is valid, and `.a` is in bounds.
+    /// let foo_a_ptr = unsafe { &raw mut (*foo_ptr).a };
+    ///
+    /// // a = READ_ONCE(foo_ptr->a);
+    /// //
+    /// // SAFETY: `foo_a_ptr` is valid for reads, and all other accesses to it are atomic, so no
+    /// // data race.
+    /// let a = unsafe { Atomic::from_ptr(foo_a_ptr) }.load(Relaxed);
+    /// # assert_eq!(a, 1);
+    ///
+    /// // smp_store_release(&foo_ptr->a, 2);
+    /// //
+    /// // SAFETY: `foo_a_ptr` is valid for writes, and all other accesses to it are atomic, so
+    /// // no data race.
+    /// unsafe { Atomic::from_ptr(foo_a_ptr) }.store(2, Release);
+    /// ```
+    ///
+    /// However, this should only be used when communicating with the C side or manipulating a C
+    /// struct.
+    pub unsafe fn from_ptr<'a>(ptr: *mut T) -> &'a Self
+    where
+        T: Sync,
+    {
+        // CAST: `T` is transparent to `Atomic<T>`.
+        // SAFETY: Per function safety requirement, `ptr` is a valid pointer and the object will
+        // live long enough. It's safe to return a `&Atomic<T>` because function safety requirement
+        // guarantees other accesses won't cause data races.
+        unsafe { &*ptr.cast::<Self>() }
+    }
+
+    /// Returns a pointer to the underlying atomic `T`.
+    ///
+    /// Note that use of the returned pointer must not cause data races defined by [`LKMM`].
+    ///
+    /// # Guarantees
+    ///
+    /// The returned pointer is properly aligned (i.e. aligned to [`align_of::<T>()`])
+    ///
+    /// [`LKMM`]: srctree/tools/memory-model
+    /// [`align_of::<T>()`]: core::mem::align_of
+    pub const fn as_ptr(&self) -> *mut T {
+        // GUARANTEE: `self.0` has the same alignment as `T`.
+        self.0.get()
+    }
+
+    /// Returns a mutable reference to the underlying atomic `T`.
+    ///
+    /// This is safe because the mutable reference of the atomic `T` guarantees the exclusive
+    /// access.
+    pub fn get_mut(&mut self) -> &mut T {
+        // SAFETY: `self.as_ptr()` is a valid pointer to `T`. `&mut self` guarantees the exclusive
+        // access, so it's safe to reborrow mutably.
+        unsafe { &mut *self.as_ptr() }
+    }
+}
+
+impl<T: AllowAtomic> Atomic<T>
+where
+    T::Repr: AtomicHasBasicOps,
+{
+    /// Loads the value from the atomic `T`.
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// use kernel::sync::atomic::{Atomic, Relaxed};
+    ///
+    /// let x = Atomic::new(42i32);
+    ///
+    /// assert_eq!(42, x.load(Relaxed));
+    ///
+    /// let x = Atomic::new(42i64);
+    ///
+    /// assert_eq!(42, x.load(Relaxed));
+    /// ```
+    #[doc(alias("atomic_read", "atomic64_read"))]
+    #[inline(always)]
+    pub fn load<Ordering: ordering::AcquireOrRelaxed>(&self, _: Ordering) -> T {
+        // CAST: Per the safety requirement of `AllowAtomic`, a valid pointer of `T` is a valid
+        // pointer of `T::Repr` for reads and valid for writes of values transmutable to `T`.
+        let a = self.as_ptr().cast::<T::Repr>();
+
+        // SAFETY:
+        // - `a` is aligned to `align_of::<T::Repr>()` because of the safety requirement of
+        //   `AllowAtomic` and the guarantee of `Atomic::as_ptr()`.
+        // - `a` is a valid pointer per the CAST justification above.
+        let v = unsafe {
+            match Ordering::TYPE {
+                OrderingType::Relaxed => T::Repr::atomic_read(a),
+                OrderingType::Acquire => T::Repr::atomic_read_acquire(a),
+                _ => build_error!("Wrong ordering"),
+            }
+        };
+
+        // SAFETY: `v` comes from reading `a` which was derived from `self.as_ptr()` which points
+        // at a valid `T`.
+        unsafe { from_repr(v) }
+    }
+
+    /// Stores a value to the atomic `T`.
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// use kernel::sync::atomic::{Atomic, Relaxed};
+    ///
+    /// let x = Atomic::new(42i32);
+    ///
+    /// assert_eq!(42, x.load(Relaxed));
+    ///
+    /// x.store(43, Relaxed);
+    ///
+    /// assert_eq!(43, x.load(Relaxed));
+    /// ```
+    #[doc(alias("atomic_set", "atomic64_set"))]
+    #[inline(always)]
+    pub fn store<Ordering: ordering::ReleaseOrRelaxed>(&self, v: T, _: Ordering) {
+        let v = into_repr(v);
+        // CAST: Per the safety requirement of `AllowAtomic`, a valid pointer of `T` is a valid
+        // pointer of `T::Repr` for reads and valid for writes of values transmutable to `T`.
+        let a = self.as_ptr().cast::<T::Repr>();
+
+        // `*self` remains valid after `atomic_set*()` because `v` is transmutable to `T`.
+        //
+        // SAFETY:
+        // - `a` is aligned to `align_of::<T::Repr>()` because of the safety requirement of
+        //   `AllowAtomic` and the guarantee of `Atomic::as_ptr()`.
+        // - `a` is a valid pointer per the CAST justification above.
+        unsafe {
+            match Ordering::TYPE {
+                OrderingType::Relaxed => T::Repr::atomic_set(a, v),
+                OrderingType::Release => T::Repr::atomic_set_release(a, v),
+                _ => build_error!("Wrong ordering"),
+            }
+        };
+    }
+}
-- 
2.39.5 (Apple Git-154)


^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [PATCH v7 5/9] rust: sync: atomic: Add atomic {cmp,}xchg operations
  2025-07-14  5:36 [PATCH v7 0/9] LKMM generic atomics in Rust Boqun Feng
                   ` (3 preceding siblings ...)
  2025-07-14  5:36 ` [PATCH v7 4/9] rust: sync: atomic: Add generic atomics Boqun Feng
@ 2025-07-14  5:36 ` Boqun Feng
  2025-07-14 10:56   ` Benno Lossin
  2025-07-14  5:36 ` [PATCH v7 6/9] rust: sync: atomic: Add the framework of arithmetic operations Boqun Feng
                   ` (3 subsequent siblings)
  8 siblings, 1 reply; 50+ messages in thread
From: Boqun Feng @ 2025-07-14  5:36 UTC (permalink / raw)
  To: linux-kernel, rust-for-linux, lkmm, linux-arch
  Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
	Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra,
	Mark Rutland, Wedson Almeida Filho, Viresh Kumar, Lyude Paul,
	Ingo Molnar, Mitchell Levy, Paul E. McKenney, Greg Kroah-Hartman,
	Linus Torvalds, Thomas Gleixner, Alan Stern

xchg() and cmpxchg() are basic operations on atomics. Provide these based
on the C APIs.

Note that cmpxchg() uses a function signature similar to
compare_exchange() in Rust std: it returns a `Result`, where `Ok(old)`
means the operation succeeded and `Err(old)` means it failed.
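
For example, a typical compare-and-swap loop on top of this API could
look like the following (a minimal sketch, not part of this patch; it
assumes an `Atomic<i32>` named `x` and the ordering types introduced
earlier in the series):

	let mut old = x.load(Relaxed);
	loop {
	    // Try to install `old + 1`; on failure `Err(cur)` carries the
	    // latest value observed by the failed cmpxchg.
	    match x.cmpxchg(old, old + 1, Full) {
	        Ok(_) => break,
	        Err(cur) => old = cur,
	    }
	}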

Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 rust/kernel/sync/atomic/generic.rs | 181 ++++++++++++++++++++++++++++-
 1 file changed, 180 insertions(+), 1 deletion(-)

diff --git a/rust/kernel/sync/atomic/generic.rs b/rust/kernel/sync/atomic/generic.rs
index b3e07328d857..4e45d594d8ef 100644
--- a/rust/kernel/sync/atomic/generic.rs
+++ b/rust/kernel/sync/atomic/generic.rs
@@ -2,7 +2,7 @@
 
 //! Generic atomic primitives.
 
-use super::ops::{AtomicHasBasicOps, AtomicImpl};
+use super::ops::{AtomicHasBasicOps, AtomicHasXchgOps, AtomicImpl};
 use super::{ordering, ordering::OrderingType};
 use crate::build_error;
 use core::cell::UnsafeCell;
@@ -283,3 +283,182 @@ pub fn store<Ordering: ordering::ReleaseOrRelaxed>(&self, v: T, _: Ordering) {
         };
     }
 }
+
+impl<T: AllowAtomic> Atomic<T>
+where
+    T::Repr: AtomicHasXchgOps,
+{
+    /// Atomic exchange.
+    ///
+    /// Atomically updates `*self` to `v` and returns the old value of `*self`.
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// use kernel::sync::atomic::{Atomic, Acquire, Relaxed};
+    ///
+    /// let x = Atomic::new(42);
+    ///
+    /// assert_eq!(42, x.xchg(52, Acquire));
+    /// assert_eq!(52, x.load(Relaxed));
+    /// ```
+    #[doc(alias("atomic_xchg", "atomic64_xchg", "swap"))]
+    #[inline(always)]
+    pub fn xchg<Ordering: ordering::Any>(&self, v: T, _: Ordering) -> T {
+        let v = into_repr(v);
+        // CAST: Per the safety requirement of `AllowAtomic`, a valid pointer of `T` is a valid
+        // pointer of `T::Repr` for reads and valid for writes of values transmutable to `T`.
+        let a = self.as_ptr().cast::<T::Repr>();
+
+        // `*self` remains valid after `atomic_xchg*()` because `v` is transmutable to `T`.
+        //
+        // SAFETY:
+        // - `a` is aligned to `align_of::<T::Repr>()` because of the safety requirement of
+        //   `AllowAtomic` and the guarantee of `Atomic::as_ptr()`.
+        // - `a` is a valid pointer per the CAST justification above.
+        let ret = unsafe {
+            match Ordering::TYPE {
+                OrderingType::Full => T::Repr::atomic_xchg(a, v),
+                OrderingType::Acquire => T::Repr::atomic_xchg_acquire(a, v),
+                OrderingType::Release => T::Repr::atomic_xchg_release(a, v),
+                OrderingType::Relaxed => T::Repr::atomic_xchg_relaxed(a, v),
+            }
+        };
+
+        // SAFETY: `ret` comes from reading `a` which was derived from `self.as_ptr()` which points
+        // at a valid `T`.
+        unsafe { from_repr(ret) }
+    }
+
+    /// Atomic compare and exchange.
+    ///
+    /// If `*self` == `old`, atomically updates `*self` to `new`. Otherwise, `*self` is not
+    /// modified.
+    ///
+    /// Compare: The comparison is done via byte-level comparison between `*self` and `old`.
+    ///
+    /// Ordering: When it succeeds, provides the ordering indicated by the `Ordering` type
+    /// parameter; a failed cmpxchg doesn't provide any ordering, and the load part of a
+    /// failed cmpxchg is a [`Relaxed`] load.
+    ///
+    /// Returns `Ok(value)` if cmpxchg succeeds, where `value` is guaranteed to be equal to
+    /// `old`; otherwise returns `Err(value)`, where `value` is the current value of `*self`.
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// use kernel::sync::atomic::{Atomic, Full, Relaxed};
+    ///
+    /// let x = Atomic::new(42);
+    ///
+    /// // Checks whether cmpxchg succeeded.
+    /// let success = x.cmpxchg(52, 64, Relaxed).is_ok();
+    /// # assert!(!success);
+    ///
+    /// // Checks whether cmpxchg failed.
+    /// let failure = x.cmpxchg(52, 64, Relaxed).is_err();
+    /// # assert!(failure);
+    ///
+    /// // Uses the old value if it failed, e.g. to retry the cmpxchg.
+    /// match x.cmpxchg(52, 64, Relaxed) {
+    ///     Ok(_) => { },
+    ///     Err(old) => {
+    ///         // do something with `old`.
+    ///         # assert_eq!(old, 42);
+    ///     }
+    /// }
+    ///
+    /// // Uses the latest value regardless of success, same as atomic_cmpxchg() in C.
+    /// let latest = x.cmpxchg(42, 64, Full).unwrap_or_else(|old| old);
+    /// # assert_eq!(42, latest);
+    /// assert_eq!(64, x.load(Relaxed));
+    /// ```
+    ///
+    /// [`Relaxed`]: super::ordering::Relaxed
+    #[doc(alias(
+        "atomic_cmpxchg",
+        "atomic64_cmpxchg",
+        "atomic_try_cmpxchg",
+        "atomic64_try_cmpxchg",
+        "compare_exchange"
+    ))]
+    #[inline(always)]
+    pub fn cmpxchg<Ordering: ordering::Any>(
+        &self,
+        mut old: T,
+        new: T,
+        o: Ordering,
+    ) -> Result<T, T> {
+        // Note on code generation:
+        //
+        // try_cmpxchg() is used to implement cmpxchg(), and if the helper functions are inlined,
+        // the compiler is able to figure out that the branch is not needed if the users don't care
+        // about whether the operation succeeds or not. One exception is on x86, due to commit
+        // 44fe84459faf ("locking/atomic: Fix atomic_try_cmpxchg() semantics"), the
+        // atomic_try_cmpxchg() on x86 has a branch even if the caller doesn't care about the
+        // success of cmpxchg and only wants to use the old value. For example, for code like:
+        //
+        //     let latest = x.cmpxchg(42, 64, Full).unwrap_or_else(|old| old);
+        //
+        // It will still generate code:
+        //
+        //     movl    $0x40, %ecx
+        //     movl    $0x34, %eax
+        //     lock
+        //     cmpxchgl        %ecx, 0x4(%rsp)
+        //     jne     1f
+        //     2:
+        //     ...
+        //     1:  movl    %eax, %ecx
+        //     jmp 2b
+        //
+        // This might be "fixed" by introducing a try_cmpxchg_exclusive() that knows the "*old"
+        // location in the C function is always safe to write.
+        if self.try_cmpxchg(&mut old, new, o) {
+            Ok(old)
+        } else {
+            Err(old)
+        }
+    }
+
+    /// Atomic compare and exchange, returning whether the operation succeeded.
+    ///
+    /// If `*self` == `old`, atomically updates `*self` to `new`. Otherwise, `*self` is not
+    /// modified, `*old` is updated to the current value of `*self`.
+    ///
+    /// "Compare" and "Ordering" part are the same as [`Atomic::cmpxchg()`].
+    ///
+    /// Returns `true` if the cmpxchg succeeds, `false` otherwise.
+    #[inline(always)]
+    fn try_cmpxchg<Ordering: ordering::Any>(&self, old: &mut T, new: T, _: Ordering) -> bool {
+        let mut old_tmp = into_repr(*old);
+        let oldp = &raw mut old_tmp;
+        let new = into_repr(new);
+        // CAST: Per the safety requirement of `AllowAtomic`, a valid pointer of `T` is a valid
+        // pointer of `T::Repr` for reads and valid for writes of values transmutable to `T`.
+        let a = self.0.get().cast::<T::Repr>();
+
+        // `*self` remains valid after `atomic_try_cmpxchg*()` because `new` is transmutable to
+        // `T`.
+        //
+        // SAFETY:
+        // - `a` is aligned to `align_of::<T::Repr>()` because of the safety requirement of
+        //   `AllowAtomic` and the guarantee of `Atomic::as_ptr()`.
+        // - `a` is a valid pointer per the CAST justification above.
+        // - `oldp` is a valid and properly aligned pointer of `T::Repr`.
+        let ret = unsafe {
+            match Ordering::TYPE {
+                OrderingType::Full => T::Repr::atomic_try_cmpxchg(a, oldp, new),
+                OrderingType::Acquire => T::Repr::atomic_try_cmpxchg_acquire(a, oldp, new),
+                OrderingType::Release => T::Repr::atomic_try_cmpxchg_release(a, oldp, new),
+                OrderingType::Relaxed => T::Repr::atomic_try_cmpxchg_relaxed(a, oldp, new),
+            }
+        };
+
+        // SAFETY: `old_tmp` comes from reading `a` which was derived from `self.as_ptr()` which
+        // points at a valid `T`.
+        *old = unsafe { from_repr(old_tmp) };
+
+        ret
+    }
+}
-- 
2.39.5 (Apple Git-154)


^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [PATCH v7 6/9] rust: sync: atomic: Add the framework of arithmetic operations
  2025-07-14  5:36 [PATCH v7 0/9] LKMM generic atomics in Rust Boqun Feng
                   ` (4 preceding siblings ...)
  2025-07-14  5:36 ` [PATCH v7 5/9] rust: sync: atomic: Add atomic {cmp,}xchg operations Boqun Feng
@ 2025-07-14  5:36 ` Boqun Feng
  2025-07-15 11:21   ` Benno Lossin
  2025-07-14  5:36 ` [PATCH v7 7/9] rust: sync: atomic: Add Atomic<u{32,64}> Boqun Feng
                   ` (2 subsequent siblings)
  8 siblings, 1 reply; 50+ messages in thread
From: Boqun Feng @ 2025-07-14  5:36 UTC (permalink / raw)
  To: linux-kernel, rust-for-linux, lkmm, linux-arch
  Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
	Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra,
	Mark Rutland, Wedson Almeida Filho, Viresh Kumar, Lyude Paul,
	Ingo Molnar, Mitchell Levy, Paul E. McKenney, Greg Kroah-Hartman,
	Linus Torvalds, Thomas Gleixner, Alan Stern

One important set of atomic operations is the arithmetic operations,
i.e. add(), sub(), fetch_add(), add_return(), etc. However it may not
make sense for all the types that implement `AllowAtomic` to have
arithmetic operations, for example a `Foo(u32)` may not have a
reasonable add() or sub(), plus subword types (`u8` and `u16`)
currently don't have atomic arithmetic operations even on the C side
and might not have them in the future in Rust (because they are
usually suboptimal on a few architectures). Therefore the plan is to
add a few subtraits of `AllowAtomic` describing which types have and
can do atomic arithmetic operations.

One trait `AllowAtomicAdd` is added, providing only add() and
fetch_add(); the rest will be added in the future.
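
As an illustration, a type that wants atomic add support would
implement the new subtrait roughly like this (a minimal sketch with a
hypothetical `Counter` newtype, not part of this patch; the traits come
from kernel::sync::atomic::generic):

	#[repr(transparent)]
	#[derive(Clone, Copy)]
	struct Counter(i32);

	// SAFETY: `Counter` has the same size and alignment as `i32`, and
	// is round-trip transmutable to `i32`.
	unsafe impl AllowAtomic for Counter {
	    type Repr = i32;
	}

	// SAFETY: Wrapping-adding any `i32` delta to a valid `Counter` bit
	// pattern yields another valid `Counter` bit pattern.
	unsafe impl AllowAtomicAdd<i32> for Counter {
	    fn rhs_into_delta(rhs: i32) -> i32 {
	        rhs
	    }
	}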

Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 rust/kernel/sync/atomic.rs         |  14 ++++
 rust/kernel/sync/atomic/generic.rs | 111 ++++++++++++++++++++++++++++-
 2 files changed, 124 insertions(+), 1 deletion(-)

diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index c5193c1c90fe..54f5b4618337 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -29,8 +29,22 @@ unsafe impl generic::AllowAtomic for i32 {
     type Repr = i32;
 }
 
+// SAFETY: The wrapping add result of two `i32`s is a valid `i32`.
+unsafe impl generic::AllowAtomicAdd<i32> for i32 {
+    fn rhs_into_delta(rhs: i32) -> i32 {
+        rhs
+    }
+}
+
 // SAFETY: `i64` has the same size and alignment with itself, and is round-trip transmutable to
 // itself.
 unsafe impl generic::AllowAtomic for i64 {
     type Repr = i64;
 }
+
+// SAFETY: The wrapping add result of two `i64`s is a valid `i64`.
+unsafe impl generic::AllowAtomicAdd<i64> for i64 {
+    fn rhs_into_delta(rhs: i64) -> i64 {
+        rhs
+    }
+}
diff --git a/rust/kernel/sync/atomic/generic.rs b/rust/kernel/sync/atomic/generic.rs
index 4e45d594d8ef..9e2394017202 100644
--- a/rust/kernel/sync/atomic/generic.rs
+++ b/rust/kernel/sync/atomic/generic.rs
@@ -2,7 +2,7 @@
 
 //! Generic atomic primitives.
 
-use super::ops::{AtomicHasBasicOps, AtomicHasXchgOps, AtomicImpl};
+use super::ops::{AtomicHasArithmeticOps, AtomicHasBasicOps, AtomicHasXchgOps, AtomicImpl};
 use super::{ordering, ordering::OrderingType};
 use crate::build_error;
 use core::cell::UnsafeCell;
@@ -104,6 +104,18 @@ const fn into_repr<T: AllowAtomic>(v: T) -> T::Repr {
     unsafe { core::mem::transmute_copy(&r) }
 }
 
+/// Types that support atomic add operations.
+///
+/// # Safety
+///
+/// Wrapping adding any value of type `Self::Repr::Delta` obtained by [`Self::rhs_into_delta()`] to
+/// any value of type `Self::Repr` obtained through transmuting a value of type `Self` must yield
+/// a value with a bit pattern also valid for `Self`.
+pub unsafe trait AllowAtomicAdd<Rhs = Self>: AllowAtomic {
+    /// Converts `Rhs` into the `Delta` type of the atomic implementation.
+    fn rhs_into_delta(rhs: Rhs) -> <Self::Repr as AtomicImpl>::Delta;
+}
+
 impl<T: AllowAtomic> Atomic<T> {
     /// Creates a new atomic `T`.
     pub const fn new(v: T) -> Self {
@@ -462,3 +474,100 @@ fn try_cmpxchg<Ordering: ordering::Any>(&self, old: &mut T, new: T, _: Ordering)
         ret
     }
 }
+
+impl<T: AllowAtomic> Atomic<T>
+where
+    T::Repr: AtomicHasArithmeticOps,
+{
+    /// Atomic add.
+    ///
+    /// Atomically updates `*self` to `(*self).wrapping_add(v)`.
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// use kernel::sync::atomic::{Atomic, Relaxed};
+    ///
+    /// let x = Atomic::new(42);
+    ///
+    /// assert_eq!(42, x.load(Relaxed));
+    ///
+    /// x.add(12, Relaxed);
+    ///
+    /// assert_eq!(54, x.load(Relaxed));
+    /// ```
+    #[inline(always)]
+    pub fn add<Rhs, Ordering: ordering::RelaxedOnly>(&self, v: Rhs, _: Ordering)
+    where
+        T: AllowAtomicAdd<Rhs>,
+    {
+        let v = T::rhs_into_delta(v);
+        // CAST: Per the safety requirement of `AllowAtomic`, a valid pointer of `T` is a valid
+        // pointer of `T::Repr` for reads and valid for writes of values transmutable to `T`.
+        let a = self.as_ptr().cast::<T::Repr>();
+
+        // `*self` remains valid after `atomic_add()` because of the safety requirement of
+        // `AllowAtomicAdd`.
+        //
+        // SAFETY:
+        // - `a` is aligned to `align_of::<T::Repr>()` because of the safety requirement of
+        //   `AllowAtomic` and the guarantee of `Atomic::as_ptr()`.
+        // - `a` is a valid pointer per the CAST justification above.
+        unsafe {
+            T::Repr::atomic_add(a, v);
+        }
+    }
+
+    /// Atomic fetch and add.
+    ///
+    /// Atomically updates `*self` to `(*self).wrapping_add(v)`, and returns the value of `*self`
+    /// before the update.
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// use kernel::sync::atomic::{Atomic, Acquire, Full, Relaxed};
+    ///
+    /// let x = Atomic::new(42);
+    ///
+    /// assert_eq!(42, x.load(Relaxed));
+    ///
+    /// assert_eq!(54, { x.fetch_add(12, Acquire); x.load(Relaxed) });
+    ///
+    /// let x = Atomic::new(42);
+    ///
+    /// assert_eq!(42, x.load(Relaxed));
+    ///
+    /// assert_eq!(54, { x.fetch_add(12, Full); x.load(Relaxed) });
+    /// ```
+    #[inline(always)]
+    pub fn fetch_add<Rhs, Ordering: ordering::Any>(&self, v: Rhs, _: Ordering) -> T
+    where
+        T: AllowAtomicAdd<Rhs>,
+    {
+        let v = T::rhs_into_delta(v);
+        // CAST: Per the safety requirement of `AllowAtomic`, a valid pointer of `T` is a valid
+        // pointer of `T::Repr` for reads and valid for writes of values transmutable to `T`.
+        let a = self.as_ptr().cast::<T::Repr>();
+
+        // `*self` remains valid after `atomic_fetch_add*()` because of the safety requirement of
+        // `AllowAtomicAdd`.
+        //
+        // SAFETY:
+        // - `a` is aligned to `align_of::<T::Repr>()` because of the safety requirement of
+        //   `AllowAtomic` and the guarantee of `Atomic::as_ptr()`.
+        // - `a` is a valid pointer per the CAST justification above.
+        let ret = unsafe {
+            match Ordering::TYPE {
+                OrderingType::Full => T::Repr::atomic_fetch_add(a, v),
+                OrderingType::Acquire => T::Repr::atomic_fetch_add_acquire(a, v),
+                OrderingType::Release => T::Repr::atomic_fetch_add_release(a, v),
+                OrderingType::Relaxed => T::Repr::atomic_fetch_add_relaxed(a, v),
+            }
+        };
+
+        // SAFETY: `ret` comes from reading `a` which was derived from `self.as_ptr()` which points
+        // at a valid `T`.
+        unsafe { from_repr(ret) }
+    }
+}
-- 
2.39.5 (Apple Git-154)


^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [PATCH v7 7/9] rust: sync: atomic: Add Atomic<u{32,64}>
  2025-07-14  5:36 [PATCH v7 0/9] LKMM generic atomics in Rust Boqun Feng
                   ` (5 preceding siblings ...)
  2025-07-14  5:36 ` [PATCH v7 6/9] rust: sync: atomic: Add the framework of arithmetic operations Boqun Feng
@ 2025-07-14  5:36 ` Boqun Feng
  2025-07-14  5:36 ` [PATCH v7 8/9] rust: sync: Add memory barriers Boqun Feng
  2025-07-14  5:36 ` [PATCH v7 9/9] rust: sync: atomic: Add Atomic<{usize,isize}> Boqun Feng
  8 siblings, 0 replies; 50+ messages in thread
From: Boqun Feng @ 2025-07-14  5:36 UTC (permalink / raw)
  To: linux-kernel, rust-for-linux, lkmm, linux-arch
  Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
	Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra,
	Mark Rutland, Wedson Almeida Filho, Viresh Kumar, Lyude Paul,
	Ingo Molnar, Mitchell Levy, Paul E. McKenney, Greg Kroah-Hartman,
	Linus Torvalds, Thomas Gleixner, Alan Stern

Add generic atomic support for basic unsigned types that have an
`AtomicImpl` with the same size and alignment.

Unit tests are added, including Atomic<i32> and Atomic<i64>.
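
For example (a minimal usage sketch):

	let x = Atomic::new(0xffff_ffff_u32);

	// Internally represented as `i32`, but loads observe the original
	// `u32` value thanks to round-trip transmutability.
	assert_eq!(0xffff_ffff, x.load(Relaxed));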

Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Reviewed-by: Andreas Hindborg <a.hindborg@kernel.org>
Reviewed-by: Benno Lossin <lossin@kernel.org>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 rust/kernel/sync/atomic.rs | 95 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 95 insertions(+)

diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index 54f5b4618337..eb4a47d7e2f3 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -48,3 +48,98 @@ fn rhs_into_delta(rhs: i64) -> i64 {
         rhs
     }
 }
+
+// SAFETY: `u32` and `i32` have the same size and alignment, and `u32` is round-trip transmutable to
+// `i32`.
+unsafe impl generic::AllowAtomic for u32 {
+    type Repr = i32;
+}
+
+// SAFETY: The wrapping add result of two `i32`s is a valid `u32`.
+unsafe impl generic::AllowAtomicAdd<u32> for u32 {
+    fn rhs_into_delta(rhs: u32) -> i32 {
+        rhs as i32
+    }
+}
+
+// SAFETY: `u64` and `i64` have the same size and alignment, and `u64` is round-trip transmutable to
+// `i64`.
+unsafe impl generic::AllowAtomic for u64 {
+    type Repr = i64;
+}
+
+// SAFETY: The wrapping add result of two `i64`s is a valid `u64`.
+unsafe impl generic::AllowAtomicAdd<u64> for u64 {
+    fn rhs_into_delta(rhs: u64) -> i64 {
+        rhs as i64
+    }
+}
+
+use crate::macros::kunit_tests;
+
+#[kunit_tests(rust_atomics)]
+mod tests {
+    use super::*;
+
+    // Call $fn($val) with each $type of $val.
+    macro_rules! for_each_type {
+        ($val:literal in [$($type:ty),*] $fn:expr) => {
+            $({
+                let v: $type = $val;
+
+                $fn(v);
+            })*
+        }
+    }
+
+    #[test]
+    fn atomic_basic_tests() {
+        for_each_type!(42 in [i32, i64, u32, u64] |v| {
+            let x = Atomic::new(v);
+
+            assert_eq!(v, x.load(Relaxed));
+        });
+    }
+
+    #[test]
+    fn atomic_xchg_tests() {
+        for_each_type!(42 in [i32, i64, u32, u64] |v| {
+            let x = Atomic::new(v);
+
+            let old = v;
+            let new = v + 1;
+
+            assert_eq!(old, x.xchg(new, Full));
+            assert_eq!(new, x.load(Relaxed));
+        });
+    }
+
+    #[test]
+    fn atomic_cmpxchg_tests() {
+        for_each_type!(42 in [i32, i64, u32, u64] |v| {
+            let x = Atomic::new(v);
+
+            let old = v;
+            let new = v + 1;
+
+            assert_eq!(Err(old), x.cmpxchg(new, new, Full));
+            assert_eq!(old, x.load(Relaxed));
+            assert_eq!(Ok(old), x.cmpxchg(old, new, Relaxed));
+            assert_eq!(new, x.load(Relaxed));
+        });
+    }
+
+    #[test]
+    fn atomic_arithmetic_tests() {
+        for_each_type!(42 in [i32, i64, u32, u64] |v| {
+            let x = Atomic::new(v);
+
+            assert_eq!(v, x.fetch_add(12, Full));
+            assert_eq!(v + 12, x.load(Relaxed));
+
+            x.add(13, Relaxed);
+
+            assert_eq!(v + 25, x.load(Relaxed));
+        });
+    }
+}
-- 
2.39.5 (Apple Git-154)


^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [PATCH v7 8/9] rust: sync: Add memory barriers
  2025-07-14  5:36 [PATCH v7 0/9] LKMM generic atomics in Rust Boqun Feng
                   ` (6 preceding siblings ...)
  2025-07-14  5:36 ` [PATCH v7 7/9] rust: sync: atomic: Add Atomic<u{32,64}> Boqun Feng
@ 2025-07-14  5:36 ` Boqun Feng
  2025-07-14  5:36 ` [PATCH v7 9/9] rust: sync: atomic: Add Atomic<{usize,isize}> Boqun Feng
  8 siblings, 0 replies; 50+ messages in thread
From: Boqun Feng @ 2025-07-14  5:36 UTC (permalink / raw)
  To: linux-kernel, rust-for-linux, lkmm, linux-arch
  Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
	Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra,
	Mark Rutland, Wedson Almeida Filho, Viresh Kumar, Lyude Paul,
	Ingo Molnar, Mitchell Levy, Paul E. McKenney, Greg Kroah-Hartman,
	Linus Torvalds, Thomas Gleixner, Alan Stern

Memory barriers are building blocks for concurrent code, hence provide
a minimal set of them.

The compiler barrier, barrier(), is implemented in inline asm instead of
using core::sync::atomic::compiler_fence() because the memory models are
different: the kernel's atomics are implemented in inline asm, therefore
the compiler barrier should be implemented in inline asm as well. Also,
barrier() is currently only public to the kernel crate until there is a
reasonable driver usage.
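
As an illustration, the classic message-passing pattern could be
written as below (a minimal sketch with hypothetical statics `DATA` and
`FLAG`, not part of this patch):

	static DATA: Atomic<i32> = Atomic::new(0);
	static FLAG: Atomic<i32> = Atomic::new(0);

	// Writer side:
	DATA.store(42, Relaxed);
	smp_wmb(); // Orders the write to DATA before the write to FLAG.
	FLAG.store(1, Relaxed);

	// Reader side:
	if FLAG.load(Relaxed) == 1 {
	    smp_rmb(); // Pairs with the smp_wmb() on the writer side.
	    assert_eq!(42, DATA.load(Relaxed));
	}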

Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 rust/helpers/barrier.c      | 18 +++++++++++
 rust/helpers/helpers.c      |  1 +
 rust/kernel/sync.rs         |  1 +
 rust/kernel/sync/barrier.rs | 61 +++++++++++++++++++++++++++++++++++++
 4 files changed, 81 insertions(+)
 create mode 100644 rust/helpers/barrier.c
 create mode 100644 rust/kernel/sync/barrier.rs

diff --git a/rust/helpers/barrier.c b/rust/helpers/barrier.c
new file mode 100644
index 000000000000..cdf28ce8e511
--- /dev/null
+++ b/rust/helpers/barrier.c
@@ -0,0 +1,18 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <asm/barrier.h>
+
+void rust_helper_smp_mb(void)
+{
+	smp_mb();
+}
+
+void rust_helper_smp_wmb(void)
+{
+	smp_wmb();
+}
+
+void rust_helper_smp_rmb(void)
+{
+	smp_rmb();
+}
diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c
index 83e89f6a68fb..8ddfc8f84e87 100644
--- a/rust/helpers/helpers.c
+++ b/rust/helpers/helpers.c
@@ -9,6 +9,7 @@
 
 #include "atomic.c"
 #include "auxiliary.c"
+#include "barrier.c"
 #include "blk.c"
 #include "bug.c"
 #include "build_assert.c"
diff --git a/rust/kernel/sync.rs b/rust/kernel/sync.rs
index b620027e0641..c7c0e552bafe 100644
--- a/rust/kernel/sync.rs
+++ b/rust/kernel/sync.rs
@@ -11,6 +11,7 @@
 
 mod arc;
 pub mod atomic;
+pub mod barrier;
 mod condvar;
 pub mod lock;
 mod locked_by;
diff --git a/rust/kernel/sync/barrier.rs b/rust/kernel/sync/barrier.rs
new file mode 100644
index 000000000000..8f2d435fcd94
--- /dev/null
+++ b/rust/kernel/sync/barrier.rs
@@ -0,0 +1,61 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Memory barriers.
+//!
+//! These primitives have the same semantics as their C counterparts; the precise definitions
+//! of their semantics can be found in [`LKMM`].
+//!
+//! [`LKMM`]: srctree/tools/memory-model/
+
+/// A compiler barrier.
+///
+/// A barrier that prevents the compiler from reordering memory accesses across the barrier.
+#[inline(always)]
+pub(crate) fn barrier() {
+    // By default, Rust inline asms are treated as being able to access any memory or flags, hence
+    // it suffices as a compiler barrier.
+    //
+    // SAFETY: An empty asm block.
+    unsafe { core::arch::asm!("") };
+}
+
+/// A full memory barrier.
+///
+/// A barrier that prevents the compiler and the CPU from reordering memory accesses across the barrier.
+#[inline(always)]
+pub fn smp_mb() {
+    if cfg!(CONFIG_SMP) {
+        // SAFETY: `smp_mb()` is safe to call.
+        unsafe { bindings::smp_mb() };
+    } else {
+        barrier();
+    }
+}
+
+/// A write-write memory barrier.
+///
+/// A barrier that prevents the compiler and the CPU from reordering memory write accesses across the
+/// barrier.
+#[inline(always)]
+pub fn smp_wmb() {
+    if cfg!(CONFIG_SMP) {
+        // SAFETY: `smp_wmb()` is safe to call.
+        unsafe { bindings::smp_wmb() };
+    } else {
+        barrier();
+    }
+}
+
+/// A read-read memory barrier.
+///
+/// A barrier that prevents the compiler and the CPU from reordering memory read accesses across the
+/// barrier.
+#[inline(always)]
+pub fn smp_rmb() {
+    if cfg!(CONFIG_SMP) {
+        // SAFETY: `smp_rmb()` is safe to call.
+        unsafe { bindings::smp_rmb() };
+    } else {
+        barrier();
+    }
+}
-- 
2.39.5 (Apple Git-154)


^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [PATCH v7 9/9] rust: sync: atomic: Add Atomic<{usize,isize}>
  2025-07-14  5:36 [PATCH v7 0/9] LKMM generic atomics in Rust Boqun Feng
                   ` (7 preceding siblings ...)
  2025-07-14  5:36 ` [PATCH v7 8/9] rust: sync: Add memory barriers Boqun Feng
@ 2025-07-14  5:36 ` Boqun Feng
  2025-07-14 11:06   ` Benno Lossin
  8 siblings, 1 reply; 50+ messages in thread
From: Boqun Feng @ 2025-07-14  5:36 UTC (permalink / raw)
  To: linux-kernel, rust-for-linux, lkmm, linux-arch
  Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
	Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra,
	Mark Rutland, Wedson Almeida Filho, Viresh Kumar, Lyude Paul,
	Ingo Molnar, Mitchell Levy, Paul E. McKenney, Greg Kroah-Hartman,
	Linus Torvalds, Thomas Gleixner, Alan Stern

Add generic atomic support for `usize` and `isize`. Note that instead of
mapping directly to `atomic_long_t`, the representation type
(`AllowAtomic::Repr`) is selected based on CONFIG_64BIT. This reduces
the need to create `atomic_long_*` helpers, which could save kernel
binary size if inline helpers are not available. To do so, an internal
type `isize_atomic_repr` is defined: it is `i32` on a 32-bit kernel and
`i64` on a 64-bit kernel.
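
Usage is the same as for the other integer types, for example (a
minimal sketch):

	let objs = Atomic::new(0usize);

	// Backed by atomic_t on 32-bit kernels and atomic64_t on 64-bit
	// kernels, transparently to the user.
	objs.fetch_add(1, Full);
	assert_eq!(1, objs.load(Relaxed));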

Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Reviewed-by: Andreas Hindborg <a.hindborg@kernel.org>
Reviewed-by: Benno Lossin <lossin@kernel.org>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 rust/kernel/sync/atomic.rs | 50 +++++++++++++++++++++++++++++++++++---
 1 file changed, 46 insertions(+), 4 deletions(-)

diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index eb4a47d7e2f3..3c1bb0c4d396 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -49,6 +49,35 @@ fn rhs_into_delta(rhs: i64) -> i64 {
     }
 }
 
+// Defines an internal type that always maps to the integer type which has the same size and
+// alignment as `isize` and `usize`; `isize` and `usize` are always bi-directionally transmutable
+// to `isize_atomic_repr`, which also always implements `AtomicImpl`.
+#[allow(non_camel_case_types)]
+#[cfg(not(CONFIG_64BIT))]
+type isize_atomic_repr = i32;
+#[allow(non_camel_case_types)]
+#[cfg(CONFIG_64BIT)]
+type isize_atomic_repr = i64;
+
+// Ensure size and alignment requirements are checked.
+crate::static_assert!(core::mem::size_of::<isize>() == core::mem::size_of::<isize_atomic_repr>());
+crate::static_assert!(core::mem::align_of::<isize>() == core::mem::align_of::<isize_atomic_repr>());
+crate::static_assert!(core::mem::size_of::<usize>() == core::mem::size_of::<isize_atomic_repr>());
+crate::static_assert!(core::mem::align_of::<usize>() == core::mem::align_of::<isize_atomic_repr>());
+
+// SAFETY: `isize` has the same size and alignment as `isize_atomic_repr`, and is round-trip
+// transmutable to `isize_atomic_repr`.
+unsafe impl generic::AllowAtomic for isize {
+    type Repr = isize_atomic_repr;
+}
+
+// SAFETY: The wrapping add result of two `isize_atomic_repr`s is a valid `isize`.
+unsafe impl generic::AllowAtomicAdd<isize> for isize {
+    fn rhs_into_delta(rhs: isize) -> isize_atomic_repr {
+        rhs as isize_atomic_repr
+    }
+}
+
 // SAFETY: `u32` and `i32` have the same size and alignment, and `u32` is round-trip transmutable to
 // `i32`.
 unsafe impl generic::AllowAtomic for u32 {
@@ -75,6 +104,19 @@ fn rhs_into_delta(rhs: u64) -> i64 {
     }
 }
 
+// SAFETY: `usize` has the same size and alignment as `isize_atomic_repr`, and is round-trip
+// transmutable to `isize_atomic_repr`.
+unsafe impl generic::AllowAtomic for usize {
+    type Repr = isize_atomic_repr;
+}
+
+// SAFETY: The wrapping add result of two `isize_atomic_repr`s is a valid `usize`.
+unsafe impl generic::AllowAtomicAdd<usize> for usize {
+    fn rhs_into_delta(rhs: usize) -> isize_atomic_repr {
+        rhs as isize_atomic_repr
+    }
+}
+
 use crate::macros::kunit_tests;
 
 #[kunit_tests(rust_atomics)]
@@ -94,7 +136,7 @@ macro_rules! for_each_type {
 
     #[test]
     fn atomic_basic_tests() {
-        for_each_type!(42 in [i32, i64, u32, u64] |v| {
+        for_each_type!(42 in [i32, i64, u32, u64, isize, usize] |v| {
             let x = Atomic::new(v);
 
             assert_eq!(v, x.load(Relaxed));
@@ -103,7 +145,7 @@ fn atomic_basic_tests() {
 
     #[test]
     fn atomic_xchg_tests() {
-        for_each_type!(42 in [i32, i64, u32, u64] |v| {
+        for_each_type!(42 in [i32, i64, u32, u64, isize, usize] |v| {
             let x = Atomic::new(v);
 
             let old = v;
@@ -116,7 +158,7 @@ fn atomic_xchg_tests() {
 
     #[test]
     fn atomic_cmpxchg_tests() {
-        for_each_type!(42 in [i32, i64, u32, u64] |v| {
+        for_each_type!(42 in [i32, i64, u32, u64, isize, usize] |v| {
             let x = Atomic::new(v);
 
             let old = v;
@@ -131,7 +173,7 @@ fn atomic_cmpxchg_tests() {
 
     #[test]
     fn atomic_arithmetic_tests() {
-        for_each_type!(42 in [i32, i64, u32, u64] |v| {
+        for_each_type!(42 in [i32, i64, u32, u64, isize, usize] |v| {
             let x = Atomic::new(v);
 
             assert_eq!(v, x.fetch_add(12, Full));
-- 
2.39.5 (Apple Git-154)


^ permalink raw reply related	[flat|nested] 50+ messages in thread

* Re: [PATCH v7 2/9] rust: sync: Add basic atomic operation mapping framework
  2025-07-14  5:36 ` [PATCH v7 2/9] rust: sync: Add basic atomic operation mapping framework Boqun Feng
@ 2025-07-14 10:03   ` Benno Lossin
  2025-07-14 13:42     ` Boqun Feng
  0 siblings, 1 reply; 50+ messages in thread
From: Benno Lossin @ 2025-07-14 10:03 UTC (permalink / raw)
  To: Boqun Feng, linux-kernel, rust-for-linux, lkmm, linux-arch
  Cc: Miguel Ojeda, Alex Gaynor, Gary Guo, Björn Roy Baron,
	Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich,
	Will Deacon, Peter Zijlstra, Mark Rutland, Wedson Almeida Filho,
	Viresh Kumar, Lyude Paul, Ingo Molnar, Mitchell Levy,
	Paul E. McKenney, Greg Kroah-Hartman, Linus Torvalds,
	Thomas Gleixner, Alan Stern

On Mon Jul 14, 2025 at 7:36 AM CEST, Boqun Feng wrote:
> Preparation for generic atomic implementation. To unify the
> implementation of a generic method over `i32` and `i64`, the C side
> atomic methods need to be grouped so that in a generic method, they can
> be referred as <type>::<method>, otherwise their parameters and return
> value are different between `i32` and `i64`, which would require using
> `transmute()` to unify the type into a `T`.
>
> Introduce `AtomicImpl` to represent a basic type in Rust that has the
> direct mapping to an atomic implementation from C. This trait is sealed,
> and currently only `i32` and `i64` impl this.
>
> Further, different methods are put into different `*Ops` trait groups,
> and this is for the future when smaller types like `i8`/`i16` are
> supported but only with a limited set of API (e.g. only set(), load(),
> xchg() and cmpxchg(), no add() or sub() etc).
>
> While the atomic mod is introduced, documentation is also added for
> memory models and data races.
>
> Also bump my role to the maintainer of ATOMIC INFRASTRUCTURE to reflect
> my responsibility for the Rust atomic mod.
>
> Reviewed-by: Alice Ryhl <aliceryhl@google.com>
> Reviewed-by: Benno Lossin <lossin@kernel.org>
> Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> ---
> Benno, I actually followed your suggestion and put the safety
> requirement inline, and also I realized I don't need to mention
> data races, because no data race is an implied safety requirement.

Thanks! I think it looks much better :)

> Note that macro-wise, I forced only #[doc] attributes can be put
> before `unsafe fn ..` because this is the only usage, and I don't
> think it's likely we want to support other attributes. We can always
> add them later.

Sounds good.

> +declare_and_impl_atomic_methods!(
> +    /// Basic atomic operations
> +    pub trait AtomicHasBasicOps {

I think we should drop the `Has` from the names. So this one can just be
`AtomicBasicOps`. Or how about `BasicAtomic`, or `AtomicBase`?

> +        /// Atomic read (load).
> +        ///
> +        /// # Safety
> +        /// - `ptr` is aligned to [`align_of::<Self>()`].
> +        /// - `ptr` is valid for reads.
> +        ///
> +        /// [`align_of::<Self>()`]: core::mem::align_of
> +        unsafe fn read[acquire](ptr: *mut Self) -> Self {
> +            bindings::#call(ptr.cast())
> +        }
> +
> +        /// Atomic set (store).
> +        ///
> +        /// # Safety
> +        /// - `ptr` is aligned to [`align_of::<Self>()`].
> +        /// - `ptr` is valid for writes.
> +        ///
> +        /// [`align_of::<Self>()`]: core::mem::align_of
> +        unsafe fn set[release](ptr: *mut Self, v: Self) {
> +            bindings::#call(ptr.cast(), v)
> +        }
> +    }
> +);
> +
> +declare_and_impl_atomic_methods!(
> +    /// Exchange and compare-and-exchange atomic operations
> +    pub trait AtomicHasXchgOps {

Same here `AtomicXchgOps` or `AtomicExchangeOps` or `AtomicExchange`?
(I would prefer to not abbreviate it to `Xchg`)

> +        /// Atomic exchange.
> +        ///
> +        /// Atomically updates `*ptr` to `v` and returns the old value.
> +        ///
> +        /// # Safety
> +        /// - `ptr` is aligned to [`align_of::<Self>()`].
> +        /// - `ptr` is valid for reads and writes.
> +        ///
> +        /// [`align_of::<Self>()`]: core::mem::align_of
> +        unsafe fn xchg[acquire, release, relaxed](ptr: *mut Self, v: Self) -> Self {
> +            bindings::#call(ptr.cast(), v)
> +        }
> +
> +        /// Atomic compare and exchange.
> +        ///
> +        /// If `*ptr` == `*old`, atomically updates `*ptr` to `new`. Otherwise, `*ptr` is not
> +        /// modified, `*old` is updated to the current value of `*ptr`.
> +        ///
> +        /// Return `true` if the update of `*ptr` occurred, `false` otherwise.
> +        ///
> +        /// # Safety
> +        /// - `ptr` is aligned to [`align_of::<Self>()`].
> +        /// - `ptr` is valid for reads and writes.
> +        /// - `old` is aligned to [`align_of::<Self>()`].
> +        /// - `old` is valid for reads and writes.
> +        ///
> +        /// [`align_of::<Self>()`]: core::mem::align_of
> +        unsafe fn try_cmpxchg[acquire, release, relaxed](ptr: *mut Self, old: *mut Self, new: Self) -> bool {
> +            bindings::#call(ptr.cast(), old, new)
> +        }
> +    }
> +);
> +
> +declare_and_impl_atomic_methods!(
> +    /// Atomic arithmetic operations
> +    pub trait AtomicHasArithmeticOps {

Forgot to rename this one to `Add`? I think `AtomicAdd` sounds best for
this one.

---
Cheers,
Benno

> +        /// Atomic add (wrapping).
> +        ///
> +        /// Atomically updates `*ptr` to `(*ptr).wrapping_add(v)`.
> +        ///
> +        /// # Safety
> +        /// - `ptr` is aligned to `align_of::<Self>()`.
> +        /// - `ptr` is valid for reads and writes.
> +        ///
> +        /// [`align_of::<Self>()`]: core::mem::align_of
> +        unsafe fn add[](ptr: *mut Self, v: Self::Delta) {
> +            bindings::#call(v, ptr.cast())
> +        }
> +
> +        /// Atomic fetch and add (wrapping).
> +        ///
> +        /// Atomically updates `*ptr` to `(*ptr).wrapping_add(v)`, and returns the value of `*ptr`
> +        /// before the update.
> +        ///
> +        /// # Safety
> +        /// - `ptr` is aligned to `align_of::<Self>()`.
> +        /// - `ptr` is valid for reads and writes.
> +        ///
> +        /// [`align_of::<Self>()`]: core::mem::align_of
> +        unsafe fn fetch_add[acquire, release, relaxed](ptr: *mut Self, v: Self::Delta) -> Self {
> +            bindings::#call(v, ptr.cast())
> +        }
> +    }
> +);


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v7 3/9] rust: sync: atomic: Add ordering annotation types
  2025-07-14  5:36 ` [PATCH v7 3/9] rust: sync: atomic: Add ordering annotation types Boqun Feng
@ 2025-07-14 10:10   ` Benno Lossin
  2025-07-14 14:59     ` Boqun Feng
  0 siblings, 1 reply; 50+ messages in thread
From: Benno Lossin @ 2025-07-14 10:10 UTC (permalink / raw)
  To: Boqun Feng, linux-kernel, rust-for-linux, lkmm, linux-arch
  Cc: Miguel Ojeda, Alex Gaynor, Gary Guo, Björn Roy Baron,
	Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich,
	Will Deacon, Peter Zijlstra, Mark Rutland, Wedson Almeida Filho,
	Viresh Kumar, Lyude Paul, Ingo Molnar, Mitchell Levy,
	Paul E. McKenney, Greg Kroah-Hartman, Linus Torvalds,
	Thomas Gleixner, Alan Stern

On Mon Jul 14, 2025 at 7:36 AM CEST, Boqun Feng wrote:
> Preparation for atomic primitives. Instead of a suffix like _acquire, a
> method parameter along with the corresponding generic parameter will be
> used to specify the ordering of an atomic operation. For example,
> atomic load() can be defined as:
>
> 	impl<T: ...> Atomic<T> {
> 	    pub fn load<O: AcquireOrRelaxed>(&self, _o: O) -> T { ... }
> 	}
>
> and acquire users would do:
>
> 	let r = x.load(Acquire);
>
> relaxed users:
>
> 	let r = x.load(Relaxed);
>
> doing the following:
>
> 	let r = x.load(Release);
>
> will cause a compiler error.
>
> Compared to suffixes, it's easier to tell what ordering variants an
> operation has, and it also makes it easier to unify the implementation of
> all ordering variants in one method via generics. The `TYPE` associated
> const is for a generic function to pick up the particular implementation
> specified by an ordering annotation.
>
> Reviewed-by: Alice Ryhl <aliceryhl@google.com>
> Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> ---
> Benno, please take a good look and see if you want to provide your Reviewed-by
> for this one. I didn't apply your Reviewed-by because I used
> `ordering::Any` instead of `AnyOrdering`, I think you're Ok with it [1],
> but I could be wrong. Thanks!
>
> [1]: https://lore.kernel.org/rust-for-linux/DB8M91D7KIT4.14W69YK7108ND@kernel.org/

> +/// The trait bound for annotating operations that support any ordering.
> +pub trait Any: internal::Sealed {

How about we just name this `Ordering`? Because that's what it is :)

That sadly means you can't do

    fn foo<Ordering: Ordering>() {}
           --------  ^^^^^^^^ not a trait
           |
           found this type parameter

But you can still do

    fn foo<O: Ordering>(_: O) {}

If we don't have the ordering module public and instead re-export from
atomic, you could also write:

    fn foo<Ordering: atomic::Ordering>(_: Ordering) {}

If you want it to be extra clear. What do you think?

---
Cheers,
Benno

> +    /// Describes the exact memory ordering.
> +    const TYPE: OrderingType;
> +}

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v7 4/9] rust: sync: atomic: Add generic atomics
  2025-07-14  5:36 ` [PATCH v7 4/9] rust: sync: atomic: Add generic atomics Boqun Feng
@ 2025-07-14 10:30   ` Benno Lossin
  2025-07-14 14:21     ` Boqun Feng
  0 siblings, 1 reply; 50+ messages in thread
From: Benno Lossin @ 2025-07-14 10:30 UTC (permalink / raw)
  To: Boqun Feng, linux-kernel, rust-for-linux, lkmm, linux-arch
  Cc: Miguel Ojeda, Alex Gaynor, Gary Guo, Björn Roy Baron,
	Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich,
	Will Deacon, Peter Zijlstra, Mark Rutland, Wedson Almeida Filho,
	Viresh Kumar, Lyude Paul, Ingo Molnar, Mitchell Levy,
	Paul E. McKenney, Greg Kroah-Hartman, Linus Torvalds,
	Thomas Gleixner, Alan Stern

On Mon Jul 14, 2025 at 7:36 AM CEST, Boqun Feng wrote:
> To provide using LKMM atomics for Rust code, a generic `Atomic<T>` is
> added, currently `T` needs to be Send + Copy because these are the
> straightforward usages and all basic types support this.
>
> Implement `AllowAtomic` for `i32` and `i64`, and so far only basic
> operations load() and store() are introduced.
>
> Reviewed-by: Alice Ryhl <aliceryhl@google.com>
> Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> ---
>  rust/kernel/sync/atomic.rs         |  14 ++
>  rust/kernel/sync/atomic/generic.rs | 285 +++++++++++++++++++++++++++++
>  2 files changed, 299 insertions(+)
>  create mode 100644 rust/kernel/sync/atomic/generic.rs
>
> diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
> index e80ac049f36b..c5193c1c90fe 100644
> --- a/rust/kernel/sync/atomic.rs
> +++ b/rust/kernel/sync/atomic.rs
> @@ -16,7 +16,21 @@
>  //!
>  //! [`LKMM`]: srctree/tools/memory-model/
>  
> +pub mod generic;

Hmm, maybe just re-export the stuff? I don't think there's an advantage
to having the generic module be public.

>  pub mod ops;
>  pub mod ordering;
>  
> +pub use generic::Atomic;
>  pub use ordering::{Acquire, Full, Relaxed, Release};
> +
> +// SAFETY: `i32` has the same size and alignment with itself, and is round-trip transmutable to
> +// itself.
> +unsafe impl generic::AllowAtomic for i32 {
> +    type Repr = i32;
> +}
> +
> +// SAFETY: `i64` has the same size and alignment with itself, and is round-trip transmutable to
> +// itself.
> +unsafe impl generic::AllowAtomic for i64 {
> +    type Repr = i64;
> +}
> diff --git a/rust/kernel/sync/atomic/generic.rs b/rust/kernel/sync/atomic/generic.rs
> new file mode 100644
> index 000000000000..b3e07328d857
> --- /dev/null
> +++ b/rust/kernel/sync/atomic/generic.rs
> @@ -0,0 +1,285 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +//! Generic atomic primitives.
> +
> +use super::ops::{AtomicHasBasicOps, AtomicImpl};
> +use super::{ordering, ordering::OrderingType};
> +use crate::build_error;
> +use core::cell::UnsafeCell;
> +
> +/// A memory location which can be safely modified from multiple execution contexts.
> +///
> +/// This has the same size, alignment and bit validity as the underlying type `T`.

Let's also mention that this disables any niche optimizations (due to
the unsafe cell).

> +///
> +/// The atomic operations are implemented in a way that is fully compatible with the [Linux Kernel
> +/// Memory (Consistency) Model][LKMM], hence they should be modeled as the corresponding
> +/// [`LKMM`][LKMM] atomic primitives. With the help of [`Atomic::from_ptr()`] and
> +/// [`Atomic::as_ptr()`], this provides a way to interact with [C-side atomic operations]
> +/// (including those without the `atomic` prefix, e.g. `READ_ONCE()`, `WRITE_ONCE()`,
> +/// `smp_load_acquire()` and `smp_store_release()`).
> +///
> +/// [LKMM]: srctree/tools/memory-model/
> +/// [C-side atomic operations]: srctree/Documentation/atomic_t.txt

Did you check that these links look good in rustdoc?

> +#[repr(transparent)]
> +pub struct Atomic<T: AllowAtomic>(UnsafeCell<T>);
> +
> +// SAFETY: `Atomic<T>` is safe to share among execution contexts because all accesses are atomic.
> +unsafe impl<T: AllowAtomic> Sync for Atomic<T> {}
> +
> +/// Types that support basic atomic operations.
> +///
> +/// # Round-trip transmutability
> +///
> +/// `T` is round-trip transmutable to `U` if and only if both of these properties hold:
> +///
> +/// - Any valid bit pattern for `T` is also a valid bit pattern for `U`.
> +/// - Transmuting (e.g. using [`transmute()`]) a value of type `T` to `U` and then to `T` again
> +///   yields a value that is in all aspects equivalent to the original value.
> +///
> +/// # Safety
> +///
> +/// - [`Self`] must have the same size and alignment as [`Self::Repr`].
> +/// - [`Self`] must be [round-trip transmutable] to  [`Self::Repr`].
> +///
> +/// Note that this is more relaxed than requiring the bi-directional transmutability (i.e.
> +/// [`transmute()`] is always sound between `U` to `T`) because of the support for atomic variables

s/ to / and /

> +/// over unit-only enums, see [Examples].
> +///
> +/// # Limitations
> +///
> +/// Because C primitives are used to implement the atomic operations, and a C function requires a
> +/// valid object of a type to operate on (i.e. no `MaybeUninit<_>`), hence at the Rust <-> C
> +/// surface, only types with no uninitialized bits can be passed. As a result, types like `(u8,

s/no uninitialized/initialized/

> +/// u16)` (a tuple with a `MaybeUninit` hole in it) are currently not supported. Note that

s/a tuple with a `MaybeUninit` hole in it/padding bytes are uninitialized/

> +/// technically these types can be supported if some APIs are removed for them and the inner
> +/// implementation is tweaked, but the justification for supporting such a type is not strong enough at
> +/// the moment. This should be resolved if there is an implementation for `MaybeUninit<i32>` as
> +/// `AtomicImpl`.
> +///
> +/// # Examples
> +///
> +/// A unit-only enum that implements [`AllowAtomic`]:
> +///
> +/// ```
> +/// use kernel::sync::atomic::{generic::AllowAtomic, Atomic, Relaxed};
> +///
> +/// #[derive(Clone, Copy, PartialEq, Eq)]
> +/// #[repr(i32)]
> +/// enum State {
> +///     Uninit = 0,
> +///     Working = 1,
> +///     Done = 2,
> +/// };
> +///
> +/// // SAFETY: `State` and `i32` has the same size and alignment, and it's round-trip
> +/// // transmutable to `i32`.
> +/// unsafe impl AllowAtomic for State {
> +///     type Repr = i32;
> +/// }
> +///
> +/// let s = Atomic::new(State::Uninit);
> +///
> +/// assert_eq!(State::Uninit, s.load(Relaxed));
> +/// ```
> +/// [`transmute()`]: core::mem::transmute
> +/// [round-trip transmutable]: AllowAtomic#round-trip-transmutability
> +/// [Examples]: AllowAtomic#examples
> +pub unsafe trait AllowAtomic: Sized + Send + Copy {
> +    /// The backing atomic implementation type.
> +    type Repr: AtomicImpl;
> +}
> +
> +#[inline(always)]
> +const fn into_repr<T: AllowAtomic>(v: T) -> T::Repr {
> +    // SAFETY: Per the safety requirement of `AllowAtomic`, the transmute operation is sound.

Please explain the concrete parts of the safety requirements that you
are using here (ie round-trip-transmutability).

> +    unsafe { core::mem::transmute_copy(&v) }
> +}
> +
> +/// # Safety
> +///
> +/// `r` must be a valid bit pattern of `T`.
> +#[inline(always)]
> +const unsafe fn from_repr<T: AllowAtomic>(r: T::Repr) -> T {
> +    // SAFETY: Per the safety requirement of the function, the transmute operation is sound.
> +    unsafe { core::mem::transmute_copy(&r) }
> +}
> +
> +impl<T: AllowAtomic> Atomic<T> {
> +    /// Creates a new atomic `T`.
> +    pub const fn new(v: T) -> Self {
> +        Self(UnsafeCell::new(v))
> +    }
> +
> +    /// Creates a reference to an atomic `T` from a pointer to `T`.
> +    ///
> +    /// # Safety
> +    ///
> +    /// - `ptr` is aligned to `align_of::<T>()`.
> +    /// - `ptr` is valid for reads and writes for `'a`.
> +    /// - For the duration of `'a`, other accesses to `*ptr` must not cause data races (defined
> +    ///   by [`LKMM`]) against atomic operations on the returned reference. Note that if all other
> +    ///   accesses are atomic, then this safety requirement is trivially fulfilled.
> +    ///
> +    /// [`LKMM`]: srctree/tools/memory-model
> +    ///
> +    /// # Examples
> +    ///
> +    /// Using [`Atomic::from_ptr()`] combined with [`Atomic::load()`] or [`Atomic::store()`] can
> +    /// achieve the same functionality as `READ_ONCE()`/`smp_load_acquire()` or
> +    /// `WRITE_ONCE()`/`smp_store_release()` on the C side:
> +    ///
> +    /// ```
> +    /// # use kernel::types::Opaque;
> +    /// use kernel::sync::atomic::{Atomic, Relaxed, Release};
> +    ///
> +    /// // Assume there is a C struct `foo`.
> +    /// mod cbindings {
> +    ///     #[repr(C)]
> +    ///     pub(crate) struct foo {
> +    ///         pub(crate) a: i32,
> +    ///         pub(crate) b: i32
> +    ///     }
> +    /// }
> +    ///
> +    /// let tmp = Opaque::new(cbindings::foo { a: 1, b: 2 });
> +    ///
> +    /// // struct foo *foo_ptr = ..;
> +    /// let foo_ptr = tmp.get();
> +    ///
> +    /// // SAFETY: `foo_ptr` is valid, and `.a` is in bounds.
> +    /// let foo_a_ptr = unsafe { &raw mut (*foo_ptr).a };
> +    ///
> +    /// // a = READ_ONCE(foo_ptr->a);
> +    /// //
> +    /// // SAFETY: `foo_a_ptr` is valid for reads, and all other accesses to it are atomic, so no
> +    /// // data race.
> +    /// let a = unsafe { Atomic::from_ptr(foo_a_ptr) }.load(Relaxed);
> +    /// # assert_eq!(a, 1);
> +    ///
> +    /// // smp_store_release(&foo_ptr->a, 2);
> +    /// //
> +    /// // SAFETY: `foo_a_ptr` is valid for writes, and all other accesses on it are atomic, so
> +    /// // no data race.
> +    /// unsafe { Atomic::from_ptr(foo_a_ptr) }.store(2, Release);
> +    /// ```
> +    ///
> +    /// However, this should be only used when communicating with C side or manipulating a C
> +    /// struct.

This sentence should be above the `Safety` section.

> +    pub unsafe fn from_ptr<'a>(ptr: *mut T) -> &'a Self
> +    where
> +        T: Sync,
> +    {
> +        // CAST: `T` is transparent to `Atomic<T>`.
> +        // SAFETY: Per function safety requirement, `ptr` is a valid pointer and the object will
> +        // live long enough. It's safe to return a `&Atomic<T>` because function safety requirement
> +        // guarantees other accesses won't cause data races.
> +        unsafe { &*ptr.cast::<Self>() }
> +    }
> +
> +    /// Returns a pointer to the underlying atomic `T`.
> +    ///
> +    /// Note that use of the returned pointer must not cause data races defined by [`LKMM`].
> +    ///
> +    /// # Guarantees
> +    ///
> +    /// The returned pointer is properly aligned (i.e. aligned to [`align_of::<T>()`])

You really only need this guarantee? Not validity etc?

> +    ///
> +    /// [`LKMM`]: srctree/tools/memory-model
> +    /// [`align_of::<T>()`]: core::mem::align_of
> +    pub const fn as_ptr(&self) -> *mut T {
> +        // GUARANTEE: `self.0` has the same alignment of `T`.
> +        self.0.get()
> +    }
> +
> +    /// Returns a mutable reference to the underlying atomic `T`.
> +    ///
> +    /// This is safe because the mutable reference of the atomic `T` guarantees the exclusive

s/the exclusive/exclusive/

---
Cheers,
Benno

> +    /// access.
> +    pub fn get_mut(&mut self) -> &mut T {
> +        // SAFETY: `self.as_ptr()` is a valid pointer to `T`. `&mut self` guarantees the exclusive
> +        // access, so it's safe to reborrow mutably.
> +        unsafe { &mut *self.as_ptr() }
> +    }
> +}

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v7 5/9] rust: sync: atomic: Add atomic {cmp,}xchg operations
  2025-07-14  5:36 ` [PATCH v7 5/9] rust: sync: atomic: Add atomic {cmp,}xchg operations Boqun Feng
@ 2025-07-14 10:56   ` Benno Lossin
  0 siblings, 0 replies; 50+ messages in thread
From: Benno Lossin @ 2025-07-14 10:56 UTC (permalink / raw)
  To: Boqun Feng, linux-kernel, rust-for-linux, lkmm, linux-arch
  Cc: Miguel Ojeda, Alex Gaynor, Gary Guo, Björn Roy Baron,
	Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich,
	Will Deacon, Peter Zijlstra, Mark Rutland, Wedson Almeida Filho,
	Viresh Kumar, Lyude Paul, Ingo Molnar, Mitchell Levy,
	Paul E. McKenney, Greg Kroah-Hartman, Linus Torvalds,
	Thomas Gleixner, Alan Stern

On Mon Jul 14, 2025 at 7:36 AM CEST, Boqun Feng wrote:
> xchg() and cmpxchg() are basic operations on atomics. Provide these based
> on C APIs.
>
> Note that cmpxchg() uses a function signature similar to
> compare_exchange() in Rust std: it returns a `Result`, where `Ok(old)`
> means the operation succeeded and `Err(old)` means it failed.
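
Matching `compare_exchange()` makes this easy to use from the caller side;
a quick sketch of what that looks like (assuming an `Atomic<i32>` and the
ordering types from patch 3):

    let x = Atomic::new(42);

    // `x` is 42, so 43 is stored and `Ok(42)` is returned.
    assert_eq!(Ok(42), x.cmpxchg(42, 43, Full));

    // `x` is now 43, so nothing is stored and `Err(43)` is returned.
    assert_eq!(Err(43), x.cmpxchg(42, 44, Relaxed));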
>
> Reviewed-by: Alice Ryhl <aliceryhl@google.com>
> Signed-off-by: Boqun Feng <boqun.feng@gmail.com>

Looks good except for the naming disputes :) So

Reviewed-by: Benno Lossin <lossin@kernel.org>

---
Cheers,
Benno

> ---
>  rust/kernel/sync/atomic/generic.rs | 181 ++++++++++++++++++++++++++++-
>  1 file changed, 180 insertions(+), 1 deletion(-)

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v7 9/9] rust: sync: atomic: Add Atomic<{usize,isize}>
  2025-07-14  5:36 ` [PATCH v7 9/9] rust: sync: atomic: Add Atomic<{usize,isize}> Boqun Feng
@ 2025-07-14 11:06   ` Benno Lossin
  2025-07-14 13:47     ` Boqun Feng
  0 siblings, 1 reply; 50+ messages in thread
From: Benno Lossin @ 2025-07-14 11:06 UTC (permalink / raw)
  To: Boqun Feng, linux-kernel, rust-for-linux, lkmm, linux-arch
  Cc: Miguel Ojeda, Alex Gaynor, Gary Guo, Björn Roy Baron,
	Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich,
	Will Deacon, Peter Zijlstra, Mark Rutland, Wedson Almeida Filho,
	Viresh Kumar, Lyude Paul, Ingo Molnar, Mitchell Levy,
	Paul E. McKenney, Greg Kroah-Hartman, Linus Torvalds,
	Thomas Gleixner, Alan Stern

On Mon Jul 14, 2025 at 7:36 AM CEST, Boqun Feng wrote:
> +// Defines an internal type that always maps to the integer type which has the same size and
> +// alignment as `isize` and `usize`, and `isize` and `usize` are always bi-directionally transmutable to
> +// `isize_atomic_repr`, which also always implements `AtomicImpl`.
> +#[allow(non_camel_case_types)]
> +#[cfg(not(CONFIG_64BIT))]
> +type isize_atomic_repr = i32;
> +#[allow(non_camel_case_types)]
> +#[cfg(CONFIG_64BIT)]
> +type isize_atomic_repr = i64;
> +
> +// Ensure size and alignment requirements are checked.
> +crate::static_assert!(core::mem::size_of::<isize>() == core::mem::size_of::<isize_atomic_repr>());
> +crate::static_assert!(core::mem::align_of::<isize>() == core::mem::align_of::<isize_atomic_repr>());
> +crate::static_assert!(core::mem::size_of::<usize>() == core::mem::size_of::<isize_atomic_repr>());
> +crate::static_assert!(core::mem::align_of::<usize>() == core::mem::align_of::<isize_atomic_repr>());

This is fine for now, but I would prefer for this to go into an
`assumptions` module like Miguel proposed some time ago.
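
I.e. something in this direction (a very rough sketch of the idea, exact
contents TBD):

    // In a central `assumptions` module:
    crate::static_assert!(
        core::mem::size_of::<usize>() == core::mem::size_of::<isize>()
    );
    crate::static_assert!(
        core::mem::align_of::<usize>() == core::mem::align_of::<isize>()
    );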

---
Cheers,
Benno

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v7 2/9] rust: sync: Add basic atomic operation mapping framework
  2025-07-14 10:03   ` Benno Lossin
@ 2025-07-14 13:42     ` Boqun Feng
  2025-07-14 15:00       ` Benno Lossin
  0 siblings, 1 reply; 50+ messages in thread
From: Boqun Feng @ 2025-07-14 13:42 UTC (permalink / raw)
  To: Benno Lossin
  Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
	Alex Gaynor, Gary Guo, Björn Roy Baron, Andreas Hindborg,
	Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
	Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
	Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
	Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner, Alan Stern

On Mon, Jul 14, 2025 at 12:03:11PM +0200, Benno Lossin wrote:
[...]
> > +declare_and_impl_atomic_methods!(
> > +    /// Basic atomic operations
> > +    pub trait AtomicHasBasicOps {
> 
> I think we should drop the `Has` from the names. So this one can just be
> `AtomicBasicOps`. Or how about `BasicAtomic`, or `AtomicBase`?
> 

Actually, given your feedback to `ordering::Any` vs `core::any::Any`,
I think it makes more sense to keep the current names ;-) These are only
trait names to describe what operations an `AtomicImpl` has, and they
should be used only in atomic mod, so naming them a bit longer is not a
problem. And by doing so, we free the better names `BasicAtomic` or
`AtomicBase` for other future usages.

Best I could do is `AtomicBasicOps` or `HasAtomicBasicOps`.

> > +        /// Atomic read (load).
> > +        ///
> > +        /// # Safety
> > +        /// - `ptr` is aligned to [`align_of::<Self>()`].
> > +        /// - `ptr` is valid for reads.
> > +        ///
> > +        /// [`align_of::<Self>()`]: core::mem::align_of
> > +        unsafe fn read[acquire](ptr: *mut Self) -> Self {
> > +            bindings::#call(ptr.cast())
> > +        }
> > +
> > +        /// Atomic set (store).
> > +        ///
> > +        /// # Safety
> > +        /// - `ptr` is aligned to [`align_of::<Self>()`].
> > +        /// - `ptr` is valid for writes.
> > +        ///
> > +        /// [`align_of::<Self>()`]: core::mem::align_of
> > +        unsafe fn set[release](ptr: *mut Self, v: Self) {
> > +            bindings::#call(ptr.cast(), v)
> > +        }
> > +    }
> > +);
> > +
> > +declare_and_impl_atomic_methods!(
> > +    /// Exchange and compare-and-exchange atomic operations
> > +    pub trait AtomicHasXchgOps {
> 
> Same here `AtomicXchgOps` or `AtomicExchangeOps` or `AtomicExchange`?
> (I would prefer to not abbreviate it to `Xchg`)
> 

The "Xchg" -> "Exchange" part seems fine to me.

> > +        /// Atomic exchange.
> > +        ///
> > +        /// Atomically updates `*ptr` to `v` and returns the old value.
> > +        ///
> > +        /// # Safety
> > +        /// - `ptr` is aligned to [`align_of::<Self>()`].
> > +        /// - `ptr` is valid for reads and writes.
> > +        ///
> > +        /// [`align_of::<Self>()`]: core::mem::align_of
> > +        unsafe fn xchg[acquire, release, relaxed](ptr: *mut Self, v: Self) -> Self {
> > +            bindings::#call(ptr.cast(), v)
> > +        }
> > +
> > +        /// Atomic compare and exchange.
> > +        ///
> > +        /// If `*ptr` == `*old`, atomically updates `*ptr` to `new`. Otherwise, `*ptr` is not
> > +        /// modified, and `*old` is updated to the current value of `*ptr`.
> > +        ///
> > +        /// Returns `true` if the update of `*ptr` occurred, `false` otherwise.
> > +        ///
> > +        /// # Safety
> > +        /// - `ptr` is aligned to [`align_of::<Self>()`].
> > +        /// - `ptr` is valid for reads and writes.
> > +        /// - `old` is aligned to [`align_of::<Self>()`].
> > +        /// - `old` is valid for reads and writes.
> > +        ///
> > +        /// [`align_of::<Self>()`]: core::mem::align_of
> > +        unsafe fn try_cmpxchg[acquire, release, relaxed](ptr: *mut Self, old: *mut Self, new: Self) -> bool {
> > +            bindings::#call(ptr.cast(), old, new)
> > +        }
> > +    }
> > +);
> > +
> > +declare_and_impl_atomic_methods!(
> > +    /// Atomic arithmetic operations
> > +    pub trait AtomicHasArithmeticOps {
> 
> Forgot to rename this one to `Add`? I think `AtomicAdd` sounds best for

No, because at the `AtomicImpl` level, it's easy to know whether a type
has all the arithmetic operations or none (also the `Delta` type should
be known). But I don't have opinions on renaming it to `AtomicAddOps` or
`HasAtomicAddOps`.

Regards,
Boqun

> this one.
> 
> ---
> Cheers,
> Benno
> 
> > +        /// Atomic add (wrapping).
> > +        ///
> > +        /// Atomically updates `*ptr` to `(*ptr).wrapping_add(v)`.
> > +        ///
> > +        /// # Safety
> > +        /// - `ptr` is aligned to `align_of::<Self>()`.
> > +        /// - `ptr` is valid for reads and writes.
> > +        ///
> > +        /// [`align_of::<Self>()`]: core::mem::align_of
> > +        unsafe fn add[](ptr: *mut Self, v: Self::Delta) {
> > +            bindings::#call(v, ptr.cast())
> > +        }
> > +
> > +        /// Atomic fetch and add (wrapping).
> > +        ///
> > +        /// Atomically updates `*ptr` to `(*ptr).wrapping_add(v)`, and returns the value of `*ptr`
> > +        /// before the update.
> > +        ///
> > +        /// # Safety
> > +        /// - `ptr` is aligned to `align_of::<Self>()`.
> > +        /// - `ptr` is valid for reads and writes.
> > +        ///
> > +        /// [`align_of::<Self>()`]: core::mem::align_of
> > +        unsafe fn fetch_add[acquire, release, relaxed](ptr: *mut Self, v: Self::Delta) -> Self {
> > +            bindings::#call(v, ptr.cast())
> > +        }
> > +    }
> > +);
> 

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v7 9/9] rust: sync: atomic: Add Atomic<{usize,isize}>
  2025-07-14 11:06   ` Benno Lossin
@ 2025-07-14 13:47     ` Boqun Feng
  0 siblings, 0 replies; 50+ messages in thread
From: Boqun Feng @ 2025-07-14 13:47 UTC (permalink / raw)
  To: Benno Lossin
  Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
	Alex Gaynor, Gary Guo, Björn Roy Baron, Andreas Hindborg,
	Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
	Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
	Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
	Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner, Alan Stern

On Mon, Jul 14, 2025 at 01:06:08PM +0200, Benno Lossin wrote:
> On Mon Jul 14, 2025 at 7:36 AM CEST, Boqun Feng wrote:
> > +// Defines an internal type that always maps to the integer type which has the same size and
> > +// alignment as `isize` and `usize`, and `isize` and `usize` are always bi-directionally transmutable to
> > +// `isize_atomic_repr`, which also always implements `AtomicImpl`.
> > +#[allow(non_camel_case_types)]
> > +#[cfg(not(CONFIG_64BIT))]
> > +type isize_atomic_repr = i32;
> > +#[allow(non_camel_case_types)]
> > +#[cfg(CONFIG_64BIT)]
> > +type isize_atomic_repr = i64;
> > +
> > +// Ensure size and alignment requirements are checked.
> > +crate::static_assert!(core::mem::size_of::<isize>() == core::mem::size_of::<isize_atomic_repr>());
> > +crate::static_assert!(core::mem::align_of::<isize>() == core::mem::align_of::<isize_atomic_repr>());
> > +crate::static_assert!(core::mem::size_of::<usize>() == core::mem::size_of::<isize_atomic_repr>());
> > +crate::static_assert!(core::mem::align_of::<usize>() == core::mem::align_of::<isize_atomic_repr>());
> 
> This is fine for now, but I would prefer for this to go into an
> `assumptions` module like Miguel proposed some time ago.
> 

Well, sure. Also if `assumptions` or other core kernel mod can provide
a definition similar to `isize_atomic_repr`, I'm happy to use it.

Regards,
Boqun

> ---
> Cheers,
> Benno

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v7 4/9] rust: sync: atomic: Add generic atomics
  2025-07-14 10:30   ` Benno Lossin
@ 2025-07-14 14:21     ` Boqun Feng
  2025-07-14 14:30       ` Boqun Feng
                         ` (2 more replies)
  0 siblings, 3 replies; 50+ messages in thread
From: Boqun Feng @ 2025-07-14 14:21 UTC (permalink / raw)
  To: Benno Lossin
  Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
	Alex Gaynor, Gary Guo, Björn Roy Baron, Andreas Hindborg,
	Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
	Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
	Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
	Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner, Alan Stern

On Mon, Jul 14, 2025 at 12:30:12PM +0200, Benno Lossin wrote:
> On Mon Jul 14, 2025 at 7:36 AM CEST, Boqun Feng wrote:
> > To allow using LKMM atomics from Rust code, a generic `Atomic<T>` is
> > added; currently `T` needs to be Send + Copy because these are the
> > straightforward usages and all basic types support this.
> >
> > Implement `AllowAtomic` for `i32` and `i64`, and so far only basic
> > operations load() and store() are introduced.
> >
> > Reviewed-by: Alice Ryhl <aliceryhl@google.com>
> > Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> > ---
> >  rust/kernel/sync/atomic.rs         |  14 ++
> >  rust/kernel/sync/atomic/generic.rs | 285 +++++++++++++++++++++++++++++
> >  2 files changed, 299 insertions(+)
> >  create mode 100644 rust/kernel/sync/atomic/generic.rs
> >
> > diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
> > index e80ac049f36b..c5193c1c90fe 100644
> > --- a/rust/kernel/sync/atomic.rs
> > +++ b/rust/kernel/sync/atomic.rs
> > @@ -16,7 +16,21 @@
> >  //!
> >  //! [`LKMM`]: srctree/tools/memory-model/
> >  
> > +pub mod generic;
> 
> Hmm, maybe just re-export the stuff? I don't think there's an advantage
> to having the generic module be public.
> 

If `generic` is not public, then in the kernel::sync::atomic page, it
won't show up, and there is no mention of struct `Atomic` either.

If I made it public and also re-export the `Atomic`, there would be a
"Re-export" section mentioning all the re-exports, so I will keep
`generic` unless you have some tricks that I don't know.

Also I feel it's a bit natural that `AllowAtomic` and `AllowAtomicAdd`
stay under `generic` (instead of re-export them at `atomic` mod level)
because they are about the generic part of `Atomic`, right?

> >  pub mod ops;
> >  pub mod ordering;
> >  
> > +pub use generic::Atomic;
> >  pub use ordering::{Acquire, Full, Relaxed, Release};
> > +
[...]
> > +/// A memory location which can be safely modified from multiple execution contexts.
> > +///
> > +/// This has the same size, alignment and bit validity as the underlying type `T`.
> 
> Let's also mention that this disables any niche optimizations (due to
> the unsafe cell).
> 

Done

> > +///
> > +/// The atomic operations are implemented in a way that is fully compatible with the [Linux Kernel
> > +/// Memory (Consistency) Model][LKMM], hence they should be modeled as the corresponding
> > +/// [`LKMM`][LKMM] atomic primitives. With the help of [`Atomic::from_ptr()`] and
> > +/// [`Atomic::as_ptr()`], this provides a way to interact with [C-side atomic operations]
> > +/// (including those without the `atomic` prefix, e.g. `READ_ONCE()`, `WRITE_ONCE()`,
> > +/// `smp_load_acquire()` and `smp_store_release()`).
> > +///
> > +/// [LKMM]: srctree/tools/memory-model/
> > +/// [C-side atomic operations]: srctree/Documentation/atomic_t.txt
> 
> Did you check that these links looks good in rustdoc?
> 

Yep.

> > +#[repr(transparent)]
> > +pub struct Atomic<T: AllowAtomic>(UnsafeCell<T>);
> > +
> > +// SAFETY: `Atomic<T>` is safe to share among execution contexts because all accesses are atomic.
> > +unsafe impl<T: AllowAtomic> Sync for Atomic<T> {}
> > +
> > +/// Types that support basic atomic operations.
> > +///
> > +/// # Round-trip transmutability
> > +///
> > +/// `T` is round-trip transmutable to `U` if and only if both of these properties hold:
> > +///
> > +/// - Any valid bit pattern for `T` is also a valid bit pattern for `U`.
> > +/// - Transmuting (e.g. using [`transmute()`]) a value of type `T` to `U` and then to `T` again
> > +///   yields a value that is in all aspects equivalent to the original value.
> > +///
> > +/// # Safety
> > +///
> > +/// - [`Self`] must have the same size and alignment as [`Self::Repr`].
> > +/// - [`Self`] must be [round-trip transmutable] to  [`Self::Repr`].
> > +///
> > +/// Note that this is more relaxed than requiring the bi-directional transmutability (i.e.
> > +/// [`transmute()`] is always sound between `U` to `T`) because of the support for atomic variables
> 
> s/ to / and /
> 

Fixed.

> > +/// over unit-only enums, see [Examples].
> > +///
> > +/// # Limitations
> > +///
> > +/// Because C primitives are used to implement the atomic operations, and a C function requires a
> > +/// valid object of a type to operate on (i.e. no `MaybeUninit<_>`), hence at the Rust <-> C
> > +/// surface, only types with no uninitialized bits can be passed. As a result, types like `(u8,
> 
> s/no uninitialized/initialized/
> 

hmm.. "with initialized bits" seems to me saying "it's OK as long as
some bits are initialized", how about "with all the bits initialized"?

> > +/// u16)` (a tuple with a `MaybeUninit` hole in it) are currently not supported. Note that
> 
> s/a tuple with a `MaybeUninit` hole in it/padding bytes are uninitialized/
> 

Done.

[...]
> > +
> > +#[inline(always)]
> > +const fn into_repr<T: AllowAtomic>(v: T) -> T::Repr {
> > +    // SAFETY: Per the safety requirement of `AllowAtomic`, the transmute operation is sound.
> 
> Please explain the concrete parts of the safety requirements that you
> are using here (i.e. round-trip transmutability).
> 

Done.

> > +    unsafe { core::mem::transmute_copy(&v) }
> > +}
> > +
> > +/// # Safety
> > +///
> > +/// `r` must be a valid bit pattern of `T`.
> > +#[inline(always)]
> > +const unsafe fn from_repr<T: AllowAtomic>(r: T::Repr) -> T {
> > +    // SAFETY: Per the safety requirement of the function, the transmute operation is sound.
> > +    unsafe { core::mem::transmute_copy(&r) }
> > +}
> > +
> > +impl<T: AllowAtomic> Atomic<T> {
> > +    /// Creates a new atomic `T`.
> > +    pub const fn new(v: T) -> Self {
> > +        Self(UnsafeCell::new(v))
> > +    }
> > +
> > +    /// Creates a reference to an atomic `T` from a pointer of `T`.
> > +    ///
> > +    /// # Safety
> > +    ///
> > +    /// - `ptr` is aligned to `align_of::<T>()`.
> > +    /// - `ptr` is valid for reads and writes for `'a`.
> > +    /// - For the duration of `'a`, other accesses to `*ptr` must not cause data races (defined
> > +    ///   by [`LKMM`]) against atomic operations on the returned reference. Note that if all other
> > +    ///   accesses are atomic, then this safety requirement is trivially fulfilled.
> > +    ///
> > +    /// [`LKMM`]: srctree/tools/memory-model
> > +    ///
> > +    /// # Examples
> > +    ///
> > +    /// Using [`Atomic::from_ptr()`] combined with [`Atomic::load()`] or [`Atomic::store()`] can
> > +    /// achieve the same functionality as `READ_ONCE()`/`smp_load_acquire()` or
> > +    /// `WRITE_ONCE()`/`smp_store_release()` on the C side:
> > +    ///
> > +    /// ```
> > +    /// # use kernel::types::Opaque;
> > +    /// use kernel::sync::atomic::{Atomic, Relaxed, Release};
> > +    ///
> > +    /// // Assume there is a C struct `foo`.
> > +    /// mod cbindings {
> > +    ///     #[repr(C)]
> > +    ///     pub(crate) struct foo {
> > +    ///         pub(crate) a: i32,
> > +    ///         pub(crate) b: i32
> > +    ///     }
> > +    /// }
> > +    ///
> > +    /// let tmp = Opaque::new(cbindings::foo { a: 1, b: 2 });
> > +    ///
> > +    /// // struct foo *foo_ptr = ..;
> > +    /// let foo_ptr = tmp.get();
> > +    ///
> > +    /// // SAFETY: `foo_ptr` is valid, and `.a` is in bounds.
> > +    /// let foo_a_ptr = unsafe { &raw mut (*foo_ptr).a };
> > +    ///
> > +    /// // a = READ_ONCE(foo_ptr->a);
> > +    /// //
> > +    /// // SAFETY: `foo_a_ptr` is valid for reads, and all other accesses on it are atomic, so no
> > +    /// // data race.
> > +    /// let a = unsafe { Atomic::from_ptr(foo_a_ptr) }.load(Relaxed);
> > +    /// # assert_eq!(a, 1);
> > +    ///
> > +    /// // smp_store_release(&foo_ptr->a, 2);
> > +    /// //
> > +    /// // SAFETY: `foo_a_ptr` is valid for writes, and all other accesses on it are atomic, so
> > +    /// // no data race.
> > +    /// unsafe { Atomic::from_ptr(foo_a_ptr) }.store(2, Release);
> > +    /// ```
> > +    ///
> > +    /// However, this should be only used when communicating with C side or manipulating a C
> > +    /// struct.
> 
> This sentence should be above the `Safety` section.
> 

Hmm.. why? This is further information about what the above
"Examples" section just mentioned?

> > +    pub unsafe fn from_ptr<'a>(ptr: *mut T) -> &'a Self
> > +    where
> > +        T: Sync,
> > +    {
> > +        // CAST: `T` is transparent to `Atomic<T>`.
> > +        // SAFETY: Per function safety requirement, `ptr` is a valid pointer and the object will
> > +        // live long enough. It's safe to return a `&Atomic<T>` because function safety requirement
> > +        // guarantees other accesses won't cause data races.
> > +        unsafe { &*ptr.cast::<Self>() }
> > +    }
> > +
> > +    /// Returns a pointer to the underlying atomic `T`.
> > +    ///
> > +    /// Note that use of the returned pointer must not cause data races defined by [`LKMM`].
> > +    ///
> > +    /// # Guarantees
> > +    ///
> > +    /// The returned pointer is properly aligned (i.e. aligned to [`align_of::<T>()`])
> 
> You really only need this guarantee? Not validity etc?
> 

Nice find, I changed it to also guarantee the pointer is valid.
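
Maybe something like (sketch of the updated doc, wording not final):

    /// # Guarantees
    ///
    /// - The returned pointer is valid.
    /// - The returned pointer is properly aligned (i.e. aligned to
    ///   [`align_of::<T>()`]).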

> > +    ///
> > +    /// [`LKMM`]: srctree/tools/memory-model
> > +    /// [`align_of::<T>()`]: core::mem::align_of
> > +    pub const fn as_ptr(&self) -> *mut T {
> > +        // GUARANTEE: `self.0` has the same alignment of `T`.
> > +        self.0.get()
> > +    }
> > +
> > +    /// Returns a mutable reference to the underlying atomic `T`.
> > +    ///
> > +    /// This is safe because the mutable reference of the atomic `T` guarantees the exclusive
> 
> s/the exclusive/exclusive/
> 

Done.

Regards,
Boqun

> ---
> Cheers,
> Benno
> 
> > +    /// access.
> > +    pub fn get_mut(&mut self) -> &mut T {
> > +        // SAFETY: `self.as_ptr()` is a valid pointer to `T`. `&mut self` guarantees the exclusive
> > +        // access, so it's safe to reborrow mutably.
> > +        unsafe { &mut *self.as_ptr() }
> > +    }
> > +}

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v7 4/9] rust: sync: atomic: Add generic atomics
  2025-07-14 14:21     ` Boqun Feng
@ 2025-07-14 14:30       ` Boqun Feng
  2025-07-14 14:34       ` Miguel Ojeda
  2025-07-14 15:05       ` Benno Lossin
  2 siblings, 0 replies; 50+ messages in thread
From: Boqun Feng @ 2025-07-14 14:30 UTC (permalink / raw)
  To: Benno Lossin
  Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
	Alex Gaynor, Gary Guo, Björn Roy Baron, Andreas Hindborg,
	Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
	Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
	Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
	Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner, Alan Stern

On Mon, Jul 14, 2025 at 07:21:53AM -0700, Boqun Feng wrote:
> On Mon, Jul 14, 2025 at 12:30:12PM +0200, Benno Lossin wrote:
> > On Mon Jul 14, 2025 at 7:36 AM CEST, Boqun Feng wrote:
> > > To allow using LKMM atomics from Rust code, a generic `Atomic<T>` is
> > > added; currently `T` needs to be Send + Copy because these are the
> > > straightforward usages and all basic types support this.
> > >
> > > Implement `AllowAtomic` for `i32` and `i64`, and so far only basic
> > > operations load() and store() are introduced.
> > >
> > > Reviewed-by: Alice Ryhl <aliceryhl@google.com>
> > > Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> > > ---
> > >  rust/kernel/sync/atomic.rs         |  14 ++
> > >  rust/kernel/sync/atomic/generic.rs | 285 +++++++++++++++++++++++++++++
> > >  2 files changed, 299 insertions(+)
> > >  create mode 100644 rust/kernel/sync/atomic/generic.rs
> > >
> > > diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
> > > index e80ac049f36b..c5193c1c90fe 100644
> > > --- a/rust/kernel/sync/atomic.rs
> > > +++ b/rust/kernel/sync/atomic.rs
> > > @@ -16,7 +16,21 @@
> > >  //!
> > >  //! [`LKMM`]: srctree/tools/memory-model/
> > >  
> > > +pub mod generic;
> > 
> > Hmm, maybe just re-export the stuff? I don't think there's an advantage
> > to having the generic module be public.
> > 
> 
> If `generic` is not public, then in the kernel::sync::atomic page, it

I meant the rustdoc of `kernel::sync::atomic` page.

Regards,
Boqun

> won't show up, and there is no mention of struct `Atomic` either.
> 
> If I made it public and also re-export the `Atomic`, there would be a
> "Re-export" section mentioning all the re-exports, so I will keep
> `generic` unless you have some tricks that I don't know.
> 
> Also I feel it's a bit natural that `AllowAtomic` and `AllowAtomicAdd`
> stay under `generic` (instead of re-export them at `atomic` mod level)
> because they are about the generic part of `Atomic`, right?
> 
[...]

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v7 4/9] rust: sync: atomic: Add generic atomics
  2025-07-14 14:21     ` Boqun Feng
  2025-07-14 14:30       ` Boqun Feng
@ 2025-07-14 14:34       ` Miguel Ojeda
  2025-07-14 14:53         ` Boqun Feng
  2025-07-14 15:05       ` Benno Lossin
  2 siblings, 1 reply; 50+ messages in thread
From: Miguel Ojeda @ 2025-07-14 14:34 UTC (permalink / raw)
  To: Boqun Feng
  Cc: Benno Lossin, linux-kernel, rust-for-linux, lkmm, linux-arch,
	Miguel Ojeda, Alex Gaynor, Gary Guo, Björn Roy Baron,
	Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich,
	Will Deacon, Peter Zijlstra, Mark Rutland, Wedson Almeida Filho,
	Viresh Kumar, Lyude Paul, Ingo Molnar, Mitchell Levy,
	Paul E. McKenney, Greg Kroah-Hartman, Linus Torvalds,
	Thomas Gleixner, Alan Stern

On Mon, Jul 14, 2025 at 4:22 PM Boqun Feng <boqun.feng@gmail.com> wrote:
>
> Hmm.. why? This is further information about what the above
> "Examples" section just mentioned?

I think what Benno may be trying to point out is that it may be
confusing what "this" is referring to, i.e. is it referring to something
concrete about the example, or to `from_ptr` as a whole / in all cases?

Cheers,
Miguel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v7 4/9] rust: sync: atomic: Add generic atomics
  2025-07-14 14:34       ` Miguel Ojeda
@ 2025-07-14 14:53         ` Boqun Feng
  2025-07-14 15:16           ` Benno Lossin
  0 siblings, 1 reply; 50+ messages in thread
From: Boqun Feng @ 2025-07-14 14:53 UTC (permalink / raw)
  To: Miguel Ojeda
  Cc: Benno Lossin, linux-kernel, rust-for-linux, lkmm, linux-arch,
	Miguel Ojeda, Alex Gaynor, Gary Guo, Björn Roy Baron,
	Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich,
	Will Deacon, Peter Zijlstra, Mark Rutland, Wedson Almeida Filho,
	Viresh Kumar, Lyude Paul, Ingo Molnar, Mitchell Levy,
	Paul E. McKenney, Greg Kroah-Hartman, Linus Torvalds,
	Thomas Gleixner, Alan Stern

On Mon, Jul 14, 2025 at 04:34:39PM +0200, Miguel Ojeda wrote:
> On Mon, Jul 14, 2025 at 4:22 PM Boqun Feng <boqun.feng@gmail.com> wrote:
> >
> > Hmm.. why? This is further information about what the above
> > "Examples" section just mentioned?
> 
> I think what Benno may be trying to point out is that it may be
> confusing what "this" is referring to, i.e. is it referring to something
> concrete about the example, or to `from_ptr` as a whole / in all cases?
> 

Ok, how about I do this:

diff --git a/rust/kernel/sync/atomic/generic.rs b/rust/kernel/sync/atomic/generic.rs
index 26d9ff3f7c35..e40010394536 100644
--- a/rust/kernel/sync/atomic/generic.rs
+++ b/rust/kernel/sync/atomic/generic.rs
@@ -135,6 +135,9 @@ pub const fn new(v: T) -> Self {

     /// Creates a reference to an atomic `T` from a pointer of `T`.
     ///
+    /// This is usually used when communicating with the C side or manipulating a C struct, see
+    /// examples below.
+    ///
     /// # Safety
     ///
     /// - `ptr` is aligned to `align_of::<T>()`.
@@ -185,9 +188,6 @@ pub const fn new(v: T) -> Self {
     /// // no data race.
     /// unsafe { Atomic::from_ptr(foo_a_ptr) }.store(2, Release);
     /// ```
-    ///
-    /// However, this should be only used when communicating with C side or manipulating a C
-    /// struct.
     pub unsafe fn from_ptr<'a>(ptr: *mut T) -> &'a Self
     where
         T: Sync,

?

Regards,
Boqun

> Cheers,
> Miguel

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* Re: [PATCH v7 3/9] rust: sync: atomic: Add ordering annotation types
  2025-07-14 10:10   ` Benno Lossin
@ 2025-07-14 14:59     ` Boqun Feng
  2025-07-14 15:16       ` Benno Lossin
  0 siblings, 1 reply; 50+ messages in thread
From: Boqun Feng @ 2025-07-14 14:59 UTC (permalink / raw)
  To: Benno Lossin
  Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
	Alex Gaynor, Gary Guo, Björn Roy Baron, Andreas Hindborg,
	Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
	Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
	Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
	Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner, Alan Stern

On Mon, Jul 14, 2025 at 12:10:46PM +0200, Benno Lossin wrote:
> On Mon Jul 14, 2025 at 7:36 AM CEST, Boqun Feng wrote:
> > Preparation for atomic primitives. Instead of a suffix like _acquire, a
> > method parameter along with the corresponding generic parameter will be
> > used to specify the ordering of an atomic operation. For example,
> > atomic load() can be defined as:
> >
> > 	impl<T: ...> Atomic<T> {
> > 	    pub fn load<O: AcquireOrRelaxed>(&self, _o: O) -> T { ... }
> > 	}
> >
> > and acquire users would do:
> >
> > 	let r = x.load(Acquire);
> >
> > relaxed users:
> >
> > 	let r = x.load(Relaxed);
> >
> > doing the following:
> >
> > 	let r = x.load(Release);
> >
> > will cause a compiler error.
> >
> > Compared to suffixes, it's easier to tell what ordering variants an
> > operation has, and it also makes it easier to unify the implementation of
> > all ordering variants in one method via generics. The `TYPE` associated
> > const is for generic functions to pick up the particular implementation
> > specified by an ordering annotation.
> >
> > Reviewed-by: Alice Ryhl <aliceryhl@google.com>
> > Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> > ---
> > Benno, please take a good look and see if you want to provide your Reviewed-by
> > for this one. I didn't apply your Reviewed-by because I used
> > `ordering::Any` instead of `AnyOrdering`, I think you're Ok with it [1],
> > but I could be wrong. Thanks!
> >
> > [1]: https://lore.kernel.org/rust-for-linux/DB8M91D7KIT4.14W69YK7108ND@kernel.org/
> 
> > +/// The trait bound for annotating operations that support any ordering.
> > +pub trait Any: internal::Sealed {
> 
> How about we just name this `Ordering`? Because that's what it is :)
> 

Seems OK to me, I then also followed Gary's suggestion:

	https://lore.kernel.org/rust-for-linux/20250621121842.0c3ca452.gary@garyguo.net/

and dropped `RelaxedOnly` trait.

> That sadly means you can't do
> 
>     fn foo<Ordering: Ordering>() {}
>            --------  ^^^^^^^^ not a trait
>            |
>            found this type parameter
> 
> But you can still do
> 
>     fn foo<O: Ordering>(_: O) {}
> 
> If we don't have the ordering module public and instead re-export from

Keeping ordering mod public helps rustdoc readers to find the module and
read the module documentation (which is the best place to explain each
ordering), and also I made `Relaxed`, `Acquire`, `Release` and `Full`
refer to the module documentation in their doc, making `ordering` mod
private would cause rustdoc issues.

Regards,
Boqun

> atomic, you could also write:
> 
>     fn foo<Ordering: atomic::Ordering>(_: Ordering) {}
> 
> If you want it to be extra clear. What do you think?
> 
> ---
> Cheers,
> Benno
> 
> > +    /// Describes the exact memory ordering.
> > +    const TYPE: OrderingType;
> > +}

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v7 2/9] rust: sync: Add basic atomic operation mapping framework
  2025-07-14 13:42     ` Boqun Feng
@ 2025-07-14 15:00       ` Benno Lossin
  2025-07-14 15:34         ` Boqun Feng
  0 siblings, 1 reply; 50+ messages in thread
From: Benno Lossin @ 2025-07-14 15:00 UTC (permalink / raw)
  To: Boqun Feng
  Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
	Alex Gaynor, Gary Guo, Björn Roy Baron, Andreas Hindborg,
	Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
	Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
	Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
	Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner, Alan Stern

On Mon Jul 14, 2025 at 3:42 PM CEST, Boqun Feng wrote:
> On Mon, Jul 14, 2025 at 12:03:11PM +0200, Benno Lossin wrote:
> [...]
>> > +declare_and_impl_atomic_methods!(
>> > +    /// Basic atomic operations
>> > +    pub trait AtomicHasBasicOps {
>> 
>> I think we should drop the `Has` from the names. So this one can just be
>> `AtomicBasicOps`. Or how about `BasicAtomic`, or `AtomicBase`?
>> 
>
> Actually, given your feedback to `ordering::Any` vs `core::any::Any`,
> I think it makes more sense to keep the current names ;-) These are only
> trait names to describe what operations an `AtomicImpl` has, and they
> should be used only in atomic mod, so naming them a bit longer is not a
> problem. And by doing so, we free the better names `BasicAtomic` or
> `AtomicBase` for other future usages.
>
> Best I could do is `AtomicBasicOps` or `HasAtomicBasicOps`.

But you are already in the `ops` namespace, so by your argument you
could just use `ops::BasicAtomic` :)

I'm fine with `AtomicBasicOps`.

>> > +declare_and_impl_atomic_methods!(
>> > +    /// Atomic arithmetic operations
>> > +    pub trait AtomicHasArithmeticOps {
>> 
>> Forgot to rename this one to `Add`? I think `AtomicAdd` sounds best for
>
> No, because at the `AtomicImpl` level, it's easy to know whether a type
> has all the arithmetic operations or none (also the `Delta` type should
> be known). But I don't have opinions on renaming it to `AtomicAddOps` or
> `HasAtomicAddOps`.

Ahh right yeah this makes sense. So let's use `AtomicArithmeticOps` if
you insist on the `Ops` part.

---
Cheers,
Benno

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v7 4/9] rust: sync: atomic: Add generic atomics
  2025-07-14 14:21     ` Boqun Feng
  2025-07-14 14:30       ` Boqun Feng
  2025-07-14 14:34       ` Miguel Ojeda
@ 2025-07-14 15:05       ` Benno Lossin
  2025-07-14 15:32         ` Boqun Feng
  2 siblings, 1 reply; 50+ messages in thread
From: Benno Lossin @ 2025-07-14 15:05 UTC (permalink / raw)
  To: Boqun Feng
  Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
	Alex Gaynor, Gary Guo, Björn Roy Baron, Andreas Hindborg,
	Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
	Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
	Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
	Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner, Alan Stern

On Mon Jul 14, 2025 at 4:21 PM CEST, Boqun Feng wrote:
> On Mon, Jul 14, 2025 at 12:30:12PM +0200, Benno Lossin wrote:
>> On Mon Jul 14, 2025 at 7:36 AM CEST, Boqun Feng wrote:
>> > To allow using LKMM atomics from Rust code, a generic `Atomic<T>` is
>> > added; currently `T` needs to be Send + Copy because these are the
>> > straightforward usages and all basic types support this.
>> >
>> > Implement `AllowAtomic` for `i32` and `i64`, and so far only basic
>> > operations load() and store() are introduced.
>> >
>> > Reviewed-by: Alice Ryhl <aliceryhl@google.com>
>> > Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
>> > ---
>> >  rust/kernel/sync/atomic.rs         |  14 ++
>> >  rust/kernel/sync/atomic/generic.rs | 285 +++++++++++++++++++++++++++++
>> >  2 files changed, 299 insertions(+)
>> >  create mode 100644 rust/kernel/sync/atomic/generic.rs
>> >
>> > diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
>> > index e80ac049f36b..c5193c1c90fe 100644
>> > --- a/rust/kernel/sync/atomic.rs
>> > +++ b/rust/kernel/sync/atomic.rs
>> > @@ -16,7 +16,21 @@
>> >  //!
>> >  //! [`LKMM`]: srctree/tools/memory-model/
>> >  
>> > +pub mod generic;
>> 
>> Hmm, maybe just re-export the stuff? I don't think there's an advantage
>> to having the generic module be public.
>> 
>
> If `generic` is not public, then in the kernel::sync::atomic page, it
> won't show up, and there is no mention of struct `Atomic` either.
>
> If I made it public and also re-export the `Atomic`, there would be a
> "Re-export" section mentioning all the re-exports, so I will keep
> `generic` unless you have some tricks that I don't know.

Just use `#[doc(inline)]` :)

    https://doc.rust-lang.org/rustdoc/write-documentation/the-doc-attribute.html#inline-and-no_inline
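
I.e. keep the module private and inline the items into the `atomic` page,
roughly (untested sketch):

    mod generic;

    #[doc(inline)]
    pub use generic::{AllowAtomic, Atomic};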

> Also I feel it's a bit natural that `AllowAtomic` and `AllowAtomicAdd`
> stay under `generic` (instead of re-export them at `atomic` mod level)
> because they are about the generic part of `Atomic`, right?

Why is that more natural? It only adds an extra path layer in any import
for atomics.

Unless you at some point want to add `concrete::Atomic<T>` etc, I would
just re-export them.

>> > +/// The atomic operations are implemented in a way that is fully compatible with the [Linux Kernel
>> > +/// Memory (Consistency) Model][LKMM], hence they should be modeled as the corresponding
>> > +/// [`LKMM`][LKMM] atomic primitives. With the help of [`Atomic::from_ptr()`] and
>> > +/// [`Atomic::as_ptr()`], this provides a way to interact with [C-side atomic operations]
>> > +/// (including those without the `atomic` prefix, e.g. `READ_ONCE()`, `WRITE_ONCE()`,
>> > +/// `smp_load_acquire()` and `smp_store_release()`).
>> > +///
>> > +/// [LKMM]: srctree/tools/memory-model/
>> > +/// [C-side atomic operations]: srctree/Documentation/atomic_t.txt
>> 
>> Did you check that these links looks good in rustdoc?
>> 
>
> Yep.

Nice :)

>> > +/// over unit-only enums, see [Examples].
>> > +///
>> > +/// # Limitations
>> > +///
>> > +/// Because C primitives are used to implement the atomic operations, and a C function requires a
>> > +/// valid object of a type to operate on (i.e. no `MaybeUninit<_>`), hence at the Rust <-> C
>> > +/// surface, only types with no uninitialized bits can be passed. As a result, types like `(u8,
>> 
>> s/no uninitialized/initialized/
>> 
>
> hmm.. "with initialized bits" seems to me saying "it's OK as long as
> some bits are initialized", how about "with all the bits initialized"?

Sounds good. The double negation sounded a bit weird to me.

>> > +    /// However, this should be only used when communicating with C side or manipulating a C
>> > +    /// struct.
>> 
>> This sentence should be above the `Safety` section.
>> 
>
> Hmm.. why? This is further information about what the above
> "Examples" section just mentioned?

I thought "this" is referring to "this function", if not then please be
more concrete :)

---
Cheers,
Benno

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v7 3/9] rust: sync: atomic: Add ordering annotation types
  2025-07-14 14:59     ` Boqun Feng
@ 2025-07-14 15:16       ` Benno Lossin
  0 siblings, 0 replies; 50+ messages in thread
From: Benno Lossin @ 2025-07-14 15:16 UTC (permalink / raw)
  To: Boqun Feng
  Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
	Alex Gaynor, Gary Guo, Björn Roy Baron, Andreas Hindborg,
	Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
	Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
	Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
	Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner, Alan Stern

On Mon Jul 14, 2025 at 4:59 PM CEST, Boqun Feng wrote:
> On Mon, Jul 14, 2025 at 12:10:46PM +0200, Benno Lossin wrote:
>> On Mon Jul 14, 2025 at 7:36 AM CEST, Boqun Feng wrote:
>> > Preparation for atomic primitives. Instead of a suffix like _acquire, a
>> > method parameter along with the corresponding generic parameter will be
>> > used to specify the ordering of an atomic operation. For example,
>> > atomic load() can be defined as:
>> >
>> > 	impl<T: ...> Atomic<T> {
>> > 	    pub fn load<O: AcquireOrRelaxed>(&self, _o: O) -> T { ... }
>> > 	}
>> >
>> > and acquire users would do:
>> >
>> > 	let r = x.load(Acquire);
>> >
>> > relaxed users:
>> >
>> > 	let r = x.load(Relaxed);
>> >
>> > doing the following:
>> >
>> > 	let r = x.load(Release);
>> >
>> > will cause a compiler error.
>> >
>> > Compared to suffixes, it's easier to tell what ordering variants an
>> > operation has, and it also makes it easier to unify the implementation of
>> > all ordering variants in one method via generics. The `TYPE` associated
>> > const is for generic functions to pick up the particular implementation
>> > specified by an ordering annotation.
>> >
>> > Reviewed-by: Alice Ryhl <aliceryhl@google.com>
>> > Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
>> > ---
>> > Benno, please take a good look and see if you want to provide your Reviewed-by
>> > for this one. I didn't apply your Reviewed-by because I used
>> > `ordering::Any` instead of `AnyOrdering`, I think you're Ok with it [1],
>> > but I could be wrong. Thanks!
>> >
>> > [1]: https://lore.kernel.org/rust-for-linux/DB8M91D7KIT4.14W69YK7108ND@kernel.org/
>> 
>> > +/// The trait bound for annotating operations that support any ordering.
>> > +pub trait Any: internal::Sealed {
>> 
>> How about we just name this `Ordering`? Because that's what it is :)
>> 
>
> Seems OK to me, I then also followed Gary's suggestion:
>
> 	https://lore.kernel.org/rust-for-linux/20250621121842.0c3ca452.gary@garyguo.net/
>
> and dropped `RelaxedOnly` trait.

Sounds good.

>> That sadly means you can't do
>> 
>>     fn foo<Ordering: Ordering>() {}
>>            --------  ^^^^^^^^ not a trait
>>            |
>>            found this type parameter
>> 
>> But you can still do
>> 
>>     fn foo<O: Ordering>(_: O) {}
>> 
>> If we don't have the ordering module public and instead re-export from
>
> Keeping ordering mod public helps rustdoc readers to find the module and
> read the module documentation (which is the best place to explain each
> ordering), and also I made `Relaxed`, `Acquire`, `Release` and `Full`
> refer to the module documentation in their doc, making `ordering` mod
> private would cause rustdoc issues.

You could move those docs to the `Ordering` trait :) But I think having
an ordering module is fine.

---
Cheers,
Benno

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v7 4/9] rust: sync: atomic: Add generic atomics
  2025-07-14 14:53         ` Boqun Feng
@ 2025-07-14 15:16           ` Benno Lossin
  0 siblings, 0 replies; 50+ messages in thread
From: Benno Lossin @ 2025-07-14 15:16 UTC (permalink / raw)
  To: Boqun Feng, Miguel Ojeda
  Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
	Alex Gaynor, Gary Guo, Björn Roy Baron, Andreas Hindborg,
	Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
	Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
	Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
	Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner, Alan Stern

On Mon Jul 14, 2025 at 4:53 PM CEST, Boqun Feng wrote:
> On Mon, Jul 14, 2025 at 04:34:39PM +0200, Miguel Ojeda wrote:
>> On Mon, Jul 14, 2025 at 4:22 PM Boqun Feng <boqun.feng@gmail.com> wrote:
>> >
>> > Hmm.. why? This is further information about what the above
>> > "Examples" section just mentioned?
>> 
>> I think what Benno may be trying to point out is that it may be
>> confusing what "this" is referring to, i.e. is it referring to something
>> concrete about the example, or to `from_ptr` as a whole / in all cases?
>> 
>
> Ok, how about I do this:
>
> diff --git a/rust/kernel/sync/atomic/generic.rs b/rust/kernel/sync/atomic/generic.rs
> index 26d9ff3f7c35..e40010394536 100644
> --- a/rust/kernel/sync/atomic/generic.rs
> +++ b/rust/kernel/sync/atomic/generic.rs
> @@ -135,6 +135,9 @@ pub const fn new(v: T) -> Self {
>
>      /// Creates a reference to an atomic `T` from a pointer of `T`.
>      ///
> +    /// This is usually used when communicating with the C side or manipulating a C struct, see
> +    /// examples below.

I don't think it's necessary to mention the examples below, but this is
what I had in mind.

---
Cheers,
Benno

> +    ///
>      /// # Safety
>      ///
>      /// - `ptr` is aligned to `align_of::<T>()`.
> @@ -185,9 +188,6 @@ pub const fn new(v: T) -> Self {
>      /// // no data race.
>      /// unsafe { Atomic::from_ptr(foo_a_ptr) }.store(2, Release);
>      /// ```
> -    ///
> -    /// However, this should be only used when communicating with C side or manipulating a C
> -    /// struct.
>      pub unsafe fn from_ptr<'a>(ptr: *mut T) -> &'a Self
>      where
>          T: Sync,
>
> ?
>
> Regards,
> Boqun
>
>> Cheers,
>> Miguel


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v7 4/9] rust: sync: atomic: Add generic atomics
  2025-07-14 15:05       ` Benno Lossin
@ 2025-07-14 15:32         ` Boqun Feng
  2025-07-15  9:36           ` Benno Lossin
  0 siblings, 1 reply; 50+ messages in thread
From: Boqun Feng @ 2025-07-14 15:32 UTC (permalink / raw)
  To: Benno Lossin
  Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
	Alex Gaynor, Gary Guo, Björn Roy Baron, Andreas Hindborg,
	Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
	Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
	Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
	Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner, Alan Stern

On Mon, Jul 14, 2025 at 05:05:40PM +0200, Benno Lossin wrote:
[...]
> >> >  //!
> >> >  //! [`LKMM`]: srctree/tools/memory-model/
> >> >  
> >> > +pub mod generic;
> >> 
> >> Hmm, maybe just re-export the stuff? I don't think there's an advantage
> >> to having the generic module be public.
> >> 
> >
> > If `generic` is not public, then in the kernel::sync::atomic page, it
> > won't show up, and there is no mention of struct `Atomic` either.
> >
> > If I made it public and also re-export the `Atomic`, there would be a
> > "Re-export" section mentioning all the re-exports, so I will keep
> > `generic` unless you have some tricks that I don't know.
> 
> Just use `#[doc(inline)]` :)
> 
>     https://doc.rust-lang.org/rustdoc/write-documentation/the-doc-attribute.html#inline-and-no_inline
> 
> > Also I feel it's a bit natural that `AllowAtomic` and `AllowAtomicAdd`
> > stay under `generic` (instead of re-export them at `atomic` mod level)
> > because they are about the generic part of `Atomic`, right?
> 
> Why is that more natural? It only adds an extra path layer in any import
> for atomics.
> 

Exactly, users need to go through extra steps if they want to use the
"generic" part of the atomic, and I think that makes user more aware of
what they are essentially doing:

- If you want to use the predefined types for atomic, just

  use kernel::sync::atomic::Atomic;

  and just operate on an `Atomic<_>`.

- If you want to bring your own type for atomic operations, you need to

  use kernel::sync::atomic::generic::AllowAtomic;

  (essentially you go into the "generic" part of the atomic)

  and provide your own implementation for `AllowAtomic` and then you
  could use it for your own type.
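
For example (a sketch, reusing the unit-only enum pattern from the
`AllowAtomic` documentation in patch 4):

  use kernel::sync::atomic::generic::AllowAtomic;

  #[derive(Clone, Copy, PartialEq, Eq)]
  #[repr(i32)]
  enum MyState {
      Off = 0,
      On = 1,
  }

  // SAFETY: `MyState` has the same size and alignment as `i32`, and it is
  // round-trip transmutable to `i32`.
  unsafe impl AllowAtomic for MyState {
      type Repr = i32;
  }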

I feel it's natural because for extra features you fetch more modules
in.

Regards,
Boqun

> Unless you at some point want to add `concrete::Atomic<T>` etc, I would
> just re-export them.
> 
[...]

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v7 2/9] rust: sync: Add basic atomic operation mapping framework
  2025-07-14 15:00       ` Benno Lossin
@ 2025-07-14 15:34         ` Boqun Feng
  0 siblings, 0 replies; 50+ messages in thread
From: Boqun Feng @ 2025-07-14 15:34 UTC (permalink / raw)
  To: Benno Lossin
  Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
	Alex Gaynor, Gary Guo, Björn Roy Baron, Andreas Hindborg,
	Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
	Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
	Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
	Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner, Alan Stern

On Mon, Jul 14, 2025 at 05:00:56PM +0200, Benno Lossin wrote:
> On Mon Jul 14, 2025 at 3:42 PM CEST, Boqun Feng wrote:
> > On Mon, Jul 14, 2025 at 12:03:11PM +0200, Benno Lossin wrote:
> > [...]
> >> > +declare_and_impl_atomic_methods!(
> >> > +    /// Basic atomic operations
> >> > +    pub trait AtomicHasBasicOps {
> >> 
> >> I think we should drop the `Has` from the names. So this one can just be
> >> `AtomicBasicOps`. Or how about `BasicAtomic`, or `AtomicBase`?
> >> 
> >
> > Actually, given your feedback to `ordering::Any` vs `core::any::Any`,
> > I think it makes more sense to keep the current names ;-) These are only
> > trait names to describe what operations an `AtomicImpl` has, and they
> > should be used only in atomic mod, so naming them a bit longer is not a
> > problem. And by doing so, we free the better names `BasicAtomic` or
> > `AtomicBase` for other future usages.
> >
> > Best I could do is `AtomicBasicOps` or `HasAtomicBasicOps`.
> 
> But you are already in the `ops` namespace, so by your argument you
> could just use `ops::BasicAtomic` :)
> 
> I'm fine with `AtomicBasicOps`.
> 
> >> > +declare_and_impl_atomic_methods!(
> >> > +    /// Atomic arithmetic operations
> >> > +    pub trait AtomicHasArithmeticOps {
> >> 
> >> Forgot to rename this one to `Add`? I think `AtomicAdd` sounds best for
> >
> > No, because at the `AtomicImpl` level, it's easy to know whether a type
> > has all the arithmetic operations or none (also the `Delta` type should
> > be known). But I don't have opinions on renaming it to `AtomicAddOps` or
> > `HasAtomicAddOps`.
> 
> Ahh right yeah this makes sense. So let's use `AtomicArithmeticOps` if
> you insist on the `Ops` part.
> 

Done with AtomicBasicOps AtomicExchangeOps and AtomicArithmeticOps ;-)
Thank you!

Regards,
Boqun

> ---
> Cheers,
> Benno

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v7 4/9] rust: sync: atomic: Add generic atomics
  2025-07-14 15:32         ` Boqun Feng
@ 2025-07-15  9:36           ` Benno Lossin
  2025-07-15 13:14             ` Boqun Feng
  0 siblings, 1 reply; 50+ messages in thread
From: Benno Lossin @ 2025-07-15  9:36 UTC (permalink / raw)
  To: Boqun Feng
  Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
	Alex Gaynor, Gary Guo, Björn Roy Baron, Andreas Hindborg,
	Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
	Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
	Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
	Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner, Alan Stern

On Mon Jul 14, 2025 at 5:32 PM CEST, Boqun Feng wrote:
> On Mon, Jul 14, 2025 at 05:05:40PM +0200, Benno Lossin wrote:
> [...]
>> >> >  //!
>> >> >  //! [`LKMM`]: srctree/tools/memory-model/
>> >> >  
>> >> > +pub mod generic;
>> >> 
>> >> Hmm, maybe just re-export the stuff? I don't think there's an advantage
>> >> to having the generic module be public.
>> >> 
>> >
>> > If `generic` is not public, then in the kernel::sync::atomic page, it
>> > won't show up, and there is no mention of struct `Atomic` either.
>> >
>> > If I made it public and also re-export the `Atomic`, there would be a
>> > "Re-export" section mentioning all the re-exports, so I will keep
>> > `generic` unless you have some tricks that I don't know.
>> 
>> Just use `#[doc(inline)]` :)
>> 
>>     https://doc.rust-lang.org/rustdoc/write-documentation/the-doc-attribute.html#inline-and-no_inline
>> 
>> > Also I feel it's a bit natural that `AllowAtomic` and `AllowAtomicAdd`
>> > stay under `generic` (instead of re-export them at `atomic` mod level)
>> > because they are about the generic part of `Atomic`, right?
>> 
>> Why is that more natural? It only adds an extra path layer in any import
>> for atomics.
>> 
>
> Exactly, users need to go through extra steps if they want to use the
> "generic" part of the atomic, and I think that makes user more aware of
> what they are essentially doing:
>
> - If you want to use the predefined types for atomic, just
>
>   use kernel::sync::atomic::Atomic;
>
>   and just operate on an `Atomic<_>`.
>
> - If you want to bring your own type for atomic operations, you need to
>
>   use kernel::sync::atomic::generic::AllowAtomic;
>
>   (essentially you go into the "generic" part of the atomic)
>
>   and provide your own implementation for `AllowAtomic` and then you
>   could use it for your own type.
>
> I feel it's natural because for extra features you fetch more modules
> in.

The same would apply if you had to import `AllowAtomic` from atomic
directly? I don't really see the value in this.

---
Cheers,
Benno

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v7 6/9] rust: sync: atomic: Add the framework of arithmetic operations
  2025-07-14  5:36 ` [PATCH v7 6/9] rust: sync: atomic: Add the framework of arithmetic operations Boqun Feng
@ 2025-07-15 11:21   ` Benno Lossin
  2025-07-15 13:33     ` Boqun Feng
  0 siblings, 1 reply; 50+ messages in thread
From: Benno Lossin @ 2025-07-15 11:21 UTC (permalink / raw)
  To: Boqun Feng, linux-kernel, rust-for-linux, lkmm, linux-arch
  Cc: Miguel Ojeda, Alex Gaynor, Gary Guo, Björn Roy Baron,
	Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich,
	Will Deacon, Peter Zijlstra, Mark Rutland, Wedson Almeida Filho,
	Viresh Kumar, Lyude Paul, Ingo Molnar, Mitchell Levy,
	Paul E. McKenney, Greg Kroah-Hartman, Linus Torvalds,
	Thomas Gleixner, Alan Stern

On Mon Jul 14, 2025 at 7:36 AM CEST, Boqun Feng wrote:
> +/// Types that support atomic add operations.
> +///
> +/// # Safety
> +///
> +/// Wrapping adding any value of type `Self::Repr::Delta` obtained by [`Self::rhs_into_delta()`] to

I don't like "wrapping adding", either we use "`wrapping_add`" or we use
some other phrasing.

> +/// any value of type `Self::Repr` obtained through transmuting a value of type `Self` to must
> +/// yield a value with a bit pattern also valid for `Self`.
> +pub unsafe trait AllowAtomicAdd<Rhs = Self>: AllowAtomic {

Why `Allow*`? I think `AtomicAdd` is better?

> +    /// Converts `Rhs` into the `Delta` type of the atomic implementation.
> +    fn rhs_into_delta(rhs: Rhs) -> <Self::Repr as AtomicImpl>::Delta;
> +}
> +
>  impl<T: AllowAtomic> Atomic<T> {
>      /// Creates a new atomic `T`.
>      pub const fn new(v: T) -> Self {
> @@ -462,3 +474,100 @@ fn try_cmpxchg<Ordering: ordering::Any>(&self, old: &mut T, new: T, _: Ordering)
>          ret
>      }
>  }
> +
> +impl<T: AllowAtomic> Atomic<T>
> +where
> +    T::Repr: AtomicHasArithmeticOps,
> +{
> +    /// Atomic add.
> +    ///
> +    /// Atomically updates `*self` to `(*self).wrapping_add(v)`.
> +    ///
> +    /// # Examples
> +    ///
> +    /// ```
> +    /// use kernel::sync::atomic::{Atomic, Relaxed};
> +    ///
> +    /// let x = Atomic::new(42);
> +    ///
> +    /// assert_eq!(42, x.load(Relaxed));
> +    ///
> +    /// x.add(12, Relaxed);
> +    ///
> +    /// assert_eq!(54, x.load(Relaxed));
> +    /// ```
> +    #[inline(always)]
> +    pub fn add<Rhs, Ordering: ordering::RelaxedOnly>(&self, v: Rhs, _: Ordering)
> +    where
> +        T: AllowAtomicAdd<Rhs>,
> +    {
> +        let v = T::rhs_into_delta(v);
> +        // CAST: Per the safety requirement of `AllowAtomic`, a valid pointer of `T` is a valid
> +        // pointer of `T::Repr` for reads and valid for writes of values transmutable to `T`.
> +        let a = self.as_ptr().cast::<T::Repr>();
> +
> +        // `*self` remains valid after `atomic_add()` because of the safety requirement of
> +        // `AllowAtomicAdd`.

This part should be moved to the CAST comment above, since we're not
only writing a value transmuted from `T` into `*self`.

---
Cheers,
Benno

> +        //
> +        // SAFETY:
> +        // - `a` is aligned to `align_of::<T::Repr>()` because of the safety requirement of
> +        //   `AllowAtomic` and the guarantee of `Atomic::as_ptr()`.
> +        // - `a` is a valid pointer per the CAST justification above.
> +        unsafe {
> +            T::Repr::atomic_add(a, v);
> +        }
> +    }

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v7 4/9] rust: sync: atomic: Add generic atomics
  2025-07-15  9:36           ` Benno Lossin
@ 2025-07-15 13:14             ` Boqun Feng
  2025-07-15 15:35               ` Benno Lossin
  0 siblings, 1 reply; 50+ messages in thread
From: Boqun Feng @ 2025-07-15 13:14 UTC (permalink / raw)
  To: Benno Lossin
  Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
	Alex Gaynor, Gary Guo, Björn Roy Baron, Andreas Hindborg,
	Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
	Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
	Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
	Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner, Alan Stern

On Tue, Jul 15, 2025 at 11:36:49AM +0200, Benno Lossin wrote:
> On Mon Jul 14, 2025 at 5:32 PM CEST, Boqun Feng wrote:
> > On Mon, Jul 14, 2025 at 05:05:40PM +0200, Benno Lossin wrote:
> > [...]
> >> >> >  //!
> >> >> >  //! [`LKMM`]: srctree/tools/memory-model/
> >> >> >  
> >> >> > +pub mod generic;
> >> >> 
> >> >> Hmm, maybe just re-export the stuff? I don't think there's an advantage
> >> >> to having the generic module be public.
> >> >> 
> >> >
> >> > If `generic` is not public, then in the kernel::sync::atomic page, it
> >> > won't show up, and there is no mention of struct `Atomic` either.
> >> >
> >> > If I made it public and also re-export the `Atomic`, there would be a
> >> > "Re-export" section mentioning all the re-exports, so I will keep
> >> > `generic` unless you have some tricks that I don't know.
> >> 
> >> Just use `#[doc(inline)]` :)
> >> 
> >>     https://doc.rust-lang.org/rustdoc/write-documentation/the-doc-attribute.html#inline-and-no_inline
> >> 
> >> > Also I feel it's a bit natural that `AllowAtomic` and `AllowAtomicAdd`
> >> > stay under `generic` (instead of re-export them at `atomic` mod level)
> >> > because they are about the generic part of `Atomic`, right?
> >> 
> >> Why is that more natural? It only adds an extra path layer in any import
> >> for atomics.
> >> 
> >
> > Exactly, users need to go through extra steps if they want to use the
> > "generic" part of the atomic, and I think that makes user more aware of
> > what they are essentially doing:
> >
> > - If you want to use the predefined types for atomic, just
> >
> >   use kernel::sync::atomic::Atomic;
> >
> >   and just operate on an `Atomic<_>`.
> >
> > - If you want to bring your own type for atomic operations, you need to
> >
> >   use kernel::sync::atomic::generic::AllowAtomic;
> >
> >   (essentially you go into the "generic" part of the atomic)
> >
> >   and provide your own implementation for `AllowAtomic` and then you
> >   could use it for your own type.
> >
> > I feel it's natural because for extra features you pull in more
> > modules.
> 
> The same would apply if you had to import `AllowAtomic` from atomic
> directly? I don't really see the value in this.
> 

Because generic::AllowAtomic makes it clearer that the user is using the
generic part of the API, or that the user is extending the Atomic type
with the generic ability. Grouping functionality within a namespace is
basically what I want to achieve here. It's similar to why we put
`atomic` (and `lock`) under `sync`.

Regards,
Boqun

> ---
> Cheers,
> Benno

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v7 6/9] rust: sync: atomic: Add the framework of arithmetic operations
  2025-07-15 11:21   ` Benno Lossin
@ 2025-07-15 13:33     ` Boqun Feng
  2025-07-15 15:45       ` Benno Lossin
  0 siblings, 1 reply; 50+ messages in thread
From: Boqun Feng @ 2025-07-15 13:33 UTC (permalink / raw)
  To: Benno Lossin
  Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
	Alex Gaynor, Gary Guo, Björn Roy Baron, Andreas Hindborg,
	Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
	Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
	Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
	Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner, Alan Stern

On Tue, Jul 15, 2025 at 01:21:20PM +0200, Benno Lossin wrote:
> On Mon Jul 14, 2025 at 7:36 AM CEST, Boqun Feng wrote:
> > +/// Types that support atomic add operations.
> > +///
> > +/// # Safety
> > +///
> > +/// Wrapping adding any value of type `Self::Repr::Delta` obtained by [`Self::rhs_into_delta()`] to
> 
> I don't like "wrapping adding", either we use "`wrapping_add`" or we use
> some other phrasing.
> 

Let's use `wrapping_add` then.

    /// `wrapping_add` any value of type `Self::Repr::Delta` obtained by [`Self::rhs_into_delta()`] to
    /// any value of type `Self::Repr` obtained through transmuting a value of type `Self` must
    /// yield a value with a bit pattern also valid for `Self`.
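
To make that requirement concrete, here is a made-up example of a type
that would satisfy it (a sketch that elides anything else `AllowAtomic`
requires; `Jiffies` is purely illustrative, not part of this series):

    /// A transparent wrapper for which every `i32` bit pattern is valid.
    #[repr(transparent)]
    struct Jiffies(i32);

    // SAFETY: `Jiffies` has the same size and alignment as `i32`, and
    // every bit pattern of `i32` is a valid `Jiffies`.
    unsafe impl AllowAtomic for Jiffies {
        type Repr = i32;
    }

    // SAFETY: `wrapping_add` of any `i32` delta to any `i32` yields an
    // `i32`, and every `i32` bit pattern is a valid `Jiffies`, so the
    // result always has a bit pattern valid for `Self`.
    unsafe impl AllowAtomicAdd<i32> for Jiffies {
        fn rhs_into_delta(rhs: i32) -> i32 {
            rhs
        }
    }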

> > +pub unsafe trait AllowAtomicAdd<Rhs = Self>: AllowAtomic {
> 
> Why `Allow*`? I think `AtomicAdd` is better?
> 

To be consistent with `AllowAtomic` (the super trait), if we use
`AtomicAdd` here, should we change `AllowAtomic` to `AtomicBase`?

> > +    /// Converts `Rhs` into the `Delta` type of the atomic implementation.
> > +    fn rhs_into_delta(rhs: Rhs) -> <Self::Repr as AtomicImpl>::Delta;
> > +}
> > +
> >  impl<T: AllowAtomic> Atomic<T> {
> >      /// Creates a new atomic `T`.
> >      pub const fn new(v: T) -> Self {
> > @@ -462,3 +474,100 @@ fn try_cmpxchg<Ordering: ordering::Any>(&self, old: &mut T, new: T, _: Ordering)
> >          ret
> >      }
> >  }
> > +
> > +impl<T: AllowAtomic> Atomic<T>
> > +where
> > +    T::Repr: AtomicHasArithmeticOps,
> > +{
> > +    /// Atomic add.
> > +    ///
> > +    /// Atomically updates `*self` to `(*self).wrapping_add(v)`.
> > +    ///
> > +    /// # Examples
> > +    ///
> > +    /// ```
> > +    /// use kernel::sync::atomic::{Atomic, Relaxed};
> > +    ///
> > +    /// let x = Atomic::new(42);
> > +    ///
> > +    /// assert_eq!(42, x.load(Relaxed));
> > +    ///
> > +    /// x.add(12, Relaxed);
> > +    ///
> > +    /// assert_eq!(54, x.load(Relaxed));
> > +    /// ```
> > +    #[inline(always)]
> > +    pub fn add<Rhs, Ordering: ordering::RelaxedOnly>(&self, v: Rhs, _: Ordering)
> > +    where
> > +        T: AllowAtomicAdd<Rhs>,
> > +    {
> > +        let v = T::rhs_into_delta(v);
> > +        // CAST: Per the safety requirement of `AllowAtomic`, a valid pointer of `T` is a valid
> > +        // pointer of `T::Repr` for reads and valid for writes of values transmutable to `T`.
> > +        let a = self.as_ptr().cast::<T::Repr>();
> > +
> > +        // `*self` remains valid after `atomic_add()` because of the safety requirement of
> > +        // `AllowAtomicAdd`.
> 
> This part should be moved to the CAST comment above, since we're not
> only writing a value transmuted from `T` into `*self`.
> 

Hmm.. the CAST comment should explain why a pointer of `T` can be a
valid pointer of `T::Repr`, because the atomic_add() below is going to
read through the pointer and write a value back. The comment starting
with "`*self`" explains that the value written is a valid `T`, therefore
conceptually atomic_add() below writes a valid `T` in the form of
`T::Repr` into `a`.

Basically

// CAST
let a = ..

^ explains what `a` is valid for and why it's valid.

// `*self` remains

^ explains that we write a valid value to `a`.


So I don't think we need to move?

Regards,
Boqun

> ---
> Cheers,
> Benno
> 
> > +        //
> > +        // SAFETY:
> > +        // - `a` is aligned to `align_of::<T::Repr>()` because of the safety requirement of
> > +        //   `AllowAtomic` and the guarantee of `Atomic::as_ptr()`.
> > +        // - `a` is a valid pointer per the CAST justification above.
> > +        unsafe {
> > +            T::Repr::atomic_add(a, v);
> > +        }
> > +    }

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v7 4/9] rust: sync: atomic: Add generic atomics
  2025-07-15 13:14             ` Boqun Feng
@ 2025-07-15 15:35               ` Benno Lossin
  0 siblings, 0 replies; 50+ messages in thread
From: Benno Lossin @ 2025-07-15 15:35 UTC (permalink / raw)
  To: Boqun Feng
  Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
	Alex Gaynor, Gary Guo, Björn Roy Baron, Andreas Hindborg,
	Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
	Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
	Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
	Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner, Alan Stern

On Tue Jul 15, 2025 at 3:14 PM CEST, Boqun Feng wrote:
> On Tue, Jul 15, 2025 at 11:36:49AM +0200, Benno Lossin wrote:
>> On Mon Jul 14, 2025 at 5:32 PM CEST, Boqun Feng wrote:
>> > On Mon, Jul 14, 2025 at 05:05:40PM +0200, Benno Lossin wrote:
>> > [...]
>> >> >> >  //!
>> >> >> >  //! [`LKMM`]: srctree/tools/memory-model/
>> >> >> >  
>> >> >> > +pub mod generic;
>> >> >> 
>> >> >> Hmm, maybe just re-export the stuff? I don't think there's an advantage
>> >> >> to having the generic module be public.
>> >> >> 
>> >> >
>> >> > If `generic` is not public, then in the kernel::sync::atomic page, it
>> >> > won't show up, and there is no mention of struct `Atomic` either.
>> >> >
>> >> > If I made it public and also re-export the `Atomic`, there would be a
>> >> > "Re-export" section mentioning all the re-exports, so I will keep
>> >> > `generic` unless you have some tricks that I don't know.
>> >> 
>> >> Just use `#[doc(inline)]` :)
>> >> 
>> >>     https://doc.rust-lang.org/rustdoc/write-documentation/the-doc-attribute.html#inline-and-no_inline
>> >> 
>> >> > Also I feel it's a bit natural that `AllowAtomic` and `AllowAtomicAdd`
>> >> > stay under `generic` (instead of re-export them at `atomic` mod level)
>> >> > because they are about the generic part of `Atomic`, right?
>> >> 
>> >> Why is that more natural? It only adds an extra path layer in any import
>> >> for atomics.
>> >> 
>> >
>> > Exactly, users need to go through extra steps if they want to use the
>> > "generic" part of the atomic, and I think that makes user more aware of
>> > what they are essentially doing:
>> >
>> > - If you want to use the predefined types for atomic, just
>> >
>> >   use kernel::sync::atomic::Atomic;
>> >
>> >   and just operate on an `Atomic<_>`.
>> >
>> > - If you want to bring your own type for atomic operations, you need to
>> >
>> >   use kernel::sync::atomic::generic::AllowAtomic;
>> >
>> >   (essentially you go into the "generic" part of the atomic)
>> >
>> >   and provide your own implementation for `AllowAtomic` and then you
>> >   could use it for your own type.
>> >
>> > I feel it's natural because for extra features you pull in more
>> > modules.
>> 
>> The same would apply if you had to import `AllowAtomic` from atomic
>> directly? I don't really see the value in this.
>> 
>
> Because generic::AllowAtomic makes it clearer that the user is using the
> generic part of the API, or that the user is extending the Atomic type
> with the generic ability. Grouping functionality within a namespace is
> basically what I want to achieve here. It's similar to why we put
> `atomic` (and `lock`) under `sync`.

For `sync::{atomic, lock}` this makes sense, because `sync` might get
other submodules in the future (e.g. `once`). But for atomics, I don't see
any new modules similar to `generic`, and even if there were, `AllowAtomic`
still makes sense in the top-level atomic module. I don't think the
namespacing argument makes sense in this case.

---
Cheers,
Benno

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v7 6/9] rust: sync: atomic: Add the framework of arithmetic operations
  2025-07-15 13:33     ` Boqun Feng
@ 2025-07-15 15:45       ` Benno Lossin
  2025-07-15 16:13         ` Boqun Feng
  0 siblings, 1 reply; 50+ messages in thread
From: Benno Lossin @ 2025-07-15 15:45 UTC (permalink / raw)
  To: Boqun Feng
  Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
	Alex Gaynor, Gary Guo, Björn Roy Baron, Andreas Hindborg,
	Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
	Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
	Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
	Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner, Alan Stern

On Tue Jul 15, 2025 at 3:33 PM CEST, Boqun Feng wrote:
> On Tue, Jul 15, 2025 at 01:21:20PM +0200, Benno Lossin wrote:
>> On Mon Jul 14, 2025 at 7:36 AM CEST, Boqun Feng wrote:
>> > +/// Types that support atomic add operations.
>> > +///
>> > +/// # Safety
>> > +///
>> > +/// Wrapping adding any value of type `Self::Repr::Delta` obtained by [`Self::rhs_into_delta()`] to
>> 
>> I don't like "wrapping adding", either we use "`wrapping_add`" or we use
>> some other phrasing.
>> 
>
> Let's use `wrapping_add` then.
>
>     /// `wrapping_add` any value of type `Self::Repr::Delta` obtained by [`Self::rhs_into_delta()`] to
>     /// any value of type `Self::Repr` obtained through transmuting a value of type `Self` must
>     /// yield a value with a bit pattern also valid for `Self`.
>
>> > +pub unsafe trait AllowAtomicAdd<Rhs = Self>: AllowAtomic {
>> 
>> Why `Allow*`? I think `AtomicAdd` is better?
>> 
>
> To be consistent with `AllowAtomic` (the super trait), if we use
> `AtomicAdd` here, should we change `AllowAtomic` to `AtomicBase`?

Ideally, we would name that trait just `Atomic` :) But it then
conflicts with the `Atomic<T>` struct (this would be motivation to put
them in different modules :). I like `AtomicBase` better than
`AllowAtomic`, but maybe there is a better name, how about `AtomicType`?

>> > +    /// Converts `Rhs` into the `Delta` type of the atomic implementation.
>> > +    fn rhs_into_delta(rhs: Rhs) -> <Self::Repr as AtomicImpl>::Delta;
>> > +}
>> > +
>> >  impl<T: AllowAtomic> Atomic<T> {
>> >      /// Creates a new atomic `T`.
>> >      pub const fn new(v: T) -> Self {
>> > @@ -462,3 +474,100 @@ fn try_cmpxchg<Ordering: ordering::Any>(&self, old: &mut T, new: T, _: Ordering)
>> >          ret
>> >      }
>> >  }
>> > +
>> > +impl<T: AllowAtomic> Atomic<T>
>> > +where
>> > +    T::Repr: AtomicHasArithmeticOps,
>> > +{
>> > +    /// Atomic add.
>> > +    ///
>> > +    /// Atomically updates `*self` to `(*self).wrapping_add(v)`.
>> > +    ///
>> > +    /// # Examples
>> > +    ///
>> > +    /// ```
>> > +    /// use kernel::sync::atomic::{Atomic, Relaxed};
>> > +    ///
>> > +    /// let x = Atomic::new(42);
>> > +    ///
>> > +    /// assert_eq!(42, x.load(Relaxed));
>> > +    ///
>> > +    /// x.add(12, Relaxed);
>> > +    ///
>> > +    /// assert_eq!(54, x.load(Relaxed));
>> > +    /// ```
>> > +    #[inline(always)]
>> > +    pub fn add<Rhs, Ordering: ordering::RelaxedOnly>(&self, v: Rhs, _: Ordering)
>> > +    where
>> > +        T: AllowAtomicAdd<Rhs>,
>> > +    {
>> > +        let v = T::rhs_into_delta(v);
>> > +        // CAST: Per the safety requirement of `AllowAtomic`, a valid pointer of `T` is a valid
>> > +        // pointer of `T::Repr` for reads and valid for writes of values transmutable to `T`.
>> > +        let a = self.as_ptr().cast::<T::Repr>();
>> > +
>> > +        // `*self` remains valid after `atomic_add()` because of the safety requirement of
>> > +        // `AllowAtomicAdd`.
>> 
>> This part should be moved to the CAST comment above, since we're not
>> only writing a value transmuted from `T` into `*self`.
>> 
>
> Hmm.. the CAST comment should explain why a pointer of `T` can be a
> valid pointer of `T::Repr`, because the atomic_add() below is going to
> read through the pointer and write a value back. The comment starting
> with "`*self`" explains that the value written is a valid `T`, therefore
> conceptually atomic_add() below writes a valid `T` in the form of
> `T::Repr` into `a`.

I see, my interpretation was that if we put it on the cast, then the
operation that `atomic_add` does also is valid.

But I think this comment should either be part of the `CAST` or the
`SAFETY` comment. Going by your interpretation, it would make more sense
in the SAFETY one, since there you justify that you're actually writing
a value of type `T`.

---
Cheers,
Benno

> Basically
>
> // CAST
> let a = ..
>
> ^ explains what `a` is valid for and why it's valid.
>
> // `*self` remains
>
> ^ explains that we write a valid value to `a`.
>
>
> So I don't think we need to move?

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v7 6/9] rust: sync: atomic: Add the framework of arithmetic operations
  2025-07-15 15:45       ` Benno Lossin
@ 2025-07-15 16:13         ` Boqun Feng
  2025-07-15 18:39           ` Benno Lossin
  0 siblings, 1 reply; 50+ messages in thread
From: Boqun Feng @ 2025-07-15 16:13 UTC (permalink / raw)
  To: Benno Lossin
  Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
	Alex Gaynor, Gary Guo, Björn Roy Baron, Andreas Hindborg,
	Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
	Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
	Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
	Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner, Alan Stern

On Tue, Jul 15, 2025 at 05:45:34PM +0200, Benno Lossin wrote:
> On Tue Jul 15, 2025 at 3:33 PM CEST, Boqun Feng wrote:
> > On Tue, Jul 15, 2025 at 01:21:20PM +0200, Benno Lossin wrote:
> >> On Mon Jul 14, 2025 at 7:36 AM CEST, Boqun Feng wrote:
> >> > +/// Types that support atomic add operations.
> >> > +///
> >> > +/// # Safety
> >> > +///
> >> > +/// Wrapping adding any value of type `Self::Repr::Delta` obtained by [`Self::rhs_into_delta()`] to
> >> 
> >> I don't like "wrapping adding", either we use "`wrapping_add`" or we use
> >> some other phrasing.
> >> 
> >
> > Let's use `wrapping_add` then.
> >
> >     /// `wrapping_add` any value of type `Self::Repr::Delta` obtained by [`Self::rhs_into_delta()`] to
> >     /// any value of type `Self::Repr` obtained through transmuting a value of type `Self` must
> >     /// yield a value with a bit pattern also valid for `Self`.
> >
> >> > +pub unsafe trait AllowAtomicAdd<Rhs = Self>: AllowAtomic {
> >> 
> >> Why `Allow*`? I think `AtomicAdd` is better?
> >> 
> >
> > To be consistent with `AllowAtomic` (the super trait), if we use
> > `AtomicAdd` here, should we change `AllowAtomic` to `AtomicBase`?
> 
> Ideally, we would name that trait just `Atomic` :) But it then
> conflicts with the `Atomic<T>` struct (this would be motivation to put
> them in different modules :). I like `AtomicBase` better than

Oh, if we move `Atomic<T>` to atomic.rs and keep atomic::generic, then
we can name it atomic::generic::Atomic ;-)

> `AllowAtomic`, but maybe there is a better name, how about `AtomicType`?
> 

AtomicType may be better than AtomicBase to me.

> >> > +    /// Converts `Rhs` into the `Delta` type of the atomic implementation.
> >> > +    fn rhs_into_delta(rhs: Rhs) -> <Self::Repr as AtomicImpl>::Delta;
> >> > +}
> >> > +
> >> >  impl<T: AllowAtomic> Atomic<T> {
> >> >      /// Creates a new atomic `T`.
> >> >      pub const fn new(v: T) -> Self {
> >> > @@ -462,3 +474,100 @@ fn try_cmpxchg<Ordering: ordering::Any>(&self, old: &mut T, new: T, _: Ordering)
> >> >          ret
> >> >      }
> >> >  }
> >> > +
> >> > +impl<T: AllowAtomic> Atomic<T>
> >> > +where
> >> > +    T::Repr: AtomicHasArithmeticOps,
> >> > +{
> >> > +    /// Atomic add.
> >> > +    ///
> >> > +    /// Atomically updates `*self` to `(*self).wrapping_add(v)`.
> >> > +    ///
> >> > +    /// # Examples
> >> > +    ///
> >> > +    /// ```
> >> > +    /// use kernel::sync::atomic::{Atomic, Relaxed};
> >> > +    ///
> >> > +    /// let x = Atomic::new(42);
> >> > +    ///
> >> > +    /// assert_eq!(42, x.load(Relaxed));
> >> > +    ///
> >> > +    /// x.add(12, Relaxed);
> >> > +    ///
> >> > +    /// assert_eq!(54, x.load(Relaxed));
> >> > +    /// ```
> >> > +    #[inline(always)]
> >> > +    pub fn add<Rhs, Ordering: ordering::RelaxedOnly>(&self, v: Rhs, _: Ordering)
> >> > +    where
> >> > +        T: AllowAtomicAdd<Rhs>,
> >> > +    {
> >> > +        let v = T::rhs_into_delta(v);
> >> > +        // CAST: Per the safety requirement of `AllowAtomic`, a valid pointer of `T` is a valid
> >> > +        // pointer of `T::Repr` for reads and valid for writes of values transmutable to `T`.
> >> > +        let a = self.as_ptr().cast::<T::Repr>();
> >> > +
> >> > +        // `*self` remains valid after `atomic_add()` because of the safety requirement of
> >> > +        // `AllowAtomicAdd`.
> >> 
> >> This part should be moved to the CAST comment above, since we're not
> >> only writing a value transmuted from `T` into `*self`.
> >> 
> >
> > Hmm.. the CAST comment should explain why a pointer of `T` can be a
> > valid pointer of `T::Repr`, because the atomic_add() below is going to
> > read through the pointer and write a value back. The comment starting
> > with "`*self`" explains that the value written is a valid `T`, therefore
> > conceptually atomic_add() below writes a valid `T` in the form of
> > `T::Repr` into `a`.
> 
> I see, my interpretation was that if we put it on the cast, then the
> operation that `atomic_add` does also is valid.
> 
> But I think this comment should either be part of the `CAST` or the
> `SAFETY` comment. Going by your interpretation, it would make more sense
> in the SAFETY one, since there you justify that you're actually writing
> a value of type `T`.
> 

Hmm.. you're probably right. There are two safety things about
atomic_add():

- Whether calling it is safe
- Whether the operation on `a` (a pointer to `T` essentially) is safe.

How about the following:

        let v = T::rhs_into_delta(v);
        // CAST: Per the safety requirement of `AllowAtomic`, a valid pointer of `T` is a valid
        // pointer of `T::Repr` for reads and valid for writes of values transmutable to `T`.
        let a = self.as_ptr().cast::<T::Repr>();

        // `*self` remains valid after `atomic_add()` because of the safety requirement of
        // `AllowAtomicAdd`.
        //
        // SAFETY:
        // - For calling `atomic_add()`:
        //   - `a` is aligned to `align_of::<T::Repr>()` because of the safety requirement of
        //   `AllowAtomic` and the guarantee of `Atomic::as_ptr()`.
        //   - `a` is a valid pointer per the CAST justification above.
        // - For accessing `*a`: the value written is transmutable to `T`
        //   due to the safety requirement of `AllowAtomicAdd`.
        unsafe { T::Repr::atomic_add(a, v) };

Regards,
Boqun

> ---
> Cheers,
> Benno
> 
> > Basically
> >
> > // CAST
> > let a = ..
> >
> > ^ explains what `a` is valid for and why it's valid.
> >
> > // `*self` remains
> >
> > ^ explains that we write a valid value to `a`.
> >
> >
> > So I don't think we need to move?

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v7 6/9] rust: sync: atomic: Add the framework of arithmetic operations
  2025-07-15 16:13         ` Boqun Feng
@ 2025-07-15 18:39           ` Benno Lossin
  2025-07-15 20:13             ` Boqun Feng
  0 siblings, 1 reply; 50+ messages in thread
From: Benno Lossin @ 2025-07-15 18:39 UTC (permalink / raw)
  To: Boqun Feng
  Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
	Alex Gaynor, Gary Guo, Björn Roy Baron, Andreas Hindborg,
	Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
	Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
	Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
	Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner, Alan Stern

On Tue Jul 15, 2025 at 6:13 PM CEST, Boqun Feng wrote:
> On Tue, Jul 15, 2025 at 05:45:34PM +0200, Benno Lossin wrote:
>> On Tue Jul 15, 2025 at 3:33 PM CEST, Boqun Feng wrote:
>> > On Tue, Jul 15, 2025 at 01:21:20PM +0200, Benno Lossin wrote:
>> >> On Mon Jul 14, 2025 at 7:36 AM CEST, Boqun Feng wrote:
>> >> > +/// Types that support atomic add operations.
>> >> > +///
>> >> > +/// # Safety
>> >> > +///
>> >> > +/// Wrapping adding any value of type `Self::Repr::Delta` obtained by [`Self::rhs_into_delta()`] to
>> >> 
>> >> I don't like "wrapping adding", either we use "`wrapping_add`" or we use
>> >> some other phrasing.
>> >> 
>> >
>> > Let's use `wrapping_add` then.
>> >
>> >     /// `wrapping_add` any value of type `Self::Repr::Delta` obtained by [`Self::rhs_into_delta()`] to
>> >     /// any value of type `Self::Repr` obtained through transmuting a value of type `Self` must
>> >     /// yield a value with a bit pattern also valid for `Self`.
>> >
>> >> > +pub unsafe trait AllowAtomicAdd<Rhs = Self>: AllowAtomic {
>> >> 
>> >> Why `Allow*`? I think `AtomicAdd` is better?
>> >> 
>> >
>> > To be consistent with `AllowAtomic` (the super trait), if we use
>> > `AtomicAdd` here, should we change `AllowAtomic` to `AtomicBase`?
>> 
>> Ideally, we would name that trait just `Atomic` :) But it then
>> conflicts with the `Atomic<T>` struct (this would be motivation to put
>> them in different modules :). I like `AtomicBase` better than
>
> Oh, if we move `Atomic<T>` to atomic.rs and keep atomic::generic, then
> we can name it atomic::generic::Atomic ;-)

That would be an argument for having the `generic` module :)

Well, I'm not so sure about having two types with the same name right
away, so maybe let's discuss this in our meeting.

>> `AllowAtomic`, but maybe there is a better name, how about `AtomicType`?
>> 
>
> AtomicType may be better than AtomicBase to me.

Yeah I like it better too.

>> >> > +    /// Converts `Rhs` into the `Delta` type of the atomic implementation.
>> >> > +    fn rhs_into_delta(rhs: Rhs) -> <Self::Repr as AtomicImpl>::Delta;
>> >> > +}
>> >> > +
>> >> >  impl<T: AllowAtomic> Atomic<T> {
>> >> >      /// Creates a new atomic `T`.
>> >> >      pub const fn new(v: T) -> Self {
>> >> > @@ -462,3 +474,100 @@ fn try_cmpxchg<Ordering: ordering::Any>(&self, old: &mut T, new: T, _: Ordering)
>> >> >          ret
>> >> >      }
>> >> >  }
>> >> > +
>> >> > +impl<T: AllowAtomic> Atomic<T>
>> >> > +where
>> >> > +    T::Repr: AtomicHasArithmeticOps,
>> >> > +{
>> >> > +    /// Atomic add.
>> >> > +    ///
>> >> > +    /// Atomically updates `*self` to `(*self).wrapping_add(v)`.
>> >> > +    ///
>> >> > +    /// # Examples
>> >> > +    ///
>> >> > +    /// ```
>> >> > +    /// use kernel::sync::atomic::{Atomic, Relaxed};
>> >> > +    ///
>> >> > +    /// let x = Atomic::new(42);
>> >> > +    ///
>> >> > +    /// assert_eq!(42, x.load(Relaxed));
>> >> > +    ///
>> >> > +    /// x.add(12, Relaxed);
>> >> > +    ///
>> >> > +    /// assert_eq!(54, x.load(Relaxed));
>> >> > +    /// ```
>> >> > +    #[inline(always)]
>> >> > +    pub fn add<Rhs, Ordering: ordering::RelaxedOnly>(&self, v: Rhs, _: Ordering)
>> >> > +    where
>> >> > +        T: AllowAtomicAdd<Rhs>,
>> >> > +    {
>> >> > +        let v = T::rhs_into_delta(v);
>> >> > +        // CAST: Per the safety requirement of `AllowAtomic`, a valid pointer of `T` is a valid
>> >> > +        // pointer of `T::Repr` for reads and valid for writes of values transmutable to `T`.
>> >> > +        let a = self.as_ptr().cast::<T::Repr>();
>> >> > +
>> >> > +        // `*self` remains valid after `atomic_add()` because of the safety requirement of
>> >> > +        // `AllowAtomicAdd`.
>> >> 
>> >> This part should be moved to the CAST comment above, since we're not
>> >> only writing a value transmuted from `T` into `*self`.
>> >> 
>> >
>> > Hmm.. the CAST comment should explain why a pointer of `T` can be a
>> > valid pointer of `T::Repr`, because the atomic_add() below is going to
>> > read through the pointer and write a value back. The comment starting
>> > with "`*self`" explains that the value written is a valid `T`, therefore
>> > conceptually atomic_add() below writes a valid `T` in the form of
>> > `T::Repr` into `a`.
>> 
>> I see, my interpretation was that if we put it on the cast, then the
>> operation that `atomic_add` does also is valid.
>> 
>> But I think this comment should either be part of the `CAST` or the
>> `SAFETY` comment. Going by your interpretation, it would make more sense
>> in the SAFETY one, since there you justify that you're actually writing
>> a value of type `T`.
>> 
>
> Hmm.. you're probably right. There are two safety things about
> atomic_add():
>
> - Whether calling it is safe
> - Whether the operation on `a` (a pointer to `T` essentially) is safe.

Well part of calling `T::Repr::atomic_add` is that the pointer is valid.
But it actually isn't valid for all operations, only for the specific
one you have here. If we want to be 100% correct, we actually need to
change the safety comment of `atomic_add` to say that it only requires
the result of `*a + v` to be writable... But that is most likely very
annoying... (note that we also have this issue for `store`)

I'm not too sure on what the right way to do this is. The formal answer
is to "just do it right", but then safety comments really just devolve
into formally proving the correctness of the program. I think -- for now
at least :) -- that we shouldn't do this here & now (since we also have
a lot of other code that isn't using normal good safety comments, let
alone formally correct ones).

> How about the following:
>
>         let v = T::rhs_into_delta(v);
>         // CAST: Per the safety requirement of `AllowAtomic`, a valid pointer of `T` is a valid
>         // pointer of `T::Repr` for reads and valid for writes of values transmutable to `T`.
>         let a = self.as_ptr().cast::<T::Repr>();
>
>         // `*self` remains valid after `atomic_add()` because of the safety requirement of
>         // `AllowAtomicAdd`.
>         //
>         // SAFETY:
>         // - For calling `atomic_add()`:
>         //   - `a` is aligned to `align_of::<T::Repr>()` because of the safety requirement of
>         //   `AllowAtomic` and the guarantee of `Atomic::as_ptr()`.
>         //   - `a` is a valid pointer per the CAST justification above.
>         // - For accessing `*a`: the value written is transmutable to `T`
>         //   due to the safety requirement of `AllowAtomicAdd`.
>         unsafe { T::Repr::atomic_add(a, v) };

That looks fine for now. But isn't this duplicating the sentence
starting with `*self`?

---
Cheers,
Benno

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v7 6/9] rust: sync: atomic: Add the framework of arithmetic operations
  2025-07-15 18:39           ` Benno Lossin
@ 2025-07-15 20:13             ` Boqun Feng
  2025-07-16 10:25               ` Benno Lossin
  0 siblings, 1 reply; 50+ messages in thread
From: Boqun Feng @ 2025-07-15 20:13 UTC (permalink / raw)
  To: Benno Lossin
  Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
	Alex Gaynor, Gary Guo, Björn Roy Baron, Andreas Hindborg,
	Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
	Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
	Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
	Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner, Alan Stern

On Tue, Jul 15, 2025 at 08:39:04PM +0200, Benno Lossin wrote:
[...]
> >> > Hmm.. the CAST comment should explain why a pointer of `T` can be a
> >> > valid pointer of `T::Repr`, because the atomic_add() below is going to
> >> > read through the pointer and write a value back. The comment starting
> >> > with "`*self`" explains that the value written is a valid `T`, therefore
> >> > conceptually atomic_add() below writes a valid `T` in the form of
> >> > `T::Repr` into `a`.
> >> 
> >> I see, my interpretation was that if we put it on the cast, then the
> >> operation that `atomic_add` does also is valid.
> >> 
> >> But I think this comment should either be part of the `CAST` or the
> >> `SAFETY` comment. Going by your interpretation, it would make more sense
> >> in the SAFETY one, since there you justify that you're actually writing
> >> a value of type `T`.
> >> 
> >
> > Hmm.. you're probably right. There are two safety things about
> > atomic_add():
> >
> > - Whether calling it is safe
> > - Whether the operation on `a` (a pointer to `T` essentially) is safe.
> 
> Well part of calling `T::Repr::atomic_add` is that the pointer is valid.

Here by saying "calling `T::Repr::atomic_add`", I think you mean the
whole operation, so yeah, we have to consider the validity for `T` of the
result. But what I'm trying to do is reason about this in 2 steps:

First, let's treat it as an `atomic_add(*mut i32, i32)`, then as long as
we provide a valid `*mut i32`, it's safe to call. 

And second, assume we call it with a valid pointer to `T::Repr`, and a
delta from `rhs_into_delta()`, then per the safety guarantee of
`AllowAtomicAdd`, the value written at the pointer is a valid `T`.

Based on these, we can prove the whole operation is safe for the given
input.
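
Mapping the two steps onto the snippet (comments only, as a sketch):

    // Step 1: `a` is a valid `*mut T::Repr` (i.e. `*mut i32` here), so
    // the call itself is safe.
    // Step 2: `v` came from `rhs_into_delta()`, so per the safety
    // requirement of `AllowAtomicAdd`, the value the call writes back
    // has a bit pattern valid for `T`.
    unsafe { T::Repr::atomic_add(a, v) };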

> But it actually isn't valid for all operations, only for the specific
> one you have here. If we want to be 100% correct, we actually need to
> change the safety comment of `atomic_add` to say that it only requires
> the result of `*a + v` to be writable... But that is most likely very
> annoying... (note that we also have this issue for `store`)
> 
> I'm not too sure on what the right way to do this is. The formal answer
> is to "just do it right", but then safety comments really just devolve
> into formally proving the correctness of the program. I think -- for now
> at least :) -- that we shouldn't do this here & now (since we also have
> a lot of other code that isn't using normal good safety comments, let
> alone formally correct ones).
> 
> > How about the following:
> >
> >         let v = T::rhs_into_delta(v);
> >         // CAST: Per the safety requirement of `AllowAtomic`, a valid pointer of `T` is a valid
> >         // pointer of `T::Repr` for reads and valid for writes of values transmutable to `T`.
> >         let a = self.as_ptr().cast::<T::Repr>();
> >
> >         // `*self` remains valid after `atomic_add()` because of the safety requirement of
> >         // `AllowAtomicAdd`.
> >         //
> >         // SAFETY:
> >         // - For calling `atomic_add()`:
> >         //   - `a` is aligned to `align_of::<T::Repr>()` because of the safety requirement of
> >         //   `AllowAtomic` and the guarantee of `Atomic::as_ptr()`.
> >         //   - `a` is a valid pointer per the CAST justification above.
> >         // - For accessing `*a`: the value written is transmutable to `T`
> >         //   due to the safety requirement of `AllowAtomicAdd`.
> >         unsafe { T::Repr::atomic_add(a, v) };
> 
> That looks fine for now. But isn't this duplicating the sentence
> starting with `*self`?

Oh sorry, I meant to remove the sentence starting with `*self`. :(

Regards,
Boqun

> 
> ---
> Cheers,
> Benno

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v7 1/9] rust: Introduce atomic API helpers
  2025-07-14  5:36 ` [PATCH v7 1/9] rust: Introduce atomic API helpers Boqun Feng
@ 2025-07-16  9:23   ` Greg Kroah-Hartman
  2025-07-16 12:36     ` Miguel Ojeda
  2025-07-16 12:47     ` Peter Zijlstra
  0 siblings, 2 replies; 50+ messages in thread
From: Greg Kroah-Hartman @ 2025-07-16  9:23 UTC (permalink / raw)
  To: Boqun Feng
  Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
	Alex Gaynor, Gary Guo, Björn Roy Baron, Benno Lossin,
	Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich,
	Will Deacon, Peter Zijlstra, Mark Rutland, Wedson Almeida Filho,
	Viresh Kumar, Lyude Paul, Ingo Molnar, Mitchell Levy,
	Paul E. McKenney, Linus Torvalds, Thomas Gleixner, Alan Stern

On Sun, Jul 13, 2025 at 10:36:48PM -0700, Boqun Feng wrote:
> In order to support LKMM atomics in Rust, add rust_helper_* for atomic
> APIs. These helpers ensure the implementation of LKMM atomics in Rust is
> the same as in C. This could save the maintenance burden of having two
> similar atomic implementations in asm.
> 
> Originally-by: Mark Rutland <mark.rutland@arm.com>
> Reviewed-by: Alice Ryhl <aliceryhl@google.com>
> Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> ---
>  rust/helpers/atomic.c                     | 1040 +++++++++++++++++++++
>  rust/helpers/helpers.c                    |    1 +
>  scripts/atomic/gen-atomics.sh             |    1 +
>  scripts/atomic/gen-rust-atomic-helpers.sh |   67 ++
>  4 files changed, 1109 insertions(+)
>  create mode 100644 rust/helpers/atomic.c
>  create mode 100755 scripts/atomic/gen-rust-atomic-helpers.sh
> 
> diff --git a/rust/helpers/atomic.c b/rust/helpers/atomic.c
> new file mode 100644
> index 000000000000..cf06b7ef9a1c
> --- /dev/null
> +++ b/rust/helpers/atomic.c
> @@ -0,0 +1,1040 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +// Generated by scripts/atomic/gen-rust-atomic-helpers.sh
> +// DO NOT MODIFY THIS FILE DIRECTLY

As this is auto-generated, how do we know when to auto-generate it
again?  What files does it depend on?  And why can't we just
auto-generate it at build time instead of having a static file in the
tree that no one knows when to regenerate?  :)

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v7 6/9] rust: sync: atomic: Add the framework of arithmetic operations
  2025-07-15 20:13             ` Boqun Feng
@ 2025-07-16 10:25               ` Benno Lossin
  2025-07-16 14:13                 ` Boqun Feng
  0 siblings, 1 reply; 50+ messages in thread
From: Benno Lossin @ 2025-07-16 10:25 UTC (permalink / raw)
  To: Boqun Feng
  Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
	Alex Gaynor, Gary Guo, Björn Roy Baron, Andreas Hindborg,
	Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
	Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
	Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
	Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner, Alan Stern

On Tue Jul 15, 2025 at 10:13 PM CEST, Boqun Feng wrote:
> On Tue, Jul 15, 2025 at 08:39:04PM +0200, Benno Lossin wrote:
> [...]
>> >> > Hmm.. the CAST comment should explain why a pointer of `T` can be a
>> >> > valid pointer of `T::Repr`, because the atomic_add() below is going to
>> >> > read through the pointer and write a value back. The comment starting
>> >> > with "`*self`" explains that the value written is a valid `T`, therefore
>> >> > conceptually atomic_add() below writes a valid `T` in the form of
>> >> > `T::Repr` into `a`.
>> >> 
>> >> I see, my interpretation was that if we put it on the cast, then the
>> >> operation that `atomic_add` does also is valid.
>> >> 
>> >> But I think this comment should either be part of the `CAST` or the
>> >> `SAFETY` comment. Going by your interpretation, it would make more sense
>> >> in the SAFETY one, since there you justify that you're actually writing
>> >> a value of type `T`.
>> >> 
>> >
>> > Hmm.. you're probably right. There are two safety things about
>> > atomic_add():
>> >
>> > - Whether calling it is safe
>> > - Whether the operation on `a` (a pointer to `T` essentially) is safe.
>> 
>> Well part of calling `T::Repr::atomic_add` is that the pointer is valid.
>
> Here by saying "calling `T::Repr::atomic_add`", I think you mean the
> whole operation, so yeah, we have to consider the validity for `T` of the
> result.

I meant just the call to `atomic_add`.

> But what I'm trying to do is reason about this in 2 steps:
>
> First, let's treat it as an `atomic_add(*mut i32, i32)`, then as long as
> we provide a valid `*mut i32`, it's safe to call. 

But the thing is, we're not supplying a valid `*mut i32`. Because the
pointer points to a value that is not actually an `i32`. You're only
allowed to write certain values and so you basically have to treat it as
a transmute + write. And so you need to include a justification for this
transmute in the write itself. 

For example, if we had `bool: AllowAtomic`, then writing a `2` in store
would be insta-UB, since we then have a `&UnsafeCell<bool>` pointing at
`2`.

This is the library-UB vs language-UB distinction: writing `2` into a bool
while having a reference to it is language-UB (i.e. instant UB), and
writing a `2` into a variable of type `i32` that is somewhere cast to
`bool` is library-UB (since it will lead to language-UB later).
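
In plain Rust (a standalone sketch, outside the kernel) the distinction
looks like this:

    fn main() {
        // Language-UB (instant): the only valid bit patterns of `bool`
        // are 0 and 1, so materializing a 2 as a `bool` is immediately
        // undefined behavior:
        // let b: bool = unsafe { core::mem::transmute(2u8) };

        // Library-UB (deferred): a 2 stored in an `i32` is fine on its
        // own; it only becomes language-UB once later code assumes the
        // `i32` always holds a valid `bool` and transmutes it.
        let x: i32 = 2;
        let _ = x;
    }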

The safety comments become simpler when you use `UnsafeCell<T::Repr>`
instead :) since that changes this language-UB into library-UB. (the
only safety comment that is more complex then is `get_mut`, but that's
only a single one)

If you don't want that, then we can solve this in two ways:

(1) add a guarantee on `atomic_add` (and all other operations) that it
    will write `*a + v` to `a` and nothing else.
(2) make the safety requirement only require writes of the addition to
    be valid.

My order of preference is: use `T::Repr`, then (2), and finally (1). (2)
will be very wordy on all operations & the safety comments in this file,
but it's clean from a formal perspective. (1) works by saying "what
we're supplying is actually not a valid `*mut i32`, but since the
guarantee of the function ensures that only specific things are written,
it's fine" which isn't very clean. And the `T::Repr` approach avoids all
this by just storing value of `T::Repr` circumventing the whole issue.
Then we only need to justify why we can point a `&mut T` at it and that
we can do by having an invariant that should be simple to keep.

We probably should talk about this in our meeting :)

---
Cheers,
Benno

> And second, assume we call it with a valid pointer to `T::Repr`, and a
> delta from `rhs_into_delta()`, then per the safety guarantee of
> `AllowAtomicAdd`, the value written at the pointer is a valid `T`.
>
> Based on these, we can prove the whole operation is safe for the given
> input.
>
>> But it actually isn't valid for all operations, only for the specific
>> one you have here. If we want to be 100% correct, we actually need to
>> change the safety comment of `atomic_add` to say that it only requires
>> the result of `*a + v` to be writable... But that is most likely very
>> annoying... (note that we also have this issue for `store`)
>> 
>> I'm not too sure on what the right way to do this is. The formal answer
>> is to "just do it right", but then safety comments really just devolve
>> into formally proving the correctness of the program. I think -- for now
>> at least :) -- that we shouldn't do this here & now (since we also have
>> a lot of other code that isn't using normal good safety comments, let
>> alone formally correct ones).
>> 
>> > How about the following:
>> >
>> >         let v = T::rhs_into_delta(v);
>> >         // CAST: Per the safety requirement of `AllowAtomic`, a valid pointer of `T` is a valid
>> >         // pointer of `T::Repr` for reads and valid for writes of values transmutable to `T`.
>> >         let a = self.as_ptr().cast::<T::Repr>();
>> >
>> >         // `*self` remains valid after `atomic_add()` because of the safety requirement of
>> >         // `AllowAtomicAdd`.
>> >         //
>> >         // SAFETY:
>> >         // - For calling `atomic_add()`:
>> >         //   - `a` is aligned to `align_of::<T::Repr>()` because of the safety requirement of
>> >         //   `AllowAtomic` and the guarantee of `Atomic::as_ptr()`.
>> >         //   - `a` is a valid pointer per the CAST justification above.
>> >         // - For accessing `*a`: the value written is transmutable to `T`
>> >         //   due to the safety requirement of `AllowAtomicAdd`.
>> >         unsafe { T::Repr::atomic_add(a, v) };

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v7 1/9] rust: Introduce atomic API helpers
  2025-07-16  9:23   ` Greg Kroah-Hartman
@ 2025-07-16 12:36     ` Miguel Ojeda
  2025-07-16 12:47     ` Peter Zijlstra
  1 sibling, 0 replies; 50+ messages in thread
From: Miguel Ojeda @ 2025-07-16 12:36 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Boqun Feng, linux-kernel, rust-for-linux, lkmm, linux-arch,
	Miguel Ojeda, Alex Gaynor, Gary Guo, Björn Roy Baron,
	Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
	Danilo Krummrich, Will Deacon, Peter Zijlstra, Mark Rutland,
	Wedson Almeida Filho, Viresh Kumar, Lyude Paul, Ingo Molnar,
	Mitchell Levy, Paul E. McKenney, Linus Torvalds, Thomas Gleixner,
	Alan Stern

On Wed, Jul 16, 2025 at 11:23 AM Greg Kroah-Hartman
<gregkh@linuxfoundation.org> wrote:
>
> As this is auto-generated, how do we know when to auto-generate it
> again?  What files does it depend on?  And why can't we just
> auto-generate it at build time instead of having a static file in the
> tree that no one knows when to regenerate it?  :)

If Boqun confirms that there is no reason not to do that, then I can
take a look at that.

Cheers,
Miguel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v7 1/9] rust: Introduce atomic API helpers
  2025-07-16  9:23   ` Greg Kroah-Hartman
  2025-07-16 12:36     ` Miguel Ojeda
@ 2025-07-16 12:47     ` Peter Zijlstra
  2025-07-16 12:54       ` Greg Kroah-Hartman
  2025-07-16 12:56       ` Miguel Ojeda
  1 sibling, 2 replies; 50+ messages in thread
From: Peter Zijlstra @ 2025-07-16 12:47 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Boqun Feng, linux-kernel, rust-for-linux, lkmm, linux-arch,
	Miguel Ojeda, Alex Gaynor, Gary Guo, Björn Roy Baron,
	Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
	Danilo Krummrich, Will Deacon, Mark Rutland, Wedson Almeida Filho,
	Viresh Kumar, Lyude Paul, Ingo Molnar, Mitchell Levy,
	Paul E. McKenney, Linus Torvalds, Thomas Gleixner, Alan Stern

On Wed, Jul 16, 2025 at 11:23:09AM +0200, Greg Kroah-Hartman wrote:
> On Sun, Jul 13, 2025 at 10:36:48PM -0700, Boqun Feng wrote:
> > In order to support LKMM atomics in Rust, add rust_helper_* for atomic
> > APIs. These helpers ensure the implementation of LKMM atomics in Rust is
> > the same as in C. This could save the maintenance burden of having two
> > similar atomic implementations in asm.
> > 
> > Originally-by: Mark Rutland <mark.rutland@arm.com>
> > Reviewed-by: Alice Ryhl <aliceryhl@google.com>
> > Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> > ---
> >  rust/helpers/atomic.c                     | 1040 +++++++++++++++++++++
> >  rust/helpers/helpers.c                    |    1 +
> >  scripts/atomic/gen-atomics.sh             |    1 +
> >  scripts/atomic/gen-rust-atomic-helpers.sh |   67 ++
> >  4 files changed, 1109 insertions(+)
> >  create mode 100644 rust/helpers/atomic.c
> >  create mode 100755 scripts/atomic/gen-rust-atomic-helpers.sh
> > 
> > diff --git a/rust/helpers/atomic.c b/rust/helpers/atomic.c
> > new file mode 100644
> > index 000000000000..cf06b7ef9a1c
> > --- /dev/null
> > +++ b/rust/helpers/atomic.c
> > @@ -0,0 +1,1040 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +
> > +// Generated by scripts/atomic/gen-rust-atomic-helpers.sh
> > +// DO NOT MODIFY THIS FILE DIRECTLY
> 
> As this is auto-generated, how do we know when to auto-generate it
> again?  What files does it depend on?  And why can't we just
> auto-generate it at build time instead of having a static file in the
> tree that no one knows when to regenerate?  :)

It depends on the scripts/atomic/* bits. They rarely, if ever, change. We
do it this way because:

 - generating these files every build is 'slow'-ish;
 - code navigation suffers;
 - Linus asked for this.

Specifically, pretty much the entire atomic_*() namespace would
disappear from ctags / code-browsing-tool-of-choice if we would not
check in these files.

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v7 1/9] rust: Introduce atomic API helpers
  2025-07-16 12:47     ` Peter Zijlstra
@ 2025-07-16 12:54       ` Greg Kroah-Hartman
  2025-07-16 12:57         ` Miguel Ojeda
  2025-07-16 12:56       ` Miguel Ojeda
  1 sibling, 1 reply; 50+ messages in thread
From: Greg Kroah-Hartman @ 2025-07-16 12:54 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Boqun Feng, linux-kernel, rust-for-linux, lkmm, linux-arch,
	Miguel Ojeda, Alex Gaynor, Gary Guo, Björn Roy Baron,
	Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
	Danilo Krummrich, Will Deacon, Mark Rutland, Wedson Almeida Filho,
	Viresh Kumar, Lyude Paul, Ingo Molnar, Mitchell Levy,
	Paul E. McKenney, Linus Torvalds, Thomas Gleixner, Alan Stern

On Wed, Jul 16, 2025 at 02:47:13PM +0200, Peter Zijlstra wrote:
> On Wed, Jul 16, 2025 at 11:23:09AM +0200, Greg Kroah-Hartman wrote:
> > On Sun, Jul 13, 2025 at 10:36:48PM -0700, Boqun Feng wrote:
> > > In order to support LKMM atomics in Rust, add rust_helper_* for atomic
> > > APIs. These helpers ensure the implementation of LKMM atomics in Rust is
> > > the same as in C. This could save the maintenance burden of having two
> > > similar atomic implementations in asm.
> > > 
> > > Originally-by: Mark Rutland <mark.rutland@arm.com>
> > > Reviewed-by: Alice Ryhl <aliceryhl@google.com>
> > > Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> > > ---
> > >  rust/helpers/atomic.c                     | 1040 +++++++++++++++++++++
> > >  rust/helpers/helpers.c                    |    1 +
> > >  scripts/atomic/gen-atomics.sh             |    1 +
> > >  scripts/atomic/gen-rust-atomic-helpers.sh |   67 ++
> > >  4 files changed, 1109 insertions(+)
> > >  create mode 100644 rust/helpers/atomic.c
> > >  create mode 100755 scripts/atomic/gen-rust-atomic-helpers.sh
> > > 
> > > diff --git a/rust/helpers/atomic.c b/rust/helpers/atomic.c
> > > new file mode 100644
> > > index 000000000000..cf06b7ef9a1c
> > > --- /dev/null
> > > +++ b/rust/helpers/atomic.c
> > > @@ -0,0 +1,1040 @@
> > > +// SPDX-License-Identifier: GPL-2.0
> > > +
> > > +// Generated by scripts/atomic/gen-rust-atomic-helpers.sh
> > > +// DO NOT MODIFY THIS FILE DIRECTLY
> > 
> > As this is auto-generated, how do we know when to auto-generate it
> > again?  What files does it depend on?  And why can't we just
> > auto-generate it at build time instead of having a static file in the
> > tree that no one knows when to regenerate?  :)
> 
> It depends on the scripts/atomic/* bits. They rarely, if ever, change. We
> do it this way because:
> 
>  - generating these files every build is 'slow'-ish;
>  - code navigation suffers;
>  - Linus asked for this.
> 
> Specifically, pretty much the entire atomic_*() namespace would
> disappear from ctags / code-browsing-tool-of-choice if we would not
> check in these files.

Ah, ok, that makes sense in a sad way.  As long as someone knows to
regenerate these files when needed, hopefully when the C files change
someone knows to update these rust ones...

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v7 1/9] rust: Introduce atomic API helpers
  2025-07-16 12:47     ` Peter Zijlstra
  2025-07-16 12:54       ` Greg Kroah-Hartman
@ 2025-07-16 12:56       ` Miguel Ojeda
  1 sibling, 0 replies; 50+ messages in thread
From: Miguel Ojeda @ 2025-07-16 12:56 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Greg Kroah-Hartman, Boqun Feng, linux-kernel, rust-for-linux,
	lkmm, linux-arch, Miguel Ojeda, Alex Gaynor, Gary Guo,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
	Trevor Gross, Danilo Krummrich, Will Deacon, Mark Rutland,
	Wedson Almeida Filho, Viresh Kumar, Lyude Paul, Ingo Molnar,
	Mitchell Levy, Paul E. McKenney, Linus Torvalds, Thomas Gleixner,
	Alan Stern

On Wed, Jul 16, 2025 at 2:47 PM Peter Zijlstra <peterz@infradead.org> wrote:
>
> It depends on the scripts/atomic/* bits. They rarely, if ever, change. We
> do it this way because:
>
>  - generating these files every build is 'slow'-ish;
>  - code navigation suffers;
>  - Linus asked for this.
>
> Specifically, pretty much the entire atomic_*() namespace would
> disappear from ctags / code-browsing-tool-of-choice if we would not
> check in these files.

Makes sense, thanks. The Rust helpers one may not be the end of the
world in terms of code navigation (since people will use the Rust
abstractions and nobody should call the helpers from C), but since it
is just 1 more script in the list, I think it is best to keep it as it
is.

Cheers,
Miguel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v7 1/9] rust: Introduce atomic API helpers
  2025-07-16 12:54       ` Greg Kroah-Hartman
@ 2025-07-16 12:57         ` Miguel Ojeda
  2025-07-16 13:04           ` Peter Zijlstra
  0 siblings, 1 reply; 50+ messages in thread
From: Miguel Ojeda @ 2025-07-16 12:57 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Peter Zijlstra, Boqun Feng, linux-kernel, rust-for-linux, lkmm,
	linux-arch, Miguel Ojeda, Alex Gaynor, Gary Guo,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
	Trevor Gross, Danilo Krummrich, Will Deacon, Mark Rutland,
	Wedson Almeida Filho, Viresh Kumar, Lyude Paul, Ingo Molnar,
	Mitchell Levy, Paul E. McKenney, Linus Torvalds, Thomas Gleixner,
	Alan Stern

On Wed, Jul 16, 2025 at 2:54 PM Greg Kroah-Hartman
<gregkh@linuxfoundation.org> wrote:
>
> Ah, ok, that makes sense in a sad way.  As long as someone knows to
> regenerate these files when needed, hopefully when the C files change
> someone knows to update these rust ones...

IIUC, the line added in `scripts/atomic/gen-atomics.sh` will make sure
it happens at the same time, right?

Cheers,
Miguel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v7 1/9] rust: Introduce atomic API helpers
  2025-07-16 12:57         ` Miguel Ojeda
@ 2025-07-16 13:04           ` Peter Zijlstra
  0 siblings, 0 replies; 50+ messages in thread
From: Peter Zijlstra @ 2025-07-16 13:04 UTC (permalink / raw)
  To: Miguel Ojeda
  Cc: Greg Kroah-Hartman, Boqun Feng, linux-kernel, rust-for-linux,
	lkmm, linux-arch, Miguel Ojeda, Alex Gaynor, Gary Guo,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
	Trevor Gross, Danilo Krummrich, Will Deacon, Mark Rutland,
	Wedson Almeida Filho, Viresh Kumar, Lyude Paul, Ingo Molnar,
	Mitchell Levy, Paul E. McKenney, Linus Torvalds, Thomas Gleixner,
	Alan Stern

On Wed, Jul 16, 2025 at 02:57:09PM +0200, Miguel Ojeda wrote:
On Wed, Jul 16, 2025 at 2:54 PM Greg Kroah-Hartman
> <gregkh@linuxfoundation.org> wrote:
> >
> > Ah, ok, that makes sense in a sad way.  As long as someone knows to
> > regenerate these files when needed, hopefully when the C files change
> > someone knows to update these rust ones...
> 
> IIUC, the line added in `scripts/atomic/gen-atomics.sh` will make sure
> it happens at the same time, right?

Yeah, it should

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH v7 6/9] rust: sync: atomic: Add the framework of arithmetic operations
  2025-07-16 10:25               ` Benno Lossin
@ 2025-07-16 14:13                 ` Boqun Feng
  2025-07-16 15:36                   ` Benno Lossin
  0 siblings, 1 reply; 50+ messages in thread
From: Boqun Feng @ 2025-07-16 14:13 UTC (permalink / raw)
  To: Benno Lossin
  Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
	Alex Gaynor, Gary Guo, Björn Roy Baron, Andreas Hindborg,
	Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
	Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
	Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
	Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner, Alan Stern

On Wed, Jul 16, 2025 at 12:25:30PM +0200, Benno Lossin wrote:
> On Tue Jul 15, 2025 at 10:13 PM CEST, Boqun Feng wrote:
> > On Tue, Jul 15, 2025 at 08:39:04PM +0200, Benno Lossin wrote:
> > [...]
> >> >> > Hmm.. the CAST comment should explain why a pointer of `T` can be a
> >> >> > valid pointer of `T::Repr`, because the atomic_add() below is going to
> >> >> > read through the pointer and write a value back. The comment starting
> >> >> > with "`*self`" explains that the value written is a valid `T`, therefore
> >> >> > conceptually atomic_add() below writes a valid `T` in the form of
> >> >> > `T::Repr` into `a`.
> >> >> 
> >> >> I see, my interpretation was that if we put it on the cast, then the
> >> >> operation that `atomic_add` does also is valid.
> >> >> 
> >> >> But I think this comment should either be part of the `CAST` or the
> >> >> `SAFETY` comment. Going by your interpretation, it would make more sense
> >> >> in the SAFETY one, since there you justify that you're actually writing
> >> >> a value of type `T`.
> >> >> 
> >> >
> >> > Hmm.. you're probably right. There are two safety things about
> >> > atomic_add():
> >> >
> >> > - Whether calling it is safe
> >> > - Whether the operation on `a` (a pointer to `T` essentially) is safe.
> >> 
> >> Well part of calling `T::Repr::atomic_add` is that the pointer is valid.
> >
> > Here by saying "calling `T::Repr::atomic_add`", I think you mean the
> > whole operation, so yeah, we have to consider the validity for `T` of the
> > result.
> 
> I meant just the call to `atomic_add`.
> 
> > But what I'm trying to do is reasoning this in 2 steps:
> >
> > First, let's treat it as an `atomic_add(*mut i32, i32)`, then as long as
> > we provide a valid `*mut i32`, it's safe to call. 
> 
> But the thing is, we're not supplying a valid `*mut i32`. Because the
> pointer points to a value that is not actually an `i32`. You're only
> allowed to write certain values and so you basically have to treat it as
> a transmute + write. And so you need to include a justification for this
> transmute in the write itself. 
> 
> For example, if we had `bool: AllowAtomic`, then writing a `2` in store
> would be insta-UB, since we then have a `&UnsafeCell<bool>` pointing at
> `2`.
> 
> This is part of library vs language UB, writing `2` into a bool and
> having a reference is language-UB (ie instant UB) and writing a `2` into
> a variable of type `i32` that is somewhere cast to `bool` is library-UB
> (since it will lead to language-UB later). 
> 

But we are not writing `2` in this case, right?

A) we have a pointer `*mut i32`, and the memory location is valid for
   writing an `i32`, so we can pass it to a function that may write
   an `i32` to it.

B) and at the same time, we prove that the value written was a valid
   `bool`.

There is no `2` written at any point; the proof simply has two parts.
There is no language-UB or library-UB anywhere in the process, which is
the point you're missing.

It's like proving 3 < x < 5: you first prove x > 3 and then x < 5. You
just don't prove it in one go.
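
To make the hypothetical `bool: AllowAtomic` case concrete, here is a
minimal sketch of the two-part argument (a stand-in `#[repr(i32)]` type,
nothing from the series; `bool` itself is 1 byte while its repr would be
4, so a two-value `repr(i32)` enum is the closest well-formed analogue):

    #[repr(i32)]
    enum Flag {
        Off = 0,
        On = 1,
    }

    fn demo(f: &mut Flag) {
        // Step A: `f` points at 4 bytes valid for reading and writing
        // an `i32`, so the cast pointer can be handed to i32-level code.
        let p = (f as *mut Flag).cast::<i32>();
        // Step B: the value written (1) is transmutable to `Flag`, so
        // `*f` is still a valid `Flag` afterwards.
        unsafe { p.write(1) };
        // Writing `2` instead would leave `*f` holding an invalid
        // `Flag`: the language-UB case discussed above.
        // unsafe { p.write(2) };
    }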

> The safety comments become simpler when you use `UnsafeCell<T::Repr>`
> instead :) since that changes this language-UB into library-UB. (the
> only safety comment that is more complex then is `get_mut`, but that's
> only a single one)
> 
> If you don't want that, then we can solve this in two ways:
> 
> (1) add a guarantee on `atomic_add` (and all other operations) that it
>     will write `*a + v` to `a` and nothing else.
> (2) make the safety requirement only require writes of the addition to
>     be valid.
> 
> My preference precedence is: use `T::Repr`, (2) and finally (1). (2)
> will be very wordy on all operations & the safety comments in this file,
> but it's clean from a formal perspective. (1) works by saying "what
> we're supplying is actually not a valid `*mut i32`, but since the
> guarantee of the function ensures that only specific things are written,
> it's fine" which isn't very clean. And the `T::Repr` approach avoids all
> this by just storing value of `T::Repr` circumventing the whole issue.
> Then we only need to justify why we can point a `&mut T` at it and that
> we can do by having an invariant that should be simple to keep.
> 
> We probably should talk about this in our meeting :)
> 

I have a better solution:

in ops.rs

    pub struct AtomicRepr<T: AtomicImpl>(UnsafeCell<T>);

    impl AtomicArithmeticOps for i32 {
        // a *safe* function
        fn atomic_add(a: &AtomicRepr<i32>, v: i32) {
            ...
        }
    }

in generic.rs

    pub struct Atomic<T>(AtomicRepr<T::Repr>);

    impl<T: AtomicAdd> Atomic<T> {
        fn add(&self, v: .., ...) {
            T::Repr::atomic_add(&self.0, ...);
        }
    }

see:

	https://git.kernel.org/pub/scm/linux/kernel/git/boqun/linux.git/log/?h=rust-atomic-impl
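
Spelled out a bit more, a minimal sketch of what `AtomicRepr` buys us
(the `as_ptr()` accessor is an assumption here; the branch above is the
authoritative form):

    use core::cell::UnsafeCell;

    pub struct AtomicRepr<T: AtomicImpl>(UnsafeCell<T>);

    impl<T: AtomicImpl> AtomicRepr<T> {
        /// By construction, a `&AtomicRepr<T>` always refers to a live,
        /// properly aligned `T`, so the returned pointer is valid for
        /// `T`-sized atomic accesses.
        pub const fn as_ptr(&self) -> *mut T {
            self.0.get()
        }
    }

This way the unsafe block sits once in ops.rs next to the bindings
call, and the operations can take `&AtomicRepr<i32>` and be safe to
call.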

Regards,
Boqun

> ---
> Cheers,
> Benno
> 
> > And second assume we call it with a valid pointer to `T::Repr`, and a
> > delta from `rhs_into_delta()`, then per the safety guarantee of
> > `AllowAtomicAdd`, the value written at the pointer is a valid `T`.
> >
> > Based on these, we can prove the whole operation is safe for the given
> > input.
> >
> >> But it actually isn't valid for all operations, only for the specific
> >> one you have here. If we want to be 100% correct, we actually need to
> >> change the safety comment of `atomic_add` to say that it only requires
> >> the result of `*a + v` to be writable... But that is most likely very
> >> annoying... (note that we also have this issue for `store`)
> >> 
> >> I'm not too sure on what the right way to do this is. The formal answer
> >> is to "just do it right", but then safety comments really just devolve
> >> into formally proving the correctness of the program. I think -- for now
> >> at least :) -- that we shouldn't do this here & now (since we also have
> >> a lot of other code that isn't using normal good safety comments, let
> >> alone formally correct ones).
> >> 
> >> > How about the following:
> >> >
> >> >         let v = T::rhs_into_delta(v);
> >> >         // CAST: Per the safety requirement of `AllowAtomic`, a valid pointer of `T` is a valid
> >> >         // pointer of `T::Repr` for reads and valid for writes of values transmutable to `T`.
> >> >         let a = self.as_ptr().cast::<T::Repr>();
> >> >
> >> >         // `*self` remains valid after `atomic_add()` because of the safety requirement of
> >> >         // `AllowAtomicAdd`.
> >> >         //
> >> >         // SAFETY:
> >> >         // - For calling `atomic_add()`:
> >> >         //   - `a` is aligned to `align_of::<T::Repr>()` because of the safety requirement of
> >> >         //   `AllowAtomic` and the guarantee of `Atomic::as_ptr()`.
> >> >         //   - `a` is a valid pointer per the CAST justification above.
> >> >         // - For accessing `*a`: the value written is transmutable to `T`
> >> >         //   due to the safety requirement of `AllowAtomicAdd`.
> >> >         unsafe { T::Repr::atomic_add(a, v) };

* Re: [PATCH v7 6/9] rust: sync: atomic: Add the framework of arithmetic operations
  2025-07-16 14:13                 ` Boqun Feng
@ 2025-07-16 15:36                   ` Benno Lossin
  2025-07-16 15:48                     ` Boqun Feng
  0 siblings, 1 reply; 50+ messages in thread
From: Benno Lossin @ 2025-07-16 15:36 UTC (permalink / raw)
  To: Boqun Feng
  Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
	Alex Gaynor, Gary Guo, Björn Roy Baron, Andreas Hindborg,
	Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
	Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
	Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
	Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner, Alan Stern

On Wed Jul 16, 2025 at 4:13 PM CEST, Boqun Feng wrote:
> On Wed, Jul 16, 2025 at 12:25:30PM +0200, Benno Lossin wrote:
>> On Tue Jul 15, 2025 at 10:13 PM CEST, Boqun Feng wrote:
>> > On Tue, Jul 15, 2025 at 08:39:04PM +0200, Benno Lossin wrote:
>> > [...]
>> >> >> > Hmm.. the CAST comment should explain why a pointer of `T` can be a
>> >> >> > valid pointer of `T::Repr` because the atomic_add() below is going to
>> >> >> > read through the pointer and write value back. The comment starting with
>> >> >> > "`*self`" explains the value written is a valid `T`, therefore
>> >> >> > conceptually atomic_add() below writes a valid `T` in form of `T::Repr`
>> >> >> > into `a`.
>> >> >> 
>> >> >> I see, my interpretation was that if we put it on the cast, then the
>> >> >> operation that `atomic_add` does also is valid.
>> >> >> 
>> >> >> But I think this comment should either be part of the `CAST` or the
>> >> >> `SAFETY` comment. Going by your interpretation, it would make more sense
>> >> >> in the SAFETY one, since there you justify that you're actually writing
>> >> >> a value of type `T`.
>> >> >> 
>> >> >
>> >> > Hmm.. you're probably right. There are two safety things about
>> >> > atomic_add():
>> >> >
>> >> > - Whether calling it is safe
>> >> > - Whether the operation on `a` (a pointer to `T` essentially) is safe.
>> >> 
>> >> Well part of calling `T::Repr::atomic_add` is that the pointer is valid.
>> >
>> > Here by saying "calling `T::Repr::atomic_add`", I think you mean the
>> > whole operation, so yeah, we have to consider the validity for `T` of the
>> > result.
>> 
>> I meant just the call to `atomic_add`.
>> 
>> > But what I'm trying to do is reasoning this in 2 steps:
>> >
>> > First, let's treat it as an `atomic_add(*mut i32, i32)`, then as long as
>> > we provide a valid `*mut i32`, it's safe to call. 
>> 
>> But the thing is, we're not supplying a valid `*mut i32`. Because the
>> pointer points to a value that is not actually an `i32`. You're only
>> allowed to write certain values and so you basically have to treat it as
>> a transmute + write. And so you need to include a justification for this
>> transmute in the write itself. 
>> 
>> For example, if we had `bool: AllowAtomic`, then writing a `2` in store
>> would be insta-UB, since we then have a `&UnsafeCell<bool>` pointing at
>> `2`.
>> 
>> This is part of library vs language UB, writing `2` into a bool and
>> having a reference is language-UB (ie instant UB) and writing a `2` into
>> a variable of type `i32` that is somewhere cast to `bool` is library-UB
>> (since it will lead to language-UB later). 
>> 
>
> But we are not writing `2` in this case, right?
>
> A) we have a pointer `*mut i32`, and the memory location is valid for
>    writing an `i32`, so we can pass it to a function that may write
>    an `i32` to it.
>
> B) and at the same time, we prove that the value written was a valid
>    `bool`.
>
> There is no `2` written at any point; the proof simply has two parts.
> There is no language-UB or library-UB anywhere in the process, which is
> the point you're missing.

There is no UB here and the public API surface is sound.

The problem is the internal safe <-> unsafe code interaction & the
safety documentation.

> It's like proving 3 < x < 5: you first prove x > 3 and then x < 5. You
> just don't prove it in one go.

That's true, but not analogous to this case. This is a better analogy:

You have a lemma that proves P given that x == 10. Now you want to use
this lemma, but you are only able to prove that x >= 10. You argue that
this is fine, because the proof of the lemma only uses the fact that
x >= 10.
    But in this situation you're not following the exact rules of the
formal system. To do that you would have to reformulate the lemma to
only ask for x >= 10.

The last part is what relaxing the safety requirements amounts to
(approach (2), quoted below).
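
In theorem-prover terms, a hypothetical Lean-style sketch of the same
point (nothing from the series, just the analogy made concrete):

    variable (P : Nat → Prop)

    -- The lemma as stated can only be applied with `h : x = 10`:
    theorem p_of_eq (x : Nat) (h : x = 10) : P x := sorry
    -- Knowing only `10 ≤ x` does not let you invoke `p_of_eq`, even if
    -- its proof would go through unchanged; the statement itself has to
    -- be relaxed first:
    theorem p_of_ge (x : Nat) (h : 10 ≤ x) : P x := sorry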

>> The safety comments become simpler when you use `UnsafeCell<T::Repr>`
>> instead :) since that changes this language-UB into library-UB. (the
>> only safety comment that is more complex then is `get_mut`, but that's
>> only a single one)
>> 
>> If you don't want that, then we can solve this in two ways:
>> 
>> (1) add a guarantee on `atomic_add` (and all other operations) that it
>>     will write `*a + v` to `a` and nothing else.
>> (2) make the safety requirement only require writes of the addition to
>>     be valid.
>> 
>> My preference precedence is: use `T::Repr`, (2) and finally (1). (2)
>> will be very wordy on all operations & the safety comments in this file,
>> but it's clean from a formal perspective. (1) works by saying "what
>> we're supplying is actually not a valid `*mut i32`, but since the
>> guarantee of the function ensures that only specific things are written,
>> it's fine" which isn't very clean. And the `T::Repr` approach avoids all
>> this by just storing value of `T::Repr` circumventing the whole issue.
>> Then we only need to justify why we can point a `&mut T` at it and that
>> we can do by having an invariant that should be simple to keep.
>> 
>> We probably should talk about this in our meeting :)
>> 
>
> I have a better solution:
>
> in ops.rs
>
>     pub struct AtomicRepr<T: AtomicImpl>(UnsafeCell<T>);
>
>     impl AtomicArithmeticOps for i32 {
>         // a *safe* function
>         fn atomic_add(a: &AtomicRepr<i32>, v: i32) {
>             ...
>         }
>     }
>
> in generic.rs
>
>     pub struct Atomic<T>(AtomicRepr<T::Repr>);
>
>     impl<T: AtomicAdd> Atomic<T> {
>         fn add(&self, v: .., ...) {
>             T::Repr::atomic_add(&self.0, ...);
>         }
>     }
>
> see:
>
> 	https://git.kernel.org/pub/scm/linux/kernel/git/boqun/linux.git/log/?h=rust-atomic-impl

Hmm what does the additional indirection give you?

Otherwise this looks like the `T::Repr` approach that I detailed above,
so I like it :)

---
Cheers,
Benno

* Re: [PATCH v7 6/9] rust: sync: atomic: Add the framework of arithmetic operations
  2025-07-16 15:36                   ` Benno Lossin
@ 2025-07-16 15:48                     ` Boqun Feng
  2025-07-16 17:16                       ` Benno Lossin
  0 siblings, 1 reply; 50+ messages in thread
From: Boqun Feng @ 2025-07-16 15:48 UTC (permalink / raw)
  To: Benno Lossin
  Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
	Alex Gaynor, Gary Guo, Björn Roy Baron, Andreas Hindborg,
	Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
	Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
	Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
	Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner, Alan Stern

On Wed, Jul 16, 2025 at 05:36:05PM +0200, Benno Lossin wrote:
[..]
> >
> > I have a better solution:
> >
> > in ops.rs
> >
> >     pub struct AtomicRepr<T: AtomicImpl>(UnsafeCell<T>);
> >
> >     impl AtomicArithmeticOps for i32 {
> >         // a *safe* function
> >         fn atomic_add(a: &AtomicRepr<i32>, v: i32) {
> >             ...
> >         }
> >     }
> >
> > in generic.rs
> >
> >     pub struct Atomic<T>(AtomicRepr<T::Repr>);
> >
> >     impl<T: AtomicAdd> Atomic<T> {
> >         fn add(&self, v: .., ...) {
> >             T::Repr::atomic_add(&self.0, ...);
> >         }
> >     }
> >
> > see:
> >
> > 	https://git.kernel.org/pub/scm/linux/kernel/git/boqun/linux.git/log/?h=rust-atomic-impl
> 
> Hmm what does the additional indirection give you?
> 

What additional indirection do you mean? You cannot make atomic_add()
safe with only `UnsafeCell<T::Repr>`.

Regards,
Boqun

> Otherwise this looks like the `T::Repr` approach that I detailed above,
> so I like it :)
> 
> ---
> Cheers,
> Benno

* Re: [PATCH v7 6/9] rust: sync: atomic: Add the framework of arithmetic operations
  2025-07-16 15:48                     ` Boqun Feng
@ 2025-07-16 17:16                       ` Benno Lossin
  2025-07-16 17:38                         ` Boqun Feng
  0 siblings, 1 reply; 50+ messages in thread
From: Benno Lossin @ 2025-07-16 17:16 UTC (permalink / raw)
  To: Boqun Feng
  Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
	Alex Gaynor, Gary Guo, Björn Roy Baron, Andreas Hindborg,
	Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
	Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
	Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
	Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner, Alan Stern

On Wed Jul 16, 2025 at 5:48 PM CEST, Boqun Feng wrote:
> On Wed, Jul 16, 2025 at 05:36:05PM +0200, Benno Lossin wrote:
> [..]
>> >
>> > I have a better solution:
>> >
>> > in ops.rs
>> >
>> >     pub struct AtomicRepr<T: AtomicImpl>(UnsafeCell<T>);
>> >
>> >     impl AtomicArithmeticOps for i32 {
>> >         // a *safe* function
>> >         fn atomic_add(a: &AtomicRepr<i32>, v: i32) {
>> >             ...
>> >         }
>> >     }
>> >
>> > in generic.rs
>> >
>> >     pub struct Atomic<T>(AtomicRepr<T::Repr>);
>> >
>> >     impl<T: AtomicAdd> Atomic<T> {
>> >         fn add(&self, v: .., ...) {
>> >             T::Repr::atomic_add(&self.0, ...);
>> >         }
>> >     }
>> >
>> > see:
>> >
>> > 	https://git.kernel.org/pub/scm/linux/kernel/git/boqun/linux.git/log/?h=rust-atomic-impl
>> 
>> Hmm what does the additional indirection give you?
>> 
>
> What additional indirection do you mean? You cannot make atomic_add()
> safe with only `UnsafeCell<T::Repr>`.

What is the advantage of making it safe? It just moves the safety
comments into `ops.rs` which makes it harder to read due to the macros.

---
Cheers,
Benno

* Re: [PATCH v7 6/9] rust: sync: atomic: Add the framework of arithmetic operations
  2025-07-16 17:16                       ` Benno Lossin
@ 2025-07-16 17:38                         ` Boqun Feng
  0 siblings, 0 replies; 50+ messages in thread
From: Boqun Feng @ 2025-07-16 17:38 UTC (permalink / raw)
  To: Benno Lossin
  Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
	Alex Gaynor, Gary Guo, Björn Roy Baron, Andreas Hindborg,
	Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
	Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
	Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
	Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner, Alan Stern

On Wed, Jul 16, 2025 at 07:16:02PM +0200, Benno Lossin wrote:
> On Wed Jul 16, 2025 at 5:48 PM CEST, Boqun Feng wrote:
> > On Wed, Jul 16, 2025 at 05:36:05PM +0200, Benno Lossin wrote:
> > [..]
> >> >
> >> > I have a better solution:
> >> >
> >> > in ops.rs
> >> >
> >> >     pub struct AtomicRepr<T: AtomicImpl>(UnsafeCell<T>);
> >> >
> >> >     impl AtomicArithmeticOps for i32 {
> >> >         // a *safe* function
> >> >         fn atomic_add(a: &AtomicRepr<i32>, v: i32) {
> >> >             ...
> >> >         }
> >> >     }
> >> >
> >> > in generic.rs
> >> >
> >> >     pub struct Atomic<T>(AtomicRepr<T::Repr>);
> >> >
> >> >     impl<T: AtomicAdd> Atomic<T> {
> >> >         fn add(&self, v: .., ...) {
> >> >             T::Repr::atomic_add(&self.0, ...);
> >> >         }
> >> >     }
> >> >
> >> > see:
> >> >
> >> > 	https://git.kernel.org/pub/scm/linux/kernel/git/boqun/linux.git/log/?h=rust-atomic-impl
> >> 
> >> Hmm what does the additional indirection give you?
> >> 
> >
> > What additional indirection do you mean? You cannot make atomic_add()
> > safe with only `UnsafeCell<T::Repr>`.
> 
> What is the advantage of making it safe? It just moves the safety

Well, first, we are generally in favor of safe functions whenever we
can make one safe, right? Second, at the Atomic<T> level, the major
unsafe parts are the T <-> T::Repr transmutability and keeping
`Atomic<T>` a valid `T`; the safety of i{32,64}::atomic_add() would be
a bit of a distraction when implementing Atomic::add(). With
i{32,64}::atomic_add() being safe, I can implement Atomic::add() as:

impl<T: ..> Atomic<T> {
    #[inline(always)]
    pub fn add<Rhs>(&self, v: Rhs, _: ordering::Relaxed)
    where
        T: AtomicAdd<Rhs>,
    {
        let v = T::rhs_into_delta(v);

        // INVARIANT: `self.0` is a valid `T` due to safety requirement of `AtomicAdd`.
        T::Repr::atomic_add(&self.0, v);
    }
}

then all the safety-related comments can focus on why `self.0` is still
a valid `T` after the operation (if you want to be extra detailed about
it, that's fine and can be done easily).
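
For example, a caller could then look like this (hypothetical snippet;
`Atomic::new()`, `load()` and `Relaxed` as used elsewhere in the
series):

    let a = Atomic::new(20i32);
    // No unsafe at the call site; the invariant reasoning lives in add().
    a.add(12, Relaxed);
    assert_eq!(a.load(Relaxed), 32);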

> comments into `ops.rs` which makes it harder to read due to the macros.

Does it? At the `i32` and `i64` level, you only need the pointer to be
a valid `*mut i{32,64}`. So the following is pretty clear to me.

    /// Atomic arithmetic operations
    pub trait AtomicArithmeticOps {
        /// Atomic add (wrapping).
        ///
        /// Atomically updates `*a` to `(*a).wrapping_add(v)`.
        fn atomic_add(a: &AtomicRepr<Self>, v: Self::Delta) {
            // SAFETY: `a.as_ptr()` is valid and properly aligned.
            unsafe { bindings::#call(v, a.as_ptr().cast()) }
        }
    }

it is at least better than:

            $(
                $(#[$($p_attr)*])*
                $pvis unsafe fn $p_field<E>(
                    self,
                    slot: *mut $p_type,
                    init: impl $crate::PinInit<$p_type, E>,
                ) -> ::core::result::Result<(), E> {
                    // SAFETY: TODO.
                    unsafe { $crate::PinInit::__pinned_init(init, slot) }
                }
            )*

;-)

Regards,
Boqun

> 
> ---
> Cheers,
> Benno

end of thread

Thread overview: 50+ messages:
2025-07-14  5:36 [PATCH v7 0/9] LKMM generic atomics in Rust Boqun Feng
2025-07-14  5:36 ` [PATCH v7 1/9] rust: Introduce atomic API helpers Boqun Feng
2025-07-16  9:23   ` Greg Kroah-Hartman
2025-07-16 12:36     ` Miguel Ojeda
2025-07-16 12:47     ` Peter Zijlstra
2025-07-16 12:54       ` Greg Kroah-Hartman
2025-07-16 12:57         ` Miguel Ojeda
2025-07-16 13:04           ` Peter Zijlstra
2025-07-16 12:56       ` Miguel Ojeda
2025-07-14  5:36 ` [PATCH v7 2/9] rust: sync: Add basic atomic operation mapping framework Boqun Feng
2025-07-14 10:03   ` Benno Lossin
2025-07-14 13:42     ` Boqun Feng
2025-07-14 15:00       ` Benno Lossin
2025-07-14 15:34         ` Boqun Feng
2025-07-14  5:36 ` [PATCH v7 3/9] rust: sync: atomic: Add ordering annotation types Boqun Feng
2025-07-14 10:10   ` Benno Lossin
2025-07-14 14:59     ` Boqun Feng
2025-07-14 15:16       ` Benno Lossin
2025-07-14  5:36 ` [PATCH v7 4/9] rust: sync: atomic: Add generic atomics Boqun Feng
2025-07-14 10:30   ` Benno Lossin
2025-07-14 14:21     ` Boqun Feng
2025-07-14 14:30       ` Boqun Feng
2025-07-14 14:34       ` Miguel Ojeda
2025-07-14 14:53         ` Boqun Feng
2025-07-14 15:16           ` Benno Lossin
2025-07-14 15:05       ` Benno Lossin
2025-07-14 15:32         ` Boqun Feng
2025-07-15  9:36           ` Benno Lossin
2025-07-15 13:14             ` Boqun Feng
2025-07-15 15:35               ` Benno Lossin
2025-07-14  5:36 ` [PATCH v7 5/9] rust: sync: atomic: Add atomic {cmp,}xchg operations Boqun Feng
2025-07-14 10:56   ` Benno Lossin
2025-07-14  5:36 ` [PATCH v7 6/9] rust: sync: atomic: Add the framework of arithmetic operations Boqun Feng
2025-07-15 11:21   ` Benno Lossin
2025-07-15 13:33     ` Boqun Feng
2025-07-15 15:45       ` Benno Lossin
2025-07-15 16:13         ` Boqun Feng
2025-07-15 18:39           ` Benno Lossin
2025-07-15 20:13             ` Boqun Feng
2025-07-16 10:25               ` Benno Lossin
2025-07-16 14:13                 ` Boqun Feng
2025-07-16 15:36                   ` Benno Lossin
2025-07-16 15:48                     ` Boqun Feng
2025-07-16 17:16                       ` Benno Lossin
2025-07-16 17:38                         ` Boqun Feng
2025-07-14  5:36 ` [PATCH v7 7/9] rust: sync: atomic: Add Atomic<u{32,64}> Boqun Feng
2025-07-14  5:36 ` [PATCH v7 8/9] rust: sync: Add memory barriers Boqun Feng
2025-07-14  5:36 ` [PATCH v7 9/9] rust: sync: atomic: Add Atomic<{usize,isize}> Boqun Feng
2025-07-14 11:06   ` Benno Lossin
2025-07-14 13:47     ` Boqun Feng
