* [PATCH v5 00/10] LKMM generic atomics in Rust
@ 2025-06-18 16:49 Boqun Feng
2025-06-18 16:49 ` [PATCH v5 01/10] rust: Introduce atomic API helpers Boqun Feng
` (10 more replies)
0 siblings, 11 replies; 82+ messages in thread
From: Boqun Feng @ 2025-06-18 16:49 UTC (permalink / raw)
To: linux-kernel, rust-for-linux, lkmm, linux-arch
Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra,
Mark Rutland, Wedson Almeida Filho, Viresh Kumar, Lyude Paul,
Ingo Molnar, Mitchell Levy, Paul E. McKenney, Greg Kroah-Hartman,
Linus Torvalds, Thomas Gleixner
Hi,
This is v5 for LKMM atomics in Rust; you can find the previous versions here:
v4: https://lore.kernel.org/rust-for-linux/20250609224615.27061-1-boqun.feng@gmail.com/
v3: https://lore.kernel.org/rust-for-linux/20250421164221.1121805-1-boqun.feng@gmail.com/
v2: https://lore.kernel.org/rust-for-linux/20241101060237.1185533-1-boqun.feng@gmail.com/
v1: https://lore.kernel.org/rust-for-linux/20240612223025.1158537-1-boqun.feng@gmail.com/
wip: https://lore.kernel.org/rust-for-linux/20240322233838.868874-1-boqun.feng@gmail.com/
The reason for providing our own LKMM atomics is that, memory model
wise, Rust's native memory model is not guaranteed to work with LKMM,
and having only one memory model throughout the kernel is always better
for reasoning.
Changes since v4:
* Rename the ordering enum type and corresponding constant in trait All
as per feedback from Benno.
* Add more tests for Atomic<{i,u}size> and Atomic<*mut T>.
* Rebase on v6.16-rc2
Please still advise on how we want to route these patches, and future
ones:
* Option #1: via tip, I can send a pull request to Ingo at -rc4 or -rc5.
* Option #2: via rust, I can send a pull request to Miguel at -rc4 or -rc5.
* Option #3: via my own tree or atomic group in kernel.org, I can send
a pull request to Linus at 6.17 merge window.
My default option is #1, but feel free to suggest alternatives.
Regards,
Boqun
Boqun Feng (10):
rust: Introduce atomic API helpers
rust: sync: Add basic atomic operation mapping framework
rust: sync: atomic: Add ordering annotation types
rust: sync: atomic: Add generic atomics
rust: sync: atomic: Add atomic {cmp,}xchg operations
rust: sync: atomic: Add the framework of arithmetic operations
rust: sync: atomic: Add Atomic<u{32,64}>
rust: sync: atomic: Add Atomic<{usize,isize}>
rust: sync: atomic: Add Atomic<*mut T>
rust: sync: Add memory barriers
MAINTAINERS | 4 +-
rust/helpers/atomic.c | 1038 +++++++++++++++++++++
rust/helpers/barrier.c | 18 +
rust/helpers/helpers.c | 2 +
rust/kernel/sync.rs | 2 +
rust/kernel/sync/atomic.rs | 233 +++++
rust/kernel/sync/atomic/generic.rs | 523 +++++++++++
rust/kernel/sync/atomic/ops.rs | 199 ++++
rust/kernel/sync/atomic/ordering.rs | 106 +++
rust/kernel/sync/barrier.rs | 67 ++
scripts/atomic/gen-atomics.sh | 1 +
scripts/atomic/gen-rust-atomic-helpers.sh | 65 ++
12 files changed, 2257 insertions(+), 1 deletion(-)
create mode 100644 rust/helpers/atomic.c
create mode 100644 rust/helpers/barrier.c
create mode 100644 rust/kernel/sync/atomic.rs
create mode 100644 rust/kernel/sync/atomic/generic.rs
create mode 100644 rust/kernel/sync/atomic/ops.rs
create mode 100644 rust/kernel/sync/atomic/ordering.rs
create mode 100644 rust/kernel/sync/barrier.rs
create mode 100755 scripts/atomic/gen-rust-atomic-helpers.sh
--
2.39.5 (Apple Git-154)
* [PATCH v5 01/10] rust: Introduce atomic API helpers
2025-06-18 16:49 [PATCH v5 00/10] LKMM generic atomics in Rust Boqun Feng
@ 2025-06-18 16:49 ` Boqun Feng
2025-06-26 8:44 ` Andreas Hindborg
2025-06-18 16:49 ` [PATCH v5 02/10] rust: sync: Add basic atomic operation mapping framework Boqun Feng
` (9 subsequent siblings)
10 siblings, 1 reply; 82+ messages in thread
From: Boqun Feng @ 2025-06-18 16:49 UTC (permalink / raw)
To: linux-kernel, rust-for-linux, lkmm, linux-arch
Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra,
Mark Rutland, Wedson Almeida Filho, Viresh Kumar, Lyude Paul,
Ingo Molnar, Mitchell Levy, Paul E. McKenney, Greg Kroah-Hartman,
Linus Torvalds, Thomas Gleixner
In order to support LKMM atomics in Rust, add rust_helper_* wrappers for
the atomic APIs. These helpers ensure that the implementation of LKMM
atomics in Rust is the same as in C, which avoids the maintenance burden
of having two similar atomic implementations in asm.
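For illustration, later patches in this series call these helpers
through the generated bindings. A minimal sketch of the Rust side
(assuming, as with the other helpers, that the rust_helper_ prefix is
stripped by the bindings generation, and that `v` is a valid
`*mut atomic_t`):

    // Sketch only: read the current value of an atomic_t via the helper.
    // SAFETY: `v` is a valid pointer and all accesses to it are atomic.
    let val: i32 = unsafe { bindings::atomic_read(v) };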
Originally-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
rust/helpers/atomic.c | 1038 +++++++++++++++++++++
rust/helpers/helpers.c | 1 +
scripts/atomic/gen-atomics.sh | 1 +
scripts/atomic/gen-rust-atomic-helpers.sh | 65 ++
4 files changed, 1105 insertions(+)
create mode 100644 rust/helpers/atomic.c
create mode 100755 scripts/atomic/gen-rust-atomic-helpers.sh
diff --git a/rust/helpers/atomic.c b/rust/helpers/atomic.c
new file mode 100644
index 000000000000..00bf10887928
--- /dev/null
+++ b/rust/helpers/atomic.c
@@ -0,0 +1,1038 @@
+// SPDX-License-Identifier: GPL-2.0
+
+// Generated by scripts/atomic/gen-rust-atomic-helpers.sh
+// DO NOT MODIFY THIS FILE DIRECTLY
+
+/*
+ * This file provides helpers for the various atomic functions for Rust.
+ */
+#ifndef _RUST_ATOMIC_API_H
+#define _RUST_ATOMIC_API_H
+
+#include <linux/atomic.h>
+
+// TODO: Remove this after LTO helper support is added.
+#define __rust_helper
+
+__rust_helper int
+rust_helper_atomic_read(const atomic_t *v)
+{
+ return atomic_read(v);
+}
+
+__rust_helper int
+rust_helper_atomic_read_acquire(const atomic_t *v)
+{
+ return atomic_read_acquire(v);
+}
+
+__rust_helper void
+rust_helper_atomic_set(atomic_t *v, int i)
+{
+ atomic_set(v, i);
+}
+
+__rust_helper void
+rust_helper_atomic_set_release(atomic_t *v, int i)
+{
+ atomic_set_release(v, i);
+}
+
+__rust_helper void
+rust_helper_atomic_add(int i, atomic_t *v)
+{
+ atomic_add(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_add_return(int i, atomic_t *v)
+{
+ return atomic_add_return(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_add_return_acquire(int i, atomic_t *v)
+{
+ return atomic_add_return_acquire(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_add_return_release(int i, atomic_t *v)
+{
+ return atomic_add_return_release(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_add_return_relaxed(int i, atomic_t *v)
+{
+ return atomic_add_return_relaxed(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_add(int i, atomic_t *v)
+{
+ return atomic_fetch_add(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_add_acquire(int i, atomic_t *v)
+{
+ return atomic_fetch_add_acquire(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_add_release(int i, atomic_t *v)
+{
+ return atomic_fetch_add_release(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_add_relaxed(int i, atomic_t *v)
+{
+ return atomic_fetch_add_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic_sub(int i, atomic_t *v)
+{
+ atomic_sub(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_sub_return(int i, atomic_t *v)
+{
+ return atomic_sub_return(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_sub_return_acquire(int i, atomic_t *v)
+{
+ return atomic_sub_return_acquire(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_sub_return_release(int i, atomic_t *v)
+{
+ return atomic_sub_return_release(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_sub_return_relaxed(int i, atomic_t *v)
+{
+ return atomic_sub_return_relaxed(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_sub(int i, atomic_t *v)
+{
+ return atomic_fetch_sub(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_sub_acquire(int i, atomic_t *v)
+{
+ return atomic_fetch_sub_acquire(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_sub_release(int i, atomic_t *v)
+{
+ return atomic_fetch_sub_release(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_sub_relaxed(int i, atomic_t *v)
+{
+ return atomic_fetch_sub_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic_inc(atomic_t *v)
+{
+ atomic_inc(v);
+}
+
+__rust_helper int
+rust_helper_atomic_inc_return(atomic_t *v)
+{
+ return atomic_inc_return(v);
+}
+
+__rust_helper int
+rust_helper_atomic_inc_return_acquire(atomic_t *v)
+{
+ return atomic_inc_return_acquire(v);
+}
+
+__rust_helper int
+rust_helper_atomic_inc_return_release(atomic_t *v)
+{
+ return atomic_inc_return_release(v);
+}
+
+__rust_helper int
+rust_helper_atomic_inc_return_relaxed(atomic_t *v)
+{
+ return atomic_inc_return_relaxed(v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_inc(atomic_t *v)
+{
+ return atomic_fetch_inc(v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_inc_acquire(atomic_t *v)
+{
+ return atomic_fetch_inc_acquire(v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_inc_release(atomic_t *v)
+{
+ return atomic_fetch_inc_release(v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_inc_relaxed(atomic_t *v)
+{
+ return atomic_fetch_inc_relaxed(v);
+}
+
+__rust_helper void
+rust_helper_atomic_dec(atomic_t *v)
+{
+ atomic_dec(v);
+}
+
+__rust_helper int
+rust_helper_atomic_dec_return(atomic_t *v)
+{
+ return atomic_dec_return(v);
+}
+
+__rust_helper int
+rust_helper_atomic_dec_return_acquire(atomic_t *v)
+{
+ return atomic_dec_return_acquire(v);
+}
+
+__rust_helper int
+rust_helper_atomic_dec_return_release(atomic_t *v)
+{
+ return atomic_dec_return_release(v);
+}
+
+__rust_helper int
+rust_helper_atomic_dec_return_relaxed(atomic_t *v)
+{
+ return atomic_dec_return_relaxed(v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_dec(atomic_t *v)
+{
+ return atomic_fetch_dec(v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_dec_acquire(atomic_t *v)
+{
+ return atomic_fetch_dec_acquire(v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_dec_release(atomic_t *v)
+{
+ return atomic_fetch_dec_release(v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_dec_relaxed(atomic_t *v)
+{
+ return atomic_fetch_dec_relaxed(v);
+}
+
+__rust_helper void
+rust_helper_atomic_and(int i, atomic_t *v)
+{
+ atomic_and(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_and(int i, atomic_t *v)
+{
+ return atomic_fetch_and(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_and_acquire(int i, atomic_t *v)
+{
+ return atomic_fetch_and_acquire(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_and_release(int i, atomic_t *v)
+{
+ return atomic_fetch_and_release(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_and_relaxed(int i, atomic_t *v)
+{
+ return atomic_fetch_and_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic_andnot(int i, atomic_t *v)
+{
+ atomic_andnot(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_andnot(int i, atomic_t *v)
+{
+ return atomic_fetch_andnot(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_andnot_acquire(int i, atomic_t *v)
+{
+ return atomic_fetch_andnot_acquire(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_andnot_release(int i, atomic_t *v)
+{
+ return atomic_fetch_andnot_release(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_andnot_relaxed(int i, atomic_t *v)
+{
+ return atomic_fetch_andnot_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic_or(int i, atomic_t *v)
+{
+ atomic_or(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_or(int i, atomic_t *v)
+{
+ return atomic_fetch_or(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_or_acquire(int i, atomic_t *v)
+{
+ return atomic_fetch_or_acquire(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_or_release(int i, atomic_t *v)
+{
+ return atomic_fetch_or_release(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_or_relaxed(int i, atomic_t *v)
+{
+ return atomic_fetch_or_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic_xor(int i, atomic_t *v)
+{
+ atomic_xor(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_xor(int i, atomic_t *v)
+{
+ return atomic_fetch_xor(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_xor_acquire(int i, atomic_t *v)
+{
+ return atomic_fetch_xor_acquire(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_xor_release(int i, atomic_t *v)
+{
+ return atomic_fetch_xor_release(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_xor_relaxed(int i, atomic_t *v)
+{
+ return atomic_fetch_xor_relaxed(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_xchg(atomic_t *v, int new)
+{
+ return atomic_xchg(v, new);
+}
+
+__rust_helper int
+rust_helper_atomic_xchg_acquire(atomic_t *v, int new)
+{
+ return atomic_xchg_acquire(v, new);
+}
+
+__rust_helper int
+rust_helper_atomic_xchg_release(atomic_t *v, int new)
+{
+ return atomic_xchg_release(v, new);
+}
+
+__rust_helper int
+rust_helper_atomic_xchg_relaxed(atomic_t *v, int new)
+{
+ return atomic_xchg_relaxed(v, new);
+}
+
+__rust_helper int
+rust_helper_atomic_cmpxchg(atomic_t *v, int old, int new)
+{
+ return atomic_cmpxchg(v, old, new);
+}
+
+__rust_helper int
+rust_helper_atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
+{
+ return atomic_cmpxchg_acquire(v, old, new);
+}
+
+__rust_helper int
+rust_helper_atomic_cmpxchg_release(atomic_t *v, int old, int new)
+{
+ return atomic_cmpxchg_release(v, old, new);
+}
+
+__rust_helper int
+rust_helper_atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)
+{
+ return atomic_cmpxchg_relaxed(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
+{
+ return atomic_try_cmpxchg(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
+{
+ return atomic_try_cmpxchg_acquire(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
+{
+ return atomic_try_cmpxchg_release(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
+{
+ return atomic_try_cmpxchg_relaxed(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic_sub_and_test(int i, atomic_t *v)
+{
+ return atomic_sub_and_test(i, v);
+}
+
+__rust_helper bool
+rust_helper_atomic_dec_and_test(atomic_t *v)
+{
+ return atomic_dec_and_test(v);
+}
+
+__rust_helper bool
+rust_helper_atomic_inc_and_test(atomic_t *v)
+{
+ return atomic_inc_and_test(v);
+}
+
+__rust_helper bool
+rust_helper_atomic_add_negative(int i, atomic_t *v)
+{
+ return atomic_add_negative(i, v);
+}
+
+__rust_helper bool
+rust_helper_atomic_add_negative_acquire(int i, atomic_t *v)
+{
+ return atomic_add_negative_acquire(i, v);
+}
+
+__rust_helper bool
+rust_helper_atomic_add_negative_release(int i, atomic_t *v)
+{
+ return atomic_add_negative_release(i, v);
+}
+
+__rust_helper bool
+rust_helper_atomic_add_negative_relaxed(int i, atomic_t *v)
+{
+ return atomic_add_negative_relaxed(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_add_unless(atomic_t *v, int a, int u)
+{
+ return atomic_fetch_add_unless(v, a, u);
+}
+
+__rust_helper bool
+rust_helper_atomic_add_unless(atomic_t *v, int a, int u)
+{
+ return atomic_add_unless(v, a, u);
+}
+
+__rust_helper bool
+rust_helper_atomic_inc_not_zero(atomic_t *v)
+{
+ return atomic_inc_not_zero(v);
+}
+
+__rust_helper bool
+rust_helper_atomic_inc_unless_negative(atomic_t *v)
+{
+ return atomic_inc_unless_negative(v);
+}
+
+__rust_helper bool
+rust_helper_atomic_dec_unless_positive(atomic_t *v)
+{
+ return atomic_dec_unless_positive(v);
+}
+
+__rust_helper int
+rust_helper_atomic_dec_if_positive(atomic_t *v)
+{
+ return atomic_dec_if_positive(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_read(const atomic64_t *v)
+{
+ return atomic64_read(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_read_acquire(const atomic64_t *v)
+{
+ return atomic64_read_acquire(v);
+}
+
+__rust_helper void
+rust_helper_atomic64_set(atomic64_t *v, s64 i)
+{
+ atomic64_set(v, i);
+}
+
+__rust_helper void
+rust_helper_atomic64_set_release(atomic64_t *v, s64 i)
+{
+ atomic64_set_release(v, i);
+}
+
+__rust_helper void
+rust_helper_atomic64_add(s64 i, atomic64_t *v)
+{
+ atomic64_add(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_add_return(s64 i, atomic64_t *v)
+{
+ return atomic64_add_return(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_add_return_acquire(s64 i, atomic64_t *v)
+{
+ return atomic64_add_return_acquire(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_add_return_release(s64 i, atomic64_t *v)
+{
+ return atomic64_add_return_release(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_add_return_relaxed(s64 i, atomic64_t *v)
+{
+ return atomic64_add_return_relaxed(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_add(s64 i, atomic64_t *v)
+{
+ return atomic64_fetch_add(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_add_acquire(s64 i, atomic64_t *v)
+{
+ return atomic64_fetch_add_acquire(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_add_release(s64 i, atomic64_t *v)
+{
+ return atomic64_fetch_add_release(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_add_relaxed(s64 i, atomic64_t *v)
+{
+ return atomic64_fetch_add_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic64_sub(s64 i, atomic64_t *v)
+{
+ atomic64_sub(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_sub_return(s64 i, atomic64_t *v)
+{
+ return atomic64_sub_return(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_sub_return_acquire(s64 i, atomic64_t *v)
+{
+ return atomic64_sub_return_acquire(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_sub_return_release(s64 i, atomic64_t *v)
+{
+ return atomic64_sub_return_release(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_sub_return_relaxed(s64 i, atomic64_t *v)
+{
+ return atomic64_sub_return_relaxed(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_sub(s64 i, atomic64_t *v)
+{
+ return atomic64_fetch_sub(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_sub_acquire(s64 i, atomic64_t *v)
+{
+ return atomic64_fetch_sub_acquire(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_sub_release(s64 i, atomic64_t *v)
+{
+ return atomic64_fetch_sub_release(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v)
+{
+ return atomic64_fetch_sub_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic64_inc(atomic64_t *v)
+{
+ atomic64_inc(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_inc_return(atomic64_t *v)
+{
+ return atomic64_inc_return(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_inc_return_acquire(atomic64_t *v)
+{
+ return atomic64_inc_return_acquire(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_inc_return_release(atomic64_t *v)
+{
+ return atomic64_inc_return_release(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_inc_return_relaxed(atomic64_t *v)
+{
+ return atomic64_inc_return_relaxed(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_inc(atomic64_t *v)
+{
+ return atomic64_fetch_inc(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_inc_acquire(atomic64_t *v)
+{
+ return atomic64_fetch_inc_acquire(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_inc_release(atomic64_t *v)
+{
+ return atomic64_fetch_inc_release(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_inc_relaxed(atomic64_t *v)
+{
+ return atomic64_fetch_inc_relaxed(v);
+}
+
+__rust_helper void
+rust_helper_atomic64_dec(atomic64_t *v)
+{
+ atomic64_dec(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_dec_return(atomic64_t *v)
+{
+ return atomic64_dec_return(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_dec_return_acquire(atomic64_t *v)
+{
+ return atomic64_dec_return_acquire(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_dec_return_release(atomic64_t *v)
+{
+ return atomic64_dec_return_release(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_dec_return_relaxed(atomic64_t *v)
+{
+ return atomic64_dec_return_relaxed(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_dec(atomic64_t *v)
+{
+ return atomic64_fetch_dec(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_dec_acquire(atomic64_t *v)
+{
+ return atomic64_fetch_dec_acquire(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_dec_release(atomic64_t *v)
+{
+ return atomic64_fetch_dec_release(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_dec_relaxed(atomic64_t *v)
+{
+ return atomic64_fetch_dec_relaxed(v);
+}
+
+__rust_helper void
+rust_helper_atomic64_and(s64 i, atomic64_t *v)
+{
+ atomic64_and(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_and(s64 i, atomic64_t *v)
+{
+ return atomic64_fetch_and(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_and_acquire(s64 i, atomic64_t *v)
+{
+ return atomic64_fetch_and_acquire(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_and_release(s64 i, atomic64_t *v)
+{
+ return atomic64_fetch_and_release(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_and_relaxed(s64 i, atomic64_t *v)
+{
+ return atomic64_fetch_and_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic64_andnot(s64 i, atomic64_t *v)
+{
+ atomic64_andnot(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_andnot(s64 i, atomic64_t *v)
+{
+ return atomic64_fetch_andnot(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
+{
+ return atomic64_fetch_andnot_acquire(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
+{
+ return atomic64_fetch_andnot_release(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)
+{
+ return atomic64_fetch_andnot_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic64_or(s64 i, atomic64_t *v)
+{
+ atomic64_or(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_or(s64 i, atomic64_t *v)
+{
+ return atomic64_fetch_or(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_or_acquire(s64 i, atomic64_t *v)
+{
+ return atomic64_fetch_or_acquire(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_or_release(s64 i, atomic64_t *v)
+{
+ return atomic64_fetch_or_release(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_or_relaxed(s64 i, atomic64_t *v)
+{
+ return atomic64_fetch_or_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic64_xor(s64 i, atomic64_t *v)
+{
+ atomic64_xor(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_xor(s64 i, atomic64_t *v)
+{
+ return atomic64_fetch_xor(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_xor_acquire(s64 i, atomic64_t *v)
+{
+ return atomic64_fetch_xor_acquire(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_xor_release(s64 i, atomic64_t *v)
+{
+ return atomic64_fetch_xor_release(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v)
+{
+ return atomic64_fetch_xor_relaxed(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_xchg(atomic64_t *v, s64 new)
+{
+ return atomic64_xchg(v, new);
+}
+
+__rust_helper s64
+rust_helper_atomic64_xchg_acquire(atomic64_t *v, s64 new)
+{
+ return atomic64_xchg_acquire(v, new);
+}
+
+__rust_helper s64
+rust_helper_atomic64_xchg_release(atomic64_t *v, s64 new)
+{
+ return atomic64_xchg_release(v, new);
+}
+
+__rust_helper s64
+rust_helper_atomic64_xchg_relaxed(atomic64_t *v, s64 new)
+{
+ return atomic64_xchg_relaxed(v, new);
+}
+
+__rust_helper s64
+rust_helper_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
+{
+ return atomic64_cmpxchg(v, old, new);
+}
+
+__rust_helper s64
+rust_helper_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
+{
+ return atomic64_cmpxchg_acquire(v, old, new);
+}
+
+__rust_helper s64
+rust_helper_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
+{
+ return atomic64_cmpxchg_release(v, old, new);
+}
+
+__rust_helper s64
+rust_helper_atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new)
+{
+ return atomic64_cmpxchg_relaxed(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
+{
+ return atomic64_try_cmpxchg(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
+{
+ return atomic64_try_cmpxchg_acquire(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
+{
+ return atomic64_try_cmpxchg_release(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
+{
+ return atomic64_try_cmpxchg_relaxed(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic64_sub_and_test(s64 i, atomic64_t *v)
+{
+ return atomic64_sub_and_test(i, v);
+}
+
+__rust_helper bool
+rust_helper_atomic64_dec_and_test(atomic64_t *v)
+{
+ return atomic64_dec_and_test(v);
+}
+
+__rust_helper bool
+rust_helper_atomic64_inc_and_test(atomic64_t *v)
+{
+ return atomic64_inc_and_test(v);
+}
+
+__rust_helper bool
+rust_helper_atomic64_add_negative(s64 i, atomic64_t *v)
+{
+ return atomic64_add_negative(i, v);
+}
+
+__rust_helper bool
+rust_helper_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
+{
+ return atomic64_add_negative_acquire(i, v);
+}
+
+__rust_helper bool
+rust_helper_atomic64_add_negative_release(s64 i, atomic64_t *v)
+{
+ return atomic64_add_negative_release(i, v);
+}
+
+__rust_helper bool
+rust_helper_atomic64_add_negative_relaxed(s64 i, atomic64_t *v)
+{
+ return atomic64_add_negative_relaxed(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
+{
+ return atomic64_fetch_add_unless(v, a, u);
+}
+
+__rust_helper bool
+rust_helper_atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
+{
+ return atomic64_add_unless(v, a, u);
+}
+
+__rust_helper bool
+rust_helper_atomic64_inc_not_zero(atomic64_t *v)
+{
+ return atomic64_inc_not_zero(v);
+}
+
+__rust_helper bool
+rust_helper_atomic64_inc_unless_negative(atomic64_t *v)
+{
+ return atomic64_inc_unless_negative(v);
+}
+
+__rust_helper bool
+rust_helper_atomic64_dec_unless_positive(atomic64_t *v)
+{
+ return atomic64_dec_unless_positive(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_dec_if_positive(atomic64_t *v)
+{
+ return atomic64_dec_if_positive(v);
+}
+
+#endif /* _RUST_ATOMIC_API_H */
+// b032d261814b3e119b72dbf7d21447f6731325ee
diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c
index 16fa9bca5949..83e89f6a68fb 100644
--- a/rust/helpers/helpers.c
+++ b/rust/helpers/helpers.c
@@ -7,6 +7,7 @@
* Sorted alphabetically.
*/
+#include "atomic.c"
#include "auxiliary.c"
#include "blk.c"
#include "bug.c"
diff --git a/scripts/atomic/gen-atomics.sh b/scripts/atomic/gen-atomics.sh
index 5b98a8307693..02508d0d6fe4 100755
--- a/scripts/atomic/gen-atomics.sh
+++ b/scripts/atomic/gen-atomics.sh
@@ -11,6 +11,7 @@ cat <<EOF |
gen-atomic-instrumented.sh linux/atomic/atomic-instrumented.h
gen-atomic-long.sh linux/atomic/atomic-long.h
gen-atomic-fallback.sh linux/atomic/atomic-arch-fallback.h
+gen-rust-atomic-helpers.sh ../rust/helpers/atomic.c
EOF
while read script header args; do
/bin/sh ${ATOMICDIR}/${script} ${ATOMICTBL} ${args} > ${LINUXDIR}/include/${header}
diff --git a/scripts/atomic/gen-rust-atomic-helpers.sh b/scripts/atomic/gen-rust-atomic-helpers.sh
new file mode 100755
index 000000000000..72f2e5bde0c6
--- /dev/null
+++ b/scripts/atomic/gen-rust-atomic-helpers.sh
@@ -0,0 +1,65 @@
+#!/bin/sh
+# SPDX-License-Identifier: GPL-2.0
+
+ATOMICDIR=$(dirname $0)
+
+. ${ATOMICDIR}/atomic-tbl.sh
+
+#gen_proto_order_variant(meta, pfx, name, sfx, order, atomic, int, arg...)
+gen_proto_order_variant()
+{
+ local meta="$1"; shift
+ local pfx="$1"; shift
+ local name="$1"; shift
+ local sfx="$1"; shift
+ local order="$1"; shift
+ local atomic="$1"; shift
+ local int="$1"; shift
+
+ local atomicname="${atomic}_${pfx}${name}${sfx}${order}"
+
+ local ret="$(gen_ret_type "${meta}" "${int}")"
+ local params="$(gen_params "${int}" "${atomic}" "$@")"
+ local args="$(gen_args "$@")"
+ local retstmt="$(gen_ret_stmt "${meta}")"
+
+cat <<EOF
+__rust_helper ${ret}
+rust_helper_${atomicname}(${params})
+{
+ ${retstmt}${atomicname}(${args});
+}
+
+EOF
+}
+
+cat << EOF
+// SPDX-License-Identifier: GPL-2.0
+
+// Generated by $0
+// DO NOT MODIFY THIS FILE DIRECTLY
+
+/*
+ * This file provides helpers for the various atomic functions for Rust.
+ */
+#ifndef _RUST_ATOMIC_API_H
+#define _RUST_ATOMIC_API_H
+
+#include <linux/atomic.h>
+
+// TODO: Remove this after LTO helper support is added.
+#define __rust_helper
+
+EOF
+
+grep '^[a-z]' "$1" | while read name meta args; do
+ gen_proto "${meta}" "${name}" "atomic" "int" ${args}
+done
+
+grep '^[a-z]' "$1" | while read name meta args; do
+ gen_proto "${meta}" "${name}" "atomic64" "s64" ${args}
+done
+
+cat <<EOF
+#endif /* _RUST_ATOMIC_API_H */
+EOF
--
2.39.5 (Apple Git-154)
* [PATCH v5 02/10] rust: sync: Add basic atomic operation mapping framework
2025-06-18 16:49 [PATCH v5 00/10] LKMM generic atomics in Rust Boqun Feng
2025-06-18 16:49 ` [PATCH v5 01/10] rust: Introduce atomic API helpers Boqun Feng
@ 2025-06-18 16:49 ` Boqun Feng
2025-06-26 8:50 ` Andreas Hindborg
2025-06-26 10:17 ` Andreas Hindborg
2025-06-18 16:49 ` [PATCH v5 03/10] rust: sync: atomic: Add ordering annotation types Boqun Feng
` (8 subsequent siblings)
10 siblings, 2 replies; 82+ messages in thread
From: Boqun Feng @ 2025-06-18 16:49 UTC (permalink / raw)
To: linux-kernel, rust-for-linux, lkmm, linux-arch
Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra,
Mark Rutland, Wedson Almeida Filho, Viresh Kumar, Lyude Paul,
Ingo Molnar, Mitchell Levy, Paul E. McKenney, Greg Kroah-Hartman,
Linus Torvalds, Thomas Gleixner
Preparation for the generic atomic implementation. To unify the
implementation of a generic method over `i32` and `i64`, the C side
atomic methods need to be grouped so that in a generic method, they can
be referred to as <type>::<method>; otherwise their parameter and return
types differ between `i32` and `i64`, which would require using
`transmute()` to unify the type into a `T`.
Introduce `AtomicImpl` to represent a basic type in Rust that has the
direct mapping to an atomic implementation from C. This trait is sealed,
and currently only `i32` and `i64` impl this.
Further, the different methods are put into different `*Ops` trait
groups; this is for the future when smaller types like `i8`/`i16` are
supported, but only with a limited set of APIs (e.g. only set(), load(),
xchg() and cmpxchg(), no add() or sub() etc).
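As an illustration of what the grouping enables, here is a rough sketch
(not verbatim from this patch) of a generic function dispatching through
such a trait:

    // Sketch: with the C atomics grouped under a trait, generic code
    // can name the implementation as <type>::<method>.
    fn read_relaxed<T: AtomicHasBasicOps>(ptr: *mut T) -> T {
        // SAFETY: assumes `ptr` is valid and the access is race-free.
        unsafe { T::atomic_read(ptr) }
    }

Without the grouping, atomic_read() and atomic64_read() have unrelated
signatures, so a generic body would need `transmute()` between `T` and
`i32`/`i64`.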
While the atomic mod is introduced, documentation is also added for
memory models and data races.
Also bump my role to the maintainer of ATOMIC INFRASTRUCTURE to reflect
my responsibility for the Rust atomic mod.
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
MAINTAINERS | 4 +-
rust/kernel/sync.rs | 1 +
rust/kernel/sync/atomic.rs | 19 ++++
rust/kernel/sync/atomic/ops.rs | 199 +++++++++++++++++++++++++++++++++
4 files changed, 222 insertions(+), 1 deletion(-)
create mode 100644 rust/kernel/sync/atomic.rs
create mode 100644 rust/kernel/sync/atomic/ops.rs
diff --git a/MAINTAINERS b/MAINTAINERS
index 0c1d245bf7b8..5eef524975ca 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -3894,7 +3894,7 @@ F: drivers/input/touchscreen/atmel_mxt_ts.c
ATOMIC INFRASTRUCTURE
M: Will Deacon <will@kernel.org>
M: Peter Zijlstra <peterz@infradead.org>
-R: Boqun Feng <boqun.feng@gmail.com>
+M: Boqun Feng <boqun.feng@gmail.com>
R: Mark Rutland <mark.rutland@arm.com>
L: linux-kernel@vger.kernel.org
S: Maintained
@@ -3903,6 +3903,8 @@ F: arch/*/include/asm/atomic*.h
F: include/*/atomic*.h
F: include/linux/refcount.h
F: scripts/atomic/
+F: rust/kernel/sync/atomic.rs
+F: rust/kernel/sync/atomic/
ATTO EXPRESSSAS SAS/SATA RAID SCSI DRIVER
M: Bradley Grove <linuxdrivers@attotech.com>
diff --git a/rust/kernel/sync.rs b/rust/kernel/sync.rs
index 36a719015583..b620027e0641 100644
--- a/rust/kernel/sync.rs
+++ b/rust/kernel/sync.rs
@@ -10,6 +10,7 @@
use pin_init;
mod arc;
+pub mod atomic;
mod condvar;
pub mod lock;
mod locked_by;
diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
new file mode 100644
index 000000000000..65e41dba97b7
--- /dev/null
+++ b/rust/kernel/sync/atomic.rs
@@ -0,0 +1,19 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Atomic primitives.
+//!
+//! These primitives have the same semantics as their C counterparts, and the precise definitions
+//! of the semantics can be found at [`LKMM`]. Note that the Linux Kernel Memory (Consistency)
+//! Model is the only memory model for Rust code in the kernel, and Rust's own atomics should be
+//! avoided.
+//!
+//! # Data races
+//!
+//! [`LKMM`] atomics have different rules regarding data races:
+//!
+//! - A normal write from C side is treated as an atomic write if
+//! CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=y.
+//! - Mixed-size atomic accesses don't cause data races.
+//!
+//! [`LKMM`]: srctree/tools/memory-model/
+
+pub mod ops;
diff --git a/rust/kernel/sync/atomic/ops.rs b/rust/kernel/sync/atomic/ops.rs
new file mode 100644
index 000000000000..f8825f7c84f0
--- /dev/null
+++ b/rust/kernel/sync/atomic/ops.rs
@@ -0,0 +1,199 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Atomic implementations.
+//!
+//! Provides a 1:1 mapping of atomic implementations.
+
+use crate::bindings::*;
+use crate::macros::paste;
+
+mod private {
+ /// Sealed trait marker to disable customized impls on atomic implementation traits.
+ pub trait Sealed {}
+}
+
+// `i32` and `i64` are the only supported atomic implementations.
+impl private::Sealed for i32 {}
+impl private::Sealed for i64 {}
+
+/// A marker trait for types that implement atomic operations with C side primitives.
+///
+/// This trait is sealed, and only types that have a direct mapping to the C side atomics should
+/// impl this:
+///
+/// - `i32` maps to `atomic_t`.
+/// - `i64` maps to `atomic64_t`.
+pub trait AtomicImpl: Sized + Send + Copy + private::Sealed {}
+
+// `atomic_t` implements atomic operations on `i32`.
+impl AtomicImpl for i32 {}
+
+// `atomic64_t` implements atomic operations on `i64`.
+impl AtomicImpl for i64 {}
+
+// This macro generates the function signature with the given argument list and return type.
+macro_rules! declare_atomic_method {
+ (
+ $func:ident($($arg:ident : $arg_type:ty),*) $(-> $ret:ty)?
+ ) => {
+ paste!(
+ #[doc = concat!("Atomic ", stringify!($func))]
+ #[doc = "# Safety"]
+ #[doc = "- Any pointer passed to the function has to be a valid pointer"]
+ #[doc = "- Accesses must not cause data races per LKMM:"]
+ #[doc = " - Atomic read racing with normal read, normal write or atomic write is not data race."]
+ #[doc = " - Atomic write racing with normal read or normal write is data-race, unless the"]
+ #[doc = " normal accesses are done at C side and considered as immune to data"]
+ #[doc = " races, e.g. CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC."]
+ unsafe fn [< atomic_ $func >]($($arg: $arg_type,)*) $(-> $ret)?;
+ );
+ };
+ (
+ $func:ident [$variant:ident $($rest:ident)*]($($arg_sig:tt)*) $(-> $ret:ty)?
+ ) => {
+ paste!(
+ declare_atomic_method!(
+ [< $func _ $variant >]($($arg_sig)*) $(-> $ret)?
+ );
+ );
+
+ declare_atomic_method!(
+ $func [$($rest)*]($($arg_sig)*) $(-> $ret)?
+ );
+ };
+ (
+ $func:ident []($($arg_sig:tt)*) $(-> $ret:ty)?
+ ) => {
+ declare_atomic_method!(
+ $func($($arg_sig)*) $(-> $ret)?
+ );
+ }
+}
+
+// This macro generates the function implementation with the given argument list and return type,
+// and it will replace the "call(...)" expression with "$ctype _ $func" to call the real C function.
+macro_rules! impl_atomic_method {
+ (
+ ($ctype:ident) $func:ident($($arg:ident: $arg_type:ty),*) $(-> $ret:ty)? {
+ call($($c_arg:expr),*)
+ }
+ ) => {
+ paste!(
+ #[inline(always)]
+ unsafe fn [< atomic_ $func >]($($arg: $arg_type,)*) $(-> $ret)? {
+ // SAFETY: Per the function safety requirement, all pointers are valid, and accesses
+ // won't cause data races per LKMM.
+ unsafe { [< $ctype _ $func >]($($c_arg,)*) }
+ }
+ );
+ };
+ (
+ ($ctype:ident) $func:ident[$variant:ident $($rest:ident)*]($($arg_sig:tt)*) $(-> $ret:ty)? {
+ call($($arg:tt)*)
+ }
+ ) => {
+ paste!(
+ impl_atomic_method!(
+ ($ctype) [< $func _ $variant >]($($arg_sig)*) $( -> $ret)? {
+ call($($arg)*)
+ }
+ );
+ );
+ impl_atomic_method!(
+ ($ctype) $func [$($rest)*]($($arg_sig)*) $( -> $ret)? {
+ call($($arg)*)
+ }
+ );
+ };
+ (
+ ($ctype:ident) $func:ident[]($($arg_sig:tt)*) $( -> $ret:ty)? {
+ call($($arg:tt)*)
+ }
+ ) => {
+ impl_atomic_method!(
+ ($ctype) $func($($arg_sig)*) $(-> $ret)? {
+ call($($arg)*)
+ }
+ );
+ }
+}
+
+// Declares the $ops trait with methods and implements the trait for `i32` and `i64`.
+macro_rules! declare_and_impl_atomic_methods {
+ ($ops:ident ($doc:literal) {
+ $(
+ $func:ident [$($variant:ident),*]($($arg_sig:tt)*) $( -> $ret:ty)? {
+ call($($arg:tt)*)
+ }
+ )*
+ }) => {
+ #[doc = $doc]
+ pub trait $ops: AtomicImpl {
+ $(
+ declare_atomic_method!(
+ $func[$($variant)*]($($arg_sig)*) $(-> $ret)?
+ );
+ )*
+ }
+
+ impl $ops for i32 {
+ $(
+ impl_atomic_method!(
+ (atomic) $func[$($variant)*]($($arg_sig)*) $(-> $ret)? {
+ call($($arg)*)
+ }
+ );
+ )*
+ }
+
+ impl $ops for i64 {
+ $(
+ impl_atomic_method!(
+ (atomic64) $func[$($variant)*]($($arg_sig)*) $(-> $ret)? {
+ call($($arg)*)
+ }
+ );
+ )*
+ }
+ }
+}
+
+declare_and_impl_atomic_methods!(
+ AtomicHasBasicOps ("Basic atomic operations") {
+ read[acquire](ptr: *mut Self) -> Self {
+ call(ptr as *mut _)
+ }
+
+ set[release](ptr: *mut Self, v: Self) {
+ call(ptr as *mut _, v)
+ }
+ }
+);
+
+declare_and_impl_atomic_methods!(
+ AtomicHasXchgOps ("Exchange and compare-and-exchange atomic operations") {
+ xchg[acquire, release, relaxed](ptr: *mut Self, v: Self) -> Self {
+ call(ptr as *mut _, v)
+ }
+
+ cmpxchg[acquire, release, relaxed](ptr: *mut Self, old: Self, new: Self) -> Self {
+ call(ptr as *mut _, old, new)
+ }
+
+ try_cmpxchg[acquire, release, relaxed](ptr: *mut Self, old: *mut Self, new: Self) -> bool {
+ call(ptr as *mut _, old, new)
+ }
+ }
+);
+
+declare_and_impl_atomic_methods!(
+ AtomicHasArithmeticOps ("Atomic arithmetic operations") {
+ add[](ptr: *mut Self, v: Self) {
+ call(v, ptr as *mut _)
+ }
+
+ fetch_add[acquire, release, relaxed](ptr: *mut Self, v: Self) -> Self {
+ call(v, ptr as *mut _)
+ }
+ }
+);
--
2.39.5 (Apple Git-154)
* [PATCH v5 03/10] rust: sync: atomic: Add ordering annotation types
2025-06-18 16:49 [PATCH v5 00/10] LKMM generic atomics in Rust Boqun Feng
2025-06-18 16:49 ` [PATCH v5 01/10] rust: Introduce atomic API helpers Boqun Feng
2025-06-18 16:49 ` [PATCH v5 02/10] rust: sync: Add basic atomic operation mapping framework Boqun Feng
@ 2025-06-18 16:49 ` Boqun Feng
2025-06-19 10:31 ` Peter Zijlstra
` (2 more replies)
2025-06-18 16:49 ` [PATCH v5 04/10] rust: sync: atomic: Add generic atomics Boqun Feng
` (7 subsequent siblings)
10 siblings, 3 replies; 82+ messages in thread
From: Boqun Feng @ 2025-06-18 16:49 UTC (permalink / raw)
To: linux-kernel, rust-for-linux, lkmm, linux-arch
Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra,
Mark Rutland, Wedson Almeida Filho, Viresh Kumar, Lyude Paul,
Ingo Molnar, Mitchell Levy, Paul E. McKenney, Greg Kroah-Hartman,
Linus Torvalds, Thomas Gleixner
Preparation for atomic primitives. Instead of a suffix like _acquire, a
method parameter along with the corresponding generic parameter will be
used to specify the ordering of an atomic operation. For example,
atomic load() can be defined as:
impl<T: ...> Atomic<T> {
pub fn load<O: AcquireOrRelaxed>(&self, _o: O) -> T { ... }
}
and acquire users would do:
let r = x.load(Acquire);
relaxed users:
let r = x.load(Relaxed);
whereas doing the following:
let r = x.load(Release);
will cause a compiler error.
Compared to suffixes, it's easier to tell what ordering variants an
operation has, and it also makes it easier to unify the implementation
of all ordering variants in one method via generics. The `IS_RELAXED`
and `TYPE` associated consts are for generic functions to pick up the
particular implementation specified by an ordering annotation.
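For example, a rough sketch (the complete version appears in a later
patch of this series) of a generic method branching on the associated
const; since `IS_RELAXED` is a const, the branch is resolved at compile
time:

    pub fn load<Ordering: AcquireOrRelaxed>(&self, _: Ordering) -> T {
        // Compile-time dispatch: only one of the two calls survives
        // in the generated code.
        if Ordering::IS_RELAXED {
            // ... call the relaxed read implementation ...
        } else {
            // ... call the acquire read implementation ...
        }
    }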
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
rust/kernel/sync/atomic.rs | 3 +
rust/kernel/sync/atomic/ordering.rs | 106 ++++++++++++++++++++++++++++
2 files changed, 109 insertions(+)
create mode 100644 rust/kernel/sync/atomic/ordering.rs
diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index 65e41dba97b7..9fe5d81fc2a9 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -17,3 +17,6 @@
//! [`LKMM`]: srctree/tools/memory-model/
pub mod ops;
+pub mod ordering;
+
+pub use ordering::{Acquire, Full, Relaxed, Release};
diff --git a/rust/kernel/sync/atomic/ordering.rs b/rust/kernel/sync/atomic/ordering.rs
new file mode 100644
index 000000000000..96757574ed7d
--- /dev/null
+++ b/rust/kernel/sync/atomic/ordering.rs
@@ -0,0 +1,106 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Memory orderings.
+//!
+//! The semantics of these orderings follows the [`LKMM`] definitions and rules.
+//!
+//! - [`Acquire`] and [`Release`] are similar to their counterparts in the Rust memory model.
+//! - [`Full`] means "fully-ordered", that is:
+//!   - It provides ordering between all the preceding memory accesses and the annotated operation.
+//!   - It provides ordering between the annotated operation and all the following memory accesses.
+//!   - It provides ordering between all the preceding memory accesses and all the following memory
+//!     accesses.
+//!   - All the orderings are as strong as a full memory barrier (i.e. `smp_mb()`).
+//! - [`Relaxed`] is similar to its counterpart in the Rust memory model, except that dependency
+//! orderings are also honored in [`LKMM`]. Dependency orderings are described in "DEPENDENCY
+//! RELATIONS" in [`LKMM`]'s [`explanation`].
+//!
+//! [`LKMM`]: srctree/tools/memory-model/
+//! [`explanation`]: srctree/tools/memory-model/Documentation/explanation.txt
+
+/// The annotation type for relaxed memory ordering.
+pub struct Relaxed;
+
+/// The annotation type for acquire memory ordering.
+pub struct Acquire;
+
+/// The annotation type for release memory ordering.
+pub struct Release;
+
+/// The annotation type for fully-ordered memory ordering.
+pub struct Full;
+
+/// Describes the exact memory ordering.
+pub enum OrderingType {
+ /// Relaxed ordering.
+ Relaxed,
+ /// Acquire ordering.
+ Acquire,
+ /// Release ordering.
+ Release,
+ /// Fully-ordered.
+ Full,
+}
+
+mod internal {
+ /// Unit types for ordering annotation.
+ ///
+ /// Sealed trait, can be only implemented inside atomic mod.
+ pub trait OrderingUnit {
+ /// Describes the exact memory ordering.
+ const TYPE: super::OrderingType;
+ }
+}
+
+impl internal::OrderingUnit for Relaxed {
+ const TYPE: OrderingType = OrderingType::Relaxed;
+}
+
+impl internal::OrderingUnit for Acquire {
+ const TYPE: OrderingType = OrderingType::Acquire;
+}
+
+impl internal::OrderingUnit for Release {
+ const TYPE: OrderingType = OrderingType::Release;
+}
+
+impl internal::OrderingUnit for Full {
+ const TYPE: OrderingType = OrderingType::Full;
+}
+
+/// The trait bound for annotating operations that should support all orderings.
+pub trait All: internal::OrderingUnit {}
+
+impl All for Relaxed {}
+impl All for Acquire {}
+impl All for Release {}
+impl All for Full {}
+
+/// The trait bound for operations that only support acquire or relaxed ordering.
+pub trait AcquireOrRelaxed: All {
+ /// Describes whether an ordering is relaxed or not.
+ const IS_RELAXED: bool = false;
+}
+
+impl AcquireOrRelaxed for Acquire {}
+
+impl AcquireOrRelaxed for Relaxed {
+ const IS_RELAXED: bool = true;
+}
+
+/// The trait bound for operations that only support release or relaxed ordering.
+pub trait ReleaseOrRelaxed: All {
+ /// Describes whether an ordering is relaxed or not.
+ const IS_RELAXED: bool = false;
+}
+
+impl ReleaseOrRelaxed for Release {}
+
+impl ReleaseOrRelaxed for Relaxed {
+ const IS_RELAXED: bool = true;
+}
+
+/// The trait bound for operations that only support relaxed ordering.
+pub trait RelaxedOnly: AcquireOrRelaxed + ReleaseOrRelaxed + All {}
+
+impl RelaxedOnly for Relaxed {}
--
2.39.5 (Apple Git-154)
* [PATCH v5 04/10] rust: sync: atomic: Add generic atomics
2025-06-18 16:49 [PATCH v5 00/10] LKMM generic atomics in Rust Boqun Feng
` (2 preceding siblings ...)
2025-06-18 16:49 ` [PATCH v5 03/10] rust: sync: atomic: Add ordering annotation types Boqun Feng
@ 2025-06-18 16:49 ` Boqun Feng
2025-06-21 11:32 ` Gary Guo
2025-06-26 12:15 ` Andreas Hindborg
2025-06-18 16:49 ` [PATCH v5 05/10] rust: sync: atomic: Add atomic {cmp,}xchg operations Boqun Feng
` (6 subsequent siblings)
10 siblings, 2 replies; 82+ messages in thread
From: Boqun Feng @ 2025-06-18 16:49 UTC (permalink / raw)
To: linux-kernel, rust-for-linux, lkmm, linux-arch
Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra,
Mark Rutland, Wedson Almeida Filho, Viresh Kumar, Lyude Paul,
Ingo Molnar, Mitchell Levy, Paul E. McKenney, Greg Kroah-Hartman,
Linus Torvalds, Thomas Gleixner
To allow Rust code to use LKMM atomics, a generic `Atomic<T>` is added.
Currently `T` needs to be Send + Copy because these are the
straightforward usages and all basic types support this. The trait
`AllowAtomic` should only be implemented inside the atomic mod until the
generic atomic framework is mature enough (unless the implementer is a
`#[repr(transparent)]` new type).
`AtomicImpl` types are automatically `AllowAtomic`, and so far only the
basic operations load() and store() are introduced.
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
rust/kernel/sync/atomic.rs | 2 +
rust/kernel/sync/atomic/generic.rs | 258 +++++++++++++++++++++++++++++
2 files changed, 260 insertions(+)
create mode 100644 rust/kernel/sync/atomic/generic.rs
diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index 9fe5d81fc2a9..a01e44eec380 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -16,7 +16,9 @@
//!
//! [`LKMM`]: srctree/tools/memory-model/
+pub mod generic;
pub mod ops;
pub mod ordering;
+pub use generic::Atomic;
pub use ordering::{Acquire, Full, Relaxed, Release};
diff --git a/rust/kernel/sync/atomic/generic.rs b/rust/kernel/sync/atomic/generic.rs
new file mode 100644
index 000000000000..73c26f9cf6b8
--- /dev/null
+++ b/rust/kernel/sync/atomic/generic.rs
@@ -0,0 +1,258 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Generic atomic primitives.
+
+use super::ops::*;
+use super::ordering::*;
+use crate::types::Opaque;
+
+/// A generic atomic variable.
+///
+/// `T` must impl [`AllowAtomic`], that is, an [`AtomicImpl`] has to be chosen.
+///
+/// # Invariants
+///
+/// Doing an atomic operation while holding a reference to [`Self`] won't cause a data race; this
+/// is guaranteed by the safety requirement of [`Self::from_ptr`] and the extra safety requirement
+/// on the usage of pointers returned by [`Self::as_ptr`].
+#[repr(transparent)]
+pub struct Atomic<T: AllowAtomic>(Opaque<T>);
+
+// SAFETY: `Atomic<T>` is safe to share among execution contexts because all accesses are atomic.
+unsafe impl<T: AllowAtomic> Sync for Atomic<T> {}
+
+/// Atomics that support basic atomic operations.
+///
+/// TODO: Currently the [`AllowAtomic`] types are restricted to basic integer types (and their
+/// transparent new types). In the future, we could extend the scope to more data types when there
+/// is a clear and meaningful usage, but for now, [`AllowAtomic`] should only be implemented inside
+/// the atomic mod for the restricted types mentioned above.
+///
+/// # Safety
+///
+/// [`Self`] must have the same size and alignment as [`Self::Repr`].
+pub unsafe trait AllowAtomic: Sized + Send + Copy {
+ /// The backing atomic implementation type.
+ type Repr: AtomicImpl;
+
+ /// Converts into a [`Self::Repr`].
+ fn into_repr(self) -> Self::Repr;
+
+ /// Converts from a [`Self::Repr`].
+ fn from_repr(repr: Self::Repr) -> Self;
+}
+
+// An `AtomicImpl` is automatically an `AllowAtomic`.
+//
+// SAFETY: `T::Repr` is `Self` (i.e. `T`), so they have the same size and alignment.
+unsafe impl<T: AtomicImpl> AllowAtomic for T {
+ type Repr = Self;
+
+ fn into_repr(self) -> Self::Repr {
+ self
+ }
+
+ fn from_repr(repr: Self::Repr) -> Self {
+ repr
+ }
+}
+
+impl<T: AllowAtomic> Atomic<T> {
+ /// Creates a new atomic.
+ pub const fn new(v: T) -> Self {
+ Self(Opaque::new(v))
+ }
+
+ /// Creates a reference to [`Self`] from a pointer.
+ ///
+ /// # Safety
+ ///
+ /// - `ptr` has to be a valid pointer.
+ /// - `ptr` has to be valid for both reads and writes for the whole lifetime `'a`.
+ /// - For the whole lifetime of `'a`, other accesses to the object cannot cause data races
+ /// (defined by [`LKMM`]) against atomic operations on the returned reference.
+ ///
+ /// [`LKMM`]: srctree/tools/memory-model
+ ///
+ /// # Examples
+ ///
+ /// Using [`Atomic::from_ptr()`] combined with [`Atomic::load()`] or [`Atomic::store()`] can
+ /// achieve the same functionality as `READ_ONCE()`/`smp_load_acquire()` or
+ /// `WRITE_ONCE()`/`smp_store_release()` on the C side:
+ ///
+ /// ```rust
+ /// # use kernel::types::Opaque;
+ /// use kernel::sync::atomic::{Atomic, Relaxed, Release};
+ ///
+ /// // Assume there is a C struct `Foo`.
+ /// mod cbindings {
+ /// #[repr(C)]
+ /// pub(crate) struct foo { pub(crate) a: i32, pub(crate) b: i32 }
+ /// }
+ ///
+ /// let tmp = Opaque::new(cbindings::foo { a: 1, b: 2});
+ ///
+ /// // struct foo *foo_ptr = ..;
+ /// let foo_ptr = tmp.get();
+ ///
+ /// // SAFETY: `foo_ptr` is a valid pointer, and `.a` is in-bounds.
+ /// let foo_a_ptr = unsafe { core::ptr::addr_of_mut!((*foo_ptr).a) };
+ ///
+ /// // a = READ_ONCE(foo_ptr->a);
+ /// //
+ /// // SAFETY: `foo_a_ptr` is a valid pointer for reads, and all accesses to it are atomic, so
+ /// // there is no data race.
+ /// let a = unsafe { Atomic::from_ptr(foo_a_ptr) }.load(Relaxed);
+ /// # assert_eq!(a, 1);
+ ///
+ /// // smp_store_release(&foo_ptr->a, 2);
+ /// //
+ /// // SAFETY: `foo_a_ptr` is a valid pointer for writes, and all accesses to it are atomic, so
+ /// // there is no data race.
+ /// unsafe { Atomic::from_ptr(foo_a_ptr) }.store(2, Release);
+ /// ```
+ ///
+ /// However, this should only be used when communicating with the C side or manipulating a C struct.
+ pub unsafe fn from_ptr<'a>(ptr: *mut T) -> &'a Self
+ where
+ T: Sync,
+ {
+ // CAST: `T` is transparent to `Atomic<T>`.
+ // SAFETY: Per the function safety requirement, `ptr` is a valid pointer and the object will
+ // live long enough. It's safe to return a `&Atomic<T>` because the function safety
+ // requirement guarantees other accesses won't cause data races.
+ unsafe { &*ptr.cast::<Self>() }
+ }
+
+ /// Returns a pointer to the underlying atomic variable.
+ ///
+ /// Extra safety requirement on using the return pointer: the operations done via the pointer
+ /// cannot cause data races defined by [`LKMM`].
+ ///
+ /// [`LKMM`]: srctree/tools/memory-model
+ pub const fn as_ptr(&self) -> *mut T {
+ self.0.get()
+ }
+
+ /// Returns a mutable reference to the underlying atomic variable.
+ ///
+ /// This is safe because the mutable reference to the atomic variable guarantees exclusive
+ /// access.
+ pub fn get_mut(&mut self) -> &mut T {
+ // SAFETY: `self.as_ptr()` is a valid pointer to `T`, and the object has already been
+ // initialized. `&mut self` guarantees exclusive access, so it's safe to reborrow
+ // mutably.
+ unsafe { &mut *self.as_ptr() }
+ }
+}
+
+impl<T: AllowAtomic> Atomic<T>
+where
+ T::Repr: AtomicHasBasicOps,
+{
+ /// Loads the value from the atomic variable.
+ ///
+ /// # Examples
+ ///
+ /// Simple usages:
+ ///
+ /// ```rust
+ /// use kernel::sync::atomic::{Atomic, Relaxed};
+ ///
+ /// let x = Atomic::new(42i32);
+ ///
+ /// assert_eq!(42, x.load(Relaxed));
+ ///
+ /// let x = Atomic::new(42i64);
+ ///
+ /// assert_eq!(42, x.load(Relaxed));
+ /// ```
+ ///
+ /// Customized new types in [`Atomic`]:
+ ///
+ /// ```rust
+ /// use kernel::sync::atomic::{generic::AllowAtomic, Atomic, Relaxed};
+ ///
+ /// #[derive(Clone, Copy)]
+ /// #[repr(transparent)]
+ /// struct NewType(u32);
+ ///
+ /// // SAFETY: `NewType` is transparent to `u32`, which has the same size and alignment as
+ /// // `i32`.
+ /// unsafe impl AllowAtomic for NewType {
+ /// type Repr = i32;
+ ///
+ /// fn into_repr(self) -> Self::Repr {
+ /// self.0 as i32
+ /// }
+ ///
+ /// fn from_repr(repr: Self::Repr) -> Self {
+ /// NewType(repr as u32)
+ /// }
+ /// }
+ ///
+ /// let n = Atomic::new(NewType(0));
+ ///
+ /// assert_eq!(0, n.load(Relaxed).0);
+ /// ```
+ #[doc(alias("atomic_read", "atomic64_read"))]
+ #[inline(always)]
+ pub fn load<Ordering: AcquireOrRelaxed>(&self, _: Ordering) -> T {
+ let a = self.as_ptr().cast::<T::Repr>();
+
+ // SAFETY:
+ // - For calling the atomic_read*() function:
+ // - `self.as_ptr()` is a valid pointer, and per the safety requirement of `AllowAtomic`,
+ // a `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a valid pointer,
+ // - per the type invariants, the following atomic operation won't cause data races.
+ // - For the extra safety requirement on the usage of pointers returned by `self.as_ptr()`:
+ // - atomic operations are used here.
+ let v = unsafe {
+ if Ordering::IS_RELAXED {
+ T::Repr::atomic_read(a)
+ } else {
+ T::Repr::atomic_read_acquire(a)
+ }
+ };
+
+ T::from_repr(v)
+ }
+
+ /// Stores a value to the atomic variable.
+ ///
+ /// # Examples
+ ///
+ /// ```rust
+ /// use kernel::sync::atomic::{Atomic, Relaxed};
+ ///
+ /// let x = Atomic::new(42i32);
+ ///
+ /// assert_eq!(42, x.load(Relaxed));
+ ///
+ /// x.store(43, Relaxed);
+ ///
+ /// assert_eq!(43, x.load(Relaxed));
+ /// ```
+ ///
+ #[doc(alias("atomic_set", "atomic64_set"))]
+ #[inline(always)]
+ pub fn store<Ordering: ReleaseOrRelaxed>(&self, v: T, _: Ordering) {
+ let v = T::into_repr(v);
+ let a = self.as_ptr().cast::<T::Repr>();
+
+ // SAFETY:
+ // - For calling the atomic_set*() function:
+ // - `self.as_ptr()` is a valid pointer, and per the safety requirement of `AllowAtomic`,
+ // a `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a valid pointer,
+ // - per the type invariants, the following atomic operation won't cause data races.
+ // - For the extra safety requirement on the usage of pointers returned by `self.as_ptr()`:
+ // - atomic operations are used here.
+ unsafe {
+ if Ordering::IS_RELAXED {
+ T::Repr::atomic_set(a, v)
+ } else {
+ T::Repr::atomic_set_release(a, v)
+ }
+ };
+ }
+}
--
2.39.5 (Apple Git-154)
* [PATCH v5 05/10] rust: sync: atomic: Add atomic {cmp,}xchg operations
2025-06-18 16:49 [PATCH v5 00/10] LKMM generic atomics in Rust Boqun Feng
` (3 preceding siblings ...)
2025-06-18 16:49 ` [PATCH v5 04/10] rust: sync: atomic: Add generic atomics Boqun Feng
@ 2025-06-18 16:49 ` Boqun Feng
2025-06-21 11:37 ` Gary Guo
` (2 more replies)
2025-06-18 16:49 ` [PATCH v5 06/10] rust: sync: atomic: Add the framework of arithmetic operations Boqun Feng
` (5 subsequent siblings)
10 siblings, 3 replies; 82+ messages in thread
From: Boqun Feng @ 2025-06-18 16:49 UTC (permalink / raw)
To: linux-kernel, rust-for-linux, lkmm, linux-arch
Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra,
Mark Rutland, Wedson Almeida Filho, Viresh Kumar, Lyude Paul,
Ingo Molnar, Mitchell Levy, Paul E. McKenney, Greg Kroah-Hartman,
Linus Torvalds, Thomas Gleixner
xchg() and cmpxchg() are basic atomic operations. Provide these based
on the C APIs.
Note that cmpxchg() uses a function signature similar to
compare_exchange() in Rust std: it returns a `Result`, where `Ok(old)`
means the operation succeeded and `Err(old)` means the operation failed.
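One usage pattern this `Result` return enables is the usual
compare-and-swap retry loop; a sketch using the interfaces from this
series (assuming an `Atomic<i32>` named `x`):

    let mut old = x.load(Relaxed);
    loop {
        // On failure, `Err(cur)` carries the latest value, which
        // becomes the next expected `old` for the retry.
        match x.cmpxchg(old, old + 1, Full) {
            Ok(_) => break,
            Err(cur) => old = cur,
        }
    }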
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
rust/kernel/sync/atomic/generic.rs | 154 +++++++++++++++++++++++++++++
1 file changed, 154 insertions(+)
diff --git a/rust/kernel/sync/atomic/generic.rs b/rust/kernel/sync/atomic/generic.rs
index 73c26f9cf6b8..bcdbeea45dd8 100644
--- a/rust/kernel/sync/atomic/generic.rs
+++ b/rust/kernel/sync/atomic/generic.rs
@@ -256,3 +256,157 @@ pub fn store<Ordering: ReleaseOrRelaxed>(&self, v: T, _: Ordering) {
};
}
}
+
+impl<T: AllowAtomic> Atomic<T>
+where
+ T::Repr: AtomicHasXchgOps,
+{
+ /// Atomic exchange.
+ ///
+ /// # Examples
+ ///
+ /// ```rust
+ /// use kernel::sync::atomic::{Atomic, Acquire, Relaxed};
+ ///
+ /// let x = Atomic::new(42);
+ ///
+ /// assert_eq!(42, x.xchg(52, Acquire));
+ /// assert_eq!(52, x.load(Relaxed));
+ /// ```
+ #[doc(alias("atomic_xchg", "atomic64_xchg"))]
+ #[inline(always)]
+ pub fn xchg<Ordering: All>(&self, v: T, _: Ordering) -> T {
+ let v = T::into_repr(v);
+ let a = self.as_ptr().cast::<T::Repr>();
+
+ // SAFETY:
+ // - For calling the atomic_xchg*() function:
+ // - `self.as_ptr()` is a valid pointer, and per the safety requirement of `AllowAtomic`,
+ // a `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a valid pointer,
+ // - per the type invariants, the following atomic operation won't cause data races.
+ // - For the extra safety requirement on the usage of pointers returned by `self.as_ptr()`:
+ // - atomic operations are used here.
+ let ret = unsafe {
+ match Ordering::TYPE {
+ OrderingType::Full => T::Repr::atomic_xchg(a, v),
+ OrderingType::Acquire => T::Repr::atomic_xchg_acquire(a, v),
+ OrderingType::Release => T::Repr::atomic_xchg_release(a, v),
+ OrderingType::Relaxed => T::Repr::atomic_xchg_relaxed(a, v),
+ }
+ };
+
+ T::from_repr(ret)
+ }
+
+ /// Atomic compare and exchange.
+ ///
+ /// Compare: The comparison is done via a byte-level comparison between the atomic variable
+ /// and the `old` value.
+ ///
+ /// Ordering: A successful cmpxchg provides the ordering indicated by the `Ordering` type
+ /// parameter; a failed one doesn't provide any ordering, and the read part of a failed
+ /// cmpxchg should be treated as a relaxed read.
+ ///
+ /// Returns `Ok(value)` if the cmpxchg succeeds, in which case `value` is guaranteed to be
+ /// equal to `old`; otherwise returns `Err(value)`, where `value` is the value of the atomic
+ /// variable at the time the cmpxchg happened.
+ ///
+ /// # Examples
+ ///
+ /// ```rust
+ /// use kernel::sync::atomic::{Atomic, Full, Relaxed};
+ ///
+ /// let x = Atomic::new(42);
+ ///
+ /// // Checks whether cmpxchg succeeded.
+ /// let success = x.cmpxchg(52, 64, Relaxed).is_ok();
+ /// # assert!(!success);
+ ///
+ /// // Checks whether cmpxchg failed.
+ /// let failure = x.cmpxchg(52, 64, Relaxed).is_err();
+ /// # assert!(failure);
+ ///
+ /// // Uses the old value if failed, probably re-try cmpxchg.
+ /// match x.cmpxchg(52, 64, Relaxed) {
+ /// Ok(_) => { },
+ /// Err(old) => {
+ /// // do something with `old`.
+ /// # assert_eq!(old, 42);
+ /// }
+ /// }
+ ///
+ /// // Uses the latest value regardless of success, same as atomic_cmpxchg() in C.
+ /// let latest = x.cmpxchg(42, 64, Full).unwrap_or_else(|old| old);
+ /// # assert_eq!(42, latest);
+ /// assert_eq!(64, x.load(Relaxed));
+ /// ```
+ #[doc(alias(
+ "atomic_cmpxchg",
+ "atomic64_cmpxchg",
+ "atomic_try_cmpxchg",
+ "atomic64_try_cmpxchg"
+ ))]
+ #[inline(always)]
+ pub fn cmpxchg<Ordering: All>(&self, mut old: T, new: T, o: Ordering) -> Result<T, T> {
+ // Note on code generation:
+ //
+ // try_cmpxchg() is used to implement cmpxchg(), and if the helper functions are inlined,
+ // the compiler is able to figure out that branch is not needed if the users don't care
+ // about whether the operation succeeds or not. One exception is on x86, due to commit
+ // 44fe84459faf ("locking/atomic: Fix atomic_try_cmpxchg() semantics"), the
+ // atomic_try_cmpxchg() on x86 has a branch even if the caller doesn't care about the
+ // success of cmpxchg and only wants to use the old value. For example, for code like:
+ //
+ // let latest = x.cmpxchg(42, 64, Full).unwrap_or_else(|old| old);
+ //
+ // It will still generate code:
+ //
+ // movl $0x40, %ecx
+ // movl $0x34, %eax
+ // lock
+ // cmpxchgl %ecx, 0x4(%rsp)
+ // jne 1f
+ // 2:
+ // ...
+ // 1: movl %eax, %ecx
+ // jmp 2b
+ //
+ // This might be "fixed" by introducing a try_cmpxchg_exclusive() that knows the "*old"
+ // location in the C function is always safe to write.
+ if self.try_cmpxchg(&mut old, new, o) {
+ Ok(old)
+ } else {
+ Err(old)
+ }
+ }
+
+ /// Atomic compare and exchange and returns whether the operation succeeds.
+ ///
+ /// "Compare" and "Ordering" part are the same as [`Atomic::cmpxchg()`].
+ ///
+ /// Returns `true` means the cmpxchg succeeds otherwise returns `false` with `old` updated to
+ /// the value of the atomic variable when cmpxchg was happening.
+ #[inline(always)]
+ fn try_cmpxchg<Ordering: All>(&self, old: &mut T, new: T, _: Ordering) -> bool {
+ let old = (old as *mut T).cast::<T::Repr>();
+ let new = T::into_repr(new);
+ let a = self.0.get().cast::<T::Repr>();
+
+ // SAFETY:
+ // - For calling the atomic_try_cmpxchg*() function:
+ // - `self.as_ptr()` is a valid pointer, and per the safety requirement of `AllowAtomic`,
+ // a `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a valid pointer,
+ // - per the type invariants, the following atomic operation won't cause data races.
+ // - `old` is a valid pointer to write because it comes from a mutable reference.
+ // - For the extra safety requirement on the usage of pointers returned by `self.as_ptr()`:
+ // - atomic operations are used here.
+ unsafe {
+ match Ordering::TYPE {
+ OrderingType::Full => T::Repr::atomic_try_cmpxchg(a, old, new),
+ OrderingType::Acquire => T::Repr::atomic_try_cmpxchg_acquire(a, old, new),
+ OrderingType::Release => T::Repr::atomic_try_cmpxchg_release(a, old, new),
+ OrderingType::Relaxed => T::Repr::atomic_try_cmpxchg_relaxed(a, old, new),
+ }
+ }
+ }
+}
--
2.39.5 (Apple Git-154)
^ permalink raw reply related [flat|nested] 82+ messages in thread
* [PATCH v5 06/10] rust: sync: atomic: Add the framework of arithmetic operations
2025-06-18 16:49 [PATCH v5 00/10] LKMM generic atomics in Rust Boqun Feng
` (4 preceding siblings ...)
2025-06-18 16:49 ` [PATCH v5 05/10] rust: sync: atomic: Add atomic {cmp,}xchg operations Boqun Feng
@ 2025-06-18 16:49 ` Boqun Feng
2025-06-21 11:41 ` Gary Guo
2025-06-26 12:39 ` Andreas Hindborg
2025-06-18 16:49 ` [PATCH v5 07/10] rust: sync: atomic: Add Atomic<u{32,64}> Boqun Feng
` (4 subsequent siblings)
10 siblings, 2 replies; 82+ messages in thread
From: Boqun Feng @ 2025-06-18 16:49 UTC (permalink / raw)
To: linux-kernel, rust-for-linux, lkmm, linux-arch
Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra,
Mark Rutland, Wedson Almeida Filho, Viresh Kumar, Lyude Paul,
Ingo Molnar, Mitchell Levy, Paul E. McKenney, Greg Kroah-Hartman,
Linus Torvalds, Thomas Gleixner
One important set of atomic operations is the arithmetic operations,
i.e. add(), sub(), fetch_add(), add_return(), etc. However it may not
make sense for all the types that implement `AllowAtomic` to have
arithmetic operations, for example a `Foo(u32)` may not have a
reasonable add() or sub(). Besides, subword types (`u8` and `u16`)
currently don't have atomic arithmetic operations even on the C side
and might not have them in Rust in the future either (because they are
usually suboptimal on a few architectures). Therefore add a subtrait of
`AllowAtomic` describing which types have and can do atomic arithmetic
operations.
A few things about this `AllowAtomicArithmetic` trait:
* It has an associated type `Delta` instead of using
`AllowAtomic::Repr`, because a `Bar(u32)` (whose `Repr` is `i32`) may
not want an `add(&self, i32)`, but an `add(&self, u32)`; see the sketch
after this list.
* `AtomicImpl` types already implement an `AtomicHasArithmeticOps`
trait, so add a blanket implementation for them. In the future, `i8`
and `i16` may impl `AtomicImpl` but not `AtomicHasArithmeticOps` if
arithmetic operations are not available.
Only add() and fetch_add() are added. The rest will be added in the
future.
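A sketch of what the `Delta` type enables for a hypothetical
transparent newtype `Bar` (made up for illustration, not part of the
patch):

    #[repr(transparent)]
    #[derive(Clone, Copy)]
    struct Bar(u32);

    // SAFETY: `Bar` is a transparent wrapper of `u32`, which has the same
    // size and alignment as `i32`.
    unsafe impl AllowAtomic for Bar {
        type Repr = i32;

        fn into_repr(self) -> Self::Repr {
            self.0 as Self::Repr
        }

        fn from_repr(repr: Self::Repr) -> Self {
            Bar(repr as u32)
        }
    }

    impl AllowAtomicArithmetic for Bar {
        // Callers pass a `u32` delta even though `Repr` is `i32`.
        type Delta = u32;

        fn delta_into_repr(d: Self::Delta) -> Self::Repr {
            d as Self::Repr
        }
    }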
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
rust/kernel/sync/atomic/generic.rs | 101 +++++++++++++++++++++++++++++
1 file changed, 101 insertions(+)
diff --git a/rust/kernel/sync/atomic/generic.rs b/rust/kernel/sync/atomic/generic.rs
index bcdbeea45dd8..8c5bd90b2619 100644
--- a/rust/kernel/sync/atomic/generic.rs
+++ b/rust/kernel/sync/atomic/generic.rs
@@ -57,6 +57,23 @@ fn from_repr(repr: Self::Repr) -> Self {
}
}
+/// Atomics that allow arithmetic operations with an integer type.
+pub trait AllowAtomicArithmetic: AllowAtomic {
+ /// The delta type for arithmetic operations.
+ type Delta;
+
+ /// Converts [`Self::Delta`] into the representation of the atomic type.
+ fn delta_into_repr(d: Self::Delta) -> Self::Repr;
+}
+
+impl<T: AtomicImpl + AtomicHasArithmeticOps> AllowAtomicArithmetic for T {
+ type Delta = Self;
+
+ fn delta_into_repr(d: Self::Delta) -> Self::Repr {
+ d
+ }
+}
+
impl<T: AllowAtomic> Atomic<T> {
/// Creates a new atomic.
pub const fn new(v: T) -> Self {
@@ -410,3 +427,87 @@ fn try_cmpxchg<Ordering: All>(&self, old: &mut T, new: T, _: Ordering) -> bool {
}
}
}
+
+impl<T: AllowAtomicArithmetic> Atomic<T>
+where
+ T::Repr: AtomicHasArithmeticOps,
+{
+ /// Atomic add.
+ ///
+ /// The addition is a wrapping addition.
+ ///
+ /// # Examples
+ ///
+ /// ```rust
+ /// use kernel::sync::atomic::{Atomic, Relaxed};
+ ///
+ /// let x = Atomic::new(42);
+ ///
+ /// assert_eq!(42, x.load(Relaxed));
+ ///
+ /// x.add(12, Relaxed);
+ ///
+ /// assert_eq!(54, x.load(Relaxed));
+ /// ```
+ #[inline(always)]
+ pub fn add<Ordering: RelaxedOnly>(&self, v: T::Delta, _: Ordering) {
+ let v = T::delta_into_repr(v);
+ let a = self.as_ptr().cast::<T::Repr>();
+
+ // SAFETY:
+ // - For calling the atomic_add() function:
+ // - `self.as_ptr()` is a valid pointer, and per the safety requirement of `AllowAtomic`,
+ // a `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a valid pointer,
+ // - per the type invariants, the following atomic operation won't cause data races.
+ // - For the extra safety requirement on the usage of pointers returned by `self.as_ptr()`:
+ // - atomic operations are used here.
+ unsafe {
+ T::Repr::atomic_add(a, v);
+ }
+ }
+
+ /// Atomic fetch and add.
+ ///
+ /// The addition is a wrapping addition.
+ ///
+ /// # Examples
+ ///
+ /// ```rust
+ /// use kernel::sync::atomic::{Atomic, Acquire, Full, Relaxed};
+ ///
+ /// let x = Atomic::new(42);
+ ///
+ /// assert_eq!(42, x.load(Relaxed));
+ ///
+ /// assert_eq!(54, { x.fetch_add(12, Acquire); x.load(Relaxed) });
+ ///
+ /// let x = Atomic::new(42);
+ ///
+ /// assert_eq!(42, x.load(Relaxed));
+ ///
+ /// assert_eq!(54, { x.fetch_add(12, Full); x.load(Relaxed) } );
+ /// ```
+ #[inline(always)]
+ pub fn fetch_add<Ordering: All>(&self, v: T::Delta, _: Ordering) -> T {
+ let v = T::delta_into_repr(v);
+ let a = self.as_ptr().cast::<T::Repr>();
+
+ // SAFETY:
+ // - For calling the atomic_fetch_add*() function:
+ // - `self.as_ptr()` is a valid pointer, and per the safety requirement of `AllowAtomic`,
+ // a `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a valid pointer,
+ // - per the type invariants, the following atomic operation won't cause data races.
+ // - For the extra safety requirement on the usage of pointers returned by `self.as_ptr()`:
+ // - atomic operations are used here.
+ let ret = unsafe {
+ match Ordering::TYPE {
+ OrderingType::Full => T::Repr::atomic_fetch_add(a, v),
+ OrderingType::Acquire => T::Repr::atomic_fetch_add_acquire(a, v),
+ OrderingType::Release => T::Repr::atomic_fetch_add_release(a, v),
+ OrderingType::Relaxed => T::Repr::atomic_fetch_add_relaxed(a, v),
+ }
+ };
+
+ T::from_repr(ret)
+ }
+}
--
2.39.5 (Apple Git-154)
^ permalink raw reply related [flat|nested] 82+ messages in thread
* [PATCH v5 07/10] rust: sync: atomic: Add Atomic<u{32,64}>
2025-06-18 16:49 [PATCH v5 00/10] LKMM generic atomics in Rust Boqun Feng
` (5 preceding siblings ...)
2025-06-18 16:49 ` [PATCH v5 06/10] rust: sync: atomic: Add the framework of arithmetic operations Boqun Feng
@ 2025-06-18 16:49 ` Boqun Feng
2025-06-26 12:47 ` Andreas Hindborg
2025-06-18 16:49 ` [PATCH v5 08/10] rust: sync: atomic: Add Atomic<{usize,isize}> Boqun Feng
` (3 subsequent siblings)
10 siblings, 1 reply; 82+ messages in thread
From: Boqun Feng @ 2025-06-18 16:49 UTC (permalink / raw)
To: linux-kernel, rust-for-linux, lkmm, linux-arch
Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra,
Mark Rutland, Wedson Almeida Filho, Viresh Kumar, Lyude Paul,
Ingo Molnar, Mitchell Levy, Paul E. McKenney, Greg Kroah-Hartman,
Linus Torvalds, Thomas Gleixner
Add generic atomic support for basic unsigned types that have an
`AtomicImpl` with the same size and alignment.
Unit tests are added, covering Atomic<i32>, Atomic<i64>, Atomic<u32>
and Atomic<u64>.
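For example, the arithmetic operations on an Atomic<u64> wrap around on
overflow (a usage sketch only, not part of the patch):

    use kernel::sync::atomic::{Atomic, Relaxed};

    let hits = Atomic::new(u64::MAX);

    // add() is a wrapping addition, hence this wraps to 0 rather than
    // overflowing.
    hits.add(1, Relaxed);
    assert_eq!(0, hits.load(Relaxed));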
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
rust/kernel/sync/atomic.rs | 111 +++++++++++++++++++++++++++++++++++++
1 file changed, 111 insertions(+)
diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index a01e44eec380..965a3db554d9 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -22,3 +22,114 @@
pub use generic::Atomic;
pub use ordering::{Acquire, Full, Relaxed, Release};
+
+// SAFETY: `u64` and `i64` have the same size and alignment.
+unsafe impl generic::AllowAtomic for u64 {
+ type Repr = i64;
+
+ fn into_repr(self) -> Self::Repr {
+ self as Self::Repr
+ }
+
+ fn from_repr(repr: Self::Repr) -> Self {
+ repr as Self
+ }
+}
+
+impl generic::AllowAtomicArithmetic for u64 {
+ type Delta = u64;
+
+ fn delta_into_repr(d: Self::Delta) -> Self::Repr {
+ d as Self::Repr
+ }
+}
+
+// SAFETY: `u32` and `i32` have the same size and alignment.
+unsafe impl generic::AllowAtomic for u32 {
+ type Repr = i32;
+
+ fn into_repr(self) -> Self::Repr {
+ self as Self::Repr
+ }
+
+ fn from_repr(repr: Self::Repr) -> Self {
+ repr as Self
+ }
+}
+
+impl generic::AllowAtomicArithmetic for u32 {
+ type Delta = u32;
+
+ fn delta_into_repr(d: Self::Delta) -> Self::Repr {
+ d as Self::Repr
+ }
+}
+
+use crate::macros::kunit_tests;
+
+#[kunit_tests(rust_atomics)]
+mod tests {
+ use super::*;
+
+ // Call $fn($val) with each $type of $val.
+ macro_rules! for_each_type {
+ ($val:literal in [$($type:ty),*] $fn:expr) => {
+ $({
+ let v: $type = $val;
+
+ $fn(v);
+ })*
+ }
+ }
+
+ #[test]
+ fn atomic_basic_tests() {
+ for_each_type!(42 in [i32, i64, u32, u64] |v| {
+ let x = Atomic::new(v);
+
+ assert_eq!(v, x.load(Relaxed));
+ });
+ }
+
+ #[test]
+ fn atomic_xchg_tests() {
+ for_each_type!(42 in [i32, i64, u32, u64] |v| {
+ let x = Atomic::new(v);
+
+ let old = v;
+ let new = v + 1;
+
+ assert_eq!(old, x.xchg(new, Full));
+ assert_eq!(new, x.load(Relaxed));
+ });
+ }
+
+ #[test]
+ fn atomic_cmpxchg_tests() {
+ for_each_type!(42 in [i32, i64, u32, u64] |v| {
+ let x = Atomic::new(v);
+
+ let old = v;
+ let new = v + 1;
+
+ assert_eq!(Err(old), x.cmpxchg(new, new, Full));
+ assert_eq!(old, x.load(Relaxed));
+ assert_eq!(Ok(old), x.cmpxchg(old, new, Relaxed));
+ assert_eq!(new, x.load(Relaxed));
+ });
+ }
+
+ #[test]
+ fn atomic_arithmetic_tests() {
+ for_each_type!(42 in [i32, i64, u32, u64] |v| {
+ let x = Atomic::new(v);
+
+ assert_eq!(v, x.fetch_add(12, Full));
+ assert_eq!(v + 12, x.load(Relaxed));
+
+ x.add(13, Relaxed);
+
+ assert_eq!(v + 25, x.load(Relaxed));
+ });
+ }
+}
--
2.39.5 (Apple Git-154)
^ permalink raw reply related [flat|nested] 82+ messages in thread
* [PATCH v5 08/10] rust: sync: atomic: Add Atomic<{usize,isize}>
2025-06-18 16:49 [PATCH v5 00/10] LKMM generic atomics in Rust Boqun Feng
` (6 preceding siblings ...)
2025-06-18 16:49 ` [PATCH v5 07/10] rust: sync: atomic: Add Atomic<u{32,64}> Boqun Feng
@ 2025-06-18 16:49 ` Boqun Feng
2025-06-26 12:49 ` Andreas Hindborg
2025-06-18 16:49 ` [PATCH v5 09/10] rust: sync: atomic: Add Atomic<*mut T> Boqun Feng
` (2 subsequent siblings)
10 siblings, 1 reply; 82+ messages in thread
From: Boqun Feng @ 2025-06-18 16:49 UTC (permalink / raw)
To: linux-kernel, rust-for-linux, lkmm, linux-arch
Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra,
Mark Rutland, Wedson Almeida Filho, Viresh Kumar, Lyude Paul,
Ingo Molnar, Mitchell Levy, Paul E. McKenney, Greg Kroah-Hartman,
Linus Torvalds, Thomas Gleixner
Add generic atomic support for `usize` and `isize`. Note that instead
of mapping directly to `atomic_long_t`, the representation type
(`AllowAtomic::Repr`) is selected based on CONFIG_64BIT. This avoids
the need to create `atomic_long_*` helpers, which could reduce the
kernel binary size if inline helpers are not available.
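With this, pointer-sized counters work unchanged on both 32-bit and
64-bit kernels, for example (a usage sketch only, not part of the
patch):

    use kernel::sync::atomic::{Atomic, Full, Relaxed};

    let len = Atomic::new(0usize);

    // `Repr` is `i32` or `i64` depending on CONFIG_64BIT, but callers
    // only ever see `usize`.
    assert_eq!(0, len.fetch_add(1, Full));
    assert_eq!(1, len.load(Relaxed));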
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
rust/kernel/sync/atomic.rs | 58 +++++++++++++++++++++++++++++++++++---
1 file changed, 54 insertions(+), 4 deletions(-)
diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index 965a3db554d9..829511f4d582 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -65,6 +65,56 @@ fn delta_into_repr(d: Self::Delta) -> Self::Repr {
}
}
+// SAFETY: `usize` has the same size and alignment as `i64` on 64-bit and as `i32` on 32-bit.
+unsafe impl generic::AllowAtomic for usize {
+ #[cfg(CONFIG_64BIT)]
+ type Repr = i64;
+ #[cfg(not(CONFIG_64BIT))]
+ type Repr = i32;
+
+ fn into_repr(self) -> Self::Repr {
+ self as Self::Repr
+ }
+
+ fn from_repr(repr: Self::Repr) -> Self {
+ repr as Self
+ }
+}
+
+impl generic::AllowAtomicArithmetic for usize {
+ type Delta = usize;
+
+ fn delta_into_repr(d: Self::Delta) -> Self::Repr {
+ d as Self::Repr
+ }
+}
+
+// SAFETY: `isize` has the same size and alignment as `i64` on 64-bit and as `i32` on 32-bit.
+unsafe impl generic::AllowAtomic for isize {
+ #[cfg(CONFIG_64BIT)]
+ type Repr = i64;
+ #[cfg(not(CONFIG_64BIT))]
+ type Repr = i32;
+
+ fn into_repr(self) -> Self::Repr {
+ self as Self::Repr
+ }
+
+ fn from_repr(repr: Self::Repr) -> Self {
+ repr as Self
+ }
+}
+
+impl generic::AllowAtomicArithmetic for isize {
+ type Delta = isize;
+
+ fn delta_into_repr(d: Self::Delta) -> Self::Repr {
+ d as Self::Repr
+ }
+}
+
use crate::macros::kunit_tests;
#[kunit_tests(rust_atomics)]
@@ -84,7 +134,7 @@ macro_rules! for_each_type {
#[test]
fn atomic_basic_tests() {
- for_each_type!(42 in [i32, i64, u32, u64] |v| {
+ for_each_type!(42 in [i32, i64, u32, u64, isize, usize] |v| {
let x = Atomic::new(v);
assert_eq!(v, x.load(Relaxed));
@@ -93,7 +143,7 @@ fn atomic_basic_tests() {
#[test]
fn atomic_xchg_tests() {
- for_each_type!(42 in [i32, i64, u32, u64] |v| {
+ for_each_type!(42 in [i32, i64, u32, u64, isize, usize] |v| {
let x = Atomic::new(v);
let old = v;
@@ -106,7 +156,7 @@ fn atomic_xchg_tests() {
#[test]
fn atomic_cmpxchg_tests() {
- for_each_type!(42 in [i32, i64, u32, u64] |v| {
+ for_each_type!(42 in [i32, i64, u32, u64, isize, usize] |v| {
let x = Atomic::new(v);
let old = v;
@@ -121,7 +171,7 @@ fn atomic_cmpxchg_tests() {
#[test]
fn atomic_arithmetic_tests() {
- for_each_type!(42 in [i32, i64, u32, u64] |v| {
+ for_each_type!(42 in [i32, i64, u32, u64, isize, usize] |v| {
let x = Atomic::new(v);
assert_eq!(v, x.fetch_add(12, Full));
--
2.39.5 (Apple Git-154)
^ permalink raw reply related [flat|nested] 82+ messages in thread
* [PATCH v5 09/10] rust: sync: atomic: Add Atomic<*mut T>
2025-06-18 16:49 [PATCH v5 00/10] LKMM generic atomics in Rust Boqun Feng
` (7 preceding siblings ...)
2025-06-18 16:49 ` [PATCH v5 08/10] rust: sync: atomic: Add Atomic<{usize,isize}> Boqun Feng
@ 2025-06-18 16:49 ` Boqun Feng
2025-06-18 16:49 ` [PATCH v5 10/10] rust: sync: Add memory barriers Boqun Feng
2025-06-18 20:22 ` [PATCH v5 00/10] LKMM generic atomics in Rust Alice Ryhl
10 siblings, 0 replies; 82+ messages in thread
From: Boqun Feng @ 2025-06-18 16:49 UTC (permalink / raw)
To: linux-kernel, rust-for-linux, lkmm, linux-arch
Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra,
Mark Rutland, Wedson Almeida Filho, Viresh Kumar, Lyude Paul,
Ingo Molnar, Mitchell Levy, Paul E. McKenney, Greg Kroah-Hartman,
Linus Torvalds, Thomas Gleixner
Add atomic support for raw pointer values. Similar to `isize` and
`usize`, the representation type is selected based on CONFIG_64BIT.
`*mut T` is not `Send`, however `Atomic<*mut T>` definitely needs to be
`Sync`, and that's the whole point of atomics: being able to have
multiple shared references in different threads so that they can sync
with each other. As a result, a pointer value will be transferred from
one thread to another via `Atomic<*mut T>`:

    <thread 1>                      <thread 2>

    x.store(p1, Relaxed);
                                    let p = x.load(Relaxed);

This means a raw pointer value (`*mut T`) needs to be able to transfer
across thread boundaries, which is essentially `Send`.
To reflect this in the type system, and based on the fact that pointer
values can be transferred safely (only dereferencing them is unsafe),
as suggested by Alice, extend the `AllowAtomic` trait to include a
customized `Send` semantics, that is: an `impl AllowAtomic` type has to
be safe to transfer across thread boundaries.
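A minimal sketch of that handoff with the actual API (shown
single-threaded here so it stays self-contained; in real usage the two
sides run on different threads):

    use core::ptr;
    use kernel::sync::atomic::{Atomic, Acquire, Release};

    let mut v = 42i32;
    let p1 = ptr::from_mut(&mut v);

    let x = Atomic::new(ptr::null_mut::<i32>());

    // Thread 1: publishes the pointer.
    x.store(p1, Release);

    // Thread 2: consumes the pointer.
    let p = x.load(Acquire);
    assert_eq!(p, p1);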
Suggested-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
rust/kernel/sync/atomic.rs | 48 ++++++++++++++++++++++++++++++
rust/kernel/sync/atomic/generic.rs | 16 ++++++++--
2 files changed, 61 insertions(+), 3 deletions(-)
diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index 829511f4d582..70920146935f 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -114,6 +114,22 @@ fn delta_into_repr(d: Self::Delta) -> Self::Repr {
d as Self::Repr
}
}
+// SAFETY: A `*mut T` has the same size and alignment as `i64` on 64-bit and as `i32` on 32-bit.
+// And it's safe to transfer the ownership of a pointer value to another thread.
+unsafe impl<T> generic::AllowAtomic for *mut T {
+ #[cfg(CONFIG_64BIT)]
+ type Repr = i64;
+ #[cfg(not(CONFIG_64BIT))]
+ type Repr = i32;
+
+ fn into_repr(self) -> Self::Repr {
+ self as Self::Repr
+ }
+
+ fn from_repr(repr: Self::Repr) -> Self {
+ repr as Self
+ }
+}
use crate::macros::kunit_tests;
@@ -139,6 +155,9 @@ fn atomic_basic_tests() {
assert_eq!(v, x.load(Relaxed));
});
+
+ let x = Atomic::new(core::ptr::null_mut::<i32>());
+ assert!(x.load(Relaxed).is_null());
}
#[test]
@@ -182,4 +201,33 @@ fn atomic_arithmetic_tests() {
assert_eq!(v + 25, x.load(Relaxed));
});
}
+
+ #[test]
+ fn atomic_ptr_tests() -> crate::error::Result {
+ use crate::alloc::{flags::GFP_KERNEL, KBox};
+ use core::ptr;
+
+ let x = Atomic::new(ptr::null_mut::<i32>());
+
+ assert!(x.load(Relaxed).is_null());
+
+ let new = KBox::new(42, GFP_KERNEL)?;
+ x.store(ptr::from_mut(KBox::leak(new)), Release);
+
+ let ptr = x.load(Relaxed);
+ assert!(!ptr.is_null());
+
+ // SAFETY: `ptr` is a valid pointer from `KBox::leak()` and the address dependency
+ // guarantees observation of the initialization of `KBox`.
+ assert_eq!(42, unsafe { ptr.read_volatile() });
+
+ x.xchg(ptr::null_mut(), Relaxed);
+ assert!(x.load(Relaxed).is_null());
+
+ // SAFETY: `ptr` is a valid pointer from `KBox::leak()` and no one is currently referencing
+ // the pointer, so it's safe to convert the ownership back to a `KBox`.
+ drop(unsafe { KBox::from_raw(ptr) });
+
+ Ok(())
+ }
}
diff --git a/rust/kernel/sync/atomic/generic.rs b/rust/kernel/sync/atomic/generic.rs
index 8c5bd90b2619..f496774c1686 100644
--- a/rust/kernel/sync/atomic/generic.rs
+++ b/rust/kernel/sync/atomic/generic.rs
@@ -18,6 +18,10 @@
#[repr(transparent)]
pub struct Atomic<T: AllowAtomic>(Opaque<T>);
+// SAFETY: `Atomic<T>` is safe to send between execution contexts, because `T` is `AllowAtomic` and
+// `AllowAtomic`'s safety requirement guarantees that.
+unsafe impl<T: AllowAtomic> Send for Atomic<T> {}
+
// SAFETY: `Atomic<T>` is safe to share among execution contexts because all accesses are atomic.
unsafe impl<T: AllowAtomic> Sync for Atomic<T> {}
@@ -30,8 +34,13 @@ unsafe impl<T: AllowAtomic> Sync for Atomic<T> {}
///
/// # Safety
///
-/// [`Self`] must have the same size and alignment as [`Self::Repr`].
-pub unsafe trait AllowAtomic: Sized + Send + Copy {
+/// - [`Self`] must have the same size and alignment as [`Self::Repr`].
+/// - The implementer must guarantee it's safe to transfer ownership from one execution context to
+/// another. This would normally mean the type has to be [`Send`], but because `*mut T` is not
+/// [`Send`] yet is a basic type that needs to support atomic operations, this safety requirement
+/// is part of the [`AllowAtomic`] trait instead. This safety requirement is automatically
+/// satisfied if the type is [`Send`].
+pub unsafe trait AllowAtomic: Sized + Copy {
/// The backing atomic implementation type.
type Repr: AtomicImpl;
@@ -44,7 +53,8 @@ pub unsafe trait AllowAtomic: Sized + Send + Copy {
// An `AtomicImpl` is automatically an `AllowAtomic`.
//
-// SAFETY: `T::Repr` is `Self` (i.e. `T`), so they have the same size and alignment.
+// SAFETY: `T::Repr` is `Self` (i.e. `T`), so they have the same size and alignment. And all
+// `AtomicImpl` types are `Send`.
unsafe impl<T: AtomicImpl> AllowAtomic for T {
type Repr = Self;
--
2.39.5 (Apple Git-154)
^ permalink raw reply related [flat|nested] 82+ messages in thread
* [PATCH v5 10/10] rust: sync: Add memory barriers
2025-06-18 16:49 [PATCH v5 00/10] LKMM generic atomics in Rust Boqun Feng
` (8 preceding siblings ...)
2025-06-18 16:49 ` [PATCH v5 09/10] rust: sync: atomic: Add Atomic<*mut T> Boqun Feng
@ 2025-06-18 16:49 ` Boqun Feng
2025-06-26 13:36 ` Andreas Hindborg
2025-06-18 20:22 ` [PATCH v5 00/10] LKMM generic atomics in Rust Alice Ryhl
10 siblings, 1 reply; 82+ messages in thread
From: Boqun Feng @ 2025-06-18 16:49 UTC (permalink / raw)
To: linux-kernel, rust-for-linux, lkmm, linux-arch
Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra,
Mark Rutland, Wedson Almeida Filho, Viresh Kumar, Lyude Paul,
Ingo Molnar, Mitchell Levy, Paul E. McKenney, Greg Kroah-Hartman,
Linus Torvalds, Thomas Gleixner
Memory barriers are building blocks for concurrent code, hence provide
a minimal set of them.
The compiler barrier, barrier(), is implemented in inline asm instead
of using core::sync::atomic::compiler_fence() because the memory models
differ: the kernel's atomics are implemented in inline asm, therefore
the compiler barrier should be implemented in inline asm as well. Also,
barrier() is currently only public to the kernel crate until there's a
reasonable driver usage.
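For example, the barriers allow implementing the classic
message-passing pattern (a sketch; the writer()/reader() split is made
up for illustration):

    use kernel::sync::atomic::{Atomic, Relaxed};
    use kernel::sync::barrier::{smp_rmb, smp_wmb};

    fn writer(data: &Atomic<i32>, flag: &Atomic<i32>) {
        data.store(42, Relaxed);
        // Orders the store to `data` before the store to `flag`.
        smp_wmb();
        flag.store(1, Relaxed);
    }

    fn reader(data: &Atomic<i32>, flag: &Atomic<i32>) -> Option<i32> {
        if flag.load(Relaxed) == 0 {
            return None;
        }
        // Orders the load of `flag` before the load of `data`.
        smp_rmb();
        // Guaranteed to observe 42 once `flag` is observed set.
        Some(data.load(Relaxed))
    }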
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
rust/helpers/barrier.c | 18 ++++++++++
rust/helpers/helpers.c | 1 +
rust/kernel/sync.rs | 1 +
rust/kernel/sync/barrier.rs | 67 +++++++++++++++++++++++++++++++++++++
4 files changed, 87 insertions(+)
create mode 100644 rust/helpers/barrier.c
create mode 100644 rust/kernel/sync/barrier.rs
diff --git a/rust/helpers/barrier.c b/rust/helpers/barrier.c
new file mode 100644
index 000000000000..cdf28ce8e511
--- /dev/null
+++ b/rust/helpers/barrier.c
@@ -0,0 +1,18 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <asm/barrier.h>
+
+void rust_helper_smp_mb(void)
+{
+ smp_mb();
+}
+
+void rust_helper_smp_wmb(void)
+{
+ smp_wmb();
+}
+
+void rust_helper_smp_rmb(void)
+{
+ smp_rmb();
+}
diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c
index 83e89f6a68fb..8ddfc8f84e87 100644
--- a/rust/helpers/helpers.c
+++ b/rust/helpers/helpers.c
@@ -9,6 +9,7 @@
#include "atomic.c"
#include "auxiliary.c"
+#include "barrier.c"
#include "blk.c"
#include "bug.c"
#include "build_assert.c"
diff --git a/rust/kernel/sync.rs b/rust/kernel/sync.rs
index b620027e0641..c7c0e552bafe 100644
--- a/rust/kernel/sync.rs
+++ b/rust/kernel/sync.rs
@@ -11,6 +11,7 @@
mod arc;
pub mod atomic;
+pub mod barrier;
mod condvar;
pub mod lock;
mod locked_by;
diff --git a/rust/kernel/sync/barrier.rs b/rust/kernel/sync/barrier.rs
new file mode 100644
index 000000000000..36a5c70e6716
--- /dev/null
+++ b/rust/kernel/sync/barrier.rs
@@ -0,0 +1,67 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Memory barriers.
+//!
+//! These primitives have the same semantics as their C counterparts; the precise definitions of
+//! the semantics can be found at [`LKMM`].
+//!
+//! [`LKMM`]: srctree/tools/memory-model/
+
+/// A compiler barrier.
+///
+/// An explicit compiler barrier function that prevents the compiler from moving the memory
+/// accesses on either side of it to the other side.
+pub(crate) fn barrier() {
+ // By default, Rust inline asms are treated as being able to access any memory or flags, hence
+ // it suffices as a compiler barrier.
+ //
+ // SAFETY: An empty asm block should be safe.
+ unsafe {
+ core::arch::asm!("");
+ }
+}
+
+/// A full memory barrier.
+///
+/// A barrier function that prevents both the compiler and the CPU from moving the memory accesses
+/// on either side of it to the other side.
+pub fn smp_mb() {
+ if cfg!(CONFIG_SMP) {
+ // SAFETY: `smp_mb()` is safe to call.
+ unsafe {
+ bindings::smp_mb();
+ }
+ } else {
+ barrier();
+ }
+}
+
+/// A write-write memory barrier.
+///
+/// A barrier function that prevents both the compiler and the CPU from moving the memory write
+/// accesses on either side of it to the other side.
+pub fn smp_wmb() {
+ if cfg!(CONFIG_SMP) {
+ // SAFETY: `smp_wmb()` is safe to call.
+ unsafe {
+ bindings::smp_wmb();
+ }
+ } else {
+ barrier();
+ }
+}
+
+/// A read-read memory barrier.
+///
+/// A barrier function that prevents both the compiler and the CPU from moving the memory read
+/// accesses on either side of it to the other side.
+pub fn smp_rmb() {
+ if cfg!(CONFIG_SMP) {
+ // SAFETY: `smp_rmb()` is safe to call.
+ unsafe {
+ bindings::smp_rmb();
+ }
+ } else {
+ barrier();
+ }
+}
--
2.39.5 (Apple Git-154)
^ permalink raw reply related [flat|nested] 82+ messages in thread
* Re: [PATCH v5 00/10] LKMM generic atomics in Rust
2025-06-18 16:49 [PATCH v5 00/10] LKMM generic atomics in Rust Boqun Feng
` (9 preceding siblings ...)
2025-06-18 16:49 ` [PATCH v5 10/10] rust: sync: Add memory barriers Boqun Feng
@ 2025-06-18 20:22 ` Alice Ryhl
10 siblings, 0 replies; 82+ messages in thread
From: Alice Ryhl @ 2025-06-18 20:22 UTC (permalink / raw)
To: Boqun Feng
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Gary Guo, Björn Roy Baron, Benno Lossin,
Andreas Hindborg, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
On Wed, Jun 18, 2025 at 6:49 PM Boqun Feng <boqun.feng@gmail.com> wrote:
>
> Hi,
>
> v5 for LKMM atomics in Rust, you can find the previous versions:
>
> v4: https://lore.kernel.org/rust-for-linux/20250609224615.27061-1-boqun.feng@gmail.com/
> v3: https://lore.kernel.org/rust-for-linux/20250421164221.1121805-1-boqun.feng@gmail.com/
> v2: https://lore.kernel.org/rust-for-linux/20241101060237.1185533-1-boqun.feng@gmail.com/
> v1: https://lore.kernel.org/rust-for-linux/20240612223025.1158537-1-boqun.feng@gmail.com/
> wip: https://lore.kernel.org/rust-for-linux/20240322233838.868874-1-boqun.feng@gmail.com/
>
> The reason of providing our own LKMM atomics is because memory model
> wise Rust native memory model is not guaranteed to work with LKMM and
> having only one memory model throughout the kernel is always better for
> reasoning.
>
> Changes since v4:
>
> * Rename the ordering enum type and corresponding constant in trait All
> as per feedback from Benno.
>
> * Add more tests for Atomic<{i,u}size> and Atomic<*mut T>.
>
> * Rebase on v6.16-rc2
>
>
> Still please advise how we want to route the patches and for future
> ones:
>
> * Option #1: via tip, I can send a pull request to Ingo at -rc4 or -rc5.
> * Option #2: via rust, I can send a pull request to Miguel at -rc4 or -rc5.
> * Option #3: via my own tree or atomic group in kernel.org, I can send
> a pull request to Linus at 6.17 merge window.
>
> My default option is #1, but feel free to make any suggestion.
>
> Regards,
> Boqun
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 03/10] rust: sync: atomic: Add ordering annotation types
2025-06-18 16:49 ` [PATCH v5 03/10] rust: sync: atomic: Add ordering annotation types Boqun Feng
@ 2025-06-19 10:31 ` Peter Zijlstra
2025-06-19 12:19 ` Alice Ryhl
2025-06-19 13:29 ` Boqun Feng
2025-06-21 11:18 ` Gary Guo
2025-06-26 12:36 ` Andreas Hindborg
2 siblings, 2 replies; 82+ messages in thread
From: Peter Zijlstra @ 2025-06-19 10:31 UTC (permalink / raw)
To: Boqun Feng
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Gary Guo, Björn Roy Baron, Benno Lossin,
Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich,
Will Deacon, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
On Wed, Jun 18, 2025 at 09:49:27AM -0700, Boqun Feng wrote:
> +//! Memory orderings.
> +//!
> +//! The semantics of these orderings follows the [`LKMM`] definitions and rules.
> +//!
> +//! - [`Acquire`] and [`Release`] are similar to their counterpart in Rust memory model.
So I've no clue what the Rust memory model states, and I'm assuming
it is very similar to the C11 model. I have also forgotten what LKMM
states :/
Do they all agree on what RELEASE+ACQUIRE makes?
> +//! - [`Full`] means "fully-ordered", that is:
> +//! - It provides ordering between all the preceding memory accesses and the annotated operation.
> +//! - It provides ordering between the annotated operation and all the following memory accesses.
> +//! - It provides ordering between all the preceding memory accesses and all the fllowing memory
> +//! accesses.
> +//! - All the orderings are the same strong as a full memory barrier (i.e. `smp_mb()`).
s/strong/strength/ ?
> +//! - [`Relaxed`] is similar to the counterpart in Rust memory model, except that dependency
> +//! orderings are also honored in [`LKMM`]. Dependency orderings are described in "DEPENDENCY
> +//! RELATIONS" in [`LKMM`]'s [`explanation`].
> +//!
> +//! [`LKMM`]: srctree/tools/memory-model/
> +//! [`explanation`]: srctree/tools/memory-model/Documentation/explanation.txt
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 03/10] rust: sync: atomic: Add ordering annotation types
2025-06-19 10:31 ` Peter Zijlstra
@ 2025-06-19 12:19 ` Alice Ryhl
2025-06-19 13:29 ` Boqun Feng
1 sibling, 0 replies; 82+ messages in thread
From: Alice Ryhl @ 2025-06-19 12:19 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Boqun Feng, linux-kernel, rust-for-linux, lkmm, linux-arch,
Miguel Ojeda, Alex Gaynor, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Trevor Gross, Danilo Krummrich,
Will Deacon, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
On Thu, Jun 19, 2025 at 12:31 PM Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Wed, Jun 18, 2025 at 09:49:27AM -0700, Boqun Feng wrote:
>
> > +//! Memory orderings.
> > +//!
> > +//! The semantics of these orderings follows the [`LKMM`] definitions and rules.
> > +//!
> > +//! - [`Acquire`] and [`Release`] are similar to their counterpart in Rust memory model.
>
> So I've no clue what the Rust memory model states, and I'm assuming
> it is very similar to the C11 model. I have also forgotten what LKMM
> states :/
>
> Do they all agree on what RELEASE+ACQUIRE makes?
Rust just uses the C11 model outright, so yes it's the same. There's
no separate Rust memory model as far as atomics are concerned.
Alice
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 03/10] rust: sync: atomic: Add ordering annotation types
2025-06-19 10:31 ` Peter Zijlstra
2025-06-19 12:19 ` Alice Ryhl
@ 2025-06-19 13:29 ` Boqun Feng
2025-06-19 14:32 ` Peter Zijlstra
1 sibling, 1 reply; 82+ messages in thread
From: Boqun Feng @ 2025-06-19 13:29 UTC (permalink / raw)
To: Peter Zijlstra
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Gary Guo, Björn Roy Baron, Benno Lossin,
Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich,
Will Deacon, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
On Thu, Jun 19, 2025 at 12:31:55PM +0200, Peter Zijlstra wrote:
> On Wed, Jun 18, 2025 at 09:49:27AM -0700, Boqun Feng wrote:
>
> > +//! Memory orderings.
> > +//!
> > +//! The semantics of these orderings follows the [`LKMM`] definitions and rules.
> > +//!
> > +//! - [`Acquire`] and [`Release`] are similar to their counterpart in Rust memory model.
>
> So I've no clue what the Rust memory model states, and I'm assuming
> it is very similar to the C11 model. I have also forgotten what LKMM
> states :/
>
> Do they all agree on what RELEASE+ACQUIRE makes?
>
I think the question is irrelevant here, because we are implementing
LKMM atomics in Rust using primitives from C, so there's no C11/Rust
memory model in the picture for kernel Rust.
But I think they do agree. I assume you are mostly asking whether
RELEASE(a) + ACQUIRE(b) (i.e. release and acquire on different
variables) makes a TSO barrier [1]? We don't make it a TSO barrier in
LKMM either (only unlock(a)+lock(b) is a TSO barrier) and neither does
the C11/Rust memory model.
[1]: https://lore.kernel.org/lkml/20211202005053.3131071-1-paulmck@kernel.org/
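To spell it out with the atomics from this series, the shape in
question is the store-buffering one (a sketch; p0() and p1() are made
up and run concurrently on two CPUs):

    use kernel::sync::atomic::{Atomic, Acquire, Release};

    fn p0(x: &Atomic<i32>, y: &Atomic<i32>) -> i32 {
        x.store(1, Release);
        y.load(Acquire)
    }

    fn p1(x: &Atomic<i32>, y: &Atomic<i32>) -> i32 {
        y.store(1, Release);
        x.load(Acquire)
    }

    // Both returning 0 is allowed: Release only orders the store against
    // preceding accesses, and Acquire only orders the load against
    // following accesses, so nothing orders the store before the load
    // (no W->R ordering, hence no TSO).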
> > +//! - [`Full`] means "fully-ordered", that is:
> > +//! - It provides ordering between all the preceding memory accesses and the annotated operation.
> > +//! - It provides ordering between the annotated operation and all the following memory accesses.
> > +//! - It provides ordering between all the preceding memory accesses and all the fllowing memory
> > +//! accesses.
> > +//! - All the orderings are the same strong as a full memory barrier (i.e. `smp_mb()`).
>
> s/strong/strength/ ?
>
Fixed locally.
Regards,
Boqun
> > +//! - [`Relaxed`] is similar to the counterpart in Rust memory model, except that dependency
> > +//! orderings are also honored in [`LKMM`]. Dependency orderings are described in "DEPENDENCY
> > +//! RELATIONS" in [`LKMM`]'s [`explanation`].
> > +//!
> > +//! [`LKMM`]: srctree/tools/memory-model/
> > +//! [`explanation`]: srctree/tools/memory-model/Documentation/explanation.txt
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 03/10] rust: sync: atomic: Add ordering annotation types
2025-06-19 13:29 ` Boqun Feng
@ 2025-06-19 14:32 ` Peter Zijlstra
2025-06-19 15:00 ` Boqun Feng
0 siblings, 1 reply; 82+ messages in thread
From: Peter Zijlstra @ 2025-06-19 14:32 UTC (permalink / raw)
To: Boqun Feng
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Gary Guo, Björn Roy Baron, Benno Lossin,
Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich,
Will Deacon, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
On Thu, Jun 19, 2025 at 06:29:29AM -0700, Boqun Feng wrote:
> On Thu, Jun 19, 2025 at 12:31:55PM +0200, Peter Zijlstra wrote:
> > On Wed, Jun 18, 2025 at 09:49:27AM -0700, Boqun Feng wrote:
> >
> > > +//! Memory orderings.
> > > +//!
> > > +//! The semantics of these orderings follows the [`LKMM`] definitions and rules.
> > > +//!
> > > +//! - [`Acquire`] and [`Release`] are similar to their counterpart in Rust memory model.
> >
> > So I've no clue what the Rust memory model states, and I'm assuming
> > it is very similar to the C11 model. I have also forgotten what LKMM
> > states :/
> >
> > Do they all agree on what RELEASE+ACQUIRE makes?
> >
>
> I think the question is irrelevant here, because we are implementing
> LKMM atomics in Rust using primitives from C, so no C11/Rust memory
> model in the picture for kernel Rust.
The question is relevant in so far that the comment refers to them; and
if their behaviour is different in any way, this is confusing.
> But I think they do. I assume you mostly ask whether RELEASE(a) +
> ACQUIRE(b) (i.e. release and acquire on different variables) makes a TSO
> barrier [1]? We don't make it a TSO barrier in LKMM either (only
> unlock(a)+lock(b) is a TSO barrier) and neither does C11/Rust memory
> model.
>
> [1]: https://lore.kernel.org/lkml/20211202005053.3131071-1-paulmck@kernel.org/
Right, that!
So given we build locks from atomics, this has to come from somewhere.
The simplest lock -- TAS -- is: rmw.acquire + store.release.
So while plain store.release + load.acquire might not make TSO (although
IIRC ARM added variants that do just that in an effort to aid x86
emulation), store.release + rmw.acquire must, otherwise we cannot
satisfy that unlock+lock.
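Concretely, with the API from this series, that TAS shape would be
something like (a sketch; `TasLock` is made up):

    use kernel::sync::atomic::{Atomic, Acquire, Release};

    struct TasLock {
        locked: Atomic<u32>,
    }

    impl TasLock {
        const fn new() -> Self {
            Self { locked: Atomic::new(0) }
        }

        fn lock(&self) {
            // rmw.acquire: the read-modify-write that takes the lock.
            while self.locked.xchg(1, Acquire) != 0 {
                core::hint::spin_loop();
            }
        }

        fn unlock(&self) {
            // store.release: the store that releases the lock.
            self.locked.store(0, Release);
        }
    }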
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 03/10] rust: sync: atomic: Add ordering annotation types
2025-06-19 14:32 ` Peter Zijlstra
@ 2025-06-19 15:00 ` Boqun Feng
2025-06-19 15:10 ` Peter Zijlstra
2025-06-19 18:04 ` Alan Stern
0 siblings, 2 replies; 82+ messages in thread
From: Boqun Feng @ 2025-06-19 15:00 UTC (permalink / raw)
To: Peter Zijlstra
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Gary Guo, Björn Roy Baron, Benno Lossin,
Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich,
Will Deacon, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
On Thu, Jun 19, 2025 at 04:32:14PM +0200, Peter Zijlstra wrote:
> On Thu, Jun 19, 2025 at 06:29:29AM -0700, Boqun Feng wrote:
> > On Thu, Jun 19, 2025 at 12:31:55PM +0200, Peter Zijlstra wrote:
> > > On Wed, Jun 18, 2025 at 09:49:27AM -0700, Boqun Feng wrote:
> > >
> > > > +//! Memory orderings.
> > > > +//!
> > > > +//! The semantics of these orderings follows the [`LKMM`] definitions and rules.
> > > > +//!
> > > > +//! - [`Acquire`] and [`Release`] are similar to their counterpart in Rust memory model.
> > >
> > > So I've no clue what the Rust memory model states, and I'm assuming
> > > it is very similar to the C11 model. I have also forgotten what LKMM
> > > states :/
> > >
> > > Do they all agree on what RELEASE+ACQUIRE makes?
> > >
> >
> > I think the question is irrelevant here, because we are implementing
> > LKMM atomics in Rust using primitives from C, so no C11/Rust memory
> > model in the picture for kernel Rust.
>
> The question is relevant in so far that the comment refers to them; and
> if their behaviour is different in any way, this is confusing.
>
I did use the word "similar" and before that I said "The semantics of
these orderings follows the [`LKMM`] definitions and rules." The
referring was merely to avoid repeating the part like:
- [`Acquire`] orders the load part of the operation against all
following memory operations.
- [`Release`] orders the store part of the operation against all
preceding memory operations.
because of this part, both models agree. But if you think this way is
better, I could change it.
> > But I think they do. I assume you mostly ask whether RELEASE(a) +
> > ACQUIRE(b) (i.e. release and acquire on different variables) makes a TSO
> > barrier [1]? We don't make it a TSO barrier in LKMM either (only
> > unlock(a)+lock(b) is a TSO barrier) and neither does C11/Rust memory
> > model.
> >
> > [1]: https://lore.kernel.org/lkml/20211202005053.3131071-1-paulmck@kernel.org/
>
> Right, that!
>
> So given we build locks from atomics, this has to come from somewhere.
>
> The simplest lock -- TAS -- is: rmw.acquire + store.release.
>
> So while plain store.release + load.acquire might not make TSO (although
> IIRC ARM added variants that do just that in an effort to aid x86
> emulation); store.release + rmw.acquire must, otherwise we cannot
> satisfy that unlock+lock.
>
Makes sense, so something like this in the model should work:
diff --git a/tools/memory-model/linux-kernel.cat b/tools/memory-model/linux-kernel.cat
index d7e7bf13c831..90cb6db6e335 100644
--- a/tools/memory-model/linux-kernel.cat
+++ b/tools/memory-model/linux-kernel.cat
@@ -27,7 +27,7 @@ include "lock.cat"
(* Release Acquire *)
let acq-po = [Acquire] ; po ; [M]
let po-rel = [M] ; po ; [Release]
-let po-unlock-lock-po = po ; [UL] ; (po|rf) ; [LKR] ; po
+let po-unlock-lock-po = po ; (([UL] ; (po|rf) ; [LKR]) | ([Release]; (po;rf); [Acquire & RMW])) ; po
(* Fences *)
let R4rmb = R \ Noreturn (* Reads for which rmb works *)
although I'm not sure whether there will be actual users that use this
ordering.
Regards,
Boqun
^ permalink raw reply related [flat|nested] 82+ messages in thread
* Re: [PATCH v5 03/10] rust: sync: atomic: Add ordering annotation types
2025-06-19 15:00 ` Boqun Feng
@ 2025-06-19 15:10 ` Peter Zijlstra
2025-06-19 15:15 ` Boqun Feng
2025-06-19 18:04 ` Alan Stern
1 sibling, 1 reply; 82+ messages in thread
From: Peter Zijlstra @ 2025-06-19 15:10 UTC (permalink / raw)
To: Boqun Feng
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Gary Guo, Björn Roy Baron, Benno Lossin,
Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich,
Will Deacon, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
On Thu, Jun 19, 2025 at 08:00:30AM -0700, Boqun Feng wrote:
> > So given we build locks from atomics, this has to come from somewhere.
> >
> > The simplest lock -- TAS -- is: rmw.acquire + store.release.
> >
> > So while plain store.release + load.acquire might not make TSO (although
> > IIRC ARM added variants that do just that in an effort to aid x86
> > emulation); store.release + rmw.acquire must, otherwise we cannot
> > satisfy that unlock+lock.
>
> Make sense, so something like this in the model should work:
>
> diff --git a/tools/memory-model/linux-kernel.cat b/tools/memory-model/linux-kernel.cat
> index d7e7bf13c831..90cb6db6e335 100644
> --- a/tools/memory-model/linux-kernel.cat
> +++ b/tools/memory-model/linux-kernel.cat
> @@ -27,7 +27,7 @@ include "lock.cat"
> (* Release Acquire *)
> let acq-po = [Acquire] ; po ; [M]
> let po-rel = [M] ; po ; [Release]
> -let po-unlock-lock-po = po ; [UL] ; (po|rf) ; [LKR] ; po
> +let po-unlock-lock-po = po ; (([UL] ; (po|rf) ; [LKR]) | ([Release]; (po;rf); [Acquire & RMW])) ; po
>
> (* Fences *)
> let R4rmb = R \ Noreturn (* Reads for which rmb works *)
>
I am forever struggling with cats, but that does look about right :-)
> although I'm not sure whether there will be actual users that use this
> ordering.
include/asm-generic/ticket_spinlock.h comes to mind, as I think would
kernel/locking/qspinlock.*, no?
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 03/10] rust: sync: atomic: Add ordering annotation types
2025-06-19 15:10 ` Peter Zijlstra
@ 2025-06-19 15:15 ` Boqun Feng
0 siblings, 0 replies; 82+ messages in thread
From: Boqun Feng @ 2025-06-19 15:15 UTC (permalink / raw)
To: Peter Zijlstra
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Gary Guo, Björn Roy Baron, Benno Lossin,
Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich,
Will Deacon, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
On Thu, Jun 19, 2025 at 05:10:50PM +0200, Peter Zijlstra wrote:
> On Thu, Jun 19, 2025 at 08:00:30AM -0700, Boqun Feng wrote:
>
> > > So given we build locks from atomics, this has to come from somewhere.
> > >
> > > The simplest lock -- TAS -- is: rmw.acquire + store.release.
> > >
> > > So while plain store.release + load.acquire might not make TSO (although
> > > IIRC ARM added variants that do just that in an effort to aid x86
> > > emulation); store.release + rmw.acquire must, otherwise we cannot
> > > satisfy that unlock+lock.
> >
> > Make sense, so something like this in the model should work:
> >
> > diff --git a/tools/memory-model/linux-kernel.cat b/tools/memory-model/linux-kernel.cat
> > index d7e7bf13c831..90cb6db6e335 100644
> > --- a/tools/memory-model/linux-kernel.cat
> > +++ b/tools/memory-model/linux-kernel.cat
> > @@ -27,7 +27,7 @@ include "lock.cat"
> > (* Release Acquire *)
> > let acq-po = [Acquire] ; po ; [M]
> > let po-rel = [M] ; po ; [Release]
> > -let po-unlock-lock-po = po ; [UL] ; (po|rf) ; [LKR] ; po
> > +let po-unlock-lock-po = po ; (([UL] ; (po|rf) ; [LKR]) | ([Release]; (po;rf); [Acquire & RMW])) ; po
> >
> > (* Fences *)
> > let R4rmb = R \ Noreturn (* Reads for which rmb works *)
> >
>
> I am forever struggling with cats, but that does look about right :-)
>
;-) ;-) ;-)
> > although I'm not sure whether there will be actual users that use this
> > ordering.
>
> include/asm-generic/ticket_spinlock.h comes to mind, as I thing would
> kernel/locking/qspinlock.*, no?
>
Ah, right. Although I thought users outside of lock implementations
would be nice, you're right, we do have users. Previously our reasoning
about the correctness of this particular locking ordering kinda
depended on per-architecture memory model reasoning, so modeling this
does make sense.
Regards,
Boqun
>
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 03/10] rust: sync: atomic: Add ordering annotation types
2025-06-19 15:00 ` Boqun Feng
2025-06-19 15:10 ` Peter Zijlstra
@ 2025-06-19 18:04 ` Alan Stern
1 sibling, 0 replies; 82+ messages in thread
From: Alan Stern @ 2025-06-19 18:04 UTC (permalink / raw)
To: Boqun Feng
Cc: Peter Zijlstra, linux-kernel, rust-for-linux, lkmm, linux-arch,
Miguel Ojeda, Alex Gaynor, Gary Guo, Björn Roy Baron,
Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
Danilo Krummrich, Will Deacon, Mark Rutland, Wedson Almeida Filho,
Viresh Kumar, Lyude Paul, Ingo Molnar, Mitchell Levy,
Paul E. McKenney, Greg Kroah-Hartman, Linus Torvalds,
Thomas Gleixner
On Thu, Jun 19, 2025 at 08:00:30AM -0700, Boqun Feng wrote:
> Make sense, so something like this in the model should work:
>
> diff --git a/tools/memory-model/linux-kernel.cat b/tools/memory-model/linux-kernel.cat
> index d7e7bf13c831..90cb6db6e335 100644
> --- a/tools/memory-model/linux-kernel.cat
> +++ b/tools/memory-model/linux-kernel.cat
> @@ -27,7 +27,7 @@ include "lock.cat"
> (* Release Acquire *)
> let acq-po = [Acquire] ; po ; [M]
> let po-rel = [M] ; po ; [Release]
> -let po-unlock-lock-po = po ; [UL] ; (po|rf) ; [LKR] ; po
> +let po-unlock-lock-po = po ; (([UL] ; (po|rf) ; [LKR]) | ([Release]; (po;rf); [Acquire & RMW])) ; po
>
> (* Fences *)
> let R4rmb = R \ Noreturn (* Reads for which rmb works *)
>
>
> although I'm not sure whether there will be actual users that use this
> ordering.
If we do end up making a change like this then we should also start
keeping careful track of the parts of the LKMM that are not justified by
the operational model (and vice versa), perhaps putting something about
them into the documentation. As far as I can remember,
po-unlock-lock-po is the only current example, but my memory isn't
always the greatest -- just one reason why it would be good to have
these things written down in an organized manner.
Alan
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 03/10] rust: sync: atomic: Add ordering annotation types
2025-06-18 16:49 ` [PATCH v5 03/10] rust: sync: atomic: Add ordering annotation types Boqun Feng
2025-06-19 10:31 ` Peter Zijlstra
@ 2025-06-21 11:18 ` Gary Guo
2025-06-23 2:48 ` Boqun Feng
2025-06-26 12:36 ` Andreas Hindborg
2 siblings, 1 reply; 82+ messages in thread
From: Gary Guo @ 2025-06-21 11:18 UTC (permalink / raw)
To: Boqun Feng
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
On Wed, 18 Jun 2025 09:49:27 -0700
Boqun Feng <boqun.feng@gmail.com> wrote:
> Preparation for atomic primitives. Instead of a suffix like _acquire, a
> method parameter along with the corresponding generic parameter will be
> used to specify the ordering of an atomic operations. For example,
> atomic load() can be defined as:
>
> impl<T: ...> Atomic<T> {
> pub fn load<O: AcquireOrRelaxed>(&self, _o: O) -> T { ... }
> }
>
> and acquire users would do:
>
> let r = x.load(Acquire);
>
> relaxed users:
>
> let r = x.load(Relaxed);
>
> doing the following:
>
> let r = x.load(Release);
>
> will cause a compiler error.
I quite like the design. Minor comments inline below.
>
> Compared to suffixes, it's easier to tell what ordering variants an
> operation has, and it also make it easier to unify the implementation of
> all ordering variants in one method via generic. The `IS_RELAXED` and
> `TYPE` associate consts are for generic function to pick up the
> particular implementation specified by an ordering annotation.
>
> Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> ---
> rust/kernel/sync/atomic.rs | 3 +
> rust/kernel/sync/atomic/ordering.rs | 106 ++++++++++++++++++++++++++++
> 2 files changed, 109 insertions(+)
> create mode 100644 rust/kernel/sync/atomic/ordering.rs
>
> diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
> index 65e41dba97b7..9fe5d81fc2a9 100644
> --- a/rust/kernel/sync/atomic.rs
> +++ b/rust/kernel/sync/atomic.rs
> @@ -17,3 +17,6 @@
> //! [`LKMM`]: srctree/tools/memory-mode/
>
> pub mod ops;
> +pub mod ordering;
> +
> +pub use ordering::{Acquire, Full, Relaxed, Release};
> diff --git a/rust/kernel/sync/atomic/ordering.rs b/rust/kernel/sync/atomic/ordering.rs
> new file mode 100644
> index 000000000000..96757574ed7d
> --- /dev/null
> +++ b/rust/kernel/sync/atomic/ordering.rs
> @@ -0,0 +1,106 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +//! Memory orderings.
> +//!
> +//! The semantics of these orderings follows the [`LKMM`] definitions and rules.
> +//!
> +//! - [`Acquire`] and [`Release`] are similar to their counterpart in Rust memory model.
> +//! - [`Full`] means "fully-ordered", that is:
> +//! - It provides ordering between all the preceding memory accesses and the annotated operation.
> +//! - It provides ordering between the annotated operation and all the following memory accesses.
> +//! - It provides ordering between all the preceding memory accesses and all the fllowing memory
> +//! accesses.
> +//! - All the orderings are the same strong as a full memory barrier (i.e. `smp_mb()`).
> +//! - [`Relaxed`] is similar to the counterpart in Rust memory model, except that dependency
> +//! orderings are also honored in [`LKMM`]. Dependency orderings are described in "DEPENDENCY
> +//! RELATIONS" in [`LKMM`]'s [`explanation`].
> +//!
> +//! [`LKMM`]: srctree/tools/memory-model/
> +//! [`explanation`]: srctree/tools/memory-model/Documentation/explanation.txt
> +
> +/// The annotation type for relaxed memory ordering.
> +pub struct Relaxed;
> +
> +/// The annotation type for acquire memory ordering.
> +pub struct Acquire;
> +
> +/// The annotation type for release memory ordering.
> +pub struct Release;
> +
> +/// The annotation type for fully-order memory ordering.
> +pub struct Full;
> +
> +/// Describes the exact memory ordering.
> +pub enum OrderingType {
> + /// Relaxed ordering.
> + Relaxed,
> + /// Acquire ordering.
> + Acquire,
> + /// Release ordering.
> + Release,
> + /// Fully-ordered.
> + Full,
> +}
Does this need to be public? I think this can cause confusion about
what this is in the rendered documentation.
IIUC this is for the internal atomic impl only
and is not useful otherwise. It can be moved into `internal` and
then `pub(super) use internal::OrderingType` to stop exposing it.
(Or just `#[doc(hidden)]` so it doesn't show in the docs.)
> +
> +mod internal {
> + /// Unit types for ordering annotation.
> + ///
> + /// Sealed trait, can be only implemented inside atomic mod.
> + pub trait OrderingUnit {
> + /// Describes the exact memory ordering.
> + const TYPE: super::OrderingType;
> + }
> +}
> +
> +impl internal::OrderingUnit for Relaxed {
> + const TYPE: OrderingType = OrderingType::Relaxed;
> +}
> +
> +impl internal::OrderingUnit for Acquire {
> + const TYPE: OrderingType = OrderingType::Acquire;
> +}
> +
> +impl internal::OrderingUnit for Release {
> + const TYPE: OrderingType = OrderingType::Release;
> +}
> +
> +impl internal::OrderingUnit for Full {
> + const TYPE: OrderingType = OrderingType::Full;
> +}
> +
> +/// The trait bound for annotating operations that should support all orderings.
> +pub trait All: internal::OrderingUnit {}
> +
> +impl All for Relaxed {}
> +impl All for Acquire {}
> +impl All for Release {}
> +impl All for Full {}
> +
> +/// The trait bound for operations that only support acquire or relaxed ordering.
> +pub trait AcquireOrRelaxed: All {
> + /// Describes whether an ordering is relaxed or not.
> + const IS_RELAXED: bool = false;
This should not be needed. I'd prefer the use site to just match on
`TYPE`.
> +}
> +
> +impl AcquireOrRelaxed for Acquire {}
> +
> +impl AcquireOrRelaxed for Relaxed {
> + const IS_RELAXED: bool = true;
> +}
> +
> +/// The trait bound for operations that only support release or relaxed ordering.
> +pub trait ReleaseOrRelaxed: All {
> + /// Describes whether an ordering is relaxed or not.
> + const IS_RELAXED: bool = false;
> +}
> +
> +impl ReleaseOrRelaxed for Release {}
> +
> +impl ReleaseOrRelaxed for Relaxed {
> + const IS_RELAXED: bool = true;
> +}
> +
> +/// The trait bound for operations that only support relaxed ordering.
> +pub trait RelaxedOnly: AcquireOrRelaxed + ReleaseOrRelaxed + All {}
> +
> +impl RelaxedOnly for Relaxed {}
Any reason that this is needed at all? Shouldn't it just be a non-generic
function that takes a `Relaxed` directly?
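i.e. a sketch, using the add() from a later patch in this series as the
example:

	pub fn add(&self, v: T::Delta, _: Relaxed) {
		// Same body as the generic version; no `Ordering` type
		// parameter needed.
	}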
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 04/10] rust: sync: atomic: Add generic atomics
2025-06-18 16:49 ` [PATCH v5 04/10] rust: sync: atomic: Add generic atomics Boqun Feng
@ 2025-06-21 11:32 ` Gary Guo
2025-06-23 5:19 ` Boqun Feng
2025-06-26 12:15 ` Andreas Hindborg
1 sibling, 1 reply; 82+ messages in thread
From: Gary Guo @ 2025-06-21 11:32 UTC (permalink / raw)
To: Boqun Feng
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
On Wed, 18 Jun 2025 09:49:28 -0700
Boqun Feng <boqun.feng@gmail.com> wrote:
> To allow using LKMM atomics in Rust code, a generic `Atomic<T>` is
> added. Currently `T` needs to be Send + Copy because these are the
> straightforward usages and all basic types support this. The trait
> `AllowAtomic` should only be implemented inside the atomic mod until the
> generic atomic framework is mature enough (unless the implementer is a
> `#[repr(transparent)]` new type).
>
> `AtomicImpl` types are automatically `AllowAtomic`, and so far only
> basic operations load() and store() are introduced.
>
> Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> ---
> rust/kernel/sync/atomic.rs | 2 +
> rust/kernel/sync/atomic/generic.rs | 258 +++++++++++++++++++++++++++++
> 2 files changed, 260 insertions(+)
> create mode 100644 rust/kernel/sync/atomic/generic.rs
>
> diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
> index 9fe5d81fc2a9..a01e44eec380 100644
> --- a/rust/kernel/sync/atomic.rs
> +++ b/rust/kernel/sync/atomic.rs
> @@ -16,7 +16,9 @@
> //!
> //! [`LKMM`]: srctree/tools/memory-model/
>
> +pub mod generic;
> pub mod ops;
> pub mod ordering;
>
> +pub use generic::Atomic;
> pub use ordering::{Acquire, Full, Relaxed, Release};
> diff --git a/rust/kernel/sync/atomic/generic.rs b/rust/kernel/sync/atomic/generic.rs
> new file mode 100644
> index 000000000000..73c26f9cf6b8
> --- /dev/null
> +++ b/rust/kernel/sync/atomic/generic.rs
> @@ -0,0 +1,258 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +//! Generic atomic primitives.
> +
> +use super::ops::*;
> +use super::ordering::*;
> +use crate::types::Opaque;
> +
> +/// A generic atomic variable.
> +///
> +/// `T` must impl [`AllowAtomic`], that is, an [`AtomicImpl`] has to be chosen.
> +///
> +/// # Invariants
> +///
> +/// Doing an atomic operation while holding a reference to [`Self`] won't cause a data race; this
> +/// is guaranteed by the safety requirement of [`Self::from_ptr`] and the extra safety requirement
> +/// of the usage on pointers returned by [`Self::as_ptr`].
> +#[repr(transparent)]
> +pub struct Atomic<T: AllowAtomic>(Opaque<T>);
This should store `Opaque<T::Repr>` instead.
The implementation below essentially assumes that this is
`Opaque<T::Repr>`:
* atomic ops cast this to `*mut T::Repr`
* load/store operates on `T::Repr` then converts to `T` with
`T::from_repr`/`T::into_repr`.
Note that the transparent new type restriction on `AllowAtomic` is not
sufficient for this, as I can define

	#[derive(Clone, Copy)]
	#[repr(transparent)]
	struct MyWeirdI32(pub i32);

	unsafe impl AllowAtomic for MyWeirdI32 {
		type Repr = i32;

		fn into_repr(self) -> Self::Repr {
			!self.0
		}

		fn from_repr(repr: Self::Repr) -> Self {
			MyWeirdI32(!repr)
		}
	}
Then `Atomic<MyWeirdI32>::new(MyWeirdI32(0)).load(Relaxed)` will give me
`MyWeirdI32(-1)`.
Alternatively, we should remove `into_repr`/`from_repr` and always cast
instead. In that case, `AllowAtomic` needs to have the transmutability
as a safety precondition.
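With that, load() for example could become roughly (a sketch, eliding
the acquire variant):

	pub fn load<Ordering: AcquireOrRelaxed>(&self, _: Ordering) -> T {
		let a = self.as_ptr().cast::<T::Repr>();

		// SAFETY: as in the patch, `a` is a valid pointer and all
		// accesses to it are atomic.
		let v = unsafe { T::Repr::atomic_read(a) };

		// SAFETY: relies on the new precondition that any `T::Repr`
		// read back from an `Atomic<T>` is a valid `T`.
		unsafe { core::mem::transmute_copy(&v) }
	}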
> +
> +// SAFETY: `Atomic<T>` is safe to share among execution contexts because all accesses are atomic.
> +unsafe impl<T: AllowAtomic> Sync for Atomic<T> {}
> +
> +/// Atomics that support basic atomic operations.
> +///
> +/// TODO: Currently the [`AllowAtomic`] types are restricted to basic integer types (and their
> +/// transparent new types). In the future, we could extend the scope to more data types when there
> +/// is a clear and meaningful usage, but for now, [`AllowAtomic`] should only be implemented inside
> +/// atomic mod for the restricted types mentioned above.
> +///
> +/// # Safety
> +///
> +/// [`Self`] must have the same size and alignment as [`Self::Repr`].
> +pub unsafe trait AllowAtomic: Sized + Send + Copy {
> + /// The backing atomic implementation type.
> + type Repr: AtomicImpl;
> +
> + /// Converts into a [`Self::Repr`].
> + fn into_repr(self) -> Self::Repr;
> +
> + /// Converts from a [`Self::Repr`].
> + fn from_repr(repr: Self::Repr) -> Self;
> +}
> +
> +// An `AtomicImpl` is automatically an `AllowAtomic`.
> +//
> +// SAFETY: `T::Repr` is `Self` (i.e. `T`), so they have the same size and alignment.
> +unsafe impl<T: AtomicImpl> AllowAtomic for T {
> + type Repr = Self;
> +
> + fn into_repr(self) -> Self::Repr {
> + self
> + }
> +
> + fn from_repr(repr: Self::Repr) -> Self {
> + repr
> + }
> +}
> +
> +impl<T: AllowAtomic> Atomic<T> {
> + /// Creates a new atomic.
> + pub const fn new(v: T) -> Self {
> + Self(Opaque::new(v))
> + }
> +
> + /// Creates a reference to [`Self`] from a pointer.
> + ///
> + /// # Safety
> + ///
> + /// - `ptr` has to be a valid pointer.
> + /// - `ptr` has to be valid for both reads and writes for the whole lifetime `'a`.
> + /// - For the whole lifetime of `'a`, other accesses to the object cannot cause data races
> + /// (defined by [`LKMM`]) against atomic operations on the returned reference.
> + ///
> + /// [`LKMM`]: srctree/tools/memory-model
> + ///
> + /// # Examples
> + ///
> + /// Using [`Atomic::from_ptr()`] combined with [`Atomic::load()`] or [`Atomic::store()`] can
> + /// achieve the same functionality as `READ_ONCE()`/`smp_load_acquire()` or
> + /// `WRITE_ONCE()`/`smp_store_release()` on the C side:
> + ///
> + /// ```rust
> + /// # use kernel::types::Opaque;
> + /// use kernel::sync::atomic::{Atomic, Relaxed, Release};
> + ///
> + /// // Assume there is a C struct `Foo`.
> + /// mod cbindings {
> + /// #[repr(C)]
> + /// pub(crate) struct foo { pub(crate) a: i32, pub(crate) b: i32 }
> + /// }
> + ///
> + /// let tmp = Opaque::new(cbindings::foo { a: 1, b: 2});
> + ///
> + /// // struct foo *foo_ptr = ..;
> + /// let foo_ptr = tmp.get();
> + ///
> + /// // SAFETY: `foo_ptr` is a valid pointer, and `.a` is in bounds.
> + /// let foo_a_ptr = unsafe { core::ptr::addr_of_mut!((*foo_ptr).a) };
> + ///
> + /// // a = READ_ONCE(foo_ptr->a);
> + /// //
> + /// // SAFETY: `foo_a_ptr` is a valid pointer for reads, and all accesses to it are atomic, so no
> + /// // data race.
> + /// let a = unsafe { Atomic::from_ptr(foo_a_ptr) }.load(Relaxed);
> + /// # assert_eq!(a, 1);
> + ///
> + /// // smp_store_release(&foo_ptr->a, 2);
> + /// //
> + /// // SAFETY: `foo_a_ptr` is a valid pointer for writes, and all accesses to it are atomic, so no
> + /// // data race.
> + /// unsafe { Atomic::from_ptr(foo_a_ptr) }.store(2, Release);
> + /// ```
> + ///
> + /// However, this should only be used when communicating with the C side or manipulating a C struct.
> + pub unsafe fn from_ptr<'a>(ptr: *mut T) -> &'a Self
> + where
> + T: Sync,
> + {
> + // CAST: `T` is transparent to `Atomic<T>`.
> + // SAFETY: Per function safety requirement, `ptr` is a valid pointer and the object will
> + // live long enough. It's safe to return a `&Atomic<T>` because function safety requirement
> + // guarantees other accesses won't cause data races.
> + unsafe { &*ptr.cast::<Self>() }
> + }
> +
> + /// Returns a pointer to the underlying atomic variable.
> + ///
> + /// Extra safety requirement on using the returned pointer: the operations done via the pointer
> + /// cannot cause data races defined by [`LKMM`].
> + ///
> + /// [`LKMM`]: srctree/tools/memory-model
> + pub const fn as_ptr(&self) -> *mut T {
> + self.0.get()
> + }
> +
> + /// Returns a mutable reference to the underlying atomic variable.
> + ///
> + /// This is safe because a mutable reference to the atomic variable guarantees exclusive
> + /// access.
> + pub fn get_mut(&mut self) -> &mut T {
> + // SAFETY: `self.as_ptr()` is a valid pointer to `T`, and the object has already been
> + // initialized. `&mut self` guarantees the exclusive access, so it's safe to reborrow
> + // mutably.
> + unsafe { &mut *self.as_ptr() }
> + }
> +}
> +
> +impl<T: AllowAtomic> Atomic<T>
> +where
> + T::Repr: AtomicHasBasicOps,
> +{
> + /// Loads the value from the atomic variable.
> + ///
> + /// # Examples
> + ///
> + /// Simple usages:
> + ///
> + /// ```rust
> + /// use kernel::sync::atomic::{Atomic, Relaxed};
> + ///
> + /// let x = Atomic::new(42i32);
> + ///
> + /// assert_eq!(42, x.load(Relaxed));
> + ///
> + /// let x = Atomic::new(42i64);
> + ///
> + /// assert_eq!(42, x.load(Relaxed));
> + /// ```
> + ///
> + /// Customized new types in [`Atomic`]:
> + ///
> + /// ```rust
> + /// use kernel::sync::atomic::{generic::AllowAtomic, Atomic, Relaxed};
> + ///
> + /// #[derive(Clone, Copy)]
> + /// #[repr(transparent)]
> + /// struct NewType(u32);
> + ///
> + /// // SAFETY: `NewType` is transparent to `u32`, which has the same size and alignment as
> + /// // `i32`.
> + /// unsafe impl AllowAtomic for NewType {
> + /// type Repr = i32;
> + ///
> + /// fn into_repr(self) -> Self::Repr {
> + /// self.0 as i32
> + /// }
> + ///
> + /// fn from_repr(repr: Self::Repr) -> Self {
> + /// NewType(repr as u32)
> + /// }
> + /// }
> + ///
> + /// let n = Atomic::new(NewType(0));
> + ///
> + /// assert_eq!(0, n.load(Relaxed).0);
> + /// ```
> + #[doc(alias("atomic_read", "atomic64_read"))]
> + #[inline(always)]
> + pub fn load<Ordering: AcquireOrRelaxed>(&self, _: Ordering) -> T {
> + let a = self.as_ptr().cast::<T::Repr>();
> +
> + // SAFETY:
> + // - For calling the atomic_read*() function:
> + // - `self.as_ptr()` is a valid pointer, and per the safety requirement of `AllowAtomic`,
> + // a `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a valid pointer,
> + // - per the type invariants, the following atomic operation won't cause data races.
> + // - For the extra safety requirement on pointers returned by `self.as_ptr()`:
> + // - atomic operations are used here.
> + let v = unsafe {
> + if Ordering::IS_RELAXED {
> + T::Repr::atomic_read(a)
> + } else {
> + T::Repr::atomic_read_acquire(a)
> + }
This can be

	match Ordering::TYPE {
		OrderingType::Relaxed => T::Repr::atomic_read(a),
		_ => T::Repr::atomic_read_acquire(a),
	}

Or add an explicit arm for acquire plus a `_ => build_error!()` arm.
> + };
> +
> + T::from_repr(v)
> + }
> +
> + /// Stores a value to the atomic variable.
> + ///
> + /// # Examples
> + ///
> + /// ```rust
> + /// use kernel::sync::atomic::{Atomic, Relaxed};
> + ///
> + /// let x = Atomic::new(42i32);
> + ///
> + /// assert_eq!(42, x.load(Relaxed));
> + ///
> + /// x.store(43, Relaxed);
> + ///
> + /// assert_eq!(43, x.load(Relaxed));
> + /// ```
> + ///
> + #[doc(alias("atomic_set", "atomic64_set"))]
> + #[inline(always)]
> + pub fn store<Ordering: ReleaseOrRelaxed>(&self, v: T, _: Ordering) {
> + let v = T::into_repr(v);
> + let a = self.as_ptr().cast::<T::Repr>();
> +
> + // SAFETY:
> + // - For calling the atomic_set*() function:
> + // - `self.as_ptr()` is a valid pointer, and per the safety requirement of `AllowAtomic`,
> + // a `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a valid pointer,
> + // - per the type invariants, the following atomic operation won't cause data races.
> + // - For the extra safety requirement on pointers returned by `self.as_ptr()`:
> + // - atomic operations are used here.
> + unsafe {
> + if Ordering::IS_RELAXED {
> + T::Repr::atomic_set(a, v)
> + } else {
> + T::Repr::atomic_set_release(a, v)
> + }
> + };
> + }
> +}
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 05/10] rust: sync: atomic: Add atomic {cmp,}xchg operations
2025-06-18 16:49 ` [PATCH v5 05/10] rust: sync: atomic: Add atomic {cmp,}xchg operations Boqun Feng
@ 2025-06-21 11:37 ` Gary Guo
2025-06-23 5:23 ` Boqun Feng
2025-06-26 13:12 ` Andreas Hindborg
2025-06-27 8:58 ` Benno Lossin
2 siblings, 1 reply; 82+ messages in thread
From: Gary Guo @ 2025-06-21 11:37 UTC (permalink / raw)
To: Boqun Feng
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
On Wed, 18 Jun 2025 09:49:29 -0700
Boqun Feng <boqun.feng@gmail.com> wrote:
> xchg() and cmpxchg() are basic operations on atomic. Provide these based
> on C APIs.
>
> Note that cmpxchg() uses a similar function signature to
> compare_exchange() in Rust std: returning a `Result`, `Ok(old)` means
> the operation succeeds and `Err(old)` means the operation fails.
>
> Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> ---
> rust/kernel/sync/atomic/generic.rs | 154 +++++++++++++++++++++++++++++
> 1 file changed, 154 insertions(+)
>
> diff --git a/rust/kernel/sync/atomic/generic.rs b/rust/kernel/sync/atomic/generic.rs
> index 73c26f9cf6b8..bcdbeea45dd8 100644
> --- a/rust/kernel/sync/atomic/generic.rs
> +++ b/rust/kernel/sync/atomic/generic.rs
> @@ -256,3 +256,157 @@ pub fn store<Ordering: ReleaseOrRelaxed>(&self, v: T, _: Ordering) {
> };
> }
> }
> +
> +impl<T: AllowAtomic> Atomic<T>
> +where
> + T::Repr: AtomicHasXchgOps,
> +{
> + /// Atomic exchange.
> + ///
> + /// # Examples
> + ///
> + /// ```rust
> + /// use kernel::sync::atomic::{Atomic, Acquire, Relaxed};
> + ///
> + /// let x = Atomic::new(42);
> + ///
> + /// assert_eq!(42, x.xchg(52, Acquire));
> + /// assert_eq!(52, x.load(Relaxed));
> + /// ```
> + #[doc(alias("atomic_xchg", "atomic64_xchg"))]
> + #[inline(always)]
> + pub fn xchg<Ordering: All>(&self, v: T, _: Ordering) -> T {
> + let v = T::into_repr(v);
> + let a = self.as_ptr().cast::<T::Repr>();
> +
> + // SAFETY:
> + // - For calling the atomic_xchg*() function:
> + // - `self.as_ptr()` is a valid pointer, and per the safety requirement of `AllowAtomic`,
> + // a `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a valid pointer,
> + // - per the type invariants, the following atomic operation won't cause data races.
> + // - For the extra safety requirement on pointers returned by `self.as_ptr()`:
> + // - atomic operations are used here.
> + let ret = unsafe {
> + match Ordering::TYPE {
> + OrderingType::Full => T::Repr::atomic_xchg(a, v),
> + OrderingType::Acquire => T::Repr::atomic_xchg_acquire(a, v),
> + OrderingType::Release => T::Repr::atomic_xchg_release(a, v),
> + OrderingType::Relaxed => T::Repr::atomic_xchg_relaxed(a, v),
> + }
> + };
> +
> + T::from_repr(ret)
> + }
> +
> + /// Atomic compare and exchange.
> + ///
> + /// Compare: The comparison is done via a byte-level comparison between the atomic variable
> + /// and the `old` value.
> + ///
> + /// Ordering: When it succeeds, it provides the ordering indicated by the `Ordering` type
> + /// parameter; a failed one doesn't provide any ordering, and the read part of a failed
> + /// cmpxchg should be treated as a relaxed read.
> + ///
> + /// Returns `Ok(value)` if cmpxchg succeeds, and `value` is guaranteed to be equal to `old`;
> + /// otherwise returns `Err(value)`, where `value` is the value of the atomic variable at the
> + /// time cmpxchg happened.
> + ///
> + /// # Examples
> + ///
> + /// ```rust
> + /// use kernel::sync::atomic::{Atomic, Full, Relaxed};
> + ///
> + /// let x = Atomic::new(42);
> + ///
> + /// // Checks whether cmpxchg succeeded.
> + /// let success = x.cmpxchg(52, 64, Relaxed).is_ok();
> + /// # assert!(!success);
> + ///
> + /// // Checks whether cmpxchg failed.
> + /// let failure = x.cmpxchg(52, 64, Relaxed).is_err();
> + /// # assert!(failure);
> + ///
> + /// // Uses the old value if failed, probably re-try cmpxchg.
> + /// match x.cmpxchg(52, 64, Relaxed) {
> + /// Ok(_) => { },
> + /// Err(old) => {
> + /// // do something with `old`.
> + /// # assert_eq!(old, 42);
> + /// }
> + /// }
> + ///
> + /// // Uses the latest value regardless, same as atomic_cmpxchg() in C.
> + /// let latest = x.cmpxchg(42, 64, Full).unwrap_or_else(|old| old);
> + /// # assert_eq!(42, latest);
> + /// assert_eq!(64, x.load(Relaxed));
> + /// ```
> + #[doc(alias(
> + "atomic_cmpxchg",
> + "atomic64_cmpxchg",
> + "atomic_try_cmpxchg",
> + "atomic64_try_cmpxchg"
> + ))]
> + #[inline(always)]
> + pub fn cmpxchg<Ordering: All>(&self, mut old: T, new: T, o: Ordering) -> Result<T, T> {
> + // Note on code generation:
> + //
> + // try_cmpxchg() is used to implement cmpxchg(), and if the helper functions are inlined,
> + // the compiler is able to figure out that the branch is not needed if the users don't care
> + // about whether the operation succeeds or not. One exception is on x86, due to commit
> + // 44fe84459faf ("locking/atomic: Fix atomic_try_cmpxchg() semantics"), the
> + // atomic_try_cmpxchg() on x86 has a branch even if the caller doesn't care about the
> + // success of cmpxchg and only wants to use the old value. For example, for code like:
> + //
> + // let latest = x.cmpxchg(42, 64, Full).unwrap_or_else(|old| old);
> + //
> + // It will still generate code:
> + //
> + // movl $0x40, %ecx
> + // movl $0x34, %eax
> + // lock
> + // cmpxchgl %ecx, 0x4(%rsp)
> + // jne 1f
> + // 2:
> + // ...
> + // 1: movl %eax, %ecx
> + // jmp 2b
> + //
> + // This might be "fixed" by introducing a try_cmpxchg_exclusive() that knows the "*old"
> + // location in the C function is always safe to write.
> + if self.try_cmpxchg(&mut old, new, o) {
> + Ok(old)
> + } else {
> + Err(old)
> + }
> + }
> +
> + /// Atomic compare and exchange, returning whether the operation succeeds.
> + ///
> + /// The "Compare" and "Ordering" parts are the same as [`Atomic::cmpxchg()`].
> + ///
> + /// Returns `true` if the cmpxchg succeeds, otherwise returns `false` with `old` updated to
> + /// the value of the atomic variable at the time cmpxchg happened.
> + #[inline(always)]
> + fn try_cmpxchg<Ordering: All>(&self, old: &mut T, new: T, _: Ordering) -> bool {
> + let old = (old as *mut T).cast::<T::Repr>();
> + let new = T::into_repr(new);
> + let a = self.0.get().cast::<T::Repr>();
> +
> + // SAFETY:
> + // - For calling the atomic_try_cmpxchg*() function:
> + // - `self.as_ptr()` is a valid pointer, and per the safety requirement of `AllowAtomic`,
> + // a `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a valid pointer,
> + // - per the type invariants, the following atomic operation won't cause data races.
> + // - `old` is a valid pointer to write because it comes from a mutable reference.
> + // - For the extra safety requirement on pointers returned by `self.as_ptr()`:
> + // - atomic operations are used here.
> + unsafe {
> + match Ordering::TYPE {
> + OrderingType::Full => T::Repr::atomic_try_cmpxchg(a, old, new),
> + OrderingType::Acquire => T::Repr::atomic_try_cmpxchg_acquire(a, old, new),
> + OrderingType::Release => T::Repr::atomic_try_cmpxchg_release(a, old, new),
> + OrderingType::Relaxed => T::Repr::atomic_try_cmpxchg_relaxed(a, old, new),
> + }
> + }
Again this function is only using `T::into_repr`, bypassing
`T::from_repr` and just using pointer casting.
BTW, any reason that this is a separate function, and it couldn't just
be in the `cmpxchg` function?
> + }
> +}
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 06/10] rust: sync: atomic: Add the framework of arithmetic operations
2025-06-18 16:49 ` [PATCH v5 06/10] rust: sync: atomic: Add the framework of arithmetic operations Boqun Feng
@ 2025-06-21 11:41 ` Gary Guo
2025-06-26 12:39 ` Andreas Hindborg
1 sibling, 0 replies; 82+ messages in thread
From: Gary Guo @ 2025-06-21 11:41 UTC (permalink / raw)
To: Boqun Feng
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
On Wed, 18 Jun 2025 09:49:30 -0700
Boqun Feng <boqun.feng@gmail.com> wrote:
> One important set of atomic operations is the arithmetic operations,
> i.e. add(), sub(), fetch_add(), add_return(), etc. However it may not
> make sense for all the types that implement `AllowAtomic` to have
> arithmetic operations; for example, a `Foo(u32)` may not have a
> reasonable add() or sub(). Plus, subword types (`u8` and `u16`)
> currently don't have atomic arithmetic operations even on the C side
> and might not have them in the future in Rust (because they are
> usually suboptimal on a few architectures). Therefore add a subtrait
> of `AllowAtomic` describing which types have and can do atomic
> arithmetic operations.
>
> A few things about this `AllowAtomicArithmetic` trait:
>
> * It has an associated type `Delta` instead of using
> `AllowAtomic::Repr` because a `Bar(u32)` (whose `Repr` is `i32`)
> may not want an `add(&self, i32)`, but an `add(&self, u32)`.
>
> * `AtomicImpl` types already implement an `AtomicHasArithmeticOps`
> trait, so add a blanket implementation for them. In the future, `i8` and
> `i16` may impl `AtomicImpl` but not `AtomicHasArithmeticOps` if
> arithmetic operations are not available.
>
> Only add() and fetch_add() are added. The rest will be added in the
> future.
>
> Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> ---
> rust/kernel/sync/atomic/generic.rs | 101 +++++++++++++++++++++++++++++
> 1 file changed, 101 insertions(+)
>
> diff --git a/rust/kernel/sync/atomic/generic.rs b/rust/kernel/sync/atomic/generic.rs
> index bcdbeea45dd8..8c5bd90b2619 100644
> --- a/rust/kernel/sync/atomic/generic.rs
> +++ b/rust/kernel/sync/atomic/generic.rs
> @@ -57,6 +57,23 @@ fn from_repr(repr: Self::Repr) -> Self {
> }
> }
>
> +/// Atomics that allow arithmetic operations with an integer type.
> +pub trait AllowAtomicArithmetic: AllowAtomic {
> + /// The delta type for arithmetic operations.
> + type Delta;
> +
> + /// Converts [`Self::Delta`] into the representation of the atomic type.
> + fn delta_into_repr(d: Self::Delta) -> Self::Repr;
> +}
> +
> +impl<T: AtomicImpl + AtomicHasArithmeticOps> AllowAtomicArithmetic for T {
> + type Delta = Self;
> +
> + fn delta_into_repr(d: Self::Delta) -> Self::Repr {
> + d
> + }
> +}
> +
> impl<T: AllowAtomic> Atomic<T> {
> /// Creates a new atomic.
> pub const fn new(v: T) -> Self {
> @@ -410,3 +427,87 @@ fn try_cmpxchg<Ordering: All>(&self, old: &mut T, new: T, _: Ordering) -> bool {
> }
> }
> }
> +
> +impl<T: AllowAtomicArithmetic> Atomic<T>
> +where
> + T::Repr: AtomicHasArithmeticOps,
> +{
> + /// Atomic add.
> + ///
> + /// The addition is a wrapping addition.
> + ///
> + /// # Examples
> + ///
> + /// ```rust
> + /// use kernel::sync::atomic::{Atomic, Relaxed};
> + ///
> + /// let x = Atomic::new(42);
> + ///
> + /// assert_eq!(42, x.load(Relaxed));
> + ///
> + /// x.add(12, Relaxed);
> + ///
> + /// assert_eq!(54, x.load(Relaxed));
> + /// ```
> + #[inline(always)]
> + pub fn add<Ordering: RelaxedOnly>(&self, v: T::Delta, _: Ordering) {
This can be just
pub fn add(&self, v: T::Delta, _: Relaxed)
> + let v = T::delta_into_repr(v);
> + let a = self.as_ptr().cast::<T::Repr>();
> +
> + // SAFETY:
> + // - For calling the atomic_add() function:
> + // - `self.as_ptr()` is a valid pointer, and per the safety requirement of `AllowAtomic`,
> + // a `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a valid pointer,
> + // - per the type invariants, the following atomic operation won't cause data races.
> + // - For the extra safety requirement on pointers returned by `self.as_ptr()`:
> + // - atomic operations are used here.
> + unsafe {
> + T::Repr::atomic_add(a, v);
> + }
> + }
> +
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 03/10] rust: sync: atomic: Add ordering annotation types
2025-06-21 11:18 ` Gary Guo
@ 2025-06-23 2:48 ` Boqun Feng
0 siblings, 0 replies; 82+ messages in thread
From: Boqun Feng @ 2025-06-23 2:48 UTC (permalink / raw)
To: Gary Guo
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
On Sat, Jun 21, 2025 at 12:18:42PM +0100, Gary Guo wrote:
[...]
> > +
> > +/// The annotation type for relaxed memory ordering.
> > +pub struct Relaxed;
> > +
> > +/// The annotation type for acquire memory ordering.
> > +pub struct Acquire;
> > +
> > +/// The annotation type for release memory ordering.
> > +pub struct Release;
> > +
> > +/// The annotation type for fully-ordered memory ordering.
> > +pub struct Full;
> > +
> > +/// Describes the exact memory ordering.
> > +pub enum OrderingType {
> > + /// Relaxed ordering.
> > + Relaxed,
> > + /// Acquire ordering.
> > + Acquire,
> > + /// Release ordering.
> > + Release,
> > + /// Fully-ordered.
> > + Full,
> > +}
>
> Does this need to be public? I think this can cause confusion about what
> this is in the rendered documentation.
>
I would like to make it public so that users can define their own methods
generic over ordering outside the atomic mod (even outside the kernel crate):
	pub fn my_ordering_func<Ordering: All>(..., o: Ordering) {
		match Ordering::TYPE {
		}
	}
I just realized that to do so I need to make OrderingUnit pub too (with a
sealed supertrait of course).
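Fleshed out, it could look like this (a sketch; `log_and_xchg` and its
body are made up for illustration):

	use kernel::sync::atomic::ordering::{All, OrderingType};
	use kernel::sync::atomic::Atomic;

	pub fn log_and_xchg<Ordering: All>(x: &Atomic<i32>, v: i32, o: Ordering) -> i32 {
		// Dispatch on the ordering; monomorphization removes the match.
		match Ordering::TYPE {
			OrderingType::Full => { /* e.g. extra debug checks */ }
			_ => { /* weaker orderings */ }
		}
		x.xchg(v, o)
	}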
> IIUC this is for internal atomic impl only
> and this is not useful otherwise. This can be moved into `internal` and
> then `pub(super) use internal::OrderingType` to stop exposing it.
>
> (Or, just `#[doc(hidden)]` so it doesn't show in the docs).
>
Seems reasonable.
> > +
> > +mod internal {
> > + /// Unit types for ordering annotation.
> > + ///
> > + /// Sealed trait, can only be implemented inside the atomic mod.
> > + pub trait OrderingUnit {
> > + /// Describes the exact memory ordering.
> > + const TYPE: super::OrderingType;
> > + }
> > +}
> > +
> > +impl internal::OrderingUnit for Relaxed {
> > + const TYPE: OrderingType = OrderingType::Relaxed;
> > +}
[...]
> > +
> > +/// The trait bound for operations that only support acquire or relaxed ordering.
> > +pub trait AcquireOrRelaxed: All {
> > + /// Describes whether an ordering is relaxed or not.
> > + const IS_RELAXED: bool = false;
>
> This should not be needed. I'd prefer the use site to just match on
> `TYPE`.
>
Right, I somehow missed how monomorphization works. I can drop this.
Thanks!
> > +}
> > +
[...]
> > +/// The trait bound for operations that only support relaxed ordering.
> > +pub trait RelaxedOnly: AcquireOrRelaxed + ReleaseOrRelaxed + All {}
> > +
> > +impl RelaxedOnly for Relaxed {}
>
> Any reason that this is needed at all? Shouldn't it just be a non-generic
Mostly for documentation purposes, i.e. users can figure out the ordering
from the trait bounds of the function. I'd say we can probably drop
it when we find a Release-only or Acquire-only function, but even then
the current definition won't affect users, so I lean towards keeping
it.
Regards,
Boqun
> function that takes a `Relaxed` directly?
>
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 04/10] rust: sync: atomic: Add generic atomics
2025-06-21 11:32 ` Gary Guo
@ 2025-06-23 5:19 ` Boqun Feng
2025-06-23 11:54 ` Benno Lossin
2025-06-23 18:30 ` Gary Guo
0 siblings, 2 replies; 82+ messages in thread
From: Boqun Feng @ 2025-06-23 5:19 UTC (permalink / raw)
To: Gary Guo
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
On Sat, Jun 21, 2025 at 12:32:12PM +0100, Gary Guo wrote:
[...]
> > +#[repr(transparent)]
> > +pub struct Atomic<T: AllowAtomic>(Opaque<T>);
>
> This should store `Opaque<T::Repr>` instead.
>
"should" is a strong word ;-) If we still use `into_repr`/`from_repr`
it's a bit impossible, because Atomic::new() wants to be a const
function, so it requires const_trait_impl I believe.
If we require transmutability as a safety requirement for `AllowAtomic`,
then either `T` or `T::Repr` is fine.
> The implementation below essentially assumes that this is
> `Opaque<T::Repr>`:
> * atomic ops cast this to `*mut T::Repr`
> * load/store operates on `T::Repr` then converts to `T` with
> `T::from_repr`/`T::into_repr`.
>
Note that we only require one direction of strong transmutability, that
is: every value of `T` must be safely transmutable to a `T::Repr`; a
`T::Repr` -> `T` transmutation is only required to work if the value is
the result of a `transmute::<T, T::Repr>()`. This is mostly due to
potential support for unit-only enums, e.g. using an atomic variable to
represent a finite state.
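For example (a sketch of that potential future use, not something in
this series):

	#[derive(Clone, Copy)]
	#[repr(i32)]
	enum State {
		Idle = 0,
		Busy = 1,
	}

every `State` is a valid `i32`, but an arbitrary `i32` (say 2) is not a
valid `State`, hence only the `T` -> `T::Repr` direction can be required
unconditionally.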
> Note that the transparent new type restriction on `AllowAtomic` is not
> sufficient for this, as I can define
>
Nice catch! I do agree we should disallow `MyWeirdI32`, and I also agree
that we should put transmutability as a safety requirement for
`AllowAtomic`. However, I would suggest we still keep
`into_repr`/`from_repr`, and require the implementation to make them
provide the same results as transmute(), as a correctness precondition
(instead of a safety precondition). In other words, you can still write
a `MyWeirdI32`, and it won't cause safety issues, but it'll be
incorrect.
The reason why I think we should keep `into_repr`/`from_repr` but add
a correctness precondition is that they are easy to implement as safe
code for basic types, so it'll be better than a transmute() call. Also,
considering `Atomic<*mut T>`, would transmuting between integers and
pointers act the same as expose_provenance() and
from_exposed_provenance()?
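For concreteness, the conversions in question would look roughly like
this sketch, using std's strict-provenance APIs and `usize` as the repr
(the exact `Repr` plumbing in the series may differ, so treat the names
here as assumptions):

	unsafe impl<T> AllowAtomic for *mut T {
		type Repr = usize;

		fn into_repr(self) -> Self::Repr {
			self.expose_provenance()
		}

		fn from_repr(repr: Self::Repr) -> Self {
			core::ptr::with_exposed_provenance_mut(repr)
		}
	}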
So something like this for `AllowAtomic`, implementation-wise, no need
to change.
/// Atomics that support basic atomic operations.
///
/// Implementers must guarantee that `into_repr()` and `from_repr()` provide the same results as
/// a transmute between [`Self`] and [`Self::Repr`].
///
/// TODO: Currently the [`AllowAtomic`] types are restricted to basic integer types (and their
/// transparent new types). In the future, we could extend the scope to more data types when there
/// is a clear and meaningful usage, but for now, [`AllowAtomic`] should only be implemented inside
/// atomic mod for the restricted types mentioned above.
///
/// # Safety
///
/// - [`Self`] must have the same size and alignment as [`Self::Repr`].
/// - Any value of [`Self`] must be safe to [`transmute()`] to a [`Self::Repr`], which also means
///   that a pointer to [`Self`] can be treated as a pointer to [`Self::Repr`].
/// - If a value of [`Self::Repr`] is the result of a [`transmute()`] from a [`Self`], it must be
///   safe to [`transmute()`] the value back to a [`Self`].
/// - The implementer must guarantee it's safe to transfer ownership from one execution context to
///   another; this usually means the type has to be [`Send`], but because `*mut T` is not [`Send`]
///   yet is one of the basic types that need to support atomic operations, this safety requirement
///   is spelled out on the [`AllowAtomic`] trait instead of being a supertrait bound. It is
///   automatically satisfied if the type is [`Send`].
///
/// [`transmute()`]: core::mem::transmute
pub unsafe trait AllowAtomic: Sized + Copy {
Thoughts?
Regards,
Boqun
> #[derive(Clone, Copy)]
> #[repr(transparent)]
> struct MyWeirdI32(pub i32);
>
> unsafe impl AllowAtomic for MyWeirdI32 {
> 	type Repr = i32;
>
> 	fn into_repr(self) -> Self::Repr {
> 		!self.0
> 	}
>
> 	fn from_repr(repr: Self::Repr) -> Self {
> 		MyWeirdI32(!repr)
> 	}
> }
>
> Then `Atomic<MyWeirdI32>::new(MyWeirdI32(0)).load(Relaxed)` will give me
> `MyWeirdI32(-1)`.
>
> Alternatively, we should remove `into_repr`/`from_repr` and always cast
> instead. In that case, `AllowAtomic` needs to have the transmutability
> as a safety precondition.
>
[...]
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 05/10] rust: sync: atomic: Add atomic {cmp,}xchg operations
2025-06-21 11:37 ` Gary Guo
@ 2025-06-23 5:23 ` Boqun Feng
0 siblings, 0 replies; 82+ messages in thread
From: Boqun Feng @ 2025-06-23 5:23 UTC (permalink / raw)
To: Gary Guo
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
On Sat, Jun 21, 2025 at 12:37:53PM +0100, Gary Guo wrote:
[...]
> > + /// Atomic compare and exchange.
> > + ///
> > + /// Compare: The comparison is done via a byte-level comparison between the atomic variable
> > + /// and the `old` value.
> > + ///
> > + /// Ordering: When it succeeds, it provides the ordering indicated by the `Ordering` type
> > + /// parameter; a failed one doesn't provide any ordering, and the read part of a failed
> > + /// cmpxchg should be treated as a relaxed read.
> > + ///
> > + /// Returns `Ok(value)` if cmpxchg succeeds, and `value` is guaranteed to be equal to `old`;
> > + /// otherwise returns `Err(value)`, where `value` is the value of the atomic variable at the
> > + /// time cmpxchg happened.
> > + ///
> > + /// # Examples
> > + ///
> > + /// ```rust
> > + /// use kernel::sync::atomic::{Atomic, Full, Relaxed};
> > + ///
> > + /// let x = Atomic::new(42);
> > + ///
> > + /// // Checks whether cmpxchg succeeded.
> > + /// let success = x.cmpxchg(52, 64, Relaxed).is_ok();
> > + /// # assert!(!success);
> > + ///
> > + /// // Checks whether cmpxchg failed.
> > + /// let failure = x.cmpxchg(52, 64, Relaxed).is_err();
> > + /// # assert!(failure);
> > + ///
> > + /// // Uses the old value if failed, probably re-try cmpxchg.
> > + /// match x.cmpxchg(52, 64, Relaxed) {
> > + /// Ok(_) => { },
> > + /// Err(old) => {
> > + /// // do something with `old`.
> > + /// # assert_eq!(old, 42);
> > + /// }
> > + /// }
> > + ///
> > + /// // Uses the latest value regardless, same as atomic_cmpxchg() in C.
> > + /// let latest = x.cmpxchg(42, 64, Full).unwrap_or_else(|old| old);
> > + /// # assert_eq!(42, latest);
> > + /// assert_eq!(64, x.load(Relaxed));
> > + /// ```
> > + #[doc(alias(
> > + "atomic_cmpxchg",
> > + "atomic64_cmpxchg",
> > + "atomic_try_cmpxchg",
> > + "atomic64_try_cmpxchg"
> > + ))]
> > + #[inline(always)]
> > + pub fn cmpxchg<Ordering: All>(&self, mut old: T, new: T, o: Ordering) -> Result<T, T> {
> > + // Note on code generation:
> > + //
> > + // try_cmpxchg() is used to implement cmpxchg(), and if the helper functions are inlined,
> > + // the compiler is able to figure out that the branch is not needed if the users don't care
> > + // about whether the operation succeeds or not. One exception is on x86, due to commit
> > + // 44fe84459faf ("locking/atomic: Fix atomic_try_cmpxchg() semantics"), the
> > + // atomic_try_cmpxchg() on x86 has a branch even if the caller doesn't care about the
> > + // success of cmpxchg and only wants to use the old value. For example, for code like:
> > + //
> > + // let latest = x.cmpxchg(42, 64, Full).unwrap_or_else(|old| old);
> > + //
> > + // It will still generate code:
> > + //
> > + // movl $0x40, %ecx
> > + // movl $0x34, %eax
> > + // lock
> > + // cmpxchgl %ecx, 0x4(%rsp)
> > + // jne 1f
> > + // 2:
> > + // ...
> > + // 1: movl %eax, %ecx
> > + // jmp 2b
> > + //
> > + // This might be "fixed" by introducing a try_cmpxchg_exclusive() that knows the "*old"
> > + // location in the C function is always safe to write.
> > + if self.try_cmpxchg(&mut old, new, o) {
> > + Ok(old)
> > + } else {
> > + Err(old)
> > + }
> > + }
> > +
> > + /// Atomic compare and exchange, returning whether the operation succeeds.
> > + ///
> > + /// The "Compare" and "Ordering" parts are the same as [`Atomic::cmpxchg()`].
> > + ///
> > + /// Returns `true` if the cmpxchg succeeds, otherwise returns `false` with `old` updated to
> > + /// the value of the atomic variable at the time cmpxchg happened.
> > + #[inline(always)]
> > + fn try_cmpxchg<Ordering: All>(&self, old: &mut T, new: T, _: Ordering) -> bool {
> > + let old = (old as *mut T).cast::<T::Repr>();
> > + let new = T::into_repr(new);
> > + let a = self.0.get().cast::<T::Repr>();
> > +
> > + // SAFETY:
> > + // - For calling the atomic_try_cmpxchg*() function:
> > + // - `self.as_ptr()` is a valid pointer, and per the safety requirement of `AllowAtomic`,
> > + // a `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a valid pointer,
> > + // - per the type invariants, the following atomic operation won't cause data races.
> > + // - `old` is a valid pointer to write because it comes from a mutable reference.
> > + // - For the extra safety requirement on pointers returned by `self.as_ptr()`:
> > + // - atomic operations are used here.
> > + unsafe {
> > + match Ordering::TYPE {
> > + OrderingType::Full => T::Repr::atomic_try_cmpxchg(a, old, new),
> > + OrderingType::Acquire => T::Repr::atomic_try_cmpxchg_acquire(a, old, new),
> > + OrderingType::Release => T::Repr::atomic_try_cmpxchg_release(a, old, new),
> > + OrderingType::Relaxed => T::Repr::atomic_try_cmpxchg_relaxed(a, old, new),
> > + }
> > + }
>
> Again this function is only using `T::into_repr`, bypassing
> `T::from_repr` and just use pointer casting.
>
> BTW, any reason that this is a separate function, and it couldn't just
> be in the `cmpxchg` function?
>
It's a non-public function; I feel it's easier to see that Rust's
cmpxchg() is implemented via a try_cmpxchg() that is a wrapper of
`atomic_try_cmpxchg*()`.
Regards,
Boqun
>
> > + }
> > +}
>
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 04/10] rust: sync: atomic: Add generic atomics
2025-06-23 5:19 ` Boqun Feng
@ 2025-06-23 11:54 ` Benno Lossin
2025-06-23 12:58 ` Boqun Feng
2025-06-23 18:30 ` Gary Guo
1 sibling, 1 reply; 82+ messages in thread
From: Benno Lossin @ 2025-06-23 11:54 UTC (permalink / raw)
To: Boqun Feng, Gary Guo
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Björn Roy Baron, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra,
Mark Rutland, Wedson Almeida Filho, Viresh Kumar, Lyude Paul,
Ingo Molnar, Mitchell Levy, Paul E. McKenney, Greg Kroah-Hartman,
Linus Torvalds, Thomas Gleixner
On Mon Jun 23, 2025 at 7:19 AM CEST, Boqun Feng wrote:
> On Sat, Jun 21, 2025 at 12:32:12PM +0100, Gary Guo wrote:
>> Note that the transparent new type restriction on `AllowAtomic` is not
>> sufficient for this, as I can define
>>
>
> Nice catch! I do agree we should disallow `MyWeirdI32`, and I also agree
> that we should put transmutability as a safety requirement for
> `AllowAtomic`. However, I would suggest we still keep
> `into_repr`/`from_repr`, and require the implementation to make them
> provide the same results as transmute(), as a correctness precondition
> (instead of a safety precondition). In other words, you can still write
> a `MyWeirdI32`, and it won't cause safety issues, but it'll be
> incorrect.
Hmm I don't like keeping the functions when we add the transmute
requirement.
> The reason why I think we should keep `into_repr`/`from_repr` but add
> a correctness precondition is that they are easy to implement as safe
> code for basic types, so it'll be better than a transmute() call. Also
> considering `Atomic<*mut T>`, would transmuting between integers and
> pointers act the same as expose_provenance() and
> from_exposed_provenance()?
Hmmm, this is indeed a problem for pointers. I guess we do need the
functions...
But this also prevents us from adding the transmute requirement, as it
doesn't hold for pointers. Maybe we need to add the requirement that
`into_repr`/`from_repr` preserve the binary representation?
---
Cheers,
Benno
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 04/10] rust: sync: atomic: Add generic atomics
2025-06-23 11:54 ` Benno Lossin
@ 2025-06-23 12:58 ` Boqun Feng
0 siblings, 0 replies; 82+ messages in thread
From: Boqun Feng @ 2025-06-23 12:58 UTC (permalink / raw)
To: Benno Lossin
Cc: Gary Guo, linux-kernel, rust-for-linux, lkmm, linux-arch,
Miguel Ojeda, Alex Gaynor, Björn Roy Baron, Andreas Hindborg,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
On Mon, Jun 23, 2025 at 01:54:38PM +0200, Benno Lossin wrote:
> On Mon Jun 23, 2025 at 7:19 AM CEST, Boqun Feng wrote:
> > On Sat, Jun 21, 2025 at 12:32:12PM +0100, Gary Guo wrote:
> >> Note that the transparent new type restriction on `AllowAtomic` is not
> >> sufficient for this, as I can define
> >>
> >
> > Nice catch! I do agree we should disallow `MyWeirdI32`, and I also agree
> > that we should put transmutability as a safety requirement for
> > `AllowAtomic`. However, I would suggest we still keep
> > `into_repr`/`from_repr`, and require the implementation to make them
> > provide the same results as transmute(), as a correctness precondition
> > (instead of a safety precondition). In other words, you can still write
> > a `MyWeirdI32`, and it won't cause safety issues, but it'll be
> > incorrect.
>
> Hmm I don't like keeping the functions when we add the transmute
> requirement.
>
> > The reason why I think we should keep `into_repr`/`from_repr` but add
> > a correctness precondition is that they are easy to implement as safe
> > code for basic types, so it'll be better than a transmute() call. Also
> > considering `Atomic<*mut T>`, would transmuting between integers and
> > pointers act the same as expose_provenance() and
> > from_exposed_provenance()?
>
> Hmmm, this is indeed a problem for pointers. I guess we do need the
> functions...
>
> But this also prevents us from adding the transmute requirement, as it
> doesn't hold for pointers. Maybe we need to add the requirement that
The requirement is "transmutability", which requires that any valid
binary representation of `T` must be a valid binary representation of
`T::Repr`, and we need it regardless of whether we use `transmute()` or
not in the implementation. In the current implementation, `from_ptr()`
and any atomic operation that may read a value stored by `Atomic::new()`
need this. Even if we change the implementation to `Opaque<T::Repr>`, we
still need it for `get_mut()`.
> `into_repr`/`from_repr` preserve the binary representation?
We need this too, but maybe just not for safety reasons. Besides, the
precondition that lets us say `into_repr`/`from_repr` preserve the binary
representation is exactly the transmutability requirement.
Regards,
Boqun
>
> ---
> Cheers,
> Benno
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 04/10] rust: sync: atomic: Add generic atomics
2025-06-23 5:19 ` Boqun Feng
2025-06-23 11:54 ` Benno Lossin
@ 2025-06-23 18:30 ` Gary Guo
2025-06-23 19:09 ` Boqun Feng
1 sibling, 1 reply; 82+ messages in thread
From: Gary Guo @ 2025-06-23 18:30 UTC (permalink / raw)
To: Boqun Feng
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
On Sun, 22 Jun 2025 22:19:44 -0700
Boqun Feng <boqun.feng@gmail.com> wrote:
> On Sat, Jun 21, 2025 at 12:32:12PM +0100, Gary Guo wrote:
> [...]
> > > +#[repr(transparent)]
> > > +pub struct Atomic<T: AllowAtomic>(Opaque<T>);
> >
> > This should store `Opaque<T::Repr>` instead.
> >
>
> "should" is a strong word ;-) If we still use `into_repr`/`from_repr`
> it's a bit impossible, because Atomic::new() wants to be a const
> function, so it requires const_trait_impl I believe.
>
> If we require transmutability as a safety requirement for `AllowAtomic`,
> then either `T` or `T::Repr` is fine.
>
> > The implementation below essentially assumes that this is
> > `Opaque<T::Repr>`:
> > * atomic ops cast this to `*mut T::Repr`
> > * load/store operates on `T::Repr` then converts to `T` with
> > `T::from_repr`/`T::into_repr`.
> >
>
> Note that we only require one direction of strong transmutability, that
> is: every value of `T` must be safely transmutable to a `T::Repr`; a
> `T::Repr` -> `T` transmutation is only required to work if the value is
> the result of a `transmute::<T, T::Repr>()`. This is mostly due to
> potential support for unit-only enums, e.g. using an atomic variable to
> represent a finite state.
>
> > Note that the transparent new type restriction on `AllowAtomic` is not
> > sufficient for this, as I can define
> >
>
> Nice catch! I do agree we should disallow `MyWeirdI32`, and I also agree
> that we should put transmutability as a safety requirement for
> `AllowAtomic`. However, I would suggest we still keep
> `into_repr`/`from_repr`, and require the implementation to make them
> provide the same results as transmute(), as a correctness precondition
> (instead of a safety precondition). In other words, you can still write
> a `MyWeirdI32`, and it won't cause safety issues, but it'll be
> incorrect.
>
> The reason why I think we should keep `into_repr`/`from_repr` but add
> a correctness precondition is that they are easy to implement as safe
> code for basic types, so it'll be better than a transmute() call. Also
> considering `Atomic<*mut T>`, would transmuting between integers and
> pointers act the same as expose_provenance() and
> from_exposed_provenance()?
Okay, this is more problematic than I thought then. For pointers, you
cannot just transmute between pointers and usize (which is its
Repr):
* Transmuting from pointer to usize discards provenance
* Transmuting from usize to pointer gives invalid provenance
We want neither behaviour, so we must store `usize` directly and
always go through the repr conversion functions.
To make things zero-cost I guess you would need an extra trait to indicate
that transmuting is fine.
Best,
Gary
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 04/10] rust: sync: atomic: Add generic atomics
2025-06-23 18:30 ` Gary Guo
@ 2025-06-23 19:09 ` Boqun Feng
2025-06-23 23:27 ` Benno Lossin
2025-07-04 20:25 ` Boqun Feng
0 siblings, 2 replies; 82+ messages in thread
From: Boqun Feng @ 2025-06-23 19:09 UTC (permalink / raw)
To: Gary Guo
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
On Mon, Jun 23, 2025 at 07:30:19PM +0100, Gary Guo wrote:
> On Sun, 22 Jun 2025 22:19:44 -0700
> Boqun Feng <boqun.feng@gmail.com> wrote:
>
> > On Sat, Jun 21, 2025 at 12:32:12PM +0100, Gary Guo wrote:
> > [...]
> > > > +#[repr(transparent)]
> > > > +pub struct Atomic<T: AllowAtomic>(Opaque<T>);
> > >
> > > This should store `Opaque<T::Repr>` instead.
> > >
> >
> > "should" is a strong word ;-) If we still use `into_repr`/`from_repr`
> > it's a bit impossible, because Atomic::new() wants to be a const
> > function, so it requires const_trait_impl I believe.
> >
> > If we require transmutability as a safety requirement for `AllowAtomic`,
> > then either `T` or `T::Repr` is fine.
> >
> > > The implementation below essentially assumes that this is
> > > `Opaque<T::Repr>`:
> > > * atomic ops cast this to `*mut T::Repr`
> > > * load/store operates on `T::Repr` then converts to `T` with
> > > `T::from_repr`/`T::into_repr`.
> > >
> >
> > Note that we only require one direction of strong transmutability, that
> > is: every value of `T` must be safely transmutable to a `T::Repr`; a
> > `T::Repr` -> `T` transmutation is only required to work if the value is
> > the result of a `transmute::<T, T::Repr>()`. This is mostly due to
> > potential support for unit-only enums, e.g. using an atomic variable to
> > represent a finite state.
> >
> > > Note that the transparent new type restriction on `AllowAtomic` is not
> > > sufficient for this, as I can define
> > >
> >
> > Nice catch! I do agree we should disallow `MyWeirdI32`, and I also agree
> > that we should put transmutability as a safety requirement for
> > `AllowAtomic`. However, I would suggest we still keep
> > `into_repr`/`from_repr`, and require the implementation to make them
> > provide the same results as transmute(), as a correctness precondition
> > (instead of a safety precondition). In other words, you can still write
> > a `MyWeirdI32`, and it won't cause safety issues, but it'll be
> > incorrect.
> >
> > The reason why I think we should keep `into_repr`/`from_repr` but add
> > a correctness precondition is that they are easy to implement as safe
> > code for basic types, so it'll be better than a transmute() call. Also
> > considering `Atomic<*mut T>`, would transmuting between integers and
> > pointers act the same as expose_provenance() and
> > from_exposed_provenance()?
>
> Okay, this is more problematic than I thought then. For pointers, you
Welcome to my nightmare ;-)
> cannot just transmute between pointers and usize (which is its
> Repr):
> * Transmuting from pointer to usize discards provenance
> * Transmuting from usize to pointer gives invalid provenance
>
> We want neither behaviour, so we must store `usize` directly and
> always go through the repr conversion functions.
>
If we store `usize`, how can we support the `get_mut()` then? E.g.
	static V: i32 = 32;

	let mut x = Atomic::new(&V as *const i32 as *mut i32);
	// ^ assume we expose_provenance() in new().

	let ptr: &mut *mut i32 = x.get_mut(); // which is `&mut *self.0.get()`.

	let ptr_val = *ptr; // Does `ptr_val` have the proper provenance?
> To make things zero-cost I guess you would need an extra trait to indicate
> that transmuting is fine.
Could you maybe provide an example?
Regards,
Boqun
>
> Best,
> Gary
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 04/10] rust: sync: atomic: Add generic atomics
2025-06-23 19:09 ` Boqun Feng
@ 2025-06-23 23:27 ` Benno Lossin
2025-06-24 16:35 ` Boqun Feng
2025-07-04 20:25 ` Boqun Feng
1 sibling, 1 reply; 82+ messages in thread
From: Benno Lossin @ 2025-06-23 23:27 UTC (permalink / raw)
To: Boqun Feng, Gary Guo
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Björn Roy Baron, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra,
Mark Rutland, Wedson Almeida Filho, Viresh Kumar, Lyude Paul,
Ingo Molnar, Mitchell Levy, Paul E. McKenney, Greg Kroah-Hartman,
Linus Torvalds, Thomas Gleixner
On Mon Jun 23, 2025 at 9:09 PM CEST, Boqun Feng wrote:
> On Mon, Jun 23, 2025 at 07:30:19PM +0100, Gary Guo wrote:
>> cannot just transmute between pointers and usize (which is its
>> Repr):
>> * Transmuting from pointer to usize discards provenance
>> * Transmuting from usize to pointer gives invalid provenance
>>
>> We want neither behaviour, so we must store `usize` directly and
>> always go through the repr conversion functions.
>>
>
> If we store `usize`, how can we support the `get_mut()` then? E.g.
>
> static V: i32 = 32;
>
> let mut x = Atomic::new(&V as *const i32 as *mut i32);
> // ^ assume we expose_provenance() in new().
>
> let ptr: &mut *mut i32 = x.get_mut(); // which is `&mut *self.0.get()`.
>
> let ptr_val = *ptr; // Does `ptr_val` have the proper provenance?
If `get_mut` transmutes the integer into a pointer, then it will have
the wrong provenance (it will just have plain invalid provenance).
---
Cheers,
Benno
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 04/10] rust: sync: atomic: Add generic atomics
2025-06-23 23:27 ` Benno Lossin
@ 2025-06-24 16:35 ` Boqun Feng
2025-06-26 13:54 ` Benno Lossin
0 siblings, 1 reply; 82+ messages in thread
From: Boqun Feng @ 2025-06-24 16:35 UTC (permalink / raw)
To: Benno Lossin
Cc: Gary Guo, linux-kernel, rust-for-linux, lkmm, linux-arch,
Miguel Ojeda, Alex Gaynor, Björn Roy Baron, Andreas Hindborg,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
On Tue, Jun 24, 2025 at 01:27:38AM +0200, Benno Lossin wrote:
> On Mon Jun 23, 2025 at 9:09 PM CEST, Boqun Feng wrote:
> > On Mon, Jun 23, 2025 at 07:30:19PM +0100, Gary Guo wrote:
> >> cannot just transmute between pointers and usize (which is its
> >> Repr):
> >> * Transmuting from pointer to usize discards provenance
> >> * Transmuting from usize to pointer gives invalid provenance
> >>
> >> We want neither behaviour, so we must store `usize` directly and
> >> always go through the repr conversion functions.
> >>
> >
> > If we store `usize`, how can we support the `get_mut()` then? E.g.
> >
> > static V: i32 = 32;
> >
> > let mut x = Atomic::new(&V as *const i32 as *mut i32);
> > // ^ assume we expose_provenance() in new().
> >
> > let ptr: &mut *mut i32 = x.get_mut(); // which is `&mut *self.0.get()`.
> >
> > let ptr_val = *ptr; // Does `ptr_val` have the proper provenance?
>
> If `get_mut` transmutes the integer into a pointer, then it will have
> the wrong provenance (it will just have plain invalid provenance).
>
The key topic Gary and I have been discussing is whether we should
define Atomic<T> as:
(my current implementation)
pub struct Atomic<T: AllowAtomic>(Opaque<T>);
or
(Gary's suggestion)
pub struct Atomic<T: AllowAtomic>(Opaque<T::Repr>);
`T::Repr` is guaranteed to have the same size and alignment as `T`, and
per our discussion, it makes sense to further require that
`transmute::<T, T::Repr>()` is safe (as a safety requirement of
`AllowAtomic`); or we can say the bit validity of `T` is preserved by
`T::Repr`: a valid bit combination of `T` can be transmuted to
`T::Repr`, and if transmuted back, it's the same bit combination.
Now as I pointed out, if we use `Opaque<T::Repr>`, then `.get_mut()`
would be unsound for `Atomic<*mut T>`. And Gary's concern is that in
the current implementation, we directly cast a `*mut T` (from
`Opaque::get()`) into a `*mut T::Repr`, and pass it directly into C/asm
atomic primitives. However, I think with the additional safety
requirement above, this shouldn't be a problem: the C/asm atomic
primitives would just pass the address to an asm block, which is
outside the Rust abstract machine, and as long as the C/asm atomic
primitives are implemented correctly, the bit representation of `T`
remains valid after the asm blocks.
So I think the current implementation still works and is better.
Regards,
Boqun
> ---
> Cheers,
> Benno
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 01/10] rust: Introduce atomic API helpers
2025-06-18 16:49 ` [PATCH v5 01/10] rust: Introduce atomic API helpers Boqun Feng
@ 2025-06-26 8:44 ` Andreas Hindborg
2025-06-27 14:00 ` Boqun Feng
0 siblings, 1 reply; 82+ messages in thread
From: Andreas Hindborg @ 2025-06-26 8:44 UTC (permalink / raw)
To: Boqun Feng
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Gary Guo, Björn Roy Baron, Benno Lossin,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
"Boqun Feng" <boqun.feng@gmail.com> writes:
> In order to support LKMM atomics in Rust, add rust_helper_* for atomic
> APIs. These helpers ensure the implementation of LKMM atomics in Rust is
> the same as in C. This avoids the maintenance burden of having two
> similar atomic implementations in asm.
>
> Originally-by: Mark Rutland <mark.rutland@arm.com>
> Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> ---
> rust/helpers/atomic.c | 1038 +++++++++++++++++++++
> rust/helpers/helpers.c | 1 +
> scripts/atomic/gen-atomics.sh | 1 +
> scripts/atomic/gen-rust-atomic-helpers.sh | 65 ++
> 4 files changed, 1105 insertions(+)
> create mode 100644 rust/helpers/atomic.c
> create mode 100755 scripts/atomic/gen-rust-atomic-helpers.sh
>
> diff --git a/rust/helpers/atomic.c b/rust/helpers/atomic.c
> new file mode 100644
> index 000000000000..00bf10887928
> --- /dev/null
> +++ b/rust/helpers/atomic.c
> @@ -0,0 +1,1038 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +// Generated by scripts/atomic/gen-rust-atomic-helpers.sh
> +// DO NOT MODIFY THIS FILE DIRECTLY
If this file is generated, why check it in? Can't we run the generator
at build time?
Best regards,
Andreas Hindborg
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 02/10] rust: sync: Add basic atomic operation mapping framework
2025-06-18 16:49 ` [PATCH v5 02/10] rust: sync: Add basic atomic operation mapping framework Boqun Feng
@ 2025-06-26 8:50 ` Andreas Hindborg
2025-06-26 10:17 ` Andreas Hindborg
1 sibling, 0 replies; 82+ messages in thread
From: Andreas Hindborg @ 2025-06-26 8:50 UTC (permalink / raw)
To: Boqun Feng
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Gary Guo, Björn Roy Baron, Benno Lossin,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
"Boqun Feng" <boqun.feng@gmail.com> writes:
> Preparation for generic atomic implementation. To unify the
> implementation of a generic method over `i32` and `i64`, the C side
> atomic methods need to be grouped so that in a generic method, they can
> be referred to as <type>::<method>, otherwise their parameters and return
> value are different between `i32` and `i64`, which would require using
> `transmute()` to unify the type into a `T`.
>
> Introduce `AtomicImpl` to represent a basic type in Rust that has the
> direct mapping to an atomic implementation from C. This trait is sealed,
> and currently only `i32` and `i64` impl this.
>
> Further, different methods are put into different `*Ops` trait groups,
> and this is for the future when smaller types like `i8`/`i16` are
> supported but only with a limited set of API (e.g. only set(), load(),
> xchg() and cmpxchg(), no add() or sub() etc).
>
> While the atomic mod is introduced, documentation is also added for
> memory models and data races.
>
> Also bump my role to the maintainer of ATOMIC INFRASTRUCTURE to reflect
> my responsibility for the Rust atomic mod.
>
> Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> ---
> MAINTAINERS | 4 +-
> rust/kernel/sync.rs | 1 +
> rust/kernel/sync/atomic.rs | 19 ++++
> rust/kernel/sync/atomic/ops.rs | 199 +++++++++++++++++++++++++++++++++
> 4 files changed, 222 insertions(+), 1 deletion(-)
> create mode 100644 rust/kernel/sync/atomic.rs
> create mode 100644 rust/kernel/sync/atomic/ops.rs
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 0c1d245bf7b8..5eef524975ca 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -3894,7 +3894,7 @@ F: drivers/input/touchscreen/atmel_mxt_ts.c
> ATOMIC INFRASTRUCTURE
> M: Will Deacon <will@kernel.org>
> M: Peter Zijlstra <peterz@infradead.org>
> -R: Boqun Feng <boqun.feng@gmail.com>
> +M: Boqun Feng <boqun.feng@gmail.com>
> R: Mark Rutland <mark.rutland@arm.com>
> L: linux-kernel@vger.kernel.org
> S: Maintained
> @@ -3903,6 +3903,8 @@ F: arch/*/include/asm/atomic*.h
> F: include/*/atomic*.h
> F: include/linux/refcount.h
> F: scripts/atomic/
> +F: rust/kernel/sync/atomic.rs
> +F: rust/kernel/sync/atomic/
>
> ATTO EXPRESSSAS SAS/SATA RAID SCSI DRIVER
> M: Bradley Grove <linuxdrivers@attotech.com>
> diff --git a/rust/kernel/sync.rs b/rust/kernel/sync.rs
> index 36a719015583..b620027e0641 100644
> --- a/rust/kernel/sync.rs
> +++ b/rust/kernel/sync.rs
> @@ -10,6 +10,7 @@
> use pin_init;
>
> mod arc;
> +pub mod atomic;
> mod condvar;
> pub mod lock;
> mod locked_by;
> diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
> new file mode 100644
> index 000000000000..65e41dba97b7
> --- /dev/null
> +++ b/rust/kernel/sync/atomic.rs
> @@ -0,0 +1,19 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +//! Atomic primitives.
> +//!
> +//! These primitives have the same semantics as their C counterparts, and the precise definitions of
> +//! the semantics can be found at [`LKMM`]. Note that the Linux Kernel Memory (Consistency) Model is
> +//! the only memory model for Rust code in the kernel, and Rust's own atomics should be avoided.
> +//!
> +//! # Data races
> +//!
> +//! [`LKMM`] atomics have different rules regarding data races:
> +//!
> +//! - A normal write from C side is treated as an atomic write if
> +//! CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=y.
> +//! - Mixed-size atomic accesses don't cause data races.
> +//!
> +//! [`LKMM`]: srctree/tools/memory-mode/
Typo in the link destination.
Best regards,
Andreas Hindborg
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 02/10] rust: sync: Add basic atomic operation mapping framework
2025-06-18 16:49 ` [PATCH v5 02/10] rust: sync: Add basic atomic operation mapping framework Boqun Feng
2025-06-26 8:50 ` Andreas Hindborg
@ 2025-06-26 10:17 ` Andreas Hindborg
2025-06-27 14:30 ` Boqun Feng
1 sibling, 1 reply; 82+ messages in thread
From: Andreas Hindborg @ 2025-06-26 10:17 UTC (permalink / raw)
To: Boqun Feng
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Gary Guo, Björn Roy Baron, Benno Lossin,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
"Boqun Feng" <boqun.feng@gmail.com> writes:
> Preparation for generic atomic implementation. To unify the
> implementation of a generic method over `i32` and `i64`, the C side
> atomic methods need to be grouped so that in a generic method, they can
> be referred to as <type>::<method>, otherwise their parameters and return
> value are different between `i32` and `i64`, which would require using
> `transmute()` to unify the type into a `T`.
I can't follow this, could you expand a bit?
>
> Introduce `AtomicImpl` to represent a basic type in Rust that has the
> direct mapping to an atomic implementation from C. This trait is sealed,
> and currently only `i32` and `i64` impl this.
>
> Further, different methods are put into different `*Ops` trait groups,
> and this is for the future when smaller types like `i8`/`i16` are
> supported but only with a limited set of API (e.g. only set(), load(),
> xchg() and cmpxchg(), no add() or sub() etc).
>
> While the atomic mod is introduced, documentation is also added for
> memory models and data races.
>
> Also bump my role to the maintainer of ATOMIC INFRASTRUCTURE to reflect
> my responsibility for the Rust atomic mod.
>
> Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> ---
> MAINTAINERS | 4 +-
> rust/kernel/sync.rs | 1 +
> rust/kernel/sync/atomic.rs | 19 ++++
> rust/kernel/sync/atomic/ops.rs | 199 +++++++++++++++++++++++++++++++++
> 4 files changed, 222 insertions(+), 1 deletion(-)
> create mode 100644 rust/kernel/sync/atomic.rs
> create mode 100644 rust/kernel/sync/atomic/ops.rs
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 0c1d245bf7b8..5eef524975ca 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -3894,7 +3894,7 @@ F: drivers/input/touchscreen/atmel_mxt_ts.c
> ATOMIC INFRASTRUCTURE
> M: Will Deacon <will@kernel.org>
> M: Peter Zijlstra <peterz@infradead.org>
> -R: Boqun Feng <boqun.feng@gmail.com>
> +M: Boqun Feng <boqun.feng@gmail.com>
> R: Mark Rutland <mark.rutland@arm.com>
> L: linux-kernel@vger.kernel.org
> S: Maintained
> @@ -3903,6 +3903,8 @@ F: arch/*/include/asm/atomic*.h
> F: include/*/atomic*.h
> F: include/linux/refcount.h
> F: scripts/atomic/
> +F: rust/kernel/sync/atomic.rs
> +F: rust/kernel/sync/atomic/
>
> ATTO EXPRESSSAS SAS/SATA RAID SCSI DRIVER
> M: Bradley Grove <linuxdrivers@attotech.com>
> diff --git a/rust/kernel/sync.rs b/rust/kernel/sync.rs
> index 36a719015583..b620027e0641 100644
> --- a/rust/kernel/sync.rs
> +++ b/rust/kernel/sync.rs
> @@ -10,6 +10,7 @@
> use pin_init;
>
> mod arc;
> +pub mod atomic;
> mod condvar;
> pub mod lock;
> mod locked_by;
> diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
> new file mode 100644
> index 000000000000..65e41dba97b7
> --- /dev/null
> +++ b/rust/kernel/sync/atomic.rs
> @@ -0,0 +1,19 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +//! Atomic primitives.
> +//!
> +//! These primitives have the same semantics as their C counterparts, and the precise definitions of
> +//! the semantics can be found at [`LKMM`]. Note that the Linux Kernel Memory (Consistency) Model is
> +//! the only memory model for Rust code in the kernel, and Rust's own atomics should be avoided.
> +//!
> +//! # Data races
> +//!
> +//! [`LKMM`] atomics have different rules regarding data races:
> +//!
> +//! - A normal write from C side is treated as an atomic write if
> +//! CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=y.
> +//! - Mixed-size atomic accesses don't cause data races.
> +//!
> +//! [`LKMM`]: srctree/tools/memory-mode/
> +
> +pub mod ops;
> diff --git a/rust/kernel/sync/atomic/ops.rs b/rust/kernel/sync/atomic/ops.rs
> new file mode 100644
> index 000000000000..f8825f7c84f0
> --- /dev/null
> +++ b/rust/kernel/sync/atomic/ops.rs
> @@ -0,0 +1,199 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +//! Atomic implementations.
> +//!
> +//! Provides 1:1 mapping of atomic implementations.
> +
> +use crate::bindings::*;
> +use crate::macros::paste;
> +
> +mod private {
> + /// Sealed trait marker to disable customized impls on atomic implementation traits.
> + pub trait Sealed {}
> +}
> +
> +// `i32` and `i64` are the only supported atomic implementations.
> +impl private::Sealed for i32 {}
> +impl private::Sealed for i64 {}
> +
> +/// A marker trait for types that implement atomic operations with C side primitives.
> +///
> +/// This trait is sealed, and only types that have a direct mapping to the C side atomics should
> +/// impl this:
> +///
> +/// - `i32` maps to `atomic_t`.
> +/// - `i64` maps to `atomic64_t`.
> +pub trait AtomicImpl: Sized + Send + Copy + private::Sealed {}
> +
> +// `atomic_t` implements atomic operations on `i32`.
> +impl AtomicImpl for i32 {}
> +
> +// `atomic64_t` implements atomic operations on `i64`.
> +impl AtomicImpl for i64 {}
> +
> +// This macro generates the function signature with given argument list and return type.
Perhaps could we add an example expansion to make the macro easier for
people to parse the first time:
    declare_atomic_method!(
        read[acquire](ptr: *mut Self) -> Self
    );

->

    #[doc = "Atomic read_acquire"]
    ..
    unsafe fn atomic_read_acquire(ptr: *mut Self) -> Self;

    #[doc = "Atomic read"]
    ..
    unsafe fn atomic_read(ptr: *mut Self) -> Self;
> +macro_rules! declare_atomic_method {
> + (
> + $func:ident($($arg:ident : $arg_type:ty),*) $(-> $ret:ty)?
> + ) => {
> + paste!(
> + #[doc = concat!("Atomic ", stringify!($func))]
> + #[doc = "# Safety"]
> + #[doc = "- Any pointer passed to the function has to be a valid pointer"]
> + #[doc = "- Accesses must not cause data races per LKMM:"]
> + #[doc = " - Atomic read racing with normal read, normal write or atomic write is not data race."]
> + #[doc = " - Atomic write racing with normal read or normal write is data-race, unless the"]
> + #[doc = " normal accesses are done at C side and considered as immune to data"]
> + #[doc = " races, e.g. CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC."]
> + unsafe fn [< atomic_ $func >]($($arg: $arg_type,)*) $(-> $ret)?;
> + );
> + };
> + (
> + $func:ident [$variant:ident $($rest:ident)*]($($arg_sig:tt)*) $(-> $ret:ty)?
> + ) => {
> + paste!(
> + declare_atomic_method!(
> + [< $func _ $variant >]($($arg_sig)*) $(-> $ret)?
> + );
> + );
> +
> + declare_atomic_method!(
> + $func [$($rest)*]($($arg_sig)*) $(-> $ret)?
> + );
> + };
> + (
> + $func:ident []($($arg_sig:tt)*) $(-> $ret:ty)?
> + ) => {
> + declare_atomic_method!(
> + $func($($arg_sig)*) $(-> $ret)?
> + );
> + }
> +}
> +
> +// This macro generates the function implementation with given argument list and return type, and it
> +// will replace "call(...)" expression with "$ctype _ $func" to call the real C function.
Similarly, I feel an expansion example is helpful:
    impl_atomic_method!(
        (atomic) read[acquire](ptr: *mut Self) -> Self {
            call(ptr as *mut _)
        }
    );

->

    #[inline(always)]
    unsafe fn atomic_read_acquire(ptr: *mut Self) -> Self {
        unsafe { atomic_read_acquire((ptr as *mut _)) }
    }

    #[inline(always)]
    unsafe fn atomic_read(ptr: *mut Self) -> Self {
        unsafe { atomic_read((ptr as *mut _)) }
    }
Lastly, perhaps we should do `ptr.cast()` rather than `as *mut _` ?
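For what it's worth, the two spellings side by side (a trivial sketch, not
from the patch):

    let p: *mut i32 = core::ptr::null_mut();

    // Both produce the same `*mut u8`; `cast()` can only change the
    // pointee type, while `as` can also change mutability or cast to an
    // integer, so `cast()` is harder to misuse.
    let a: *mut u8 = p.cast();
    let b = p as *mut u8;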
Best regards,
Andreas Hindborg
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 04/10] rust: sync: atomic: Add generic atomics
2025-06-18 16:49 ` [PATCH v5 04/10] rust: sync: atomic: Add generic atomics Boqun Feng
2025-06-21 11:32 ` Gary Guo
@ 2025-06-26 12:15 ` Andreas Hindborg
2025-06-27 15:01 ` Boqun Feng
1 sibling, 1 reply; 82+ messages in thread
From: Andreas Hindborg @ 2025-06-26 12:15 UTC (permalink / raw)
To: Boqun Feng
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Gary Guo, Björn Roy Baron, Benno Lossin,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
"Boqun Feng" <boqun.feng@gmail.com> writes:
[...]
> +
> +impl<T: AllowAtomic> Atomic<T> {
> + /// Creates a new atomic.
> + pub const fn new(v: T) -> Self {
> + Self(Opaque::new(v))
> + }
> +
> + /// Creates a reference to [`Self`] from a pointer.
> + ///
> + /// # Safety
> + ///
> + /// - `ptr` has to be a valid pointer.
> + /// - `ptr` has to be valid for both reads and writes for the whole lifetime `'a`.
> + /// - For the whole lifetime of '`a`, other accesses to the object cannot cause data races
> + /// (defined by [`LKMM`]) against atomic operations on the returned reference.
I feel the wording is a bit tangled here. How about something along the
lines of:

    For the duration of `'a`, all accesses to the object must be atomic.
> + ///
> + /// [`LKMM`]: srctree/tools/memory-model
> + ///
> + /// # Examples
> + ///
> + /// Using [`Atomic::from_ptr()`] combined with [`Atomic::load()`] or [`Atomic::store()`] can
> + /// achieve the same functionality as `READ_ONCE()`/`smp_load_acquire()` or
> + /// `WRITE_ONCE()`/`smp_store_release()` in C side:
> + ///
> + /// ```rust
> + /// # use kernel::types::Opaque;
> + /// use kernel::sync::atomic::{Atomic, Relaxed, Release};
> + ///
> + /// // Assume there is a C struct `Foo`.
> + /// mod cbindings {
> + /// #[repr(C)]
> + /// pub(crate) struct foo { pub(crate) a: i32, pub(crate) b: i32 }
> + /// }
> + ///
> + /// let tmp = Opaque::new(cbindings::foo { a: 1, b: 2});
> + ///
> + /// // struct foo *foo_ptr = ..;
> + /// let foo_ptr = tmp.get();
> + ///
> + /// // SAFETY: `foo_ptr` is a valid pointer, and `.a` is inbound.
Did you mean to say "in bounds"? Or what is "inbound"?
> + /// let foo_a_ptr = unsafe { core::ptr::addr_of_mut!((*foo_ptr).a) };
This should be `&raw mut` by now, right?
> + ///
> + /// // a = READ_ONCE(foo_ptr->a);
> + /// //
> + /// // SAFETY: `foo_a_ptr` is a valid pointer for read, and all accesses on it is atomic, so no
> + /// // data race.
> + /// let a = unsafe { Atomic::from_ptr(foo_a_ptr) }.load(Relaxed);
> + /// # assert_eq!(a, 1);
> + ///
> + /// // smp_store_release(&foo_ptr->a, 2);
> + /// //
> + /// // SAFETY: `foo_a_ptr` is a valid pointer for write, and all accesses on it is atomic, so no
> + /// // data race.
> + /// unsafe { Atomic::from_ptr(foo_a_ptr) }.store(2, Release);
> + /// ```
> + ///
> + /// However, this should be only used when communicating with C side or manipulating a C struct.
> + pub unsafe fn from_ptr<'a>(ptr: *mut T) -> &'a Self
> + where
> + T: Sync,
> + {
> + // CAST: `T` is transparent to `Atomic<T>`.
> + // SAFETY: Per function safety requirement, `ptr` is a valid pointer and the object will
> + // live long enough. It's safe to return a `&Atomic<T>` because function safety requirement
> + // guarantees other accesses won't cause data races.
> + unsafe { &*ptr.cast::<Self>() }
> + }
> +
> + /// Returns a pointer to the underlying atomic variable.
> + ///
> + /// Extra safety requirement on using the return pointer: the operations done via the pointer
> + /// cannot cause data races defined by [`LKMM`].
> + ///
> + /// [`LKMM`]: srctree/tools/memory-model
> + pub const fn as_ptr(&self) -> *mut T {
> + self.0.get()
> + }
> +
> + /// Returns a mutable reference to the underlying atomic variable.
> + ///
> + /// This is safe because the mutable reference of the atomic variable guarantees the exclusive
> + /// access.
> + pub fn get_mut(&mut self) -> &mut T {
> + // SAFETY: `self.as_ptr()` is a valid pointer to `T`, and the object has already been
> + // initialized. `&mut self` guarantees the exclusive access, so it's safe to reborrow
> + // mutably.
> + unsafe { &mut *self.as_ptr() }
> + }
> +}
> +
> +impl<T: AllowAtomic> Atomic<T>
> +where
> + T::Repr: AtomicHasBasicOps,
> +{
> + /// Loads the value from the atomic variable.
> + ///
> + /// # Examples
> + ///
> + /// Simple usages:
> + ///
> + /// ```rust
> + /// use kernel::sync::atomic::{Atomic, Relaxed};
> + ///
> + /// let x = Atomic::new(42i32);
> + ///
> + /// assert_eq!(42, x.load(Relaxed));
> + ///
> + /// let x = Atomic::new(42i64);
> + ///
> + /// assert_eq!(42, x.load(Relaxed));
> + /// ```
> + ///
> + /// Customized new types in [`Atomic`]:
> + ///
> + /// ```rust
> + /// use kernel::sync::atomic::{generic::AllowAtomic, Atomic, Relaxed};
> + ///
> + /// #[derive(Clone, Copy)]
> + /// #[repr(transparent)]
> + /// struct NewType(u32);
> + ///
> + /// // SAFETY: `NewType` is transparent to `u32`, which has the same size and alignment as
> + /// // `i32`.
> + /// unsafe impl AllowAtomic for NewType {
> + /// type Repr = i32;
> + ///
> + /// fn into_repr(self) -> Self::Repr {
> + /// self.0 as i32
> + /// }
> + ///
> + /// fn from_repr(repr: Self::Repr) -> Self {
> + /// NewType(repr as u32)
> + /// }
> + /// }
> + ///
> + /// let n = Atomic::new(NewType(0));
> + ///
> + /// assert_eq!(0, n.load(Relaxed).0);
> + /// ```
> + #[doc(alias("atomic_read", "atomic64_read"))]
> + #[inline(always)]
> + pub fn load<Ordering: AcquireOrRelaxed>(&self, _: Ordering) -> T {
> + let a = self.as_ptr().cast::<T::Repr>();
> +
> + // SAFETY:
> + // - For calling the atomic_read*() function:
> + // - `self.as_ptr()` is a valid pointer, and per the safety requirement of `AllocAtomic`,
Typo `AllocAtomic`.
> + // a `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a valid pointer,
> + // - per the type invariants, the following atomic operation won't cause data races.
> + // - For extra safety requirement of usage on pointers returned by `self.as_ptr():
> + // - atomic operations are used here.
> + let v = unsafe {
> + if Ordering::IS_RELAXED {
> + T::Repr::atomic_read(a)
> + } else {
> + T::Repr::atomic_read_acquire(a)
> + }
> + };
> +
> + T::from_repr(v)
> + }
> +
> + /// Stores a value to the atomic variable.
> + ///
> + /// # Examples
> + ///
> + /// ```rust
> + /// use kernel::sync::atomic::{Atomic, Relaxed};
> + ///
> + /// let x = Atomic::new(42i32);
> + ///
> + /// assert_eq!(42, x.load(Relaxed));
> + ///
> + /// x.store(43, Relaxed);
> + ///
> + /// assert_eq!(43, x.load(Relaxed));
> + /// ```
> + ///
> + #[doc(alias("atomic_set", "atomic64_set"))]
> + #[inline(always)]
> + pub fn store<Ordering: ReleaseOrRelaxed>(&self, v: T, _: Ordering) {
> + let v = T::into_repr(v);
> + let a = self.as_ptr().cast::<T::Repr>();
> +
> + // SAFETY:
> + // - For calling the atomic_set*() function:
> + // - `self.as_ptr()` is a valid pointer, and per the safety requirement of `AllocAtomic`,
Typo `AllocAtomic`.
Best regards,
Andreas Hindborg
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 03/10] rust: sync: atomic: Add ordering annotation types
2025-06-18 16:49 ` [PATCH v5 03/10] rust: sync: atomic: Add ordering annotation types Boqun Feng
2025-06-19 10:31 ` Peter Zijlstra
2025-06-21 11:18 ` Gary Guo
@ 2025-06-26 12:36 ` Andreas Hindborg
2025-06-27 14:34 ` Boqun Feng
2 siblings, 1 reply; 82+ messages in thread
From: Andreas Hindborg @ 2025-06-26 12:36 UTC (permalink / raw)
To: Boqun Feng
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Gary Guo, Björn Roy Baron, Benno Lossin,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
"Boqun Feng" <boqun.feng@gmail.com> writes:
> Preparation for atomic primitives. Instead of a suffix like _acquire, a
> method parameter along with the corresponding generic parameter will be
> used to specify the ordering of an atomic operation. For example,
> atomic load() can be defined as:
>
> impl<T: ...> Atomic<T> {
> pub fn load<O: AcquireOrRelaxed>(&self, _o: O) -> T { ... }
> }
>
> and acquire users would do:
>
> let r = x.load(Acquire);
>
> relaxed users:
>
> let r = x.load(Relaxed);
>
> doing the following:
>
> let r = x.load(Release);
>
> will cause a compiler error.
>
> Compared to suffixes, it's easier to tell what ordering variants an
> operation has, and it also makes it easier to unify the implementation of
> all ordering variants in one method via generics. The `IS_RELAXED` and
> `TYPE` associated consts are for generic functions to pick up the
> particular implementation specified by an ordering annotation.
>
> Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> ---
> rust/kernel/sync/atomic.rs | 3 +
> rust/kernel/sync/atomic/ordering.rs | 106 ++++++++++++++++++++++++++++
> 2 files changed, 109 insertions(+)
> create mode 100644 rust/kernel/sync/atomic/ordering.rs
>
> diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
> index 65e41dba97b7..9fe5d81fc2a9 100644
> --- a/rust/kernel/sync/atomic.rs
> +++ b/rust/kernel/sync/atomic.rs
> @@ -17,3 +17,6 @@
> //! [`LKMM`]: srctree/tools/memory-mode/
>
> pub mod ops;
> +pub mod ordering;
> +
> +pub use ordering::{Acquire, Full, Relaxed, Release};
> diff --git a/rust/kernel/sync/atomic/ordering.rs b/rust/kernel/sync/atomic/ordering.rs
> new file mode 100644
> index 000000000000..96757574ed7d
> --- /dev/null
> +++ b/rust/kernel/sync/atomic/ordering.rs
> @@ -0,0 +1,106 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +//! Memory orderings.
> +//!
> +//! The semantics of these orderings follows the [`LKMM`] definitions and rules.
> +//!
> +//! - [`Acquire`] and [`Release`] are similar to their counterpart in Rust memory model.
> +//! - [`Full`] means "fully-ordered", that is:
> +//! - It provides ordering between all the preceding memory accesses and the annotated operation.
> +//! - It provides ordering between the annotated operation and all the following memory accesses.
> +//! - It provides ordering between all the preceding memory accesses and all the following memory
> +//! accesses.
> +//! - All the orderings are as strong as a full memory barrier (i.e. `smp_mb()`).
> +//! - [`Relaxed`] is similar to the counterpart in Rust memory model, except that dependency
> +//! orderings are also honored in [`LKMM`]. Dependency orderings are described in "DEPENDENCY
> +//! RELATIONS" in [`LKMM`]'s [`explanation`].
> +//!
> +//! [`LKMM`]: srctree/tools/memory-model/
> +//! [`explanation`]: srctree/tools/memory-model/Documentation/explanation.txt
> +
> +/// The annotation type for relaxed memory ordering.
> +pub struct Relaxed;
> +
> +/// The annotation type for acquire memory ordering.
> +pub struct Acquire;
> +
> +/// The annotation type for release memory ordering.
> +pub struct Release;
> +
> +/// The annotation type for fully-ordered memory ordering.
> +pub struct Full;
> +
> +/// Describes the exact memory ordering.
> +pub enum OrderingType {
> + /// Relaxed ordering.
> + Relaxed,
> + /// Acquire ordering.
> + Acquire,
> + /// Release ordering.
> + Release,
> + /// Fully-ordered.
> + Full,
> +}
> +
> +mod internal {
> + /// Unit types for ordering annotation.
> + ///
> + /// Sealed trait, can be only implemented inside atomic mod.
> + pub trait OrderingUnit {
> + /// Describes the exact memory ordering.
> + const TYPE: super::OrderingType;
> + }
> +}
> +
> +impl internal::OrderingUnit for Relaxed {
> + const TYPE: OrderingType = OrderingType::Relaxed;
> +}
> +
> +impl internal::OrderingUnit for Acquire {
> + const TYPE: OrderingType = OrderingType::Acquire;
> +}
> +
> +impl internal::OrderingUnit for Release {
> + const TYPE: OrderingType = OrderingType::Release;
> +}
> +
> +impl internal::OrderingUnit for Full {
> + const TYPE: OrderingType = OrderingType::Full;
> +}
> +
> +/// The trait bound for annotating operations that should support all orderings.
> +pub trait All: internal::OrderingUnit {}
I think I would prefer `Any` rather than `All` here. Because it is "any
of", not "all of them at once".
Best regards,
Andreas Hindborg
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 06/10] rust: sync: atomic: Add the framework of arithmetic operations
2025-06-18 16:49 ` [PATCH v5 06/10] rust: sync: atomic: Add the framework of arithmetic operations Boqun Feng
2025-06-21 11:41 ` Gary Guo
@ 2025-06-26 12:39 ` Andreas Hindborg
2025-06-28 3:04 ` Boqun Feng
1 sibling, 1 reply; 82+ messages in thread
From: Andreas Hindborg @ 2025-06-26 12:39 UTC (permalink / raw)
To: Boqun Feng
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Gary Guo, Björn Roy Baron, Benno Lossin,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
"Boqun Feng" <boqun.feng@gmail.com> writes:
> One important set of atomic operations is the arithmetic operations,
> i.e. add(), sub(), fetch_add(), add_return(), etc. However, it may not
> make sense for all the types that implement `AllowAtomic` to have
> arithmetic operations, for example a `Foo(u32)` may not have a
> reasonable add() or sub(); plus, subword types (`u8` and `u16`)
> currently don't have atomic arithmetic operations even on the C side
> and might not have them in the future in Rust (because they are usually
> suboptimal on a few architectures). Therefore add a subtrait of
> `AllowAtomic` describing which types support atomic arithmetic
> operations.
>
> A few things about this `AllowAtomicArithmetic` trait:
>
> * It has an associated type `Delta` instead of using
> `AllowAtomic::Repr` because a `Bar(u32)` (whose `Repr` is `i32`)
> may not want an `add(&self, i32)`, but an `add(&self, u32)`.
>
> * `AtomicImpl` types already implement an `AtomicHasArithmeticOps`
> trait, so add blanket implementation for them. In the future, `i8` and
> `i16` may impl `AtomicImpl` but not `AtomicHasArithmeticOps` if
> arithmetic operations are not available.
>
> Only add() and fetch_add() are added. The rest will be added in the
> future.
>
> Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> ---
> rust/kernel/sync/atomic/generic.rs | 101 +++++++++++++++++++++++++++++
> 1 file changed, 101 insertions(+)
>
> diff --git a/rust/kernel/sync/atomic/generic.rs b/rust/kernel/sync/atomic/generic.rs
> index bcdbeea45dd8..8c5bd90b2619 100644
> --- a/rust/kernel/sync/atomic/generic.rs
> +++ b/rust/kernel/sync/atomic/generic.rs
> @@ -57,6 +57,23 @@ fn from_repr(repr: Self::Repr) -> Self {
> }
> }
>
> +/// Atomics that allow arithmetic operations with an integer type.
> +pub trait AllowAtomicArithmetic: AllowAtomic {
> + /// The delta type for arithmetic operations.
> + type Delta;
> +
> + /// Converts [`Self::Delta`] into the representation of the atomic type.
> + fn delta_into_repr(d: Self::Delta) -> Self::Repr;
> +}
> +
> +impl<T: AtomicImpl + AtomicHasArithmeticOps> AllowAtomicArithmetic for T {
> + type Delta = Self;
> +
> + fn delta_into_repr(d: Self::Delta) -> Self::Repr {
> + d
> + }
> +}
> +
> impl<T: AllowAtomic> Atomic<T> {
> /// Creates a new atomic.
> pub const fn new(v: T) -> Self {
> @@ -410,3 +427,87 @@ fn try_cmpxchg<Ordering: All>(&self, old: &mut T, new: T, _: Ordering) -> bool {
> }
> }
> }
> +
> +impl<T: AllowAtomicArithmetic> Atomic<T>
> +where
> + T::Repr: AtomicHasArithmeticOps,
> +{
> + /// Atomic add.
> + ///
> + /// The addition is a wrapping addition.
> + ///
> + /// # Examples
> + ///
> + /// ```rust
> + /// use kernel::sync::atomic::{Atomic, Relaxed};
> + ///
> + /// let x = Atomic::new(42);
> + ///
> + /// assert_eq!(42, x.load(Relaxed));
> + ///
> + /// x.add(12, Relaxed);
> + ///
> + /// assert_eq!(54, x.load(Relaxed));
> + /// ```
> + #[inline(always)]
> + pub fn add<Ordering: RelaxedOnly>(&self, v: T::Delta, _: Ordering) {
> + let v = T::delta_into_repr(v);
> + let a = self.as_ptr().cast::<T::Repr>();
> +
> + // SAFETY:
> + // - For calling the atomic_add() function:
> + // - `self.as_ptr()` is a valid pointer, and per the safety requirement of `AllocAtomic`,
Typo, should be `AllowAtomic`.
> + // a `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a valid pointer,
> + // - per the type invariants, the following atomic operation won't cause data races.
> + // - For extra safety requirement of usage on pointers returned by `self.as_ptr():
> + // - atomic operations are used here.
> + unsafe {
> + T::Repr::atomic_add(a, v);
> + }
> + }
> +
> + /// Atomic fetch and add.
> + ///
> + /// The addition is a wrapping addition.
> + ///
> + /// # Examples
> + ///
> + /// ```rust
> + /// use kernel::sync::atomic::{Atomic, Acquire, Full, Relaxed};
> + ///
> + /// let x = Atomic::new(42);
> + ///
> + /// assert_eq!(42, x.load(Relaxed));
> + ///
> + /// assert_eq!(54, { x.fetch_add(12, Acquire); x.load(Relaxed) });
> + ///
> + /// let x = Atomic::new(42);
> + ///
> + /// assert_eq!(42, x.load(Relaxed));
> + ///
> + /// assert_eq!(54, { x.fetch_add(12, Full); x.load(Relaxed) } );
> + /// ```
> + #[inline(always)]
> + pub fn fetch_add<Ordering: All>(&self, v: T::Delta, _: Ordering) -> T {
> + let v = T::delta_into_repr(v);
> + let a = self.as_ptr().cast::<T::Repr>();
> +
> + // SAFETY:
> + // - For calling the atomic_fetch_add*() function:
> + // - `self.as_ptr()` is a valid pointer, and per the safety requirement of `AllocAtomic`,
Typo, should be `AllowAtomic`.
Best regards,
Andreas Hindborg
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 07/10] rust: sync: atomic: Add Atomic<u{32,64}>
2025-06-18 16:49 ` [PATCH v5 07/10] rust: sync: atomic: Add Atomic<u{32,64}> Boqun Feng
@ 2025-06-26 12:47 ` Andreas Hindborg
0 siblings, 0 replies; 82+ messages in thread
From: Andreas Hindborg @ 2025-06-26 12:47 UTC (permalink / raw)
To: Boqun Feng
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Gary Guo, Björn Roy Baron, Benno Lossin,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
"Boqun Feng" <boqun.feng@gmail.com> writes:
> Add generic atomic support for basic unsigned types that have an
> `AtomicImpl` with the same size and alignment.
>
> Unit tests are added including Atomic<i32> and Atomic<i64>.
>
> Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Reviewed-by: Andreas Hindborg <a.hindborg@kernel.org>
Best regards,
Andreas Hindborg
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 08/10] rust: sync: atomic: Add Atomic<{usize,isize}>
2025-06-18 16:49 ` [PATCH v5 08/10] rust: sync: atomic: Add Atomic<{usize,isize}> Boqun Feng
@ 2025-06-26 12:49 ` Andreas Hindborg
0 siblings, 0 replies; 82+ messages in thread
From: Andreas Hindborg @ 2025-06-26 12:49 UTC (permalink / raw)
To: Boqun Feng
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Gary Guo, Björn Roy Baron, Benno Lossin,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
"Boqun Feng" <boqun.feng@gmail.com> writes:
> Add generic atomic support for `usize` and `isize`. Note that instead of
> mapping directly to `atomic_long_t`, the representation type
> (`AllowAtomic::Repr`) is selected based on CONFIG_64BIT. This reduces
> the need to create `atomic_long_*` helpers, which could reduce the
> kernel binary size if inline helpers are not available.
>
> Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Reviewed-by: Andreas Hindborg <a.hindborg@kernel.org>
Best regards,
Andreas Hindborg
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 05/10] rust: sync: atomic: Add atomic {cmp,}xchg operations
2025-06-18 16:49 ` [PATCH v5 05/10] rust: sync: atomic: Add atomic {cmp,}xchg operations Boqun Feng
2025-06-21 11:37 ` Gary Guo
@ 2025-06-26 13:12 ` Andreas Hindborg
2025-06-28 3:03 ` Boqun Feng
2025-06-27 8:58 ` Benno Lossin
2 siblings, 1 reply; 82+ messages in thread
From: Andreas Hindborg @ 2025-06-26 13:12 UTC (permalink / raw)
To: Boqun Feng
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Gary Guo, Björn Roy Baron, Benno Lossin,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
"Boqun Feng" <boqun.feng@gmail.com> writes:
> xchg() and cmpxchg() are basic operations on atomics. Provide these based
> on C APIs.
>
> Note that cmpxchg() uses a similar function signature to
> compare_exchange() in Rust std: returning a `Result`, `Ok(old)` means
> the operation succeeds and `Err(old)` means the operation fails.
>
> Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> ---
> rust/kernel/sync/atomic/generic.rs | 154 +++++++++++++++++++++++++++++
> 1 file changed, 154 insertions(+)
>
> diff --git a/rust/kernel/sync/atomic/generic.rs b/rust/kernel/sync/atomic/generic.rs
> index 73c26f9cf6b8..bcdbeea45dd8 100644
> --- a/rust/kernel/sync/atomic/generic.rs
> +++ b/rust/kernel/sync/atomic/generic.rs
> @@ -256,3 +256,157 @@ pub fn store<Ordering: ReleaseOrRelaxed>(&self, v: T, _: Ordering) {
> };
> }
> }
> +
> +impl<T: AllowAtomic> Atomic<T>
> +where
> + T::Repr: AtomicHasXchgOps,
> +{
> + /// Atomic exchange.
> + ///
> + /// # Examples
> + ///
> + /// ```rust
> + /// use kernel::sync::atomic::{Atomic, Acquire, Relaxed};
> + ///
> + /// let x = Atomic::new(42);
> + ///
> + /// assert_eq!(42, x.xchg(52, Acquire));
> + /// assert_eq!(52, x.load(Relaxed));
> + /// ```
> + #[doc(alias("atomic_xchg", "atomic64_xchg"))]
> + #[inline(always)]
> + pub fn xchg<Ordering: All>(&self, v: T, _: Ordering) -> T {
> + let v = T::into_repr(v);
> + let a = self.as_ptr().cast::<T::Repr>();
> +
> + // SAFETY:
> + // - For calling the atomic_xchg*() function:
> + // - `self.as_ptr()` is a valid pointer, and per the safety requirement of `AllocAtomic`,
Typo: `AllowAtomic`.
> + // a `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a valid pointer,
> + // - per the type invariants, the following atomic operation won't cause data races.
> + // - For extra safety requirement of usage on pointers returned by `self.as_ptr():
> + // - atomic operations are used here.
> + let ret = unsafe {
> + match Ordering::TYPE {
> + OrderingType::Full => T::Repr::atomic_xchg(a, v),
> + OrderingType::Acquire => T::Repr::atomic_xchg_acquire(a, v),
> + OrderingType::Release => T::Repr::atomic_xchg_release(a, v),
> + OrderingType::Relaxed => T::Repr::atomic_xchg_relaxed(a, v),
> + }
> + };
> +
> + T::from_repr(ret)
> + }
> +
> + /// Atomic compare and exchange.
> + ///
> + /// Compare: The comparison is done via the byte level comparison between the atomic variables
> + /// with the `old` value.
> + ///
> + /// Ordering: When succeeds, provides the corresponding ordering as the `Ordering` type
> + /// parameter indicates, and a failed one doesn't provide any ordering, the read part of a
> + /// failed cmpxchg should be treated as a relaxed read.
Rust's `core::sync::atomic` types have this sentence on the success
ordering for `compare_exchange`:

    Using Acquire as success ordering makes the store part of this
    operation Relaxed, and using Release makes the successful load
    Relaxed.
Does this translate to LKMM cmpxchg operations? If so, I think we should
include this sentence. This also applies to `Atomic::xchg`.
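For reference, the userland form with separate success/failure orderings
looks like this (standard-library sketch, not the proposed kernel API):

    use std::sync::atomic::{AtomicBool, Ordering};

    let flag = AtomicBool::new(false);

    // Success: AcqRel (Acquire for the load part, Release for the store);
    // failure: the load is performed with Acquire.
    let _ = flag.compare_exchange(false, true, Ordering::AcqRel, Ordering::Acquire);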
Best regards,
Andreas Hindborg
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 10/10] rust: sync: Add memory barriers
2025-06-18 16:49 ` [PATCH v5 10/10] rust: sync: Add memory barriers Boqun Feng
@ 2025-06-26 13:36 ` Andreas Hindborg
2025-06-28 3:42 ` Boqun Feng
0 siblings, 1 reply; 82+ messages in thread
From: Andreas Hindborg @ 2025-06-26 13:36 UTC (permalink / raw)
To: Boqun Feng
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Gary Guo, Björn Roy Baron, Benno Lossin,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
"Boqun Feng" <boqun.feng@gmail.com> writes:
> Memory barriers are building blocks for concurrent code, hence provide
> a minimal set of them.
>
> The compiler barrier, barrier(), is implemented in inline asm instead of
> using core::sync::atomic::compiler_fence() because memory models are
> different: the kernel's atomics are implemented in inline asm, therefore
> the compiler barrier should be implemented in inline asm as well. Also,
> it's currently only public to the kernel crate until there's reasonable
> driver usage.
>
> Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> ---
> rust/helpers/barrier.c | 18 ++++++++++
> rust/helpers/helpers.c | 1 +
> rust/kernel/sync.rs | 1 +
> rust/kernel/sync/barrier.rs | 67 +++++++++++++++++++++++++++++++++++++
> 4 files changed, 87 insertions(+)
> create mode 100644 rust/helpers/barrier.c
> create mode 100644 rust/kernel/sync/barrier.rs
>
> diff --git a/rust/helpers/barrier.c b/rust/helpers/barrier.c
> new file mode 100644
> index 000000000000..cdf28ce8e511
> --- /dev/null
> +++ b/rust/helpers/barrier.c
> @@ -0,0 +1,18 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +#include <asm/barrier.h>
> +
> +void rust_helper_smp_mb(void)
> +{
> + smp_mb();
> +}
> +
> +void rust_helper_smp_wmb(void)
> +{
> + smp_wmb();
> +}
> +
> +void rust_helper_smp_rmb(void)
> +{
> + smp_rmb();
> +}
> diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c
> index 83e89f6a68fb..8ddfc8f84e87 100644
> --- a/rust/helpers/helpers.c
> +++ b/rust/helpers/helpers.c
> @@ -9,6 +9,7 @@
>
> #include "atomic.c"
> #include "auxiliary.c"
> +#include "barrier.c"
> #include "blk.c"
> #include "bug.c"
> #include "build_assert.c"
> diff --git a/rust/kernel/sync.rs b/rust/kernel/sync.rs
> index b620027e0641..c7c0e552bafe 100644
> --- a/rust/kernel/sync.rs
> +++ b/rust/kernel/sync.rs
> @@ -11,6 +11,7 @@
>
> mod arc;
> pub mod atomic;
> +pub mod barrier;
> mod condvar;
> pub mod lock;
> mod locked_by;
> diff --git a/rust/kernel/sync/barrier.rs b/rust/kernel/sync/barrier.rs
> new file mode 100644
> index 000000000000..36a5c70e6716
> --- /dev/null
> +++ b/rust/kernel/sync/barrier.rs
> @@ -0,0 +1,67 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +//! Memory barriers.
> +//!
> +//! These primitives have the same semantics as their C counterparts: and the precise definitions of
> +//! semantics can be found at [`LKMM`].
> +//!
> +//! [`LKMM`]: srctree/tools/memory-mode/
Typo in link target.
> +
> +/// A compiler barrier.
> +///
> +/// An explicic compiler barrier function that prevents the compiler from moving the memory
> +/// accesses either side of it to the other side.
Typo in "explicit".
How about:

    A compiler barrier. Prevents the compiler from reordering
    memory access instructions across the barrier.
> +pub(crate) fn barrier() {
> + // By default, Rust inline asms are treated as being able to access any memory or flags, hence
> + // it suffices as a compiler barrier.
> + //
> + // SAFETY: An empty asm block should be safe.
> + unsafe {
> + core::arch::asm!("");
> + }
> +}
> +
> +/// A full memory barrier.
> +///
> +/// A barrier function that prevents both the compiler and the CPU from moving the memory accesses
> +/// either side of it to the other side.
    A barrier that prevents compiler and CPU from reordering memory access
    instructions across the barrier.
> +pub fn smp_mb() {
> + if cfg!(CONFIG_SMP) {
> + // SAFETY: `smp_mb()` is safe to call.
> + unsafe {
> + bindings::smp_mb();
> + }
> + } else {
> + barrier();
> + }
> +}
> +
> +/// A write-write memory barrier.
> +///
> +/// A barrier function that prevents both the compiler and the CPU from moving the memory write
> +/// accesses either side of it to the other side.
    A barrier that prevents compiler and CPU from reordering memory write
    instructions across the barrier.
> +pub fn smp_wmb() {
> + if cfg!(CONFIG_SMP) {
> + // SAFETY: `smp_wmb()` is safe to call.
> + unsafe {
> + bindings::smp_wmb();
> + }
> + } else {
> + barrier();
> + }
> +}
> +
> +/// A read-read memory barrier.
> +///
> +/// A barrier function that prevents both the compiler and the CPU from moving the memory read
> +/// accesses either side of it to the other side.
    A barrier that prevents compiler and CPU from reordering memory read
    instructions across the barrier.
> +pub fn smp_rmb() {
> + if cfg!(CONFIG_SMP) {
> + // SAFETY: `smp_rmb()` is safe to call.
> + unsafe {
> + bindings::smp_rmb();
> + }
> + } else {
> + barrier();
> + }
> +}
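For context, the classic pairing that smp_wmb()/smp_rmb() enable, sketched
with the `Atomic` API from the earlier patches in this series (illustrative
only, not part of the patch):

    use kernel::sync::atomic::{Atomic, Relaxed};
    use kernel::sync::barrier::{smp_rmb, smp_wmb};

    fn writer(data: &Atomic<i32>, flag: &Atomic<i32>) {
        data.store(42, Relaxed);
        smp_wmb(); // orders the data store before the flag store
        flag.store(1, Relaxed);
    }

    fn reader(data: &Atomic<i32>, flag: &Atomic<i32>) {
        if flag.load(Relaxed) == 1 {
            smp_rmb(); // orders the flag load before the data load
            assert_eq!(data.load(Relaxed), 42);
        }
    }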
Best regards,
Andreas Hindborg
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 04/10] rust: sync: atomic: Add generic atomics
2025-06-24 16:35 ` Boqun Feng
@ 2025-06-26 13:54 ` Benno Lossin
2025-07-04 21:22 ` Boqun Feng
0 siblings, 1 reply; 82+ messages in thread
From: Benno Lossin @ 2025-06-26 13:54 UTC (permalink / raw)
To: Boqun Feng
Cc: Gary Guo, linux-kernel, rust-for-linux, lkmm, linux-arch,
Miguel Ojeda, Alex Gaynor, Björn Roy Baron, Andreas Hindborg,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
On Tue Jun 24, 2025 at 6:35 PM CEST, Boqun Feng wrote:
> On Tue, Jun 24, 2025 at 01:27:38AM +0200, Benno Lossin wrote:
>> On Mon Jun 23, 2025 at 9:09 PM CEST, Boqun Feng wrote:
>> > On Mon, Jun 23, 2025 at 07:30:19PM +0100, Gary Guo wrote:
>> >> cannot just transmute between pointers and usize (which is its
>> >> Repr):
>> >> * Transmuting from pointer to usize discards provenance
>> >> * Transmuting from usize to pointer gives invalid provenance
>> >>
>> >> We want neither behaviour, so we must store `usize` directly and
>> >> always call into repr functions.
>> >>
>> >
>> > If we store `usize`, how can we support the `get_mut()` then? E.g.
>> >
>> > static V: i32 = 32;
>> >
>> > let mut x = Atomic::new(&V as *const i32 as *mut i32);
>> > // ^ assume we expose_provenance() in new().
>> >
>> > let ptr: &mut *mut i32 = x.get_mut(); // which is `&mut self.0.get()`.
>> >
>> > let ptr_val = *ptr; // Does `ptr_val` have the proper provenance?
>>
>> If `get_mut` transmutes the integer into a pointer, then it will have
>> the wrong provenance (it will just have plain invalid provenance).
>>
>
> The key topic Gary and I have been discussing is whether we should
> define Atomic<T> as:
>
> (my current implementation)
>
> pub struct Atomic<T: AllowAtomic>(Opaque<T>);
>
> or
>
> (Gary's suggestion)
>
> pub struct Atomic<T: AllowAtomic>(Opaque<T::Repr>);
>
> `T::Repr` is guaranteed to be the same size and alignment of `T`, and
> per our discussion, it makes sense to further require that `transmute<T,
> T::Repr>()` should also be safe (as the safety requirement of
> `AllowAtomic`), or we can say `T` bit validity can be preserved by
> `T::Repr`: a valid bit combination of `T` can be transmuted to `T::Repr`,
> and if transmuted back, it's the same bit combination.
>
> Now as I pointed out, if we use `Opaque<T::Repr>`, then `.get_mut()`
> would be unsound for `Atomic<*mut T>`. And Gary's concern is that in
> the current implementation, we directly cast a `*mut T` (from
> `Opaque::get()`) into a `*mut T::Repr`, and pass it directly into C/asm
> atomic primitives. However, I think with the additional safety
> requirement above, this shouldn't be a problem: because the C/asm atomic
> primitives would just pass the address to an asm block, and that'll be
> out of the Rust abstract machine, and as long as the C/asm atomic
> primitives are implemented correctly, the bit representation of `T`
> remains valid after asm blocks.
>
> So I think the current implementation still works and is better.
I don't think there is a big difference between `Opaque<T>` and
`Opaque<T::Repr>` if we have the transmute equivalence between the two.
From a safety perspective, you don't gain or lose anything by using the
first over the second one. They both require the invariant that they are
valid (as `Opaque` removes that... we should really be using
`UnsafeCell` here instead... why aren't we doing that?).
Where their differences do play a role is in the implementation of the
various operations on the atomic. If you need to pass `*mut T::Repr` to
the C side, it's better if you store `Opaque<T::Repr>` and if you want
to give `&mut T` back to the user, then it's better to
store `Opaque<T>`.
I would choose the one that results in overall less code. It's probably
going to be `Opaque<T::Repr>`, since we will have more operations that
need `*mut T::Repr` than `*mut T`.
Now I don't understand why you value `Opaque<T>` over `Opaque<T::Repr>`,
they are (up to transmute-equivalence) the same.
I think that you said at one point that `Opaque<T>` makes more sense
from a conceptual view, since we're building `Atomic<T>`. I think that
doesn't really matter, since it's an implementation detail. The same
argument could be made about casting `u64` to `i64` for implementing the
atomics: just implement atomics in C also for `u64` and then use that
instead...
---
Cheers,
Benno
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 05/10] rust: sync: atomic: Add atomic {cmp,}xchg operations
2025-06-18 16:49 ` [PATCH v5 05/10] rust: sync: atomic: Add atomic {cmp,}xchg operations Boqun Feng
2025-06-21 11:37 ` Gary Guo
2025-06-26 13:12 ` Andreas Hindborg
@ 2025-06-27 8:58 ` Benno Lossin
2025-06-27 13:53 ` Boqun Feng
2 siblings, 1 reply; 82+ messages in thread
From: Benno Lossin @ 2025-06-27 8:58 UTC (permalink / raw)
To: Boqun Feng, linux-kernel, rust-for-linux, lkmm, linux-arch
Cc: Miguel Ojeda, Alex Gaynor, Gary Guo, Björn Roy Baron,
Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich,
Will Deacon, Peter Zijlstra, Mark Rutland, Wedson Almeida Filho,
Viresh Kumar, Lyude Paul, Ingo Molnar, Mitchell Levy,
Paul E. McKenney, Greg Kroah-Hartman, Linus Torvalds,
Thomas Gleixner
On Wed Jun 18, 2025 at 6:49 PM CEST, Boqun Feng wrote:
> +impl<T: AllowAtomic> Atomic<T>
> +where
> + T::Repr: AtomicHasXchgOps,
> +{
> + /// Atomic exchange.
> + ///
> + /// # Examples
> + ///
> + /// ```rust
> + /// use kernel::sync::atomic::{Atomic, Acquire, Relaxed};
> + ///
> + /// let x = Atomic::new(42);
> + ///
> + /// assert_eq!(42, x.xchg(52, Acquire));
> + /// assert_eq!(52, x.load(Relaxed));
> + /// ```
> + #[doc(alias("atomic_xchg", "atomic64_xchg"))]
> + #[inline(always)]
> + pub fn xchg<Ordering: All>(&self, v: T, _: Ordering) -> T {
Can we name this `exchange`?
> + let v = T::into_repr(v);
> + let a = self.as_ptr().cast::<T::Repr>();
> +
> + // SAFETY:
> + // - For calling the atomic_xchg*() function:
> + // - `self.as_ptr()` is a valid pointer, and per the safety requirement of `AllocAtomic`,
> + // a `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a valid pointer,
> + // - per the type invariants, the following atomic operation won't cause data races.
> + // - For extra safety requirement of usage on pointers returned by `self.as_ptr():
> + // - atomic operations are used here.
> + let ret = unsafe {
> + match Ordering::TYPE {
> + OrderingType::Full => T::Repr::atomic_xchg(a, v),
> + OrderingType::Acquire => T::Repr::atomic_xchg_acquire(a, v),
> + OrderingType::Release => T::Repr::atomic_xchg_release(a, v),
> + OrderingType::Relaxed => T::Repr::atomic_xchg_relaxed(a, v),
> + }
> + };
> +
> + T::from_repr(ret)
> + }
> +
> + /// Atomic compare and exchange.
> + ///
> + /// Compare: The comparison is done via the byte level comparison between the atomic variables
> + /// with the `old` value.
> + ///
> + /// Ordering: When succeeds, provides the corresponding ordering as the `Ordering` type
> + /// parameter indicates, and a failed one doesn't provide any ordering, the read part of a
> + /// failed cmpxchg should be treated as a relaxed read.
This is a bit confusing to me. The operation has a store and a load
operation and both can have different orderings (at least in Rust
userland) depending on the success/failure of the operation. In
userland, I can supply `AcqRel` and `Acquire` to ensure that I always
have Acquire semantics on any read and `Release` semantics on any write
(which I would think is a common case). How do I do this using your API?
Don't I need `Acquire` semantics on the read in order for
`compare_exchange` to give me the correct behavior in this example:
    pub struct Foo {
        data: Atomic<u64>,
        new: Atomic<bool>,
        ready: Atomic<bool>,
    }

    impl Foo {
        pub fn new() -> Self {
            Self {
                data: Atomic::new(0),
                new: Atomic::new(false),
                ready: Atomic::new(false),
            }
        }

        pub fn get(&self) -> Option<u64> {
            if self.new.compare_exchange(true, false, Release).is_ok() {
                let val = self.data.load(Acquire);
                self.ready.store(false, Release);
                Some(val)
            } else {
                None
            }
        }

        pub fn set(&self, val: u64) -> Result<(), u64> {
            if self.ready.compare_exchange(false, true, Release).is_ok() {
                self.data.store(val, Release);
                self.new.store(true, Release);
                Ok(())
            } else {
                Err(val)
            }
        }
    }
IIUC, you need `Acquire` ordering on both `compare_exchange` operations'
reads for this to work, right? Because if they are relaxed, this could
happen:
Thread 0 | Thread 1
------------------------------------------------|------------------------------------------------
get() { | set(42) {
| if ready.cmpxchg(false, true, Rel).is_ok() {
| data.store(42, Rel)
| new.store(true, Rel)
if new.cmpxchg(true, false, Rel).is_ok() { |
let val = self.data.load(Acq); // reads 0 |
ready.store(false, Rel); |
Some(val) |
} | }
} | }
So essentially, the `data.store` operation is not synchronized, because
the read on `new` is not `Acquire`.
> + ///
> + /// Returns `Ok(value)` if cmpxchg succeeds, and `value` is guaranteed to be equal to `old`,
> + /// otherwise returns `Err(value)`, and `value` is the value of the atomic variable at the
> + /// time the cmpxchg happened.
> + ///
> + /// # Examples
> + ///
> + /// ```rust
> + /// use kernel::sync::atomic::{Atomic, Full, Relaxed};
> + ///
> + /// let x = Atomic::new(42);
> + ///
> + /// // Checks whether cmpxchg succeeded.
> + /// let success = x.cmpxchg(52, 64, Relaxed).is_ok();
> + /// # assert!(!success);
> + ///
> + /// // Checks whether cmpxchg failed.
> + /// let failure = x.cmpxchg(52, 64, Relaxed).is_err();
> + /// # assert!(failure);
> + ///
> + /// // Uses the old value if failed, probably re-try cmpxchg.
> + /// match x.cmpxchg(52, 64, Relaxed) {
> + /// Ok(_) => { },
> + /// Err(old) => {
> + /// // do something with `old`.
> + /// # assert_eq!(old, 42);
> + /// }
> + /// }
> + ///
> + /// // Uses the latest value regardlessly, same as atomic_cmpxchg() in C.
> + /// let latest = x.cmpxchg(42, 64, Full).unwrap_or_else(|old| old);
> + /// # assert_eq!(42, latest);
> + /// assert_eq!(64, x.load(Relaxed));
> + /// ```
> + #[doc(alias(
> + "atomic_cmpxchg",
> + "atomic64_cmpxchg",
> + "atomic_try_cmpxchg",
> + "atomic64_try_cmpxchg"
> + ))]
> + #[inline(always)]
> + pub fn cmpxchg<Ordering: All>(&self, mut old: T, new: T, o: Ordering) -> Result<T, T> {
`compare_exchange`?
> + /// Atomic compare and exchange and returns whether the operation succeeds.
> + ///
> + /// "Compare" and "Ordering" part are the same as [`Atomic::cmpxchg()`].
> + ///
> + /// Returns `true` means the cmpxchg succeeds otherwise returns `false` with `old` updated to
> + /// the value of the atomic variable when cmpxchg was happening.
> + #[inline(always)]
> + fn try_cmpxchg<Ordering: All>(&self, old: &mut T, new: T, _: Ordering) -> bool {
`try_compare_exchange`?
---
Cheers,
Benno
> + let old = (old as *mut T).cast::<T::Repr>();
> + let new = T::into_repr(new);
> + let a = self.0.get().cast::<T::Repr>();
> +
> + // SAFETY:
> + // - For calling the atomic_try_cmpchg*() function:
> + // - `self.as_ptr()` is a valid pointer, and per the safety requirement of `AllowAtomic`,
> + // a `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a valid pointer,
> + // - per the type invariants, the following atomic operation won't cause data races.
> + // - `old` is a valid pointer to write because it comes from a mutable reference.
> + // - For extra safety requirement of usage on pointers returned by `self.as_ptr():
> + // - atomic operations are used here.
> + unsafe {
> + match Ordering::TYPE {
> + OrderingType::Full => T::Repr::atomic_try_cmpxchg(a, old, new),
> + OrderingType::Acquire => T::Repr::atomic_try_cmpxchg_acquire(a, old, new),
> + OrderingType::Release => T::Repr::atomic_try_cmpxchg_release(a, old, new),
> + OrderingType::Relaxed => T::Repr::atomic_try_cmpxchg_relaxed(a, old, new),
> + }
> + }
> + }
> +}
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 05/10] rust: sync: atomic: Add atomic {cmp,}xchg operations
2025-06-27 8:58 ` Benno Lossin
@ 2025-06-27 13:53 ` Boqun Feng
2025-06-28 6:12 ` Benno Lossin
0 siblings, 1 reply; 82+ messages in thread
From: Boqun Feng @ 2025-06-27 13:53 UTC (permalink / raw)
To: Benno Lossin
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Gary Guo, Björn Roy Baron, Andreas Hindborg,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
On Fri, Jun 27, 2025 at 10:58:43AM +0200, Benno Lossin wrote:
> On Wed Jun 18, 2025 at 6:49 PM CEST, Boqun Feng wrote:
> > +impl<T: AllowAtomic> Atomic<T>
> > +where
> > + T::Repr: AtomicHasXchgOps,
> > +{
> > + /// Atomic exchange.
> > + ///
> > + /// # Examples
> > + ///
> > + /// ```rust
> > + /// use kernel::sync::atomic::{Atomic, Acquire, Relaxed};
> > + ///
> > + /// let x = Atomic::new(42);
> > + ///
> > + /// assert_eq!(42, x.xchg(52, Acquire));
> > + /// assert_eq!(52, x.load(Relaxed));
> > + /// ```
> > + #[doc(alias("atomic_xchg", "atomic64_xchg"))]
> > + #[inline(always)]
> > + pub fn xchg<Ordering: All>(&self, v: T, _: Ordering) -> T {
>
> Can we name this `exchange`?
>
FYI, in Rust std, this operation is called `swap()`; what's the reason
for using a name that is neither the Rust convention nor the Linux
kernel convention?
As for naming, the reason I chose xchg() and cmpxchg() is that they are
the names LKMM has used for a long time; to use another name, we would
need a very good reason, and I don't see one that makes the other names
better. Especially since, in our memory model, we use xchg() and
cmpxchg() a lot, and they are different from the Rust versions, where
you can specify orderings separately. Renaming LKMM's xchg()/cmpxchg()
would cause more confusion, I believe.
Same answer for compare_exchange() vs cmpxchg().
> > + let v = T::into_repr(v);
> > + let a = self.as_ptr().cast::<T::Repr>();
> > +
> > + // SAFETY:
> > + // - For calling the atomic_xchg*() function:
> > + // - `self.as_ptr()` is a valid pointer, and per the safety requirement of `AllocAtomic`,
> > + // a `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a valid pointer,
> > + // - per the type invariants, the following atomic operation won't cause data races.
> > + // - For extra safety requirement of usage on pointers returned by `self.as_ptr():
> > + // - atomic operations are used here.
> > + let ret = unsafe {
> > + match Ordering::TYPE {
> > + OrderingType::Full => T::Repr::atomic_xchg(a, v),
> > + OrderingType::Acquire => T::Repr::atomic_xchg_acquire(a, v),
> > + OrderingType::Release => T::Repr::atomic_xchg_release(a, v),
> > + OrderingType::Relaxed => T::Repr::atomic_xchg_relaxed(a, v),
> > + }
> > + };
> > +
> > + T::from_repr(ret)
> > + }
> > +
> > + /// Atomic compare and exchange.
> > + ///
> > + /// Compare: The comparison is done via the byte level comparison between the atomic variables
> > + /// with the `old` value.
> > + ///
> > + /// Ordering: When succeeds, provides the corresponding ordering as the `Ordering` type
> > + /// parameter indicates, and a failed one doesn't provide any ordering, the read part of a
> > + /// failed cmpxchg should be treated as a relaxed read.
>
> This is a bit confusing to me. The operation has a store and a load
> operation and both can have different orderings (at least in Rust
> userland) depending on the success/failure of the operation. In
> userland, I can supply `AcqRel` and `Acquire` to ensure that I always
> have Acquire semantics on any read and `Release` semantics on any write
> (which I would think is a common case). How do I do this using your API?
>
Usually in the kernel that means that in the failure case you need to
use a barrier afterwards, for example:
	if (old != cmpxchg(v, old, new)) {
		smp_mb();
		// ^ following memory operations are ordered against.
	}
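With the API in this series, a sketch of the same pattern (assuming an
`Atomic<i32>` `v` and the `smp_mb()` from patch #10) would be:

	if v.cmpxchg(old, new, Relaxed).is_err() {
	    smp_mb();
	    // ^ following memory operations are ordered against.
	}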
> Don't I need `Acquire` semantics on the read in order for
> `compare_exchange` to give me the correct behavior in this example:
>
> pub struct Foo {
> data: Atomic<u64>,
> new: Atomic<bool>,
> ready: Atomic<bool>,
> }
>
> impl Foo {
> pub fn new() -> Self {
> Self {
> data: Atomic::new(0),
> new: Atomic::new(false),
> ready: Atomic::new(false),
> }
> }
>
> pub fn get(&self) -> Option<u64> {
> if self.new.compare_exchange(true, false, Release).is_ok() {
You should use `Full` if you want AcqRel-like behavior when it succeeds.
> let val = self.data.load(Acquire);
> self.ready.store(false, Release);
> Some(val)
> } else {
> None
> }
> }
>
> pub fn set(&self, val: u64) -> Result<(), u64> {
> if self.ready.compare_exchange(false, true, Release).is_ok() {
Same.
Regards,
Boqun
> self.data.store(val, Release);
> self.new.store(true, Release);
> } else {
> Err(val)
> }
> }
> }
>
> IIUC, you need `Acquire` ordering on both `compare_exchange` operations'
> reads for this to work, right? Because if they are relaxed, this could
> happen:
>
> Thread 0 | Thread 1
> ------------------------------------------------|------------------------------------------------
> get() { | set(42) {
> | if ready.cmpxchg(false, true, Rel).is_ok() {
> | data.store(42, Rel)
> | new.store(true, Rel)
> if new.cmpxchg(true, false, Rel).is_ok() { |
> let val = self.data.load(Acq); // reads 0 |
> ready.store(false, Rel); |
> Some(val) |
> } | }
> } | }
>
> So essentially, the `data.store` operation is not synchronized, because
> the read on `new` is not `Acquire`.
>
[...]
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 01/10] rust: Introduce atomic API helpers
2025-06-26 8:44 ` Andreas Hindborg
@ 2025-06-27 14:00 ` Boqun Feng
0 siblings, 0 replies; 82+ messages in thread
From: Boqun Feng @ 2025-06-27 14:00 UTC (permalink / raw)
To: Andreas Hindborg
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Gary Guo, Björn Roy Baron, Benno Lossin,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
On Thu, Jun 26, 2025 at 10:44:13AM +0200, Andreas Hindborg wrote:
> "Boqun Feng" <boqun.feng@gmail.com> writes:
>
> > In order to support LKMM atomics in Rust, add rust_helper_* for atomic
> > APIs. These helpers ensure the implementation of LKMM atomics in Rust is
> > the same as in C. This could save the maintenance burden of having two
> > similar atomic implementations in asm.
> >
> > Originally-by: Mark Rutland <mark.rutland@arm.com>
> > Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> > ---
> > rust/helpers/atomic.c | 1038 +++++++++++++++++++++
> > rust/helpers/helpers.c | 1 +
> > scripts/atomic/gen-atomics.sh | 1 +
> > scripts/atomic/gen-rust-atomic-helpers.sh | 65 ++
> > 4 files changed, 1105 insertions(+)
> > create mode 100644 rust/helpers/atomic.c
> > create mode 100755 scripts/atomic/gen-rust-atomic-helpers.sh
> >
> > diff --git a/rust/helpers/atomic.c b/rust/helpers/atomic.c
> > new file mode 100644
> > index 000000000000..00bf10887928
> > --- /dev/null
> > +++ b/rust/helpers/atomic.c
> > @@ -0,0 +1,1038 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +
> > +// Generated by scripts/atomic/gen-rust-atomic-helpers.sh
> > +// DO NOT MODIFY THIS FILE DIRECTLY
>
> If this file is generated, why check it in? Can't we run the generator
> at build time?
>
Greg asked the same question, and it has been answered in v1:
https://lore.kernel.org/rust-for-linux/ZmrLmnPz_0Q8oXny@J2N7QTR9R3/
I'm simply following what we already do for the other versions of the
atomic APIs.
Regards,
Boqun
>
> Best regards,
> Andreas Hindborg
>
>
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 02/10] rust: sync: Add basic atomic operation mapping framework
2025-06-26 10:17 ` Andreas Hindborg
@ 2025-06-27 14:30 ` Boqun Feng
0 siblings, 0 replies; 82+ messages in thread
From: Boqun Feng @ 2025-06-27 14:30 UTC (permalink / raw)
To: Andreas Hindborg
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Gary Guo, Björn Roy Baron, Benno Lossin,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
On Thu, Jun 26, 2025 at 12:17:14PM +0200, Andreas Hindborg wrote:
> "Boqun Feng" <boqun.feng@gmail.com> writes:
>
> > Preparation for generic atomic implementation. To unify the
> > implementation of a generic method over `i32` and `i64`, the C side
> > atomic methods need to be grouped so that in a generic method, they can
> > be referred as <type>::<method>, otherwise their parameters and return
> > value are different between `i32` and `i64`, which would require using
> > `transmute()` to unify the type into a `T`.
>
> I can't follow this, could you expand a bit?
>
So let's say I want to implement a generic `Atomic::load()`. Without
the unification, what I can use are:
pub fn atomic_read(ptr: *mut i32) -> i32
and
pub fn atomic64_read(ptr: *mut i64) -> i64
and the implementation of `Atomic::load()` would be:
	impl<T:...> Atomic<T> {
	    pub fn load(&self) -> T {
	        if size_of::<T>() == 4 {
	            unsafe { transmute(atomic_read(self.0.get())) }
	        } else {
	            unsafe { transmute(atomic64_read(self.0.get())) }
	        }
	    }
	}
because although load() is a function of a generic struct, an "if ...
else ..." expression requires each branch to have the same return type,
so the `transmute()` would be needed. What I meant was that a trait
method is provided for `i32` and `i64`:
	impl AtomicImpl for i32 {
	    fn atomic_read(ptr: *mut Self) -> Self;
	}

	impl AtomicImpl for i64 {
	    fn atomic_read(ptr: *mut Self) -> Self;
	}

so that I could do:

	impl<T:...> Atomic<T> {
	    pub fn load(&self) -> T {
	        T::atomic_read(self.0.get())
	    }
	}
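and usage then stays uniform across the two widths, e.g. (a sketch; the
real API from patch #4 also takes an ordering parameter):

	let x = Atomic::new(42i32); // backed by atomic_t
	let y = Atomic::new(42i64); // backed by atomic64_t
	assert_eq!(42, x.load(Relaxed));
	assert_eq!(42, y.load(Relaxed));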
> >
> > Introduce `AtomicImpl` to represent a basic type in Rust that has the
> > direct mapping to an atomic implementation from C. This trait is sealed,
> > and currently only `i32` and `i64` impl this.
> >
> > Further, different methods are put into different `*Ops` trait groups,
> > and this is for the future when smaller types like `i8`/`i16` are
> > supported but only with a limited set of API (e.g. only set(), load(),
> > xchg() and cmpxchg(), no add() or sub() etc).
> >
> > While the atomic mod is introduced, documentation is also added for
> > memory models and data races.
> >
> > Also bump my role to the maintainer of ATOMIC INFRASTRUCTURE to reflect
> > my responsiblity on the Rust atomic mod.
> >
> > Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> > ---
> > MAINTAINERS | 4 +-
> > rust/kernel/sync.rs | 1 +
> > rust/kernel/sync/atomic.rs | 19 ++++
> > rust/kernel/sync/atomic/ops.rs | 199 +++++++++++++++++++++++++++++++++
> > 4 files changed, 222 insertions(+), 1 deletion(-)
> > create mode 100644 rust/kernel/sync/atomic.rs
> > create mode 100644 rust/kernel/sync/atomic/ops.rs
> >
> > diff --git a/MAINTAINERS b/MAINTAINERS
> > index 0c1d245bf7b8..5eef524975ca 100644
> > --- a/MAINTAINERS
> > +++ b/MAINTAINERS
> > @@ -3894,7 +3894,7 @@ F: drivers/input/touchscreen/atmel_mxt_ts.c
> > ATOMIC INFRASTRUCTURE
> > M: Will Deacon <will@kernel.org>
> > M: Peter Zijlstra <peterz@infradead.org>
> > -R: Boqun Feng <boqun.feng@gmail.com>
> > +M: Boqun Feng <boqun.feng@gmail.com>
> > R: Mark Rutland <mark.rutland@arm.com>
> > L: linux-kernel@vger.kernel.org
> > S: Maintained
> > @@ -3903,6 +3903,8 @@ F: arch/*/include/asm/atomic*.h
> > F: include/*/atomic*.h
> > F: include/linux/refcount.h
> > F: scripts/atomic/
> > +F: rust/kernel/sync/atomic.rs
> > +F: rust/kernel/sync/atomic/
> >
> > ATTO EXPRESSSAS SAS/SATA RAID SCSI DRIVER
> > M: Bradley Grove <linuxdrivers@attotech.com>
> > diff --git a/rust/kernel/sync.rs b/rust/kernel/sync.rs
> > index 36a719015583..b620027e0641 100644
> > --- a/rust/kernel/sync.rs
> > +++ b/rust/kernel/sync.rs
> > @@ -10,6 +10,7 @@
> > use pin_init;
> >
> > mod arc;
> > +pub mod atomic;
> > mod condvar;
> > pub mod lock;
> > mod locked_by;
> > diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
> > new file mode 100644
> > index 000000000000..65e41dba97b7
> > --- /dev/null
> > +++ b/rust/kernel/sync/atomic.rs
> > @@ -0,0 +1,19 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +
> > +//! Atomic primitives.
> > +//!
> > +//! These primitives have the same semantics as their C counterparts: and the precise definitions of
> > +//! semantics can be found at [`LKMM`]. Note that Linux Kernel Memory (Consistency) Model is the
> > +//! only model for Rust code in kernel, and Rust's own atomics should be avoided.
> > +//!
> > +//! # Data races
> > +//!
> > +//! [`LKMM`] atomics have different rules regarding data races:
> > +//!
> > +//! - A normal write from C side is treated as an atomic write if
> > +//! CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=y.
> > +//! - Mixed-size atomic accesses don't cause data races.
> > +//!
> > +//! [`LKMM`]: srctree/tools/memory-mode/
> > +
> > +pub mod ops;
> > diff --git a/rust/kernel/sync/atomic/ops.rs b/rust/kernel/sync/atomic/ops.rs
> > new file mode 100644
> > index 000000000000..f8825f7c84f0
> > --- /dev/null
> > +++ b/rust/kernel/sync/atomic/ops.rs
> > @@ -0,0 +1,199 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +
> > +//! Atomic implementations.
> > +//!
> > +//! Provides 1:1 mapping of atomic implementations.
> > +
> > +use crate::bindings::*;
> > +use crate::macros::paste;
> > +
> > +mod private {
> > + /// Sealed trait marker to disable customized impls on atomic implementation traits.
> > + pub trait Sealed {}
> > +}
> > +
> > +// `i32` and `i64` are only supported atomic implementations.
> > +impl private::Sealed for i32 {}
> > +impl private::Sealed for i64 {}
> > +
> > +/// A marker trait for types that implement atomic operations with C side primitives.
> > +///
> > +/// This trait is sealed, and only types that have directly mapping to the C side atomics should
> > +/// impl this:
> > +///
> > +/// - `i32` maps to `atomic_t`.
> > +/// - `i64` maps to `atomic64_t`.
> > +pub trait AtomicImpl: Sized + Send + Copy + private::Sealed {}
> > +
> > +// `atomic_t` implements atomic operations on `i32`.
> > +impl AtomicImpl for i32 {}
> > +
> > +// `atomic64_t` implements atomic operations on `i64`.
> > +impl AtomicImpl for i64 {}
> > +
> > +// This macro generates the function signature with given argument list and return type.
>
> Perhaps could we add an example expansion to make the macro easier for
> people to parse the first time:
>
That might be a good idea, I will see what I can do. However, note that
these macros are only for internal usage (i.e. not an exported macro),
similar to impl_item_type!() in configfs.rs, so you can just expand the
file to see the result.
Actually, I could use scripts (similar to patch #1) to generate these,
but it was suggested to use macros instead.
My current hesitation is that I still need to change these macros, and
I would probably look into adding expansion examples once things are
stable (i.e. once I don't need to change them) again. So maybe not in
the next version.
> declare_atomic_method!(
> read[acquire](ptr: *mut Self) -> Self
> );
>
> ->
>
> #[doc = "Atomic read_acquire"]
> ..
> unsafe fn atomic_read_acquire(ptr: *mut Self) -> Self;
>
> #[doc = "Atomic read"]
> ..
> unsafe fn atomic_read(ptr: *mut Self) -> Self;
>
>
[..]
>
> Lastly, perhaps we should do `ptr.cast()` rather than `as *mut _` ?
>
Sure, that makes sense. I missed that, probably because clippy cannot
see into macro-generated code?
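E.g. (illustration only, not code from this series):

	let ptr: *mut i32 = core::ptr::null_mut();
	// With `as`, the target type is inferred and can silently change:
	let a: *mut i64 = ptr as *mut _;
	// With `cast()`, the new pointee type is spelled out and the
	// mutability cannot accidentally change:
	let b: *mut i64 = ptr.cast();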
Regards,
Boqun
>
> Best regards,
> Andreas Hindborg
>
>
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 03/10] rust: sync: atomic: Add ordering annotation types
2025-06-26 12:36 ` Andreas Hindborg
@ 2025-06-27 14:34 ` Boqun Feng
2025-06-27 14:44 ` Boqun Feng
0 siblings, 1 reply; 82+ messages in thread
From: Boqun Feng @ 2025-06-27 14:34 UTC (permalink / raw)
To: Andreas Hindborg
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Gary Guo, Björn Roy Baron, Benno Lossin,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
On Thu, Jun 26, 2025 at 02:36:50PM +0200, Andreas Hindborg wrote:
[...]
> > +/// The trait bound for annotating operations that should support all orderings.
> > +pub trait All: internal::OrderingUnit {}
>
> I think I would prefer `Any` rather than `All` here. Because it is "any
> of", not "all of them at once".
>
Good idea! Changed. Thanks!
Regards,
Boqun
>
> Best regards,
> Andreas Hindborg
>
>
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 03/10] rust: sync: atomic: Add ordering annotation types
2025-06-27 14:34 ` Boqun Feng
@ 2025-06-27 14:44 ` Boqun Feng
0 siblings, 0 replies; 82+ messages in thread
From: Boqun Feng @ 2025-06-27 14:44 UTC (permalink / raw)
To: Andreas Hindborg
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Gary Guo, Björn Roy Baron, Benno Lossin,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
On Fri, Jun 27, 2025 at 07:34:46AM -0700, Boqun Feng wrote:
> On Thu, Jun 26, 2025 at 02:36:50PM +0200, Andreas Hindborg wrote:
> [...]
> > > +/// The trait bound for annotating operations that should support all orderings.
> > > +pub trait All: internal::OrderingUnit {}
> >
> > I think I would prefer `Any` rather than `All` here. Because it is "any
> > of", not "all of them at once".
> >
>
> Good idea! Changed. Thanks!
>
And I realized I can unify `Any` with `OrderingUnit`; here is what I
have now:
	mod internal {
	    /// Sealed trait, can be only implemented inside atomic mod.
	    pub trait Sealed {}

	    impl Sealed for super::Relaxed {}
	    impl Sealed for super::Acquire {}
	    impl Sealed for super::Release {}
	    impl Sealed for super::Full {}
	}

	/// The trait bound for annotating operations that support any ordering.
	pub trait Any: internal::Sealed {
	    /// Describes the exact memory ordering.
	    const TYPE: OrderingType;
	}

	impl Any for Relaxed {
	    const TYPE: OrderingType = OrderingType::Relaxed;
	}

	impl Any for Acquire {
	    const TYPE: OrderingType = OrderingType::Acquire;
	}

	impl Any for Release {
	    const TYPE: OrderingType = OrderingType::Release;
	}

	impl Any for Full {
	    const TYPE: OrderingType = OrderingType::Full;
	}
Better than what I had before, thanks!
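A generic operation can then dispatch on the ordering at compile time,
e.g. (a sketch, assuming the `OrderingType` enum from patch #3):

	fn ordering_name<Ordering: Any>(_: Ordering) -> &'static str {
	    // Resolved statically: `Ordering::TYPE` is an associated constant.
	    match Ordering::TYPE {
	        OrderingType::Full => "full",
	        OrderingType::Acquire => "acquire",
	        OrderingType::Release => "release",
	        OrderingType::Relaxed => "relaxed",
	    }
	}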
Regards,
Boqun
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 04/10] rust: sync: atomic: Add generic atomics
2025-06-26 12:15 ` Andreas Hindborg
@ 2025-06-27 15:01 ` Boqun Feng
2025-06-30 9:52 ` Andreas Hindborg
0 siblings, 1 reply; 82+ messages in thread
From: Boqun Feng @ 2025-06-27 15:01 UTC (permalink / raw)
To: Andreas Hindborg
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Gary Guo, Björn Roy Baron, Benno Lossin,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
On Thu, Jun 26, 2025 at 02:15:35PM +0200, Andreas Hindborg wrote:
> "Boqun Feng" <boqun.feng@gmail.com> writes:
>
> [...]
>
> > +
> > +impl<T: AllowAtomic> Atomic<T> {
> > + /// Creates a new atomic.
> > + pub const fn new(v: T) -> Self {
> > + Self(Opaque::new(v))
> > + }
> > +
> > + /// Creates a reference to [`Self`] from a pointer.
> > + ///
> > + /// # Safety
> > + ///
> > + /// - `ptr` has to be a valid pointer.
> > + /// - `ptr` has to be valid for both reads and writes for the whole lifetime `'a`.
> > + /// - For the whole lifetime of '`a`, other accesses to the object cannot cause data races
> > + /// (defined by [`LKMM`]) against atomic operations on the returned reference.
>
> I feel the wording is a bit tangled here. How about something along the
> lines of
>
> For the duration of `'a`, all accesses to the object must be atomic.
>
Well, a non-atomic read vs an atomic read is not a data race (in both
the Rust memory model and the LKMM), so your proposal is overly
restrictive.
I can do s/the whole lifetime/the duration/.
> > + ///
> > + /// [`LKMM`]: srctree/tools/memory-model
> > + ///
> > + /// # Examples
> > + ///
> > + /// Using [`Atomic::from_ptr()`] combined with [`Atomic::load()`] or [`Atomic::store()`] can
> > + /// achieve the same functionality as `READ_ONCE()`/`smp_load_acquire()` or
> > + /// `WRITE_ONCE()`/`smp_store_release()` in C side:
> > + ///
> > + /// ```rust
> > + /// # use kernel::types::Opaque;
> > + /// use kernel::sync::atomic::{Atomic, Relaxed, Release};
> > + ///
> > + /// // Assume there is a C struct `Foo`.
> > + /// mod cbindings {
> > + /// #[repr(C)]
> > + /// pub(crate) struct foo { pub(crate) a: i32, pub(crate) b: i32 }
> > + /// }
> > + ///
> > + /// let tmp = Opaque::new(cbindings::foo { a: 1, b: 2});
> > + ///
> > + /// // struct foo *foo_ptr = ..;
> > + /// let foo_ptr = tmp.get();
> > + ///
> > + /// // SAFETY: `foo_ptr` is a valid pointer, and `.a` is inbound.
>
> Did you mean to say "in bounds"? Or what is "inbound"?
>
I think I meant to say "inbounds", fixed.
> > + /// let foo_a_ptr = unsafe { core::ptr::addr_of_mut!((*foo_ptr).a) };
>
> This should be `&raw mut` by now, right?
>
Right.
[...]
>
> Typo `AllocAtomic`.
>
All fixed! Thanks!
Regards,
Boqun
>
> Best regards,
> Andreas Hindborg
>
>
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 05/10] rust: sync: atomic: Add atomic {cmp,}xchg operations
2025-06-26 13:12 ` Andreas Hindborg
@ 2025-06-28 3:03 ` Boqun Feng
2025-06-30 10:16 ` Andreas Hindborg
0 siblings, 1 reply; 82+ messages in thread
From: Boqun Feng @ 2025-06-28 3:03 UTC (permalink / raw)
To: Andreas Hindborg
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Gary Guo, Björn Roy Baron, Benno Lossin,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
On Thu, Jun 26, 2025 at 03:12:12PM +0200, Andreas Hindborg wrote:
> "Boqun Feng" <boqun.feng@gmail.com> writes:
>
> > xchg() and cmpxchg() are basic operations on atomic. Provide these based
> > on C APIs.
> >
> > Note that cmpxchg() use the similar function signature as
> > compare_exchange() in Rust std: returning a `Result`, `Ok(old)` means
> > the operation succeeds and `Err(old)` means the operation fails.
> >
> > Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> > ---
> > rust/kernel/sync/atomic/generic.rs | 154 +++++++++++++++++++++++++++++
> > 1 file changed, 154 insertions(+)
> >
> > diff --git a/rust/kernel/sync/atomic/generic.rs b/rust/kernel/sync/atomic/generic.rs
> > index 73c26f9cf6b8..bcdbeea45dd8 100644
> > --- a/rust/kernel/sync/atomic/generic.rs
> > +++ b/rust/kernel/sync/atomic/generic.rs
> > @@ -256,3 +256,157 @@ pub fn store<Ordering: ReleaseOrRelaxed>(&self, v: T, _: Ordering) {
> > };
> > }
> > }
> > +
> > +impl<T: AllowAtomic> Atomic<T>
> > +where
> > + T::Repr: AtomicHasXchgOps,
> > +{
> > + /// Atomic exchange.
> > + ///
> > + /// # Examples
> > + ///
> > + /// ```rust
> > + /// use kernel::sync::atomic::{Atomic, Acquire, Relaxed};
> > + ///
> > + /// let x = Atomic::new(42);
> > + ///
> > + /// assert_eq!(42, x.xchg(52, Acquire));
> > + /// assert_eq!(52, x.load(Relaxed));
> > + /// ```
> > + #[doc(alias("atomic_xchg", "atomic64_xchg"))]
> > + #[inline(always)]
> > + pub fn xchg<Ordering: All>(&self, v: T, _: Ordering) -> T {
> > + let v = T::into_repr(v);
> > + let a = self.as_ptr().cast::<T::Repr>();
> > +
> > + // SAFETY:
> > + // - For calling the atomic_xchg*() function:
> > + // - `self.as_ptr()` is a valid pointer, and per the safety requirement of `AllocAtomic`,
>
> Typo: `AllowAtomic`.
>
Fixed.
> > + // a `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a valid pointer,
> > + // - per the type invariants, the following atomic operation won't cause data races.
> > + // - For extra safety requirement of usage on pointers returned by `self.as_ptr():
> > + // - atomic operations are used here.
> > + let ret = unsafe {
> > + match Ordering::TYPE {
> > + OrderingType::Full => T::Repr::atomic_xchg(a, v),
> > + OrderingType::Acquire => T::Repr::atomic_xchg_acquire(a, v),
> > + OrderingType::Release => T::Repr::atomic_xchg_release(a, v),
> > + OrderingType::Relaxed => T::Repr::atomic_xchg_relaxed(a, v),
> > + }
> > + };
> > +
> > + T::from_repr(ret)
> > + }
> > +
> > + /// Atomic compare and exchange.
> > + ///
> > + /// Compare: The comparison is done via the byte level comparison between the atomic variables
> > + /// with the `old` value.
> > + ///
> > + /// Ordering: When succeeds, provides the corresponding ordering as the `Ordering` type
> > + /// parameter indicates, and a failed one doesn't provide any ordering, the read part of a
> > + /// failed cmpxchg should be treated as a relaxed read.
>
> Rust `core::ptr` functions have this sentence on success ordering for
> compare_exchange:
>
> Using Acquire as success ordering makes the store part of this
> operation Relaxed, and using Release makes the successful load
> Relaxed.
>
> Does this translate to LKMM cmpxchg operations? If so, I think we should
> include this sentence. This also applies to `Atomic::xchg`.
>
I see this as a different style of documenting, so in my next version,
I have the following:
//! - [`Acquire`] provides ordering between the load part of the annotated operation and all the
//! following memory accesses.
//! - [`Release`] provides ordering between all the preceding memory accesses and the store part of
//! the annotated operation.
in atomic/ordering.rs, I think I can extend it to:
//! - [`Acquire`] provides ordering between the load part of the annotated operation and all the
//! following memory accesses, and if there is a store part, it has Relaxed ordering.
//! - [`Release`] provides ordering between all the preceding memory accesses and the store part of
//! the annotated operation, and if there is a load part, it has Relaxed ordering.
This aligns with how we usually describe things in tools/memory-model/.
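As a sketch of what that wording describes (assuming two shared
`Atomic<i32>`s, `data` and `flag`, accessed by two threads):

	// Writer: the Release store orders the preceding data store.
	data.store(42, Relaxed);
	flag.store(1, Release);

	// Reader: the Acquire load orders the following data load, so if
	// the flag is observed, the 42 is observed too.
	if flag.load(Acquire) == 1 {
	    assert_eq!(42, data.load(Relaxed));
	}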
Regards,
Boqun
>
> Best regards,
> Andreas Hindborg
>
>
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 06/10] rust: sync: atomic: Add the framework of arithmetic operations
2025-06-26 12:39 ` Andreas Hindborg
@ 2025-06-28 3:04 ` Boqun Feng
0 siblings, 0 replies; 82+ messages in thread
From: Boqun Feng @ 2025-06-28 3:04 UTC (permalink / raw)
To: Andreas Hindborg
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Gary Guo, Björn Roy Baron, Benno Lossin,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
On Thu, Jun 26, 2025 at 02:39:49PM +0200, Andreas Hindborg wrote:
[...]
> > + // - For calling the atomic_add() function:
> > + // - `self.as_ptr()` is a valid pointer, and per the safety requirement of `AllocAtomic`,
>
> Typo, should be `AllowAtomic`.
>
[...]
> > + // SAFETY:
> > + // - For calling the atomic_fetch_add*() function:
> > + // - `self.as_ptr()` is a valid pointer, and per the safety requirement of `AllocAtomic`,
>
> Typo, should be `AllowAtomic`.
>
Both fixed.
Regards,
Boqun
>
> Best regards,
> Andreas Hindborg
>
>
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 10/10] rust: sync: Add memory barriers
2025-06-26 13:36 ` Andreas Hindborg
@ 2025-06-28 3:42 ` Boqun Feng
2025-06-30 9:54 ` Andreas Hindborg
0 siblings, 1 reply; 82+ messages in thread
From: Boqun Feng @ 2025-06-28 3:42 UTC (permalink / raw)
To: Andreas Hindborg
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Gary Guo, Björn Roy Baron, Benno Lossin,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
On Thu, Jun 26, 2025 at 03:36:25PM +0200, Andreas Hindborg wrote:
> "Boqun Feng" <boqun.feng@gmail.com> writes:
[...]
> > +//! [`LKMM`]: srctree/tools/memory-mode/
>
> Typo in link target.
>
> > +
> > +/// A compiler barrier.
> > +///
> > +/// An explicic compiler barrier function that prevents the compiler from moving the memory
> > +/// accesses either side of it to the other side.
>
> Typo in "explicit".
>
Fixed.
> How about:
>
> A compiler barrier. Prevents the compiler from reordering
> memory access instructions across the barrier.
>
>
> > +pub(crate) fn barrier() {
> > + // By default, Rust inline asms are treated as being able to access any memory or flags, hence
> > + // it suffices as a compiler barrier.
> > + //
> > + // SAFETY: An empty asm block should be safe.
> > + unsafe {
> > + core::arch::asm!("");
> > + }
> > +}
> > +
> > +/// A full memory barrier.
> > +///
> > +/// A barrier function that prevents both the compiler and the CPU from moving the memory accesses
> > +/// either side of it to the other side.
>
>
> A barrier that prevents compiler and CPU from reordering memory access
> instructions across the barrier.
>
> > +pub fn smp_mb() {
> > + if cfg!(CONFIG_SMP) {
> > + // SAFETY: `smp_mb()` is safe to call.
> > + unsafe {
> > + bindings::smp_mb();
> > + }
> > + } else {
> > + barrier();
> > + }
> > +}
> > +
> > +/// A write-write memory barrier.
> > +///
> > +/// A barrier function that prevents both the compiler and the CPU from moving the memory write
> > +/// accesses either side of it to the other side.
>
> A barrier that prevents compiler and CPU from reordering memory write
> instructions across the barrier.
>
> > +pub fn smp_wmb() {
> > + if cfg!(CONFIG_SMP) {
> > + // SAFETY: `smp_wmb()` is safe to call.
> > + unsafe {
> > + bindings::smp_wmb();
> > + }
> > + } else {
> > + barrier();
> > + }
> > +}
> > +
> > +/// A read-read memory barrier.
> > +///
> > +/// A barrier function that prevents both the compiler and the CPU from moving the memory read
> > +/// accesses either side of it to the other side.
>
> A barrier that prevents compiler and CPU from reordering memory read
> instructions across the barrier.
>
This is good wording, except that I will use "memory (read/write)
accesses" instead of "memory (read/write) instructions" because:

1) "instructions" are at a lower level than the language, and the
   memory barrier functions are provided as synchronization primitives,
   so I feel we should describe memory barrier effects at the language
   level, i.e. mention how they interact with objects and accesses to
   them.

2) There are instructions that do a read and a write in one instruction,
   so it might be unclear, when we say "prevents reordering an
   instruction", whether both parts are included, for example:

	r1 = atomic_add(x, 1); // <- this can be one instruction.
	smp_rmb();
	r2 = atomic_read(y);

   people may think that because smp_rmb() prevents reordering of read
   instructions, and atomic_add() is one instruction in this case,
   smp_rmb() prevents the write part of that instruction from being
   reordered, but that's not the case.
So I will do:
A barrier that prevents compiler and CPU from reordering memory read
accesses across the barrier.
Regards,
Boqun
> > +pub fn smp_rmb() {
> > + if cfg!(CONFIG_SMP) {
> > + // SAFETY: `smp_rmb()` is safe to call.
> > + unsafe {
> > + bindings::smp_rmb();
> > + }
> > + } else {
> > + barrier();
> > + }
> > +}
>
>
> Best regards,
> Andreas Hindborg
>
>
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 05/10] rust: sync: atomic: Add atomic {cmp,}xchg operations
2025-06-27 13:53 ` Boqun Feng
@ 2025-06-28 6:12 ` Benno Lossin
2025-06-28 7:31 ` Boqun Feng
0 siblings, 1 reply; 82+ messages in thread
From: Benno Lossin @ 2025-06-28 6:12 UTC (permalink / raw)
To: Boqun Feng
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Gary Guo, Björn Roy Baron, Andreas Hindborg,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
On Fri Jun 27, 2025 at 3:53 PM CEST, Boqun Feng wrote:
> On Fri, Jun 27, 2025 at 10:58:43AM +0200, Benno Lossin wrote:
>> On Wed Jun 18, 2025 at 6:49 PM CEST, Boqun Feng wrote:
>> > +impl<T: AllowAtomic> Atomic<T>
>> > +where
>> > + T::Repr: AtomicHasXchgOps,
>> > +{
>> > + /// Atomic exchange.
>> > + ///
>> > + /// # Examples
>> > + ///
>> > + /// ```rust
>> > + /// use kernel::sync::atomic::{Atomic, Acquire, Relaxed};
>> > + ///
>> > + /// let x = Atomic::new(42);
>> > + ///
>> > + /// assert_eq!(42, x.xchg(52, Acquire));
>> > + /// assert_eq!(52, x.load(Relaxed));
>> > + /// ```
>> > + #[doc(alias("atomic_xchg", "atomic64_xchg"))]
>> > + #[inline(always)]
>> > + pub fn xchg<Ordering: All>(&self, v: T, _: Ordering) -> T {
>>
>> Can we name this `exchange`?
>>
>
> FYI, in Rust std this operation is called `swap()`; what's the reason
> for using a name that is neither the Rust convention nor the Linux
> kernel convention?
Ah, well then my suggestion would be `swap()` instead :)
> As for naming, the reason I chose xchg() and cmpxchg() is that these
> are the names LKMM has used for a long time. To use other names, we
> would need a very good reason, and I don't see one that makes the
> other names better. Especially since, in our memory model, we use
> xchg() and cmpxchg() a lot, and they are different from the Rust
> versions, where you can specify orderings separately. Giving the LKMM
> xchg()/cmpxchg() those names would cause more confusion, I believe.
I'm just not used to the name shortening from the kernel... I think it's
fine to use them especially since the ordering parameters differ from
std's atomics.
Can you add aliases for the Rust names?
> Same answer for compare_exchange() vs cmpxchg().
>
>> > + let v = T::into_repr(v);
>> > + let a = self.as_ptr().cast::<T::Repr>();
>> > +
>> > + // SAFETY:
>> > + // - For calling the atomic_xchg*() function:
>> > + // - `self.as_ptr()` is a valid pointer, and per the safety requirement of `AllocAtomic`,
>> > + // a `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a valid pointer,
>> > + // - per the type invariants, the following atomic operation won't cause data races.
>> > + // - For extra safety requirement of usage on pointers returned by `self.as_ptr():
>> > + // - atomic operations are used here.
>> > + let ret = unsafe {
>> > + match Ordering::TYPE {
>> > + OrderingType::Full => T::Repr::atomic_xchg(a, v),
>> > + OrderingType::Acquire => T::Repr::atomic_xchg_acquire(a, v),
>> > + OrderingType::Release => T::Repr::atomic_xchg_release(a, v),
>> > + OrderingType::Relaxed => T::Repr::atomic_xchg_relaxed(a, v),
>> > + }
>> > + };
>> > +
>> > + T::from_repr(ret)
>> > + }
>> > +
>> > + /// Atomic compare and exchange.
>> > + ///
>> > + /// Compare: The comparison is done via the byte level comparison between the atomic variables
>> > + /// with the `old` value.
>> > + ///
>> > + /// Ordering: When succeeds, provides the corresponding ordering as the `Ordering` type
>> > + /// parameter indicates, and a failed one doesn't provide any ordering, the read part of a
>> > + /// failed cmpxchg should be treated as a relaxed read.
>>
>> This is a bit confusing to me. The operation has a store and a load
>> operation and both can have different orderings (at least in Rust
>> userland) depending on the success/failure of the operation. In
>> userland, I can supply `AcqRel` and `Acquire` to ensure that I always
>> have Acquire semantics on any read and `Release` semantics on any write
>> (which I would think is a common case). How do I do this using your API?
>>
>
> Usually in the kernel that means that in the failure case you need to
> use a barrier afterwards, for example:
>
> if (old != cmpxchg(v, old, new)) {
> smp_mb();
> // ^ following memory operations are ordered against.
> }
Do we already have abstractions for those?
>> Don't I need `Acquire` semantics on the read in order for
>> `compare_exchange` to give me the correct behavior in this example:
>>
>> pub struct Foo {
>> data: Atomic<u64>,
>> new: Atomic<bool>,
>> ready: Atomic<bool>,
>> }
>>
>> impl Foo {
>> pub fn new() -> Self {
>> Self {
>> data: Atomic::new(0),
>> new: Atomic::new(false),
>> ready: Atomic::new(false),
>> }
>> }
>>
>> pub fn get(&self) -> Option<u64> {
>> if self.new.compare_exchange(true, false, Release).is_ok() {
>
> You should use `Full` if you want AcqRel-like behavior when it succeeds.
I think it would be pretty valuable to document this. Also any other
"direct" translations from the Rust memory model are useful. For example
is `SeqCst` "equivalent" to `Full`?
---
Cheers,
Benno
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 05/10] rust: sync: atomic: Add atomic {cmp,}xchg operations
2025-06-28 6:12 ` Benno Lossin
@ 2025-06-28 7:31 ` Boqun Feng
2025-06-28 8:00 ` Benno Lossin
0 siblings, 1 reply; 82+ messages in thread
From: Boqun Feng @ 2025-06-28 7:31 UTC (permalink / raw)
To: Benno Lossin
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Gary Guo, Björn Roy Baron, Andreas Hindborg,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
On Sat, Jun 28, 2025 at 08:12:42AM +0200, Benno Lossin wrote:
> On Fri Jun 27, 2025 at 3:53 PM CEST, Boqun Feng wrote:
> > On Fri, Jun 27, 2025 at 10:58:43AM +0200, Benno Lossin wrote:
> >> On Wed Jun 18, 2025 at 6:49 PM CEST, Boqun Feng wrote:
> >> > +impl<T: AllowAtomic> Atomic<T>
> >> > +where
> >> > + T::Repr: AtomicHasXchgOps,
> >> > +{
> >> > + /// Atomic exchange.
> >> > + ///
> >> > + /// # Examples
> >> > + ///
> >> > + /// ```rust
> >> > + /// use kernel::sync::atomic::{Atomic, Acquire, Relaxed};
> >> > + ///
> >> > + /// let x = Atomic::new(42);
> >> > + ///
> >> > + /// assert_eq!(42, x.xchg(52, Acquire));
> >> > + /// assert_eq!(52, x.load(Relaxed));
> >> > + /// ```
> >> > + #[doc(alias("atomic_xchg", "atomic64_xchg"))]
> >> > + #[inline(always)]
> >> > + pub fn xchg<Ordering: All>(&self, v: T, _: Ordering) -> T {
> >>
> >> Can we name this `exchange`?
> >>
> >
> > FYI, in Rust std this operation is called `swap()`; what's the reason
> > for using a name that is neither the Rust convention nor the Linux
> > kernel convention?
>
> Ah, well then my suggestion would be `swap()` instead :)
>
;-)
> > As for naming, the reason I chose xchg() and cmpxchg() is that these
> > are the names LKMM has used for a long time. To use other names, we
> > would need a very good reason, and I don't see one that makes the
> > other names better. Especially since, in our memory model, we use
> > xchg() and cmpxchg() a lot, and they are different from the Rust
> > versions, where you can specify orderings separately. Giving the LKMM
> > xchg()/cmpxchg() those names would cause more confusion, I believe.
>
> I'm just not used to the name shortening from the kernel... I think it's
I guess it's a bit of a curse of knowledge on my side...
> fine to use them especially since the ordering parameters differ from
> std's atomics.
>
> Can you add aliases for the Rust names?
>
I can, but I also want to see a real user request ;-) As a bi-model
user myself, I generally don't mind the name; as you can see, C++ and
Rust use different names as well. What I usually do is just "tell me
what's the name of the function if I need to do this" ;-)
> > Same answer for compare_exchange() vs cmpxchg().
> >
> >> > + let v = T::into_repr(v);
> >> > + let a = self.as_ptr().cast::<T::Repr>();
> >> > +
> >> > + // SAFETY:
> >> > + // - For calling the atomic_xchg*() function:
> >> > + // - `self.as_ptr()` is a valid pointer, and per the safety requirement of `AllocAtomic`,
> >> > + // a `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a valid pointer,
> >> > + // - per the type invariants, the following atomic operation won't cause data races.
> >> > + // - For extra safety requirement of usage on pointers returned by `self.as_ptr():
> >> > + // - atomic operations are used here.
> >> > + let ret = unsafe {
> >> > + match Ordering::TYPE {
> >> > + OrderingType::Full => T::Repr::atomic_xchg(a, v),
> >> > + OrderingType::Acquire => T::Repr::atomic_xchg_acquire(a, v),
> >> > + OrderingType::Release => T::Repr::atomic_xchg_release(a, v),
> >> > + OrderingType::Relaxed => T::Repr::atomic_xchg_relaxed(a, v),
> >> > + }
> >> > + };
> >> > +
> >> > + T::from_repr(ret)
> >> > + }
> >> > +
> >> > + /// Atomic compare and exchange.
> >> > + ///
> >> > + /// Compare: The comparison is done via the byte level comparison between the atomic variables
> >> > + /// with the `old` value.
> >> > + ///
> >> > + /// Ordering: When succeeds, provides the corresponding ordering as the `Ordering` type
> >> > + /// parameter indicates, and a failed one doesn't provide any ordering, the read part of a
> >> > + /// failed cmpxchg should be treated as a relaxed read.
> >>
> >> This is a bit confusing to me. The operation has a store and a load
> >> operation and both can have different orderings (at least in Rust
> >> userland) depending on the success/failure of the operation. In
> >> userland, I can supply `AcqRel` and `Acquire` to ensure that I always
> >> have Acquire semantics on any read and `Release` semantics on any write
> >> (which I would think is a common case). How do I do this using your API?
> >>
> >
> > Usually in the kernel that means that in the failure case you need
> > to use a barrier afterwards, for example:
> >
> > if (old != cmpxchg(v, old, new)) {
> > smp_mb();
> > // ^ following memory operations are ordered against.
> > }
>
> Do we already have abstractions for those?
>
You mean the smp_mb()? Yes, it's in patch #10.
> >> Don't I need `Acquire` semantics on the read in order for
> >> `compare_exchange` to give me the correct behavior in this example:
> >>
> >> pub struct Foo {
> >> data: Atomic<u64>,
> >> new: Atomic<bool>,
> >> ready: Atomic<bool>,
> >> }
> >>
> >> impl Foo {
> >> pub fn new() -> Self {
> >> Self {
> >> data: Atomic::new(0),
> >> new: Atomic::new(false),
> >> ready: Atomic::new(false),
> >> }
> >> }
> >>
> >> pub fn get(&self) -> Option<u64> {
> >> if self.new.compare_exchange(true, false, Release).is_ok() {
> >
> > You should use `Full` if you want AcqRel-like behavior when it succeeds.
>
> I think it would be pretty valuable to document this. Also any other
> "direct" translations from the Rust memory model are useful. For example
I don't disagree. But I'm afraid it'll still be a learning process for
everyone. Usually as a kernel developer, when working on concurrent
code, the thought process is not 1) "write it in the Rust/C++ memory
model" and then 2) "translate to LKMM atomics"; it's usually just
writing it directly, from patterns already learned from kernel code.
So while I'm confident that I can answer any translation question you
come up with, I don't have a full list yet.
Also I don't know whether it's worth doing, because of the thought
process thing I mentioned above.
My sincere suggestion to anyone who wants to do concurrent programming
in the kernel is just "learn the LKMM" (or "use a lock" ;-)). There are
good learning materials on LWN, and you can also check out
tools/memory-model/ for the model, documentation and tools.

Unless you are already familiar with a few memory model concepts, or
you have learned the LKMM, I'm afraid there's no shortcut to picking up
LKMM atomics correctly and precisely from a few translation rules for
Rust native atomics.
The other thing to note is that there could be multiple "translations",
for example for this particular case, we can also do:
	pub fn get(&self) -> Option<u64> {
	    if self.new.cmpxchg(true, false, Release).is_ok() {
	        smp_mb(); // Ordering the load part of cmpxchg() with the
	                  // following memory accesses, i.e. providing at
	                  // least the Acquire ordering.
	        let val = self.data.load(Acquire);
	        self.ready.store(false, Release);
	        Some(val)
	    } else {
	        None
	    }
	}
So whatever form such a document takes, it might not be accurate or
complete, and it might be misleading.
> is `SeqCst` "equivalent" to `Full`?
No ;-) How many hours do you have? (It's a figurative question, I
probably need to go to sleep now ;-)) For example, `SeqCst` on atomic
read-modify-write operations maps to acquire+release atomics on ARM64 I
believe, but a `Full` atomic is acquire+release plus a full memory
barrier on ARM64. Also a `Full` atomic implies a full memory barrier
(smp_mb()), but a `SeqCst` atomic is not a `SeqCst` fence.
Regards,
Boqun
>
> ---
> Cheers,
> Benno
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 05/10] rust: sync: atomic: Add atomic {cmp,}xchg operations
2025-06-28 7:31 ` Boqun Feng
@ 2025-06-28 8:00 ` Benno Lossin
2025-06-30 15:24 ` Boqun Feng
0 siblings, 1 reply; 82+ messages in thread
From: Benno Lossin @ 2025-06-28 8:00 UTC (permalink / raw)
To: Boqun Feng
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Gary Guo, Björn Roy Baron, Andreas Hindborg,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
On Sat Jun 28, 2025 at 9:31 AM CEST, Boqun Feng wrote:
> On Sat, Jun 28, 2025 at 08:12:42AM +0200, Benno Lossin wrote:
>> On Fri Jun 27, 2025 at 3:53 PM CEST, Boqun Feng wrote:
>> > As for naming, the reason I chose xchg() and cmpxchg() is that these
>> > are the names LKMM has used for a long time. To use other names, we
>> > would need a very good reason, and I don't see one that makes the
>> > other names better. Especially since, in our memory model, we use
>> > xchg() and cmpxchg() a lot, and they are different from the Rust
>> > versions, where you can specify orderings separately. Giving the LKMM
>> > xchg()/cmpxchg() those names would cause more confusion, I believe.
>>
>> I'm just not used to the name shortening from the kernel... I think it's
>
> I guess it's a bit of a curse of knowledge on my side...
>
>> fine to use them especially since the ordering parameters differ from
>> std's atomics.
>>
>> Can you add aliases for the Rust names?
>>
>
> I can, but I also want to see a real user request ;-) As a bi-model
> user myself, I generally don't mind the name; as you can see, C++ and
> Rust use different names as well. What I usually do is just "tell me
> what's the name of the function if I need to do this" ;-)
I think learning Rust in the kernel is different from learning a new
language. Yes you're learning a specific dialect of Rust, but that's
what every project does.
You also added aliases for the C versions, so let's also add the Rust
ones :)
>> >> Don't I need `Acquire` semantics on the read in order for
>> >> `compare_exchange` to give me the correct behavior in this example:
>> >>
>> >> pub struct Foo {
>> >> data: Atomic<u64>,
>> >> new: Atomic<bool>,
>> >> ready: Atomic<bool>,
>> >> }
>> >>
>> >> impl Foo {
>> >> pub fn new() -> Self {
>> >> Self {
>> >> data: Atomic::new(0),
>> >> new: Atomic::new(false),
>> >> ready: Atomic::new(false),
>> >> }
>> >> }
>> >>
>> >> pub fn get(&self) -> Option<u64> {
>> >> if self.new.compare_exchange(true, false, Release).is_ok() {
>> >
>> > You should use `Full` if you want AcqRel-like behavior when it succeeds.
>>
>> I think it would be pretty valuable to document this. Also any other
>> "direct" translations from the Rust memory model are useful. For example
>
> I don't disagree. But I'm afraid it'll still be a learning process for
> everyone. Usually as a kernel developer, when working on concurrent
> code, the thought process is not 1) "write it in the Rust/C++ memory
> model" and then 2) "translate to LKMM atomics"; it's usually just
> writing it directly, from patterns already learned from kernel code.
That's fair. Maybe just me clinging to the only straw that I have :)
(it also isn't a good straw, I barely know my way around the atomics in
std :)
> So while I'm confident that I can answer any translation question you
> come up with, I don't have a full list yet.
>
> Also I don't know whether it's worth doing, because of the thought
> process thing I mentioned above.
Yeah makes sense.
> My sincere suggestion to anyone who wants to do concurrent programming
> in the kernel is just "learn the LKMM" (or "use a lock" ;-)). There
> are good learning materials on LWN, and you can also check out
> tools/memory-model/ for the model, documentation and tools.
I'm luckily not in the position of having to use atomics for anything :)
> Unless you are already familiar with a few memory model concepts, or
> you have learned the LKMM, I'm afraid there's no shortcut to picking
> up LKMM atomics correctly and precisely from a few translation rules
> for Rust native atomics.
>
> The other thing to note is that there could be multiple "translations",
> for example for this particular case, we can also do:
>
> pub fn get(&self) -> Option<u64> {
> if self.new.cmpxchg(true, false, Release).is_ok() {
> smp_mb(); // Ordering the load part of cmpxchg() with the
> // following memory accesses, i.e. providing at
> // least the Acquire ordering.
> let val = self.data.load(Acquire);
> self.ready.store(false, Release);
> Some(val)
> } else {
> None
> }
> }
>
> So whatever form such a document takes, it might not be accurate or
> complete, and it might be misleading.
Yeah.
>> is `SeqCst` "equivalent" to `Full`?
>
> No ;-) How many hours do you have? (It's a figurative question, I
> probably need to go to sleep now ;-)) For example, `SeqCst` on atomic
> read-modify-write operations maps to acquire+release atomics on ARM64 I
> believe, but a `Full` atomic is acquire+release plus a full memory
> barrier on ARM64. Also a `Full` atomic implies a full memory barrier
> (smp_mb()), but a `SeqCst` atomic is not a `SeqCst` fence.
Thanks for the quick explanation, I would have been satisfied with "No"
:)
---
Cheers,
Benno
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 04/10] rust: sync: atomic: Add generic atomics
2025-06-27 15:01 ` Boqun Feng
@ 2025-06-30 9:52 ` Andreas Hindborg
2025-06-30 14:44 ` Alan Stern
0 siblings, 1 reply; 82+ messages in thread
From: Andreas Hindborg @ 2025-06-30 9:52 UTC (permalink / raw)
To: Boqun Feng
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Gary Guo, Björn Roy Baron, Benno Lossin,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
"Boqun Feng" <boqun.feng@gmail.com> writes:
> On Thu, Jun 26, 2025 at 02:15:35PM +0200, Andreas Hindborg wrote:
>> "Boqun Feng" <boqun.feng@gmail.com> writes:
>>
>> [...]
>>
>> > +
>> > +impl<T: AllowAtomic> Atomic<T> {
>> > + /// Creates a new atomic.
>> > + pub const fn new(v: T) -> Self {
>> > + Self(Opaque::new(v))
>> > + }
>> > +
>> > + /// Creates a reference to [`Self`] from a pointer.
>> > + ///
>> > + /// # Safety
>> > + ///
>> > + /// - `ptr` has to be a valid pointer.
>> > + /// - `ptr` has to be valid for both reads and writes for the whole lifetime `'a`.
>> > + /// - For the whole lifetime of '`a`, other accesses to the object cannot cause data races
>> > + /// (defined by [`LKMM`]) against atomic operations on the returned reference.
>>
>> I feel the wording is a bit tangled here. How about something along the
>> lines of
>>
>> For the duration of `'a`, all accesses to the object must be atomic.
>>
>
> Well, a non-atomic read vs an atomic read is not a data race (in both
> the Rust memory model and the LKMM), so your proposal is overly
> restrictive.
OK, my mistake then. I thought mixing marked and plain accesses would be
considered a race. I got that from
`tools/memory-model/Documentation/explanation.txt`:
A "data race"
occurs when there are two memory accesses such that:
1. they access the same location,
2. at least one of them is a store,
3. at least one of them is plain,
4. they occur on different CPUs (or in different threads on the
same CPU), and
5. they execute concurrently.
I did not study all that documentation, so I might be missing a point or
two.
Best regards,
Andreas Hindborg
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 10/10] rust: sync: Add memory barriers
2025-06-28 3:42 ` Boqun Feng
@ 2025-06-30 9:54 ` Andreas Hindborg
0 siblings, 0 replies; 82+ messages in thread
From: Andreas Hindborg @ 2025-06-30 9:54 UTC (permalink / raw)
To: Boqun Feng
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Gary Guo, Björn Roy Baron, Benno Lossin,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
"Boqun Feng" <boqun.feng@gmail.com> writes:
> On Thu, Jun 26, 2025 at 03:36:25PM +0200, Andreas Hindborg wrote:
>> "Boqun Feng" <boqun.feng@gmail.com> writes:
> [...]
>> > +//! [`LKMM`]: srctree/tools/memory-mode/
>>
>> Typo in link target.
>>
>> > +
>> > +/// A compiler barrier.
>> > +///
>> > +/// An explicic compiler barrier function that prevents the compiler from moving the memory
>> > +/// accesses either side of it to the other side.
>>
>> Typo in "explicit".
>>
>
> Fixed.
>
>> How about:
>>
>> A compiler barrier. Prevents the compiler from reordering
>> memory access instructions across the barrier.
>>
>>
>> > +pub(crate) fn barrier() {
>> > + // By default, Rust inline asms are treated as being able to access any memory or flags, hence
>> > + // it suffices as a compiler barrier.
>> > + //
>> > + // SAFETY: An empty asm block should be safe.
>> > + unsafe {
>> > + core::arch::asm!("");
>> > + }
>> > +}
>> > +
>> > +/// A full memory barrier.
>> > +///
>> > +/// A barrier function that prevents both the compiler and the CPU from moving the memory accesses
>> > +/// either side of it to the other side.
>>
>>
>> A barrier that prevents compiler and CPU from reordering memory access
>> instructions across the barrier.
>>
>> > +pub fn smp_mb() {
>> > + if cfg!(CONFIG_SMP) {
>> > + // SAFETY: `smp_mb()` is safe to call.
>> > + unsafe {
>> > + bindings::smp_mb();
>> > + }
>> > + } else {
>> > + barrier();
>> > + }
>> > +}
>> > +
>> > +/// A write-write memory barrier.
>> > +///
>> > +/// A barrier function that prevents both the compiler and the CPU from moving the memory write
>> > +/// accesses either side of it to the other side.
>>
>> A barrier that prevents compiler and CPU from reordering memory write
>> instructions across the barrier.
>>
>> > +pub fn smp_wmb() {
>> > + if cfg!(CONFIG_SMP) {
>> > + // SAFETY: `smp_wmb()` is safe to call.
>> > + unsafe {
>> > + bindings::smp_wmb();
>> > + }
>> > + } else {
>> > + barrier();
>> > + }
>> > +}
>> > +
>> > +/// A read-read memory barrier.
>> > +///
>> > +/// A barrier function that prevents both the compiler and the CPU from moving the memory read
>> > +/// accesses either side of it to the other side.
>>
>> A barrier that prevents compiler and CPU from reordering memory read
>> instructions across the barrier.
>>
>
> This is good wording, except that I will use "memory (read/write)
> accesses" instead of "memory (read/write) instructions" because:
>
> 1) "instructions" are at a lower level than the language, and the
>    memory barrier functions are provided as synchronization
>    primitives, so I feel we should describe memory barrier effects at
>    the language level, i.e. mention how they interact with objects
>    and accesses to them.
>
> 2) There are instructions that do a read and a write in one
>    instruction, so it might be unclear, when we say "prevents
>    reordering an instruction", whether both parts are included, for
>    example:
>
>	r1 = atomic_add(x, 1); // <- this can be one instruction.
>	smp_rmb();
>	r2 = atomic_read(y);
>
>    people may think that because smp_rmb() prevents reordering of
>    read instructions, and atomic_add() is one instruction in this
>    case, smp_rmb() prevents the write part of that instruction from
>    being reordered, but that's not the case.
>
>
> So I will do:
>
> A barrier that prevents compiler and CPU from reordering memory read
> accesses across the barrier.
Right, that makes sense 👍
Best regards,
Andreas Hindborg
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 05/10] rust: sync: atomic: Add atomic {cmp,}xchg operations
2025-06-28 3:03 ` Boqun Feng
@ 2025-06-30 10:16 ` Andreas Hindborg
2025-06-30 14:51 ` Alan Stern
0 siblings, 1 reply; 82+ messages in thread
From: Andreas Hindborg @ 2025-06-30 10:16 UTC (permalink / raw)
To: Boqun Feng
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Gary Guo, Björn Roy Baron, Benno Lossin,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
"Boqun Feng" <boqun.feng@gmail.com> writes:
> On Thu, Jun 26, 2025 at 03:12:12PM +0200, Andreas Hindborg wrote:
>> "Boqun Feng" <boqun.feng@gmail.com> writes:
>>
>> > xchg() and cmpxchg() are basic operations on atomic. Provide these based
>> > on C APIs.
>> >
>> > Note that cmpxchg() use the similar function signature as
>> > compare_exchange() in Rust std: returning a `Result`, `Ok(old)` means
>> > the operation succeeds and `Err(old)` means the operation fails.
>> >
>> > Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
>> > ---
>> > rust/kernel/sync/atomic/generic.rs | 154 +++++++++++++++++++++++++++++
>> > 1 file changed, 154 insertions(+)
>> >
>> > diff --git a/rust/kernel/sync/atomic/generic.rs b/rust/kernel/sync/atomic/generic.rs
>> > index 73c26f9cf6b8..bcdbeea45dd8 100644
>> > --- a/rust/kernel/sync/atomic/generic.rs
>> > +++ b/rust/kernel/sync/atomic/generic.rs
>> > @@ -256,3 +256,157 @@ pub fn store<Ordering: ReleaseOrRelaxed>(&self, v: T, _: Ordering) {
>> > };
>> > }
>> > }
>> > +
>> > +impl<T: AllowAtomic> Atomic<T>
>> > +where
>> > + T::Repr: AtomicHasXchgOps,
>> > +{
>> > + /// Atomic exchange.
>> > + ///
>> > + /// # Examples
>> > + ///
>> > + /// ```rust
>> > + /// use kernel::sync::atomic::{Atomic, Acquire, Relaxed};
>> > + ///
>> > + /// let x = Atomic::new(42);
>> > + ///
>> > + /// assert_eq!(42, x.xchg(52, Acquire));
>> > + /// assert_eq!(52, x.load(Relaxed));
>> > + /// ```
>> > + #[doc(alias("atomic_xchg", "atomic64_xchg"))]
>> > + #[inline(always)]
>> > + pub fn xchg<Ordering: All>(&self, v: T, _: Ordering) -> T {
>> > + let v = T::into_repr(v);
>> > + let a = self.as_ptr().cast::<T::Repr>();
>> > +
>> > + // SAFETY:
>> > + // - For calling the atomic_xchg*() function:
>> > + // - `self.as_ptr()` is a valid pointer, and per the safety requirement of `AllocAtomic`,
>>
>> Typo: `AllowAtomic`.
>>
>
> Fixed.
>
>> > + // a `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a valid pointer,
>> > + // - per the type invariants, the following atomic operation won't cause data races.
>> > + // - For the extra safety requirement of usage on pointers returned by `self.as_ptr()`:
>> > + // - atomic operations are used here.
>> > + let ret = unsafe {
>> > + match Ordering::TYPE {
>> > + OrderingType::Full => T::Repr::atomic_xchg(a, v),
>> > + OrderingType::Acquire => T::Repr::atomic_xchg_acquire(a, v),
>> > + OrderingType::Release => T::Repr::atomic_xchg_release(a, v),
>> > + OrderingType::Relaxed => T::Repr::atomic_xchg_relaxed(a, v),
>> > + }
>> > + };
>> > +
>> > + T::from_repr(ret)
>> > + }
>> > +
>> > + /// Atomic compare and exchange.
>> > + ///
>> > + /// Compare: The comparison is done via a byte-level comparison between the atomic variable
>> > + /// and the `old` value.
>> > + ///
>> > + /// Ordering: When it succeeds, the operation provides the ordering indicated by the `Ordering`
>> > + /// type parameter; a failed one doesn't provide any ordering, and the read part of a
>> > + /// failed cmpxchg should be treated as a relaxed read.
>>
>> Rust `core::ptr` functions have this sentence on success ordering for
>> compare_exchange:
>>
>> Using Acquire as success ordering makes the store part of this
>> operation Relaxed, and using Release makes the successful load
>> Relaxed.
>>
>> Does this translate to LKMM cmpxchg operations? If so, I think we should
>> include this sentence. This also applies to `Atomic::xchg`.
>>
>
> I see this as a different style of documenting, so in my next version,
> I have the following:
>
> //! - [`Acquire`] provides ordering between the load part of the annotated operation and all the
> //! following memory accesses.
> //! - [`Release`] provides ordering between all the preceding memory accesses and the store part of
> //! the annotated operation.
>
> in atomic/ordering.rs, I think I can extend it to:
>
> //! - [`Acquire`] provides ordering between the load part of the annotated operation and all the
> //! following memory accesses, and if there is a store part, it has Relaxed ordering.
> //! - [`Release`] provides ordering between all the preceding memory accesses and the store part of
> //! the annotated operation, and if there is a load part, it has Relaxed ordering.
>
> This aligns with how we usually describe things in tools/memory-model/.
Cool. When you start to go into details of ordering concepts, I feel
like something is missing though. For example for this sentence:
[`Release`] provides ordering between all the preceding memory
accesses and the store part of the annotated operation.
I guess this provided ordering is only guaranteed to be observable for
threads that read the same location with `Acquire` or stronger ordering?
If we start expanding on the orderings, rather than deferring to LKMM,
we should include this info.
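A minimal sketch of the pairing in question, using the API from this
series (`flag` and `data` are illustrative `Atomic<i32>` values, both
initially 0):

    // Writer:
    data.store(42, Relaxed);
    flag.store(1, Release); // orders the `data` store before this store

    // Reader:
    if flag.load(Acquire) == 1 { // pairs with the Release store above
        // Guaranteed to observe the writer's `data` store.
        assert_eq!(data.load(Relaxed), 42);
    }

A reader that loads `flag` with only Relaxed ordering gets no such
guarantee, which is the point above.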
Best regards,
Andreas Hindborg
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 04/10] rust: sync: atomic: Add generic atomics
2025-06-30 9:52 ` Andreas Hindborg
@ 2025-06-30 14:44 ` Alan Stern
2025-07-01 8:54 ` Andreas Hindborg
0 siblings, 1 reply; 82+ messages in thread
From: Alan Stern @ 2025-06-30 14:44 UTC (permalink / raw)
To: Andreas Hindborg
Cc: Boqun Feng, linux-kernel, rust-for-linux, lkmm, linux-arch,
Miguel Ojeda, Alex Gaynor, Gary Guo, Björn Roy Baron,
Benno Lossin, Alice Ryhl, Trevor Gross, Danilo Krummrich,
Will Deacon, Peter Zijlstra, Mark Rutland, Wedson Almeida Filho,
Viresh Kumar, Lyude Paul, Ingo Molnar, Mitchell Levy,
Paul E. McKenney, Greg Kroah-Hartman, Linus Torvalds,
Thomas Gleixner
On Mon, Jun 30, 2025 at 11:52:35AM +0200, Andreas Hindborg wrote:
> "Boqun Feng" <boqun.feng@gmail.com> writes:
> > Well, a non-atomic read vs an atomic read is not a data race (for both
> > Rust memory model and LKMM), so your proposal is overly restricted.
>
> OK, my mistake then. I thought mixing marked and plain accesses would be
> considered a race. I got that from
> `tools/memory-model/Documentation/explanation.txt`:
>
> A "data race"
> occurs when there are two memory accesses such that:
>
> 1. they access the same location,
>
> 2. at least one of them is a store,
>
> 3. at least one of them is plain,
>
> 4. they occur on different CPUs (or in different threads on the
> same CPU), and
>
> 5. they execute concurrently.
>
> I did not study all that documentation, so I might be missing a point or
> two.
You missed point 2 above: at least one of the accesses has to be a
store. When you're looking at a non-atomic read vs. an atomic read,
both of them are loads and so it isn't a data race.
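For example (a hedged sketch; `x` is an illustrative `Atomic<i32>` and
`p` a plain `*mut i32` to the same location):

    // Thread A: a plain (non-atomic) load.
    let r1 = unsafe { p.read() };
    // Thread B, concurrently: a marked (atomic) load.
    let r2 = x.load(Relaxed);

    // Both accesses are loads, so condition 2 fails: not a data race.
    // If thread B instead did `x.store(0, Relaxed)`, all five
    // conditions would hold and the plain load would be a data race.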
Alan
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 05/10] rust: sync: atomic: Add atomic {cmp,}xchg operations
2025-06-30 10:16 ` Andreas Hindborg
@ 2025-06-30 14:51 ` Alan Stern
2025-06-30 15:12 ` Boqun Feng
0 siblings, 1 reply; 82+ messages in thread
From: Alan Stern @ 2025-06-30 14:51 UTC (permalink / raw)
To: Andreas Hindborg
Cc: Boqun Feng, linux-kernel, rust-for-linux, lkmm, linux-arch,
Miguel Ojeda, Alex Gaynor, Gary Guo, Björn Roy Baron,
Benno Lossin, Alice Ryhl, Trevor Gross, Danilo Krummrich,
Will Deacon, Peter Zijlstra, Mark Rutland, Wedson Almeida Filho,
Viresh Kumar, Lyude Paul, Ingo Molnar, Mitchell Levy,
Paul E. McKenney, Greg Kroah-Hartman, Linus Torvalds,
Thomas Gleixner
On Mon, Jun 30, 2025 at 12:16:27PM +0200, Andreas Hindborg wrote:
> "Boqun Feng" <boqun.feng@gmail.com> writes:
> > in atomic/ordering.rs, I think I can extend it to:
> >
> > //! - [`Acquire`] provides ordering between the load part of the annotated operation and all the
> > //! following memory accesses, and if there is a store part, it has Relaxed ordering.
> > //! - [`Release`] provides ordering between all the preceding memory accesses and the store part of
> > //! the annotated operation, and if there is a load part, it has Relaxed ordering.
> >
> > This aligns with how we usually describe things in tools/memory-model/.
>
> Cool. When you start to go into details of ordering concepts, I feel
> like something is missing though. For example for this sentence:
>
> [`Release`] provides ordering between all the preceding memory
> accesses and the store part of the annotated operation.
>
> I guess this provided ordering is only guaranteed to be observable for
> threads that read the same location with `Acquire` or stronger ordering?
>
> If we start expanding on the orderings, rather than deferring to LKMM,
> we should include this info.
The problem with the word "ordering" is that it is too general, not
specific enough. You need more context to know exactly what the
ordering means.
For example, ordering store A against store B (which comes later in the
code) could mean that the CPU executes A before it executes B. Or it
could mean that a different CPU will see the data from A before it sees
the data from B.
A more explicit description would be helpful.
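A hedged sketch of the two readings, using the notation from this
series (`x` and `y` are illustrative `Atomic<i32>`s, both initially 0):

    // CPU0:
    x.store(1, Relaxed); // A
    y.store(1, Release); // B

    // CPU1:
    let r1 = y.load(Acquire); // C
    let r2 = x.load(Relaxed); // D

    // "A is ordered before B" does not mean CPU0 executes A first in
    // time; it means that a CPU which observes B (r1 == 1) through an
    // Acquire load is also guaranteed to observe A (r2 == 1).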
Alan Stern
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 05/10] rust: sync: atomic: Add atomic {cmp,}xchg operations
2025-06-30 14:51 ` Alan Stern
@ 2025-06-30 15:12 ` Boqun Feng
0 siblings, 0 replies; 82+ messages in thread
From: Boqun Feng @ 2025-06-30 15:12 UTC (permalink / raw)
To: Alan Stern
Cc: Andreas Hindborg, linux-kernel, rust-for-linux, lkmm, linux-arch,
Miguel Ojeda, Alex Gaynor, Gary Guo, Björn Roy Baron,
Benno Lossin, Alice Ryhl, Trevor Gross, Danilo Krummrich,
Will Deacon, Peter Zijlstra, Mark Rutland, Wedson Almeida Filho,
Viresh Kumar, Lyude Paul, Ingo Molnar, Mitchell Levy,
Paul E. McKenney, Greg Kroah-Hartman, Linus Torvalds,
Thomas Gleixner
On Mon, Jun 30, 2025 at 10:51:00AM -0400, Alan Stern wrote:
> On Mon, Jun 30, 2025 at 12:16:27PM +0200, Andreas Hindborg wrote:
> > "Boqun Feng" <boqun.feng@gmail.com> writes:
> > > in atomic/ordering.rs, I think I can extend it to:
> > >
> > > //! - [`Acquire`] provides ordering between the load part of the annotated operation and all the
> > > //! following memory accesses, and if there is a store part, it has Relaxed ordering.
> > > //! - [`Release`] provides ordering between all the preceding memory accesses and the store part of
> > > //! the annotated operation, and if there is a load part, it has Relaxed ordering.
> > >
> > > This aligns with how we usually describe things in tools/memory-model/.
> >
> > Cool. When you start to go into details of ordering concepts, I feel
Well, avoiding going too much into the details is what I wanted for
those documentation comments ;-)
> > like something is missing though. For example for this sentence:
> >
> > [`Release`] provides ordering between all the preceding memory
> > accesses and the store part of the annotated operation.
> >
> > I guess this provided ordering is only guaranteed to be observable for
> > threads that read the same location with `Acquire` or stronger ordering?
> >
> > If we start expanding on the orderings, rather than deferring to LKMM,
> > we should include this info.
I'm not sure I follow you on this one. I'm not trying to expand on
orderings; instead I'm trying to resolve your feedback that my previous
version didn't mention what ordering the unspecified part of a
read-modify-write has (like the store part of an Acquire operation and
the load part of a Release operation). To me, they are naturally just
Relaxed, but given the feedback I got from you, maybe I should
explicitly mention that they are Relaxed.
It's simply making things more explicit, and I'm still deferring to
LKMM about the exact meaning of the ordering.
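Concretely, for the API in this series (a hedged example; `x` is an
illustrative `Atomic<i32>`):

    let old = x.xchg(1, Acquire);
    // The load part of the xchg has Acquire ordering: it is ordered
    // before all memory accesses that follow in program order.
    // The store part has Relaxed ordering: no ordering on its own.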
>
> The problem with the word "ordering" is that it is too general, not
> specific enough. You need more context to know exactly what the
> ordering means.
>
> For example, ordering store A against store B (which comes later in the
> code) could mean that the CPU executes A before it executes B. Or it
> could mean that a different CPU will see the data from A before it sees
> the data from B.
>
> A more explicit description would be helpful.
>
Except that the explicit description should be in tools/memory-model/
instead of in the comments of a function or the Rust atomic module,
like Documentation/atomic_t.txt vs
tools/memory-model/Documentation/explanation.txt: one is for people
to get a quick idea of what these annotations/suffixes mean, and the
other elaborates on their precise meaning. In most cases, people won't
need to grasp the subtleties of the memory model to write correct
code.
Regards,
Boqun
> Alan Stern
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 05/10] rust: sync: atomic: Add atomic {cmp,}xchg operations
2025-06-28 8:00 ` Benno Lossin
@ 2025-06-30 15:24 ` Boqun Feng
2025-06-30 15:27 ` Boqun Feng
2025-06-30 15:50 ` Benno Lossin
0 siblings, 2 replies; 82+ messages in thread
From: Boqun Feng @ 2025-06-30 15:24 UTC (permalink / raw)
To: Benno Lossin
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Gary Guo, Björn Roy Baron, Andreas Hindborg,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
On Sat, Jun 28, 2025 at 10:00:34AM +0200, Benno Lossin wrote:
> On Sat Jun 28, 2025 at 9:31 AM CEST, Boqun Feng wrote:
> > On Sat, Jun 28, 2025 at 08:12:42AM +0200, Benno Lossin wrote:
> >> On Fri Jun 27, 2025 at 3:53 PM CEST, Boqun Feng wrote:
> >> > As for naming, the reason I chose xchg() and cmpxchg() is that they
> >> > are the names LKMM has used for a long time; to use other names, we
> >> > would need a very good reason, and I don't see one that makes the
> >> > other names better. Especially since, in our memory model, we use
> >> > xchg() and cmpxchg() a lot, and they are different from the Rust
> >> > versions, where you can specify orderings separately. Renaming LKMM's
> >> > xchg()/cmpxchg() would cause more confusion, I believe.
> >>
> >> I'm just not used to the name shortening from the kernel... I think it's
> >
> > I guess it's a bit of a curse of knowledge from my side...
> >
> >> fine to use them especially since the ordering parameters differ from
> >> std's atomics.
> >>
> >> Can you add aliases for the Rust names?
> >>
> >
> > I can, but I also want to see a real user request ;-) As a bi-model user
> > myself, I generally don't mind the name, as you can see C++ and Rust use
> > different names as well, what I usually do is just "tell me what's the
> > name of the function if I need to do this" ;-)
>
> I think learning Rust in the kernel is different from learning a new
> language. Yes you're learning a specific dialect of Rust, but that's
> what every project does.
>
> You also added aliases for the C versions, so let's also add the Rust
> ones :)
>
Makes sense, so added:
--- a/rust/kernel/sync/atomic/generic.rs
+++ b/rust/kernel/sync/atomic/generic.rs
@@ -310,7 +310,7 @@ impl<T: AllowAtomic> Atomic<T>
/// assert_eq!(42, x.xchg(52, Acquire));
/// assert_eq!(52, x.load(Relaxed));
/// ```
- #[doc(alias("atomic_xchg", "atomic64_xchg"))]
+ #[doc(alias("atomic_xchg", "atomic64_xchg", "swap"))]
#[inline(always)]
pub fn xchg<Ordering: Any>(&self, v: T, _: Ordering) -> T {
let v = T::into_repr(v);
@@ -382,6 +382,7 @@ pub fn xchg<Ordering: Any>(&self, v: T, _: Ordering) -> T {
"atomic64_cmpxchg",
"atomic_try_cmpxchg",
"atomic64_try_cmpxchg"
+ "compare_exchange"
))]
#[inline(always)]
pub fn cmpxchg<Ordering: Any>(&self, mut old: T, new: T, o: Ordering) -> Result<T, T> {
Seems good?
Regards,
Boqun
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 05/10] rust: sync: atomic: Add atomic {cmp,}xchg operations
2025-06-30 15:24 ` Boqun Feng
@ 2025-06-30 15:27 ` Boqun Feng
2025-06-30 15:50 ` Benno Lossin
1 sibling, 0 replies; 82+ messages in thread
From: Boqun Feng @ 2025-06-30 15:27 UTC (permalink / raw)
To: Benno Lossin
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Gary Guo, Björn Roy Baron, Andreas Hindborg,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
On Mon, Jun 30, 2025 at 08:24:13AM -0700, Boqun Feng wrote:
[...]
>
> Makes sense, so added:
>
> --- a/rust/kernel/sync/atomic/generic.rs
> +++ b/rust/kernel/sync/atomic/generic.rs
> @@ -310,7 +310,7 @@ impl<T: AllowAtomic> Atomic<T>
> /// assert_eq!(42, x.xchg(52, Acquire));
> /// assert_eq!(52, x.load(Relaxed));
> /// ```
> - #[doc(alias("atomic_xchg", "atomic64_xchg"))]
> + #[doc(alias("atomic_xchg", "atomic64_xchg", "swap"))]
> #[inline(always)]
> pub fn xchg<Ordering: Any>(&self, v: T, _: Ordering) -> T {
> let v = T::into_repr(v);
> @@ -382,6 +382,7 @@ pub fn xchg<Ordering: Any>(&self, v: T, _: Ordering) -> T {
> "atomic64_cmpxchg",
> "atomic_try_cmpxchg",
> "atomic64_try_cmpxchg"
Missing a comma here, fixed locally.
Regards,
Boqun
> + "compare_exchange"
> ))]
> #[inline(always)]
> pub fn cmpxchg<Ordering: Any>(&self, mut old: T, new: T, o: Ordering) -> Result<T, T> {
>
> Seems good?
>
> Regards,
> Boqun
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 05/10] rust: sync: atomic: Add atomic {cmp,}xchg operations
2025-06-30 15:24 ` Boqun Feng
2025-06-30 15:27 ` Boqun Feng
@ 2025-06-30 15:50 ` Benno Lossin
1 sibling, 0 replies; 82+ messages in thread
From: Benno Lossin @ 2025-06-30 15:50 UTC (permalink / raw)
To: Boqun Feng
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Gary Guo, Björn Roy Baron, Andreas Hindborg,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
On Mon Jun 30, 2025 at 5:24 PM CEST, Boqun Feng wrote:
> On Sat, Jun 28, 2025 at 10:00:34AM +0200, Benno Lossin wrote:
>> On Sat Jun 28, 2025 at 9:31 AM CEST, Boqun Feng wrote:
>> > On Sat, Jun 28, 2025 at 08:12:42AM +0200, Benno Lossin wrote:
>> >> On Fri Jun 27, 2025 at 3:53 PM CEST, Boqun Feng wrote:
>> >> > As for naming, the reason I chose xchg() and cmpxchg() is that they
>> >> > are the names LKMM has used for a long time; to use other names, we
>> >> > would need a very good reason, and I don't see one that makes the
>> >> > other names better. Especially since, in our memory model, we use
>> >> > xchg() and cmpxchg() a lot, and they are different from the Rust
>> >> > versions, where you can specify orderings separately. Renaming LKMM's
>> >> > xchg()/cmpxchg() would cause more confusion, I believe.
>> >>
>> >> I'm just not used to the name shortening from the kernel... I think it's
>> >
>> > I guess it's a bit of a curse of knowledge from my side...
>> >
>> >> fine to use them especially since the ordering parameters differ from
>> >> std's atomics.
>> >>
>> >> Can you add aliases for the Rust names?
>> >>
>> >
>> > I can, but I also want to see a real user request ;-) As a bi-model user
>> > myself, I generally don't mind the name, as you can see C++ and Rust use
>> > different names as well, what I usually do is just "tell me what's the
>> > name of the function if I need to do this" ;-)
>>
>> I think learning Rust in the kernel is different from learning a new
>> language. Yes you're learning a specific dialect of Rust, but that's
>> what every project does.
>>
>> You also added aliases for the C versions, so let's also add the Rust
>> ones :)
>>
>
> Makes sense, so added:
>
> --- a/rust/kernel/sync/atomic/generic.rs
> +++ b/rust/kernel/sync/atomic/generic.rs
> @@ -310,7 +310,7 @@ impl<T: AllowAtomic> Atomic<T>
> /// assert_eq!(42, x.xchg(52, Acquire));
> /// assert_eq!(52, x.load(Relaxed));
> /// ```
> - #[doc(alias("atomic_xchg", "atomic64_xchg"))]
> + #[doc(alias("atomic_xchg", "atomic64_xchg", "swap"))]
> #[inline(always)]
> pub fn xchg<Ordering: Any>(&self, v: T, _: Ordering) -> T {
> let v = T::into_repr(v);
> @@ -382,6 +382,7 @@ pub fn xchg<Ordering: Any>(&self, v: T, _: Ordering) -> T {
> "atomic64_cmpxchg",
> "atomic_try_cmpxchg",
> "atomic64_try_cmpxchg"
> + "compare_exchange"
> ))]
> #[inline(always)]
> pub fn cmpxchg<Ordering: Any>(&self, mut old: T, new: T, o: Ordering) -> Result<T, T> {
>
> Seems good?
Yeah, thanks!
---
Cheers,
Benno
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 04/10] rust: sync: atomic: Add generic atomics
2025-06-30 14:44 ` Alan Stern
@ 2025-07-01 8:54 ` Andreas Hindborg
2025-07-01 14:50 ` Boqun Feng
0 siblings, 1 reply; 82+ messages in thread
From: Andreas Hindborg @ 2025-07-01 8:54 UTC (permalink / raw)
To: Alan Stern
Cc: Boqun Feng, linux-kernel, rust-for-linux, lkmm, linux-arch,
Miguel Ojeda, Alex Gaynor, Gary Guo, Björn Roy Baron,
Benno Lossin, Alice Ryhl, Trevor Gross, Danilo Krummrich,
Will Deacon, Peter Zijlstra, Mark Rutland, Wedson Almeida Filho,
Viresh Kumar, Lyude Paul, Ingo Molnar, Mitchell Levy,
Paul E. McKenney, Greg Kroah-Hartman, Linus Torvalds,
Thomas Gleixner
"Alan Stern" <stern@rowland.harvard.edu> writes:
> On Mon, Jun 30, 2025 at 11:52:35AM +0200, Andreas Hindborg wrote:
>> "Boqun Feng" <boqun.feng@gmail.com> writes:
>> > Well, a non-atomic read vs an atomic read is not a data race (for both
>> > Rust memory model and LKMM), so your proposal is overly restricted.
>>
>> OK, my mistake then. I thought mixing marked and plain accesses would be
>> considered a race. I got that from
>> `tools/memory-model/Documentation/explanation.txt`:
>>
>> A "data race"
>> occurs when there are two memory accesses such that:
>>
>> 1. they access the same location,
>>
>> 2. at least one of them is a store,
>>
>> 3. at least one of them is plain,
>>
>> 4. they occur on different CPUs (or in different threads on the
>> same CPU), and
>>
>> 5. they execute concurrently.
>>
>> I did not study all that documentation, so I might be missing a point or
>> two.
>
> You missed point 2 above: at least one of the accesses has to be a
> store. When you're looking at a non-atomic read vs. an atomic read,
> both of them are loads and so it isn't a data race.
Ah, right. I was missing the entire point made by Boqun. Thanks for
clarifying.
Since what constitutes a race might not be immediately clear to users
(like me), can we include the section above in the safety comment,
rather than deferring to LKMM docs?
Best regards,
Andreas Hindborg
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 04/10] rust: sync: atomic: Add generic atomics
2025-07-01 8:54 ` Andreas Hindborg
@ 2025-07-01 14:50 ` Boqun Feng
2025-07-02 8:33 ` Andreas Hindborg
0 siblings, 1 reply; 82+ messages in thread
From: Boqun Feng @ 2025-07-01 14:50 UTC (permalink / raw)
To: Andreas Hindborg
Cc: Alan Stern, linux-kernel, rust-for-linux, lkmm, linux-arch,
Miguel Ojeda, Alex Gaynor, Gary Guo, Björn Roy Baron,
Benno Lossin, Alice Ryhl, Trevor Gross, Danilo Krummrich,
Will Deacon, Peter Zijlstra, Mark Rutland, Wedson Almeida Filho,
Viresh Kumar, Lyude Paul, Ingo Molnar, Mitchell Levy,
Paul E. McKenney, Greg Kroah-Hartman, Linus Torvalds,
Thomas Gleixner
On Tue, Jul 01, 2025 at 10:54:09AM +0200, Andreas Hindborg wrote:
> "Alan Stern" <stern@rowland.harvard.edu> writes:
>
> > On Mon, Jun 30, 2025 at 11:52:35AM +0200, Andreas Hindborg wrote:
> >> "Boqun Feng" <boqun.feng@gmail.com> writes:
> >> > Well, a non-atomic read vs an atomic read is not a data race (for both
> >> > Rust memory model and LKMM), so your proposal is overly restricted.
> >>
> >> OK, my mistake then. I thought mixing marked and plain accesses would be
> >> considered a race. I got that from
> >> `tools/memory-model/Documentation/explanation.txt`:
> >>
> >> A "data race"
> >> occurs when there are two memory accesses such that:
> >>
> >> 1. they access the same location,
> >>
> >> 2. at least one of them is a store,
> >>
> >> 3. at least one of them is plain,
> >>
> >> 4. they occur on different CPUs (or in different threads on the
> >> same CPU), and
> >>
> >> 5. they execute concurrently.
> >>
> >> I did not study all that documentation, so I might be missing a point or
> >> two.
> >
> > You missed point 2 above: at least one of the accesses has to be a
> > store. When you're looking at a non-atomic read vs. an atomic read,
> > both of them are loads and so it isn't a data race.
>
> Ah, right. I was missing the entire point made by Boqun. Thanks for
> clarifying.
>
> Since what constitutes a race might not be immediately clear to users
> (like me), can we include the section above in the safety comment,
> rather than deferring to LKMM docs?
>
Still, I don't think it's a good idea, for a few reasons:
1) Maintaining multiple sources of truth is painful and risky; it's
going to be even more confusing if users feel LKMM and the function's
safety requirement conflict with each other.
2) Human language is not the best tool to describe a memory model;
that's why we use herd to describe and reason about the memory model.
Trying to describe the memory model in comments rather than referring
to the formal model is a step backwards.
3) I believe the reason we got to this discussion is that you tried
to improve the comment of `from_ptr()`, and I do appreciate that
effort. I think we should continue in that direction instead of
pulling the whole "what is a data race" question into the picture. So
how about we clearly call out that it'll be safe if all other accesses
are atomic, which should cover most cases:
/// - For the duration of `'a`, other accesses to the object cannot cause data races
/// (defined by [`LKMM`]) against atomic operations on the returned reference. Note
/// that if all other accesses are atomic, then this safety requirement is trivially
/// fulfilled.
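For example, a hedged usage sketch (`ptr` and the `i32` type are
illustrative):

    // SAFETY: `ptr` is valid and live for `'a`, and all other accesses
    // to the object are atomic, so per the note above the data-race
    // requirement is trivially fulfilled.
    let a = unsafe { Atomic::<i32>::from_ptr(ptr) };
    let v = a.load(Relaxed);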
Regards,
Boqun
>
> Best regards,
> Andreas Hindborg
>
>
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 04/10] rust: sync: atomic: Add generic atomics
2025-07-01 14:50 ` Boqun Feng
@ 2025-07-02 8:33 ` Andreas Hindborg
0 siblings, 0 replies; 82+ messages in thread
From: Andreas Hindborg @ 2025-07-02 8:33 UTC (permalink / raw)
To: Boqun Feng
Cc: Alan Stern, linux-kernel, rust-for-linux, lkmm, linux-arch,
Miguel Ojeda, Alex Gaynor, Gary Guo, Björn Roy Baron,
Benno Lossin, Alice Ryhl, Trevor Gross, Danilo Krummrich,
Will Deacon, Peter Zijlstra, Mark Rutland, Wedson Almeida Filho,
Viresh Kumar, Lyude Paul, Ingo Molnar, Mitchell Levy,
Paul E. McKenney, Greg Kroah-Hartman, Linus Torvalds,
Thomas Gleixner
"Boqun Feng" <boqun.feng@gmail.com> writes:
> On Tue, Jul 01, 2025 at 10:54:09AM +0200, Andreas Hindborg wrote:
>> "Alan Stern" <stern@rowland.harvard.edu> writes:
>>
>> > On Mon, Jun 30, 2025 at 11:52:35AM +0200, Andreas Hindborg wrote:
>> >> "Boqun Feng" <boqun.feng@gmail.com> writes:
>> >> > Well, a non-atomic read vs an atomic read is not a data race (for both
>> >> > Rust memory model and LKMM), so your proposal is overly restricted.
>> >>
>> >> OK, my mistake then. I thought mixing marked and plain accesses would be
>> >> considered a race. I got that from
>> >> `tools/memory-model/Documentation/explanation.txt`:
>> >>
>> >> A "data race"
>> >> occurs when there are two memory accesses such that:
>> >>
>> >> 1. they access the same location,
>> >>
>> >> 2. at least one of them is a store,
>> >>
>> >> 3. at least one of them is plain,
>> >>
>> >> 4. they occur on different CPUs (or in different threads on the
>> >> same CPU), and
>> >>
>> >> 5. they execute concurrently.
>> >>
>> >> I did not study all that documentation, so I might be missing a point or
>> >> two.
>> >
>> > You missed point 2 above: at least one of the accesses has to be a
>> > store. When you're looking at a non-atomic read vs. an atomic read,
>> > both of them are loads and so it isn't a data race.
>>
>> Ah, right. I was missing the entire point made by Boqun. Thanks for
>> clarifying.
>>
>> Since what constitutes a race might not be immediately clear to users
>> (like me), can we include the section above in the safety comment,
>> rather than deferring to LKMM docs?
>>
>
> Still, I don't think it's a good idea. For a few reasons:
>
> 1) Maintaining multiple sources of truth is painful and risky; it's
> going to be even more confusing if users feel LKMM and the function's
> safety requirement conflict with each other.
I would agree.
>
> 2) Human language is not the best tool to describe a memory model;
> that's why we use herd to describe and reason about the memory model.
> Trying to describe the memory model in comments rather than referring
> to the formal model is a step backwards.
I do not agree with this. I read human language much better than formal logic.
>
> 3) I believe the reason we got to this discussion is that you tried
> to improve the comment of `from_ptr()`, and I do appreciate that
> effort. I think we should continue in that direction instead of
> pulling the whole "what is a data race" question into the picture.
Yes, absolutely.
> So
> how about we clearly call out that it'll be safe if all other accesses
> are atomic, which should cover most cases:
>
> /// - For the duration of `'a`, other accesses to the object cannot cause data races
> /// (defined by [`LKMM`]) against atomic operations on the returned reference. Note
> /// that if all other accesses are atomic, then this safety requirement is trivially
> /// fulfilled.
Sounds good to me, thanks!
Best regards,
Andreas Hindborg
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 04/10] rust: sync: atomic: Add generic atomics
2025-06-23 19:09 ` Boqun Feng
2025-06-23 23:27 ` Benno Lossin
@ 2025-07-04 20:25 ` Boqun Feng
2025-07-04 20:45 ` Benno Lossin
1 sibling, 1 reply; 82+ messages in thread
From: Boqun Feng @ 2025-07-04 20:25 UTC (permalink / raw)
To: Gary Guo
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
On Mon, Jun 23, 2025 at 12:09:21PM -0700, Boqun Feng wrote:
> On Mon, Jun 23, 2025 at 07:30:19PM +0100, Gary Guo wrote:
> > On Sun, 22 Jun 2025 22:19:44 -0700
> > Boqun Feng <boqun.feng@gmail.com> wrote:
> >
> > > On Sat, Jun 21, 2025 at 12:32:12PM +0100, Gary Guo wrote:
> > > [...]
> > > > > +#[repr(transparent)]
> > > > > +pub struct Atomic<T: AllowAtomic>(Opaque<T>);
> > > >
> > > > This should store `Opaque<T::Repr>` instead.
> > > >
> > >
> > > "should" is a strong word ;-) If we still use `into_repr`/`from_repr`
>> > it's basically impossible, because Atomic::new() wants to be a const
> > > function, so it requires const_trait_impl I believe.
> > >
> > > If we require transmutability as a safety requirement for `AllowAtomic`,
> > > then either `T` or `T::Repr` is fine.
> > >
> > > > The implementation below essentially assumes that this is
> > > > `Opaque<T::Repr>`:
> > > > * atomic ops cast this to `*mut T::Repr`
> > > > * load/store operates on `T::Repr` then converts to `T` with
> > > > `T::from_repr`/`T::into_repr`.
> > > >
> > >
>> > Note that we only require one direction of strong transmutability, that
>> > is: for every `T`, it must be safe to transmute it to a `T::Repr`; a
>> > `T::Repr` -> `T` transmutation is only valid if the value is a result of
>> > a `transmute<T, T::Repr>()`. This is mostly due to potential support for
>> > unit-only enums, e.g. using an atomic variable to represent a finite state.
> > >
>> > > Note that the transparent newtype restriction on `AllowAtomic` is not
> > > > sufficient for this, as I can define
> > > >
> > >
> > > Nice catch! I do agree we should disallow `MyWeirdI32`, and I also agree
>> > that we should put transmutability as a safety requirement for
> > > `AllowAtomic`. However, I would suggest we still keep
> > > `into_repr`/`from_repr`, and require the implementation to make them
> > > provide the same results as transmute(), as a correctness precondition
> > > (instead of a safety precondition), in other words, you can still write
> > > a `MyWeirdI32`, and it won't cause safety issues, but it'll be
> > > incorrect.
> > >
> > > The reason why I think we should keep `into_repr`/`from_repr` but add
>> > a correctness precondition is that they are easy to implement as safe
> > > code for basic types, so it'll be better than a transmute() call. Also
> > > considering `Atomic<*mut T>`, would transmuting between integers and
> > > pointers act the same as expose_provenance() and
>> > with_exposed_provenance()?
> >
> > Okay, this is more problematic than I thought then. For pointers, you
>
> Welcome to my nightmare ;-)
>
>> cannot just transmute between pointers and usize (which is its
> > Repr):
> > * Transmuting from pointer to usize discards provenance
> > * Transmuting from usize to pointer gives invalid provenance
> >
> > We want neither behaviour, so we must store `usize` directly and
> > always call into repr functions.
> >
>
> If we store `usize`, how can we support the `get_mut()` then? E.g.
>
> static V: i32 = 32;
>
> let mut x = Atomic::new(&V as *const i32 as *mut i32);
> // ^ assume we expose_provenance() in new().
>
> let ptr: &mut *mut i32 = x.get_mut(); // which is `&mut self.0.get()`.
>
> let ptr_val = *ptr; // Does `ptr_val` have the proper provenance?
>
There have been a few off-list discussions, and I've been doing some
experiments myself. Here are a few points/concepts that will help future
discussion or documentation, so I'm putting them down here:
* Round-trip transmutability (thanks, Benno, for the name!).
We realize this should be a safety requirement of an `AllowAtomic` type
(i.e. a type that can be put in an `Atomic<T>`). What it means is:
- If `T: AllowAtomic`, transmute() from `T` to `T::Repr` is always
safe and
- if a value of `T::Repr` is a result of transmute() from `T` to
`T::Repr`, then `transmute()` for that value to `T` is also safe.
This essentially means a valid bit pattern of `T: AllowAtomic` has to
be a valid bit pattern of `T::Repr`.
This is needed because the atomic framework operates on `T::Repr` to
implement atomic operations on `T`.
Note that this is more relaxed than bi-directional transmutability (i.e.
transmute() between `T` and `T::Repr`) because we want to support
atomic types over unit-only enums:
#[repr(i32)]
pub enum State {
Init = 0,
Working = 1,
Done = 2,
}
This should be really helpful to support atomics as states, for
example:
https://lore.kernel.org/rust-for-linux/20250702-module-params-v3-v14-1-5b1cc32311af@kernel.org/
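As a hedged sketch (the exact trait shape is assumed here and may
differ in the series), an implementation for such an enum could look
like:

    // SAFETY: `State` is `#[repr(i32)]`, so every valid `State` bit
    // pattern is a valid `i32` bit pattern (round-trip
    // transmutability as described above).
    unsafe impl AllowAtomic for State {
        type Repr = i32;

        fn into_repr(self) -> i32 {
            self as i32 // bit-preserving, equivalent to a transmute
        }

        fn from_repr(repr: i32) -> Self {
            match repr {
                0 => State::Init,
                1 => State::Working,
                2 => State::Done,
                // Unreachable: only round-tripped values get here.
                _ => unreachable!(),
            }
        }
    }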
* transmute()-equivalent from_repr() and into_repr().
(This is not a safety requirement)
from_repr() and into_repr(), if they exist, should behave like transmute()
on the bit pattern of the results; in other words, bit patterns of `T`
or `T::Repr` should stay the same before and after these operations.
Of course, if we remove them and replace them with transmute(), the
result is the same.
This reflects the fact that customized atomic types should store
unmodified bit patterns into atomic variables, and this keeps atomic
operations from having weird behavior [1] when combined with new(),
from_ptr() and get_mut().
* Provenance preservation.
(This is not a safety requirement for Atomic itself)
For an `Atomic<*mut T>`, it should preserve the provenance of the
pointer that has been stored into it, i.e. the load result from an
`Atomic<*mut T>` should have the same provenance.
Technically, without this, `Atomic<*mut T>` still works without any
safety issue itself, but the users of it must maintain the provenance
themselves before a store or after a load.
And it turns out it's not very hard to prove that the current
implementation achieves this:
- For a non-atomic operation done on the atomic variable, they are
already using pointer operation, so the provenance has been
preserved.
- For an atomic operation, since they are done via inline asm code, in
Rust's abstract machine, they can be treated as pointer read and
write:
a) A load of the atomic can be treated as a pointer read and then
exposing the provenance.
b) A store of the atomic can be treated as a pointer write with a
value created with the exposed provenance.
And our implementation, thanks to having no arbitrary type coercion,
already guarantees that for each a) there is a from_repr() after and
for each b) there is an into_repr() before. And from_repr() acts as
a with_exposed_provenance() and into_repr() acts as an
expose_provenance(). Hence the provenance is preserved.
Note this is a global property and it has to be proven at the `Atomic<T>`
level.
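In plain Rust terms, the claim is that the pointer round-trip behaves
like the following hedged sketch (std names; the series maps
into_repr()/from_repr() onto these):

    use core::ptr;

    let mut v = 0i32;
    let p: *mut i32 = &mut v;

    // Store path: into_repr() behaves like expose_provenance().
    let repr: usize = p.expose_provenance();
    // ... `repr` travels through the C/asm atomic implementation ...
    // Load path: from_repr() behaves like with_exposed_provenance_mut().
    let q: *mut i32 = ptr::with_exposed_provenance_mut(repr);
    // `q` is usable because `p`'s provenance was exposed above.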
Regards,
Boqun
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 04/10] rust: sync: atomic: Add generic atomics
2025-07-04 20:25 ` Boqun Feng
@ 2025-07-04 20:45 ` Benno Lossin
2025-07-04 21:17 ` Boqun Feng
0 siblings, 1 reply; 82+ messages in thread
From: Benno Lossin @ 2025-07-04 20:45 UTC (permalink / raw)
To: Boqun Feng, Gary Guo
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Björn Roy Baron, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra,
Mark Rutland, Wedson Almeida Filho, Viresh Kumar, Lyude Paul,
Ingo Molnar, Mitchell Levy, Paul E. McKenney, Greg Kroah-Hartman,
Linus Torvalds, Thomas Gleixner
On Fri Jul 4, 2025 at 10:25 PM CEST, Boqun Feng wrote:
> There have been a few off-list discussions, and I've been doing some
> experiments myself. Here are a few points/concepts that will help future
> discussion or documentation, so I'm putting them down here:
>
> * Round-trip transmutability (thanks, Benno, for the name!).
>
> We realize this should be a safety requirement of an `AllowAtomic` type
> (i.e. a type that can be put in an `Atomic<T>`). What it means is:
>
> - If `T: AllowAtomic`, transmute() from `T` to `T::Repr` is always
> safe and
s/safe/sound/
> - if a value of `T::Repr` is a result of transmute() from `T` to
> `T::Repr`, then `transmute()` for that value to `T` is also safe.
s/safe/sound/
:)
>
> This essentially means a valid bit pattern of `T: AllowAtomic` has to
> be a valid bit pattern of `T::Repr`.
>
> This is needed because the atomic framework operates on `T::Repr` to
> implement atomic operations on `T`.
>
> Note that this is more relaxed than bi-directional transmutability (i.e.
> transmute() between `T` and `T::Repr`) because we want to support
> atomic types over unit-only enums:
>
> #[repr(i32)]
> pub enum State {
> Init = 0,
> Working = 1,
> Done = 2,
> }
>
> This should be really helpful to support atomics as states, for
> example:
>
> https://lore.kernel.org/rust-for-linux/20250702-module-params-v3-v14-1-5b1cc32311af@kernel.org/
>
> * transmute()-equivalent from_repr() and into_repr().
Hmm I don't think this name fits the description below, how about
"bit-equivalency of from_repr() and into_repr()"? We don't need to
transmute, we only want to ensure that `{from,into}_repr` are just
transmutes.
> (This is not a safety requirement)
>
> from_repr() and into_repr(), if they exist, should behave like transmute()
> on the bit pattern of the results; in other words, bit patterns of `T`
> or `T::Repr` should stay the same before and after these operations.
>
> Of course, if we remove them and replace them with transmute(), the
> result is the same.
>
> This reflects the fact that customized atomic types should store
> unmodified bit patterns into atomic variables, and this keeps atomic
> operations from having weird behavior [1] when combined with new(),
> from_ptr() and get_mut().
I remember that this was required to support types like `(u8, u16)`? If
yes, then it would be good to include a paragraph like the one above for
enums :)
> * Provenance preservation.
>
> (This is not a safety requirement for Atomic itself)
>
> For an `Atomic<*mut T>`, it should preserve the provenance of the
> pointer that has been stored into it, i.e. the load result from an
> `Atomic<*mut T>` should have the same provenance.
>
> Technically, without this, `Atomic<*mut T>` still works without any
> safety issue itself, but the users of it must maintain the provenance
> themselves before a store or after a load.
>
> And it turns out it's not very hard to prove that the current
> implementation achieves this:
>
> - For a non-atomic operation done on the atomic variable, they are
> already using pointer operation, so the provenance has been
> preserved.
> - For an atomic operation, since they are done via inline asm code, in
> Rust's abstract machine, they can be treated as pointer read and
> write:
>
> a) A load of the atomic can be treated as a pointer read and then
> exposing the provenance.
> b) A store of the atomic can be treated as a pointer write with a
> value created with the exposed provenance.
>
> And our implementation, thanks to having no arbitrary type coercion,
> already guarantees that for each a) there is a from_repr() after and
> for each b) there is an into_repr() before. And from_repr() acts as
> a with_exposed_provenance() and into_repr() acts as an
> expose_provenance(). Hence the provenance is preserved.
I'm not sure this point is correct, but I'm an atomics noob, so maybe
Gary should take a look at this :)
> Note this is a global property and it has to be proven at the `Atomic<T>`
> level.
Thanks for the awesome writeup! Do you want to put this in some comment
or at least the commit log?
---
Cheers,
Benno
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 04/10] rust: sync: atomic: Add generic atomics
2025-07-04 20:45 ` Benno Lossin
@ 2025-07-04 21:17 ` Boqun Feng
2025-07-04 22:38 ` Benno Lossin
0 siblings, 1 reply; 82+ messages in thread
From: Boqun Feng @ 2025-07-04 21:17 UTC (permalink / raw)
To: Benno Lossin
Cc: Gary Guo, linux-kernel, rust-for-linux, lkmm, linux-arch,
Miguel Ojeda, Alex Gaynor, Björn Roy Baron, Andreas Hindborg,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
On Fri, Jul 04, 2025 at 10:45:48PM +0200, Benno Lossin wrote:
> On Fri Jul 4, 2025 at 10:25 PM CEST, Boqun Feng wrote:
> > There have been a few off-list discussions, and I've been doing some
> > experiments myself. Here are a few points/concepts that will help future
> > discussion or documentation, so I'm putting them down here:
> >
> > * Round-trip transmutability (thanks, Benno, for the name!).
> >
> > We realize this should be a safety requirement of an `AllowAtomic` type
> > (i.e. a type that can be put in an `Atomic<T>`). What it means is:
> >
> > - If `T: AllowAtomic`, transmute() from `T` to `T::Repr` is always
> > safe and
>
> s/safe/sound/
>
> > - if a value of `T::Repr` is a result of transmute() from `T` to
> > `T::Repr`, then `transmute()` for that value to `T` is also safe.
>
> s/safe/sound/
>
Makes sense.
> :)
>
> >
> > This essentially means a valid bit pattern of `T: AllowAtomic` has to
> > be a valid bit pattern of `T::Repr`.
> >
> > This is needed because the atomic framework operates on `T::Repr` to
> > implement atomic operations on `T`.
> >
> > Note that this is more relaxed than bi-directional transmutability (i.e.
> > transmute() between `T` and `T::Repr`) because we want to support
> > atomic types over unit-only enums:
> >
> > #[repr(i32)]
> > pub enum State {
> > Init = 0,
> > Working = 1,
> > Done = 2,
> > }
> >
> > This should be really helpful to support atomics as states, for
> > example:
> >
> > https://lore.kernel.org/rust-for-linux/20250702-module-params-v3-v14-1-5b1cc32311af@kernel.org/
> >
> > * transmute()-equivalent from_repr() and into_repr().
>
> Hmm I don't think this name fits the description below, how about
> "bit-equivalency of from_repr() and into_repr()"? We don't need to
> transmute, we only want to ensure that `{from,into}_repr` are just
> transmutes.
>
Good point!
Btw, do you offer a naming service? I will pay! ;-)
> > (This is not a safety requirement)
> >
> > from_repr() and into_repr(), if they exist, should behave like transmute()
> > on the bit pattern of the results; in other words, bit patterns of `T`
> > or `T::Repr` should stay the same before and after these operations.
> >
> > Of course, if we remove them and replace them with transmute(), the
> > result is the same.
> >
> > This reflects the fact that customized atomic types should store
> > unmodified bit patterns into atomic variables, and this keeps atomic
> > operations from having weird behavior [1] when combined with new(),
> > from_ptr() and get_mut().
>
> I remember that this was required to support types like `(u8, u16)`? If
My bad, I forgot to put the link to [1]...
[1]: https://lore.kernel.org/rust-for-linux/20250621123212.66fb016b.gary@garyguo.net/
Basically, without requiring from_repr() and into_repr() to act as a
transmute(), you can have weird types in Atomic<T>.
`(u8, u16)` (in case it's not clear to other audiences, it's a tuple with a
`u8` and a `u16` in it, so there is an 8-bit hole) is not going to be
supported until we have something like an `Atomic<MaybeUninit<i32>>`.
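A hedged illustration of why the hole matters (plain Rust, common
layout assumed):

    // `(u8, u16)` has size 4 and alignment 2: one byte is padding.
    assert_eq!(core::mem::size_of::<(u8, u16)>(), 4);
    // Reading all 4 bytes as an `i32` (the would-be `Repr`) would read
    // the uninitialized padding byte, hence the need for something
    // like `Atomic<MaybeUninit<i32>>` before such types can be
    // supported.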
> yes, then it would be good to include a paragraph like the one above for
> enums :)
>
> > * Provenance preservation.
> >
> > (This is not a safety requirement for Atomic itself)
> >
> > For an `Atomic<*mut T>`, it should preserve the provenance of the
> > pointer that has been stored into it, i.e. the load result from an
> > `Atomic<*mut T>` should have the same provenance.
> >
> > Technically, without this, `Atomic<*mut T>` still works without any
> > safety issue itself, but the users of it must maintain the provenance
> > themselves before a store or after a load.
> >
> > And it turns out it's not very hard to prove that the current
> > implementation achieves this:
> >
> > - For a non-atomic operation done on the atomic variable, they are
> > already using pointer operation, so the provenance has been
> > preserved.
> > - For an atomic operation, since they are done via inline asm code, in
> > Rust's abstract machine, they can be treated as pointer read and
> > write:
> >
> > a) A load of the atomic can be treated as a pointer read and then
> > exposing the provenance.
> > b) A store of the atomic can be treated as a pointer write with a
> > value created with the exposed provenance.
> >
> > And our implementation, thanks to having no arbitrary type coercion,
> > already guarantees that for each a) there is a from_repr() after and
> > for each b) there is an into_repr() before. And from_repr() acts as
> > a with_exposed_provenance() and into_repr() acts as an
> > expose_provenance(). Hence the provenance is preserved.
>
> I'm not sure this point is correct, but I'm an atomics noob, so maybe
> Gary should take a look at this :)
>
Basically, what I'm trying to prove is that we can have a provenance-
preserving Atomic<*mut T> implementation based on the C atomics. Either
that is true, or we should write our own atomic pointer implementation.
> > Note this is a global property and it has to be proven at the `Atomic<T>`
> > level.
>
> Thanks for he awesome writeup, do you want to put this in some comment
> or at least the commit log?
>
Yes, so the round-trip transmutability will be in the safety requirement
of `AllowAtomic`. And if we still keep `from_repr()` and `into_repr()`
(we can give them default implementations using transmute()), I will put
the "bit-equivalency of from_repr() and into_repr()" in the requirements
of `AllowAtomic` as well.
For the "Provenance preservation", I will put it before `impl
AllowAtomic for *mut T`. (Remember we recently discover that doc comment
works for impl block as well? [2])
[2]: https://lore.kernel.org/rust-for-linux/aD4NW2vDc9rKBDPy@tardis.local/
Regards,
Boqun
> ---
> Cheers,
> Benno
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 04/10] rust: sync: atomic: Add generic atomics
2025-06-26 13:54 ` Benno Lossin
@ 2025-07-04 21:22 ` Boqun Feng
2025-07-04 22:05 ` Benno Lossin
0 siblings, 1 reply; 82+ messages in thread
From: Boqun Feng @ 2025-07-04 21:22 UTC (permalink / raw)
To: Benno Lossin
Cc: Gary Guo, linux-kernel, rust-for-linux, lkmm, linux-arch,
Miguel Ojeda, Alex Gaynor, Björn Roy Baron, Andreas Hindborg,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
On Thu, Jun 26, 2025 at 03:54:24PM +0200, Benno Lossin wrote:
> On Tue Jun 24, 2025 at 6:35 PM CEST, Boqun Feng wrote:
> > On Tue, Jun 24, 2025 at 01:27:38AM +0200, Benno Lossin wrote:
> >> On Mon Jun 23, 2025 at 9:09 PM CEST, Boqun Feng wrote:
> >> > On Mon, Jun 23, 2025 at 07:30:19PM +0100, Gary Guo wrote:
>> >> cannot just transmute between pointers and usize (which is its
> >> >> Repr):
> >> >> * Transmuting from pointer to usize discards provenance
> >> >> * Transmuting from usize to pointer gives invalid provenance
> >> >>
> >> >> We want neither behaviour, so we must store `usize` directly and
> >> >> always call into repr functions.
> >> >>
> >> >
> >> > If we store `usize`, how can we support the `get_mut()` then? E.g.
> >> >
> >> > static V: i32 = 32;
> >> >
> >> > let mut x = Atomic::new(&V as *const i32 as *mut i32);
> >> > // ^ assume we expose_provenance() in new().
> >> >
> >> > let ptr: &mut *mut i32 = x.get_mut(); // which is `&mut self.0.get()`.
> >> >
> >> > let ptr_val = *ptr; // Does `ptr_val` have the proper provenance?
> >>
> >> If `get_mut` transmutes the integer into a pointer, then it will have
> >> the wrong provenance (it will just have plain invalid provenance).
> >>
> >
> > The key topic Gary and I have been discussing is whether we should
> > define Atomic<T> as:
> >
> > (my current implementation)
> >
> > pub struct Atomic<T: AllowAtomic>(Opaque<T>);
> >
> > or
> >
> > (Gary's suggestion)
> >
> > pub struct Atomic<T: AllowAtomic>(Opaque<T::Repr>);
> >
> > `T::Repr` is guaranteed to have the same size and alignment as `T`, and
> > per our discussion, it makes sense to further require that `transmute<T,
> > T::Repr>()` should also be safe (as the safety requirement of
> > `AllowAtomic`), or we can say `T`'s bit validity can be preserved by
> > `T::Repr`: a valid bit combination of `T` can be transmuted to `T::Repr`,
> > and if transmuted back, it's the same bit combination.
> >
> > Now as I pointed out, if we use `Opaque<T::Repr>`, then `.get_mut()`
> > would be unsound for `Atomic<*mut T>`. And Gary's concern is that in
> > the current implementation, we directly cast a `*mut T` (from
> > `Opaque::get()`) into a `*mut T::Repr`, and pass it directly into C/asm
> > atomic primitives. However, I think with the additional safety
> > requirement above, this shouldn't be a problem: because the C/asm atomic
> > primitives would just pass the address to an asm block, and that'll be
> > out of the Rust abstract machine; as long as the C/asm atomic
> > primitives are implemented correctly, the bit representation of `T`
> > remains valid after the asm blocks.
> >
> > So I think the current implementation still works and is better.
>
> I don't think there is a big difference between `Opaque<T>` and
> `Opaque<T::Repr>` if we have the transmute equivalence between the two.
> From a safety perspective, you don't gain or lose anything by using the
> first over the second one. They both require the invariant that they are
> valid (as `Opaque` removes that... we should really be using
> `UnsafeCell` here instead... why aren't we doing that?).
>
I need the `UnsafePinned`-like behavior of `Atomic<*mut T>` to support
Rcu<T>, and I will replace it with `UnsafePinned` once that's
available.
Maybe that also means `UnsafePinned<T>` makes more sense? Because if `T`
is a pointer, it's easy to prove the provenance is there. (Note that a
`&Atomic<*mut T>` may come from a `*mut *mut T`, which may be a field in
a C struct.)
Regards,
Boqun
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 04/10] rust: sync: atomic: Add generic atomics
2025-07-04 21:22 ` Boqun Feng
@ 2025-07-04 22:05 ` Benno Lossin
2025-07-04 22:30 ` Boqun Feng
0 siblings, 1 reply; 82+ messages in thread
From: Benno Lossin @ 2025-07-04 22:05 UTC (permalink / raw)
To: Boqun Feng
Cc: Gary Guo, linux-kernel, rust-for-linux, lkmm, linux-arch,
Miguel Ojeda, Alex Gaynor, Björn Roy Baron, Andreas Hindborg,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
On Fri Jul 4, 2025 at 11:22 PM CEST, Boqun Feng wrote:
> On Thu, Jun 26, 2025 at 03:54:24PM +0200, Benno Lossin wrote:
>> On Tue Jun 24, 2025 at 6:35 PM CEST, Boqun Feng wrote:
>> > On Tue, Jun 24, 2025 at 01:27:38AM +0200, Benno Lossin wrote:
>> >> On Mon Jun 23, 2025 at 9:09 PM CEST, Boqun Feng wrote:
>> >> > On Mon, Jun 23, 2025 at 07:30:19PM +0100, Gary Guo wrote:
>> >> >> cannot just transmute between pointers and usize (which is its
>> >> >> Repr):
>> >> >> * Transmuting from pointer to usize discards provenance
>> >> >> * Transmuting from usize to pointer gives invalid provenance
>> >> >>
>> >> >> We want neither behaviour, so we must store `usize` directly and
>> >> >> always call into repr functions.
>> >> >>
>> >> >
>> >> > If we store `usize`, how can we support the `get_mut()` then? E.g.
>> >> >
>> >> > static V: i32 = 32;
>> >> >
>> >> > let mut x = Atomic::new(&V as *const i32 as *mut i32);
>> >> > // ^ assume we expose_provenance() in new().
>> >> >
>> >> > let ptr: &mut *mut i32 = x.get_mut(); // which is `&mut self.0.get()`.
>> >> >
>> >> > let ptr_val = *ptr; // Does `ptr_val` have the proper provenance?
>> >>
>> >> If `get_mut` transmutes the integer into a pointer, then it will have
>> >> the wrong provenance (it will just have plain invalid provenance).
>> >>
>> >
>> > The key topic Gary and I have been discussing is whether we should
>> > define Atomic<T> as:
>> >
>> > (my current implementation)
>> >
>> > pub struct Atomic<T: AllowAtomic>(Opaque<T>);
>> >
>> > or
>> >
>> > (Gary's suggestion)
>> >
>> > pub struct Atomic<T: AllowAtomic>(Opaque<T::Repr>);
>> >
>> > `T::Repr` is guaranteed to have the same size and alignment as `T`, and
>> > per our discussion, it makes sense to further require that `transmute<T,
>> > T::Repr>()` should also be safe (as the safety requirement of
>> > `AllowAtomic`), or we can say `T`'s bit validity can be preserved by
>> > `T::Repr`: a valid bit combination of `T` can be transmuted to `T::Repr`,
>> > and if transmuted back, it's the same bit combination.
>> >
>> > Now as I pointed out, if we use `Opaque<T::Repr>`, then `.get_mut()`
>> > would be unsound for `Atomic<*mut T>`. And Gary's concern is that in
>> > the current implementation, we directly cast a `*mut T` (from
>> > `Opaque::get()`) into a `*mut T::Repr`, and pass it directly into C/asm
>> > atomic primitives. However, I think with the additional safety
>> > requirement above, this shouldn't be a problem: because the C/asm atomic
>> > primitives would just pass the address to an asm block, and that'll be
>> > out of the Rust abstract machine; as long as the C/asm atomic
>> > primitives are implemented correctly, the bit representation of `T`
>> > remains valid after the asm blocks.
>> >
>> > So I think the current implementation still works and is better.
>>
>> I don't think there is a big difference between `Opaque<T>` and
>> `Opaque<T::Repr>` if we have the transmute equivalence between the two.
>> From a safety perspective, you don't gain or lose anything by using the
>> first over the second one. They both require the invariant that they are
>> valid (as `Opaque` removes that... we should really be using
>> `UnsafeCell` here instead... why aren't we doing that?).
>>
>
> I need the `UnsafePinned`-like behavior of `Atomic<*mut T>` to support
> Rcu<T>, and I will replace it with `UnsafePinned` once that's
> available.
Can you expand on this? What do you mean by "`UnsafePinned`-like
behavior"? And what does `Rcu<T>` have to do with atomics?
> Maybe that also means `UnsafePinned<T>` makes more sense? Because if `T`
> is a pointer, it's easy to prove the provenance is there. (Note that a
> `&Atomic<*mut T>` may come from a `*mut *mut T`, which may be a field in
> a C struct.)
Also don't understand this.
---
Cheers,
Benno
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 04/10] rust: sync: atomic: Add generic atomics
2025-07-04 22:05 ` Benno Lossin
@ 2025-07-04 22:30 ` Boqun Feng
2025-07-04 22:49 ` Benno Lossin
0 siblings, 1 reply; 82+ messages in thread
From: Boqun Feng @ 2025-07-04 22:30 UTC (permalink / raw)
To: Benno Lossin
Cc: Gary Guo, linux-kernel, rust-for-linux, lkmm, linux-arch,
Miguel Ojeda, Alex Gaynor, Björn Roy Baron, Andreas Hindborg,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
On Sat, Jul 05, 2025 at 12:05:48AM +0200, Benno Lossin wrote:
[..]
> >>
> >> I don't think there is a big difference between `Opaque<T>` and
> >> `Opaque<T::Repr>` if we have the transmute equivalence between the two.
> >> From a safety perspective, you don't gain or lose anything by using the
> >> first over the second one. They both require the invariant that they are
> >> valid (as `Opaque` removes that... we should really be using
> >> `UnsafeCell` here instead... why aren't we doing that?).
> >>
> >
> > I need the `UnsafePinned`-like behavior of `Atomic<*mut T>` to support
> > Rcu<T>, and I will replace it with `UnsafePinned` once that's
> > available.
>
> Can you expand on this? What do you mean by "`UnsafePinned`-like
> behavior"? And what does `Rcu<T>` have to do with atomics?
>
`Rcu<T>` is an RCU-protected (atomic) pointer, and its definition is
pub struct Rcu<T>(Atomic<*mut T>);
I need Pin<&mut Rcu<T>> and &Rcu<T> to be able to co-exist: an updater
will have access to Pin<&mut Rcu<T>>, and all the readers will have
access to &Rcu<T>; for that I need `Atomic<*mut T>` to be
`UnsafePinned`, because `Pin<&mut Rcu<T>>` cannot imply noalias.
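A hedged sketch of the aliasing situation (function names are
illustrative):

    fn updater(rcu: Pin<&mut Rcu<i32>>) { /* publishes new pointers */ }
    fn reader(rcu: &Rcu<i32>) { /* Acquire loads, dereferences */ }

    // Both can be live for the same `Rcu<T>` at the same time, so the
    // updater's `Pin<&mut Rcu<T>>` must not assert exclusive
    // (`noalias`) access to the pointee; an `UnsafePinned`-like field
    // opts out of that assumption.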
> > Maybe that also means `UnsafePinned<T>` makes more sense? Because if `T`
> > is a pointer, it's easy to prove the provenance is there. (Note that a
> > `&Atomic<*mut T>` may come from a `*mut *mut T`, which may be a field in
> > a C struct.)
>
> Also don't understand this.
>
One of the use cases of atomics is being able to communicate with the C
side; for example, if we have a struct foo:
struct foo {
struct bar *b;
};
and a writer can do this on the C side:
struct foo *f = ...;
struct bar *b = kcalloc(1, sizeof(*b), ...);
// init b;
smp_store_release(&f->b, b);
and a reader at Rust side can do:
    #[repr(transparent)]
    struct Bar(bindings::bar);

    struct Foo(Opaque<bindings::foo>);

    fn get_bar(foo: &Foo) {
        let foo_ptr = foo.0.get();

        let b: *mut *mut Bar = unsafe { &raw mut (*foo_ptr).b }.cast();
        // SAFETY: C side accesses this pointer with atomics.
        let b = unsafe { Atomic::<*mut Bar>::from_ptr(b) };

        // Acquire pairs with the Release from C side.
        let bar_ptr = b.load(Acquire);

        // accessing bar.
    }
This is the case we must support if we want to write any non-trivial
synchronization code that communicates with the C side.
And in this case, it's generally easier to reason about why we can
convert a *mut *mut Bar to &UnsafePinned<*mut Bar>.
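For completeness, the write side can also live in Rust; a sketch using
the same Foo/Bar wrappers as above (set_bar() is a hypothetical name):

    fn set_bar(foo: &Foo, bar: *mut Bar) {
        let foo_ptr = foo.0.get();

        let b: *mut *mut Bar = unsafe { &raw mut (*foo_ptr).b }.cast();
        // SAFETY: C side accesses this pointer with atomics.
        let b = unsafe { Atomic::<*mut Bar>::from_ptr(b) };

        // Release pairs with the Acquire on the reader side.
        b.store(bar, Release);
    }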
Regards,
Boqun
> ---
> Cheers,
> Benno
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 04/10] rust: sync: atomic: Add generic atomics
2025-07-04 21:17 ` Boqun Feng
@ 2025-07-04 22:38 ` Benno Lossin
2025-07-04 23:21 ` Boqun Feng
0 siblings, 1 reply; 82+ messages in thread
From: Benno Lossin @ 2025-07-04 22:38 UTC (permalink / raw)
To: Boqun Feng
Cc: Gary Guo, linux-kernel, rust-for-linux, lkmm, linux-arch,
Miguel Ojeda, Alex Gaynor, Björn Roy Baron, Andreas Hindborg,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
On Fri Jul 4, 2025 at 11:17 PM CEST, Boqun Feng wrote:
> On Fri, Jul 04, 2025 at 10:45:48PM +0200, Benno Lossin wrote:
>> On Fri Jul 4, 2025 at 10:25 PM CEST, Boqun Feng wrote:
>> > * transmute()-equivalent from_repr() and into_repr().
>>
>> Hmm I don't think this name fits the description below, how about
>> "bit-equivalency of from_repr() and into_repr()"? We don't need to
>> transmute, we only want to ensure that `{from,into}_repr` are just
>> transmutes.
>>
>
> Good point!
>
> Btw, do you offer a naming service? I will pay! ;-)
:) :)
>> > (This is not a safety requirement)
>> >
>> > from_repr() and into_repr(), if they exist, should behave like
>> > transmute() on the bit pattern of the results; in other words, bit
>> > patterns of `T` or `T::Repr` should stay the same before and after
>> > these operations.
>> >
>> > Of course if we remove them and replace them with transmute(), same
>> > result.
>> >
>> > This reflects the fact that customized atomic types should store
>> > unmodified bit patterns into atomic variables, and this keeps atomic
>> > operations from having weird behavior [1] when combined with new(),
>> > from_ptr() and get_mut().
>>
>> I remember that this was required to support types like `(u8, u16)`? If
>
> My bad, I forgot to put the link to [1]...
>
> [1]: https://lore.kernel.org/rust-for-linux/20250621123212.66fb016b.gary@garyguo.net/
>
> Basically, without requiring from_repr() and into_repr() to act as a
> transmute(), you can have weird types in Atomic<T>.
Ah right, I forgot some context... Is this really a problem? I mean it's
weird sure, but if someone needs this, then it's fine?
> `(u8, u16)` (in case it's not clear to the audience, it's a tuple with a
> `u8` and a `u16` in it, so there is an 8-bit hole) is not going to be
> supported until we have something like an `Atomic<MaybeUninit<i32>>`.
Ahh right we also had this issue, could you also include that in your
writeup? :)
>> yes, then it would be good to include a paragraph like the one above for
>> enums :)
>>
>> > * Provenance preservation.
>> >
>> > (This is not a safety requirement for Atomic itself)
>> >
>> > For an `Atomic<*mut T>`, it should preserve the provenance of the
>> > pointer that has been stored into it, i.e. the load result from an
>> > `Atomic<*mut T>` should have the same provenance.
>> >
>> > Technically, without this, `Atomic<*mut T>` still works without any
>> > safety issue itself, but the user of it must maintain the provenance
>> > themselves before store or after load.
>> >
>> > And it turns out it's not very hard to prove that the current
>> > implementation achieves this:
>> >
>> > - A non-atomic operation done on the atomic variable is already a
>> > pointer operation, so the provenance is preserved.
>> > - An atomic operation, since it is done via inline asm code, can be
>> > treated in Rust's abstract machine as a pointer read or write:
>> >
>> > a) A load of the atomic can be treated as a pointer read and then
>> > exposing the provenance.
>> > b) A store of the atomic can be treated as a pointer write with a
>> > value created with the exposed provenance.
>> >
>> > And our implementation, thanks to having no arbitrary type coercion,
>> > already guarantees that for each a) there is a from_repr() after and
>> > for each b) there is an into_repr() before. And from_repr() acts as
>> > a with_exposed_provenance() and into_repr() acts as an
>> > expose_provenance(). Hence the provenance is preserved.
>>
>> I'm not sure this point is correct, but I'm an atomics noob, so maybe
>> Gary should take a look at this :)
>>
>
> Basically, what I'm trying to prove is that we can have a provenance-
> preserving Atomic<*mut T> implementation based on the C atomics. Either
> that is true, or we should write our own atomic pointer implementation.
That much I remembered :) But since you were going into the specifics
above, I think we should try to be correct. But maybe natural language
is the wrong medium for that, just write the rust code and we'll see...
>> > Note this is a global property and it has to be proven at the
>> > `Atomic<T>` level.
>>
>> Thanks for the awesome writeup, do you want to put this in some comment
>> or at least the commit log?
>>
>
> Yes, so the round-trip transmutability will be in the safety requirement
> of `AllowAtomic`. And if we still keep `from_repr()` and `into_repr()`
> (we can give them default implementations using transmute()), I will put
> the "bit-equivalency of from_repr() and into_repr()" in the requirement
> of `AllowAtomic` as well.
>
> For the "Provenance preservation", I will put it before `impl
> AllowAtomic for *mut T`. (Remember we recently discovered that doc
> comments work for impl blocks as well? [2])
Yeah that sounds good!
---
Cheers,
Benno
> [2]: https://lore.kernel.org/rust-for-linux/aD4NW2vDc9rKBDPy@tardis.local/
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 04/10] rust: sync: atomic: Add generic atomics
2025-07-04 22:30 ` Boqun Feng
@ 2025-07-04 22:49 ` Benno Lossin
2025-07-04 23:21 ` Boqun Feng
0 siblings, 1 reply; 82+ messages in thread
From: Benno Lossin @ 2025-07-04 22:49 UTC (permalink / raw)
To: Boqun Feng
Cc: Gary Guo, linux-kernel, rust-for-linux, lkmm, linux-arch,
Miguel Ojeda, Alex Gaynor, Björn Roy Baron, Andreas Hindborg,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
On Sat Jul 5, 2025 at 12:30 AM CEST, Boqun Feng wrote:
> On Sat, Jul 05, 2025 at 12:05:48AM +0200, Benno Lossin wrote:
> [..]
>> >>
>> >> I don't think there is a big difference between `Opaque<T>` and
>> >> `Opaque<T::Repr>` if we have the transmute equivalence between the two.
>> >> From a safety perspective, you don't gain or lose anything by using the
>> >> first over the second one. They both require the invariant that they are
>> >> valid (as `Opaque` removes that... we should really be using
>> >> `UnsafeCell` here instead... why aren't we doing that?).
>> >>
>> >
>> > I need the `UnsafePinned`-like behavior of `Atomic<*mut T>` to support
>> > Rcu<T>, and I will replace it with `UnsafePinned` once that's
>> > available.
>>
>> Can you expand on this? What do you mean by "`UnsafePinned`-like
>> behavior"? And what does `Rcu<T>` have to do with atomics?
>>
>
> `Rcu<T>` is an RCU-protected (atomic) pointer; its definition is
>
> pub struct Rcu<T>(Atomic<*mut T>);
>
> I need Pin<&mut Rcu<T>> and &Rcu<T> to be able to co-exist: an updater
> will have access to Pin<&mut Rcu<T>>, and all the readers will have
> access to &Rcu<T>. For that I need `Atomic<*mut T>` to be
> `UnsafePinned`, because `Pin<&mut Rcu<T>>` cannot imply noalias.
Then `Rcu` should be
pub struct Rcu<T>(UnsafePinned<Atomic<*mut T>>);
And `Atomic` shouldn't wrap `UnsafePinned<T>`, because that prevents
`&mut Atomic<i32>` from being tagged with `noalias`, and that should be fine.
You should only pay for what you need :)
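A minimal sketch of that layering (assuming `UnsafePinned` lands with an
`UnsafeCell`-like API):

    // `Atomic<T>` stays `UnsafeCell`-based, so `&mut Atomic<i32>`
    // can still be marked noalias by the compiler.
    pub struct Atomic<T>(UnsafeCell<T>);

    // Only `Rcu` pays for aliasing under `Pin<&mut _>`.
    pub struct Rcu<T>(UnsafePinned<Atomic<*mut T>>);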
>> > Maybe that also means `UnsafePinned<T>` makes more sense? Because if `T`
>> > is a pointer, it's easy to prove the provenance is there. (Note a
>> > `&Atomic<*mut T>` may come from a `*mut *mut T`, which may be a field
>> > in a C struct.)
>>
>> Also don't understand this.
>>
>
> One of the usages of atomics is being able to communicate with the C side,
> for example, if we have a struct foo:
>
> struct foo {
> struct bar *b;
> }
>
> and a writer can do this on the C side:
>
> struct foo *f = ...;
> struct bar *b = kzalloc(sizeof(*b), ...);
>
> // init b;
>
> smp_store_release(&f->b, b);
>
> and a reader at Rust side can do:
>
> #[repr(transparent)]
> struct Bar(bindings::bar);
> struct Foo(Opaque<bindings::foo>);
>
> fn get_bar(foo: &Foo) {
> let foo_ptr = foo.0.get();
>
> let b: *mut *mut Bar = unsafe { &raw mut (*foo_ptr).b }.cast();
> // SAFETY: C side accessing this pointer with atomics.
> let b = unsafe { Atomic::<*mut Bar>::from_ptr(b) };
>
> // Acquire pairs with the Release from C side;
> let bar_ptr = b.load(Acquire);
>
> // accessing bar.
> }
This is a nice example, might be a good idea to put this on
`Atomic::from_ptr`.
> This is the case we must support if we want to write any non-trivial
> synchronization code that communicates with the C side.
>
> And in this case, it's generally easier to reason about why we can
> convert a *mut *mut Bar to &UnsafePinned<*mut Bar>.
What does that have to do with `UnsafePinned`? `UnsafeCell` should
suffice.
Also where does the provenance interact with `UnsafePinned`?
---
Cheers,
Benno
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 04/10] rust: sync: atomic: Add generic atomics
2025-07-04 22:49 ` Benno Lossin
@ 2025-07-04 23:21 ` Boqun Feng
0 siblings, 0 replies; 82+ messages in thread
From: Boqun Feng @ 2025-07-04 23:21 UTC (permalink / raw)
To: Benno Lossin
Cc: Gary Guo, linux-kernel, rust-for-linux, lkmm, linux-arch,
Miguel Ojeda, Alex Gaynor, Björn Roy Baron, Andreas Hindborg,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
On Sat, Jul 05, 2025 at 12:49:09AM +0200, Benno Lossin wrote:
> On Sat Jul 5, 2025 at 12:30 AM CEST, Boqun Feng wrote:
> > On Sat, Jul 05, 2025 at 12:05:48AM +0200, Benno Lossin wrote:
> > [..]
> >> >>
> >> >> I don't think there is a big difference between `Opaque<T>` and
> >> >> `Opaque<T::Repr>` if we have the transmute equivalence between the two.
> >> >> From a safety perspective, you don't gain or lose anything by using the
> >> >> first over the second one. They both require the invariant that they are
> >> >> valid (as `Opaque` removes that... we should really be using
> >> >> `UnsafeCell` here instead... why aren't we doing that?).
> >> >>
> >> >
> >> > I need the `UnsafePinned`-like behavior of `Atomic<*mut T>` to support
> >> > Rcu<T>, and I will replace it with `UnsafePinned` once that's
> >> > available.
> >>
> >> Can you expand on this? What do you mean by "`UnsafePinned`-like
> >> behavior"? And what does `Rcu<T>` have to do with atomics?
> >>
> >
> > `Rcu<T>` is an RCU-protected (atomic) pointer; its definition is
> >
> > pub struct Rcu<T>(Atomic<*mut T>);
> >
> > I need Pin<&mut Rcu<T>> and &Rcu<T> to be able to co-exist: an updater
> > will have access to Pin<&mut Rcu<T>>, and all the readers will have
> > access to &Rcu<T>. For that I need `Atomic<*mut T>` to be
> > `UnsafePinned`, because `Pin<&mut Rcu<T>>` cannot imply noalias.
>
> Then `Rcu` should be
>
> pub struct Rcu<T>(UnsafePinned<Atomic<*mut T>>);
>
> And `Atomic` shouldn't wrap `UnsafePinned<T>`, because that prevents
> `&mut Atomic<i32>` from being tagged with `noalias`, and that should be fine.
> You should only pay for what you need :)
>
Fair enough. Changing it to UnsafeCell then.
> >> > Maybe that also means `UnsafePinned<T>` makes more sense? Because if `T`
> >> > is a pointer, it's easy to prove the provenance is there. (Note a
> >> > `&Atomic<*mut T>` may come from a `*mut *mut T`, which may be a field
> >> > in a C struct.)
> >>
> >> Also don't understand this.
> >>
> >
> > One of the usages of atomics is being able to communicate with the C side,
> > for example, if we have a struct foo:
> >
> > struct foo {
> > struct bar *b;
> > }
> >
> > and a writer can do this on the C side:
> >
> > struct foo *f = ...;
> > struct bar *b = kzalloc(sizeof(*b), ...);
> >
> > // init b;
> >
> > smp_store_release(&f->b, b);
> >
> > and a reader at Rust side can do:
> >
> > #[repr(transparent)]
> > struct Bar(bindings::bar);
> > struct Foo(Opaque<bindings::foo>);
> >
> > fn get_bar(foo: &Foo) {
> > let foo_ptr = foo.0.get();
> >
> > let b: *mut *mut Bar = unsafe { &raw mut (*foo_ptr).b }.cast();
> > // SAFETY: C side accessing this pointer with atomics.
> > let b = unsafe { Atomic::<*mut Bar>::from_ptr(b) };
> >
> > // Acquire pairs with the Release from C side;
> > let bar_ptr = b.load(Acquire);
> >
> > // accessing bar.
> > }
>
> This is a nice example, might be a good idea to put this on
> `Atomic::from_ptr`.
>
I have something similar in the doc comment of `Atomic::from_ptr()`,
just not with an `Atomic<*mut T>`.
> > This is the case we must support if we want to write any non-trivial
> > synchronization code that communicates with the C side.
> >
> > And in this case, it's generally easier to reason about why we can
> > convert a *mut *mut Bar to &UnsafePinned<*mut Bar>.
>
> What does that have to do with `UnsafePinned`? `UnsafeCell` should
> suffice.
>
I was talking about things like UnsafeCell<*mut T> vs UnsafeCell<isize>,
not comparing UnsafePinned and UnsafeCell.
Regards,
Boqun
> Also where does the provenance interact with `UnsafePinned`?
>
> ---
> Cheers,
> Benno
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 04/10] rust: sync: atomic: Add generic atomics
2025-07-04 22:38 ` Benno Lossin
@ 2025-07-04 23:21 ` Boqun Feng
2025-07-05 8:04 ` Benno Lossin
0 siblings, 1 reply; 82+ messages in thread
From: Boqun Feng @ 2025-07-04 23:21 UTC (permalink / raw)
To: Benno Lossin
Cc: Gary Guo, linux-kernel, rust-for-linux, lkmm, linux-arch,
Miguel Ojeda, Alex Gaynor, Björn Roy Baron, Andreas Hindborg,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
On Sat, Jul 05, 2025 at 12:38:05AM +0200, Benno Lossin wrote:
[..]
> >> > (This is not a safety requirement)
> >> >
> >> > from_repr() and into_repr(), if they exist, should behave like
> >> > transmute() on the bit pattern of the results; in other words, bit
> >> > patterns of `T` or `T::Repr` should stay the same before and after
> >> > these operations.
> >> >
> >> > Of course if we remove them and replace them with transmute(), same
> >> > result.
> >> >
> >> > This reflects the fact that customized atomic types should store
> >> > unmodified bit patterns into atomic variables, and this keeps atomic
> >> > operations from having weird behavior [1] when combined with new(),
> >> > from_ptr() and get_mut().
> >>
> >> I remember that this was required to support types like `(u8, u16)`? If
> >
> > My bad, I forgot to put the link to [1]...
> >
> > [1]: https://lore.kernel.org/rust-for-linux/20250621123212.66fb016b.gary@garyguo.net/
> >
> > Basically, without requiring from_repr() and into_repr() to act as a
> > transmute(), you can have weird types in Atomic<T>.
>
> Ah right, I forgot some context... Is this really a problem? I mean it's
It's not a problem for safety, so it's not a safety requirement. But I
really don't see a reason why we would want to support this. Not
supporting it makes reasoning about the atomic implementation easier.
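For illustration, the kind of weird (but memory-safe) impl that the
bit-equivalency requirement rules out; `Flipped` is made up and the
exact trait signature here is schematic:

    struct Flipped(i32);

    // Hypothetical impl whose conversions flip all bits: a value
    // stored via new()/store() would read back inverted through
    // from_ptr()/get_mut(), which only see the raw bit pattern.
    impl AllowAtomic for Flipped {
        type Repr = i32;

        fn into_repr(self) -> i32 {
            !self.0
        }

        fn from_repr(repr: i32) -> Self {
            Flipped(!repr)
        }
    }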
> weird sure, but if someone needs this, then it's fine?
>
They can always play the !value game outside the atomic, i.e. !value
before store and !value after load, so I don't think it's a reasonable
request.
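I.e. something like this, done by the user around a plain Atomic<i32>
(the helper names are made up):

    fn store_flipped(a: &Atomic<i32>, v: i32) {
        a.store(!v, Relaxed); // !value before store
    }

    fn load_flipped(a: &Atomic<i32>) -> i32 {
        !a.load(Relaxed) // !value after load
    }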
> > `(u8, u16)` (in case it's not clear to the audience, it's a tuple with a
> > `u8` and a `u16` in it, so there is an 8-bit hole) is not going to be
> > supported until we have something like an `Atomic<MaybeUninit<i32>>`.
>
> Ahh right we also had this issue, could you also include that in your
> writeup? :)
>
Sure, I will put it in a limitations section, maybe.
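(A sketch of why the hole is the problem, assuming the usual layout of
this tuple:)

    // (u8, u16) usually occupies 4 bytes with alignment 2, so one
    // byte is padding -- the "8-bit hole" above.
    assert_eq!(core::mem::size_of::<(u8, u16)>(), 4);

    // A transmute-style into_repr() to i32 would have to build an
    // integer out of partially-uninitialized memory, which is UB;
    // hence the need for something like Atomic<MaybeUninit<i32>>.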
> >> yes, then it would be good to include a paragraph like the one above for
> >> enums :)
> >>
> >> > * Provenance preservation.
> >> >
> >> > (This is not a safety requirement for Atomic itself)
> >> >
> >> > For an `Atomic<*mut T>`, it should preserve the provenance of the
> >> > pointer that has been stored into it, i.e. the load result from an
> >> > `Atomic<*mut T>` should have the same provenance.
> >> >
> >> > Technically, without this, `Atomic<*mut T>` still works without any
> >> > safety issue itself, but the user of it must maintain the provenance
> >> > themselves before store or after load.
> >> >
> >> > And it turns out it's not very hard to prove that the current
> >> > implementation achieves this:
> >> >
> >> > - A non-atomic operation done on the atomic variable is already a
> >> > pointer operation, so the provenance is preserved.
> >> > - An atomic operation, since it is done via inline asm code, can be
> >> > treated in Rust's abstract machine as a pointer read or write:
> >> >
> >> > a) A load of the atomic can be treated as a pointer read and then
> >> > exposing the provenance.
> >> > b) A store of the atomic can be treated as a pointer write with a
> >> > value created with the exposed provenance.
> >> >
> >> > And our implementation, thanks to having no arbitrary type coercion,
> >> > already guarantees that for each a) there is a from_repr() after and
> >> > for each b) there is an into_repr() before. And from_repr() acts as
> >> > a with_exposed_provenance() and into_repr() acts as an
> >> > expose_provenance(). Hence the provenance is preserved.
> >>
> >> I'm not sure this point is correct, but I'm an atomics noob, so maybe
> >> Gary should take a look at this :)
> >>
> >
> > Basically, what I'm trying to prove is that we can have a provenance-
> > preserving Atomic<*mut T> implementation based on the C atomics. Either
> > that is true, or we should write our own atomic pointer implementation.
>
> That much I remembered :) But since you were going into the specifics
> above, I think we should try to be correct. But maybe natural language
> is the wrong medium for that, just write the rust code and we'll see...
>
I don't think writing Rust code can help us here other than duplicating
my reasoning above; it would look like:
    impl *mut () {
        pub fn xchg(ptr: *mut *mut (), new: *mut ()) -> *mut () {
            // SAFETY: ..
            // `atomic_long_xchg()` is implemented as asm(), so it can
            // be treated as a normal pointer swap() hence preserve the
            // provenance.
            unsafe { atomic_long_xchg(ptr.cast::<atomic_long_t>(), new as ffi::c_long) }
        }

        pub fn cmpxchg(ptr: *mut *mut (), old: *mut (), new: *mut ()) -> *mut () {
            // SAFETY: ..
            // `atomic_long_xchg()` is implemented as asm(), so it can
            // be treated as a normal pointer compare_exchange() hence preserve the
            // provenance.
            unsafe { atomic_long_cmpxchg(ptr.cast::<atomic_long_t>(), old as ffi::c_long, new as ffi::c_long) }
        }

        <do it for a lot of functions>
    }
So I don't think that approach is worth doing. Again, provenance
preservation is a global property: either we have it as a whole or we
don't, and adding a precise comment to each function call won't change
the result. I don't see much difference between reasoning about a set of
functions vs. reasoning function by function with the same reasoning.
If we have a reason to believe that C atomics don't support this, we
just need to move to our own implementation. I know you (and probably
Gary) may feel the reasoning about provenance preservation is a bit
handwavy, but this is probably the best we can get, and it's technically
better than using Rust native atomics, because that's just UB and no one
would help you.
(I made a copy-pasta on purpose above, just to make another point about
why writing each function out is not worth it)
Regards,
Boqun
> >> > Note this is a global property and it has to be proven at the
> >> > `Atomic<T>` level.
> >>
> >> Thanks for the awesome writeup, do you want to put this in some comment
> >> or at least the commit log?
> >>
> >
> > Yes, so the round-trip transmutability will be in the safety requirement
> > of `AllowAtomic`. And if we still keep `from_repr()` and `into_repr()`
> > (we can give them default implementations using transmute()), I will put
> > the "bit-equivalency of from_repr() and into_repr()" in the requirement
> > of `AllowAtomic` as well.
> >
> > For the "Provenance preservation", I will put it before `impl
> > AllowAtomic for *mut T`. (Remember we recently discovered that doc
> > comments work for impl blocks as well? [2])
>
> Yeah that sounds good!
>
> ---
> Cheers,
> Benno
>
> > [2]: https://lore.kernel.org/rust-for-linux/aD4NW2vDc9rKBDPy@tardis.local/
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 04/10] rust: sync: atomic: Add generic atomics
2025-07-04 23:21 ` Boqun Feng
@ 2025-07-05 8:04 ` Benno Lossin
2025-07-05 15:38 ` Boqun Feng
0 siblings, 1 reply; 82+ messages in thread
From: Benno Lossin @ 2025-07-05 8:04 UTC (permalink / raw)
To: Boqun Feng
Cc: Gary Guo, linux-kernel, rust-for-linux, lkmm, linux-arch,
Miguel Ojeda, Alex Gaynor, Björn Roy Baron, Andreas Hindborg,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
On Sat Jul 5, 2025 at 1:21 AM CEST, Boqun Feng wrote:
> On Sat, Jul 05, 2025 at 12:38:05AM +0200, Benno Lossin wrote:
> [..]
>> >> > (This is not a safety requirement)
>> >> >
>> >> > from_repr() and into_repr(), if they exist, should behave like
>> >> > transmute() on the bit pattern of the results; in other words, bit
>> >> > patterns of `T` or `T::Repr` should stay the same before and after
>> >> > these operations.
>> >> >
>> >> > Of course if we remove them and replace them with transmute(), same
>> >> > result.
>> >> >
>> >> > This reflects the fact that customized atomic types should store
>> >> > unmodified bit patterns into atomic variables, and this keeps atomic
>> >> > operations from having weird behavior [1] when combined with new(),
>> >> > from_ptr() and get_mut().
>> >>
>> >> I remember that this was required to support types like `(u8, u16)`? If
>> >
>> > My bad, I forgot to put the link to [1]...
>> >
>> > [1]: https://lore.kernel.org/rust-for-linux/20250621123212.66fb016b.gary@garyguo.net/
>> >
>> > Basically, without requiring from_repr() and into_repr() to act as a
>> > transmute(), you can have weird types in Atomic<T>.
>>
>> Ah right, I forgot some context... Is this really a problem? I mean it's
>
> It's not a problem for safety, so it's not a safety requirement. But I
> really don't see a reason why we would want to support this. Not
> supporting it makes reasoning about the atomic implementation easier.
Yeah.
>> weird sure, but if someone needs this, then it's fine?
>>
>
> They can always play the !value game outside the atomic, i.e. !value
> before store and !value after load, so I don't think it's a reasonable
> request.
That's true, yeah let's forbid this :)
>> > `(u8, u16)` (in case it's not clear to the audience, it's a tuple with a
>> > `u8` and a `u16` in it, so there is an 8-bit hole) is not going to be
>> > supported until we have something like an `Atomic<MaybeUninit<i32>>`.
>>
>> Ahh right we also had this issue, could you also include that in your
>> writeup? :)
>>
>
> Sure, I will put it in a limitations section, maybe.
>
>> >> yes, then it would be good to include a paragraph like the one above for
>> >> enums :)
>> >>
>> >> > * Provenance preservation.
>> >> >
>> >> > (This is not a safety requirement for Atomic itself)
>> >> >
>> >> > For an `Atomic<*mut T>`, it should preserve the provenance of the
>> >> > pointer that has been stored into it, i.e. the load result from an
>> >> > `Atomic<*mut T>` should have the same provenance.
>> >> >
>> >> > Technically, without this, `Atomic<*mut T>` still works without any
>> >> > safety issue itself, but the user of it must maintain the provenance
>> >> > themselves before store or after load.
>> >> >
>> >> > And it turns out it's not very hard to prove that the current
>> >> > implementation achieves this:
>> >> >
>> >> > - A non-atomic operation done on the atomic variable is already a
>> >> > pointer operation, so the provenance is preserved.
>> >> > - An atomic operation, since it is done via inline asm code, can be
>> >> > treated in Rust's abstract machine as a pointer read or write:
>> >> >
>> >> > a) A load of the atomic can be treated as a pointer read and then
>> >> > exposing the provenance.
>> >> > b) A store of the atomic can be treated as a pointer write with a
>> >> > value created with the exposed provenance.
>> >> >
>> >> > And our implementation, thanks to having no arbitrary type coercion,
>> >> > already guarantees that for each a) there is a from_repr() after and
>> >> > for each b) there is an into_repr() before. And from_repr() acts as
>> >> > a with_exposed_provenance() and into_repr() acts as an
>> >> > expose_provenance(). Hence the provenance is preserved.
>> >>
>> >> I'm not sure this point is correct, but I'm an atomics noob, so maybe
>> >> Gary should take a look at this :)
>> >>
>> >
>> > Basically, what I'm trying to prove is that we can have a provenance-
>> > preserving Atomic<*mut T> implementation based on the C atomics. Either
>> > that is true, or we should write our own atomic pointer implementation.
>>
>> That much I remembered :) But since you were going into the specifics
>> above, I think we should try to be correct. But maybe natural language
>> is the wrong medium for that, just write the rust code and we'll see...
>>
>
> I don't think writing Rust code can help us here other than duplicating
> my reasoning above; it would look like:
>
> impl *mut () {
> pub fn xchg(ptr: *mut *mut (), new: *mut ()) -> *mut () {
> // SAFETY: ..
> // `atomic_long_xchg()` is implemented as asm(), so it can
> // be treated as a normal pointer swap() hence preserve the
> // provenance.
Oh I think Gary was talking specifically about Rust's `asm!`. I don't
know if C asm is going to play the same way... (inside LLVM they
probably are the same thing, but in the abstract machine?)
> unsafe { atomic_long_xchg(ptr.cast::<atomic_long_t>(), new as ffi::c_long) }
> }
>
> pub fn cmpxchg(ptr: *mut *mut (), old: *mut (), new: *mut ()) -> *mut () {
> // SAFETY: ..
> // `atomic_long_xchg()` is implemented as asm(), so it can
> // be treated as a normal pointer compare_exchange() hence preserve the
> // provenance.
> unsafe { atomic_long_cmpxchg(ptr.cast::<atomic_long_t>(), old as ffi::c_long, new as ffi::c_long) }
> }
>
> <do it for a lot of functions>
> }
>
> So I don't think that approach is worth doing. Again, provenance
> preservation is a global property: either we have it as a whole or we
> don't, and adding a precise comment to each function call won't change
> the result. I don't see much difference between reasoning about a set of
> functions vs. reasoning function by function with the same reasoning.
>
> If we have a reason to believe that C atomics don't support this, we
> just need to move to our own implementation. I know you (and probably
> Gary) may feel the reasoning about provenance preservation is a bit handwavy,
YES :)
> but this is probably the best we can get, and it's technically better
I think we can at least improve the safety docs situation.
> than using Rust native atomics, because that's just UB and no one would
> help you.
I'm not arguing for using those :)
> (I made a copy-pasta on purpose above, just to make another point about
> why writing each function out is not worth it)
Yeah that's true, but at the moment that safety comment is on the `impl`
block? I don't think that's the right place...
---
Cheers,
Benno
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 04/10] rust: sync: atomic: Add generic atomics
2025-07-05 8:04 ` Benno Lossin
@ 2025-07-05 15:38 ` Boqun Feng
2025-07-05 21:43 ` Benno Lossin
0 siblings, 1 reply; 82+ messages in thread
From: Boqun Feng @ 2025-07-05 15:38 UTC (permalink / raw)
To: Benno Lossin
Cc: Gary Guo, linux-kernel, rust-for-linux, lkmm, linux-arch,
Miguel Ojeda, Alex Gaynor, Björn Roy Baron, Andreas Hindborg,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
On Sat, Jul 05, 2025 at 10:04:04AM +0200, Benno Lossin wrote:
[...]
> >> >
> >> > Basically, what I'm trying to prove is that we can have a provenance-
> >> > preserving Atomic<*mut T> implementation based on the C atomics. Either
> >> > that is true, or we should write our own atomic pointer implementation.
> >>
> >> That much I remembered :) But since you were going into the specifics
> >> above, I think we should try to be correct. But maybe natural language
> >> is the wrong medium for that, just write the rust code and we'll see...
> >>
> >
> > I don't think writing Rust code can help us here other than duplicating
> > my reasoning above; it would look like:
> >
> > impl *mut () {
> > pub fn xchg(ptr: *mut *mut (), new: *mut ()) -> *mut () {
> > // SAFETY: ..
Note: provenance preservation is not about the safety of the
Atomic<*mut T> implementation; even if we don't preserve the provenance,
calling `Atomic<*mut T>` functions won't cause UB, it's just that any
pointer you get from `Atomic<*mut T>` is a pointer without provenance.
So what I meant in this example is that the safety comment is just the
part above, and the rest is not a safety comment.
Hope it's clear.
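And to make the correspondence concrete, a sketch of what the
{from,into}_repr() conversions are assumed to amount to for pointers,
written with the std strict-provenance APIs (this is the reasoning
model, not necessarily the final implementation):

    fn into_repr<T>(ptr: *mut T) -> isize {
        // Acts as expose_provenance(): the provenance of `ptr` is
        // exposed before the bits enter the C atomic (case b)).
        ptr.expose_provenance() as isize
    }

    fn from_repr<T>(repr: isize) -> *mut T {
        // Acts as with_exposed_provenance(): the loaded bits pick up
        // a provenance exposed by the matching store (case a)).
        core::ptr::with_exposed_provenance_mut(repr as usize)
    }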
> > // `atomic_long_xchg()` is implemented as asm(), so it can
> > // be treated as a normal pointer swap() hence preserve the
> > // provenance.
>
> Oh I think Gary was talking specifically about Rust's `asm!`. I don't
> know if C asm is going to play the same way... (inside LLVM they
> probably are the same thing, but in the abstract machine?)
>
You need to understand why the Rust abstract machine models `asm!()` in
that way: the Rust abstract machine cannot see through `asm!()`, so it
has to assume that an `asm!()` block can do anything that some
equivalent Rust code does. Furthermore, this "can do anything that some
equivalent Rust code does" is only one way to reason; the core part is
that Rust will be very conservative when using the `asm!()` result for
optimization.
It should apply to C asm() as well, because LLVM cannot see through
the asm block either. And in the same spirit, it might apply to any C
code as well, because it's outside the Rust abstract machine. But if you
don't agree with the reasoning, then we just cannot implement
Atomic<*mut T> with the existing C API.
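A sketch of that modelling on x86_64, using Rust `asm!` (illustrative
only, not the kernel implementation):

    // The compiler cannot see through the asm block, so it must
    // assume the block may do a plain pointer swap through `ptr`;
    // the result can then be used as if it came from such a swap,
    // provenance included.
    #[cfg(target_arch = "x86_64")]
    unsafe fn xchg_ptr(ptr: *mut *mut u8, mut new: *mut u8) -> *mut u8 {
        core::arch::asm!(
            "xchg {new}, [{ptr}]", // implicitly locked on x86
            ptr = in(reg) ptr,
            new = inout(reg) new,
        );
        new
    }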
> > unsafe { atomic_long_xchg(ptr.cast::<atomic_long_t>(), new as ffi::c_long) }
> > }
> >
> > pub fn cmpxchg(ptr: *mut *mut (), old: *mut (), new: *mut ()) -> *mut () {
> > // SAFETY: ..
> > // `atomic_long_xchg()` is implemented as asm(), so it can
> > // be treated as a normal pointer compare_exchange() hence preserve the
> > // provenance.
> > unsafe { atomic_long_cmpxchg(ptr.cast::<atomic_long_t>(), old as ffi::c_long, new as ffi::c_long) }
> > }
> >
> > <do it for a lot of functions>
> > }
> >
> > So I don't think that approach is worth doing. Again, provenance
> > preservation is a global property: either we have it as a whole or we
> > don't, and adding a precise comment to each function call won't change
> > the result. I don't see much difference between reasoning about a set of
> > functions vs. reasoning function by function with the same reasoning.
> >
> > If we have a reason to believe that C atomics don't support this, we
> > just need to move to our own implementation. I know you (and probably
> > Gary) may feel the reasoning about provenance preservation is a bit handwavy,
>
> YES :)
>
> > but this is probably the best we can get, and it's technically better
>
> I think we can at least improve the safety docs situation.
>
Once again, it's not about the safety of the Atomic<*mut T> implementation.
> > than using Rust native atomics, because that's just UB and no one would
> > help you.
>
> I'm not arguing for using those :)
>
> > (I made a copy-pasta on purpose above, just to make another point about
> > why writing each function out is not worth it)
>
> Yeah that's true, but at the moment that safety comment is on the `impl`
> block? I don't think that's the right place...
>
Feel free to send any patch that improves this in your opinion ;-)
Regards,
Boqun
> ---
> Cheers,
> Benno
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 04/10] rust: sync: atomic: Add generic atomics
2025-07-05 15:38 ` Boqun Feng
@ 2025-07-05 21:43 ` Benno Lossin
0 siblings, 0 replies; 82+ messages in thread
From: Benno Lossin @ 2025-07-05 21:43 UTC (permalink / raw)
To: Boqun Feng
Cc: Gary Guo, linux-kernel, rust-for-linux, lkmm, linux-arch,
Miguel Ojeda, Alex Gaynor, Björn Roy Baron, Andreas Hindborg,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
On Sat Jul 5, 2025 at 5:38 PM CEST, Boqun Feng wrote:
> On Sat, Jul 05, 2025 at 10:04:04AM +0200, Benno Lossin wrote:
> [...]
>> >> >
>> >> > Basically, what I'm trying to prove is that we can have a provenance-
>> >> > preserving Atomic<*mut T> implementation based on the C atomics. Either
>> >> > that is true, or we should write our own atomic pointer implementation.
>> >>
>> >> That much I remembered :) But since you were going into the specifics
>> >> above, I think we should try to be correct. But maybe natural language
>> >> is the wrong medium for that, just write the rust code and we'll see...
>> >>
>> >
>> > I don't think writing Rust code can help us here other than duplicating
>> > my reasoning above; it would look like:
>> >
>> > impl *mut () {
>> > pub fn xchg(ptr: *mut *mut (), new: *mut ()) -> *mut () {
>> > // SAFETY: ..
>
> Note: provenance preservation is not about the safety of the
> Atomic<*mut T> implementation; even if we don't preserve the provenance,
> calling `Atomic<*mut T>` functions won't cause UB, it's just that any
> pointer you get from `Atomic<*mut T>` is a pointer without provenance.
>
> So what I meant in this example is that the safety comment is just the
> part above, and the rest is not a safety comment.
Yeah it's not a safety requirement, but a guarantee.
> Hope it's clear.
>
>> > // `atomic_long_xchg()` is implemented as asm(), so it can
>> > // be treated as a normal pointer swap() hence preserve the
>> > // provenance.
>>
>> Oh I think Gary was talking specifically about Rust's `asm!`. I don't
>> know if C asm is going to play the same way... (inside LLVM they
>> probably are the same thing, but in the abstract machine?)
>>
>
> You need to understand why the Rust abstract machine models `asm!()` in
> that way: the Rust abstract machine cannot see through `asm!()`, so it
> has to assume that an `asm!()` block can do anything that some
> equivalent Rust code does. Furthermore, this "can do anything that some
> equivalent Rust code does" is only one way to reason; the core part is
> that Rust will be very conservative when using the `asm!()` result for
> optimization.
Yes that makes sense.
> It should apply to C asm() as well, because LLVM cannot see through
> the asm block either. And in the same spirit, it might apply to any C
> code as well, because it's outside the Rust abstract machine. But if you
> don't agree with the reasoning, then we just cannot implement
> Atomic<*mut T> with the existing C API.
We probably should run this by t-opsem on the Rust zulip or ask about
this in the next meeting with the Rust folks.
>> > unsafe { atomic_long_xchg(ptr.cast::<atomic_long_t>(), new as ffi::c_long) }
>> > }
>> >
>> > pub fn cmpxchg(ptr: *mut *mut (), old: *mut (), new: *mut ()) -> *mut () {
>> > // SAFETY: ..
>> > // `atomic_long_xchg()` is implemented as asm(), so it can
>> > // be treated as a normal pointer compare_exchange() hence preserve the
>> > // provenance.
>> > unsafe { atomic_long_cmpxchg(ptr.cast::<atomic_long_t>(), old as ffi::c_long, new as ffi::c_long) }
>> > }
>> >
>> > <do it for a lot of functions>
>> > }
>> >
>> > So I don't think that approach is worth doing. Again, provenance
>> > preservation is a global property: either we have it as a whole or we
>> > don't, and adding a precise comment to each function call won't change
>> > the result. I don't see much difference between reasoning about a set of
>> > functions vs. reasoning function by function with the same reasoning.
>> >
>> > If we have a reason to believe that C atomics don't support this, we
>> > just need to move to our own implementation. I know you (and probably
>> > Gary) may feel the reasoning about provenance preservation is a bit handwavy,
>>
>> YES :)
>>
>> > but this is probably the best we can get, and it's technically better
>>
>> I think we can at least improve the safety docs situation.
>>
>
> Once again, it's not about the safety of the Atomic<*mut T> implementation.
"Safety docs" to me means all of these:
* `SAFETY` comments & `# Safety` sections,
* `INVARIANT` comments & `# Invariants` sections,
* `GUARANTEE` comments & `# Guarantees` sections.
Maybe there is a better name...
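A sketch of the convention (not existing kernel code, and the signature
is schematic):

    /// Atomically exchanges the stored pointer with `v`.
    ///
    /// # Guarantees
    ///
    /// The returned pointer has the provenance of the pointer whose
    /// store this exchange reads from.
    pub fn xchg(&self, v: *mut T, o: impl Ordering) -> *mut T {
        // GUARANTEE: the asm-based C atomic behaves like a plain
        // pointer swap in the abstract machine (see the comment on
        // the impl block), so provenance is preserved.
        todo!()
    }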
>> > than using Rust native atomics, because that's just UB and no one would
>> > help you.
>>
>> I'm not arguing for using those :)
>>
>> > (I made a copy-pasta on purpose above, just to make another point about
>> > why writing each function out is not worth it)
>>
>> Yeah that's true, but at the moment that safety comment is on the `impl`
>> block? I don't think that's the right place...
>>
>
> Feel free to send any patch that improves this in your opinion ;-)
I'd prefer we do it right away. But we should just have one big comment
explaining it on the impl, and then refer to it from a `GUARANTEE`
comment in the functions?
---
Cheers,
Benno
^ permalink raw reply [flat|nested] 82+ messages in thread
end of thread
Thread overview: 82+ messages
2025-06-18 16:49 [PATCH v5 00/10] LKMM generic atomics in Rust Boqun Feng
2025-06-18 16:49 ` [PATCH v5 01/10] rust: Introduce atomic API helpers Boqun Feng
2025-06-26 8:44 ` Andreas Hindborg
2025-06-27 14:00 ` Boqun Feng
2025-06-18 16:49 ` [PATCH v5 02/10] rust: sync: Add basic atomic operation mapping framework Boqun Feng
2025-06-26 8:50 ` Andreas Hindborg
2025-06-26 10:17 ` Andreas Hindborg
2025-06-27 14:30 ` Boqun Feng
2025-06-18 16:49 ` [PATCH v5 03/10] rust: sync: atomic: Add ordering annotation types Boqun Feng
2025-06-19 10:31 ` Peter Zijlstra
2025-06-19 12:19 ` Alice Ryhl
2025-06-19 13:29 ` Boqun Feng
2025-06-19 14:32 ` Peter Zijlstra
2025-06-19 15:00 ` Boqun Feng
2025-06-19 15:10 ` Peter Zijlstra
2025-06-19 15:15 ` Boqun Feng
2025-06-19 18:04 ` Alan Stern
2025-06-21 11:18 ` Gary Guo
2025-06-23 2:48 ` Boqun Feng
2025-06-26 12:36 ` Andreas Hindborg
2025-06-27 14:34 ` Boqun Feng
2025-06-27 14:44 ` Boqun Feng
2025-06-18 16:49 ` [PATCH v5 04/10] rust: sync: atomic: Add generic atomics Boqun Feng
2025-06-21 11:32 ` Gary Guo
2025-06-23 5:19 ` Boqun Feng
2025-06-23 11:54 ` Benno Lossin
2025-06-23 12:58 ` Boqun Feng
2025-06-23 18:30 ` Gary Guo
2025-06-23 19:09 ` Boqun Feng
2025-06-23 23:27 ` Benno Lossin
2025-06-24 16:35 ` Boqun Feng
2025-06-26 13:54 ` Benno Lossin
2025-07-04 21:22 ` Boqun Feng
2025-07-04 22:05 ` Benno Lossin
2025-07-04 22:30 ` Boqun Feng
2025-07-04 22:49 ` Benno Lossin
2025-07-04 23:21 ` Boqun Feng
2025-07-04 20:25 ` Boqun Feng
2025-07-04 20:45 ` Benno Lossin
2025-07-04 21:17 ` Boqun Feng
2025-07-04 22:38 ` Benno Lossin
2025-07-04 23:21 ` Boqun Feng
2025-07-05 8:04 ` Benno Lossin
2025-07-05 15:38 ` Boqun Feng
2025-07-05 21:43 ` Benno Lossin
2025-06-26 12:15 ` Andreas Hindborg
2025-06-27 15:01 ` Boqun Feng
2025-06-30 9:52 ` Andreas Hindborg
2025-06-30 14:44 ` Alan Stern
2025-07-01 8:54 ` Andreas Hindborg
2025-07-01 14:50 ` Boqun Feng
2025-07-02 8:33 ` Andreas Hindborg
2025-06-18 16:49 ` [PATCH v5 05/10] rust: sync: atomic: Add atomic {cmp,}xchg operations Boqun Feng
2025-06-21 11:37 ` Gary Guo
2025-06-23 5:23 ` Boqun Feng
2025-06-26 13:12 ` Andreas Hindborg
2025-06-28 3:03 ` Boqun Feng
2025-06-30 10:16 ` Andreas Hindborg
2025-06-30 14:51 ` Alan Stern
2025-06-30 15:12 ` Boqun Feng
2025-06-27 8:58 ` Benno Lossin
2025-06-27 13:53 ` Boqun Feng
2025-06-28 6:12 ` Benno Lossin
2025-06-28 7:31 ` Boqun Feng
2025-06-28 8:00 ` Benno Lossin
2025-06-30 15:24 ` Boqun Feng
2025-06-30 15:27 ` Boqun Feng
2025-06-30 15:50 ` Benno Lossin
2025-06-18 16:49 ` [PATCH v5 06/10] rust: sync: atomic: Add the framework of arithmetic operations Boqun Feng
2025-06-21 11:41 ` Gary Guo
2025-06-26 12:39 ` Andreas Hindborg
2025-06-28 3:04 ` Boqun Feng
2025-06-18 16:49 ` [PATCH v5 07/10] rust: sync: atomic: Add Atomic<u{32,64}> Boqun Feng
2025-06-26 12:47 ` Andreas Hindborg
2025-06-18 16:49 ` [PATCH v5 08/10] rust: sync: atomic: Add Atomic<{usize,isize}> Boqun Feng
2025-06-26 12:49 ` Andreas Hindborg
2025-06-18 16:49 ` [PATCH v5 09/10] rust: sync: atomic: Add Atomic<*mut T> Boqun Feng
2025-06-18 16:49 ` [PATCH v5 10/10] rust: sync: Add memory barriers Boqun Feng
2025-06-26 13:36 ` Andreas Hindborg
2025-06-28 3:42 ` Boqun Feng
2025-06-30 9:54 ` Andreas Hindborg
2025-06-18 20:22 ` [PATCH v5 00/10] LKMM generic atomics in Rust Alice Ryhl