* [PATCH v4 00/10] LKMM generic atomics in Rust
@ 2025-06-09 22:46 Boqun Feng
2025-06-09 22:46 ` [PATCH v4 01/10] rust: Introduce atomic API helpers Boqun Feng
` (9 more replies)
0 siblings, 10 replies; 16+ messages in thread
From: Boqun Feng @ 2025-06-09 22:46 UTC (permalink / raw)
To: linux-kernel, rust-for-linux, lkmm, linux-arch
Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra,
Mark Rutland, Wedson Almeida Filho, Viresh Kumar, Lyude Paul,
Ingo Molnar, Mitchell Levy, Paul E. McKenney, Greg Kroah-Hartman,
Linus Torvalds, Thomas Gleixner
Hi,
This is v4 of the LKMM atomics in Rust; you can find the previous versions here:
v3: https://lore.kernel.org/rust-for-linux/20250421164221.1121805-1-boqun.feng@gmail.com/
v2: https://lore.kernel.org/rust-for-linux/20241101060237.1185533-1-boqun.feng@gmail.com/
v1: https://lore.kernel.org/rust-for-linux/20240612223025.1158537-1-boqun.feng@gmail.com/
wip: https://lore.kernel.org/rust-for-linux/20240322233838.868874-1-boqun.feng@gmail.com/
The reason for providing our own LKMM atomics is that, memory-model wise,
Rust's native memory model is not guaranteed to work with the LKMM, and
having only one memory model throughout the kernel is always better for
reasoning.

I haven't received any review on the last version, but I did get some
feedback during the Rust-for-Linux weekly meeting. I trimmed two more
patches to make the series easier to review; the current version includes:
* Generic atomic support of i32, i64, u32, u64, isize and usize on:
* load() and store()
* xchg() and cmpxchg()
* add() and fetch_add()
* Atomic pointer support on:
* load() and store()
* xchg() and cmpxchg()
* Barrier and ordering support.
Any missing functionality can of course be added in a later patch. Some
use cases based on these APIs, for example RCU-protected pointers and
AtomicFlag, can be found at:

git://git.kernel.org/pub/scm/linux/kernel/git/boqun/linux.git rust-atomic-dev
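To give a quick feel of the API shape provided by this series, here is a
minimal usage sketch (illustrative only; the methods and ordering types
are the ones introduced by the patches below):

    use kernel::sync::atomic::{Atomic, Acquire, Full, Relaxed, Release};

    let cnt = Atomic::new(0i64);        // backed by atomic64_t

    cnt.store(1, Relaxed);              // like WRITE_ONCE()
    let v = cnt.load(Acquire);          // like smp_load_acquire()
    let old = cnt.xchg(v + 1, Full);    // returns the previous value

    // cmpxchg() returns Ok(old) on success and Err(latest) on failure.
    match cnt.cmpxchg(old, old + 1, Release) {
        Ok(_) => { /* updated */ }
        Err(_cur) => { /* lost the race; `_cur` holds the latest value */ }
    }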
I think the current version is ready to merge (modulo some documentation
improvements), and I would like to postpone small implementation
improvements because we are seeing growing usage of atomics on the Rust
side. It's better to merge the API first so that we can clean up
afterwards and help new users.

But I do have one question about how to route the patches; basically I
have three options:
* via tip, I can send a pull request to Ingo at -rc4 or -rc5.
* via rust-next, I can send a pull request to Miguel at -rc4 or -rc5.
* via my own tree or atomic (Peter, if you remember, we do have an atomic
group on kernel.org and I can create a shared tree under that group),
I can send a pull request to Linus for the 6.17 merge window.
Please advise.
Regards,
Boqun
Boqun Feng (10):
rust: Introduce atomic API helpers
rust: sync: Add basic atomic operation mapping framework
rust: sync: atomic: Add ordering annotation types
rust: sync: atomic: Add generic atomics
rust: sync: atomic: Add atomic {cmp,}xchg operations
rust: sync: atomic: Add the framework of arithmetic operations
rust: sync: atomic: Add Atomic<u{32,64}>
rust: sync: atomic: Add Atomic<{usize,isize}>
rust: sync: atomic: Add Atomic<*mut T>
rust: sync: Add memory barriers
MAINTAINERS | 4 +-
rust/helpers/atomic.c | 1038 +++++++++++++++++++++
rust/helpers/barrier.c | 18 +
rust/helpers/helpers.c | 2 +
rust/kernel/sync.rs | 2 +
rust/kernel/sync/atomic.rs | 176 ++++
rust/kernel/sync/atomic/generic.rs | 524 +++++++++++
rust/kernel/sync/atomic/ops.rs | 199 ++++
rust/kernel/sync/atomic/ordering.rs | 94 ++
rust/kernel/sync/barrier.rs | 67 ++
scripts/atomic/gen-atomics.sh | 1 +
scripts/atomic/gen-rust-atomic-helpers.sh | 65 ++
12 files changed, 2189 insertions(+), 1 deletion(-)
create mode 100644 rust/helpers/atomic.c
create mode 100644 rust/helpers/barrier.c
create mode 100644 rust/kernel/sync/atomic.rs
create mode 100644 rust/kernel/sync/atomic/generic.rs
create mode 100644 rust/kernel/sync/atomic/ops.rs
create mode 100644 rust/kernel/sync/atomic/ordering.rs
create mode 100644 rust/kernel/sync/barrier.rs
create mode 100755 scripts/atomic/gen-rust-atomic-helpers.sh
--
2.39.5 (Apple Git-154)
* [PATCH v4 01/10] rust: Introduce atomic API helpers
2025-06-09 22:46 [PATCH v4 00/10] LKMM generic atomics in Rust Boqun Feng
@ 2025-06-09 22:46 ` Boqun Feng
2025-06-09 22:46 ` [PATCH v4 02/10] rust: sync: Add basic atomic operation mapping framework Boqun Feng
` (8 subsequent siblings)
9 siblings, 0 replies; 16+ messages in thread
From: Boqun Feng @ 2025-06-09 22:46 UTC (permalink / raw)
To: linux-kernel, rust-for-linux, lkmm, linux-arch
Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra,
Mark Rutland, Wedson Almeida Filho, Viresh Kumar, Lyude Paul,
Ingo Molnar, Mitchell Levy, Paul E. McKenney, Greg Kroah-Hartman,
Linus Torvalds, Thomas Gleixner
In order to support LKMM atomics in Rust, add rust_helper_* wrappers for
the atomic APIs. These helpers ensure that the implementation of LKMM
atomics in Rust is the same as in C, which saves the maintenance burden
of having two similar atomic implementations in asm.
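For illustration, the Rust side reaches these helpers through the
generated bindings, with the `rust_helper_` prefix stripped (this is how
the next patch calls them); a raw call site would look roughly like the
following sketch, although the real call sites are wrapped by macros
added later in the series:

    // Sketch only: assumes `v` is a valid `*mut bindings::atomic_t` and no
    // concurrent access constitutes an LKMM data race.
    let cur: i32 = unsafe { bindings::atomic_read(v) };
    unsafe { bindings::atomic_set(v, cur + 1) };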
Originally-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
rust/helpers/atomic.c | 1038 +++++++++++++++++++++
rust/helpers/helpers.c | 1 +
scripts/atomic/gen-atomics.sh | 1 +
scripts/atomic/gen-rust-atomic-helpers.sh | 65 ++
4 files changed, 1105 insertions(+)
create mode 100644 rust/helpers/atomic.c
create mode 100755 scripts/atomic/gen-rust-atomic-helpers.sh
diff --git a/rust/helpers/atomic.c b/rust/helpers/atomic.c
new file mode 100644
index 000000000000..00bf10887928
--- /dev/null
+++ b/rust/helpers/atomic.c
@@ -0,0 +1,1038 @@
+// SPDX-License-Identifier: GPL-2.0
+
+// Generated by scripts/atomic/gen-rust-atomic-helpers.sh
+// DO NOT MODIFY THIS FILE DIRECTLY
+
+/*
+ * This file provides helpers for the various atomic functions for Rust.
+ */
+#ifndef _RUST_ATOMIC_API_H
+#define _RUST_ATOMIC_API_H
+
+#include <linux/atomic.h>
+
+// TODO: Remove this after LTO helper support is added.
+#define __rust_helper
+
+__rust_helper int
+rust_helper_atomic_read(const atomic_t *v)
+{
+ return atomic_read(v);
+}
+
+__rust_helper int
+rust_helper_atomic_read_acquire(const atomic_t *v)
+{
+ return atomic_read_acquire(v);
+}
+
+__rust_helper void
+rust_helper_atomic_set(atomic_t *v, int i)
+{
+ atomic_set(v, i);
+}
+
+__rust_helper void
+rust_helper_atomic_set_release(atomic_t *v, int i)
+{
+ atomic_set_release(v, i);
+}
+
+__rust_helper void
+rust_helper_atomic_add(int i, atomic_t *v)
+{
+ atomic_add(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_add_return(int i, atomic_t *v)
+{
+ return atomic_add_return(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_add_return_acquire(int i, atomic_t *v)
+{
+ return atomic_add_return_acquire(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_add_return_release(int i, atomic_t *v)
+{
+ return atomic_add_return_release(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_add_return_relaxed(int i, atomic_t *v)
+{
+ return atomic_add_return_relaxed(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_add(int i, atomic_t *v)
+{
+ return atomic_fetch_add(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_add_acquire(int i, atomic_t *v)
+{
+ return atomic_fetch_add_acquire(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_add_release(int i, atomic_t *v)
+{
+ return atomic_fetch_add_release(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_add_relaxed(int i, atomic_t *v)
+{
+ return atomic_fetch_add_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic_sub(int i, atomic_t *v)
+{
+ atomic_sub(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_sub_return(int i, atomic_t *v)
+{
+ return atomic_sub_return(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_sub_return_acquire(int i, atomic_t *v)
+{
+ return atomic_sub_return_acquire(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_sub_return_release(int i, atomic_t *v)
+{
+ return atomic_sub_return_release(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_sub_return_relaxed(int i, atomic_t *v)
+{
+ return atomic_sub_return_relaxed(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_sub(int i, atomic_t *v)
+{
+ return atomic_fetch_sub(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_sub_acquire(int i, atomic_t *v)
+{
+ return atomic_fetch_sub_acquire(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_sub_release(int i, atomic_t *v)
+{
+ return atomic_fetch_sub_release(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_sub_relaxed(int i, atomic_t *v)
+{
+ return atomic_fetch_sub_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic_inc(atomic_t *v)
+{
+ atomic_inc(v);
+}
+
+__rust_helper int
+rust_helper_atomic_inc_return(atomic_t *v)
+{
+ return atomic_inc_return(v);
+}
+
+__rust_helper int
+rust_helper_atomic_inc_return_acquire(atomic_t *v)
+{
+ return atomic_inc_return_acquire(v);
+}
+
+__rust_helper int
+rust_helper_atomic_inc_return_release(atomic_t *v)
+{
+ return atomic_inc_return_release(v);
+}
+
+__rust_helper int
+rust_helper_atomic_inc_return_relaxed(atomic_t *v)
+{
+ return atomic_inc_return_relaxed(v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_inc(atomic_t *v)
+{
+ return atomic_fetch_inc(v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_inc_acquire(atomic_t *v)
+{
+ return atomic_fetch_inc_acquire(v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_inc_release(atomic_t *v)
+{
+ return atomic_fetch_inc_release(v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_inc_relaxed(atomic_t *v)
+{
+ return atomic_fetch_inc_relaxed(v);
+}
+
+__rust_helper void
+rust_helper_atomic_dec(atomic_t *v)
+{
+ atomic_dec(v);
+}
+
+__rust_helper int
+rust_helper_atomic_dec_return(atomic_t *v)
+{
+ return atomic_dec_return(v);
+}
+
+__rust_helper int
+rust_helper_atomic_dec_return_acquire(atomic_t *v)
+{
+ return atomic_dec_return_acquire(v);
+}
+
+__rust_helper int
+rust_helper_atomic_dec_return_release(atomic_t *v)
+{
+ return atomic_dec_return_release(v);
+}
+
+__rust_helper int
+rust_helper_atomic_dec_return_relaxed(atomic_t *v)
+{
+ return atomic_dec_return_relaxed(v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_dec(atomic_t *v)
+{
+ return atomic_fetch_dec(v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_dec_acquire(atomic_t *v)
+{
+ return atomic_fetch_dec_acquire(v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_dec_release(atomic_t *v)
+{
+ return atomic_fetch_dec_release(v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_dec_relaxed(atomic_t *v)
+{
+ return atomic_fetch_dec_relaxed(v);
+}
+
+__rust_helper void
+rust_helper_atomic_and(int i, atomic_t *v)
+{
+ atomic_and(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_and(int i, atomic_t *v)
+{
+ return atomic_fetch_and(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_and_acquire(int i, atomic_t *v)
+{
+ return atomic_fetch_and_acquire(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_and_release(int i, atomic_t *v)
+{
+ return atomic_fetch_and_release(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_and_relaxed(int i, atomic_t *v)
+{
+ return atomic_fetch_and_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic_andnot(int i, atomic_t *v)
+{
+ atomic_andnot(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_andnot(int i, atomic_t *v)
+{
+ return atomic_fetch_andnot(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_andnot_acquire(int i, atomic_t *v)
+{
+ return atomic_fetch_andnot_acquire(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_andnot_release(int i, atomic_t *v)
+{
+ return atomic_fetch_andnot_release(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_andnot_relaxed(int i, atomic_t *v)
+{
+ return atomic_fetch_andnot_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic_or(int i, atomic_t *v)
+{
+ atomic_or(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_or(int i, atomic_t *v)
+{
+ return atomic_fetch_or(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_or_acquire(int i, atomic_t *v)
+{
+ return atomic_fetch_or_acquire(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_or_release(int i, atomic_t *v)
+{
+ return atomic_fetch_or_release(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_or_relaxed(int i, atomic_t *v)
+{
+ return atomic_fetch_or_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic_xor(int i, atomic_t *v)
+{
+ atomic_xor(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_xor(int i, atomic_t *v)
+{
+ return atomic_fetch_xor(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_xor_acquire(int i, atomic_t *v)
+{
+ return atomic_fetch_xor_acquire(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_xor_release(int i, atomic_t *v)
+{
+ return atomic_fetch_xor_release(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_xor_relaxed(int i, atomic_t *v)
+{
+ return atomic_fetch_xor_relaxed(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_xchg(atomic_t *v, int new)
+{
+ return atomic_xchg(v, new);
+}
+
+__rust_helper int
+rust_helper_atomic_xchg_acquire(atomic_t *v, int new)
+{
+ return atomic_xchg_acquire(v, new);
+}
+
+__rust_helper int
+rust_helper_atomic_xchg_release(atomic_t *v, int new)
+{
+ return atomic_xchg_release(v, new);
+}
+
+__rust_helper int
+rust_helper_atomic_xchg_relaxed(atomic_t *v, int new)
+{
+ return atomic_xchg_relaxed(v, new);
+}
+
+__rust_helper int
+rust_helper_atomic_cmpxchg(atomic_t *v, int old, int new)
+{
+ return atomic_cmpxchg(v, old, new);
+}
+
+__rust_helper int
+rust_helper_atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
+{
+ return atomic_cmpxchg_acquire(v, old, new);
+}
+
+__rust_helper int
+rust_helper_atomic_cmpxchg_release(atomic_t *v, int old, int new)
+{
+ return atomic_cmpxchg_release(v, old, new);
+}
+
+__rust_helper int
+rust_helper_atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)
+{
+ return atomic_cmpxchg_relaxed(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
+{
+ return atomic_try_cmpxchg(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
+{
+ return atomic_try_cmpxchg_acquire(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
+{
+ return atomic_try_cmpxchg_release(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
+{
+ return atomic_try_cmpxchg_relaxed(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic_sub_and_test(int i, atomic_t *v)
+{
+ return atomic_sub_and_test(i, v);
+}
+
+__rust_helper bool
+rust_helper_atomic_dec_and_test(atomic_t *v)
+{
+ return atomic_dec_and_test(v);
+}
+
+__rust_helper bool
+rust_helper_atomic_inc_and_test(atomic_t *v)
+{
+ return atomic_inc_and_test(v);
+}
+
+__rust_helper bool
+rust_helper_atomic_add_negative(int i, atomic_t *v)
+{
+ return atomic_add_negative(i, v);
+}
+
+__rust_helper bool
+rust_helper_atomic_add_negative_acquire(int i, atomic_t *v)
+{
+ return atomic_add_negative_acquire(i, v);
+}
+
+__rust_helper bool
+rust_helper_atomic_add_negative_release(int i, atomic_t *v)
+{
+ return atomic_add_negative_release(i, v);
+}
+
+__rust_helper bool
+rust_helper_atomic_add_negative_relaxed(int i, atomic_t *v)
+{
+ return atomic_add_negative_relaxed(i, v);
+}
+
+__rust_helper int
+rust_helper_atomic_fetch_add_unless(atomic_t *v, int a, int u)
+{
+ return atomic_fetch_add_unless(v, a, u);
+}
+
+__rust_helper bool
+rust_helper_atomic_add_unless(atomic_t *v, int a, int u)
+{
+ return atomic_add_unless(v, a, u);
+}
+
+__rust_helper bool
+rust_helper_atomic_inc_not_zero(atomic_t *v)
+{
+ return atomic_inc_not_zero(v);
+}
+
+__rust_helper bool
+rust_helper_atomic_inc_unless_negative(atomic_t *v)
+{
+ return atomic_inc_unless_negative(v);
+}
+
+__rust_helper bool
+rust_helper_atomic_dec_unless_positive(atomic_t *v)
+{
+ return atomic_dec_unless_positive(v);
+}
+
+__rust_helper int
+rust_helper_atomic_dec_if_positive(atomic_t *v)
+{
+ return atomic_dec_if_positive(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_read(const atomic64_t *v)
+{
+ return atomic64_read(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_read_acquire(const atomic64_t *v)
+{
+ return atomic64_read_acquire(v);
+}
+
+__rust_helper void
+rust_helper_atomic64_set(atomic64_t *v, s64 i)
+{
+ atomic64_set(v, i);
+}
+
+__rust_helper void
+rust_helper_atomic64_set_release(atomic64_t *v, s64 i)
+{
+ atomic64_set_release(v, i);
+}
+
+__rust_helper void
+rust_helper_atomic64_add(s64 i, atomic64_t *v)
+{
+ atomic64_add(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_add_return(s64 i, atomic64_t *v)
+{
+ return atomic64_add_return(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_add_return_acquire(s64 i, atomic64_t *v)
+{
+ return atomic64_add_return_acquire(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_add_return_release(s64 i, atomic64_t *v)
+{
+ return atomic64_add_return_release(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_add_return_relaxed(s64 i, atomic64_t *v)
+{
+ return atomic64_add_return_relaxed(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_add(s64 i, atomic64_t *v)
+{
+ return atomic64_fetch_add(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_add_acquire(s64 i, atomic64_t *v)
+{
+ return atomic64_fetch_add_acquire(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_add_release(s64 i, atomic64_t *v)
+{
+ return atomic64_fetch_add_release(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_add_relaxed(s64 i, atomic64_t *v)
+{
+ return atomic64_fetch_add_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic64_sub(s64 i, atomic64_t *v)
+{
+ atomic64_sub(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_sub_return(s64 i, atomic64_t *v)
+{
+ return atomic64_sub_return(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_sub_return_acquire(s64 i, atomic64_t *v)
+{
+ return atomic64_sub_return_acquire(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_sub_return_release(s64 i, atomic64_t *v)
+{
+ return atomic64_sub_return_release(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_sub_return_relaxed(s64 i, atomic64_t *v)
+{
+ return atomic64_sub_return_relaxed(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_sub(s64 i, atomic64_t *v)
+{
+ return atomic64_fetch_sub(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_sub_acquire(s64 i, atomic64_t *v)
+{
+ return atomic64_fetch_sub_acquire(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_sub_release(s64 i, atomic64_t *v)
+{
+ return atomic64_fetch_sub_release(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v)
+{
+ return atomic64_fetch_sub_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic64_inc(atomic64_t *v)
+{
+ atomic64_inc(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_inc_return(atomic64_t *v)
+{
+ return atomic64_inc_return(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_inc_return_acquire(atomic64_t *v)
+{
+ return atomic64_inc_return_acquire(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_inc_return_release(atomic64_t *v)
+{
+ return atomic64_inc_return_release(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_inc_return_relaxed(atomic64_t *v)
+{
+ return atomic64_inc_return_relaxed(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_inc(atomic64_t *v)
+{
+ return atomic64_fetch_inc(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_inc_acquire(atomic64_t *v)
+{
+ return atomic64_fetch_inc_acquire(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_inc_release(atomic64_t *v)
+{
+ return atomic64_fetch_inc_release(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_inc_relaxed(atomic64_t *v)
+{
+ return atomic64_fetch_inc_relaxed(v);
+}
+
+__rust_helper void
+rust_helper_atomic64_dec(atomic64_t *v)
+{
+ atomic64_dec(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_dec_return(atomic64_t *v)
+{
+ return atomic64_dec_return(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_dec_return_acquire(atomic64_t *v)
+{
+ return atomic64_dec_return_acquire(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_dec_return_release(atomic64_t *v)
+{
+ return atomic64_dec_return_release(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_dec_return_relaxed(atomic64_t *v)
+{
+ return atomic64_dec_return_relaxed(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_dec(atomic64_t *v)
+{
+ return atomic64_fetch_dec(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_dec_acquire(atomic64_t *v)
+{
+ return atomic64_fetch_dec_acquire(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_dec_release(atomic64_t *v)
+{
+ return atomic64_fetch_dec_release(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_dec_relaxed(atomic64_t *v)
+{
+ return atomic64_fetch_dec_relaxed(v);
+}
+
+__rust_helper void
+rust_helper_atomic64_and(s64 i, atomic64_t *v)
+{
+ atomic64_and(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_and(s64 i, atomic64_t *v)
+{
+ return atomic64_fetch_and(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_and_acquire(s64 i, atomic64_t *v)
+{
+ return atomic64_fetch_and_acquire(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_and_release(s64 i, atomic64_t *v)
+{
+ return atomic64_fetch_and_release(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_and_relaxed(s64 i, atomic64_t *v)
+{
+ return atomic64_fetch_and_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic64_andnot(s64 i, atomic64_t *v)
+{
+ atomic64_andnot(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_andnot(s64 i, atomic64_t *v)
+{
+ return atomic64_fetch_andnot(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
+{
+ return atomic64_fetch_andnot_acquire(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
+{
+ return atomic64_fetch_andnot_release(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)
+{
+ return atomic64_fetch_andnot_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic64_or(s64 i, atomic64_t *v)
+{
+ atomic64_or(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_or(s64 i, atomic64_t *v)
+{
+ return atomic64_fetch_or(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_or_acquire(s64 i, atomic64_t *v)
+{
+ return atomic64_fetch_or_acquire(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_or_release(s64 i, atomic64_t *v)
+{
+ return atomic64_fetch_or_release(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_or_relaxed(s64 i, atomic64_t *v)
+{
+ return atomic64_fetch_or_relaxed(i, v);
+}
+
+__rust_helper void
+rust_helper_atomic64_xor(s64 i, atomic64_t *v)
+{
+ atomic64_xor(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_xor(s64 i, atomic64_t *v)
+{
+ return atomic64_fetch_xor(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_xor_acquire(s64 i, atomic64_t *v)
+{
+ return atomic64_fetch_xor_acquire(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_xor_release(s64 i, atomic64_t *v)
+{
+ return atomic64_fetch_xor_release(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v)
+{
+ return atomic64_fetch_xor_relaxed(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_xchg(atomic64_t *v, s64 new)
+{
+ return atomic64_xchg(v, new);
+}
+
+__rust_helper s64
+rust_helper_atomic64_xchg_acquire(atomic64_t *v, s64 new)
+{
+ return atomic64_xchg_acquire(v, new);
+}
+
+__rust_helper s64
+rust_helper_atomic64_xchg_release(atomic64_t *v, s64 new)
+{
+ return atomic64_xchg_release(v, new);
+}
+
+__rust_helper s64
+rust_helper_atomic64_xchg_relaxed(atomic64_t *v, s64 new)
+{
+ return atomic64_xchg_relaxed(v, new);
+}
+
+__rust_helper s64
+rust_helper_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
+{
+ return atomic64_cmpxchg(v, old, new);
+}
+
+__rust_helper s64
+rust_helper_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
+{
+ return atomic64_cmpxchg_acquire(v, old, new);
+}
+
+__rust_helper s64
+rust_helper_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
+{
+ return atomic64_cmpxchg_release(v, old, new);
+}
+
+__rust_helper s64
+rust_helper_atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new)
+{
+ return atomic64_cmpxchg_relaxed(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
+{
+ return atomic64_try_cmpxchg(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
+{
+ return atomic64_try_cmpxchg_acquire(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
+{
+ return atomic64_try_cmpxchg_release(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
+{
+ return atomic64_try_cmpxchg_relaxed(v, old, new);
+}
+
+__rust_helper bool
+rust_helper_atomic64_sub_and_test(s64 i, atomic64_t *v)
+{
+ return atomic64_sub_and_test(i, v);
+}
+
+__rust_helper bool
+rust_helper_atomic64_dec_and_test(atomic64_t *v)
+{
+ return atomic64_dec_and_test(v);
+}
+
+__rust_helper bool
+rust_helper_atomic64_inc_and_test(atomic64_t *v)
+{
+ return atomic64_inc_and_test(v);
+}
+
+__rust_helper bool
+rust_helper_atomic64_add_negative(s64 i, atomic64_t *v)
+{
+ return atomic64_add_negative(i, v);
+}
+
+__rust_helper bool
+rust_helper_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
+{
+ return atomic64_add_negative_acquire(i, v);
+}
+
+__rust_helper bool
+rust_helper_atomic64_add_negative_release(s64 i, atomic64_t *v)
+{
+ return atomic64_add_negative_release(i, v);
+}
+
+__rust_helper bool
+rust_helper_atomic64_add_negative_relaxed(s64 i, atomic64_t *v)
+{
+ return atomic64_add_negative_relaxed(i, v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
+{
+ return atomic64_fetch_add_unless(v, a, u);
+}
+
+__rust_helper bool
+rust_helper_atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
+{
+ return atomic64_add_unless(v, a, u);
+}
+
+__rust_helper bool
+rust_helper_atomic64_inc_not_zero(atomic64_t *v)
+{
+ return atomic64_inc_not_zero(v);
+}
+
+__rust_helper bool
+rust_helper_atomic64_inc_unless_negative(atomic64_t *v)
+{
+ return atomic64_inc_unless_negative(v);
+}
+
+__rust_helper bool
+rust_helper_atomic64_dec_unless_positive(atomic64_t *v)
+{
+ return atomic64_dec_unless_positive(v);
+}
+
+__rust_helper s64
+rust_helper_atomic64_dec_if_positive(atomic64_t *v)
+{
+ return atomic64_dec_if_positive(v);
+}
+
+#endif /* _RUST_ATOMIC_API_H */
+// b032d261814b3e119b72dbf7d21447f6731325ee
diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c
index 0f1b5d115985..0e7e7b388062 100644
--- a/rust/helpers/helpers.c
+++ b/rust/helpers/helpers.c
@@ -7,6 +7,7 @@
* Sorted alphabetically.
*/
+#include "atomic.c"
#include "auxiliary.c"
#include "blk.c"
#include "bug.c"
diff --git a/scripts/atomic/gen-atomics.sh b/scripts/atomic/gen-atomics.sh
index 5b98a8307693..02508d0d6fe4 100755
--- a/scripts/atomic/gen-atomics.sh
+++ b/scripts/atomic/gen-atomics.sh
@@ -11,6 +11,7 @@ cat <<EOF |
gen-atomic-instrumented.sh linux/atomic/atomic-instrumented.h
gen-atomic-long.sh linux/atomic/atomic-long.h
gen-atomic-fallback.sh linux/atomic/atomic-arch-fallback.h
+gen-rust-atomic-helpers.sh ../rust/helpers/atomic.c
EOF
while read script header args; do
/bin/sh ${ATOMICDIR}/${script} ${ATOMICTBL} ${args} > ${LINUXDIR}/include/${header}
diff --git a/scripts/atomic/gen-rust-atomic-helpers.sh b/scripts/atomic/gen-rust-atomic-helpers.sh
new file mode 100755
index 000000000000..72f2e5bde0c6
--- /dev/null
+++ b/scripts/atomic/gen-rust-atomic-helpers.sh
@@ -0,0 +1,65 @@
+#!/bin/sh
+# SPDX-License-Identifier: GPL-2.0
+
+ATOMICDIR=$(dirname $0)
+
+. ${ATOMICDIR}/atomic-tbl.sh
+
+#gen_proto_order_variant(meta, pfx, name, sfx, order, atomic, int, arg...)
+gen_proto_order_variant()
+{
+ local meta="$1"; shift
+ local pfx="$1"; shift
+ local name="$1"; shift
+ local sfx="$1"; shift
+ local order="$1"; shift
+ local atomic="$1"; shift
+ local int="$1"; shift
+
+ local atomicname="${atomic}_${pfx}${name}${sfx}${order}"
+
+ local ret="$(gen_ret_type "${meta}" "${int}")"
+ local params="$(gen_params "${int}" "${atomic}" "$@")"
+ local args="$(gen_args "$@")"
+ local retstmt="$(gen_ret_stmt "${meta}")"
+
+cat <<EOF
+__rust_helper ${ret}
+rust_helper_${atomicname}(${params})
+{
+ ${retstmt}${atomicname}(${args});
+}
+
+EOF
+}
+
+cat << EOF
+// SPDX-License-Identifier: GPL-2.0
+
+// Generated by $0
+// DO NOT MODIFY THIS FILE DIRECTLY
+
+/*
+ * This file provides helpers for the various atomic functions for Rust.
+ */
+#ifndef _RUST_ATOMIC_API_H
+#define _RUST_ATOMIC_API_H
+
+#include <linux/atomic.h>
+
+// TODO: Remove this after LTO helper support is added.
+#define __rust_helper
+
+EOF
+
+grep '^[a-z]' "$1" | while read name meta args; do
+ gen_proto "${meta}" "${name}" "atomic" "int" ${args}
+done
+
+grep '^[a-z]' "$1" | while read name meta args; do
+ gen_proto "${meta}" "${name}" "atomic64" "s64" ${args}
+done
+
+cat <<EOF
+#endif /* _RUST_ATOMIC_API_H */
+EOF
--
2.39.5 (Apple Git-154)
* [PATCH v4 02/10] rust: sync: Add basic atomic operation mapping framework
2025-06-09 22:46 [PATCH v4 00/10] LKMM generic atomics in Rust Boqun Feng
2025-06-09 22:46 ` [PATCH v4 01/10] rust: Introduce atomic API helpers Boqun Feng
@ 2025-06-09 22:46 ` Boqun Feng
2025-06-09 22:46 ` [PATCH v4 03/10] rust: sync: atomic: Add ordering annotation types Boqun Feng
` (7 subsequent siblings)
9 siblings, 0 replies; 16+ messages in thread
From: Boqun Feng @ 2025-06-09 22:46 UTC (permalink / raw)
To: linux-kernel, rust-for-linux, lkmm, linux-arch
Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra,
Mark Rutland, Wedson Almeida Filho, Viresh Kumar, Lyude Paul,
Ingo Molnar, Mitchell Levy, Paul E. McKenney, Greg Kroah-Hartman,
Linus Torvalds, Thomas Gleixner
Preparation for the generic atomic implementation. To unify the
implementation of a generic method over `i32` and `i64`, the C-side
atomic methods need to be grouped so that in a generic method they can
be referred to as <type>::<method>; otherwise their parameters and
return values differ between `i32` and `i64`, which would require using
`transmute()` to unify the types into a `T`.

Introduce `AtomicImpl` to represent a basic type in Rust that has a
direct mapping to a C atomic implementation. This trait is sealed, and
currently only `i32` and `i64` implement it.

Further, different methods are put into different `*Ops` trait groups,
so that in the future smaller types like `i8`/`i16` can be supported
with only a limited set of APIs (e.g. only set(), load(), xchg() and
cmpxchg(), but no add() or sub() etc).
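As an illustration of what the grouping enables, a generic function over
one of these traits can select the right C implementation through the
trait method (a minimal sketch; `generic_read()` is a hypothetical name,
the trait and its methods are the ones added by this patch):

    /// # Safety
    ///
    /// `ptr` must be a valid pointer and accesses to it must not cause
    /// LKMM data races.
    unsafe fn generic_read<T: AtomicHasBasicOps>(ptr: *mut T) -> T {
        // Resolves to atomic_read() for `i32` and atomic64_read() for `i64`.
        // SAFETY: per this function's safety requirements.
        unsafe { T::atomic_read(ptr) }
    }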
While the atomic mod is introduced, documentation is also added for
memory models and data races.
Also bump my role to maintainer of ATOMIC INFRASTRUCTURE to reflect my
responsibility for the Rust atomic mod.
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
MAINTAINERS | 4 +-
rust/kernel/sync.rs | 1 +
rust/kernel/sync/atomic.rs | 19 ++++
rust/kernel/sync/atomic/ops.rs | 199 +++++++++++++++++++++++++++++++++
4 files changed, 222 insertions(+), 1 deletion(-)
create mode 100644 rust/kernel/sync/atomic.rs
create mode 100644 rust/kernel/sync/atomic/ops.rs
diff --git a/MAINTAINERS b/MAINTAINERS
index a92290fffa16..fe0cf0a2e6e5 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -3894,7 +3894,7 @@ F: drivers/input/touchscreen/atmel_mxt_ts.c
ATOMIC INFRASTRUCTURE
M: Will Deacon <will@kernel.org>
M: Peter Zijlstra <peterz@infradead.org>
-R: Boqun Feng <boqun.feng@gmail.com>
+M: Boqun Feng <boqun.feng@gmail.com>
R: Mark Rutland <mark.rutland@arm.com>
L: linux-kernel@vger.kernel.org
S: Maintained
@@ -3903,6 +3903,8 @@ F: arch/*/include/asm/atomic*.h
F: include/*/atomic*.h
F: include/linux/refcount.h
F: scripts/atomic/
+F: rust/kernel/sync/atomic.rs
+F: rust/kernel/sync/atomic/
ATTO EXPRESSSAS SAS/SATA RAID SCSI DRIVER
M: Bradley Grove <linuxdrivers@attotech.com>
diff --git a/rust/kernel/sync.rs b/rust/kernel/sync.rs
index 36a719015583..b620027e0641 100644
--- a/rust/kernel/sync.rs
+++ b/rust/kernel/sync.rs
@@ -10,6 +10,7 @@
use pin_init;
mod arc;
+pub mod atomic;
mod condvar;
pub mod lock;
mod locked_by;
diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
new file mode 100644
index 000000000000..65e41dba97b7
--- /dev/null
+++ b/rust/kernel/sync/atomic.rs
@@ -0,0 +1,19 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Atomic primitives.
+//!
+//! These primitives have the same semantics as their C counterparts, and the precise definitions of
+//! the semantics can be found at [`LKMM`]. Note that the Linux Kernel Memory (Consistency) Model is
+//! the only memory model for Rust code in the kernel, and Rust's own atomics should be avoided.
+//!
+//! # Data races
+//!
+//! [`LKMM`] atomics have different rules regarding data races:
+//!
+//! - A normal write from the C side is treated as an atomic write if
+//! CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=y.
+//! - Mixed-size atomic accesses don't cause data races.
+//!
+//! [`LKMM`]: srctree/tools/memory-model/
+
+pub mod ops;
diff --git a/rust/kernel/sync/atomic/ops.rs b/rust/kernel/sync/atomic/ops.rs
new file mode 100644
index 000000000000..f8825f7c84f0
--- /dev/null
+++ b/rust/kernel/sync/atomic/ops.rs
@@ -0,0 +1,199 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Atomic implementations.
+//!
+//! Provides 1:1 mapping of atomic implementations.
+
+use crate::bindings::*;
+use crate::macros::paste;
+
+mod private {
+ /// Sealed trait marker to disable customized impls on atomic implementation traits.
+ pub trait Sealed {}
+}
+
+// `i32` and `i64` are the only supported atomic implementations.
+impl private::Sealed for i32 {}
+impl private::Sealed for i64 {}
+
+/// A marker trait for types that implement atomic operations with C side primitives.
+///
+/// This trait is sealed, and only types that have a direct mapping to the C side atomics should
+/// implement it:
+///
+/// - `i32` maps to `atomic_t`.
+/// - `i64` maps to `atomic64_t`.
+pub trait AtomicImpl: Sized + Send + Copy + private::Sealed {}
+
+// `atomic_t` implements atomic operations on `i32`.
+impl AtomicImpl for i32 {}
+
+// `atomic64_t` implements atomic operations on `i64`.
+impl AtomicImpl for i64 {}
+
+// This macro generates the function signature with the given argument list and return type.
+macro_rules! declare_atomic_method {
+ (
+ $func:ident($($arg:ident : $arg_type:ty),*) $(-> $ret:ty)?
+ ) => {
+ paste!(
+ #[doc = concat!("Atomic ", stringify!($func))]
+ #[doc = "# Safety"]
+ #[doc = "- Any pointer passed to the function has to be a valid pointer"]
+ #[doc = "- Accesses must not cause data races per LKMM:"]
+ #[doc = " - Atomic read racing with normal read, normal write or atomic write is not data race."]
+ #[doc = " - Atomic write racing with normal read or normal write is data-race, unless the"]
+ #[doc = " normal accesses are done at C side and considered as immune to data"]
+ #[doc = " races, e.g. CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC."]
+ unsafe fn [< atomic_ $func >]($($arg: $arg_type,)*) $(-> $ret)?;
+ );
+ };
+ (
+ $func:ident [$variant:ident $($rest:ident)*]($($arg_sig:tt)*) $(-> $ret:ty)?
+ ) => {
+ paste!(
+ declare_atomic_method!(
+ [< $func _ $variant >]($($arg_sig)*) $(-> $ret)?
+ );
+ );
+
+ declare_atomic_method!(
+ $func [$($rest)*]($($arg_sig)*) $(-> $ret)?
+ );
+ };
+ (
+ $func:ident []($($arg_sig:tt)*) $(-> $ret:ty)?
+ ) => {
+ declare_atomic_method!(
+ $func($($arg_sig)*) $(-> $ret)?
+ );
+ }
+}
+
+// This macro generates the function implementation with the given argument list and return type, and
+// it will replace the "call(...)" expression with "$ctype _ $func" to call the real C function.
+macro_rules! impl_atomic_method {
+ (
+ ($ctype:ident) $func:ident($($arg:ident: $arg_type:ty),*) $(-> $ret:ty)? {
+ call($($c_arg:expr),*)
+ }
+ ) => {
+ paste!(
+ #[inline(always)]
+ unsafe fn [< atomic_ $func >]($($arg: $arg_type,)*) $(-> $ret)? {
+ // SAFETY: Per function safety requirement, all pointers are valid, and accesses
+ // won't cause data race per LKMM.
+ unsafe { [< $ctype _ $func >]($($c_arg,)*) }
+ }
+ );
+ };
+ (
+ ($ctype:ident) $func:ident[$variant:ident $($rest:ident)*]($($arg_sig:tt)*) $(-> $ret:ty)? {
+ call($($arg:tt)*)
+ }
+ ) => {
+ paste!(
+ impl_atomic_method!(
+ ($ctype) [< $func _ $variant >]($($arg_sig)*) $( -> $ret)? {
+ call($($arg)*)
+ }
+ );
+ );
+ impl_atomic_method!(
+ ($ctype) $func [$($rest)*]($($arg_sig)*) $( -> $ret)? {
+ call($($arg)*)
+ }
+ );
+ };
+ (
+ ($ctype:ident) $func:ident[]($($arg_sig:tt)*) $( -> $ret:ty)? {
+ call($($arg:tt)*)
+ }
+ ) => {
+ impl_atomic_method!(
+ ($ctype) $func($($arg_sig)*) $(-> $ret)? {
+ call($($arg)*)
+ }
+ );
+ }
+}
+
+// Declares the $ops trait with methods and implements the trait for `i32` and `i64`.
+macro_rules! declare_and_impl_atomic_methods {
+ ($ops:ident ($doc:literal) {
+ $(
+ $func:ident [$($variant:ident),*]($($arg_sig:tt)*) $( -> $ret:ty)? {
+ call($($arg:tt)*)
+ }
+ )*
+ }) => {
+ #[doc = $doc]
+ pub trait $ops: AtomicImpl {
+ $(
+ declare_atomic_method!(
+ $func[$($variant)*]($($arg_sig)*) $(-> $ret)?
+ );
+ )*
+ }
+
+ impl $ops for i32 {
+ $(
+ impl_atomic_method!(
+ (atomic) $func[$($variant)*]($($arg_sig)*) $(-> $ret)? {
+ call($($arg)*)
+ }
+ );
+ )*
+ }
+
+ impl $ops for i64 {
+ $(
+ impl_atomic_method!(
+ (atomic64) $func[$($variant)*]($($arg_sig)*) $(-> $ret)? {
+ call($($arg)*)
+ }
+ );
+ )*
+ }
+ }
+}
+
+declare_and_impl_atomic_methods!(
+ AtomicHasBasicOps ("Basic atomic operations") {
+ read[acquire](ptr: *mut Self) -> Self {
+ call(ptr as *mut _)
+ }
+
+ set[release](ptr: *mut Self, v: Self) {
+ call(ptr as *mut _, v)
+ }
+ }
+);
+
+declare_and_impl_atomic_methods!(
+ AtomicHasXchgOps ("Exchange and compare-and-exchange atomic operations") {
+ xchg[acquire, release, relaxed](ptr: *mut Self, v: Self) -> Self {
+ call(ptr as *mut _, v)
+ }
+
+ cmpxchg[acquire, release, relaxed](ptr: *mut Self, old: Self, new: Self) -> Self {
+ call(ptr as *mut _, old, new)
+ }
+
+ try_cmpxchg[acquire, release, relaxed](ptr: *mut Self, old: *mut Self, new: Self) -> bool {
+ call(ptr as *mut _, old, new)
+ }
+ }
+);
+
+declare_and_impl_atomic_methods!(
+ AtomicHasArithmeticOps ("Atomic arithmetic operations") {
+ add[](ptr: *mut Self, v: Self) {
+ call(v, ptr as *mut _)
+ }
+
+ fetch_add[acquire, release, relaxed](ptr: *mut Self, v: Self) -> Self {
+ call(v, ptr as *mut _)
+ }
+ }
+);
--
2.39.5 (Apple Git-154)
* [PATCH v4 03/10] rust: sync: atomic: Add ordering annotation types
2025-06-09 22:46 [PATCH v4 00/10] LKMM generic atomics in Rust Boqun Feng
2025-06-09 22:46 ` [PATCH v4 01/10] rust: Introduce atomic API helpers Boqun Feng
2025-06-09 22:46 ` [PATCH v4 02/10] rust: sync: Add basic atomic operation mapping framework Boqun Feng
@ 2025-06-09 22:46 ` Boqun Feng
2025-06-10 9:07 ` Benno Lossin
2025-06-09 22:46 ` [PATCH v4 04/10] rust: sync: atomic: Add generic atomics Boqun Feng
` (6 subsequent siblings)
9 siblings, 1 reply; 16+ messages in thread
From: Boqun Feng @ 2025-06-09 22:46 UTC (permalink / raw)
To: linux-kernel, rust-for-linux, lkmm, linux-arch
Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra,
Mark Rutland, Wedson Almeida Filho, Viresh Kumar, Lyude Paul,
Ingo Molnar, Mitchell Levy, Paul E. McKenney, Greg Kroah-Hartman,
Linus Torvalds, Thomas Gleixner
Preparation for atomic primitives. Instead of a suffix like _acquire, a
method parameter along with the corresponding generic parameter will be
used to specify the ordering of an atomic operation. For example, an
atomic load() can be defined as:
impl<T: ...> Atomic<T> {
pub fn load<O: AcquireOrRelaxed>(&self, _o: O) -> T { ... }
}
and acquire users would do:
let r = x.load(Acquire);
relaxed users:
let r = x.load(Relaxed);
doing the following:
let r = x.load(Release);
will cause a compiler error.
Compared to suffixes, it's easier to tell what ordering variants an
operation has, and it also makes it easier to unify the implementation of
all ordering variants in one method via generics. The `IS_RELAXED` and
`ORDER` associated consts are for generic functions to pick the
particular implementation specified by an ordering annotation.
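For illustration, a generic method can select the concrete C variant from
the `ORDER` associated const roughly like this (a minimal sketch;
`generic_xchg()` is a hypothetical name, `AtomicHasXchgOps` comes from the
previous patch and the ordering types from this one):

    /// # Safety
    ///
    /// `a` must be a valid pointer and accesses to it must not cause LKMM
    /// data races.
    unsafe fn generic_xchg<T: AtomicHasXchgOps, O: All>(a: *mut T, v: T, _: O) -> T {
        // SAFETY: per this function's safety requirements.
        unsafe {
            match O::ORDER {
                OrderingDesc::Full => T::atomic_xchg(a, v),
                OrderingDesc::Acquire => T::atomic_xchg_acquire(a, v),
                OrderingDesc::Release => T::atomic_xchg_release(a, v),
                OrderingDesc::Relaxed => T::atomic_xchg_relaxed(a, v),
            }
        }
    }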
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
rust/kernel/sync/atomic.rs | 3 +
rust/kernel/sync/atomic/ordering.rs | 94 +++++++++++++++++++++++++++++
2 files changed, 97 insertions(+)
create mode 100644 rust/kernel/sync/atomic/ordering.rs
diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index 65e41dba97b7..9fe5d81fc2a9 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -17,3 +17,6 @@
//! [`LKMM`]: srctree/tools/memory-model/
pub mod ops;
+pub mod ordering;
+
+pub use ordering::{Acquire, Full, Relaxed, Release};
diff --git a/rust/kernel/sync/atomic/ordering.rs b/rust/kernel/sync/atomic/ordering.rs
new file mode 100644
index 000000000000..14cda8c5d1b1
--- /dev/null
+++ b/rust/kernel/sync/atomic/ordering.rs
@@ -0,0 +1,94 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Memory orderings.
+//!
+//! The semantics of these orderings follows the [`LKMM`] definitions and rules.
+//!
+//! - [`Acquire`] and [`Release`] are similar to their counterparts in the Rust memory model.
+//! - [`Full`] means "fully-ordered", that is:
+//! - It provides ordering between all the preceding memory accesses and the annotated operation.
+//! - It provides ordering between the annotated operation and all the following memory accesses.
+//! - It provides ordering between all the preceding memory accesses and all the following memory
+//! accesses.
+//! - All the orderings are as strong as a full memory barrier (i.e. `smp_mb()`).
+//! - [`Relaxed`] is similar to its counterpart in the Rust memory model, except that dependency
+//! orderings are also honored in [`LKMM`]. Dependency orderings are described in "DEPENDENCY
+//! RELATIONS" in [`LKMM`]'s [`explanation`].
+//!
+//! [`LKMM`]: srctree/tools/memory-model/
+//! [`explanation`]: srctree/tools/memory-model/Documentation/explanation.txt
+
+/// The annotation type for relaxed memory ordering.
+pub struct Relaxed;
+
+/// The annotation type for acquire memory ordering.
+pub struct Acquire;
+
+/// The annotation type for release memory ordering.
+pub struct Release;
+
+/// The annotation type for fully-ordered memory ordering.
+pub struct Full;
+
+/// The trait bound for operations that only support relaxed ordering.
+pub trait RelaxedOnly: AcquireOrRelaxed + ReleaseOrRelaxed + All {}
+
+impl RelaxedOnly for Relaxed {}
+
+/// The trait bound for operations that only support acquire or relaxed ordering.
+pub trait AcquireOrRelaxed: All {
+ /// Describes whether an ordering is relaxed or not.
+ const IS_RELAXED: bool = false;
+}
+
+impl AcquireOrRelaxed for Acquire {}
+
+impl AcquireOrRelaxed for Relaxed {
+ const IS_RELAXED: bool = true;
+}
+
+/// The trait bound for operations that only support release or relaxed ordering.
+pub trait ReleaseOrRelaxed: All {
+ /// Describes whether an ordering is relaxed or not.
+ const IS_RELAXED: bool = false;
+}
+
+impl ReleaseOrRelaxed for Release {}
+
+impl ReleaseOrRelaxed for Relaxed {
+ const IS_RELAXED: bool = true;
+}
+
+/// Describes the exact memory ordering of an `impl` [`All`].
+pub enum OrderingDesc {
+ /// Relaxed ordering.
+ Relaxed,
+ /// Acquire ordering.
+ Acquire,
+ /// Release ordering.
+ Release,
+ /// Fully-ordered.
+ Full,
+}
+
+/// The trait bound for annotating operations that should support all orderings.
+pub trait All {
+ /// Describes the exact memory ordering.
+ const ORDER: OrderingDesc;
+}
+
+impl All for Relaxed {
+ const ORDER: OrderingDesc = OrderingDesc::Relaxed;
+}
+
+impl All for Acquire {
+ const ORDER: OrderingDesc = OrderingDesc::Acquire;
+}
+
+impl All for Release {
+ const ORDER: OrderingDesc = OrderingDesc::Release;
+}
+
+impl All for Full {
+ const ORDER: OrderingDesc = OrderingDesc::Full;
+}
--
2.39.5 (Apple Git-154)
* [PATCH v4 04/10] rust: sync: atomic: Add generic atomics
2025-06-09 22:46 [PATCH v4 00/10] LKMM generic atomics in Rust Boqun Feng
` (2 preceding siblings ...)
2025-06-09 22:46 ` [PATCH v4 03/10] rust: sync: atomic: Add ordering annotation types Boqun Feng
@ 2025-06-09 22:46 ` Boqun Feng
2025-06-09 22:46 ` [PATCH v4 05/10] rust: sync: atomic: Add atomic {cmp,}xchg operations Boqun Feng
` (5 subsequent siblings)
9 siblings, 0 replies; 16+ messages in thread
From: Boqun Feng @ 2025-06-09 22:46 UTC (permalink / raw)
To: linux-kernel, rust-for-linux, lkmm, linux-arch
Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra,
Mark Rutland, Wedson Almeida Filho, Viresh Kumar, Lyude Paul,
Ingo Molnar, Mitchell Levy, Paul E. McKenney, Greg Kroah-Hartman,
Linus Torvalds, Thomas Gleixner
To allow Rust code to use LKMM atomics, add a generic `Atomic<T>`.
Currently `T` needs to be Send + Copy because these are the
straightforward usages and all basic types support this. The trait
`AllowAtomic` should only be implemented inside the atomic mod until the
generic atomic framework is mature enough (unless the implementer is a
`#[repr(transparent)]` new type).

`AtomicImpl` types are automatically `AllowAtomic`, and so far only the
basic operations load() and store() are introduced.
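For illustration, basic usage after this patch looks roughly like the
following (a minimal sketch mirroring the doc examples in the diff below):

    use kernel::sync::atomic::{Atomic, Acquire, Relaxed, Release};

    let x = Atomic::new(42i32);          // maps to atomic_t
    x.store(43, Release);                // like smp_store_release()
    assert_eq!(43, x.load(Acquire));     // like smp_load_acquire()
    assert_eq!(43, x.load(Relaxed));     // like READ_ONCE()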
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
rust/kernel/sync/atomic.rs | 2 +
rust/kernel/sync/atomic/generic.rs | 258 +++++++++++++++++++++++++++++
2 files changed, 260 insertions(+)
create mode 100644 rust/kernel/sync/atomic/generic.rs
diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index 9fe5d81fc2a9..a01e44eec380 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -16,7 +16,9 @@
//!
//! [`LKMM`]: srctree/tools/memory-model/
+pub mod generic;
pub mod ops;
pub mod ordering;
+pub use generic::Atomic;
pub use ordering::{Acquire, Full, Relaxed, Release};
diff --git a/rust/kernel/sync/atomic/generic.rs b/rust/kernel/sync/atomic/generic.rs
new file mode 100644
index 000000000000..73c26f9cf6b8
--- /dev/null
+++ b/rust/kernel/sync/atomic/generic.rs
@@ -0,0 +1,258 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Generic atomic primitives.
+
+use super::ops::*;
+use super::ordering::*;
+use crate::types::Opaque;
+
+/// A generic atomic variable.
+///
+/// `T` must impl [`AllowAtomic`], that is, an [`AtomicImpl`] has to be chosen.
+///
+/// # Invariants
+///
+/// Doing an atomic operation while holding a reference of [`Self`] won't cause a data race; this
+/// is guaranteed by the safety requirement of [`Self::from_ptr`] and the extra safety requirement
+/// on the usage of pointers returned by [`Self::as_ptr`].
+#[repr(transparent)]
+pub struct Atomic<T: AllowAtomic>(Opaque<T>);
+
+// SAFETY: `Atomic<T>` is safe to share among execution contexts because all accesses are atomic.
+unsafe impl<T: AllowAtomic> Sync for Atomic<T> {}
+
+/// Atomics that support basic atomic operations.
+///
+/// TODO: Currently the [`AllowAtomic`] types are restricted to basic integer types (and their
+/// transparent new types). In the future, we could extend the scope to more data types when there
+/// is a clear and meaningful usage, but for now, [`AllowAtomic`] should only be implemented inside
+/// atomic mod for the restricted types mentioned above.
+///
+/// # Safety
+///
+/// [`Self`] must have the same size and alignment as [`Self::Repr`].
+pub unsafe trait AllowAtomic: Sized + Send + Copy {
+ /// The backing atomic implementation type.
+ type Repr: AtomicImpl;
+
+ /// Converts into a [`Self::Repr`].
+ fn into_repr(self) -> Self::Repr;
+
+ /// Converts from a [`Self::Repr`].
+ fn from_repr(repr: Self::Repr) -> Self;
+}
+
+// An `AtomicImpl` is automatically an `AllowAtomic`.
+//
+// SAFETY: `T::Repr` is `Self` (i.e. `T`), so they have the same size and alignment.
+unsafe impl<T: AtomicImpl> AllowAtomic for T {
+ type Repr = Self;
+
+ fn into_repr(self) -> Self::Repr {
+ self
+ }
+
+ fn from_repr(repr: Self::Repr) -> Self {
+ repr
+ }
+}
+
+impl<T: AllowAtomic> Atomic<T> {
+ /// Creates a new atomic.
+ pub const fn new(v: T) -> Self {
+ Self(Opaque::new(v))
+ }
+
+ /// Creates a reference to [`Self`] from a pointer.
+ ///
+ /// # Safety
+ ///
+ /// - `ptr` has to be a valid pointer.
+ /// - `ptr` has to be valid for both reads and writes for the whole lifetime `'a`.
+ /// - For the whole lifetime of `'a`, other accesses to the object cannot cause data races
+ /// (defined by [`LKMM`]) against atomic operations on the returned reference.
+ ///
+ /// [`LKMM`]: srctree/tools/memory-model
+ ///
+ /// # Examples
+ ///
+ /// Using [`Atomic::from_ptr()`] combined with [`Atomic::load()`] or [`Atomic::store()`] can
+ /// achieve the same functionality as `READ_ONCE()`/`smp_load_acquire()` or
+ /// `WRITE_ONCE()`/`smp_store_release()` in C side:
+ ///
+ /// ```rust
+ /// # use kernel::types::Opaque;
+ /// use kernel::sync::atomic::{Atomic, Relaxed, Release};
+ ///
+ /// // Assume there is a C struct `Foo`.
+ /// mod cbindings {
+ /// #[repr(C)]
+ /// pub(crate) struct foo { pub(crate) a: i32, pub(crate) b: i32 }
+ /// }
+ ///
+ /// let tmp = Opaque::new(cbindings::foo { a: 1, b: 2});
+ ///
+ /// // struct foo *foo_ptr = ..;
+ /// let foo_ptr = tmp.get();
+ ///
+ /// // SAFETY: `foo_ptr` is a valid pointer, and `.a` is in bounds.
+ /// let foo_a_ptr = unsafe { core::ptr::addr_of_mut!((*foo_ptr).a) };
+ ///
+ /// // a = READ_ONCE(foo_ptr->a);
+ /// //
+ /// // SAFETY: `foo_a_ptr` is a valid pointer for read, and all accesses to it are atomic, so
+ /// // there is no data race.
+ /// let a = unsafe { Atomic::from_ptr(foo_a_ptr) }.load(Relaxed);
+ /// # assert_eq!(a, 1);
+ ///
+ /// // smp_store_release(&foo_ptr->a, 2);
+ /// //
+ /// // SAFETY: `foo_a_ptr` is a valid pointer for write, and all accesses to it are atomic, so
+ /// // there is no data race.
+ /// unsafe { Atomic::from_ptr(foo_a_ptr) }.store(2, Release);
+ /// ```
+ ///
+ /// However, this should only be used when communicating with the C side or manipulating a C struct.
+ pub unsafe fn from_ptr<'a>(ptr: *mut T) -> &'a Self
+ where
+ T: Sync,
+ {
+ // CAST: `T` is transparent to `Atomic<T>`.
+ // SAFETY: Per function safety requirement, `ptr` is a valid pointer and the object will
+ // live long enough. It's safe to return a `&Atomic<T>` because function safety requirement
+ // guarantees other accesses won't cause data races.
+ unsafe { &*ptr.cast::<Self>() }
+ }
+
+ /// Returns a pointer to the underlying atomic variable.
+ ///
+ /// Extra safety requirement on using the return pointer: the operations done via the pointer
+ /// cannot cause data races defined by [`LKMM`].
+ ///
+ /// [`LKMM`]: srctree/tools/memory-model
+ pub const fn as_ptr(&self) -> *mut T {
+ self.0.get()
+ }
+
+ /// Returns a mutable reference to the underlying atomic variable.
+ ///
+ /// This is safe because the mutable reference to the atomic variable guarantees exclusive
+ /// access.
+ pub fn get_mut(&mut self) -> &mut T {
+ // SAFETY: `self.as_ptr()` is a valid pointer to `T`, and the object has already been
+ // initialized. `&mut self` guarantees the exclusive access, so it's safe to reborrow
+ // mutably.
+ unsafe { &mut *self.as_ptr() }
+ }
+}
+
+impl<T: AllowAtomic> Atomic<T>
+where
+ T::Repr: AtomicHasBasicOps,
+{
+ /// Loads the value from the atomic variable.
+ ///
+ /// # Examples
+ ///
+ /// Simple usages:
+ ///
+ /// ```rust
+ /// use kernel::sync::atomic::{Atomic, Relaxed};
+ ///
+ /// let x = Atomic::new(42i32);
+ ///
+ /// assert_eq!(42, x.load(Relaxed));
+ ///
+ /// let x = Atomic::new(42i64);
+ ///
+ /// assert_eq!(42, x.load(Relaxed));
+ /// ```
+ ///
+ /// Customized new types in [`Atomic`]:
+ ///
+ /// ```rust
+ /// use kernel::sync::atomic::{generic::AllowAtomic, Atomic, Relaxed};
+ ///
+ /// #[derive(Clone, Copy)]
+ /// #[repr(transparent)]
+ /// struct NewType(u32);
+ ///
+ /// // SAFETY: `NewType` is transparent to `u32`, which has the same size and alignment as
+ /// // `i32`.
+ /// unsafe impl AllowAtomic for NewType {
+ /// type Repr = i32;
+ ///
+ /// fn into_repr(self) -> Self::Repr {
+ /// self.0 as i32
+ /// }
+ ///
+ /// fn from_repr(repr: Self::Repr) -> Self {
+ /// NewType(repr as u32)
+ /// }
+ /// }
+ ///
+ /// let n = Atomic::new(NewType(0));
+ ///
+ /// assert_eq!(0, n.load(Relaxed).0);
+ /// ```
+ #[doc(alias("atomic_read", "atomic64_read"))]
+ #[inline(always)]
+ pub fn load<Ordering: AcquireOrRelaxed>(&self, _: Ordering) -> T {
+ let a = self.as_ptr().cast::<T::Repr>();
+
+ // SAFETY:
+ // - For calling the atomic_read*() function:
+ // - `self.as_ptr()` is a valid pointer, and per the safety requirement of `AllowAtomic`,
+ // a `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a valid pointer,
+ // - per the type invariants, the following atomic operation won't cause data races.
+ // - For the extra safety requirement on the usage of pointers returned by `self.as_ptr()`:
+ // - atomic operations are used here.
+ let v = unsafe {
+ if Ordering::IS_RELAXED {
+ T::Repr::atomic_read(a)
+ } else {
+ T::Repr::atomic_read_acquire(a)
+ }
+ };
+
+ T::from_repr(v)
+ }
+
+ /// Stores a value to the atomic variable.
+ ///
+ /// # Examples
+ ///
+ /// ```rust
+ /// use kernel::sync::atomic::{Atomic, Relaxed};
+ ///
+ /// let x = Atomic::new(42i32);
+ ///
+ /// assert_eq!(42, x.load(Relaxed));
+ ///
+ /// x.store(43, Relaxed);
+ ///
+ /// assert_eq!(43, x.load(Relaxed));
+ /// ```
+ ///
+ #[doc(alias("atomic_set", "atomic64_set"))]
+ #[inline(always)]
+ pub fn store<Ordering: ReleaseOrRelaxed>(&self, v: T, _: Ordering) {
+ let v = T::into_repr(v);
+ let a = self.as_ptr().cast::<T::Repr>();
+
+ // SAFETY:
+ // - For calling the atomic_set*() function:
+ // - `self.as_ptr()` is a valid pointer, and per the safety requirement of `AllowAtomic`,
+ // a `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a valid pointer,
+ // - per the type invariants, the following atomic operation won't cause data races.
+ // - For the extra safety requirement on the usage of pointers returned by `self.as_ptr()`:
+ // - atomic operations are used here.
+ unsafe {
+ if Ordering::IS_RELAXED {
+ T::Repr::atomic_set(a, v)
+ } else {
+ T::Repr::atomic_set_release(a, v)
+ }
+ };
+ }
+}
--
2.39.5 (Apple Git-154)
* [PATCH v4 05/10] rust: sync: atomic: Add atomic {cmp,}xchg operations
2025-06-09 22:46 [PATCH v4 00/10] LKMM generic atomics in Rust Boqun Feng
` (3 preceding siblings ...)
2025-06-09 22:46 ` [PATCH v4 04/10] rust: sync: atomic: Add generic atomics Boqun Feng
@ 2025-06-09 22:46 ` Boqun Feng
2025-06-09 22:46 ` [PATCH v4 06/10] rust: sync: atomic: Add the framework of arithmetic operations Boqun Feng
` (4 subsequent siblings)
9 siblings, 0 replies; 16+ messages in thread
From: Boqun Feng @ 2025-06-09 22:46 UTC (permalink / raw)
To: linux-kernel, rust-for-linux, lkmm, linux-arch
Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra,
Mark Rutland, Wedson Almeida Filho, Viresh Kumar, Lyude Paul,
Ingo Molnar, Mitchell Levy, Paul E. McKenney, Greg Kroah-Hartman,
Linus Torvalds, Thomas Gleixner
xchg() and cmpxchg() are basic atomic operations. Provide these based
on the C APIs.

Note that cmpxchg() uses a function signature similar to
compare_exchange() in the Rust std: it returns a `Result`, where
`Ok(old)` means the operation succeeded and `Err(old)` means it failed.
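For illustration (a minimal sketch mirroring the doc examples in the diff
below):

    use kernel::sync::atomic::{Atomic, Full, Relaxed};

    let x = Atomic::new(42i32);

    // Succeeds: the current value matches `old` (42); Ok() carries the old value.
    assert_eq!(Ok(42), x.cmpxchg(42, 52, Full));

    // Fails: the current value is now 52; Err() carries the latest value.
    assert_eq!(Err(52), x.cmpxchg(42, 64, Relaxed));

    // xchg() unconditionally swaps and returns the previous value.
    assert_eq!(52, x.xchg(64, Full));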
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
rust/kernel/sync/atomic/generic.rs | 154 +++++++++++++++++++++++++++++
1 file changed, 154 insertions(+)
diff --git a/rust/kernel/sync/atomic/generic.rs b/rust/kernel/sync/atomic/generic.rs
index 73c26f9cf6b8..39a9e208e767 100644
--- a/rust/kernel/sync/atomic/generic.rs
+++ b/rust/kernel/sync/atomic/generic.rs
@@ -256,3 +256,157 @@ pub fn store<Ordering: ReleaseOrRelaxed>(&self, v: T, _: Ordering) {
};
}
}
+
+impl<T: AllowAtomic> Atomic<T>
+where
+ T::Repr: AtomicHasXchgOps,
+{
+ /// Atomic exchange.
+ ///
+ /// # Examples
+ ///
+ /// ```rust
+ /// use kernel::sync::atomic::{Atomic, Acquire, Relaxed};
+ ///
+ /// let x = Atomic::new(42);
+ ///
+ /// assert_eq!(42, x.xchg(52, Acquire));
+ /// assert_eq!(52, x.load(Relaxed));
+ /// ```
+ #[doc(alias("atomic_xchg", "atomic64_xchg"))]
+ #[inline(always)]
+ pub fn xchg<Ordering: All>(&self, v: T, _: Ordering) -> T {
+ let v = T::into_repr(v);
+ let a = self.as_ptr().cast::<T::Repr>();
+
+ // SAFETY:
+ // - For calling the atomic_xchg*() function:
+ // - `self.as_ptr()` is a valid pointer, and per the safety requirement of `AllowAtomic`,
+ // a `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a valid pointer,
+ // - per the type invariants, the following atomic operation won't cause data races.
+ // - For the extra safety requirement on the usage of pointers returned by `self.as_ptr()`:
+ // - atomic operations are used here.
+ let ret = unsafe {
+ match Ordering::ORDER {
+ OrderingDesc::Full => T::Repr::atomic_xchg(a, v),
+ OrderingDesc::Acquire => T::Repr::atomic_xchg_acquire(a, v),
+ OrderingDesc::Release => T::Repr::atomic_xchg_release(a, v),
+ OrderingDesc::Relaxed => T::Repr::atomic_xchg_relaxed(a, v),
+ }
+ };
+
+ T::from_repr(ret)
+ }
+
+ /// Atomic compare and exchange.
+ ///
+ /// Compare: The comparison is done as a byte-level comparison between the value of the atomic
+ /// variable and the `old` value.
+ ///
+ /// Ordering: When the operation succeeds, it provides the ordering indicated by the `Ordering`
+ /// type parameter; a failed operation doesn't provide any ordering, and the read part of a
+ /// failed cmpxchg should be treated as a relaxed read.
+ ///
+ /// Returns `Ok(value)` if the cmpxchg succeeds, in which case `value` is guaranteed to be
+ /// equal to `old`; otherwise returns `Err(value)`, where `value` is the value of the atomic
+ /// variable at the time the cmpxchg happened.
+ ///
+ /// # Examples
+ ///
+ /// ```rust
+ /// use kernel::sync::atomic::{Atomic, Full, Relaxed};
+ ///
+ /// let x = Atomic::new(42);
+ ///
+ /// // Checks whether cmpxchg succeeded.
+ /// let success = x.cmpxchg(52, 64, Relaxed).is_ok();
+ /// # assert!(!success);
+ ///
+ /// // Checks whether cmpxchg failed.
+ /// let failure = x.cmpxchg(52, 64, Relaxed).is_err();
+ /// # assert!(failure);
+ ///
+ /// // Uses the old value if the cmpxchg failed; a caller would typically retry.
+ /// match x.cmpxchg(52, 64, Relaxed) {
+ /// Ok(_) => { },
+ /// Err(old) => {
+ /// // do something with `old`.
+ /// # assert_eq!(old, 42);
+ /// }
+ /// }
+ ///
+ /// // Uses the latest value regardless of success, same as atomic_cmpxchg() in C.
+ /// let latest = x.cmpxchg(42, 64, Full).unwrap_or_else(|old| old);
+ /// # assert_eq!(42, latest);
+ /// assert_eq!(64, x.load(Relaxed));
+ /// ```
+ #[doc(alias(
+ "atomic_cmpxchg",
+ "atomic64_cmpxchg",
+ "atomic_try_cmpxchg",
+ "atomic64_try_cmpxchg"
+ ))]
+ #[inline(always)]
+ pub fn cmpxchg<Ordering: All>(&self, mut old: T, new: T, o: Ordering) -> Result<T, T> {
+ // Note on code generation:
+ //
+ // try_cmpxchg() is used to implement cmpxchg(), and if the helper functions are inlined,
+ // the compiler is able to figure out that the branch is not needed if the caller doesn't care
+ // about whether the operation succeeds or not. One exception is on x86, due to commit
+ // 44fe84459faf ("locking/atomic: Fix atomic_try_cmpxchg() semantics"), the
+ // atomic_try_cmpxchg() on x86 has a branch even if the caller doesn't care about the
+ // success of cmpxchg and only wants to use the old value. For example, for code like:
+ //
+ // let latest = x.cmpxchg(42, 64, Full).unwrap_or_else(|old| old);
+ //
+ // It will still generate code:
+ //
+ // movl $0x40, %ecx
+ // movl $0x34, %eax
+ // lock
+ // cmpxchgl %ecx, 0x4(%rsp)
+ // jne 1f
+ // 2:
+ // ...
+ // 1: movl %eax, %ecx
+ // jmp 2b
+ //
+ // This might be "fixed" by introducing a try_cmpxchg_exclusive() that knows the "*old"
+ // location in the C function is always safe to write.
+ if self.try_cmpxchg(&mut old, new, o) {
+ Ok(old)
+ } else {
+ Err(old)
+ }
+ }
+
+ /// Atomic compare and exchange that returns whether the operation succeeds.
+ ///
+ /// "Compare" and "Ordering" part are the same as [`Atomic::cmpxchg()`].
+ ///
+ /// Returns `true` if the cmpxchg succeeds; otherwise returns `false` and updates `old` to the
+ /// value of the atomic variable at the time the cmpxchg happened.
+ #[inline(always)]
+ fn try_cmpxchg<Ordering: All>(&self, old: &mut T, new: T, _: Ordering) -> bool {
+ let old = (old as *mut T).cast::<T::Repr>();
+ let new = T::into_repr(new);
+ let a = self.0.get().cast::<T::Repr>();
+
+ // SAFETY:
+ // - For calling the atomic_try_cmpxchg*() function:
+ // - `self.as_ptr()` is a valid pointer, and per the safety requirement of `AllowAtomic`,
+ // a `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a valid pointer,
+ // - per the type invariants, the following atomic operation won't cause data races.
+ // - `old` is a valid pointer to write because it comes from a mutable reference.
+ // - For the extra safety requirement on the usage of pointers returned by `self.as_ptr()`:
+ // - atomic operations are used here.
+ unsafe {
+ match Ordering::ORDER {
+ OrderingDesc::Full => T::Repr::atomic_try_cmpxchg(a, old, new),
+ OrderingDesc::Acquire => T::Repr::atomic_try_cmpxchg_acquire(a, old, new),
+ OrderingDesc::Release => T::Repr::atomic_try_cmpxchg_release(a, old, new),
+ OrderingDesc::Relaxed => T::Repr::atomic_try_cmpxchg_relaxed(a, old, new),
+ }
+ }
+ }
+}
--
2.39.5 (Apple Git-154)
^ permalink raw reply related [flat|nested] 16+ messages in thread
* [PATCH v4 06/10] rust: sync: atomic: Add the framework of arithmetic operations
2025-06-09 22:46 [PATCH v4 00/10] LKMM generic atomics in Rust Boqun Feng
` (4 preceding siblings ...)
2025-06-09 22:46 ` [PATCH v4 05/10] rust: sync: atomic: Add atomic {cmp,}xchg operations Boqun Feng
@ 2025-06-09 22:46 ` Boqun Feng
2025-06-09 22:46 ` [PATCH v4 07/10] rust: sync: atomic: Add Atomic<u{32,64}> Boqun Feng
` (3 subsequent siblings)
9 siblings, 0 replies; 16+ messages in thread
From: Boqun Feng @ 2025-06-09 22:46 UTC (permalink / raw)
To: linux-kernel, rust-for-linux, lkmm, linux-arch
Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra,
Mark Rutland, Wedson Almeida Filho, Viresh Kumar, Lyude Paul,
Ingo Molnar, Mitchell Levy, Paul E. McKenney, Greg Kroah-Hartman,
Linus Torvalds, Thomas Gleixner
One important set of atomic operations is the arithmetic operations,
i.e. add(), sub(), fetch_add(), add_return(), etc. However, it may not
make sense for all types that implement `AllowAtomic` to have arithmetic
operations; for example, a `Foo(u32)` may not have a reasonable add() or
sub(). In addition, subword types (`u8` and `u16`) currently don't have
atomic arithmetic operations even on the C side and might not have them
in Rust in the future (because they are usually suboptimal on a few
architectures). Therefore add a subtrait of `AllowAtomic` describing
which types have and can do atomic arithmetic operations.
A few things about this `AllowAtomicArithmetic` trait:
* It has an associated type `Delta` instead of using
`AllowAtomic::Repr` because a `Bar(u32)` (whose `Repr` is `i32`)
may not want an `add(&self, i32)`, but an `add(&self, u32)` (see the
sketch below).
* `AtomicImpl` types already implement an `AtomicHasArithmeticOps`
trait, so add a blanket implementation for them. In the future, `i8` and
`i16` may impl `AtomicImpl` but not `AtomicHasArithmeticOps` if
arithmetic operations are not available.
Only add() and fetch_add() are added. The rest will be added in the
future.
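To illustrate the `Delta` point, a purely hypothetical `Bar(u32)` wrapper
(not part of this series) that already implements `AllowAtomic` with
`Repr = i32` could expose a `u32` delta, roughly like:

    impl AllowAtomicArithmetic for Bar {
        // Callers pass the delta as `u32` even though `Repr` is `i32`.
        type Delta = u32;

        fn delta_into_repr(d: Self::Delta) -> Self::Repr {
            d as i32
        }
    }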
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
rust/kernel/sync/atomic/generic.rs | 102 +++++++++++++++++++++++++++++
1 file changed, 102 insertions(+)
diff --git a/rust/kernel/sync/atomic/generic.rs b/rust/kernel/sync/atomic/generic.rs
index 39a9e208e767..f0bc831e8079 100644
--- a/rust/kernel/sync/atomic/generic.rs
+++ b/rust/kernel/sync/atomic/generic.rs
@@ -3,6 +3,7 @@
//! Generic atomic primitives.
use super::ops::*;
+use super::ordering;
use super::ordering::*;
use crate::types::Opaque;
@@ -57,6 +58,23 @@ fn from_repr(repr: Self::Repr) -> Self {
}
}
+/// Atomics that allow arithmetic operations with an integer type.
+pub trait AllowAtomicArithmetic: AllowAtomic {
+ /// The delta type for arithmetic operations.
+ type Delta;
+
+ /// Converts [`Self::Delta`] into the representation of the atomic type.
+ fn delta_into_repr(d: Self::Delta) -> Self::Repr;
+}
+
+impl<T: AtomicImpl + AtomicHasArithmeticOps> AllowAtomicArithmetic for T {
+ type Delta = Self;
+
+ fn delta_into_repr(d: Self::Delta) -> Self::Repr {
+ d
+ }
+}
+
impl<T: AllowAtomic> Atomic<T> {
/// Creates a new atomic.
pub const fn new(v: T) -> Self {
@@ -410,3 +428,87 @@ fn try_cmpxchg<Ordering: All>(&self, old: &mut T, new: T, _: Ordering) -> bool {
}
}
}
+
+impl<T: AllowAtomicArithmetic> Atomic<T>
+where
+ T::Repr: AtomicHasArithmeticOps,
+{
+ /// Atomic add.
+ ///
+ /// The addition is a wrapping addition.
+ ///
+ /// # Examples
+ ///
+ /// ```rust
+ /// use kernel::sync::atomic::{Atomic, Relaxed};
+ ///
+ /// let x = Atomic::new(42);
+ ///
+ /// assert_eq!(42, x.load(Relaxed));
+ ///
+ /// x.add(12, Relaxed);
+ ///
+ /// assert_eq!(54, x.load(Relaxed));
+ /// ```
+ #[inline(always)]
+ pub fn add<Ordering: RelaxedOnly>(&self, v: T::Delta, _: Ordering) {
+ let v = T::delta_into_repr(v);
+ let a = self.as_ptr().cast::<T::Repr>();
+
+ // SAFETY:
+ // - For calling the atomic_add() function:
+ // - `self.as_ptr()` is a valid pointer, and per the safety requirement of `AllowAtomic`,
+ // a `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a valid pointer,
+ // - per the type invariants, the following atomic operation won't cause data races.
+ // - For the extra safety requirement on the usage of pointers returned by `self.as_ptr()`:
+ // - atomic operations are used here.
+ unsafe {
+ T::Repr::atomic_add(a, v);
+ }
+ }
+
+ /// Atomic fetch and add.
+ ///
+ /// The addition is a wrapping addition.
+ ///
+ /// # Examples
+ ///
+ /// ```rust
+ /// use kernel::sync::atomic::{Atomic, Acquire, Full, Relaxed};
+ ///
+ /// let x = Atomic::new(42);
+ ///
+ /// assert_eq!(42, x.load(Relaxed));
+ ///
+ /// assert_eq!(54, { x.fetch_add(12, Acquire); x.load(Relaxed) });
+ ///
+ /// let x = Atomic::new(42);
+ ///
+ /// assert_eq!(42, x.load(Relaxed));
+ ///
+ /// assert_eq!(54, { x.fetch_add(12, Full); x.load(Relaxed) } );
+ /// ```
+ #[inline(always)]
+ pub fn fetch_add<Ordering: All>(&self, v: T::Delta, _: Ordering) -> T {
+ let v = T::delta_into_repr(v);
+ let a = self.as_ptr().cast::<T::Repr>();
+
+ // SAFETY:
+ // - For calling the atomic_fetch_add*() function:
+ // - `self.as_ptr()` is a valid pointer, and per the safety requirement of `AllowAtomic`,
+ // a `*mut T` is a valid `*mut T::Repr`. Therefore `a` is a valid pointer,
+ // - per the type invariants, the following atomic operation won't cause data races.
+ // - For the extra safety requirement on the usage of pointers returned by `self.as_ptr()`:
+ // - atomic operations are used here.
+ let ret = unsafe {
+ match Ordering::ORDER {
+ ordering::OrderingDesc::Full => T::Repr::atomic_fetch_add(a, v),
+ ordering::OrderingDesc::Acquire => T::Repr::atomic_fetch_add_acquire(a, v),
+ ordering::OrderingDesc::Release => T::Repr::atomic_fetch_add_release(a, v),
+ ordering::OrderingDesc::Relaxed => T::Repr::atomic_fetch_add_relaxed(a, v),
+ }
+ };
+
+ T::from_repr(ret)
+ }
+}
--
2.39.5 (Apple Git-154)
^ permalink raw reply related [flat|nested] 16+ messages in thread
* [PATCH v4 07/10] rust: sync: atomic: Add Atomic<u{32,64}>
2025-06-09 22:46 [PATCH v4 00/10] LKMM generic atomics in Rust Boqun Feng
` (5 preceding siblings ...)
2025-06-09 22:46 ` [PATCH v4 06/10] rust: sync: atomic: Add the framework of arithmetic operations Boqun Feng
@ 2025-06-09 22:46 ` Boqun Feng
2025-06-09 22:46 ` [PATCH v4 08/10] rust: sync: atomic: Add Atomic<{usize,isize}> Boqun Feng
` (2 subsequent siblings)
9 siblings, 0 replies; 16+ messages in thread
From: Boqun Feng @ 2025-06-09 22:46 UTC (permalink / raw)
To: linux-kernel, rust-for-linux, lkmm, linux-arch
Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra,
Mark Rutland, Wedson Almeida Filho, Viresh Kumar, Lyude Paul,
Ingo Molnar, Mitchell Levy, Paul E. McKenney, Greg Kroah-Hartman,
Linus Torvalds, Thomas Gleixner
Add generic atomic support for basic unsigned types that have an
`AtomicImpl` with the same size and alignment.
Unit tests are added, covering the existing Atomic<i32> and Atomic<i64>
as well as the new Atomic<u32> and Atomic<u64>.
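For example, with this patch an unsigned counter can use the generic
atomic type directly; a minimal sketch (the counter itself is made up):

    use kernel::sync::atomic::{Atomic, Full, Relaxed};

    let bytes_sent = Atomic::new(0u64);

    // fetch_add() returns the old value; add() is relaxed-only.
    assert_eq!(0, bytes_sent.fetch_add(512, Full));
    bytes_sent.add(512, Relaxed);

    assert_eq!(1024, bytes_sent.load(Relaxed));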
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
rust/kernel/sync/atomic.rs | 83 ++++++++++++++++++++++++++++++++++++++
1 file changed, 83 insertions(+)
diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index a01e44eec380..9039591b4d46 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -22,3 +22,86 @@
pub use generic::Atomic;
pub use ordering::{Acquire, Full, Relaxed, Release};
+
+// SAFETY: `u64` and `i64` have the same size and alignment.
+unsafe impl generic::AllowAtomic for u64 {
+ type Repr = i64;
+
+ fn into_repr(self) -> Self::Repr {
+ self as _
+ }
+
+ fn from_repr(repr: Self::Repr) -> Self {
+ repr as _
+ }
+}
+
+impl generic::AllowAtomicArithmetic for u64 {
+ type Delta = u64;
+
+ fn delta_into_repr(d: Self::Delta) -> Self::Repr {
+ d as _
+ }
+}
+
+// SAFETY: `u32` and `i32` have the same size and alignment.
+unsafe impl generic::AllowAtomic for u32 {
+ type Repr = i32;
+
+ fn into_repr(self) -> Self::Repr {
+ self as _
+ }
+
+ fn from_repr(repr: Self::Repr) -> Self {
+ repr as _
+ }
+}
+
+impl generic::AllowAtomicArithmetic for u32 {
+ type Delta = u32;
+
+ fn delta_into_repr(d: Self::Delta) -> Self::Repr {
+ d as _
+ }
+}
+
+use crate::macros::kunit_tests;
+
+#[kunit_tests(rust_atomics)]
+mod tests {
+ use super::*;
+
+ // Call $fn($val) with each $type of $val.
+ macro_rules! for_each_type {
+ ($val:literal in [$($type:ty),*] $fn:expr) => {
+ $({
+ let v: $type = $val;
+
+ $fn(v);
+ })*
+ }
+ }
+
+ #[test]
+ fn atomic_basic_tests() {
+ for_each_type!(42 in [i32, i64, u32, u64] |v| {
+ let x = Atomic::new(v);
+
+ assert_eq!(v, x.load(Relaxed));
+ });
+ }
+
+ #[test]
+ fn atomic_arithmetic_tests() {
+ for_each_type!(42 in [i32, i64, u32, u64] |v| {
+ let x = Atomic::new(v);
+
+ assert_eq!(v, x.fetch_add(12, Full));
+ assert_eq!(v + 12, x.load(Relaxed));
+
+ x.add(13, Relaxed);
+
+ assert_eq!(v + 25, x.load(Relaxed));
+ });
+ }
+}
--
2.39.5 (Apple Git-154)
^ permalink raw reply related [flat|nested] 16+ messages in thread
* [PATCH v4 08/10] rust: sync: atomic: Add Atomic<{usize,isize}>
2025-06-09 22:46 [PATCH v4 00/10] LKMM generic atomics in Rust Boqun Feng
` (6 preceding siblings ...)
2025-06-09 22:46 ` [PATCH v4 07/10] rust: sync: atomic: Add Atomic<u{32,64}> Boqun Feng
@ 2025-06-09 22:46 ` Boqun Feng
2025-06-09 22:46 ` [PATCH v4 09/10] rust: sync: atomic: Add Atomic<*mut T> Boqun Feng
2025-06-09 22:46 ` [PATCH v4 10/10] rust: sync: Add memory barriers Boqun Feng
9 siblings, 0 replies; 16+ messages in thread
From: Boqun Feng @ 2025-06-09 22:46 UTC (permalink / raw)
To: linux-kernel, rust-for-linux, lkmm, linux-arch
Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra,
Mark Rutland, Wedson Almeida Filho, Viresh Kumar, Lyude Paul,
Ingo Molnar, Mitchell Levy, Paul E. McKenney, Greg Kroah-Hartman,
Linus Torvalds, Thomas Gleixner
Add generic atomic support for `usize` and `isize`. Note that instead of
mapping directly to `atomic_long_t`, the representation type
(`AllowAtomic::Repr`) is selected based on CONFIG_64BIT. This avoids the
need to create `atomic_long_*` helpers, which could reduce the kernel's
binary size if inline helpers are not available.
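For example, the same code can then keep an in-flight count in an
`Atomic<usize>` on both 32-bit and 64-bit kernels; a minimal sketch (the
counter itself is made up):

    use kernel::sync::atomic::{Atomic, Relaxed};

    let in_flight = Atomic::new(0usize);

    in_flight.add(1, Relaxed);
    assert_eq!(1, in_flight.load(Relaxed));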
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
rust/kernel/sync/atomic.rs | 54 ++++++++++++++++++++++++++++++++++++--
1 file changed, 52 insertions(+), 2 deletions(-)
diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index 9039591b4d46..e36431f0b42c 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -65,6 +65,56 @@ fn delta_into_repr(d: Self::Delta) -> Self::Repr {
}
}
+// SAFETY: `usize` has the same size and alignment as `i64` on 64-bit and the same as `i32` on
+// 32-bit.
+unsafe impl generic::AllowAtomic for usize {
+ #[cfg(CONFIG_64BIT)]
+ type Repr = i64;
+ #[cfg(not(CONFIG_64BIT))]
+ type Repr = i32;
+
+ fn into_repr(self) -> Self::Repr {
+ self as Self::Repr
+ }
+
+ fn from_repr(repr: Self::Repr) -> Self {
+ repr as Self
+ }
+}
+
+impl generic::AllowAtomicArithmetic for usize {
+ type Delta = usize;
+
+ fn delta_into_repr(d: Self::Delta) -> Self::Repr {
+ d as Self::Repr
+ }
+}
+
+// SAFETY: `isize` has the same size and alignment as `i64` on 64-bit and the same as `i32` on
+// 32-bit.
+unsafe impl generic::AllowAtomic for isize {
+ #[cfg(CONFIG_64BIT)]
+ type Repr = i64;
+ #[cfg(not(CONFIG_64BIT))]
+ type Repr = i32;
+
+ fn into_repr(self) -> Self::Repr {
+ self as Self::Repr
+ }
+
+ fn from_repr(repr: Self::Repr) -> Self {
+ repr as Self
+ }
+}
+
+impl generic::AllowAtomicArithmetic for isize {
+ type Delta = isize;
+
+ fn delta_into_repr(d: Self::Delta) -> Self::Repr {
+ d as Self::Repr
+ }
+}
+
use crate::macros::kunit_tests;
#[kunit_tests(rust_atomics)]
@@ -84,7 +134,7 @@ macro_rules! for_each_type {
#[test]
fn atomic_basic_tests() {
- for_each_type!(42 in [i32, i64, u32, u64] |v| {
+ for_each_type!(42 in [i32, i64, u32, u64, isize, usize] |v| {
let x = Atomic::new(v);
assert_eq!(v, x.load(Relaxed));
@@ -93,7 +143,7 @@ fn atomic_basic_tests() {
#[test]
fn atomic_arithmetic_tests() {
- for_each_type!(42 in [i32, i64, u32, u64] |v| {
+ for_each_type!(42 in [i32, i64, u32, u64, isize, usize] |v| {
let x = Atomic::new(v);
assert_eq!(v, x.fetch_add(12, Full));
--
2.39.5 (Apple Git-154)
^ permalink raw reply related [flat|nested] 16+ messages in thread
* [PATCH v4 09/10] rust: sync: atomic: Add Atomic<*mut T>
2025-06-09 22:46 [PATCH v4 00/10] LKMM generic atomics in Rust Boqun Feng
` (7 preceding siblings ...)
2025-06-09 22:46 ` [PATCH v4 08/10] rust: sync: atomic: Add Atomic<{usize,isize}> Boqun Feng
@ 2025-06-09 22:46 ` Boqun Feng
2025-06-09 22:46 ` [PATCH v4 10/10] rust: sync: Add memory barriers Boqun Feng
9 siblings, 0 replies; 16+ messages in thread
From: Boqun Feng @ 2025-06-09 22:46 UTC (permalink / raw)
To: linux-kernel, rust-for-linux, lkmm, linux-arch
Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra,
Mark Rutland, Wedson Almeida Filho, Viresh Kumar, Lyude Paul,
Ingo Molnar, Mitchell Levy, Paul E. McKenney, Greg Kroah-Hartman,
Linus Torvalds, Thomas Gleixner
Add atomic support for raw pointer values. Similar to `isize` and
`usize`, the representation type is selected based on CONFIG_64BIT.
`*mut T` is not `Send`, however `Atomic<*mut T>` definitely needs to be
`Sync`, and that's the whole point of atomics: being able to have
multiple shared references in different threads so that they can sync
with each other. As a result, a pointer value will be transferred from
one thread to another via `Atomic<*mut T>`:
<thread 1>                              <thread 2>
x.store(p1, Relaxed);
                                        let p = x.load(Relaxed);
This means a raw pointer value (`*mut T`) needs to be able to be
transferred across thread boundaries, which is essentially `Send`.
To reflect this in the type system, and based on the fact that pointer
values can be transferred safely (only dereferencing them is unsafe),
as suggested by Alice, extend the `AllowAtomic` trait to include
customized `Send` semantics, that is: an `impl AllowAtomic` type has to
be safe to transfer across thread boundaries.
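As a rough illustration of the intended usage (a sketch only; the helper
functions and the Release/Acquire ordering choices are made up and depend
on the actual use case):

    use kernel::sync::atomic::{Atomic, Acquire, Release};

    // One context publishes a pointer...
    fn publish(slot: &Atomic<*mut i32>, p: *mut i32) {
        slot.store(p, Release);
    }

    // ...another context picks it up. Dereferencing the returned pointer
    // is still `unsafe` and needs its own justification.
    fn consume(slot: &Atomic<*mut i32>) -> *mut i32 {
        slot.load(Acquire)
    }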
Suggested-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
rust/kernel/sync/atomic.rs | 19 +++++++++++++++++++
rust/kernel/sync/atomic/generic.rs | 16 +++++++++++++---
2 files changed, 32 insertions(+), 3 deletions(-)
diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
index e36431f0b42c..e4dd31a3e3e2 100644
--- a/rust/kernel/sync/atomic.rs
+++ b/rust/kernel/sync/atomic.rs
@@ -114,6 +114,22 @@ fn delta_into_repr(d: Self::Delta) -> Self::Repr {
d as Self::Repr
}
}
+// SAFETY: A `*mut T` has the same size and alignment as `i64` on 64-bit and the same as `i32`
+// on 32-bit. And it's safe to transfer the ownership of a pointer value to another thread.
+unsafe impl<T> generic::AllowAtomic for *mut T {
+ #[cfg(CONFIG_64BIT)]
+ type Repr = i64;
+ #[cfg(not(CONFIG_64BIT))]
+ type Repr = i32;
+
+ fn into_repr(self) -> Self::Repr {
+ self as Self::Repr
+ }
+
+ fn from_repr(repr: Self::Repr) -> Self {
+ repr as Self
+ }
+}
use crate::macros::kunit_tests;
@@ -139,6 +155,9 @@ fn atomic_basic_tests() {
assert_eq!(v, x.load(Relaxed));
});
+
+ let x = Atomic::new(core::ptr::null_mut::<i32>());
+ assert!(x.load(Relaxed).is_null());
}
#[test]
diff --git a/rust/kernel/sync/atomic/generic.rs b/rust/kernel/sync/atomic/generic.rs
index f0bc831e8079..e2f60e89fbbb 100644
--- a/rust/kernel/sync/atomic/generic.rs
+++ b/rust/kernel/sync/atomic/generic.rs
@@ -19,6 +19,10 @@
#[repr(transparent)]
pub struct Atomic<T: AllowAtomic>(Opaque<T>);
+// SAFETY: `Atomic<T>` is safe to send between execution contexts, because `T` is `AllowAtomic` and
+// `AllowAtomic`'s safety requirement guarantees that.
+unsafe impl<T: AllowAtomic> Send for Atomic<T> {}
+
// SAFETY: `Atomic<T>` is safe to share among execution contexts because all accesses are atomic.
unsafe impl<T: AllowAtomic> Sync for Atomic<T> {}
@@ -31,8 +35,13 @@ unsafe impl<T: AllowAtomic> Sync for Atomic<T> {}
///
/// # Safety
///
-/// [`Self`] must have the same size and alignment as [`Self::Repr`].
-pub unsafe trait AllowAtomic: Sized + Send + Copy {
+/// - [`Self`] must have the same size and alignment as [`Self::Repr`].
+/// - The implementer must guarantee that it's safe to transfer ownership from one execution
+/// context to another; this would normally mean the type has to be [`Send`], but because
+/// `*mut T` is not [`Send`] and is a basic type that needs to support atomic operations, this
+/// requirement is added to the [`AllowAtomic`] trait instead. The requirement is automatically
+/// satisfied if the type is [`Send`].
+pub unsafe trait AllowAtomic: Sized + Copy {
/// The backing atomic implementation type.
type Repr: AtomicImpl;
@@ -45,7 +54,8 @@ pub unsafe trait AllowAtomic: Sized + Send + Copy {
// An `AtomicImpl` is automatically an `AllowAtomic`.
//
-// SAFETY: `T::Repr` is `Self` (i.e. `T`), so they have the same size and alignment.
+// SAFETY: `T::Repr` is `Self` (i.e. `T`), so they have the same size and alignment. And all
+// `AtomicImpl` types are `Send`.
unsafe impl<T: AtomicImpl> AllowAtomic for T {
type Repr = Self;
--
2.39.5 (Apple Git-154)
^ permalink raw reply related [flat|nested] 16+ messages in thread
* [PATCH v4 10/10] rust: sync: Add memory barriers
2025-06-09 22:46 [PATCH v4 00/10] LKMM generic atomics in Rust Boqun Feng
` (8 preceding siblings ...)
2025-06-09 22:46 ` [PATCH v4 09/10] rust: sync: atomic: Add Atomic<*mut T> Boqun Feng
@ 2025-06-09 22:46 ` Boqun Feng
9 siblings, 0 replies; 16+ messages in thread
From: Boqun Feng @ 2025-06-09 22:46 UTC (permalink / raw)
To: linux-kernel, rust-for-linux, lkmm, linux-arch
Cc: Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
Trevor Gross, Danilo Krummrich, Will Deacon, Peter Zijlstra,
Mark Rutland, Wedson Almeida Filho, Viresh Kumar, Lyude Paul,
Ingo Molnar, Mitchell Levy, Paul E. McKenney, Greg Kroah-Hartman,
Linus Torvalds, Thomas Gleixner
Memory barriers are building blocks for concurrent code, hence provide
a minimal set of them.
The compiler barrier, barrier(), is implemented in inline asm instead of
using core::sync::atomic::compiler_fence() because the memory models are
different: the kernel's atomics are implemented in inline asm, so the
compiler barrier should be implemented in inline asm as well. Also,
barrier() is currently only public to the kernel crate until there's a
reasonable driver use case.
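For example, a classic message-passing pattern would pair smp_wmb() with
smp_rmb() around relaxed accesses; a sketch only, with `DATA` and `FLAG`
being made-up statics used purely for illustration:

    use kernel::sync::atomic::{Atomic, Relaxed};
    use kernel::sync::barrier::{smp_rmb, smp_wmb};

    static DATA: Atomic<i32> = Atomic::new(0);
    static FLAG: Atomic<i32> = Atomic::new(0);

    // Writer: make the write to DATA visible before the write to FLAG.
    fn writer() {
        DATA.store(42, Relaxed);
        smp_wmb();
        FLAG.store(1, Relaxed);
    }

    // Reader: if FLAG is observed to be set, order the read of DATA
    // after the read of FLAG.
    fn reader() -> Option<i32> {
        if FLAG.load(Relaxed) == 1 {
            smp_rmb();
            Some(DATA.load(Relaxed))
        } else {
            None
        }
    }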
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
rust/helpers/barrier.c | 18 ++++++++++
rust/helpers/helpers.c | 1 +
rust/kernel/sync.rs | 1 +
rust/kernel/sync/barrier.rs | 67 +++++++++++++++++++++++++++++++++++++
4 files changed, 87 insertions(+)
create mode 100644 rust/helpers/barrier.c
create mode 100644 rust/kernel/sync/barrier.rs
diff --git a/rust/helpers/barrier.c b/rust/helpers/barrier.c
new file mode 100644
index 000000000000..cdf28ce8e511
--- /dev/null
+++ b/rust/helpers/barrier.c
@@ -0,0 +1,18 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <asm/barrier.h>
+
+void rust_helper_smp_mb(void)
+{
+ smp_mb();
+}
+
+void rust_helper_smp_wmb(void)
+{
+ smp_wmb();
+}
+
+void rust_helper_smp_rmb(void)
+{
+ smp_rmb();
+}
diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c
index 0e7e7b388062..928eca7fbbb4 100644
--- a/rust/helpers/helpers.c
+++ b/rust/helpers/helpers.c
@@ -9,6 +9,7 @@
#include "atomic.c"
#include "auxiliary.c"
+#include "barrier.c"
#include "blk.c"
#include "bug.c"
#include "build_assert.c"
diff --git a/rust/kernel/sync.rs b/rust/kernel/sync.rs
index b620027e0641..c7c0e552bafe 100644
--- a/rust/kernel/sync.rs
+++ b/rust/kernel/sync.rs
@@ -11,6 +11,7 @@
mod arc;
pub mod atomic;
+pub mod barrier;
mod condvar;
pub mod lock;
mod locked_by;
diff --git a/rust/kernel/sync/barrier.rs b/rust/kernel/sync/barrier.rs
new file mode 100644
index 000000000000..36a5c70e6716
--- /dev/null
+++ b/rust/kernel/sync/barrier.rs
@@ -0,0 +1,67 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Memory barriers.
+//!
+//! These primitives have the same semantics as their C counterparts; the precise definitions of
+//! their semantics can be found in [`LKMM`].
+//!
+//! [`LKMM`]: srctree/tools/memory-model/
+
+/// A compiler barrier.
+///
+/// An explicit compiler barrier function that prevents the compiler from moving the memory
+/// accesses on either side of it to the other side.
+pub(crate) fn barrier() {
+ // By default, Rust inline asms are treated as being able to access any memory or flags, hence
+ // it suffices as a compiler barrier.
+ //
+ // SAFETY: An empty asm block should be safe.
+ unsafe {
+ core::arch::asm!("");
+ }
+}
+
+/// A full memory barrier.
+///
+/// A barrier function that prevents both the compiler and the CPU from moving the memory accesses
+/// on either side of it to the other side.
+pub fn smp_mb() {
+ if cfg!(CONFIG_SMP) {
+ // SAFETY: `smp_mb()` is safe to call.
+ unsafe {
+ bindings::smp_mb();
+ }
+ } else {
+ barrier();
+ }
+}
+
+/// A write-write memory barrier.
+///
+/// A barrier function that prevents both the compiler and the CPU from moving the memory write
+/// accesses on either side of it to the other side.
+pub fn smp_wmb() {
+ if cfg!(CONFIG_SMP) {
+ // SAFETY: `smp_wmb()` is safe to call.
+ unsafe {
+ bindings::smp_wmb();
+ }
+ } else {
+ barrier();
+ }
+}
+
+/// A read-read memory barrier.
+///
+/// A barrier function that prevents both the compiler and the CPU from moving the memory read
+/// accesses on either side of it to the other side.
+pub fn smp_rmb() {
+ if cfg!(CONFIG_SMP) {
+ // SAFETY: `smp_rmb()` is safe to call.
+ unsafe {
+ bindings::smp_rmb();
+ }
+ } else {
+ barrier();
+ }
+}
--
2.39.5 (Apple Git-154)
^ permalink raw reply related [flat|nested] 16+ messages in thread
* Re: [PATCH v4 03/10] rust: sync: atomic: Add ordering annotation types
2025-06-09 22:46 ` [PATCH v4 03/10] rust: sync: atomic: Add ordering annotation types Boqun Feng
@ 2025-06-10 9:07 ` Benno Lossin
2025-06-10 17:30 ` Boqun Feng
0 siblings, 1 reply; 16+ messages in thread
From: Benno Lossin @ 2025-06-10 9:07 UTC (permalink / raw)
To: Boqun Feng, linux-kernel, rust-for-linux, lkmm, linux-arch
Cc: Miguel Ojeda, Alex Gaynor, Gary Guo, Björn Roy Baron,
Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich,
Will Deacon, Peter Zijlstra, Mark Rutland, Wedson Almeida Filho,
Viresh Kumar, Lyude Paul, Ingo Molnar, Mitchell Levy,
Paul E. McKenney, Greg Kroah-Hartman, Linus Torvalds,
Thomas Gleixner
On Tue Jun 10, 2025 at 12:46 AM CEST, Boqun Feng wrote:
> Preparation for atomic primitives. Instead of a suffix like _acquire, a
> method parameter along with the corresponding generic parameter will be
> used to specify the ordering of an atomic operations. For example,
> atomic load() can be defined as:
>
> impl<T: ...> Atomic<T> {
> pub fn load<O: AcquireOrRelaxed>(&self, _o: O) -> T { ... }
> }
>
> and acquire users would do:
>
> let r = x.load(Acquire);
>
> relaxed users:
>
> let r = x.load(Relaxed);
>
> doing the following:
>
> let r = x.load(Release);
>
> will cause a compiler error.
>
> Compared to suffixes, it's easier to tell what ordering variants an
> operation has, and it also make it easier to unify the implementation of
> all ordering variants in one method via generic. The `IS_RELAXED` and
> `ORDER` associate consts are for generic function to pick up the
> particular implementation specified by an ordering annotation.
>
> Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Looks good, I got a few comments on the details below.
> ---
> rust/kernel/sync/atomic.rs | 3 +
> rust/kernel/sync/atomic/ordering.rs | 94 +++++++++++++++++++++++++++++
> 2 files changed, 97 insertions(+)
> create mode 100644 rust/kernel/sync/atomic/ordering.rs
>
> diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
> index 65e41dba97b7..9fe5d81fc2a9 100644
> --- a/rust/kernel/sync/atomic.rs
> +++ b/rust/kernel/sync/atomic.rs
> @@ -17,3 +17,6 @@
> //! [`LKMM`]: srctree/tools/memory-mode/
>
> pub mod ops;
> +pub mod ordering;
> +
> +pub use ordering::{Acquire, Full, Relaxed, Release};
> diff --git a/rust/kernel/sync/atomic/ordering.rs b/rust/kernel/sync/atomic/ordering.rs
> new file mode 100644
> index 000000000000..14cda8c5d1b1
> --- /dev/null
> +++ b/rust/kernel/sync/atomic/ordering.rs
> @@ -0,0 +1,94 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +//! Memory orderings.
> +//!
> +//! The semantics of these orderings follows the [`LKMM`] definitions and rules.
> +//!
> +//! - [`Acquire`] and [`Release`] are similar to their counterpart in Rust memory model.
> +//! - [`Full`] means "fully-ordered", that is:
> +//! - It provides ordering between all the preceding memory accesses and the annotated operation.
> +//! - It provides ordering between the annotated operation and all the following memory accesses.
> +//! - It provides ordering between all the preceding memory accesses and all the fllowing memory
> +//! accesses.
> +//! - All the orderings are the same strong as a full memory barrier (i.e. `smp_mb()`).
s/strong/strength/ ?
> +//! - [`Relaxed`] is similar to the counterpart in Rust memory model, except that dependency
> +//! orderings are also honored in [`LKMM`]. Dependency orderings are described in "DEPENDENCY
> +//! RELATIONS" in [`LKMM`]'s [`explanation`].
> +//!
> +//! [`LKMM`]: srctree/tools/memory-model/
> +//! [`explanation`]: srctree/tools/memory-model/Documentation/explanation.txt
> +
> +/// The annotation type for relaxed memory ordering.
> +pub struct Relaxed;
> +
> +/// The annotation type for acquire memory ordering.
> +pub struct Acquire;
> +
> +/// The annotation type for release memory ordering.
> +pub struct Release;
> +
> +/// The annotation type for fully-order memory ordering.
> +pub struct Full;
Is this ordering only ever used in combination with itself? (Since you
don't have a `FullOrAcquire` trait)
> +
> +/// The trait bound for operations that only support relaxed ordering.
> +pub trait RelaxedOnly: AcquireOrRelaxed + ReleaseOrRelaxed + All {}
> +
> +impl RelaxedOnly for Relaxed {}
> +
> +/// The trait bound for operations that only support acquire or relaxed ordering.
> +pub trait AcquireOrRelaxed: All {
> + /// Describes whether an ordering is relaxed or not.
> + const IS_RELAXED: bool = false;
> +}
> +
> +impl AcquireOrRelaxed for Acquire {}
> +
> +impl AcquireOrRelaxed for Relaxed {
> + const IS_RELAXED: bool = true;
> +}
> +
> +/// The trait bound for operations that only support release or relaxed ordering.
> +pub trait ReleaseOrRelaxed: All {
> + /// Describes whether an ordering is relaxed or not.
> + const IS_RELAXED: bool = false;
> +}
> +
> +impl ReleaseOrRelaxed for Release {}
> +
> +impl ReleaseOrRelaxed for Relaxed {
> + const IS_RELAXED: bool = true;
> +}
> +
> +/// Describes the exact memory ordering of an `impl` [`All`].
> +pub enum OrderingDesc {
Why not name this `Ordering`?
> + /// Relaxed ordering.
> + Relaxed,
> + /// Acquire ordering.
> + Acquire,
> + /// Release ordering.
> + Release,
> + /// Fully-ordered.
> + Full,
> +}
> +
> +/// The trait bound for annotating operations that should support all orderings.
> +pub trait All {
> + /// Describes the exact memory ordering.
> + const ORDER: OrderingDesc;
And then here: `ORDERING`.
---
Cheers,
Benno
> +}
> +
> +impl All for Relaxed {
> + const ORDER: OrderingDesc = OrderingDesc::Relaxed;
> +}
> +
> +impl All for Acquire {
> + const ORDER: OrderingDesc = OrderingDesc::Acquire;
> +}
> +
> +impl All for Release {
> + const ORDER: OrderingDesc = OrderingDesc::Release;
> +}
> +
> +impl All for Full {
> + const ORDER: OrderingDesc = OrderingDesc::Full;
> +}
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH v4 03/10] rust: sync: atomic: Add ordering annotation types
2025-06-10 9:07 ` Benno Lossin
@ 2025-06-10 17:30 ` Boqun Feng
2025-06-10 17:58 ` Boqun Feng
0 siblings, 1 reply; 16+ messages in thread
From: Boqun Feng @ 2025-06-10 17:30 UTC (permalink / raw)
To: Benno Lossin
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Gary Guo, Björn Roy Baron, Andreas Hindborg,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
On Tue, Jun 10, 2025 at 11:07:16AM +0200, Benno Lossin wrote:
> On Tue Jun 10, 2025 at 12:46 AM CEST, Boqun Feng wrote:
> > Preparation for atomic primitives. Instead of a suffix like _acquire, a
> > method parameter along with the corresponding generic parameter will be
> > used to specify the ordering of an atomic operations. For example,
> > atomic load() can be defined as:
> >
> > impl<T: ...> Atomic<T> {
> > pub fn load<O: AcquireOrRelaxed>(&self, _o: O) -> T { ... }
> > }
> >
> > and acquire users would do:
> >
> > let r = x.load(Acquire);
> >
> > relaxed users:
> >
> > let r = x.load(Relaxed);
> >
> > doing the following:
> >
> > let r = x.load(Release);
> >
> > will cause a compiler error.
> >
> > Compared to suffixes, it's easier to tell what ordering variants an
> > operation has, and it also make it easier to unify the implementation of
> > all ordering variants in one method via generic. The `IS_RELAXED` and
> > `ORDER` associate consts are for generic function to pick up the
> > particular implementation specified by an ordering annotation.
> >
> > Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
>
> Looks good, I got a few comments on the details below.
>
Thanks for taking a look!
> > ---
> > rust/kernel/sync/atomic.rs | 3 +
> > rust/kernel/sync/atomic/ordering.rs | 94 +++++++++++++++++++++++++++++
> > 2 files changed, 97 insertions(+)
> > create mode 100644 rust/kernel/sync/atomic/ordering.rs
> >
> > diff --git a/rust/kernel/sync/atomic.rs b/rust/kernel/sync/atomic.rs
> > index 65e41dba97b7..9fe5d81fc2a9 100644
> > --- a/rust/kernel/sync/atomic.rs
> > +++ b/rust/kernel/sync/atomic.rs
> > @@ -17,3 +17,6 @@
> > //! [`LKMM`]: srctree/tools/memory-mode/
> >
> > pub mod ops;
> > +pub mod ordering;
> > +
> > +pub use ordering::{Acquire, Full, Relaxed, Release};
> > diff --git a/rust/kernel/sync/atomic/ordering.rs b/rust/kernel/sync/atomic/ordering.rs
> > new file mode 100644
> > index 000000000000..14cda8c5d1b1
> > --- /dev/null
> > +++ b/rust/kernel/sync/atomic/ordering.rs
> > @@ -0,0 +1,94 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +
> > +//! Memory orderings.
> > +//!
> > +//! The semantics of these orderings follows the [`LKMM`] definitions and rules.
> > +//!
> > +//! - [`Acquire`] and [`Release`] are similar to their counterpart in Rust memory model.
> > +//! - [`Full`] means "fully-ordered", that is:
> > +//! - It provides ordering between all the preceding memory accesses and the annotated operation.
> > +//! - It provides ordering between the annotated operation and all the following memory accesses.
> > +//! - It provides ordering between all the preceding memory accesses and all the fllowing memory
> > +//! accesses.
> > +//! - All the orderings are the same strong as a full memory barrier (i.e. `smp_mb()`).
>
> s/strong/strength/ ?
>
Good catch.
> > +//! - [`Relaxed`] is similar to the counterpart in Rust memory model, except that dependency
> > +//! orderings are also honored in [`LKMM`]. Dependency orderings are described in "DEPENDENCY
> > +//! RELATIONS" in [`LKMM`]'s [`explanation`].
> > +//!
> > +//! [`LKMM`]: srctree/tools/memory-model/
> > +//! [`explanation`]: srctree/tools/memory-model/Documentation/explanation.txt
> > +
> > +/// The annotation type for relaxed memory ordering.
> > +pub struct Relaxed;
> > +
> > +/// The annotation type for acquire memory ordering.
> > +pub struct Acquire;
> > +
> > +/// The annotation type for release memory ordering.
> > +pub struct Release;
> > +
> > +/// The annotation type for fully-order memory ordering.
> > +pub struct Full;
>
> Is this ordering only ever used in combination with itself? (Since you
> don't have a `FullOrAcquire` trait)
>
Yes, `Full` is an ordering that is stronger than `Acquire`.
> > +
> > +/// The trait bound for operations that only support relaxed ordering.
> > +pub trait RelaxedOnly: AcquireOrRelaxed + ReleaseOrRelaxed + All {}
> > +
> > +impl RelaxedOnly for Relaxed {}
> > +
> > +/// The trait bound for operations that only support acquire or relaxed ordering.
> > +pub trait AcquireOrRelaxed: All {
> > + /// Describes whether an ordering is relaxed or not.
> > + const IS_RELAXED: bool = false;
> > +}
> > +
> > +impl AcquireOrRelaxed for Acquire {}
> > +
> > +impl AcquireOrRelaxed for Relaxed {
> > + const IS_RELAXED: bool = true;
> > +}
> > +
> > +/// The trait bound for operations that only support release or relaxed ordering.
> > +pub trait ReleaseOrRelaxed: All {
> > + /// Describes whether an ordering is relaxed or not.
> > + const IS_RELAXED: bool = false;
> > +}
> > +
> > +impl ReleaseOrRelaxed for Release {}
> > +
> > +impl ReleaseOrRelaxed for Relaxed {
> > + const IS_RELAXED: bool = true;
> > +}
> > +
> > +/// Describes the exact memory ordering of an `impl` [`All`].
> > +pub enum OrderingDesc {
>
> Why not name this `Ordering`?
>
I was trying to avoid having an `Ordering` enum in a `ordering` mod.
Also I want to save the name "Ordering" for the generic type parameter
of an atomic operation, e.g.
pub fn xchg<Ordering: ALL>(..)
this enum is more of an internal implementation detail, and users should
not use this enum directly, so I would like to avoid potential
confusion.
I have played a few sealed trait tricks on my end, but seems I cannot
achieve:
1) `OrderingDesc` is only accessible in the atomic mod.
2) `All` is only impl-able in the atomic mod, while it can be used as a
trait bound outside kernel crate.
Maybe there is a trick I'm missing?
> > + /// Relaxed ordering.
> > + Relaxed,
> > + /// Acquire ordering.
> > + Acquire,
> > + /// Release ordering.
> > + Release,
> > + /// Fully-ordered.
> > + Full,
> > +}
> > +
> > +/// The trait bound for annotating operations that should support all orderings.
> > +pub trait All {
> > + /// Describes the exact memory ordering.
> > + const ORDER: OrderingDesc;
>
> And then here: `ORDERING`.
Make sense, thanks!
Regards,
Boqun
>
> ---
> Cheers,
> Benno
>
> > +}
> > +
> > +impl All for Relaxed {
> > + const ORDER: OrderingDesc = OrderingDesc::Relaxed;
> > +}
> > +
> > +impl All for Acquire {
> > + const ORDER: OrderingDesc = OrderingDesc::Acquire;
> > +}
> > +
> > +impl All for Release {
> > + const ORDER: OrderingDesc = OrderingDesc::Release;
> > +}
> > +
> > +impl All for Full {
> > + const ORDER: OrderingDesc = OrderingDesc::Full;
> > +}
>
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH v4 03/10] rust: sync: atomic: Add ordering annotation types
2025-06-10 17:30 ` Boqun Feng
@ 2025-06-10 17:58 ` Boqun Feng
2025-06-10 18:53 ` Boqun Feng
0 siblings, 1 reply; 16+ messages in thread
From: Boqun Feng @ 2025-06-10 17:58 UTC (permalink / raw)
To: Benno Lossin
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Gary Guo, Björn Roy Baron, Andreas Hindborg,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
On Tue, Jun 10, 2025 at 10:30:55AM -0700, Boqun Feng wrote:
[...]
> > > +/// Describes the exact memory ordering of an `impl` [`All`].
> > > +pub enum OrderingDesc {
> >
> > Why not name this `Ordering`?
> >
>
> I was trying to avoid having an `Ordering` enum in a `ordering` mod.
> Also I want to save the name "Ordering" for the generic type parameter
> of an atomic operation, e.g.
>
> pub fn xchg<Ordering: ALL>(..)
>
> this enum is more of an internal implementation detail, and users should
> not use this enum directly, so I would like to avoid potential
> confusion.
>
> I have played a few sealed trait tricks on my end, but seems I cannot
> achieve:
>
> 1) `OrderingDesc` is only accessible in the atomic mod.
> 2) `All` is only impl-able in the atomic mod, while it can be used as a
> trait bound outside kernel crate.
>
> Maybe there is a trick I'm missing?
>
Something like this seems to work:
pub(super) mod private {
/// Describes the exact memory ordering of an `impl` [`All`].
pub enum Ordering {
/// Relaxed ordering.
Relaxed,
/// Acquire ordering.
Acquire,
/// Release ordering.
Release,
/// Fully-ordered.
Full,
}
pub trait HasOrderingDesc {
/// Describes the exact memory ordering.
const ORDERING: Ordering;
}
}
/// The trait bound for annotating operations that should support all orderings.
pub trait All: private::HasOrderingDesc { }
impl private::HasOrderingDesc for Relaxed {
const ORDERING: private::Ordering = private::Ordering::Relaxed;
}
the trick is to seal the enum and the trait together.
Regards,
Boqun
> > > + /// Relaxed ordering.
> > > + Relaxed,
> > > + /// Acquire ordering.
> > > + Acquire,
> > > + /// Release ordering.
> > > + Release,
> > > + /// Fully-ordered.
> > > + Full,
> > > +}
> > > +
> > > +/// The trait bound for annotating operations that should support all orderings.
> > > +pub trait All {
> > > + /// Describes the exact memory ordering.
> > > + const ORDER: OrderingDesc;
> >
> > And then here: `ORDERING`.
>
[..]
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH v4 03/10] rust: sync: atomic: Add ordering annotation types
2025-06-10 17:58 ` Boqun Feng
@ 2025-06-10 18:53 ` Boqun Feng
2025-06-11 6:40 ` Benno Lossin
0 siblings, 1 reply; 16+ messages in thread
From: Boqun Feng @ 2025-06-10 18:53 UTC (permalink / raw)
To: Benno Lossin
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Gary Guo, Björn Roy Baron, Andreas Hindborg,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
On Tue, Jun 10, 2025 at 10:58:30AM -0700, Boqun Feng wrote:
> On Tue, Jun 10, 2025 at 10:30:55AM -0700, Boqun Feng wrote:
> [...]
> > > > +/// Describes the exact memory ordering of an `impl` [`All`].
> > > > +pub enum OrderingDesc {
> > >
> > > Why not name this `Ordering`?
> > >
> >
> > I was trying to avoid having an `Ordering` enum in a `ordering` mod.
> > Also I want to save the name "Ordering" for the generic type parameter
> > of an atomic operation, e.g.
> >
> > pub fn xchg<Ordering: ALL>(..)
> >
> > this enum is more of an internal implementation detail, and users should
> > not use this enum directly, so I would like to avoid potential
> > confusion.
> >
> > I have played a few sealed trait tricks on my end, but seems I cannot
> > achieve:
> >
> > 1) `OrderingDesc` is only accessible in the atomic mod.
> > 2) `All` is only impl-able in the atomic mod, while it can be used as a
> > trait bound outside kernel crate.
> >
> > Maybe there is a trick I'm missing?
> >
>
> Something like this seems to work:
>
> pub(super) mod private {
> /// Describes the exact memory ordering of an `impl` [`All`].
> pub enum Ordering {
> /// Relaxed ordering.
> Relaxed,
> /// Acquire ordering.
> Acquire,
> /// Release ordering.
> Release,
> /// Fully-ordered.
> Full,
> }
>
> pub trait HasOrderingDesc {
> /// Describes the exact memory ordering.
> const ORDERING: Ordering;
> }
> }
>
> /// The trait bound for annotating operations that should support all orderings.
> pub trait All: private::HasOrderingDesc { }
>
> impl private::HasOrderingDesc for Relaxed {
> const ORDERING: private::Ordering = private::Ordering::Relaxed;
> }
>
> the trick is to seal the enum and the trait together.
>
> Regards,
> Boqun
>
> > > > + /// Relaxed ordering.
> > > > + Relaxed,
> > > > + /// Acquire ordering.
> > > > + Acquire,
> > > > + /// Release ordering.
> > > > + Release,
> > > > + /// Fully-ordered.
> > > > + Full,
> > > > +}
> > > > +
> > > > +/// The trait bound for annotating operations that should support all orderings.
> > > > +pub trait All {
> > > > + /// Describes the exact memory ordering.
> > > > + const ORDER: OrderingDesc;
> > >
> > > And then here: `ORDERING`.
> >
After a second thought, the following is probably what I will go for:
/// The annotation type for relaxed memory ordering.
pub struct Relaxed;
/// The annotation type for acquire memory ordering.
pub struct Acquire;
/// The annotation type for release memory ordering.
pub struct Release;
/// The annotation type for fully-order memory ordering.
pub struct Full;
/// Describes the exact memory ordering.
pub enum OrderingType {
/// Relaxed ordering.
Relaxed,
/// Acquire ordering.
Acquire,
/// Release ordering.
Release,
/// Fully-ordered.
Full,
}
mod internal {
/// Unit types for ordering annotation.
///
/// Sealed trait, can be only implemented inside atomic mod.
pub trait OrderingUnit {
/// Describes the exact memory ordering.
const TYPE: super::OrderingType;
}
}
impl internal::OrderingUnit for Relaxed {
const TYPE: OrderingType = OrderingType::Relaxed;
}
impl internal::OrderingUnit for Acquire {
const TYPE: OrderingType = OrderingType::Acquire;
}
impl internal::OrderingUnit for Release {
const TYPE: OrderingType = OrderingType::Release;
}
impl internal::OrderingUnit for Full {
const TYPE: OrderingType = OrderingType::Full;
}
That is:
1) Rename "OrderingDesc" into "OrderingType", and make it public.
2) Provide a sealed trait (`OrderingUnit`) for all the unit types
that describe ordering.
3) Instead of "ORDER" or "ORDERING", name the enum constant "TYPE".
An example that shows why is probably an xchg() implementation; if I were
to follow the previous naming suggestion, it would be:
match Ordering::ORDERING {
<some mode path>::Ordering::Relaxed => atomic_xchg_relaxed(...),
...
}
with the current one, it will be:
match Ordering::TYPE {
// assume we "use ordering::OrderingType"
OrderingType::Relaxed => atomic_xchg_relaxed(...),
...
}
I think this version is much better.
Regards,
Boqun
> [..]
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH v4 03/10] rust: sync: atomic: Add ordering annotation types
2025-06-10 18:53 ` Boqun Feng
@ 2025-06-11 6:40 ` Benno Lossin
0 siblings, 0 replies; 16+ messages in thread
From: Benno Lossin @ 2025-06-11 6:40 UTC (permalink / raw)
To: Boqun Feng
Cc: linux-kernel, rust-for-linux, lkmm, linux-arch, Miguel Ojeda,
Alex Gaynor, Gary Guo, Björn Roy Baron, Andreas Hindborg,
Alice Ryhl, Trevor Gross, Danilo Krummrich, Will Deacon,
Peter Zijlstra, Mark Rutland, Wedson Almeida Filho, Viresh Kumar,
Lyude Paul, Ingo Molnar, Mitchell Levy, Paul E. McKenney,
Greg Kroah-Hartman, Linus Torvalds, Thomas Gleixner
On Tue Jun 10, 2025 at 8:53 PM CEST, Boqun Feng wrote:
> On Tue, Jun 10, 2025 at 10:58:30AM -0700, Boqun Feng wrote:
>> On Tue, Jun 10, 2025 at 10:30:55AM -0700, Boqun Feng wrote:
>> [...]
>> > > > +/// Describes the exact memory ordering of an `impl` [`All`].
>> > > > +pub enum OrderingDesc {
>> > >
>> > > Why not name this `Ordering`?
>> > >
>> >
>> > I was trying to avoid having an `Ordering` enum in a `ordering` mod.
>> > Also I want to save the name "Ordering" for the generic type parameter
>> > of an atomic operation, e.g.
>> >
>> > pub fn xchg<Ordering: ALL>(..)
>> >
>> > this enum is more of an internal implementation detail, and users should
>> > not use this enum directly, so I would like to avoid potential
>> > confusion.
>> >
>> > I have played a few sealed trait tricks on my end, but seems I cannot
>> > achieve:
>> >
>> > 1) `OrderingDesc` is only accessible in the atomic mod.
>> > 2) `All` is only impl-able in the atomic mod, while it can be used as a
>> > trait bound outside kernel crate.
>> >
>> > Maybe there is a trick I'm missing?
>> >
>>
>> Something like this seems to work:
>>
>> pub(super) mod private {
>> /// Describes the exact memory ordering of an `impl` [`All`].
>> pub enum Ordering {
>> /// Relaxed ordering.
>> Relaxed,
>> /// Acquire ordering.
>> Acquire,
>> /// Release ordering.
>> Release,
>> /// Fully-ordered.
>> Full,
>> }
>>
>> pub trait HasOrderingDesc {
>> /// Describes the exact memory ordering.
>> const ORDERING: Ordering;
>> }
>> }
>>
>> /// The trait bound for annotating operations that should support all orderings.
>> pub trait All: private::HasOrderingDesc { }
>>
>> impl private::HasOrderingDesc for Relaxed {
>> const ORDERING: private::Ordering = private::Ordering::Relaxed;
>> }
>>
>> the trick is to seal the enum and the trait together.
>>
>> Regards,
>> Boqun
>>
>> > > > + /// Relaxed ordering.
>> > > > + Relaxed,
>> > > > + /// Acquire ordering.
>> > > > + Acquire,
>> > > > + /// Release ordering.
>> > > > + Release,
>> > > > + /// Fully-ordered.
>> > > > + Full,
>> > > > +}
>> > > > +
>> > > > +/// The trait bound for annotating operations that should support all orderings.
>> > > > +pub trait All {
>> > > > + /// Describes the exact memory ordering.
>> > > > + const ORDER: OrderingDesc;
>> > >
>> > > And then here: `ORDERING`.
>> >
>
> After a second thought, the following is probably what I will go for:
>
> /// The annotation type for relaxed memory ordering.
> pub struct Relaxed;
>
> /// The annotation type for acquire memory ordering.
> pub struct Acquire;
>
> /// The annotation type for release memory ordering.
> pub struct Release;
>
> /// The annotation type for fully-order memory ordering.
> pub struct Full;
>
> /// Describes the exact memory ordering.
> pub enum OrderingType {
> /// Relaxed ordering.
> Relaxed,
> /// Acquire ordering.
> Acquire,
> /// Release ordering.
> Release,
> /// Fully-ordered.
> Full,
> }
>
> mod internal {
> /// Unit types for ordering annotation.
> ///
> /// Sealed trait, can be only implemented inside atomic mod.
> pub trait OrderingUnit {
> /// Describes the exact memory ordering.
> const TYPE: super::OrderingType;
> }
> }
>
> impl internal::OrderingUnit for Relaxed {
> const TYPE: OrderingType = OrderingType::Relaxed;
> }
>
> impl internal::OrderingUnit for Acquire {
> const TYPE: OrderingType = OrderingType::Acquire;
> }
>
> impl internal::OrderingUnit for Release {
> const TYPE: OrderingType = OrderingType::Release;
> }
>
> impl internal::OrderingUnit for Full {
> const TYPE: OrderingType = OrderingType::Full;
> }
>
> That is:
>
> 1) Rename "OrderingDesc" into "OrderingType", and make it public.
> 2) Provide a sealed trait (`OrderingUnit`) for all the unit types
> that describe ordering.
> 3) Instead of "ORDER" or "ORDERING", name the enum constant "TYPE".
>
>
> An example that shows why is probably an xchg() implementation; if I were
> to follow the previous naming suggestion, it would be:
>
> match Ordering::ORDERING {
> <some mode path>::Ordering::Relaxed => atomic_xchg_relaxed(...),
> ...
> }
>
> with the current one, it will be:
>
> match Ordering::TYPE {
> // assume we "use ordering::OrderingType"
> OrderingType::Relaxed => atomic_xchg_relaxed(...),
> ...
> }
>
> I think this version is much better.
Agreed :)
---
Cheers,
Benno
^ permalink raw reply [flat|nested] 16+ messages in thread
Thread overview: 16+ messages
2025-06-09 22:46 [PATCH v4 00/10] LKMM generic atomics in Rust Boqun Feng
2025-06-09 22:46 ` [PATCH v4 01/10] rust: Introduce atomic API helpers Boqun Feng
2025-06-09 22:46 ` [PATCH v4 02/10] rust: sync: Add basic atomic operation mapping framework Boqun Feng
2025-06-09 22:46 ` [PATCH v4 03/10] rust: sync: atomic: Add ordering annotation types Boqun Feng
2025-06-10 9:07 ` Benno Lossin
2025-06-10 17:30 ` Boqun Feng
2025-06-10 17:58 ` Boqun Feng
2025-06-10 18:53 ` Boqun Feng
2025-06-11 6:40 ` Benno Lossin
2025-06-09 22:46 ` [PATCH v4 04/10] rust: sync: atomic: Add generic atomics Boqun Feng
2025-06-09 22:46 ` [PATCH v4 05/10] rust: sync: atomic: Add atomic {cmp,}xchg operations Boqun Feng
2025-06-09 22:46 ` [PATCH v4 06/10] rust: sync: atomic: Add the framework of arithmetic operations Boqun Feng
2025-06-09 22:46 ` [PATCH v4 07/10] rust: sync: atomic: Add Atomic<u{32,64}> Boqun Feng
2025-06-09 22:46 ` [PATCH v4 08/10] rust: sync: atomic: Add Atomic<{usize,isize}> Boqun Feng
2025-06-09 22:46 ` [PATCH v4 09/10] rust: sync: atomic: Add Atomic<*mut T> Boqun Feng
2025-06-09 22:46 ` [PATCH v4 10/10] rust: sync: Add memory barriers Boqun Feng