From: Boqun Feng <boqun.feng@gmail.com>
To: Ralf Jung <post@ralfj.de>
Cc: "Benno Lossin" <lossin@kernel.org>,
	linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org,
	lkmm@lists.linux.dev, linux-arch@vger.kernel.org,
	"Miguel Ojeda" <ojeda@kernel.org>,
	"Alex Gaynor" <alex.gaynor@gmail.com>,
	"Gary Guo" <gary@garyguo.net>,
	"Björn Roy Baron" <bjorn3_gh@protonmail.com>,
	"Andreas Hindborg" <a.hindborg@kernel.org>,
	"Alice Ryhl" <aliceryhl@google.com>,
	"Trevor Gross" <tmgross@umich.edu>,
	"Danilo Krummrich" <dakr@kernel.org>,
	"Will Deacon" <will@kernel.org>,
	"Peter Zijlstra" <peterz@infradead.org>,
	"Mark Rutland" <mark.rutland@arm.com>,
	"Wedson Almeida Filho" <wedsonaf@gmail.com>,
	"Viresh Kumar" <viresh.kumar@linaro.org>,
	"Lyude Paul" <lyude@redhat.com>, "Ingo Molnar" <mingo@kernel.org>,
	"Mitchell Levy" <levymitchell0@gmail.com>,
	"Paul E. McKenney" <paulmck@kernel.org>,
	"Greg Kroah-Hartman" <gregkh@linuxfoundation.org>,
	"Linus Torvalds" <torvalds@linux-foundation.org>,
	"Thomas Gleixner" <tglx@linutronix.de>,
	"Alan Stern" <stern@rowland.harvard.edu>
Subject: Re: [PATCH v6 8/9] rust: sync: Add memory barriers
Date: Tue, 15 Jul 2025 08:21:47 -0700
Message-ID: <aHZyC4xr7jgN6Mgv@Mac.home>
In-Reply-To: <4d373b56-0f36-4f8a-9052-cee38b90f59b@ralfj.de>

On Mon, Jul 14, 2025 at 05:42:39PM +0200, Ralf Jung wrote:
> Hi all,
> 
> On 11.07.25 20:20, Boqun Feng wrote:
> > On Fri, Jul 11, 2025 at 10:57:48AM +0200, Benno Lossin wrote:
> > > On Thu Jul 10, 2025 at 8:00 AM CEST, Boqun Feng wrote:
> > > > diff --git a/rust/kernel/sync/barrier.rs b/rust/kernel/sync/barrier.rs
> > > > new file mode 100644
> > > > index 000000000000..df4015221503
> > > > --- /dev/null
> > > > +++ b/rust/kernel/sync/barrier.rs
> > > > @@ -0,0 +1,65 @@
> > > > +// SPDX-License-Identifier: GPL-2.0
> > > > +
> > > > +//! Memory barriers.
> > > > +//!
> > > > +//! These primitives have the same semantics as their C counterparts: and the precise definitions
> > > > +//! of semantics can be found at [`LKMM`].
> > > > +//!
> > > > +//! [`LKMM`]: srctree/tools/memory-model/
> > > > +
> > > > +/// A compiler barrier.
> > > > +///
> > > > +/// A barrier that prevents compiler from reordering memory accesses across the barrier.
> > > > +pub(crate) fn barrier() {
> > > > +    // By default, Rust inline asms are treated as being able to access any memory or flags, hence
> > > > +    // it suffices as a compiler barrier.
> > > 
> > > I don't know about this, but it also isn't my area of expertise... I
> > > think I heard Ralf talk about this at Rust Week, but I don't remember...
> > > 
> > 
> > Easy, let's Cc Ralf ;-)
> > 
> > Ralf, I believe the question here is:
> > 
> > In kernel C, we define a compiler barrier (barrier()), which is
> > implemented as:
> > 
> > # define barrier() __asm__ __volatile__("": : :"memory")
> > 
> > Now we want to have a Rust version, and I think an empty `asm!()` should
> > be enough as an equivalent as a barrier() in C, because an empty
> > `asm!()` in Rust implies "memory" as the clobber:
> > 
> > 	https://godbolt.org/z/3z3fnWYjs
> > 
> > ?
> > 
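(To make the above concrete: a minimal sketch of such a barrier, written
here with the stable `core::arch::asm!` macro. This is only the pattern
being discussed, not necessarily the exact contents of the patch.)

    pub(crate) fn barrier() {
        // An empty asm block with default options: the compiler must
        // assume it may read or write any memory it could reach, so
        // memory accesses are not reordered across it. It emits no
        // instructions, hence it is purely a compiler barrier.
        //
        // SAFETY: the asm block is empty and has no operands, so it has
        // no runtime effect.
        unsafe { core::arch::asm!("") };
    }
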
> > I know you have some opinions on C++ compiler_fence() [1]. But in LKMM,
> > barrier() and other barriers work for all memory accesses not just
> > atomics, so the problem "So, if your program contains no atomic
> > accesses, but some atomic fences, those fences do nothing." doesn't
> > exist for us. And our barrier() is strictly weaker than other barriers.
> > 
> > And based on my understanding of the consensus on Rust vs LKMM, "do
> > whatever kernel C does and rely on whatever kernel C relies" is the
> > general suggestion, so I think an empty `asm!()` works here. Of course
> > if in practice, we find an issue, I'm happy to look for solutions ;-)
> > 
> > Thoughts?
> > 
> > [1]: https://github.com/rust-lang/unsafe-code-guidelines/issues/347
> 
> If I understood correctly, this is about using "compiler barriers" to order
> volatile accesses that the LKMM uses in lieu of atomic accesses?
> I can't give a principled answer here, unfortunately -- as you know, the
> mapping of LKMM through the compiler isn't really in a state where we can
> make principled formal statements. And making principled formal statements
> is my main expertise so I am a bit out of my depth here. ;)
> 

Understood ;-)

> So I agree with your 2nd paragraph: I would say just like the fact that you
> are using volatile accesses in the first place, this falls under "do
> whatever the C code does, it shouldn't be any more broken in Rust than it is
> in C".
> 
> However, saying that it in general "prevents reordering all memory accesses"
> is unlikely to be fully correct -- if the compiler can prove that the inline
> asm block could not possibly have access to a local variable (e.g. because
> it never had its address taken), its accesses can still be reordered. This
> applies both to C compilers and Rust compilers. Extra annotations such as
> `noalias` (or `restrict` in C) can also give rise to reorderings around
> arbitrary code, including such barriers. This is not a problem for
> concurrent code since it would anyway be wrong to claim that some pointer
> doesn't have aliases when it is accessed by multiple threads, but it shows

Right, it shouldn't be a problem for most concurrent code, and thank
you for bringing this up. I believe we can rely on the barrier behavior
as long as the memory accesses on both sides are done via aliased
references/pointers, which is the same thing the C code relies on.
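
For example, I'd expect something like the following (a hypothetical
illustration, not kernel code) to show the limit you describe:

    use core::sync::atomic::{AtomicI32, Ordering};

    fn example(shared: &AtomicI32) {
        // `local` never has its address exposed, so the empty asm block
        // below cannot possibly access it; the compiler may keep it in a
        // register and merge or move the two increments across the
        // "barrier".
        let mut local = 0;

        local += 1;

        // Compiler barrier: it only orders accesses to memory the asm
        // block could observe.
        unsafe { core::arch::asm!("") };

        local += 1;

        // `shared` is reachable from the asm block (its address has
        // escaped to the caller), so this store cannot be moved before
        // the barrier.
        shared.store(local, Ordering::Relaxed);
    }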

One thing, though, is that we don't use `restrict` much in kernel C, so
I wonder about the compiler's behavior in the following code:

    let mut x = KBox::new_uninit(GFP_KERNEL)?;
    // ^ KBox is our own Box implementation based on kmalloc(); its new*()
    // functions take a flag selecting the allocation behavior (can sleep
    // or not, etc.). Of course we want it to behave like a std Box in
    // terms of aliasing.

    let x = KBox::write(x, foo); // A

    smp_mb();
    // ^ effectively asm!("mfence") on x86; shown as a Rust asm!() only
    // for explanation, it's really implemented in C.

    let a: &Atomic<*mut Foo> = ...; // `a` was null initially.

    a.store(KBox::into_raw(x), Relaxed); // B

Now we obviously want A and B to be ordered, because smp_mb() is
supposed to be stronger than Release ordering. So if another thread does
an Acquire read, or relies on an address dependency:

    let a: &Atomic<*mut Foo> = ...;
    let foo_ptr = a.load(Acquire); // or load(Relaxed);

    if !foo_ptr.is_null() {
        let y: KBox<Foo> = unsafe { KBox::from_raw(foo_ptr) };
        // ^ this should be safe.
    }

Is this something the Rust AM could guarantee? I think it is no
different from: 1) allocating some normal memory for DMA; 2) writing to
that memory; 3) issuing an I/O barrier instruction to make sure the
device will see the writes from step 2; 4) doing an MMIO write to notify
the device to start a DMA read. Therefore, reordering of A and B by the
compiler would be problematic.
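
For comparison, here is roughly the guarantee I'm after, expressed in a
form the Rust AM does give (a sketch using std atomics, not kernel
code): a Release fence before a Relaxed store, paired with an Acquire
load, orders the initialization before the publication, and smp_mb() is
supposed to be at least that strong:

    use std::sync::atomic::{fence, AtomicPtr, Ordering};

    struct Foo(i32);

    fn publish(a: &AtomicPtr<Foo>, foo: Foo) {
        let x = Box::into_raw(Box::new(foo)); // A: initialize the object
        fence(Ordering::Release);             // plays the role of smp_mb()
        a.store(x, Ordering::Relaxed);        // B: publish the pointer
    }

    fn consume(a: &AtomicPtr<Foo>) -> Option<i32> {
        let p = a.load(Ordering::Acquire);
        if p.is_null() {
            return None;
        }
        // The Acquire load synchronizes with the Release fence in
        // publish(), so the initialization at A is visible here.
        let y: Box<Foo> = unsafe { Box::from_raw(p) };
        Some(y.0)
    }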

Regards,
Boqun

> that the framing of barriers in terms of preventing reordering of accesses
> is too imprecise. That's why the C++ memory model uses a very different
> framing, and that's why I can't give a definite answer here. :)
> 
> Kind regards,
> Ralf
> 
