public inbox for rust-for-linux@vger.kernel.org
From: Joel Fernandes <joelagnelf@nvidia.com>
To: "Gary Guo" <gary@garyguo.net>, "Miguel Ojeda" <ojeda@kernel.org>,
	"Boqun Feng" <boqun@kernel.org>,
	"Björn Roy Baron" <bjorn3_gh@protonmail.com>,
	"Benno Lossin" <lossin@kernel.org>,
	"Andreas Hindborg" <a.hindborg@kernel.org>,
	"Alice Ryhl" <aliceryhl@google.com>,
	"Trevor Gross" <tmgross@umich.edu>,
	"Danilo Krummrich" <dakr@kernel.org>,
	"Will Deacon" <will@kernel.org>,
	"Peter Zijlstra" <peterz@infradead.org>,
	"Mark Rutland" <mark.rutland@arm.com>
Cc: Alan Stern <stern@rowland.harvard.edu>,
	Andrea Parri <parri.andrea@gmail.com>,
	Nicholas Piggin <npiggin@gmail.com>,
	David Howells <dhowells@redhat.com>,
	Jade Alglave <j.alglave@ucl.ac.uk>,
	Luc Maranget <luc.maranget@inria.fr>,
	"Paul E. McKenney" <paulmck@kernel.org>,
	Akira Yokosawa <akiyks@gmail.com>,
	Daniel Lustig <dlustig@nvidia.com>,
	rust-for-linux@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-arch@vger.kernel.org, lkmm@lists.linux.dev,
	Alexandre Courbot <acourbot@nvidia.com>,
	John Hubbard <jhubbard@nvidia.com>, Timur Tabi <ttabi@nvidia.com>,
	Eliot Courtney <ecourtney@nvidia.com>,
	Alistair Popple <apopple@nvidia.com>
Subject: Re: [PATCH 2/3] rust: sync: generic memory barriers
Date: Thu, 2 Apr 2026 17:49:44 -0400	[thread overview]
Message-ID: <620eaaf3-0569-4633-afd9-74ec18dccbf8@nvidia.com> (raw)
In-Reply-To: <20260402152443.1059634-4-gary@kernel.org>

Hi Gary,

On 4/2/2026 11:24 AM, Gary Guo wrote:
> From: Gary Guo <gary@garyguo.net>
>
> Implement a generic interface for memory barriers (full system/DMA/SMP).
> The interface uses a parameter to force the user to specify their intent
> when issuing a barrier.
>
> It provides `Read`, `Write` and `Full` orderings, which map to the existing
> `rmb()`, `wmb()` and `mb()`, but also `Acquire` and `Release`, which are
> documented to have `LOAD->{LOAD,STORE}` and `{LOAD,STORE}->STORE` ordering
> respectively, although for now they are still mapped to a full `mb()`. In
> the future they could be mapped to a more efficient form depending on the
> architecture. I included them because many users do not need the
> STORE->LOAD ordering, and using `Acquire`/`Release` makes it clearer which
> reorderings they intend to prevent.
>
> A generic interface is used here instead of individual standalone functions
> to reduce code duplication. For example, the `Acquire` -> `Full` upgrade
> here is uniformly implemented for all three types, and the `CONFIG_SMP`
> check in `smp_mb` is uniformly implemented for all SMP barriers. This could
> also extend to the `virt_mb()` variants if they are introduced in the
> future.
>
> Signed-off-by: Gary Guo <gary@garyguo.net>
> ---
>  rust/kernel/sync/atomic/ordering.rs |   2 +-
>  rust/kernel/sync/barrier.rs         | 194 ++++++++++++++++++++++++----

IMO this patch should be split up into different patches for CPU vs IO, and
perhaps even more patches separating out different barrier types.

>  2 files changed, 168 insertions(+), 28 deletions(-)
>
> diff --git a/rust/kernel/sync/atomic/ordering.rs b/rust/kernel/sync/atomic/ordering.rs
> index 3f103aa8db99..c4e732e7212f 100644
> --- a/rust/kernel/sync/atomic/ordering.rs
> +++ b/rust/kernel/sync/atomic/ordering.rs
[...]> +// Currently the kernel only supports `rmb`, `wmb` and the full `mb`.
> +impl MemoryBarrier<Smp> for Read {
> +    #[inline]
> +    fn run() {
> +        // SAFETY: `smp_rmb()` is safe to call.
> +        unsafe { bindings::smp_rmb() };
> +    }
> +}
> +
> +impl MemoryBarrier<Smp> for Write {
> +    #[inline]
> +    fn run() {
>          // SAFETY: `smp_wmb()` is safe to call.
>          unsafe { bindings::smp_wmb() };
> -    } else {
> -        barrier();
>      }
>  }
>
> -/// A read-read memory barrier.
> +impl MemoryBarrier<Smp> for Full {
> +    #[inline]
> +    fn run() {
> +        // SAFETY: `smp_mb()` is safe to call.
> +        unsafe { bindings::smp_mb() };
> +    }
> +}
> +
> +/// Memory barrier.
>  ///
> -/// A barrier that prevents compiler and CPU from reordering memory read accesses across the
> -/// barrier.
> -#[inline(always)]
> -pub fn smp_rmb() {
> +/// A barrier that prevents the compiler and the CPU from reordering memory accesses across the barrier.
> +///
> +/// The specific forms of reordering can be specified using the parameter.
> +/// - `mb(Read)` provides a read-read barrier.
> +/// - `mb(Write)` provides a write-write barrier.
> +/// - `mb(Full)` provides a full barrier.
> +/// - `mb(Acquire)` prevents preceding reads from being ordered against succeeding memory
> +///    operations.
> +/// - `mb(Release)` prevents preceding memory operations from being ordered against succeeding
> +///    writes.

I don't agree with this definition of Release. A release is always associated
with a specific store, and likewise an acquire with a specific load. The
definition above, 'prevents preceding memory operations from being ordered
against succeeding writes', also doesn't describe Release semantics: Release
orders preceding memory operations against the specific store that carries
the release, not against arbitrary succeeding writes. The same applies to
Acquire.

See also Documentation/memory-barriers.txt, where ACQUIRE and RELEASE are
defined as being tied to specific memory operations.

Or am I missing something subtle?
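
To make the distinction concrete, here is a userspace sketch (std atomics
standing in for the kernel's smp_store_release()/smp_load_acquire(); the
names and the value 42 are illustrative): the release is a property of one
particular store, and the acquire of one particular load, which is what
gives the pairing its meaning.

```rust
use std::sync::atomic::{AtomicBool, AtomicU32, Ordering};
use std::sync::Arc;
use std::thread;

fn main() {
    let data = Arc::new(AtomicU32::new(0));
    let ready = Arc::new(AtomicBool::new(false));

    let (d, r) = (Arc::clone(&data), Arc::clone(&ready));
    let writer = thread::spawn(move || {
        d.store(42, Ordering::Relaxed);   // plain write of the payload
        r.store(true, Ordering::Release); // release is tied to *this* store
    });

    // The acquire is tied to *this* load: once it observes `true`,
    // the payload write above is guaranteed to be visible.
    while !ready.load(Ordering::Acquire) {}
    assert_eq!(data.load(Ordering::Relaxed), 42);

    writer.join().unwrap();
    println!("observed {}", data.load(Ordering::Relaxed));
}
```

A free-standing `mb(Release)` has no single store to attach to, which is why
I think the documentation above needs different wording (or the operation
needs to take the store it orders, as smp_store_release() does).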

thanks,

-- 
Joel Fernandes


Thread overview: 11+ messages
2026-04-02 15:24 [PATCH 0/3] rust: more memory barriers bindings Gary Guo
2026-04-02 15:24 ` [PATCH 1/3] rust: sync: add helpers for mb, dma_mb and friends Gary Guo
2026-04-02 15:24 ` [PATCH 2/3] rust: sync: generic memory barriers Gary Guo
2026-04-02 21:49   ` Joel Fernandes [this message]
2026-04-03  0:07     ` Gary Guo
2026-04-03 21:33       ` Joel Fernandes
2026-04-04 12:43         ` Gary Guo
2026-04-02 15:24 ` [PATCH 3/3] gpu: nova-core: fix wrong use of barriers in GSP code Gary Guo
2026-04-02 21:56   ` Joel Fernandes
2026-04-02 21:59     ` Joel Fernandes
2026-04-04 13:02     ` Gary Guo
