public inbox for rust-for-linux@vger.kernel.org
From: "Gary Guo" <gary@garyguo.net>
To: "Eliot Courtney" <ecourtney@nvidia.com>,
	"Gary Guo" <gary@garyguo.net>, "Miguel Ojeda" <ojeda@kernel.org>,
	"Boqun Feng" <boqun@kernel.org>,
	"Björn Roy Baron" <bjorn3_gh@protonmail.com>,
	"Benno Lossin" <lossin@kernel.org>,
	"Andreas Hindborg" <a.hindborg@kernel.org>,
	"Alice Ryhl" <aliceryhl@google.com>,
	"Trevor Gross" <tmgross@umich.edu>,
	"Danilo Krummrich" <dakr@kernel.org>,
	"Alexandre Courbot" <acourbot@nvidia.com>,
	"David Airlie" <airlied@gmail.com>,
	"Simona Vetter" <simona@ffwll.ch>
Cc: "Alan Stern" <stern@rowland.harvard.edu>,
	"Andrea Parri" <parri.andrea@gmail.com>,
	"Will Deacon" <will@kernel.org>,
	"Peter Zijlstra" <peterz@infradead.org>,
	"Nicholas Piggin" <npiggin@gmail.com>,
	"David Howells" <dhowells@redhat.com>,
	"Jade Alglave" <j.alglave@ucl.ac.uk>,
	"Luc Maranget" <luc.maranget@inria.fr>,
	"Paul E. McKenney" <paulmck@kernel.org>,
	"Akira Yokosawa" <akiyks@gmail.com>,
	"Daniel Lustig" <dlustig@nvidia.com>,
	"Joel Fernandes" <joelagnelf@nvidia.com>,
	<rust-for-linux@vger.kernel.org>, <nouveau@lists.freedesktop.org>,
	<linux-kernel@vger.kernel.org>, <linux-arch@vger.kernel.org>,
	<lkmm@lists.linux.dev>, <dri-devel@lists.freedesktop.org>
Subject: Re: [PATCH 3/3] gpu: nova-core: fix wrong use of barriers in GSP code
Date: Mon, 13 Apr 2026 12:13:21 +0100	[thread overview]
Message-ID: <DHRZE1B9D8M8.3G97DOTB881RR@garyguo.net> (raw)
In-Reply-To: <DHRS5I7ZTI01.27WY0OTHA7ZMQ@nvidia.com>

On Mon Apr 13, 2026 at 6:33 AM BST, Eliot Courtney wrote:
> On Fri Apr 3, 2026 at 12:24 AM JST, Gary Guo wrote:
>> From: Gary Guo <gary@garyguo.net>
>>
>> In the GSP->CPU messaging path, the current code is missing a read
>> barrier before the data read. The barrier after the read is changed to a
>> DMA barrier (with the desired release ordering) instead of the existing
>> (Rust) SeqCst SMP barrier; the barrier is also moved to the beginning of
>> the function, because it is needed to synchronize the data with the
>> ring-buffer pointer, while the RMW operation does not need an internal
>> barrier (nor does it have to be atomic, as CPU pointers are updated by the
>> CPU only).
>>
>> In the CPU->GSP messaging path, the current code is missing a write
>> barrier after the data write and before the update of the CPU write
>> pointer. A barrier is not needed before the data write thanks to a control
>> dependency; this fact is now documented explicitly. The control dependency
>> could be replaced with an acquire barrier if needed.
>>
>> Signed-off-by: Gary Guo <gary@garyguo.net>
>
> nit: should this have
> Fixes: 75f6b1de8133 ("gpu: nova-core: gsp: Add GSP command queue bindings and handling")
>
> ?
>
>> ---
>>  drivers/gpu/nova-core/gsp/cmdq.rs | 19 +++++++++++++++++++
>>  drivers/gpu/nova-core/gsp/fw.rs   | 12 ------------
>>  2 files changed, 19 insertions(+), 12 deletions(-)
>>
>> diff --git a/drivers/gpu/nova-core/gsp/cmdq.rs b/drivers/gpu/nova-core/gsp/cmdq.rs
>> index 2224896ccc89..7e4315b13984 100644
>> --- a/drivers/gpu/nova-core/gsp/cmdq.rs
>> +++ b/drivers/gpu/nova-core/gsp/cmdq.rs
>> @@ -19,6 +19,12 @@
>>      prelude::*,
>>      sync::{
>>          aref::ARef,
>> +        barrier::{
>> +            dma_mb,
>> +            Read,
>> +            Release,
>> +            Write, //
>> +        },
>>          Mutex, //
>>      },
>>      time::Delta,
>> @@ -258,6 +264,9 @@ fn new(dev: &device::Device<device::Bound>) -> Result<Self> {
>>          let tx = self.cpu_write_ptr() as usize;
>>          let rx = self.gsp_read_ptr() as usize;
>>  
>> +        // ORDERING: control dependency provides necessary LOAD->STORE ordering.
>> +        // `dma_mb(Acquire)` may be used here if we don't want to rely on control dependency.
>> +
>>          // SAFETY:
>>          // - We will only access the driver-owned part of the shared memory.
>>          // - Per the safety statement of the function, no concurrent access will be performed.
>> @@ -311,6 +320,9 @@ fn driver_write_area_size(&self) -> usize {
>>          let tx = self.gsp_write_ptr() as usize;
>>          let rx = self.cpu_read_ptr() as usize;
>>  
>> +        // ORDERING: Ensure data load is ordered after load of GSP write pointer.
>> +        dma_mb(Read);
>> +
>>          // SAFETY:
>>          // - We will only access the driver-owned part of the shared memory.
>>          // - Per the safety statement of the function, no concurrent access will be performed.
>> @@ -408,6 +420,10 @@ fn cpu_read_ptr(&self) -> u32 {
>>  
>>      // Informs the GSP that it can send `elem_count` new pages into the message queue.
>>      fn advance_cpu_read_ptr(&mut self, elem_count: u32) {
>> +        // ORDERING: Ensure read pointer is properly ordered.
>
> What about a more specific comment that describes exactly what is
> ordered, e.g. something like:
> Ensure all reads of message data by the CPU have completed before writing
> the updated read pointer to the GSP, since it may overwrite that data.
>
> Maybe this is just me but it's a lot easier for me to think of the
> orderings as a pair of (load? store? -> load? store?) which works for
> everything hw actually supports except for ll+ls+ss, rather than mapping
> 'Release' to (load+store -> store) in my head. e.g. here IIUC we need to
> make sure all loads by the CPU are done before we do the store for the
> pointer, so we need to make sure loads don't cross ahead of this
> barrier but also that stores don't cross behind it, so (load -> store)
> should be sufficient? So, depending on what you want to do with the
> memory model, this could be tightened IMO. Unlike the one below that
> only needs to order stores with each other (ss).

The pair of `(access, access)` was actually the first iteration of the API that I designed.
You can see it implemented here: https://github.com/nbdd0121/lkmm-rs/blob/trunk/src/lib.rs
A good thing is that it maps nicely to LoongArch and RISC-V. A bad thing is that
it misses ll+ls+ss, as you pointed out.
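For reference, here is a rough userspace sketch of what that pair-shaped API
looks like (all names here are invented for the sketch and are not taken from
lkmm-rs, and the RISC-V mapping is only illustrative):

```rust
// Sketch of a pair-based barrier API: a barrier is parameterized by the
// kind of access it orders before and after it, giving 3 x 3 = 9 cases.
use std::sync::atomic::{fence, Ordering};

pub struct Load;
pub struct Store;
pub struct Any; // both loads and stores

pub trait Access {
    // The RISC-V `fence` predecessor/successor set this kind maps to.
    const RISCV_SET: &'static str;
}
impl Access for Load {
    const RISCV_SET: &'static str = "r";
}
impl Access for Store {
    const RISCV_SET: &'static str = "w";
}
impl Access for Any {
    const RISCV_SET: &'static str = "rw";
}

// Orders accesses of kind `Before` (preceding the barrier) against
// accesses of kind `After` (following it). This sketch just issues a
// full fence; a real implementation would emit the weakest sufficient
// instruction per architecture.
pub fn pair_barrier<Before: Access, After: Access>() {
    fence(Ordering::SeqCst);
}

// The RISC-V instruction the pair would ideally map to, e.g. `fence r,w`
// for the (load -> store) ordering discussed above.
pub fn riscv_fence<Before: Access, After: Access>() -> String {
    format!("fence {},{}", Before::RISCV_SET, After::RISCV_SET)
}

fn main() {
    pair_barrier::<Load, Store>(); // the (load -> store) case
    println!("{}", riscv_fence::<Load, Store>()); // fence r,w
    println!("{}", riscv_fence::<Store, Store>()); // fence w,w
}
```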

However, I think the trickier part of this approach is that the semantics
become harder to fit into the LKMM; there are now 9 types of barriers that we
need to reason about, and that's a huge addition. I am not sure it is even
feasible to implement, given that KCSAN only has 5 types of barriers that it
can encode.

On the other hand, acquire/release barriers are much easier to reason about
(and they already exist in the C11/Rust memory models), and KCSAN already
implements marked release accesses as ONCE + release barrier (with acquire
accesses being implicit), so it probably doesn't even need changing.

Best,
Gary

>
>> +        //
>
> nit: stray //
>
>> +        dma_mb(Release);
>> +
>>          super::fw::gsp_mem::advance_cpu_read_ptr(&self.0, elem_count)
>>      }
>>  
>> @@ -422,6 +438,9 @@ fn cpu_write_ptr(&self) -> u32 {
>>  
>>      // Informs the GSP that it can process `elem_count` new pages from the command queue.
>>      fn advance_cpu_write_ptr(&mut self, elem_count: u32) {
>> +        // ORDERING: Ensure all command data is visible before updating the ring-buffer pointer.
>> +        dma_mb(Write);
>> +
>>          super::fw::gsp_mem::advance_cpu_write_ptr(&self.0, elem_count)
>>      }
>>  }
>> diff --git a/drivers/gpu/nova-core/gsp/fw.rs b/drivers/gpu/nova-core/gsp/fw.rs
>> index 0c8a74f0e8ac..62c2cf1b030c 100644
>> --- a/drivers/gpu/nova-core/gsp/fw.rs
>> +++ b/drivers/gpu/nova-core/gsp/fw.rs
>> @@ -42,11 +42,6 @@
>>  
>>  // TODO: Replace with `IoView` projections once available.
>>  pub(super) mod gsp_mem {
>> -    use core::sync::atomic::{
>> -        fence,
>> -        Ordering, //
>> -    };
>> -
>>      use kernel::{
>>          dma::Coherent,
>>          dma_read,
>> @@ -72,10 +67,6 @@ pub(in crate::gsp) fn cpu_read_ptr(qs: &Coherent<GspMem>) -> u32 {
>>  
>>      pub(in crate::gsp) fn advance_cpu_read_ptr(qs: &Coherent<GspMem>, count: u32) {
>>          let rptr = cpu_read_ptr(qs).wrapping_add(count) % MSGQ_NUM_PAGES;
>> -
>> -        // Ensure read pointer is properly ordered.
>> -        fence(Ordering::SeqCst);
>> -
>>          dma_write!(qs, .cpuq.rx.0.readPtr, rptr);
>>      }
>>  
>> @@ -87,9 +78,6 @@ pub(in crate::gsp) fn advance_cpu_write_ptr(qs: &Coherent<GspMem>, count: u32) {
>>          let wptr = cpu_write_ptr(qs).wrapping_add(count) % MSGQ_NUM_PAGES;
>>  
>>          dma_write!(qs, .cpuq.tx.0.writePtr, wptr);
>> -
>> -        // Ensure all command data is visible before triggering the GSP read.
>> -        fence(Ordering::SeqCst);
>>      }
>>  }
>>  



Thread overview: 16+ messages
2026-04-02 15:24 [PATCH 0/3] rust: more memory barriers bindings Gary Guo
2026-04-02 15:24 ` [PATCH 1/3] rust: sync: add helpers for mb, dma_mb and friends Gary Guo
2026-04-13  5:33   ` Eliot Courtney
2026-04-02 15:24 ` [PATCH 2/3] rust: sync: generic memory barriers Gary Guo
2026-04-02 21:49   ` Joel Fernandes
2026-04-03  0:07     ` Gary Guo
2026-04-03 21:33       ` Joel Fernandes
2026-04-04 12:43         ` Gary Guo
2026-04-06 18:01           ` Joel Fernandes
2026-04-02 15:24 ` [PATCH 3/3] gpu: nova-core: fix wrong use of barriers in GSP code Gary Guo
2026-04-02 21:56   ` Joel Fernandes
2026-04-02 21:59     ` Joel Fernandes
2026-04-04 13:02     ` Gary Guo
2026-04-06 18:18       ` Joel Fernandes
2026-04-13  5:33   ` Eliot Courtney
2026-04-13 11:13     ` Gary Guo [this message]
