From: "Paul E. McKenney" <paulmck@kernel.org>
To: Marco Elver <elver@google.com>
Cc: Kefeng Wang <wangkefeng.wang@huawei.com>,
catalin.marinas@arm.com, will@kernel.org,
linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, mark.rutland@arm.com,
Jonathan Corbet <corbet@lwn.net>,
linux-doc@vger.kernel.org, arnd@arndb.de,
Peter Zijlstra <peterz@infradead.org>
Subject: Re: [PATCH v4 1/2] asm-generic: Add memory barrier dma_mb()
Date: Thu, 16 Jun 2022 16:13:50 -0700
Message-ID: <20220616231350.GA1790663@paulmck-ThinkPad-P17-Gen-1>
In-Reply-To: <CANpmjNNPf5J2OcVxoMgVtFYjWJhJ2JE+UBFyqnt6+WrPobPOHQ@mail.gmail.com>
On Mon, May 23, 2022 at 01:35:27PM +0200, Marco Elver wrote:
> On Mon, 23 May 2022 at 13:21, Kefeng Wang <wangkefeng.wang@huawei.com> wrote:
> >
> > The memory barrier dma_mb() is introduced by commit a76a37777f2c
> > ("iommu/arm-smmu-v3: Ensure queue is read after updating prod pointer"),
> > which is used to ensure that prior (both reads and writes) accesses
> > to memory by a CPU are ordered w.r.t. a subsequent MMIO write.
> >
> > Reviewed-by: Arnd Bergmann <arnd@arndb.de> # for asm-generic
> > Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>
> Reviewed-by: Marco Elver <elver@google.com>

Just checking... Did these ever get picked up? It was suggested
that they go up via the arm64 tree, if I remember correctly.

							Thanx, Paul

> > ---
> > Documentation/memory-barriers.txt | 11 ++++++-----
> > include/asm-generic/barrier.h | 8 ++++++++
> > 2 files changed, 14 insertions(+), 5 deletions(-)
> >
> > diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
> > index b12df9137e1c..832b5d36e279 100644
> > --- a/Documentation/memory-barriers.txt
> > +++ b/Documentation/memory-barriers.txt
> > @@ -1894,6 +1894,7 @@ There are some more advanced barrier functions:
> >
> > (*) dma_wmb();
> > (*) dma_rmb();
> > + (*) dma_mb();
> >
> > These are for use with consistent memory to guarantee the ordering
> > of writes or reads of shared memory accessible to both the CPU and a
> > @@ -1925,11 +1926,11 @@ There are some more advanced barrier functions:
> > The dma_rmb() allows us guarantee the device has released ownership
> > before we read the data from the descriptor, and the dma_wmb() allows
> > us to guarantee the data is written to the descriptor before the device
> > - can see it now has ownership. Note that, when using writel(), a prior
> > - wmb() is not needed to guarantee that the cache coherent memory writes
> > - have completed before writing to the MMIO region. The cheaper
> > - writel_relaxed() does not provide this guarantee and must not be used
> > - here.
> > + can see it now has ownership. The dma_mb() implies both a dma_rmb() and
> > + a dma_wmb(). Note that, when using writel(), a prior wmb() is not needed
> > + to guarantee that the cache coherent memory writes have completed before
> > + writing to the MMIO region. The cheaper writel_relaxed() does not provide
> > + this guarantee and must not be used here.
> >
> > See the subsection "Kernel I/O barrier effects" for more information on
> > relaxed I/O accessors and the Documentation/core-api/dma-api.rst file for
> > diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h
> > index fd7e8fbaeef1..961f4d88f9ef 100644
> > --- a/include/asm-generic/barrier.h
> > +++ b/include/asm-generic/barrier.h
> > @@ -38,6 +38,10 @@
> > #define wmb() do { kcsan_wmb(); __wmb(); } while (0)
> > #endif
> >
> > +#ifdef __dma_mb
> > +#define dma_mb() do { kcsan_mb(); __dma_mb(); } while (0)
> > +#endif
> > +
> > #ifdef __dma_rmb
> > #define dma_rmb() do { kcsan_rmb(); __dma_rmb(); } while (0)
> > #endif
> > @@ -65,6 +69,10 @@
> > #define wmb() mb()
> > #endif
> >
> > +#ifndef dma_mb
> > +#define dma_mb() mb()
> > +#endif
> > +
> > #ifndef dma_rmb
> > #define dma_rmb() rmb()
> > #endif
> > --
> > 2.35.3
> >
Thread overview: 13+ messages
2022-05-23 11:31 [PATCH v4 0/2] arm64: Fix kcsan test_barrier fail and panic Kefeng Wang
2022-05-23 11:31 ` [PATCH v4 1/2] asm-generic: Add memory barrier dma_mb() Kefeng Wang
2022-05-23 11:35 ` Marco Elver
2022-06-16 23:13 ` Paul E. McKenney [this message]
2022-06-17 10:18 ` Marco Elver
2022-06-19 9:45 ` Catalin Marinas
2022-06-20 21:02 ` Paul E. McKenney
2022-05-23 11:38 ` Mark Rutland
2022-05-23 11:31 ` [PATCH v4 2/2] arm64: kcsan: Support detecting more missing memory barriers Kefeng Wang
2022-05-23 14:16 ` Mark Rutland
2022-06-14 3:20 ` Kefeng Wang
2022-06-21 10:46 ` Catalin Marinas
2022-06-23 19:31 ` [PATCH v4 0/2] arm64: Fix kcsan test_barrier fail and panic Will Deacon