Subject: Re: [RESEND PATCH] documentation: memory-barriers: fix smp_mb__before_spinlock() semantics
From: Paul E. McKenney @ 2015-04-01 15:26 UTC
To: Will Deacon; +Cc: Peter Zijlstra, linuxppc-dev, linux-kernel, Oleg Nesterov
On Tue, Mar 31, 2015 at 09:39:41AM +0100, Will Deacon wrote:
> Our current documentation claims that, when followed by an ACQUIRE,
> smp_mb__before_spinlock() orders prior loads against subsequent loads
> and stores, which isn't actually true.
>
> Fix the documentation to state that this sequence orders only prior
> stores against subsequent loads and stores.
>
> Cc: Oleg Nesterov <oleg@redhat.com>
> Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Signed-off-by: Will Deacon <will.deacon@arm.com>
> ---
>
> Could somebody pick this up please? I guess I could route it via the arm64
> tree with an Ack, but I'd rather it went through Paul or -tip.
Queued for 4.2, along with a separate patch for PowerPC that makes it so
that PowerPC actually behaves as described below. ;-)
Thanx, Paul
> Documentation/memory-barriers.txt | 7 +++----
> 1 file changed, 3 insertions(+), 4 deletions(-)
>
> diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
> index ca2387ef27ab..fa28a0c1e2b1 100644
> --- a/Documentation/memory-barriers.txt
> +++ b/Documentation/memory-barriers.txt
> @@ -1768,10 +1768,9 @@ for each construct. These operations all imply certain barriers:
>
> Memory operations issued before the ACQUIRE may be completed after
> the ACQUIRE operation has completed. An smp_mb__before_spinlock(),
> - combined with a following ACQUIRE, orders prior loads against
> - subsequent loads and stores and also orders prior stores against
> - subsequent stores. Note that this is weaker than smp_mb()! The
> - smp_mb__before_spinlock() primitive is free on many architectures.
> + combined with a following ACQUIRE, orders prior stores against
> + subsequent loads and stores. Note that this is weaker than smp_mb()!
> + The smp_mb__before_spinlock() primitive is free on many architectures.
>
> (2) RELEASE operation implication:
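
For illustration, here is a minimal sketch of the guarantee as now
documented (x, y, r0 and my_lock are hypothetical names, not taken from
any real kernel code path): the prior store is ordered against the loads
and stores inside the critical section, but a prior load would get no
such guarantee.

	static int x, y;
	static DEFINE_SPINLOCK(my_lock);

	static int cpu0(void)
	{
		int r0;

		WRITE_ONCE(x, 1);		/* prior store		*/
		smp_mb__before_spinlock();
		spin_lock(&my_lock);		/* ACQUIRE		*/
		r0 = READ_ONCE(y);		/* subsequent load	*/
		WRITE_ONCE(y, 1);		/* subsequent store	*/
		spin_unlock(&my_lock);

		return r0;	/* both accesses to y ordered after x = 1 */
	}

A load of x issued before smp_mb__before_spinlock() would not be ordered
against the critical-section accesses, which is exactly the over-claim
the patch removes from the documentation.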
------------------------------------------------------------------------
powerpc: Fix smp_mb__before_spinlock()
Currently, smp_mb__before_spinlock() is defined to be smp_wmb()
in core code, but this is not sufficient on PowerPC. This patch
therefore supplies an override for the generic definition to
strengthen smp_mb__before_spinlock() to smp_mb(), as is needed
on PowerPC.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: <linuxppc-dev@lists.ozlabs.org>
diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
index a3bf5be111ff..1124f59b8df4 100644
--- a/arch/powerpc/include/asm/barrier.h
+++ b/arch/powerpc/include/asm/barrier.h
@@ -89,5 +89,6 @@ do { \
#define smp_mb__before_atomic() smp_mb()
#define smp_mb__after_atomic() smp_mb()
+#define smp_mb__before_spinlock() smp_mb()
#endif /* _ASM_POWERPC_BARRIER_H */
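
As a rough sketch of why smp_wmb() is not enough here (assuming the
usual PowerPC mappings of smp_wmb() to lwsync and smp_mb() to sync;
x, y, r0, r1 and my_lock are again hypothetical), consider a
store-buffering-style pattern:

	/* CPU 0 */				/* CPU 1 */
	WRITE_ONCE(x, 1);			WRITE_ONCE(y, 1);
	smp_mb__before_spinlock();		smp_mb();
	spin_lock(&my_lock);			r1 = READ_ONCE(x);
	r0 = READ_ONCE(y);
	spin_unlock(&my_lock);

The documented semantics order CPU 0's prior store to x against its
subsequent load of y, so the outcome r0 == 0 && r1 == 0 should be
forbidden.  lwsync, however, does not order a prior store against a
later load, so the generic smp_wmb()-based definition cannot rule that
outcome out on PowerPC; the full sync supplied by this override does.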