[PATCH v6 tip/core/locking 8/8] powerpc: Full barrier for smp_mb__after_unlock_lock()
       [not found] ` <1386799151-2219-1-git-send-email-paulmck@linux.vnet.ibm.com>
@ 2013-12-11 21:59   ` Paul E. McKenney
From: Paul E. McKenney @ 2013-12-11 21:59 UTC
  To: linux-kernel
  Cc: tglx, laijs, edumazet, peterz, fweisbec, josh, rostedt, oleg,
	dhowells, sbw, niv, mathieu.desnoyers, darren, akpm,
	Paul E. McKenney, linuxppc-dev, mingo, Paul Mackerras

From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

The powerpc lock acquisition sequence is as follows:

	lwarx; cmpwi; bne; stwcx.; lwsync;

Lock release is as follows:

	lwsync; stw;
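
To make those sequences concrete, here is a simplified C/inline-asm
sketch of a lock built from them.  This is illustrative only, not the
kernel's actual arch_spin_lock()/arch_spin_unlock() implementation;
the names sketch_spin_lock() and sketch_spin_unlock() are made up for
this example, and the real code uses a trylock helper and barrier
macros rather than open-coded lwsync:

	/* Sketch only: lwarx/stwcx. acquisition loop, then lwsync. */
	static inline void sketch_spin_lock(unsigned int *lock)
	{
		unsigned int tmp;

		__asm__ __volatile__(
	"1:	lwarx	%0,0,%2\n"	/* load-reserve the lock word */
	"	cmpwi	0,%0,0\n"	/* nonzero means already held */
	"	bne-	1b\n"		/* held: spin */
	"	stwcx.	%1,0,%2\n"	/* try to mark it held */
	"	bne-	1b\n"		/* lost the reservation: retry */
	"	lwsync\n"		/* acquire barrier */
		: "=&r" (tmp)
		: "r" (1), "r" (lock)
		: "cr0", "memory");
	}

	/* Sketch only: lwsync release barrier, then a plain store (stw). */
	static inline void sketch_spin_unlock(unsigned int *lock)
	{
		__asm__ __volatile__("lwsync" : : : "memory");
		*(volatile unsigned int *)lock = 0;
	}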

If CPU 0 does a store (say, x=1) then a lock release, and CPU 1 does a
lock acquisition then a load (say, r1=y), then there is no guarantee of
a full memory barrier between the store to 'x' and the load from 'y'.
To see this, suppose that CPUs 0 and 1 are hardware threads in the same
core that share a store buffer, and that CPU 2 is in some other core,
and that CPU 2 does the following:

	y = 1; sync; r2 = x;

If 'x' and 'y' are both initially zero, then the lock acquisition and
release sequences above can result in r1 and r2 both being equal to
zero, which could not happen if unlock+lock were a full barrier.
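
The scenario can also be written as a litmus test.  The sketch below
uses the C-flavored Linux-kernel litmus format understood by the herd7
tool (that format and tooling are assumptions of this example, not
something the patch depends on).  The auxiliary variable 'u' exists
only so that the "exists" clause selects executions in which P1
acquires the lock after P0 releases it; the outcome asked about
corresponds to r1 == 0 && r2 == 0 above:

	C unlock-lock-not-full-barrier

	{}

	P0(int *x, int *u, spinlock_t *s)
	{
		spin_lock(s);
		WRITE_ONCE(*u, 1);	/* marker: P0's critical section ran first */
		WRITE_ONCE(*x, 1);	/* CPU 0's store before the lock release */
		spin_unlock(s);
	}

	P1(int *y, int *u, spinlock_t *s)
	{
		int r0;
		int r1;

		spin_lock(s);
		r0 = READ_ONCE(*u);	/* r0 == 1: lock acquired after P0's release */
		r1 = READ_ONCE(*y);	/* CPU 1's load after the lock acquisition */
		spin_unlock(s);
	}

	P2(int *x, int *y)
	{
		int r2;

		WRITE_ONCE(*y, 1);
		smp_mb();		/* CPU 2's sync */
		r2 = READ_ONCE(*x);
	}

	exists (1:r0=1 /\ 1:r1=0 /\ 2:r2=0)

Placing smp_mb__after_unlock_lock() immediately after P1's spin_lock()
is what rules this outcome out.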

This commit therefore makes powerpc's smp_mb__after_unlock_lock() a
full barrier.
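
For reference, the intended usage pattern looks like the sketch below
(the lock gp->lock and the variables x and y are hypothetical): the
new primitive sits immediately after a lock acquisition and combines
with a prior release, possibly performed by another CPU, to provide
the full ordering that the bare unlock+lock sequence does not:

	/* CPU 0 */
	ACCESS_ONCE(x) = 1;		/* store before the lock release */
	spin_unlock(&gp->lock);

	/* CPU 1 */
	spin_lock(&gp->lock);
	smp_mb__after_unlock_lock();	/* unlock+lock now acts as smp_mb() */
	r1 = ACCESS_ONCE(y);		/* load after the lock acquisition */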

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: linuxppc-dev@lists.ozlabs.org
---
 arch/powerpc/include/asm/spinlock.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
index 5f54a744dcc5..f6e78d63fb6a 100644
--- a/arch/powerpc/include/asm/spinlock.h
+++ b/arch/powerpc/include/asm/spinlock.h
@@ -28,6 +28,8 @@
 #include <asm/synch.h>
 #include <asm/ppc-opcode.h>
 
+#define smp_mb__after_unlock_lock()	smp_mb()  /* Full ordering for lock. */
+
 #define arch_spin_is_locked(x)		((x)->slock != 0)
 
 #ifdef CONFIG_PPC64
-- 
1.8.1.5
