* [PATCH] ARM: spinlock: ensure we have a compiler barrier before sev
@ 2014-02-04 12:28 Will Deacon
From: Will Deacon @ 2014-02-04 12:28 UTC
  To: linux-arm-kernel

When unlocking a spinlock, we require the following, strictly ordered
sequence of events:

	<barrier>	/* dmb */
	<unlock>
	<barrier>	/* dsb */
	<sev>

Whilst the code does indeed reflect this in terms of the architecture,
the final <barrier> + <sev> have been contracted into a single inline
asm without a "memory" clobber, therefore the compiler is at liberty to
reorder the unlock to the end of the above sequence. In such a case,
a waiting CPU may be woken up before the lock has been unlocked, leading
to extremely poor performance.
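
As a minimal sketch of the problem (illustrative only, not the kernel
code; lock_word stands in for the real lock structure), compare the
two unlock paths below when compiling for ARMv7:

	unsigned int lock_word;

	void bad_unlock(void)
	{
		lock_word = 0;		/* <unlock>: a plain store */

		/* No "memory" clobber: the compiler is free to sink the
		 * store to lock_word below this asm, i.e. past the sev. */
		__asm__ __volatile__ ("dsb ishst\n\tsev");
	}

	void good_unlock(void)
	{
		lock_word = 0;		/* <unlock>: a plain store */

		/* The "memory" clobber makes the asm a compiler barrier,
		 * so the store must be emitted before the dsb and sev. */
		__asm__ __volatile__ ("dsb ishst\n\tsev" : : : "memory");
	}

Note that __volatile__ alone is not enough: it prevents the asm from
being deleted, but (as the bug shows) it does not order the asm
against plain memory accesses such as the unlock store.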

This patch reworks the dsb_sev() function to use the dsb() macro,
which carries the required "memory" clobber and therefore ensures
ordering against the unlock.
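
For reference, on ARMv7 the dsb() macro (arch/arm/include/asm/barrier.h
at the time of this patch; quoted roughly, so check the tree) is
defined along the lines of:

	#define dsb(option) __asm__ __volatile__ ("dsb " #option : : : "memory")

i.e. it carries exactly the "memory" clobber that the open-coded asm
was missing, and the pre-v7 mcr-based variant of dsb() carries the
same clobber.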

Cc: <stable@vger.kernel.org>
Reported-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm/include/asm/spinlock.h | 15 +++------------
 1 file changed, 3 insertions(+), 12 deletions(-)

diff --git a/arch/arm/include/asm/spinlock.h b/arch/arm/include/asm/spinlock.h
index ef3c6072aa45..ac4bfae26702 100644
--- a/arch/arm/include/asm/spinlock.h
+++ b/arch/arm/include/asm/spinlock.h
@@ -37,18 +37,9 @@
 
 static inline void dsb_sev(void)
 {
-#if __LINUX_ARM_ARCH__ >= 7
-	__asm__ __volatile__ (
-		"dsb ishst\n"
-		SEV
-	);
-#else
-	__asm__ __volatile__ (
-		"mcr p15, 0, %0, c7, c10, 4\n"
-		SEV
-		: : "r" (0)
-	);
-#endif
+
+	dsb(ishst);
+	__asm__(SEV);
 }
 
 /*
-- 
1.8.2.2


* [PATCH] ARM: spinlock: ensure we have a compiler barrier before sev
@ 2014-02-04 22:08 Christoffer Dall
From: Christoffer Dall @ 2014-02-04 22:08 UTC
  To: linux-arm-kernel

On Tue, Feb 04, 2014 at 12:28:46PM +0000, Will Deacon wrote:
> When unlocking a spinlock, we require the following, strictly ordered
> sequence of events:
> 
> 	<barrier>	/* dmb */
> 	<unlock>
> 	<barrier>	/* dsb */
> 	<sev>
> 
> Whilst the code does indeed reflect this in terms of the architecture,
> the final <barrier> + <sev> have been contracted into a single inline
> asm without a "memory" clobber, therefore the compiler is at liberty to
> reorder the unlock to the end of the above sequence. In such a case,
> a waiting CPU may be woken up before the lock has been unlocked, leading
> to extremely poor performance.
> 
> This patch reworks the dsb_sev() function to use the dsb() macro,
> which carries the required "memory" clobber and therefore ensures
> ordering against the unlock.
> 
> Cc: <stable@vger.kernel.org>
> Reported-by: Mark Rutland <mark.rutland@arm.com>
> Signed-off-by: Will Deacon <will.deacon@arm.com>

FWIW: Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>

