From: peterz@infradead.org
To: linux-arch@vger.kernel.org
Cc: geert@linux-m68k.org, paulmck@linux.vnet.ibm.com,
	torvalds@linux-foundation.org, VICTORK@il.ibm.com,
	oleg@redhat.com, anton@samba.org, benh@kernel.crashing.org,
	fweisbec@gmail.com, mathieu.desnoyers@polymtl.ca,
	michael@ellerman.id.au, mikey@neuling.org,
	linux@arm.linux.org.uk, schwidefsky@de.ibm.com,
	heiko.carstens@de.ibm.com, tony.luck@intel.com,
	Will Deacon <will.deacon@arm.com>,
	Peter Zijlstra <peterz@infradead.org>
Subject: [PATCH 3/4] arch: Introduce smp_load_acquire(), smp_store_release()
Date: Thu, 07 Nov 2013 23:03:17 +0100
Message-ID: <20131107221254.226041159@infradead.org>
In-Reply-To: <20131107220314.740353088@infradead.org>

[-- Attachment #1: peter_zijlstra-arch-sr-la.patch --]
[-- Type: text/plain, Size: 15047 bytes --]

A number of situations currently require the heavyweight smp_mb(),
even though there is no need to order prior stores against later
loads.  Many architectures have much cheaper ways to handle these
situations, but the Linux kernel currently has no portable way
to make use of them.

This commit therefore supplies smp_load_acquire() and
smp_store_release() to remedy this situation.  The new
smp_load_acquire() primitive orders the specified load against
any subsequent reads or writes, while the new smp_store_release()
primitive orders the specified store against any prior reads or
writes.  These primitives allow array-based circular FIFOs to be
implemented without an smp_mb(), and also allow a theoretical
hole in rcu_assign_pointer() to be closed at no additional
expense on most architectures.
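
For illustration only (this example is not part of the patch; the
ringbuf structure and the rb_produce()/rb_consume() helpers are
made-up names, not an existing in-tree user), a single-producer/
single-consumer array-based FIFO built on the two new primitives
might look like this:

#define RB_SIZE	64			/* must be a power of two */

struct ringbuf {
	unsigned long	head;		/* written only by the producer */
	unsigned long	tail;		/* written only by the consumer */
	void		*slot[RB_SIZE];
};

/* producer */
static bool rb_produce(struct ringbuf *rb, void *item)
{
	unsigned long head = rb->head;
	/* pairs with the consumer's release store to tail */
	unsigned long tail = smp_load_acquire(&rb->tail);

	if (head - tail >= RB_SIZE)
		return false;		/* full */

	rb->slot[head & (RB_SIZE - 1)] = item;
	/* order the slot write before publishing the new head */
	smp_store_release(&rb->head, head + 1);
	return true;
}

/* consumer */
static void *rb_consume(struct ringbuf *rb)
{
	unsigned long tail = rb->tail;
	/* pairs with the producer's release store to head */
	unsigned long head = smp_load_acquire(&rb->head);
	void *item;

	if (tail == head)
		return NULL;		/* empty */

	item = rb->slot[tail & (RB_SIZE - 1)];
	/* order the slot read before making the slot reusable */
	smp_store_release(&rb->tail, tail + 1);
	return item;
}

Each release pairs with the acquire on the other side, so neither
side needs an smp_mb().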

In addition, RCU's experience in transitioning from explicit
smp_read_barrier_depends() and smp_wmb() to rcu_dereference()
and rcu_assign_pointer(), respectively, resulted in substantial
improvements in readability.  It therefore seems likely that
replacing other explicit barriers with smp_load_acquire() and
smp_store_release() will provide similar benefits.  It appears
that roughly half of the explicit barriers in core kernel code
might be so replaced.
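
As a sketch of the sort of conversion meant here (obj, compute() and
consume() are placeholder names, not an existing in-tree user), a
typical publish/consume pattern would go from

	/* publisher */
	obj->data = compute();
	smp_wmb();			/* pairs with the reader's smp_rmb() */
	ACCESS_ONCE(obj->ready) = 1;

	/* reader */
	if (ACCESS_ONCE(obj->ready)) {
		smp_rmb();		/* pairs with the writer's smp_wmb() */
		consume(obj->data);
	}

to

	/* publisher */
	obj->data = compute();
	smp_store_release(&obj->ready, 1);

	/* reader */
	if (smp_load_acquire(&obj->ready))
		consume(obj->data);

where the pairing, and the variable doing the publishing, are visible
in the accessors themselves rather than only in comments.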

[Changelog by PaulMck]

Cc: Tony Luck <tony.luck@intel.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Cc: Michael Ellerman <michael@ellerman.id.au>
Cc: Michael Neuling <mikey@neuling.org>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Victor Kaplansky <VICTORK@il.ibm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
---
 arch/arm/include/asm/barrier.h      |   15 ++++++++++
 arch/arm64/include/asm/barrier.h    |   50 ++++++++++++++++++++++++++++++++++++
 arch/ia64/include/asm/barrier.h     |   49 +++++++++++++++++++++++++++++++++++
 arch/metag/include/asm/barrier.h    |   15 ++++++++++
 arch/mips/include/asm/barrier.h     |   15 ++++++++++
 arch/powerpc/include/asm/barrier.h  |   21 ++++++++++++++-
 arch/s390/include/asm/barrier.h     |   15 ++++++++++
 arch/sparc/include/asm/barrier_64.h |   15 ++++++++++
 arch/x86/include/asm/barrier.h      |   15 ++++++++++
 include/asm-generic/barrier.h       |   15 ++++++++++
 include/linux/compiler.h            |    9 ++++++
 11 files changed, 233 insertions(+), 1 deletion(-)

Index: linux-2.6/arch/arm/include/asm/barrier.h
===================================================================
--- linux-2.6.orig/arch/arm/include/asm/barrier.h	2013-11-07 17:36:09.105170623 +0100
+++ linux-2.6/arch/arm/include/asm/barrier.h	2013-11-07 17:36:09.097170473 +0100
@@ -59,6 +59,21 @@
 #define smp_wmb()	dmb(ishst)
 #endif
 
+#define smp_store_release(p, v)						\
+do {									\
+	compiletime_assert_atomic_type(*p);				\
+	smp_mb();							\
+	ACCESS_ONCE(*p) = (v);						\
+} while (0)
+
+#define smp_load_acquire(p)						\
+({									\
+	typeof(*p) ___p1 = ACCESS_ONCE(*p);				\
+	compiletime_assert_atomic_type(*p);				\
+	smp_mb();							\
+	___p1;								\
+})
+
 #define read_barrier_depends()		do { } while(0)
 #define smp_read_barrier_depends()	do { } while(0)
 
Index: linux-2.6/arch/arm64/include/asm/barrier.h
===================================================================
--- linux-2.6.orig/arch/arm64/include/asm/barrier.h	2013-11-07 17:36:09.105170623 +0100
+++ linux-2.6/arch/arm64/include/asm/barrier.h	2013-11-07 17:36:09.098170492 +0100
@@ -35,10 +35,60 @@
 #define smp_mb()	barrier()
 #define smp_rmb()	barrier()
 #define smp_wmb()	barrier()
+
+#define smp_store_release(p, v)						\
+do {									\
+	compiletime_assert_atomic_type(*p);				\
+	smp_mb();							\
+	ACCESS_ONCE(*p) = (v);						\
+} while (0)
+
+#define smp_load_acquire(p)						\
+({									\
+	typeof(*p) ___p1 = ACCESS_ONCE(*p);				\
+	compiletime_assert_atomic_type(*p);				\
+	smp_mb();							\
+	___p1;								\
+})
+
 #else
+
 #define smp_mb()	asm volatile("dmb ish" : : : "memory")
 #define smp_rmb()	asm volatile("dmb ishld" : : : "memory")
 #define smp_wmb()	asm volatile("dmb ishst" : : : "memory")
+
+#define smp_store_release(p, v)						\
+do {									\
+	compiletime_assert_atomic_type(*p);				\
+	switch (sizeof(*p)) {						\
+	case 4:								\
+		asm volatile ("stlr %w1, %0"				\
+				: "=Q" (*p) : "r" (v) : "memory");	\
+		break;							\
+	case 8:								\
+		asm volatile ("stlr %1, %0"				\
+				: "=Q" (*p) : "r" (v) : "memory");	\
+		break;							\
+	}								\
+} while (0)
+
+#define smp_load_acquire(p)						\
+({									\
+	typeof(*p) ___p1;						\
+	compiletime_assert_atomic_type(*p);				\
+	switch (sizeof(*p)) {						\
+	case 4:								\
+		asm volatile ("ldar %w0, %1"				\
+			: "=r" (___p1) : "Q" (*p) : "memory");		\
+		break;							\
+	case 8:								\
+		asm volatile ("ldar %0, %1"				\
+			: "=r" (___p1) : "Q" (*p) : "memory");		\
+		break;							\
+	}								\
+	___p1;								\
+})
+
 #endif
 
 #define read_barrier_depends()		do { } while(0)
Index: linux-2.6/arch/ia64/include/asm/barrier.h
===================================================================
--- linux-2.6.orig/arch/ia64/include/asm/barrier.h	2013-11-07 17:36:09.105170623 +0100
+++ linux-2.6/arch/ia64/include/asm/barrier.h	2013-11-07 17:36:09.098170492 +0100
@@ -45,11 +45,60 @@
 # define smp_rmb()	rmb()
 # define smp_wmb()	wmb()
 # define smp_read_barrier_depends()	read_barrier_depends()
+
+#define smp_store_release(p, v)						\
+do {									\
+	compiletime_assert_atomic_type(*p);				\
+	switch (sizeof(*p)) {						\
+	case 4:								\
+		asm volatile ("st4.rel [%0]=%1"				\
+				: : "r" (p), "r" (v) : "memory");	\
+		break;							\
+	case 8:								\
+		asm volatile ("st8.rel [%0]=%1"				\
+				: : "r" (p), "r" (v) : "memory");	\
+		break;							\
+	}								\
+} while (0)
+
+#define smp_load_acquire(p)						\
+({									\
+	typeof(*p) ___p1;						\
+	compiletime_assert_atomic_type(*p);				\
+	switch (sizeof(*p)) {						\
+	case 4:								\
+		asm volatile ("ld4.acq %0=[%1]"				\
+				: "=r" (___p1) : "r" (p) : "memory");	\
+		break;							\
+	case 8:								\
+		asm volatile ("ld8.acq %0=[%1]"				\
+				: "=r" (___p1) : "r" (p) : "memory");	\
+		break;							\
+	}								\
+	___p1;								\
+})
+
 #else
+
 # define smp_mb()	barrier()
 # define smp_rmb()	barrier()
 # define smp_wmb()	barrier()
 # define smp_read_barrier_depends()	do { } while(0)
+
+#define smp_store_release(p, v)						\
+do {									\
+	compiletime_assert_atomic_type(*p);				\
+	smp_mb();							\
+	ACCESS_ONCE(*p) = (v);						\
+} while (0)
+
+#define smp_load_acquire(p)						\
+({									\
+	typeof(*p) ___p1 = ACCESS_ONCE(*p);				\
+	compiletime_assert_atomic_type(*p);				\
+	smp_mb();							\
+	___p1;								\
+})
 #endif
 
 /*
Index: linux-2.6/arch/metag/include/asm/barrier.h
===================================================================
--- linux-2.6.orig/arch/metag/include/asm/barrier.h	2013-11-07 17:36:09.105170623 +0100
+++ linux-2.6/arch/metag/include/asm/barrier.h	2013-11-07 17:36:09.099170511 +0100
@@ -82,4 +82,19 @@
 #define smp_read_barrier_depends()     do { } while (0)
 #define set_mb(var, value) do { var = value; smp_mb(); } while (0)
 
+#define smp_store_release(p, v)						\
+do {									\
+	compiletime_assert_atomic_type(*p);				\
+	smp_mb();							\
+	ACCESS_ONCE(*p) = (v);						\
+} while (0)
+
+#define smp_load_acquire(p)						\
+({									\
+	typeof(*p) ___p1 = ACCESS_ONCE(*p);				\
+	compiletime_assert_atomic_type(*p);				\
+	smp_mb();							\
+	___p1;								\
+})
+
 #endif /* _ASM_METAG_BARRIER_H */
Index: linux-2.6/arch/mips/include/asm/barrier.h
===================================================================
--- linux-2.6.orig/arch/mips/include/asm/barrier.h	2013-11-07 17:36:09.105170623 +0100
+++ linux-2.6/arch/mips/include/asm/barrier.h	2013-11-07 17:36:09.099170511 +0100
@@ -180,4 +180,19 @@
 #define nudge_writes() mb()
 #endif
 
+#define smp_store_release(p, v)						\
+do {									\
+	compiletime_assert_atomic_type(*p);				\
+	smp_mb();							\
+	ACCESS_ONCE(*p) = (v);						\
+} while (0)
+
+#define smp_load_acquire(p)						\
+({									\
+	typeof(*p) ___p1 = ACCESS_ONCE(*p);				\
+	compiletime_assert_atomic_type(*p);				\
+	smp_mb();							\
+	___p1;								\
+})
+
 #endif /* __ASM_BARRIER_H */
Index: linux-2.6/arch/powerpc/include/asm/barrier.h
===================================================================
--- linux-2.6.orig/arch/powerpc/include/asm/barrier.h	2013-11-07 17:36:09.105170623 +0100
+++ linux-2.6/arch/powerpc/include/asm/barrier.h	2013-11-07 17:36:09.100170529 +0100
@@ -45,11 +45,15 @@
 #    define SMPWMB      eieio
 #endif
 
+#define __lwsync()	__asm__ __volatile__ (stringify_in_c(LWSYNC) : : :"memory")
+
 #define smp_mb()	mb()
-#define smp_rmb()	__asm__ __volatile__ (stringify_in_c(LWSYNC) : : :"memory")
+#define smp_rmb()	__lwsync()
 #define smp_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
 #define smp_read_barrier_depends()	read_barrier_depends()
 #else
+#define __lwsync()	barrier()
+
 #define smp_mb()	barrier()
 #define smp_rmb()	barrier()
 #define smp_wmb()	barrier()
@@ -65,4 +69,19 @@
 #define data_barrier(x)	\
 	asm volatile("twi 0,%0,0; isync" : : "r" (x) : "memory");
 
+#define smp_store_release(p, v)						\
+do {									\
+	compiletime_assert_atomic_type(*p);				\
+	__lwsync();							\
+	ACCESS_ONCE(*p) = (v);						\
+} while (0)
+
+#define smp_load_acquire(p)						\
+({									\
+	typeof(*p) ___p1 = ACCESS_ONCE(*p);				\
+	compiletime_assert_atomic_type(*p);				\
+	__lwsync();							\
+	___p1;								\
+})
+
 #endif /* _ASM_POWERPC_BARRIER_H */
Index: linux-2.6/arch/s390/include/asm/barrier.h
===================================================================
--- linux-2.6.orig/arch/s390/include/asm/barrier.h	2013-11-07 17:36:09.105170623 +0100
+++ linux-2.6/arch/s390/include/asm/barrier.h	2013-11-07 17:36:09.100170529 +0100
@@ -32,4 +32,19 @@
 
 #define set_mb(var, value)		do { var = value; mb(); } while (0)
 
+#define smp_store_release(p, v)						\
+do {									\
+	compiletime_assert_atomic_type(*p);				\
+	barrier();							\
+	ACCESS_ONCE(*p) = (v);						\
+} while (0)
+
+#define smp_load_acquire(p)						\
+({									\
+	typeof(*p) ___p1 = ACCESS_ONCE(*p);				\
+	compiletime_assert_atomic_type(*p);				\
+	barrier();							\
+	___p1;								\
+})
+
 #endif /* __ASM_BARRIER_H */
Index: linux-2.6/arch/sparc/include/asm/barrier_64.h
===================================================================
--- linux-2.6.orig/arch/sparc/include/asm/barrier_64.h	2013-11-07 17:36:09.105170623 +0100
+++ linux-2.6/arch/sparc/include/asm/barrier_64.h	2013-11-07 17:36:09.101170548 +0100
@@ -53,4 +53,19 @@
 
 #define smp_read_barrier_depends()	do { } while(0)
 
+#define smp_store_release(p, v)						\
+do {									\
+	compiletime_assert_atomic_type(*p);				\
+	barrier();							\
+	ACCESS_ONCE(*p) = (v);						\
+} while (0)
+
+#define smp_load_acquire(p)						\
+({									\
+	typeof(*p) ___p1 = ACCESS_ONCE(*p);				\
+	compiletime_assert_atomic_type(*p);				\
+	barrier();							\
+	___p1;								\
+})
+
 #endif /* !(__SPARC64_BARRIER_H) */
Index: linux-2.6/arch/x86/include/asm/barrier.h
===================================================================
--- linux-2.6.orig/arch/x86/include/asm/barrier.h	2013-11-07 17:36:09.105170623 +0100
+++ linux-2.6/arch/x86/include/asm/barrier.h	2013-11-07 22:23:46.097491898 +0100
@@ -92,12 +92,53 @@
 #endif
 #define smp_read_barrier_depends()	read_barrier_depends()
 #define set_mb(var, value) do { (void)xchg(&var, value); } while (0)
-#else
+#else /* !SMP */
 #define smp_mb()	barrier()
 #define smp_rmb()	barrier()
 #define smp_wmb()	barrier()
 #define smp_read_barrier_depends()	do { } while (0)
 #define set_mb(var, value) do { var = value; barrier(); } while (0)
+#endif /* SMP */
+
+#if defined(CONFIG_X86_OOSTORE) || defined(CONFIG_X86_PPRO_FENCE)
+
+/*
+ * For either of these options x86 doesn't have a strong TSO memory
+ * model and we should fall back to full barriers.
+ */
+
+#define smp_store_release(p, v)						\
+do {									\
+	compiletime_assert_atomic_type(*p);				\
+	smp_mb();							\
+	ACCESS_ONCE(*p) = (v);						\
+} while (0)
+
+#define smp_load_acquire(p)						\
+({									\
+	typeof(*p) ___p1 = ACCESS_ONCE(*p);				\
+	compiletime_assert_atomic_type(*p);				\
+	smp_mb();							\
+	___p1;								\
+})
+
+#else /* regular x86 TSO memory ordering */
+
+#define smp_store_release(p, v)						\
+do {									\
+	compiletime_assert_atomic_type(*p);				\
+	barrier();							\
+	ACCESS_ONCE(*p) = (v);						\
+} while (0)
+
+#define smp_load_acquire(p)						\
+({									\
+	typeof(*p) ___p1 = ACCESS_ONCE(*p);				\
+	compiletime_assert_atomic_type(*p);				\
+	barrier();							\
+	___p1;								\
+})
+
 #endif
 
 /*
Index: linux-2.6/include/asm-generic/barrier.h
===================================================================
--- linux-2.6.orig/include/asm-generic/barrier.h	2013-11-07 17:36:09.105170623 +0100
+++ linux-2.6/include/asm-generic/barrier.h	2013-11-07 17:36:09.102170567 +0100
@@ -62,5 +62,20 @@
 #define set_mb(var, value)  do { (var) = (value); mb(); } while (0)
 #endif
 
+#define smp_store_release(p, v)						\
+do {									\
+	compiletime_assert_atomic_type(*p);				\
+	smp_mb();							\
+	ACCESS_ONCE(*p) = (v);						\
+} while (0)
+
+#define smp_load_acquire(p)						\
+({									\
+	typeof(*p) ___p1 = ACCESS_ONCE(*p);				\
+	compiletime_assert_atomic_type(*p);				\
+	smp_mb();							\
+	___p1;								\
+})
+
 #endif /* !__ASSEMBLY__ */
 #endif /* __ASM_GENERIC_BARRIER_H */
Index: linux-2.6/include/linux/compiler.h
===================================================================
--- linux-2.6.orig/include/linux/compiler.h	2013-11-07 17:36:09.105170623 +0100
+++ linux-2.6/include/linux/compiler.h	2013-11-07 17:36:09.102170567 +0100
@@ -298,6 +298,11 @@
 # define __same_type(a, b) __builtin_types_compatible_p(typeof(a), typeof(b))
 #endif
 
+/* Is this type a native word size -- useful for atomic operations */
+#ifndef __native_word
+# define __native_word(t) (sizeof(t) == sizeof(int) || sizeof(t) == sizeof(long))
+#endif
+
 /* Compile time object size, -1 for unknown */
 #ifndef __compiletime_object_size
 # define __compiletime_object_size(obj) -1
@@ -337,6 +342,10 @@
 #define compiletime_assert(condition, msg) \
 	_compiletime_assert(condition, msg, __compiletime_assert_, __LINE__)
 
+#define compiletime_assert_atomic_type(t)				\
+	compiletime_assert(__native_word(t),				\
+		"Need native word sized stores/loads for atomicity.")
+
 /*
  * Prevent the compiler from merging or refetching accesses.  The compiler
  * is also forbidden from reordering successive instances of ACCESS_ONCE(),

