* [PATCH v3 01/41] locking/barriers, arch: Use smp barriers in smp_store_release()
[not found] <1452426622-4471-1-git-send-email-mst@redhat.com>
@ 2016-01-10 14:16 ` Michael S. Tsirkin
2016-01-12 16:28 ` Paul E. McKenney
[not found] ` <20160112162844.GD3818@linux.vnet.ibm.com>
2016-01-10 14:16 ` [PATCH v3 02/41] asm-generic: guard smp_store_release/load_acquire Michael S. Tsirkin
` (44 subsequent siblings)
45 siblings, 2 replies; 48+ messages in thread
From: Michael S. Tsirkin @ 2016-01-10 14:16 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra,
Benjamin Herrenschmidt, Heiko Carstens, virtualization,
Paul Mackerras, H. Peter Anvin, sparclinux, Ingo Molnar,
linux-arch, linux-s390, Davidlohr Bueso, Russell King - ARM Linux,
Arnd Bergmann, Davidlohr Bueso, Michael Ellerman, x86,
Christian Borntraeger, Linus Torvalds, xen-devel, Ingo Molnar,
Paul E . McKenney, linux-xtensa
From: Davidlohr Bueso <dave@stgolabs.net>
With commit b92b8b35a2e ("locking/arch: Rename set_mb() to smp_store_mb()")
it was made clear that the context of this call (and thus of set_mb())
is strictly CPU ordering, as opposed to I/O ordering. As such, all
architectures should use the smp variant of mb(), respecting the
semantics and saving a mandatory barrier on UP.
Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: <linux-arch@vger.kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: dave@stgolabs.net
Link: http://lkml.kernel.org/r/1445975631-17047-3-git-send-email-dave@stgolabs.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
arch/ia64/include/asm/barrier.h | 2 +-
arch/powerpc/include/asm/barrier.h | 2 +-
arch/s390/include/asm/barrier.h | 2 +-
include/asm-generic/barrier.h | 2 +-
4 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/ia64/include/asm/barrier.h b/arch/ia64/include/asm/barrier.h
index df896a1..209c4b8 100644
--- a/arch/ia64/include/asm/barrier.h
+++ b/arch/ia64/include/asm/barrier.h
@@ -77,7 +77,7 @@ do { \
___p1; \
})
-#define smp_store_mb(var, value) do { WRITE_ONCE(var, value); mb(); } while (0)
+#define smp_store_mb(var, value) do { WRITE_ONCE(var, value); smp_mb(); } while (0)
/*
* The group barrier in front of the rsm & ssm are necessary to ensure
diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
index 0eca6ef..a7af5fb 100644
--- a/arch/powerpc/include/asm/barrier.h
+++ b/arch/powerpc/include/asm/barrier.h
@@ -34,7 +34,7 @@
#define rmb() __asm__ __volatile__ ("sync" : : : "memory")
#define wmb() __asm__ __volatile__ ("sync" : : : "memory")
-#define smp_store_mb(var, value) do { WRITE_ONCE(var, value); mb(); } while (0)
+#define smp_store_mb(var, value) do { WRITE_ONCE(var, value); smp_mb(); } while (0)
#ifdef __SUBARCH_HAS_LWSYNC
# define SMPWMB LWSYNC
diff --git a/arch/s390/include/asm/barrier.h b/arch/s390/include/asm/barrier.h
index d68e11e..7ffd0b1 100644
--- a/arch/s390/include/asm/barrier.h
+++ b/arch/s390/include/asm/barrier.h
@@ -36,7 +36,7 @@
#define smp_mb__before_atomic() smp_mb()
#define smp_mb__after_atomic() smp_mb()
-#define smp_store_mb(var, value) do { WRITE_ONCE(var, value); mb(); } while (0)
+#define smp_store_mb(var, value) do { WRITE_ONCE(var, value); smp_mb(); } while (0)
#define smp_store_release(p, v) \
do { \
diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h
index b42afad..0f45f93 100644
--- a/include/asm-generic/barrier.h
+++ b/include/asm-generic/barrier.h
@@ -93,7 +93,7 @@
#endif /* CONFIG_SMP */
#ifndef smp_store_mb
-#define smp_store_mb(var, value) do { WRITE_ONCE(var, value); mb(); } while (0)
+#define smp_store_mb(var, value) do { WRITE_ONCE(var, value); smp_mb(); } while (0)
#endif
#ifndef smp_mb__before_atomic
--
MST
^ permalink raw reply related [flat|nested] 48+ messages in thread
* [PATCH v3 02/41] asm-generic: guard smp_store_release/load_acquire
[not found] <1452426622-4471-1-git-send-email-mst@redhat.com>
2016-01-10 14:16 ` [PATCH v3 01/41] locking/barriers, arch: Use smp barriers in smp_store_release() Michael S. Tsirkin
@ 2016-01-10 14:16 ` Michael S. Tsirkin
2016-01-10 14:16 ` [PATCH v3 03/41] ia64: rename nop->iosapic_nop Michael S. Tsirkin
` (43 subsequent siblings)
45 siblings, 0 replies; 48+ messages in thread
From: Michael S. Tsirkin @ 2016-01-10 14:16 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, virtualization,
H. Peter Anvin, sparclinux, linux-arch, linux-s390,
Russell King - ARM Linux, Arnd Bergmann, x86, xen-devel,
Ingo Molnar, linux-xtensa, user-mode-linux-devel,
Stefano Stabellini, adi-buildroot-devel, Thomas Gleixner,
linux-metag, linux-arm-kernel, Andrew Cooper, Joe Perches,
linuxppc-dev, David Miller
Allow architectures to override smp_store_release
and smp_load_acquire by guarding the definitions
in asm-generic/barrier.h with ifndef directives.
This is in preparation for reusing asm-generic/barrier.h
on architectures that have their own definitions
of these macros.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
include/asm-generic/barrier.h | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h
index 0f45f93..987b2e0 100644
--- a/include/asm-generic/barrier.h
+++ b/include/asm-generic/barrier.h
@@ -104,13 +104,16 @@
#define smp_mb__after_atomic() smp_mb()
#endif
+#ifndef smp_store_release
#define smp_store_release(p, v) \
do { \
compiletime_assert_atomic_type(*p); \
smp_mb(); \
WRITE_ONCE(*p, v); \
} while (0)
+#endif
+#ifndef smp_load_acquire
#define smp_load_acquire(p) \
({ \
typeof(*p) ___p1 = READ_ONCE(*p); \
@@ -118,6 +121,7 @@ do { \
smp_mb(); \
___p1; \
})
+#endif
#endif /* !__ASSEMBLY__ */
#endif /* __ASM_GENERIC_BARRIER_H */
--
MST
* [PATCH v3 03/41] ia64: rename nop->iosapic_nop
[not found] <1452426622-4471-1-git-send-email-mst@redhat.com>
2016-01-10 14:16 ` [PATCH v3 01/41] locking/barriers, arch: Use smp barriers in smp_store_release() Michael S. Tsirkin
2016-01-10 14:16 ` [PATCH v3 02/41] asm-generic: guard smp_store_release/load_acquire Michael S. Tsirkin
@ 2016-01-10 14:16 ` Michael S. Tsirkin
2016-01-10 14:17 ` [PATCH v3 04/41] ia64: reuse asm-generic/barrier.h Michael S. Tsirkin
` (42 subsequent siblings)
45 siblings, 0 replies; 48+ messages in thread
From: Michael S. Tsirkin @ 2016-01-10 14:16 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, virtualization,
H. Peter Anvin, sparclinux, linux-arch, linux-s390,
Russell King - ARM Linux, Arnd Bergmann, x86, xen-devel,
Ingo Molnar, linux-xtensa, user-mode-linux-devel,
Stefano Stabellini, adi-buildroot-devel, Thomas Gleixner,
linux-metag, linux-arm-kernel, Tony Luck, Andrew Cooper,
Fenghua Yu, Jiang Liu, Joe
asm-generic/barrier.h defines a nop() macro.
To be able to use this header on ia64, we must not
name local functions or variables nop().
There is one instance where this breaks on ia64:
rename that function to iosapic_nop() to avoid the conflict.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Tony Luck <tony.luck@intel.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
arch/ia64/kernel/iosapic.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/ia64/kernel/iosapic.c b/arch/ia64/kernel/iosapic.c
index d2fae05..90fde5b 100644
--- a/arch/ia64/kernel/iosapic.c
+++ b/arch/ia64/kernel/iosapic.c
@@ -256,7 +256,7 @@ set_rte (unsigned int gsi, unsigned int irq, unsigned int dest, int mask)
}
static void
-nop (struct irq_data *data)
+iosapic_nop (struct irq_data *data)
{
/* do nothing... */
}
@@ -415,7 +415,7 @@ iosapic_unmask_level_irq (struct irq_data *data)
#define iosapic_shutdown_level_irq mask_irq
#define iosapic_enable_level_irq unmask_irq
#define iosapic_disable_level_irq mask_irq
-#define iosapic_ack_level_irq nop
+#define iosapic_ack_level_irq iosapic_nop
static struct irq_chip irq_type_iosapic_level = {
.name = "IO-SAPIC-level",
@@ -453,7 +453,7 @@ iosapic_ack_edge_irq (struct irq_data *data)
}
#define iosapic_enable_edge_irq unmask_irq
-#define iosapic_disable_edge_irq nop
+#define iosapic_disable_edge_irq iosapic_nop
static struct irq_chip irq_type_iosapic_edge = {
.name = "IO-SAPIC-edge",
--
MST
* [PATCH v3 04/41] ia64: reuse asm-generic/barrier.h
[not found] <1452426622-4471-1-git-send-email-mst@redhat.com>
` (2 preceding siblings ...)
2016-01-10 14:16 ` [PATCH v3 03/41] ia64: rename nop->iosapic_nop Michael S. Tsirkin
@ 2016-01-10 14:17 ` Michael S. Tsirkin
2016-01-10 14:17 ` [PATCH v3 05/41] powerpc: " Michael S. Tsirkin
` (41 subsequent siblings)
45 siblings, 0 replies; 48+ messages in thread
From: Michael S. Tsirkin @ 2016-01-10 14:17 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, virtualization,
H. Peter Anvin, sparclinux, Ingo Molnar, linux-arch, linux-s390,
Davidlohr Bueso, Russell King - ARM Linux, Arnd Bergmann, x86,
xen-devel, Ingo Molnar, linux-xtensa, user-mode-linux-devel,
Stefano Stabellini, adi-buildroot-devel, Thomas Gleixner,
linux-metag, linux-arm-kernel, Tony Luck, Andrew Cooper,
Fenghua Yu <fengh>
On ia64, smp_rmb, smp_wmb, read_barrier_depends, smp_read_barrier_depends
and smp_store_mb() match the asm-generic variants exactly. Drop the
local definitions and pull in asm-generic/barrier.h instead.
This is in preparation for refactoring this code area.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Tony Luck <tony.luck@intel.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
arch/ia64/include/asm/barrier.h | 10 ++--------
1 file changed, 2 insertions(+), 8 deletions(-)
diff --git a/arch/ia64/include/asm/barrier.h b/arch/ia64/include/asm/barrier.h
index 209c4b8..2f93348 100644
--- a/arch/ia64/include/asm/barrier.h
+++ b/arch/ia64/include/asm/barrier.h
@@ -48,12 +48,6 @@
# define smp_mb() barrier()
#endif
-#define smp_rmb() smp_mb()
-#define smp_wmb() smp_mb()
-
-#define read_barrier_depends() do { } while (0)
-#define smp_read_barrier_depends() do { } while (0)
-
#define smp_mb__before_atomic() barrier()
#define smp_mb__after_atomic() barrier()
@@ -77,12 +71,12 @@ do { \
___p1; \
})
-#define smp_store_mb(var, value) do { WRITE_ONCE(var, value); smp_mb(); } while (0)
-
/*
* The group barrier in front of the rsm & ssm are necessary to ensure
* that none of the previous instructions in the same group are
* affected by the rsm/ssm.
*/
+#include <asm-generic/barrier.h>
+
#endif /* _ASM_IA64_BARRIER_H */
--
MST
* [PATCH v3 05/41] powerpc: reuse asm-generic/barrier.h
[not found] <1452426622-4471-1-git-send-email-mst@redhat.com>
` (3 preceding siblings ...)
2016-01-10 14:17 ` [PATCH v3 04/41] ia64: reuse asm-generic/barrier.h Michael S. Tsirkin
@ 2016-01-10 14:17 ` Michael S. Tsirkin
2016-01-10 14:17 ` [PATCH v3 06/41] s390: " Michael S. Tsirkin
` (40 subsequent siblings)
45 siblings, 0 replies; 48+ messages in thread
From: Michael S. Tsirkin @ 2016-01-10 14:17 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra,
Benjamin Herrenschmidt, virtualization, Paul Mackerras,
H. Peter Anvin, sparclinux, Ingo Molnar, linux-arch, linux-s390,
Davidlohr Bueso, Russell King - ARM Linux, Arnd Bergmann,
Michael Ellerman, x86, xen-devel, Ingo Molnar, Paul E. McKenney,
linux-xtensa, user-mode-linux-devel, Stefano Stabellini,
adi-buildroot-devel, Thomas Gleixner
On powerpc, read_barrier_depends, smp_read_barrier_depends,
smp_store_mb(), smp_mb__before_atomic and smp_mb__after_atomic match the
asm-generic variants exactly. Drop the local definitions and pull in
asm-generic/barrier.h instead.
This is in preparation for refactoring this code area.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
arch/powerpc/include/asm/barrier.h | 9 ++-------
1 file changed, 2 insertions(+), 7 deletions(-)
diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
index a7af5fb..980ad0c 100644
--- a/arch/powerpc/include/asm/barrier.h
+++ b/arch/powerpc/include/asm/barrier.h
@@ -34,8 +34,6 @@
#define rmb() __asm__ __volatile__ ("sync" : : : "memory")
#define wmb() __asm__ __volatile__ ("sync" : : : "memory")
-#define smp_store_mb(var, value) do { WRITE_ONCE(var, value); smp_mb(); } while (0)
-
#ifdef __SUBARCH_HAS_LWSYNC
# define SMPWMB LWSYNC
#else
@@ -60,9 +58,6 @@
#define smp_wmb() barrier()
#endif /* CONFIG_SMP */
-#define read_barrier_depends() do { } while (0)
-#define smp_read_barrier_depends() do { } while (0)
-
/*
* This is a barrier which prevents following instructions from being
* started until the value of the argument x is known. For example, if
@@ -87,8 +82,8 @@ do { \
___p1; \
})
-#define smp_mb__before_atomic() smp_mb()
-#define smp_mb__after_atomic() smp_mb()
#define smp_mb__before_spinlock() smp_mb()
+#include <asm-generic/barrier.h>
+
#endif /* _ASM_POWERPC_BARRIER_H */
--
MST
* [PATCH v3 06/41] s390: reuse asm-generic/barrier.h
[not found] <1452426622-4471-1-git-send-email-mst@redhat.com>
` (4 preceding siblings ...)
2016-01-10 14:17 ` [PATCH v3 05/41] powerpc: " Michael S. Tsirkin
@ 2016-01-10 14:17 ` Michael S. Tsirkin
2016-01-10 14:17 ` [PATCH v3 07/41] sparc: " Michael S. Tsirkin
` (39 subsequent siblings)
45 siblings, 0 replies; 48+ messages in thread
From: Michael S. Tsirkin @ 2016-01-10 14:17 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, Heiko Carstens,
virtualization, H. Peter Anvin, sparclinux, Ingo Molnar,
linux-arch, linux-s390, Davidlohr Bueso, Russell King - ARM Linux,
Arnd Bergmann, x86, Christian Borntraeger, xen-devel, Ingo Molnar,
linux-xtensa, user-mode-linux-devel, Stefano Stabellini,
adi-buildroot-devel, Martin Schwidefsky, Thomas Gleixner,
linux-metag
On s390, read_barrier_depends, smp_read_barrier_depends,
smp_store_mb(), smp_mb__before_atomic and smp_mb__after_atomic match the
asm-generic variants exactly. Drop the local definitions and pull in
asm-generic/barrier.h instead.
This is in preparation for refactoring this code area.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
arch/s390/include/asm/barrier.h | 10 ++--------
1 file changed, 2 insertions(+), 8 deletions(-)
diff --git a/arch/s390/include/asm/barrier.h b/arch/s390/include/asm/barrier.h
index 7ffd0b1..c358c31 100644
--- a/arch/s390/include/asm/barrier.h
+++ b/arch/s390/include/asm/barrier.h
@@ -30,14 +30,6 @@
#define smp_rmb() rmb()
#define smp_wmb() wmb()
-#define read_barrier_depends() do { } while (0)
-#define smp_read_barrier_depends() do { } while (0)
-
-#define smp_mb__before_atomic() smp_mb()
-#define smp_mb__after_atomic() smp_mb()
-
-#define smp_store_mb(var, value) do { WRITE_ONCE(var, value); smp_mb(); } while (0)
-
#define smp_store_release(p, v) \
do { \
compiletime_assert_atomic_type(*p); \
@@ -53,4 +45,6 @@ do { \
___p1; \
})
+#include <asm-generic/barrier.h>
+
#endif /* __ASM_BARRIER_H */
--
MST
* [PATCH v3 07/41] sparc: reuse asm-generic/barrier.h
[not found] <1452426622-4471-1-git-send-email-mst@redhat.com>
` (5 preceding siblings ...)
2016-01-10 14:17 ` [PATCH v3 06/41] s390: " Michael S. Tsirkin
@ 2016-01-10 14:17 ` Michael S. Tsirkin
2016-01-10 14:17 ` [PATCH v3 08/41] arm: " Michael S. Tsirkin
` (38 subsequent siblings)
45 siblings, 0 replies; 48+ messages in thread
From: Michael S. Tsirkin @ 2016-01-10 14:17 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, virtualization,
H. Peter Anvin, sparclinux, Ingo Molnar, linux-arch, linux-s390,
Russell King - ARM Linux, Arnd Bergmann, x86, xen-devel,
Ingo Molnar, linux-xtensa, user-mode-linux-devel,
Stefano Stabellini, adi-buildroot-devel, Thomas Gleixner,
linux-metag, linux-arm-kernel, Andrew Cooper, Joe Perches,
linuxppc-dev, David Miller
On 64-bit sparc, dma_rmb, dma_wmb, smp_store_mb, smp_mb, smp_rmb,
smp_wmb, read_barrier_depends and smp_read_barrier_depends match the
asm-generic variants exactly. Drop the local definitions and pull in
asm-generic/barrier.h instead.
nop() uses __asm__ __volatile__ but is otherwise identical to
the generic version; drop that as well.
This is in preparation for refactoring this code area.
Note: nop() was in processor.h and not in barrier.h as on other
architectures. Nothing seems to depend on it being there though.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: David S. Miller <davem@davemloft.net>
---
arch/sparc/include/asm/barrier_32.h | 1 -
arch/sparc/include/asm/barrier_64.h | 21 ++-------------------
arch/sparc/include/asm/processor.h | 3 ---
3 files changed, 2 insertions(+), 23 deletions(-)
diff --git a/arch/sparc/include/asm/barrier_32.h b/arch/sparc/include/asm/barrier_32.h
index ae69eda..8059130 100644
--- a/arch/sparc/include/asm/barrier_32.h
+++ b/arch/sparc/include/asm/barrier_32.h
@@ -1,7 +1,6 @@
#ifndef __SPARC_BARRIER_H
#define __SPARC_BARRIER_H
-#include <asm/processor.h> /* for nop() */
#include <asm-generic/barrier.h>
#endif /* !(__SPARC_BARRIER_H) */
diff --git a/arch/sparc/include/asm/barrier_64.h b/arch/sparc/include/asm/barrier_64.h
index 14a9286..26c3f72 100644
--- a/arch/sparc/include/asm/barrier_64.h
+++ b/arch/sparc/include/asm/barrier_64.h
@@ -37,25 +37,6 @@ do { __asm__ __volatile__("ba,pt %%xcc, 1f\n\t" \
#define rmb() __asm__ __volatile__("":::"memory")
#define wmb() __asm__ __volatile__("":::"memory")
-#define dma_rmb() rmb()
-#define dma_wmb() wmb()
-
-#define smp_store_mb(__var, __value) \
- do { WRITE_ONCE(__var, __value); membar_safe("#StoreLoad"); } while(0)
-
-#ifdef CONFIG_SMP
-#define smp_mb() mb()
-#define smp_rmb() rmb()
-#define smp_wmb() wmb()
-#else
-#define smp_mb() __asm__ __volatile__("":::"memory")
-#define smp_rmb() __asm__ __volatile__("":::"memory")
-#define smp_wmb() __asm__ __volatile__("":::"memory")
-#endif
-
-#define read_barrier_depends() do { } while (0)
-#define smp_read_barrier_depends() do { } while (0)
-
#define smp_store_release(p, v) \
do { \
compiletime_assert_atomic_type(*p); \
@@ -74,4 +55,6 @@ do { \
#define smp_mb__before_atomic() barrier()
#define smp_mb__after_atomic() barrier()
+#include <asm-generic/barrier.h>
+
#endif /* !(__SPARC64_BARRIER_H) */
diff --git a/arch/sparc/include/asm/processor.h b/arch/sparc/include/asm/processor.h
index 2fe99e6..9da9646 100644
--- a/arch/sparc/include/asm/processor.h
+++ b/arch/sparc/include/asm/processor.h
@@ -5,7 +5,4 @@
#else
#include <asm/processor_32.h>
#endif
-
-#define nop() __asm__ __volatile__ ("nop")
-
#endif
--
MST
* [PATCH v3 08/41] arm: reuse asm-generic/barrier.h
[not found] <1452426622-4471-1-git-send-email-mst@redhat.com>
` (6 preceding siblings ...)
2016-01-10 14:17 ` [PATCH v3 07/41] sparc: " Michael S. Tsirkin
@ 2016-01-10 14:17 ` Michael S. Tsirkin
2016-01-10 14:17 ` [PATCH v3 09/41] arm64: " Michael S. Tsirkin
` (37 subsequent siblings)
45 siblings, 0 replies; 48+ messages in thread
From: Michael S. Tsirkin @ 2016-01-10 14:17 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, virtualization,
H. Peter Anvin, sparclinux, Ingo Molnar, linux-arch, linux-s390,
Russell King - ARM Linux, Arnd Bergmann, x86, xen-devel,
Ingo Molnar, linux-xtensa, Richard Woodruff,
user-mode-linux-devel, Stefano Stabellini, adi-buildroot-devel,
Russell King, Thomas Gleixner, linux-metag, linux-arm-kernel,
Andrew Cooper, Jo
On arm, smp_store_mb, read_barrier_depends, smp_read_barrier_depends,
smp_store_release, smp_load_acquire, smp_mb__before_atomic and
smp_mb__after_atomic match the asm-generic variants exactly. Drop the
local definitions and pull in asm-generic/barrier.h instead.
This is in preparation for refactoring this code area.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
arch/arm/include/asm/barrier.h | 23 +----------------------
1 file changed, 1 insertion(+), 22 deletions(-)
diff --git a/arch/arm/include/asm/barrier.h b/arch/arm/include/asm/barrier.h
index 3ff5642..31152e8 100644
--- a/arch/arm/include/asm/barrier.h
+++ b/arch/arm/include/asm/barrier.h
@@ -70,28 +70,7 @@ extern void arm_heavy_mb(void);
#define smp_wmb() dmb(ishst)
#endif
-#define smp_store_release(p, v) \
-do { \
- compiletime_assert_atomic_type(*p); \
- smp_mb(); \
- WRITE_ONCE(*p, v); \
-} while (0)
-
-#define smp_load_acquire(p) \
-({ \
- typeof(*p) ___p1 = READ_ONCE(*p); \
- compiletime_assert_atomic_type(*p); \
- smp_mb(); \
- ___p1; \
-})
-
-#define read_barrier_depends() do { } while(0)
-#define smp_read_barrier_depends() do { } while(0)
-
-#define smp_store_mb(var, value) do { WRITE_ONCE(var, value); smp_mb(); } while (0)
-
-#define smp_mb__before_atomic() smp_mb()
-#define smp_mb__after_atomic() smp_mb()
+#include <asm-generic/barrier.h>
#endif /* !__ASSEMBLY__ */
#endif /* __ASM_BARRIER_H */
--
MST
* [PATCH v3 09/41] arm64: reuse asm-generic/barrier.h
[not found] <1452426622-4471-1-git-send-email-mst@redhat.com>
` (7 preceding siblings ...)
2016-01-10 14:17 ` [PATCH v3 08/41] arm: " Michael S. Tsirkin
@ 2016-01-10 14:17 ` Michael S. Tsirkin
2016-01-10 14:17 ` [PATCH v3 10/41] metag: " Michael S. Tsirkin
` (36 subsequent siblings)
45 siblings, 0 replies; 48+ messages in thread
From: Michael S. Tsirkin @ 2016-01-10 14:17 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, Catalin Marinas,
Will Deacon, virtualization, H. Peter Anvin, sparclinux,
Ingo Molnar, linux-arch, linux-s390, Russell King - ARM Linux,
Arnd Bergmann, x86, xen-devel, Ingo Molnar, linux-xtensa,
user-mode-linux-devel, Stefano Stabellini, Andre Przywara,
adi-buildroot-devel, Thomas Gleixner, linux-metag,
linux-arm-kernel, Andrew
On arm64, nop, read_barrier_depends, smp_read_barrier_depends,
smp_store_mb(), smp_mb__before_atomic and smp_mb__after_atomic match the
asm-generic variants exactly. Drop the local definitions and pull in
asm-generic/barrier.h instead.
This is in preparation for refactoring this code area.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
arch/arm64/include/asm/barrier.h | 9 +--------
1 file changed, 1 insertion(+), 8 deletions(-)
diff --git a/arch/arm64/include/asm/barrier.h b/arch/arm64/include/asm/barrier.h
index 9622eb4..91a43f4 100644
--- a/arch/arm64/include/asm/barrier.h
+++ b/arch/arm64/include/asm/barrier.h
@@ -91,14 +91,7 @@ do { \
__u.__val; \
})
-#define read_barrier_depends() do { } while(0)
-#define smp_read_barrier_depends() do { } while(0)
-
-#define smp_store_mb(var, value) do { WRITE_ONCE(var, value); smp_mb(); } while (0)
-#define nop() asm volatile("nop");
-
-#define smp_mb__before_atomic() smp_mb()
-#define smp_mb__after_atomic() smp_mb()
+#include <asm-generic/barrier.h>
#endif /* __ASSEMBLY__ */
--
MST
* [PATCH v3 10/41] metag: reuse asm-generic/barrier.h
[not found] <1452426622-4471-1-git-send-email-mst@redhat.com>
` (8 preceding siblings ...)
2016-01-10 14:17 ` [PATCH v3 09/41] arm64: " Michael S. Tsirkin
@ 2016-01-10 14:17 ` Michael S. Tsirkin
2016-01-10 14:18 ` [PATCH v3 11/41] mips: " Michael S. Tsirkin
` (35 subsequent siblings)
45 siblings, 0 replies; 48+ messages in thread
From: Michael S. Tsirkin @ 2016-01-10 14:17 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, virtualization,
H. Peter Anvin, sparclinux, Ingo Molnar, linux-arch, linux-s390,
Russell King - ARM Linux, Arnd Bergmann, x86, xen-devel,
Ingo Molnar, linux-xtensa, James Hogan, user-mode-linux-devel,
Stefano Stabellini, adi-buildroot-devel, Thomas Gleixner,
linux-metag, linux-arm-kernel, Andrew Cooper, Joe Perches,
linuxppc-dev
On metag, dma_rmb, dma_wmb, smp_store_mb, read_barrier_depends,
smp_read_barrier_depends, smp_store_release and smp_load_acquire match
the asm-generic variants exactly. Drop the local definitions and pull in
asm-generic/barrier.h instead.
This is in preparation for refactoring this code area.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
arch/metag/include/asm/barrier.h | 25 ++-----------------------
1 file changed, 2 insertions(+), 23 deletions(-)
diff --git a/arch/metag/include/asm/barrier.h b/arch/metag/include/asm/barrier.h
index 172b7e5..b5b778b 100644
--- a/arch/metag/include/asm/barrier.h
+++ b/arch/metag/include/asm/barrier.h
@@ -44,9 +44,6 @@ static inline void wr_fence(void)
#define rmb() barrier()
#define wmb() mb()
-#define dma_rmb() rmb()
-#define dma_wmb() wmb()
-
#ifndef CONFIG_SMP
#define fence() do { } while (0)
#define smp_mb() barrier()
@@ -81,27 +78,9 @@ static inline void fence(void)
#endif
#endif
-#define read_barrier_depends() do { } while (0)
-#define smp_read_barrier_depends() do { } while (0)
-
-#define smp_store_mb(var, value) do { WRITE_ONCE(var, value); smp_mb(); } while (0)
-
-#define smp_store_release(p, v) \
-do { \
- compiletime_assert_atomic_type(*p); \
- smp_mb(); \
- WRITE_ONCE(*p, v); \
-} while (0)
-
-#define smp_load_acquire(p) \
-({ \
- typeof(*p) ___p1 = READ_ONCE(*p); \
- compiletime_assert_atomic_type(*p); \
- smp_mb(); \
- ___p1; \
-})
-
#define smp_mb__before_atomic() barrier()
#define smp_mb__after_atomic() barrier()
+#include <asm-generic/barrier.h>
+
#endif /* _ASM_METAG_BARRIER_H */
--
MST
* [PATCH v3 11/41] mips: reuse asm-generic/barrier.h
[not found] <1452426622-4471-1-git-send-email-mst@redhat.com>
` (9 preceding siblings ...)
2016-01-10 14:17 ` [PATCH v3 10/41] metag: " Michael S. Tsirkin
@ 2016-01-10 14:18 ` Michael S. Tsirkin
2016-01-10 14:18 ` [PATCH v3 12/41] x86/um: " Michael S. Tsirkin
` (34 subsequent siblings)
45 siblings, 0 replies; 48+ messages in thread
From: Michael S. Tsirkin @ 2016-01-10 14:18 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, virtualization,
H. Peter Anvin, sparclinux, Ingo Molnar, linux-arch, linux-s390,
Russell King - ARM Linux, Arnd Bergmann, x86, xen-devel,
Ingo Molnar, linux-xtensa, user-mode-linux-devel,
Stefano Stabellini, adi-buildroot-devel, Thomas Gleixner,
linux-metag, linux-arm-kernel, Andrew Cooper, Ralf Baechle,
Joe Perches, linuxppc-dev
On mips, dma_rmb, dma_wmb, smp_store_mb, read_barrier_depends,
smp_read_barrier_depends, smp_store_release and smp_load_acquire match
the asm-generic variants exactly. Drop the local definitions and pull in
asm-generic/barrier.h instead.
This is in preparation for refactoring this code area.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
arch/mips/include/asm/barrier.h | 25 ++-----------------------
1 file changed, 2 insertions(+), 23 deletions(-)
diff --git a/arch/mips/include/asm/barrier.h b/arch/mips/include/asm/barrier.h
index 752e0b8..3eac4b9 100644
--- a/arch/mips/include/asm/barrier.h
+++ b/arch/mips/include/asm/barrier.h
@@ -10,9 +10,6 @@
#include <asm/addrspace.h>
-#define read_barrier_depends() do { } while(0)
-#define smp_read_barrier_depends() do { } while(0)
-
#ifdef CONFIG_CPU_HAS_SYNC
#define __sync() \
__asm__ __volatile__( \
@@ -87,8 +84,6 @@
#define wmb() fast_wmb()
#define rmb() fast_rmb()
-#define dma_wmb() fast_wmb()
-#define dma_rmb() fast_rmb()
#if defined(CONFIG_WEAK_ORDERING) && defined(CONFIG_SMP)
# ifdef CONFIG_CPU_CAVIUM_OCTEON
@@ -112,9 +107,6 @@
#define __WEAK_LLSC_MB " \n"
#endif
-#define smp_store_mb(var, value) \
- do { WRITE_ONCE(var, value); smp_mb(); } while (0)
-
#define smp_llsc_mb() __asm__ __volatile__(__WEAK_LLSC_MB : : :"memory")
#ifdef CONFIG_CPU_CAVIUM_OCTEON
@@ -129,22 +121,9 @@
#define nudge_writes() mb()
#endif
-#define smp_store_release(p, v) \
-do { \
- compiletime_assert_atomic_type(*p); \
- smp_mb(); \
- WRITE_ONCE(*p, v); \
-} while (0)
-
-#define smp_load_acquire(p) \
-({ \
- typeof(*p) ___p1 = READ_ONCE(*p); \
- compiletime_assert_atomic_type(*p); \
- smp_mb(); \
- ___p1; \
-})
-
#define smp_mb__before_atomic() smp_mb__before_llsc()
#define smp_mb__after_atomic() smp_llsc_mb()
+#include <asm-generic/barrier.h>
+
#endif /* __ASM_BARRIER_H */
--
MST
* [PATCH v3 12/41] x86/um: reuse asm-generic/barrier.h
[not found] <1452426622-4471-1-git-send-email-mst@redhat.com>
` (10 preceding siblings ...)
2016-01-10 14:18 ` [PATCH v3 11/41] mips: " Michael S. Tsirkin
@ 2016-01-10 14:18 ` Michael S. Tsirkin
2016-01-10 14:18 ` [PATCH v3 13/41] x86: " Michael S. Tsirkin
` (33 subsequent siblings)
45 siblings, 0 replies; 48+ messages in thread
From: Michael S. Tsirkin @ 2016-01-10 14:18 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mips, linux-ia64, user-mode-linux-user, linux-sh,
Peter Zijlstra, virtualization, H. Peter Anvin, sparclinux,
linux-arch, linux-s390, Russell King - ARM Linux, Arnd Bergmann,
Richard Weinberger, x86, Ingo Molnar, xen-devel, Ingo Molnar,
Borislav Petkov, linux-xtensa, user-mode-linux-devel,
Stefano Stabellini, Jeff Dike, adi-buildroot-devel,
Andy Lutomirski, Thomas Gleixner, linux-metag
On x86/um, CONFIG_SMP is never defined. As a result, several macros
match the asm-generic variant exactly. Drop the local definitions and
pull in asm-generic/barrier.h instead.
This is in preparation for refactoring this code area.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Richard Weinberger <richard@nod.at>
---
arch/x86/um/asm/barrier.h | 9 +--------
1 file changed, 1 insertion(+), 8 deletions(-)
diff --git a/arch/x86/um/asm/barrier.h b/arch/x86/um/asm/barrier.h
index 755481f..174781a 100644
--- a/arch/x86/um/asm/barrier.h
+++ b/arch/x86/um/asm/barrier.h
@@ -36,13 +36,6 @@
#endif /* CONFIG_X86_PPRO_FENCE */
#define dma_wmb() barrier()
-#define smp_mb() barrier()
-#define smp_rmb() barrier()
-#define smp_wmb() barrier()
-
-#define smp_store_mb(var, value) do { WRITE_ONCE(var, value); barrier(); } while (0)
-
-#define read_barrier_depends() do { } while (0)
-#define smp_read_barrier_depends() do { } while (0)
+#include <asm-generic/barrier.h>
#endif
--
MST
* [PATCH v3 13/41] x86: reuse asm-generic/barrier.h
[not found] <1452426622-4471-1-git-send-email-mst@redhat.com>
` (11 preceding siblings ...)
2016-01-10 14:18 ` [PATCH v3 12/41] x86/um: " Michael S. Tsirkin
@ 2016-01-10 14:18 ` Michael S. Tsirkin
2016-01-10 14:18 ` [PATCH v3 14/41] asm-generic: add __smp_xxx wrappers Michael S. Tsirkin
` (32 subsequent siblings)
45 siblings, 0 replies; 48+ messages in thread
From: Michael S. Tsirkin @ 2016-01-10 14:18 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, virtualization,
H. Peter Anvin, sparclinux, linux-arch, linux-s390,
Russell King - ARM Linux, Arnd Bergmann, x86, Ingo Molnar,
xen-devel, Ingo Molnar, Borislav Petkov, linux-xtensa,
user-mode-linux-devel, Stefano Stabellini, adi-buildroot-devel,
Andy Lutomirski, Thomas Gleixner, linux-metag, linux-arm-kernel,
Andrew Cooper, Joe Perches
As on most architectures, on x86 read_barrier_depends and
smp_read_barrier_depends are empty. Drop the local definitions and pull
the generic ones from asm-generic/barrier.h instead: they are identical.
This is in preparation for refactoring this code area.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
arch/x86/include/asm/barrier.h | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/arch/x86/include/asm/barrier.h b/arch/x86/include/asm/barrier.h
index 0681d25..cc4c2a7 100644
--- a/arch/x86/include/asm/barrier.h
+++ b/arch/x86/include/asm/barrier.h
@@ -43,9 +43,6 @@
#define smp_store_mb(var, value) do { WRITE_ONCE(var, value); barrier(); } while (0)
#endif /* SMP */
-#define read_barrier_depends() do { } while (0)
-#define smp_read_barrier_depends() do { } while (0)
-
#if defined(CONFIG_X86_PPRO_FENCE)
/*
@@ -91,4 +88,6 @@ do { \
#define smp_mb__before_atomic() barrier()
#define smp_mb__after_atomic() barrier()
+#include <asm-generic/barrier.h>
+
#endif /* _ASM_X86_BARRIER_H */
--
MST
* [PATCH v3 14/41] asm-generic: add __smp_xxx wrappers
[not found] <1452426622-4471-1-git-send-email-mst@redhat.com>
` (12 preceding siblings ...)
2016-01-10 14:18 ` [PATCH v3 13/41] x86: " Michael S. Tsirkin
@ 2016-01-10 14:18 ` Michael S. Tsirkin
2016-01-10 14:18 ` [PATCH v3 15/41] powerpc: define __smp_xxx Michael S. Tsirkin
` (31 subsequent siblings)
45 siblings, 0 replies; 48+ messages in thread
From: Michael S. Tsirkin @ 2016-01-10 14:18 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, virtualization,
H. Peter Anvin, sparclinux, linux-arch, linux-s390,
Russell King - ARM Linux, Arnd Bergmann, x86, xen-devel,
Ingo Molnar, linux-xtensa, user-mode-linux-devel,
Stefano Stabellini, adi-buildroot-devel, Thomas Gleixner,
linux-metag, linux-arm-kernel, Andrew Cooper, Joe Perches,
linuxppc-dev, David Miller
On !SMP, most architectures define their
barriers as compiler barriers.
On SMP, most need an actual barrier.
Make it possible to remove the code duplication for
!SMP by defining low-level __smp_xxx barriers
which do not depend on the value of CONFIG_SMP, then
use them from asm-generic conditionally.
Besides reducing code duplication, these low level APIs will also be
useful for virtualization, where a barrier is sometimes needed even if
!SMP since we might be talking to another kernel on the same SMP system.
Both virtio and Xen drivers will benefit.
The smp_xxx variants should use the __smp_xxx ones or barrier() depending on
CONFIG_SMP, identically for all architectures.
We keep ifndef guards around them for now - once/if all
architectures are converted to use the generic
code, we'll be able to remove these.
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
include/asm-generic/barrier.h | 91 ++++++++++++++++++++++++++++++++++++++-----
1 file changed, 82 insertions(+), 9 deletions(-)
diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h
index 987b2e0..8752964 100644
--- a/include/asm-generic/barrier.h
+++ b/include/asm-generic/barrier.h
@@ -54,22 +54,38 @@
#define read_barrier_depends() do { } while (0)
#endif
+#ifndef __smp_mb
+#define __smp_mb() mb()
+#endif
+
+#ifndef __smp_rmb
+#define __smp_rmb() rmb()
+#endif
+
+#ifndef __smp_wmb
+#define __smp_wmb() wmb()
+#endif
+
+#ifndef __smp_read_barrier_depends
+#define __smp_read_barrier_depends() read_barrier_depends()
+#endif
+
#ifdef CONFIG_SMP
#ifndef smp_mb
-#define smp_mb() mb()
+#define smp_mb() __smp_mb()
#endif
#ifndef smp_rmb
-#define smp_rmb() rmb()
+#define smp_rmb() __smp_rmb()
#endif
#ifndef smp_wmb
-#define smp_wmb() wmb()
+#define smp_wmb() __smp_wmb()
#endif
#ifndef smp_read_barrier_depends
-#define smp_read_barrier_depends() read_barrier_depends()
+#define smp_read_barrier_depends() __smp_read_barrier_depends()
#endif
#else /* !CONFIG_SMP */
@@ -92,23 +108,78 @@
#endif /* CONFIG_SMP */
+#ifndef __smp_store_mb
+#define __smp_store_mb(var, value) do { WRITE_ONCE(var, value); __smp_mb(); } while (0)
+#endif
+
+#ifndef __smp_mb__before_atomic
+#define __smp_mb__before_atomic() __smp_mb()
+#endif
+
+#ifndef __smp_mb__after_atomic
+#define __smp_mb__after_atomic() __smp_mb()
+#endif
+
+#ifndef __smp_store_release
+#define __smp_store_release(p, v) \
+do { \
+ compiletime_assert_atomic_type(*p); \
+ __smp_mb(); \
+ WRITE_ONCE(*p, v); \
+} while (0)
+#endif
+
+#ifndef __smp_load_acquire
+#define __smp_load_acquire(p) \
+({ \
+ typeof(*p) ___p1 = READ_ONCE(*p); \
+ compiletime_assert_atomic_type(*p); \
+ __smp_mb(); \
+ ___p1; \
+})
+#endif
+
+#ifdef CONFIG_SMP
+
+#ifndef smp_store_mb
+#define smp_store_mb(var, value) __smp_store_mb(var, value)
+#endif
+
+#ifndef smp_mb__before_atomic
+#define smp_mb__before_atomic() __smp_mb__before_atomic()
+#endif
+
+#ifndef smp_mb__after_atomic
+#define smp_mb__after_atomic() __smp_mb__after_atomic()
+#endif
+
+#ifndef smp_store_release
+#define smp_store_release(p, v) __smp_store_release(p, v)
+#endif
+
+#ifndef smp_load_acquire
+#define smp_load_acquire(p) __smp_load_acquire(p)
+#endif
+
+#else /* !CONFIG_SMP */
+
#ifndef smp_store_mb
-#define smp_store_mb(var, value) do { WRITE_ONCE(var, value); smp_mb(); } while (0)
+#define smp_store_mb(var, value) do { WRITE_ONCE(var, value); barrier(); } while (0)
#endif
#ifndef smp_mb__before_atomic
-#define smp_mb__before_atomic() smp_mb()
+#define smp_mb__before_atomic() barrier()
#endif
#ifndef smp_mb__after_atomic
-#define smp_mb__after_atomic() smp_mb()
+#define smp_mb__after_atomic() barrier()
#endif
#ifndef smp_store_release
#define smp_store_release(p, v) \
do { \
compiletime_assert_atomic_type(*p); \
- smp_mb(); \
+ barrier(); \
WRITE_ONCE(*p, v); \
} while (0)
#endif
@@ -118,10 +189,12 @@ do { \
({ \
typeof(*p) ___p1 = READ_ONCE(*p); \
compiletime_assert_atomic_type(*p); \
- smp_mb(); \
+ barrier(); \
___p1; \
})
#endif
+#endif
+
#endif /* !__ASSEMBLY__ */
#endif /* __ASM_GENERIC_BARRIER_H */
--
MST
* [PATCH v3 15/41] powerpc: define __smp_xxx
[not found] <1452426622-4471-1-git-send-email-mst@redhat.com>
` (13 preceding siblings ...)
2016-01-10 14:18 ` [PATCH v3 14/41] asm-generic: add __smp_xxx wrappers Michael S. Tsirkin
@ 2016-01-10 14:18 ` Michael S. Tsirkin
2016-01-10 14:18 ` [PATCH v3 16/41] arm64: " Michael S. Tsirkin
` (30 subsequent siblings)
45 siblings, 0 replies; 48+ messages in thread
From: Michael S. Tsirkin @ 2016-01-10 14:18 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra,
Benjamin Herrenschmidt, virtualization, Paul Mackerras,
H. Peter Anvin, sparclinux, Ingo Molnar, linux-arch, linux-s390,
Davidlohr Bueso, Russell King - ARM Linux, Arnd Bergmann,
Michael Ellerman, x86, xen-devel, Ingo Molnar, Paul E. McKenney,
linux-xtensa, user-mode-linux-devel, Stefano Stabellini,
Boqun Feng, adi-buildroot-devel
This defines __smp_xxx barriers for powerpc
for use by virtualization.
The smp_xxx barriers are removed, as they are
defined correctly by asm-generic/barrier.h
This reduces the amount of arch-specific boilerplate code.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Boqun Feng <boqun.feng@gmail.com>
---
arch/powerpc/include/asm/barrier.h | 24 ++++++++----------------
1 file changed, 8 insertions(+), 16 deletions(-)
diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
index 980ad0c..c0deafc 100644
--- a/arch/powerpc/include/asm/barrier.h
+++ b/arch/powerpc/include/asm/barrier.h
@@ -44,19 +44,11 @@
#define dma_rmb() __lwsync()
#define dma_wmb() __asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
-#ifdef CONFIG_SMP
-#define smp_lwsync() __lwsync()
+#define __smp_lwsync() __lwsync()
-#define smp_mb() mb()
-#define smp_rmb() __lwsync()
-#define smp_wmb() __asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
-#else
-#define smp_lwsync() barrier()
-
-#define smp_mb() barrier()
-#define smp_rmb() barrier()
-#define smp_wmb() barrier()
-#endif /* CONFIG_SMP */
+#define __smp_mb() mb()
+#define __smp_rmb() __lwsync()
+#define __smp_wmb() __asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
/*
* This is a barrier which prevents following instructions from being
@@ -67,18 +59,18 @@
#define data_barrier(x) \
asm volatile("twi 0,%0,0; isync" : : "r" (x) : "memory");
-#define smp_store_release(p, v) \
+#define __smp_store_release(p, v) \
do { \
compiletime_assert_atomic_type(*p); \
- smp_lwsync(); \
+ __smp_lwsync(); \
WRITE_ONCE(*p, v); \
} while (0)
-#define smp_load_acquire(p) \
+#define __smp_load_acquire(p) \
({ \
typeof(*p) ___p1 = READ_ONCE(*p); \
compiletime_assert_atomic_type(*p); \
- smp_lwsync(); \
+ __smp_lwsync(); \
___p1; \
})
--
MST
* [PATCH v3 16/41] arm64: define __smp_xxx
[not found] <1452426622-4471-1-git-send-email-mst@redhat.com>
` (14 preceding siblings ...)
2016-01-10 14:18 ` [PATCH v3 15/41] powerpc: define __smp_xxx Michael S. Tsirkin
@ 2016-01-10 14:18 ` Michael S. Tsirkin
2016-01-10 14:18 ` [PATCH v3 17/41] arm: " Michael S. Tsirkin
` (29 subsequent siblings)
45 siblings, 0 replies; 48+ messages in thread
From: Michael S. Tsirkin @ 2016-01-10 14:18 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, Catalin Marinas,
Will Deacon, virtualization, H. Peter Anvin, sparclinux,
Ingo Molnar, linux-arch, linux-s390, Russell King - ARM Linux,
Arnd Bergmann, x86, xen-devel, Ingo Molnar, linux-xtensa,
user-mode-linux-devel, Stefano Stabellini, Andre Przywara,
adi-buildroot-devel, Thomas Gleixner, linux-metag,
linux-arm-kernel, Andrew
This defines __smp_xxx barriers for arm64,
for use by virtualization.
The smp_xxx barriers are removed, as they are
defined correctly by asm-generic/barrier.h
Note: arm64 does not support !SMP config,
so smp_xxx and __smp_xxx are always equivalent.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
arch/arm64/include/asm/barrier.h | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/arch/arm64/include/asm/barrier.h b/arch/arm64/include/asm/barrier.h
index 91a43f4..dae5c49 100644
--- a/arch/arm64/include/asm/barrier.h
+++ b/arch/arm64/include/asm/barrier.h
@@ -35,11 +35,11 @@
#define dma_rmb() dmb(oshld)
#define dma_wmb() dmb(oshst)
-#define smp_mb() dmb(ish)
-#define smp_rmb() dmb(ishld)
-#define smp_wmb() dmb(ishst)
+#define __smp_mb() dmb(ish)
+#define __smp_rmb() dmb(ishld)
+#define __smp_wmb() dmb(ishst)
-#define smp_store_release(p, v) \
+#define __smp_store_release(p, v) \
do { \
compiletime_assert_atomic_type(*p); \
switch (sizeof(*p)) { \
@@ -62,7 +62,7 @@ do { \
} \
} while (0)
-#define smp_load_acquire(p) \
+#define __smp_load_acquire(p) \
({ \
union { typeof(*p) __val; char __c[1]; } __u; \
compiletime_assert_atomic_type(*p); \
--
MST
* [PATCH v3 17/41] arm: define __smp_xxx
[not found] <1452426622-4471-1-git-send-email-mst@redhat.com>
` (15 preceding siblings ...)
2016-01-10 14:18 ` [PATCH v3 16/41] arm64: " Michael S. Tsirkin
@ 2016-01-10 14:18 ` Michael S. Tsirkin
2016-01-10 14:19 ` [PATCH v3 18/41] blackfin: " Michael S. Tsirkin
` (28 subsequent siblings)
45 siblings, 0 replies; 48+ messages in thread
From: Michael S. Tsirkin @ 2016-01-10 14:18 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, virtualization,
H. Peter Anvin, sparclinux, Ingo Molnar, linux-arch, linux-s390,
Russell King - ARM Linux, Arnd Bergmann, x86, xen-devel,
Ingo Molnar, linux-xtensa, Richard Woodruff,
user-mode-linux-devel, Stefano Stabellini, adi-buildroot-devel,
Russell King, Thomas Gleixner, linux-metag, linux-arm-kernel,
Andrew Cooper, Jo
This defines __smp_xxx barriers for arm,
for use by virtualization.
The smp_xxx barriers are removed, as they are
defined correctly by asm-generic/barrier.h
This reduces the amount of arch-specific boilerplate code.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
---
arch/arm/include/asm/barrier.h | 12 +++---------
1 file changed, 3 insertions(+), 9 deletions(-)
diff --git a/arch/arm/include/asm/barrier.h b/arch/arm/include/asm/barrier.h
index 31152e8..112cc1a 100644
--- a/arch/arm/include/asm/barrier.h
+++ b/arch/arm/include/asm/barrier.h
@@ -60,15 +60,9 @@ extern void arm_heavy_mb(void);
#define dma_wmb() barrier()
#endif
-#ifndef CONFIG_SMP
-#define smp_mb() barrier()
-#define smp_rmb() barrier()
-#define smp_wmb() barrier()
-#else
-#define smp_mb() dmb(ish)
-#define smp_rmb() smp_mb()
-#define smp_wmb() dmb(ishst)
-#endif
+#define __smp_mb() dmb(ish)
+#define __smp_rmb() __smp_mb()
+#define __smp_wmb() dmb(ishst)
#include <asm-generic/barrier.h>
--
MST
* [PATCH v3 18/41] blackfin: define __smp_xxx
[not found] <1452426622-4471-1-git-send-email-mst@redhat.com>
` (16 preceding siblings ...)
2016-01-10 14:18 ` [PATCH v3 17/41] arm: " Michael S. Tsirkin
@ 2016-01-10 14:19 ` Michael S. Tsirkin
2016-01-10 14:19 ` [PATCH v3 19/41] ia64: " Michael S. Tsirkin
` (27 subsequent siblings)
45 siblings, 0 replies; 48+ messages in thread
From: Michael S. Tsirkin @ 2016-01-10 14:19 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, virtualization,
H. Peter Anvin, sparclinux, linux-arch, linux-s390,
Russell King - ARM Linux, Arnd Bergmann, x86, xen-devel,
Ingo Molnar, linux-xtensa, user-mode-linux-devel,
Stefano Stabellini, adi-buildroot-devel, Thomas Gleixner,
linux-metag, linux-arm-kernel, Andrew Cooper, Joe Perches,
linuxppc-dev, David Miller
This defines __smp_xxx barriers for blackfin,
for use by virtualization.
The smp_xxx barriers are removed, as they are
defined correctly by asm-generic/barrier.h
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
arch/blackfin/include/asm/barrier.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/blackfin/include/asm/barrier.h b/arch/blackfin/include/asm/barrier.h
index dfb66fe..7cca51c 100644
--- a/arch/blackfin/include/asm/barrier.h
+++ b/arch/blackfin/include/asm/barrier.h
@@ -78,8 +78,8 @@
#endif /* !CONFIG_SMP */
-#define smp_mb__before_atomic() barrier()
-#define smp_mb__after_atomic() barrier()
+#define __smp_mb__before_atomic() barrier()
+#define __smp_mb__after_atomic() barrier()
#include <asm-generic/barrier.h>
--
MST
* [PATCH v3 19/41] ia64: define __smp_xxx
[not found] <1452426622-4471-1-git-send-email-mst@redhat.com>
` (17 preceding siblings ...)
2016-01-10 14:19 ` [PATCH v3 18/41] blackfin: " Michael S. Tsirkin
@ 2016-01-10 14:19 ` Michael S. Tsirkin
2016-01-10 14:19 ` [PATCH v3 20/41] metag: " Michael S. Tsirkin
` (26 subsequent siblings)
45 siblings, 0 replies; 48+ messages in thread
From: Michael S. Tsirkin @ 2016-01-10 14:19 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, virtualization,
H. Peter Anvin, sparclinux, Ingo Molnar, linux-arch, linux-s390,
Davidlohr Bueso, Russell King - ARM Linux, Arnd Bergmann, x86,
xen-devel, Ingo Molnar, linux-xtensa, user-mode-linux-devel,
Stefano Stabellini, adi-buildroot-devel, Thomas Gleixner,
linux-metag, linux-arm-kernel, Tony Luck, Andrew Cooper,
Fenghua Yu <fengh>
This defines __smp_xxx barriers for ia64,
for use by virtualization.
The smp_xxx barriers are removed, as they are
defined correctly by asm-generic/barrier.h
This reduces the amount of arch-specific boilerplate code.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Tony Luck <tony.luck@intel.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
arch/ia64/include/asm/barrier.h | 14 +++++---------
1 file changed, 5 insertions(+), 9 deletions(-)
diff --git a/arch/ia64/include/asm/barrier.h b/arch/ia64/include/asm/barrier.h
index 2f93348..588f161 100644
--- a/arch/ia64/include/asm/barrier.h
+++ b/arch/ia64/include/asm/barrier.h
@@ -42,28 +42,24 @@
#define dma_rmb() mb()
#define dma_wmb() mb()
-#ifdef CONFIG_SMP
-# define smp_mb() mb()
-#else
-# define smp_mb() barrier()
-#endif
+# define __smp_mb() mb()
-#define smp_mb__before_atomic() barrier()
-#define smp_mb__after_atomic() barrier()
+#define __smp_mb__before_atomic() barrier()
+#define __smp_mb__after_atomic() barrier()
/*
* IA64 GCC turns volatile stores into st.rel and volatile loads into ld.acq no
* need for asm trickery!
*/
-#define smp_store_release(p, v) \
+#define __smp_store_release(p, v) \
do { \
compiletime_assert_atomic_type(*p); \
barrier(); \
WRITE_ONCE(*p, v); \
} while (0)
-#define smp_load_acquire(p) \
+#define __smp_load_acquire(p) \
({ \
typeof(*p) ___p1 = READ_ONCE(*p); \
compiletime_assert_atomic_type(*p); \
--
MST
* [PATCH v3 20/41] metag: define __smp_xxx
[not found] <1452426622-4471-1-git-send-email-mst@redhat.com>
` (18 preceding siblings ...)
2016-01-10 14:19 ` [PATCH v3 19/41] ia64: " Michael S. Tsirkin
@ 2016-01-10 14:19 ` Michael S. Tsirkin
2016-01-10 14:19 ` [PATCH v3 21/41] mips: " Michael S. Tsirkin
` (25 subsequent siblings)
45 siblings, 0 replies; 48+ messages in thread
From: Michael S. Tsirkin @ 2016-01-10 14:19 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, virtualization,
H. Peter Anvin, sparclinux, Ingo Molnar, linux-arch, linux-s390,
Russell King - ARM Linux, Arnd Bergmann, Davidlohr Bueso, x86,
xen-devel, Ingo Molnar, linux-xtensa, James Hogan,
user-mode-linux-devel, Stefano Stabellini, adi-buildroot-devel,
Thomas Gleixner, linux-metag, linux-arm-kernel, Andrew Cooper
This defines __smp_xxx barriers for metag,
for use by virtualization.
The smp_xxx barriers are removed, as they are
defined correctly by asm-generic/barrier.h
Note: as the __smp_xxx macros should not depend on CONFIG_SMP, they cannot
use the existing fence() macro, since that is defined differently between
SMP and !SMP. For this reason, this patch introduces a wrapper
metag_fence() that doesn't depend on CONFIG_SMP.
fence() is then defined using that, depending on CONFIG_SMP.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
arch/metag/include/asm/barrier.h | 32 +++++++++++++++-----------------
1 file changed, 15 insertions(+), 17 deletions(-)
diff --git a/arch/metag/include/asm/barrier.h b/arch/metag/include/asm/barrier.h
index b5b778b..84880c9 100644
--- a/arch/metag/include/asm/barrier.h
+++ b/arch/metag/include/asm/barrier.h
@@ -44,13 +44,6 @@ static inline void wr_fence(void)
#define rmb() barrier()
#define wmb() mb()
-#ifndef CONFIG_SMP
-#define fence() do { } while (0)
-#define smp_mb() barrier()
-#define smp_rmb() barrier()
-#define smp_wmb() barrier()
-#else
-
#ifdef CONFIG_METAG_SMP_WRITE_REORDERING
/*
* Write to the atomic memory unlock system event register (command 0). This is
@@ -60,26 +53,31 @@ static inline void wr_fence(void)
* incoherence). It is therefore ineffective if used after and on the same
* thread as a write.
*/
-static inline void fence(void)
+static inline void metag_fence(void)
{
volatile int *flushptr = (volatile int *) LINSYSEVENT_WR_ATOMIC_UNLOCK;
barrier();
*flushptr = 0;
barrier();
}
-#define smp_mb() fence()
-#define smp_rmb() fence()
-#define smp_wmb() barrier()
+#define __smp_mb() metag_fence()
+#define __smp_rmb() metag_fence()
+#define __smp_wmb() barrier()
#else
-#define fence() do { } while (0)
-#define smp_mb() barrier()
-#define smp_rmb() barrier()
-#define smp_wmb() barrier()
+#define metag_fence() do { } while (0)
+#define __smp_mb() barrier()
+#define __smp_rmb() barrier()
+#define __smp_wmb() barrier()
#endif
+
+#ifdef CONFIG_SMP
+#define fence() metag_fence()
+#else
+#define fence() do { } while (0)
#endif
-#define smp_mb__before_atomic() barrier()
-#define smp_mb__after_atomic() barrier()
+#define __smp_mb__before_atomic() barrier()
+#define __smp_mb__after_atomic() barrier()
#include <asm-generic/barrier.h>
--
MST
* [PATCH v3 21/41] mips: define __smp_xxx
[not found] <1452426622-4471-1-git-send-email-mst@redhat.com>
` (19 preceding siblings ...)
2016-01-10 14:19 ` [PATCH v3 20/41] metag: " Michael S. Tsirkin
@ 2016-01-10 14:19 ` Michael S. Tsirkin
2016-01-10 14:19 ` [PATCH v3 22/41] s390: " Michael S. Tsirkin
` (24 subsequent siblings)
45 siblings, 0 replies; 48+ messages in thread
From: Michael S. Tsirkin @ 2016-01-10 14:19 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, virtualization,
H. Peter Anvin, sparclinux, Ingo Molnar, linux-arch, linux-s390,
Russell King - ARM Linux, Arnd Bergmann, Davidlohr Bueso, x86,
xen-devel, Ingo Molnar, linux-xtensa, user-mode-linux-devel,
Stefano Stabellini, adi-buildroot-devel, Thomas Gleixner,
linux-metag, linux-arm-kernel, Andrew Cooper, Ralf Baechle,
Joe Perches <jo>
This defines __smp_xxx barriers for mips,
for use by virtualization.
The smp_xxx barriers are removed, as they are
defined correctly by asm-generic/barrier.h
Note: the only exception is smp_mb__before_llsc, which is mips-specific.
We define both the __smp_mb__before_llsc variant (for use in
asm/barrier.h) and smp_mb__before_llsc (for use elsewhere on this
architecture).
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
arch/mips/include/asm/barrier.h | 26 ++++++++++++++------------
1 file changed, 14 insertions(+), 12 deletions(-)
diff --git a/arch/mips/include/asm/barrier.h b/arch/mips/include/asm/barrier.h
index 3eac4b9..d296633 100644
--- a/arch/mips/include/asm/barrier.h
+++ b/arch/mips/include/asm/barrier.h
@@ -85,20 +85,20 @@
#define wmb() fast_wmb()
#define rmb() fast_rmb()
-#if defined(CONFIG_WEAK_ORDERING) && defined(CONFIG_SMP)
+#if defined(CONFIG_WEAK_ORDERING)
# ifdef CONFIG_CPU_CAVIUM_OCTEON
-# define smp_mb() __sync()
-# define smp_rmb() barrier()
-# define smp_wmb() __syncw()
+# define __smp_mb() __sync()
+# define __smp_rmb() barrier()
+# define __smp_wmb() __syncw()
# else
-# define smp_mb() __asm__ __volatile__("sync" : : :"memory")
-# define smp_rmb() __asm__ __volatile__("sync" : : :"memory")
-# define smp_wmb() __asm__ __volatile__("sync" : : :"memory")
+# define __smp_mb() __asm__ __volatile__("sync" : : :"memory")
+# define __smp_rmb() __asm__ __volatile__("sync" : : :"memory")
+# define __smp_wmb() __asm__ __volatile__("sync" : : :"memory")
# endif
#else
-#define smp_mb() barrier()
-#define smp_rmb() barrier()
-#define smp_wmb() barrier()
+#define __smp_mb() barrier()
+#define __smp_rmb() barrier()
+#define __smp_wmb() barrier()
#endif
#if defined(CONFIG_WEAK_REORDERING_BEYOND_LLSC) && defined(CONFIG_SMP)
@@ -111,6 +111,7 @@
#ifdef CONFIG_CPU_CAVIUM_OCTEON
#define smp_mb__before_llsc() smp_wmb()
+#define __smp_mb__before_llsc() __smp_wmb()
/* Cause previous writes to become visible on all CPUs as soon as possible */
#define nudge_writes() __asm__ __volatile__(".set push\n\t" \
".set arch=octeon\n\t" \
@@ -118,11 +119,12 @@
".set pop" : : : "memory")
#else
#define smp_mb__before_llsc() smp_llsc_mb()
+#define __smp_mb__before_llsc() smp_llsc_mb()
#define nudge_writes() mb()
#endif
-#define smp_mb__before_atomic() smp_mb__before_llsc()
-#define smp_mb__after_atomic() smp_llsc_mb()
+#define __smp_mb__before_atomic() __smp_mb__before_llsc()
+#define __smp_mb__after_atomic() smp_llsc_mb()
#include <asm-generic/barrier.h>
--
MST
* [PATCH v3 22/41] s390: define __smp_xxx
[not found] <1452426622-4471-1-git-send-email-mst@redhat.com>
` (20 preceding siblings ...)
2016-01-10 14:19 ` [PATCH v3 21/41] mips: " Michael S. Tsirkin
@ 2016-01-10 14:19 ` Michael S. Tsirkin
2016-01-10 14:19 ` [PATCH v3 23/41] sh: define __smp_xxx, fix smp_store_mb for !SMP Michael S. Tsirkin
` (23 subsequent siblings)
45 siblings, 0 replies; 48+ messages in thread
From: Michael S. Tsirkin @ 2016-01-10 14:19 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, Heiko Carstens,
virtualization, H. Peter Anvin, sparclinux, Ingo Molnar,
linux-arch, linux-s390, Davidlohr Bueso, Russell King - ARM Linux,
Arnd Bergmann, x86, Christian Borntraeger, xen-devel, Ingo Molnar,
linux-xtensa, user-mode-linux-devel, Stefano Stabellini,
adi-buildroot-devel, Martin Schwidefsky, Thomas Gleixner,
linux-metag
This defines __smp_xxx barriers for s390,
for use by virtualization.
Some smp_xxx barriers are removed, as they are
defined correctly by asm-generic/barrier.h
Note: smp_mb, smp_rmb and smp_wmb are defined as full barriers
unconditionally on this architecture.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
---
arch/s390/include/asm/barrier.h | 15 +++++++++------
1 file changed, 9 insertions(+), 6 deletions(-)
diff --git a/arch/s390/include/asm/barrier.h b/arch/s390/include/asm/barrier.h
index c358c31..fbd25b2 100644
--- a/arch/s390/include/asm/barrier.h
+++ b/arch/s390/include/asm/barrier.h
@@ -26,18 +26,21 @@
#define wmb() barrier()
#define dma_rmb() mb()
#define dma_wmb() mb()
-#define smp_mb() mb()
-#define smp_rmb() rmb()
-#define smp_wmb() wmb()
-
-#define smp_store_release(p, v) \
+#define __smp_mb() mb()
+#define __smp_rmb() rmb()
+#define __smp_wmb() wmb()
+#define smp_mb() __smp_mb()
+#define smp_rmb() __smp_rmb()
+#define smp_wmb() __smp_wmb()
+
+#define __smp_store_release(p, v) \
do { \
compiletime_assert_atomic_type(*p); \
barrier(); \
WRITE_ONCE(*p, v); \
} while (0)
-#define smp_load_acquire(p) \
+#define __smp_load_acquire(p) \
({ \
typeof(*p) ___p1 = READ_ONCE(*p); \
compiletime_assert_atomic_type(*p); \
--
MST
* [PATCH v3 23/41] sh: define __smp_xxx, fix smp_store_mb for !SMP
[not found] <1452426622-4471-1-git-send-email-mst@redhat.com>
` (21 preceding siblings ...)
2016-01-10 14:19 ` [PATCH v3 22/41] s390: " Michael S. Tsirkin
@ 2016-01-10 14:19 ` Michael S. Tsirkin
2016-01-10 14:19 ` [PATCH v3 24/41] sparc: define __smp_xxx Michael S. Tsirkin
` (22 subsequent siblings)
45 siblings, 0 replies; 48+ messages in thread
From: Michael S. Tsirkin @ 2016-01-10 14:19 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, virtualization,
H. Peter Anvin, sparclinux, Ingo Molnar, linux-arch, linux-s390,
Russell King - ARM Linux, Arnd Bergmann, x86, xen-devel,
Ingo Molnar, linux-xtensa, user-mode-linux-devel,
Stefano Stabellini, adi-buildroot-devel, Thomas Gleixner,
linux-metag, linux-arm-kernel, Andrew Cooper, Joe Perches,
linuxppc-dev, David Miller
The sh variant of smp_store_mb() calls xchg() even on !SMP, which is
stronger than implied by both the name and the documentation.
Define __smp_store_mb() instead: code in asm-generic/barrier.h
will then define smp_store_mb() correctly, depending on
CONFIG_SMP.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
arch/sh/include/asm/barrier.h | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/sh/include/asm/barrier.h b/arch/sh/include/asm/barrier.h
index bf91037..f887c64 100644
--- a/arch/sh/include/asm/barrier.h
+++ b/arch/sh/include/asm/barrier.h
@@ -32,7 +32,8 @@
#define ctrl_barrier() __asm__ __volatile__ ("nop;nop;nop;nop;nop;nop;nop;nop")
#endif
-#define smp_store_mb(var, value) do { (void)xchg(&var, value); } while (0)
+#define __smp_store_mb(var, value) do { (void)xchg(&var, value); } while (0)
+#define smp_store_mb(var, value) __smp_store_mb(var, value)
#include <asm-generic/barrier.h>
--
MST
* [PATCH v3 24/41] sparc: define __smp_xxx
[not found] <1452426622-4471-1-git-send-email-mst@redhat.com>
` (22 preceding siblings ...)
2016-01-10 14:19 ` [PATCH v3 23/41] sh: define __smp_xxx, fix smp_store_mb for !SMP Michael S. Tsirkin
@ 2016-01-10 14:19 ` Michael S. Tsirkin
2016-01-10 14:20 ` [PATCH v3 25/41] tile: " Michael S. Tsirkin
` (21 subsequent siblings)
45 siblings, 0 replies; 48+ messages in thread
From: Michael S. Tsirkin @ 2016-01-10 14:19 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, virtualization,
H. Peter Anvin, sparclinux, Ingo Molnar, linux-arch, linux-s390,
Russell King - ARM Linux, Arnd Bergmann, x86, xen-devel,
Ingo Molnar, linux-xtensa, user-mode-linux-devel,
Stefano Stabellini, adi-buildroot-devel, Thomas Gleixner,
linux-metag, linux-arm-kernel, Andrew Cooper, Joe Perches,
linuxppc-dev, David Miller
This defines __smp_xxx barriers for sparc,
for use by virtualization.
The smp_xxx barriers are removed, as they are
defined correctly by asm-generic/barrier.h
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: David S. Miller <davem@davemloft.net>
---
arch/sparc/include/asm/barrier_64.h | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/sparc/include/asm/barrier_64.h b/arch/sparc/include/asm/barrier_64.h
index 26c3f72..c9f6ee6 100644
--- a/arch/sparc/include/asm/barrier_64.h
+++ b/arch/sparc/include/asm/barrier_64.h
@@ -37,14 +37,14 @@ do { __asm__ __volatile__("ba,pt %%xcc, 1f\n\t" \
#define rmb() __asm__ __volatile__("":::"memory")
#define wmb() __asm__ __volatile__("":::"memory")
-#define smp_store_release(p, v) \
+#define __smp_store_release(p, v) \
do { \
compiletime_assert_atomic_type(*p); \
barrier(); \
WRITE_ONCE(*p, v); \
} while (0)
-#define smp_load_acquire(p) \
+#define __smp_load_acquire(p) \
({ \
typeof(*p) ___p1 = READ_ONCE(*p); \
compiletime_assert_atomic_type(*p); \
@@ -52,8 +52,8 @@ do { \
___p1; \
})
-#define smp_mb__before_atomic() barrier()
-#define smp_mb__after_atomic() barrier()
+#define __smp_mb__before_atomic() barrier()
+#define __smp_mb__after_atomic() barrier()
#include <asm-generic/barrier.h>
--
MST
* [PATCH v3 25/41] tile: define __smp_xxx
[not found] <1452426622-4471-1-git-send-email-mst@redhat.com>
` (23 preceding siblings ...)
2016-01-10 14:19 ` [PATCH v3 24/41] sparc: define __smp_xxx Michael S. Tsirkin
@ 2016-01-10 14:20 ` Michael S. Tsirkin
2016-01-10 14:20 ` [PATCH v3 26/41] xtensa: " Michael S. Tsirkin
` (20 subsequent siblings)
45 siblings, 0 replies; 48+ messages in thread
From: Michael S. Tsirkin @ 2016-01-10 14:20 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, virtualization,
Chris Metcalf, H. Peter Anvin, sparclinux, linux-arch, linux-s390,
Russell King - ARM Linux, Arnd Bergmann, x86, xen-devel,
Ingo Molnar, linux-xtensa, user-mode-linux-devel,
Stefano Stabellini, adi-buildroot-devel, Thomas Gleixner,
linux-metag, linux-arm-kernel, Andrew Cooper, Joe Perches,
linuxppc-dev, David Miller <davem>
This defines __smp_xxx barriers for tile,
for use by virtualization.
Some smp_xxx barriers are removed, as they are
defined correctly by asm-generic/barrier.h
Note: for 32 bit, keep smp_mb__after_atomic around since it's faster
than the generic implementation.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
arch/tile/include/asm/barrier.h | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/arch/tile/include/asm/barrier.h b/arch/tile/include/asm/barrier.h
index 96a42ae..d552228 100644
--- a/arch/tile/include/asm/barrier.h
+++ b/arch/tile/include/asm/barrier.h
@@ -79,11 +79,12 @@ mb_incoherent(void)
* But after the word is updated, the routine issues an "mf" before returning,
* and since it's a function call, we don't even need a compiler barrier.
*/
-#define smp_mb__before_atomic() smp_mb()
-#define smp_mb__after_atomic() do { } while (0)
+#define __smp_mb__before_atomic() __smp_mb()
+#define __smp_mb__after_atomic() do { } while (0)
+#define smp_mb__after_atomic() __smp_mb__after_atomic()
#else /* 64 bit */
-#define smp_mb__before_atomic() smp_mb()
-#define smp_mb__after_atomic() smp_mb()
+#define __smp_mb__before_atomic() __smp_mb()
+#define __smp_mb__after_atomic() __smp_mb()
#endif
#include <asm-generic/barrier.h>
--
MST
* [PATCH v3 26/41] xtensa: define __smp_xxx
[not found] <1452426622-4471-1-git-send-email-mst@redhat.com>
` (24 preceding siblings ...)
2016-01-10 14:20 ` [PATCH v3 25/41] tile: " Michael S. Tsirkin
@ 2016-01-10 14:20 ` Michael S. Tsirkin
2016-01-10 14:20 ` [PATCH v3 27/41] x86: " Michael S. Tsirkin
` (19 subsequent siblings)
45 siblings, 0 replies; 48+ messages in thread
From: Michael S. Tsirkin @ 2016-01-10 14:20 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, virtualization,
Max Filippov, H. Peter Anvin, sparclinux, linux-arch, linux-s390,
Russell King - ARM Linux, Arnd Bergmann, x86, xen-devel,
Ingo Molnar, linux-xtensa, user-mode-linux-devel,
Stefano Stabellini, adi-buildroot-devel, Thomas Gleixner,
linux-metag, linux-arm-kernel, Chris Zankel, Andrew Cooper,
Joe Perches, linuxppc-dev
This defines __smp_xxx barriers for xtensa,
for use by virtualization.
smp_xxx barriers are removed as they are
defined correctly by asm-generic/barrier.h
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
arch/xtensa/include/asm/barrier.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/xtensa/include/asm/barrier.h b/arch/xtensa/include/asm/barrier.h
index 5b88774..956596e 100644
--- a/arch/xtensa/include/asm/barrier.h
+++ b/arch/xtensa/include/asm/barrier.h
@@ -13,8 +13,8 @@
#define rmb() barrier()
#define wmb() mb()
-#define smp_mb__before_atomic() barrier()
-#define smp_mb__after_atomic() barrier()
+#define __smp_mb__before_atomic() barrier()
+#define __smp_mb__after_atomic() barrier()
#include <asm-generic/barrier.h>
--
MST
* [PATCH v3 27/41] x86: define __smp_xxx
[not found] <1452426622-4471-1-git-send-email-mst@redhat.com>
` (25 preceding siblings ...)
2016-01-10 14:20 ` [PATCH v3 26/41] xtensa: " Michael S. Tsirkin
@ 2016-01-10 14:20 ` Michael S. Tsirkin
2016-01-10 14:20 ` [PATCH v3 28/41] asm-generic: implement virt_xxx memory barriers Michael S. Tsirkin
` (18 subsequent siblings)
45 siblings, 0 replies; 48+ messages in thread
From: Michael S. Tsirkin @ 2016-01-10 14:20 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, virtualization,
H. Peter Anvin, sparclinux, linux-arch, linux-s390,
Russell King - ARM Linux, Arnd Bergmann, x86, Ingo Molnar,
xen-devel, Ingo Molnar, Borislav Petkov, linux-xtensa,
user-mode-linux-devel, Stefano Stabellini, adi-buildroot-devel,
Andy Lutomirski, Thomas Gleixner, linux-metag, linux-arm-kernel,
Andrew Cooper, Joe Perches
This defines __smp_xxx barriers for x86,
for use by virtualization.
smp_xxx barriers are removed as they are
defined correctly by asm-generic/barrier.h
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
arch/x86/include/asm/barrier.h | 31 ++++++++++++-------------------
1 file changed, 12 insertions(+), 19 deletions(-)
diff --git a/arch/x86/include/asm/barrier.h b/arch/x86/include/asm/barrier.h
index cc4c2a7..a584e1c 100644
--- a/arch/x86/include/asm/barrier.h
+++ b/arch/x86/include/asm/barrier.h
@@ -31,17 +31,10 @@
#endif
#define dma_wmb() barrier()
-#ifdef CONFIG_SMP
-#define smp_mb() mb()
-#define smp_rmb() dma_rmb()
-#define smp_wmb() barrier()
-#define smp_store_mb(var, value) do { (void)xchg(&var, value); } while (0)
-#else /* !SMP */
-#define smp_mb() barrier()
-#define smp_rmb() barrier()
-#define smp_wmb() barrier()
-#define smp_store_mb(var, value) do { WRITE_ONCE(var, value); barrier(); } while (0)
-#endif /* SMP */
+#define __smp_mb() mb()
+#define __smp_rmb() dma_rmb()
+#define __smp_wmb() barrier()
+#define __smp_store_mb(var, value) do { (void)xchg(&var, value); } while (0)
#if defined(CONFIG_X86_PPRO_FENCE)
@@ -50,31 +43,31 @@
* model and we should fall back to full barriers.
*/
-#define smp_store_release(p, v) \
+#define __smp_store_release(p, v) \
do { \
compiletime_assert_atomic_type(*p); \
- smp_mb(); \
+ __smp_mb(); \
WRITE_ONCE(*p, v); \
} while (0)
-#define smp_load_acquire(p) \
+#define __smp_load_acquire(p) \
({ \
typeof(*p) ___p1 = READ_ONCE(*p); \
compiletime_assert_atomic_type(*p); \
- smp_mb(); \
+ __smp_mb(); \
___p1; \
})
#else /* regular x86 TSO memory ordering */
-#define smp_store_release(p, v) \
+#define __smp_store_release(p, v) \
do { \
compiletime_assert_atomic_type(*p); \
barrier(); \
WRITE_ONCE(*p, v); \
} while (0)
-#define smp_load_acquire(p) \
+#define __smp_load_acquire(p) \
({ \
typeof(*p) ___p1 = READ_ONCE(*p); \
compiletime_assert_atomic_type(*p); \
@@ -85,8 +78,8 @@ do { \
#endif
/* Atomic operations are already serializing on x86 */
-#define smp_mb__before_atomic() barrier()
-#define smp_mb__after_atomic() barrier()
+#define __smp_mb__before_atomic() barrier()
+#define __smp_mb__after_atomic() barrier()
#include <asm-generic/barrier.h>
--
MST
* [PATCH v3 28/41] asm-generic: implement virt_xxx memory barriers
[not found] <1452426622-4471-1-git-send-email-mst@redhat.com>
` (26 preceding siblings ...)
2016-01-10 14:20 ` [PATCH v3 27/41] x86: " Michael S. Tsirkin
@ 2016-01-10 14:20 ` Michael S. Tsirkin
2016-01-10 14:20 ` [PATCH v3 29/41] Revert "virtio_ring: Update weak barriers to use dma_wmb/rmb" Michael S. Tsirkin
` (17 subsequent siblings)
45 siblings, 0 replies; 48+ messages in thread
From: Michael S. Tsirkin @ 2016-01-10 14:20 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, virtualization,
H. Peter Anvin, sparclinux, linux-arch, linux-s390,
Russell King - ARM Linux, Arnd Bergmann, Jonathan Corbet, x86,
xen-devel, Ingo Molnar, linux-xtensa, user-mode-linux-devel,
Stefano Stabellini, adi-buildroot-devel, Thomas Gleixner,
linux-metag, linux-arm-kernel, Andrew Cooper, linux-doc,
Joe Perches, linuxppc-dev
Guests running within virtual machines might be affected by SMP effects even if
the guest itself is compiled without SMP support. This is an artifact of
interfacing with an SMP host while running an UP kernel. Using mandatory
barriers for this use-case would be possible but is often suboptimal.
In particular, virtio uses a bunch of confusing ifdefs to work around
this, while xen just uses the mandatory barriers.
To better handle this case, low-level virt_mb() etc macros are made available.
These are implemented trivially using the low-level __smp_xxx macros,
the purpose of these wrappers is to annotate those specific cases.
These have the same effect as smp_mb() etc when SMP is enabled, but generate
identical code for SMP and non-SMP systems. For example, virtual machine guests
should use virt_mb() rather than smp_mb() when synchronizing against a
(possibly SMP) host.
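A minimal user-space sketch of the idea (the fence choices below are our
stand-ins for the arch's real __smp_xxx macros, not the kernel
definitions): virt_*() forward to the SMP variants unconditionally, so
even a guest built without SMP support emits a real barrier when
synchronizing with an SMP host.

```c
#include <stdatomic.h>

/* Hypothetical stand-ins for the arch __smp_xxx barriers. */
#define __smp_wmb()  atomic_thread_fence(memory_order_release)
#define __smp_rmb()  atomic_thread_fence(memory_order_acquire)

/* The virt_xxx wrappers simply forward, annotating guest/host cases. */
#define virt_wmb()   __smp_wmb()
#define virt_rmb()   __smp_rmb()

/* classic publish/consume between a guest driver and a host device model */
static int data, ready;

static void publish(int v)
{
	data = v;
	virt_wmb();		/* order the data store before the flag store */
	ready = 1;
}

static int consume(void)
{
	if (!ready)
		return -1;
	virt_rmb();		/* order the flag load before the data load */
	return data;
}

static int demo(void)
{
	publish(42);
	return consume();
}
```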
Suggested-by: David Miller <davem@davemloft.net>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
include/asm-generic/barrier.h | 11 +++++++++++
Documentation/memory-barriers.txt | 28 +++++++++++++++++++++++-----
2 files changed, 34 insertions(+), 5 deletions(-)
diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h
index 8752964..1cceca14 100644
--- a/include/asm-generic/barrier.h
+++ b/include/asm-generic/barrier.h
@@ -196,5 +196,16 @@ do { \
#endif
+/* Barriers for virtual machine guests when talking to an SMP host */
+#define virt_mb() __smp_mb()
+#define virt_rmb() __smp_rmb()
+#define virt_wmb() __smp_wmb()
+#define virt_read_barrier_depends() __smp_read_barrier_depends()
+#define virt_store_mb(var, value) __smp_store_mb(var, value)
+#define virt_mb__before_atomic() __smp_mb__before_atomic()
+#define virt_mb__after_atomic() __smp_mb__after_atomic()
+#define virt_store_release(p, v) __smp_store_release(p, v)
+#define virt_load_acquire(p) __smp_load_acquire(p)
+
#endif /* !__ASSEMBLY__ */
#endif /* __ASM_GENERIC_BARRIER_H */
diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
index aef9487..8f4a93a 100644
--- a/Documentation/memory-barriers.txt
+++ b/Documentation/memory-barriers.txt
@@ -1655,17 +1655,18 @@ macro is a good place to start looking.
SMP memory barriers are reduced to compiler barriers on uniprocessor compiled
systems because it is assumed that a CPU will appear to be self-consistent,
and will order overlapping accesses correctly with respect to itself.
+However, see the subsection on "Virtual Machine Guests" below.
[!] Note that SMP memory barriers _must_ be used to control the ordering of
references to shared memory on SMP systems, though the use of locking instead
is sufficient.
Mandatory barriers should not be used to control SMP effects, since mandatory
-barriers unnecessarily impose overhead on UP systems. They may, however, be
-used to control MMIO effects on accesses through relaxed memory I/O windows.
-These are required even on non-SMP systems as they affect the order in which
-memory operations appear to a device by prohibiting both the compiler and the
-CPU from reordering them.
+barriers impose unnecessary overhead on both SMP and UP systems. They may,
+however, be used to control MMIO effects on accesses through relaxed memory I/O
+windows. These barriers are required even on non-SMP systems as they affect
+the order in which memory operations appear to a device by prohibiting both the
+compiler and the CPU from reordering them.
There are some more advanced barrier functions:
@@ -2948,6 +2949,23 @@ The Alpha defines the Linux kernel's memory barrier model.
See the subsection on "Cache Coherency" above.
+VIRTUAL MACHINE GUESTS
+-------------------
+
+Guests running within virtual machines might be affected by SMP effects even if
+the guest itself is compiled without SMP support. This is an artifact of
+interfacing with an SMP host while running an UP kernel. Using mandatory
+barriers for this use-case would be possible but is often suboptimal.
+
+To handle this case optimally, low-level virt_mb() etc macros are available.
+These have the same effect as smp_mb() etc when SMP is enabled, but generate
+identical code for SMP and non-SMP systems. For example, virtual machine guests
+should use virt_mb() rather than smp_mb() when synchronizing against a
+(possibly SMP) host.
+
+These are equivalent to smp_mb() etc counterparts in all other respects,
+in particular, they do not control MMIO effects: to control
+MMIO effects, use mandatory barriers.
============
EXAMPLE USES
--
MST
* [PATCH v3 29/41] Revert "virtio_ring: Update weak barriers to use dma_wmb/rmb"
[not found] <1452426622-4471-1-git-send-email-mst@redhat.com>
` (27 preceding siblings ...)
2016-01-10 14:20 ` [PATCH v3 28/41] asm-generic: implement virt_xxx memory barriers Michael S. Tsirkin
@ 2016-01-10 14:20 ` Michael S. Tsirkin
2016-01-10 14:20 ` [PATCH v3 30/41] virtio_ring: update weak barriers to use virt_xxx Michael S. Tsirkin
` (16 subsequent siblings)
45 siblings, 0 replies; 48+ messages in thread
From: Michael S. Tsirkin @ 2016-01-10 14:20 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, Alexander Duyck,
virtualization, H. Peter Anvin, sparclinux, linux-arch,
linux-s390, Russell King - ARM Linux, Arnd Bergmann, x86,
xen-devel, Ingo Molnar, linux-xtensa, user-mode-linux-devel,
Stefano Stabellini, adi-buildroot-devel, Thomas Gleixner,
linux-metag, linux-arm-kernel, Andrew Cooper, Joe Perches,
linuxppc-dev, David
This reverts commit 9e1a27ea42691429e31f158cce6fc61bc79bb2e9.
While that commit optimizes !CONFIG_SMP, it mixes
up DMA and SMP concepts, making the code hard
to figure out.
A better way to optimize this is with the new __smp_XXX
barriers.
As a first step, go back to full rmb/wmb barriers
for !SMP.
We switch to __smp_XXX barriers in the next patch.
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Alexander Duyck <alexander.duyck@gmail.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
include/linux/virtio_ring.h | 23 +++++++++++++++++++----
1 file changed, 19 insertions(+), 4 deletions(-)
diff --git a/include/linux/virtio_ring.h b/include/linux/virtio_ring.h
index 8e50888..67e06fe 100644
--- a/include/linux/virtio_ring.h
+++ b/include/linux/virtio_ring.h
@@ -21,20 +21,19 @@
* actually quite cheap.
*/
+#ifdef CONFIG_SMP
static inline void virtio_mb(bool weak_barriers)
{
-#ifdef CONFIG_SMP
if (weak_barriers)
smp_mb();
else
-#endif
mb();
}
static inline void virtio_rmb(bool weak_barriers)
{
if (weak_barriers)
- dma_rmb();
+ smp_rmb();
else
rmb();
}
@@ -42,10 +41,26 @@ static inline void virtio_rmb(bool weak_barriers)
static inline void virtio_wmb(bool weak_barriers)
{
if (weak_barriers)
- dma_wmb();
+ smp_wmb();
else
wmb();
}
+#else
+static inline void virtio_mb(bool weak_barriers)
+{
+ mb();
+}
+
+static inline void virtio_rmb(bool weak_barriers)
+{
+ rmb();
+}
+
+static inline void virtio_wmb(bool weak_barriers)
+{
+ wmb();
+}
+#endif
struct virtio_device;
struct virtqueue;
--
MST
* [PATCH v3 30/41] virtio_ring: update weak barriers to use virt_xxx
[not found] <1452426622-4471-1-git-send-email-mst@redhat.com>
` (28 preceding siblings ...)
2016-01-10 14:20 ` [PATCH v3 29/41] Revert "virtio_ring: Update weak barriers to use dma_wmb/rmb" Michael S. Tsirkin
@ 2016-01-10 14:20 ` Michael S. Tsirkin
2016-01-10 14:20 ` [PATCH v3 31/41] sh: support 1 and 2 byte xchg Michael S. Tsirkin
` (15 subsequent siblings)
45 siblings, 0 replies; 48+ messages in thread
From: Michael S. Tsirkin @ 2016-01-10 14:20 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, Alexander Duyck,
virtualization, H. Peter Anvin, sparclinux, linux-arch,
linux-s390, Russell King - ARM Linux, Arnd Bergmann, x86,
xen-devel, Ingo Molnar, linux-xtensa, user-mode-linux-devel,
Stefano Stabellini, adi-buildroot-devel, Thomas Gleixner,
linux-metag, linux-arm-kernel, Andrew Cooper, Joe Perches,
linuxppc-dev, David
virtio ring uses smp_wmb on SMP and wmb on !SMP,
the reason for the latter being that it might be
talking to another kernel on the same SMP machine.
This is exactly what virt_xxx barriers do,
so switch to these instead of homegrown ifdef hacks.
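A hedged model of the selection logic after this change (the counters
below are purely our instrumentation, not part of the patch): weak
barriers go through virt_wmb(), while strong barriers remain mandatory
wmb() for talking to real devices.

```c
/* Instrumented stand-ins so the branch taken is observable. */
static int virt_wmb_calls, mandatory_wmb_calls;

#define virt_wmb()	((void)++virt_wmb_calls)	/* stand-in for virt_wmb() */
#define wmb()		((void)++mandatory_wmb_calls)	/* stand-in for mandatory wmb() */

static inline void virtio_wmb(int weak_barriers)
{
	if (weak_barriers)
		virt_wmb();	/* guest talking to a (possibly SMP) host */
	else
		wmb();		/* real/heterogeneous device: mandatory barrier */
}

static int demo(void)
{
	virtio_wmb(1);	/* virtio device: weak barrier suffices */
	virtio_wmb(0);	/* strong barrier requested */
	return virt_wmb_calls * 10 + mandatory_wmb_calls;
}
```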
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Alexander Duyck <alexander.duyck@gmail.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
include/linux/virtio_ring.h | 25 ++++---------------------
1 file changed, 4 insertions(+), 21 deletions(-)
diff --git a/include/linux/virtio_ring.h b/include/linux/virtio_ring.h
index 67e06fe..f3fa55b 100644
--- a/include/linux/virtio_ring.h
+++ b/include/linux/virtio_ring.h
@@ -12,7 +12,7 @@
* anyone care?
*
* For virtio_pci on SMP, we don't need to order with respect to MMIO
- * accesses through relaxed memory I/O windows, so smp_mb() et al are
+ * accesses through relaxed memory I/O windows, so virt_mb() et al are
* sufficient.
*
* For using virtio to talk to real devices (eg. other heterogeneous
@@ -21,11 +21,10 @@
* actually quite cheap.
*/
-#ifdef CONFIG_SMP
static inline void virtio_mb(bool weak_barriers)
{
if (weak_barriers)
- smp_mb();
+ virt_mb();
else
mb();
}
@@ -33,7 +32,7 @@ static inline void virtio_mb(bool weak_barriers)
static inline void virtio_rmb(bool weak_barriers)
{
if (weak_barriers)
- smp_rmb();
+ virt_rmb();
else
rmb();
}
@@ -41,26 +40,10 @@ static inline void virtio_rmb(bool weak_barriers)
static inline void virtio_wmb(bool weak_barriers)
{
if (weak_barriers)
- smp_wmb();
+ virt_wmb();
else
wmb();
}
-#else
-static inline void virtio_mb(bool weak_barriers)
-{
- mb();
-}
-
-static inline void virtio_rmb(bool weak_barriers)
-{
- rmb();
-}
-
-static inline void virtio_wmb(bool weak_barriers)
-{
- wmb();
-}
-#endif
struct virtio_device;
struct virtqueue;
--
MST
* [PATCH v3 31/41] sh: support 1 and 2 byte xchg
[not found] <1452426622-4471-1-git-send-email-mst@redhat.com>
` (29 preceding siblings ...)
2016-01-10 14:20 ` [PATCH v3 30/41] virtio_ring: update weak barriers to use virt_xxx Michael S. Tsirkin
@ 2016-01-10 14:20 ` Michael S. Tsirkin
2016-01-10 14:20 ` [PATCH v3 32/41] sh: move xchg_cmpxchg to a header by itself Michael S. Tsirkin
` (14 subsequent siblings)
45 siblings, 0 replies; 48+ messages in thread
From: Michael S. Tsirkin @ 2016-01-10 14:20 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mips, Rich Felker, linux-ia64, linux-sh, Peter Zijlstra,
virtualization, H. Peter Anvin, sparclinux, linux-arch,
linux-s390, Russell King - ARM Linux, Arnd Bergmann, x86,
xen-devel, Ingo Molnar, linux-xtensa, user-mode-linux-devel,
Stefano Stabellini, adi-buildroot-devel, Thomas Gleixner,
linux-metag, linux-arm-kernel, Andrew Cooper, Joe Perches,
linuxppc-dev, David Miller
This completes the xchg implementation for the sh architecture. Note: the
llsc variant is tricky since it only supports 4-byte atomics; the
existing implementation of 1-byte xchg is wrong: we need to do a 4-byte
cmpxchg and retry if any bytes changed in the meantime.
Write this in C for clarity.
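The approach can be modeled in user space as follows (a sketch using the
GCC __sync compare-and-swap builtin in place of the kernel's
__cmpxchg_u32; names are ours): read the containing aligned word, splice
the new byte(s) in at the right bit offset, and retry the 4-byte cmpxchg
if any byte of the word changed in the meantime.

```c
#include <stdint.h>

static inline uint32_t cas_u32(volatile uint32_t *p, uint32_t o, uint32_t n)
{
	return __sync_val_compare_and_swap(p, o, n);	/* stand-in for __cmpxchg_u32 */
}

static uint32_t xchg_small(volatile void *ptr, uint32_t x, int size)
{
	uintptr_t off = (uintptr_t)ptr % sizeof(uint32_t);
	volatile uint32_t *p = (volatile uint32_t *)((uintptr_t)ptr - off);
#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
	int bitoff = (int)(sizeof(uint32_t) - 1 - off) * 8;
#else
	int bitoff = (int)off * 8;
#endif
	uint32_t mask = ((1u << (size * 8)) - 1) << bitoff;
	uint32_t oldv, newv, ret;

	do {
		oldv = *p;				/* snapshot the whole word */
		ret = (oldv & mask) >> bitoff;		/* old value of our bytes */
		newv = (oldv & ~mask) | ((x << bitoff) & mask);
	} while (cas_u32(p, oldv, newv) != oldv);	/* retry if word changed */

	return ret;
}

/* tiny self-check: exchange one byte, return (old << 8) | new byte */
static unsigned demo_byte_xchg(void)
{
	static uint8_t buf[4] __attribute__((aligned(4))) = {1, 2, 3, 4};
	uint32_t old = xchg_small(&buf[2], 9, 1);

	return (old << 8) | buf[2];
}
```

The endianness switch mirrors the patch: the bit offset of a given byte
within its word differs between big- and little-endian, but the
byte-addressed result is the same on both.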
Suggested-by: Rich Felker <dalias@libc.org>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
arch/sh/include/asm/cmpxchg-grb.h | 22 +++++++++++++++
arch/sh/include/asm/cmpxchg-irq.h | 11 ++++++++
arch/sh/include/asm/cmpxchg-llsc.h | 58 +++++++++++++++++++++++---------------
arch/sh/include/asm/cmpxchg.h | 3 ++
4 files changed, 72 insertions(+), 22 deletions(-)
diff --git a/arch/sh/include/asm/cmpxchg-grb.h b/arch/sh/include/asm/cmpxchg-grb.h
index f848dec..2ed557b 100644
--- a/arch/sh/include/asm/cmpxchg-grb.h
+++ b/arch/sh/include/asm/cmpxchg-grb.h
@@ -23,6 +23,28 @@ static inline unsigned long xchg_u32(volatile u32 *m, unsigned long val)
return retval;
}
+static inline unsigned long xchg_u16(volatile u16 *m, unsigned long val)
+{
+ unsigned long retval;
+
+ __asm__ __volatile__ (
+ " .align 2 \n\t"
+ " mova 1f, r0 \n\t" /* r0 = end point */
+ " mov r15, r1 \n\t" /* r1 = saved sp */
+ " mov #-6, r15 \n\t" /* LOGIN */
+ " mov.w @%1, %0 \n\t" /* load old value */
+ " extu.w %0, %0 \n\t" /* extend as unsigned */
+ " mov.w %2, @%1 \n\t" /* store new value */
+ "1: mov r1, r15 \n\t" /* LOGOUT */
+ : "=&r" (retval),
+ "+r" (m),
+ "+r" (val) /* inhibit r15 overloading */
+ :
+ : "memory" , "r0", "r1");
+
+ return retval;
+}
+
static inline unsigned long xchg_u8(volatile u8 *m, unsigned long val)
{
unsigned long retval;
diff --git a/arch/sh/include/asm/cmpxchg-irq.h b/arch/sh/include/asm/cmpxchg-irq.h
index bd11f63..f888772 100644
--- a/arch/sh/include/asm/cmpxchg-irq.h
+++ b/arch/sh/include/asm/cmpxchg-irq.h
@@ -14,6 +14,17 @@ static inline unsigned long xchg_u32(volatile u32 *m, unsigned long val)
return retval;
}
+static inline unsigned long xchg_u16(volatile u16 *m, unsigned long val)
+{
+ unsigned long flags, retval;
+
+ local_irq_save(flags);
+ retval = *m;
+ *m = val;
+ local_irq_restore(flags);
+ return retval;
+}
+
static inline unsigned long xchg_u8(volatile u8 *m, unsigned long val)
{
unsigned long flags, retval;
diff --git a/arch/sh/include/asm/cmpxchg-llsc.h b/arch/sh/include/asm/cmpxchg-llsc.h
index 4713666..e754794 100644
--- a/arch/sh/include/asm/cmpxchg-llsc.h
+++ b/arch/sh/include/asm/cmpxchg-llsc.h
@@ -1,6 +1,9 @@
#ifndef __ASM_SH_CMPXCHG_LLSC_H
#define __ASM_SH_CMPXCHG_LLSC_H
+#include <linux/bitops.h>
+#include <asm/byteorder.h>
+
static inline unsigned long xchg_u32(volatile u32 *m, unsigned long val)
{
unsigned long retval;
@@ -22,29 +25,8 @@ static inline unsigned long xchg_u32(volatile u32 *m, unsigned long val)
return retval;
}
-static inline unsigned long xchg_u8(volatile u8 *m, unsigned long val)
-{
- unsigned long retval;
- unsigned long tmp;
-
- __asm__ __volatile__ (
- "1: \n\t"
- "movli.l @%2, %0 ! xchg_u8 \n\t"
- "mov %0, %1 \n\t"
- "mov %3, %0 \n\t"
- "movco.l %0, @%2 \n\t"
- "bf 1b \n\t"
- "synco \n\t"
- : "=&z"(tmp), "=&r" (retval)
- : "r" (m), "r" (val & 0xff)
- : "t", "memory"
- );
-
- return retval;
-}
-
static inline unsigned long
-__cmpxchg_u32(volatile int *m, unsigned long old, unsigned long new)
+__cmpxchg_u32(volatile u32 *m, unsigned long old, unsigned long new)
{
unsigned long retval;
unsigned long tmp;
@@ -68,4 +50,36 @@ __cmpxchg_u32(volatile int *m, unsigned long old, unsigned long new)
return retval;
}
+static inline u32 __xchg_cmpxchg(volatile void *ptr, u32 x, int size)
+{
+ int off = (unsigned long)ptr % sizeof(u32);
+ volatile u32 *p = ptr - off;
+#ifdef __BIG_ENDIAN
+ int bitoff = (sizeof(u32) - 1 - off) * BITS_PER_BYTE;
+#else
+ int bitoff = off * BITS_PER_BYTE;
+#endif
+ u32 bitmask = ((0x1 << size * BITS_PER_BYTE) - 1) << bitoff;
+ u32 oldv, newv;
+ u32 ret;
+
+ do {
+ oldv = READ_ONCE(*p);
+ ret = (oldv & bitmask) >> bitoff;
+ newv = (oldv & ~bitmask) | (x << bitoff);
+ } while (__cmpxchg_u32(p, oldv, newv) != oldv);
+
+ return ret;
+}
+
+static inline unsigned long xchg_u16(volatile u16 *m, unsigned long val)
+{
+ return __xchg_cmpxchg(m, val, sizeof *m);
+}
+
+static inline unsigned long xchg_u8(volatile u8 *m, unsigned long val)
+{
+ return __xchg_cmpxchg(m, val, sizeof *m);
+}
+
#endif /* __ASM_SH_CMPXCHG_LLSC_H */
diff --git a/arch/sh/include/asm/cmpxchg.h b/arch/sh/include/asm/cmpxchg.h
index 85c97b18..5225916 100644
--- a/arch/sh/include/asm/cmpxchg.h
+++ b/arch/sh/include/asm/cmpxchg.h
@@ -27,6 +27,9 @@ extern void __xchg_called_with_bad_pointer(void);
case 4: \
__xchg__res = xchg_u32(__xchg_ptr, x); \
break; \
+ case 2: \
+ __xchg__res = xchg_u16(__xchg_ptr, x); \
+ break; \
case 1: \
__xchg__res = xchg_u8(__xchg_ptr, x); \
break; \
--
MST
* [PATCH v3 32/41] sh: move xchg_cmpxchg to a header by itself
[not found] <1452426622-4471-1-git-send-email-mst@redhat.com>
` (30 preceding siblings ...)
2016-01-10 14:20 ` [PATCH v3 31/41] sh: support 1 and 2 byte xchg Michael S. Tsirkin
@ 2016-01-10 14:20 ` Michael S. Tsirkin
2016-01-10 14:21 ` [PATCH v3 33/41] virtio_ring: use virt_store_mb Michael S. Tsirkin
` (13 subsequent siblings)
45 siblings, 0 replies; 48+ messages in thread
From: Michael S. Tsirkin @ 2016-01-10 14:20 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mips, Rich Felker, linux-ia64, linux-sh, Peter Zijlstra,
virtualization, H. Peter Anvin, sparclinux, linux-arch,
linux-s390, Russell King - ARM Linux, Arnd Bergmann, x86,
xen-devel, Ingo Molnar, linux-xtensa, user-mode-linux-devel,
Stefano Stabellini, adi-buildroot-devel, Thomas Gleixner,
linux-metag, linux-arm-kernel, Andrew Cooper, Joe Perches,
linuxppc-dev, David Miller
It looks like future sh variants will support a 4-byte cas, which will be
used to implement 1- and 2-byte xchg.
This is exactly what we do for llsc now; move the portable part of the
code into a separate header so it is easy to reuse.
Suggested-by: Rich Felker <dalias@libc.org>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
arch/sh/include/asm/cmpxchg-llsc.h | 35 +-------------------------
arch/sh/include/asm/cmpxchg-xchg.h | 51 ++++++++++++++++++++++++++++++++++++++
2 files changed, 52 insertions(+), 34 deletions(-)
create mode 100644 arch/sh/include/asm/cmpxchg-xchg.h
diff --git a/arch/sh/include/asm/cmpxchg-llsc.h b/arch/sh/include/asm/cmpxchg-llsc.h
index e754794..fcfd322 100644
--- a/arch/sh/include/asm/cmpxchg-llsc.h
+++ b/arch/sh/include/asm/cmpxchg-llsc.h
@@ -1,9 +1,6 @@
#ifndef __ASM_SH_CMPXCHG_LLSC_H
#define __ASM_SH_CMPXCHG_LLSC_H
-#include <linux/bitops.h>
-#include <asm/byteorder.h>
-
static inline unsigned long xchg_u32(volatile u32 *m, unsigned long val)
{
unsigned long retval;
@@ -50,36 +47,6 @@ __cmpxchg_u32(volatile u32 *m, unsigned long old, unsigned long new)
return retval;
}
-static inline u32 __xchg_cmpxchg(volatile void *ptr, u32 x, int size)
-{
- int off = (unsigned long)ptr % sizeof(u32);
- volatile u32 *p = ptr - off;
-#ifdef __BIG_ENDIAN
- int bitoff = (sizeof(u32) - 1 - off) * BITS_PER_BYTE;
-#else
- int bitoff = off * BITS_PER_BYTE;
-#endif
- u32 bitmask = ((0x1 << size * BITS_PER_BYTE) - 1) << bitoff;
- u32 oldv, newv;
- u32 ret;
-
- do {
- oldv = READ_ONCE(*p);
- ret = (oldv & bitmask) >> bitoff;
- newv = (oldv & ~bitmask) | (x << bitoff);
- } while (__cmpxchg_u32(p, oldv, newv) != oldv);
-
- return ret;
-}
-
-static inline unsigned long xchg_u16(volatile u16 *m, unsigned long val)
-{
- return __xchg_cmpxchg(m, val, sizeof *m);
-}
-
-static inline unsigned long xchg_u8(volatile u8 *m, unsigned long val)
-{
- return __xchg_cmpxchg(m, val, sizeof *m);
-}
+#include <asm/cmpxchg-xchg.h>
#endif /* __ASM_SH_CMPXCHG_LLSC_H */
diff --git a/arch/sh/include/asm/cmpxchg-xchg.h b/arch/sh/include/asm/cmpxchg-xchg.h
new file mode 100644
index 0000000..7219719
--- /dev/null
+++ b/arch/sh/include/asm/cmpxchg-xchg.h
@@ -0,0 +1,51 @@
+#ifndef __ASM_SH_CMPXCHG_XCHG_H
+#define __ASM_SH_CMPXCHG_XCHG_H
+
+/*
+ * Copyright (C) 2016 Red Hat, Inc.
+ * Author: Michael S. Tsirkin <mst@redhat.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See the
+ * file "COPYING" in the main directory of this archive for more details.
+ */
+#include <linux/bitops.h>
+#include <asm/byteorder.h>
+
+/*
+ * Portable implementations of 1 and 2 byte xchg using a 4 byte cmpxchg.
+ * Note: this header isn't self-contained: before including it, __cmpxchg_u32
+ * must be defined first.
+ */
+static inline u32 __xchg_cmpxchg(volatile void *ptr, u32 x, int size)
+{
+ int off = (unsigned long)ptr % sizeof(u32);
+ volatile u32 *p = ptr - off;
+#ifdef __BIG_ENDIAN
+ int bitoff = (sizeof(u32) - 1 - off) * BITS_PER_BYTE;
+#else
+ int bitoff = off * BITS_PER_BYTE;
+#endif
+ u32 bitmask = ((0x1 << size * BITS_PER_BYTE) - 1) << bitoff;
+ u32 oldv, newv;
+ u32 ret;
+
+ do {
+ oldv = READ_ONCE(*p);
+ ret = (oldv & bitmask) >> bitoff;
+ newv = (oldv & ~bitmask) | (x << bitoff);
+ } while (__cmpxchg_u32(p, oldv, newv) != oldv);
+
+ return ret;
+}
+
+static inline unsigned long xchg_u16(volatile u16 *m, unsigned long val)
+{
+ return __xchg_cmpxchg(m, val, sizeof *m);
+}
+
+static inline unsigned long xchg_u8(volatile u8 *m, unsigned long val)
+{
+ return __xchg_cmpxchg(m, val, sizeof *m);
+}
+
+#endif /* __ASM_SH_CMPXCHG_XCHG_H */
--
MST
* [PATCH v3 33/41] virtio_ring: use virt_store_mb
[not found] <1452426622-4471-1-git-send-email-mst@redhat.com>
` (31 preceding siblings ...)
2016-01-10 14:20 ` [PATCH v3 32/41] sh: move xchg_cmpxchg to a header by itself Michael S. Tsirkin
@ 2016-01-10 14:21 ` Michael S. Tsirkin
2016-01-10 14:21 ` [PATCH v3 34/41] checkpatch.pl: add missing memory barriers Michael S. Tsirkin
` (12 subsequent siblings)
45 siblings, 0 replies; 48+ messages in thread
From: Michael S. Tsirkin @ 2016-01-10 14:21 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, virtualization,
H. Peter Anvin, sparclinux, linux-arch, linux-s390,
Russell King - ARM Linux, Arnd Bergmann, x86, xen-devel,
Ingo Molnar, linux-xtensa, user-mode-linux-devel,
Stefano Stabellini, adi-buildroot-devel, Thomas Gleixner,
linux-metag, linux-arm-kernel, Andrew Cooper, Joe Perches,
linuxppc-dev, David Miller
We need a full barrier after writing out the event index; using
virt_store_mb there seems better than open-coding it. As usual, we need a
wrapper to account for strong barriers.
It's tempting to use this in vhost as well, for that, we'll
need a variant of smp_store_mb that works on __user pointers.
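A user-space sketch of the virtio_store_mb() shape this patch introduces
(types simplified: the kernel version takes __virtio16 pointers, and on
x86 the weak path ultimately uses an xchg-based __smp_store_mb): with
weak barriers an atomic exchange doubles as store plus full barrier;
with strong barriers we fall back to a plain store followed by a
mandatory full fence.

```c
#include <stdatomic.h>
#include <stdint.h>

static inline void store_mb_u16(_Atomic uint16_t *p, uint16_t v, int weak)
{
	if (weak) {
		/* atomic exchange acts as store + full memory barrier */
		(void)atomic_exchange(p, v);
	} else {
		atomic_store_explicit(p, v, memory_order_relaxed);
		atomic_thread_fence(memory_order_seq_cst);	/* mb() stand-in */
	}
}

static unsigned demo(void)
{
	_Atomic uint16_t idx = 0;

	store_mb_u16(&idx, 3, 1);	/* weak-barrier path */
	store_mb_u16(&idx, 5, 0);	/* strong-barrier path */
	return atomic_load(&idx);
}
```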
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
include/linux/virtio_ring.h | 11 +++++++++++
drivers/virtio/virtio_ring.c | 15 +++++++++------
2 files changed, 20 insertions(+), 6 deletions(-)
diff --git a/include/linux/virtio_ring.h b/include/linux/virtio_ring.h
index f3fa55b..a156e2b 100644
--- a/include/linux/virtio_ring.h
+++ b/include/linux/virtio_ring.h
@@ -45,6 +45,17 @@ static inline void virtio_wmb(bool weak_barriers)
wmb();
}
+static inline void virtio_store_mb(bool weak_barriers,
+ __virtio16 *p, __virtio16 v)
+{
+ if (weak_barriers) {
+ virt_store_mb(*p, v);
+ } else {
+ WRITE_ONCE(*p, v);
+ mb();
+ }
+}
+
struct virtio_device;
struct virtqueue;
diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index ee663c4..e12e385 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -517,10 +517,10 @@ void *virtqueue_get_buf(struct virtqueue *_vq, unsigned int *len)
/* If we expect an interrupt for the next entry, tell host
* by writing event index and flush out the write before
* the read in the next get_buf call. */
- if (!(vq->avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT)) {
- vring_used_event(&vq->vring) = cpu_to_virtio16(_vq->vdev, vq->last_used_idx);
- virtio_mb(vq->weak_barriers);
- }
+ if (!(vq->avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT))
+ virtio_store_mb(vq->weak_barriers,
+ &vring_used_event(&vq->vring),
+ cpu_to_virtio16(_vq->vdev, vq->last_used_idx));
#ifdef DEBUG
vq->last_add_time_valid = false;
@@ -653,8 +653,11 @@ bool virtqueue_enable_cb_delayed(struct virtqueue *_vq)
}
/* TODO: tune this threshold */
bufs = (u16)(vq->avail_idx_shadow - vq->last_used_idx) * 3 / 4;
- vring_used_event(&vq->vring) = cpu_to_virtio16(_vq->vdev, vq->last_used_idx + bufs);
- virtio_mb(vq->weak_barriers);
+
+ virtio_store_mb(vq->weak_barriers,
+ &vring_used_event(&vq->vring),
+ cpu_to_virtio16(_vq->vdev, vq->last_used_idx + bufs));
+
if (unlikely((u16)(virtio16_to_cpu(_vq->vdev, vq->vring.used->idx) - vq->last_used_idx) > bufs)) {
END_USE(vq);
return false;
--
MST
* [PATCH v3 34/41] checkpatch.pl: add missing memory barriers
[not found] <1452426622-4471-1-git-send-email-mst@redhat.com>
` (32 preceding siblings ...)
2016-01-10 14:21 ` [PATCH v3 33/41] virtio_ring: use virt_store_mb Michael S. Tsirkin
@ 2016-01-10 14:21 ` Michael S. Tsirkin
2016-01-10 14:21 ` [PATCH v3 35/41] checkpatch: check for __smp outside barrier.h Michael S. Tsirkin
` (11 subsequent siblings)
45 siblings, 0 replies; 48+ messages in thread
From: Michael S. Tsirkin @ 2016-01-10 14:21 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, virtualization,
H. Peter Anvin, sparclinux, linux-arch, linux-s390,
Russell King - ARM Linux, Arnd Bergmann, x86, xen-devel,
Ingo Molnar, linux-xtensa, user-mode-linux-devel,
Stefano Stabellini, adi-buildroot-devel, Andy Whitcroft,
Thomas Gleixner, linux-metag, linux-arm-kernel, Andrew Cooper,
Joe Perches, linuxppc-dev, David Miller
SMP-only barriers were missing from checkpatch.pl.
Refactor the code slightly to make adding more variants easier.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
scripts/checkpatch.pl | 20 +++++++++++++++++++-
1 file changed, 19 insertions(+), 1 deletion(-)
diff --git a/scripts/checkpatch.pl b/scripts/checkpatch.pl
index 2b3c228..97b8b62 100755
--- a/scripts/checkpatch.pl
+++ b/scripts/checkpatch.pl
@@ -5116,7 +5116,25 @@ sub process {
}
}
# check for memory barriers without a comment.
- if ($line =~ /\b(mb|rmb|wmb|read_barrier_depends|smp_mb|smp_rmb|smp_wmb|smp_read_barrier_depends)\(/) {
+
+ my $barriers = qr{
+ mb|
+ rmb|
+ wmb|
+ read_barrier_depends
+ }x;
+ my $smp_barriers = qr{
+ store_release|
+ load_acquire|
+ store_mb|
+ ($barriers)
+ }x;
+ my $all_barriers = qr{
+ $barriers|
+ smp_($smp_barriers)
+ }x;
+
+ if ($line =~ /\b($all_barriers)\s*\(/) {
if (!ctx_has_comment($first_line, $linenr)) {
WARN("MEMORY_BARRIER",
"memory barrier without comment\n" . $herecurr);
--
MST
* [PATCH v3 35/41] checkpatch: check for __smp outside barrier.h
[not found] <1452426622-4471-1-git-send-email-mst@redhat.com>
` (33 preceding siblings ...)
2016-01-10 14:21 ` [PATCH v3 34/41] checkpatch.pl: add missing memory barriers Michael S. Tsirkin
@ 2016-01-10 14:21 ` Michael S. Tsirkin
2016-01-10 14:21 ` [PATCH v3 36/41] checkpatch: add virt barriers Michael S. Tsirkin
` (10 subsequent siblings)
45 siblings, 0 replies; 48+ messages in thread
From: Michael S. Tsirkin @ 2016-01-10 14:21 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, virtualization,
H. Peter Anvin, sparclinux, linux-arch, linux-s390,
Russell King - ARM Linux, Arnd Bergmann, x86, xen-devel,
Ingo Molnar, linux-xtensa, user-mode-linux-devel,
Stefano Stabellini, adi-buildroot-devel, Andy Whitcroft,
Thomas Gleixner, linux-metag, linux-arm-kernel, Andrew Cooper,
Joe Perches, linuxppc-dev, David Miller
Introduction of the __smp barriers cleans up a bunch of duplicate code, but
it gives people an additional handle onto a "new" set of barriers - just
because they're prefixed with __* unfortunately doesn't stop anyone from
using them (as happened with other arch stuff before).
Add a checkpatch test so that such use triggers a warning.
Reported-by: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
scripts/checkpatch.pl | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/scripts/checkpatch.pl b/scripts/checkpatch.pl
index 97b8b62..a96adcb 100755
--- a/scripts/checkpatch.pl
+++ b/scripts/checkpatch.pl
@@ -5141,6 +5141,16 @@ sub process {
}
}
+ my $underscore_smp_barriers = qr{__smp_($smp_barriers)}x;
+
+ if ($realfile !~ m@^include/asm-generic/@ &&
+ $realfile !~ m@/barrier\.h$@ &&
+ $line =~ m/\b($underscore_smp_barriers)\s*\(/ &&
+ $line !~ m/^.\s*\#\s*define\s+($underscore_smp_barriers)\s*\(/) {
+ WARN("MEMORY_BARRIER",
+ "__smp memory barriers shouldn't be used outside barrier.h and asm-generic\n" . $herecurr);
+ }
+
# check for waitqueue_active without a comment.
if ($line =~ /\bwaitqueue_active\s*\(/) {
if (!ctx_has_comment($first_line, $linenr)) {
--
MST
* [PATCH v3 36/41] checkpatch: add virt barriers
[not found] <1452426622-4471-1-git-send-email-mst@redhat.com>
` (34 preceding siblings ...)
2016-01-10 14:21 ` [PATCH v3 35/41] checkpatch: check for __smp outside barrier.h Michael S. Tsirkin
@ 2016-01-10 14:21 ` Michael S. Tsirkin
2016-01-10 14:21 ` [PATCH v3 37/41] xenbus: use virt_xxx barriers Michael S. Tsirkin
` (9 subsequent siblings)
45 siblings, 0 replies; 48+ messages in thread
From: Michael S. Tsirkin @ 2016-01-10 14:21 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, virtualization,
H. Peter Anvin, sparclinux, linux-arch, linux-s390,
Russell King - ARM Linux, Arnd Bergmann, x86, xen-devel,
Ingo Molnar, linux-xtensa, user-mode-linux-devel,
Stefano Stabellini, adi-buildroot-devel, Andy Whitcroft,
Thomas Gleixner, linux-metag, linux-arm-kernel, Andrew Cooper,
Joe Perches, linuxppc-dev, David Miller
Add the virt_ barriers to the list of barriers checked for the
presence of a comment.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
scripts/checkpatch.pl | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/scripts/checkpatch.pl b/scripts/checkpatch.pl
index a96adcb..5ca272b 100755
--- a/scripts/checkpatch.pl
+++ b/scripts/checkpatch.pl
@@ -5131,7 +5131,8 @@ sub process {
}x;
my $all_barriers = qr{
$barriers|
- smp_($smp_barriers)
+ smp_($smp_barriers)|
+ virt_($smp_barriers)
}x;
if ($line =~ /\b($all_barriers)\s*\(/) {
--
MST
* [PATCH v3 37/41] xenbus: use virt_xxx barriers
[not found] <1452426622-4471-1-git-send-email-mst@redhat.com>
` (35 preceding siblings ...)
2016-01-10 14:21 ` [PATCH v3 36/41] checkpatch: add virt barriers Michael S. Tsirkin
@ 2016-01-10 14:21 ` Michael S. Tsirkin
2016-01-10 14:21 ` [PATCH v3 38/41] xen/io: " Michael S. Tsirkin
` (8 subsequent siblings)
45 siblings, 0 replies; 48+ messages in thread
From: Michael S. Tsirkin @ 2016-01-10 14:21 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, virtualization,
H. Peter Anvin, sparclinux, Boris Ostrovsky, linux-arch,
linux-s390, Russell King - ARM Linux, Arnd Bergmann, x86,
xen-devel, Ingo Molnar, linux-xtensa, user-mode-linux-devel,
Stefano Stabellini, adi-buildroot-devel, Thomas Gleixner,
linux-metag, linux-arm-kernel, Konrad Rzeszutek Wilk,
Andrew Cooper, David Vrabel <david.vrab>
drivers/xen/xenbus/xenbus_comms.c uses
full memory barriers to communicate with the other side.
For guests compiled with CONFIG_SMP, smp_wmb and smp_mb
would be sufficient, so mb() and wmb() here are only needed if
a non-SMP guest runs on an SMP host.
Switch to virt_xxx barriers which serve this exact purpose.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: David Vrabel <david.vrabel@citrix.com>
---
drivers/xen/xenbus/xenbus_comms.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/xen/xenbus/xenbus_comms.c b/drivers/xen/xenbus/xenbus_comms.c
index fdb0f33..ecdecce 100644
--- a/drivers/xen/xenbus/xenbus_comms.c
+++ b/drivers/xen/xenbus/xenbus_comms.c
@@ -123,14 +123,14 @@ int xb_write(const void *data, unsigned len)
avail = len;
/* Must write data /after/ reading the consumer index. */
- mb();
+ virt_mb();
memcpy(dst, data, avail);
data += avail;
len -= avail;
/* Other side must not see new producer until data is there. */
- wmb();
+ virt_wmb();
intf->req_prod += avail;
/* Implies mb(): other side will see the updated producer. */
@@ -180,14 +180,14 @@ int xb_read(void *data, unsigned len)
avail = len;
/* Must read data /after/ reading the producer index. */
- rmb();
+ virt_rmb();
memcpy(data, src, avail);
data += avail;
len -= avail;
/* Other side must not see free space until we've copied out */
- mb();
+ virt_mb();
intf->rsp_cons += avail;
pr_debug("Finished read of %i bytes (%i to go)\n", avail, len);
--
MST
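The ordering xb_write() needs - read the consumer index, then write the data, then publish the new producer index - can be sketched with C11 fences standing in for the virt_* barriers. The ring layout, function name, and the mapping of virt_mb()/virt_wmb() to seq_cst/release fences are assumptions for illustration, not Xen's actual ABI or the kernel's implementation:

```c
#include <assert.h>
#include <stdatomic.h>

#define RING_SIZE 16

struct ring {
	unsigned char buf[RING_SIZE];
	_Atomic unsigned prod;	/* producer index, shared with the other side */
	_Atomic unsigned cons;	/* consumer index, advanced by the other side */
};

/* Stand-ins for the kernel's virt_* barriers: they must order accesses
 * against the other side of the ring even on a UP guest, but need not
 * be the heavyweight mandatory (I/O) barriers. */
#define virt_mb()  atomic_thread_fence(memory_order_seq_cst)
#define virt_wmb() atomic_thread_fence(memory_order_release)

/* Simplified xb_write()-style producer step for one byte.
 * Returns 1 on success, 0 if the ring is full. */
int ring_put(struct ring *r, unsigned char c)
{
	unsigned prod = atomic_load_explicit(&r->prod, memory_order_relaxed);
	unsigned cons = atomic_load_explicit(&r->cons, memory_order_relaxed);

	if (prod - cons == RING_SIZE)
		return 0;	/* no free space */

	/* Must write data /after/ reading the consumer index. */
	virt_mb();
	r->buf[prod % RING_SIZE] = c;

	/* Other side must not see the new producer until data is there. */
	virt_wmb();
	atomic_store_explicit(&r->prod, prod + 1, memory_order_relaxed);
	return 1;
}
```

The two fences map one-to-one onto the mb()->virt_mb() and wmb()->virt_wmb() replacements in the diff above.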
* [PATCH v3 38/41] xen/io: use virt_xxx barriers
[not found] <1452426622-4471-1-git-send-email-mst@redhat.com>
` (36 preceding siblings ...)
2016-01-10 14:21 ` [PATCH v3 37/41] xenbus: use virt_xxx barriers Michael S. Tsirkin
@ 2016-01-10 14:21 ` Michael S. Tsirkin
2016-01-10 14:21 ` [PATCH v3 39/41] xen/events: " Michael S. Tsirkin
` (7 subsequent siblings)
45 siblings, 0 replies; 48+ messages in thread
From: Michael S. Tsirkin @ 2016-01-10 14:21 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, virtualization,
H. Peter Anvin, sparclinux, Boris Ostrovsky, linux-arch,
linux-s390, Russell King - ARM Linux, Arnd Bergmann, x86,
xen-devel, Ingo Molnar, linux-xtensa, user-mode-linux-devel,
Stefano Stabellini, adi-buildroot-devel, Thomas Gleixner,
linux-metag, linux-arm-kernel, Konrad Rzeszutek Wilk,
Andrew Cooper, David Vrabel <david.vrab>
include/xen/interface/io/ring.h uses
full memory barriers to communicate with the other side.
For guests compiled with CONFIG_SMP, smp_wmb and smp_mb
would be sufficient, so mb() and wmb() here are only needed if
a non-SMP guest runs on an SMP host.
Switch to virt_xxx barriers which serve this exact purpose.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: David Vrabel <david.vrabel@citrix.com>
---
include/xen/interface/io/ring.h | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/include/xen/interface/io/ring.h b/include/xen/interface/io/ring.h
index 7dc685b..21f4fbd 100644
--- a/include/xen/interface/io/ring.h
+++ b/include/xen/interface/io/ring.h
@@ -208,12 +208,12 @@ struct __name##_back_ring { \
#define RING_PUSH_REQUESTS(_r) do { \
- wmb(); /* back sees requests /before/ updated producer index */ \
+ virt_wmb(); /* back sees requests /before/ updated producer index */ \
(_r)->sring->req_prod = (_r)->req_prod_pvt; \
} while (0)
#define RING_PUSH_RESPONSES(_r) do { \
- wmb(); /* front sees responses /before/ updated producer index */ \
+ virt_wmb(); /* front sees responses /before/ updated producer index */ \
(_r)->sring->rsp_prod = (_r)->rsp_prod_pvt; \
} while (0)
@@ -250,9 +250,9 @@ struct __name##_back_ring { \
#define RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(_r, _notify) do { \
RING_IDX __old = (_r)->sring->req_prod; \
RING_IDX __new = (_r)->req_prod_pvt; \
- wmb(); /* back sees requests /before/ updated producer index */ \
+ virt_wmb(); /* back sees requests /before/ updated producer index */ \
(_r)->sring->req_prod = __new; \
- mb(); /* back sees new requests /before/ we check req_event */ \
+ virt_mb(); /* back sees new requests /before/ we check req_event */ \
(_notify) = ((RING_IDX)(__new - (_r)->sring->req_event) < \
(RING_IDX)(__new - __old)); \
} while (0)
@@ -260,9 +260,9 @@ struct __name##_back_ring { \
#define RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(_r, _notify) do { \
RING_IDX __old = (_r)->sring->rsp_prod; \
RING_IDX __new = (_r)->rsp_prod_pvt; \
- wmb(); /* front sees responses /before/ updated producer index */ \
+ virt_wmb(); /* front sees responses /before/ updated producer index */ \
(_r)->sring->rsp_prod = __new; \
- mb(); /* front sees new responses /before/ we check rsp_event */ \
+ virt_mb(); /* front sees new responses /before/ we check rsp_event */ \
(_notify) = ((RING_IDX)(__new - (_r)->sring->rsp_event) < \
(RING_IDX)(__new - __old)); \
} while (0)
@@ -271,7 +271,7 @@ struct __name##_back_ring { \
(_work_to_do) = RING_HAS_UNCONSUMED_REQUESTS(_r); \
if (_work_to_do) break; \
(_r)->sring->req_event = (_r)->req_cons + 1; \
- mb(); \
+ virt_mb(); \
(_work_to_do) = RING_HAS_UNCONSUMED_REQUESTS(_r); \
} while (0)
@@ -279,7 +279,7 @@ struct __name##_back_ring { \
(_work_to_do) = RING_HAS_UNCONSUMED_RESPONSES(_r); \
if (_work_to_do) break; \
(_r)->sring->rsp_event = (_r)->rsp_cons + 1; \
- mb(); \
+ virt_mb(); \
(_work_to_do) = RING_HAS_UNCONSUMED_RESPONSES(_r); \
} while (0)
--
MST
* [PATCH v3 39/41] xen/events: use virt_xxx barriers
[not found] <1452426622-4471-1-git-send-email-mst@redhat.com>
` (37 preceding siblings ...)
2016-01-10 14:21 ` [PATCH v3 38/41] xen/io: " Michael S. Tsirkin
@ 2016-01-10 14:21 ` Michael S. Tsirkin
2016-01-10 14:22 ` [PATCH v3 40/41] s390: use generic memory barriers Michael S. Tsirkin
` (6 subsequent siblings)
45 siblings, 0 replies; 48+ messages in thread
From: Michael S. Tsirkin @ 2016-01-10 14:21 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, virtualization,
Stefano Stabellini, H. Peter Anvin, sparclinux, Boris Ostrovsky,
linux-arch, linux-s390, Russell King - ARM Linux, Arnd Bergmann,
x86, xen-devel, Ingo Molnar, linux-xtensa, user-mode-linux-devel,
Stefano Stabellini, adi-buildroot-devel, Thomas Gleixner,
linux-metag, linux-arm-kernel, Wei Liu,
Konrad Rzeszutek Wilk <konrad.wil>
drivers/xen/events/events_fifo.c uses rmb() to communicate with the
other side.
For guests compiled with CONFIG_SMP, smp_rmb would be sufficient, so
rmb() here is only needed if a non-SMP guest runs on an SMP host.
Switch to the virt_rmb barrier which serves this exact purpose.
Pull in asm/barrier.h here to make sure the file is self-contained.
Suggested-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
drivers/xen/events/events_fifo.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/xen/events/events_fifo.c b/drivers/xen/events/events_fifo.c
index 96a1b8d..eff2b88 100644
--- a/drivers/xen/events/events_fifo.c
+++ b/drivers/xen/events/events_fifo.c
@@ -41,6 +41,7 @@
#include <linux/percpu.h>
#include <linux/cpu.h>
+#include <asm/barrier.h>
#include <asm/sync_bitops.h>
#include <asm/xen/hypercall.h>
#include <asm/xen/hypervisor.h>
@@ -296,7 +297,7 @@ static void consume_one_event(unsigned cpu,
* control block.
*/
if (head == 0) {
- rmb(); /* Ensure word is up-to-date before reading head. */
+ virt_rmb(); /* Ensure word is up-to-date before reading head. */
head = control_block->head[priority];
}
--
MST
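The consumer side is the mirror image of the producer: check the flag or index the other side published, then issue a read barrier before reading the data it guards. A minimal C11 sketch, with virt_rmb() assumed to map to an acquire fence and the structure layout invented for illustration (it is not the real event-channel control block):

```c
#include <assert.h>
#include <stdatomic.h>

/* Stand-in for the kernel's virt_rmb(): orders reads against the other
 * side of a guest/host channel, even on a UP guest. */
#define virt_rmb() atomic_thread_fence(memory_order_acquire)

struct event_words {
	_Atomic unsigned ready;	/* set by the other side after filling head[] */
	unsigned head[4];	/* per-priority queue heads (illustrative) */
};

/* Simplified consume_one_event()-style read: the check of `ready` must
 * complete before head[] is read, hence the read barrier in between. */
unsigned read_head(struct event_words *w, int priority)
{
	while (!atomic_load_explicit(&w->ready, memory_order_relaxed))
		;	/* wait for the other side */

	virt_rmb();	/* ensure head[] is up to date before reading it */
	return w->head[priority];
}
```

This corresponds to the rmb()->virt_rmb() switch in the hunk above: an smp_rmb would vanish on CONFIG_SMP=n, yet the ordering against the host is still required.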
* [PATCH v3 40/41] s390: use generic memory barriers
[not found] <1452426622-4471-1-git-send-email-mst@redhat.com>
` (38 preceding siblings ...)
2016-01-10 14:21 ` [PATCH v3 39/41] xen/events: " Michael S. Tsirkin
@ 2016-01-10 14:22 ` Michael S. Tsirkin
2016-01-10 14:22 ` [PATCH v3 41/41] s390: more efficient smp barriers Michael S. Tsirkin
` (5 subsequent siblings)
45 siblings, 0 replies; 48+ messages in thread
From: Michael S. Tsirkin @ 2016-01-10 14:22 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, Heiko Carstens,
virtualization, H. Peter Anvin, sparclinux, Ingo Molnar,
linux-arch, linux-s390, Davidlohr Bueso, Russell King - ARM Linux,
Arnd Bergmann, x86, Christian Borntraeger, xen-devel, Ingo Molnar,
linux-xtensa, user-mode-linux-devel, Stefano Stabellini,
adi-buildroot-devel, Martin Schwidefsky, Thomas Gleixner,
linux-metag
The s390 kernel is SMP to 99.99%; we just didn't bother with a
non-SMP variant of the memory barriers. If the generic header
is used, we get the non-SMP version for free. It will save a
small amount of text space for CONFIG_SMP=n.
Suggested-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
arch/s390/include/asm/barrier.h | 3 ---
1 file changed, 3 deletions(-)
diff --git a/arch/s390/include/asm/barrier.h b/arch/s390/include/asm/barrier.h
index fbd25b2..4d26fa4 100644
--- a/arch/s390/include/asm/barrier.h
+++ b/arch/s390/include/asm/barrier.h
@@ -29,9 +29,6 @@
#define __smp_mb() mb()
#define __smp_rmb() rmb()
#define __smp_wmb() wmb()
-#define smp_mb() __smp_mb()
-#define smp_rmb() __smp_rmb()
-#define smp_wmb() __smp_wmb()
#define __smp_store_release(p, v) \
do { \
--
MST
* [PATCH v3 41/41] s390: more efficient smp barriers
[not found] <1452426622-4471-1-git-send-email-mst@redhat.com>
` (39 preceding siblings ...)
2016-01-10 14:22 ` [PATCH v3 40/41] s390: use generic memory barriers Michael S. Tsirkin
@ 2016-01-10 14:22 ` Michael S. Tsirkin
[not found] ` <1452426622-4471-40-git-send-email-mst@redhat.com>
` (4 subsequent siblings)
45 siblings, 0 replies; 48+ messages in thread
From: Michael S. Tsirkin @ 2016-01-10 14:22 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, Heiko Carstens,
virtualization, H. Peter Anvin, sparclinux, Ingo Molnar,
linux-arch, linux-s390, Davidlohr Bueso, Russell King - ARM Linux,
Arnd Bergmann, x86, Christian Borntraeger, xen-devel, Ingo Molnar,
linux-xtensa, user-mode-linux-devel, Stefano Stabellini,
adi-buildroot-devel, Martin Schwidefsky, Thomas Gleixner,
linux-metag
As per: lkml.kernel.org/r/20150921112252.3c2937e1@mschwide
atomics imply a barrier on s390, so s390 should change
smp_mb__before_atomic and smp_mb__after_atomic to barrier() instead of
smp_mb() and hence should not use the generic versions.
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Suggested-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
arch/s390/include/asm/barrier.h | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/s390/include/asm/barrier.h b/arch/s390/include/asm/barrier.h
index 4d26fa4..5c8db3c 100644
--- a/arch/s390/include/asm/barrier.h
+++ b/arch/s390/include/asm/barrier.h
@@ -45,6 +45,9 @@ do { \
___p1; \
})
+#define __smp_mb__before_atomic() barrier()
+#define __smp_mb__after_atomic() barrier()
+
#include <asm-generic/barrier.h>
#endif /* __ASM_BARRIER_H */
--
MST
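The reasoning here - when an architecture's atomic RMW operations already imply a full memory barrier, smp_mb__before_atomic()/smp_mb__after_atomic() only need to stop compiler reordering - can be sketched in C11. The barrier() definition is the usual kernel compiler-barrier idiom; the seq_cst fetch-add stands in for a fully ordered atomic op and is an illustration of the pattern, not s390's actual implementation:

```c
#include <assert.h>
#include <stdatomic.h>

/* Compiler barrier, as defined in the kernel. */
#define barrier() __asm__ __volatile__("" ::: "memory")

/* On architectures whose atomic RMW ops already imply a full memory
 * barrier (as the commit says s390's do), the before/after hooks need
 * only prevent compiler reordering, not emit a CPU barrier: */
#define smp_mb__before_atomic() barrier()
#define smp_mb__after_atomic()  barrier()

static _Atomic int refcount = 1;

/* Typical usage site: surrounding ordering is requested around an
 * atomic increment; here the increment itself provides it. */
int get_and_check(void)
{
	int old;

	smp_mb__before_atomic();
	/* seq_cst RMW: fully ordered on its own in the C11 model. */
	old = atomic_fetch_add_explicit(&refcount, 1, memory_order_seq_cst);
	smp_mb__after_atomic();
	return old;
}
```

Defining the __smp_mb__before_atomic()/__smp_mb__after_atomic() overrides before including asm-generic/barrier.h, as the patch does, is exactly what lets s390 avoid the generic smp_mb() versions.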
* Re: [PATCH v3 39/41] xen/events: use virt_xxx barriers
[not found] ` <1452426622-4471-40-git-send-email-mst@redhat.com>
@ 2016-01-11 11:12 ` David Vrabel
0 siblings, 0 replies; 48+ messages in thread
From: David Vrabel @ 2016-01-11 11:12 UTC (permalink / raw)
To: Michael S. Tsirkin, linux-kernel
Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, virtualization,
Stefano Stabellini, H. Peter Anvin, sparclinux, Boris Ostrovsky,
linux-arch, linux-s390, Russell King - ARM Linux, Arnd Bergmann,
x86, xen-devel, Ingo Molnar, linux-xtensa, user-mode-linux-devel,
Stefano Stabellini, adi-buildroot-devel, Thomas Gleixner,
linux-metag, linux-arm-kernel, Wei Liu,
Konrad Rzeszutek Wilk <konrad.wil>
On 10/01/16 14:21, Michael S. Tsirkin wrote:
> drivers/xen/events/events_fifo.c uses rmb() to communicate with the
> other side.
>
> For guests compiled with CONFIG_SMP, smp_rmb would be sufficient, so
> rmb() here is only needed if a non-SMP guest runs on an SMP host.
>
> Switch to the virt_rmb barrier which serves this exact purpose.
>
> Pull in asm/barrier.h here to make sure the file is self-contained.
>
> Suggested-by: David Vrabel <david.vrabel@citrix.com>
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: David Vrabel <david.vrabel@citrix.com>
David
* Re: [PATCH v3 00/41] arch: barrier cleanup + barriers for virt
[not found] <1452426622-4471-1-git-send-email-mst@redhat.com>
` (41 preceding siblings ...)
[not found] ` <1452426622-4471-40-git-send-email-mst@redhat.com>
@ 2016-01-12 12:50 ` Peter Zijlstra
[not found] ` <1452426622-4471-14-git-send-email-mst@redhat.com>
` (2 subsequent siblings)
45 siblings, 0 replies; 48+ messages in thread
From: Peter Zijlstra @ 2016-01-12 12:50 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: linux-mips, linux-ia64, linux-sh, virtualization, H. Peter Anvin,
sparclinux, linux-arch, linux-s390, Russell King - ARM Linux,
Arnd Bergmann, x86, xen-devel, Ingo Molnar, linux-xtensa,
user-mode-linux-devel, Stefano Stabellini, adi-buildroot-devel,
Thomas Gleixner, linux-metag, linux-arm-kernel, Andrew Cooper,
linux-kernel, Joe Perches, linuxppc-dev, David Miller
On Sun, Jan 10, 2016 at 04:16:22PM +0200, Michael S. Tsirkin wrote:
> I parked this in vhost tree for now, though the inclusion of patch 1 from tip
> creates a merge conflict - but one that is trivial to resolve.
>
> So I intend to just merge it all through my tree, including the
> duplicate patch, and assume conflict will be resolved.
>
> I would really appreciate some feedback on arch bits (especially the x86 bits),
> and acks for merging this through the vhost tree.
Thanks for doing this, looks good to me.
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
* Re: [PATCH v3 13/41] x86: reuse asm-generic/barrier.h
[not found] ` <1452426622-4471-14-git-send-email-mst@redhat.com>
@ 2016-01-12 14:10 ` Thomas Gleixner
0 siblings, 0 replies; 48+ messages in thread
From: Thomas Gleixner @ 2016-01-12 14:10 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, virtualization,
H. Peter Anvin, sparclinux, linux-arch, linux-s390,
Russell King - ARM Linux, Arnd Bergmann, x86, Ingo Molnar,
xen-devel, Ingo Molnar, Borislav Petkov, linux-xtensa,
user-mode-linux-devel, Stefano Stabellini, adi-buildroot-devel,
Andy Lutomirski, linux-metag, linux-arm-kernel, Andrew Cooper,
linux-kernel, Joe Perches
On Sun, 10 Jan 2016, Michael S. Tsirkin wrote:
> As on most architectures, on x86 read_barrier_depends and
> smp_read_barrier_depends are empty. Drop the local definitions and pull
> the generic ones from asm-generic/barrier.h instead: they are identical.
>
> This is in preparation to refactoring this code area.
>
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> Acked-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
* Re: [PATCH v3 27/41] x86: define __smp_xxx
[not found] ` <1452426622-4471-28-git-send-email-mst@redhat.com>
@ 2016-01-12 14:11 ` Thomas Gleixner
0 siblings, 0 replies; 48+ messages in thread
From: Thomas Gleixner @ 2016-01-12 14:11 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra, virtualization,
H. Peter Anvin, sparclinux, linux-arch, linux-s390,
Russell King - ARM Linux, Arnd Bergmann, x86, Ingo Molnar,
xen-devel, Ingo Molnar, Borislav Petkov, linux-xtensa,
user-mode-linux-devel, Stefano Stabellini, adi-buildroot-devel,
Andy Lutomirski, linux-metag, linux-arm-kernel, Andrew Cooper,
linux-kernel, Joe Perches
On Sun, 10 Jan 2016, Michael S. Tsirkin wrote:
> This defines __smp_xxx barriers for x86,
> for use by virtualization.
>
> smp_xxx barriers are removed as they are
> defined correctly by asm-generic/barriers.h
>
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> Acked-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
* Re: [PATCH v3 01/41] lcoking/barriers, arch: Use smp barriers in smp_store_release()
2016-01-10 14:16 ` [PATCH v3 01/41] lcoking/barriers, arch: Use smp barriers in smp_store_release() Michael S. Tsirkin
@ 2016-01-12 16:28 ` Paul E. McKenney
[not found] ` <20160112162844.GD3818@linux.vnet.ibm.com>
1 sibling, 0 replies; 48+ messages in thread
From: Paul E. McKenney @ 2016-01-12 16:28 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra,
Benjamin Herrenschmidt, Heiko Carstens, virtualization,
Paul Mackerras, H. Peter Anvin, sparclinux, Ingo Molnar,
linux-arch, linux-s390, Davidlohr Bueso, Russell King - ARM Linux,
Arnd Bergmann, Davidlohr Bueso, Michael Ellerman, x86,
Christian Borntraeger, Linus Torvalds, xen-devel, Ingo Molnar,
linux-xtensa, user-mode-linux-devel
On Sun, Jan 10, 2016 at 04:16:32PM +0200, Michael S. Tsirkin wrote:
> From: Davidlohr Bueso <dave@stgolabs.net>
>
> With commit b92b8b35a2e ("locking/arch: Rename set_mb() to smp_store_mb()")
> it was made clear that the context of this call (and thus set_mb)
> is strictly for CPU ordering, as opposed to IO. As such all archs
> should use the smp variant of mb(), respecting the semantics and
> saving a mandatory barrier on UP.
>
> Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> Cc: <linux-arch@vger.kernel.org>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
> Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
> Cc: Linus Torvalds <torvalds@linux-foundation.org>
> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Tony Luck <tony.luck@intel.com>
> Cc: dave@stgolabs.net
> Link: http://lkml.kernel.org/r/1445975631-17047-3-git-send-email-dave@stgolabs.net
> Signed-off-by: Ingo Molnar <mingo@kernel.org>
Aside from a need for s/lcoking/locking/ in the subject line:
Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> ---
> arch/ia64/include/asm/barrier.h | 2 +-
> arch/powerpc/include/asm/barrier.h | 2 +-
> arch/s390/include/asm/barrier.h | 2 +-
> include/asm-generic/barrier.h | 2 +-
> 4 files changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/arch/ia64/include/asm/barrier.h b/arch/ia64/include/asm/barrier.h
> index df896a1..209c4b8 100644
> --- a/arch/ia64/include/asm/barrier.h
> +++ b/arch/ia64/include/asm/barrier.h
> @@ -77,7 +77,7 @@ do { \
> ___p1; \
> })
>
> -#define smp_store_mb(var, value) do { WRITE_ONCE(var, value); mb(); } while (0)
> +#define smp_store_mb(var, value) do { WRITE_ONCE(var, value); smp_mb(); } while (0)
>
> /*
> * The group barrier in front of the rsm & ssm are necessary to ensure
> diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
> index 0eca6ef..a7af5fb 100644
> --- a/arch/powerpc/include/asm/barrier.h
> +++ b/arch/powerpc/include/asm/barrier.h
> @@ -34,7 +34,7 @@
> #define rmb() __asm__ __volatile__ ("sync" : : : "memory")
> #define wmb() __asm__ __volatile__ ("sync" : : : "memory")
>
> -#define smp_store_mb(var, value) do { WRITE_ONCE(var, value); mb(); } while (0)
> +#define smp_store_mb(var, value) do { WRITE_ONCE(var, value); smp_mb(); } while (0)
>
> #ifdef __SUBARCH_HAS_LWSYNC
> # define SMPWMB LWSYNC
> diff --git a/arch/s390/include/asm/barrier.h b/arch/s390/include/asm/barrier.h
> index d68e11e..7ffd0b1 100644
> --- a/arch/s390/include/asm/barrier.h
> +++ b/arch/s390/include/asm/barrier.h
> @@ -36,7 +36,7 @@
> #define smp_mb__before_atomic() smp_mb()
> #define smp_mb__after_atomic() smp_mb()
>
> -#define smp_store_mb(var, value) do { WRITE_ONCE(var, value); mb(); } while (0)
> +#define smp_store_mb(var, value) do { WRITE_ONCE(var, value); smp_mb(); } while (0)
>
> #define smp_store_release(p, v) \
> do { \
> diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h
> index b42afad..0f45f93 100644
> --- a/include/asm-generic/barrier.h
> +++ b/include/asm-generic/barrier.h
> @@ -93,7 +93,7 @@
> #endif /* CONFIG_SMP */
>
> #ifndef smp_store_mb
> -#define smp_store_mb(var, value) do { WRITE_ONCE(var, value); mb(); } while (0)
> +#define smp_store_mb(var, value) do { WRITE_ONCE(var, value); smp_mb(); } while (0)
> #endif
>
> #ifndef smp_mb__before_atomic
> --
> MST
>
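Patch 01's point is that smp_store_mb() is a plain store followed by a CPU-ordering barrier, typically used in set-my-flag-then-check-the-peer handshakes (as in waitqueue code), so the UP-safe smp_mb() suffices and the mandatory mb() was overkill. A hedged C11 sketch of the generic definition after the patch, assuming smp_mb() maps to a seq_cst fence and with hypothetical flag names:

```c
#include <assert.h>
#include <stdatomic.h>

/* Userspace stand-ins for the kernel primitives involved. */
#define WRITE_ONCE(x, v) atomic_store_explicit(&(x), (v), memory_order_relaxed)
#define READ_ONCE(x)     atomic_load_explicit(&(x), memory_order_relaxed)
#define smp_mb()         atomic_thread_fence(memory_order_seq_cst)

/* The generic definition after this patch: an ordinary store followed
 * by an SMP barrier -- CPU ordering only, no mandatory (I/O) barrier. */
#define smp_store_mb(var, value) \
	do { WRITE_ONCE(var, value); smp_mb(); } while (0)

static _Atomic int want_sleep, wake_pending;

/* Classic sleeper-side step: store our flag, then read the peer's.
 * The full barrier prevents the store-load reordering that could let
 * both sides miss each other's flags and deadlock. */
int prepare_to_wait(void)
{
	smp_store_mb(want_sleep, 1);	/* store, then full barrier */
	return READ_ONCE(wake_pending);	/* now safe to check the peer */
}
```

On a UP kernel smp_mb() collapses to a compiler barrier, which is the whole saving the commit message describes.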
* Re: [PATCH v3 05/41] powerpc: reuse asm-generic/barrier.h
[not found] ` <1452426622-4471-6-git-send-email-mst@redhat.com>
@ 2016-01-12 16:31 ` Paul E. McKenney
0 siblings, 0 replies; 48+ messages in thread
From: Paul E. McKenney @ 2016-01-12 16:31 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra,
Benjamin Herrenschmidt, virtualization, Paul Mackerras,
H. Peter Anvin, sparclinux, Ingo Molnar, linux-arch, linux-s390,
Davidlohr Bueso, Russell King - ARM Linux, Arnd Bergmann,
Michael Ellerman, x86, xen-devel, Ingo Molnar, linux-xtensa,
user-mode-linux-devel, Stefano Stabellini, adi-buildroot-devel,
Thomas Gleixner, linux-metag
On Sun, Jan 10, 2016 at 04:17:09PM +0200, Michael S. Tsirkin wrote:
> On powerpc read_barrier_depends, smp_read_barrier_depends
> smp_store_mb(), smp_mb__before_atomic and smp_mb__after_atomic match the
> asm-generic variants exactly. Drop the local definitions and pull in
> asm-generic/barrier.h instead.
>
> This is in preparation to refactoring this code area.
>
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> Acked-by: Arnd Bergmann <arnd@arndb.de>
Looks sane to me.
Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> ---
> arch/powerpc/include/asm/barrier.h | 9 ++-------
> 1 file changed, 2 insertions(+), 7 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
> index a7af5fb..980ad0c 100644
> --- a/arch/powerpc/include/asm/barrier.h
> +++ b/arch/powerpc/include/asm/barrier.h
> @@ -34,8 +34,6 @@
> #define rmb() __asm__ __volatile__ ("sync" : : : "memory")
> #define wmb() __asm__ __volatile__ ("sync" : : : "memory")
>
> -#define smp_store_mb(var, value) do { WRITE_ONCE(var, value); smp_mb(); } while (0)
> -
> #ifdef __SUBARCH_HAS_LWSYNC
> # define SMPWMB LWSYNC
> #else
> @@ -60,9 +58,6 @@
> #define smp_wmb() barrier()
> #endif /* CONFIG_SMP */
>
> -#define read_barrier_depends() do { } while (0)
> -#define smp_read_barrier_depends() do { } while (0)
> -
> /*
> * This is a barrier which prevents following instructions from being
> * started until the value of the argument x is known. For example, if
> @@ -87,8 +82,8 @@ do { \
> ___p1; \
> })
>
> -#define smp_mb__before_atomic() smp_mb()
> -#define smp_mb__after_atomic() smp_mb()
> #define smp_mb__before_spinlock() smp_mb()
>
> +#include <asm-generic/barrier.h>
> +
> #endif /* _ASM_POWERPC_BARRIER_H */
> --
> MST
>
* Re: [PATCH v3 01/41] lcoking/barriers, arch: Use smp barriers in smp_store_release()
[not found] ` <20160112162844.GD3818@linux.vnet.ibm.com>
@ 2016-01-12 18:40 ` Michael S. Tsirkin
0 siblings, 0 replies; 48+ messages in thread
From: Michael S. Tsirkin @ 2016-01-12 18:40 UTC (permalink / raw)
To: Paul E. McKenney
Cc: linux-mips, linux-ia64, linux-sh, Peter Zijlstra,
Benjamin Herrenschmidt, Heiko Carstens, virtualization,
Paul Mackerras, H. Peter Anvin, sparclinux, Ingo Molnar,
linux-arch, linux-s390, Davidlohr Bueso, Russell King - ARM Linux,
Arnd Bergmann, Davidlohr Bueso, Michael Ellerman, x86,
Christian Borntraeger, Linus Torvalds, xen-devel, Ingo Molnar,
linux-xtensa, user-mode-linux-devel
On Tue, Jan 12, 2016 at 08:28:44AM -0800, Paul E. McKenney wrote:
> On Sun, Jan 10, 2016 at 04:16:32PM +0200, Michael S. Tsirkin wrote:
> > From: Davidlohr Bueso <dave@stgolabs.net>
> >
> > With commit b92b8b35a2e ("locking/arch: Rename set_mb() to smp_store_mb()")
> > it was made clear that the context of this call (and thus set_mb)
> > is strictly for CPU ordering, as opposed to IO. As such all archs
> > should use the smp variant of mb(), respecting the semantics and
> > saving a mandatory barrier on UP.
> >
> > Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
> > Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> > Cc: <linux-arch@vger.kernel.org>
> > Cc: Andrew Morton <akpm@linux-foundation.org>
> > Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
> > Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
> > Cc: Linus Torvalds <torvalds@linux-foundation.org>
> > Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> > Cc: Peter Zijlstra <peterz@infradead.org>
> > Cc: Thomas Gleixner <tglx@linutronix.de>
> > Cc: Tony Luck <tony.luck@intel.com>
> > Cc: dave@stgolabs.net
> > Link: http://lkml.kernel.org/r/1445975631-17047-3-git-send-email-dave@stgolabs.net
> > Signed-off-by: Ingo Molnar <mingo@kernel.org>
>
> Aside from a need for s/lcoking/locking/ in the subject line:
>
> Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Thanks!
Ingo already put this in the tip tree exactly like this, though,
and I need a copy in my tree to avoid breaking bisect,
so I will probably keep it exactly the same to avoid confusion.
> > ---
> > arch/ia64/include/asm/barrier.h | 2 +-
> > arch/powerpc/include/asm/barrier.h | 2 +-
> > arch/s390/include/asm/barrier.h | 2 +-
> > include/asm-generic/barrier.h | 2 +-
> > 4 files changed, 4 insertions(+), 4 deletions(-)
> >
> > diff --git a/arch/ia64/include/asm/barrier.h b/arch/ia64/include/asm/barrier.h
> > index df896a1..209c4b8 100644
> > --- a/arch/ia64/include/asm/barrier.h
> > +++ b/arch/ia64/include/asm/barrier.h
> > @@ -77,7 +77,7 @@ do { \
> > ___p1; \
> > })
> >
> > -#define smp_store_mb(var, value) do { WRITE_ONCE(var, value); mb(); } while (0)
> > +#define smp_store_mb(var, value) do { WRITE_ONCE(var, value); smp_mb(); } while (0)
> >
> > /*
> > * The group barrier in front of the rsm & ssm are necessary to ensure
> > diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
> > index 0eca6ef..a7af5fb 100644
> > --- a/arch/powerpc/include/asm/barrier.h
> > +++ b/arch/powerpc/include/asm/barrier.h
> > @@ -34,7 +34,7 @@
> > #define rmb() __asm__ __volatile__ ("sync" : : : "memory")
> > #define wmb() __asm__ __volatile__ ("sync" : : : "memory")
> >
> > -#define smp_store_mb(var, value) do { WRITE_ONCE(var, value); mb(); } while (0)
> > +#define smp_store_mb(var, value) do { WRITE_ONCE(var, value); smp_mb(); } while (0)
> >
> > #ifdef __SUBARCH_HAS_LWSYNC
> > # define SMPWMB LWSYNC
> > diff --git a/arch/s390/include/asm/barrier.h b/arch/s390/include/asm/barrier.h
> > index d68e11e..7ffd0b1 100644
> > --- a/arch/s390/include/asm/barrier.h
> > +++ b/arch/s390/include/asm/barrier.h
> > @@ -36,7 +36,7 @@
> > #define smp_mb__before_atomic() smp_mb()
> > #define smp_mb__after_atomic() smp_mb()
> >
> > -#define smp_store_mb(var, value) do { WRITE_ONCE(var, value); mb(); } while (0)
> > +#define smp_store_mb(var, value) do { WRITE_ONCE(var, value); smp_mb(); } while (0)
> >
> > #define smp_store_release(p, v) \
> > do { \
> > diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h
> > index b42afad..0f45f93 100644
> > --- a/include/asm-generic/barrier.h
> > +++ b/include/asm-generic/barrier.h
> > @@ -93,7 +93,7 @@
> > #endif /* CONFIG_SMP */
> >
> > #ifndef smp_store_mb
> > -#define smp_store_mb(var, value) do { WRITE_ONCE(var, value); mb(); } while (0)
> > +#define smp_store_mb(var, value) do { WRITE_ONCE(var, value); smp_mb(); } while (0)
> > #endif
> >
> > #ifndef smp_mb__before_atomic
> > --
> > MST
> >
Thread overview: 48+ messages (newest: 2016-01-12 18:40 UTC)
-- links below jump to the message on this page --
[not found] <1452426622-4471-1-git-send-email-mst@redhat.com>
2016-01-10 14:16 ` [PATCH v3 01/41] lcoking/barriers, arch: Use smp barriers in smp_store_release() Michael S. Tsirkin
2016-01-12 16:28 ` Paul E. McKenney
[not found] ` <20160112162844.GD3818@linux.vnet.ibm.com>
2016-01-12 18:40 ` Michael S. Tsirkin
2016-01-10 14:16 ` [PATCH v3 02/41] asm-generic: guard smp_store_release/load_acquire Michael S. Tsirkin
2016-01-10 14:16 ` [PATCH v3 03/41] ia64: rename nop->iosapic_nop Michael S. Tsirkin
2016-01-10 14:17 ` [PATCH v3 04/41] ia64: reuse asm-generic/barrier.h Michael S. Tsirkin
2016-01-10 14:17 ` [PATCH v3 05/41] powerpc: " Michael S. Tsirkin
2016-01-10 14:17 ` [PATCH v3 06/41] s390: " Michael S. Tsirkin
2016-01-10 14:17 ` [PATCH v3 07/41] sparc: " Michael S. Tsirkin
2016-01-10 14:17 ` [PATCH v3 08/41] arm: " Michael S. Tsirkin
2016-01-10 14:17 ` [PATCH v3 09/41] arm64: " Michael S. Tsirkin
2016-01-10 14:17 ` [PATCH v3 10/41] metag: " Michael S. Tsirkin
2016-01-10 14:18 ` [PATCH v3 11/41] mips: " Michael S. Tsirkin
2016-01-10 14:18 ` [PATCH v3 12/41] x86/um: " Michael S. Tsirkin
2016-01-10 14:18 ` [PATCH v3 13/41] x86: " Michael S. Tsirkin
2016-01-10 14:18 ` [PATCH v3 14/41] asm-generic: add __smp_xxx wrappers Michael S. Tsirkin
2016-01-10 14:18 ` [PATCH v3 15/41] powerpc: define __smp_xxx Michael S. Tsirkin
2016-01-10 14:18 ` [PATCH v3 16/41] arm64: " Michael S. Tsirkin
2016-01-10 14:18 ` [PATCH v3 17/41] arm: " Michael S. Tsirkin
2016-01-10 14:19 ` [PATCH v3 18/41] blackfin: " Michael S. Tsirkin
2016-01-10 14:19 ` [PATCH v3 19/41] ia64: " Michael S. Tsirkin
2016-01-10 14:19 ` [PATCH v3 20/41] metag: " Michael S. Tsirkin
2016-01-10 14:19 ` [PATCH v3 21/41] mips: " Michael S. Tsirkin
2016-01-10 14:19 ` [PATCH v3 22/41] s390: " Michael S. Tsirkin
2016-01-10 14:19 ` [PATCH v3 23/41] sh: define __smp_xxx, fix smp_store_mb for !SMP Michael S. Tsirkin
2016-01-10 14:19 ` [PATCH v3 24/41] sparc: define __smp_xxx Michael S. Tsirkin
2016-01-10 14:20 ` [PATCH v3 25/41] tile: " Michael S. Tsirkin
2016-01-10 14:20 ` [PATCH v3 26/41] xtensa: " Michael S. Tsirkin
2016-01-10 14:20 ` [PATCH v3 27/41] x86: " Michael S. Tsirkin
2016-01-10 14:20 ` [PATCH v3 28/41] asm-generic: implement virt_xxx memory barriers Michael S. Tsirkin
2016-01-10 14:20 ` [PATCH v3 29/41] Revert "virtio_ring: Update weak barriers to use dma_wmb/rmb" Michael S. Tsirkin
2016-01-10 14:20 ` [PATCH v3 30/41] virtio_ring: update weak barriers to use virt_xxx Michael S. Tsirkin
2016-01-10 14:20 ` [PATCH v3 31/41] sh: support 1 and 2 byte xchg Michael S. Tsirkin
2016-01-10 14:20 ` [PATCH v3 32/41] sh: move xchg_cmpxchg to a header by itself Michael S. Tsirkin
2016-01-10 14:21 ` [PATCH v3 33/41] virtio_ring: use virt_store_mb Michael S. Tsirkin
2016-01-10 14:21 ` [PATCH v3 34/41] checkpatch.pl: add missing memory barriers Michael S. Tsirkin
2016-01-10 14:21 ` [PATCH v3 35/41] checkpatch: check for __smp outside barrier.h Michael S. Tsirkin
2016-01-10 14:21 ` [PATCH v3 36/41] checkpatch: add virt barriers Michael S. Tsirkin
2016-01-10 14:21 ` [PATCH v3 37/41] xenbus: use virt_xxx barriers Michael S. Tsirkin
2016-01-10 14:21 ` [PATCH v3 38/41] xen/io: " Michael S. Tsirkin
2016-01-10 14:21 ` [PATCH v3 39/41] xen/events: " Michael S. Tsirkin
2016-01-10 14:22 ` [PATCH v3 40/41] s390: use generic memory barriers Michael S. Tsirkin
2016-01-10 14:22 ` [PATCH v3 41/41] s390: more efficient smp barriers Michael S. Tsirkin
[not found] ` <1452426622-4471-40-git-send-email-mst@redhat.com>
2016-01-11 11:12 ` [PATCH v3 39/41] xen/events: use virt_xxx barriers David Vrabel
2016-01-12 12:50 ` [PATCH v3 00/41] arch: barrier cleanup + barriers for virt Peter Zijlstra
[not found] ` <1452426622-4471-14-git-send-email-mst@redhat.com>
2016-01-12 14:10 ` [PATCH v3 13/41] x86: reuse asm-generic/barrier.h Thomas Gleixner
[not found] ` <1452426622-4471-28-git-send-email-mst@redhat.com>
2016-01-12 14:11 ` [PATCH v3 27/41] x86: define __smp_xxx Thomas Gleixner
[not found] ` <1452426622-4471-6-git-send-email-mst@redhat.com>
2016-01-12 16:31 ` [PATCH v3 05/41] powerpc: reuse asm-generic/barrier.h Paul E. McKenney