* [PATCH] xchg/alpha: Add unconditional memory barrier to cmpxchg
@ 2018-02-20 18:45 Andrea Parri
2018-02-20 19:38 ` Paul E. McKenney
` (2 more replies)
0 siblings, 3 replies; 5+ messages in thread
From: Andrea Parri @ 2018-02-20 18:45 UTC (permalink / raw)
To: Ingo Molnar
Cc: Andrea Parri, Will Deacon, Paul E. McKenney, Alan Stern,
Richard Henderson, Ivan Kokshaysky, Matt Turner, linux-alpha,
linux-kernel
Continuing along with the fight against smp_read_barrier_depends() [1]
(or rather, against its improper use), add an unconditional barrier to
cmpxchg. This guarantees that dependency ordering is preserved when a
dependency is headed by an unsuccessful cmpxchg. As it turns out, the
change could enable further simplification of LKMM as proposed in [2].
[1] https://marc.info/?l=linux-kernel&m=150884953419377&w=2
https://marc.info/?l=linux-kernel&m=150884946319353&w=2
https://marc.info/?l=linux-kernel&m=151215810824468&w=2
https://marc.info/?l=linux-kernel&m=151215816324484&w=2
[2] https://marc.info/?l=linux-kernel&m=151881978314872&w=2
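To make the pattern concrete, here is a sketch (the names below are
made up for illustration and are not part of the patch; kernel context
assumed):

	struct item {
		int data;
	};

	struct item *published;	/* shared pointer, initially NULL */

	/* CPU0: initialize the item, then publish it. */
	void producer(struct item *p)
	{
		p->data = 42;
		smp_store_release(&published, p);
	}

	/*
	 * CPU1: once 'published' is non-NULL the cmpxchg() fails, yet
	 * the caller still dereferences the returned pointer.  Without
	 * a barrier on the failure path, Alpha may not honor the
	 * address dependency and could observe a stale p->data; the
	 * unconditional barrier restores dependency ordering.
	 */
	int consumer(void)
	{
		struct item *p = cmpxchg(&published, NULL, NULL);

		return p ? p->data : -1;
	}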
Signed-off-by: Andrea Parri <parri.andrea@gmail.com>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Alan Stern <stern@rowland.harvard.edu>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Matt Turner <mattst88@gmail.com>
Cc: linux-alpha@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
---
arch/alpha/include/asm/xchg.h | 15 +++++++--------
1 file changed, 7 insertions(+), 8 deletions(-)
diff --git a/arch/alpha/include/asm/xchg.h b/arch/alpha/include/asm/xchg.h
index 68dfb3cb71454..e2660866ce972 100644
--- a/arch/alpha/include/asm/xchg.h
+++ b/arch/alpha/include/asm/xchg.h
@@ -128,10 +128,9 @@ ____xchg(, volatile void *ptr, unsigned long x, int size)
* store NEW in MEM. Return the initial value in MEM. Success is
* indicated by comparing RETURN with OLD.
*
- * The memory barrier should be placed in SMP only when we actually
- * make the change. If we don't change anything (so if the returned
- * prev is equal to old) then we aren't acquiring anything new and
- * we don't need any memory barrier as far I can tell.
+ * The memory barrier is placed in SMP unconditionally, in order to
+ * guarantee that dependency ordering is preserved when a dependency
+ * is headed by an unsuccessful operation.
*/
static inline unsigned long
@@ -150,8 +149,8 @@ ____cmpxchg(_u8, volatile char *m, unsigned char old, unsigned char new)
" or %1,%2,%2\n"
" stq_c %2,0(%4)\n"
" beq %2,3f\n"
- __ASM__MB
"2:\n"
+ __ASM__MB
".subsection 2\n"
"3: br 1b\n"
".previous"
@@ -177,8 +176,8 @@ ____cmpxchg(_u16, volatile short *m, unsigned short old, unsigned short new)
" or %1,%2,%2\n"
" stq_c %2,0(%4)\n"
" beq %2,3f\n"
- __ASM__MB
"2:\n"
+ __ASM__MB
".subsection 2\n"
"3: br 1b\n"
".previous"
@@ -200,8 +199,8 @@ ____cmpxchg(_u32, volatile int *m, int old, int new)
" mov %4,%1\n"
" stl_c %1,%2\n"
" beq %1,3f\n"
- __ASM__MB
"2:\n"
+ __ASM__MB
".subsection 2\n"
"3: br 1b\n"
".previous"
@@ -223,8 +222,8 @@ ____cmpxchg(_u64, volatile long *m, unsigned long old, unsigned long new)
" mov %4,%1\n"
" stq_c %1,%2\n"
" beq %1,3f\n"
- __ASM__MB
"2:\n"
+ __ASM__MB
".subsection 2\n"
"3: br 1b\n"
".previous"
--
2.7.4
* Re: [PATCH] xchg/alpha: Add unconditional memory barrier to cmpxchg
2018-02-20 18:45 [PATCH] xchg/alpha: Add unconditional memory barrier to cmpxchg Andrea Parri
@ 2018-02-20 19:38 ` Paul E. McKenney
2018-02-21 10:49 ` [tip:locking/urgent] locking/xchg/alpha: Add unconditional memory barrier to cmpxchg() tip-bot for Andrea Parri
2018-02-21 11:21 ` [PATCH] xchg/alpha: Add unconditional memory barrier to cmpxchg Will Deacon
2 siblings, 0 replies; 5+ messages in thread
From: Paul E. McKenney @ 2018-02-20 19:38 UTC (permalink / raw)
To: Andrea Parri
Cc: Ingo Molnar, Will Deacon, Alan Stern, Richard Henderson,
Ivan Kokshaysky, Matt Turner, linux-alpha, linux-kernel
On Tue, Feb 20, 2018 at 07:45:56PM +0100, Andrea Parri wrote:
> Continuing along with the fight against smp_read_barrier_depends() [1]
> (or rather, against its improper use), add an unconditional barrier to
> cmpxchg. This guarantees that dependency ordering is preserved when a
> dependency is headed by an unsuccessful cmpxchg. As it turns out, the
> change could enable further simplification of LKMM as proposed in [2].
>
> [1] https://marc.info/?l=linux-kernel&m=150884953419377&w=2
> https://marc.info/?l=linux-kernel&m=150884946319353&w=2
> https://marc.info/?l=linux-kernel&m=151215810824468&w=2
> https://marc.info/?l=linux-kernel&m=151215816324484&w=2
>
> [2] https://marc.info/?l=linux-kernel&m=151881978314872&w=2
>
> Signed-off-by: Andrea Parri <parri.andrea@gmail.com>
> Acked-by: Peter Zijlstra <peterz@infradead.org>
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Acked-by: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> Cc: Alan Stern <stern@rowland.harvard.edu>
> Cc: Richard Henderson <rth@twiddle.net>
> Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
> Cc: Matt Turner <mattst88@gmail.com>
> Cc: linux-alpha@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> ---
> arch/alpha/include/asm/xchg.h | 15 +++++++--------
> 1 file changed, 7 insertions(+), 8 deletions(-)
>
> diff --git a/arch/alpha/include/asm/xchg.h b/arch/alpha/include/asm/xchg.h
> index 68dfb3cb71454..e2660866ce972 100644
> --- a/arch/alpha/include/asm/xchg.h
> +++ b/arch/alpha/include/asm/xchg.h
> @@ -128,10 +128,9 @@ ____xchg(, volatile void *ptr, unsigned long x, int size)
> * store NEW in MEM. Return the initial value in MEM. Success is
> * indicated by comparing RETURN with OLD.
> *
> - * The memory barrier should be placed in SMP only when we actually
> - * make the change. If we don't change anything (so if the returned
> - * prev is equal to old) then we aren't acquiring anything new and
> - * we don't need any memory barrier as far I can tell.
> + * The memory barrier is placed in SMP unconditionally, in order to
> + * guarantee that dependency ordering is preserved when a dependency
> + * is headed by an unsuccessful operation.
> */
>
> static inline unsigned long
> @@ -150,8 +149,8 @@ ____cmpxchg(_u8, volatile char *m, unsigned char old, unsigned char new)
> " or %1,%2,%2\n"
> " stq_c %2,0(%4)\n"
> " beq %2,3f\n"
> - __ASM__MB
> "2:\n"
> + __ASM__MB
> ".subsection 2\n"
> "3: br 1b\n"
> ".previous"
> @@ -177,8 +176,8 @@ ____cmpxchg(_u16, volatile short *m, unsigned short old, unsigned short new)
> " or %1,%2,%2\n"
> " stq_c %2,0(%4)\n"
> " beq %2,3f\n"
> - __ASM__MB
> "2:\n"
> + __ASM__MB
> ".subsection 2\n"
> "3: br 1b\n"
> ".previous"
> @@ -200,8 +199,8 @@ ____cmpxchg(_u32, volatile int *m, int old, int new)
> " mov %4,%1\n"
> " stl_c %1,%2\n"
> " beq %1,3f\n"
> - __ASM__MB
> "2:\n"
> + __ASM__MB
> ".subsection 2\n"
> "3: br 1b\n"
> ".previous"
> @@ -223,8 +222,8 @@ ____cmpxchg(_u64, volatile long *m, unsigned long old, unsigned long new)
> " mov %4,%1\n"
> " stq_c %1,%2\n"
> " beq %1,3f\n"
> - __ASM__MB
> "2:\n"
> + __ASM__MB
> ".subsection 2\n"
> "3: br 1b\n"
> ".previous"
> --
> 2.7.4
>
* [tip:locking/urgent] locking/xchg/alpha: Add unconditional memory barrier to cmpxchg()
2018-02-20 18:45 [PATCH] xchg/alpha: Add unconditional memory barrier to cmpxchg Andrea Parri
2018-02-20 19:38 ` Paul E. McKenney
@ 2018-02-21 10:49 ` tip-bot for Andrea Parri
2018-02-21 11:21 ` [PATCH] xchg/alpha: Add unconditional memory barrier to cmpxchg Will Deacon
2 siblings, 0 replies; 5+ messages in thread
From: tip-bot for Andrea Parri @ 2018-02-21 10:49 UTC (permalink / raw)
To: linux-tip-commits
Cc: mingo, tglx, stern, torvalds, parri.andrea, rth, hpa, mattst88,
linux-kernel, peterz, ink, will.deacon, paulmck
Commit-ID: cb13b424e986aed68d74cbaec3449ea23c50e167
Gitweb: https://git.kernel.org/tip/cb13b424e986aed68d74cbaec3449ea23c50e167
Author: Andrea Parri <parri.andrea@gmail.com>
AuthorDate: Tue, 20 Feb 2018 19:45:56 +0100
Committer: Ingo Molnar <mingo@kernel.org>
CommitDate: Wed, 21 Feb 2018 10:12:29 +0100
locking/xchg/alpha: Add unconditional memory barrier to cmpxchg()
Continuing along with the fight against smp_read_barrier_depends() [1]
(or rather, against its improper use), add an unconditional barrier to
cmpxchg. This guarantees that dependency ordering is preserved when a
dependency is headed by an unsuccessful cmpxchg. As it turns out, the
change could enable further simplification of LKMM as proposed in [2].
[1] https://marc.info/?l=linux-kernel&m=150884953419377&w=2
https://marc.info/?l=linux-kernel&m=150884946319353&w=2
https://marc.info/?l=linux-kernel&m=151215810824468&w=2
https://marc.info/?l=linux-kernel&m=151215816324484&w=2
[2] https://marc.info/?l=linux-kernel&m=151881978314872&w=2
Signed-off-by: Andrea Parri <parri.andrea@gmail.com>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Alan Stern <stern@rowland.harvard.edu>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-alpha@vger.kernel.org
Link: http://lkml.kernel.org/r/1519152356-4804-1-git-send-email-parri.andrea@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
arch/alpha/include/asm/xchg.h | 15 +++++++--------
1 file changed, 7 insertions(+), 8 deletions(-)
diff --git a/arch/alpha/include/asm/xchg.h b/arch/alpha/include/asm/xchg.h
index 68dfb3c..e266086 100644
--- a/arch/alpha/include/asm/xchg.h
+++ b/arch/alpha/include/asm/xchg.h
@@ -128,10 +128,9 @@ ____xchg(, volatile void *ptr, unsigned long x, int size)
* store NEW in MEM. Return the initial value in MEM. Success is
* indicated by comparing RETURN with OLD.
*
- * The memory barrier should be placed in SMP only when we actually
- * make the change. If we don't change anything (so if the returned
- * prev is equal to old) then we aren't acquiring anything new and
- * we don't need any memory barrier as far I can tell.
+ * The memory barrier is placed in SMP unconditionally, in order to
+ * guarantee that dependency ordering is preserved when a dependency
+ * is headed by an unsuccessful operation.
*/
static inline unsigned long
@@ -150,8 +149,8 @@ ____cmpxchg(_u8, volatile char *m, unsigned char old, unsigned char new)
" or %1,%2,%2\n"
" stq_c %2,0(%4)\n"
" beq %2,3f\n"
- __ASM__MB
"2:\n"
+ __ASM__MB
".subsection 2\n"
"3: br 1b\n"
".previous"
@@ -177,8 +176,8 @@ ____cmpxchg(_u16, volatile short *m, unsigned short old, unsigned short new)
" or %1,%2,%2\n"
" stq_c %2,0(%4)\n"
" beq %2,3f\n"
- __ASM__MB
"2:\n"
+ __ASM__MB
".subsection 2\n"
"3: br 1b\n"
".previous"
@@ -200,8 +199,8 @@ ____cmpxchg(_u32, volatile int *m, int old, int new)
" mov %4,%1\n"
" stl_c %1,%2\n"
" beq %1,3f\n"
- __ASM__MB
"2:\n"
+ __ASM__MB
".subsection 2\n"
"3: br 1b\n"
".previous"
@@ -223,8 +222,8 @@ ____cmpxchg(_u64, volatile long *m, unsigned long old, unsigned long new)
" mov %4,%1\n"
" stq_c %1,%2\n"
" beq %1,3f\n"
- __ASM__MB
"2:\n"
+ __ASM__MB
".subsection 2\n"
"3: br 1b\n"
".previous"
* Re: [PATCH] xchg/alpha: Add unconditional memory barrier to cmpxchg
2018-02-20 18:45 [PATCH] xchg/alpha: Add unconditional memory barrier to cmpxchg Andrea Parri
2018-02-20 19:38 ` Paul E. McKenney
2018-02-21 10:49 ` [tip:locking/urgent] locking/xchg/alpha: Add unconditional memory barrier to cmpxchg() tip-bot for Andrea Parri
@ 2018-02-21 11:21 ` Will Deacon
2018-02-21 13:24 ` Andrea Parri
2 siblings, 1 reply; 5+ messages in thread
From: Will Deacon @ 2018-02-21 11:21 UTC (permalink / raw)
To: Andrea Parri
Cc: Ingo Molnar, Paul E. McKenney, Alan Stern, Richard Henderson,
Ivan Kokshaysky, Matt Turner, linux-alpha, linux-kernel
Hi Andrea,
On Tue, Feb 20, 2018 at 07:45:56PM +0100, Andrea Parri wrote:
> Continuing along with the fight against smp_read_barrier_depends() [1]
> (or rather, against its improper use), add an unconditional barrier to
> cmpxchg. This guarantees that dependency ordering is preserved when a
> dependency is headed by an unsuccessful cmpxchg. As it turns out, the
> change could enable further simplification of LKMM as proposed in [2].
>
> [1] https://marc.info/?l=linux-kernel&m=150884953419377&w=2
> https://marc.info/?l=linux-kernel&m=150884946319353&w=2
> https://marc.info/?l=linux-kernel&m=151215810824468&w=2
> https://marc.info/?l=linux-kernel&m=151215816324484&w=2
>
> [2] https://marc.info/?l=linux-kernel&m=151881978314872&w=2
>
> Signed-off-by: Andrea Parri <parri.andrea@gmail.com>
> Acked-by: Peter Zijlstra <peterz@infradead.org>
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> Cc: Alan Stern <stern@rowland.harvard.edu>
> Cc: Richard Henderson <rth@twiddle.net>
> Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
> Cc: Matt Turner <mattst88@gmail.com>
> Cc: linux-alpha@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> ---
> arch/alpha/include/asm/xchg.h | 15 +++++++--------
> 1 file changed, 7 insertions(+), 8 deletions(-)
>
> diff --git a/arch/alpha/include/asm/xchg.h b/arch/alpha/include/asm/xchg.h
> index 68dfb3cb71454..e2660866ce972 100644
> --- a/arch/alpha/include/asm/xchg.h
> +++ b/arch/alpha/include/asm/xchg.h
> @@ -128,10 +128,9 @@ ____xchg(, volatile void *ptr, unsigned long x, int size)
> * store NEW in MEM. Return the initial value in MEM. Success is
> * indicated by comparing RETURN with OLD.
> *
> - * The memory barrier should be placed in SMP only when we actually
> - * make the change. If we don't change anything (so if the returned
> - * prev is equal to old) then we aren't acquiring anything new and
> - * we don't need any memory barrier as far I can tell.
> + * The memory barrier is placed in SMP unconditionally, in order to
> + * guarantee that dependency ordering is preserved when a dependency
> + * is headed by an unsuccessful operation.
> */
>
> static inline unsigned long
> @@ -150,8 +149,8 @@ ____cmpxchg(_u8, volatile char *m, unsigned char old, unsigned char new)
> " or %1,%2,%2\n"
> " stq_c %2,0(%4)\n"
> " beq %2,3f\n"
> - __ASM__MB
> "2:\n"
> + __ASM__MB
> ".subsection 2\n"
> "3: br 1b\n"
> ".previous"
It might be better just to add smp_read_barrier_depends() into the cmpxchg
macro, then remove all of the __ASM__MB stuff.
That said, I don't actually understand how the Alpha cmpxchg or xchg
implementations satisfy the memory model, since they only appear to have
a barrier after the operation.
So MP using xchg:

WRITE_ONCE(x, 1)
xchg(y, 1)

smp_load_acquire(y) == 1
READ_ONCE(x) == 0

would be allowed. What am I missing?
Since I'm in the mood for dumb questions, do we need to care about
this_cpu_cmpxchg? I'm sure I've seen code that allows concurrent access to
per-cpu variables, but the asm-generic implementation of this_cpu_cmpxchg
doesn't use READ_ONCE.
Will
* Re: [PATCH] xchg/alpha: Add unconditional memory barrier to cmpxchg
2018-02-21 11:21 ` [PATCH] xchg/alpha: Add unconditional memory barrier to cmpxchg Will Deacon
@ 2018-02-21 13:24 ` Andrea Parri
0 siblings, 0 replies; 5+ messages in thread
From: Andrea Parri @ 2018-02-21 13:24 UTC (permalink / raw)
To: Will Deacon
Cc: Ingo Molnar, Paul E. McKenney, Alan Stern, Richard Henderson,
Ivan Kokshaysky, Matt Turner, linux-alpha, linux-kernel
On Wed, Feb 21, 2018 at 11:21:38AM +0000, Will Deacon wrote:
> Hi Andrea,
>
> On Tue, Feb 20, 2018 at 07:45:56PM +0100, Andrea Parri wrote:
> > Continuing along with the fight against smp_read_barrier_depends() [1]
> > (or rather, against its improper use), add an unconditional barrier to
> > cmpxchg. This guarantees that dependency ordering is preserved when a
> > dependency is headed by an unsuccessful cmpxchg. As it turns out, the
> > change could enable further simplification of LKMM as proposed in [2].
> >
> > [1] https://marc.info/?l=linux-kernel&m=150884953419377&w=2
> > https://marc.info/?l=linux-kernel&m=150884946319353&w=2
> > https://marc.info/?l=linux-kernel&m=151215810824468&w=2
> > https://marc.info/?l=linux-kernel&m=151215816324484&w=2
> >
> > [2] https://marc.info/?l=linux-kernel&m=151881978314872&w=2
> >
> > Signed-off-by: Andrea Parri <parri.andrea@gmail.com>
> > Acked-by: Peter Zijlstra <peterz@infradead.org>
> > Cc: Will Deacon <will.deacon@arm.com>
> > Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> > Cc: Alan Stern <stern@rowland.harvard.edu>
> > Cc: Richard Henderson <rth@twiddle.net>
> > Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
> > Cc: Matt Turner <mattst88@gmail.com>
> > Cc: linux-alpha@vger.kernel.org
> > Cc: linux-kernel@vger.kernel.org
> > ---
> > arch/alpha/include/asm/xchg.h | 15 +++++++--------
> > 1 file changed, 7 insertions(+), 8 deletions(-)
> >
> > diff --git a/arch/alpha/include/asm/xchg.h b/arch/alpha/include/asm/xchg.h
> > index 68dfb3cb71454..e2660866ce972 100644
> > --- a/arch/alpha/include/asm/xchg.h
> > +++ b/arch/alpha/include/asm/xchg.h
> > @@ -128,10 +128,9 @@ ____xchg(, volatile void *ptr, unsigned long x, int size)
> > * store NEW in MEM. Return the initial value in MEM. Success is
> > * indicated by comparing RETURN with OLD.
> > *
> > - * The memory barrier should be placed in SMP only when we actually
> > - * make the change. If we don't change anything (so if the returned
> > - * prev is equal to old) then we aren't acquiring anything new and
> > - * we don't need any memory barrier as far I can tell.
> > + * The memory barrier is placed in SMP unconditionally, in order to
> > + * guarantee that dependency ordering is preserved when a dependency
> > + * is headed by an unsuccessful operation.
> > */
> >
> > static inline unsigned long
> > @@ -150,8 +149,8 @@ ____cmpxchg(_u8, volatile char *m, unsigned char old, unsigned char new)
> > " or %1,%2,%2\n"
> > " stq_c %2,0(%4)\n"
> > " beq %2,3f\n"
> > - __ASM__MB
> > "2:\n"
> > + __ASM__MB
> > ".subsection 2\n"
> > "3: br 1b\n"
> > ".previous"
>
> It might be better just to add smp_read_barrier_depends() into the cmpxchg
> macro, then remove all of the __ASM__MB stuff.
Mmh, it might be better to add smp_mb() into the cmpxchg macro (after the
operation), then remove all the __ASM__MB stuff.
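Something along these lines, as a sketch only (the wrapper below
paraphrases the Alpha cmpxchg() macro rather than reproducing it):

	/*
	 * Sketch: the asm bodies in xchg.h lose their __ASM__MB, and
	 * the wrapper issues the barrier unconditionally after the
	 * operation.  Paraphrased, not the actual arch/alpha code.
	 */
	#define cmpxchg(ptr, o, n)					\
	({								\
		__typeof__(*(ptr)) __ret;				\
		__ret = (__typeof__(*(ptr))) __cmpxchg((ptr),		\
				(unsigned long)(o),			\
				(unsigned long)(n), sizeof(*(ptr)));	\
		smp_mb();	/* runs even if the compare failed */	\
		__ret;							\
	})

The difference would be in which guarantee the wrapper preserves: a
full smp_mb() keeps the trailing full-barrier semantics that __ASM__MB
used to provide on success, whereas smp_read_barrier_depends() alone
would only restore dependency ordering.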
>
> That said, I don't actually understand how the Alpha cmpxchg or xchg
> implementations satisfy the memory model, since they only appear to have
> a barrier after the operation.
>
> So MP using xchg:
>
> WRITE_ONCE(x, 1)
> xchg(y, 1)
>
> smp_load_acquire(y) == 1
> READ_ONCE(x) == 0
>
> would be allowed. What am I missing?
Good question ;-) The absence of an smp_mb() (or of an __ASM__MB) before
the operation has been bothering me as well.
If this question remains pending, I'll send a patch to add these barriers.
>
> Since I'm in the mood for dumb questions, do we need to care about
> this_cpu_cmpxchg? I'm sure I've seen code that allows concurrent access to
> per-cpu variables, but the asm-generic implementation of this_cpu_cmpxchg
> doesn't use READ_ONCE.
Frankly, I'm not sure whether this is an issue in the generic implementation
of this_cpu_* or, rather, in that code. Let me dig a bit more into this ...
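For reference, the asm-generic fallback reads the per-cpu variable with
a plain load, roughly (paraphrased from include/asm-generic/percpu.h,
so the exact shape may differ):

	#define raw_cpu_generic_cmpxchg(pcp, oval, nval)		\
	({								\
		typeof(pcp) *__p = raw_cpu_ptr(&(pcp));			\
		typeof(pcp) __ret = *__p; /* plain load, no READ_ONCE() */ \
		if (__ret == (oval))					\
			*__p = (nval);					\
		__ret;							\
	})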
Andrea
>
> Will