* [RFC] arm64: Implement WFE based spin wait for MCS spinlocks
@ 2016-04-14  7:13 Jason Low
  2016-04-20 10:30 ` Peter Zijlstra
  0 siblings, 1 reply; 3+ messages in thread
From: Jason Low @ 2016-04-14  7:13 UTC (permalink / raw)
  To: Will Deacon
  Cc: Peter Zijlstra, Linus Torvalds, linux-kernel@vger.kernel.org,
	mingo@redhat.com, paulmck@linux.vnet.ibm.com, terry.rudd,
	Long, Wai Man, boqun.feng@gmail.com, dave@stgolabs.net,
	jason.low2

Use WFE to avoid most of the spinning with MCS spinlocks. This is
implemented with the new cmpwait() mechanism, which uses LDXR + WFE to
wait for the MCS locked value to change.

Signed-off-by: Jason Low <jason.low2@hp.com>
---
 arch/arm64/include/asm/mcs_spinlock.h | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)
 create mode 100644 arch/arm64/include/asm/mcs_spinlock.h

diff --git a/arch/arm64/include/asm/mcs_spinlock.h b/arch/arm64/include/asm/mcs_spinlock.h
new file mode 100644
index 0000000..d295d9d
--- /dev/null
+++ b/arch/arm64/include/asm/mcs_spinlock.h
@@ -0,0 +1,21 @@
+#ifndef __ASM_MCS_SPINLOCK_H
+#define __ASM_MCS_SPINLOCK_H
+
+#define arch_mcs_spin_lock_contended(l)					\
+do {									\
+	int locked_val;							\
+	for (;;) {							\
+		locked_val = READ_ONCE(*l);				\
+		if (locked_val)						\
+			break;						\
+		cmpwait(l, locked_val);					\
+	}								\
+	smp_rmb();							\
+} while (0)
+
+#define arch_mcs_spin_unlock_contended(l)				\
+do {									\
+	smp_store_release(l, 1);					\
+} while (0)
+
+#endif  /* __ASM_MCS_SPINLOCK_H */
-- 
2.1.4
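
For readers unfamiliar with the proposed primitive: cmpwait(ptr, val)
is intended to wait, via the event mechanism, until *ptr may have
changed from val. Conceptually it performs an exclusive load (LDXR),
which arms the CPU's exclusive monitor on that location, and if the
value still equals val it executes WFE, which idles the core until the
monitor is cleared by a write to that location (or another event
arrives). A minimal illustrative sketch for a 32-bit value follows; it
is not the exact asm from the cmpwait() series, and spurious wakeups
are fine because the caller re-checks in a loop:

static inline void cmpwait(volatile int *ptr, int val)
{
	unsigned long tmp;

	asm volatile(
	"	ldxr	%w[tmp], %[v]\n"	/* exclusive load arms the monitor */
	"	eor	%w[tmp], %w[tmp], %w[val]\n"
	"	cbnz	%w[tmp], 1f\n"		/* value already changed: return */
	"	wfe\n"				/* sleep until the monitor is cleared */
	"1:"
	: [tmp] "=&r" (tmp), [v] "+Q" (*ptr)
	: [val] "r" (val));
}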


* Re: [RFC] arm64: Implement WFE based spin wait for MCS spinlocks
  2016-04-14  7:13 [RFC] arm64: Implement WFE based spin wait for MCS spinlocks Jason Low
@ 2016-04-20 10:30 ` Peter Zijlstra
  2016-04-20 19:36   ` Jason Low
  0 siblings, 1 reply; 3+ messages in thread
From: Peter Zijlstra @ 2016-04-20 10:30 UTC (permalink / raw)
  To: Jason Low
  Cc: Will Deacon, Linus Torvalds, linux-kernel@vger.kernel.org,
	mingo@redhat.com, paulmck@linux.vnet.ibm.com, terry.rudd,
	Long, Wai Man, boqun.feng@gmail.com, dave@stgolabs.net

On Thu, Apr 14, 2016 at 12:13:38AM -0700, Jason Low wrote:
> Use WFE to avoid most of the spinning with MCS spinlocks. This is
> implemented with the new cmpwait() mechanism, which uses LDXR + WFE to
> wait for the MCS locked value to change.
> 
> Signed-off-by: Jason Low <jason.low2@hp.com>
> ---
>  arch/arm64/include/asm/mcs_spinlock.h | 21 +++++++++++++++++++++
>  1 file changed, 21 insertions(+)
>  create mode 100644 arch/arm64/include/asm/mcs_spinlock.h
> 
> diff --git a/arch/arm64/include/asm/mcs_spinlock.h b/arch/arm64/include/asm/mcs_spinlock.h
> new file mode 100644
> index 0000000..d295d9d
> --- /dev/null
> +++ b/arch/arm64/include/asm/mcs_spinlock.h
> @@ -0,0 +1,21 @@
> +#ifndef __ASM_MCS_SPINLOCK_H
> +#define __ASM_MCS_SPINLOCK_H
> +
> +#define arch_mcs_spin_lock_contended(l)					\
> +do {									\
> +	int locked_val;							\
> +	for (;;) {							\
> +		locked_val = READ_ONCE(*l);				\
> +		if (locked_val)						\
> +			break;						\
> +		cmpwait(l, locked_val);					\
> +	}								\
> +	smp_rmb();							\
> +} while (0)

If you make the generic version use smp_cond_load_acquire(), this isn't
needed.
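
For context, with that change the generic definition in
kernel/locking/mcs_spinlock.h can collapse to roughly the sketch below,
assuming smp_cond_load_acquire(ptr, cond) loops on an acquire load of
*ptr (exposed to cond as VAL) until cond is true, and on arm64 is
itself built on the LDXR + WFE wait:

#define arch_mcs_spin_lock_contended(l)					\
do {									\
	smp_cond_load_acquire(l, VAL);					\
} while (0)

The acquire load subsumes the separate smp_rmb(), so no per-arch
override is needed.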


* Re: [RFC] arm64: Implement WFE based spin wait for MCS spinlocks
  2016-04-20 10:30 ` Peter Zijlstra
@ 2016-04-20 19:36   ` Jason Low
  0 siblings, 0 replies; 3+ messages in thread
From: Jason Low @ 2016-04-20 19:36 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Will Deacon, Linus Torvalds, linux-kernel@vger.kernel.org,
	mingo@redhat.com, paulmck@linux.vnet.ibm.com, terry.rudd,
	Long, Wai Man, boqun.feng@gmail.com, dave@stgolabs.net,
	jason.low2

On Wed, 2016-04-20 at 12:30 +0200, Peter Zijlstra wrote:
> On Thu, Apr 14, 2016 at 12:13:38AM -0700, Jason Low wrote:
> > Use WFE to avoid most of the spinning with MCS spinlocks. This is
> > implemented with the new cmpwait() mechanism, which uses LDXR + WFE to
> > wait for the MCS locked value to change.
> > 
> > Signed-off-by: Jason Low <jason.low2@hp.com>
> > ---
> >  arch/arm64/include/asm/mcs_spinlock.h | 21 +++++++++++++++++++++
> >  1 file changed, 21 insertions(+)
> >  create mode 100644 arch/arm64/include/asm/mcs_spinlock.h
> > 
> > diff --git a/arch/arm64/include/asm/mcs_spinlock.h b/arch/arm64/include/asm/mcs_spinlock.h
> > new file mode 100644
> > index 0000000..d295d9d
> > --- /dev/null
> > +++ b/arch/arm64/include/asm/mcs_spinlock.h
> > @@ -0,0 +1,21 @@
> > +#ifndef __ASM_MCS_SPINLOCK_H
> > +#define __ASM_MCS_SPINLOCK_H
> > +
> > +#define arch_mcs_spin_lock_contended(l)					\
> > +do {									\
> > +	int locked_val;							\
> > +	for (;;) {							\
> > +		locked_val = READ_ONCE(*l);				\
> > +		if (locked_val)						\
> > +			break;						\
> > +		cmpwait(l, locked_val);					\
> > +	}								\
> > +	smp_rmb();							\
> > +} while (0)
> 
> If you make the generic version use smp_cond_load_acquire(), this isn't
> needed.

Yup, in the email thread about modifying the generic version to use
smp_cond_load_acquire(), I mentioned that overriding it in arch/arm64
would not be needed anymore.

Will had suggested overriding it on arm64, but it turns out he was just
referring to avoiding an immediate dependency on
smp_cond_load_acquire().

Thanks,
Jason

