From mboxrd@z Thu Jan  1 00:00:00 1970
From: "Paul E. McKenney"
Subject: Re: [PATCH v7 5/6] MCS Lock: allow architectures to hook in to contended paths
Date: Sun, 19 Jan 2014 18:34:48 -0800
Message-ID: <20140120023448.GM10038@linux.vnet.ibm.com>
References: <1389917311.3138.15.camel@schen9-DESK>
In-Reply-To: <1389917311.3138.15.camel@schen9-DESK>
Reply-To: paulmck@linux.vnet.ibm.com
To: Tim Chen
Cc: Ingo Molnar, Andrew Morton, Thomas Gleixner, Will Deacon,
	linux-kernel@vger.kernel.org, linux-mm, linux-arch@vger.kernel.org,
	Linus Torvalds, Waiman Long, Andrea Arcangeli, Alex Shi, Andi Kleen,
	Michel Lespinasse, Davidlohr Bueso, Matthew R Wilcox, Dave Hansen,
	Peter Zijlstra, Rik van Riel, Peter Hurley, Raghavendra K T,
	George Spelvin, "H. Peter Anvin", Arnd Bergmann,
	Aswin Chandramouleeswaran, Scott J Norton, "Figo.zhang"

On Thu, Jan 16, 2014 at 04:08:31PM -0800, Tim Chen wrote:
> When contended, architectures may be able to reduce the polling overhead
> in ways which aren't expressible using a simple relax() primitive.
>
> This patch allows architectures to hook into the mcs_{lock,unlock}
> functions for the contended cases only.
>
> From: Will Deacon
> Signed-off-by: Will Deacon

Reviewed-by: Paul E. McKenney

> ---
>  kernel/locking/mcs_spinlock.c | 47 +++++++++++++++++++++++++------------------
>  1 file changed, 27 insertions(+), 20 deletions(-)
>
> diff --git a/kernel/locking/mcs_spinlock.c b/kernel/locking/mcs_spinlock.c
> index 6cdc730..66d8883 100644
> --- a/kernel/locking/mcs_spinlock.c
> +++ b/kernel/locking/mcs_spinlock.c
> @@ -7,19 +7,34 @@
>   * It avoids expensive cache bouncings that common test-and-set spin-lock
>   * implementations incur.
>   */
> -/*
> - * asm/processor.h may define arch_mutex_cpu_relax().
> - * If it is not defined, cpu_relax() will be used.
> - */
> +
>  #include
>  #include
>  #include
>  #include
>  #include
> +#include
>  #include
>
> -#ifndef arch_mutex_cpu_relax
> -# define arch_mutex_cpu_relax() cpu_relax()
> +#ifndef arch_mcs_spin_lock_contended
> +/*
> + * Using smp_load_acquire() provides a memory barrier that ensures
> + * subsequent operations happen after the lock is acquired.
> + */
> +#define arch_mcs_spin_lock_contended(l)				\
> +	while (!(smp_load_acquire(l))) {			\
> +		arch_mutex_cpu_relax();				\
> +	}
> +#endif
> +
> +#ifndef arch_mcs_spin_unlock_contended
> +/*
> + * smp_store_release() provides a memory barrier to ensure all
> + * operations in the critical section has been completed before
> + * unlocking.
> + */
> +#define arch_mcs_spin_unlock_contended(l)			\
> +	smp_store_release((l), 1)
>  #endif
>
>  /*
> @@ -43,13 +58,9 @@ void mcs_spin_lock(struct mcs_spinlock **lock, struct mcs_spinlock *node)
>  		return;
>  	}
>  	ACCESS_ONCE(prev->next) = node;
> -	/*
> -	 * Wait until the lock holder passes the lock down.
> -	 * Using smp_load_acquire() provides a memory barrier that
> -	 * ensures subsequent operations happen after the lock is acquired.
> -	 */
> -	while (!(smp_load_acquire(&node->locked)))
> -		arch_mutex_cpu_relax();
> +
> +	/* Wait until the lock holder passes the lock down. */
> +	arch_mcs_spin_lock_contended(&node->locked);
>  }
>  EXPORT_SYMBOL_GPL(mcs_spin_lock);
>
> @@ -71,12 +82,8 @@ void mcs_spin_unlock(struct mcs_spinlock **lock, struct mcs_spinlock *node)
>  	while (!(next = ACCESS_ONCE(node->next)))
>  		arch_mutex_cpu_relax();
>  	}
> -	/*
> -	 * Pass lock to next waiter.
> -	 * smp_store_release() provides a memory barrier to ensure
> -	 * all operations in the critical section has been completed
> -	 * before unlocking.
> -	 */
> -	smp_store_release(&next->locked, 1);
> +
> +	/* Pass lock to next waiter. */
> +	arch_mcs_spin_unlock_contended(&next->locked);
>  }
>  EXPORT_SYMBOL_GPL(mcs_spin_unlock);
> --
> 1.7.11.7
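
For anyone wondering what an architecture would actually plug into these
hooks, here is a purely illustrative sketch. wait_for_event() and
send_event() are hypothetical stand-ins for a real low-power wait/wake
pair (e.g. ARM's wfe/sev, the use case that motivated this hook); this
shows the shape an asm override could take, not any actual port.

/*
 * Hypothetical arch override of the contended-path hooks.
 * wait_for_event()/send_event() are illustrative names, not
 * real kernel primitives.
 */
#define arch_mcs_spin_lock_contended(l)					\
do {									\
	/* Sleep in a low-power state instead of busy-polling. */	\
	while (!smp_load_acquire(l))					\
		wait_for_event();					\
} while (0)

#define arch_mcs_spin_unlock_contended(l)				\
do {									\
	/* Publish the hand-off, then wake the spinning waiter. */	\
	smp_store_release((l), 1);					\
	send_event();							\
} while (0)

The do/while (0) wrappers just make the macros statement-safe; the
acquire/release pairing mirrors the generic fallbacks in the patch, so
the ordering guarantees are unchanged.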
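
For completeness, a minimal caller sketch against the API in this
series. The header path and the demo_* names are assumptions for
illustration; the signatures match mcs_spin_lock()/mcs_spin_unlock()
in the patch above. Each waiter supplies its own queue node, typically
on its stack, and spins only on that node's cache line rather than on
a globally shared word.

#include <linux/mcs_spinlock.h>		/* assumed header location */

static struct mcs_spinlock *demo_lock;	/* queue tail; NULL when unlocked */

static void demo_critical_section(void)
{
	struct mcs_spinlock node;	/* this CPU's queue entry */

	/* Enqueue ourselves and spin locally until the lock is passed. */
	mcs_spin_lock(&demo_lock, &node);

	/* ... critical section ... */

	/* Hand the lock to the next queued waiter, if any. */
	mcs_spin_unlock(&demo_lock, &node);
}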