From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Paul E. McKenney"
Subject: Re: [PATCH v7 2/6] MCS Lock: optimizations and extra comments
Date: Sun, 19 Jan 2014 18:29:47 -0800
Message-ID: <20140120022947.GJ10038@linux.vnet.ibm.com>
References: <1389917300.3138.12.camel@schen9-DESK>
In-Reply-To: <1389917300.3138.12.camel@schen9-DESK>
Reply-To: paulmck@linux.vnet.ibm.com
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Sender: linux-arch-owner@vger.kernel.org
To: Tim Chen
Cc: Ingo Molnar, Andrew Morton, Thomas Gleixner, Will Deacon,
    linux-kernel@vger.kernel.org, linux-mm, linux-arch@vger.kernel.org,
    Linus Torvalds, Waiman Long, Andrea Arcangeli, Alex Shi, Andi Kleen,
    Michel Lespinasse, Davidlohr Bueso, Matthew R Wilcox, Dave Hansen,
    Peter Zijlstra, Rik van Riel, Peter Hurley, Raghavendra K T,
    George Spelvin, "H. Peter Anvin", Arnd Bergmann,
    Aswin Chandramouleeswaran, Scott J Norton, "Figo.zhang"

On Thu, Jan 16, 2014 at 04:08:20PM -0800, Tim Chen wrote:
> Remove an unnecessary operation and mark the cmpxchg(lock, node, NULL) == node
> check in mcs_spin_unlock() likely(), as most of the time a race does not occur.
> 
> Also add more comments describing how the local node is used in MCS locks.
> 
> From: Jason Low
> Reviewed-by: Tim Chen
> Signed-off-by: Jason Low

Reviewed-by: Paul E. McKenney

> ---
>  include/linux/mcs_spinlock.h | 13 +++++++++++--
>  1 file changed, 11 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/mcs_spinlock.h b/include/linux/mcs_spinlock.h
> index b5de3b0..96f14299 100644
> --- a/include/linux/mcs_spinlock.h
> +++ b/include/linux/mcs_spinlock.h
> @@ -18,6 +18,12 @@ struct mcs_spinlock {
>  };
>  
>  /*
> + * In order to acquire the lock, the caller should declare a local node and
> + * pass a reference to the node to this function in addition to the lock.
> + * If the lock has already been acquired, then this will spin on
> + * node->locked until the previous lock holder sets node->locked
> + * in mcs_spin_unlock().
> + *
>   * We don't inline mcs_spin_lock() so that perf can correctly account for the
>   * time spent in this lock function.
>   */
> @@ -33,7 +39,6 @@ void mcs_spin_lock(struct mcs_spinlock **lock, struct mcs_spinlock *node)
>  	prev = xchg(lock, node);
>  	if (likely(prev == NULL)) {
>  		/* Lock acquired */
> -		node->locked = 1;
>  		return;
>  	}
>  	ACCESS_ONCE(prev->next) = node;
> @@ -43,6 +48,10 @@ void mcs_spin_lock(struct mcs_spinlock **lock, struct mcs_spinlock *node)
>  		arch_mutex_cpu_relax();
>  }
>  
> +/*
> + * Releases the lock. The caller should pass in the corresponding node that
> + * was used to acquire the lock.
> + */
>  static void mcs_spin_unlock(struct mcs_spinlock **lock, struct mcs_spinlock *node)
>  {
>  	struct mcs_spinlock *next = ACCESS_ONCE(node->next);
>  
> @@ -51,7 +60,7 @@ static void mcs_spin_unlock(struct mcs_spinlock **lock, struct mcs_spinlock *nod
>  	/*
>  	 * Release the lock by setting it to NULL
>  	 */
> -	if (cmpxchg(lock, node, NULL) == node)
> +	if (likely(cmpxchg(lock, node, NULL) == node))
>  		return;
>  	/* Wait until the next pointer is set */
>  	while (!(next = ACCESS_ONCE(node->next)))
> -- 
> 1.7.11.7
> 
> 