From mboxrd@z Thu Jan 1 00:00:00 1970
From: Peter Zijlstra
Subject: Re: [PATCH RFC 1/2] qspinlock: Introducing a 4-byte queue spinlock implementation
Date: Thu, 1 Aug 2013 22:47:10 +0200
Message-ID: <20130801204710.GH27162@twins.programming.kicks-ass.net>
References: <1375324631-32868-1-git-send-email-Waiman.Long@hp.com>
 <1375324631-32868-2-git-send-email-Waiman.Long@hp.com>
 <51FAC3BA.9050705@linux.vnet.ibm.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Return-path: 
Received: from merlin.infradead.org ([205.233.59.134]:54981 "EHLO merlin.infradead.org"
 rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751362Ab3HAUro (ORCPT );
 Thu, 1 Aug 2013 16:47:44 -0400
Content-Disposition: inline
In-Reply-To: <51FAC3BA.9050705@linux.vnet.ibm.com>
Sender: linux-arch-owner@vger.kernel.org
List-ID: 
To: Raghavendra K T
Cc: Waiman Long, Thomas Gleixner, Ingo Molnar, "H. Peter Anvin", Arnd Bergmann,
 linux-arch@vger.kernel.org, x86@kernel.org, linux-kernel@vger.kernel.org,
 Steven Rostedt, Andrew Morton, Richard Weinberger, Catalin Marinas,
 Greg Kroah-Hartman, Matt Fleming, Herbert Xu, Akinobu Mita, Rusty Russell,
 Michel Lespinasse, Andi Kleen, Rik van Riel, "Paul E. McKenney",
 Linus Torvalds, George Spelvin, Harvey Harrison

On Fri, Aug 02, 2013 at 01:53:22AM +0530, Raghavendra K T wrote:

You need to learn to trim your replies.. I already stopped reading that
paravirt thread because of it. Soon I'll introduce you to my /dev/null
mail reader.

> On 08/01/2013 08:07 AM, Waiman Long wrote:
> >+static __always_inline void queue_spin_lock(struct qspinlock *lock)
> >+{
> >+	if (likely(queue_spin_trylock(lock)))
> >+		return;
> >+	queue_spin_lock_slowpath(lock);
> >+}
>
> quickly falling into slowpath may hurt performance in some cases. no?
>
> Instead, I tried something like this:
>
> #define SPIN_THRESHOLD 64
>
> static __always_inline void queue_spin_lock(struct qspinlock *lock)
> {
> 	unsigned count = SPIN_THRESHOLD;
> 	do {
> 		if (likely(queue_spin_trylock(lock)))
> 			return;
> 		cpu_relax();
> 	} while (count--);
> 	queue_spin_lock_slowpath(lock);
> }
>
> Though I could see some gains in overcommit, it hurt undercommit
> in some workloads :(.

This would break the FIFO nature of the lock.