From: Ingo Molnar
Subject: Re: [PATCH v9 05/19] qspinlock: Optimize for smaller NR_CPUS
Date: Fri, 18 Apr 2014 09:46:16 +0200
Message-ID: <20140418074616.GB13517@gmail.com>
References: <1397747051-15401-1-git-send-email-Waiman.Long@hp.com>
 <1397747051-15401-6-git-send-email-Waiman.Long@hp.com>
 <20140417155844.GS11096@twins.programming.kicks-ass.net>
 <53504C4E.8060800@hp.com>
In-Reply-To: <53504C4E.8060800@hp.com>
To: Waiman Long
Cc: linux-arch@vger.kernel.org, Rik van Riel, Raghavendra K T,
 Gleb Natapov, kvm@vger.kernel.org, Konrad Rzeszutek Wilk,
 Peter Zijlstra, Scott J Norton, x86@kernel.org, Paolo Bonzini,
 linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org,
 Ingo Molnar, Chegu Vinod, David Vrabel, "H. Peter Anvin",
 xen-devel@lists.xenproject.org, Thomas Gleixner, "Paul E. McKenney",
 Linus Torvalds, Oleg Nesterov

* Waiman Long wrote:

> On 04/17/2014 11:58 AM, Peter Zijlstra wrote:
> > On Thu, Apr 17, 2014 at 11:03:57AM -0400, Waiman Long wrote:
> >> +static __always_inline void
> >> +clear_pending_set_locked(struct qspinlock *lock, u32 val)
> >> +{
> >> +	struct __qspinlock *l = (void *)lock;
> >> +
> >> +	ACCESS_ONCE(l->locked_pending) = 1;
> >> +}
> >> @@ -157,8 +251,13 @@ static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
> >>  	 * we're pending, wait for the owner to go away.
> >>  	 *
> >>  	 * *,1,1 -> *,1,0
> >> +	 *
> >> +	 * this wait loop must be a load-acquire such that we match the
> >> +	 * store-release that clears the locked bit and create lock
> >> +	 * sequentiality; this because not all try_clear_pending_set_locked()
> >> +	 * implementations imply full barriers.
> > You renamed the function referred in the above comment.
>
> Sorry, will fix the comments.

I suggest not renaming the function instead:
try_clear_pending_set_locked() tells the intent in a clearer fashion.

Thanks,

	Ingo
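For context, the single-store trick quoted above relies on the locked and
pending bits occupying two adjacent bytes of the 32-bit lock word once
NR_CPUS is small enough for the tail to fit in the upper 16 bits. A minimal
sketch of the little-endian layout this assumes (field names follow the
quoted patch; the exact definition in the series may differ):

	struct __qspinlock {
		union {
			atomic_t val;
			struct {
				u8	locked;		/* bits 0-7  of val */
				u8	pending;	/* bits 8-15 of val */
			};
			struct {
				u16	locked_pending;	/* bits 0-15  */
				u16	tail;		/* bits 16-31 */
			};
		};
	};

	/*
	 * With this layout, ACCESS_ONCE(l->locked_pending) = 1 sets
	 * locked to 1 and clears pending in one plain 16-bit store,
	 * avoiding an atomic cmpxchg on the whole lock word.
	 */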
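The load-acquire requirement in the quoted comment pairs the pending
waiter's spin with the store-release performed by the unlocking CPU, so the
new owner's critical section is ordered after the previous owner's. An
illustrative sketch of such a wait loop, not the exact code from the patch
(smp_load_acquire() and cpu_relax() are existing kernel primitives;
_Q_LOCKED_MASK follows the series' naming):

	/*
	 * *,1,1 -> *,1,0
	 *
	 * Spin until the current owner releases the lock; the acquire
	 * on the load matches the store-release in the unlock path.
	 */
	while ((val = smp_load_acquire(&lock->val.counter)) & _Q_LOCKED_MASK)
		cpu_relax();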