From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1757845AbaEJBT5 (ORCPT );
	Fri, 9 May 2014 21:19:57 -0400
Received: from g5t1627.atlanta.hp.com ([15.192.137.10]:7087 "EHLO
	g5t1627.atlanta.hp.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1755778AbaEJBTz (ORCPT );
	Fri, 9 May 2014 21:19:55 -0400
Message-ID: <536D7EA4.4060301@hp.com>
Date: Fri, 09 May 2014 21:19:32 -0400
From: Waiman Long
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:10.0.12) Gecko/20130109
	Thunderbird/10.0.12
MIME-Version: 1.0
To: Peter Zijlstra
CC: Thomas Gleixner, Ingo Molnar, "H. Peter Anvin",
	linux-arch@vger.kernel.org, x86@kernel.org,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org, kvm@vger.kernel.org,
	Paolo Bonzini, Konrad Rzeszutek Wilk, Boris Ostrovsky,
	"Paul E. McKenney", Rik van Riel, Linus Torvalds,
	Raghavendra K T, David Vrabel, Oleg Nesterov, Gleb Natapov,
	Scott J Norton, Chegu Vinod
Subject: Re: [PATCH v10 09/19] qspinlock: Prepare for unfair lock support
References: <1399474907-22206-1-git-send-email-Waiman.Long@hp.com>
	<1399474907-22206-10-git-send-email-Waiman.Long@hp.com>
	<20140508190649.GR2844@laptop.programming.kicks-ass.net>
In-Reply-To: <20140508190649.GR2844@laptop.programming.kicks-ass.net>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On 05/08/2014 03:06 PM, Peter Zijlstra wrote:
> On Wed, May 07, 2014 at 11:01:37AM -0400, Waiman Long wrote:
>> If unfair lock is supported, the lock acquisition loop at the end of
>> the queue_spin_lock_slowpath() function may need to detect the fact
>> the lock can be stolen. Code are added for the stolen lock detection.
>>
>> A new qhead macro is also defined as a shorthand for mcs.locked.
> NAK, unfair should be a pure test-and-set lock.

I have performance data showing that a simple test-and-set lock does
not scale well. That is the primary reason for ditching the
test-and-set lock and using a more complicated scheme that scales
better. It will also be hard to make the unfair test-and-set lock
code coexist nicely with the PV spinlock code.

>>   /**
>>    * get_qlock - Set the lock bit and own the lock
>> - * @lock: Pointer to queue spinlock structure
>> + * @lock : Pointer to queue spinlock structure
>> + * Return: 1 if lock acquired, 0 otherwise
>>    *
>>    * This routine should only be called when the caller is the only one
>>    * entitled to acquire the lock.
>>    */
>> -static __always_inline void get_qlock(struct qspinlock *lock)
>> +static __always_inline int get_qlock(struct qspinlock *lock)
>>   {
>>   	struct __qspinlock *l = (void *)lock;
>>
>>   	barrier();
>>   	ACCESS_ONCE(l->locked) = _Q_LOCKED_VAL;
>>   	barrier();
>> +	return 1;
>>   }
> and here you make a horribly named function more horrible;
> try_set_locked() is that its now.

Will do.

-Longman
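
P.S. For reference, below is roughly what the renamed helper will look
like, side by side with the pure test-and-set lock being suggested.
This is only a sketch: the struct __qspinlock layout shown here
(little-endian, lock byte in the low byte of the word) and the
xchg()-based unfair_tas_lock() are my illustration, not code from
this series.

/* Byte-addressable view of the lock word; little-endian layout assumed */
struct __qspinlock {
	union {
		atomic_t val;
		u8	 locked;	/* == _Q_LOCKED_VAL when held */
	};
};

/**
 * try_set_locked - Set the lock bit and own the lock
 * @lock : Pointer to queue spinlock structure
 * Return: 1 if lock acquired, 0 otherwise
 *
 * Only the queue head may call this; no other CPU can be writing the
 * lock byte concurrently, so a plain store between compiler barriers
 * is sufficient.
 */
static __always_inline int try_set_locked(struct qspinlock *lock)
{
	struct __qspinlock *l = (void *)lock;

	barrier();
	ACCESS_ONCE(l->locked) = _Q_LOCKED_VAL;
	barrier();
	return 1;
}

/*
 * A pure test-and-set unfair lock, for comparison (illustrative only).
 * Every spinning CPU does an atomic exchange on the same cache line,
 * which is why this stops scaling as the CPU and socket count grows.
 */
static inline void unfair_tas_lock(struct qspinlock *lock)
{
	struct __qspinlock *l = (void *)lock;

	while (xchg(&l->locked, _Q_LOCKED_VAL))
		cpu_relax();	/* spin until the lock byte clears */
}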