From mboxrd@z Thu Jan 1 00:00:00 1970
From: Konrad Rzeszutek Wilk
Subject: Re: [PATCH 03/11] qspinlock: Add pending bit
Date: Wed, 18 Jun 2014 09:36:56 -0400
Message-ID: <20140618133656.GA4729@laptop.dumpdata.com>
References: <20140615124657.264658593@chello.nl>
 <20140615130153.196728583@chello.nl>
 <20140617203615.GA29634@laptop.dumpdata.com>
 <53A1782C.7040400@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Content-Disposition: inline
In-Reply-To: <53A1782C.7040400@redhat.com>
Sender: virtualization-bounces@lists.linux-foundation.org
Errors-To: virtualization-bounces@lists.linux-foundation.org
To: Paolo Bonzini
Cc: Waiman.Long@hp.com, linux-arch@vger.kernel.org, riel@redhat.com,
 Peter Zijlstra, kvm@vger.kernel.org, boris.ostrovsky@oracle.com,
 scott.norton@hp.com, raghavendra.kt@linux.vnet.ibm.com,
 paolo.bonzini@gmail.com, linux-kernel@vger.kernel.org, gleb@redhat.com,
 virtualization@lists.linux-foundation.org, chegu_vinod@hp.com,
 david.vrabel@citrix.com, oleg@redhat.com, xen-devel@lists.xenproject.org,
 tglx@linutronix.de, paulmck@linux.vnet.ibm.com,
 torvalds@linux-foundation.org, mingo@kernel.org
List-Id: linux-arch.vger.kernel.org

On Wed, Jun 18, 2014 at 01:29:48PM +0200, Paolo Bonzini wrote:
> On 17/06/2014 22:36, Konrad Rzeszutek Wilk wrote:
> >+	/* One more attempt - but if we fail, mark it as pending. */
> >+	if (val == _Q_LOCKED_VAL) {
> >+		new = _Q_LOCKED_VAL | _Q_PENDING_VAL;
> >+
> >+		old = atomic_cmpxchg(&lock->val, val, new);
> >+		if (old == _Q_LOCKED_VAL) /* YEEY! */
> >+			return;
> >+		val = old;
> >+	}
>
> Note that Peter's code is in a for(;;) loop:
>
>
> +	for (;;) {
> +		/*
> +		 * If we observe any contention; queue.
> +		 */
> +		if (val & ~_Q_LOCKED_MASK)
> +			goto queue;
> +
> +		new = _Q_LOCKED_VAL;
> +		if (val == new)
> +			new |= _Q_PENDING_VAL;
> +
> +		old = atomic_cmpxchg(&lock->val, val, new);
> +		if (old == val)
> +			break;
> +
> +		val = old;
> +	}
> +
> +	/*
> +	 * we won the trylock
> +	 */
> +	if (new == _Q_LOCKED_VAL)
> +		return;
>
> So what you'd have is basically:
>
> 	/*
> 	 * One more attempt if no one is already in queue. Perhaps
> 	 * they have unlocked the spinlock already.
> 	 */
> 	if (val == _Q_LOCKED_VAL && atomic_read(&lock->val) == 0) {
> 		old = atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL);
> 		if (old == 0) /* YEEY! */
> 			return;
> 		val = old;
> 	}
>
> But I agree with Waiman that this is unlikely to trigger often enough. It
> does have to be handled in the slowpath for correctness, but the most likely
> path is (0,0,1)->(0,1,1).
>
> Paolo