From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Waiman Long <waiman.long@hp.com>
Cc: linux-arch@vger.kernel.org, riel@redhat.com,
	Peter Zijlstra <a.p.zijlstra@chello.nl>, kvm@vger.kernel.org,
	boris.ostrovsky@oracle.com, scott.norton@hp.com,
	raghavendra.kt@linux.vnet.ibm.com, paolo.bonzini@gmail.com,
	linux-kernel@vger.kernel.org, gleb@redhat.com,
	virtualization@lists.linux-foundation.org,
	Peter Zijlstra <peterz@infradead.org>, chegu_vinod@hp.com,
	david.vrabel@citrix.com, oleg@redhat.com,
	xen-devel@lists.xenproject.org, tglx@linutronix.de,
	paulmck@linux.vnet.ibm.com, torvalds@linux-foundation.org,
	mingo@kernel.org
Subject: Re: [PATCH 03/11] qspinlock: Add pending bit
Date: Tue, 17 Jun 2014 17:10:57 -0400
Message-ID: <20140617211057.GD29634@laptop.dumpdata.com>
In-Reply-To: <20140617210729.GB31817@laptop.dumpdata.com>

On Tue, Jun 17, 2014 at 05:07:29PM -0400, Konrad Rzeszutek Wilk wrote:
> On Tue, Jun 17, 2014 at 04:51:57PM -0400, Waiman Long wrote:
> > On 06/17/2014 04:36 PM, Konrad Rzeszutek Wilk wrote:
> > >On Sun, Jun 15, 2014 at 02:47:00PM +0200, Peter Zijlstra wrote:
> > >>Because the qspinlock needs to touch a second cacheline; add a pending
> > >>bit and allow a single in-word spinner before we punt to the second
> > >>cacheline.
> > >Could you add this in the description please:
> > >
> > >And by second cacheline we mean the local 'node'. That is the:
> > >mcs_nodes[0] and mcs_nodes[idx]
> > >
> > >Perhaps it might be better then to split this in the header file,
> > >as this is trying not to be slowpath code - but rather a
> > >pre-slow-path-lets-try-if-we-can-do-another-cmpxchg in case
> > >the unlocker has just unlocked itself.
> > >
> > >So something like:
> > >
> > >diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
> > >index e8a7ae8..29cc9c7 100644
> > >--- a/include/asm-generic/qspinlock.h
> > >+++ b/include/asm-generic/qspinlock.h
> > >@@ -75,11 +75,21 @@ extern void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val);
> > > */
> > > static __always_inline void queue_spin_lock(struct qspinlock *lock)
> > > {
> > >-	u32 val;
> > >+	u32 val, new, old;
> > >
> > > 	val = atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL);
> > > 	if (likely(val == 0))
> > > 		return;
> > >+
> > >+	/* One more attempt - but if we fail, mark it as pending. */
> > >+	if (val == _Q_LOCKED_VAL) {
> > >+		new = _Q_LOCKED_VAL | _Q_PENDING_VAL;
> > >+
> > >+		old = atomic_cmpxchg(&lock->val, val, new);
> > >+		if (old == _Q_LOCKED_VAL) /* YEEY! */
> > >+			return;
> >
> > No, it can't be left like that. The unlock path will not clear the pending bit.
>
> Err, you are right. It needs to go back into the slowpath.

What I should have written is:

	if (old == 0) /* YEEY */
		return;

As that would do the same thing as this patch does with the pending bit -
that is, if on the second compare-and-exchange we can set the pending bit
(and the lock), or the lock has just been released, we are good. And it is
a quick path.

> > We are trying to make the fastpath as simple as possible as it may be
> > inlined. The complexity of the queue spinlock is in the slowpath.
>
> Sure, but then it shouldn't be called slowpath anymore, as it is not
> slow. It is a combination of the fast path (the potential chance of
> grabbing the lock and setting the pending bit) and the real slow
> path (the queueing). Perhaps it should be called 'queue_spinlock_complex'?

I forgot to mention - that was the crux of my comments - just change the
slowpath to a 'complex' name at that point, to better reflect what it does.

> > Moreover, a cmpxchg immediately followed by another cmpxchg will
> > just increase the level of memory contention when a lock is fairly
> > contended. The chance of the second cmpxchg() succeeding will be pretty low.
>
> Then why even do the pending bit - which is the first thing the slowpath
> does anyway. And if it grabs it (and sets the pending bit), it immediately
> exits. Why not percolate that piece of code into this header, and leave
> all the slow code (queueing, mcs_lock access, etc.) in the slowpath.
>
> > -Longman
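[Editor's sketch - not code from the thread.] For reference, a minimal,
hypothetical sketch of the "one more attempt" fastpath being discussed, with
the corrected early return applied: it only returns when the second cmpxchg
actually observed the lock free and acquired it, so no pending bit can be
left set from the header (Waiman's objection above). It deliberately does
not touch _Q_PENDING_VAL in the header; all pending-bit and queueing logic
stays in the slowpath. Names (_Q_LOCKED_VAL, lock->val,
queue_spin_lock_slowpath) follow the quoted patch; the structure is an
illustration of the idea, not the code that was merged.

	static __always_inline void queue_spin_lock(struct qspinlock *lock)
	{
		u32 val;

		/* Fast path: lock is free, take it. */
		val = atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL);
		if (likely(val == 0))
			return;

		/*
		 * One more attempt: the holder may have released the lock
		 * between the first cmpxchg and now.  Only return if this
		 * second cmpxchg saw 0 and therefore acquired the lock.
		 */
		if (val == _Q_LOCKED_VAL) {
			val = atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL);
			if (val == 0)	/* YEEY! */
				return;
		}

		/* Contended: fall back to the pending-bit/queueing slowpath. */
		queue_spin_lock_slowpath(lock, val);
	}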