From: Waiman Long <Waiman.Long@hp.com>
To: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
	"H. Peter Anvin" <hpa@zytor.com>, Peter Zijlstra <peterz@infradead.org>
Cc: linux-arch@vger.kernel.org, Waiman Long <Waiman.Long@hp.com>,
	Rik van Riel <riel@redhat.com>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	Gleb Natapov <gleb@redhat.com>, kvm@vger.kernel.org,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Scott J Norton <scott.norton@hp.com>, x86@kernel.org,
	Paolo Bonzini <paolo.bonzini@gmail.com>, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	Chegu Vinod <chegu_vinod@hp.com>, David Vrabel <david.vrabel@citrix.com>,
	Oleg Nesterov <oleg@redhat.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Linus Torvalds <torvalds@linux-foundation.org>
Subject: [PATCH v11 08/16] qspinlock: Prepare for unfair lock support
Date: Fri, 30 May 2014 11:43:54 -0400	[thread overview]
Message-ID: <1401464642-33890-9-git-send-email-Waiman.Long@hp.com> (raw)
In-Reply-To: <1401464642-33890-1-git-send-email-Waiman.Long@hp.com>

If unfair locking is supported, the lock acquisition loop at the end of
the queue_spin_lock_slowpath() function may need to detect that the lock
can be stolen. Code is added for stolen-lock detection.

Signed-off-by: Waiman Long <Waiman.Long@hp.com>
---
 kernel/locking/qspinlock.c |   26 ++++++++++++++++++--------
 1 files changed, 18 insertions(+), 8 deletions(-)

diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 2c7abe7..ae1b19d 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -94,7 +94,7 @@ static inline struct mcs_spinlock *decode_tail(u32 tail)
  * can allow better optimization of the lock acquisition for the pending
  * bit holder.
  *
- * This internal structure is also used by the set_locked function which
+ * This internal structure is also used by the try_set_locked function which
  * is not restricted to _Q_PENDING_BITS == 8.
  */
 struct __qspinlock {
@@ -206,19 +206,21 @@ static __always_inline u32 xchg_tail(struct qspinlock *lock, u32 tail)
 #endif /* _Q_PENDING_BITS == 8 */
 
 /**
- * set_locked - Set the lock bit and own the lock
- * @lock: Pointer to queue spinlock structure
+ * try_set_locked - Try to set the lock bit and own the lock
+ * @lock : Pointer to queue spinlock structure
+ * Return: 1 if lock acquired, 0 otherwise
  *
  * This routine should only be called when the caller is the only one
  * entitled to acquire the lock.
  */
-static __always_inline void set_locked(struct qspinlock *lock)
+static __always_inline int try_set_locked(struct qspinlock *lock)
 {
 	struct __qspinlock *l = (void *)lock;
 
 	barrier();
 	ACCESS_ONCE(l->locked) = _Q_LOCKED_VAL;
 	barrier();
+	return 1;
 }
 
 /**
@@ -357,11 +359,12 @@ queue:
 	/*
 	 * we're at the head of the waitqueue, wait for the owner & pending to
 	 * go away.
-	 * Load-acquired is used here because the set_locked()
+	 * Load-acquired is used here because the try_set_locked()
 	 * function below may not be a full memory barrier.
 	 *
 	 * *,x,y -> *,0,0
 	 */
+retry_queue_wait:
 	while ((val = smp_load_acquire(&lock->val.counter))
 				       & _Q_LOCKED_PENDING_MASK)
 		arch_mutex_cpu_relax();
@@ -378,13 +381,20 @@ queue:
 	 */
 	for (;;) {
 		if (val != tail) {
-			set_locked(lock);
-			break;
+			/*
+			 * The try_set_locked function can only fail if the
+			 * lock was stolen.
+			 */
+			if (try_set_locked(lock))
+				break;
+			else
+				goto retry_queue_wait;
 		}
 		old = atomic_cmpxchg(&lock->val, val, _Q_LOCKED_VAL);
 		if (old == val)
 			goto release;	/* No contention */
-
+		else if (old & _Q_LOCKED_MASK)
+			goto retry_queue_wait;
 		val = old;
 	}
 
-- 
1.7.1
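In this patch, try_set_locked() still claims the lock unconditionally and
returns 1, because with a fair queue the head of the waitqueue is the only
CPU entitled to set the locked byte; the new return value and the
retry_queue_wait label only start to matter once lock stealing is allowed.
As a rough illustration only (not the implementation introduced later in
this series), a stealing-aware variant would have to claim the locked byte
atomically, e.g. with a compare-and-swap, so it can report failure when
another CPU has taken the lock in the meantime:

/*
 * Illustrative sketch only: a lock-stealing-aware try_set_locked().
 * Assumes cmpxchg() can operate on the byte-sized locked field, as it
 * can on x86 when _Q_PENDING_BITS == 8.
 */
static __always_inline int try_set_locked(struct qspinlock *lock)
{
	struct __qspinlock *l = (void *)lock;

	/* Succeed only if the locked byte is still clear. */
	return cmpxchg(&l->locked, 0, _Q_LOCKED_VAL) == 0;
}

On failure the slowpath simply jumps back to retry_queue_wait and resumes
spinning until the stealing CPU releases the lock. The extra
"else if (old & _Q_LOCKED_MASK)" branch after the atomic_cmpxchg() covers
the same situation for the last queued CPU: if the lock was stolen between
the wait loop and the cmpxchg, it goes back to waiting instead of looping
on a value that can no longer succeed.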