From: Waiman Long <Waiman.Long@hp.com>
To: Thomas Gleixner <tglx@linutronix.de>,
Ingo Molnar <mingo@redhat.com>, "H. Peter Anvin" <hpa@zytor.com>,
Peter Zijlstra <peterz@infradead.org>
Cc: linux-arch@vger.kernel.org, Waiman Long <Waiman.Long@hp.com>,
Rik van Riel <riel@redhat.com>,
Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
Gleb Natapov <gleb@redhat.com>,
kvm@vger.kernel.org,
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
Scott J Norton <scott.norton@hp.com>,
x86@kernel.org, Paolo Bonzini <paolo.bonzini@gmail.com>,
linux-kernel@vger.kernel.org,
virtualization@lists.linux-foundation.org,
Chegu Vinod <chegu_vinod@hp.com>,
David Vrabel <david.vrabel@citrix.com>,
Oleg Nesterov <oleg@redhat.com>,
xen-devel@lists.xenproject.org,
Boris Ostrovsky <boris.ostrovsky@oracle.com>,
"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
Linus Torvalds <torvalds@linux-foundation.org>
Subject: [PATCH v10 09/19] qspinlock: Prepare for unfair lock support
Date: Wed, 7 May 2014 11:01:37 -0400 [thread overview]
Message-ID: <1399474907-22206-10-git-send-email-Waiman.Long@hp.com> (raw)
In-Reply-To: <1399474907-22206-1-git-send-email-Waiman.Long@hp.com>
If unfair lock support is enabled, the lock acquisition loop at the end
of the queue_spin_lock_slowpath() function may need to detect that the
lock has been stolen. This patch adds the code for that stolen-lock
detection: get_qlock() now returns a status, and the queue head goes
back to waiting in the queue whenever an acquisition attempt loses out
to a stealer. A new qhead macro is also defined as a shorthand for
mcs.locked.
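For illustration only: in this patch get_qlock() still always succeeds
(it returns 1 unconditionally); the failing case only becomes possible
once a later patch in this series enables lock stealing by unfair
waiters. A stealing-aware variant could detect the steal with a byte
cmpxchg on the lock byte, along the lines of the sketch below. This is
just one plausible shape for it, not the version this series ends up
using.

	static __always_inline int get_qlock(struct qspinlock *lock)
	{
		struct __qspinlock *l = (void *)lock;

		/*
		 * Try to flip the lock byte from 0 to _Q_LOCKED_VAL.
		 * If an unfair waiter stole the lock first, the byte
		 * is already nonzero and the cmpxchg fails; report
		 * that to the caller so it can re-wait in the queue.
		 */
		return cmpxchg(&l->locked, 0, _Q_LOCKED_VAL) == 0;
	}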
Signed-off-by: Waiman Long <Waiman.Long@hp.com>
---
kernel/locking/qspinlock.c | 26 +++++++++++++++++++-------
1 files changed, 19 insertions(+), 7 deletions(-)
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index e98d7d4..9e7659e 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -64,6 +64,7 @@
struct qnode {
struct mcs_spinlock mcs;
};
+#define qhead mcs.locked /* The queue head flag */
/*
* Per-CPU queue node structures; we can never have more than 4 nested
@@ -216,18 +217,20 @@ xchg_tail(struct qspinlock *lock, u32 tail, u32 *pval)
/**
* get_qlock - Set the lock bit and own the lock
- * @lock: Pointer to queue spinlock structure
+ * @lock : Pointer to queue spinlock structure
+ * Return: 1 if lock acquired, 0 otherwise
*
* This routine should only be called when the caller is the only one
* entitled to acquire the lock.
*/
-static __always_inline void get_qlock(struct qspinlock *lock)
+static __always_inline int get_qlock(struct qspinlock *lock)
{
struct __qspinlock *l = (void *)lock;
barrier();
ACCESS_ONCE(l->locked) = _Q_LOCKED_VAL;
barrier();
+ return 1;
}
/**
@@ -365,7 +368,7 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
tail = encode_tail(smp_processor_id(), idx);
node += idx;
- node->mcs.locked = 0;
+ node->qhead = 0;
node->mcs.next = NULL;
/*
@@ -391,7 +394,7 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
prev = decode_tail(old);
ACCESS_ONCE(prev->mcs.next) = (struct mcs_spinlock *)node;
- while (!smp_load_acquire(&node->mcs.locked))
+ while (!smp_load_acquire(&node->qhead))
arch_mutex_cpu_relax();
}
@@ -403,6 +406,7 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
*
* *,x,y -> *,0,0
*/
+retry_queue_wait:
while ((val = smp_load_acquire(&lock->val.counter))
& _Q_LOCKED_PENDING_MASK)
arch_mutex_cpu_relax();
@@ -419,12 +423,20 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
*/
for (;;) {
if (val != tail) {
- get_qlock(lock);
- break;
+			/*
+			 * The get_qlock() call will fail only if the
+			 * lock was stolen.
+			 */
+ if (get_qlock(lock))
+ break;
+ else
+ goto retry_queue_wait;
}
old = atomic_cmpxchg(&lock->val, val, _Q_LOCKED_VAL);
if (old == val)
goto release; /* No contention */
+ else if (old & _Q_LOCKED_MASK)
+ goto retry_queue_wait;
val = old;
}
@@ -435,7 +447,7 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
while (!(next = (struct qnode *)ACCESS_ONCE(node->mcs.next)))
arch_mutex_cpu_relax();
- arch_mcs_spin_unlock_contended(&next->mcs.locked);
+ arch_mcs_spin_unlock_contended(&next->qhead);
release:
/*
--
1.7.1