xen-devel.lists.xenproject.org archive mirror
From: Jeremy Fitzhardinge <jeremy@goop.org>
To: Linux Kernel Mailing List <linux-kernel@vger.kernel.org>
Cc: Nick Piggin <npiggin@suse.de>,
	Peter Zijlstra <peterz@infradead.org>,
	Jan Beulich <JBeulich@novell.com>, Avi Kivity <avi@redhat.com>,
	Xen-devel <xen-devel@lists.xensource.com>
Subject: [PATCH RFC 10/12] x86/pvticketlock: keep count of blocked cpus
Date: Fri, 16 Jul 2010 18:03:04 -0700 (PDT)	[thread overview]
Message-ID: <9d5625e61c7f35e72156e8cb881e55910b4fa5dc.1279328276.git.jeremy.fitzhardinge@citrix.com> (raw)
In-Reply-To: <cover.1279328276.git.jeremy.fitzhardinge@citrix.com>

When a CPU blocks by calling into __ticket_lock_spinning, keep a count in
the spinlock.  This allows __ticket_unlock_kick to tell more accurately
whether it has any work to do (in many cases a spinlock may be contended,
but none of the waiters have actually blocked).

This adds two locked instructions to the spinlock slow path (once the
lock has already spun for SPIN_THRESHOLD iterations), and adds another
one or two bytes to struct arch_spinlock.

We need to make sure we increment the waiting counter before doing the
last-chance check of the lock to see if we picked it up in the meantime.
If we don't, there's a potential deadlock:

	lock holder		lock waiter

				clear event channel
				check lock for pickup (did not)
	release lock
	check waiting counter
		(=0, no kick)
				add waiting counter
				block (=deadlock)

Moving the "add waiting counter" step earlier avoids the deadlock:

	lock holder		lock waiter

				clear event channel
				add waiting counter
				check lock for pickup (did not)
	release lock
	check waiting counter
		(=1, kick)
				block (and immediately wake)

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/include/asm/spinlock.h       |   27 ++++++++++++++++++++++++++-
 arch/x86/include/asm/spinlock_types.h |    3 +++
 arch/x86/xen/spinlock.c               |    4 ++++
 3 files changed, 33 insertions(+), 1 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index a79dfee..3deabca 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -65,6 +65,31 @@ static __always_inline void ____ticket_unlock_kick(struct arch_spinlock *lock, u
 {
 }
 
+static __always_inline bool __ticket_lock_waiters(const struct arch_spinlock *lock)
+{
+	return false;
+}
+#else
+static inline void __ticket_add_waiting(struct arch_spinlock *lock)
+{
+	if (sizeof(lock->waiting) == sizeof(u8))
+		asm (LOCK_PREFIX "addb $1, %0" : "+m" (lock->waiting) : : "memory");
+	else
+		asm (LOCK_PREFIX "addw $1, %0" : "+m" (lock->waiting) : : "memory");
+}
+
+static inline void __ticket_sub_waiting(struct arch_spinlock *lock)
+{
+	if (sizeof(lock->waiting) == sizeof(u8))
+		asm (LOCK_PREFIX "subb $1, %0" : "+m" (lock->waiting) : : "memory");
+	else
+		asm (LOCK_PREFIX "subw $1, %0" : "+m" (lock->waiting) : : "memory");
+}
+
+static __always_inline bool __ticket_lock_waiters(const struct arch_spinlock *lock)
+{
+	return ACCESS_ONCE(lock->waiting) != 0;
+}
 #endif	/* CONFIG_PARAVIRT_SPINLOCKS */
 
 /*
@@ -106,7 +131,7 @@ static __always_inline struct __raw_tickets __ticket_spin_claim(struct arch_spin
  */
 static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock, __ticket_t next)
 {
-	if (unlikely(lock->tickets.tail != next))
+	if (unlikely(__ticket_lock_waiters(lock)))
 		____ticket_unlock_kick(lock, next);
 }
 
diff --git a/arch/x86/include/asm/spinlock_types.h b/arch/x86/include/asm/spinlock_types.h
index 48dafc3..b396ed5 100644
--- a/arch/x86/include/asm/spinlock_types.h
+++ b/arch/x86/include/asm/spinlock_types.h
@@ -26,6 +26,9 @@ typedef struct arch_spinlock {
 			__ticket_t head, tail;
 		} tickets;
 	};
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+	__ticket_t waiting;
+#endif
 } arch_spinlock_t;
 
 #define __ARCH_SPIN_LOCK_UNLOCKED	{ { .slock = 0 } }
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index e60d5f1..2f81d5e 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -118,6 +118,8 @@ static void xen_lock_spinning(struct arch_spinlock *lock, unsigned want)
 	/* Only check lock once pending cleared */
 	barrier();
 
+	__ticket_add_waiting(lock);
+
 	/* check again make sure it didn't become free while
 	   we weren't looking  */
 	if (ACCESS_ONCE(lock->tickets.head) == want) {
@@ -132,6 +134,8 @@ static void xen_lock_spinning(struct arch_spinlock *lock, unsigned want)
 	kstat_incr_irqs_this_cpu(irq, irq_to_desc(irq));
 
 out:
+	__ticket_sub_waiting(lock);
+
 	cpumask_clear_cpu(cpu, &waiting_cpus);
 	w->lock = NULL;
 	spin_time_accum_blocked(start);
-- 
1.7.1.1

