* [PATCH RFC V5 00/11] Paravirtualized ticketlocks
@ 2011-10-13  0:51 Jeremy Fitzhardinge
  2011-10-13  0:51 ` [PATCH RFC V5 01/11] x86/spinlock: replace pv spinlocks with pv ticketlocks Jeremy Fitzhardinge
                   ` (11 more replies)
  0 siblings, 12 replies; 25+ messages in thread
From: Jeremy Fitzhardinge @ 2011-10-13  0:51 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Linus Torvalds, Peter Zijlstra, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

[ Changes since last posting: 
  - Use "lock add" for unlock operation rather than "lock xadd"; it is
    equivalent to "add; mfence", but more efficient than both "lock
    xadd" and "mfence".

  I think this version is ready for submission.
]

NOTE: this series is available in:
      git://github.com/jsgf/linux-xen.git upstream/pvticketlock-slowflag
and is based on the previously posted ticketlock cleanup series in
      git://github.com/jsgf/linux-xen.git upstream/ticketlock-cleanup

This series replaces the existing paravirtualized spinlock mechanism
with a paravirtualized ticketlock mechanism.

Ticket locks have an inherent problem in a virtualized environment,
because the vCPUs are scheduled rather than running concurrently
(ignoring gang-scheduled vCPUs).  This can result in catastrophic performance
collapses when the vCPU scheduler doesn't schedule the correct "next"
vCPU, and ends up scheduling a vCPU which burns its entire timeslice
spinning.  (Note that this is not the same problem as lock-holder
preemption, which this series also addresses; that's also a problem,
but not catastrophic).

(See Thomas Friebel's talk "Prevent Guests from Spinning Around"
http://www.xen.org/files/xensummitboston08/LHP.pdf for more details.)

Currently we deal with this by having PV spinlocks, which add a layer
of indirection in front of all the spinlock functions and define a
completely new implementation for Xen (and for other pvops users,
though there are none at present).

PV ticketlocks keep the existing ticketlock implementation (the
fastpath) as-is, but add a couple of pvops for the slow paths:

- If a CPU has been waiting for a spinlock for SPIN_THRESHOLD
  iterations, it calls out to the __ticket_lock_spinning() pvop, which
  allows a backend to block the vCPU rather than keep spinning.  This
  pvop can set the lock into "slowpath state".

- When releasing a lock, if it is in "slowpath state", call
  __ticket_unlock_kick() to kick the next vCPU in line awake.  If the
  lock is no longer contended, it also clears the slowpath flag.
  (Both hooks are sketched just below.)
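
For reference, the two pvops end up with the following rough shape (a
condensed sketch of what patches 1, 7 and 9 below add; the
CONFIG_PARAVIRT_SPINLOCKS=n stubs are what a native build sees):

	struct pv_lock_ops {
		/* block this vCPU until it is kicked by the lock holder */
		struct paravirt_callee_save lock_spinning;
		/* wake whichever vCPU is waiting for "ticket" on "lock" */
		void (*unlock_kick)(struct arch_spinlock *lock, __ticket_t ticket);
	};

	/* Without CONFIG_PARAVIRT_SPINLOCKS these collapse to empty
	   inlines, so the native ticketlock code is unchanged: */
	static __always_inline void
	__ticket_lock_spinning(arch_spinlock_t *lock, __ticket_t ticket) { }

	static __always_inline void
	__ticket_unlock_kick(arch_spinlock_t *lock, __ticket_t ticket) { }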

The "slowpath state" is stored in the LSB of the within the lock tail
ticket.  This has the effect of reducing the max number of CPUs by
half (so, a "small ticket" can deal with 128 CPUs, and "large ticket"
32768).
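
Concretely, when PV spinlocks are configured in, the ticket increment
becomes 2 and the LSB of the tail is reserved for the flag.  The
relevant definitions (condensed from patches 8 and 9) work out as:

	#ifdef CONFIG_PARAVIRT_SPINLOCKS
	#define __TICKET_LOCK_INC	2
	#define TICKET_SLOWPATH_FLAG	((__ticket_t)1)
	#else
	#define __TICKET_LOCK_INC	1
	#define TICKET_SLOWPATH_FLAG	((__ticket_t)0)
	#endif

	/* Tickets advance in steps of 2, so an 8-bit ticket distinguishes
	   256 / 2 = 128 waiters, and a 16-bit ticket 65536 / 2 = 32768. */
	#if (CONFIG_NR_CPUS < (256 / __TICKET_LOCK_INC))
	typedef u8  __ticket_t;
	typedef u16 __ticketpair_t;
	#else
	typedef u16 __ticket_t;
	typedef u32 __ticketpair_t;
	#endif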

This series provides a Xen implementation, but it should be
straightforward to add a KVM implementation as well.

Overall, it results in a large reduction in code, it makes the native
and virtualized cases closer, and it removes a layer of indirection
around all the spinlock functions.

The fast path (taking an uncontended lock which isn't in "slowpath"
state) is optimal, identical to the non-paravirtualized case.

The inner part of the ticket lock code becomes:
	inc = xadd(&lock->tickets, inc);
	inc.tail &= ~TICKET_SLOWPATH_FLAG;

	if (likely(inc.head == inc.tail))
		goto out;

	for (;;) {
		unsigned count = SPIN_THRESHOLD;

		do {
			if (ACCESS_ONCE(lock->tickets.head) == inc.tail)
				goto out;
			cpu_relax();
		} while (--count);
		__ticket_lock_spinning(lock, inc.tail);
	}
out:	barrier();

which results in:
	push   %rbp
	mov    %rsp,%rbp

	mov    $0x200,%eax
	lock xadd %ax,(%rdi)
	movzbl %ah,%edx
	cmp    %al,%dl
	jne    1f	# Slowpath if lock in contention

	pop    %rbp
	retq   

	### SLOWPATH START
1:	and    $-2,%edx
	movzbl %dl,%esi

2:	mov    $0x800,%eax
	jmp    4f

3:	pause  
	sub    $0x1,%eax
	je     5f

4:	movzbl (%rdi),%ecx
	cmp    %cl,%dl
	jne    3b

	pop    %rbp
	retq   

5:	callq  *__ticket_lock_spinning
	jmp    2b
	### SLOWPATH END

With CONFIG_PARAVIRT_SPINLOCKS=n, the generated code changes only
slightly: the fastpath case (taking the lock without contention) is
straight through, and the spin loop is out of line:

	push   %rbp
	mov    %rsp,%rbp

	mov    $0x100,%eax
	lock xadd %ax,(%rdi)
	movzbl %ah,%edx
	cmp    %al,%dl
	jne    1f

	pop    %rbp
	retq   

	### SLOWPATH START
1:	pause  
	movzbl (%rdi),%eax
	cmp    %dl,%al
	jne    1b

	pop    %rbp
	retq   
	### SLOWPATH END

The unlock code is complicated by the need to both add to the lock's
"head" and fetch the slowpath flag from "tail".  This version of the
patch uses a locked add to do this, followed by a test to see if the
slowpath flag is set.  The lock prefix acts as a full memory barrier,
so we can be sure that other CPUs will have seen the unlock before we
read the flag (without the barrier, the read could be satisfied from
the store buffer before the unlock becomes visible to other CPUs,
which could result in a deadlock).

Since this is all unnecessary complication if you're not using PV
ticket locks, the code also uses the jump-label machinery to fall back
to the standard "add"-based unlock in the non-PV case.

	if (TICKET_SLOWPATH_FLAG &&
	    unlikely(static_branch(&paravirt_ticketlocks_enabled))) {
		arch_spinlock_t prev;

		prev = *lock;
		add_smp(&lock->tickets.head, TICKET_LOCK_INC);

		/* add_smp() is a full mb() */

		if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
			__ticket_unlock_slowpath(lock, prev);
	} else
		__add(&lock->tickets.head, TICKET_LOCK_INC, UNLOCK_LOCK_PREFIX);

which generates:
	push   %rbp
	mov    %rsp,%rbp

	nop5	# replaced by 5-byte jmp 2f when PV enabled

	# non-PV unlock
	addb   $0x2,(%rdi)

1:	pop    %rbp
	retq   

### PV unlock ###
2:	movzwl (%rdi),%esi	# Fetch prev

	lock addb $0x2,(%rdi)	# Do unlock

	testb  $0x1,0x1(%rdi)	# Test flag
	je     1b		# Finished if not set

### Slow path ###
	add    $2,%sil		# Add "head" in old lock state
	mov    %esi,%edx
	and    $0xfe,%dh	# clear slowflag for comparison
	movzbl %dh,%eax
	cmp    %dl,%al		# If head == tail (uncontended)
	je     4f		# clear slowpath flag

	# Kick next CPU waiting for lock
3:	movzbl %sil,%esi
	callq  *pv_lock_ops.kick

	pop    %rbp
	retq   

	# Lock no longer contended - clear slowflag
4:	mov    %esi,%eax
	lock cmpxchg %dx,(%rdi)	# cmpxchg to clear flag
	cmp    %si,%ax
	jne    3b		# If clear failed, then kick

	pop    %rbp
	retq   

So when not using PV ticketlocks, the unlock sequence just has a
5-byte nop added to it, and the PV case is reasonably straightforward
aside from requiring a "lock add".
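
For completeness, the slowpath helper called above looks like this
(lifted from patch 9, comments lightly trimmed).  It redoes the unlock
on the "before" snapshot of the lock, clears the flag with a cmpxchg
if that leaves the lock uncontended, and otherwise kicks the next
waiter:

	static inline void __ticket_unlock_slowpath(arch_spinlock_t *lock,
						    arch_spinlock_t old)
	{
		arch_spinlock_t new;

		/* Perform the unlock on the "before" copy */
		old.tickets.head += TICKET_LOCK_INC;

		/* Clear the slowpath flag */
		new.head_tail = old.head_tail & ~(TICKET_SLOWPATH_FLAG << TICKET_SHIFT);

		/*
		 * If the lock is uncontended, clear the flag - use cmpxchg in
		 * case it changes behind our back though.
		 */
		if (new.tickets.head != new.tickets.tail ||
		    cmpxchg(&lock->head_tail, old.head_tail,
			    new.head_tail) != old.head_tail) {
			/* Still contended (or we raced): kick the next waiter */
			__ticket_unlock_kick(lock, old.tickets.head);
		}
	}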

Thoughts? Comments? Suggestions?

Jeremy Fitzhardinge (10):
  x86/spinlock: replace pv spinlocks with pv ticketlocks
  x86/ticketlock: don't inline _spin_unlock when using paravirt
    spinlocks
  x86/ticketlock: collapse a layer of functions
  xen: defer spinlock setup until boot CPU setup
  xen/pvticketlock: Xen implementation for PV ticket locks
  xen/pvticketlocks: add xen_nopvspin parameter to disable xen pv
    ticketlocks
  x86/pvticketlock: use callee-save for lock_spinning
  x86/pvticketlock: when paravirtualizing ticket locks, increment by 2
  x86/ticketlock: add slowpath logic
  xen/pvticketlock: allow interrupts to be enabled while blocking

Stefano Stabellini (1):
  xen: enable PV ticketlocks on HVM Xen

 arch/x86/Kconfig                      |    3 +
 arch/x86/include/asm/paravirt.h       |   30 +---
 arch/x86/include/asm/paravirt_types.h |   10 +-
 arch/x86/include/asm/spinlock.h       |  126 +++++++++----
 arch/x86/include/asm/spinlock_types.h |   16 +-
 arch/x86/kernel/paravirt-spinlocks.c  |   18 +--
 arch/x86/xen/smp.c                    |    3 +-
 arch/x86/xen/spinlock.c               |  331 ++++++++++-----------------------
 kernel/Kconfig.locks                  |    2 +-
 9 files changed, 210 insertions(+), 329 deletions(-)

-- 
1.7.6.4


* [PATCH RFC V5 01/11] x86/spinlock: replace pv spinlocks with pv ticketlocks
  2011-10-13  0:51 [PATCH RFC V5 00/11] Paravirtualized ticketlocks Jeremy Fitzhardinge
@ 2011-10-13  0:51 ` Jeremy Fitzhardinge
  2011-10-13  0:51 ` [PATCH RFC V5 02/11] x86/ticketlock: don't inline _spin_unlock when using paravirt spinlocks Jeremy Fitzhardinge
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Jeremy Fitzhardinge @ 2011-10-13  0:51 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Linus Torvalds, Peter Zijlstra, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

Rather than outright replacing the entire spinlock implementation in
order to paravirtualize it, keep the ticket lock implementation but add
a couple of pvops hooks on the slow path (long spin on lock, unlocking
a contended lock).

Ticket locks have a number of nice properties, but they also have some
surprising behaviours in virtual environments.  They enforce a strict
FIFO ordering on cpus trying to take a lock; however, if the hypervisor
scheduler does not schedule the cpus in the correct order, the system can
waste a huge amount of time spinning until the next cpu can take the lock.

(See Thomas Friebel's talk "Prevent Guests from Spinning Around"
http://www.xen.org/files/xensummitboston08/LHP.pdf for more details.)

To address this, we add two hooks:
 - __ticket_spin_lock which is called after the cpu has been
   spinning on the lock for a significant number of iterations but has
   failed to take the lock (presumably because the cpu holding the lock
   has been descheduled).  The lock_spinning pvop is expected to block
   the cpu until it has been kicked by the current lock holder.
 - __ticket_spin_unlock, which, on releasing a contended lock (when
   there are still cpus waiting with tail tickets), looks to see if the
   next cpu is blocked and wakes it if so.

When compiled with CONFIG_PARAVIRT_SPINLOCKS disabled, a set of stub
functions causes all the extra code to go away.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/include/asm/paravirt.h       |   30 ++-----------------
 arch/x86/include/asm/paravirt_types.h |   10 ++----
 arch/x86/include/asm/spinlock.h       |   50 ++++++++++++++++++++++++++------
 arch/x86/include/asm/spinlock_types.h |    4 --
 arch/x86/kernel/paravirt-spinlocks.c  |   15 +--------
 arch/x86/xen/spinlock.c               |    7 ++++-
 6 files changed, 56 insertions(+), 60 deletions(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index a7d2db9..76cae7a 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -750,36 +750,14 @@ static inline void __set_fixmap(unsigned /* enum fixed_addresses */ idx,
 
 #if defined(CONFIG_SMP) && defined(CONFIG_PARAVIRT_SPINLOCKS)
 
-static inline int arch_spin_is_locked(struct arch_spinlock *lock)
+static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock, __ticket_t ticket)
 {
-	return PVOP_CALL1(int, pv_lock_ops.spin_is_locked, lock);
+	PVOP_VCALL2(pv_lock_ops.lock_spinning, lock, ticket);
 }
 
-static inline int arch_spin_is_contended(struct arch_spinlock *lock)
+static __always_inline void ____ticket_unlock_kick(struct arch_spinlock *lock, __ticket_t ticket)
 {
-	return PVOP_CALL1(int, pv_lock_ops.spin_is_contended, lock);
-}
-#define arch_spin_is_contended	arch_spin_is_contended
-
-static __always_inline void arch_spin_lock(struct arch_spinlock *lock)
-{
-	PVOP_VCALL1(pv_lock_ops.spin_lock, lock);
-}
-
-static __always_inline void arch_spin_lock_flags(struct arch_spinlock *lock,
-						  unsigned long flags)
-{
-	PVOP_VCALL2(pv_lock_ops.spin_lock_flags, lock, flags);
-}
-
-static __always_inline int arch_spin_trylock(struct arch_spinlock *lock)
-{
-	return PVOP_CALL1(int, pv_lock_ops.spin_trylock, lock);
-}
-
-static __always_inline void arch_spin_unlock(struct arch_spinlock *lock)
-{
-	PVOP_VCALL1(pv_lock_ops.spin_unlock, lock);
+	PVOP_VCALL2(pv_lock_ops.unlock_kick, lock, ticket);
 }
 
 #endif
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 8e8b9a4..005e24d 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -327,13 +327,11 @@ struct pv_mmu_ops {
 };
 
 struct arch_spinlock;
+#include <asm/spinlock_types.h>
+
 struct pv_lock_ops {
-	int (*spin_is_locked)(struct arch_spinlock *lock);
-	int (*spin_is_contended)(struct arch_spinlock *lock);
-	void (*spin_lock)(struct arch_spinlock *lock);
-	void (*spin_lock_flags)(struct arch_spinlock *lock, unsigned long flags);
-	int (*spin_trylock)(struct arch_spinlock *lock);
-	void (*spin_unlock)(struct arch_spinlock *lock);
+	void (*lock_spinning)(struct arch_spinlock *lock, __ticket_t ticket);
+	void (*unlock_kick)(struct arch_spinlock *lock, __ticket_t ticket);
 };
 
 /* This contains all the paravirt structures: we get a convenient
diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index a82c2bf..5efd2f9 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -37,6 +37,32 @@
 # define UNLOCK_LOCK_PREFIX
 #endif
 
+/* How long a lock should spin before we consider blocking */
+#define SPIN_THRESHOLD	(1 << 11)
+
+#ifndef CONFIG_PARAVIRT_SPINLOCKS
+
+static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock, __ticket_t ticket)
+{
+}
+
+static __always_inline void ____ticket_unlock_kick(struct arch_spinlock *lock, __ticket_t ticket)
+{
+}
+
+#endif	/* CONFIG_PARAVIRT_SPINLOCKS */
+
+
+/* 
+ * If a spinlock has someone waiting on it, then kick the appropriate
+ * waiting cpu.
+ */
+static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock, __ticket_t next)
+{
+	if (unlikely(lock->tickets.tail != next))
+		____ticket_unlock_kick(lock, next);
+}
+
 /*
  * Ticket locks are conceptually two parts, one indicating the current head of
  * the queue, and the other indicating the current tail. The lock is acquired
@@ -50,19 +76,24 @@
  * in the high part, because a wide xadd increment of the low part would carry
  * up and contaminate the high part.
  */
-static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
+static __always_inline void __ticket_spin_lock(struct arch_spinlock *lock)
 {
 	register struct __raw_tickets inc = { .tail = 1 };
 
 	inc = xadd(&lock->tickets, inc);
 
 	for (;;) {
-		if (inc.head == inc.tail)
-			break;
-		cpu_relax();
-		inc.head = ACCESS_ONCE(lock->tickets.head);
+		unsigned count = SPIN_THRESHOLD;
+
+		do {
+			if (inc.head == inc.tail)
+				goto out;
+			cpu_relax();
+			inc.head = ACCESS_ONCE(lock->tickets.head);
+		} while (--count);
+		__ticket_lock_spinning(lock, inc.tail);
 	}
-	barrier();		/* make sure nothing creeps before the lock is taken */
+out:	barrier();		/* make sure nothing creeps before the lock is taken */
 }
 
 static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
@@ -81,7 +112,10 @@ static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
 
 static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
 {
+	__ticket_t next = lock->tickets.head + 1;
+
 	__add(&lock->tickets.head, 1, UNLOCK_LOCK_PREFIX);
+	__ticket_unlock_kick(lock, next);
 }
 
 static inline int __ticket_spin_is_locked(arch_spinlock_t *lock)
@@ -98,8 +132,6 @@ static inline int __ticket_spin_is_contended(arch_spinlock_t *lock)
 	return ((tmp.tail - tmp.head) & TICKET_MASK) > 1;
 }
 
-#ifndef CONFIG_PARAVIRT_SPINLOCKS
-
 static inline int arch_spin_is_locked(arch_spinlock_t *lock)
 {
 	return __ticket_spin_is_locked(lock);
@@ -132,8 +164,6 @@ static __always_inline void arch_spin_lock_flags(arch_spinlock_t *lock,
 	arch_spin_lock(lock);
 }
 
-#endif	/* CONFIG_PARAVIRT_SPINLOCKS */
-
 static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
 {
 	while (arch_spin_is_locked(lock))
diff --git a/arch/x86/include/asm/spinlock_types.h b/arch/x86/include/asm/spinlock_types.h
index 8ebd5df..dbe223d 100644
--- a/arch/x86/include/asm/spinlock_types.h
+++ b/arch/x86/include/asm/spinlock_types.h
@@ -1,10 +1,6 @@
 #ifndef _ASM_X86_SPINLOCK_TYPES_H
 #define _ASM_X86_SPINLOCK_TYPES_H
 
-#ifndef __LINUX_SPINLOCK_TYPES_H
-# error "please don't include this file directly"
-#endif
-
 #include <linux/types.h>
 
 #if (CONFIG_NR_CPUS < 256)
diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
index 676b8c7..c2e010e 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -7,21 +7,10 @@
 
 #include <asm/paravirt.h>
 
-static inline void
-default_spin_lock_flags(arch_spinlock_t *lock, unsigned long flags)
-{
-	arch_spin_lock(lock);
-}
-
 struct pv_lock_ops pv_lock_ops = {
 #ifdef CONFIG_SMP
-	.spin_is_locked = __ticket_spin_is_locked,
-	.spin_is_contended = __ticket_spin_is_contended,
-
-	.spin_lock = __ticket_spin_lock,
-	.spin_lock_flags = default_spin_lock_flags,
-	.spin_trylock = __ticket_spin_trylock,
-	.spin_unlock = __ticket_spin_unlock,
+	.lock_spinning = paravirt_nop,
+	.unlock_kick = paravirt_nop,
 #endif
 };
 EXPORT_SYMBOL(pv_lock_ops);
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index cc9b1e1..23af06a 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -121,6 +121,9 @@ struct xen_spinlock {
 	unsigned short spinners;	/* count of waiting cpus */
 };
 
+static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
+
+#if 0
 static int xen_spin_is_locked(struct arch_spinlock *lock)
 {
 	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
@@ -148,7 +151,6 @@ static int xen_spin_trylock(struct arch_spinlock *lock)
 	return old == 0;
 }
 
-static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
 static DEFINE_PER_CPU(struct xen_spinlock *, lock_spinners);
 
 /*
@@ -338,6 +340,7 @@ static void xen_spin_unlock(struct arch_spinlock *lock)
 	if (unlikely(xl->spinners))
 		xen_spin_unlock_slow(xl);
 }
+#endif
 
 static irqreturn_t dummy_handler(int irq, void *dev_id)
 {
@@ -373,12 +376,14 @@ void xen_uninit_lock_cpu(int cpu)
 
 void __init xen_init_spinlocks(void)
 {
+#if 0
 	pv_lock_ops.spin_is_locked = xen_spin_is_locked;
 	pv_lock_ops.spin_is_contended = xen_spin_is_contended;
 	pv_lock_ops.spin_lock = xen_spin_lock;
 	pv_lock_ops.spin_lock_flags = xen_spin_lock_flags;
 	pv_lock_ops.spin_trylock = xen_spin_trylock;
 	pv_lock_ops.spin_unlock = xen_spin_unlock;
+#endif
 }
 
 #ifdef CONFIG_XEN_DEBUG_FS
-- 
1.7.6.4



* [PATCH RFC V5 02/11] x86/ticketlock: don't inline _spin_unlock when using paravirt spinlocks
  2011-10-13  0:51 [PATCH RFC V5 00/11] Paravirtualized ticketlocks Jeremy Fitzhardinge
  2011-10-13  0:51 ` [PATCH RFC V5 01/11] x86/spinlock: replace pv spinlocks with pv ticketlocks Jeremy Fitzhardinge
@ 2011-10-13  0:51 ` Jeremy Fitzhardinge
  2011-10-13  0:51 ` [PATCH RFC V5 03/11] x86/ticketlock: collapse a layer of functions Jeremy Fitzhardinge
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Jeremy Fitzhardinge @ 2011-10-13  0:51 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Linus Torvalds, Peter Zijlstra, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

The code size expands somewhat, and it's probably better to just call
a function rather than inline it.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/Kconfig     |    3 +++
 kernel/Kconfig.locks |    2 +-
 2 files changed, 4 insertions(+), 1 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 6a47bb2..1f03f82 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -585,6 +585,9 @@ config PARAVIRT_SPINLOCKS
 
 	  If you are unsure how to answer this question, answer N.
 
+config ARCH_NOINLINE_SPIN_UNLOCK
+       def_bool PARAVIRT_SPINLOCKS
+
 config PARAVIRT_CLOCK
 	bool
 
diff --git a/kernel/Kconfig.locks b/kernel/Kconfig.locks
index 5068e2a..584637b 100644
--- a/kernel/Kconfig.locks
+++ b/kernel/Kconfig.locks
@@ -125,7 +125,7 @@ config INLINE_SPIN_LOCK_IRQSAVE
 		 ARCH_INLINE_SPIN_LOCK_IRQSAVE
 
 config INLINE_SPIN_UNLOCK
-	def_bool !DEBUG_SPINLOCK && (!PREEMPT || ARCH_INLINE_SPIN_UNLOCK)
+	def_bool !DEBUG_SPINLOCK && (!PREEMPT || ARCH_INLINE_SPIN_UNLOCK) && !ARCH_NOINLINE_SPIN_UNLOCK
 
 config INLINE_SPIN_UNLOCK_BH
 	def_bool !DEBUG_SPINLOCK && ARCH_INLINE_SPIN_UNLOCK_BH
-- 
1.7.6.4



* [PATCH RFC V5 03/11] x86/ticketlock: collapse a layer of functions
  2011-10-13  0:51 [PATCH RFC V5 00/11] Paravirtualized ticketlocks Jeremy Fitzhardinge
  2011-10-13  0:51 ` [PATCH RFC V5 01/11] x86/spinlock: replace pv spinlocks with pv ticketlocks Jeremy Fitzhardinge
  2011-10-13  0:51 ` [PATCH RFC V5 02/11] x86/ticketlock: don't inline _spin_unlock when using paravirt spinlocks Jeremy Fitzhardinge
@ 2011-10-13  0:51 ` Jeremy Fitzhardinge
  2011-10-13  0:51 ` [PATCH RFC V5 04/11] xen: defer spinlock setup until boot CPU setup Jeremy Fitzhardinge
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Jeremy Fitzhardinge @ 2011-10-13  0:51 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Linus Torvalds, Peter Zijlstra, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

Now that the paravirtualization layer doesn't exist at the spinlock
level any more, we can collapse the __ticket_ functions into the arch_
functions.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/include/asm/spinlock.h |   35 +++++------------------------------
 1 files changed, 5 insertions(+), 30 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 5efd2f9..f0d6a59 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -76,7 +76,7 @@ static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock, __t
  * in the high part, because a wide xadd increment of the low part would carry
  * up and contaminate the high part.
  */
-static __always_inline void __ticket_spin_lock(struct arch_spinlock *lock)
+static __always_inline void arch_spin_lock(struct arch_spinlock *lock)
 {
 	register struct __raw_tickets inc = { .tail = 1 };
 
@@ -96,7 +96,7 @@ static __always_inline void __ticket_spin_lock(struct arch_spinlock *lock)
 out:	barrier();		/* make sure nothing creeps before the lock is taken */
 }
 
-static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
+static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
 {
 	arch_spinlock_t old, new;
 
@@ -110,7 +110,7 @@ static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
 	return cmpxchg(&lock->head_tail, old.head_tail, new.head_tail) == old.head_tail;
 }
 
-static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
+static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
 {
 	__ticket_t next = lock->tickets.head + 1;
 
@@ -118,46 +118,21 @@ static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
 	__ticket_unlock_kick(lock, next);
 }
 
-static inline int __ticket_spin_is_locked(arch_spinlock_t *lock)
+static inline int arch_spin_is_locked(arch_spinlock_t *lock)
 {
 	struct __raw_tickets tmp = ACCESS_ONCE(lock->tickets);
 
 	return !!(tmp.tail ^ tmp.head);
 }
 
-static inline int __ticket_spin_is_contended(arch_spinlock_t *lock)
+static inline int arch_spin_is_contended(arch_spinlock_t *lock)
 {
 	struct __raw_tickets tmp = ACCESS_ONCE(lock->tickets);
 
 	return ((tmp.tail - tmp.head) & TICKET_MASK) > 1;
 }
-
-static inline int arch_spin_is_locked(arch_spinlock_t *lock)
-{
-	return __ticket_spin_is_locked(lock);
-}
-
-static inline int arch_spin_is_contended(arch_spinlock_t *lock)
-{
-	return __ticket_spin_is_contended(lock);
-}
 #define arch_spin_is_contended	arch_spin_is_contended
 
-static __always_inline void arch_spin_lock(arch_spinlock_t *lock)
-{
-	__ticket_spin_lock(lock);
-}
-
-static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
-{
-	return __ticket_spin_trylock(lock);
-}
-
-static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
-{
-	__ticket_spin_unlock(lock);
-}
-
 static __always_inline void arch_spin_lock_flags(arch_spinlock_t *lock,
 						  unsigned long flags)
 {
-- 
1.7.6.4


* [PATCH RFC V5 04/11] xen: defer spinlock setup until boot CPU setup
  2011-10-13  0:51 [PATCH RFC V5 00/11] Paravirtualized ticketlocks Jeremy Fitzhardinge
                   ` (2 preceding siblings ...)
  2011-10-13  0:51 ` [PATCH RFC V5 03/11] x86/ticketlock: collapse a layer of functions Jeremy Fitzhardinge
@ 2011-10-13  0:51 ` Jeremy Fitzhardinge
  2011-10-13  0:51 ` [PATCH RFC V5 05/11] xen/pvticketlock: Xen implementation for PV ticket locks Jeremy Fitzhardinge
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Jeremy Fitzhardinge @ 2011-10-13  0:51 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Marcelo Tosatti, Nick Piggin, KVM, Peter Zijlstra,
	the arch/x86 maintainers, Linux Kernel Mailing List, Andi Kleen,
	Avi Kivity, Jeremy Fitzhardinge, Ingo Molnar, Linus Torvalds,
	Xen Devel

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

There's no need to do it at very early init, and doing it there
makes it impossible to use the jump_label machinery.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/xen/smp.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
index e79dbb9..4dec905 100644
--- a/arch/x86/xen/smp.c
+++ b/arch/x86/xen/smp.c
@@ -200,6 +200,7 @@ static void __init xen_smp_prepare_boot_cpu(void)
 
 	xen_filter_cpu_maps();
 	xen_setup_vcpu_info_placement();
+	xen_init_spinlocks();
 }
 
 static void __init xen_smp_prepare_cpus(unsigned int max_cpus)
@@ -513,7 +514,6 @@ void __init xen_smp_init(void)
 {
 	smp_ops = xen_smp_ops;
 	xen_fill_possible_map();
-	xen_init_spinlocks();
 }
 
 static void __init xen_hvm_smp_prepare_cpus(unsigned int max_cpus)
-- 
1.7.6.4


* [PATCH RFC V5 05/11] xen/pvticketlock: Xen implementation for PV ticket locks
  2011-10-13  0:51 [PATCH RFC V5 00/11] Paravirtualized ticketlocks Jeremy Fitzhardinge
                   ` (3 preceding siblings ...)
  2011-10-13  0:51 ` [PATCH RFC V5 04/11] xen: defer spinlock setup until boot CPU setup Jeremy Fitzhardinge
@ 2011-10-13  0:51 ` Jeremy Fitzhardinge
  2011-10-13  0:51 ` [PATCH RFC V5 06/11] xen/pvticketlocks: add xen_nopvspin parameter to disable xen pv ticketlocks Jeremy Fitzhardinge
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Jeremy Fitzhardinge @ 2011-10-13  0:51 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Linus Torvalds, Peter Zijlstra, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

Replace the old Xen implementation of PV spinlocks with an implementation
of xen_lock_spinning and xen_unlock_kick.

xen_lock_spinning simply registers the cpu in its entry in lock_waiting,
adds itself to the waiting_cpus set, and blocks on an event channel
until the channel becomes pending.

xen_unlock_kick searches the cpus in waiting_cpus looking for the one
which is waiting for this lock with the next ticket, if any.  If found,
it kicks it by making its event channel pending, which wakes it up.

We need to make sure interrupts are disabled while we're relying on the
contents of the per-cpu lock_waiting values, otherwise an interrupt
handler could come in, try to take some other lock, block, and overwrite
our values.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/xen/spinlock.c |  287 +++++++----------------------------------------
 1 files changed, 43 insertions(+), 244 deletions(-)

diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 23af06a..f6133c5 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -19,32 +19,21 @@
 #ifdef CONFIG_XEN_DEBUG_FS
 static struct xen_spinlock_stats
 {
-	u64 taken;
 	u32 taken_slow;
-	u32 taken_slow_nested;
 	u32 taken_slow_pickup;
 	u32 taken_slow_spurious;
-	u32 taken_slow_irqenable;
 
-	u64 released;
 	u32 released_slow;
 	u32 released_slow_kicked;
 
 #define HISTO_BUCKETS	30
-	u32 histo_spin_total[HISTO_BUCKETS+1];
-	u32 histo_spin_spinning[HISTO_BUCKETS+1];
 	u32 histo_spin_blocked[HISTO_BUCKETS+1];
 
-	u64 time_total;
-	u64 time_spinning;
 	u64 time_blocked;
 } spinlock_stats;
 
 static u8 zero_stats;
 
-static unsigned lock_timeout = 1 << 10;
-#define TIMEOUT lock_timeout
-
 static inline void check_zero(void)
 {
 	if (unlikely(zero_stats)) {
@@ -73,22 +62,6 @@ static void __spin_time_accum(u64 delta, u32 *array)
 		array[HISTO_BUCKETS]++;
 }
 
-static inline void spin_time_accum_spinning(u64 start)
-{
-	u32 delta = xen_clocksource_read() - start;
-
-	__spin_time_accum(delta, spinlock_stats.histo_spin_spinning);
-	spinlock_stats.time_spinning += delta;
-}
-
-static inline void spin_time_accum_total(u64 start)
-{
-	u32 delta = xen_clocksource_read() - start;
-
-	__spin_time_accum(delta, spinlock_stats.histo_spin_total);
-	spinlock_stats.time_total += delta;
-}
-
 static inline void spin_time_accum_blocked(u64 start)
 {
 	u32 delta = xen_clocksource_read() - start;
@@ -105,214 +78,84 @@ static inline u64 spin_time_start(void)
 	return 0;
 }
 
-static inline void spin_time_accum_total(u64 start)
-{
-}
-static inline void spin_time_accum_spinning(u64 start)
-{
-}
 static inline void spin_time_accum_blocked(u64 start)
 {
 }
 #endif  /* CONFIG_XEN_DEBUG_FS */
 
-struct xen_spinlock {
-	unsigned char lock;		/* 0 -> free; 1 -> locked */
-	unsigned short spinners;	/* count of waiting cpus */
+struct xen_lock_waiting {
+	struct arch_spinlock *lock;
+	__ticket_t want;
 };
 
 static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
+static DEFINE_PER_CPU(struct xen_lock_waiting, lock_waiting);
+static cpumask_t waiting_cpus;
 
-#if 0
-static int xen_spin_is_locked(struct arch_spinlock *lock)
-{
-	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-
-	return xl->lock != 0;
-}
-
-static int xen_spin_is_contended(struct arch_spinlock *lock)
+static void xen_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
 {
-	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-
-	/* Not strictly true; this is only the count of contended
-	   lock-takers entering the slow path. */
-	return xl->spinners != 0;
-}
-
-static int xen_spin_trylock(struct arch_spinlock *lock)
-{
-	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-	u8 old = 1;
-
-	asm("xchgb %b0,%1"
-	    : "+q" (old), "+m" (xl->lock) : : "memory");
-
-	return old == 0;
-}
-
-static DEFINE_PER_CPU(struct xen_spinlock *, lock_spinners);
-
-/*
- * Mark a cpu as interested in a lock.  Returns the CPU's previous
- * lock of interest, in case we got preempted by an interrupt.
- */
-static inline struct xen_spinlock *spinning_lock(struct xen_spinlock *xl)
-{
-	struct xen_spinlock *prev;
-
-	prev = __this_cpu_read(lock_spinners);
-	__this_cpu_write(lock_spinners, xl);
-
-	wmb();			/* set lock of interest before count */
-
-	asm(LOCK_PREFIX " incw %0"
-	    : "+m" (xl->spinners) : : "memory");
-
-	return prev;
-}
-
-/*
- * Mark a cpu as no longer interested in a lock.  Restores previous
- * lock of interest (NULL for none).
- */
-static inline void unspinning_lock(struct xen_spinlock *xl, struct xen_spinlock *prev)
-{
-	asm(LOCK_PREFIX " decw %0"
-	    : "+m" (xl->spinners) : : "memory");
-	wmb();			/* decrement count before restoring lock */
-	__this_cpu_write(lock_spinners, prev);
-}
-
-static noinline int xen_spin_lock_slow(struct arch_spinlock *lock, bool irq_enable)
-{
-	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-	struct xen_spinlock *prev;
 	int irq = __this_cpu_read(lock_kicker_irq);
-	int ret;
+	struct xen_lock_waiting *w = &__get_cpu_var(lock_waiting);
+	int cpu = smp_processor_id();
 	u64 start;
+	unsigned long flags;
 
 	/* If kicker interrupts not initialized yet, just spin */
 	if (irq == -1)
-		return 0;
+		return;
 
 	start = spin_time_start();
 
-	/* announce we're spinning */
-	prev = spinning_lock(xl);
+	/* Make sure interrupts are disabled to ensure that these
+	   per-cpu values are not overwritten. */
+	local_irq_save(flags);
+
+	w->want = want;
+	w->lock = lock;
+
+	/* This uses set_bit, which atomic and therefore a barrier */
+	cpumask_set_cpu(cpu, &waiting_cpus);
 
 	ADD_STATS(taken_slow, 1);
-	ADD_STATS(taken_slow_nested, prev != NULL);
-
-	do {
-		unsigned long flags;
-
-		/* clear pending */
-		xen_clear_irq_pending(irq);
-
-		/* check again make sure it didn't become free while
-		   we weren't looking  */
-		ret = xen_spin_trylock(lock);
-		if (ret) {
-			ADD_STATS(taken_slow_pickup, 1);
-
-			/*
-			 * If we interrupted another spinlock while it
-			 * was blocking, make sure it doesn't block
-			 * without rechecking the lock.
-			 */
-			if (prev != NULL)
-				xen_set_irq_pending(irq);
-			goto out;
-		}
 
-		flags = arch_local_save_flags();
-		if (irq_enable) {
-			ADD_STATS(taken_slow_irqenable, 1);
-			raw_local_irq_enable();
-		}
+	/* clear pending */
+	xen_clear_irq_pending(irq);
 
-		/*
-		 * Block until irq becomes pending.  If we're
-		 * interrupted at this point (after the trylock but
-		 * before entering the block), then the nested lock
-		 * handler guarantees that the irq will be left
-		 * pending if there's any chance the lock became free;
-		 * xen_poll_irq() returns immediately if the irq is
-		 * pending.
-		 */
-		xen_poll_irq(irq);
+	/* Only check lock once pending cleared */
+	barrier();
 
-		raw_local_irq_restore(flags);
+	/* check again make sure it didn't become free while
+	   we weren't looking  */
+	if (ACCESS_ONCE(lock->tickets.head) == want) {
+		ADD_STATS(taken_slow_pickup, 1);
+		goto out;
+	}
 
-		ADD_STATS(taken_slow_spurious, !xen_test_irq_pending(irq));
-	} while (!xen_test_irq_pending(irq)); /* check for spurious wakeups */
+	/* Block until irq becomes pending (or perhaps a spurious wakeup) */
+	xen_poll_irq(irq);
+	ADD_STATS(taken_slow_spurious, !xen_test_irq_pending(irq));
 
 	kstat_incr_irqs_this_cpu(irq, irq_to_desc(irq));
 
 out:
-	unspinning_lock(xl, prev);
-	spin_time_accum_blocked(start);
-
-	return ret;
-}
-
-static inline void __xen_spin_lock(struct arch_spinlock *lock, bool irq_enable)
-{
-	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-	unsigned timeout;
-	u8 oldval;
-	u64 start_spin;
-
-	ADD_STATS(taken, 1);
-
-	start_spin = spin_time_start();
-
-	do {
-		u64 start_spin_fast = spin_time_start();
-
-		timeout = TIMEOUT;
+	cpumask_clear_cpu(cpu, &waiting_cpus);
+	w->lock = NULL;
 
-		asm("1: xchgb %1,%0\n"
-		    "   testb %1,%1\n"
-		    "   jz 3f\n"
-		    "2: rep;nop\n"
-		    "   cmpb $0,%0\n"
-		    "   je 1b\n"
-		    "   dec %2\n"
-		    "   jnz 2b\n"
-		    "3:\n"
-		    : "+m" (xl->lock), "=q" (oldval), "+r" (timeout)
-		    : "1" (1)
-		    : "memory");
+	local_irq_restore(flags);
 
-		spin_time_accum_spinning(start_spin_fast);
-
-	} while (unlikely(oldval != 0 &&
-			  (TIMEOUT == ~0 || !xen_spin_lock_slow(lock, irq_enable))));
-
-	spin_time_accum_total(start_spin);
-}
-
-static void xen_spin_lock(struct arch_spinlock *lock)
-{
-	__xen_spin_lock(lock, false);
-}
-
-static void xen_spin_lock_flags(struct arch_spinlock *lock, unsigned long flags)
-{
-	__xen_spin_lock(lock, !raw_irqs_disabled_flags(flags));
+	spin_time_accum_blocked(start);
 }
 
-static noinline void xen_spin_unlock_slow(struct xen_spinlock *xl)
+static void xen_unlock_kick(struct arch_spinlock *lock, __ticket_t next)
 {
 	int cpu;
 
 	ADD_STATS(released_slow, 1);
 
-	for_each_online_cpu(cpu) {
-		/* XXX should mix up next cpu selection */
-		if (per_cpu(lock_spinners, cpu) == xl) {
+	for_each_cpu(cpu, &waiting_cpus) {
+		const struct xen_lock_waiting *w = &per_cpu(lock_waiting, cpu);
+
+		if (w->lock == lock && w->want == next) {
 			ADD_STATS(released_slow_kicked, 1);
 			xen_send_IPI_one(cpu, XEN_SPIN_UNLOCK_VECTOR);
 			break;
@@ -320,28 +163,6 @@ static noinline void xen_spin_unlock_slow(struct xen_spinlock *xl)
 	}
 }
 
-static void xen_spin_unlock(struct arch_spinlock *lock)
-{
-	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-
-	ADD_STATS(released, 1);
-
-	smp_wmb();		/* make sure no writes get moved after unlock */
-	xl->lock = 0;		/* release lock */
-
-	/*
-	 * Make sure unlock happens before checking for waiting
-	 * spinners.  We need a strong barrier to enforce the
-	 * write-read ordering to different memory locations, as the
-	 * CPU makes no implied guarantees about their ordering.
-	 */
-	mb();
-
-	if (unlikely(xl->spinners))
-		xen_spin_unlock_slow(xl);
-}
-#endif
-
 static irqreturn_t dummy_handler(int irq, void *dev_id)
 {
 	BUG();
@@ -376,14 +197,8 @@ void xen_uninit_lock_cpu(int cpu)
 
 void __init xen_init_spinlocks(void)
 {
-#if 0
-	pv_lock_ops.spin_is_locked = xen_spin_is_locked;
-	pv_lock_ops.spin_is_contended = xen_spin_is_contended;
-	pv_lock_ops.spin_lock = xen_spin_lock;
-	pv_lock_ops.spin_lock_flags = xen_spin_lock_flags;
-	pv_lock_ops.spin_trylock = xen_spin_trylock;
-	pv_lock_ops.spin_unlock = xen_spin_unlock;
-#endif
+	pv_lock_ops.lock_spinning = xen_lock_spinning;
+	pv_lock_ops.unlock_kick = xen_unlock_kick;
 }
 
 #ifdef CONFIG_XEN_DEBUG_FS
@@ -401,37 +216,21 @@ static int __init xen_spinlock_debugfs(void)
 
 	debugfs_create_u8("zero_stats", 0644, d_spin_debug, &zero_stats);
 
-	debugfs_create_u32("timeout", 0644, d_spin_debug, &lock_timeout);
-
-	debugfs_create_u64("taken", 0444, d_spin_debug, &spinlock_stats.taken);
 	debugfs_create_u32("taken_slow", 0444, d_spin_debug,
 			   &spinlock_stats.taken_slow);
-	debugfs_create_u32("taken_slow_nested", 0444, d_spin_debug,
-			   &spinlock_stats.taken_slow_nested);
 	debugfs_create_u32("taken_slow_pickup", 0444, d_spin_debug,
 			   &spinlock_stats.taken_slow_pickup);
 	debugfs_create_u32("taken_slow_spurious", 0444, d_spin_debug,
 			   &spinlock_stats.taken_slow_spurious);
-	debugfs_create_u32("taken_slow_irqenable", 0444, d_spin_debug,
-			   &spinlock_stats.taken_slow_irqenable);
 
-	debugfs_create_u64("released", 0444, d_spin_debug, &spinlock_stats.released);
 	debugfs_create_u32("released_slow", 0444, d_spin_debug,
 			   &spinlock_stats.released_slow);
 	debugfs_create_u32("released_slow_kicked", 0444, d_spin_debug,
 			   &spinlock_stats.released_slow_kicked);
 
-	debugfs_create_u64("time_spinning", 0444, d_spin_debug,
-			   &spinlock_stats.time_spinning);
 	debugfs_create_u64("time_blocked", 0444, d_spin_debug,
 			   &spinlock_stats.time_blocked);
-	debugfs_create_u64("time_total", 0444, d_spin_debug,
-			   &spinlock_stats.time_total);
 
-	xen_debugfs_create_u32_array("histo_total", 0444, d_spin_debug,
-				     spinlock_stats.histo_spin_total, HISTO_BUCKETS + 1);
-	xen_debugfs_create_u32_array("histo_spinning", 0444, d_spin_debug,
-				     spinlock_stats.histo_spin_spinning, HISTO_BUCKETS + 1);
 	xen_debugfs_create_u32_array("histo_blocked", 0444, d_spin_debug,
 				     spinlock_stats.histo_spin_blocked, HISTO_BUCKETS + 1);
 
-- 
1.7.6.4


* [PATCH RFC V5 06/11] xen/pvticketlocks: add xen_nopvspin parameter to disable xen pv ticketlocks
  2011-10-13  0:51 [PATCH RFC V5 00/11] Paravirtualized ticketlocks Jeremy Fitzhardinge
                   ` (4 preceding siblings ...)
  2011-10-13  0:51 ` [PATCH RFC V5 05/11] xen/pvticketlock: Xen implementation for PV ticket locks Jeremy Fitzhardinge
@ 2011-10-13  0:51 ` Jeremy Fitzhardinge
  2011-10-13  0:51 ` [PATCH RFC V5 07/11] x86/pvticketlock: use callee-save for lock_spinning Jeremy Fitzhardinge
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Jeremy Fitzhardinge @ 2011-10-13  0:51 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Linus Torvalds, Peter Zijlstra, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/xen/spinlock.c |   14 ++++++++++++++
 1 files changed, 14 insertions(+), 0 deletions(-)

diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index f6133c5..1e21c99 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -195,12 +195,26 @@ void xen_uninit_lock_cpu(int cpu)
 	unbind_from_irqhandler(per_cpu(lock_kicker_irq, cpu), NULL);
 }
 
+static bool xen_pvspin __initdata = true;
+
 void __init xen_init_spinlocks(void)
 {
+	if (!xen_pvspin) {
+		printk(KERN_DEBUG "xen: PV spinlocks disabled\n");
+		return;
+	}
+
 	pv_lock_ops.lock_spinning = xen_lock_spinning;
 	pv_lock_ops.unlock_kick = xen_unlock_kick;
 }
 
+static __init int xen_parse_nopvspin(char *arg)
+{
+	xen_pvspin = false;
+	return 0;
+}
+early_param("xen_nopvspin", xen_parse_nopvspin);
+
 #ifdef CONFIG_XEN_DEBUG_FS
 
 static struct dentry *d_spin_debug;
-- 
1.7.6.4


* [PATCH RFC V5 07/11] x86/pvticketlock: use callee-save for lock_spinning
  2011-10-13  0:51 [PATCH RFC V5 00/11] Paravirtualized ticketlocks Jeremy Fitzhardinge
                   ` (5 preceding siblings ...)
  2011-10-13  0:51 ` [PATCH RFC V5 06/11] xen/pvticketlocks: add xen_nopvspin parameter to disable xen pv ticketlocks Jeremy Fitzhardinge
@ 2011-10-13  0:51 ` Jeremy Fitzhardinge
  2011-10-13  0:51 ` [PATCH RFC V5 08/11] x86/pvticketlock: when paravirtualizing ticket locks, increment by 2 Jeremy Fitzhardinge
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Jeremy Fitzhardinge @ 2011-10-13  0:51 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Linus Torvalds, Peter Zijlstra, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

Although the lock_spinning calls in the spinlock code are on the
uncommon path, their presence can cause the compiler to generate many
more register save/restores in the function pre/postamble, which is in
the fast path.  To avoid this, convert it to use the pvops callee-save
calling convention, which defers all the save/restores until the actual
function is called, keeping the fastpath clean.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/include/asm/paravirt.h       |    2 +-
 arch/x86/include/asm/paravirt_types.h |    2 +-
 arch/x86/kernel/paravirt-spinlocks.c  |    2 +-
 arch/x86/xen/spinlock.c               |    3 ++-
 4 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 76cae7a..50281c7 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -752,7 +752,7 @@ static inline void __set_fixmap(unsigned /* enum fixed_addresses */ idx,
 
 static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock, __ticket_t ticket)
 {
-	PVOP_VCALL2(pv_lock_ops.lock_spinning, lock, ticket);
+	PVOP_VCALLEE2(pv_lock_ops.lock_spinning, lock, ticket);
 }
 
 static __always_inline void ____ticket_unlock_kick(struct arch_spinlock *lock, __ticket_t ticket)
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 005e24d..5e0c138 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -330,7 +330,7 @@ struct arch_spinlock;
 #include <asm/spinlock_types.h>
 
 struct pv_lock_ops {
-	void (*lock_spinning)(struct arch_spinlock *lock, __ticket_t ticket);
+	struct paravirt_callee_save lock_spinning;
 	void (*unlock_kick)(struct arch_spinlock *lock, __ticket_t ticket);
 };
 
diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
index c2e010e..4251c1d 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -9,7 +9,7 @@
 
 struct pv_lock_ops pv_lock_ops = {
 #ifdef CONFIG_SMP
-	.lock_spinning = paravirt_nop,
+	.lock_spinning = __PV_IS_CALLEE_SAVE(paravirt_nop),
 	.unlock_kick = paravirt_nop,
 #endif
 };
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 1e21c99..431d231 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -145,6 +145,7 @@ out:
 
 	spin_time_accum_blocked(start);
 }
+PV_CALLEE_SAVE_REGS_THUNK(xen_lock_spinning);
 
 static void xen_unlock_kick(struct arch_spinlock *lock, __ticket_t next)
 {
@@ -204,7 +205,7 @@ void __init xen_init_spinlocks(void)
 		return;
 	}
 
-	pv_lock_ops.lock_spinning = xen_lock_spinning;
+	pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(xen_lock_spinning);
 	pv_lock_ops.unlock_kick = xen_unlock_kick;
 }
 
-- 
1.7.6.4


* [PATCH RFC V5 08/11] x86/pvticketlock: when paravirtualizing ticket locks, increment by 2
  2011-10-13  0:51 [PATCH RFC V5 00/11] Paravirtualized ticketlocks Jeremy Fitzhardinge
                   ` (6 preceding siblings ...)
  2011-10-13  0:51 ` [PATCH RFC V5 07/11] x86/pvticketlock: use callee-save for lock_spinning Jeremy Fitzhardinge
@ 2011-10-13  0:51 ` Jeremy Fitzhardinge
  2011-10-13  0:51 ` [PATCH RFC V5 09/11] x86/ticketlock: add slowpath logic Jeremy Fitzhardinge
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Jeremy Fitzhardinge @ 2011-10-13  0:51 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Linus Torvalds, Peter Zijlstra, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

Increment ticket head/tails by 2 rather than 1 to leave the LSB free
to store a "is in slowpath state" bit.  This halves the number
of possible CPUs for a given ticket size, but this shouldn't matter
in practice - kernels built for 32k+ CPU systems are probably
specially built for the hardware rather than a generic distro
kernel.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/include/asm/spinlock.h       |   10 +++++-----
 arch/x86/include/asm/spinlock_types.h |   10 +++++++++-
 2 files changed, 14 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index f0d6a59..dd155f7 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -78,7 +78,7 @@ static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock, __t
  */
 static __always_inline void arch_spin_lock(struct arch_spinlock *lock)
 {
-	register struct __raw_tickets inc = { .tail = 1 };
+	register struct __raw_tickets inc = { .tail = TICKET_LOCK_INC };
 
 	inc = xadd(&lock->tickets, inc);
 
@@ -104,7 +104,7 @@ static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
 	if (old.tickets.head != old.tickets.tail)
 		return 0;
 
-	new.head_tail = old.head_tail + (1 << TICKET_SHIFT);
+	new.head_tail = old.head_tail + (TICKET_LOCK_INC << TICKET_SHIFT);
 
 	/* cmpxchg is a full barrier, so nothing can move before it */
 	return cmpxchg(&lock->head_tail, old.head_tail, new.head_tail) == old.head_tail;
@@ -112,9 +112,9 @@ static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
 
 static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
 {
-	__ticket_t next = lock->tickets.head + 1;
+	__ticket_t next = lock->tickets.head + TICKET_LOCK_INC;
 
-	__add(&lock->tickets.head, 1, UNLOCK_LOCK_PREFIX);
+	__add(&lock->tickets.head, TICKET_LOCK_INC, UNLOCK_LOCK_PREFIX);
 	__ticket_unlock_kick(lock, next);
 }
 
@@ -129,7 +129,7 @@ static inline int arch_spin_is_contended(arch_spinlock_t *lock)
 {
 	struct __raw_tickets tmp = ACCESS_ONCE(lock->tickets);
 
-	return ((tmp.tail - tmp.head) & TICKET_MASK) > 1;
+	return ((tmp.tail - tmp.head) & TICKET_MASK) > TICKET_LOCK_INC;
 }
 #define arch_spin_is_contended	arch_spin_is_contended
 
diff --git a/arch/x86/include/asm/spinlock_types.h b/arch/x86/include/asm/spinlock_types.h
index dbe223d..aa9a205 100644
--- a/arch/x86/include/asm/spinlock_types.h
+++ b/arch/x86/include/asm/spinlock_types.h
@@ -3,7 +3,13 @@
 
 #include <linux/types.h>
 
-#if (CONFIG_NR_CPUS < 256)
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+#define __TICKET_LOCK_INC	2
+#else
+#define __TICKET_LOCK_INC	1
+#endif
+
+#if (CONFIG_NR_CPUS < (256 / __TICKET_LOCK_INC))
 typedef u8  __ticket_t;
 typedef u16 __ticketpair_t;
 #else
@@ -11,6 +17,8 @@ typedef u16 __ticket_t;
 typedef u32 __ticketpair_t;
 #endif
 
+#define TICKET_LOCK_INC	((__ticket_t)__TICKET_LOCK_INC)
+
 #define TICKET_SHIFT	(sizeof(__ticket_t) * 8)
 #define TICKET_MASK	((__ticket_t)((1 << TICKET_SHIFT) - 1))
 
-- 
1.7.6.4


* [PATCH RFC V5 09/11] x86/ticketlock: add slowpath logic
  2011-10-13  0:51 [PATCH RFC V5 00/11] Paravirtualized ticketlocks Jeremy Fitzhardinge
                   ` (7 preceding siblings ...)
  2011-10-13  0:51 ` [PATCH RFC V5 08/11] x86/pvticketlock: when paravirtualizing ticket locks, increment by 2 Jeremy Fitzhardinge
@ 2011-10-13  0:51 ` Jeremy Fitzhardinge
  2011-10-13  0:51 ` [PATCH RFC V5 10/11] xen/pvticketlock: allow interrupts to be enabled while blocking Jeremy Fitzhardinge
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Jeremy Fitzhardinge @ 2011-10-13  0:51 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Linus Torvalds, Peter Zijlstra, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge, Srivatsa Vaddagiri, Stephan Diestelhorst

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

Maintain a flag in the LSB of the ticket lock tail which indicates
whether anyone is in the lock slowpath and may need kicking when
the current holder unlocks.  The flag is set when the first locker
enters the slowpath, and cleared when unlocking to an empty queue (ie,
no contention).

In the specific implementation of lock_spinning(), make sure to set
the slowpath flags on the lock just before blocking.  We must do
this before the last-chance pickup test to prevent a deadlock
with the unlocker:

Unlocker			Locker
				test for lock pickup
					-> fail
unlock
test slowpath
	-> false
				set slowpath flags
				block

Whereas this works in any ordering:

Unlocker			Locker
				set slowpath flags
				test for lock pickup
					-> fail
				block
unlock
test slowpath
	-> true, kick

If the unlocker finds that the lock has the slowpath flag set but it is
actually uncontended (ie, head == tail, so nobody is waiting), then it
clears the slowpath flag.

The unlock code uses a locked add to update the head counter.  This also
acts as a full memory barrier, so it's safe to subsequently read back
the slowpath flag state, knowing that the updated lock is visible
to the other CPUs.  If it were an unlocked add, then the flag read may
just be forwarded from the store buffer before it was visible to the other
CPUs, which could result in a deadlock.

Unfortunately this means we need to do a locked instruction when
unlocking with PV ticketlocks.  However, if PV ticketlocks are not
enabled, then the old non-locked "add" is the only unlocking code.

Note: this code relies on gcc keeping unlikely() code out of line from
the fastpath, which only happens when OPTIMIZE_SIZE=n.  If it doesn't,
the generated code isn't too bad, but it's definitely suboptimal.

Thanks to Srivatsa Vaddagiri for providing a bugfix to the original
version of this change, which has been folded in.
Thanks to Stephan Diestelhorst for commenting on some code which relied
on an inaccurate reading of the x86 memory ordering rules.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Cc: Stephan Diestelhorst <stephan.diestelhorst@amd.com>
---
 arch/x86/include/asm/paravirt.h       |    2 +-
 arch/x86/include/asm/spinlock.h       |   79 ++++++++++++++++++++++++--------
 arch/x86/include/asm/spinlock_types.h |    2 +
 arch/x86/kernel/paravirt-spinlocks.c  |    3 +
 arch/x86/xen/spinlock.c               |    6 +++
 5 files changed, 71 insertions(+), 21 deletions(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 50281c7..13b3d8b 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -755,7 +755,7 @@ static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock, _
 	PVOP_VCALLEE2(pv_lock_ops.lock_spinning, lock, ticket);
 }
 
-static __always_inline void ____ticket_unlock_kick(struct arch_spinlock *lock, __ticket_t ticket)
+static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock, __ticket_t ticket)
 {
 	PVOP_VCALL2(pv_lock_ops.unlock_kick, lock, ticket);
 }
diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index dd155f7..8e0b9cf 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -1,11 +1,14 @@
 #ifndef _ASM_X86_SPINLOCK_H
 #define _ASM_X86_SPINLOCK_H
 
+#include <linux/jump_label.h>
 #include <linux/atomic.h>
 #include <asm/page.h>
 #include <asm/processor.h>
 #include <linux/compiler.h>
 #include <asm/paravirt.h>
+#include <asm/bitops.h>
+
 /*
  * Your basic SMP spinlocks, allowing only a single CPU anywhere
  *
@@ -40,29 +43,27 @@
 /* How long a lock should spin before we consider blocking */
 #define SPIN_THRESHOLD	(1 << 11)
 
-#ifndef CONFIG_PARAVIRT_SPINLOCKS
+extern struct jump_label_key paravirt_ticketlocks_enabled;
+
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
 
-static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock, __ticket_t ticket)
+static inline void __ticket_enter_slowpath(arch_spinlock_t *lock)
 {
+	set_bit(0, (volatile unsigned long *)&lock->tickets.tail);
 }
 
-static __always_inline void ____ticket_unlock_kick(struct arch_spinlock *lock, __ticket_t ticket)
+#else  /* !CONFIG_PARAVIRT_SPINLOCKS */
+static __always_inline void __ticket_lock_spinning(arch_spinlock_t *lock, __ticket_t ticket)
 {
 }
 
-#endif	/* CONFIG_PARAVIRT_SPINLOCKS */
-
-
-/* 
- * If a spinlock has someone waiting on it, then kick the appropriate
- * waiting cpu.
- */
-static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock, __ticket_t next)
+static inline void __ticket_unlock_kick(arch_spinlock_t *lock, __ticket_t ticket)
 {
-	if (unlikely(lock->tickets.tail != next))
-		____ticket_unlock_kick(lock, next);
 }
 
+#endif	/* CONFIG_PARAVIRT_SPINLOCKS */
+
+
 /*
  * Ticket locks are conceptually two parts, one indicating the current head of
  * the queue, and the other indicating the current tail. The lock is acquired
@@ -76,20 +77,22 @@ static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock, __t
  * in the high part, because a wide xadd increment of the low part would carry
  * up and contaminate the high part.
  */
-static __always_inline void arch_spin_lock(struct arch_spinlock *lock)
+static __always_inline void arch_spin_lock(arch_spinlock_t *lock)
 {
 	register struct __raw_tickets inc = { .tail = TICKET_LOCK_INC };
 
 	inc = xadd(&lock->tickets, inc);
+	if (likely(inc.head == inc.tail))
+		goto out;
 
+	inc.tail &= ~TICKET_SLOWPATH_FLAG;
 	for (;;) {
 		unsigned count = SPIN_THRESHOLD;
 
 		do {
-			if (inc.head == inc.tail)
+			if (ACCESS_ONCE(lock->tickets.head) == inc.tail)
 				goto out;
 			cpu_relax();
-			inc.head = ACCESS_ONCE(lock->tickets.head);
 		} while (--count);
 		__ticket_lock_spinning(lock, inc.tail);
 	}
@@ -101,7 +104,7 @@ static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
 	arch_spinlock_t old, new;
 
 	old.tickets = ACCESS_ONCE(lock->tickets);
-	if (old.tickets.head != old.tickets.tail)
+	if (old.tickets.head != (old.tickets.tail & ~TICKET_SLOWPATH_FLAG))
 		return 0;
 
 	new.head_tail = old.head_tail + (TICKET_LOCK_INC << TICKET_SHIFT);
@@ -110,12 +113,48 @@ static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
 	return cmpxchg(&lock->head_tail, old.head_tail, new.head_tail) == old.head_tail;
 }
 
+static inline void __ticket_unlock_slowpath(arch_spinlock_t *lock,
+					    arch_spinlock_t old)
+{
+	arch_spinlock_t new;
+
+	BUILD_BUG_ON(((__ticket_t)NR_CPUS) != NR_CPUS);
+
+	/* Perform the unlock on the "before" copy */
+	old.tickets.head += TICKET_LOCK_INC;
+
+	/* Clear the slowpath flag */
+	new.head_tail = old.head_tail & ~(TICKET_SLOWPATH_FLAG << TICKET_SHIFT);
+
+	/*
+	 * If the lock is uncontended, clear the flag - use cmpxchg in
+	 * case it changes behind our back though.
+	 */
+	if (new.tickets.head != new.tickets.tail ||
+	    cmpxchg(&lock->head_tail, old.head_tail, new.head_tail) != old.head_tail) {
+		/*
+		 * Lock still has someone queued for it, so wake up an
+		 * appropriate waiter.
+		 */
+		__ticket_unlock_kick(lock, old.tickets.head);
+	}
+}
+
 static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
 {
-	__ticket_t next = lock->tickets.head + TICKET_LOCK_INC;
+	if (TICKET_SLOWPATH_FLAG &&
+	    unlikely(static_branch(&paravirt_ticketlocks_enabled))) {
+		arch_spinlock_t prev;
+
+		prev = *lock;
+		add_smp(&lock->tickets.head, TICKET_LOCK_INC);
+
+		/* add_smp() is a full mb() */
 
-	__add(&lock->tickets.head, TICKET_LOCK_INC, UNLOCK_LOCK_PREFIX);
-	__ticket_unlock_kick(lock, next);
+		if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
+			__ticket_unlock_slowpath(lock, prev);
+	} else
+		__add(&lock->tickets.head, TICKET_LOCK_INC, UNLOCK_LOCK_PREFIX);
 }
 
 static inline int arch_spin_is_locked(arch_spinlock_t *lock)
diff --git a/arch/x86/include/asm/spinlock_types.h b/arch/x86/include/asm/spinlock_types.h
index aa9a205..407f7f7 100644
--- a/arch/x86/include/asm/spinlock_types.h
+++ b/arch/x86/include/asm/spinlock_types.h
@@ -5,8 +5,10 @@
 
 #ifdef CONFIG_PARAVIRT_SPINLOCKS
 #define __TICKET_LOCK_INC	2
+#define TICKET_SLOWPATH_FLAG	((__ticket_t)1)
 #else
 #define __TICKET_LOCK_INC	1
+#define TICKET_SLOWPATH_FLAG	((__ticket_t)0)
 #endif
 
 #if (CONFIG_NR_CPUS < (256 / __TICKET_LOCK_INC))
diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
index 4251c1d..6ca1d33 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -4,6 +4,7 @@
  */
 #include <linux/spinlock.h>
 #include <linux/module.h>
+#include <linux/jump_label.h>
 
 #include <asm/paravirt.h>
 
@@ -15,3 +16,5 @@ struct pv_lock_ops pv_lock_ops = {
 };
 EXPORT_SYMBOL(pv_lock_ops);
 
+struct jump_label_key paravirt_ticketlocks_enabled = JUMP_LABEL_INIT;
+EXPORT_SYMBOL(paravirt_ticketlocks_enabled);
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 431d231..0a552ec 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -124,6 +124,10 @@ static void xen_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
 	/* Only check lock once pending cleared */
 	barrier();
 
+	/* Mark entry to slowpath before doing the pickup test to make
+	   sure we don't deadlock with an unlocker. */
+	__ticket_enter_slowpath(lock);
+
 	/* check again make sure it didn't become free while
 	   we weren't looking  */
 	if (ACCESS_ONCE(lock->tickets.head) == want) {
@@ -205,6 +209,8 @@ void __init xen_init_spinlocks(void)
 		return;
 	}
 
+	jump_label_inc(&paravirt_ticketlocks_enabled);
+
 	pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(xen_lock_spinning);
 	pv_lock_ops.unlock_kick = xen_unlock_kick;
 }
-- 
1.7.6.4
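
(Editor's note: the slowpath handshake added above can be hard to see
through the diff, so here is a stand-alone, user-space C sketch of the
bookkeeping -- not kernel code; the ticket values and the printf() are
purely illustrative.  A waiter that spins too long sets bit 0 of the
tail ticket; the unlocker bumps head and then either clears the flag
again (the real code uses cmpxchg, since the lock can change under it)
or kicks the new owner's ticket.)

	#include <stdint.h>
	#include <stdio.h>

	#define TICKET_LOCK_INC		2	/* tickets advance by 2 under PV... */
	#define TICKET_SLOWPATH_FLAG	1	/* ...leaving bit 0 of tail free */

	struct tickets { uint8_t head, tail; };

	int main(void)
	{
		/* ticket 4 holds the lock, ticket 6 is waiting */
		struct tickets t = { .head = 4, .tail = 8 };

		/* a waiter that spun too long marks the lock "slowpath" */
		t.tail |= TICKET_SLOWPATH_FLAG;

		/* unlock: pass ownership to the next ticket... */
		t.head += TICKET_LOCK_INC;

		/* ...and only then look at the slowpath flag */
		if (t.tail & TICKET_SLOWPATH_FLAG) {
			if (t.head == (t.tail & ~TICKET_SLOWPATH_FLAG)) {
				/* nobody left waiting: clear the flag */
				t.tail &= ~TICKET_SLOWPATH_FLAG;
			} else {
				/* still contended: wake the new owner */
				printf("kick ticket %u\n", (unsigned)t.head);
			}
		}
		return 0;
	}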

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH RFC V5 10/11] xen/pvticketlock: allow interrupts to be enabled while blocking
  2011-10-13  0:51 [PATCH RFC V5 00/11] Paravirtualized ticketlocks Jeremy Fitzhardinge
                   ` (8 preceding siblings ...)
  2011-10-13  0:51 ` [PATCH RFC V5 09/11] x86/ticketlock: add slowpath logic Jeremy Fitzhardinge
@ 2011-10-13  0:51 ` Jeremy Fitzhardinge
  2011-10-13  0:51 ` [PATCH RFC V5 11/11] xen: enable PV ticketlocks on HVM Xen Jeremy Fitzhardinge
  2011-10-13 10:54 ` [PATCH RFC V5 00/11] Paravirtualized ticketlocks Peter Zijlstra
  11 siblings, 0 replies; 25+ messages in thread
From: Jeremy Fitzhardinge @ 2011-10-13  0:51 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Linus Torvalds, Peter Zijlstra, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

If interrupts were enabled when taking the spinlock, we can leave them
enabled while blocking to get the lock.

If we can enable interrupts while waiting for the lock to become
available, and we take an interrupt before entering the poll, and the
handler takes a spinlock which ends up going into the slow state
(invalidating the per-cpu "lock" and "want" values), then when the
interrupt handler returns the event channel will remain pending, so the
poll will return immediately and fall back out to the main spinlock
loop.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/xen/spinlock.c |   48 ++++++++++++++++++++++++++++++++++++++++------
 1 files changed, 41 insertions(+), 7 deletions(-)

diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 0a552ec..fc506e6 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -106,11 +106,28 @@ static void xen_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
 
 	start = spin_time_start();
 
-	/* Make sure interrupts are disabled to ensure that these
-	   per-cpu values are not overwritten. */
+	/*
+	 * Make sure an interrupt handler can't upset things in a
+	 * partially setup state.
+	 */
 	local_irq_save(flags);
 
+	/*
+	 * We don't really care if we're overwriting some other
+	 * (lock,want) pair, as that would mean that we're currently
+	 * in an interrupt context, and the outer context had
+	 * interrupts enabled.  That has already kicked the VCPU out
+	 * of xen_poll_irq(), so it will just return spuriously and
+	 * retry with newly setup (lock,want).
+	 *
+	 * The ordering protocol on this is that the "lock" pointer
+	 * may only be set non-NULL if the "want" ticket is correct.
+	 * If we're updating "want", we must first clear "lock".
+	 */
+	w->lock = NULL;
+	smp_wmb();
 	w->want = want;
+	smp_wmb();
 	w->lock = lock;
 
 	/* This uses set_bit, which atomic and therefore a barrier */
@@ -124,21 +141,36 @@ static void xen_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
 	/* Only check lock once pending cleared */
 	barrier();
 
-	/* Mark entry to slowpath before doing the pickup test to make
-	   sure we don't deadlock with an unlocker. */
+	/*
+	 * Mark entry to slowpath before doing the pickup test to make
+	 * sure we don't deadlock with an unlocker.
+	 */
 	__ticket_enter_slowpath(lock);
 
-	/* check again make sure it didn't become free while
-	   we weren't looking  */
+	/*
+	 * check again make sure it didn't become free while
+	 * we weren't looking 
+	 */
 	if (ACCESS_ONCE(lock->tickets.head) == want) {
 		ADD_STATS(taken_slow_pickup, 1);
 		goto out;
 	}
 
+	/* Allow interrupts while blocked */
+	local_irq_restore(flags);
+
+	/*
+	 * If an interrupt happens here, it will leave the wakeup irq
+	 * pending, which will cause xen_poll_irq() to return
+	 * immediately.
+	 */
+
 	/* Block until irq becomes pending (or perhaps a spurious wakeup) */
 	xen_poll_irq(irq);
 	ADD_STATS(taken_slow_spurious, !xen_test_irq_pending(irq));
 
+	local_irq_save(flags);
+
 	kstat_incr_irqs_this_cpu(irq, irq_to_desc(irq));
 
 out:
@@ -160,7 +192,9 @@ static void xen_unlock_kick(struct arch_spinlock *lock, __ticket_t next)
 	for_each_cpu(cpu, &waiting_cpus) {
 		const struct xen_lock_waiting *w = &per_cpu(lock_waiting, cpu);
 
-		if (w->lock == lock && w->want == next) {
+		/* Make sure we read lock before want */
+		if (ACCESS_ONCE(w->lock) == lock &&
+		    ACCESS_ONCE(w->want) == next) {
 			ADD_STATS(released_slow_kicked, 1);
 			xen_send_IPI_one(cpu, XEN_SPIN_UNLOCK_VECTOR);
 			break;
-- 
1.7.6.4
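
(Editor's note: the (lock, want) publication protocol in the hunk above,
condensed into a sketch.  It assumes the kernel environment of the patch
(smp_wmb(), ACCESS_ONCE(), the x86 ticket types) and a struct modelled
on the series' per-cpu xen_lock_waiting; the struct and function names
below are made up for illustration, not part of the series.)

	#include <linux/compiler.h>
	#include <linux/spinlock.h>

	struct waiting_slot {
		struct arch_spinlock *lock;
		__ticket_t want;
	};

	/*
	 * Waiter side: "lock" may only be non-NULL while "want" is valid,
	 * so invalidate first, fill in "want", then republish "lock" last.
	 */
	static void publish_waiter(struct waiting_slot *w,
				   struct arch_spinlock *lock, __ticket_t want)
	{
		w->lock = NULL;
		smp_wmb();
		w->want = want;
		smp_wmb();
		w->lock = lock;
	}

	/*
	 * Unlocker side: read "lock" before "want"; a waiter caught in the
	 * middle of an update is seen with lock == NULL and simply skipped.
	 */
	static int waiter_matches(const struct waiting_slot *w,
				  struct arch_spinlock *lock, __ticket_t next)
	{
		return ACCESS_ONCE(w->lock) == lock && ACCESS_ONCE(w->want) == next;
	}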

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH RFC V5 11/11] xen: enable PV ticketlocks on HVM Xen
  2011-10-13  0:51 [PATCH RFC V5 00/11] Paravirtualized ticketlocks Jeremy Fitzhardinge
                   ` (9 preceding siblings ...)
  2011-10-13  0:51 ` [PATCH RFC V5 10/11] xen/pvticketlock: allow interrupts to be enabled while blocking Jeremy Fitzhardinge
@ 2011-10-13  0:51 ` Jeremy Fitzhardinge
  2011-10-13 10:54 ` [PATCH RFC V5 00/11] Paravirtualized ticketlocks Peter Zijlstra
  11 siblings, 0 replies; 25+ messages in thread
From: Jeremy Fitzhardinge @ 2011-10-13  0:51 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Linus Torvalds, Peter Zijlstra, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Stefano Stabellini, Jeremy Fitzhardinge

From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 arch/x86/xen/smp.c |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
index 4dec905..2d01aeb 100644
--- a/arch/x86/xen/smp.c
+++ b/arch/x86/xen/smp.c
@@ -552,4 +552,5 @@ void __init xen_hvm_smp_init(void)
 	smp_ops.cpu_die = xen_hvm_cpu_die;
 	smp_ops.send_call_func_ipi = xen_smp_send_call_function_ipi;
 	smp_ops.send_call_func_single_ipi = xen_smp_send_call_function_single_ipi;
+	xen_init_spinlocks();
 }
-- 
1.7.6.4

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* Re: [PATCH RFC V5 00/11] Paravirtualized ticketlocks
  2011-10-13  0:51 [PATCH RFC V5 00/11] Paravirtualized ticketlocks Jeremy Fitzhardinge
                   ` (10 preceding siblings ...)
  2011-10-13  0:51 ` [PATCH RFC V5 11/11] xen: enable PV ticketlocks on HVM Xen Jeremy Fitzhardinge
@ 2011-10-13 10:54 ` Peter Zijlstra
  2011-10-13 16:44   ` Jeremy Fitzhardinge
  11 siblings, 1 reply; 25+ messages in thread
From: Peter Zijlstra @ 2011-10-13 10:54 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: H. Peter Anvin, Linus Torvalds, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge

On Wed, 2011-10-12 at 17:51 -0700, Jeremy Fitzhardinge wrote:
> 
> This is is all unnecessary complication if you're not using PV ticket
> locks, it also uses the jump-label machinery to use the standard
> "add"-based unlock in the non-PV case.
> 
>         if (TICKET_SLOWPATH_FLAG &&
>             unlikely(static_branch(&paravirt_ticketlocks_enabled))) {
>                 arch_spinlock_t prev;
> 
>                 prev = *lock;
>                 add_smp(&lock->tickets.head, TICKET_LOCK_INC);
> 
>                 /* add_smp() is a full mb() */
> 
>                 if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
>                         __ticket_unlock_slowpath(lock, prev);
>         } else
>                 __add(&lock->tickets.head, TICKET_LOCK_INC, UNLOCK_LOCK_PREFIX); 

Not that I mind the jump_label usage, but didn't paravirt have an
existing alternative() thingy to do things like this? Or is the
alternative() stuff not flexible enough to express this?

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH RFC V5 00/11] Paravirtualized ticketlocks
  2011-10-13 10:54 ` [PATCH RFC V5 00/11] Paravirtualized ticketlocks Peter Zijlstra
@ 2011-10-13 16:44   ` Jeremy Fitzhardinge
  2011-10-14 14:17     ` Jason Baron
  2011-10-17 16:33     ` H. Peter Anvin
  0 siblings, 2 replies; 25+ messages in thread
From: Jeremy Fitzhardinge @ 2011-10-13 16:44 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: H. Peter Anvin, Linus Torvalds, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge

On 10/13/2011 03:54 AM, Peter Zijlstra wrote:
> On Wed, 2011-10-12 at 17:51 -0700, Jeremy Fitzhardinge wrote:
>> This is is all unnecessary complication if you're not using PV ticket
>> locks, it also uses the jump-label machinery to use the standard
>> "add"-based unlock in the non-PV case.
>>
>>         if (TICKET_SLOWPATH_FLAG &&
>>             unlikely(static_branch(&paravirt_ticketlocks_enabled))) {
>>                 arch_spinlock_t prev;
>>
>>                 prev = *lock;
>>                 add_smp(&lock->tickets.head, TICKET_LOCK_INC);
>>
>>                 /* add_smp() is a full mb() */
>>
>>                 if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
>>                         __ticket_unlock_slowpath(lock, prev);
>>         } else
>>                 __add(&lock->tickets.head, TICKET_LOCK_INC, UNLOCK_LOCK_PREFIX); 
> Not that I mind the jump_label usage, but didn't paravirt have an
> existing alternative() thingy to do things like this? Or is the
> alternative() stuff not flexible enough to express this?

Yeah, that's a good question.  There are three mechanisms with somewhat
overlapping concerns:

  * alternative()
  * pvops patching
  * jump_labels

Alternative() is for low-level instruction substitution, and really only
makes sense at the assembler level with one or two instructions.
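
(As a usage sketch of alternative() -- written from memory rather than
taken from this series, so treat the details as approximate: the classic
example is the 32-bit memory barrier, which boots as a locked add and
gets patched to mfence once the CPU is known to have SSE2.)

	#include <asm/alternative.h>
	#include <asm/cpufeature.h>

	/* one instruction sequence substituted for another, keyed on a
	 * CPU feature bit discovered at boot */
	static inline void sketch_mb(void)
	{
		alternative("lock; addl $0,0(%%esp)", "mfence", X86_FEATURE_XMM2);
	}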

pvops is basically a collection of ordinary _ops structures full of
function pointers, but it has a layer of patching to help optimise it. 
In the common case, this just replaces an indirect call with a direct
one, but in some special cases it can inline code.  This is used for
small, extremely performance-critical things like cli/sti, but it is
awkward to use in general because you have to specify the inlined code
as a parameterless asm.

Jump_labels is basically an efficient way of doing conditionals
predicated on rarely-changed booleans - so it's similar to pvops in that
it is effectively a very ordinary C construct optimised by dynamic code
patching.
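
(A minimal sketch of that, using the same interface this series uses --
struct jump_label_key, static_branch() and jump_label_inc(); the key and
helper names below are made up.)

	#include <linux/compiler.h>
	#include <linux/jump_label.h>

	struct jump_label_key my_feature_enabled = JUMP_LABEL_INIT;

	static void my_fast_path(void) { /* common case */ }
	static void my_slow_path(void) { /* rare case */ }

	static void do_work(void)
	{
		/* compiles to a no-op until the key is enabled */
		if (unlikely(static_branch(&my_feature_enabled)))
			my_slow_path();
		else
			my_fast_path();
	}

	static void enable_my_feature(void)
	{
		/* flipped rarely, e.g. once during init */
		jump_label_inc(&my_feature_enabled);
	}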


So for _arch_spin_unlock(), what I'm trying to go for is that if you're
not using PV ticketlocks, then the unlock sequence is unchanged from
normal.  But also, even if you are using PV ticketlocks, I want the
fastpath to be inlined, with the call out to a special function only
happening on the slow path.  So the result is the if() above.  If the
static_branch is false, then the executed code sequence is:

	nop5
	addb $2, (lock)
	ret

which is pretty much ideal.  If the static_branch is true, then it ends
up being:

	jmp5 1f
	[...]

1:	lock add $2, (lock)
	test $1, (lock.tail)
	jne slowpath
	ret
slowpath:...

which is also pretty good, given all the other constraints.

I could try to use inline patching to get a simple add for the non-PV
unlock case (it would be awkward without asm parameters), but I wouldn't
be able to also get the PV unlock fastpath code to be (near) inline.
Hence jump_label.

Thanks,
    J

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH RFC V5 00/11] Paravirtualized ticketlocks
  2011-10-13 16:44   ` Jeremy Fitzhardinge
@ 2011-10-14 14:17     ` Jason Baron
  2011-10-14 17:02       ` Jeremy Fitzhardinge
  2011-10-17 16:33     ` H. Peter Anvin
  1 sibling, 1 reply; 25+ messages in thread
From: Jason Baron @ 2011-10-14 14:17 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: Peter Zijlstra, H. Peter Anvin, Linus Torvalds, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge, konrad.wilk

On Thu, Oct 13, 2011 at 09:44:48AM -0700, Jeremy Fitzhardinge wrote:
> On 10/13/2011 03:54 AM, Peter Zijlstra wrote:
> > On Wed, 2011-10-12 at 17:51 -0700, Jeremy Fitzhardinge wrote:
> >> This is is all unnecessary complication if you're not using PV ticket
> >> locks, it also uses the jump-label machinery to use the standard
> >> "add"-based unlock in the non-PV case.
> >>
> >>         if (TICKET_SLOWPATH_FLAG &&
> >>             unlikely(static_branch(&paravirt_ticketlocks_enabled))) {
> >>                 arch_spinlock_t prev;
> >>
> >>                 prev = *lock;
> >>                 add_smp(&lock->tickets.head, TICKET_LOCK_INC);
> >>
> >>                 /* add_smp() is a full mb() */
> >>
> >>                 if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
> >>                         __ticket_unlock_slowpath(lock, prev);
> >>         } else
> >>                 __add(&lock->tickets.head, TICKET_LOCK_INC, UNLOCK_LOCK_PREFIX); 
> > Not that I mind the jump_label usage, but didn't paravirt have an
> > existing alternative() thingy to do things like this? Or is the
> > alternative() stuff not flexible enough to express this?
> 
> Yeah, that's a good question.  There are three mechanisms with somewhat
> overlapping concerns:
> 
>   * alternative()
>   * pvops patching
>   * jump_labels
> 
> Alternative() is for low-level instruction substitution, and really only
> makes sense at the assembler level with one or two instructions.
> 
> pvops is basically a collection of ordinary _ops structures full of
> function pointers, but it has a layer of patching to help optimise it. 
> In the common case, this just replaces an indirect call with a direct
> one, but in some special cases it can inline code.  This is used for
> small, extremely performance-critical things like cli/sti, but it
> awkward to use in general because you have to specify the inlined code
> as a parameterless asm.
> 

I haven't looked at the pvops patching (probably should), but I was
wondering if jump labels could be used for it? Or is there something
that the pvops patching is doing that jump labels can't handle?


> Jump_labels is basically an efficient way of doing conditionals
> predicated on rarely-changed booleans - so it's similar to pvops in that
> it is effectively a very ordinary C construct optimised by dynamic code
> patching.
> 

Another thing is that it can be changed at run-time...Can pvops be
adjusted at run-time as opposed to just boot-time?

thanks,

-Jason

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH RFC V5 00/11] Paravirtualized ticketlocks
  2011-10-14 14:17     ` Jason Baron
@ 2011-10-14 17:02       ` Jeremy Fitzhardinge
  2011-10-14 18:35         ` Jason Baron
  2011-10-14 18:37         ` H. Peter Anvin
  0 siblings, 2 replies; 25+ messages in thread
From: Jeremy Fitzhardinge @ 2011-10-14 17:02 UTC (permalink / raw)
  To: Jason Baron
  Cc: Peter Zijlstra, H. Peter Anvin, Linus Torvalds, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge, konrad.wilk

On 10/14/2011 07:17 AM, Jason Baron wrote:
> On Thu, Oct 13, 2011 at 09:44:48AM -0700, Jeremy Fitzhardinge wrote:
>> pvops is basically a collection of ordinary _ops structures full of
>> function pointers, but it has a layer of patching to help optimise it. 
>> In the common case, this just replaces an indirect call with a direct
>> one, but in some special cases it can inline code.  This is used for
>> small, extremely performance-critical things like cli/sti, but it
>> awkward to use in general because you have to specify the inlined code
>> as a parameterless asm.
>>
> I haven't look at the pvops patching (probably should), but I was
> wondering if jump labels could be used for it? Or is there something
> that the pvops patching is doing that jump labels can't handle?

Jump labels are essentially binary: you can use path A or path B.  pvops
are multiway: there's no limit to the potential number of
paravirtualized hypervisor implementations.  At the moment we have 4:
native, Xen, KVM and lguest.

As I said, pvops patching is very general since it allows a particular
op site to be either patched with a direct call/jump to the target code,
or have code inserted inline at the site.  In fact, it probably wouldn't
take very much to allow it to implement jump labels.

And the pvops patching mechanism is certainly general to any *ops style
structure which is initialized once (or rarely) and could be optimised. 
LSM, perhaps?

>> Jump_labels is basically an efficient way of doing conditionals
>> predicated on rarely-changed booleans - so it's similar to pvops in that
>> it is effectively a very ordinary C construct optimised by dynamic code
>> patching.
>>
> Another thing is that it can be changed at run-time...Can pvops be
> adjusted at run-time as opposed to just boot-time?

No.  In general that wouldn't really make sense, because once you've
booted on one hypervisor you're stuck there (though hypothetically you
could consider migration between machines with different hypervisors). 
In some cases it might make sense though, such as switching on PV
ticketlocks if the host system becomes overcommitted, but leaving the
native ticketlocks enabled if not.

    J

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH RFC V5 00/11] Paravirtualized ticketlocks
  2011-10-14 17:02       ` Jeremy Fitzhardinge
@ 2011-10-14 18:35         ` Jason Baron
  2011-10-14 18:38           ` H. Peter Anvin
  2011-10-14 19:02           ` Jeremy Fitzhardinge
  2011-10-14 18:37         ` H. Peter Anvin
  1 sibling, 2 replies; 25+ messages in thread
From: Jason Baron @ 2011-10-14 18:35 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: Peter Zijlstra, H. Peter Anvin, Linus Torvalds, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge, konrad.wilk, rth

On Fri, Oct 14, 2011 at 10:02:35AM -0700, Jeremy Fitzhardinge wrote:
> On 10/14/2011 07:17 AM, Jason Baron wrote:
> > On Thu, Oct 13, 2011 at 09:44:48AM -0700, Jeremy Fitzhardinge wrote:
> >> pvops is basically a collection of ordinary _ops structures full of
> >> function pointers, but it has a layer of patching to help optimise it. 
> >> In the common case, this just replaces an indirect call with a direct
> >> one, but in some special cases it can inline code.  This is used for
> >> small, extremely performance-critical things like cli/sti, but it
> >> awkward to use in general because you have to specify the inlined code
> >> as a parameterless asm.
> >>
> > I haven't look at the pvops patching (probably should), but I was
> > wondering if jump labels could be used for it? Or is there something
> > that the pvops patching is doing that jump labels can't handle?
> 
> Jump labels are essentially binary: you can use path A or path B.  pvops
> are multiway: there's no limit to the number of potential number of
> paravirtualized hypervisor implementations.  At the moment we have 4:
> native, Xen, KVM and lguest.
> 

Yes, they are binary using the static_branch() interface. But in
general, the asm goto() construct allows branching to any number of
labels. I have implemented the boolean static_branch() because it seems like
the most common interface for jump labels, but I imagine we will
introduce new interfaces as time goes on. You could of course nest
static_branch() calls, although I can't say I've tried it.
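
(For reference, a stripped-down sketch of what static_branch() boils
down to on x86: a single asm goto whose no-op can later be rewritten
into a jump.  The real implementation also emits a __jump_table entry
recording the no-op address, the target label and the key so the
patching code can find the site; that bookkeeping is omitted here, so
this version always falls through.)

	#include <linux/compiler.h>
	#include <linux/types.h>

	static __always_inline bool sketch_branch(void)
	{
		/* the real code emits a 5-byte nop here, big enough to be
		 * rewritten into a "jmp l_yes" when the key is enabled */
		asm goto("nop" : : : : l_yes);
		return false;
	l_yes:
		return true;
	}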

We could have an interface that allowed static_branch() to specify an
arbitrary number of no-ops, such that the call site itself could look any
way we want if we don't know the bias at compile time. This, of course,
means potentially more than one no-op in the fast path. I assume the
pvops can have more than one no-op in the fast path. Or is there a
better solution here?

> As I said, pvops patching is very general since it allows a particular
> op site to be either patched with a direct call/jump to the target code,
> or have code inserted inline at the site.  In fact, it probably wouldn't
> take very much to allow it to implement jump labels.
> 
> And the pvops patching mechanism is certainly general to any *ops style
> structure which is initialized once (or rarely) and could be optimised. 
> LSM, perhaps?
> 
> >> Jump_labels is basically an efficient way of doing conditionals
> >> predicated on rarely-changed booleans - so it's similar to pvops in that
> >> it is effectively a very ordinary C construct optimised by dynamic code
> >> patching.
> >>
> > Another thing is that it can be changed at run-time...Can pvops be
> > adjusted at run-time as opposed to just boot-time?
> 
> No.  In general that wouldn't really make sense, because once you've
> booted on one hypervisor you're stuck there (though hypothetically you
> could consider migration between machines with different hypervisors). 
> In some cases it might make sense though, such as switching on PV
> ticketlocks if the host system becomes overcommitted, but leaving the
> native ticketlocks enabled if not.
> 
>     J

A nice feature of jump labels is that it allows the various branches
(currently we only support 2) to be written in C code (as opposed to asm),
which means you can write your code as you normally would and access any
parameters as you normally would - hopefully making the code pretty
readable as well.

I hope this better clarifies the use-cases for the various mechanisms.

Thanks,

-Jason

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH RFC V5 00/11] Paravirtualized ticketlocks
  2011-10-14 17:02       ` Jeremy Fitzhardinge
  2011-10-14 18:35         ` Jason Baron
@ 2011-10-14 18:37         ` H. Peter Anvin
  2011-10-14 19:10           ` Jeremy Fitzhardinge
  1 sibling, 1 reply; 25+ messages in thread
From: H. Peter Anvin @ 2011-10-14 18:37 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: Marcelo Tosatti, Nick Piggin, KVM, konrad.wilk, Peter Zijlstra,
	Jason Baron, the arch/x86 maintainers, Linux Kernel Mailing List,
	Andi Kleen, Avi Kivity, Jeremy Fitzhardinge, Ingo Molnar,
	Linus Torvalds, Xen Devel

On 10/14/2011 10:02 AM, Jeremy Fitzhardinge wrote:
> 
> Jump labels are essentially binary: you can use path A or path B.  pvops
> are multiway: there's no limit to the number of potential number of
> paravirtualized hypervisor implementations.  At the moment we have 4:
> native, Xen, KVM and lguest.
> 

This isn't (or shouldn't be) really true... it should be possible to do
an N-way jump label even if the current mechanism doesn't.

	-hpa

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH RFC V5 00/11] Paravirtualized ticketlocks
  2011-10-14 18:35         ` Jason Baron
@ 2011-10-14 18:38           ` H. Peter Anvin
  2011-10-14 18:51             ` Jeremy Fitzhardinge
  2011-10-14 19:02           ` Jeremy Fitzhardinge
  1 sibling, 1 reply; 25+ messages in thread
From: H. Peter Anvin @ 2011-10-14 18:38 UTC (permalink / raw)
  To: Jason Baron
  Cc: Jeremy Fitzhardinge, Peter Zijlstra, Linus Torvalds, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge, konrad.wilk, rth

On 10/14/2011 11:35 AM, Jason Baron wrote:
> 
> A nice featuer of jump labels, is that it allows the various branches
> (currently we only support 2), to be written in c code (as opposed to asm),
> which means you can write your code as you normally would and access any
> parameters as you normally would - hopefully, making the code pretty
> readable as well.
> 
> I hope this better clarifies the use-cases for the various mechanisms.
> 

There is an important subcase which might be handy: allowing direct
patching of call instructions instead of using indirect calls.

	-hpa

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH RFC V5 00/11] Paravirtualized ticketlocks
  2011-10-14 18:38           ` H. Peter Anvin
@ 2011-10-14 18:51             ` Jeremy Fitzhardinge
  0 siblings, 0 replies; 25+ messages in thread
From: Jeremy Fitzhardinge @ 2011-10-14 18:51 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Jason Baron, Peter Zijlstra, Linus Torvalds, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge, konrad.wilk, rth

On 10/14/2011 11:38 AM, H. Peter Anvin wrote:
> On 10/14/2011 11:35 AM, Jason Baron wrote:
>> A nice featuer of jump labels, is that it allows the various branches
>> (currently we only support 2), to be written in c code (as opposed to asm),
>> which means you can write your code as you normally would and access any
>> parameters as you normally would - hopefully, making the code pretty
>> readable as well.
>>
>> I hope this better clarifies the use-cases for the various mechanisms.
>>
> There is an important subcase which might be handy which would be to
> allow direct patching of call instructions instead of using indirect calls.

Right, that's how the pvops patching is primarily used.

    J

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH RFC V5 00/11] Paravirtualized ticketlocks
  2011-10-14 18:35         ` Jason Baron
  2011-10-14 18:38           ` H. Peter Anvin
@ 2011-10-14 19:02           ` Jeremy Fitzhardinge
  2011-10-17 14:58             ` Jason Baron
  1 sibling, 1 reply; 25+ messages in thread
From: Jeremy Fitzhardinge @ 2011-10-14 19:02 UTC (permalink / raw)
  To: Jason Baron
  Cc: Peter Zijlstra, H. Peter Anvin, Linus Torvalds, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge, konrad.wilk, rth

On 10/14/2011 11:35 AM, Jason Baron wrote:
> On Fri, Oct 14, 2011 at 10:02:35AM -0700, Jeremy Fitzhardinge wrote:
>> On 10/14/2011 07:17 AM, Jason Baron wrote:
>>> On Thu, Oct 13, 2011 at 09:44:48AM -0700, Jeremy Fitzhardinge wrote:
>>>> pvops is basically a collection of ordinary _ops structures full of
>>>> function pointers, but it has a layer of patching to help optimise it. 
>>>> In the common case, this just replaces an indirect call with a direct
>>>> one, but in some special cases it can inline code.  This is used for
>>>> small, extremely performance-critical things like cli/sti, but it
>>>> awkward to use in general because you have to specify the inlined code
>>>> as a parameterless asm.
>>>>
>>> I haven't look at the pvops patching (probably should), but I was
>>> wondering if jump labels could be used for it? Or is there something
>>> that the pvops patching is doing that jump labels can't handle?
>> Jump labels are essentially binary: you can use path A or path B.  pvops
>> are multiway: there's no limit to the number of potential number of
>> paravirtualized hypervisor implementations.  At the moment we have 4:
>> native, Xen, KVM and lguest.
>>
> Yes, they are binary using the static_branch() interface. But in
> general, the asm goto() construct, allows branching to any number of
> labels. I have implemented the boolean static_branch() b/c it seems like
> the most common interface for jump labels, but I imagine we will
> introduce new interfaces as time goes on. You could of course nest
> static_branch() calls, although I can't say I've tried it.

At the moment we're using pvops to optimise things like:

	(*pv_mmu_ops.set_pte)(...);

To do that with some kind of multiway jump label thing, it would
need to expand out to something akin to:

	if (static_branch(is_xen))
		xen_set_pte(...);
	else if (static_branch(is_kvm))
		kvm_set_pte(...);
	else if (static_branch(is_lguest))
		lguest_set_pte(...);
	else
		native_set_pte(...);

or something similar with an actual jump table.  But I don't see how it
offers much scope for improvement.

If there were something like:

	STATIC_INDIRECT_CALL(&pv_mmu_ops.set_pte)(...);

where the apparently indirect call is actually patched to be a direct
call, then that would offer a large subset of what we do with pvops.

However, to completely replace pvops patching, the static branch / jump
label mechanism would also need to work in assembler code, and be
capable of actually patching callsites with instructions rather than
just calls (sti/cli/pushf/popf being the most important).

We also keep track of the live registers at the callsite, and compare
that to what registers the target functions will clobber in order to
optimise the amount of register save/restore that is needed.  And as a result
we have some pvops functions with non-standard calling conventions to
minimise save/restores on critical paths.
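
(For a concrete example of the non-standard convention: this is roughly
how a lock_spinning hook is wired up in this series -- compare
PV_CALLEE_SAVE(xen_lock_spinning) in the Xen patch.  The function name
below is made up, and I'm assuming the usual PV_CALLEE_SAVE_REGS_THUNK()
macro from asm/paravirt.h to generate the register-preserving thunk.)

	#include <linux/init.h>
	#include <linux/spinlock.h>
	#include <asm/paravirt.h>

	/* a pvop using the callee-save convention, so the (near-)inline
	 * call site only has to preserve a minimal register set */
	static void my_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
	{
		/* ...block this vCPU until it is kicked... */
	}
	PV_CALLEE_SAVE_REGS_THUNK(my_lock_spinning);

	static void __init my_spinlock_init(void)
	{
		pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(my_lock_spinning);
	}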

> We could have an interface, that allowed static branch(), to specifiy an
> arbitrary number of no-ops such that call-site itself could look anyway
> we want, if we don't know the bias at compile time. This, of course
> means potentially greater than 1 no-op in the fast path. I assume the
> pvops can have greater than 1 no-op in the fast path. Or is there a
> better solution here?

See above.  But pvops patching is pretty well tuned for its job.

However, I definitely think it's worth investigating some way to reduce
the number of patching mechanisms, and if pvops patching doesn't stretch
static jumps in unnatural ways, then perhaps that's the way to go.

Thanks,
    J

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH RFC V5 00/11] Paravirtualized ticketlocks
  2011-10-14 18:37         ` H. Peter Anvin
@ 2011-10-14 19:10           ` Jeremy Fitzhardinge
  2011-10-14 19:12             ` H. Peter Anvin
  0 siblings, 1 reply; 25+ messages in thread
From: Jeremy Fitzhardinge @ 2011-10-14 19:10 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Jason Baron, Peter Zijlstra, Linus Torvalds, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge, konrad.wilk

On 10/14/2011 11:37 AM, H. Peter Anvin wrote:
> On 10/14/2011 10:02 AM, Jeremy Fitzhardinge wrote:
>> Jump labels are essentially binary: you can use path A or path B.  pvops
>> are multiway: there's no limit to the number of potential number of
>> paravirtualized hypervisor implementations.  At the moment we have 4:
>> native, Xen, KVM and lguest.
>>
> This isn't (or shouldn't be) really true... it should be possible to do
> an N-way jump label even if the current mechanism doesn't.

We probably don't want all those implementations (near) inline, so they
would end up being plain function calls anyway.

    J

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH RFC V5 00/11] Paravirtualized ticketlocks
  2011-10-14 19:10           ` Jeremy Fitzhardinge
@ 2011-10-14 19:12             ` H. Peter Anvin
  0 siblings, 0 replies; 25+ messages in thread
From: H. Peter Anvin @ 2011-10-14 19:12 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: Jason Baron, Peter Zijlstra, Linus Torvalds, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge, konrad.wilk

On 10/14/2011 12:10 PM, Jeremy Fitzhardinge wrote:
> 
> We probably don't want all those implementations (near) inline, so they
> would end up being plain function calls anyway.
> 

I would not object if the native one was closer, though, especially in
terms of source text (the current level of macroization of some
operations is horrific.)

	-hpa

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH RFC V5 00/11] Paravirtualized ticketlocks
  2011-10-14 19:02           ` Jeremy Fitzhardinge
@ 2011-10-17 14:58             ` Jason Baron
  0 siblings, 0 replies; 25+ messages in thread
From: Jason Baron @ 2011-10-17 14:58 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: Peter Zijlstra, H. Peter Anvin, Linus Torvalds, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge, konrad.wilk, rth

On Fri, Oct 14, 2011 at 12:02:43PM -0700, Jeremy Fitzhardinge wrote:
> On 10/14/2011 11:35 AM, Jason Baron wrote:
> > On Fri, Oct 14, 2011 at 10:02:35AM -0700, Jeremy Fitzhardinge wrote:
> >> On 10/14/2011 07:17 AM, Jason Baron wrote:
> >>> On Thu, Oct 13, 2011 at 09:44:48AM -0700, Jeremy Fitzhardinge wrote:
> >>>> pvops is basically a collection of ordinary _ops structures full of
> >>>> function pointers, but it has a layer of patching to help optimise it. 
> >>>> In the common case, this just replaces an indirect call with a direct
> >>>> one, but in some special cases it can inline code.  This is used for
> >>>> small, extremely performance-critical things like cli/sti, but it
> >>>> awkward to use in general because you have to specify the inlined code
> >>>> as a parameterless asm.
> >>>>
> >>> I haven't look at the pvops patching (probably should), but I was
> >>> wondering if jump labels could be used for it? Or is there something
> >>> that the pvops patching is doing that jump labels can't handle?
> >> Jump labels are essentially binary: you can use path A or path B.  pvops
> >> are multiway: there's no limit to the number of potential number of
> >> paravirtualized hypervisor implementations.  At the moment we have 4:
> >> native, Xen, KVM and lguest.
> >>
> > Yes, they are binary using the static_branch() interface. But in
> > general, the asm goto() construct, allows branching to any number of
> > labels. I have implemented the boolean static_branch() b/c it seems like
> > the most common interface for jump labels, but I imagine we will
> > introduce new interfaces as time goes on. You could of course nest
> > static_branch() calls, although I can't say I've tried it.
> 
> At the moment we're using pvops to optimise things like:
> 
> 	(*pv_mmu_ops.set_pte)(...);
> 
> To do that with some kind of multiway jump label thing, then that would
> need to expand out to something akin to:
> 
> 	if (static_branch(is_xen))
> 		xen_set_pte(...);
> 	else if (static_branch(is_kvm))
> 		kvm_set_pte(...);
> 	else if (static_branch(is_lguest))
> 		lguest_set_pte(...);
> 	else
> 		native_set_pte(...);
> 
> or something similar with an actual jump table.  But I don't see how it
> offers much scope for improvement.
> 
> If there were something like:
> 
> 	STATIC_INDIRECT_CALL(&pv_mmu_ops.set_pte)(...);
> 
> where the apparently indirect call is actually patched to be a direct
> call, then that would offer a large subset of what we do with pvops.
> 
> However, to completely replace pvops patching, the static branch / jump
> label mechanism would also need to work in assembler code, and be
> capable of actually patching callsites with instructions rather than
> just calls (sti/cli/pushf/popf being the most important).
> 
> We also keep track of the live registers at the callsite, and compare
> that to what registers the target functions will clobber in order to
> optimise the amount of register save/restore is needed.  And as a result
> we have some pvops functions with non-standard calling conventions to
> minimise save/restores on critical paths.
> 
> > We could have an interface, that allowed static branch(), to specifiy an
> > arbitrary number of no-ops such that call-site itself could look anyway
> > we want, if we don't know the bias at compile time. This, of course
> > means potentially greater than 1 no-op in the fast path. I assume the
> > pvops can have greater than 1 no-op in the fast path. Or is there a
> > better solution here?
> 
> See above.  But pvops patching is pretty well tuned for its job.
> 
> However, I definitely think its worth investigating some way to reduce
> the number of patching mechanisms, and if pvops patching doesn't stretch
> static jumps in unnatural ways, then perhaps that's the way to go.
> 
> Thanks,
>     J

ok, as things are now, I don't think jump labels are well suited for
replacing indirect calls. They could be used to have a single no-op that
is replaced with a jmp to the proper direct call...but at that point
you've taken an extra jump. That doesn't make sense to me.

Jump labels are, as mentioned, well suited for if/else type control flow,
while the indirect call table, at least to me, seems like a bit of a
different use-case...

Thanks,

-Jason

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH RFC V5 00/11] Paravirtualized ticketlocks
  2011-10-13 16:44   ` Jeremy Fitzhardinge
  2011-10-14 14:17     ` Jason Baron
@ 2011-10-17 16:33     ` H. Peter Anvin
  1 sibling, 0 replies; 25+ messages in thread
From: H. Peter Anvin @ 2011-10-17 16:33 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: Peter Zijlstra, Linus Torvalds, Ingo Molnar,
	the arch/x86 maintainers, Linux Kernel Mailing List, Nick Piggin,
	Avi Kivity, Marcelo Tosatti, KVM, Andi Kleen, Xen Devel,
	Jeremy Fitzhardinge

On 10/13/2011 09:44 AM, Jeremy Fitzhardinge wrote:
> 
> Yeah, that's a good question.  There are three mechanisms with somewhat
> overlapping concerns:
> 
>   * alternative()
>   * pvops patching
>   * jump_labels
> 
> Alternative() is for low-level instruction substitution, and really only
> makes sense at the assembler level with one or two instructions.
> 
> pvops is basically a collection of ordinary _ops structures full of
> function pointers, but it has a layer of patching to help optimise it. 
> In the common case, this just replaces an indirect call with a direct
> one, but in some special cases it can inline code.  This is used for
> small, extremely performance-critical things like cli/sti, but it
> awkward to use in general because you have to specify the inlined code
> as a parameterless asm.
> 
> Jump_labels is basically an efficient way of doing conditionals
> predicated on rarely-changed booleans - so it's similar to pvops in that
> it is effectively a very ordinary C construct optimised by dynamic code
> patching.

Then there is static_cpu_has(), which is basically jump labels
implemented using the alternatives mechanism.
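
(Usage sketch, for comparison with static_branch(): after the
alternatives pass has run, the test behaves essentially like a constant.
The feature bit here is just an arbitrary example.)

	#include <linux/types.h>
	#include <asm/cpufeature.h>

	static bool can_use_sse2_path(void)
	{
		return static_cpu_has(X86_FEATURE_XMM2);
	}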

If nothing else it would be good to:

1. Make more general use of ops patching;
2. Merge mechanisms where practical.

	-hpa

-- 
H. Peter Anvin, Intel Open Source Technology Center
I work for Intel.  I don't speak on their behalf.


^ permalink raw reply	[flat|nested] 25+ messages in thread

end of thread

Thread overview: 25+ messages
2011-10-13  0:51 [PATCH RFC V5 00/11] Paravirtualized ticketlocks Jeremy Fitzhardinge
2011-10-13  0:51 ` [PATCH RFC V5 01/11] x86/spinlock: replace pv spinlocks with pv ticketlocks Jeremy Fitzhardinge
2011-10-13  0:51 ` [PATCH RFC V5 02/11] x86/ticketlock: don't inline _spin_unlock when using paravirt spinlocks Jeremy Fitzhardinge
2011-10-13  0:51 ` [PATCH RFC V5 03/11] x86/ticketlock: collapse a layer of functions Jeremy Fitzhardinge
2011-10-13  0:51 ` [PATCH RFC V5 04/11] xen: defer spinlock setup until boot CPU setup Jeremy Fitzhardinge
2011-10-13  0:51 ` [PATCH RFC V5 05/11] xen/pvticketlock: Xen implementation for PV ticket locks Jeremy Fitzhardinge
2011-10-13  0:51 ` [PATCH RFC V5 06/11] xen/pvticketlocks: add xen_nopvspin parameter to disable xen pv ticketlocks Jeremy Fitzhardinge
2011-10-13  0:51 ` [PATCH RFC V5 07/11] x86/pvticketlock: use callee-save for lock_spinning Jeremy Fitzhardinge
2011-10-13  0:51 ` [PATCH RFC V5 08/11] x86/pvticketlock: when paravirtualizing ticket locks, increment by 2 Jeremy Fitzhardinge
2011-10-13  0:51 ` [PATCH RFC V5 09/11] x86/ticketlock: add slowpath logic Jeremy Fitzhardinge
2011-10-13  0:51 ` [PATCH RFC V5 10/11] xen/pvticketlock: allow interrupts to be enabled while blocking Jeremy Fitzhardinge
2011-10-13  0:51 ` [PATCH RFC V5 11/11] xen: enable PV ticketlocks on HVM Xen Jeremy Fitzhardinge
2011-10-13 10:54 ` [PATCH RFC V5 00/11] Paravirtualized ticketlocks Peter Zijlstra
2011-10-13 16:44   ` Jeremy Fitzhardinge
2011-10-14 14:17     ` Jason Baron
2011-10-14 17:02       ` Jeremy Fitzhardinge
2011-10-14 18:35         ` Jason Baron
2011-10-14 18:38           ` H. Peter Anvin
2011-10-14 18:51             ` Jeremy Fitzhardinge
2011-10-14 19:02           ` Jeremy Fitzhardinge
2011-10-17 14:58             ` Jason Baron
2011-10-14 18:37         ` H. Peter Anvin
2011-10-14 19:10           ` Jeremy Fitzhardinge
2011-10-14 19:12             ` H. Peter Anvin
2011-10-17 16:33     ` H. Peter Anvin
