* [patch 1/2] spinlock: lockbreak cleanup
@ 2007-08-08 4:22 Nick Piggin
2007-08-08 4:24 ` [patch 2/2] x86_64: ticket lock spinlock Nick Piggin
2007-08-11 0:07 ` [patch 1/2] spinlock: lockbreak cleanup Andi Kleen
0 siblings, 2 replies; 9+ messages in thread
From: Nick Piggin @ 2007-08-08 4:22 UTC (permalink / raw)
To: Andrew Morton
Cc: Andi Kleen, Linus Torvalds, Ingo Molnar, linux-arch,
Linux Kernel Mailing List
The break_lock data structure and code for spinlocks is quite nasty.
Not only does it double the size of a spinlock but it changes locking to
a potentially less optimal trylock.
Put all of that under CONFIG_GENERIC_LOCKBREAK, and introduce a
__raw_spin_is_contended that uses the lock data itself to determine whether
there are waiters on the lock, to be used if CONFIG_GENERIC_LOCKBREAK is
not set.
Rename need_lockbreak to spin_needbreak, make it use spin_is_contended to
decouple it from the spinlock implementation, and make it typesafe (rwlocks
do not have any need_lockbreak sites -- why do they even get bloated up
with that break_lock then?).
Signed-off-by: Nick Piggin <npiggin@suse.de>
---
Index: linux-2.6/include/linux/sched.h
===================================================================
--- linux-2.6.orig/include/linux/sched.h
+++ linux-2.6/include/linux/sched.h
@@ -1741,26 +1741,16 @@ extern int cond_resched_softirq(void);
/*
* Does a critical section need to be broken due to another
- * task waiting?:
+ * task waiting?: (technically does not depend on CONFIG_PREEMPT,
+ * but a general need for low latency)
*/
-#if defined(CONFIG_PREEMPT) && defined(CONFIG_SMP)
-# define need_lockbreak(lock) ((lock)->break_lock)
+#ifdef CONFIG_PREEMPT
+# define spin_needbreak(lock) spin_is_contended(lock)
#else
-# define need_lockbreak(lock) 0
+# define spin_needbreak(lock) 0
#endif
/*
- * Does a critical section need to be broken due to another
- * task waiting or preemption being signalled:
- */
-static inline int lock_need_resched(spinlock_t *lock)
-{
- if (need_lockbreak(lock) || need_resched())
- return 1;
- return 0;
-}
-
-/*
* Reevaluate whether the task has signals pending delivery.
* Wake the task if so.
* This is required every time the blocked sigset_t changes.
Index: linux-2.6/include/linux/spinlock.h
===================================================================
--- linux-2.6.orig/include/linux/spinlock.h
+++ linux-2.6/include/linux/spinlock.h
@@ -120,6 +120,12 @@ do { \
#define spin_is_locked(lock) __raw_spin_is_locked(&(lock)->raw_lock)
+#ifdef CONFIG_GENERIC_LOCKBREAK
+#define spin_is_contended(lock) ((lock)->break_lock)
+#else
+#define spin_is_contended(lock) __raw_spin_is_contended(&(lock)->raw_lock)
+#endif
+
/**
* spin_unlock_wait - wait until the spinlock gets unlocked
* @lock: the spinlock in question.
Index: linux-2.6/fs/jbd/checkpoint.c
===================================================================
--- linux-2.6.orig/fs/jbd/checkpoint.c
+++ linux-2.6/fs/jbd/checkpoint.c
@@ -347,7 +347,8 @@ restart:
break;
}
retry = __process_buffer(journal, jh, bhs,&batch_count);
- if (!retry && lock_need_resched(&journal->j_list_lock)){
+ if (!retry && (need_resched() ||
+ spin_needbreak(&journal->j_list_lock))) {
spin_unlock(&journal->j_list_lock);
retry = 1;
break;
Index: linux-2.6/fs/jbd/commit.c
===================================================================
--- linux-2.6.orig/fs/jbd/commit.c
+++ linux-2.6/fs/jbd/commit.c
@@ -265,7 +265,7 @@ write_out_data:
put_bh(bh);
}
- if (lock_need_resched(&journal->j_list_lock)) {
+ if (need_resched() || spin_needbreak(&journal->j_list_lock)) {
spin_unlock(&journal->j_list_lock);
goto write_out_data;
}
Index: linux-2.6/fs/jbd2/checkpoint.c
===================================================================
--- linux-2.6.orig/fs/jbd2/checkpoint.c
+++ linux-2.6/fs/jbd2/checkpoint.c
@@ -347,7 +347,8 @@ restart:
break;
}
retry = __process_buffer(journal, jh, bhs,&batch_count);
- if (!retry && lock_need_resched(&journal->j_list_lock)){
+ if (!retry && (need_resched() ||
+ spin_needbreak(&journal->j_list_lock))) {
spin_unlock(&journal->j_list_lock);
retry = 1;
break;
Index: linux-2.6/fs/jbd2/commit.c
===================================================================
--- linux-2.6.orig/fs/jbd2/commit.c
+++ linux-2.6/fs/jbd2/commit.c
@@ -265,7 +265,7 @@ write_out_data:
put_bh(bh);
}
- if (lock_need_resched(&journal->j_list_lock)) {
+ if (need_resched() || spin_needbreak(&journal->j_list_lock)) {
spin_unlock(&journal->j_list_lock);
goto write_out_data;
}
Index: linux-2.6/include/linux/spinlock_up.h
===================================================================
--- linux-2.6.orig/include/linux/spinlock_up.h
+++ linux-2.6/include/linux/spinlock_up.h
@@ -64,6 +64,8 @@ static inline void __raw_spin_unlock(raw
# define __raw_spin_trylock(lock) ({ (void)(lock); 1; })
#endif /* DEBUG_SPINLOCK */
+#define __raw_spin_is_contended(lock) (((void)(lock), 0))
+
#define __raw_read_can_lock(lock) (((void)(lock), 1))
#define __raw_write_can_lock(lock) (((void)(lock), 1))
Index: linux-2.6/kernel/sched.c
===================================================================
--- linux-2.6.orig/kernel/sched.c
+++ linux-2.6/kernel/sched.c
@@ -4500,19 +4500,15 @@ EXPORT_SYMBOL(cond_resched);
*/
int cond_resched_lock(spinlock_t *lock)
{
+ int resched = need_resched() && system_state == SYSTEM_RUNNING;
int ret = 0;
- if (need_lockbreak(lock)) {
+ if (spin_needbreak(lock) || resched) {
spin_unlock(lock);
- cpu_relax();
- ret = 1;
- spin_lock(lock);
- }
- if (need_resched() && system_state == SYSTEM_RUNNING) {
- spin_release(&lock->dep_map, 1, _THIS_IP_);
- _raw_spin_unlock(lock);
- preempt_enable_no_resched();
- __cond_resched();
+ if (resched && need_resched())
+ __cond_resched();
+ else
+ cpu_relax();
ret = 1;
spin_lock(lock);
}
Index: linux-2.6/mm/memory.c
===================================================================
--- linux-2.6.orig/mm/memory.c
+++ linux-2.6/mm/memory.c
@@ -514,8 +514,7 @@ again:
if (progress >= 32) {
progress = 0;
if (need_resched() ||
- need_lockbreak(src_ptl) ||
- need_lockbreak(dst_ptl))
+ spin_needbreak(src_ptl) || spin_needbreak(dst_ptl))
break;
}
if (pte_none(*src_pte)) {
@@ -854,7 +853,7 @@ unsigned long unmap_vmas(struct mmu_gath
tlb_finish_mmu(*tlbp, tlb_start, start);
if (need_resched() ||
- (i_mmap_lock && need_lockbreak(i_mmap_lock))) {
+ (i_mmap_lock && spin_needbreak(i_mmap_lock))) {
if (i_mmap_lock) {
*tlbp = NULL;
goto out;
@@ -1860,8 +1859,7 @@ again:
restart_addr = zap_page_range(vma, start_addr,
end_addr - start_addr, details);
- need_break = need_resched() ||
- need_lockbreak(details->i_mmap_lock);
+ need_break = need_resched() || spin_needbreak(details->i_mmap_lock);
if (restart_addr >= end_addr) {
/* We have now completed this vma: mark it so */
Index: linux-2.6/arch/x86_64/Kconfig
===================================================================
--- linux-2.6.orig/arch/x86_64/Kconfig
+++ linux-2.6/arch/x86_64/Kconfig
@@ -74,6 +74,11 @@ config ISA
config SBUS
bool
+config GENERIC_LOCKBREAK
+ bool
+ default y
+ depends on SMP && PREEMPT
+
config RWSEM_GENERIC_SPINLOCK
bool
default y
Index: linux-2.6/include/linux/spinlock_types.h
===================================================================
--- linux-2.6.orig/include/linux/spinlock_types.h
+++ linux-2.6/include/linux/spinlock_types.h
@@ -19,7 +19,7 @@
typedef struct {
raw_spinlock_t raw_lock;
-#if defined(CONFIG_PREEMPT) && defined(CONFIG_SMP)
+#ifdef CONFIG_GENERIC_LOCKBREAK
unsigned int break_lock;
#endif
#ifdef CONFIG_DEBUG_SPINLOCK
@@ -35,7 +35,7 @@ typedef struct {
typedef struct {
raw_rwlock_t raw_lock;
-#if defined(CONFIG_PREEMPT) && defined(CONFIG_SMP)
+#ifdef CONFIG_GENERIC_LOCKBREAK
unsigned int break_lock;
#endif
#ifdef CONFIG_DEBUG_SPINLOCK
Index: linux-2.6/kernel/spinlock.c
===================================================================
--- linux-2.6.orig/kernel/spinlock.c
+++ linux-2.6/kernel/spinlock.c
@@ -65,8 +65,7 @@ EXPORT_SYMBOL(_write_trylock);
* even on CONFIG_PREEMPT, because lockdep assumes that interrupts are
* not re-enabled during lock-acquire (which the preempt-spin-ops do):
*/
-#if !defined(CONFIG_PREEMPT) || !defined(CONFIG_SMP) || \
- defined(CONFIG_DEBUG_LOCK_ALLOC)
+#if !defined(CONFIG_GENERIC_LOCKBREAK) || defined(CONFIG_DEBUG_LOCK_ALLOC)
void __lockfunc _read_lock(rwlock_t *lock)
{
Index: linux-2.6/arch/arm/Kconfig
===================================================================
--- linux-2.6.orig/arch/arm/Kconfig
+++ linux-2.6/arch/arm/Kconfig
@@ -91,6 +91,11 @@ config GENERIC_IRQ_PROBE
bool
default y
+config GENERIC_LOCKBREAK
+ bool
+ default y
+ depends on SMP && PREEMPT
+
config RWSEM_GENERIC_SPINLOCK
bool
default y
Index: linux-2.6/arch/i386/Kconfig
===================================================================
--- linux-2.6.orig/arch/i386/Kconfig
+++ linux-2.6/arch/i386/Kconfig
@@ -14,6 +14,11 @@ config X86_32
486, 586, Pentiums, and various instruction-set-compatible chips by
AMD, Cyrix, and others.
+config GENERIC_LOCKBREAK
+ bool
+ default y
+ depends on SMP && PREEMPT
+
config GENERIC_TIME
bool
default y
Index: linux-2.6/arch/ia64/Kconfig
===================================================================
--- linux-2.6.orig/arch/ia64/Kconfig
+++ linux-2.6/arch/ia64/Kconfig
@@ -42,6 +42,11 @@ config MMU
config SWIOTLB
bool
+config GENERIC_LOCKBREAK
+ bool
+ default y
+ depends on SMP && PREEMPT
+
config RWSEM_XCHGADD_ALGORITHM
bool
default y
Index: linux-2.6/arch/m32r/Kconfig
===================================================================
--- linux-2.6.orig/arch/m32r/Kconfig
+++ linux-2.6/arch/m32r/Kconfig
@@ -215,6 +215,11 @@ config IRAM_SIZE
# Define implied options from the CPU selection here
#
+config GENERIC_LOCKBREAK
+ bool
+ default y
+ depends on SMP && PREEMPT
+
config RWSEM_GENERIC_SPINLOCK
bool
depends on M32R
Index: linux-2.6/arch/mips/Kconfig
===================================================================
--- linux-2.6.orig/arch/mips/Kconfig
+++ linux-2.6/arch/mips/Kconfig
@@ -647,6 +647,11 @@ source "arch/mips/philips/pnx8550/common
endmenu
+config GENERIC_LOCKBREAK
+ bool
+ default y
+ depends on SMP && PREEMPT
+
config RWSEM_GENERIC_SPINLOCK
bool
default y
Index: linux-2.6/arch/parisc/Kconfig
===================================================================
--- linux-2.6.orig/arch/parisc/Kconfig
+++ linux-2.6/arch/parisc/Kconfig
@@ -19,6 +19,11 @@ config MMU
config STACK_GROWSUP
def_bool y
+config GENERIC_LOCKBREAK
+ bool
+ default y
+ depends on SMP && PREEMPT
+
config RWSEM_GENERIC_SPINLOCK
def_bool y
Index: linux-2.6/arch/sparc64/Kconfig
===================================================================
--- linux-2.6.orig/arch/sparc64/Kconfig
+++ linux-2.6/arch/sparc64/Kconfig
@@ -196,6 +196,11 @@ config US2E_FREQ
If in doubt, say N.
# Global things across all Sun machines.
+config GENERIC_LOCKBREAK
+ bool
+ default y
+ depends on SMP && PREEMPT
+
config RWSEM_GENERIC_SPINLOCK
bool
^ permalink raw reply	[flat|nested] 9+ messages in thread

* [patch 2/2] x86_64: ticket lock spinlock
2007-08-08 4:22 [patch 1/2] spinlock: lockbreak cleanup Nick Piggin
@ 2007-08-08 4:24 ` Nick Piggin
2007-08-08 10:26 ` Andi Kleen
2007-08-08 17:31 ` Valdis.Kletnieks
2007-08-11 0:07 ` [patch 1/2] spinlock: lockbreak cleanup Andi Kleen
1 sibling, 2 replies; 9+ messages in thread
From: Nick Piggin @ 2007-08-08 4:24 UTC (permalink / raw)
To: Andrew Morton
Cc: Andi Kleen, Linus Torvalds, Ingo Molnar, linux-arch,
Linux Kernel Mailing List
Introduce ticket lock spinlocks for x86-64 which are FIFO. The implementation
is described in the comments. The straight-line lock/unlock instruction
sequence is slightly slower than the dec based locks on modern x86 CPUs,
however the difference is quite small on Core2 and Opteron when working out of
cache, and becomes almost insignificant even on P4 when the lock misses cache.
trylock is slower by a more significant margin, but trylock operations are relatively rare.
The memory ordering of the lock does conform to Intel's standards, and the
implementation has been reviewed by Intel and AMD engineers.
The algorithm also tells us how many CPUs are contending the lock, so
lockbreak becomes trivial and we no longer have to waste 4 bytes per
spinlock for it.
After this, we can no longer spin on any locks with preempt enabled,
and cannot reenable interrupts when spinning on an irq safe lock, because
at that point we have already taken a ticket and we would deadlock if
the same CPU tries to take the lock again. These are hackish anyway: if
the lock happens to be called under a preempt or interrupt disabled section,
then it will just have the same latency problems. The real fix is to keep
critical sections short, and ensure locks are reasonably fair (which this
patch does).
Signed-off-by: Nick Piggin <npiggin@suse.de>
---
Index: linux-2.6/include/asm-x86_64/spinlock.h
===================================================================
--- linux-2.6.orig/include/asm-x86_64/spinlock.h
+++ linux-2.6/include/asm-x86_64/spinlock.h
@@ -12,74 +12,93 @@
* Simple spin lock operations. There are two variants, one clears IRQ's
* on the local processor, one does not.
*
- * We make no fairness assumptions. They have a cost.
+ * These are fair FIFO ticket locks, which are currently limited to 256
+ * CPUs.
*
* (the type definitions are in asm/spinlock_types.h)
*/
+#if (NR_CPUS > 256)
+#error spinlock supports a maximum of 256 CPUs
+#endif
+
static inline int __raw_spin_is_locked(raw_spinlock_t *lock)
{
- return *(volatile signed int *)(&(lock)->slock) <= 0;
+ int tmp = *(volatile signed int *)(&(lock)->slock);
+
+ return (((tmp >> 8) & 0xff) != (tmp & 0xff));
}
-static inline void __raw_spin_lock(raw_spinlock_t *lock)
+static inline int __raw_spin_is_contended(raw_spinlock_t *lock)
{
- asm volatile(
- "\n1:\t"
- LOCK_PREFIX " ; decl %0\n\t"
- "jns 2f\n"
- "3:\n"
- "rep;nop\n\t"
- "cmpl $0,%0\n\t"
- "jle 3b\n\t"
- "jmp 1b\n"
- "2:\t" : "=m" (lock->slock) : : "memory");
+ int tmp = *(volatile signed int *)(&(lock)->slock);
+
+ return (((tmp >> 8) & 0xff) - (tmp & 0xff)) > 1;
}
-/*
- * Same as __raw_spin_lock, but reenable interrupts during spinning.
- */
-#ifndef CONFIG_PROVE_LOCKING
-static inline void __raw_spin_lock_flags(raw_spinlock_t *lock, unsigned long flags)
+static inline void __raw_spin_lock(raw_spinlock_t *lock)
{
- asm volatile(
- "\n1:\t"
- LOCK_PREFIX " ; decl %0\n\t"
- "jns 5f\n"
- "testl $0x200, %1\n\t" /* interrupts were disabled? */
- "jz 4f\n\t"
- "sti\n"
- "3:\t"
- "rep;nop\n\t"
- "cmpl $0, %0\n\t"
- "jle 3b\n\t"
- "cli\n\t"
+ short inc = 0x0100;
+
+ /*
+ * Ticket locks are conceptually two bytes, one indicating the current
+ * head of the queue, and the other indicating the current tail. The
+ * lock is acquired by atomically noting the tail and incrementing it
+ * by one (thus adding ourself to the queue and noting our position),
+ * then waiting until the head becomes equal to the initial value
+ * of the tail.
+ *
+ * This uses a 16-bit xadd to increment the tail and also load the
+ * position of the head, which takes care of memory ordering issues
+ * and should be optimal for the uncontended case. Note the tail must
+ * be in the high byte, otherwise the 16-bit wide increment of the low
+ * byte would carry up and contaminate the high byte.
+ */
+
+ __asm__ __volatile__ (
+ LOCK_PREFIX "xaddw %w0, %1\n"
+ "1:\t"
+ "cmpb %h0, %b0\n\t"
+ "je 2f\n\t"
+ "rep ; nop\n\t"
+ "movb %1, %b0\n\t"
+ "lfence\n\t"
"jmp 1b\n"
- "4:\t"
- "rep;nop\n\t"
- "cmpl $0, %0\n\t"
- "jg 1b\n\t"
- "jmp 4b\n"
- "5:\n\t"
- : "+m" (lock->slock) : "r" ((unsigned)flags) : "memory");
+ "2:"
+ :"+Q" (inc), "+m" (lock->slock)
+ :
+ :"memory", "cc");
}
-#endif
+
+#define __raw_spin_lock_flags(lock, flags) __raw_spin_lock(lock)
static inline int __raw_spin_trylock(raw_spinlock_t *lock)
{
- int oldval;
+ short tmp;
+ short oldval;
asm volatile(
- "xchgl %0,%1"
- :"=q" (oldval), "=m" (lock->slock)
- :"0" (0) : "memory");
+ "movw %2,%w0\n\t"
+ "cmpb %h0, %b0\n\t"
+ "jne 1f\n\t"
+ "movw %w0,%w1\n\t"
+ "incb %h1\n\t"
+ LOCK_PREFIX "cmpxchgw %w1,%2\n\t"
+ "1:"
+ :"=a" (oldval), "=Q" (tmp), "+m" (lock->slock)
+ :
+ : "memory", "cc");
- return oldval > 0;
+ return ((oldval & 0xff) == ((oldval >> 8) & 0xff));
}
static inline void __raw_spin_unlock(raw_spinlock_t *lock)
{
- asm volatile("movl $1,%0" :"=m" (lock->slock) :: "memory");
+ __asm__ __volatile__(
+ "incb %0"
+ :"+m" (lock->slock)
+ :
+ :"memory", "cc");
}
static inline void __raw_spin_unlock_wait(raw_spinlock_t *lock)
Index: linux-2.6/include/asm-x86_64/spinlock_types.h
===================================================================
--- linux-2.6.orig/include/asm-x86_64/spinlock_types.h
+++ linux-2.6/include/asm-x86_64/spinlock_types.h
@@ -9,7 +9,7 @@ typedef struct {
unsigned int slock;
} raw_spinlock_t;
-#define __RAW_SPIN_LOCK_UNLOCKED { 1 }
+#define __RAW_SPIN_LOCK_UNLOCKED { 0 }
typedef struct {
unsigned int lock;
Index: linux-2.6/arch/x86_64/Kconfig
===================================================================
--- linux-2.6.orig/arch/x86_64/Kconfig
+++ linux-2.6/arch/x86_64/Kconfig
@@ -76,8 +76,7 @@ config SBUS
config GENERIC_LOCKBREAK
bool
- default y
- depends on SMP && PREEMPT
+ default n
config RWSEM_GENERIC_SPINLOCK
bool
^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [patch 2/2] x86_64: ticket lock spinlock
2007-08-08 4:24 ` [patch 2/2] x86_64: ticket lock spinlock Nick Piggin
@ 2007-08-08 10:26 ` Andi Kleen
2007-08-09 1:42 ` Nick Piggin
2007-08-08 17:31 ` Valdis.Kletnieks
1 sibling, 1 reply; 9+ messages in thread
From: Andi Kleen @ 2007-08-08 10:26 UTC (permalink / raw)
To: Nick Piggin
Cc: Andrew Morton, Linus Torvalds, Ingo Molnar, linux-arch,
Linux Kernel Mailing List
> *
> * (the type definitions are in asm/spinlock_types.h)
> */
>
> +#if (NR_CPUS > 256)
> +#error spinlock supports a maximum of 256 CPUs
> +#endif
> +
> static inline int __raw_spin_is_locked(raw_spinlock_t *lock)
> {
> - return *(volatile signed int *)(&(lock)->slock) <= 0;
> + int tmp = *(volatile signed int *)(&(lock)->slock);
Why is slock not volatile signed int in the first place?
> - int oldval;
> + short tmp;
> + short oldval;
Broken white space?
-Andi
^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [patch 2/2] x86_64: ticket lock spinlock
2007-08-08 10:26 ` Andi Kleen
@ 2007-08-09 1:42 ` Nick Piggin
2007-08-09 9:54 ` Andi Kleen
0 siblings, 1 reply; 9+ messages in thread
From: Nick Piggin @ 2007-08-09 1:42 UTC (permalink / raw)
To: Andi Kleen
Cc: Andrew Morton, Linus Torvalds, Ingo Molnar, linux-arch,
Linux Kernel Mailing List
On Wed, Aug 08, 2007 at 12:26:55PM +0200, Andi Kleen wrote:
>
> > *
> > * (the type definitions are in asm/spinlock_types.h)
> > */
> >
> > +#if (NR_CPUS > 256)
> > +#error spinlock supports a maximum of 256 CPUs
> > +#endif
> > +
> > static inline int __raw_spin_is_locked(raw_spinlock_t *lock)
> > {
> > - return *(volatile signed int *)(&(lock)->slock) <= 0;
> > + int tmp = *(volatile signed int *)(&(lock)->slock);
>
> Why is slock not volatile signed int in the first place?
Don't know really. Why does spin_is_locked need it to be volatile?
> > - int oldval;
> > + short tmp;
> > + short oldval;
>
> Broken white space?
Hmm, I'll fix it.
Thanks,
Nick
^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [patch 2/2] x86_64: ticket lock spinlock
2007-08-09 1:42 ` Nick Piggin
@ 2007-08-09 9:54 ` Andi Kleen
0 siblings, 0 replies; 9+ messages in thread
From: Andi Kleen @ 2007-08-09 9:54 UTC (permalink / raw)
To: Nick Piggin
Cc: Andrew Morton, Linus Torvalds, Ingo Molnar, linux-arch,
Linux Kernel Mailing List
On Thursday 09 August 2007 03:42:54 Nick Piggin wrote:
> On Wed, Aug 08, 2007 at 12:26:55PM +0200, Andi Kleen wrote:
> >
> > > *
> > > * (the type definitions are in asm/spinlock_types.h)
> > > */
> > >
> > > +#if (NR_CPUS > 256)
> > > +#error spinlock supports a maximum of 256 CPUs
> > > +#endif
> > > +
> > > static inline int __raw_spin_is_locked(raw_spinlock_t *lock)
> > > {
> > > - return *(volatile signed int *)(&(lock)->slock) <= 0;
> > > + int tmp = *(volatile signed int *)(&(lock)->slock);
> >
> > Why is slock not volatile signed int in the first place?
>
> Don't know really. Why does spin_is_locked need it to be volatile?
I suppose in case a caller doesn't have a memory barrier
(they should in theory, but might not). Without any barrier
or volatile gcc might optimize it away.
The other accesses in spinlocks hopefully all have barriers.
Ok anyways the patches look good.
-Andi
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [patch 2/2] x86_64: ticket lock spinlock
2007-08-08 4:24 ` [patch 2/2] x86_64: ticket lock spinlock Nick Piggin
2007-08-08 10:26 ` Andi Kleen
@ 2007-08-08 17:31 ` Valdis.Kletnieks
2007-08-09 1:40 ` Nick Piggin
1 sibling, 1 reply; 9+ messages in thread
From: Valdis.Kletnieks @ 2007-08-08 17:31 UTC (permalink / raw)
To: Nick Piggin
Cc: Andrew Morton, Andi Kleen, Linus Torvalds, Ingo Molnar,
linux-arch, Linux Kernel Mailing List
On Wed, 08 Aug 2007 06:24:44 +0200, Nick Piggin said:
> After this, we can no longer spin on any locks with preempt enabled,
> and cannot reenable interrupts when spinning on an irq safe lock, because
> at that point we have already taken a ticket and we would deadlock if
> the same CPU tries to take the lock again. These are hackish anyway: if
> the lock happens to be called under a preempt or interrupt disabled section,
> then it will just have the same latency problems. The real fix is to keep
> critical sections short, and ensure locks are reasonably fair (which this
> patch does).
Any guesstimates how often we do that sort of hackish thing currently, and
how hard it will be to debug each one? "Deadlock if the same CPU tries to
take the lock again" is pretty easy to notice - are there more subtle failure
modes (larger loops of locks, etc)?
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [patch 2/2] x86_64: ticket lock spinlock
2007-08-08 17:31 ` Valdis.Kletnieks
@ 2007-08-09 1:40 ` Nick Piggin
0 siblings, 0 replies; 9+ messages in thread
From: Nick Piggin @ 2007-08-09 1:40 UTC (permalink / raw)
To: Valdis.Kletnieks
Cc: Andrew Morton, Andi Kleen, Linus Torvalds, Ingo Molnar,
linux-arch, Linux Kernel Mailing List
On Wed, Aug 08, 2007 at 01:31:58PM -0400, Valdis.Kletnieks@vt.edu wrote:
> On Wed, 08 Aug 2007 06:24:44 +0200, Nick Piggin said:
>
> > After this, we can no longer spin on any locks with preempt enabled,
> > and cannot reenable interrupts when spinning on an irq safe lock, because
> > at that point we have already taken a ticket and we would deadlock if
> > the same CPU tries to take the lock again. These are hackish anyway: if
> > the lock happens to be called under a preempt or interrupt disabled section,
> > then it will just have the same latency problems. The real fix is to keep
> > critical sections short, and ensure locks are reasonably fair (which this
> > patch does).
>
> Any guesstimates how often we do that sort of hackish thing currently, and
> how hard it will be to debug each one? "Deadlock if the same CPU tries to
> take the lock again" is pretty easy to notice - are there more subtle failure
> modes (larger loops of locks, etc)?
I'll try to explain better:
The old spinlocks re-enable preemption and interrupts while they spin
waiting for a held lock. This was done because people noticed some
long latencies while spinning. The problem however is that preemption
and interrupts can only be re-enabled if they were enabled before the
spin_lock call. So if you have code that perhaps takes nested locks,
or locks while interrupts are already disabled, then you get the latency
problems back.
So the non-hack fix is to keep critical sections short (which is what
we've been working at forever), and to have relatively fair locks
(which is what this patch does).
A side-effect of this patch is that it can no longer enable preemption
or ints while spinning, so my changelog is a rationale of why that
shouldn't be a big problem.
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [patch 1/2] spinlock: lockbreak cleanup
2007-08-08 4:22 [patch 1/2] spinlock: lockbreak cleanup Nick Piggin
2007-08-08 4:24 ` [patch 2/2] x86_64: ticket lock spinlock Nick Piggin
@ 2007-08-11 0:07 ` Andi Kleen
2007-08-13 7:52 ` Nick Piggin
1 sibling, 1 reply; 9+ messages in thread
From: Andi Kleen @ 2007-08-11 0:07 UTC (permalink / raw)
To: Nick Piggin
Cc: Andrew Morton, Linus Torvalds, Ingo Molnar,
Linux Kernel Mailing List
Nick,
These two patches make my P4 (single socket HT) test box not boot. I dropped them for now.
Some oopses:
-Andi
NMI Watchdog detected LOCKUP on CPU 1
CPU 1
Modules linked in:
Pid: 1648, comm: sh Not tainted 2.6.23-rc2-git3 #472
RIP: 0010:[<ffffffff80547882>] [<ffffffff80547882>] _spin_lock+0x10/0x18
RSP: 0018:ffff810001127f20 EFLAGS: 00000097
RAX: 000000000000df84 RBX: ffff8100398de040 RCX: ffff810001105850
RDX: ffff810080852000 RSI: 0000000000000000 RDI: ffff810001017180
RBP: ffff810001127f58 R08: 0000000010010000 R09: ffffffff807c5180
R10: 0000000000000001 R11: ffffffff8030ed1e R12: ffff810001017180
R13: ffff8100398de040 R14: 0000000000000001 R15: ffff81003a6c3b48
FS: 00002b0f1abcef60(0000) GS:ffff81003e0ffcc0(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 000000000045b090 CR3: 000000003db5c000 CR4: 00000000000006e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process sh (pid: 1648, threadinfo ffff810039480000, task ffff8100398de040)
Stack: ffffffff8022fd10 00000001000004ab ffff8100398de040 ffff8100398de040
0000000000000000 ffff81003d086ac0 ffff810001017180 0000000000000001
ffffffff8023b91a ffffffff807c25c8 ffff810039481cd0 0000000000000000
ffffffff8021ae58 ffffffff807c25c8 ffffffff8021b461 ffff81003a51b408
0000000000000000 0000000000000001 ffffffff8020bfd6 ffff810039481cd0 <EOI>
ffff810039481de8 ffffffff8030ed1e 0000000000008001 0000000000000206
ffff810039480000 ffff810001017180 ffff810039480000 ffff81003da4c850
ffff8100398de040 0000000000000000 ffffffffffffff10 ffffffff80545e8a
0000000000000010 0000000000000246 ffff810039481d58 0000000000000018
0000000000000086 ffffffff80311570 ffff8100398d91c0 ffff81003ad83e50
ffff8100398de040 ffff81003e0e0790 ffff8100398de248 0000000039481dc8
ffff81003da4c850 000000000000142e ffff8100398de040 ffff81000100e208
ffff81003e0e0790 0000000000000000 ffff810039481e88 ffff810039481e90
ffff81000100e180 ffff810039481e68 000000000059a4f0 ffff810039481df8
ffffffff8022f56c ffff810039481e08 ffffffff80545f79 ffff810039481e58
ffffffff80545fb3 0000000000000002 0000000000000292 ffff81003db92000
ffff81003e0e0790 0000000000000001 ffff81000100e180 ffff81003e0e0790
0000000000000001 ffff810039481ed8 ffffffff8022fa8f ffff810039481e68
ffff810039481e68 ffff8100398de040 0000000000000001 0000000000000001
ffff810000000101 ffff810039481e98 ffff810039481e98 ffff81003db10be0
0000000000000202 ffff8100398d91c0 00000000398d91c0 ffff81003db92000
000000000059a920 ffff81003affd380 ffffffff8028399b ffff810039481f58
ffff81003db92000 000000000059a920 000000000059a4f0 ffff81003db92000
000000000059a920 000000000059a8b0 ffffffff8020a1ec 00002b0f1a9a7628
0000000000594e20 000000000059a4f0 0000000000599c01 0000000000594e20
ffffffff8020b767 000000000059a8b0 000000000059a920 0000000000594e20
0000000000599c01 000000000059a4f0 0000000000594e20 0000000000000202
ffffffffffffffff 0000000000000000 00002b0f1aaaab28 000000000000003b
ffffffffffffffff 000000000059a920 000000000059a4f0 0000000000594e20
000000000000003b 00002b0f1aa33d97 0000000000000033 0000000000000202
00007fff906c42c8 000000000000002b
Call Trace:
<IRQ> [<ffffffff8022fd10>] scheduler_tick+0x3e/0x149
[<ffffffff8023b91a>] update_process_times+0x5c/0x68
[<ffffffff8021ae58>] smp_local_timer_interrupt+0x34/0x55
[<ffffffff8021b461>] smp_apic_timer_interrupt+0x44/0x5b
[<ffffffff8020bfd6>] apic_timer_interrupt+0x66/0x70
<EOI> [<ffffffff8030ed1e>] nfs_permission+0x0/0x1d1
[<ffffffff80545e8a>] thread_return+0x58/0xd0
[<ffffffff80311570>] nfs_file_open+0x0/0x7c
[<ffffffff8022f56c>] __cond_resched+0x1c/0x44
[<ffffffff80545f79>] cond_resched+0x2e/0x39
[<ffffffff80545fb3>] wait_for_completion+0x17/0xbe
[<ffffffff8022fa8f>] sched_exec+0xb3/0xce
[<ffffffff8028399b>] do_execve+0x5d/0x1a6
[<ffffffff8020a1ec>] sys_execve+0x36/0x8b
[<ffffffff8020b767>] stub_execve+0x67/0xb0
Code: 8a 07 0f ae e8 eb f3 c3 f0 81 2f 00 00 00 01 74 05 e8 a8 62
Kernel panic - not syncing: Aiee, killing interrupt handler!
(another boot)
NMI Watchdog detected LOCKUP on CPU 0
CPU 0
Modules linked in:
Pid: 1193, comm: udevstart Not tainted 2.6.23-rc2-git3 #474
RIP: 0010:[<ffffffff805476d7>] [<ffffffff805476d7>] _spin_lock+0x15/0x18
RSP: 0018:ffff81003a6cf8d0 EFLAGS: 00000002
RAX: 0000000000006a6b RBX: ffffffff807c6180 RCX: 0000000000000000
RDX: 0000000000000000 RSI: ffff81003a6cf930 RDI: ffff81000100e180
RBP: ffff81003a6cf8f8 R08: ffff81003a7f9680 R09: ffff81003a628b48
R10: 000000000053b31b R11: ffffffff8030eb02 R12: ffff81000100e180
R13: ffff81003a6cf930 R14: ffff810001118100 R15: 0000000000000000
FS: 00002b85cee96b00(0000) GS:ffffffff8072d000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 00002b6f5d8d6310 CR3: 000000003a524000 CR4: 00000000000006e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process udevstart (pid: 1193, threadinfo ffff81003a6ce000, task ffff810001263890)
Stack: ffffffff8022c09b 0000000000000003 0000000000000001 ffff810001118100
ffffffff806d6188 ffff81003a6cf968 ffffffff8022d166 000000003a6cf928
0000000000000003 ffff810001017180 ffff810001017180 ffff81003a6cf948
0000000000000092 ffff81003a6cf978 ffff81003a043d20 0000000000000001
0000000000000001 ffffffff806d6188 0000000000000000 ffff81003a6cf9a8
ffffffff8022b04d 0000000300000000 ffffffff806d6180 0000000000000296
ffff810001263890 ffff81003da5ecb0 ffff81003a6cfcb8 ffffffff806d6188
ffffffff8054763c 0000000000000001 ffff810001263890 ffffffff8022d484
0000000000100100 0000000000200200 ffffffff8022b6ed 0000000000000f30
ffff810001263890 0000000000000000 ffff81003d4b2818 ffff81003a6cfe48
ffffffff805472e5 ffffffff8030eb02 000000000053b31b ffff81003a628b48
ffff81000100e208 0000000000000001 0000000000000000 0000000001be28ab
ffff8100012638d8 ffffffff806d6180 ffffffff805478d7 ffff810001118100
ffff8100011c8970 ffff8100011c8970 ffffffff803106ab ffffffff8022b542
ffff810001118100 ffff81000100e180 ffff81003a6cfac0 ffffffff8022bc96
0000000000000000 0000000000000001 0000000000000082 ffffffff8022d472
0000000000000010 0000000000000003 ffff81000100e180 0000000000000001
0000000300000000 0000000000000082 0000000000000292 ffff81003a043d20
0000000000000001 0000000000000001 ffffffff806d6188 0000000000000000
ffff81003a6cfb70 ffffffff8022b04d 0000000300000000 ffffffff806d6188
0000000000000000 0000000000000001 0000000000000282 0000000000000003
ffff81003a6cfbb0 ffffffff8022bf0c ffff810001263890 0000000000000000
ffff81003d4b2818 ffff81003e0af880 0000000000000001 000000000053b31b
0000000000000282 ffffffff80547393 ffffffff8030eb02 000000000053b31b
ffff81003a628b48 ffffffff80545c82 00000000ffffffff ffff81003a6cfe48
ffffffff8028e1b9 ffff81003db1c001 ffff8100011c8970 ffff81003a6cfe48
ffff81003d4b2818 ffff81003e0af880 ffff81003a6cfe48 ffff81003a6cfcb8
ffffffff802859da ffff81003a6cfcc8 ffff81003a6cfcc8 ffff81003e0af880
ffff8100011041ed ffff81003a6cfe48 ffff81003d4b2818 ffff81003e0af880
ffff81003db1c000 0000000000000000 ffffffff80286fe4 000000000000004d
ffff81003db1c005 000000010000004d 000000000000004c ffff81003da5ecb0
ffffffffffffffa6 00000003002968b1 ffff81003db1c001 ffff8100011360c0
ffff81003a7a8cb0 0000000000000018 ffff810001106740 ffff81003a6cfe48
ffff81003da5ecb0 ffff81003e0af880 ffff81003db1c000 000000000053b31b
ffffffff802879c0 ffff81003da5ecb0 ffff81003e0af880 0000000000000001
0000000000008124 0000000100000001 ffff810000000000 ffffffff8022b6ed
000000000000bec8 ffff8100012638d8 ffff810001017208 ffff81003d042780
ffff810001263890 0000000000000001 ffff81003d042e00 ffff810001017180
000000000053b31b ffffffff80545c82 ffff81003aaab000 ffff810001106740
ffff810001106744 ffff81003a6cfe48 0000000000000001 ffff81003db1c000
ffffffff80287d70 ffff81003db1c000 ffffffff8028692c ffff81003db1c000
000000003db1c000 ffff81003a6cfe48 0000000000000001 00000000ffffff9c
ffffffff802885a0 00007fffdbf545c0 ffff81003a6cfef8 000000000053b31b
00007fffdbf54770 000000000053af10 ffffffff802819f6 ffff81003da5ecb0
ffff81003e0af880 0000000000000001 0000000000008124 0000000100000005
ffff810000000000 ffffffff8022b6ed 000000000000bec8 ffff8100012638d8
ffff810001017208 ffff81003d042780 ffff810001263890 0000000000000001
ffff81003d042e00 ffff810001017180 000000000053b31b ffffffff80545c82
ffff81003a6cff70 00007fffdbf545c0 000000000053bf50 000000000053b31b
ffffffff80281bc1 ffff810001263890 ffff81003d919080 ffff810001263a98
0000000100000000 ffff81003d919080 0000000049211f86 0000000007ed0e12
0000000049211f86 0000000007ed0e12 0000000049211f86 0000000000564610
00007fffdbf54b50 00007fffdbf54770 00007fffdbf54774 000000000053b31b
00007fffdbf54650 000000000053bf50 ffffffff8020b39e 0000000000000246
0000000000000000 0000000000000004 ffffff0000000000 0000000000000004
0000000000000004 00007fffdbf545c0 00007fffdbf545c0 00007fffdbf54650
0000000000000004 00002b85ced1a325 0000000000000033 0000000000000202
00007fffdbf54718 000000000000002b
Call Trace:
[<ffffffff8022c09b>] task_rq_lock+0x3d/0x6f
[<ffffffff8022d166>] try_to_wake_up+0x24/0x342
[<ffffffff8022b04d>] __wake_up_common+0x3e/0x68
[<ffffffff8054763c>] __down+0xb3/0x100
[<ffffffff8022d484>] default_wake_function+0x0/0xe
[<ffffffff8022b6ed>] update_curr+0xe2/0x101
[<ffffffff805472e5>] __down_failed+0x35/0x3a
[<ffffffff8030eb02>] nfs_permission+0x0/0x1d1
[<ffffffff805478d7>] lock_kernel+0x30/0x37
[<ffffffff803106ab>] nfs_lookup_revalidate+0x3f/0x39c
[<ffffffff8022b542>] update_curr_load+0x6c/0x82
[<ffffffff8022bc96>] __check_preempt_curr_fair+0x1d/0x38
[<ffffffff8022d472>] try_to_wake_up+0x330/0x342
[<ffffffff8022b04d>] __wake_up_common+0x3e/0x68
[<ffffffff8022bf0c>] __wake_up+0x38/0x4e
[<ffffffff80547393>] __up_wakeup+0x35/0x67
[<ffffffff8030eb02>] nfs_permission+0x0/0x1d1
[<ffffffff80545c82>] thread_return+0x0/0xd0
[<ffffffff8028e1b9>] __d_lookup+0xb0/0xf8
[<ffffffff802859da>] do_lookup+0x157/0x1ae
[<ffffffff80286fe4>] __link_path_walk+0x343/0xcc7
[<ffffffff802879c0>] link_path_walk+0x58/0xe0
[<ffffffff8022b6ed>] update_curr+0xe2/0x101
[<ffffffff80545c82>] thread_return+0x0/0xd0
[<ffffffff80287d70>] do_path_lookup+0x1a0/0x1c3
[<ffffffff8028692c>] getname+0x14c/0x190
[<ffffffff802885a0>] __user_walk_fd+0x37/0x53
[<ffffffff802819f6>] vfs_stat_fd+0x1b/0x4a
[<ffffffff8022b6ed>] update_curr+0xe2/0x101
[<ffffffff80545c82>] thread_return+0x0/0xd0
[<ffffffff80281bc1>] sys_newstat+0x19/0x31
[<ffffffff8020b39e>] system_call+0x7e/0x83
Code: eb f3 c3 f0 81 2f 00 00 00 01 74 05 e8 38 62 e0 ff c3 53 48
NMI Watchdog detected LOCKUP on CPU 1
CPU 1
Modules linked in:
Pid: 1223, comm: modify_resolvco Not tainted 2.6.23-rc2-git3 #474
RIP: 0010:[<ffffffff80547774>] [<ffffffff80547774>] _spin_lock_irqsave+0x13/0x1b
RSP: 0018:ffff81003a173580 EFLAGS: 00000097
RAX: 0000000000000296 RBX: ffffffff806d6180 RCX: ffff81003a172000
RDX: 0000000000006362 RSI: ffff81003d919080 RDI: ffffffff806d6188
RBP: ffffffff806d6188 R08: ffff81003a172000 R09: ffff810001012800
R10: 00000000ffffffff R11: ffffffff8030eb02 R12: 0000000000000296
R13: ffff81003d919080 R14: ffff810001017180 R15: 000000000053b31b
FS: 00002b180ec08f60(0000) GS:ffff81003e0ffcc0(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 000000000057a028 CR3: 000000003d7c1000 CR4: 00000000000006e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process modify_resolvco (pid: 1223, threadinfo ffff81003a172000, task ffff81003d919080)
Stack: ffffffff80547679 0000000000000001 ffff81003d919080 ffffffff8022d484
ffff81003a7ff530 ffffffff806d6190 ffff810001017208 0000000000000001
ffff81003d919080 0000000000000001 0000000000000001 ffff81003d042780
ffffffff805472e5 ffffffffffffffff ffffffffffffffff 0000000000000000
ffff81003a172000 0000000000000000 ffff81003a172000 ffff81003a173fd8
ffff81003d919080 ffffffff806d6180 ffffffff8054792f 000000000053b31b
ffffffffffffff80 ffff81003a1736e8 ffffffff80545d2c ffff81003d85c008
ffffffff80536aea ffff81003dbb6030 ffff81003d919080 ffff810001263890
ffff81003d919280 0000000100000000 ffff810001263890 0000000000004040
ffff81003d9190c8 0000000000000286 ffffffff8023b4a9 0000000000000068
ffff81003a173748 ffff81003a173758 ffff810001004728 0000000000000001
ffffffff805387dd 0000000000000000 ffffffff805387ff 0000000000000246
ffffffff805466ba ffffffff80535e23 ffff81003d754770 ffff81003d754680
0000000000000001 0000000000000001 ffffffff805387dd ffff81003da48080
ffffffff80546754 ffff81003d754770 0000000000000001 0000000000000000
ffff81003d919080 ffffffff80244b74 ffff81003a173770 ffff81003a173770
ffffffff80533702 0000000000000000 0000000000000000 ffff81003d754770
ffff81003a173828 ffff81003a173938 ffffffff80538ca3 ffff81003d754680
ffff81003d754680 ffffffff8059cb90 ffffffff80533e91 0000000000000000
ffffffff805466da ffffffff80535e23 ffff81003a173868 ffff81003da48080
ffff81003d82bac0 ffff81003d80f600 ffffffff80533f11 ffff81003a173868
ffffffff80315fc8 ffffffff806cf578 ffff81003d82ba38 ffff81003a173868
0000000000000000 ffff81003d82bb68 ffff81003d82bb68 ffff81003d82ba30
ffffffff80312177 ffff81003d690000 0000000000000296 ffff81003dbb6000
ffff81003db4e7c0 ffff81003d69b800 ffffffff80539785 ffff81003d85c750
ffff81003db4e7c0 ffff81003d69b800 ffffffff8053987c ffffffff8059cb90
0000000000000282 ffff81003d69b800 ffffffff8053864d 0000000000000000
0000000000000000 ffff81003a173928 ffffffff8022b6ed 000000000000384b
ffff810001017208 0000000000000001 ffff81003d9190c8 ffff810001017180
ffff810001017208 00000000fffeebb6 ffffffff8022b9cf 0000000000000000
ffff81003d919080 ffff81003d919080 ffff81003d443d68 ffff8100012e0cb0
ffff81003d82bb68 ffff81003a173d58 ffff8100012e0b10 ffff81003a173bb8
ffffffff803107d6 0000000000000092 ffffffff8022d472 000000003a173a38
0000000000000003 ffff810001017180 0000000000000000 00000000002e12a5
0000000000000092 ffff810001263890 ffffffff806d6190 0000000000000001
0000000000000001 ffffffff806d6188 0000000000000000 ffff81003a173a38
ffffffff8022b04d 0000000300000000 ffffffff806d6180 0000000000000292
ffff81003d919080 0000000000000001 ffff81003a173e08 0000000000000292
ffffffff80547647 0000000000000001 ffff81003d919080 ffffffff8022d484
0000000000100100 0000000000200200 80f7518d00000001 01fa80c0950f20f9
ffff81003d919080 0000000000000000 ffff81003d4b2818 ffff81003e0af880
ffffffff805472e5 ffffffff8030eb02 0000000000000011 ffff81003a173e08
ffff81003dab07c0 0000000000000000 ffff81003a173d58 ffff81003a173fd8
0000000000000001 0000000000000000 ffffffff8028e1b9 ffff81003a173e0d
ffff8100012e0cb0 ffff81003a173d58 ffff81003d443d68 ffff81003e0af880
ffff81003a173d58 ffff81003a173bb8 ffffffff802859da 3100000005beffff
ffff81003a173bc8 ffff81003e0af880 ffff8100011041ed ffff81003a173d58
ffff81003d443d68 ffff81003e0af880 ffff81003a173e08 0000000000000000
ffffffff802874dd ffffffff80264196 ffff81003a173e11 000001010000b400
0000000000000282 9066906666ef894c 00000044fe1ebbe8 000000040182f869
ffff81003a173e0d ffff81003e0af880 ffff8100012e0b10 ffff81003a173cf8
ffff810001106880 ffff81003a173d58 ffff81003da5ecb0 ffff81003e0af880
ffff81003a173e08 ffff81003a173e08 ffffffff802879c0 ffff81003da5ecb0
ffff81003e0af880 0000000000000000 00007fffffffee79 0000000100000101
ffffffff00000000 ffff81003d0d7cc0 000000003a173f58 000000000000000a
ffff81003ee1cf08 ffff81003a0d7000 00007fffffffe000 ffff81003d0d7cc2
ffff810000000000 ffff81003a173e08 0000000000000011 ffff81003dab07c0
ffff81003afe5138 ffff810001106880 ffff810001106884 ffff81003a173d58
0000000000000011 ffff81003a173e08 ffffffff80287d70 00000000000000d0
0000000000000246 ffff81003a173d58 0000000000000101 00000000ffffffe9
0000000000000011 0000000000000000 ffffffff8028875f ffffff9c3d0d7cc0
ffff81003d0d7cc0 ffff81003a173e08 ffff81003a173f58 0000000000000000
ffff81003a173f58 00000000005a2740 ffffffff80282601 ffff8100012e0b10
ffff81003e0af880 0000000000000000 00007fffffffee79 0000000100000101
ffffffff00000000 ffff81003d0d7cc0 000000003a173f58 000000000000000a
ffff81003ee1cf08 ffff81003a0d7000 00007fffffffe000 ffff81003d0d7cc2
ffff810000000000 ffff81003a173e08 0000000000000011 ffff81003dab07c0
ffff81003a173f58 00000000005a2740 ffff81003d0d7cc0 ffff81003a173e08
ffffffff802ac044 7361622f6e69622f 0000000000000068 0000000000000009
0000000000000009 00000000005a2750 ffffffff80282011 ffff81003a173e58
0000000000000000 0000000000000000 00007fffffffefe0 0000000000000000
00007ffffffffff8 686769723ee1cf08 ffffffff80282210 ffff81003d0d7cc0
ffffffff806ccc40 0000000000000000 ffff81003d0d7cc2 ffffffff806ccc00
ffff81003d0d7cc0 00000000fffffff8 ffffffff802823aa 0000000000000029
ffff81003d0d7cc0 0000000000000000 ffff81003da9e000 00000000005a67d0
ffffffff8028386c ffff81003a173f58 ffff81003da9e000 00000000005a67d0
00000000005a2740 ffff81003da9e000 00000000005a67d0 00000000005aacd0
ffffffff8020a1ec 00007fff9c6898a0 00000000005aa440 00000000005a2740
0000000000000001 00000000005aa440 ffffffff8020b767 00000000005aacd0
00000000005a67d0 00000000005aa440 0000000000000001 00000000005a2740
00000000005aa440 0000000000000206 ffffffffffffffff 0000000000000000
0000000000000000 000000000000003b ffffffffffffffff 00000000005a67d0
00000000005a2740 00000000005aa440 000000000000003b 00002b180ea6dd97
0000000000000033 0000000000000206 00007fff9c689928 000000000000002b
Call Trace:
[<ffffffff80547679>] __down+0xf0/0x100
[<ffffffff8022d484>] default_wake_function+0x0/0xe
[<ffffffff805472e5>] __down_failed+0x35/0x3a
[<ffffffff8054792f>] __reacquire_kernel_lock+0x3b/0x44
[<ffffffff80545d2c>] thread_return+0xaa/0xd0
[<ffffffff80536aea>] xs_send_kvec+0x7a/0x83
[<ffffffff8023b4a9>] lock_timer_base+0x26/0x4c
[<ffffffff805387dd>] rpc_wait_bit_interruptible+0x0/0x29
[<ffffffff805387ff>] rpc_wait_bit_interruptible+0x22/0x29
[<ffffffff805466ba>] __wait_on_bit+0x40/0x6e
[<ffffffff80535e23>] xprt_timer+0x0/0x7b
[<ffffffff805387dd>] rpc_wait_bit_interruptible+0x0/0x29
[<ffffffff80546754>] out_of_line_wait_on_bit+0x6c/0x78
[<ffffffff80244b74>] wake_bit_function+0x0/0x23
[<ffffffff80533702>] call_transmit+0x200/0x22c
[<ffffffff80538ca3>] __rpc_execute+0xf2/0x238
[<ffffffff80533e91>] rpc_do_run_task+0x89/0xa6
[<ffffffff805466da>] __wait_on_bit+0x60/0x6e
[<ffffffff80535e23>] xprt_timer+0x0/0x7b
[<ffffffff80533f11>] rpc_call_sync+0x19/0x32
[<ffffffff80315fc8>] nfs_proc_getattr+0x61/0x85
[<ffffffff80312177>] __nfs_revalidate_inode+0x14a/0x28f
[<ffffffff80539785>] put_rpccred+0x34/0xe9
[<ffffffff8053987c>] rpcauth_unbindcred+0x42/0x4e
[<ffffffff8053864d>] rpc_put_task+0x6d/0x81
[<ffffffff8022b6ed>] update_curr+0xe2/0x101
[<ffffffff8022b9cf>] dequeue_entity+0x73/0x97
[<ffffffff803107d6>] nfs_lookup_revalidate+0x16a/0x39c
[<ffffffff8022d472>] try_to_wake_up+0x330/0x342
[<ffffffff8022b04d>] __wake_up_common+0x3e/0x68
[<ffffffff80547647>] __down+0xbe/0x100
[<ffffffff8022d484>] default_wake_function+0x0/0xe
[<ffffffff805472e5>] __down_failed+0x35/0x3a
[<ffffffff8030eb02>] nfs_permission+0x0/0x1d1
[<ffffffff8028e1b9>] __d_lookup+0xb0/0xf8
[<ffffffff802859da>] do_lookup+0x157/0x1ae
[<ffffffff802874dd>] __link_path_walk+0x83c/0xcc7
[<ffffffff80264196>] zone_statistics+0x3f/0x60
[<ffffffff802879c0>] link_path_walk+0x58/0xe0
[<ffffffff80287d70>] do_path_lookup+0x1a0/0x1c3
[<ffffffff8028875f>] __path_lookup_intent_open+0x56/0x97
[<ffffffff80282601>] open_exec+0x24/0xc0
[<ffffffff802ac044>] load_script+0x1a8/0x1e8
[<ffffffff80282011>] get_arg_page+0x46/0x9c
[<ffffffff80282210>] copy_strings+0x1a9/0x1ba
[<ffffffff802823aa>] search_binary_handler+0x90/0x15f
[<ffffffff8028386c>] do_execve+0x14e/0x1a6
[<ffffffff8020a1ec>] sys_execve+0x36/0x8b
[<ffffffff8020b767>] stub_execve+0x67/0xb0
Code: 8a 17 0f ae e8 eb f3 c3 fa b8 00 01 00 00 f0 66 0f c1 07 38
* Re: [patch 1/2] spinlock: lockbreak cleanup
2007-08-11 0:07 ` [patch 1/2] spinlock: lockbreak cleanup Andi Kleen
@ 2007-08-13 7:52 ` Nick Piggin
0 siblings, 0 replies; 9+ messages in thread
From: Nick Piggin @ 2007-08-13 7:52 UTC (permalink / raw)
To: Andi Kleen
Cc: Andrew Morton, Linus Torvalds, Ingo Molnar,
Linux Kernel Mailing List
On Sat, Aug 11, 2007 at 02:07:43AM +0200, Andi Kleen wrote:
>
> Nick,
>
> These two patches make my P4 (single socket HT) test box not boot. I dropped them for now.
>
> Some oopses
Sorry, the trylock had a race where it would not work correctly :(
Have fixed it now and will do more testing and resend to you.
Thanks,
Nick
end of thread [~2007-08-13 12:57 UTC | newest]
Thread overview: 9+ messages
2007-08-08 4:22 [patch 1/2] spinlock: lockbreak cleanup Nick Piggin
2007-08-08 4:24 ` [patch 2/2] x86_64: ticket lock spinlock Nick Piggin
2007-08-08 10:26 ` Andi Kleen
2007-08-09 1:42 ` Nick Piggin
2007-08-09 9:54 ` Andi Kleen
2007-08-08 17:31 ` Valdis.Kletnieks
2007-08-09 1:40 ` Nick Piggin
2007-08-11 0:07 ` [patch 1/2] spinlock: lockbreak cleanup Andi Kleen
2007-08-13 7:52 ` Nick Piggin