* [PATCH] bug in futex unqueue_me
@ 2006-07-27 16:41 Christian Borntraeger
From: Christian Borntraeger @ 2006-07-27 16:41 UTC (permalink / raw)
To: linux-kernel
Cc: Rusty Russell, Ingo Molnar, Thomas Gleixner, Martin Schwidefsky
From: Christian Borntraeger <borntrae@de.ibm.com>
This patch adds a barrier() in futex unqueue_me to avoid aliasing of two
pointers.
On my s390x system I saw the following oops:
Unable to handle kernel pointer dereference at virtual kernel address
0000000000000000
Oops: 0004 [#1]
CPU: 0 Not tainted
Process mytool (pid: 13613, task: 000000003ecb6ac0, ksp: 00000000366bdbd8)
Krnl PSW : 0704d00180000000 00000000003c9ac2 (_spin_lock+0xe/0x30)
Krnl GPRS: 00000000ffffffff 000000003ecb6ac0 0000000000000000 0700000000000000
0000000000000000 0000000000000000 000001fe00002028 00000000000c091f
000001fe00002054 000001fe00002054 0000000000000000 00000000366bddc0
00000000005ef8c0 00000000003d00e8 0000000000144f91 00000000366bdcb8
Krnl Code: ba 4e 20 00 12 44 b9 16 00 3e a7 84 00 08 e3 e0 f0 88 00 04
Call Trace:
([<0000000000144f90>] unqueue_me+0x40/0xe4)
[<0000000000145a0c>] do_futex+0x33c/0xc40
[<000000000014643e>] sys_futex+0x12e/0x144
[<000000000010bb00>] sysc_noemu+0x10/0x16
[<000002000003741c>] 0x2000003741c
The code in question is:
static int unqueue_me(struct futex_q *q)
{
int ret = 0;
spinlock_t *lock_ptr;
/* In the common case we don't take the spinlock, which is nice. */
retry:
lock_ptr = q->lock_ptr;
if (lock_ptr != 0) {
spin_lock(lock_ptr);
/*
* q->lock_ptr can change between reading it and
* spin_lock(), causing us to take the wrong lock. This
* corrects the race condition.
[...]
and my compiler (gcc 4.1.0) turns it into the following:
00000000000003c8 <unqueue_me>:
3c8: eb bf f0 70 00 24 stmg %r11,%r15,112(%r15)
3ce: c0 d0 00 00 00 00 larl %r13,3ce <unqueue_me+0x6>
3d0: R_390_PC32DBL .rodata+0x2a
3d4: a7 f1 1e 00 tml %r15,7680
3d8: a7 84 00 01 je 3da <unqueue_me+0x12>
3dc: b9 04 00 ef lgr %r14,%r15
3e0: a7 fb ff d0 aghi %r15,-48
3e4: b9 04 00 b2 lgr %r11,%r2
3e8: e3 e0 f0 98 00 24 stg %r14,152(%r15)
3ee: e3 c0 b0 28 00 04 lg %r12,40(%r11)
/* load q->lock_ptr into r12 */
3f4: b9 02 00 cc ltgr %r12,%r12
3f8: a7 84 00 4b je 48e <unqueue_me+0xc6>
/* if r12 is zero, jump over the locking code */
3fc: e3 20 b0 28 00 04 lg %r2,40(%r11)
/* reload q->lock_ptr into r2 - a second, separate load! */
402: c0 e5 00 00 00 00 brasl %r14,402 <unqueue_me+0x3a>
404: R_390_PC32DBL _spin_lock+0x2
/* use r2 as the parameter for spin_lock */
So the code becomes more or less:
if (q->lock_ptr != 0) spin_lock(q->lock_ptr)
instead of
if (lock_ptr != 0) spin_lock(lock_ptr)
Which caused the oops from above.
After adding a barrier gcc creates code without this problem:
[...] (the same)
3ee: e3 c0 b0 28 00 04 lg %r12,40(%r11)
3f4: b9 02 00 cc ltgr %r12,%r12
3f8: b9 04 00 2c lgr %r2,%r12
3fc: a7 84 00 48 je 48c <unqueue_me+0xc4>
400: c0 e5 00 00 00 00 brasl %r14,400 <unqueue_me+0x38>
402: R_390_PC32DBL _spin_lock+0x2
As a general note, this unqueue_me code seems a bit fishy. The retry logic
of unqueue_me only works if we can guarantee that the original value of
q->lock_ptr always points to a spinlock (otherwise we overwrite kernel
memory). We know that q->lock_ptr can change. I don't know what happens to
the original spinlock, as I am not an expert on the futex code.
CC: Martin Schwidefsky <schwidefsky@de.ibm.com>
CC: Rusty Russell <rusty@rustcorp.com.au>
CC: Ingo Molnar <mingo@redhat.com>
CC: Thomas Gleixner <tglx@timesys.com>
Signed-off-by: Christian Borntraeger <borntrae@de.ibm.com>
---
futex.c | 1 +
1 file changed, 1 insertion(+)
---
diff --git a/kernel/futex.c b/kernel/futex.c
index cf0c8e2..01aa87c 100644
--- a/kernel/futex.c
+++ b/kernel/futex.c
@@ -930,6 +930,7 @@ static int unqueue_me(struct futex_q *q)
/* In the common case we don't take the spinlock, which is nice. */
retry:
lock_ptr = q->lock_ptr;
+ barrier();
if (lock_ptr != 0) {
spin_lock(lock_ptr);
/*
--
Mit freundlichen Grüßen / Best Regards
Christian Borntraeger
Linux Software Engineer zSeries Linux & Virtualization
* Re: [PATCH] bug in futex unqueue_me
From: Ingo Molnar @ 2006-07-30 6:38 UTC (permalink / raw)
To: Christian Borntraeger
Cc: linux-kernel, Rusty Russell, Ingo Molnar, Thomas Gleixner,
Martin Schwidefsky, Andrew Morton
* Christian Borntraeger <borntrae@de.ibm.com> wrote:
> From: Christian Borntraeger <borntrae@de.ibm.com>
>
> This patch adds a barrier() in futex unqueue_me to avoid aliasing of
> two pointers.
>
> On my s390x system I saw the following oops:
> So the code becomes more or less:
> if (q->lock_ptr != 0) spin_lock(q->lock_ptr)
> instead of
> if (lock_ptr != 0) spin_lock(lock_ptr)
>
> Which caused the oops from above.
interesting, how is this possible? We do a spin_lock(lock_ptr), and
taking a spinlock is an implicit barrier(). So gcc must not delay
evaluating lock_ptr to inside the critical section. And as far as I can
see the s390 spinlock implementation goes through an 'asm volatile'
piece of code, which is a barrier already. So how could this have
happened? I have nothing against adding a barrier(), but we should first
investigate why the spin_lock() didn't act as a barrier - there might be
other, similar bugs hiding. (We rely on spin_lock()'s barrier-ness in a
fair number of places.)
> As a general note, this unqueue_me code seems a bit fishy. The retry
> logic of unqueue_me only works if we can guarantee that the original
> value of q->lock_ptr always points to a spinlock (otherwise we
> overwrite kernel memory). We know that q->lock_ptr can change. I don't
> know what happens to the original spinlock, as I am not an expert on
> the futex code.
yes, it is always a pointer to a valid spinlock, or NULL.
futex_requeue() can change the spinlock from one to another, and
wake_futex() can change it to NULL. The futex unqueue_me() fastpath is
when a futex waiter was woken - in which case it's NULL. But it can
still be non-NULL if we timed out or a signal happened, in which case we
may race with a wakeup or a requeue. futex_requeue() changes the
spinlock pointer if it holds both the old and the new spinlock. So it's
race-free as far as I can see.
Ingo
* Re: [PATCH] bug in futex unqueue_me
From: Steven Rostedt @ 2006-07-30 23:53 UTC (permalink / raw)
To: Ingo Molnar
Cc: Christian Borntraeger, linux-kernel, Rusty Russell, Ingo Molnar,
Thomas Gleixner, Martin Schwidefsky, Andrew Morton
On Sun, 2006-07-30 at 08:38 +0200, Ingo Molnar wrote:
> * Christian Borntraeger <borntrae@de.ibm.com> wrote:
>
> > From: Christian Borntraeger <borntrae@de.ibm.com>
> >
> > This patch adds a barrier() in futex unqueue_me to avoid aliasing of
> > two pointers.
> >
> > On my s390x system I saw the following oops:
>
> > So the code becomes more or less:
> > if (q->lock_ptr != 0) spin_lock(q->lock_ptr)
> > instead of
> > if (lock_ptr != 0) spin_lock(lock_ptr)
> >
> > Which caused the oops from above.
>
> interesting, how is this possible? We do a spin_lock(lock_ptr), and
> taking a spinlock is an implicit barrier(). So gcc must not delay
> evaluating lock_ptr to inside the critical section. And as far as I can
> see the s390 spinlock implementation goes through an 'asm volatile'
> piece of code, which is a barrier already. So how could this have
> happened? I have nothing against adding a barrier(), but we should first
> investigate why the spin_lock() didn't act as a barrier - there might be
> other, similar bugs hiding. (We rely on spin_lock()'s barrier-ness in a
> fair number of places.)
Ingo, the spin_lock is probably still a barrier, but is it a barrier for
itself? That is, the problem here is that the compiler re-evaluates the
lock_ptr temporary that is passed into the spin_lock. So does a spin_lock
protect itself, or just the stuff inside it?
Here we need a barrier to keep gcc from optimizing the load of the lock
pointer itself, not what the lock is protecting.
I don't know of other areas in the kernel that have a dynamic spin lock
like this and need the same protection.
-- Steve
* Re: [PATCH] bug in futex unqueue_me
From: Christian Borntraeger @ 2006-07-31 8:04 UTC (permalink / raw)
To: Ingo Molnar
Cc: linux-kernel, Rusty Russell, Ingo Molnar, Thomas Gleixner,
Martin Schwidefsky, Andrew Morton
On Sunday 30 July 2006 08:38, Ingo Molnar wrote:
> interesting, how is this possible? We do a spin_lock(lock_ptr), and
> taking a spinlock is an implicit barrier(). So gcc must not delay
> evaluating lock_ptr to inside the critical section. And as far as i can
> see the s390 spinlock implementation goes through an 'asm volatile'
> piece of code, which is a barrier already. So how could this have
> happened?
spin_lock is a barrier, but isn't its barrier-ness too late here? The compiler
reloads the value of lock_ptr after the "if (lock_ptr)" test and *before*
calling spin_lock(lock_ptr):
3ee: e3 c0 b0 28 00 04 lg %r12,40(%r11)
load q->lock_ptr into r12
3f4: b9 02 00 cc ltgr %r12,%r12
load and test r12
3f8: a7 84 00 4b je 48e <unqueue_me+0xc6>
if r12 == 0, jump away
3fc: e3 20 b0 28 00 04 lg %r2,40(%r11)
reload q->lock_ptr into r2
402: c0 e5 00 00 00 00 brasl %r14,402 <unqueue_me+0x3a>
404: R_390_PC32DBL _spin_lock+0x2
call spin_lock (r2 is the first parameter)
I really don't know why the compiler reloads lock_ptr from memory at all, but
I will talk to our compiler guys to find out.
> I have nothing against adding a barrier(), but we should first
> investigate why the spin_lock() didnt act as a barrier - there might be
> other, similar bugs hiding. (we rely on spin_lock()s barrier-ness in a
> fair number of places)
See above. I think the barrier must come before the "if (lock_ptr)" test, not
after it.
> yes, it is always a pointer to a valid spinlock, or NULL.
> futex_requeue() can change the spinlock from one to another, and
> wake_futex() can change it to NULL. The futex unqueue_me() fastpath is
> when a futex waiter was woken - in which case it's NULL. But it can
> still be non-NULL if we timed out or a signal happened, in which case we
> may race with a wakeup or a requeue. futex_requeue() changes the
> spinlock pointer if it holds both the old and the new spinlock. So it's
> race-free as far as i can see.
Ok, looks fine then.
--
Mit freundlichen Grüßen / Best Regards
Christian Borntraeger
Linux Software Engineer zSeries Linux & Virtualization
* Re: [PATCH] bug in futex unqueue_me
From: Ingo Molnar @ 2006-07-31 11:49 UTC (permalink / raw)
To: Christian Borntraeger
Cc: linux-kernel, Rusty Russell, Ingo Molnar, Thomas Gleixner,
Martin Schwidefsky, Andrew Morton, Steven Rostedt
* Christian Borntraeger <borntrae@de.ibm.com> wrote:
> On Sunday 30 July 2006 08:38, Ingo Molnar wrote:
> > interesting, how is this possible? We do a spin_lock(lock_ptr), and
> > taking a spinlock is an implicit barrier(). So gcc must not delay
> > evaluating lock_ptr to inside the critical section. And as far as I can
> > see the s390 spinlock implementation goes through an 'asm volatile'
> > piece of code, which is a barrier already. So how could this have
> > happened?
>
> spin_lock is a barrier, but isn't its barrier-ness too late here? The
> compiler reloads the value of lock_ptr after the "if (lock_ptr)" test
> and *before* calling spin_lock(lock_ptr):
ah, indeed. So your patch is a real fix. Thanks,
Acked-by: Ingo Molnar <mingo@elte.hu>
Ingo