public inbox for linux-kernel@vger.kernel.org
* Lockdep and rw_semaphores
@ 2011-09-11  1:34 Vladislav Bolkhovitin
  2011-09-11  2:38 ` Al Viro
  0 siblings, 1 reply; 8+ messages in thread
From: Vladislav Bolkhovitin @ 2011-09-11  1:34 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar; +Cc: linux-kernel

Hello,

It looks like lockdep is somehow over-restrictive for rw_semaphores in the case
when they are taken for read (down_read()): it requires them to follow the same
inner-outer ordering rules as plain locks.

For instance, code like:

DECLARE_RWSEM(QQ_sem);
DECLARE_RWSEM(QQ1_sem);

thread1:

        down_read(&QQ_sem);
        down_read(&QQ1_sem);

        msleep(60000);

        up_read(&QQ1_sem);
        up_read(&QQ_sem);

thread2:

	down_read(&QQ1_sem);
	down_read(&QQ_sem);

	fn();

	up_read(&QQ_sem);
	up_read(&QQ1_sem);

will trigger a "possible circular locking dependency detected" warning, like:

=======================================================
[ INFO: possible circular locking dependency detected ]
3.0.0 #20
-------------------------------------------------------
thread2/5290 is trying to acquire lock:
 (QQ_sem){.+.+..}, at: [<ffffffffa04453a5>] thread2+0x135/0x220

but task is already holding lock:
 (QQ1_sem){.+.+..}, at: [<ffffffffa0445399>] thread2+0x129/0x220

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (QQ1_sem){.+.+..}:
       [<ffffffff81080fb1>] validate_chain+0x6a1/0x7a0
       [<ffffffff8108139b>] __lock_acquire+0x2eb/0x4c0
       [<ffffffff81081c07>] lock_acquire+0x97/0x140
       [<ffffffff8141871c>] down_read+0x4c/0xa0
       [<ffffffffa0418b3e>] thread1+0x4e/0xb0
       [<ffffffff81068976>] kthread+0xa6/0xb0
       [<ffffffff81422254>] kernel_thread_helper+0x4/0x10

-> #0 (QQ_sem){.+.+..}:
       [<ffffffff810808e7>] check_prev_add+0x517/0x540
       [<ffffffff81080fb1>] validate_chain+0x6a1/0x7a0
       [<ffffffff8108139b>] __lock_acquire+0x2eb/0x4c0
       [<ffffffff81081c07>] lock_acquire+0x97/0x140
       [<ffffffff8141871c>] down_read+0x4c/0xa0
       [<ffffffffa04455c7>] thread2+0x137/0x2d0
       [<ffffffff81068976>] kthread+0xa6/0xb0
       [<ffffffff81422254>] kernel_thread_helper+0x4/0x10

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(QQ1_sem);
                               lock(QQ_sem);
                               lock(QQ1_sem);
  lock(QQ_sem);

 *** DEADLOCK ***

1 lock held by thread2/5290:
 #0:  (QQ1_sem){.+.+..}, at: [<ffffffffa0445399>] thread2+0x129/0x220
stack backtrace:
Pid: 5290, comm: thread2 Not tainted 3.0.0 #20
Call Trace:
 [<ffffffff8107e9b7>] print_circular_bug+0x107/0x110
 ...

Is this by design, or just something overlooked? I don't see how reverse
ordering of down_read()'s can lead to any deadlock. Or am I missing something?

Thanks,
Vlad

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Lockdep and rw_semaphores
  2011-09-11  1:34 Lockdep and rw_semaphores Vladislav Bolkhovitin
@ 2011-09-11  2:38 ` Al Viro
  2011-09-13  2:19   ` Vladislav Bolkhovitin
  0 siblings, 1 reply; 8+ messages in thread
From: Al Viro @ 2011-09-11  2:38 UTC (permalink / raw)
  To: Vladislav Bolkhovitin; +Cc: Peter Zijlstra, Ingo Molnar, linux-kernel

On Sat, Sep 10, 2011 at 09:34:14PM -0400, Vladislav Bolkhovitin wrote:
> Hello,
> 
> It looks like lockdep is somehow over-restrictive for rw_semaphores in the case
> when they are taken for read (down_read()): it requires them to follow the same
> inner-outer ordering rules as plain locks.
> 
> For instance, code like:
> 
> DECLARE_RWSEM(QQ_sem);
> DECLARE_RWSEM(QQ1_sem);
> 
> thread1:
> 
>         down_read(&QQ_sem);
>         down_read(&QQ1_sem);

> thread2:
> 
> 	down_read(&QQ1_sem);
> 	down_read(&QQ_sem);

> Is this by design, or just something overlooked? I don't see how reverse
> ordering of down_read()'s can lead to any deadlock. Or am I missing something?

thread1: got QQ
thread2: got QQ1
thread3: tries to do down_write() on QQ, gets blocked
thread4: tries to do down_write() on QQ1, gets blocked

Now we have thread1 that can't get QQ1 once the threads trying to get it
exclusive get a shot at it.  Thread2 is blocked in the same way on QQ.
And neither is going to release the (shared) lock they are holding, so
thread3 and thread4 are not going to get anywhere either.

IOW, ordering *is* needed there.  Note that for the same reason trying to
grab the same lock shared twice is a deadlock:

A: already holds X shared
B: blocks trying to grab it exclusive
A: tries to grab it shared again and gets stuck, since there is a pending
down_write() and we are guaranteed that writer will get served as soon
as all current readers are through; no new readers are allowed to starve it.


* Re: Lockdep and rw_semaphores
  2011-09-11  2:38 ` Al Viro
@ 2011-09-13  2:19   ` Vladislav Bolkhovitin
  2011-09-13  5:17     ` Al Viro
  2011-09-13 14:07     ` David Howells
  0 siblings, 2 replies; 8+ messages in thread
From: Vladislav Bolkhovitin @ 2011-09-13  2:19 UTC (permalink / raw)
  To: Al Viro; +Cc: Peter Zijlstra, Ingo Molnar, linux-kernel


Al Viro, on 09/10/2011 10:38 PM wrote:
> On Sat, Sep 10, 2011 at 09:34:14PM -0400, Vladislav Bolkhovitin wrote:
>> Hello,
>>
>> It looks like lockdep is somehow over-restrictive for rw_semaphores in the case
>> when they are taken for read (down_read()): it requires them to follow the same
>> inner-outer ordering rules as plain locks.
>>
>> For instance, code like:
>>
>> DECLARE_RWSEM(QQ_sem);
>> DECLARE_RWSEM(QQ1_sem);
>>
>> thread1:
>>
>>         down_read(&QQ_sem);
>>         down_read(&QQ1_sem);
> 
>> thread2:
>>
>> 	down_read(&QQ1_sem);
>> 	down_read(&QQ_sem);
> 
>> Is this by design, or just something overlooked? I don't see how reverse
>> ordering of down_read()'s can lead to any deadlock. Or am I missing something?
> 
> thread1: got QQ
> thread2: got QQ1
> thread3: tries to do down_write() on QQ, gets blocked
> thread4: tries to do down_write() on QQ1, gets blocked
> 
> Now we have thread1 that can't get QQ1 once the threads trying to get it
> exclusive get a shot at it.  Thread2 is blocked in the same way on QQ.
> And neither is going to release the (shared) lock they are holding, so
> thread3 and thread4 are not going to get anywhere either.
> 
> IOW, ordering *is* needed there.  Note that for the same reason trying to
> grab the same lock shared twice is a deadlock:
> 
> A: already holds X shared
> B: blocks trying to grab it exclusive
> A: tries to grab it shared again and gets stuck, since there is a pending
> down_write() and we are guaranteed that writer will get served as soon
> as all current readers are through; no new readers are allowed to starve it.

Sure, if nested write locking is involved, lockdep should loudly complain, but
on the first nested write, not read; what's the point of complaining about
nested reads? I may not have nested write locking at all (and I don't).

Plus, the deadlock scenario lockdep described for me

       CPU0                    CPU1
       ----                    ----
  lock(QQ1_sem);
                               lock(QQ_sem);
                               lock(QQ1_sem);
  lock(QQ_sem);

 *** DEADLOCK ***

is simply wrong. Such a deadlock can never happen, correct?

Thanks,
Vlad


* Re: Lockdep and rw_semaphores
  2011-09-13  2:19   ` Vladislav Bolkhovitin
@ 2011-09-13  5:17     ` Al Viro
  2011-09-14  1:55       ` Vladislav Bolkhovitin
  2011-09-13 14:07     ` David Howells
  1 sibling, 1 reply; 8+ messages in thread
From: Al Viro @ 2011-09-13  5:17 UTC (permalink / raw)
  To: Vladislav Bolkhovitin; +Cc: Peter Zijlstra, Ingo Molnar, linux-kernel

On Mon, Sep 12, 2011 at 10:19:20PM -0400, Vladislav Bolkhovitin wrote:

> Sure, if nested write locking is involved, lockdep should loudly complain, but
> on the first nested write, not read; what's the point of complaining about
> nested reads? I may not have nested write locking at all (and I don't).

_What_ nested write locking?  For that to happen it's enough to have
unrelated threads try to grab these suckers exclusive at the wrong time.
One thread for each lock.  Separately.  No nesting involved.

And if you never grab these locks exclusive (again, not nested - anything
grabbing them exclusive will suffice), why the fsck do you have them at
all?  rwsem that is never grabbed exclusive is rather pointless, isn't it?

> Plus, the deadlock scenario lockdep described for me
> 
>        CPU0                    CPU1
>        ----                    ----
>   lock(QQ1_sem);
>                                lock(QQ_sem);
>                                lock(QQ1_sem);
>   lock(QQ_sem);
> 
>  *** DEADLOCK ***
> 
> is simply wrong. Such a deadlock can never happen, correct?

Sigh...  OK, let's spell it out:

thread 1:
	down_read(&A); /* got it */
thread 2:
	down_read(&B); /* got it */
thread 3:
	down_write(&A);	/* blocked until thread 1 releases A */
thread 4:
	down_write(&B);	/* blocked until thread 2 releases B */
thread 1:
	down_read(&B);	/* blocked until thread 4 gets and releases B */
thread 2:
	down_read(&A);	/* blocked until thread 3 gets and released A */

and we have a deadlock.  Note that thread 3 and thread 4 are just doing
solitary down_write(), no other locks held.  And if they happen to come
a bit later, we won't get stuck.

Again, if your code does that kind of out-of-order down_read(), it is broken.
Period.  Either there's literally nothing that would ever do down_write()
(in which case you can remove the useless rwsem completely), or you can get
a full-blown deadlock there.  It's harder to hit (you need 4 tasks instead of
the 2 you would need with out-of-order mutex_lock()), but it's still quite
triggerable.

What you seem to be missing is that rwsems are fair to writers.  I.e.
down_read() blocks not only when there's a writer already holding the lock;
it blocks when there's a writer blocked trying to acquire the lock.  That's
what makes this kind of out-of-order down_read() unsafe; unrelated thread
doing a solitary down_write() is able to wedge the whole thing stuck.  Without
having acquired *any* locks - just waiting for readers to go away so it could
get the damn thing exclusive.


* Re: Lockdep and rw_semaphores
  2011-09-13  2:19   ` Vladislav Bolkhovitin
  2011-09-13  5:17     ` Al Viro
@ 2011-09-13 14:07     ` David Howells
  1 sibling, 0 replies; 8+ messages in thread
From: David Howells @ 2011-09-13 14:07 UTC (permalink / raw)
  To: Vladislav Bolkhovitin
  Cc: dhowells, Al Viro, Peter Zijlstra, Ingo Molnar, linux-kernel


Al is correct.

The in-kernel R/W semaphore implementation is written to be strictly fair.  If
the semaphore is readlocked and a writer is pending, all further readers and
writers get added to the back of the FIFO queue, thus preventing writers from
being starved.

When it comes time to wake someone up, the waiter at the front of the queue is
woken.  If that's a reader, then the next waiter on the queue may be woken up
too, if it is also a reader, and so on up to the point where a writer is
encountered.  Any readers beyond that writer in the queue have to wait for the
writer in front of them to get its share first.

David


* Re: Lockdep and rw_semaphores
  2011-09-13  5:17     ` Al Viro
@ 2011-09-14  1:55       ` Vladislav Bolkhovitin
  2011-09-14  4:40         ` Al Viro
  0 siblings, 1 reply; 8+ messages in thread
From: Vladislav Bolkhovitin @ 2011-09-14  1:55 UTC (permalink / raw)
  To: Al Viro; +Cc: Peter Zijlstra, Ingo Molnar, linux-kernel


Al Viro, on 09/13/2011 01:17 AM wrote:
> On Mon, Sep 12, 2011 at 10:19:20PM -0400, Vladislav Bolkhovitin wrote:
> 
>> Sure, if nested write locking is involved, lockdep should loudly complain, but
>> on the first nested write, not read; what's the point of complaining about
>> nested reads? I may not have nested write locking at all (and I don't).
> 
> _What_ nested write locking?  For that to happen it's enough to have
> unrelated threads try to grab these suckers exclusive at the wrong time.
> One thread for each lock.  Separately.  No nesting involved.
> 
> And if you never grab these locks exclusive (again, not nested - anything
> grabbing them exclusive will suffice), why the fsck do you have them at
> all?  rwsem that is never grabbed exclusive is rather pointless, isn't it?
> 
>> Plus, the deadlock scenario lockdep described for me
>>
>>        CPU0                    CPU1
>>        ----                    ----
>>   lock(QQ1_sem);
>>                                lock(QQ_sem);
>>                                lock(QQ1_sem);
>>   lock(QQ_sem);
>>
>>  *** DEADLOCK ***
>>
>> is simply wrong. Such a deadlock can never happen, correct?
> 
> Sigh...  OK, let's spell it out:
> 
> thread 1:
> 	down_read(&A); /* got it */
> thread 2:
> 	down_read(&B); /* got it */
> thread 3:
> 	down_write(&A);	/* blocked until thread 1 releases A */
> thread 4:
> 	down_write(&B);	/* blocked until thread 2 releases B */
> thread 1:
> 	down_read(&B);	/* blocked until thread 4 gets and releases B */
> thread 2:
> 	down_read(&A);	/* blocked until thread 3 gets and released A */
> 
> and we have a deadlock.  Note that thread 3 and thread 4 are just doing
> solitary down_write(), no other locks held.  And if they happen to come
> a bit later, we won't get stuck.
> 
> Again, if your code does that kind of out-of-order down_read(), it is broken.
> Period.  Either there's literally nothing that would ever do down_write()
> (in which case you can remove the useless rwsem completely), or you can get
> a full-blown deadlock there.  It's harder to hit (you need 4 tasks instead of
> the 2 you would need with out-of-order mutex_lock()), but it's still quite
> triggerable.
> 
> What you seem to be missing is that rwsems are fair to writers.  I.e.
> down_read() blocks not only when there's a writer already holding the lock;
> it blocks when there's a writer blocked trying to acquire the lock.  That's
> what makes this kind of out-of-order down_read() unsafe; unrelated thread
> doing a solitary down_write() is able to wedge the whole thing stuck.  Without
> having acquired *any* locks - just waiting for readers to go away so it could
> get the damn thing exclusive.

Al, David,

I see your point and agree, thanks for the explanation. But my original points are
still valid:

1. Reverse read locking isn't always a deadlock. For instance, if only one
writer thread participates and it doesn't do nested write locking, which is a
quite valid scenario, because by design rw locks are used with many readers
and a limited number of rare writers.

2. The "deadlock" explanation lockdep produces in such cases is wrong and
confusing, because in the scenario it prints there is no deadlock. If lockdep
is going to give such nice and helpful explanations, it had better give
correct ones.

So, it would be better if this warning were issued only when write locking is
detected from more than one thread on any of the participating rw locks, and
if it were illustrated with a correct explanation.

Thanks,
Vlad


* Re: Lockdep and rw_semaphores
  2011-09-14  1:55       ` Vladislav Bolkhovitin
@ 2011-09-14  4:40         ` Al Viro
  2011-09-15  2:16           ` Vladislav Bolkhovitin
  0 siblings, 1 reply; 8+ messages in thread
From: Al Viro @ 2011-09-14  4:40 UTC (permalink / raw)
  To: Vladislav Bolkhovitin; +Cc: Peter Zijlstra, Ingo Molnar, linux-kernel

On Tue, Sep 13, 2011 at 09:55:25PM -0400, Vladislav Bolkhovitin wrote:

> > thread 1:
> > 	down_read(&A); /* got it */
> > thread 2:
> > 	down_read(&B); /* got it */
> > thread 3:
> > 	down_write(&A);	/* blocked until thread 1 releases A */

That's the only thread here doing down_write() on A

> > thread 4:
> > 	down_write(&B);	/* blocked until thread 2 releases B */

... and that's the only thread here doing down_write() on B.  And neither
of those is holding any other locks.  No nesting.

> 1. Reverse read locking isn't always a deadlock. For instance, if only one
> writer thread participates and it doesn't do nested write locking, which is a
> quite valid scenario, because by design rw locks are used with many readers
> and a limited number of rare writers.

Um?  If you mean that here we have two threads doing down_write(), remember
that you've got two locks.

> So, it would be better if this warning were issued only when write locking is
> detected from more than one thread on any of the participating rw locks, and
> if it were illustrated with a correct explanation.

Which would be which threads, in the situation described above?  Again,
we have no nesting for writes and we have one thread attempting down_write()
for any given lock.  Two locks, two writers in total...


* Re: Lockdep and rw_semaphores
  2011-09-14  4:40         ` Al Viro
@ 2011-09-15  2:16           ` Vladislav Bolkhovitin
  0 siblings, 0 replies; 8+ messages in thread
From: Vladislav Bolkhovitin @ 2011-09-15  2:16 UTC (permalink / raw)
  To: Al Viro; +Cc: Peter Zijlstra, Ingo Molnar, linux-kernel


Al Viro, on 09/14/2011 12:40 AM wrote:
> On Tue, Sep 13, 2011 at 09:55:25PM -0400, Vladislav Bolkhovitin wrote:
> 
>>> thread 1:
>>> 	down_read(&A); /* got it */
>>> thread 2:
>>> 	down_read(&B); /* got it */
>>> thread 3:
>>> 	down_write(&A);	/* blocked until thread 1 releases A */
> 
> That's the only thread here doing down_write() on A
> 
>>> thread 4:
>>> 	down_write(&B);	/* blocked until thread 2 releases B */
> 
> ... and that's the only thread here doing down_write() on B.  And neither
> of those is holding any other locks.  No nesting.
> 
>> 1. Reverse read locking isn't always a deadlock. For instance, if only one
>> writer thread participates and it doesn't do nested write locking, which is a
>> quite valid scenario, because by design rw locks are used with many readers
>> and a limited number of rare writers.
> 
> Um?  If you mean that here we have two threads doing down_write(), remember
> that you've got two locks.

No, consider there is only one management thread doing either down_write(&A)
or down_write(&B), never both, with a set of worker threads doing down_read()
on any of the locks in any order.

_One_ down_write() thread, no down_write() nesting => no deadlock possibility.

>> So, it would be better if this warning were issued only when write locking is
>> detected from more than one thread on any of the participating rw locks, and
>> if it were illustrated with a correct explanation.
> 
> Which would be which threads, in the situation described above?  Again,
> we have no nesting for writes and we have one thread attempting down_write()
> for any given lock.  Two locks, two writers in total...

Consider we have N rw locks/semaphores that depend on each other and don't
follow a strict read-locking order, i.e. threads down_read() all or some of
them in any order. The lockdep warning should be issued only if it detects
that any of those locks is down_write()'d in more than one thread, or that
there is an attempt to nest a down_write() of any of the locks inside another
write lock. That would handle the rw lock dependencies more correctly than the
current behaviour.

I hope lockdep already stores enough info to recognize the current
thread/process and to separate the dependency chains, so this info can be
reused.

Vlad

