* Lockdep incorrectly complaining about circular dependencies involving read-locks
From: Felix Kuehling @ 2016-01-21 22:20 UTC
To: linux-kernel
I'm running into circular lock dependencies reported by lockdep that
involve read-locks and should not be flagged as deadlocks at all. I
wrote a very simple test function that demonstrates the problem:
> static void test_lockdep(void)
> {
>         struct mutex fktest_m;
>         struct rw_semaphore fktest_s;
>
>         mutex_init(&fktest_m);
>         init_rwsem(&fktest_s);
>
>         down_read(&fktest_s);
>         mutex_lock(&fktest_m);
>         mutex_unlock(&fktest_m);
>         up_read(&fktest_s);
>
>         mutex_lock(&fktest_m);
>         down_read(&fktest_s);
>         up_read(&fktest_s);
>         mutex_unlock(&fktest_m);
>
>         mutex_destroy(&fktest_m);
> }
It sets up a circular lock dependency between a mutex and a read-write
semaphore. However, the semaphore is only ever locked for reading. As I
understand it, there is no potential for a deadlock here, because
multiple readers don't exclude each other. Nevertheless, I get this:
> [ 10.832547]
> [ 10.834122] ======================================================
> [ 10.840655] [ INFO: possible circular locking dependency detected ]
> [ 10.847284] 4.4.0-kfd #3 Tainted: G E
> [ 10.852356] -------------------------------------------------------
> [ 10.858989] systemd-udevd/2385 is trying to acquire lock:
> [ 10.864695] (&fktest_s){.+.+..}, at: [<ffffffffc0212463>] test_lockdep+0x9a/0xd4 [amdgpu]
> [ 10.873474]
> [ 10.873474] but task is already holding lock:
> [ 10.879633] (&fktest_m){+.+...}, at: [<ffffffffc0212457>] test_lockdep+0x8e/0xd4 [amdgpu]
> [ 10.888418]
> [ 10.888418] which lock already depends on the new lock.
> [ 10.888418]
> [ 10.897071]
> [ 10.897071] the existing dependency chain (in reverse order) is:
> [ 10.904981]
> -> #1 (&fktest_m){+.+...}:
> [ 10.909138] [<ffffffff810ada5d>] lock_acquire+0x6d/0x90
> [ 10.915309] [<ffffffff8190ddaa>] mutex_lock_nested+0x4a/0x3a0
> [ 10.922040] [<ffffffffc0212431>] test_lockdep+0x68/0xd4 [amdgpu]
> [ 10.929042] [<ffffffffc0295009>] amdgpu_init+0x9/0x7b [amdgpu]
> [ 10.935856] [<ffffffff810003f8>] do_one_initcall+0xc8/0x200
> [ 10.942449] [<ffffffff8113f5ad>] do_init_module+0x56/0x1d8
> [ 10.948919] [<ffffffff810e83b1>] load_module+0x1b91/0x2460
> [ 10.955376] [<ffffffff810e8e5b>] SyS_finit_module+0x7b/0xa0
> [ 10.961961] [<ffffffff81910572>] entry_SYSCALL_64_fastpath+0x12/0x76
> [ 10.969388]
> -> #0 (&fktest_s){.+.+..}:
> [ 10.973569] [<ffffffff810acc2a>] __lock_acquire+0x100a/0x16b0
> [ 10.980315] [<ffffffff810ada5d>] lock_acquire+0x6d/0x90
> [ 10.986502] [<ffffffff8190e884>] down_read+0x34/0x50
> [ 11.002586] [<ffffffffc0212463>] test_lockdep+0x9a/0xd4 [amdgpu]
> [ 11.009610] [<ffffffffc0295009>] amdgpu_init+0x9/0x7b [amdgpu]
> [ 11.016453] [<ffffffff810003f8>] do_one_initcall+0xc8/0x200
> [ 11.023001] [<ffffffff8113f5ad>] do_init_module+0x56/0x1d8
> [ 11.029462] [<ffffffff810e83b1>] load_module+0x1b91/0x2460
> [ 11.035927] [<ffffffff810e8e5b>] SyS_finit_module+0x7b/0xa0
> [ 11.042478] [<ffffffff81910572>] entry_SYSCALL_64_fastpath+0x12/0x76
> [ 11.049860]
> [ 11.049860] other info that might help us debug this:
> [ 11.049860]
> [ 11.058356] Possible unsafe locking scenario:
> [ 11.058356]
> [ 11.064644]        CPU0                    CPU1
> [ 11.069436]        ----                    ----
> [ 11.074229]   lock(&fktest_m);
> [ 11.077465]                                lock(&fktest_s);
> [ 11.083376]                                lock(&fktest_m);
> [ 11.089288]   lock(&fktest_s);
> [ 11.092542]
> [ 11.092542] *** DEADLOCK ***
> [ 11.092542]
> [ 11.098819] 1 lock held by systemd-udevd/2385:
> [ 11.103530] #0: (&fktest_m){+.+...}, at: [<ffffffffc0212457>] test_lockdep+0x8e/0xd4 [amdgpu]
> [ 11.112780]
> [ 11.112780] stack backtrace:
> [ 11.117388] CPU: 7 PID: 2385 Comm: systemd-udevd Tainted: G E 4.4.0-kfd #3
> [ 11.125840] Hardware name: ASUS All Series/Z97-PRO(Wi-Fi ac)/USB 3.1, BIOS 2401 04/27/2015
> [ 11.134593] ffffffff82714a90 ffff8808335d7a70 ffffffff8144e3cb ffffffff82714a90
> [ 11.142421] ffff8808335d7ab0 ffffffff8113ec7a ffff8808335d7b00 0000000000000000
> [ 11.150248] ffff8808335a9ee0 ffff8808335a9f08 ffff8808335a96c0 ffff8808335a9f08
> [ 11.158076] Call Trace:
> [ 11.160655] [<ffffffff8144e3cb>] dump_stack+0x44/0x59
> [ 11.166105] [<ffffffff8113ec7a>] print_circular_bug+0x1f9/0x207
> [ 11.172457] [<ffffffff810acc2a>] __lock_acquire+0x100a/0x16b0
> [ 11.178619] [<ffffffff810ab706>] ? mark_held_locks+0x66/0x90
> [ 11.184693] [<ffffffffc0295000>] ? 0xffffffffc0295000
> [ 11.190127] [<ffffffff810ada5d>] lock_acquire+0x6d/0x90
> [ 11.195758] [<ffffffffc0212463>] ? test_lockdep+0x9a/0xd4 [amdgpu]
> [ 11.202376] [<ffffffff8190e884>] down_read+0x34/0x50
> [ 11.207729] [<ffffffffc0212463>] ? test_lockdep+0x9a/0xd4 [amdgpu]
> [ 11.214370] [<ffffffffc0212463>] test_lockdep+0x9a/0xd4 [amdgpu]
> [ 11.220847] [<ffffffffc0295009>] amdgpu_init+0x9/0x7b [amdgpu]
> [ 11.227096] [<ffffffff810003f8>] do_one_initcall+0xc8/0x200
> [ 11.233083] [<ffffffff8113f574>] ? do_init_module+0x1d/0x1d8
> [ 11.239153] [<ffffffff8119bb4f>] ? kmem_cache_alloc+0xbf/0x180
> [ 11.245421] [<ffffffff8113f5ad>] do_init_module+0x56/0x1d8
> [ 11.251307] [<ffffffff810e83b1>] load_module+0x1b91/0x2460
> [ 11.257196] [<ffffffff810e58e0>] ? __symbol_put+0x30/0x30
> [ 11.262993] [<ffffffff810e5c06>] ? copy_module_from_fd.isra.61+0xf6/0x150
> [ 11.270261] [<ffffffff810e8e5b>] SyS_finit_module+0x7b/0xa0
> [ 11.276250] [<ffffffff81910572>] entry_SYSCALL_64_fastpath+0x12/0x76
I confirmed my results with the latest master branch of
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git, but
I was seeing the same thing on a 4.1-based kernel.
Relevant kernel config bits:
> $ grep LOCKDEP .config
> CONFIG_LOCKDEP_SUPPORT=y
> CONFIG_LOCKDEP=y
> # CONFIG_DEBUG_LOCKDEP is not set
I'm reading lockdep code now, trying to understand how lockdep works in
detail, and how it could properly deal with read-locks. But it will
probably take me a few more days or weeks to figure it out by myself (or
be convinced it can't be done). I'd appreciate feedback from someone
more familiar with the code.
Thank you,
Felix
P.S.: I'm not subscribed to the list. I'll be watching the archives or
digests, but please CC me on replies.
--
F e l i x K u e h l i n g
SMTS Software Development Engineer | Vertical Workstation/Compute
1 Commerce Valley Dr. East, Markham, ON L3T 7X6 Canada
(O) +1(289)695-1597
facebook.com/AMD | amd.com
* Re: Lockdep incorrectly complaining about circular dependencies involving read-locks
From: Felix Kuehling @ 2016-01-26 23:48 UTC
To: linux-kernel
On 16-01-21 05:20 PM, Felix Kuehling wrote:
> I'm running into circular lock dependencies reported by lockdep that
> involve read-locks and should not be flagged as deadlocks at all. I
> wrote a very simple test function that demonstrates the problem:
[snip]
> It sets up a circular lock dependency between a mutex and a read-write
> semaphore. However, the semaphore is only ever locked for reading.
The same experiment with a spinlock and an rwlock works correctly. The
difference is that the rwlock allows recursive read-locks, whereas the
rw semaphore does not. Recursive read-locks are not added to the lock
dependency graph at all, hence they don't trip the circular lock checks.
I think this handling of recursive read locks is actually incorrect,
because it fails to notice potential circular deadlocks like this:
> spinlock_t fktest_sp;
> rwlock_t fktest_rw;
>
> spin_lock_init(&fktest_sp);
> rwlock_init(&fktest_rw);
>
> read_lock(&fktest_rw);
> spin_lock(&fktest_sp);
> spin_unlock(&fktest_sp);
> read_unlock(&fktest_rw);
>
> spin_lock(&fktest_sp);
> write_lock(&fktest_rw);
> write_unlock(&fktest_rw);
> spin_unlock(&fktest_sp);
In a possible deadlock scenario, thread A takes the read lock, thread B
takes the spinlock, thread A then blocks trying to take the spinlock,
and thread B blocks trying to take the write lock.
Because recursive read-locks are not added to the dependency graph, the
dependency between the read-lock and the spinlock is never recorded.
Hence the circular dependency is not seen when the write lock is taken.
I think the whole handling of read-locks needs to be revamped for proper
detection of circular lock dependencies without false positives or false
negatives (I have demonstrated one example of each with the current
implementation).
A dependency chain must be able to distinguish whether a lock is taken
for reading or for writing. That means read locks need their own lock
(sub-)class. A new write dependency (a lock that depends on holding a
write-lock) creates a dependency on both the read and write-lock
classes, because both cases can lead to a potential deadlock. A new read
dependency (a lock that depends on holding a read-lock) only creates a
dependency on the write-lock class, because read locks don't block each
other.
I'm going to prototype my idea. I can use sub-classes to make separate
read and write lock classes. But that means I effectively have only half
as many usable lock sub-classes left for nesting annotations on lock
primitives that support read-locking. I hope that's not a problem.
Feedback welcome.
Regards,
Felix
^ permalink raw reply [flat|nested] 2+ messages in thread