public inbox for linux-xfs@vger.kernel.org
* 2.6.27.7 vanilla, project quota enabled and process stuck in D state (repeatable every time)
@ 2008-12-02 18:49 Arkadiusz Miskiewicz
  2008-12-02 19:03 ` Arkadiusz Miskiewicz
  2008-12-03  3:20 ` Dave Chinner
  0 siblings, 2 replies; 13+ messages in thread
From: Arkadiusz Miskiewicz @ 2008-12-02 18:49 UTC (permalink / raw)
  To: xfs


Hello,

I'm trying to use xfs project quota on kernel 2.6.27.7 (vanilla, no additional 
patches), on an x86_64 UP machine (SMP kernel). 

I mounted /home with usrquota,prjquota:

[arekm@arm ~]$ mount|grep xfs
/dev/sda1 on / type xfs (rw)
/dev/sda3 on /home type xfs (rw,usrquota,prjquota)
/dev/hdb on /mnt/storage2 type xfs (rw)

I played a bit with setting quotas, reporting them, and so on.

$ cat /etc/projects
10:/home/users/arekm/public_html
10:/home/users/arekm/rpm
20:/home/users/arekm/tcl
20:/home/users/arekm/tcl-test
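
For completeness, names can also be mapped to these project ids in /etc/projid
(projectname:projectid format); a hypothetical example matching the ids above
(the names here are made up):

```
webstuff:10
tcl:20
```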

For example, I ran xfs_quota -x -c "project -s 10" and allowed it to finish. I 
also started it a second time and aborted it with ctrl+c after a few seconds 
(which I assume should have no effect, since the initial -s 10 had finished 
properly earlier). I then played more with xfs_quota report and such.
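
The whole sequence boils down to roughly the following (a sketch, not a
verbatim transcript; the mount point argument may differ from what I actually
typed):

```shell
# Initialize project 10: recursively set the project id on the
# directories listed for it in /etc/projects.
xfs_quota -x -c "project -s 10" /home

# Start the same setup a second time and abort it with ctrl+c after
# a few seconds (should be a no-op, since the first run completed).
xfs_quota -x -c "project -s 10" /home

# Then assorted reporting, e.g. a project quota report:
xfs_quota -x -c "report -p" /home
```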

Now some processes that are using /home/users/arekm/rpm are hanging in D-state 
like:

SysRq : Show Blocked State
  task                        PC stack   pid father
patch         D ffff88003a7dd080     0  3971   3965
 ffff880034453cd8 0000000000000086 0000000000000000 ffff8800344770d0
 ffff880034453cd8 ffff8800354d2440 ffffffff805d0340 ffff8800354d27b8
 00000000000041ed 00000000fffc7a61 ffff8800354d27b8 0000000000000250
Call Trace:
 [<ffffffffa00af4c4>] ? kmem_zone_alloc+0x94/0xe0 [xfs]
 [<ffffffff804a51cd>] __down_write_nested+0x8d/0xd0
 [<ffffffff804a521b>] __down_write+0xb/0x10
 [<ffffffff804a4229>] down_write+0x9/0x10
 [<ffffffffa008deb6>] xfs_ilock+0x76/0x90 [xfs]
 [<ffffffffa00aa7d0>] xfs_lock_two_inodes+0x70/0x120 [xfs]
 [<ffffffffa00ac651>] xfs_remove+0x141/0x3a0 [xfs]
 [<ffffffff804a54c9>] ? _spin_lock+0x9/0x10
 [<ffffffffa00b7c13>] xfs_setup_inode+0x673/0xa00 [xfs]
 [<ffffffff802d0849>] vfs_unlink+0xf9/0x140
 [<ffffffff802d3313>] do_unlinkat+0x1a3/0x1c0
 [<ffffffff80287ce0>] ? audit_syscall_entry+0x150/0x180
 [<ffffffff802d3341>] sys_unlink+0x11/0x20
 [<ffffffff8020c5aa>] system_call_fastpath+0x16/0x1b


After a reboot I retried with patch (which accesses /home/users/arekm/rpm) and 
it got stuck in D-state again. touch /home/users/arekm/rpm/xyz - doesn't get 
stuck. cp /bin/bash /home/users/arekm/rpm/ - doesn't get stuck.


I rebooted and ran a third test. D-state again for patch (this is interesting, 
since uncompressing into /home/users/arekm/rpm/ succeeds but applying a patch 
to the uncompressed tree fails).

SysRq : Show Blocked State
  task                        PC stack   pid father
patch         D ffff88003a7d07c0     0  3631   3625
 ffff88003443bcd8 0000000000000082 0000000000000000 ffff88003444e4a8
 ffff88003443bcd8 ffff88003553e500 ffffffff805d0340 ffff88003553e878
 00000000000041ed 00000000fffc4120 ffff88003553e878 0000000000000250
Call Trace:
 [<ffffffffa00af4c4>] ? kmem_zone_alloc+0x94/0xe0 [xfs]
 [<ffffffff804a51cd>] __down_write_nested+0x8d/0xd0
 [<ffffffff804a521b>] __down_write+0xb/0x10
 [<ffffffff804a4229>] down_write+0x9/0x10
 [<ffffffffa008deb6>] xfs_ilock+0x76/0x90 [xfs]
 [<ffffffffa00aa7d0>] xfs_lock_two_inodes+0x70/0x120 [xfs]
 [<ffffffffa00ac651>] xfs_remove+0x141/0x3a0 [xfs]
 [<ffffffff804a54c9>] ? _spin_lock+0x9/0x10
 [<ffffffffa00b7c13>] xfs_setup_inode+0x673/0xa00 [xfs]
 [<ffffffff802d0849>] vfs_unlink+0xf9/0x140
 [<ffffffff802d3313>] do_unlinkat+0x1a3/0x1c0
 [<ffffffff80287ce0>] ? audit_syscall_entry+0x150/0x180
 [<ffffffff802d3341>] sys_unlink+0x11/0x20
 [<ffffffff8020c5aa>] system_call_fastpath+0x16/0x1b

I'm able to get it stuck in D-state quite reliably.

Any ideas?

Going to do xfs_repair just in case and retest.
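
That step will look roughly like this (a sketch; xfs_repair needs the
filesystem unmounted, and /dev/sda3 is the /home device from the mount output
above):

```shell
umount /home
xfs_repair -n /dev/sda3   # dry run first: report problems, change nothing
xfs_repair /dev/sda3      # actual repair
mount /home
```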
-- 
Arkadiusz Miśkiewicz        PLD/Linux Team
arekm / maven.pl            http://ftp.pld-linux.org/

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


* Re: 2.6.27.7 vanilla, project quota enabled and process stuck in D state (repeatable every time)
  2008-12-02 18:49 2.6.27.7 vanilla, project quota enabled and process stuck in D state (repeatable every time) Arkadiusz Miskiewicz
@ 2008-12-02 19:03 ` Arkadiusz Miskiewicz
  2008-12-03  3:20 ` Dave Chinner
  1 sibling, 0 replies; 13+ messages in thread
From: Arkadiusz Miskiewicz @ 2008-12-02 19:03 UTC (permalink / raw)
  To: xfs

On Tuesday 02 of December 2008, Arkadiusz Miskiewicz wrote:

> Going to do xfs_repair just in case and retest.

Didn't help; xfs_repair fixed two small things in phase 6, but nothing 
serious. Again D-state for patch.

xfs_repair found just these (translated from Polish):
bad nused count in free block 16777216 for directory inode 237689990
rebuilding directory inode 237689990
bad nused count in free block 16777216 for directory inode 873160686
rebuilding directory inode 873160686

-- 
Arkadiusz Miśkiewicz        PLD/Linux Team
arekm / maven.pl            http://ftp.pld-linux.org/



* Re: 2.6.27.7 vanilla, project quota enabled and process stuck in D state (repeatable every time)
  2008-12-02 18:49 2.6.27.7 vanilla, project quota enabled and process stuck in D state (repeatable every time) Arkadiusz Miskiewicz
  2008-12-02 19:03 ` Arkadiusz Miskiewicz
@ 2008-12-03  3:20 ` Dave Chinner
  2008-12-03 13:06   ` Arkadiusz Miskiewicz
  1 sibling, 1 reply; 13+ messages in thread
From: Dave Chinner @ 2008-12-03  3:20 UTC (permalink / raw)
  To: Arkadiusz Miskiewicz; +Cc: xfs

On Tue, Dec 02, 2008 at 07:49:55PM +0100, Arkadiusz Miskiewicz wrote:
> 
> Hello,
> 
> I'm trying to use xfs project quota on kernel 2.6.27.7 (vanilla, no additional 
> patches), x86_64 UP machine (SMP kernel). 
> 
> Now some processes that are using /home/users/arekm/rpm are hanging in D-state 
> like:
> 
> SysRq : Show Blocked State
>   task                        PC stack   pid father
> patch         D ffff88003a7dd080     0  3971   3965
>  ffff880034453cd8 0000000000000086 0000000000000000 ffff8800344770d0
>  ffff880034453cd8 ffff8800354d2440 ffffffff805d0340 ffff8800354d27b8
>  00000000000041ed 00000000fffc7a61 ffff8800354d27b8 0000000000000250
> Call Trace:
>  [<ffffffffa00af4c4>] ? kmem_zone_alloc+0x94/0xe0 [xfs]
>  [<ffffffff804a51cd>] __down_write_nested+0x8d/0xd0
>  [<ffffffff804a521b>] __down_write+0xb/0x10
>  [<ffffffff804a4229>] down_write+0x9/0x10
>  [<ffffffffa008deb6>] xfs_ilock+0x76/0x90 [xfs]
>  [<ffffffffa00aa7d0>] xfs_lock_two_inodes+0x70/0x120 [xfs]
>  [<ffffffffa00ac651>] xfs_remove+0x141/0x3a0 [xfs]
>  [<ffffffff804a54c9>] ? _spin_lock+0x9/0x10
>  [<ffffffffa00b7c13>] xfs_setup_inode+0x673/0xa00 [xfs]
>  [<ffffffff802d0849>] vfs_unlink+0xf9/0x140
>  [<ffffffff802d3313>] do_unlinkat+0x1a3/0x1c0
>  [<ffffffff80287ce0>] ? audit_syscall_entry+0x150/0x180
>  [<ffffffff802d3341>] sys_unlink+0x11/0x20
>  [<ffffffff8020c5aa>] system_call_fastpath+0x16/0x1b

Can you enable lockdep in your kernel and retest? That will give
us much more information about the locks that are causing problems
here....

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com



* Re: 2.6.27.7 vanilla, project quota enabled and process stuck in D state (repeatable every time)
  2008-12-03  3:20 ` Dave Chinner
@ 2008-12-03 13:06   ` Arkadiusz Miskiewicz
  2008-12-03 13:35     ` Arkadiusz Miskiewicz
  2008-12-03 21:30     ` Dave Chinner
  0 siblings, 2 replies; 13+ messages in thread
From: Arkadiusz Miskiewicz @ 2008-12-03 13:06 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs

On Wednesday 03 of December 2008, Dave Chinner wrote:
> On Tue, Dec 02, 2008 at 07:49:55PM +0100, Arkadiusz Miskiewicz wrote:
> > Hello,
> >
> > I'm trying to use xfs project quota on kernel 2.6.27.7 (vanilla, no
> > additional patches), x86_64 UP machine (SMP kernel).
> >
> > Now some processes that are using /home/users/arekm/rpm are hanging in
> > D-state like:
> >
> > SysRq : Show Blocked State
> >   task                        PC stack   pid father
> > patch         D ffff88003a7dd080     0  3971   3965
> >  ffff880034453cd8 0000000000000086 0000000000000000 ffff8800344770d0
> >  ffff880034453cd8 ffff8800354d2440 ffffffff805d0340 ffff8800354d27b8
> >  00000000000041ed 00000000fffc7a61 ffff8800354d27b8 0000000000000250
> > Call Trace:
> >  [<ffffffffa00af4c4>] ? kmem_zone_alloc+0x94/0xe0 [xfs]
> >  [<ffffffff804a51cd>] __down_write_nested+0x8d/0xd0
> >  [<ffffffff804a521b>] __down_write+0xb/0x10
> >  [<ffffffff804a4229>] down_write+0x9/0x10
> >  [<ffffffffa008deb6>] xfs_ilock+0x76/0x90 [xfs]
> >  [<ffffffffa00aa7d0>] xfs_lock_two_inodes+0x70/0x120 [xfs]
> >  [<ffffffffa00ac651>] xfs_remove+0x141/0x3a0 [xfs]
> >  [<ffffffff804a54c9>] ? _spin_lock+0x9/0x10
> >  [<ffffffffa00b7c13>] xfs_setup_inode+0x673/0xa00 [xfs]
> >  [<ffffffff802d0849>] vfs_unlink+0xf9/0x140
> >  [<ffffffff802d3313>] do_unlinkat+0x1a3/0x1c0
> >  [<ffffffff80287ce0>] ? audit_syscall_entry+0x150/0x180
> >  [<ffffffff802d3341>] sys_unlink+0x11/0x20
> >  [<ffffffff8020c5aa>] system_call_fastpath+0x16/0x1b
>
> Can you enable lockdep in your kernel and retest? That will give
> us much more information about the locks that are causing problems
> here....

With some debugging (including lockdep) enabled:

[  755.172243] SysRq : Show Blocked State
[  755.172265]   task                PC stack   pid father
[  755.172298] patch         D ef59de3c     0  3539   3533
[  755.172308]        c2f47520 00000086 00000002 ef59de3c ef59de44 00000000 
ef4b4920 0291f000
[  755.172324]        00000046 00000010 c2e24100 c0504040 ef59de44 ef59de40 
ef59de3c ef59c000
[  755.172339]        ef4b4920 ef4b4aa8 00000000 00021568 00000001 ef4b4920 
00000000 00000000
[  755.172354] Call Trace:
[  755.172359]  [<c014bc6a>] trace_hardirqs_on_caller+0xfa/0x130
[  755.172371]  [<c0392a4d>] schedule_timeout+0x8d/0xf0
[  755.172379]  [<c010910f>] native_sched_clock+0x7f/0xb0
[  755.172386]  [<c01315c0>] process_timeout+0x0/0x10
[  755.172394]  [<c0392a48>] schedule_timeout+0x88/0xf0
[  755.172411]  [<f88975db>] xfs_lock_two_inodes+0xcb/0x120 [xfs]
[  755.172451]  [<f8899526>] xfs_remove+0x136/0x3c0 [xfs]
[  755.172480]  [<c0393357>] mutex_lock_nested+0x1f7/0x290
[  755.172486]  [<c01ad067>] vfs_unlink+0x87/0x130
[  755.172494]  [<c01ad067>] vfs_unlink+0x87/0x130
[  755.172502]  [<f88a4ac6>] xfs_vn_unlink+0x36/0x80 [xfs]
[  755.172533]  [<c01ad0bd>] vfs_unlink+0xdd/0x130
[  755.172540]  [<c0394a44>] _spin_unlock+0x14/0x20
[  755.172546]  [<c01af13e>] do_unlinkat+0x14e/0x160
[  755.172552]  [<c014bc6a>] trace_hardirqs_on_caller+0xfa/0x130
[  755.172558]  [<c03949c0>] _spin_unlock_irq+0x20/0x30
[  755.172564]  [<c02400d4>] copy_to_user+0x34/0x80
[  755.172570]  [<c023fdbc>] trace_hardirqs_on_thunk+0xc/0x10
[  755.172576]  [<c03970b0>] do_page_fault+0x0/0x780
[  755.172583]  [<c014bc6a>] trace_hardirqs_on_caller+0xfa/0x130
[  755.172589]  [<c0103cbd>] sysenter_do_call+0x12/0x31

[arekm@farm ~]$ zgrep LOCKDEP /proc/config.gz
CONFIG_LOCKDEP_SUPPORT=y
CONFIG_LOCKDEP=y
# CONFIG_DEBUG_LOCKDEP is not set

I don't see anything strictly lockdep-related in dmesg, so it doesn't seem to 
be triggered.
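
One caveat (my reading of the Kconfig help, so treat it as an assumption):
CONFIG_LOCKDEP by itself is only the dependency-tracking infrastructure; the
actual "possible circular locking dependency" warnings require the proving
checks:

```
CONFIG_PROVE_LOCKING=y
CONFIG_DEBUG_LOCK_ALLOC=y
```

Without CONFIG_PROVE_LOCKING=y, lockdep can record lock dependencies but
prints no deadlock reports.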

The D-state hang also happens if I drop usrquota,prjquota, reboot, and retry 
the test. I assume something was written to disk that triggers the problem.

Note that I'm now testing on a second machine (UP i686, SMP kernel), so this 
isn't a problem unique to one machine.

> Cheers,
>
> Dave.

-- 
Arkadiusz Miśkiewicz        PLD/Linux Team
arekm / maven.pl            http://ftp.pld-linux.org/



* Re: 2.6.27.7 vanilla, project quota enabled and process stuck in D state (repeatable every time)
  2008-12-03 13:06   ` Arkadiusz Miskiewicz
@ 2008-12-03 13:35     ` Arkadiusz Miskiewicz
  2008-12-03 21:30     ` Dave Chinner
  1 sibling, 0 replies; 13+ messages in thread
From: Arkadiusz Miskiewicz @ 2008-12-03 13:35 UTC (permalink / raw)
  To: xfs

On Wednesday 03 of December 2008, Arkadiusz Miskiewicz wrote:

> I don't see anything strictly lockdep related in dmesg so it doesn't seem
> to be triggered.

Maybe /proc/lockdep will help:

all lock classes:
c0474eb0 FD:    4 BD:    1 --..: clockevents_lock
 -> [c0474ef0] tick_device_lock

c0472b30 FD:    1 BD:    4 --..: resource_lock

c0471a30 FD:    1 BD:    3 ....: set_atomicity_lock

c0472370 FD:    1 BD:   23 ....: pgd_lock

c0471fb0 FD:    2 BD:    9 +...: ioapic_lock
 -> [c0470d90] i8259A_lock

c0641140 FD:    5 BD:  152 ++..: &rq->lock
 -> [c09fbf3c] &vec->lock
 -> [c0641148] &rt_b->rt_runtime_lock
 -> [c0641150] &rt_rq->rt_runtime_lock

c09fbf3c FD:    1 BD:  153 ....: &vec->lock

c0472a78 FD:    9 BD:    2 --..: cpu_add_remove_lock
 -> [c0474510] workqueue_lock
 -> [c0474670] kthread_create_lock
 -> [c0641140] &rq->lock
 -> [c0654274] &q->lock

c0470d90 FD:    1 BD:   10 +...: i8259A_lock

c09fbf08 FD:    3 BD:    8 ++..: &irq_desc_lock_class
 -> [c0470d90] i8259A_lock
 -> [c0471fb0] ioapic_lock

c0471550 FD:    1 BD:    2 ++..: rtc_lock

c0497a94 FD:    2 BD:    1 ++..: xtime_lock
 -> [c0474d30] clocksource_lock

c0474d30 FD:    1 BD:    2 ++..: clocksource_lock

c0474d70 FD:    2 BD:    1 -+..: watchdog_lock
 -> [c0653e6c] &base->lock

c0480550 FD:    2 BD:   11 ....: tty_ldisc_lock
 -> [c0480570] tty_ldisc_wait.lock

c04729d0 FD:    1 BD:    7 ....: (console_sem).lock

c0653e6c FD:    1 BD:   61 ++..: &base->lock

c047d610 FD:    1 BD:    5 ....: vga_lock

c0472990 FD:    2 BD:    6 ....: logbuf_lock
 -> [c04729d0] (console_sem).lock

c0481890 FD:    2 BD:    1 ....: printing_lock
 -> [c047d610] vga_lock

c09fbf18 FD:    1 BD:    1 ..--: rcu_read_lock

c09fbf9c FD:    1 BD:   88 ++..: &zone->lock

c049f770 FD:    1 BD:    2 --..: bdev_lock

c04777ec FD:    1 BD:    1 --..: slub_lock

c0474ef0 FD:    3 BD:    2 ....: tick_device_lock
 -> [c0471370] i8253_lock
 -> [c0474f30] tick_broadcast_lock

c0471370 FD:    1 BD:    4 .+..: i8253_lock

c0474f30 FD:    2 BD:    3 .+..: tick_broadcast_lock
 -> [c0471370] i8253_lock

c065429c FD:    1 BD:  154 ++..: &cpu_base->lock

c0476fac FD:    1 BD:    1 --..: shrinker_rwsem

c047cb78 FD:    1 BD:    2 --..: percpu_counters_lock

c0478470 FD:    1 BD:    1 ----: file_systems_lock

c04784e0 FD:    1 BD:   69 ....: mnt_id_ida.lock

c049f750 FD:    8 BD:   68 --..: vfsmount_lock
 -> [c04784e0] mnt_id_ida.lock
 -> [c0654274] &q->lock

c0477e50 FD:    3 BD:   22 --..: sb_lock
 -> [c0a0c0c8] &idp->lock
 -> [c0477e70] unnamed_dev_lock

c0479368 FD:   22 BD:    1 ----: &type->s_umount_key
 -> [c0477e50] sb_lock
 -> [c04782b0] inode_lock
 -> [c09fbf9c] &zone->lock
 -> [c049f6f0] dcache_lock
 -> [c0a041c8] &s->s_dquot.dqonoff_mutex
 -> [c0479360] &type->s_lock_key

c0a0c0c8 FD:    1 BD:   39 ....: &idp->lock

c0477e70 FD:    2 BD:   23 --..: unnamed_dev_lock
 -> [c0a0c0c8] &idp->lock

c04782b0 FD:    1 BD:   80 --..: inode_lock

c049f6f0 FD:   13 BD:   67 --..: dcache_lock
 -> [c0a046c8] &dentry->d_lock
 -> [c049f750] vfsmount_lock
 -> [c049f714] rename_lock

c0a046c8 FD:    3 BD:   69 --..: &dentry->d_lock
 -> [c0a046c9] &dentry->d_lock/1
 -> [c0472cd0] sysctl_lock

c0a041c8 FD:    2 BD:   11 --..: &s->s_dquot.dqonoff_mutex
 -> [c0478a30] dq_list_lock

c0478a30 FD:    1 BD:   12 --..: dq_list_lock

c0479360 FD:    1 BD:    2 --..: &type->s_lock_key

c04792b0 FD:    3 BD:   22 --..: sysfs_ino_lock
 -> [c04792e0] sysfs_ino_ida.lock
 -> [c0a03e84] &n->list_lock

c04792e0 FD:    1 BD:   23 ....: sysfs_ino_ida.lock

c04791d8 FD:   17 BD:   24 --..: sysfs_mutex
 -> [c04782b0] inode_lock
 -> [c049f6f0] dcache_lock
 -> [c0a03e84] &n->list_lock
 -> [c09fbf9c] &zone->lock

c0479380 FD:   19 BD:    3 --..: &type->i_mutex_dir_key
 -> [c049f6f0] dcache_lock
 -> [c04791d8] sysfs_mutex
 -> [c049f750] vfsmount_lock
 -> [c0a03e84] &n->list_lock
 -> [c2f44c20] &writer->lock_class
 -> [c09fbf9c] &zone->lock
 -> [c04782b0] inode_lock
 -> [c0641140] &rq->lock

c04796c8 FD:   18 BD:    1 --..: &type->s_umount_key#2
 -> [c0477e50] sb_lock
 -> [c04782b0] inode_lock
 -> [c049f6f0] dcache_lock

c046fff4 FD:    1 BD:    1 ----: old_style_rw_init

c04786a8 FD:   19 BD:    1 --..: &type->s_umount_key#3
 -> [c0477e50] sb_lock
 -> [c09fbf9c] &zone->lock
 -> [c04782b0] inode_lock
 -> [c049f6f0] dcache_lock

c0478d28 FD:   19 BD:    1 --..: &type->s_umount_key#4
 -> [c0477e50] sb_lock
 -> [c04782b0] inode_lock
 -> [c09fbf9c] &zone->lock
 -> [c049f6f0] dcache_lock

c0478dd0 FD:    1 BD:   12 --..: proc_subdir_lock

c0478e20 FD:    1 BD:    9 ....: proc_inum_ida.lock

c0478df0 FD:    2 BD:    8 --..: proc_inum_lock
 -> [c0478e20] proc_inum_ida.lock

c0488898 FD:   77 BD:    1 --..: net_mutex
 -> [c0478dd0] proc_subdir_lock
 -> [c0478e20] proc_inum_ida.lock
 -> [c0478df0] proc_inum_lock
 -> [c0488eb8] rtnl_mutex
 -> [c04782b0] inode_lock
 -> [c0489a10] nl_table_lock
 -> [c0489a30] nl_table_wait.lock
 -> [c0472cd0] sysctl_lock
 -> [c0a15d50] &net->rules_mod_lock
 -> [c0653e6c] &base->lock
 -> [c0a03e84] &n->list_lock
 -> [c048ab10] raw_v4_hashinfo.lock
 -> [c09fbf9c] &zone->lock
 -> [c048e650] raw_v6_hashinfo.lock
 -> [c0a17f80] &ip6addrlbl_table.lock

c04771d0 FD:    1 BD:    5 ----: vmlist_lock

c046f1e0 FD:    1 BD:    3 --..: init_mm.page_table_lock

c0a0ea04 FD:    1 BD:    2 ....: semaphore->lock

c04723b0 FD:    1 BD:    5 --..: memtype_lock

c0a0efe8 FD:    1 BD:    1 ....: acpi_gbl_hardware_lock

c046f6e8 FD:    1 BD:    1 --..: init_task.alloc_lock

c04783b0 FD:    1 BD:    3 --..: init_task.file_lock

c0470774 FD:    1 BD:    6 ....: init_sighand.siglock

c0497a70 FD:    1 BD:   19 ....: pidmap_lock

c0497990 FD:   15 BD:    3 ..?-: tasklist_lock
 -> [c0470774] init_sighand.siglock
 -> [c0642268] &sighand->siglock
 -> [c0a03e84] &n->list_lock
 -> [c09fbf9c] &zone->lock
 -> [c0653ec0] &cwq->lock

c0471ef0 FD:    1 BD:    1 ....: vector_lock

c0474670 FD:    1 BD:    7 --..: kthread_create_lock

c0654274 FD:    6 BD:  133 ++..: &q->lock
 -> [c0641140] &rq->lock

c0642270 FD:    1 BD:    9 --..: &p->alloc_lock

c0642268 FD:   10 BD:   18 ++..: &sighand->siglock
 -> [c0654274] &q->lock
 -> [c0497a70] pidmap_lock
 -> [c0641140] &rq->lock
 -> [c065429c] &cpu_base->lock
 -> [c0a0f970] &tty->ctrl_lock
 -> [c09fbf24] &tsk->delays->lock

c0642278 FD:    6 BD:    4 ....: &p->pi_lock
 -> [c0641140] &rq->lock

c0641148 FD:    3 BD:  153 ....: &rt_b->rt_runtime_lock
 -> [c065429c] &cpu_base->lock
 -> [c0641150] &rt_rq->rt_runtime_lock

c0641150 FD:    1 BD:  154 +...: &rt_rq->rt_runtime_lock

c0652c44 FD:    1 BD:    4 --..: &cpu_hotplug.lock

c0472778 FD:    7 BD:    1 --..: sched_domains_mutex
 -> [c09fbf9c] &zone->lock
 -> [c0641140] &rq->lock

c0474510 FD:    1 BD:    3 --..: workqueue_lock

c0a03e84 FD:    1 BD:  100 ++..: &n->list_lock

c0a0c0d4 FD:    1 BD:   20 --..: &k->list_lock

c0485fd8 FD:    1 BD:   19 --..: dpm_list_mtx

c047ca90 FD:    1 BD:   20 --..: sequence_lock

c0653ec0 FD:    7 BD:   47 ++..: &cwq->lock
 -> [c0654274] &q->lock

c0653e8c FD:   20 BD:    1 --..: khelper
 -> [c0653e98] &sub_info->work

c0653e98 FD:   19 BD:    2 --..: &sub_info->work
 -> [c09fbf9c] &zone->lock
 -> [c0642270] &p->alloc_lock
 -> [c04783b0] init_task.file_lock
 -> [c0642268] &sighand->siglock
 -> [c0497a70] pidmap_lock
 -> [c0497990] tasklist_lock
 -> [c0641140] &rq->lock
 -> [c0654274] &q->lock
 -> [c0a03e84] &n->list_lock
 -> [c09fb4c0] &(kretprobe_table_locks[i].lock)

c0642260 FD:    1 BD:    1 ----: &fs->lock

c04796d8 FD:   23 BD:    4 --..: &sb->s_type->i_mutex_key
 -> [c049f6f0] dcache_lock
 -> [c04782b0] inode_lock
 -> [c0a046c8] &dentry->d_lock
 -> [c2f44c20] &writer->lock_class
 -> [c09fbf9c] &zone->lock
 -> [c09fadbc] &counter->lock
 -> [c0a03f20] &mz->lru_lock
 -> [c0a04870] &inode->i_data.tree_lock
 -> [c09fbf94] &zone->lru_lock
 -> [c0a03e84] &n->list_lock
 -> [c049f750] vfsmount_lock

c04744b0 FD:    1 BD:   14 ....: running_helpers_waitq.lock

c0488eb8 FD:   71 BD:    2 --..: rtnl_mutex
 -> [c0a0c0d4] &k->list_lock
 -> [c04792b0] sysfs_ino_lock
 -> [c04791d8] sysfs_mutex
 -> [c047e800] bus_type_sem
 -> [c0479270] sysfs_assoc_lock
 -> [c0485fd8] dpm_list_mtx
 -> [c047ca90] sequence_lock
 -> [c0a03e84] &n->list_lock
 -> [c0489a30] nl_table_wait.lock
 -> [c0653ec0] &cwq->lock
 -> [c0654274] &q->lock
 -> [c0641140] &rq->lock
 -> [c04744b0] running_helpers_waitq.lock
 -> [c0a15cf0] struct class mutex#6
 -> [c0488c30] dev_base_lock
 -> [c0a15bd0] &tbl->lock
 -> [c0472cd0] sysctl_lock
 -> [c0478dd0] proc_subdir_lock
 -> [c0478e20] proc_inum_ida.lock
 -> [c0478df0] proc_inum_lock
 -> [c0a17f28] &ndev->lock
 -> [c0a18410] &idev->mc_lock
 -> [c0a18418] &mc->mca_lock
 -> [c0a14d90] &list->lock
 -> [c0a19064] &k->k_lock
 -> [c09fbf9c] &zone->lock
 -> [c048b54c] (inetaddr_chain).rwsem
 -> [c0a15968] _xmit_LOOPBACK
 -> [c0a16b00] &in_dev->mc_list_lock
 -> [c0a16af8] &in_dev->mc_tomb_lock
 -> [c0480244] &input_pool.lock
 -> [c04802e4] &nonblocking_pool.lock
 -> [c048bbd0] fib_info_lock
 -> [c048c910] fib_hash_lock
 -> [c0a17f20] &ifa->lock
 -> [c0a17fcc] &tb->tb6_lock
 -> [c0a15868] _xmit_ETHER
 -> [c09fbf08] &irq_desc_lock_class
 -> [c047ce50] pci_lock
 -> [c0653e6c] &base->lock
 -> [f8986650] &tp->lock
 -> [f8986648] &tp->mii_lock
 -> [c0489890] qdisc_list_lock
 -> [c0a15e54] &list->lock#3
 -> [c0489688] noop_qdisc.q.lock
 -> [c0a1567c] &dev->tx_global_lock
 -> [c0a15e64] &qdisc_tx_lock

c0478110 FD:    1 BD:    1 ----: binfmt_lock

c04885c8 FD:   19 BD:    1 --..: &type->s_umount_key#5
 -> [c0477e50] sb_lock
 -> [c09fbf9c] &zone->lock
 -> [c04782b0] inode_lock
 -> [c049f6f0] dcache_lock

c04887f0 FD:    1 BD:    1 --..: proto_list_lock

c0488670 FD:    1 BD:    1 --..: net_family_lock

c0489a10 FD:    6 BD:    3 ..-?: nl_table_lock
 -> [c0480244] &input_pool.lock
 -> [c04802e4] &nonblocking_pool.lock
 -> [c0a03e84] &n->list_lock

c0489a30 FD:    1 BD:   25 .+..: nl_table_wait.lock

c047cd30 FD:    1 BD:    1 ....: gpio_lock

c0479270 FD:    1 BD:   20 --..: sysfs_assoc_lock

c0a117d8 FD:    1 BD:    1 --..: struct class mutex

c0476970 FD:    1 BD:    1 -+..: &rcu_ctrlblk.lock

c047e800 FD:    1 BD:   18 ----: bus_type_sem

c0a14b00 FD:    1 BD:    1 --..: struct class mutex#2

c0488590 FD:    1 BD:    8 ....: pci_config_lock

c0485a38 FD:    1 BD:    1 --..: sysdev_drivers_lock

c0483c50 FD:    1 BD:    1 ....: sysrq_key_table_lock

c09fc1c4 FD:    1 BD:    1 --..: struct class mutex#3

c0478038 FD:    1 BD:    3 --..: chrdevs_lock

c0a0eff0 FD:    1 BD:    1 ....: acpi_gbl_gpe_lock

c047e1d0 FD:    1 BD:    1 --..: acpi_res_lock

c04752b8 FD:    1 BD:    1 --..: pm_mutex

c047e828 FD:    1 BD:    1 --..: acpi_device_lock

c0a14438 FD:    6 BD:    3 ....: semaphore->lock#2
 -> [c0641140] &rq->lock

c0a19064 FD:    7 BD:   16 --..: &k->k_lock
 -> [c0654274] &q->lock

c047d0cc FD:    1 BD:    1 ----: pci_bus_sem

c0a0ca08 FD:    1 BD:    1 --..: struct class mutex#4

c047ce50 FD:    2 BD:    6 ....: pci_lock
 -> [c0488590] pci_config_lock

c047f198 FD:    1 BD:    2 --..: acpi_prt_lock

c0485b30 FD:    1 BD:    3 ....: probe_waitqueue.lock

c047f150 FD:    4 BD:    1 --..: acpi_link_lock
 -> [c0a0ea04] semaphore->lock
 -> [c0488590] pci_config_lock
 -> [c0a03e84] &n->list_lock

c047f590 FD:    1 BD:    1 --..: pnp_lock

c0486818 FD:   59 BD:    1 --..: serio_mutex
 -> [c0486850] serio_event_lock
 -> [c0a0c0d4] &k->list_lock
 -> [c04792b0] sysfs_ino_lock
 -> [c04791d8] sysfs_mutex
 -> [c047e800] bus_type_sem
 -> [c0479270] sysfs_assoc_lock
 -> [c0485fd8] dpm_list_mtx
 -> [c047ca90] sequence_lock
 -> [c0a03e84] &n->list_lock
 -> [c0489a30] nl_table_wait.lock
 -> [c0653ec0] &cwq->lock
 -> [c0654274] &q->lock
 -> [c0641140] &rq->lock
 -> [c04744b0] running_helpers_waitq.lock
 -> [c0a14438] semaphore->lock#2
 -> [c0a19064] &k->k_lock
 -> [c0a144c0] &serio->drv_mutex
 -> [c0485b30] probe_waitqueue.lock
 -> [c0a14430] &dev->devres_lock
 -> [c04782b0] inode_lock

c0486850 FD:    7 BD:    2 ....: serio_event_lock
 -> [c0486890] serio_wait.lock

c0486890 FD:    6 BD:    3 ....: serio_wait.lock
 -> [c0641140] &rq->lock

c0a14b84 FD:    1 BD:    1 --..: &dma_list_mutex

c0489850 FD:    1 BD:    1 ----: qdisc_mod_lock

c0489b38 FD:   14 BD:    1 --..: genl_mutex
 -> [c0489a10] nl_table_lock
 -> [c0489a30] nl_table_wait.lock
 -> [c0654274] &q->lock
 -> [c0641140] &rq->lock

c048fb90 FD:    1 BD:    1 --..: netlbl_domhsh_lock

c048fcf0 FD:    1 BD:    1 --..: netlbl_unlhsh_lock

c0472cd0 FD:    1 BD:   70 --..: sysctl_lock

c0478168 FD:   18 BD:    1 --..: &type->s_umount_key#6
 -> [c0477e50] sb_lock
 -> [c04782b0] inode_lock
 -> [c049f6f0] dcache_lock

c0478828 FD:   18 BD:    1 --..: &type->s_umount_key#7
 -> [c0477e50] sb_lock
 -> [c04782b0] inode_lock
 -> [c049f6f0] dcache_lock

c0a0f200 FD:    1 BD:    1 --..: struct class mutex#5

c0a15cf0 FD:    1 BD:    3 --..: struct class mutex#6

c0488c30 FD:    1 BD:    3 -.--: dev_base_lock

c0487e18 FD:    1 BD:    1 --..: cpufreq_governor_mutex

c048a3f0 FD:    1 BD:    1 -...: inet_proto_lock

c048bb50 FD:    1 BD:    1 -...: inetsw_lock

c0480244 FD:    2 BD:   49 ++..: &input_pool.lock
 -> [c0480370] random_read_wait.lock

c04802e4 FD:    2 BD:   14 .+..: &nonblocking_pool.lock
 -> [c04803b0] random_write_wait.lock

c04803b0 FD:    1 BD:   15 .+..: random_write_wait.lock

c0488e50 FD:    1 BD:    1 --..: neigh_tbl_lock

c0488cd0 FD:    1 BD:    3 -...: ptype_lock

c0a15bd0 FD:   11 BD:    3 -+-+: &tbl->lock
 -> [c0653e6c] &base->lock
 -> [c0480244] &input_pool.lock
 -> [c04802e4] &nonblocking_pool.lock
 -> [c0a03e84] &n->list_lock
 -> [c0a15be8] &n->lock
 -> [c0489a30] nl_table_wait.lock
 -> [c0a15be0] &(&hh->hh_lock)->lock
 -> [c0a15bd8] &list->lock#4

c0a15d50 FD:    1 BD:    2 --..: &net->rules_mod_lock

c048d1b0 FD:    1 BD:    1 -...: xfrm_state_afinfo_lock

c048d0b0 FD:    1 BD:    1 -...: xfrm_policy_afinfo_lock

c048ab10 FD:    1 BD:    2 -...: raw_v4_hashinfo.lock

c048aa30 FD:    1 BD:    1 --..: tcp_cong_list_lock

c04796d9 FD:   27 BD:    1 --..: &sb->s_type->i_mutex_key/1
 -> [c049f6f0] dcache_lock
 -> [c2f44c20] &writer->lock_class
 -> [c04782b0] inode_lock
 -> [c09fbf9c] &zone->lock
 -> [c0a03e84] &n->list_lock
 -> [c09fadbc] &counter->lock
 -> [c0a03f20] &mz->lru_lock
 -> [c0a04870] &inode->i_data.tree_lock
 -> [c09fbf94] &zone->lru_lock
 -> [c0a046c8] &dentry->d_lock
 -> [c04796d8] &sb->s_type->i_mutex_key
 -> [c0a04858] &inode->inotify_mutex

c2f44c20 FD:    1 BD:   28 --..: &writer->lock_class

c0a04888 FD:    1 BD:    4 --..: &newf->file_lock

c04796d0 FD:    1 BD:    2 --..: &sb->s_type->i_lock_key

c049f690 FD:    2 BD:    4 --..: files_lock
 -> [c04781d0] fasync_lock

c09fadbc FD:    1 BD:   40 ....: &counter->lock

c0a03f20 FD:    1 BD:   41 ....: &mz->lru_lock

c0a04870 FD:    5 BD:   29 ....: &inode->i_data.tree_lock
 -> [c0a03f20] &mz->lru_lock
 -> [c09fadbc] &counter->lock
 -> [c0a03e84] &n->list_lock
 -> [c0a0c1c0] &percpu_counter_irqsafe

c09fbf94 FD:    2 BD:   38 ....: &zone->lru_lock
 -> [c0a03f20] &mz->lru_lock

c0471918 FD:   20 BD:    1 --..: therm_cpu_lock
 -> [c04792b0] sysfs_ino_lock
 -> [c04791d8] sysfs_mutex

c0474130 FD:    1 BD:    1 .+..: uidhash_lock

c0480758 FD:   30 BD:    2 --..: misc_mtx
 -> [c0a0c0d4] &k->list_lock
 -> [c04792b0] sysfs_ino_lock
 -> [c04791d8] sysfs_mutex
 -> [c047e800] bus_type_sem
 -> [c0479270] sysfs_assoc_lock
 -> [c0485fd8] dpm_list_mtx
 -> [c047ca90] sequence_lock
 -> [c0489a30] nl_table_wait.lock
 -> [c0653ec0] &cwq->lock
 -> [c0654274] &q->lock
 -> [c0641140] &rq->lock
 -> [c04744b0] running_helpers_waitq.lock
 -> [c0a0fb3c] struct class mutex#7
 -> [c0a03e84] &n->list_lock
 -> [c0a14d90] &list->lock

c0a0fb3c FD:    1 BD:    3 --..: struct class mutex#7

c0476690 FD:    1 BD:    1 ....: audit_freelist_lock

c04766d0 FD:    1 BD:    1 ....: serial_lock

c047cab0 FD:    1 BD:    1 ....: ratelimit_lock

c0474750 FD:    1 BD:    1 ....: die_chain.lock

c0477410 FD:    1 BD:    3 --..: swap_lock

c0477668 FD:   20 BD:    1 ----: &type->s_umount_key#8
 -> [c0477e50] sb_lock
 -> [c09fbf9c] &zone->lock
 -> [c04782b0] inode_lock
 -> [c049f6f0] dcache_lock
 -> [c0a03e48] &sbinfo->stat_lock

c04763b8 FD:    2 BD:    1 --..: callback_mutex
 -> [c0642270] &p->alloc_lock

c0476e30 FD:    6 BD:    1 .+..: pdflush_lock
 -> [c0641140] &rq->lock

c09fb4c0 FD:    1 BD:   30 ....: &(kretprobe_table_locks[i].lock)

c04787c8 FD:   18 BD:    1 --..: &type->s_umount_key#9
 -> [c0477e50] sb_lock
 -> [c04782b0] inode_lock
 -> [c049f6f0] dcache_lock

c04793c8 FD:   22 BD:    1 --..: &type->s_umount_key#10
 -> [c0477e50] sb_lock
 -> [c09fbf9c] &zone->lock
 -> [c04782b0] inode_lock
 -> [c049f6f0] dcache_lock
 -> [c0a041c8] &s->s_dquot.dqonoff_mutex
 -> [c04793c0] &type->s_lock_key#2

c04793c0 FD:    1 BD:    2 --..: &type->s_lock_key#2

c0479828 FD:   19 BD:    1 --..: &type->s_umount_key#11
 -> [c0477e50] sb_lock
 -> [c09fbf9c] &zone->lock
 -> [c04782b0] inode_lock
 -> [c049f6f0] dcache_lock

c0479e08 FD:   22 BD:    1 --..: &type->s_umount_key#12
 -> [c0477e50] sb_lock
 -> [c09fbf9c] &zone->lock
 -> [c04782b0] inode_lock
 -> [c049f6f0] dcache_lock
 -> [c0a041c8] &s->s_dquot.dqonoff_mutex
 -> [c0479e00] &type->s_lock_key#3

c0479e00 FD:    1 BD:    2 --..: &type->s_lock_key#3

c047bcec FD:    2 BD:    1 --..: crypto_alg_sem
 -> [c047bd2c] (crypto_chain).rwsem

c047bd2c FD:    1 BD:    2 ..--: (crypto_chain).rwsem

c047c070 FD:    1 BD:    2 --..: elv_list_lock

c0a0ca2c FD:    1 BD:    1 --..: &drv->dynids.lock

c0a14430 FD:    1 BD:    6 ....: &dev->devres_lock

c0a0cb04 FD:    1 BD:    1 --..: struct class mutex#8

c04802a4 FD:    1 BD:    1 ....: &blocking_pool.lock

c0a0f960 FD:    1 BD:    4 --..: struct class mutex#9

c0a0fb54 FD:    1 BD:    3 --..: struct class mutex#10

c0480498 FD:   48 BD:    2 --..: tty_mutex
 -> [c04729d0] (console_sem).lock
 -> [c0472990] logbuf_lock
 -> [c09fbf9c] &zone->lock
 -> [c0480550] tty_ldisc_lock
 -> [c0a0f978] &tty->read_lock
 -> [c0654274] &q->lock
 -> [c0480570] tty_ldisc_wait.lock
 -> [c0470774] init_sighand.siglock
 -> [c0a0fb54] struct class mutex#10
 -> [c0485fd8] dpm_list_mtx
 -> [c04791d8] sysfs_mutex
 -> [c04782b0] inode_lock
 -> [c04792b0] sysfs_ino_lock
 -> [c0a03e84] &n->list_lock
 -> [c0a14430] &dev->devres_lock
 -> [c047ca90] sequence_lock
 -> [c0489a30] nl_table_wait.lock
 -> [c0653ec0] &cwq->lock
 -> [c0641140] &rq->lock
 -> [c09fb4c0] &(kretprobe_table_locks[i].lock)
 -> [c04744b0] running_helpers_waitq.lock
 -> [c0479270] sysfs_assoc_lock
 -> [c0a0c0d4] &k->list_lock
 -> [c049f690] files_lock
 -> [c0642268] &sighand->siglock
 -> [c0a14d90] &list->lock
 -> [c04793d8] &sb->s_type->i_mutex_key#9
 -> [c0a0f970] &tty->ctrl_lock

c0486cb8 FD:   46 BD:    4 --..: input_mutex
 -> [c0486cf0] input_devices_poll_wait.lock
 -> [c0a144a0] &emumousebtn_mutex_class
 -> [c0654274] &q->lock
 -> [c0641140] &rq->lock
 -> [c0a0c0d4] &k->list_lock
 -> [c04792b0] sysfs_ino_lock
 -> [c04791d8] sysfs_mutex
 -> [c047e800] bus_type_sem
 -> [c0479270] sysfs_assoc_lock
 -> [c0485fd8] dpm_list_mtx
 -> [c047ca90] sequence_lock
 -> [c0489a30] nl_table_wait.lock
 -> [c0653ec0] &cwq->lock
 -> [c04744b0] running_helpers_waitq.lock
 -> [c0a19064] &k->k_lock
 -> [c0a14620] struct class mutex#12
 -> [c0a146a7] &mousedev->mutex/31
 -> [c0a14634] &dev->mutex
 -> [c0a1462c] &dev->event_lock
 -> [c0a14d90] &list->lock
 -> [c0a03e84] &n->list_lock

c0486cf0 FD:    1 BD:    5 ....: input_devices_poll_wait.lock

c04842b8 FD:   48 BD:    2 --..: port_mutex
 -> [c0a11b88] &state->mutex
 -> [c0a0f960] struct class mutex#9
 -> [c0485fd8] dpm_list_mtx
 -> [c04791d8] sysfs_mutex
 -> [c04782b0] inode_lock
 -> [c04792b0] sysfs_ino_lock
 -> [c0a19064] &k->k_lock
 -> [c0a03e84] &n->list_lock
 -> [c0a14430] &dev->devres_lock
 -> [c047ca90] sequence_lock
 -> [c0489a30] nl_table_wait.lock
 -> [c0653ec0] &cwq->lock
 -> [c0654274] &q->lock
 -> [c0641140] &rq->lock
 -> [c04744b0] running_helpers_waitq.lock
 -> [c0479270] sysfs_assoc_lock
 -> [c0a0c0d4] &k->list_lock
 -> [c0472b30] resource_lock

c0a11b88 FD:   46 BD:    3 --..: &state->mutex
 -> [c0472b30] resource_lock
 -> [c0a11bcc] &port_lock_key
 -> [c04768f8] probing_active
 -> [c0a0c0d4] &k->list_lock
 -> [c04792b0] sysfs_ino_lock
 -> [c04791d8] sysfs_mutex
 -> [c047e800] bus_type_sem
 -> [c0479270] sysfs_assoc_lock
 -> [c0485fd8] dpm_list_mtx
 -> [c047ca90] sequence_lock
 -> [c0489a30] nl_table_wait.lock
 -> [c0653ec0] &cwq->lock
 -> [c0654274] &q->lock
 -> [c0641140] &rq->lock
 -> [c04744b0] running_helpers_waitq.lock
 -> [c0a19064] &k->k_lock
 -> [c0a0f960] struct class mutex#9
 -> [c0a03e84] &n->list_lock
 -> [c0a11be8] &irq_lists[i].lock
 -> [c09fbf08] &irq_desc_lock_class
 -> [c0478dd0] proc_subdir_lock
 -> [c0478e20] proc_inum_ida.lock
 -> [c0478df0] proc_inum_lock
 -> [c0653e6c] &base->lock
 -> [c0a04b6c] &ent->pde_unload_lock
 -> [c0480550] tty_ldisc_lock
 -> [c0a0f978] &tty->read_lock
 -> [c0a0f9a8] &tty->buf.lock

c0a11bcc FD:    1 BD:    4 ....: &port_lock_key

c04768f8 FD:   10 BD:    4 --..: probing_active
 -> [c09fbf08] &irq_desc_lock_class
 -> [c0653e6c] &base->lock
 -> [c0641140] &rq->lock

c0484478 FD:   49 BD:    1 --..: serial_mutex
 -> [c04842b8] port_mutex

c047c1f8 FD:    2 BD:    3 --..: block_class_lock
 -> [c0a0bbe0] struct class mutex#11

c0a0bbe0 FD:    1 BD:    4 --..: struct class mutex#11

c0a14620 FD:    1 BD:    5 --..: struct class mutex#12

c0486a70 FD:    7 BD:    5 ++..: i8042_lock
 -> [c0654274] &q->lock

c0a04b6c FD:    1 BD:    4 --..: &ent->pde_unload_lock

c0a146a7 FD:   10 BD:    5 --..: &mousedev->mutex/31
 -> [c0a14688] &mousedev->mutex#2

c0a144a0 FD:    7 BD:    8 --..: &emumousebtn_mutex_class
 -> [c0654274] &q->lock
 -> [c0641140] &rq->lock

c0a144c0 FD:   53 BD:    2 --..: &serio->drv_mutex
 -> [c0a144c8] &serio->lock
 -> [c0a14614] &ps2dev->cmd_mutex
 -> [c04792b0] sysfs_ino_lock
 -> [c04791d8] sysfs_mutex
 -> [c0a0c0d4] &k->list_lock
 -> [c047e800] bus_type_sem
 -> [c0479270] sysfs_assoc_lock
 -> [c0485fd8] dpm_list_mtx
 -> [c047ca90] sequence_lock
 -> [c09fbf9c] &zone->lock
 -> [c0489a30] nl_table_wait.lock
 -> [c0653ec0] &cwq->lock
 -> [c0641140] &rq->lock
 -> [c09fb4c0] &(kretprobe_table_locks[i].lock)
 -> [c0654274] &q->lock
 -> [c04744b0] running_helpers_waitq.lock
 -> [c0a19064] &k->k_lock
 -> [c0a14620] struct class mutex#12
 -> [c0486cb8] input_mutex
 -> [f8947018] psmouse_mutex

c0a144c8 FD:   18 BD:    5 ++..: &serio->lock
 -> [c0654274] &q->lock
 -> [c0a1462c] &dev->event_lock

c0a14614 FD:   22 BD:    4 --..: &ps2dev->cmd_mutex
 -> [c0a144c8] &serio->lock
 -> [c0486a70] i8042_lock
 -> [c0654274] &q->lock
 -> [c0653e6c] &base->lock
 -> [c0641140] &rq->lock
 -> [c09fb4c0] &(kretprobe_table_locks[i].lock)

c048800c FD:    1 BD:    1 --..: triggers_list_lock

c0487fac FD:    1 BD:    1 ..--: leds_list_lock

c04894f0 FD:    1 BD:    1 -...: llc_sap_list_lock

c0489c58 FD:    1 BD:    1 --..: afinfo_mutex

c048d830 FD:    1 BD:    1 -...: inetsw6_lock

c048e650 FD:    1 BD:    2 -...: raw_v6_hashinfo.lock

c048eab0 FD:    1 BD:    1 -...: inet6_proto_lock

c0a17f80 FD:    1 BD:    2 --..: &ip6addrlbl_table.lock

c0a17f28 FD:    6 BD:    3 -.-+: &ndev->lock
 -> [c0480244] &input_pool.lock
 -> [c04802e4] &nonblocking_pool.lock
 -> [c0653e6c] &base->lock

c0a18410 FD:    1 BD:    3 -+..: &idev->mc_lock

c0a18418 FD:    2 BD:    3 -+..: &mc->mca_lock
 -> [c0a15868] _xmit_ETHER

c048d950 FD:    2 BD:    1 -+..: addrconf_verify_lock
 -> [c0653e6c] &base->lock

c048d8f0 FD:    1 BD:    3 -.-+: addrconf_hash_lock

c0a18fe4 FD:    1 BD:    1 -.--: &net->packet.sklist_lock

c04752b9 FD:    1 BD:    1 --..: pm_mutex/1

c0653eb0 FD:   35 BD:    1 --..: events
 -> [c09fc1bc] &(vmstat_work)->work
 -> [c047a100] key_cleanup_task
 -> [c04818e0] console_work
 -> [c0489ec0] (expires_work).work
 -> [c0480300] (rekey_work).work
 -> [c0488da0] (dst_gc_work).work
 -> [c0a0f9a0] &(&tty->buf.work)->work

c0473c90 FD:    1 BD:    1 --..: task_capability_lock

c0477ff0 FD:    1 BD:    4 --..: cdev_lock

c0a0f978 FD:    1 BD:   10 ....: &tty->read_lock

c0480570 FD:    1 BD:   12 ....: tty_ldisc_wait.lock

c0642290 FD:   76 BD:    1 ----: &mm->mmap_sem
 -> [c0a0325c] &anon_vma->lock
 -> [c0641054] __pte_lockptr(page)
 -> [c09fbf94] &zone->lru_lock
 -> [c04796d0] &sb->s_type->i_lock_key
 -> [c2f44c20] &writer->lock_class
 -> [c0a04868] &inode->i_data.i_mmap_lock
 -> [c0642288] &mm->page_table_lock
 -> [c09fadbc] &counter->lock
 -> [c0a03f20] &mz->lru_lock
 -> [c0642291] &mm->mmap_sem/1
 -> [c0642270] &p->alloc_lock
 -> [c09fbf9c] &zone->lock
 -> [c0641140] &rq->lock
 -> [c0a03e84] &n->list_lock
 -> [f88bea30] &sb->s_type->i_lock_key#3
 -> [c0a04870] &inode->i_data.tree_lock
 -> [f88bf0e8] &(&ip->i_lock)->mr_lock
 -> [f89ce3f0] ide_lock
 -> [c09fc1ec] &page_address_htable[i].lock
 -> [c0654274] &q->lock
 -> [c09fbf24] &tsk->delays->lock
 -> [c049f690] files_lock
 -> [c049f6f0] dcache_lock
 -> [c0653e6c] &base->lock
 -> [c049eb10] kmap_lock
 -> [c0a0bbbc] &ret->lock
 -> [c04782b0] inode_lock
 -> [f88bf0f8] &ip->i_flags_lock
 -> [c0472370] pgd_lock
 -> [c09fb4c0] &(kretprobe_table_locks[i].lock)
 -> [c09f44c0] &futex_queues[i].lock
 -> [c0a04860] &inode->i_data.private_lock
 -> [c0a04c48] &bb->mutex
 -> [c04723b0] memtype_lock
 -> [c0a03e50] &info->lock
 -> [c0a04c90] &ids->rw_mutex
 -> [c0a04858] &inode->inotify_mutex

c0642288 FD:    1 BD:   21 --..: &mm->page_table_lock

c0a0325c FD:    2 BD:   19 --..: &anon_vma->lock
 -> [c0642288] &mm->page_table_lock

c0641054 FD:   14 BD:   19 --..: __pte_lockptr(page)
 -> [c0641055] __pte_lockptr(page)/1
 -> [c0a03f20] &mz->lru_lock
 -> [c09fadbc] &counter->lock
 -> [c09fbf94] &zone->lru_lock
 -> [c0a03e84] &n->list_lock
 -> [c09fbf9c] &zone->lock
 -> [c0654274] &q->lock
 -> [c0a04860] &inode->i_data.private_lock

c049eb10 FD:    3 BD:    2 --..: kmap_lock
 -> [c09fc1ec] &page_address_htable[i].lock
 -> [c09fc1e4] &pool_lock

c09fc1ec FD:    1 BD:   29 ....: &page_address_htable[i].lock

c09fc1e4 FD:    1 BD:    3 ....: &pool_lock

c0a04868 FD:    3 BD:   15 --..: &inode->i_data.i_mmap_lock
 -> [c0a0325c] &anon_vma->lock

c0642291 FD:   22 BD:    2 --..: &mm->mmap_sem/1
 -> [c0a04868] &inode->i_data.i_mmap_lock
 -> [c0a0325c] &anon_vma->lock
 -> [c0642288] &mm->page_table_lock
 -> [c0641054] __pte_lockptr(page)
 -> [c0a03e84] &n->list_lock
 -> [c09fbf9c] &zone->lock
 -> [c0641140] &rq->lock
 -> [c0472370] pgd_lock
 -> [c09fb4c0] &(kretprobe_table_locks[i].lock)
 -> [c04723b0] memtype_lock

c0641055 FD:    1 BD:   20 --..: __pte_lockptr(page)/1

c0a048ac FD:   74 BD:    2 ----: &namespace_sem
 -> [c04796d8] &sb->s_type->i_mutex_key
 -> [c049f6f0] dcache_lock
 -> [c049f750] vfsmount_lock
 -> [f88bea40] &type->i_mutex_dir_key#3
 -> [c0479380] &type->i_mutex_dir_key
 -> [c0477678] &sb->s_type->i_mutex_key#5
 -> [c04756f8] cgroup_mutex
 -> [c0478d40] &type->i_mutex_dir_key#2
 -> [c0478d38] &sb->s_type->i_mutex_key#3

c0478d40 FD:   20 BD:    3 --..: &type->i_mutex_dir_key#2
 -> [c049f6f0] dcache_lock
 -> [c0478dd0] proc_subdir_lock
 -> [c04782b0] inode_lock
 -> [c0472cd0] sysctl_lock
 -> [c0642270] &p->alloc_lock
 -> [c0a03e84] &n->list_lock
 -> [c0a046c8] &dentry->d_lock
 -> [c2f44c20] &writer->lock_class
 -> [c09fbf9c] &zone->lock
 -> [c049f750] vfsmount_lock

c047422c FD:   15 BD:    1 ----: uts_sem
 -> [c0641140] &rq->lock

c0475158 FD:   35 BD:    2 --..: module_mutex
 -> [c04771d0] vmlist_lock
 -> [c09fbf9c] &zone->lock
 -> [c0641140] &rq->lock
 -> [c0a0c0d4] &k->list_lock
 -> [c04792b0] sysfs_ino_lock
 -> [c04791d8] sysfs_mutex
 -> [c0652c44] &cpu_hotplug.lock
 -> [c0476538] lock
 -> [c047ca90] sequence_lock
 -> [c0489a30] nl_table_wait.lock
 -> [c0653ec0] &cwq->lock
 -> [c0654274] &q->lock
 -> [c04744b0] running_helpers_waitq.lock
 -> [c0479270] sysfs_assoc_lock
 -> [c0a03e84] &n->list_lock
 -> [c046f1e0] init_mm.page_table_lock
 -> [c0a14d90] &list->lock
 -> [c04792e0] sysfs_ino_ida.lock
 -> [c0472370] pgd_lock
 -> [c09fb4c0] &(kretprobe_table_locks[i].lock)

c0a14634 FD:    7 BD:    8 --..: &dev->mutex
 -> [c0654274] &q->lock
 -> [c0641140] &rq->lock

c0476538 FD:   11 BD:    3 --..: lock
 -> [c0474670] kthread_create_lock
 -> [c0641140] &rq->lock
 -> [c0654274] &q->lock
 -> [c0642278] &p->pi_lock
 -> [c09fb4c0] &(kretprobe_table_locks[i].lock)
 -> [c0472370] pgd_lock

c04751d0 FD:    1 BD:    1 ....: module_wq.lock

c0a1462c FD:   17 BD:    8 +...: &dev->event_lock
 -> [c0480244] &input_pool.lock

f89cdeb8 FD:   25 BD:    1 --..: ide_cfg_mtx
 -> [c09fbf08] &irq_desc_lock_class
 -> [c0478dd0] proc_subdir_lock
 -> [c0478e20] proc_inum_ida.lock
 -> [c0478df0] proc_inum_lock
 -> [c047cb78] percpu_counters_lock
 -> [c047c070] elv_list_lock
 -> [f89ce3f0] ide_lock

c09fc1bc FD:    2 BD:    2 --..: &(vmstat_work)->work
 -> [c0653e6c] &base->lock

f89ce74c FD:    1 BD:    1 --..: struct class mutex#13

f89ce3f0 FD:   16 BD:   27 ++..: ide_lock
 -> [c0a0bbbc] &ret->lock
 -> [c0653e6c] &base->lock
 -> [c0654274] &q->lock
 -> [c0480244] &input_pool.lock
 -> [c0653ec0] &cwq->lock
 -> [c0a03e84] &n->list_lock
 -> [c0a04978] &dio->bio_lock
 -> [c09fc1ec] &page_address_htable[i].lock
 -> [c09fbf9c] &zone->lock

c0a0bbbc FD:    1 BD:   28 +...: &ret->lock

f89cdf18 FD:    2 BD:    1 --..: ide_setting_mtx
 -> [c0a03e84] &n->list_lock

c0a04968 FD:   51 BD:    1 --..: &bdev->bd_mutex
 -> [f8810e98] idedisk_ref_mutex
 -> [c04782b0] inode_lock
 -> [c0477e50] sb_lock
 -> [c09fbf9c] &zone->lock
 -> [c09fadbc] &counter->lock
 -> [c0a03f20] &mz->lru_lock
 -> [c0a03e84] &n->list_lock
 -> [c0a04870] &inode->i_data.tree_lock
 -> [c0a04860] &inode->i_data.private_lock
 -> [f89ce3f0] ide_lock
 -> [c0654274] &q->lock
 -> [c0653e6c] &base->lock
 -> [c0641140] &rq->lock
 -> [c09fbf24] &tsk->delays->lock
 -> [c0a0c0d4] &k->list_lock
 -> [c04792b0] sysfs_ino_lock
 -> [c04791d8] sysfs_mutex
 -> [c047e800] bus_type_sem
 -> [c0479270] sysfs_assoc_lock
 -> [c0485fd8] dpm_list_mtx
 -> [c0a19064] &k->k_lock
 -> [c0a0bbe0] struct class mutex#11
 -> [c09fbf94] &zone->lru_lock
 -> [c049f770] bdev_lock
 -> [c047c1f8] block_class_lock
 -> [c0a04969] &bdev->bd_mutex/1
 -> [f8a426f8] idecd_ref_mutex
 -> [c0a0bbbc] &ret->lock
 -> [c0472370] pgd_lock
 -> [c09fb4c0] &(kretprobe_table_locks[i].lock)

f8810e98 FD:    1 BD:    3 --..: idedisk_ref_mutex

c0a04860 FD:    1 BD:   21 --..: &inode->i_data.private_lock

c09fbf24 FD:    1 BD:   28 ....: &tsk->delays->lock

c0a14a60 FD:    1 BD:    1 ..-+: &trigger->leddev_list_lock

c0a04930 FD:  123 BD:    1 --..: &p->lock
 -> [c047c1f8] block_class_lock
 -> [c0a048ac] &namespace_sem
 -> [c0642270] &p->alloc_lock
 -> [c0642268] &sighand->siglock
 -> [c09fbf24] &tsk->delays->lock
 -> [c0475158] module_mutex
 -> [c0641140] &rq->lock
 -> [c0487d50] cpufreq_driver_lock
 -> [c04773d8] swapon_mutex
 -> [c0478038] chrdevs_lock
 -> [c0480758] misc_mtx
 -> [c0480498] tty_mutex

c0478178 FD:   18 BD:    1 --..: &sb->s_type->i_mutex_key#2
 -> [c09fbf9c] &zone->lock
 -> [c04781d0] fasync_lock
 -> [c0654274] &q->lock
 -> [c0a03e84] &n->list_lock
 -> [c0641140] &rq->lock

c04781d0 FD:    1 BD:    6 ..+.: fasync_lock

c0478d38 FD:   22 BD:    3 --..: &sb->s_type->i_mutex_key#3
 -> [c0472cd0] sysctl_lock
 -> [c049f6f0] dcache_lock
 -> [c04782b0] inode_lock
 -> [c0478d48] &sb->s_type->i_alloc_sem_key
 -> [c0a046c8] &dentry->d_lock
 -> [c0642270] &p->alloc_lock
 -> [c09fbf9c] &zone->lock
 -> [c0a03e84] &n->list_lock
 -> [c0a04888] &newf->file_lock
 -> [c0478dd0] proc_subdir_lock
 -> [c2f44c20] &writer->lock_class
 -> [c049f750] vfsmount_lock

c0478d30 FD:    1 BD:    1 --..: &sb->s_type->i_lock_key#2

c0478d48 FD:    2 BD:    4 --..: &sb->s_type->i_alloc_sem_key
 -> [c04782b0] inode_lock

c0a04969 FD:    2 BD:    2 --..: &bdev->bd_mutex/1
 -> [f8810e98] idedisk_ref_mutex

c0a04970 FD:    1 BD:    2 ....: semaphore->lock#3

f88bea28 FD:   65 BD:    1 ----: &type->s_umount_key#13
 -> [c0477e50] sb_lock
 -> [c0a04970] semaphore->lock#3
 -> [c04782b0] inode_lock
 -> [c0474670] kthread_create_lock
 -> [c0641140] &rq->lock
 -> [c0654274] &q->lock
 -> [f88be8b0] xfs_buftarg_lock
 -> [c09fbf9c] &zone->lock
 -> [c0472a78] cpu_add_remove_lock
 -> [f88bf1b8] &btp->bt_hash[i].bh_lock
 -> [c09fadbc] &counter->lock
 -> [c0a03f20] &mz->lru_lock
 -> [c0a04870] &inode->i_data.tree_lock
 -> [f89ce3f0] ide_lock
 -> [c0a0bbbc] &ret->lock
 -> [c0653e6c] &base->lock
 -> [f88bf130] &mp->m_icsb_mutex
 -> [f88bf1c8] semaphore->lock#4
 -> [f88bf1b0] &btp->bt_delwrite_lock
 -> [f88bec10] xfs_err_lock
 -> [f88bec58] uuid_monitor
 -> [c0a03e84] &n->list_lock
 -> [c04771d0] vmlist_lock
 -> [f88be8f0] as_lock
 -> [f88bf0e8] &(&ip->i_lock)->mr_lock
 -> [f88bf1fc] &mp->m_sb_lock
 -> [c049f6f0] dcache_lock
 -> [f88bea20] &type->s_lock_key#4
 -> [f88bf158] &mru->lock
 -> [f88bf128] &log->l_icloglock
 -> [c0a041c8] &s->s_dquot.dqonoff_mutex
 -> [f88bf0f8] &ip->i_flags_lock
 -> [f88bf120] &log->l_grant_lock
 -> [c09fc1ec] &page_address_htable[i].lock
 -> [c0a0c1c0] &percpu_counter_irqsafe
 -> [c09fbf94] &zone->lru_lock

f88be8b0 FD:    1 BD:    2 --..: xfs_buftarg_lock

f88bf1b8 FD:    1 BD:   26 --..: &btp->bt_hash[i].bh_lock

f88bf130 FD:    2 BD:    2 --..: &mp->m_icsb_mutex
 -> [f88bf1fc] &mp->m_sb_lock

f88bf1fc FD:    1 BD:   23 --..: &mp->m_sb_lock

f88bf1c8 FD:    6 BD:   27 ....: semaphore->lock#4
 -> [c0641140] &rq->lock

f88bf1b0 FD:    7 BD:   22 --..: &btp->bt_delwrite_lock
 -> [f88bf1c8] semaphore->lock#4

f88bec10 FD:    1 BD:    2 ....: xfs_err_lock

f88bec58 FD:    1 BD:    2 --..: uuid_monitor

f88bf110 FD:    1 BD:   27 --..: &mp->m_ail_lock

c0a0bb68 FD:   19 BD:    1 --..: kblockd
 -> [c0a0c0b0] &cfqd->unplug_work
 -> [c0a0bbb4] &q->unplug_work

c0a0c0b0 FD:   17 BD:    2 --..: &cfqd->unplug_work
 -> [f89ce3f0] ide_lock
 -> [c0653e6c] &base->lock

f88be8f0 FD:    1 BD:    2 --..: as_lock

f88bf148 FD:    2 BD:   18 ----: &pag->pag_ici_lock
 -> [f88bf0f8] &ip->i_flags_lock

f88bf0e8 FD:   40 BD:   17 ----: &(&ip->i_lock)->mr_lock
 -> [f88bf148] &pag->pag_ici_lock
 -> [f88bf1f4] &mp->m_ilock
 -> [f88bf0f8] &ip->i_flags_lock
 -> [f88bf1b8] &btp->bt_hash[i].bh_lock
 -> [c09fadbc] &counter->lock
 -> [c0a03f20] &mz->lru_lock
 -> [c0a04870] &inode->i_data.tree_lock
 -> [f89ce3f0] ide_lock
 -> [c0a0bbbc] &ret->lock
 -> [c0653e6c] &base->lock
 -> [c0654274] &q->lock
 -> [c0641140] &rq->lock
 -> [f88bf1c8] semaphore->lock#4
 -> [c0a03e84] &n->list_lock
 -> [c09fbf94] &zone->lru_lock
 -> [c04782b0] inode_lock
 -> [f88bf128] &log->l_icloglock
 -> [f88bf120] &log->l_grant_lock
 -> [f88bf1fc] &mp->m_sb_lock
 -> [f88bf118] &iclog->ic_callback_lock
 -> [f88bf1b0] &btp->bt_delwrite_lock
 -> [c2f44c20] &writer->lock_class
 -> [f88bf138] &mp->m_peraglock
 -> [c09fbf9c] &zone->lock
 -> [c0472370] pgd_lock
 -> [c09fb4c0] &(kretprobe_table_locks[i].lock)
 -> [f88bf110] &mp->m_ail_lock
 -> [c0a0c19c] &sem->wait_lock

f88bf0f8 FD:    1 BD:   21 --..: &ip->i_flags_lock

f88bf1f4 FD:    8 BD:   18 --..: &mp->m_ilock
 -> [f88bf0f8] &ip->i_flags_lock

c0a04858 FD:   18 BD:   12 --..: &inode->inotify_mutex
 -> [c0a04994] &ih->mutex

f88bea40 FD:   54 BD:    6 --..: &type->i_mutex_dir_key#3
 -> [c049f6f0] dcache_lock
 -> [f88bf0e8] &(&ip->i_lock)->mr_lock
 -> [c04782b0] inode_lock
 -> [f88bf1b8] &btp->bt_hash[i].bh_lock
 -> [c09fbf94] &zone->lru_lock
 -> [f88bf1c8] semaphore->lock#4
 -> [c0654274] &q->lock
 -> [f88bf1f4] &mp->m_ilock
 -> [f88bf0f8] &ip->i_flags_lock
 -> [c09fadbc] &counter->lock
 -> [c0a03f20] &mz->lru_lock
 -> [c0a04870] &inode->i_data.tree_lock
 -> [f89ce3f0] ide_lock
 -> [c0641140] &rq->lock
 -> [c09fbf9c] &zone->lock
 -> [c0a03e84] &n->list_lock
 -> [c2f44c20] &writer->lock_class
 -> [c0653e6c] &base->lock
 -> [c0a0bbbc] &ret->lock
 -> [c0a046c8] &dentry->d_lock
 -> [c049f750] vfsmount_lock
 -> [f88bf120] &log->l_grant_lock
 -> [f88bf0e9] &(&ip->i_lock)->mr_lock/1
 -> [f88bf1b0] &btp->bt_delwrite_lock
 -> [f88bf128] &log->l_icloglock
 -> [f88bf0ea] &(&ip->i_lock)->mr_lock/2
 -> [f88bf0eb] &(&ip->i_lock)->mr_lock/3
 -> [c0472370] pgd_lock
 -> [c09fb4c0] &(kretprobe_table_locks[i].lock)
 -> [c0a0c19c] &sem->wait_lock

f88bea30 FD:    1 BD:    2 --..: &sb->s_type->i_lock_key#3

f88bf0e0 FD:   48 BD:    7 ----: &(&ip->i_iolock)->mr_lock
 -> [c09fbf9c] &zone->lock
 -> [c09fadbc] &counter->lock
 -> [c0a03f20] &mz->lru_lock
 -> [c0a03e84] &n->list_lock
 -> [c0a04870] &inode->i_data.tree_lock
 -> [f88bf0e8] &(&ip->i_lock)->mr_lock
 -> [f89ce3f0] ide_lock
 -> [c09fc1ec] &page_address_htable[i].lock
 -> [c0654274] &q->lock
 -> [c0641140] &rq->lock
 -> [c09fbf24] &tsk->delays->lock
 -> [c2f44c20] &writer->lock_class
 -> [c09fbf94] &zone->lru_lock
 -> [c0653e6c] &base->lock
 -> [c0a0bbbc] &ret->lock
 -> [c0472370] pgd_lock
 -> [c09fb4c0] &(kretprobe_table_locks[i].lock)
 -> [c04782b0] inode_lock
 -> [c0a04860] &inode->i_data.private_lock
 -> [f88bf120] &log->l_grant_lock
 -> [c0a04868] &inode->i_data.i_mmap_lock
 -> [f88bf0f8] &ip->i_flags_lock

c0a0f998 FD:   10 BD:    1 --..: &tty->termios_mutex
 -> [c0480550] tty_ldisc_lock
 -> [c0654274] &q->lock
 -> [c0a0f970] &tty->ctrl_lock

c0a0f9a8 FD:    1 BD:   14 +...: &tty->buf.lock

c0480850 FD:    1 BD:    1 ....: vt_spawn_con.lock

c0a049a0 FD:   19 BD:    1 --..: &dev->up_mutex
 -> [c0a04858] &inode->inotify_mutex
 -> [c0a03e84] &n->list_lock

c0a04994 FD:   17 BD:   13 --..: &ih->mutex
 -> [c0a0c0c8] &idp->lock
 -> [c04782b0] inode_lock
 -> [c049f6f0] dcache_lock
 -> [c0a03e84] &n->list_lock

c048d6f0 FD:    1 BD:    2 --..: unix_table_lock

c0a17a84 FD:   81 BD:    1 --..: &u->readlock
 -> [c048d6f0] unix_table_lock
 -> [f88bea41] &type->i_mutex_dir_key#3/1
 -> [c049f750] vfsmount_lock
 -> [c0477679] &sb->s_type->i_mutex_key#5/1
 -> [c0a17a8c] &u->lock
 -> [c0654274] &q->lock
 -> [c0a03e84] &n->list_lock
 -> [c0641140] &rq->lock
 -> [c0a17ea4] &af_unix_sk_receive_queue_lock_key
 -> [c09fbf9c] &zone->lock

c0a17a8c FD:   10 BD:    2 --..: &u->lock
 -> [c0654274] &q->lock
 -> [c0a15028] clock-AF_UNIX
 -> [c0a17a8d] &u->lock/1
 -> [c0a17ea4] &af_unix_sk_receive_queue_lock_key

c0a15028 FD:    1 BD:    3 -.--: clock-AF_UNIX

c0a17ea4 FD:    1 BD:    3 --..: &af_unix_sk_receive_queue_lock_key

c0a14f08 FD:    1 BD:    1 -...: slock-AF_UNIX

c0a14de8 FD:    1 BD:    1 --..: sk_lock-AF_UNIX

c0a049a8 FD:    1 BD:    1 --..: &dev->ev_mutex

c04787d0 FD:    1 BD:    1 --..: &sb->s_type->i_lock_key#4

f88bea38 FD:   61 BD:    5 --..: &sb->s_type->i_mutex_key#4
 -> [f88bf0e0] &(&ip->i_iolock)->mr_lock
 -> [f88bf120] &log->l_grant_lock
 -> [f88bf0e8] &(&ip->i_lock)->mr_lock
 -> [c0a046c8] &dentry->d_lock
 -> [f88bea48] &sb->s_type->i_alloc_sem_key#3
 -> [c04782b0] inode_lock
 -> [f88bf0ea] &(&ip->i_lock)->mr_lock/2
 -> [f88bf1c8] semaphore->lock#4
 -> [f88bf128] &log->l_icloglock
 -> [f88bf0f8] &ip->i_flags_lock
 -> [c049f6f0] dcache_lock
 -> [f88bf0eb] &(&ip->i_lock)->mr_lock/3
 -> [c09fbf9c] &zone->lock
 -> [c0a03e84] &n->list_lock
 -> [f88bf0ed] &(&ip->i_lock)->mr_lock/5
 -> [f88bf1b0] &btp->bt_delwrite_lock
 -> [f88bf138] &mp->m_peraglock
 -> [f88bf1fc] &mp->m_sb_lock
 -> [f88bf118] &iclog->ic_callback_lock
 -> [f88bf0ec] &(&ip->i_lock)->mr_lock/4
 -> [c0a0c19c] &sem->wait_lock
 -> [c0653e6c] &base->lock
 -> [c0641140] &rq->lock
 -> [c0472370] pgd_lock
 -> [c09fb4c0] &(kretprobe_table_locks[i].lock)

c04804d0 FD:    1 BD:    1 --..: redirect_lock

c0a0f980 FD:   19 BD:    1 --..: &tty->atomic_write_lock
 -> [c0654274] &q->lock
 -> [c04729d0] (console_sem).lock
 -> [c047d610] vga_lock
 -> [c0472990] logbuf_lock
 -> [c09fbf9c] &zone->lock
 -> [c0641140] &rq->lock
 -> [c0a0f978] &tty->read_lock
 -> [c0a03e84] &n->list_lock
 -> [c0480550] tty_ldisc_lock
 -> [c0a0f970] &tty->ctrl_lock

c0480370 FD:    1 BD:   50 ++..: random_read_wait.lock

f88bea20 FD:   20 BD:    2 --..: &type->s_lock_key#4
 -> [f88bf1c8] semaphore->lock#4
 -> [f88bf1b0] &btp->bt_delwrite_lock
 -> [f89ce3f0] ide_lock
 -> [c0654274] &q->lock
 -> [c0641140] &rq->lock
 -> [f88bf1b8] &btp->bt_hash[i].bh_lock

c047a788 FD:   21 BD:    1 --..: &type->s_umount_key#14
 -> [c0477e50] sb_lock
 -> [c04782b0] inode_lock
 -> [c049f6f0] dcache_lock
 -> [c0a041c8] &s->s_dquot.dqonoff_mutex
 -> [c047a780] &type->s_lock_key#5

c047a780 FD:    1 BD:    2 --..: &type->s_lock_key#5

c0a0c1b8 FD:    1 BD:    1 --..: &fbc->lock

f88bea41 FD:   69 BD:    3 --..: &type->i_mutex_dir_key#3/1
 -> [c0a046c8] &dentry->d_lock
 -> [c2f44c20] &writer->lock_class
 -> [c049f6f0] dcache_lock
 -> [f88bf0e8] &(&ip->i_lock)->mr_lock
 -> [f88bea38] &sb->s_type->i_mutex_key#4
 -> [c0a04858] &inode->inotify_mutex
 -> [c0a03e84] &n->list_lock
 -> [f88bea40] &type->i_mutex_dir_key#3
 -> [c04782b0] inode_lock
 -> [f88bf120] &log->l_grant_lock
 -> [f88bf0e0] &(&ip->i_iolock)->mr_lock
 -> [f88bf0f8] &ip->i_flags_lock
 -> [f88bf1f4] &mp->m_ilock
 -> [f88bf0e9] &(&ip->i_lock)->mr_lock/1
 -> [f88bf128] &log->l_icloglock
 -> [f88bf0ea] &(&ip->i_lock)->mr_lock/2
 -> [f88bf0ec] &(&ip->i_lock)->mr_lock/4
 -> [f88bf1c8] semaphore->lock#4
 -> [c0a04860] &inode->i_data.private_lock
 -> [c0a04870] &inode->i_data.tree_lock
 -> [c09fbf94] &zone->lru_lock
 -> [f88bf1b8] &btp->bt_hash[i].bh_lock
 -> [c0654274] &q->lock
 -> [c09fbf9c] &zone->lock
 -> [f88bf1b0] &btp->bt_delwrite_lock
 -> [f88bea42] &type->i_mutex_dir_key#3/2
 -> [c0641140] &rq->lock

c0a03e48 FD:    1 BD:   11 --..: &sbinfo->stat_lock

c0477678 FD:   30 BD:    5 --..: &sb->s_type->i_mutex_key#5
 -> [c049f6f0] dcache_lock
 -> [c2f44c20] &writer->lock_class
 -> [c0a03e48] &sbinfo->stat_lock
 -> [c04782b0] inode_lock
 -> [c0477670] &sb->s_type->i_lock_key#5
 -> [c0a046c8] &dentry->d_lock
 -> [c0a03e50] &info->lock
 -> [c09fadbc] &counter->lock
 -> [c0a03f20] &mz->lru_lock
 -> [c0a03e84] &n->list_lock
 -> [c0477688] &sb->s_type->i_alloc_sem_key#2
 -> [c09fbf9c] &zone->lock
 -> [c0641140] &rq->lock
 -> [c049f750] vfsmount_lock

c0477679 FD:   34 BD:    2 --..: &sb->s_type->i_mutex_key#5/1
 -> [c049f6f0] dcache_lock
 -> [c2f44c20] &writer->lock_class
 -> [c0a03e48] &sbinfo->stat_lock
 -> [c04782b0] inode_lock
 -> [c0477670] &sb->s_type->i_lock_key#5
 -> [c0a046c8] &dentry->d_lock
 -> [c0477678] &sb->s_type->i_mutex_key#5
 -> [c0a04858] &inode->inotify_mutex
 -> [c0a03e84] &n->list_lock
 -> [c09fbf9c] &zone->lock
 -> [c0641140] &rq->lock

c0477670 FD:    1 BD:    6 --..: &sb->s_type->i_lock_key#5

c0a14f80 FD:    1 BD:    1 -...: slock-AF_NETLINK

c0a14e60 FD:    1 BD:    1 --..: sk_lock-AF_NETLINK

c0479190 FD:    1 BD:    1 --..: sysfs_open_dirent_lock

c0a04c2c FD:   23 BD:    1 --..: &buffer->mutex
 -> [c047ca90] sequence_lock
 -> [c0a03e84] &n->list_lock
 -> [c0a14d90] &list->lock
 -> [c0489a30] nl_table_wait.lock
 -> [c09fbf9c] &zone->lock
 -> [c047ce50] pci_lock
 -> [c047f198] acpi_prt_lock

c0a03e50 FD:    7 BD:    9 --..: &info->lock
 -> [c0a03e48] &sbinfo->stat_lock
 -> [c0a04870] &inode->i_data.tree_lock

c0477688 FD:   18 BD:    6 --..: &sb->s_type->i_alloc_sem_key#2
 -> [c04782b0] inode_lock
 -> [c0a04868] &inode->i_data.i_mmap_lock
 -> [c0a04870] &inode->i_data.tree_lock
 -> [c09fbf94] &zone->lru_lock
 -> [c0a03e50] &info->lock
 -> [c0641140] &rq->lock

c0479370 FD:    1 BD:    1 --..: &sb->s_type->i_lock_key#6

c0a14d90 FD:    1 BD:   20 -+..: &list->lock

c0a150a0 FD:    1 BD:    1 -.--: clock-AF_NETLINK

c04786c0 FD:    1 BD:    1 --..: &type->i_mutex_dir_key#4

c049f714 FD:    4 BD:   68 --..: rename_lock
 -> [c0a046c8] &dentry->d_lock
 -> [c0a046c9] &dentry->d_lock/1

c0a046c9 FD:    1 BD:   70 --..: &dentry->d_lock/1

c0a0bbb4 FD:   17 BD:    2 --..: &q->unplug_work
 -> [f89ce3f0] ide_lock
 -> [c0653e6c] &base->lock

c047494c FD:    1 BD:    1 --..: (cpu_dma_lat_notifier).rwsem

c0487520 FD:    1 BD:    2 ....: thermal_cdev_idr.lock

c0487578 FD:    2 BD:    1 --..: thermal_idr_lock
 -> [c0487520] thermal_cdev_idr.lock

c0a14764 FD:    1 BD:    1 --..: struct class mutex#14

c04875d8 FD:    1 BD:    1 --..: thermal_list_lock

c0487d8c FD:    1 BD:    1 --..: (cpufreq_policy_notifier_list).rwsem

f892a618 FD:    4 BD:    1 --..: info_mutex
 -> [c0478dd0] proc_subdir_lock
 -> [c0478e20] proc_inum_ida.lock
 -> [c0478df0] proc_inum_lock

f8947018 FD:   52 BD:    3 --..: psmouse_mutex
 -> [c0a144c8] &serio->lock
 -> [c0a14614] &ps2dev->cmd_mutex
 -> [c0a0c0d4] &k->list_lock
 -> [c04792b0] sysfs_ino_lock
 -> [c04791d8] sysfs_mutex
 -> [c047e800] bus_type_sem
 -> [c0479270] sysfs_assoc_lock
 -> [c0485fd8] dpm_list_mtx
 -> [c047ca90] sequence_lock
 -> [c09fbf9c] &zone->lock
 -> [c0a14d90] &list->lock
 -> [c0489a30] nl_table_wait.lock
 -> [c0a19064] &k->k_lock
 -> [c0a14620] struct class mutex#12
 -> [c0486cb8] input_mutex

f896994c FD:   40 BD:    2 ----: (usb_notifier_list).rwsem
 -> [c04785d0] pin_fs_lock
 -> [c04784e0] mnt_id_ida.lock
 -> [c049f750] vfsmount_lock
 -> [c0477e50] sb_lock
 -> [f8969a08] &type->s_umount_key#15
 -> [f8969a18] &sb->s_type->i_mutex_key#6
 -> [f8969b10] deviceconndiscwq.lock
 -> [c0a0c0d4] &k->list_lock
 -> [c04792b0] sysfs_ino_lock
 -> [c04791d8] sysfs_mutex
 -> [c047e800] bus_type_sem
 -> [c0479270] sysfs_assoc_lock
 -> [c0485fd8] dpm_list_mtx
 -> [c047ca90] sequence_lock
 -> [c0a14d90] &list->lock
 -> [c0489a30] nl_table_wait.lock
 -> [c0a19064] &k->k_lock
 -> [f896a368] struct class mutex#18
 -> [c0a03e84] &n->list_lock

f8968df0 FD:    7 BD:    2 ....: hub_event_lock
 -> [f8968e30] khubd_wait.lock

f8968e30 FD:    6 BD:    3 ....: khubd_wait.lock
 -> [c0641140] &rq->lock

f892a798 FD:    1 BD:    1 --..: strings

f8970b78 FD:    1 BD:    2 --..: register_mutex

f892a538 FD:   29 BD:    2 --..: sound_mutex
 -> [c0a0c0d4] &k->list_lock
 -> [c04792b0] sysfs_ino_lock
 -> [c04791d8] sysfs_mutex
 -> [c047e800] bus_type_sem
 -> [c0479270] sysfs_assoc_lock
 -> [c0485fd8] dpm_list_mtx
 -> [c047ca90] sequence_lock
 -> [c0a03e84] &n->list_lock
 -> [c0a14d90] &list->lock
 -> [c0489a30] nl_table_wait.lock
 -> [f88ff904] struct class mutex#15
 -> [c0a19064] &k->k_lock

f88ff904 FD:    1 BD:    3 --..: struct class mutex#15

f8986648 FD:    1 BD:    3 .+..: &tp->mii_lock

c0a15868 FD:    1 BD:    4 -...: _xmit_ETHER

c04774d8 FD:   20 BD:    1 --..: pools_lock
 -> [c04792b0] sysfs_ino_lock
 -> [c04791d8] sysfs_mutex

f8968fb8 FD:   69 BD:    1 --..: usb_bus_list_lock
 -> [c0a0c0d4] &k->list_lock
 -> [c04792b0] sysfs_ino_lock
 -> [c04791d8] sysfs_mutex
 -> [c047e800] bus_type_sem
 -> [c0479270] sysfs_assoc_lock
 -> [c0485fd8] dpm_list_mtx
 -> [c047ca90] sequence_lock
 -> [c0a14d90] &list->lock
 -> [c0654274] &q->lock
 -> [c0489a30] nl_table_wait.lock
 -> [c0a19064] &k->k_lock
 -> [f8969ee0] struct class mutex#16
 -> [f8969030] hcd_root_hub_lock
 -> [c0a14438] semaphore->lock#2
 -> [f8969e94] &dev->pm_mutex
 -> [f8968f10] device_state_lock
 -> [f8969f04] &new_driver->dynids.lock
 -> [c0a03b28] &retval->lock
 -> [f89906ec] &ehci->lock
 -> [f8969ec0] &hub->status_mutex
 -> [c0653e6c] &base->lock
 -> [c0641140] &rq->lock
 -> [f8968df0] hub_event_lock
 -> [c0485b30] probe_waitqueue.lock
 -> [c0a03e84] &n->list_lock
 -> [c0478038] chrdevs_lock
 -> [f8969718] minor_lock
 -> [f896a344] struct class mutex#17
 -> [f896994c] (usb_notifier_list).rwsem
 -> [c04792e0] sysfs_ino_ida.lock
 -> [c09fbf9c] &zone->lock

f8969ee0 FD:    1 BD:    2 --..: struct class mutex#16

c04785d0 FD:    1 BD:    4 --..: pin_fs_lock

f8969a08 FD:   24 BD:    3 --..: &type->s_umount_key#15
 -> [c0477e50] sb_lock
 -> [c04782b0] inode_lock
 -> [c049f6f0] dcache_lock
 -> [c0a041c8] &s->s_dquot.dqonoff_mutex
 -> [f8969a00] &type->s_lock_key#6

f8969a00 FD:   18 BD:    4 --..: &type->s_lock_key#6
 -> [f8969a19] &sb->s_type->i_mutex_key#6/1

f8969a18 FD:   16 BD:    6 --..: &sb->s_type->i_mutex_key#6
 -> [c049f6f0] dcache_lock
 -> [c04782b0] inode_lock
 -> [c09fbf9c] &zone->lock

f8969b10 FD:    1 BD:    3 ....: deviceconndiscwq.lock

c0a03b28 FD:    1 BD:    2 ....: &retval->lock

f8968d6c FD:    7 BD:    1 --..: ehci_cf_port_reset_rwsem
 -> [c0653e6c] &base->lock
 -> [c0641140] &rq->lock

f8968f10 FD:    1 BD:    5 ....: device_state_lock

f8969030 FD:    2 BD:    6 ....: hcd_root_hub_lock
 -> [f8969090] hcd_urb_list_lock

f8969090 FD:    1 BD:    7 ....: hcd_urb_list_lock

f8969e94 FD:   10 BD:    4 --..: &dev->pm_mutex
 -> [c0653e6c] &base->lock
 -> [f89690f8] reject_mutex
 -> [f8969070] hcd_urb_unlink_lock
 -> [f8969030] hcd_root_hub_lock
 -> [f8968ff0] usb_kill_urb_queue.lock
 -> [f89906ec] &ehci->lock
 -> [f8968f10] device_state_lock
 -> [f8969090] hcd_urb_list_lock
 -> [f8a1130c] &ohci->lock

f8969f04 FD:    1 BD:    2 --..: &new_driver->dynids.lock

f89906ec FD:    1 BD:    6 ....: &ehci->lock

f8969ec0 FD:   10 BD:    2 --..: &hub->status_mutex
 -> [f8969030] hcd_root_hub_lock
 -> [f89906ec] &ehci->lock
 -> [c0654274] &q->lock

f892a6cc FD:    1 BD:    1 --..: snd_ioctl_rwsem

f8969718 FD:    2 BD:    2 --..: minor_lock
 -> [f89696c0] endpoint_idr.lock

f89696c0 FD:    1 BD:    3 ....: endpoint_idr.lock

f896a344 FD:    1 BD:    2 --..: struct class mutex#17

f896a368 FD:    1 BD:    3 --..: struct class mutex#18

f8934bf8 FD:    1 BD:    1 --..: cdrom_mutex

f8a426f8 FD:    1 BD:    2 --..: idecd_ref_mutex

f89b46a0 FD:    1 BD:    2 ....: rtc_idr.lock

f89b46f8 FD:    2 BD:    1 --..: idr_lock
 -> [f89b46a0] rtc_idr.lock

f89b4b04 FD:    1 BD:    1 --..: struct class mutex#19

f892a5b8 FD:    1 BD:    2 --..: snd_card_mutex

f89afe40 FD:    1 BD:    1 --..: struct class mutex#20

f8a7b068 FD:    1 BD:    1 --..: &ac97->reg_mutex

f892af7c FD:    1 BD:    1 --..: &card->controls_rwsem

f892af74 FD:    1 BD:    1 ..--: &card->ctl_files_rwlock

f88f7b78 FD:    1 BD:    1 --..: list_mutex

f89d7304 FD:    1 BD:    1 ....: &chip->reg_lock

f89ff538 FD:    1 BD:    1 --..: buses_mutex

f89ec7d0 FD:    1 BD:    1 --..: full_list_lock

f8a096d0 FD:    1 BD:    1 --..: ports_lock

f89ec9b0 FD:    1 BD:    1 --..: topology_lock

f89ecd00 FD:    1 BD:    1 --..: &tmp->pardevice_lock

f89ecd10 FD:    1 BD:    1 ....: &tmp->cad_lock

f89ecd18 FD:    6 BD:    1 .+..: semaphore->lock#5
 -> [c0641140] &rq->lock

f89a6438 FD:   31 BD:    1 --..: register_mutex#2
 -> [f892a538] sound_mutex
 -> [f8970b78] register_mutex
 -> [c0a03e84] &n->list_lock

f89ecd08 FD:    1 BD:    1 --..: &tmp->waitlist_lock

f89ec718 FD:   30 BD:    1 --..: registration_lock
 -> [f89ec6d0] parportlist_lock
 -> [c0a0c0d4] &k->list_lock
 -> [c04792b0] sysfs_ino_lock
 -> [c04791d8] sysfs_mutex
 -> [c047e800] bus_type_sem
 -> [c0479270] sysfs_assoc_lock
 -> [c0485fd8] dpm_list_mtx
 -> [c047ca90] sequence_lock
 -> [c0a14d90] &list->lock
 -> [c0489a30] nl_table_wait.lock
 -> [c0a19064] &k->k_lock
 -> [f8a2dea0] struct class mutex#21

f89ec6d0 FD:    1 BD:    2 ....: parportlist_lock

f8a2dea0 FD:    1 BD:    2 --..: struct class mutex#21

f8a1130c FD:    1 BD:    5 .+..: &ohci->lock

c04758c8 FD:   24 BD:    1 --..: &type->s_umount_key#16
 -> [c0477e50] sb_lock
 -> [c04782b0] inode_lock
 -> [c049f6f0] dcache_lock
 -> [c04758d8] &sb->s_type->i_mutex_key#7
 -> [c04756f8] cgroup_mutex

c04758d8 FD:   20 BD:    2 --..: &sb->s_type->i_mutex_key#7
 -> [c04756f8] cgroup_mutex
 -> [c0a046c8] &dentry->d_lock
 -> [c049f6f0] dcache_lock
 -> [c2f44c20] &writer->lock_class
 -> [c04782b0] inode_lock

c04756f8 FD:   18 BD:    5 --..: cgroup_mutex
 -> [c0654274] &q->lock
 -> [c0641140] &rq->lock
 -> [c0475790] css_set_lock
 -> [c049f6f0] dcache_lock
 -> [c04782b0] inode_lock
 -> [c0a03e84] &n->list_lock
 -> [c09fbf9c] &zone->lock

c0475790 FD:    1 BD:    6 --..: css_set_lock

c0487d50 FD:    1 BD:    2 ....: cpufreq_driver_lock

f89b4b14 FD:    1 BD:    1 +...: &rtc->irq_lock

f89b4b0c FD:    1 BD:    1 +...: &rtc->irq_task_lock

f89b4b1c FD:    2 BD:    1 --..: &rtc->ops_lock
 -> [c0471550] rtc_lock

c04765b8 FD:   10 BD:    1 --..: audit_cmd_mutex
 -> [c0474670] kthread_create_lock
 -> [c0641140] &rq->lock
 -> [c0654274] &q->lock
 -> [c0a14d90] &list->lock
 -> [c0a03e84] &n->list_lock

c09faf64 FD:    1 BD:    1 ....: &list->lock#2

c04765f0 FD:    1 BD:    1 ....: audit_backlog_wait.lock

c0476630 FD:    1 BD:    1 ....: kauditd_wait.lock

c0a17a8d FD:    1 BD:    3 --..: &u->lock/1

c09f44c0 FD:    8 BD:    2 --..: &futex_queues[i].lock
 -> [c0654274] &q->lock
 -> [c09f44c1] &futex_queues[i].lock/1

c04773d8 FD:   15 BD:    2 --..: swapon_mutex
 -> [c0477410] swap_lock
 -> [c049f6f0] dcache_lock

f8969e84 FD:   12 BD:    1 --..: ksuspend_usbd
 -> [f8969e8c] &(&dev->autosuspend)->work

f8969e8c FD:   11 BD:    2 --..: &(&dev->autosuspend)->work
 -> [f8969e94] &dev->pm_mutex

f8969a19 FD:   17 BD:    5 --..: &sb->s_type->i_mutex_key#6/1
 -> [f8969a18] &sb->s_type->i_mutex_key#6

f89690f8 FD:    1 BD:    5 --..: reject_mutex

f8969070 FD:    1 BD:    5 ....: hcd_urb_unlink_lock

f8968ff0 FD:    1 BD:    5 ....: usb_kill_urb_queue.lock

f88bf158 FD:    1 BD:    2 --..: &mru->lock

f88bf128 FD:    9 BD:   26 --..: &log->l_icloglock
 -> [f88bf110] &mp->m_ail_lock
 -> [f88bf120] &log->l_grant_lock
 -> [c0654274] &q->lock

f88bf120 FD:    1 BD:   27 --..: &log->l_grant_lock

f88bf0e9 FD:   42 BD:    7 --..: &(&ip->i_lock)->mr_lock/1
 -> [f88bf138] &mp->m_peraglock
 -> [f88bf1b8] &btp->bt_hash[i].bh_lock
 -> [c09fadbc] &counter->lock
 -> [c0a03f20] &mz->lru_lock
 -> [c0a04870] &inode->i_data.tree_lock
 -> [f89ce3f0] ide_lock
 -> [c0653e6c] &base->lock
 -> [c0654274] &q->lock
 -> [c0641140] &rq->lock
 -> [f88bf1c8] semaphore->lock#4
 -> [c0a03e84] &n->list_lock
 -> [c04782b0] inode_lock
 -> [f88bf0e8] &(&ip->i_lock)->mr_lock
 -> [c09fbf94] &zone->lru_lock
 -> [c0a0bbbc] &ret->lock
 -> [f88bf1f4] &mp->m_ilock
 -> [f88bf140] &mp->m_agirotor_lock
 -> [c09fbf9c] &zone->lock
 -> [f88bf128] &log->l_icloglock
 -> [f88bf120] &log->l_grant_lock
 -> [f88bf1fc] &mp->m_sb_lock
 -> [f88bf118] &iclog->ic_callback_lock
 -> [f88bf1b0] &btp->bt_delwrite_lock
 -> [c0a0c19c] &sem->wait_lock

f88bf138 FD:   30 BD:   18 ..--: &mp->m_peraglock
 -> [f88bf1b8] &btp->bt_hash[i].bh_lock
 -> [c09fadbc] &counter->lock
 -> [c0a03f20] &mz->lru_lock
 -> [c0a04870] &inode->i_data.tree_lock
 -> [f89ce3f0] ide_lock
 -> [c0654274] &q->lock
 -> [c0641140] &rq->lock
 -> [f88bf1c8] semaphore->lock#4
 -> [c09fbf94] &zone->lru_lock
 -> [c09fbf9c] &zone->lock
 -> [c0653e6c] &base->lock
 -> [f88bf0a8] &pag->pagb_lock
 -> [f88bf0f8] &ip->i_flags_lock
 -> [f88bf128] &log->l_icloglock
 -> [f88bf120] &log->l_grant_lock
 -> [f88bf1b0] &btp->bt_delwrite_lock
 -> [c0a03e84] &n->list_lock
 -> [c0a0bbbc] &ret->lock

f88bf1a8 FD:   17 BD:    1 --..: xfslogd
 -> [f88bf1c0] &bp->b_iodone_work

f88bf1c0 FD:   16 BD:    2 --..: &bp->b_iodone_work
 -> [f88bf1c8] semaphore->lock#4
 -> [f88bf1b8] &btp->bt_hash[i].bh_lock
 -> [f88bf128] &log->l_icloglock
 -> [f88bf120] &log->l_grant_lock
 -> [f88bf118] &iclog->ic_callback_lock
 -> [f88bf110] &mp->m_ail_lock
 -> [c0a03e84] &n->list_lock
 -> [c0654274] &q->lock
 -> [f88bf0a8] &pag->pagb_lock
 -> [c09fbf9c] &zone->lock

f88bf118 FD:   10 BD:   24 --..: &iclog->ic_callback_lock
 -> [f88bf128] &log->l_icloglock

f88bea48 FD:   49 BD:    6 --..: &sb->s_type->i_alloc_sem_key#3
 -> [f88bf0e0] &(&ip->i_iolock)->mr_lock
 -> [f88bf0e8] &(&ip->i_lock)->mr_lock
 -> [c0641140] &rq->lock

f88bf0a8 FD:    1 BD:   21 --..: &pag->pagb_lock

f88bf0ea FD:   37 BD:    9 --..: &(&ip->i_lock)->mr_lock/2
 -> [f88bf0eb] &(&ip->i_lock)->mr_lock/3
 -> [f88bf0ed] &(&ip->i_lock)->mr_lock/5
 -> [f88bf0ec] &(&ip->i_lock)->mr_lock/4
 -> [c0a0c19c] &sem->wait_lock

f88bf0eb FD:   33 BD:   10 --..: &(&ip->i_lock)->mr_lock/3
 -> [f88bf1b8] &btp->bt_hash[i].bh_lock
 -> [f88bf1c8] semaphore->lock#4
 -> [f88bf128] &log->l_icloglock
 -> [f88bf120] &log->l_grant_lock
 -> [f88bf1fc] &mp->m_sb_lock
 -> [f88bf118] &iclog->ic_callback_lock
 -> [f88bf1b0] &btp->bt_delwrite_lock
 -> [f89ce3f0] ide_lock
 -> [c0a03e84] &n->list_lock
 -> [c0a0bbbc] &ret->lock
 -> [f88bf0ed] &(&ip->i_lock)->mr_lock/5
 -> [c04782b0] inode_lock
 -> [c09fadbc] &counter->lock
 -> [c0a03f20] &mz->lru_lock
 -> [c0a04870] &inode->i_data.tree_lock
 -> [c0653e6c] &base->lock
 -> [c0654274] &q->lock
 -> [c0641140] &rq->lock
 -> [c09fbf94] &zone->lru_lock
 -> [f88bf0ec] &(&ip->i_lock)->mr_lock/4

f88bf0ed FD:   15 BD:   12 --..: &(&ip->i_lock)->mr_lock/5
 -> [f88bf1b8] &btp->bt_hash[i].bh_lock
 -> [f88bf1c8] semaphore->lock#4
 -> [f88bf128] &log->l_icloglock
 -> [f88bf120] &log->l_grant_lock
 -> [f88bf1fc] &mp->m_sb_lock
 -> [f88bf118] &iclog->ic_callback_lock
 -> [c04782b0] inode_lock

f8a226ec FD:    1 BD:    1 --..: _lock

c04758d0 FD:    1 BD:    1 --..: &sb->s_type->i_lock_key#7

c0a04978 FD:    6 BD:   28 +...: &dio->bio_lock
 -> [c0641140] &rq->lock

f8a2280c FD:    1 BD:    1 --..: _hash_lock

f88bf140 FD:    1 BD:    8 --..: &mp->m_agirotor_lock

c0a0c1c0 FD:    1 BD:   30 ....: &percpu_counter_irqsafe

f88bf1a0 FD:   43 BD:    1 --..: xfsdatad
 -> [f88bf184] &ioend->io_work
 -> [f88bf174] &ioend->io_work#2

f88bf184 FD:   41 BD:    2 --..: &ioend->io_work
 -> [f88bf0e8] &(&ip->i_lock)->mr_lock
 -> [c0a04870] &inode->i_data.tree_lock
 -> [c0654274] &q->lock
 -> [c0641140] &rq->lock
 -> [f88bf120] &log->l_grant_lock
 -> [c0a03e84] &n->list_lock

f88bf0ec FD:   16 BD:   11 --..: &(&ip->i_lock)->mr_lock/4
 -> [f88bf1b8] &btp->bt_hash[i].bh_lock
 -> [c04782b0] inode_lock
 -> [f88bf1c8] semaphore->lock#4
 -> [f88bf128] &log->l_icloglock
 -> [f88bf120] &log->l_grant_lock
 -> [f88bf1fc] &mp->m_sb_lock
 -> [f88bf118] &iclog->ic_callback_lock
 -> [f88bf0ed] &(&ip->i_lock)->mr_lock/5

c0a0f970 FD:    1 BD:   20 ....: &tty->ctrl_lock

c0a0f988 FD:   22 BD:    1 --..: &tty->atomic_read_lock
 -> [c0654274] &q->lock
 -> [c0641140] &rq->lock
 -> [c0a0f978] &tty->read_lock
 -> [c0480550] tty_ldisc_lock
 -> [c0642268] &sighand->siglock

c048b54c FD:    8 BD:    3 ..--: (inetaddr_chain).rwsem
 -> [c048c910] fib_hash_lock
 -> [c048bbd0] fib_info_lock
 -> [c0480244] &input_pool.lock
 -> [c04802e4] &nonblocking_pool.lock
 -> [c0489a30] nl_table_wait.lock

c048c910 FD:    1 BD:    4 -.-?: fib_hash_lock

c048bbd0 FD:    1 BD:    4 -...: fib_info_lock

c0a15968 FD:    1 BD:    3 -...: _xmit_LOOPBACK

c0a16b00 FD:    1 BD:    3 -.-+: &in_dev->mc_list_lock

c0a16af8 FD:    1 BD:    3 -...: &in_dev->mc_tomb_lock

c0a17f20 FD:    2 BD:    3 -+..: &ifa->lock
 -> [c0653e6c] &base->lock

c0a17fcc FD:    4 BD:    4 -+-.: &tb->tb6_lock
 -> [c0489a30] nl_table_wait.lock
 -> [c048dff0] fib6_walker_lock
 -> [c0a03e84] &n->list_lock

c048dff0 FD:    1 BD:    5 -+..: fib6_walker_lock

c0a14f10 FD:   18 BD:    2 -+..: slock-AF_INET
 -> [c0a14d90] &list->lock
 -> [c0a16614] &hashinfo->ehash_locks[i]
 -> [c0a1660c] &tcp_hashinfo.bhash[i].lock
 -> [c0a15140] &queue->syn_wait_lock
 -> [c0653e6c] &base->lock
 -> [c0a15be8] &n->lock
 -> [c0a03e84] &n->list_lock

c0a14df0 FD:   37 BD:    1 --..: sk_lock-AF_INET
 -> [c0a15030] clock-AF_INET
 -> [c0a14d88] &sk->sk_dst_lock
 -> [c048af30] udp_hash_lock
 -> [c0a15be8] &n->lock
 -> [c0a03e84] &n->list_lock
 -> [c0a1660c] &tcp_hashinfo.bhash[i].lock
 -> [c0480244] &input_pool.lock
 -> [c04802e4] &nonblocking_pool.lock
 -> [c0a15140] &queue->syn_wait_lock
 -> [c049f850] tcp_hashinfo.lhash_lock
 -> [c049f874] tcp_hashinfo.lhash_wait.lock
 -> [c0653e6c] &base->lock
 -> [c0654274] &q->lock
 -> [c0a14da0] &newsk->sk_dst_lock
 -> [c0a16010] &rt_hash_locks[i]
 -> [c09fbf9c] &zone->lock

c0a15030 FD:    1 BD:    2 -.-?: clock-AF_INET

f8986650 FD:    1 BD:    8 ++..: &tp->lock

c048dfb0 FD:    2 BD:    2 -+..: icmp6_dst_lock
 -> [c0a03e84] &n->list_lock

c0a15be8 FD:    4 BD:    7 -+-+: &n->lock
 -> [c0653e6c] &base->lock
 -> [c0a03e84] &n->list_lock
 -> [c0a15bd8] &list->lock#4

c0a15e54 FD:    1 BD:    3 -+..: &list->lock#3

c0a156a8 FD:    2 BD:    7 -+..: _xmit_ETHER#2
 -> [f8986650] &tp->lock

c0a15070 FD:    1 BD:    2 -.-+: clock-AF_INET6

c048af30 FD:    1 BD:    3 -.-+: udp_hash_lock

c0a14d88 FD:    1 BD:    3 ----: &sk->sk_dst_lock

c0a14f88 FD:    1 BD:    1 -...: slock-AF_PACKET

c0a14e68 FD:   17 BD:    1 --..: sk_lock-AF_PACKET
 -> [c0a18fec] &po->bind_lock
 -> [c0488cd0] ptype_lock
 -> [c0654274] &q->lock
 -> [c0641140] &rq->lock

c0a18fec FD:    2 BD:    2 --..: &po->bind_lock
 -> [c0488cd0] ptype_lock

c0a150a8 FD:    1 BD:    1 -.-+: clock-AF_PACKET

c0a1567c FD:    4 BD:    3 -+..: &dev->tx_global_lock
 -> [c0a156a8] _xmit_ETHER#2
 -> [c0653e6c] &base->lock

c0489890 FD:    1 BD:    3 -...: qdisc_list_lock

c0489688 FD:    1 BD:    3 -...: noop_qdisc.q.lock

c0a15e64 FD:    1 BD:    6 -+..: &qdisc_tx_lock

c0472930 FD:    1 BD:    1 ....: log_wait.lock

c0480790 FD:    6 BD:    3 ....: vt_activate_queue.lock
 -> [c0641140] &rq->lock

c09f44c1 FD:    7 BD:    3 --..: &futex_queues[i].lock/1
 -> [c0654274] &q->lock

c0a16010 FD:    2 BD:    5 -+..: &rt_hash_locks[i]
 -> [c0a03e84] &n->list_lock

f892afd0 FD:    2 BD:    1 --..: &entry->access
 -> [f892a5b8] snd_card_mutex

f890a3b8 FD:    1 BD:    1 --..: evdev_table_mutex

f890a734 FD:    1 BD:    1 --..: &evdev->client_lock

f890a72c FD:   20 BD:    1 --..: &evdev->mutex
 -> [c0a14634] &dev->mutex
 -> [c0a1462c] &dev->event_lock
 -> [c0a144a0] &emumousebtn_mutex_class

c047f458 FD:    1 BD:    1 ....: acpi_system_event_lock

c047e79c FD:    1 BD:    1 ....: acpi_bus_event_queue.lock

c0a11be8 FD:    1 BD:    4 +...: &irq_lists[i].lock

c0a14f50 FD:    1 BD:    1 -...: slock-AF_INET6

c0a14e30 FD:   12 BD:    1 --..: sk_lock-AF_INET6
 -> [c0a15070] clock-AF_INET6
 -> [c048af30] udp_hash_lock
 -> [c0a1660c] &tcp_hashinfo.bhash[i].lock
 -> [c0480244] &input_pool.lock
 -> [c04802e4] &nonblocking_pool.lock
 -> [c0a15140] &queue->syn_wait_lock
 -> [c0a14d88] &sk->sk_dst_lock
 -> [c049f850] tcp_hashinfo.lhash_lock

c048eb70 FD:    1 BD:    1 -...: ipv6_sk_mc_lock

c048d850 FD:    1 BD:    1 -...: ipv6_sk_ac_lock

c0a15bd8 FD:    1 BD:    8 .+..: &list->lock#4

c0474630 FD:   13 BD:    1 ....: idr_lock#2
 -> [c0a0c0c8] &idp->lock
 -> [c0653f40] &new_timer->it_lock

c0653f40 FD:   11 BD:    2 .+..: &new_timer->it_lock
 -> [c065429c] &cpu_base->lock
 -> [c0642268] &sighand->siglock

c0a1660c FD:    2 BD:    5 -+..: &tcp_hashinfo.bhash[i].lock
 -> [c0a16614] &hashinfo->ehash_locks[i]

c0a15140 FD:    1 BD:    5 -+..: &queue->syn_wait_lock

c049f850 FD:    1 BD:    3 -.-+: tcp_hashinfo.lhash_lock

c049f874 FD:    1 BD:    2 ....: tcp_hashinfo.lhash_wait.lock

c0486fb8 FD:    1 BD:    1 --..: mousedev_table_mutex

c0a14690 FD:    1 BD:    1 --..: &mousedev->client_lock

c0a14688 FD:    9 BD:    6 --..: &mousedev->mutex#2
 -> [c0a144a0] &emumousebtn_mutex_class
 -> [c0a14634] &dev->mutex

c0a14680 FD:    1 BD:    9 +...: &client->packet_lock

c047a2f8 FD:   14 BD:    1 --..: key_user_keyring_mutex
 -> [c0479ff0] key_user_lock
 -> [c0a04d38] &candidate->lock
 -> [c0480244] &input_pool.lock
 -> [c04802e4] &nonblocking_pool.lock
 -> [c0479fd0] key_serial_lock
 -> [c047a058] key_construction_mutex
 -> [c0a04d28] &key->sem
 -> [c047a224] root_key_user.lock
 -> [c0a03e84] &n->list_lock

c047a170 FD:    1 BD:    5 ----: keyring_name_lock

c0479ff0 FD:    1 BD:    2 --..: key_user_lock

c0a04d38 FD:    1 BD:    7 --..: &candidate->lock

c0479fd0 FD:    1 BD:    4 --..: key_serial_lock

c047a058 FD:    2 BD:    2 --..: key_construction_mutex
 -> [c047a170] keyring_name_lock

c0a04d28 FD:    4 BD:    2 --..: &key->sem
 -> [c047a1ac] keyring_serialise_link_sem

c047a1ac FD:    3 BD:    3 --..: keyring_serialise_link_sem
 -> [c0a04d38] &candidate->lock
 -> [c047a224] root_key_user.lock

c047a100 FD:    5 BD:    2 --..: key_cleanup_task
 -> [c0479fd0] key_serial_lock
 -> [c0a04d38] &candidate->lock
 -> [c047a170] keyring_name_lock
 -> [c047a224] root_key_user.lock

c0a04d29 FD:    3 BD:    1 --..: &key->sem/1
 -> [c0a04d38] &candidate->lock
 -> [c047a224] root_key_user.lock

f8a29e08 FD:   21 BD:    2 --..: &type->s_umount_key#17
 -> [c0477e50] sb_lock
 -> [c04782b0] inode_lock
 -> [c049f6f0] dcache_lock
 -> [c0a041c8] &s->s_dquot.dqonoff_mutex
 -> [f8a29e00] &type->s_lock_key#7

f8a29e00 FD:    1 BD:    3 --..: &type->s_lock_key#7

f8a29e90 FD:    1 BD:    2 ----: entries_lock

f8a29e18 FD:   25 BD:    1 --..: &sb->s_type->i_mutex_key#8
 -> [c0a046c8] &dentry->d_lock
 -> [f8a29e28] &sb->s_type->i_alloc_sem_key#4
 -> [c049f6f0] dcache_lock
 -> [c04782b0] inode_lock
 -> [c04785d0] pin_fs_lock
 -> [c04784e0] mnt_id_ida.lock
 -> [c049f750] vfsmount_lock
 -> [c0477e50] sb_lock
 -> [f8a29e08] &type->s_umount_key#17
 -> [f8a29e90] entries_lock

f8a29e10 FD:    1 BD:    1 --..: &sb->s_type->i_lock_key#8

f8a29e28 FD:    2 BD:    2 --..: &sb->s_type->i_alloc_sem_key#4
 -> [c04782b0] inode_lock

c04818e0 FD:   10 BD:    2 --..: console_work
 -> [c04729d0] (console_sem).lock
 -> [c047d610] vga_lock
 -> [c0641140] &rq->lock
 -> [c0480790] vt_activate_queue.lock
 -> [c0472990] logbuf_lock

c0a04c48 FD:   19 BD:    2 --..: &bb->mutex
 -> [c047ce50] pci_lock
 -> [c04723b0] memtype_lock
 -> [c0641054] __pte_lockptr(page)
 -> [c0642288] &mm->page_table_lock
 -> [c09fbf9c] &zone->lock

c0a04c90 FD:   28 BD:    2 --..: &ids->rw_mutex
 -> [c049f6f0] dcache_lock
 -> [c04782b0] inode_lock
 -> [c2f44c20] &writer->lock_class
 -> [c0a0c0c8] &idp->lock
 -> [c0a03e50] &info->lock
 -> [c0a04858] &inode->inotify_mutex
 -> [c0a04870] &inode->i_data.tree_lock
 -> [c09fbf94] &zone->lru_lock
 -> [c0a03e84] &n->list_lock

c0a04c88 FD:    1 BD:    4 --..: &new->lock

c04719b8 FD:    2 BD:    2 --..: mtrr_mutex
 -> [c0471a30] set_atomicity_lock

f8a55e18 FD:   14 BD:    1 --..: &dev->struct_mutex
 -> [c0a0c0c8] &idp->lock
 -> [c04771d0] vmlist_lock
 -> [c09fbf9c] &zone->lock
 -> [c0652c44] &cpu_hotplug.lock
 -> [c04719b8] mtrr_mutex
 -> [c04723b0] memtype_lock
 -> [c0654274] &q->lock

f8a55e40 FD:    1 BD:    1 --..: struct class mutex#22

f8a55e38 FD:    1 BD:    1 --..: &dev->count_lock

f8a55e10 FD:    1 BD:    1 --..: &dev->ctxlist_mutex

c0a041a0 FD:    1 BD:    1 ..+.: &f->f_owner.lock

f88bf1e4 FD:    1 BD:    1 --..: &mp->m_sync_lock

c048e030 FD:    6 BD:    1 -+..: fib6_gc_lock
 -> [c048dfb0] icmp6_dst_lock

f88bf174 FD:   41 BD:    2 --..: &ioend->io_work#2
 -> [f88bf0e8] &(&ip->i_lock)->mr_lock
 -> [c0a04870] &inode->i_data.tree_lock
 -> [c0654274] &q->lock

c0a16614 FD:    1 BD:    6 -+-+: &hashinfo->ehash_locks[i]

c0a14f11 FD:   22 BD:    1 -+..: slock-AF_INET/1
 -> [c0a16010] &rt_hash_locks[i]
 -> [c0a15be8] &n->lock
 -> [c0a15140] &queue->syn_wait_lock
 -> [c0653e6c] &base->lock
 -> [c0a14f10] slock-AF_INET
 -> [c0654274] &q->lock
 -> [c0a03e84] &n->list_lock
 -> [c0a1660c] &tcp_hashinfo.bhash[i].lock
 -> [c09fbf9c] &zone->lock
 -> [c048a8a8] tcp_death_row.death_lock

c047a224 FD:    1 BD:    7 --..: root_key_user.lock

c04794c0 FD:    1 BD:    2 ....: allocated_ptys.lock

c0479478 FD:    2 BD:    1 --..: allocated_ptys_lock
 -> [c04794c0] allocated_ptys.lock

c04793d8 FD:   20 BD:    3 --..: &sb->s_type->i_mutex_key#9
 -> [c049f6f0] dcache_lock
 -> [c0a046c8] &dentry->d_lock
 -> [c04782b0] inode_lock
 -> [c0a04858] &inode->inotify_mutex
 -> [c0477ff0] cdev_lock
 -> [c0a03e84] &n->list_lock

c04793d0 FD:    1 BD:    1 --..: &sb->s_type->i_lock_key#9

c0a14da0 FD:    1 BD:    2 --..: &newsk->sk_dst_lock

c0489ec0 FD:    4 BD:    2 --..: (expires_work).work
 -> [c0a16010] &rt_hash_locks[i]
 -> [c0653e6c] &base->lock

c0a15be0 FD:    1 BD:    4 -+..: &(&hh->hh_lock)->lock

c048a2f0 FD:    1 BD:    1 -+..: inet_peer_unused_lock

c048a8a8 FD:    2 BD:    2 -+..: tcp_death_row.death_lock
 -> [c0653e6c] &base->lock

c0a041d8 FD:   70 BD:    1 --..: &s->s_vfs_rename_mutex
 -> [f88bea41] &type->i_mutex_dir_key#3/1
 -> [f88bea42] &type->i_mutex_dir_key#3/2

f88bea42 FD:   62 BD:    4 --..: &type->i_mutex_dir_key#3/2
 -> [c2f44c20] &writer->lock_class
 -> [f88bea38] &sb->s_type->i_mutex_key#4
 -> [c049f6f0] dcache_lock

c0a0c19c FD:    6 BD:   19 ....: &sem->wait_lock
 -> [c0641140] &rq->lock

c0480300 FD:    6 BD:    2 --..: (rekey_work).work
 -> [c0480244] &input_pool.lock
 -> [c04802e4] &nonblocking_pool.lock
 -> [c0653e6c] &base->lock

c04769d0 FD:    1 BD:    1 -+..: &rcu_bh_ctrlblk.lock

c0a16f48 FD:    5 BD:    1 -+..: &f->lock
 -> [c0480244] &input_pool.lock
 -> [c04802e4] &nonblocking_pool.lock

c0489630 FD:    1 BD:    1 .+..: rif_lock

c0488d70 FD:    2 BD:    4 -+..: dst_garbage.lock
 -> [c0653e6c] &base->lock

c0488da0 FD:    5 BD:    2 --..: (dst_gc_work).work
 -> [c0488d38] dst_gc_mutex

c0488d38 FD:    4 BD:    3 --..: dst_gc_mutex
 -> [c0488d70] dst_garbage.lock
 -> [c0a03e84] &n->list_lock

f890a720 FD:    1 BD:    9 +...: &client->buffer_lock

c0a0f9a0 FD:   11 BD:    2 --..: &(&tty->buf.work)->work
 -> [c0480550] tty_ldisc_lock
 -> [c0a0f9a8] &tty->buf.lock
 -> [c0a0f978] &tty->read_lock
 -> [c0654274] &q->lock



-- 
Arkadiusz Miśkiewicz        PLD/Linux Team
arekm / maven.pl            http://ftp.pld-linux.org/

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


* Re: 2.6.27.7 vanilla, project quota enabled and process stuck in D state (repeatable every time)
  2008-12-03 13:06   ` Arkadiusz Miskiewicz
  2008-12-03 13:35     ` Arkadiusz Miskiewicz
@ 2008-12-03 21:30     ` Dave Chinner
  2008-12-03 21:42       ` Arkadiusz Miskiewicz
  1 sibling, 1 reply; 13+ messages in thread
From: Dave Chinner @ 2008-12-03 21:30 UTC (permalink / raw)
  To: Arkadiusz Miskiewicz; +Cc: xfs

On Wed, Dec 03, 2008 at 02:06:41PM +0100, Arkadiusz Miskiewicz wrote:
> On Wednesday 03 of December 2008, Dave Chinner wrote:
> > On Tue, Dec 02, 2008 at 07:49:55PM +0100, Arkadiusz Miskiewicz wrote:
> > > Hello,
> > >
> > > I'm trying to use xfs project quota on kernel 2.6.27.7 (vanilla, no
> > > additional patches), x86_64 UP machine (SMP kernel).
> > >
> > > Now some processes that are using /home/users/arekm/rpm are hanging in
> > > D-state like:
.....
> [arekm@farm ~]$ zgrep LOCKDEP /proc/config.gz
> CONFIG_LOCKDEP_SUPPORT=y
> CONFIG_LOCKDEP=y
> # CONFIG_DEBUG_LOCKDEP is not set
> 
> I don't see anything strictly lockdep related in dmesg so it doesn't seem to 
> be triggered.

Which implies there is something with a lock held that is blocked
elsewhere...

> D-state lock is also happening if I drop usrquota,prjquota, reboot and retry 
> the test. I assume something was written on disk that triggers the problem.

Unlikely - locking doesn't generally get stuck due to on disk
corruption. Are there any other blocked processes in the machine?
i.e. what is the entire output of 'echo w > /proc/sysrq-trigger'?
Are there any other signs of general unwellness (e.g. a CPU running
at 100% when it shouldn't be)?

> Note that now I'm testing on a second machine (UP i686, SMP kernel), so this 
> isn't a problem unique to one box.

Can you identify the inode that the unlink is hanging on and get
an xfs_db dump of the contents of that inode? A dump of the
parent directory inode would be useful, too.

FWIW, since you are seeing this on two hosts, can you try to build
a reproducible test case using a minimal data set and a simple
set of commands? If you can do this and supply us with an
xfs_metadump image of the filesystem plus the commands to reproduce
the problem, we'll be able to find the problem pretty quickly....

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: 2.6.27.7 vanilla, project quota enabled and process stuck in D state (repeatable every time)
  2008-12-03 21:30     ` Dave Chinner
@ 2008-12-03 21:42       ` Arkadiusz Miskiewicz
  2008-12-03 22:07         ` Christoph Hellwig
  2008-12-03 22:09         ` Dave Chinner
  0 siblings, 2 replies; 13+ messages in thread
From: Arkadiusz Miskiewicz @ 2008-12-03 21:42 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs

On Wednesday 03 of December 2008, Dave Chinner wrote:

> > D-state lock is also happening if I drop usrquota,prjquota, reboot and
> > retry the test. I assume something was written on disk that triggers the
> > problem.
>
> Unlikely - locking doesn't generally get stuck due to on disk
> corruption. Are there any other blocked processes in the machine?
> i.e. what is the entire output of 'echo w > /proc/sysrq-trigger'?

Only this one program's trace is visible in the sysrq-w output. No other traces, so no 
other blocked programs.

> Are there any other signs of general unwellness (e.g. a CPU running
> at 100% when it shouldn't be)?

Nothing wrong.

> FWIW, if you are seeing this on two hosts, can you try to build
> a reproducable test case using a minimal data set and a simple
> set of commands? If you can do this and supply us with a
> xfs_metadump image of the filesystem plus the commands to reproduce
> the problem we'll be able to find the problem pretty quickly....

I was able to reproduce it with:

- mount fs with usrquota,prjquota
- setup /home/users/arekm/rpm as project quota id = 10
- run program below twice

[arekm@farm rpm]$ more a.c
#include <stdio.h>
 
int main() {
        int i;
 
        i = rename("/home/users/arekm/tmp/aa", "/home/users/arekm/rpm/testing");
        printf("ret=%d %m\n", i);
        return 0;
}
[arekm@farm rpm]$ touch /home/users/arekm/tmp/aa
[arekm@farm rpm]$ ./a.out
ret=-1 Invalid cross-device link
[arekm@farm rpm]$ ./a.out

The second run hangs in D-state.

For clarification: the rpm and tmp directories are on the same 
filesystem/partition (hda2); the rpm/ dir belongs to project quota id=10, while tmp 
doesn't belong to any project quota.

For the rest of your questions: Christoph promised to look at the issue 
today, so I'll wait until tomorrow, and if the issue is still a mystery 
then I'll dig out all the data you asked for.

> Cheers,
>
> Dave.

-- 
Arkadiusz Miśkiewicz        PLD/Linux Team
arekm / maven.pl            http://ftp.pld-linux.org/


* Re: 2.6.27.7 vanilla, project quota enabled and process stuck in D state (repeatable every time)
  2008-12-03 21:42       ` Arkadiusz Miskiewicz
@ 2008-12-03 22:07         ` Christoph Hellwig
  2008-12-03 22:42           ` Christoph Hellwig
  2008-12-03 22:09         ` Dave Chinner
  1 sibling, 1 reply; 13+ messages in thread
From: Christoph Hellwig @ 2008-12-03 22:07 UTC (permalink / raw)
  To: Arkadiusz Miskiewicz; +Cc: xfs


On Wed, Dec 03, 2008 at 10:42:29PM +0100, Arkadiusz Miskiewicz wrote:
> [arekm@farm rpm]$ touch /home/users/arekm/tmp/aa
> [arekm@farm rpm]$ ./a.out
> ret=-1 Invalid cross-device link

That is, btw, intentional and expected.  To make the hierarchical quotas
work, renames between different projects or from/to no project at all
are not allowed.

> [arekm@farm rpm]$ ./a.out
> 
> second run hangs with D-state. 
> 
> For clarification, rpm and tmp directories are on the same 
> filesystem/partition (hda2), rpm/ dir belongs to project quota id=10, tmp 
> doesn't belong to any project quota.
> 
> For the rest of your questions -  Christoph promised to look at the issue 
> today, so I'll wait until tomorrow and if the issue will still be a mystery 
> then I'll dig out all data you asked for.

I tried to run your testcase, adapted to local paths, and I can run it a
couple hundred times.  Then I get a hard lockup of my KVM virtual
machine...



* Re: 2.6.27.7 vanilla, project quota enabled and process stuck in D state (repeatable every time)
  2008-12-03 21:42       ` Arkadiusz Miskiewicz
  2008-12-03 22:07         ` Christoph Hellwig
@ 2008-12-03 22:09         ` Dave Chinner
  2008-12-04  8:13           ` Arkadiusz Miskiewicz
  2008-12-04 12:32           ` Christoph Hellwig
  1 sibling, 2 replies; 13+ messages in thread
From: Dave Chinner @ 2008-12-03 22:09 UTC (permalink / raw)
  To: Arkadiusz Miskiewicz; +Cc: xfs

On Wed, Dec 03, 2008 at 10:42:29PM +0100, Arkadiusz Miskiewicz wrote:
> On Wednesday 03 of December 2008, Dave Chinner wrote:
> 
> > > D-state lock is also happening if I drop usrquota,prjquota, reboot and
> > > retry the test. I assume something was written on disk that triggers the
> > > problem.
> >
> > Unlikely - locking doesn't generally get stuck due to on disk
> > corruption. Are there any other blocked processes in the machine?
> > i.e. what is the entire output of 'echo w > /proc/sysrq-trigger'?
> 
> Only this one program trace visible in sysrq-w output. No other traces - so no 
> other blocked programs.
> 
> > Are there any other signs of general unwellness (e.g. a CPU running
> > at 100% when it shouldn't be)?
> 
> Nothing wrong.
> 
> > FWIW, if you are seeing this on two hosts, can you try to build
> > a reproducable test case using a minimal data set and a simple
> > set of commands? If you can do this and supply us with a
> > xfs_metadump image of the filesystem plus the commands to reproduce
> > the problem we'll be able to find the problem pretty quickly....
> 
> I was able to reproduce it with:
> 
> - mount fs with usrquota,prjquota
> - setup /home/users/arekm/rpm as project quota id = 10
> - run program below twice
> 
> [arekm@farm rpm]$ more a.c
> #include <stdio.h>
>  
> int main() {
>         int i;
>  
>         i = rename("/home/users/arekm/tmp/aa", "/home/users/arekm/rpm/testing");
>         printf("ret=%d %m\n", i);
>         return 0;
> }
> [arekm@farm rpm]$ touch /home/users/arekm/tmp/aa
> [arekm@farm rpm]$ ./a.out
> ret=-1 Invalid cross-device link

Well, that's what we needed to know. The bug:

199         /*
200          * Lock all the participating inodes. Depending upon whether
201          * the target_name exists in the target directory, and
202          * whether the target directory is the same as the source
203          * directory, we can lock from 2 to 4 inodes.
204          */
205  >>>>>  xfs_lock_inodes(inodes, num_inodes, XFS_ILOCK_EXCL);
206
207         /*
208          * If we are using project inheritance, we only allow renames
209          * into our tree when the project IDs are the same; else the
210          * tree quota mechanism would be circumvented.
211          */
212         if (unlikely((target_dp->i_d.di_flags & XFS_DIFLAG_PROJINHERIT) &&
213                      (target_dp->i_d.di_projid != src_ip->i_d.di_projid))) {
214                 error = XFS_ERROR(EXDEV);
215  >>>>>>>        xfs_rename_unlock4(inodes, XFS_ILOCK_SHARED);
216                 xfs_trans_cancel(tp, cancel_flags);
217                 goto std_return;
218         }

The unlock of the inodes uses the incorrect lock type (the inodes are
locked with XFS_ILOCK_EXCL but unlocked with XFS_ILOCK_SHARED),
which means they don't actually get unlocked, and the next attempt to do
anything with those inodes will hang.

Compile-tested-only patch below that should fix the problem.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


XFS: Fix hang after disallowed rename across directory quota domains

When project quota is active and is being used for directory tree
quota control, we disallow rename outside the current directory
tree. This requires a check to be made after all the inodes
involved in the rename are locked. We fail to unlock the inodes
correctly if we disallow the rename when the target is outside the
current directory tree. This results in a hang on the next access
to the inodes involved in the failed rename.

Reported-by: Arkadiusz Miskiewicz <arekm@maven.pl>
Signed-off-by: Dave Chinner <david@fromorbit.com>
---
 fs/xfs/xfs_rename.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/fs/xfs/xfs_rename.c b/fs/xfs/xfs_rename.c
index d700dac..c903130 100644
--- a/fs/xfs/xfs_rename.c
+++ b/fs/xfs/xfs_rename.c
@@ -212,7 +212,7 @@ xfs_rename(
 	if (unlikely((target_dp->i_d.di_flags & XFS_DIFLAG_PROJINHERIT) &&
 		     (target_dp->i_d.di_projid != src_ip->i_d.di_projid))) {
 		error = XFS_ERROR(EXDEV);
-		xfs_rename_unlock4(inodes, XFS_ILOCK_SHARED);
+		xfs_rename_unlock4(inodes, XFS_ILOCK_EXCL);
 		xfs_trans_cancel(tp, cancel_flags);
 		goto std_return;
 	}


* Re: 2.6.27.7 vanilla, project quota enabled and process stuck in D state (repeatable every time)
  2008-12-03 22:07         ` Christoph Hellwig
@ 2008-12-03 22:42           ` Christoph Hellwig
  0 siblings, 0 replies; 13+ messages in thread
From: Christoph Hellwig @ 2008-12-03 22:42 UTC (permalink / raw)
  To: Arkadiusz Miskiewicz; +Cc: xfs

For a reason I don't understand myself yet, this patch from my queue
fixes it for me:


Signed-off-by: Christoph Hellwig <hch@lst.de>

Index: xfs/fs/xfs/xfs_rename.c
===================================================================
--- xfs.orig/fs/xfs/xfs_rename.c	2008-12-03 23:26:34.000000000 +0100
+++ xfs/fs/xfs/xfs_rename.c	2008-12-03 23:29:09.000000000 +0100
@@ -42,31 +42,6 @@
 
 
 /*
- * Given an array of up to 4 inode pointers, unlock the pointed to inodes.
- * If there are fewer than 4 entries in the array, the empty entries will
- * be at the end and will have NULL pointers in them.
- */
-STATIC void
-xfs_rename_unlock4(
-	xfs_inode_t	**i_tab,
-	uint		lock_mode)
-{
-	int	i;
-
-	xfs_iunlock(i_tab[0], lock_mode);
-	for (i = 1; i < 4; i++) {
-		if (i_tab[i] == NULL)
-			break;
-
-		/*
-		 * Watch out for duplicate entries in the table.
-		 */
-		if (i_tab[i] != i_tab[i-1])
-			xfs_iunlock(i_tab[i], lock_mode);
-	}
-}
-
-/*
  * Enter all inodes for a rename transaction into a sorted array.
  */
 STATIC void
@@ -205,19 +180,6 @@ xfs_rename(
 	xfs_lock_inodes(inodes, num_inodes, XFS_ILOCK_EXCL);
 
 	/*
-	 * If we are using project inheritance, we only allow renames
-	 * into our tree when the project IDs are the same; else the
-	 * tree quota mechanism would be circumvented.
-	 */
-	if (unlikely((target_dp->i_d.di_flags & XFS_DIFLAG_PROJINHERIT) &&
-		     (target_dp->i_d.di_projid != src_ip->i_d.di_projid))) {
-		error = XFS_ERROR(EXDEV);
-		xfs_rename_unlock4(inodes, XFS_ILOCK_SHARED);
-		xfs_trans_cancel(tp, cancel_flags);
-		goto std_return;
-	}
-
-	/*
 	 * Join all the inodes to the transaction. From this point on,
 	 * we can rely on either trans_commit or trans_cancel to unlock
 	 * them.  Note that we need to add a vnode reference to the
@@ -242,6 +204,17 @@ xfs_rename(
 	}
 
 	/*
+	 * If we are using project inheritance, we only allow renames
+	 * into our tree when the project IDs are the same; else the
+	 * tree quota mechanism would be circumvented.
+	 */
+	if (unlikely((target_dp->i_d.di_flags & XFS_DIFLAG_PROJINHERIT) &&
+		     (target_dp->i_d.di_projid != src_ip->i_d.di_projid))) {
+		error = XFS_ERROR(EXDEV);
+		goto error_return;
+	}
+
+	/*
 	 * Set up the target.
 	 */
 	if (target_ip == NULL) {


* Re: 2.6.27.7 vanilla, project quota enabled and process stuck in D state (repeatable every time)
  2008-12-03 22:09         ` Dave Chinner
@ 2008-12-04  8:13           ` Arkadiusz Miskiewicz
  2008-12-04 12:32           ` Christoph Hellwig
  1 sibling, 0 replies; 13+ messages in thread
From: Arkadiusz Miskiewicz @ 2008-12-04  8:13 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs

On Wednesday 03 of December 2008, Dave Chinner wrote:
> On Wed, Dec 03, 2008 at 10:42:29PM +0100, Arkadiusz Miskiewicz wrote:
> > On Wednesday 03 of December 2008, Dave Chinner wrote:

> > [arekm@farm rpm]$ touch /home/users/arekm/tmp/aa
> > [arekm@farm rpm]$ ./a.out
> > ret=-1 Invalid cross-device link
>
> Well, that's what we needed to know. The bug:
>
> 199         /*
> 200          * Lock all the participating inodes. Depending upon whether
> 201          * the target_name exists in the target directory, and
> 202          * whether the target directory is the same as the source
> 203          * directory, we can lock from 2 to 4 inodes.
> 204          */
> 205  >>>>>  xfs_lock_inodes(inodes, num_inodes, XFS_ILOCK_EXCL);
> 206
> 207         /*
> 208          * If we are using project inheritance, we only allow renames
> 209          * into our tree when the project IDs are the same; else the
> 210          * tree quota mechanism would be circumvented.
> 211          */
> 212         if (unlikely((target_dp->i_d.di_flags & XFS_DIFLAG_PROJINHERIT) &&
> 213                      (target_dp->i_d.di_projid != src_ip->i_d.di_projid))) {
> 214                 error = XFS_ERROR(EXDEV);
> 215  >>>>>>>        xfs_rename_unlock4(inodes, XFS_ILOCK_SHARED);
> 216                 xfs_trans_cancel(tp, cancel_flags);
> 217                 goto std_return;
> 218         }
>
> The bug is that the unlock of the inodes uses the incorrect lock type
> (the inodes are locked XFS_ILOCK_EXCL but unlocked XFS_ILOCK_SHARED),
> which means they don't actually get unlocked, so the next attempt
> to do anything with those inodes will hang.
>
> Compile-tested-only patch below that should fix the problem.

It fixes the problem for me. Thanks! I hope it will reach the stable@ 
team in time for 2.6.27.9.

> Cheers,
>
> Dave.



-- 
Arkadiusz Miśkiewicz        PLD/Linux Team
arekm / maven.pl            http://ftp.pld-linux.org/


* Re: 2.6.27.7 vanilla, project quota enabled and process stuck in D state (repeatable every time)
  2008-12-03 22:09         ` Dave Chinner
  2008-12-04  8:13           ` Arkadiusz Miskiewicz
@ 2008-12-04 12:32           ` Christoph Hellwig
  2008-12-04 21:34             ` Dave Chinner
  1 sibling, 1 reply; 13+ messages in thread
From: Christoph Hellwig @ 2008-12-04 12:32 UTC (permalink / raw)
  To: Arkadiusz Miskiewicz, xfs

On Thu, Dec 04, 2008 at 09:09:34AM +1100, Dave Chinner wrote:
> The bug is that the unlock of the inodes uses the incorrect lock type
> (the inodes are locked XFS_ILOCK_EXCL but unlocked XFS_ILOCK_SHARED),
> which means they don't actually get unlocked, so the next attempt to do
> anything with those inodes will hang.
> 
> Compile-tested-only patch below that should fix the problem.

Yeah, that also explains why my patch fixes it :)  I'd say let's put
yours into 2.6.28 and -stable, and I'll rediff mine on top for the 2.6.29
queue.  I'll also write a testcase for xfsqa based on Arkadiusz's
report.


* Re: 2.6.27.7 vanilla, project quota enabled and process stuck in D state (repeatable every time)
  2008-12-04 12:32           ` Christoph Hellwig
@ 2008-12-04 21:34             ` Dave Chinner
  0 siblings, 0 replies; 13+ messages in thread
From: Dave Chinner @ 2008-12-04 21:34 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: xfs

On Thu, Dec 04, 2008 at 07:32:06AM -0500, Christoph Hellwig wrote:
> On Thu, Dec 04, 2008 at 09:09:34AM +1100, Dave Chinner wrote:
> > The bug is that the unlock of the inodes uses the incorrect lock type
> > (the inodes are locked XFS_ILOCK_EXCL but unlocked XFS_ILOCK_SHARED),
> > which means they don't actually get unlocked, so the next attempt to do
> > anything with those inodes will hang.
> > 
> > Compile-tested-only patch below that should fix the problem.
> 
> Yeah, that also explains why my patch fixes it :)  I'd say let's put
> yours into 2.6.28 and -stable, and I'll rediff mine ontop for the 2.6.29
> queue.  I'll also write a testcase for xfsqa based on Arkadiusz's
> report.

I agree that this is probably the best approach - your fix is the
better long term solution, I think.

SGI folk, can we get my patch pushed to linus and stable ASAP?
It would probably be a good idea to add a:

Tested-by: Arkadiusz Miskiewicz <arekm@maven.pl>

tag to it as well to make it easy for the stable review process....

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com



Thread overview: 13+ messages
2008-12-02 18:49 2.6.27.7 vanilla, project quota enabled and process stuck in D state (repeatable every time) Arkadiusz Miskiewicz
2008-12-02 19:03 ` Arkadiusz Miskiewicz
2008-12-03  3:20 ` Dave Chinner
2008-12-03 13:06   ` Arkadiusz Miskiewicz
2008-12-03 13:35     ` Arkadiusz Miskiewicz
2008-12-03 21:30     ` Dave Chinner
2008-12-03 21:42       ` Arkadiusz Miskiewicz
2008-12-03 22:07         ` Christoph Hellwig
2008-12-03 22:42           ` Christoph Hellwig
2008-12-03 22:09         ` Dave Chinner
2008-12-04  8:13           ` Arkadiusz Miskiewicz
2008-12-04 12:32           ` Christoph Hellwig
2008-12-04 21:34             ` Dave Chinner
