* xfs: possible deadlock warning
@ 2014-05-28 5:19 Gu Zheng
2014-05-28 6:00 ` Dave Chinner
0 siblings, 1 reply; 4+ messages in thread
From: Gu Zheng @ 2014-05-28 5:19 UTC
To: xfs; +Cc: Dave Chinner, linux-kernel
Hi all,
When running the latest Linus' tree, the following possible deadlock warning occurs.
[ 140.949000] ======================================================
[ 140.949000] [ INFO: possible circular locking dependency detected ]
[ 140.949000] 3.15.0-rc7+ #93 Not tainted
[ 140.949000] -------------------------------------------------------
[ 140.949000] qemu-kvm/5056 is trying to acquire lock:
[ 140.949000] (&isec->lock){+.+.+.}, at: [<ffffffff8128c835>] inode_doinit_with_dentry+0xa5/0x640
[ 140.949000]
[ 140.949000] but task is already holding lock:
[ 140.949000] (&mm->mmap_sem){++++++}, at: [<ffffffff81182bcf>] vm_mmap_pgoff+0x6f/0xc0
[ 140.949000]
[ 140.949000] which lock already depends on the new lock.
[ 140.949000]
[ 140.949000]
[ 140.949000] the existing dependency chain (in reverse order) is:
[ 140.949000]
[ 140.949000] -> #2 (&mm->mmap_sem){++++++}:
[ 140.949000] [<ffffffff810c214c>] __lock_acquire+0xadc/0x12f0
[ 140.949000] [<ffffffff810c3152>] lock_acquire+0xa2/0x130
[ 140.949000] [<ffffffff8118dbdc>] might_fault+0x8c/0xb0
[ 140.949000] [<ffffffff811f1371>] filldir+0x91/0x120
[ 140.949000] [<ffffffffa01ff788>] xfs_dir2_block_getdents+0x1e8/0x250 [xfs]
[ 140.949000] [<ffffffffa01ff92a>] xfs_readdir+0xda/0x120 [xfs]
[ 140.949000] [<ffffffffa02017db>] xfs_file_readdir+0x2b/0x40 [xfs]
[ 140.949000] [<ffffffff811f11b8>] iterate_dir+0xa8/0xe0
[ 140.949000] [<ffffffff811f165a>] SyS_getdents+0x8a/0x120
[ 140.949000] [<ffffffff8164b269>] system_call_fastpath+0x16/0x1b
[ 140.949000]
[ 140.949000] -> #1 (&xfs_dir_ilock_class){++++.+}:
[ 140.949000] [<ffffffff810c214c>] __lock_acquire+0xadc/0x12f0
[ 140.949000] [<ffffffff810c3152>] lock_acquire+0xa2/0x130
[ 140.949000] [<ffffffff810bc547>] down_read_nested+0x57/0xa0
[ 140.949000] [<ffffffffa0247602>] xfs_ilock+0xf2/0x120 [xfs]
[ 140.949000] [<ffffffffa02476a4>] xfs_ilock_attr_map_shared+0x34/0x40 [xfs]
[ 140.949000] [<ffffffffa021d229>] xfs_attr_get+0x79/0xb0 [xfs]
[ 140.949000] [<ffffffffa02162f7>] xfs_xattr_get+0x37/0x50 [xfs]
[ 140.949000] [<ffffffff812034cf>] generic_getxattr+0x4f/0x70
[ 140.949000] [<ffffffff8128c8e0>] inode_doinit_with_dentry+0x150/0x640
[ 140.949000] [<ffffffff8128cea8>] sb_finish_set_opts+0xd8/0x270
[ 140.949000] [<ffffffff8128d2cf>] selinux_set_mnt_opts+0x28f/0x5e0
[ 140.949000] [<ffffffff8128d688>] superblock_doinit+0x68/0xd0
[ 140.949000] [<ffffffff8128d700>] delayed_superblock_init+0x10/0x20
[ 140.949000] [<ffffffff811e0a82>] iterate_supers+0xb2/0x110
[ 140.949000] [<ffffffff8128ef33>] selinux_complete_init+0x33/0x40
[ 140.949000] [<ffffffff8129d6b4>] security_load_policy+0xf4/0x600
[ 140.949000] [<ffffffff812908bc>] sel_write_load+0xac/0x750
[ 140.949000] [<ffffffff811dd0ad>] vfs_write+0xbd/0x1f0
[ 140.949000] [<ffffffff811ddc29>] SyS_write+0x49/0xb0
[ 140.949000] [<ffffffff8164b269>] system_call_fastpath+0x16/0x1b
[ 140.949000]
[ 140.949000] -> #0 (&isec->lock){+.+.+.}:
[ 140.949000] [<ffffffff810c0c51>] check_prevs_add+0x951/0x970
[ 140.949000] [<ffffffff810c214c>] __lock_acquire+0xadc/0x12f0
[ 140.949000] [<ffffffff810c3152>] lock_acquire+0xa2/0x130
[ 140.949000] [<ffffffff8163e038>] mutex_lock_nested+0x78/0x4f0
[ 140.949000] [<ffffffff8128c835>] inode_doinit_with_dentry+0xa5/0x640
[ 140.949000] [<ffffffff8128d97c>] selinux_d_instantiate+0x1c/0x20
[ 140.949000] [<ffffffff81283a1b>] security_d_instantiate+0x1b/0x30
[ 140.949000] [<ffffffff811f4fb0>] d_instantiate+0x50/0x70
[ 140.950000] [<ffffffff8117eb10>] __shmem_file_setup+0xe0/0x1d0
[ 140.950000] [<ffffffff81181488>] shmem_zero_setup+0x28/0x70
[ 140.950000] [<ffffffff811999f3>] mmap_region+0x543/0x5a0
[ 140.950000] [<ffffffff81199d51>] do_mmap_pgoff+0x301/0x3d0
[ 140.950000] [<ffffffff81182bf0>] vm_mmap_pgoff+0x90/0xc0
[ 140.950000] [<ffffffff81182c4d>] vm_mmap+0x2d/0x40
[ 140.950000] [<ffffffffa0765177>] kvm_arch_prepare_memory_region+0x47/0x60 [kvm]
[ 140.950000] [<ffffffffa074ed6f>] __kvm_set_memory_region+0x1ff/0x770 [kvm]
[ 140.950000] [<ffffffffa074f30d>] kvm_set_memory_region+0x2d/0x50 [kvm]
[ 140.950000] [<ffffffffa0b2e0da>] vmx_set_tss_addr+0x4a/0x190 [kvm_intel]
[ 140.950000] [<ffffffffa0760bc0>] kvm_arch_vm_ioctl+0x9c0/0xb80 [kvm]
[ 140.950000] [<ffffffffa074f3be>] kvm_vm_ioctl+0x8e/0x730 [kvm]
[ 140.950000] [<ffffffff811f0e50>] do_vfs_ioctl+0x300/0x520
[ 140.950000] [<ffffffff811f10f1>] SyS_ioctl+0x81/0xa0
[ 140.950000] [<ffffffff8164b269>] system_call_fastpath+0x16/0x1b
[ 140.950000]
[ 140.950000] other info that might help us debug this:
[ 140.950000]
[ 140.950000] Chain exists of:
[ 140.950000] &isec->lock --> &xfs_dir_ilock_class --> &mm->mmap_sem
[ 140.950000]
[ 140.950000] Possible unsafe locking scenario:
[ 140.950000]
[ 140.950000] CPU0 CPU1
[ 140.950000] ---- ----
[ 140.950000] lock(&mm->mmap_sem);
[ 140.950000] lock(&xfs_dir_ilock_class);
[ 140.950000] lock(&mm->mmap_sem);
[ 140.950000] lock(&isec->lock);
[ 140.950000]
[ 140.950000] *** DEADLOCK ***
[ 140.950000]
[ 140.950000] 2 locks held by qemu-kvm/5056:
[ 140.950000] #0: (&kvm->slots_lock){+.+.+.}, at: [<ffffffffa074f302>] kvm_set_memory_region+0x22/0x50 [kvm]
[ 140.950000] #1: (&mm->mmap_sem){++++++}, at: [<ffffffff81182bcf>] vm_mmap_pgoff+0x6f/0xc0
[ 140.950000]
[ 140.950000] stack backtrace:
[ 140.950000] CPU: 76 PID: 5056 Comm: qemu-kvm Not tainted 3.15.0-rc7+ #93
[ 140.950000] Hardware name: FUJITSU PRIMEQUEST2800E/SB, BIOS PRIMEQUEST 2000 Series BIOS Version 01.48 05/07/2014
[ 140.950000] ffffffff823925a0 ffff880830ba7750 ffffffff81638c00 ffffffff82321bc0
[ 140.950000] ffff880830ba7790 ffffffff81632d63 ffff880830ba77c0 0000000000000001
[ 140.950000] ffff8808359540d8 ffff8808359540d8 ffff880835953480 0000000000000002
[ 140.950000] Call Trace:
[ 140.950000] [<ffffffff81638c00>] dump_stack+0x4d/0x66
[ 140.950000] [<ffffffff81632d63>] print_circular_bug+0x1f9/0x207
[ 140.950000] [<ffffffff810c0c51>] check_prevs_add+0x951/0x970
[ 140.950000] [<ffffffff810c214c>] __lock_acquire+0xadc/0x12f0
[ 140.950000] [<ffffffff810c3152>] lock_acquire+0xa2/0x130
[ 140.950000] [<ffffffff8128c835>] ? inode_doinit_with_dentry+0xa5/0x640
[ 140.950000] [<ffffffff8163e038>] mutex_lock_nested+0x78/0x4f0
[ 140.950000] [<ffffffff8128c835>] ? inode_doinit_with_dentry+0xa5/0x640
[ 140.950000] [<ffffffff8128c835>] ? inode_doinit_with_dentry+0xa5/0x640
[ 140.950000] [<ffffffff8101cd69>] ? sched_clock+0x9/0x10
[ 140.950000] [<ffffffff810a6545>] ? local_clock+0x25/0x30
[ 140.950000] [<ffffffff8128c835>] inode_doinit_with_dentry+0xa5/0x640
[ 140.950000] [<ffffffff8128d97c>] selinux_d_instantiate+0x1c/0x20
[ 140.950000] [<ffffffff81283a1b>] security_d_instantiate+0x1b/0x30
[ 140.950000] [<ffffffff811f4fb0>] d_instantiate+0x50/0x70
[ 140.950000] [<ffffffff8117eb10>] __shmem_file_setup+0xe0/0x1d0
[ 140.950000] [<ffffffff81181488>] shmem_zero_setup+0x28/0x70
[ 140.950000] [<ffffffff811999f3>] mmap_region+0x543/0x5a0
[ 140.950000] [<ffffffff81199d51>] do_mmap_pgoff+0x301/0x3d0
[ 140.950000] [<ffffffff81182bf0>] vm_mmap_pgoff+0x90/0xc0
[ 140.950000] [<ffffffff81182c4d>] vm_mmap+0x2d/0x40
[ 140.950000] [<ffffffffa0765177>] kvm_arch_prepare_memory_region+0x47/0x60 [kvm]
[ 140.950000] [<ffffffffa074ed6f>] __kvm_set_memory_region+0x1ff/0x770 [kvm]
[ 140.950000] [<ffffffff810c1085>] ? mark_held_locks+0x75/0xa0
[ 140.950000] [<ffffffffa074f30d>] kvm_set_memory_region+0x2d/0x50 [kvm]
[ 140.950000] [<ffffffffa0b2e0da>] vmx_set_tss_addr+0x4a/0x190 [kvm_intel]
[ 140.950000] [<ffffffffa0760bc0>] kvm_arch_vm_ioctl+0x9c0/0xb80 [kvm]
[ 140.950000] [<ffffffff810c1920>] ? __lock_acquire+0x2b0/0x12f0
[ 140.950000] [<ffffffff810c34e8>] ? lock_release_non_nested+0x308/0x350
[ 140.950000] [<ffffffff8101cd69>] ? sched_clock+0x9/0x10
[ 140.950000] [<ffffffff810a6545>] ? local_clock+0x25/0x30
[ 140.950000] [<ffffffff810bde3f>] ? lock_release_holdtime.part.28+0xf/0x190
[ 140.950000] [<ffffffffa074f3be>] kvm_vm_ioctl+0x8e/0x730 [kvm]
[ 140.950000] [<ffffffff811f0e50>] do_vfs_ioctl+0x300/0x520
[ 140.950000] [<ffffffff81287e86>] ? file_has_perm+0x86/0xa0
[ 140.950000] [<ffffffff811f10f1>] SyS_ioctl+0x81/0xa0
[ 140.950000] [<ffffffff8111383c>] ? __audit_syscall_entry+0x9c/0xf0
[ 140.950000] [<ffffffff8164b269>] system_call_fastpath+0x16/0x1b
Thanks,
Gu
* Re: xfs: possible deadlock warning
2014-05-28 5:19 xfs: possible deadlock warning Gu Zheng
@ 2014-05-28 6:00 ` Dave Chinner
2014-05-29 3:34 ` Gu Zheng
0 siblings, 1 reply; 4+ messages in thread
From: Dave Chinner @ 2014-05-28 6:00 UTC
To: Gu Zheng; +Cc: xfs, linux-kernel
On Wed, May 28, 2014 at 01:19:16PM +0800, Gu Zheng wrote:
> Hi all,
> When running the latest Linus' tree, the following possible deadlock warning occurs.
false positive. There isn't a deadlock between inode locks on
different filesystems. i.e. there is no dependency between shmem
inodes and xfs inodes, nor on their security contexts. Nor can you
take a page fault on a directory inode, which is the XFS inode lock
class it's complaining about.
Fundamentally, the problem here is shmem instantiating a new inode
with the mmap_sem held. That's just plain wrong...
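To see the cycle lockdep has stitched together in one place: the report
combines three independent orderings (xfs_dir_ilock -> mmap_sem from
getdents, isec->lock -> xfs_dir_ilock from the SELinux policy load, and
mmap_sem -> isec->lock from the shmem mmap path). A throwaway userspace
model of just that ordering, with plain pthread mutexes standing in for
the kernel locks (this only models the dependency chain, not the real
code paths), looks like:

/*
 * Model of the three lock orderings in the report above.  Build with
 * "cc -pthread -fsanitize=thread"; ThreadSanitizer should flag the
 * lock-order inversion much like lockdep does, even though the three
 * threads will rarely interleave badly enough to actually deadlock.
 */
#include <pthread.h>

static pthread_mutex_t mmap_sem      = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t xfs_dir_ilock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t isec_lock     = PTHREAD_MUTEX_INITIALIZER;

/* getdents(): dir ilock held while filldir() may fault -> mmap_sem */
static void *getdents_path(void *arg)
{
        (void)arg;
        pthread_mutex_lock(&xfs_dir_ilock);
        pthread_mutex_lock(&mmap_sem);
        pthread_mutex_unlock(&mmap_sem);
        pthread_mutex_unlock(&xfs_dir_ilock);
        return NULL;
}

/* SELinux policy load: isec->lock held while getxattr() takes the ilock */
static void *selinux_path(void *arg)
{
        (void)arg;
        pthread_mutex_lock(&isec_lock);
        pthread_mutex_lock(&xfs_dir_ilock);
        pthread_mutex_unlock(&xfs_dir_ilock);
        pthread_mutex_unlock(&isec_lock);
        return NULL;
}

/* mmap() of shared anonymous memory: mmap_sem held while shmem
 * instantiates an inode and SELinux takes isec->lock */
static void *mmap_path(void *arg)
{
        (void)arg;
        pthread_mutex_lock(&mmap_sem);
        pthread_mutex_lock(&isec_lock);
        pthread_mutex_unlock(&isec_lock);
        pthread_mutex_unlock(&mmap_sem);
        return NULL;
}

int main(void)
{
        pthread_t t[3];
        int i;

        pthread_create(&t[0], NULL, getdents_path, NULL);
        pthread_create(&t[1], NULL, selinux_path, NULL);
        pthread_create(&t[2], NULL, mmap_path, NULL);
        for (i = 0; i < 3; i++)
                pthread_join(t[i], NULL);
        return 0;
}

In the kernel, of course, the isec->lock in the mmap path belongs to a
freshly created shmem inode rather than to the XFS directory inode in
the other two paths, so the same lock objects can never form the cycle;
lockdep only sees the shared lock classes.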
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
* Re: xfs: possible deadlock warning
2014-05-28 6:00 ` Dave Chinner
@ 2014-05-29 3:34 ` Gu Zheng
2014-05-29 7:49 ` Dave Chinner
0 siblings, 1 reply; 4+ messages in thread
From: Gu Zheng @ 2014-05-29 3:34 UTC
To: Dave Chinner; +Cc: xfs, linux-kernel
Hi Dave,
On 05/28/2014 02:00 PM, Dave Chinner wrote:
> On Wed, May 28, 2014 at 01:19:16PM +0800, Gu Zheng wrote:
>> Hi all,
>> When running the latest Linus' tree, the following possible deadlock warning occurs.
>
> false positive. There isn't a deadlock between inode locks on
> different filesystems. i.e. there is no dependency between shmem
> inodes and xfs inodes, nor on their security contexts. Nor can you
> take a page fault on a directory inode, which is the XFS inode lock
> class it's complaining about.
If it's really just noise, can we avoid it?
Thanks,
Gu
>
> Fundamentally, the problem here is shmem instantiating a new inode
> with the mmap_sem held. That's just plain wrong...
Agreed, it would be better to set up the file before entering the region protected by mmap_sem.
>
>
> Cheers,
>
> Dave.
* Re: xfs: possible deadlock warning
2014-05-29 3:34 ` Gu Zheng
@ 2014-05-29 7:49 ` Dave Chinner
0 siblings, 0 replies; 4+ messages in thread
From: Dave Chinner @ 2014-05-29 7:49 UTC
To: Gu Zheng; +Cc: xfs, linux-kernel
On Thu, May 29, 2014 at 11:34:21AM +0800, Gu Zheng wrote:
> Hi Dave,
>
> On 05/28/2014 02:00 PM, Dave Chinner wrote:
>
> > On Wed, May 28, 2014 at 01:19:16PM +0800, Gu Zheng wrote:
> >> Hi all,
> >> When running the latest Linus' tree, the following possible deadlock warning occurs.
> >
> > false positive. There isn't a deadlock between inode locks on
> > different filesystems. i.e. there is no dependency between shmem
> > inodes and xfs inodes, nor on their security contexts. Nor can you
> > take a page fault on a directory inode, which is the XFS inode lock
> > class it's complaining about.
>
> If it's really just noise, can we avoid it?
It's on my list of things to do. The XFS directory locking was
changed slightly to remove a race condition that SGI's CXFS filesystem
was hitting, and that change introduced all these lockdep false positives.
Unfortunately, to get rid of the lockdep false positives we can
either:
a) revert the locking change; or
b) rewrite the readdir code to use more fine-grained locking
so that we don't hold the lock over filldir() calls.
I don't think that reverting a change that fixed a directory
corruption problem is a good idea, so rewriting the readdir code is
the solution.
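In rough terms, (b) means pulling the directory entries into a private
buffer while the ilock is held, dropping the lock, and only then calling
back into the VFS, so the ilock is never held across a page fault on the
user buffer. A simplified sketch of that shape (the helpers
read_dir_entries_locked(), emit_buffered_entries() and struct dirbuf are
made-up names for illustration, not the actual patches):

STATIC int
xfs_readdir_buffered(
        struct xfs_inode        *dp,
        struct dir_context      *ctx)
{
        struct dirbuf           buf;    /* hypothetical entry buffer */
        int                     error;

        /* Gather entries into a private buffer under the ilock... */
        xfs_ilock(dp, XFS_ILOCK_SHARED);
        error = read_dir_entries_locked(dp, &buf);      /* hypothetical */
        xfs_iunlock(dp, XFS_ILOCK_SHARED);
        if (error)
                return error;

        /*
         * ...and only call dir_emit()/filldir, which may fault on the
         * user buffer and take mmap_sem, after the lock is dropped.
         */
        return emit_buffered_entries(ctx, &buf);        /* hypothetical */
}

Once the ilock is no longer held over filldir(), the
xfs_dir_ilock -> mmap_sem dependency disappears and the reported chain
cannot form.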
SGI have disappeared off the planet, so they aren't going to fix it
anytime soon; it's waiting for me to find the time to finish and
test the patches I have been working on in my spare time that
rework the readdir code.
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com