public inbox for linux-xfs@vger.kernel.org
From: Brian Foster <bfoster@redhat.com>
To: Dave Chinner <david@fromorbit.com>, xfs@oss.sgi.com
Subject: Re: [PATCH 2/3] xfs: fix directory inode iolock lockdep false positive
Date: Wed, 19 Feb 2014 13:25:16 -0500	[thread overview]
Message-ID: <5304F70C.8070601@redhat.com> (raw)
In-Reply-To: <1392783402-4726-3-git-send-email-david@fromorbit.com>

[-- Attachment #1: Type: text/plain, Size: 2347 bytes --]

On 02/18/2014 11:16 PM, Dave Chinner wrote:
> From: Dave Chinner <dchinner@redhat.com>
> 
> The change to add the IO lock to protect the directory extent map
> during readdir operations has caused lockdep to have a heart attack
> as it now sees a different locking order on inodes w.r.t. the
> mmap_sem because readdir has a different ordering to write().
> 
> Add a new lockdep class for directory inodes to avoid this false
> positive.
> 
> Signed-off-by: Dave Chinner <dchinner@redhat.com>
> ---

Hey Dave,

I'm not terribly familiar with lockdep, but I hit the attached "possible
circular locking dependency detected" warning when running with this patch.

(This reproduces by running generic/001 after a reboot.)

Brian

>  fs/xfs/xfs_iops.c | 14 ++++++++++++++
>  1 file changed, 14 insertions(+)
> 
> diff --git a/fs/xfs/xfs_iops.c b/fs/xfs/xfs_iops.c
> index 9ddfb81..bb3bb65 100644
> --- a/fs/xfs/xfs_iops.c
> +++ b/fs/xfs/xfs_iops.c
> @@ -48,6 +48,18 @@
>  #include <linux/fiemap.h>
>  #include <linux/slab.h>
>  
> +/*
> + * Directories have different lock order w.r.t. mmap_sem compared to regular
> + * files. This is due to readdir potentially triggering page faults on a user
> + * buffer inside filldir(), and this happens with the ilock on the directory
> + * held. For regular files, the lock order is the other way around - the
> + * mmap_sem is taken during the page fault, and then we lock the ilock to do
> + * block mapping. Hence we need a different class for the directory ilock so
> + * that lockdep can tell them apart.
> + */
> +static struct lock_class_key xfs_nondir_ilock_class;
> +static struct lock_class_key xfs_dir_ilock_class;
> +
>  static int
>  xfs_initxattrs(
>  	struct inode		*inode,
> @@ -1191,6 +1203,7 @@ xfs_setup_inode(
>  	xfs_diflags_to_iflags(inode, ip);
>  
>  	ip->d_ops = ip->i_mount->m_nondir_inode_ops;
> +	lockdep_set_class(&ip->i_lock.mr_lock, &xfs_nondir_ilock_class);
>  	switch (inode->i_mode & S_IFMT) {
>  	case S_IFREG:
>  		inode->i_op = &xfs_inode_operations;
> @@ -1198,6 +1211,7 @@ xfs_setup_inode(
>  		inode->i_mapping->a_ops = &xfs_address_space_operations;
>  		break;
>  	case S_IFDIR:
> +		lockdep_set_class(&ip->i_lock.mr_lock, &xfs_dir_ilock_class);
>  		if (xfs_sb_version_hasasciici(&XFS_M(inode->i_sb)->m_sb))
>  			inode->i_op = &xfs_dir_ci_inode_operations;
>  		else
> 



[-- Attachment #2: messages.lockdep --]
[-- Type: text/plain, Size: 17959 bytes --]

Feb 19 12:22:03 localhost kernel: [  101.486725] 
Feb 19 12:22:03 localhost kernel: [  101.486903] ======================================================
Feb 19 12:22:03 localhost kernel: [  101.487018] [ INFO: possible circular locking dependency detected ]
Feb 19 12:22:03 localhost kernel: [  101.487018] 3.14.0-rc1+ #6 Tainted: GF       W  O
Feb 19 12:22:03 localhost kernel: [  101.487018] -------------------------------------------------------
Feb 19 12:22:03 localhost kernel: [  101.487018] rm/4171 is trying to acquire lock:
Feb 19 12:22:03 localhost kernel: [  101.487018]  (&mm->mmap_sem){++++++}, at: [<ffffffff811cc8cf>] might_fault+0x5f/0xb0
Feb 19 12:22:03 localhost kernel: [  101.487018] 
Feb 19 12:22:03 localhost kernel: [  101.487018] but task is already holding lock:
Feb 19 12:22:03 localhost kernel: [  101.487018]  (&xfs_dir_ilock_class){++++..}, at: [<ffffffffa05a0022>] xfs_ilock+0x122/0x250 [xfs]
Feb 19 12:22:03 localhost kernel: [  101.487018] 
Feb 19 12:22:03 localhost kernel: [  101.487018] which lock already depends on the new lock.
Feb 19 12:22:03 localhost kernel: [  101.487018] 
Feb 19 12:22:03 localhost kernel: [  101.487018] 
Feb 19 12:22:03 localhost kernel: [  101.487018] the existing dependency chain (in reverse order) is:
Feb 19 12:22:03 localhost kernel: [  101.487018] 
Feb 19 12:22:03 localhost kernel: [  101.487018] -> #2 (&xfs_dir_ilock_class){++++..}:
Feb 19 12:22:03 localhost kernel: [  101.487018]        [<ffffffff810f3ec2>] lock_acquire+0xa2/0x1d0
Feb 19 12:22:03 localhost kernel: [  101.487018]        [<ffffffff810ed147>] down_read_nested+0x57/0xa0
Feb 19 12:22:03 localhost kernel: [  101.487018]        [<ffffffffa05a0022>] xfs_ilock+0x122/0x250 [xfs]
Feb 19 12:22:03 localhost kernel: [  101.487018]        [<ffffffffa05a01af>] xfs_ilock_attr_map_shared+0x1f/0x50 [xfs]
Feb 19 12:22:03 localhost kernel: [  101.487018]        [<ffffffffa0565d50>] xfs_attr_get+0x90/0xe0 [xfs]
Feb 19 12:22:03 localhost kernel: [  101.487018]        [<ffffffffa055b9d7>] xfs_xattr_get+0x37/0x50 [xfs]
Feb 19 12:22:03 localhost kernel: [  101.487018]        [<ffffffff812483ef>] generic_getxattr+0x4f/0x70
Feb 19 12:22:03 localhost kernel: [  101.487018]        [<ffffffff8133fd5e>] inode_doinit_with_dentry+0x1ae/0x650
Feb 19 12:22:03 localhost kernel: [  101.487018]        [<ffffffff813402d8>] sb_finish_set_opts+0xd8/0x270
Feb 19 12:22:03 localhost kernel: [  101.487018]        [<ffffffff81340702>] selinux_set_mnt_opts+0x292/0x5f0
Feb 19 12:22:03 localhost kernel: [  101.487018]        [<ffffffff81340ac8>] superblock_doinit+0x68/0xd0
Feb 19 12:22:03 localhost kernel: [  101.487018]        [<ffffffff81340b8d>] selinux_sb_kern_mount+0x3d/0xa0
Feb 19 12:22:03 localhost kernel: [  101.487018]        [<ffffffff81335536>] security_sb_kern_mount+0x16/0x20
Feb 19 12:22:03 localhost kernel: [  101.487018]        [<ffffffff8122333a>] mount_fs+0x8a/0x1b0
Feb 19 12:22:03 localhost kernel: [  101.487018]        [<ffffffff8124285b>] vfs_kern_mount+0x6b/0x150
Feb 19 12:22:03 localhost kernel: [  101.487018]        [<ffffffff8124561e>] do_mount+0x23e/0xb90
Feb 19 12:22:03 localhost kernel: [  101.487018]        [<ffffffff812462a3>] SyS_mount+0x83/0xc0
Feb 19 12:22:03 localhost kernel: [  101.487018]        [<ffffffff8178ed69>] system_call_fastpath+0x16/0x1b
Feb 19 12:22:03 localhost kernel: [  101.487018] 
Feb 19 12:22:03 localhost kernel: [  101.487018] -> #1 (&isec->lock){+.+.+.}:
Feb 19 12:22:03 localhost kernel: [  101.487018]        [<ffffffff810f3ec2>] lock_acquire+0xa2/0x1d0
Feb 19 12:22:03 localhost kernel: [  101.487018]        [<ffffffff81780d77>] mutex_lock_nested+0x77/0x3f0
Feb 19 12:22:03 localhost kernel: [  101.487018]        [<ffffffff8133fc42>] inode_doinit_with_dentry+0x92/0x650
Feb 19 12:22:03 localhost kernel: [  101.487018]        [<ffffffff81340dcc>] selinux_d_instantiate+0x1c/0x20
Feb 19 12:22:03 localhost kernel: [  101.487018]        [<ffffffff8133517b>] security_d_instantiate+0x1b/0x30
Feb 19 12:22:03 localhost kernel: [  101.487018]        [<ffffffff81237d70>] d_instantiate+0x50/0x70
Feb 19 12:22:03 localhost kernel: [  101.487018]        [<ffffffff811bcb70>] __shmem_file_setup+0xe0/0x1d0
Feb 19 12:22:03 localhost kernel: [  101.487018]        [<ffffffff811bf988>] shmem_zero_setup+0x28/0x70
Feb 19 12:22:03 localhost kernel: [  101.487018]        [<ffffffff811d8653>] mmap_region+0x543/0x5a0
Feb 19 12:22:03 localhost kernel: [  101.487018]        [<ffffffff811d89b1>] do_mmap_pgoff+0x301/0x3c0
Feb 19 12:22:03 localhost kernel: [  101.487018]        [<ffffffff811c18f0>] vm_mmap_pgoff+0x90/0xc0
Feb 19 12:22:03 localhost kernel: [  101.487018]        [<ffffffff811d6f26>] SyS_mmap_pgoff+0x116/0x270
Feb 19 12:22:03 localhost kernel: [  101.487018]        [<ffffffff8101f9b2>] SyS_mmap+0x22/0x30
Feb 19 12:22:03 localhost kernel: [  101.487018]        [<ffffffff8178ed69>] system_call_fastpath+0x16/0x1b
Feb 19 12:22:03 localhost kernel: [  101.487018] 
Feb 19 12:22:03 localhost kernel: [  101.487018] -> #0 (&mm->mmap_sem){++++++}:
Feb 19 12:22:03 localhost kernel: [  101.487018]        [<ffffffff810f351c>] __lock_acquire+0x18ec/0x1aa0
Feb 19 12:22:03 localhost kernel: [  101.487018]        [<ffffffff810f3ec2>] lock_acquire+0xa2/0x1d0
Feb 19 12:22:03 localhost kernel: [  101.487018]        [<ffffffff811cc8fc>] might_fault+0x8c/0xb0
Feb 19 12:22:03 localhost kernel: [  101.487018]        [<ffffffff812341c1>] filldir+0x91/0x120
Feb 19 12:22:03 localhost kernel: [  101.487018]        [<ffffffffa053f2f7>] xfs_dir2_sf_getdents+0x317/0x380 [xfs]
Feb 19 12:22:03 localhost kernel: [  101.487018]        [<ffffffffa054001b>] xfs_readdir+0x16b/0x230 [xfs]
Feb 19 12:22:03 localhost kernel: [  101.487018]        [<ffffffffa05427fb>] xfs_file_readdir+0x2b/0x40 [xfs]
Feb 19 12:22:03 localhost kernel: [  101.487018]        [<ffffffff81234008>] iterate_dir+0xa8/0xe0
Feb 19 12:22:03 localhost kernel: [  101.487018]        [<ffffffff812344b3>] SyS_getdents+0x93/0x120
Feb 19 12:22:03 localhost kernel: [  101.487018]        [<ffffffff8178ed69>] system_call_fastpath+0x16/0x1b
Feb 19 12:22:03 localhost kernel: [  101.487018] 
Feb 19 12:22:03 localhost kernel: [  101.487018] other info that might help us debug this:
Feb 19 12:22:03 localhost kernel: [  101.487018] 
Feb 19 12:22:03 localhost kernel: [  101.487018] Chain exists of:
Feb 19 12:22:03 localhost kernel: [  101.487018]   &mm->mmap_sem --> &isec->lock --> &xfs_dir_ilock_class
Feb 19 12:22:03 localhost kernel: [  101.487018] 
Feb 19 12:22:03 localhost kernel: [  101.487018]  Possible unsafe locking scenario:
Feb 19 12:22:03 localhost kernel: [  101.487018] 
Feb 19 12:22:03 localhost kernel: [  101.487018]        CPU0                    CPU1
Feb 19 12:22:03 localhost kernel: [  101.487018]        ----                    ----
Feb 19 12:22:03 localhost kernel: [  101.487018]   lock(&xfs_dir_ilock_class);
Feb 19 12:22:03 localhost kernel: [  101.487018]                                lock(&isec->lock);
Feb 19 12:22:03 localhost kernel: [  101.487018]                                lock(&xfs_dir_ilock_class);
Feb 19 12:22:03 localhost kernel: [  101.487018]   lock(&mm->mmap_sem);
Feb 19 12:22:03 localhost kernel: [  101.487018] 
Feb 19 12:22:03 localhost kernel: [  101.487018]  *** DEADLOCK ***
Feb 19 12:22:03 localhost kernel: [  101.487018] 
Feb 19 12:22:03 localhost kernel: [  101.487018] 2 locks held by rm/4171:
Feb 19 12:22:03 localhost kernel: [  101.487018]  #0:  (&type->i_mutex_dir_key#4){+.+.+.}, at: [<ffffffff81233fc2>] iterate_dir+0x62/0xe0
Feb 19 12:22:03 localhost kernel: [  101.487018]  #1:  (&xfs_dir_ilock_class){++++..}, at: [<ffffffffa05a0022>] xfs_ilock+0x122/0x250 [xfs]
Feb 19 12:22:03 localhost kernel: [  101.487018] 
Feb 19 12:22:03 localhost kernel: [  101.487018] stack backtrace:
Feb 19 12:22:03 localhost kernel: [  101.487018] CPU: 1 PID: 4171 Comm: rm Tainted: GF       W  O 3.14.0-rc1+ #6
Feb 19 12:22:03 localhost kernel: [  101.487018] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
Feb 19 12:22:03 localhost kernel: [  101.487018]  ffffffff82597d80 ffff8800c43cdc60 ffffffff8177ba90 ffffffff825cd9c0
Feb 19 12:22:03 localhost kernel: [  101.487018]  ffff8800c43cdca0 ffffffff81777168 ffff8800c43cdcf0 ffff8800d44ba630
Feb 19 12:22:03 localhost kernel: [  101.487018]  ffff8800d44b9aa0 0000000000000002 0000000000000002 ffff8800d44ba630
Feb 19 12:22:03 localhost kernel: [  101.487018] Call Trace:
Feb 19 12:22:03 localhost kernel: [  101.487018]  [<ffffffff8177ba90>] dump_stack+0x4d/0x66
Feb 19 12:22:03 localhost kernel: [  101.487018]  [<ffffffff81777168>] print_circular_bug+0x201/0x20f
Feb 19 12:22:03 localhost kernel: [  101.487018]  [<ffffffff810f351c>] __lock_acquire+0x18ec/0x1aa0
Feb 19 12:22:03 localhost kernel: [  101.487018]  [<ffffffff810f3ec2>] lock_acquire+0xa2/0x1d0
Feb 19 12:22:03 localhost kernel: [  101.487018]  [<ffffffff811cc8cf>] ? might_fault+0x5f/0xb0
Feb 19 12:22:03 localhost kernel: [  101.487018]  [<ffffffff811cc8fc>] might_fault+0x8c/0xb0
Feb 19 12:22:03 localhost kernel: [  101.487018]  [<ffffffff811cc8cf>] ? might_fault+0x5f/0xb0
Feb 19 12:22:03 localhost kernel: [  101.487018]  [<ffffffff812341c1>] filldir+0x91/0x120
Feb 19 12:22:03 localhost kernel: [  101.487018]  [<ffffffffa053f2f7>] xfs_dir2_sf_getdents+0x317/0x380 [xfs]
Feb 19 12:22:03 localhost kernel: [  101.487018]  [<ffffffffa05a0022>] ? xfs_ilock+0x122/0x250 [xfs]
Feb 19 12:22:03 localhost kernel: [  101.487018]  [<ffffffffa054001b>] xfs_readdir+0x16b/0x230 [xfs]
Feb 19 12:22:03 localhost kernel: [  101.487018]  [<ffffffffa05427fb>] xfs_file_readdir+0x2b/0x40 [xfs]
Feb 19 12:22:03 localhost kernel: [  101.487018]  [<ffffffff81234008>] iterate_dir+0xa8/0xe0
Feb 19 12:22:03 localhost kernel: [  101.487018]  [<ffffffff812344b3>] SyS_getdents+0x93/0x120
Feb 19 12:22:03 localhost kernel: [  101.487018]  [<ffffffff81234130>] ? fillonedir+0xf0/0xf0
Feb 19 12:22:03 localhost kernel: [  101.487018]  [<ffffffff8114a2cc>] ? __audit_syscall_entry+0x9c/0xf0
Feb 19 12:22:03 localhost kernel: [  101.487018]  [<ffffffff8178ed69>] system_call_fastpath+0x16/0x1b

[-- Attachment #3: Type: text/plain, Size: 121 bytes --]

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


Thread overview: 19+ messages
2014-02-19  4:16 [PATCH 0/3] xfs: lockdep and stack reduction fixes Dave Chinner
2014-02-19  4:16 ` [PATCH 1/3] xfs: always do log forces via the workqueue Dave Chinner
2014-02-19 18:24   ` Brian Foster
2014-02-20  0:23     ` Dave Chinner
2014-02-20 14:51       ` Mark Tinguely
2014-02-20 22:07         ` Dave Chinner
2014-02-20 22:35           ` Mark Tinguely
2014-02-21  0:02             ` Dave Chinner
2014-02-21 15:04       ` Brian Foster
2014-02-21 22:21         ` Dave Chinner
2014-02-24 13:35           ` Brian Foster
2014-02-19  4:16 ` [PATCH 2/3] xfs: fix directory inode iolock lockdep false positive Dave Chinner
2014-02-19 18:25   ` Brian Foster [this message]
2014-02-20  0:13     ` mmap_sem -> isec->lock lockdep issues with shmem (was Re: [PATCH 2/3] xfs: fix directory inode iolock lockdep false positive) Dave Chinner
2014-02-20 14:51   ` [PATCH 2/3] xfs: fix directory inode iolock lockdep false positive Christoph Hellwig
2014-02-19  4:16 ` [PATCH 3/3] xfs: allocate xfs_da_args to reduce stack footprint Dave Chinner
2014-02-19 18:25   ` Brian Foster
2014-02-20 14:56   ` Christoph Hellwig
2014-02-20 21:09     ` Dave Chinner
