Message-ID: <5518FB4A.4070200@monom.org>
Date: Mon, 30 Mar 2015 09:29:14 +0200
From: Daniel Wagner
To: xfs@oss.sgi.com
CC: Dave Chinner, "linux-kernel@vger.kernel.org"
Subject: deadlock between &type->i_mutex_dir_key#4 and &xfs_dir_ilock_class

Hi,

My test box just booted 4.0.0-rc6 and I was greeted by:

[Mar30 10:10] ======================================================
[ +0.000043] [ INFO: possible circular locking dependency detected ]
[ +0.000045] 4.0.0-rc6 #32 Not tainted
[ +0.000027] -------------------------------------------------------
[ +0.000042] ls/1709 is trying to acquire lock:
[ +0.000034]  (&mm->mmap_sem){++++++}, at: [] might_fault+0x5f/0xb0
[ +0.000083] but task is already holding lock:
[ +0.000043]  (&xfs_dir_ilock_class){.+.+..}, at: [] xfs_ilock+0xc2/0x130 [xfs]
[ +0.000110] which lock already depends on the new lock.
[ +0.000058] the existing dependency chain (in reverse order) is:
[ +0.000049] -> #2 (&xfs_dir_ilock_class){.+.+..}:
[ +0.000054]        [] lock_acquire+0xc7/0x160
[ +0.000048]        [] down_read_nested+0x57/0xa0
[ +0.000048]        [] xfs_ilock+0xc2/0x130 [xfs]
[ +0.000071]        [] xfs_ilock_attr_map_shared+0x38/0x50 [xfs]
[ +0.000076]        [] xfs_attr_get+0xdc/0x1b0 [xfs]
[ +0.000062]        [] xfs_xattr_get+0x3d/0x80 [xfs]
[ +0.000073]        [] generic_getxattr+0x4f/0x70
[ +0.000052]        [] inode_doinit_with_dentry+0x172/0x6a0
[ +0.000054]        [] selinux_d_instantiate+0x1c/0x20
[ +0.000049]        [] security_d_instantiate+0x1b/0x30
[ +0.000050]        [] d_splice_alias+0x9d/0x360
[ +0.000047]        [] xfs_vn_lookup+0x92/0xd0 [xfs]
[ +0.000071]        [] lookup_real+0x1d/0x70
[ +0.000045]        [] __lookup_hash+0x42/0x60
[ +0.000045]        [] link_path_walk+0x411/0x1450
[ +0.000046]        [] path_init+0xb7/0x710
[ +0.000043]        [] path_openat+0x76/0x670
[ +0.000042]        [] do_filp_open+0x49/0xd0
[ +0.000044]        [] do_sys_open+0x13b/0x250
[ +0.000044]        [] SyS_open+0x1e/0x20
[ +0.000041]        [] system_call_fastpath+0x12/0x17
[ +0.000047] -> #1 (&isec->lock){+.+.+.}:
[ +0.000045]        [] lock_acquire+0xc7/0x160
[ +0.000045]        [] mutex_lock_nested+0x7d/0x450
[ +0.000045]        [] inode_doinit_with_dentry+0xc5/0x6a0
[ +0.000050]        [] selinux_d_instantiate+0x1c/0x20
[ +0.001072]        [] security_d_instantiate+0x1b/0x30
[ +0.001056]        [] d_instantiate+0x54/0x80
[ +0.001052]        [] __shmem_file_setup+0xdc/0x250
[ +0.001059]        [] shmem_zero_setup+0x28/0x70
[ +0.001074]        [] mmap_region+0x5d8/0x5f0
[ +0.001045]        [] do_mmap_pgoff+0x31b/0x400
[ +0.001040]        [] vm_mmap_pgoff+0xb0/0xf0
[ +0.001015]        [] SyS_mmap_pgoff+0x116/0x2b0
[ +0.001009]        [] SyS_mmap+0x22/0x30
[ +0.001000]        [] system_call_fastpath+0x12/0x17
[ +0.000991] -> #0 (&mm->mmap_sem){++++++}:
[ +0.001902]        [] __lock_acquire+0x2048/0x2050
[ +0.000968]        [] lock_acquire+0xc7/0x160
[ +0.000941]        [] might_fault+0x8c/0xb0
[ +0.000937]        [] filldir+0x92/0x120
[ +0.000950]        [] xfs_dir2_block_getdents.isra.11+0x1b9/0x210 [xfs]
[ +0.000994]        [] xfs_readdir+0x178/0x1c0 [xfs]
[ +0.000986]        [] xfs_file_readdir+0x2b/0x30 [xfs]
[ +0.000985]        [] iterate_dir+0x9a/0x140
[ +0.000956]        [] SyS_getdents+0x94/0x120
[ +0.000942]        [] system_call_fastpath+0x12/0x17

[ +0.000949] other info that might help us debug this:

[ +0.002781] Chain exists of:
  &mm->mmap_sem --> &isec->lock --> &xfs_dir_ilock_class

[ +0.002801]  Possible unsafe locking scenario:

[ +0.001860]        CPU0                    CPU1
[ +0.000927]        ----                    ----
[ +0.000926]   lock(&xfs_dir_ilock_class);
[ +0.000918]                                lock(&isec->lock);
[ +0.000935]                                lock(&xfs_dir_ilock_class);
[ +0.000941]   lock(&mm->mmap_sem);
[ +0.000926]  *** DEADLOCK ***

[ +0.002726] 2 locks held by ls/1709:
[ +0.000909]  #0:  (&type->i_mutex_dir_key#4){+.+.+.}, at: [] iterate_dir+0x61/0x140
[ +0.000995]  #1:  (&xfs_dir_ilock_class){.+.+..}, at: [] xfs_ilock+0xc2/0x130 [xfs]

[ +0.001019] stack backtrace:
[ +0.001923] CPU: 32 PID: 1709 Comm: ls Not tainted 4.0.0-rc6 #32
[ +0.000979] Hardware name: Dell Inc. PowerEdge R820/066N7P, BIOS 2.0.20 01/16/2014
[ +0.000997]  0000000000000000 00000000c4a0aaca ffff881faea3bb18 ffffffff817dd7b1
[ +0.001034]  0000000000000000 ffffffff82897000 ffff881faea3bb68 ffffffff810ead5d
[ +0.001018]  ffff881fac919ea8 ffff881faea3bbc8 ffff881faea3bb68 ffff881fac919e70
[ +0.001026] Call Trace:
[ +0.001003]  [] dump_stack+0x4c/0x65
[ +0.001019]  [] print_circular_bug+0x1cd/0x230
[ +0.001027]  [] __lock_acquire+0x2048/0x2050
[ +0.001067]  [] lock_acquire+0xc7/0x160
[ +0.001036]  [] ? might_fault+0x5f/0xb0
[ +0.001040]  [] might_fault+0x8c/0xb0
[ +0.001051]  [] ? might_fault+0x5f/0xb0
[ +0.001027]  [] filldir+0x92/0x120
[ +0.001043]  [] xfs_dir2_block_getdents.isra.11+0x1b9/0x210 [xfs]
[ +0.001080]  [] xfs_readdir+0x178/0x1c0 [xfs]
[ +0.001030]  [] ? mutex_lock_killable_nested+0x2a3/0x4e0
[ +0.001067]  [] xfs_file_readdir+0x2b/0x30 [xfs]
[ +0.001049]  [] iterate_dir+0x9a/0x140
[ +0.001044]  [] SyS_getdents+0x94/0x120
[ +0.001034]  [] ? fillonedir+0xf0/0xf0
[ +0.001038]  [] system_call_fastpath+0x12/0x17

I tried to find out whether this has been reported before, but I haven't found anything. If I missed it, sorry for the noise.

cheers,
daniel