Date: Tue, 31 Mar 2015 06:40:16 +1100
From: Dave Chinner
To: Daniel Wagner
Cc: xfs@oss.sgi.com, "linux-kernel@vger.kernel.org"
Subject: Re: deadlock between &type->i_mutex_dir_key#4 and &xfs_dir_ilock_class
Message-ID: <20150330194016.GC28621@dastard>
References: <5518FB4A.4070200@monom.org>
In-Reply-To: <5518FB4A.4070200@monom.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Mar 30, 2015 at 09:29:14AM +0200, Daniel Wagner wrote:
> Hi,
>
> My test box just booted 4.0.0-rc6 and I was greeted by:
>
> [Mar30 10:10] ======================================================
> [ +0.000043] [ INFO: possible circular locking dependency detected ]
> [ +0.000045] 4.0.0-rc6 #32 Not tainted
> [ +0.000027] -------------------------------------------------------
> [ +0.000042] ls/1709 is trying to acquire lock:
> [ +0.000034]  (&mm->mmap_sem){++++++}, at: [] might_fault+0x5f/0xb0
> [ +0.000083]
> but task is already holding lock:
> [ +0.000043]  (&xfs_dir_ilock_class){.+.+..}, at: [] xfs_ilock+0xc2/0x130 [xfs]
> [ +0.000110]
> which lock already depends on the new lock.

No deadlock.
Problem is the shmem code, which is doing inode instantiation under the mmap_sem, thereby inverting the entire VFS locking order w.r.t. the mmap_sem.... i.e. this one:

> -> #1 (&isec->lock){+.+.+.}:
> [ +0.000045] [] lock_acquire+0xc7/0x160
> [ +0.000045] [] mutex_lock_nested+0x7d/0x450
> [ +0.000045] [] inode_doinit_with_dentry+0xc5/0x6a0
> [ +0.000050] [] selinux_d_instantiate+0x1c/0x20
> [ +0.001072] [] security_d_instantiate+0x1b/0x30
> [ +0.001056] [] d_instantiate+0x54/0x80
> [ +0.001052] [] __shmem_file_setup+0xdc/0x250
> [ +0.001059] [] shmem_zero_setup+0x28/0x70
> [ +0.001074] [] mmap_region+0x5d8/0x5f0
> [ +0.001045] [] do_mmap_pgoff+0x31b/0x400
> [ +0.001040] [] vm_mmap_pgoff+0xb0/0xf0
> [ +0.001015] [] SyS_mmap_pgoff+0x116/0x2b0
> [ +0.001009] [] SyS_mmap+0x22/0x30
> [ +0.001000] [] system_call_fastpath+0x12/0x17

vm_mmap_pgoff() takes the mmap_sem.

> I tried to find out if this was reported before but I
> haven't found anything. If I missed it I am sorry for the noise.

It's been reported so many times I need a FAQ entry for it. Problem is, I can't fix it easily because it's a shmem bug...

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com