From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: from relay.sgi.com (relay2.corp.sgi.com [137.38.102.29])
	by oss.sgi.com (Postfix) with ESMTP id 515BB7F3F
	for ; Sun, 27 Apr 2014 19:50:50 -0500 (CDT)
Received: from cuda.sgi.com (cuda2.sgi.com [192.48.176.25])
	by relay2.corp.sgi.com (Postfix) with ESMTP id 50B8B304048
	for ; Sun, 27 Apr 2014 17:50:47 -0700 (PDT)
Received: from ipmail04.adl6.internode.on.net (ipmail04.adl6.internode.on.net [150.101.137.141])
	by cuda.sgi.com with ESMTP id lzHBrEdt7xEpp2lf
	for ; Sun, 27 Apr 2014 17:50:45 -0700 (PDT)
Date: Mon, 28 Apr 2014 10:50:43 +1000
From: Dave Chinner
Subject: Re: 3.15.0-rc2: RECLAIM_FS-safe -> RECLAIM_FS-unsafe lock order detected
Message-ID: <20140428005043.GK15995@dastard>
References:
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To:
List-Id: XFS Filesystem from SGI
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Errors-To: xfs-bounces@oss.sgi.com
Sender: xfs-bounces@oss.sgi.com
To: Christian Kujau
Cc: xfs@oss.sgi.com

On Fri, Apr 25, 2014 at 03:21:16AM -0700, Christian Kujau wrote:
> Hi,
>
> I haven't run vanilla for a while, so this is pretty much a copy of
> what I reported[0] back with 3.14-rc2, but now with 3.15-rc2.
> Full dmesg & .config can be found here:
>
> http://nerdbynature.de/bits/3.15-rc2/
>
>
> ======================================================
> [ INFO: RECLAIM_FS-safe -> RECLAIM_FS-unsafe lock order detected ]
> 3.15.0-rc2 #1 Not tainted
> ------------------------------------------------------
> rm/8288 [HC0[0]:SC0[0]:HE1:SE1] is trying to acquire:
>  (&mm->mmap_sem){++++++}, at: [] might_fault+0x58/0xa0
>
> and this task is already holding:
>  (&xfs_dir_ilock_class){++++-.}, at: [] xfs_ilock_data_map_shared+0x28/0x70
> which would create a new lock dependency:
>  (&xfs_dir_ilock_class){++++-.} -> (&mm->mmap_sem){++++++}
>
> but this new dependency connects a RECLAIM_FS-irq-safe lock:
>  (&xfs_dir_ilock_class){++++-.}
> ... which became RECLAIM_FS-irq-safe at:
>   [] lock_acquire+0x54/0x70
>   [] down_write_nested+0x50/0xa0
>   [] xfs_reclaim_inode+0x108/0x318
>   [] xfs_reclaim_inodes_ag+0x1b4/0x360
>   [] xfs_reclaim_inodes_nr+0x38/0x4c
>   [] super_cache_scan+0x150/0x158
>   [] shrink_slab_node+0x138/0x228
>   [] shrink_slab+0x124/0x13c
>   [] kswapd+0x3f8/0x884
>   [] kthread+0xbc/0xd0
>   [] ret_from_kernel_thread+0x5c/0x64
>
> to a RECLAIM_FS-irq-unsafe lock:
>  (&mm->mmap_sem){++++++}
> ... which became RECLAIM_FS-irq-unsafe at:
> ...
>   [] lockdep_trace_alloc+0x84/0x104
>   [] kmem_cache_alloc+0x30/0x148
>   [] mmap_region+0x2fc/0x578
>   [] do_mmap_pgoff+0x2ec/0x378
>   [] vm_mmap_pgoff+0x58/0x94
>   [] load_elf_binary+0x488/0x11f4
>   [] search_binary_handler+0x98/0x1f4
>   [] do_execve+0x484/0x580
>   [] try_to_run_init_process+0x18/0x58
>   [] kernel_init+0xac/0x110
>   [] ret_from_kernel_thread+0x5c/0x64
>
> other info that might help us debug this:
>
> Possible interrupt unsafe locking scenario:
>
>        CPU0                    CPU1
>        ----                    ----
>   lock(&mm->mmap_sem);
>                                local_irq_disable();
>                                lock(&xfs_dir_ilock_class);
>                                lock(&mm->mmap_sem);
>
>     lock(&xfs_dir_ilock_class);

Known false positive. Directory inodes can't be mmap()d or execv()d,
nor can referenced inodes be reclaimed.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs