Date: Tue, 23 Nov 2010 23:18:02 +1100
From: Nick Piggin
Subject: XFS reclaim lock order bug
Message-ID: <20101123121802.GA4785@amd>
To: xfs@oss.sgi.com
List-Id: XFS Filesystem from SGI

Hi,

IIRC I've reported this before. Perhaps it is a false positive, but even so it is annoying that it triggers and turns off lockdep for subsequent debugging. Any chance it can be fixed or properly annotated?

Thanks,
Nick

[ 286.895008]
[ 286.895010] =================================
[ 286.895020] [ INFO: inconsistent lock state ]
[ 286.895020] 2.6.37-rc3+ #28
[ 286.895020] ---------------------------------
[ 286.895020] inconsistent {RECLAIM_FS-ON-R} -> {IN-RECLAIM_FS-W} usage.
[ 286.895020] rm/1844 [HC0[0]:SC0[0]:HE1:SE1] takes:
[ 286.895020]  (&(&ip->i_iolock)->mr_lock#2){++++-+}, at: [] xfs_ilock+0xe8/0x1e0 [xfs]
[ 286.895020] {RECLAIM_FS-ON-R} state was registered at:
[ 286.895020]   [] mark_held_locks+0x6b/0xa0
[ 286.895020]   [] lockdep_trace_alloc+0x91/0xd0
[ 286.895020]   [] __alloc_pages_nodemask+0x91/0x780
[ 286.895020]   [] alloc_page_vma+0x93/0x150
[ 286.895020]   [] handle_mm_fault+0x719/0x9a0
[ 286.895020]   [] do_page_fault+0x133/0x4f0
[ 286.895020]   [] page_fault+0x1f/0x30
[ 286.895020]   [] generic_file_aio_read+0x2fa/0x730
[ 286.895020]   [] xfs_file_aio_read+0x15b/0x390 [xfs]
[ 286.895020]   [] do_sync_read+0xd2/0x110
[ 286.895020]   [] vfs_read+0xc5/0x190
[ 286.895020]   [] sys_read+0x4c/0x80
[ 286.895020]   [] system_call_fastpath+0x16/0x1b
[ 286.895020] irq event stamp: 1095103
[ 286.895020] hardirqs last enabled at (1095103): [] _raw_spin_unlock_irqrestore+0x65/0x80
[ 286.895020] hardirqs last disabled at (1095102): [] _raw_spin_lock_irqsave+0x17/0x60
[ 286.895020] softirqs last enabled at (1093048): [] __do_softirq+0x16e/0x360
[ 286.895020] softirqs last disabled at (1093009): [] call_softirq+0x1c/0x50
[ 286.895020]
[ 286.895020] other info that might help us debug this:
[ 286.895020] 3 locks held by rm/1844:
[ 286.895020]  #0:  (&sb->s_type->i_mutex_key#13){+.+.+.}, at: [] do_lookup+0xfc/0x170
[ 286.895020]  #1:  (shrinker_rwsem){++++..}, at: [] shrink_slab+0x38/0x190
[ 286.895020]  #2:  (&pag->pag_ici_reclaim_lock){+.+...}, at: [] xfs_reclaim_inodes_ag+0xa4/0x370 [xfs]
[ 286.895020]
[ 286.895020] stack backtrace:
[ 286.895020] Pid: 1844, comm: rm Not tainted 2.6.37-rc3+ #28
[ 286.895020] Call Trace:
[ 286.895020]  [] print_usage_bug+0x170/0x180
[ 286.895020]  [] mark_lock+0x211/0x400
[ 286.895020]  [] __lock_acquire+0x40e/0x1490
[ 286.895020]  [] lock_acquire+0x95/0x1b0
[ 286.895020]  [] ? xfs_ilock+0xe8/0x1e0 [xfs]
[ 286.895020]  [] ? xfs_reclaim_inode+0x174/0x2a0 [xfs]
[ 286.895020]  [] down_write_nested+0x4a/0x70
[ 286.895020]  [] ? xfs_ilock+0xe8/0x1e0 [xfs]
[ 286.895020]  [] xfs_ilock+0xe8/0x1e0 [xfs]
[ 286.895020]  [] xfs_reclaim_inode+0x1c0/0x2a0 [xfs]
[ 286.895020]  [] xfs_reclaim_inodes_ag+0x20f/0x370 [xfs]
[ 286.895020]  [] xfs_reclaim_inode_shrink+0x78/0x80 [xfs]
[ 286.895020]  [] shrink_slab+0x127/0x190
[ 286.895020]  [] zone_reclaim+0x349/0x420
[ 286.895020]  [] ? zone_watermark_ok+0x25/0xe0
[ 286.895020]  [] get_page_from_freelist+0x673/0x830
[ 286.895020]  [] ? init_object+0x43/0x80
[ 286.895020]  [] ? kmem_zone_alloc+0x8c/0xd0 [xfs]
[ 286.895020]  [] ? mark_held_locks+0x6b/0xa0
[ 286.895020]  [] ? mark_held_locks+0x6b/0xa0
[ 286.895020]  [] __alloc_pages_nodemask+0x110/0x780
[ 286.895020]  [] ? unfreeze_slab+0x11a/0x160
[ 286.895020]  [] alloc_pages_current+0x76/0xf0
[ 286.895020]  [] new_slab+0x205/0x2b0
[ 286.895020]  [] __slab_alloc+0x30c/0x480
[ 286.895020]  [] ? d_alloc+0x22/0x200
[ 286.895020]  [] ? d_alloc+0x22/0x200
[ 286.895020]  [] ? d_alloc+0x22/0x200
[ 286.895020]  [] kmem_cache_alloc+0xf8/0x1a0
[ 286.895020]  [] ? __d_lookup+0x1c0/0x1f0
[ 286.895020]  [] ? __d_lookup+0x0/0x1f0
[ 286.895020]  [] d_alloc+0x22/0x200
[ 286.895020]  [] d_alloc_and_lookup+0x2b/0x90
[ 286.895020]  [] ? d_lookup+0x3c/0x60
[ 286.895020]  [] do_lookup+0x11a/0x170
[ 286.895020]  [] link_path_walk+0x31a/0xa50
[ 286.895020]  [] path_walk+0x62/0xe0
[ 286.895020]  [] do_path_lookup+0x5b/0x60
[ 286.895020]  [] user_path_at+0x52/0xa0
[ 286.895020]  [] ? kmem_cache_free+0xe5/0x190
[ 286.895020]  [] ? trace_hardirqs_on+0xd/0x10
[ 286.895020]  [] ? do_unlinkat+0x60/0x1d0
[ 286.895020]  [] vfs_fstatat+0x37/0x70
[ 286.895020]  [] sys_newfstatat+0x1f/0x50
[ 286.895020]  [] ? trace_hardirqs_on_caller+0x13d/0x180
[ 286.895020]  [] ? trace_hardirqs_on_thunk+0x3a/0x3f
[ 286.895020]  [] system_call_fastpath+0x16/0x1b

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs