Message-ID: <478AAA73.3080008@sgi.com>
Date: Mon, 14 Jan 2008 11:18:59 +1100
From: Lachlan McIlroy
Reply-To: lachlan@sgi.com
Subject: Re: xfs_fsr: circular dependency under 2.6.24-rc6
References: <20080113014659.GO26626@ns1.anodized.com>
In-Reply-To: <20080113014659.GO26626@ns1.anodized.com>
List-Id: xfs
To: Christopher Layne
Cc: xfs@oss.sgi.com

Christopher,

This is a known dependency and is actually a false alarm.  The inode
being reclaimed (thread #2) cannot have writes in progress (thread #0),
so the two paths can never be taking the same i_iolock instance.
Lockdep tracks lock classes rather than individual locks, so it cannot
tell the two inodes apart.  We cannot restructure the code to work
around this, nor can we add lockdep annotations to cover this case.
You can safely ignore this lockdep report.

Lachlan

Christopher Layne wrote:
> =======================================================
> [ INFO: possible circular locking dependency detected ]
> 2.6.24-rc6 #1
> -------------------------------------------------------
> xfs_fsr/5694 is trying to acquire lock:
>  (&mm->mmap_sem){----}, at: [] dio_get_page+0x4b/0x184
>
> but task is already holding lock:
>  (&(&ip->i_iolock)->mr_lock){----}, at: [] xfs_ilock+0x4d/0x8d
>
> which lock already depends on the new lock.
>
>
> the existing dependency chain (in reverse order) is:
>
> -> #2 (&(&ip->i_iolock)->mr_lock){----}:
>        [] __lock_acquire+0xb2b/0xd3f
>        [] xfs_ilock+0x26/0x8d
>        [] lock_acquire+0x84/0xa8
>        [] xfs_ilock+0x26/0x8d
>        [] mark_held_locks+0x58/0x72
>        [] down_write_nested+0x39/0x45
>        [] xfs_ilock+0x26/0x8d
>        [] xfs_ireclaim+0x37/0x7a
>        [] xfs_finish_reclaim+0x15d/0x16b
>        [] xfs_fs_clear_inode+0xca/0xeb
>        [] clear_inode+0x94/0xeb
>        [] dispose_list+0x58/0xfa
>        [] invalidate_inodes+0xd9/0xf7
>        [] generic_shutdown_super+0x39/0xf3
>        [] kill_block_super+0xd/0x1e
>        [] deactivate_super+0x49/0x61
>        [] sys_umount+0x1f5/0x206
>        [] trace_hardirqs_on_thunk+0x35/0x3a
>        [] trace_hardirqs_on+0x121/0x14c
>        [] trace_hardirqs_on_thunk+0x35/0x3a
>        [] system_call+0x7e/0x83
>        [] 0xffffffffffffffff
>
> -> #1 (iprune_mutex){--..}:
>        [] __lock_acquire+0xb2b/0xd3f
>        [] shrink_icache_memory+0x42/0x214
>        [] lock_acquire+0x84/0xa8
>        [] shrink_icache_memory+0x42/0x214
>        [] __lock_acquire+0xd1e/0xd3f
>        [] shrink_icache_memory+0x42/0x214
>        [] mutex_lock_nested+0xfd/0x297
>        [] prune_dcache+0xd8/0x184
>        [] shrink_icache_memory+0x42/0x214
>        [] shrink_slab+0xe7/0x15a
>        [] try_to_free_pages+0x17a/0x24b
>        [] __alloc_pages+0x208/0x34e
>        [] handle_mm_fault+0x211/0x66d
>        [] do_page_fault+0x3bd/0x743
>        [] __up_write+0x21/0x112
>        [] __up_write+0x21/0x112
>        [] _spin_unlock_irqrestore+0x3e/0x44
>        [] trace_hardirqs_on_thunk+0x35/0x3a
>        [] trace_hardirqs_on+0x121/0x14c
>        [] error_exit+0x0/0xa9
>        [] 0xffffffffffffffff
>
> -> #0 (&mm->mmap_sem){----}:
>        [] __lock_acquire+0xa30/0xd3f
>        [] dio_get_page+0x4b/0x184
>        [] lock_acquire+0x84/0xa8
>        [] dio_get_page+0x4b/0x184
>        [] down_read+0x32/0x3b
>        [] dio_get_page+0x4b/0x184
>        [] __spin_lock_init+0x29/0x47
>        [] __blockdev_direct_IO+0x3fc/0x9c6
>        [] lockdep_init_map+0x8f/0x460
>        [] xfs_vm_direct_IO+0x101/0x134
>        [] xfs_get_blocks_direct+0x0/0x11
>        [] xfs_end_io_direct+0x0/0x82
>        [] __up_write+0x21/0x112
>        [] generic_file_direct_IO+0xcd/0x103
>        [] generic_file_direct_write+0x60/0xfd
>        [] xfs_write+0x4ed/0x760
>        [] xfs_iunlock+0x37/0x85
>        [] xfs_read+0x1f1/0x210
>        [] do_sync_write+0xd1/0x118
>        [] __lock_acquire+0xd1e/0xd3f
>        [] autoremove_wake_function+0x0/0x2e
>        [] dnotify_parent+0x1f/0x6d
>        [] vfs_write+0xad/0x136
>        [] sys_write+0x45/0x6e
>        [] system_call+0x7e/0x83
>        [] 0xffffffffffffffff
>
> other info that might help us debug this:
>
> 1 lock held by xfs_fsr/5694:
>  #0:  (&(&ip->i_iolock)->mr_lock){----}, at: [] xfs_ilock+0x4d/0x8d
>
> stack backtrace:
> Pid: 5694, comm: xfs_fsr Not tainted 2.6.24-rc6 #1
>
> Call Trace:
>  [] print_circular_bug_tail+0x69/0x72
>  [] __lock_acquire+0xa30/0xd3f
>  [] dio_get_page+0x4b/0x184
>  [] lock_acquire+0x84/0xa8
>  [] dio_get_page+0x4b/0x184
>  [] down_read+0x32/0x3b
>  [] dio_get_page+0x4b/0x184
>  [] __spin_lock_init+0x29/0x47
>  [] __blockdev_direct_IO+0x3fc/0x9c6
>  [] lockdep_init_map+0x8f/0x460
>  [] xfs_vm_direct_IO+0x101/0x134
>  [] xfs_get_blocks_direct+0x0/0x11
>  [] xfs_end_io_direct+0x0/0x82
>  [] __up_write+0x21/0x112
>  [] generic_file_direct_IO+0xcd/0x103
>  [] generic_file_direct_write+0x60/0xfd
>  [] xfs_write+0x4ed/0x760
>  [] xfs_iunlock+0x37/0x85
>  [] xfs_read+0x1f1/0x210
>  [] do_sync_write+0xd1/0x118
>  [] __lock_acquire+0xd1e/0xd3f
>  [] autoremove_wake_function+0x0/0x2e
>  [] dnotify_parent+0x1f/0x6d
>  [] vfs_write+0xad/0x136
>  [] sys_write+0x45/0x6e
>  [] system_call+0x7e/0x83
>
> --
>
> xfs issue or kernel issue?
>
> -cl
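
To make the class-vs-instance point above concrete, here is a small
standalone sketch (my own analogy in userspace C with pthreads, not
XFS or kernel code; all lock and function names are made up).  Two
mutexes stand in for the i_iolock of two different inodes: a
class-based checker such as lockdep collapses them into one node and
sees a cycle, but at the instance level the two threads can never
deadlock.

/*
 * Userspace sketch only: models the lockdep false positive described
 * above.  "iolock_active" and "iolock_reclaim" play the i_iolock of
 * two different inodes (same lock class, different instances);
 * "shared_lock" stands in for the mmap_sem/iprune_mutex link in the
 * reported chain.  Build with: cc -pthread sketch.c
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t iolock_active  = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t iolock_reclaim = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t shared_lock    = PTHREAD_MUTEX_INITIALIZER;

/* Write path: holds a busy inode's iolock, then takes the shared lock. */
static void *write_path(void *arg)
{
        (void)arg;
        pthread_mutex_lock(&iolock_active);
        pthread_mutex_lock(&shared_lock);
        printf("write path:   iolock(A) -> shared lock\n");
        pthread_mutex_unlock(&shared_lock);
        pthread_mutex_unlock(&iolock_active);
        return NULL;
}

/*
 * Reclaim path: holds the shared lock, then takes the iolock of an
 * unreferenced inode - never the one the write path is using.
 */
static void *reclaim_path(void *arg)
{
        (void)arg;
        pthread_mutex_lock(&shared_lock);
        pthread_mutex_lock(&iolock_reclaim);
        printf("reclaim path: shared lock -> iolock(B)\n");
        pthread_mutex_unlock(&iolock_reclaim);
        pthread_mutex_unlock(&shared_lock);
        return NULL;
}

int main(void)
{
        pthread_t t1, t2;

        pthread_create(&t1, NULL, write_path, NULL);
        pthread_create(&t2, NULL, reclaim_path, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);

        /*
         * A class-based checker merges iolock(A) and iolock(B) into a
         * single node and reports iolock -> shared -> iolock as a
         * circular dependency.  Instance by instance, the acquisition
         * orders never conflict, so this program always terminates.
         */
        return 0;
}

The same reasoning applies to the real report: the i_iolock taken in
xfs_ireclaim() belongs to an inode with no remaining references, so it
can never be the instance held across the direct I/O write.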