Date: Sat, 6 Dec 2008 21:20:24 +0800
From: Wu Fengguang
Subject: xfs: possible circular locking dependency detected
Message-ID: <20081206132023.GA21235@localhost>
To: David Chinner
Cc: Ingo Molnar, LKML, xfs@oss.sgi.com

Hi Dave,

I got this warning while accessing XFS on USB storage. Is this a real problem?

Thanks,
Fengguang

=======================================================
[ INFO: possible circular locking dependency detected ]
2.6.28-rc7 #85
-------------------------------------------------------
rsync/20106 is trying to acquire lock:
 (iprune_mutex){--..}, at: [] shrink_icache_memory+0x84/0x290

but task is already holding lock:
 (&(&ip->i_iolock)->mr_lock){----}, at: [] xfs_ilock+0x75/0xb0 [xfs]

which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:

-> #1 (&(&ip->i_iolock)->mr_lock){----}:
       [] __lock_acquire+0x12e2/0x18c0
       [] lock_acquire+0x99/0xd0
       [] down_write_nested+0x57/0x90
       [] xfs_ilock+0xa5/0xb0 [xfs]
       [] xfs_ireclaim+0x46/0x90 [xfs]
       [] xfs_finish_reclaim+0x5e/0x1a0 [xfs]
       [] xfs_reclaim+0x11b/0x120 [xfs]
       [] xfs_fs_clear_inode+0xee/0x120 [xfs]
       [] clear_inode+0x90/0x140
       [] dispose_list+0x38/0x120
       [] shrink_icache_memory+0x243/0x290
       [] shrink_slab+0x125/0x180
       [] kswapd+0x542/0x6a0
       [] kthread+0x4e/0x90
       [] child_rip+0xa/0x11
       [] 0xffffffffffffffff

-> #0 (iprune_mutex){--..}:
       [] __lock_acquire+0x142f/0x18c0
       [] lock_acquire+0x99/0xd0
       [] mutex_lock_nested+0xce/0x320
       [] shrink_icache_memory+0x84/0x290
       [] shrink_slab+0x125/0x180
       [] try_to_free_pages+0x286/0x3f0
       [] __alloc_pages_internal+0x255/0x5b0
       [] alloc_pages_current+0x7b/0x100
       [] __page_cache_alloc+0x10/0x20
       [] __do_page_cache_readahead+0x138/0x250
       [] ondemand_readahead+0xdf/0x3c0
       [] page_cache_async_readahead+0xa9/0xc0
       [] do_generic_file_read+0x259/0x4d0
       [] generic_file_aio_read+0xd0/0x1c0
       [] xfs_read+0x12a/0x280 [xfs]
       [] xfs_file_aio_read+0x56/0x60 [xfs]
       [] do_sync_read+0xf9/0x140
       [] vfs_read+0xc8/0x180
       [] sys_read+0x55/0x90
       [] system_call_fastpath+0x16/0x1b
       [] 0xffffffffffffffff

other info that might help us debug this:

2 locks held by rsync/20106:
 #0: (&(&ip->i_iolock)->mr_lock){----}, at: [] xfs_ilock+0x75/0xb0 [xfs]
 #1: (shrinker_rwsem){----}, at: [] shrink_slab+0x37/0x180

stack backtrace:
Pid: 20106, comm: rsync Not tainted 2.6.28-rc7 #85
Call Trace:
 [] print_circular_bug_tail+0xd8/0xe0
 [] __lock_acquire+0x142f/0x18c0
 [] ? __pagevec_release+0x26/0x40
 [] lock_acquire+0x99/0xd0
 [] ? shrink_icache_memory+0x84/0x290
 [] mutex_lock_nested+0xce/0x320
 [] ? shrink_icache_memory+0x84/0x290
 [] ? shrink_icache_memory+0x84/0x290
 [] shrink_icache_memory+0x84/0x290
 [] shrink_slab+0x125/0x180
 [] try_to_free_pages+0x286/0x3f0
 [] ? isolate_pages_global+0x0/0x260
 [] __alloc_pages_internal+0x255/0x5b0
 [] alloc_pages_current+0x7b/0x100
 [] __page_cache_alloc+0x10/0x20
 [] __do_page_cache_readahead+0x138/0x250
 [] ? __do_page_cache_readahead+0xca/0x250
 [] ondemand_readahead+0xdf/0x3c0
 [] ? sched_clock+0x9/0x10
 [] page_cache_async_readahead+0xa9/0xc0
 [] do_generic_file_read+0x259/0x4d0
 [] ? file_read_actor+0x0/0x190
 [] generic_file_aio_read+0xd0/0x1c0
 [] ? xfs_ilock+0x75/0xb0 [xfs]
 [] xfs_read+0x12a/0x280 [xfs]
 [] xfs_file_aio_read+0x56/0x60 [xfs]
 [] do_sync_read+0xf9/0x140
 [] ? autoremove_wake_function+0x0/0x40
 [] ? _raw_spin_unlock+0x7f/0xb0
 [] ? trace_hardirqs_off_thunk+0x3a/0x3c
 [] vfs_read+0xc8/0x180
 [] sys_read+0x55/0x90
 [] system_call_fastpath+0x16/0x1b

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs