From mboxrd@z Thu Jan 1 00:00:00 1970
From: Alexander Beregalov
Subject: 2.6.27-rc6: lockdep warning: iprune_mutex at shrink_icache_memory+0x38/0x1a8
Date: Sun, 14 Sep 2008 03:31:38 +0400
Message-ID: <20080913233138.GA19576@orion>
Mime-Version: 1.0
Return-path:
Content-Disposition: inline
Sender: kernel-testers-owner-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
List-ID:
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
To: rjw-KKrjLPT3xs0@public.gmane.org, linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, kernel-testers-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, linux-fsdevel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org

Hi

=======================================================
[ INFO: possible circular locking dependency detected ]
2.6.27-rc6-00034-gd1c6d2e #3
-------------------------------------------------------
nfsd/1766 is trying to acquire lock:
 (iprune_mutex){--..}, at: [] shrink_icache_memory+0x38/0x1a8

but task is already holding lock:
 (&(&ip->i_iolock)->mr_lock){----}, at: [] xfs_ilock+0xa2/0xd6

I was reading files over NFS and saw delays of a few seconds. The system
is x86_32, with NFS and XFS. The last working kernel is 2.6.27-rc5; I do
not know yet whether this is reproducible.

the existing dependency chain (in reverse order) is:

-> #1 (&(&ip->i_iolock)->mr_lock){----}:
       [] __lock_acquire+0x970/0xae8
       [] lock_acquire+0x5b/0x77
       [] down_write_nested+0x35/0x6c
       [] xfs_ilock+0x7b/0xd6
       [] xfs_ireclaim+0x1d/0x59
       [] xfs_finish_reclaim+0x12a/0x134
       [] xfs_reclaim+0xbc/0x125
       [] xfs_fs_clear_inode+0x55/0x8e
       [] clear_inode+0x7a/0xc9
       [] dispose_list+0x3c/0xca
       [] shrink_icache_memory+0x17b/0x1a8
       [] shrink_slab+0xd3/0x12e
       [] kswapd+0x2cb/0x3ac
       [] kthread+0x39/0x5e
       [] kernel_thread_helper+0x7/0x10
       [] 0xffffffff

-> #0 (iprune_mutex){--..}:
       [] __lock_acquire+0x845/0xae8
       [] lock_acquire+0x5b/0x77
       [] __mutex_lock_common+0xa0/0x2d0
       [] mutex_lock_nested+0x29/0x31
       [] shrink_icache_memory+0x38/0x1a8
       [] shrink_slab+0xd3/0x12e
       [] try_to_free_pages+0x1cf/0x287
       [] __alloc_pages_internal+0x257/0x3c6
       [] __do_page_cache_readahead+0xb7/0x16f
       [] ondemand_readahead+0x115/0x123
       [] page_cache_sync_readahead+0x16/0x1c
       [] __generic_file_splice_read+0xe0/0x3f7
       [] generic_file_splice_read+0x66/0x80
       [] xfs_splice_read+0x46/0x71
       [] xfs_file_splice_read+0x24/0x29
       [] do_splice_to+0x4e/0x5f
       [] splice_direct_to_actor+0xc1/0x185
       [] nfsd_vfs_read+0x21d/0x310
       [] nfsd_read+0x84/0x9b
       [] nfsd3_proc_read+0xb9/0x104
       [] nfsd_dispatch+0xcf/0x1a2
       [] svc_process+0x379/0x587
       [] nfsd+0x106/0x153
       [] kthread+0x39/0x5e
       [] kernel_thread_helper+0x7/0x10
       [] 0xffffffff

other info that might help us debug this:

3 locks held by nfsd/1766:
 #0:  (hash_sem){..--}, at: [] exp_readlock+0xd/0xf
 #1:  (&(&ip->i_iolock)->mr_lock){----}, at: [] xfs_ilock+0xa2/0xd6
 #2:  (shrinker_rwsem){----}, at: [] shrink_slab+0x24/0x12e

stack backtrace:
Pid: 1766, comm: nfsd Not tainted 2.6.27-rc6-00034-gd1c6d2e #3
 [] ? printk+0xf/0x12
 [] print_circular_bug_tail+0x5c/0x67
 [] __lock_acquire+0x845/0xae8
 [] lock_acquire+0x5b/0x77
 [] ? shrink_icache_memory+0x38/0x1a8
 [] __mutex_lock_common+0xa0/0x2d0
 [] ? shrink_icache_memory+0x38/0x1a8
 [] mutex_lock_nested+0x29/0x31
 [] ? shrink_icache_memory+0x38/0x1a8
 [] shrink_icache_memory+0x38/0x1a8
 [] ? down_read_trylock+0x38/0x42
 [] shrink_slab+0xd3/0x12e
 [] try_to_free_pages+0x1cf/0x287
 [] ? isolate_pages_global+0x0/0x3e
 [] __alloc_pages_internal+0x257/0x3c6
 [] ? trace_hardirqs_on_caller+0xe6/0x10d
 [] __do_page_cache_readahead+0xb7/0x16f
 [] ondemand_readahead+0x115/0x123
 [] page_cache_sync_readahead+0x16/0x1c
 [] __generic_file_splice_read+0xe0/0x3f7
 [] ? register_lock_class+0x17/0x26a
 [] ? __lock_acquire+0xad9/0xae8
 [] ? register_lock_class+0x17/0x26a
 [] ? __lock_acquire+0xad9/0xae8
 [] ? spd_release_page+0x0/0xf
 [] generic_file_splice_read+0x66/0x80
 [] xfs_splice_read+0x46/0x71
 [] xfs_file_splice_read+0x24/0x29
 [] do_splice_to+0x4e/0x5f
 [] splice_direct_to_actor+0xc1/0x185
 [] ? nfsd_direct_splice_actor+0x0/0xf
 [] nfsd_vfs_read+0x21d/0x310
 [] nfsd_read+0x84/0x9b
 [] nfsd3_proc_read+0xb9/0x104
 [] nfsd_dispatch+0xcf/0x1a2
 [] svc_process+0x379/0x587
 [] nfsd+0x106/0x153
 [] ? nfsd+0x0/0x153
 [] kthread+0x39/0x5e
 [] ? kthread+0x0/0x5e
 [] kernel_thread_helper+0x7/0x10
=======================

e1000: eth0: e1000_clean_tx_irq: Detected Tx Unit Hang
  Tx Queue             <0>
  TDH                  <86>
  TDT                  <86>
  next_to_use          <86>
  next_to_clean
buffer_info[next_to_clean]
  time_stamp           <1f7dc5>
  next_to_watch
  jiffies              <1f8034>
  next_to_watch.status <1>
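
The two chains above form a classic AB-BA ordering inversion: in chain #1,
kswapd's reclaim path takes iprune_mutex inside shrink_icache_memory() and
then the per-inode XFS iolock via xfs_ilock(); in chain #0, nfsd already
holds that iolock during a splice read when a page allocation falls into
direct reclaim and tries to take iprune_mutex. As a minimal sketch of the
same pattern (userspace pthreads with made-up lock names standing in for
the kernel locks, not the actual XFS/VFS code), this is the shape lockdep
is objecting to, even when no deadlock happens on a given run:

/* AB-BA lock inversion sketch.  lock_a stands in for iprune_mutex,
 * lock_b for the XFS i_iolock; both names are illustrative only. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER; /* "iprune_mutex" */
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER; /* "i_iolock" */

/* Chain #1: reclaim takes iprune_mutex, then locks the inode it is
 * about to reclaim. */
static void *reclaim_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&lock_a);	/* iprune_mutex in shrink_icache_memory() */
	pthread_mutex_lock(&lock_b);	/* xfs_ilock() under iprune_mutex */
	puts("reclaim: A then B");
	pthread_mutex_unlock(&lock_b);
	pthread_mutex_unlock(&lock_a);
	return NULL;
}

/* Chain #0: the read path holds the inode lock when an allocation
 * recurses into reclaim. */
static void *read_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&lock_b);	/* xfs_ilock() for the splice read */
	pthread_mutex_lock(&lock_a);	/* shrink_icache_memory() under i_iolock */
	puts("read: B then A");
	pthread_mutex_unlock(&lock_a);
	pthread_mutex_unlock(&lock_b);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;
	pthread_create(&t1, NULL, reclaim_path, NULL);
	pthread_create(&t2, NULL, read_path, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	return 0;
}

Build with "gcc -pthread"; most runs complete and some hang, which is why
lockdep reports the ordering as soon as it has seen both chains instead of
waiting for an actual deadlock.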