From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1755902AbYEJRqn (ORCPT );
	Sat, 10 May 2008 13:46:43 -0400
Received: (majordomo@vger.kernel.org) by vger.kernel.org
	id S1751101AbYEJRqg (ORCPT );
	Sat, 10 May 2008 13:46:36 -0400
Received: from E23SMTP06.au.ibm.com ([202.81.18.175]:50930 "EHLO
	e23smtp06.au.ibm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1750799AbYEJRqe (ORCPT );
	Sat, 10 May 2008 13:46:34 -0400
Message-ID: <4825DF71.1030209@linux.vnet.ibm.com>
Date: Sat, 10 May 2008 23:16:25 +0530
From: Kamalesh Babulal 
User-Agent: Thunderbird 1.5.0.14ubu (X11/20080505)
MIME-Version: 1.0
To: Alexander Beregalov 
CC: kernel-testers@vger.kernel.org, kernel list ,
	Ingo Molnar , peterz@infradead.org
Subject: Re: 2.6.26-rc1: possible circular locking dependency
References: 
In-Reply-To: 
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

Adding Cc to the kernel list, Ingo Molnar and Peter Zijlstra.

Alexander Beregalov wrote:
> [ INFO: possible circular locking dependency detected ]
> 2.6.26-rc1-00279-g28a4acb #13
> -------------------------------------------------------
> nfsd/3087 is trying to acquire lock:
>  (iprune_mutex){--..}, at: [] shrink_icache_memory+0x38/0x19b
>
> but task is already holding lock:
>  (&(&ip->i_iolock)->mr_lock){----}, at: [] xfs_ilock+0xa2/0xd6
>
> which lock already depends on the new lock.
>
> the existing dependency chain (in reverse order) is:
>
> -> #1 (&(&ip->i_iolock)->mr_lock){----}:
>        [] __lock_acquire+0xa0c/0xbc6
>        [] lock_acquire+0x6a/0x86
>        [] down_write_nested+0x33/0x6a
>        [] xfs_ilock+0x7b/0xd6
>        [] xfs_ireclaim+0x1d/0x59
>        [] xfs_finish_reclaim+0x173/0x195
>        [] xfs_reclaim+0xb3/0x138
>        [] xfs_fs_clear_inode+0x55/0x8e
>        [] clear_inode+0x83/0xd2
>        [] dispose_list+0x3c/0xc1
>        [] shrink_icache_memory+0x173/0x19b
>        [] shrink_slab+0xda/0x14e
>        [] try_to_free_pages+0x1e4/0x2a2
>        [] __alloc_pages_internal+0x23a/0x39d
>        [] __alloc_pages+0xa/0xc
>        [] __do_page_cache_readahead+0xaa/0x16a
>        [] force_page_cache_readahead+0x4a/0x74
>        [] sys_madvise+0x308/0x400
>        [] sysenter_past_esp+0x6a/0xb1
>        [] 0xffffffff
>
> -> #0 (iprune_mutex){--..}:
>        [] __lock_acquire+0x929/0xbc6
>        [] lock_acquire+0x6a/0x86
>        [] mutex_lock_nested+0xb4/0x226
>        [] shrink_icache_memory+0x38/0x19b
>        [] shrink_slab+0xda/0x14e
>        [] try_to_free_pages+0x1e4/0x2a2
>        [] __alloc_pages_internal+0x23a/0x39d
>        [] __alloc_pages+0xa/0xc
>        [] __do_page_cache_readahead+0xaa/0x16a
>        [] ondemand_readahead+0x119/0x127
>        [] page_cache_async_readahead+0x52/0x5d
>        [] generic_file_splice_read+0x290/0x4a8
>        [] xfs_splice_read+0x4b/0x78
>        [] xfs_file_splice_read+0x24/0x29
>        [] do_splice_to+0x45/0x63
>        [] splice_direct_to_actor+0xab/0x150
>        [] nfsd_vfs_read+0x1ed/0x2d0
>        [] nfsd_read+0x82/0x99
>        [] nfsd3_proc_read+0xdf/0x12a
>        [] nfsd_dispatch+0xcf/0x19e
>        [] svc_process+0x3b3/0x68b
>        [] nfsd+0x168/0x26b
>        [] kernel_thread_helper+0x7/0x10
>        [] 0xffffffff
>
> other info that might help us debug this:
>
> 3 locks held by nfsd/3087:
>  #0:  (hash_sem){..--}, at: [] exp_readlock+0xd/0xf
>  #1:  (&(&ip->i_iolock)->mr_lock){----}, at: [] xfs_ilock+0xa2/0xd6
>  #2:  (shrinker_rwsem){----}, at: [] shrink_slab+0x24/0x14e
>
> stack backtrace:
> Pid: 3087, comm: nfsd Not tainted 2.6.26-rc1-00279-g28a4acb #13
>  [] print_circular_bug_tail+0x5a/0x65
>  [] ? print_circular_bug_header+0xa8/0xb3
>  [] __lock_acquire+0x929/0xbc6
>  [] ? native_sched_clock+0x8b/0x9f
>  [] lock_acquire+0x6a/0x86
>  [] ? shrink_icache_memory+0x38/0x19b
>  [] mutex_lock_nested+0xb4/0x226
>  [] ? shrink_icache_memory+0x38/0x19b
>  [] ? shrink_icache_memory+0x38/0x19b
>  [] shrink_icache_memory+0x38/0x19b
>  [] shrink_slab+0xda/0x14e
>  [] try_to_free_pages+0x1e4/0x2a2
>  [] ? _spin_unlock_irqrestore+0x36/0x58
>  [] ? isolate_pages_global+0x0/0x3e
>  [] __alloc_pages_internal+0x23a/0x39d
>  [] __alloc_pages+0xa/0xc
>  [] __do_page_cache_readahead+0xaa/0x16a
>  [] ondemand_readahead+0x119/0x127
>  [] page_cache_async_readahead+0x52/0x5d
>  [] generic_file_splice_read+0x290/0x4a8
>  [] ? _spin_unlock+0x27/0x3c
>  [] ? _atomic_dec_and_lock+0x25/0x30
>  [] ? iput+0x24/0x4e
>  [] ? __lock_acquire+0xbaa/0xbc6
>  [] ? exportfs_decode_fh+0x9b/0x1a1
>  [] ? spd_release_page+0x0/0xf
>  [] xfs_splice_read+0x4b/0x78
>  [] xfs_file_splice_read+0x24/0x29
>  [] do_splice_to+0x45/0x63
>  [] splice_direct_to_actor+0xab/0x150
>  [] ? nfsd_direct_splice_actor+0x0/0xf
>  [] nfsd_vfs_read+0x1ed/0x2d0
>  [] nfsd_read+0x82/0x99
>  [] nfsd3_proc_read+0xdf/0x12a
>  [] nfsd_dispatch+0xcf/0x19e
>  [] svc_process+0x3b3/0x68b
>  [] nfsd+0x168/0x26b
>  [] ? nfsd+0x0/0x26b
>  [] kernel_thread_helper+0x7/0x10
> --
> To unsubscribe from this list: send the line "unsubscribe kernel-testers" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html

-- 
Thanks & Regards,
Kamalesh Babulal,
Linux Technology Center,
IBM, ISTL.