From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <4A6DE939.6050606@redhat.com>
Date: Mon, 27 Jul 2009 13:51:53 -0400
From: Prarit Bhargava
To: xfs@oss.sgi.com, Eric Sandeen
Subject: Circular locking on rawhide 2.6.31-0.81.rc3.git4
List-Id: XFS Filesystem from SGI
Sender: xfs-bounces@oss.sgi.com
Content-Type: text/plain; charset="us-ascii"; Format="flowed"

Hello everyone,

This was seen while doing a "rpmbuild -bp kernel.spec" on a recent
rawhide build.

=======================================================
[ INFO: possible circular locking dependency detected ]
2.6.31-0.81.rc3.git4.fc12.x86_64 #1
-------------------------------------------------------
rpm/4790 is trying to acquire lock:
 (&(&ip->i_iolock)->mr_lock){++++++}, at: [] xfs_ilock+0x3f/0xa7 [xfs]

but task is already holding lock:
 (&mm->mmap_sem){++++++}, at: [] sys_munmap+0x4b/0x86

which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:

-> #1 (&mm->mmap_sem){++++++}:
       [] __lock_acquire+0xa79/0xc0e
       [] lock_acquire+0xee/0x12e
       [] might_fault+0x9e/0xd9
       [] file_read_actor+0xdf/0x137
       [] generic_file_aio_read+0x321/0x52f
       [] xfs_read+0x190/0x214 [xfs]
       [] xfs_file_aio_read+0x77/0x8d [xfs]
       [] do_sync_read+0xfa/0x14b
       [] vfs_read+0xba/0x12b
       [] sys_read+0x59/0x91
       [] system_call_fastpath+0x16/0x1b
       [] 0xffffffffffffffff

-> #0 (&(&ip->i_iolock)->mr_lock){++++++}:
       [] __lock_acquire+0x956/0xc0e
       [] lock_acquire+0xee/0x12e
       [] down_write_nested+0x61/0xac
       [] xfs_ilock+0x3f/0xa7 [xfs]
       [] xfs_free_eofblocks+0x126/0x238 [xfs]
       [] xfs_release+0x150/0x173 [xfs]
       [] xfs_file_release+0x28/0x40 [xfs]
       [] __fput+0x137/0x1f8
       [] fput+0x2d/0x43
       [] remove_vma+0x67/0xb5
       [] do_munmap+0x305/0x33b
       [] sys_munmap+0x59/0x86
       [] system_call_fastpath+0x16/0x1b
       [] 0xffffffffffffffff

other info that might help us debug this:

1 lock held by rpm/4790:
 #0:  (&mm->mmap_sem){++++++}, at: [] sys_munmap+0x4b/0x86

stack backtrace:
Pid: 4790, comm: rpm Not tainted 2.6.31-0.81.rc3.git4.fc12.x86_64 #1
Call Trace:
 [] print_circular_bug_tail+0x80/0x9f
 [] ? check_noncircular+0x93/0xe8
 [] __lock_acquire+0x956/0xc0e
 [] lock_acquire+0xee/0x12e
 [] ? xfs_ilock+0x3f/0xa7 [xfs]
 [] ? xfs_ilock+0x3f/0xa7 [xfs]
 [] down_write_nested+0x61/0xac
 [] ? xfs_ilock+0x3f/0xa7 [xfs]
 [] xfs_ilock+0x3f/0xa7 [xfs]
 [] xfs_free_eofblocks+0x126/0x238 [xfs]
 [] xfs_release+0x150/0x173 [xfs]
 [] xfs_file_release+0x28/0x40 [xfs]
 [] __fput+0x137/0x1f8
 [] fput+0x2d/0x43
 [] remove_vma+0x67/0xb5
 [] do_munmap+0x305/0x33b
 [] ? sys_munmap+0x4b/0x86
 [] sys_munmap+0x59/0x86
 [] system_call_fastpath+0x16/0x1b

P.

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
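For anyone reading along: the report boils down to an ABBA ordering. The read path (chain #1) takes the XFS i_iolock and then faults in the user buffer, which takes mmap_sem; the munmap path (chain #0) holds mmap_sem and then, via the final fput -> xfs_release -> xfs_free_eofblocks, tries to take i_iolock. Below is a toy, userspace sketch of the kind of dependency tracking lockdep does; it is not lockdep itself, and the lock names are just labels mirroring this report.

```python
# Toy lock-order tracker (a sketch of lockdep's core idea, not real kernel
# code). "mmap_sem" and "i_iolock" are labels taken from the report above.

class LockOrderTracker:
    def __init__(self):
        self.edges = {}   # lock -> set of locks taken while it was held
        self.held = []    # stack of locks currently held by our one "task"

    def acquire(self, lock):
        # Record an ordering edge from every currently-held lock to `lock`.
        # If `lock` already precedes one of them somewhere, we have a cycle.
        for h in self.held:
            self.edges.setdefault(h, set()).add(lock)
            if self._reaches(lock, h):
                raise RuntimeError(
                    f"possible circular locking dependency: {h} -> {lock}")
        self.held.append(lock)

    def release(self, lock):
        self.held.remove(lock)

    def _reaches(self, src, dst, seen=None):
        # Depth-first search over recorded ordering edges.
        seen = seen if seen is not None else set()
        if src == dst:
            return True
        seen.add(src)
        return any(self._reaches(n, dst, seen)
                   for n in self.edges.get(src, ()) if n not in seen)

t = LockOrderTracker()

# Chain #1 (read): xfs_read holds i_iolock, then file_read_actor faults
# on the user buffer, which takes mmap_sem.
t.acquire("i_iolock")
t.acquire("mmap_sem")
t.release("mmap_sem")
t.release("i_iolock")

# Chain #0 (munmap): sys_munmap holds mmap_sem, then the last fput runs
# xfs_release -> xfs_free_eofblocks, which wants i_iolock -- the reverse
# order, so the tracker flags the cycle instead of deadlocking.
t.acquire("mmap_sem")
try:
    t.acquire("i_iolock")
except RuntimeError as e:
    print(e)  # possible circular locking dependency: mmap_sem -> i_iolock
```

Unlike a real deadlock, which needs two tasks racing, the ordering graph lets a single run flag the inversion as soon as both acquisition orders have been observed once, which is exactly why lockdep fires here without rpm actually hanging.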