Message-ID: <46490478.3010409@goop.org>
Date: Mon, 14 May 2007 17:53:12 -0700
From: Jeremy Fitzhardinge
To: David Chinner
CC: xfs@oss.sgi.com, Linux Kernel Mailing List
Subject: 2.6.22-rc1 xfs lockdep messages

I tend to get this when doing unlinks or rms in xfs:

=======================================================
[ INFO: possible circular locking dependency detected ]
2.6.22-rc1-paravirt #1382
-------------------------------------------------------
rm/1451 is trying to acquire lock:
 (&(&ip->i_lock)->mr_lock/1){--..}, at: [] xfs_ilock+0x64/0x8d [xfs]

but task is already holding lock:
 (&(&ip->i_lock)->mr_lock){----}, at: [] xfs_ilock+0x64/0x8d [xfs]

which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:

-> #1 (&(&ip->i_lock)->mr_lock){----}:
       [] __lock_acquire+0xa1f/0xbab
       [] lock_acquire+0x7b/0x9f
       [] down_write_nested+0x3d/0x58
       [] xfs_ilock+0x64/0x8d [xfs]
       [] xfs_iget_core+0x2bd/0x605 [xfs]
       [] xfs_iget+0xac/0x133 [xfs]
       [] xfs_trans_iget+0xdc/0x142 [xfs]
       [] xfs_ialloc+0xa5/0x457 [xfs]
       [] xfs_dir_ialloc+0x6d/0x260 [xfs]
       [] xfs_create+0x2f4/0x5a6 [xfs]
       [] xfs_vn_mknod+0x130/0x1e5 [xfs]
       [] xfs_vn_create+0x12/0x14 [xfs]
       [] vfs_create+0x9b/0xe5
       [] open_namei+0x176/0x593
       [] do_filp_open+0x26/0x3b
       [] do_sys_open+0x43/0xc7
       [] sys_open+0x1c/0x1e
       [] syscall_call+0x7/0xb
       [] 0xffffffff

-> #0 (&(&ip->i_lock)->mr_lock/1){--..}:
       [] __lock_acquire+0x903/0xbab
       [] lock_acquire+0x7b/0x9f
       [] down_write_nested+0x3d/0x58
       [] xfs_ilock+0x64/0x8d [xfs]
       [] xfs_lock_inodes+0x11d/0x12f [xfs]
       [] xfs_lock_dir_and_entry+0xc2/0xcc [xfs]
       [] xfs_remove+0x213/0x425 [xfs]
       [] xfs_vn_unlink+0x1c/0x44 [xfs]
       [] vfs_unlink+0x75/0xb3
       [] do_unlinkat+0x96/0x12c
       [] sys_unlink+0x13/0x15
       [] syscall_call+0x7/0xb
       [] 0xffffffff

other info that might help us debug this:

3 locks held by rm/1451:
 #0:  (&inode->i_mutex/1){--..}, at: [] do_unlinkat+0x5e/0x12c
 #1:  (&inode->i_mutex){--..}, at: [] mutex_lock+0x1f/0x23
 #2:  (&(&ip->i_lock)->mr_lock){----}, at: [] xfs_ilock+0x64/0x8d [xfs]

stack backtrace:
 [] show_trace_log_lvl+0x1a/0x30
 [] show_trace+0x12/0x14
 [] dump_stack+0x16/0x18
 [] print_circular_bug_tail+0x5f/0x68
 [] __lock_acquire+0x903/0xbab
 [] lock_acquire+0x7b/0x9f
 [] down_write_nested+0x3d/0x58
 [] xfs_ilock+0x64/0x8d [xfs]
 [] xfs_lock_inodes+0x11d/0x12f [xfs]
 [] xfs_lock_dir_and_entry+0xc2/0xcc [xfs]
 [] xfs_remove+0x213/0x425 [xfs]
 [] xfs_vn_unlink+0x1c/0x44 [xfs]
 [] vfs_unlink+0x75/0xb3
 [] do_unlinkat+0x96/0x12c
 [] sys_unlink+0x13/0x15
 [] syscall_call+0x7/0xb
=======================

    J