Date: Tue, 30 Oct 2012 09:26:13 +1100
From: Dave Chinner
Subject: Re: Hang in XFS reclaim on 3.7.0-rc3
Message-ID: <20121029222613.GU29378@dastard>
To: Torsten Kaiser
Cc: Linux Kernel, xfs@oss.sgi.com

On Mon, Oct 29, 2012 at 09:03:15PM +0100, Torsten Kaiser wrote:
> After experiencing a hang of all IO yesterday (
> http://marc.info/?l=linux-kernel&m=135142236520624&w=2 ), I turned on
> LOCKDEP after upgrading to -rc3.
>
> I then tried to replicate the load that hung yesterday and got the
> following lockdep report, implicating XFS instead of the stacking of
> swap onto dm-crypt and md.
>
> [ 2844.971913]
> [ 2844.971920] =================================
> [ 2844.971921] [ INFO: inconsistent lock state ]
> [ 2844.971924] 3.7.0-rc3 #1 Not tainted
> [ 2844.971925] ---------------------------------
> [ 2844.971927] inconsistent {RECLAIM_FS-ON-W} -> {IN-RECLAIM_FS-W} usage.
> [ 2844.971929] kswapd0/725 [HC0[0]:SC0[0]:HE1:SE1] takes:
> [ 2844.971931] (&(&ip->i_lock)->mr_lock){++++?.}, at: [] xfs_ilock+0x84/0xb0
> [ 2844.971941] {RECLAIM_FS-ON-W} state was registered at:
> [ 2844.971942] [] mark_held_locks+0x7e/0x130
> [ 2844.971947] [] lockdep_trace_alloc+0x63/0xc0
> [ 2844.971949] [] kmem_cache_alloc+0x35/0xe0
> [ 2844.971952] [] vm_map_ram+0x271/0x770
> [ 2844.971955] [] _xfs_buf_map_pages+0x46/0xe0
> [ 2844.971959] [] xfs_buf_get_map+0x8a/0x130
> [ 2844.971961] [] xfs_trans_get_buf_map+0xa9/0xd0
> [ 2844.971964] [] xfs_ifree_cluster+0x129/0x670
> [ 2844.971967] [] xfs_ifree+0xe9/0xf0
> [ 2844.971969] [] xfs_inactive+0x2af/0x480
> [ 2844.971972] [] xfs_fs_evict_inode+0x70/0x80
> [ 2844.971974] [] evict+0xaf/0x1b0
> [ 2844.971977] [] iput+0x105/0x210
> [ 2844.971979] [] dentry_iput+0xa0/0xe0
> [ 2844.971981] [] dput+0x150/0x280
> [ 2844.971983] [] sys_renameat+0x21b/0x290
> [ 2844.971986] [] sys_rename+0x16/0x20
> [ 2844.971988] [] system_call_fastpath+0x16/0x1b

We shouldn't be mapping pages there. See if the patch below fixes it.

Fundamentally, though, the lockdep warning has come about because
vm_map_ram is doing a GFP_KERNEL allocation when we need it to be
doing GFP_NOFS - we are within a transaction here, so memory reclaim
is not allowed to recurse back into the filesystem.

mm-folk: can we please get this vmalloc/gfp_flags passing API fixed
once and for all? This is the fourth time in the last month or so
that I've seen XFS bug reports with silent hangs and associated
lockdep output that implicate GFP_KERNEL allocations from vm_map_ram
in GFP_NOFS conditions as the potential cause....

Cheers,

Dave.
--
Dave Chinner
david@fromorbit.com

xfs: don't vmap inode cluster buffers during free

From: Dave Chinner

Signed-off-by: Dave Chinner
---
 fs/xfs/xfs_inode.c |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c
index c4add46..82f6e5d 100644
--- a/fs/xfs/xfs_inode.c
+++ b/fs/xfs/xfs_inode.c
@@ -1781,7 +1781,8 @@ xfs_ifree_cluster(
 	 * to mark all the active inodes on the buffer stale.
 	 */
 	bp = xfs_trans_get_buf(tp, mp->m_ddev_targp, blkno,
-				mp->m_bsize * blks_per_cluster, 0);
+				mp->m_bsize * blks_per_cluster,
+				XBF_UNMAPPED);
 	if (!bp)
 		return ENOMEM;

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs