From: Dave Chinner <david@fromorbit.com>
To: "Guk-Bong, Kwon"
Cc: xfs@oss.sgi.com
Date: Thu, 9 Aug 2012 08:23:05 +1000
Subject: Re: xfs hang when filesystem filled
Message-ID: <20120808222305.GV2877@dastard>

On Tue, Aug 07, 2012 at 02:54:48PM +0900, Guk-Bong, Kwon wrote:
> Hi all
>
> I tested xfs over nfs using bonnie++
>
> xfs and nfs hang when the xfs filesystem is filled
>
> What's the problem?

It appears to be blocked in writeback, getting ENOSPC errors when
they shouldn't occur.

> see below
> --------------------------------
>
> 1. nfs server
>
> a. uname -a
> - Linux nfs_server 2.6.32.58 #1 SMP Thu Mar 22 13:33:34 KST 2012 x86_64
>   Intel(R) Xeon(R) CPU E5606 @ 2.13GHz GenuineIntel GNU/Linux

Old kernel. Upgrade.

> ================================================================================
> /test 0.0.0.0/0.0.0.0(rw,async,wdelay,hide,nocrossmnt,insecure,no_root_squash,no_all_squash,no_subtree_check,secure_locks,acl,fsid=1342087477,anonuid=65534,anongid=65534)
> ================================================================================

You're using the async export option, which means the server/client
write throttling mechanisms built into the NFS protocol are not
active. That leads to clients swamping the server with dirty data
and not backing off when the server is overloaded, and it leads to
-data loss- when the server fails.

IOWs, you're massively overcommitting allocation from lots of
threads, which means you are probably depleting the free space
pool, and that leads to -data loss- and potentially deadlocks.

If this is what your production systems do, then a) increase the
reserve pool, and b) fix your production systems not to do this.
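For (a), the reserve pool can be resized at runtime through the
XFS_IOC_SET_RESBLKS ioctl (xfs_io's expert-mode "resblks" command
wraps the same interface). A minimal sketch, assuming the xfsprogs
development headers are installed and the program is run as root;
the 8192-block figure is purely illustrative, not a recommendation:

  /* setresblks.c - enlarge the XFS reserved block pool */
  #include <stdio.h>
  #include <fcntl.h>
  #include <unistd.h>
  #include <sys/ioctl.h>
  #include <xfs/xfs.h>   /* XFS_IOC_SET_RESBLKS, xfs_fsop_resblks_t */

  int main(int argc, char **argv)
  {
          /* 8192 filesystem blocks is an illustrative value only -
           * size the pool for your workload. */
          xfs_fsop_resblks_t res = { .resblks = 8192 };
          int fd;

          if (argc < 2) {
                  fprintf(stderr, "usage: %s <mountpoint>\n", argv[0]);
                  return 1;
          }
          /* any fd on the target filesystem will do */
          fd = open(argv[1], O_RDONLY);
          if (fd < 0 || ioctl(fd, XFS_IOC_SET_RESBLKS, &res) < 0) {
                  perror("XFS_IOC_SET_RESBLKS");
                  return 1;
          }
          /* the kernel writes the resulting pool state back */
          printf("reserve pool: %llu blocks, %llu available\n",
                 (unsigned long long)res.resblks,
                 (unsigned long long)res.resblks_avail);
          close(fd);
          return 0;
  }

For (b), the first step is dropping async from the export so the
NFS write throttling works again, i.e. something like:

  /test 0.0.0.0/0.0.0.0(rw,sync,wdelay,hide,nocrossmnt,insecure,no_root_squash,no_all_squash,no_subtree_check,secure_locks,acl,fsid=1342087477,anonuid=65534,anongid=65534)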
> Aug 2 18:17:58 anystor1 kernel: Call Trace:
> Aug 2 18:17:58 anystor1 kernel: [] ? xfs_btree_is_lastrec+0x4e/0x60
> Aug 2 18:17:58 anystor1 kernel: [] ? schedule_timeout+0x1ed/0x250
> Aug 2 18:17:58 anystor1 kernel: [] ? __down+0x61/0xa0
> Aug 2 18:17:58 anystor1 kernel: [] ? down+0x46/0x50
> Aug 2 18:17:58 anystor1 kernel: [] ? _xfs_buf_find+0x134/0x220
> Aug 2 18:17:58 anystor1 kernel: [] ? xfs_buf_get_flags+0x6e/0x190
> Aug 2 18:17:58 anystor1 kernel: [] ? xfs_trans_get_buf+0x10e/0x160
> Aug 2 18:17:58 anystor1 kernel: [] ? xfs_alloc_fix_freelist+0x144/0x450
> Aug 2 18:17:58 anystor1 kernel: [] ? xfs_icsb_disable_counter+0x17/0x160
> Aug 2 18:17:58 anystor1 kernel: [] ? xfs_bmap_add_extent_delay_real+0x8d2/0x11a0
> Aug 2 18:17:58 anystor1 kernel: [] ? xfs_trans_log_buf+0x63/0xa0
> Aug 2 18:17:58 anystor1 kernel: [] ? xfs_icsb_balance_counter_locked+0x31/0xf0
> Aug 2 18:17:58 anystor1 kernel: [] ? xfs_alloc_vextent+0x1b1/0x4c0
> Aug 2 18:17:58 anystor1 kernel: [] ? xfs_bmap_btalloc+0x596/0xa70
> Aug 2 18:17:58 anystor1 kernel: [] ? xfs_bmapi+0x9fa/0x1230
> Aug 2 18:17:58 anystor1 kernel: [] ? xlog_state_release_iclog+0x56/0xe0
> Aug 2 18:17:58 anystor1 kernel: [] ? xfs_trans_reserve+0x9f/0x210
> Aug 2 18:17:58 anystor1 kernel: [] ? xfs_iomap_write_allocate+0x24e/0x3d0
> Aug 2 18:17:58 anystor1 kernel: [] ? elv_insert+0xf0/0x260
> Aug 2 18:17:58 anystor1 kernel: [] ? xfs_iomap+0x2cb/0x300
> Aug 2 18:17:58 anystor1 kernel: [] ? xfs_map_blocks+0x25/0x30
> Aug 2 18:17:58 anystor1 kernel: [] ? xfs_page_state_convert+0x414/0x6d0
> Aug 2 18:17:58 anystor1 kernel: [] ? xfs_vm_writepage+0x77/0x130
> Aug 2 18:17:58 anystor1 kernel: [] ? __writepage+0xa/0x40
> Aug 2 18:17:58 anystor1 kernel: [] ? write_cache_pages+0x1df/0x3d0
> Aug 2 18:17:58 anystor1 kernel: [] ? __writepage+0x0/0x40
> Aug 2 18:17:58 anystor1 kernel: [] ? __filemap_fdatawrite_range+0x4c/0x60
> Aug 2 18:17:58 anystor1 kernel: [] ? radix_tree_gang_lookup+0x71/0xf0
> Aug 2 18:17:58 anystor1 kernel: [] ? xfs_flush_pages+0xad/0xc0
> Aug 2 18:17:58 anystor1 kernel: [] ? xfs_sync_inode_data+0xca/0xf0
> Aug 2 18:17:58 anystor1 kernel: [] ? xfs_inode_ag_walk+0x80/0x140
> Aug 2 18:17:58 anystor1 kernel: [] ? xfs_sync_inode_data+0x0/0xf0
> Aug 2 18:17:58 anystor1 kernel: [] ? xfs_inode_ag_iterator+0x88/0xd0
> Aug 2 18:17:58 anystor1 kernel: [] ? xfs_sync_inode_data+0x0/0xf0
> Aug 2 18:17:58 anystor1 kernel: [] ? schedule_timeout+0x15d/0x250
> Aug 2 18:17:58 anystor1 kernel: [] ? xfs_sync_data+0x30/0x60
> Aug 2 18:17:58 anystor1 kernel: [] ? xfs_flush_inodes_work+0x1e/0x50
> Aug 2 18:17:58 anystor1 kernel: [] ? xfssyncd+0x13c/0x1d0
> Aug 2 18:17:58 anystor1 kernel: [] ? xfssyncd+0x0/0x1d0
> Aug 2 18:17:58 anystor1 kernel: [] ? kthread+0x96/0xb0

There's your problem - writeback of data is blocked waiting on a
metadata buffer, and everything else is blocked behind it. Upgrade
your kernel.

In summary, you are doing something silly on a very old kernel and
you broke it. As a prize, you get to keep all the broken pieces.....

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs