From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <512BDA2A.5050600@hardwarefreak.com>
Date: Mon, 25 Feb 2013 15:39:54 -0600
From: Stan Hoeppner
Reply-To: stan@hardwarefreak.com
Subject: Re: Consistent throughput challenge -- fragmentation?
List-Id: XFS Filesystem from SGI
To: Brian Cain
Cc: xfs@oss.sgi.com

On 2/25/2013 10:01 AM, Brian Cain wrote:
> All,
>
> I have been observing some odd behavior regarding write throughput to an
> XFS partition (the baseline kernel version is 2.6.32.27). I see
> consistently high write throughput (close to the performance of the raw
> block device) to the filesystem immediately after a mkfs, but after a few
> test cycles there is sporadic poor performance.
>
> The test mechanism is like so:
>
> [mkfs.xfs ] (no flags/options, xfsprogs ver 3.1.1-0.1.36)
> ...
> 1. remove the previous test cycle's directory
> 2. create a new directory
> 3. open/write/close a small file (4kb) in this directory
> 4. open/read/close this same small file (by the local NFS server)
> 5.
> open[O_DIRECT]/write/write/write/.../close a large file (anywhere from
> ~100MB to 200GB)
>
> Step #5 contains the high-throughput metrics, which become an order of
> magnitude worse several test cycles after a mkfs. Omitting steps 1-3 does
> not show the poor performance behavior.
>
> Can anyone provide any suggestions as to an explanation for the behavior,
> or a way to mitigate it? Running xfs_fsr didn't seem to improve the
> results.

The usual cause of such low performance on an aged filesystem is free
space fragmentation. xfs_fsr will defragment files, but in doing so it
*increases* free space fragmentation, so it won't help the situation.

> I'm happy to share benchmarks, specific results data, or describe the
> hardware being used for the measurements if it's helpful.

Paste the output of 'xfs_db -r -c freesp /dev/[device]' just before you do
the large file write. This will show us the free space distribution
histogram.

-- 
Stan

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
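[Editor's note: a quick way to gauge free space fragmentation from that
freesp histogram is to check what fraction of the free blocks sits in small
extents. The sketch below parses freesp-style output; the sample text, the
column layout, and the 63-block "small" threshold are illustrative
assumptions, not taken from a real run.]

```python
# Sketch: summarize an xfs_db "freesp"-style histogram.
# The SAMPLE text below is made up for illustration; real output and
# column widths vary with the xfsprogs version and the filesystem.
SAMPLE = """\
   from      to extents  blocks    pct
      1       1   51200   51200   12.51
      2       3   20000   50000   12.22
      4       7    9000   49000   11.97
      8      15    4000   43000   10.51
     16      31    2000   45000   11.00
     32      63     900   40000    9.77
  65536  131072       1  131072   32.02
"""

def small_extent_fraction(freesp_text, max_small_blocks=63):
    """Return the fraction of free blocks held in extents no larger
    than max_small_blocks blocks (a rough fragmentation indicator)."""
    small = total = 0
    for line in freesp_text.splitlines():
        fields = line.split()
        # Skip the header and any non-data lines.
        if len(fields) != 5 or not fields[0].isdigit():
            continue
        to, blocks = int(fields[1]), int(fields[3])
        total += blocks
        if to <= max_small_blocks:
            small += blocks
    return small / total if total else 0.0

if __name__ == "__main__":
    frac = small_extent_fraction(SAMPLE)
    print(f"{frac:.1%} of free space is in extents of <= 63 blocks")
```

If most of the free space sits in small extents, large O_DIRECT writes
can no longer be allocated as a few contiguous runs, which matches the
order-of-magnitude throughput drop described above.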