From: Dave Chinner
Subject: Re: Filesystem benchmarks on reasonably fast hardware
Date: Tue, 19 Jul 2011 12:41:38 +1000
Message-ID: <20110719024138.GJ30254@dastard>
References: <20110718114036.GF1437@logfs.org> <20110718143450.GH1437@logfs.org>
In-Reply-To: <20110718143450.GH1437@logfs.org>
To: Jörn Engel
Cc: linux-fsdevel@vger.kernel.org

On Mon, Jul 18, 2011 at 01:40:36PM +0200, Jörn Engel wrote:
> On Mon, 18 July 2011 20:57:49 +1000, Dave Chinner wrote:
> > On Mon, Jul 18, 2011 at 09:53:39AM +0200, Jörn Engel wrote:
> > > On Mon, 18 July 2011 09:32:52 +1000, Dave Chinner wrote:
> > > > On Sun, Jul 17, 2011 at 06:05:01PM +0200, Jörn Engel wrote:
> >
> > > > > xfs:
> > > > > ====
> > > > > seqrd     1       2       4       8       16      32      64      128
> > > > > 16384     4698    4424    4397    4402    4394    4398    4642    4679
> > > > > 8192      6234    5827    5797    5801    5795    6114    5793    5812
> > > > > 4096      9100    8835    8882    8896    8874    8890    8910    8906
> > > > > 2048      14922   14391   14259   14248   14264   14264   14269   14273
> > > > > 1024      23853   22690   22329   22362   22338   22277   22240   22301
> > > > > 512       37353   33990   33292   33332   33306   33296   33224   33271
>
> Your patch definitely helps. The bottom-right number is 584741 now.
> Still slower than ext4 or btrfs, but in the right ballpark. Will
> post the entire block once it has been generated.

The btrfs numbers come from doing different IO. Have a look at all
the sub-filesystem-block-size numbers for btrfs: no matter the thread
count, the number is the same - hardware limits. btrfs is not doing
an IO per read syscall there - I'd say it's falling back to buffered
IO, unlike ext4 and xfs....
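To show what I mean by "an IO per read syscall", here's a minimal,
single-threaded sketch of the kind of direct IO read loop a seqrd
worker would run. This is not Jörn's benchmark - the path, the 4k
buffer alignment and the block size handling are all invented for
the illustration:

#define _GNU_SOURCE     /* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
        /* block size per read syscall, e.g. 512 to 16384 */
        size_t bs = argc > 1 ? strtoul(argv[1], NULL, 0) : 512;
        off_t off = 0;
        ssize_t n;
        void *buf;
        int fd;

        /* O_DIRECT asks the filesystem to bypass the page cache */
        fd = open("/mnt/test/file", O_RDONLY | O_DIRECT);
        if (fd < 0) {
                perror("open");
                return 1;
        }

        /* direct IO wants an aligned buffer; 4k covers common cases */
        if (posix_memalign(&buf, 4096, bs)) {
                perror("posix_memalign");
                return 1;
        }

        /*
         * If O_DIRECT is honoured (ext4, XFS), every pread() here is
         * a device IO, so ops/s are bounded by what the hardware can
         * do at this IO size. A filesystem that quietly falls back
         * to buffered IO for sub-block-size direct reads serves this
         * from the page cache instead, and the numbers stop varying
         * with IO size or thread count in the same way.
         */
        while ((n = pread(fd, buf, bs, off)) > 0)
                off += n;
        if (n < 0)
                perror("pread");

        free(buf);
        close(fd);
        return 0;
}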
.....

> seqrd     1       2       4       8       16      32      64      128
> 16384     4542    8311    15738   28955   38273   36644   38530   38527
> 8192      6000    10413   19208   33878   65927   76906   77083   77102
> 4096      8931    14971   24794   44223   83512   144867  147581  150702
> 2048      14375   23489   34364   56887   103053  192662  307167  309222
> 1024      21647   36022   49649   77163   132886  243296  421389  497581
> 512       31832   61257   79545   108782  176341  303836  517814  584741
>
> Quite a nice improvement for such a small patch. As they say, "every
> small factor of 17 helps". ;)

And in general the numbers are within a couple of percent of the ext4
numbers, which is probably a reflection of the slightly higher CPU
cost of the XFS read path compared to ext4.

> What bothers me a bit is that the single-threaded numbers took such a
> noticeable hit...

Is it reproducible? I did notice quite a bit of run-to-run variation
in the numbers I ran. For single-threaded numbers, it was on the order
of +/-100 ops at a 16k block size.

> > Ok, the patch below takes the numbers on my test setup on a 16k IO
> > size:
> >
> > seqrd        1       2       4       8       16
> > vanilla      3603    2798    2563    not tested...
> > patched      3707    5746    10304   12875   11016
>
> ...in particular when your numbers improve even for a single thread.
> Wonder what's going on here.

And these were just quoted from a single test run.

> Anyway, feel free to add a Tested-By: or something from me. And maybe
> fix the two typos below.

Will do.

Cheers,

Dave.
--
Dave Chinner
david@fromorbit.com