From: Dave Chinner
Subject: Re: Filesystem benchmarks on reasonably fast hardware
Date: Mon, 18 Jul 2011 09:32:52 +1000
Message-ID: <20110717233252.GH21663@dastard>
In-Reply-To: <20110717160501.GA1437@logfs.org>
To: Jörn Engel
Cc: linux-fsdevel@vger.kernel.org

On Sun, Jul 17, 2011 at 06:05:01PM +0200, Jörn Engel wrote:
> Hello everyone!
>
> Recently I have had the pleasure of working with some nice hardware
> and the displeasure of seeing it fail commercially. However, when
> trying to optimize performance I noticed that in some cases the
> bottlenecks were not in the hardware or my driver, but rather in the
> filesystem on top of it. So maybe all this may still be useful in
> improving said filesystem.
>
> Hardware is basically a fast SSD. Performance tops out at about
> 650MB/s and is fairly insensitive to random access behaviour. Latency
> is about 50us for 512B reads and near 0 for writes, through the usual
> cheating.
>
> Numbers below were created with sysbench, using direct I/O. Each
> block is a matrix with results for block sizes from 512B to 16384B
> and thread counts from 1 to 128. There are four blocks: reads and
> writes, both sequential and random.

What's the command line/script used to generate the result matrix?
And what kernel are you running on?
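For what it's worth, a matrix like that is usually produced by a small
wrapper looping over sysbench's fileio test. A sketch of what such a
driver might look like (sysbench 0.4-era flag names; the DRYRUN switch,
file size and run time are assumptions, not taken from Jörn's setup):

```shell
#!/bin/sh
# Hypothetical reconstruction of the benchmark driver: sysbench fileio
# with O_DIRECT, sweeping four modes, six block sizes and eight thread
# counts. With DRYRUN set the commands are only printed, not executed.
DRYRUN=1
for mode in seqrd rndrd seqwr rndwr; do
    for bs in 16384 8192 4096 2048 1024 512; do
        for threads in 1 2 4 8 16 32 64 128; do
            cmd="sysbench --test=fileio --file-test-mode=$mode \
--file-block-size=$bs --num-threads=$threads \
--file-extra-flags=direct --file-total-size=8G \
--max-time=60 --max-requests=0 run"
            if [ -n "$DRYRUN" ]; then
                echo "$cmd"      # print the command only
            else
                $cmd             # actually run the benchmark
            fi
        done
    done
done
```

That sweep is 4 x 6 x 8 = 192 sysbench runs, one cell of the result
matrix per run.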
> xfs:
> ====
> seqrd      1      2      4      8     16     32     64    128
> 16384   4698   4424   4397   4402   4394   4398   4642   4679
>  8192   6234   5827   5797   5801   5795   6114   5793   5812
>  4096   9100   8835   8882   8896   8874   8890   8910   8906
>  2048  14922  14391  14259  14248  14264  14264  14269  14273
>  1024  23853  22690  22329  22362  22338  22277  22240  22301
>   512  37353  33990  33292  33332  33306  33296  33224  33271

Something is single threading completely there - something is very
wrong. Someone want to send me a nice fast pci-e SSD - my disks don't
spin that fast... :/

> rndrd       1       2       4       8      16      32      64     128
> 16384    4585    8248   14219   22533   32020   38636   39033   39054
>  8192    6032   11186   20294   34443   53112   71228   78197   78284
>  4096    8247   15539   29046   52090   86744  125835  154031  157143
>  2048   11950   22652   42719   79562  140133  218092  286111  314870
>  1024   16526   31294   59761  112494  207848  348226  483972  574403
>   512   20635   39755   73010  130992  270648  484406  686190  726615
>
> seqwr       1       2       4       8      16      32      64     128
> 16384   39956   39695   39971   39913   37042   37538   36591   32179
>  8192   67934   66073   30963   29038   29852   25210   23983   28272
>  4096   89250   81417   28671   18685   12917   14870   22643   22237
>  2048  140272  120588  140665  140012  137516  139183  131330  129684
>  1024  217473  147899  210350  218526  219867  220120  219758  215166
>   512  328260  181197  211131  263533  294009  298203  301698  298013
>
> rndwr        1        2        4        8       16       32       64      128
> 16384    38447    38153    38145    38140    38156    38199    38208    38236
>  8192    78001    76965    76908    76945    77023    77174    77166    77106
>  4096   160721   156000   157196   157084   157078   157123   156978   157149
>  2048   325395   317148   317858   318442   318750   318981   319798   320393
>  1024   434084   649814   650176   651820   653928   654223   655650   655818
>   512   501067   876555  1290292  1217671  1244399  1267729  1285469  1298522

I'm assuming that since the h/w can do 650MB/s, the numbers are in
IOPS? From 4 threads up, all the results equate to roughly 650MB/s.

> Sequential reads are pretty horrible. Sequential writes are hitting a
> hot lock again.

lockstat output?
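That back-of-the-envelope check can be reproduced directly: multiplying
the reported IOPS by the block size recovers the throughput, which
should sit just under the ~650MB/s device ceiling Jörn described. A
minimal sketch, using the 128-thread column of the rndwr table above
(this check is mine, not part of the original mail):

```shell
#!/bin/sh
# Sanity check: IOPS x block size ~= sustained bandwidth in MB/s.
# Figures are the 128-thread rndwr results for 4K, 8K and 16K blocks.
awk 'BEGIN {
    iops[16384] = 38236; iops[8192] = 77106; iops[4096] = 157149;
    for (bs = 4096; bs <= 16384; bs *= 2)
        printf "%5d B x %6d IOPS = %3.0f MB/s\n", \
               bs, iops[bs], bs * iops[bs] / 1e6
}'
```

All three block sizes land in the 620-650MB/s range, which supports
reading the tables as IOPS against a bandwidth-saturated device.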
> So, if anyone would like to improve one of these filesystems and needs
> more data, feel free to ping me.

Of course I'm interested. ;)

Cheers,

Dave.
--
Dave Chinner
david@fromorbit.com