From: Dave Chinner <david@fromorbit.com>
To: "Jörn Engel" <joern@logfs.org>
Cc: linux-fsdevel@vger.kernel.org
Subject: Re: Filesystem benchmarks on reasonably fast hardware
Date: Mon, 18 Jul 2011 09:32:52 +1000
Message-ID: <20110717233252.GH21663@dastard>
In-Reply-To: <20110717160501.GA1437@logfs.org>
On Sun, Jul 17, 2011 at 06:05:01PM +0200, Jörn Engel wrote:
> Hello everyone!
>
> Recently I have had the pleasure of working with some nice hardware
> and the displeasure of seeing it fail commercially. However, when
> trying to optimize performance I noticed that in some cases the
> bottlenecks were not in the hardware or my driver, but rather in the
> filesystem on top of it. So maybe all this may still be useful in
> improving said filesystem.
>
> Hardware is basically a fast SSD. Performance tops out at about
> 650MB/s and is fairly insensitive to random access behaviour. Latency
> is about 50us for 512B reads and near 0 for writes, through the usual
> cheating.
>
> Numbers below were created with sysbench, using directIO. Each block
> is a matrix with results for blocksizes from 512B to 16384B and thread
> count from 1 to 128. Four blocks for reads and writes, both
> sequential and random.
What's the command line/script used to generate the result matrix?
And what kernel are you running on?
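(For reference, this is the sort of sweep I'd expect - a sketch only,
the exact sysbench options you used are an assumption on my part:

    # prepare the file set once, then sweep mode x blocksize x threads
    sysbench --test=fileio --file-total-size=8G prepare
    for mode in seqrd rndrd seqwr rndwr; do
      for bs in 16384 8192 4096 2048 1024 512; do
        for t in 1 2 4 8 16 32 64 128; do
          sysbench --test=fileio --file-test-mode=$mode \
              --file-block-size=$bs --num-threads=$t \
              --file-extra-flags=direct --file-total-size=8G \
              --max-time=60 --max-requests=0 run |
            grep 'Requests/sec'    # requests/sec == iops
        done
      done
    done
)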
> xfs:
> ====
> seqrd       1       2       4       8      16      32      64     128
> 16384    4698    4424    4397    4402    4394    4398    4642    4679
>  8192    6234    5827    5797    5801    5795    6114    5793    5812
>  4096    9100    8835    8882    8896    8874    8890    8910    8906
>  2048   14922   14391   14259   14248   14264   14264   14269   14273
>  1024   23853   22690   22329   22362   22338   22277   22240   22301
>   512   37353   33990   33292   33332   33306   33296   33224   33271
Something is single threading completely there - throughput is flat
from 1 thread all the way up to 128 - something is very wrong.
Anyone want to send me a nice fast pci-e SSD - my disks don't spin
that fast... :/
> rndrd       1       2       4       8      16      32      64     128
> 16384    4585    8248   14219   22533   32020   38636   39033   39054
>  8192    6032   11186   20294   34443   53112   71228   78197   78284
>  4096    8247   15539   29046   52090   86744  125835  154031  157143
>  2048   11950   22652   42719   79562  140133  218092  286111  314870
>  1024   16526   31294   59761  112494  207848  348226  483972  574403
>   512   20635   39755   73010  130992  270648  484406  686190  726615
>
> seqwr       1       2       4       8      16      32      64     128
> 16384   39956   39695   39971   39913   37042   37538   36591   32179
>  8192   67934   66073   30963   29038   29852   25210   23983   28272
>  4096   89250   81417   28671   18685   12917   14870   22643   22237
>  2048  140272  120588  140665  140012  137516  139183  131330  129684
>  1024  217473  147899  210350  218526  219867  220120  219758  215166
>   512  328260  181197  211131  263533  294009  298203  301698  298013
>
> rndwr       1       2       4       8      16      32      64     128
> 16384   38447   38153   38145   38140   38156   38199   38208   38236
>  8192   78001   76965   76908   76945   77023   77174   77166   77106
>  4096  160721  156000  157196  157084  157078  157123  156978  157149
>  2048  325395  317148  317858  318442  318750  318981  319798  320393
>  1024  434084  649814  650176  651820  653928  654223  655650  655818
>   512  501067  876555 1290292 1217671 1244399 1267729 1285469 1298522
I'm assuming that if the h/w can do 650MB/s then the numbers are in
iops? From 4 threads up, all the rndwr results equate to ~650MB/s.
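Working a few of the 128-thread rndwr cells back to bandwidth:

      38236 iops * 16384 bytes = ~626MB/s
     157149 iops *  4096 bytes = ~644MB/s
    1298522 iops *   512 bytes = ~665MB/s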
> Sequential reads are pretty horrible. Sequential writes are hitting a
> hot lock again.
lockstat output?
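If your kernel is built with CONFIG_LOCK_STAT=y, the standard recipe
from Documentation/lockstat.txt is roughly:

    echo 0 > /proc/lock_stat             # clear any stale counters
    echo 1 > /proc/sys/kernel/lock_stat  # start collecting
    # ... run the seqwr workload ...
    echo 0 > /proc/sys/kernel/lock_stat  # stop collecting
    cat /proc/lock_stat                  # per-lock-class contention stats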
> So, if anyone would like to improve one of these filesystems and needs
> more data, feel free to ping me.
Of course I'm interested. ;)
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com