From: Stan Hoeppner <stan@hardwarefreak.com>
To: xfs@oss.sgi.com
Subject: Re: Performance problem - reads slower than writes
Date: Wed, 01 Feb 2012 01:29:30 -0600
Message-ID: <4F28E9DA.8030407@hardwarefreak.com>
In-Reply-To: <20120131202526.GJ9090@dastard>
On 1/31/2012 2:25 PM, Dave Chinner wrote:
> On Tue, Jan 31, 2012 at 02:16:04PM +0000, Brian Candler wrote:
>> Here we appear to be limited by real seeks. 225 seeks/sec is still very good
>
> That number indicates 225 IOs/s, not 225 seeks/s.
Yeah, the voice coil actuator and spindle rotation limit the peak
random seek rate of a good 7.2k drive/controller combo to about 150/s;
15k drives do about 250-300 seeks/s max. Here's a simple tool for
testing the max random seeks/sec of a device:
32bit binary: http://www.hardwarefreak.com/seekerb
source: http://www.hardwarefreak.com/seeker_baryluk.c
I'm not the author. The original seeker program is single threaded.
Baryluk did the thread hacking. Background info:
http://www.linuxinsight.com/how_fast_is_your_disk.html
Usage: ./seekerb device [threads]
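In case those links ever go dead, the technique is nothing fancy: N
threads each pread() one sector at a random offset in a tight loop,
and you count completions at the end. A minimal sketch of that
approach (my own code, not Baryluk's; assumes Linux/GCC, error
handling trimmed):

/* seeksketch.c - threaded random-seek benchmark sketch.
 * Build: gcc -O2 -o seeksketch seeksketch.c -lpthread
 * Run:   ./seeksketch /dev/sdX [threads]
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define RUNTIME     30          /* seconds, same as seeker */
#define MAXTHREADS  1024

static int fd;
static off_t dev_bytes;
static volatile int stop;
static long counts[MAXTHREADS]; /* per-thread completion counts */

static void *worker(void *arg)
{
    long id = (long)arg;
    unsigned seed = (unsigned)time(NULL) ^ (unsigned)id;
    /* 512-byte aligned so the buffer is valid for O_DIRECT reads */
    static __thread char buf[512] __attribute__((aligned(512)));

    while (!stop) {
        /* random sector-aligned offset; coverage is fine for
         * sub-TB devices, which is good enough for a sketch */
        off_t off = ((off_t)rand_r(&seed) << 9) % dev_bytes;
        off &= ~(off_t)511;
        if (pread(fd, buf, 512, off) == 512)
            counts[id]++;
    }
    return NULL;
}

int main(int argc, char **argv)
{
    long i, nthreads, total = 0;
    pthread_t tid[MAXTHREADS];

    if (argc < 2) {
        fprintf(stderr, "usage: %s device [threads]\n", argv[0]);
        return 1;
    }
    nthreads = (argc > 2) ? atoi(argv[2]) : 1;
    if (nthreads < 1 || nthreads > MAXTHREADS)
        nthreads = 1;

    /* O_DIRECT keeps the page cache out of the measurement */
    fd = open(argv[1], O_RDONLY | O_DIRECT);
    if (fd < 0) { perror(argv[1]); return 1; }
    dev_bytes = lseek(fd, 0, SEEK_END);  /* device size in bytes */

    for (i = 0; i < nthreads; i++)
        pthread_create(&tid[i], NULL, worker, (void *)i);
    sleep(RUNTIME);
    stop = 1;
    for (i = 0; i < nthreads; i++) {
        pthread_join(tid[i], NULL);
        total += counts[i];
    }
    printf("Results: %ld seeks/second\n", total / RUNTIME);
    return 0;
}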
Results for a single WD 7.2K drive, no NCQ, deadline elevator:
  1 thread    Results:  64 seeks/second, 15.416 ms random access time
 16 threads   Results:  97 seeks/second, 10.285 ms random access time
128 threads   Results: 121 seeks/second,  8.208 ms random access time
Actual output:

$ seekerb /dev/sda 128
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/sda [976773168 blocks, 500107862016 bytes, 465 GB, 476940 MB, 500 GiB, 500107 MiB]
[512 logical sector size, 512 physical sector size]
[128 threads]
Wait 30 seconds.............................
Results: 121 seeks/second, 8.208 ms random access time (52614775 < offsets < 499769984475)
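Those numbers square with back-of-envelope drive physics (my
arithmetic, assuming a typical ~8.5 ms average seek for a 7.2k
drive, not a spec from this particular WD):

  half rotation @ 7200 rpm:  60 s / 7200 / 2 = ~4.2 ms
  average seek:                                ~8.5 ms
  service time per random read:               ~12.7 ms -> ~79 IOPS

With one outstanding request you can't beat that, hence 64 seeks/s
single threaded. Queue up 128 requests and deadline sorts them by
LBA, shrinking the average seek distance (and time); that's how the
same spindle gets to 121 seeks/s.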
Targeting array devices (mdraid or hardware RAID, or an FC SAN LUN)
with lots of spindles, and/or SSDs, should yield some interesting
results.
--
Stan