From: John Robinson <john.robinson@anonymous.org.uk>
To: Linux RAID <linux-raid@vger.kernel.org>
Subject: Re: Awful RAID5 random read performance
Date: Wed, 03 Jun 2009 20:57:34 +0100 [thread overview]
Message-ID: <4A26D5AE.2000003@anonymous.org.uk> (raw)
In-Reply-To: <4A26C313.6080700@tmr.com>
On 03/06/2009 19:38, Bill Davidsen wrote:
> John Robinson wrote:
>> On 02/06/2009 20:47, Keld Jørn Simonsen wrote:
[...]
>>> In your case, using 3 disks, raid5 should give about 210 % of the
>>> nominal
>>> single disk speed for big file reads, and maybe 180 % for big file
>>> writes. raid10,f2 should give about 290 % for big file reads and 140%
>>> for big file writes. Random reads should be about the same for raid5 and
>>> raid10,f2 - raid10,f2 maybe 15 % faster, while random writes should be
>>> mediocre for raid5, and good for raid10,f2.
>>
>> I'd be interested in reading about where you got these figures from
>> and/or the rationale behind them; I'd have guessed differently...
>
> For small values of N, 10,f2 generally comes quite close to N*Sr, where
> N is # of disks and Sr is single drive read speed. This is assuming
fairly large reads and adequate stripe buffer space. Obviously for
> larger values of N that saturates something else in the system, like the
> bus, before N gets too large. I don't generally see more than (N/2-1)*Sw
> for write, at least for large writes. I came up with those numbers based
> on testing 3-4-5 drive arrays which do large file transfers. If you want
> to read more than large file speed into them, feel free.
Actually it was the RAID-5 figures I'd have guessed differently. I'd
expect ~290% (rather than 210%) for big 3-disc RAID-5 reads, and ~140%
(rather than "mediocre") for random small writes. But of course I
haven't tested.
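
For what it's worth, the rules of thumb quoted above (reads scaling roughly with N*Sr, large writes closer to (N/2-1)*Sw for raid10,f2) can be sketched numerically. This is only an illustration of the figures discussed in this thread, not a measurement; real results depend on hardware, stripe cache settings, and workload:

```python
# Rule-of-thumb throughput multipliers for raid10,f2, expressed
# relative to single-drive speed (Sr = Sw = 1.0 here).
# These are the rough scaling figures discussed in this thread,
# not benchmark results.

def raid10_f2_read(n):
    """Large sequential reads: roughly N * Sr for small N."""
    return n * 1.0

def raid10_f2_write(n):
    """Large writes: observed closer to (N/2 - 1) * Sw."""
    return (n / 2 - 1) * 1.0

for n in (3, 4, 5):
    print(f"N={n}: read ~{raid10_f2_read(n):.0f}x, "
          f"write ~{raid10_f2_write(n):.1f}x single-drive speed")
```

Of course the only way to settle the RAID-5 question would be to benchmark it.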
Cheers,
John.
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html