From: Max Waterman <davidmaxwaterman+gmane@fastmail.co.uk>
To: linux-raid@vger.kernel.org
Subject: Re: md faster than h/w?
Date: Wed, 18 Jan 2006 12:43:15 +0800
Message-ID: <dqkh18$mf2$2@sea.gmane.org>
In-Reply-To: <20060117170901.84523.qmail@web54604.mail.yahoo.com>
Andargor wrote:
>
> --- Max Waterman
> <davidmaxwaterman+gmane@fastmail.co.uk> wrote:
>
>> Andargor wrote:
>>> I haven't found a benchmark that is 100%
>>> reliable/comparable. Of course, it all depends how the
>>> drive is used in production, which may have little
>>> correlation with the benchmarks...
>> Indeed.
>>
>> Do you think that if it is configured for the best possible
>> read performance, then that would be its worst possible
>> write performance?
>>
>> I was hoping that having it configured for good read perf.
>> would mean it was pretty good for write too....
>>
>> Max.
>>
>
> I don't have nearly the expertise some people here
> show, but intuitively I don't think that's true. If
> anything, it would be the opposite, unless write
> caching was as good as read caching (both h/w and
> kernel).
OK. I wonder if it's possible to have the best possible read
performance and the worst possible write performance at the same time?
I'm noticing these messages:
"sda: asking for cache data failed
sda: assuming drive cache: write through"
in the dmesg output. We've set the RAID drive to write-back caching for
better bandwidth, but if sd is assuming write-through, I wonder what
impact that will have on write performance? (I've already asked that in
a separate message.)
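
(For reference, this is roughly how I've been trying to check the cache
settings - I'm not sure these are the right tools for a drive sitting
behind a RAID controller, so treat the commands as a sketch:

  # query the Write Cache Enable (WCE) bit of the SCSI caching mode page
  sdparm --get=WCE /dev/sda

  # or, for a plain ATA drive, query / enable the on-drive write cache
  hdparm -W /dev/sda       # query the current setting
  hdparm -W1 /dev/sda      # enable (volatile cache - risky without a BBU/UPS)

As far as I can tell, the "asking for cache data failed" line means the
MODE SENSE for the caching page is failing, so sd just falls back to
assuming write-through regardless of what the controller is doing.)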
> Also, the number of disks you have to write
> to or read from depending on RAID level has an impact.
I'm assuming more is better? We're trying to get an extra disk to make
it up to 6.
What RAID level should we use for the best write bandwidth?
I'm assuming RAID5 isn't the best... doesn't it have to touch every disk
for a write, i.e. no benefit over a single disk?
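If I work through the rough arithmetic (and I may well have this wrong),
say with 5 disks and a 64k chunk:

  full-stripe write : 4 chunks of data + 1 chunk of parity written
                      -> sequential writes could approach 4x one disk
  partial write     : read old data + old parity, write new data + new
                      parity -> 4 I/Os for one small write (the classic
                      read-modify-write penalty)

so maybe it's only small random writes where RAID5 drops back towards
(or below) single-disk speed? Someone please correct me if that's not
how md handles it.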
> And as Mark Hahn has indicated, the actual location on
> disk you are reading/writing has an impact as well.
> Difficult to evaluate objectively.
Yes, but I don't see that I have much control over that in the end
system... or do I? I suppose I could partition for performance, but that
sounds messy.
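
(One thing I might try, if I've understood the point about disk location:
read a chunk from the start of the disk and one from near the end and
compare the rates - something like

  # outer tracks (start of the disk)
  dd if=/dev/sda of=/dev/null bs=1M count=1024
  # further in - the skip value here is just an example, it depends on
  # the size of the drive
  dd if=/dev/sda of=/dev/null bs=1M count=1024 skip=200000

That should at least show how much the raw transfer rate falls off
across the platters.)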
Max.