From: Stan Hoeppner <stan@hardwarefreak.com>
To: Adam Goryachev <mailinglists@websitemanagers.com.au>
Cc: linux-raid <linux-raid@vger.kernel.org>
Subject: Re: RAID10 Performance
Date: Fri, 27 Jul 2012 13:29:54 -0500
Message-ID: <5012DE22.10108@hardwarefreak.com>
In-Reply-To: <50129176.3050604@websitemanagers.com.au>
On 7/27/2012 8:02 AM, Adam Goryachev wrote:
> On 27/07/12 17:07, Stan Hoeppner wrote:
>> 1. Recreate the arrays with 6 or 8 drives each, use a 64KB chunk
>
> Would you suggest these 6 - 8 drives in RAID10 or some other RAID
> level? (IMHO, the best performance with reasonable protection is RAID10)
You're running many VMs. That implies a mixed read/write workload. You
can't beat RAID10 for this workload.
> How do you get that many drives into a decent "server"? I'm using a
> 4RU rackmount server case, but it only has capacity for 5 x hot swap
> 3.5" drives (plus one internal drive).
Given that's a 4U chassis, I'd bet you mean 5 x 5.25" bays with 3.5" hot
swap carriers.
In that case, get 8 of these:
http://www.newegg.com/Product/Product.aspx?Item=N82E16822148710
and replace two carriers with two of these:
http://www.newegg.com/Product/Product.aspx?Item=N82E16817994142
Providing the make/model of the case would be helpful.
>> 2. Replace the 7.2k WD drives with 10k SATA, or 15k SAS drives
>
> Which drives would you suggest? The drives I have are already over
> $350 each (AUD)...
See above. Or you could get 6 of these:
http://www.newegg.com/Product/Product.aspx?Item=N82E16822236243
>> 3. Replace the drives with SSDs
>
> Yes, I'd love to do this.
>
>> Any of these 3 things will decrease latency per request.
>
> I have already advised adding an additional pair of drives, and
> converting to SSDs.
>
> Would adding another 2 identical drives and configuring in RAID10
> really improve performance by double?
No, because you'd then have 5 drives; you have 3 now. With 6 drives
total, yes, performance should roughly double. And BTW, there is no such
thing as a RAID10 with an odd number of drives. That layout is actually
closer to RAID1E; it just happens that the md driver providing it is the
RAID10 driver. You want an even number of drives. With an even drive
count, md's default near-2 layout gives you a standard RAID10; avoid the
"far" and "offset" layouts for this workload.
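A minimal sketch of the 6-drive creation, assuming hypothetical spare
device names /dev/sdb through /dev/sdg (adjust to your hardware):

```shell
# Sketch only: device names are hypothetical, not your live disks.
# With an even drive count, near=2 ("n2") is md's standard RAID10 layout.
CREATE_CMD="mdadm --create /dev/md0 --level=10 --layout=n2 --raid-devices=6 /dev/sd[b-g]"
echo "$CREATE_CMD"
```

The echo lets you sanity-check the command before running it for real.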
> Would it be more than double (because much less seeking should give
> better throughput, similar I expect to performance of two concurrent
> reads is less than half of a single read)?
It's not throughput you're after, but latency reduction. More spindles
in the stripe, or faster spindles, is what you want for a high
concurrency workload. Reducing latency increases the responsiveness of
the client machines and, to some degree, throughput, because the array
can process more IO transactions per unit time.
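Back-of-envelope arithmetic, using assumed per-spindle random-IOPS
figures (ballpark numbers, not measurements of your drives):

```shell
# Assumed ~75 random IOPS for a 7.2k SATA spindle, ~140 for a 10k drive.
SPINDLE_IOPS_72K=75
SPINDLE_IOPS_10K=140
# RAID10 random reads can hit every spindle; each write lands on a
# mirror pair, so write capacity is roughly half of read capacity.
READ_IOPS_6x72K=$((6 * SPINDLE_IOPS_72K))       # 450
READ_IOPS_6x10K=$((6 * SPINDLE_IOPS_10K))       # 840
WRITE_IOPS_6x72K=$((6 * SPINDLE_IOPS_72K / 2))  # 225
echo "$READ_IOPS_6x72K $READ_IOPS_6x10K $WRITE_IOPS_6x72K"
```

More or faster spindles raise the transaction ceiling, which is what a
mixed VM workload actually feels as reduced latency.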
> If using SSDs, what would you suggest to get 1TB usable space?
> Would 4 x Intel 480GB SSD 520 Series (see link) in RAID10 be the best
> solution? Would it make more sense to use 4 in RAID6 so that expansion
> is easier in future (ie, add a 5th drive to add 480G usable storage)?
I actually wouldn't recommend SSDs for this server. The technology is
still young, hasn't been proven yet for long term server use, and the
cost/GB is still very high, more than double that of rust.
I'd recommend the 10k RPM WD VelociRaptor 1TB drives. They're sold as 3.5"
drives but are actually 2.5" drives in a custom mounting frame, so you
can use them in a chassis with either size of hot swap cage. They're
also very inexpensive given the performance plus capacity.
I'd also recommend using XFS if you aren't already. And do NOT keep the
512KB chunk that mdadm defaults to with metadata 1.2; it is horrible for
random write workloads. Use a 32KB chunk on your md/RAID10 with these
fast drives, and align XFS at mkfs time with the -d su=/sw= options.
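A sketch of the alignment math, assuming a 6-drive RAID10 with a 32KB
chunk; /dev/md0 is a placeholder device name:

```shell
# XFS stripe alignment for RAID10:
# su = md chunk size; sw = number of data-bearing drives (half the
# RAID10 total, since the other half are mirrors).
CHUNK_KB=32
NDRIVES=6
SW=$((NDRIVES / 2))
echo "mkfs.xfs -d su=${CHUNK_KB}k,sw=${SW} /dev/md0"
```

With su/sw set, XFS aligns allocation to full stripes instead of
straddling chunk boundaries on random writes.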
> PS, thanks for the reminder that RAID10 grow is not yet supported, I
> may need to do some creative raid management to "grow" the array,
> extended downtime is possible to get that done when needed...
You can grow a RAID10-based array: join two or more 4-drive RAID10s in a
--linear array. Add more 4-drive RAID10s in the future by growing the
linear array, then grow the filesystem over the new space.
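The sequence above, sketched with hypothetical device names and mount
point (the run helper only prints each command so the sketch is safe to
read through; replace its body with "$@" to actually execute):

```shell
set -eu
# Hypothetical device names throughout; adjust to your hardware.
run() { echo "+ $*"; }  # print-only stand-in for executing the command
run mdadm --create /dev/md10 --level=10 --raid-devices=4 /dev/sd[b-e]
run mdadm --create /dev/md11 --level=10 --raid-devices=4 /dev/sd[f-i]
run mdadm --create /dev/md0 --level=linear --raid-devices=2 /dev/md10 /dev/md11
# Later: append another 4-drive RAID10, then grow the filesystem over it.
run mdadm --grow /dev/md0 --add /dev/md12
run xfs_growfs /srv/vmstore
```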
--
Stan