From: Chris Wedgwood <cw@f00f.org>
To: Justin Piszcz <jpiszcz@lucidpixels.com>
Cc: linux-raid@vger.kernel.org, xfs@oss.sgi.com
Subject: Re: Linux SW RAID: HW Raid Controller/JBOD vs. Multiple PCI-e Cards?
Date: Sat, 5 May 2007 13:54:56 -0700
Message-ID: <20070505205456.GA17112@tuatara.stupidest.org>
In-Reply-To: <Pine.LNX.4.64.0705051228300.12744@p34.internal.lan>
On Sat, May 05, 2007 at 12:33:49PM -0400, Justin Piszcz wrote:
> Also, when I run simultaneous dd's from all of the drives, I see
> 850-860MB/s; I am curious if there is some kind of limitation with
> software raid as to why I am not getting better than 500MB/s for
> sequential write speed?
What does "vmstat 1" output look like in both cases? My guess is that
for large writes it's NOT CPU bound but it can't hurt to check.
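Something like this (untested sketch; the device and mount point names
are placeholders for your setup) is what I have in mind -- run the write
test in one terminal and watch vmstat in the other:

    # terminal 1: sequential write onto the md array's filesystem
    dd if=/dev/zero of=/mnt/raid/ddtest bs=1M count=20000 oflag=direct

    # terminal 2: if us+sy sits near 100% you're CPU bound; if wa (iowait)
    # dominates, the disks or the bus are the limit
    vmstat 1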
> With 7 disks, I got about the same speed; adding 3 more for a total
> of 10 did not seem to help in regards to write. However, read
> improved to 622MB/s from about 420-430MB/s.
RAID is quirky.
It's worth fiddling with the stripe size as that can make a big
difference to performance --- it's far from clear why some values work
well on some setups while other setups want very different values.
It would be good to know whether anyone has ever studied stripe size
along with controller interleave/layout issues closely enough to
understand why certain values are good, others are very poor, and why
it varies so much from one setup to another.
Also, dd performance varies between the start of a disk and the end.
Typically you get better performance at the start of the disk, so dd
might not be a very good benchmark here.
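Easy enough to see on a single drive (reads only; the device name is a
placeholder):

    # read 1GB from the start of the disk, then 1GB from near the end;
    # blockdev --getsz reports 512-byte sectors, /2048 converts to MiB
    dd if=/dev/sdb of=/dev/null bs=1M count=1024 iflag=direct
    dd if=/dev/sdb of=/dev/null bs=1M count=1024 iflag=direct \
        skip=$(( $(blockdev --getsz /dev/sdb) / 2048 - 1024 ))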
> However, if I want to upgrade to more than 12 disks, I am out of
> PCI-e slots, so I was wondering, does anyone on this list run a 16
> port Areca or 3ware card and use it for JBOD? What kind of
> performance do you see when using mdadm with such a card? Or if
> anyone uses mdadm with less than a 16 port card, I'd like to hear
> what kind of experiences you have seen with that type of
> configuration.
I've used some 2, 4 and 8 port 3ware cards. As JBODs they worked
fine; as RAID cards I had no end of problems. I'm happy to test
larger cards if someone wants to donate them :-)
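If you do end up with a bigger card running as JBOD, the quick sanity
check on aggregate bandwidth is the same parallel-dd trick as above
(placeholder device names again):

    # read from every exported disk at once; if the summed throughput is
    # well below N x single-disk speed, the controller or PCIe link is
    # the bottleneck rather than md
    for d in /dev/sd[c-j]; do
        dd if=$d of=/dev/null bs=1M count=4096 iflag=direct &
    done
    wait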
Thread overview: 11+ messages
2007-05-05 16:33 Linux SW RAID: HW Raid Controller/JBOD vs. Multiple PCI-e Cards? Justin Piszcz
2007-05-05 16:44 ` Bill Davidsen
2007-05-05 17:37 ` Patrik Jonsson
2007-05-05 20:54 ` Chris Wedgwood [this message]
2007-05-05 21:37 ` Speed variation depending on disk position (was: Linux SW RAID: HW Raid Controller/JBOD vs. Multiple PCI-e Cards?) Peter Rabbitson
2007-05-06 5:02 ` Speed variation depending on disk position Benjamin Davenport
2007-05-06 15:29 ` Speed variation depending on disk position (was: Linux SW RAID: HW Raid Controller/JBOD vs. Multiple PCI-e Cards?) Mark Hahn
2007-05-06 19:39 ` Speed variation depending on disk position Richard Scobie
2007-05-08 13:34 ` Bill Davidsen
2007-05-05 21:18 ` Linux SW RAID: HW Raid Controller/JBOD vs. Multiple PCI-e Cards? Emmanuel Florac
2007-05-05 21:32 ` Justin Piszcz