From: Stan Hoeppner <stan@hardwarefreak.com>
To: Brian Candler <B.Candler@pobox.com>
Cc: Christoph Hellwig <hch@infradead.org>, xfs@oss.sgi.com
Subject: Re: Performance problem - reads slower than writes
Date: Sat, 04 Feb 2012 06:49:23 -0600 [thread overview]
Message-ID: <4F2D2953.2020906@hardwarefreak.com> (raw)
In-Reply-To: <20120204112436.GA3167@nsrc.org>
On 2/4/2012 5:24 AM, Brian Candler wrote:
> On Sat, Feb 04, 2012 at 03:59:08AM -0600, Stan Hoeppner wrote:
>> Will you be using mdraid or hardware RAID across those 24 spindles?
>
> Gluster is the front-runner at the moment. Each file sits on a single
> spindle, and there is a separate filesystem per spindle, so I think the
> parallel processing will work much better this way. This does mean double
> the disks to get data replication though.
Apparently you've read of a different GlusterFS. The one I know of is
for aggregating multiple storage hosts into a cloud storage resource.
It is not designed to replace striping or concatenation of disks within
a single host.
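For the record, the aggregation model I mean is built along these lines — bricks on separate hosts combined into one replicated volume (hostnames and brick paths here are hypothetical):

```shell
# Two-host replicated volume; each brick is a directory on a different server.
# Run on one of the peers after 'gluster peer probe' has joined them.
gluster volume create testvol replica 2 \
    server1:/export/brick1 server2:/export/brick1
gluster volume start testvol
```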
Even if what you describe can be done with Gluster, the performance will
likely be significantly lower than that of a properly set up mdraid or
hardware RAID array. If it can be done, I'd test it head-to-head against RAID.
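If it does come to a head-to-head test, a minimal mdraid RAID10 setup for the 24 spindles might look like this (device names and mount point are hypothetical, and note mdadm --create destroys existing data on those disks):

```shell
# Build a 24-drive RAID10 array from /dev/sdb..sdy (hypothetical names)
mdadm --create /dev/md0 --level=10 --raid-devices=24 /dev/sd[b-y]
# XFS on top; inode64 lets inodes (and new files) spread across the whole fs
mkfs.xfs /dev/md0
mount -o inode64 /dev/md0 /data
```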
> I did some testing of RAID6 mdraid (12 disks with a 1MB stripe size) and
> it sucked. However I need to re-test it now that I know about inode64.
> We do have a requirement for archival storage and that might use RAID6.
I've never been a fan of parity RAID, let alone double-parity RAID.
SATA drives are so cheap (or were, until the flooding in Thailand) that
it's really hard to justify RAID6 over RAID10 or a layered stripe over
mirrors, given RAID10's many advantages and negligible disadvantages.
The RAID6 dead-drive rebuild time, and the performance degradation
during that rebuild on a production system with real users, are
justification enough to go RAID10, where the rebuild takes many hours,
if not days, less and degrades performance only mildly.
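To put rough numbers on that rebuild gap: a RAID10 rebuild is a straight copy from the surviving mirror, while a RAID6 rebuild must read every remaining member and recompute parity, usually while competing with user I/O. A back-of-envelope sketch, assuming a 2TB drive, 100 MB/s sustained sequential throughput, and contention cutting the RAID6 rebuild to a quarter of that rate (all assumed figures, not measurements):

```shell
awk 'BEGIN {
  disk_mb = 2.0 * 1000000   # 2 TB drive, in MB (assumption)
  rate    = 100.0           # sustained sequential MB/s (assumption)
  # RAID10: copy one mirror at full sequential speed
  printf "RAID10 best case: %.1f h\n", disk_mb / rate / 3600
  # RAID6: user-I/O contention assumed to cut the rate to 25%
  printf "RAID6 degraded:   %.1f h\n", disk_mb / (rate * 0.25) / 3600
}'
```

Under these assumptions the mirror copy finishes in under six hours while the degraded RAID6 rebuild runs close to a full day; real RAID6 rebuilds on busy arrays are often worse still.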
--
Stan
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs