From: Bill Davidsen <davidsen@tmr.com>
To: Mattias Wadenstein <maswan@acc.umu.se>
Cc: Neil Brown <neilb@suse.de>, Norman Elton <normelton@gmail.com>,
linux-raid@vger.kernel.org
Subject: Re: Raid over 48 disks
Date: Wed, 19 Dec 2007 10:26:31 -0500 [thread overview]
Message-ID: <47693827.6070604@tmr.com> (raw)
In-Reply-To: <Pine.GSO.4.64.0712190918430.1549@montezuma.acc.umu.se>
Mattias Wadenstein wrote:
> On Wed, 19 Dec 2007, Neil Brown wrote:
>
>> On Tuesday December 18, normelton@gmail.com wrote:
>>> We're investigating the possibility of running Linux (RHEL) on top of
>>> Sun's X4500 Thumper box:
>>>
>>> http://www.sun.com/servers/x64/x4500/
>>>
>>> Basically, it's a server with 48 SATA hard drives. No hardware RAID.
>>> It's designed for Sun's ZFS filesystem.
>>>
>>> So... we're curious how Linux will handle such a beast. Has anyone run
>>> MD software RAID over so many disks? Then piled LVM/ext3 on top of
>>> that? Any suggestions?
>
> There are those who have run Linux MD RAID on Thumpers before. I
> vaguely recall some driver issues (unrelated to MD) that made it less
> suitable than Solaris, but those might be fixed in recent kernels.
>
>> Alternately, 8 6-drive RAID5s or 6 8-drive RAID6s, and use RAID0 to
>> combine them together. This would give you adequate reliability and
>> performance and still a large amount of storage space.
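Just to sketch what that could look like with mdadm (device names are
made up, and this is untested on a Thumper):

  # one of six 8-disk RAID6 sets
  mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sd[b-i]
  # ...build md1 through md5 the same way on the remaining disks,
  # then stripe the six sets together:
  mdadm --create /dev/md6 --level=0 --raid-devices=6 /dev/md[0-5]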
>
> My personal suggestion would be 5 9-disk raid6s, one raid1 root mirror
> and one hot spare. Then raid0, LVM, or separate filesystems on those 5
> raid sets for data, depending on your needs.
Other than thinking raid-10 is better than raid-1 for performance, I like it.
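A rough sketch of that layout, again with made-up device names and
untested here:

  # raid1 (or raid10) root mirror on the first pair
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
  # five 9-disk raid6 sets across the remaining 45 data disks
  mdadm --create /dev/md1 --level=6 --raid-devices=9 /dev/sd[c-k]
  # ...md2 through md5 likewise...
  # add the 48th disk to one array as a hot spare; a shared spare-group
  # in mdadm.conf lets mdadm --monitor hand it to whichever array
  # actually loses a disk
  mdadm /dev/md1 --add /dev/sdav
  # then, e.g., LVM over the five data sets:
  pvcreate /dev/md1 /dev/md2 /dev/md3 /dev/md4 /dev/md5
  vgcreate data /dev/md1 /dev/md2 /dev/md3 /dev/md4 /dev/md5
  lvcreate -l 100%FREE -n vol0 data
  mkfs.ext3 /dev/data/vol0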
>
> You get almost as much data space as with the 6 8-disk raid6s, and
> have a separate pair of disks for all the small updates (logging,
> metadata, etc), so this makes a lot of sense if most of the data is
> bulk file access.
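Agreed. To put numbers on it: 6 8-disk raid6 sets give 6 x (8 - 2) = 36
disks worth of data space, while 5 9-disk sets give 5 x (9 - 2) = 35, so
the dedicated root pair and hot spare only cost about one disk of
capacity overall.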
--
Bill Davidsen <davidsen@tmr.com>
"Woe unto the statesman who makes war without a reason that will still
be valid when the war is over..." Otto von Bismarck
Thread overview: 28+ messages
2007-12-18 17:29 Raid over 48 disks Norman Elton
2007-12-18 18:27 ` Justin Piszcz
2007-12-18 19:34 ` Thiemo Nagel
2007-12-18 19:52 ` Norman Elton
2007-12-18 20:19 ` Thiemo Nagel
2007-12-18 20:25 ` Justin Piszcz
2007-12-18 21:13 ` Thiemo Nagel
2007-12-18 21:20 ` Jon Nelson
2007-12-18 21:40 ` Thiemo Nagel
2007-12-18 21:43 ` Justin Piszcz
2007-12-18 21:21 ` Justin Piszcz
2007-12-19 15:21 ` Bill Davidsen
2007-12-19 15:02 ` Justin Piszcz
2007-12-20 16:48 ` Thiemo Nagel
2007-12-21 1:53 ` Bill Davidsen
2007-12-18 18:45 ` Robin Hill
2007-12-18 20:28 ` Neil Brown
2007-12-19 8:27 ` Mattias Wadenstein
2007-12-19 15:26 ` Bill Davidsen [this message]
2007-12-21 11:03 ` Leif Nixon
2007-12-25 17:31 ` pg_mh, Peter Grandi
2007-12-25 21:08 ` Bill Davidsen
2007-12-18 20:36 ` Brendan Conoboy
2007-12-18 23:50 ` Guy Watkins
2007-12-18 23:58 ` Justin Piszcz
2007-12-18 23:59 ` Justin Piszcz
2007-12-19 12:08 ` Russell Smith
2007-12-21 10:57 ` Leif Nixon