* Raid over 48 disks ... for real now
@ 2008-01-17 16:19 Norman Elton
From: Norman Elton @ 2008-01-17 16:19 UTC (permalink / raw)
To: linux-raid
I posed the question a few weeks ago about how to best accommodate
software RAID over an array of 48 disks (a Sun X4500 server, a.k.a.
Thumper). I appreciate all the suggestions.
Well, the hardware is here. It is indeed six Marvell 88SX6081 SATA
controllers, each with eight 1TB drives, for a total raw storage of
48TB. I must admit, it's quite impressive. And loud. More information
about the hardware is available online...
http://www.sun.com/servers/x64/x4500/arch-wp.pdf
It came loaded with Solaris, configured with ZFS. Things seemed to
work fine. I did not do any benchmarks, but I can revert to that
configuration if necessary.
Now I've loaded RHEL onto the box. As a first shot, I've created one
RAID-5 array (+ 1 spare) on each of the controllers, then used LVM to
create a VolGroup across the arrays.
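For reference, that layout can be sketched roughly as follows with mdadm
and LVM. The device names are hypothetical - adjust them to however the
Marvell ports enumerate on your box:

```shell
# One RAID-5 array per 8-port controller: 7 active drives + 1 hot spare.
# /dev/sd[a-h] stand in for the first controller's disks; repeat the
# mdadm line for each of the six controllers (md0..md5).
mdadm --create /dev/md0 --level=5 --raid-devices=7 --spare-devices=1 \
      /dev/sd[a-h]

# Stitch the six arrays into one volume group with LVM.
pvcreate /dev/md0 /dev/md1 /dev/md2 /dev/md3 /dev/md4 /dev/md5
vgcreate VolGroup /dev/md0 /dev/md1 /dev/md2 /dev/md3 /dev/md4 /dev/md5

# Carve out a test logical volume.
lvcreate -L 1T -n LogVol00 VolGroup
```

Note that with 7-drive RAID-5 plus a spare per controller, usable space
is 6 TB per array, i.e. 36 TB of the 48 TB raw.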
So now I'm trying to figure out what to do with this space. So far,
I've tested mke2fs on a 1TB and a 5TB LogVol.
I wish RHEL would support XFS/ZFS, but for now, I'm stuck with ext3.
Am I better off sticking with relatively small partitions (2-5 TB), or
should I crank up the block size and go for one big partition?
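One caveat on "cranking up the block size": ext3's block size cannot
exceed the page size (4 KiB on x86), which caps a single filesystem at
16 TiB in theory and closer to 8 TiB in practice on kernels of that
era. A hedged sketch of an mke2fs invocation for a large volume (the LV
name and chunk size are assumptions, and -E stride needs a reasonably
recent e2fsprogs):

```shell
# Hypothetical: ext3 on a 5TB logical volume with 4 KiB blocks,
# 1% reserved blocks instead of the default 5% (5% of 5TB is a lot),
# and a stride hint matching an assumed 64 KiB RAID chunk (64/4 = 16).
mke2fs -j -b 4096 -m 1 -E stride=16 /dev/VolGroup/LogVol00
```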
Thoughts?
Norman Elton
^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: Raid over 48 disks ... for real now
From: Norman Elton @ 2008-01-17 16:32 UTC (permalink / raw)
To: linux-raid
>> Hi, sounds like a monster server. I am interested in how you will make
>> the space useful to remote machines - iSCSI? This is what I am
>> researching currently.
Yes, it's a honker of a box. It will be collecting data from various
"collector" servers. The plan right now is to collect the data into
binary files using a daemon (already running on a smaller box), then
make the last 30/60/90/?? days available in a database that is
populated from these files. If we need to gather older data, the
individual files must be consulted locally.
So, in production, I would probably set up the database partition on
its own set of 6 disks, then dedicate the rest to handling/archiving
the raw binary files. These files are small (a few MB each), as they
get rotated every five minutes.
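For what it's worth, a five-minute rotation makes the file counts easy
to estimate (rough back-of-the-envelope, per collector):

```shell
# Files produced per day per collector at one file every 5 minutes.
minutes_per_day=$((24 * 60))
files_per_day=$((minutes_per_day / 5))
echo "$files_per_day files/day"          # 288

# Over a 90-day database window.
echo "$((files_per_day * 90)) files"     # 25920
```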
Hope this makes sense, and provides a little background info on what
we're trying to do.
Norman
* Re: Raid over 48 disks ... for real now
From: Janek Kozicki @ 2008-01-17 19:50 UTC (permalink / raw)
To: Norman Elton, linux-raid
Norman Elton said: (by the date of Thu, 17 Jan 2008 11:19:35 -0500)
> I wish RHEL would support XFS/ZFS, but for now, I'm stuck with ext3.
There is ext4 (or ext4dev) - it's ext3 modified to support filesystems
up to 1024 PB (1,048,576 TB). You could check whether it's feasible.
Personally I'd always stick with ext2/ext3/ext4, since it's the most
widely used and thus has the best recovery tools.
--
Janek Kozicki |
* Re: Raid over 48 disks ... for real now
From: michael @ 2008-01-18 16:08 UTC (permalink / raw)
To: linux-raid
Quoting Norman Elton <normelton@gmail.com>:
> I posed the question a few weeks ago about how to best accommodate
> software RAID over an array of 48 disks (a Sun X4500 server, a.k.a.
> Thumper). I appreciate all the suggestions.
>
> Well, the hardware is here. It is indeed six Marvell 88SX6081 SATA
> controllers, each with eight 1TB drives, for a total raw storage of
> 48TB. I must admit, it's quite impressive. And loud. More information
> about the hardware is available online...
>
> http://www.sun.com/servers/x64/x4500/arch-wp.pdf
>
> It came loaded with Solaris, configured with ZFS. Things seemed to
> work fine. I did not do any benchmarks, but I can revert to that
> configuration if necessary.
>
> Now I've loaded RHEL onto the box. For a first-shot, I've created one
> RAID-5 array (+ 1 spare) on each of the controllers, then used LVM to
> create a VolGroup across the arrays.
>
> So now I'm trying to figure out what to do with this space. So far,
> I've tested mke2fs on a 1TB and a 5TB LogVol.
>
> I wish RHEL would support XFS/ZFS, but for now, I'm stuck with ext3.
> Am I better off sticking with relatively small partitions (2-5 TB), or
> should I crank up the block size and go for one big partition?
Impressive system. I'm curious what the drive layout looks like and
how that many disks attach to the server.
Sounds like you have some time to play around before shoving it into
production.
I wonder how long it would take to run an fsck on one large filesystem?
Cheers,
Mike
* Re: Raid over 48 disks ... for real now
From: Greg Cormier @ 2008-01-18 17:02 UTC (permalink / raw)
To: michael; +Cc: linux-raid
> I wonder how long it would take to run an fsck on one large filesystem?
:)
I would imagine you'd have time to order a new system, build it, and
restore the backups before the fsck was done!
* Re: Raid over 48 disks ... for real now
From: Norman Elton @ 2008-01-18 18:44 UTC (permalink / raw)
To: linux-raid
It is quite a box. There's a picture of the box with the cover removed
on Sun's website:
http://www.sun.com/images/k3/k3_sunfirex4500_4.jpg
From the X4500 homepage, there's a gallery of additional pictures. The
drives drop in from the top. Massive fans channel air in the small
gaps between the drives. It doesn't look like there's much room
between the disks, but a lot of cold air gets sucked in the front, and
a lot of hot air comes out the back. So it must be doing its job :).
I have not tried an fsck on it yet. I'll probably set up a lot of 2TB
partitions rather than a single large one, then write the software to
handle storing data across many partitions.
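Carving the space into 2TB pieces could look roughly like this with LVM
(names are hypothetical, and it assumes a VolGroup spanning the six
arrays, i.e. about 36 TB usable):

```shell
# Create 2TB logical volumes until the volume group runs out of space,
# then put ext3 on each and mount it under /data/NN.
for i in $(seq -w 0 16); do
    lvcreate -L 2T -n data$i VolGroup || break
    mke2fs -j -b 4096 /dev/VolGroup/data$i
    mkdir -p /data/$i
    mount /dev/VolGroup/data$i /data/$i
done
```

A side benefit of the many-small-partitions approach is that an fsck
only ever has to walk 2TB at a time, and a corrupted filesystem takes
out one slice of the archive rather than all of it.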
Norman
On 1/18/08, michael@estone.ca <michael@estone.ca> wrote:
> Impressive system. I'm curious what the drive layout looks like and
> how that many disks attach to the server.
> Sounds like you have some time to play around before shoving it into
> production.
> I wonder how long it would take to run an fsck on one large filesystem?
>
> Cheers,
> Mike
* Re: Raid over 48 disks ... for real now
From: Jon Lewis @ 2008-01-19 5:16 UTC (permalink / raw)
To: Janek Kozicki; +Cc: linux-raid
On Thu, 17 Jan 2008, Janek Kozicki wrote:
>> I wish RHEL would support XFS/ZFS, but for now, I'm stuck with ext3.
>
> There is ext4 (or ext4dev) - it's ext3 modified to support filesystems
> up to 1024 PB (1,048,576 TB). You could check whether it's feasible.
> Personally I'd always stick with ext2/ext3/ext4, since it's the most
> widely used and thus has the best recovery tools.
Something else to keep in mind: XFS repair tools require large amounts
of memory. If you were to create one or a few really huge filesystems
on this array, you might end up with filesystems that can't be repaired
because you don't have (or can't even get) a machine with enough RAM
for the job - not to mention the amount of time it would take.
----------------------------------------------------------------------
Jon Lewis | I route
Senior Network Engineer | therefore you are
Atlantic Net |
_________ http://www.lewis.org/~jlewis/pgp for PGP public key_________
Thread overview: 7+ messages
2008-01-17 16:19 Raid over 48 disks ... for real now Norman Elton
2008-01-17 16:32 ` Norman Elton
2008-01-17 19:50 ` Janek Kozicki
2008-01-19 5:16 ` Jon Lewis
2008-01-18 16:08 ` michael
2008-01-18 17:02 ` Greg Cormier
2008-01-18 18:44 ` Norman Elton