* How to configure 36 disks ?
@ 2009-03-23 13:59 Raz
2009-03-23 15:35 ` Bill Davidsen
0 siblings, 1 reply; 11+ messages in thread
From: Raz @ 2009-03-23 13:59 UTC (permalink / raw)
To: linux-fsdevel, Linux RAID Mailing List, linux-xfs, linux-aio,
linux-ide@vger.kernel.org
Hello,
I need to configure three DAS units, each with 12 disks.
All three DASes are connected to a single machine.
I have the following requirements from the storage (in this order
of importance):
1. redundancy.
two disks failing in one raid5 breaks the entire array; when you
have 30 TB of storage, that is a disaster.
2. performance.
My code eliminates the Linux raid5/6 write penalty. I managed to do
this by manipulating xfs and patching Linux raid5 a bit.
3. modularity (a "grow", and it would be nice to have a "shrink").
The file system and volume must be able to grow; shrinking is
possible by unifying multiple file systems under unionfs or aufs.
4. utilize storage size.
I assume each disk is 1 TB.
Solution #1
raid0
DAS1: raid5: D,D,D,D,D,D |
raid5: D,D,D,D,D,D |
|
DAS2: raid5: D,D,D,D,D,D | xfs
raid5: D,D,D,D,D,D |
|
DAS3: raid5: D,D,D,D,D,D |
raid5: D,D,D,D,D,D |
1. redundancy. no. if a single raid fails, all 30 TB is lost.
2. performance. good.
3. modularity. no. raid0 does not grow.
4. size. 30 TB.
Solution #2
raid0
DAS1: raid6: D,D,D,D,D,D |
raid6: D,D,D,D,D,D |
|
DAS2: raid6: D,D,D,D,D,D | xfs.
raid6: D,D,D,D,D,D |
|
DAS3: raid6: D,D,D,D,D,D |
raid6: D,D,D,D,D,D |
1. redundancy. fair. it is less likely that three disks will fail in a single raid.
2. performance. good.
3. modularity. no. raid0 does not grow.
4. size. 24 TB.
Solution #3
unionfs/aufs
DAS1: raid5: D,D,D,D,D,D xfs |
raid5: D,D,D,D,D,D xfs |
|
DAS2: raid5: D,D,D,D,D,D xfs |
raid5: D,D,D,D,D,D xfs |
|
DAS3: raid5: D,D,D,D,D,D xfs |
raid5: D,D,D,D,D,D xfs |
1. redundancy. fair. if a single raid fails, only that raid fails.
2. performance. fair. unionfs is not mainline and does not support
write balancing; aufs is not mature enough.
3. modularity. yes. grows and shrinks.
4. size. 30 TB.
Solution #4
xfs over Linux LVM
DAS1: raid6: D,D,D,D,D,D |
raid6: D,D,D,D,D,D |
|
DAS2: raid6: D,D,D,D,D,D |
raid6: D,D,D,D,D,D |
|
DAS3: raid6: D,D,D,D,D,D |
raid6: D,D,D,D,D,D |
1. redundancy. fair. it is less likely that three disks will fail in a single raid.
2. performance. bad.
3. modularity. yes. grows.
4. size. 24 TB.
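The capacity numbers in the four solutions can be sanity-checked with a
little arithmetic. A minimal sketch, assuming 1 TB disks and six 6-disk
member arrays as in the diagrams above:

```shell
#!/bin/sh
# Usable capacity of the layouts above (1 TB disks assumed).
disks_per_array=6
arrays=6
# raid5 gives up one disk per array to parity; raid6 gives up two.
raid5_tb=$(( arrays * (disks_per_array - 1) ))
raid6_tb=$(( arrays * (disks_per_array - 2) ))
echo "raid5-based layouts (solutions 1 and 3): ${raid5_tb} TB usable"
echo "raid6-based layouts (solutions 2 and 4): ${raid6_tb} TB usable"
```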
Any other ideas ?
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
* Re: How to configure 36 disks ?
2009-03-23 13:59 How to configure 36 disks ? Raz
@ 2009-03-23 15:35 ` Bill Davidsen
2009-03-23 16:02 ` Jon Hardcastle
0 siblings, 1 reply; 11+ messages in thread
From: Bill Davidsen @ 2009-03-23 15:35 UTC (permalink / raw)
To: Raz
Cc: linux-fsdevel, Linux RAID Mailing List, linux-ide@vger.kernel.org,
linux-xfs, linux-aio
Raz wrote:
> Hello
> I need to configure 3xDAS'es, each with 12 disks.
> All three DAS'es are connected to a single machine.
> I have the following requirements (in this order of importance)
> from the storage:
>
> 1. redundancy.
> having two disks failing in one raid5 breaks the entire raid. when
> you have 30TB storage
> it is a disaster.
>
> 2. performance.
> My code eliminates Linux raid5/6 write penalty. I managed to do by
> manipulating xfs and patching linux raid5 a bit.
>
> 3. modularity ( a "grow" and it will be nice to have "shrink" )
> file system and volume must be able to grow. shrinking is possible
> by unifying multiple file systems
> under unionfs or aufs.
>
> 4. Utilize storage size.
>
> I assume each disk is 1TB.
>
>
___ snip ___
> Any other ideas ?
Yes, you have the whole solution rotated 90 degrees. Consider your
original solution #2 below... You have no redundancy if one whole DAS
box fails, which is certainly a possible failure mode. If you put the
RAID0 horizontally (two six-disk arrays in each DAS) and the RAID6
vertically, then if one DAS fails completely you still have a
functioning system; the failure odds for individual drives remain about
the same, though the rebuild time will be longer.
Solution #2
raid0
DAS1: raid6: D,D,D,D,D,D |
raid6: D,D,D,D,D,D |
|
DAS2: raid6: D,D,D,D,D,D | xfs.
raid6: D,D,D,D,D,D |
|
DAS3: raid6: D,D,D,D,D,D |
raid6: D,D,D,D,D,D |
In addition, you can expand this configuration by adding more DAS units.
This addresses several of your goals.
In practice, just to get faster rebuilds as the array gets larger, I
suspect you would find it worth making the horizontal arrays RAID5
instead of RAID0, to minimize the time back to full performance.
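That rotated layout can be sketched with mdadm roughly as below. This is
a hedged sketch, not a tested recipe: the /dev/sd* names are hypothetical
placeholders for the disks each DAS exposes, and the stripe level (RAID0
vs RAID5) is exactly the trade-off discussed above.

```shell
# Six 6-disk stripes, each confined to a single DAS (hypothetical device names):
mdadm --create /dev/md0 --level=0 --raid-devices=6 /dev/sd[a-f]   # DAS1, stripe 1
mdadm --create /dev/md1 --level=0 --raid-devices=6 /dev/sd[g-l]   # DAS1, stripe 2
# md2/md3 from DAS2's disks and md4/md5 from DAS3's, in the same pattern.
# RAID6 across the six stripes: any two stripes, i.e. one whole DAS, may fail.
mdadm --create /dev/md6 --level=6 --raid-devices=6 /dev/md[0-5]
mkfs.xfs /dev/md6
```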
--
bill davidsen <davidsen@tmr.com>
CTO TMR Associates, Inc
"You are disgraced professional losers. And by the way, give us our money back."
- Representative Earl Pomeroy, Democrat of North Dakota
on the A.I.G. executives who were paid bonuses after a federal bailout.
* Re: How to configure 36 disks ?
2009-03-23 15:35 ` Bill Davidsen
@ 2009-03-23 16:02 ` Jon Hardcastle
2009-03-23 16:22 ` Mark Lord
` (3 more replies)
0 siblings, 4 replies; 11+ messages in thread
From: Jon Hardcastle @ 2009-03-23 16:02 UTC (permalink / raw)
To: Raz, Bill Davidsen
Cc: linux-fsdevel, Linux RAID Mailing List, linux-ide@vger.kernel.org,
linux-xfs, linux-aio
I'd like to understand how you even go about attaching that many devices to a system. I am 'comparatively' new to this: I have a six-disk raid5 system, not enterprise gear, and I have already slammed into case/power/SATA-slot limits. What sort of hardware must one use to grow to a 36-disk array system?
-----------------------
N: Jon Hardcastle
E: Jon@eHardcastle.com
'..Be fearful when others are greedy, and be greedy when others are fearful.'
-----------------------
--- On Mon, 23/3/09, Bill Davidsen <davidsen@tmr.com> wrote:
> From: Bill Davidsen <davidsen@tmr.com>
> Subject: Re: How to configure 36 disks ?
> To: "Raz" <raziebe@gmail.com>
> Cc: linux-fsdevel@vger.kernel.org, "Linux RAID Mailing List" <linux-raid@vger.kernel.org>, linux-xfs@oss.sgi.com, linux-aio@kvack.org, "linux-ide@vger.kernel.org" <linux-ide@vger.kernel.org>
> Date: Monday, 23 March, 2009, 3:35 PM
> Raz wrote:
> > Hello
> > I need to configure 3xDAS'es, each with 12 disks.
> > All three DAS'es are connected to a single
> machine.
> > I have the following requirements (in this order of
> importance)
> > from the storage:
> >
> > 1. redundancy.
> > having two disks failing in one raid5 breaks the
> entire raid. when
> > you have 30TB storage
> > it is a disaster.
> >
> > 2. performance.
> > My code eliminates Linux raid5/6 write penalty. I
> managed to do by
> > manipulating xfs and patching linux raid5 a bit.
> >
> > 3. modularity ( a "grow" and it will be nice
> to have "shrink" )
> > file system and volume must be able to grow.
> shrinking is possible
> > by unifying multiple file systems
> > under unionfs or aufs.
> >
> > 4. Utilize storage size.
> >
> > I assume each disk is 1TB.
> >
> >
> ___ snip ___
>
> > Any other ideas ?
>
> Yes, you have the whole solution rotated 90 degrees.
> Consider your original solution #2 below... You have no
> redundancy if one whole DAS box fails, which is certainly a
> possible failure mode. If you put the RAID0 horizontally,
> two arrays size six in each DAS, then RAID6 vertically, if
> one DAS fails completely you still have a functioning
> system, and the failure results for individual drives
> remains about the same, while the rebuild time will be
> longer.
>
> Solution #2
> raid0
> DAS1: raid6: D,D,D,D,D,D |
> raid6: D,D,D,D,D,D |
> |
> DAS2: raid6: D,D,D,D,D,D | xfs.
> raid6: D,D,D,D,D,D |
> |
> DAS3: raid6: D,D,D,D,D,D |
> raid6: D,D,D,D,D,D |
>
>
> In addition, you can expand this configuration by adding
> more DAS units. This addresses several of your goals.
>
> In practice, just to get faster rebuild as the array gets
> larger, I suspect you would find it was worth making the
> horizontal arrays RAID5 instead of RAID0, just to minimize
> time to full performance.
>
> -- bill davidsen <davidsen@tmr.com>
> CTO TMR Associates, Inc
>
> "You are disgraced professional losers. And by the
> way, give us our money back."
> - Representative Earl Pomeroy, Democrat of North Dakota
> on the A.I.G. executives who were paid bonuses after a
> federal bailout.
>
>
* Re: How to configure 36 disks ?
2009-03-23 16:02 ` Jon Hardcastle
@ 2009-03-23 16:22 ` Mark Lord
2009-03-23 16:23 ` Christopher Smith
` (2 subsequent siblings)
3 siblings, 0 replies; 11+ messages in thread
From: Mark Lord @ 2009-03-23 16:22 UTC (permalink / raw)
To: Jon
Cc: linux-aio, Linux RAID Mailing List, linux-xfs,
linux-ide@vger.kernel.org, Raz, linux-fsdevel, Bill Davidsen
Jon Hardcastle wrote:
> I'd like to understand how you even go attaching that many devices to a system.. I am 'comparatively' new to this.. and have a 6 raid5 system.. not enterprise.. and i have slammed into case/power/sat slot issues already. What sort of hardware must one use to grow to a 36 array system!
..
Here's one way:
-- four onboard SATA ports
-- plus four add-in PCI/PCI-X/PCIe 8-port Marvell SATA cards.
-- and a honkin' huge PSU!
:)
* Re: How to configure 36 disks ?
2009-03-23 16:02 ` Jon Hardcastle
2009-03-23 16:22 ` Mark Lord
@ 2009-03-23 16:23 ` Christopher Smith
2009-03-23 16:28 ` Raz
2009-03-23 16:45 ` Greg Freemyer
2009-03-24 19:38 ` Goswin von Brederlow
3 siblings, 1 reply; 11+ messages in thread
From: Christopher Smith @ 2009-03-23 16:23 UTC (permalink / raw)
To: Jon
Cc: linux-aio, Linux RAID Mailing List, linux-xfs,
linux-ide@vger.kernel.org, Raz, linux-fsdevel, Bill Davidsen
Jon Hardcastle wrote:
> I'd like to understand how you even go attaching that many devices to a system.. I am 'comparatively' new to this.. and have a 6 raid5 system.. not enterprise.. and i have slammed into case/power/sat slot issues already. What sort of hardware must one use to grow to a 36 array system!
You can always use this approach:
http://www.youtube.com/watch?v=96dWOEa4Djs
--Chris
P.S.: Sorry, couldn't resist.
* Re: How to configure 36 disks ?
2009-03-23 16:23 ` Christopher Smith
@ 2009-03-23 16:28 ` Raz
0 siblings, 0 replies; 11+ messages in thread
From: Raz @ 2009-03-23 16:28 UTC (permalink / raw)
To: Christopher Smith
Cc: Jon, Linux RAID Mailing List, linux-aio, linux-xfs,
linux-ide@vger.kernel.org, linux-fsdevel, Bill Davidsen
On Mon, Mar 23, 2009 at 6:23 PM, Christopher Smith <x@xman.org> wrote:
> Jon Hardcastle wrote:
>>
>> I'd like to understand how you even go attaching that many devices to a
>> system.. I am 'comparatively' new to this.. and have a 6 raid5 system.. not
>> enterprise.. and i have slammed into case/power/sat slot issues already.
>> What sort of hardware must one use to grow to a 36 array system!
>
> You can always use this approach:
>
> http://www.youtube.com/watch?v=96dWOEa4Djs
>
> --Chris
>
> P.S.: Sorry, couldn't resist.
>
>
me too.
http://blogs.sun.com/observatory/entry/don_t_shout_at_your
* Re: How to configure 36 disks ?
2009-03-23 16:02 ` Jon Hardcastle
2009-03-23 16:22 ` Mark Lord
2009-03-23 16:23 ` Christopher Smith
@ 2009-03-23 16:45 ` Greg Freemyer
2009-03-23 18:32 ` Mikael Abrahamsson
2009-03-24 19:38 ` Goswin von Brederlow
3 siblings, 1 reply; 11+ messages in thread
From: Greg Freemyer @ 2009-03-23 16:45 UTC (permalink / raw)
To: Jon
Cc: linux-aio, Linux RAID Mailing List, linux-xfs,
linux-ide@vger.kernel.org, Raz, linux-fsdevel, Bill Davidsen
On Mon, Mar 23, 2009 at 12:02 PM, Jon Hardcastle
<jd_hardcastle@yahoo.com> wrote:
>
> I'd like to understand how you even go attaching that many devices to a system.. I am 'comparatively' new to this.. and have a 6 raid5 system.. not enterprise.. and i have slammed into case/power/sat slot issues already. What sort of hardware must one use to grow to a 36 array system!
For the drives you could get a few big enclosures:
http://www.pc-pitstop.com/sata_enclosures/sat122urd.asp
http://www.pc-pitstop.com/sata_enclosures/scsas16rm.asp
I don't think these offer PMP (port multiplier) support; I would look
for ones that do if you're serious about doing something like this.
If you used PMPs with 4 drives per SATA port, you would only need 9
SATA ports in the PC to control the 36 drives. I'm not sure what the
limits of PMP are, but 4x seems reasonable to me.
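That port count is just a ceiling division; a quick check, assuming 4
drives behind each port multiplier:

```shell
#!/bin/sh
# SATA ports needed when each port fans out to a 4-drive port multiplier.
drives=36
per_port=4
ports=$(( (drives + per_port - 1) / per_port ))   # ceiling division
echo "${ports} SATA ports for ${drives} drives"
```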
Greg
--
Greg Freemyer
Head of EDD Tape Extraction and Processing team
Litigation Triage Solutions Specialist
http://www.linkedin.com/in/gregfreemyer
First 99 Days Litigation White Paper -
http://www.norcrossgroup.com/forms/whitepapers/99%20Days%20whitepaper.pdf
The Norcross Group
The Intersection of Evidence & Technology
http://www.norcrossgroup.com
* Re: How to configure 36 disks ?
2009-03-23 16:45 ` Greg Freemyer
@ 2009-03-23 18:32 ` Mikael Abrahamsson
0 siblings, 0 replies; 11+ messages in thread
From: Mikael Abrahamsson @ 2009-03-23 18:32 UTC (permalink / raw)
To: Greg Freemyer
Cc: Jon, Linux RAID Mailing List, linux-aio, linux-xfs,
linux-ide@vger.kernel.org, Raz, linux-fsdevel, Bill Davidsen
On Mon, 23 Mar 2009, Greg Freemyer wrote:
> http://www.pc-pitstop.com/sata_enclosures/sat122urd.asp
> http://www.pc-pitstop.com/sata_enclosures/scsas16rm.asp
I would like to add this if you're on a budget:
<http://cgi.ebay.com/NORCO-RPC-4020-4u-rack-mountable-case-no-power-supply_W0QQitemZ380111981091QQcmdZViewItemQQptZCOMP_EN_Servers?hash=item380111981091&_trksid=p3286.c0.m14&_trkparms=72%3A1205|66%3A4|65%3A12|39%3A1|240%3A1318|301%3A1|293%3A1|294%3A200>
> sata ports in the PC to control the 36 drives. Not sure what the limits
> of PMP is, but 4x seems reasonable to me.
The PMPs I've seen take 5 drives each:
<http://www.addonics.com/products/host_controller/ad5sapm.asp>
--
Mikael Abrahamsson email: swmike@swm.pp.se
* Re: How to configure 36 disks ?
2009-03-23 16:02 ` Jon Hardcastle
` (2 preceding siblings ...)
2009-03-23 16:45 ` Greg Freemyer
@ 2009-03-24 19:38 ` Goswin von Brederlow
2009-03-25 12:14 ` Drew
3 siblings, 1 reply; 11+ messages in thread
From: Goswin von Brederlow @ 2009-03-24 19:38 UTC (permalink / raw)
To: Jon
Cc: linux-aio, Linux RAID Mailing List, linux-xfs,
linux-ide@vger.kernel.org, Raz, linux-fsdevel, Bill Davidsen
Jon Hardcastle <jd_hardcastle@yahoo.com> writes:
> I'd like to understand how you even go attaching that many devices to a system.. I am 'comparatively' new to this.. and have a 6 raid5 system.. not enterprise.. and i have slammed into case/power/sat slot issues already. What sort of hardware must one use to grow to a 36 array system!
Well, let's see.
Put three dual-channel SAS controllers in the box, giving you 6
external SAS connectors. Buy six 48-disk enclosures and connect them.
Configure them all as JBOD and you get your 288 disks.
MfG
Goswin
* Re: How to configure 36 disks ?
2009-03-24 19:38 ` Goswin von Brederlow
@ 2009-03-25 12:14 ` Drew
0 siblings, 0 replies; 11+ messages in thread
From: Drew @ 2009-03-25 12:14 UTC (permalink / raw)
To: Goswin von Brederlow
Cc: Jon, Linux RAID Mailing List, linux-aio, linux-xfs,
linux-ide@vger.kernel.org, Raz, linux-fsdevel, Bill Davidsen
>> I'd like to understand how you even go attaching that many devices to a system.. I am 'comparatively'
>> new to this.. and have a 6 raid5 system.. not enterprise.. and i have slammed into case/power/sat slot
>> issues already. What sort of hardware must one use to grow to a 36 array system!
>
> Well, let's see.
>
> Put three dual-channel SAS controllers in the box, giving you 6
> external SAS connectors. Buy six 48-disk enclosures and connect them.
> Configure them all as JBOD and you get your 288 disks.
And after you've mortgaged your house, your future, and your first born ... :-P
--
Drew
"Nothing in life is to be feared. It is only to be understood."
--Marie Curie
* Re: How to configure 36 disks ?
@ 2009-03-27 8:31 Michael Monnerie
0 siblings, 0 replies; 11+ messages in thread
From: Michael Monnerie @ 2009-03-27 8:31 UTC (permalink / raw)
To: xfs
(I sent this mail on Tue 23:40 CET, but it seems it didn't arrive on the
list. At least I didn't receive it. So I send again)
On Monday 23 March 2009 Raz wrote:
> 1. redundancy.
> 2. performance.
First, you should have bought enclosures with included RAID
controllers; then you'd be finished already.
But I guess you wanted to save the money, so at least get a real
hardware RAID controller. Look at Areca ( http://www.areca.com.tw ):
if you use Linux or Windows, the 1680-series SAS controllers can cope
with SAS/SATA drives, up to 128 drives per controller. You can put a
cache of up to 4GB on the controller, and a battery backup module that
protects your data in case of power outages. Those controllers are
blazing fast, have good admin tools, send you an e-mail in case of
problems, and they are what was used in the video that Chris mentioned:
http://blogs.sun.com/observatory/entry/don_t_shout_at_your
At the end you can see they used an Areca 16-port plus an Adaptec
8-port (I guess because they only had the 16-port Areca available).
Then configure a RAID-60 or whatever you like.
mfg zmi
--
// Michael Monnerie, Ing.BSc ----- http://it-management.at
// Tel: 0660 / 415 65 31 .network.your.ideas.
// PGP Key: "curl -s http://zmi.at/zmi.asc | gpg --import"
// Fingerprint: AC19 F9D5 36ED CD8A EF38 500E CE14 91F7 1C12 09B4
// Keyserver: wwwkeys.eu.pgp.net Key-ID: 1C1209B4