* future hardware
@ 2006-10-21 12:04 Dan
2006-10-21 16:52 ` Justin Piszcz
` (3 more replies)
0 siblings, 4 replies; 9+ messages in thread
From: Dan @ 2006-10-21 12:04 UTC (permalink / raw)
To: linux-raid
I have been using an older 64-bit system, socket 754, for a while now. It has
the old 33 MHz PCI bus. I have two low cost (no HW RAID) PCI SATA I cards,
each with 4 ports, to give me an eight disk RAID 6. I also have a Gig NIC
on the PCI bus. I have Gig switches with clients connecting to it at Gig
speed.
As many know, you get a peak transfer rate of 133 MB/s (1064 Mb/s) from that
PCI bus: http://en.wikipedia.org/wiki/Peripheral_Component_Interconnect
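That 133 MB/s ceiling is easy to derive, and it is worth comparing against what the bus is being asked to carry. A quick back-of-the-envelope sketch (the per-disk figure is an assumption for illustration, not from the post):

```shell
# Peak conventional PCI: 32-bit wide bus at 33 MHz (33.33 MHz nominal).
bus_bytes=4                      # 32 bits = 4 bytes per transfer
clock_mhz=33                     # integer arithmetic; true peak is ~133 MB/s
echo "PCI peak:  $((bus_bytes * clock_mhz)) MB/s"   # 132

# A gigabit NIC alone can demand up to 1000 Mb/s = 125 MB/s...
echo "GigE peak: $((1000 / 8)) MB/s"                # 125

# ...so eight RAID-6 member disks at, say, 50 MB/s each (assumed figure)
# would want 400 MB/s, roughly 3x what the shared bus can deliver.
echo "8 disks:   $((8 * 50)) MB/s wanted"           # 400
```

In other words, the NIC by itself can consume nearly the whole bus before the array transfers a single block.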
The transfer rate is not bad across the network, but my bottleneck is the
PCI bus. I have been shopping around for a new MB and PCI-Express cards. I
have been using mdadm for a long time and would like to stay with it. I am
having trouble finding an eight port PCI-Express card that does not have all
the fancy HW RAID, which jacks up the cost. I am now considering a MB
with eight SATA II ports onboard: GIGABYTE GA-M59SLI-S5, Socket AM2, NVIDIA
nForce 590 SLI MCP, ATX.
What are other users of mdadm using for PCI-Express cards? What is the most
cost effective solution?
* Re: future hardware
2006-10-21 12:04 future hardware Dan
@ 2006-10-21 16:52 ` Justin Piszcz
2006-10-22 2:38 ` Mike Hardy
2006-10-22 2:02 ` Richard Scobie
` (2 subsequent siblings)
3 siblings, 1 reply; 9+ messages in thread
From: Justin Piszcz @ 2006-10-21 16:52 UTC (permalink / raw)
To: Dan; +Cc: linux-raid
On Sat, 21 Oct 2006, Dan wrote:
> I have been using an older 64bit system, socket 754 for a while now. It has
> the old PCI bus 33Mhz. I have two low cost (no HW RAID) PCI SATA I cards
> each with 4 ports to give me an eight disk RAID 6.
> [...]
> What are other users of mdadm using with the PCI-express cards, most cost
> effective solution?
Read this:
http://www.anandtech.com/IT/showdoc.aspx?i=2859
I have a similar setup to yours: 6 IDE ATA/100 + 2 SATA/150 (all 400GB) in
an mdadm RAID5. It works well, but unfortunately it maxes out the PCI bus. At
some point I am going to do what you are planning: get 2 x PCI-e SATA cards,
SiL 3114 perhaps. Or, after reading that article, maybe consider SAS..?
Justin.
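Justin's "maxes out the PCI bus" observation can be checked empirically by reading from several member disks at once and watching whether the aggregate rate plateaus. A rough, read-only sketch (the device names are illustrative, not from his setup):

```shell
# Read 1 GiB from each disk in parallel; if the per-disk rate dd reports
# drops as more disks join in, the shared bus (not the disks) is the ceiling.
for dev in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
    dd if="$dev" of=/dev/null bs=1M count=1024 &
done
wait
# Compare the rates against a single-disk run of the same dd command.
```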
* Re: future hardware
2006-10-21 12:04 future hardware Dan
2006-10-21 16:52 ` Justin Piszcz
@ 2006-10-22 2:02 ` Richard Scobie
2006-10-27 21:22 ` Bill Davidsen
2006-10-31 16:11 ` Rob Bray
3 siblings, 0 replies; 9+ messages in thread
From: Richard Scobie @ 2006-10-22 2:02 UTC (permalink / raw)
To: Linux RAID Mailing List
Dan wrote:
>
> What are other users of mdadm using with the PCI-express cards, most cost
> effective solution?
I have been successfully using a pair of Addonics AD2SA3GPX1 cards, with
4 x 500GB drives in a RAID0 stacked on top of a pair of RAID1s.
The cards are cheap and use the sil24 driver, which seems to be one of
the better supported ones.
Performance is good - read/write speeds of 140MB/s in bonnie++, as I recall.
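For anyone wanting to reproduce the stacked layout, it is two md RAID1 pairs with a RAID0 across them. A sketch with mdadm (the device names are illustrative, not from Richard's actual setup):

```shell
# Two mirrored pairs...
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
# ...striped together for capacity and speed (RAID1+0).
mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1
```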
Regards,
Richard
* Re: future hardware
2006-10-21 16:52 ` Justin Piszcz
@ 2006-10-22 2:38 ` Mike Hardy
0 siblings, 0 replies; 9+ messages in thread
From: Mike Hardy @ 2006-10-22 2:38 UTC (permalink / raw)
To: linux-raid
Justin Piszcz wrote:
> cards perhaps. Or, after reading that article, consider SAS maybe..?
I hate to be the guy who breaks out the unsubstantiated anecdotal
evidence, but I've got a RAID10 with 4 x 300GB Maxtor SAS drives, and I've
already had two trigger their internal SMART "I'm about to fail" message.
They've been in service for around two months, they run at an okay
temperature, and I have not been beating the crap out of them.
More than a little disappointing.
They are fast, though...
-Mike
* Re: future hardware
2006-10-21 12:04 future hardware Dan
2006-10-21 16:52 ` Justin Piszcz
2006-10-22 2:02 ` Richard Scobie
@ 2006-10-27 21:22 ` Bill Davidsen
2006-10-27 21:56 ` Daniel Korstad
2006-10-31 16:11 ` Rob Bray
3 siblings, 1 reply; 9+ messages in thread
From: Bill Davidsen @ 2006-10-27 21:22 UTC (permalink / raw)
To: Dan; +Cc: linux-raid
Dan wrote:
>I have been using an older 64bit system, socket 754 for a while now. It has
>the old PCI bus 33Mhz. I have two low cost (no HW RAID) PCI SATA I cards
>each with 4 ports to give me an eight disk RAID 6.
>[...]
>What are other users of mdadm using with the PCI-express cards, most cost
>effective solution?
>
There may still be m/b available with multiple PCI busses. I don't know if
you are interested in a low budget solution, but that would address the
bandwidth and use existing hardware.
Idle curiosity: what kind of case are you using for the drives? I will
need to spec a machine with eight drives in the December-January timeframe.
--
bill davidsen <davidsen@tmr.com>
CTO TMR Associates, Inc
Doing interesting things with small computers since 1979
* Re: future hardware
2006-10-27 21:22 ` Bill Davidsen
@ 2006-10-27 21:56 ` Daniel Korstad
2006-10-27 22:18 ` Daniel Korstad
0 siblings, 1 reply; 9+ messages in thread
From: Daniel Korstad @ 2006-10-27 21:56 UTC (permalink / raw)
To: Bill Davidsen; +Cc: linux-raid
I have a case that will fit seven HDs in standard bays. Then I have four
5.25" bays for DVD/CD drives, so I bought this;
http://www.newegg.com/product/product.asp?item=N82E16841101035
leaving me one 5.25" bay left for the fan. In addition to the fan in the
item above, I have the exhaust fan on the power supply, another 12mm
exhaust fan and a 12mm intake that blows across the other HDs.
This is my current case, with a little mod for an extra drive;
http://www.newegg.com/Product/Product.asp?Item=N82E16811133133
I have ten drives in it now. Two in a RAID1 for the OS and eight in a
RAID6.
If I were to do it again, I would buy this...
http://www.newegg.com/Product/Product.asp?Item=N82E16811112064
On Fri, 2006-10-27 at 17:22 -0400, Bill Davidsen wrote:
> [...]
> Idle curiousity: what kind of case are you using for the drives? I will
> need to spec a machine with eight drives in the December-January timeframe.
* Re: future hardware
2006-10-27 21:56 ` Daniel Korstad
@ 2006-10-27 22:18 ` Daniel Korstad
2006-10-29 22:29 ` Doug Ledford
0 siblings, 1 reply; 9+ messages in thread
From: Daniel Korstad @ 2006-10-27 22:18 UTC (permalink / raw)
To: Bill Davidsen; +Cc: linux-raid
> I have a case what will fit seven HD in standard bays. Than I have four
> bays of 5.25 for DVD/CD drives, so I bought this;
> http://www.newegg.com/product/product.asp?item=N82E16841101035
>
> leaving me one 5.25 left for the fan. In addition to the fan in the
> item above, I have the exhaust fan on the Power Supply, another 12mm
> exhaust fan and a 12mm intake that blows across the other HDs.
Sorry, I was in too much of a hurry; those are 120cm exhaust and 120cm intake.
> [...]
* Re: future hardware
2006-10-27 22:18 ` Daniel Korstad
@ 2006-10-29 22:29 ` Doug Ledford
0 siblings, 0 replies; 9+ messages in thread
From: Doug Ledford @ 2006-10-29 22:29 UTC (permalink / raw)
To: Daniel Korstad; +Cc: Bill Davidsen, linux-raid
On Fri, 2006-10-27 at 17:18 -0500, Daniel Korstad wrote:
> > leaving me one 5.25 left for the fan. In addition to the fan in the
> > item above, I have the exhaust fan on the Power Supply, another 12mm
> > exhaust fan and a 12mm intake that blows across the other HDs.
> Sorry, I too much of a hurry, those are 120cm exhaust and 120cm intake
Hehehe, I'll burn in hell for pointing this out, but as 10mm == 1cm, a
120*mm* fan or 12*cm* fan would be correct. I'm pretty sure your fans
are neither 12mm nor 120cm (or if you do have a 120cm
fan...damn...that's a lot of cooling)...
--
Doug Ledford <dledford@redhat.com>
GPG KeyID: CFBFF194
http://people.redhat.com/dledford
Infiniband specific RPMs available at
http://people.redhat.com/dledford/Infiniband
[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 189 bytes --]
* Re: future hardware
2006-10-21 12:04 future hardware Dan
` (2 preceding siblings ...)
2006-10-27 21:22 ` Bill Davidsen
@ 2006-10-31 16:11 ` Rob Bray
3 siblings, 0 replies; 9+ messages in thread
From: Rob Bray @ 2006-10-31 16:11 UTC (permalink / raw)
To: Dan; +Cc: linux-raid
> I have been using an older 64bit system, socket 754 for a while now. It
> has the old PCI bus 33Mhz. I have two low cost (no HW RAID) PCI SATA I cards
> each with 4 ports to give me an eight disk RAID 6.
> [...]
> What are other users of mdadm using with the PCI-express cards, most cost
> effective solution?
I agree that SATA drives on PCI-E cards are as much bang-for-buck as is
available right now. On the newer platforms, each PCI-E slot, the onboard
RAID controller(s), and the 32-bit PCI bus all have discrete paths to the
chipset.
Play with the setup to see how many disks you can put on a controller
without a slowdown. Don't assume the controller isn't oversold on
bandwidth (I was only able to use three of the four CK804 ports on a
GA-K8NE without saturating it; two of the four ports on a PCI Sil3114).
Combining the bandwidth of the onboard RAID controller, two PCI-E SATA
cards, and one PCI controller card, sustained reads reach 450MB/s (across
7 disks, RAID-0) with an $80 board and three $20 controller cards.
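Those numbers imply each disk is contributing close to a full sequential rate for drives of that era. A quick check of the arithmetic:

```shell
# 450 MB/s aggregate across a 7-disk RAID-0 works out to roughly
# 64 MB/s per disk, i.e. the stripes are not bus-starved.
echo "per-disk: $((450 / 7)) MB/s"   # 64
```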
end of thread, other threads:[~2006-10-31 16:11 UTC | newest]
Thread overview: 9+ messages
2006-10-21 12:04 future hardware Dan
2006-10-21 16:52 ` Justin Piszcz
2006-10-22 2:38 ` Mike Hardy
2006-10-22 2:02 ` Richard Scobie
2006-10-27 21:22 ` Bill Davidsen
2006-10-27 21:56 ` Daniel Korstad
2006-10-27 22:18 ` Daniel Korstad
2006-10-29 22:29 ` Doug Ledford
2006-10-31 16:11 ` Rob Bray