From: Daniel Korstad <dan@korstad.net>
To: Bill Davidsen <davidsen@tmr.com>
Cc: linux-raid@vger.kernel.org
Subject: Re: future hardware
Date: Fri, 27 Oct 2006 17:18:12 -0500 [thread overview]
Message-ID: <1161987492.3068.12.camel@localhost.localdomain> (raw)
In-Reply-To: <1161986176.3068.9.camel@localhost.localdomain>
> I have a case that will fit seven HDs in standard bays. Then I have four
> 5.25" bays for DVD/CD drives, so I bought this:
> http://www.newegg.com/product/product.asp?item=N82E16841101035
>
> leaving me one 5.25" bay left for the fan. In addition to the fan in the
> item above, I have the exhaust fan on the power supply, another 12mm
> exhaust fan and a 12mm intake that blows across the other HDs.
Sorry, I was in too much of a hurry; those are 120mm exhaust and 120mm intake fans.
>
> This is my current case, with a little mod for an extra drive:
> http://www.newegg.com/Product/Product.asp?Item=N82E16811133133
>
> I have ten drives in it now. Two in a RAID1 for the OS and eight in a
> RAID6.
>
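For anyone reproducing a layout like that, the two arrays can be sketched with mdadm roughly as follows. The device names here are hypothetical; adjust them to however your controller enumerates the disks:

```shell
# Hypothetical device names -- adjust for your controller's enumeration.
# Two-disk RAID1 mirror for the OS:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# Eight-disk RAID6 for data (tolerates any two simultaneous drive failures):
mdadm --create /dev/md1 --level=6 --raid-devices=8 /dev/sd[c-j]1
```

This is only a sketch of the creation step; you would still need to put a filesystem on the arrays and record them in mdadm.conf.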
> If I were to do it again, I would buy this...
> http://www.newegg.com/Product/Product.asp?Item=N82E16811112064
>
>
>
> On Fri, 2006-10-27 at 17:22 -0400, Bill Davidsen wrote:
> > Dan wrote:
> >
> > >I have been using an older 64-bit system, socket 754, for a while now. It has
> > >the old 33 MHz PCI bus. I have two low-cost (no HW RAID) PCI SATA I cards,
> > >each with 4 ports, to give me an eight-disk RAID 6. I also have a Gig NIC
> > >on the PCI bus. I have Gig switches with clients connecting to it at Gig
> > >speed.
> > >
> > >As many know, you get a peak transfer rate of 133 MB/s (1064 Mb/s) from that
> > >PCI bus: http://en.wikipedia.org/wiki/Peripheral_Component_Interconnect
> > >
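That peak figure falls straight out of the bus arithmetic (32-bit bus, one transfer per clock); a rough sketch:

```shell
# Classic 32-bit/33 MHz PCI: 4 bytes transferred per clock cycle.
clock_mhz=33          # nominal clock (really 33.33 MHz, hence the quoted 133 MB/s)
bus_width_bytes=4     # 32-bit bus
mb_per_s=$((clock_mhz * bus_width_bytes))
mbit_per_s=$((mb_per_s * 8))
echo "peak ~${mb_per_s} MB/s (~${mbit_per_s} Mb/s)"
```

And that is a theoretical burst ceiling shared by every device on the bus, so two 4-port SATA cards plus a Gig NIC all contend for it.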
> > >The transfer rate is not bad across the network, but my bottleneck is the
> > >PCI bus. I have been shopping around for a new MB and PCI-express cards. I
> > >have been using mdadm for a long time and would like to stay with it. I am
> > >having trouble finding an eight-port PCI-express card that does not have all
> > >the fancy HW RAID, which jacks up the cost. I am now considering using a MB
> > >with eight SATA II ports onboard: GIGABYTE GA-M59SLI-S5 Socket AM2 NVIDIA
> > >nForce 590 SLI MCP ATX.
> > >
> > >What are other mdadm users doing with PCI-express cards? What is the most
> > >cost-effective solution?
> > >
> > There may still be m/b available with multiple PCI buses. I don't know if
> > you are interested in a low-budget solution, but that would address the
> > bandwidth and use existing hardware.
> >
> > Idle curiosity: what kind of case are you using for the drives? I will
> > need to spec a machine with eight drives in the December-January timeframe.
> >
Thread overview: 9+ messages
2006-10-21 12:04 future hardware Dan
2006-10-21 16:52 ` Justin Piszcz
2006-10-22 2:38 ` Mike Hardy
2006-10-22 2:02 ` Richard Scobie
2006-10-27 21:22 ` Bill Davidsen
2006-10-27 21:56 ` Daniel Korstad
2006-10-27 22:18 ` Daniel Korstad [this message]
2006-10-29 22:29 ` Doug Ledford
2006-10-31 16:11 ` Rob Bray