linux-raid.vger.kernel.org archive mirror
* new raid system for home use
@ 2004-04-18  3:34 Paul Phillips
  2004-04-18 18:49 ` Mark Hahn
  0 siblings, 1 reply; 16+ messages in thread
From: Paul Phillips @ 2004-04-18  3:34 UTC (permalink / raw)
  To: linux-raid

I'm building a new system to be my home media server (as well as web,
mail, etc.) I already have a terabyte of data so I'm aiming at about a
three terabyte capacity.  My tentative plan is to use 8 of the new Hitachi
Deskstar 7K400 400 GB SATA drives in a RAID-5 configuration, and 2
additional drives in a RAID-1 configuration as the boot device and higher
priority data storage.  I'd like to use debian unstable and kernel v2.6.
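(The arithmetic: RAID-5 across 8 drives leaves 7 drives of usable space,
7 x 400 GB = 2.8 TB, so "about three" is honest rounding.)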

I've never built anything like this before though so I'm a bit nervous
I'll make unwise hardware choices.  So I appeal to the locals for any
specific or general advice you may care to offer.  Once it's running I'll
document my setup for fellow degenerate home multi-terabyte data amassers.
I did not find the web overflowing with instances of people building such
large linux RAID servers in non-business settings.

Important:

  * top-notch linux support for { RAID card, gigabit ethernet chipset, ??? }
  * components with proven linux functionality/reliability
  * easy expandability
  * no bottlenecks if I want to stream video to up to four locations
  * doesn't demand rack-mount

Would be nice:

  * open source drivers for all components if possible, or most if not
  * all things being equal, the quieter and cooler-running version

Not particularly important:

  * endless oodles of CPU (I'd think 2x3GHz would be megaplenty)
  * hot swap
  * uptime > 99.9%
  * drive reliability (willing to keep spares handy and drop them
      in as the occasion warrants)
  * price (not cost-unconscious, but not spend-averse)

Other matters of interest:

  * Would RAID-6 be overkill? I doubt I'll be backing up the big array,
      ever.  Losing it would suck a lot but not end my existence.
  * Is EVMS mature enough to use if I'm bleeding edge averse in that
      area? I'd never heard of it before reading this list.
  * Software vs. Hardware RAID? I imagine this is a good place for
      Hardware if I buy the right card, but maybe Software would require
      less expertise and fiddling to get running in peak form.
  * Would I be smarter to settle for kernel 2.4 at this time?
  * I'm probably failing to consider the five most important factors...

Examples of the items I might land on if I had to buy this today without
wise feedback:

  *  Chassis: http://www.baber.com/cases/mpe_ft2_black.htm
  *     RAID: http://www.3ware.com/products/serial_ata9000.asp
  * 10 Disks: http://www.hitachigst.com/hdd/desk/7k400.html
  *     MoBo: ???
  * CPU, RAM: TBD based on MoBo

Many many thanks for all replies.

-- 
Paul Phillips      | All zork and no slay makes Jack a troll boy.
Analgesic          |
Empiricist         |
pull his pi pal!   |----------* http://www.improving.org/paulp/ *----------

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: new raid system for home use
  2004-04-18  3:34 new raid system for home use Paul Phillips
@ 2004-04-18 18:49 ` Mark Hahn
  2004-04-18 20:22   ` Paul Phillips
                     ` (3 more replies)
  0 siblings, 4 replies; 16+ messages in thread
From: Mark Hahn @ 2004-04-18 18:49 UTC (permalink / raw)
  To: Paul Phillips; +Cc: linux-raid

> three terabyte capacity.  My tentative plan is to use 8 of the new Hitachi
> Deskstar 7K400 400 GB SATA drives in a RAID-5 configuration, and 2

highest-capacity disks are noticeably more expensive than more routine ones.

> additional drives in a RAID-1 configuration as the boot device and higher
> priority data storage. 

I wouldn't bother, since raid5 is plenty fast.  it's nothing but marketing
that the storage vendors push this concept of "near-line" disk storage.

> I'd like to use debian unstable and kernel v2.6.

distributions are irrelevant.  the only reason I can think of to prefer 2.6
is better support for very large block devices.

> I did not find the web overflowing with instances of people building such
> large linux RAID servers in non-business settings.

I can't think of anything about data servers that is "setting specific".

>   * top-notch linux support for { RAID card, gigabit ethernet chipset, ??? }

why a raid card?  they're slow and expensive.  I'd use two promise sata150tx4
cards.  reasons for preferring sw raid have been discussed here before and the
facts remain unchanged.

broadcom or intel gigabit nics seem to be quite safe choices.

>   * components with proven linux functionality/reliability
>   * easy expandability
>   * no bottlenecks if I want to stream video to up to four locations

but we're talking piddly little streams, no?  just compressed video at 
a MB/s or two?
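(four streams at 2 MB/s is 8 MB/s total; gigabit ethernet moves on the
order of 100 MB/s, and even a single modern disk streams several times
8 MB/s, so there's no bottleneck to speak of.)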

>   * doesn't demand rack-mount

nothing requires rack-mount.  even giant-sized motherboards will fit
into *some* mid-tower-ish chassis.

naturally, a lot of disks should make you very concerned for the 
size of your power supply.

>   * open source drivers for all components if possible, or most if not
>   * all things being equal, the quieter and cooler-running version

big PSUs tend not to be quiet.  and even though modern non-SCSI disks 
are quiet, enough of them together still makes some noise.

>   * endless oodles of CPU (I'd think 2x3GHz would be megaplenty)

too much, I'd say.  a single p4/2.6 would be fine.  it's true though that 
if you have your heart set on high bandwidth, that necessitates PCI-X,
and PCI-X slots are rarely found outside of dual xeon/opteron "server" boards.
you can, of course, sensibly run such a board with 1 cpu.

>   * hot swap

thankfully, this is starting to be almost standard in a chassis designed
for more than a couple disks.

>   * uptime > 99.9%

trivial.

>   * drive reliability (willing to keep spares handy and drop them
>       in as the occasion warrants)

it's not hot if you have to do something to use it.

>   * price (not cost-unconscious, but not spend-averse)

if you like the integrated approach (windows, etc), then just get a 
sata-based storage box supported by some real company.  as with all 
integrated solutions, the pitch is based on them worrying about it,
not you.  yes, you pay through the nose, but that's the tradeoff you 
have to evaluate.

> Other matters of interest:
> 
>   * Would RAID-6 be overkill? I doubt I'll be backing up the big array,
>       ever.  Losing it would suck a lot but not end my existence.

r5+hotspare is plenty reliable.  I think r6 is a bit immature, but I haven't
tried it.

>   * Is EVMS mature enough to use if I'm bleeding edge averse in that
>       area? I'd never heard of it before reading this list.

EVMS is afflicted by featuritis, IMO, compared to LVM.  but why do you
think you need it?  volume managers are for people who want to divide
their storage into little chunks, and then experience the bofhish grandeur 
of requiring the lusers to beg for more space.

big storage should be left in big chunks, unless there's some good reason
to split it up.

>   * Software vs. Hardware RAID? I imagine this is a good place for
>       Hardware if I buy the right card, but maybe Software would require
>       less expertise and fiddling to get running in peak form.

fiddling is required if you're trying to tweak either approach.
do you want to tweak via some proprietary/integrated interface,
talking to a $1k card that's slower than a $100 card?

I don't believe anyone would claim that hw raid is somehow more reliable.
people who like embedded/gui interfaces would claim that hw raid is more
usable.

>   * Would I be smarter to settle for kernel 2.4 at this time?

no.  you have no real performance issues, and sata support in 2.6 is very
good, as is support for very large block devices.

>   * I'm probably failing to consider the five most important factors...

the only really important factor is that the disks should carry a 3+ year
warranty.  hw raid is fine if you like that kind of thing, and want the
security of paying more for slower performance, plus the privilege of
waiting on hold at a telephone support number.

>   *  Chassis: http://www.baber.com/cases/mpe_ft2_black.htm

jeez.  that's a penis surrogate.  why not just get a straightforward
3-4U rackmount chassis and sit it on a little wheeled dolly from Ikea?

400W PS is not enough for 8-10 disks, and you probably want a
bigger-than-ATX form factor (EATX/SSI/etc).

note also that 5.25" bays are in some ways a disadvantage, since all the
disks are 3.5" (and you probably want a couple of multi-bay hotswap 
converters that put, e.g., four 3.5" drives in three 5.25" bays.)

>   *     RAID: http://www.3ware.com/products/serial_ata9000.asp

for a hw raid card, 3ware is pretty good.  they're still much more expensive
than sw raid, and by most reports, slower.

>   * 10 Disks: http://www.hitachigst.com/hdd/desk/7k400.html

use two promise 4-port controllers and the 1-2 sata ports that come 
with your MB.  you'll find that r5 is plenty fast for normal use,
so you don't need to waste a disk with a separate r1.
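
creating such an array with mdadm is pretty much a one-liner; roughly
(device names, chunk size and filesystem are only an example, and the
ninth disk is an optional hot spare):

   mdadm --create /dev/md0 --level=5 --chunk=64 \
         --raid-devices=8 --spare-devices=1 /dev/sd[a-i]1
   mkfs -t ext3 /dev/md0
   mdadm --detail --scan >> /etc/mdadm/mdadm.conf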

>   *     MoBo: ???

any Intel i75xx, i875, or AMD 8xxx from a recognizable vendor (tyan,
supermicro, asus, intel, etc.)

>   * CPU, RAM: TBD based on MoBo

you don't need much CPU power, and unless you have high locality
of reference, lots of memory is wasted on fileservers.  get 1-2GB ECC.

you should also think about whether you really require this to be a 
single server.  components at the basic level are quite cheap, but 
as you go higher-end, costs go upward quite steeply.

regards, mark hahn.


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: new raid system for home use
  2004-04-18 18:49 ` Mark Hahn
@ 2004-04-18 20:22   ` Paul Phillips
  2004-04-19 16:03     ` Norman Schmidt
  2004-04-18 20:48   ` Guy
                     ` (2 subsequent siblings)
  3 siblings, 1 reply; 16+ messages in thread
From: Paul Phillips @ 2004-04-18 20:22 UTC (permalink / raw)
  To: Mark Hahn; +Cc: linux-raid

Thanks a lot for your comments; that helps.  A couple of items:

On Sun, 18 Apr 2004, Mark Hahn wrote:

> if you like the integrated approach (windows, etc), then just get a
> sata-based storage box supported by some real company.

That was my original plan but I was too horrified by the markup.  What I'd
really like to find is someone who will build it to specification for a
small markup on parts and let me worry about supporting it.  I don't mind
software effort, but for me "it's hard where hardware is involved," ha ha,
don't hurt yourself laughing.

> EVMS is afflicted by featuritis, IMO, compared to LVM.  but why do you
> think you need it?

If there were a consensus here that EVMS is the bright shining future of
storage management, I figured I'd jump on board sooner rather than later.
I like being tech-fashion forward as long as I don't bleed TOO much.

> >   *  Chassis: http://www.baber.com/cases/mpe_ft2_black.htm
>
> jeez.  that's a penis surrogate.  why not just get a straightforward
> 3-4U rackmount chassis and sit it on a little wheeled dolly from Ikea?

I don't think they sell wheeled dollies large enough for my penis.
Actually I just picked that one after a quick web search, trying to find a
non-rackmount case with lots of drive bays.  I have bad associations with
rackmount cases but that may be irrational prejudice.

> use two promise 4-port controllers and the 1-2 sata ports that come with
> your MB.  you'll find that r5 is plenty fast for normal use, so you
> don't need to waste a disk with a separate r1.

I was under the impression that I can't boot off raid-5, but if that
information is dated then all the better.

Thanks again.

-- 
Paul Phillips      | been around the world and found that only stupid
Apatheist          | people are breeding, the cretins cloning and feeding,
Empiricist         | and i don't even own a tv.  -- harvey danger
i'll ship a pulp   |----------* http://www.improving.org/paulp/ *----------

^ permalink raw reply	[flat|nested] 16+ messages in thread

* RE: new raid system for home use
  2004-04-18 18:49 ` Mark Hahn
  2004-04-18 20:22   ` Paul Phillips
@ 2004-04-18 20:48   ` Guy
  2004-04-19 16:01     ` Norman Schmidt
  2004-04-19  9:16   ` Clemens Schwaighofer
  2004-04-20  1:14   ` maarten van den Berg
  3 siblings, 1 reply; 16+ messages in thread
From: Guy @ 2004-04-18 20:48 UTC (permalink / raw)
  To: 'Mark Hahn', 'Paul Phillips'; +Cc: linux-raid

You said:
================
>   * hot swap

thankfully, this is starting to be almost standard in a chassis designed
for more than a couple disks.
================

I think I have read that Linux does not support hot swap SATA disks.
Not yet.
I think SCSI is the only hot swap option unless he goes with hardware RAID.
Also, hardware RAID does real hot swap (remove bad disk, insert good disk,
back to computer games).  With software RAID you must issue magic
incantations to swap a disk.  Some would argue against software RAID because
of this.  These incantations are beyond most computer operators (in the real
world).  They know how to change a tape at the correct time, but know little
about the OS.  In my opinion!

Guy

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: new raid system for home use
  2004-04-18 18:49 ` Mark Hahn
  2004-04-18 20:22   ` Paul Phillips
  2004-04-18 20:48   ` Guy
@ 2004-04-19  9:16   ` Clemens Schwaighofer
  2004-04-19 17:57     ` Mark Hahn
  2004-04-20  1:14   ` maarten van den Berg
  3 siblings, 1 reply; 16+ messages in thread
From: Clemens Schwaighofer @ 2004-04-19  9:16 UTC (permalink / raw)
  To: Mark Hahn; +Cc: Paul Phillips, linux-raid

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Mark Hahn wrote:

| EVMS is afflicted by featuritis, IMO, compared to LVM.  but why do you
| think you need it?  volume managers are for people who want to divide
| their storage into little chunks, and then experience the bofhish grandeur
| of requiring the lusers to beg for more space.

There is no need for any kind of LVM unless you start with a small set
of HDs and may need to extend it later without shuffling data around
between partitions etc.
For bofhish behavior there is quota ;)

|>  * I'm probably failing to consider the five most important factors...
|
| the only really important factor is that the disks should carry a 3+ year
| warranty.  hw raid is fine if you like that kind of thing, and want the
| security of paying more for slower performance, plus the privilege of
| waiting on hold at a telephone support number.

It always depends on WHAT kind of HW raid you are talking about: the kind
you get in HP/Compaq DL boxes, or the "HW raid" you get with promise and
other low-end controllers.  For the latter, software raid is easier,
cheaper and faster, but I think at the upper end hardware raid has some
other advantages, eg easy boot from raid 5, transparency to the OS layer
(no 5 mds because you want partitions, no patches because you want
partitionable md devices, etc), and completely dependable, easy hotswap ...

- --
Clemens Schwaighofer - IT Engineer & System Administration
==========================================================
TEQUILA\Japan, 6-17-2 Ginza Chuo-ku, Tokyo 104-8167, JAPAN
Tel: +81-(0)3-3545-7703            Fax: +81-(0)3-3545-7343
http://www.tequila.co.jp
==========================================================
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.4 (GNU/Linux)

iD8DBQFAg5kIjBz/yQjBxz8RAsliAJ9NDA7i463cZFD8TTYNs/HrltLdcACfdehd
UmkB+VGTNInQqSiUSvPFUVc=
=Z312
-----END PGP SIGNATURE-----

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: new raid system for home use
  2004-04-18 20:48   ` Guy
@ 2004-04-19 16:01     ` Norman Schmidt
  0 siblings, 0 replies; 16+ messages in thread
From: Norman Schmidt @ 2004-04-19 16:01 UTC (permalink / raw)
  To: linux-raid

Guy wrote:

> You said:
> ================
> 
>>  * hot swap
> 
> I think I have read that Linux does not support hot swap SATA disks.
> Not yet.

I don't know exactly how you define "hot swap", but the following works 
(I tried it):

3* Samsung Spinpoint 160G SATA disk
Promise SATA 150 TX4 controller
sw raid 5 over the three disks

promise driver module with 2.4.23 or so kernel

The drives are accessed as sd[something].  If you plug and pull other 
scsi devices, you should use UUIDs with mdadm, not device names (because 
they change).
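
Something like this in mdadm.conf does the trick (the UUID below is made
up; take the real one from "mdadm --detail /dev/md0" or
"mdadm --examine --scan"):

   # /etc/mdadm/mdadm.conf
   DEVICE /dev/sd*
   ARRAY /dev/md0 UUID=3aaa0122:29827cfa:5331ad66:ca767371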

I set one drive faulty and removed it with mdadm.  Then I unplugged it 
(first the SATA cable, then the power cable).  The drive was gone from 
/proc/scsi/something.  Then I reattached it in the reverse order.  The 
drive was back in /proc/.  Then I added the drive back to the raid with 
mdadm, it resynced, and everything was fine.

So to change a drive, you don't have to power down the server, or 
even stop any server daemons.
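
In commands, the whole exercise was roughly this (md and sd names are
from my box and will differ on yours):

   mdadm /dev/md0 --fail /dev/sdc1      # mark the disk faulty
   mdadm /dev/md0 --remove /dev/sdc1    # take it out of the array
   # ... unplug, swap and replug the drive ...
   mdadm /dev/md0 --add /dev/sdc1       # add it back; resync starts
   cat /proc/mdstat                     # watch the resync progress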

> I think SCSI is the only hot swap option unless he goes with hardware RAID.
> Also, hardware RAID does real hot swap (remove bad disk, insert good disk,
> back to computer games).  With software RAID you must issue magic
> incantations to swap a disk.  Some would argue against software RAID because
> of this.

Yes, that's right, that's a con.  But I had massive problems with two 
Mylex DAC960 hardware raid controllers - I would never use them again 
(they were bought by my predecessors).

> Guy

Norman.


-- 
Norman Schmidt          Institut fuer Physikal. u. Theoret. Chemie
Dipl.-Chem. Univ.       Friedrich-Alexander-Universitaet
schmidt@naa.net         Erlangen-Nuernberg
                         IT-Systembetreuer Physikalische Chemie


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: new raid system for home use
  2004-04-18 20:22   ` Paul Phillips
@ 2004-04-19 16:03     ` Norman Schmidt
  2004-04-19 16:20       ` Jeff Garzik
  2004-04-21 13:24       ` Robert Washburne
  0 siblings, 2 replies; 16+ messages in thread
From: Norman Schmidt @ 2004-04-19 16:03 UTC (permalink / raw)
  To: linux-raid

Hi Paul!

Paul Phillips wrote:

> I was under the impression that I can't boot off raid-5, but if that
> information is dated then all the better.
> 
> Thanks again.

Afaik, that's still true.  The reason is that the two (or more) drives of 
a raid 1 hold essentially the same data, and if you do everything right, 
the complete boot mechanism is on every disk, so you can boot from any of 
them.  With raid 5, no single disk ever holds the complete data.
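
The usual workaround is to give every disk a small partition for a raid 1
/boot and put the raid 5 on the rest, so any single disk can be booted
from.  Roughly (sizes and device names are only an example):

   # small first partition on each disk, mirrored 8 ways, for /boot
   mdadm --create /dev/md0 --level=1 --raid-devices=8 /dev/sd[a-h]1
   # the rest of each disk goes into the raid 5
   mdadm --create /dev/md1 --level=5 --raid-devices=8 /dev/sd[a-h]2

The boot loader then sees each member of the raid 1 as an ordinary
partition holding a complete copy of /boot.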

Norman.

-- 
Norman Schmidt          Institut fuer Physikal. u. Theoret. Chemie
Dipl.-Chem. Univ.       Friedrich-Alexander-Universitaet
schmidt@naa.net         Erlangen-Nuernberg
                         IT-Systembetreuer Physikalische Chemie


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: new raid system for home use
  2004-04-19 16:03     ` Norman Schmidt
@ 2004-04-19 16:20       ` Jeff Garzik
  2004-04-19 17:50         ` Mark Hahn
  2004-04-21 13:24       ` Robert Washburne
  1 sibling, 1 reply; 16+ messages in thread
From: Jeff Garzik @ 2004-04-19 16:20 UTC (permalink / raw)
  To: schmidt; +Cc: linux-raid

Norman Schmidt wrote:
> Hi Paul!
> 
> Paul Phillips wrote:
> 
>> I was under the impression that I can't boot off raid-5, but if that
>> information is dated then all the better.
>>
>> Thanks again.
> 
> 
> Afaik, that´s still true. The reason is that the two (or more) drives of 
> a raid 1 essentially hold the same data, and if you do everything right, 
> the complete boot mechanism is on all disks, so that you can boot with 
> any of them. On raid 5 disks, you would never have the complete data on 
> any of the disks.


If you can load an initrd, you can boot off of anything.

The question then becomes whether or not you can load the initrd :) 
Sometimes even in RAID5 situations you can...

	Jeff




^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: new raid system for home use
  2004-04-19 16:20       ` Jeff Garzik
@ 2004-04-19 17:50         ` Mark Hahn
  0 siblings, 0 replies; 16+ messages in thread
From: Mark Hahn @ 2004-04-19 17:50 UTC (permalink / raw)
  To: linux-raid

> >> I was under the impression that I can't boot off raid-5, but if that
...
> If you can load an initrd, you can boot off of anything.

indeed, you may not even need initrd - I usually have a boot kernel with 
monolithic (non-modular) boot device support.  the result is that as long 
as lilo/grub can read a MB or so off some device, you can boot.
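
for example, with /boot on a small raid 1 (md0) and root on a raid 5
(md1), a minimal lilo.conf looks roughly like this (device names are only
an example):

   boot=/dev/md0              # lilo writes a boot record to each raid1 member
   raid-extra-boot=mbr-only
   image=/boot/vmlinuz
       label=linux
       root=/dev/md1          # raid5 root; driver compiled in, or add initrd=
       read-only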

regards, mark hahn.


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: new raid system for home use
  2004-04-19  9:16   ` Clemens Schwaighofer
@ 2004-04-19 17:57     ` Mark Hahn
  0 siblings, 0 replies; 16+ messages in thread
From: Mark Hahn @ 2004-04-19 17:57 UTC (permalink / raw)
  To: linux-raid

> | think you need it?  volume managers are for people who want to divide
> | their storage into little chunks, and then experience the bofhish grandeur
> | of requiring the lusers to beg for more space.
> 
> There is no need for any kind of LVM unless you start with a small set
> of HDs and may need to extend it later without moving data around on
> partitions etc.
> for bofhish behavior there is quota ;)

yes, though I have to wonder whether there are performance implications
for doing that kind of incremental filesystem extension.  filesystems,
as you know, do normally want to have some knowledge of the "raid topology"
of the blockdev space they consume.
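
ext2/ext3, at least, can be told the chunk layout at mkfs time; e.g. for
a 64k chunk and 4k blocks the stride is 64/4 = 16 (numbers are just an
example, adjust to your array):

   mke2fs -j -b 4096 -R stride=16 /dev/md1
   # newer e2fsprogs spell the same thing -E stride=16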

<scandalous>
personally, I kind of like the idea of having raid performed by the FS,
though this is obviously a steep undertaking (and undesirable in the 
complexity-management/layering/compartmentalization sense.)
</scandalous>

> software raid is more easy and cheap and faster, but I think in the
> upper area hardware raid has some other advantages, eg easy boot from
> raid 5, transparency to the OS layer (no 5 mds because you want
> partitions, no patches because you want to make md partitions, etc).
> 100% sure and easy hotswap ...

hw raid gives you an integrated solution.  you obviously wind up very very
dependent on replacement cards, firmware upgrades, the bios-level interface,
existence of any user-level control utilities, etc.

personally, I prefer to use commodity hardware in part because it's so
mundane.  how long will it take you to RMA that HW raid card?  if you're 
already paying for a big-name HW support contract, the time/money may be 
quite minimal.

regards, mark hahn.


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: new raid system for home use
  2004-04-18 18:49 ` Mark Hahn
                     ` (2 preceding siblings ...)
  2004-04-19  9:16   ` Clemens Schwaighofer
@ 2004-04-20  1:14   ` maarten van den Berg
  3 siblings, 0 replies; 16+ messages in thread
From: maarten van den Berg @ 2004-04-20  1:14 UTC (permalink / raw)
  To: linux-raid

On Sunday 18 April 2004 20:49, Mark Hahn wrote:
> > three terabyte capacity.  My tentative plan is to use 8 of the new
> > Hitachi Deskstar 7K400 400 GB SATA drives in a RAID-5 configuration, and

My own home setup consists of 5+2 x 80GB IDE disks yielding an array of 400GB 
space. It uses 3 cheap promise ATA cards and uses raid5.  It is full to the 
brim now, so funds permitting I will build a new, bigger array somewhere 
this year...

> > additional drives in a RAID-1 configuration as the boot device and higher
> > priority data storage.
>
> I wouldn't bother, since raid5 is plenty fast.  it's nothing but marketing
> that the storage vendors push this concept of "near-line" disk storage.

I myself preferred to use a small old 8 GB bootdrive. It needn't be 
fault-tolerant; I can live with the OS dying one day: my data is not in 
danger.

> naturally, a lot of disks should make you very concerned for the
> size of your power supply.

Be that as it may, my system above (with eight disks!) runs happily off a 
high-quality 300 Watt PSU.  IDE disks don't draw such high currents anymore; 
look at the label.  Typically both rails are loaded well under 1 amp.
(Yes, I realize spin-up current can be a killer, but as I said, it works fine.)
Moreover, I have them spin down after 4 hours of idle, so if those startup 
currents were that severe, I would have killed my PSU years ago...
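(Rough numbers: a 7200rpm disk idles at maybe 10 watts, well under 1 amp
per rail, but spin-up can briefly pull around 2 amps from the 12V rail,
so eight disks starting at once might want on the order of 15-20 amps of
12V for a second or two.  That peak, not the steady state, is what the
big-PSU advice is really about.)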

> >   * open source drivers for all components if possible, or most if not
> >   * all things being equal, the quieter and cooler-running version
>
> big PSU's tend not to be quiet.  and even though modern non-SCSI disks
> are quiet, enough of them does make some noise.

Advice: definitely go for a PSU that's equipped with a 120mm fan. They're 
really very quiet, you'll be surprised. You can recognize them easily since 
the fan will be mounted at the bottom [intake], horizontally. 

> >   * endless oodles of CPU (I'd think 2x3GHz would be megaplenty)
>
> too much, I'd say.  a single p4/2.6 would be fine.  it's true though that
> if you have your heart set on high bandwidth, that necessitates PCI-X,
> and they're uncommonly found outside of dual xeon/opteron "server" boards.
> you can, of course, sensibly run such a board with 1 cpu.

If it is only for the raid5 calculations, don't bother.  I changed my server's 
motherboard at one point; it had a lowly AMD K6 before and has dual Celerons 
(SMP) now.  There was no noticeable speed difference whatsoever.
And both those boards are dead _slow_ compared to current systems.

> big storage should be left in big chunks, unles there's some good reason
> for it.

I fully concur.

> 400W PS is not enough for 8-10 disks, and you probably want

I will agree on the "better safe than sorry" aspect, but as I said above, a 
normal (but good quality!) PSU can, and does, handle 8 disks if need be.
(I'm not talking about SCSI disks though!)

> note also that 5.25 bays are in some ways a disadvantage, since all
> disks are 3.5 (and you probably want a couple multi-bay hotswap
> converters that put, eg, 4 3.5's in 3 5.25's.)

I disagree.  Disks tend to die when they run too hot.  So I mounted all the 
disks in 5.25" bays (thus leaving lots of air space) and ripped out the whole 
plastic front to mount two Papst silent 120mm fans right in front of the 
disks.  These babies don't even know what 30 degrees celsius feels like ;-)) 

Good luck!
Maarten

-- 
Yes of course I'm sure it's the red cable. I guarante[^%!/+)F#0c|'NO CARRIER

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: new raid system for home use
       [not found] <4084AC05.6090800@tequila.co.jp>
@ 2004-04-20 11:46 ` Mark Hahn
  2004-04-20 12:54   ` KELEMEN Peter
  0 siblings, 1 reply; 16+ messages in thread
From: Mark Hahn @ 2004-04-20 11:46 UTC (permalink / raw)
  To: linux-raid

> | personally, I prefer to use commodity hardware in part because it's so
> | mundane.  how long will it take you to RMA that HW raid card?  if you're
> | already paying for a big-name HW support contract, the time/money may be
> | quite minimal.
> 
> It's always a hot debate here SW raid vs HW raid ... I think it depends

I don't believe there is much real debate: debate presumes that someone 
will change their mind.

> what you want to do, how much work you want to put into it, how much you
> trust your admin, that he can punch in some commands to replace a b0rked
> HD or if she/he can only replace a HD physically (those hot swap stuff
> in the HP boxes eg.).

egads - if you're trusting an admin who has so recently said
"would you like fries with that", then you have other serious risks.

OK, to summarize, everyone agrees that:

sw raid is faster, cheaper and not dependent on obscure hardware.
hw raid is "supported" by someone, and runnable by a drooling idiot.

amusing how this resembles the whole linux-windows TCO-based argument.

> Eg I see no reason to do software raid if you can have hw raid at a
> reasonable price.

speed is enough for me.


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: new raid system for home use
  2004-04-20 11:46 ` Mark Hahn
@ 2004-04-20 12:54   ` KELEMEN Peter
  2004-04-20 16:12     ` Jeff Garzik
  0 siblings, 1 reply; 16+ messages in thread
From: KELEMEN Peter @ 2004-04-20 12:54 UTC (permalink / raw)
  To: linux-raid

* Mark Hahn (hahn@physics.mcmaster.ca) [20040420 07:46]:

> [...] sw raid is faster, cheaper and not dependent on obscure
> hardware. [...]

For me, random reads are 10-15% better on 3ware 7506 HW-RAID5 than
on SW-RAID5.

Peter

-- 
    .+'''+.         .+'''+.         .+'''+.         .+'''+.         .+''
 Kelemen Péter     /       \       /       \     Peter.Kelemen@cern.ch
.+'         `+...+'         `+...+'         `+...+'         `+...+'

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: new raid system for home use
  2004-04-20 12:54   ` KELEMEN Peter
@ 2004-04-20 16:12     ` Jeff Garzik
  2004-04-20 20:08       ` KELEMEN Peter
  0 siblings, 1 reply; 16+ messages in thread
From: Jeff Garzik @ 2004-04-20 16:12 UTC (permalink / raw)
  To: KELEMEN Peter; +Cc: linux-raid

KELEMEN Peter wrote:
> * Mark Hahn (hahn@physics.mcmaster.ca) [20040420 07:46]:
> 
> 
>>[...] sw raid is faster, cheaper and not dependent on obscure
>>hardware. [...]
> 
> 
> For me, random reads are 10-15% better on 3ware 7506 HW-RAID5 than
> on SW-RAID5.

That's highly dependent on the hardware used for software RAID5.

	Jeff





^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: new raid system for home use
  2004-04-20 16:12     ` Jeff Garzik
@ 2004-04-20 20:08       ` KELEMEN Peter
  0 siblings, 0 replies; 16+ messages in thread
From: KELEMEN Peter @ 2004-04-20 20:08 UTC (permalink / raw)
  To: linux-raid

* Jeff Garzik (jgarzik@pobox.com) [20040420 12:12]:

> That's highly dependent on the hardware used for software RAID5.

Dual Xeon 2.6 GHz, 2G RAM, Intel e7501 chipset, 3x 3ware 7506
sitting in 64bit/66MHz slots, 20x WD 120G.  RedHat 2.4.21 kernel.

Peter

-- 
    .+'''+.         .+'''+.         .+'''+.         .+'''+.         .+''
 Kelemen Péter     /       \       /       \     Peter.Kelemen@cern.ch
.+'         `+...+'         `+...+'         `+...+'         `+...+'

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: new raid system for home use
  2004-04-19 16:03     ` Norman Schmidt
  2004-04-19 16:20       ` Jeff Garzik
@ 2004-04-21 13:24       ` Robert Washburne
  1 sibling, 0 replies; 16+ messages in thread
From: Robert Washburne @ 2004-04-21 13:24 UTC (permalink / raw)
  To: linux-raid

Greetings!  This is my first post to the list.

On my system (a dedicated fileserver running Gentoo Linux) I partition all 
of my drives the same way:
P1 - 1 cyl - /boot
P2 - swap
P3 - 2G/RAID 5 - /
L5 - 4G/ext2 - / alternate
L* - 35G/RAID 5 - Storage partitions

I use lilo with a dual boot.
The default boot uses the RAID 5 root.
The secondary boot uses the ext2 root.  This partition is copied onto every drive.

This way, I have all of the advantages of RAID 5 for my OS, but I can boot 
to a mundane partition if the RAID should break or need maintenance (like 
adding another drive).
If the boot drive dies, I can boot from CD and chroot to any of the 
remaining drives' L5 partitions.
I have tested both boots and it works quite well.  The cost in space is 4G 
from a 250G drive.  About a 2% overhead.
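
The lilo.conf side of it is roughly this (labels and devices here are
examples rather than my exact config):

   image=/boot/vmlinuz
       label=raid             # default: RAID 5 root (the P3 partitions)
       root=/dev/md1
       read-only
   image=/boot/vmlinuz
       label=rescue           # fallback: plain ext2 root, copied to every drive
       root=/dev/hda5
       read-only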

It could be argued that this layout would be more efficient if root were 
placed closer to the center of the cylinders.  But this is not a very 
active machine.  It basically holds my large multimedia files while they wait 
their turn for processing (it takes 100G to filter and process 1 hour of 
video).  No data streaming applications.  So I took the mental shortcut.

Hope this is useful.

Bob W.

At 12:03 PM 4/19/2004, you wrote:
>Hi Paul!
>
>Paul Phillips wrote:
>
>>I was under the impression that I can't boot off raid-5, but if that
>>information is dated then all the better.
>>Thanks again.
>
>Afaik, that's still true.  The reason is that the two (or more) drives of a 
>raid 1 hold essentially the same data, and if you do everything right, the 
>complete boot mechanism is on every disk, so you can boot from any of 
>them.  With raid 5, no single disk ever holds the complete data.
>
>Norman.
>
>--



^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, newest: 2004-04-21 13:24 UTC

Thread overview: 16+ messages
2004-04-18  3:34 new raid system for home use Paul Phillips
2004-04-18 18:49 ` Mark Hahn
2004-04-18 20:22   ` Paul Phillips
2004-04-19 16:03     ` Norman Schmidt
2004-04-19 16:20       ` Jeff Garzik
2004-04-19 17:50         ` Mark Hahn
2004-04-21 13:24       ` Robert Washburne
2004-04-18 20:48   ` Guy
2004-04-19 16:01     ` Norman Schmidt
2004-04-19  9:16   ` Clemens Schwaighofer
2004-04-19 17:57     ` Mark Hahn
2004-04-20  1:14   ` maarten van den Berg
     [not found] <4084AC05.6090800@tequila.co.jp>
2004-04-20 11:46 ` Mark Hahn
2004-04-20 12:54   ` KELEMEN Peter
2004-04-20 16:12     ` Jeff Garzik
2004-04-20 20:08       ` KELEMEN Peter
