* RAID 16?
@ 2006-02-02 5:59 David Liontooth
2006-02-02 6:03 ` Neil Brown
` (4 more replies)
0 siblings, 5 replies; 31+ messages in thread
From: David Liontooth @ 2006-02-02 5:59 UTC (permalink / raw)
To: linux-raid
We're wondering if it's possible to run the following --
* define 4 pairs of RAID 1 with an 8-port 3ware 9500S card
* the OS will see these as four normal drives
* use md to configure them into a RAID 6 array
Would this work? Would it be better than RAID 15? We're looking for a
very high redundancy system.
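Concretely, we imagine the md step would be a single command along these
lines (a sketch only; we're assuming the 3ware exports each RAID 1 unit to
Linux as an ordinary disk, and the sda..sdd names are just placeholders):

    # sketch: sda..sdd stand for the four hardware RAID 1 units from the 3ware
    mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd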
This is on a Debian system running the 2.6 kernel. We're contemplating
running EVMS on top.
Dave
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: RAID 16?
2006-02-02 5:59 RAID 16? David Liontooth
@ 2006-02-02 6:03 ` Neil Brown
2006-02-02 8:34 ` Gordon Henderson
` (3 subsequent siblings)
4 siblings, 0 replies; 31+ messages in thread
From: Neil Brown @ 2006-02-02 6:03 UTC (permalink / raw)
To: David Liontooth; +Cc: linux-raid
On Wednesday February 1, liontooth@cogweb.net wrote:
>
> We're wondering if it's possible to run the following --
>
> * define 4 pairs of RAID 1 with an 8-port 3ware 9500S card
> * the OS will see these as four normal drives
> * use md to configure them into a RAID 6 array
>
> Would this work? Would it be better than RAID 15? We're looking for a
> very high redundancy system.
There is no reason why this shouldn't work. Go for it....
NeilBrown
>
> This is on a Debian system running the 2.6 kernel. We're contemplating
> running EVMS on top.
>
> Dave
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: RAID 16?
2006-02-02 5:59 RAID 16? David Liontooth
2006-02-02 6:03 ` Neil Brown
@ 2006-02-02 8:34 ` Gordon Henderson
2006-02-02 16:17 ` Matthias Urlichs
` (2 subsequent siblings)
4 siblings, 0 replies; 31+ messages in thread
From: Gordon Henderson @ 2006-02-02 8:34 UTC (permalink / raw)
To: David Liontooth; +Cc: linux-raid
On Wed, 1 Feb 2006, David Liontooth wrote:
> We're wondering if it's possible to run the following --
>
> * define 4 pairs of RAID 1 with an 8-port 3ware 9500S card
> * the OS will see these as four normal drives
> * use md to configure them into a RAID 6 array
>
> Would this work? Would it be better than RAID 15? We're looking for a
> very high redundancy system.
So you have 8 disks and would end up with 2 disks' worth of data... (3 if
you used RAID-5).
It'll work, but I'm not convinced that you'll gain anything in real terms.
Maybe if this machine was going to be locked in a bunker with absolutely
zero access for its lifetime? Or live on a space shuttle?
I'd be tempted to turn off RAID on the 3ware and run RAID-6 over all 8
drives myself, but I like to keep things simple - to the extent that I'd
not even bother with EVMS. Keep as few software layers as possible between
the application & the platters... But your application may demand it,
so... :)
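If the card will export the drives individually (assuming it offers a
single-disk/JBOD mode; the device names below are illustrative), that is just:

    # sketch: one flat RAID-6 across all 8 exported drives
    mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sd[a-h]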
Gordon
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: RAID 16?
2006-02-02 5:59 RAID 16? David Liontooth
2006-02-02 6:03 ` Neil Brown
2006-02-02 8:34 ` Gordon Henderson
@ 2006-02-02 16:17 ` Matthias Urlichs
2006-02-02 16:28 ` Mattias Wadenstein
2006-02-02 18:42 ` Mario 'BitKoenig' Holbe
2006-02-02 16:44 ` Mr. James W. Laferriere
2006-02-03 2:32 ` Bill Davidsen
4 siblings, 2 replies; 31+ messages in thread
From: Matthias Urlichs @ 2006-02-02 16:17 UTC (permalink / raw)
To: linux-raid
Hi, David Liontooth wrote:
>
> We're wondering if it's possible to run the following --
>
> * define 4 pairs of RAID 1 with an 8-port 3ware 9500S card
> * the OS will see these as four normal drives
> * use md to configure them into a RAID 6 array
>
Hmm. You'd have eight disks, five(!) may fail at any time, giving you
two disks of capacity.
Ouch. That's not "very high" redundancy, that's "insane". ;-)
In your case, I'd install the eight disks as a straight 6-disk RAID6 with
two spares. Four disks may fail (just not within the time it takes to
reconstruct the array...), giving you four disks of capacity.
A net win, I'd say, esp. since it's far more likely that your two power
supplies will both die within the time you'd need to replace one.
But it's your call.
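Roughly like this (a sketch; the device names are placeholders for the
eight individually exported disks):

    # six active members plus two hot spares
    mdadm --create /dev/md0 --level=6 --raid-devices=6 --spare-devices=2 /dev/sd[a-h]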
--
Matthias Urlichs | {M:U} IT Design @ m-u-it.de | smurf@smurf.noris.de
Disclaimer: The quote was selected randomly. Really. | http://smurf.noris.de
- -
... Logically incoherent, semantically incomprehensible, and legally ...
impeccable!
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: RAID 16?
2006-02-02 16:17 ` Matthias Urlichs
@ 2006-02-02 16:28 ` Mattias Wadenstein
2006-02-02 16:54 ` Gordon Henderson
2006-02-02 18:42 ` Mario 'BitKoenig' Holbe
1 sibling, 1 reply; 31+ messages in thread
From: Mattias Wadenstein @ 2006-02-02 16:28 UTC (permalink / raw)
To: Matthias Urlichs; +Cc: linux-raid
On Thu, 2 Feb 2006, Matthias Urlichs wrote:
> Hi, David Liontooth wrote:
>
>>
>> We're wondering if it's possible to run the following --
>>
>> * define 4 pairs of RAID 1 with an 8-port 3ware 9500S card
>> * the OS will see these as four normal drives
>> * use md to configure them into a RAID 6 array
>>
> Hmm. You'd have eight disks, five(!) may fail at any time, giving you
> two disks of capacity.
>
> Ouch. That's not "very high" redundancy, that's "insane". ;-)
>
> In your case, I'd install the eight disks as a straight 6-disk RAID6 with
> two spares. Four disks may fail (just not within the time it takes to
> reconstruct the array...), giving you four disks of capacity.
Yes, but then you (probably) lose hotswap. A feature here was to use the
3ware hw raid for the raid1 pairs and use the hw-raid hotswap instead of
having to deal with linux hotswap (unless both drives in a raid1-set
die).
> A net win, I'd say, esp. since it's far more likely that your two power
> supplies will both die within the time you'd need to replace one.
>
> But it's your call.
If you only need reliability and can spend a few extra disks, I don't
find the setup so bad.
/Mattias Wadenstein
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: RAID 16?
2006-02-02 5:59 RAID 16? David Liontooth
` (2 preceding siblings ...)
2006-02-02 16:17 ` Matthias Urlichs
@ 2006-02-02 16:44 ` Mr. James W. Laferriere
2006-02-03 9:08 ` Lars Marowsky-Bree
2006-02-03 2:32 ` Bill Davidsen
4 siblings, 1 reply; 31+ messages in thread
From: Mr. James W. Laferriere @ 2006-02-02 16:44 UTC (permalink / raw)
To: David Liontooth; +Cc: linux-raid
Hello David ,
On Wed, 1 Feb 2006, David Liontooth wrote:
> We're wondering if it's possible to run the following --
>
> * define 4 pairs of RAID 1 with an 8-port 3ware 9500S card
> * the OS will see these as four normal drives
> * use md to configure them into a RAID 6 array
>
> Would this work? Would it be better than RAID 15? We're looking for a
> very high redundancy system.
>
> This is on a Debian system running the 2.6 kernel. We're contemplating
> running EVMS on top.
I happen to agree with the person who said (something like)
"keep as few software layers between you & the devices."
That said, why do a raid1 into a raid6?
Why not 2 raid6 arrays raid1'd?
A thought. HTH, JimL
--
+------------------------------------------------------------------+
| James W. Laferriere | System Techniques | Give me VMS |
| Network Engineer | 3542 Broken Yoke Dr. | Give me Linux |
| babydr@baby-dragons.com | Billings , MT. 59105 | only on AXP |
| http://www.asteriskhelpdesk.com/cgi-bin/astlance/r.cgi?babydr |
+------------------------------------------------------------------+
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: RAID 16?
2006-02-02 16:28 ` Mattias Wadenstein
@ 2006-02-02 16:54 ` Gordon Henderson
2006-02-02 20:24 ` Matthias Urlichs
2006-02-02 21:18 ` J. Ryan Earl
0 siblings, 2 replies; 31+ messages in thread
From: Gordon Henderson @ 2006-02-02 16:54 UTC (permalink / raw)
To: Mattias Wadenstein; +Cc: linux-raid
On Thu, 2 Feb 2006, Mattias Wadenstein wrote:
> Yes, but then you (probably) lose hotswap. A feature here was to use the
> 3ware hw raid for the raid1 pairs and use the hw-raid hotswap instead of
> having to deal with linux hotswap (unless both drives in a raid1-set
> die).
I'm not familiar with the 3ware controller (other than knowing the name) -
is it SCSI, SATA, or PATA? But ...
I've actually had very good results hot swapping SCSI drives on a live
linux system though. I guess you do run the risk of something crowbarring
the SCSI bus for a few cycles when the drive is unplugged or plugged in,
but it's all parity/checksummed, isn't it?
Anyone tried SATA drives yet?
Gordon
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: RAID 16?
2006-02-02 16:17 ` Matthias Urlichs
2006-02-02 16:28 ` Mattias Wadenstein
@ 2006-02-02 18:42 ` Mario 'BitKoenig' Holbe
2006-02-02 20:34 ` Matthias Urlichs
2006-02-03 0:20 ` Guy
1 sibling, 2 replies; 31+ messages in thread
From: Mario 'BitKoenig' Holbe @ 2006-02-02 18:42 UTC (permalink / raw)
To: linux-raid
Matthias Urlichs <smurf@smurf.noris.de> wrote:
> Hi, David Liontooth wrote:
>> * define 4 pairs of RAID 1 with an 8-port 3ware 9500S card * the OS will
> Hmm. You'd have eight disks, five(!) may fail at any time, giving you
Four, isn't it?
RAID6 covers the failure of 2 of the underlying RAID1s, which, in turn,
means failures of 2 disks each, so four.
Sometimes even 5, yes - given the right ones fail.
regards
Mario
--
We are the Bore. Resistance is futile. You will be bored.
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: RAID 16?
2006-02-02 16:54 ` Gordon Henderson
@ 2006-02-02 20:24 ` Matthias Urlichs
2006-02-02 21:18 ` J. Ryan Earl
1 sibling, 0 replies; 31+ messages in thread
From: Matthias Urlichs @ 2006-02-02 20:24 UTC (permalink / raw)
To: linux-raid
Hi, Gordon Henderson wrote:
> I've actually had very good results hot swapping SCSI drives on a live
> linux system though. I guess you do run the risk of something crowbarring
> the SCSI bus for a few cycles when the drive is unplugged or plugged in,
> but it's all parity/checksummed, isn't it?
You run the very real risk of unbalancing the bus when you plug/unplug,
thereby creating power spikes which may or may not destroy your hardware.
I did what you do for years. Always when the bus was idle, and with
antistatic precautions. One day, all SCSI driver chips on the bus were
shot afterwards. That was the last time I *ever* touched a powered SCSI
bus.
(I had to replace the SCSI interface chip in my tape drive (fiddly work
with the soldering iron); luckily it used a standard one that was
available.)
--
Matthias Urlichs | {M:U} IT Design @ m-u-it.de | smurf@smurf.noris.de
Disclaimer: The quote was selected randomly. Really. | http://smurf.noris.de
- -
Reality continues to ruin my life.
-- Calvin
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: RAID 16?
2006-02-02 18:42 ` Mario 'BitKoenig' Holbe
@ 2006-02-02 20:34 ` Matthias Urlichs
2006-02-03 0:20 ` Guy
1 sibling, 0 replies; 31+ messages in thread
From: Matthias Urlichs @ 2006-02-02 20:34 UTC (permalink / raw)
To: linux-raid
Hi, Mario 'BitKoenig' Holbe wrote:
>> Hmm. You'd have eight disks, five(!) may fail at any time, giving you
>
> Four, isn't it?
> RAID6 covers the failure of 2 of the underlying RAID1s, which, in turn,
> means failures of 2 disks each, so four. Sometimes even 5, yes - given the
> right ones fail.
No -- with any four failed disks you still do not have a single point of
failure. Only when you take out two RAID1 pairs and one disk in a third
pair does the second disk in that third pair become a SPOF.
--
Matthias Urlichs | {M:U} IT Design @ m-u-it.de | smurf@smurf.noris.de
Disclaimer: The quote was selected randomly. Really. | http://smurf.noris.de
- -
Man created God in his own image.
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: RAID 16?
2006-02-02 16:54 ` Gordon Henderson
2006-02-02 20:24 ` Matthias Urlichs
@ 2006-02-02 21:18 ` J. Ryan Earl
2006-02-02 21:29 ` Andy Smith
` (2 more replies)
1 sibling, 3 replies; 31+ messages in thread
From: J. Ryan Earl @ 2006-02-02 21:18 UTC (permalink / raw)
To: Gordon Henderson; +Cc: Mattias Wadenstein, linux-raid
Gordon Henderson wrote:
>I've actually had very good results hot swapping SCSI drives on a live
>linux system though.
>
>Anyone tried SATA drives yet?
>
Yes, and it does NOT work yet. libata does not support hotplugging of
harddrives yet: http://linux.yyz.us/sata/features.html
It supports hotplugging of the PCI controller itself, but not harddrives.
-ryan
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: RAID 16?
2006-02-02 21:18 ` J. Ryan Earl
@ 2006-02-02 21:29 ` Andy Smith
2006-02-02 22:38 ` Konstantin Olchanski
2006-02-03 2:54 ` Bill Davidsen
2 siblings, 0 replies; 31+ messages in thread
From: Andy Smith @ 2006-02-02 21:29 UTC (permalink / raw)
To: linux-raid
On Thu, Feb 02, 2006 at 03:18:14PM -0600, J. Ryan Earl wrote:
> Gordon Henderson wrote:
>
> >I've actually had very good results hot swapping SCSI drives on a live
> >linux system though.
> >
> >Anyone tried SATA drives yet?
> >
> Yes, and it does NOT work yet. libata does not support hotplugging of
> harddrives yet: http://linux.yyz.us/sata/features.html
>
> It supports hotplugging of the PCI controller itself, but not harddrives.
Hotplugging of SATA II drives works with 3ware raid controllers
*when installed in a chassis that supports it*. I believe pretty
much any chassis that says it supports SATA hotplug will do.
Personally I have had success with Chenbro 1U chassis.
--
http://strugglers.net/wiki/Xen_hosting -- A Xen VPS hosting hobby
Encrypted mail welcome - keyid 0x604DE5DB
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: RAID 16?
2006-02-02 21:18 ` J. Ryan Earl
2006-02-02 21:29 ` Andy Smith
@ 2006-02-02 22:38 ` Konstantin Olchanski
2006-02-03 2:31 ` Ross Vandegrift
2006-02-03 2:54 ` Bill Davidsen
2 siblings, 1 reply; 31+ messages in thread
From: Konstantin Olchanski @ 2006-02-02 22:38 UTC (permalink / raw)
To: J. Ryan Earl; +Cc: Gordon Henderson, Mattias Wadenstein, linux-raid
On Thu, Feb 02, 2006 at 03:18:14PM -0600, J. Ryan Earl wrote:
> >
> > Anyone tried [to hot-swap] SATA drives yet?
>
> Yes, and it does NOT work yet. libata does not support hotplugging of
> harddrives yet: http://linux.yyz.us/sata/features.html
Despite claims to the contrary, hot-swap SATA does work,
albeit with caveats and depending on chipsets and drivers.
Hot-unplug and hot-plug work for me with 3ware (8506) and LSI
Megaraid (8-port SATA) controllers. Take a drive out, put it back in,
type in a magic controller-dependent command to enable it, then
run mdadm --add.
With the SATA Rocketraid 1820 (hptmv.ko), hot-unplug works,
but hot-plug does not (hptmv.ko reports "drive plugged in and enabled",
but Linux I/O fails with errors).
With Promise PDC20319 (4-port SATA), I hot swap by "rmmod sata_promise;
remove old disk; connect new disk; modprobe sata_promise". I am sure
this "rmmod" trick works for hot-swapping disks on any SATA controller.
--
Konstantin Olchanski
Data Acquisition Systems: The Bytes Must Flow!
Email: olchansk-at-triumf-dot-ca
Snail mail: 4004 Wesbrook Mall, TRIUMF, Vancouver, B.C., V6T 2A3, Canada
^ permalink raw reply [flat|nested] 31+ messages in thread
* RE: RAID 16?
2006-02-02 18:42 ` Mario 'BitKoenig' Holbe
2006-02-02 20:34 ` Matthias Urlichs
@ 2006-02-03 0:20 ` Guy
2006-02-03 0:59 ` David Liontooth
1 sibling, 1 reply; 31+ messages in thread
From: Guy @ 2006-02-03 0:20 UTC (permalink / raw)
To: 'Mario 'BitKoenig' Holbe', linux-raid
He can lose 1 disk from any 2 RAID1 arrays. And then 2 disks from the
other 2 RAID1 arrays. A total of 6 of 8 disks can fail if chosen correctly.
I would go with an 8 disk RAID6, which would give the space of 6 disks and
support any 2 disks failing. Or a 7 disk RAID6 with 1 spare, but I think
that is overkill.
Guy
} -----Original Message-----
} From: linux-raid-owner@vger.kernel.org [mailto:linux-raid-
} owner@vger.kernel.org] On Behalf Of Mario 'BitKoenig' Holbe
} Sent: Thursday, February 02, 2006 1:42 PM
} To: linux-raid@vger.kernel.org
} Subject: Re: RAID 16?
}
} Matthias Urlichs <smurf@smurf.noris.de> wrote:
} > Hi, David Liontooth wrote:
} >> * define 4 pairs of RAID 1 with an 8-port 3ware 9500S card * the OS
} will
} > Hmm. You'd have eight disks, five(!) may fail at any time, giving you
}
} Four, isn't it?
} RAID6 covers the failure of 2 of the underlying RAID1s, which, in turn,
} means failures of 2 disks each, so four.
} Sometimes even 5, yes - given the right ones fail.
}
}
} regards
} Mario
} --
} We are the Bore. Resistance is futile. You will be bored.
}
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: RAID 16?
2006-02-03 0:20 ` Guy
@ 2006-02-03 0:59 ` David Liontooth
0 siblings, 0 replies; 31+ messages in thread
From: David Liontooth @ 2006-02-03 0:59 UTC (permalink / raw)
To: Guy; +Cc: 'Mario 'BitKoenig' Holbe', linux-raid
I appreciate all the expert feedback on this. We'll be able to make a
well-informed decision on how to proceed.
Best,
Dave
Guy wrote:
>He can lose 1 disk from any 2 RAID1 arrays. And then 2 disks from the
>other 2 RAID1 arrays. A total of 6 of 8 disks can fail if chosen correctly.
>
>I would go with an 8 disk RAID6, which would give the space of 6 disks and
>support any 2 disks failing. Or a 7 disk RAID6 with 1 spare, but I think
>that is overkill.
>
>Guy
>
>} -----Original Message-----
>} From: linux-raid-owner@vger.kernel.org [mailto:linux-raid-
>} owner@vger.kernel.org] On Behalf Of Mario 'BitKoenig' Holbe
>} Sent: Thursday, February 02, 2006 1:42 PM
>} To: linux-raid@vger.kernel.org
>} Subject: Re: RAID 16?
>}
>} Matthias Urlichs <smurf@smurf.noris.de> wrote:
>} > Hi, David Liontooth wrote:
>} >> * define 4 pairs of RAID 1 with an 8-port 3ware 9500S card * the OS
>} will
>} > Hmm. You'd have eight disks, five(!) may fail at any time, giving you
>}
>} Four, isn't it?
>} RAID6 covers the failure of 2 of the underlying RAID1s, which, in turn,
>} means failures of 2 disks each, so four.
>} Sometimes even 5, yes - given the right ones fail.
>}
>}
>} regards
>} Mario
>} --
>} We are the Bore. Resistance is futile. You will be bored.
>}
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: RAID 16?
2006-02-02 22:38 ` Konstantin Olchanski
@ 2006-02-03 2:31 ` Ross Vandegrift
0 siblings, 0 replies; 31+ messages in thread
From: Ross Vandegrift @ 2006-02-03 2:31 UTC (permalink / raw)
To: Konstantin Olchanski
Cc: J. Ryan Earl, Gordon Henderson, Mattias Wadenstein, linux-raid
On Thu, Feb 02, 2006 at 02:38:59PM -0800, Konstantin Olchanski wrote:
> Despite claims to the contrary, hot-swap SATA does work,
> albeit with caveats and depending on chipsets and drivers.
>
> Hot-unplug and hot-plug work for me with 3ware (8506) and LSI
> Megaraid (8-port SATA) controllers. Take a drive out, put it back in,
> type in a magic controller-dependent command to enable it, then
> run mdadm --add.
But in both of these cases, the Linux kernel never interacts directly
with individual drives, only with SCSI disks presented to the OS by
the controller.
In other words, the cards and controllers take care of the hotswap so
libata doesn't need to.
> With Promise PDC20319 (4-port SATA), I hot swap by "rmmod sata_promise;
> remove old disk; connect new disk; modprobe sata_promise". I am sure
> this "rmmod" trick works for hot-swapping disks on any SATA controller.
I used to do this to hot swap ISA cards :-)
I had a box with two ISA slots. I only had an ISA modem, sound card, and
NIC. The NIC stayed in. The modem and the sound card were both PnP,
so I could run isapnp to bring them up after a swap. Never fried
anything, despite it being a horribly bad idea!
Don't try that at home...
--
Ross Vandegrift
ross@lug.udel.edu
"The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell."
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: RAID 16?
2006-02-02 5:59 RAID 16? David Liontooth
` (3 preceding siblings ...)
2006-02-02 16:44 ` Mr. James W. Laferriere
@ 2006-02-03 2:32 ` Bill Davidsen
2006-02-05 23:42 ` Hard drive lifetime: wear from spinning up or rebooting vs running David Liontooth
2009-09-20 19:44 ` RAID 16? Matthias Urlichs
4 siblings, 2 replies; 31+ messages in thread
From: Bill Davidsen @ 2006-02-03 2:32 UTC (permalink / raw)
To: David Liontooth; +Cc: linux-raid
On Wed, 1 Feb 2006, David Liontooth wrote:
>
> We're wondering if it's possible to run the following --
>
> * define 4 pairs of RAID 1 with an 8-port 3ware 9500S card
> * the OS will see these as four normal drives
> * use md to configure them into a RAID 6 array
>
> Would this work? Would it be better than RAID 15? We're looking for a
> very high redundancy system.
You only get the size of two drives with that! I think you would get the
same reliability and better performance with four RAID-1 mirror arrays and
RAID-5 over that. You still have to lose four drives to lose data, but you
get the size of three instead of two.
If you really care about it, use two controllers, build four mirrored pairs
with one drive of each pair on each controller, then RAID-5 over that.
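Roughly, in md (a sketch; the device names just assume one drive of each
pair sits on each controller):

    # four RAID-1 pairs, each mirrored across the two controllers
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sde
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb /dev/sdf
    mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc /dev/sdg
    mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdd /dev/sdh
    # RAID-5 over the four mirrors
    mdadm --create /dev/md4 --level=5 --raid-devices=4 /dev/md0 /dev/md1 /dev/md2 /dev/md3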
>
> This is on a Debian system running the 2.6 kernel. We're contemplating
> running EVMS on top.
>
> Dave
--
bill davidsen <davidsen@tmr.com>
CTO TMR Associates, Inc
Doing interesting things with little computers since 1979
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: RAID 16?
2006-02-02 21:18 ` J. Ryan Earl
2006-02-02 21:29 ` Andy Smith
2006-02-02 22:38 ` Konstantin Olchanski
@ 2006-02-03 2:54 ` Bill Davidsen
2 siblings, 0 replies; 31+ messages in thread
From: Bill Davidsen @ 2006-02-03 2:54 UTC (permalink / raw)
To: J. Ryan Earl; +Cc: Gordon Henderson, Mattias Wadenstein, linux-raid
On Thu, 2 Feb 2006, J. Ryan Earl wrote:
>
> Gordon Henderson wrote:
>
> >I've actually had very good results hot swapping SCSI drives on a live
> >linux system though.
> >
> >Anyone tried SATA drives yet?
> >
> Yes, and it does NOT work yet. libata does not support hotplugging of
> harddrives yet: http://linux.yyz.us/sata/features.html
>
> It supports hotplugging of the PCI controller itself, but not harddrives.
I can add controllers but no devices? I have to think the norm is exactly
the opposite... One of the SATA controllers with the plugs on the back for
your little external box.
A work in progress, I realize.
--
bill davidsen <davidsen@tmr.com>
CTO TMR Associates, Inc
Doing interesting things with little computers since 1979
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: RAID 16?
2006-02-02 16:44 ` Mr. James W. Laferriere
@ 2006-02-03 9:08 ` Lars Marowsky-Bree
0 siblings, 0 replies; 31+ messages in thread
From: Lars Marowsky-Bree @ 2006-02-03 9:08 UTC (permalink / raw)
To: linux-raid
On 2006-02-02T09:44:36, "Mr. James W. Laferriere" <babydr@baby-dragons.com> wrote:
> That said why do a raid1 into a raid6 ?
> Why not a 2 raid6 arrays raid1'd ?
The available space is the same, but the redundancy would be much
worse.
Sincerely,
Lars Marowsky-Brée
--
High Availability & Clustering
SUSE Labs, Research and Development
SUSE LINUX Products GmbH - A Novell Business
"Ignorance more frequently begets confidence than does knowledge"
  -- Charles Darwin
^ permalink raw reply [flat|nested] 31+ messages in thread
* Hard drive lifetime: wear from spinning up or rebooting vs running
2006-02-03 2:32 ` Bill Davidsen
@ 2006-02-05 23:42 ` David Liontooth
2006-02-06 3:57 ` Konstantin Olchanski
` (4 more replies)
2009-09-20 19:44 ` RAID 16? Matthias Urlichs
1 sibling, 5 replies; 31+ messages in thread
From: David Liontooth @ 2006-02-05 23:42 UTC (permalink / raw)
To: linux-raid
In designing an archival system, we're trying to find data on when it
pays to power or spin the drives down versus keeping them running.
Is there a difference between spinning up the drives from sleep and from
a reboot? Leaving out the cost imposed on the (separate) operating
system drive.
Temperature obviously matters -- a linear approximation might look like
this,
Lifetime = 60 - 12 [(t-40)/2.5]
where 60 is the average maximum lifetime, achieved at 40 degrees C and
below, and lifetime decreases by a year for every 2.5 degree rise in
temperature. Does anyone have an actual formula?
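As a sanity check of that back-of-the-envelope guess (scratch-paper
arithmetic only, not a measurement), a drive held at 45 C would come out at

    $ echo '60 - 12 * (45 - 40) / 2.5' | bc
    36

months, i.e. three years instead of five.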
To keep it simple, let's assume we keep temperature at or below what is
required to reach average maximum lifetime. What is the cost of spinning
up the drives in the currency of lifetime months?
My guess would be that the cost is tiny -- on the order of minutes.
Or are different components stressed in a running drive versus one that
is spinning up, so it's not possible to translate the cost of one into
the currency of the other?
Finally, is there passive decay of drive components in storage?
Dave
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: Hard drive lifetime: wear from spinning up or rebooting vs running
2006-02-05 23:42 ` Hard drive lifetime: wear from spinning up or rebooting vs running David Liontooth
@ 2006-02-06 3:57 ` Konstantin Olchanski
2006-02-06 5:25 ` Patrik Jonsson
2006-02-06 4:35 ` Richard Scobie
` (3 subsequent siblings)
4 siblings, 1 reply; 31+ messages in thread
From: Konstantin Olchanski @ 2006-02-06 3:57 UTC (permalink / raw)
To: David Liontooth; +Cc: linux-raid
On Sun, Feb 05, 2006 at 03:42:26PM -0800, David Liontooth wrote:
> In designing an archival system, we're trying to find data on when it
> pays to power or spin the drives down versus keeping them running.
>
> Temperature obviously matters -- a linear approximation might look like this,
> Lifetime = 60 - 12 [(t-40)/2.5]
I would expect an exponential rather than linear formula (a linear formula yields
negative lifetimes): L = ... exp(-T) or L = ... exp(1/kT).
> Does anyone have an actual formula?
I doubt it, because it requires measuring lifetimes, which takes
years, by which time the data are useless because the disks you used
are obsolete.
> Or are different components stressed in a running drive versus one that
> is spinning up, so it's not possible to translate the cost of one into
> the currency of the other?
I would expect that spinning up a drive is very stressful and is likely
to kill the drive [spindle motor power electronics]. In my experience
disks die about evenly from 3 causes: no spinning (dead spindle motor
power electronics), heads do not move (dead head motor power
electronics), or spontaneously developing bad sectors (disk platter
contamination?).
Hmm... NASA-type people may have data for lifetimes of power electronics,
at least the shape of the temperature dependence (linear or exp or ???).
--
Konstantin Olchanski
Data Acquisition Systems: The Bytes Must Flow!
Email: olchansk-at-triumf-dot-ca
Snail mail: 4004 Wesbrook Mall, TRIUMF, Vancouver, B.C., V6T 2A3, Canada
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: Hard drive lifetime: wear from spinning up or rebooting vs running
2006-02-05 23:42 ` Hard drive lifetime: wear from spinning up or rebooting vs running David Liontooth
2006-02-06 3:57 ` Konstantin Olchanski
@ 2006-02-06 4:35 ` Richard Scobie
2006-02-06 10:09 ` Mattias Wadenstein
` (2 subsequent siblings)
4 siblings, 0 replies; 31+ messages in thread
From: Richard Scobie @ 2006-02-06 4:35 UTC (permalink / raw)
To: linux-raid
David Liontooth wrote:
> Temperature obviously matters -- a linear approximation might look like
> this,
>
> Lifetime = 60 - 12 [(t-40)/2.5]
>
> where 60 is the average maximum lifetime, achieved at 40 degrees C and
> below, and lifetime decreases by a year for every 2.5 degree rise in
> temperature. Does anyone have an actual formula?
This paper from Hitachi may be of some use for their drives:
http://www.hgst.com/hdd/technolo/drivetemp/drivetemp.htm
Regards,
Richard
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: Hard drive lifetime: wear from spinning up or rebooting vs running
2006-02-06 3:57 ` Konstantin Olchanski
@ 2006-02-06 5:25 ` Patrik Jonsson
0 siblings, 0 replies; 31+ messages in thread
From: Patrik Jonsson @ 2006-02-06 5:25 UTC (permalink / raw)
Cc: linux-raid
Konstantin Olchanski wrote:
>>Does anyone have an actual formula?
>
>
> I doubt it, because it requires measuring lifetimes, which takes
> years, by which time the data are useless because the disks you used
> are obsolete.
I found this article on drive reliability from Seagate:
http://www.digit-life.com/articles/storagereliability/
They do indeed model the temperature derating as an exponential, such
that 25C is the reference temp and at 30C the MTBF is reduced to 78%.
Running the drive at 40C gives you half the lifetime.
Can't find anything about spinup/down though, but they do talk about how
MTBF depends on power-on hours per year, which should be a correlated
quantity. They assume the MTBF goes *up* the fewer POH/yr the drive has;
there's never any reduction due to excessive spinup/down, or at least
the reduction is never dominant. They also talk about the effect of duty
cycle.
cheers,
/Patrik
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: Hard drive lifetime: wear from spinning up or rebooting vs running
2006-02-05 23:42 ` Hard drive lifetime: wear from spinning up or rebooting vs running David Liontooth
2006-02-06 3:57 ` Konstantin Olchanski
2006-02-06 4:35 ` Richard Scobie
@ 2006-02-06 10:09 ` Mattias Wadenstein
2006-02-06 16:45 ` David Liontooth
2006-02-06 19:22 ` Brad Dameron
2006-02-06 21:15 ` Dan Stromberg
4 siblings, 1 reply; 31+ messages in thread
From: Mattias Wadenstein @ 2006-02-06 10:09 UTC (permalink / raw)
To: David Liontooth; +Cc: linux-raid
On Sun, 5 Feb 2006, David Liontooth wrote:
> In designing an archival system, we're trying to find data on when it
> pays to power or spin the drives down versus keeping them running.
>
> Is there a difference between spinning up the drives from sleep and from
> a reboot? Leaving out the cost imposed on the (separate) operating
> system drive.
Hitachi claims "5 years (Surface temperature of HDA is 45°C or less) Life
of the drive does not change in the case that the drive is used
intermittently." for their ultrastar 10K300 drives. I suspect that the
best estimates you're going to get is from the manufacturers, if you can
find the right documents (OEM specifications, not marketing blurbs).
For their deskstar (sata/pata) drives I didn't find life time estimates
beyond 50000 start-stop-cycles.
/Mattias Wadenstein
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: Hard drive lifetime: wear from spinning up or rebooting vs running
2006-02-06 10:09 ` Mattias Wadenstein
@ 2006-02-06 16:45 ` David Liontooth
2006-02-06 17:12 ` Francois Barre
0 siblings, 1 reply; 31+ messages in thread
From: David Liontooth @ 2006-02-06 16:45 UTC (permalink / raw)
To: linux-raid
Mattias Wadenstein wrote:
> On Sun, 5 Feb 2006, David Liontooth wrote:
>
>> In designing an archival system, we're trying to find data on when it
>> pays to power or spin the drives down versus keeping them running.
>
> Hitachi claims "5 years (Surface temperature of HDA is 45°C or less)
> Life of the drive does not change in the case that the drive is used
> intermittently." for their ultrastar 10K300 drives. I suspect that the
> best estimates you're going to get are from the manufacturers, if you
> can find the right documents (OEM specifications, not marketing blurbs).
"Intermittent" may assume the drive is powered on and in regular use and
may simply be a claim that spindle drive components are designed to fail
simultaneously with disk platter and head motor components.
Konstantin's observation that "disks die about evenly from 3 causes: no
spinning (dead spindle motor power electronics), heads do not move (dead
head motor power electronics), or spontaneously developing bad sectors
(disk platter contamination?)" is consistent with a rational goal of
manufacturing components with similar lifetimes under normal use.
> For their deskstar (sata/pata) drives I didn't find life time
> estimates beyond 50000 start-stop-cycles.
If components are in fact manufactured to fail simultaneously under
normal use (including a dozen or two start-stop cycles a day), then
taking the drive off-line for more than a few hours should
unproblematically extend its life.
Appreciate all the good advice and references. While we have to rely on
specifications rather than actual long-term tests, this should still
move us in the right direction. One of the problems with creating a
digital archive is that the technology has no archival history. We know
acid-free paper lasts millennia; how long do modern hard drives last in
cold storage? To some people's horror, we now know home-made CDs last a
couple of years.
Dave
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: Hard drive lifetime: wear from spinning up or rebooting vs running
2006-02-06 16:45 ` David Liontooth
@ 2006-02-06 17:12 ` Francois Barre
2006-02-07 8:44 ` Hans Kristian Rosbach
2006-02-07 19:18 ` Neil Bortnak
0 siblings, 2 replies; 31+ messages in thread
From: Francois Barre @ 2006-02-06 17:12 UTC (permalink / raw)
To: David Liontooth; +Cc: linux-raid
2006/2/6, David Liontooth <liontooth@cogweb.net>:
> Mattias Wadenstein wrote:
>
> > On Sun, 5 Feb 2006, David Liontooth wrote:
> > For their deskstar (sata/pata) drives I didn't find life time
> > estimates beyond 50000 start-stop-cycles.
>
> If components are in fact manufactured to fail simultaneously under
> normal use (including a dozen or two start-stop cycles a day), then
> taking the drive off-line for more than a few hours should
> unproblematically extend its life.
>
IMHO, a single start-stop cycle is more costly in terms of lifetime
than a couple of hours of spinning. As far as I know, on current disks
(especially 7200 and 10k rpm ones), spinup is a really critical,
life-consuming action; the spindle motor is stressed much more than
it is once the spin speed is stable. In our current storage design,
disks are never stopped (sorry for Earth...), because it isn't worth
spinning down for less than a couple of days.
However, temperature has a real impact on the heads (incl. head motors),
because of thermal expansion of the materials when they overheat. So
cooling your drives is a major issue.
> how long do modern hard drives last in cold storage?
Demagnetization?
A few years back, there were tools to read and then rewrite floppy
contents to remagnetize them. I guess it would be the same for a hard
drive: periodically re-read and re-write each and every sector of the
drive to guarantee good magnetization of the surface.
I would not give a drive more than 100 years before it loses all its
content to demagnetization... Anyway, in 100 years, no computer will
have a controller to plug in SATA or SCSI :-p.
I guess a long-lived system should not sit cold, but should
re-activate/check its content periodically...
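In Linux terms that periodic exercise could be as simple as this (a sketch
only; sdX is a placeholder, badblocks -n is its non-destructive
read-rewrite mode, and it should not be run on a mounted filesystem):

    # pure read pass over every sector
    dd if=/dev/sdX of=/dev/null bs=1M
    # or read and rewrite every sector in place
    badblocks -nsv /dev/sdX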
> we now know home-made CDs last a couple of years.
I thought it was said to be at least a century... But with the
enormous cost reductions in this area, it's no surprise the
lifetime decreased so much.
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: Hard drive lifetime: wear from spinning up or rebooting vs running
2006-02-05 23:42 ` Hard drive lifetime: wear from spinning up or rebooting vs running David Liontooth
` (2 preceding siblings ...)
2006-02-06 10:09 ` Mattias Wadenstein
@ 2006-02-06 19:22 ` Brad Dameron
2006-02-06 21:15 ` Dan Stromberg
4 siblings, 0 replies; 31+ messages in thread
From: Brad Dameron @ 2006-02-06 19:22 UTC (permalink / raw)
To: linux-raid
On Sun, 2006-02-05 at 15:42 -0800, David Liontooth wrote:
> In designing an archival system, we're trying to find data on when it
> pays to power or spin the drives down versus keeping them running.
>
> Is there a difference between spinning up the drives from sleep and from
> a reboot? Leaving out the cost imposed on the (separate) operating
> system drive.
>
> Temperature obviously matters -- a linear approximation might look like
> this,
>
> Lifetime = 60 - 12 [(t-40)/2.5]
>
> where 60 is the average maximum lifetime, achieved at 40 degrees C and
> below, and lifetime decreases by a year for every 2.5 degree rise in
> temperature. Does anyone have an actual formula?
>
> To keep it simple, let's assume we keep temperature at or below what is
> required to reach average maximum lifetime. What is the cost of spinning
> up the drives in the currency of lifetime months?
>
> My guess would be that the cost is tiny -- in the order of minutes.
>
> Or are different components stressed in a running drive versus one that
> is spinning up, so it's not possible to translate the cost of one into
> the currency of the other?
>
> Finally, is there passive decay of drive components in storage?
>
> Dave
I read somewhere (still looking for the link) that the constant on/off
of a drive actually decreases the drive's lifespan due to the
heating/cooling of the bearings. It was determined to be best
to leave the drive spinning.
Brad Dameron
SeaTab Software
www.seatab.com
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: Hard drive lifetime: wear from spinning up or rebooting vs running
2006-02-05 23:42 ` Hard drive lifetime: wear from spinning up or rebooting vs running David Liontooth
` (3 preceding siblings ...)
2006-02-06 19:22 ` Brad Dameron
@ 2006-02-06 21:15 ` Dan Stromberg
4 siblings, 0 replies; 31+ messages in thread
From: Dan Stromberg @ 2006-02-06 21:15 UTC (permalink / raw)
To: David Liontooth; +Cc: linux-raid, strombrg
Drives are probably going to have a lifetime that is proportionate to a
variety of things, and while I'm not a physicist or mechanical engineer,
nor in the hard disk business, the things that come to mind first are:
1) Thermal stress due to temperature changes - with more rapid changes
being more severe (expansion and contraction, I assume - viz. one of
those projectors or cars that run hot and leave a fan running for a
while before fully powering off)
2) The amount of time a disk spends in a powered-off state (EG,
lubricants may congeal, and about every time my employer, UCI, has a
campus-wide power outage, -some- piece of equipment somewhere on campus
fails to come back up - probably due to thermal stress)
3) The number of times a disk goes to a powered-off state (thermal
stress again)
4) The amount of bumping around the disk undergoes, which may to an
extent be greater in disks that are surrounded by other disks, with
disks on the physical periphery of your RAID solution bumping around a
little less - those little rubber things that you screw the drive into
may help here.
5) The materials used in the platters, heads, servo, etc.
6) The number of alternate blocks for remapping bad blocks
7) The degree of tendency for a head crash to peel off a bunch of
material, or to just make a tiny scratch, and the degree of tendency for
scratched-off particles to bang into platters or heads later and scrape
off more particles - which can sometimes yield an exponential decay of
drive usability
8) How good the clean room(s) the drive was built in was/were
9) How good a drive is at parking the heads over unimportant parts of
the platters when bumped, dropped, in an earthquake, when turned off,
etc.
If you want to be thorough with this, you probably want to employ some
materials scientists, some statisticians, get a bunch of different kinds
of drives and characterize their designs somehow, do multiple
longitudinal studies, hunt for correlations between drive attributes and
lifetimes, etc.
And I totally agree with a previous poster - this stuff may all change
quite a bit by the time the study is done, so it'd be a really good idea
to look for ways of increasing your characterization's longevity somehow,
possibly by delving down into individual parts of the drives and looking
at their lifetime. But don't rule out holistic/chaotic effects
unnecessarily, even if "the light's better over here" when looking at
the reductionistic view of drives.
PS: Letting a drive stay powered but not spinning is sometimes called a
"warm spare", while a drive that's spinning all the time even while not
in active use in a RAID array is called a "hot spare".
HTH :)
On Sun, 2006-02-05 at 15:42 -0800, David Liontooth wrote:
> In designing an archival system, we're trying to find data on when it
> pays to power or spin the drives down versus keeping them running.
>
> Is there a difference between spinning up the drives from sleep and from
> a reboot? Leaving out the cost imposed on the (separate) operating
> system drive.
>
> Temperature obviously matters -- a linear approximation might look like
> this,
>
> Lifetime = 60 - 12 [(t-40)/2.5]
>
> where 60 is the average maximum lifetime, achieved at 40 degrees C and
> below, and lifetime decreases by a year for every 2.5 degree rise in
> temperature. Does anyone have an actual formula?
>
> To keep it simple, let's assume we keep temperature at or below what is
> required to reach average maximum lifetime. What is the cost of spinning
> up the drives in the currency of lifetime months?
>
> My guess would be that the cost is tiny -- in the order of minutes.
>
> Or are different components stressed in a running drive versus one that
> is spinning up, so it's not possible to translate the cost of one into
> the currency of the other?
>
> Finally, is there passive decay of drive components in storage?
>
> Dave
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: Hard drive lifetime: wear from spinning up or rebooting vs running
2006-02-06 17:12 ` Francois Barre
@ 2006-02-07 8:44 ` Hans Kristian Rosbach
2006-02-07 19:18 ` Neil Bortnak
1 sibling, 0 replies; 31+ messages in thread
From: Hans Kristian Rosbach @ 2006-02-07 8:44 UTC (permalink / raw)
To: Francois Barre; +Cc: linux-raid
On Mon, 2006-02-06 at 18:12 +0100, Francois Barre wrote:
> 2006/2/6, David Liontooth <liontooth@cogweb.net>:
> > how long do modern hard drives last in cold storage?
> Demagnetation ?
> A couple of years back in time, there were some tools to read and then
> rewrite floppy contents to remagnet the floppy content. I guess it
> shall be the same for the drive : periodically re-read and re-write
> each and every sector of the drive to grant a good magnetation of the
> surface.
> I would not give more than 100 years for a drive to lose all its
> content by demagnetation... Anyway, in 100 years, no computer will
> have the controllers to plug a sata nor a scsi :-p.
> I guess a long-living system should not stay cool, and
> re-activate/check its content periodically...
A program called Spinrite can fix such floppies (it has done so several
times for me, even on floppies I formatted just 15 min earlier that
suddenly could not be read in another computer - one sweep with Spinrite
and they worked 100%).
It can also remagnetise and even exercise bad sectors on HDDs.
I have tried this on about 20 working disks now, and it has found
blocks that were hard to read on 4 of them. These were fixed using
statistical recovery. After running it again a week later, it found
nothing wrong with the disks.
http://grc.com/spinrite.htm
PS: The website looks a bit suspect, but the program actually does
work as advertised as far as I have found.
PS: It seems it does not like some Adaptec SCSI cards - no matter what
disk I tested, it got read errors on every sector. Both disks and
controllers work fine for booting Linux/Windows, so I guess it's
the SCSI BIOS/DOS interaction that is causing problems for Spinrite.
> > we now know home-made CDs last a couple of years.
> I thought it was said to be at least a century... But with the
> enormous cost reduction operated in this area, it's no surprise the
> lifetime decreased so much.
I've seen CDs destroyed just because of morning moisture. The
top side (the reflective side) is often unprotected and _very_ sensitive
to moisture. Imagine sprinklers going off in your offices - how much
valuable data is on those CDs you do not store in a safe?
I have heard that some recovery companies can recover data from
such damaged CDs, since the data is not stored in the reflective layer.
But I imagine it is a very costly experience.
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: Hard drive lifetime: wear from spinning up or rebooting vs running
2006-02-06 17:12 ` Francois Barre
2006-02-07 8:44 ` Hans Kristian Rosbach
@ 2006-02-07 19:18 ` Neil Bortnak
1 sibling, 0 replies; 31+ messages in thread
From: Neil Bortnak @ 2006-02-07 19:18 UTC (permalink / raw)
To: Francois Barre; +Cc: David Liontooth, linux-raid
On Mon, 2006-02-06 at 18:12 +0100, Francois Barre wrote:
> A couple of years back in time, there were some tools to read and then
> rewrite floppy contents to remagnet the floppy content. I guess it
> shall be the same for the drive : periodically re-read and re-write
> each and every sector of the drive to grant a good magnetation of the
> surface.
I don't think this applies so much anymore. The coercivity of modern,
high-density magnetic media is quite a lot higher than that of floppy
disks. These days when you encode it, it really stays unless a very
strong magnetic force acts on it. Much stronger than that of any
household magnet.
That's why waving a big magnet over modern tapes doesn't actually do
anything to them (or so I'm told, I still need to test that, but I don't
use tape for backup anymore so I don't really have the materials handy).
The strength of the magnet is not powerful enough to overcome the
coercivity of the media. I don't know about those molybdenum magnets.
It's a problem in data remanence protection (ensuring that the data is
*really* gone on a hard disk).
Just to go OT for a second, doing one of those 32+ pass security wipes
won't necessarily protect you against a highly motivated attacker with a
scanning tunnelling microscope. This is because the data can be laid
down on a track, but the heads can move ever so slightly (but still
well within tolerances) out of alignment. When you do your 32+ passes,
there is still a very small strip of the data left that can be
recovered.
The US military used to melt their drives into slag, but then EPA
regulations put a stop to that because of some of the more exotic
chemicals in the drive. Now they apparently use one of their 2.4MW
electomagnets for the job. Apparently the platters end up pressed
against the top of the drive. :)
Neil
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: RAID 16?
2006-02-03 2:32 ` Bill Davidsen
2006-02-05 23:42 ` Hard drive lifetime: wear from spinning up or rebooting vs running David Liontooth
@ 2009-09-20 19:44 ` Matthias Urlichs
1 sibling, 0 replies; 31+ messages in thread
From: Matthias Urlichs @ 2009-09-20 19:44 UTC (permalink / raw)
To: linux-raid
On Thu, 02 Feb 2006 21:32:44 -0500, Bill Davidsen wrote:
>> Would this work? Would it be better than RAID 15? We're looking for a
>> very high redundancy system.
>
> You only get the size of two drives with that! I think you would get the
> same reliability and better performance with four RAID-1 mirror arrays
> and RAID-5 over that. You still have to lose four drives to lose data,
> but you get the size of three instead of two.
But in his case, the loss of any four disks is not a problem.
Personally, in this case I'd build a simple RAID6 with two spares. You
can lose four disks (just not at the same time :-P ) and you still have
four disks' capacity.
Of course, if you're worried about controller failure, a RAID1 built from
two RAID6 (one on each controller) is your only high-reliability option.
--
^ permalink raw reply [flat|nested] 31+ messages in thread
end of thread
Thread overview: 31+ messages:
2006-02-02 5:59 RAID 16? David Liontooth
2006-02-02 6:03 ` Neil Brown
2006-02-02 8:34 ` Gordon Henderson
2006-02-02 16:17 ` Matthias Urlichs
2006-02-02 16:28 ` Mattias Wadenstein
2006-02-02 16:54 ` Gordon Henderson
2006-02-02 20:24 ` Matthias Urlichs
2006-02-02 21:18 ` J. Ryan Earl
2006-02-02 21:29 ` Andy Smith
2006-02-02 22:38 ` Konstantin Olchanski
2006-02-03 2:31 ` Ross Vandegrift
2006-02-03 2:54 ` Bill Davidsen
2006-02-02 18:42 ` Mario 'BitKoenig' Holbe
2006-02-02 20:34 ` Matthias Urlichs
2006-02-03 0:20 ` Guy
2006-02-03 0:59 ` David Liontooth
2006-02-02 16:44 ` Mr. James W. Laferriere
2006-02-03 9:08 ` Lars Marowsky-Bree
2006-02-03 2:32 ` Bill Davidsen
2006-02-05 23:42 ` Hard drive lifetime: wear from spinning up or rebooting vs running David Liontooth
2006-02-06 3:57 ` Konstantin Olchanski
2006-02-06 5:25 ` Patrik Jonsson
2006-02-06 4:35 ` Richard Scobie
2006-02-06 10:09 ` Mattias Wadenstein
2006-02-06 16:45 ` David Liontooth
2006-02-06 17:12 ` Francois Barre
2006-02-07 8:44 ` Hans Kristian Rosbach
2006-02-07 19:18 ` Neil Bortnak
2006-02-06 19:22 ` Brad Dameron
2006-02-06 21:15 ` Dan Stromberg
2009-09-20 19:44 ` RAID 16? Matthias Urlichs