* Hard drive Reliability?
@ 2004-05-19 19:58 John Lange
2004-05-19 20:49 ` Måns Rullgård
` (3 more replies)
0 siblings, 4 replies; 31+ messages in thread
From: John Lange @ 2004-05-19 19:58 UTC (permalink / raw)
To: LinuxRaid
I have had a few discussions recently with technical service people, and we
have all come to the conclusion that hard drives are far less reliable than
they used to be.
Case in point: I have a 120GB Maxtor drive in a server that began to fail
less than 8 months into service. Major headache.
I've heard similar stories from other people. A guy I know who sells
both desktop and file server systems says he now only builds systems on
motherboards with some kind of RAID 1, with dual drives, because drives
fail so often; this saves loads of headaches. After their system has
died and they've 'lost everything', people are more than willing to pay the
extra $150 for redundancy.
I know drive manufacturers were sued recently in a class action for
shipping drives which they knew were going to fail prematurely, but that
was a few years back.
So what are other people's feelings about drive reliability, and are some
brands better than others?
Does anyone know of any web sites with statistics or test data?
--
John Lange
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: Hard drive Reliability?
2004-05-19 19:58 Hard drive Reliability? John Lange
@ 2004-05-19 20:49 ` Måns Rullgård
2004-05-19 21:28 ` Sevatio
2004-05-19 22:04 ` berk walker
` (2 subsequent siblings)
3 siblings, 1 reply; 31+ messages in thread
From: Måns Rullgård @ 2004-05-19 20:49 UTC (permalink / raw)
To: linux-raid
John Lange <john.lange@bighostbox.com> writes:
> I have had a few discussions recently with technical service people, and we
> have all come to the conclusion that hard drives are far less reliable than
> they used to be.
>
> Case in point: I have a 120GB Maxtor drive in a server that began to fail
> less than 8 months into service. Major headache.
>
> I've heard similar stories from other people. A guy I know who sells
> both desktop and file server systems says he now only builds systems on
> motherboards with some kind of RAID 1, with dual drives, because drives
> fail so often; this saves loads of headaches. After their system has
> died and they've 'lost everything', people are more than willing to pay the
> extra $150 for redundancy.
>
> I know drive manufacturers were sued recently in a class action for
> shipping drives which they knew were going to fail prematurely, but that
> was a few years back.
>
> So what are other people's feelings about drive reliability, and are some
> brands better than others?
I'd agree that drives are not as good as they used to be. I recently
had a one year old Western Digital disk suddenly drop dead while
running. Nothing could be recovered from it. Shortly after that I
bought four Seagate Barracuda ATA disks, one of which was defective.
It would work flawlessly for an hour to a month, then shut itself
down. Rebooting made it run for a while before it repeated.
Compare this to the 1GB Quantum Fireball SCSI disk sitting in my
firewall. It's been running 24/7 since some time around 1996.
> Does anyone know of any web sites with statistics or test data?
That would be interesting.
--
Måns Rullgård
mru@kth.se
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: Hard drive Reliability?
2004-05-19 20:49 ` Måns Rullgård
@ 2004-05-19 21:28 ` Sevatio
2004-05-19 22:02 ` John Lange
` (2 more replies)
0 siblings, 3 replies; 31+ messages in thread
From: Sevatio @ 2004-05-19 21:28 UTC (permalink / raw)
To: linux-raid
Måns Rullgård wrote:
> John Lange <john.lange@bighostbox.com> writes:
>
>
>>I have had a few discussions recently with technical service people, and we
>>have all come to the conclusion that hard drives are far less reliable than
>>they used to be.
>>
>>Case in point: I have a 120GB Maxtor drive in a server that began to fail
>>less than 8 months into service. Major headache.
>>
>>I've heard similar stories from other people. A guy I know who sells
>>both desktop and file server systems says he now only builds systems on
>>motherboards with some kind of RAID 1, with dual drives, because drives
>>fail so often; this saves loads of headaches. After their system has
>>died and they've 'lost everything', people are more than willing to pay the
>>extra $150 for redundancy.
>>
>>I know drive manufacturers were sued recently in a class action for
>>shipping drives which they knew were going to fail prematurely, but that
>>was a few years back.
>>
>>So what are other people's feelings about drive reliability, and are some
>>brands better than others?
>
>
> I'd agree that drives are not as good as they used to be. I recently
> had a one year old Western Digital disk suddenly drop dead while
> running. Nothing could be recovered from it. Shortly after that I
> bought four Seagate Barracuda ATA disks, one of which was defective.
> It would work flawlessly for an hour to a month, then shut itself
> down. Rebooting made it run for a while before it repeated.
>
> Compare this to the 1GB Quantum Fireball SCSI disk sitting in my
> firewall. It's been running 24/7 since some time around 1996.
>
>
>>Does anyone know of any web sites with statistics or test data?
>
>
> That would be interesting.
>
I would also concur. I had a Western Digital that failed after 10
months. Western Digital sent me another one that failed in exactly 10
months again. I didn't want to send it in to get yet another piece of
WD junk. So, lesson learned to stay away from WD drives; correct or not.
Sevatio
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: Hard drive Reliability?
2004-05-19 21:28 ` Sevatio
@ 2004-05-19 22:02 ` John Lange
2004-05-19 22:42 ` jim
2004-05-19 22:26 ` Guy
2004-05-19 23:44 ` maarten van den Berg
2 siblings, 1 reply; 31+ messages in thread
From: John Lange @ 2004-05-19 22:02 UTC (permalink / raw)
To: Sevatio; +Cc: linux-raid
> WD junk. So, lesson learned to stay away from WD drives; correct or
> not.
I'd agree, with the one addition that unfortunately they _ALL_ seem to
be junk. In my discussions with various associates we
can't seem to come to any consensus on any particular brands being
better or worse than others.
Largely that is my reason for posting here.
John
On Wed, 2004-05-19 at 16:28, Sevatio wrote:
> Måns Rullgård wrote:
>
> > John Lange <john.lange@bighostbox.com> writes:
> >
> >
> >>I have had a few discussions recently with technical service people, and we
> >>have all come to the conclusion that hard drives are far less reliable than
> >>they used to be.
> >>
> >>Case in point: I have a 120GB Maxtor drive in a server that began to fail
> >>less than 8 months into service. Major headache.
> >>
> >>I've heard similar stories from other people. A guy I know who sells
> >>both desktop and file server systems says he now only builds systems on
> >>motherboards with some kind of RAID 1, with dual drives, because drives
> >>fail so often; this saves loads of headaches. After their system has
> >>died and they've 'lost everything', people are more than willing to pay the
> >>extra $150 for redundancy.
> >>
> >>I know drive manufacturers were sued recently in a class action for
> >>shipping drives which they knew were going to fail prematurely, but that
> >>was a few years back.
> >>
> >>So what are other people's feelings about drive reliability, and are some
> >>brands better than others?
> >
> >
> > I'd agree that drives are not as good as they used to be. I recently
> > had a one year old Western Digital disk suddenly drop dead while
> > running. Nothing could be recovered from it. Shortly after that I
> > bought four Seagate Barracuda ATA disks, one of which was defective.
> > It would work flawlessly for an hour to a month, then shut itself
> > down. Rebooting made it run for a while before it repeated.
> >
> > Compare this to the 1GB Quantum Fireball SCSI disk sitting in my
> > firewall. It's been running 24/7 since some time around 1996.
> >
> >
> >>Does anyone know of any web sites with statistics or test data?
> >
> >
> > That would be interesting.
> >
>
>
> I would also concur. I had a Western Digital that failed after 10
> months. Western Digital sent me another one that failed in exactly 10
> months again. I didn't want to send it in to get yet another piece of
> WD junk. So, lesson learned to stay away from WD drives; correct or not.
>
> Sevatio
>
>
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: Hard drive Reliability?
2004-05-19 22:02 ` John Lange
@ 2004-05-19 22:42 ` jim
0 siblings, 0 replies; 31+ messages in thread
From: jim @ 2004-05-19 22:42 UTC (permalink / raw)
To: John Lange; +Cc: linux-raid
>
> > WD junk. So, lesson learned to stay away from WD drives; correct or
> > not.
>
> I'd agree with the one addition that, unfortunately they _ALL_ seem to
> be junk. Unfortunately, in my discussions with various associates we
> can't seem to come to any consensus on any particular brands being
> better or worse than others.
Our experience with Maxtor 60GB, 80GB, 120GB, various models, confirms
it isn't just WD. In my opinion, all of the disk manufacturers have
become so price competitive that they cannot produce an excellent
product at the current price points. If big PC manufacturers don't
care whose drive they use as long as it lasts at least 1 year (the PC
warranty period) in a home computer that is mostly idle or off, and
they only care about cost, there is little motivation for drive
manufacturers to produce a reliable drive - only a cheap one.
^ permalink raw reply [flat|nested] 31+ messages in thread
* RE: Hard drive Reliability?
2004-05-19 21:28 ` Sevatio
2004-05-19 22:02 ` John Lange
@ 2004-05-19 22:26 ` Guy
2004-05-19 23:53 ` maarten van den Berg
2004-05-19 23:44 ` maarten van den Berg
2 siblings, 1 reply; 31+ messages in thread
From: Guy @ 2004-05-19 22:26 UTC (permalink / raw)
To: 'Sevatio', linux-raid
>I would also concur. I had a Western Digital that failed after 10
>months. Western Digital sent me another one that failed in exactly 10
>months again. I didn't want to send it in to get yet another piece of
>WD junk. So, lesson learned to stay away from WD drives; correct or not.
>Sevatio
==========================================================================
I had problems with Maxtor. So, lesson learned to stay away from Maxtor
drives; correct or not.
My friend had problems with IBM.
Who's left? I like Seagate, for now.
A few years ago HP and EMC were using Seagate drives in their big disk
arrays. I am sure they have stats and know which drives are good.
Guy
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: Hard drive Reliability?
2004-05-19 22:26 ` Guy
@ 2004-05-19 23:53 ` maarten van den Berg
2004-05-20 0:49 ` TJ Harrell
0 siblings, 1 reply; 31+ messages in thread
From: maarten van den Berg @ 2004-05-19 23:53 UTC (permalink / raw)
To: linux-raid
On Thursday 20 May 2004 00:26, Guy wrote:
> >I would also concur. I had a Western Digital that failed after 10
> >months. Western Digital sent me another one that failed in exactly 10
> >months again. I didn't want to send it in to get yet another piece of
> >WD junk. So, lesson learned to stay away from WD drives; correct or not.
> >
> >Sevatio
>
> ==========================================================================
>
> I had problems with Maxtor. So, lesson learned to stay away from Maxtor
> drives; correct or not.
>
> My friend had problems with IBM.
>
> Who's left? I like Seagate, for now.
>
> A few years ago HP and EMC were using Seagate drives in their big disk
> arrays. I am sure they have stats and know which drives are good.
Hm, I somehow seriously doubt that, if and when you're talking about _brands_.
However, I strongly feel that certain _series_ are indeed better or worse
than others. But that only helps big manufacturers; by the time we need to
buy a new drive the series either doesn't exist anymore, has been overhauled to
reduce cost, or has been transferred from plant Y in Taiwan to plant X in
China. (Not that I imply anything by naming countries; they're just examples.)
The tricky part, especially for manufacturers, is that they only know after
the fact whether they chose their drive brand / model wisely. Only after a year
or so can they see how many returns they've had, and by that time it's too
late to change anything about it.
Maarten
--
Yes of course I'm sure it's the red cable. I guarante[^%!/+)F#0c|'NO CARRIER
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: Hard drive Reliability?
2004-05-19 23:53 ` maarten van den Berg
@ 2004-05-20 0:49 ` TJ Harrell
2004-05-20 1:13 ` berk walker
2004-05-20 6:39 ` Måns Rullgård
0 siblings, 2 replies; 31+ messages in thread
From: TJ Harrell @ 2004-05-20 0:49 UTC (permalink / raw)
To: linux-raid
I've never bought Seagate. Seagate was horribly unreliable in earlier
times, 10 years ago, or so. I buy WD exclusively now because it is the only
IDE drive I can get a 3 year warranty on. I've never had the problems with
them that other people have expressed, but I don't hold high expectations,
either.
I have read that the bearings used in current WD drives tend to wear
faster at higher operating temperatures. New fluid bearings are supposedly
less sensitive to this problem, though.
In any case, it seems that IDE drives are designed as throwaway drives,
constructed as cheaply as possible. Servers generally use only SCSI. SCSI
drives in general are manufactured better because servers require it, and
customers are willing to pay for it. IDE equipment will not be manufactured
as well because the market is unwilling to pay the price premium for
quality. The desktop market is simply too competitive. Most buyers
don't know about quality, and buy on price. Consequently, drive margins are
squeezed, and quality becomes less important.
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: Hard drive Reliability?
2004-05-20 0:49 ` TJ Harrell
@ 2004-05-20 1:13 ` berk walker
2004-05-20 6:39 ` Måns Rullgård
1 sibling, 0 replies; 31+ messages in thread
From: berk walker @ 2004-05-20 1:13 UTC (permalink / raw)
To: TJ Harrell; +Cc: linux-raid
TJ Harrell wrote:
> I've never bought Seagate. Seagate was horribly unreliable in earlier
>times, 10 years ago, or so. I buy WD exclusively now because it is the only
>IDE drive I can get a 3 year warranty on. I've never had the problems with
>them that other people have expressed, but I don't hold high expectations,
>either.
> I have read that the bearings used in current WD drives tend to wear
>faster at higher operating temperatures. New fluid bearings are supposedly
>less sensitive to this problem, though.
> In any case, it seems that IDE drives are designed as throwaway drives,
>constructed as cheaply as possible. Servers generally use only SCSI. SCSI
>drives in general are manufactured better because servers require it, and
>customers are willing to pay for it. IDE equipment will not be manufactured
>as well because the market is unwilling to pay the price premium for
>quality. The desktop market is simply too competitive. Most buyers
>don't know about quality, and buy on price. Consequently, drive margins are
>squeezed, and quality becomes less important.
>
>
As hard-core (read: Linux) users rarely turn their systems off, wouldn't
air bearings be cheap and long-lived?
b-
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: Hard drive Reliability?
2004-05-20 0:49 ` TJ Harrell
2004-05-20 1:13 ` berk walker
@ 2004-05-20 6:39 ` Måns Rullgård
1 sibling, 0 replies; 31+ messages in thread
From: Måns Rullgård @ 2004-05-20 6:39 UTC (permalink / raw)
To: linux-raid
"TJ Harrell" <systemloc@earthlink.net> writes:
> I've never bought Seagate. Seagate was horribly unreliable in earlier
> times, 10 years ago, or so. I buy WD exclusively now because it is the only
> IDE drive I can get a 3 year warranty on.
I have Seagate drives with a 3-year warranty. Out of five drives
purchased during the last year, one was defective. The replacement is
OK. These drives are running 24/7 in a software RAID configuration.
--
Måns Rullgård
mru@kth.se
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: Hard drive Reliability?
2004-05-19 21:28 ` Sevatio
2004-05-19 22:02 ` John Lange
2004-05-19 22:26 ` Guy
@ 2004-05-19 23:44 ` maarten van den Berg
2 siblings, 0 replies; 31+ messages in thread
From: maarten van den Berg @ 2004-05-19 23:44 UTC (permalink / raw)
To: linux-raid
On Wednesday 19 May 2004 23:28, Sevatio wrote:
> Måns Rullgård wrote:
> > John Lange <john.lange@bighostbox.com> writes:
> > I'd agree that drives are not as good as they used to be. I recently
> > had a one year old Western Digital disk suddenly drop dead while
> > running. Nothing could be recovered from it. Shortly after that I
> > bought four Seagate Barracuda ATA disks, one of which was defective.
> > It would work flawlessly for an hour to a month, then shut itself
> > down. Rebooting made it run for a while before it repeated.
> >
> > Compare this to the 1GB Quantum Fireball SCSI disk sitting in my
> > firewall. It's been running 24/7 since some time around 1996.
> >
> >>Does anyone know of any web sites with statistics or test data?
> >
> > That would be interesting.
>
> I would also concur. I had a Western Digital that failed after 10
> months. Western Digital sent me another one that failed in exactly 10
> months again. I didn't want to send it in to get yet another piece of
> WD junk. So, lesson learned to stay away from WD drives; correct or not.
<aol mode> Me too ! </aol mode>
My current vendor gives me a hard time every time I order a new drive since
they sell WD predominantly. I still do not give in; no WD for me anymore.
Maarten
--
Yes of course I'm sure it's the red cable. I guarante[^%!/+)F#0c|'NO CARRIER
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: Hard drive Reliability?
2004-05-19 19:58 Hard drive Reliability? John Lange
2004-05-19 20:49 ` Måns Rullgård
@ 2004-05-19 22:04 ` berk walker
2004-05-19 22:18 ` Guy
2004-05-20 4:39 ` Mark Hahn
3 siblings, 0 replies; 31+ messages in thread
From: berk walker @ 2004-05-19 22:04 UTC (permalink / raw)
To: John Lange; +Cc: LinuxRaid
I know a guy with a box of dead WDs, and I have 3 out of 5 bad Maxtor
40GB drives.
John Lange wrote:
>I have had a few discussions recently with technical service people, and we
>have all come to the conclusion that hard drives are far less reliable than
>they used to be.
>
>Case in point: I have a 120GB Maxtor drive in a server that began to fail
>less than 8 months into service. Major headache.
>
>I've heard similar stories from other people. A guy I know who sells
>both desktop and file server systems says he now only builds systems on
>motherboards with some kind of RAID 1, with dual drives, because drives
>fail so often; this saves loads of headaches. After their system has
>died and they've 'lost everything', people are more than willing to pay the
>extra $150 for redundancy.
>
>I know drive manufacturers were sued recently in a class action for
>shipping drives which they knew were going to fail prematurely, but that
>was a few years back.
>
>So what are other people's feelings about drive reliability, and are some
>brands better than others?
>
>Does anyone know of any web sites with statistics or test data?
>
>
>
^ permalink raw reply [flat|nested] 31+ messages in thread
* RE: Hard drive Reliability?
2004-05-19 19:58 Hard drive Reliability? John Lange
2004-05-19 20:49 ` Måns Rullgård
2004-05-19 22:04 ` berk walker
@ 2004-05-19 22:18 ` Guy
2004-05-19 23:40 ` maarten van den Berg
2004-05-20 4:39 ` Mark Hahn
3 siblings, 1 reply; 31+ messages in thread
From: Guy @ 2004-05-19 22:18 UTC (permalink / raw)
To: 'John Lange', 'LinuxRaid'
I think they fudge the MTBF! They say 1,000,000 hours MTBF.
That's over 114 years!
Drives don't last anywhere near that long.
Someone I know has 4 IBM disks. 3 of 4 have failed.
1 or 2 were replaced and the replacement drive(s) have failed.
All still under warranty! He gave up on IBM since the MTBF seems to be less
than 1 year. This was about 2-3 years ago. He mirrors things most of the
time.
In the past I have almost never had a disk failure. Almost all of my drives
became too small to use before they failed. The drives ranged from 10Meg to
3Gig. Drives larger than 3Gig seem to fail before you outgrow them.
I would love to see real live stats. Claimed MTBF and actual MTBF.
I just checked Seagate and Maxtor. They don't give an MTBF anymore.
When did that happen?!
Just Service life and Warranty.
Anyway, the best indicator of expected life is the warranty. If the
manufacturer thinks the drive will only last 1 or 3 years (depending on size
or model), who am I to argue?
-----Original Message-----
From: linux-raid-owner@vger.kernel.org
[mailto:linux-raid-owner@vger.kernel.org] On Behalf Of John Lange
Sent: Wednesday, May 19, 2004 3:58 PM
To: LinuxRaid
Subject: Hard drive Reliability?
I have had a few discussions recently with technical service people, and we
have all come to the conclusion that hard drives are far less reliable than
they used to be.
Case in point: I have a 120GB Maxtor drive in a server that began to fail
less than 8 months into service. Major headache.
I've heard similar stories from other people. A guy I know who sells
both desktop and file server systems says he now only builds systems on
motherboards with some kind of RAID 1, with dual drives, because drives
fail so often; this saves loads of headaches. After their system has
died and they've 'lost everything', people are more than willing to pay the
extra $150 for redundancy.
I know drive manufacturers were sued recently in a class action for
shipping drives which they knew were going to fail prematurely, but that
was a few years back.
So what are other people's feelings about drive reliability, and are some
brands better than others?
Does anyone know of any web sites with statistics or test data?
--
John Lange
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: Hard drive Reliability?
2004-05-19 22:18 ` Guy
@ 2004-05-19 23:40 ` maarten van den Berg
2004-05-20 0:34 ` Guy
2004-05-20 15:51 ` Sevatio
0 siblings, 2 replies; 31+ messages in thread
From: maarten van den Berg @ 2004-05-19 23:40 UTC (permalink / raw)
To: 'LinuxRaid'
On Thursday 20 May 2004 00:18, Guy wrote:
> I think they fudge the MTBF! They say 1,000,000 hours MTBF.
> That's over 114 years!
> Drives don't last anywhere near that long.
Hear hear! (Although the MTBF does mean something else entirely, its use in
marketing, and the ensuing misplaced consumer confidence, is atrocious.)
> Someone I know has 4 IBM disks. 3 of 4 have failed.
> 1 or 2 were replaced and the replacement drive(s) have failed.
Yeah, it's weird. Western Digital drives have failed consistently on
me, but I've had very good experience with Maxtor (read: Quantum) and Hitachi
(read: IBM). As opposed to other posts in this thread...
> All still under warranty! He gave up on IBM since the MTBF seems to be
> less than 1 year. This was about 2-3 years ago. He mirrors things most of
> the time.
One thing I've come to believe over the years is that heat is a very important
factor in killing drives. So I now take great care in ensuring good heat
dissipation from the drives. This entails, among other things, that you should
never 'sandwich' drives in their 3.5" slots (I can't believe case
manufacturers still have not woken up to this need!). Instead I often arrange
for them to go in 5.25" slots so they have plenty of air around them. If I need to
put them in 3.5" slots I always leave one unit of space around a drive.
In the servers I deploy I take bigger measures, like a bigass 120mm fan just
in front of the drives (accomplished either by Dremel or by case design).
> In the past I have almost never had a disk failure. Almost all of my
> drives became too small to use before they failed. The drives ranged from
> 10Meg to 3Gig. Drives larger than 3Gig seem to fail before you outgrow
> them.
Hm, no. I did observe some really bad brands / series, even all the way back to
30 MB (M!) RLL drives and a particularly bad batch of 48 MB SCSI-1 Seagate
ones. But I'll admit that those were the exception to the rule way back then.
> I would love to see real live stats. Claimed MTBF and actual MTBF.
MTBF is measured in a purely statistical way, not taking any _real_ wear and
tear or aging into account. They run 10000 drives for a month and
extrapolate the MTBF from there. The figure is close to meaningless. For
starters, it does not guarantee _anything_. If you have 5 out of 5 failures
within the first six months, that still fits fine inside the statistical
model, unless a lot of others have that same rate of failure. Secondly, one
just cannot extrapolate the life expectancy of a drive in this way and get
usable figures. I can take a statistical test with 20,000 babies during a
year, and perhaps extrapolate from there that the MTBF for humans is 210
years. And boy, we all know that statistic paints a dead wrong picture...!
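As a rough back-of-the-envelope sketch of what I mean (the numbers are purely
illustrative, assuming the usual population-test method):
  1,000 drives x 1,000 hours = 1,000,000 drive-hours of testing;
  1 failure seen in that window => a claimed MTBF of ~1,000,000 hours.
  Implied failure rate: 8,760 hours/year / 1,000,000 hours ~ 0.9% per year,
  i.e. under 3% of drives failing within a 3-year warranty period.
That is far rosier than what most of us seem to be seeing in practice.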
You need to disregard any and all MTBF values. They serve no purpose for us
end-users. They only serve a purpose for vendors (expected rate of return),
manufacturers and, probably, insurance companies...
> I just checked Seagate and Maxtor. They don't give an MTBF anymore.
> When did that happen?!
Well, it was more or less useless anyway. I can tell you just offhand that you
can lengthen the life expectancy of a drive maybe four-fold if you make sure
it stays below 35 degrees Celsius its entire life, instead of ~45 degrees.
Don't hold me to that, but you know what I mean, and it is true. :-)
> Just Service life and Warranty.
> Anyway, the best indicator of expected life is the warranty. If the
> manufacturer thinks the drive will only last 1 or 3 years (depending on size
> or model), who am I to argue?
Times have indeed changed. 5 or 10 years ago, I would not have hesitated to
put all my data (of which I had little or no backup) on a single 120MB or 2
GB disk. Nowadays, I hardly ever put valuable data on single disks. Either
it has good backups or it goes onto RAID 1 or 5 arrays. I've seen it happen
too many times at customers' sites... I do take my precautions now.
(I've been there myself too, and got the T-shirt...)
Not that that guarantees anything... Lightning might strike my 8-disk
fileserver and take out everything. Lightning may hit my house as well
and take any and all but some very, very old backups along with it.
But still, chances are much lower and that is what counts, innit?
If / when a real disaster happens, I'll still live through it. But I just
*need* it to have a much better reason than the ubiquitous drive failure,
user error or virus, because *that* I would not forgive myself for...
Maarten
--
Yes of course I'm sure it's the red cable. I guarante[^%!/+)F#0c|'NO CARRIER
^ permalink raw reply [flat|nested] 31+ messages in thread
* RE: Hard drive Reliability?
2004-05-19 23:40 ` maarten van den Berg
@ 2004-05-20 0:34 ` Guy
2004-05-20 1:46 ` John Lange
2004-05-20 22:27 ` Russ Price
2004-05-20 15:51 ` Sevatio
1 sibling, 2 replies; 31+ messages in thread
From: Guy @ 2004-05-20 0:34 UTC (permalink / raw)
To: 'maarten van den Berg', 'LinuxRaid'
You said:
<Well, it was more or less useless anyway. I can tell you just offhand that
<you can lengthen the life expectancy of a drive maybe four-fold if you make
<sure it stays below 35 degrees Celsius its entire life, instead of ~45
<degrees.
<Don't hold me to that, but you know what I mean, and it is true. :-)
========================================================================
I agree with you, but!
I have an old system with two 18 Gig SCSI disks, one IBM and one Seagate.
Both run very hot! I added extra cooling fans. Both fans failed about 2
years ago. Only the CPU and power supply fans still work. The disk drives
are too hot to touch. Much too hot to touch! The system is running 99.99%
of the time. No disk problems. The system is 4-5 years old, as a guess. To
help date it, it is a P3-350MHz. My wife uses this computer. :)
I just dripped some drops of water on the drives; it did not boil. But I
can only keep my finger on them for less than 1 second.
Anyway, I think I have been very lucky. I don't recommend hot drives, just a
funny example of "rules were made to be broken"!
Guy
^ permalink raw reply [flat|nested] 31+ messages in thread
* RE: Hard drive Reliability?
2004-05-20 0:34 ` Guy
@ 2004-05-20 1:46 ` John Lange
2004-05-20 22:27 ` Russ Price
1 sibling, 0 replies; 31+ messages in thread
From: John Lange @ 2004-05-20 1:46 UTC (permalink / raw)
To: Guy; +Cc: 'LinuxRaid'
On Wed, 2004-05-19 at 19:34, Guy wrote:
> I just dripped some drops of water on the drives, it did not boil.
I believe your issues with reliability may be traced to your willingness
to pour water into your computer while it's on! ;)
Sorry Guy.. I couldn't resist :)
Regards,
John
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: Hard drive Reliability?
2004-05-20 0:34 ` Guy
2004-05-20 1:46 ` John Lange
@ 2004-05-20 22:27 ` Russ Price
1 sibling, 0 replies; 31+ messages in thread
From: Russ Price @ 2004-05-20 22:27 UTC (permalink / raw)
To: 'LinuxRaid'
Guy wrote:
> I have an old system with two 18 Gig SCSI disks, one IBM and one Seagate.
> Both run very hot! I added extra cooling fans. Both fans failed about 2
> years ago. Only the CPU and power supply fans still work. The disk drives
> are too hot to touch. Much too hot to touch! The system is running 99.99%
> of the time. No disk problems. The system is 4-5 years old, as a guess. To
> help date it, it is a P3-350MHz. My wife uses this computer. :)
The lack of start-stop cycles may work in your favor; you don't have as
many thermal expansion/contraction cycles or startup electrical surges
to worry about. Of course, if stiction strikes after a power-off, you
could be in trouble...
In any case, it's not a bad idea to set up smartd and look after your
system logs. If your drives have temperature sensors, it's a good idea
to tell smartd to report raw values for temperature (usually -r 194 -R
194 in the smartd.conf file); the normalized values look wacky.
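If it helps, a minimal smartd.conf along those lines might look something like
this (the device names and mail address are only examples; adjust for your
own drives):
    # monitor all SMART attributes, track and report the raw value of
    # attribute 194 (temperature), and mail warnings to root
    /dev/hda -a -r 194 -R 194 -m root
    /dev/hdc -a -r 194 -R 194 -m root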
I also have hddtemp and the hddtemp plugin for gkrellm, so I can easily
check the temperatures on my desktop, at least for my four-drive SATA
RAID5 array. My older 30GB Maxtors don't have temperature sensors, but
they're in the same fan-cooled bays as the Samsung SATA drives, and they
all run reasonably cool to the touch. Current room temperature is about
28 C, and the temperature display reports 29-33 C for the drives in the
array.
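(For the curious, hddtemp itself is a one-liner; assuming it's installed and
the drive is in its database, something like
    hddtemp /dev/sda
prints the drive model and its current temperature.)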
Russ
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: Hard drive Reliability?
2004-05-19 23:40 ` maarten van den Berg
2004-05-20 0:34 ` Guy
@ 2004-05-20 15:51 ` Sevatio
1 sibling, 0 replies; 31+ messages in thread
From: Sevatio @ 2004-05-20 15:51 UTC (permalink / raw)
To: linux-raid
> One thing I've come to believe over the years is that heat is a very important
> factor in killing drives. So I now take great care in ensuring good heat
> dissipation from the drives. This entails, among other things, that you should
> never 'sandwich' drives in their 3.5" slots (I can't believe case
> manufacturers still have not woken up to this need!). Instead I often arrange
> for them to go in 5.25" slots so they have plenty of air around them. If I need to
> put them in 3.5" slots I always leave one unit of space around a drive.
> In the servers I deploy I take bigger measures, like a bigass 120mm fan just
> in front of the drives (accomplished either by Dremel or by case design).
>
> Maarten
>
I employ the same methods now by mounting hard drives in the 5.25" bays
and placing fans in front of them. HDs that are sandwiched together in
the 3.5" bays get too hot to touch, whereas fanned 5.25" HDs are barely
above room temperature. This seems to have helped. I haven't had a
failure of an HD that's mounted in this manner yet (these have run 24x7
for the last 1.5 years). I've even seen HD heatsink kits. It all boils
down to the extremely tight margins that hardware manufacturers operate on.
They're forced to cut corners where possible. We just need to be as informed as
possible and intelligently vote with our wallets.
Sevatio
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: Hard drive Reliability?
2004-05-19 19:58 Hard drive Reliability? John Lange
` (2 preceding siblings ...)
2004-05-19 22:18 ` Guy
@ 2004-05-20 4:39 ` Mark Hahn
2004-05-20 7:15 ` John Lange
3 siblings, 1 reply; 31+ messages in thread
From: Mark Hahn @ 2004-05-20 4:39 UTC (permalink / raw)
To: John Lange; +Cc: linux-raid
> have all come to the conclusion that hard drives are far less reliable than
> they used to be.
margins are stretched, for sure. I don't believe this is the main
cause of people's frustration with disk failures, but instead the
fact that drives are treated with less care. nowadays, those annoying
randoms at the corner computer store keep a pile of disks under
the counter, with nothing but an anti-static bag to protect them, and
think nothing of clanking them together to shuffle the deck looking
for the 80G one you're asking for. and how did those disks arrive?
some low-margin shipping service, packed 40 per box, from a fourth-line
reseller that specializes in shifting objects at high speed.
the fact is that disks are dirt cheap now, so whining about their
robustness is kind of silly. if you don't like trusting a single
disk, use raid: that's what it's for. yes, it's less of a clean
solution on small machines, but there is *no* reliability problem
on servers, since raid5 is fast and cheap and you get to choose
your comfort level of bomb-proof-ness.
> Case in point, I have a 120G Maxtor drive in a server that began to fail
> less than 8 months into service. Major headache.
there is no conspiracy: all the top-tier vendors have roughly the same
quality (and product lines, and prices, etc.)
> fail so often this saves loads of headaches. After a their system has
> died and they 'lost everything' people are more than willing to pay the
> extra $150 for redundancy.
it's curious to reflect on the social aspects of the PC revolution.
people just plain like the idea of having their stuff stored on a box
that sits within reach. the fact that this is becoming cheaper and
cheaper doesn't mean that it's the right solution, always, all ways.
diskless PCs make HUGE amounts of sense; I suppose we can blame MSFT
somewhat for fighting that.
> I know drive manufactures were sued recently in a class action for
> shipping drives which they knew were going to fail prematurely but that
> was a few years back.
that is a somewhat deceptive way to put it. IBM honestly produced
a product that they thought was good. in fact, it was the darling
of the geek industry, until people realized that there were some odd
issues having to do with abrupt power-offs (do we even have the story
straight yet?). IBM is, like any other large organization, crippled
by its legal types, and can't just forthrightly say "we screwed up
and didn't test this odd usage pattern properly".
as products mature, they tend to become more complex, and entertain
new failure modes. can you reach under the hood and tweak the carb
on your car? similarly, features like auto-defect-sparing and
write-behind caches that flush on power-loss are tricky, and produce
non-intuitive failure modes. can they be tested better, sure. is there
any going back? no.
> So what are other peoples feelings about drive reliability and are some
> brands better than others?
maxtor/seagate/hgst/wd are safe bets. go for 3yr warranties.
use some form of raid and/or backup and/or replication.
> Does anyone know of any web sites with statistics or test data?
storagereview.com tries, but it's hard to collect serious data from
random, noncompliant populations. in particular, squeaky wheels lead
to drastic biases.
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: Hard drive Reliability?
2004-05-20 4:39 ` Mark Hahn
@ 2004-05-20 7:15 ` John Lange
2004-05-20 12:15 ` Mark Hahn
0 siblings, 1 reply; 31+ messages in thread
From: John Lange @ 2004-05-20 7:15 UTC (permalink / raw)
To: Mark Hahn; +Cc: linux-raid
On Wed, 2004-05-19 at 23:39, Mark Hahn wrote:
> the fact is that disks are dirt cheap now, so whining about their
> robustness is kind of silly.
On this point I can't agree with you. Sure, hardware is dirt cheap but
DATA is not. So long as the drive makers are up-front about it, so
end-users know that they MUST have mirroring, then fine; but they aren't.
Users are led to believe the products are "ultra reliable" and
therefore are not as careful with their storage and backup solutions as
they should be.
At this point I must correct myself. I had said the drive was a Maxtor
but I was wrong. I just checked smartctl and in fact it is a Seagate
ST3120026A. A Google search turns up the drive's home page:
http://www.seagate.com/cda/products/discsales/marketing/detail/0,1081,580,00.html
Quotes from the product data sheet (PDF on that page):
"The worlds toughest and Quietest High-Performance desktop drive with
..."
"A proven rugged design for increased reliability"
"Best-Fit Applications
· Mainstream and High-Performance PCs
· Entry-Level ATA Servers, including RAID
· Cost-Effective Network Attached Storage"
I don't seem to see a footnote that says "50% of drives will fail in a
year or less..."
Going by the product sheet and the fact that the drive has a 3-year warranty,
you'd think you would be relatively safe. Nope...
> if you don't like trusting a single
> disk, use raid: that's what it's for. yes, it's less of a clean
> solution on small machines, but there is *no* reliability problem
> on servers, since raid5 is fast and cheap and you get to choose
> your comfort level of bomb-proof-ness.
I don't totally disagree, but there are many other considerations. I may
have RAID 5 on my server, but the server is in a data center that's 2000 km
away. Shipping new drives and having a support call to install them can
get very expensive, not to mention the downtime.
Nevertheless, I believe we are all slowly coming to the realization
that mirroring is now the minimum we should deploy, even on the desktop.
It's not really fair to the consumer though. Basically, if they make the
drives really bad then we will buy twice as many of them. Of course this
only works for a while until a competitor destroys them (witness what
happened to the American auto makers when the Japanese started making
cars).
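For anyone who hasn't set up such a mirror yet, the Linux software RAID side
of it is only a couple of commands these days. A rough sketch with mdadm
(device and partition names are examples only; I'm assuming two identical
disks on separate IDE channels):
    # create a two-disk RAID1 from one partition on each drive
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdc1
    # put a filesystem on it and mount it as usual
    mke2fs -j /dev/md0
    # watch the initial resync
    cat /proc/mdstat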
> diskless PCs make HUGE amounts of sense;
AMEN! Having a hard drive in a PC on your desktop is completely
senseless in any office with more than about 2 desktops. LTSP (
www.ltsp.org ) all the way!
John Lange
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: Hard drive Reliability?
2004-05-20 7:15 ` John Lange
@ 2004-05-20 12:15 ` Mark Hahn
2004-05-20 13:26 ` jim
` (2 more replies)
0 siblings, 3 replies; 31+ messages in thread
From: Mark Hahn @ 2004-05-20 12:15 UTC (permalink / raw)
To: John Lange; +Cc: linux-raid
> I don't seem to see a footnote that says "50% of drives will fail in a
> year or less..."
properly handled, especially by the hands between you and the factory,
properly installed (with adequate airflow), they certainly won't.
> Going by the product sheet and the fact the drive has a 3 year warranty
> you'd think you would be relatively safe. Nope...
you imply that vendors are knowingly shipping half their product
that will die within even a 1yr warranty period, and then have
to be replaced at significant cost to the vendor. I really can't
see why you think they're so stupid! the alternate explanation,
which fits the data (such as it is) perfectly well is that the
supply chain damages the drives.
> away. Shipping new drives and having a support call to install them can
> get very expensive not to mention down time.
so install a hot spare or two or three.
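with md that's a one-liner, by the way; just a sketch, assuming an existing
/dev/md0 and a spare partition /dev/hdd1 that exists purely for this example:
    mdadm /dev/md0 --add /dev/hdd1
anything added beyond the active device count simply sits idle as a hot spare
and is pulled in automatically when a member fails.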
> senseless in any office with more than about 2 desktops. LTSP (
> www.ltsp.org ) all the way!
actually, I considered using ltsp when I built my diskless cluster,
but once I looked, I could find no real value-add there. instead,
I export an unmodified RH dist as a read-only root, boot with
the usual dhcp/pxe/tftp tools, and have /var in tmpfs.
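roughly, the moving parts look like this (the addresses, MAC and paths below
are invented for the example, not a recipe):
    # dhcpd.conf: point the client at the tftp server and pxelinux
    host node1 {
      hardware ethernet 00:11:22:33:44:55;
      fixed-address 192.168.1.101;
      next-server 192.168.1.1;
      filename "pxelinux.0";
    }

    # pxelinux.cfg/default: kernel mounts the NFS export read-only as /
    default linux
    label linux
      kernel vmlinuz
      append root=/dev/nfs nfsroot=192.168.1.1:/exports/rh-root ip=dhcp ro

    # client fstab: the writable bits live in tmpfs
    none  /var  tmpfs  defaults  0 0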
regards, mark hahn.
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: Hard drive Reliability?
2004-05-20 12:15 ` Mark Hahn
@ 2004-05-20 13:26 ` jim
2004-05-20 13:31 ` John Lange
2004-05-20 13:32 ` Tim Grant
2 siblings, 0 replies; 31+ messages in thread
From: jim @ 2004-05-20 13:26 UTC (permalink / raw)
To: Mark Hahn; +Cc: linux-raid
> you imply that vendors are knowingly shipping half their product
> that will die within even a 1yr warranty period, and then have
> to be replaced at significant cost to the vendor. I really can't
> see why you think they're so stupid! the alternate explanation,
> which fits the data (such as it is) perfectly well is that the
> supply chain damages the drives.
I don't believe that theory, because Maxtor has replaced some of
our failed drives, directly, and the replacements also failed in
less than a year.
It takes a lot of time for an end-user to do an RMA. With drives
costing around $100, it's much cheaper to buy a new drive and replace
it than to go through the drive manufacturer's RMA process. I don't
believe drive manufacturers are stupid at all...
One thing the drive manufacturers could do to help is publish MTBF
figures for a range of duty cycles. If they say "this drive will fail
in 8 months if you do an average of 50 seeks per second throughout the
day", then people will know that the drives may not be up to the task.
Or "this drive will last 15 years if you just turn it on and never
access it."
Jim
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: Hard drive Reliability?
2004-05-20 12:15 ` Mark Hahn
2004-05-20 13:26 ` jim
@ 2004-05-20 13:31 ` John Lange
2004-05-20 14:20 ` Mark Hahn
2004-05-20 13:32 ` Tim Grant
2 siblings, 1 reply; 31+ messages in thread
From: John Lange @ 2004-05-20 13:31 UTC (permalink / raw)
To: Mark Hahn; +Cc: linux-raid
On Thu, 2004-05-20 at 07:15, Mark Hahn wrote:
> you imply that vendors are knowingly shipping half their product
> that will die within even a 1yr warranty period, and then have
> to be replaced at significant cost to the vendor. I really can't
> see why you think they're so stupid! the alternate explanation,
> which fits the data (such as it is) perfectly well is that the
> supply chain damages the drives.
This is certainly possible; however, you imply that the vendors are
knowingly shipping their drives through supply chains which damage the
drives, which then have to be replaced at significant cost to the vendor.
Or, alternatively, that they are knowingly packaging the drives so poorly they are
damaged in shipping. Either way, I'm not letting the vendors off the hook
so easily.
I can also debunk this another way. I have two identical drives which
came out of the same shipping carton and which were still in their original
plastic "shell" packaging. It's not impossible that one drive was poorly
handled after it was removed from the carton, but it's not likely either.
And third, if you go to the drive's web site link I provided, they make a
big deal out of how "tough" their drives are, specifically in regard to
how well they withstand all kinds of hardship during shipping.
So again, the consumer is being misled.
> > away. Shipping new drives and having a support call to install them can
> > get very expensive not to mention down time.
>
> so install a hot spare or two or three.
Again, a somewhat reasonable thing to do if you are aware that you need
to do this, but certainly nothing from the vendor has prepared us for
that necessity. Only repeated drive failures have started to bring us to
the realization that this is no longer optional.
I say "somewhat" because if you intend to have all drives cooled
properly there is rarely enough space for more than 2-3 drives. In this
specific example the drives are in a drive mounting bay which sits
directly in front of a large fan. There is probably room for one more
drive at most.
BTW, smartctl reports the drive has never gone above 40C (well below its
operating maximum of 60C). This case was selected partly because it had
a fan in front of the drives. The point being that it did not overheat while
in service.
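(For anyone wanting to make the same check: on most drives the temperature is
SMART attribute 194, so assuming smartmontools is installed, and with the
device name being whatever your drive actually is, something like
    smartctl -A /dev/hda | grep -i temperature
shows the current raw value, i.e. the temperature in degrees C.)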
So we are left with the "damaged in handling" option, which I find very
unlikely, OR the option that it was simply poor quality to begin with.
>
> > senseless in any office with more than about 2 desktops. LTSP (
> > www.ltsp.org ) all the way!
>
> actually, I considered using ltsp when I built my diskless cluster,
> but once I looked, I could find no real value-add there. instead,
> I run export an unmodified RH dist as a readonly root, boot with
> the usual dhcp/pxe/tftp tools, and have /var in tmpfs.
You are correct that LTSP is mostly just a repackaging of X, though
they are adding more all the time - for example, access to removable media
on the client, etc.
Whether you do standard remote X or LTSP, the point is that NO hard drives on
the desktop is the only way to go!
The solutions are ironic: either DOUBLE the number of hard drives on the
clients or eliminate them altogether!
Regards,
John Lange
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: Hard drive Reliability?
2004-05-20 13:31 ` John Lange
@ 2004-05-20 14:20 ` Mark Hahn
0 siblings, 0 replies; 31+ messages in thread
From: Mark Hahn @ 2004-05-20 14:20 UTC (permalink / raw)
To: John Lange; +Cc: linux-raid
> > you imply that vendors are knowingly shipping half their product
> > that will die within even a 1yr warranty period, and then have
> > to be replaced at significant cost to the vendor. I really can't
> > see why you think they're so stupid! the alternate explanation,
> > which fits the data (such as it is) perfectly well is that the
> > supply chain damages the drives.
>
> This is certainly possible, however, you imply that the vendors are
> knowingly shipping their drives through supply chains which damage the
> drives and then have to be replaced at significant cost to the vendor.
no, just the opposite. when vendors sell through "channel", it means
direct from the vendor to a large multinational like Avnet or Ingram.
I would guess that that official channel is fine, since there are
adequate means of control, training, feedback. (perhaps even the
official retail channel is fine, since retail boxed disks are pretty
well protected.) the problem I see is the grey-market "channel" -
J Random Corner Computer Shoppe where they have a stack of bare
drives sitting in the display case.
such handling is clearly a huge risk. that's about the only fact
that any of us has about this issue. there's no data to support the
idea that drive vendors knowingly ship broken disks. the fact that
there are not hordes of class-action lawsuits is a good indication that
there's no ambulance to chase.
> that necessity. Only repeated drive failures has started to bring us to
> the realization this is no longer optional.
no, it's always been that way. 10 years ago, the drive market was more
rarified, tended to be professionally pre-installed, and vastly smaller
and lower-variance than now. $5,000 scsi disks did fail back then,
sometimes quite soon upon delivery. of course, margins were much higher,
and "materials margins" (relative to theoretical limits) were higher as
well.
^ permalink raw reply [flat|nested] 31+ messages in thread
* RE: Hard drive Reliability?
2004-05-20 12:15 ` Mark Hahn
2004-05-20 13:26 ` jim
2004-05-20 13:31 ` John Lange
@ 2004-05-20 13:32 ` Tim Grant
2004-05-20 14:38 ` Robin Bowes
2 siblings, 1 reply; 31+ messages in thread
From: Tim Grant @ 2004-05-20 13:32 UTC (permalink / raw)
To: linux-raid
Has anyone tried the Maxline drives from Maxtor? I know the MTBF for
most IDE hard drives is calculated for 8 hours a day, 5 days a week, and
we all know none of us are doing that. We met with Maxtor and they
were speaking of their new Maxline drive, which is intended for 24 hours
a day, 365 days a year. We just started using them in our products, but
it's a little early to see if they're going to be as problematic as the
other Maxtor drives we used.
--
Tim Grant
-----Original Message-----
From: linux-raid-owner@vger.kernel.org
[mailto:linux-raid-owner@vger.kernel.org] On Behalf Of Mark Hahn
Sent: Thursday, May 20, 2004 8:15 AM
To: John Lange
Cc: linux-raid@vger.kernel.org
Subject: Re: Hard drive Reliability?
> I don't seem to see a footnote that says "50% of drives will fail in a
> year or less..."
properly handled, especially by the hands between you and the factory,
properly installed (with adequate airflow), they certainly won't.
> Going by the product sheet and the fact the drive has a 3 year
> warranty you'd think you would be relatively safe. Nope...
you imply that vendors are knowingly shipping half their product that
will die within even a 1yr warranty period, and then have
to be replaced at significant cost to the vendor. I really can't
see why you think they're so stupid! the alternate explanation, which
fits the data (such as it is) perfectly well is that the
supply chain damages the drives.
> away. Shipping new drives and having a support call to install them
> can get very expensive not to mention down time.
so install a hot spare or two or three.
> senseless in any office with more than about 2 desktops. LTSP (
> www.ltsp.org ) all the way!
actually, I considered using ltsp when I built my diskless cluster, but
once I looked, I could find no real value-add there. instead, I
export an unmodified RH dist as a read-only root, boot with
the usual dhcp/pxe/tftp tools, and have /var in tmpfs.
regards, mark hahn.
^ permalink raw reply [flat|nested] 31+ messages in thread
* RE: Hard drive Reliability?
2004-05-20 13:32 ` Tim Grant
@ 2004-05-20 14:38 ` Robin Bowes
2004-05-22 5:15 ` Brad Campbell
0 siblings, 1 reply; 31+ messages in thread
From: Robin Bowes @ 2004-05-20 14:38 UTC (permalink / raw)
To: linux-raid
On Thu, May 20, 2004 14:32, Tim Grant said:
>
> Has anyone tried the Maxline drives from Maxtor? I know the MTBF for
> most IDE hard drives is calculated for 8 hours a day, 5 days a week, and
> we all know none of us are doing that. We met with Maxtor and they
> were speaking of their new Maxline drive, which is intended for 24 hours
> a day, 365 days a year. We just started using them in our products, but
> it's a little early to see if they're going to be as problematic as the
> other Maxtor drives we used.
I've just bought 6 of their 250GB SATA Maxline Plus II drives. I'll be building them
into a Coolermaster case with a couple of fans blowing over them (three drives per fan)
and running them off Promise SATA150 TX4 controllers with software RAID.
I'll let you know how I get on. (Expect lots of questions here when I start trying to
build the arrays!)
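In case it helps with the planning, the array creation itself should be along
these lines; just a sketch, since the actual device names depend on how the
controllers enumerate the disks, and I'm assuming one whole-disk partition each:
    mdadm --create /dev/md0 --level=5 --raid-devices=6 \
        /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
    cat /proc/mdstat    # watch the initial parity build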
R.
--
http://robinbowes.com
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: Hard drive Reliability?
2004-05-20 14:38 ` Robin Bowes
@ 2004-05-22 5:15 ` Brad Campbell
2004-05-24 13:25 ` Frank van Maarseveen
0 siblings, 1 reply; 31+ messages in thread
From: Brad Campbell @ 2004-05-22 5:15 UTC (permalink / raw)
To: Robin Bowes; +Cc: linux-raid
Robin Bowes wrote:
> I've just bought 6 of their 250GB SATA Maxline Plus II drives. I'll be building them
> into a Coolermaster case with a couple of fans blowing over them (three drives per fan)
> and running them off Promise SATA150 TX4 controllers with software RAID.
>
> I'll let you know how I get on. (Expect lots of questions here when I start trying to
> build the arrays!)
I have 10 of the above drives with 3 of the above controllers in a software RAID-5; if you have any
questions, give me a yell :p
For those looking for great SATA hotswap bays with good cooling, I have 2 of these. Besides being a
bit on the noisy side, they keep the drives very cool and the added airflow inside the case keeps
everything else cool too!
http://www.supermicro.com/products/accessories/mobilerack/CSE-M35T-1.cfm
Regards,
Brad
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: Hard drive Reliability?
2004-05-22 5:15 ` Brad Campbell
@ 2004-05-24 13:25 ` Frank van Maarseveen
2004-05-24 18:35 ` Brad Campbell
0 siblings, 1 reply; 31+ messages in thread
From: Frank van Maarseveen @ 2004-05-24 13:25 UTC (permalink / raw)
To: linux-raid; +Cc: Brad Campbell
On Sat, May 22, 2004 at 09:15:13AM +0400, Brad Campbell wrote:
> For those looking for great SATA hotswap bays with good cooling, I have 2
> of these. Besides being a bit on the noisy side, they keep the drives very
> cool and the added airflow inside the case keeps everything else cool too!
>
> http://www.supermicro.com/products/accessories/mobilerack/CSE-M35T-1.cfm
I hope you didn't put a complete raid set in one of these because they
have only one fan. Guess what happens when that moving part dies...
--
Frank
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: Hard drive Reliability?
2004-05-24 13:25 ` Frank van Maarseveen
@ 2004-05-24 18:35 ` Brad Campbell
2004-05-24 22:38 ` maarten van den Berg
0 siblings, 1 reply; 31+ messages in thread
From: Brad Campbell @ 2004-05-24 18:35 UTC (permalink / raw)
To: Frank van Maarseveen; +Cc: linux-raid
Frank van Maarseveen wrote:
> On Sat, May 22, 2004 at 09:15:13AM +0400, Brad Campbell wrote:
>
>>For those looking for great SATA hotswap bays with good cooling, I have 2
>>of these. Besides being a bit on the noisy side, they keep the drives very
>>cool and the added airflow inside the case keeps everything else cool too!
>>
>>http://www.supermicro.com/products/accessories/mobilerack/CSE-M35T-1.cfm
>
>
> I hope you didn't put a complete raid set in one of these because they
> have only one fan. Guess what happens when that moving part dies...
Yep, I have 2 units with 10 drives combined into a single RAID-5.
When that moving part dies the drives heat up a bit more. There are 4 case evacuation fans at the
back of this box that provide just enough backup airflow to stop the drives reaching destruction
temperatures.
I have had a good look at the failure modes of 92mm double ball bearing fans and I have not found
much record of them failing catastrophically. They usually slow down due to dust or bearing wear
over time before they grind to a halt.
These boxes measure fan RPM and drive temperature, and if either of them gets a little out of hand
they sound a dirty great warning piezo that is hard to miss.
I'm not worried. I'm more concerned about PSU/CPU fan failure than these fans failing.
Give me one decent 92mm double ball raced fan against multiple 40mm crappy sleeve fans you get in
normal hotswap bays any day of the week!
Brad
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: Hard drive Reliability?
2004-05-24 18:35 ` Brad Campbell
@ 2004-05-24 22:38 ` maarten van den Berg
0 siblings, 0 replies; 31+ messages in thread
From: maarten van den Berg @ 2004-05-24 22:38 UTC (permalink / raw)
To: linux-raid
On Monday 24 May 2004 20:35, Brad Campbell wrote:
> Frank van Maarseveen wrote:
> > On Sat, May 22, 2004 at 09:15:13AM +0400, Brad Campbell wrote:
> I have had a good look at the failure modes of 92mm double ball bearing
> fans and I have not found much record of them failing catastrophically.
> They slow down due to dust or bearing failure over time before they grind
> to a halt usually.
Well, that IS true. Fans that die are often the 40mm and 60mm variants (and,
to a much lesser extent, 80mm ones). The bigger fans have a good balance between
motor strength and bearing surface, especially when it comes to places where
dust can enter or settle. I've seen old Dells with 120mm fans, where you
could barely turn the fan by hand but they still did spin up. Incredible.
Meanwhile, they made such a noise that any admin within 200 feet would know they
were due for exchange. ;-)
> Give me one decent 92mm double ball raced fan against multiple 40mm crappy
> sleeve fans you get in normal hotswap bays any day of the week!
Hear hear !
I tend to 'dremel' my way to a 120mm fan nowadays, if at all possible...
Maarten
^ permalink raw reply [flat|nested] 31+ messages in thread
* RE: Hard drive Reliability?
@ 2004-05-20 13:54 Cress, Andrew R
0 siblings, 0 replies; 31+ messages in thread
From: Cress, Andrew R @ 2004-05-20 13:54 UTC (permalink / raw)
To: jim, Mark Hahn; +Cc: linux-raid
One aspect that hasn't been mentioned is that disk drives are arguably
the most complex component in the computer, and that the pressure to
release new, faster, bigger, cheaper disks at increasingly shorter
release cycles means that disk vendors have to leverage existing parts
and firmware code to do this, and that some risk is involved. It's
actually pretty amazing that they get it done as well as they do (some
better than others). However, updated disk firmware releases are often
important for maintaining disk reliability.
Disk handling damage is a common problem, but usually mishandling occurs
from lack of care by others in the chain, not the disk vendor.
Disclaimer: I'm not directly associated with any disk vendor, but I
have had lots of test & support experience with a variety of disk drives
(mostly SCSI).
Andy
-----Original Message-----
From: linux-raid-owner@vger.kernel.org
[mailto:linux-raid-owner@vger.kernel.org] On Behalf Of jim@rubylane.com
Sent: Thursday, May 20, 2004 9:27 AM
To: Mark Hahn
Cc: linux-raid@vger.kernel.org
Subject: Re: Hard drive Reliability?
> you imply that vendors are knowingly shipping half their product
> that will die within even a 1yr warranty period, and then have
> to be replaced at significant cost to the vendor. I really can't
> see why you think they're so stupid! the alternate explanation,
> which fits the data (such as it is) perfectly well is that the
> supply chain damages the drives.
I don't believe that theory, because Maxtor has replaced some of
our failed drives, directly, and the replacements also failed in
less than a year.
It takes a lot of time for an end-user to do an RMA. With drives
costing around $100, it's much cheaper to buy a new drive and replace
it than to go through the drive manufacturer's RMA process. I don't
believe drive manufacturers are stupid at all...
One thing the drive manufacturers could do to help is publish MTBF
figures for a range of duty cycles. If they say "this drive will fail
in 8 months if you do an average of 50 seeks per second throughout the
day", then people will know that the drives may not be up to the task.
Or "this drive will last 15 years if you just turn it on and never
access it."
Jim
^ permalink raw reply [flat|nested] 31+ messages in thread
end of thread, other threads:[~2004-05-24 22:38 UTC | newest]
Thread overview: 31+ messages
2004-05-19 19:58 Hard drive Reliability? John Lange
2004-05-19 20:49 ` Måns Rullgård
2004-05-19 21:28 ` Sevatio
2004-05-19 22:02 ` John Lange
2004-05-19 22:42 ` jim
2004-05-19 22:26 ` Guy
2004-05-19 23:53 ` maarten van den Berg
2004-05-20 0:49 ` TJ Harrell
2004-05-20 1:13 ` berk walker
2004-05-20 6:39 ` Måns Rullgård
2004-05-19 23:44 ` maarten van den Berg
2004-05-19 22:04 ` berk walker
2004-05-19 22:18 ` Guy
2004-05-19 23:40 ` maarten van den Berg
2004-05-20 0:34 ` Guy
2004-05-20 1:46 ` John Lange
2004-05-20 22:27 ` Russ Price
2004-05-20 15:51 ` Sevatio
2004-05-20 4:39 ` Mark Hahn
2004-05-20 7:15 ` John Lange
2004-05-20 12:15 ` Mark Hahn
2004-05-20 13:26 ` jim
2004-05-20 13:31 ` John Lange
2004-05-20 14:20 ` Mark Hahn
2004-05-20 13:32 ` Tim Grant
2004-05-20 14:38 ` Robin Bowes
2004-05-22 5:15 ` Brad Campbell
2004-05-24 13:25 ` Frank van Maarseveen
2004-05-24 18:35 ` Brad Campbell
2004-05-24 22:38 ` maarten van den Berg
-- strict thread matches above, loose matches on Subject: below --
2004-05-20 13:54 Cress, Andrew R