From: Berkey B Walker <berk@panix.com>
To: Mikael Abrahamsson <swmike@swm.pp.se>
Cc: Bill Davidsen <davidsen@tmr.com>, Phillip Susi <psusi@cfl.rr.com>,
st0ff@npl.de, linux-raid@vger.kernel.org
Subject: Re: Use of WD20EARS with MDADM
Date: Wed, 21 Apr 2010 17:58:23 -0400
Message-ID: <4BCF74FF.9080909@panix.com>
In-Reply-To: <alpine.DEB.1.10.1004211625580.6768@uplift.swm.pp.se>
Mikael Abrahamsson wrote:
> On Wed, 21 Apr 2010, Bill Davidsen wrote:
>
>> I hear this said, but I don't have any data to back it up. Drive
>> vendors aren't stupid, so if the parking feature is likely to cause
>> premature failures under warranty, I would expect that the feature
>> would not be there, or that the drive would be made more robust.
>> Maybe I have too much faith in greed as a design goal, but I have to
>> wonder if load cycles are as destructive as seems to be the assumption.
>
> What I think people are worried about is that a drive might have X
> load/unload cycles in the data sheet (300k or 600k seem to be normal
> figures), and reaching this in 1-2 years of "normal" use (according to
> the user who is running it 24/7) might be worrying (and understandably so).
>
> Otoh these drives seem to be designed for desktop 8 hour per day use,
> so running them as a 24/7 fileserver under linux is not what they were
> designed for. I have no idea what will happen when the load/unload
> cycles go over the data sheet number, but my guess is that the limit
> was put there for a reason.
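To put numbers on that worry: the stock WD20EARS firmware parks the heads after 8 seconds of idle (hence the wdidle3 utility mentioned elsewhere in this thread), so what decides how fast the rated count is consumed is the average interval between parks. A quick back-of-envelope sketch; the parking intervals below are illustrative assumptions, not measurements:

```python
# Back-of-envelope: days of 24/7 operation until a drive reaches its
# rated load/unload cycle count, given an assumed average interval
# between head parks.

def days_to_rated_cycles(rated_cycles, seconds_per_cycle):
    """Days of continuous operation to consume the rated cycle count."""
    cycles_per_day = 86400 / seconds_per_cycle
    return rated_cycles / cycles_per_day

# 300k rated cycles, one park/unpark every 60 seconds on average:
print(days_to_rated_cycles(300_000, 60))   # ~208 days -- well under a year
# Same rating, parking only once every 10 minutes:
print(days_to_rated_cycles(300_000, 600))  # ~2083 days -- over 5 years
```

So a mostly-idle 24/7 box that trickles just enough I/O to wake the heads every minute or so blows through the data sheet figure in well under a year, which matches the complaints people are posting.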
>
>> I'd love to find some real data, anecdotal stories about older drives
>> are not overly helpful. Clearly there is a trade-off between energy
>> saving, response, and durability, I just don't have any data from a
>> large population of new (green) drives.
>
> My personal experience from the WD20EADS drives is that around 40% of
> them failed within the first year of operation. This is not from a
> large population of drives though and wasn't due to load/unload
> cycles. I had no problem getting them replaced under warranty, but I'm
> running RAID6 nowadays :P
>
Sorry, you sound like a factory droid. *I* see no reason for early
failure besides cheap materials in construction. Were these assertions
of short life true, I would campaign against the drive maker. (I
suspect they are just normalizing the failure rate against warranty
claims.) Buy good stuff. I *wish* I could say which manufacturers
qualify; it seems Seagate and WD don't hack it. The Japanese drives
did, but since the dollar dropped -
One thing seemingly missed is the relationship between storage density
and drive temperature variation. Hard drive manufacturers are going to
be in deep doodoo when the SSD folks take the price/performance lead.
This year, I predict, and maybe another two years for long-term
reliability to lead as well.
I believe that many [most?] RAID users are looking for results (long-term
archival) that the drives were never designed to deliver. We are about
two generations away from that being a reality, I think. For other
users, I would suggest a mirror machine, with both machines being
scrubbed daily and the media being dissimilar in manufacturer and
manufacturing date.
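On the scrubbing point: with Linux md you can kick off a scrub by writing "check" to the array's sync_action file in sysfs. A minimal cron sketch follows; the md0 device name and the 03:00 schedule are placeholders, and Debian-style distros already ship a periodic checkarray job that does much the same thing:

```shell
# /etc/cron.d/md-scrub -- daily consistency check ("scrub") of /dev/md0.
# Mismatches found are reported in /sys/block/md0/md/mismatch_cnt.
0 3 * * * root echo check > /sys/block/md0/md/sync_action
```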
I can't wait until Neil gets to (has to) play/work with the coming tech.
Neat things are coming.
b-
Thread overview: 52+ messages
2010-03-25 16:20 Use of WD20EARS with MDADM Andrew Dunn
2010-03-25 17:01 ` Asdo
2010-03-25 17:58 ` Mark Knecht
2010-03-25 20:23 ` John Robinson
2010-03-26 10:45 ` Asdo
2010-03-25 17:10 ` David Lethe
2010-03-25 17:45 ` Asdo
2010-03-28 16:19 ` Stefan *St0fF* Huebner
2010-03-29 16:59 ` WD20EARS data Stefan /*St0fF*/ Hübner
2010-03-29 17:13 ` Stefan /*St0fF*/ Hübner
2010-04-14 19:53 ` Use of WD20EARS with MDADM Bill Davidsen
2010-04-19 14:17 ` Phillip Susi
2010-04-21 13:20 ` Bill Davidsen
2010-04-21 13:45 ` Tim Small
2010-04-21 14:32 ` Mikael Abrahamsson
2010-04-21 21:58 ` Berkey B Walker [this message]
2010-05-02 22:33 ` Bill Davidsen
2010-05-03 0:08 ` Berkey B Walker
2010-04-21 15:14 ` Phillip Susi
2010-04-21 16:42 ` Bill Davidsen
2010-04-21 17:36 ` Mark Knecht
2010-04-21 18:40 ` Tim Small
2010-04-21 19:01 ` Mark Knecht
2010-04-21 19:31 ` Clinton Lee Taylor
2010-04-22 0:51 ` Steven Haigh
2010-04-21 19:33 ` Phillip Susi
2010-04-21 20:36 ` Mark Knecht
2010-05-12 13:06 ` Tim Small
2010-04-22 11:40 ` wdidle3 Tim Small
2010-04-22 16:13 ` Use of WD20EARS with MDADM Khelben Blackstaff
2010-04-22 18:16 ` Simon Matthews
2010-04-22 19:44 ` Phillip Susi
2010-04-22 23:23 ` Mark Knecht
2010-04-23 0:03 ` Richard Scobie
2010-04-23 1:29 ` Mark Knecht
2010-04-23 3:49 ` Phillip Susi
2010-04-23 3:44 ` Phillip Susi
2010-04-21 15:52 ` Simon Matthews
2010-04-21 19:24 ` Richard Scobie
2010-03-26 20:27 ` Peter Kieser
2010-03-26 20:59 ` Mark Knecht
2010-03-26 21:01 ` Peter Kieser
2010-03-26 21:06 ` Mark Knecht
2010-03-26 21:16 ` Mark Knecht
2010-03-26 21:19 ` Richard Scobie
2010-03-26 22:38 ` Mark Knecht
2010-03-26 22:47 ` Peter Kieser
2010-03-26 22:50 ` Matt Garman
2010-03-26 22:51 ` Peter Kieser
2010-03-26 23:01 ` David Rees
2010-03-27 23:31 ` Mark Knecht
2010-03-26 21:45 ` Matt Garman