From: Jon Hardcastle <jd_hardcastle@yahoo.com>
To: linux-raid@vger.kernel.org, Maurice Hilarius <maurice@harddata.com>
Subject: Re: Interesting article
Date: Thu, 15 Jan 2009 01:09:44 -0800 (PST)
Message-ID: <148335.68682.qm@web51311.mail.re2.yahoo.com>
In-Reply-To: <496E44DE.7090200@harddata.com>
I have read about this before, and what I turned up then was that it is true: a failing drive can cause others to fail, particularly if the drives are old and haven't been regularly made to sweat!
My setup is 2x320GB mirrored plus 6x500GB RAID 5 with one of them as a spare (for now) - all software RAID, with LVM on top.
I run smartctl self-tests on each of the drives six days a week and monitor the results - I rotate through the long and short tests so that I don't have six drives all doing long tests on the same day (that really affects performance when streaming video from the array!).
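That rotation can be sketched as a small script run daily from cron. This is only a sketch: the device names /dev/sd[a-f] are hypothetical, and the smartctl call is skipped if smartmontools isn't installed.

```shell
#!/bin/sh
# Rotate SMART self-tests so only one drive runs a long test per day.
# /dev/sd[a-f] are hypothetical device names -- substitute your own.
DRIVES="sda sdb sdc sdd sde sdf"

# Which test should drive number $1 (0-based) run on weekday $2 (1-7)?
# Exactly one drive gets the long test on any given day.
test_for() {
    if [ $(( $1 % 7 + 1 )) -eq "$2" ]; then
        echo long
    else
        echo short
    fi
}

day=$(date +%u)            # 1 = Monday ... 7 = Sunday
i=0
for d in $DRIVES; do
    kind=$(test_for "$i" "$day")
    # Schedule the SMART self-test; guarded so the sketch is harmless
    # on a machine without smartctl installed.
    command -v smartctl >/dev/null 2>&1 && smartctl -t "$kind" "/dev/$d"
    i=$((i + 1))
done
```

Results can then be read back later with `smartctl -l selftest /dev/sdX`.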
I also do a RAID 'scrub' (check) once a week, and if it reports mismatches I do a repair.
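The check/repair cycle can be driven through md's sysfs interface. A minimal sketch, assuming the array is called md0 (a hypothetical name); the sysfs access is guarded so the script does nothing on a machine without that array:

```shell
#!/bin/sh
# Weekly scrub sketch for a hypothetical array md0.
MD=md0
SYS="/sys/block/$MD/md"

# Decide what to do once a check pass has finished, given mismatch_cnt.
action_for() {
    if [ "$1" -gt 0 ]; then echo repair; else echo none; fi
}

if [ -w "$SYS/sync_action" ]; then
    # "check" reads every stripe and compares data against parity.
    echo check > "$SYS/sync_action"
    # Wait for the pass to finish, then look at the mismatch count.
    while [ "$(cat "$SYS/sync_action")" != "idle" ]; do sleep 60; done
    if [ "$(action_for "$(cat "$SYS/mismatch_cnt")")" = "repair" ]; then
        # "repair" rewrites parity where the check found inconsistencies.
        echo repair > "$SYS/sync_action"
    fi
fi
```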
Also, when I 'grow' one of the LVs I run e2fsck -cc on it to flush out any bad sectors by reading AND writing.
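For reference, a sketch of that sequence; the VG/LV names and mount point are hypothetical, and e2fsck needs the filesystem unmounted. The -cc option runs badblocks in non-destructive read-write mode and records anything it finds in the filesystem's bad-block list:

```shell
lvextend -L +50G /dev/vg0/media   # grow the LV
umount /srv/media                 # e2fsck needs the filesystem offline
e2fsck -f -cc /dev/vg0/media      # read-write scan; records bad blocks
resize2fs /dev/vg0/media          # grow the filesystem into the new space
mount /dev/vg0/media /srv/media
```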
This has actually done just that: one of my drives is now showing 800 reallocated sectors! (It is going back.)
As a result I now plan to do monthly individual drive checks by dismantling the array and running badblocks on each drive in turn.
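One way to sketch that per-drive pass, assuming a member partition /dev/sdc1 of array /dev/md1 (both names hypothetical). Note the array runs degraded, or rebuilds onto its spare, while the member is out:

```shell
mdadm /dev/md1 --fail /dev/sdc1 --remove /dev/sdc1   # drop the member
badblocks -sv /dev/sdc                               # read-only surface scan
mdadm /dev/md1 --add /dev/sdc1                       # re-add; md resyncs it
```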
The point of all this is to flush out any bad sectors before they get a chance to become a problem.
I also do a nightly copy of the files I absolutely cannot do without (photos et al.) to the mirrored 320GB drives, as well as burning DVDs every six months.
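A cron entry driving rsync is one way to do that nightly copy; the source and destination paths here are hypothetical:

```shell
# m h dom mon dow  command  (runs at 03:00 every night)
0 3 * * *  rsync -a /srv/media/photos/ /mnt/mirror/photos/
```

Leaving out --delete means files removed from the source by accident still survive on the mirror until the next manual clean-up.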
Hope this is of interest to someone!
-----------------------
N: Jon Hardcastle
E: Jon@eHardcastle.com
'..Be fearful when others are greedy, and be greedy when others are fearful.'
-----------------------
--- On Wed, 14/1/09, Maurice Hilarius <maurice@harddata.com> wrote:
> From: Maurice Hilarius <maurice@harddata.com>
> Subject: Interesting article
> To: linux-raid@vger.kernel.org
> Date: Wednesday, 14 January, 2009, 8:02 PM
> I read this today:
> http://blogs.zdnet.com/storage/?p=162
>
> Would anyone who knows enough about this care to comment?
>
> Thanks in advance for any thoughts..
>
>
> --
> With our best regards,
>
> Maurice W. Hilarius    Telephone: 01-780-456-9771
> Hard Data Ltd.         FAX:       01-780-456-9772
> 11060 - 166 Avenue     email: maurice@harddata.com
> Edmonton, AB, Canada   T5X 1Y3
>