From: Bill Davidsen <davidsen@tmr.com>
To: berk walker <berk@panix.com>
Cc: dean gaudet <dean@arctic.org>,
Robin Bowes <robin-lists@robinbowes.com>,
linux-raid@vger.kernel.org
Subject: Re: raid5 software vs hardware: parity calculations?
Date: Tue, 16 Jan 2007 00:06:31 -0500
Message-ID: <45AC5D57.50001@tmr.com>
In-Reply-To: <45AC1DD9.9070402@panix.com>
berk walker wrote:
>
> dean gaudet wrote:
>> On Mon, 15 Jan 2007, Robin Bowes wrote:
>>
>>
>>> I'm running RAID6 instead of RAID5+1 - I've had a couple of instances
>>> where a drive has failed in a RAID5+1 array and a second has failed
>>> during the rebuild after the hot-spare had kicked in.
>>>
>>
>> if the failures were read errors without losing the entire disk (the
>> typical case) then new kernels are much better -- on a read error md
>> will reconstruct the sectors from the other disks and attempt to
>> write them back.
>>
>> you can also run monthly "checks"...
>>
>> echo check >/sys/block/mdX/md/sync_action
>>
>> it'll read the entire array (parity included) and correct read errors
>> as they're discovered.
>>
>> -dean
>
> Could I get a pointer as to how I can do this "check" in my FC5 [BLAG]
> system? I can find no appropriate "check", nor "md" available to me.
> It would be a "good thing" if I were able to find potentially weak
> spots, rewrite them to good sectors, and know that it might be time
> for a new drive.
Grab a recent mdadm source; the check support is part of that.
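
For reference, dean's check can be run straight from the shell on any
kernel recent enough to expose sync_action in sysfs; a minimal sketch,
assuming your array is /dev/md0 (substitute your own device):

  # start a consistency check of the whole array
  echo check > /sys/block/md0/md/sync_action
  # watch its progress
  cat /proc/mdstat
  # once it finishes, see how many inconsistencies were found
  cat /sys/block/md0/md/mismatch_cnt

To make it monthly, an /etc/crontab entry along these lines would do
(the schedule shown is just an example):

  0 4 1 * * root echo check > /sys/block/md0/md/sync_action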
>
> All of my arrays have drives of approx the same mfg date, so the
> possibility of more than one showing bad at the same time can not be
> ignored.
Never can, but it is highly unlikely given the MTBF of modern drives,
and when you consider total drive failures as opposed to bad sectors
the odds get smaller still. There is no perfect way to avoid ever
losing data, only ways to reduce the chance, balancing the cost of
data loss against the cost of hardware. Current Linux will rewrite
bad sectors; whole-drive failures are an argument for spares.
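
On the spares point, adding a hot spare is a one-liner with mdadm; a
sketch, assuming /dev/md0 is the array and /dev/sdd1 is the spare
partition (both placeholders):

  # add a hot spare; md will pull it in automatically on a failure
  mdadm /dev/md0 --add /dev/sdd1
  # confirm it shows up as a spare
  mdadm --detail /dev/md0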
--
bill davidsen <davidsen@tmr.com>
CTO TMR Associates, Inc
Doing interesting things with small computers since 1979
Thread overview: 18+ messages
2007-01-11 22:44 raid5 software vs hardware: parity calculations? James Ralston
2007-01-12 17:39 ` dean gaudet
2007-01-12 20:34 ` James Ralston
2007-01-13 9:20 ` Dan Williams
2007-01-13 17:32 ` Bill Davidsen
2007-01-13 23:23 ` Robin Bowes
2007-01-14 3:16 ` dean gaudet
2007-01-15 11:48 ` Michael Tokarev
2007-01-15 15:29 ` Bill Davidsen
2007-01-15 16:22 ` Robin Bowes
2007-01-15 17:37 ` Bill Davidsen
2007-01-15 21:25 ` dean gaudet
2007-01-15 21:32 ` Gordon Henderson
2007-01-16 0:35 ` berk walker
2007-01-16 0:48 ` dean gaudet
2007-01-16 3:41 ` Mr. James W. Laferriere
2007-01-16 4:16 ` dean gaudet
2007-01-16 5:06 ` Bill Davidsen [this message]