From: Thiemo Nagel <thiemo.nagel@ph.tum.de>
To: Neil Brown <neilb@suse.de>
Cc: linux-raid@vger.kernel.org
Subject: Re: raid6 check/repair
Date: Fri, 30 Nov 2007 19:34:33 +0100 [thread overview]
Message-ID: <475057B9.30701@ph.tum.de> (raw)
In-Reply-To: <18254.21949.441607.134763@notabene.brown>
Dear Neil,
>> The point that I'm trying to make is, that there does exist a specific
>> case, in which recovery is possible, and that implementing recovery for
>> that case will not hurt in any way.
>
> Assuming that it true (maybe hpa got it wrong) what specific
> conditions would lead to one drive having corrupt data, and would
> correcting it on an occasional 'repair' pass be an appropriate
> response?
The use case for the proposed 'repair' would be occasional,
low-frequency corruption, for which many sources can be imagined:
Any piece of hardware has a certain failure rate, which may depend on
things like age, temperature, stability of operating voltage, cosmic
rays, etc. but also on variations in the production process. Therefore,
hardware may suffer from infrequent glitches, which are seldom enough,
to be impossible to trace back to a particular piece of equipment. It
would be nice to recover gracefully from that.
Kernel bugs or just plain administrator mistakes are another thing.
But also the case of power-loss during writing that you have mentioned
could profit from that 'repair': With heterogeneous hardware, blocks
may be written in unpredictable order, so that in more cases graceful
recovery would be possible with 'repair' compared to just recalculating
parity.
> Does the value justify the cost of extra code complexity?
In the case of protecting data integrity, I'd say 'yes'.
> Everything costs extra. Code uses bytes of memory, requires
> maintenance, and possibly introduced new bugs.
Of course, you are right. However, in my other email, I tried to sketch
a piece of code which is very lean as it makes use of functions which I
assume to exist. (Sorry, I didn't look at the md code, yet, so please
correct me if I'm wrong.) Therefore I assume the costs in memory,
maintenance and bugs to be rather low.
Kind regards,
Thiemo
next prev parent reply other threads:[~2007-11-30 18:34 UTC|newest]
Thread overview: 22+ messages / expand[flat|nested] mbox.gz Atom feed top
2007-11-21 13:25 raid6 check/repair Thiemo Nagel
2007-11-22 3:55 ` Neil Brown
2007-11-22 16:51 ` Thiemo Nagel
2007-11-27 5:08 ` Bill Davidsen
2007-11-29 6:04 ` Neil Brown
2007-11-29 6:01 ` Neil Brown
2007-11-29 19:30 ` Bill Davidsen
2007-11-29 23:17 ` Eyal Lebedinsky
2007-11-30 14:42 ` Thiemo Nagel
[not found] ` <1196650421.14411.10.camel@elara.tcw.local>
[not found] ` <47546019.5030300@ph.tum.de>
2007-12-03 20:36 ` mailing list configuration (was: raid6 check/repair) Janek Kozicki
2007-12-04 8:45 ` Matti Aarnio
2007-12-04 21:07 ` raid6 check/repair Peter Grandi
2007-12-05 6:53 ` Mikael Abrahamsson
2007-12-05 9:00 ` Leif Nixon
2007-12-05 20:31 ` Bill Davidsen
2007-12-06 18:27 ` Andre Noll
2007-12-07 17:34 ` Gabor Gombas
2007-11-30 18:34 ` Thiemo Nagel [this message]
-- strict thread matches above, loose matches on Subject: below --
2007-11-21 13:45 Thiemo Nagel
2007-12-14 15:25 ` Thiemo Nagel
2007-11-15 15:28 Leif Nixon
2007-11-16 4:26 ` Neil Brown
Reply instructions:
You may reply publicly to this message via plain-text email
using any one of the following methods:
* Save the following mbox file, import it into your mail client,
and reply-to-all from there: mbox
Avoid top-posting and favor interleaved quoting:
https://en.wikipedia.org/wiki/Posting_style#Interleaved_style
* Reply using the --to, --cc, and --in-reply-to
switches of git-send-email(1):
git send-email \
--in-reply-to=475057B9.30701@ph.tum.de \
--to=thiemo.nagel@ph.tum.de \
--cc=linux-raid@vger.kernel.org \
--cc=neilb@suse.de \
/path/to/YOUR_REPLY
https://kernel.org/pub/software/scm/git/docs/git-send-email.html
* If your mail client supports setting the In-Reply-To header
via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line
before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).