From: Brad Campbell <lists2009@fnarfbargle.com>
To: Bart Kus <me@bartk.us>, linux-raid@vger.kernel.org
Subject: Re: md-raid paranoia mode?
Date: Thu, 12 Jun 2014 10:15:32 +0800
Message-ID: <53990D44.300@fnarfbargle.com>
In-Reply-To: <5397FBCE.3060009@bartk.us>
On 11/06/14 14:48, Bart Kus wrote:
> Hello,
>
> As far as I understand, md-raid relies on the underlying devices to
> inform it of IO errors before it'll seek redundant/parity data to
> fulfill the read request. I have, however, seen certain hard drives
> report successful reads while returning garbage data.
If you have drives that return garbage as valid data then you have far
greater problems than the feature you're suggesting will fix. So much so
that I'd suggest you document those instances and start banging a drum in
a name-and-shame campaign. That sort of behavior from a storage device is
never OK, and the manufacturer needs to know it.
This comes up on the list at least once a year, and the upshot is that
your storage platform needs to be reliable. Storage is *supposed* to be
reliable. Even the cheapest solution is *supposed* to say "I'm sorry but
that bit of data you asked for is toast". Even my 35c USB drives do that.
Whether you have a single drive or 10 mirrors, if a drive is returning
garbage you need to solve that problem first. Patching software that is
built on the fundamental assumption that the storage stack knows when
something is bad, so that it no longer trusts that assumption, throws all
sorts of guarantees out the window.
From personal experience: I lost a 12TB RAID-6 and all the data on it
due to a bad SATA controller. The controller would return corrupt reads
under heavy load, and months of read/modify/write cycles on that corrupt
data spread the damage all over the array. My immediate reaction was the
same as yours: "RAID-6 should be able to protect against this stuff". But
after some education from people far more knowledgeable than I, it became
apparent that bad hardware is just plain insidious, and papering over one
part of the stack would only have led to it biting me somewhere else
anyway.
I learned two very valuable lessons.
- Don't deploy hardware unless you trust it. That may mean a month of
burn-in testing in a spare machine, or delaying how soon you trust it
with valuable data (a rough sketch of the kind of write/read-back check I
mean follows after this list). In my case it was a cheap 2-port PCIe SATA
card procured to get me out of a tight spot, so I plugged it in and
strapped drives to it, blindly believing it would be OK.
- RAID is no substitute for backups.
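For what it's worth, here is a minimal sketch of the sort of write/read-back
pass I have in mind. It is not the tool I used, just the idea: write data
you can recompute, sync it, read it back, and flag any block that comes back
different without an I/O error. TARGET, SIZE_MB and CHUNK are made-up
illustrative values, and the script is destructive to whatever TARGET points
at, so only aim it at a scratch device or file.

#!/usr/bin/env python3
# Minimal burn-in write/read-back check -- a sketch of the idea, not the
# tool I actually used. WARNING: destructive; TARGET must be a scratch
# device or file you can wipe. TARGET, SIZE_MB and CHUNK are illustrative.

import hashlib
import os
import sys

TARGET = "/dev/sdX"       # hypothetical scratch device -- change before use
CHUNK = 1 << 20           # 1 MiB per block
SIZE_MB = 1024            # how many blocks (MiB) to exercise
SEED = b"burn-in-pass-1"  # makes the data stream reproducible

def block(i):
    """Deterministic CHUNK-sized block derived from SEED and the block index."""
    out = bytearray()
    counter = 0
    while len(out) < CHUNK:
        out += hashlib.sha256(SEED + i.to_bytes(8, "big") +
                              counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:CHUNK])

def main():
    # Write phase: fill the target with data we can recompute later.
    with open(TARGET, "wb") as f:
        for i in range(SIZE_MB):
            f.write(block(i))
        f.flush()
        os.fsync(f.fileno())

    # Read phase: a block that differs came back corrupted *without* an
    # I/O error -- exactly the failure mode being discussed.
    bad = 0
    with open(TARGET, "rb") as f:
        for i in range(SIZE_MB):
            if f.read(CHUNK) != block(i):
                bad += 1
                print("silent corruption in block %d" % i, file=sys.stderr)
    print("%d/%d blocks verified OK" % (SIZE_MB - bad, SIZE_MB))

if __name__ == "__main__":
    main()

Run something like that repeatedly, and under load, during the burn-in
period; in my case the controller only returned corrupt reads when it was
being pushed hard.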