Linux Btrfs filesystem development
From: George Mitchell <george@chinilu.com>
To: linux-btrfs@vger.kernel.org
Subject: Re: Unexpected raid1 behaviour
Date: Tue, 19 Dec 2017 10:31:40 -0800
Message-ID: <f6785d3b-89fc-fd3b-38fe-815613b335fb@chinilu.com>
In-Reply-To: <20171219144644.GA9855@polanet.pl>

On 12/19/2017 06:46 AM, Tomasz Pala wrote:
> On Tue, Dec 19, 2017 at 07:25:49 -0500, Austin S. Hemmelgarn wrote:
>
>>> Well, the RAID1+ is all about the failing hardware.
>> About catastrophically failing hardware, not intermittent failure.
> It shouldn't matter - as long as a disk that fails once is kicked out of the
> array *if possible*. Or reattached in write-only mode as a best effort,
> meaning "will try to keep your *redundancy* copy, but won't trust it to
> be read from".
> As you can see, the "failure level handled" is determined not by definition, but by implementation.
>
> *if possible* == when there are other volume members having the same
> data /or/ there are spare members that could take over the failing ones.
>
>> I never said the hardware needed to not fail, just that it needed to
>> fail in a consistent manner.  BTRFS handles catastrophic failures of
>> storage devices just fine right now.  It has issues with intermittent
>> failures, but so does hardware RAID, and so do MD and LVM to a lesser
>> degree.
> When planning hardware failovers/backups I can't predict the failure
> pattern. So first of all - every *known* shortcoming should be
> documented somehow. Secondly - permanent failures are not handled "just
> fine", as there is (1) no automatic mount as degraded, so the machine
> won't reboot properly, and (2) the r/w degraded mount is[*] a one-timer.
> Again, this should be:
> 1. documented in the man page, as a comment on the profiles, not on a wiki page or
> in the linux-btrfs archives,
> 2. printed on screen when creating/converting a "RAID1" profile (by the btrfs tools),
> 3. blown into one's face when doing an r/w degraded mount (by the kernel).
>
> [*] yes, I know the recent kernels handle this, but the last LTS (4.14)
> is just too young.
>
> I'm not aware of the issues with MD you're referring to - I've had drives
> kicked out many times and they *never* caused any problems despite
> remaining visible in the system. Moreover, since 4.10 there is FAILFAST,
> which would do this even faster. There is also no problem with mounting
> a degraded MD array automatically, so saying that btrfs is doing "just
> fine" is, well... not even theoretically close. And in my practice it has
> never saved the day, but has already ruined a few... It's not right for
> the protection to cause more problems than it solves.
>
>> No, classical RAID (other than RAID0) is supposed to handle catastrophic
>> failure of component devices.  That is the entirety of the original
>> design purpose, and that is the entirety of what you should be using it
>> for in production.
> 1. no, it's not: https://www.cs.cmu.edu/~garth/RAIDpaper/Patterson88.pdf
>
> 2. even if it were, a single I/O failure (e.g. one bad block) might
>     be interpreted as "catastrophic", and the entire drive would then have to be kicked out.
>
> 3. if the sysadmin doesn't request any kind of device autobinding, a
> device that has already failed doesn't matter anymore - regardless of
> its current state or reappearances.
>
>> At the point where you are getting random corruption
>> on a disk and you're using anything but BTRFS for replication, you
>> _NEED_ to replace that disk, and if you don't, you risk it causing
>> corruption on the other disk.
> It's not only BTRFS - there are hardware solutions like T10 PI/DIF as well.
> Guess what a RAID controller should do in such a situation? Fail the
> drive immediately after the first CRC mismatch?
>
> BTW, do you consider "random corruption" a catastrophic failure?
>
>> As of right now, BTRFS is no different in
>> that respect, but I agree that it _should_ be able to handle such a
>> situation eventually.
> The first step should be to realize that some tunables are
> required if you want to handle many different situations.
>
> Having said that, let's get back to reality:
>
>
> Classical RAID is about keeping the system functional - trashing a
> single drive from a RAID1 should be fully ignorable by the sysadmin. The
> system must reboot properly, work properly, and there MUST NOT be ANY
> functional differences compared to non-degraded mode except for a slower
> read rate (and obviously having no more redundancy).
>
>
> - not having this == not having RAID1.
>
>> It shouldn't have been called RAID in the first place, that we can agree
>> on (even if for different reasons).
> The misnaming would be much less of a problem if it were documented
> properly (man page, btrfs-progs, and finally the kernel screaming).
>
>>> - I got one "RAID1" stuck in r/o after a degraded mount, not nice... Not
>>> _expected_ to happen after a single disk failure (without any reappearing).
>> And that's a known bug on older kernels (not to mention that you should
>> not be mounting writable and degraded for any purpose other than fixing
>> the volume).
> Yes, ...but:
>
> 1. "known" only to the people that already stepped into it, meaning too
>     late - it should be "COMMONLY known", i.e. documented,
> 2. "older kernels" are not so old, the newest mature LTS (4.9) is still
>     affected,
> 3. I was about to fix the volume, accidentally the machine has rebooted.
>     Which should do no harm if I had a RAID1.
> 4. As already said before, using r/w degraded RAID1 is FULLY ACCEPTABLE,
>     as long as you accept "no more redundancy"...
> 4a. ...or had an N-way mirror and there is still some redundancy if N>2.
>
>
> Since we agree that btrfs RAID != common RAID, as there are/were
> different design principles and some features are in a WIP state at best,
> the current behaviour should be better documented. That's it.
>
>
I have significant experience as a user of raid1. I spent years using
software raid1, then more years using hardware (3ware) raid1, and now
around 3 years using btrfs raid1. I have not found btrfs raid1 to be
less reliable than any of the previous implementations of raid. I have
found that no implementation of raid, whether software, hardware, or
filesystem, is infallible. I have also found that when you have a
failure, you don't just plug things back in and expect them to be fixed
without seriously investigating what has gone wrong and the potential
unexpected consequences.

I have found that even with hardware raid you can find ways to screw
things up to the point that you lose your data. I have had situations
where I reconnected a drive on hardware raid1 only to find that the
array would not sync, and from there I ended up having to directly
attach one of the drives and recover the partition table with TestDisk
in order to regain access to my data.

So NO FORM of raid is a replacement for backups, and NO FORM of raid is
a replacement for due diligence in recovering from a failure. Raid gives
you a second chance when things go wrong; it does not make failures
transparent, which is seemingly what we sometimes expect from it. And I
doubt that we will ever achieve that goal no matter how much effort we
put into making it happen. Even with hardware raid, things can happen
that were not foreseen by the designers. So I think we have to be
careful when we compare various raid (or "raid like") implementations.
There is no such thing as "foolproof" raid and likely never will be.
And with that I will end my rant.
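P.S. For anyone reading this in the archive, here is a rough sketch of the
operations discussed in this thread - creating/converting a raid1 profile,
mounting degraded, and restoring redundancy afterwards. Device names, the
devid, and the mount point are placeholders, and exact behaviour depends on
your kernel and btrfs-progs versions; check the man pages before relying on
any of it:

  # create a two-device filesystem with raid1 metadata and data
  mkfs.btrfs -m raid1 -d raid1 /dev/sdX /dev/sdY

  # or convert an existing filesystem's profiles to raid1
  btrfs balance start -mconvert=raid1 -dconvert=raid1 /mnt

  # a degraded mount is not automatic; with a device missing it must be
  # requested explicitly
  mount -o degraded /dev/sdX /mnt

  # then restore redundancy before doing anything else, e.g. by replacing
  # the missing device (identified by its devid)
  btrfs replace start <missing-devid> /dev/sdZ /mnt
  # or: btrfs device add /dev/sdZ /mnt && btrfs device delete missing /mnt

  # per-device error counters
  btrfs device stats /mnt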




