From: "Austin S. Hemmelgarn" <ahferroin7@gmail.com>
To: Chris Murphy <lists@colorremedies.com>,
Martin Steigerwald <martin@lichtvoll.de>
Cc: Martin Steigerwald <martin.steigerwald@teamix.de>,
Roman Mamedov <rm@romanrm.net>,
Btrfs BTRFS <linux-btrfs@vger.kernel.org>
Subject: Re: degraded BTRFS RAID 1 not mountable: open_ctree failed, unable to find block group for 0
Date: Thu, 17 Nov 2016 15:20:56 -0500
Message-ID: <5be14cba-943b-a622-b9af-394b76f2e650@gmail.com>
In-Reply-To: <CAJCQCtQnSYRMUWb3V3Qn+chb2o18F5dgoy2mxPw8vqnohrErjQ@mail.gmail.com>
On 2016-11-17 15:05, Chris Murphy wrote:
> I think the wiki should be updated to reflect that raid1 and raid10
> are mostly OK. I think it's grossly misleading to consider either as
> green/OK when a single degraded read-write mount creates single chunks
> that will then prevent a subsequent degraded read-write mount. And the
> lack of various notifications of device faultiness also makes it, I
> think, less than OK. It's not in the "do not use" category, but it
> should be in the middle-ground status so users can make informed
> decisions.
>
It's also worth pointing out regarding this:
* This is handled sanely in recent kernels (the check got changed from
per-fs to per-chunk, so you still have a usable FS as long as all the
single chunks are on devices you still have). There's a sketch of
checking for and cleaning up those single chunks after this list.
* This is only an issue with filesystems with exactly two disks. If a
3+ disk raid1 FS goes degraded, you still generate raid1 chunks.
* There are a couple of other cases where raid1 mode falls flat on its
face (for example, lots of I/O errors in a short span of time with
compression enabled can cause a kernel panic).
* raid10 has some other issues of its own (lose two devices and your
filesystem is dead, which shouldn't be the case 100% of the time: if
you lose different parts of each mirror, BTRFS _should_ be able to
recover, it just doesn't do so right now).
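For anyone who does end up with stray single chunks after a degraded
read-write mount, a rough recovery sketch (assuming all devices are back
and the filesystem is mounted at /mnt, which is just a placeholder path)
looks like this:

  # Any "single" lines in this output are chunks that were created
  # while the filesystem was mounted degraded.
  btrfs filesystem df /mnt

  # Convert only the chunks that aren't already raid1 back to raid1;
  # the "soft" filter skips chunks that already have the target profile.
  btrfs balance start -dconvert=raid1,soft -mconvert=raid1,soft /mnt

Once the balance finishes, a later degraded mount shouldn't trip over
leftover single chunks that only exist on a missing device.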
As far as the failed device handling issues go, those are a problem with
BTRFS in general, not just raid1 and raid10, so I wouldn't count them
against those two profiles specifically.
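Until that improves, the closest thing to fault notification is polling
the per-device error counters yourself. A minimal sketch (again, /mnt is
just a placeholder path):

  # Per-device write/read/flush/corruption/generation error counters;
  # anything non-zero here is worth investigating.
  btrfs device stats /mnt

  # Crude cron-able check: print only counters that aren't zero.
  btrfs device stats /mnt | grep -vE ' 0$'

Periodic scrubs (btrfs scrub start /mnt) will also bump those counters
when they hit corrupted or unreadable blocks, so running them regularly
at least gives you something to alert on.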