* Raid1 volume stuck as read-only: How to dump, recreate and restore its content?
From: Andreas Hild @ 2018-03-11 16:28 UTC
To: linux-btrfs
Dear All,
Following a physical disk failure of a RAID1 array, I tried to mount
the remaining volume of a root partition with "-o degraded". For some
reason it ended up as read-only as described here:
https://btrfs.wiki.kernel.org/index.php/Gotchas#raid1_volumes_only_mountable_once_RW_if_degraded
How precisely does one do this, i.e. dump, recreate, and restore its
contents? Could someone please provide more details on how to recover
this volume safely?
Device 4 is missing, as shown below. /dev/sda4 is a new disk that is
now available. I'm currently using a live disk to recover this root
volume.
Many thanks!
Best wishes,
-Andreas
*uname -a
Linux debian 4.9.0-4-amd64 #1 SMP Debian 4.9.65-3 (2017-12-03) x86_64 GNU/Linux
*btrfs --version
btrfs-progs v4.7.3
*btrfs fi show
warning, device 4 is missing
Label: none  uuid: ce222f62-b31b-44d6-94ac-79de595325be
        Total devices 2 FS bytes used 27.55GiB
        devid    1 size 50.00GiB used 35.31GiB path /dev/sdb2
        *** Some devices missing
Label: none  uuid: f946ebb5-bdf9-43f4-8252-5c2d997f7021
        Total devices 1 FS bytes used 146.54GiB
        devid    1 size 407.69GiB used 153.16GiB path /dev/sdb3
Label: none  uuid: 9e312f97-b2f5-4a53-9fd0-b4fc65653dd1
        Total devices 1 FS bytes used 384.00KiB
        devid    1 size 50.00GiB used 2.02GiB path /dev/sda4
*dmesg > dmesg.log
[ 1193.006992] BTRFS error (device sda4): unable to start balance with target data profile 16
[ 1312.905264] BTRFS info (device sdb2): allowing degraded mounts
[ 1312.905271] BTRFS info (device sdb2): disk space caching is enabled
[ 1312.905275] BTRFS info (device sdb2): has skinny extents
[ 1312.907400] BTRFS warning (device sdb2): devid 4 uuid 18d9aa16-4543-41fa-90c8-560944e61b2c is missing
[ 1312.957486] BTRFS info (device sdb2): bdev (null) errs: wr 120118005, rd 38429879, flush 8815049, corrupt 0, gen 0
[ 1313.279140] BTRFS warning (device sdb2): missing devices (1) exceeds the limit (0), writeable mount is not allowed
[ 1313.312077] BTRFS error (device sdb2): open_ctree failed
* Re: Raid1 volume stuck as read-only: How to dump, recreate and restore its content?
From: Adam Borowski @ 2018-03-11 17:47 UTC
To: Andreas Hild; +Cc: linux-btrfs
On Sun, Mar 11, 2018 at 11:28:08PM +0700, Andreas Hild wrote:
> Following a physical disk failure of a RAID1 array, I tried to mount
> the remaining volume of a root partition with "-o degraded". For some
> reason it ended up as read-only as described here:
> https://btrfs.wiki.kernel.org/index.php/Gotchas#raid1_volumes_only_mountable_once_RW_if_degraded
>
>
> How precisely does one do this, i.e. dump, recreate, and restore its
> contents? Could someone please provide more details on how to recover
> this volume safely?
> Linux debian 4.9.0-4-amd64 #1 SMP Debian 4.9.65-3 (2017-12-03) x86_64 GNU/Linux
> [ 1313.279140] BTRFS warning (device sdb2): missing devices (1) exceeds the limit (0), writeable mount is not allowed
I'd recommend instead going with kernel 4.14 or newer (available in
stretch-backports), which handles this case well without the need to
restore. If there's no actual data loss (there shouldn't be, it's RAID1
with only a single device missing), you can mount degraded normally, then
balance the data onto the new disk.
Recovery with 4.9 is unpleasant.
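A minimal sketch of that route on Debian stretch, assuming the
surviving member is /dev/sdb2 and the replacement disk is /dev/sda4 as
in your report (the backports line and package name may differ on your
setup; run as root):

    echo 'deb http://deb.debian.org/debian stretch-backports main' \
        > /etc/apt/sources.list.d/backports.list
    apt update && apt -t stretch-backports install linux-image-amd64
    # after rebooting into the 4.14+ kernel:
    mount -o degraded /dev/sdb2 /mnt
    btrfs device add /dev/sda4 /mnt      # add the replacement disk
    btrfs device delete missing /mnt     # drop the failed device
    # re-replicate any chunks left single while degraded:
    btrfs balance start -dconvert=raid1,soft -mconvert=raid1,soft /mnt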
Meow!
--
⢀⣴⠾⠻⢶⣦⠀
⣾⠁⢠⠒⠀⣿⡁ A dumb species has no way to open a tuna can.
⢿⡄⠘⠷⠚⠋⠀ A smart species invents a can opener.
⠈⠳⣄⠀⠀⠀⠀ A master species delegates.
* Re: Raid1 volume stuck as read-only: How to dump, recreate and restore its content?
From: Duncan @ 2018-03-12 6:19 UTC
To: linux-btrfs
Adam Borowski posted on Sun, 11 Mar 2018 18:47:13 +0100 as excerpted:
> On Sun, Mar 11, 2018 at 11:28:08PM +0700, Andreas Hild wrote:
>> Following a physical disk failure of a RAID1 array, I tried to mount
>> the remaining volume of a root partition with "-o degraded". For some
>> reason it ended up as read-only as described here:
>> https://btrfs.wiki.kernel.org/index.php/Gotchas#raid1_volumes_only_mountable_once_RW_if_degraded
>>
>>
>> How precisely does one do this, i.e. dump, recreate, and restore its
>> contents? Could someone please provide more details on how to recover
>> this volume safely?
>
>> Linux debian 4.9.0-4-amd64 #1 SMP Debian 4.9.65-3 (2017-12-03) x86_64 GNU/Linux
>
>> [ 1313.279140] BTRFS warning (device sdb2): missing devices (1) exceeds the limit (0), writeable mount is not allowed
>
> I'd recommend instead going with kernel 4.14 or newer (available in
> stretch-backports), which handles this case well without the need to
> restore. If there's no actual data loss (there shouldn't be, it's RAID1
> with only a single device missing), you can mount degraded normally,
> then balance the data onto the new disk.
>
> Recovery with 4.9 is unpleasant.
I second the recommendation of a 4.14+ kernel with the per-chunk-check
(vs. the older per-device check) patches, both in general and if the
priority is getting the existing filesystem back into normal writable
condition.
And since we're talking about upgrades, I'll mention that 4.7
btrfs-progs version as well. For normal operations (mounting, writing,
balance, scrub, etc.) it's the kernel version that does the work: the
userspace scrub and balance commands just call the appropriate kernel
functionality, so it's the kernel version that's important to keep
current for the latest bugfixes. However, when things go wrong and
you're running commands such as btrfs check, restore, or rescue, as
well as when you create a new btrfs with mkfs.btrfs, it's the userspace
code that does the work, and thus the userspace code you want the
latest bugfixes in, in order to have the greatest chance at a fix.
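To make that split concrete, a rough sketch (device names are taken
from the original report; /some/backup/dir is a placeholder):

    # kernel code does the real work; userspace just issues ioctls:
    btrfs scrub start /mnt
    btrfs balance start /mnt
    # userspace code does the real work; use a current btrfs-progs:
    btrfs check /dev/sdb2                 # offline check, fs unmounted
    btrfs restore /dev/sdb2 /some/backup/dir
    mkfs.btrfs -m raid1 -d raid1 /dev/sdb2 /dev/sda4   # destructive!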
And with userspace versions synced with kernelspace and coming out
shortly after the kernel release of the same major.minor, five such
kernel releases a year, and 4.15 being the current kernel release, 4.7
userspace is 8 kernel series releases and over a year and a half
outdated. Put differently, 4.7 is missing a year and a half's worth of
bugfixes that you won't have when you run it to try to check or recover
that btrfs that won't mount! Do you *really* want to risk your data on
bugs that were, after all, discovered and fixed over a year ago?
Meanwhile, if the priority is simply ensuring access to the data, and
you'd prefer to stick with the 4.9-LTS series kernel if possible, then
the existing read-only filesystem should let you do just that: reliably
read the files in order to copy them elsewhere, say to a new btrfs (tho
it can just as easily be non-btrfs if desired, since you're just
copying the files as normal). It's a read-only filesystem, but that
should only prevent writing, not copying the files off.
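A minimal sketch of that copy-off, assuming the degraded filesystem is
on /dev/sdb2 and the destination (a hypothetical /mnt/new) is already
mounted:

    mount -o ro,degraded /dev/sdb2 /mnt/old
    # rsync preserves hardlinks, ACLs and xattrs; plain cp -a works too:
    rsync -aHAX /mnt/old/ /mnt/new/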
No need to worry about btrfs restore at all. It is, after all, designed
for disaster recovery, and thus (among other things) doesn't verify
checksums on the data it recovers, in order to give the best chance at
recovery when things are already badly wrong. Simply copying the data
off the read-only filesystem is the better option when it's available,
as it is here.
Alternatively, since the value of data is defined not by empty claims
but by the backups you consider it worth having of that data (and raid
is not a backup), either you have a backup to copy from if necessary,
or, by the lack of said backup, you've defined the value of the data as
too trivial to be worth bothering with a backup. In that case it's
trivial enough that it may not be worth bothering to copy it off,
either -- just blow it away with a fresh mkfs and start over.
And should you be reconsidering your backup-action (or lack thereof)
definition of the value of the data... consider yourself lucky fate
didn't take you up on that definition... this time... and take the
opportunity presented to make that backup while you have the chance,
because fate may actually take you up on that value definition next
time. =:^)
(FWIW, about a year ago I upgraded even my backups to ssd, because I
wasn't comfortable with the amount of unprotected data I had in the
delta between current/working and last backup, because backups were
enough of a hassle that I kept putting them off. By buying ssds, I
deliberately chose to spend money to bring down the hassle cost of the
backups, and thus my "trivial value" threshold definition, and backups
are fast enough now that, as I predicted, I make them far more often.
So I'm walking my own talk, and am able to sleep much more comfortably,
as I'm no longer worrying about that backup I put off and the chance
fate might take me up on my formerly too-high-for-comfort "trivial"
threshold definition. =:^)
(And as it happens, I'm actually running from a system/root filesystem
backup ATM, as an upgrade didn't go well and X wouldn't start, so I
reverted. But my root/system filesystem is under 10 gigs, on SSD for
the backup as well as the working copy, so a full backup copy of root
takes only a few minutes, and I made one before upgrading a few
packages I had doubts about due to previous upgrade issues with them.
The delta between working and that backup was literally the five
package upgrades I was, as it turned out, rightly worried about. So
that investment in ssds for backup has paid off. While in this
particular case simply taking a snapshot and recovering to it when the
upgrade went bad would have worked just as well, having the independent
filesystem backup on a different set of physical devices means I don't
have to worry about losing the filesystem or the physical devices
containing it, either! =:^)
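For anyone wanting the snapshot variant, a rough sketch, assuming / is
itself a btrfs subvolume (the snapshot paths are placeholders):

    # before the upgrade: read-only snapshot of the root subvolume
    btrfs subvolume snapshot -r / /snapshots/root-pre-upgrade
    # if the upgrade goes bad: make a writable copy and boot into it,
    # e.g. by pointing the subvol= mount option at the new subvolume
    btrfs subvolume snapshot /snapshots/root-pre-upgrade /root-rollback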
--
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman
* Re: Raid1 volume stuck as read-only: How to dump, recreate and restore its content?
From: Piotr Pawłow @ 2018-03-13 7:08 UTC
To: Duncan, linux-btrfs, a.t.hild
Hello,
> Put differently, 4.7 is missing a year and a half's worth of bugfixes that you won't have when you run it to try to check or recover that btrfs that won't mount! Do you *really* want to risk your data on bugs that were, after all, discovered and fixed over a year ago?
It is also missing newly introduced bugs. Right now I'm dealing with a btrfs raid1 server where the fs kept getting stuck and the kernel oopsed due to a regression:
https://bugzilla.kernel.org/show_bug.cgi?id=198861
I had to cherry-pick commit 3be8828fc507cdafe7040a3dcf361a2bcd8e305b and recompile the kernel to even start moving the data off the failing drive, as the fix is not in stable yet and hitting any i/o error would break the kernel. And now it seems the fs is corrupted, maybe due to all the earlier crashes.
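For reference, roughly what that cherry-pick-and-rebuild looks like on
a Debian-style system (the checked-out tag is an example only; use
whatever version you actually run):

    git clone git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
    cd linux-stable
    git checkout v4.15.9                    # example tag only
    git cherry-pick 3be8828fc507cdafe7040a3dcf361a2bcd8e305b
    cp /boot/config-"$(uname -r)" .config   # start from the running config
    make olddefconfig
    make -j"$(nproc)" bindeb-pkg            # builds installable .deb packages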
FYI, in case you decide to switch to 4.15.
* Re: Raid1 volume stuck as read-only: How to dump, recreate and restore its content?
From: Duncan @ 2018-03-15 8:58 UTC
To: linux-btrfs
Piotr Pawłow posted on Tue, 13 Mar 2018 08:08:27 +0100 as excerpted:
> Hello,
>> Put differently, 4.7 is missing a year and a half's worth of bugfixes
>> that you won't have when you run it to try to check or recover that
>> btrfs that won't mount! Do you *really* want to risk your data on bugs
>> that were, after all, discovered and fixed over a year ago?
>
> It is also missing newly introduced bugs. Right now I'm dealing with a
> btrfs raid1 server where the fs kept getting stuck and the kernel
> oopsed due to a regression:
>
> https://bugzilla.kernel.org/show_bug.cgi?id=198861
>
> I had to cherry-pick commit 3be8828fc507cdafe7040a3dcf361a2bcd8e305b
> and recompile the kernel to even start moving the data off the failing
> drive, as the fix is not in stable yet and hitting any i/o error would
> break the kernel. And now it seems the fs is corrupted, maybe due to
> all the earlier crashes.
>
> FYI, in case you decide to switch to 4.15.
In context I was referring to userspace, as the 4.7 was userspace
btrfs-progs, not kernelspace.
For kernelspace he was on 4.9, which is the second-newest LTS
(long-term-stable) kernel series, and thus should continue to be at
least somewhat supported on this list for another year or so, as we try
to support the two newest kernels from both the current and LTS series.
Tho 4.9 does lack the newer raid1 per-chunk degraded-writable check,
and AFAIK that won't be stable-backported, as it's more a feature than
a bugfix and as such doesn't meet the requirements for stable-series
backports. Which is why Adam recommended a newer kernel, since that was
the particular problem needing to be addressed here.
But for someone on an older kernel, presumably because they like
stability, I'd suggest the newer 4.14 LTS series kernel as an upgrade,
not the only-short-term-supported 4.15 series... unless the intent is
to continue staying current after that, with 4.16, 4.17, etc. Your
point about newer kernels coming with newer bugs in addition to fixes
supports this as well: moving to the 4.14 LTS should get the real fixes
and the longer stabilization time, but not the feature adds, which
would bring a higher chance of new bugs along with them.
And with 4.15 out for a while now and 4.16 close, 4.14 should be
reasonably stabilized by now and pretty safe to move to.
Of course there's some risk of new bugs in addition to fixes for newer
userspace versions too. But since it's kernelspace that's the
operational code and userspace is primarily for recovery, and we know
that older bugs ARE fixed in newer userspace, and assuming the sane
backups policy I stressed in the same post (if you don't have a backup,
you're defining the data as of less value than the
time/trouble/resources to create the backup, thus defining it as of
relatively low/trivial value in the first place, because you're more
willing to risk losing it than to spend the time/resources/hassle to
insure against that risk), the better chance of an updated userspace
being able to fix problems, with less risk of further damage, really
does justify updating to a reasonably current userspace. If there's any
doubt, stay a version or two behind the latest release and watch for
reports of problems with the latest; but certainly, with 4.15 userspace
out and no serious reports of new damage from 4.14 userspace, the
latter should now be a reasonably safe upgrade.
--
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman