* Can't mount (even in ro) after power outage - corrupt leaf, open_ctree failed
From: msk conf @ 2018-01-21 15:53 UTC
To: linux-btrfs
Hello there,
I would like to ask for help with a (corrupted) btrfs filesystem on my NAS.
After a power outage I can't mount it at all:
UUID="e8cb7e76-7f93-4eac-aec7-ca64395d2110" /array btrfs
noatime,compress=lzo 0 2
UUID="e8cb7e76-7f93-4eac-aec7-ca64395d2110" /home btrfs
noatime,compress=lzo,subvolid=3315 0 0
I tried combinations of recovery, ro, degraded and usebackuproot, and also
specified the individual partitions the filesystem sits on - no success at all.
For example:
mount -ousebackuproot,ro
/dev/disk/by-uuid/e8cb7e76-7f93-4eac-aec7-ca64395d2110 /mnt/restore
mount -ousebackuproot,ro,degraded /dev/sdb4 /mnt/restore
...
Failing with:
[ 2765.719548] BTRFS critical (device sda4): corrupt leaf, slot offset
bad: block=3997250650112, root=1, slot=48
[ 2765.731772] BTRFS error (device sda4): failed to read block groups: -5
[ 2765.781993] BTRFS error (device sda4): open_ctree failed
Some information to start with:
uname -a
Linux nas 4.9.0-4-amd64 #1 SMP Debian 4.9.65-3 (2017-12-03) x86_64 GNU/Linux
btrfs --version
btrfs-progs v4.7.3
btrfs fi show
Label: none uuid: 4a20b115-4742-42f1-9e1a-225323faa31a
Total devices 1 FS bytes used 8.32GiB
devid 1 size 11.17GiB used 11.02GiB path /dev/md0
Label: none uuid: e8cb7e76-7f93-4eac-aec7-ca64395d2110
Total devices 2 FS bytes used 1.86TiB
devid 2 size 3.62TiB used 1.86TiB path /dev/sdb4
devid 4 size 3.62TiB used 1.86TiB path /dev/sda4
sda4+sdb4 is the btrfs raid1 filesystem which is currently failing to mount.
btrfs check /dev/sda4
incorrect offsets 13686 13622
checking extents
incorrect offsets 13686 13622
bad block 3996803399680
Errors found in extent allocation tree or chunk allocation
checking free space cache
checking fs roots
checking csums
incorrect offsets 13686 13622
Error looking up extent record -1
Csum exists for 3154150576128-3154158186496 but there is no extent record
incorrect offsets 13686 13622
Error looking up extent record -1
Csum exists for 3154158247936-3154158645248 but there is no extent record
incorrect offsets 13686 13622
Error looking up extent record -1
Csum exists for 3154158657536-3154160283648 but there is no extent record
incorrect offsets 13686 13622
Error looking up extent record -1
Csum exists for 3154160676864-3154160701440 but there is no extent record
incorrect offsets 13686 13622
Error looking up extent record -1
Csum exists for 3154160951296-3154162651136 but there is no extent record
incorrect offsets 13686 13622
Error looking up extent record -1
Csum exists for 3154162782208-3154162786304 but there is no extent record
incorrect offsets 13686 13622
Error looking up extent record -1
Csum exists for 3154163032064-3154164342784 but there is no extent record
incorrect offsets 13686 13622
Error looking up extent record -1
Csum exists for 3154164428800-3154165215232 but there is no extent record
incorrect offsets 13686 13622
Error looking up extent record -1
Csum exists for 3154165477376-3154165501952 but there is no extent record
incorrect offsets 13686 13622
Error looking up extent record -1
Csum exists for 3154165764096-3154799960064 but there is no extent record
incorrect offsets 13686 13622
Error looking up extent record -1
Csum exists for 3160185249792-3161232998400 but there is no extent record
incorrect offsets 13686 13622
Error looking up extent record -1
Csum exists for 3176324980736-3176425730048 but there is no extent record
incorrect offsets 13686 13622
Error looking up extent record -1
Csum exists for 3176426110976-3176831508480 but there is no extent record
incorrect offsets 13686 13622
Error looking up extent record -1
Csum exists for 3176831909888-3177245937664 but there is no extent record
incorrect offsets 13715 13651
Error looking up extent record -1
Csum exists for 4098653097984-4098792165376 but there is no extent record
incorrect offsets 13715 13651
Error looking up extent record -1
Csum exists for 4098792427520-4098934644736 but there is no extent record
incorrect offsets 13715 13651
Error looking up extent record -1
Csum exists for 4116888281088-4116889329664 but there is no extent record
incorrect offsets 13715 13651
Error looking up extent record -1
Csum exists for 4116889731072-4116952510464 but there is no extent record
incorrect offsets 13686 13622
Error looking up extent record -1
Csum exists for 4167480627200-4168429125632 but there is no extent record
Checking filesystem on /dev/sda4
UUID: e8cb7e76-7f93-4eac-aec7-ca64395d2110
found 1935048380418 bytes used err is 19
total csum bytes: 0
total tree bytes: 285835264
total fs tree bytes: 0
total extent tree bytes: 284655616
btree space waste bytes: 61859235
file data blocks allocated: 495452160
referenced 495452160
What I haven't tried yet is recovery (I don't have another drive to place
the data on) and --repair (I am scared to do so).
Am I lost and do I have to buy another drive and try a restore, or is there
a way to fix this?
Any help is appreciated.
Thank you.
--
msk
* Re: Can't mount (even in ro) after power outage - corrupt leaf, open_ctree failed
From: Chris Murphy @ 2018-01-21 21:54 UTC
To: msk conf; +Cc: Btrfs BTRFS
On Sun, Jan 21, 2018 at 8:53 AM, msk conf <msk.conf@gmail.com> wrote:
> Hello there,
>
> I would like to ask you for help with (corrupted) btrfs on my nas.
>
> After power outage I can't mount it back at all:
> UUID="e8cb7e76-7f93-4eac-aec7-ca64395d2110" /array btrfs
> noatime,compress=lzo 0 2
> UUID="e8cb7e76-7f93-4eac-aec7-ca64395d2110" /home btrfs
> noatime,compress=lzo,subvolid=3315 0 0
What do you get for
btrfs fi df /array
I want to know if all block groups are raid1.
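(For illustration only - not output from your filesystem - if everything is
raid1 the report would show the RAID1 profile on every chunk type, roughly:

Data, RAID1: total=..., used=...
System, RAID1: total=..., used=...
Metadata, RAID1: total=..., used=...
GlobalReserve, single: total=..., used=...

GlobalReserve is a virtual reservation and always reports "single"; only a
"single" or "DUP" profile on Data/Metadata/System would be a concern.)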
>
> I tried combination of recovery,ro,degraded,usebackuproot, specifying
> partial partitions where fs is placed to - no success at all.
> for example
> mount -ousebackuproot,ro
> /dev/disk/by-uuid/e8cb7e76-7f93-4eac-aec7-ca64395d2110 /mnt/restore
> mount -ousebackuproot,ro,degraded /dev/sdb4 /mnt/restore
> ...
>
> Failing with:
> [ 2765.719548] BTRFS critical (device sda4): corrupt leaf, slot offset bad:
> block=3997250650112, root=1, slot=48
> [ 2765.731772] BTRFS error (device sda4): failed to read block groups: -5
> [ 2765.781993] BTRFS error (device sda4): open_ctree failed
I don't know that "degraded" will know which device to ignore when
both devices are available. But then, if everything is raid1, a
failure on sda4 should result in a retry on sdb4. There's not enough
information in the log provided. Sounds like a bad sector resulting in
an IO error.
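Something along these lines would help rule a bad sector in or out (attribute
names vary by vendor, so the grep pattern is only a guess):

smartctl -A /dev/sda | grep -iE 'pending|uncorrect|realloc'
smartctl -A /dev/sdb | grep -iE 'pending|uncorrect|realloc'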
>
> Some informations at beginning:
>
> uname -a
> Linux nas 4.9.0-4-amd64 #1 SMP Debian 4.9.65-3 (2017-12-03) x86_64 GNU/Linux
>
> btrfs --version
> btrfs-progs v4.7.3
I wouldn't use btrfs check, let alone use it with --repair, with
this version. Get something newer, ideally 4.14.1.
The thing to include in the next report is output from:
btrfs check
btrfs check --mode=lowmem
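For example, with the device path taken from your fi show output (both runs
are read-only as long as you don't pass --repair):

btrfs check /dev/sda4
btrfs check --mode=lowmem /dev/sda4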
I don't recommend using --repair until you've exhausted all other
options, including any scraping necessary with btrfs restore to get
your backups up to date.
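A rough sketch, assuming you can attach some other disk with enough free
space as the target (/mnt/scratch is just a placeholder for wherever that
spare disk ends up mounted):

# dry run first, to see what restore thinks it can recover
btrfs restore -D -v /dev/sda4 /mnt/scratch
# then the actual scrape; -i keeps going past errors
btrfs restore -v -i /dev/sda4 /mnt/scratch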
--
Chris Murphy
* Re: Can't mount (even in ro) after power outage - corrupt leaf, open_ctree failed
From: Chris Murphy @ 2018-01-21 23:24 UTC
To: msk conf, Btrfs BTRFS
On Sun, Jan 21, 2018 at 4:13 PM, Chris Murphy <lists@colorremedies.com> wrote:
> On Sun, Jan 21, 2018 at 3:31 PM, msk conf <msk.conf@gmail.com> wrote:
>> Hello,
>>
>> thank you for the reply.
>>
>>> What do you get for btrfs fi df /array
>>
>>
>> Can't do that because filesystem is not mountable. I will get stats for '/'
>> filesystem instead (because '/array' is an empty directory - mountpoint on /
>
> Try
> $ sudo btrfs-debug-tree -t chunk /dev/mapper/first | grep 'METADATA\|SYSTEM'
You need to adapt that /dev node for your case; I just copy-pasted it
from my setup. Anyway, that will look at the chunk tree and show the
profile for these chunk types.
--
Chris Murphy
* Re: Can't mount (even in ro) after power outage - corrupt leaf, open_ctree failed
From: Zatkovský Dušan @ 2018-01-22 9:14 UTC
To: linux-btrfs
Hi.
Badblocks finished on both disks with no errors. The only messages from
the kernel during the night were 6x "perf: interrupt took too long
(2511 > 2500), lowering kernel.perf_event_max_sample_rate to 79500".
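(For reference, a plain read-only badblocks pass looks roughly like this;
-s shows progress, -v is verbose, and without -w or -n nothing is written
to the disks:

badblocks -sv /dev/sda
badblocks -sv /dev/sdb
)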
root@nas:~# smartctl -l scterc /dev/sda
smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.9.0-4-amd64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
SCT Error Recovery Control:
Read: 70 (7.0 seconds)
Write: 70 (7.0 seconds)
root@nas:~# smartctl -l scterc /dev/sdb
smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.9.0-4-amd64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
SCT Error Recovery Control:
Read: 70 (7.0 seconds)
Write: 70 (7.0 seconds)
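(7.0 seconds means the drives give up on an unreadable sector well before
the kernel's default 30 second command timeout, which is what you want in a
raid setup. If a drive ever reports the timer as Disabled, it can usually be
re-enabled per boot with something like:

smartctl -l scterc,70,70 /dev/sdX

where the two values are the read and write timeouts in units of 100 ms.)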
root@nas:~# btrfs-debug-tree -t chunk /dev/sda4 | grep 'METADATA\|SYSTEM'
incorrect offsets 13686 13622
type METADATA|RAID1 num_stripes 2
type METADATA|RAID1 num_stripes 2
type SYSTEM|RAID1 num_stripes 2
type METADATA|RAID1 num_stripes 2
type METADATA|RAID1 num_stripes 2
root@nas:~# btrfs-debug-tree -t chunk /dev/sdb4 | grep 'METADATA\|SYSTEM'
incorrect offsets 13686 13622
type METADATA|RAID1 num_stripes 2
type METADATA|RAID1 num_stripes 2
type SYSTEM|RAID1 num_stripes 2
type METADATA|RAID1 num_stripes 2
type METADATA|RAID1 num_stripes 2
(Still using the "old" version of the btrfs tools - I'm working remotely now
and will boot something newer when I get access to that NAS at EOD.)
Thank you
msk
On 22. 1. 2018 at 0:24, Chris Murphy wrote:
> On Sun, Jan 21, 2018 at 4:13 PM, Chris Murphy <lists@colorremedies.com> wrote:
>> On Sun, Jan 21, 2018 at 3:31 PM, msk conf <msk.conf@gmail.com> wrote:
>>> Hello,
>>>
>>> thank you for the reply.
>>>
>>>> What do you get for btrfs fi df /array
>>>
>>> Can't do that because filesystem is not mountable. I will get stats for '/'
>>> filesystem instead (because '/array' is an empty directory - mountpoint on /
>> Try
>> $ sudo btrfs-debug-tree -t chunk /dev/mapper/first | grep 'METADATA\|SYSTEM'
>
> You need to adapt that /dev/ node for your case, I just copy pasted
> that from my setup. Anyway, that will look at the chunk tree and show
> the profile for these chunk types.
>
>
* Re: Can't mount (even in ro) after power outage - corrupt leaf, open_ctree failed
From: Zatkovský Dušan @ 2018-01-24 8:56 UTC
To: linux-btrfs
OK, so I ended up using btrfs restore; it seems that all (or at least the
most important) files were restored.
Now I'm looking for another reliable filesystem which will not unrecoverably
die on a power outage.
msk
On 22. 1. 2018 at 10:14, Zatkovský Dušan wrote:
> Hi.
>
> Badblocks finished on both disks with no errors. The only messages
> from kernel
> during night are 6x perf: interrupt took too long (2511 > 2500),
> lowering kernel.perf_event_max_sample_rate to 79500
>
> root@nas:~# smartctl -l scterc /dev/sda
> smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.9.0-4-amd64] (local build)
> Copyright (C) 2002-16, Bruce Allen, Christian Franke,
> www.smartmontools.org
>
> SCT Error Recovery Control:
> Read: 70 (7.0 seconds)
> Write: 70 (7.0 seconds)
>
> root@nas:~# smartctl -l scterc /dev/sdb
> smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.9.0-4-amd64] (local build)
> Copyright (C) 2002-16, Bruce Allen, Christian Franke,
> www.smartmontools.org
>
> SCT Error Recovery Control:
> Read: 70 (7.0 seconds)
> Write: 70 (7.0 seconds)
>
> root@nas:~# btrfs-debug-tree -t chunk /dev/sda4 | grep 'METADATA\|SYSTEM'
> incorrect offsets 13686 13622
> type METADATA|RAID1 num_stripes 2
> type METADATA|RAID1 num_stripes 2
> type SYSTEM|RAID1 num_stripes 2
> type METADATA|RAID1 num_stripes 2
> type METADATA|RAID1 num_stripes 2
>
> root@nas:~# btrfs-debug-tree -t chunk /dev/sdb4 | grep 'METADATA\|SYSTEM'
> incorrect offsets 13686 13622
> type METADATA|RAID1 num_stripes 2
> type METADATA|RAID1 num_stripes 2
> type SYSTEM|RAID1 num_stripes 2
> type METADATA|RAID1 num_stripes 2
> type METADATA|RAID1 num_stripes 2
>
> (still used "old" version of btrfs tools, working remotely now, I will
> boot something newer when I will get access to that NAS at EOD)
>
> Thank you
> msk
>
>
> On 22. 1. 2018 at 0:24, Chris Murphy wrote:
>> On Sun, Jan 21, 2018 at 4:13 PM, Chris Murphy
>> <lists@colorremedies.com> wrote:
>>> On Sun, Jan 21, 2018 at 3:31 PM, msk conf <msk.conf@gmail.com> wrote:
>>>> Hello,
>>>>
>>>> thank you for the reply.
>>>>
>>>>> What do you get for btrfs fi df /array
>>>>
>>>> Can't do that because filesystem is not mountable. I will get stats
>>>> for '/'
>>>> filesystem instead (because '/array' is an empty directory -
>>>> mountpoint on /
>>> Try
>>> $ sudo btrfs-debug-tree -t chunk /dev/mapper/first | grep
>>> 'METADATA\|SYSTEM'
>>
>> You need to adapt that /dev/ node for your case, I just copy pasted
>> that from my setup. Anyway, that will look at the chunk tree and show
>> the profile for these chunk types.
>>
>>
>