public inbox for linux-btrfs@vger.kernel.org
From: Qu Wenruo <wqu@suse.com>
To: Christian Wimmer <telefonchris@icloud.com>,
	Qu Wenruo <quwenruo.btrfs@gmx.com>
Cc: Anand Jain <anand.jain@oracle.com>,
	"linux-btrfs@vger.kernel.org" <linux-btrfs@vger.kernel.org>
Subject: Re: need help in a broken 2TB BTRFS partition
Date: Sun, 17 Oct 2021 07:27:06 +0800	[thread overview]
Message-ID: <7bd9c183-91b9-c646-5e22-8bc9d138f15f@suse.com> (raw)
In-Reply-To: <B57D8AFF-FE6E-4CC2-B6C1-066F3A8CEDF0@icloud.com>



On 2021/10/17 01:35, Christian Wimmer wrote:
> BTW, due to some unsuccessful boot attempts, this disk /dev/sdd1 no longer works with “-o ro,rescue=all”,
> so I decided to try some drastic commands like the following:

Well, any attempt to write to that fs would only degrade it further.

> 
> 
> Suse_Tumbleweed:/home/proc # btrfs rescue chunk-recover /dev/sdd1
> Scanning: 4914069504 in dev0cmds/rescue-chunk-recover.c:130: process_extent_buffer: BUG_ON `exist->nmirrors >= BTRFS_MAX_MIRRORS` triggered, value 1
> btrfs(+0x1a121)[0x55830a51d121]
> btrfs(+0x508dc)[0x55830a5538dc]
> /lib64/libc.so.6(+0x8db37)[0x7fa8984cbb37]
> /lib64/libc.so.6(+0x112640)[0x7fa898550640]
> Aborted (core dumped)
> 
> Unfortunately the program crashes. Is this expected?

No, but it doesn't matter much at this point.

The problem should not be chunk-tree related, as far as I know.

> 
> What else can I try if the mount command reports:
> 
> Suse_Tumbleweed:/home/proc # mount -o ro,rescue=all /dev/sdd1 /home/promise2/disk3
> mount: /home/promise2/disk3: wrong fs type, bad option, bad superblock on /dev/sdd1, missing codepage or helper program, or other error.

Dmesg would show the reason the mount fails.
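For example (a minimal sketch; the device and mount point are the ones from this thread), the relevant kernel messages can be pulled out right after the failed mount attempt:

```shell
# Retry the failed mount, then show the most recent kernel messages
# that mention btrfs or the device; these usually state the exact
# reason the mount was rejected.
mount -o ro,rescue=all /dev/sdd1 /home/promise2/disk3
dmesg | grep -iE 'btrfs|sdd1' | tail -n 20
```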

Thanks,
Qu

> 
> 
> 
> 
> 
>> On 16. Oct 2021, at 07:08, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>
>>
>>
>> On 2021/10/16 05:01, Christian Wimmer wrote:
>>> Hi Qu,
>>>
>>> I hope I find you well.
>>>
>>> Almost two years now my system has been running without any failure.
>>> Since this is very boring, I tried to make my life somewhat harder and tested the snapshot feature of my Parallels Desktop installation again yesterday :-)
>>> When I erased the old snapshots I could feel (and actually hear) that the system was writing too much to the partitions.
>>> What I want to say is that it took too long (for whatever reason) to erase the old snapshots and shut the system down.
>>
>> The slowdown seems to be caused by qgroups.
>>
>> We already have an idea of how to solve the problem and have some
>> patches for it, although they would add a new sysfs interface and may
>> need user-space tool support.
>>
>>>
>>> Well, after booting I saw that one of the discs is not coming back and I got the following error message:
>>>
>>> Suse_Tumbleweed:/home/proc # btrfs check /dev/sdd1
>>> Opening filesystem to check...
>>> parent transid verify failed on 324239360 wanted 208553 found 184371
>>> parent transid verify failed on 324239360 wanted 208553 found 184371
>>> parent transid verify failed on 324239360 wanted 208553 found 184371
>>
>> This is the typical transid mismatch, caused by missing writes.
>>
>> Normally, on a physical machine, the first thing we would suspect is
>> the disk.
>>
>> But since you're using a VM on macOS, there is a whole storage stack
>> to go through, and if any layer of that stack mishandles flush/FUA,
>> things can definitely go wrong like this.
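As a hedged illustration (read-only, it does not modify anything; the device name is the one from this thread), the generation recorded in the superblock can be compared against the numbers in the check output:

```shell
# Print the filesystem generation stored in the superblock.
# "wanted" in the check output comes from a parent block; "found"
# comes from the stale child that a lost write left behind.
btrfs inspect-internal dump-super /dev/sdd1 | grep -E '^generation'
```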
>>
>>
>>> Ignoring transid failure
>>> leaf parent key incorrect 324239360
>>> ERROR: failed to read block groups: Operation not permitted
>>> ERROR: cannot open file system
>>>
>>>
>>> Could you help me to debug and repair this please?
>>
>> Repair is not really possible.
>>
>>>
>>> I already ran the command btrfs restore /dev/sdd1 . and could restore 90% of the data, but not the important last 10%.
>>
>> With a newer kernel such as v5.14, you can use the "-o ro,rescue=all"
>> mount option, which acts much like btrfs-restore, and you may have a
>> chance to recover the lost 10%.
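A minimal sketch of that recovery path (the destination directory is hypothetical; the damaged device is only ever touched read-only):

```shell
# Mount read-only with every rescue option enabled, then copy out
# whatever is still readable before attempting anything destructive.
mount -o ro,rescue=all /dev/sdd1 /home/promise2/disk3
cp -a /home/promise2/disk3/. /path/to/backup/   # hypothetical destination
```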
>>
>>>
>>> My system is:
>>>
>>> Suse Tumbleweed inside Parallels Desktop on a Mac Mini
>>>
>>> Mac Min: Big Sur
>>> Parallels Desktop: 17.1.0
>>> Suse: Linux Suse_Tumbleweed 5.13.2-1-default #1 SMP Thu Jul 15 03:36:02 UTC 2021 (89416ca) x86_64 x86_64 x86_64 GNU/Linux
>>>
>>> Suse_Tumbleweed:~ # btrfs --version
>>> btrfs-progs v5.13
>>>
>>> The disk /dev/sdd1 is one of several 2 TB partitions that reside on a NAS attached to the Mac Mini, like this:
>>
>> Is /dev/sdd1 mapped directly into the VM, or is it something else?
>>
>> Or is it a file on a remote filesystem (like NFS) that is then mapped
>> into the VM?
>>
>> Thanks,
>> Qu
>>
>>>
>>> Disk /dev/sde: 2 TiB, 2197949513728 bytes, 4292870144 sectors
>>> Disk model: Linux_raid5_2tb_
>>> Units: sectors of 1 * 512 = 512 bytes
>>> Sector size (logical/physical): 512 bytes / 4096 bytes
>>> I/O size (minimum/optimal): 4096 bytes / 4096 bytes
>>> Disklabel type: gpt
>>> Disk identifier: 942781EC-8969-408B-BE8D-67F6A8AD6355
>>>
>>> Device     Start        End    Sectors Size Type
>>> /dev/sde1   2048 4292868095 4292866048   2T Linux filesystem
>>>
>>>
>>> What would be the next steps to repair this disk?
>>>
>>> Thank you all in advance for your help,
>>>
>>> Chris
>>>
> 


Thread overview: 53+ messages
2019-12-06  3:44 [PATCH] btrfs-progs: Skip device tree when we failed to read it Qu Wenruo
2019-12-06  6:12 ` Anand Jain
2019-12-06 15:50   ` Christian Wimmer
2019-12-06 16:34   ` Christian Wimmer
     [not found]   ` <762365A0-8BDF-454B-ABA9-AB2F0C958106@icloud.com>
2019-12-07  1:16     ` Qu WenRuo
2019-12-07  3:47       ` Christian Wimmer
2019-12-07  4:31         ` Qu Wenruo
2019-12-07 13:03           ` Christian Wimmer
2019-12-07 14:10             ` Qu Wenruo
2019-12-07 14:25               ` Christian Wimmer
2019-12-07 16:44               ` Christian Wimmer
2019-12-08  1:21                 ` Qu WenRuo
2019-12-10 21:25                   ` Christian Wimmer
2019-12-11  0:36                     ` Qu Wenruo
2019-12-11 15:57                       ` Christian Wimmer
     [not found]           ` <9FB359ED-EAD4-41DD-B846-1422F2DC4242@icloud.com>
2020-01-04 17:07             ` 12 TB btrfs file system on virtual machine broke again Christian Wimmer
2020-01-05  4:03               ` Chris Murphy
2020-01-05 13:40                 ` Christian Wimmer
2020-01-05 14:07                   ` Martin Raiber
2020-01-05 14:14                     ` Christian Wimmer
2020-01-05 14:23                       ` Christian Wimmer
2020-01-05  4:25               ` Qu Wenruo
2020-01-05 14:17                 ` Christian Wimmer
2020-01-05 18:50                   ` Chris Murphy
2020-01-05 19:18                     ` Christian Wimmer
2020-01-05 19:36                       ` Chris Murphy
2020-01-05 19:49                         ` Christian Wimmer
2020-01-05 19:52                         ` Christian Wimmer
2020-01-05 20:34                           ` Chris Murphy
2020-01-05 20:36                             ` Chris Murphy
     [not found]                         ` <3F43DDB8-0372-4CDE-B143-D2727D3447BC@icloud.com>
2020-01-05 20:30                           ` Chris Murphy
2020-01-05 20:36                             ` Christian Wimmer
2020-01-05 21:13                               ` Chris Murphy
2020-01-05 21:58                                 ` Christian Wimmer
2020-01-05 22:28                                   ` Chris Murphy
2020-01-06  1:31                                     ` Christian Wimmer
2020-01-06  1:33                                     ` Christian Wimmer
2020-01-11 17:04                                     ` 12 TB btrfs file system on virtual machine broke again (third time) Christian Wimmer
2020-01-11 17:23                                     ` Christian Wimmer
2020-01-11 19:46                                       ` Chris Murphy
2020-01-13 19:41                                         ` 12 TB btrfs file system on virtual machine broke again (fourth time) Christian Wimmer
2020-01-13 20:03                                           ` Chris Murphy
2020-01-31 16:35                                             ` btrfs not booting any more Christian Wimmer
2020-05-08 12:20                                             ` btrfs reports bad key ordering after out of memory situation Christian Wimmer
2020-01-05 23:50                   ` 12 TB btrfs file system on virtual machine broke again Qu Wenruo
2020-01-06  1:32                     ` Christian Wimmer
2020-01-11  7:25                     ` Andrei Borzenkov
2021-10-15 21:01                     ` need help in a broken 2TB BTRFS partition Christian Wimmer
2021-10-16 10:08                       ` Qu Wenruo
2021-10-16 17:29                         ` Christian Wimmer
2021-10-16 22:55                           ` Qu Wenruo
2021-10-16 17:35                         ` Christian Wimmer
2021-10-16 23:27                           ` Qu Wenruo [this message]
