From: Qu Wenruo <quwenruo.btrfs@gmx.com>
To: Lukas Pirl <btrfs@lukas-pirl.de>
Cc: linux-btrfs@vger.kernel.org
Subject: Re: btrfs scrub crashes OS
Date: Tue, 26 Sep 2017 20:16:43 +0800
Message-ID: <cdf87f77-e6b3-b4cc-a47f-ad86f3111f3d@gmx.com>
In-Reply-To: <2d813e0f-2b9e-90b0-039f-58cc6a55d4e4@lukas-pirl.de>



On 2017-09-26 19:50, Lukas Pirl wrote:
> On 09/26/2017 11:36 AM, Qu Wenruo wrote as excerpted:
>> This is strange: it means we can't find a chunk map for a 72K data
>> extent.
>>
>> Either the new mapper code has a bug, or there is a bigger problem.
>> I think the former case is more likely.
>>
>> Would you please try to dump the chunk tree (which should be quite
>> small) using the following command?
>>
>> $ btrfs inspect-internal dump-tree -t chunk <device>
> 
> Sure, happy to provide that:
>    https://static.lukas-pirl.de/dump-chunk-tree.txt
> (too large for Pastebin; the file will probably go away in a couple of weeks).
> 

Found the needed info.

Your original data extent range is [644337258496, +72K],
which falls between the following two chunks:

---
	item 5 key (FIRST_CHUNK_TREE CHUNK_ITEM 643200712704) itemoff 15611 itemsize 112
		length 1073741824 owner 2 stripe_len 65536 type DATA|RAID1
		<snip>

	item 6 key (FIRST_CHUNK_TREE CHUNK_ITEM 645348196352) itemoff 15499 itemsize 112
		length 1073741824 owner 2 stripe_len 65536 type DATA|RAID1
		io_align 65536 io_width 65536 sector_size 4096
		<snip>
---
Those two chunks cover the ranges:
[643200712704, +1G)
[645348196352, +1G)

No other chunk covers the hole between them, yet your original data
extent range falls entirely inside that hole.
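
To make the arithmetic explicit (72K = 73728 bytes):

	643200712704 + 1073741824 = 644274454528   (end of the first chunk)
	644337258496 > 644274454528                (the extent starts after it)
	644337258496 + 73728 = 644337332224        (end of the extent)
	644337332224 < 645348196352                (before the second chunk starts)

So the whole extent sits in the unmapped gap
[644274454528, 645348196352).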

That's why offline scrub outputs those error messages; at least the
chunk mapping code itself is behaving correctly.

Maybe something is wrong in the extent tree.
But as you can see, it shouldn't cause too much trouble for offline
scrub: it's a user-space program and handles the problem quite well
(it prints an error message and continues, without panicking).
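
If you want to inspect the extent tree itself, one option (just a
suggestion; run it against the unmounted filesystem) is a read-only
check:

$ btrfs check --readonly <device>

If the extent tree really records an extent outside any chunk, that
should report the inconsistency as well.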

So reading the whole disk may still be needed to rule out (or confirm)
a problem in the disk I/O path.
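
One simple way to do that full read, assuming /dev/sdX stands in for
each member device, is something like:

$ dd if=/dev/sdX of=/dev/null bs=1M status=progress

while watching dmesg for any I/O errors.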

Thanks,
Qu

> Cheers,
> 
> Lukas
> 
