public inbox for linux-btrfs@vger.kernel.org
From: Zygo Blaxell <ce3g8jdj@umail.furryterror.org>
To: linux-btrfs@vger.kernel.org
Subject: Re: repeatable(ish) corrupt leaf filesystem splat on 5.1.x - fixed in 5.4.14, 5.5.0
Date: Mon, 3 Feb 2020 23:43:33 -0500	[thread overview]
Message-ID: <20200204044333.GZ13306@hungrycats.org> (raw)
In-Reply-To: <20190704210323.GK11831@hungrycats.org>


On Thu, Jul 04, 2019 at 05:03:23PM -0400, Zygo Blaxell wrote:
> I've seen this twice in 3 days after releasing 5.1.x kernels from the
> test lab:
> 
> 5.1.15 on 2xSATA RAID1 SSD, during a balance:
> 
> 	[48714.200014][ T3498] BTRFS critical (device dm-21): corrupt leaf: root=2 block=117776711680 slot=57, unexpected item end, have 109534755 expect 12632
> 	[48714.200381][ T3498] BTRFS critical (device dm-21): corrupt leaf: root=2 block=117776711680 slot=57, unexpected item end, have 109534755 expect 12632
> 	[48714.200399][ T9749] BTRFS: error (device dm-21) in __btrfs_free_extent:7109: errno=-5 IO failure
> 	[48714.200401][ T9749] BTRFS info (device dm-21): forced readonly
> 	[48714.200405][ T9749] BTRFS: error (device dm-21) in btrfs_run_delayed_refs:3008: errno=-5 IO failure
> 	[48714.200419][ T9749] BTRFS info (device dm-21): found 359 extents
> 	[48714.200442][ T9749] BTRFS info (device dm-21): 1 enospc errors during balance
> 	[48714.200445][ T9749] BTRFS info (device dm-21): balance: ended with status: -30
> 
> and 5.1.9 on 1xNVME, a few hours after some /proc NULL pointer dereference
> bugs:
> 
> 	[89244.144505][ T7009] BTRFS critical (device dm-4): corrupt leaf: root=2 block=1854946361344 slot=32, unexpected item end, have 1335222703 expect 15056
> 	[89244.144822][ T7009] BTRFS critical (device dm-4): corrupt leaf: root=2 block=1854946361344 slot=32, unexpected item end, have 1335222703 expect 15056
> 	[89244.144832][ T2403] BTRFS: error (device dm-4) in btrfs_run_delayed_refs:3008: errno=-5 IO failure
> 	[89244.144836][ T2403] BTRFS info (device dm-4): forced readonly
> 
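(Aside, for anyone hitting the same splat: the "unexpected item end, have X
expect Y" message comes from the btrfs tree checker validating leaf layout.
A toy sketch of the invariant it enforces -- not the kernel code, and
LEAF_DATA_SIZE below is just an illustrative value for 16 KiB nodes:

```python
# Toy model (NOT the kernel code) of the invariant behind the tree
# checker's "unexpected item end, have X expect Y" message.  In a btrfs
# leaf, item headers grow forward from the start of the data area and
# item data grows backward from its end, so item N's data must end
# exactly where item N-1's data begins (or at the end of the data area
# for slot 0).

LEAF_DATA_SIZE = 16283  # illustrative: 16 KiB node minus the 101-byte header

def check_leaf(items):
    """items: list of (offset, size) pairs, one per slot, in slot order.
    Returns (slot, have, expect) for the first bad slot, or None."""
    expect = LEAF_DATA_SIZE
    for slot, (offset, size) in enumerate(items):
        have = offset + size
        if have != expect:
            return (slot, have, expect)
        expect = offset  # the next item's data must end where this one starts
    return None

# A consistent leaf: three items packed back-to-back.
assert check_leaf([(16183, 100), (16083, 100), (15983, 100)]) is None

# A leaf whose slot-1 header was overwritten, like the splats above:
# offset+size ("have") no longer lands where slot 0's data starts.
print(check_leaf([(16183, 100), (999999, 100), (15983, 100)]))
# -> (1, 1000099, 16183)
```

Note how the "have" values in the splats above (109534755, 1335222703) are
far outside any 16 KiB leaf, i.e. the stored item header bytes are garbage,
not an off-by-a-little accounting error.)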
> The machines had been upgraded from 5.0.x to 5.1.x for less than 24
> hours each.
> 
> The 5.1.9 machine had crashed (on 5.0.15) before, but a scrub had passed
> while running 5.1.9 after the crash.  The filesystem failure occurred
> 20 hours later.  There were some other NULL pointer dereferences in that
> uptime, so maybe 5.1.9 is just a generally buggy kernel that nobody
> should ever run.  I expect better from 5.1.15, though, which had no
> unusual events reported in the 8 hours between its post-reboot scrub
> and the filesystem failure.
> 
> I have several other machines running 5.1.x kernels that have not yet had
> such a failure--including all of my test machines, which don't seem to hit
> this issue after 25+ days of stress-testing.  Most of the test machines
> are using rotating disks, a few are running SSD+HDD with lvmcache.
> 
> One correlation that may be interesting:  both of the failing filesystems
> had 1MB unallocated on all disks; all of the non-failing filesystems have
> 1GB or more unallocated on all disks.  I was running the balance on the
> first filesystem to try to free up some unallocated space.  The second
> filesystem died without any help from me.
> 
> It turns out that 'btrfs check --repair' can fix these!  First time
> I've ever seen check --repair fix a broken filesystem.  A few files are
> damaged, but the filesystem is read-write again and still working so far
> (on a 5.0.21 kernel).

Since this report, I have reproduced this event several times on kernels
from 5.1 to 5.4, running my stress test of 10x parallel rsync, bees dedupe,
balance, scrub, and snapshot create and delete.

The symptoms on each kernel are different because the bug interacts with
other capabilities and fixes in each kernel:

	5.1.21 - all test runs eventually end with corrupted metadata
	on disk (*)

	5.2.21, 5.3.18 - write time tree checker (usually) detects
	filesystem corruption and aborts transaction before metadata on
	disk is damaged

	5.4.13 - NULL pointer splats in various places, especially during
	snapshot create and mount.  These end the test too quickly to see
	whether there is also metadata corruption.

These are all fixed by:

	707de9c0806d btrfs: relocation: fix reloc_root lifespan and access

Backported to kernels 5.1, 5.2, or 5.3, this patch fixes all of the
above problems; it is already included in 5.4.14 and 5.5.0.

Thanks Qu for this patch.

(*) or one of the tree mod log UAF bugs--but the metadata corruption
usually happens much faster.



Thread overview: 4+ messages
2019-07-04 21:03 repeatable(ish) corrupt leaf filesystem splat on 5.1.x Zygo Blaxell
2019-07-05  0:06 ` Qu Wenruo
2019-07-05  3:33   ` Zygo Blaxell
2020-02-04  4:43 ` Zygo Blaxell [this message]
