From: Roman Mamedov <rm@romanrm.net>
To: "Janos Toth F." <toth.f.janos@gmail.com>
Cc: Btrfs BTRFS <linux-btrfs@vger.kernel.org>
Subject: Re: "corrupt leaf, invalid item offset size pair"
Date: Tue, 9 May 2017 22:23:18 +0500
Message-ID: <20170509222318.1ccbba3a@natsu>
In-Reply-To: <CANznX5FbxnO66uqA=UK7h-f5MbnCR0zDMs6iYfyYidPZ7MyU9Q@mail.gmail.com>
On Mon, 8 May 2017 20:05:44 +0200
"Janos Toth F." <toth.f.janos@gmail.com> wrote:
> Maybe someone more talented will be able to assist you, but in my
> experience this kind of damage is fatal in practice. Even if you could
> theoretically fix it, it's probably easier to recreate the fs and
> restore the content from backup, or use the rescue tool to save
> whatever old content you never had copies of and restore that.
> I think the problem is that the disturbed disk gets out of sync with
> the rest of the fs/disk(s) (obviously, it misses some queued/buffered
> writes), but is later accepted back as if it were in a perfectly fine
> state, or as if Btrfs were ready to deal with problems like this,
> which it apparently is not. Then fatal corruption starts developing,
> because the problematic disk is treated as if it held correct data
> even though it has errors. If you keep it mounted RW long enough, the
> damage will probably get worse and the fs will become unmountable at
> some point (and thus harder, if not impossible, to rescue any data).
> This is how I usually lost my RAID-5 mode Btrfs filesystems before I
> stopped experimenting with that. I have not had this problem since I
> disabled SATA HotPlug (in the firmware setup of the motherboard) and
> switched to RAID-10 mode (and eventually replaced both faulty SATA
> cables in the system, one at a time, after an incident...).
Yeah, I scrapped the FS and am now restoring from backups. For some of the stuff
that wasn't backed up, "btrfs restore" worked remarkably well.
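For anyone who finds this thread later, a rough sketch of what such a recovery
can look like (the device and target paths here are made up; adjust for your
system):

```shell
# Dry run first: -D lists what btrfs restore would recover
# without actually writing anything
btrfs restore -D -v /dev/sdb1 /mnt/recovery

# Then copy the files out for real; -i ignores errors so the run
# does not abort at the first damaged extent
btrfs restore -i -v /dev/sdb1 /mnt/recovery
```

This works against the unmounted (or unmountable) filesystem and never writes
to the source device, which is why it tends to succeed even when a RW mount no
longer does.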
This was my primary 9x2TB mdadm RAID6 with Btrfs on top. In hindsight, it
appears too risky to run all storage as one huge SPOF like that. And
since I had almost everything backed up elsewhere, there seems to be little
justification for the protections of RAID6 (the machine does not need 100.00%
uptime and does not even have hot-swap drive bays).
So I will now switch to using individual drives with single-device Btrfs on
each, joined for convenience with mhddfs/unionfs/aufs on the directory tree
level.
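In case it helps anyone weighing the same trade-off, the layout described above
might look roughly like this with mhddfs (device names and mountpoints are
hypothetical):

```shell
# Mount each disk as its own independent single-device Btrfs filesystem
mount /dev/sdb1 /mnt/d1
mount /dev/sdc1 /mnt/d2
mount /dev/sdd1 /mnt/d3

# Pool the directory trees with mhddfs; mlimit=4G makes it move on to
# the next branch once the current disk has less than 4G free
mhddfs /mnt/d1,/mnt/d2,/mnt/d3 /mnt/pool -o mlimit=4G,allow_other
```

The point of this arrangement is failure isolation: if one disk dies, only the
files that happened to live on that disk are lost, instead of the whole array
being put at risk.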
--
With respect,
Roman
Thread overview: 3+ messages
2017-05-08 4:58 "corrupt leaf, invalid item offset size pair" Roman Mamedov
2017-05-08 18:05 ` Janos Toth F.
2017-05-09 17:23 ` Roman Mamedov [this message]