linux-btrfs.vger.kernel.org archive mirror
From: Chris Murphy <lists@colorremedies.com>
To: "Swâmi Petaramesh" <swami@petaramesh.org>
Cc: Chris Murphy <lists@colorremedies.com>,
	Qu Wenruo <quwenruo.btrfs@gmx.com>,
	Btrfs BTRFS <linux-btrfs@vger.kernel.org>
Subject: Re: Massive filesystem corruption since kernel 5.2 (ARCH)
Date: Tue, 30 Jul 2019 14:15:32 -0600
Message-ID: <CAJCQCtR3pW7T7=DxuAyqwfG+4ii-jg2AVqQL2wVEAx2VrGAY8g@mail.gmail.com>
In-Reply-To: <d76a038d-fc7f-5910-ec2d-ac783891f001@petaramesh.org>

On Tue, Jul 30, 2019 at 2:09 AM Swâmi Petaramesh <swami@petaramesh.org> wrote:
>
> On 7/29/19 9:10 PM, Chris Murphy wrote:
> > We've discussed many times how both file system repair, and file
> > system restore from backup, simply are not scalable for big file
> > systems. It takes too long.
>
> So what would be the solution ?

There presently is no solution, and I'm not aware of any future plan
either. I think it's a problem.

>
> IMHO yes, having to full backup then reformat then full restore is
> impractical for big FSes. Especially if they have a lot of subvols.
>
> Also most private individuals do not have enough disks to perform a full
> backup of their RAID NAS, etc.

I sympathize with the lack of resources. But having no backup at all
simply cannot be taken seriously in any computing context. By the
user's own estimation, the data cannot be that important if it isn't
backed up. Given resource limitations, it's reasonable to back up
only a subset of the data. But if none of it is backed up, *shrug*,
there just aren't that many people who will sympathize with data loss
when there are no backups.
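
For what it's worth, even a partial backup of just the subvolumes
that actually matter, to a single external Btrfs-formatted drive,
goes a long way. Roughly, with btrfs send/receive (the snapshot
directory and the /mnt/backup mount point are made-up examples):

  # read-only snapshot of the subvolume you care about
  btrfs subvolume snapshot -r /home /home/.snapshots/home-2019-07-30

  # first full copy to the backup drive
  btrfs send /home/.snapshots/home-2019-07-30 | \
      btrfs receive /mnt/backup

  # later, send only the changes relative to the previous snapshot
  btrfs send -p /home/.snapshots/home-2019-07-30 \
      /home/.snapshots/home-2019-08-06 | btrfs receive /mnt/backup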

Backup+restore is for sure a Byzantine workaround for the data
storage problem, but you have no idea what will fail or when it will
fail. There's not a file system mailing list on earth that will tell
you it's OK to not have backups.


> I believe that we should have a repair tool that can fix a filesystem
> metadata and make it clean and usable again even if this is at the cost
> of losing a whole directory tree or subvols or whatever.

So far that isn't how it works. I don't know whether it's a
limitation of the on-disk format, or a limitation on reconstructing
from incorrect information even when its checksum is correct.


> But it would be better to lose clearly identified things and resume with
> a working FS and a list of files to be restored, rather than being
> unable to repair and having to reformat everything and restore everything...

Yep. That doesn't exist yet, and I don't know whether it's an
eventual design goal of Btrfs.
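
The closest thing today, as far as I know, is salvaging whatever is
still readable off the broken file system with btrfs restore, which
only copies files out to somewhere else and repairs nothing (the
device and target paths below are placeholders):

  # see what could be pulled off, without writing anything
  btrfs restore --dry-run /dev/sdb1 /mnt/salvage

  # copy out whatever can still be walked, device unmounted
  btrfs restore /dev/sdb1 /mnt/salvage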

ZFS meanwhile has no repair tool. If it becomes inconsistent, that's
it, recreate the file system.

If your use case policy requires a repair tool, you really have to
disqualify both ZFS and Btrfs, because the Btrfs repair tool is still
marked as dangerous in its man page. I just cannot take Btrfs repair
seriously when the Btrfs developers themselves consider it safe only
on a case-by-case basis.
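
To be clear, as I understand it the read-only check is fine to run
and is the thing to reach for first; it's --repair that the man page
warns about, and the usual advice is not to run it without developer
guidance for your specific corruption (device path is a placeholder,
and the file system must be unmounted):

  # read-only check, the default mode, makes no changes
  btrfs check --readonly /dev/sdb1

  # repair mode - this is the part marked dangerous
  btrfs check --repair /dev/sdb1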

It's always the case with any file system that a clean reproducer has
the best chance of getting developer attention, and producing one is
not easy. Practical best practice is to keep the bulk of your systems
on a very stable operating system, with well-maintained stable or
actively maintained long-term kernels, and to keep a smaller
percentage of machines for testing mainline kernels. Hitting a
problem there is still annoying and tedious, and definitely still a
bug, but at least the problem is restricted to your test machines.

There isn't enough history here to piece together with any certainty
why you're experiencing what you're experiencing beyond what Qu has
already stated.

-- 
Chris Murphy

Thread overview: 83+ messages
2019-07-29 12:32 Massive filesystem corruption since kernel 5.2 (ARCH) Swâmi Petaramesh
2019-07-29 13:02 ` Swâmi Petaramesh
2019-07-29 13:35   ` Qu Wenruo
2019-07-29 13:42     ` Swâmi Petaramesh
2019-07-29 13:47       ` Qu Wenruo
2019-07-29 13:52         ` Swâmi Petaramesh
2019-07-29 13:59           ` Qu Wenruo
2019-07-29 14:01           ` Swâmi Petaramesh
2019-07-29 14:08             ` Qu Wenruo
2019-07-29 14:21               ` Swâmi Petaramesh
2019-07-29 14:27                 ` Qu Wenruo
2019-07-29 14:34                   ` Swâmi Petaramesh
2019-07-29 14:40                     ` Qu Wenruo
2019-07-29 14:46                       ` Swâmi Petaramesh
2019-07-29 14:51                         ` Qu Wenruo
2019-07-29 14:55                           ` Swâmi Petaramesh
2019-07-29 15:05                             ` Swâmi Petaramesh
2019-07-29 19:20                               ` Chris Murphy
2019-07-30  6:47                                 ` Swâmi Petaramesh
2019-07-29 19:10                       ` Chris Murphy
2019-07-30  8:09                         ` Swâmi Petaramesh
2019-07-30 20:15                           ` Chris Murphy [this message]
2019-07-30 22:44                             ` Swâmi Petaramesh
2019-07-30 23:13                               ` Graham Cobb
2019-07-30 23:24                                 ` Chris Murphy
     [not found] ` <f8b08aec-2c43-9545-906e-7e41953d9ed4@bouton.name>
2019-07-29 13:35   ` Swâmi Petaramesh
2019-07-30  8:04     ` Henk Slager
2019-07-30  8:17       ` Swâmi Petaramesh
2019-07-29 13:39   ` Lionel Bouton
2019-07-29 13:45     ` Swâmi Petaramesh
     [not found]       ` <d8c571e4-718e-1241-66ab-176d091d6b48@bouton.name>
2019-07-29 14:04         ` Swâmi Petaramesh
2019-08-01  4:50           ` Anand Jain
2019-08-01  6:07             ` Swâmi Petaramesh
2019-08-01  6:36               ` Qu Wenruo
2019-08-01  8:07                 ` Swâmi Petaramesh
2019-08-01  8:43                   ` Qu Wenruo
2019-08-01 13:46                     ` Anand Jain
2019-08-01 18:56                       ` Swâmi Petaramesh
2019-08-08  8:46                         ` Qu Wenruo
2019-08-08  9:55                           ` Swâmi Petaramesh
2019-08-08 10:12                             ` Qu Wenruo
  -- strict thread matches above, loose matches on Subject: below --
2019-08-24 17:44 Christoph Anton Mitterer
2019-08-25 10:00 ` Swâmi Petaramesh
2019-08-27  0:00   ` Christoph Anton Mitterer
2019-08-27  5:06     ` Swâmi Petaramesh
2019-08-27  6:13       ` Swâmi Petaramesh
2019-08-27  6:21         ` Qu Wenruo
2019-08-27  6:34           ` Swâmi Petaramesh
2019-08-27  6:52             ` Qu Wenruo
2019-08-27  9:14               ` Swâmi Petaramesh
2019-08-27 12:40                 ` Hans van Kranenburg
2019-08-29 12:46                   ` Oliver Freyermuth
2019-08-29 13:08                     ` Christoph Anton Mitterer
2019-08-29 13:09                     ` Swâmi Petaramesh
2019-08-29 13:11                     ` Qu Wenruo
2019-08-29 13:17                       ` Oliver Freyermuth
2019-08-29 17:40                         ` Oliver Freyermuth
2019-08-27 10:59           ` Swâmi Petaramesh
2019-08-27 11:11             ` Alberto Bursi
2019-08-27 11:20               ` Swâmi Petaramesh
2019-08-27 11:29                 ` Alberto Bursi
2019-08-27 11:45                   ` Swâmi Petaramesh
2019-08-27 17:49               ` Swâmi Petaramesh
2019-08-27 22:10               ` Chris Murphy
2019-08-27 12:52 ` Michal Soltys
2019-09-12  7:50 ` Filipe Manana
2019-09-12  8:24   ` James Harvey
2019-09-12  9:06     ` Filipe Manana
2019-09-12  9:09     ` Holger Hoffstätte
2019-09-12 10:53     ` Swâmi Petaramesh
2019-09-12 12:58       ` Christoph Anton Mitterer
2019-10-14  4:00         ` Nicholas D Steeves
2019-09-12  8:48   ` Swâmi Petaramesh
2019-09-12 13:09   ` Christoph Anton Mitterer
2019-09-12 14:28     ` Filipe Manana
2019-09-12 14:39       ` Christoph Anton Mitterer
2019-09-12 14:57         ` Swâmi Petaramesh
2019-09-12 16:21           ` Zdenek Kaspar
2019-09-12 18:52             ` Swâmi Petaramesh
2019-09-13 18:50       ` Pete
     [not found]         ` <CACzgC9gvhGwyQAKm5J1smZZjim-ecEix62ZQCY-wwJYVzMmJ3Q@mail.gmail.com>
2019-10-14  2:07           ` Adam Bahe
2019-10-14  2:19             ` Qu Wenruo
2019-10-14 17:54             ` Chris Murphy
