From: Dave Chinner <david@fromorbit.com>
To: Sean Caron <scaron@umich.edu>
Cc: linux-xfs@vger.kernel.org
Subject: Re: XFS disaster recovery
Date: Tue, 8 Feb 2022 12:51:15 +1100 [thread overview]
Message-ID: <20220208015115.GI59729@dread.disaster.area> (raw)
In-Reply-To: <CAA43vkWz4ftLGuSvkUn3GFuc=Ca6vLqJ28Nc_CGuTyyNVtXszA@mail.gmail.com>
On Mon, Feb 07, 2022 at 05:56:21PM -0500, Sean Caron wrote:
> Got it. I ran an xfs_repair on the simulated metadata filesystem and
> it seems like it almost finished but errored out with the message:
>
> fatal error -- name create failed in lost+found (28), filesystem may
> be out of space
Not a lot to go on there - can you send me the entire repair output?
> However there is plenty of space on the underlying volume where the
> metadata dump and sparse image are kept. Even if the sparse image was
> actually 384 TB as it shows up in "ls", there's 425 TB free on the
> volume where it's kept.
Hmmm - the sparse image should be the same size as the filesystem
itself. If it's only 384TB and not 500TB, then either the metadump
or the restore may not have completed fully.
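As an aside, the apparent size of a sparse file (what ls reports) and the space it actually occupies can be compared directly. A generic sketch using a throwaway demo file, not your metadump image:

```shell
# Create a sparse demo file: 1 MiB apparent size, no blocks allocated.
truncate -s 1M /tmp/sparse-demo.img
# Apparent size in bytes (what ls -l shows):
stat -c 'apparent bytes: %s' /tmp/sparse-demo.img
# Actual allocated space on disk (should be near zero for a sparse file):
du -h /tmp/sparse-demo.img
# Apparent size via du, for comparison:
du -h --apparent-size /tmp/sparse-demo.img
rm /tmp/sparse-demo.img
```

Running the same stat/du comparison on the restored image file would show how much of the 500TB apparent size is actually backed by allocated blocks.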
> I wonder since this was a fairly large filesystem (~500 TB) it's
> hitting some kind of limit somewhere with the loopback device?
Shouldn't - I've used larger loopback files hosted on XFS
filesystems in the past.
> Any thoughts on how I might be able to move past this? I guess I will
> need to xfs_repair this filesystem one way or the other anyway to get
> anything off of it, but it would be nice to run the simulation first
> just to see what to expect.
I think that first we need to make sure that the metadump and
restore process was completed successfully (did you check the exit
value was zero?). xfs_db can be used to do that:
# xfs_db -r <image-file>
xfs_db> sb 0
xfs_db> p agcount
<val>
xfs_db> agf <val - 1>
xfs_db> p
.....
(should dump the last AGF in the filesystem)
If that works, then the metadump/restore should have been complete,
and the size of the image file should match the size of the
filesystem that was dumped...
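The same checks can be scripted non-interactively with xfs_db's -c option, if that's more convenient (<image-file> is a placeholder for the restored image path, and the agcount value has to be substituted by hand):

```shell
# Print the AG count from superblock 0 of the restored image.
xfs_db -r -c "sb 0" -c "p agcount" <image-file>
# Then dump the last AGF, substituting agcount minus one for <val - 1>:
xfs_db -r -c "agf <val - 1>" -c "p" <image-file>
```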
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
Thread overview: 20+ messages
2022-02-01 23:07 XFS disaster recovery Sean Caron
2022-02-01 23:33 ` Dave Chinner
2022-02-02 1:20 ` Sean Caron
2022-02-02 2:44 ` Dave Chinner
2022-02-02 7:42 ` [PATCH] metadump: handle corruption errors without aborting Dave Chinner
2022-02-02 18:49 ` Sean Caron
2022-02-02 19:43 ` Sean Caron
2022-02-02 20:18 ` Sean Caron
2022-02-02 22:05 ` Dave Chinner
2022-02-02 23:45 ` Sean Caron
2022-02-06 22:34 ` Dave Chinner
2022-02-07 21:42 ` Sean Caron
2022-02-07 22:34 ` Dave Chinner
2022-02-07 22:03 ` XFS disaster recovery Sean Caron
2022-02-07 22:33 ` Dave Chinner
2022-02-07 22:56 ` Sean Caron
2022-02-08 1:51 ` Dave Chinner [this message]
2022-02-08 15:46 ` Sean Caron
2022-02-08 20:56 ` Dave Chinner
2022-02-08 21:24 ` Sean Caron