* XFS corruption help; xfs_repair isn't working
@ 2022-11-29 20:49 Chris Boot
2022-11-29 22:06 ` Dave Chinner
0 siblings, 1 reply; 6+ messages in thread
From: Chris Boot @ 2022-11-29 20:49 UTC (permalink / raw)
To: linux-xfs
[-- Attachment #1: Type: text/plain, Size: 1373 bytes --]
Hi all,
Sorry, I'm mailing here as a last resort before declaring this
filesystem done for. Following a string of unclean reboots and a dying
hard disk I have this filesystem in a very poor state that xfs_repair
can't make any progress on.
It has been mounted on kernel 5.18.14-1~bpo11+1 (from Debian
bullseye-backports). Most of the repairs were done using xfsprogs
5.10.0-4 (from Debian bullseye stable), though I did also try with
6.0.0-1 (from Debian bookworm/testing re-built myself).
I've attached the full log from xfs_repair, but the summary is it all
starts with multiple instances of this in Phase 3:
Metadata CRC error detected at 0x5609236ce178, xfs_dir3_block block
0xe101f32f8/0x1000
bad directory block magic # 0x1859dc06 in block 0 for directory inode
64426557977
bad bestfree table in block 0 in directory inode 64426557977: repairing
table
As it is the filesystem can be mounted and most data appears accessible,
but several directories are corrupt and can't be read or removed; the
kernel reports metadata corruption and CRC errors and returns EUCLEAN.
Ideally I'd like to remove the corrupt directories, recover as much of
what's left as possible, and make the filesystem usable again (it's an
rsnapshot destination) - but I'll take what I can.
Many thanks in advance,
Chris
PS: Please Cc me in replies
--
Chris Boot
bootc@boo.tc
[-- Attachment #2: xfs_repair.log --]
[-- Type: text/plain, Size: 8455 bytes --]
Phase 1 - find and verify superblock...
- reporting progress in intervals of 15 minutes
Phase 2 - using internal log
- zero log...
- 20:25:20: zeroing log - 521728 of 521728 blocks done
- scan filesystem freespace and inode maps...
- 20:25:23: scanning filesystem freespace - 38 of 38 allocation groups done
- found root inode chunk
Phase 3 - for each AG...
- scan and clear agi unlinked lists...
- 20:25:23: scanning agi unlinked lists - 38 of 38 allocation groups done
- process known inodes and perform inode discovery...
- agno = 15
- agno = 0
- agno = 30
Metadata CRC error detected at 0x5609236ce178, xfs_dir3_block block 0xe101f32f8/0x1000
bad directory block magic # 0x1859dc06 in block 0 for directory inode 64426557977
bad bestfree table in block 0 in directory inode 64426557977: repairing table
- agno = 16
- agno = 31
- agno = 1
- agno = 17
- agno = 32
- agno = 2
Metadata CRC error detected at 0x5609236ce178, xfs_dir3_block block 0x7f8007bf8/0x1000
bad directory block magic # 0x92b36c92 in block 0 for directory inode 36507254400
bad bestfree table in block 0 in directory inode 36507254400: repairing table
- agno = 18
- agno = 33
Metadata CRC error detected at 0x5609236ce178, xfs_dir3_block block 0xf77fff228/0x1000
bad directory block magic # 0xd9ac4ca5 in block 0 for directory inode 70866962823
bad bestfree table in block 0 in directory inode 70866962823: repairing table
- agno = 3
- agno = 34
- agno = 19
Metadata CRC error detected at 0x5609236ce178, xfs_dir3_block block 0xff00332e0/0x1000
bad directory block magic # 0xa03871ec in block 0 for directory inode 73014585868
bad bestfree table in block 0 in directory inode 73014585868: repairing table
- agno = 4
- agno = 35
- agno = 20
Metadata CRC error detected at 0x5609236ce178, xfs_dir3_block block 0x960000640/0x1000
bad directory block magic # 0x10a5bf9c in block 0 for directory inode 42949675715
bad bestfree table in block 0 in directory inode 42949675715: repairing table
- agno = 5
- agno = 36
- agno = 21
- agno = 37
- agno = 6
- agno = 22
- agno = 7
- agno = 23
- agno = 8
- agno = 24
- agno = 9
Metadata CRC error detected at 0x5609236ce178, xfs_dir3_block block 0xb4002d4f0/0x1000
bad directory block magic # 0x9214d055 in block 0 for directory inode 51539795872
bad bestfree table in block 0 in directory inode 51539795872: repairing table
- agno = 25
- agno = 10
- agno = 26
- agno = 11
- agno = 27
Metadata CRC error detected at 0x5609236ce178, xfs_dir3_block block 0xca7fff978/0x1000
bad directory block magic # 0x5b58346c in block 0 for directory inode 57982059162
bad bestfree table in block 0 in directory inode 57982059162: repairing table
- agno = 12
- agno = 28
- agno = 13
- agno = 29
- agno = 14
- 20:32:26: process known inodes and inode discovery - 32678656 of 32678656 inodes done
- process newly discovered inodes...
- 20:32:26: process newly discovered inodes - 38 of 38 allocation groups done
Phase 4 - check for duplicate blocks...
- setting up duplicate extent list...
- 20:32:27: setting up duplicate extent list - 38 of 38 allocation groups done
- check for inodes claiming duplicate blocks...
- agno = 0
- agno = 2
- agno = 4
- agno = 7
- agno = 10
- agno = 3
- agno = 11
- agno = 8
- agno = 13
- agno = 24
- agno = 17
- agno = 1
- agno = 9
- agno = 14
- agno = 22
- agno = 35
- agno = 30
- agno = 12
- agno = 29
- agno = 27
- agno = 25
- agno = 28
- agno = 26
- agno = 23
- agno = 32
- agno = 20
bad directory block magic # 0x5b58346c in block 0 for directory inode 57982059162
- agno = 36
bad bestfree table in block 0 in directory inode 57982059162: repairing table
- agno = 6
- agno = 33
- agno = 34
- agno = 21
- agno = 37
- agno = 5
- agno = 19
- agno = 16
bad directory block magic # 0x10a5bf9c in block 0 for directory inode 42949675715
bad bestfree table in block 0 in directory inode 42949675715: repairing table
- agno = 31
- agno = 15
- agno = 18
bad directory block magic # 0xd9ac4ca5 in block 0 for directory inode 70866962823
bad bestfree table in block 0 in directory inode 70866962823: repairing table
bad directory block magic # 0x92b36c92 in block 0 for directory inode 36507254400
bad bestfree table in block 0 in directory inode 36507254400: repairing table
bad directory block magic # 0x9214d055 in block 0 for directory inode 51539795872
bad bestfree table in block 0 in directory inode 51539795872: repairing table
bad directory block magic # 0xa03871ec in block 0 for directory inode 73014585868
bad bestfree table in block 0 in directory inode 73014585868: repairing table
bad directory block magic # 0x1859dc06 in block 0 for directory inode 64426557977
bad bestfree table in block 0 in directory inode 64426557977: repairing table
- 20:32:35: check for inodes claiming duplicate blocks - 32678656 of 32678656 inodes done
Phase 5 - rebuild AG headers and trees...
- 20:32:37: rebuild AG headers and trees - 38 of 38 allocation groups done
- reset superblock...
Phase 6 - check inode connectivity...
- resetting contents of realtime bitmap and summary inodes
- traversing filesystem ...
bad directory block magic # 0x1859dc06 for directory inode 64426557977 block 0: fixing magic # to 0x58444233
bad directory block magic # 0x92b36c92 for directory inode 36507254400 block 0: fixing magic # to 0x58444233
bad directory block magic # 0xd9ac4ca5 for directory inode 70866962823 block 0: fixing magic # to 0x58444233
bad directory block magic # 0xa03871ec for directory inode 73014585868 block 0: fixing magic # to 0x58444233
bad directory block magic # 0x10a5bf9c for directory inode 42949675715 block 0: fixing magic # to 0x58444233
bad directory block magic # 0x9214d055 for directory inode 51539795872 block 0: fixing magic # to 0x58444233
bad directory block magic # 0x5b58346c for directory inode 57982059162 block 0: fixing magic # to 0x58444233
- traversal finished ...
- moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
- 20:39:06: verify and correct link counts - 38 of 38 allocation groups done
Metadata corruption detected at 0x5609236cdcc8, xfs_dir3_block block 0xff00332e0/0x1000
libxfs_bwrite: write verifier failed on xfs_dir3_block bno 0xff00332e0/0x8
Metadata corruption detected at 0x5609236cdcc8, xfs_dir3_block block 0xf77fff228/0x1000
libxfs_bwrite: write verifier failed on xfs_dir3_block bno 0xf77fff228/0x8
Metadata corruption detected at 0x5609236cdcc8, xfs_dir3_block block 0x7f8007bf8/0x1000
libxfs_bwrite: write verifier failed on xfs_dir3_block bno 0x7f8007bf8/0x8
Metadata corruption detected at 0x5609236cdcc8, xfs_dir3_block block 0xe101f32f8/0x1000
libxfs_bwrite: write verifier failed on xfs_dir3_block bno 0xe101f32f8/0x8
Metadata corruption detected at 0x5609236cdcc8, xfs_dir3_block block 0x960000640/0x1000
libxfs_bwrite: write verifier failed on xfs_dir3_block bno 0x960000640/0x8
Metadata corruption detected at 0x5609236cdcc8, xfs_dir3_block block 0xca7fff978/0x1000
libxfs_bwrite: write verifier failed on xfs_dir3_block bno 0xca7fff978/0x8
Metadata corruption detected at 0x5609236cdcc8, xfs_dir3_block block 0xb4002d4f0/0x1000
libxfs_bwrite: write verifier failed on xfs_dir3_block bno 0xb4002d4f0/0x8
xfs_repair: Releasing dirty buffer to free list!
xfs_repair: Releasing dirty buffer to free list!
xfs_repair: Releasing dirty buffer to free list!
xfs_repair: Releasing dirty buffer to free list!
xfs_repair: Releasing dirty buffer to free list!
xfs_repair: Releasing dirty buffer to free list!
xfs_repair: Releasing dirty buffer to free list!
xfs_repair: Refusing to write a corrupt buffer to the data device!
xfs_repair: Lost a write to the data device!
fatal error -- File system metadata writeout failed, err=117. Re-run xfs_repair.
^ permalink raw reply [flat|nested] 6+ messages in thread
* Re: XFS corruption help; xfs_repair isn't working
2022-11-29 20:49 XFS corruption help; xfs_repair isn't working Chris Boot
@ 2022-11-29 22:06 ` Dave Chinner
2022-12-01 2:12 ` Darrick J. Wong
0 siblings, 1 reply; 6+ messages in thread
From: Dave Chinner @ 2022-11-29 22:06 UTC (permalink / raw)
To: Chris Boot; +Cc: linux-xfs
On Tue, Nov 29, 2022 at 08:49:27PM +0000, Chris Boot wrote:
> Hi all,
>
> Sorry, I'm mailing here as a last resort before declaring this filesystem
> done for. Following a string of unclean reboots and a dying hard disk I have
> this filesystem in a very poor state that xfs_repair can't make any progress
> on.
>
> It has been mounted on kernel 5.18.14-1~bpo11+1 (from Debian
> bullseye-backports). Most of the repairs were done using xfsprogs 5.10.0-4
> (from Debian bullseye stable), though I did also try with 6.0.0-1 (from
> Debian bookworm/testing re-built myself).
>
> I've attached the full log from xfs_repair, but the summary is it all starts
> with multiple instances of this in Phase 3:
>
> Metadata CRC error detected at 0x5609236ce178, xfs_dir3_block block
> 0xe101f32f8/0x1000
> bad directory block magic # 0x1859dc06 in block 0 for directory inode
> 64426557977
> bad bestfree table in block 0 in directory inode 64426557977: repairing
> table
I think that the problem is that we are trying to repair garbage
without completely reinitialising the directory block header. We
don't bother checking the incoming directory block for sanity after
the CRC fails, and then we only warn that it has a bad magic number.
We then go and process it as though it is a directory block,
essentially trusting that the directory block header is actually
sane. Which it clearly isn't because the magic number in the dir
block has been trashed.
We then rescan parts of the directory block and rewrite parts of the
block header, but the next time we re-scan the block we find that
there are still bad parts in the header/directory block. Then we
rewrite the magic number to make it look like a directory block,
and when repair is finished it goes to write the recovered directory
block to disk and it fails the verifier check - it's still a corrupt
directory block because it's still full of garbage that doesn't pass
muster.
From a recovery perspective, I think that if we get a bad CRC and
an unrecognisable magic number, we have no idea what the block is
meant to contain - we cannot trust it to contain directory
information, so we should just trash the block rather than try to
rebuild it. If it was a valid directory block, this will result in
the files it pointed to being moved to lost+found so no data is
actually lost.
If it wasn't a dir block at all, then simply trashing the data fork
of the inode and not touching the contents of the block at all is
the right thing to do. Modifying something that may be cross-linked
before we've resolved all the cross-linked extents is a bad thing to
be doing, so if we cannot recognise the block as a directory block,
we shouldn't try to recover it as a directory block at all....
Darrick, what are your thoughts on this?
> As it is the filesystem can be mounted and most data appears accessible, but
> several directories are corrupt and can't be read or removed; the kernel
> reports metadata corruption and CRC errors and returns EUCLEAN.
>
> Ideally I'd like to remove the corrupt directories, recover as much of
> what's left as possible, and make the filesystem usable again (it's an
> rsnapshot destination) - but I'll take what I can.
Yup, it's only a small number of directory inodes, so we might be
able to do this with some manual xfs_db magic. I think all we'd
need to do is rewrite specific parts of the dir block header and
repair should then do the rest...
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
* Re: XFS corruption help; xfs_repair isn't working
2022-11-29 22:06 ` Dave Chinner
@ 2022-12-01 2:12 ` Darrick J. Wong
2022-12-01 13:00 ` Chris Boot
0 siblings, 1 reply; 6+ messages in thread
From: Darrick J. Wong @ 2022-12-01 2:12 UTC (permalink / raw)
To: Dave Chinner; +Cc: Chris Boot, linux-xfs
On Wed, Nov 30, 2022 at 09:06:46AM +1100, Dave Chinner wrote:
> On Tue, Nov 29, 2022 at 08:49:27PM +0000, Chris Boot wrote:
> > Hi all,
> >
> > Sorry, I'm mailing here as a last resort before declaring this filesystem
> > done for. Following a string of unclean reboots and a dying hard disk I have
> > this filesystem in a very poor state that xfs_repair can't make any progress
> > on.
> >
> > It has been mounted on kernel 5.18.14-1~bpo11+1 (from Debian
> > bullseye-backports). Most of the repairs were done using xfsprogs 5.10.0-4
> > (from Debian bullseye stable), though I did also try with 6.0.0-1 (from
> > Debian bookworm/testing re-built myself).
> >
> > I've attached the full log from xfs_repair, but the summary is it all starts
> > with multiple instances of this in Phase 3:
> >
> > Metadata CRC error detected at 0x5609236ce178, xfs_dir3_block block
> > 0xe101f32f8/0x1000
> > bad directory block magic # 0x1859dc06 in block 0 for directory inode
> > 64426557977
> > bad bestfree table in block 0 in directory inode 64426557977: repairing
> > table
>
> I think that the problem is that we are trying to repair garbage
> without completely reinitialising the directory block header. We
> don't bother checking the incoming directory block for sanity after
> the CRC fails, and then we only warn that it has a bad magic number.
>
> We then go and process it as though it is a directory block,
> essentially trusting that the directory block header is actually
> sane. Which it clearly isn't because the magic number in the dir
> block has been trashed.
>
> We then rescan parts of the directory block and rewrite parts of the
> block header, but the next time we re-scan the block we find that
> there are still bad parts in the header/directory block. Then we
> rewrite the magic number to make it look like a directory block,
> and when repair is finished it goes to write the recovered directory
> block to disk and it fails the verifier check - it's still a corrupt
> directory block because it's still full of garbage that doesn't pass
> muster.
>
> From a recovery perspective, I think that if we get a bad CRC and
> an unrecognisable magic number, we have no idea what the block is
> meant to contain - we cannot trust it to contain directory
> information, so we should just trash the block rather than try to
> rebuild it. If it was a valid directory block, this will result in
> the files it pointed to being moved to lost+found so no data is
> actually lost.
>
> If it wasn't a dir block at all, then simply trashing the data fork
> of the inode and not touching the contents of the block at all is
> the right thing to do. Modifying something that may be cross-linked
> before we've resolved all the cross-linked extents is a bad thing to
> be doing, so if we cannot recognise the block as a directory block,
> we shouldn't try to recover it as a directory block at all....
>
> Darrick, what are your thoughts on this?
I kinda want to see the metadump of this (possibly enormous) filesystem.
Probably the best outcome is to figure out which blocks in each
directory are corrupt, remove them from the data fork mapping, and see
if repair can fix up the other things (e.g. bestfree data) and dump the
unlinked files in /lost+found. Hopefully rsnapshot can deal with the
directory tree if we can at least get the bad dirblocks out of the way.
If reflink is turned on, repair can deal with crosslinked file data
blocks, though any other kind of block results in the usual
scraping-till-it's-clean behavior.
I'm also kinda curious what started this corruption problem, and did any
of it leak through to other files?
--D
> > As it is the filesystem can be mounted and most data appears accessible, but
> > several directories are corrupt and can't be read or removed; the kernel
> > reports metadata corruption and CRC errors and returns EUCLEAN.
> >
> > Ideally I'd like to remove the corrupt directories, recover as much of
> > what's left as possible, and make the filesystem usable again (it's an
> > rsnapshot destination) - but I'll take what I can.
>
> Yup, it's only a small number of directory inodes, so we might be
> able to do this with some manual xfs_db magic. I think all we'd
> need to do is rewrite specific parts of the dir block header and
> repair should then do the rest...
>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@fromorbit.com
* Re: XFS corruption help; xfs_repair isn't working
2022-12-01 2:12 ` Darrick J. Wong
@ 2022-12-01 13:00 ` Chris Boot
2022-12-06 14:43 ` Chris Boot
0 siblings, 1 reply; 6+ messages in thread
From: Chris Boot @ 2022-12-01 13:00 UTC (permalink / raw)
To: Darrick J. Wong, Dave Chinner; +Cc: linux-xfs
On 01/12/2022 02:12, Darrick J. Wong wrote:
> On Wed, Nov 30, 2022 at 09:06:46AM +1100, Dave Chinner wrote:
>> On Tue, Nov 29, 2022 at 08:49:27PM +0000, Chris Boot wrote:
>>> Hi all,
>>>
>>> Sorry, I'm mailing here as a last resort before declaring this filesystem
>>> done for. Following a string of unclean reboots and a dying hard disk I have
>>> this filesystem in a very poor state that xfs_repair can't make any progress
>>> on.
>>>
>>> It has been mounted on kernel 5.18.14-1~bpo11+1 (from Debian
>>> bullseye-backports). Most of the repairs were done using xfsprogs 5.10.0-4
>>> (from Debian bullseye stable), though I did also try with 6.0.0-1 (from
>>> Debian bookworm/testing re-built myself).
>>>
>>> I've attached the full log from xfs_repair, but the summary is it all starts
>>> with multiple instances of this in Phase 3:
>>>
>>> Metadata CRC error detected at 0x5609236ce178, xfs_dir3_block block
>>> 0xe101f32f8/0x1000
>>> bad directory block magic # 0x1859dc06 in block 0 for directory inode
>>> 64426557977
>>> bad bestfree table in block 0 in directory inode 64426557977: repairing
>>> table
>>
>> I think that the problem is that we are trying to repair garbage
>> without completely reinitialising the directory block header. We
>> don't bother checking the incoming directory block for sanity after
>> the CRC fails, and then we only warn that it has a bad magic number.
>>
>> We then go and process it as though it is a directory block,
>> essentially trusting that the directory block header is actually
>> sane. Which it clearly isn't because the magic number in the dir
>> block has been trashed.
>>
>> We then rescan parts of the directory block and rewrite parts of the
>> block header, but the next time we re-scan the block we find that
>> there are still bad parts in the header/directory block. Then we
>> rewrite the magic number to make it look like a directory block,
>> and when repair is finished it goes to write the recovered directory
>> block to disk and it fails the verifier check - it's still a corrupt
>> directory block because it's still full of garbage that doesn't pass
>> muster.
>>
>> From a recovery perspective, I think that if we get a bad CRC and
>> an unrecognisable magic number, we have no idea what the block is
>> meant to contain - we cannot trust it to contain directory
>> information, so we should just trash the block rather than try to
>> rebuild it. If it was a valid directory block, this will result in
>> the files it pointed to being moved to lost+found so no data is
>> actually lost.
>>
>> If it wasn't a dir block at all, then simply trashing the data fork
>> of the inode and not touching the contents of the block at all is
>> the right thing to do. Modifying something that may be cross-linked
>> before we've resolved all the cross-linked extents is a bad thing to
>> be doing, so if we cannot recognise the block as a directory block,
>> we shouldn't try to recover it as a directory block at all....
>>
>> Darrick, what are your thoughts on this?
>
> I kinda want to see the metadump of this (possibly enormous) filesystem.
I've asked whether I can share this with you. The filesystem is indeed
huge (35TiB) and I wouldn't be surprised if the metadata alone was
rather large. What would be the most efficient way of sharing that with you?
It looks like there are exactly 7 unreadable directories scattered
across the filesystem, most in data that has been there for weeks/months
- but a couple in the most recent complete "snapshot" directory.
> Probably the best outcome is to figure out which blocks in each
> directory are corrupt, remove them from the data fork mapping, and see
> if repair can fix up the other things (e.g. bestfree data) and dump the
> unlinked files in /lost+found. Hopefully rsnapshot can deal with the
> directory tree if we can at least get the bad dirblocks out of the way.
rsnapshot just runs an rsync with --link-dest= set, so it'll just
duplicate files that are missing, but it aborts when it hits the
corrupted directories as it can't look inside them.
> If reflink is turned on, repair can deal with crosslinked file data
> blocks, though any other kind of block results in the usual
> scraping-till-it's-clean behavior.
Sadly reflink is off:
meta-data=/dev/vg_data/rsnapshot isize=512    agcount=38, agsize=251658224 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=0, rmapbt=0
         =                       reflink=0    bigtime=0 inobtcount=0 nrext64=0
data     =                       bsize=4096   blocks=9395240960, imaxpct=5
         =                       sunit=16     swidth=64 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=521728, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
> I'm also kinda curious what started this corruption problem, and did any
> of it leak through to other files?
I wish we knew. This came to light when the machine had to be repeatedly
rebooted because a large computation job was making the system run out
of memory. Unfortunately it has a lot of swap configured so it wasn't
just being OOM killed, which would have been much better. This all
actually led to soft lockups and to our reboots. This happened 3-4 times
before we noticed the corruption.
During the above the RAID controller (an LSI MegaRAID) marked one of the
hard disks that makes up the array (a RAID-60 over 18x 8TB SAS disks, 2x
9-disk RAID-6 spans) faulty.
During the recovery I know that xfs_repair was run with -L at some
point; I'm not certain whether the person doing this actually tried
mounting the filesystem first to replay the log, though. There was
certainly a lot more corruption than just this, but it seems like that
all got repaired away. /lost+found was full of 10s of thousands of
displaced files (now removed).
Thanks,
Chris
--
Chris Boot
bootc@boo.tc
* Re: XFS corruption help; xfs_repair isn't working
2022-12-01 13:00 ` Chris Boot
@ 2022-12-06 14:43 ` Chris Boot
2022-12-09 9:44 ` Carlos Maiolino
0 siblings, 1 reply; 6+ messages in thread
From: Chris Boot @ 2022-12-06 14:43 UTC (permalink / raw)
To: Darrick J. Wong, Dave Chinner; +Cc: linux-xfs
On 01/12/2022 13:00, Chris Boot wrote:
>> I kinda want to see the metadump of this (possibly enormous) filesystem.
>
> I've asked whether I can share this with you. The filesystem is indeed
> huge (35TiB) and I wouldn't be surprised if the metadata alone was
> rather large. What would be the most efficient way of sharing that with
> you?
I've got approval to send a metadata dump to you. What's the best way of
getting it over? It's 31GiB uncompressed, 6.5GiB with zstd compression.
Many thanks,
Chris
--
Chris Boot
bootc@boo.tc
* Re: XFS corruption help; xfs_repair isn't working
2022-12-06 14:43 ` Chris Boot
@ 2022-12-09 9:44 ` Carlos Maiolino
0 siblings, 0 replies; 6+ messages in thread
From: Carlos Maiolino @ 2022-12-09 9:44 UTC (permalink / raw)
To: Chris Boot; +Cc: Darrick J. Wong, Dave Chinner, linux-xfs
On Tue, Dec 06, 2022 at 02:43:32PM +0000, Chris Boot wrote:
> On 01/12/2022 13:00, Chris Boot wrote:
> >> I kinda want to see the metadump of this (possibly enormous) filesystem.
> >
> > I've asked whether I can share this with you. The filesystem is indeed
> > huge (35TiB) and I wouldn't be surprised if the metadata alone was
> > rather large. What would be the most efficient way of sharing that with
> > you?
>
> I've got approval to send a metadata dump to you. What's the best way of
> getting it over? It's 31GiB uncompressed, 6.5GiB with zstd compression.
Place it somewhere we can easily download, and if you feel uncomfortable sharing
the link publicly to the list, you can send it directly to those who asked for
it, off-list, but please keep the remaining discussion within the list.
--
Carlos Maiolino