From: Dave Chinner <david@fromorbit.com>
To: "hubert ." <hubjin657@outlook.com>
Cc: Carlos Maiolino <cem@kernel.org>,
"linux-xfs@vger.kernel.org" <linux-xfs@vger.kernel.org>
Subject: Re: xfs_metadump segmentation fault on large fs - xfsprogs 6.1
Date: Fri, 26 Sep 2025 19:39:18 +1000
Message-ID: <aNZfRuIVgIOiP6Qp@dread.disaster.area>
In-Reply-To: <IA0PR05MB99755ED06F9965745B20D9DAFA1EA@IA0PR05MB9975.namprd05.prod.outlook.com>
On Fri, Sep 26, 2025 at 09:04:12AM +0000, hubert . wrote:
> >> Regarding the xfs_metadump segfault, yes, a core might be useful to
> >> investigate where the segfault is triggered, but you'll need to be
> >> running xfsprogs from the upstream tree (preferentially latest code), so
> >> we can actually match the core information the code.
> >
> > I figured it was not all the needed info, thanks for clarifying.
> >
> > Right now we had to put away the original hdds, as we cannot afford
> > another failed drive and time is pressing, and are dd'ing the image to a
> > real partition to try xfs_repair on it directly (takes days, of course,
> > but we're lucky we got the storage).
> > I will try the metadump and do further debugging if it segfaults again.
>
> So I'm back now with a real partition.
> First, I ran "xfs_repair -vn" and it did complete, reporting - as expected - a
> bunch of entries to junk, skipping the last phases with "Inode allocation
> btrees are too corrupted, skipping phases 6 and 7".
> It created a 270MB log, I can upload it somewhere if it could be of interest.
No need, but thanks for the offer.
> Core was generated by `/usr/sbin/xfs_db -i -p xfs_metadump -c metadump /dev/sda1'.
> Program terminated with signal SIGSEGV, Segmentation fault.
> #0 libxfs_bmbt_disk_get_all (rec=0x55c47aec3eb0, irec=<synthetic pointer>) at ../include/libxfs.h:226
>
> warning: 226 ../include/libxfs.h: No such file or directory
> (gdb) bt full
> #0 libxfs_bmbt_disk_get_all (rec=0x55c47aec3eb0, irec=<synthetic pointer>) at ../include/libxfs.h:226
> l0 = <optimized out>
> l1 = <optimized out>
> l0 = <optimized out>
> l1 = <optimized out>
Ok, so it's faulted when trying to read a BMBT record from an
in-memory buffer...
Remember the addr of the rec (0x55c47aec3eb0) now....
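(For context: the getter boils down to loading two big-endian 64-bit
words straight out of *rec and unpacking them, so a wild rec pointer
faults on the very first load. A rough, from-memory sketch of that
unpacking - the field widths are my recollection of the on-disk BMBT
record layout, not code copied from libxfs:

#include <stdint.h>
#include <endian.h>	/* be64toh() */

struct bmbt_irec_sketch {
        uint64_t        startoff;       /* file offset, in fs blocks */
        uint64_t        startblock;     /* disk block number */
        uint64_t        blockcount;     /* length, in fs blocks */
        int             unwritten;      /* extent state flag */
};

static void
bmbt_disk_get_all_sketch(const uint64_t *rec, struct bmbt_irec_sketch *irec)
{
        uint64_t l0 = be64toh(rec[0]);  /* faults here if rec is off in space */
        uint64_t l1 = be64toh(rec[1]);

        irec->unwritten  = l0 >> 63;
        irec->startoff   = (l0 >> 9) & ((1ULL << 54) - 1);
        irec->startblock = ((l0 & 0x1ffULL) << 43) | (l1 >> 21);
        irec->blockcount = l1 & ((1ULL << 21) - 1);
}
)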
> #1 convert_extent (rp=0x55c47aec3eb0, op=<synthetic pointer>, sp=<synthetic pointer>, cp=<synthetic pointer>, fp=<synthetic pointer>) at /build/reproducible-path/xfsprogs-6.16.0/db/bmap.c:320
> irec = <optimized out>
> irec = <optimized out>
> #2 process_bmbt_reclist (dip=dip@entry=0x55c37aec3e00, whichfork=whichfork@entry=0, rp=0x55c37aec3eb0, numrecs=numrecs@entry=268435457) at /build/reproducible-path/xfsprogs-6.16.0/db/metadump.c:2181
Smoking gun:
numrecs=numrecs@entry=268435457
268435457 = 2^28 + 1
> is_meta = false
> btype = <optimized out>
> i = <optimized out>
> o = <optimized out>
> op = <optimized out>
> s = <optimized out>
> c = <optimized out>
> cp = <optimized out>
> f = <optimized out>
> last = <optimized out>
> agno = <optimized out>
> agbno = <optimized out>
> rval = <optimized out>
> #3 0x000055c36404e042 in process_exinode (dip=0x55c37aec3e00, whichfork=0) at /build/reproducible-path/xfsprogs-6.16.0/db/metadump.c:2421
> max_nex = <optimized out>
> nex = 268435457
Yup, that's the problem.
The inode is in extent format, which means the extent records are in
the inode data fork area, which is about 300 bytes at most for a
512-byte inode. IOWs, at 16 bytes per record it can hold roughly 20
BMBT records. The BMBT records are in the on-disk inode buffer, as is
the disk inode @dip.
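As a quick sanity check on that number, here's a throwaway sketch; the
176-byte v3 inode core and the 16-byte BMBT record size are my
assumptions (a v2 core is only 96 bytes, so it fits a few more):

#include <stdio.h>

int main(void)
{
        unsigned int inode_size = 512;  /* per the 512-byte inode case above */
        unsigned int core_size  = 176;  /* v3 struct xfs_dinode (96 for v2) */
        unsigned int rec_size   = 16;   /* on-disk struct xfs_bmbt_rec */

        unsigned int literal = inode_size - core_size;
        printf("literal area %u bytes -> at most %u extent records\n",
               literal, literal / rec_size);
        /* prints: literal area 336 bytes -> at most 21 extent records */
        return 0;
}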
Look at the address of dip: 0x55c37aec3e00
The address of the BMBT rec: 0x55c47aec3eb0
Now look at what BMBT record convert_extent() is trying to access:
process_bmbt_reclist()
{
.....
        convert_extent(&rp[numrecs - 1], &o, &s, &c, &f);
.....
Yeah, that inode buffer isn't 268 million BMBT records long....
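The pointer arithmetic lines up exactly with the addresses in the
backtrace: &rp[numrecs - 1] is rp + (numrecs - 1) * 16 = rp + 4GiB,
which is precisely the 0x55c3... -> 0x55c4... jump from frame #2 to
frame #0. A quick sketch using the values copied from above:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint64_t rp      = 0x55c37aec3eb0ULL;   /* rp from frame #2 */
        uint64_t numrecs = 268435457ULL;        /* 2^28 + 1 */
        uint64_t recsize = 16;                  /* sizeof(struct xfs_bmbt_rec) */

        printf("0x%llx\n", (unsigned long long)(rp + (numrecs - 1) * recsize));
        /* prints 0x55c47aec3eb0: the faulting rec address in frame #0 */
        return 0;
}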
So there must be a bounds checking bug in process_exinode():
static int
process_exinode(
        struct xfs_dinode       *dip,
        int                     whichfork)
{
        xfs_extnum_t            max_nex = xfs_iext_max_nextents(
                                        xfs_dinode_has_large_extent_counts(dip),
                                        whichfork);
        xfs_extnum_t            nex = xfs_dfork_nextents(dip, whichfork);
        int                     used = nex * sizeof(struct xfs_bmbt_rec);

        if (nex > max_nex || used > XFS_DFORK_SIZE(dip, mp, whichfork)) {
                if (metadump.show_warnings)
                        print_warning("bad number of extents %llu in inode %lld",
                                        (unsigned long long)nex,
                                        (long long)metadump.cur_ino);
                return 1;
        }
Can you spot it?
Hint: ((2^28 + 1) * 2^4) - 1 as an int is?
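Spelling that out (a throwaway sketch, assuming the usual 32-bit int
and the 16-byte record size):

#include <stdio.h>

int main(void)
{
        unsigned long long nex = 268435457ULL;  /* 2^28 + 1, from the core dump */
        int used = nex * 16;    /* nex * sizeof(struct xfs_bmbt_rec), narrowed */

        printf("used = %d\n", used);    /* prints 16, so the size check passes */
        return 0;
}

The multiply happens in 64 bits but is then narrowed to int, so the
bogus extent count sails straight past the XFS_DFORK_SIZE() check.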
-Dave.
--
Dave Chinner
david@fromorbit.com