From: "Darrick J. Wong" <djwong@kernel.org>
To: Christoph Hellwig <hch@infradead.org>
Cc: aalbersh@kernel.org, linux-xfs@vger.kernel.org
Subject: Re: [PATCH 1/2] mkfs: enable new features by default
Date: Tue, 2 Dec 2025 16:53:45 -0800
Message-ID: <20251203005345.GD89492@frogsfrogsfrogs>
In-Reply-To: <aS6Xhh4iZHwJHA3m@infradead.org>
On Mon, Dec 01, 2025 at 11:38:46PM -0800, Christoph Hellwig wrote:
> On Mon, Dec 01, 2025 at 05:28:16PM -0800, Darrick J. Wong wrote:
> > From: Darrick J. Wong <djwong@kernel.org>
> >
> > Since the LTS is coming up, enable parent pointers and exchange-range by
> > default for all users. Also fix up an out of date comment.
>
> Do you have any numbers that show the overhead or non-overhead of
> enabling rmap? It will increase the amount of metadata written quite
> a bit.
I'm assuming you're interested in the overhead of *parent pointers* and
not rmap, since we turned rmap on by default back in 2023?
I created a really stupid benchmarking script that does:
#!/bin/bash
umount /opt
mkfs.xfs -f /dev/sdb -n parent=$1
mount /dev/sdb /opt
mkdir -p /opt/foo
for ((i = 0; i < 10; i++)); do
	time fsstress -n 400000 -p 4 -z \
		-f creat=1,mkdir=1,mknod=1,rmdir=1,unlink=1,link=1,rename=1 \
		-d /opt/foo -s 1
done
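(If anyone wants to rerun this and average the results, something like the
following awk one-liner does it; the parent0.times file name is just an
example of wherever you redirect the timing output:)

```shell
# Average the "real" lines from the bash time builtin output, which looks
# like "real<TAB>0m18.807s".  The file name is hypothetical; capture the
# timings with e.g.:  ./dumb.sh 0 2> parent0.times
awk '/^real/ {
	# convert "XmY.YYYs" into seconds
	split($2, t, /[ms]/)
	sum += t[1] * 60 + t[2]
	n++
}
END { printf "%.3f s avg over %d runs\n", sum / n, n }' parent0.times
```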
# ./dumb.sh 0
meta-data=/dev/sdb isize=512 agcount=4, agsize=1298176 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=1
= reflink=1 bigtime=1 inobtcount=1 nrext64=1
= exchange=1 metadir=0
data = bsize=4096 blocks=5192704, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1, parent=0
log =internal log bsize=4096 blocks=16384, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
= rgcount=0 rgsize=0 extents
= zoned=0 start=0 reserved=0
Discarding blocks...Done.
run  1:  real 0m18.807s   user 0m2.169s   sys 0m54.013s
run  2:  real 0m13.845s   user 0m2.005s   sys 0m34.048s
run  3:  real 0m14.019s   user 0m1.931s   sys 0m36.086s
run  4:  real 0m14.435s   user 0m2.105s   sys 0m35.845s
run  5:  real 0m14.823s   user 0m1.920s   sys 0m35.528s
run  6:  real 0m14.181s   user 0m2.013s   sys 0m35.775s
run  7:  real 0m14.281s   user 0m1.865s   sys 0m36.240s
run  8:  real 0m13.638s   user 0m1.933s   sys 0m35.642s
run  9:  real 0m13.553s   user 0m1.904s   sys 0m35.084s
run 10:  real 0m13.963s   user 0m1.979s   sys 0m35.724s
# ./dumb.sh 1
meta-data=/dev/sdb isize=512 agcount=4, agsize=1298176 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=1
= reflink=1 bigtime=1 inobtcount=1 nrext64=1
= exchange=1 metadir=0
data = bsize=4096 blocks=5192704, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1, parent=1
log =internal log bsize=4096 blocks=16384, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
= rgcount=0 rgsize=0 extents
= zoned=0 start=0 reserved=0
Discarding blocks...Done.
run  1:  real 0m20.654s   user 0m2.374s   sys 1m4.441s
run  2:  real 0m14.255s   user 0m1.990s   sys 0m36.749s
run  3:  real 0m14.553s   user 0m1.931s   sys 0m36.606s
run  4:  real 0m13.855s   user 0m1.767s   sys 0m36.467s
run  5:  real 0m14.606s   user 0m2.073s   sys 0m37.255s
run  6:  real 0m13.706s   user 0m1.942s   sys 0m36.294s
run  7:  real 0m14.177s   user 0m2.017s   sys 0m36.528s
run  8:  real 0m15.310s   user 0m2.164s   sys 0m37.720s
run  9:  real 0m14.099s   user 0m2.013s   sys 0m37.062s
run 10:  real 0m14.067s   user 0m2.068s   sys 0m36.552s
As you can see, there's a noticeable increase in the runtime of the
first fsstress invocation, but the subsequent runs show little
difference. I think the parent pointer log items typically complete
within a single log checkpoint and so are usually omitted from the log.
In the common case of a single parent and an inline xattr area, the
overhead is basically zero because we're just writing to the attr fork's
if_data and not touching any xattr blocks.
If I remove link=1 from the fsstress -f parameters so that the parent
pointers always operate out of the inline (immediate) attr area, then
the first parent=0 runtime is:
real 0m18.920s   user 0m2.559s   sys 1m0.991s
and the first parent=1 is:
real 0m20.458s   user 0m2.533s   sys 1m6.301s
I see more or less the same timings for the nine subsequent runs at each
parent= setting. I think it's safe to say the overhead ranges from
negligible to about 10% on a cold new filesystem.
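(For reference, the ~10% worst case falls straight out of the first-run
wall clock times pasted above; the subsequent-run deltas are all within
noise:)

```shell
# Percent overhead of the first parent=1 run (20.654s) over the first
# parent=0 run (18.807s), using the timings from the fsstress runs above.
awk 'BEGIN { printf "%.1f%%\n", (20.654 - 18.807) / 18.807 * 100 }'
# prints 9.8%
```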
--D