From: Christoph Hellwig <hch@lst.de>
To: "Nirjhar Roy (IBM)" <nirjhar.roy.lists@gmail.com>
Cc: Christoph Hellwig <hch@lst.de>, Carlos Maiolino <cem@kernel.org>,
"Darrick J. Wong" <djwong@kernel.org>,
linux-xfs@vger.kernel.org
Subject: Re: [PATCH 2/2] xfs: remove metafile inodes from the active inode stat
Date: Tue, 3 Feb 2026 08:14:34 +0100 [thread overview]
Message-ID: <20260203071434.GA19039@lst.de> (raw)
In-Reply-To: <00fa6edc7f0c324ceb95f7181682d04ce3f53839.camel@gmail.com>
On Tue, Feb 03, 2026 at 12:41:23PM +0530, Nirjhar Roy (IBM) wrote:
> On Mon, 2026-02-02 at 15:14 +0100, Christoph Hellwig wrote:
> > The active inode (or, until recently, active vnode) stat can get much larger
> > than expected on file systems with a lot of metafile inodes, like zoned
> > file systems on SMR hard disks with 10,000s of rtg rmap inodes.
> And this was causing (or could have caused) some sort of counter overflows or something?
Not really an overflow. But if you have a lot of metadir inodes, it
inflates the stat with counts for inodes that are not user visible.
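To make that concrete, a rough sketch (not the actual patch; the
xs_inodes_active name is taken from patch 1 and XFS_STATS_INC() is the
existing stats helper): before this change every in-core inode bumps
the same user-visible counter when it is set up, whether it is a
metadir inode or not:

	struct xfs_inode *
	xfs_inode_alloc(
		struct xfs_mount	*mp,
		xfs_ino_t		ino)
	{
		struct xfs_inode	*ip;

		/* ... allocate and initialize the in-core inode ... */

		/* metadir inodes end up in the user-visible count too */
		XFS_STATS_INC(mp, xs_inodes_active);
		return ip;
	}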
> > This fixes xfs/177 on SMR hard drives.
> I can see that xfs/177 has a couple of sub-test cases (like Round 1, 2, ...) - do you remember if one
> particular round was causing problems, or were there issues with all/most of them?
Comparing the cached values to the expected ones.
> So is it that there is a state (or some function) where the
> xs_inodes_active counter was bumped up even though "ip" was a metadir
> inode, and here in the above line it is corrected (i.e., decremented
> by 1) while xs_inodes_meta is incremented - shouldn't the appropriate
> counter have been bumped directly when the inode was created?
xfs_inode_alloc doesn't know if the inode is going to be a meta inode,
as in the common case it hasn't been read from disk yet. For the
less common inode allocation case we would know, but passing that
information down would be a bit annoying.
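So the adjustment has to happen later, once the on-disk inode has been
read in and identified. Roughly like this (sketch only, using the
helper and counter names from the discussion above, not the literal
hunk from the patch):

	if (xfs_is_metadir_inode(ip)) {
		/* not user visible, move it to the separate metafile count */
		XFS_STATS_DEC(mp, xs_inodes_active);
		XFS_STATS_INC(mp, xs_inodes_meta);
	}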
> > +/* Metafile counters */
> > + uint32_t xs_inodes_meta;
> uint64_t would be overkill, wouldn't it?
Yes. Sticking to the same type as the active inode counter here.
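For reference, that is roughly where the new member would sit in the
stats structure (layout heavily abbreviated, field names assumed from
the discussion):

	struct __xfsstats {
		/* ... existing counters ... */
		uint32_t	xs_inodes_active;	/* user-visible in-core inodes */
		/* Metafile counters */
		uint32_t	xs_inodes_meta;		/* metadir in-core inodes */
		/* ... */
	};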