From: "Darrick J. Wong" <djwong@kernel.org>
To: Dave Chinner <david@fromorbit.com>
Cc: Christoph Hellwig <hch@infradead.org>,
cheng.lin130@zte.com.cn, linux-xfs@vger.kernel.org,
jiang.yong5@zte.com.cn, wang.liang82@zte.com.cn,
liu.dong3@zte.com.cn
Subject: Re: [PATCH] xfs: pin inodes that would otherwise overflow link count
Date: Wed, 11 Oct 2023 14:25:13 -0700
Message-ID: <20231011212513.GZ21298@frogsfrogsfrogs>
In-Reply-To: <ZScQRPEzGALKuSpk@dread.disaster.area>

On Thu, Oct 12, 2023 at 08:14:44AM +1100, Dave Chinner wrote:
> On Thu, Oct 12, 2023 at 08:08:20AM +1100, Dave Chinner wrote:
> > On Wed, Oct 11, 2023 at 01:33:50PM -0700, Darrick J. Wong wrote:
> > > From: Darrick J. Wong <djwong@kernel.org>
> > >
> > > The VFS inc_nlink function does not explicitly check for integer
> > > overflows in the i_nlink field. Instead, it checks the link count
> > > against s_max_links in the vfs_{link,create,rename} functions. XFS
> > > sets the maximum link count to 2.1 billion, so integer overflows should
> > > not be a problem.
> > >
> > > However. It's possible that online repair could find that a file has
> > > more than four billion links, particularly if the link count got
> >
> > I don't think we should be attempting to fix that online - if we've
> > really found an inode with 4 billion links then something else has
> > gone wrong during repair because we shouldn't get there in the first
> > place.
> >
> > IOWs, we should be preventing a link count overflow at the time
> > that the link count is being added and returning -EMLINK errors to
> > that operation. This prevents overflow, and so if repair does find
> > more than 2.1 billion links to the inode, there's clearly something
> > else very wrong (either in repair or a bug in the filesystem that
> > has leaked many, many link counts).
> >
> > huh.
> >
> > We set sb->s_max_links = XFS_MAXLINKS, but nowhere does the VFS
> > enforce that, nor does any XFS code. The lack of checking or
> > enforcement of filesystem max link count anywhere is ... not ideal.
>
> No, wait, I read the cscope output wrong. sb->s_max_links *is*
> enforced at the VFS level, so we should never end up in a situation
> with link count greater than XFS_MAXLINKS inside the XFS code in
> > normal operation, i.e. a count greater than that is an indication of
> a software bug or corruption, so we should definitely be verifying
> di_nlink is within the valid on-disk range regardless of anything
> else....
... and I just realized that the VFS doesn't check for underflows when
unlinking or rmdir'ing. Maybe it should be doing that instead of
patching XFS and everything else?
--D
> -Dave.
> --
> Dave Chinner
> david@fromorbit.com
Thread overview: 7+ messages
2023-10-11 20:33 [PATCH] xfs: pin inodes that would otherwise overflow link count Darrick J. Wong
2023-10-11 21:08 ` Dave Chinner
2023-10-11 21:14 ` Dave Chinner
2023-10-11 21:25 ` Darrick J. Wong [this message]
2023-10-11 21:41 ` Darrick J. Wong
2023-10-11 22:21 ` Dave Chinner
2023-10-12 11:05 ` cheng.lin130