From: Long Li <leo.lilong@huawei.com>
To: Dave Chinner <david@fromorbit.com>
Cc: <djwong@kernel.org>, <houtao1@huawei.com>, <yi.zhang@huawei.com>,
	<guoxuenan@huawei.com>, <linux-xfs@vger.kernel.org>
Subject: Re: [PATCH] xfs: fix incorrect i_nlink caused by inode racing
Date: Tue, 15 Nov 2022 22:33:38 +0800
Message-ID: <20221115143338.GB1723222@ceph-admin>
In-Reply-To: <20221115002313.GS3600936@dread.disaster.area>

On Tue, Nov 15, 2022 at 11:23:13AM +1100, Dave Chinner wrote:
> On Mon, Nov 14, 2022 at 09:34:17PM +0800, Long Li wrote:
> > On Sat, Nov 12, 2022 at 07:52:50AM +1100, Dave Chinner wrote:
> > > On Mon, Nov 07, 2022 at 10:36:48PM +0800, Long Li wrote:
> > > > The following error occurred during the fsstress test:
> > > > 
> > > > XFS: Assertion failed: VFS_I(ip)->i_nlink >= 2, file: fs/xfs/xfs_inode.c, line: 2925
> > > > 
> > > > The problem was that an inode race condition caused an incorrect i_nlink
> > > > to be written to disk and then read back into memory. Consider the
> > > > following call graph: for an inode that is marked as both XFS_IFLUSHING
> > > > and XFS_IRECLAIMABLE, i_nlink is reset to 1 and then restored to its
> > > > original value in xfs_reinit_inode(). Therefore, the i_nlink of a
> > > > directory on disk may be set to 1.
> > > > 
> > > >   xfsaild
> > > >       xfs_inode_item_push
> > > >           xfs_iflush_cluster
> > > >               xfs_iflush
> > > >                   xfs_inode_to_disk
> > > > 
> > > >   xfs_iget
> > > >       xfs_iget_cache_hit
> > > >           xfs_iget_recycle
> > > >               xfs_reinit_inode
> > > >                   inode_init_always
> > > > 
> > > > So skip inodes that are being flushed and marked as XFS_IRECLAIMABLE,
> > > > to prevent concurrent reads and writes to the inode.
> > > 
> > > urk.
> > > 
> > > xfs_reinit_inode() needs to hold the ILOCK_EXCL as it is changing
> > > internal inode state and can race with other RCU protected inode
> > > lookups. Have a look at what xfs_iflush_cluster() does - it
> > > grabs the ILOCK_SHARED while under rcu + ip->i_flags_lock, and so
> > > xfs_iflush/xfs_inode_to_disk() are protected from racing inode
> > > updates (during transactions) by that lock.
> > > 
> > > Hence it looks to me that I_FLUSHING isn't the problem here - it's
> > > that we have a transient modified inode state in xfs_reinit_inode()
> > > that is externally visible...
> > 
> > Before xfs_reinit_inode(), XFS_IRECLAIM will be set in ip->i_flags; it
> > looks like this can prevent races with other RCU-protected inode lookups.
> 
> That only protects against new lookups - it does not protect against the
> IRECLAIM flag being set *after* the lookup in xfs_iflush_cluster()
> whilst the inode is being flushed to the cluster buffer. That's why
> xfs_iflush_cluster() does:
> 
> 	rcu_read_lock()
> 	lookup inode
> 	spinlock(ip->i_flags_lock);
> 	check IRECLAIM|IFLUSHING
> >>>>>>	xfs_ilock_nowait(ip, XFS_ILOCK_SHARED)     <<<<<<<<
> 	set IFLUSHING
> 	spin_unlock(ip->i_flags_lock)
> 	rcu_read_unlock()
> 
> At this point, the only lock that is held is XFS_ILOCK_SHARED, and
> it's the only lock that protects the inode state outside the lookup
> scope against concurrent changes.
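
Just to make sure I follow, transliterating that ordering into C (a
sketch of the pattern only, not the verbatim code in
fs/xfs/xfs_inode.c):

	rcu_read_lock();
	/* ... look up the next inode attached to the cluster buffer ... */
	spin_lock(&ip->i_flags_lock);
	if (__xfs_iflags_test(ip, XFS_IRECLAIM | XFS_IFLUSHING) ||
	    !xfs_ilock_nowait(ip, XFS_ILOCK_SHARED)) {
		/* skip this inode: reclaiming, flushing, or ILOCK held */
		spin_unlock(&ip->i_flags_lock);
		rcu_read_unlock();
		return;
	}
	__xfs_iflags_set(ip, XFS_IFLUSHING);
	spin_unlock(&ip->i_flags_lock);
	rcu_read_unlock();
	/* ... flush the inode core while holding only ILOCK_SHARED ... */

So once i_flags_lock and the RCU read lock are dropped, ILOCK_SHARED
is the only thing standing between the flush and a concurrent
modification of the inode core.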
> 
> Essentially, xfs_reinit_inode() needs to add a:
> 
> 	xfs_ilock_nowait(ip, XFS_ILOCK_EXCL)
> 
> before it sets IRECLAIM - if it fails to get the ILOCK_EXCL, then we
> need to skip the inode, drop out of RCU scope, delay and retry the
> lookup.
> 
> > Could we avoid modifying the information about the on-disk values in the
> > VFS inode in xfs_reinit_inode()? If so, the lock could be avoided.
> 
> We have to reinit the VFS inode because it has gone through
> ->destroy_inode and so the state has been trashed. We have to bring
> it back as an I_NEW inode, which requires reinitialising everything.
> The issue is that we store inode state information (like nlink) in
> the VFS inode instead of the XFS inode portion of the structure (to
> minimise memory footprint), and that means xfs_reinit_inode() has a
> transient state where the VFS inode is not correct. We can avoid
> that simply by holding the XFS_ILOCK_EXCL, guaranteeing nothing in
> XFS should be trying to read/modify the internal metadata state
> while we are reinitialising the VFS inode portion of the
> structure...
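
I see - so the window is inside xfs_reinit_inode() itself. Simplified
from my reading of fs/xfs/xfs_icache.c (most of the save/restore
elided):

	unsigned int	nlink = inode->i_nlink;
	uint64_t	version = inode_peek_iversion(inode);

	error = inode_init_always(mp->m_super, inode);
	/*
	 * At this point inode->i_nlink has been reset to 1. Without
	 * ILOCK_EXCL held, a racing xfs_iflush() can copy that
	 * transient value into the cluster buffer via
	 * xfs_inode_to_disk().
	 */
	set_nlink(inode, nlink);
	inode_set_iversion_queried(inode, version);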
> 
> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com

Thanks for the detailed and clear explanation. Holding ILOCK_EXCL in
xfs_reinit_inode() can solve the problem simply; I will resend a
patch. :)
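
Roughly, I'm thinking of something along these lines (an untested
sketch, collapsing the xfs_iget_cache_hit()/xfs_iget_recycle() steps
into one snippet; the real error paths will need more care):

	spin_lock(&ip->i_flags_lock);
	/* ... existing IRECLAIM/INEW checks ... */
	if (!xfs_ilock_nowait(ip, XFS_ILOCK_EXCL)) {
		/* skip the inode, drop out of RCU scope, delay, retry */
		spin_unlock(&ip->i_flags_lock);
		rcu_read_unlock();
		delay(1);
		goto again;
	}
	ip->i_flags |= XFS_IRECLAIM;
	spin_unlock(&ip->i_flags_lock);
	rcu_read_unlock();

	error = xfs_reinit_inode(mp, inode);	/* now under ILOCK_EXCL */
	...
	xfs_iunlock(ip, XFS_ILOCK_EXCL);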

Thanks,
Long Li


Thread overview: 7+ messages
2022-11-07 14:36 [PATCH] xfs: fix incorrect i_nlink caused by inode racing Long Li
2022-11-07 16:38 ` Darrick J. Wong
2022-11-10  1:42   ` Long Li
2022-11-11 20:52 ` Dave Chinner
2022-11-14 13:34   ` Long Li
2022-11-15  0:23     ` Dave Chinner
2022-11-15 14:33       ` Long Li [this message]
