From: Dave Chinner <david@fromorbit.com>
To: Brian Foster <bfoster@redhat.com>
Cc: linux-xfs@vger.kernel.org
Subject: Re: [PATCH 24/30] xfs: rework stale inodes in xfs_ifree_cluster
Date: Sat, 6 Jun 2020 07:32:10 +1000
Message-ID: <20200605213210.GE2040@dread.disaster.area>
In-Reply-To: <20200605182722.GH23747@bfoster>

On Fri, Jun 05, 2020 at 02:27:22PM -0400, Brian Foster wrote:
> On Thu, Jun 04, 2020 at 05:46:00PM +1000, Dave Chinner wrote:
> > From: Dave Chinner <dchinner@redhat.com>
> > 
> > Once we have inodes pinning the cluster buffer and attached whenever
> > they are dirty, we no longer have a guarantee that the items are
> > flush locked when we lock the cluster buffer. Hence we cannot just
> > walk the buffer log item list and modify the attached inodes.
> > 
> > If the inode is not flush locked, we have to ILOCK it first and then
> > flush lock it to do all the prerequisite checks needed to avoid
> > races with other code. This is already handled by
> > xfs_ifree_get_one_inode(), so rework the inode iteration loop and
> > function to update all inodes in cache whether they are attached to
> > the buffer or not.
> > 
> > Note: we also remove the copying of the log item lsn to the
> > ili_flush_lsn as xfs_iflush_done() now uses the XFS_ISTALE flag to
> > trigger aborts and so flush lsn matching is not needed in IO
> > completion for processing freed inodes.
> > 
> > Signed-off-by: Dave Chinner <dchinner@redhat.com>
> > Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
> > ---
> >  fs/xfs/xfs_inode.c | 158 ++++++++++++++++++---------------------------
> >  1 file changed, 62 insertions(+), 96 deletions(-)
> > 
> > diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c
> > index 272b54cf97000..fb4c614c64fda 100644
> > --- a/fs/xfs/xfs_inode.c
> > +++ b/fs/xfs/xfs_inode.c
> ...
> > @@ -2559,43 +2563,53 @@ xfs_ifree_get_one_inode(
> >  	 */
> >  	if (ip != free_ip) {
> >  		if (!xfs_ilock_nowait(ip, XFS_ILOCK_EXCL)) {
> > +			spin_unlock(&ip->i_flags_lock);
> >  			rcu_read_unlock();
> >  			delay(1);
> >  			goto retry;
> >  		}
> > -
> > -		/*
> > -		 * Check the inode number again in case we're racing with
> > -		 * freeing in xfs_reclaim_inode().  See the comments in that
> > -		 * function for more information as to why the initial check is
> > -		 * not sufficient.
> > -		 */
> > -		if (ip->i_ino != inum) {
> > -			xfs_iunlock(ip, XFS_ILOCK_EXCL);
> > -			goto out_rcu_unlock;
> > -		}
> 
> Why is the recheck under ILOCK_EXCL no longer necessary? It looks like
> reclaim decides whether to proceed or not under the ilock and doesn't
> acquire the spinlock until it decides to reclaim. Hm?

Because we now take the ILOCK while still holding the i_flags_lock
instead of dropping the spin lock and then trying to get the ILOCK.
Hence with this change, if we get the ILOCK we are guaranteed that
the inode number has not changed and we don't need to recheck it.

xfs_reclaim_inode() guarantees this because it locks in the order
ILOCK -> i_flags_lock and zeroes ip->i_ino while holding both of
those locks. Hence if we hold the i_flags_lock and then try to get
the ILOCK, either the inode is still valid and reclaim will skip it
(because we hold the locks it needs), or the inode is already in
reclaim and ip->i_ino will be zero....
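
To illustrate, the lookup side now does something like this (a
rough sketch paraphrasing the post-patch logic, not the code
verbatim):

	rcu_read_lock();
	ip = radix_tree_lookup(&pag->pag_ici_root,
			XFS_INO_TO_AGINO(mp, inum));
	if (!ip) {
		rcu_read_unlock();
		return NULL;
	}

	spin_lock(&ip->i_flags_lock);
	if (ip->i_ino != inum || __xfs_iflags_test(ip, XFS_ISTALE))
		goto out_iflags_unlock;

	/*
	 * Take the ILOCK while still holding i_flags_lock.
	 * xfs_reclaim_inode() zeroes ip->i_ino under both the ILOCK
	 * and the i_flags_lock, so once the i_ino check above has
	 * passed the inode number cannot change underneath us and no
	 * recheck is needed after the ILOCK is gained.
	 */
	if (ip != free_ip && !xfs_ilock_nowait(ip, XFS_ILOCK_EXCL)) {
		spin_unlock(&ip->i_flags_lock);
		rcu_read_unlock();
		delay(1);
		goto retry;
	}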


> >  	}
> > +	ip->i_flags |= XFS_ISTALE;
> > +	spin_unlock(&ip->i_flags_lock);
> >  	rcu_read_unlock();
> >  
> > -	xfs_iflock(ip);
> > -	xfs_iflags_set(ip, XFS_ISTALE);
> > +	/*
> > +	 * If we can't get the flush lock, the inode is already attached.  All
> > +	 * we needed to do here is mark the inode stale so buffer IO completion
> > +	 * will remove it from the AIL.
> > +	 */
> 
> To make sure I'm following this correctly, we can assume the inode is
> attached based on an iflock_nowait() failure because we hold the ilock,
> right?

Actually, because we hold the buffer lock. We only flush the inode
to the buffer while holding the buffer lock, so all flush locking
should nest inside the buffer lock. Hence for taking the flush
lock, the lock order is bp->b_sema -> ILOCK_EXCL -> iflock. We drop
the flush lock before we drop the buffer lock in IO completion, and
hence if we hold the buffer lock, nothing else can actually unlock
the inode flush lock.
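
In other words, the nesting looks like this (sketch only, not
verbatim code):

	/* flush side: flush lock only ever taken under the buffer lock */
	xfs_buf_lock(bp);			/* bp->b_sema */
	xfs_ilock(ip, XFS_ILOCK_EXCL);
	if (xfs_iflock_nowait(ip)) {
		/* write inode core into bp, attach log item to bp */
	}
	xfs_iunlock(ip, XFS_ILOCK_EXCL);
	xfs_buf_submit(bp);	/* inode stays flush locked across the IO */

	/*
	 * completion side also runs under bp->b_sema, and it is the
	 * only place the flush lock gets dropped:
	 */
	xfs_ifunlock(ip);	/* in xfs_iflush_done() */

So while we hold bp->b_sema ourselves, IO completion cannot run,
and a failed xfs_iflock_nowait() can only mean the inode has
already been flushed and attached to the buffer we hold.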

> IOW, any other task doing a similar iflock check would have to do
> so under ilock and release the flush lock first if the inode didn't end
> up flushed, for whatever reason.

Yes, anything taking the flush lock needs to first hold the ILOCK -
that's always been the case and we've always done it that way
because the ILOCK is needed to provide serialisation against a)
other modifications while we are accessing/flushing the inode, and
b) inode reclaim.

/me checks.

After this patchset nothing calls xfs_iflock() at all - everything
uses xfs_iflock_nowait() - so it might be time to turn the flush
lock back into a plain state flag and get rid of the iflock
machinery altogether, as it's effectively just a state flag now...
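
Something like this, perhaps (hypothetical sketch - the
XFS_IFLUSHING flag name is invented here, it doesn't exist in the
tree):

	/* instead of xfs_iflock_nowait(ip): */
	spin_lock(&ip->i_flags_lock);
	if (__xfs_iflags_test(ip, XFS_IFLUSHING)) {
		/* flush in progress, buffer IO completion will clear it */
		spin_unlock(&ip->i_flags_lock);
		return false;
	}
	__xfs_iflags_set(ip, XFS_IFLUSHING);
	spin_unlock(&ip->i_flags_lock);

	/* ... and instead of xfs_ifunlock(ip) in IO completion: */
	xfs_iflags_clear(ip, XFS_IFLUSHING);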

> > +	ASSERT(iip->ili_fields);
> > +	spin_lock(&iip->ili_lock);
> > +	iip->ili_last_fields = iip->ili_fields;
> > +	iip->ili_fields = 0;
> > +	iip->ili_fsync_fields = 0;
> > +	spin_unlock(&iip->ili_lock);
> > +	list_add_tail(&iip->ili_item.li_bio_list, &bp->b_li_list);
> > +	ASSERT(iip->ili_last_fields);
> 
> We already asserted ->ili_fields and assigned ->ili_fields to
> ->ili_last_fields, so this assert seems spurious.

Ah, the first ASSERT goes away in the next patch, I think. It was
debug code, and I may have removed it from the wrong patch...

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com
