From: Brian Foster <bfoster@redhat.com>
To: Dave Chinner <david@fromorbit.com>
Cc: linux-xfs@vger.kernel.org
Subject: Re: [PATCH 26/30] xfs: xfs_iflush() is no longer necessary
Date: Mon, 8 Jun 2020 12:45:51 -0400
Message-ID: <20200608164551.GD36278@bfoster>
In-Reply-To: <20200604074606.266213-27-david@fromorbit.com>
On Thu, Jun 04, 2020 at 05:46:02PM +1000, Dave Chinner wrote:
> From: Dave Chinner <dchinner@redhat.com>
>
> Now we have a cached buffer on inode log items, we don't need
> to do buffer lookups when flushing inodes anymore - all we need
> to do is lock the buffer and we are ready to go.
>
> This largely gets rid of the need for xfs_iflush(), which is
> essentially just a mechanism to look up the buffer and flush the
> inode to it. Instead, we can just call xfs_iflush_cluster() with a
> few modifications to ensure it also flushes the inode we already
> hold locked.
>
> This allows the AIL inode item pushing to be almost entirely
> non-blocking in XFS - we won't block unless memory allocation
> for the cluster inode lookup blocks or the block device queues are
> full.
>
> Writeback during inode reclaim becomes a little more complex because
> we now have to lock the buffer ourselves, but otherwise this change
> is largely a functional no-op that removes a whole lot of code.
>
> Signed-off-by: Dave Chinner <dchinner@redhat.com>
> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
> ---
Looks mostly reasonable..
> fs/xfs/xfs_inode.c | 106 ++++++----------------------------------
> fs/xfs/xfs_inode.h | 2 +-
> fs/xfs/xfs_inode_item.c | 54 +++++++++-----------
> 3 files changed, 37 insertions(+), 125 deletions(-)
>
> diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c
> index af65acd24ec4e..61c872e4ee157 100644
> --- a/fs/xfs/xfs_inode.c
> +++ b/fs/xfs/xfs_inode.c
...
> @@ -3688,6 +3609,7 @@ xfs_iflush_int(
> ASSERT(ip->i_df.if_format != XFS_DINODE_FMT_BTREE ||
> ip->i_df.if_nextents > XFS_IFORK_MAXEXT(ip, XFS_DATA_FORK));
> ASSERT(iip != NULL && iip->ili_fields != 0);
> + ASSERT(iip->ili_item.li_buf == bp);
FWIW, the previous assert includes an iip NULL check.
>
> dip = xfs_buf_offset(bp, ip->i_imap.im_boffset);
>
...
> diff --git a/fs/xfs/xfs_inode_item.c b/fs/xfs/xfs_inode_item.c
> index 697248b7eb2be..326547e89cb6b 100644
> --- a/fs/xfs/xfs_inode_item.c
> +++ b/fs/xfs/xfs_inode_item.c
> @@ -485,53 +485,42 @@ xfs_inode_item_push(
> uint rval = XFS_ITEM_SUCCESS;
> int error;
>
> - if (xfs_ipincount(ip) > 0)
> + ASSERT(iip->ili_item.li_buf);
> +
> + if (xfs_ipincount(ip) > 0 || xfs_buf_ispinned(bp) ||
> + (ip->i_flags & XFS_ISTALE))
> return XFS_ITEM_PINNED;
>
> - if (!xfs_ilock_nowait(ip, XFS_ILOCK_SHARED))
> - return XFS_ITEM_LOCKED;
> + /* If the inode is already flush locked, we're already flushing. */
Or we could be racing with reclaim (since we no longer hold the ilock
here)?
> + if (xfs_isiflocked(ip))
> + return XFS_ITEM_FLUSHING;
>
> - /*
> - * Re-check the pincount now that we stabilized the value by
> - * taking the ilock.
> - */
> - if (xfs_ipincount(ip) > 0) {
> - rval = XFS_ITEM_PINNED;
> - goto out_unlock;
> - }
> + if (!xfs_buf_trylock(bp))
> + return XFS_ITEM_LOCKED;
>
> - /*
> - * Stale inode items should force out the iclog.
> - */
> - if (ip->i_flags & XFS_ISTALE) {
> - rval = XFS_ITEM_PINNED;
> - goto out_unlock;
> + if (bp->b_flags & _XBF_DELWRI_Q) {
> + xfs_buf_unlock(bp);
> + return XFS_ITEM_FLUSHING;
Hmm, what's the purpose of this check? I would expect that we'd still be
able to flush to a buffer even though it's delwri queued. We drop the
buffer lock after queueing it (and then it's reacquired on delwri
submit).
> }
> + spin_unlock(&lip->li_ailp->ail_lock);
>
> /*
> - * Someone else is already flushing the inode. Nothing we can do
> - * here but wait for the flush to finish and remove the item from
> - * the AIL.
> + * We need to hold a reference for flushing the cluster buffer as it may
> + * fail the buffer without IO submission. In which case, we better get a
> + * reference for that completion because otherwise we don't get a
> + * reference for IO until we queue the buffer for delwri submission.
> */
> - if (!xfs_iflock_nowait(ip)) {
> - rval = XFS_ITEM_FLUSHING;
> - goto out_unlock;
> - }
> -
> - ASSERT(iip->ili_fields != 0 || XFS_FORCED_SHUTDOWN(ip->i_mount));
> - spin_unlock(&lip->li_ailp->ail_lock);
> -
> - error = xfs_iflush(ip, &bp);
> + xfs_buf_hold(bp);
> + error = xfs_iflush_cluster(ip, bp);
> if (!error) {
> if (!xfs_buf_delwri_queue(bp, buffer_list))
> rval = XFS_ITEM_FLUSHING;
> xfs_buf_relse(bp);
> - } else if (error == -EAGAIN)
> + } else {
> rval = XFS_ITEM_LOCKED;
> + }
>
> spin_lock(&lip->li_ailp->ail_lock);
> -out_unlock:
> - xfs_iunlock(ip, XFS_ILOCK_SHARED);
> return rval;
> }
>
> @@ -548,6 +537,7 @@ xfs_inode_item_release(
>
> ASSERT(ip->i_itemp != NULL);
> ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL));
> + ASSERT(lip->li_buf || !test_bit(XFS_LI_DIRTY, &lip->li_flags));
This is the transaction cancel/abort path, so it seems like this should
be part of the patch that attaches the ili when logging the inode?
Brian
>
> lock_flags = iip->ili_lock_flags;
> iip->ili_lock_flags = 0;
> --
> 2.26.2.761.g0e0b3e54be
>