From: "Darrick J. Wong" <darrick.wong@oracle.com>
To: Dave Chinner <david@fromorbit.com>
Cc: linux-xfs@vger.kernel.org
Subject: Re: [PATCH 28/30] xfs: rework xfs_iflush_cluster() dirty inode iteration
Date: Tue, 2 Jun 2020 16:23:35 -0700
Message-ID: <20200602232335.GS8230@magnolia>
In-Reply-To: <20200601214251.4167140-29-david@fromorbit.com>
On Tue, Jun 02, 2020 at 07:42:49AM +1000, Dave Chinner wrote:
> From: Dave Chinner <dchinner@redhat.com>
>
> Now that we have all the dirty inodes attached to the cluster
> buffer, we don't actually have to do radix tree lookups to find
> them. Sure, the radix tree is efficient, but walking a linked list
> of just the dirty inodes attached to the buffer is much better.
>
> We are also no longer dependent on having a locked inode passed into
> the function to determine where to start the lookup. This means we
> can drop it from the function call and treat all inodes the same.
>
> We also make xfs_iflush_cluster skip inodes marked with
> XFS_IRECLAIM. This way we avoid races with inodes that reclaim is
> actively referencing or that are being re-initialised by inode
> lookup. If they are actually dirty, they'll get written by a future
> cluster flush....
>
> We also add a shutdown check after obtaining the flush lock so that
> we catch inodes that are dirty in memory and may have inconsistent
> state due to the shutdown in progress. We abort these inodes
> directly, so they remove themselves from the buffer list and the
> AIL immediately rather than waiting for the buffer IO to be failed
> and the error callbacks to be run.
>
> Signed-off-by: Dave Chinner <dchinner@redhat.com>
Looks ok,
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
--D
> ---
> fs/xfs/xfs_inode.c | 148 ++++++++++++++++------------------------
> fs/xfs/xfs_inode.h | 2 +-
> fs/xfs/xfs_inode_item.c | 2 +-
> 3 files changed, 62 insertions(+), 90 deletions(-)
>
> diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c
> index 8566bd0f4334d..931a483d5b316 100644
> --- a/fs/xfs/xfs_inode.c
> +++ b/fs/xfs/xfs_inode.c
> @@ -3611,117 +3611,94 @@ xfs_iflush(
> */
> int
> xfs_iflush_cluster(
> - struct xfs_inode *ip,
> struct xfs_buf *bp)
> {
> - struct xfs_mount *mp = ip->i_mount;
> - struct xfs_perag *pag;
> - unsigned long first_index, mask;
> - int cilist_size;
> - struct xfs_inode **cilist;
> - struct xfs_inode *cip;
> - struct xfs_ino_geometry *igeo = M_IGEO(mp);
> - int error = 0;
> - int nr_found;
> + struct xfs_mount *mp = bp->b_mount;
> + struct xfs_log_item *lip, *n;
> + struct xfs_inode *ip;
> + struct xfs_inode_log_item *iip;
> int clcount = 0;
> - int i;
> -
> - pag = xfs_perag_get(mp, XFS_INO_TO_AGNO(mp, ip->i_ino));
> -
> - cilist_size = igeo->inodes_per_cluster * sizeof(struct xfs_inode *);
> - cilist = kmem_alloc(cilist_size, KM_MAYFAIL|KM_NOFS);
> - if (!cilist)
> - goto out_put;
> -
> - mask = ~(igeo->inodes_per_cluster - 1);
> - first_index = XFS_INO_TO_AGINO(mp, ip->i_ino) & mask;
> - rcu_read_lock();
> - /* really need a gang lookup range call here */
> - nr_found = radix_tree_gang_lookup(&pag->pag_ici_root, (void**)cilist,
> - first_index, igeo->inodes_per_cluster);
> - if (nr_found == 0)
> - goto out_free;
> + int error = 0;
>
> - for (i = 0; i < nr_found; i++) {
> - cip = cilist[i];
> + /*
> + * We must use the safe variant here as on shutdown xfs_iflush_abort()
> + * can remove itself from the list.
> + */
> + list_for_each_entry_safe(lip, n, &bp->b_li_list, li_bio_list) {
> + iip = (struct xfs_inode_log_item *)lip;
> + ip = iip->ili_inode;
>
> /*
> - * because this is an RCU protected lookup, we could find a
> - * recently freed or even reallocated inode during the lookup.
> - * We need to check under the i_flags_lock for a valid inode
> - * here. Skip it if it is not valid or the wrong inode.
> + * Quick and dirty check to avoid locks if possible.
> */
> - spin_lock(&cip->i_flags_lock);
> - if (!cip->i_ino ||
> - __xfs_iflags_test(cip, XFS_ISTALE)) {
> - spin_unlock(&cip->i_flags_lock);
> + if (__xfs_iflags_test(ip, XFS_IRECLAIM | XFS_IFLOCK))
> + continue;
> + if (xfs_ipincount(ip))
> continue;
> - }
>
> /*
> - * Once we fall off the end of the cluster, no point checking
> - * any more inodes in the list because they will also all be
> - * outside the cluster.
> + * The inode is still attached to the buffer, which means it is
> + * dirty but reclaim might try to grab it. Check carefully for
> + * that, and grab the ilock while still holding the i_flags_lock
> + * to guarantee reclaim will not be able to reclaim this inode
> + * once we drop the i_flags_lock.
> */
> - if ((XFS_INO_TO_AGINO(mp, cip->i_ino) & mask) != first_index) {
> - spin_unlock(&cip->i_flags_lock);
> - break;
> + spin_lock(&ip->i_flags_lock);
> + ASSERT(!__xfs_iflags_test(ip, XFS_ISTALE));
> + if (__xfs_iflags_test(ip, XFS_IRECLAIM | XFS_IFLOCK)) {
> + spin_unlock(&ip->i_flags_lock);
> + continue;
> }
> - spin_unlock(&cip->i_flags_lock);
>
> /*
> - * Do an un-protected check to see if the inode is dirty and
> - * is a candidate for flushing. These checks will be repeated
> - * later after the appropriate locks are acquired.
> + * ILOCK will pin the inode against reclaim and prevent
> + * concurrent transactions modifying the inode while we are
> + * flushing the inode.
> */
> - if (xfs_inode_clean(cip) && xfs_ipincount(cip) == 0)
> + if (!xfs_ilock_nowait(ip, XFS_ILOCK_SHARED)) {
> + spin_unlock(&ip->i_flags_lock);
> continue;
> + }
> + spin_unlock(&ip->i_flags_lock);
>
> /*
> - * Try to get locks. If any are unavailable or it is pinned,
> - * then this inode cannot be flushed and is skipped.
> + * Skip inodes that are already flush locked as they have
> + * already been written to the buffer.
> */
> -
> - if (!xfs_ilock_nowait(cip, XFS_ILOCK_SHARED))
> - continue;
> - if (!xfs_iflock_nowait(cip)) {
> - xfs_iunlock(cip, XFS_ILOCK_SHARED);
> - continue;
> - }
> - if (xfs_ipincount(cip)) {
> - xfs_ifunlock(cip);
> - xfs_iunlock(cip, XFS_ILOCK_SHARED);
> + if (!xfs_iflock_nowait(ip)) {
> + xfs_iunlock(ip, XFS_ILOCK_SHARED);
> continue;
> }
>
> -
> /*
> - * Check the inode number again, just to be certain we are not
> - * racing with freeing in xfs_reclaim_inode(). See the comments
> - * in that function for more information as to why the initial
> - * check is not sufficient.
> + * If we are shut down, unpin and abort the inode now as there
> + * is no point in flushing it to the buffer just to get an IO
> + * completion to abort the buffer and remove it from the AIL.
> */
> - if (!cip->i_ino) {
> - xfs_ifunlock(cip);
> - xfs_iunlock(cip, XFS_ILOCK_SHARED);
> + if (XFS_FORCED_SHUTDOWN(mp)) {
> + xfs_iunpin_wait(ip);
> + /* xfs_iflush_abort() drops the flush lock */
> + xfs_iflush_abort(ip);
> + xfs_iunlock(ip, XFS_ILOCK_SHARED);
> + error = -EIO;
> continue;
> }
>
> - /*
> - * arriving here means that this inode can be flushed. First
> - * re-check that it's dirty before flushing.
> - */
> - if (!xfs_inode_clean(cip)) {
> - error = xfs_iflush(cip, bp);
> - if (error) {
> - xfs_iunlock(cip, XFS_ILOCK_SHARED);
> - goto out_free;
> - }
> - clcount++;
> - } else {
> - xfs_ifunlock(cip);
> + /* don't block waiting on a log force to unpin dirty inodes */
> + if (xfs_ipincount(ip)) {
> + xfs_ifunlock(ip);
> + xfs_iunlock(ip, XFS_ILOCK_SHARED);
> + continue;
> }
> - xfs_iunlock(cip, XFS_ILOCK_SHARED);
> +
> + if (!xfs_inode_clean(ip))
> + error = xfs_iflush(ip, bp);
> + else
> + xfs_ifunlock(ip);
> + xfs_iunlock(ip, XFS_ILOCK_SHARED);
> + if (error)
> + break;
> + clcount++;
> }
>
> if (clcount) {
> @@ -3729,11 +3706,6 @@ xfs_iflush_cluster(
> XFS_STATS_ADD(mp, xs_icluster_flushinode, clcount);
> }
>
> -out_free:
> - rcu_read_unlock();
> - kmem_free(cilist);
> -out_put:
> - xfs_perag_put(pag);
> if (error) {
> bp->b_flags |= XBF_ASYNC;
> xfs_buf_ioend_fail(bp);
> diff --git a/fs/xfs/xfs_inode.h b/fs/xfs/xfs_inode.h
> index d1109eb13ba2e..b93cf9076df8a 100644
> --- a/fs/xfs/xfs_inode.h
> +++ b/fs/xfs/xfs_inode.h
> @@ -427,7 +427,7 @@ int xfs_log_force_inode(struct xfs_inode *ip);
> void xfs_iunpin_wait(xfs_inode_t *);
> #define xfs_ipincount(ip) ((unsigned int) atomic_read(&ip->i_pincount))
>
> -int xfs_iflush_cluster(struct xfs_inode *, struct xfs_buf *);
> +int xfs_iflush_cluster(struct xfs_buf *);
> void xfs_lock_two_inodes(struct xfs_inode *ip0, uint ip0_mode,
> struct xfs_inode *ip1, uint ip1_mode);
>
> diff --git a/fs/xfs/xfs_inode_item.c b/fs/xfs/xfs_inode_item.c
> index e679fac944725..a3a8ae5e39e12 100644
> --- a/fs/xfs/xfs_inode_item.c
> +++ b/fs/xfs/xfs_inode_item.c
> @@ -513,7 +513,7 @@ xfs_inode_item_push(
> * reference for IO until we queue the buffer for delwri submission.
> */
> xfs_buf_hold(bp);
> - error = xfs_iflush_cluster(ip, bp);
> + error = xfs_iflush_cluster(bp);
> if (!error) {
> if (!xfs_buf_delwri_queue(bp, buffer_list))
> rval = XFS_ITEM_FLUSHING;
> --
> 2.26.2.761.g0e0b3e54be
>
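For anyone skimming the patch, the heart of the new iteration is the
_safe list walk over the buffer's attached log items: items can remove
themselves from the list mid-walk (on flush or abort), so the next
pointer has to be sampled before the body runs, and unflushable inodes
are simply skipped. Here is a minimal userspace sketch of that pattern;
all names (fake_inode, flush_cluster, the flag fields) are hypothetical
stand-ins for illustration, not the kernel API, and the locking is
elided entirely:

```c
/*
 * Simplified model of the xfs_iflush_cluster() walk: iterate the
 * buffer's item list with a "safe" loop (the current node may unlink
 * itself), skip inodes that are in reclaim, flush locked, or pinned,
 * and flush the rest, which drop off the list as they are flushed.
 */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct list_head { struct list_head *next, *prev; };

static void list_init(struct list_head *h) { h->next = h->prev = h; }

static void list_add_tail(struct list_head *n, struct list_head *h)
{
	n->prev = h->prev; n->next = h;
	h->prev->next = n; h->prev = n;
}

static void list_del(struct list_head *n)
{
	n->prev->next = n->next; n->next->prev = n->prev;
	n->next = n->prev = n;
}

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct fake_inode {
	struct list_head li_bio_list;	/* linkage on the buffer list */
	bool in_reclaim;		/* models XFS_IRECLAIM */
	bool flush_locked;		/* models XFS_IFLOCK */
	int pincount;			/* models i_pincount */
	bool flushed;
};

/* Walk the buffer list, flush what we can; return the flush count. */
static int flush_cluster(struct list_head *b_li_list)
{
	struct list_head *pos, *n;
	int clcount = 0;

	/* safe variant: sample n before the body can unlink pos */
	for (pos = b_li_list->next, n = pos->next; pos != b_li_list;
	     pos = n, n = pos->next) {
		struct fake_inode *ip =
			container_of(pos, struct fake_inode, li_bio_list);

		/* quick checks mirroring the patch: skip busy inodes */
		if (ip->in_reclaim || ip->flush_locked || ip->pincount)
			continue;

		/* "flushing" detaches the inode from the buffer list */
		ip->flushed = true;
		list_del(&ip->li_bio_list);
		clcount++;
	}
	return clcount;
}
```

The point of the sketch is only the shape of the loop: because the
flushed item unlinks itself, a plain list_for_each_entry() would chase
a stale next pointer, which is why the patch uses
list_for_each_entry_safe().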