linux-xfs.vger.kernel.org archive mirror
From: Brian Foster <bfoster@redhat.com>
To: linux-xfs@vger.kernel.org
Subject: [PATCH 03/12] xfs: always attach iflush_done and simplify error handling
Date: Fri, 17 Apr 2020 11:08:50 -0400	[thread overview]
Message-ID: <20200417150859.14734-4-bfoster@redhat.com> (raw)
In-Reply-To: <20200417150859.14734-1-bfoster@redhat.com>

The inode flush code has several layers of error handling between
the inode and cluster flushing code. If the inode flush fails before
acquiring the backing buffer, the inode flush is aborted. If the
cluster flush fails, the current inode flush is aborted and the
cluster buffer is failed so completion processing handles the
initial inode and any others that might have been attached before
the error.
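
For reference, the pre-patch flow looks roughly like the following (a
simplified sketch pieced together from the hunks removed below; locking,
the cluster lookup and the log force on a pinned buffer are elided):

	/* xfs_iflush(), before this patch */
	error = xfs_iflush_int(ip, bp);
	if (error)
		goto corrupt_out;	/* relse buffer, shut down, abort inode */

	/*
	 * On failure, cluster_corrupt_out has already failed the buffer,
	 * shut down and aborted the offending inode.
	 */
	error = xfs_iflush_cluster(ip, bp);
	if (error)
		return error;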

Since xfs_iflush() is the only caller of xfs_iflush_cluster(), the
error handling between the two can be condensed in the top-level
function. If we update xfs_iflush_int() to attach the item
completion handler to the buffer first, any errors that occur after
the first call to xfs_iflush_int() can be handled with a buffer
I/O failure.
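
A rough sketch of where the callback attach moves in xfs_iflush_int()
(simplified; the inode verification and copy-out code between the two
points is unchanged and elided here):

	STATIC int
	xfs_iflush_int(
		struct xfs_inode	*ip,
		struct xfs_buf		*bp)
	{
		struct xfs_inode_log_item *iip = ip->i_itemp;

		/*
		 * Attach xfs_iflush_done() before the verification checks
		 * that can fail, so a later buffer I/O failure always runs
		 * completion processing on this inode.
		 */
		xfs_buf_attach_iodone(bp, xfs_iflush_done, &iip->ili_item);

		/* ... inode verification and copy-out elided ... */
		return 0;
	}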

Lift the error handling from xfs_iflush_cluster() into xfs_iflush()
and consolidate with the existing error handling. This also removes
the need to explicitly release the buffer on error because failing
the buffer with XBF_ASYNC drops the current reference.
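
With that in place, the consolidated error handling in xfs_iflush()
boils down to roughly the following (a sketch of the structure the
diff below introduces):

	error = xfs_iflush_int(ip, bp);
	if (!error)
		error = xfs_iflush_cluster(ip, bp);
	if (error) {
		/*
		 * iflush_done is already attached, so failing the buffer
		 * runs completion processing on every attached inode and
		 * drops our buffer reference.
		 */
		xfs_buf_iofail(bp, XBF_ASYNC);
		xfs_force_shutdown(mp, SHUTDOWN_CORRUPT_INCORE);
		return error;
	}

	*bpp = bp;
	return 0;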

Signed-off-by: Brian Foster <bfoster@redhat.com>
---
 fs/xfs/xfs_inode.c | 94 +++++++++++++++-------------------------------
 1 file changed, 30 insertions(+), 64 deletions(-)

diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c
index b539ee221ce5..4c9971ec6fa6 100644
--- a/fs/xfs/xfs_inode.c
+++ b/fs/xfs/xfs_inode.c
@@ -3496,6 +3496,7 @@ xfs_iflush_cluster(
 	struct xfs_inode	**cilist;
 	struct xfs_inode	*cip;
 	struct xfs_ino_geometry	*igeo = M_IGEO(mp);
+	int			error = 0;
 	int			nr_found;
 	int			clcount = 0;
 	int			i;
@@ -3588,11 +3589,10 @@ xfs_iflush_cluster(
 		 * re-check that it's dirty before flushing.
 		 */
 		if (!xfs_inode_clean(cip)) {
-			int	error;
 			error = xfs_iflush_int(cip, bp);
 			if (error) {
 				xfs_iunlock(cip, XFS_ILOCK_SHARED);
-				goto cluster_corrupt_out;
+				goto out_free;
 			}
 			clcount++;
 		} else {
@@ -3611,32 +3611,7 @@ xfs_iflush_cluster(
 	kmem_free(cilist);
 out_put:
 	xfs_perag_put(pag);
-	return 0;
-
-
-cluster_corrupt_out:
-	/*
-	 * Corruption detected in the clustering loop.  Invalidate the
-	 * inode buffer and shut down the filesystem.
-	 */
-	rcu_read_unlock();
-
-	/*
-	 * We'll always have an inode attached to the buffer for completion
-	 * process by the time we are called from xfs_iflush(). Hence we have
-	 * always need to do IO completion processing to abort the inodes
-	 * attached to the buffer.  handle them just like the shutdown case in
-	 * xfs_buf_submit().
-	 */
-	ASSERT(bp->b_iodone);
-	xfs_buf_iofail(bp, XBF_ASYNC);
-	xfs_force_shutdown(mp, SHUTDOWN_CORRUPT_INCORE);
-
-	/* abort the corrupt inode, as it was not attached to the buffer */
-	xfs_iflush_abort(cip, false);
-	kmem_free(cilist);
-	xfs_perag_put(pag);
-	return -EFSCORRUPTED;
+	return error;
 }
 
 /*
@@ -3692,17 +3667,16 @@ xfs_iflush(
 	 */
 	if (XFS_FORCED_SHUTDOWN(mp)) {
 		error = -EIO;
-		goto abort_out;
+		goto abort;
 	}
 
 	/*
 	 * Get the buffer containing the on-disk inode. We are doing a try-lock
-	 * operation here, so we may get  an EAGAIN error. In that case, we
-	 * simply want to return with the inode still dirty.
+	 * operation here, so we may get an EAGAIN error. In that case, return
+	 * leaving the inode dirty.
 	 *
 	 * If we get any other error, we effectively have a corruption situation
-	 * and we cannot flush the inode, so we treat it the same as failing
-	 * xfs_iflush_int().
+	 * and we cannot flush the inode. Abort the flush and shut down.
 	 */
 	error = xfs_imap_to_bp(mp, NULL, &ip->i_imap, &dip, &bp, XBF_TRYLOCK,
 			       0);
@@ -3711,14 +3685,7 @@ xfs_iflush(
 		return error;
 	}
 	if (error)
-		goto corrupt_out;
-
-	/*
-	 * First flush out the inode that xfs_iflush was called with.
-	 */
-	error = xfs_iflush_int(ip, bp);
-	if (error)
-		goto corrupt_out;
+		goto abort;
 
 	/*
 	 * If the buffer is pinned then push on the log now so we won't
@@ -3728,28 +3695,28 @@ xfs_iflush(
 		xfs_log_force(mp, 0);
 
 	/*
-	 * inode clustering: try to gather other inodes into this write
+	 * Flush the provided inode then attempt to gather others from the
+	 * cluster into the write.
 	 *
-	 * Note: Any error during clustering will result in the filesystem
-	 * being shut down and completion callbacks run on the cluster buffer.
-	 * As we have already flushed and attached this inode to the buffer,
-	 * it has already been aborted and released by xfs_iflush_cluster() and
-	 * so we have no further error handling to do here.
+	 * Note: Once we attempt to flush an inode, we must run buffer
+	 * completion callbacks on any failure. If this fails, simulate an I/O
+	 * failure on the buffer and shut down.
 	 */
-	error = xfs_iflush_cluster(ip, bp);
-	if (error)
-		return error;
+	error = xfs_iflush_int(ip, bp);
+	if (!error)
+		error = xfs_iflush_cluster(ip, bp);
+	if (error) {
+		xfs_buf_iofail(bp, XBF_ASYNC);
+		goto shutdown;
+	}
 
 	*bpp = bp;
 	return 0;
 
-corrupt_out:
-	if (bp)
-		xfs_buf_relse(bp);
-	xfs_force_shutdown(mp, SHUTDOWN_CORRUPT_INCORE);
-abort_out:
-	/* abort the corrupt inode, as it was not attached to the buffer */
+abort:
 	xfs_iflush_abort(ip, false);
+shutdown:
+	xfs_force_shutdown(mp, SHUTDOWN_CORRUPT_INCORE);
 	return error;
 }
 
@@ -3798,6 +3765,13 @@ xfs_iflush_int(
 	       ip->i_d.di_nextents > XFS_IFORK_MAXEXT(ip, XFS_DATA_FORK));
 	ASSERT(iip != NULL && iip->ili_fields != 0);
 
+	/*
+	 * Attach the inode item callback to the buffer. Whether the flush
+	 * succeeds or not, buffer I/O completion processing is now required to
+	 * remove the inode from the AIL and release the flush lock.
+	 */
+	xfs_buf_attach_iodone(bp, xfs_iflush_done, &iip->ili_item);
+
 	/* set *dip = inode's place in the buffer */
 	dip = xfs_buf_offset(bp, ip->i_imap.im_boffset);
 
@@ -3913,14 +3887,6 @@ xfs_iflush_int(
 	xfs_trans_ail_copy_lsn(mp->m_ail, &iip->ili_flush_lsn,
 				&iip->ili_item.li_lsn);
 
-	/*
-	 * Attach the function xfs_iflush_done to the inode's
-	 * buffer.  This will remove the inode from the AIL
-	 * and unlock the inode's flush lock when the inode is
-	 * completely written to disk.
-	 */
-	xfs_buf_attach_iodone(bp, xfs_iflush_done, &iip->ili_item);
-
 	/* generate the checksum. */
 	xfs_dinode_calc_crc(mp, dip);
 
-- 
2.21.1


Thread overview: 48+ messages
2020-04-17 15:08 [PATCH 00/12] xfs: flush related error handling cleanups Brian Foster
2020-04-17 15:08 ` [PATCH 01/12] xfs: refactor failed buffer resubmission into xfsaild Brian Foster
2020-04-17 22:37   ` Allison Collins
2020-04-20  2:45   ` Dave Chinner
2020-04-20 13:58     ` Brian Foster
2020-04-20 22:19       ` Dave Chinner
2020-04-17 15:08 ` [PATCH 02/12] xfs: factor out buffer I/O failure simulation code Brian Foster
2020-04-17 22:37   ` Allison Collins
2020-04-20  2:48   ` Dave Chinner
2020-04-20 13:58     ` Brian Foster
2020-04-17 15:08 ` Brian Foster [this message]
2020-04-18  0:07   ` [PATCH 03/12] xfs: always attach iflush_done and simplify error handling Allison Collins
2020-04-20 13:59     ` Brian Foster
2020-04-20  3:08   ` Dave Chinner
2020-04-20 14:00     ` Brian Foster
2020-04-17 15:08 ` [PATCH 04/12] xfs: remove unnecessary shutdown check from xfs_iflush() Brian Foster
2020-04-18  0:27   ` Allison Collins
2020-04-20  3:10   ` Dave Chinner
2020-04-17 15:08 ` [PATCH 05/12] xfs: ratelimit unmount time per-buffer I/O error warning Brian Foster
2020-04-20  3:19   ` Dave Chinner
2020-04-20 14:02     ` Brian Foster
2020-04-20 22:23       ` Dave Chinner
2020-04-21 12:13         ` Brian Foster
2020-04-20 18:50   ` Allison Collins
2020-04-17 15:08 ` [PATCH 06/12] xfs: remove duplicate verification from xfs_qm_dqflush() Brian Foster
2020-04-20  3:53   ` Dave Chinner
2020-04-20 14:02     ` Brian Foster
2020-04-20 22:31       ` Dave Chinner
2020-04-17 15:08 ` [PATCH 07/12] xfs: abort consistently on dquot flush failure Brian Foster
2020-04-20  3:54   ` Dave Chinner
2020-04-20 18:50   ` Allison Collins
2020-04-17 15:08 ` [PATCH 08/12] xfs: remove unnecessary quotaoff intent item push handler Brian Foster
2020-04-20  3:58   ` Dave Chinner
2020-04-20 14:02     ` Brian Foster
2020-04-17 15:08 ` [PATCH 09/12] xfs: elide the AIL lock on log item failure tracking Brian Foster
2020-04-17 15:08 ` [PATCH 10/12] xfs: clean up AIL log item removal functions Brian Foster
2020-04-20  4:32   ` Dave Chinner
2020-04-20 14:03     ` Brian Foster
2020-04-17 15:08 ` [PATCH 11/12] xfs: remove unused iflush stale parameter Brian Foster
2020-04-20  4:34   ` Dave Chinner
2020-04-20 19:19   ` Allison Collins
2020-04-17 15:08 ` [PATCH 12/12] xfs: random buffer write failure errortag Brian Foster
2020-04-20  4:37   ` Dave Chinner
2020-04-20 14:04     ` Brian Foster
2020-04-20 22:42   ` Allison Collins
2020-04-19 22:53 ` [PATCH 00/12] xfs: flush related error handling cleanups Dave Chinner
2020-04-20 14:06   ` Brian Foster
2020-04-20 22:53     ` Dave Chinner
