From: Brian Foster <bfoster@redhat.com>
To: Dave Chinner <david@fromorbit.com>
Cc: linux-xfs@vger.kernel.org
Subject: Re: [PATCH 28/30] xfs: rework xfs_iflush_cluster() dirty inode iteration
Date: Thu, 11 Jun 2020 09:56:18 -0400
Message-ID: <20200611135618.GA56572@bfoster>
In-Reply-To: <20200610234008.GM2040@dread.disaster.area>

On Thu, Jun 11, 2020 at 09:40:08AM +1000, Dave Chinner wrote:
> On Wed, Jun 10, 2020 at 09:06:28AM -0400, Brian Foster wrote:
> > On Wed, Jun 10, 2020 at 08:01:39AM +1000, Dave Chinner wrote:
> > > On Tue, Jun 09, 2020 at 09:11:55AM -0400, Brian Foster wrote:
> > > > On Thu, Jun 04, 2020 at 05:46:04PM +1000, Dave Chinner wrote:
> > > > > - * check is not sufficient.
> > > > > + * If we are shut down, unpin and abort the inode now as there
> > > > > + * is no point in flushing it to the buffer just to get an IO
> > > > > + * completion to abort the buffer and remove it from the AIL.
> > > > > */
> > > > > - if (!cip->i_ino) {
> > > > > - xfs_ifunlock(cip);
> > > > > - xfs_iunlock(cip, XFS_ILOCK_SHARED);
> > > > > + if (XFS_FORCED_SHUTDOWN(mp)) {
> > > > > + xfs_iunpin_wait(ip);
> > > >
> > > > Note that we have an unlocked check above that skips pinned inodes.
> > >
> > > Right, but we could be racing with a transaction commit that pinned
> > > the inode and a shutdown. As the comment says: it's a quick and
> > > dirty check to avoid trying to get locks when we know that it is
> > > unlikely we can flush the inode without blocking. We still have to
> > > recheck that state once we have the ILOCK....
> > >
> >
> > Right, but that means we can just as easily skip the shutdown processing
> > (which waits for unpin) if a particular inode is pinned. So which is
> > supposed to happen in the shutdown case?
> >
> > ISTM that either could happen. As a result this kind of looks like
> > random logic to me.
>
> Yes, shutdown is racy, so it could be either. However, I'm not
> changing the shutdown logic or handling here. If the shutdown race
> could happen before this patchset (and it can), it can still happen
> after the patchset, and this patchset does not change the way we
> handle the shutdown race at all.
>
> IOWs, while this shutdown logic may appear "random", that's not a
> result of this patchset - it a result of design decisions made in
> the last major shutdown rework/cleanup that required checks to be
> added to places that could hang waiting for an event that would
> never occur because shutdown state prevented it from occurring.
>

It's not so much the shutdown check that I find random as how it
intends to handle pinned inodes.
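For context, the pattern being debated here is an unlocked quick check
that can race, followed by an authoritative recheck once the lock is
held. A rough, self-contained sketch of that pattern (the demo_* names
are hypothetical stand-ins, not the real struct xfs_inode or XFS APIs):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for inode state; not the real struct xfs_inode. */
struct demo_inode {
	bool pinned;
};

/*
 * Quick unlocked check: racy by design, so it can only be used to skip
 * work we are unlikely to complete, never to decide correctness.
 */
static bool demo_quick_skip(const struct demo_inode *ip)
{
	return ip->pinned;
}

/*
 * Authoritative recheck once the lock is held; the pin state may have
 * changed (e.g. a racing transaction commit) since the quick check.
 */
static bool demo_can_flush_locked(const struct demo_inode *ip)
{
	return !ip->pinned;
}
```

The point of contention is exactly the window between the two checks: an
inode that passed the quick check can still be pinned by the time the
locked recheck runs.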
> There's already enough complexity in this patchset that adding
> shutdown logic changes is just too much to ask for. If we want to
> change how various shutdown logics work, lets do it as a separate
> set of changes so all the subtle bugs that result from the changes
> bisect to the isolated shutdown logic changes...
>

The fact that shutdown is racy is just background context. My point is
that this patch appears to introduce special shutdown handling for a
condition where it 1.) didn't previously exist and 2.) doesn't appear to
be necessary.

The current push/flush code only incorporates a shutdown check
indirectly via mapping the buffer, which simulates an I/O failure and
causes us to abort the flush (and shutdown if the I/O failure happened
for some other reason). If the shutdown happened sometime after we
acquired the buffer, then there's no real impact on this code path. We
flush the inode(s) and return success. The shutdown will be handled
appropriately when xfsaild attempts to submit the buffer.
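As a rough illustration of that indirect handling (the demo_* names are
hypothetical stand-ins, not the real xfs_mount/xfs_buf code): mapping
the buffer fails once the fs is shut down, which the flush path treats
like an I/O error and aborts on, with no explicit shutdown check of its
own.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins; not the real xfs_mount or xfs_buf. */
struct demo_fs  { bool shut_down; };
struct demo_buf { bool mapped; };

/*
 * Mapping the buffer fails once the fs is shut down. The flush path
 * sees that failure as a simulated I/O error and aborts, so shutdown
 * is handled indirectly rather than via a dedicated check.
 */
static bool demo_map_buffer(const struct demo_fs *fs, struct demo_buf *bp)
{
	if (fs->shut_down)
		return false;
	bp->mapped = true;
	return true;
}

/* Returns true if the inode was flushed, false if the flush aborted. */
static bool demo_flush(const struct demo_fs *fs, struct demo_buf *bp)
{
	return demo_map_buffer(fs, bp);
}
```

If shutdown happens after the buffer was mapped, the flush still
succeeds here and the shutdown is dealt with later, at submission time.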

The new code no longer maps the buffer because that is done much
earlier, but for some reason incorporates a new check to abort the flush
if the fs is already shut down. The problem I have with this is that
these checks tend to be brittle, untested and a maintenance burden. As
such, I don't think we should ever add new shutdown checks for cases
that aren't required for functional correctness. That way we hopefully
move to a state where we have the minimum number of shutdown checks with
the broadest coverage to ensure everything unwinds correctly, but don't
have to constantly battle with insufficiently tested logic in obscure
contexts that silently breaks as surrounding code changes over time and
leads to random fstests hangs and shutdown logic cleanouts every couple
of years.

So my question for any newly added shutdown check is: what problem does
this check solve? If there isn't an explicit functional problem and it's
intended more as convenience/efficiency logic (which is what the comment
implies), then I don't think it's justified. If there is one, then
perhaps it is justified, but should be more clearly documented (and I do
still think the pin check logic should be cleaned up, but that's a very
minor tweak).

Brian
> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@fromorbit.com
>