linux-xfs.vger.kernel.org archive mirror
From: Brian Foster <bfoster@redhat.com>
To: Dave Chinner <david@fromorbit.com>
Cc: Carlos Maiolino <cmaiolino@redhat.com>, linux-xfs@vger.kernel.org
Subject: Re: [PATCH] [RFC] Release buffer locks in case of IO error
Date: Fri, 30 Sep 2016 10:54:13 -0400	[thread overview]
Message-ID: <20160930145413.GB27528@bfoster.bfoster> (raw)
In-Reply-To: <20160930000147.GH27872@dastard>

On Fri, Sep 30, 2016 at 10:01:47AM +1000, Dave Chinner wrote:
> On Thu, Sep 29, 2016 at 03:03:19PM +0200, Carlos Maiolino wrote:
> > I have been working on a bug regarding the xfs fail_at_unmount configuration,
> > where, even though the option is enabled, an unmount attempt will still
> > hang if the AIL buf items are locked as a result of a previous failed attempt
> > to flush those items.
> > 
> > Currently, if there is a failure while trying to flush inode buffers to disk,
> > these items are kept in the AIL in FLUSHING state with their locks held, making
> > them impossible to retry. Whether the items are retried and 'failed' during
> > unmount, or the pool of a thin provisioned device is extended to accommodate
> > these not-yet-flushed items, xfs is unable to retry them because they are
> > already locked.
> 
> [....]
> 

So this was originally written simply as a hack/experiment to test a
theory about what could be going wrong here, based on Carlos'
investigation so far into the issue. It wasn't really intended to be
posted as a proposed fix, so I'm going to skip over the details...

...
> 
> Ok, I'm pretty sure that this just addresses a symptom of the
> underlying problem, not the root cause. e.g. dquot flushing
> has exactly the same problem.
> 

Agree.

> The underlying problem is that when the buffer was failed, the
> callbacks attached to the buffer were not run. Hence the inodes
> locked and attached to the buffer were not aborted and unlocked
> when the buffer IO was failed. That's the underlying problem that
> needs fixing - this cannot be solved sanely by trying to guess why
> an inode is flush locked when walking the AIL....
> 

Are you making the assumption that the filesystem is already shutdown in
this scenario? I assume so, otherwise I'm not sure simply running the
callbacks (that remove items from the AIL) is really appropriate.

My _unconfirmed suspicion_ is that the core problem is that any log
items that are flush locked upon AIL push remain flush locked in the
event of I/O error (independent of fs shutdown, which is not guaranteed
after a metadata I/O error). E.g., consider the case of a transient
error or error configuration that expects more than one retry cycle
through xfsaild. IOW, the current AIL error retry mechanism doesn't work
for flush locked items.

(FWIW, another experiment I was thinking about was an optional
error-specific log item callback that would be specified and invoked in
the event of any metadata I/O error to release things like flush locks
and prepare for another retry, but I think that is complicated by the
fact that the in-core struct has already been flushed to the buffer.)

But stepping back from all that, this is just a theory based on Carlos'
investigation so far and could easily be wrong. I haven't debugged the
issue and so I'm not totally confident I actually understand the root
cause. As such, I'm not sure it's worth getting further into the weeds
until we have a root cause analysis.

Carlos,

Unless you completely disagree and are confident you have a handle on
the problem (in which case you can just ignore me and send a new patch
:), I'd suggest that what we probably need here is a detailed write-up of
the root cause. E.g., a step-by-step progression of what happens to the
log item in relation to the I/O errors and shutdown (if one actually
occurs), preferably backed by tracepoint data. Just my .02.

Brian

> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com

Thread overview: 11+ messages
2016-09-29 13:03 [PATCH] [RFC] Release buffer locks in case of IO error Carlos Maiolino
2016-09-30  0:01 ` Dave Chinner
2016-09-30  9:14   ` Carlos Maiolino
2016-09-30  9:37   ` Carlos Maiolino
2016-09-30 14:54   ` Brian Foster [this message]
2016-10-01  0:21     ` Dave Chinner
2016-10-01 13:27       ` Brian Foster
2016-10-03 14:03         ` Carlos Maiolino
2016-10-03 22:08           ` Dave Chinner
  -- strict thread matches above, loose matches on Subject: below --
2016-09-29 13:00 Carlos Maiolino
2016-09-29 13:04 ` Carlos Maiolino
