From: Carlos Maiolino <cmaiolino@redhat.com>
To: xfs@oss.sgi.com
Subject: Re: XFS hung task in xfs_ail_push_all_sync() when unmounting FS after disk failure/recovery
Date: Wed, 23 Mar 2016 16:32:21 +0100
Message-ID: <20160323153221.GA19456@redhat.com>
In-Reply-To: <20160323123010.GA43073@bfoster.bfoster>

I'm still trying to get a reliable reproducer, at least for exactly what I
saw a few days ago.

Shyam, could you try to reproduce it with a recent/upstream kernel? That would
be great to make sure we are seeing the same issue.

AFAICT, it happens in the following situation (a rough simulation sketch
follows the list):

1 - Something is written to the filesystem
2 - log checkpoint is done for the previous write
3 - Disk failure
4 - XFS tries to writeback metadata logged in [2]
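
Roughly, steps [1]-[4] can be simulated with device-mapper like this (just a
sketch with hypothetical device names, not necessarily my exact setup):

  # build a linear dm device on top of the real disk so its table can be
  # swapped out later to simulate the disk dying
  SECS=$(blockdev --getsz /dev/sdb)
  dmsetup create failme --table "0 $SECS linear /dev/sdb 0"
  mkfs.xfs -f /dev/mapper/failme
  mount /dev/mapper/failme /mnt

  cp -a /usr/share/doc /mnt/ &   # [1] write something to the filesystem
  sleep 10                       # [2] give the log time to checkpoint

  # [3] simulate disk failure: replace the table with an all-error target
  dmsetup suspend failme
  dmsetup load failme --table "0 $SECS error"
  dmsetup resume failme

  wait                           # [4] XFS now fails metadata writeback
  umount /mnt                    # the hang shows up here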

When [4] happens, I can't always trigger the xfs_log_force messages; most of
the time I just get an infinite loop of this message:

[12694.318109] XFS (dm-0): Failing async write on buffer block
0xffffffffffffffff. Retrying async write.

Sometimes I can trigger the xfs_log_force() loop.

I should have something new around tomorrow.


I'm building a kernel from the xfs/for-next tree to ensure I'm using the
latest code, and then I'll try to reproduce it from there too.



On Wed, Mar 23, 2016 at 08:30:12AM -0400, Brian Foster wrote:
> On Wed, Mar 23, 2016 at 03:13:57PM +0530, Shyam Kaushik wrote:
> > Hi Brian,
> > 
> > Here are two inodes on which xfsaild is looping over and over during
> > unmount. The capture starts right from the cp I ran, copying some files
> > to XFS while the drive was pulled out; later the drive was pushed back
> > in and unmount was attempted. Does this give you a better picture of
> > the issue? Please let me know if you'd prefer other steps in the
> > reproducer.
> > 
> ...
> >     kworker/0:1H-257   [000] ...1  7477.849984: xfs_inode_unpin: dev
> > 253:11 ino 0x103c84a count 1 pincount 1 caller xfs_trans_committed_bulk
> > [xfs]
> 
> So it looks like the transaction is committed to the log. We don't
> necessarily know whether the I/O completed or this was aborted, though I
> suppose we can infer the former since the inode ends up on the AIL.
> There's not much else that I can go on here, however. It looks like this
> trace output is redacted and/or some events were lost. For example, I
> don't see any xfs_ail_insert/move events at all, etc.
> 
> I'd suggest using trace-cmd to collect and send (or post somewhere) the
> full, raw trace dump. Something like 'trace-cmd record -e "xfs:*" sleep
> 999' should dump everything to a file while you try to reproduce.
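
(For completeness, the full record/report cycle would look something like
this; the file name and sleep duration are arbitrary:)

  trace-cmd record -e "xfs:*" sleep 999 &
  # ... reproduce the failure and the unmount hang while recording ...
  wait
  trace-cmd report > xfs-trace.txt   # decode trace.dat into a plain-text dump

That output file is what would be most useful to post.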
> 
> Alternatively, it sounds like Carlos is working towards a reproducer on
> a recent kernel and might be closer to tracking this down. One random
> thought/guess from skimming through the code: I wonder if there's some
> kind of race going on between a failed metadata write retry and the fs
> shutdown. It looks like we retry once in xfs_buf_iodone_callbacks(). If
> the fs is shut down, we invoke the callbacks, which looks like it should
> ultimately release the flush lock (in xfs_iflush_done()), but I'm not
> sure that happens if the shutdown occurs after a failed retry.
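
(A blocked-task dump taken while the unmount is hung would also help confirm
where things are stuck; this is standard sysrq, nothing XFS-specific:)

  # dump stack traces of all uninterruptible (D-state) tasks to the kernel log
  echo w > /proc/sysrq-trigger
  # then look for the umount and xfsaild stacks
  dmesg | tail -n 200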
> 
> ...
> > It will be hard for me to shift to a newer kernel, but if you say we are
> > left with no other option to root-cause this issue, I can attempt it.
> > Please let me know. Thanks.
> > 
> 
> I ask mainly for informational and debugging purposes. If the problem
> doesn't occur on a recent kernel, we might have a fix that could be
> identified via a bisect. If it does, then it's still preferable to debug
> on something more recent. As mentioned above, it sounds like Carlos has
> been working on tracking this down on recent kernels already.
> 
> Brian
> 
> > --Shyam
> > -----Original Message-----
> > From: Brian Foster [mailto:bfoster@redhat.com]
> > Sent: 22 March 2016 19:34
> > To: Shyam Kaushik
> > Cc: Alex Lyakas; xfs@oss.sgi.com
> > Subject: Re: XFS hung task in xfs_ail_push_all_sync() when unmounting FS
> > after disk failure/recovery
> > 
> > On Tue, Mar 22, 2016 at 06:31:48PM +0530, Shyam Kaushik wrote:
> > > Hi Brian,
> > >
> > > Thanks for your quick reply. I repeated the test, and trace_pipe is
> > > constantly filled with this:
> > >
> > >    xfsaild/dm-10-3335  [003] ...2 24890.546491: xfs_ilock_nowait: dev
> > > 253:10 ino 0xc0 flags ILOCK_SHARED caller xfs_inode_item_push [xfs]
> > >    xfsaild/dm-10-3335  [003] ...2 24890.546492: xfs_iunlock: dev 253:10
> > > ino 0xc0 flags ILOCK_SHARED caller xfs_inode_item_push [xfs]
> > >    xfsaild/dm-10-3335  [003] ...2 24890.546493: xfs_ail_flushing: dev
> > > 253:10 lip 0xffff8800a9f437b8 lsn 1/38624 type XFS_LI_INODE flags IN_AIL
> > >    xfsaild/dm-10-3335  [003] ...2 24890.596491: xfs_ilock_nowait: dev
> > > 253:10 ino 0xc0 flags ILOCK_SHARED caller xfs_inode_item_push [xfs]
> > >    xfsaild/dm-10-3335  [003] ...2 24890.596492: xfs_iunlock: dev 253:10
> > > ino 0xc0 flags ILOCK_SHARED caller xfs_inode_item_push [xfs]
> > >    xfsaild/dm-10-3335  [003] ...2 24890.596494: xfs_ail_flushing: dev
> > > 253:10 lip 0xffff8800a9f437b8 lsn 1/38624 type XFS_LI_INODE flags IN_AIL
> > >    xfsaild/dm-10-3335  [003] ...2 24890.646497: xfs_ilock_nowait: dev
> > > 253:10 ino 0xc0 flags ILOCK_SHARED caller xfs_inode_item_push [xfs]
> > >    xfsaild/dm-10-3335  [003] ...2 24890.646498: xfs_iunlock: dev 253:10
> > > ino 0xc0 flags ILOCK_SHARED caller xfs_inode_item_push [xfs]
> > >    xfsaild/dm-10-3335  [003] ...2 24890.646500: xfs_ail_flushing: dev
> > > 253:10 lip 0xffff8800a9f437b8 lsn 1/38624 type XFS_LI_INODE flags IN_AIL
> > >    xfsaild/dm-10-3335  [003] ...2 24890.696467: xfs_ilock_nowait: dev
> > > 253:10 ino 0xc0 flags ILOCK_SHARED caller xfs_inode_item_push [xfs]
> > >    xfsaild/dm-10-3335  [003] ...2 24890.696468: xfs_iunlock: dev 253:10
> > > ino 0xc0 flags ILOCK_SHARED caller xfs_inode_item_push [xfs]
> > >    xfsaild/dm-10-3335  [003] ...2 24890.696468: xfs_ail_flushing: dev
> > > 253:10 lip 0xffff8800a9f437b8 lsn 1/38624 type XFS_LI_INODE flags IN_AIL
> > >    xfsaild/dm-10-3335  [003] ...2 24890.746548: xfs_ilock_nowait: dev
> > > 253:10 ino 0xc0 flags ILOCK_SHARED caller xfs_inode_item_push [xfs]
> > >    xfsaild/dm-10-3335  [003] ...2 24890.746550: xfs_iunlock: dev 253:10
> > > ino 0xc0 flags ILOCK_SHARED caller xfs_inode_item_push [xfs]
> > >    xfsaild/dm-10-3335  [003] ...2 24890.746550: xfs_ail_flushing: dev
> > > 253:10 lip 0xffff8800a9f437b8 lsn 1/38624 type XFS_LI_INODE flags IN_AIL
> > >    xfsaild/dm-10-3335  [003] ...2 24890.796479: xfs_ilock_nowait: dev
> > > 253:10 ino 0xc0 flags ILOCK_SHARED caller xfs_inode_item_push [xfs]
> > >    xfsaild/dm-10-3335  [003] ...2 24890.796480: xfs_iunlock: dev 253:10
> > > ino 0xc0 flags ILOCK_SHARED caller xfs_inode_item_push [xfs]
> > >    xfsaild/dm-10-3335  [003] ...2 24890.796480: xfs_ail_flushing: dev
> > > 253:10 lip 0xffff8800a9f437b8 lsn 1/38624 type XFS_LI_INODE flags IN_AIL
> > >    xfsaild/dm-10-3335  [003] ...2 24890.846467: xfs_ilock_nowait: dev
> > > 253:10 ino 0xc0 flags ILOCK_SHARED caller xfs_inode_item_push [xfs]
> > >    xfsaild/dm-10-3335  [003] ...2 24890.846468: xfs_iunlock: dev 253:10
> > > ino 0xc0 flags ILOCK_SHARED caller xfs_inode_item_push [xfs]
> > >
> > 
> > So xfsaild is spinning on this inode. It was presumably modified, logged
> > and flushed to the log, hence it's sitting in the AIL waiting to be
> > written back. xfsaild wants to push it to get it flushed to disk and off
> > the AIL, but it sees that it's already in the flushing state, as the
> > flush lock is held.
> > 
> > It's not clear to me why the inode is not removed from the AIL, or
> > whether that I/O was actually submitted or aborted with an error. The
> > shutdown involved here most likely affects this one way or the other.
> > IIUC, the I/O completion should eventually release the flush lock and
> > remove the inode from the AIL. A complete trace log of the entire
> > reproducer might shed more light as to what's going on.
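
(Once a full trace exists, filtering it for this one log item should show
whether a flush completion or AIL removal ever arrives for it; the ino/lip
values below are taken from the trace above:)

  # all AIL-related events for the stuck log item
  trace-cmd report | grep 'lip 0xffff8800a9f437b8'
  # every event touching the stuck inode (trailing space avoids 0xc0f... hits)
  trace-cmd report | grep 'ino 0xc0 '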
> > 
> > Also, it sounds like you have a reliable reproducer. Does this reproduce
> > on a recent kernel?
> > 
> > Brian
> > 
> > >
> > > while regular activity seems to happen on other inodes/kworker threads
> > >
> > >     kworker/u8:4-27691 [001] ...1 24895.811474: xfs_writepage: dev 253:10 ino 0x1801061 pgoff 0x29000 size 0x1aebbc offset 0 length 0 delalloc 1 unwritten 0
> > >     kworker/u8:4-27691 [001] ...1 24895.811477: xfs_invalidatepage: dev 253:10 ino 0x1801061 pgoff 0x29000 size 0x1aebbc offset 0 length 1000 delalloc 1 unwritten 0
> > >     kworker/u8:4-27691 [001] ...1 24895.811478: xfs_releasepage: dev 253:10 ino 0x1801061 pgoff 0x29000 size 0x1aebbc offset 0 length 0 delalloc 0 unwritten 0
> > >     kworker/u8:4-27691 [001] ...1 24895.811482: xfs_writepage: dev 253:10 ino 0x4017bdf pgoff 0x29000 size 0x1aebbc offset 0 length 0 delalloc 1 unwritten 0
> > >     kworker/u8:4-27691 [001] ...1 24895.811482: xfs_invalidatepage: dev 253:10 ino 0x4017bdf pgoff 0x29000 size 0x1aebbc offset 0 length 1000 delalloc 1 unwritten 0
> > >     kworker/u8:4-27691 [001] ...1 24895.811483: xfs_releasepage: dev 253:10 ino 0x4017bdf pgoff 0x29000 size 0x1aebbc offset 0 length 0 delalloc 0 unwritten 0
> > >     kworker/u8:4-27691 [001] ...1 24895.811485: xfs_writepage: dev 253:10 ino 0x68048c3 pgoff 0x29000 size 0x1aebbc offset 0 length 0 delalloc 1 unwritten 0
> > >     kworker/u8:4-27691 [001] ...1 24895.811486: xfs_invalidatepage: dev 253:10 ino 0x68048c3 pgoff 0x29000 size 0x1aebbc offset 0 length 1000 delalloc 1 unwritten 0
> > >     kworker/u8:4-27691 [001] ...1 24895.811486: xfs_releasepage: dev 253:10 ino 0x68048c3 pgoff 0x29000 size 0x1aebbc offset 0 length 0 delalloc 0 unwritten 0
> > >     kworker/u8:4-27691 [001] ...1 24895.812381: xfs_writepage: dev 253:10 ino 0x1805e37 pgoff 0x29000 size 0x68470 offset 0 length 0 delalloc 1 unwritten 0
> > >     kworker/u8:4-27691 [001] ...1 24895.812382: xfs_invalidatepage: dev 253:10 ino 0x1805e37 pgoff 0x29000 size 0x68470 offset 0 length 1000 delalloc 1 unwritten 0
> > >     kworker/u8:4-27691 [001] ...1 24895.812382: xfs_releasepage: dev 253:10 ino 0x1805e37 pgoff 0x29000 size 0x68470 offset 0 length 0 delalloc 0 unwritten 0
> > >     kworker/u8:4-27691 [001] ...1 24895.812385: xfs_writepage: dev 253:10 ino 0x4019c95 pgoff 0x29000 size 0x68470 offset 0 length 0 delalloc 1 unwritten 0
> > >     kworker/u8:4-27691 [001] ...1 24895.812385: xfs_invalidatepage: dev 253:10 ino 0x4019c95 pgoff 0x29000 size 0x68470 offset 0 length 1000 delalloc 1 unwritten 0
> > >
> > >
> > > looks like xfsaild is not able to take the lock until the hung-task
> > > timeout kicks in
> > >
> > >    xfsaild/dm-10-3335  [003] ...2 25247.649468: xfs_ilock_nowait: dev
> > > 253:10 ino 0xc0 flags ILOCK_SHARED caller xfs_inode_item_push [xfs]
> > >    xfsaild/dm-10-3335  [003] ...2 25247.649469: xfs_iunlock: dev 253:10
> > > ino 0xc0 flags ILOCK_SHARED caller xfs_inode_item_push [xfs]
> > >    xfsaild/dm-10-3335  [003] ...2 25247.649469: xfs_ail_flushing: dev
> > > 253:10 lip 0xffff8800a9f437b8 lsn 1/38624 type XFS_LI_INODE flags IN_AIL
> > >    xfsaild/dm-10-3335  [003] ...2 25247.699478: xfs_ilock_nowait: dev
> > > 253:10 ino 0xc0 flags ILOCK_SHARED caller xfs_inode_item_push [xfs]
> > >    xfsaild/dm-10-3335  [003] ...2 25247.699516: xfs_iunlock: dev 253:10
> > > ino 0xc0 flags ILOCK_SHARED caller xfs_inode_item_push [xfs]
> > >    xfsaild/dm-10-3335  [003] ...2 25247.699517: xfs_ail_flushing: dev
> > > 253:10 lip 0xffff8800a9f437b8 lsn 1/38624 type XFS_LI_INODE flags IN_AIL
> > >    xfsaild/dm-10-3335  [003] ...2 25247.749471: xfs_ilock_nowait: dev
> > > 253:10 ino 0xc0 flags ILOCK_SHARED caller xfs_inode_item_push [xfs]
> > >    xfsaild/dm-10-3335  [003] ...2 25247.749478: xfs_iunlock: dev 253:10
> > > ino 0xc0 flags ILOCK_SHARED caller xfs_inode_item_push [xfs]
> > >    xfsaild/dm-10-3335  [003] ...2 25247.749479: xfs_ail_flushing: dev
> > > 253:10 lip 0xffff8800a9f437b8 lsn 1/38624 type XFS_LI_INODE flags IN_AIL
> > >
> > > Please let me know how to debug this further. Thanks.
> > >
> > > --Shyam
> > > -----Original Message-----
> > > From: Brian Foster [mailto:bfoster@redhat.com]
> > > Sent: 22 March 2016 17:49
> > > To: Shyam Kaushik
> > > Cc: xfs@oss.sgi.com; Alex Lyakas
> > > Subject: Re: XFS hung task in xfs_ail_push_all_sync() when unmounting FS
> > > after disk failure/recovery
> > >
> > > On Tue, Mar 22, 2016 at 04:51:39PM +0530, Shyam Kaushik wrote:
> > > > Hi XFS developers,
> > > >
> > > > We are seeing the following issue with XFS on kernel 3.18.19.
> > > >
> > > > We have XFS mounted over a raw disk. The disk was pulled out manually.
> > > > There were async writes on files that errored out like this:
> > > >
> > > ...
> > > >
> > > > And XFS hit metadata & log I/O errors and decided to shut down:
> > > >
> > > > Mar 16 16:03:22 host0 kernel: [ 4637.351841] XFS (dm-29): metadata I/O
> > > > error: block 0x3a27fbd0 ("xlog_iodone") error 5 numblks 64
> > > > Mar 16 16:03:22 host0 kernel: [ 4637.352820] XFS (dm-29): SHUTDOWN!!!
> > > > old_flags=0x0 new_flags=0x2
> > > > Mar 16 16:03:22 host0 kernel: [ 4637.353187] XFS (dm-29): Log I/O Error
> > > > Detected.  Shutting down filesystem
> > > ...
> > > > Later the drive was re-inserted, and after that an unmount of XFS was
> > > > attempted:
> > > >
> > > > Mar 16 16:16:53 host0 controld: [2557] [     ] umount[202]
> > > > : umount(/sdisk/vol5b0, xfs)
> > > >
> > > > But nothing happens except for the 30-second xfs_log_force errors that
> > > > keep repeating:
> > > >
> > > ...
> > > >
> > > > This problem doesn't happen consistently, but it does happen periodically
> > > > with a drive failure/recovery followed by an XFS unmount. I couldn't find
> > > > this issue fixed in later kernels. Can you please suggest how I can debug
> > > > it further?
> > > >
> > >
> > > Similar problems have been reproduced due to racy/incorrect tracking of
> > > EFI/EFD objects, which are internal data structures associated with
> > > freeing extents.
> > >
> > > What happens if you enable tracepoints while the fs is in this hung
> > > unmount state?
> > >
> > > # trace-cmd start -e "xfs:*"
> > > # cat /sys/kernel/debug/tracing/trace_pipe
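
(One caveat: trace_pipe is consumed as it is read, so redirect it to a file
if the output needs to be kept; trace-cmd can also stop and clean up
afterwards:)

  # cat /sys/kernel/debug/tracing/trace_pipe > /tmp/xfs-trace.log &
  # trace-cmd stop    (pause tracing when done)
  # trace-cmd reset   (disable the events and clean up)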
> > >
> > > Brian
> > >
> > > > Thanks!
> > > >
> > > > --Shyam
> > > >
> > 
> 

-- 
Carlos


Thread overview: 31+ messages
2016-03-22 11:21 XFS hung task in xfs_ail_push_all_sync() when unmounting FS after disk failure/recovery Shyam Kaushik
2016-03-22 12:19 ` Brian Foster
2016-03-22 13:01   ` Shyam Kaushik
2016-03-22 14:03     ` Brian Foster
2016-03-22 15:38       ` Carlos Maiolino
2016-03-22 15:56         ` Carlos Maiolino
2016-03-23  9:43       ` Shyam Kaushik
2016-03-23 12:30         ` Brian Foster
2016-03-23 15:32           ` Carlos Maiolino [this message]
2016-03-23 22:37             ` Dave Chinner
2016-03-24 11:08               ` Carlos Maiolino
2016-03-24 16:52               ` Carlos Maiolino
2016-03-24 21:56                 ` Dave Chinner
2016-04-01 12:31                   ` Carlos Maiolino
2016-03-23  9:52   ` Shyam Kaushik
2016-03-24 13:38   ` Shyam Kaushik
2016-04-08 10:51   ` Shyam Kaushik
2016-04-08 13:16     ` Brian Foster
2016-04-08 13:35       ` Shyam Kaushik
2016-04-08 14:31         ` Carlos Maiolino
2016-04-08 17:48       ` Shyam Kaushik
2016-04-08 19:00         ` Brian Foster
2016-04-08 17:51       ` Shyam Kaushik
2016-04-08 22:46     ` Dave Chinner
2016-04-10 18:40       ` Alex Lyakas
2016-04-11  1:21         ` Dave Chinner
2016-04-11 14:52           ` Shyam Kaushik
2016-04-11 22:47             ` Dave Chinner
2016-04-12  5:20           ` Shyam Kaushik
2016-04-12  6:59           ` Shyam Kaushik
2016-04-12  8:19             ` Dave Chinner
