public inbox for linux-xfs@vger.kernel.org
From: Eryu Guan <eguan@redhat.com>
To: Brian Foster <bfoster@redhat.com>
Cc: xfs@oss.sgi.com
Subject: Re: [PATCH 0/6 v2] xfs: xfs_iflush_cluster vs xfs_reclaim_inode
Date: Mon, 11 Apr 2016 14:25:04 +0800	[thread overview]
Message-ID: <20160411062504.GD10345@eguan.usersys.redhat.com> (raw)
In-Reply-To: <20160410092235.GZ10345@eguan.usersys.redhat.com>

On Sun, Apr 10, 2016 at 05:22:35PM +0800, Eryu Guan wrote:
> On Fri, Apr 08, 2016 at 07:37:09AM -0400, Brian Foster wrote:
> > On Fri, Apr 08, 2016 at 11:28:41AM +0800, Eryu Guan wrote:
> > > On Fri, Apr 08, 2016 at 09:37:45AM +1000, Dave Chinner wrote:
> > > > Hi folks,
> > > > 
> > > > This is the second version of this patch set, first posted and
> > > > described here:
> > > > 
> > > > http://oss.sgi.com/archives/xfs/2016-04/msg00069.html
> > > 
> > > Just a quick note: I'm testing the v1 patchset right now (v4.6-rc2
> > > kernel + the v1 patches); the config file is based on the rhel7 debug
> > > kernel config.
> > > 
> > > The test is the same as the original reproducer (a long-term fsstress
> > > run on XFS, exported via NFS). The test on the x86_64 host has been
> > > running for two days and everything looks fine. The test on the ppc64
> > > host has been running for a few hours, and I noticed a lock issue and
> > > a few warnings. I'm not sure yet whether they're related to the
> > > patches, or even to XFS (I need to run the test on a stock -rc2
> > > kernel to be sure), but I'm posting the logs here for reference.
> > > 
> > 
> > Had the original problem ever been reproduced on an upstream kernel?
> 
> No, I've never seen the original problem in my upstream kernel testing.
> Perhaps that's because I didn't run tests on debug kernels. But I didn't
> see it in RHEL7 debug kernel testing either.
> 
> > 
> > FWIW, my rhel-kernel-based test is still running well, approaching ~48
> > hours. I've seen some lockdep messages (bad unlock balance), but IIRC
> > I've been seeing those from the start, so I haven't been paying much
> > attention to them while digging into the core problem.
> > 
> > > [ 1911.626286] ======================================================
> > > [ 1911.626291] [ INFO: possible circular locking dependency detected ]
> > > [ 1911.626297] 4.6.0-rc2.debug+ #1 Not tainted
> > > [ 1911.626301] -------------------------------------------------------
> > > [ 1911.626306] nfsd/7402 is trying to acquire lock:
> > > [ 1911.626311]  (&s->s_sync_lock){+.+.+.}, at: [<c0000000003585f0>] .sync_inodes_sb+0xe0/0x230
> > > [ 1911.626327]
> > > [ 1911.626327] but task is already holding lock:
> > > [ 1911.626333]  (sb_internal){.+.+.+}, at: [<c00000000031a780>] .__sb_start_write+0x90/0x130
> > > [ 1911.626346]
> > > [ 1911.626346] which lock already depends on the new lock.
> > > [ 1911.626346]
> > > [ 1911.626353]
> > > [ 1911.626353] the existing dependency chain (in reverse order) is:
> > > [ 1911.626358]
> > ...
> > > [ 1911.627134]  Possible unsafe locking scenario:
> > > [ 1911.627134]
> > > [ 1911.627139]        CPU0                    CPU1
> > > [ 1911.627143]        ----                    ----
> > > [ 1911.627147]   lock(sb_internal);
> > > [ 1911.627153]                                lock(&s->s_sync_lock);
> > > [ 1911.627160]                                lock(sb_internal);
> > > [ 1911.627166]   lock(&s->s_sync_lock);
> > > [ 1911.627172]
> > > [ 1911.627172]  *** DEADLOCK ***
> > > [ 1911.627172]
> > ...
> > 
> > We actually have a report of this one on the list:
> > 
> > http://oss.sgi.com/archives/xfs/2016-04/msg00001.html
> > 
> > ... so I don't think it's related to this series. I believe I've seen
> > this once or twice when testing something completely unrelated, as well.
> > 
> > > [ 2046.852739] kworker/dying (399) used greatest stack depth: 4352 bytes left
> > > [ 2854.687381] XFS: Assertion failed: buffer_mapped(bh), file: fs/xfs/xfs_aops.c, line: 780
> > > [ 2854.687434] ------------[ cut here ]------------
> > > [ 2854.687488] WARNING: CPU: 5 PID: 28924 at fs/xfs/xfs_message.c:105 .asswarn+0x2c/0x40 [xfs]
> > ...
> > > [ 2854.687997] ---[ end trace 872ac2709186f780 ]---
> > 
> > These asserts look new to me, however. It would be interesting to see if
> > these reproduce independently.
> 
> I've seen just the assert failures in the same fsstress testing on the
> ppc64 host (no lock warnings at the beginning). I'll see if it's still
> reproducible on a stock kernel.

I saw the assert failures on a stock kernel (v4.6-rc2) too, so at least
it's not something introduced by this patchset.

Thanks,
Eryu

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


Thread overview: 18+ messages
2016-04-07 23:37 [PATCH 0/6 v2] xfs: xfs_iflush_cluster vs xfs_reclaim_inode Dave Chinner
2016-04-07 23:37 ` [PATCH 1/6] xfs: fix inode validity check in xfs_iflush_cluster Dave Chinner
2016-04-07 23:43   ` Christoph Hellwig
2016-04-07 23:37 ` [PATCH 2/6] xfs: rename variables in xfs_iflush_cluster for clarity Dave Chinner
2016-04-07 23:44   ` Christoph Hellwig
2016-04-07 23:37 ` [PATCH 3/6] xfs: skip stale inodes in xfs_iflush_cluster Dave Chinner
2016-04-07 23:37 ` [PATCH 4/6] xfs: xfs_iflush_cluster has range issues Dave Chinner
2016-04-07 23:37 ` [PATCH 5/6] xfs: xfs_inode_free() isn't RCU safe Dave Chinner
2016-04-07 23:37 ` [PATCH 6/6] xfs: mark reclaimed inodes invalid earlier Dave Chinner
2016-04-07 23:46   ` Christoph Hellwig
2016-04-08  3:28 ` [PATCH 0/6 v2] xfs: xfs_iflush_cluster vs xfs_reclaim_inode Eryu Guan
2016-04-08 11:37   ` Brian Foster
2016-04-10  9:22     ` Eryu Guan
2016-04-11  6:25       ` Eryu Guan [this message]
2016-04-08 17:18 ` Brian Foster
2016-04-08 22:17   ` Dave Chinner
2016-04-11 13:37     ` Brian Foster
2016-04-11 23:31       ` Dave Chinner
