From: Eryu Guan <eguan@redhat.com>
To: linux-nfs@vger.kernel.org
Cc: linux-xfs@vger.kernel.org, linux-ext4@vger.kernel.org,
Michal Hocko <mhocko@suse.com>, Theodore Ts'o <tytso@mit.edu>,
jack@suse.cz, david@fromorbit.com
Subject: Re: [v4.12-rc1 regression] nfs server crashed in fstests run
Date: Wed, 28 Jun 2017 11:04:39 +0800 [thread overview]
Message-ID: <20170628030439.GA23360@eguan.usersys.redhat.com> (raw)
In-Reply-To: <20170623072656.GI23360@eguan.usersys.redhat.com>
On Fri, Jun 23, 2017 at 03:26:56PM +0800, Eryu Guan wrote:
> [As this bug somehow involves ext4/jbd2 changes, cc'ing the ext4 list too]
>
> On Fri, Jun 02, 2017 at 02:04:57PM +0800, Eryu Guan wrote:
> > Hi all,
> >
> > Starting from the 4.12-rc1 kernel, I have seen the Linux NFS server
> > crash all the time in my fstests (xfstests) runs; I appended the
> > console log of an NFSv3 crash to the end of this mail.
>
> Some follow-up updates on this bug. *My* conclusion is that commit
> 81378da64de6 ("jbd2: mark the transaction context with the scope
> GFP_NOFS context") introduced this issue, and that reproducing the
> bug requires ext4, XFS and NFS together.
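>
> (To check whether a given tree already contains the suspect commit,
> something like the following should work; the commit ID is the one my
> bisect reported:)
>
> # git log --oneline -1 81378da64de6
> # git merge-base --is-ancestor 81378da64de6 HEAD && echo "commit is in HEAD"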
>
> For more details please see below.
>
> >
> > I was exporting a directory residing on XFS and loopback-mounting the
> > NFS export on localhost. Both NFSv3 and NFSv4 could hit this crash,
> > which usually happens when running test case generic/029 or generic/095.
> >
> > But the problem is that there's no easy and efficient way to reproduce
> > it. I tried running only generic/029 and generic/095 in a loop 1000
> > times without triggering it, and running just the 'quick' group tests
> > for 50 iterations failed to reproduce it as well. The only reliable
> > way to reproduce it seems to be running the 'auto' group tests for 20
> > iterations:
> >
> > i=0
> > while [ $i -lt 20 ]; do
> >     ./check -nfs -g auto
> >     ((i++))
> > done
> >
> > The server usually crashed within 5 iterations, but at times it could
> > survive 10 iterations and only crashed when left running for more
> > iterations. This makes the bug hard to bisect, and the bisecting is
> > very time-consuming.
> >
> > (The bisect is running now; it needs a few days to finish. My first
> > two attempts pointed to some mm patches as the first bad commit, but
> > reverting the suspected patch didn't prevent the server from crashing,
> > so I enlarged the loop count and started bisecting for the third time.)
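> >
> > (For reference, the bisect workflow looks roughly like this; the
> > good/bad endpoints here are my assumption, with v4.11 taken as the
> > last known good kernel:)
> >
> > # git bisect start
> > # git bisect bad v4.12-rc1
> > # git bisect good v4.11
> >
> > Then, for each kernel git bisect checks out: build and boot it, run
> > the 20-iteration loop above, and mark the result with 'git bisect
> > good' or 'git bisect bad' until it converges on the first bad commit.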
>
> The third round of bisecting finally finished after two weeks of
> painful testing. git bisect pointed to commit 81378da64de6 ("jbd2:
> mark the transaction context with the scope GFP_NOFS context") as the
> first bad commit, which seemed very weird to me, because the crash
> always happens in XFS code.
>
> But this reminded me that I was exporting not only XFS for NFS
> testing, but also ext4. My full test setup is:
>
> # mount -t ext4 /dev/sda4 /export/test
> # showmount -e localhost
> Export list for localhost:
> /export/scratch *
> /export/test *
>
> (/export/scratch is on rootfs, which is XFS)
>
> # cat local.config
> TEST_DEV=localhost:/export/test
> TEST_DIR=/mnt/testarea/test
> SCRATCH_DEV=localhost:/export/scratch
> SCRATCH_MNT=/mnt/testarea/scratch
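>
> (The corresponding /etc/exports is roughly the following; the export
> options shown are an assumption on my part, just a typical fstests
> NFS setup:)
>
> # cat /etc/exports
> /export/test    *(rw,no_root_squash)
> /export/scratch *(rw,no_root_squash)
> # exportfs -ra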
>
>
> Then I did further confirmation tests (the revert and mkfs steps are
> sketched below):
>
> 1. Switch to a new branch with that jbd2 patch as HEAD, compile the
> kernel, and run the test with both ext4 and XFS exported on this newly
> compiled kernel: it crashed within 5 iterations.
>
> 2. Revert that jbd2 patch (when it was HEAD) and rerun the test with
> both ext4 and XFS exported: the kernel survived 20 iterations of a
> full fstests run.
>
> 3. The kernel from step 1 survived 20 iterations of a full fstests run
> if I exported XFS only (created XFS on /dev/sda4 and mounted it at
> /export/test).
>
> 4. The 4.12-rc1 kernel survived the same test if I exported ext4 only
> (both /export/test and /export/scratch were mounted as ext4; this was
> done on another test host because I don't have another spare test
> partition).
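>
> (Concretely, the revert/rebuild in step 2 and the reformat in step 3
> were along these lines; the make invocation is illustrative, not my
> exact build configuration:)
>
> # git revert 81378da64de6
> # make olddefconfig && make -j$(nproc)
> # mkfs.xfs -f /dev/sda4
> # mount /dev/sda4 /export/test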
>
>
> All these facts seem to confirm that commit 81378da64de6 really is the
> culprit; I just don't see how.
>
> I attached the git bisect log; if you need more information, please
> let me know. BTW, I'm testing the 4.12-rc6 kernel now to see if the
> bug has already been fixed there.
The 4.12-rc6 kernel survived 30 iterations of a full fstests run. I'm
not sure whether the bug has been fixed there or has just become even
harder to reproduce.
Thanks,
Eryu