From: Eric Whitney <enwlinux@gmail.com>
To: Matthew Wilcox <willy@infradead.org>
Cc: Eric Whitney <enwlinux@gmail.com>,
linux-ext4@vger.kernel.org, tytso@mit.edu
Subject: Re: generic/418 regression seen on 5.12-rc3
Date: Mon, 22 Mar 2021 12:37:03 -0400
Message-ID: <20210322163703.GA32047@localhost.localdomain>
In-Reply-To: <20210318221620.GW3420@casper.infradead.org>
* Matthew Wilcox <willy@infradead.org>:
> On Thu, Mar 18, 2021 at 05:38:08PM -0400, Eric Whitney wrote:
> > * Matthew Wilcox <willy@infradead.org>:
> > > On Thu, Mar 18, 2021 at 02:16:13PM -0400, Eric Whitney wrote:
> > > > As mentioned in today's ext4 concall, I've seen generic/418 fail from time to
> > > > time when run on 5.12-rc3 and 5.12-rc1 kernels. This first occurred when
> > > > running the 1k test case using kvm-xfstests. I was then able to bisect the
> > > > failure to a patch landed in the -rc1 merge window:
> > > >
> > > > (bd8a1f3655a7) mm/filemap: support readpage splitting a page
> > >
> > > Thanks for letting me know. This failure is new to me.
> >
> > Sure - it's useful to know that it's new to you. Ted said he's also going
> > to test XFS with a large number of generic/418 trials which would be a
> > useful comparison. However, he's had no luck as yet reproducing what I've
> > seen on his Google compute engine test setup running ext4.
> >
> > >
> > > I don't understand it; this patch changes the behaviour of buffered reads
> > > from waiting on a page with a refcount held to waiting on a page without
> > > the refcount held, then starting the lookup from scratch once the page
> > > is unlocked. I find it hard to believe this introduces a /new/ failure.
> > > Either it makes an existing failure easier to hit, or there's a subtle
> > > bug in the retry logic that I'm not seeing.
> > >
> >
> > For keeping Murphy at bay I'm rerunning the bisection from scratch just
> > to make sure I come out at the same patch. The initial bisection looked
> > clean, but when dealing with a failure that occurs probabilistically it's
> > easy enough to get it wrong. Is this patch revertable in -rc1 or -rc3?
> > Ordinarily I like to do that for confirmation.
>
> Alas, not easily. I've built a lot on top of it since then. I could
> probably come up with a moral reversion (and will have to if we can't
> figure out why it's causing a problem!)
>
> > And there's always the chance that a latent ext4 bug is being hit.
>
> That would also be valuable information to find out. If this
> patch is exposing a latent bug, I can't think what it might be.
>
> > I'd be very happy to run whatever debugging patches you might want, though
> > you might want to wait until I've reproduced the bisection result. The
> > offsets vary, unfortunately - I've seen 1024, 2048, and 3072 reported when
> > running a file system with 4k blocks.
>
> As I expected, but thank you for being willing to run debug patches.
> I'll wait for you to confirm the bisection and then work up something
> that'll help figure out what's going on.
An update:
I've been able to rerun the bisection on a freshly installed test appliance,
using a kernel built from a new tree and running on different test hardware.
It again ended at the same patch - "mm/filemap: support readpage splitting a page".
I'd thought I'd been able to reproduce the failure using ext4's 4K block size,
but my test logs say otherwise, as do a few thousand trial runs of generic/418.
So, at the moment I can only say definitively that I've seen the failures when
running with 1k blocks (block size < page size).
I've been able to simplify generic/418 by reducing the set of test cases it
uses to just "-w" (direct I/O writes). When I instead modify the test to remove
just that test case while leaving the remainder, I no longer see failures.
A beneficial side effect of the simplification is that it reduces the running
time by an order of magnitude.
When it fails, the test appears to write a chunk of bytes via dio, all of which
have been set to 0x1, and then to read back the same chunk via dio with all of
those bytes equal to 0x0.
I'm attempting to create a simpler reproducer. generic/418 actually has a
number of processes simultaneously writing to and reading from disjoint
regions of the same file, and I'd like to see if the failure is dependent
upon that behavior, or whether it can be isolated to a single writer/reader.
Eric
Thread overview: 10+ messages:
2021-03-18 18:16 generic/418 regression seen on 5.12-rc3 Eric Whitney
2021-03-18 19:41 ` Theodore Ts'o
2021-03-18 20:15 ` Matthew Wilcox
2021-03-18 21:38 ` Eric Whitney
2021-03-18 22:16 ` Matthew Wilcox
2021-03-22 16:37 ` Eric Whitney [this message]
2021-03-28 2:41 ` Matthew Wilcox
2021-04-01 16:15 ` Jan Kara
2021-04-01 17:46 ` Eric Whitney
2021-04-02 5:07 ` Ritesh Harjani