public inbox for linux-xfs@vger.kernel.org
From: Dave Chinner <david@fromorbit.com>
To: "Darrick J. Wong" <djwong@kernel.org>
Cc: Brian Foster <bfoster@redhat.com>, linux-xfs@vger.kernel.org
Subject: Re: [BUG] log I/O completion GPF via xfs/006 and xfs/264 on 5.17.0-rc8
Date: Sat, 19 Mar 2022 09:39:13 +1100
Message-ID: <20220318223913.GI1544202@dread.disaster.area>
In-Reply-To: <20220318215133.GG8224@magnolia>

On Fri, Mar 18, 2022 at 02:51:33PM -0700, Darrick J. Wong wrote:
> On Sat, Mar 19, 2022 at 08:48:31AM +1100, Dave Chinner wrote:
> > On Fri, Mar 18, 2022 at 09:46:53AM -0400, Brian Foster wrote:
> > > Hi,
> > > 
> > > I'm not sure if this is known and/or fixed already, but it didn't look
> > > familiar so here is a report. I hit a splat when testing Willy's
> > > prospective folio bookmark change and it turns out it replicates on
> > > Linus' current master (551acdc3c3d2). This initially reproduced on
> > > xfs/264 (mkfs defaults) and I saw a soft lockup warning variant via
> > > xfs/006, but when I attempted to reproduce the latter a second time I
> > > hit what looks like the same problem as xfs/264. Both tests seem to
> > > involve some form of error injection, so possibly the same underlying
> > > problem. The GPF splat from xfs/264 is below.
> > 
> > On a side note, I'm wondering if we should add xfs/006 and xfs/264
> > to the recoveryloop group - they do a shutdown under load and a
> > followup mount to ensure the filesystem gets recovered before
> > the test ends and the fs is checked, so while they don't explicitly
> > test recovery, they do exercise it....
> > 
> > Thoughts?
> 
> Someone else asked about this the other day, and I proposed a 'recovery'
> group for tests that don't run in a loop.

That distinction is largely meaningless to me.

I tend to think of "recoveryloop" as the recovery tests I want to
run in a long-running loop via iteration, e.g. something like
'check -I 250 -g recoveryloop'. I don't really care if the tests
loop internally doing multiple recoveries - I want to run the
recovery tests that reproduce problems repeatedly in a tight loop.
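
For reference, a quick sketch of what I mean by iterating - this
relies on check's -i/-I iteration options (per check's usage text,
-I stops iterating on the first failure, -i keeps going):

  # run the recoveryloop group 250 times, stop on the first failure
  ./check -I 250 -g recoveryloop

  # run the group 250 times regardless of failures
  ./check -i 250 -g recoveryloop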

Hence I think we should just lump the shutdown+recovery tests all in
one group so that when we want to exercise shutdown/recovery we just
have one single group to run repeatedly in a loop. Whether that
group is named 'recovery' or 'recoveryloop' is largely irrelevant to
me.
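
As a sketch of the mechanics, tagging a test into whatever group we
settle on is just a matter of adding the group name to its
_begin_fstest line (the existing group list for xfs/006 shown here
is a guess - check the test's actual declaration before copying):

  # tests/xfs/006: add the test to the combined group by appending
  # the group name to its _begin_fstest declaration
  . ./common/preamble
  _begin_fstest auto quick mount recoveryloop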

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

Thread overview: 10+ messages
2022-03-18 13:46 [BUG] log I/O completion GPF via xfs/006 and xfs/264 on 5.17.0-rc8 Brian Foster
2022-03-18 16:11 ` Brian Foster
2022-03-18 21:42   ` Dave Chinner
2022-03-21 18:35     ` Brian Foster
2022-03-21 22:14       ` Dave Chinner
2022-03-22 14:33         ` Brian Foster
2022-03-22 21:41           ` Dave Chinner
2022-03-18 21:48 ` Dave Chinner
2022-03-18 21:51   ` Darrick J. Wong
2022-03-18 22:39     ` Dave Chinner [this message]

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20220318223913.GI1544202@dread.disaster.area \
    --to=david@fromorbit.com \
    --cc=bfoster@redhat.com \
    --cc=djwong@kernel.org \
    --cc=linux-xfs@vger.kernel.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link

  Be sure your reply has a Subject: header at the top and a
  blank line before the message body.