From: Brian Foster <bfoster@redhat.com>
To: Christoph Hellwig <hch@lst.de>
Cc: zlang@kernel.org, djwong@kernel.org, fstests@vger.kernel.org,
linux-xfs@vger.kernel.org
Subject: Re: [PATCH] xfs: test log recovery for extent frees right after growfs
Date: Tue, 8 Oct 2024 12:28:37 -0400
Message-ID: <ZwVdtXUSwEXRpcuQ@bfoster>
In-Reply-To: <ZuBwKQBMsuV-dp18@bfoster>

On Tue, Sep 10, 2024 at 12:13:29PM -0400, Brian Foster wrote:
> On Tue, Sep 10, 2024 at 05:10:53PM +0200, Christoph Hellwig wrote:
> > On Tue, Sep 10, 2024 at 10:19:50AM -0400, Brian Foster wrote:
> > > No real issue with the test, but I wonder if we could do something more
> > > generic. Various XFS shutdown and log recovery issues went undetected
> > > for a while until we started adding more of the generic stress tests
> > > currently categorized in the recoveryloop group.
> > >
> > > So for example, I'm wondering if you took something like generic/388 or
> > > 475 and modified it to start with a smallish fs, grew it in 1GB or
> > > whatever increments on each loop iteration, and then ran the same
> > > generic stress/timeout/shutdown/recovery sequence, would that eventually
> > > reproduce the issue you've fixed? I don't think reproducibility would
> > > need to be 100% for the test to be useful, fwiw.
> > >
> > > Note that I'm assuming we don't have something like that already. I see
> > > growfs and shutdown tests in tests/xfs/group.list, but nothing in both
> > > groups and I haven't looked through the individual tests. Just a
> > > thought.
> >
> > It turns out reproducing this bug was surprisingly complicated.
> > After a growfs we can now dip into reserves, which made the test1
> > file start filling up the existing AGs first for a while, and thus
> > the error injection would hit on that and never even reach a new
> > AG.
> >
> > So while I agree with your sentiment and like the high-level idea, I
> > suspect it will need a fair amount of work to actually be useful.
> > Right now I'm unfortunately too busy with various projects to look
> > into it.
> >
>
> Fair enough, maybe I'll play with it a bit when I have some more time.
>
> Brian
>
>

FWIW, here's a quick hack at such a test. This is essentially a copy of
xfs/104, tweaked to remove some of the output noise and whatnot, with
some bits from generic/388 hacked in to do a shutdown and mount cycle
per iteration.

I'm not sure whether this reproduces your original problem, but it
blows up pretty quickly on 6.12.0-rc2. I see a stream of warnings that
start like this (buffer readahead path via log recovery):

[ 2807.764283] XFS (vdb2): xfs_buf_map_verify: daddr 0x3e803 out of range, EOFS 0x3e800
[ 2807.768094] ------------[ cut here ]------------
[ 2807.770629] WARNING: CPU: 0 PID: 28386 at fs/xfs/xfs_buf.c:553 xfs_buf_get_map+0x184e/0x2670 [xfs]

... and then end up with an unrecoverable/unmountable fs. From the
title it sounds like this may be a different issue though... hm?
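
If you want to play with it, the two files below should drop straight
into an fstests checkout as tests/xfs/609 and run the usual way,
assuming SCRATCH_DEV/SCRATCH_MNT are configured in local.config and the
scratch device has room for the ~550MB the test grows into:

  ./check xfs/609
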
Brian
--- 8< ---
diff --git a/tests/xfs/609 b/tests/xfs/609
new file mode 100755
index 00000000..b9c23869
--- /dev/null
+++ b/tests/xfs/609
@@ -0,0 +1,105 @@
+#! /bin/bash
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (c) 2000-2004 Silicon Graphics, Inc. All Rights Reserved.
+#
+# FS QA Test No. 609
+#
+# XFS online growfs-while-allocating tests (data subvol variant)
+#
+. ./common/preamble
+_begin_fstest growfs ioctl prealloc auto stress shutdown recoveryloop
+
+# Import common functions.
+. ./common/filter
+
+_create_scratch()
+{
+	_scratch_mkfs_xfs "$@" >> $seqres.full
+
+ if ! _try_scratch_mount 2>/dev/null
+ then
+ echo "failed to mount $SCRATCH_DEV"
+ exit 1
+ fi
+
+ # fix the reserve block pool to a known size so that the enospc
+ # calculations work out correctly.
+ _scratch_resvblks 1024 > /dev/null 2>&1
+}
+
+_fill_scratch()
+{
+	$XFS_IO_PROG -f -c "falloc 0 ${1}" $SCRATCH_MNT/resvfile
+}
+
+_stress_scratch()
+{
+ procs=3
+ nops=1000
+ # -w ensures that the only ops are ones which cause write I/O
+ FSSTRESS_ARGS=`_scale_fsstress_args -d $SCRATCH_MNT -w -p $procs \
+ -n $nops $FSSTRESS_AVOID`
+ $FSSTRESS_PROG $FSSTRESS_ARGS >> $seqres.full 2>&1 &
+}
+
+_require_scratch
+_require_scratch_shutdown
+_require_xfs_io_command "falloc"
+
+_scratch_mkfs_xfs | tee -a $seqres.full | _filter_mkfs 2>$tmp.mkfs
+. $tmp.mkfs # extract blocksize and data size for scratch device
+
+endsize=`expr 550 \* 1048576` # stop after growing this big
+incsize=`expr 42 \* 1048576` # grow in chunks of this size
+modsize=`expr 4 \* $incsize` # pause after this many increments
+
+[ `expr $endsize / $dbsize` -lt $dblocks ] || _notrun "Scratch device too small"
+
+nags=4
+size=`expr 125 \* 1048576`	# 125 megabytes initially
+sizeb=`expr $size / $dbsize` # in data blocks
+logblks=$(_scratch_find_xfs_min_logblocks -dsize=${size} -dagcount=${nags})
+_create_scratch -lsize=${logblks}b -dsize=${size} -dagcount=${nags}
+
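+# Preallocate most of the initial fs, backing off until the request
+# succeeds, so later allocations are pushed toward the newly grown space.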
+for i in `seq 125 -1 90`; do
+ fillsize=`expr $i \* 1048576`
+ out="$(_fill_scratch $fillsize 2>&1)"
+ echo "$out" | grep -q 'No space left on device' && continue
+ test -n "${out}" && echo "$out"
+ break
+done
+
+#
+# Grow the filesystem while actively stressing it...
+# Kick off more stress threads on each iteration, grow; repeat.
+#
+while [ $size -le $endsize ]; do
+ echo "*** stressing a ${sizeb} block filesystem" >> $seqres.full
+ _stress_scratch
+ size=`expr $size + $incsize`
+ sizeb=`expr $size / $dbsize` # in data blocks
+ echo "*** growing to a ${sizeb} block filesystem" >> $seqres.full
+ xfs_growfs -D ${sizeb} $SCRATCH_MNT >> $seqres.full
+ echo AGCOUNT=$agcount >> $seqres.full
+ echo >> $seqres.full
+
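+	# give the stress workload a moment to run against the grown fs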
+ sleep $((RANDOM % 3))
+ _scratch_shutdown
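+	# kill and reap all fsstress processes before cycling the mount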
+ ps -e | grep fsstress > /dev/null 2>&1
+ while [ $? -eq 0 ]; do
+ killall -9 fsstress > /dev/null 2>&1
+ wait > /dev/null 2>&1
+ ps -e | grep fsstress > /dev/null 2>&1
+ done
+ _scratch_cycle_mount || _fail "cycle mount failed"
+done > /dev/null 2>&1
+wait	# wait for any remaining stress processes
+
+_scratch_unmount
+
+status=0
+exit
diff --git a/tests/xfs/609.out b/tests/xfs/609.out
new file mode 100644
index 00000000..1853cc65
--- /dev/null
+++ b/tests/xfs/609.out
@@ -0,0 +1,7 @@
+QA output created by 609
+meta-data=DDEV isize=XXX agcount=N, agsize=XXX blks
+data = bsize=XXX blocks=XXX, imaxpct=PCT
+ = sunit=XXX swidth=XXX, unwritten=X
+naming =VERN bsize=XXX
+log =LDEV bsize=XXX blocks=XXX
+realtime =RDEV extsz=XXX blocks=XXX, rtextents=XXX