* [PATCH 1/2] xfs: online grow vs. log recovery stress test
2024-10-17 16:34 [PATCH 0/2] fstests/xfs: a couple growfs log recovery tests Brian Foster
@ 2024-10-17 16:34 ` Brian Foster
2024-10-25 17:32 ` Zorro Lang
2024-10-17 16:34 ` [PATCH 2/2] xfs: online grow vs. log recovery stress test (realtime version) Brian Foster
2024-10-18 5:09 ` [PATCH 0/2] fstests/xfs: a couple growfs log recovery tests Christoph Hellwig
2 siblings, 1 reply; 10+ messages in thread
From: Brian Foster @ 2024-10-17 16:34 UTC (permalink / raw)
To: fstests; +Cc: linux-xfs, djwong, hch
fstests includes decent functional tests for online growfs and
shrink, and decent stress tests for crash and log recovery, but no
combination of the two. This test combines bits from a typical
growfs stress test like xfs/104 with crash recovery cycles from a
test like generic/388. As a result, this reproduces at least a
couple recently fixed issues related to log recovery of online
growfs operations.
Signed-off-by: Brian Foster <bfoster@redhat.com>
---
tests/xfs/609 | 69 +++++++++++++++++++++++++++++++++++++++++++++++
tests/xfs/609.out | 7 +++++
2 files changed, 76 insertions(+)
create mode 100755 tests/xfs/609
create mode 100644 tests/xfs/609.out
diff --git a/tests/xfs/609 b/tests/xfs/609
new file mode 100755
index 00000000..796f4357
--- /dev/null
+++ b/tests/xfs/609
@@ -0,0 +1,69 @@
+#! /bin/bash
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (c) 2024 Red Hat, Inc. All Rights Reserved.
+#
+# FS QA Test No. 609
+#
+# Test XFS online growfs log recovery.
+#
+. ./common/preamble
+_begin_fstest auto growfs stress shutdown log recoveryloop
+
+# Import common functions.
+. ./common/filter
+
+_stress_scratch()
+{
+ procs=4
+ nops=999999
+ # -w ensures that the only ops are ones which cause write I/O
+ FSSTRESS_ARGS=`_scale_fsstress_args -d $SCRATCH_MNT -w -p $procs \
+ -n $nops $FSSTRESS_AVOID`
+ $FSSTRESS_PROG $FSSTRESS_ARGS >> $seqres.full 2>&1 &
+}
+
+_require_scratch
+
+_scratch_mkfs_xfs | tee -a $seqres.full | _filter_mkfs 2>$tmp.mkfs
+. $tmp.mkfs # extract blocksize and data size for scratch device
+
+endsize=`expr 550 \* 1048576` # stop after growing this big
+[ `expr $endsize / $dbsize` -lt $dblocks ] || _notrun "Scratch device too small"
+
+nags=4
+size=`expr 125 \* 1048576`	# 125 megabytes initially
+sizeb=`expr $size / $dbsize` # in data blocks
+logblks=$(_scratch_find_xfs_min_logblocks -dsize=${size} -dagcount=${nags})
+
+_scratch_mkfs_xfs -lsize=${logblks}b -dsize=${size} -dagcount=${nags} \
+ >> $seqres.full
+_scratch_mount
+
+# Grow the filesystem in random sized chunks while stressing and performing
+# shutdown and recovery. The randomization is intended to create a mix of sub-ag
+# and multi-ag grows.
+while [ $size -le $endsize ]; do
+ echo "*** stressing a ${sizeb} block filesystem" >> $seqres.full
+ _stress_scratch
+ incsize=$((RANDOM % 40 * 1048576))
+ size=`expr $size + $incsize`
+ sizeb=`expr $size / $dbsize` # in data blocks
+ echo "*** growing to a ${sizeb} block filesystem" >> $seqres.full
+ xfs_growfs -D ${sizeb} $SCRATCH_MNT >> $seqres.full
+
+ sleep $((RANDOM % 3))
+ _scratch_shutdown
+ ps -e | grep fsstress > /dev/null 2>&1
+ while [ $? -eq 0 ]; do
+ killall -9 fsstress > /dev/null 2>&1
+ wait > /dev/null 2>&1
+ ps -e | grep fsstress > /dev/null 2>&1
+ done
+ _scratch_cycle_mount || _fail "cycle mount failed"
+done > /dev/null 2>&1
+wait	# wait for any remaining stress processes
+
+_scratch_unmount
+
+status=0
+exit
diff --git a/tests/xfs/609.out b/tests/xfs/609.out
new file mode 100644
index 00000000..1853cc65
--- /dev/null
+++ b/tests/xfs/609.out
@@ -0,0 +1,7 @@
+QA output created by 609
+meta-data=DDEV isize=XXX agcount=N, agsize=XXX blks
+data = bsize=XXX blocks=XXX, imaxpct=PCT
+ = sunit=XXX swidth=XXX, unwritten=X
+naming =VERN bsize=XXX
+log =LDEV bsize=XXX blocks=XXX
+realtime =RDEV extsz=XXX blocks=XXX, rtextents=XXX
--
2.46.2
^ permalink raw reply related [flat|nested] 10+ messages in thread

* Re: [PATCH 1/2] xfs: online grow vs. log recovery stress test
2024-10-17 16:34 ` [PATCH 1/2] xfs: online grow vs. log recovery stress test Brian Foster
@ 2024-10-25 17:32 ` Zorro Lang
2024-10-29 14:22 ` Brian Foster
0 siblings, 1 reply; 10+ messages in thread
From: Zorro Lang @ 2024-10-25 17:32 UTC (permalink / raw)
To: Brian Foster; +Cc: fstests, linux-xfs, djwong, hch
On Thu, Oct 17, 2024 at 12:34:04PM -0400, Brian Foster wrote:
> fstests includes decent functional tests for online growfs and
> shrink, and decent stress tests for crash and log recovery, but no
> combination of the two. This test combines bits from a typical
> growfs stress test like xfs/104 with crash recovery cycles from a
> test like generic/388. As a result, this reproduces at least a
> couple recently fixed issues related to log recovery of online
> growfs operations.
>
> Signed-off-by: Brian Foster <bfoster@redhat.com>
> ---
Hi Brian,
Thanks for this new test case! Some tiny review points below :)
> tests/xfs/609 | 69 +++++++++++++++++++++++++++++++++++++++++++++++
> tests/xfs/609.out | 7 +++++
> 2 files changed, 76 insertions(+)
> create mode 100755 tests/xfs/609
> create mode 100644 tests/xfs/609.out
>
> diff --git a/tests/xfs/609 b/tests/xfs/609
> new file mode 100755
> index 00000000..796f4357
> --- /dev/null
> +++ b/tests/xfs/609
> @@ -0,0 +1,69 @@
> +#! /bin/bash
> +# SPDX-License-Identifier: GPL-2.0
> +# Copyright (c) 2024 Red Hat, Inc. All Rights Reserved.
> +#
> +# FS QA Test No. 609
> +#
> +# Test XFS online growfs log recovery.
> +#
> +. ./common/preamble
> +_begin_fstest auto growfs stress shutdown log recoveryloop
> +
> +# Import common functions.
> +. ./common/filter
> +
> +_stress_scratch()
> +{
> + procs=4
> + nops=999999
> + # -w ensures that the only ops are ones which cause write I/O
> + FSSTRESS_ARGS=`_scale_fsstress_args -d $SCRATCH_MNT -w -p $procs \
> + -n $nops $FSSTRESS_AVOID`
> + $FSSTRESS_PROG $FSSTRESS_ARGS >> $seqres.full 2>&1 &
> +}
> +
> +_require_scratch
> +
> +_scratch_mkfs_xfs | tee -a $seqres.full | _filter_mkfs 2>$tmp.mkfs
"_scratch_mkfs_xfs | _filter_mkfs >$seqres.full 2>$tmp.mkfs" can get the same
output as the .out file below.
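As a standalone illustration of the difference (plain shell with hypothetical file names, not the fstests helpers): with `tee -a`, an unfiltered copy of the mkfs output is duplicated into the log before filtering, whereas the suggested plain redirect captures only the filtered stream.

```shell
# Sketch: "tee" keeps a raw copy in the log while the filter sees the
# same stream; a plain redirect would capture only the filtered output.
log=$(mktemp)

# tee variant: stdout shows the filtered line, the log keeps the raw line
printf 'raw mkfs output\n' | tee -a "$log" | sed 's/raw/filtered/'
grep -c 'raw' "$log"	# the unfiltered copy is present in the log

rm -f "$log"
```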
> +. $tmp.mkfs # extract blocksize and data size for scratch device
> +
> +endsize=`expr 550 \* 1048576` # stop after growing this big
> +[ `expr $endsize / $dbsize` -lt $dblocks ] || _notrun "Scratch device too small"
> +
> +nags=4
> +size=`expr 125 \* 1048576`	# 125 megabytes initially
> +sizeb=`expr $size / $dbsize` # in data blocks
> +logblks=$(_scratch_find_xfs_min_logblocks -dsize=${size} -dagcount=${nags})
> +
> +_scratch_mkfs_xfs -lsize=${logblks}b -dsize=${size} -dagcount=${nags} \
> + >> $seqres.full
What if this mkfs (with specific options) fails? How about appending || _fail "....."
> +_scratch_mount
> +
> +# Grow the filesystem in random sized chunks while stressing and performing
> +# shutdown and recovery. The randomization is intended to create a mix of sub-ag
> +# and multi-ag grows.
> +while [ $size -le $endsize ]; do
> + echo "*** stressing a ${sizeb} block filesystem" >> $seqres.full
> + _stress_scratch
> + incsize=$((RANDOM % 40 * 1048576))
> + size=`expr $size + $incsize`
> + sizeb=`expr $size / $dbsize` # in data blocks
> + echo "*** growing to a ${sizeb} block filesystem" >> $seqres.full
> + xfs_growfs -D ${sizeb} $SCRATCH_MNT >> $seqres.full
_require_command "$XFS_GROWFS_PROG" xfs_growfs
Then use $XFS_GROWFS_PROG
> +
> + sleep $((RANDOM % 3))
> + _scratch_shutdown
> + ps -e | grep fsstress > /dev/null 2>&1
> + while [ $? -eq 0 ]; do
> + killall -9 fsstress > /dev/null 2>&1
_require_command "$KILLALL_PROG" killall
Then use $KILLALL_PROG
> + wait > /dev/null 2>&1
> + ps -e | grep fsstress > /dev/null 2>&1
> + done
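As an aside, a slightly tighter variant of this kill loop is possible (a sketch, neither from the patch nor a firm review request): `pgrep -x` matches only processes named exactly "fsstress", which avoids the false matches that parsing `ps` output can produce.

```shell
# Sketch: poll with pgrep -x rather than "ps -e | grep", so only processes
# whose name is exactly "fsstress" keep the loop alive.
while pgrep -x fsstress > /dev/null 2>&1; do
	killall -9 fsstress > /dev/null 2>&1
	sleep 1
done
echo "no fsstress left"
```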
> + _scratch_cycle_mount || _fail "cycle mount failed"
> +done > /dev/null 2>&1
> +wait	# wait for any remaining stress processes
If the test is interrupted, the leftover fsstress processes will cause later
tests to fail, so we usually deal with background processes in _cleanup(),
e.g.:
_cleanup()
{
$KILLALL_PROG fsstress > /dev/null 2>&1
wait
cd /
rm -f $tmp.*
}
Or use a kill loop as you do above.
> +
> +_scratch_unmount
> +
> +status=0
> +exit
> diff --git a/tests/xfs/609.out b/tests/xfs/609.out
> new file mode 100644
> index 00000000..1853cc65
> --- /dev/null
> +++ b/tests/xfs/609.out
> @@ -0,0 +1,7 @@
> +QA output created by 609
> +meta-data=DDEV isize=XXX agcount=N, agsize=XXX blks
> +data = bsize=XXX blocks=XXX, imaxpct=PCT
> + = sunit=XXX swidth=XXX, unwritten=X
> +naming =VERN bsize=XXX
> +log =LDEV bsize=XXX blocks=XXX
> +realtime =RDEV extsz=XXX blocks=XXX, rtextents=XXX
So what's this output in the .out file for? How about "Silence is golden"?
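For reference, the fstests convention being suggested here looks like this (a minimal sketch, not code from the patch): the test prints exactly one fixed line, and the .out file contains only that line, so any stray output is flagged as a failure.

```shell
# "Silence is golden" sketch: the test's only expected output is this
# single line; the matching .out file contains just the same line.
echo "Silence is golden"
```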
Thanks,
Zorro
> --
> 2.46.2
>
>
* Re: [PATCH 1/2] xfs: online grow vs. log recovery stress test
2024-10-25 17:32 ` Zorro Lang
@ 2024-10-29 14:22 ` Brian Foster
0 siblings, 0 replies; 10+ messages in thread
From: Brian Foster @ 2024-10-29 14:22 UTC (permalink / raw)
To: Zorro Lang; +Cc: fstests, linux-xfs, djwong, hch
On Sat, Oct 26, 2024 at 01:32:42AM +0800, Zorro Lang wrote:
> On Thu, Oct 17, 2024 at 12:34:04PM -0400, Brian Foster wrote:
> > fstests includes decent functional tests for online growfs and
> > shrink, and decent stress tests for crash and log recovery, but no
> > combination of the two. This test combines bits from a typical
> > growfs stress test like xfs/104 with crash recovery cycles from a
> > test like generic/388. As a result, this reproduces at least a
> > couple recently fixed issues related to log recovery of online
> > growfs operations.
> >
> > Signed-off-by: Brian Foster <bfoster@redhat.com>
> > ---
>
> Hi Brian,
>
> Thanks for this new test case! Some tiny review points below :)
>
> > tests/xfs/609 | 69 +++++++++++++++++++++++++++++++++++++++++++++++
> > tests/xfs/609.out | 7 +++++
> > 2 files changed, 76 insertions(+)
> > create mode 100755 tests/xfs/609
> > create mode 100644 tests/xfs/609.out
> >
...
> > diff --git a/tests/xfs/609.out b/tests/xfs/609.out
> > new file mode 100644
> > index 00000000..1853cc65
> > --- /dev/null
> > +++ b/tests/xfs/609.out
> > @@ -0,0 +1,7 @@
> > +QA output created by 609
> > +meta-data=DDEV isize=XXX agcount=N, agsize=XXX blks
> > +data = bsize=XXX blocks=XXX, imaxpct=PCT
> > + = sunit=XXX swidth=XXX, unwritten=X
> > +naming =VERN bsize=XXX
> > +log =LDEV bsize=XXX blocks=XXX
> > +realtime =RDEV extsz=XXX blocks=XXX, rtextents=XXX
>
> So what's this output in .out file for? How about "Silence is golden"?
>
No particular reason... this was mostly a mash-up and cleanup of a couple of
preexisting tests around growfs and crash recovery, so it's probably just
leftover from that. All of these suggestions sound good to me. I'll
apply them and post a v2. Thanks for the review!
Brian
> Thanks,
> Zorro
>
> > --
> > 2.46.2
> >
> >
>
* [PATCH 2/2] xfs: online grow vs. log recovery stress test (realtime version)
2024-10-17 16:34 [PATCH 0/2] fstests/xfs: a couple growfs log recovery tests Brian Foster
2024-10-17 16:34 ` [PATCH 1/2] xfs: online grow vs. log recovery stress test Brian Foster
@ 2024-10-17 16:34 ` Brian Foster
2024-10-18 5:09 ` [PATCH 0/2] fstests/xfs: a couple growfs log recovery tests Christoph Hellwig
2 siblings, 0 replies; 10+ messages in thread
From: Brian Foster @ 2024-10-17 16:34 UTC (permalink / raw)
To: fstests; +Cc: linux-xfs, djwong, hch
This is fundamentally the same as the previous growfs vs. log
recovery test, with tweaks to support growing the XFS realtime
volume on such configurations. Changes include using the appropriate
mkfs params, growfs params, and enabling realtime inheritance on the
scratch fs.
Signed-off-by: Brian Foster <bfoster@redhat.com>
---
tests/xfs/610 | 71 +++++++++++++++++++++++++++++++++++++++++++++++
tests/xfs/610.out | 7 +++++
2 files changed, 78 insertions(+)
create mode 100755 tests/xfs/610
create mode 100644 tests/xfs/610.out
diff --git a/tests/xfs/610 b/tests/xfs/610
new file mode 100755
index 00000000..95ae31be
--- /dev/null
+++ b/tests/xfs/610
@@ -0,0 +1,71 @@
+#! /bin/bash
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (c) 2024 Red Hat, Inc. All Rights Reserved.
+#
+# FS QA Test No. 610
+#
+# Test XFS online growfs log recovery (realtime version).
+#
+. ./common/preamble
+_begin_fstest auto growfs stress shutdown log recoveryloop
+
+# Import common functions.
+. ./common/filter
+
+_stress_scratch()
+{
+ procs=4
+ nops=999999
+ # -w ensures that the only ops are ones which cause write I/O
+ FSSTRESS_ARGS=`_scale_fsstress_args -d $SCRATCH_MNT -w -p $procs \
+ -n $nops $FSSTRESS_AVOID`
+ $FSSTRESS_PROG $FSSTRESS_ARGS >> $seqres.full 2>&1 &
+}
+
+_require_scratch
+_require_realtime
+
+_scratch_mkfs_xfs | tee -a $seqres.full | _filter_mkfs 2>$tmp.mkfs
+. $tmp.mkfs # extract blocksize and data size for scratch device
+
+endsize=`expr 550 \* 1048576` # stop after growing this big
+[ `expr $endsize / $dbsize` -lt $dblocks ] || _notrun "Scratch device too small"
+
+nags=4
+size=`expr 125 \* 1048576`	# 125 megabytes initially
+sizeb=`expr $size / $dbsize` # in data blocks
+logblks=$(_scratch_find_xfs_min_logblocks -rsize=${size} -dagcount=${nags})
+
+_scratch_mkfs_xfs -lsize=${logblks}b -rsize=${size} -dagcount=${nags} \
+ >> $seqres.full
+_scratch_mount
+_xfs_force_bdev realtime $SCRATCH_MNT &> /dev/null
+
+# Grow the filesystem in random sized chunks while stressing and performing
+# shutdown and recovery. The randomization is intended to create a mix of sub-ag
+# and multi-ag grows.
+while [ $size -le $endsize ]; do
+ echo "*** stressing a ${sizeb} block filesystem" >> $seqres.full
+ _stress_scratch
+ incsize=$((RANDOM % 40 * 1048576))
+ size=`expr $size + $incsize`
+ sizeb=`expr $size / $dbsize` # in data blocks
+ echo "*** growing to a ${sizeb} block filesystem" >> $seqres.full
+ xfs_growfs -R ${sizeb} $SCRATCH_MNT >> $seqres.full
+
+ sleep $((RANDOM % 3))
+ _scratch_shutdown
+ ps -e | grep fsstress > /dev/null 2>&1
+ while [ $? -eq 0 ]; do
+ killall -9 fsstress > /dev/null 2>&1
+ wait > /dev/null 2>&1
+ ps -e | grep fsstress > /dev/null 2>&1
+ done
+ _scratch_cycle_mount || _fail "cycle mount failed"
+done > /dev/null 2>&1
+wait	# wait for any remaining stress processes
+
+_scratch_unmount
+
+status=0
+exit
diff --git a/tests/xfs/610.out b/tests/xfs/610.out
new file mode 100644
index 00000000..42a6d3ce
--- /dev/null
+++ b/tests/xfs/610.out
@@ -0,0 +1,7 @@
+QA output created by 610
+meta-data=DDEV isize=XXX agcount=N, agsize=XXX blks
+data = bsize=XXX blocks=XXX, imaxpct=PCT
+ = sunit=XXX swidth=XXX, unwritten=X
+naming =VERN bsize=XXX
+log =LDEV bsize=XXX blocks=XXX
+realtime =RDEV extsz=XXX blocks=XXX, rtextents=XXX
--
2.46.2
* Re: [PATCH 0/2] fstests/xfs: a couple growfs log recovery tests
2024-10-17 16:34 [PATCH 0/2] fstests/xfs: a couple growfs log recovery tests Brian Foster
2024-10-17 16:34 ` [PATCH 1/2] xfs: online grow vs. log recovery stress test Brian Foster
2024-10-17 16:34 ` [PATCH 2/2] xfs: online grow vs. log recovery stress test (realtime version) Brian Foster
@ 2024-10-18 5:09 ` Christoph Hellwig
2024-10-18 11:29 ` Brian Foster
2 siblings, 1 reply; 10+ messages in thread
From: Christoph Hellwig @ 2024-10-18 5:09 UTC (permalink / raw)
To: Brian Foster; +Cc: fstests, linux-xfs, djwong, hch
On Thu, Oct 17, 2024 at 12:34:03PM -0400, Brian Foster wrote:
> I believe you reproduced a problem with your customized realtime variant
> of the initial test. I've not been able to reproduce any test failures
> with patch 2 here, though I have tried to streamline the test a bit to
> reduce unnecessary bits (patch 1 still reproduces the original
> problems). I also don't tend to test much with rt, so it's possible my
> config is off somehow or another. Otherwise I _think_ I've included the
> necessary changes for rt support in the test itself.
>
> Thoughts? I'd like to figure out what might be going on there before
> this should land..
Darrick mentioned that was just with his rt group patchset, which
makes sense as we don't have per-group metadata without that.
Anyway, the series looks good to me, and I think it supersedes my
more targeted hand-crafted reproducer.
* Re: [PATCH 0/2] fstests/xfs: a couple growfs log recovery tests
2024-10-18 5:09 ` [PATCH 0/2] fstests/xfs: a couple growfs log recovery tests Christoph Hellwig
@ 2024-10-18 11:29 ` Brian Foster
2024-10-18 21:39 ` Darrick J. Wong
2024-10-21 16:41 ` Darrick J. Wong
0 siblings, 2 replies; 10+ messages in thread
From: Brian Foster @ 2024-10-18 11:29 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: fstests, linux-xfs, djwong
On Fri, Oct 18, 2024 at 07:09:09AM +0200, Christoph Hellwig wrote:
> On Thu, Oct 17, 2024 at 12:34:03PM -0400, Brian Foster wrote:
> > I believe you reproduced a problem with your customized realtime variant
> > of the initial test. I've not been able to reproduce any test failures
> > with patch 2 here, though I have tried to streamline the test a bit to
> > reduce unnecessary bits (patch 1 still reproduces the original
> > problems). I also don't tend to test much with rt, so it's possible my
> > config is off somehow or another. Otherwise I _think_ I've included the
> > necessary changes for rt support in the test itself.
> >
> > Thoughts? I'd like to figure out what might be going on there before
> > this should land..
>
> Darrick mentioned that was just with his rt group patchset, which
> make sense as we don't have per-group metadata without that.
>
Ah, that would explain it then.
> Anyway, the series looks good to me, and I think it supersedes my
> more targeted hand crafted reproducer.
>
Ok, thanks. It would be nice if anybody who knows more about the rt
group stuff could give the rt test a quick whirl and just confirm it's
at least still effective in that known broken case after my tweaks.
Otherwise I'll wait on any feedback on the code/test itself... thanks.
Brian
* Re: [PATCH 0/2] fstests/xfs: a couple growfs log recovery tests
2024-10-18 11:29 ` Brian Foster
@ 2024-10-18 21:39 ` Darrick J. Wong
2024-10-21 16:41 ` Darrick J. Wong
1 sibling, 0 replies; 10+ messages in thread
From: Darrick J. Wong @ 2024-10-18 21:39 UTC (permalink / raw)
To: Brian Foster; +Cc: Christoph Hellwig, fstests, linux-xfs
On Fri, Oct 18, 2024 at 07:29:22AM -0400, Brian Foster wrote:
> On Fri, Oct 18, 2024 at 07:09:09AM +0200, Christoph Hellwig wrote:
> > On Thu, Oct 17, 2024 at 12:34:03PM -0400, Brian Foster wrote:
> > > I believe you reproduced a problem with your customized realtime variant
> > > of the initial test. I've not been able to reproduce any test failures
> > > with patch 2 here, though I have tried to streamline the test a bit to
> > > reduce unnecessary bits (patch 1 still reproduces the original
> > > problems). I also don't tend to test much with rt, so it's possible my
> > > config is off somehow or another. Otherwise I _think_ I've included the
> > > necessary changes for rt support in the test itself.
> > >
> > > Thoughts? I'd like to figure out what might be going on there before
> > > this should land..
> >
> > Darrick mentioned that was just with his rt group patchset, which
> > make sense as we don't have per-group metadata without that.
> >
>
> Ah, that would explain it then.
Yep.
> > Anyway, the series looks good to me, and I think it supersedes my
> > more targeted hand crafted reproducer.
> >
>
> Ok, thanks. It would be nice if anybody who knows more about the rt
> group stuff could give the rt test a quick whirl and just confirm it's
> at least still effective in that known broken case after my tweaks.
> Otherwise I'll wait on any feedback on the code/test itself... thanks.
Will do, now that I'm out of the mountains. :)
The tests look fine to me, but I guess we could wait to see what falls
out when I add bfoster's tests.
--D
> Brian
>
>
* Re: [PATCH 0/2] fstests/xfs: a couple growfs log recovery tests
2024-10-18 11:29 ` Brian Foster
2024-10-18 21:39 ` Darrick J. Wong
@ 2024-10-21 16:41 ` Darrick J. Wong
2024-10-22 5:52 ` Christoph Hellwig
1 sibling, 1 reply; 10+ messages in thread
From: Darrick J. Wong @ 2024-10-21 16:41 UTC (permalink / raw)
To: Brian Foster; +Cc: Christoph Hellwig, fstests, linux-xfs
On Fri, Oct 18, 2024 at 07:29:22AM -0400, Brian Foster wrote:
> On Fri, Oct 18, 2024 at 07:09:09AM +0200, Christoph Hellwig wrote:
> > On Thu, Oct 17, 2024 at 12:34:03PM -0400, Brian Foster wrote:
> > > I believe you reproduced a problem with your customized realtime variant
> > > of the initial test. I've not been able to reproduce any test failures
> > > with patch 2 here, though I have tried to streamline the test a bit to
> > > reduce unnecessary bits (patch 1 still reproduces the original
> > > problems). I also don't tend to test much with rt, so it's possible my
> > > config is off somehow or another. Otherwise I _think_ I've included the
> > > necessary changes for rt support in the test itself.
> > >
> > > Thoughts? I'd like to figure out what might be going on there before
> > > this should land..
> >
> > Darrick mentioned that was just with his rt group patchset, which
> > make sense as we don't have per-group metadata without that.
> >
>
> Ah, that would explain it then.
>
> > Anyway, the series looks good to me, and I think it supersedes my
> > more targeted hand crafted reproducer.
> >
>
> Ok, thanks. It would be nice if anybody who knows more about the rt
> group stuff could give the rt test a quick whirl and just confirm it's
> at least still effective in that known broken case after my tweaks.
> Otherwise I'll wait on any feedback on the code/test itself... thanks.
Perplexingly, I tried this out on the test fleet last night and got zero
failures except for torvalds TOT.
Oh, I don't have any recoveryloop VMs that also have rt enabled, maybe
that's why 610 didn't pop anywhere.
--D
> Brian
>
* Re: [PATCH 0/2] fstests/xfs: a couple growfs log recovery tests
2024-10-21 16:41 ` Darrick J. Wong
@ 2024-10-22 5:52 ` Christoph Hellwig
0 siblings, 0 replies; 10+ messages in thread
From: Christoph Hellwig @ 2024-10-22 5:52 UTC (permalink / raw)
To: Darrick J. Wong; +Cc: Brian Foster, Christoph Hellwig, fstests, linux-xfs
On Mon, Oct 21, 2024 at 09:41:50AM -0700, Darrick J. Wong wrote:
> Perplexingly, I tried this out on the test fleet last night and got zero
> failures except for torvalds TOT.
>
> Oh, I don't have any recoveryloop VMs that also have rt enabled, maybe
> that's why 610 didn't pop anywhere.
Note that your trees already contain the fixes for AGs and RTGs, so
they are not expected to fail. On Linus' tree a failure is expected for
AGs, and we'd need an older version of your rtgroup branch to see a
failure for RTGs.
As far as I can tell the result is expected.