* [PATCH v2 0/2] fstests/xfs: a couple growfs log recovery tests
@ 2024-10-29 17:21 Brian Foster
2024-10-29 17:21 ` [PATCH v2 1/2] xfs: online grow vs. log recovery stress test Brian Foster
` (2 more replies)
0 siblings, 3 replies; 11+ messages in thread
From: Brian Foster @ 2024-10-29 17:21 UTC (permalink / raw)
To: fstests; +Cc: linux-xfs, djwong, hch
v2:
- Miscellaneous cleanups to both tests.
v1: https://lore.kernel.org/fstests/20241017163405.173062-1-bfoster@redhat.com/
Brian Foster (2):
xfs: online grow vs. log recovery stress test
xfs: online grow vs. log recovery stress test (realtime version)
tests/xfs/609 | 81 +++++++++++++++++++++++++++++++++++++++++++++
tests/xfs/609.out | 2 ++
tests/xfs/610 | 83 +++++++++++++++++++++++++++++++++++++++++++++++
tests/xfs/610.out | 2 ++
4 files changed, 168 insertions(+)
create mode 100755 tests/xfs/609
create mode 100644 tests/xfs/609.out
create mode 100755 tests/xfs/610
create mode 100644 tests/xfs/610.out
--
2.46.2
^ permalink raw reply [flat|nested] 11+ messages in thread
* [PATCH v2 1/2] xfs: online grow vs. log recovery stress test
2024-10-29 17:21 [PATCH v2 0/2] fstests/xfs: a couple growfs log recovery tests Brian Foster
@ 2024-10-29 17:21 ` Brian Foster
2024-10-30 19:41 ` Zorro Lang
2024-10-29 17:21 ` [PATCH v2 2/2] xfs: online grow vs. log recovery stress test (realtime version) Brian Foster
2024-10-30 4:36 ` [PATCH v2 0/2] fstests/xfs: a couple growfs log recovery tests Christoph Hellwig
2 siblings, 1 reply; 11+ messages in thread
From: Brian Foster @ 2024-10-29 17:21 UTC (permalink / raw)
To: fstests; +Cc: linux-xfs, djwong, hch
fstests includes decent functional tests for online growfs and
shrink, and decent stress tests for crash and log recovery, but no
combination of the two. This test combines bits from a typical
growfs stress test like xfs/104 with crash recovery cycles from a
test like generic/388. As a result, this reproduces at least a
couple recently fixed issues related to log recovery of online
growfs operations.
Signed-off-by: Brian Foster <bfoster@redhat.com>
---
tests/xfs/609 | 81 +++++++++++++++++++++++++++++++++++++++++++++++
tests/xfs/609.out | 2 ++
2 files changed, 83 insertions(+)
create mode 100755 tests/xfs/609
create mode 100644 tests/xfs/609.out
diff --git a/tests/xfs/609 b/tests/xfs/609
new file mode 100755
index 00000000..4df966f7
--- /dev/null
+++ b/tests/xfs/609
@@ -0,0 +1,81 @@
+#! /bin/bash
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (c) 2024 Red Hat, Inc. All Rights Reserved.
+#
+# FS QA Test No. 609
+#
+# Test XFS online growfs log recovery.
+#
+. ./common/preamble
+_begin_fstest auto growfs stress shutdown log recoveryloop
+
+# Import common functions.
+. ./common/filter
+
+_stress_scratch()
+{
+ procs=4
+ nops=999999
+ # -w ensures that the only ops are ones which cause write I/O
+ FSSTRESS_ARGS=`_scale_fsstress_args -d $SCRATCH_MNT -w -p $procs \
+ -n $nops $FSSTRESS_AVOID`
+ $FSSTRESS_PROG $FSSTRESS_ARGS >> $seqres.full 2>&1 &
+}
+
+_require_scratch
+_require_command "$XFS_GROWFS_PROG" xfs_growfs
+_require_command "$KILLALL_PROG" killall
+
+_cleanup()
+{
+ $KILLALL_PROG fsstress > /dev/null 2>&1
+ wait
+ cd /
+ rm -f $tmp.*
+}
+
+_scratch_mkfs_xfs | _filter_mkfs >$seqres.full 2>$tmp.mkfs
+. $tmp.mkfs # extract blocksize and data size for scratch device
+
+endsize=`expr 550 \* 1048576` # stop after growing this big
+[ `expr $endsize / $dbsize` -lt $dblocks ] || _notrun "Scratch device too small"
+
+nags=4
+size=`expr 125 \* 1048576` # 125 megabytes initially
+sizeb=`expr $size / $dbsize` # in data blocks
+logblks=$(_scratch_find_xfs_min_logblocks -dsize=${size} -dagcount=${nags})
+
+_scratch_mkfs_xfs -lsize=${logblks}b -dsize=${size} -dagcount=${nags} \
+ >> $seqres.full || _fail "mkfs failed"
+_scratch_mount
+
+# Grow the filesystem in random sized chunks while stressing and performing
+# shutdown and recovery. The randomization is intended to create a mix of sub-ag
+# and multi-ag grows.
+while [ $size -le $endsize ]; do
+ echo "*** stressing a ${sizeb} block filesystem" >> $seqres.full
+ _stress_scratch
+ incsize=$((RANDOM % 40 * 1048576))
+ size=`expr $size + $incsize`
+ sizeb=`expr $size / $dbsize` # in data blocks
+ echo "*** growing to a ${sizeb} block filesystem" >> $seqres.full
+ $XFS_GROWFS_PROG -D ${sizeb} $SCRATCH_MNT >> $seqres.full
+
+ sleep $((RANDOM % 3))
+ _scratch_shutdown
+ ps -e | grep fsstress > /dev/null 2>&1
+ while [ $? -eq 0 ]; do
+ $KILLALL_PROG -9 fsstress > /dev/null 2>&1
+ wait > /dev/null 2>&1
+ ps -e | grep fsstress > /dev/null 2>&1
+ done
+ _scratch_cycle_mount || _fail "cycle mount failed"
+done > /dev/null 2>&1
+wait # wait for any remaining stress processes
+
+_scratch_unmount
+
+echo Silence is golden.
+
+status=0
+exit
diff --git a/tests/xfs/609.out b/tests/xfs/609.out
new file mode 100644
index 00000000..8be27d3a
--- /dev/null
+++ b/tests/xfs/609.out
@@ -0,0 +1,2 @@
+QA output created by 609
+Silence is golden.
--
2.46.2
* [PATCH v2 2/2] xfs: online grow vs. log recovery stress test (realtime version)
2024-10-29 17:21 [PATCH v2 0/2] fstests/xfs: a couple growfs log recovery tests Brian Foster
2024-10-29 17:21 ` [PATCH v2 1/2] xfs: online grow vs. log recovery stress test Brian Foster
@ 2024-10-29 17:21 ` Brian Foster
2024-10-30 19:54 ` Zorro Lang
2024-10-30 4:36 ` [PATCH v2 0/2] fstests/xfs: a couple growfs log recovery tests Christoph Hellwig
2 siblings, 1 reply; 11+ messages in thread
From: Brian Foster @ 2024-10-29 17:21 UTC (permalink / raw)
To: fstests; +Cc: linux-xfs, djwong, hch
This is fundamentally the same as the previous growfs vs. log
recovery test, with tweaks to support growing the XFS realtime
volume on such configurations. Changes include using the appropriate
mkfs params, growfs params, and enabling realtime inheritance on the
scratch fs.
Signed-off-by: Brian Foster <bfoster@redhat.com>
---
tests/xfs/610 | 83 +++++++++++++++++++++++++++++++++++++++++++++++
tests/xfs/610.out | 2 ++
2 files changed, 85 insertions(+)
create mode 100755 tests/xfs/610
create mode 100644 tests/xfs/610.out
diff --git a/tests/xfs/610 b/tests/xfs/610
new file mode 100755
index 00000000..6d3a526f
--- /dev/null
+++ b/tests/xfs/610
@@ -0,0 +1,83 @@
+#! /bin/bash
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (c) 2024 Red Hat, Inc. All Rights Reserved.
+#
+# FS QA Test No. 610
+#
+# Test XFS online growfs log recovery.
+#
+. ./common/preamble
+_begin_fstest auto growfs stress shutdown log recoveryloop
+
+# Import common functions.
+. ./common/filter
+
+_stress_scratch()
+{
+ procs=4
+ nops=999999
+ # -w ensures that the only ops are ones which cause write I/O
+ FSSTRESS_ARGS=`_scale_fsstress_args -d $SCRATCH_MNT -w -p $procs \
+ -n $nops $FSSTRESS_AVOID`
+ $FSSTRESS_PROG $FSSTRESS_ARGS >> $seqres.full 2>&1 &
+}
+
+_require_scratch
+_require_realtime
+_require_command "$XFS_GROWFS_PROG" xfs_growfs
+_require_command "$KILLALL_PROG" killall
+
+_cleanup()
+{
+ $KILLALL_PROG fsstress > /dev/null 2>&1
+ wait
+ cd /
+ rm -f $tmp.*
+}
+
+_scratch_mkfs_xfs | _filter_mkfs >$seqres.full 2>$tmp.mkfs
+. $tmp.mkfs # extract blocksize and data size for scratch device
+
+endsize=`expr 550 \* 1048576` # stop after growing this big
+[ `expr $endsize / $dbsize` -lt $dblocks ] || _notrun "Scratch device too small"
+
+nags=4
+size=`expr 125 \* 1048576` # 125 megabytes initially
+sizeb=`expr $size / $dbsize` # in data blocks
+logblks=$(_scratch_find_xfs_min_logblocks -rsize=${size} -dagcount=${nags})
+
+_scratch_mkfs_xfs -lsize=${logblks}b -rsize=${size} -dagcount=${nags} \
+ >> $seqres.full || _fail "mkfs failed"
+_scratch_mount
+_xfs_force_bdev realtime $SCRATCH_MNT &> /dev/null
+
+# Grow the filesystem in random sized chunks while stressing and performing
+# shutdown and recovery. The randomization is intended to create a mix of sub-ag
+# and multi-ag grows.
+while [ $size -le $endsize ]; do
+ echo "*** stressing a ${sizeb} block filesystem" >> $seqres.full
+ _stress_scratch
+ incsize=$((RANDOM % 40 * 1048576))
+ size=`expr $size + $incsize`
+ sizeb=`expr $size / $dbsize` # in data blocks
+ echo "*** growing to a ${sizeb} block filesystem" >> $seqres.full
+ $XFS_GROWFS_PROG -R ${sizeb} $SCRATCH_MNT >> $seqres.full
+
+ sleep $((RANDOM % 3))
+ _scratch_shutdown
+ ps -e | grep fsstress > /dev/null 2>&1
+ while [ $? -eq 0 ]; do
+ $KILLALL_PROG -9 fsstress > /dev/null 2>&1
+ wait > /dev/null 2>&1
+ ps -e | grep fsstress > /dev/null 2>&1
+ done
+ _scratch_cycle_mount || _fail "cycle mount failed"
+done > /dev/null 2>&1
+wait # wait for any remaining stress processes
+
+_scratch_unmount
+
+echo Silence is golden.
+
+status=0
+exit
diff --git a/tests/xfs/610.out b/tests/xfs/610.out
new file mode 100644
index 00000000..c42a1cf8
--- /dev/null
+++ b/tests/xfs/610.out
@@ -0,0 +1,2 @@
+QA output created by 610
+Silence is golden.
--
2.46.2
* Re: [PATCH v2 0/2] fstests/xfs: a couple growfs log recovery tests
2024-10-29 17:21 [PATCH v2 0/2] fstests/xfs: a couple growfs log recovery tests Brian Foster
2024-10-29 17:21 ` [PATCH v2 1/2] xfs: online grow vs. log recovery stress test Brian Foster
2024-10-29 17:21 ` [PATCH v2 2/2] xfs: online grow vs. log recovery stress test (realtime version) Brian Foster
@ 2024-10-30 4:36 ` Christoph Hellwig
2024-10-30 8:24 ` Zorro Lang
2 siblings, 1 reply; 11+ messages in thread
From: Christoph Hellwig @ 2024-10-30 4:36 UTC (permalink / raw)
To: Brian Foster; +Cc: fstests, linux-xfs, djwong, hch
Still looks good to me (but I'm a horrible test reviewer, so that
might not count much :)):
Reviewed-by: Christoph Hellwig <hch@lst.de>
* Re: [PATCH v2 0/2] fstests/xfs: a couple growfs log recovery tests
2024-10-30 4:36 ` [PATCH v2 0/2] fstests/xfs: a couple growfs log recovery tests Christoph Hellwig
@ 2024-10-30 8:24 ` Zorro Lang
0 siblings, 0 replies; 11+ messages in thread
From: Zorro Lang @ 2024-10-30 8:24 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: Brian Foster, fstests, linux-xfs, djwong
On Wed, Oct 30, 2024 at 05:36:59AM +0100, Christoph Hellwig wrote:
> Still looks good to me (but I'm a horrible test reviewer, so that
> might not count much :)):
Your review always counts :)
>
> Reviewed-by: Christoph Hellwig <hch@lst.de>
>
* Re: [PATCH v2 1/2] xfs: online grow vs. log recovery stress test
2024-10-29 17:21 ` [PATCH v2 1/2] xfs: online grow vs. log recovery stress test Brian Foster
@ 2024-10-30 19:41 ` Zorro Lang
2024-10-31 13:18 ` Brian Foster
0 siblings, 1 reply; 11+ messages in thread
From: Zorro Lang @ 2024-10-30 19:41 UTC (permalink / raw)
To: Brian Foster; +Cc: fstests, linux-xfs, djwong, hch
On Tue, Oct 29, 2024 at 01:21:34PM -0400, Brian Foster wrote:
> fstests includes decent functional tests for online growfs and
> shrink, and decent stress tests for crash and log recovery, but no
> combination of the two. This test combines bits from a typical
> growfs stress test like xfs/104 with crash recovery cycles from a
> test like generic/388. As a result, this reproduces at least a
> couple recently fixed issues related to log recovery of online
> growfs operations.
>
> Signed-off-by: Brian Foster <bfoster@redhat.com>
> ---
> tests/xfs/609 | 81 +++++++++++++++++++++++++++++++++++++++++++++++
> tests/xfs/609.out | 2 ++
> 2 files changed, 83 insertions(+)
> create mode 100755 tests/xfs/609
> create mode 100644 tests/xfs/609.out
>
> diff --git a/tests/xfs/609 b/tests/xfs/609
> new file mode 100755
> index 00000000..4df966f7
> --- /dev/null
> +++ b/tests/xfs/609
> @@ -0,0 +1,81 @@
> +#! /bin/bash
> +# SPDX-License-Identifier: GPL-2.0
> +# Copyright (c) 2024 Red Hat, Inc. All Rights Reserved.
> +#
> +# FS QA Test No. 609
> +#
> +# Test XFS online growfs log recovery.
> +#
> +. ./common/preamble
> +_begin_fstest auto growfs stress shutdown log recoveryloop
> +
> +# Import common functions.
> +. ./common/filter
> +
> +_stress_scratch()
> +{
> + procs=4
> + nops=999999
> + # -w ensures that the only ops are ones which cause write I/O
> + FSSTRESS_ARGS=`_scale_fsstress_args -d $SCRATCH_MNT -w -p $procs \
> + -n $nops $FSSTRESS_AVOID`
> + $FSSTRESS_PROG $FSSTRESS_ARGS >> $seqres.full 2>&1 &
> +}
> +
> +_require_scratch
> +_require_command "$XFS_GROWFS_PROG" xfs_growfs
> +_require_command "$KILLALL_PROG" killall
> +
> +_cleanup()
> +{
> + $KILLALL_ALL fsstress > /dev/null 2>&1
> + wait
> + cd /
> + rm -f $tmp.*
> +}
> +
> +_scratch_mkfs_xfs | _filter_mkfs >$seqres.full 2>$tmp.mkfs
> +. $tmp.mkfs # extract blocksize and data size for scratch device
> +
> +endsize=`expr 550 \* 1048576` # stop after growing this big
> +[ `expr $endsize / $dbsize` -lt $dblocks ] || _notrun "Scratch device too small"
> +
> +nags=4
> > +size=`expr 125 \* 1048576` # 125 megabytes initially
> +sizeb=`expr $size / $dbsize` # in data blocks
> +logblks=$(_scratch_find_xfs_min_logblocks -dsize=${size} -dagcount=${nags})
> +
> +_scratch_mkfs_xfs -lsize=${logblks}b -dsize=${size} -dagcount=${nags} \
> + >> $seqres.full || _fail "mkfs failed"
This test fails on my testing machine, as [1], due to above mkfs.xfs print
a warning:
"mkfs.xfs: small data volume, ignoring data volume stripe unit 128 and stripe width 256"
My test device is scripted, if without the specific mkfs options, it got:
# mkfs.xfs -f $SCRATCH_DEV
meta-data=/dev/sda6 isize=512 agcount=25, agsize=1064176 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=1
= reflink=1 bigtime=1 inobtcount=1 nrext64=1
= exchange=0
data = bsize=4096 blocks=26604400, imaxpct=25
= sunit=16 swidth=32 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1, parent=0
log =internal log bsize=4096 blocks=179552, version=2
= sectsz=512 sunit=16 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
But if with the specific mkfs options, it got:
# /usr/sbin/mkfs.xfs -f -lsize=3075b -dsize=131072000 -dagcount=4 $SCRATCH_DEV
mkfs.xfs: small data volume, ignoring data volume stripe unit 128 and stripe width 256
meta-data=/dev/sda6 isize=512 agcount=4, agsize=8000 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=1
= reflink=1 bigtime=1 inobtcount=1 nrext64=1
= exchange=0
data = bsize=4096 blocks=32000, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1, parent=0
log =internal log bsize=4096 blocks=3075, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
Hi Brian, if you think the "ignoring volume stripe" warning doesn't affect the test, we can
filter out the stderr with "2>&1". I can help to change that when I merge.
The rest looks good to me, pending the above confirmation:
Reviewed-by: Zorro Lang <zlang@redhat.com>
Thanks,
Zorro
[1]
SECTION -- default
FSTYP -- xfs (non-debug)
PLATFORM -- Linux/x86_64 dell-per750-41 6.11.0-0.rc6.49.fc42.x86_64+debug #1 SMP PREEMPT_DYNAMIC Mon Sep 2 02:18:15 UTC 2024
MKFS_OPTIONS -- -f /dev/sda6
MOUNT_OPTIONS -- -o context=system_u:object_r:root_t:s0 /dev/sda6 /mnt/scratch
xfs/609 [failed, exit status 1]_check_dmesg: something found in dmesg (see /root/git/xfstests/results//default/xfs/609.dmesg)
- output mismatch (see /root/git/xfstests/results//default/xfs/609.out.bad)
--- tests/xfs/609.out 2024-10-30 16:29:52.250176790 +0800
+++ /root/git/xfstests/results//default/xfs/609.out.bad 2024-10-30 16:31:01.759590117 +0800
@@ -1,2 +1,2 @@
QA output created by 609
-Silence is golden.
+mkfs.xfs: small data volume, ignoring data volume stripe unit 128 and stripe width 256
...
(Run 'diff -u /root/git/xfstests/tests/xfs/609.out /root/git/xfstests/results//default/xfs/609.out.bad' to see the entire diff)
xfs/610 [not run] External volumes not in use, skipped this test
Ran: xfs/609 xfs/610
Not run: xfs/610
Failures: xfs/609
Failed 1 of 2 tests
> +_scratch_mount
> +
> +# Grow the filesystem in random sized chunks while stressing and performing
> +# shutdown and recovery. The randomization is intended to create a mix of sub-ag
> +# and multi-ag grows.
> +while [ $size -le $endsize ]; do
> + echo "*** stressing a ${sizeb} block filesystem" >> $seqres.full
> + _stress_scratch
> + incsize=$((RANDOM % 40 * 1048576))
> + size=`expr $size + $incsize`
> + sizeb=`expr $size / $dbsize` # in data blocks
> + echo "*** growing to a ${sizeb} block filesystem" >> $seqres.full
> + $XFS_GROWFS_PROG -D ${sizeb} $SCRATCH_MNT >> $seqres.full
> +
> + sleep $((RANDOM % 3))
> + _scratch_shutdown
> + ps -e | grep fsstress > /dev/null 2>&1
> + while [ $? -eq 0 ]; do
> + $KILLALL_PROG -9 fsstress > /dev/null 2>&1
> + wait > /dev/null 2>&1
> + ps -e | grep fsstress > /dev/null 2>&1
> + done
> + _scratch_cycle_mount || _fail "cycle mount failed"
> +done > /dev/null 2>&1
> +wait # stop for any remaining stress processes
> +
> +_scratch_unmount
> +
> +echo Silence is golden.
> +
> +status=0
> +exit
> diff --git a/tests/xfs/609.out b/tests/xfs/609.out
> new file mode 100644
> index 00000000..8be27d3a
> --- /dev/null
> +++ b/tests/xfs/609.out
> @@ -0,0 +1,2 @@
> +QA output created by 609
> +Silence is golden.
> --
> 2.46.2
>
>
* Re: [PATCH v2 2/2] xfs: online grow vs. log recovery stress test (realtime version)
2024-10-29 17:21 ` [PATCH v2 2/2] xfs: online grow vs. log recovery stress test (realtime version) Brian Foster
@ 2024-10-30 19:54 ` Zorro Lang
2024-10-31 13:20 ` Brian Foster
0 siblings, 1 reply; 11+ messages in thread
From: Zorro Lang @ 2024-10-30 19:54 UTC (permalink / raw)
To: Brian Foster; +Cc: fstests, linux-xfs, djwong, hch
On Tue, Oct 29, 2024 at 01:21:35PM -0400, Brian Foster wrote:
> This is fundamentally the same as the previous growfs vs. log
> recovery test, with tweaks to support growing the XFS realtime
> volume on such configurations. Changes include using the appropriate
> mkfs params, growfs params, and enabling realtime inheritance on the
> scratch fs.
>
> Signed-off-by: Brian Foster <bfoster@redhat.com>
> ---
> tests/xfs/610 | 83 +++++++++++++++++++++++++++++++++++++++++++++++
> tests/xfs/610.out | 2 ++
> 2 files changed, 85 insertions(+)
> create mode 100755 tests/xfs/610
> create mode 100644 tests/xfs/610.out
>
> diff --git a/tests/xfs/610 b/tests/xfs/610
> new file mode 100755
> index 00000000..6d3a526f
> --- /dev/null
> +++ b/tests/xfs/610
> @@ -0,0 +1,83 @@
> +#! /bin/bash
> +# SPDX-License-Identifier: GPL-2.0
> +# Copyright (c) 2024 Red Hat, Inc. All Rights Reserved.
> +#
> +# FS QA Test No. 610
> +#
> +# Test XFS online growfs log recovery.
> +#
> +. ./common/preamble
> +_begin_fstest auto growfs stress shutdown log recoveryloop
> +
> +# Import common functions.
> +. ./common/filter
> +
> +_stress_scratch()
> +{
> + procs=4
> + nops=999999
> + # -w ensures that the only ops are ones which cause write I/O
> + FSSTRESS_ARGS=`_scale_fsstress_args -d $SCRATCH_MNT -w -p $procs \
> + -n $nops $FSSTRESS_AVOID`
> + $FSSTRESS_PROG $FSSTRESS_ARGS >> $seqres.full 2>&1 &
> +}
> +
> +_require_scratch
> +_require_realtime
> +_require_command "$XFS_GROWFS_PROG" xfs_growfs
> +_require_command "$KILLALL_PROG" killall
> +
> +_cleanup()
> +{
> + $KILLALL_ALL fsstress > /dev/null 2>&1
> + wait
> + cd /
> + rm -f $tmp.*
> +}
> +
> +_scratch_mkfs_xfs | _filter_mkfs >$seqres.full 2>$tmp.mkfs
> +. $tmp.mkfs # extract blocksize and data size for scratch device
> +
> +endsize=`expr 550 \* 1048576` # stop after growing this big
> +[ `expr $endsize / $dbsize` -lt $dblocks ] || _notrun "Scratch device too small"
> +
> +nags=4
> > +size=`expr 125 \* 1048576` # 125 megabytes initially
> +sizeb=`expr $size / $dbsize` # in data blocks
> +logblks=$(_scratch_find_xfs_min_logblocks -rsize=${size} -dagcount=${nags})
> +
> +_scratch_mkfs_xfs -lsize=${logblks}b -rsize=${size} -dagcount=${nags} \
> + >> $seqres.full || _fail "mkfs failed"
Aha, I'm not sure why this case didn't hit the failure seen in xfs/609. Do you think
we should filter out the mkfs warning too?
SECTION -- default
FSTYP -- xfs (non-debug)
PLATFORM -- Linux/x86_64 dell-per750-41 6.12.0-0.rc5.44.fc42.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Oct 28 14:12:55 UTC 2024
MKFS_OPTIONS -- -f -rrtdev=/dev/mapper/testvg-rtdev /dev/sda6
MOUNT_OPTIONS -- -o context=system_u:object_r:root_t:s0 -ortdev=/dev/mapper/testvg-rtdev /dev/sda6 /mnt/scratch
xfs/610 39s
Ran: xfs/610
Passed all 1 tests
> +_scratch_mount
> +_xfs_force_bdev realtime $SCRATCH_MNT &> /dev/null
> +
> +# Grow the filesystem in random sized chunks while stressing and performing
> +# shutdown and recovery. The randomization is intended to create a mix of sub-ag
> +# and multi-ag grows.
> +while [ $size -le $endsize ]; do
> + echo "*** stressing a ${sizeb} block filesystem" >> $seqres.full
> + _stress_scratch
> + incsize=$((RANDOM % 40 * 1048576))
> + size=`expr $size + $incsize`
> + sizeb=`expr $size / $dbsize` # in data blocks
> + echo "*** growing to a ${sizeb} block filesystem" >> $seqres.full
> + $XFS_GROWFS_PROG -R ${sizeb} $SCRATCH_MNT >> $seqres.full
> +
> + sleep $((RANDOM % 3))
> + _scratch_shutdown
> + ps -e | grep fsstress > /dev/null 2>&1
> + while [ $? -eq 0 ]; do
> + $KILLALL_PROG -9 fsstress > /dev/null 2>&1
> + wait > /dev/null 2>&1
> + ps -e | grep fsstress > /dev/null 2>&1
> + done
> + _scratch_cycle_mount || _fail "cycle mount failed"
_scratch_cycle_mount already calls _fail if it fails, so I'll help to remove the "|| _fail ..." part.
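Zorro's point here can be seen in a tiny runnable sketch; `_fail` and `cycle_mount` below are hypothetical stand-ins for the fstests helpers, not the real implementations:

```shell
#!/bin/sh
# Hypothetical stand-in for fstests' _fail: report the error and abort.
_fail() { echo "FAIL: $*"; exit 1; }

# Hypothetical stand-in for _scratch_cycle_mount: it invokes _fail
# itself on error, so the caller never sees a nonzero exit status.
cycle_mount() {
    mount_ok=true                # pretend the unmount/mount cycle worked
    $mount_ok || _fail "cycle mount failed"
}

cycle_mount    # sufficient on its own: an appended "|| _fail" can never
               # fire, because on error _fail has already exited above
echo "cycle ok"
```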
> +done > /dev/null 2>&1
> +wait # stop for any remaining stress processes
> +
> +_scratch_unmount
If this ^^ isn't a necessary step to reproduce the bug, then we don't need to do it
manually; each test case does that at the end. I can help to remove it when I
merge this patch.
The rest looks good to me,
Reviewed-by: Zorro Lang <zlang@redhat.com>
> +
> +echo Silence is golden.
> +
> +status=0
> +exit
> diff --git a/tests/xfs/610.out b/tests/xfs/610.out
> new file mode 100644
> index 00000000..c42a1cf8
> --- /dev/null
> +++ b/tests/xfs/610.out
> @@ -0,0 +1,2 @@
> +QA output created by 610
> +Silence is golden.
> --
> 2.46.2
>
>
* Re: [PATCH v2 1/2] xfs: online grow vs. log recovery stress test
2024-10-30 19:41 ` Zorro Lang
@ 2024-10-31 13:18 ` Brian Foster
0 siblings, 0 replies; 11+ messages in thread
From: Brian Foster @ 2024-10-31 13:18 UTC (permalink / raw)
To: Zorro Lang; +Cc: fstests, linux-xfs, djwong, hch
On Thu, Oct 31, 2024 at 03:41:33AM +0800, Zorro Lang wrote:
> On Tue, Oct 29, 2024 at 01:21:34PM -0400, Brian Foster wrote:
> > fstests includes decent functional tests for online growfs and
> > shrink, and decent stress tests for crash and log recovery, but no
> > combination of the two. This test combines bits from a typical
> > growfs stress test like xfs/104 with crash recovery cycles from a
> > test like generic/388. As a result, this reproduces at least a
> > couple recently fixed issues related to log recovery of online
> > growfs operations.
> >
> > Signed-off-by: Brian Foster <bfoster@redhat.com>
> > ---
> > tests/xfs/609 | 81 +++++++++++++++++++++++++++++++++++++++++++++++
> > tests/xfs/609.out | 2 ++
> > 2 files changed, 83 insertions(+)
> > create mode 100755 tests/xfs/609
> > create mode 100644 tests/xfs/609.out
> >
> > diff --git a/tests/xfs/609 b/tests/xfs/609
> > new file mode 100755
> > index 00000000..4df966f7
> > --- /dev/null
> > +++ b/tests/xfs/609
> > @@ -0,0 +1,81 @@
> > +#! /bin/bash
> > +# SPDX-License-Identifier: GPL-2.0
> > +# Copyright (c) 2024 Red Hat, Inc. All Rights Reserved.
> > +#
> > +# FS QA Test No. 609
> > +#
> > +# Test XFS online growfs log recovery.
> > +#
> > +. ./common/preamble
> > +_begin_fstest auto growfs stress shutdown log recoveryloop
> > +
> > +# Import common functions.
> > +. ./common/filter
> > +
> > +_stress_scratch()
> > +{
> > + procs=4
> > + nops=999999
> > + # -w ensures that the only ops are ones which cause write I/O
> > + FSSTRESS_ARGS=`_scale_fsstress_args -d $SCRATCH_MNT -w -p $procs \
> > + -n $nops $FSSTRESS_AVOID`
> > + $FSSTRESS_PROG $FSSTRESS_ARGS >> $seqres.full 2>&1 &
> > +}
> > +
> > +_require_scratch
> > +_require_command "$XFS_GROWFS_PROG" xfs_growfs
> > +_require_command "$KILLALL_PROG" killall
> > +
> > +_cleanup()
> > +{
> > + $KILLALL_ALL fsstress > /dev/null 2>&1
> > + wait
> > + cd /
> > + rm -f $tmp.*
> > +}
> > +
> > +_scratch_mkfs_xfs | _filter_mkfs >$seqres.full 2>$tmp.mkfs
> > +. $tmp.mkfs # extract blocksize and data size for scratch device
> > +
> > +endsize=`expr 550 \* 1048576` # stop after growing this big
> > +[ `expr $endsize / $dbsize` -lt $dblocks ] || _notrun "Scratch device too small"
> > +
> > +nags=4
> > +size=`expr 125 \* 1048576` # 125 megabytes initially
> > +sizeb=`expr $size / $dbsize` # in data blocks
> > +logblks=$(_scratch_find_xfs_min_logblocks -dsize=${size} -dagcount=${nags})
> > +
> > +_scratch_mkfs_xfs -lsize=${logblks}b -dsize=${size} -dagcount=${nags} \
> > + >> $seqres.full || _fail "mkfs failed"
>
>
> This test fails on my testing machine, as [1], due to above mkfs.xfs print
> a warning:
>
> "mkfs.xfs: small data volume, ignoring data volume stripe unit 128 and stripe width 256"
>
> My test device is scripted, if without the specific mkfs options, it got:
> # mkfs.xfs -f $SCRATCH_DEV
> meta-data=/dev/sda6 isize=512 agcount=25, agsize=1064176 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=1, sparse=1, rmapbt=1
> = reflink=1 bigtime=1 inobtcount=1 nrext64=1
> = exchange=0
> data = bsize=4096 blocks=26604400, imaxpct=25
> = sunit=16 swidth=32 blks
> naming =version 2 bsize=4096 ascii-ci=0, ftype=1, parent=0
> log =internal log bsize=4096 blocks=179552, version=2
> = sectsz=512 sunit=16 blks, lazy-count=1
> realtime =none extsz=4096 blocks=0, rtextents=0
>
> But if with the specific mkfs options, it got:
>
> # /usr/sbin/mkfs.xfs -f -lsize=3075b -dsize=131072000 -dagcount=4 $SCRATCH_DEV
> mkfs.xfs: small data volume, ignoring data volume stripe unit 128 and stripe width 256
> meta-data=/dev/sda6 isize=512 agcount=4, agsize=8000 blks
> = sectsz=512 attr=2, projid32bit=1
> = crc=1 finobt=1, sparse=1, rmapbt=1
> = reflink=1 bigtime=1 inobtcount=1 nrext64=1
> = exchange=0
> data = bsize=4096 blocks=32000, imaxpct=25
> = sunit=0 swidth=0 blks
> naming =version 2 bsize=4096 ascii-ci=0, ftype=1, parent=0
> log =internal log bsize=4096 blocks=3075, version=2
> = sectsz=512 sunit=0 blks, lazy-count=1
> realtime =none extsz=4096 blocks=0, rtextents=0
>
> Hi Brian, if you think the "ignoring volume stripe" warning doesn't affect the test, we can
> filter out the stderr with "2>&1". I can help to change that when I merge.
>
Hmm.. I don't think it should affect things. We could probably make the
scratch fs a bit bigger, but the idea is to leave enough room so it can
be grown a number of times. Any idea if using a particular min size fs
makes that warning go away?
Either way I don't think the custom stripe unit/width should make much
of a difference for a grow vs. log recovery test, so I'm fine with
filtering that out if that's easiest.
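The fix being agreed on is just a `2>&1` on the mkfs invocation so the stripe warning lands in `$seqres.full` rather than the golden output. A runnable sketch of the effect, using a stub in place of `mkfs.xfs` and a temp file in place of `$seqres.full` (both hypothetical):

```shell
#!/bin/sh
# Stub standing in for mkfs.xfs on a striped scratch device: normal
# geometry output on stdout, the stripe warning on stderr.
mkfs_stub() {
    echo "meta-data=/dev/sda6 isize=512 agcount=4, agsize=8000 blks"
    echo "mkfs.xfs: small data volume, ignoring data volume stripe unit 128 and stripe width 256" >&2
}

full=$(mktemp)    # stand-in for $seqres.full

# Without 2>&1 the warning reaches the test's stdout and breaks the
# "Silence is golden" output; with it, both streams go to the log.
mkfs_stub >> "$full" 2>&1 || echo "mkfs failed"

grep -c 'small data volume' "$full"    # → 1 (warning captured in log)
```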
Brian
> The rest looks good to me, pending the above confirmation:
>
> Reviewed-by: Zorro Lang <zlang@redhat.com>
>
> Thanks,
> Zorro
>
> [1]
> SECTION -- default
> FSTYP -- xfs (non-debug)
> PLATFORM -- Linux/x86_64 dell-per750-41 6.11.0-0.rc6.49.fc42.x86_64+debug #1 SMP PREEMPT_DYNAMIC Mon Sep 2 02:18:15 UTC 2024
> MKFS_OPTIONS -- -f /dev/sda6
> MOUNT_OPTIONS -- -o context=system_u:object_r:root_t:s0 /dev/sda6 /mnt/scratch
>
> xfs/609 [failed, exit status 1]_check_dmesg: something found in dmesg (see /root/git/xfstests/results//default/xfs/609.dmesg)
> - output mismatch (see /root/git/xfstests/results//default/xfs/609.out.bad)
> --- tests/xfs/609.out 2024-10-30 16:29:52.250176790 +0800
> +++ /root/git/xfstests/results//default/xfs/609.out.bad 2024-10-30 16:31:01.759590117 +0800
> @@ -1,2 +1,2 @@
> QA output created by 609
> -Silence is golden.
> +mkfs.xfs: small data volume, ignoring data volume stripe unit 128 and stripe width 256
> ...
> (Run 'diff -u /root/git/xfstests/tests/xfs/609.out /root/git/xfstests/results//default/xfs/609.out.bad' to see the entire diff)
> xfs/610 [not run] External volumes not in use, skipped this test
> Ran: xfs/609 xfs/610
> Not run: xfs/610
> Failures: xfs/609
> Failed 1 of 2 tests
>
>
> > +_scratch_mount
> > +
> > +# Grow the filesystem in random sized chunks while stressing and performing
> > +# shutdown and recovery. The randomization is intended to create a mix of sub-ag
> > +# and multi-ag grows.
> > +while [ $size -le $endsize ]; do
> > + echo "*** stressing a ${sizeb} block filesystem" >> $seqres.full
> > + _stress_scratch
> > + incsize=$((RANDOM % 40 * 1048576))
> > + size=`expr $size + $incsize`
> > + sizeb=`expr $size / $dbsize` # in data blocks
> > + echo "*** growing to a ${sizeb} block filesystem" >> $seqres.full
> > + $XFS_GROWFS_PROG -D ${sizeb} $SCRATCH_MNT >> $seqres.full
> > +
> > + sleep $((RANDOM % 3))
> > + _scratch_shutdown
> > + ps -e | grep fsstress > /dev/null 2>&1
> > + while [ $? -eq 0 ]; do
> > + $KILLALL_PROG -9 fsstress > /dev/null 2>&1
> > + wait > /dev/null 2>&1
> > + ps -e | grep fsstress > /dev/null 2>&1
> > + done
> > + _scratch_cycle_mount || _fail "cycle mount failed"
> > +done > /dev/null 2>&1
> > +wait # stop for any remaining stress processes
> > +
> > +_scratch_unmount
> > +
> > +echo Silence is golden.
> > +
> > +status=0
> > +exit
> > diff --git a/tests/xfs/609.out b/tests/xfs/609.out
> > new file mode 100644
> > index 00000000..8be27d3a
> > --- /dev/null
> > +++ b/tests/xfs/609.out
> > @@ -0,0 +1,2 @@
> > +QA output created by 609
> > +Silence is golden.
> > --
> > 2.46.2
> >
> >
>
* Re: [PATCH v2 2/2] xfs: online grow vs. log recovery stress test (realtime version)
2024-10-30 19:54 ` Zorro Lang
@ 2024-10-31 13:20 ` Brian Foster
2024-10-31 16:35 ` Darrick J. Wong
0 siblings, 1 reply; 11+ messages in thread
From: Brian Foster @ 2024-10-31 13:20 UTC (permalink / raw)
To: Zorro Lang; +Cc: fstests, linux-xfs, djwong, hch
On Thu, Oct 31, 2024 at 03:54:56AM +0800, Zorro Lang wrote:
> On Tue, Oct 29, 2024 at 01:21:35PM -0400, Brian Foster wrote:
> > This is fundamentally the same as the previous growfs vs. log
> > recovery test, with tweaks to support growing the XFS realtime
> > volume on such configurations. Changes include using the appropriate
> > mkfs params, growfs params, and enabling realtime inheritance on the
> > scratch fs.
> >
> > Signed-off-by: Brian Foster <bfoster@redhat.com>
> > ---
>
>
>
> > tests/xfs/610 | 83 +++++++++++++++++++++++++++++++++++++++++++++++
> > tests/xfs/610.out | 2 ++
> > 2 files changed, 85 insertions(+)
> > create mode 100755 tests/xfs/610
> > create mode 100644 tests/xfs/610.out
> >
> > diff --git a/tests/xfs/610 b/tests/xfs/610
> > new file mode 100755
> > index 00000000..6d3a526f
> > --- /dev/null
> > +++ b/tests/xfs/610
> > @@ -0,0 +1,83 @@
> > +#! /bin/bash
> > +# SPDX-License-Identifier: GPL-2.0
> > +# Copyright (c) 2024 Red Hat, Inc. All Rights Reserved.
> > +#
> > +# FS QA Test No. 610
> > +#
> > +# Test XFS online growfs log recovery.
> > +#
> > +. ./common/preamble
> > +_begin_fstest auto growfs stress shutdown log recoveryloop
> > +
> > +# Import common functions.
> > +. ./common/filter
> > +
> > +_stress_scratch()
> > +{
> > + procs=4
> > + nops=999999
> > + # -w ensures that the only ops are ones which cause write I/O
> > + FSSTRESS_ARGS=`_scale_fsstress_args -d $SCRATCH_MNT -w -p $procs \
> > + -n $nops $FSSTRESS_AVOID`
> > + $FSSTRESS_PROG $FSSTRESS_ARGS >> $seqres.full 2>&1 &
> > +}
> > +
> > +_require_scratch
> > +_require_realtime
> > +_require_command "$XFS_GROWFS_PROG" xfs_growfs
> > +_require_command "$KILLALL_PROG" killall
> > +
> > +_cleanup()
> > +{
> > + $KILLALL_ALL fsstress > /dev/null 2>&1
> > + wait
> > + cd /
> > + rm -f $tmp.*
> > +}
> > +
> > +_scratch_mkfs_xfs | _filter_mkfs >$seqres.full 2>$tmp.mkfs
> > +. $tmp.mkfs # extract blocksize and data size for scratch device
> > +
> > +endsize=`expr 550 \* 1048576` # stop after growing this big
> > +[ `expr $endsize / $dbsize` -lt $dblocks ] || _notrun "Scratch device too small"
> > +
> > +nags=4
> > +size=`expr 125 \* 1048576` # 125 megabytes initially
> > +sizeb=`expr $size / $dbsize` # in data blocks
> > +logblks=$(_scratch_find_xfs_min_logblocks -rsize=${size} -dagcount=${nags})
> > +
> > +_scratch_mkfs_xfs -lsize=${logblks}b -rsize=${size} -dagcount=${nags} \
> > + >> $seqres.full || _fail "mkfs failed"
>
> Ahah, not sure why this case didn't hit the failure of xfs/609, do you think
> we should filter out the mkfs warning too?
>
My experience with this test is that it didn't reproduce any problems on
current master, but Darrick had originally customized it from xfs/609
and found it useful to identify some issues in outstanding development
work around rt.

I've been trying to keep the two tests consistent outside of enabling
the appropriate rt bits, so I'd suggest we apply the same changes here
as for 609 around the mkfs thing (whichever way that goes).
> SECTION -- default
> FSTYP -- xfs (non-debug)
> PLATFORM -- Linux/x86_64 dell-per750-41 6.12.0-0.rc5.44.fc42.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Oct 28 14:12:55 UTC 2024
> MKFS_OPTIONS -- -f -rrtdev=/dev/mapper/testvg-rtdev /dev/sda6
> MOUNT_OPTIONS -- -o context=system_u:object_r:root_t:s0 -ortdev=/dev/mapper/testvg-rtdev /dev/sda6 /mnt/scratch
>
> xfs/610 39s
> Ran: xfs/610
> Passed all 1 tests
>
> > +_scratch_mount
> > +_xfs_force_bdev realtime $SCRATCH_MNT &> /dev/null
> > +
> > +# Grow the filesystem in random sized chunks while stressing and performing
> > +# shutdown and recovery. The randomization is intended to create a mix of sub-ag
> > +# and multi-ag grows.
> > +while [ $size -le $endsize ]; do
> > + echo "*** stressing a ${sizeb} block filesystem" >> $seqres.full
> > + _stress_scratch
> > + incsize=$((RANDOM % 40 * 1048576))
> > + size=`expr $size + $incsize`
> > + sizeb=`expr $size / $dbsize` # in data blocks
> > + echo "*** growing to a ${sizeb} block filesystem" >> $seqres.full
> > + $XFS_GROWFS_PROG -R ${sizeb} $SCRATCH_MNT >> $seqres.full
> > +
> > + sleep $((RANDOM % 3))
> > + _scratch_shutdown
> > + ps -e | grep fsstress > /dev/null 2>&1
> > + while [ $? -eq 0 ]; do
> > + $KILLALL_PROG -9 fsstress > /dev/null 2>&1
> > + wait > /dev/null 2>&1
> > + ps -e | grep fsstress > /dev/null 2>&1
> > + done
> > + _scratch_cycle_mount || _fail "cycle mount failed"
>
> _scratch_cycle_mount does _fail if it fails, I'll help to remove the "|| _fail ..."
>
Ok.
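For context, a minimal sketch of the helper semantics Zorro describes. The `_fail` and `_scratch_*` bodies below are stand-in stubs for illustration; the real helpers live in fstests' common/rc and differ in detail:

```shell
#!/bin/bash
# Stand-in stubs; the real _scratch_* helpers live in fstests common/rc.
_fail() { echo "FAIL: $*" >&2; exit 1; }
_scratch_unmount() { true; }
_scratch_mount()   { true; }

# _scratch_cycle_mount already fails the test itself on a mount error...
_scratch_cycle_mount() {
    _scratch_unmount
    _scratch_mount "$@" || _fail "cycle mount failed"
}

# ...so callers can invoke it bare, with no trailing "|| _fail":
_scratch_cycle_mount && echo "cycle ok"
```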
> > +done > /dev/null 2>&1
> > +wait # for any remaining stress processes
> > +
> > +_scratch_unmount
>
> If this ^^ isn't a necessary step of bug reproduce, then we don't need to do this
> manually, each test case does that at the end. I can help to remove it when I
> merge this patch.
>
Hm I don't think so. That might also just be copy/paste leftover. Feel
free to drop it.
> Others looks good to me,
>
> Reviewed-by: Zorro Lang <zlang@redhat.com>
>
Thanks!
Brian
>
> > +
> > +echo Silence is golden.
> > +
> > +status=0
> > +exit
> > diff --git a/tests/xfs/610.out b/tests/xfs/610.out
> > new file mode 100644
> > index 00000000..c42a1cf8
> > --- /dev/null
> > +++ b/tests/xfs/610.out
> > @@ -0,0 +1,2 @@
> > +QA output created by 610
> > +Silence is golden.
> > --
> > 2.46.2
> >
> >
>
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH v2 2/2] xfs: online grow vs. log recovery stress test (realtime version)
2024-10-31 13:20 ` Brian Foster
@ 2024-10-31 16:35 ` Darrick J. Wong
2024-10-31 19:43 ` Zorro Lang
0 siblings, 1 reply; 11+ messages in thread
From: Darrick J. Wong @ 2024-10-31 16:35 UTC (permalink / raw)
To: Brian Foster; +Cc: Zorro Lang, fstests, linux-xfs, hch
On Thu, Oct 31, 2024 at 09:20:49AM -0400, Brian Foster wrote:
> On Thu, Oct 31, 2024 at 03:54:56AM +0800, Zorro Lang wrote:
> > On Tue, Oct 29, 2024 at 01:21:35PM -0400, Brian Foster wrote:
> > > This is fundamentally the same as the previous growfs vs. log
> > > recovery test, with tweaks to support growing the XFS realtime
> > > volume on such configurations. Changes include using the appropriate
> > > mkfs params, growfs params, and enabling realtime inheritance on the
> > > scratch fs.
> > >
> > > Signed-off-by: Brian Foster <bfoster@redhat.com>
> > > ---
> >
> >
> >
> > > tests/xfs/610 | 83 +++++++++++++++++++++++++++++++++++++++++++++++
> > > tests/xfs/610.out | 2 ++
> > > 2 files changed, 85 insertions(+)
> > > create mode 100755 tests/xfs/610
> > > create mode 100644 tests/xfs/610.out
> > >
> > > diff --git a/tests/xfs/610 b/tests/xfs/610
> > > new file mode 100755
> > > index 00000000..6d3a526f
> > > --- /dev/null
> > > +++ b/tests/xfs/610
> > > @@ -0,0 +1,83 @@
> > > +#! /bin/bash
> > > +# SPDX-License-Identifier: GPL-2.0
> > > +# Copyright (c) 2024 Red Hat, Inc. All Rights Reserved.
> > > +#
> > > +# FS QA Test No. 610
> > > +#
> > > +# Test XFS online growfs log recovery.
> > > +#
> > > +. ./common/preamble
> > > +_begin_fstest auto growfs stress shutdown log recoveryloop
> > > +
> > > +# Import common functions.
> > > +. ./common/filter
> > > +
> > > +_stress_scratch()
> > > +{
> > > + procs=4
> > > + nops=999999
> > > + # -w ensures that the only ops are ones which cause write I/O
> > > + FSSTRESS_ARGS=`_scale_fsstress_args -d $SCRATCH_MNT -w -p $procs \
> > > + -n $nops $FSSTRESS_AVOID`
> > > + $FSSTRESS_PROG $FSSTRESS_ARGS >> $seqres.full 2>&1 &
> > > +}
> > > +
> > > +_require_scratch
> > > +_require_realtime
> > > +_require_command "$XFS_GROWFS_PROG" xfs_growfs
> > > +_require_command "$KILLALL_PROG" killall
> > > +
> > > +_cleanup()
> > > +{
> > > + $KILLALL_PROG fsstress > /dev/null 2>&1
> > > + wait
> > > + cd /
> > > + rm -f $tmp.*
> > > +}
> > > +
> > > +_scratch_mkfs_xfs | _filter_mkfs >$seqres.full 2>$tmp.mkfs
> > > +. $tmp.mkfs # extract blocksize and data size for scratch device
> > > +
> > > +endsize=`expr 550 \* 1048576` # stop after growing this big
> > > +[ `expr $endsize / $dbsize` -lt $dblocks ] || _notrun "Scratch device too small"
> > > +
> > > +nags=4
> > > +size=`expr 125 \* 1048576` # 125 megabytes initially
> > > +sizeb=`expr $size / $dbsize` # in data blocks
> > > +logblks=$(_scratch_find_xfs_min_logblocks -rsize=${size} -dagcount=${nags})
> > > +
> > > +_scratch_mkfs_xfs -lsize=${logblks}b -rsize=${size} -dagcount=${nags} \
> > > + >> $seqres.full || _fail "mkfs failed"
> >
> > Ahah, not sure why this case didn't hit the failure of xfs/609, do you think
> > we should filter out the mkfs warning too?
It won't-- the warning you got with 609 was about ignoring stripe
geometry on a small data volume. This mkfs invocation creates a
filesystem with a normal size data volume and a small rt volume, and
mkfs doesn't complain about small rt volumes.
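If 609's approach is ever needed here, one option is a small filter over the mkfs output so the golden output stays stable. A sketch — the warning text and the `filter_mkfs_stripe_warning` helper are illustrative assumptions, not mkfs.xfs's exact wording or an existing fstests filter:

```shell
#!/bin/bash
# Hypothetical filter: drop stripe-geometry warnings from mkfs.xfs output
# before logging/comparison. The warning line below is made up for
# illustration; match the real mkfs.xfs message text before using this.
filter_mkfs_stripe_warning() {
    grep -v -i 'stripe geometry'
}

printf '%s\n' \
    'meta-data=/dev/sda isize=512 agcount=4' \
    'Warning: stripe geometry ignored for small data volume' \
    'data     =          bsize=4096 blocks=32000' \
| filter_mkfs_stripe_warning
```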
--D
> My experience with this test is that it didn't reproduce any problems on
> current master, but Darrick had originally customized it from xfs/609
> and found it useful to identify some issues in outstanding development
> work around rt.
>
> I've been trying to keep the two tests consistent outside of enabling
> the appropriate rt bits, so I'd suggest we apply the same changes here
> as for 609 around the mkfs thing (whichever way that goes).
>
> > SECTION -- default
> > FSTYP -- xfs (non-debug)
> > PLATFORM -- Linux/x86_64 dell-per750-41 6.12.0-0.rc5.44.fc42.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Oct 28 14:12:55 UTC 2024
> > MKFS_OPTIONS -- -f -rrtdev=/dev/mapper/testvg-rtdev /dev/sda6
> > MOUNT_OPTIONS -- -o context=system_u:object_r:root_t:s0 -ortdev=/dev/mapper/testvg-rtdev /dev/sda6 /mnt/scratch
> >
> > xfs/610 39s
> > Ran: xfs/610
> > Passed all 1 tests
> >
> > > +_scratch_mount
> > > +_xfs_force_bdev realtime $SCRATCH_MNT &> /dev/null
> > > +
> > > +# Grow the filesystem in random sized chunks while stressing and performing
> > > +# shutdown and recovery. The randomization is intended to create a mix of sub-ag
> > > +# and multi-ag grows.
> > > +while [ $size -le $endsize ]; do
> > > + echo "*** stressing a ${sizeb} block filesystem" >> $seqres.full
> > > + _stress_scratch
> > > + incsize=$((RANDOM % 40 * 1048576))
> > > + size=`expr $size + $incsize`
> > > + sizeb=`expr $size / $dbsize` # in data blocks
> > > + echo "*** growing to a ${sizeb} block filesystem" >> $seqres.full
> > > + $XFS_GROWFS_PROG -R ${sizeb} $SCRATCH_MNT >> $seqres.full
> > > +
> > > + sleep $((RANDOM % 3))
> > > + _scratch_shutdown
> > > + ps -e | grep fsstress > /dev/null 2>&1
> > > + while [ $? -eq 0 ]; do
> > > + $KILLALL_PROG -9 fsstress > /dev/null 2>&1
> > > + wait > /dev/null 2>&1
> > > + ps -e | grep fsstress > /dev/null 2>&1
> > > + done
> > > + _scratch_cycle_mount || _fail "cycle mount failed"
> >
> > _scratch_cycle_mount does _fail if it fails, I'll help to remove the "|| _fail ..."
> >
>
> Ok.
>
> > > +done > /dev/null 2>&1
> > > +wait # for any remaining stress processes
> > > +
> > > +_scratch_unmount
> >
> > If this ^^ isn't a necessary step of bug reproduce, then we don't need to do this
> > manually, each test case does that at the end. I can help to remove it when I
> > merge this patch.
> >
>
> Hm I don't think so. That might also just be copy/paste leftover. Feel
> free to drop it.
>
> > Others looks good to me,
> >
> > Reviewed-by: Zorro Lang <zlang@redhat.com>
> >
>
> Thanks!
>
> Brian
>
> >
> > > +
> > > +echo Silence is golden.
> > > +
> > > +status=0
> > > +exit
> > > diff --git a/tests/xfs/610.out b/tests/xfs/610.out
> > > new file mode 100644
> > > index 00000000..c42a1cf8
> > > --- /dev/null
> > > +++ b/tests/xfs/610.out
> > > @@ -0,0 +1,2 @@
> > > +QA output created by 610
> > > +Silence is golden.
> > > --
> > > 2.46.2
> > >
> > >
> >
>
>
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH v2 2/2] xfs: online grow vs. log recovery stress test (realtime version)
2024-10-31 16:35 ` Darrick J. Wong
@ 2024-10-31 19:43 ` Zorro Lang
0 siblings, 0 replies; 11+ messages in thread
From: Zorro Lang @ 2024-10-31 19:43 UTC (permalink / raw)
To: Darrick J. Wong; +Cc: Brian Foster, fstests, linux-xfs, hch
On Thu, Oct 31, 2024 at 09:35:24AM -0700, Darrick J. Wong wrote:
> On Thu, Oct 31, 2024 at 09:20:49AM -0400, Brian Foster wrote:
> > On Thu, Oct 31, 2024 at 03:54:56AM +0800, Zorro Lang wrote:
> > > On Tue, Oct 29, 2024 at 01:21:35PM -0400, Brian Foster wrote:
> > > > This is fundamentally the same as the previous growfs vs. log
> > > > recovery test, with tweaks to support growing the XFS realtime
> > > > volume on such configurations. Changes include using the appropriate
> > > > mkfs params, growfs params, and enabling realtime inheritance on the
> > > > scratch fs.
> > > >
> > > > Signed-off-by: Brian Foster <bfoster@redhat.com>
> > > > ---
> > >
> > >
> > >
> > > > tests/xfs/610 | 83 +++++++++++++++++++++++++++++++++++++++++++++++
> > > > tests/xfs/610.out | 2 ++
> > > > 2 files changed, 85 insertions(+)
> > > > create mode 100755 tests/xfs/610
> > > > create mode 100644 tests/xfs/610.out
> > > >
> > > > diff --git a/tests/xfs/610 b/tests/xfs/610
> > > > new file mode 100755
> > > > index 00000000..6d3a526f
> > > > --- /dev/null
> > > > +++ b/tests/xfs/610
> > > > @@ -0,0 +1,83 @@
> > > > +#! /bin/bash
> > > > +# SPDX-License-Identifier: GPL-2.0
> > > > +# Copyright (c) 2024 Red Hat, Inc. All Rights Reserved.
> > > > +#
> > > > +# FS QA Test No. 610
> > > > +#
> > > > +# Test XFS online growfs log recovery.
> > > > +#
> > > > +. ./common/preamble
> > > > +_begin_fstest auto growfs stress shutdown log recoveryloop
> > > > +
> > > > +# Import common functions.
> > > > +. ./common/filter
> > > > +
> > > > +_stress_scratch()
> > > > +{
> > > > + procs=4
> > > > + nops=999999
> > > > + # -w ensures that the only ops are ones which cause write I/O
> > > > + FSSTRESS_ARGS=`_scale_fsstress_args -d $SCRATCH_MNT -w -p $procs \
> > > > + -n $nops $FSSTRESS_AVOID`
> > > > + $FSSTRESS_PROG $FSSTRESS_ARGS >> $seqres.full 2>&1 &
> > > > +}
> > > > +
> > > > +_require_scratch
> > > > +_require_realtime
> > > > +_require_command "$XFS_GROWFS_PROG" xfs_growfs
> > > > +_require_command "$KILLALL_PROG" killall
> > > > +
> > > > +_cleanup()
> > > > +{
> > > > + $KILLALL_PROG fsstress > /dev/null 2>&1
> > > > + wait
> > > > + cd /
> > > > + rm -f $tmp.*
> > > > +}
> > > > +
> > > > +_scratch_mkfs_xfs | _filter_mkfs >$seqres.full 2>$tmp.mkfs
> > > > +. $tmp.mkfs # extract blocksize and data size for scratch device
> > > > +
> > > > +endsize=`expr 550 \* 1048576` # stop after growing this big
> > > > +[ `expr $endsize / $dbsize` -lt $dblocks ] || _notrun "Scratch device too small"
> > > > +
> > > > +nags=4
> > > > +size=`expr 125 \* 1048576` # 125 megabytes initially
> > > > +sizeb=`expr $size / $dbsize` # in data blocks
> > > > +logblks=$(_scratch_find_xfs_min_logblocks -rsize=${size} -dagcount=${nags})
> > > > +
> > > > +_scratch_mkfs_xfs -lsize=${logblks}b -rsize=${size} -dagcount=${nags} \
> > > > + >> $seqres.full || _fail "mkfs failed"
> > >
> > > Ahah, not sure why this case didn't hit the failure of xfs/609, do you think
> > > we should filter out the mkfs warning too?
>
> It won't-- the warning you got with 609 was about ignoring stripe
> geometry on a small data volume. This mkfs invocation creates a
> filesystem with a normal size data volume and a small rt volume, and
> mkfs doesn't complain about small rt volumes.
Oh, good to know that, thanks Darrick :)
>
> --D
>
> > My experience with this test is that it didn't reproduce any problems on
> > current master, but Darrick had originally customized it from xfs/609
> > and found it useful to identify some issues in outstanding development
> > work around rt.
> >
> > I've been trying to keep the two tests consistent outside of enabling
> > the appropriate rt bits, so I'd suggest we apply the same changes here
> > as for 609 around the mkfs thing (whichever way that goes).
> >
> > > SECTION -- default
> > > FSTYP -- xfs (non-debug)
> > > PLATFORM -- Linux/x86_64 dell-per750-41 6.12.0-0.rc5.44.fc42.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Oct 28 14:12:55 UTC 2024
> > > MKFS_OPTIONS -- -f -rrtdev=/dev/mapper/testvg-rtdev /dev/sda6
> > > MOUNT_OPTIONS -- -o context=system_u:object_r:root_t:s0 -ortdev=/dev/mapper/testvg-rtdev /dev/sda6 /mnt/scratch
> > >
> > > xfs/610 39s
> > > Ran: xfs/610
> > > Passed all 1 tests
> > >
> > > > +_scratch_mount
> > > > +_xfs_force_bdev realtime $SCRATCH_MNT &> /dev/null
> > > > +
> > > > +# Grow the filesystem in random sized chunks while stressing and performing
> > > > +# shutdown and recovery. The randomization is intended to create a mix of sub-ag
> > > > +# and multi-ag grows.
> > > > +while [ $size -le $endsize ]; do
> > > > + echo "*** stressing a ${sizeb} block filesystem" >> $seqres.full
> > > > + _stress_scratch
> > > > + incsize=$((RANDOM % 40 * 1048576))
> > > > + size=`expr $size + $incsize`
> > > > + sizeb=`expr $size / $dbsize` # in data blocks
> > > > + echo "*** growing to a ${sizeb} block filesystem" >> $seqres.full
> > > > + $XFS_GROWFS_PROG -R ${sizeb} $SCRATCH_MNT >> $seqres.full
> > > > +
> > > > + sleep $((RANDOM % 3))
> > > > + _scratch_shutdown
> > > > + ps -e | grep fsstress > /dev/null 2>&1
> > > > + while [ $? -eq 0 ]; do
> > > > + $KILLALL_PROG -9 fsstress > /dev/null 2>&1
> > > > + wait > /dev/null 2>&1
> > > > + ps -e | grep fsstress > /dev/null 2>&1
> > > > + done
> > > > + _scratch_cycle_mount || _fail "cycle mount failed"
> > >
> > > _scratch_cycle_mount does _fail if it fails, I'll help to remove the "|| _fail ..."
> > >
> >
> > Ok.
> >
> > > > +done > /dev/null 2>&1
> > > > +wait # for any remaining stress processes
> > > > +
> > > > +_scratch_unmount
> > >
> > > If this ^^ isn't a necessary step of bug reproduce, then we don't need to do this
> > > manually, each test case does that at the end. I can help to remove it when I
> > > merge this patch.
> > >
> >
> > Hm I don't think so. That might also just be copy/paste leftover. Feel
> > free to drop it.
> >
> > > Others looks good to me,
> > >
> > > Reviewed-by: Zorro Lang <zlang@redhat.com>
> > >
> >
> > Thanks!
> >
> > Brian
> >
> > >
> > > > +
> > > > +echo Silence is golden.
> > > > +
> > > > +status=0
> > > > +exit
> > > > diff --git a/tests/xfs/610.out b/tests/xfs/610.out
> > > > new file mode 100644
> > > > index 00000000..c42a1cf8
> > > > --- /dev/null
> > > > +++ b/tests/xfs/610.out
> > > > @@ -0,0 +1,2 @@
> > > > +QA output created by 610
> > > > +Silence is golden.
> > > > --
> > > > 2.46.2
> > > >
> > > >
> > >
> >
> >
>
^ permalink raw reply [flat|nested] 11+ messages in thread
end of thread, other threads:[~2024-10-31 19:43 UTC | newest]
Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-10-29 17:21 [PATCH v2 0/2] fstests/xfs: a couple growfs log recovery tests Brian Foster
2024-10-29 17:21 ` [PATCH v2 1/2] xfs: online grow vs. log recovery stress test Brian Foster
2024-10-30 19:41 ` Zorro Lang
2024-10-31 13:18 ` Brian Foster
2024-10-29 17:21 ` [PATCH v2 2/2] xfs: online grow vs. log recovery stress test (realtime version) Brian Foster
2024-10-30 19:54 ` Zorro Lang
2024-10-31 13:20 ` Brian Foster
2024-10-31 16:35 ` Darrick J. Wong
2024-10-31 19:43 ` Zorro Lang
2024-10-30 4:36 ` [PATCH v2 0/2] fstests/xfs: a couple growfs log recovery tests Christoph Hellwig
2024-10-30 8:24 ` Zorro Lang