From: Gao Xiang <hsiangkao@redhat.com>
To: "Darrick J. Wong" <djwong@kernel.org>
Cc: Zorro Lang <zlang@redhat.com>,
linux-xfs@vger.kernel.org, fstests@vger.kernel.org
Subject: Re: [RFC PATCH v2 3/3] xfs: stress test for shrinking free space in the last AG
Date: Sat, 13 Mar 2021 00:58:32 +0800 [thread overview]
Message-ID: <20210312165832.GA287066@xiangao.remote.csb> (raw)
In-Reply-To: <20210312163713.GC8425@magnolia>
On Fri, Mar 12, 2021 at 08:37:13AM -0800, Darrick J. Wong wrote:
> On Sat, Mar 13, 2021 at 12:17:44AM +0800, Gao Xiang wrote:
> > On Sat, Mar 13, 2021 at 12:17:55AM +0800, Zorro Lang wrote:
> > > On Fri, Mar 12, 2021 at 09:23:00PM +0800, Gao Xiang wrote:
> > > > This adds a stress testcase to shrink free space as much as
> > > > possible in the last AG with background fsstress workload.
> > > >
> > > > The expectation is that no crash happens with expected output.
> > > >
> > > > Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
> > > > ---
> > > > Note that I don't use _fill_fs here, since fill_scratch is mainly meant
> > > > to consume 125M so that fsstress runs more effectively, rather than to
> > > > fill the fs with as much data as possible.
> > >
> > > As Darrick has already given lots of review points on this case, I just
> > > have 2 picky questions below :)
> > >
> > > >
> > > > tests/xfs/991 | 121 ++++++++++++++++++++++++++++++++++++++++++++++
> > > > tests/xfs/991.out | 8 +++
> > > > tests/xfs/group | 1 +
> > > > 3 files changed, 130 insertions(+)
> > > > create mode 100755 tests/xfs/991
> > > > create mode 100644 tests/xfs/991.out
> > > >
> > > > diff --git a/tests/xfs/991 b/tests/xfs/991
> > > > new file mode 100755
> > > > index 00000000..22a5ac81
> > > > --- /dev/null
> > > > +++ b/tests/xfs/991
> > > > @@ -0,0 +1,121 @@
> > > > +#! /bin/bash
> > > > +# SPDX-License-Identifier: GPL-2.0
> > > > +# Copyright (c) 2020-2021 Red Hat, Inc. All Rights Reserved.
> > > > +#
> > > > +# FS QA Test 991
> > > > +#
> > > > +# XFS online shrinkfs stress test
> > > > +#
> > > > +# This test attempts to shrink unused space as much as possible while a
> > > > +# background fsstress workload is running. If a larger size fails, it
> > > > +# retries with a smaller shrink size, repeating 2 * TIME_FACTOR times
> > > > +# in total.
> > > > +#
> > > > +seq=`basename $0`
> > > > +seqres=$RESULT_DIR/$seq
> > > > +echo "QA output created by $seq"
> > > > +
> > > > +here=`pwd`
> > > > +tmp=/tmp/$$
> > > > +status=1 # failure is the default!
> > > > +trap "rm -f $tmp.*; exit \$status" 0 1 2 3 15
> > > > +
> > > > +# get standard environment, filters and checks
> > > > +. ./common/rc
> > > > +. ./common/filter
> > > > +
> > > > +create_scratch()
> > > > +{
> > > > + _scratch_mkfs_xfs $@ | tee -a $seqres.full | \
> > > > + _filter_mkfs 2>$tmp.mkfs >/dev/null
> > > > + . $tmp.mkfs
> > > > +
> > > > + if ! _try_scratch_mount 2>/dev/null; then
> > > > + echo "failed to mount $SCRATCH_DEV"
> > > > + exit 1
> > > > + fi
> > > > +
> > > > + # fix the reserve block pool to a known size so that the enospc
> > > > + # calculations work out correctly.
> > > > + _scratch_resvblks 1024 > /dev/null 2>&1
> > > > +}
> > > > +
> > > > +fill_scratch()
> > > > +{
> > > > + $XFS_IO_PROG -f -c "resvsp 0 ${1}" $SCRATCH_MNT/resvfile
> > > > +}
> > > > +
> > > > +stress_scratch()
> > > > +{
> > > > + procs=3
> > > > + nops=$((1000 * LOAD_FACTOR))
> > > > + # -w ensures that the only ops are ones which cause write I/O
> > > > + FSSTRESS_ARGS=`_scale_fsstress_args -d $SCRATCH_MNT -w -p $procs \
> > > > + -n $nops $FSSTRESS_AVOID`
> > > > + $FSSTRESS_PROG $FSSTRESS_ARGS >> $seqres.full 2>&1 &
> > > > +}
> > > > +
> > > > +# real QA test starts here
> > > > +_supported_fs xfs
> > > > +_require_scratch
> > > > +_require_xfs_shrink
> > > > +_require_xfs_io_command "falloc"
> > >
> > > Am I missing something? I only see xfs_io "resvsp" being used, so why do
> > > you need the "falloc" command?
> >
> > As I mentioned before, the testcase was derived from xfs/104 with some
> > modifications.
> >
> > At a quick glance, this line was added by commit 09e94f84d929 ("xfs: don't
> > assume preallocation is always supported on XFS"). I don't have any more
> > background on it yet.
>
> Why not use xfs_io falloc in the test? fallocate is the successor to
> resvsp.
Yeah, the generic falloc seems better, and it seems _require_xfs_io_command
here is used for the always_cow inode feature. Will update it. Thanks!
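For reference, a minimal sketch of what the generic preallocation path looks
like outside xfs_io (assuming util-linux fallocate(1) and GNU stat; the temp
file here is just an illustration, not the test's $SCRATCH_MNT/resvfile):

```shell
#!/bin/bash
# Illustration only: fallocate(1) drives the same generic fallocate()
# syscall that `xfs_io -c "falloc <off> <len>"` issues, whereas "resvsp"
# is the legacy XFS-specific XFS_IOC_RESVSP ioctl.
tmpfile=$(mktemp)

# Preallocate 1 MiB, analogous to: $XFS_IO_PROG -f -c "falloc 0 1m" <file>
fallocate -l 1048576 "$tmpfile"

# The file size now reflects the preallocated length
stat -c %s "$tmpfile"

rm -f "$tmpfile"
```

Unlike resvsp, falloc extends i_size to cover the preallocated range, which
is why the size check above works; filesystems lacking fallocate support
make the command fail, hence the _require_xfs_io_command "falloc" guard.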
Thanks,
Gao Xiang
>
> --D
Thread overview: 12+ messages
2021-03-12 13:22 [RFC PATCH v2 0/3] xfs: testcases for shrinking free space in the last AG Gao Xiang
2021-03-12 13:22 ` [RFC PATCH v2 1/3] common/xfs: add a _require_xfs_shrink helper Gao Xiang
2021-03-12 15:25 ` Zorro Lang
2021-03-12 15:18 ` Gao Xiang
2021-03-12 13:22 ` [RFC PATCH v2 2/3] xfs: basic functionality test for shrinking free space in the last AG Gao Xiang
2021-03-12 15:56 ` Zorro Lang
2021-03-12 16:04 ` Gao Xiang
2021-03-12 13:23 ` [RFC PATCH v2 3/3] xfs: stress " Gao Xiang
2021-03-12 16:17 ` Zorro Lang
2021-03-12 16:17 ` Gao Xiang
2021-03-12 16:37 ` Darrick J. Wong
2021-03-12 16:58 ` Gao Xiang [this message]