From: "Darrick J. Wong" <djwong@kernel.org>
To: Omar Sandoval <osandov@osandov.com>
Cc: fstests@vger.kernel.org, linux-xfs@vger.kernel.org, kernel-team@fb.com
Subject: Re: [PATCH fstests] xfs: test refilling AGFL after lots of btree splits
Date: Wed, 25 Oct 2023 08:27:02 -0700 [thread overview]
Message-ID: <20231025152702.GE3195650@frogsfrogsfrogs> (raw)
In-Reply-To: <c7be2fe66a297316b934ddd3a1368b14f39a9f22.1698190540.git.osandov@osandov.com>
On Tue, Oct 24, 2023 at 04:37:42PM -0700, Omar Sandoval wrote:
> This is a regression test for the patch "xfs: fix internal error from AGFL
> exhaustion", which is not yet merged. Without the fix, it will fail
> with a "Structure needs cleaning" error.
Will look at the actual code patch next...
> Signed-off-by: Omar Sandoval <osandov@osandov.com>
> ---
> tests/xfs/601 | 62 +++++++++++++++++++++++++++++++++++++++++++++++
> tests/xfs/601.out | 2 ++
> 2 files changed, 64 insertions(+)
> create mode 100755 tests/xfs/601
> create mode 100644 tests/xfs/601.out
>
> diff --git a/tests/xfs/601 b/tests/xfs/601
> new file mode 100755
> index 00000000..bbc5b443
> --- /dev/null
> +++ b/tests/xfs/601
> @@ -0,0 +1,62 @@
> +#! /bin/bash
> +# SPDX-License-Identifier: GPL-2.0
> +# Copyright (c) Meta Platforms, Inc. and affiliates.
> +#
> +# FS QA Test 601
> +#
> +# Regression test for patch "xfs: fix internal error from AGFL exhaustion".
> +#
> +. ./common/preamble
> +_begin_fstest auto prealloc punch
> +
> +. ./common/filter
> +
> +_supported_fs xfs
> +_require_scratch
> +_require_test_program punch-alternating
> +_fixed_by_kernel_commit XXXXXXXXXXXX "xfs: fix internal error from AGFL exhaustion"
> +
> +_scratch_mkfs -m rmapbt=0 | _filter_mkfs > /dev/null 2> "$tmp.mkfs"
Need to probe if mkfs.xfs actually supports rmapbt options first, since
this bug applies to old fses from before rmap even existed, right?
(Or: What changes are needed to make the reproducer work with rmapbt
enabled?)
> +. "$tmp.mkfs"
> +_scratch_mount
> +
> +alloc_block_len=$((_fs_has_crcs ? 56 : 16))
> +allocbt_leaf_maxrecs=$(((dbsize - alloc_block_len) / 8))
> +allocbt_node_maxrecs=$(((dbsize - alloc_block_len) / 12))
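For anyone following along, here's that arithmetic worked through for a
hypothetical 4096-byte-block V5 filesystem (the 56-byte figure is the
CRC-enabled btree block header size; the numbers are illustrative, not
from the patch):

```shell
# Worked example of the maxrecs math above, assuming a 4096-byte block
# V5 (CRC) filesystem, so alloc_block_len = 56.
dbsize=4096
alloc_block_len=56
echo $(( (dbsize - alloc_block_len) / 8 ))   # leaf maxrecs (8-byte records)  -> 505
echo $(( (dbsize - alloc_block_len) / 12 ))  # node maxrecs (key + pointer)   -> 336
```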
> +
> +# Create a big file with a size such that the punches below create the exact
> +# free extents we want.
> +num_holes=$((allocbt_leaf_maxrecs * allocbt_node_maxrecs - 1))
> +$XFS_IO_PROG -c "falloc 0 $((9 * dbsize + num_holes * dbsize * 2))" -f "$SCRATCH_MNT/big"
What happens if the allocations are all in some other AG? The scratch
device could be 100TB.
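For scale, if I've done the math right, the file the test asks for on a
hypothetical 4096-byte-block V5 filesystem works out to roughly 1.3 GiB,
which easily fits in one AG on a big device -- but nothing here pins the
allocation to AG 0:

```shell
# Size of "big" following the test's formulas, assuming a 4096-byte
# block V5 (CRC) filesystem (56-byte btree block header).
dbsize=4096
leaf_maxrecs=$(( (dbsize - 56) / 8 ))    # 505
node_maxrecs=$(( (dbsize - 56) / 12 ))   # 336
num_holes=$(( leaf_maxrecs * node_maxrecs - 1 ))
echo $(( 9 * dbsize + num_holes * dbsize * 2 ))  # ~1.3 GiB
```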
> +# Fill in any small free extents in AG 0. After this, there should be only one,
> +# large free extent.
> +_scratch_unmount
> +mapfile -t gaps < <($XFS_DB_PROG -c 'agf 0' -c 'addr cntroot' -c 'p recs' "$SCRATCH_DEV" |
> + $SED_PROG -rn 's/^[0-9]+:\[[0-9]+,([0-9]+)\].*/\1/p' |
> + tac | tail -n +2)
_scratch_xfs_db -c 'agf 0' -c 'addr cntroot' -c 'btdump' ?
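For reference, the sed/tac/tail pipeline above keeps every free-extent
length except the largest (the cntbt is sorted by extent length, so the
largest record comes last). On some made-up 'p recs' output it behaves
like this (record values are hypothetical):

```shell
# Demonstration of the extraction pipeline on fabricated 'p recs'
# output; each line is "index:[startblock,blockcount] ...".
printf '0:[1024,7]\n1:[2048,40]\n2:[4096,1000]\n' |
    sed -rn 's/^[0-9]+:\[[0-9]+,([0-9]+)\].*/\1/p' |
    tac | tail -n +2    # reverse, then drop the largest extent
```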
> +_scratch_mount
> +for gap_i in "${!gaps[@]}"; do
> + gap=${gaps[$gap_i]}
> + $XFS_IO_PROG -c "falloc 0 $((gap * dbsize))" -f "$SCRATCH_MNT/gap$gap_i"
> +done
> +
> +# Create enough free space records to make the bnobt and cntbt both full,
> +# 2-level trees, plus one more record to make them split all the way to the
> +# root and become 3-level trees. After this, there is a 7-block free extent in
> +# the rightmost leaf of the cntbt, and all of the leaves of the cntbt other
> +# than the rightmost two are full. Without the fix, the free list is also
> +# empty.
> +$XFS_IO_PROG -c "fpunch $dbsize $((7 * dbsize))" "$SCRATCH_MNT/big"
> +"$here/src/punch-alternating" -o 9 "$SCRATCH_MNT/big"
> +
> +# Do an arbitrary operation that refills the free list. Without the fix, this
> +# will allocate 6 blocks from the 7-block free extent in the rightmost leaf of
> +# the cntbt, then try to insert the remaining 1 block free extent in the
> +# leftmost leaf of the cntbt. But that leaf is full, so this tries to split the
> +# leaf and fails because the free list is empty.
> +$XFS_IO_PROG -c "fpunch 0 $dbsize" "$SCRATCH_MNT/big"
> +
> +echo "Silence is golden"
Without the fix applied, what happens now? Does fpunch fail with EIO
and taint the golden output?
--D
> +status=0
> +exit
> diff --git a/tests/xfs/601.out b/tests/xfs/601.out
> new file mode 100644
> index 00000000..0d70c3e5
> --- /dev/null
> +++ b/tests/xfs/601.out
> @@ -0,0 +1,2 @@
> +QA output created by 601
> +Silence is golden
> --
> 2.42.0
>
Thread overview:
2023-10-24 23:37 [PATCH] xfs: fix internal error from AGFL exhaustion Omar Sandoval
2023-10-24 23:37 ` [PATCH fstests] xfs: test refilling AGFL after lots of btree splits Omar Sandoval
2023-10-25 15:27 ` Darrick J. Wong [this message]
2023-10-25 20:15 ` Omar Sandoval
2023-10-25 22:05 ` Darrick J. Wong
2023-10-25 4:35 ` [PATCH] xfs: fix internal error from AGFL exhaustion Dave Chinner
2023-10-25 5:14 ` Omar Sandoval
2023-10-25 15:50 ` Darrick J. Wong
2023-10-25 20:03 ` Omar Sandoval
2023-10-25 22:39 ` Darrick J. Wong
2023-10-25 23:37 ` Dave Chinner
2023-10-26 3:21 ` Darrick J. Wong
2023-10-26 20:56 ` Dave Chinner