From: "Darrick J. Wong" <djwong@kernel.org>
To: Brian Foster <bfoster@redhat.com>
Cc: linux-xfs@vger.kernel.org
Subject: Re: [PATCH v4 3/3] xfs: set aside allocation btree blocks from block reservation
Date: Tue, 27 Apr 2021 21:12:58 -0700
Message-ID: <20210428041258.GG3122264@magnolia>
In-Reply-To: <20210423131050.141140-4-bfoster@redhat.com>
On Fri, Apr 23, 2021 at 09:10:50AM -0400, Brian Foster wrote:
> The blocks used for allocation btrees (bnobt and countbt) are
> technically considered free space. This is because as free space is
> used, allocbt blocks are removed and naturally become available for
> traditional allocation. However, this means that a significant
> portion of free space may consist of in-use btree blocks if free
> space is severely fragmented.
>
> On large filesystems with large perag reservations, this can lead to
> a rare but nasty condition where a significant amount of physical
> free space is available, but the majority of actual usable blocks
> consist of in-use allocbt blocks. We have a record of a (~12TB, 32
> AG) filesystem with multiple AGs in a state with ~2.5GB or so free
> blocks tracked across ~300 total allocbt blocks, but effectively at
> 100% full because the free space is entirely consumed by
> refcountbt perag reservation.
>
> Such a large perag reservation is by design on large filesystems.
> The problem is that because the free space is so fragmented, this AG
> contributes the 300 or so allocbt blocks to the global counters as
> free space. If this pattern repeats across enough AGs, the
> filesystem lands in a state where global block reservation can
> outrun physical block availability. For example, a streaming
> buffered write on the affected filesystem continues to allow delayed
> allocation beyond the point where writeback starts to fail due to
> physical block allocation failures. The expected behavior is for the
> delalloc block reservation to fail gracefully with -ENOSPC before
> physical block allocation failure is a possibility.
>
> To address this problem, set aside in-use allocbt blocks at
> reservation time and thus ensure they cannot be reserved until truly
> available for physical allocation. This allows alloc btree metadata
> to continue to reside in free space, but dynamically adjusts
> reservation availability based on internal state. Note that the
> logic requires that the allocbt counter is fully populated at
> reservation time before it is fully effective. We currently rely on
> the mount time AGF scan in the perag reservation initialization code
> for this dependency on filesystems where it's most important (i.e.
> with active perag reservations).
>
> Signed-off-by: Brian Foster <bfoster@redhat.com>
<nod>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
--D
> ---
> fs/xfs/xfs_mount.c | 15 ++++++++++++++-
> 1 file changed, 14 insertions(+), 1 deletion(-)
>
> diff --git a/fs/xfs/xfs_mount.c b/fs/xfs/xfs_mount.c
> index cb1e2c4702c3..bdfee1943796 100644
> --- a/fs/xfs/xfs_mount.c
> +++ b/fs/xfs/xfs_mount.c
> @@ -1188,6 +1188,7 @@ xfs_mod_fdblocks(
> int64_t lcounter;
> long long res_used;
> s32 batch;
> + uint64_t set_aside;
>
> if (delta > 0) {
> /*
> @@ -1227,8 +1228,20 @@ xfs_mod_fdblocks(
> else
> batch = XFS_FDBLOCKS_BATCH;
>
> + /*
> + * Set aside allocbt blocks because these blocks are tracked as free
> + * space but not available for allocation. Technically this means that a
> + * single reservation cannot consume all remaining free space, but the
> + * ratio of allocbt blocks to usable free blocks should be rather small.
> + * The tradeoff without this is that filesystems that maintain high
> + * perag block reservations can over reserve physical block availability
> + * and fail physical allocation, which leads to much more serious
> + * problems (i.e. transaction abort, pagecache discards, etc.) than
> + * slightly premature -ENOSPC.
> + */
> + set_aside = mp->m_alloc_set_aside + atomic64_read(&mp->m_allocbt_blks);
> percpu_counter_add_batch(&mp->m_fdblocks, delta, batch);
> - if (__percpu_counter_compare(&mp->m_fdblocks, mp->m_alloc_set_aside,
> + if (__percpu_counter_compare(&mp->m_fdblocks, set_aside,
> XFS_FDBLOCKS_BATCH) >= 0) {
> /* we had space! */
> return 0;
> --
> 2.26.3
>
Thread overview: 18+ messages
2021-04-23 13:10 [PATCH v4 0/3] xfs: set aside allocation btree blocks from block reservation Brian Foster
2021-04-23 13:10 ` [PATCH v4 1/3] xfs: unconditionally read all AGFs on mounts with perag reservation Brian Foster
2021-04-27 10:22 ` Chandan Babu R
2021-04-27 21:36 ` Allison Henderson
2021-04-28 4:12 ` Darrick J. Wong
2021-04-23 13:10 ` [PATCH v4 2/3] xfs: introduce in-core global counter of allocbt blocks Brian Foster
2021-04-27 10:28 ` Chandan Babu R
2021-04-27 11:33 ` Brian Foster
2021-04-27 13:22 ` Chandan Babu R
2021-04-27 21:37 ` Allison Henderson
2021-04-28 4:15 ` Darrick J. Wong
2021-04-28 15:01 ` Brian Foster
2021-04-28 15:29 ` Brian Foster
2021-04-28 16:12 ` Darrick J. Wong
2021-04-23 13:10 ` [PATCH v4 3/3] xfs: set aside allocation btree blocks from block reservation Brian Foster
2021-04-27 10:29 ` Chandan Babu R
2021-04-27 21:37 ` Allison Henderson
2021-04-28 4:12 ` Darrick J. Wong [this message]