From: "Darrick J. Wong" <djwong@kernel.org>
To: Brian Foster <bfoster@redhat.com>
Cc: linux-xfs@vger.kernel.org
Subject: Re: [PATCH v2 3/4] xfs: terminate perag iteration reliably on agcount
Date: Tue, 12 Oct 2021 12:08:22 -0700
Message-ID: <20211012190822.GN24307@magnolia>
In-Reply-To: <20211012165203.1354826-4-bfoster@redhat.com>
On Tue, Oct 12, 2021 at 12:52:02PM -0400, Brian Foster wrote:
> The for_each_perag_from() iteration macro relies on sb_agcount to
> process every perag currently within EOFS from a given starting
> point. However, it's perfectly valid for perag structures to exist
> beyond sb_agcount, such as while a growfs is in progress. If a perag
> loop happens to race with growfs in this manner, it will actually
> attempt to process the post-EOFS perag where ->pag_agno ==
> sb_agcount. This is reproduced by xfs/104 and manifests as the
> following assert failure in superblock write verifier context:
>
> XFS: Assertion failed: agno < mp->m_sb.sb_agcount, file: fs/xfs/libxfs/xfs_types.c, line: 22
>
> Update the corresponding macro to only process perags that are
> within the current sb_agcount.
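(Just to spell out the off-by-one for the archives: the end bound of
for_each_perag_range is inclusive, which is where the extra AG comes
from.  Paraphrasing the iteration -- only the xfs_perag_next step and
the for_each_perag_from wrapper in the hunk below are from the actual
patch, the rest of this sketch is my reconstruction:

#define for_each_perag_range(mp, agno, end_agno, pag)	\
	for ((pag) = xfs_perag_get((mp), (agno));	\
	     (pag) != NULL && (agno) <= (end_agno);	\
	     (pag) = xfs_perag_next((pag), &(agno)))

With end_agno == sb_agcount, the loop will happily visit a perag that
growfs has already set up at agno == sb_agcount, which is exactly the
post-EOFS access the assert trips over; hence the sb_agcount - 1 in the
hunk below.)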
Does this need a Fixes: tag?
Also ... should we be checking for agno <= agcount-1 for the initial
xfs_perag_get in the first for loop clause of for_each_perag_range?
I /think/ the answer is that the current users are careful enough to
check that race, but I haven't looked exhaustively.
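Something along these lines is what I have in mind, purely as a sketch
of where such a check would sit -- the helper name and shape here are
made up, not a proposal:

/*
 * Hypothetical: bound the initial lookup the same way the end of the
 * range is bounded, so a walk that races growfs never starts on a
 * perag at or beyond sb_agcount.
 */
static inline struct xfs_perag *
xfs_perag_get_below_agcount(
	struct xfs_mount	*mp,
	xfs_agnumber_t		agno)
{
	if (agno >= mp->m_sb.sb_agcount)
		return NULL;
	return xfs_perag_get(mp, agno);
}

The first for clause of for_each_perag_range would then call that
instead of xfs_perag_get() directly.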
Welcome back, by the way. :)
--D
> Signed-off-by: Brian Foster <bfoster@redhat.com>
> ---
> fs/xfs/libxfs/xfs_ag.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/fs/xfs/libxfs/xfs_ag.h b/fs/xfs/libxfs/xfs_ag.h
> index cf8baae2ba18..b8cc5017efba 100644
> --- a/fs/xfs/libxfs/xfs_ag.h
> +++ b/fs/xfs/libxfs/xfs_ag.h
> @@ -142,7 +142,7 @@ struct xfs_perag *xfs_perag_next(
> (pag) = xfs_perag_next((pag), &(agno)))
>
> #define for_each_perag_from(mp, agno, pag) \
> - for_each_perag_range((mp), (agno), (mp)->m_sb.sb_agcount, (pag))
> + for_each_perag_range((mp), (agno), (mp)->m_sb.sb_agcount - 1, (pag))
>
>
> #define for_each_perag(mp, agno, pag) \
> --
> 2.31.1
>
Thread overview: 14+ messages
2021-10-12 16:51 [PATCH v2 0/4] xfs: fix perag iteration raciness Brian Foster
2021-10-12 16:52 ` [PATCH v2 1/4] xfs: fold perag loop iteration logic into helper function Brian Foster
2021-10-12 18:53 ` Darrick J. Wong
2021-10-12 16:52 ` [PATCH v2 2/4] xfs: rename the next_agno perag iteration variable Brian Foster
2021-10-12 18:54 ` Darrick J. Wong
2021-10-12 16:52 ` [PATCH v2 3/4] xfs: terminate perag iteration reliably on agcount Brian Foster
2021-10-12 19:08 ` Darrick J. Wong [this message]
2021-10-14 14:10 ` Brian Foster
2021-10-14 16:46 ` Darrick J. Wong
2021-10-14 17:41 ` Brian Foster
2021-10-14 17:50 ` Darrick J. Wong
2021-10-12 16:52 ` [PATCH v2 4/4] xfs: fix perag reference leak on iteration race with growfs Brian Foster
2021-10-12 19:09 ` Darrick J. Wong
2021-10-12 21:26 ` [PATCH v2 0/4] xfs: fix perag iteration raciness Dave Chinner