From: "Darrick J. Wong" <djwong@kernel.org>
To: Brian Foster <bfoster@redhat.com>
Cc: linux-fsdevel@vger.kernel.org, linux-xfs@vger.kernel.org
Subject: Re: [PATCH 6/6] xfs: replace zero range flush with folio batch
Date: Wed, 5 Nov 2025 14:37:15 -0800 [thread overview]
Message-ID: <20251105223715.GI196370@frogsfrogsfrogs> (raw)
In-Reply-To: <20251016190303.53881-7-bfoster@redhat.com>
On Thu, Oct 16, 2025 at 03:03:03PM -0400, Brian Foster wrote:
> Now that the zero range pagecache flush serves purely to provide
> zeroing correctness in this case, we can remove it and replace it
> with the folio batch mechanism already used for handling unwritten
> extents.
>
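(To restate the buffered-io.c hunk below for my own benefit: without a
folio batch, the zero range loop can skip holes and unwritten extents
outright, since there's no dirty pagecache over them to zero --

	if (!(iter.iomap.flags & IOMAP_F_FOLIO_BATCH) &&
	    (srcmap->type == IOMAP_HOLE ||
	     srcmap->type == IOMAP_UNWRITTEN)) {
		/* nothing cached to zero; presumably just advance */

-- whereas the presence of a batch means the fs already found dirty
folios over the range, and those get zeroed regardless of the mapping
type.)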
> This is still slightly odd in that XFS reports a hole rather than a
> mapping that reflects the COW fork extents, but that has always been
> the case in this situation and so is a separate issue. We drop the
> iomap warning that assumes a folio batch is always associated with an
> unwritten mapping, but that was mainly a development assertion; the
> core iomap fbatch code otherwise doesn't care much about the mapping
> type once it's handed the set of folios to process.
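(And the assertion being dropped is the one that hard-wired folio
batches to unwritten srcmaps:

	if (WARN_ON_ONCE((iter.iomap.flags & IOMAP_F_FOLIO_BATCH) &&
			 srcmap->type != IOMAP_UNWRITTEN))
		return -EIO;

so with it gone, a batch that shows up with a HOLE srcmap -- the COW
fork case described above -- simply has its folios zeroed like any
other.)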
>
> Signed-off-by: Brian Foster <bfoster@redhat.com>
> ---
> fs/iomap/buffered-io.c | 4 ----
> fs/xfs/xfs_iomap.c | 16 ++++------------
> 2 files changed, 4 insertions(+), 16 deletions(-)
>
> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> index d6de689374c3..7bc4b8d090ee 100644
> --- a/fs/iomap/buffered-io.c
> +++ b/fs/iomap/buffered-io.c
> @@ -1534,10 +1534,6 @@ iomap_zero_range(struct inode *inode, loff_t pos, loff_t len, bool *did_zero,
> while ((ret = iomap_iter(&iter, ops)) > 0) {
> const struct iomap *srcmap = iomap_iter_srcmap(&iter);
>
> - if (WARN_ON_ONCE((iter.iomap.flags & IOMAP_F_FOLIO_BATCH) &&
> - srcmap->type != IOMAP_UNWRITTEN))
> - return -EIO;
> -
> if (!(iter.iomap.flags & IOMAP_F_FOLIO_BATCH) &&
> (srcmap->type == IOMAP_HOLE ||
> srcmap->type == IOMAP_UNWRITTEN)) {
> diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
> index 29f1462819fa..5a845a0ded79 100644
> --- a/fs/xfs/xfs_iomap.c
> +++ b/fs/xfs/xfs_iomap.c
> @@ -1704,7 +1704,6 @@ xfs_buffered_write_iomap_begin(
> {
> struct iomap_iter *iter = container_of(iomap, struct iomap_iter,
> iomap);
> - struct address_space *mapping = inode->i_mapping;
> struct xfs_inode *ip = XFS_I(inode);
> struct xfs_mount *mp = ip->i_mount;
> xfs_fileoff_t offset_fsb = XFS_B_TO_FSBT(mp, offset);
> @@ -1736,7 +1735,6 @@ xfs_buffered_write_iomap_begin(
> if (error)
> return error;
>
> -restart:
> error = xfs_ilock_for_iomap(ip, flags, &lockmode);
> if (error)
> return error;
> @@ -1812,16 +1810,10 @@ xfs_buffered_write_iomap_begin(
> xfs_trim_extent(&imap, offset_fsb,
> cmap.br_startoff + cmap.br_blockcount - offset_fsb);
> start = XFS_FSB_TO_B(mp, imap.br_startoff);
> - end = XFS_FSB_TO_B(mp,
> - imap.br_startoff + imap.br_blockcount) - 1;
> - if (filemap_range_needs_writeback(mapping, start, end)) {
> - xfs_iunlock(ip, lockmode);
> - error = filemap_write_and_wait_range(mapping, start,
> - end);
> - if (error)
> - return error;
> - goto restart;
> - }
> + end = XFS_FSB_TO_B(mp, imap.br_startoff + imap.br_blockcount);
> + iomap_flags |= iomap_fill_dirty_folios(iter, &start, end);
> + xfs_trim_extent(&imap, offset_fsb,
> + XFS_B_TO_FSB(mp, start) - offset_fsb);
Hrm, ok. This replaces the pagecache flush with passing the dirty
folios into iomap and letting it zero them regardless of what's in the
mapping. That seems to me like a reasonable way to solve the immediate
problem without the huge ->iomap_begin reengineering project.
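IOWs, assuming I'm reading the hunk right, the old unlock/flush/relock
cycle:

	/* old: back out of the ilock, flush, then redo the lookup */
	if (filemap_range_needs_writeback(mapping, start, end)) {
		xfs_iunlock(ip, lockmode);
		error = filemap_write_and_wait_range(mapping, start, end);
		if (error)
			return error;
		goto restart;
	}

collapses into a single pass that collects the dirty folios while still
holding the ilock, then trims the mapping to whatever the batch covers
(assuming @start comes back as the end of the batched range):

	/* new: no unlock/retry; iomap zeroes the batched folios later */
	iomap_flags |= iomap_fill_dirty_folios(iter, &start, end);
	xfs_trim_extent(&imap, offset_fsb,
			XFS_B_TO_FSB(mp, start) - offset_fsb);

No more worrying about the mapping changing underneath us while we've
dropped the lock, either.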
The changes here mostly look ok to me, though I wonder how well this
meshes with all the other iomap work headed for 6.19...
--D
>
> goto found_imap;
> }
> --
> 2.51.0
>
>
Thread overview: 27+ messages
2025-10-16 19:02 [PATCH 0/6] iomap, xfs: improve zero range flushing and lookup Brian Foster
2025-10-16 19:02 ` [PATCH 1/6] iomap: replace folio_batch allocation with stack allocation Brian Foster
2025-11-05 0:07 ` Darrick J. Wong
2025-11-05 15:27 ` Brian Foster
2025-11-05 21:41 ` Darrick J. Wong
2025-11-06 15:51 ` Brian Foster
2025-11-06 15:58 ` Darrick J. Wong
2025-10-16 19:02 ` [PATCH 2/6] iomap, xfs: lift zero range hole mapping flush into xfs Brian Foster
2025-11-05 0:31 ` Darrick J. Wong
2025-11-05 15:33 ` Brian Foster
2025-11-05 22:23 ` Darrick J. Wong
2025-11-06 15:52 ` Brian Foster
2025-11-06 23:32 ` Darrick J. Wong
2025-11-07 13:52 ` Brian Foster
2025-11-07 13:59 ` Christoph Hellwig
2025-11-07 13:57 ` Christoph Hellwig
2025-11-07 13:55 ` Christoph Hellwig
2025-10-16 19:03 ` [PATCH 3/6] xfs: flush eof folio before insert range size update Brian Foster
2025-11-05 0:14 ` Darrick J. Wong
2025-11-05 15:34 ` Brian Foster
2025-11-05 22:15 ` Darrick J. Wong
2025-10-16 19:03 ` [PATCH 4/6] xfs: look up cow fork extent earlier for buffered iomap_begin Brian Foster
2025-11-05 22:26 ` Darrick J. Wong
2025-10-16 19:03 ` [PATCH 5/6] xfs: only flush when COW fork blocks overlap data fork holes Brian Foster
2025-10-16 19:03 ` [PATCH 6/6] xfs: replace zero range flush with folio batch Brian Foster
2025-11-05 22:37 ` Darrick J. Wong [this message]
2025-11-06 15:53 ` Brian Foster