public inbox for linux-fsdevel@vger.kernel.org
 help / color / mirror / Atom feed
* [PATCH v4 0/8] iomap, xfs: improve zero range flushing and lookup
@ 2026-03-11 16:24 Brian Foster
  2026-03-11 16:24 ` [PATCH v4 1/8] xfs: fix iomap hole map reporting for zoned zero range Brian Foster
                   ` (9 more replies)
  0 siblings, 10 replies; 15+ messages in thread
From: Brian Foster @ 2026-03-11 16:24 UTC (permalink / raw)
  To: linux-fsdevel, linux-xfs

Hi all,

No significant changes in v4. A few whitespace fixes throughout and I've
added some R-b tags from v3 review. Thanks.

Brian

v4:
- Minor whitespace cleanups.
v3: https://lore.kernel.org/linux-fsdevel/20260309134506.167663-1-bfoster@redhat.com/
- Inserted new patches 1-2 to fix up zoned mode zeroing.
- Appended patch 8 to correctly report COW mappings backed by data fork
  holes.
- Various minor fixups to logic, whitespace, comments.
v2: https://lore.kernel.org/linux-fsdevel/20260129155028.141110-1-bfoster@redhat.com/
- Patch 1 from v1 merged separately.
- Fixed up iomap_fill_dirty_folios() call in patch 5.
v1: https://lore.kernel.org/linux-fsdevel/20251016190303.53881-1-bfoster@redhat.com/

Brian Foster (8):
  xfs: fix iomap hole map reporting for zoned zero range
  xfs: flush dirty pagecache over hole in zoned mode zero range
  iomap, xfs: lift zero range hole mapping flush into xfs
  xfs: flush eof folio before insert range size update
  xfs: look up cow fork extent earlier for buffered iomap_begin
  xfs: only flush when COW fork blocks overlap data fork holes
  xfs: replace zero range flush with folio batch
  xfs: report cow mappings with dirty pagecache for iomap zero range

 fs/iomap/buffered-io.c |   6 +-
 fs/xfs/xfs_file.c      |  17 +++++
 fs/xfs/xfs_iomap.c     | 146 +++++++++++++++++++++++++++++++----------
 3 files changed, 130 insertions(+), 39 deletions(-)

-- 
2.52.0


^ permalink raw reply	[flat|nested] 15+ messages in thread

* [PATCH v4 1/8] xfs: fix iomap hole map reporting for zoned zero range
  2026-03-11 16:24 [PATCH v4 0/8] iomap, xfs: improve zero range flushing and lookup Brian Foster
@ 2026-03-11 16:24 ` Brian Foster
  2026-03-11 16:24 ` [PATCH v4 2/8] xfs: flush dirty pagecache over hole in zoned mode " Brian Foster
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 15+ messages in thread
From: Brian Foster @ 2026-03-11 16:24 UTC (permalink / raw)
  To: linux-fsdevel, linux-xfs

The hole mapping logic for zero range in zoned mode is not quite
correct. It currently reports a hole whenever one exists in the data
fork. However, if the first write to a sparse range has completed but
has not yet been written back, the blocks exist in the COW fork as
delalloc until writeback completes, at which point they are allocated
and mapped into the data fork. If a zero range runs over such a range
before the data fork is populated, we incorrectly report it as a
hole.

Note that this currently functions correctly only because the
pagecache flush in iomap_zero_range() bails us out. If a hole or
unwritten mapping is reported with dirty pagecache, it assumes there
is pending data, flushes to induce any pending block
allocations/remaps, and retries the lookup. We want to remove this
hack from iomap, however, so update the zoned ->iomap_begin() handler
to only report a hole for zeroing when one exists in both forks.
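
The rule the patch enforces can be sketched as a toy model (userspace C,
not the kernel API; `toy_extent`, `fork_is_hole` and friends are
illustrative names, with forks reduced to a single [start, start+len)
extent):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Toy model of the fix: a zero range lookup may report a hole only
 * when the offset is a hole in BOTH the data fork and the COW fork.
 * A fork with no extent covering the offset is a hole there.
 */
struct toy_extent {
	uint64_t	start;	/* first block of the extent */
	uint64_t	len;	/* 0 means the fork has no extent here */
};

static bool fork_is_hole(const struct toy_extent *ext, uint64_t offset)
{
	return ext->len == 0 ||
	       offset < ext->start || offset >= ext->start + ext->len;
}

/* The buggy pre-patch check consulted only the data fork. */
static bool report_hole_old(const struct toy_extent *data, uint64_t offset)
{
	return fork_is_hole(data, offset);
}

/* The fixed check requires a hole in both forks. */
static bool report_hole_new(const struct toy_extent *data,
			    const struct toy_extent *cow, uint64_t offset)
{
	return fork_is_hole(data, offset) && fork_is_hole(cow, offset);
}
```

With a data fork hole fronted by COW delalloc, the old check spuriously
reports a hole while the new one does not.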

Signed-off-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: "Darrick J. Wong" <djwong@kernel.org>
---
 fs/xfs/xfs_iomap.c | 18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
index be86d43044df..8c3469d2c73e 100644
--- a/fs/xfs/xfs_iomap.c
+++ b/fs/xfs/xfs_iomap.c
@@ -1651,14 +1651,6 @@ xfs_zoned_buffered_write_iomap_begin(
 				&smap))
 			smap.br_startoff = end_fsb; /* fake hole until EOF */
 		if (smap.br_startoff > offset_fsb) {
-			/*
-			 * We never need to allocate blocks for zeroing a hole.
-			 */
-			if (flags & IOMAP_ZERO) {
-				xfs_hole_to_iomap(ip, iomap, offset_fsb,
-						smap.br_startoff);
-				goto out_unlock;
-			}
 			end_fsb = min(end_fsb, smap.br_startoff);
 		} else {
 			end_fsb = min(end_fsb,
@@ -1690,6 +1682,16 @@ xfs_zoned_buffered_write_iomap_begin(
 	count_fsb = min3(end_fsb - offset_fsb, XFS_MAX_BMBT_EXTLEN,
 			 XFS_B_TO_FSB(mp, 1024 * PAGE_SIZE));
 
+	/*
+	 * When zeroing, don't allocate blocks for holes as they are already
+	 * zeroes, but we need to ensure that no extents exist in both the data
+	 * and COW fork to ensure this really is a hole.
+	 */
+	if ((flags & IOMAP_ZERO) && srcmap->type == IOMAP_HOLE) {
+		xfs_hole_to_iomap(ip, iomap, offset_fsb, end_fsb);
+		goto out_unlock;
+	}
+
 	/*
 	 * The block reservation is supposed to cover all blocks that the
 	 * operation could possible write, but there is a nasty corner case
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH v4 2/8] xfs: flush dirty pagecache over hole in zoned mode zero range
  2026-03-11 16:24 [PATCH v4 0/8] iomap, xfs: improve zero range flushing and lookup Brian Foster
  2026-03-11 16:24 ` [PATCH v4 1/8] xfs: fix iomap hole map reporting for zoned zero range Brian Foster
@ 2026-03-11 16:24 ` Brian Foster
  2026-03-11 22:14   ` Darrick J. Wong
  2026-03-11 16:24 ` [PATCH v4 3/8] iomap, xfs: lift zero range hole mapping flush into xfs Brian Foster
                   ` (7 subsequent siblings)
  9 siblings, 1 reply; 15+ messages in thread
From: Brian Foster @ 2026-03-11 16:24 UTC (permalink / raw)
  To: linux-fsdevel, linux-xfs

For zoned filesystems a window exists between the first write to a
sparse range (i.e. data fork hole) and writeback completion where we
might spuriously observe holes in both the COW and data forks. This
occurs because a buffered write populates the COW fork with
delalloc, writeback submission removes the COW fork delalloc blocks
and unlocks the inode, and then writeback completion remaps the
physically allocated blocks into the data fork. If a zero range
operation does a lookup during this window where both forks show a
hole, it incorrectly reports a hole mapping for a range that
contains data.

This currently works because iomap checks for dirty pagecache over
holes and unwritten mappings. If found, it flushes and retries the
lookup. We plan to remove the hole flush logic from iomap, however,
so lift the flush into xfs_zoned_buffered_write_iomap_begin() to
preserve behavior and document the purpose for it. Zoned XFS
filesystems don't support unwritten extents, so if zoned mode can
come up with a way to close this transient hole window in the
future, this flush can likely be removed.
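
The flush-and-retry pattern the patch lifts into the handler can be
sketched as follows (a userspace toy, not kernel code; `toy_state` and
the helpers are invented names, and dirty cache over a double hole is
assumed to always mean pending COW data, mirroring the commit message):

```c
#include <assert.h>
#include <stdbool.h>

struct toy_state {
	bool	dirty;		  /* pagecache dirty over the range */
	bool	data_fork_mapped; /* blocks have landed in the data fork */
};

/* Writeback remaps pending COW blocks into the data fork, cleaning cache. */
static void toy_write_and_wait(struct toy_state *s)
{
	if (s->dirty) {
		s->data_fork_mapped = true;
		s->dirty = false;
	}
}

/*
 * Lookup loop: if both forks look like a hole but the range needs
 * writeback, drop locks (elided here), flush, and retry the lookup.
 */
static bool toy_zero_lookup_is_hole(struct toy_state *s)
{
restart:
	if (!s->data_fork_mapped) {	/* hole in both forks */
		if (s->dirty) {
			toy_write_and_wait(s);
			goto restart;
		}
		return true;		/* genuine hole: nothing to zero */
	}
	return false;			/* mapped: zeroing proceeds */
}
```

A range caught in the transient window is flushed and then resolves to a
real mapping; a clean double hole is reported as a hole directly.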

Signed-off-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
 fs/xfs/xfs_iomap.c | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
index 8c3469d2c73e..d3b8c018c883 100644
--- a/fs/xfs/xfs_iomap.c
+++ b/fs/xfs/xfs_iomap.c
@@ -1590,6 +1590,7 @@ xfs_zoned_buffered_write_iomap_begin(
 {
 	struct iomap_iter	*iter =
 		container_of(iomap, struct iomap_iter, iomap);
+	struct address_space	*mapping = inode->i_mapping;
 	struct xfs_zone_alloc_ctx *ac = iter->private;
 	struct xfs_inode	*ip = XFS_I(inode);
 	struct xfs_mount	*mp = ip->i_mount;
@@ -1614,6 +1615,7 @@ xfs_zoned_buffered_write_iomap_begin(
 	if (error)
 		return error;
 
+restart:
 	error = xfs_ilock_for_iomap(ip, flags, &lockmode);
 	if (error)
 		return error;
@@ -1686,8 +1688,25 @@ xfs_zoned_buffered_write_iomap_begin(
 	 * When zeroing, don't allocate blocks for holes as they are already
 	 * zeroes, but we need to ensure that no extents exist in both the data
 	 * and COW fork to ensure this really is a hole.
+	 *
+	 * A window exists where we might observe a hole in both forks with
+	 * valid data in cache. Writeback removes the COW fork blocks on
+	 * submission but doesn't remap into the data fork until completion. If
+	 * the data fork was previously a hole, we'll fail to zero. Until we
+	 * find a way to avoid this transient state, check for dirty pagecache
+	 * and flush to wait on blocks to land in the data fork.
 	 */
 	if ((flags & IOMAP_ZERO) && srcmap->type == IOMAP_HOLE) {
+		if (filemap_range_needs_writeback(mapping, offset,
+				offset + count - 1)) {
+			xfs_iunlock(ip, lockmode);
+			error = filemap_write_and_wait_range(mapping, offset,
+					offset + count - 1);
+			if (error)
+				return error;
+			goto restart;
+		}
+
 		xfs_hole_to_iomap(ip, iomap, offset_fsb, end_fsb);
 		goto out_unlock;
 	}
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH v4 3/8] iomap, xfs: lift zero range hole mapping flush into xfs
  2026-03-11 16:24 [PATCH v4 0/8] iomap, xfs: improve zero range flushing and lookup Brian Foster
  2026-03-11 16:24 ` [PATCH v4 1/8] xfs: fix iomap hole map reporting for zoned zero range Brian Foster
  2026-03-11 16:24 ` [PATCH v4 2/8] xfs: flush dirty pagecache over hole in zoned mode " Brian Foster
@ 2026-03-11 16:24 ` Brian Foster
  2026-03-11 16:24 ` [PATCH v4 4/8] xfs: flush eof folio before insert range size update Brian Foster
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 15+ messages in thread
From: Brian Foster @ 2026-03-11 16:24 UTC (permalink / raw)
  To: linux-fsdevel, linux-xfs

iomap zero range has a wart in that it also flushes dirty pagecache
over hole mappings (rather than only unwritten mappings). This was
included to accommodate a quirk in XFS where COW fork preallocation
can exist over a hole in the data fork, and the associated range is
reported as a hole. This is because the range actually is a hole,
but XFS also has an optimization where if COW fork blocks exist for
a range being written to, those blocks are used regardless of
whether the data fork blocks are shared or not. For zeroing, COW
fork blocks over a data fork hole are only relevant if the range is
dirty in pagecache, otherwise the range is already considered
zeroed.

The easiest way to deal with this corner case is to flush the
pagecache to trigger COW remapping into the data fork, and then
operate on the updated on-disk state. The problem is that ext4
cannot accommodate a flush from this context due to being a
transaction deadlock vector.

Outside of the hole quirk, ext4 can avoid the flush for zero range
by using the recently introduced folio batch lookup mechanism for
unwritten mappings. Therefore, take the next logical step and lift
the hole handling logic into the XFS iomap_begin handler. iomap will
still flush on unwritten mappings without a folio batch, and XFS
will flush and retry mapping lookups in the case where it would
otherwise report a hole with dirty pagecache during a zero range.

Note that this is intended to be a fairly straightforward lift and
otherwise not change behavior. Now that the flush exists within XFS,
follow on patches can further optimize it.
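
The iomap-side dispatch after this patch can be summarized as a small
decision function (a sketch with toy names, mirroring the diff's
`range_dirty && srcmap->type == IOMAP_UNWRITTEN` condition; not the
actual iomap code):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

enum toy_map { TOY_HOLE, TOY_UNWRITTEN, TOY_MAPPED };

/*
 * After the lift, iomap only flushes dirty cache over unwritten
 * mappings without a folio batch; holes are taken at face value
 * because the filesystem now vouches for them.
 */
static const char *toy_zero_action(enum toy_map type, bool folio_batch,
				   bool range_dirty)
{
	if (!folio_batch && (type == TOY_HOLE || type == TOY_UNWRITTEN)) {
		if (range_dirty && type == TOY_UNWRITTEN)
			return "flush-and-stale";
		return "skip";		/* already zero on disk */
	}
	return "zero-folios";		/* mapped, or batch supplied */
}
```

A dirty hole no longer triggers a flush from iomap; that responsibility
now lives in the XFS ->iomap_begin handler.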

Signed-off-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: "Darrick J. Wong" <djwong@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
 fs/iomap/buffered-io.c |  2 +-
 fs/xfs/xfs_iomap.c     | 25 ++++++++++++++++++++++---
 2 files changed, 23 insertions(+), 4 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index bc82083e420a..0999aca6e5cc 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -1642,7 +1642,7 @@ iomap_zero_range(struct inode *inode, loff_t pos, loff_t len, bool *did_zero,
 		     srcmap->type == IOMAP_UNWRITTEN)) {
 			s64 status;
 
-			if (range_dirty) {
+			if (range_dirty && srcmap->type == IOMAP_UNWRITTEN) {
 				range_dirty = false;
 				status = iomap_zero_iter_flush_and_stale(&iter);
 			} else {
diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
index d3b8c018c883..2ace8b8ffc86 100644
--- a/fs/xfs/xfs_iomap.c
+++ b/fs/xfs/xfs_iomap.c
@@ -1811,6 +1811,7 @@ xfs_buffered_write_iomap_begin(
 	if (error)
 		return error;
 
+restart:
 	error = xfs_ilock_for_iomap(ip, flags, &lockmode);
 	if (error)
 		return error;
@@ -1838,9 +1839,27 @@ xfs_buffered_write_iomap_begin(
 	if (eof)
 		imap.br_startoff = end_fsb; /* fake hole until the end */
 
-	/* We never need to allocate blocks for zeroing or unsharing a hole. */
-	if ((flags & (IOMAP_UNSHARE | IOMAP_ZERO)) &&
-	    imap.br_startoff > offset_fsb) {
+	/* We never need to allocate blocks for unsharing a hole. */
+	if ((flags & IOMAP_UNSHARE) && imap.br_startoff > offset_fsb) {
+		xfs_hole_to_iomap(ip, iomap, offset_fsb, imap.br_startoff);
+		goto out_unlock;
+	}
+
+	/*
+	 * We may need to zero over a hole in the data fork if it's fronted by
+	 * COW blocks and dirty pagecache. To make sure zeroing occurs, force
+	 * writeback to remap pending blocks and restart the lookup.
+	 */
+	if ((flags & IOMAP_ZERO) && imap.br_startoff > offset_fsb) {
+		if (filemap_range_needs_writeback(inode->i_mapping, offset,
+				offset + count - 1)) {
+			xfs_iunlock(ip, lockmode);
+			error = filemap_write_and_wait_range(inode->i_mapping,
+					offset, offset + count - 1);
+			if (error)
+				return error;
+			goto restart;
+		}
 		xfs_hole_to_iomap(ip, iomap, offset_fsb, imap.br_startoff);
 		goto out_unlock;
 	}
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH v4 4/8] xfs: flush eof folio before insert range size update
  2026-03-11 16:24 [PATCH v4 0/8] iomap, xfs: improve zero range flushing and lookup Brian Foster
                   ` (2 preceding siblings ...)
  2026-03-11 16:24 ` [PATCH v4 3/8] iomap, xfs: lift zero range hole mapping flush into xfs Brian Foster
@ 2026-03-11 16:24 ` Brian Foster
  2026-03-11 21:36   ` Darrick J. Wong
  2026-03-11 16:24 ` [PATCH v4 5/8] xfs: look up cow fork extent earlier for buffered iomap_begin Brian Foster
                   ` (5 subsequent siblings)
  9 siblings, 1 reply; 15+ messages in thread
From: Brian Foster @ 2026-03-11 16:24 UTC (permalink / raw)
  To: linux-fsdevel, linux-xfs

The flush in xfs_buffered_write_iomap_begin() for zero range over a
data fork hole fronted by COW fork prealloc is primarily designed to
provide correct zeroing behavior in particular pagecache conditions.
As it turns out, this also partially masks some odd behavior in
insert range (via zero range via setattr).

Insert range bumps i_size the length of the new range, flushes,
unmaps pagecache and cancels COW prealloc, and then right shifts
extents from the end of the file back to the target offset of the
insert. Since the i_size update occurs before the pagecache flush,
this creates a transient situation where writeback around EOF can
behave differently.

This appears to be a corner case, but if the range happens to be
fronted by COW fork speculative preallocation and a large, dirty
folio that contains at least one full COW block beyond EOF, the
writeback after i_size is bumped may remap that COW fork block into
the data fork within EOF. The block is zeroed and then shifted back
out to post-eof, but this is unexpected in that it leads to a
written post-eof data fork block. This can cause a zero range
warning on a subsequent size extension, because we should never find
blocks that require physical zeroing beyond i_size.

To avoid this quirk, flush the EOF folio before the i_size update
during insert range. The entire range will be flushed, unmapped and
invalidated anyway, so this should be relatively unnoticeable.
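
The patch flushes the one-byte range [isize, isize]; writeback rounds
that to the boundaries of the folio spanning EOF, which is why a single
byte suffices. A sketch of that rounding, assuming a hypothetical fixed
4k folio size (real folios may be larger, which only widens the range
written back):

```c
#include <assert.h>
#include <stdint.h>

#define TOY_FOLIO_SIZE	4096u

/* Round a file position down/up to the enclosing toy folio. */
static uint64_t toy_folio_start(uint64_t pos)
{
	return pos & ~(uint64_t)(TOY_FOLIO_SIZE - 1);
}

static uint64_t toy_folio_end(uint64_t pos)
{
	return toy_folio_start(pos) + TOY_FOLIO_SIZE - 1;
}
```

Flushing byte `isize` therefore cleans the entire EOF folio while i_size
is still the old value, before the insert range bumps it.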

Signed-off-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
 fs/xfs/xfs_file.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
index 6246f34df9fd..48d812b99282 100644
--- a/fs/xfs/xfs_file.c
+++ b/fs/xfs/xfs_file.c
@@ -1263,6 +1263,23 @@ xfs_falloc_insert_range(
 	if (offset >= isize)
 		return -EINVAL;
 
+	/*
+	 * Let writeback clean up EOF folio state before we bump i_size. The
+	 * insert flushes before it starts shifting and under certain
+	 * circumstances we can write back blocks that should technically be
+	 * considered post-eof (and thus should not be submitted for writeback).
+	 *
+	 * For example, a large, dirty folio that spans EOF and is backed by
+	 * post-eof COW fork preallocation can cause block remap into the data
+	 * fork. This shifts back out beyond EOF, but creates an unexpectedly
+	 * written post-eof block. The insert is going to flush, unmap and
+	 * cancel prealloc across this whole range, so flush EOF now before we
+	 * bump i_size to provide consistent behavior.
+	 */
+	error = filemap_write_and_wait_range(inode->i_mapping, isize, isize);
+	if (error)
+		return error;
+
 	error = xfs_falloc_setsize(file, isize + len);
 	if (error)
 		return error;
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH v4 5/8] xfs: look up cow fork extent earlier for buffered iomap_begin
  2026-03-11 16:24 [PATCH v4 0/8] iomap, xfs: improve zero range flushing and lookup Brian Foster
                   ` (3 preceding siblings ...)
  2026-03-11 16:24 ` [PATCH v4 4/8] xfs: flush eof folio before insert range size update Brian Foster
@ 2026-03-11 16:24 ` Brian Foster
  2026-03-11 16:25 ` [PATCH v4 6/8] xfs: only flush when COW fork blocks overlap data fork holes Brian Foster
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 15+ messages in thread
From: Brian Foster @ 2026-03-11 16:24 UTC (permalink / raw)
  To: linux-fsdevel, linux-xfs

To further isolate the need for flushing for zero range, we need to
know whether a hole in the data fork is fronted by blocks in the COW
fork or not. COW fork lookup currently occurs further down in the
function, after the zero range case is handled.

As a preparation step, lift the COW fork extent lookup to earlier in
the function, at the same time as the data fork lookup. Only the
lookup logic is lifted. The COW fork branch/reporting logic remains
as is to avoid any observable behavior change from an iomap
reporting perspective.

Signed-off-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: "Darrick J. Wong" <djwong@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
 fs/xfs/xfs_iomap.c | 46 +++++++++++++++++++++++++---------------------
 1 file changed, 25 insertions(+), 21 deletions(-)

diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
index 2ace8b8ffc86..df1eab646cae 100644
--- a/fs/xfs/xfs_iomap.c
+++ b/fs/xfs/xfs_iomap.c
@@ -1830,14 +1830,29 @@ xfs_buffered_write_iomap_begin(
 		goto out_unlock;
 
 	/*
-	 * Search the data fork first to look up our source mapping.  We
-	 * always need the data fork map, as we have to return it to the
-	 * iomap code so that the higher level write code can read data in to
-	 * perform read-modify-write cycles for unaligned writes.
+	 * Search the data fork first to look up our source mapping. We always
+	 * need the data fork map, as we have to return it to the iomap code so
+	 * that the higher level write code can read data in to perform
+	 * read-modify-write cycles for unaligned writes.
+	 *
+	 * Then search the COW fork extent list even if we did not find a data
+	 * fork extent. This serves two purposes: first this implements the
+	 * speculative preallocation using cowextsize, so that we also unshare
+	 * block adjacent to shared blocks instead of just the shared blocks
+	 * themselves. Second the lookup in the extent list is generally faster
+	 * than going out to the shared extent tree.
 	 */
 	eof = !xfs_iext_lookup_extent(ip, &ip->i_df, offset_fsb, &icur, &imap);
 	if (eof)
 		imap.br_startoff = end_fsb; /* fake hole until the end */
+	if (xfs_is_cow_inode(ip)) {
+		if (!ip->i_cowfp) {
+			ASSERT(!xfs_is_reflink_inode(ip));
+			xfs_ifork_init_cow(ip);
+		}
+		cow_eof = !xfs_iext_lookup_extent(ip, ip->i_cowfp, offset_fsb,
+				&ccur, &cmap);
+	}
 
 	/* We never need to allocate blocks for unsharing a hole. */
 	if ((flags & IOMAP_UNSHARE) && imap.br_startoff > offset_fsb) {
@@ -1904,24 +1919,13 @@ xfs_buffered_write_iomap_begin(
 	}
 
 	/*
-	 * Search the COW fork extent list even if we did not find a data fork
-	 * extent.  This serves two purposes: first this implements the
-	 * speculative preallocation using cowextsize, so that we also unshare
-	 * block adjacent to shared blocks instead of just the shared blocks
-	 * themselves.  Second the lookup in the extent list is generally faster
-	 * than going out to the shared extent tree.
+	 * Now that we've handled any operation specific special cases, at this
+	 * point we can report a COW mapping if found.
 	 */
-	if (xfs_is_cow_inode(ip)) {
-		if (!ip->i_cowfp) {
-			ASSERT(!xfs_is_reflink_inode(ip));
-			xfs_ifork_init_cow(ip);
-		}
-		cow_eof = !xfs_iext_lookup_extent(ip, ip->i_cowfp, offset_fsb,
-				&ccur, &cmap);
-		if (!cow_eof && cmap.br_startoff <= offset_fsb) {
-			trace_xfs_reflink_cow_found(ip, &cmap);
-			goto found_cow;
-		}
+	if (xfs_is_cow_inode(ip) &&
+	    !cow_eof && cmap.br_startoff <= offset_fsb) {
+		trace_xfs_reflink_cow_found(ip, &cmap);
+		goto found_cow;
 	}
 
 	if (imap.br_startoff <= offset_fsb) {
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH v4 6/8] xfs: only flush when COW fork blocks overlap data fork holes
  2026-03-11 16:24 [PATCH v4 0/8] iomap, xfs: improve zero range flushing and lookup Brian Foster
                   ` (4 preceding siblings ...)
  2026-03-11 16:24 ` [PATCH v4 5/8] xfs: look up cow fork extent earlier for buffered iomap_begin Brian Foster
@ 2026-03-11 16:25 ` Brian Foster
  2026-03-11 16:25 ` [PATCH v4 7/8] xfs: replace zero range flush with folio batch Brian Foster
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 15+ messages in thread
From: Brian Foster @ 2026-03-11 16:25 UTC (permalink / raw)
  To: linux-fsdevel, linux-xfs

The zero range hole mapping flush case has been lifted from iomap
into XFS. Now that we have more mapping context available from the
->iomap_begin() handler, we can isolate the flush further to when we
know a hole is fronted by COW blocks.

Rather than purely rely on pagecache dirty state, explicitly check
for the case where a range is a hole in both forks. Otherwise trim
to the range where there does happen to be overlap and use that for
the pagecache writeback check. This might prevent some spurious
zeroing, but more importantly makes it easier to remove the flush
entirely.
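
The three cases the patch distinguishes for a data fork hole at
[offset, end) against the first COW extent can be sketched as (toy
userspace code; `TOY_NO_COW` stands in for NULLFILEOFF and the return
value is the hole length reportable without any flush, 0 meaning COW
blocks front the hole and the writeback check is needed):

```c
#include <assert.h>
#include <stdint.h>

#define TOY_NO_COW	UINT64_MAX	/* no COW extent at or past offset */

static uint64_t toy_trim_hole(uint64_t offset, uint64_t end,
			      uint64_t cow_start)
{
	if (cow_start >= end)
		return end - offset;	/* hole in both forks: report it */
	if (cow_start > offset)
		return cow_start - offset; /* report hole trimmed to COW */
	return 0;			/* COW fronts the hole: flush path */
}
```

Only the third case still pays for the flush; the first two report a
(possibly trimmed) hole immediately.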

Signed-off-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: "Darrick J. Wong" <djwong@kernel.org>
---
 fs/xfs/xfs_iomap.c | 36 ++++++++++++++++++++++++++++++------
 1 file changed, 30 insertions(+), 6 deletions(-)

diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
index df1eab646cae..6229a0bf793b 100644
--- a/fs/xfs/xfs_iomap.c
+++ b/fs/xfs/xfs_iomap.c
@@ -1781,10 +1781,12 @@ xfs_buffered_write_iomap_begin(
 {
 	struct iomap_iter	*iter = container_of(iomap, struct iomap_iter,
 						     iomap);
+	struct address_space	*mapping = inode->i_mapping;
 	struct xfs_inode	*ip = XFS_I(inode);
 	struct xfs_mount	*mp = ip->i_mount;
 	xfs_fileoff_t		offset_fsb = XFS_B_TO_FSBT(mp, offset);
 	xfs_fileoff_t		end_fsb = xfs_iomap_end_fsb(mp, offset, count);
+	xfs_fileoff_t		cow_fsb = NULLFILEOFF;
 	struct xfs_bmbt_irec	imap, cmap;
 	struct xfs_iext_cursor	icur, ccur;
 	xfs_fsblock_t		prealloc_blocks = 0;
@@ -1852,6 +1854,8 @@ xfs_buffered_write_iomap_begin(
 		}
 		cow_eof = !xfs_iext_lookup_extent(ip, ip->i_cowfp, offset_fsb,
 				&ccur, &cmap);
+		if (!cow_eof)
+			cow_fsb = cmap.br_startoff;
 	}
 
 	/* We never need to allocate blocks for unsharing a hole. */
@@ -1866,17 +1870,37 @@ xfs_buffered_write_iomap_begin(
 	 * writeback to remap pending blocks and restart the lookup.
 	 */
 	if ((flags & IOMAP_ZERO) && imap.br_startoff > offset_fsb) {
-		if (filemap_range_needs_writeback(inode->i_mapping, offset,
-				offset + count - 1)) {
+		loff_t	start, end;
+
+		imap.br_blockcount = imap.br_startoff - offset_fsb;
+		imap.br_startoff = offset_fsb;
+		imap.br_startblock = HOLESTARTBLOCK;
+		imap.br_state = XFS_EXT_NORM;
+
+		if (cow_fsb == NULLFILEOFF)
+			goto found_imap;
+		if (cow_fsb > offset_fsb) {
+			xfs_trim_extent(&imap, offset_fsb,
+					cow_fsb - offset_fsb);
+			goto found_imap;
+		}
+
+		/* COW fork blocks overlap the hole */
+		xfs_trim_extent(&imap, offset_fsb,
+			    cmap.br_startoff + cmap.br_blockcount - offset_fsb);
+		start = XFS_FSB_TO_B(mp, imap.br_startoff);
+		end = XFS_FSB_TO_B(mp,
+				   imap.br_startoff + imap.br_blockcount) - 1;
+		if (filemap_range_needs_writeback(mapping, start, end)) {
 			xfs_iunlock(ip, lockmode);
-			error = filemap_write_and_wait_range(inode->i_mapping,
-					offset, offset + count - 1);
+			error = filemap_write_and_wait_range(mapping, start,
+							     end);
 			if (error)
 				return error;
 			goto restart;
 		}
-		xfs_hole_to_iomap(ip, iomap, offset_fsb, imap.br_startoff);
-		goto out_unlock;
+
+		goto found_imap;
 	}
 
 	/*
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH v4 7/8] xfs: replace zero range flush with folio batch
  2026-03-11 16:24 [PATCH v4 0/8] iomap, xfs: improve zero range flushing and lookup Brian Foster
                   ` (5 preceding siblings ...)
  2026-03-11 16:25 ` [PATCH v4 6/8] xfs: only flush when COW fork blocks overlap data fork holes Brian Foster
@ 2026-03-11 16:25 ` Brian Foster
  2026-03-11 16:25 ` [PATCH v4 8/8] xfs: report cow mappings with dirty pagecache for iomap zero range Brian Foster
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 15+ messages in thread
From: Brian Foster @ 2026-03-11 16:25 UTC (permalink / raw)
  To: linux-fsdevel, linux-xfs

Now that the zero range pagecache flush is purely isolated to
providing zeroing correctness in this case, we can remove it and
replace it with the folio batch mechanism that is used for handling
unwritten extents.

This is still slightly odd in that XFS reports a hole vs. a mapping
that reflects the COW fork extents, but that has always been the case
in this situation and so is a separate issue. We drop the iomap
warning that assumes the folio batch is always associated with
unwritten mappings, but this is mainly a development assertion;
otherwise the core iomap fbatch code doesn't care much about the
mapping type once it's handed the set of folios to process.
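
The fill-then-trim shape of the batch lookup can be modeled roughly as
follows (a speculative userspace toy: a per-block dirty bitmap and a
capacity-limited scan stand in for folios and the real pagecache walk;
`toy_fill_dirty` is an invented name, and the assumption that the scan
advances `*start` to where it stopped mirrors how the caller then trims
its mapping to the scanned range):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Scan [*start, end) for dirty blocks, batching up to cap of them.
 * On return *start points past the last block examined, so the caller
 * can trim its mapping to the range the batch actually covers.
 */
static unsigned int toy_fill_dirty(const bool dirty[], uint64_t *start,
				   uint64_t end, unsigned int cap)
{
	unsigned int batched = 0;

	while (*start < end && batched < cap) {
		if (dirty[*start])
			batched++;
		(*start)++;
	}
	return batched;
}
```

When the batch fills early, the mapping is shortened and the next
iteration resumes the scan, replacing the old flush-and-retry dance.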

Signed-off-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: "Darrick J. Wong" <djwong@kernel.org>
---
 fs/iomap/buffered-io.c |  4 ----
 fs/xfs/xfs_iomap.c     | 20 ++++++--------------
 2 files changed, 6 insertions(+), 18 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 0999aca6e5cc..4422a6d477d7 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -1633,10 +1633,6 @@ iomap_zero_range(struct inode *inode, loff_t pos, loff_t len, bool *did_zero,
 	while ((ret = iomap_iter(&iter, ops)) > 0) {
 		const struct iomap *srcmap = iomap_iter_srcmap(&iter);
 
-		if (WARN_ON_ONCE((iter.iomap.flags & IOMAP_F_FOLIO_BATCH) &&
-				 srcmap->type != IOMAP_UNWRITTEN))
-			return -EIO;
-
 		if (!(iter.iomap.flags & IOMAP_F_FOLIO_BATCH) &&
 		    (srcmap->type == IOMAP_HOLE ||
 		     srcmap->type == IOMAP_UNWRITTEN)) {
diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
index 6229a0bf793b..51a55510d4a5 100644
--- a/fs/xfs/xfs_iomap.c
+++ b/fs/xfs/xfs_iomap.c
@@ -1781,7 +1781,6 @@ xfs_buffered_write_iomap_begin(
 {
 	struct iomap_iter	*iter = container_of(iomap, struct iomap_iter,
 						     iomap);
-	struct address_space	*mapping = inode->i_mapping;
 	struct xfs_inode	*ip = XFS_I(inode);
 	struct xfs_mount	*mp = ip->i_mount;
 	xfs_fileoff_t		offset_fsb = XFS_B_TO_FSBT(mp, offset);
@@ -1813,7 +1812,6 @@ xfs_buffered_write_iomap_begin(
 	if (error)
 		return error;
 
-restart:
 	error = xfs_ilock_for_iomap(ip, flags, &lockmode);
 	if (error)
 		return error;
@@ -1866,8 +1864,8 @@ xfs_buffered_write_iomap_begin(
 
 	/*
 	 * We may need to zero over a hole in the data fork if it's fronted by
-	 * COW blocks and dirty pagecache. To make sure zeroing occurs, force
-	 * writeback to remap pending blocks and restart the lookup.
+	 * COW blocks and dirty pagecache. Scan such file ranges for dirty
+	 * cache and fill the iomap batch with folios that need zeroing.
 	 */
 	if ((flags & IOMAP_ZERO) && imap.br_startoff > offset_fsb) {
 		loff_t	start, end;
@@ -1889,16 +1887,10 @@ xfs_buffered_write_iomap_begin(
 		xfs_trim_extent(&imap, offset_fsb,
 			    cmap.br_startoff + cmap.br_blockcount - offset_fsb);
 		start = XFS_FSB_TO_B(mp, imap.br_startoff);
-		end = XFS_FSB_TO_B(mp,
-				   imap.br_startoff + imap.br_blockcount) - 1;
-		if (filemap_range_needs_writeback(mapping, start, end)) {
-			xfs_iunlock(ip, lockmode);
-			error = filemap_write_and_wait_range(mapping, start,
-							     end);
-			if (error)
-				return error;
-			goto restart;
-		}
+		end = XFS_FSB_TO_B(mp, imap.br_startoff + imap.br_blockcount);
+		iomap_fill_dirty_folios(iter, &start, end, &iomap_flags);
+		xfs_trim_extent(&imap, offset_fsb,
+				XFS_B_TO_FSB(mp, start) - offset_fsb);
 
 		goto found_imap;
 	}
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH v4 8/8] xfs: report cow mappings with dirty pagecache for iomap zero range
  2026-03-11 16:24 [PATCH v4 0/8] iomap, xfs: improve zero range flushing and lookup Brian Foster
                   ` (6 preceding siblings ...)
  2026-03-11 16:25 ` [PATCH v4 7/8] xfs: replace zero range flush with folio batch Brian Foster
@ 2026-03-11 16:25 ` Brian Foster
  2026-03-12  6:55   ` Christoph Hellwig
  2026-03-18  9:13 ` [PATCH v4 0/8] iomap, xfs: improve zero range flushing and lookup Carlos Maiolino
  2026-03-23 10:47 ` Carlos Maiolino
  9 siblings, 1 reply; 15+ messages in thread
From: Brian Foster @ 2026-03-11 16:25 UTC (permalink / raw)
  To: linux-fsdevel, linux-xfs

XFS has long supported the case where it is possible to have dirty
data in pagecache backed by COW fork blocks and a hole in the data
fork. This occurs for two reasons. First, on reflink enabled files,
COW fork blocks are allocated with preallocation to help avoid
fragmentation. Second, if a mapping lookup for a write finds blocks in
the COW fork, it consumes those blocks unconditionally. This might
mean that COW fork blocks are backed by non-shared blocks or even a
hole in the data fork, both of which are perfectly fine.

This leaves an odd corner case for zero range, however, because it
needs to distinguish between ranges that are sparse and thus do not
require zeroing and those that are not. A range backed by COW fork
blocks and a data fork hole might either be a legitimate hole in the
file or a range with pending buffered writes that will be written
back (which will remap COW fork blocks into the data fork).

This "COW fork blocks over data fork hole" situation has
historically been reported as a hole to iomap, which then has grown
a flush hack as a workaround to ensure zeroing occurs correctly. Now
that this has been lifted into the filesystem and replaced by the
dirty folio lookup mechanism, we can do better and use the pagecache
state to decide how to report the mapping. If a COW fork range
exists with dirty folios in cache, then report a typical shared
mapping. If the range is clean in cache, then we can consider the
COW blocks preallocation and call it a hole.

This doesn't fundamentally change behavior, but makes mapping
reporting more accurate. Note that this does require splitting
across the EOF boundary (similar to normal zero range) to ensure we
don't spuriously perform post-eof zeroing. iomap will warn about
zeroing beyond EOF because folios beyond i_size may not be written
back.
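
The reporting decision this patch adds reduces to a small rule, sketched
here with toy names (it assumes, as in the diff, that the range has
already been trimmed to the COW overlap and split at the EOF boundary):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/*
 * If the dirty-folio scan batched anything, report the COW mapping so
 * the pending data gets zeroed; otherwise the COW blocks are mere
 * preallocation and the range is a legitimate hole. Nothing past EOF
 * is ever zeroed.
 */
static const char *toy_report(unsigned int dirty_folio_count, bool past_eof)
{
	if (past_eof)
		return "hole";		/* never zero beyond EOF */
	return dirty_folio_count ? "cow-mapping" : "hole";
}
```

Clean COW preallocation over a data fork hole thus keeps looking like a
hole, while dirty cache promotes it to a reported shared mapping.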

Signed-off-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: "Darrick J. Wong" <djwong@kernel.org>
---
 fs/xfs/xfs_iomap.c | 26 ++++++++++++++++++++++----
 1 file changed, 22 insertions(+), 4 deletions(-)

diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
index 51a55510d4a5..dbd49e838889 100644
--- a/fs/xfs/xfs_iomap.c
+++ b/fs/xfs/xfs_iomap.c
@@ -1786,6 +1786,7 @@ xfs_buffered_write_iomap_begin(
 	xfs_fileoff_t		offset_fsb = XFS_B_TO_FSBT(mp, offset);
 	xfs_fileoff_t		end_fsb = xfs_iomap_end_fsb(mp, offset, count);
 	xfs_fileoff_t		cow_fsb = NULLFILEOFF;
+	xfs_fileoff_t		eof_fsb = XFS_B_TO_FSB(mp, XFS_ISIZE(ip));
 	struct xfs_bmbt_irec	imap, cmap;
 	struct xfs_iext_cursor	icur, ccur;
 	xfs_fsblock_t		prealloc_blocks = 0;
@@ -1868,7 +1869,8 @@ xfs_buffered_write_iomap_begin(
 	 * cache and fill the iomap batch with folios that need zeroing.
 	 */
 	if ((flags & IOMAP_ZERO) && imap.br_startoff > offset_fsb) {
-		loff_t	start, end;
+		loff_t		start, end;
+		unsigned int	fbatch_count;
 
 		imap.br_blockcount = imap.br_startoff - offset_fsb;
 		imap.br_startoff = offset_fsb;
@@ -1883,15 +1885,33 @@ xfs_buffered_write_iomap_begin(
 			goto found_imap;
 		}
 
+		/* no zeroing beyond eof, so split at the boundary */
+		if (offset_fsb >= eof_fsb)
+			goto found_imap;
+		if (offset_fsb < eof_fsb && end_fsb > eof_fsb)
+			xfs_trim_extent(&imap, offset_fsb,
+					eof_fsb - offset_fsb);
+
 		/* COW fork blocks overlap the hole */
 		xfs_trim_extent(&imap, offset_fsb,
 			    cmap.br_startoff + cmap.br_blockcount - offset_fsb);
 		start = XFS_FSB_TO_B(mp, imap.br_startoff);
 		end = XFS_FSB_TO_B(mp, imap.br_startoff + imap.br_blockcount);
-		iomap_fill_dirty_folios(iter, &start, end, &iomap_flags);
+		fbatch_count = iomap_fill_dirty_folios(iter, &start, end,
+						       &iomap_flags);
 		xfs_trim_extent(&imap, offset_fsb,
 				XFS_B_TO_FSB(mp, start) - offset_fsb);
 
+		/*
+		 * Report the COW mapping if we have folios to zero. Otherwise
+		 * ignore the COW blocks as preallocation and report a hole.
+		 */
+		if (fbatch_count) {
+			xfs_trim_extent(&cmap, imap.br_startoff,
+					imap.br_blockcount);
+			imap.br_startoff = end_fsb;	/* fake hole */
+			goto found_cow;
+		}
 		goto found_imap;
 	}
 
@@ -1901,8 +1921,6 @@ xfs_buffered_write_iomap_begin(
 	 * unwritten extent.
 	 */
 	if (flags & IOMAP_ZERO) {
-		xfs_fileoff_t eof_fsb = XFS_B_TO_FSB(mp, XFS_ISIZE(ip));
-
 		if (isnullstartblock(imap.br_startblock) &&
 		    offset_fsb >= eof_fsb)
 			goto convert_delay;
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* Re: [PATCH v4 4/8] xfs: flush eof folio before insert range size update
  2026-03-11 16:24 ` [PATCH v4 4/8] xfs: flush eof folio before insert range size update Brian Foster
@ 2026-03-11 21:36   ` Darrick J. Wong
  0 siblings, 0 replies; 15+ messages in thread
From: Darrick J. Wong @ 2026-03-11 21:36 UTC (permalink / raw)
  To: Brian Foster; +Cc: linux-fsdevel, linux-xfs

On Wed, Mar 11, 2026 at 12:24:58PM -0400, Brian Foster wrote:
> The flush in xfs_buffered_write_iomap_begin() for zero range over a
> data fork hole fronted by COW fork prealloc is primarily designed to
> provide correct zeroing behavior in particular pagecache conditions.
> As it turns out, this also partially masks some odd behavior in
> insert range (via zero range via setattr).
> 
> Insert range bumps i_size the length of the new range, flushes,
> unmaps pagecache and cancels COW prealloc, and then right shifts
> extents from the end of the file back to the target offset of the
> insert. Since the i_size update occurs before the pagecache flush,
> this creates a transient situation where writeback around EOF can
> behave differently.
> 
> This appears to be a corner case situation, but if it happens to be
> fronted by COW fork speculative preallocation and a large, dirty
> folio that contains at least one full COW block beyond EOF, the
> writeback after i_size is bumped may remap that COW fork block into
> the data fork within EOF. The block is zeroed and then shifted back
> out to post-eof, but this is unexpected in that it leads to a
> written post-eof data fork block. This can cause a zero range
> warning on a subsequent size extension, because we should never find
> blocks that require physical zeroing beyond i_size.
> 
> To avoid this quirk, flush the EOF folio before the i_size update
> during insert range. The entire range will be flushed, unmapped and
> invalidated anyways, so this should be relatively unnoticeable.
> 
> Signed-off-by: Brian Foster <bfoster@redhat.com>
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> ---
>  fs/xfs/xfs_file.c | 17 +++++++++++++++++
>  1 file changed, 17 insertions(+)
> 
> diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
> index 6246f34df9fd..48d812b99282 100644
> --- a/fs/xfs/xfs_file.c
> +++ b/fs/xfs/xfs_file.c
> @@ -1263,6 +1263,23 @@ xfs_falloc_insert_range(
>  	if (offset >= isize)
>  		return -EINVAL;
>  
> +	/*
> +	 * Let writeback clean up EOF folio state before we bump i_size. The
> +	 * insert flushes before it starts shifting and under certain
> +	 * circumstances we can write back blocks that should technically be
> +	 * considered post-eof (and thus should not be submitted for writeback).
> +	 *
> +	 * For example, a large, dirty folio that spans EOF and is backed by
> +	 * post-eof COW fork preallocation can cause block remap into the data
> + * fork. This shifts back out beyond EOF, but creates an unexpectedly
> +	 * written post-eof block. The insert is going to flush, unmap and
> +	 * cancel prealloc across this whole range, so flush EOF now before we
> +	 * bump i_size to provide consistent behavior.
> +	 */
> +	error = filemap_write_and_wait_range(inode->i_mapping, isize, isize);

From what I can tell, ext4 has been doing something like this forever...

Reviewed-by: "Darrick J. Wong" <djwong@kernel.org>

--D

> +	if (error)
> +		return error;
> +
>  	error = xfs_falloc_setsize(file, isize + len);
>  	if (error)
>  		return error;
> -- 
> 2.52.0
> 
> 

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v4 2/8] xfs: flush dirty pagecache over hole in zoned mode zero range
  2026-03-11 16:24 ` [PATCH v4 2/8] xfs: flush dirty pagecache over hole in zoned mode " Brian Foster
@ 2026-03-11 22:14   ` Darrick J. Wong
  0 siblings, 0 replies; 15+ messages in thread
From: Darrick J. Wong @ 2026-03-11 22:14 UTC (permalink / raw)
  To: Brian Foster; +Cc: linux-fsdevel, linux-xfs

On Wed, Mar 11, 2026 at 12:24:56PM -0400, Brian Foster wrote:
> For zoned filesystems a window exists between the first write to a
> sparse range (i.e. data fork hole) and writeback completion where we
> might spuriously observe holes in both the COW and data forks. This
> occurs because a buffered write populates the COW fork with
> delalloc, writeback submission removes the COW fork delalloc blocks
> and unlocks the inode, and then writeback completion remaps the
> physically allocated blocks into the data fork. If a zero range
> operation does a lookup during this window where both forks show a
> hole, it incorrectly reports a hole mapping for a range that
> contains data.
> 
> This currently works because iomap checks for dirty pagecache over
> holes and unwritten mappings. If found, it flushes and retries the
> lookup. We plan to remove the hole flush logic from iomap, however,
> so lift the flush into xfs_zoned_buffered_write_iomap_begin() to
> preserve behavior and document the purpose for it. Zoned XFS
> filesystems don't support unwritten extents, so if zoned mode can
> come up with a way to close this transient hole window in the
> future, this flush can likely be removed.
> 
> Signed-off-by: Brian Foster <bfoster@redhat.com>
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> ---
>  fs/xfs/xfs_iomap.c | 19 +++++++++++++++++++
>  1 file changed, 19 insertions(+)
> 
> diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
> index 8c3469d2c73e..d3b8c018c883 100644
> --- a/fs/xfs/xfs_iomap.c
> +++ b/fs/xfs/xfs_iomap.c
> @@ -1590,6 +1590,7 @@ xfs_zoned_buffered_write_iomap_begin(
>  {
>  	struct iomap_iter	*iter =
>  		container_of(iomap, struct iomap_iter, iomap);
> +	struct address_space	*mapping = inode->i_mapping;
>  	struct xfs_zone_alloc_ctx *ac = iter->private;
>  	struct xfs_inode	*ip = XFS_I(inode);
>  	struct xfs_mount	*mp = ip->i_mount;
> @@ -1614,6 +1615,7 @@ xfs_zoned_buffered_write_iomap_begin(
>  	if (error)
>  		return error;
>  
> +restart:
>  	error = xfs_ilock_for_iomap(ip, flags, &lockmode);
>  	if (error)
>  		return error;
> @@ -1686,8 +1688,25 @@ xfs_zoned_buffered_write_iomap_begin(
>  	 * When zeroing, don't allocate blocks for holes as they are already
>  	 * zeroes, but we need to ensure that no extents exist in both the data
>  	 * and COW fork to ensure this really is a hole.
> +	 *
> +	 * A window exists where we might observe a hole in both forks with
> +	 * valid data in cache. Writeback removes the COW fork blocks on
> +	 * submission but doesn't remap into the data fork until completion. If
> +	 * the data fork was previously a hole, we'll fail to zero. Until we
> +	 * find a way to avoid this transient state, check for dirty pagecache
> +	 * and flush to wait on blocks to land in the data fork.
>  	 */
>  	if ((flags & IOMAP_ZERO) && srcmap->type == IOMAP_HOLE) {
> +		if (filemap_range_needs_writeback(mapping, offset,
> +				offset + count - 1)) {
> +			xfs_iunlock(ip, lockmode);
> +			error = filemap_write_and_wait_range(mapping, offset,
> +					offset + count - 1);
> +			if (error)
> +				return error;
> +			goto restart;
> +		}

Seems fine to me.

Reviewed-by: "Darrick J. Wong" <djwong@kernel.org>

--D

> +
>  		xfs_hole_to_iomap(ip, iomap, offset_fsb, end_fsb);
>  		goto out_unlock;
>  	}
> -- 
> 2.52.0
> 
> 

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v4 8/8] xfs: report cow mappings with dirty pagecache for iomap zero range
  2026-03-11 16:25 ` [PATCH v4 8/8] xfs: report cow mappings with dirty pagecache for iomap zero range Brian Foster
@ 2026-03-12  6:55   ` Christoph Hellwig
  0 siblings, 0 replies; 15+ messages in thread
From: Christoph Hellwig @ 2026-03-12  6:55 UTC (permalink / raw)
  To: Brian Foster; +Cc: linux-fsdevel, linux-xfs

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v4 0/8] iomap, xfs: improve zero range flushing and lookup
  2026-03-11 16:24 [PATCH v4 0/8] iomap, xfs: improve zero range flushing and lookup Brian Foster
                   ` (7 preceding siblings ...)
  2026-03-11 16:25 ` [PATCH v4 8/8] xfs: report cow mappings with dirty pagecache for iomap zero range Brian Foster
@ 2026-03-18  9:13 ` Carlos Maiolino
  2026-03-18  9:40   ` Christian Brauner
  2026-03-23 10:47 ` Carlos Maiolino
  9 siblings, 1 reply; 15+ messages in thread
From: Carlos Maiolino @ 2026-03-18  9:13 UTC (permalink / raw)
  To: brauner; +Cc: linux-fsdevel, linux-xfs, bfoster

On Wed, Mar 11, 2026 at 12:24:54PM -0400, Brian Foster wrote:
> Hi all,
> 
> No significant changes in v4. A few whitespace fixes throughout and I've
> added some R-b tags from v3 review. Thanks.
> 
> Brian

Christian, given that the majority of this series belongs to XFS, are you ok
if I pull this series through XFS tree, including the iomap patch?

Carlos

> 
> v4:
> - Minor whitespace cleanups.
> v3: https://lore.kernel.org/linux-fsdevel/20260309134506.167663-1-bfoster@redhat.com/
> - Inserted new patches 1-2 to fix up zoned mode zeroing.
> - Appended patch 8 to correctly report COW mappings backed by data fork
>   holes.
> - Various minor fixups to logic, whitespace, comments.
> v2: https://lore.kernel.org/linux-fsdevel/20260129155028.141110-1-bfoster@redhat.com/
> - Patch 1 from v1 merged separately.
> - Fixed up iomap_fill_dirty_folios() call in patch 5.
> v1: https://lore.kernel.org/linux-fsdevel/20251016190303.53881-1-bfoster@redhat.com/
> 
> Brian Foster (8):
>   xfs: fix iomap hole map reporting for zoned zero range
>   xfs: flush dirty pagecache over hole in zoned mode zero range
>   iomap, xfs: lift zero range hole mapping flush into xfs
>   xfs: flush eof folio before insert range size update
>   xfs: look up cow fork extent earlier for buffered iomap_begin
>   xfs: only flush when COW fork blocks overlap data fork holes
>   xfs: replace zero range flush with folio batch
>   xfs: report cow mappings with dirty pagecache for iomap zero range
> 
>  fs/iomap/buffered-io.c |   6 +-
>  fs/xfs/xfs_file.c      |  17 +++++
>  fs/xfs/xfs_iomap.c     | 146 +++++++++++++++++++++++++++++++----------
>  3 files changed, 130 insertions(+), 39 deletions(-)
> 
> -- 
> 2.52.0
> 
> 

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v4 0/8] iomap, xfs: improve zero range flushing and lookup
  2026-03-18  9:13 ` [PATCH v4 0/8] iomap, xfs: improve zero range flushing and lookup Carlos Maiolino
@ 2026-03-18  9:40   ` Christian Brauner
  0 siblings, 0 replies; 15+ messages in thread
From: Christian Brauner @ 2026-03-18  9:40 UTC (permalink / raw)
  To: Carlos Maiolino; +Cc: linux-fsdevel, linux-xfs, bfoster

On Wed, Mar 18, 2026 at 10:13:43AM +0100, Carlos Maiolino wrote:
> On Wed, Mar 11, 2026 at 12:24:54PM -0400, Brian Foster wrote:
> > Hi all,
> > 
> > No significant changes in v4. A few whitespace fixes throughout and I've
> > added some R-b tags from v3 review. Thanks.
> > 
> > Brian
> 
> Christian, given that the majority of this series belongs to XFS, are you ok
> if I pull this series through XFS tree, including the iomap patch?

Feel free to take it through the XFS tree. Thanks for asking!
Christian

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v4 0/8] iomap, xfs: improve zero range flushing and lookup
  2026-03-11 16:24 [PATCH v4 0/8] iomap, xfs: improve zero range flushing and lookup Brian Foster
                   ` (8 preceding siblings ...)
  2026-03-18  9:13 ` [PATCH v4 0/8] iomap, xfs: improve zero range flushing and lookup Carlos Maiolino
@ 2026-03-23 10:47 ` Carlos Maiolino
  9 siblings, 0 replies; 15+ messages in thread
From: Carlos Maiolino @ 2026-03-23 10:47 UTC (permalink / raw)
  To: linux-fsdevel, linux-xfs, Brian Foster

On Wed, 11 Mar 2026 12:24:54 -0400, Brian Foster wrote:
> No significant changes in v4. A few whitespace fixes throughout and I've
> added some R-b tags from v3 review. Thanks.
> 
> Brian
> 
> v4:
> - Minor whitespace cleanups.
> v3: https://lore.kernel.org/linux-fsdevel/20260309134506.167663-1-bfoster@redhat.com/
> - Inserted new patches 1-2 to fix up zoned mode zeroing.
> - Appended patch 8 to correctly report COW mappings backed by data fork
>   holes.
> - Various minor fixups to logic, whitespace, comments.
> v2: https://lore.kernel.org/linux-fsdevel/20260129155028.141110-1-bfoster@redhat.com/
> - Patch 1 from v1 merged separately.
> - Fixed up iomap_fill_dirty_folios() call in patch 5.
> v1: https://lore.kernel.org/linux-fsdevel/20251016190303.53881-1-bfoster@redhat.com/
> 
> [...]

Applied to for-next, thanks!

[1/8] xfs: fix iomap hole map reporting for zoned zero range
      commit: 92e9dff9ca5026805798b13b967760f8058794e8
[2/8] xfs: flush dirty pagecache over hole in zoned mode zero range
      commit: 2f46c239fce617ac26cc40d9520b1c0cf05cd34f
[3/8] iomap, xfs: lift zero range hole mapping flush into xfs
      commit: a35bb0dec9552aa2bc69a24e3126c68c257bf55e
[4/8] xfs: flush eof folio before insert range size update
      commit: c35a3e273e86e89f73abc4e75e33648fac20eec9
[5/8] xfs: look up cow fork extent earlier for buffered iomap_begin
      commit: a8eb41376df987887b33dbf7078d5b13c85f3e0c
[6/8] xfs: only flush when COW fork blocks overlap data fork holes
      commit: c770f997a4227b6fc5f62275b2337622213e35af
[7/8] xfs: replace zero range flush with folio batch
      commit: ce9d27ca8b2eafdd2457a15aafdab74218843138
[8/8] xfs: report cow mappings with dirty pagecache for iomap zero range
      commit: 388bb26b3d33de3c53a492824a4c5804151a0014

Best regards,
-- 
Carlos Maiolino <cem@kernel.org>


^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2026-03-23 10:47 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-03-11 16:24 [PATCH v4 0/8] iomap, xfs: improve zero range flushing and lookup Brian Foster
2026-03-11 16:24 ` [PATCH v4 1/8] xfs: fix iomap hole map reporting for zoned zero range Brian Foster
2026-03-11 16:24 ` [PATCH v4 2/8] xfs: flush dirty pagecache over hole in zoned mode " Brian Foster
2026-03-11 22:14   ` Darrick J. Wong
2026-03-11 16:24 ` [PATCH v4 3/8] iomap, xfs: lift zero range hole mapping flush into xfs Brian Foster
2026-03-11 16:24 ` [PATCH v4 4/8] xfs: flush eof folio before insert range size update Brian Foster
2026-03-11 21:36   ` Darrick J. Wong
2026-03-11 16:24 ` [PATCH v4 5/8] xfs: look up cow fork extent earlier for buffered iomap_begin Brian Foster
2026-03-11 16:25 ` [PATCH v4 6/8] xfs: only flush when COW fork blocks overlap data fork holes Brian Foster
2026-03-11 16:25 ` [PATCH v4 7/8] xfs: replace zero range flush with folio batch Brian Foster
2026-03-11 16:25 ` [PATCH v4 8/8] xfs: report cow mappings with dirty pagecache for iomap zero range Brian Foster
2026-03-12  6:55   ` Christoph Hellwig
2026-03-18  9:13 ` [PATCH v4 0/8] iomap, xfs: improve zero range flushing and lookup Carlos Maiolino
2026-03-18  9:40   ` Christian Brauner
2026-03-23 10:47 ` Carlos Maiolino

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox