public inbox for linux-btrfs@vger.kernel.org
* [PATCH v4 0/2] btrfs: fix corner cases for subpage generic/363 failures
@ 2025-04-23 21:26 Qu Wenruo
  2025-04-23 21:26 ` [PATCH v4 1/2] btrfs: make btrfs_truncate_block() to zero involved blocks in a folio Qu Wenruo
  2025-04-23 21:26 ` [PATCH v4 2/2] btrfs: make btrfs_truncate_block() zero folio range for certain subpage corner cases Qu Wenruo
  0 siblings, 2 replies; 7+ messages in thread
From: Qu Wenruo @ 2025-04-23 21:26 UTC (permalink / raw)
  To: linux-btrfs

[CHANGELOG]
v4:
- Rebased to the latest for-next branch
  The btrfs_free_extent_map() rename causes a minor conflict in the
  first patch.

v3:
- Fix a typo where @block_size should be @blocksize.
  There is a global function, block_size(), thus this typo will cause
  type conflicts inside round_down()/round_up().

v2:
- Fix a conversion bug in the first patch that leads to a generic/008
  failure on x86_64.
  The range was passed incorrectly, causing btrfs_truncate_block() to
  incorrectly skip an unaligned range.

Test case generic/363 always fails on subpage (fs block size < page size)
btrfs; there are mostly two kinds of problems here:

All examples are based on 64K page size and 4K fs block size.

1) EOF is polluted and btrfs_truncate_block() only zeros the block that
   needs to be written back

   
   0                           32K                           64K
   |                           |              |GGGGGGGGGGGGGG|
                                              50K EOF
   The original file is 50K sized (not 4K aligned), and fsx polluted the
   range beyond EOF through memory mapped write.
   Since memory mapped writes are page based, and our page size is
   larger than the block size, the page range [0, 64K) covers blocks
   beyond EOF.

   Those polluted ranges will not be written back, but they still
   affect our page cache.

   Then some operation happens to expand the inode to size 64K.

   In that case btrfs_truncate_block() is called to trim the block
   [48K, 52K), and that block will be marked dirty for writeback.

   But the range [52K, 64K) is left completely untouched, with the
   garbage still hanging there, triggering `fsx -e 1` failures.

   Fix this case by forcing btrfs_truncate_block() to zero all involved
   blocks. (Meanwhile only the one block [48K, 52K) will still be
   written back.)

2) EOF is polluted and the original size is block aligned so
   btrfs_truncate_block() does nothing

   0                           32K                           64K
   |                           |                |GGGGGGGGGGGG|
                                                52K EOF

   Mostly the same as case 1, but this time since the inode size is
   block aligned, btrfs_truncate_block() will do nothing.

   This leaves the garbage range [52K, 64K) untouched and fails
   `fsx -e 1` runs.

   Fix this case by forcing btrfs_truncate_block() to zero all involved
   blocks when the fs block size is smaller than the page size and the
   range is block aligned.
   This will not create any new dirty blocks; it purely zeroes out the
   range beyond EOF to pass `fsx -e 1` runs.

Qu Wenruo (2):
  btrfs: make btrfs_truncate_block() to zero involved blocks in a folio
  btrfs: make btrfs_truncate_block() zero folio range for certain
    subpage corner cases

 fs/btrfs/btrfs_inode.h |  10 ++-
 fs/btrfs/file.c        |  37 ++++++----
 fs/btrfs/inode.c       | 153 ++++++++++++++++++++++++++++++++++-------
 3 files changed, 162 insertions(+), 38 deletions(-)

-- 
2.49.0


^ permalink raw reply	[flat|nested] 7+ messages in thread

* [PATCH v4 1/2] btrfs: make btrfs_truncate_block() to zero involved blocks in a folio
  2025-04-23 21:26 [PATCH v4 0/2] btrfs: fix corner cases for subpage generic/363 failures Qu Wenruo
@ 2025-04-23 21:26 ` Qu Wenruo
  2025-04-24  0:14   ` Boris Burkov
  2025-04-23 21:26 ` [PATCH v4 2/2] btrfs: make btrfs_truncate_block() zero folio range for certain subpage corner cases Qu Wenruo
  1 sibling, 1 reply; 7+ messages in thread
From: Qu Wenruo @ 2025-04-23 21:26 UTC (permalink / raw)
  To: linux-btrfs

[BUG]
The following fsx sequence will fail on btrfs with 64K page size and 4K
fs block size:

 #fsx -d -e 1 -N 4 $mnt/junk -S 36386
 READ BAD DATA: offset = 0xe9ba, size = 0x6dd5, fname = /mnt/btrfs/junk
 OFFSET      GOOD    BAD     RANGE
 0xe9ba      0x0000  0x03ac  0x0
 operation# (mod 256) for the bad data may be 3
 ...
 LOG DUMP (4 total operations):
 1(  1 mod 256): WRITE    0x6c62 thru 0x1147d	(0xa81c bytes) HOLE	***WWWW
 2(  2 mod 256): TRUNCATE DOWN	from 0x1147e to 0x5448	******WWWW
 3(  3 mod 256): ZERO     0x1c7aa thru 0x28fe2	(0xc839 bytes)
 4(  4 mod 256): MAPREAD  0xe9ba thru 0x1578e	(0x6dd5 bytes)	***RRRR***

[CAUSE]
Only 2 operations are really involved in this case:

 3 pollute_eof	0x5448 thru	0xffff	(0xabb8 bytes)
 3 zero	from 0x1c7aa to 0x28fe3, (0xc839 bytes)
 4 mapread	0xe9ba thru	0x1578e	(0x6dd5 bytes)

At operation 3, fsx pollutes the range beyond EOF; this is done by
mmap() and writing into the mmap()ed range beyond EOF.

Such a write fills the range beyond EOF, but it will never reach disk,
as ranges beyond EOF are never marked dirty nor uptodate.

Then we zero_range for [0x1c7aa, 0x28fe3], and since the range is beyond
our isize (which was 0x5448), we should zero out any range beyond
EOF (0x5448).

During btrfs_zero_range(), we call btrfs_truncate_block() to dirty the
unaligned head block.
But that function only really zeroes out the block at [0x5000, 0x5fff];
it does not touch any range other than that block, since those ranges
will not be marked dirty nor written back.

So the range [0x6000, 0xffff] is still polluted, and later mapread()
will return the poisoned value.

Such behavior is only exposed when the page size is larger than the fs
block size, as in the block size == page size case the block is exactly
one page, and fsx only checks exactly one page at EOF.

[FIX]
Enhance btrfs_truncate_block() by:

- Force callers to pass a @start/@end combination
  So that there will be no 0 length passed in.

- Replace the @front parameter with an enum
  This makes it match the @start/@end parameters better, using
  BTRFS_TRUNCATE_HEAD_BLOCK and BTRFS_TRUNCATE_TAIL_BLOCK instead.

- Pass the original unmodified range into btrfs_truncate_block()
  There are several call sites inside btrfs_zero_range() and
  btrfs_punch_hole() where we pass part of the original range for
  truncating.

  This hides the original range which can lead to under or over
  truncating.
  Thus we have to pass the original zero/punch range.

- Make btrfs_truncate_block() zero all involved blocks inside the folio
  Since we have the original range, we know exactly which range inside
  the folio should be zeroed.

  It may cover blocks other than the one with data space reserved, but
  that's fine; the zeroed range will not be written back anyway.

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/btrfs_inode.h | 10 +++++--
 fs/btrfs/file.c        | 37 ++++++++++++++++--------
 fs/btrfs/inode.c       | 65 +++++++++++++++++++++++++++---------------
 3 files changed, 75 insertions(+), 37 deletions(-)

diff --git a/fs/btrfs/btrfs_inode.h b/fs/btrfs/btrfs_inode.h
index 61fad5423b6a..17b04702562a 100644
--- a/fs/btrfs/btrfs_inode.h
+++ b/fs/btrfs/btrfs_inode.h
@@ -565,8 +565,14 @@ int btrfs_add_link(struct btrfs_trans_handle *trans,
 		   struct btrfs_inode *parent_inode, struct btrfs_inode *inode,
 		   const struct fscrypt_str *name, int add_backref, u64 index);
 int btrfs_delete_subvolume(struct btrfs_inode *dir, struct dentry *dentry);
-int btrfs_truncate_block(struct btrfs_inode *inode, loff_t from, loff_t len,
-			 int front);
+
+enum btrfs_truncate_where {
+	BTRFS_TRUNCATE_HEAD_BLOCK,
+	BTRFS_TRUNCATE_TAIL_BLOCK,
+};
+int btrfs_truncate_block(struct btrfs_inode *inode, loff_t from, loff_t end,
+			 u64 orig_start, u64 orig_end,
+			 enum btrfs_truncate_where where);
 
 int btrfs_start_delalloc_snapshot(struct btrfs_root *root, bool in_reclaim_context);
 int btrfs_start_delalloc_roots(struct btrfs_fs_info *fs_info, long nr,
diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
index e688587329de..23f8f25d617b 100644
--- a/fs/btrfs/file.c
+++ b/fs/btrfs/file.c
@@ -2614,7 +2614,8 @@ static int btrfs_punch_hole(struct file *file, loff_t offset, loff_t len)
 	u64 lockend;
 	u64 tail_start;
 	u64 tail_len;
-	u64 orig_start = offset;
+	const u64 orig_start = offset;
+	const u64 orig_end = offset + len - 1;
 	int ret = 0;
 	bool same_block;
 	u64 ino_size;
@@ -2656,8 +2657,9 @@ static int btrfs_punch_hole(struct file *file, loff_t offset, loff_t len)
 	if (same_block && len < fs_info->sectorsize) {
 		if (offset < ino_size) {
 			truncated_block = true;
-			ret = btrfs_truncate_block(BTRFS_I(inode), offset, len,
-						   0);
+			ret = btrfs_truncate_block(BTRFS_I(inode), offset, offset + len - 1,
+						   orig_start, orig_end,
+						   BTRFS_TRUNCATE_HEAD_BLOCK);
 		} else {
 			ret = 0;
 		}
@@ -2667,7 +2669,9 @@ static int btrfs_punch_hole(struct file *file, loff_t offset, loff_t len)
 	/* zero back part of the first block */
 	if (offset < ino_size) {
 		truncated_block = true;
-		ret = btrfs_truncate_block(BTRFS_I(inode), offset, 0, 0);
+		ret = btrfs_truncate_block(BTRFS_I(inode), offset, -1,
+					   orig_start, orig_end,
+					   BTRFS_TRUNCATE_HEAD_BLOCK);
 		if (ret) {
 			btrfs_inode_unlock(BTRFS_I(inode), BTRFS_ILOCK_MMAP);
 			return ret;
@@ -2704,8 +2708,9 @@ static int btrfs_punch_hole(struct file *file, loff_t offset, loff_t len)
 			if (tail_start + tail_len < ino_size) {
 				truncated_block = true;
 				ret = btrfs_truncate_block(BTRFS_I(inode),
-							tail_start + tail_len,
-							0, 1);
+						tail_start, tail_start + tail_len - 1,
+						orig_start, orig_end,
+						BTRFS_TRUNCATE_TAIL_BLOCK);
 				if (ret)
 					goto out_only_mutex;
 			}
@@ -2873,6 +2878,8 @@ static int btrfs_zero_range(struct inode *inode,
 	int ret;
 	u64 alloc_hint = 0;
 	const u64 sectorsize = fs_info->sectorsize;
+	const u64 orig_start = offset;
+	const u64 orig_end = offset + len - 1;
 	u64 alloc_start = round_down(offset, sectorsize);
 	u64 alloc_end = round_up(offset + len, sectorsize);
 	u64 bytes_to_reserve = 0;
@@ -2935,8 +2942,9 @@ static int btrfs_zero_range(struct inode *inode,
 		}
 		if (len < sectorsize && em->disk_bytenr != EXTENT_MAP_HOLE) {
 			btrfs_free_extent_map(em);
-			ret = btrfs_truncate_block(BTRFS_I(inode), offset, len,
-						   0);
+			ret = btrfs_truncate_block(BTRFS_I(inode), offset, offset + len - 1,
+						   orig_start, orig_end,
+						   BTRFS_TRUNCATE_HEAD_BLOCK);
 			if (!ret)
 				ret = btrfs_fallocate_update_isize(inode,
 								   offset + len,
@@ -2967,7 +2975,9 @@ static int btrfs_zero_range(struct inode *inode,
 			alloc_start = round_down(offset, sectorsize);
 			ret = 0;
 		} else if (ret == RANGE_BOUNDARY_WRITTEN_EXTENT) {
-			ret = btrfs_truncate_block(BTRFS_I(inode), offset, 0, 0);
+			ret = btrfs_truncate_block(BTRFS_I(inode), offset, -1,
+						   orig_start, orig_end,
+						   BTRFS_TRUNCATE_HEAD_BLOCK);
 			if (ret)
 				goto out;
 		} else {
@@ -2984,8 +2994,9 @@ static int btrfs_zero_range(struct inode *inode,
 			alloc_end = round_up(offset + len, sectorsize);
 			ret = 0;
 		} else if (ret == RANGE_BOUNDARY_WRITTEN_EXTENT) {
-			ret = btrfs_truncate_block(BTRFS_I(inode), offset + len,
-						   0, 1);
+			ret = btrfs_truncate_block(BTRFS_I(inode), offset, offset + len - 1,
+						   orig_start, orig_end,
+						   BTRFS_TRUNCATE_TAIL_BLOCK);
 			if (ret)
 				goto out;
 		} else {
@@ -3105,7 +3116,9 @@ static long btrfs_fallocate(struct file *file, int mode,
 		 * need to zero out the end of the block if i_size lands in the
 		 * middle of a block.
 		 */
-		ret = btrfs_truncate_block(BTRFS_I(inode), inode->i_size, 0, 0);
+		ret = btrfs_truncate_block(BTRFS_I(inode), inode->i_size, -1,
+					   inode->i_size, -1,
+					   BTRFS_TRUNCATE_HEAD_BLOCK);
 		if (ret)
 			goto out;
 	}
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 538a9ec86abc..6f910c056819 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -4765,15 +4765,16 @@ static int btrfs_rmdir(struct inode *dir, struct dentry *dentry)
  *
  * @inode - inode that we're zeroing
  * @from - the offset to start zeroing
- * @len - the length to zero, 0 to zero the entire range respective to the
- *	offset
- * @front - zero up to the offset instead of from the offset on
+ * @end - the inclusive end to finish zeroing, can be -1 meaning truncating
+ *	  everything beyond @from.
+ * @where - Head or tail block to truncate.
  *
  * This will find the block for the "from" offset and cow the block and zero the
  * part we want to zero.  This is used with truncate and hole punching.
  */
-int btrfs_truncate_block(struct btrfs_inode *inode, loff_t from, loff_t len,
-			 int front)
+int btrfs_truncate_block(struct btrfs_inode *inode, loff_t from, loff_t end,
+			 u64 orig_start, u64 orig_end,
+			 enum btrfs_truncate_where where)
 {
 	struct btrfs_fs_info *fs_info = inode->root->fs_info;
 	struct address_space *mapping = inode->vfs_inode.i_mapping;
@@ -4783,20 +4784,30 @@ int btrfs_truncate_block(struct btrfs_inode *inode, loff_t from, loff_t len,
 	struct extent_changeset *data_reserved = NULL;
 	bool only_release_metadata = false;
 	u32 blocksize = fs_info->sectorsize;
-	pgoff_t index = from >> PAGE_SHIFT;
-	unsigned offset = from & (blocksize - 1);
+	pgoff_t index = (where == BTRFS_TRUNCATE_HEAD_BLOCK) ?
+			(from >> PAGE_SHIFT) : (end >> PAGE_SHIFT);
 	struct folio *folio;
 	gfp_t mask = btrfs_alloc_write_mask(mapping);
 	size_t write_bytes = blocksize;
 	int ret = 0;
 	u64 block_start;
 	u64 block_end;
+	u64 clamp_start;
+	u64 clamp_end;
 
-	if (IS_ALIGNED(offset, blocksize) &&
-	    (!len || IS_ALIGNED(len, blocksize)))
+	ASSERT(where == BTRFS_TRUNCATE_HEAD_BLOCK ||
+	       where == BTRFS_TRUNCATE_TAIL_BLOCK);
+
+	if (end == (loff_t)-1)
+		ASSERT(where == BTRFS_TRUNCATE_HEAD_BLOCK);
+
+	if (IS_ALIGNED(from, blocksize) && IS_ALIGNED(end + 1, blocksize))
 		goto out;
 
-	block_start = round_down(from, blocksize);
+	if (where == BTRFS_TRUNCATE_HEAD_BLOCK)
+		block_start = round_down(from, blocksize);
+	else
+		block_start = round_down(end, blocksize);
 	block_end = block_start + blocksize - 1;
 
 	ret = btrfs_check_data_free_space(inode, &data_reserved, block_start,
@@ -4876,17 +4887,22 @@ int btrfs_truncate_block(struct btrfs_inode *inode, loff_t from, loff_t len,
 		goto out_unlock;
 	}
 
-	if (offset != blocksize) {
-		if (!len)
-			len = blocksize - offset;
-		if (front)
-			folio_zero_range(folio, block_start - folio_pos(folio),
-					 offset);
-		else
-			folio_zero_range(folio,
-					 (block_start - folio_pos(folio)) + offset,
-					 len);
-	}
+	/*
+	 * Although we have only reserved space for the one block, we still should
+	 * zero out all blocks in the original range.
+	 * The remaining blocks normally are already holes thus no need to zero again,
+	 * but it's possible for fs block size < page size cases to have memory mapped
+	 * writes to pollute ranges beyond EOF.
+	 *
+	 * In that case although the polluted blocks beyond EOF will not reach disk,
+	 * it still affects our page cache.
+	 */
+	clamp_start = max_t(u64, folio_pos(folio), orig_start);
+	clamp_end = min_t(u64, folio_pos(folio) + folio_size(folio) - 1,
+			  orig_end);
+	folio_zero_range(folio, clamp_start - folio_pos(folio),
+			 clamp_end - clamp_start + 1);
+
 	btrfs_folio_clear_checked(fs_info, folio, block_start,
 				  block_end + 1 - block_start);
 	btrfs_folio_set_dirty(fs_info, folio, block_start,
@@ -4988,7 +5004,8 @@ int btrfs_cont_expand(struct btrfs_inode *inode, loff_t oldsize, loff_t size)
 	 * rest of the block before we expand the i_size, otherwise we could
 	 * expose stale data.
 	 */
-	ret = btrfs_truncate_block(inode, oldsize, 0, 0);
+	ret = btrfs_truncate_block(inode, oldsize, -1, oldsize, -1,
+				   BTRFS_TRUNCATE_HEAD_BLOCK);
 	if (ret)
 		return ret;
 
@@ -7623,7 +7640,9 @@ static int btrfs_truncate(struct btrfs_inode *inode, bool skip_writeback)
 		btrfs_end_transaction(trans);
 		btrfs_btree_balance_dirty(fs_info);
 
-		ret = btrfs_truncate_block(inode, inode->vfs_inode.i_size, 0, 0);
+		ret = btrfs_truncate_block(inode, inode->vfs_inode.i_size, -1,
+					   inode->vfs_inode.i_size, -1,
+					   BTRFS_TRUNCATE_HEAD_BLOCK);
 		if (ret)
 			goto out;
 		trans = btrfs_start_transaction(root, 1);
-- 
2.49.0


^ permalink raw reply related	[flat|nested] 7+ messages in thread

* [PATCH v4 2/2] btrfs: make btrfs_truncate_block() zero folio range for certain subpage corner cases
  2025-04-23 21:26 [PATCH v4 0/2] btrfs: fix corner cases for subpage generic/363 failures Qu Wenruo
  2025-04-23 21:26 ` [PATCH v4 1/2] btrfs: make btrfs_truncate_block() to zero involved blocks in a folio Qu Wenruo
@ 2025-04-23 21:26 ` Qu Wenruo
  2025-04-24  0:27   ` Boris Burkov
  1 sibling, 1 reply; 7+ messages in thread
From: Qu Wenruo @ 2025-04-23 21:26 UTC (permalink / raw)
  To: linux-btrfs

[BUG]
For the following `fsx -e 1` run, btrfs still fails on 64K page size
with 4K fs block size:

READ BAD DATA: offset = 0x26b3a, size = 0xfafa, fname = /mnt/btrfs/junk
OFFSET      GOOD    BAD     RANGE
0x26b3a     0x0000  0x15b4  0x0
operation# (mod 256) for the bad data may be 21
[...]
LOG DUMP (28 total operations):
1(  1 mod 256): SKIPPED (no operation)
2(  2 mod 256): SKIPPED (no operation)
3(  3 mod 256): SKIPPED (no operation)
4(  4 mod 256): SKIPPED (no operation)
5(  5 mod 256): WRITE    0x1ea90 thru 0x285e0	(0x9b51 bytes) HOLE
6(  6 mod 256): ZERO     0x1b1a8 thru 0x20bd4	(0x5a2d bytes)
7(  7 mod 256): FALLOC   0x22b1a thru 0x272fa	(0x47e0 bytes) INTERIOR
8(  8 mod 256): WRITE    0x741d thru 0x13522	(0xc106 bytes)
9(  9 mod 256): MAPWRITE 0x73ee thru 0xdeeb	(0x6afe bytes)
10( 10 mod 256): FALLOC   0xb719 thru 0xb994	(0x27b bytes) INTERIOR
11( 11 mod 256): COPY 0x15ed8 thru 0x18be1	(0x2d0a bytes) to 0x25f6e thru 0x28c77
12( 12 mod 256): ZERO     0x1615e thru 0x1770e	(0x15b1 bytes)
13( 13 mod 256): SKIPPED (no operation)
14( 14 mod 256): DEDUPE 0x20000 thru 0x27fff	(0x8000 bytes) to 0x1000 thru 0x8fff
15( 15 mod 256): SKIPPED (no operation)
16( 16 mod 256): CLONE 0xa000 thru 0xffff	(0x6000 bytes) to 0x36000 thru 0x3bfff
17( 17 mod 256): ZERO     0x14adc thru 0x1b78a	(0x6caf bytes)
18( 18 mod 256): TRUNCATE DOWN	from 0x3c000 to 0x1e2e3	******WWWW
19( 19 mod 256): CLONE 0x4000 thru 0x11fff	(0xe000 bytes) to 0x16000 thru 0x23fff
20( 20 mod 256): FALLOC   0x311e1 thru 0x3681b	(0x563a bytes) PAST_EOF
21( 21 mod 256): FALLOC   0x351c5 thru 0x40000	(0xae3b bytes) EXTENDING
22( 22 mod 256): WRITE    0x920 thru 0x7e51	(0x7532 bytes)
23( 23 mod 256): COPY 0x2b58 thru 0xc508	(0x99b1 bytes) to 0x117b1 thru 0x1b161
24( 24 mod 256): TRUNCATE DOWN	from 0x40000 to 0x3c9a5
25( 25 mod 256): SKIPPED (no operation)
26( 26 mod 256): MAPWRITE 0x25020 thru 0x26b06	(0x1ae7 bytes)
27( 27 mod 256): SKIPPED (no operation)
28( 28 mod 256): READ     0x26b3a thru 0x36633	(0xfafa bytes)	***RRRR***

[CAUSE]
The involved operations are:

 fallocating to largest ever: 0x40000
 21 pollute_eof	0x24000 thru	0x2ffff	(0xc000 bytes)
 21 falloc	from 0x351c5 to 0x40000 (0xae3b bytes)
 28 read	0x26b3a thru	0x36633	(0xfafa bytes)

At operation #21 a pollute_eof is done, by memory mapped write into
the range [0x24000, 0x2ffff].
At this stage, the inode size is 0x24000, which is block aligned.

Then fallocate happens, and since it's expanding the inode, it will call
btrfs_truncate_block() to truncate any unaligned range.

But since the inode size is already block aligned,
btrfs_truncate_block() does nothing and exits.

However, remember that the folio at 0x20000 already has some polluted
ranges. Although they will not be written back to disk, they still
affect the page cache, resulting in the later operation #28 reading out
the polluted values.

[FIX]
Instead of exiting early from btrfs_truncate_block() when the range is
already block aligned, do extra folio zeroing if the fs block size is
smaller than the page size.

This addresses exactly the above case, where a memory mapped write can
still leave garbage beyond EOF.

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/inode.c | 90 ++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 88 insertions(+), 2 deletions(-)

diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 6f910c056819..23cd03716e57 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -4760,6 +4760,80 @@ static int btrfs_rmdir(struct inode *dir, struct dentry *dentry)
 	return ret;
 }
 
+/*
+ * A helper to zero out all blocks inside range [@orig_start, @orig_end] of
+ * the target folio.
+ * The target folio is the one containing the head or tail block of the range
+ * [@from, @end].
+ *
+ * This is a special case for fs block size < page size, where even if the range
+ * [from, end] is already block aligned, we can still have blocks beyond EOF being
+ * polluted by memory mapped write.
+ */
+static int zero_range_folio(struct btrfs_inode *inode, u64 from, u64 end,
+			    u64 orig_start, u64 orig_end,
+			    enum btrfs_truncate_where where)
+{
+	const u32 blocksize = inode->root->fs_info->sectorsize;
+	struct address_space *mapping = inode->vfs_inode.i_mapping;
+	struct extent_io_tree *io_tree = &inode->io_tree;
+	struct extent_state *cached_state = NULL;
+	struct btrfs_ordered_extent *ordered;
+	pgoff_t index = (where == BTRFS_TRUNCATE_HEAD_BLOCK) ?
+			(from >> PAGE_SHIFT) : (end >> PAGE_SHIFT);
+	struct folio *folio;
+	u64 block_start;
+	u64 block_end;
+	u64 clamp_start;
+	u64 clamp_end;
+	int ret = 0;
+
+again:
+	folio = filemap_lock_folio(mapping, index);
+	/* No folio present. */
+	if (IS_ERR(folio))
+		return 0;
+
+	if (!folio_test_uptodate(folio)) {
+		ret = btrfs_read_folio(NULL, folio);
+		folio_lock(folio);
+		if (folio->mapping != mapping) {
+			folio_unlock(folio);
+			folio_put(folio);
+			goto again;
+		}
+		if (!folio_test_uptodate(folio)) {
+			ret = -EIO;
+			goto out_unlock;
+		}
+	}
+	folio_wait_writeback(folio);
+
+	clamp_start = max_t(u64, folio_pos(folio), orig_start);
+	clamp_end = min_t(u64, folio_pos(folio) + folio_size(folio) - 1,
+			  orig_end);
+	block_start = round_down(clamp_start, blocksize);
+	block_end = round_up(clamp_end + 1, blocksize) - 1;
+	lock_extent(io_tree, block_start, block_end, &cached_state);
+	ordered = btrfs_lookup_ordered_range(inode, block_start, block_end + 1 - block_start);
+	if (ordered) {
+		unlock_extent(io_tree, block_start, block_end, &cached_state);
+		folio_unlock(folio);
+		folio_put(folio);
+		btrfs_start_ordered_extent(ordered);
+		btrfs_put_ordered_extent(ordered);
+		goto again;
+	}
+	folio_zero_range(folio, clamp_start - folio_pos(folio),
+			 clamp_end - clamp_start + 1);
+	unlock_extent(io_tree, block_start, block_end, &cached_state);
+
+out_unlock:
+	folio_unlock(folio);
+	folio_put(folio);
+	return ret;
+}
+
 /*
  * Read, zero a chunk and write a block.
  *
@@ -4801,8 +4875,20 @@ int btrfs_truncate_block(struct btrfs_inode *inode, loff_t from, loff_t end,
 	if (end == (loff_t)-1)
 		ASSERT(where == BTRFS_TRUNCATE_HEAD_BLOCK);
 
-	if (IS_ALIGNED(from, blocksize) && IS_ALIGNED(end + 1, blocksize))
-		goto out;
+	if (IS_ALIGNED(from, blocksize) && IS_ALIGNED(end + 1, blocksize)) {
+		/*
+		 * The target head/tail range is already block aligned.
+		 * If the block size >= PAGE_SIZE, it's impossible for a mmap()ed
+		 * page to contain anything other than the target block, so we
+		 * can safely exit.
+		 *
+		 * Otherwise we still need to zero out the range inside the folio
+		 * to avoid memory mapped writes polluting the range beyond EOF.
+		 */
+		if (blocksize >= PAGE_SIZE)
+			return 0;
+		return zero_range_folio(inode, from, end, orig_start, orig_end, where);
+	}
 
 	if (where == BTRFS_TRUNCATE_HEAD_BLOCK)
 		block_start = round_down(from, blocksize);
-- 
2.49.0


^ permalink raw reply related	[flat|nested] 7+ messages in thread

* Re: [PATCH v4 1/2] btrfs: make btrfs_truncate_block() to zero involved blocks in a folio
  2025-04-23 21:26 ` [PATCH v4 1/2] btrfs: make btrfs_truncate_block() to zero involved blocks in a folio Qu Wenruo
@ 2025-04-24  0:14   ` Boris Burkov
  2025-04-24  0:54     ` Qu Wenruo
  0 siblings, 1 reply; 7+ messages in thread
From: Boris Burkov @ 2025-04-24  0:14 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: linux-btrfs

On Thu, Apr 24, 2025 at 06:56:31AM +0930, Qu Wenruo wrote:
> [BUG]
> The following fsx sequence will fail on btrfs with 64K page size and 4K
> fs block size:
> 
>  #fsx -d -e 1 -N 4 $mnt/junk -S 36386
>  READ BAD DATA: offset = 0xe9ba, size = 0x6dd5, fname = /mnt/btrfs/junk
>  OFFSET      GOOD    BAD     RANGE
>  0xe9ba      0x0000  0x03ac  0x0
>  operation# (mod 256) for the bad data may be 3
>  ...
>  LOG DUMP (4 total operations):
>  1(  1 mod 256): WRITE    0x6c62 thru 0x1147d	(0xa81c bytes) HOLE	***WWWW
>  2(  2 mod 256): TRUNCATE DOWN	from 0x1147e to 0x5448	******WWWW
>  3(  3 mod 256): ZERO     0x1c7aa thru 0x28fe2	(0xc839 bytes)
>  4(  4 mod 256): MAPREAD  0xe9ba thru 0x1578e	(0x6dd5 bytes)	***RRRR***
> 
> [CAUSE]
> Only 2 operations are really involved in this case:
> 
>  3 pollute_eof	0x5448 thru	0xffff	(0xabb8 bytes)
>  3 zero	from 0x1c7aa to 0x28fe3, (0xc839 bytes)
>  4 mapread	0xe9ba thru	0x1578e	(0x6dd5 bytes)
> 
> At operation 3, fsx pollutes beyond EOF, that is done by mmap()
> and write into that mmap() range beyondd EOF.
> 
> Such write will fill the range beyond EOF, but it will never reach disk
> as ranges beyond EOF will not be marked dirty nor uptodate.
> 
> Then we zero_range for [0x1c7aa, 0x28fe3], and since the range is beyond
> our isize (which was 0x5448), we should zero out any range beyond
> EOF (0x5448).
> 
> During btrfs_zero_range(), we call btrfs_truncate_block() to dirty the
> unaligned head block.
> But that function only really zero out the block at [0x5000, 0x5fff], it
> doesn't bother any range other that that block, since those range will
> not be marked dirty nor written back.
> 
> So the range [0x6000, 0xffff] is still polluted, and later mapread()
> will return the poisoned value.
> 
> Such behavior is only exposed when page size is larger than fs block
> btrfs, as for block size == page size case the block is exactly one
> page, and fsx only checks exactly one page at EOF.

This wording struck me as a little weird. What fsx does and doesn't
check shouldn't really matter, compared to the expected/desired behavior
of the various APIs.

Other than that, very nice debug and thanks for the clear explanation.

> 
> [FIX]
> Enhance btrfs_truncate_block() by:
> 
> - Force callers to pass a @start/@end combination
>   So that there will be no 0 length passed in.
> 
> - Rename the @front parameter to an enum
>   And make it matches the @start/@end parameter better by using
>   TRUNCATE_HEAD_BLOCK and TRUNCATE_TAIL_BLOCK instead.

Do I understand that you are not just renaming front to an enum, but
changing the meaning to be selecting a block to do the dirty/writeback
thing on? That is not really as clear as it could be from the commit
message/comments.

> 
> - Pass the original unmodified range into btrfs_truncate_block()
>   There are several call sites inside btrfs_zero_range() and
>   btrfs_punch_hole() where we pass part of the original range for
>   truncating.

The definition of "original range" is very murky to me. It seems
specific to the hole punching code (and maybe some of the other
callsites).

> 
>   This hides the original range which can lead to under or over
>   truncating.
>   Thus we have to pass the original zero/punch range.
> 
> - Make btrfs_truncate_block() to zero any involved blocks inside the folio
>   Since we have the original range, we know exactly which range inside
>   the folio that should be zeroed.
> 
>   It may cover other blocks other than the one with data space reserved,
>   but that's fine, the zeroed range will not be written back anyway.
> 
> Signed-off-by: Qu Wenruo <wqu@suse.com>
> ---
>  fs/btrfs/btrfs_inode.h | 10 +++++--
>  fs/btrfs/file.c        | 37 ++++++++++++++++--------
>  fs/btrfs/inode.c       | 65 +++++++++++++++++++++++++++---------------
>  3 files changed, 75 insertions(+), 37 deletions(-)
> 
> diff --git a/fs/btrfs/btrfs_inode.h b/fs/btrfs/btrfs_inode.h
> index 61fad5423b6a..17b04702562a 100644
> --- a/fs/btrfs/btrfs_inode.h
> +++ b/fs/btrfs/btrfs_inode.h
> @@ -565,8 +565,14 @@ int btrfs_add_link(struct btrfs_trans_handle *trans,
>  		   struct btrfs_inode *parent_inode, struct btrfs_inode *inode,
>  		   const struct fscrypt_str *name, int add_backref, u64 index);
>  int btrfs_delete_subvolume(struct btrfs_inode *dir, struct dentry *dentry);
> -int btrfs_truncate_block(struct btrfs_inode *inode, loff_t from, loff_t len,
> -			 int front);
> +
> +enum btrfs_truncate_where {
> +	BTRFS_TRUNCATE_HEAD_BLOCK,
> +	BTRFS_TRUNCATE_TAIL_BLOCK,
> +};
> +int btrfs_truncate_block(struct btrfs_inode *inode, loff_t from, loff_t end,
> +			 u64 orig_start, u64 orig_end,
> +			 enum btrfs_truncate_where where);
>  
>  int btrfs_start_delalloc_snapshot(struct btrfs_root *root, bool in_reclaim_context);
>  int btrfs_start_delalloc_roots(struct btrfs_fs_info *fs_info, long nr,
> diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
> index e688587329de..23f8f25d617b 100644
> --- a/fs/btrfs/file.c
> +++ b/fs/btrfs/file.c
> @@ -2614,7 +2614,8 @@ static int btrfs_punch_hole(struct file *file, loff_t offset, loff_t len)
>  	u64 lockend;
>  	u64 tail_start;
>  	u64 tail_len;
> -	u64 orig_start = offset;
> +	const u64 orig_start = offset;
> +	const u64 orig_end = offset + len - 1;
>  	int ret = 0;
>  	bool same_block;
>  	u64 ino_size;
> @@ -2656,8 +2657,9 @@ static int btrfs_punch_hole(struct file *file, loff_t offset, loff_t len)

(not visible in the formatted patch) but there is a comment just above
this change about how we don't need to truncate past EOF. Can you check
it and update it if it is wrong now?

>  	if (same_block && len < fs_info->sectorsize) {
>  		if (offset < ino_size) {
>  			truncated_block = true;
> -			ret = btrfs_truncate_block(BTRFS_I(inode), offset, len,
> -						   0);
> +			ret = btrfs_truncate_block(BTRFS_I(inode), offset, offset + len - 1,
> +						   orig_start, orig_end,
> +						   BTRFS_TRUNCATE_HEAD_BLOCK);
>  		} else {
>  			ret = 0;
>  		}
> @@ -2667,7 +2669,9 @@ static int btrfs_punch_hole(struct file *file, loff_t offset, loff_t len)
>  	/* zero back part of the first block */
>  	if (offset < ino_size) {
>  		truncated_block = true;
> -		ret = btrfs_truncate_block(BTRFS_I(inode), offset, 0, 0);
> +		ret = btrfs_truncate_block(BTRFS_I(inode), offset, -1,
> +					   orig_start, orig_end,
> +					   BTRFS_TRUNCATE_HEAD_BLOCK);
>  		if (ret) {
>  			btrfs_inode_unlock(BTRFS_I(inode), BTRFS_ILOCK_MMAP);
>  			return ret;
> @@ -2704,8 +2708,9 @@ static int btrfs_punch_hole(struct file *file, loff_t offset, loff_t len)
>  			if (tail_start + tail_len < ino_size) {
>  				truncated_block = true;
>  				ret = btrfs_truncate_block(BTRFS_I(inode),
> -							tail_start + tail_len,
> -							0, 1);
> +						tail_start, tail_start + tail_len - 1,
> +						orig_start, orig_end,
> +						BTRFS_TRUNCATE_TAIL_BLOCK);
>  				if (ret)
>  					goto out_only_mutex;
>  			}
> @@ -2873,6 +2878,8 @@ static int btrfs_zero_range(struct inode *inode,
>  	int ret;
>  	u64 alloc_hint = 0;
>  	const u64 sectorsize = fs_info->sectorsize;
> +	const u64 orig_start = offset;
> +	const u64 orig_end = offset + len - 1;
>  	u64 alloc_start = round_down(offset, sectorsize);
>  	u64 alloc_end = round_up(offset + len, sectorsize);
>  	u64 bytes_to_reserve = 0;
> @@ -2935,8 +2942,9 @@ static int btrfs_zero_range(struct inode *inode,
>  		}
>  		if (len < sectorsize && em->disk_bytenr != EXTENT_MAP_HOLE) {
>  			btrfs_free_extent_map(em);
> -			ret = btrfs_truncate_block(BTRFS_I(inode), offset, len,
> -						   0);
> +			ret = btrfs_truncate_block(BTRFS_I(inode), offset, offset + len - 1,
> +						   orig_start, orig_end,
> +						   BTRFS_TRUNCATE_HEAD_BLOCK);
>  			if (!ret)
>  				ret = btrfs_fallocate_update_isize(inode,
>  								   offset + len,
> @@ -2967,7 +2975,9 @@ static int btrfs_zero_range(struct inode *inode,
>  			alloc_start = round_down(offset, sectorsize);
>  			ret = 0;
>  		} else if (ret == RANGE_BOUNDARY_WRITTEN_EXTENT) {
> -			ret = btrfs_truncate_block(BTRFS_I(inode), offset, 0, 0);
> +			ret = btrfs_truncate_block(BTRFS_I(inode), offset, -1,
> +						   orig_start, orig_end,
> +						   BTRFS_TRUNCATE_HEAD_BLOCK);
>  			if (ret)
>  				goto out;
>  		} else {
> @@ -2984,8 +2994,9 @@ static int btrfs_zero_range(struct inode *inode,
>  			alloc_end = round_up(offset + len, sectorsize);
>  			ret = 0;
>  		} else if (ret == RANGE_BOUNDARY_WRITTEN_EXTENT) {
> -			ret = btrfs_truncate_block(BTRFS_I(inode), offset + len,
> -						   0, 1);
> +			ret = btrfs_truncate_block(BTRFS_I(inode), offset, offset + len - 1,
> +						   orig_start, orig_end,
> +						   BTRFS_TRUNCATE_TAIL_BLOCK);
>  			if (ret)
>  				goto out;
>  		} else {
> @@ -3105,7 +3116,9 @@ static long btrfs_fallocate(struct file *file, int mode,
>  		 * need to zero out the end of the block if i_size lands in the
>  		 * middle of a block.
>  		 */
> -		ret = btrfs_truncate_block(BTRFS_I(inode), inode->i_size, 0, 0);
> +		ret = btrfs_truncate_block(BTRFS_I(inode), inode->i_size, -1,
> +					   inode->i_size, -1,
> +					   BTRFS_TRUNCATE_HEAD_BLOCK);
>  		if (ret)
>  			goto out;
>  	}
> diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
> index 538a9ec86abc..6f910c056819 100644
> --- a/fs/btrfs/inode.c
> +++ b/fs/btrfs/inode.c
> @@ -4765,15 +4765,16 @@ static int btrfs_rmdir(struct inode *dir, struct dentry *dentry)
>   *
>   * @inode - inode that we're zeroing
>   * @from - the offset to start zeroing
> - * @len - the length to zero, 0 to zero the entire range respective to the
> - *	offset

orig_start/orig_end are missing. This adds to the confusion about their
definition/semantics I mentioned above.

More generally, I think this change is a huge change to the semantics of
this function, which used to operate on only a block but now operates on
a range. I think that merits a full rewrite of the doc to fully
represent what the new interface is.

> - * @front - zero up to the offset instead of from the offset on
> + * @end - the inclusive end to finish zeroing, can be -1 meaning truncating
> + *	  everything beyond @from.
> + * @where - Head or tail block to truncate.
>   *
>   * This will find the block for the "from" offset and cow the block and zero the
>   * part we want to zero.  This is used with truncate and hole punching.

"zero the part we want to zero" is far less trivial now, for example,
and merits more explanation.

>   */
> -int btrfs_truncate_block(struct btrfs_inode *inode, loff_t from, loff_t len,
> -			 int front)
> +int btrfs_truncate_block(struct btrfs_inode *inode, loff_t from, loff_t end,
> +			 u64 orig_start, u64 orig_end,
> +			 enum btrfs_truncate_where where)
>  {
>  	struct btrfs_fs_info *fs_info = inode->root->fs_info;
>  	struct address_space *mapping = inode->vfs_inode.i_mapping;
> @@ -4783,20 +4784,30 @@ int btrfs_truncate_block(struct btrfs_inode *inode, loff_t from, loff_t len,
>  	struct extent_changeset *data_reserved = NULL;
>  	bool only_release_metadata = false;
>  	u32 blocksize = fs_info->sectorsize;
> -	pgoff_t index = from >> PAGE_SHIFT;
> -	unsigned offset = from & (blocksize - 1);
> +	pgoff_t index = (where == BTRFS_TRUNCATE_HEAD_BLOCK) ?
> +			(from >> PAGE_SHIFT) : (end >> PAGE_SHIFT);
>  	struct folio *folio;
>  	gfp_t mask = btrfs_alloc_write_mask(mapping);
>  	size_t write_bytes = blocksize;
>  	int ret = 0;
>  	u64 block_start;
>  	u64 block_end;
> +	u64 clamp_start;
> +	u64 clamp_end;
>  
> -	if (IS_ALIGNED(offset, blocksize) &&
> -	    (!len || IS_ALIGNED(len, blocksize)))
> +	ASSERT(where == BTRFS_TRUNCATE_HEAD_BLOCK ||
> +	       where == BTRFS_TRUNCATE_TAIL_BLOCK);
> +
> +	if (end == (loff_t)-1)
> +		ASSERT(where == BTRFS_TRUNCATE_HEAD_BLOCK);
> +
> +	if (IS_ALIGNED(from, blocksize) && IS_ALIGNED(end + 1, blocksize))
>  		goto out;
>  
> -	block_start = round_down(from, blocksize);
> +	if (where == BTRFS_TRUNCATE_HEAD_BLOCK)
> +		block_start = round_down(from, blocksize);
> +	else
> +		block_start = round_down(end, blocksize);

I don't think I get why we would want to change which block we are doing
the cow stuff to, if the main focus is zeroing the full original range.

Apologies if I am being dense on that :)

>  	block_end = block_start + blocksize - 1;
>  
>  	ret = btrfs_check_data_free_space(inode, &data_reserved, block_start,
> @@ -4876,17 +4887,22 @@ int btrfs_truncate_block(struct btrfs_inode *inode, loff_t from, loff_t len,
>  		goto out_unlock;
>  	}
>  
> -	if (offset != blocksize) {
> -		if (!len)
> -			len = blocksize - offset;
> -		if (front)
> -			folio_zero_range(folio, block_start - folio_pos(folio),
> -					 offset);
> -		else
> -			folio_zero_range(folio,
> -					 (block_start - folio_pos(folio)) + offset,
> -					 len);
> -	}
> +	/*
> +	 * Although we have only reserved space for the one block, we still should
> +	 * zero out all blocks in the original range.
> +	 * The remaining blocks normally are already holes thus no need to zero again,
> +	 * but it's possible for fs block size < page size cases to have memory mapped
> +	 * writes to pollute ranges beyond EOF.
> +	 *
> +	 * In that case although the polluted blocks beyond EOF will not reach disk,
> +	 * it still affects our page cache.
> +	 */
> +	clamp_start = max_t(u64, folio_pos(folio), orig_start);
> +	clamp_end = min_t(u64, folio_pos(folio) + folio_size(folio) - 1,
> +			  orig_end);
> +	folio_zero_range(folio, clamp_start - folio_pos(folio),
> +			 clamp_end - clamp_start + 1);
> +
>  	btrfs_folio_clear_checked(fs_info, folio, block_start,
>  				  block_end + 1 - block_start);
>  	btrfs_folio_set_dirty(fs_info, folio, block_start,
> @@ -4988,7 +5004,8 @@ int btrfs_cont_expand(struct btrfs_inode *inode, loff_t oldsize, loff_t size)
>  	 * rest of the block before we expand the i_size, otherwise we could
>  	 * expose stale data.
>  	 */
> -	ret = btrfs_truncate_block(inode, oldsize, 0, 0);
> +	ret = btrfs_truncate_block(inode, oldsize, -1, oldsize, -1,
> +				   BTRFS_TRUNCATE_HEAD_BLOCK);
>  	if (ret)
>  		return ret;
>  
> @@ -7623,7 +7640,9 @@ static int btrfs_truncate(struct btrfs_inode *inode, bool skip_writeback)
>  		btrfs_end_transaction(trans);
>  		btrfs_btree_balance_dirty(fs_info);
>  
> -		ret = btrfs_truncate_block(inode, inode->vfs_inode.i_size, 0, 0);
> +		ret = btrfs_truncate_block(inode, inode->vfs_inode.i_size, -1,
> +					   inode->vfs_inode.i_size, -1,
> +					   BTRFS_TRUNCATE_HEAD_BLOCK);
>  		if (ret)
>  			goto out;
>  		trans = btrfs_start_transaction(root, 1);
> -- 
> 2.49.0

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH v4 2/2] btrfs: make btrfs_truncate_block() zero folio range for certain subpage corner cases
  2025-04-23 21:26 ` [PATCH v4 2/2] btrfs: make btrfs_truncate_block() zero folio range for certain subpage corner cases Qu Wenruo
@ 2025-04-24  0:27   ` Boris Burkov
  2025-04-24  0:56     ` Qu Wenruo
  0 siblings, 1 reply; 7+ messages in thread
From: Boris Burkov @ 2025-04-24  0:27 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: linux-btrfs

On Thu, Apr 24, 2025 at 06:56:32AM +0930, Qu Wenruo wrote:
> [BUG]
> For the following fsx -e 1 run, the btrfs still fails the run on 64K
> page size with 4K fs block size:
> 
> READ BAD DATA: offset = 0x26b3a, size = 0xfafa, fname = /mnt/btrfs/junk
> OFFSET      GOOD    BAD     RANGE
> 0x26b3a     0x0000  0x15b4  0x0
> operation# (mod 256) for the bad data may be 21
> [...]
> LOG DUMP (28 total operations):
> 1(  1 mod 256): SKIPPED (no operation)
> 2(  2 mod 256): SKIPPED (no operation)
> 3(  3 mod 256): SKIPPED (no operation)
> 4(  4 mod 256): SKIPPED (no operation)
> 5(  5 mod 256): WRITE    0x1ea90 thru 0x285e0	(0x9b51 bytes) HOLE
> 6(  6 mod 256): ZERO     0x1b1a8 thru 0x20bd4	(0x5a2d bytes)
> 7(  7 mod 256): FALLOC   0x22b1a thru 0x272fa	(0x47e0 bytes) INTERIOR
> 8(  8 mod 256): WRITE    0x741d thru 0x13522	(0xc106 bytes)
> 9(  9 mod 256): MAPWRITE 0x73ee thru 0xdeeb	(0x6afe bytes)
> 10( 10 mod 256): FALLOC   0xb719 thru 0xb994	(0x27b bytes) INTERIOR
> 11( 11 mod 256): COPY 0x15ed8 thru 0x18be1	(0x2d0a bytes) to 0x25f6e thru 0x28c77
> 12( 12 mod 256): ZERO     0x1615e thru 0x1770e	(0x15b1 bytes)
> 13( 13 mod 256): SKIPPED (no operation)
> 14( 14 mod 256): DEDUPE 0x20000 thru 0x27fff	(0x8000 bytes) to 0x1000 thru 0x8fff
> 15( 15 mod 256): SKIPPED (no operation)
> 16( 16 mod 256): CLONE 0xa000 thru 0xffff	(0x6000 bytes) to 0x36000 thru 0x3bfff
> 17( 17 mod 256): ZERO     0x14adc thru 0x1b78a	(0x6caf bytes)
> 18( 18 mod 256): TRUNCATE DOWN	from 0x3c000 to 0x1e2e3	******WWWW
> 19( 19 mod 256): CLONE 0x4000 thru 0x11fff	(0xe000 bytes) to 0x16000 thru 0x23fff
> 20( 20 mod 256): FALLOC   0x311e1 thru 0x3681b	(0x563a bytes) PAST_EOF
> 21( 21 mod 256): FALLOC   0x351c5 thru 0x40000	(0xae3b bytes) EXTENDING
> 22( 22 mod 256): WRITE    0x920 thru 0x7e51	(0x7532 bytes)
> 23( 23 mod 256): COPY 0x2b58 thru 0xc508	(0x99b1 bytes) to 0x117b1 thru 0x1b161
> 24( 24 mod 256): TRUNCATE DOWN	from 0x40000 to 0x3c9a5
> 25( 25 mod 256): SKIPPED (no operation)
> 26( 26 mod 256): MAPWRITE 0x25020 thru 0x26b06	(0x1ae7 bytes)
> 27( 27 mod 256): SKIPPED (no operation)
> 28( 28 mod 256): READ     0x26b3a thru 0x36633	(0xfafa bytes)	***RRRR***
> 
> [CAUSE]
> The involved operations are:
> 
>  fallocating to largest ever: 0x40000
>  21 pollute_eof	0x24000 thru	0x2ffff	(0xc000 bytes)
>  21 falloc	from 0x351c5 to 0x40000 (0xae3b bytes)
>  28 read	0x26b3a thru	0x36633	(0xfafa bytes)
> 
> At operation #21 a pollute_eof is done, by a memory mapped write into
> the range [0x24000, 0x2ffff).
> At this stage, the inode size is 0x24000, which is block aligned.
> 
> Then fallocate happens, and since it's expanding the inode, it will call
> btrfs_truncate_block() to truncate any unaligned range.
> 
> But since the inode size is already block aligned,
> btrfs_truncate_block() does nothing and exit.
> 
> However, remember that the folio at 0x20000 already has part of its
> range polluted; although that data will not be written back to disk, it
> still affects the page cache, causing the later operation #28 to read
> out the polluted value.
> 
> [FIX]
> Instead of exiting early from btrfs_truncate_block() when the range is
> already block aligned, do extra folio zeroing if the fs block size is
> smaller than the page size.
> 
> This is to address exactly the above case where memory mapped write can
> still leave some garbage beyond EOF.
> 
> Signed-off-by: Qu Wenruo <wqu@suse.com>
> ---
>  fs/btrfs/inode.c | 90 ++++++++++++++++++++++++++++++++++++++++++++++--
>  1 file changed, 88 insertions(+), 2 deletions(-)
> 
> diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
> index 6f910c056819..23cd03716e57 100644
> --- a/fs/btrfs/inode.c
> +++ b/fs/btrfs/inode.c
> @@ -4760,6 +4760,80 @@ static int btrfs_rmdir(struct inode *dir, struct dentry *dentry)
>  	return ret;
>  }
>  
> +/*
> + * A helper to zero out all blocks inside range [@orig_start, @orig_end] of
> + * the target folio.
> + * The target folio is the one containing the head or tail block of the range
> + * [@from, @end].
> + *
> + * This is a special case for fs block size < page size, where even if the range
> + * [from, end] is already block aligned, we can still have blocks beyond EOF being
> + * polluted by memory mapped write.
> + */

This function name is basically the same as folio_zero_range which
behaves very differently. I think it needs a name that captures the fact
that it is a special case for btrfs_truncate_block

> +static int zero_range_folio(struct btrfs_inode *inode, u64 from, u64 end,
> +			    u64 orig_start, u64 orig_end,
> +			    enum btrfs_truncate_where where)
> +{
> +	const u32 blocksize = inode->root->fs_info->sectorsize;
> +	struct address_space *mapping = inode->vfs_inode.i_mapping;
> +	struct extent_io_tree *io_tree = &inode->io_tree;
> +	struct extent_state *cached_state = NULL;
> +	struct btrfs_ordered_extent *ordered;
> +	pgoff_t index = (where == BTRFS_TRUNCATE_HEAD_BLOCK) ?
> +			(from >> PAGE_SHIFT) : (end >> PAGE_SHIFT);
> +	struct folio *folio;
> +	u64 block_start;
> +	u64 block_end;
> +	u64 clamp_start;
> +	u64 clamp_end;
> +	int ret = 0;
> +
> +again:
> +	folio = filemap_lock_folio(mapping, index);
> +	/* No folio present. */
> +	if (IS_ERR(folio))
> +		return 0;
> +
> +	if (!folio_test_uptodate(folio)) {
> +		ret = btrfs_read_folio(NULL, folio);
> +		folio_lock(folio);
> +		if (folio->mapping != mapping) {
> +			folio_unlock(folio);
> +			folio_put(folio);
> +			goto again;
> +		}
> +		if (!folio_test_uptodate(folio)) {
> +			ret = -EIO;
> +			goto out_unlock;
> +		}
> +	}
> +	folio_wait_writeback(folio);

This lock/read/wait pattern is the same as in btrfs_truncate_block and I
think could benefit from being lifted into a function. 

> +
> +	clamp_start = max_t(u64, folio_pos(folio), orig_start);
> +	clamp_end = min_t(u64, folio_pos(folio) + folio_size(folio) - 1,
> +			  orig_end);
> +	block_start = round_down(clamp_start, blocksize);
> +	block_end = round_up(clamp_end + 1, blocksize) - 1;
> +	lock_extent(io_tree, block_start, block_end, &cached_state);
> +	ordered = btrfs_lookup_ordered_range(inode, block_start, block_end + 1 - block_start);
> +	if (ordered) {
> +		unlock_extent(io_tree, block_start, block_end, &cached_state);
> +		folio_unlock(folio);
> +		folio_put(folio);
> +		btrfs_start_ordered_extent(ordered);
> +		btrfs_put_ordered_extent(ordered);
> +		goto again;
> +	}
> +	folio_zero_range(folio, clamp_start - folio_pos(folio),
> +			 clamp_end - clamp_start + 1);
> +	unlock_extent(io_tree, block_start, block_end, &cached_state);
> +
> +out_unlock:
> +	folio_unlock(folio);
> +	folio_put(folio);
> +	return ret;
> +}
> +
>  /*
>   * Read, zero a chunk and write a block.
>   *
> @@ -4801,8 +4875,20 @@ int btrfs_truncate_block(struct btrfs_inode *inode, loff_t from, loff_t end,
>  	if (end == (loff_t)-1)
>  		ASSERT(where == BTRFS_TRUNCATE_HEAD_BLOCK);
>  
> -	if (IS_ALIGNED(from, blocksize) && IS_ALIGNED(end + 1, blocksize))
> -		goto out;
> +	if (IS_ALIGNED(from, blocksize) && IS_ALIGNED(end + 1, blocksize)) {

Does this need to depend on `where` now or is it still necessary to
check both?

> +		/*
> +		 * The target head/tail range is already block aligned.
> +		 * If block size >= PAGE_SIZE, meaning it's impossible to mmap a
> +		 * page containing anything other than the target block.
> +		 * So we can safely exit.
> +		 *
> +		 * Otherwise we still need to zero out the range inside the folio
> +		 * to avoid memory mapped write to pollute beyond EOF.
> +		 */
> +		if (blocksize >= PAGE_SIZE)
> +			return 0;
> +		return zero_range_folio(inode, from, end, orig_start, orig_end, where);
> +	}
>  
>  	if (where == BTRFS_TRUNCATE_HEAD_BLOCK)
>  		block_start = round_down(from, blocksize);
> -- 
> 2.49.0


* Re: [PATCH v4 1/2] btrfs: make btrfs_truncate_block() to zero involved blocks in a folio
  2025-04-24  0:14   ` Boris Burkov
@ 2025-04-24  0:54     ` Qu Wenruo
  0 siblings, 0 replies; 7+ messages in thread
From: Qu Wenruo @ 2025-04-24  0:54 UTC (permalink / raw)
  To: Boris Burkov, Qu Wenruo; +Cc: linux-btrfs



在 2025/4/24 09:44, Boris Burkov 写道:
> On Thu, Apr 24, 2025 at 06:56:31AM +0930, Qu Wenruo wrote:
[...]
>> Such behavior is only exposed when the page size is larger than the fs
>> block size, as for the block size == page size case the block is exactly
>> one page, and fsx only checks exactly one page at EOF.
> 
> This wording struck me as a little weird. What fsx does and doesn't
> check shouldn't really matter, compared to the expected/desired behavior
> of the various APIs.

Sure, I'll remove the fsx part.

> 
> Other than that, very nice debug and thanks for the clear explanation.
> 
>>
>> [FIX]
>> Enhance btrfs_truncate_block() by:
>>
>> - Force callers to pass a @start/@end combination
>>    So that there will be no 0 length passed in.
>>
>> - Rename the @front parameter to an enum
>>    And make it match the @start/@end parameters better by using
>>    TRUNCATE_HEAD_BLOCK and TRUNCATE_TAIL_BLOCK instead.
> 
> Do I understand that you are not just renaming front to an enum, but
> changing the meaning to be selecting a block to do the dirty/writeback
> thing on? That is not really as clear as it could be from the commit
> message/comments.

The original code hides the block selection by leaving it all to the 
caller.

E.g. If we have something like this:


     0       4K      8K     12K     16K
     |   |///|///////|//////|///|   |
         2K                     14K

The original code does the truncation by calling with @from = 2K, 
@front = false, and then @from = 14K, @front = true.

But that only handles the blocks at ranges [0, 4K) and [12K, 16K); on a 
64K page it does not tell us the exact range we can zero.

If we just zero everything from 2K to the page end, then the contents at 
[14K, 64K) will be zeroed incorrectly.

Thus we need the exact range we're really truncating, to avoid 
over/under zeroing.


Even with the full truncating range passed in, we still need to know 
which exact block we're truncating, because each range always has a 
head and a tail block.



But since you're already mentioning this, it looks like the old @from is 
still a better solution; since we're already passing the original range 
into the function, we should be able to detect the block just based on 
@from and the original range.

E.g. for the above case, if we want to truncate [2K, 4K), we can just pass
@from = 2K, @range = [2K, 14K).

And if we want to truncate [12K, 14K), pass @from = 14K, @range = [2K, 14K).

> 
>>
>> - Pass the original unmodified range into btrfs_truncate_block()
>>    There are several call sites inside btrfs_zero_range() and
>>    btrfs_punch_hole() where we pass part of the original range for
>>    truncating.
> 
> The definition of "original range" is very murky to me. It seems
> specific to the hole punching code (and maybe some of the other
> callsites).

Yes, it's specific to hole punching and zero range, both of which modify 
their internal cursors.

But if we go with the above proposal, using the full range to replace 
the @front parameter, it may be a little easier to understand.

>> @@ -2656,8 +2657,9 @@ static int btrfs_punch_hole(struct file *file, loff_t offset, loff_t len)
> 
> (not visible in the formatted patch) but there is a comment just above
> this change about how we don't need to truncate past EOF. Can you check
> it and update it if it is wrong now?

Indeed I need to update that part to mention the mmap write cases.

>> diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
>> index 538a9ec86abc..6f910c056819 100644
>> --- a/fs/btrfs/inode.c
>> +++ b/fs/btrfs/inode.c
>> @@ -4765,15 +4765,16 @@ static int btrfs_rmdir(struct inode *dir, struct dentry *dentry)
>>    *
>>    * @inode - inode that we're zeroing
>>    * @from - the offset to start zeroing
>> - * @len - the length to zero, 0 to zero the entire range respective to the
>> - *	offset
> 
> orig_start/orig_end are missing. This adds to the confusion about their
> definition/semantics I mentioned above.
> 
> More generally, I think this change is a huge change to the semantics of
> this function, which used to operate on only a block but now operates on
> a range. I think that merits a full rewrite of the doc to fully
> represent what the new interface is.

Will do that.


>>   
>> -	block_start = round_down(from, blocksize);
>> +	if (where == BTRFS_TRUNCATE_HEAD_BLOCK)
>> +		block_start = round_down(from, blocksize);
>> +	else
>> +		block_start = round_down(end, blocksize);
> 
> I don't think I get why we would want to change which block we are doing
> the cow stuff to, if the main focus is zeroing the full original range.
E.g. we want to truncate the range [2K, 6K]; it covers two blocks:

        0           2K          4K         6K         8K
        |           |//////////////////////|          |

If we're handling the HEAD block, aka block [0, 4K), the block start 
should be round_down(2K).

If we're handling the TAIL block, aka block [4K, 8K), the block start 
should be round_down(6K).

The original code moves the handling to the caller, but I believe the 
newly proposed parameters handle it a little more easily.

> 
> Apologies if I am being dense on that :)

No worries, that's more or less expected, because I already know the new 
parameters are very ugly; I just could not come up with a better one at 
the time of writing.

Thanks,
Qu


* Re: [PATCH v4 2/2] btrfs: make btrfs_truncate_block() zero folio range for certain subpage corner cases
  2025-04-24  0:27   ` Boris Burkov
@ 2025-04-24  0:56     ` Qu Wenruo
  0 siblings, 0 replies; 7+ messages in thread
From: Qu Wenruo @ 2025-04-24  0:56 UTC (permalink / raw)
  To: Boris Burkov, Qu Wenruo; +Cc: linux-btrfs



在 2025/4/24 09:57, Boris Burkov 写道:
> On Thu, Apr 24, 2025 at 06:56:32AM +0930, Qu Wenruo wrote:
[...]
>> +/*
>> + * A helper to zero out all blocks inside range [@orig_start, @orig_end] of
>> + * the target folio.
>> + * The target folio is the one containing the head or tail block of the range
>> + * [@from, @end].
>> + *
>> + * This is a special case for fs block size < page size, where even if the range
>> + * [from, end] is already block aligned, we can still have blocks beyond EOF being
>> + * polluted by memory mapped write.
>> + */
> 
> This function name is basically the same as folio_zero_range which
> behaves very differently. I think it needs a name that captures the fact
> that it is a special case for btrfs_truncate_block

What about btrfs_truncate_block_zero_range()?

[...]
>> +	folio_wait_writeback(folio);
> 
> This lock/read/wait pattern is the same as in btrfs_truncate_block and I
> think could benefit from being lifted into a function.

Sure, will do that.

> 
>> +
>> +	clamp_start = max_t(u64, folio_pos(folio), orig_start);
>> +	clamp_end = min_t(u64, folio_pos(folio) + folio_size(folio) - 1,
>> +			  orig_end);
>> +	block_start = round_down(clamp_start, blocksize);
>> +	block_end = round_up(clamp_end + 1, blocksize) - 1;
>> +	lock_extent(io_tree, block_start, block_end, &cached_state);
>> +	ordered = btrfs_lookup_ordered_range(inode, block_start, block_end + 1 - block_start);
>> +	if (ordered) {
>> +		unlock_extent(io_tree, block_start, block_end, &cached_state);
>> +		folio_unlock(folio);
>> +		folio_put(folio);
>> +		btrfs_start_ordered_extent(ordered);
>> +		btrfs_put_ordered_extent(ordered);
>> +		goto again;
>> +	}
>> +	folio_zero_range(folio, clamp_start - folio_pos(folio),
>> +			 clamp_end - clamp_start + 1);
>> +	unlock_extent(io_tree, block_start, block_end, &cached_state);
>> +
>> +out_unlock:
>> +	folio_unlock(folio);
>> +	folio_put(folio);
>> +	return ret;
>> +}
>> +
>>   /*
>>    * Read, zero a chunk and write a block.
>>    *
>> @@ -4801,8 +4875,20 @@ int btrfs_truncate_block(struct btrfs_inode *inode, loff_t from, loff_t end,
>>   	if (end == (loff_t)-1)
>>   		ASSERT(where == BTRFS_TRUNCATE_HEAD_BLOCK);
>>   
>> -	if (IS_ALIGNED(from, blocksize) && IS_ALIGNED(end + 1, blocksize))
>> -		goto out;
>> +	if (IS_ALIGNED(from, blocksize) && IS_ALIGNED(end + 1, blocksize)) {
> 
> Does this need to depend on `where` now or is it still necessary to
> check both?

No, that's the beauty of passing the original range we want to truncate.
It always works no matter @where.

Thanks,
Qu


end of thread, other threads:[~2025-04-24  0:56 UTC | newest]

Thread overview: 7+ messages
2025-04-23 21:26 [PATCH v4 0/2] btrfs: fix corner cases for subpage generic/363 failures Qu Wenruo
2025-04-23 21:26 ` [PATCH v4 1/2] btrfs: make btrfs_truncate_block() to zero involved blocks in a folio Qu Wenruo
2025-04-24  0:14   ` Boris Burkov
2025-04-24  0:54     ` Qu Wenruo
2025-04-23 21:26 ` [PATCH v4 2/2] btrfs: make btrfs_truncate_block() zero folio range for certain subpage corner cases Qu Wenruo
2025-04-24  0:27   ` Boris Burkov
2025-04-24  0:56     ` Qu Wenruo
