fstests.vger.kernel.org archive mirror
* [PATCH 0/7] btrfs: Misc test fixes for large block/node sizes
@ 2025-07-29  6:21 Nirjhar Roy (IBM)
  2025-07-29  6:21 ` [PATCH 1/7] common/filter: Add a helper function to filter offsets and sizes Nirjhar Roy (IBM)
                   ` (6 more replies)
  0 siblings, 7 replies; 28+ messages in thread
From: Nirjhar Roy (IBM) @ 2025-07-29  6:21 UTC (permalink / raw)
  To: fstests
  Cc: linux-btrfs, ritesh.list, ojaswin, djwong, zlang, fdmanana,
	nirjhar.roy.lists

Some of the btrfs and generic tests are written with a 4k block/node size in mind.
This patch series fixes some of those tests so that they scale to other block/node sizes too.
We caught these bugs while running the auto group tests on btrfs with large
block/node sizes.

Nirjhar Roy (IBM) (7):
  common/filter: Add a helper function to filter offsets and sizes
  common/btrfs: Add a helper function to get the nodesize
  btrfs/137: Make this compatible with all block sizes
  btrfs/200: Make this test scale with the block size
  generic/563: Increase the write tolerance to 6% for larger nodesize
  btrfs/301: Make this test compatible with all block sizes.
  generic/274: Make the test compatible with all blocksizes.

 common/btrfs        |   9 +++
 common/filter       |  14 +++++
 tests/btrfs/137     | 135 +++++++++++++++++++++++++++++---------------
 tests/btrfs/137.out |  59 ++-----------------
 tests/btrfs/200     |  24 +++++---
 tests/btrfs/200.out |   8 +--
 tests/btrfs/301     |   8 ++-
 tests/generic/274   |  21 +++----
 tests/generic/563   |  17 +++++-
 9 files changed, 171 insertions(+), 124 deletions(-)

--
2.34.1


^ permalink raw reply	[flat|nested] 28+ messages in thread

* [PATCH 1/7] common/filter: Add a helper function to filter offsets and sizes
  2025-07-29  6:21 [PATCH 0/7] btrfs: Misc test fixes for large block/node sizes Nirjhar Roy (IBM)
@ 2025-07-29  6:21 ` Nirjhar Roy (IBM)
  2025-07-29  6:21 ` [PATCH 2/7] common/btrfs: Add a helper function to get the nodesize Nirjhar Roy (IBM)
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 28+ messages in thread
From: Nirjhar Roy (IBM) @ 2025-07-29  6:21 UTC (permalink / raw)
  To: fstests
  Cc: linux-btrfs, ritesh.list, ojaswin, djwong, zlang, fdmanana,
	nirjhar.roy.lists

_filter_xfs_io_size_offset() filters out the size and the offset
emitted by various subcommands of xfs_io like pwrite.
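As a quick illustration (a standalone sketch, not part of the patch), the substitution the helper performs can be reproduced with the same sed expressions, fed a sample xfs_io output line:

```shell
# Mimic _filter_xfs_io_size_offset 1052672 4096 on one sample line.
offset=1052672
size=4096
echo "wrote 4096/4096 bytes at offset 1052672" | \
	sed -e "s#${size}/${size}#SIZE/SIZE#g" \
	    -e "s#offset ${offset}#offset OFFSET#g"
# prints: wrote SIZE/SIZE bytes at offset OFFSET
```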

Signed-off-by: Nirjhar Roy (IBM) <nirjhar.roy.lists@gmail.com>
---
 common/filter | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/common/filter b/common/filter
index bbe13f4c..45188a5a 100644
--- a/common/filter
+++ b/common/filter
@@ -115,6 +115,20 @@ _filter_date()
 	-e 's/[A-Z][a-z][a-z] [A-z][a-z][a-z]  *[0-9][0-9]* [0-9][0-9]:[0-9][0-9]:[0-9][0-9] [0-9][0-9][0-9][0-9]$/DATE/'
 }
 
+# This filters out the offsets and sizes emitted by
+# various subcommands of xfs_io. For example
+# "wrote 4096/4096 bytes at offset 1052672" will be
+# converted to
+# "wrote SIZE/SIZE bytes at offset OFFSET".
+# usage: _filter_xfs_io_size_offset <offset> <size>
+_filter_xfs_io_size_offset()
+{
+	local offset="$1"
+	local size="$2"
+	sed -e "s#${size}/${size}#SIZE/SIZE#g" \
+		-e "s#offset ${offset}#offset OFFSET#g"
+}
+
 # prints filtered output on stdout, values (use eval) on stderr
 # Non XFS filesystems always return a 4k block size and a 256 byte inode.
 _filter_mkfs()
-- 
2.34.1



* [PATCH 2/7] common/btrfs: Add a helper function to get the nodesize
  2025-07-29  6:21 [PATCH 0/7] btrfs: Misc test fixes for large block/node sizes Nirjhar Roy (IBM)
  2025-07-29  6:21 ` [PATCH 1/7] common/filter: Add a helper function to filter offsets and sizes Nirjhar Roy (IBM)
@ 2025-07-29  6:21 ` Nirjhar Roy (IBM)
  2025-07-29  6:21 ` [PATCH 3/7] btrfs/137: Make this compatible with all block sizes Nirjhar Roy (IBM)
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 28+ messages in thread
From: Nirjhar Roy (IBM) @ 2025-07-29  6:21 UTC (permalink / raw)
  To: fstests
  Cc: linux-btrfs, ritesh.list, ojaswin, djwong, zlang, fdmanana,
	nirjhar.roy.lists

Introduce a helper function to get the nodesize of the btrfs
filesystem.
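The extraction this helper performs can be sketched standalone by feeding the same awk expression an abbreviated (hypothetical) dump-super output instead of a real device:

```shell
# Simulated `btrfs inspect-internal dump-super -f` output, trimmed to
# a few fields of interest; the values here are made up for illustration.
printf 'sectorsize\t4096\nnodesize\t16384\nleafsize\t16384\n' | \
	awk '/nodesize/ { print $2 }'
# prints: 16384
```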

Signed-off-by: Nirjhar Roy (IBM) <nirjhar.roy.lists@gmail.com>
---
 common/btrfs | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/common/btrfs b/common/btrfs
index 6a1095ff..2922eb8e 100644
--- a/common/btrfs
+++ b/common/btrfs
@@ -18,6 +18,15 @@ _btrfs_get_subvolid()
 	$BTRFS_UTIL_PROG subvolume list $mnt | grep -E "\s$name$" | $AWK_PROG '{ print $2 }'
 }
 
+# Returns the nodesize of the given btrfs filesystem device.
+# usage: _get_btrfs_node_size <device_name>
+_get_btrfs_node_size()
+{
+        local dev=$1
+        $BTRFS_UTIL_PROG inspect-internal dump-super -f "$dev" \
+                 | awk '/nodesize/ { print $2 }'
+}
+
 # _require_btrfs_command <command> [<subcommand>|<option>]
 # We check for btrfs and (optionally) features of the btrfs command
 # This function support both subfunction like "inspect-internal dump-tree" and
-- 
2.34.1



* [PATCH 3/7] btrfs/137: Make this compatible with all block sizes
  2025-07-29  6:21 [PATCH 0/7] btrfs: Misc test fixes for large block/node sizes Nirjhar Roy (IBM)
  2025-07-29  6:21 ` [PATCH 1/7] common/filter: Add a helper function to filter offsets and sizes Nirjhar Roy (IBM)
  2025-07-29  6:21 ` [PATCH 2/7] common/btrfs: Add a helper function to get the nodesize Nirjhar Roy (IBM)
@ 2025-07-29  6:21 ` Nirjhar Roy (IBM)
  2025-08-04  3:58   ` Qu Wenruo
  2025-07-29  6:21 ` [PATCH 4/7] btrfs/200: Make this test scale with the block size Nirjhar Roy (IBM)
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 28+ messages in thread
From: Nirjhar Roy (IBM) @ 2025-07-29  6:21 UTC (permalink / raw)
  To: fstests
  Cc: linux-btrfs, ritesh.list, ojaswin, djwong, zlang, fdmanana,
	nirjhar.roy.lists

For large blocksizes like 64k on powerpc with a 64k pagesize,
this test failed simply because it was written with a 4k
block size in mind.
The first few lines of the error log are as follows:

     d3dc847171f9081bd75d7a2d3b53d322  SCRATCH_MNT/snap2/bar

     File snap1/foo fiemap results in the original filesystem:
    -0: [0..7]: data
    +0: [0..127]: data

     File snap1/bar fiemap results in the original filesystem:
    ...

Fix this by making the test choose offsets based on
the blocksize. Also, now that the file hashes and
the extent/block numbers will change depending on the
blocksize, calculate the hashes and the block mappings,
store them in temporary files and then calculate their diff
between the new and the original filesystem.
This allows us to remove all the block mapping and hashes
from the .out file.
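The compare-via-temp-files pattern described above can be sketched in isolation (the directory and file names here are illustrative, not the test's actual ones):

```shell
# Compute a checksum twice, save each to a temp file, and diff them;
# diff prints nothing and returns 0 when the hashes match.
tmp=$(mktemp -d)
echo "abc" | md5sum > "$tmp/foo.original.hash"
echo "abc" | md5sum > "$tmp/foo.new.hash"
diff "$tmp/foo.new.hash" "$tmp/foo.original.hash" && echo "hashes match"
rm -rf "$tmp"
```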

Reported-by: Disha Goel <disgoel@linux.ibm.com>
Signed-off-by: Nirjhar Roy (IBM) <nirjhar.roy.lists@gmail.com>
---
 tests/btrfs/137     | 135 +++++++++++++++++++++++++++++---------------
 tests/btrfs/137.out |  59 ++-----------------
 2 files changed, 94 insertions(+), 100 deletions(-)

diff --git a/tests/btrfs/137 b/tests/btrfs/137
index 7710dc18..61e983cb 100755
--- a/tests/btrfs/137
+++ b/tests/btrfs/137
@@ -27,53 +27,74 @@ _require_xfs_io_command "fiemap"
 send_files_dir=$TEST_DIR/btrfs-test-$seq
 
 rm -fr $send_files_dir
-mkdir $send_files_dir
+mkdir $send_files_dir $tmp
 
 _scratch_mkfs >>$seqres.full 2>&1
 _scratch_mount
 
+blksz=`_get_block_size $SCRATCH_MNT`
+echo "block size = $blksz" >> $seqres.full
+
 # Create the first test file.
-$XFS_IO_PROG -f -c "pwrite -S 0xaa 0 4K" $SCRATCH_MNT/foo | _filter_xfs_io
+$XFS_IO_PROG -f -c "pwrite -S 0xaa -b $blksz 0 $blksz" $SCRATCH_MNT/foo | _filter_xfs_io | \
+	_filter_xfs_io_size_offset 0 $blksz
 
 # Create a second test file with a 1Mb hole.
 $XFS_IO_PROG -f \
-     -c "pwrite -S 0xaa 0 4K" \
-     -c "pwrite -S 0xbb 1028K 4K" \
-     $SCRATCH_MNT/bar | _filter_xfs_io
+ 	-c "pwrite -S 0xaa -b $blksz 0 $blksz" \
+ 	-c "pwrite -S 0xbb -b $blksz $(( 1024 * 1024 + blksz )) $blksz" \
+ 	$SCRATCH_MNT/bar | _filter_xfs_io | \
+	_filter_xfs_io_size_offset "$(( 1024 * 1024 + blksz ))" $blksz | \
+ 	_filter_xfs_io_size_offset 0 $blksz
 
 $BTRFS_UTIL_PROG subvolume snapshot -r $SCRATCH_MNT \
 	$SCRATCH_MNT/snap1 >/dev/null
 
 # Now add one new extent to our first test file, increasing its size and leaving
 # a 1Mb hole between the first extent and this new extent.
-$XFS_IO_PROG -c "pwrite -S 0xbb 1028K 4K" $SCRATCH_MNT/foo | _filter_xfs_io
+$XFS_IO_PROG -c "pwrite -S 0xbb -b $blksz $(( 1024 * 1024 + blksz )) $blksz" $SCRATCH_MNT/foo \
+	| _filter_xfs_io | _filter_xfs_io_size_offset "$(( 1024 * 1024 + blksz ))" $blksz
 
 # Now overwrite the last extent of our second test file.
-$XFS_IO_PROG -c "pwrite -S 0xcc 1028K 4K" $SCRATCH_MNT/bar | _filter_xfs_io
+$XFS_IO_PROG -c "pwrite -S 0xcc -b $blksz $(( 1024 * 1024 + blksz )) $blksz" $SCRATCH_MNT/bar \
+	| _filter_xfs_io | _filter_xfs_io_size_offset "$(( 1024 * 1024 + blksz ))" $blksz
 
 $BTRFS_UTIL_PROG subvolume snapshot -r $SCRATCH_MNT \
 		 $SCRATCH_MNT/snap2 >/dev/null
 
-echo
-echo "File digests in the original filesystem:"
-md5sum $SCRATCH_MNT/snap1/foo | _filter_scratch
-md5sum $SCRATCH_MNT/snap1/bar | _filter_scratch
-md5sum $SCRATCH_MNT/snap2/foo | _filter_scratch
-md5sum $SCRATCH_MNT/snap2/bar | _filter_scratch
-
-echo
-echo "File snap1/foo fiemap results in the original filesystem:"
-$XFS_IO_PROG -r -c "fiemap -v" $SCRATCH_MNT/snap1/foo | _filter_fiemap
-echo
-echo "File snap1/bar fiemap results in the original filesystem:"
-$XFS_IO_PROG -r -c "fiemap -v" $SCRATCH_MNT/snap1/bar | _filter_fiemap
-echo
-echo "File snap2/foo fiemap results in the original filesystem:"
-$XFS_IO_PROG -r -c "fiemap -v" $SCRATCH_MNT/snap2/foo | _filter_fiemap
-echo
-echo "File snap2/bar fiemap results in the original filesystem:"
-$XFS_IO_PROG -r -c "fiemap -v" $SCRATCH_MNT/snap2/bar | _filter_fiemap
-echo
+echo >> $seqres.full
+
+echo "File digests in the original filesystem:" >> $seqres.full
+md5sum $SCRATCH_MNT/snap1/foo | _filter_scratch > $tmp/snap1_foo.original.hash
+cat $tmp/snap1_foo.original.hash >> $seqres.full
+md5sum $SCRATCH_MNT/snap1/bar | _filter_scratch > $tmp/snap1_bar.original.hash
+cat $tmp/snap1_bar.original.hash >> $seqres.full
+md5sum $SCRATCH_MNT/snap2/foo | _filter_scratch > $tmp/snap2_foo.original.hash
+cat $tmp/snap2_foo.original.hash >> $seqres.full
+md5sum $SCRATCH_MNT/snap2/bar | _filter_scratch > $tmp/snap2_bar.original.hash
+cat $tmp/snap2_bar.original.hash >> $seqres.full
+
+echo >> $seqres.full
+
+echo "File snap1/foo fiemap results in the original filesystem:" >> $seqres.full
+$XFS_IO_PROG -r -c "fiemap -v" $SCRATCH_MNT/snap1/foo | _filter_fiemap > $tmp/snap1_foo.original.map
+cat $tmp/snap1_foo.original.map >> $seqres.full
+echo >> $seqres.full
+
+echo "File snap1/bar fiemap results in the original filesystem:" >> $seqres.full
+$XFS_IO_PROG -r -c "fiemap -v" $SCRATCH_MNT/snap1/bar | _filter_fiemap > $tmp/snap1_bar.original.map
+cat $tmp/snap1_bar.original.map >> $seqres.full
+echo >> $seqres.full
+
+echo "File snap2/foo fiemap results in the original filesystem:" >> $seqres.full
+$XFS_IO_PROG -r -c "fiemap -v" $SCRATCH_MNT/snap2/foo | _filter_fiemap > $tmp/snap2_foo.original.map
+cat $tmp/snap2_foo.original.map >> $seqres.full
+echo >> $seqres.full
+
+echo "File snap2/bar fiemap results in the original filesystem:" >> $seqres.full
+$XFS_IO_PROG -r -c "fiemap -v" $SCRATCH_MNT/snap2/bar | _filter_fiemap > $tmp/snap2_bar.original.map
+cat $tmp/snap2_bar.original.map >> $seqres.full
+echo >> $seqres.full
 
 # Create the send streams to apply later on a new filesystem.
 $BTRFS_UTIL_PROG send -f $send_files_dir/1.snap $SCRATCH_MNT/snap1 2>&1 \
@@ -90,25 +111,47 @@ _scratch_mount
 $BTRFS_UTIL_PROG receive -f $send_files_dir/1.snap $SCRATCH_MNT >/dev/null
 $BTRFS_UTIL_PROG receive -f $send_files_dir/2.snap $SCRATCH_MNT >/dev/null
 
-echo
-echo "File digests in the new filesystem:"
-md5sum $SCRATCH_MNT/snap1/foo | _filter_scratch
-md5sum $SCRATCH_MNT/snap1/bar | _filter_scratch
-md5sum $SCRATCH_MNT/snap2/foo | _filter_scratch
-md5sum $SCRATCH_MNT/snap2/bar | _filter_scratch
-
-echo
-echo "File snap1/foo fiemap results in the new filesystem:"
-$XFS_IO_PROG -r -c "fiemap -v" $SCRATCH_MNT/snap1/foo | _filter_fiemap
-echo
-echo "File snap1/bar fiemap results in the new filesystem:"
-$XFS_IO_PROG -r -c "fiemap -v" $SCRATCH_MNT/snap1/bar | _filter_fiemap
-echo
-echo "File snap2/foo fiemap results in the new filesystem:"
-$XFS_IO_PROG -r -c "fiemap -v" $SCRATCH_MNT/snap2/foo | _filter_fiemap
-echo
-echo "File snap2/bar fiemap results in the new filesystem:"
-$XFS_IO_PROG -r -c "fiemap -v" $SCRATCH_MNT/snap2/bar | _filter_fiemap
+echo >> $seqres.full
+echo "File digests in the new filesystem:" >> $seqres.full
+md5sum $SCRATCH_MNT/snap1/foo | _filter_scratch > $tmp/snap1_foo.new.hash
+cat $tmp/snap1_foo.new.hash >> $seqres.full
+md5sum $SCRATCH_MNT/snap1/bar | _filter_scratch > $tmp/snap1_bar.new.hash
+cat $tmp/snap1_bar.new.hash >> $seqres.full
+md5sum $SCRATCH_MNT/snap2/foo | _filter_scratch > $tmp/snap2_foo.new.hash
+cat $tmp/snap2_foo.new.hash >> $seqres.full
+md5sum $SCRATCH_MNT/snap2/bar | _filter_scratch > $tmp/snap2_bar.new.hash
+cat $tmp/snap2_bar.new.hash >> $seqres.full
+
+diff $tmp/snap1_foo.new.hash $tmp/snap1_foo.original.hash
+diff $tmp/snap1_bar.new.hash $tmp/snap1_bar.original.hash
+diff $tmp/snap2_foo.new.hash $tmp/snap2_foo.original.hash
+diff $tmp/snap2_bar.new.hash $tmp/snap2_bar.original.hash
+
+echo >> $seqres.full
+
+echo "File snap1/foo fiemap results in the new filesystem:" >> $seqres.full
+$XFS_IO_PROG -r -c "fiemap -v" $SCRATCH_MNT/snap1/foo | _filter_fiemap > $tmp/snap1_foo.new.map
+cat $tmp/snap1_foo.new.map >> $seqres.full
+echo >> $seqres.full
+
+echo "File snap1/bar fiemap results in the new filesystem:" >> $seqres.full
+$XFS_IO_PROG -r -c "fiemap -v" $SCRATCH_MNT/snap1/bar | _filter_fiemap > $tmp/snap1_bar.new.map
+cat $tmp/snap1_bar.new.map >> $seqres.full
+echo >> $seqres.full
+
+echo "File snap2/foo fiemap results in the new filesystem:" >> $seqres.full
+$XFS_IO_PROG -r -c "fiemap -v" $SCRATCH_MNT/snap2/foo | _filter_fiemap > $tmp/snap2_foo.new.map
+cat $tmp/snap2_foo.new.map >> $seqres.full
+echo >> $seqres.full
+
+echo "File snap2/bar fiemap results in the new filesystem:" >> $seqres.full
+$XFS_IO_PROG -r -c "fiemap -v" $SCRATCH_MNT/snap2/bar | _filter_fiemap > $tmp/snap2_bar.new.map
+cat $tmp/snap2_bar.new.map >> $seqres.full
+
+diff $tmp/snap1_foo.new.map $tmp/snap1_foo.original.map
+diff $tmp/snap1_bar.new.map $tmp/snap1_bar.original.map
+diff $tmp/snap2_foo.new.map $tmp/snap2_foo.original.map
+diff $tmp/snap2_bar.new.map $tmp/snap2_bar.original.map
 
 status=0
 exit
diff --git a/tests/btrfs/137.out b/tests/btrfs/137.out
index 8554399f..ea9f426c 100644
--- a/tests/btrfs/137.out
+++ b/tests/btrfs/137.out
@@ -1,63 +1,14 @@
 QA output created by 137
-wrote 4096/4096 bytes at offset 0
+wrote SIZE/SIZE bytes at offset OFFSET
 XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
-wrote 4096/4096 bytes at offset 0
+wrote SIZE/SIZE bytes at offset OFFSET
 XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
-wrote 4096/4096 bytes at offset 1052672
+wrote SIZE/SIZE bytes at offset OFFSET
 XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
-wrote 4096/4096 bytes at offset 1052672
+wrote SIZE/SIZE bytes at offset OFFSET
 XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
-wrote 4096/4096 bytes at offset 1052672
+wrote SIZE/SIZE bytes at offset OFFSET
 XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
-
-File digests in the original filesystem:
-3e4309c7cc81f23d45e260a8f13ca860  SCRATCH_MNT/snap1/foo
-f3934f0cf164e2efa1bab71f2f164990  SCRATCH_MNT/snap1/bar
-f3934f0cf164e2efa1bab71f2f164990  SCRATCH_MNT/snap2/foo
-d3dc847171f9081bd75d7a2d3b53d322  SCRATCH_MNT/snap2/bar
-
-File snap1/foo fiemap results in the original filesystem:
-0: [0..7]: data
-
-File snap1/bar fiemap results in the original filesystem:
-0: [0..7]: data
-1: [8..2055]: hole
-2: [2056..2063]: data
-
-File snap2/foo fiemap results in the original filesystem:
-0: [0..7]: data
-1: [8..2055]: hole
-2: [2056..2063]: data
-
-File snap2/bar fiemap results in the original filesystem:
-0: [0..7]: data
-1: [8..2055]: hole
-2: [2056..2063]: data
-
 At subvol SCRATCH_MNT/snap1
 At subvol SCRATCH_MNT/snap2
 At subvol snap1
-
-File digests in the new filesystem:
-3e4309c7cc81f23d45e260a8f13ca860  SCRATCH_MNT/snap1/foo
-f3934f0cf164e2efa1bab71f2f164990  SCRATCH_MNT/snap1/bar
-f3934f0cf164e2efa1bab71f2f164990  SCRATCH_MNT/snap2/foo
-d3dc847171f9081bd75d7a2d3b53d322  SCRATCH_MNT/snap2/bar
-
-File snap1/foo fiemap results in the new filesystem:
-0: [0..7]: data
-
-File snap1/bar fiemap results in the new filesystem:
-0: [0..7]: data
-1: [8..2055]: hole
-2: [2056..2063]: data
-
-File snap2/foo fiemap results in the new filesystem:
-0: [0..7]: data
-1: [8..2055]: hole
-2: [2056..2063]: data
-
-File snap2/bar fiemap results in the new filesystem:
-0: [0..7]: data
-1: [8..2055]: hole
-2: [2056..2063]: data
-- 
2.34.1



* [PATCH 4/7] btrfs/200: Make this test scale with the block size
  2025-07-29  6:21 [PATCH 0/7] btrfs: Misc test fixes for large block/node sizes Nirjhar Roy (IBM)
                   ` (2 preceding siblings ...)
  2025-07-29  6:21 ` [PATCH 3/7] btrfs/137: Make this compatible with all block sizes Nirjhar Roy (IBM)
@ 2025-07-29  6:21 ` Nirjhar Roy (IBM)
  2025-07-29  6:53   ` Filipe Manana
  2025-08-04  4:19   ` Qu Wenruo
  2025-07-29  6:21 ` [PATCH 5/7] generic/563: Increase the write tolerance to 6% for larger nodesize Nirjhar Roy (IBM)
                   ` (2 subsequent siblings)
  6 siblings, 2 replies; 28+ messages in thread
From: Nirjhar Roy (IBM) @ 2025-07-29  6:21 UTC (permalink / raw)
  To: fstests
  Cc: linux-btrfs, ritesh.list, ojaswin, djwong, zlang, fdmanana,
	nirjhar.roy.lists

For large block sizes like 64k on powerpc with a 64k
pagesize, this test failed because it was hardcoded
to work with a 4k blocksize.
With a 4k blocksize and the existing file lengths
we get 2 extents, but with a 64k block size the
number of extents does not exceed 1 (due to the
relatively small file size).
The first few lines of the error message are as follows:
     At snapshot incr
     OK
     OK
    +File foo does not have 2 shared extents in the base snapshot
    +/mnt/scratch/base/foo:
    +   0: [0..255]: 26624..26879
    +File foo does not have 2 shared extents in the incr snapshot
    ...

Fix this by scaling the size and offsets to scale with the block
size by a factor of (blocksize/4k).
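The scaling can be sanity-checked standalone (a sketch of the arithmetic used by the fix): with scale = blksz / 1024, a 4k blocksize reproduces the original 64K offsets and sizes, and larger blocksizes grow proportionally:

```shell
for blksz in 4096 65536; do
	scale=$(( blksz / 1024 ))
	offset=$(( 16 * 1024 * scale ))
	echo "blksz=$blksz -> offset/size=$offset"
done
# blksz=4096  -> offset/size=65536   (the original 64K)
# blksz=65536 -> offset/size=1048576
```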

Reported-by: Disha Goel <disgoel@linux.ibm.com>
Signed-off-by: Nirjhar Roy (IBM) <nirjhar.roy.lists@gmail.com>
---
 tests/btrfs/200     | 24 ++++++++++++++++--------
 tests/btrfs/200.out |  8 ++++----
 2 files changed, 20 insertions(+), 12 deletions(-)

diff --git a/tests/btrfs/200 b/tests/btrfs/200
index e62937a4..fd2c2026 100755
--- a/tests/btrfs/200
+++ b/tests/btrfs/200
@@ -35,18 +35,26 @@ mkdir $send_files_dir
 _scratch_mkfs >>$seqres.full 2>&1
 _scratch_mount
 
+blksz=`_get_block_size $SCRATCH_MNT`
+echo "block size = $blksz" >> $seqres.full
+
+# Scale the test with any block size starting from 1k
+scale=$(( blksz / 1024 ))
+offset=$(( 16 * 1024 * scale ))
+size=$(( 16 * 1024 * scale ))
+
 # Create our first test file, which has an extent that is shared only with
 # itself and no other files. We want to verify a full send operation will
 # clone the extent.
-$XFS_IO_PROG -f -c "pwrite -S 0xb1 -b 64K 0 64K" $SCRATCH_MNT/foo \
-	| _filter_xfs_io
-$XFS_IO_PROG -c "reflink $SCRATCH_MNT/foo 0 64K 64K" $SCRATCH_MNT/foo \
-	| _filter_xfs_io
+$XFS_IO_PROG -f -c "pwrite -S 0xb1 -b $size 0 $size" $SCRATCH_MNT/foo \
+	| _filter_xfs_io | _filter_xfs_io_size_offset 0 $size
+$XFS_IO_PROG -c "reflink $SCRATCH_MNT/foo 0 $offset $size" $SCRATCH_MNT/foo \
+	| _filter_xfs_io | _filter_xfs_io_size_offset $offset $size
 
 # Create out second test file which initially, for the first send operation,
 # only has a single extent that is not shared.
-$XFS_IO_PROG -f -c "pwrite -S 0xc7 -b 64K 0 64K" $SCRATCH_MNT/bar \
-	| _filter_xfs_io
+$XFS_IO_PROG -f -c "pwrite -S 0xc7 -b $size 0 $size" $SCRATCH_MNT/bar \
+	| _filter_xfs_io | _filter_xfs_io_size_offset 0 $size
 
 _btrfs subvolume snapshot -r $SCRATCH_MNT $SCRATCH_MNT/base
 
@@ -56,8 +64,8 @@ $BTRFS_UTIL_PROG send -f $send_files_dir/1.snap $SCRATCH_MNT/base 2>&1 \
 # Now clone the existing extent in file bar to itself at a different offset.
 # We want to verify the incremental send operation below will issue a clone
 # operation instead of a write operation.
-$XFS_IO_PROG -c "reflink $SCRATCH_MNT/bar 0 64K 64K" $SCRATCH_MNT/bar \
-	| _filter_xfs_io
+$XFS_IO_PROG -c "reflink $SCRATCH_MNT/bar 0 $offset $size" $SCRATCH_MNT/bar \
+	| _filter_xfs_io | _filter_xfs_io_size_offset $offset $size
 
 _btrfs subvolume snapshot -r $SCRATCH_MNT $SCRATCH_MNT/incr
 
diff --git a/tests/btrfs/200.out b/tests/btrfs/200.out
index 306d9b24..4a10e506 100644
--- a/tests/btrfs/200.out
+++ b/tests/btrfs/200.out
@@ -1,12 +1,12 @@
 QA output created by 200
-wrote 65536/65536 bytes at offset 0
+wrote SIZE/SIZE bytes at offset OFFSET
 XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
-linked 65536/65536 bytes at offset 65536
+linked SIZE/SIZE bytes at offset OFFSET
 XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
-wrote 65536/65536 bytes at offset 0
+wrote SIZE/SIZE bytes at offset OFFSET
 XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
 At subvol SCRATCH_MNT/base
-linked 65536/65536 bytes at offset 65536
+linked SIZE/SIZE bytes at offset OFFSET
 XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
 At subvol SCRATCH_MNT/incr
 At subvol base
-- 
2.34.1



* [PATCH 5/7] generic/563: Increase the write tolerance to 6% for larger nodesize
  2025-07-29  6:21 [PATCH 0/7] btrfs: Misc test fixes for large block/node sizes Nirjhar Roy (IBM)
                   ` (3 preceding siblings ...)
  2025-07-29  6:21 ` [PATCH 4/7] btrfs/200: Make this test scale with the block size Nirjhar Roy (IBM)
@ 2025-07-29  6:21 ` Nirjhar Roy (IBM)
  2025-07-29  7:45   ` Christoph Hellwig
                     ` (2 more replies)
  2025-07-29  6:21 ` [PATCH 6/7] btrfs/301: Make this test compatible with all block sizes Nirjhar Roy (IBM)
  2025-07-29  6:21 ` [PATCH 7/7] generic/274: Make the test compatible with all blocksizes Nirjhar Roy (IBM)
  6 siblings, 3 replies; 28+ messages in thread
From: Nirjhar Roy (IBM) @ 2025-07-29  6:21 UTC (permalink / raw)
  To: fstests
  Cc: linux-btrfs, ritesh.list, ojaswin, djwong, zlang, fdmanana,
	nirjhar.roy.lists

When tested on btrfs with a 64K blocksize/nodesize on powerpc
with a 64k pagesize, the test fails
with the following error:
     QA output created by 563
     read/write
     read is in range
    -write is in range
    +write has value of 8855552
    +write is NOT in range 7969177.6 .. 8808038.4
     write -> read/write
    ...
The slight increase in the number of bytes
written is due to the increase in the
nodesize (metadata), which makes it exceed the tolerance limit slightly.
Fix this by increasing the write tolerance limit from 5% to 6%
for 64k blocksize btrfs.
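The tolerance arithmetic can be checked against the numbers in the failure log above (the iosize of 8388608 is inferred from the logged range 7969177.6 .. 8808038.4, i.e. 8388608 ± 5%):

```shell
# Compare the observed write byte count from the log against the
# 5% and 6% tolerance bounds around the inferred iosize.
awk 'BEGIN {
	io = 8388608; obs = 8855552     # inferred iosize, logged write bytes
	if (obs <= io * 1.05) print "within 5%"; else print "exceeds 5%"
	if (obs <= io * 1.06) print "within 6%"; else print "exceeds 6%"
}'
# exceeds 5%
# within 6%
```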

Reported-by: Disha Goel <disgoel@linux.ibm.com>
Signed-off-by: Nirjhar Roy (IBM) <nirjhar.roy.lists@gmail.com>
---
 tests/generic/563 | 17 ++++++++++++++++-
 1 file changed, 16 insertions(+), 1 deletion(-)

diff --git a/tests/generic/563 b/tests/generic/563
index 89a71aa4..efcac1ec 100755
--- a/tests/generic/563
+++ b/tests/generic/563
@@ -119,7 +119,22 @@ $XFS_IO_PROG -c "pread 0 $iosize" -c "pwrite -b $blksize 0 $iosize" -c fsync \
 	$SCRATCH_MNT/file >> $seqres.full 2>&1
 switch_cg $cgdir
 $XFS_IO_PROG -c fsync $SCRATCH_MNT/file
-check_cg $cgdir/$seq-cg $iosize $iosize 5% 5%
+blksz=`_get_block_size $SCRATCH_MNT`
+
+# On larger node sizes on btrfs we observed slightly more
+# writes, due to the increased metadata size.
+# Hence use a higher write tolerance for btrfs when the
+# node size is greater than 4k.
+if [[ "$FSTYP" == "btrfs" ]]; then
+	nodesz=$(_get_btrfs_node_size "$SCRATCH_DEV")
+	if [[ "$nodesz" -gt 4096 ]]; then
+		check_cg $cgdir/$seq-cg $iosize $iosize 5% 6%
+	else
+		check_cg $cgdir/$seq-cg $iosize $iosize 5% 5%
+	fi
+else
+	check_cg $cgdir/$seq-cg $iosize $iosize 5% 5%
+fi
 
 # Write from one cgroup then read and write from a second. Writes are charged to
 # the first group and nothing to the second.
-- 
2.34.1



* [PATCH 6/7] btrfs/301: Make this test compatible with all block sizes.
  2025-07-29  6:21 [PATCH 0/7] btrfs: Misc test fixes for large block/node sizes Nirjhar Roy (IBM)
                   ` (4 preceding siblings ...)
  2025-07-29  6:21 ` [PATCH 5/7] generic/563: Increase the write tolerance to 6% for larger nodesize Nirjhar Roy (IBM)
@ 2025-07-29  6:21 ` Nirjhar Roy (IBM)
  2025-08-04  4:32   ` Qu Wenruo
  2025-07-29  6:21 ` [PATCH 7/7] generic/274: Make the test compatible with all blocksizes Nirjhar Roy (IBM)
  6 siblings, 1 reply; 28+ messages in thread
From: Nirjhar Roy (IBM) @ 2025-07-29  6:21 UTC (permalink / raw)
  To: fstests
  Cc: linux-btrfs, ritesh.list, ojaswin, djwong, zlang, fdmanana,
	nirjhar.roy.lists

With large block sizes like 64k on powerpc with 64k pagesize
the test failed with the following logs:

     QA output created by 301
     basic accounting
    +subvol 256 mismatched usage 33947648 vs 4587520 \
         (expected data 4194304 expected meta 393216 diff 29360128)
    +subvol 256 mismatched usage 168165376 vs 138805248 \
	(expected data 138412032 expected meta 393216 diff 29360128)
    +subvol 256 mismatched usage 33947648 vs 4587520 \
	(expected data 4194304 expected meta 393216 diff 29360128)
    +subvol 256 mismatched usage 33947648 vs 4587520 \
	(expected data 4194304 expected meta 393216 diff 29360128)
     fallocate: Disk quota exceeded
(Please note that the above output has been re-wrapped here, since
the original lines were much longer than 72 characters.)

The test creates nr_fill files, each of size 8k, i.e., 2x4k (stored
in fill_sz). With a 64k blocksize, these 8k files occupy more space
than expected (i.e., 8k each) due to internal fragmentation, since
each file occupies at least one block. Fix this by scaling the file
size (fill_sz) with the blocksize.
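The mismatch in the log can be reproduced arithmetically (a sketch using the test's own constants): 512 fill files of 8k each are expected to consume 4194304 bytes, but on a 64k-blocksize filesystem each file occupies a full 64k block:

```shell
nr_fill=512
expected=$(( nr_fill * 8 * 1024 ))    # 4194304, what the test assumes
actual=$(( nr_fill * 64 * 1024 ))     # 33554432, one 64k block per file
echo $(( actual - expected ))
# prints: 29360128, the "diff" reported in the failure log
```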

Reported-by: Disha Goel <disgoel@linux.ibm.com>
Signed-off-by: Nirjhar Roy (IBM) <nirjhar.roy.lists@gmail.com>
---
 tests/btrfs/301 | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/tests/btrfs/301 b/tests/btrfs/301
index 6b59749d..7547ff0e 100755
--- a/tests/btrfs/301
+++ b/tests/btrfs/301
@@ -23,7 +23,13 @@ subv=$SCRATCH_MNT/subv
 nested=$SCRATCH_MNT/subv/nested
 snap=$SCRATCH_MNT/snap
 nr_fill=512
-fill_sz=$((8 * 1024))
+
+_scratch_mkfs >> $seqres.full
+_scratch_mount
+blksz=`_get_block_size $SCRATCH_MNT`
+_scratch_unmount
+fill_sz=$(( 2 * blksz ))
+
 total_fill=$(($nr_fill * $fill_sz))
 nodesize=$($BTRFS_UTIL_PROG inspect-internal dump-super $SCRATCH_DEV | \
 					grep nodesize | $AWK_PROG '{print $2}')
-- 
2.34.1



* [PATCH 7/7] generic/274: Make the test compatible with all blocksizes.
  2025-07-29  6:21 [PATCH 0/7] btrfs: Misc test fixes for large block/node sizes Nirjhar Roy (IBM)
                   ` (5 preceding siblings ...)
  2025-07-29  6:21 ` [PATCH 6/7] btrfs/301: Make this test compatible with all block sizes Nirjhar Roy (IBM)
@ 2025-07-29  6:21 ` Nirjhar Roy (IBM)
  2025-08-04  4:35   ` Qu Wenruo
  6 siblings, 1 reply; 28+ messages in thread
From: Nirjhar Roy (IBM) @ 2025-07-29  6:21 UTC (permalink / raw)
  To: fstests
  Cc: linux-btrfs, ritesh.list, ojaswin, djwong, zlang, fdmanana,
	nirjhar.roy.lists

On btrfs with a 64k blocksize on powerpc with a 64k pagesize,
this test failed with the following error:

     ------------------------------
     preallocation test
     ------------------------------
    -done
    +failed to write to test file
    +(see /home/xfstests-dev/results//btrfs_64k/generic/274.full for details)
    ...
So, this test was written with a 4K block size in mind. As we can see,
it first creates a file of size 4K and then fallocates 4MB beyond the
EOF.
There are then 2 loops - one that fragments the preallocation by
writing at alternate blocks, and another that fills the holes at the
remaining alternate blocks. Hence, the test fails with a 64k block
size due to the incorrect offset calculations.

Fix this by making the test scale with the block size, i.e.,
the offsets, file sizes and assumed blocksize all scale with
the actual blocksize of the underlying filesystem.
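The rescaled geometry can be sketched standalone: with scale = blksz / 1024 the preallocation length is 1M * scale, so a 4k blocksize reproduces the original 4K-file/4M-prealloc layout and larger blocksizes scale up:

```shell
for blksz in 4096 65536; do
	scale=$(( blksz / 1024 ))
	falloc_len=$(( 1 * 1024 * 1024 * scale ))
	echo "blksz=$blksz file=$blksz prealloc=$falloc_len"
done
# prints prealloc=4194304 (the original 4M) for 4k, and 67108864 for 64k
```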

Reported-by: Disha Goel <disgoel@linux.ibm.com>
Signed-off-by: Nirjhar Roy (IBM) <nirjhar.roy.lists@gmail.com>
---
 tests/generic/274 | 21 +++++++++++----------
 1 file changed, 11 insertions(+), 10 deletions(-)

diff --git a/tests/generic/274 b/tests/generic/274
index 916c7173..4ea42f30 100755
--- a/tests/generic/274
+++ b/tests/generic/274
@@ -40,30 +40,31 @@ _scratch_unmount 2>/dev/null
 _scratch_mkfs_sized $((2 * 1024 * 1024 * 1024)) >>$seqres.full 2>&1
 _scratch_mount
 
-# Create a 4k file and Allocate 4M past EOF on that file
-$XFS_IO_PROG -f -c "pwrite 0 4k" -c "falloc -k 4k 4m" $SCRATCH_MNT/test \
-	>>$seqres.full 2>&1 || _fail "failed to create test file"
+blksz=`_get_block_size $SCRATCH_MNT`
+scale=$(( blksz / 1024 ))
+# Create a file one blocksize long and fallocate a large range past EOF on that file
+$XFS_IO_PROG -f -c "pwrite -b $blksz 0 $blksz" -c "falloc -k $blksz $(( 1 * 1024 * 1024 * scale ))" \
+	$SCRATCH_MNT/test >>$seqres.full 2>&1 || _fail "failed to create test file"
 
 # Fill the rest of the fs completely
 # Note, this will show ENOSPC errors in $seqres.full, that's ok.
 echo "Fill fs with 1M IOs; ENOSPC expected" >> $seqres.full
 dd if=/dev/zero of=$SCRATCH_MNT/tmp1 bs=1M >>$seqres.full 2>&1
-echo "Fill fs with 4K IOs; ENOSPC expected" >> $seqres.full
-dd if=/dev/zero of=$SCRATCH_MNT/tmp2 bs=4K >>$seqres.full 2>&1
+echo "Fill fs with $blksz IOs; ENOSPC expected" >> $seqres.full
+dd if=/dev/zero of=$SCRATCH_MNT/tmp2 bs=$blksz >>$seqres.full 2>&1
 _scratch_sync
 # Last effort, use O_SYNC
-echo "Fill fs with 4K DIOs; ENOSPC expected" >> $seqres.full
-dd if=/dev/zero of=$SCRATCH_MNT/tmp3 bs=4K oflag=sync >>$seqres.full 2>&1
+echo "Fill fs with $blksz DIOs; ENOSPC expected" >> $seqres.full
+dd if=/dev/zero of=$SCRATCH_MNT/tmp3 bs=$blksz oflag=sync >>$seqres.full 2>&1
 # Save space usage info
 echo "Post-fill space:" >> $seqres.full
 df $SCRATCH_MNT >>$seqres.full 2>&1
-
 # Now attempt a write into all of the preallocated space -
 # in a very nasty way, badly fragmenting it and then filling it in.
 echo "Fill in prealloc space; fragment at offsets:" >> $seqres.full
 for i in `seq 1 2 1023`; do
 	echo -n "$i " >> $seqres.full
-	dd if=/dev/zero of=$SCRATCH_MNT/test seek=$i bs=4K count=1 conv=notrunc \
+	dd if=/dev/zero of=$SCRATCH_MNT/test seek=$i bs=$blksz count=1 conv=notrunc \
 		>>$seqres.full 2>/dev/null || _fail "failed to write to test file"
 done
 _scratch_sync
@@ -71,7 +72,7 @@ echo >> $seqres.full
 echo "Fill in prealloc space; fill holes at offsets:" >> $seqres.full
 for i in `seq 2 2 1023`; do
 	echo -n "$i " >> $seqres.full
-	dd if=/dev/zero of=$SCRATCH_MNT/test seek=$i bs=4K count=1 conv=notrunc \
+	dd if=/dev/zero of=$SCRATCH_MNT/test seek=$i bs=$blksz count=1 conv=notrunc \
 		>>$seqres.full 2>/dev/null || _fail "failed to fill test file"
 done
 _scratch_sync
-- 
2.34.1



* Re: [PATCH 4/7] btrfs/200: Make this test scale with the block size
  2025-07-29  6:21 ` [PATCH 4/7] btrfs/200: Make this test scale with the block size Nirjhar Roy (IBM)
@ 2025-07-29  6:53   ` Filipe Manana
  2025-08-12  6:26     ` Nirjhar Roy (IBM)
  2025-08-04  4:19   ` Qu Wenruo
  1 sibling, 1 reply; 28+ messages in thread
From: Filipe Manana @ 2025-07-29  6:53 UTC (permalink / raw)
  To: Nirjhar Roy (IBM)
  Cc: fstests, linux-btrfs, ritesh.list, ojaswin, djwong, zlang

On Tue, Jul 29, 2025 at 7:24 AM Nirjhar Roy (IBM)
<nirjhar.roy.lists@gmail.com> wrote:
>
> For large block sizes like 64k on powerpc with 64k
> pagesize it failed because this test was hardcoded
> to work with 4k blocksize.

Where exactly is it hardcoded with 4K blocksize expectations?

The test does 64K writes and reflinks at offsets that are multiples of 64K (0 and 64K).
In fact, that's precisely why the test uses 64K writes and only the
file offsets 0 and 64K: so that it works with any block size.

> With blocksize 4k and the existing file lengths,
> we are getting 2 extents but with 64k page size
> number of extents is not exceeding 1(due to lower
> file size).

Due to lower file size? How?
The file sizes should be independent of the block size, and be 64K and
128K everywhere.

Please provide more details in the changelog.
Thanks.

> The first few lines of the error message is as follows:
>      At snapshot incr
>      OK
>      OK
>     +File foo does not have 2 shared extents in the base snapshot
>     +/mnt/scratch/base/foo:
>     +   0: [0..255]: 26624..26879
>     +File foo does not have 2 shared extents in the incr snapshot
>     ...
>
> Fix this by scaling the size and offsets to scale with the block
> size by a factor of (blocksize/4k).
>
> Reported-by: Disha Goel <disgoel@linux.ibm.com>
> Signed-off-by: Nirjhar Roy (IBM) <nirjhar.roy.lists@gmail.com>
> ---
>  tests/btrfs/200     | 24 ++++++++++++++++--------
>  tests/btrfs/200.out |  8 ++++----
>  2 files changed, 20 insertions(+), 12 deletions(-)
>
> diff --git a/tests/btrfs/200 b/tests/btrfs/200
> index e62937a4..fd2c2026 100755
> --- a/tests/btrfs/200
> +++ b/tests/btrfs/200
> @@ -35,18 +35,26 @@ mkdir $send_files_dir
>  _scratch_mkfs >>$seqres.full 2>&1
>  _scratch_mount
>
> +blksz=`_get_block_size $SCRATCH_MNT`
> +echo "block size = $blksz" >> $seqres.full
> +
> +# Scale the test with any block size starting from 1k
> +scale=$(( blksz / 1024 ))
> +offset=$(( 16 * 1024 * scale ))
> +size=$(( 16 * 1024 * scale ))
> +
>  # Create our first test file, which has an extent that is shared only with
>  # itself and no other files. We want to verify a full send operation will
>  # clone the extent.
> -$XFS_IO_PROG -f -c "pwrite -S 0xb1 -b 64K 0 64K" $SCRATCH_MNT/foo \
> -       | _filter_xfs_io
> -$XFS_IO_PROG -c "reflink $SCRATCH_MNT/foo 0 64K 64K" $SCRATCH_MNT/foo \
> -       | _filter_xfs_io
> +$XFS_IO_PROG -f -c "pwrite -S 0xb1 -b $size 0 $size" $SCRATCH_MNT/foo \
> +       | _filter_xfs_io | _filter_xfs_io_size_offset 0 $size
> +$XFS_IO_PROG -c "reflink $SCRATCH_MNT/foo 0 $offset $size" $SCRATCH_MNT/foo \
> +       | _filter_xfs_io | _filter_xfs_io_size_offset $offset $size
>
>  # Create out second test file which initially, for the first send operation,
>  # only has a single extent that is not shared.
> -$XFS_IO_PROG -f -c "pwrite -S 0xc7 -b 64K 0 64K" $SCRATCH_MNT/bar \
> -       | _filter_xfs_io
> +$XFS_IO_PROG -f -c "pwrite -S 0xc7 -b $size 0 $size" $SCRATCH_MNT/bar \
> +       | _filter_xfs_io | _filter_xfs_io_size_offset 0 $size
>
>  _btrfs subvolume snapshot -r $SCRATCH_MNT $SCRATCH_MNT/base
>
> @@ -56,8 +64,8 @@ $BTRFS_UTIL_PROG send -f $send_files_dir/1.snap $SCRATCH_MNT/base 2>&1 \
>  # Now clone the existing extent in file bar to itself at a different offset.
>  # We want to verify the incremental send operation below will issue a clone
>  # operation instead of a write operation.
> -$XFS_IO_PROG -c "reflink $SCRATCH_MNT/bar 0 64K 64K" $SCRATCH_MNT/bar \
> -       | _filter_xfs_io
> +$XFS_IO_PROG -c "reflink $SCRATCH_MNT/bar 0 $offset $size" $SCRATCH_MNT/bar \
> +       | _filter_xfs_io | _filter_xfs_io_size_offset $offset $size
>
>  _btrfs subvolume snapshot -r $SCRATCH_MNT $SCRATCH_MNT/incr
>
> diff --git a/tests/btrfs/200.out b/tests/btrfs/200.out
> index 306d9b24..4a10e506 100644
> --- a/tests/btrfs/200.out
> +++ b/tests/btrfs/200.out
> @@ -1,12 +1,12 @@
>  QA output created by 200
> -wrote 65536/65536 bytes at offset 0
> +wrote SIZE/SIZE bytes at offset OFFSET
>  XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
> -linked 65536/65536 bytes at offset 65536
> +linked SIZE/SIZE bytes at offset OFFSET
>  XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
> -wrote 65536/65536 bytes at offset 0
> +wrote SIZE/SIZE bytes at offset OFFSET
>  XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
>  At subvol SCRATCH_MNT/base
> -linked 65536/65536 bytes at offset 65536
> +linked SIZE/SIZE bytes at offset OFFSET
>  XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
>  At subvol SCRATCH_MNT/incr
>  At subvol base
> --
> 2.34.1
>

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 5/7] generic/563: Increase the write tolerance to 6% for larger nodesize
  2025-07-29  6:21 ` [PATCH 5/7] generic/563: Increase the write tolerance to 6% for larger nodesize Nirjhar Roy (IBM)
@ 2025-07-29  7:45   ` Christoph Hellwig
  2025-08-04  7:18     ` Nirjhar Roy (IBM)
  2025-07-30 15:06   ` Filipe Manana
  2025-08-04  4:28   ` Qu Wenruo
  2 siblings, 1 reply; 28+ messages in thread
From: Christoph Hellwig @ 2025-07-29  7:45 UTC (permalink / raw)
  To: Nirjhar Roy (IBM)
  Cc: fstests, linux-btrfs, ritesh.list, ojaswin, djwong, zlang,
	fdmanana

On Tue, Jul 29, 2025 at 06:21:48AM +0000, Nirjhar Roy (IBM) wrote:
> +# writes, due to increased metadata sizes.
> +# Hence have a higher write tolerance for btrfs and when
> +# node size is greater than 4k.
> +if [[ "$FSTYP" == "btrfs" ]]; then
> +	nodesz=$(_get_btrfs_node_size "$SCRATCH_DEV")
> +	if [[ "$nodesz" -gt 4096 ]]; then
> +		check_cg $cgdir/$seq-cg $iosize $iosize 5% 6%
> +	else
> +		check_cg $cgdir/$seq-cg $iosize $iosize 5% 5%
> +	fi
> +else
> +	check_cg $cgdir/$seq-cg $iosize $iosize 5% 5%
> +fi

Everyone please stop hacking file system specific things into generic
tests.  Add proper core helpers for this instead.
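One possible shape for such a core helper (a sketch only: the name `_fs_write_tolerance` and its placement in common/rc are hypothetical, `_get_btrfs_node_size` is the helper added in patch 2/7 of this series, and the fallback keeps the sketch self-contained):

```shell
# Hypothetical common/rc helper: print the write-side tolerance (in
# percent) that a cgroup-accounting test should allow for the current
# filesystem. Filesystems with large metadata blocks can charge
# slightly more writeback to a cgroup than the raw data written.
_fs_write_tolerance()
{
	case "$FSTYP" in
	btrfs)
		local nodesz
		# _get_btrfs_node_size comes from patch 2/7; fall back
		# to 4096 if it is unavailable so the sketch still runs.
		nodesz=$(_get_btrfs_node_size "$SCRATCH_DEV" 2>/dev/null || echo 4096)
		if [ "$nodesz" -gt 4096 ]; then
			echo "6%"
			return
		fi
		;;
	esac
	echo "5%"
}
```

generic/563 could then stay filesystem-agnostic with a single call:
`check_cg $cgdir/$seq-cg $iosize $iosize 5% $(_fs_write_tolerance)`.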

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 5/7] generic/563: Increase the write tolerance to 6% for larger nodesize
  2025-07-29  6:21 ` [PATCH 5/7] generic/563: Increase the write tolerance to 6% for larger nodesize Nirjhar Roy (IBM)
  2025-07-29  7:45   ` Christoph Hellwig
@ 2025-07-30 15:06   ` Filipe Manana
  2025-08-04  7:18     ` Nirjhar Roy (IBM)
  2025-08-04  4:28   ` Qu Wenruo
  2 siblings, 1 reply; 28+ messages in thread
From: Filipe Manana @ 2025-07-30 15:06 UTC (permalink / raw)
  To: Nirjhar Roy (IBM)
  Cc: fstests, linux-btrfs, ritesh.list, ojaswin, djwong, zlang

On Tue, Jul 29, 2025 at 7:24 AM Nirjhar Roy (IBM)
<nirjhar.roy.lists@gmail.com> wrote:
>
> When tested with blocksize/nodesize 64K on powerpc
> with 64k  pagesize on btrfs, then the test fails
> with the folllowing error:
>      QA output created by 563
>      read/write
>      read is in range
>     -write is in range
>     +write has value of 8855552
>     +write is NOT in range 7969177.6 .. 8808038.4
>      write -> read/write
>     ...
> The slight increase in the amount of bytes that
> are written is because of the increase in the
> the nodesize(metadata) and hence it exceeds the tolerance limit slightly.
> Fix this by increasing the write tolerance limit from 5% from 6%
> for 64k blocksize btrfs.
>
> Reported-by: Disha Goel <disgoel@linux.ibm.com>
> Signed-off-by: Nirjhar Roy (IBM) <nirjhar.roy.lists@gmail.com>
> ---
>  tests/generic/563 | 17 ++++++++++++++++-
>  1 file changed, 16 insertions(+), 1 deletion(-)
>
> diff --git a/tests/generic/563 b/tests/generic/563
> index 89a71aa4..efcac1ec 100755
> --- a/tests/generic/563
> +++ b/tests/generic/563
> @@ -119,7 +119,22 @@ $XFS_IO_PROG -c "pread 0 $iosize" -c "pwrite -b $blksize 0 $iosize" -c fsync \
>         $SCRATCH_MNT/file >> $seqres.full 2>&1
>  switch_cg $cgdir
>  $XFS_IO_PROG -c fsync $SCRATCH_MNT/file
> -check_cg $cgdir/$seq-cg $iosize $iosize 5% 5%
> +blksz=`_get_block_size $SCRATCH_MNT`

Here the block size is captured, but then it's not used anywhere in the test...

Thanks.

> +
> +# On higher node sizes on btrfs, we observed slightly more
> +# writes, due to increased metadata sizes.
> +# Hence have a higher write tolerance for btrfs and when
> +# node size is greater than 4k.
> +if [[ "$FSTYP" == "btrfs" ]]; then
> +       nodesz=$(_get_btrfs_node_size "$SCRATCH_DEV")
> +       if [[ "$nodesz" -gt 4096 ]]; then
> +               check_cg $cgdir/$seq-cg $iosize $iosize 5% 6%
> +       else
> +               check_cg $cgdir/$seq-cg $iosize $iosize 5% 5%
> +       fi
> +else
> +       check_cg $cgdir/$seq-cg $iosize $iosize 5% 5%
> +fi
>
>  # Write from one cgroup then read and write from a second. Writes are charged to
>  # the first group and nothing to the second.
> --
> 2.34.1
>

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 3/7] btrfs/137: Make this compatible with all block sizes
  2025-07-29  6:21 ` [PATCH 3/7] btrfs/137: Make this compatible with all block sizes Nirjhar Roy (IBM)
@ 2025-08-04  3:58   ` Qu Wenruo
  2025-08-05  9:41     ` Ojaswin Mujoo
  2025-08-12  6:22     ` Nirjhar Roy (IBM)
  0 siblings, 2 replies; 28+ messages in thread
From: Qu Wenruo @ 2025-08-04  3:58 UTC (permalink / raw)
  To: Nirjhar Roy (IBM), fstests
  Cc: linux-btrfs, ritesh.list, ojaswin, djwong, zlang, fdmanana



On 2025/7/29 15:51, Nirjhar Roy (IBM) wrote:
> For large blocksizes like 64k on powerpc with 64k pagesize
> it failed simply because this test was written with 4k
> block size in mind.
> The first few lines of the error logs are as follows:
> 
>       d3dc847171f9081bd75d7a2d3b53d322  SCRATCH_MNT/snap2/bar
> 
>       File snap1/foo fiemap results in the original filesystem:
>      -0: [0..7]: data
>      +0: [0..127]: data
> 
>       File snap1/bar fiemap results in the original filesystem:
>      ...
> 
> Fix this by making the test choose offsets based on
> the blocksize.

I'm wondering, why not just use a fixed 64K block size?

So that all supported btrfs block sizes produce the same file contents.

> Also, now that the file hashes and
> the extent/block numbers will change depending on the
> blocksize, calculate the hashes and the block mappings,
> store them in temporary files and then calculate their diff
> between the new and the original filesystem.
> This allows us to remove all the block mapping and hashes
> from the .out file.

I agree we should remove the block mappings from the golden output,
as compression can add extra flags and pollute it.

But that can also be done with _require_btrfs_no_compress() helper.

> 
> Reported-by: Disha Goel <disgoel@linux.ibm.com>
> Signed-off-by: Nirjhar Roy (IBM) <nirjhar.roy.lists@gmail.com>
> ---
>   tests/btrfs/137     | 135 +++++++++++++++++++++++++++++---------------
>   tests/btrfs/137.out |  59 ++-----------------
>   2 files changed, 94 insertions(+), 100 deletions(-)
> 
> diff --git a/tests/btrfs/137 b/tests/btrfs/137
> index 7710dc18..61e983cb 100755
> --- a/tests/btrfs/137
> +++ b/tests/btrfs/137
> @@ -27,53 +27,74 @@ _require_xfs_io_command "fiemap"
>   send_files_dir=$TEST_DIR/btrfs-test-$seq
>   
>   rm -fr $send_files_dir
> -mkdir $send_files_dir
> +mkdir $send_files_dir $tmp

Just a small nitpick: it's more common to use $tmp.<suffix>; that's why 
the default _cleanup() template goes with "rm -f $tmp.*".

Thanks,
Qu


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 4/7] btrfs/200: Make this test scale with the block size
  2025-07-29  6:21 ` [PATCH 4/7] btrfs/200: Make this test scale with the block size Nirjhar Roy (IBM)
  2025-07-29  6:53   ` Filipe Manana
@ 2025-08-04  4:19   ` Qu Wenruo
  1 sibling, 0 replies; 28+ messages in thread
From: Qu Wenruo @ 2025-08-04  4:19 UTC (permalink / raw)
  To: Nirjhar Roy (IBM), fstests
  Cc: linux-btrfs, ritesh.list, ojaswin, djwong, zlang, fdmanana



On 2025/7/29 15:51, Nirjhar Roy (IBM) wrote:
> For large block sizes like 64k on powerpc with 64k
> pagesize it failed because this test was hardcoded
> to work with 4k blocksize.
> With blocksize 4k and the existing file lengths,
> we are getting 2 extents but with 64k page size
> number of extents is not exceeding 1(due to lower
> file size).
> The first few lines of the error message is as follows:
>       At snapshot incr
>       OK
>       OK
>      +File foo does not have 2 shared extents in the base snapshot
>      +/mnt/scratch/base/foo:
>      +   0: [0..255]: 26624..26879
>      +File foo does not have 2 shared extents in the incr snapshot
>      ...
> 
> Fix this by scaling the size and offsets to scale with the block
> size by a factor of (blocksize/4k).

Although I can reproduce the bug on aarch64 with 64K page size, the 
changelog doesn't seem to explain the problem.

The problem is that after receive, the file base/foo is a single extent, 
not the original two reflinked extents.

And the original fs does indeed have the correct two-extent layout.


That difference in extent layout is definitely not explained properly in 
the commit message.

Furthermore, the test case already uses 64K writes for extent creation, 
so all supported block sizes should work.


So I think the change doesn't really address the root cause of the failure.

Thanks,
Qu
> 
> Reported-by: Disha Goel <disgoel@linux.ibm.com>
> Signed-off-by: Nirjhar Roy (IBM) <nirjhar.roy.lists@gmail.com>
> ---
>   tests/btrfs/200     | 24 ++++++++++++++++--------
>   tests/btrfs/200.out |  8 ++++----
>   2 files changed, 20 insertions(+), 12 deletions(-)
> 
> diff --git a/tests/btrfs/200 b/tests/btrfs/200
> index e62937a4..fd2c2026 100755
> --- a/tests/btrfs/200
> +++ b/tests/btrfs/200
> @@ -35,18 +35,26 @@ mkdir $send_files_dir
>   _scratch_mkfs >>$seqres.full 2>&1
>   _scratch_mount
>   
> +blksz=`_get_block_size $SCRATCH_MNT`
> +echo "block size = $blksz" >> $seqres.full
> +
> +# Scale the test with any block size starting from 1k
> +scale=$(( blksz / 1024 ))
> +offset=$(( 16 * 1024 * scale ))
> +size=$(( 16 * 1024 * scale ))
> +
>   # Create our first test file, which has an extent that is shared only with
>   # itself and no other files. We want to verify a full send operation will
>   # clone the extent.
> -$XFS_IO_PROG -f -c "pwrite -S 0xb1 -b 64K 0 64K" $SCRATCH_MNT/foo \
> -	| _filter_xfs_io
> -$XFS_IO_PROG -c "reflink $SCRATCH_MNT/foo 0 64K 64K" $SCRATCH_MNT/foo \
> -	| _filter_xfs_io
> +$XFS_IO_PROG -f -c "pwrite -S 0xb1 -b $size 0 $size" $SCRATCH_MNT/foo \
> +	| _filter_xfs_io | _filter_xfs_io_size_offset 0 $size
> +$XFS_IO_PROG -c "reflink $SCRATCH_MNT/foo 0 $offset $size" $SCRATCH_MNT/foo \
> +	| _filter_xfs_io | _filter_xfs_io_size_offset $offset $size
>   
>   # Create out second test file which initially, for the first send operation,
>   # only has a single extent that is not shared.
> -$XFS_IO_PROG -f -c "pwrite -S 0xc7 -b 64K 0 64K" $SCRATCH_MNT/bar \
> -	| _filter_xfs_io
> +$XFS_IO_PROG -f -c "pwrite -S 0xc7 -b $size 0 $size" $SCRATCH_MNT/bar \
> +	| _filter_xfs_io | _filter_xfs_io_size_offset 0 $size
>   
>   _btrfs subvolume snapshot -r $SCRATCH_MNT $SCRATCH_MNT/base
>   
> @@ -56,8 +64,8 @@ $BTRFS_UTIL_PROG send -f $send_files_dir/1.snap $SCRATCH_MNT/base 2>&1 \
>   # Now clone the existing extent in file bar to itself at a different offset.
>   # We want to verify the incremental send operation below will issue a clone
>   # operation instead of a write operation.
> -$XFS_IO_PROG -c "reflink $SCRATCH_MNT/bar 0 64K 64K" $SCRATCH_MNT/bar \
> -	| _filter_xfs_io
> +$XFS_IO_PROG -c "reflink $SCRATCH_MNT/bar 0 $offset $size" $SCRATCH_MNT/bar \
> +	| _filter_xfs_io | _filter_xfs_io_size_offset $offset $size
>   
>   _btrfs subvolume snapshot -r $SCRATCH_MNT $SCRATCH_MNT/incr
>   
> diff --git a/tests/btrfs/200.out b/tests/btrfs/200.out
> index 306d9b24..4a10e506 100644
> --- a/tests/btrfs/200.out
> +++ b/tests/btrfs/200.out
> @@ -1,12 +1,12 @@
>   QA output created by 200
> -wrote 65536/65536 bytes at offset 0
> +wrote SIZE/SIZE bytes at offset OFFSET
>   XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
> -linked 65536/65536 bytes at offset 65536
> +linked SIZE/SIZE bytes at offset OFFSET
>   XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
> -wrote 65536/65536 bytes at offset 0
> +wrote SIZE/SIZE bytes at offset OFFSET
>   XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
>   At subvol SCRATCH_MNT/base
> -linked 65536/65536 bytes at offset 65536
> +linked SIZE/SIZE bytes at offset OFFSET
>   XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
>   At subvol SCRATCH_MNT/incr
>   At subvol base


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 5/7] generic/563: Increase the write tolerance to 6% for larger nodesize
  2025-07-29  6:21 ` [PATCH 5/7] generic/563: Increase the write tolerance to 6% for larger nodesize Nirjhar Roy (IBM)
  2025-07-29  7:45   ` Christoph Hellwig
  2025-07-30 15:06   ` Filipe Manana
@ 2025-08-04  4:28   ` Qu Wenruo
  2025-08-12  6:27     ` Nirjhar Roy (IBM)
  2 siblings, 1 reply; 28+ messages in thread
From: Qu Wenruo @ 2025-08-04  4:28 UTC (permalink / raw)
  To: Nirjhar Roy (IBM), fstests
  Cc: linux-btrfs, ritesh.list, ojaswin, djwong, zlang, fdmanana



On 2025/7/29 15:51, Nirjhar Roy (IBM) wrote:
> When tested with blocksize/nodesize 64K on powerpc
> with 64k  pagesize on btrfs, then the test fails
> with the folllowing error:
>       QA output created by 563
>       read/write
>       read is in range
>      -write is in range
>      +write has value of 8855552
>      +write is NOT in range 7969177.6 .. 8808038.4

I can reproduce the failure, although it's not 100% reliable, and indeed, 
with one tree block's worth of bytes subtracted, the value is back within 
the tolerance range.

>       write -> read/write
>      ...
> The slight increase in the amount of bytes that
> are written is because of the increase in the
> the nodesize(metadata) and hence it exceeds the tolerance limit slightly.
> Fix this by increasing the write tolerance limit from 5% from 6%
> for 64k blocksize btrfs.
> 
> Reported-by: Disha Goel <disgoel@linux.ibm.com>
> Signed-off-by: Nirjhar Roy (IBM) <nirjhar.roy.lists@gmail.com>
> ---
>   tests/generic/563 | 17 ++++++++++++++++-
>   1 file changed, 16 insertions(+), 1 deletion(-)
> 
> diff --git a/tests/generic/563 b/tests/generic/563
> index 89a71aa4..efcac1ec 100755
> --- a/tests/generic/563
> +++ b/tests/generic/563
> @@ -119,7 +119,22 @@ $XFS_IO_PROG -c "pread 0 $iosize" -c "pwrite -b $blksize 0 $iosize" -c fsync \
>   	$SCRATCH_MNT/file >> $seqres.full 2>&1
>   switch_cg $cgdir
>   $XFS_IO_PROG -c fsync $SCRATCH_MNT/file
> -check_cg $cgdir/$seq-cg $iosize $iosize 5% 5%
> +blksz=`_get_block_size $SCRATCH_MNT`
> +
> +# On higher node sizes on btrfs, we observed slightly more
> +# writes, due to increased metadata sizes.
> +# Hence have a higher write tolerance for btrfs and when
> +# node size is greater than 4k.
> +if [[ "$FSTYP" == "btrfs" ]]; then
> +	nodesz=$(_get_btrfs_node_size "$SCRATCH_DEV")
> +	if [[ "$nodesz" -gt 4096 ]]; then
> +		check_cg $cgdir/$seq-cg $iosize $iosize 5% 6%
> +	else
> +		check_cg $cgdir/$seq-cg $iosize $iosize 5% 5%
> +	fi
> +else
> +	check_cg $cgdir/$seq-cg $iosize $iosize 5% 5%
> +fi

Instead of the btrfs-specific hack, I'd recommend just enlarging iosize.

Doubling the iosize will easily bring even btrfs within the tolerance, 
but you still need a proper explanation for the change.

Thanks,
Qu
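Qu's alternative above can be sketched as follows (assumptions: generic/563 currently uses an 8M iosize, inferred from the failure range quoted in the changelog, and the metadata overhead is at most a few tree blocks):

```shell
# Sketch: scale up the I/O size instead of special-casing btrfs.
# The extra metadata writeback is bounded by a handful of tree blocks,
# so a larger iosize dilutes it well below the existing 5% tolerance
# for any supported node size.
iosize=$((16 * 1024 * 1024))		# doubled from the test's 8M
overhead=$((64 * 1024))			# worst case: one extra 64K tree block
pct=$((overhead * 100 / iosize))	# integer percent of iosize
echo "$pct"				# prints 0, i.e. well under 5%
```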

>   
>   # Write from one cgroup then read and write from a second. Writes are charged to
>   # the first group and nothing to the second.


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 6/7] btrfs/301: Make this test compatible with all block sizes.
  2025-07-29  6:21 ` [PATCH 6/7] btrfs/301: Make this test compatible with all block sizes Nirjhar Roy (IBM)
@ 2025-08-04  4:32   ` Qu Wenruo
  2025-08-12  6:30     ` Nirjhar Roy (IBM)
  0 siblings, 1 reply; 28+ messages in thread
From: Qu Wenruo @ 2025-08-04  4:32 UTC (permalink / raw)
  To: Nirjhar Roy (IBM), fstests
  Cc: linux-btrfs, ritesh.list, ojaswin, djwong, zlang, fdmanana



On 2025/7/29 15:51, Nirjhar Roy (IBM) wrote:
> With large block sizes like 64k on powerpc with 64k pagesize
> the test failed with the following logs:
> 
>       QA output created by 301
>       basic accounting
>      +subvol 256 mismatched usage 33947648 vs 4587520 \
>           (expected data 4194304 expected meta 393216 diff 29360128)
>      +subvol 256 mismatched usage 168165376 vs 138805248 \
> 	(expected data 138412032 expected meta 393216 diff 29360128)
>      +subvol 256 mismatched usage 33947648 vs 4587520 \
> 	(expected data 4194304 expected meta 393216 diff 29360128)
>      +subvol 256 mismatched usage 33947648 vs 4587520 \
> 	(expected data 4194304 expected meta 393216 diff 29360128)
>       fallocate: Disk quota exceeded
> (Please note that the above ouptut had to be modified a bit since
> the number of characters in each line was much greater than the
> 72 characters.)

You don't need to break the line for raw output.

> 
> The test creates nr_fill files each of size 8k i.e, 2x4k(stored in fill_sz).
> Now with 64k blocksize, 8k sized files occupy more than expected
> sizes (i.e, 8k) due to internal fragmentation since 1 file
> will occupy at least 1 block. Fix this by scaling the file size (fill_sz)
> with the blocksize.

You can just change fill_sz to 64K so that all block sizes will work.

I just tested with a 64K fill_sz; it works for both 4K and 64K block 
sizes with a 64K page size.

Thanks,
Qu
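In the test itself, Qu's fixed-size variant is a one-line change (a sketch; 64K is the largest block size btrfs currently supports, so each fill file occupies exactly fill_sz bytes of data on any configuration):

```shell
# Sketch of the btrfs/301 setup with a fixed fill size.
nr_fill=512
fill_sz=$((64 * 1024))	# was $((8 * 1024)); exact for any block size up to 64K
total_fill=$((nr_fill * fill_sz))
echo "$total_fill"	# 512 files * 64K = 33554432 bytes of fill data
```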

> 
> Reported-by: Disha Goel <disgoel@linux.ibm.com>
> Signed-off-by: Nirjhar Roy (IBM) <nirjhar.roy.lists@gmail.com>
> ---
>   tests/btrfs/301 | 8 +++++++-
>   1 file changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/tests/btrfs/301 b/tests/btrfs/301
> index 6b59749d..7547ff0e 100755
> --- a/tests/btrfs/301
> +++ b/tests/btrfs/301
> @@ -23,7 +23,13 @@ subv=$SCRATCH_MNT/subv
>   nested=$SCRATCH_MNT/subv/nested
>   snap=$SCRATCH_MNT/snap
>   nr_fill=512
> -fill_sz=$((8 * 1024))
> +
> +_scratch_mkfs >> $seqres.full
> +_scratch_mount
> +blksz=`_get_block_size $SCRATCH_MNT`
> +_scratch_unmount
> +fill_sz=$(( 2 * blksz ))
> +
>   total_fill=$(($nr_fill * $fill_sz))
>   nodesize=$($BTRFS_UTIL_PROG inspect-internal dump-super $SCRATCH_DEV | \
>   					grep nodesize | $AWK_PROG '{print $2}')


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 7/7] generic/274: Make the test compatible with all blocksizes.
  2025-07-29  6:21 ` [PATCH 7/7] generic/274: Make the test compatible with all blocksizes Nirjhar Roy (IBM)
@ 2025-08-04  4:35   ` Qu Wenruo
  2025-08-12  6:30     ` Nirjhar Roy (IBM)
  0 siblings, 1 reply; 28+ messages in thread
From: Qu Wenruo @ 2025-08-04  4:35 UTC (permalink / raw)
  To: Nirjhar Roy (IBM), fstests
  Cc: linux-btrfs, ritesh.list, ojaswin, djwong, zlang, fdmanana



On 2025/7/29 15:51, Nirjhar Roy (IBM) wrote:
> On btrfs with 64k blocksize on powerpc with 64k pagesize
> it failed with the following error:
> 
>       ------------------------------
>       preallocation test
>       ------------------------------
>      -done
>      +failed to write to test file
>      +(see /home/xfstests-dev/results//btrfs_64k/generic/274.full for details)
>      ...
> So, this test is written with 4K block size in mind. As we can see,
> it first creates a file of size 4K and then fallocates 4MB beyond the
> EOF.
> Then there are 2 loops - one that fragments at alternate blocks and
> the other punches holes in the remaining alternate blocks. Hence,
> the test fails in 64k block size due to incorrect calculations.
> 
> Fix this test by making the test scale with the block size, that is
> the offset, filesize and the assumed blocksize matches/scales with
> the actual blocksize of the underlying filesystem.

Again, just enlarge the block size from 4K to 64K, and then all block 
sizes will work.

Thanks,
Qu
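A sketch of the fixed 64K unit applied to generic/274's fragment/fill loops. Note the loop bounds must shrink too: the 4M preallocation spans 64 blocks of 64K, so the loops run to 63 instead of 1023 (that bound adjustment is my inference, not stated in Qu's reply). A throwaway temp file stands in for $SCRATCH_MNT/test here:

```shell
blksz=$((64 * 1024))
prealloc=$((4 * 1024 * 1024))
nblocks=$((prealloc / blksz))	# 64 blocks of 64K cover the 4M prealloc
testfile=$(mktemp)

# Fragment the preallocated range at odd block offsets first...
for i in $(seq 1 2 $((nblocks - 1))); do
	dd if=/dev/zero of="$testfile" seek=$i bs=$blksz count=1 conv=notrunc 2>/dev/null
done
# ...then fill the remaining holes at even block offsets.
for i in $(seq 2 2 $((nblocks - 1))); do
	dd if=/dev/zero of="$testfile" seek=$i bs=$blksz count=1 conv=notrunc 2>/dev/null
done

size=$(stat -c %s "$testfile")
rm -f "$testfile"
echo "$size"	# 64 blocks * 64K = 4194304 bytes
```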

> 
> Reported-by: Disha Goel <disgoel@linux.ibm.com>
> Signed-off-by: Nirjhar Roy (IBM) <nirjhar.roy.lists@gmail.com>
> ---
>   tests/generic/274 | 21 +++++++++++----------
>   1 file changed, 11 insertions(+), 10 deletions(-)
> 
> diff --git a/tests/generic/274 b/tests/generic/274
> index 916c7173..4ea42f30 100755
> --- a/tests/generic/274
> +++ b/tests/generic/274
> @@ -40,30 +40,31 @@ _scratch_unmount 2>/dev/null
>   _scratch_mkfs_sized $((2 * 1024 * 1024 * 1024)) >>$seqres.full 2>&1
>   _scratch_mount
>   
> -# Create a 4k file and Allocate 4M past EOF on that file
> -$XFS_IO_PROG -f -c "pwrite 0 4k" -c "falloc -k 4k 4m" $SCRATCH_MNT/test \
> -	>>$seqres.full 2>&1 || _fail "failed to create test file"
> +blksz=`_get_block_size $SCRATCH_MNT`
> +scale=$(( blksz / 1024 ))
> +# Create a blocksize worth file and Allocate a large file past EOF on that file
> +$XFS_IO_PROG -f -c "pwrite -b $blksz 0 $blksz" -c "falloc -k $blksz $(( 1 * 1024 * 1024 * scale ))" \
> +	$SCRATCH_MNT/test >>$seqres.full 2>&1 || _fail "failed to create test file"
>   
>   # Fill the rest of the fs completely
>   # Note, this will show ENOSPC errors in $seqres.full, that's ok.
>   echo "Fill fs with 1M IOs; ENOSPC expected" >> $seqres.full
>   dd if=/dev/zero of=$SCRATCH_MNT/tmp1 bs=1M >>$seqres.full 2>&1
> -echo "Fill fs with 4K IOs; ENOSPC expected" >> $seqres.full
> -dd if=/dev/zero of=$SCRATCH_MNT/tmp2 bs=4K >>$seqres.full 2>&1
> +echo "Fill fs with $blksz K IOs; ENOSPC expected" >> $seqres.full
> +dd if=/dev/zero of=$SCRATCH_MNT/tmp2 bs=$blksz >>$seqres.full 2>&1
>   _scratch_sync
>   # Last effort, use O_SYNC
> -echo "Fill fs with 4K DIOs; ENOSPC expected" >> $seqres.full
> -dd if=/dev/zero of=$SCRATCH_MNT/tmp3 bs=4K oflag=sync >>$seqres.full 2>&1
> +echo "Fill fs with $blksz DIOs; ENOSPC expected" >> $seqres.full
> +dd if=/dev/zero of=$SCRATCH_MNT/tmp3 bs=$blksz oflag=sync >>$seqres.full 2>&1
>   # Save space usage info
>   echo "Post-fill space:" >> $seqres.full
>   df $SCRATCH_MNT >>$seqres.full 2>&1
> -
>   # Now attempt a write into all of the preallocated space -
>   # in a very nasty way, badly fragmenting it and then filling it in.
>   echo "Fill in prealloc space; fragment at offsets:" >> $seqres.full
>   for i in `seq 1 2 1023`; do
>   	echo -n "$i " >> $seqres.full
> -	dd if=/dev/zero of=$SCRATCH_MNT/test seek=$i bs=4K count=1 conv=notrunc \
> +	dd if=/dev/zero of=$SCRATCH_MNT/test seek=$i bs=$blksz count=1 conv=notrunc \
>   		>>$seqres.full 2>/dev/null || _fail "failed to write to test file"
>   done
>   _scratch_sync
> @@ -71,7 +72,7 @@ echo >> $seqres.full
>   echo "Fill in prealloc space; fill holes at offsets:" >> $seqres.full
>   for i in `seq 2 2 1023`; do
>   	echo -n "$i " >> $seqres.full
> -	dd if=/dev/zero of=$SCRATCH_MNT/test seek=$i bs=4K count=1 conv=notrunc \
> +	dd if=/dev/zero of=$SCRATCH_MNT/test seek=$i bs=$blksz count=1 conv=notrunc \
>   		>>$seqres.full 2>/dev/null || _fail "failed to fill test file"
>   done
>   _scratch_sync


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 5/7] generic/563: Increase the write tolerance to 6% for larger nodesize
  2025-07-29  7:45   ` Christoph Hellwig
@ 2025-08-04  7:18     ` Nirjhar Roy (IBM)
  0 siblings, 0 replies; 28+ messages in thread
From: Nirjhar Roy (IBM) @ 2025-08-04  7:18 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: fstests, linux-btrfs, ritesh.list, ojaswin, djwong, zlang,
	fdmanana


On 7/29/25 13:15, Christoph Hellwig wrote:
> On Tue, Jul 29, 2025 at 06:21:48AM +0000, Nirjhar Roy (IBM) wrote:
>> +# writes, due to increased metadata sizes.
>> +# Hence have a higher write tolerance for btrfs and when
>> +# node size is greater than 4k.
>> +if [[ "$FSTYP" == "btrfs" ]]; then
>> +	nodesz=$(_get_btrfs_node_size "$SCRATCH_DEV")
>> +	if [[ "$nodesz" -gt 4096 ]]; then
>> +		check_cg $cgdir/$seq-cg $iosize $iosize 5% 6%
>> +	else
>> +		check_cg $cgdir/$seq-cg $iosize $iosize 5% 5%
>> +	fi
>> +else
>> +	check_cg $cgdir/$seq-cg $iosize $iosize 5% 5%
>> +fi
> Everyone please stop hacking file system specific things into generic
> tests.  Add proper core helpers for this instead.

Okay. I will try to work something out. Thank you.

--NR

-- 
Nirjhar Roy
Linux Kernel Developer
IBM, Bangalore


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 5/7] generic/563: Increase the write tolerance to 6% for larger nodesize
  2025-07-30 15:06   ` Filipe Manana
@ 2025-08-04  7:18     ` Nirjhar Roy (IBM)
  0 siblings, 0 replies; 28+ messages in thread
From: Nirjhar Roy (IBM) @ 2025-08-04  7:18 UTC (permalink / raw)
  To: Filipe Manana; +Cc: fstests, linux-btrfs, ritesh.list, ojaswin, djwong, zlang


On 7/30/25 20:36, Filipe Manana wrote:
> On Tue, Jul 29, 2025 at 7:24 AM Nirjhar Roy (IBM)
> <nirjhar.roy.lists@gmail.com> wrote:
>> When tested with blocksize/nodesize 64K on powerpc
>> with 64k  pagesize on btrfs, then the test fails
>> with the folllowing error:
>>       QA output created by 563
>>       read/write
>>       read is in range
>>      -write is in range
>>      +write has value of 8855552
>>      +write is NOT in range 7969177.6 .. 8808038.4
>>       write -> read/write
>>      ...
>> The slight increase in the amount of bytes that
>> are written is because of the increase in the
>> the nodesize(metadata) and hence it exceeds the tolerance limit slightly.
>> Fix this by increasing the write tolerance limit from 5% from 6%
>> for 64k blocksize btrfs.
>>
>> Reported-by: Disha Goel <disgoel@linux.ibm.com>
>> Signed-off-by: Nirjhar Roy (IBM) <nirjhar.roy.lists@gmail.com>
>> ---
>>   tests/generic/563 | 17 ++++++++++++++++-
>>   1 file changed, 16 insertions(+), 1 deletion(-)
>>
>> diff --git a/tests/generic/563 b/tests/generic/563
>> index 89a71aa4..efcac1ec 100755
>> --- a/tests/generic/563
>> +++ b/tests/generic/563
>> @@ -119,7 +119,22 @@ $XFS_IO_PROG -c "pread 0 $iosize" -c "pwrite -b $blksize 0 $iosize" -c fsync \
>>          $SCRATCH_MNT/file >> $seqres.full 2>&1
>>   switch_cg $cgdir
>>   $XFS_IO_PROG -c fsync $SCRATCH_MNT/file
>> -check_cg $cgdir/$seq-cg $iosize $iosize 5% 5%
>> +blksz=`_get_block_size $SCRATCH_MNT`
> Here the block size is captured, but then it's not used anywhere in the test...

Yeah. I will remove this. Thanks.

--NR

>
> Thanks.
>
>> +
>> +# On higher node sizes on btrfs, we observed slightly more
>> +# writes, due to increased metadata sizes.
>> +# Hence have a higher write tolerance for btrfs and when
>> +# node size is greater than 4k.
>> +if [[ "$FSTYP" == "btrfs" ]]; then
>> +       nodesz=$(_get_btrfs_node_size "$SCRATCH_DEV")
>> +       if [[ "$nodesz" -gt 4096 ]]; then
>> +               check_cg $cgdir/$seq-cg $iosize $iosize 5% 6%
>> +       else
>> +               check_cg $cgdir/$seq-cg $iosize $iosize 5% 5%
>> +       fi
>> +else
>> +       check_cg $cgdir/$seq-cg $iosize $iosize 5% 5%
>> +fi
>>
>>   # Write from one cgroup then read and write from a second. Writes are charged to
>>   # the first group and nothing to the second.
>> --
>> 2.34.1
>>
-- 
Nirjhar Roy
Linux Kernel Developer
IBM, Bangalore


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 3/7] btrfs/137: Make this compatible with all block sizes
  2025-08-04  3:58   ` Qu Wenruo
@ 2025-08-05  9:41     ` Ojaswin Mujoo
  2025-08-05  9:44       ` Qu Wenruo
  2025-08-05 10:47       ` Filipe Manana
  2025-08-12  6:22     ` Nirjhar Roy (IBM)
  1 sibling, 2 replies; 28+ messages in thread
From: Ojaswin Mujoo @ 2025-08-05  9:41 UTC (permalink / raw)
  To: Qu Wenruo
  Cc: Nirjhar Roy (IBM), fstests, linux-btrfs, ritesh.list, djwong,
	zlang, fdmanana

On Mon, Aug 04, 2025 at 01:28:24PM +0930, Qu Wenruo wrote:
> 
> 
> On 2025/7/29 15:51, Nirjhar Roy (IBM) wrote:
> > For large blocksizes like 64k on powerpc with 64k pagesize
> > it failed simply because this test was written with 4k
> > block size in mind.
> > The first few lines of the error logs are as follows:
> > 
> >       d3dc847171f9081bd75d7a2d3b53d322  SCRATCH_MNT/snap2/bar
> > 
> >       File snap1/foo fiemap results in the original filesystem:
> >      -0: [0..7]: data
> >      +0: [0..127]: data
> > 
> >       File snap1/bar fiemap results in the original filesystem:
> >      ...
> > 
> > Fix this by making the test choose offsets based on
> > the blocksize.
> 
> I'm wondering, why not just use a fixed 64K block size?

Hi Qu,

It will definitely be simpler to just use 64k io size but I feel it
might be better to not hard code it for future proofing the tests. I
know right now we don't have bs > ps in btrfs but maybe we get it in the
future and we might start seeing funky block sizes > 64k.

Same goes for not hardcoding block mappings in the golden output. 

Regards,
ojaswin



* Re: [PATCH 3/7] btrfs/137: Make this compatible with all block sizes
  2025-08-05  9:41     ` Ojaswin Mujoo
@ 2025-08-05  9:44       ` Qu Wenruo
  2025-08-05 12:39         ` Ojaswin Mujoo
  2025-08-05 10:47       ` Filipe Manana
  1 sibling, 1 reply; 28+ messages in thread
From: Qu Wenruo @ 2025-08-05  9:44 UTC (permalink / raw)
  To: Ojaswin Mujoo
  Cc: Nirjhar Roy (IBM), fstests, linux-btrfs, ritesh.list, djwong,
	zlang, fdmanana



在 2025/8/5 19:11, Ojaswin Mujoo 写道:
> On Mon, Aug 04, 2025 at 01:28:24PM +0930, Qu Wenruo wrote:
>>
>>
>> 在 2025/7/29 15:51, Nirjhar Roy (IBM) 写道:
>>> For large blocksizes like 64k on powerpc with 64k pagesize
>>> it failed simply because this test was written with 4k
>>> block size in mind.
>>> The first few lines of the error logs are as follows:
>>>
>>>        d3dc847171f9081bd75d7a2d3b53d322  SCRATCH_MNT/snap2/bar
>>>
>>>        File snap1/foo fiemap results in the original filesystem:
>>>       -0: [0..7]: data
>>>       +0: [0..127]: data
>>>
>>>        File snap1/bar fiemap results in the original filesystem:
>>>       ...
>>>
>>> Fix this by making the test choose offsets based on
>>> the blocksize.
>>
>> I'm wondering, why not just use a fixed 64K block size?
> 
> Hi Qu,
> 
> It will definitely be simpler to just use 64k io size but I feel it
> might be better to not hard code it for future proofing the tests. I
> know right now we don't have bs > ps in btrfs but maybe we get it in the
> future and we might start seeing funky block sizes > 64k.

And in fact I'm going to add that bs > ps support very soon. Since we 
already have large folio support for data, it will just be a simple 
mapping_set_folio_order_range() with a min order.

But still for btrfs, we're only going to support 4K, 8K, 16K, 32K, 64K 
block sizes, thus I don't think we need to bother larger bs yet.

Thanks,
Qu

> 
> Same goes for not hardcoding block mappings in the golden output.
> 
> Regards,
> ojaswin
> 
> 



* Re: [PATCH 3/7] btrfs/137: Make this compatible with all block sizes
  2025-08-05  9:41     ` Ojaswin Mujoo
  2025-08-05  9:44       ` Qu Wenruo
@ 2025-08-05 10:47       ` Filipe Manana
  2025-08-12  6:23         ` Nirjhar Roy (IBM)
  1 sibling, 1 reply; 28+ messages in thread
From: Filipe Manana @ 2025-08-05 10:47 UTC (permalink / raw)
  To: Ojaswin Mujoo
  Cc: Qu Wenruo, Nirjhar Roy (IBM), fstests, linux-btrfs, ritesh.list,
	djwong, zlang

On Tue, Aug 5, 2025 at 10:41 AM Ojaswin Mujoo <ojaswin@linux.ibm.com> wrote:
>
> On Mon, Aug 04, 2025 at 01:28:24PM +0930, Qu Wenruo wrote:
> >
> >
> > 在 2025/7/29 15:51, Nirjhar Roy (IBM) 写道:
> > > For large blocksizes like 64k on powerpc with 64k pagesize
> > > it failed simply because this test was written with 4k
> > > block size in mind.
> > > The first few lines of the error logs are as follows:
> > >
> > >       d3dc847171f9081bd75d7a2d3b53d322  SCRATCH_MNT/snap2/bar
> > >
> > >       File snap1/foo fiemap results in the original filesystem:
> > >      -0: [0..7]: data
> > >      +0: [0..127]: data
> > >
> > >       File snap1/bar fiemap results in the original filesystem:
> > >      ...
> > >
> > > Fix this by making the test choose offsets based on
> > > the blocksize.
> >
> > I'm wondering, why not just use a fixed 64K block size?
>
> Hi Qu,
>
> It will definitely be simpler to just use 64k io size but I feel it
> might be better to not hard code it for future proofing the tests. I
> know right now we don't have bs > ps in btrfs but maybe we get it in the
> future and we might start seeing funky block sizes > 64k.
>
> Same goes for not hardcoding block mappings in the golden output.

Please keep it simple and use fixed 64K aligned sizes and offsets.
It's very unlikely we will support sector sizes larger than 64K, so
keeping fixed sizes is a lot simpler to understand, maintain and debug
tests.
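
For illustration only, a hedged sketch of what fixed 64K-aligned values
could look like — the variable names and offsets here are hypothetical,
not the actual patch:

```shell
# Hypothetical sketch, not the actual patch: pin all offsets/lengths to
# 64K. Every supported btrfs block size (4K, 8K, 16K, 32K, 64K) divides
# 64K, so the same values stay block-aligned for all of them.
align=$((64 * 1024))
off=$((1 * align))      # reflink destination offset: 64K
len=$((1 * align))      # extent length: 64K
echo "pwrite 0 $len"
echo "reflink src=0 dst=$off len=$len"
```

Since 4K, 8K, 16K and 32K all divide 64K, nothing block-size dependent
is left in the test body.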

Thanks.

>
> Regards,
> ojaswin
>


* Re: [PATCH 3/7] btrfs/137: Make this compatible with all block sizes
  2025-08-05  9:44       ` Qu Wenruo
@ 2025-08-05 12:39         ` Ojaswin Mujoo
  0 siblings, 0 replies; 28+ messages in thread
From: Ojaswin Mujoo @ 2025-08-05 12:39 UTC (permalink / raw)
  To: Qu Wenruo
  Cc: Nirjhar Roy (IBM), fstests, linux-btrfs, ritesh.list, djwong,
	zlang, fdmanana

On Tue, Aug 05, 2025 at 07:14:52PM +0930, Qu Wenruo wrote:
> 
> 
> 在 2025/8/5 19:11, Ojaswin Mujoo 写道:
> > On Mon, Aug 04, 2025 at 01:28:24PM +0930, Qu Wenruo wrote:
> > > 
> > > 
> > > 在 2025/7/29 15:51, Nirjhar Roy (IBM) 写道:
> > > > For large blocksizes like 64k on powerpc with 64k pagesize
> > > > it failed simply because this test was written with 4k
> > > > block size in mind.
> > > > The first few lines of the error logs are as follows:
> > > > 
> > > >        d3dc847171f9081bd75d7a2d3b53d322  SCRATCH_MNT/snap2/bar
> > > > 
> > > >        File snap1/foo fiemap results in the original filesystem:
> > > >       -0: [0..7]: data
> > > >       +0: [0..127]: data
> > > > 
> > > >        File snap1/bar fiemap results in the original filesystem:
> > > >       ...
> > > > 
> > > > Fix this by making the test choose offsets based on
> > > > the blocksize.
> > > 
> > > I'm wondering, why not just use a fixed 64K block size?
> > 
> > Hi Qu,
> > 
> > It will definitely be simpler to just use 64k io size but I feel it
> > might be better to not hard code it for future proofing the tests. I
> > know right now we don't have bs > ps in btrfs but maybe we get it in the
> > future and we might start seeing funky block sizes > 64k.
> 
> And in fact I'm going to add that bs > ps support very soon, since we
> already have large folio support for data, it will be just a simple
> mapping_set_folio_order_range() with a min order.
> 
> But still for btrfs, we're only going to support 4K, 8K, 16K, 32K, 64K block
> sizes, thus I don't think we need to bother larger bs yet.
> 
> Thanks,
> Qu

Okay sure, if you feel 64k should be okay for considerable future then
lets hardcode it for simplicity.

Regards,
ojaswin
> 
> > 
> > Same goes for not hardcoding block mappings in the golden output.
> > 
> > Regards,
> > ojaswin
> > 
> > 
> 


* Re: [PATCH 3/7] btrfs/137: Make this compatible with all block sizes
  2025-08-04  3:58   ` Qu Wenruo
  2025-08-05  9:41     ` Ojaswin Mujoo
@ 2025-08-12  6:22     ` Nirjhar Roy (IBM)
  1 sibling, 0 replies; 28+ messages in thread
From: Nirjhar Roy (IBM) @ 2025-08-12  6:22 UTC (permalink / raw)
  To: Qu Wenruo, fstests
  Cc: linux-btrfs, ritesh.list, ojaswin, djwong, zlang, fdmanana


On 8/4/25 09:28, Qu Wenruo wrote:
>
>
> 在 2025/7/29 15:51, Nirjhar Roy (IBM) 写道:
>> For large blocksizes like 64k on powerpc with 64k pagesize
>> it failed simply because this test was written with 4k
>> block size in mind.
>> The first few lines of the error logs are as follows:
>>
>>       d3dc847171f9081bd75d7a2d3b53d322  SCRATCH_MNT/snap2/bar
>>
>>       File snap1/foo fiemap results in the original filesystem:
>>      -0: [0..7]: data
>>      +0: [0..127]: data
>>
>>       File snap1/bar fiemap results in the original filesystem:
>>      ...
>>
>> Fix this by making the test choose offsets based on
>> the blocksize.
>
> I'm wondering, why not just use a fixed 64K block size?

Sorry for the delayed response. Yes, if you feel it's simpler to keep 
the values aligned to 64k, I can make the change.

>
> So that all supported btrfs block sizes can result the same file 
> contents.
>
>> Also, now that the file hashes and
>> the extent/block numbers will change depending on the
>> blocksize, calculate the hashes and the block mappings,
>> store them in temporary files and then calculate their diff
>> between the new and the original filesystem.
>> This allows us to remove all the block mapping and hashes
>> from the .out file.
>
> Although I agree we should remove the block mappings from the golden 
> output, as compression can add extra flags and pollute the golden output.
>
> But that can also be done with _require_btrfs_no_compress() helper.

Okay. In that case, I will have the above pre-condition too.

>
>>
>> Reported-by: Disha Goel <disgoel@linux.ibm.com>
>> Signed-off-by: Nirjhar Roy (IBM) <nirjhar.roy.lists@gmail.com>
>> ---
>>   tests/btrfs/137     | 135 +++++++++++++++++++++++++++++---------------
>>   tests/btrfs/137.out |  59 ++-----------------
>>   2 files changed, 94 insertions(+), 100 deletions(-)
>>
>> diff --git a/tests/btrfs/137 b/tests/btrfs/137
>> index 7710dc18..61e983cb 100755
>> --- a/tests/btrfs/137
>> +++ b/tests/btrfs/137
>> @@ -27,53 +27,74 @@ _require_xfs_io_command "fiemap"
>>   send_files_dir=$TEST_DIR/btrfs-test-$seq
>>     rm -fr $send_files_dir
>> -mkdir $send_files_dir
>> +mkdir $send_files_dir $tmp
>
> Just a small nitpick, it's more common to use $tmp.<suffix>, that's 
> why the default _cleanup() template goes with "rm -f $tmp.*"

Noted. Thanks.

--NR

>
> Thanks,
> Qu
>
-- 
Nirjhar Roy
Linux Kernel Developer
IBM, Bangalore



* Re: [PATCH 3/7] btrfs/137: Make this compatible with all block sizes
  2025-08-05 10:47       ` Filipe Manana
@ 2025-08-12  6:23         ` Nirjhar Roy (IBM)
  0 siblings, 0 replies; 28+ messages in thread
From: Nirjhar Roy (IBM) @ 2025-08-12  6:23 UTC (permalink / raw)
  To: Filipe Manana, Ojaswin Mujoo
  Cc: Qu Wenruo, fstests, linux-btrfs, ritesh.list, djwong, zlang


On 8/5/25 16:17, Filipe Manana wrote:
> On Tue, Aug 5, 2025 at 10:41 AM Ojaswin Mujoo <ojaswin@linux.ibm.com> wrote:
>> On Mon, Aug 04, 2025 at 01:28:24PM +0930, Qu Wenruo wrote:
>>>
>>> 在 2025/7/29 15:51, Nirjhar Roy (IBM) 写道:
>>>> For large blocksizes like 64k on powerpc with 64k pagesize
>>>> it failed simply because this test was written with 4k
>>>> block size in mind.
>>>> The first few lines of the error logs are as follows:
>>>>
>>>>        d3dc847171f9081bd75d7a2d3b53d322  SCRATCH_MNT/snap2/bar
>>>>
>>>>        File snap1/foo fiemap results in the original filesystem:
>>>>       -0: [0..7]: data
>>>>       +0: [0..127]: data
>>>>
>>>>        File snap1/bar fiemap results in the original filesystem:
>>>>       ...
>>>>
>>>> Fix this by making the test choose offsets based on
>>>> the blocksize.
>>> I'm wondering, why not just use a fixed 64K block size?
>> Hi Qu,
>>
>> It will definitely be simpler to just use 64k io size but I feel it
>> might be better to not hard code it for future proofing the tests. I
>> know right now we don't have bs > ps in btrfs but maybe we get it in the
>> future and we might start seeing funky block sizes > 64k.
>>
>> Same goes for not hardcoding block mappings in the golden output.
> Please keep it simple and use fixed 64K aligned sizes and offsets.
> It's very unlikely we will support sector sizes larger than 64K, so
> keeping fixed sizes is a lot simpler to understand, maintain and debug
> tests.

Okay, noted.

--NR

>
> Thanks.
>
>> Regards,
>> ojaswin
>>
-- 
Nirjhar Roy
Linux Kernel Developer
IBM, Bangalore



* Re: [PATCH 4/7] btrfs/200: Make this test scale with the block size
  2025-07-29  6:53   ` Filipe Manana
@ 2025-08-12  6:26     ` Nirjhar Roy (IBM)
  0 siblings, 0 replies; 28+ messages in thread
From: Nirjhar Roy (IBM) @ 2025-08-12  6:26 UTC (permalink / raw)
  To: Filipe Manana; +Cc: fstests, linux-btrfs, ritesh.list, ojaswin, djwong, zlang


On 7/29/25 12:23, Filipe Manana wrote:
> On Tue, Jul 29, 2025 at 7:24 AM Nirjhar Roy (IBM)
> <nirjhar.roy.lists@gmail.com> wrote:
>> For large block sizes like 64k on powerpc with 64k
>> pagesize it failed because this test was hardcoded
>> to work with 4k blocksize.
> Where exactly is it hardcoded with 4K blocksize expectations?
>
> The test does 64K writes and reflinks at offsets multiples of 64K (0 and 64K).
> In fact that's why the test is doing 64K writes and using only the
> file offsets 0 and 64K, so that it works with any block size.
>
>> With blocksize 4k and the existing file lengths,
>> we are getting 2 extents but with 64k page size
>> number of extents is not exceeding 1(due to lower
>> file size).
> Due to lower file size? How?
> The file sizes should be independent of the block size, and be 64K and
> 128K everywhere.
>
> Please provide more details in the changelog.
> Thanks.

Yes, I think I misinterpreted the actual issue. I am looking into it. 
For now, I will drop this patch in the next version, and once I 
understand the actual root cause, I will re-send it with a proper fix 
and an explanation.

--NR

>
>> The first few lines of the error message is as follows:
>>       At snapshot incr
>>       OK
>>       OK
>>      +File foo does not have 2 shared extents in the base snapshot
>>      +/mnt/scratch/base/foo:
>>      +   0: [0..255]: 26624..26879
>>      +File foo does not have 2 shared extents in the incr snapshot
>>      ...
>>
>> Fix this by scaling the size and offsets to scale with the block
>> size by a factor of (blocksize/4k).
>>
>> Reported-by: Disha Goel <disgoel@linux.ibm.com>
>> Signed-off-by: Nirjhar Roy (IBM) <nirjhar.roy.lists@gmail.com>
>> ---
>>   tests/btrfs/200     | 24 ++++++++++++++++--------
>>   tests/btrfs/200.out |  8 ++++----
>>   2 files changed, 20 insertions(+), 12 deletions(-)
>>
>> diff --git a/tests/btrfs/200 b/tests/btrfs/200
>> index e62937a4..fd2c2026 100755
>> --- a/tests/btrfs/200
>> +++ b/tests/btrfs/200
>> @@ -35,18 +35,26 @@ mkdir $send_files_dir
>>   _scratch_mkfs >>$seqres.full 2>&1
>>   _scratch_mount
>>
>> +blksz=`_get_block_size $SCRATCH_MNT`
>> +echo "block size = $blksz" >> $seqres.full
>> +
>> +# Scale the test with any block size starting from 1k
>> +scale=$(( blksz / 1024 ))
>> +offset=$(( 16 * 1024 * scale ))
>> +size=$(( 16 * 1024 * scale ))
>> +
>>   # Create our first test file, which has an extent that is shared only with
>>   # itself and no other files. We want to verify a full send operation will
>>   # clone the extent.
>> -$XFS_IO_PROG -f -c "pwrite -S 0xb1 -b 64K 0 64K" $SCRATCH_MNT/foo \
>> -       | _filter_xfs_io
>> -$XFS_IO_PROG -c "reflink $SCRATCH_MNT/foo 0 64K 64K" $SCRATCH_MNT/foo \
>> -       | _filter_xfs_io
>> +$XFS_IO_PROG -f -c "pwrite -S 0xb1 -b $size 0 $size" $SCRATCH_MNT/foo \
>> +       | _filter_xfs_io | _filter_xfs_io_size_offset 0 $size
>> +$XFS_IO_PROG -c "reflink $SCRATCH_MNT/foo 0 $offset $size" $SCRATCH_MNT/foo \
>> +       | _filter_xfs_io | _filter_xfs_io_size_offset $offset $size
>>
>>   # Create out second test file which initially, for the first send operation,
>>   # only has a single extent that is not shared.
>> -$XFS_IO_PROG -f -c "pwrite -S 0xc7 -b 64K 0 64K" $SCRATCH_MNT/bar \
>> -       | _filter_xfs_io
>> +$XFS_IO_PROG -f -c "pwrite -S 0xc7 -b $size 0 $size" $SCRATCH_MNT/bar \
>> +       | _filter_xfs_io | _filter_xfs_io_size_offset 0 $size
>>
>>   _btrfs subvolume snapshot -r $SCRATCH_MNT $SCRATCH_MNT/base
>>
>> @@ -56,8 +64,8 @@ $BTRFS_UTIL_PROG send -f $send_files_dir/1.snap $SCRATCH_MNT/base 2>&1 \
>>   # Now clone the existing extent in file bar to itself at a different offset.
>>   # We want to verify the incremental send operation below will issue a clone
>>   # operation instead of a write operation.
>> -$XFS_IO_PROG -c "reflink $SCRATCH_MNT/bar 0 64K 64K" $SCRATCH_MNT/bar \
>> -       | _filter_xfs_io
>> +$XFS_IO_PROG -c "reflink $SCRATCH_MNT/bar 0 $offset $size" $SCRATCH_MNT/bar \
>> +       | _filter_xfs_io | _filter_xfs_io_size_offset $offset $size
>>
>>   _btrfs subvolume snapshot -r $SCRATCH_MNT $SCRATCH_MNT/incr
>>
>> diff --git a/tests/btrfs/200.out b/tests/btrfs/200.out
>> index 306d9b24..4a10e506 100644
>> --- a/tests/btrfs/200.out
>> +++ b/tests/btrfs/200.out
>> @@ -1,12 +1,12 @@
>>   QA output created by 200
>> -wrote 65536/65536 bytes at offset 0
>> +wrote SIZE/SIZE bytes at offset OFFSET
>>   XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
>> -linked 65536/65536 bytes at offset 65536
>> +linked SIZE/SIZE bytes at offset OFFSET
>>   XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
>> -wrote 65536/65536 bytes at offset 0
>> +wrote SIZE/SIZE bytes at offset OFFSET
>>   XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
>>   At subvol SCRATCH_MNT/base
>> -linked 65536/65536 bytes at offset 65536
>> +linked SIZE/SIZE bytes at offset OFFSET
>>   XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
>>   At subvol SCRATCH_MNT/incr
>>   At subvol base
>> --
>> 2.34.1
>>
-- 
Nirjhar Roy
Linux Kernel Developer
IBM, Bangalore



* Re: [PATCH 5/7] generic/563: Increase the write tolerance to 6% for larger nodesize
  2025-08-04  4:28   ` Qu Wenruo
@ 2025-08-12  6:27     ` Nirjhar Roy (IBM)
  0 siblings, 0 replies; 28+ messages in thread
From: Nirjhar Roy (IBM) @ 2025-08-12  6:27 UTC (permalink / raw)
  To: Qu Wenruo, fstests
  Cc: linux-btrfs, ritesh.list, ojaswin, djwong, zlang, fdmanana


On 8/4/25 09:58, Qu Wenruo wrote:
>
>
> 在 2025/7/29 15:51, Nirjhar Roy (IBM) 写道:
>> When tested with blocksize/nodesize 64K on powerpc
>> with 64k pagesize on btrfs, the test fails
>> with the following error:
>>       QA output created by 563
>>       read/write
>>       read is in range
>>      -write is in range
>>      +write has value of 8855552
>>      +write is NOT in range 7969177.6 .. 8808038.4
>
> I can reproduce the failure, although it's not 100% reliable, and 
> indeed with one tree block's size removed, it's back into the 
> tolerance range.
>
>>       write -> read/write
>>      ...
>> The slight increase in the number of bytes that
>> are written is because of the increase in the
>> nodesize (metadata), and hence it exceeds the tolerance limit
>> slightly.
>> Fix this by increasing the write tolerance limit from 5% to 6%
>> for 64k blocksize btrfs.
>>
>> Reported-by: Disha Goel <disgoel@linux.ibm.com>
>> Signed-off-by: Nirjhar Roy (IBM) <nirjhar.roy.lists@gmail.com>
>> ---
>>   tests/generic/563 | 17 ++++++++++++++++-
>>   1 file changed, 16 insertions(+), 1 deletion(-)
>>
>> diff --git a/tests/generic/563 b/tests/generic/563
>> index 89a71aa4..efcac1ec 100755
>> --- a/tests/generic/563
>> +++ b/tests/generic/563
>> @@ -119,7 +119,22 @@ $XFS_IO_PROG -c "pread 0 $iosize" -c "pwrite -b 
>> $blksize 0 $iosize" -c fsync \
>>       $SCRATCH_MNT/file >> $seqres.full 2>&1
>>   switch_cg $cgdir
>>   $XFS_IO_PROG -c fsync $SCRATCH_MNT/file
>> -check_cg $cgdir/$seq-cg $iosize $iosize 5% 5%
>> +blksz=`_get_block_size $SCRATCH_MNT`
>> +
>> +# On higher node sizes on btrfs, we observed slightly more
>> +# writes, due to increased metadata sizes.
>> +# Hence have a higher write tolerance for btrfs and when
>> +# node size is greater than 4k.
>> +if [[ "$FSTYP" == "btrfs" ]]; then
>> +    nodesz=$(_get_btrfs_node_size "$SCRATCH_DEV")
>> +    if [[ "$nodesz" -gt 4096 ]]; then
>> +        check_cg $cgdir/$seq-cg $iosize $iosize 5% 6%
>> +    else
>> +        check_cg $cgdir/$seq-cg $iosize $iosize 5% 5%
>> +    fi
>> +else
>> +    check_cg $cgdir/$seq-cg $iosize $iosize 5% 5%
>> +fi
>
> Instead of the btrfs specific hack, I'd recommend to just enlarge iosize.
>
> Double the iosize will easily make it to cover the tolerance of even 
> btrfs, but you still need a proper explanation of the change.

Okay. I can try the above and will come up with a more detailed explanation.
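
A rough back-of-the-envelope for why doubling iosize should work — the
numbers below are assumed from the failure output quoted above, not
measured:

```shell
# Assumed from the reported failure: iosize=8M, 5% tolerance, and
# roughly one extra 64K tree block of metadata writes.
iosize=$((8 * 1024 * 1024))
slack=$((iosize * 5 / 100))              # 419430 bytes of tolerance
overshoot=$((8855552 - iosize - slack))  # how far past the 5% bound
echo "slack=$slack overshoot=$overshoot"
# The overshoot is under one 64K node, matching Qu's observation that
# removing one tree block's size brings it back in range. Doubling
# iosize doubles the absolute slack while the fixed metadata overhead
# stays roughly constant, so the write lands back inside the range.
echo "doubled slack=$((2 * iosize * 5 / 100))"
```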

--NR

>
> Thanks,
> Qu
>
>>     # Write from one cgroup then read and write from a second. Writes 
>> are charged to
>>   # the first group and nothing to the second.
>
-- 
Nirjhar Roy
Linux Kernel Developer
IBM, Bangalore



* Re: [PATCH 6/7] btrfs/301: Make this test compatible with all block sizes.
  2025-08-04  4:32   ` Qu Wenruo
@ 2025-08-12  6:30     ` Nirjhar Roy (IBM)
  0 siblings, 0 replies; 28+ messages in thread
From: Nirjhar Roy (IBM) @ 2025-08-12  6:30 UTC (permalink / raw)
  To: Qu Wenruo, fstests
  Cc: linux-btrfs, ritesh.list, ojaswin, djwong, zlang, fdmanana


On 8/4/25 10:02, Qu Wenruo wrote:
>
>
> 在 2025/7/29 15:51, Nirjhar Roy (IBM) 写道:
>> With large block sizes like 64k on powerpc with 64k pagesize
>> the test failed with the following logs:
>>
>>       QA output created by 301
>>       basic accounting
>>      +subvol 256 mismatched usage 33947648 vs 4587520 \
>>           (expected data 4194304 expected meta 393216 diff 29360128)
>>      +subvol 256 mismatched usage 168165376 vs 138805248 \
>>     (expected data 138412032 expected meta 393216 diff 29360128)
>>      +subvol 256 mismatched usage 33947648 vs 4587520 \
>>     (expected data 4194304 expected meta 393216 diff 29360128)
>>      +subvol 256 mismatched usage 33947648 vs 4587520 \
>>     (expected data 4194304 expected meta 393216 diff 29360128)
>>       fallocate: Disk quota exceeded
>> (Please note that the above output had to be modified a bit since
>> the number of characters in each line was much greater than
>> 72 characters.)
>
> You don't need to break the line for raw output.
Noted.
>
>>
>> The test creates nr_fill files each of size 8k i.e, 2x4k(stored in 
>> fill_sz).
>> Now with 64k blocksize, 8k sized files occupy more than expected
>> sizes (i.e, 8k) due to internal fragmentation since 1 file
>> will occupy at least 1 block. Fix this by scaling the file size 
>> (fill_sz)
>> with the blocksize.
>
> You can just replace the fill_sz to 64K so that all block sizes will 
> work.
>
> Just tested with 64K fill_sz, it works for both 4K and 64K block size 
> with 64K page size.

Okay, I will make the test work with 64k-aligned values (similar to the 
suggestions for the previous fixes in this patch series).
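
For reference, the fixed-size variant Qu suggests would reduce to
something like the following sketch (names taken from the test, final
values still to be confirmed):

```shell
# Sketch only: a fixed 64K fill_sz is a multiple of every supported
# btrfs block size, so no per-file internal fragmentation is left
# unaccounted and total_fill stays accurate for all of them.
nr_fill=512
fill_sz=$((64 * 1024))
total_fill=$((nr_fill * fill_sz))
echo "total_fill=$total_fill"   # 32M of fill data regardless of bs
```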

--NR

>
> Thanks,
> Qu
>
>>
>> Reported-by: Disha Goel <disgoel@linux.ibm.com>
>> Signed-off-by: Nirjhar Roy (IBM) <nirjhar.roy.lists@gmail.com>
>> ---
>>   tests/btrfs/301 | 8 +++++++-
>>   1 file changed, 7 insertions(+), 1 deletion(-)
>>
>> diff --git a/tests/btrfs/301 b/tests/btrfs/301
>> index 6b59749d..7547ff0e 100755
>> --- a/tests/btrfs/301
>> +++ b/tests/btrfs/301
>> @@ -23,7 +23,13 @@ subv=$SCRATCH_MNT/subv
>>   nested=$SCRATCH_MNT/subv/nested
>>   snap=$SCRATCH_MNT/snap
>>   nr_fill=512
>> -fill_sz=$((8 * 1024))
>> +
>> +_scratch_mkfs >> $seqres.full
>> +_scratch_mount
>> +blksz=`_get_block_size $SCRATCH_MNT`
>> +_scratch_unmount
>> +fill_sz=$(( 2 * blksz ))
>> +
>>   total_fill=$(($nr_fill * $fill_sz))
>>   nodesize=$($BTRFS_UTIL_PROG inspect-internal dump-super 
>> $SCRATCH_DEV | \
>>                       grep nodesize | $AWK_PROG '{print $2}')
>
-- 
Nirjhar Roy
Linux Kernel Developer
IBM, Bangalore



* Re: [PATCH 7/7] generic/274: Make the test compatible with all blocksizes.
  2025-08-04  4:35   ` Qu Wenruo
@ 2025-08-12  6:30     ` Nirjhar Roy (IBM)
  0 siblings, 0 replies; 28+ messages in thread
From: Nirjhar Roy (IBM) @ 2025-08-12  6:30 UTC (permalink / raw)
  To: Qu Wenruo, fstests
  Cc: linux-btrfs, ritesh.list, ojaswin, djwong, zlang, fdmanana


On 8/4/25 10:05, Qu Wenruo wrote:
>
>
> 在 2025/7/29 15:51, Nirjhar Roy (IBM) 写道:
>> On btrfs with 64k blocksize on powerpc with 64k pagesize
>> it failed with the following error:
>>
>>       ------------------------------
>>       preallocation test
>>       ------------------------------
>>      -done
>>      +failed to write to test file
>>      +(see /home/xfstests-dev/results//btrfs_64k/generic/274.full for 
>> details)
>>      ...
>> So, this test is written with 4K block size in mind. As we can see,
>> it first creates a file of size 4K and then fallocates 4MB beyond the
>> EOF.
>> Then there are 2 loops - one that fragments at alternate blocks and
>> the other punches holes in the remaining alternate blocks. Hence,
>> the test fails in 64k block size due to incorrect calculations.
>>
>> Fix this test by making the test scale with the block size, that is
>> the offset, filesize and the assumed blocksize matches/scales with
>> the actual blocksize of the underlying filesystem.
>
> Again, just enlarge the block size from 4K to 64K, then all block size 
> will work.

Okay.
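
If it helps, a fixed-64K variant of the setup might look like this
(hedged sketch, sizes assumed; the real patch may differ):

```shell
# Sketch: fix the unit at 64K instead of probing the block size. The
# two loops over `seq 1 2 1023` / `seq 2 2 1023` then fragment and fill
# a 64M preallocated region in 1024 block-sized slots.
blksz=$((64 * 1024))
prealloc=$((1024 * blksz))       # 64M fallocated past EOF
echo "pwrite 0 $blksz; falloc -k $blksz $prealloc"
echo "slots=$((prealloc / blksz))"
```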

--NR

>
> Thanks,
> Qu
>
>>
>> Reported-by: Disha Goel <disgoel@linux.ibm.com>
>> Signed-off-by: Nirjhar Roy (IBM) <nirjhar.roy.lists@gmail.com>
>> ---
>>   tests/generic/274 | 21 +++++++++++----------
>>   1 file changed, 11 insertions(+), 10 deletions(-)
>>
>> diff --git a/tests/generic/274 b/tests/generic/274
>> index 916c7173..4ea42f30 100755
>> --- a/tests/generic/274
>> +++ b/tests/generic/274
>> @@ -40,30 +40,31 @@ _scratch_unmount 2>/dev/null
>>   _scratch_mkfs_sized $((2 * 1024 * 1024 * 1024)) >>$seqres.full 2>&1
>>   _scratch_mount
>>   -# Create a 4k file and Allocate 4M past EOF on that file
>> -$XFS_IO_PROG -f -c "pwrite 0 4k" -c "falloc -k 4k 4m" 
>> $SCRATCH_MNT/test \
>> -    >>$seqres.full 2>&1 || _fail "failed to create test file"
>> +blksz=`_get_block_size $SCRATCH_MNT`
>> +scale=$(( blksz / 1024 ))
>> +# Create a blocksize worth file and Allocate a large file past EOF 
>> on that file
>> +$XFS_IO_PROG -f -c "pwrite -b $blksz 0 $blksz" -c "falloc -k $blksz 
>> $(( 1 * 1024 * 1024 * scale ))" \
>> +    $SCRATCH_MNT/test >>$seqres.full 2>&1 || _fail "failed to create 
>> test file"
>>     # Fill the rest of the fs completely
>>   # Note, this will show ENOSPC errors in $seqres.full, that's ok.
>>   echo "Fill fs with 1M IOs; ENOSPC expected" >> $seqres.full
>>   dd if=/dev/zero of=$SCRATCH_MNT/tmp1 bs=1M >>$seqres.full 2>&1
>> -echo "Fill fs with 4K IOs; ENOSPC expected" >> $seqres.full
>> -dd if=/dev/zero of=$SCRATCH_MNT/tmp2 bs=4K >>$seqres.full 2>&1
>> +echo "Fill fs with $blksz K IOs; ENOSPC expected" >> $seqres.full
>> +dd if=/dev/zero of=$SCRATCH_MNT/tmp2 bs=$blksz >>$seqres.full 2>&1
>>   _scratch_sync
>>   # Last effort, use O_SYNC
>> -echo "Fill fs with 4K DIOs; ENOSPC expected" >> $seqres.full
>> -dd if=/dev/zero of=$SCRATCH_MNT/tmp3 bs=4K oflag=sync >>$seqres.full 
>> 2>&1
>> +echo "Fill fs with $blksz DIOs; ENOSPC expected" >> $seqres.full
>> +dd if=/dev/zero of=$SCRATCH_MNT/tmp3 bs=$blksz oflag=sync 
>> >>$seqres.full 2>&1
>>   # Save space usage info
>>   echo "Post-fill space:" >> $seqres.full
>>   df $SCRATCH_MNT >>$seqres.full 2>&1
>> -
>>   # Now attempt a write into all of the preallocated space -
>>   # in a very nasty way, badly fragmenting it and then filling it in.
>>   echo "Fill in prealloc space; fragment at offsets:" >> $seqres.full
>>   for i in `seq 1 2 1023`; do
>>       echo -n "$i " >> $seqres.full
>> -    dd if=/dev/zero of=$SCRATCH_MNT/test seek=$i bs=4K count=1 
>> conv=notrunc \
>> +    dd if=/dev/zero of=$SCRATCH_MNT/test seek=$i bs=$blksz count=1 
>> conv=notrunc \
>>           >>$seqres.full 2>/dev/null || _fail "failed to write to 
>> test file"
>>   done
>>   _scratch_sync
>> @@ -71,7 +72,7 @@ echo >> $seqres.full
>>   echo "Fill in prealloc space; fill holes at offsets:" >> $seqres.full
>>   for i in `seq 2 2 1023`; do
>>       echo -n "$i " >> $seqres.full
>> -    dd if=/dev/zero of=$SCRATCH_MNT/test seek=$i bs=4K count=1 
>> conv=notrunc \
>> +    dd if=/dev/zero of=$SCRATCH_MNT/test seek=$i bs=$blksz count=1 
>> conv=notrunc \
>>           >>$seqres.full 2>/dev/null || _fail "failed to fill test file"
>>   done
>>   _scratch_sync
>
-- 
Nirjhar Roy
Linux Kernel Developer
IBM, Bangalore



end of thread, other threads:[~2025-08-12  6:31 UTC | newest]

Thread overview: 28+ messages
2025-07-29  6:21 [PATCH 0/7] btrfs: Misc test fixes for large block/node sizes Nirjhar Roy (IBM)
2025-07-29  6:21 ` [PATCH 1/7] common/filter: Add a helper function to filter offsets and sizes Nirjhar Roy (IBM)
2025-07-29  6:21 ` [PATCH 2/7] common/btrfs: Add a helper function to get the nodesize Nirjhar Roy (IBM)
2025-07-29  6:21 ` [PATCH 3/7] btrfs/137: Make this compatible with all block sizes Nirjhar Roy (IBM)
2025-08-04  3:58   ` Qu Wenruo
2025-08-05  9:41     ` Ojaswin Mujoo
2025-08-05  9:44       ` Qu Wenruo
2025-08-05 12:39         ` Ojaswin Mujoo
2025-08-05 10:47       ` Filipe Manana
2025-08-12  6:23         ` Nirjhar Roy (IBM)
2025-08-12  6:22     ` Nirjhar Roy (IBM)
2025-07-29  6:21 ` [PATCH 4/7] btrfs/200: Make this test scale with the block size Nirjhar Roy (IBM)
2025-07-29  6:53   ` Filipe Manana
2025-08-12  6:26     ` Nirjhar Roy (IBM)
2025-08-04  4:19   ` Qu Wenruo
2025-07-29  6:21 ` [PATCH 5/7] generic/563: Increase the write tolerance to 6% for larger nodesize Nirjhar Roy (IBM)
2025-07-29  7:45   ` Christoph Hellwig
2025-08-04  7:18     ` Nirjhar Roy (IBM)
2025-07-30 15:06   ` Filipe Manana
2025-08-04  7:18     ` Nirjhar Roy (IBM)
2025-08-04  4:28   ` Qu Wenruo
2025-08-12  6:27     ` Nirjhar Roy (IBM)
2025-07-29  6:21 ` [PATCH 6/7] btrfs/301: Make this test compatible with all block sizes Nirjhar Roy (IBM)
2025-08-04  4:32   ` Qu Wenruo
2025-08-12  6:30     ` Nirjhar Roy (IBM)
2025-07-29  6:21 ` [PATCH 7/7] generic/274: Make the test compatible with all blocksizes Nirjhar Roy (IBM)
2025-08-04  4:35   ` Qu Wenruo
2025-08-12  6:30     ` Nirjhar Roy (IBM)
