* [PATCH v4 0/4] btrfs: Misc test fixes for large block/node sizes
@ 2025-08-25 6:04 Nirjhar Roy (IBM)
From: Nirjhar Roy (IBM) @ 2025-08-25 6:04 UTC (permalink / raw)
To: fstests
Cc: linux-btrfs, ritesh.list, ojaswin, djwong, zlang, fdmanana,
nirjhar.roy.lists, quwenruo.btrfs
Some of the btrfs and generic tests are written with 4k block/node size in mind.
This series fixes some of those tests to be compatible with other block/node sizes too.
We caught these bugs while running the auto group tests on btrfs with large
block/node sizes.
[v3] -> [v4]:
1. Fixed commit message typo for generic/563
2. Added RB of Qu for generic/274
3. Added RB of Ojaswin for all the patches.
[v2] -> [v3]:
1. Added RBs by Qu Wenruo in btrfs/{301, 137}
2. Updated the commit message of generic/563 with a more detailed explanation
by Qu Wenruo.
3. Reverted the block size from 64k back to 4k when filling the filesystem with dd
for test generic/274.
[v1] -> [v2]:
1. Removed the patch for btrfs/200 of [v1] - need more analysis on this.
2. Removed the first 2 patches of [v1] which introduced 2 new helper functions
3. btrfs/{137,301} and generic/274 - Instead of scaling the test dynamically
based on the underlying disk block size, I have hardcoded the pwrite block sizes
and offsets to 64k, which is aligned with all underlying fs block sizes <= 64k.
4. For generic/563 - Doubled the iosize instead of adding a btrfs-specific hack,
to cover the extra writes from btrfs's larger node sizes.
5. Updated the commit messages
[v1] - https://lore.kernel.org/all/cover.1753769382.git.nirjhar.roy.lists@gmail.com/
[v2] - https://lore.kernel.org/all/cover.1755604735.git.nirjhar.roy.lists@gmail.com/
[v3] - https://lore.kernel.org/all/cover.1755677274.git.nirjhar.roy.lists@gmail.com/
Nirjhar Roy (IBM) (4):
btrfs/301: Make the test compatible with all the supported block sizes
generic/274: Make the pwrite block sizes and offsets to 64k
btrfs/137: Make this test compatible with all supported block sizes
generic/563: Increase the iosize to cover for btrfs higher node sizes
tests/btrfs/137 | 11 ++++----
tests/btrfs/137.out | 66 ++++++++++++++++++++++-----------------------
tests/btrfs/301 | 2 +-
tests/generic/274 | 8 +++---
tests/generic/563 | 2 +-
5 files changed, 45 insertions(+), 44 deletions(-)
--
2.34.1
* [PATCH v4 1/4] btrfs/301: Make the test compatible with all the supported block sizes
From: Nirjhar Roy (IBM) @ 2025-08-25 6:04 UTC (permalink / raw)
To: fstests
Cc: linux-btrfs, ritesh.list, ojaswin, djwong, zlang, fdmanana,
nirjhar.roy.lists, quwenruo.btrfs
With large block sizes like 64k the test failed with the
following logs:
QA output created by 301
basic accounting
+subvol 256 mismatched usage 33947648 vs 4587520 (expected data 4194304 expected meta 393216 diff 29360128)
+subvol 256 mismatched usage 168165376 vs 138805248 (expected data 138412032 expected meta 393216 diff 29360128)
+subvol 256 mismatched usage 33947648 vs 4587520 (expected data 4194304 expected meta 393216 diff 29360128)
+subvol 256 mismatched usage 33947648 vs 4587520 (expected data 4194304 expected meta 393216 diff 29360128)
fallocate: Disk quota exceeded
The test creates nr_fill files, each of size 8k. With a 64k block
size, the 8k files occupy more space than their nominal size (i.e., 8k)
due to internal fragmentation, since each file occupies at least one
fsblock. Fix this by making the file size 64k, which is aligned
with all the supported block sizes.
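The arithmetic behind this failure can be sketched as follows (an illustrative model, not code from the test): a file consumes at least one fsblock, so its on-disk usage is the file size rounded up to the block size.

```python
import math

def on_disk_usage(file_size, block_size):
    # Minimum space a file consumes: size rounded up to whole fsblocks.
    return math.ceil(file_size / block_size) * block_size

nr_fill = 512

# With 4k blocks, 512 x 8k files use exactly the expected 4 MiB of data.
assert nr_fill * on_disk_usage(8 * 1024, 4 * 1024) == 4194304

# With 64k blocks, each 8k file pads out to a full 64k fsblock.
usage_64k = nr_fill * on_disk_usage(8 * 1024, 64 * 1024)
assert usage_64k == 33554432

# The padding accounts exactly for the "diff 29360128" in the quoted output.
assert usage_64k - 4194304 == 29360128

# A 64k file size is an exact multiple of every supported block size <= 64k.
for bs in (4 * 1024, 16 * 1024, 64 * 1024):
    assert (64 * 1024) % bs == 0
```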
Reviewed-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: Ojaswin Mujoo <ojaswin@linux.ibm.com>
Reported-by: Disha Goel <disgoel@linux.ibm.com>
Signed-off-by: Nirjhar Roy (IBM) <nirjhar.roy.lists@gmail.com>
---
tests/btrfs/301 | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tests/btrfs/301 b/tests/btrfs/301
index 6b59749d..be346f52 100755
--- a/tests/btrfs/301
+++ b/tests/btrfs/301
@@ -23,7 +23,7 @@ subv=$SCRATCH_MNT/subv
nested=$SCRATCH_MNT/subv/nested
snap=$SCRATCH_MNT/snap
nr_fill=512
-fill_sz=$((8 * 1024))
+fill_sz=$((64 * 1024))
total_fill=$(($nr_fill * $fill_sz))
nodesize=$($BTRFS_UTIL_PROG inspect-internal dump-super $SCRATCH_DEV | \
grep nodesize | $AWK_PROG '{print $2}')
--
2.34.1
* [PATCH v4 2/4] generic/274: Make the pwrite block sizes and offsets to 64k
From: Nirjhar Roy (IBM) @ 2025-08-25 6:04 UTC (permalink / raw)
To: fstests
Cc: linux-btrfs, ritesh.list, ojaswin, djwong, zlang, fdmanana,
nirjhar.roy.lists, quwenruo.btrfs
This test was written with a 4k block size in mind and fails with a
64k block size when tested on btrfs.
The test first does preallocation, then fills up the filesystem. After
that it tries to fragment and fill holes at 4k offsets (i.e., 1 fsblock
with a 4k block size) - which works fine with a 4k block size, but with
a 64k block size the test ends up fragmenting and filling holes within
a single 64k fsblock. This turns the writes into overwrites of 64k
fsblocks, and the writes fail because during an overwrite there is no
more space available for COW.
Fix this by changing the pwrite block size and offsets to 64k so that
the test never tries to punch holes or overwrite within a single
fsblock, making it compatible with all block sizes.
For non-COW filesystems/files, this test should work even if the
underlying filesystem block size is > 64k.
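The collision between 4k write offsets and 64k fsblocks can be illustrated with a short sketch (illustrative only, not part of the test):

```python
def fsblock(offset_bytes, block_size):
    # Index of the fsblock that a byte offset falls into.
    return offset_bytes // block_size

odd_offsets = range(1, 1024, 2)  # the "fragment" loop: seq 1 2 1023

# 4k writes on a 4k-block fs: all 512 writes land in distinct fsblocks.
assert len({fsblock(i * 4096, 4096) for i in odd_offsets}) == 512

# 4k writes on a 64k-block fs: the same 512 writes collapse onto only 64
# fsblocks, so most of them overwrite already-written blocks -- and a full
# COW filesystem has no free space left to COW those overwrites into.
assert len({fsblock(i * 4096, 65536) for i in odd_offsets}) == 64

# 64k writes at 64k offsets: a distinct fsblock per write for every
# supported block size <= 64k.
assert len({fsblock(i * 65536, 65536) for i in odd_offsets}) == 512
```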
Reviewed-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: Ojaswin Mujoo <ojaswin@linux.ibm.com>
Reported-by: Disha Goel <disgoel@linux.ibm.com>
Signed-off-by: Nirjhar Roy (IBM) <nirjhar.roy.lists@gmail.com>
---
tests/generic/274 | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/tests/generic/274 b/tests/generic/274
index 916c7173..f6c7884e 100755
--- a/tests/generic/274
+++ b/tests/generic/274
@@ -40,8 +40,8 @@ _scratch_unmount 2>/dev/null
_scratch_mkfs_sized $((2 * 1024 * 1024 * 1024)) >>$seqres.full 2>&1
_scratch_mount
-# Create a 4k file and Allocate 4M past EOF on that file
-$XFS_IO_PROG -f -c "pwrite 0 4k" -c "falloc -k 4k 4m" $SCRATCH_MNT/test \
+# Create a 64k file and Allocate 64M past EOF on that file
+$XFS_IO_PROG -f -c "pwrite 0 64k" -c "falloc -k 64k 64m" $SCRATCH_MNT/test \
>>$seqres.full 2>&1 || _fail "failed to create test file"
# Fill the rest of the fs completely
@@ -63,7 +63,7 @@ df $SCRATCH_MNT >>$seqres.full 2>&1
echo "Fill in prealloc space; fragment at offsets:" >> $seqres.full
for i in `seq 1 2 1023`; do
echo -n "$i " >> $seqres.full
- dd if=/dev/zero of=$SCRATCH_MNT/test seek=$i bs=4K count=1 conv=notrunc \
+ dd if=/dev/zero of=$SCRATCH_MNT/test seek=$i bs=64K count=1 conv=notrunc \
>>$seqres.full 2>/dev/null || _fail "failed to write to test file"
done
_scratch_sync
@@ -71,7 +71,7 @@ echo >> $seqres.full
echo "Fill in prealloc space; fill holes at offsets:" >> $seqres.full
for i in `seq 2 2 1023`; do
echo -n "$i " >> $seqres.full
- dd if=/dev/zero of=$SCRATCH_MNT/test seek=$i bs=4K count=1 conv=notrunc \
+ dd if=/dev/zero of=$SCRATCH_MNT/test seek=$i bs=64K count=1 conv=notrunc \
>>$seqres.full 2>/dev/null || _fail "failed to fill test file"
done
_scratch_sync
--
2.34.1
* [PATCH v4 3/4] btrfs/137: Make this test compatible with all supported block sizes
From: Nirjhar Roy (IBM) @ 2025-08-25 6:04 UTC (permalink / raw)
To: fstests
Cc: linux-btrfs, ritesh.list, ojaswin, djwong, zlang, fdmanana,
nirjhar.roy.lists, quwenruo.btrfs
With large block sizes like 64k this test failed simply because it
was written with a 4k block size in mind.
The first few lines of the error log are as follows:
d3dc847171f9081bd75d7a2d3b53d322 SCRATCH_MNT/snap2/bar
File snap1/foo fiemap results in the original filesystem:
-0: [0..7]: data
+0: [0..127]: data
File snap1/bar fiemap results in the original filesystem:
...
Fix this by making the test use 64k block sizes and offsets, which
are aligned with all the supported underlying fs block sizes.
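The .out changes below are a mechanical conversion of the new byte offsets into the 512-byte sectors used by the fiemap output; a quick sketch (illustrative only):

```python
SECTOR = 512  # fiemap ranges in the .out file are in 512-byte sectors

def to_sectors(offset_bytes, length_bytes):
    # Byte range -> inclusive fiemap sector range, as xfs_io prints it.
    first = offset_bytes // SECTOR
    last = (offset_bytes + length_bytes) // SECTOR - 1
    return first, last

# Old layout: 4k of data at offset 0, then 4k of data at 1028k.
assert to_sectors(0, 4 * 1024) == (0, 7)                  # "0: [0..7]: data"
assert to_sectors(1028 * 1024, 4 * 1024) == (2056, 2063)  # "2: [2056..2063]: data"

# New layout: 64k of data at offset 0, then 64k of data at 1088k.
assert to_sectors(0, 64 * 1024) == (0, 127)                # "0: [0..127]: data"
assert to_sectors(1088 * 1024, 64 * 1024) == (2176, 2303)  # "2: [2176..2303]: data"
```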
Reviewed-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: Ojaswin Mujoo <ojaswin@linux.ibm.com>
Reported-by: Disha Goel <disgoel@linux.ibm.com>
Signed-off-by: Nirjhar Roy (IBM) <nirjhar.roy.lists@gmail.com>
---
tests/btrfs/137 | 11 ++++----
tests/btrfs/137.out | 66 ++++++++++++++++++++++-----------------------
2 files changed, 39 insertions(+), 38 deletions(-)
diff --git a/tests/btrfs/137 b/tests/btrfs/137
index 7710dc18..c1d498bd 100755
--- a/tests/btrfs/137
+++ b/tests/btrfs/137
@@ -23,6 +23,7 @@ _cleanup()
_require_test
_require_scratch
_require_xfs_io_command "fiemap"
+_require_btrfs_no_compress
send_files_dir=$TEST_DIR/btrfs-test-$seq
@@ -33,12 +34,12 @@ _scratch_mkfs >>$seqres.full 2>&1
_scratch_mount
# Create the first test file.
-$XFS_IO_PROG -f -c "pwrite -S 0xaa 0 4K" $SCRATCH_MNT/foo | _filter_xfs_io
+$XFS_IO_PROG -f -c "pwrite -S 0xaa -b 64k 0 64K" $SCRATCH_MNT/foo | _filter_xfs_io
# Create a second test file with a 1Mb hole.
$XFS_IO_PROG -f \
- -c "pwrite -S 0xaa 0 4K" \
- -c "pwrite -S 0xbb 1028K 4K" \
+ -c "pwrite -S 0xaa -b 64k 0 64K" \
+ -c "pwrite -S 0xbb -b 64k 1088K 64K" \
$SCRATCH_MNT/bar | _filter_xfs_io
$BTRFS_UTIL_PROG subvolume snapshot -r $SCRATCH_MNT \
@@ -46,10 +47,10 @@ $BTRFS_UTIL_PROG subvolume snapshot -r $SCRATCH_MNT \
# Now add one new extent to our first test file, increasing its size and leaving
# a 1Mb hole between the first extent and this new extent.
-$XFS_IO_PROG -c "pwrite -S 0xbb 1028K 4K" $SCRATCH_MNT/foo | _filter_xfs_io
+$XFS_IO_PROG -c "pwrite -S 0xbb -b 64k 1088K 64K" $SCRATCH_MNT/foo | _filter_xfs_io
# Now overwrite the last extent of our second test file.
-$XFS_IO_PROG -c "pwrite -S 0xcc 1028K 4K" $SCRATCH_MNT/bar | _filter_xfs_io
+$XFS_IO_PROG -c "pwrite -S 0xcc -b 64k 1088K 64K" $SCRATCH_MNT/bar | _filter_xfs_io
$BTRFS_UTIL_PROG subvolume snapshot -r $SCRATCH_MNT \
$SCRATCH_MNT/snap2 >/dev/null
diff --git a/tests/btrfs/137.out b/tests/btrfs/137.out
index 8554399f..e863dd51 100644
--- a/tests/btrfs/137.out
+++ b/tests/btrfs/137.out
@@ -1,63 +1,63 @@
QA output created by 137
-wrote 4096/4096 bytes at offset 0
+wrote 65536/65536 bytes at offset 0
XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
-wrote 4096/4096 bytes at offset 0
+wrote 65536/65536 bytes at offset 0
XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
-wrote 4096/4096 bytes at offset 1052672
+wrote 65536/65536 bytes at offset 1114112
XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
-wrote 4096/4096 bytes at offset 1052672
+wrote 65536/65536 bytes at offset 1114112
XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
-wrote 4096/4096 bytes at offset 1052672
+wrote 65536/65536 bytes at offset 1114112
XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
File digests in the original filesystem:
-3e4309c7cc81f23d45e260a8f13ca860 SCRATCH_MNT/snap1/foo
-f3934f0cf164e2efa1bab71f2f164990 SCRATCH_MNT/snap1/bar
-f3934f0cf164e2efa1bab71f2f164990 SCRATCH_MNT/snap2/foo
-d3dc847171f9081bd75d7a2d3b53d322 SCRATCH_MNT/snap2/bar
+9802287a6faa01a1fd0e01732b732fca SCRATCH_MNT/snap1/foo
+fe93f68ad1d8d5e47feba666ee6d3c47 SCRATCH_MNT/snap1/bar
+fe93f68ad1d8d5e47feba666ee6d3c47 SCRATCH_MNT/snap2/foo
+8d06f9b5841190b586a7526d0dd356f3 SCRATCH_MNT/snap2/bar
File snap1/foo fiemap results in the original filesystem:
-0: [0..7]: data
+0: [0..127]: data
File snap1/bar fiemap results in the original filesystem:
-0: [0..7]: data
-1: [8..2055]: hole
-2: [2056..2063]: data
+0: [0..127]: data
+1: [128..2175]: hole
+2: [2176..2303]: data
File snap2/foo fiemap results in the original filesystem:
-0: [0..7]: data
-1: [8..2055]: hole
-2: [2056..2063]: data
+0: [0..127]: data
+1: [128..2175]: hole
+2: [2176..2303]: data
File snap2/bar fiemap results in the original filesystem:
-0: [0..7]: data
-1: [8..2055]: hole
-2: [2056..2063]: data
+0: [0..127]: data
+1: [128..2175]: hole
+2: [2176..2303]: data
At subvol SCRATCH_MNT/snap1
At subvol SCRATCH_MNT/snap2
At subvol snap1
File digests in the new filesystem:
-3e4309c7cc81f23d45e260a8f13ca860 SCRATCH_MNT/snap1/foo
-f3934f0cf164e2efa1bab71f2f164990 SCRATCH_MNT/snap1/bar
-f3934f0cf164e2efa1bab71f2f164990 SCRATCH_MNT/snap2/foo
-d3dc847171f9081bd75d7a2d3b53d322 SCRATCH_MNT/snap2/bar
+9802287a6faa01a1fd0e01732b732fca SCRATCH_MNT/snap1/foo
+fe93f68ad1d8d5e47feba666ee6d3c47 SCRATCH_MNT/snap1/bar
+fe93f68ad1d8d5e47feba666ee6d3c47 SCRATCH_MNT/snap2/foo
+8d06f9b5841190b586a7526d0dd356f3 SCRATCH_MNT/snap2/bar
File snap1/foo fiemap results in the new filesystem:
-0: [0..7]: data
+0: [0..127]: data
File snap1/bar fiemap results in the new filesystem:
-0: [0..7]: data
-1: [8..2055]: hole
-2: [2056..2063]: data
+0: [0..127]: data
+1: [128..2175]: hole
+2: [2176..2303]: data
File snap2/foo fiemap results in the new filesystem:
-0: [0..7]: data
-1: [8..2055]: hole
-2: [2056..2063]: data
+0: [0..127]: data
+1: [128..2175]: hole
+2: [2176..2303]: data
File snap2/bar fiemap results in the new filesystem:
-0: [0..7]: data
-1: [8..2055]: hole
-2: [2056..2063]: data
+0: [0..127]: data
+1: [128..2175]: hole
+2: [2176..2303]: data
--
2.34.1
* [PATCH v4 4/4] generic/563: Increase the iosize to cover for btrfs higher node sizes
From: Nirjhar Roy (IBM) @ 2025-08-25 6:04 UTC (permalink / raw)
To: fstests
Cc: linux-btrfs, ritesh.list, ojaswin, djwong, zlang, fdmanana,
nirjhar.roy.lists, quwenruo.btrfs
When tested with a 64k block/node size on btrfs, the test fails
with the following error:
QA output created by 563
read/write
read is in range
-write is in range
+write has value of 8855552
+write is NOT in range 7969177.6 .. 8808038.4
write -> read/write
...
The slight increase in the number of bytes written is due to the
larger nodesize (metadata), which pushes the total just past the
tolerance limit. Fix this by increasing the iosize: a larger iosize
widens the absolute tolerance range and covers the extra metadata
writes from btrfs's larger node sizes.
A very detailed explanation is given by Qu Wenruo in [1].
[1] https://lore.kernel.org/all/fa0dc9e3-2025-49f2-9f20-71190382fce5@gmx.com/
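The numbers in the quoted failure are consistent with a +/-5% tolerance around iosize; the sketch below (the 5% figure and the metadata overhead are inferred from the quoted output, not taken from the test source) shows why doubling iosize absorbs the overhead:

```python
TOL = 0.05  # +/-5%, inferred from the quoted range 7969177.6 .. 8808038.4

def in_range(value, expected, tol=TOL):
    return expected * (1 - tol) <= value <= expected * (1 + tol)

old_iosize = 8 * 1024 * 1024    # 8388608
new_iosize = 16 * 1024 * 1024

# The quoted range is exactly +/-5% around the old iosize.
assert round(old_iosize * (1 - TOL), 1) == 7969177.6
assert round(old_iosize * (1 + TOL), 1) == 8808038.4

# The observed write count overshoots by ~456 KiB of metadata writes.
overhead = 8855552 - old_iosize
assert not in_range(old_iosize + overhead, old_iosize)

# Doubling iosize doubles the absolute tolerance (to ~819 KiB), which
# comfortably absorbs the same metadata overhead.
assert in_range(new_iosize + overhead, new_iosize)
```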
Reviewed-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: Ojaswin Mujoo <ojaswin@linux.ibm.com>
Reported-by: Disha Goel <disgoel@linux.ibm.com>
Signed-off-by: Nirjhar Roy (IBM) <nirjhar.roy.lists@gmail.com>
---
tests/generic/563 | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tests/generic/563 b/tests/generic/563
index 89a71aa4..6cb9ddb0 100755
--- a/tests/generic/563
+++ b/tests/generic/563
@@ -43,7 +43,7 @@ _require_block_device $SCRATCH_DEV
_require_non_zoned_device ${SCRATCH_DEV}
cgdir=$CGROUP2_PATH
-iosize=$((1024 * 1024 * 8))
+iosize=$((1024 * 1024 * 16))
# Check cgroup read/write charges against expected values. Allow for some
# tolerance as different filesystems seem to account slightly differently.
--
2.34.1