linux-fsdevel.vger.kernel.org archive mirror
* Ext4 iomap warning during btrfs/136 (yes, it's from btrfs test cases)
@ 2025-08-08  7:52 Qu Wenruo
  2025-08-08  8:50 ` Qu Wenruo
  0 siblings, 1 reply; 9+ messages in thread
From: Qu Wenruo @ 2025-08-08  7:52 UTC (permalink / raw)
  To: linux-ext4, linux-btrfs, linux-fsdevel@vger.kernel.org

Hi,

[BACKGROUND]
Recently I'm testing btrfs with 16KiB block size.

Currently btrfs is artificially limiting subpage block size to 4K.
But there is a simple patch to change it to support all block sizes <= 
page size in my branch:

https://github.com/adam900710/linux/tree/larger_bs_support

[IOMAP WARNING]
And I'm running into a very weird kernel warning at btrfs/136, with 16K 
block size and 64K page size.

The problem is, it happens with ext3 (using the ext4 module) with a
16K block size, and no btrfs is involved yet.

The test case btrfs/136 creates an ext3 fs first, using the same block
size as the btrfs on TEST_DEV (so it's 16K), then populates the fs.

The hang happens at the ext3 populating part, with the following kernel 
warning:

[  989.664270] run fstests btrfs/136 at 2025-08-08 16:57:37
[  990.551395] EXT4-fs (dm-3): mounting ext3 file system using the ext4 
subsystem
[  990.554980] EXT4-fs (dm-3): mounted filesystem 
d90f4325-e6a6-4787-9da8-150ece277a94 r/w with ordered data mode. Quota 
mode: none.
[  990.581540] ------------[ cut here ]------------
[  990.581551] WARNING: CPU: 3 PID: 434101 at fs/iomap/iter.c:34 
iomap_iter_done+0x148/0x190
[  990.583497] Modules linked in: dm_flakey nls_ascii nls_cp437 vfat fat 
btrfs polyval_ce ghash_ce rtc_efi processor xor xor_neon raid6_pq 
zstd_compress fuse loop nfnetlink qemu_fw_cfg ext4 crc16 mbcache jbd2 
dm_mod xhci_pci xhci_hcd virtio_net virtio_scsi net_failover failover 
virtio_console virtio_balloon virtio_blk virtio_mmio
[  990.587247] CPU: 3 UID: 0 PID: 434101 Comm: fsstress Not tainted 
6.16.0-rc7-custom+ #128 PREEMPT(voluntary)
[  990.588525] Hardware name: QEMU KVM Virtual Machine, BIOS unknown 
2/2/2022
[  990.589414] pstate: 60400005 (nZCv daif +PAN -UAO -TCO -DIT -SSBS 
BTYPE=--)
[  990.590314] pc : iomap_iter_done+0x148/0x190
[  990.590874] lr : iomap_iter+0x174/0x230
[  990.591370] sp : ffff8000880af740
[  990.591800] x29: ffff8000880af740 x28: ffff0000db8e6840 x27: 
0000000000000000
[  990.592716] x26: 0000000000000000 x25: ffff8000880af830 x24: 
0000004000000000
[  990.593631] x23: 0000000000000002 x22: 000001bfdbfa8000 x21: 
ffffa6a41c002e48
[  990.594549] x20: 0000000000000001 x19: ffff8000880af808 x18: 
0000000000000000
[  990.595464] x17: 0000000000000000 x16: ffffa6a495ee6cd0 x15: 
0000000000000000
[  990.596379] x14: 00000000000003d4 x13: 00000000fa83b2da x12: 
0000b236fc95f18c
[  990.597295] x11: ffffa6a4978b9c08 x10: 0000000000001da0 x9 : 
ffffa6a41c1a2a44
[  990.598210] x8 : ffff8000880af5c8 x7 : 0000000001000000 x6 : 
0000000000000000
[  990.599125] x5 : 0000000000000004 x4 : 000001bfdbfa8000 x3 : 
0000000000000000
[  990.600040] x2 : 0000000000000000 x1 : 0000004004030000 x0 : 
0000000000000000
[  990.600955] Call trace:
[  990.601273]  iomap_iter_done+0x148/0x190 (P)
[  990.601829]  iomap_iter+0x174/0x230
[  990.602280]  iomap_fiemap+0x154/0x1d8
[  990.602751]  ext4_fiemap+0x110/0x140 [ext4]
[  990.603350]  do_vfs_ioctl+0x4b8/0xbc0
[  990.603831]  __arm64_sys_ioctl+0x8c/0x120
[  990.604346]  invoke_syscall+0x6c/0x100
[  990.604836]  el0_svc_common.constprop.0+0x48/0xf0
[  990.605444]  do_el0_svc+0x24/0x38
[  990.605875]  el0_svc+0x38/0x120
[  990.606283]  el0t_64_sync_handler+0x10c/0x138
[  990.606846]  el0t_64_sync+0x198/0x1a0
[  990.607319] ---[ end trace 0000000000000000 ]---
[  990.608042] ------------[ cut here ]------------
[  990.608047] WARNING: CPU: 3 PID: 434101 at fs/iomap/iter.c:35 
iomap_iter_done+0x164/0x190
[  990.610842] Modules linked in: dm_flakey nls_ascii nls_cp437 vfat fat 
btrfs polyval_ce ghash_ce rtc_efi processor xor xor_neon raid6_pq 
zstd_compress fuse loop nfnetlink qemu_fw_cfg ext4 crc16 mbcache jbd2 
dm_mod xhci_pci xhci_hcd virtio_net virtio_scsi net_failover failover 
virtio_console virtio_balloon virtio_blk virtio_mmio
[  990.619189] CPU: 3 UID: 0 PID: 434101 Comm: fsstress Tainted: G 
  W           6.16.0-rc7-custom+ #128 PREEMPT(voluntary)
[  990.620876] Tainted: [W]=WARN
[  990.621458] Hardware name: QEMU KVM Virtual Machine, BIOS unknown 
2/2/2022
[  990.622507] pstate: 60400005 (nZCv daif +PAN -UAO -TCO -DIT -SSBS 
BTYPE=--)
[  990.623911] pc : iomap_iter_done+0x164/0x190
[  990.624936] lr : iomap_iter+0x174/0x230
[  990.626747] sp : ffff8000880af740
[  990.627404] x29: ffff8000880af740 x28: ffff0000db8e6840 x27: 
0000000000000000
[  990.628947] x26: 0000000000000000 x25: ffff8000880af830 x24: 
0000004000000000
[  990.631024] x23: 0000000000000002 x22: 000001bfdbfa8000 x21: 
ffffa6a41c002e48
[  990.632278] x20: 0000000000000001 x19: ffff8000880af808 x18: 
0000000000000000
[  990.634189] x17: 0000000000000000 x16: ffffa6a495ee6cd0 x15: 
0000000000000000
[  990.635608] x14: 00000000000003d4 x13: 00000000fa83b2da x12: 
0000b236fc95f18c
[  990.637854] x11: ffffa6a4978b9c08 x10: 0000000000001da0 x9 : 
ffffa6a41c1a2a44
[  990.639181] x8 : ffff8000880af5c8 x7 : 0000000001000000 x6 : 
0000000000000000
[  990.642370] x5 : 0000000000000004 x4 : 000001bfdbfa8000 x3 : 
0000000000000000
[  990.644505] x2 : 0000004004030000 x1 : 0000004004030000 x0 : 
0000004004030000
[  990.645493] Call trace:
[  990.645841]  iomap_iter_done+0x164/0x190 (P)
[  990.646377]  iomap_iter+0x174/0x230
[  990.647550]  iomap_fiemap+0x154/0x1d8
[  990.648052]  ext4_fiemap+0x110/0x140 [ext4]
[  990.649061]  do_vfs_ioctl+0x4b8/0xbc0
[  990.649704]  __arm64_sys_ioctl+0x8c/0x120
[  990.652141]  invoke_syscall+0x6c/0x100
[  990.653001]  el0_svc_common.constprop.0+0x48/0xf0
[  990.653909]  do_el0_svc+0x24/0x38
[  990.654332]  el0_svc+0x38/0x120
[  990.654736]  el0t_64_sync_handler+0x10c/0x138
[  990.655295]  el0t_64_sync+0x198/0x1a0
[  990.655761] ---[ end trace 0000000000000000 ]---

Considering btrfs is not involved yet, and the call trace is from
iomap, I guess there is something wrong with ext4's ext3 support?

The involved ext4 kernel configs are the following:

# CONFIG_EXT2_FS is not set
# CONFIG_EXT3_FS is not set
CONFIG_EXT4_FS=m
CONFIG_EXT4_USE_FOR_EXT2=y
CONFIG_EXT4_FS_POSIX_ACL=y
CONFIG_EXT4_FS_SECURITY=y
# CONFIG_EXT4_DEBUG is not set
CONFIG_JBD2=m
# CONFIG_JBD2_DEBUG is not set

Thanks,
Qu


* Re: Ext4 iomap warning during btrfs/136 (yes, it's from btrfs test cases)
  2025-08-08  7:52 Ext4 iomap warning during btrfs/136 (yes, it's from btrfs test cases) Qu Wenruo
@ 2025-08-08  8:50 ` Qu Wenruo
  2025-08-08 12:16   ` Theodore Ts'o
  0 siblings, 1 reply; 9+ messages in thread
From: Qu Wenruo @ 2025-08-08  8:50 UTC (permalink / raw)
  To: linux-ext4, linux-btrfs, linux-fsdevel@vger.kernel.org



On 2025/8/8 17:22, Qu Wenruo wrote:
> Hi,
> 
> [BACKGROUND]
> Recently I'm testing btrfs with 16KiB block size.
> 
> Currently btrfs is artificially limiting subpage block size to 4K.
> But there is a simple patch to change it to support all block sizes <= 
> page size in my branch:
> 
> https://github.com/adam900710/linux/tree/larger_bs_support
> 
> [IOMAP WARNING]
> And I'm running into a very weird kernel warning at btrfs/136, with 16K 
> block size and 64K page size.
> 
> The problem is, it happens with ext3 (using the ext4 module) with a
> 16K block size, and no btrfs is involved yet.

The following reproducer is much smaller, and of course, btrfs is not 
involved:

---
#!/bin/bash

dev="/dev/test/scratch1"
mnt="/mnt/btrfs/"
fsstress="/home/adam/xfstests-dev/ltp/fsstress"

mkfs.ext3 -F -b 16k "$dev"
mount "$dev" "$mnt"

"$fsstress" -w -n 128 -d "$mnt"
umount "$dev"
---

And ext4 is fine, so it's ext3 mode causing the problem.

Furthermore, after the kernel warning, fsstress never finishes, and
there is no blocked process either.

Thanks,
Qu

> 
> The test case btrfs/136 creates an ext3 fs first, using the same block
> size as the btrfs on TEST_DEV (so it's 16K), then populates the fs.
> 
> The hang happens at the ext3 populating part, with the following kernel 
> warning:
> 
> [  989.664270] run fstests btrfs/136 at 2025-08-08 16:57:37
> [  990.551395] EXT4-fs (dm-3): mounting ext3 file system using the ext4 
> subsystem
> [  990.554980] EXT4-fs (dm-3): mounted filesystem d90f4325- 
> e6a6-4787-9da8-150ece277a94 r/w with ordered data mode. Quota mode: none.
> [  990.581540] ------------[ cut here ]------------
> [  990.581551] WARNING: CPU: 3 PID: 434101 at fs/iomap/iter.c:34 
> iomap_iter_done+0x148/0x190
> [  990.583497] Modules linked in: dm_flakey nls_ascii nls_cp437 vfat fat 
> btrfs polyval_ce ghash_ce rtc_efi processor xor xor_neon raid6_pq 
> zstd_compress fuse loop nfnetlink qemu_fw_cfg ext4 crc16 mbcache jbd2 
> dm_mod xhci_pci xhci_hcd virtio_net virtio_scsi net_failover failover 
> virtio_console virtio_balloon virtio_blk virtio_mmio
> [  990.587247] CPU: 3 UID: 0 PID: 434101 Comm: fsstress Not tainted 
> 6.16.0-rc7-custom+ #128 PREEMPT(voluntary)
> [  990.588525] Hardware name: QEMU KVM Virtual Machine, BIOS unknown 
> 2/2/2022
> [  990.589414] pstate: 60400005 (nZCv daif +PAN -UAO -TCO -DIT -SSBS 
> BTYPE=--)
> [  990.590314] pc : iomap_iter_done+0x148/0x190
> [  990.590874] lr : iomap_iter+0x174/0x230
> [  990.591370] sp : ffff8000880af740
> [  990.591800] x29: ffff8000880af740 x28: ffff0000db8e6840 x27: 
> 0000000000000000
> [  990.592716] x26: 0000000000000000 x25: ffff8000880af830 x24: 
> 0000004000000000
> [  990.593631] x23: 0000000000000002 x22: 000001bfdbfa8000 x21: 
> ffffa6a41c002e48
> [  990.594549] x20: 0000000000000001 x19: ffff8000880af808 x18: 
> 0000000000000000
> [  990.595464] x17: 0000000000000000 x16: ffffa6a495ee6cd0 x15: 
> 0000000000000000
> [  990.596379] x14: 00000000000003d4 x13: 00000000fa83b2da x12: 
> 0000b236fc95f18c
> [  990.597295] x11: ffffa6a4978b9c08 x10: 0000000000001da0 x9 : 
> ffffa6a41c1a2a44
> [  990.598210] x8 : ffff8000880af5c8 x7 : 0000000001000000 x6 : 
> 0000000000000000
> [  990.599125] x5 : 0000000000000004 x4 : 000001bfdbfa8000 x3 : 
> 0000000000000000
> [  990.600040] x2 : 0000000000000000 x1 : 0000004004030000 x0 : 
> 0000000000000000
> [  990.600955] Call trace:
> [  990.601273]  iomap_iter_done+0x148/0x190 (P)
> [  990.601829]  iomap_iter+0x174/0x230
> [  990.602280]  iomap_fiemap+0x154/0x1d8
> [  990.602751]  ext4_fiemap+0x110/0x140 [ext4]
> [  990.603350]  do_vfs_ioctl+0x4b8/0xbc0
> [  990.603831]  __arm64_sys_ioctl+0x8c/0x120
> [  990.604346]  invoke_syscall+0x6c/0x100
> [  990.604836]  el0_svc_common.constprop.0+0x48/0xf0
> [  990.605444]  do_el0_svc+0x24/0x38
> [  990.605875]  el0_svc+0x38/0x120
> [  990.606283]  el0t_64_sync_handler+0x10c/0x138
> [  990.606846]  el0t_64_sync+0x198/0x1a0
> [  990.607319] ---[ end trace 0000000000000000 ]---
> [  990.608042] ------------[ cut here ]------------
> [  990.608047] WARNING: CPU: 3 PID: 434101 at fs/iomap/iter.c:35 
> iomap_iter_done+0x164/0x190
> [  990.610842] Modules linked in: dm_flakey nls_ascii nls_cp437 vfat fat 
> btrfs polyval_ce ghash_ce rtc_efi processor xor xor_neon raid6_pq 
> zstd_compress fuse loop nfnetlink qemu_fw_cfg ext4 crc16 mbcache jbd2 
> dm_mod xhci_pci xhci_hcd virtio_net virtio_scsi net_failover failover 
> virtio_console virtio_balloon virtio_blk virtio_mmio
> [  990.619189] CPU: 3 UID: 0 PID: 434101 Comm: fsstress Tainted: G 
>   W           6.16.0-rc7-custom+ #128 PREEMPT(voluntary)
> [  990.620876] Tainted: [W]=WARN
> [  990.621458] Hardware name: QEMU KVM Virtual Machine, BIOS unknown 
> 2/2/2022
> [  990.622507] pstate: 60400005 (nZCv daif +PAN -UAO -TCO -DIT -SSBS 
> BTYPE=--)
> [  990.623911] pc : iomap_iter_done+0x164/0x190
> [  990.624936] lr : iomap_iter+0x174/0x230
> [  990.626747] sp : ffff8000880af740
> [  990.627404] x29: ffff8000880af740 x28: ffff0000db8e6840 x27: 
> 0000000000000000
> [  990.628947] x26: 0000000000000000 x25: ffff8000880af830 x24: 
> 0000004000000000
> [  990.631024] x23: 0000000000000002 x22: 000001bfdbfa8000 x21: 
> ffffa6a41c002e48
> [  990.632278] x20: 0000000000000001 x19: ffff8000880af808 x18: 
> 0000000000000000
> [  990.634189] x17: 0000000000000000 x16: ffffa6a495ee6cd0 x15: 
> 0000000000000000
> [  990.635608] x14: 00000000000003d4 x13: 00000000fa83b2da x12: 
> 0000b236fc95f18c
> [  990.637854] x11: ffffa6a4978b9c08 x10: 0000000000001da0 x9 : 
> ffffa6a41c1a2a44
> [  990.639181] x8 : ffff8000880af5c8 x7 : 0000000001000000 x6 : 
> 0000000000000000
> [  990.642370] x5 : 0000000000000004 x4 : 000001bfdbfa8000 x3 : 
> 0000000000000000
> [  990.644505] x2 : 0000004004030000 x1 : 0000004004030000 x0 : 
> 0000004004030000
> [  990.645493] Call trace:
> [  990.645841]  iomap_iter_done+0x164/0x190 (P)
> [  990.646377]  iomap_iter+0x174/0x230
> [  990.647550]  iomap_fiemap+0x154/0x1d8
> [  990.648052]  ext4_fiemap+0x110/0x140 [ext4]
> [  990.649061]  do_vfs_ioctl+0x4b8/0xbc0
> [  990.649704]  __arm64_sys_ioctl+0x8c/0x120
> [  990.652141]  invoke_syscall+0x6c/0x100
> [  990.653001]  el0_svc_common.constprop.0+0x48/0xf0
> [  990.653909]  do_el0_svc+0x24/0x38
> [  990.654332]  el0_svc+0x38/0x120
> [  990.654736]  el0t_64_sync_handler+0x10c/0x138
> [  990.655295]  el0t_64_sync+0x198/0x1a0
> [  990.655761] ---[ end trace 0000000000000000 ]---
> 
> Considering it's not yet btrfs, and the call trace is from iomap, I 
> guess there is something wrong with ext4's ext3 support?
> 
> The involved ext4 kernel configs are the following:
> 
> # CONFIG_EXT2_FS is not set
> # CONFIG_EXT3_FS is not set
> CONFIG_EXT4_FS=m
> CONFIG_EXT4_USE_FOR_EXT2=y
> CONFIG_EXT4_FS_POSIX_ACL=y
> CONFIG_EXT4_FS_SECURITY=y
> # CONFIG_EXT4_DEBUG is not set
> CONFIG_JBD2=m
> # CONFIG_JBD2_DEBUG is not set
> 
> Thanks,
> Qu
> 



* Re: Ext4 iomap warning during btrfs/136 (yes, it's from btrfs test cases)
  2025-08-08  8:50 ` Qu Wenruo
@ 2025-08-08 12:16   ` Theodore Ts'o
  2025-08-08 22:11     ` Qu Wenruo
  0 siblings, 1 reply; 9+ messages in thread
From: Theodore Ts'o @ 2025-08-08 12:16 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: linux-ext4, linux-btrfs, linux-fsdevel@vger.kernel.org

On Fri, Aug 08, 2025 at 06:20:56PM +0930, Qu Wenruo wrote:
> 
> On 2025/8/8 17:22, Qu Wenruo wrote:
> > Hi,
> > 
> > [BACKGROUND]
> > Recently I'm testing btrfs with 16KiB block size.
> > 
> > Currently btrfs is artificially limiting subpage block size to 4K.
> > But there is a simple patch to change it to support all block sizes <=
> > page size in my branch:
> > 
> > https://github.com/adam900710/linux/tree/larger_bs_support
> > 
> > [IOMAP WARNING]
> > And I'm running into a very weird kernel warning at btrfs/136, with 16K
> > block size and 64K page size.
> > 
> > The problem is, it happens with ext3 (using the ext4 module) with a
> > 16K block size, and no btrfs is involved yet.


Thanks for the bug report!  This looks like an issue with using
indirect block-mapped files with a 16k block size.  I tried your
reproducer using a 1k block size on an x86_64 system, which is how I
test problems caused by block size < page size.  It didn't reproduce
there, so it looks like it really needs a 16k block size.

Can you say something about what system you were running your testing
on --- was it an arm64 system, or a powerpc 64 system (the two most
common systems with page size > 4k)?  (I assume you're not trying to
do this on an Itanic.  :-)   And was the page size 16k or 64k?

Thanks,

					- Ted


* Re: Ext4 iomap warning during btrfs/136 (yes, it's from btrfs test cases)
  2025-08-08 12:16   ` Theodore Ts'o
@ 2025-08-08 22:11     ` Qu Wenruo
  2025-08-09  9:09       ` Zhang Yi
  0 siblings, 1 reply; 9+ messages in thread
From: Qu Wenruo @ 2025-08-08 22:11 UTC (permalink / raw)
  To: Theodore Ts'o, Qu Wenruo
  Cc: linux-ext4, linux-btrfs, linux-fsdevel@vger.kernel.org



On 2025/8/8 21:46, Theodore Ts'o wrote:
> On Fri, Aug 08, 2025 at 06:20:56PM +0930, Qu Wenruo wrote:
>>
>> On 2025/8/8 17:22, Qu Wenruo wrote:
>>> Hi,
>>>
>>> [BACKGROUND]
>>> Recently I'm testing btrfs with 16KiB block size.
>>>
>>> Currently btrfs is artificially limiting subpage block size to 4K.
>>> But there is a simple patch to change it to support all block sizes <=
>>> page size in my branch:
>>>
>>> https://github.com/adam900710/linux/tree/larger_bs_support
>>>
>>> [IOMAP WARNING]
>>> And I'm running into a very weird kernel warning at btrfs/136, with 16K
>>> block size and 64K page size.
>>>
>>> The problem is, it happens with ext3 (using the ext4 module) with a
>>> 16K block size, and no btrfs is involved yet.
> 
> 
> Thanks for the bug report!  This looks like it's an issue with using
> indirect block-mapped file with a 16k block size.  I tried your
> reproducer using a 1k block size on an x86_64 system, which is how I
> test problem caused by the block size < page size.  It didn't
> reproduce there, so it looks like it really needs a 16k block size.
> 
> Can you say something about what system were you running your testing
> on --- was it an arm64 system, or a powerpc 64 system (the two most
> common systems with page size > 4k)?  (I assume you're not trying to
> do this on an Itanic.  :-)   And was the page size 16k or 64k?

The architecture is aarch64, the host board is Rock5B (cheap and fast 
enough), the test machine is a VM on that board, with ovmf as the UEFI 
firmware.

The kernel is configured to use a 64K page size, and the *ext3*
filesystem is using a 16K block size.

So far I have tried the following combinations with a 64K page size
and ext3; the results look like this:

- 2K block size
- 4K block size
   All fine

- 8K block size
- 16K block size
   All the same kernel warning and never ending fsstress

- 32K block size
- 64K block size
   All fine

I am as surprised as you that not all subpage block sizes have
problems; just 2 of the less common combinations failed.

And the most common ones (4K, and block size == page size) are all fine.

Finally, when using ext4 instead of ext3, all the combinations above
are fine again.

So I have run out of ideas as to why only these 2 block sizes fail...

Thanks,
Qu

> 
> Thanks,
> 
> 					- Ted
> 



* Re: Ext4 iomap warning during btrfs/136 (yes, it's from btrfs test cases)
  2025-08-08 22:11     ` Qu Wenruo
@ 2025-08-09  9:09       ` Zhang Yi
  2025-08-09 22:06         ` Qu Wenruo
  0 siblings, 1 reply; 9+ messages in thread
From: Zhang Yi @ 2025-08-09  9:09 UTC (permalink / raw)
  To: Qu Wenruo, Theodore Ts'o, Qu Wenruo
  Cc: linux-ext4, linux-btrfs, linux-fsdevel@vger.kernel.org

On 2025/8/9 6:11, Qu Wenruo wrote:
> On 2025/8/8 21:46, Theodore Ts'o wrote:
>> On Fri, Aug 08, 2025 at 06:20:56PM +0930, Qu Wenruo wrote:
>>>
>>> On 2025/8/8 17:22, Qu Wenruo wrote:
>>>> Hi,
>>>>
>>>> [BACKGROUND]
>>>> Recently I'm testing btrfs with 16KiB block size.
>>>>
>>>> Currently btrfs is artificially limiting subpage block size to 4K.
>>>> But there is a simple patch to change it to support all block sizes <=
>>>> page size in my branch:
>>>>
>>>> https://github.com/adam900710/linux/tree/larger_bs_support
>>>>
>>>> [IOMAP WARNING]
>>>> And I'm running into a very weird kernel warning at btrfs/136, with 16K
>>>> block size and 64K page size.
>>>>
>>>> The problem is, it happens with ext3 (using the ext4 module) with a
>>>> 16K block size, and no btrfs is involved yet.
>>
>>
>> Thanks for the bug report!  This looks like it's an issue with using
>> indirect block-mapped file with a 16k block size.  I tried your
>> reproducer using a 1k block size on an x86_64 system, which is how I
>> test problem caused by the block size < page size.  It didn't
>> reproduce there, so it looks like it really needs a 16k block size.
>>
>> Can you say something about what system were you running your testing
>> on --- was it an arm64 system, or a powerpc 64 system (the two most
>> common systems with page size > 4k)?  (I assume you're not trying to
>> do this on an Itanic.  :-)   And was the page size 16k or 64k?
> 
> The architecture is aarch64, the host board is Rock5B (cheap and fast enough), the test machine is a VM on that board, with ovmf as the UEFI firmware.
> 
> The kernel is configured to use 64K page size, the *ext3* system is using 16K block size.
> 
> Currently I tried the following combination with 64K page size and ext3, the result looks like the following
> 
> - 2K block size
> - 4K block size
>   All fine
> 
> - 8K block size
> - 16K block size
>   All the same kernel warning and never ending fsstress
> 
> - 32K block size
> - 64K block size
>   All fine
> 
> I am surprised as you that, not all subpage block size are having problems, just 2 of the less common combinations failed.
> 
> And the most common ones (4K, page size) are all fine.
> 
> Finally, if using ext4 not ext3, all combinations above are fine again.
> 
> So I ran out of ideas why only 2 block sizes fail here...
> 

This issue is caused by an overflow in the calculation of the hole's
length at the fourth level of indirection for non-extent inodes. For a
file system with a 4KB block size, the calculation will not overflow.
For a 64KB block size, the queried position will never reach the fourth
level, so this issue only occurs on filesystems with an 8KB or 16KB
block size.

Hi, Wenruo, could you try the following fix?

diff --git a/fs/ext4/indirect.c b/fs/ext4/indirect.c
index 7de327fa7b1c..d45124318200 100644
--- a/fs/ext4/indirect.c
+++ b/fs/ext4/indirect.c
@@ -539,7 +539,7 @@ int ext4_ind_map_blocks(handle_t *handle, struct inode *inode,
 	int indirect_blks;
 	int blocks_to_boundary = 0;
 	int depth;
-	int count = 0;
+	u64 count = 0;
 	ext4_fsblk_t first_block = 0;

 	trace_ext4_ind_map_blocks_enter(inode, map->m_lblk, map->m_len, flags);
@@ -588,7 +588,7 @@ int ext4_ind_map_blocks(handle_t *handle, struct inode *inode,
 		count++;
 		/* Fill in size of a hole we found */
 		map->m_pblk = 0;
-		map->m_len = min_t(unsigned int, map->m_len, count);
+		map->m_len = umin(map->m_len, count);
 		goto cleanup;
 	}

Thanks,
Yi.




* Re: Ext4 iomap warning during btrfs/136 (yes, it's from btrfs test cases)
  2025-08-09  9:09       ` Zhang Yi
@ 2025-08-09 22:06         ` Qu Wenruo
  2025-08-11 15:49           ` Darrick J. Wong
  0 siblings, 1 reply; 9+ messages in thread
From: Qu Wenruo @ 2025-08-09 22:06 UTC (permalink / raw)
  To: Zhang Yi, Qu Wenruo, Theodore Ts'o
  Cc: linux-ext4, linux-btrfs, linux-fsdevel@vger.kernel.org



On 2025/8/9 18:39, Zhang Yi wrote:
> On 2025/8/9 6:11, Qu Wenruo wrote:
>> On 2025/8/8 21:46, Theodore Ts'o wrote:
>>> On Fri, Aug 08, 2025 at 06:20:56PM +0930, Qu Wenruo wrote:
>>>>
>>>> On 2025/8/8 17:22, Qu Wenruo wrote:
>>>>> Hi,
>>>>>
>>>>> [BACKGROUND]
>>>>> Recently I'm testing btrfs with 16KiB block size.
>>>>>
>>>>> Currently btrfs is artificially limiting subpage block size to 4K.
>>>>> But there is a simple patch to change it to support all block sizes <=
>>>>> page size in my branch:
>>>>>
>>>>> https://github.com/adam900710/linux/tree/larger_bs_support
>>>>>
>>>>> [IOMAP WARNING]
>>>>> And I'm running into a very weird kernel warning at btrfs/136, with 16K
>>>>> block size and 64K page size.
>>>>>
>>>>> The problem is, it happens with ext3 (using the ext4 module) with a
>>>>> 16K block size, and no btrfs is involved yet.
>>>
>>>
>>> Thanks for the bug report!  This looks like it's an issue with using
>>> indirect block-mapped file with a 16k block size.  I tried your
>>> reproducer using a 1k block size on an x86_64 system, which is how I
>>> test problem caused by the block size < page size.  It didn't
>>> reproduce there, so it looks like it really needs a 16k block size.
>>>
>>> Can you say something about what system were you running your testing
>>> on --- was it an arm64 system, or a powerpc 64 system (the two most
>>> common systems with page size > 4k)?  (I assume you're not trying to
>>> do this on an Itanic.  :-)   And was the page size 16k or 64k?
>>
>> The architecture is aarch64, the host board is Rock5B (cheap and fast enough), the test machine is a VM on that board, with ovmf as the UEFI firmware.
>>
>> The kernel is configured to use 64K page size, the *ext3* system is using 16K block size.
>>
>> Currently I tried the following combination with 64K page size and ext3, the result looks like the following
>>
>> - 2K block size
>> - 4K block size
>>    All fine
>>
>> - 8K block size
>> - 16K block size
>>    All the same kernel warning and never ending fsstress
>>
>> - 32K block size
>> - 64K block size
>>    All fine
>>
>> I am surprised as you that, not all subpage block size are having problems, just 2 of the less common combinations failed.
>>
>> And the most common ones (4K, page size) are all fine.
>>
>> Finally, if using ext4 not ext3, all combinations above are fine again.
>>
>> So I ran out of ideas why only 2 block sizes fail here...
>>
> 
> This issue is caused by an overflow in the calculation of the hole's
> length at the fourth level of indirection for non-extent inodes. For a
> file system with a 4KB block size, the calculation will not overflow.
> For a 64KB block size, the queried position will never reach the fourth
> level, so this issue only occurs on filesystems with an 8KB or 16KB
> block size.
> 
> Hi, Wenruo, could you try the following fix?
> 
> diff --git a/fs/ext4/indirect.c b/fs/ext4/indirect.c
> index 7de327fa7b1c..d45124318200 100644
> --- a/fs/ext4/indirect.c
> +++ b/fs/ext4/indirect.c
> @@ -539,7 +539,7 @@ int ext4_ind_map_blocks(handle_t *handle, struct inode *inode,
>   	int indirect_blks;
>   	int blocks_to_boundary = 0;
>   	int depth;
> -	int count = 0;
> +	u64 count = 0;
>   	ext4_fsblk_t first_block = 0;
> 
>   	trace_ext4_ind_map_blocks_enter(inode, map->m_lblk, map->m_len, flags);
> @@ -588,7 +588,7 @@ int ext4_ind_map_blocks(handle_t *handle, struct inode *inode,
>   		count++;
>   		/* Fill in size of a hole we found */
>   		map->m_pblk = 0;
> -		map->m_len = min_t(unsigned int, map->m_len, count);
> +		map->m_len = umin(map->m_len, count);
>   		goto cleanup;
>   	}

It indeed solves the problem.

Tested-by: Qu Wenruo <wqu@suse.com>

Thanks,
Qu

> Thanks,
> Yi.
> 



* Re: Ext4 iomap warning during btrfs/136 (yes, it's from btrfs test cases)
  2025-08-09 22:06         ` Qu Wenruo
@ 2025-08-11 15:49           ` Darrick J. Wong
  2025-08-11 22:14             ` Qu Wenruo
  0 siblings, 1 reply; 9+ messages in thread
From: Darrick J. Wong @ 2025-08-11 15:49 UTC (permalink / raw)
  To: Qu Wenruo
  Cc: Zhang Yi, Qu Wenruo, Theodore Ts'o, linux-ext4, linux-btrfs,
	linux-fsdevel@vger.kernel.org

On Sun, Aug 10, 2025 at 07:36:48AM +0930, Qu Wenruo wrote:
> 
> 
> On 2025/8/9 18:39, Zhang Yi wrote:
> > On 2025/8/9 6:11, Qu Wenruo wrote:
> > > On 2025/8/8 21:46, Theodore Ts'o wrote:
> > > > On Fri, Aug 08, 2025 at 06:20:56PM +0930, Qu Wenruo wrote:
> > > > > 
> > > > > On 2025/8/8 17:22, Qu Wenruo wrote:
> > > > > > Hi,
> > > > > > 
> > > > > > [BACKGROUND]
> > > > > > Recently I'm testing btrfs with 16KiB block size.
> > > > > > 
> > > > > > Currently btrfs is artificially limiting subpage block size to 4K.
> > > > > > But there is a simple patch to change it to support all block sizes <=
> > > > > > page size in my branch:
> > > > > > 
> > > > > > https://github.com/adam900710/linux/tree/larger_bs_support
> > > > > > 
> > > > > > [IOMAP WARNING]
> > > > > > And I'm running into a very weird kernel warning at btrfs/136, with 16K
> > > > > > block size and 64K page size.
> > > > > > 
> > > > > > The problem is, it happens with ext3 (using the ext4 module) with a
> > > > > > 16K block size, and no btrfs is involved yet.
> > > > 
> > > > 
> > > > Thanks for the bug report!  This looks like it's an issue with using
> > > > indirect block-mapped file with a 16k block size.  I tried your
> > > > reproducer using a 1k block size on an x86_64 system, which is how I
> > > > test problem caused by the block size < page size.  It didn't
> > > > reproduce there, so it looks like it really needs a 16k block size.
> > > > 
> > > > Can you say something about what system were you running your testing
> > > > on --- was it an arm64 system, or a powerpc 64 system (the two most
> > > > common systems with page size > 4k)?  (I assume you're not trying to
> > > > do this on an Itanic.  :-)   And was the page size 16k or 64k?
> > > 
> > > The architecture is aarch64, the host board is Rock5B (cheap and fast enough), the test machine is a VM on that board, with ovmf as the UEFI firmware.
> > > 
> > > The kernel is configured to use 64K page size, the *ext3* system is using 16K block size.
> > > 
> > > Currently I tried the following combination with 64K page size and ext3, the result looks like the following
> > > 
> > > - 2K block size
> > > - 4K block size
> > >    All fine
> > > 
> > > - 8K block size
> > > - 16K block size
> > >    All the same kernel warning and never ending fsstress
> > > 
> > > - 32K block size
> > > - 64K block size
> > >    All fine
> > > 
> > > I am surprised as you that, not all subpage block size are having problems, just 2 of the less common combinations failed.
> > > 
> > > And the most common ones (4K, page size) are all fine.
> > > 
> > > Finally, if using ext4 not ext3, all combinations above are fine again.
> > > 
> > > So I ran out of ideas why only 2 block sizes fail here...
> > > 
> > 
> > This issue is caused by an overflow in the calculation of the hole's
> > length at the fourth level of indirection for non-extent inodes. For a
> > file system with a 4KB block size, the calculation will not overflow.
> > For a 64KB block size, the queried position will never reach the fourth
> > level, so this issue only occurs on filesystems with an 8KB or 16KB
> > block size.
> > 
> > Hi, Wenruo, could you try the following fix?
> > 
> > diff --git a/fs/ext4/indirect.c b/fs/ext4/indirect.c
> > index 7de327fa7b1c..d45124318200 100644
> > --- a/fs/ext4/indirect.c
> > +++ b/fs/ext4/indirect.c
> > @@ -539,7 +539,7 @@ int ext4_ind_map_blocks(handle_t *handle, struct inode *inode,
> >   	int indirect_blks;
> >   	int blocks_to_boundary = 0;
> >   	int depth;
> > -	int count = 0;
> > +	u64 count = 0;
> >   	ext4_fsblk_t first_block = 0;
> > 
> >   	trace_ext4_ind_map_blocks_enter(inode, map->m_lblk, map->m_len, flags);
> > @@ -588,7 +588,7 @@ int ext4_ind_map_blocks(handle_t *handle, struct inode *inode,
> >   		count++;
> >   		/* Fill in size of a hole we found */
> >   		map->m_pblk = 0;
> > -		map->m_len = min_t(unsigned int, map->m_len, count);
> > +		map->m_len = umin(map->m_len, count);
> >   		goto cleanup;
> >   	}
> 
> It indeed solves the problem.
> 
> Tested-by: Qu Wenruo <wqu@suse.com>

Can we get the relevant chunks of this test turned into a tests/ext4/
fstest so that the ext4 developers have a regression test that doesn't
require setting up btrfs, please?

--D

> Thanks,
> Qu
> 
> > Thanks,
> > Yi.
> > 
> 
> 


* Re: Ext4 iomap warning during btrfs/136 (yes, it's from btrfs test cases)
  2025-08-11 15:49           ` Darrick J. Wong
@ 2025-08-11 22:14             ` Qu Wenruo
  2025-08-12 16:48               ` Darrick J. Wong
  0 siblings, 1 reply; 9+ messages in thread
From: Qu Wenruo @ 2025-08-11 22:14 UTC (permalink / raw)
  To: Darrick J. Wong, Qu Wenruo
  Cc: Zhang Yi, Theodore Ts'o, linux-ext4, linux-btrfs,
	linux-fsdevel@vger.kernel.org



On 2025/8/12 01:19, Darrick J. Wong wrote:
> On Sun, Aug 10, 2025 at 07:36:48AM +0930, Qu Wenruo wrote:
>>
>>
>> On 2025/8/9 18:39, Zhang Yi wrote:
>>> On 2025/8/9 6:11, Qu Wenruo wrote:
>>>> On 2025/8/8 21:46, Theodore Ts'o wrote:
>>>>> On Fri, Aug 08, 2025 at 06:20:56PM +0930, Qu Wenruo wrote:
>>>>>>
>>>>>> On 2025/8/8 17:22, Qu Wenruo wrote:
>>>>>>> Hi,
>>>>>>>
>>>>>>> [BACKGROUND]
>>>>>>> Recently I'm testing btrfs with 16KiB block size.
>>>>>>>
>>>>>>> Currently btrfs is artificially limiting subpage block size to 4K.
>>>>>>> But there is a simple patch to change it to support all block sizes <=
>>>>>>> page size in my branch:
>>>>>>>
>>>>>>> https://github.com/adam900710/linux/tree/larger_bs_support
>>>>>>>
>>>>>>> [IOMAP WARNING]
>>>>>>> And I'm running into a very weird kernel warning at btrfs/136, with 16K
>>>>>>> block size and 64K page size.
>>>>>>>
>>>>>>> The problem is that it happens with ext3 (using the ext4 module) with
>>>>>>> 16K block size, and no btrfs is involved yet.
>>>>>
>>>>>
>>>>> Thanks for the bug report!  This looks like it's an issue with using
>>>>> indirect block-mapped files with a 16k block size.  I tried your
>>>>> reproducer using a 1k block size on an x86_64 system, which is how I
>>>>> test problems caused by block size < page size.  It didn't
>>>>> reproduce there, so it looks like it really needs a 16k block size.
>>>>>
>>>>> Can you say something about what system you were running your testing
>>>>> on --- was it an arm64 system, or a powerpc 64 system (the two most
>>>>> common systems with page size > 4k)?  (I assume you're not trying to
>>>>> do this on an Itanic.  :-)   And was the page size 16k or 64k?
>>>>
>>>> The architecture is aarch64, the host board is Rock5B (cheap and fast enough), the test machine is a VM on that board, with ovmf as the UEFI firmware.
>>>>
>>>> The kernel is configured to use 64K page size, the *ext3* system is using 16K block size.
>>>>
>>>> Currently I have tried the following combinations with 64K page size and ext3; the results look like the following:
>>>>
>>>> - 2K block size
>>>> - 4K block size
>>>>     All fine
>>>>
>>>> - 8K block size
>>>> - 16K block size
>>>>     All the same kernel warning and never ending fsstress
>>>>
>>>> - 32K block size
>>>> - 64K block size
>>>>     All fine
>>>>
>>>> I am as surprised as you that not all subpage block sizes have problems; just 2 of the less common combinations failed.
>>>>
>>>> And the most common ones (4K and page size) are all fine.
>>>>
>>>> Finally, when using ext4 instead of ext3, all the combinations above are fine again.
>>>>
>>>> So I have run out of ideas as to why only 2 block sizes fail here...
>>>>
>>>
>>> This issue is caused by an overflow in the calculation of the hole's
>>> length at the fourth level of depth for non-extent inodes. For a file
>>> system with a 4KB block size, the calculation will not overflow. For a
>>> 64KB block size, the queried position will not reach the fourth level,
>>> so this issue only occurs on filesystems with an 8KB or 16KB block size.
>>>
>>> Hi, Wenruo, could you try the following fix?
>>>
>>> diff --git a/fs/ext4/indirect.c b/fs/ext4/indirect.c
>>> index 7de327fa7b1c..d45124318200 100644
>>> --- a/fs/ext4/indirect.c
>>> +++ b/fs/ext4/indirect.c
>>> @@ -539,7 +539,7 @@ int ext4_ind_map_blocks(handle_t *handle, struct inode *inode,
>>>    	int indirect_blks;
>>>    	int blocks_to_boundary = 0;
>>>    	int depth;
>>> -	int count = 0;
>>> +	u64 count = 0;
>>>    	ext4_fsblk_t first_block = 0;
>>>
>>>    	trace_ext4_ind_map_blocks_enter(inode, map->m_lblk, map->m_len, flags);
>>> @@ -588,7 +588,7 @@ int ext4_ind_map_blocks(handle_t *handle, struct inode *inode,
>>>    		count++;
>>>    		/* Fill in size of a hole we found */
>>>    		map->m_pblk = 0;
>>> -		map->m_len = min_t(unsigned int, map->m_len, count);
>>> +		map->m_len = umin(map->m_len, count);
>>>    		goto cleanup;
>>>    	}
>>
>> It indeed solves the problem.
>>
>> Tested-by: Qu Wenruo <wqu@suse.com>
> 
> Can we get the relevant chunks of this test turned into a tests/ext4/
> fstest so that the ext4 developers have a regression test that doesn't
> require setting up btrfs, please?

Sure, although I can send out an ext4-specific test case for it, I'm 
definitely not the best one to explain why the problem happens.

Thus I believe Zhang Yi would be the best one to send the test case.



Another thing is, any ext3 run with 16K block size (that is, if the 
system supports it) should trigger it with the existing test cases.

The biggest challenge is to get a system supporting 16K block size (aka 
page size >= 16K), so there is a high chance that for most people the 
new test case will mostly be NOTRUN.

Thanks,
Qu

> 
> --D
> 
>> Thanks,
>> Qu
>>
>>> Thanks,
>>> Yi.
>>>
>>
>>



* Re: Ext4 iomap warning during btrfs/136 (yes, it's from btrfs test cases)
  2025-08-11 22:14             ` Qu Wenruo
@ 2025-08-12 16:48               ` Darrick J. Wong
  0 siblings, 0 replies; 9+ messages in thread
From: Darrick J. Wong @ 2025-08-12 16:48 UTC (permalink / raw)
  To: Qu Wenruo
  Cc: Qu Wenruo, Zhang Yi, Theodore Ts'o, linux-ext4, linux-btrfs,
	linux-fsdevel@vger.kernel.org

On Tue, Aug 12, 2025 at 07:44:09AM +0930, Qu Wenruo wrote:
> 
> 
> On 2025/8/12 01:19, Darrick J. Wong wrote:
> > On Sun, Aug 10, 2025 at 07:36:48AM +0930, Qu Wenruo wrote:
> > > 
> > > 
> > > On 2025/8/9 18:39, Zhang Yi wrote:
> > > > On 2025/8/9 6:11, Qu Wenruo wrote:
> > > > > On 2025/8/8 21:46, Theodore Ts'o wrote:
> > > > > > On Fri, Aug 08, 2025 at 06:20:56PM +0930, Qu Wenruo wrote:
> > > > > > > 
> > > > > > > On 2025/8/8 17:22, Qu Wenruo wrote:
> > > > > > > > Hi,
> > > > > > > > 
> > > > > > > > [BACKGROUND]
> > > > > > > > Recently I'm testing btrfs with 16KiB block size.
> > > > > > > > 
> > > > > > > > Currently btrfs is artificially limiting subpage block size to 4K.
> > > > > > > > But there is a simple patch to change it to support all block sizes <=
> > > > > > > > page size in my branch:
> > > > > > > > 
> > > > > > > > https://github.com/adam900710/linux/tree/larger_bs_support
> > > > > > > > 
> > > > > > > > [IOMAP WARNING]
> > > > > > > > And I'm running into a very weird kernel warning at btrfs/136, with 16K
> > > > > > > > block size and 64K page size.
> > > > > > > > 
> > > > > > > > The problem is that it happens with ext3 (using the ext4 module) with
> > > > > > > > 16K block size, and no btrfs is involved yet.
> > > > > > 
> > > > > > 
> > > > > > Thanks for the bug report!  This looks like it's an issue with using
> > > > > > indirect block-mapped files with a 16k block size.  I tried your
> > > > > > reproducer using a 1k block size on an x86_64 system, which is how I
> > > > > > test problems caused by block size < page size.  It didn't
> > > > > > reproduce there, so it looks like it really needs a 16k block size.
> > > > > > 
> > > > > > Can you say something about what system you were running your testing
> > > > > > on --- was it an arm64 system, or a powerpc 64 system (the two most
> > > > > > common systems with page size > 4k)?  (I assume you're not trying to
> > > > > > do this on an Itanic.  :-)   And was the page size 16k or 64k?
> > > > > 
> > > > > The architecture is aarch64, the host board is Rock5B (cheap and fast enough), the test machine is a VM on that board, with ovmf as the UEFI firmware.
> > > > > 
> > > > > The kernel is configured to use 64K page size, the *ext3* system is using 16K block size.
> > > > > 
> > > > > Currently I have tried the following combinations with 64K page size and ext3; the results look like the following:
> > > > > 
> > > > > - 2K block size
> > > > > - 4K block size
> > > > >     All fine
> > > > > 
> > > > > - 8K block size
> > > > > - 16K block size
> > > > >     All the same kernel warning and never ending fsstress
> > > > > 
> > > > > - 32K block size
> > > > > - 64K block size
> > > > >     All fine
> > > > > 
> > > > > I am as surprised as you that not all subpage block sizes have problems; just 2 of the less common combinations failed.
> > > > > 
> > > > > And the most common ones (4K and page size) are all fine.
> > > > > 
> > > > > Finally, when using ext4 instead of ext3, all the combinations above are fine again.
> > > > > 
> > > > > So I have run out of ideas as to why only 2 block sizes fail here...
> > > > > 
> > > > 
> > > > This issue is caused by an overflow in the calculation of the hole's
> > > > length at the fourth level of depth for non-extent inodes. For a file
> > > > system with a 4KB block size, the calculation will not overflow. For a
> > > > 64KB block size, the queried position will not reach the fourth level,
> > > > so this issue only occurs on filesystems with an 8KB or 16KB block size.
> > > > 
> > > > Hi, Wenruo, could you try the following fix?
> > > > 
> > > > diff --git a/fs/ext4/indirect.c b/fs/ext4/indirect.c
> > > > index 7de327fa7b1c..d45124318200 100644
> > > > --- a/fs/ext4/indirect.c
> > > > +++ b/fs/ext4/indirect.c
> > > > @@ -539,7 +539,7 @@ int ext4_ind_map_blocks(handle_t *handle, struct inode *inode,
> > > >    	int indirect_blks;
> > > >    	int blocks_to_boundary = 0;
> > > >    	int depth;
> > > > -	int count = 0;
> > > > +	u64 count = 0;
> > > >    	ext4_fsblk_t first_block = 0;
> > > > 
> > > >    	trace_ext4_ind_map_blocks_enter(inode, map->m_lblk, map->m_len, flags);
> > > > @@ -588,7 +588,7 @@ int ext4_ind_map_blocks(handle_t *handle, struct inode *inode,
> > > >    		count++;
> > > >    		/* Fill in size of a hole we found */
> > > >    		map->m_pblk = 0;
> > > > -		map->m_len = min_t(unsigned int, map->m_len, count);
> > > > +		map->m_len = umin(map->m_len, count);
> > > >    		goto cleanup;
> > > >    	}
> > > 
> > > It indeed solves the problem.
> > > 
> > > Tested-by: Qu Wenruo <wqu@suse.com>
> > 
> > Can we get the relevant chunks of this test turned into a tests/ext4/
> > fstest so that the ext4 developers have a regression test that doesn't
> > require setting up btrfs, please?
> 
> Sure, although I can send out an ext4-specific test case for it, I'm
> definitely not the best one to explain why the problem happens.
> 
> Thus I believe Zhang Yi would be the best one to send the test case.
> 
> 
> 
> Another thing is, any ext3 run with 16K block size (that is, if the system
> supports it) should trigger it with the existing test cases.
> 
> The biggest challenge is to get a system supporting 16K block size (aka page
> size >= 16K), so there is a high chance that for most people the new test
> case will mostly be NOTRUN.

I'm curious to try out fuse2fs against whatever test gets written, since
it supports large fsblock sizes.

--D

> Thanks,
> Qu
> 
> > 
> > --D
> > 
> > > Thanks,
> > > Qu
> > > 
> > > > Thanks,
> > > > Yi.
> > > > 
> > > 
> > > 
> 
> 


end of thread, other threads:[~2025-08-12 16:48 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-08-08  7:52 Ext4 iomap warning during btrfs/136 (yes, it's from btrfs test cases) Qu Wenruo
2025-08-08  8:50 ` Qu Wenruo
2025-08-08 12:16   ` Theodore Ts'o
2025-08-08 22:11     ` Qu Wenruo
2025-08-09  9:09       ` Zhang Yi
2025-08-09 22:06         ` Qu Wenruo
2025-08-11 15:49           ` Darrick J. Wong
2025-08-11 22:14             ` Qu Wenruo
2025-08-12 16:48               ` Darrick J. Wong
