* mm/truncate.c:669 VM_BUG_ON_FOLIO() - hit on XFS on different tests
@ 2023-12-08 22:39 Luis Chamberlain
From: Luis Chamberlain @ 2023-12-08 22:39 UTC (permalink / raw)
To: Hugh Dickins, Jan Kara, Matthew Wilcox (Oracle),
Christian Brauner, zlang, Pankaj Raghav, Daniel Gomez
Cc: linux-mm, xfs, Linux FS Devel
Commit aa5b9178c0190 ("mm: invalidation check mapping before folio_contains"),
added in v6.6-rc1, moved the VM_BUG_ON_FOLIO() in invalidate_inode_pages2_range()
after the truncation check.
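For context, the ordering that commit changed can be sketched as a small
userspace model. The struct, fields, and helpers below are simplified
stand-ins for illustration only, not the kernel's real definitions:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-in for a kernel folio; not the real definition. */
struct folio {
	void *mapping;       /* owning address_space, NULL once truncated */
	unsigned long index; /* first page index the folio covers */
	unsigned long nr;    /* number of pages, e.g. 4 for an order-2 folio */
};

static bool folio_contains(const struct folio *folio, unsigned long index)
{
	return index >= folio->index && index < folio->index + folio->nr;
}

/*
 * Mirrors the post-aa5b9178c0190 ordering: a folio whose mapping has
 * changed (truncated or reclaimed) is skipped *before* the containment
 * check, so the VM_BUG_ON_FOLIO() should only ever see live folios.
 */
static bool process_folio(struct folio *folio, void *mapping,
			  unsigned long index)
{
	if (folio->mapping != mapping)
		return false;                 /* raced with truncation: skip */
	assert(folio_contains(folio, index)); /* VM_BUG_ON_FOLIO() */
	return true;
}
```

With the mapping check first, a concurrently truncated folio is skipped
quietly; the BUG firing therefore means a locked, still-owned folio did not
cover the index the lookup returned it for.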
We managed to hit this VM_BUG_ON_FOLIO() a few times on v6.6-rc5 with a slew
of fstests runs on kdevops [0], using the XFS configurations defined by
kdevops [1], with the following failure rates annotated:
* xfs_reflink_4k: F:1/278 - one out of 278 times
- generic/451: (trace pasted below after running test over 17 hours)
* xfs_nocrc_4k: F:1/1604 - one out of 1604 times
- generic/451: https://gist.github.com/mcgrof/2c40a14979ceeb7321d2234a525c32a6
To be clear, F:1/1604 means you can run the test in a loop and at about
iteration 1604 you may run into the bug. It would seem Zorro also hit this
with a 64k directory size (mkfs.xfs -n size=65536) on v5.19-rc2, so prior
to Hugh's move of the VM_BUG_ON_FOLIO(), while testing generic/132 [2].
My hope is that this could help those interested in reproducing: spin up
kdevops and run the test in a loop in the same way.
Likewise, if you have a fix to test we can test it as well, but it will
take a while as we want to run the test in a loop over and over many
times.
[0] https://github.com/linux-kdevops/
[1] https://github.com/linux-kdevops/kdevops/blob/master/playbooks/roles/fstests/templates/xfs/xfs.config
[2] https://bugzilla.kernel.org/show_bug.cgi?id=216114
Luis
Nov 05 23:20:54 r451-xfs-reflink-4k unknown: run fstests generic/451 at 2023-11-05 23:20:54
Nov 05 23:21:25 r451-xfs-reflink-4k kernel: XFS (loop16): EXPERIMENTAL online scrub feature in use. Use at your own risk!
Nov 05 23:21:25 r451-xfs-reflink-4k kernel: XFS (loop16): Unmounting Filesystem 2ed74cf8-8238-4817-bc04-d9b3f4f79275
Nov 05 23:21:25 r451-xfs-reflink-4k kernel: XFS (loop16): Mounting V5 Filesystem 2ed74cf8-8238-4817-bc04-d9b3f4f79275
Nov 05 23:21:25 r451-xfs-reflink-4k kernel: XFS (loop16): Ending clean mount
Nov 05 23:21:26 r451-xfs-reflink-4k kernel: kmemleak: 14 new suspected memory leaks (see /sys/kernel/debug/kmemleak)
Nov 05 23:21:26 r451-xfs-reflink-4k kernel: XFS (loop16): Unmounting Filesystem 2ed74cf8-8238-4817-bc04-d9b3f4f79275
Nov 05 23:21:27 r451-xfs-reflink-4k kernel: XFS (loop16): Mounting V5 Filesystem 2ed74cf8-8238-4817-bc04-d9b3f4f79275
Nov 05 23:21:27 r451-xfs-reflink-4k kernel: XFS (loop16): Ending clean mount
Nov 05 23:21:28 r451-xfs-reflink-4k kernel: XFS (loop5): Mounting V5 Filesystem c1814fb4-5f79-4274-96fa-7bf6fabe0ee8
Nov 05 23:21:28 r451-xfs-reflink-4k kernel: XFS (loop5): Ending clean mount
Nov 05 23:21:28 r451-xfs-reflink-4k kernel: XFS (loop5): Unmounting Filesystem c1814fb4-5f79-4274-96fa-7bf6fabe0ee8
Nov 05 23:21:28 r451-xfs-reflink-4k kernel: XFS (loop16): EXPERIMENTAL online scrub feature in use. Use at your own risk!
Nov 05 23:21:28 r451-xfs-reflink-4k kernel: XFS (loop16): Unmounting Filesystem 2ed74cf8-8238-4817-bc04-d9b3f4f79275
Nov 05 23:21:29 r451-xfs-reflink-4k kernel: XFS (loop16): Mounting V5 Filesystem 2ed74cf8-8238-4817-bc04-d9b3f4f79275
Nov 05 23:21:29 r451-xfs-reflink-4k kernel: XFS (loop16): Ending clean mount
Nov 05 23:21:29 r451-xfs-reflink-4k unknown: run fstests generic/451 at 2023-11-05 23:21:29
... over 17 hours later ...
Nov 06 16:06:07 r451-xfs-reflink-4k unknown: run fstests generic/451 at 2023-11-06 16:06:07
Nov 06 16:06:38 r451-xfs-reflink-4k kernel: XFS (loop16): EXPERIMENTAL online scrub feature in use. Use at your own risk!
Nov 06 16:06:38 r451-xfs-reflink-4k kernel: XFS (loop16): Unmounting Filesystem 2ed74cf8-8238-4817-bc04-d9b3f4f79275
Nov 06 16:06:38 r451-xfs-reflink-4k kernel: XFS (loop16): Mounting V5 Filesystem 2ed74cf8-8238-4817-bc04-d9b3f4f79275
Nov 06 16:06:38 r451-xfs-reflink-4k kernel: XFS (loop16): Ending clean mount
Nov 06 16:06:41 r451-xfs-reflink-4k kernel: kmemleak: 9 new suspected memory leaks (see /sys/kernel/debug/kmemleak)
Nov 06 16:06:42 r451-xfs-reflink-4k kernel: XFS (loop16): Unmounting Filesystem 2ed74cf8-8238-4817-bc04-d9b3f4f79275
Nov 06 16:06:42 r451-xfs-reflink-4k kernel: XFS (loop16): Mounting V5 Filesystem 2ed74cf8-8238-4817-bc04-d9b3f4f79275
Nov 06 16:06:42 r451-xfs-reflink-4k kernel: XFS (loop16): Ending clean mount
Nov 06 16:06:47 r451-xfs-reflink-4k kernel: XFS (loop5): Mounting V5 Filesystem 6a017bf9-aa36-474a-af1b-670d8bae14cf
Nov 06 16:06:47 r451-xfs-reflink-4k kernel: XFS (loop5): Ending clean mount
Nov 06 16:06:47 r451-xfs-reflink-4k kernel: XFS (loop5): Unmounting Filesystem 6a017bf9-aa36-474a-af1b-670d8bae14cf
Nov 06 16:06:47 r451-xfs-reflink-4k kernel: XFS (loop16): EXPERIMENTAL online scrub feature in use. Use at your own risk!
Nov 06 16:06:47 r451-xfs-reflink-4k kernel: XFS (loop16): Unmounting Filesystem 2ed74cf8-8238-4817-bc04-d9b3f4f79275
Nov 06 16:06:47 r451-xfs-reflink-4k kernel: XFS (loop16): Mounting V5 Filesystem 2ed74cf8-8238-4817-bc04-d9b3f4f79275
Nov 06 16:06:47 r451-xfs-reflink-4k kernel: XFS (loop16): Ending clean mount
Nov 06 16:06:47 r451-xfs-reflink-4k unknown: run fstests generic/451 at 2023-11-06 16:06:47
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: page:00000000bda16be1 refcount:8 mapcount:0 mapping:00000000258b6ed6 index:0x5c pfn:0x19728
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: head:00000000bda16be1 order:2 entire_mapcount:0 nr_pages_mapped:0 pincount:0
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: memcg:ffff987b9ecec000
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: aops:xfs_address_space_operations [xfs] ino:83 dentry name:"tst-aio-dio-cycle-write.451"
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: flags: 0xffffce0000826d(locked|referenced|uptodate|lru|workingset|private|head|node=0|zone=1|lastcpupid=0x1ffff)
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: page_type: 0xffffffff()
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: raw: 00ffffce0000826d ffffdce9c08b6048 ffff987b9eced120 ffff987b83ef0ab8
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: raw: 000000000000005c ffff987b94c07620 00000007ffffffff ffff987b9ecec000
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: page dumped because: VM_BUG_ON_FOLIO(!folio_contains(folio, indices[i]))
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: ------------[ cut here ]------------
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: kernel BUG at mm/truncate.c:662!
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: invalid opcode: 0000 [#1] PREEMPT SMP NOPTI
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: CPU: 2 PID: 2235189 Comm: kworker/2:0 Not tainted 6.6.0-rc5 #1
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: Workqueue: dio/loop16 iomap_dio_complete_work
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: RIP: 0010:invalidate_inode_pages2_range+0x258/0x4b0
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: Code: e8 ad f9 ff ff 48 8b 00 f6 c4 01 0f 84 ab fe ff ff 4c 3b 6b 20 0f 84 e3 fe ff ff 48 c7 c6 20 b8 43 92 48 89 df e8 c8 74 03 00 <0f> 0b 8b 43>
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: RSP: 0018:ffffb5cd81fa7cd0 EFLAGS: 00010246
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: RAX: 0000000000000048 RBX: ffffdce9c065ca00 RCX: 0000000000000000
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: RDX: 0000000000000000 RSI: 0000000000000027 RDI: 00000000ffffffff
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: RBP: 0000000000000000 R08: 0000000000000000 R09: ffffb5cd81fa7b80
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: R10: 0000000000000003 R11: ffffffff926b5520 R12: ffff987b83ef0ab8
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: R13: ffffffffffffffa4 R14: 0000000000000000 R15: 0000000000000000
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: FS: 0000000000000000(0000) GS:ffff987bfbc80000(0000) knlGS:0000000000000000
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: CR2: 00007ffcdc8e98f0 CR3: 000000005c438003 CR4: 0000000000770ee0
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: PKRU: 55555554
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: Call Trace:
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: <TASK>
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: ? die+0x32/0x80
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: ? do_trap+0xd6/0x100
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: ? invalidate_inode_pages2_range+0x258/0x4b0
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: ? do_error_trap+0x6a/0x90
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: ? invalidate_inode_pages2_range+0x258/0x4b0
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: ? exc_invalid_op+0x4c/0x60
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: ? invalidate_inode_pages2_range+0x258/0x4b0
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: ? asm_exc_invalid_op+0x16/0x20
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: ? invalidate_inode_pages2_range+0x258/0x4b0
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: ? update_load_avg+0x7e/0x780
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: ? update_load_avg+0x7e/0x780
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: ? dequeue_entity+0x133/0x4a0
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: ? _raw_spin_unlock+0x15/0x30
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: kiocb_invalidate_post_direct_write+0x39/0x50
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: iomap_dio_complete+0x12a/0x1a0
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: ? __pfx_aio_complete_rw+0x10/0x10
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: iomap_dio_complete_work+0x17/0x30
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: process_one_work+0x171/0x340
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: worker_thread+0x277/0x3a0
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: ? __pfx_worker_thread+0x10/0x10
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: kthread+0xf0/0x120
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: ? __pfx_kthread+0x10/0x10
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: ret_from_fork+0x2d/0x50
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: ? __pfx_kthread+0x10/0x10
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: ret_from_fork_asm+0x1b/0x30
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: </TASK>
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: Modules linked in: xfs sunrpc nvme_fabrics nvme_core t10_pi crc64_rocksoft_generic crc64_rocksoft crc64 kvm_intel kvm irqbypass crct10dif_pclmul >
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: ---[ end trace 0000000000000000 ]---
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: RIP: 0010:invalidate_inode_pages2_range+0x258/0x4b0
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: Code: e8 ad f9 ff ff 48 8b 00 f6 c4 01 0f 84 ab fe ff ff 4c 3b 6b 20 0f 84 e3 fe ff ff 48 c7 c6 20 b8 43 92 48 89 df e8 c8 74 03 00 <0f> 0b 8b 43>
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: RSP: 0018:ffffb5cd81fa7cd0 EFLAGS: 00010246
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: RAX: 0000000000000048 RBX: ffffdce9c065ca00 RCX: 0000000000000000
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: RDX: 0000000000000000 RSI: 0000000000000027 RDI: 00000000ffffffff
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: RBP: 0000000000000000 R08: 0000000000000000 R09: ffffb5cd81fa7b80
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: R10: 0000000000000003 R11: ffffffff926b5520 R12: ffff987b83ef0ab8
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: R13: ffffffffffffffa4 R14: 0000000000000000 R15: 0000000000000000
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: FS: 0000000000000000(0000) GS:ffff987bfbc80000(0000) knlGS:0000000000000000
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: CR2: 00007fa1980081d8 CR3: 000000010c812006 CR4: 0000000000770ee0
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Nov 06 16:07:16 r451-xfs-reflink-4k kernel: PKRU: 55555554
* Re: mm/truncate.c:669 VM_BUG_ON_FOLIO() - hit on XFS on different tests
@ 2023-12-09  5:52 Matthew Wilcox
From: Matthew Wilcox @ 2023-12-09 5:52 UTC (permalink / raw)
To: Luis Chamberlain
Cc: Hugh Dickins, Jan Kara, Christian Brauner, zlang, Pankaj Raghav,
    Daniel Gomez, linux-mm, xfs, Linux FS Devel
[-- Attachment #1: Type: text/plain, Size: 1527 bytes --]
On Fri, Dec 08, 2023 at 02:39:36PM -0800, Luis Chamberlain wrote:
> Commit aa5b9178c0190 ("mm: invalidation check mapping before folio_contains"),
> added in v6.6-rc1, moved the VM_BUG_ON_FOLIO() in invalidate_inode_pages2_range()
> after the truncation check.
>
> [...]
>
> My hope is that this could help those interested in reproducing: spin up
> kdevops and run the test in a loop in the same way.
> Likewise, if you have a fix to test we can test it as well, but it will
> take a while as we want to run the test in a loop over and over many
> times.
I'm pretty sure this is the same problem recently diagnosed by Charan.
It's terribly rare, so it'll take a while to find out. Try the attached
patch?
[-- Attachment #2: 0001-mm-Migrate-high-order-folios-in-swap-cache-correctly.patch --]
[-- Type: text/plain, Size: 2176 bytes --]
From 4bd18e281a5e99f3cc55a9c9cc78cbace4e9a504 Mon Sep 17 00:00:00 2001
From: Charan Teja Kalla <quic_charante@quicinc.com>
Date: Sat, 9 Dec 2023 00:39:26 -0500
Subject: [PATCH] mm: Migrate high-order folios in swap cache correctly
Large folios occupy N consecutive entries in the swap cache instead of
using multi-index entries like the page cache. However, if a large folio
is re-added to the LRU list, it can be migrated. The migration code was
not aware of the difference between the swap cache and the page cache
and assumed that a single xas_store() would be sufficient.
This leaves potentially many stale pointers to the now-migrated folio in
the swap cache, which can lead to almost arbitrary data corruption in
the future. This can also manifest as infinite loops with the RCU read
lock held.
Signed-off-by: Charan Teja Kalla <quic_charante@quicinc.com>
[modifications to the changelog & tweaked the fix]
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/migrate.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index d9d2b9432e81..2d67ca47d2e2 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -405,6 +405,7 @@ int folio_migrate_mapping(struct address_space *mapping,
 	int dirty;
 	int expected_count = folio_expected_refs(mapping, folio) + extra_count;
 	long nr = folio_nr_pages(folio);
+	long entries, i;
 
 	if (!mapping) {
 		/* Anonymous page without mapping */
@@ -442,8 +443,10 @@ int folio_migrate_mapping(struct address_space *mapping,
 			folio_set_swapcache(newfolio);
 			newfolio->private = folio_get_private(folio);
 		}
+		entries = nr;
 	} else {
 		VM_BUG_ON_FOLIO(folio_test_swapcache(folio), folio);
+		entries = 1;
 	}
 
 	/* Move dirty while page refs frozen and newpage not yet exposed */
@@ -453,7 +456,11 @@ int folio_migrate_mapping(struct address_space *mapping,
 		folio_set_dirty(newfolio);
 	}
 
-	xas_store(&xas, newfolio);
+	/* Swap cache still stores N entries instead of a high-order entry */
+	for (i = 0; i < entries; i++) {
+		xas_store(&xas, newfolio);
+		xas_next(&xas);
+	}
 
 	/*
 	 * Drop cache reference from old page by unfreezing
-- 
2.42.0
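To make the failure mode the patch describes concrete, here is a toy
userspace model of the bug and the fix. The array and function names below
are stand-ins for illustration, not kernel code; the real store loop is the
xas_store()/xas_next() pair in the patch:

```c
#include <assert.h>
#include <stddef.h>

#define SLOTS 8

/*
 * Toy model: the swap cache stores one pointer per page, so an order-2
 * folio occupies 4 consecutive slots, unlike the page cache's single
 * multi-index entry.
 */
static void *swap_cache[SLOTS];

static void add_folio(void *folio, unsigned long first, unsigned long nr)
{
	for (unsigned long i = 0; i < nr; i++)
		swap_cache[first + i] = folio;
}

/* Pre-fix migration: one store, as if this were a multi-index entry. */
static void migrate_buggy(unsigned long first, unsigned long nr, void *new)
{
	(void)nr;
	swap_cache[first] = new; /* slots first+1 .. first+nr-1 go stale */
}

/* Fixed migration: store the new folio into every slot it occupies. */
static void migrate_fixed(unsigned long first, unsigned long nr, void *new)
{
	for (unsigned long i = 0; i < nr; i++)
		swap_cache[first + i] = new;
}

static unsigned long count_stale(void *old, unsigned long first,
				 unsigned long nr)
{
	unsigned long stale = 0;

	for (unsigned long i = 0; i < nr; i++)
		if (swap_cache[first + i] == old)
			stale++;
	return stale;
}
```

After the buggy migration, most slots still point at the old folio, which
is why the corruption can show up almost anywhere later; the fixed version
leaves no stale entries.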
* Re: mm/truncate.c:669 VM_BUG_ON_FOLIO() - hit on XFS on different tests
@ 2024-02-15 17:16 Luis Chamberlain
From: Luis Chamberlain @ 2024-02-15 17:16 UTC (permalink / raw)
To: Matthew Wilcox
Cc: Hugh Dickins, Jan Kara, Christian Brauner, zlang, Pankaj Raghav,
    Daniel Gomez, linux-mm, xfs, Linux FS Devel, Amir Goldstein, kdevops
On Sat, Dec 09, 2023 at 05:52:00AM +0000, Matthew Wilcox wrote:
> On Fri, Dec 08, 2023 at 02:39:36PM -0800, Luis Chamberlain wrote:
> > [...]
> I'm pretty sure this is the same problem recently diagnosed by Charan.
> It's terribly rare, so it'll take a while to find out. Try the attached
> patch?
Confirmed: as of v6.8-rc2 this no longer reproduces, since commit
fc346d0a70a1 ("mm: migrate high-order folios in swap cache correctly")
was merged in v6.7-rc8. I ran the test 400 times in a loop. I'll now
remove this from the kdevops expunges for the v6.8-rc2 baseline.
  Luis