* [syzbot] [mm?] WARNING in __folio_rmap_sanity_checks (2)
@ 2024-12-11 1:54 syzbot
2024-12-11 10:06 ` David Hildenbrand
` (2 more replies)
0 siblings, 3 replies; 38+ messages in thread
From: syzbot @ 2024-12-11 1:54 UTC (permalink / raw)
To: akpm, linux-kernel, linux-mm, syzkaller-bugs
Hello,
syzbot found the following issue on:
HEAD commit: b8f52214c61a Merge tag 'audit-pr-20241205' of git://git.ke..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=164958df980000
kernel config: https://syzkaller.appspot.com/x/.config?x=c579265945b98812
dashboard link: https://syzkaller.appspot.com/bug?extid=c0673e1f1f054fac28c2
compiler: gcc (Debian 12.2.0-14) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40
Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/27d16eb66738/disk-b8f52214.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/4e6e3d3856a3/vmlinux-b8f52214.xz
kernel image: https://storage.googleapis.com/syzbot-assets/e4a9277cf155/bzImage-b8f52214.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+c0673e1f1f054fac28c2@syzkaller.appspotmail.com
entry_SYSCALL_64_after_hwframe+0x77/0x7f
page last free pid 1 tgid 1 stack trace:
reset_page_owner include/linux/page_owner.h:25 [inline]
free_pages_prepare mm/page_alloc.c:1127 [inline]
free_unref_page+0x661/0x1080 mm/page_alloc.c:2657
free_contig_range+0x133/0x3f0 mm/page_alloc.c:6630
destroy_args+0xa87/0xe60 mm/debug_vm_pgtable.c:1017
debug_vm_pgtable+0x168e/0x31a0 mm/debug_vm_pgtable.c:1397
do_one_initcall+0x12b/0x700 init/main.c:1266
do_initcall_level init/main.c:1328 [inline]
do_initcalls init/main.c:1344 [inline]
do_basic_setup init/main.c:1363 [inline]
kernel_init_freeable+0x5c7/0x900 init/main.c:1577
kernel_init+0x1c/0x2b0 init/main.c:1466
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
------------[ cut here ]------------
WARNING: CPU: 0 PID: 10473 at ./include/linux/rmap.h:217 __folio_rmap_sanity_checks+0x356/0x540 include/linux/rmap.h:217
Modules linked in:
CPU: 0 UID: 0 PID: 10473 Comm: syz.3.899 Not tainted 6.13.0-rc1-syzkaller-00182-gb8f52214c61a #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
RIP: 0010:__folio_rmap_sanity_checks+0x356/0x540 include/linux/rmap.h:217
Code: d2 b0 ff 49 8d 6f ff e8 28 d2 b0 ff 48 39 eb 0f 84 53 fe ff ff e8 1a d2 b0 ff 48 c7 c6 20 ac 7a 8b 48 89 df e8 db fb f6 ff 90 <0f> 0b 90 e9 36 fe ff ff e8 fd d1 b0 ff 49 89 ec 31 ff 41 81 e4 ff
RSP: 0018:ffffc900036b75d8 EFLAGS: 00010246
RAX: 0000000000080000 RBX: ffffea0001108000 RCX: ffffc9000de50000
RDX: 0000000000080000 RSI: ffffffff81e933a5 RDI: ffff88802e0d8444
RBP: ffffea000111ffc0 R08: 0000000000000000 R09: fffffbfff20be52a
R10: ffffffff905f2957 R11: 0000000000000006 R12: 0000000000000000
R13: 0000000000000410 R14: 0000000000000000 R15: dead000000000100
FS: 00007ffb8d5086c0(0000) GS:ffff8880b8600000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f1678a23712 CR3: 0000000068232000 CR4: 0000000000350ef0
Call Trace:
<TASK>
__folio_add_rmap mm/rmap.c:1170 [inline]
__folio_add_file_rmap mm/rmap.c:1489 [inline]
folio_add_file_rmap_ptes+0x72/0x310 mm/rmap.c:1511
set_pte_range+0x135/0x520 mm/memory.c:5065
filemap_map_folio_range mm/filemap.c:3572 [inline]
filemap_map_pages+0xb5a/0x16b0 mm/filemap.c:3681
do_fault_around mm/memory.c:5280 [inline]
do_read_fault mm/memory.c:5313 [inline]
do_fault mm/memory.c:5456 [inline]
do_pte_missing+0xdae/0x3e70 mm/memory.c:3979
handle_pte_fault mm/memory.c:5801 [inline]
__handle_mm_fault+0x103c/0x2a40 mm/memory.c:5944
handle_mm_fault+0x3fa/0xaa0 mm/memory.c:6112
faultin_page mm/gup.c:1187 [inline]
__get_user_pages+0x8d9/0x3b50 mm/gup.c:1485
populate_vma_page_range+0x27f/0x3a0 mm/gup.c:1923
__mm_populate+0x1d6/0x380 mm/gup.c:2026
mm_populate include/linux/mm.h:3386 [inline]
vm_mmap_pgoff+0x293/0x360 mm/util.c:585
ksys_mmap_pgoff+0x32c/0x5c0 mm/mmap.c:542
__do_sys_mmap arch/x86/kernel/sys_x86_64.c:89 [inline]
__se_sys_mmap arch/x86/kernel/sys_x86_64.c:82 [inline]
__x64_sys_mmap+0x125/0x190 arch/x86/kernel/sys_x86_64.c:82
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0xcd/0x250 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7ffb8c77fed9
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007ffb8d508058 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
RAX: ffffffffffffffda RBX: 00007ffb8c946080 RCX: 00007ffb8c77fed9
RDX: 0000000000000002 RSI: 0000000000b36000 RDI: 0000000020000000
RBP: 00007ffb8c7f3cc8 R08: 0000000000000007 R09: 0000000000000000
R10: 0000000000028011 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000001 R14: 00007ffb8c946080 R15: 00007ffd68dca078
</TASK>
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@googlegroups.com.
syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title
If you want to overwrite report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)
If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report
If you want to undo deduplication, reply with:
#syz undup
^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [syzbot] [mm?] WARNING in __folio_rmap_sanity_checks (2)
  2024-12-11  1:54 [syzbot] [mm?] WARNING in __folio_rmap_sanity_checks (2) syzbot
@ 2024-12-11 10:06 ` David Hildenbrand
  2024-12-28  4:56 ` syzbot
  2024-12-28 12:25 ` syzbot
  2 siblings, 0 replies; 38+ messages in thread
From: David Hildenbrand @ 2024-12-11 10:06 UTC (permalink / raw)
  To: syzbot, akpm, linux-kernel, linux-mm, syzkaller-bugs; +Cc: Matthew Wilcox

On 11.12.24 02:54, syzbot wrote:
> Hello,
>
> syzbot found the following issue on:
>
> HEAD commit:    b8f52214c61a Merge tag 'audit-pr-20241205' of git://git.ke..
> git tree:       upstream
> console output: https://syzkaller.appspot.com/x/log.txt?x=164958df980000
> kernel config:  https://syzkaller.appspot.com/x/.config?x=c579265945b98812
> dashboard link: https://syzkaller.appspot.com/bug?extid=c0673e1f1f054fac28c2
> compiler:       gcc (Debian 12.2.0-14) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40
>
> Unfortunately, I don't have any reproducer for this issue yet.
>
[...]
> ------------[ cut here ]------------
> WARNING: CPU: 0 PID: 10473 at ./include/linux/rmap.h:217 __folio_rmap_sanity_checks+0x356/0x540 include/linux/rmap.h:217

That is:

	VM_WARN_ON_FOLIO(page_folio(page + nr_pages - 1) != folio, folio);

meaning that nr_pages crosses out of our folio, which is bad. Note that

	VM_WARN_ON_FOLIO(page_folio(page) != folio, folio);

held. (Doing the page arithmetic works because we are not crossing memory
section boundaries with any pages we expect in here right now.)

[...]
> Call Trace:
>  <TASK>
>  __folio_add_rmap mm/rmap.c:1170 [inline]
>  __folio_add_file_rmap mm/rmap.c:1489 [inline]
>  folio_add_file_rmap_ptes+0x72/0x310 mm/rmap.c:1511

So set_pte_range() is already called with a wrong page + nr combination,
I suspect.

[...]

-- 
Cheers,

David / dhildenb

^ permalink raw reply	[flat|nested] 38+ messages in thread
* Re: [syzbot] [mm?] WARNING in __folio_rmap_sanity_checks (2)
  2024-12-11  1:54 [syzbot] [mm?] WARNING in __folio_rmap_sanity_checks (2) syzbot
  2024-12-11 10:06 ` David Hildenbrand
@ 2024-12-28  4:56 ` syzbot
  2024-12-28  7:54   ` Hillf Danton
                     ` (7 more replies)
  2024-12-28 12:25 ` syzbot
  2 siblings, 8 replies; 38+ messages in thread
From: syzbot @ 2024-12-28 4:56 UTC (permalink / raw)
  To: akpm, david, linux-kernel, linux-mm, syzkaller-bugs, willy

syzbot has found a reproducer for the following issue on:

HEAD commit:    8155b4ef3466 Add linux-next specific files for 20241220
git tree:       linux-next
console output: https://syzkaller.appspot.com/x/log.txt?x=15248af8580000
kernel config:  https://syzkaller.appspot.com/x/.config?x=9c90bb7161a56c88
dashboard link: https://syzkaller.appspot.com/bug?extid=c0673e1f1f054fac28c2
compiler:       Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=1652fadf980000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/98a974fc662d/disk-8155b4ef.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/2dea9b72f624/vmlinux-8155b4ef.xz
kernel image: https://storage.googleapis.com/syzbot-assets/593a42b9eb34/bzImage-8155b4ef.xz
mounted in repro: https://storage.googleapis.com/syzbot-assets/07bcc698db35/mount_0.gz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+c0673e1f1f054fac28c2@syzkaller.appspotmail.com

 do_ftruncate+0x4a1/0x540 fs/open.c:192
 do_sys_ftruncate fs/open.c:207 [inline]
 __do_sys_ftruncate fs/open.c:212 [inline]
 __se_sys_ftruncate fs/open.c:210 [inline]
 __x64_sys_ftruncate+0x94/0xf0 fs/open.c:210
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
------------[ cut here ]------------
WARNING: CPU: 0 PID: 7889 at ./include/linux/rmap.h:216 __folio_rmap_sanity_checks+0x33f/0x590 include/linux/rmap.h:216
Modules linked in:
CPU: 0 UID: 0 PID: 7889 Comm: syz.0.163 Not tainted 6.13.0-rc3-next-20241220-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
RIP: 0010:__folio_rmap_sanity_checks+0x33f/0x590 include/linux/rmap.h:216
Code: 0f 0b 90 e9 b7 fd ff ff e8 8e cb ab ff 48 ff cb e9 f8 fd ff ff e8 81 cb ab ff 4c 89 e7 48 c7 c6 00 a7 15 8c e8 32 a4 f5 ff 90 <0f> 0b 90 e9 e9 fd ff ff e8 64 cb ab ff 48 ff cb e9 34 fe ff ff e8
RSP: 0018:ffffc90002f26fd8 EFLAGS: 00010246
RAX: 2a0e9269706cf300 RBX: ffffea00014280c0 RCX: ffffc90002f26b03
RDX: 0000000000000005 RSI: ffffffff8c0aaba0 RDI: ffffffff8c5fed00
RBP: 00000000000131bb R08: ffffffff901ab1f7 R09: 1ffffffff203563e
R10: dffffc0000000000 R11: fffffbfff203563f R12: ffffea0001420000
R13: ffffea00014280c0 R14: 0000000000000000 R15: 00000000000001fc
FS:  00007f75ef9f16c0(0000) GS:ffff8880b8600000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000020a56000 CR3: 00000000642f0000 CR4: 00000000003526f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <TASK>
 __folio_add_rmap mm/rmap.c:1170 [inline]
 __folio_add_file_rmap mm/rmap.c:1489 [inline]
 folio_add_file_rmap_ptes+0x82/0x380 mm/rmap.c:1511
 set_pte_range+0x30c/0x750 mm/memory.c:5136
 filemap_map_folio_range mm/filemap.c:3639 [inline]
 filemap_map_pages+0xfbe/0x1900 mm/filemap.c:3748
 do_fault_around mm/memory.c:5351 [inline]
 do_read_fault mm/memory.c:5384 [inline]
 do_fault mm/memory.c:5527 [inline]
 do_pte_missing mm/memory.c:4048 [inline]
 handle_pte_fault+0x3888/0x5ee0 mm/memory.c:5890
 __handle_mm_fault mm/memory.c:6033 [inline]
 handle_mm_fault+0x11f5/0x1d50 mm/memory.c:6202
 faultin_page mm/gup.c:1196 [inline]
 __get_user_pages+0x1a92/0x4140 mm/gup.c:1491
 populate_vma_page_range+0x264/0x330 mm/gup.c:1929
 __mm_populate+0x27a/0x460 mm/gup.c:2032
 mm_populate include/linux/mm.h:3400 [inline]
 vm_mmap_pgoff+0x303/0x430 mm/util.c:585
 ksys_mmap_pgoff+0x4eb/0x720 mm/mmap.c:607
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f75eeb85d29
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f75ef9f1038 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
RAX: ffffffffffffffda RBX: 00007f75eed76080 RCX: 00007f75eeb85d29
RDX: 0000000000000002 RSI: 0000000000b36000 RDI: 0000000020000000
RBP: 00007f75eec01b08 R08: 0000000000000004 R09: 0000000000000000
R10: 0000000000028011 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007f75eed76080 R15: 00007ffd2129f438
 </TASK>

---
If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.

^ permalink raw reply	[flat|nested] 38+ messages in thread
* Re: [syzbot] [mm?] WARNING in __folio_rmap_sanity_checks (2)
  2024-12-28  4:56 ` syzbot
@ 2024-12-28  7:54   ` Hillf Danton
  2024-12-28  8:03     ` syzbot
  1 sibling, 1 reply; 38+ messages in thread
From: Hillf Danton @ 2024-12-28 7:54 UTC (permalink / raw)
  To: syzbot; +Cc: linux-kernel, syzkaller-bugs

On Fri, 27 Dec 2024 20:56:21 -0800
> syzbot has found a reproducer for the following issue on:
>
> HEAD commit:    8155b4ef3466 Add linux-next specific files for 20241220
> git tree:       linux-next
> syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=1652fadf980000

#syz test

--- x/include/linux/rmap.h
+++ y/include/linux/rmap.h
@@ -213,8 +213,17 @@ static inline void __folio_rmap_sanity_c
 	 */
 	VM_WARN_ON_ONCE(nr_pages <= 0);
 
-	VM_WARN_ON_FOLIO(page_folio(page) != folio, folio);
-	VM_WARN_ON_FOLIO(page_folio(page + nr_pages - 1) != folio, folio);
+	if (!folio_test_large(folio)) {
+		VM_WARN_ON_FOLIO(page_folio(page) != folio, folio);
+		VM_WARN_ON_FOLIO(page_folio(page + nr_pages - 1) != folio, folio);
+	} else {
+		struct page *p = compound_head(page);
+
+		VM_WARN_ON_FOLIO(page_folio(p) != folio, folio);
+		p = page + nr_pages - 1;
+		p = compound_head(p);
+		VM_WARN_ON_FOLIO(page_folio(p) != folio, folio);
+	}
 
 	switch (level) {
 	case RMAP_LEVEL_PTE:
--

^ permalink raw reply	[flat|nested] 38+ messages in thread
* Re: [syzbot] [mm?] WARNING in __folio_rmap_sanity_checks (2)
  2024-12-28  7:54   ` Hillf Danton
@ 2024-12-28  8:03     ` syzbot
  0 siblings, 0 replies; 38+ messages in thread
From: syzbot @ 2024-12-28 8:03 UTC (permalink / raw)
  To: hdanton, linux-kernel, syzkaller-bugs

Hello,

syzbot tried to test the proposed patch but the build/boot failed:

./include/linux/rmap.h:220:16: error: initializing 'struct page *' with an expression of type 'typeof (page)' (aka 'const struct page *') discards qualifiers [-Werror,-Wincompatible-pointer-types-discards-qualifiers]
./include/linux/rmap.h:223:5: error: assigning to 'struct page *' from 'const struct page *' discards qualifiers [-Werror,-Wincompatible-pointer-types-discards-qualifiers]

Tested on:

commit:         8155b4ef Add linux-next specific files for 20241220
git tree:       linux-next
kernel config:  https://syzkaller.appspot.com/x/.config?x=9c90bb7161a56c88
dashboard link: https://syzkaller.appspot.com/bug?extid=c0673e1f1f054fac28c2
compiler:       Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
patch:          https://syzkaller.appspot.com/x/patch.diff?x=1558050f980000

^ permalink raw reply	[flat|nested] 38+ messages in thread
* Re: [syzbot] [mm?] WARNING in __folio_rmap_sanity_checks (2)
  2024-12-28  4:56 ` syzbot
  2024-12-28  7:54   ` Hillf Danton
@ 2024-12-28 10:36   ` Hillf Danton
  2024-12-28 12:20     ` syzbot
  2024-12-29  0:00   ` Hillf Danton
                     ` (6 subsequent siblings)
  7 siblings, 1 reply; 38+ messages in thread
From: Hillf Danton @ 2024-12-28 10:36 UTC (permalink / raw)
  To: syzbot; +Cc: linux-kernel, syzkaller-bugs

On Fri, 27 Dec 2024 20:56:21 -0800
> syzbot has found a reproducer for the following issue on:
>
> HEAD commit:    8155b4ef3466 Add linux-next specific files for 20241220
> git tree:       linux-next
> syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=1652fadf980000

#syz test

--- x/include/linux/rmap.h
+++ y/include/linux/rmap.h
@@ -195,7 +195,7 @@ enum rmap_level {
 };
 
 static inline void __folio_rmap_sanity_checks(const struct folio *folio,
-		const struct page *page, int nr_pages, enum rmap_level level)
+		struct page *page, int nr_pages, enum rmap_level level)
 {
 	/* hugetlb folios are handled separately. */
 	VM_WARN_ON_FOLIO(folio_test_hugetlb(folio), folio);
@@ -213,8 +213,17 @@ static inline void __folio_rmap_sanity_c
 	 */
 	VM_WARN_ON_ONCE(nr_pages <= 0);
 
-	VM_WARN_ON_FOLIO(page_folio(page) != folio, folio);
-	VM_WARN_ON_FOLIO(page_folio(page + nr_pages - 1) != folio, folio);
+	if (!folio_test_large(folio)) {
+		VM_WARN_ON_FOLIO(page_folio(page) != folio, folio);
+		VM_WARN_ON_FOLIO(page_folio(page + nr_pages - 1) != folio, folio);
+	} else {
+		struct page *p = compound_head(page);
+
+		VM_WARN_ON_FOLIO(page_folio(p) != folio, folio);
+		p = page + nr_pages - 1;
+		p = compound_head(p);
+		VM_WARN_ON_FOLIO(page_folio(p) != folio, folio);
+	}
 
 	switch (level) {
 	case RMAP_LEVEL_PTE:
--

^ permalink raw reply	[flat|nested] 38+ messages in thread
* Re: [syzbot] [mm?] WARNING in __folio_rmap_sanity_checks (2)
  2024-12-28 10:36   ` Hillf Danton
@ 2024-12-28 12:20     ` syzbot
  0 siblings, 0 replies; 38+ messages in thread
From: syzbot @ 2024-12-28 12:20 UTC (permalink / raw)
  To: hdanton, linux-kernel, syzkaller-bugs

Hello,

syzbot has tested the proposed patch but the reproducer is still triggering an issue:
WARNING in __folio_rmap_sanity_checks

 do_ftruncate+0x4a1/0x540 fs/open.c:192
 do_sys_ftruncate fs/open.c:207 [inline]
 __do_sys_ftruncate fs/open.c:212 [inline]
 __se_sys_ftruncate fs/open.c:210 [inline]
 __x64_sys_ftruncate+0x94/0xf0 fs/open.c:210
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
------------[ cut here ]------------
WARNING: CPU: 0 PID: 8512 at ./include/linux/rmap.h:222 __folio_rmap_sanity_checks+0x52a/0xb30 include/linux/rmap.h:222
Modules linked in:
CPU: 0 UID: 0 PID: 8512 Comm: syz.4.115 Not tainted 6.13.0-rc3-next-20241220-syzkaller-05236-g8155b4ef3466-dirty #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
RIP: 0010:__folio_rmap_sanity_checks+0x52a/0xb30 include/linux/rmap.h:222
Code: 49 ff cd e9 29 fd ff ff e8 e3 c3 ab ff 48 ff cd e9 6b fd ff ff e8 d6 c3 ab ff 4c 89 e7 48 c7 c6 60 aa 15 8c e8 87 9c f5 ff 90 <0f> 0b 90 e9 66 fd ff ff e8 b9 c3 ab ff 48 ff cd e9 a6 fd ff ff e8
RSP: 0018:ffffc9000d986fb8 EFLAGS: 00010246
RAX: 9579638a77c65000 RBX: 00000000000001f8 RCX: ffffc9000d986b03
RDX: 0000000000000005 RSI: ffffffff8c0aac20 RDI: ffffffff8c5ff180
RBP: ffffea00014d8200 R08: ffffffff901ab2f7 R09: 1ffffffff203565e
R10: dffffc0000000000 R11: fffffbfff203565f R12: ffffea00014d0000
R13: ffffea00014d8200 R14: dffffc0000000000 R15: ffffea00014d8200
FS:  00007f399ed686c0(0000) GS:ffff8880b8600000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00005555898e7808 CR3: 00000000776f0000 CR4: 00000000003526f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <TASK>
 __folio_add_rmap mm/rmap.c:1170 [inline]
 __folio_add_file_rmap mm/rmap.c:1489 [inline]
 folio_add_file_rmap_ptes+0x82/0x380 mm/rmap.c:1511
 set_pte_range+0x30c/0x750 mm/memory.c:5136
 filemap_map_folio_range mm/filemap.c:3639 [inline]
 filemap_map_pages+0xfbe/0x1900 mm/filemap.c:3748
 do_fault_around mm/memory.c:5351 [inline]
 do_read_fault mm/memory.c:5384 [inline]
 do_fault mm/memory.c:5527 [inline]
 do_pte_missing mm/memory.c:4048 [inline]
 handle_pte_fault+0x3888/0x5ee0 mm/memory.c:5890
 __handle_mm_fault mm/memory.c:6033 [inline]
 handle_mm_fault+0x11f5/0x1d50 mm/memory.c:6202
 faultin_page mm/gup.c:1196 [inline]
 __get_user_pages+0x1a92/0x4140 mm/gup.c:1491
 populate_vma_page_range+0x264/0x330 mm/gup.c:1929
 __mm_populate+0x27a/0x460 mm/gup.c:2032
 mm_populate include/linux/mm.h:3400 [inline]
 vm_mmap_pgoff+0x303/0x430 mm/util.c:585
 ksys_mmap_pgoff+0x4eb/0x720 mm/mmap.c:607
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f399df85d29
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f399ed68038 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
RAX: ffffffffffffffda RBX: 00007f399e175fa0 RCX: 00007f399df85d29
RDX: 0000000000000002 RSI: 0000000000b36000 RDI: 0000000020000000
RBP: 00007f399e001b08 R08: 0000000000000004 R09: 0000000000000000
R10: 0000000000028011 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007f399e175fa0 R15: 00007fff8d381608
 </TASK>


Tested on:

commit:         8155b4ef Add linux-next specific files for 20241220
git tree:       linux-next
console output: https://syzkaller.appspot.com/x/log.txt?x=12d84818580000
kernel config:  https://syzkaller.appspot.com/x/.config?x=9c90bb7161a56c88
dashboard link: https://syzkaller.appspot.com/bug?extid=c0673e1f1f054fac28c2
compiler:       Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
patch:          https://syzkaller.appspot.com/x/patch.diff?x=12c850b0580000

^ permalink raw reply	[flat|nested] 38+ messages in thread
* Re: [syzbot] [mm?] WARNING in __folio_rmap_sanity_checks (2)
  2024-12-28  4:56 ` syzbot
  2024-12-28  7:54   ` Hillf Danton
  2024-12-28 10:36   ` Hillf Danton
@ 2024-12-29  0:00   ` Hillf Danton
  2024-12-29  1:14     ` syzbot
  2024-12-29  6:42   ` Hillf Danton
                     ` (4 subsequent siblings)
  7 siblings, 1 reply; 38+ messages in thread
From: Hillf Danton @ 2024-12-29 0:00 UTC (permalink / raw)
  To: syzbot; +Cc: linux-kernel, syzkaller-bugs

On Fri, 27 Dec 2024 20:56:21 -0800
> syzbot has found a reproducer for the following issue on:
>
> HEAD commit:    8155b4ef3466 Add linux-next specific files for 20241220
> git tree:       linux-next
> syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=1652fadf980000

#syz test

--- x/mm/filemap.c
+++ y/mm/filemap.c
@@ -3636,6 +3636,7 @@ static vm_fault_t filemap_map_folio_rang
 		continue;
 skip:
 		if (count) {
+			VM_WARN_ON_FOLIO(page_folio(page) != folio, folio);
 			set_pte_range(vmf, folio, page, count, addr);
 			*rss += count;
 			folio_ref_add(folio, count);
--

^ permalink raw reply	[flat|nested] 38+ messages in thread
* Re: [syzbot] [mm?] WARNING in __folio_rmap_sanity_checks (2)
  2024-12-29  0:00   ` Hillf Danton
@ 2024-12-29  1:14     ` syzbot
  0 siblings, 0 replies; 38+ messages in thread
From: syzbot @ 2024-12-29 1:14 UTC (permalink / raw)
  To: hdanton, linux-kernel, syzkaller-bugs

Hello,

syzbot has tested the proposed patch but the reproducer is still triggering an issue:
KASAN: slab-out-of-bounds Read in filemap_map_pages

==================================================================
BUG: KASAN: slab-out-of-bounds in ptep_get include/linux/pgtable.h:338 [inline]
BUG: KASAN: slab-out-of-bounds in filemap_map_folio_range mm/filemap.c:3632 [inline]
BUG: KASAN: slab-out-of-bounds in filemap_map_pages+0xde4/0x1aa0 mm/filemap.c:3749
Read of size 8 at addr ffff8880622b9010 by task syz.3.68/7906

CPU: 1 UID: 0 PID: 7906 Comm: syz.3.68 Not tainted 6.13.0-rc3-next-20241220-syzkaller-05236-g8155b4ef3466-dirty #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
 print_address_description mm/kasan/report.c:378 [inline]
 print_report+0x169/0x550 mm/kasan/report.c:489
 kasan_report+0x143/0x180 mm/kasan/report.c:602
 ptep_get include/linux/pgtable.h:338 [inline]
 filemap_map_folio_range mm/filemap.c:3632 [inline]
 filemap_map_pages+0xde4/0x1aa0 mm/filemap.c:3749
 do_fault_around mm/memory.c:5351 [inline]
 do_read_fault mm/memory.c:5384 [inline]
 do_fault mm/memory.c:5527 [inline]
 do_pte_missing mm/memory.c:4048 [inline]
 handle_pte_fault+0x3888/0x5ee0 mm/memory.c:5890
 __handle_mm_fault mm/memory.c:6033 [inline]
 handle_mm_fault+0x11f5/0x1d50 mm/memory.c:6202
 faultin_page mm/gup.c:1196 [inline]
 __get_user_pages+0x1a92/0x4140 mm/gup.c:1491
 populate_vma_page_range+0x264/0x330 mm/gup.c:1929
 __mm_populate+0x27a/0x460 mm/gup.c:2032
 mm_populate include/linux/mm.h:3400 [inline]
 vm_mmap_pgoff+0x303/0x430 mm/util.c:585
 ksys_mmap_pgoff+0x4eb/0x720 mm/mmap.c:607
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f4084f85d29
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f4085da6038 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
RAX: ffffffffffffffda RBX: 00007f4085176080 RCX: 00007f4084f85d29
RDX: 0000000000000002 RSI: 0000000000b36000 RDI: 0000000020000000
RBP: 00007f4085001b08 R08: 0000000000000004 R09: 0000000000000000
R10: 0000000000028011 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007f4085176080 R15: 00007fff8010b888
 </TASK>

Allocated by task 5846:
 kasan_save_stack mm/kasan/common.c:47 [inline]
 kasan_save_track+0x3f/0x80 mm/kasan/common.c:68
 poison_kmalloc_redzone mm/kasan/common.c:377 [inline]
 __kasan_kmalloc+0x98/0xb0 mm/kasan/common.c:394
 kasan_kmalloc include/linux/kasan.h:260 [inline]
 __do_kmalloc_node mm/slub.c:4294 [inline]
 __kmalloc_node_noprof+0x290/0x4d0 mm/slub.c:4300
 kmalloc_array_node_noprof include/linux/slab.h:1018 [inline]
 alloc_slab_obj_exts mm/slub.c:1964 [inline]
 account_slab mm/slub.c:2550 [inline]
 allocate_slab+0x179/0x3a0 mm/slub.c:2605
 new_slab mm/slub.c:2640 [inline]
 ___slab_alloc+0xc27/0x14a0 mm/slub.c:3826
 __slab_alloc+0x58/0xa0 mm/slub.c:3916
 __slab_alloc_node mm/slub.c:3991 [inline]
 slab_alloc_node mm/slub.c:4152 [inline]
 __do_kmalloc_node mm/slub.c:4293 [inline]
 __kmalloc_node_noprof+0x2ee/0x4d0 mm/slub.c:4300
 __kvmalloc_node_noprof+0x72/0x190 mm/util.c:667
 alloc_netdev_mqs+0xa4/0x1080 net/core/dev.c:11209
 rtnl_create_link+0x2f9/0xc20 net/core/rtnetlink.c:3595
 rtnl_newlink_create+0x210/0xa40 net/core/rtnetlink.c:3771
 __rtnl_newlink net/core/rtnetlink.c:3897 [inline]
 rtnl_newlink+0x1c7e/0x2210 net/core/rtnetlink.c:4012
 rtnetlink_rcv_msg+0x791/0xcf0 net/core/rtnetlink.c:6902
 netlink_rcv_skb+0x1e3/0x430 net/netlink/af_netlink.c:2542
 netlink_unicast_kernel net/netlink/af_netlink.c:1321 [inline]
 netlink_unicast+0x7f6/0x990 net/netlink/af_netlink.c:1347
 netlink_sendmsg+0x8e4/0xcb0 net/netlink/af_netlink.c:1891
 sock_sendmsg_nosec net/socket.c:711 [inline]
 __sock_sendmsg+0x221/0x270 net/socket.c:726
 __sys_sendto+0x363/0x4c0 net/socket.c:2208
 __do_sys_sendto net/socket.c:2215 [inline]
 __se_sys_sendto net/socket.c:2211 [inline]
 __x64_sys_sendto+0xde/0x100 net/socket.c:2211
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

The buggy address belongs to the object at ffff8880622b9000
 which belongs to the cache kmalloc-16 of size 16
The buggy address is located 0 bytes to the right of
 allocated 16-byte region [ffff8880622b9000, ffff8880622b9010)

The buggy address belongs to the physical page:
page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x622b9
flags: 0xfff00000000000(node=0|zone=1|lastcpupid=0x7ff)
page_type: f5(slab)
raw: 00fff00000000000 ffff88801ac41640 ffffea0000cbd080 dead000000000002
raw: 0000000000000000 0000000000800080 00000000f5000000 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 0, migratetype Unmovable, gfp_mask 0x252800(GFP_NOWAIT|__GFP_NORETRY|__GFP_COMP|__GFP_THISNODE), pid 5846, tgid 5846 (syz-executor), ts 70570504990, free_ts 70568138444
 set_page_owner include/linux/page_owner.h:32 [inline]
 post_alloc_hook+0x1f4/0x240 mm/page_alloc.c:1551
 prep_new_page mm/page_alloc.c:1559 [inline]
 get_page_from_freelist+0x365c/0x37a0 mm/page_alloc.c:3477
 __alloc_frozen_pages_noprof+0x292/0x710 mm/page_alloc.c:4754
 alloc_slab_page mm/slub.c:2425 [inline]
 allocate_slab+0x66/0x3a0 mm/slub.c:2587
 new_slab mm/slub.c:2640 [inline]
 ___slab_alloc+0xc27/0x14a0 mm/slub.c:3826
 __slab_alloc+0x58/0xa0 mm/slub.c:3916
 __slab_alloc_node mm/slub.c:3991 [inline]
 slab_alloc_node mm/slub.c:4152 [inline]
 __do_kmalloc_node mm/slub.c:4293 [inline]
 __kmalloc_node_noprof+0x2ee/0x4d0 mm/slub.c:4300
 kmalloc_array_node_noprof include/linux/slab.h:1018 [inline]
 alloc_slab_obj_exts mm/slub.c:1964 [inline]
 account_slab mm/slub.c:2550 [inline]
 allocate_slab+0x179/0x3a0 mm/slub.c:2605
 new_slab mm/slub.c:2640 [inline]
 ___slab_alloc+0xc27/0x14a0 mm/slub.c:3826
 __slab_alloc+0x58/0xa0 mm/slub.c:3916
 __slab_alloc_node mm/slub.c:3991 [inline]
 slab_alloc_node mm/slub.c:4152 [inline]
 __do_kmalloc_node mm/slub.c:4293 [inline]
 __kmalloc_node_noprof+0x2ee/0x4d0 mm/slub.c:4300
 __kvmalloc_node_noprof+0x72/0x190 mm/util.c:667
 alloc_netdev_mqs+0xa4/0x1080 net/core/dev.c:11209
 rtnl_create_link+0x2f9/0xc20 net/core/rtnetlink.c:3595
 rtnl_newlink_create+0x210/0xa40 net/core/rtnetlink.c:3771
 __rtnl_newlink net/core/rtnetlink.c:3897 [inline]
 rtnl_newlink+0x1c7e/0x2210 net/core/rtnetlink.c:4012
page last free pid 5890 tgid 5890 stack trace:
 reset_page_owner include/linux/page_owner.h:25 [inline]
 free_pages_prepare mm/page_alloc.c:1127 [inline]
 free_frozen_pages+0xe0d/0x10e0 mm/page_alloc.c:2660
 vfree+0x1c3/0x360 mm/vmalloc.c:3383
 kcov_put kernel/kcov.c:439 [inline]
 kcov_close+0x28/0x50 kernel/kcov.c:535
 __fput+0x3e9/0x9f0 fs/file_table.c:450
 task_work_run+0x24f/0x310 kernel/task_work.c:227
 exit_task_work include/linux/task_work.h:40 [inline]
 do_exit+0xa2f/0x28e0 kernel/exit.c:938
 do_group_exit+0x207/0x2c0 kernel/exit.c:1087
 get_signal+0x16b2/0x1750 kernel/signal.c:3017
 arch_do_signal_or_restart+0x96/0x860 arch/x86/kernel/signal.c:337
 exit_to_user_mode_loop kernel/entry/common.c:111 [inline]
 exit_to_user_mode_prepare include/linux/entry-common.h:329 [inline]
 __syscall_exit_to_user_mode_work kernel/entry/common.c:207 [inline]
 syscall_exit_to_user_mode+0xce/0x340 kernel/entry/common.c:218
 do_syscall_64+0x100/0x230 arch/x86/entry/common.c:89
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

Memory state around the buggy address:
 ffff8880622b8f00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 ffff8880622b8f80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>ffff8880622b9000: 00 00 fc fc fa fb fc fc fa fb fc fc 00 00 fc fc
                         ^
 ffff8880622b9080: 00 00 fc fc 00 00 fc fc 00 00 fc fc 00 00 fc fc
 ffff8880622b9100: fa fb fc fc fa fb fc fc 00 00 fc fc 00 00 fc fc
==================================================================


Tested on:

commit:         8155b4ef Add linux-next specific files for 20241220
git tree:       linux-next
console output: https://syzkaller.appspot.com/x/log.txt?x=13b3d0b0580000
kernel config:  https://syzkaller.appspot.com/x/.config?x=9c90bb7161a56c88
dashboard link: https://syzkaller.appspot.com/bug?extid=c0673e1f1f054fac28c2
compiler:       Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
patch:          https://syzkaller.appspot.com/x/patch.diff?x=1156c818580000

^ permalink raw reply	[flat|nested] 38+ messages in thread
* Re: [syzbot] [mm?] WARNING in __folio_rmap_sanity_checks (2)
  2024-12-28  4:56 ` syzbot
                   ` (2 preceding siblings ...)
  2024-12-29  0:00 ` Hillf Danton
@ 2024-12-29  6:42 ` Hillf Danton
  2024-12-29  7:13   ` syzbot
  2024-12-30 10:40 ` Hillf Danton
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 38+ messages in thread
From: Hillf Danton @ 2024-12-29  6:42 UTC (permalink / raw)
To: syzbot; +Cc: linux-kernel, syzkaller-bugs

On Fri, 27 Dec 2024 20:56:21 -0800
> syzbot has found a reproducer for the following issue on:
>
> HEAD commit: 8155b4ef3466 Add linux-next specific files for 20241220
> git tree: linux-next
> syz repro: https://syzkaller.appspot.com/x/repro.syz?x=1652fadf980000

#syz test

--- x/mm/filemap.c
+++ y/mm/filemap.c
@@ -3636,6 +3636,7 @@ static vm_fault_t filemap_map_folio_rang
 		continue;
 skip:
 		if (count) {
+			VM_WARN_ON_FOLIO(page_folio(page) != folio, folio);
 			set_pte_range(vmf, folio, page, count, addr);
 			*rss += count;
 			folio_ref_add(folio, count);
@@ -3739,7 +3740,7 @@ vm_fault_t filemap_map_pages(struct vm_f
 		vmf->pte += xas.xa_index - last_pgoff;
 		last_pgoff = xas.xa_index;
 		end = folio_next_index(folio) - 1;
-		nr_pages = min(end, end_pgoff) - xas.xa_index + 1;
+		nr_pages = min(end, end_pgoff) - xas.xa_index;
 
 		if (!folio_test_large(folio))
 			ret |= filemap_map_order0_folio(vmf,
--
* Re: [syzbot] [mm?] WARNING in __folio_rmap_sanity_checks (2)
  2024-12-29  6:42 ` Hillf Danton
@ 2024-12-29  7:13   ` syzbot
  0 siblings, 0 replies; 38+ messages in thread
From: syzbot @ 2024-12-29  7:13 UTC (permalink / raw)
To: hdanton, linux-kernel, syzkaller-bugs

Hello,

syzbot has tested the proposed patch but the reproducer is still triggering an issue:
WARNING in filemap_map_pages

 kill_block_super+0x44/0x90 fs/super.c:1710
 xfs_kill_sb+0x15/0x50 fs/xfs/xfs_super.c:2089
 deactivate_locked_super+0xc4/0x130 fs/super.c:473
 cleanup_mnt+0x41f/0x4b0 fs/namespace.c:1414
 task_work_run+0x24f/0x310 kernel/task_work.c:227
 resume_user_mode_work include/linux/resume_user_mode.h:50 [inline]
 exit_to_user_mode_loop kernel/entry/common.c:114 [inline]
 exit_to_user_mode_prepare include/linux/entry-common.h:329 [inline]
 __syscall_exit_to_user_mode_work kernel/entry/common.c:207 [inline]
 syscall_exit_to_user_mode+0x13f/0x340 kernel/entry/common.c:218
 do_syscall_64+0x100/0x230 arch/x86/entry/common.c:89
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
------------[ cut here ]------------
WARNING: CPU: 1 PID: 7157 at mm/filemap.c:3639 filemap_map_folio_range mm/filemap.c:3639 [inline]
WARNING: CPU: 1 PID: 7157 at mm/filemap.c:3639 filemap_map_pages+0x1012/0x1aa0 mm/filemap.c:3749
Modules linked in:
CPU: 1 UID: 0 PID: 7157 Comm: syz.1.33 Not tainted 6.13.0-rc3-next-20241220-syzkaller-05236-g8155b4ef3466-dirty #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
RIP: 0010:filemap_map_folio_range mm/filemap.c:3639 [inline]
RIP: 0010:filemap_map_pages+0x1012/0x1aa0 mm/filemap.c:3749
Code: 77 c6 ff e9 73 fd ff ff e8 eb 77 c6 ff 48 ff cb e9 90 fe ff ff e8 de 77 c6 ff 48 89 df 48 c7 c6 00 b2 13 8c e8 2f 52 10 00 90 <0f> 0b 90 e9 89 fe ff ff f3 0f 1e fa 48 8b 5c 24 20 48 89 de 48 81
RSP: 0000:ffffc9000472f160 EFLAGS: 00010246
RAX: a1fe4f27a7cbfa00 RBX: ffffea0001470000 RCX: ffffc9000472ed03
RDX: 0000000000000005 RSI: ffffffff8c0aac20 RDI: ffffffff8c5feec0
RBP: ffffc9000472f370 R08: ffffffff901ab1f7 R09: 1ffffffff203563e
R10: dffffc0000000000 R11: fffffbfff203563f R12: 00000000fffffc01
R13: 00000000000001fc R14: ffffea0001470008 R15: dffffc0000000000
FS:  00007f75e30f86c0(0000) GS:ffff8880b8700000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f744d77a000 CR3: 000000005da52000 CR4: 00000000003526f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <TASK>
 do_fault_around mm/memory.c:5351 [inline]
 do_read_fault mm/memory.c:5384 [inline]
 do_fault mm/memory.c:5527 [inline]
 do_pte_missing mm/memory.c:4048 [inline]
 handle_pte_fault+0x3888/0x5ee0 mm/memory.c:5890
 __handle_mm_fault mm/memory.c:6033 [inline]
 handle_mm_fault+0x11f5/0x1d50 mm/memory.c:6202
 faultin_page mm/gup.c:1196 [inline]
 __get_user_pages+0x1a92/0x4140 mm/gup.c:1491
 populate_vma_page_range+0x264/0x330 mm/gup.c:1929
 __mm_populate+0x27a/0x460 mm/gup.c:2032
 mm_populate include/linux/mm.h:3400 [inline]
 vm_mmap_pgoff+0x303/0x430 mm/util.c:585
 ksys_mmap_pgoff+0x4eb/0x720 mm/mmap.c:607
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f75e2385d29
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f75e30f8038 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
RAX: ffffffffffffffda RBX: 00007f75e2576160 RCX: 00007f75e2385d29
RDX: 0000000000000002 RSI: 0000000000b36000 RDI: 0000000020000000
RBP: 00007f75e2401b08 R08: 0000000000000004 R09: 0000000000000000
R10: 0000000000028011 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007f75e2576160 R15: 00007fff1d0db1f8
 </TASK>


Tested on:

commit:         8155b4ef Add linux-next specific files for 20241220
git tree:       linux-next
console output: https://syzkaller.appspot.com/x/log.txt?x=15d6b2c4580000
kernel config:  https://syzkaller.appspot.com/x/.config?x=9c90bb7161a56c88
dashboard link: https://syzkaller.appspot.com/bug?extid=c0673e1f1f054fac28c2
compiler:       Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
patch:          https://syzkaller.appspot.com/x/patch.diff?x=107ab2c4580000
* Re: [syzbot] [mm?] WARNING in __folio_rmap_sanity_checks (2)
  2024-12-28  4:56 ` syzbot
                   ` (3 preceding siblings ...)
  2024-12-29  6:42 ` Hillf Danton
@ 2024-12-30 10:40 ` Hillf Danton
  2024-12-30 11:08   ` syzbot
  2024-12-30 11:17 ` Hillf Danton
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 38+ messages in thread
From: Hillf Danton @ 2024-12-30 10:40 UTC (permalink / raw)
To: syzbot; +Cc: linux-kernel, syzkaller-bugs

On Fri, 27 Dec 2024 20:56:21 -0800
> syzbot has found a reproducer for the following issue on:
>
> HEAD commit: 8155b4ef3466 Add linux-next specific files for 20241220
> git tree: linux-next
> syz repro: https://syzkaller.appspot.com/x/repro.syz?x=1652fadf980000

#syz test

--- x/mm/filemap.c
+++ y/mm/filemap.c
@@ -3636,6 +3636,10 @@ static vm_fault_t filemap_map_folio_rang
 		continue;
 skip:
 		if (count) {
+			for (unsigned int i = 0; i < count; i++) {
+				if (page_folio(page + i) != folio)
+					goto out;
+			}
 			set_pte_range(vmf, folio, page, count, addr);
 			*rss += count;
 			folio_ref_add(folio, count);
@@ -3658,6 +3662,7 @@ skip:
 		ret = VM_FAULT_NOPAGE;
 	}
 
+out:
 	vmf->pte = old_ptep;
 
 	return ret;
--
* Re: [syzbot] [mm?] WARNING in __folio_rmap_sanity_checks (2)
  2024-12-30 10:40 ` Hillf Danton
@ 2024-12-30 11:08   ` syzbot
  0 siblings, 0 replies; 38+ messages in thread
From: syzbot @ 2024-12-30 11:08 UTC (permalink / raw)
To: hdanton, linux-kernel, syzkaller-bugs

Hello,

syzbot has tested the proposed patch but the reproducer is still triggering an issue:
KASAN: use-after-free Read in filemap_map_pages

==================================================================
BUG: KASAN: use-after-free in ptep_get include/linux/pgtable.h:338 [inline]
BUG: KASAN: use-after-free in filemap_map_folio_range mm/filemap.c:3632 [inline]
BUG: KASAN: use-after-free in filemap_map_pages+0xefb/0x1aa0 mm/filemap.c:3753
Read of size 8 at addr ffff88807b524000 by task syz.0.16/6781

CPU: 1 UID: 0 PID: 6781 Comm: syz.0.16 Not tainted 6.13.0-rc3-next-20241220-syzkaller-05236-g8155b4ef3466-dirty #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
 print_address_description mm/kasan/report.c:378 [inline]
 print_report+0x169/0x550 mm/kasan/report.c:489
 kasan_report+0x143/0x180 mm/kasan/report.c:602
 ptep_get include/linux/pgtable.h:338 [inline]
 filemap_map_folio_range mm/filemap.c:3632 [inline]
 filemap_map_pages+0xefb/0x1aa0 mm/filemap.c:3753
 do_fault_around mm/memory.c:5351 [inline]
 do_read_fault mm/memory.c:5384 [inline]
 do_fault mm/memory.c:5527 [inline]
 do_pte_missing mm/memory.c:4048 [inline]
 handle_pte_fault+0x3888/0x5ee0 mm/memory.c:5890
 __handle_mm_fault mm/memory.c:6033 [inline]
 handle_mm_fault+0x11f5/0x1d50 mm/memory.c:6202
 faultin_page mm/gup.c:1196 [inline]
 __get_user_pages+0x1a92/0x4140 mm/gup.c:1491
 populate_vma_page_range+0x264/0x330 mm/gup.c:1929
 __mm_populate+0x27a/0x460 mm/gup.c:2032
 mm_populate include/linux/mm.h:3400 [inline]
 vm_mmap_pgoff+0x303/0x430 mm/util.c:585
 ksys_mmap_pgoff+0x4eb/0x720 mm/mmap.c:607
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fe41b185d29
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fe41bff6038 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
RAX: ffffffffffffffda RBX: 00007fe41b375fa0 RCX: 00007fe41b185d29
RDX: 0000000000000002 RSI: 0000000000b36000 RDI: 0000000020000000
RBP: 00007fe41b201b08 R08: 0000000000000004 R09: 0000000000000000
R10: 0000000000028011 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007fe41b375fa0 R15: 00007ffedf5c4578
 </TASK>

The buggy address belongs to the physical page:
page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x7b524
flags: 0xfff00000000000(node=0|zone=1|lastcpupid=0x7ff)
page_type: f0(buddy)
raw: 00fff00000000000 ffff88813fffbed0 ffffea000088e108 0000000000000000
raw: 0000000000000000 0000000000000002 00000000f0000000 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as freed
page last allocated via order 3, migratetype Unmovable, gfp_mask 0xd2040(__GFP_IO|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP|__GFP_NOMEMALLOC), pid 6034, tgid 6034 (dhcpcd-run-hook), ts 82273930287, free_ts 120498877039
 set_page_owner include/linux/page_owner.h:32 [inline]
 post_alloc_hook+0x1f4/0x240 mm/page_alloc.c:1551
 prep_new_page mm/page_alloc.c:1559 [inline]
 get_page_from_freelist+0x365c/0x37a0 mm/page_alloc.c:3477
 __alloc_frozen_pages_noprof+0x292/0x710 mm/page_alloc.c:4754
 alloc_pages_mpol+0x30e/0x550 mm/mempolicy.c:2270
 alloc_slab_page mm/slub.c:2423 [inline]
 allocate_slab+0x8f/0x3a0 mm/slub.c:2587
 new_slab mm/slub.c:2640 [inline]
 ___slab_alloc+0xc27/0x14a0 mm/slub.c:3826
 __slab_alloc+0x58/0xa0 mm/slub.c:3916
 __slab_alloc_node mm/slub.c:3991 [inline]
 slab_alloc_node mm/slub.c:4152 [inline]
 __do_kmalloc_node mm/slub.c:4293 [inline]
 __kmalloc_noprof+0x2e6/0x4c0 mm/slub.c:4306
 kmalloc_noprof include/linux/slab.h:905 [inline]
 tomoyo_realpath_from_path+0xcf/0x5e0 security/tomoyo/realpath.c:251
 tomoyo_get_realpath security/tomoyo/file.c:151 [inline]
 tomoyo_check_open_permission+0x258/0x4f0 security/tomoyo/file.c:771
 security_file_open+0xac/0x250 security/security.c:3114
 do_dentry_open+0x320/0x1960 fs/open.c:932
 vfs_open+0x3b/0x370 fs/open.c:1085
 do_open fs/namei.c:3828 [inline]
 path_openat+0x2c74/0x3580 fs/namei.c:3987
 do_filp_open+0x27f/0x4e0 fs/namei.c:4014
 do_sys_openat2+0x13e/0x1d0 fs/open.c:1427
page last free pid 6781 tgid 6780 stack trace:
 reset_page_owner include/linux/page_owner.h:25 [inline]
 free_pages_prepare mm/page_alloc.c:1127 [inline]
 free_frozen_pages+0xe0d/0x10e0 mm/page_alloc.c:2660
 discard_slab mm/slub.c:2684 [inline]
 __put_partials+0x160/0x1c0 mm/slub.c:3153
 put_cpu_partial+0x17c/0x250 mm/slub.c:3228
 __slab_free+0x290/0x380 mm/slub.c:4479
 qlink_free mm/kasan/quarantine.c:163 [inline]
 qlist_free_all+0x9a/0x140 mm/kasan/quarantine.c:179
 kasan_quarantine_reduce+0x14f/0x170 mm/kasan/quarantine.c:286
 __kasan_slab_alloc+0x23/0x80 mm/kasan/common.c:329
 kasan_slab_alloc include/linux/kasan.h:250 [inline]
 slab_post_alloc_hook mm/slub.c:4115 [inline]
 slab_alloc_node mm/slub.c:4164 [inline]
 kmem_cache_alloc_noprof+0x1d9/0x380 mm/slub.c:4171
 ptlock_alloc+0x20/0x70 mm/memory.c:7045
 ptlock_init include/linux/mm.h:2972 [inline]
 pagetable_pte_ctor include/linux/mm.h:2999 [inline]
 __pte_alloc_one_noprof include/asm-generic/pgalloc.h:73 [inline]
 pte_alloc_one+0xd3/0x510 arch/x86/mm/pgtable.c:41
 __pte_alloc+0x79/0x3c0 mm/memory.c:447
 do_anonymous_page mm/memory.c:4848 [inline]
 do_pte_missing mm/memory.c:4046 [inline]
 handle_pte_fault+0x4d4c/0x5ee0 mm/memory.c:5890
 __handle_mm_fault mm/memory.c:6033 [inline]
 handle_mm_fault+0x11f5/0x1d50 mm/memory.c:6202
 do_user_addr_fault arch/x86/mm/fault.c:1389 [inline]
 handle_page_fault arch/x86/mm/fault.c:1481 [inline]
 exc_page_fault+0x2b9/0x8b0 arch/x86/mm/fault.c:1539
 asm_exc_page_fault+0x26/0x30 arch/x86/include/asm/idtentry.h:623

Memory state around the buggy address:
 ffff88807b523f00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 ffff88807b523f80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>ffff88807b524000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
                   ^
 ffff88807b524080: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
 ffff88807b524100: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
==================================================================


Tested on:

commit:         8155b4ef Add linux-next specific files for 20241220
git tree:       linux-next
console output: https://syzkaller.appspot.com/x/log.txt?x=1328a6df980000
kernel config:  https://syzkaller.appspot.com/x/.config?x=9c90bb7161a56c88
dashboard link: https://syzkaller.appspot.com/bug?extid=c0673e1f1f054fac28c2
compiler:       Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
patch:          https://syzkaller.appspot.com/x/patch.diff?x=12ccaac4580000
* Re: [syzbot] [mm?] WARNING in __folio_rmap_sanity_checks (2)
  2024-12-28  4:56 ` syzbot
                   ` (4 preceding siblings ...)
  2024-12-30 10:40 ` Hillf Danton
@ 2024-12-30 11:17 ` Hillf Danton
  2024-12-30 11:49   ` syzbot
  2024-12-30 12:02 ` Hillf Danton
  2024-12-31  8:41 ` Hillf Danton
  7 siblings, 1 reply; 38+ messages in thread
From: Hillf Danton @ 2024-12-30 11:17 UTC (permalink / raw)
To: syzbot; +Cc: linux-kernel, syzkaller-bugs

On Fri, 27 Dec 2024 20:56:21 -0800
> syzbot has found a reproducer for the following issue on:
>
> HEAD commit: 8155b4ef3466 Add linux-next specific files for 20241220
> git tree: linux-next
> syz repro: https://syzkaller.appspot.com/x/repro.syz?x=1652fadf980000

#syz test

--- x/mm/filemap.c
+++ y/mm/filemap.c
@@ -3636,6 +3636,10 @@ static vm_fault_t filemap_map_folio_rang
 		continue;
 skip:
 		if (count) {
+			for (unsigned int i = 0; i < count; i++) {
+				if (page_folio(page + i) != folio)
+					goto out;
+			}
 			set_pte_range(vmf, folio, page, count, addr);
 			*rss += count;
 			folio_ref_add(folio, count);
@@ -3658,6 +3662,7 @@ skip:
 		ret = VM_FAULT_NOPAGE;
 	}
 
+out:
 	vmf->pte = old_ptep;
 
 	return ret;
@@ -3738,8 +3743,8 @@ vm_fault_t filemap_map_pages(struct vm_f
 		addr += (xas.xa_index - last_pgoff) << PAGE_SHIFT;
 		vmf->pte += xas.xa_index - last_pgoff;
 		last_pgoff = xas.xa_index;
-		end = folio_next_index(folio) - 1;
-		nr_pages = min(end, end_pgoff) - xas.xa_index + 1;
+		end = folio_next_index(folio);
+		nr_pages = min(end, end_pgoff) - xas.xa_index;
 
 		if (!folio_test_large(folio))
 			ret |= filemap_map_order0_folio(vmf,
--
* Re: [syzbot] [mm?] WARNING in __folio_rmap_sanity_checks (2)
  2024-12-30 11:17 ` Hillf Danton
@ 2024-12-30 11:49   ` syzbot
  0 siblings, 0 replies; 38+ messages in thread
From: syzbot @ 2024-12-30 11:49 UTC (permalink / raw)
To: hdanton, linux-kernel, syzkaller-bugs

Hello,

syzbot has tested the proposed patch but the reproducer is still triggering an issue:
KASAN: slab-use-after-free Read in filemap_map_pages

==================================================================
BUG: KASAN: slab-use-after-free in ptep_get include/linux/pgtable.h:338 [inline]
BUG: KASAN: slab-use-after-free in filemap_map_folio_range mm/filemap.c:3632 [inline]
BUG: KASAN: slab-use-after-free in filemap_map_pages+0xdba/0x1ab0 mm/filemap.c:3753
Read of size 8 at addr ffff888063e5c000 by task syz.0.22/6900

CPU: 0 UID: 0 PID: 6900 Comm: syz.0.22 Not tainted 6.13.0-rc3-next-20241220-syzkaller-05236-g8155b4ef3466-dirty #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
 print_address_description mm/kasan/report.c:378 [inline]
 print_report+0x169/0x550 mm/kasan/report.c:489
 kasan_report+0x143/0x180 mm/kasan/report.c:602
 ptep_get include/linux/pgtable.h:338 [inline]
 filemap_map_folio_range mm/filemap.c:3632 [inline]
 filemap_map_pages+0xdba/0x1ab0 mm/filemap.c:3753
 do_fault_around mm/memory.c:5351 [inline]
 do_read_fault mm/memory.c:5384 [inline]
 do_fault mm/memory.c:5527 [inline]
 do_pte_missing mm/memory.c:4048 [inline]
 handle_pte_fault+0x3888/0x5ee0 mm/memory.c:5890
 __handle_mm_fault mm/memory.c:6033 [inline]
 handle_mm_fault+0x11f5/0x1d50 mm/memory.c:6202
 faultin_page mm/gup.c:1196 [inline]
 __get_user_pages+0x1a92/0x4140 mm/gup.c:1491
 populate_vma_page_range+0x264/0x330 mm/gup.c:1929
 __mm_populate+0x27a/0x460 mm/gup.c:2032
 mm_populate include/linux/mm.h:3400 [inline]
 vm_mmap_pgoff+0x303/0x430 mm/util.c:585
 ksys_mmap_pgoff+0x4eb/0x720 mm/mmap.c:607
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fe194b85d29
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fe195969038 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
RAX: ffffffffffffffda RBX: 00007fe194d76080 RCX: 00007fe194b85d29
RDX: 0000000000000002 RSI: 0000000000b36000 RDI: 0000000020000000
RBP: 00007fe194c01b08 R08: 0000000000000004 R09: 0000000000000000
R10: 0000000000028011 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007fe194d76080 R15: 00007ffe2c0aa718
 </TASK>

Allocated by task 25:
 kasan_save_stack mm/kasan/common.c:47 [inline]
 kasan_save_track+0x3f/0x80 mm/kasan/common.c:68
 unpoison_slab_object mm/kasan/common.c:319 [inline]
 __kasan_slab_alloc+0x66/0x80 mm/kasan/common.c:345
 kasan_slab_alloc include/linux/kasan.h:250 [inline]
 slab_post_alloc_hook mm/slub.c:4115 [inline]
 slab_alloc_node mm/slub.c:4164 [inline]
 kmem_cache_alloc_noprof+0x1d9/0x380 mm/slub.c:4171
 dst_alloc+0x12b/0x190 net/core/dst.c:89
 ip6_dst_alloc net/ipv6/route.c:342 [inline]
 icmp6_dst_alloc+0x77/0x420 net/ipv6/route.c:3275
 mld_sendpack+0x6a3/0xdb0 net/ipv6/mcast.c:1849
 mld_send_cr net/ipv6/mcast.c:2161 [inline]
 mld_ifc_work+0x7d9/0xd90 net/ipv6/mcast.c:2695
 process_one_work kernel/workqueue.c:3229 [inline]
 process_scheduled_works+0xa66/0x1840 kernel/workqueue.c:3310
 worker_thread+0x870/0xd30 kernel/workqueue.c:3391
 kthread+0x7a9/0x920 kernel/kthread.c:464
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

Freed by task 35:
 kasan_save_stack mm/kasan/common.c:47 [inline]
 kasan_save_track+0x3f/0x80 mm/kasan/common.c:68
 kasan_save_free_info+0x40/0x50 mm/kasan/generic.c:576
 poison_slab_object mm/kasan/common.c:247 [inline]
 __kasan_slab_free+0x59/0x70 mm/kasan/common.c:264
 kasan_slab_free include/linux/kasan.h:233 [inline]
 slab_free_hook mm/slub.c:2353 [inline]
 slab_free mm/slub.c:4609 [inline]
 kmem_cache_free+0x195/0x410 mm/slub.c:4711
 dst_destroy+0x249/0x360 net/core/dst.c:121
 rcu_do_batch kernel/rcu/tree.c:2546 [inline]
 rcu_core+0xaaa/0x17a0 kernel/rcu/tree.c:2802
 handle_softirqs+0x2d4/0x9b0 kernel/softirq.c:561
 __do_softirq kernel/softirq.c:595 [inline]
 invoke_softirq kernel/softirq.c:435 [inline]
 __irq_exit_rcu+0xf7/0x220 kernel/softirq.c:662
 irq_exit_rcu+0x9/0x30 kernel/softirq.c:678
 instr_sysvec_irq_work arch/x86/kernel/irq_work.c:17 [inline]
 sysvec_irq_work+0xa3/0xc0 arch/x86/kernel/irq_work.c:17
 asm_sysvec_irq_work+0x1a/0x20 arch/x86/include/asm/idtentry.h:738

Last potentially related work creation:
 kasan_save_stack+0x3f/0x60 mm/kasan/common.c:47
 kasan_record_aux_stack+0xaa/0xc0 mm/kasan/generic.c:548
 __call_rcu_common kernel/rcu/tree.c:3065 [inline]
 call_rcu+0x168/0xac0 kernel/rcu/tree.c:3172
 refdst_drop include/net/dst.h:263 [inline]
 skb_dst_drop include/net/dst.h:275 [inline]
 __dev_queue_xmit+0x87b/0x3f50 net/core/dev.c:4389
 neigh_output include/net/neighbour.h:539 [inline]
 ip6_finish_output2+0x12ad/0x1780 net/ipv6/ip6_output.c:141
 ip6_finish_output+0x41e/0x840 net/ipv6/ip6_output.c:226
 NF_HOOK+0x9e/0x430 include/linux/netfilter.h:314
 mld_sendpack+0x843/0xdb0 net/ipv6/mcast.c:1860
 mld_send_cr net/ipv6/mcast.c:2161 [inline]
 mld_ifc_work+0x7d9/0xd90 net/ipv6/mcast.c:2695
 process_one_work kernel/workqueue.c:3229 [inline]
 process_scheduled_works+0xa66/0x1840 kernel/workqueue.c:3310
 worker_thread+0x870/0xd30 kernel/workqueue.c:3391
 kthread+0x7a9/0x920 kernel/kthread.c:464
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

The buggy address belongs to the object at ffff888063e5c000
 which belongs to the cache ip6_dst_cache of size 232
The buggy address is located 0 bytes inside of
 freed 232-byte region [ffff888063e5c000, ffff888063e5c0e8)

The buggy address belongs to the physical page:
page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x63e5c
memcg:ffff888063f79c01
flags: 0xfff00000000000(node=0|zone=1|lastcpupid=0x7ff)
page_type: f5(slab)
raw: 00fff00000000000 ffff88814ca24500 dead000000000122 0000000000000000
raw: 0000000000000000 00000000000c000c 00000000f5000000 ffff888063f79c01
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 0, migratetype Unmovable, gfp_mask 0x52820(GFP_ATOMIC|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP), pid 25, tgid 25 (kworker/1:0), ts 121780402212, free_ts 107625052788
 set_page_owner include/linux/page_owner.h:32 [inline]
 post_alloc_hook+0x1f4/0x240 mm/page_alloc.c:1551
 prep_new_page mm/page_alloc.c:1559 [inline]
 get_page_from_freelist+0x365c/0x37a0 mm/page_alloc.c:3477
 __alloc_frozen_pages_noprof+0x292/0x710 mm/page_alloc.c:4754
 alloc_pages_mpol+0x30e/0x550 mm/mempolicy.c:2270
 alloc_slab_page mm/slub.c:2423 [inline]
 allocate_slab+0x8f/0x3a0 mm/slub.c:2587
 new_slab mm/slub.c:2640 [inline]
 ___slab_alloc+0xc27/0x14a0 mm/slub.c:3826
 __slab_alloc+0x58/0xa0 mm/slub.c:3916
 __slab_alloc_node mm/slub.c:3991 [inline]
 slab_alloc_node mm/slub.c:4152 [inline]
 kmem_cache_alloc_noprof+0x268/0x380 mm/slub.c:4171
 dst_alloc+0x12b/0x190 net/core/dst.c:89
 ip6_dst_alloc net/ipv6/route.c:342 [inline]
 icmp6_dst_alloc+0x77/0x420 net/ipv6/route.c:3275
 mld_sendpack+0x6a3/0xdb0 net/ipv6/mcast.c:1849
 mld_send_cr net/ipv6/mcast.c:2161 [inline]
 mld_ifc_work+0x7d9/0xd90 net/ipv6/mcast.c:2695
 process_one_work kernel/workqueue.c:3229 [inline]
 process_scheduled_works+0xa66/0x1840 kernel/workqueue.c:3310
 worker_thread+0x870/0xd30 kernel/workqueue.c:3391
 kthread+0x7a9/0x920 kernel/kthread.c:464
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:148
page last free pid 6533 tgid 6533 stack trace:
 reset_page_owner include/linux/page_owner.h:25 [inline]
 free_pages_prepare mm/page_alloc.c:1127 [inline]
 free_frozen_pages+0xe0d/0x10e0 mm/page_alloc.c:2660
 vfree+0x1c3/0x360 mm/vmalloc.c:3383
 kcov_put kernel/kcov.c:439 [inline]
 kcov_close+0x28/0x50 kernel/kcov.c:535
 __fput+0x3e9/0x9f0 fs/file_table.c:450
 task_work_run+0x24f/0x310 kernel/task_work.c:227
 exit_task_work include/linux/task_work.h:40 [inline]
 do_exit+0xa2f/0x28e0 kernel/exit.c:938
 do_group_exit+0x207/0x2c0 kernel/exit.c:1087
 get_signal+0x16b2/0x1750 kernel/signal.c:3017
 arch_do_signal_or_restart+0x96/0x860 arch/x86/kernel/signal.c:337
 exit_to_user_mode_loop kernel/entry/common.c:111 [inline]
 exit_to_user_mode_prepare include/linux/entry-common.h:329 [inline]
 __syscall_exit_to_user_mode_work kernel/entry/common.c:207 [inline]
 syscall_exit_to_user_mode+0xce/0x340 kernel/entry/common.c:218
 do_syscall_64+0x100/0x230 arch/x86/entry/common.c:89
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

Memory state around the buggy address:
 ffff888063e5bf00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 ffff888063e5bf80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>ffff888063e5c000: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                   ^
 ffff888063e5c080: fb fb fb fb fb fb fb fb fb fb fb fb fb fc fc fc
 ffff888063e5c100: fc fc fc fc fc fc fc fc fa fb fb fb fb fb fb fb
==================================================================


Tested on:

commit:         8155b4ef Add linux-next specific files for 20241220
git tree:       linux-next
console output: https://syzkaller.appspot.com/x/log.txt?x=1454550f980000
kernel config:  https://syzkaller.appspot.com/x/.config?x=9c90bb7161a56c88
dashboard link: https://syzkaller.appspot.com/bug?extid=c0673e1f1f054fac28c2
compiler:       Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
patch:          https://syzkaller.appspot.com/x/patch.diff?x=142ef0b0580000
* Re: [syzbot] [mm?] WARNING in __folio_rmap_sanity_checks (2)
  2024-12-28  4:56 ` syzbot
                   ` (5 preceding siblings ...)
  2024-12-30 11:17 ` Hillf Danton
@ 2024-12-30 12:02 ` Hillf Danton
  2024-12-30 12:20   ` syzbot
  2024-12-31  8:41 ` Hillf Danton
  7 siblings, 1 reply; 38+ messages in thread
From: Hillf Danton @ 2024-12-30 12:02 UTC (permalink / raw)
To: syzbot; +Cc: linux-kernel, syzkaller-bugs

On Fri, 27 Dec 2024 20:56:21 -0800
> syzbot has found a reproducer for the following issue on:
>
> HEAD commit: 8155b4ef3466 Add linux-next specific files for 20241220
> git tree: linux-next
> syz repro: https://syzkaller.appspot.com/x/repro.syz?x=1652fadf980000

#syz test

--- x/mm/filemap.c
+++ y/mm/filemap.c
@@ -3636,6 +3636,10 @@ static vm_fault_t filemap_map_folio_rang
 		continue;
 skip:
 		if (count) {
+			for (unsigned int i = 0; i < count; i++) {
+				if (page_folio(page + i) != folio)
+					goto out;
+			}
 			set_pte_range(vmf, folio, page, count, addr);
 			*rss += count;
 			folio_ref_add(folio, count);
@@ -3658,6 +3662,7 @@ skip:
 		ret = VM_FAULT_NOPAGE;
 	}
 
+out:
 	vmf->pte = old_ptep;
 
 	return ret;
@@ -3738,8 +3743,8 @@ vm_fault_t filemap_map_pages(struct vm_f
 		addr += (xas.xa_index - last_pgoff) << PAGE_SHIFT;
 		vmf->pte += xas.xa_index - last_pgoff;
 		last_pgoff = xas.xa_index;
-		end = folio_next_index(folio) - 1;
-		nr_pages = min(end, end_pgoff) - xas.xa_index + 1;
+		end = folio_next_index(folio) -1;
+		nr_pages = min(end, end_pgoff) - xas.xa_index;
 
 		if (!folio_test_large(folio))
 			ret |= filemap_map_order0_folio(vmf,
--
* Re: [syzbot] [mm?] WARNING in __folio_rmap_sanity_checks (2)
  2024-12-30 12:02 ` Hillf Danton
@ 2024-12-30 12:20   ` syzbot
  0 siblings, 0 replies; 38+ messages in thread
From: syzbot @ 2024-12-30 12:20 UTC (permalink / raw)
To: hdanton, linux-kernel, syzkaller-bugs

Hello,

syzbot has tested the proposed patch but the reproducer is still triggering an issue:
KASAN: slab-use-after-free Read in filemap_map_pages

XFS (loop4): Quotacheck: Done.
==================================================================
BUG: KASAN: slab-use-after-free in ptep_get include/linux/pgtable.h:338 [inline]
BUG: KASAN: slab-use-after-free in filemap_map_folio_range mm/filemap.c:3632 [inline]
BUG: KASAN: slab-use-after-free in filemap_map_pages+0xdbe/0x1ab0 mm/filemap.c:3753
Read of size 8 at addr ffff8880612ed000 by task syz.4.28/7015

CPU: 1 UID: 0 PID: 7015 Comm: syz.4.28 Not tainted 6.13.0-rc3-next-20241220-syzkaller-05236-g8155b4ef3466-dirty #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
 print_address_description mm/kasan/report.c:378 [inline]
 print_report+0x169/0x550 mm/kasan/report.c:489
 kasan_report+0x143/0x180 mm/kasan/report.c:602
 ptep_get include/linux/pgtable.h:338 [inline]
 filemap_map_folio_range mm/filemap.c:3632 [inline]
 filemap_map_pages+0xdbe/0x1ab0 mm/filemap.c:3753
 do_fault_around mm/memory.c:5351 [inline]
 do_read_fault mm/memory.c:5384 [inline]
 do_fault mm/memory.c:5527 [inline]
 do_pte_missing mm/memory.c:4048 [inline]
 handle_pte_fault+0x3888/0x5ee0 mm/memory.c:5890
 __handle_mm_fault mm/memory.c:6033 [inline]
 handle_mm_fault+0x11f5/0x1d50 mm/memory.c:6202
 faultin_page mm/gup.c:1196 [inline]
 __get_user_pages+0x1a92/0x4140 mm/gup.c:1491
 populate_vma_page_range+0x264/0x330 mm/gup.c:1929
 __mm_populate+0x27a/0x460 mm/gup.c:2032
 mm_populate include/linux/mm.h:3400 [inline]
 vm_mmap_pgoff+0x303/0x430 mm/util.c:585
 ksys_mmap_pgoff+0x4eb/0x720 mm/mmap.c:607
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f3a46985d29
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f3a47799038 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
RAX: ffffffffffffffda RBX: 00007f3a46b75fa0 RCX: 00007f3a46985d29
RDX: 0000000000000002 RSI: 0000000000b36000 RDI: 0000000020000000
RBP: 00007f3a46a01b08 R08: 0000000000000004 R09: 0000000000000000
R10: 0000000000028011 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007f3a46b75fa0 R15: 00007ffd9346e698
 </TASK>

Allocated by task 6993:
 kasan_save_stack mm/kasan/common.c:47 [inline]
 kasan_save_track+0x3f/0x80 mm/kasan/common.c:68
 poison_kmalloc_redzone mm/kasan/common.c:377 [inline]
 __kasan_kmalloc+0x98/0xb0 mm/kasan/common.c:394
 kasan_kmalloc include/linux/kasan.h:260 [inline]
 __do_kmalloc_node mm/slub.c:4294 [inline]
 __kmalloc_noprof+0x285/0x4c0 mm/slub.c:4306
 kmalloc_noprof include/linux/slab.h:905 [inline]
 kzalloc_noprof include/linux/slab.h:1037 [inline]
 tomoyo_encode2 security/tomoyo/realpath.c:45 [inline]
 tomoyo_encode+0x26f/0x540 security/tomoyo/realpath.c:80
 tomoyo_realpath_from_path+0x59e/0x5e0 security/tomoyo/realpath.c:283
 tomoyo_get_realpath security/tomoyo/file.c:151 [inline]
 tomoyo_path_perm+0x2b7/0x740 security/tomoyo/file.c:822
 security_inode_getattr+0x130/0x330 security/security.c:2377
 vfs_getattr+0x2a/0x3a0 fs/stat.c:243
 vfs_fstat fs/stat.c:265 [inline]
 vfs_fstatat+0xa8/0x130 fs/stat.c:364
 __do_sys_newfstatat fs/stat.c:530 [inline]
 __se_sys_newfstatat fs/stat.c:524 [inline]
 __x64_sys_newfstatat+0x11d/0x1a0 fs/stat.c:524
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

Freed by task 6993:
 kasan_save_stack mm/kasan/common.c:47 [inline]
 kasan_save_track+0x3f/0x80 mm/kasan/common.c:68
 kasan_save_free_info+0x40/0x50 mm/kasan/generic.c:576
 poison_slab_object mm/kasan/common.c:247 [inline]
 __kasan_slab_free+0x59/0x70 mm/kasan/common.c:264
 kasan_slab_free include/linux/kasan.h:233 [inline]
 slab_free_hook mm/slub.c:2353 [inline]
 slab_free mm/slub.c:4609 [inline]
 kfree+0x196/0x430 mm/slub.c:4757
 tomoyo_path_perm+0x59c/0x740 security/tomoyo/file.c:842
 security_inode_getattr+0x130/0x330 security/security.c:2377
 vfs_getattr+0x2a/0x3a0 fs/stat.c:243
 vfs_fstat fs/stat.c:265 [inline]
 vfs_fstatat+0xa8/0x130 fs/stat.c:364
 __do_sys_newfstatat fs/stat.c:530 [inline]
 __se_sys_newfstatat fs/stat.c:524 [inline]
 __x64_sys_newfstatat+0x11d/0x1a0 fs/stat.c:524
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

The buggy address belongs to the object at ffff8880612ed000
 which belongs to the cache kmalloc-64 of size 64
The buggy address is located 0 bytes inside of
 freed 64-byte region [ffff8880612ed000, ffff8880612ed040)

The buggy address belongs to the physical page:
page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x612ed
anon flags: 0xfff00000000000(node=0|zone=1|lastcpupid=0x7ff)
page_type: f5(slab)
raw: 00fff00000000000 ffff88801ac418c0 0000000000000000 dead000000000001
raw: 0000000000000000 0000000000200020 00000000f5000000 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 0, migratetype Unmovable, gfp_mask 0x52cc0(GFP_KERNEL|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP), pid 6630, tgid 6630 (syz-executor), ts 112518327578, free_ts 108816210359
 set_page_owner include/linux/page_owner.h:32 [inline]
 post_alloc_hook+0x1f4/0x240 mm/page_alloc.c:1551
 prep_new_page mm/page_alloc.c:1559 [inline]
 get_page_from_freelist+0x365c/0x37a0 mm/page_alloc.c:3477
 __alloc_frozen_pages_noprof+0x292/0x710 mm/page_alloc.c:4754
 alloc_pages_mpol+0x30e/0x550 mm/mempolicy.c:2270
 alloc_slab_page mm/slub.c:2423 [inline]
 allocate_slab+0x8f/0x3a0 mm/slub.c:2587
 new_slab mm/slub.c:2640 [inline]
 ___slab_alloc+0xc27/0x14a0 mm/slub.c:3826
 __slab_alloc+0x58/0xa0 mm/slub.c:3916
 __slab_alloc_node mm/slub.c:3991 [inline]
 slab_alloc_node mm/slub.c:4152 [inline]
 __do_kmalloc_node mm/slub.c:4293 [inline]
 __kmalloc_noprof+0x2e6/0x4c0 mm/slub.c:4306
 kmalloc_noprof include/linux/slab.h:905 [inline]
 kzalloc_noprof include/linux/slab.h:1037 [inline]
 kobject_get_path+0xb8/0x230 lib/kobject.c:161
 kobject_uevent_env+0x2a5/0x8e0 lib/kobject_uevent.c:545
 netdev_queue_add_kobject net/core/net-sysfs.c:1800 [inline]
 netdev_queue_update_kobjects+0x28d/0x550 net/core/net-sysfs.c:1841
 register_queue_kobjects net/core/net-sysfs.c:1903 [inline]
 netdev_register_kobject+0x234/0x2e0 net/core/net-sysfs.c:2143
 register_netdevice+0x12c5/0x1b00 net/core/dev.c:10599
 veth_newlink+0x3fd/0xb00 drivers/net/veth.c:1815
 rtnl_newlink_create+0x2ee/0xa40 net/core/rtnetlink.c:3786
 __rtnl_newlink net/core/rtnetlink.c:3897 [inline]
 rtnl_newlink+0x1c7e/0x2210 net/core/rtnetlink.c:4012
page last free pid 5208 tgid 5208 stack trace:
 reset_page_owner include/linux/page_owner.h:25 [inline]
 free_pages_prepare mm/page_alloc.c:1127 [inline]
 free_frozen_pages+0xe0d/0x10e0 mm/page_alloc.c:2660
 __slab_free+0x2c2/0x380 mm/slub.c:4520
 qlink_free mm/kasan/quarantine.c:163 [inline]
 qlist_free_all+0x9a/0x140 mm/kasan/quarantine.c:179
 kasan_quarantine_reduce+0x14f/0x170 mm/kasan/quarantine.c:286
 __kasan_slab_alloc+0x23/0x80 mm/kasan/common.c:329
 kasan_slab_alloc include/linux/kasan.h:250 [inline]
 slab_post_alloc_hook mm/slub.c:4115 [inline]
 slab_alloc_node mm/slub.c:4164 [inline]
 kmem_cache_alloc_lru_noprof+0x1dd/0x390 mm/slub.c:4183
 __d_alloc+0x31/0x700 fs/dcache.c:1646
 d_alloc fs/dcache.c:1726 [inline]
 d_alloc_parallel+0xdf/0x1600
fs/dcache.c:2490 lookup_open fs/namei.c:3571 [inline] open_last_lookups fs/namei.c:3748 [inline] path_openat+0x9e6/0x3580 fs/namei.c:3984 do_filp_open+0x27f/0x4e0 fs/namei.c:4014 do_sys_openat2+0x13e/0x1d0 fs/open.c:1427 do_sys_open fs/open.c:1442 [inline] __do_sys_openat fs/open.c:1458 [inline] __se_sys_openat fs/open.c:1453 [inline] __x64_sys_openat+0x247/0x2a0 fs/open.c:1453 do_syscall_x64 arch/x86/entry/common.c:52 [inline] do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83 entry_SYSCALL_64_after_hwframe+0x77/0x7f Memory state around the buggy address: ffff8880612ecf00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ffff8880612ecf80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 >ffff8880612ed000: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc ^ ffff8880612ed080: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc ffff8880612ed100: 00 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc ================================================================== Tested on: commit: 8155b4ef Add linux-next specific files for 20241220 git tree: linux-next console output: https://syzkaller.appspot.com/x/log.txt?x=15f4a6df980000 kernel config: https://syzkaller.appspot.com/x/.config?x=9c90bb7161a56c88 dashboard link: https://syzkaller.appspot.com/bug?extid=c0673e1f1f054fac28c2 compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40 patch: https://syzkaller.appspot.com/x/patch.diff?x=17f4550f980000 ^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [syzbot] [mm?] WARNING in __folio_rmap_sanity_checks (2)
  2024-12-28  4:56 ` syzbot
                   ` (6 preceding siblings ...)
  2024-12-30 12:02 ` Hillf Danton
@ 2024-12-31  8:41   ` Hillf Danton
  2024-12-31  9:09     ` syzbot
  2025-01-10 16:35     ` David Hildenbrand
  7 siblings, 2 replies; 38+ messages in thread
From: Hillf Danton @ 2024-12-31 8:41 UTC (permalink / raw)
  To: syzbot; +Cc: linux-mm, linux-kernel, syzkaller-bugs

On Fri, 27 Dec 2024 20:56:21 -0800
> syzbot has found a reproducer for the following issue on:
>
> HEAD commit:    8155b4ef3466 Add linux-next specific files for 20241220
> git tree:       linux-next
> syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=1652fadf980000

#syz test

--- x/mm/filemap.c
+++ y/mm/filemap.c
@@ -3636,6 +3636,10 @@ static vm_fault_t filemap_map_folio_rang
 		continue;
 skip:
 	if (count) {
+		for (unsigned int i = 0; i < count; i++) {
+			if (page_folio(page + i) != folio)
+				goto out;
+		}
 		set_pte_range(vmf, folio, page, count, addr);
 		*rss += count;
 		folio_ref_add(folio, count);
@@ -3658,6 +3662,7 @@ skip:
 		ret = VM_FAULT_NOPAGE;
 	}
 
+out:
 	vmf->pte = old_ptep;
 
 	return ret;
@@ -3702,7 +3707,7 @@ vm_fault_t filemap_map_pages(struct vm_f
 	struct file *file = vma->vm_file;
 	struct address_space *mapping = file->f_mapping;
 	pgoff_t file_end, last_pgoff = start_pgoff;
-	unsigned long addr;
+	unsigned long addr, pmd_end;
 	XA_STATE(xas, &mapping->i_pages, start_pgoff);
 	struct folio *folio;
 	vm_fault_t ret = 0;
@@ -3731,6 +3736,12 @@ vm_fault_t filemap_map_pages(struct vm_f
 		if (end_pgoff > file_end)
 			end_pgoff = file_end;
 
+		/* make vmf->pte[x] valid */
+		pmd_end = ALIGN(addr, PMD_SIZE);
+		pmd_end = (pmd_end - addr) >> PAGE_SHIFT;
+		if (end_pgoff - start_pgoff > pmd_end)
+			end_pgoff = start_pgoff + pmd_end;
+
 		folio_type = mm_counter_file(folio);
 		do {
 			unsigned long end;
--

^ permalink raw reply	[flat|nested] 38+ messages in thread
* Re: [syzbot] [mm?] WARNING in __folio_rmap_sanity_checks (2)
  2024-12-31  8:41 ` Hillf Danton
@ 2024-12-31  9:09   ` syzbot
  2025-01-10 16:35   ` David Hildenbrand
  1 sibling, 0 replies; 38+ messages in thread
From: syzbot @ 2024-12-31 9:09 UTC (permalink / raw)
  To: hdanton, linux-kernel, linux-mm, syzkaller-bugs

Hello,

syzbot has tested the proposed patch and the reproducer did not trigger any issue:

Reported-by: syzbot+c0673e1f1f054fac28c2@syzkaller.appspotmail.com
Tested-by: syzbot+c0673e1f1f054fac28c2@syzkaller.appspotmail.com

Tested on:

commit:         8155b4ef Add linux-next specific files for 20241220
git tree:       linux-next
console output: https://syzkaller.appspot.com/x/log.txt?x=175f88b0580000
kernel config:  https://syzkaller.appspot.com/x/.config?x=9c90bb7161a56c88
dashboard link: https://syzkaller.appspot.com/bug?extid=c0673e1f1f054fac28c2
compiler:       Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
patch:          https://syzkaller.appspot.com/x/patch.diff?x=178ee6df980000

Note: testing is done by a robot and is best-effort only.

^ permalink raw reply	[flat|nested] 38+ messages in thread
* Re: [syzbot] [mm?] WARNING in __folio_rmap_sanity_checks (2)
  2024-12-31  8:41 ` Hillf Danton
  2024-12-31  9:09   ` syzbot
@ 2025-01-10 16:35   ` David Hildenbrand
  2025-01-11  1:00     ` Hillf Danton
  1 sibling, 1 reply; 38+ messages in thread
From: David Hildenbrand @ 2025-01-10 16:35 UTC (permalink / raw)
  To: Hillf Danton, syzbot; +Cc: linux-mm, linux-kernel, syzkaller-bugs

On 31.12.24 09:41, Hillf Danton wrote:
> On Fri, 27 Dec 2024 20:56:21 -0800
>> syzbot has found a reproducer for the following issue on:
>>
>> HEAD commit:    8155b4ef3466 Add linux-next specific files for 20241220
>> git tree:       linux-next
>> syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=1652fadf980000
>
> #syz test
>
> --- x/mm/filemap.c
> +++ y/mm/filemap.c
> @@ -3636,6 +3636,10 @@ static vm_fault_t filemap_map_folio_rang
>  		continue;
>  skip:
>  	if (count) {
> +		for (unsigned int i = 0; i < count; i++) {
> +			if (page_folio(page + i) != folio)
> +				goto out;
> +		}

IIRC, count <= nr_pages. Wouldn't that mean that we somehow pass in
nr_pages that already exceeds the given folio+start?

When I last looked at this, I was not able to spot the error in the
caller :(

>  		set_pte_range(vmf, folio, page, count, addr);
>  		*rss += count;
>  		folio_ref_add(folio, count);
> @@ -3658,6 +3662,7 @@ skip:
>  		ret = VM_FAULT_NOPAGE;
>  	}
>
> +out:
>  	vmf->pte = old_ptep;
>
>  	return ret;
> @@ -3702,7 +3707,7 @@ vm_fault_t filemap_map_pages(struct vm_f
>  	struct file *file = vma->vm_file;
>  	struct address_space *mapping = file->f_mapping;
>  	pgoff_t file_end, last_pgoff = start_pgoff;
> -	unsigned long addr;
> +	unsigned long addr, pmd_end;
>  	XA_STATE(xas, &mapping->i_pages, start_pgoff);
>  	struct folio *folio;
>  	vm_fault_t ret = 0;
> @@ -3731,6 +3736,12 @@ vm_fault_t filemap_map_pages(struct vm_f
>  	if (end_pgoff > file_end)
>  		end_pgoff = file_end;
>
> +	/* make vmf->pte[x] valid */
> +	pmd_end = ALIGN(addr, PMD_SIZE);
> +	pmd_end = (pmd_end - addr) >> PAGE_SHIFT;
> +	if (end_pgoff - start_pgoff > pmd_end)
> +		end_pgoff = start_pgoff + pmd_end;
> +

do_fault_around() comments "This way it's easier to guarantee that we
don't cross page table boundaries."

It does some magic with PTRS_PER_PTE.

Your diff here seems to indicate that this is not the case?

But it's rather surprising that we see these issues pop up just now in
-next.

-- 
Cheers,

David / dhildenb

^ permalink raw reply	[flat|nested] 38+ messages in thread
* Re: [syzbot] [mm?] WARNING in __folio_rmap_sanity_checks (2)
  2025-01-10 16:35 ` David Hildenbrand
@ 2025-01-11  1:00   ` Hillf Danton
  2025-01-11 10:03     ` David Hildenbrand
  0 siblings, 1 reply; 38+ messages in thread
From: Hillf Danton @ 2025-01-11 1:00 UTC (permalink / raw)
  To: David Hildenbrand; +Cc: syzbot, linux-mm, linux-kernel, syzkaller-bugs

On Fri, 10 Jan 2025 17:35:25 +0100 David Hildenbrand <david@redhat.com>
> On 31.12.24 09:41, Hillf Danton wrote:
> > On Fri, 27 Dec 2024 20:56:21 -0800
> >> syzbot has found a reproducer for the following issue on:
> >>
> >> HEAD commit:    8155b4ef3466 Add linux-next specific files for 20241220
> >> git tree:       linux-next
> >> syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=1652fadf980000
> >
> > #syz test
> >
> > --- x/mm/filemap.c
> > +++ y/mm/filemap.c
> > @@ -3636,6 +3636,10 @@ static vm_fault_t filemap_map_folio_rang
> >  		continue;
> >  skip:
> >  	if (count) {
> > +		for (unsigned int i = 0; i < count; i++) {
> > +			if (page_folio(page + i) != folio)
> > +				goto out;
> > +		}
> 
> IIRC, count <= nr_pages. Wouldn't that mean that we somehow pass in
> nr_pages that already exceeds the given folio+start?
> 
> When I last looked at this, I was not able to spot the error in the
> caller :(
> 
This is a debug patch in the first place, and this hunk overlaps with the
next one.

> >  		set_pte_range(vmf, folio, page, count, addr);
> >  		*rss += count;
> >  		folio_ref_add(folio, count);
> > @@ -3658,6 +3662,7 @@ skip:
> >  		ret = VM_FAULT_NOPAGE;
> >  	}
> >
> > +out:
> >  	vmf->pte = old_ptep;
> >
> >  	return ret;
> > @@ -3702,7 +3707,7 @@ vm_fault_t filemap_map_pages(struct vm_f
> >  	struct file *file = vma->vm_file;
> >  	struct address_space *mapping = file->f_mapping;
> >  	pgoff_t file_end, last_pgoff = start_pgoff;
> > -	unsigned long addr;
> > +	unsigned long addr, pmd_end;
> >  	XA_STATE(xas, &mapping->i_pages, start_pgoff);
> >  	struct folio *folio;
> >  	vm_fault_t ret = 0;
> > @@ -3731,6 +3736,12 @@ vm_fault_t filemap_map_pages(struct vm_f
> >  	if (end_pgoff > file_end)
> >  		end_pgoff = file_end;
> >
> > +	/* make vmf->pte[x] valid */
> > +	pmd_end = ALIGN(addr, PMD_SIZE);
> > +	pmd_end = (pmd_end - addr) >> PAGE_SHIFT;
> > +	if (end_pgoff - start_pgoff > pmd_end)
> > +		end_pgoff = start_pgoff + pmd_end;
> > +
> 
> do_fault_around() comments "This way it's easier to guarantee that we
> don't cross page table boundaries."
> 
> It does some magic with PTRS_PER_PTE.
> 
> Your diff here seems to indicate that this is not the case?
> 
> But it's rather surprising that we see these issues pop up just now in
> -next.
> 
Given double check [1], I lean toward thinking this is a simple OOB issue.

[1] https://lore.kernel.org/all/6774eca1.050a0220.25abdd.09b2.GAE@google.com/

^ permalink raw reply	[flat|nested] 38+ messages in thread
* Re: [syzbot] [mm?] WARNING in __folio_rmap_sanity_checks (2)
  2025-01-11  1:00 ` Hillf Danton
@ 2025-01-11 10:03   ` David Hildenbrand
  0 siblings, 0 replies; 38+ messages in thread
From: David Hildenbrand @ 2025-01-11 10:03 UTC (permalink / raw)
  To: Hillf Danton; +Cc: syzbot, linux-mm, linux-kernel, syzkaller-bugs

On 11.01.25 02:00, Hillf Danton wrote:
> On Fri, 10 Jan 2025 17:35:25 +0100 David Hildenbrand <david@redhat.com>
>> On 31.12.24 09:41, Hillf Danton wrote:
>>> On Fri, 27 Dec 2024 20:56:21 -0800
>>>> syzbot has found a reproducer for the following issue on:
>>>>
>>>> HEAD commit:    8155b4ef3466 Add linux-next specific files for 20241220
>>>> git tree:       linux-next
>>>> syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=1652fadf980000
>>>
>>> #syz test
>>>
>>> --- x/mm/filemap.c
>>> +++ y/mm/filemap.c
>>> @@ -3636,6 +3636,10 @@ static vm_fault_t filemap_map_folio_rang
>>>   		continue;
>>>   skip:
>>>   	if (count) {
>>> +		for (unsigned int i = 0; i < count; i++) {
>>> +			if (page_folio(page + i) != folio)
>>> +				goto out;
>>> +		}
>>
>> IIRC, count <= nr_pages. Wouldn't that mean that we somehow pass in
>> nr_pages that already exceeds the given folio+start?
>>
>> When I last looked at this, I was not able to spot the error in the
>> caller :(
>>
> This is a debug patch in the first place, and this hunk overlaps with the
> next one.

Yeah, I was rather wondering if you had any clue why that hunk might
help on its own.

-- 
Cheers,

David / dhildenb

^ permalink raw reply	[flat|nested] 38+ messages in thread
* Re: [syzbot] [mm?] WARNING in __folio_rmap_sanity_checks (2)
  2024-12-11  1:54 [syzbot] [mm?] WARNING in __folio_rmap_sanity_checks (2) syzbot
  2024-12-11 10:06 ` David Hildenbrand
  2024-12-28  4:56 ` syzbot
@ 2024-12-28 12:25 ` syzbot
  2025-01-10 15:48   ` David Hildenbrand
  2025-01-10 21:03   ` Liam R. Howlett
  2 siblings, 2 replies; 38+ messages in thread
From: syzbot @ 2024-12-28 12:25 UTC (permalink / raw)
  To: akpm, david, hdanton, linux-kernel, linux-mm, syzkaller-bugs, willy

syzbot has found a reproducer for the following issue on:

HEAD commit:    8155b4ef3466 Add linux-next specific files for 20241220
git tree:       linux-next
console output: https://syzkaller.appspot.com/x/log.txt?x=1661050f980000
kernel config:  https://syzkaller.appspot.com/x/.config?x=9c90bb7161a56c88
dashboard link: https://syzkaller.appspot.com/bug?extid=c0673e1f1f054fac28c2
compiler:       Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=17438af8580000
C reproducer:   https://syzkaller.appspot.com/x/repro.c?x=101006df980000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/98a974fc662d/disk-8155b4ef.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/2dea9b72f624/vmlinux-8155b4ef.xz
kernel image: https://storage.googleapis.com/syzbot-assets/593a42b9eb34/bzImage-8155b4ef.xz
mounted in repro: https://storage.googleapis.com/syzbot-assets/5f780361c9ef/mount_0.gz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+c0673e1f1f054fac28c2@syzkaller.appspotmail.com

 xfs_vn_setattr+0x25d/0x320 fs/xfs/xfs_iops.c:1065
 notify_change+0xbca/0xe90 fs/attr.c:552
 do_truncate+0x220/0x310 fs/open.c:65
 do_ftruncate+0x4a1/0x540 fs/open.c:192
 do_sys_ftruncate fs/open.c:207 [inline]
 __do_sys_ftruncate fs/open.c:212 [inline]
 __se_sys_ftruncate fs/open.c:210 [inline]
 __x64_sys_ftruncate+0x94/0xf0 fs/open.c:210
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
------------[ cut here ]------------
WARNING: CPU: 1 PID: 11276 at ./include/linux/rmap.h:217 __folio_rmap_sanity_checks+0x369/0x590 include/linux/rmap.h:217
Modules linked in:
CPU: 1 UID: 0 PID: 11276 Comm: syz-executor139 Not tainted 6.13.0-rc3-next-20241220-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
RIP: 0010:__folio_rmap_sanity_checks+0x369/0x590 include/linux/rmap.h:217
Code: 0f 0b 90 e9 e9 fd ff ff e8 64 cb ab ff 48 ff cb e9 34 fe ff ff e8 57 cb ab ff 4c 89 e7 48 c7 c6 e0 a7 15 8c e8 08 a4 f5 ff 90 <0f> 0b 90 e9 25 fe ff ff e8 3a cb ab ff 4c 89 e7 48 c7 c6 40 a9 15
RSP: 0018:ffffc9000e67efd8 EFLAGS: 00010246
RAX: 8577b516ce8a9400 RBX: ffffea0001a58080 RCX: ffffc9000e67eb03
RDX: 0000000000000005 RSI: ffffffff8c0aaba0 RDI: ffffffff8c5fed00
RBP: 00000000000024c0 R08: ffffffff901ab1f7 R09: 1ffffffff203563e
R10: dffffc0000000000 R11: fffffbfff203563f R12: ffffea0001a50000
R13: ffffea0001a55c00 R14: 0000000000000000 R15: 0000000000000093
FS:  00007f885c85f6c0(0000) GS:ffff8880b8700000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f88545b7000 CR3: 000000007fea2000 CR4: 00000000003526f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <TASK>
 __folio_add_rmap mm/rmap.c:1170 [inline]
 __folio_add_file_rmap mm/rmap.c:1489 [inline]
 folio_add_file_rmap_ptes+0x82/0x380 mm/rmap.c:1511
 set_pte_range+0x30c/0x750 mm/memory.c:5136
 filemap_map_folio_range mm/filemap.c:3639 [inline]
 filemap_map_pages+0xfbe/0x1900 mm/filemap.c:3748
 do_fault_around mm/memory.c:5351 [inline]
 do_read_fault mm/memory.c:5384 [inline]
 do_fault mm/memory.c:5527 [inline]
 do_pte_missing mm/memory.c:4048 [inline]
 handle_pte_fault+0x3888/0x5ee0 mm/memory.c:5890
 __handle_mm_fault mm/memory.c:6033 [inline]
 handle_mm_fault+0x11f5/0x1d50 mm/memory.c:6202
 faultin_page mm/gup.c:1196 [inline]
 __get_user_pages+0x1a92/0x4140 mm/gup.c:1491
 populate_vma_page_range+0x264/0x330 mm/gup.c:1929
 __mm_populate+0x27a/0x460 mm/gup.c:2032
 mm_populate include/linux/mm.h:3400 [inline]
 vm_mmap_pgoff+0x303/0x430 mm/util.c:585
 ksys_mmap_pgoff+0x4eb/0x720 mm/mmap.c:607
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f885c8d20f9
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 41 1d 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f885c85f208 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
RAX: ffffffffffffffda RBX: 00007f885c95c6d8 RCX: 00007f885c8d20f9
RDX: 0000000000000002 RSI: 0000000000b36000 RDI: 0000000020000000
RBP: 00007f885c95c6d0 R08: 0000000000000004 R09: 0000000000000000
R10: 0000000000028011 R11: 0000000000000246 R12: 00007f885c928908
R13: 00746e6572727563 R14: 632e79726f6d656d R15: 6d766b2f7665642f
 </TASK>

---
If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.

^ permalink raw reply	[flat|nested] 38+ messages in thread
* Re: [syzbot] [mm?] WARNING in __folio_rmap_sanity_checks (2)
  2024-12-28 12:25 ` syzbot
@ 2025-01-10 15:48   ` David Hildenbrand
  2025-01-10 16:14     ` Matthew Wilcox
  1 sibling, 1 reply; 38+ messages in thread
From: David Hildenbrand @ 2025-01-10 15:48 UTC (permalink / raw)
  To: syzbot, akpm, hdanton, linux-kernel, linux-mm, syzkaller-bugs, willy

On 28.12.24 13:25, syzbot wrote:
> syzbot has found a reproducer for the following issue on:
>
> HEAD commit:    8155b4ef3466 Add linux-next specific files for 20241220
> git tree:       linux-next
> console output: https://syzkaller.appspot.com/x/log.txt?x=1661050f980000
> kernel config:  https://syzkaller.appspot.com/x/.config?x=9c90bb7161a56c88
> dashboard link: https://syzkaller.appspot.com/bug?extid=c0673e1f1f054fac28c2
> compiler:       Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
> syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=17438af8580000
> C reproducer:   https://syzkaller.appspot.com/x/repro.c?x=101006df980000
>
[...]
> WARNING: CPU: 1 PID: 11276 at ./include/linux/rmap.h:217 __folio_rmap_sanity_checks+0x369/0x590 include/linux/rmap.h:217
[...]
> Call Trace:
>  <TASK>
>  __folio_add_rmap mm/rmap.c:1170 [inline]
>  __folio_add_file_rmap mm/rmap.c:1489 [inline]
>  folio_add_file_rmap_ptes+0x82/0x380 mm/rmap.c:1511
>  set_pte_range+0x30c/0x750 mm/memory.c:5136

If I had to guess, I would assume that we have a refcount issue such
that we succeed in splitting a folio while concurrently mapping it.

-- 
Cheers,

David / dhildenb

^ permalink raw reply	[flat|nested] 38+ messages in thread
* Re: [syzbot] [mm?] WARNING in __folio_rmap_sanity_checks (2)
  2025-01-10 15:48 ` David Hildenbrand
@ 2025-01-10 16:14   ` Matthew Wilcox
  2025-01-10 16:19     ` David Hildenbrand
  0 siblings, 1 reply; 38+ messages in thread
From: Matthew Wilcox @ 2025-01-10 16:14 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: syzbot, akpm, hdanton, linux-kernel, linux-mm, syzkaller-bugs

On Fri, Jan 10, 2025 at 04:48:03PM +0100, David Hildenbrand wrote:
> On 28.12.24 13:25, syzbot wrote:
> > syzbot has found a reproducer for the following issue on:
> >
> > HEAD commit:    8155b4ef3466 Add linux-next specific files for 20241220
> > git tree:       linux-next
[...]
> > Call Trace:
> >  <TASK>
> >  __folio_add_rmap mm/rmap.c:1170 [inline]
> >  __folio_add_file_rmap mm/rmap.c:1489 [inline]
> >  folio_add_file_rmap_ptes+0x82/0x380 mm/rmap.c:1511
> >  set_pte_range+0x30c/0x750 mm/memory.c:5136
> 
> If I would have to guess, I would assume that we have a refcount issue such
> that we succeed in splitting a folio while concurrently mapping it.

That would seem hard to accomplish, because both hold the folio lock,
so it wouldn't be just a refcount bug but also a locking bug.  Not sure
what this is though.

^ permalink raw reply	[flat|nested] 38+ messages in thread
* Re: [syzbot] [mm?] WARNING in __folio_rmap_sanity_checks (2)
  2025-01-10 16:14 ` Matthew Wilcox
@ 2025-01-10 16:19   ` David Hildenbrand
  2025-01-10 16:27     ` Matthew Wilcox
  0 siblings, 1 reply; 38+ messages in thread
From: David Hildenbrand @ 2025-01-10 16:19 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: syzbot, akpm, hdanton, linux-kernel, linux-mm, syzkaller-bugs

On 10.01.25 17:14, Matthew Wilcox wrote:
> On Fri, Jan 10, 2025 at 04:48:03PM +0100, David Hildenbrand wrote:
>> On 28.12.24 13:25, syzbot wrote:
>>> syzbot has found a reproducer for the following issue on:
>>>
>>> HEAD commit:    8155b4ef3466 Add linux-next specific files for 20241220
>>> git tree:       linux-next
[...]
>>> Call Trace:
>>>   <TASK>
>>>   __folio_add_rmap mm/rmap.c:1170 [inline]
>>>   __folio_add_file_rmap mm/rmap.c:1489 [inline]
>>>   folio_add_file_rmap_ptes+0x82/0x380 mm/rmap.c:1511
>>>   set_pte_range+0x30c/0x750 mm/memory.c:5136
>>
>> If I would have to guess, I would assume that we have a refcount issue such
>> that we succeed in splitting a folio while concurrently mapping it.
> 
> That would seem hard to accomplish, because both hold the folio lock,
> so it wouldn't be just a refcount bug but also a locking bug.  Not sure
> what this is though.

Yeah, but we also have

https://lkml.kernel.org/r/6774bf44.050a0220.25abdd.098a.GAE@google.com

-- 
Cheers,

David / dhildenb

^ permalink raw reply	[flat|nested] 38+ messages in thread
* Re: [syzbot] [mm?] WARNING in __folio_rmap_sanity_checks (2)
  2025-01-10 16:19 ` David Hildenbrand
@ 2025-01-10 16:27   ` Matthew Wilcox
  2025-01-10 16:31     ` David Hildenbrand
  0 siblings, 1 reply; 38+ messages in thread
From: Matthew Wilcox @ 2025-01-10 16:27 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: syzbot, akpm, hdanton, linux-kernel, linux-mm, syzkaller-bugs,
	Liam R. Howlett, Lorenzo Stoakes

On Fri, Jan 10, 2025 at 05:19:54PM +0100, David Hildenbrand wrote:
> On 10.01.25 17:14, Matthew Wilcox wrote:
> > On Fri, Jan 10, 2025 at 04:48:03PM +0100, David Hildenbrand wrote:
> > > If I would have to guess, I would assume that we have a refcount issue such
> > > that we succeed in splitting a folio while concurrently mapping it.
> >
> > That would seem hard to accomplish, because both hold the folio lock,
> > so it wouldn't be just a refcount bug but also a locking bug. Not sure
> > what this is though.
>
> Yeah, but we also have
>
> https://lkml.kernel.org/r/6774bf44.050a0220.25abdd.098a.GAE@google.com

That one is a UAF on the vma, so it's either a different issue, or the
problem is with the VMA refcount/lookup/..., not the folio refcount.
cc'ing the relevant maintainers.

^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [syzbot] [mm?] WARNING in __folio_rmap_sanity_checks (2)
  2025-01-10 16:27 ` Matthew Wilcox
@ 2025-01-10 16:31   ` David Hildenbrand
  2025-01-10 19:55     ` Liam R. Howlett
  0 siblings, 1 reply; 38+ messages in thread
From: David Hildenbrand @ 2025-01-10 16:31 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: syzbot, akpm, hdanton, linux-kernel, linux-mm, syzkaller-bugs,
	Liam R. Howlett, Lorenzo Stoakes

On 10.01.25 17:27, Matthew Wilcox wrote:
> On Fri, Jan 10, 2025 at 05:19:54PM +0100, David Hildenbrand wrote:
>> On 10.01.25 17:14, Matthew Wilcox wrote:
>>> On Fri, Jan 10, 2025 at 04:48:03PM +0100, David Hildenbrand wrote:
>>>> If I would have to guess, I would assume that we have a refcount issue such
>>>> that we succeed in splitting a folio while concurrently mapping it.
>>>
>>> That would seem hard to accomplish, because both hold the folio lock,
>>> so it wouldn't be just a refcount bug but also a locking bug. Not sure
>>> what this is though.
>>
>> Yeah, but we also have
>>
>> https://lkml.kernel.org/r/6774bf44.050a0220.25abdd.098a.GAE@google.com
>
> That one is a UAF on the vma, so it's either a different issue, or the
> problem is with the VMA refcount/lookup/..., not the folio refcount.
> cc'ing the relevant maintainers.

Agreed, it's all a bit confusing.

-- 
Cheers,

David / dhildenb

^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [syzbot] [mm?] WARNING in __folio_rmap_sanity_checks (2)
  2025-01-10 16:31 ` David Hildenbrand
@ 2025-01-10 19:55   ` Liam R. Howlett
  2025-01-10 21:24     ` Suren Baghdasaryan
  0 siblings, 1 reply; 38+ messages in thread
From: Liam R. Howlett @ 2025-01-10 19:55 UTC (permalink / raw)
  To: Suren Baghdasaryan
  Cc: Matthew Wilcox, syzbot, akpm, hdanton, linux-kernel, linux-mm,
	syzkaller-bugs, Lorenzo Stoakes, David Hildenbrand

* David Hildenbrand <david@redhat.com> [250110 11:31]:
> On 10.01.25 17:27, Matthew Wilcox wrote:
> > On Fri, Jan 10, 2025 at 05:19:54PM +0100, David Hildenbrand wrote:
> > > On 10.01.25 17:14, Matthew Wilcox wrote:
> > > > On Fri, Jan 10, 2025 at 04:48:03PM +0100, David Hildenbrand wrote:
> > > > > If I would have to guess, I would assume that we have a refcount issue such
> > > > > that we succeed in splitting a folio while concurrently mapping it.
> > > >
> > > > That would seem hard to accomplish, because both hold the folio lock,
> > > > so it wouldn't be just a refcount bug but also a locking bug. Not sure
> > > > what this is though.
> > >
> > > Yeah, but we also have
> > >
> > > https://lkml.kernel.org/r/6774bf44.050a0220.25abdd.098a.GAE@google.com
> >
> > That one is a UAF on the vma, so it's either a different issue, or the
> > problem is with the VMA refcount/lookup/..., not the folio refcount.
> > cc'ing the relevant maintainers.
>
> Agreed, it's all a bit confusing.
>

This might involve Suren's patch set which changes the locking of the
vmas.

Suren, if you respin and it's not too much trouble can you please make a
git branch with the latest patches for easier review and testing?

Thanks,
Liam

^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [syzbot] [mm?] WARNING in __folio_rmap_sanity_checks (2)
  2025-01-10 19:55 ` Liam R. Howlett
@ 2025-01-10 21:24   ` Suren Baghdasaryan
  2025-01-11 4:29     ` Liam R. Howlett
  0 siblings, 1 reply; 38+ messages in thread
From: Suren Baghdasaryan @ 2025-01-10 21:24 UTC (permalink / raw)
  To: Liam R. Howlett, Suren Baghdasaryan, Matthew Wilcox, syzbot,
	akpm, hdanton, linux-kernel, linux-mm, syzkaller-bugs,
	Lorenzo Stoakes, David Hildenbrand

On Fri, Jan 10, 2025 at 11:56 AM Liam R. Howlett
<Liam.Howlett@oracle.com> wrote:
>
> * David Hildenbrand <david@redhat.com> [250110 11:31]:
> > On 10.01.25 17:27, Matthew Wilcox wrote:
> > > On Fri, Jan 10, 2025 at 05:19:54PM +0100, David Hildenbrand wrote:
> > > > On 10.01.25 17:14, Matthew Wilcox wrote:
> > > > > On Fri, Jan 10, 2025 at 04:48:03PM +0100, David Hildenbrand wrote:
> > > > > > If I would have to guess, I would assume that we have a refcount issue such
> > > > > > that we succeed in splitting a folio while concurrently mapping it.
> > > > >
> > > > > That would seem hard to accomplish, because both hold the folio lock,
> > > > > so it wouldn't be just a refcount bug but also a locking bug. Not sure
> > > > > what this is though.
> > > >
> > > > Yeah, but we also have
> > > >
> > > > https://lkml.kernel.org/r/6774bf44.050a0220.25abdd.098a.GAE@google.com
> > >
> > > That one is a UAF on the vma, so it's either a different issue, or the
> > > problem is with the VMA refcount/lookup/..., not the folio refcount.
> > > cc'ing the relevant maintainers.
> >
> > Agreed, it's all a bit confusing.
> >
>
> This might involve Suren's patch set which changes the locking of the
> vmas.

Possibly... The patchset in linux-next on Jan 1st was somewhat
different from the latest one.

>
> Suren, if you respin and it's not too much trouble can you please make a
> git branch with the latest patches for easier review and testing?

Ok, I'll see what I can do.
Thanks,
Suren.

>
> Thanks,
> Liam

^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [syzbot] [mm?] WARNING in __folio_rmap_sanity_checks (2)
  2025-01-10 21:24 ` Suren Baghdasaryan
@ 2025-01-11 4:29   ` Liam R. Howlett
  0 siblings, 0 replies; 38+ messages in thread
From: Liam R. Howlett @ 2025-01-11 4:29 UTC (permalink / raw)
  To: Suren Baghdasaryan
  Cc: Matthew Wilcox, syzbot, akpm, hdanton, linux-kernel, linux-mm,
	syzkaller-bugs, Lorenzo Stoakes, David Hildenbrand

* Suren Baghdasaryan <surenb@google.com> [250110 16:25]:
> On Fri, Jan 10, 2025 at 11:56 AM Liam R. Howlett
> <Liam.Howlett@oracle.com> wrote:
> >
> > * David Hildenbrand <david@redhat.com> [250110 11:31]:
> > > On 10.01.25 17:27, Matthew Wilcox wrote:
> > > > On Fri, Jan 10, 2025 at 05:19:54PM +0100, David Hildenbrand wrote:
> > > > > On 10.01.25 17:14, Matthew Wilcox wrote:
> > > > > > On Fri, Jan 10, 2025 at 04:48:03PM +0100, David Hildenbrand wrote:
> > > > > > > If I would have to guess, I would assume that we have a refcount issue such
> > > > > > > that we succeed in splitting a folio while concurrently mapping it.
> > > > > >
> > > > > > That would seem hard to accomplish, because both hold the folio lock,
> > > > > > so it wouldn't be just a refcount bug but also a locking bug. Not sure
> > > > > > what this is though.
> > > > >
> > > > > Yeah, but we also have
> > > > >
> > > > > https://lkml.kernel.org/r/6774bf44.050a0220.25abdd.098a.GAE@google.com
> > > >
> > > > That one is a UAF on the vma, so it's either a different issue, or the
> > > > problem is with the VMA refcount/lookup/..., not the folio refcount.
> > > > cc'ing the relevant maintainers.
> > >
> > > Agreed, it's all a bit confusing.
> > >
> >
> > This might involve Suren's patch set which changes the locking of the
> > vmas.
>
> Possibly... The patchset in linux-next on Jan 1st was somewhat
> different from the latest one.

Yeah, I asked the bot to retest the latest unstable (which is still
somewhat out of date..). I suspect it'll be okay now. We'll see what it
comes back with.

> >
> > Suren, if you respin and it's not too much trouble can you please make a
> > git branch with the latest patches for easier review and testing?
>
> Ok, I'll see what I can do.

Thanks, I appreciate it.

Regards,
Liam

^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [syzbot] [mm?] WARNING in __folio_rmap_sanity_checks (2)
  2024-12-28 12:25 ` syzbot
  2025-01-10 15:48   ` David Hildenbrand
@ 2025-01-10 21:03   ` Liam R. Howlett
  2025-01-11 6:15     ` syzbot
  2025-01-11 9:25     ` David Hildenbrand
  1 sibling, 2 replies; 38+ messages in thread
From: Liam R. Howlett @ 2025-01-10 21:03 UTC (permalink / raw)
  To: syzbot; +Cc: akpm, david, hdanton, linux-kernel, linux-mm, syzkaller-bugs, willy

* syzbot <syzbot+c0673e1f1f054fac28c2@syzkaller.appspotmail.com> [241228 07:25]:
> syzbot has found a reproducer for the following issue on:
>
> HEAD commit: 8155b4ef3466 Add linux-next specific files for 20241220
> git tree: linux-next
> console output: https://syzkaller.appspot.com/x/log.txt?x=1661050f980000
> kernel config: https://syzkaller.appspot.com/x/.config?x=9c90bb7161a56c88
> dashboard link: https://syzkaller.appspot.com/bug?extid=c0673e1f1f054fac28c2
> compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
> syz repro: https://syzkaller.appspot.com/x/repro.syz?x=17438af8580000
> C reproducer: https://syzkaller.appspot.com/x/repro.c?x=101006df980000
>
> Downloadable assets:
> disk image: https://storage.googleapis.com/syzbot-assets/98a974fc662d/disk-8155b4ef.raw.xz
> vmlinux: https://storage.googleapis.com/syzbot-assets/2dea9b72f624/vmlinux-8155b4ef.xz
> kernel image: https://storage.googleapis.com/syzbot-assets/593a42b9eb34/bzImage-8155b4ef.xz
> mounted in repro: https://storage.googleapis.com/syzbot-assets/5f780361c9ef/mount_0.gz
>
> IMPORTANT: if you fix the issue, please add the following tag to the commit:
> Reported-by: syzbot+c0673e1f1f054fac28c2@syzkaller.appspotmail.com
>

#syz test: git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm mm-unstable

^ permalink raw reply [flat|nested] 38+ messages in thread
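For readers unfamiliar with the bot: a bare `#syz test:` line in a reply is a command to syzbot. It asks the bot to boot the named tree and branch (optionally with a patch included in the same mail), re-run the stored reproducer, and reply with the result. The general form, using the tree from the message above:

```
#syz test: git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm mm-unstable
```

The automated "syzbot has tested the proposed patch" messages later in the thread are responses to commands of this form, first against mm-unstable and then against mm-stable.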
* Re: [syzbot] [mm?] WARNING in __folio_rmap_sanity_checks (2) 2025-01-10 21:03 ` Liam R. Howlett @ 2025-01-11 6:15 ` syzbot 2025-01-11 9:25 ` David Hildenbrand 1 sibling, 0 replies; 38+ messages in thread From: syzbot @ 2025-01-11 6:15 UTC (permalink / raw) To: akpm, david, hdanton, liam.howlett, linux-kernel, linux-mm, syzkaller-bugs, willy Hello, syzbot has tested the proposed patch but the reproducer is still triggering an issue: WARNING in __folio_rmap_sanity_checks do_truncate fs/open.c:65 [inline] do_ftruncate+0x462/0x580 fs/open.c:181 do_sys_ftruncate fs/open.c:196 [inline] __do_sys_ftruncate fs/open.c:201 [inline] __se_sys_ftruncate fs/open.c:199 [inline] __x64_sys_ftruncate+0x94/0xf0 fs/open.c:199 do_syscall_x64 arch/x86/entry/common.c:52 [inline] do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83 entry_SYSCALL_64_after_hwframe+0x77/0x7f ------------[ cut here ]------------ WARNING: CPU: 1 PID: 10938 at ./include/linux/rmap.h:216 __folio_rmap_sanity_checks+0x33f/0x590 include/linux/rmap.h:216 Modules linked in: CPU: 1 UID: 0 PID: 10938 Comm: syz.0.314 Not tainted 6.13.0-rc6-syzkaller-g0703fa3785f1 #0 Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024 RIP: 0010:__folio_rmap_sanity_checks+0x33f/0x590 include/linux/rmap.h:216 Code: 0f 0b 90 e9 b7 fd ff ff e8 0e c3 ab ff 48 ff cb e9 f8 fd ff ff e8 01 c3 ab ff 4c 89 e7 48 c7 c6 80 9f 15 8c e8 f2 95 f5 ff 90 <0f> 0b 90 e9 e9 fd ff ff e8 e4 c2 ab ff 48 ff cb e9 34 fe ff ff e8 RSP: 0018:ffffc9000cdff098 EFLAGS: 00010246 RAX: fddae3826e06a400 RBX: ffffea0001450100 RCX: ffffc9000cdfec03 RDX: 0000000000000005 RSI: ffffffff8c0aa1e0 RDI: ffffffff8c5fb3a0 RBP: 000000000001318a R08: ffffffff901988f7 R09: 1ffffffff203311e R10: dffffc0000000000 R11: fffffbfff203311f R12: ffffea0001438000 R13: ffffea0001450100 R14: 0000000000000000 R15: 0000000000000003 FS: 00007f40ae2076c0(0000) GS:ffff8880b8700000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 
000055558cd15608 CR3: 0000000029cd0000 CR4: 00000000003526f0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 Call Trace: <TASK> __folio_add_rmap mm/rmap.c:1170 [inline] __folio_add_file_rmap mm/rmap.c:1489 [inline] folio_add_file_rmap_ptes+0x82/0x380 mm/rmap.c:1511 set_pte_range+0x30c/0x750 mm/memory.c:5134 filemap_map_folio_range mm/filemap.c:3620 [inline] filemap_map_pages+0xfbb/0x1900 mm/filemap.c:3729 do_fault_around mm/memory.c:5349 [inline] do_read_fault mm/memory.c:5382 [inline] do_fault mm/memory.c:5525 [inline] do_pte_missing mm/memory.c:4046 [inline] handle_pte_fault mm/memory.c:5870 [inline] __handle_mm_fault+0x3f4e/0x6ee0 mm/memory.c:6013 handle_mm_fault+0x3e2/0x8c0 mm/memory.c:6182 faultin_page mm/gup.c:1196 [inline] __get_user_pages+0x1a8f/0x4140 mm/gup.c:1491 populate_vma_page_range+0x264/0x330 mm/gup.c:1929 __mm_populate+0x27a/0x460 mm/gup.c:2032 mm_populate include/linux/mm.h:3470 [inline] vm_mmap_pgoff+0x303/0x430 mm/util.c:580 ksys_mmap_pgoff+0x4eb/0x720 mm/mmap.c:607 do_syscall_x64 arch/x86/entry/common.c:52 [inline] do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83 entry_SYSCALL_64_after_hwframe+0x77/0x7f RIP: 0033:0x7f40ad385d29 Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48 RSP: 002b:00007f40ae207038 EFLAGS: 00000246 ORIG_RAX: 0000000000000009 RAX: ffffffffffffffda RBX: 00007f40ad576080 RCX: 00007f40ad385d29 RDX: 0000000000000002 RSI: 0000000000b36000 RDI: 0000000020000000 RBP: 00007f40ad401b08 R08: 0000000000000004 R09: 0000000000000000 R10: 0000000000028011 R11: 0000000000000246 R12: 0000000000000000 R13: 0000000000000000 R14: 00007f40ad576080 R15: 00007ffe9513a848 </TASK> Tested on: commit: 0703fa37 mm: remove PageTransTail() git tree: git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm mm-unstable 
console output: https://syzkaller.appspot.com/x/log.txt?x=11a391df980000 kernel config: https://syzkaller.appspot.com/x/.config?x=9a23460a3770d89c dashboard link: https://syzkaller.appspot.com/bug?extid=c0673e1f1f054fac28c2 compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40 Note: no patches were applied. ^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [syzbot] [mm?] WARNING in __folio_rmap_sanity_checks (2)
  2025-01-10 21:03 ` Liam R. Howlett
@ 2025-01-11 9:25   ` David Hildenbrand
  2025-01-11 9:54     ` syzbot
  1 sibling, 1 reply; 38+ messages in thread
From: David Hildenbrand @ 2025-01-11 9:25 UTC (permalink / raw)
  To: Liam R. Howlett, syzbot, akpm, hdanton, linux-kernel, linux-mm,
	syzkaller-bugs, willy

On 10.01.25 22:03, Liam R. Howlett wrote:
> * syzbot <syzbot+c0673e1f1f054fac28c2@syzkaller.appspotmail.com> [241228 07:25]:
>> syzbot has found a reproducer for the following issue on:
>>
>> HEAD commit: 8155b4ef3466 Add linux-next specific files for 20241220
>> git tree: linux-next
>> console output: https://syzkaller.appspot.com/x/log.txt?x=1661050f980000
>> kernel config: https://syzkaller.appspot.com/x/.config?x=9c90bb7161a56c88
>> dashboard link: https://syzkaller.appspot.com/bug?extid=c0673e1f1f054fac28c2
>> compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
>> syz repro: https://syzkaller.appspot.com/x/repro.syz?x=17438af8580000
>> C reproducer: https://syzkaller.appspot.com/x/repro.c?x=101006df980000
>>
>> Downloadable assets:
>> disk image: https://storage.googleapis.com/syzbot-assets/98a974fc662d/disk-8155b4ef.raw.xz
>> vmlinux: https://storage.googleapis.com/syzbot-assets/2dea9b72f624/vmlinux-8155b4ef.xz
>> kernel image: https://storage.googleapis.com/syzbot-assets/593a42b9eb34/bzImage-8155b4ef.xz
>> mounted in repro: https://storage.googleapis.com/syzbot-assets/5f780361c9ef/mount_0.gz
>>
>> IMPORTANT: if you fix the issue, please add the following tag to the commit:
>> Reported-by: syzbot+c0673e1f1f054fac28c2@syzkaller.appspotmail.com
>>
>
> #syz test: git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm mm-unstable
>

#syz test: git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm mm-stable

-- 
Cheers,

David / dhildenb

^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [syzbot] [mm?] WARNING in __folio_rmap_sanity_checks (2) 2025-01-11 9:25 ` David Hildenbrand @ 2025-01-11 9:54 ` syzbot 2025-01-13 15:39 ` David Hildenbrand 0 siblings, 1 reply; 38+ messages in thread From: syzbot @ 2025-01-11 9:54 UTC (permalink / raw) To: akpm, david, hdanton, liam.howlett, linux-kernel, linux-mm, syzkaller-bugs, willy Hello, syzbot has tested the proposed patch but the reproducer is still triggering an issue: WARNING in __folio_rmap_sanity_checks page last free pid 7533 tgid 7532 stack trace: reset_page_owner include/linux/page_owner.h:25 [inline] free_pages_prepare mm/page_alloc.c:1127 [inline] free_unref_folios+0xe39/0x18b0 mm/page_alloc.c:2706 folios_put_refs+0x76c/0x860 mm/swap.c:962 folio_batch_release include/linux/pagevec.h:101 [inline] truncate_inode_pages_range+0x460/0x10e0 mm/truncate.c:330 iomap_write_failed fs/iomap/buffered-io.c:668 [inline] iomap_write_iter fs/iomap/buffered-io.c:999 [inline] iomap_file_buffered_write+0xca5/0x11c0 fs/iomap/buffered-io.c:1039 xfs_file_buffered_write+0x2de/0xac0 fs/xfs/xfs_file.c:792 new_sync_write fs/read_write.c:586 [inline] vfs_write+0xaeb/0xd30 fs/read_write.c:679 ksys_write+0x18f/0x2b0 fs/read_write.c:731 do_syscall_x64 arch/x86/entry/common.c:52 [inline] do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83 entry_SYSCALL_64_after_hwframe+0x77/0x7f ------------[ cut here ]------------ WARNING: CPU: 0 PID: 7538 at ./include/linux/rmap.h:216 __folio_rmap_sanity_checks+0x33f/0x590 include/linux/rmap.h:216 Modules linked in: CPU: 0 UID: 0 PID: 7538 Comm: syz.1.57 Not tainted 6.13.0-rc6-syzkaller-gcd6313beaeae #0 Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024 RIP: 0010:__folio_rmap_sanity_checks+0x33f/0x590 include/linux/rmap.h:216 Code: 0f 0b 90 e9 b7 fd ff ff e8 ee af ab ff 48 ff cb e9 f8 fd ff ff e8 e1 af ab ff 4c 89 e7 48 c7 c6 c0 9c 15 8c e8 82 6f f5 ff 90 <0f> 0b 90 e9 e9 fd ff ff e8 c4 af ab ff 48 ff cb e9 34 fe ff ff e8 RSP: 0018:ffffc9000c38efd8 
EFLAGS: 00010246 RAX: f8a45fcd41963a00 RBX: ffffea00014f8000 RCX: ffffc9000c38eb03 RDX: 0000000000000005 RSI: ffffffff8c0aa3e0 RDI: ffffffff8c5fa860 RBP: 0000000000013186 R08: ffffffff901978b7 R09: 1ffffffff2032f16 R10: dffffc0000000000 R11: fffffbfff2032f17 R12: ffffea00014f0000 R13: ffffea00014f8080 R14: 0000000000000000 R15: 0000000000000002 FS: 00007f14451f96c0(0000) GS:ffff8880b8600000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000020000140 CR3: 0000000073716000 CR4: 00000000003526f0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 Call Trace: <TASK> __folio_add_rmap mm/rmap.c:1170 [inline] __folio_add_file_rmap mm/rmap.c:1489 [inline] folio_add_file_rmap_ptes+0x82/0x380 mm/rmap.c:1511 set_pte_range+0x30c/0x750 mm/memory.c:5065 filemap_map_folio_range mm/filemap.c:3563 [inline] filemap_map_pages+0xfbe/0x1900 mm/filemap.c:3672 do_fault_around mm/memory.c:5280 [inline] do_read_fault mm/memory.c:5313 [inline] do_fault mm/memory.c:5456 [inline] do_pte_missing mm/memory.c:3979 [inline] handle_pte_fault+0x3888/0x5ed0 mm/memory.c:5801 __handle_mm_fault mm/memory.c:5944 [inline] handle_mm_fault+0x1106/0x1bb0 mm/memory.c:6112 faultin_page mm/gup.c:1196 [inline] __get_user_pages+0x1c82/0x49e0 mm/gup.c:1494 populate_vma_page_range+0x264/0x330 mm/gup.c:1932 __mm_populate+0x27a/0x460 mm/gup.c:2035 mm_populate include/linux/mm.h:3397 [inline] vm_mmap_pgoff+0x2c3/0x3d0 mm/util.c:580 ksys_mmap_pgoff+0x4eb/0x720 mm/mmap.c:546 do_syscall_x64 arch/x86/entry/common.c:52 [inline] do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83 entry_SYSCALL_64_after_hwframe+0x77/0x7f RIP: 0033:0x7f1445385d29 Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48 RSP: 002b:00007f14451f9038 EFLAGS: 00000246 ORIG_RAX: 
0000000000000009 RAX: ffffffffffffffda RBX: 00007f1445575fa0 RCX: 00007f1445385d29 RDX: 0000000000000002 RSI: 0000000000b36000 RDI: 0000000020000000 RBP: 00007f1445401b08 R08: 0000000000000004 R09: 0000000000000000 R10: 0000000000028011 R11: 0000000000000246 R12: 0000000000000000 R13: 0000000000000000 R14: 00007f1445575fa0 R15: 00007ffe4c3a7978 </TASK> Tested on: commit: cd6313be Revert "vmstat: disable vmstat_work on vmstat.. git tree: git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm mm-stable console output: https://syzkaller.appspot.com/x/log.txt?x=10b34bc4580000 kernel config: https://syzkaller.appspot.com/x/.config?x=d18955ff6936aa88 dashboard link: https://syzkaller.appspot.com/bug?extid=c0673e1f1f054fac28c2 compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40 Note: no patches were applied. ^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [syzbot] [mm?] WARNING in __folio_rmap_sanity_checks (2)
  2025-01-11 9:54 ` syzbot
@ 2025-01-13 15:39   ` David Hildenbrand
  2025-01-13 15:45     ` Liam R. Howlett
  0 siblings, 1 reply; 38+ messages in thread
From: David Hildenbrand @ 2025-01-13 15:39 UTC (permalink / raw)
  To: syzbot, akpm, hdanton, liam.howlett, linux-kernel, linux-mm,
	syzkaller-bugs, willy

On 11.01.25 10:54, syzbot wrote:
> Hello,
>
> syzbot has tested the proposed patch but the reproducer is still triggering an issue:
> WARNING in __folio_rmap_sanity_checks
>
> page last free pid 7533 tgid 7532 stack trace:
> reset_page_owner include/linux/page_owner.h:25 [inline]
> free_pages_prepare mm/page_alloc.c:1127 [inline]
> free_unref_folios+0xe39/0x18b0 mm/page_alloc.c:2706
> folios_put_refs+0x76c/0x860 mm/swap.c:962
> folio_batch_release include/linux/pagevec.h:101 [inline]
> truncate_inode_pages_range+0x460/0x10e0 mm/truncate.c:330
> iomap_write_failed fs/iomap/buffered-io.c:668 [inline]
> iomap_write_iter fs/iomap/buffered-io.c:999 [inline]
> iomap_file_buffered_write+0xca5/0x11c0 fs/iomap/buffered-io.c:1039
> xfs_file_buffered_write+0x2de/0xac0 fs/xfs/xfs_file.c:792
> new_sync_write fs/read_write.c:586 [inline]
> vfs_write+0xaeb/0xd30 fs/read_write.c:679
> ksys_write+0x18f/0x2b0 fs/read_write.c:731
> do_syscall_x64 arch/x86/entry/common.c:52 [inline]
> do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
> entry_SYSCALL_64_after_hwframe+0x77/0x7f
> ------------[ cut here ]------------
> WARNING: CPU: 0 PID: 7538 at ./include/linux/rmap.h:216 __folio_rmap_sanity_checks+0x33f/0x590 include/linux/rmap.h:216
> Modules linked in:
> CPU: 0 UID: 0 PID: 7538 Comm: syz.1.57 Not tainted 6.13.0-rc6-syzkaller-gcd6313beaeae #0
> Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
> RIP: 0010:__folio_rmap_sanity_checks+0x33f/0x590 include/linux/rmap.h:216
> Code: 0f 0b 90 e9 b7 fd ff ff e8 ee af ab ff 48 ff cb e9 f8 fd ff ff e8 e1 af ab ff 4c 89 e7 48 c7 c6 c0 9c 15 8c e8 82 6f f5 ff 90 <0f> 0b 90 e9 e9 fd ff ff e8 c4 af ab ff 48 ff cb e9 34 fe ff ff e8
> RSP: 0018:ffffc9000c38efd8 EFLAGS: 00010246
> RAX: f8a45fcd41963a00 RBX: ffffea00014f8000 RCX: ffffc9000c38eb03
> RDX: 0000000000000005 RSI: ffffffff8c0aa3e0 RDI: ffffffff8c5fa860
> RBP: 0000000000013186 R08: ffffffff901978b7 R09: 1ffffffff2032f16
> R10: dffffc0000000000 R11: fffffbfff2032f17 R12: ffffea00014f0000
> R13: ffffea00014f8080 R14: 0000000000000000 R15: 0000000000000002
> FS: 00007f14451f96c0(0000) GS:ffff8880b8600000(0000) knlGS:0000000000000000
> CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> CR2: 0000000020000140 CR3: 0000000073716000 CR4: 00000000003526f0
> DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
> Call Trace:
> <TASK>
> __folio_add_rmap mm/rmap.c:1170 [inline]
> __folio_add_file_rmap mm/rmap.c:1489 [inline]
> folio_add_file_rmap_ptes+0x82/0x380 mm/rmap.c:1511
> set_pte_range+0x30c/0x750 mm/memory.c:5065
> filemap_map_folio_range mm/filemap.c:3563 [inline]
> filemap_map_pages+0xfbe/0x1900 mm/filemap.c:3672
> do_fault_around mm/memory.c:5280 [inline]
> do_read_fault mm/memory.c:5313 [inline]
> do_fault mm/memory.c:5456 [inline]
> do_pte_missing mm/memory.c:3979 [inline]
> handle_pte_fault+0x3888/0x5ed0 mm/memory.c:5801
> __handle_mm_fault mm/memory.c:5944 [inline]
> handle_mm_fault+0x1106/0x1bb0 mm/memory.c:6112
> faultin_page mm/gup.c:1196 [inline]
> __get_user_pages+0x1c82/0x49e0 mm/gup.c:1494
> populate_vma_page_range+0x264/0x330 mm/gup.c:1932
> __mm_populate+0x27a/0x460 mm/gup.c:2035
> mm_populate include/linux/mm.h:3397 [inline]
> vm_mmap_pgoff+0x2c3/0x3d0 mm/util.c:580
> ksys_mmap_pgoff+0x4eb/0x720 mm/mmap.c:546
> do_syscall_x64 arch/x86/entry/common.c:52 [inline]
> do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
> entry_SYSCALL_64_after_hwframe+0x77/0x7f
> RIP: 0033:0x7f1445385d29
> Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
> RSP: 002b:00007f14451f9038 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
> RAX: ffffffffffffffda RBX: 00007f1445575fa0 RCX: 00007f1445385d29
> RDX: 0000000000000002 RSI: 0000000000b36000 RDI: 0000000020000000
> RBP: 00007f1445401b08 R08: 0000000000000004 R09: 0000000000000000
> R10: 0000000000028011 R11: 0000000000000246 R12: 0000000000000000
> R13: 0000000000000000 R14: 00007f1445575fa0 R15: 00007ffe4c3a7978
> </TASK>
>
>
> Tested on:
>
> commit: cd6313be Revert "vmstat: disable vmstat_work on vmstat..
> git tree: git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm mm-stable
> console output: https://syzkaller.appspot.com/x/log.txt?x=10b34bc4580000
> kernel config: https://syzkaller.appspot.com/x/.config?x=d18955ff6936aa88
> dashboard link: https://syzkaller.appspot.com/bug?extid=c0673e1f1f054fac28c2
> compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40

I tried reproducing it manually in an x86-64 VM with the provided
config and C reproducer, so far no luck :(

Looking at the reports, we always seem to be dealing with an order-9
(PMD-size) XFS folio with dentry name(?):"memory.current".

Apparently, we're PTE-mapping that PMD-sized folio.

[ 141.392393][ T7538] page: refcount:1025 mapcount:1 mapping:ffff88805b10ba48 index:0x400 pfn:0x53c00
[ 141.402708][ T7538] head: order:9 mapcount:512 entire_mapcount:0 nr_pages_mapped:512 pincount:0
[ 141.411562][ T7538] memcg:ffff88805b82e000
[ 141.415930][ T7538] aops:xfs_address_space_operations ino:42a dentry name(?):"memory.current"
[ 141.424695][ T7538] flags: 0xfff5800000027d(locked|referenced|uptodate|dirty|lru|workingset|head|node=0|zone=1|lastcpupid=0x7ff)
[ 141.436464][ T7538] raw: 00fff5800000027d ffffea00014d0008 ffffea00014f8008 ffff88805b10ba48
[ 141.445242][ T7538] raw: 0000000000000400 0000000000000000 0000040100000000 ffff88805b82e000
[ 141.454649][ T7538] head: 00fff5800000027d ffffea00014d0008 ffffea00014f8008 ffff88805b10ba48
[ 141.463708][ T7538] head: 0000000000000400 0000000000000000 0000040100000000 ffff88805b82e000
[ 141.472549][ T7538] head: 00fff00000000209 ffffea00014f0001 ffffffff000001ff 0000000000000200
[ 141.481225][ T7538] head: 0000000000000200 0000000000000000 0000000000000000 0000000000000000
[ 141.490004][ T7538] page dumped because: VM_WARN_ON_FOLIO((_Generic((page), const struct page *: (const struct folio *)_compound_head(page), struct page *: (struct folio *)_compound_head(page))) != folio)
[ 141.508510][ T7538] page_owner tracks the page as allocated

-- 
Cheers,

David / dhildenb

^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [syzbot] [mm?] WARNING in __folio_rmap_sanity_checks (2) 2025-01-13 15:39 ` David Hildenbrand @ 2025-01-13 15:45 ` Liam R. Howlett 2025-01-13 15:50 ` David Hildenbrand 0 siblings, 1 reply; 38+ messages in thread From: Liam R. Howlett @ 2025-01-13 15:45 UTC (permalink / raw) To: David Hildenbrand Cc: syzbot, akpm, hdanton, linux-kernel, linux-mm, syzkaller-bugs, willy * David Hildenbrand <david@redhat.com> [250113 10:40]: > On 11.01.25 10:54, syzbot wrote: > > Hello, > > > > syzbot has tested the proposed patch but the reproducer is still triggering an issue: > > WARNING in __folio_rmap_sanity_checks > > > > page last free pid 7533 tgid 7532 stack trace: > > reset_page_owner include/linux/page_owner.h:25 [inline] > > free_pages_prepare mm/page_alloc.c:1127 [inline] > > free_unref_folios+0xe39/0x18b0 mm/page_alloc.c:2706 > > folios_put_refs+0x76c/0x860 mm/swap.c:962 > > folio_batch_release include/linux/pagevec.h:101 [inline] > > truncate_inode_pages_range+0x460/0x10e0 mm/truncate.c:330 > > iomap_write_failed fs/iomap/buffered-io.c:668 [inline] > > iomap_write_iter fs/iomap/buffered-io.c:999 [inline] > > iomap_file_buffered_write+0xca5/0x11c0 fs/iomap/buffered-io.c:1039 > > xfs_file_buffered_write+0x2de/0xac0 fs/xfs/xfs_file.c:792 > > new_sync_write fs/read_write.c:586 [inline] > > vfs_write+0xaeb/0xd30 fs/read_write.c:679 > > ksys_write+0x18f/0x2b0 fs/read_write.c:731 > > do_syscall_x64 arch/x86/entry/common.c:52 [inline] > > do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83 > > entry_SYSCALL_64_after_hwframe+0x77/0x7f > > ------------[ cut here ]------------ > > WARNING: CPU: 0 PID: 7538 at ./include/linux/rmap.h:216 __folio_rmap_sanity_checks+0x33f/0x590 include/linux/rmap.h:216 > > Modules linked in: > > CPU: 0 UID: 0 PID: 7538 Comm: syz.1.57 Not tainted 6.13.0-rc6-syzkaller-gcd6313beaeae #0 > > Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024 > > RIP: 0010:__folio_rmap_sanity_checks+0x33f/0x590 
include/linux/rmap.h:216
> > Code: 0f 0b 90 e9 b7 fd ff ff e8 ee af ab ff 48 ff cb e9 f8 fd ff ff e8 e1 af ab ff 4c 89 e7 48 c7 c6 c0 9c 15 8c e8 82 6f f5 ff 90 <0f> 0b 90 e9 e9 fd ff ff e8 c4 af ab ff 48 ff cb e9 34 fe ff ff e8
> > RSP: 0018:ffffc9000c38efd8 EFLAGS: 00010246
> > RAX: f8a45fcd41963a00 RBX: ffffea00014f8000 RCX: ffffc9000c38eb03
> > RDX: 0000000000000005 RSI: ffffffff8c0aa3e0 RDI: ffffffff8c5fa860
> > RBP: 0000000000013186 R08: ffffffff901978b7 R09: 1ffffffff2032f16
> > R10: dffffc0000000000 R11: fffffbfff2032f17 R12: ffffea00014f0000
> > R13: ffffea00014f8080 R14: 0000000000000000 R15: 0000000000000002
> > FS: 00007f14451f96c0(0000) GS:ffff8880b8600000(0000) knlGS:0000000000000000
> > CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> > CR2: 0000000020000140 CR3: 0000000073716000 CR4: 00000000003526f0
> > DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> > DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
> > Call Trace:
> >  <TASK>
> >  __folio_add_rmap mm/rmap.c:1170 [inline]
> >  __folio_add_file_rmap mm/rmap.c:1489 [inline]
> >  folio_add_file_rmap_ptes+0x82/0x380 mm/rmap.c:1511
> >  set_pte_range+0x30c/0x750 mm/memory.c:5065
> >  filemap_map_folio_range mm/filemap.c:3563 [inline]
> >  filemap_map_pages+0xfbe/0x1900 mm/filemap.c:3672
> >  do_fault_around mm/memory.c:5280 [inline]
> >  do_read_fault mm/memory.c:5313 [inline]
> >  do_fault mm/memory.c:5456 [inline]
> >  do_pte_missing mm/memory.c:3979 [inline]
> >  handle_pte_fault+0x3888/0x5ed0 mm/memory.c:5801
> >  __handle_mm_fault mm/memory.c:5944 [inline]
> >  handle_mm_fault+0x1106/0x1bb0 mm/memory.c:6112
> >  faultin_page mm/gup.c:1196 [inline]
> >  __get_user_pages+0x1c82/0x49e0 mm/gup.c:1494
> >  populate_vma_page_range+0x264/0x330 mm/gup.c:1932
> >  __mm_populate+0x27a/0x460 mm/gup.c:2035
> >  mm_populate include/linux/mm.h:3397 [inline]
> >  vm_mmap_pgoff+0x2c3/0x3d0 mm/util.c:580
> >  ksys_mmap_pgoff+0x4eb/0x720 mm/mmap.c:546
> >  do_syscall_x64 arch/x86/entry/common.c:52 [inline]
> >  do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
> >  entry_SYSCALL_64_after_hwframe+0x77/0x7f
> > RIP: 0033:0x7f1445385d29
> > Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
> > RSP: 002b:00007f14451f9038 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
> > RAX: ffffffffffffffda RBX: 00007f1445575fa0 RCX: 00007f1445385d29
> > RDX: 0000000000000002 RSI: 0000000000b36000 RDI: 0000000020000000
> > RBP: 00007f1445401b08 R08: 0000000000000004 R09: 0000000000000000
> > R10: 0000000000028011 R11: 0000000000000246 R12: 0000000000000000
> > R13: 0000000000000000 R14: 00007f1445575fa0 R15: 00007ffe4c3a7978
> >  </TASK>
> >
> >
> > Tested on:
> >
> > commit: cd6313be Revert "vmstat: disable vmstat_work on vmstat..
> > git tree: git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm mm-stable
> > console output: https://syzkaller.appspot.com/x/log.txt?x=10b34bc4580000
> > kernel config: https://syzkaller.appspot.com/x/.config?x=d18955ff6936aa88
> > dashboard link: https://syzkaller.appspot.com/bug?extid=c0673e1f1f054fac28c2
> > compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
>
> I tried reproducing it manually in an x86-64 VM with the provided
> config and C reproducer, so far no luck :(

Yeah, same here.

Thanks for testing mm-stable with the bot.

> Looking at the reports, we always seem to be dealing with an order-9 (PMD-size) XFS folio
> with dentry name(?):"memory.current".
>
> Apparently, we're PTE-mapping that PMD-sized folio.
>
> [ 141.392393][ T7538] page: refcount:1025 mapcount:1 mapping:ffff88805b10ba48 index:0x400 pfn:0x53c00
> [ 141.402708][ T7538] head: order:9 mapcount:512 entire_mapcount:0 nr_pages_mapped:512 pincount:0
> [ 141.411562][ T7538] memcg:ffff88805b82e000
> [ 141.415930][ T7538] aops:xfs_address_space_operations ino:42a dentry name(?):"memory.current"
> [ 141.424695][ T7538] flags: 0xfff5800000027d(locked|referenced|uptodate|dirty|lru|workingset|head|node=0|zone=1|lastcpupid=0x7ff)
> [ 141.436464][ T7538] raw: 00fff5800000027d ffffea00014d0008 ffffea00014f8008 ffff88805b10ba48
> [ 141.445242][ T7538] raw: 0000000000000400 0000000000000000 0000040100000000 ffff88805b82e000
> [ 141.454649][ T7538] head: 00fff5800000027d ffffea00014d0008 ffffea00014f8008 ffff88805b10ba48
> [ 141.463708][ T7538] head: 0000000000000400 0000000000000000 0000040100000000 ffff88805b82e000
> [ 141.472549][ T7538] head: 00fff00000000209 ffffea00014f0001 ffffffff000001ff 0000000000000200
> [ 141.481225][ T7538] head: 0000000000000200 0000000000000000 0000000000000000 0000000000000000
> [ 141.490004][ T7538] page dumped because: VM_WARN_ON_FOLIO((_Generic((page), const struct page *: (const struct folio *)_compound_head(page), struct page *: (struct folio *)_compound_head(page))) != folio)
> [ 141.508510][ T7538] page_owner tracks the page as allocated
* Re: [syzbot] [mm?] WARNING in __folio_rmap_sanity_checks (2)
  2025-01-13 15:45 ` Liam R. Howlett
@ 2025-01-13 15:50 ` David Hildenbrand
  0 siblings, 0 replies; 38+ messages in thread
From: David Hildenbrand @ 2025-01-13 15:50 UTC (permalink / raw)
To: Liam R. Howlett, syzbot, akpm, hdanton, linux-kernel, linux-mm, syzkaller-bugs, willy

On 13.01.25 16:45, Liam R. Howlett wrote:
> * David Hildenbrand <david@redhat.com> [250113 10:40]:
>> On 11.01.25 10:54, syzbot wrote:
>>> Hello,
>>>
>>> syzbot has tested the proposed patch but the reproducer is still triggering an issue:
>>> WARNING in __folio_rmap_sanity_checks
>>>
>>> page last free pid 7533 tgid 7532 stack trace:
>>>  reset_page_owner include/linux/page_owner.h:25 [inline]
>>>  free_pages_prepare mm/page_alloc.c:1127 [inline]
>>>  free_unref_folios+0xe39/0x18b0 mm/page_alloc.c:2706
>>>  folios_put_refs+0x76c/0x860 mm/swap.c:962
>>>  folio_batch_release include/linux/pagevec.h:101 [inline]
>>>  truncate_inode_pages_range+0x460/0x10e0 mm/truncate.c:330
>>>  iomap_write_failed fs/iomap/buffered-io.c:668 [inline]
>>>  iomap_write_iter fs/iomap/buffered-io.c:999 [inline]
>>>  iomap_file_buffered_write+0xca5/0x11c0 fs/iomap/buffered-io.c:1039
>>>  xfs_file_buffered_write+0x2de/0xac0 fs/xfs/xfs_file.c:792
>>>  new_sync_write fs/read_write.c:586 [inline]
>>>  vfs_write+0xaeb/0xd30 fs/read_write.c:679
>>>  ksys_write+0x18f/0x2b0 fs/read_write.c:731
>>>  do_syscall_x64 arch/x86/entry/common.c:52 [inline]
>>>  do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
>>>  entry_SYSCALL_64_after_hwframe+0x77/0x7f
>>> ------------[ cut here ]------------
>>> WARNING: CPU: 0 PID: 7538 at ./include/linux/rmap.h:216 __folio_rmap_sanity_checks+0x33f/0x590 include/linux/rmap.h:216
>>> Modules linked in:
>>> CPU: 0 UID: 0 PID: 7538 Comm: syz.1.57 Not tainted 6.13.0-rc6-syzkaller-gcd6313beaeae #0
>>> Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
>>> RIP: 0010:__folio_rmap_sanity_checks+0x33f/0x590 include/linux/rmap.h:216
>>> Code: 0f 0b 90 e9 b7 fd ff ff e8 ee af ab ff 48 ff cb e9 f8 fd ff ff e8 e1 af ab ff 4c 89 e7 48 c7 c6 c0 9c 15 8c e8 82 6f f5 ff 90 <0f> 0b 90 e9 e9 fd ff ff e8 c4 af ab ff 48 ff cb e9 34 fe ff ff e8
>>> RSP: 0018:ffffc9000c38efd8 EFLAGS: 00010246
>>> RAX: f8a45fcd41963a00 RBX: ffffea00014f8000 RCX: ffffc9000c38eb03
>>> RDX: 0000000000000005 RSI: ffffffff8c0aa3e0 RDI: ffffffff8c5fa860
>>> RBP: 0000000000013186 R08: ffffffff901978b7 R09: 1ffffffff2032f16
>>> R10: dffffc0000000000 R11: fffffbfff2032f17 R12: ffffea00014f0000
>>> R13: ffffea00014f8080 R14: 0000000000000000 R15: 0000000000000002
>>> FS: 00007f14451f96c0(0000) GS:ffff8880b8600000(0000) knlGS:0000000000000000
>>> CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>>> CR2: 0000000020000140 CR3: 0000000073716000 CR4: 00000000003526f0
>>> DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
>>> DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
>>> Call Trace:
>>>  <TASK>
>>>  __folio_add_rmap mm/rmap.c:1170 [inline]
>>>  __folio_add_file_rmap mm/rmap.c:1489 [inline]
>>>  folio_add_file_rmap_ptes+0x82/0x380 mm/rmap.c:1511
>>>  set_pte_range+0x30c/0x750 mm/memory.c:5065
>>>  filemap_map_folio_range mm/filemap.c:3563 [inline]
>>>  filemap_map_pages+0xfbe/0x1900 mm/filemap.c:3672
>>>  do_fault_around mm/memory.c:5280 [inline]
>>>  do_read_fault mm/memory.c:5313 [inline]
>>>  do_fault mm/memory.c:5456 [inline]
>>>  do_pte_missing mm/memory.c:3979 [inline]
>>>  handle_pte_fault+0x3888/0x5ed0 mm/memory.c:5801
>>>  __handle_mm_fault mm/memory.c:5944 [inline]
>>>  handle_mm_fault+0x1106/0x1bb0 mm/memory.c:6112
>>>  faultin_page mm/gup.c:1196 [inline]
>>>  __get_user_pages+0x1c82/0x49e0 mm/gup.c:1494
>>>  populate_vma_page_range+0x264/0x330 mm/gup.c:1932
>>>  __mm_populate+0x27a/0x460 mm/gup.c:2035
>>>  mm_populate include/linux/mm.h:3397 [inline]
>>>  vm_mmap_pgoff+0x2c3/0x3d0 mm/util.c:580
>>>  ksys_mmap_pgoff+0x4eb/0x720 mm/mmap.c:546
>>>  do_syscall_x64 arch/x86/entry/common.c:52 [inline]
>>>  do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
>>>  entry_SYSCALL_64_after_hwframe+0x77/0x7f
>>> RIP: 0033:0x7f1445385d29
>>> Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
>>> RSP: 002b:00007f14451f9038 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
>>> RAX: ffffffffffffffda RBX: 00007f1445575fa0 RCX: 00007f1445385d29
>>> RDX: 0000000000000002 RSI: 0000000000b36000 RDI: 0000000020000000
>>> RBP: 00007f1445401b08 R08: 0000000000000004 R09: 0000000000000000
>>> R10: 0000000000028011 R11: 0000000000000246 R12: 0000000000000000
>>> R13: 0000000000000000 R14: 00007f1445575fa0 R15: 00007ffe4c3a7978
>>>  </TASK>
>>>
>>>
>>> Tested on:
>>>
>>> commit: cd6313be Revert "vmstat: disable vmstat_work on vmstat..
>>> git tree: git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm mm-stable
>>> console output: https://syzkaller.appspot.com/x/log.txt?x=10b34bc4580000
>>> kernel config: https://syzkaller.appspot.com/x/.config?x=d18955ff6936aa88
>>> dashboard link: https://syzkaller.appspot.com/bug?extid=c0673e1f1f054fac28c2
>>> compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
>>
>> I tried reproducing it manually in an x86-64 VM with the provided
>> config and C reproducer, so far no luck :(
>
> Yeah, same here.
>
> Thanks for testing mm-stable with the bot.

I have a suspicion of what might go very wrong here ... let me try
playing with a manual reproducer to trigger the scenario I have in mind.

So far, I don't think this issue is related to the latest VMA changes. We
saw it upstream so far once, and I suspect it's an upstream issue.

-- 
Cheers,

David / dhildenb
end of thread, other threads:[~2025-01-13 15:50 UTC | newest]

Thread overview: 38+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-12-11  1:54 [syzbot] [mm?] WARNING in __folio_rmap_sanity_checks (2) syzbot
2024-12-11 10:06 ` David Hildenbrand
2024-12-28  4:56 ` syzbot
2024-12-28  7:54 ` Hillf Danton
2024-12-28  8:03 ` syzbot
2024-12-28 10:36 ` Hillf Danton
2024-12-28 12:20 ` syzbot
2024-12-29  0:00 ` Hillf Danton
2024-12-29  1:14 ` syzbot
2024-12-29  6:42 ` Hillf Danton
2024-12-29  7:13 ` syzbot
2024-12-30 10:40 ` Hillf Danton
2024-12-30 11:08 ` syzbot
2024-12-30 11:17 ` Hillf Danton
2024-12-30 11:49 ` syzbot
2024-12-30 12:02 ` Hillf Danton
2024-12-30 12:20 ` syzbot
2024-12-31  8:41 ` Hillf Danton
2024-12-31  9:09 ` syzbot
2025-01-10 16:35 ` David Hildenbrand
2025-01-11  1:00 ` Hillf Danton
2025-01-11 10:03 ` David Hildenbrand
2024-12-28 12:25 ` syzbot
2025-01-10 15:48 ` David Hildenbrand
2025-01-10 16:14 ` Matthew Wilcox
2025-01-10 16:19 ` David Hildenbrand
2025-01-10 16:27 ` Matthew Wilcox
2025-01-10 16:31 ` David Hildenbrand
2025-01-10 19:55 ` Liam R. Howlett
2025-01-10 21:24 ` Suren Baghdasaryan
2025-01-11  4:29 ` Liam R. Howlett
2025-01-10 21:03 ` Liam R. Howlett
2025-01-11  6:15 ` syzbot
2025-01-11  9:25 ` David Hildenbrand
2025-01-11  9:54 ` syzbot
2025-01-13 15:39 ` David Hildenbrand
2025-01-13 15:45 ` Liam R. Howlett
2025-01-13 15:50 ` David Hildenbrand