* [syzbot] [xfs?] possible deadlock in xfs_icwalk_ag (3)
From: syzbot @ 2025-08-11 15:30 UTC (permalink / raw)
To: cem, linux-kernel, linux-xfs, syzkaller-bugs
Hello,
syzbot found the following issue on:
HEAD commit: 6e64f4580381 Merge tag 'input-for-v6.17-rc0' of git://git...
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=170e0ea2580000
kernel config: https://syzkaller.appspot.com/x/.config?x=ff0ac94f5fb505cf
dashboard link: https://syzkaller.appspot.com/bug?extid=789028412a4af61a2b61
compiler: Debian clang version 20.1.7 (++20250616065708+6146a88f6049-1~exp1~20250616065826.132), Debian LLD 20.1.7
Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image (non-bootable): https://storage.googleapis.com/syzbot-assets/d900f083ada3/non_bootable_disk-6e64f458.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/6b0d7c92b652/vmlinux-6e64f458.xz
kernel image: https://storage.googleapis.com/syzbot-assets/541b13915f7e/bzImage-6e64f458.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+789028412a4af61a2b61@syzkaller.appspotmail.com
loop0: detected capacity change from 0 to 32768
XFS (loop0): DAX unsupported by block device. Turning off DAX.
XFS (loop0): Mounting V5 Filesystem c496e05e-540d-4c72-b591-04d79d8b4eeb
XFS (loop0): Ending clean mount
XFS (loop0): Quotacheck needed: Please wait.
XFS (loop0): Quotacheck: Done.
============================================
WARNING: possible recursive locking detected
6.16.0-syzkaller-11952-g6e64f4580381 #0 Not tainted
--------------------------------------------
syz.0.0/5359 is trying to acquire lock:
ffff88805250f758 (&xfs_nondir_ilock_class){++++}-{4:4}, at: xfs_reclaim_inode fs/xfs/xfs_icache.c:1042 [inline]
ffff88805250f758 (&xfs_nondir_ilock_class){++++}-{4:4}, at: xfs_icwalk_process_inode fs/xfs/xfs_icache.c:1734 [inline]
ffff88805250f758 (&xfs_nondir_ilock_class){++++}-{4:4}, at: xfs_icwalk_ag+0x12c5/0x1ab0 fs/xfs/xfs_icache.c:1816
but task is already holding lock:
ffff8880525327d8 (&xfs_nondir_ilock_class){++++}-{4:4}, at: xfs_bmap_punch_delalloc_range+0x26d/0x7c0 fs/xfs/xfs_bmap_util.c:452
other info that might help us debug this:
Possible unsafe locking scenario:
       CPU0
       ----
  lock(&xfs_nondir_ilock_class);
  lock(&xfs_nondir_ilock_class);
*** DEADLOCK ***
May be due to missing lock nesting notation
4 locks held by syz.0.0/5359:
#0: ffff8880525329f0 (&sb->s_type->i_mutex_key#20){+.+.}-{4:4}, at: xfs_ilock+0xfe/0x390 fs/xfs/xfs_inode.c:149
#1: ffff888052532b90 (mapping.invalidate_lock#3){+.+.}-{4:4}, at: filemap_invalidate_lock include/linux/fs.h:924 [inline]
#1: ffff888052532b90 (mapping.invalidate_lock#3){+.+.}-{4:4}, at: xfs_buffered_write_iomap_end+0x2b6/0x4c0 fs/xfs/xfs_iomap.c:1993
#2: ffff8880525327d8 (&xfs_nondir_ilock_class){++++}-{4:4}, at: xfs_bmap_punch_delalloc_range+0x26d/0x7c0 fs/xfs/xfs_bmap_util.c:452
#3: ffff888043ac40e0 (&type->s_umount_key#50){.+.+}-{4:4}, at: super_trylock_shared fs/super.c:563 [inline]
#3: ffff888043ac40e0 (&type->s_umount_key#50){.+.+}-{4:4}, at: super_cache_scan+0x91/0x4b0 fs/super.c:197
stack backtrace:
CPU: 0 UID: 0 PID: 5359 Comm: syz.0.0 Not tainted 6.16.0-syzkaller-11952-g6e64f4580381 #0 PREEMPT(full)
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2~bpo12+1 04/01/2014
Call Trace:
<TASK>
dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
print_deadlock_bug+0x28b/0x2a0 kernel/locking/lockdep.c:3041
check_deadlock kernel/locking/lockdep.c:3093 [inline]
validate_chain+0x1a3f/0x2140 kernel/locking/lockdep.c:3895
__lock_acquire+0xab9/0xd20 kernel/locking/lockdep.c:5237
lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5868
down_write_nested+0x9d/0x200 kernel/locking/rwsem.c:1706
xfs_reclaim_inode fs/xfs/xfs_icache.c:1042 [inline]
xfs_icwalk_process_inode fs/xfs/xfs_icache.c:1734 [inline]
xfs_icwalk_ag+0x12c5/0x1ab0 fs/xfs/xfs_icache.c:1816
xfs_icwalk fs/xfs/xfs_icache.c:1864 [inline]
xfs_reclaim_inodes_nr+0x1e3/0x260 fs/xfs/xfs_icache.c:1108
super_cache_scan+0x41b/0x4b0 fs/super.c:228
do_shrink_slab+0x6ef/0x1110 mm/shrinker.c:437
shrink_slab+0xd74/0x10d0 mm/shrinker.c:664
shrink_one+0x28a/0x7c0 mm/vmscan.c:4954
shrink_many mm/vmscan.c:5015 [inline]
lru_gen_shrink_node mm/vmscan.c:5093 [inline]
shrink_node+0x314e/0x3760 mm/vmscan.c:6078
shrink_zones mm/vmscan.c:6336 [inline]
do_try_to_free_pages+0x668/0x1960 mm/vmscan.c:6398
try_to_free_pages+0x8a2/0xdd0 mm/vmscan.c:6644
__perform_reclaim mm/page_alloc.c:4310 [inline]
__alloc_pages_direct_reclaim+0x144/0x300 mm/page_alloc.c:4332
__alloc_pages_slowpath+0x5ff/0xce0 mm/page_alloc.c:4781
__alloc_frozen_pages_noprof+0x319/0x370 mm/page_alloc.c:5161
alloc_pages_mpol+0x232/0x4a0 mm/mempolicy.c:2416
alloc_frozen_pages_noprof mm/mempolicy.c:2487 [inline]
alloc_pages_noprof+0xa9/0x190 mm/mempolicy.c:2507
stack_depot_save_flags+0x777/0x860 lib/stackdepot.c:677
kasan_save_stack mm/kasan/common.c:48 [inline]
kasan_save_track+0x4f/0x80 mm/kasan/common.c:68
poison_kmalloc_redzone mm/kasan/common.c:388 [inline]
__kasan_kmalloc+0x93/0xb0 mm/kasan/common.c:405
kasan_kmalloc include/linux/kasan.h:260 [inline]
__do_kmalloc_node mm/slub.c:4365 [inline]
__kmalloc_node_track_caller_noprof+0x271/0x4e0 mm/slub.c:4384
__do_krealloc mm/slub.c:4942 [inline]
krealloc_noprof+0x124/0x340 mm/slub.c:4995
xfs_iext_realloc_root fs/xfs/libxfs/xfs_iext_tree.c:613 [inline]
xfs_iext_insert_raw+0x131/0x3260 fs/xfs/libxfs/xfs_iext_tree.c:647
xfs_iext_insert+0x36/0x220 fs/xfs/libxfs/xfs_iext_tree.c:684
xfs_bmap_del_extent_delay+0x105b/0x15b0 fs/xfs/libxfs/xfs_bmap.c:4787
xfs_bmap_punch_delalloc_range+0x536/0x7c0 fs/xfs/xfs_bmap_util.c:483
xfs_buffered_write_iomap_end+0x2d2/0x4c0 fs/xfs/xfs_iomap.c:1994
iomap_iter+0x316/0xde0 fs/iomap/iter.c:79
iomap_file_buffered_write+0x7fa/0x9b0 fs/iomap/buffered-io.c:1065
xfs_file_buffered_write+0x209/0x8a0 fs/xfs/xfs_file.c:981
aio_write+0x535/0x7a0 fs/aio.c:1634
__io_submit_one fs/aio.c:-1 [inline]
io_submit_one+0x78b/0x1310 fs/aio.c:2053
__do_sys_io_submit fs/aio.c:2112 [inline]
__se_sys_io_submit+0x185/0x2f0 fs/aio.c:2082
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f17f498ebe9
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f17f5846038 EFLAGS: 00000246 ORIG_RAX: 00000000000000d1
RAX: ffffffffffffffda RBX: 00007f17f4bb5fa0 RCX: 00007f17f498ebe9
RDX: 0000200000000540 RSI: 0000000000000008 RDI: 00007f17f5804000
RBP: 00007f17f4a11e19 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f17f4bb6038 R14: 00007f17f4bb5fa0 R15: 00007ffe6a2d5798
</TASK>
syz.0.0 (5359) used greatest stack depth: 19048 bytes left
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@googlegroups.com.
syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title
If you want to overwrite report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)
If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report
If you want to undo deduplication, reply with:
#syz undup
* Re: [syzbot] [xfs?] possible deadlock in xfs_icwalk_ag (3)
From: Alan Huang @ 2025-08-11 17:03 UTC (permalink / raw)
To: syzbot; +Cc: cem, linux-kernel, linux-xfs, syzkaller-bugs
On Aug 11, 2025, at 23:30, syzbot <syzbot+789028412a4af61a2b61@syzkaller.appspotmail.com> wrote:
> __do_krealloc mm/slub.c:4942 [inline]
> krealloc_noprof+0x124/0x340 mm/slub.c:4995
__GFP_NOLOCKDEP doesn’t work correctly.
> xfs_iext_realloc_root fs/xfs/libxfs/xfs_iext_tree.c:613 [inline]
> xfs_iext_insert_raw+0x131/0x3260 fs/xfs/libxfs/xfs_iext_tree.c:647
> xfs_iext_insert+0x36/0x220 fs/xfs/libxfs/xfs_iext_tree.c:684
> xfs_bmap_del_extent_delay+0x105b/0x15b0 fs/xfs/libxfs/xfs_bmap.c:4787
> xfs_bmap_punch_delalloc_range+0x536/0x7c0 fs/xfs/xfs_bmap_util.c:483
* Re: [syzbot] [xfs?] possible deadlock in xfs_icwalk_ag (3)
From: syzbot @ 2025-12-02 2:06 UTC (permalink / raw)
To: cem, linux-kernel, linux-xfs, mmpgouride, syzkaller-bugs
syzbot has found a reproducer for the following issue on:
HEAD commit: 1d18101a644e Merge tag 'kernel-6.19-rc1.cred' of git://git..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=119238c2580000
kernel config: https://syzkaller.appspot.com/x/.config?x=a1db0fea040c2a9f
dashboard link: https://syzkaller.appspot.com/bug?extid=789028412a4af61a2b61
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=1407a512580000
Downloadable assets:
disk image (non-bootable): https://storage.googleapis.com/syzbot-assets/d900f083ada3/non_bootable_disk-1d18101a.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/98f78b52cccd/vmlinux-1d18101a.xz
kernel image: https://storage.googleapis.com/syzbot-assets/7a8898061bfb/bzImage-1d18101a.xz
mounted in repro: https://storage.googleapis.com/syzbot-assets/9f625d767816/mount_0.gz
fsck result: failed (log: https://syzkaller.appspot.com/x/fsck.log?x=1406a192580000)
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+789028412a4af61a2b61@syzkaller.appspotmail.com
======================================================
WARNING: possible circular locking dependency detected
syzkaller #0 Not tainted
------------------------------------------------------
kswapd0/73 is trying to acquire lock:
ffff88804146c118 (&xfs_nondir_ilock_class){++++}-{4:4}, at: xfs_reclaim_inode fs/xfs/xfs_icache.c:1040 [inline]
ffff88804146c118 (&xfs_nondir_ilock_class){++++}-{4:4}, at: xfs_icwalk_process_inode fs/xfs/xfs_icache.c:1732 [inline]
ffff88804146c118 (&xfs_nondir_ilock_class){++++}-{4:4}, at: xfs_icwalk_ag+0x12c5/0x1ab0 fs/xfs/xfs_icache.c:1814
but task is already holding lock:
ffffffff8e047ae0 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat mm/vmscan.c:7015 [inline]
ffffffff8e047ae0 (fs_reclaim){+.+.}-{0:0}, at: kswapd+0x951/0x2800 mm/vmscan.c:7389
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #1 (fs_reclaim){+.+.}-{0:0}:
lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5868
__fs_reclaim_acquire mm/page_alloc.c:4264 [inline]
fs_reclaim_acquire+0x72/0x100 mm/page_alloc.c:4278
might_alloc include/linux/sched/mm.h:318 [inline]
slab_pre_alloc_hook mm/slub.c:4929 [inline]
slab_alloc_node mm/slub.c:5264 [inline]
__kmalloc_cache_noprof+0x40/0x6f0 mm/slub.c:5766
kmalloc_noprof include/linux/slab.h:957 [inline]
iomap_fill_dirty_folios+0xf4/0x260 fs/iomap/buffered-io.c:1557
xfs_buffered_write_iomap_begin+0xa23/0x1a70 fs/xfs/xfs_iomap.c:1857
iomap_iter+0x5f2/0xf10 fs/iomap/iter.c:110
iomap_zero_range+0x1cc/0xa50 fs/iomap/buffered-io.c:1590
xfs_zero_range+0x9a/0x100 fs/xfs/xfs_iomap.c:2289
xfs_reflink_remap_prep+0x398/0x720 fs/xfs/xfs_reflink.c:1699
xfs_file_remap_range+0x235/0x780 fs/xfs/xfs_file.c:1518
vfs_copy_file_range+0xd81/0x1370 fs/read_write.c:1598
__do_sys_copy_file_range fs/read_write.c:1681 [inline]
__se_sys_copy_file_range+0x2fb/0x470 fs/read_write.c:1648
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xfa/0xfa0 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
-> #0 (&xfs_nondir_ilock_class){++++}-{4:4}:
check_prev_add kernel/locking/lockdep.c:3165 [inline]
check_prevs_add kernel/locking/lockdep.c:3284 [inline]
validate_chain+0xb9b/0x2140 kernel/locking/lockdep.c:3908
__lock_acquire+0xab9/0xd20 kernel/locking/lockdep.c:5237
lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5868
down_write_nested+0x9d/0x200 kernel/locking/rwsem.c:1706
xfs_reclaim_inode fs/xfs/xfs_icache.c:1040 [inline]
xfs_icwalk_process_inode fs/xfs/xfs_icache.c:1732 [inline]
xfs_icwalk_ag+0x12c5/0x1ab0 fs/xfs/xfs_icache.c:1814
xfs_icwalk fs/xfs/xfs_icache.c:1862 [inline]
xfs_reclaim_inodes_nr+0x1e3/0x260 fs/xfs/xfs_icache.c:1106
super_cache_scan+0x41b/0x4b0 fs/super.c:228
do_shrink_slab+0x6ef/0x1110 mm/shrinker.c:437
shrink_slab+0xd74/0x10d0 mm/shrinker.c:664
shrink_one+0x28a/0x7c0 mm/vmscan.c:4955
shrink_many mm/vmscan.c:5016 [inline]
lru_gen_shrink_node mm/vmscan.c:5094 [inline]
shrink_node+0x315d/0x3780 mm/vmscan.c:6081
kswapd_shrink_node mm/vmscan.c:6941 [inline]
balance_pgdat mm/vmscan.c:7124 [inline]
kswapd+0x147c/0x2800 mm/vmscan.c:7389
kthread+0x711/0x8a0 kernel/kthread.c:463
ret_from_fork+0x4bc/0x870 arch/x86/kernel/process.c:158
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
other info that might help us debug this:
Possible unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  lock(fs_reclaim);
                               lock(&xfs_nondir_ilock_class);
                               lock(fs_reclaim);
  lock(&xfs_nondir_ilock_class);
*** DEADLOCK ***
2 locks held by kswapd0/73:
#0: ffffffff8e047ae0 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat mm/vmscan.c:7015 [inline]
#0: ffffffff8e047ae0 (fs_reclaim){+.+.}-{0:0}, at: kswapd+0x951/0x2800 mm/vmscan.c:7389
#1: ffff8880119bc0e0 (&type->s_umount_key#54){++++}-{4:4}, at: super_trylock_shared fs/super.c:563 [inline]
#1: ffff8880119bc0e0 (&type->s_umount_key#54){++++}-{4:4}, at: super_cache_scan+0x91/0x4b0 fs/super.c:197
stack backtrace:
CPU: 0 UID: 0 PID: 73 Comm: kswapd0 Not tainted syzkaller #0 PREEMPT(full)
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2~bpo12+1 04/01/2014
Call Trace:
<TASK>
dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
print_circular_bug+0x2ee/0x310 kernel/locking/lockdep.c:2043
check_noncircular+0x134/0x160 kernel/locking/lockdep.c:2175
check_prev_add kernel/locking/lockdep.c:3165 [inline]
check_prevs_add kernel/locking/lockdep.c:3284 [inline]
validate_chain+0xb9b/0x2140 kernel/locking/lockdep.c:3908
__lock_acquire+0xab9/0xd20 kernel/locking/lockdep.c:5237
lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5868
down_write_nested+0x9d/0x200 kernel/locking/rwsem.c:1706
xfs_reclaim_inode fs/xfs/xfs_icache.c:1040 [inline]
xfs_icwalk_process_inode fs/xfs/xfs_icache.c:1732 [inline]
xfs_icwalk_ag+0x12c5/0x1ab0 fs/xfs/xfs_icache.c:1814
xfs_icwalk fs/xfs/xfs_icache.c:1862 [inline]
xfs_reclaim_inodes_nr+0x1e3/0x260 fs/xfs/xfs_icache.c:1106
super_cache_scan+0x41b/0x4b0 fs/super.c:228
do_shrink_slab+0x6ef/0x1110 mm/shrinker.c:437
shrink_slab+0xd74/0x10d0 mm/shrinker.c:664
shrink_one+0x28a/0x7c0 mm/vmscan.c:4955
shrink_many mm/vmscan.c:5016 [inline]
lru_gen_shrink_node mm/vmscan.c:5094 [inline]
shrink_node+0x315d/0x3780 mm/vmscan.c:6081
kswapd_shrink_node mm/vmscan.c:6941 [inline]
balance_pgdat mm/vmscan.c:7124 [inline]
kswapd+0x147c/0x2800 mm/vmscan.c:7389
kthread+0x711/0x8a0 kernel/kthread.c:463
ret_from_fork+0x4bc/0x870 arch/x86/kernel/process.c:158
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
</TASK>
---
If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.
* Re: [syzbot] [xfs?] possible deadlock in xfs_icwalk_ag (3)
From: syzbot @ 2025-12-02 3:35 UTC (permalink / raw)
To: cem, linux-kernel, linux-xfs, mmpgouride, syzkaller-bugs
syzbot has found a reproducer for the following issue on:
HEAD commit: 1d18101a644e Merge tag 'kernel-6.19-rc1.cred' of git://git..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=13d1a192580000
kernel config: https://syzkaller.appspot.com/x/.config?x=a1db0fea040c2a9f
dashboard link: https://syzkaller.appspot.com/bug?extid=789028412a4af61a2b61
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=10ae38c2580000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=17606512580000
Downloadable assets:
disk image (non-bootable): https://storage.googleapis.com/syzbot-assets/d900f083ada3/non_bootable_disk-1d18101a.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/98f78b52cccd/vmlinux-1d18101a.xz
kernel image: https://storage.googleapis.com/syzbot-assets/7a8898061bfb/bzImage-1d18101a.xz
mounted in repro: https://storage.googleapis.com/syzbot-assets/28373feef258/mount_0.gz
fsck result: failed (log: https://syzkaller.appspot.com/x/fsck.log?x=172e38c2580000)
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+789028412a4af61a2b61@syzkaller.appspotmail.com
======================================================
WARNING: possible circular locking dependency detected
syzkaller #0 Not tainted
------------------------------------------------------
kswapd0/79 is trying to acquire lock:
ffff888041afd798 (&xfs_nondir_ilock_class){++++}-{4:4}, at: xfs_reclaim_inode fs/xfs/xfs_icache.c:1040 [inline]
ffff888041afd798 (&xfs_nondir_ilock_class){++++}-{4:4}, at: xfs_icwalk_process_inode fs/xfs/xfs_icache.c:1732 [inline]
ffff888041afd798 (&xfs_nondir_ilock_class){++++}-{4:4}, at: xfs_icwalk_ag+0x12c5/0x1ab0 fs/xfs/xfs_icache.c:1814
but task is already holding lock:
ffffffff8e047ae0 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat mm/vmscan.c:7015 [inline]
ffffffff8e047ae0 (fs_reclaim){+.+.}-{0:0}, at: kswapd+0x951/0x2800 mm/vmscan.c:7389
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #1 (fs_reclaim){+.+.}-{0:0}:
lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5868
__fs_reclaim_acquire mm/page_alloc.c:4264 [inline]
fs_reclaim_acquire+0x72/0x100 mm/page_alloc.c:4278
might_alloc include/linux/sched/mm.h:318 [inline]
slab_pre_alloc_hook mm/slub.c:4929 [inline]
slab_alloc_node mm/slub.c:5264 [inline]
__kmalloc_cache_noprof+0x40/0x6f0 mm/slub.c:5766
kmalloc_noprof include/linux/slab.h:957 [inline]
iomap_fill_dirty_folios+0xf4/0x260 fs/iomap/buffered-io.c:1557
xfs_buffered_write_iomap_begin+0xa23/0x1a70 fs/xfs/xfs_iomap.c:1857
iomap_iter+0x5f2/0xf10 fs/iomap/iter.c:110
iomap_zero_range+0x1cc/0xa50 fs/iomap/buffered-io.c:1590
iomap_truncate_page+0xb1/0x110 fs/iomap/buffered-io.c:1629
xfs_setattr_size+0x452/0xee0 fs/xfs/xfs_iops.c:996
__xfs_file_fallocate+0x10e1/0x1610 include/linux/fs.h:-1
xfs_file_fallocate+0x27b/0x340 fs/xfs/xfs_file.c:1462
vfs_fallocate+0x669/0x7e0 fs/open.c:342
ksys_fallocate fs/open.c:366 [inline]
__do_sys_fallocate fs/open.c:371 [inline]
__se_sys_fallocate fs/open.c:369 [inline]
__x64_sys_fallocate+0xc0/0x110 fs/open.c:369
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xfa/0xfa0 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
-> #0 (&xfs_nondir_ilock_class){++++}-{4:4}:
check_prev_add kernel/locking/lockdep.c:3165 [inline]
check_prevs_add kernel/locking/lockdep.c:3284 [inline]
validate_chain+0xb9b/0x2140 kernel/locking/lockdep.c:3908
__lock_acquire+0xab9/0xd20 kernel/locking/lockdep.c:5237
lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5868
down_write_nested+0x9d/0x200 kernel/locking/rwsem.c:1706
xfs_reclaim_inode fs/xfs/xfs_icache.c:1040 [inline]
xfs_icwalk_process_inode fs/xfs/xfs_icache.c:1732 [inline]
xfs_icwalk_ag+0x12c5/0x1ab0 fs/xfs/xfs_icache.c:1814
xfs_icwalk fs/xfs/xfs_icache.c:1862 [inline]
xfs_reclaim_inodes_nr+0x1e3/0x260 fs/xfs/xfs_icache.c:1106
super_cache_scan+0x41b/0x4b0 fs/super.c:228
do_shrink_slab+0x6ef/0x1110 mm/shrinker.c:437
shrink_slab_memcg mm/shrinker.c:550 [inline]
shrink_slab+0x7ef/0x10d0 mm/shrinker.c:628
shrink_one+0x28a/0x7c0 mm/vmscan.c:4955
shrink_many mm/vmscan.c:5016 [inline]
lru_gen_shrink_node mm/vmscan.c:5094 [inline]
shrink_node+0x315d/0x3780 mm/vmscan.c:6081
kswapd_shrink_node mm/vmscan.c:6941 [inline]
balance_pgdat mm/vmscan.c:7124 [inline]
kswapd+0x147c/0x2800 mm/vmscan.c:7389
kthread+0x711/0x8a0 kernel/kthread.c:463
ret_from_fork+0x4bc/0x870 arch/x86/kernel/process.c:158
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
other info that might help us debug this:
Possible unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  lock(fs_reclaim);
                               lock(&xfs_nondir_ilock_class);
                               lock(fs_reclaim);
  lock(&xfs_nondir_ilock_class);
*** DEADLOCK ***
2 locks held by kswapd0/79:
#0: ffffffff8e047ae0 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat mm/vmscan.c:7015 [inline]
#0: ffffffff8e047ae0 (fs_reclaim){+.+.}-{0:0}, at: kswapd+0x951/0x2800 mm/vmscan.c:7389
#1: ffff8880113de0e0 (&type->s_umount_key#55){++++}-{4:4}, at: super_trylock_shared fs/super.c:563 [inline]
#1: ffff8880113de0e0 (&type->s_umount_key#55){++++}-{4:4}, at: super_cache_scan+0x91/0x4b0 fs/super.c:197
stack backtrace:
CPU: 0 UID: 0 PID: 79 Comm: kswapd0 Not tainted syzkaller #0 PREEMPT(full)
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2~bpo12+1 04/01/2014
Call Trace:
<TASK>
dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
print_circular_bug+0x2ee/0x310 kernel/locking/lockdep.c:2043
check_noncircular+0x134/0x160 kernel/locking/lockdep.c:2175
check_prev_add kernel/locking/lockdep.c:3165 [inline]
check_prevs_add kernel/locking/lockdep.c:3284 [inline]
validate_chain+0xb9b/0x2140 kernel/locking/lockdep.c:3908
__lock_acquire+0xab9/0xd20 kernel/locking/lockdep.c:5237
lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5868
down_write_nested+0x9d/0x200 kernel/locking/rwsem.c:1706
xfs_reclaim_inode fs/xfs/xfs_icache.c:1040 [inline]
xfs_icwalk_process_inode fs/xfs/xfs_icache.c:1732 [inline]
xfs_icwalk_ag+0x12c5/0x1ab0 fs/xfs/xfs_icache.c:1814
xfs_icwalk fs/xfs/xfs_icache.c:1862 [inline]
xfs_reclaim_inodes_nr+0x1e3/0x260 fs/xfs/xfs_icache.c:1106
super_cache_scan+0x41b/0x4b0 fs/super.c:228
do_shrink_slab+0x6ef/0x1110 mm/shrinker.c:437
shrink_slab_memcg mm/shrinker.c:550 [inline]
shrink_slab+0x7ef/0x10d0 mm/shrinker.c:628
shrink_one+0x28a/0x7c0 mm/vmscan.c:4955
shrink_many mm/vmscan.c:5016 [inline]
lru_gen_shrink_node mm/vmscan.c:5094 [inline]
shrink_node+0x315d/0x3780 mm/vmscan.c:6081
kswapd_shrink_node mm/vmscan.c:6941 [inline]
balance_pgdat mm/vmscan.c:7124 [inline]
kswapd+0x147c/0x2800 mm/vmscan.c:7389
kthread+0x711/0x8a0 kernel/kthread.c:463
ret_from_fork+0x4bc/0x870 arch/x86/kernel/process.c:158
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
</TASK>
---
If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.
* Re: [syzbot] [xfs?] possible deadlock in xfs_icwalk_ag (3)
From: Christoph Hellwig @ 2025-12-02 7:40 UTC (permalink / raw)
To: syzbot
Cc: cem, linux-kernel, linux-xfs, mmpgouride, syzkaller-bugs,
Brian Foster
This looks like the batch zeroing code. I think we have a patch pending
to remove the allocation, but I've lost track of where we are with that.
On Mon, Dec 01, 2025 at 06:06:22PM -0800, syzbot wrote:
> kmalloc_noprof include/linux/slab.h:957 [inline]
> iomap_fill_dirty_folios+0xf4/0x260 fs/iomap/buffered-io.c:1557
> xfs_buffered_write_iomap_begin+0xa23/0x1a70 fs/xfs/xfs_iomap.c:1857
> iomap_iter+0x5f2/0xf10 fs/iomap/iter.c:110
> iomap_zero_range+0x1cc/0xa50 fs/iomap/buffered-io.c:1590
> xfs_zero_range+0x9a/0x100 fs/xfs/xfs_iomap.c:2289
> xfs_reflink_remap_prep+0x398/0x720 fs/xfs/xfs_reflink.c:1699
> xfs_file_remap_range+0x235/0x780 fs/xfs/xfs_file.c:1518
> vfs_copy_file_range+0xd81/0x1370 fs/read_write.c:1598
> __do_sys_copy_file_range fs/read_write.c:1681 [inline]
> __se_sys_copy_file_range+0x2fb/0x470 fs/read_write.c:1648
> do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
> do_syscall_64+0xfa/0xfa0 arch/x86/entry/syscall_64.c:94
> entry_SYSCALL_64_after_hwframe+0x77/0x7f
>
> -> #0 (&xfs_nondir_ilock_class){++++}-{4:4}:
> check_prev_add kernel/locking/lockdep.c:3165 [inline]
> check_prevs_add kernel/locking/lockdep.c:3284 [inline]
> validate_chain+0xb9b/0x2140 kernel/locking/lockdep.c:3908
> __lock_acquire+0xab9/0xd20 kernel/locking/lockdep.c:5237
> lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5868
> down_write_nested+0x9d/0x200 kernel/locking/rwsem.c:1706
> xfs_reclaim_inode fs/xfs/xfs_icache.c:1040 [inline]
> xfs_icwalk_process_inode fs/xfs/xfs_icache.c:1732 [inline]
> xfs_icwalk_ag+0x12c5/0x1ab0 fs/xfs/xfs_icache.c:1814
> xfs_icwalk fs/xfs/xfs_icache.c:1862 [inline]
> xfs_reclaim_inodes_nr+0x1e3/0x260 fs/xfs/xfs_icache.c:1106
> super_cache_scan+0x41b/0x4b0 fs/super.c:228
> do_shrink_slab+0x6ef/0x1110 mm/shrinker.c:437
> shrink_slab+0xd74/0x10d0 mm/shrinker.c:664
> shrink_one+0x28a/0x7c0 mm/vmscan.c:4955
> shrink_many mm/vmscan.c:5016 [inline]
> lru_gen_shrink_node mm/vmscan.c:5094 [inline]
> shrink_node+0x315d/0x3780 mm/vmscan.c:6081
> kswapd_shrink_node mm/vmscan.c:6941 [inline]
> balance_pgdat mm/vmscan.c:7124 [inline]
> kswapd+0x147c/0x2800 mm/vmscan.c:7389
> kthread+0x711/0x8a0 kernel/kthread.c:463
> ret_from_fork+0x4bc/0x870 arch/x86/kernel/process.c:158
> ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
>
> other info that might help us debug this:
>
> Possible unsafe locking scenario:
>
> CPU0 CPU1
> ---- ----
> lock(fs_reclaim);
> lock(&xfs_nondir_ilock_class);
> lock(fs_reclaim);
> lock(&xfs_nondir_ilock_class);
>
> *** DEADLOCK ***
>
> 2 locks held by kswapd0/73:
> #0: ffffffff8e047ae0 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat mm/vmscan.c:7015 [inline]
> #0: ffffffff8e047ae0 (fs_reclaim){+.+.}-{0:0}, at: kswapd+0x951/0x2800 mm/vmscan.c:7389
> #1: ffff8880119bc0e0 (&type->s_umount_key#54){++++}-{4:4}, at: super_trylock_shared fs/super.c:563 [inline]
> #1: ffff8880119bc0e0 (&type->s_umount_key#54){++++}-{4:4}, at: super_cache_scan+0x91/0x4b0 fs/super.c:197
>
> stack backtrace:
> CPU: 0 UID: 0 PID: 73 Comm: kswapd0 Not tainted syzkaller #0 PREEMPT(full)
> Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2~bpo12+1 04/01/2014
> Call Trace:
> <TASK>
> dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
> print_circular_bug+0x2ee/0x310 kernel/locking/lockdep.c:2043
> check_noncircular+0x134/0x160 kernel/locking/lockdep.c:2175
> check_prev_add kernel/locking/lockdep.c:3165 [inline]
> check_prevs_add kernel/locking/lockdep.c:3284 [inline]
> validate_chain+0xb9b/0x2140 kernel/locking/lockdep.c:3908
> __lock_acquire+0xab9/0xd20 kernel/locking/lockdep.c:5237
> lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5868
> down_write_nested+0x9d/0x200 kernel/locking/rwsem.c:1706
> xfs_reclaim_inode fs/xfs/xfs_icache.c:1040 [inline]
> xfs_icwalk_process_inode fs/xfs/xfs_icache.c:1732 [inline]
> xfs_icwalk_ag+0x12c5/0x1ab0 fs/xfs/xfs_icache.c:1814
> xfs_icwalk fs/xfs/xfs_icache.c:1862 [inline]
> xfs_reclaim_inodes_nr+0x1e3/0x260 fs/xfs/xfs_icache.c:1106
> super_cache_scan+0x41b/0x4b0 fs/super.c:228
> do_shrink_slab+0x6ef/0x1110 mm/shrinker.c:437
> shrink_slab+0xd74/0x10d0 mm/shrinker.c:664
> shrink_one+0x28a/0x7c0 mm/vmscan.c:4955
> shrink_many mm/vmscan.c:5016 [inline]
> lru_gen_shrink_node mm/vmscan.c:5094 [inline]
> shrink_node+0x315d/0x3780 mm/vmscan.c:6081
> kswapd_shrink_node mm/vmscan.c:6941 [inline]
> balance_pgdat mm/vmscan.c:7124 [inline]
> kswapd+0x147c/0x2800 mm/vmscan.c:7389
> kthread+0x711/0x8a0 kernel/kthread.c:463
> ret_from_fork+0x4bc/0x870 arch/x86/kernel/process.c:158
> ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
> </TASK>
>
>
> ---
> If you want syzbot to run the reproducer, reply with:
> #syz test: git://repo/address.git branch-or-commit-hash
> If you attach or paste a git patch, syzbot will apply it before testing.
>
---end quoted text---
2025-08-11 15:30 [syzbot] [xfs?] possible deadlock in xfs_icwalk_ag (3) syzbot
2025-08-11 17:03 ` Alan Huang
2025-12-02 2:06 ` syzbot
2025-12-02 7:40 ` Christoph Hellwig
2025-12-02 3:35 ` syzbot