* [syzbot] [btrfs?] INFO: task hung in btrfs_invalidate_folio (3)
@ 2026-03-19 7:21 syzbot
2026-03-26 1:50 ` Forwarded: [PATCH] btrfs: fix hung task when cloning inline extent races with writeback syzbot
2026-03-26 4:25 ` Forwarded: [PATCH] btrfs: fix hung task and deadlock when cloning inline extents syzbot
0 siblings, 2 replies; 5+ messages in thread
From: syzbot @ 2026-03-19 7:21 UTC (permalink / raw)
To: clm, dsterba, linux-btrfs, linux-kernel, syzkaller-bugs
Hello,
syzbot found the following issue on:
HEAD commit: f0caa1d49cc0 Merge tag 'hid-for-linus-2026031701' of git:/..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=10ad24da580000
kernel config: https://syzkaller.appspot.com/x/.config?x=45cb3c58fd963c27
dashboard link: https://syzkaller.appspot.com/bug?extid=63056bf627663701bbbf
compiler: Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=178bb406580000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=11c82216580000
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/cf6c805602fb/disk-f0caa1d4.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/4237ac907af6/vmlinux-f0caa1d4.xz
kernel image: https://storage.googleapis.com/syzbot-assets/fd0193de4f6c/bzImage-f0caa1d4.xz
mounted in repro: https://storage.googleapis.com/syzbot-assets/810a9ef5b7b5/mount_0.gz
fsck result: OK (log: https://syzkaller.appspot.com/x/fsck.log?x=160868da580000)
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+63056bf627663701bbbf@syzkaller.appspotmail.com
INFO: task kworker/u8:7:1053 blocked for more than 143 seconds.
Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/u8:7 state:D stack:23520 pid:1053 tgid:1053 ppid:2 task_flags:0x4208060 flags:0x00080000
Workqueue: writeback wb_workfn (flush-btrfs-46)
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5298 [inline]
__schedule+0x1553/0x5240 kernel/sched/core.c:6911
__schedule_loop kernel/sched/core.c:6993 [inline]
schedule+0x164/0x360 kernel/sched/core.c:7008
wait_extent_bit fs/btrfs/extent-io-tree.c:811 [inline]
btrfs_lock_extent_bits+0x59c/0x700 fs/btrfs/extent-io-tree.c:1914
btrfs_lock_extent fs/btrfs/extent-io-tree.h:152 [inline]
btrfs_invalidate_folio+0x43d/0xc40 fs/btrfs/inode.c:7704
extent_writepage fs/btrfs/extent_io.c:1852 [inline]
extent_write_cache_pages fs/btrfs/extent_io.c:2580 [inline]
btrfs_writepages+0x12ff/0x2440 fs/btrfs/extent_io.c:2713
do_writepages+0x32e/0x550 mm/page-writeback.c:2554
__writeback_single_inode+0x133/0x11a0 fs/fs-writeback.c:1750
writeback_sb_inodes+0x995/0x19d0 fs/fs-writeback.c:2042
wb_writeback+0x456/0xb70 fs/fs-writeback.c:2227
wb_do_writeback fs/fs-writeback.c:2374 [inline]
wb_workfn+0x41a/0xf60 fs/fs-writeback.c:2414
process_one_work kernel/workqueue.c:3276 [inline]
process_scheduled_works+0xb6e/0x18c0 kernel/workqueue.c:3359
worker_thread+0xa53/0xfc0 kernel/workqueue.c:3440
kthread+0x388/0x470 kernel/kthread.c:436
ret_from_fork+0x51e/0xb90 arch/x86/kernel/process.c:158
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
</TASK>
INFO: task syz.4.64:6910 blocked for more than 143 seconds.
Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.4.64 state:D stack:22752 pid:6910 tgid:6905 ppid:5944 task_flags:0x400140 flags:0x00080002
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5298 [inline]
__schedule+0x1553/0x5240 kernel/sched/core.c:6911
__schedule_loop kernel/sched/core.c:6993 [inline]
schedule+0x164/0x360 kernel/sched/core.c:7008
wait_current_trans+0x39f/0x590 fs/btrfs/transaction.c:535
start_transaction+0x6a7/0x1650 fs/btrfs/transaction.c:705
clone_copy_inline_extent fs/btrfs/reflink.c:299 [inline]
btrfs_clone+0x128a/0x24d0 fs/btrfs/reflink.c:529
btrfs_clone_files+0x271/0x3f0 fs/btrfs/reflink.c:750
btrfs_remap_file_range+0x76b/0x1320 fs/btrfs/reflink.c:903
vfs_copy_file_range+0xda7/0x1390 fs/read_write.c:1600
__do_sys_copy_file_range fs/read_write.c:1683 [inline]
__se_sys_copy_file_range+0x2fb/0x480 fs/read_write.c:1650
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f5f73afc799
RSP: 002b:00007f5f7315e028 EFLAGS: 00000246 ORIG_RAX: 0000000000000146
RAX: ffffffffffffffda RBX: 00007f5f73d75fa0 RCX: 00007f5f73afc799
RDX: 0000000000000005 RSI: 0000000000000000 RDI: 0000000000000005
RBP: 00007f5f73b92c99 R08: 0000000000000863 R09: 0000000000000000
R10: 00002000000000c0 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f5f73d76038 R14: 00007f5f73d75fa0 R15: 00007fff138a5068
</TASK>
INFO: task syz.4.64:6975 blocked for more than 143 seconds.
Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.4.64 state:D stack:24736 pid:6975 tgid:6905 ppid:5944 task_flags:0x400040 flags:0x00080002
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5298 [inline]
__schedule+0x1553/0x5240 kernel/sched/core.c:6911
__schedule_loop kernel/sched/core.c:6993 [inline]
schedule+0x164/0x360 kernel/sched/core.c:7008
wb_wait_for_completion+0x3e8/0x790 fs/fs-writeback.c:227
__writeback_inodes_sb_nr+0x24c/0x2d0 fs/fs-writeback.c:2838
try_to_writeback_inodes_sb+0x9a/0xc0 fs/fs-writeback.c:2886
btrfs_start_delalloc_flush fs/btrfs/transaction.c:2175 [inline]
btrfs_commit_transaction+0x82e/0x31a0 fs/btrfs/transaction.c:2364
btrfs_ioctl+0xca7/0xd00 fs/btrfs/ioctl.c:5206
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:597 [inline]
__se_sys_ioctl+0xff/0x170 fs/ioctl.c:583
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f5f73afc799
RSP: 002b:00007f5f7313d028 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007f5f73d76090 RCX: 00007f5f73afc799
RDX: 0000000000000000 RSI: 0000000000009408 RDI: 0000000000000004
RBP: 00007f5f73b92c99 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f5f73d76128 R14: 00007f5f73d76090 R15: 00007fff138a5068
</TASK>
Showing all locks held in the system:
2 locks held by kworker/u8:0/12:
#0: ffff88801f2b8138 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3251 [inline]
#0: ffff88801f2b8138 ((wq_completion)writeback){+.+.}-{0:0}, at: process_scheduled_works+0xa52/0x18c0 kernel/workqueue.c:3359
#1: ffffc90000117c40 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3252 [inline]
#1: ffffc90000117c40 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_scheduled_works+0xa8d/0x18c0 kernel/workqueue.c:3359
1 lock held by khungtaskd/37:
#0: ffffffff8ddcb980
(rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:312 [inline]
(rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:850 [inline]
(rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x2e/0x180 kernel/locking/lockdep.c:6775
2 locks held by kworker/u8:2/40:
#0: ffff88801f2b8138 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3251 [inline]
#0: ffff88801f2b8138 ((wq_completion)writeback){+.+.}-{0:0}, at: process_scheduled_works+0xa52/0x18c0 kernel/workqueue.c:3359
#1: ffffc90000b17c40 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3252 [inline]
#1: ffffc90000b17c40 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_scheduled_works+0xa8d/0x18c0 kernel/workqueue.c:3359
10 locks held by kworker/u8:6/144:
2 locks held by kworker/u8:7/1053:
#0: ffff88801f2b8138 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3251 [inline]
#0: ffff88801f2b8138 ((wq_completion)writeback){+.+.}-{0:0}, at: process_scheduled_works+0xa52/0x18c0 kernel/workqueue.c:3359
#1: ffffc90005affc40 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3252 [inline]
#1: ffffc90005affc40 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_scheduled_works+0xa8d/0x18c0 kernel/workqueue.c:3359
2 locks held by kworker/u8:8/1115:
#0: ffff88801f2b8138 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3251 [inline]
#0: ffff88801f2b8138 ((wq_completion)writeback){+.+.}-{0:0}, at: process_scheduled_works+0xa52/0x18c0 kernel/workqueue.c:3359
#1: ffffc90005ebfc40 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3252 [inline]
#1: ffffc90005ebfc40 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_scheduled_works+0xa8d/0x18c0 kernel/workqueue.c:3359
2 locks held by kworker/u8:9/1138:
#0: ffff88801f2b8138 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3251 [inline]
#0: ffff88801f2b8138 ((wq_completion)writeback){+.+.}-{0:0}, at: process_scheduled_works+0xa52/0x18c0 kernel/workqueue.c:3359
#1: ffffc90005fdfc40 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3252 [inline]
#1: ffffc90005fdfc40 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_scheduled_works+0xa8d/0x18c0 kernel/workqueue.c:3359
5 locks held by kworker/u8:10/1156:
2 locks held by kworker/u8:11/1176:
#0: ffff88801f2b8138 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3251 [inline]
#0: ffff88801f2b8138 ((wq_completion)writeback){+.+.}-{0:0}, at: process_scheduled_works+0xa52/0x18c0 kernel/workqueue.c:3359
#1: ffffc900060bfc40 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3252 [inline]
#1: ffffc900060bfc40 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_scheduled_works+0xa8d/0x18c0 kernel/workqueue.c:3359
3 locks held by kworker/u8:13/4443:
#0: ffff88806a862938 ((wq_completion)loop8){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3251 [inline]
#0: ffff88806a862938 ((wq_completion)loop8){+.+.}-{0:0}, at: process_scheduled_works+0xa52/0x18c0 kernel/workqueue.c:3359
#1: ffffc90010d57c40 ((work_completion)(&worker->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3252 [inline]
#1: ffffc90010d57c40 ((work_completion)(&worker->work)){+.+.}-{0:0}, at: process_scheduled_works+0xa8d/0x18c0 kernel/workqueue.c:3359
#2: ffff88802624a160 (&lo->lo_work_lock){+.+.}-{3:3}, at: spin_lock_irq include/linux/spinlock_rt.h:96 [inline]
#2: ffff88802624a160 (&lo->lo_work_lock){+.+.}-{3:3}, at: loop_process_work+0x125/0x11b0 drivers/block/loop.c:1953
3 locks held by kworker/u9:1/5115:
#0: ffff88803456c938 ((wq_completion)hci6){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3251 [inline]
#0: ffff88803456c938 ((wq_completion)hci6){+.+.}-{0:0}, at: process_scheduled_works+0xa52/0x18c0 kernel/workqueue.c:3359
#1: ffffc9000f917c40 ((work_completion)(&hdev->cmd_sync_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3252 [inline]
#1: ffffc9000f917c40 ((work_completion)(&hdev->cmd_sync_work)){+.+.}-{0:0}, at: process_scheduled_works+0xa8d/0x18c0 kernel/workqueue.c:3359
#2: ffff888031fd8f80 (&hdev->req_lock){+.+.}-{4:4}, at: hci_cmd_sync_work+0x1d3/0x400 net/bluetooth/hci_sync.c:331
1 lock held by syslogd/5147:
#0: ffff88802963f598 (&ei->socket.wq.wait){+.+.}-{3:3}, at: spin_lock include/linux/spinlock_rt.h:45 [inline]
#0: ffff88802963f598 (&ei->socket.wq.wait){+.+.}-{3:3}, at: finish_wait+0xbe/0x1e0 kernel/sched/wait.c:394
3 locks held by klogd/5154:
2 locks held by getty/5553:
#0: ffff8880379060a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
#1: ffffc90003e8b2e0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x462/0x13c0 drivers/tty/n_tty.c:2211
4 locks held by syz.4.64/6910:
#0: ffff888040716480 (sb_writers#12){.+.+}-{0:0}, at: file_start_write include/linux/fs.h:2710 [inline]
#0: ffff888040716480 (sb_writers#12){.+.+}-{0:0}, at: vfs_copy_file_range+0x9bb/0x1390 fs/read_write.c:1588
#1: ffff88805e1eed68 (&sb->s_type->i_mutex_key#24){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:1028 [inline]
#1: ffff88805e1eed68 (&sb->s_type->i_mutex_key#24){+.+.}-{4:4}, at: btrfs_inode_lock+0x51/0xe0 fs/btrfs/inode.c:369
#2: ffff88805e1eebc8 (&ei->i_mmap_lock){++++}-{4:4}, at: btrfs_inode_lock+0xcb/0xe0 fs/btrfs/inode.c:372
#3: ffff888040716770 (sb_internal#2){.+.+}-{0:0}, at: clone_copy_inline_extent fs/btrfs/reflink.c:299 [inline]
#3: ffff888040716770 (sb_internal#2){.+.+}-{0:0}, at: btrfs_clone+0x128a/0x24d0 fs/btrfs/reflink.c:529
3 locks held by syz.4.64/6975:
#0: ffff8880395a7118 (btrfs_trans_num_writers){++++}-{0:0}, at: join_transaction+0x41b/0xc90 fs/btrfs/transaction.c:298
#1: ffff8880395a7140 (btrfs_trans_num_extwriters){++++}-{0:0}, at: join_transaction+0x41b/0xc90 fs/btrfs/transaction.c:298
#2: ffff8880407160d0 (&type->s_umount_key#56){++++}-{4:4}, at: try_to_writeback_inodes_sb+0x22/0xc0 fs/fs-writeback.c:2883
1 lock held by btrfs-transacti/6971:
#0: ffff8880395a4d98 (&fs_info->transaction_kthread_mutex){+.+.}-{4:4}, at: transaction_kthread+0xe4/0x450 fs/btrfs/disk-io.c:1515
4 locks held by syz.0.74/7139:
#0: ffff888028b52480 (sb_writers#12){.+.+}-{0:0}, at: file_start_write include/linux/fs.h:2710 [inline]
#0: ffff888028b52480 (sb_writers#12){.+.+}-{0:0}, at: vfs_copy_file_range+0x9bb/0x1390 fs/read_write.c:1588
#1: ffff8880445477b8 (&sb->s_type->i_mutex_key#24){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:1028 [inline]
#1: ffff8880445477b8 (&sb->s_type->i_mutex_key#24){+.+.}-{4:4}, at: btrfs_inode_lock+0x51/0xe0 fs/btrfs/inode.c:369
#2: ffff888044547618 (&ei->i_mmap_lock){++++}-{4:4}, at: btrfs_inode_lock+0xcb/0xe0 fs/btrfs/inode.c:372
#3: ffff888028b52770 (sb_internal#2){.+.+}-{0:0}, at: clone_copy_inline_extent fs/btrfs/reflink.c:299 [inline]
#3: ffff888028b52770 (sb_internal#2){.+.+}-{0:0}, at: btrfs_clone+0x128a/0x24d0 fs/btrfs/reflink.c:529
3 locks held by syz.0.74/7181:
#0: ffff88803961b118 (btrfs_trans_num_writers){++++}-{0:0}, at: join_transaction+0x41b/0xc90 fs/btrfs/transaction.c:298
#1: ffff88803961b140 (btrfs_trans_num_extwriters){++++}-{0:0}, at: join_transaction+0x41b/0xc90 fs/btrfs/transaction.c:298
#2: ffff888028b520d0 (&type->s_umount_key#56){++++}-{4:4}, at: try_to_writeback_inodes_sb+0x22/0xc0 fs/fs-writeback.c:2883
1 lock held by btrfs-transacti/7159:
#0: ffff888039618d98 (&fs_info->transaction_kthread_mutex){+.+.}-{4:4}, at: transaction_kthread+0xe4/0x450 fs/btrfs/disk-io.c:1515
4 locks held by syz.2.80/7215:
#0: ffff888035edc480 (sb_writers#12){.+.+}-{0:0}, at: file_start_write include/linux/fs.h:2710 [inline]
#0: ffff888035edc480 (sb_writers#12){.+.+}-{0:0}, at: vfs_copy_file_range+0x9bb/0x1390 fs/read_write.c:1588
#1: ffff888040951098 (&sb->s_type->i_mutex_key#24){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:1028 [inline]
#1: ffff888040951098 (&sb->s_type->i_mutex_key#24){+.+.}-{4:4}, at: btrfs_inode_lock+0x51/0xe0 fs/btrfs/inode.c:369
#2: ffff888040950ef8 (&ei->i_mmap_lock){++++}-{4:4}, at: btrfs_inode_lock+0xcb/0xe0 fs/btrfs/inode.c:372
#3: ffff888035edc770 (sb_internal#2){.+.+}-{0:0}, at: clone_copy_inline_extent fs/btrfs/reflink.c:299 [inline]
#3: ffff888035edc770 (sb_internal#2){.+.+}-{0:0}, at: btrfs_clone+0x128a/0x24d0 fs/btrfs/reflink.c:529
3 locks held by syz.2.80/7269:
#0: ffff88805b5a7118 (btrfs_trans_num_writers){++++}-{0:0}, at: join_transaction+0x41b/0xc90 fs/btrfs/transaction.c:298
#1: ffff88805b5a7140 (btrfs_trans_num_extwriters){++++}-{0:0}, at: join_transaction+0x41b/0xc90 fs/btrfs/transaction.c:298
#2: ffff888035edc0d0 (&type->s_umount_key#56){++++}-{4:4}, at: try_to_writeback_inodes_sb+0x22/0xc0 fs/fs-writeback.c:2883
1 lock held by btrfs-transacti/7265:
#0: ffff88805b5a4d98 (&fs_info->transaction_kthread_mutex){+.+.}-{4:4}, at: transaction_kthread+0xe4/0x450 fs/btrfs/disk-io.c:1515
4 locks held by syz.5.96/7519:
#0: ffff888020336480 (sb_writers#12){.+.+}-{0:0}, at: file_start_write include/linux/fs.h:2710 [inline]
#0: ffff888020336480 (sb_writers#12){.+.+}-{0:0}, at: vfs_copy_file_range+0x9bb/0x1390 fs/read_write.c:1588
#1: ffff888044546d68 (&sb->s_type->i_mutex_key#24){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:1028 [inline]
#1: ffff888044546d68 (&sb->s_type->i_mutex_key#24){+.+.}-{4:4}, at: btrfs_inode_lock+0x51/0xe0 fs/btrfs/inode.c:369
#2: ffff888044546bc8 (&ei->i_mmap_lock){++++}-{4:4}, at: btrfs_inode_lock+0xcb/0xe0 fs/btrfs/inode.c:372
#3: ffff888020336770 (sb_internal#2){.+.+}-{0:0}, at: clone_copy_inline_extent fs/btrfs/reflink.c:299 [inline]
#3: ffff888020336770 (sb_internal#2){.+.+}-{0:0}, at: btrfs_clone+0x128a/0x24d0 fs/btrfs/reflink.c:529
3 locks held by syz.5.96/7570:
#0: ffff88803cfe3118 (btrfs_trans_num_writers){++++}-{0:0}, at: join_transaction+0x41b/0xc90 fs/btrfs/transaction.c:298
#1: ffff88803cfe3140 (btrfs_trans_num_extwriters){++++}-{0:0}, at: join_transaction+0x41b/0xc90 fs/btrfs/transaction.c:298
#2: ffff8880203360d0 (&type->s_umount_key#56){++++}-{4:4}, at: try_to_writeback_inodes_sb+0x22/0xc0 fs/fs-writeback.c:2883
1 lock held by btrfs-transacti/7563:
#0: ffff88803cfe0d98 (&fs_info->transaction_kthread_mutex){+.+.}-{4:4}, at: transaction_kthread+0xe4/0x450 fs/btrfs/disk-io.c:1515
5 locks held by kworker/u8:14/8416:
2 locks held by kworker/u8:15/8574:
2 locks held by udevd/8677:
#0: ffff8880222a83b0 (&mm->mmap_lock){++++}-{4:4}, at: mmap_write_lock_killable include/linux/mmap_lock.h:554 [inline]
#0: ffff8880222a83b0 (&mm->mmap_lock){++++}-{4:4}, at: vm_mmap_pgoff+0x237/0x4f0 mm/util.c:579
#1: ffff88803655d068 (&anon_vma->rwsem){++++}-{4:4}, at: anon_vma_lock_read mm/internal.h:235 [inline]
#1: ffff88803655d068 (&anon_vma->rwsem){++++}-{4:4}, at: validate_mm+0x1e3/0x4c0 mm/vma.c:677
4 locks held by syz.6.164/8838:
#0: ffff88801e68a480 (sb_writers#12){.+.+}-{0:0}, at: file_start_write include/linux/fs.h:2710 [inline]
#0: ffff88801e68a480 (sb_writers#12){.+.+}-{0:0}, at: vfs_copy_file_range+0x9bb/0x1390 fs/read_write.c:1588
#1: ffff8880445458c8 (&sb->s_type->i_mutex_key#24){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:1028 [inline]
#1: ffff8880445458c8 (&sb->s_type->i_mutex_key#24){+.+.}-{4:4}, at: btrfs_inode_lock+0x51/0xe0 fs/btrfs/inode.c:369
#2: ffff888044545728 (&ei->i_mmap_lock){++++}-{4:4}, at: btrfs_inode_lock+0xcb/0xe0 fs/btrfs/inode.c:372
#3: ffff88801e68a770 (sb_internal#2){.+.+}-{0:0}, at: clone_copy_inline_extent fs/btrfs/reflink.c:299 [inline]
#3: ffff88801e68a770 (sb_internal#2){.+.+}-{0:0}, at: btrfs_clone+0x128a/0x24d0 fs/btrfs/reflink.c:529
3 locks held by syz.6.164/8888:
#0: ffff88803f40b118 (btrfs_trans_num_writers){++++}-{0:0}, at: join_transaction+0x41b/0xc90 fs/btrfs/transaction.c:298
#1: ffff88803f40b140 (btrfs_trans_num_extwriters){++++}-{0:0}, at: join_transaction+0x41b/0xc90 fs/btrfs/transaction.c:298
#2: ffff88801e68a0d0 (&type->s_umount_key#56){++++}-{4:4}, at: try_to_writeback_inodes_sb+0x22/0xc0 fs/fs-writeback.c:2883
1 lock held by btrfs-transacti/8884:
#0: ffff88803f408d98 (&fs_info->transaction_kthread_mutex){+.+.}-{4:4}, at: transaction_kthread+0xe4/0x450 fs/btrfs/disk-io.c:1515
4 locks held by syz.1.172/8963:
#0: ffff88805c294480 (sb_writers#12){.+.+}-{0:0}, at: file_start_write include/linux/fs.h:2710 [inline]
#0: ffff88805c294480 (sb_writers#12){.+.+}-{0:0}, at: vfs_copy_file_range+0x9bb/0x1390 fs/read_write.c:1588
#1: ffff88805abcaf88 (&sb->s_type->i_mutex_key#24){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:1028 [inline]
#1: ffff88805abcaf88 (&sb->s_type->i_mutex_key#24){+.+.}-{4:4}, at: btrfs_inode_lock+0x51/0xe0 fs/btrfs/inode.c:369
#2: ffff88805abcade8 (&ei->i_mmap_lock){++++}-{4:4}, at: btrfs_inode_lock+0xcb/0xe0 fs/btrfs/inode.c:372
#3: ffff88805c294770 (sb_internal#2){.+.+}-{0:0}, at: clone_copy_inline_extent fs/btrfs/reflink.c:299 [inline]
#3: ffff88805c294770 (sb_internal#2){.+.+}-{0:0}, at: btrfs_clone+0x128a/0x24d0 fs/btrfs/reflink.c:529
3 locks held by syz.1.172/9027:
#0: ffff88802a7fb118 (btrfs_trans_num_writers){++++}-{0:0}, at: join_transaction+0x41b/0xc90 fs/btrfs/transaction.c:298
#1: ffff88802a7fb140 (btrfs_trans_num_extwriters){++++}-{0:0}, at: join_transaction+0x41b/0xc90 fs/btrfs/transaction.c:298
#2: ffff88805c2940d0 (&type->s_umount_key#56){++++}-{4:4}, at: try_to_writeback_inodes_sb+0x22/0xc0 fs/fs-writeback.c:2883
1 lock held by btrfs-transacti/9025:
#0: ffff88802a7f8d98 (&fs_info->transaction_kthread_mutex){+.+.}-{4:4}, at: transaction_kthread+0xe4/0x450 fs/btrfs/disk-io.c:1515
2 locks held by kworker/u8:16/9083:
#0: ffff88801f2b8138 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3251 [inline]
#0: ffff88801f2b8138 ((wq_completion)writeback){+.+.}-{0:0}, at: process_scheduled_works+0xa52/0x18c0 kernel/workqueue.c:3359
#1: ffffc90010587c40 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3252 [inline]
#1: ffffc90010587c40 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_scheduled_works+0xa8d/0x18c0 kernel/workqueue.c:3359
4 locks held by kworker/u8:17/9084:
3 locks held by kworker/u8:18/9197:
2 locks held by kworker/u8:20/9396:
#0: ffff88807150d138 ((wq_completion)btrfs-flush_delalloc#198){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3251 [inline]
#0: ffff88807150d138 ((wq_completion)btrfs-flush_delalloc#198){+.+.}-{0:0}, at: process_scheduled_works+0xa52/0x18c0 kernel/workqueue.c:3359
#1: ffffc90011d17c40 ((work_completion)(&work->normal_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3252 [inline]
#1: ffffc90011d17c40 ((work_completion)(&work->normal_work)){+.+.}-{0:0}, at: process_scheduled_works+0xa8d/0x18c0 kernel/workqueue.c:3359
4 locks held by syz.3.229/9896:
#0: ffff888071198480 (sb_writers#12){.+.+}-{0:0}, at: file_start_write include/linux/fs.h:2710 [inline]
#0: ffff888071198480 (sb_writers#12){.+.+}-{0:0}, at: vfs_copy_file_range+0x9bb/0x1390 fs/read_write.c:1588
#1: ffff88805e034428 (&sb->s_type->i_mutex_key#24){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:1028 [inline]
#1: ffff88805e034428 (&sb->s_type->i_mutex_key#24){+.+.}-{4:4}, at: btrfs_inode_lock+0x51/0xe0 fs/btrfs/inode.c:369
#2: ffff88805e034288 (&ei->i_mmap_lock){++++}-{4:4}, at: btrfs_inode_lock+0xcb/0xe0 fs/btrfs/inode.c:372
#3: ffff888071198770 (sb_internal#2){.+.+}-{0:0}, at: clone_copy_inline_extent fs/btrfs/reflink.c:299 [inline]
#3: ffff888071198770 (sb_internal#2){.+.+}-{0:0}, at: btrfs_clone+0x128a/0x24d0 fs/btrfs/reflink.c:529
3 locks held by syz.3.229/9963:
#0: ffff888070247118 (btrfs_trans_num_writers){++++}-{0:0}, at: join_transaction+0x41b/0xc90 fs/btrfs/transaction.c:298
#1: ffff888070247140 (btrfs_trans_num_extwriters){++++}-{0:0}, at: join_transaction+0x41b/0xc90 fs/btrfs/transaction.c:298
#2: ffff8880711980d0 (&type->s_umount_key#56){++++}-{4:4}, at: try_to_writeback_inodes_sb+0x22/0xc0 fs/fs-writeback.c:2883
4 locks held by syz.7.242/10098:
2 locks held by syz.7.242/10121:
#0: ffff888063b19020 (&fs_info->ordered_operations_mutex){+.+.}-{4:4}, at: btrfs_wait_ordered_roots+0xe7/0x6f0 fs/btrfs/ordered-data.c:823
#1: ffff888063c209a8 (&root->ordered_extent_mutex){+.+.}-{4:4}, at: btrfs_wait_ordered_extents+0x23d/0xcf0 fs/btrfs/ordered-data.c:767
2 locks held by syz.9.243/10101:
#0: ffff8880355860d0 (&type->s_umount_key#55/1){+.+.}-{4:4}, at: alloc_super+0x28c/0xac0 fs/super.c:345
#1: ffffffff8dc6bab8 (wq_pool_mutex){+.+.}-{4:4}, at: apply_wqattrs_lock kernel/workqueue.c:5279 [inline]
#1: ffffffff8dc6bab8 (wq_pool_mutex){+.+.}-{4:4}, at: __alloc_workqueue+0x9ef/0x1e90 kernel/workqueue.c:5832
5 locks held by syz.8.244/10103:
2 locks held by syz.4.245/10109:
4 locks held by udevd/10130:
=============================================
NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 37 Comm: khungtaskd Not tainted syzkaller #0 PREEMPT_{RT,(full)}
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2026
Call Trace:
<TASK>
dump_stack_lvl+0xe8/0x150 lib/dump_stack.c:120
nmi_cpu_backtrace+0x274/0x2d0 lib/nmi_backtrace.c:113
nmi_trigger_cpumask_backtrace+0x17a/0x300 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:161 [inline]
__sys_info lib/sys_info.c:157 [inline]
sys_info+0x135/0x170 lib/sys_info.c:165
check_hung_uninterruptible_tasks kernel/hung_task.c:346 [inline]
watchdog+0xfd9/0x1030 kernel/hung_task.c:515
kthread+0x388/0x470 kernel/kthread.c:436
ret_from_fork+0x51e/0xb90 arch/x86/kernel/process.c:158
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
</TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 17 Comm: pr/legacy Not tainted syzkaller #0 PREEMPT_{RT,(full)}
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2026
RIP: 0010:io_serial_in+0x77/0xc0 drivers/tty/serial/8250/8250_port.c:400
Code: e8 be 9b 94 fc 44 89 f9 d3 e3 49 83 ee 80 4c 89 f0 48 c1 e8 03 42 80 3c 20 00 74 08 4c 89 f7 e8 ff b2 fa fc 41 03 1e 89 da ec <0f> b6 c0 5b 41 5c 41 5e 41 5f c3 cc cc cc cc cc 44 89 f9 80 e1 07
RSP: 0018:ffffc900001679d0 EFLAGS: 00000202
RAX: 1ffffffff332a600 RBX: 00000000000003fd RCX: 0000000000000000
RDX: 00000000000003fd RSI: 0000000000000000 RDI: 0000000000000000
RBP: ffffffff99953750 R08: 0000000000000000 R09: 0000000000000000
R10: dffffc0000000000 R11: ffffffff852fdaf0 R12: dffffc0000000000
R13: 0000000000000000 R14: ffffffff999534c0 R15: 0000000000000000
FS: 0000000000000000(0000) GS:ffff88812633c000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fe8d6b8bb60 CR3: 000000002c606000 CR4: 00000000003526f0
Call Trace:
<TASK>
serial_in drivers/tty/serial/8250/8250.h:128 [inline]
serial_lsr_in drivers/tty/serial/8250/8250.h:150 [inline]
wait_for_lsr+0x1aa/0x2f0 drivers/tty/serial/8250/8250_port.c:1961
fifo_wait_for_lsr drivers/tty/serial/8250/8250_port.c:3234 [inline]
serial8250_console_fifo_write drivers/tty/serial/8250/8250_port.c:3257 [inline]
serial8250_console_write+0x120d/0x1b90 drivers/tty/serial/8250/8250_port.c:3342
console_emit_next_record kernel/printk/printk.c:3163 [inline]
console_flush_one_record+0x68b/0xb90 kernel/printk/printk.c:3269
legacy_kthread_func+0x1b6/0x250 kernel/printk/printk.c:3712
kthread+0x388/0x470 kernel/kthread.c:436
ret_from_fork+0x51e/0xb90 arch/x86/kernel/process.c:158
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
</TASK>
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@googlegroups.com.
syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title
If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.
If you want to overwrite report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)
If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report
If you want to undo deduplication, reply with:
#syz undup
^ permalink raw reply	[flat|nested] 5+ messages in thread

* Forwarded: [PATCH] btrfs: fix hung task when cloning inline extent races with writeback
  2026-03-19  7:21 [syzbot] [btrfs?] INFO: task hung in btrfs_invalidate_folio (3) syzbot
@ 2026-03-26  1:50 ` syzbot
  2026-03-26  4:25 ` Forwarded: [PATCH] btrfs: fix hung task and deadlock when cloning inline extents syzbot
  1 sibling, 0 replies; 5+ messages in thread
From: syzbot @ 2026-03-26  1:50 UTC (permalink / raw)
  To: linux-kernel, syzkaller-bugs

For archival purposes, forwarding an incoming command email to
linux-kernel@vger.kernel.org, syzkaller-bugs@googlegroups.com.

***

Subject: [PATCH] btrfs: fix hung task when cloning inline extent races with writeback
Author: kartikey406@gmail.com

From: Deepanshu Kartikey <Kartikey406@gmail.com>

#syz test: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master

When cloning an inline extent, clone_copy_inline_extent() calls
copy_inline_to_page() which locks an extent range in the destination
inode's io_tree, dirties a page with the inline data, and sets
BTRFS_INODE_NO_DELALLOC_FLUSH on the inode. At this point i_size is
still 0 since clone_finish_inode_update() has not been called yet.

Then clone_copy_inline_extent() calls start_transaction() which may
block waiting for the current transaction to commit. While blocked, the
transaction commit calls btrfs_start_delalloc_flush() which calls
try_to_writeback_inodes_sb(), queuing a kworker to flush the clone
destination inode. The kworker calls btrfs_writepages() ->
extent_writepage() and since i_size is still 0, the dirty page appears
to be beyond EOF.

This causes extent_writepage() to call folio_invalidate() ->
btrfs_invalidate_folio() -> btrfs_lock_extent() which blocks forever
because the clone operation holds that lock, creating a circular
deadlock:

  clone   -> waits for transaction commit to finish
  commit  -> waits for kworker writeback to finish
  kworker -> waits for extent lock held by clone

Additionally any periodic background writeback that races with the
clone operation before i_size is updated will also block on the same
extent lock causing a hung task warning.

The flag BTRFS_INODE_NO_DELALLOC_FLUSH was introduced by commit
3d45f221ce62 to prevent this deadlock but was only checked inside
start_delalloc_inodes(), which is only reached through the btrfs
metadata reclaim path. The transaction commit path goes through
try_to_writeback_inodes_sb() which is a VFS function that bypasses
start_delalloc_inodes() entirely, so the flag was never checked there.

Fix this by checking BTRFS_INODE_NO_DELALLOC_FLUSH at the top of
btrfs_writepages() and returning early if set. This catches all
writeback paths since every writeback on a btrfs inode eventually
calls btrfs_writepages(). The inode will be safely written after the
clone operation finishes and clears the flag, at which point all locks
are released and i_size is properly updated.

Also change the local variable type from 'struct inode *' to
'struct btrfs_inode *' to avoid the double BTRFS_I() conversion.
Fixes: 3d45f221ce62 ("btrfs: fix deadlock when cloning inline extent and low on free metadata space")
CC: stable@vger.kernel.org
Reported-by: syzbot+63056bf627663701bbbf@syzkaller.appspotmail.com
Signed-off-by: Deepanshu Kartikey <kartikey406@gmail.com>
---
 fs/btrfs/extent_io.c | 39 ++++++++++++++++++++++++++++++++++++---
 1 file changed, 36 insertions(+), 3 deletions(-)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 5f97a3d2a8d7..f7df7c0c8955 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -2698,21 +2698,54 @@ void extent_write_locked_range(struct inode *inode, const struct folio *locked_f
 int btrfs_writepages(struct address_space *mapping, struct writeback_control *wbc)
 {
-	struct inode *inode = mapping->host;
+	struct btrfs_inode *inode = BTRFS_I(mapping->host);
 	int ret = 0;
 	struct btrfs_bio_ctrl bio_ctrl = {
 		.wbc = wbc,
 		.opf = REQ_OP_WRITE | wbc_to_write_flags(wbc),
 	};
 
+	/*
+	 * If this inode is being used for a clone/reflink operation that
+	 * copied an inline extent into a page of the destination inode, skip
+	 * writeback to avoid a deadlock or a long blocked task.
+	 *
+	 * The clone operation holds the extent range locked in the inode's
+	 * io_tree for its entire duration. Any writeback attempt on this
+	 * inode will block trying to lock that same extent range inside
+	 * writepage_delalloc() or btrfs_invalidate_folio(), causing a
+	 * hung task.
+	 *
+	 * When writeback is triggered from the transaction commit path via
+	 * btrfs_start_delalloc_flush() -> try_to_writeback_inodes_sb(),
+	 * this becomes a true circular deadlock:
+	 *
+	 *   clone   -> waits for transaction commit to finish
+	 *   commit  -> waits for kworker writeback to finish
+	 *   kworker -> waits for extent lock held by clone
+	 *
+	 * The flag BTRFS_INODE_NO_DELALLOC_FLUSH was already checked in
+	 * start_delalloc_inodes() but only for the btrfs metadata reclaim
+	 * path. The transaction commit path goes through
+	 * try_to_writeback_inodes_sb() which bypasses that check entirely
+	 * and calls btrfs_writepages() directly.
+	 *
+	 * By checking the flag here we catch all writeback paths. The inode
+	 * will be safely written after the clone operation finishes and
+	 * clears BTRFS_INODE_NO_DELALLOC_FLUSH, at which point all locks
+	 * are released and writeback can proceed normally.
+	 */
+	if (test_bit(BTRFS_INODE_NO_DELALLOC_FLUSH, &inode->runtime_flags))
+		return 0;
+
 	/*
 	 * Allow only a single thread to do the reloc work in zoned mode to
 	 * protect the write pointer updates.
 	 */
-	btrfs_zoned_data_reloc_lock(BTRFS_I(inode));
+	btrfs_zoned_data_reloc_lock(inode);
 	ret = extent_write_cache_pages(mapping, &bio_ctrl);
 	submit_write_bio(&bio_ctrl, ret);
-	btrfs_zoned_data_reloc_unlock(BTRFS_I(inode));
+	btrfs_zoned_data_reloc_unlock(inode);
 	return ret;
 }
-- 
2.43.0

^ permalink raw reply related	[flat|nested] 5+ messages in thread
* Forwarded: [PATCH] btrfs: fix hung task and deadlock when cloning inline extents
  2026-03-19  7:21 [syzbot] [btrfs?] INFO: task hung in btrfs_invalidate_folio (3) syzbot
  2026-03-26  1:50 ` Forwarded: [PATCH] btrfs: fix hung task when cloning inline extent races with writeback syzbot
@ 2026-03-26  4:25 ` syzbot
  1 sibling, 0 replies; 5+ messages in thread
From: syzbot @ 2026-03-26  4:25 UTC (permalink / raw)
  To: linux-kernel, syzkaller-bugs

For archival purposes, forwarding an incoming command email to
linux-kernel@vger.kernel.org, syzkaller-bugs@googlegroups.com.

***

Subject: [PATCH] btrfs: fix hung task and deadlock when cloning inline extents
Author: kartikey406@gmail.com

#syz test: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master

When cloning or deduplicating inline extents, the clone operation sets
BTRFS_INODE_NO_DELALLOC_FLUSH on the destination inode inside
copy_inline_to_page(), after the extent range is locked and the page is
dirtied. This creates a race window where writeback can be triggered
before the flag is set, causing a hung task or deadlock.

The sequence that causes the deadlock is:

1. copy_inline_to_page() locks extent range [0, block_size) in the
   destination inode's io_tree and dirties a page. The inode's i_size
   is still 0 at this point, since clone_finish_inode_update() has not
   been called yet.

2. The clone calls start_transaction(), which may block waiting for the
   current transaction to commit (pid A waits for pid B).

3. The transaction commit (pid B) calls btrfs_start_delalloc_flush(),
   which calls try_to_writeback_inodes_sb(), queuing a kworker to flush
   the destination inode.

4. The kworker calls btrfs_writepages() -> extent_writepage(). Since
   i_size is still 0, the dirty page appears beyond EOF, causing
   extent_writepage() to call folio_invalidate() ->
   btrfs_invalidate_folio() -> btrfs_lock_extent(), which blocks
   forever because the clone already holds that lock.

This creates a circular deadlock:

  clone   -> waits for transaction commit (pid A waits for pid B)
  commit  -> waits for kworker writeback (pid B waits for pid C)
  kworker -> waits for extent lock held by clone (pid C waits for pid A)

Additionally, any periodic background writeback racing with the clone
operation will block on the same extent lock, causing a hung task
warning even without the circular deadlock.

The existing fix set BTRFS_INODE_NO_DELALLOC_FLUSH inside
copy_inline_to_page() after marking the range as delalloc, leaving a
race window where writeback could start before the flag was set. Also,
the flag was only checked in start_delalloc_inodes(), which is only
reached through the btrfs metadata reclaim path. The transaction commit
path goes through try_to_writeback_inodes_sb(), a VFS function that
bypasses start_delalloc_inodes() entirely.

Fix this with two changes:

1. Move the set_bit(BTRFS_INODE_NO_DELALLOC_FLUSH) to before the
   btrfs_lock_extent() call in both btrfs_clone_files() and
   btrfs_extent_same_range(), and clear it after btrfs_unlock_extent().
   This ensures the flag is set before the extent lock is taken and
   before any page is dirtied, closing the race window completely. The
   destination inode is protected by i_mutex for the entire operation,
   so a concurrent clone/dedupe on the same inode is impossible, making
   set_bit/clear_bit safe without reference counting.

2. Check BTRFS_INODE_NO_DELALLOC_FLUSH at the top of btrfs_writepages()
   and return early if it is set. This catches all writeback paths,
   since both the VFS writeback path via try_to_writeback_inodes_sb()
   and the btrfs delalloc path eventually call btrfs_writepages(). The
   inode will be safely written after the clone operation finishes and
   clears the flag, at which point all locks are released and i_size is
   properly set.
Reported-by: syzbot+63056bf627663701bbbf@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=63056bf627663701bbbf
Signed-off-by: Deepanshu Kartikey <Kartikey406@gmail.com>
---
 fs/btrfs/extent_io.c | 39 ++++++++++++++++++++++++++++++++++++---
 fs/btrfs/reflink.c   | 21 +++++----------------
 2 files changed, 41 insertions(+), 19 deletions(-)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 5f97a3d2a8d7..f7df7c0c8955 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -2698,21 +2698,54 @@ void extent_write_locked_range(struct inode *inode, const struct folio *locked_f
 int btrfs_writepages(struct address_space *mapping, struct writeback_control *wbc)
 {
-	struct inode *inode = mapping->host;
+	struct btrfs_inode *inode = BTRFS_I(mapping->host);
 	int ret = 0;
 	struct btrfs_bio_ctrl bio_ctrl = {
 		.wbc = wbc,
 		.opf = REQ_OP_WRITE | wbc_to_write_flags(wbc),
 	};
 
+	/*
+	 * If this inode is being used for a clone/reflink operation that
+	 * copied an inline extent into a page of the destination inode, skip
+	 * writeback to avoid a deadlock or a long blocked task.
+	 *
+	 * The clone operation holds the extent range locked in the inode's
+	 * io_tree for its entire duration. Any writeback attempt on this
+	 * inode will block trying to lock that same extent range inside
+	 * writepage_delalloc() or btrfs_invalidate_folio(), causing a
+	 * hung task.
+	 *
+	 * When writeback is triggered from the transaction commit path via
+	 * btrfs_start_delalloc_flush() -> try_to_writeback_inodes_sb(),
+	 * this becomes a true circular deadlock:
+	 *
+	 *   clone   -> waits for transaction commit to finish
+	 *   commit  -> waits for kworker writeback to finish
+	 *   kworker -> waits for extent lock held by clone
+	 *
+	 * The flag BTRFS_INODE_NO_DELALLOC_FLUSH was already checked in
+	 * start_delalloc_inodes() but only for the btrfs metadata reclaim
+	 * path. The transaction commit path goes through
+	 * try_to_writeback_inodes_sb() which bypasses that check entirely
+	 * and calls btrfs_writepages() directly.
+	 *
+	 * By checking the flag here we catch all writeback paths. The inode
+	 * will be safely written after the clone operation finishes and
+	 * clears BTRFS_INODE_NO_DELALLOC_FLUSH, at which point all locks
+	 * are released and writeback can proceed normally.
+	 */
+	if (test_bit(BTRFS_INODE_NO_DELALLOC_FLUSH, &inode->runtime_flags))
+		return 0;
+
 	/*
 	 * Allow only a single thread to do the reloc work in zoned mode to
 	 * protect the write pointer updates.
 	 */
-	btrfs_zoned_data_reloc_lock(BTRFS_I(inode));
+	btrfs_zoned_data_reloc_lock(inode);
 	ret = extent_write_cache_pages(mapping, &bio_ctrl);
 	submit_write_bio(&bio_ctrl, ret);
-	btrfs_zoned_data_reloc_unlock(BTRFS_I(inode));
+	btrfs_zoned_data_reloc_unlock(inode);
 	return ret;
 }
diff --git a/fs/btrfs/reflink.c b/fs/btrfs/reflink.c
index 314cb95ba846..f3387baa71ae 100644
--- a/fs/btrfs/reflink.c
+++ b/fs/btrfs/reflink.c
@@ -100,19 +100,6 @@ static int copy_inline_to_page(struct btrfs_inode *inode,
 	if (ret)
 		goto out_unlock;
 
-	/*
-	 * After dirtying the page our caller will need to start a transaction,
-	 * and if we are low on metadata free space, that can cause flushing of
-	 * delalloc for all inodes in order to get metadata space released.
-	 * However we are holding the range locked for the whole duration of
-	 * the clone/dedupe operation, so we may deadlock if that happens and no
-	 * other task releases enough space. So mark this inode as not being
-	 * possible to flush to avoid such deadlock. We will clear that flag
-	 * when we finish cloning all extents, since a transaction is started
-	 * after finding each extent to clone.
-	 */
-	set_bit(BTRFS_INODE_NO_DELALLOC_FLUSH, &inode->runtime_flags);
-
 	if (comp_type == BTRFS_COMPRESS_NONE) {
 		memcpy_to_folio(folio, offset_in_folio(folio, file_offset),
 				data_start, datal);
@@ -610,8 +597,6 @@ static int btrfs_clone(struct inode *src, struct inode *inode,
 	}
 
 out:
-	clear_bit(BTRFS_INODE_NO_DELALLOC_FLUSH, &BTRFS_I(inode)->runtime_flags);
-
 	return ret;
 }
 
@@ -644,11 +629,12 @@ static int btrfs_extent_same_range(struct btrfs_inode *src, u64 loff, u64 len,
 	 * because we have already locked the inode's i_mmap_lock in exclusive
 	 * mode.
 	 */
+	set_bit(BTRFS_INODE_NO_DELALLOC_FLUSH, &dst->runtime_flags);
 	btrfs_lock_extent(&dst->io_tree, dst_loff, end, &cached_state);
 	ret = btrfs_clone(&src->vfs_inode, &dst->vfs_inode, loff, len,
 			  ALIGN(len, bs), dst_loff, 1);
 	btrfs_unlock_extent(&dst->io_tree, dst_loff, end, &cached_state);
-
+	clear_bit(BTRFS_INODE_NO_DELALLOC_FLUSH, &dst->runtime_flags);
 	btrfs_btree_balance_dirty(fs_info);
 
 	return ret;
@@ -746,9 +732,12 @@ static noinline int btrfs_clone_files(struct file *file, struct file *file_src,
 	 * mode.
 	 */
 	end = destoff + len - 1;
+	set_bit(BTRFS_INODE_NO_DELALLOC_FLUSH, &BTRFS_I(inode)->runtime_flags);
 	btrfs_lock_extent(&BTRFS_I(inode)->io_tree, destoff, end, &cached_state);
 	ret = btrfs_clone(src, inode, off, olen, len, destoff, 0);
 	btrfs_unlock_extent(&BTRFS_I(inode)->io_tree, destoff, end, &cached_state);
+	clear_bit(BTRFS_INODE_NO_DELALLOC_FLUSH, &BTRFS_I(inode)->runtime_flags);
+
 	if (ret < 0)
 		return ret;
-- 
2.43.0

^ permalink raw reply related	[flat|nested] 5+ messages in thread
[parent not found: <20260326014953.16727-1-kartikey406@gmail.com>]
* Re: [syzbot] [btrfs?] INFO: task hung in btrfs_invalidate_folio (3)
  [not found] <20260326014953.16727-1-kartikey406@gmail.com>
@ 2026-03-26  2:46 ` syzbot
  0 siblings, 0 replies; 5+ messages in thread
From: syzbot @ 2026-03-26  2:46 UTC (permalink / raw)
  To: kartikey406, linux-kernel, stable, syzkaller-bugs

Hello,

syzbot has tested the proposed patch but the reproducer is still triggering an issue:
INFO: task hung in btrfs_invalidate_folio

INFO: task kworker/u8:7:151 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/u8:7 state:D stack:21504 pid:151 tgid:151 ppid:2 task_flags:0x4208060 flags:0x00080000
Workqueue: writeback wb_workfn (flush-btrfs-6)
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5298 [inline]
 __schedule+0x1553/0x5240 kernel/sched/core.c:6911
 __schedule_loop kernel/sched/core.c:6993 [inline]
 schedule+0x164/0x360 kernel/sched/core.c:7008
 wait_extent_bit fs/btrfs/extent-io-tree.c:811 [inline]
 btrfs_lock_extent_bits+0x59c/0x700 fs/btrfs/extent-io-tree.c:1914
 btrfs_lock_extent fs/btrfs/extent-io-tree.h:152 [inline]
 btrfs_invalidate_folio+0x43d/0xc40 fs/btrfs/inode.c:7718
 extent_writepage fs/btrfs/extent_io.c:1852 [inline]
 extent_write_cache_pages fs/btrfs/extent_io.c:2580 [inline]
 btrfs_writepages+0x1369/0x24a0 fs/btrfs/extent_io.c:2746
 do_writepages+0x32e/0x550 mm/page-writeback.c:2554
 __writeback_single_inode+0x133/0x11a0 fs/fs-writeback.c:1750
 writeback_sb_inodes+0x995/0x19d0 fs/fs-writeback.c:2042
 wb_writeback+0x456/0xb70 fs/fs-writeback.c:2227
 wb_do_writeback fs/fs-writeback.c:2374 [inline]
 wb_workfn+0x41a/0xf60 fs/fs-writeback.c:2414
 process_one_work kernel/workqueue.c:3276 [inline]
 process_scheduled_works+0xb6e/0x18c0 kernel/workqueue.c:3359
 worker_thread+0xa53/0xfc0 kernel/workqueue.c:3440
 kthread+0x388/0x470 kernel/kthread.c:436
 ret_from_fork+0x51e/0xb90 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
INFO: task syz.0.22:6562 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.0.22 state:D stack:22752 pid:6562 tgid:6561 ppid:6245 task_flags:0x400140 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5298 [inline]
 __schedule+0x1553/0x5240 kernel/sched/core.c:6911
 __schedule_loop kernel/sched/core.c:6993 [inline]
 schedule+0x164/0x360 kernel/sched/core.c:7008
 wait_current_trans+0x39f/0x590 fs/btrfs/transaction.c:535
 start_transaction+0x6a7/0x1650 fs/btrfs/transaction.c:705
 clone_copy_inline_extent fs/btrfs/reflink.c:299 [inline]
 btrfs_clone+0x128a/0x24d0 fs/btrfs/reflink.c:529
 btrfs_clone_files+0x271/0x3f0 fs/btrfs/reflink.c:750
 btrfs_remap_file_range+0x76b/0x1320 fs/btrfs/reflink.c:903
 vfs_copy_file_range+0xda7/0x1390 fs/read_write.c:1600
 __do_sys_copy_file_range fs/read_write.c:1683 [inline]
 __se_sys_copy_file_range+0x2fb/0x480 fs/read_write.c:1650
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7faf436fc799
RSP: 002b:00007faf42d56028 EFLAGS: 00000246 ORIG_RAX: 0000000000000146
RAX: ffffffffffffffda RBX: 00007faf43975fa0 RCX: 00007faf436fc799
RDX: 0000000000000005 RSI: 0000000000000000 RDI: 0000000000000005
RBP: 00007faf43792c99 R08: 0000000000000863 R09: 0000000000000000
R10: 00002000000000c0 R11: 0000000000000246 R12: 0000000000000000
R13: 00007faf43976038 R14: 00007faf43975fa0 R15: 00007fffc650d9b8
 </TASK>
INFO: task syz.0.22:6632 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.0.22 state:D stack:24736 pid:6632 tgid:6561 ppid:6245 task_flags:0x400040 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5298 [inline]
 __schedule+0x1553/0x5240 kernel/sched/core.c:6911
 __schedule_loop kernel/sched/core.c:6993 [inline]
 schedule+0x164/0x360 kernel/sched/core.c:7008
 wb_wait_for_completion+0x3e8/0x790 fs/fs-writeback.c:227
 __writeback_inodes_sb_nr+0x24c/0x2d0 fs/fs-writeback.c:2838
 try_to_writeback_inodes_sb+0x9a/0xc0 fs/fs-writeback.c:2886
 btrfs_start_delalloc_flush fs/btrfs/transaction.c:2175 [inline]
 btrfs_commit_transaction+0x82e/0x31a0 fs/btrfs/transaction.c:2364
 btrfs_ioctl+0xca7/0xd00 fs/btrfs/ioctl.c:5212
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:597 [inline]
 __se_sys_ioctl+0xff/0x170 fs/ioctl.c:583
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7faf436fc799
RSP: 002b:00007faf42d35028 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007faf43976090 RCX: 00007faf436fc799
RDX: 0000000000000000 RSI: 0000000000009408 RDI: 0000000000000004
RBP: 00007faf43792c99 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007faf43976128 R14: 00007faf43976090 R15: 00007fffc650d9b8
 </TASK>

Showing all locks held in the system:
1 lock held by khungtaskd/38:
 #0: ffffffff8ddcba00 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:312 [inline]
 #0: ffffffff8ddcba00 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:850 [inline]
 #0: ffffffff8ddcba00 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x2e/0x180 kernel/locking/lockdep.c:6775
2 locks held by kworker/u8:7/151:
 #0: ffff88801aac4138 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3251 [inline]
 #0: ffff88801aac4138 ((wq_completion)writeback){+.+.}-{0:0}, at: process_scheduled_works+0xa52/0x18c0 kernel/workqueue.c:3359
 #1: ffffc90003a97c40 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3252 [inline]
 #1: ffffc90003a97c40 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_scheduled_works+0xa8d/0x18c0 kernel/workqueue.c:3359
2 locks held by getty/5555:
 #0: ffff8880377e50a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc90003e7e2e0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x462/0x13c0 drivers/tty/n_tty.c:2211
3 locks held by syz-executor/6249:
4 locks held by syz.0.22/6562:
 #0: ffff8880625fe480 (sb_writers#12){.+.+}-{0:0}, at: file_start_write include/linux/fs.h:2710 [inline]
 #0: ffff8880625fe480 (sb_writers#12){.+.+}-{0:0}, at: vfs_copy_file_range+0x9bb/0x1390 fs/read_write.c:1588
 #1: ffff888058eed8c8 (&sb->s_type->i_mutex_key#24){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:1028 [inline]
 #1: ffff888058eed8c8 (&sb->s_type->i_mutex_key#24){+.+.}-{4:4}, at: btrfs_inode_lock+0x51/0xe0 fs/btrfs/inode.c:369
 #2: ffff888058eed728 (&ei->i_mmap_lock){++++}-{4:4}, at: btrfs_inode_lock+0xcb/0xe0 fs/btrfs/inode.c:372
 #3: ffff8880625fe770 (sb_internal#2){.+.+}-{0:0}, at: clone_copy_inline_extent fs/btrfs/reflink.c:299 [inline]
 #3: ffff8880625fe770 (sb_internal#2){.+.+}-{0:0}, at: btrfs_clone+0x128a/0x24d0 fs/btrfs/reflink.c:529
3 locks held by syz.0.22/6632:
 #0: ffff888045007118 (btrfs_trans_num_writers){++++}-{0:0}, at: join_transaction+0x41b/0xc90 fs/btrfs/transaction.c:298
 #1: ffff888045007140 (btrfs_trans_num_extwriters){++++}-{0:0}, at: join_transaction+0x41b/0xc90 fs/btrfs/transaction.c:298
 #2: ffff8880625fe0d0 (&type->s_umount_key#56){++++}-{4:4}, at: try_to_writeback_inodes_sb+0x22/0xc0 fs/fs-writeback.c:2883
1 lock held by udevd/6608:
 #0: ffff8880226e58b0 (&sb->s_type->i_mutex_key#9){++++}-{4:4}, at: inode_lock_shared include/linux/fs.h:1043 [inline]
 #0: ffff8880226e58b0 (&sb->s_type->i_mutex_key#9){++++}-{4:4}, at: blkdev_read_iter+0x2ff/0x440 block/fops.c:854
1 lock held by btrfs-transacti/6627:
 #0: ffff888045004d98 (&fs_info->transaction_kthread_mutex){+.+.}-{4:4}, at: transaction_kthread+0xe4/0x450 fs/btrfs/disk-io.c:1515
2 locks held by syz.3.215/10113:
3 locks held by syz.5.214/10126:
3 locks held by syz.4.216/10132:
2 locks held by udevadm/10146:

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 38 Comm: khungtaskd Not tainted syzkaller #0 PREEMPT_{RT,(full)}
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2026
Call Trace:
 <TASK>
 dump_stack_lvl+0xe8/0x150 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x274/0x2d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x17a/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:161 [inline]
 __sys_info lib/sys_info.c:157 [inline]
 sys_info+0x135/0x170 lib/sys_info.c:165
 check_hung_uninterruptible_tasks kernel/hung_task.c:346 [inline]
 watchdog+0xfd9/0x1030 kernel/hung_task.c:515
 kthread+0x388/0x470 kernel/kthread.c:436
 ret_from_fork+0x51e/0xb90 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 10146 Comm: udevadm Not tainted syzkaller #0 PREEMPT_{RT,(full)}
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2026
RIP: 0010:__lock_acquire+0xa9d/0x2cf0 kernel/locking/lockdep.c:5234
Code: fa 02 85 c0 74 1c 83 3d d4 21 ca 0d 00 75 13 48 8d 3d a7 32 cd 0d 48 c7 c6 65 71 67 8d 67 48 0f b9 3a 90 31 c0 48 83 78 40 00 <0f> 84 5a 1b 00 00 48 09 dd 41 8b 45 20 89 c1 81 e1 00 80 04 00 81
RSP: 0018:ffffc90006b9f6f8 EFLAGS: 00000082
RAX: ffffffff92f73c88 RBX: 00000000e60eadd5 RCX: 0000000010530efd
RDX: 000000005f479b93 RSI: 0000000099d18f2c RDI: ffff88802864db80
RBP: c57865e900000000 R08: ffffffff81767e65 R09: ffffffff8ddcba00
R10: ffffc90006b9f9d8 R11: ffffffff81af90c0 R12: ffff88802864e738
R13: ffff88802864e738 R14: ffff88802864db80 R15: 0000000000000000
FS:  00007f1e8ff33880(0000) GS:ffff888126339000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000555d4ad6a5f8 CR3: 0000000064d0e000 CR4: 00000000003526f0
Call Trace:
 <TASK>
 lock_acquire+0xf0/0x2e0 kernel/locking/lockdep.c:5868
 rcu_lock_acquire include/linux/rcupdate.h:312 [inline]
 rcu_read_lock include/linux/rcupdate.h:850 [inline]
 class_rcu_constructor include/linux/rcupdate.h:1193 [inline]
 unwind_next_frame+0xc2/0x23c0 arch/x86/kernel/unwind_orc.c:495
 arch_stack_walk+0x11b/0x150 arch/x86/kernel/stacktrace.c:25
 stack_trace_save+0xa9/0x100 kernel/stacktrace.c:122
 kasan_save_stack mm/kasan/common.c:57 [inline]
 kasan_save_track+0x3e/0x80 mm/kasan/common.c:78
 kasan_save_free_info+0x46/0x50 mm/kasan/generic.c:584
 poison_slab_object mm/kasan/common.c:253 [inline]
 __kasan_slab_free+0x5c/0x80 mm/kasan/common.c:285
 kasan_slab_free include/linux/kasan.h:235 [inline]
 slab_free_hook mm/slub.c:2685 [inline]
 slab_free mm/slub.c:6165 [inline]
 kmem_cache_free+0x185/0x6b0 mm/slub.c:6295
 file_free fs/file_table.c:71 [inline]
 __fput+0x6d7/0xa90 fs/file_table.c:482
 fput_close_sync+0x11f/0x240 fs/file_table.c:574
 __do_sys_close fs/open.c:1509 [inline]
 __se_sys_close fs/open.c:1494 [inline]
 __x64_sys_close+0x7e/0x110 fs/open.c:1494
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f1e9008fa67
Code: 44 00 00 48 83 ec 10 48 63 ff 45 31 c9 45 31 c0 6a 01 31 c9 e8 ca 19 f9 ff 48 83 c4 18 c3 0f 1f 44 00 00 b8 03 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 01 c3 48 8b 15 61 b3 0d 00 f7 d8 64 89 02 b8
RSP: 002b:00007fff339659e8 EFLAGS: 00000297 ORIG_RAX: 0000000000000003
RAX: ffffffffffffffda RBX: 0000555d4ad572a0 RCX: 00007f1e9008fa67
RDX: 00007f1e90169ea0 RSI: 0000555d4ad68be0 RDI: 0000000000000005
RBP: 00007f1e90169ff0 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000297 R12: 0000000000000000
R13: 3d45505954564544 R14: 3d5845444e494649 R15: 3d454d414e564544
 </TASK>

Tested on:

commit:         0138af24 Merge tag 'erofs-for-7.0-rc6-fixes' of git://..
git tree:       upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=1049a1d6580000
kernel config:  https://syzkaller.appspot.com/x/.config?x=45cb3c58fd963c27
dashboard link: https://syzkaller.appspot.com/bug?extid=63056bf627663701bbbf
compiler:       Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8
patch:          https://syzkaller.appspot.com/x/patch.diff?x=15b16a06580000

^ permalink raw reply	[flat|nested] 5+ messages in thread
[parent not found: <20260326042510.19263-1-kartikey406@gmail.com>]
* Re: [syzbot] [btrfs?] INFO: task hung in btrfs_invalidate_folio (3)
  [not found] <20260326042510.19263-1-kartikey406@gmail.com>
@ 2026-03-26  4:58 ` syzbot
  0 siblings, 0 replies; 5+ messages in thread
From: syzbot @ 2026-03-26  4:58 UTC (permalink / raw)
  To: kartikey406, linux-kernel, syzkaller-bugs

Hello,

syzbot has tested the proposed patch but the reproducer is still triggering an issue:
INFO: task hung in btrfs_invalidate_folio

INFO: task kworker/u8:13:1428 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/u8:13 state:D stack:21504 pid:1428 tgid:1428 ppid:2 task_flags:0x4208060 flags:0x00080000
Workqueue: writeback wb_workfn (flush-btrfs-3)
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5298 [inline]
 __schedule+0x1553/0x5240 kernel/sched/core.c:6911
 __schedule_loop kernel/sched/core.c:6993 [inline]
 schedule+0x164/0x360 kernel/sched/core.c:7008
 wait_extent_bit fs/btrfs/extent-io-tree.c:811 [inline]
 btrfs_lock_extent_bits+0x59c/0x700 fs/btrfs/extent-io-tree.c:1914
 btrfs_lock_extent fs/btrfs/extent-io-tree.h:152 [inline]
 btrfs_invalidate_folio+0x43d/0xc40 fs/btrfs/inode.c:7718
 extent_writepage fs/btrfs/extent_io.c:1852 [inline]
 extent_write_cache_pages fs/btrfs/extent_io.c:2580 [inline]
 btrfs_writepages+0x1369/0x24a0 fs/btrfs/extent_io.c:2746
 do_writepages+0x32e/0x550 mm/page-writeback.c:2554
 __writeback_single_inode+0x133/0x11a0 fs/fs-writeback.c:1750
 writeback_sb_inodes+0x995/0x19d0 fs/fs-writeback.c:2042
 wb_writeback+0x456/0xb70 fs/fs-writeback.c:2227
 wb_do_writeback fs/fs-writeback.c:2374 [inline]
 wb_workfn+0x41a/0xf60 fs/fs-writeback.c:2414
 process_one_work kernel/workqueue.c:3276 [inline]
 process_scheduled_works+0xb6e/0x18c0 kernel/workqueue.c:3359
 worker_thread+0xa53/0xfc0 kernel/workqueue.c:3440
 kthread+0x388/0x470 kernel/kthread.c:436
 ret_from_fork+0x51e/0xb90 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
INFO: task syz.2.19:6618 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.2.19 state:D stack:22752 pid:6618 tgid:6617 ppid:6327 task_flags:0x400140 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5298 [inline]
 __schedule+0x1553/0x5240 kernel/sched/core.c:6911
 __schedule_loop kernel/sched/core.c:6993 [inline]
 schedule+0x164/0x360 kernel/sched/core.c:7008
 wait_current_trans+0x39f/0x590 fs/btrfs/transaction.c:535
 start_transaction+0x6a7/0x1650 fs/btrfs/transaction.c:705
 clone_copy_inline_extent fs/btrfs/reflink.c:286 [inline]
 btrfs_clone+0x1275/0x24a0 fs/btrfs/reflink.c:516
 btrfs_clone_files+0x27f/0x410 fs/btrfs/reflink.c:737
 btrfs_remap_file_range+0x764/0x13d0 fs/btrfs/reflink.c:892
 vfs_copy_file_range+0xda7/0x1390 fs/read_write.c:1600
 __do_sys_copy_file_range fs/read_write.c:1683 [inline]
 __se_sys_copy_file_range+0x2fb/0x480 fs/read_write.c:1650
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f995234c799
RSP: 002b:00007f99519ae028 EFLAGS: 00000246 ORIG_RAX: 0000000000000146
RAX: ffffffffffffffda RBX: 00007f99525c5fa0 RCX: 00007f995234c799
RDX: 0000000000000005 RSI: 0000000000000000 RDI: 0000000000000005
RBP: 00007f99523e2c99 R08: 0000000000000863 R09: 0000000000000000
R10: 00002000000000c0 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f99525c6038 R14: 00007f99525c5fa0 R15: 00007ffd56a62508
 </TASK>
INFO: task syz.2.19:6689 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.2.19 state:D stack:24736 pid:6689 tgid:6617 ppid:6327 task_flags:0x400040 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5298 [inline]
 __schedule+0x1553/0x5240 kernel/sched/core.c:6911
 __schedule_loop kernel/sched/core.c:6993 [inline]
 schedule+0x164/0x360 kernel/sched/core.c:7008
 wb_wait_for_completion+0x3e8/0x790 fs/fs-writeback.c:227
 __writeback_inodes_sb_nr+0x24c/0x2d0 fs/fs-writeback.c:2838
 try_to_writeback_inodes_sb+0x9a/0xc0 fs/fs-writeback.c:2886
 btrfs_start_delalloc_flush fs/btrfs/transaction.c:2175 [inline]
 btrfs_commit_transaction+0x82e/0x31a0 fs/btrfs/transaction.c:2364
 btrfs_ioctl+0xca7/0xd00 fs/btrfs/ioctl.c:5212
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:597 [inline]
 __se_sys_ioctl+0xff/0x170 fs/ioctl.c:583
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f995234c799
RSP: 002b:00007f995198d028 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007f99525c6090 RCX: 00007f995234c799
RDX: 0000000000000000 RSI: 0000000000009408 RDI: 0000000000000004
RBP: 00007f99523e2c99 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f99525c6128 R14: 00007f99525c6090 R15: 00007ffd56a62508
 </TASK>

Showing all locks held in the system:
6 locks held by kworker/u8:0/12:
 #0: ffff888019c44138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3251 [inline]
 #0: ffff888019c44138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0xa52/0x18c0 kernel/workqueue.c:3359
 #1: ffffc90000117c40 ((work_completion)(&(&nsim_dev->trap_data->trap_report_dw)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3252 [inline]
 #1: ffffc90000117c40 ((work_completion)(&(&nsim_dev->trap_data->trap_report_dw)->work)){+.+.}-{0:0}, at: process_scheduled_works+0xa8d/0x18c0 kernel/workqueue.c:3359
 #2: ffff88803f648300 (&devlink->lock_key#7){+.+.}-{4:4}, at: nsim_dev_trap_report_work+0x57/0xbc0 drivers/net/netdevsim/dev.c:909
 #3: ffff88802571d120 (&nsim_trap_data->trap_lock){+.+.}-{3:3}, at: spin_lock include/linux/spinlock_rt.h:45 [inline]
 #3: ffff88802571d120 (&nsim_trap_data->trap_lock){+.+.}-{3:3}, at: nsim_dev_trap_report drivers/net/netdevsim/dev.c:862 [inline]
 #3: ffff88802571d120 (&nsim_trap_data->trap_lock){+.+.}-{3:3}, at: nsim_dev_trap_report_work+0x1ad/0xbc0 drivers/net/netdevsim/dev.c:922
 #4: ffffffff8ddcba00 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:312 [inline]
 #4: ffffffff8ddcba00 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:850 [inline]
 #4: ffffffff8ddcba00 (rcu_read_lock){....}-{1:3}, at: __rt_spin_lock kernel/locking/spinlock_rt.c:50 [inline]
 #4: ffffffff8ddcba00 (rcu_read_lock){....}-{1:3}, at: rt_spin_lock+0x1e0/0x400 kernel/locking/spinlock_rt.c:57
 #5: ffff88813fe18d58 (&n->list_lock){+.+.}-{3:3}, at: spin_lock include/linux/spinlock_rt.h:45 [inline]
 #5: ffff88813fe18d58 (&n->list_lock){+.+.}-{3:3}, at: get_partial_node_bulk mm/slub.c:3750 [inline]
 #5: ffff88813fe18d58 (&n->list_lock){+.+.}-{3:3}, at: __refill_objects_node+0x87/0x560 mm/slub.c:7027
3 locks held by kworker/u8:1/13:
 #0: ffff888038bbe938 ((wq_completion)btrfs-flush_delalloc#194){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3251 [inline]
 #0: ffff888038bbe938 ((wq_completion)btrfs-flush_delalloc#194){+.+.}-{0:0}, at: process_scheduled_works+0xa52/0x18c0 kernel/workqueue.c:3359
 #1: ffffc90000127c40 ((work_completion)(&work->normal_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3252 [inline]
 #1: ffffc90000127c40 ((work_completion)(&work->normal_work)){+.+.}-{0:0}, at: process_scheduled_works+0xa8d/0x18c0 kernel/workqueue.c:3359
 #2: ffff88802a56efb8 (&entry->wait){+.+.}-{3:3}, at: spin_lock include/linux/spinlock_rt.h:45 [inline]
 #2: ffff88802a56efb8 (&entry->wait){+.+.}-{3:3}, at: finish_wait+0xbe/0x1e0 kernel/sched/wait.c:394
1 lock held by khungtaskd/38:
 #0: ffffffff8ddcba00 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:312 [inline]
 #0: ffffffff8ddcba00 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:850 [inline]
 #0: ffffffff8ddcba00 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x2e/0x180 kernel/locking/lockdep.c:6775
6 locks held by kworker/u8:3/57:
 #0: ffff888019c44138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3251 [inline]
 #0: ffff888019c44138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0xa52/0x18c0 kernel/workqueue.c:3359
 #1: ffffc9000123fc40 ((work_completion)(&(&nsim_dev->trap_data->trap_report_dw)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3252 [inline]
 #1: ffffc9000123fc40 ((work_completion)(&(&nsim_dev->trap_data->trap_report_dw)->work)){+.+.}-{0:0}, at: process_scheduled_works+0xa8d/0x18c0 kernel/workqueue.c:3359
 #2: ffff8880383c8300 (&devlink->lock_key#3){+.+.}-{4:4}, at: nsim_dev_trap_report_work+0x57/0xbc0 drivers/net/netdevsim/dev.c:909
 #3: ffff888039f30d20 (&nsim_trap_data->trap_lock){+.+.}-{3:3}, at: spin_lock include/linux/spinlock_rt.h:45 [inline]
 #3: ffff888039f30d20 (&nsim_trap_data->trap_lock){+.+.}-{3:3}, at: nsim_dev_trap_report drivers/net/netdevsim/dev.c:862 [inline]
 #3: ffff888039f30d20 (&nsim_trap_data->trap_lock){+.+.}-{3:3}, at: nsim_dev_trap_report_work+0x1ad/0xbc0 drivers/net/netdevsim/dev.c:922
 #4: ffffffff8ddcba00 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:312 [inline]
 #4: ffffffff8ddcba00 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:850 [inline]
 #4: ffffffff8ddcba00 (rcu_read_lock){....}-{1:3}, at: __rt_spin_lock kernel/locking/spinlock_rt.c:50 [inline]
 #4: ffffffff8ddcba00 (rcu_read_lock){....}-{1:3}, at: rt_spin_lock+0x1e0/0x400 kernel/locking/spinlock_rt.c:57
 #5: ffff88813fe18d58 (&n->list_lock){+.+.}-{3:3}, at: spin_lock include/linux/spinlock_rt.h:45 [inline]
 #5: ffff88813fe18d58 (&n->list_lock){+.+.}-{3:3}, at: get_partial_node_bulk mm/slub.c:3750 [inline]
 #5: ffff88813fe18d58 (&n->list_lock){+.+.}-{3:3}, at: __refill_objects_node+0x87/0x560 mm/slub.c:7027
3 locks held by kworker/u8:5/70:
3 locks held by kworker/1:2/809:
6 locks held by kworker/u8:8/1062:
 #0: ffff888019c44138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3251 [inline]
 #0: ffff888019c44138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0xa52/0x18c0 kernel/workqueue.c:3359
 #1: ffffc90005827c40 ((work_completion)(&(&nsim_dev->trap_data->trap_report_dw)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3252 [inline]
 #1: ffffc90005827c40 ((work_completion)(&(&nsim_dev->trap_data->trap_report_dw)->work)){+.+.}-{0:0}, at: process_scheduled_works+0xa8d/0x18c0 kernel/workqueue.c:3359
 #2: ffff88805af1a300 (&devlink->lock_key#4){+.+.}-{4:4}, at: nsim_dev_trap_report_work+0x57/0xbc0 drivers/net/netdevsim/dev.c:909
 #3: ffff88805b820920 (&nsim_trap_data->trap_lock){+.+.}-{3:3}, at: spin_lock include/linux/spinlock_rt.h:45 [inline]
 #3: ffff88805b820920 (&nsim_trap_data->trap_lock){+.+.}-{3:3}, at: nsim_dev_trap_report drivers/net/netdevsim/dev.c:862 [inline]
 #3: ffff88805b820920 (&nsim_trap_data->trap_lock){+.+.}-{3:3}, at: nsim_dev_trap_report_work+0x1ad/0xbc0 drivers/net/netdevsim/dev.c:922
 #4: ffffffff8ddcba00 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:312 [inline]
 #4: ffffffff8ddcba00 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:850 [inline]
 #4: ffffffff8ddcba00 (rcu_read_lock){....}-{1:3}, at: __rt_spin_lock kernel/locking/spinlock_rt.c:50 [inline]
 #4: ffffffff8ddcba00 (rcu_read_lock){....}-{1:3}, at: rt_spin_lock+0x1e0/0x400 kernel/locking/spinlock_rt.c:57
 #5: ffff88813fe18d58 (&n->list_lock){+.+.}-{3:3}, at: spin_lock include/linux/spinlock_rt.h:45 [inline]
 #5: ffff88813fe18d58 (&n->list_lock){+.+.}-{3:3}, at: get_partial_node_bulk mm/slub.c:3750 [inline]
 #5: ffff88813fe18d58 (&n->list_lock){+.+.}-{3:3}, at: __refill_objects_node+0x87/0x560 mm/slub.c:7027
6 locks held by kworker/u8:9/1382:
 #0: ffff888019c44138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3251 [inline]
 #0: ffff888019c44138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0xa52/0x18c0 kernel/workqueue.c:3359
 #1: ffffc90006197c40 ((work_completion)(&(&nsim_dev->trap_data->trap_report_dw)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3252 [inline]
 #1: ffffc90006197c40 ((work_completion)(&(&nsim_dev->trap_data->trap_report_dw)->work)){+.+.}-{0:0}, at: process_scheduled_works+0xa8d/0x18c0 kernel/workqueue.c:3359
 #2: ffff888056a46300 (&devlink->lock_key#6){+.+.}-{4:4}, at: nsim_dev_trap_report_work+0x57/0xbc0 drivers/net/netdevsim/dev.c:909
 #3: ffff88805bb0d520 (&nsim_trap_data->trap_lock){+.+.}-{3:3}, at: spin_lock include/linux/spinlock_rt.h:45 [inline]
 #3: ffff88805bb0d520 (&nsim_trap_data->trap_lock){+.+.}-{3:3}, at: nsim_dev_trap_report drivers/net/netdevsim/dev.c:862 [inline]
 #3: ffff88805bb0d520 (&nsim_trap_data->trap_lock){+.+.}-{3:3}, at: nsim_dev_trap_report_work+0x1ad/0xbc0 drivers/net/netdevsim/dev.c:922
 #4: ffffffff8ddcba00 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:312 [inline]
 #4: ffffffff8ddcba00 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:850 [inline]
 #4: ffffffff8ddcba00 (rcu_read_lock){....}-{1:3}, at: __rt_spin_lock kernel/locking/spinlock_rt.c:50 [inline]
 #4: ffffffff8ddcba00 (rcu_read_lock){....}-{1:3}, at: rt_spin_lock+0x1e0/0x400 kernel/locking/spinlock_rt.c:57
 #5: ffff88813fe18d58 (&n->list_lock){+.+.}-{3:3}, at: spin_lock include/linux/spinlock_rt.h:45 [inline]
 #5: ffff88813fe18d58 (&n->list_lock){+.+.}-{3:3}, at: get_partial_node_bulk mm/slub.c:3750
[inline] #5: ffff88813fe18d58 (&n->list_lock){+.+.}-{3:3}, at: __refill_objects_node+0x87/0x560 mm/slub.c:7027 3 locks held by kworker/u8:10/1393: 6 locks held by kworker/u8:11/1408: #0: ffff888019c44138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3251 [inline] #0: ffff888019c44138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0xa52/0x18c0 kernel/workqueue.c:3359 #1: ffffc900064c7c40 ((work_completion)(&(&nsim_dev->trap_data->trap_report_dw)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3252 [inline] #1: ffffc900064c7c40 ((work_completion)(&(&nsim_dev->trap_data->trap_report_dw)->work)){+.+.}-{0:0}, at: process_scheduled_works+0xa8d/0x18c0 kernel/workqueue.c:3359 #2: ffff88803c7be300 (&devlink->lock_key#5){+.+.}-{4:4}, at: nsim_dev_trap_report_work+0x57/0xbc0 drivers/net/netdevsim/dev.c:909 #3: ffff88803a1c3d20 (&nsim_trap_data->trap_lock){+.+.}-{3:3}, at: spin_lock include/linux/spinlock_rt.h:45 [inline] #3: ffff88803a1c3d20 (&nsim_trap_data->trap_lock){+.+.}-{3:3}, at: nsim_dev_trap_report drivers/net/netdevsim/dev.c:862 [inline] #3: ffff88803a1c3d20 (&nsim_trap_data->trap_lock){+.+.}-{3:3}, at: nsim_dev_trap_report_work+0x1ad/0xbc0 drivers/net/netdevsim/dev.c:922 #4: ffffffff8ddcba00 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:312 [inline] #4: ffffffff8ddcba00 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:850 [inline] #4: ffffffff8ddcba00 (rcu_read_lock){....}-{1:3}, at: __rt_spin_lock kernel/locking/spinlock_rt.c:50 [inline] #4: ffffffff8ddcba00 (rcu_read_lock){....}-{1:3}, at: rt_spin_lock+0x1e0/0x400 kernel/locking/spinlock_rt.c:57 #5: ffff88813fe18d58 (&n->list_lock){+.+.}-{3:3}, at: spin_lock include/linux/spinlock_rt.h:45 [inline] #5: ffff88813fe18d58 (&n->list_lock){+.+.}-{3:3}, at: get_partial_node_bulk mm/slub.c:3750 [inline] #5: ffff88813fe18d58 (&n->list_lock){+.+.}-{3:3}, at: __refill_objects_node+0x87/0x560 
mm/slub.c:7027 6 locks held by kworker/u8:12/1421: #0: ffff888019c44138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3251 [inline] #0: ffff888019c44138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0xa52/0x18c0 kernel/workqueue.c:3359 #1: ffffc900065e7c40 ((work_completion)(&(&nsim_dev->trap_data->trap_report_dw)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3252 [inline] #1: ffffc900065e7c40 ((work_completion)(&(&nsim_dev->trap_data->trap_report_dw)->work)){+.+.}-{0:0}, at: process_scheduled_works+0xa8d/0x18c0 kernel/workqueue.c:3359 #2: ffff888062f8c300 (&devlink->lock_key#8){+.+.}-{4:4}, at: nsim_dev_trap_report_work+0x57/0xbc0 drivers/net/netdevsim/dev.c:909 #3: ffff88802ad00920 (&nsim_trap_data->trap_lock){+.+.}-{3:3}, at: spin_lock include/linux/spinlock_rt.h:45 [inline] #3: ffff88802ad00920 (&nsim_trap_data->trap_lock){+.+.}-{3:3}, at: nsim_dev_trap_report drivers/net/netdevsim/dev.c:862 [inline] #3: ffff88802ad00920 (&nsim_trap_data->trap_lock){+.+.}-{3:3}, at: nsim_dev_trap_report_work+0x1ad/0xbc0 drivers/net/netdevsim/dev.c:922 #4: ffffffff8ddcba00 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:312 [inline] #4: ffffffff8ddcba00 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:850 [inline] #4: ffffffff8ddcba00 (rcu_read_lock){....}-{1:3}, at: __rt_spin_lock kernel/locking/spinlock_rt.c:50 [inline] #4: ffffffff8ddcba00 (rcu_read_lock){....}-{1:3}, at: rt_spin_lock+0x1e0/0x400 kernel/locking/spinlock_rt.c:57 #5: ffff88813fe18d58 (&n->list_lock){+.+.}-{3:3}, at: spin_lock include/linux/spinlock_rt.h:45 [inline] #5: ffff88813fe18d58 (&n->list_lock){+.+.}-{3:3}, at: get_partial_node_bulk mm/slub.c:3750 [inline] #5: ffff88813fe18d58 (&n->list_lock){+.+.}-{3:3}, at: __refill_objects_node+0x87/0x560 mm/slub.c:7027 2 locks held by kworker/u8:13/1428: #0: ffff88801aee4938 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work 
kernel/workqueue.c:3251 [inline] #0: ffff88801aee4938 ((wq_completion)writeback){+.+.}-{0:0}, at: process_scheduled_works+0xa52/0x18c0 kernel/workqueue.c:3359 #1: ffffc90006657c40 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3252 [inline] #1: ffffc90006657c40 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_scheduled_works+0xa8d/0x18c0 kernel/workqueue.c:3359 2 locks held by udevd/5165: #0: ffffffff8e4dcc58 (tomoyo_ss){.+.+}-{0:0}, at: srcu_lock_acquire include/linux/srcu.h:187 [inline] #0: ffffffff8e4dcc58 (tomoyo_ss){.+.+}-{0:0}, at: srcu_read_lock include/linux/srcu.h:294 [inline] #0: ffffffff8e4dcc58 (tomoyo_ss){.+.+}-{0:0}, at: tomoyo_read_lock security/tomoyo/common.h:1112 [inline] #0: ffffffff8e4dcc58 (tomoyo_ss){.+.+}-{0:0}, at: tomoyo_check_open_permission+0x1d3/0x470 security/tomoyo/file.c:772 #1: ffff88813fe18d58 (&n->list_lock){+.+.}-{3:3}, at: spin_lock include/linux/spinlock_rt.h:45 [inline] #1: ffff88813fe18d58 (&n->list_lock){+.+.}-{3:3}, at: get_partial_node_bulk mm/slub.c:3750 [inline] #1: ffff88813fe18d58 (&n->list_lock){+.+.}-{3:3}, at: __refill_objects_node+0x87/0x560 mm/slub.c:7027 2 locks held by getty/5558: #0: ffff888037ffe0a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243 #1: ffffc90003e7e2e0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x462/0x13c0 drivers/tty/n_tty.c:2211 2 locks held by udevd/6231: #0: ffffffff8df09670 (remove_cache_srcu){.+.+}-{0:0}, at: srcu_lock_acquire include/linux/srcu.h:187 [inline] #0: ffffffff8df09670 (remove_cache_srcu){.+.+}-{0:0}, at: srcu_read_lock+0x27/0x60 include/linux/srcu.h:294 #1: ffff88813fe18d58 (&n->list_lock){+.+.}-{3:3}, at: spin_lock include/linux/spinlock_rt.h:45 [inline] #1: ffff88813fe18d58 (&n->list_lock){+.+.}-{3:3}, at: __slab_free+0xee/0x2a0 mm/slub.c:5519 3 locks held by syz-executor/6321: #0: ffff88805962e0d0 (&type->s_umount_key#56){++++}-{4:4}, at: __super_lock 
fs/super.c:58 [inline] #0: ffff88805962e0d0 (&type->s_umount_key#56){++++}-{4:4}, at: __super_lock_excl fs/super.c:73 [inline] #0: ffff88805962e0d0 (&type->s_umount_key#56){++++}-{4:4}, at: deactivate_super+0xa9/0xe0 fs/super.c:508 #1: ffff88803fb55020 (&fs_info->ordered_operations_mutex){+.+.}-{4:4}, at: btrfs_wait_ordered_roots+0xe7/0x6f0 fs/btrfs/ordered-data.c:823 #2: ffff888026e2e9a8 (&root->ordered_extent_mutex){+.+.}-{4:4}, at: btrfs_wait_ordered_extents+0x23d/0xcf0 fs/btrfs/ordered-data.c:767 3 locks held by syz-executor/6329: #0: ffff88805902e0d0 (&type->s_umount_key#56){++++}-{4:4}, at: __super_lock fs/super.c:58 [inline] #0: ffff88805902e0d0 (&type->s_umount_key#56){++++}-{4:4}, at: __super_lock_excl fs/super.c:73 [inline] #0: ffff88805902e0d0 (&type->s_umount_key#56){++++}-{4:4}, at: deactivate_super+0xa9/0xe0 fs/super.c:508 #1: ffff88805902eb08 (&s->s_sync_lock){+.+.}-{4:4}, at: wait_sb_inodes fs/fs-writeback.c:2739 [inline] #1: ffff88805902eb08 (&s->s_sync_lock){+.+.}-{4:4}, at: sync_inodes_sb+0x288/0xc10 fs/fs-writeback.c:2927 #2: ffffffff8da15b68 (&folio_wait_table[i]){+.+.}-{3:3}, at: spin_lock include/linux/spinlock_rt.h:45 [inline] #2: ffffffff8da15b68 (&folio_wait_table[i]){+.+.}-{3:3}, at: finish_wait+0xbe/0x1e0 kernel/sched/wait.c:394 1 lock held by syz-executor/6330: #0: ffff88805e4280d0 (&type->s_umount_key#56){++++}-{4:4}, at: __super_lock fs/super.c:58 [inline] #0: ffff88805e4280d0 (&type->s_umount_key#56){++++}-{4:4}, at: __super_lock_excl fs/super.c:73 [inline] #0: ffff88805e4280d0 (&type->s_umount_key#56){++++}-{4:4}, at: deactivate_super+0xa9/0xe0 fs/super.c:508 4 locks held by syz.2.19/6618: #0: ffff88802e4d8480 (sb_writers#12){.+.+}-{0:0}, at: file_start_write include/linux/fs.h:2710 [inline] #0: ffff88802e4d8480 (sb_writers#12){.+.+}-{0:0}, at: vfs_copy_file_range+0x9bb/0x1390 fs/read_write.c:1588 #1: ffff88805b0558c8 (&sb->s_type->i_mutex_key#24){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:1028 [inline] #1: ffff88805b0558c8 
(&sb->s_type->i_mutex_key#24){+.+.}-{4:4}, at: btrfs_inode_lock+0x51/0xe0 fs/btrfs/inode.c:369 #2: ffff88805b055728 (&ei->i_mmap_lock){++++}-{4:4}, at: btrfs_inode_lock+0xcb/0xe0 fs/btrfs/inode.c:372 #3: ffff88802e4d8770 (sb_internal#2){.+.+}-{0:0}, at: clone_copy_inline_extent fs/btrfs/reflink.c:286 [inline] #3: ffff88802e4d8770 (sb_internal#2){.+.+}-{0:0}, at: btrfs_clone+0x1275/0x24a0 fs/btrfs/reflink.c:516 3 locks held by syz.2.19/6689: #0: ffff88803dc2b118 (btrfs_trans_num_writers){++++}-{0:0}, at: join_transaction+0x41b/0xc90 fs/btrfs/transaction.c:298 #1: ffff88803dc2b140 (btrfs_trans_num_extwriters){++++}-{0:0}, at: join_transaction+0x41b/0xc90 fs/btrfs/transaction.c:298 #2: ffff88802e4d80d0 (&type->s_umount_key#56){++++}-{4:4}, at: try_to_writeback_inodes_sb+0x22/0xc0 fs/fs-writeback.c:2883 1 lock held by btrfs-transacti/6682: #0: ffff88803dc28d98 (&fs_info->transaction_kthread_mutex){+.+.}-{4:4}, at: transaction_kthread+0xe4/0x450 fs/btrfs/disk-io.c:1515 2 locks held by udevd/6732: #0: ffffffff8e4dcc58 (tomoyo_ss){.+.+}-{0:0}, at: srcu_lock_acquire include/linux/srcu.h:187 [inline] #0: ffffffff8e4dcc58 (tomoyo_ss){.+.+}-{0:0}, at: srcu_read_lock include/linux/srcu.h:294 [inline] #0: ffffffff8e4dcc58 (tomoyo_ss){.+.+}-{0:0}, at: tomoyo_read_lock security/tomoyo/common.h:1112 [inline] #0: ffffffff8e4dcc58 (tomoyo_ss){.+.+}-{0:0}, at: tomoyo_check_open_permission+0x1d3/0x470 security/tomoyo/file.c:772 #1: ffff88813fe18d58 (&n->list_lock){+.+.}-{3:3}, at: spin_lock include/linux/spinlock_rt.h:45 [inline] #1: ffff88813fe18d58 (&n->list_lock){+.+.}-{3:3}, at: get_partial_node_bulk mm/slub.c:3750 [inline] #1: ffff88813fe18d58 (&n->list_lock){+.+.}-{3:3}, at: __refill_objects_node+0x87/0x560 mm/slub.c:7027 1 lock held by udevadm/10426: #0: ffff88813fe18d58 (&n->list_lock){+.+.}-{3:3}, at: spin_lock include/linux/spinlock_rt.h:45 [inline] #0: ffff88813fe18d58 (&n->list_lock){+.+.}-{3:3}, at: get_partial_node_bulk mm/slub.c:3750 [inline] #0: ffff88813fe18d58 
(&n->list_lock){+.+.}-{3:3}, at: __refill_objects_node+0x87/0x560 mm/slub.c:7027 1 lock held by udevadm/10431: 1 lock held by udevadm/10447: 2 locks held by udevadm/10452: 2 locks held by syz.0.221/10464: 4 locks held by syz-executor/10467: #0: ffff88803644a480 (sb_writers#5){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:493 #1: ffff88805c6475d8 (&type->i_mutex_dir_key#5/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1073 [inline] #1: ffff88805c6475d8 (&type->i_mutex_dir_key#5/1){+.+.}-{4:4}, at: __start_dirop fs/namei.c:2923 [inline] #1: ffff88805c6475d8 (&type->i_mutex_dir_key#5/1){+.+.}-{4:4}, at: start_dirop fs/namei.c:2934 [inline] #1: ffff88805c6475d8 (&type->i_mutex_dir_key#5/1){+.+.}-{4:4}, at: filename_create+0x200/0x370 fs/namei.c:4922 #2: ffffffff8e4dcc58 (tomoyo_ss){.+.+}-{0:0}, at: srcu_lock_acquire include/linux/srcu.h:187 [inline] #2: ffffffff8e4dcc58 (tomoyo_ss){.+.+}-{0:0}, at: srcu_read_lock include/linux/srcu.h:294 [inline] #2: ffffffff8e4dcc58 (tomoyo_ss){.+.+}-{0:0}, at: tomoyo_read_lock security/tomoyo/common.h:1112 [inline] #2: ffffffff8e4dcc58 (tomoyo_ss){.+.+}-{0:0}, at: tomoyo_path_perm+0x251/0x560 security/tomoyo/file.c:826 #3: ffff88813fe18d58 (&n->list_lock){+.+.}-{3:3}, at: spin_lock include/linux/spinlock_rt.h:45 [inline] #3: ffff88813fe18d58 (&n->list_lock){+.+.}-{3:3}, at: get_partial_node_bulk mm/slub.c:3750 [inline] #3: ffff88813fe18d58 (&n->list_lock){+.+.}-{3:3}, at: __refill_objects_node+0x87/0x560 mm/slub.c:7027 ============================================= NMI backtrace for cpu 1 CPU: 1 UID: 0 PID: 38 Comm: khungtaskd Not tainted syzkaller #0 PREEMPT_{RT,(full)} Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2026 Call Trace: <TASK> dump_stack_lvl+0xe8/0x150 lib/dump_stack.c:120 nmi_cpu_backtrace+0x274/0x2d0 lib/nmi_backtrace.c:113 nmi_trigger_cpumask_backtrace+0x17a/0x300 lib/nmi_backtrace.c:62 trigger_all_cpu_backtrace include/linux/nmi.h:161 [inline] __sys_info 
lib/sys_info.c:157 [inline] sys_info+0x135/0x170 lib/sys_info.c:165 check_hung_uninterruptible_tasks kernel/hung_task.c:346 [inline] watchdog+0xfd9/0x1030 kernel/hung_task.c:515 kthread+0x388/0x470 kernel/kthread.c:436 ret_from_fork+0x51e/0xb90 arch/x86/kernel/process.c:158 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245 </TASK> Sending NMI from CPU 1 to CPUs 0: NMI backtrace for cpu 0 CPU: 0 UID: 0 PID: 0 Comm: swapper/0 Not tainted syzkaller #0 PREEMPT_{RT,(full)} Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2026 RIP: 0010:pv_native_safe_halt+0xf/0x20 arch/x86/kernel/paravirt.c:63 Code: 0e 5d 02 e9 13 c4 03 00 cc cc cc 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 f3 0f 1e fa 66 90 0f 00 2d f3 1c 26 00 fb f4 <c3> cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc 90 90 90 90 90 RSP: 0018:ffffffff8da07dc0 EFLAGS: 00000242 RAX: 00000000000a2791 RBX: ffffffff8199709a RCX: 0000000080000001 RDX: 0000000000000001 RSI: ffffffff8d562e91 RDI: ffffffff8ba66e00 RBP: ffffffff8da07eb0 R08: ffff8880b8833e1b R09: 1ffff110171067c3 R10: dffffc0000000000 R11: ffffed10171067c4 R12: 0000000000000000 R13: 1ffffffff1b605d8 R14: 0000000000000000 R15: 1ffffffff1b605d8 FS: 0000000000000000(0000) GS:ffff888126339000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 00007f9f3ebad400 CR3: 00000000449c8000 CR4: 00000000003526f0 Call Trace: <TASK> arch_safe_halt arch/x86/kernel/process.c:766 [inline] default_idle+0x9/0x20 arch/x86/kernel/process.c:767 default_idle_call+0x72/0xb0 kernel/sched/idle.c:122 cpuidle_idle_call kernel/sched/idle.c:199 [inline] do_idle+0x36a/0x5f0 kernel/sched/idle.c:352 cpu_startup_entry+0x43/0x60 kernel/sched/idle.c:451 rest_init+0x2de/0x300 init/main.c:760 start_kernel+0x385/0x3d0 init/main.c:1210 x86_64_start_reservations+0x24/0x30 arch/x86/kernel/head64.c:310 x86_64_start_kernel+0x143/0x1c0 arch/x86/kernel/head64.c:291 common_startup_64+0x13e/0x147 </TASK> Tested on: commit: 
0138af24 Merge tag 'erofs-for-7.0-rc6-fixes' of git://.. git tree: upstream console output: https://syzkaller.appspot.com/x/log.txt?x=136c3e02580000 kernel config: https://syzkaller.appspot.com/x/.config?x=45cb3c58fd963c27 dashboard link: https://syzkaller.appspot.com/bug?extid=63056bf627663701bbbf compiler: Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8 patch: https://syzkaller.appspot.com/x/patch.diff?x=10221cba580000 ^ permalink raw reply [flat|nested] 5+ messages in thread
end of thread, other threads:[~2026-03-26 4:58 UTC | newest]
Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-03-19 7:21 [syzbot] [btrfs?] INFO: task hung in btrfs_invalidate_folio (3) syzbot
2026-03-26 1:50 ` Forwarded: [PATCH] btrfs: fix hung task when cloning inline extent races with writeback syzbot
2026-03-26 4:25 ` Forwarded: [PATCH] btrfs: fix hung task and deadlock when cloning inline extents syzbot
[not found] <20260326014953.16727-1-kartikey406@gmail.com>
2026-03-26 2:46 ` [syzbot] [btrfs?] INFO: task hung in btrfs_invalidate_folio (3) syzbot
[not found] <20260326042510.19263-1-kartikey406@gmail.com>
2026-03-26 4:58 ` syzbot
This is a public inbox, see mirroring instructions for how to clone and mirror all data and code used for this inbox