public inbox for linux-xfs@vger.kernel.org
 help / color / mirror / Atom feed
* [syzbot] [xfs?] INFO: task hung in xlog_force_lsn (2)
@ 2025-10-10 20:49 syzbot
  2026-04-13  6:19 ` Yuto Ohnuki
  2026-04-13  8:06 ` Yuto Ohnuki
  0 siblings, 2 replies; 5+ messages in thread
From: syzbot @ 2025-10-10 20:49 UTC (permalink / raw)
  To: cem, linux-kernel, linux-xfs, syzkaller-bugs

Hello,

syzbot found the following issue on:

HEAD commit:    971199ad2a0f Merge tag 'arm64-fixes' of git://git.kernel.o..
git tree:       upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=13a5e1e2580000
kernel config:  https://syzkaller.appspot.com/x/.config?x=5dad7c03514bf787
dashboard link: https://syzkaller.appspot.com/bug?extid=c27dee924f3271489c82
compiler:       Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=17be6304580000
C reproducer:   https://syzkaller.appspot.com/x/repro.c?x=10c431e2580000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/6f70cc00930c/disk-971199ad.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/5c5740d1a0de/vmlinux-971199ad.xz
kernel image: https://storage.googleapis.com/syzbot-assets/9f306c159d19/bzImage-971199ad.xz
mounted in repro: https://storage.googleapis.com/syzbot-assets/4027534f1a29/mount_0.gz
  fsck result: failed (log: https://syzkaller.appspot.com/x/fsck.log?x=1495f92f980000)

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+c27dee924f3271489c82@syzkaller.appspotmail.com

INFO: task syz.0.17:6151 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.0.17        state:D stack:23560 pid:6151  tgid:6150  ppid:5941   task_flags:0x400140 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5325 [inline]
 __schedule+0x16f3/0x4c20 kernel/sched/core.c:6929
 __schedule_loop kernel/sched/core.c:7011 [inline]
 schedule+0x165/0x360 kernel/sched/core.c:7026
 xlog_wait fs/xfs/xfs_log_priv.h:588 [inline]
 xlog_wait_on_iclog+0x4ac/0x6f0 fs/xfs/xfs_log.c:841
 xlog_force_lsn+0x4d7/0x970 fs/xfs/xfs_log.c:3045
 xfs_log_force_seq+0x1c9/0x440 fs/xfs/xfs_log.c:3082
 __xfs_trans_commit+0x7d2/0xbd0 fs/xfs/xfs_trans.c:879
 xfs_trans_commit+0x13e/0x1c0 fs/xfs/xfs_trans.c:928
 xfs_sync_sb_buf+0x134/0x230 fs/xfs/libxfs/xfs_sb.c:1472
 xfs_ioc_setlabel fs/xfs/xfs_ioctl.c:1039 [inline]
 xfs_file_ioctl+0x14b2/0x1830 fs/xfs/xfs_ioctl.c:1196
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:597 [inline]
 __se_sys_ioctl+0xfc/0x170 fs/ioctl.c:583
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0xfa0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f601931eec9
RSP: 002b:00007f6018986038 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007f6019575fa0 RCX: 00007f601931eec9
RDX: 00002000000001c0 RSI: 0000000041009432 RDI: 0000000000000004
RBP: 00007f60193a1f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f6019576038 R14: 00007f6019575fa0 R15: 00007fffb9cf9fe8
 </TASK>
INFO: task syz.0.17:6177 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.0.17        state:D stack:27208 pid:6177  tgid:6150  ppid:5941   task_flags:0x400140 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5325 [inline]
 __schedule+0x16f3/0x4c20 kernel/sched/core.c:6929
 __schedule_loop kernel/sched/core.c:7011 [inline]
 schedule+0x165/0x360 kernel/sched/core.c:7026
 schedule_timeout+0x9a/0x270 kernel/time/sleep_timeout.c:75
 ___down_common kernel/locking/semaphore.c:268 [inline]
 __down_common+0x319/0x6a0 kernel/locking/semaphore.c:293
 down+0x80/0xd0 kernel/locking/semaphore.c:100
 xfs_buf_lock+0x15d/0x4d0 fs/xfs/xfs_buf.c:993
 xfs_buf_item_unpin+0x1d4/0x700 fs/xfs/xfs_buf_item.c:556
 xlog_cil_ail_insert fs/xfs/xfs_log_cil.c:-1 [inline]
 xlog_cil_committed+0x95c/0x1040 fs/xfs/xfs_log_cil.c:897
 xlog_cil_process_committed+0x15c/0x1b0 fs/xfs/xfs_log_cil.c:927
 xlog_state_shutdown_callbacks+0x269/0x360 fs/xfs/xfs_log.c:488
 xlog_force_shutdown+0x332/0x400 fs/xfs/xfs_log.c:3520
 xfs_do_force_shutdown+0x283/0x640 fs/xfs/xfs_fsops.c:517
 xfs_fs_goingdown+0x71/0x150 fs/xfs/xfs_fsops.c:-1
 xfs_file_ioctl+0x11be/0x1830 fs/xfs/xfs_ioctl.c:1371
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:597 [inline]
 __se_sys_ioctl+0xfc/0x170 fs/ioctl.c:583
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0xfa0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f601931eec9
RSP: 002b:00007f6018965038 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007f6019576090 RCX: 00007f601931eec9
RDX: 0000200000000080 RSI: 000000008004587d RDI: 0000000000000005
RBP: 00007f60193a1f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f6019576128 R14: 00007f6019576090 R15: 00007fffb9cf9fe8
 </TASK>

Showing all locks held in the system:
1 lock held by khungtaskd/38:
 #0: ffffffff8d7aa500 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8d7aa500 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:867 [inline]
 #0: ffffffff8d7aa500 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x2e/0x180 kernel/locking/lockdep.c:6775
2 locks held by kworker/1:1/44:
 #0: ffff88805b739138 ((wq_completion)xfs-sync/loop6){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3238 [inline]
 #0: ffff88805b739138 ((wq_completion)xfs-sync/loop6){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x17b0 kernel/workqueue.c:3346
 #1: ffffc90000b57ba0 ((work_completion)(&(&log->l_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3239 [inline]
 #1: ffffc90000b57ba0 ((work_completion)(&(&log->l_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x17b0 kernel/workqueue.c:3346
2 locks held by kworker/u8:4/70:
 #0: ffff888055624138 ((wq_completion)xfs-cil/loop6){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3238 [inline]
 #0: ffff888055624138 ((wq_completion)xfs-cil/loop6){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x17b0 kernel/workqueue.c:3346
 #1: ffffc9000155fba0 ((work_completion)(&ctx->push_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3239 [inline]
 #1: ffffc9000155fba0 ((work_completion)(&ctx->push_work)){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x17b0 kernel/workqueue.c:3346
3 locks held by kworker/0:2/1245:
2 locks held by getty/5563:
 #0: ffff88823bf320a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc90003e832e0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x444/0x1400 drivers/tty/n_tty.c:2222
1 lock held by syz.0.17/6151:
 #0: ffff888029a28480 (sb_writers#12){.+.+}-{0:0}, at: mnt_want_write_file+0x63/0x210 fs/namespace.c:552
1 lock held by syz.2.33/6264:
 #0: ffff888050c54480 (sb_writers#12){.+.+}-{0:0}, at: mnt_want_write_file+0x63/0x210 fs/namespace.c:552
1 lock held by syz.1.34/6276:
 #0: ffff8880551d2480 (sb_writers#12){.+.+}-{0:0}, at: mnt_want_write_file+0x63/0x210 fs/namespace.c:552
1 lock held by syz.3.80/6795:
 #0: ffff888056592480 (sb_writers#12){.+.+}-{0:0}, at: mnt_want_write_file+0x63/0x210 fs/namespace.c:552
1 lock held by syz.6.61/6859:
 #0: ffff888039de8480 (sb_writers#12){.+.+}-{0:0}, at: mnt_want_write_file+0x63/0x210 fs/namespace.c:552
1 lock held by syz.5.113/7166:
 #0: ffff8880365f0480 (sb_writers#12){.+.+}-{0:0}, at: mnt_want_write_file+0x63/0x210 fs/namespace.c:552
1 lock held by syz.4.127/7342:
 #0: ffff88802bb78480 (sb_writers#12){.+.+}-{0:0}, at: mnt_want_write_file+0x63/0x210 fs/namespace.c:552
1 lock held by syz.8.104/7367:
 #0: ffff888039d14480 (sb_writers#12){.+.+}-{0:0}, at: mnt_want_write_file+0x63/0x210 fs/namespace.c:552
1 lock held by syz.7.171/7768:
 #0: ffff888067dbc480 (sb_writers#12){.+.+}-{0:0}, at: mnt_want_write_file+0x63/0x210 fs/namespace.c:552
1 lock held by syz.9.185/7887:
 #0: ffff88806ddf0480 (sb_writers#12){.+.+}-{0:0}, at: mnt_want_write_file+0x63/0x210 fs/namespace.c:552
4 locks held by syz.2.319/8216:
1 lock held by syz.6.321/8222:
5 locks held by syz.0.323/8224:
1 lock held by syz-executor/8227:

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 38 Comm: khungtaskd Not tainted syzkaller #0 PREEMPT_{RT,(full)} 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/18/2025
Call Trace:
 <TASK>
 dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x39e/0x3d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x17a/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:332 [inline]
 watchdog+0xf60/0xfa0 kernel/hung_task.c:495
 kthread+0x711/0x8a0 kernel/kthread.c:463
 ret_from_fork+0x4b9/0x870 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 1245 Comm: kworker/0:2 Not tainted syzkaller #0 PREEMPT_{RT,(full)} 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/18/2025
Workqueue: events_power_efficient wg_ratelimiter_gc_entries
RIP: 0010:__lock_acquire+0x818/0xd20 kernel/locking/lockdep.c:-1
Code: 8d 48 89 de e8 d9 b7 1c 03 eb c2 44 89 e0 25 ff 1f 00 00 41 c1 ec 03 41 81 e4 00 60 00 00 41 09 c4 4c 89 f9 48 c1 e9 20 89 ca <c1> c2 04 41 29 cc 44 31 e2 44 01 f9 41 29 d7 89 d6 c1 c6 06 44 31
RSP: 0000:ffffc90005177770 EFLAGS: 00000803
RAX: 0000000000000b53 RBX: 0000000000000003 RCX: 000000008c3c8419
RDX: 000000008c3c8419 RSI: ffff888026bd47d8 RDI: ffff888026bd3c00
RBP: 0000000000000000 R08: 0000000000000000 R09: ffffffff8ac7bb0a
R10: dffffc0000000000 R11: fffffbfff1deecaf R12: 0000000000000b53
R13: ffff888026bd4760 R14: ffff888026bd47d8 R15: 8c3c8419ff0f723d
FS:  0000000000000000(0000) GS:ffff888126bcd000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f4515835613 CR3: 0000000071142000 CR4: 00000000003526f0
Call Trace:
 <TASK>
 lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5868
 __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
 _raw_spin_lock_irqsave+0xa7/0xf0 kernel/locking/spinlock.c:162
 rtlock_slowlock kernel/locking/rtmutex.c:1894 [inline]
 rtlock_lock kernel/locking/spinlock_rt.c:43 [inline]
 __rt_spin_lock kernel/locking/spinlock_rt.c:49 [inline]
 rt_spin_lock+0x14a/0x3e0 kernel/locking/spinlock_rt.c:57
 spin_lock include/linux/spinlock_rt.h:44 [inline]
 wg_ratelimiter_gc_entries+0x5d/0x480 drivers/net/wireguard/ratelimiter.c:63
 process_one_work kernel/workqueue.c:3263 [inline]
 process_scheduled_works+0xae1/0x17b0 kernel/workqueue.c:3346
 worker_thread+0x8a0/0xda0 kernel/workqueue.c:3427
 kthread+0x711/0x8a0 kernel/kthread.c:463
 ret_from_fork+0x4b9/0x870 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.

If you want to overwrite report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [syzbot] [xfs?] INFO: task hung in xlog_force_lsn (2)
  2025-10-10 20:49 [syzbot] [xfs?] INFO: task hung in xlog_force_lsn (2) syzbot
@ 2026-04-13  6:19 ` Yuto Ohnuki
  2026-04-13  6:38   ` syzbot
  2026-04-13  8:06 ` Yuto Ohnuki
  1 sibling, 1 reply; 5+ messages in thread
From: Yuto Ohnuki @ 2026-04-13  6:19 UTC (permalink / raw)
  To: syzbot+c27dee924f3271489c82; +Cc: linux-xfs, syzkaller-bugs, Yuto Ohnuki

#syz test

diff --git a/fs/xfs/xfs_log.c b/fs/xfs/xfs_log.c
index f807f8f4f705..2645052042bf 100644
--- a/fs/xfs/xfs_log.c
+++ b/fs/xfs/xfs_log.c
@@ -426,6 +426,23 @@ xlog_state_shutdown_callbacks(
 	struct xlog_in_core	*iclog;
 	LIST_HEAD(cb_list);
 
+	/*
+	 * Shutdown waiters on ic_force_wait do not depend on callback
+	 * completion: once the log has been shut down, they only need to
+	 * wake, observe xlog_is_shutdown(), and abort with -EIO.
+	 *
+	 * Wake them before processing callbacks so that a callback that
+	 * blocks on a buffer lock cannot prevent the wakeup from ever being
+	 * issued.
+	 *
+	 * Keep the ic_write_wait wakeups after callback processing so that
+	 * shutdown callback side effects complete before teardown progresses.
+	 */
+	iclog = log->l_iclog;
+	do {
+		wake_up_all(&iclog->ic_force_wait);
+	} while ((iclog = iclog->ic_next) != log->l_iclog);
+
 	iclog = log->l_iclog;
 	do {
 		if (atomic_read(&iclog->ic_refcnt)) {
@@ -439,7 +456,6 @@ xlog_state_shutdown_callbacks(
 
 		spin_lock(&log->l_icloglock);
 		wake_up_all(&iclog->ic_write_wait);
-		wake_up_all(&iclog->ic_force_wait);
 	} while ((iclog = iclog->ic_next) != log->l_iclog);
 
 	wake_up_all(&log->l_flush_wait);
-- 
2.50.1
Amazon Web Services EMEA SARL, 38 avenue John F. Kennedy, L-1855 Luxembourg, R.C.S. Luxembourg B186284

Amazon Web Services EMEA SARL, Irish Branch, One Burlington Plaza, Burlington Road, Dublin 4, Ireland, branch registration number 908705
^ permalink raw reply related	[flat|nested] 5+ messages in thread

* Re: [syzbot] [xfs?] INFO: task hung in xlog_force_lsn (2)
  2026-04-13  6:19 ` Yuto Ohnuki
@ 2026-04-13  6:38   ` syzbot
  0 siblings, 0 replies; 5+ messages in thread
From: syzbot @ 2026-04-13  6:38 UTC (permalink / raw)
  To: linux-kernel, linux-xfs, syzkaller-bugs, ytohnuki

Hello,

syzbot has tested the proposed patch but the reproducer is still triggering an issue:
INFO: task hung in xfs_buf_item_unpin

INFO: task kworker/u8:0:12 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/u8:0    state:D stack:20896 pid:12    tgid:12    ppid:2      task_flags:0x4248060 flags:0x00080000
Workqueue: xfs-cil/loop1 xlog_cil_push_work
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5298 [inline]
 __schedule+0x1553/0x5190 kernel/sched/core.c:6911
 __schedule_loop kernel/sched/core.c:6993 [inline]
 schedule+0x164/0x360 kernel/sched/core.c:7008
 schedule_timeout+0xc3/0x2c0 kernel/time/sleep_timeout.c:75
 ___down_common kernel/locking/semaphore.c:268 [inline]
 __down_common+0x321/0x730 kernel/locking/semaphore.c:293
 down+0x80/0xd0 kernel/locking/semaphore.c:100
 xfs_buf_lock+0x14d/0x520 fs/xfs/xfs_buf.c:993
 xfs_buf_item_unpin+0x1c4/0x770 fs/xfs/xfs_buf_item.c:551
 xlog_cil_ail_insert fs/xfs/xfs_log_cil.c:-1 [inline]
 xlog_cil_committed+0x9f4/0x1170 fs/xfs/xfs_log_cil.c:995
 xlog_cil_push_work+0x1e0c/0x23d0 fs/xfs/xfs_log_cil.c:1607
 process_one_work kernel/workqueue.c:3288 [inline]
 process_scheduled_works+0xb6e/0x18c0 kernel/workqueue.c:3371
 worker_thread+0x8a2/0xda0 kernel/workqueue.c:3452
 kthread+0x388/0x470 kernel/kthread.c:436
 ret_from_fork+0x51e/0xb90 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
INFO: task syz.1.22:6678 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.1.22        state:D stack:25536 pid:6678  tgid:6675  ppid:6316   task_flags:0x400140 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5298 [inline]
 __schedule+0x1553/0x5190 kernel/sched/core.c:6911
 __schedule_loop kernel/sched/core.c:6993 [inline]
 schedule+0x164/0x360 kernel/sched/core.c:7008
 schedule_timeout+0xc3/0x2c0 kernel/time/sleep_timeout.c:75
 do_wait_for_common kernel/sched/completion.c:100 [inline]
 __wait_for_common kernel/sched/completion.c:121 [inline]
 wait_for_common kernel/sched/completion.c:132 [inline]
 wait_for_completion+0x2cc/0x5e0 kernel/sched/completion.c:153
 __flush_workqueue+0x6f6/0x14f0 kernel/workqueue.c:4096
 xlog_cil_push_now fs/xfs/xfs_log_cil.c:1725 [inline]
 xlog_cil_force_seq+0x228/0x8c0 fs/xfs/xfs_log_cil.c:1927
 xfs_log_force_seq+0x196/0x440 fs/xfs/xfs_log.c:3000
 __xfs_trans_commit+0x7d3/0xc20 fs/xfs/xfs_trans.c:877
 xfs_trans_commit+0x13e/0x1c0 fs/xfs/xfs_trans.c:926
 xfs_sync_sb_buf+0x13f/0x230 fs/xfs/libxfs/xfs_sb.c:1490
 xfs_ioc_setlabel+0x1de/0x340 fs/xfs/xfs_ioctl.c:1041
 xfs_file_ioctl+0x9c5/0x1710 fs/xfs/xfs_ioctl.c:1198
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:597 [inline]
 __se_sys_ioctl+0xff/0x170 fs/ioctl.c:583
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f73218ba539
RSP: 002b:00007f7320f1e028 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007f7321b25fa0 RCX: 00007f73218ba539
RDX: 00002000000001c0 RSI: 0000000041009432 RDI: 0000000000000004
RBP: 00007f732194dee0 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f7321b26038 R14: 00007f7321b25fa0 R15: 00007ffded160d28
 </TASK>

Showing all locks held in the system:
2 locks held by kworker/u8:0/12:
 #0: ffff88803de1e138 ((wq_completion)xfs-cil/loop1){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3263 [inline]
 #0: ffff88803de1e138 ((wq_completion)xfs-cil/loop1){+.+.}-{0:0}, at: process_scheduled_works+0xa52/0x18c0 kernel/workqueue.c:3371
 #1: ffffc90000117c40 ((work_completion)(&ctx->push_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3264 [inline]
 #1: ffffc90000117c40 ((work_completion)(&ctx->push_work)){+.+.}-{0:0}, at: process_scheduled_works+0xa8d/0x18c0 kernel/workqueue.c:3371
1 lock held by khungtaskd/38:
 #0: ffffffff8dbb20c0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:312 [inline]
 #0: ffffffff8dbb20c0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:850 [inline]
 #0: ffffffff8dbb20c0 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x2e/0x180 kernel/locking/lockdep.c:6775
3 locks held by kworker/u9:0/60:
 #0: ffff888028c92138 ((wq_completion)hci7){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3263 [inline]
 #0: ffff888028c92138 ((wq_completion)hci7){+.+.}-{0:0}, at: process_scheduled_works+0xa52/0x18c0 kernel/workqueue.c:3371
 #1: ffffc9000126fc40 ((work_completion)(&hdev->cmd_sync_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3264 [inline]
 #1: ffffc9000126fc40 ((work_completion)(&hdev->cmd_sync_work)){+.+.}-{0:0}, at: process_scheduled_works+0xa8d/0x18c0 kernel/workqueue.c:3371
 #2: ffff8880310bcf80 (&hdev->req_lock){+.+.}-{4:4}, at: hci_cmd_sync_work+0x1d3/0x400 net/bluetooth/hci_sync.c:331
2 locks held by kworker/u8:4/67:
 #0: ffff88813fe6c138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3263 [inline]
 #0: ffff88813fe6c138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0xa52/0x18c0 kernel/workqueue.c:3371
 #1: ffffc9000153fc40 ((reaper_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3264 [inline]
 #1: ffffc9000153fc40 ((reaper_work).work){+.+.}-{0:0}, at: process_scheduled_works+0xa8d/0x18c0 kernel/workqueue.c:3371
2 locks held by kworker/u8:6/140:
 #0: ffff88803a6cc138 ((wq_completion)xfs-cil/loop4#36){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3263 [inline]
 #0: ffff88803a6cc138 ((wq_completion)xfs-cil/loop4#36){+.+.}-{0:0}, at: process_scheduled_works+0xa52/0x18c0 kernel/workqueue.c:3371
 #1: ffffc90003aa7c40 ((work_completion)(&ctx->push_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3264 [inline]
 #1: ffffc90003aa7c40 ((work_completion)(&ctx->push_work)){+.+.}-{0:0}, at: process_scheduled_works+0xa8d/0x18c0 kernel/workqueue.c:3371
2 locks held by kworker/1:2/805:
 #0: ffff88802558a538 ((wq_completion)xfs-sync/loop1){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3263 [inline]
 #0: ffff88802558a538 ((wq_completion)xfs-sync/loop1){+.+.}-{0:0}, at: process_scheduled_works+0xa52/0x18c0 kernel/workqueue.c:3371
 #1: ffffc90005107c40 ((work_completion)(&(&log->l_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3264 [inline]
 #1: ffffc90005107c40 ((work_completion)(&(&log->l_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0xa8d/0x18c0 kernel/workqueue.c:3371
2 locks held by kworker/u8:8/1054:
 #0: ffff88803583d138 ((wq_completion)xfs-cil/loop3#2){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3263 [inline]
 #0: ffff88803583d138 ((wq_completion)xfs-cil/loop3#2){+.+.}-{0:0}, at: process_scheduled_works+0xa52/0x18c0 kernel/workqueue.c:3371
 #1: ffffc90005edfc40 ((work_completion)(&ctx->push_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3264 [inline]
 #1: ffffc90005edfc40 ((work_completion)(&ctx->push_work)){+.+.}-{0:0}, at: process_scheduled_works+0xa8d/0x18c0 kernel/workqueue.c:3371
3 locks held by kworker/u8:10/1400:
 #0: ffff88803478a138 ((wq_completion)loop8){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3263 [inline]
 #0: ffff88803478a138 ((wq_completion)loop8){+.+.}-{0:0}, at: process_scheduled_works+0xa52/0x18c0 kernel/workqueue.c:3371
 #1: ffffc90006c0fc40 ((work_completion)(&lo->rootcg_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3264 [inline]
 #1: ffffc90006c0fc40 ((work_completion)(&lo->rootcg_work)){+.+.}-{0:0}, at: process_scheduled_works+0xa8d/0x18c0 kernel/workqueue.c:3371
 #2: ffff888025b90160 (&lo->lo_work_lock){+.+.}-{3:3}, at: spin_lock_irq include/linux/spinlock_rt.h:96 [inline]
 #2: ffff888025b90160 (&lo->lo_work_lock){+.+.}-{3:3}, at: loop_process_work+0x125/0x11b0 drivers/block/loop.c:1953
2 locks held by kworker/u8:12/1431:
 #0: ffff88813fe6c138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3263 [inline]
 #0: ffff88813fe6c138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0xa52/0x18c0 kernel/workqueue.c:3371
 #1: ffffc90006b0fc40 (connector_reaper_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3264 [inline]
 #1: ffffc90006b0fc40 (connector_reaper_work){+.+.}-{0:0}, at: process_scheduled_works+0xa8d/0x18c0 kernel/workqueue.c:3371
2 locks held by kworker/u8:14/4094:
 #0: ffff88803ac6a138 ((wq_completion)xfs-cil/loop2#4){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3263 [inline]
 #0: ffff88803ac6a138 ((wq_completion)xfs-cil/loop2#4){+.+.}-{0:0}, at: process_scheduled_works+0xa52/0x18c0 kernel/workqueue.c:3371
 #1: ffffc900107bfc40 ((work_completion)(&ctx->push_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3264 [inline]
 #1: ffffc900107bfc40 ((work_completion)(&ctx->push_work)){+.+.}-{0:0}, at: process_scheduled_works+0xa8d/0x18c0 kernel/workqueue.c:3371
1 lock held by udevd/5162:
2 locks held by getty/5551:
 #0: ffff8880370870a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc90003e8b2e0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x462/0x13c0 drivers/tty/n_tty.c:2211
1 lock held by udevd/6362:
 #0: ffff88801b2c5a58 (&n->list_lock){+.+.}-{3:3}, at: spin_lock include/linux/spinlock_rt.h:45 [inline]
 #0: ffff88801b2c5a58 (&n->list_lock){+.+.}-{3:3}, at: get_partial_node_bulk mm/slub.c:3750 [inline]
 #0: ffff88801b2c5a58 (&n->list_lock){+.+.}-{3:3}, at: __refill_objects_node+0x87/0x560 mm/slub.c:7027
2 locks held by kworker/1:6/6523:
 #0: ffff8880376a0938 ((wq_completion)xfs-sync/loop4#37){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3263 [inline]
 #0: ffff8880376a0938 ((wq_completion)xfs-sync/loop4#37){+.+.}-{0:0}, at: process_scheduled_works+0xa52/0x18c0 kernel/workqueue.c:3371
 #1: ffffc90004f07c40 ((work_completion)(&(&log->l_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3264 [inline]
 #1: ffffc90004f07c40 ((work_completion)(&(&log->l_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0xa8d/0x18c0 kernel/workqueue.c:3371
2 locks held by kworker/1:7/6527:
 #0: ffff8880296d4d38 ((wq_completion)xfs-sync/loop3){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3263 [inline]
 #0: ffff8880296d4d38 ((wq_completion)xfs-sync/loop3){+.+.}-{0:0}, at: process_scheduled_works+0xa52/0x18c0 kernel/workqueue.c:3371
 #1: ffffc90004f27c40 ((work_completion)(&(&log->l_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3264 [inline]
 #1: ffffc90004f27c40 ((work_completion)(&(&log->l_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0xa8d/0x18c0 kernel/workqueue.c:3371
1 lock held by udevd/6623:
 #0: ffff88802205be30 (&sb->s_type->i_mutex_key#9){++++}-{4:4}, at: inode_lock_shared include/linux/fs.h:1043 [inline]
 #0: ffff88802205be30 (&sb->s_type->i_mutex_key#9){++++}-{4:4}, at: blkdev_read_iter+0x2ff/0x440 block/fops.c:854
1 lock held by udevd/6644:
5 locks held by udevd/6645:
2 locks held by kworker/0:6/6670:
 #0: ffff88803b9f1d38 ((wq_completion)xfs-sync/loop2){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3263 [inline]
 #0: ffff88803b9f1d38 ((wq_completion)xfs-sync/loop2){+.+.}-{0:0}, at: process_scheduled_works+0xa52/0x18c0 kernel/workqueue.c:3371
 #1: ffffc900015cfc40 ((work_completion)(&(&log->l_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3264 [inline]
 #1: ffffc900015cfc40 ((work_completion)(&(&log->l_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0xa8d/0x18c0 kernel/workqueue.c:3371
1 lock held by syz.1.22/6678:
 #0: ffff888036b5c480 (sb_writers#12){.+.+}-{0:0}, at: mnt_want_write_file+0x63/0x210 fs/namespace.c:537
1 lock held by syz.3.32/6804:
 #0: ffff8880247ec480 (sb_writers#12){.+.+}-{0:0}, at: mnt_want_write_file+0x63/0x210 fs/namespace.c:537
1 lock held by syz.2.37/6884:
 #0: ffff8880288bc480 (sb_writers#12){.+.+}-{0:0}, at: mnt_want_write_file+0x63/0x210 fs/namespace.c:537
1 lock held by syz.4.146/8193:
 #0: ffff888034c2a480 (sb_writers#12){.+.+}-{0:0}, at: mnt_want_write_file+0x63/0x210 fs/namespace.c:537
1 lock held by syz-executor/8516:
 #0: ffff88801b2c5a58 (&n->list_lock){+.+.}-{3:3}, at: spin_lock include/linux/spinlock_rt.h:45 [inline]
 #0: ffff88801b2c5a58 (&n->list_lock){+.+.}-{3:3}, at: get_partial_node_bulk mm/slub.c:3750 [inline]
 #0: ffff88801b2c5a58 (&n->list_lock){+.+.}-{3:3}, at: __refill_objects_node+0x87/0x560 mm/slub.c:7027
1 lock held by syz.5.318/10004:
 #0: ffff88803c46c0d0 (&type->s_umount_key#55/1){+.+.}-{4:4}, at: alloc_super+0x28c/0xac0 fs/super.c:345
2 locks held by syz.6.319/10016:
2 locks held by syz.0.320/10018:
2 locks held by syz.7.321/10020:

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 38 Comm: khungtaskd Not tainted syzkaller #0 PREEMPT_{RT,(full)} 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/18/2026
Call Trace:
 <TASK>
 dump_stack_lvl+0xe8/0x150 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x274/0x2d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x17a/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:161 [inline]
 __sys_info lib/sys_info.c:157 [inline]
 sys_info+0x135/0x170 lib/sys_info.c:165
 check_hung_uninterruptible_tasks kernel/hung_task.c:346 [inline]
 watchdog+0xfd9/0x1030 kernel/hung_task.c:515
 kthread+0x388/0x470 kernel/kthread.c:436
 ret_from_fork+0x51e/0xb90 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 6644 Comm: udevd Not tainted syzkaller #0 PREEMPT_{RT,(full)} 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/18/2026
RIP: 0010:spin_lock include/linux/spinlock_rt.h:45 [inline]
RIP: 0010:__slab_free+0xe6/0x2a0 mm/slub.c:5519
Code: f0 49 0f c7 4e 20 0f 84 cd 00 00 00 48 89 44 24 40 48 89 54 24 48 e9 ac 00 00 00 49 8b 06 48 c1 e8 3a 4c 8b ac c3 c8 00 00 00 <4c> 89 ef e8 b2 9b f0 08 4d 8b 26 41 c1 ec 09 41 83 e4 01 f6 43 0a
RSP: 0018:ffffc900051978b0 EFLAGS: 00000246
RAX: 0000000000000000 RBX: ffff88801b2c8280 RCX: 0000000000120011
RDX: ffff88805d4e99c8 RSI: ffff88805d4e99c8 RDI: ffff88801b2c8280
RBP: ffffc90005197938 R08: 0000000000000001 R09: ffffffff8227effc
R10: dffffc0000000000 R11: fffffbfff1e7e917 R12: 0000000000000000
R13: ffff88801b2c5b00 R14: ffffea0001753a00 R15: 0000000000000000
FS:  00007fe1d410f880(0000) GS:ffff88812660f000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fe1d37bb000 CR3: 0000000035e24000 CR4: 00000000003526f0
Call Trace:
 <TASK>
 qlink_free mm/kasan/quarantine.c:163 [inline]
 qlist_free_all+0x97/0x100 mm/kasan/quarantine.c:179
 kasan_quarantine_reduce+0x148/0x160 mm/kasan/quarantine.c:286
 __kasan_slab_alloc+0x22/0x80 mm/kasan/common.c:350
 kasan_slab_alloc include/linux/kasan.h:253 [inline]
 slab_post_alloc_hook mm/slub.c:4538 [inline]
 slab_alloc_node mm/slub.c:4866 [inline]
 __do_kmalloc_node mm/slub.c:5259 [inline]
 __kmalloc_noprof+0x399/0x7b0 mm/slub.c:5272
 kmalloc_noprof include/linux/slab.h:954 [inline]
 tomoyo_realpath_from_path+0xe3/0x5d0 security/tomoyo/realpath.c:251
 tomoyo_get_realpath security/tomoyo/file.c:151 [inline]
 tomoyo_path_perm+0x283/0x560 security/tomoyo/file.c:827
 security_inode_getattr+0x12b/0x310 security/security.c:1870
 vfs_getattr fs/stat.c:259 [inline]
 vfs_fstat fs/stat.c:281 [inline]
 __do_sys_newfstat fs/stat.c:551 [inline]
 __se_sys_newfstat fs/stat.c:546 [inline]
 __x64_sys_newfstat+0x13b/0x270 fs/stat.c:546
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fe1d4267ad7
Code: 73 01 c3 48 8b 0d 21 f3 0d 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 b8 05 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 01 c3 48 8b 15 f1 f2 0d 00 f7 d8 64 89 02 b8
RSP: 002b:00007ffd1c62b248 EFLAGS: 00000297 ORIG_RAX: 0000000000000005
RAX: ffffffffffffffda RBX: 000055d891db7280 RCX: 00007fe1d4267ad7
RDX: 00007fe1d4345ea0 RSI: 00007ffd1c62b250 RDI: 000000000000000b
RBP: 00007fe1d4345ff0 R08: 0000000000000001 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000297 R12: 000000000000000a
R13: 0000000000003fff R14: 0000000000000000 R15: 000055d891db7280
 </TASK>


Tested on:

commit:         028ef9c9 Linux 7.0
git tree:       upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=125260ce580000
kernel config:  https://syzkaller.appspot.com/x/.config?x=9800c931612cba58
dashboard link: https://syzkaller.appspot.com/bug?extid=c27dee924f3271489c82
compiler:       Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8
patch:          https://syzkaller.appspot.com/x/patch.diff?x=1266d106580000


^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [syzbot] [xfs?] INFO: task hung in xlog_force_lsn (2)
  2025-10-10 20:49 [syzbot] [xfs?] INFO: task hung in xlog_force_lsn (2) syzbot
  2026-04-13  6:19 ` Yuto Ohnuki
@ 2026-04-13  8:06 ` Yuto Ohnuki
  2026-04-13  8:29   ` syzbot
  1 sibling, 1 reply; 5+ messages in thread
From: Yuto Ohnuki @ 2026-04-13  8:06 UTC (permalink / raw)
  To: syzbot+c27dee924f3271489c82; +Cc: linux-xfs, syzkaller-bugs, Yuto Ohnuki

#syz test

diff --git a/fs/xfs/xfs_log.c b/fs/xfs/xfs_log.c
index f807f8f4f705..2645052042bf 100644
--- a/fs/xfs/xfs_log.c
+++ b/fs/xfs/xfs_log.c
@@ -426,6 +426,23 @@ xlog_state_shutdown_callbacks(
        struct xlog_in_core     *iclog;
        LIST_HEAD(cb_list);
 
+       /*
+        * Shutdown waiters on ic_force_wait do not require callback completion.
+        * Once log shutdown has been established, they only need to wake,
+        * observe xlog_is_shutdown(), and abort with -EIO.
+        *
+        * Wake them before processing callbacks to avoid deadlock if callback
+        * processing blocks on a buffer lock and prevents the wakeup from being
+        * reached.
+        *
+        * Keep ic_write_wait wakeups ordered after callback processing so
+        * shutdown callback side effects still complete before teardown progresses.
+        */
+       iclog = log->l_iclog;
+       do {
+               wake_up_all(&iclog->ic_force_wait);
+       } while ((iclog = iclog->ic_next) != log->l_iclog);
+
        iclog = log->l_iclog;
        do {
                if (atomic_read(&iclog->ic_refcnt)) {
@@ -439,7 +456,6 @@ xlog_state_shutdown_callbacks(
 
                spin_lock(&log->l_icloglock);
                wake_up_all(&iclog->ic_write_wait);
-               wake_up_all(&iclog->ic_force_wait);
        } while ((iclog = iclog->ic_next) != log->l_iclog);
 
        wake_up_all(&log->l_flush_wait);
diff --git a/fs/xfs/xfs_log_cil.c b/fs/xfs/xfs_log_cil.c
index edc368938f30..a4244a8c43e1 100644
--- a/fs/xfs/xfs_log_cil.c
+++ b/fs/xfs/xfs_log_cil.c
@@ -1721,7 +1721,7 @@ xlog_cil_push_now(
        ASSERT(push_seq && push_seq <= cil->xc_current_sequence);
 
        /* start on any pending background push to minimise wait time on it */
-       if (!async)
+       if (!async && !xlog_is_shutdown(log))
                flush_workqueue(cil->xc_push_wq);
 
        spin_lock(&cil->xc_push_lock);
-- 
2.50.1



Amazon Web Services EMEA SARL, 38 avenue John F. Kennedy, L-1855 Luxembourg, R.C.S. Luxembourg B186284

Amazon Web Services EMEA SARL, Irish Branch, One Burlington Plaza, Burlington Road, Dublin 4, Ireland, branch registration number 908705




^ permalink raw reply related	[flat|nested] 5+ messages in thread

* Re: [syzbot] [xfs?] INFO: task hung in xlog_force_lsn (2)
  2026-04-13  8:06 ` Yuto Ohnuki
@ 2026-04-13  8:29   ` syzbot
  0 siblings, 0 replies; 5+ messages in thread
From: syzbot @ 2026-04-13  8:29 UTC (permalink / raw)
  To: linux-kernel, linux-xfs, syzkaller-bugs, ytohnuki

Hello,

syzbot has tested the proposed patch but the reproducer is still triggering an issue:
INFO: task hung in xfs_buf_item_unpin

INFO: task kworker/u8:3:57 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/u8:3    state:D stack:24616 pid:57    tgid:57    ppid:2      task_flags:0x4248160 flags:0x00080000
Workqueue: xfs-cil/loop3 xlog_cil_push_work
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5298 [inline]
 __schedule+0x1553/0x5190 kernel/sched/core.c:6911
 __schedule_loop kernel/sched/core.c:6993 [inline]
 schedule+0x164/0x360 kernel/sched/core.c:7008
 schedule_timeout+0xc3/0x2c0 kernel/time/sleep_timeout.c:75
 ___down_common kernel/locking/semaphore.c:268 [inline]
 __down_common+0x321/0x730 kernel/locking/semaphore.c:293
 down+0x80/0xd0 kernel/locking/semaphore.c:100
 xfs_buf_lock+0x14d/0x520 fs/xfs/xfs_buf.c:993
 xfs_buf_item_unpin+0x1c4/0x770 fs/xfs/xfs_buf_item.c:551
 xlog_cil_ail_insert fs/xfs/xfs_log_cil.c:-1 [inline]
 xlog_cil_committed+0x9f4/0x1170 fs/xfs/xfs_log_cil.c:995
 xlog_cil_push_work+0x1e0c/0x23d0 fs/xfs/xfs_log_cil.c:1607
 process_one_work kernel/workqueue.c:3288 [inline]
 process_scheduled_works+0xb6e/0x18c0 kernel/workqueue.c:3371
 worker_thread+0x8a2/0xda0 kernel/workqueue.c:3452
 kthread+0x388/0x470 kernel/kthread.c:436
 ret_from_fork+0x51e/0xb90 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
INFO: task syz.3.50:7129 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.3.50        state:D stack:25616 pid:7129  tgid:7127  ppid:6387   task_flags:0x400140 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5298 [inline]
 __schedule+0x1553/0x5190 kernel/sched/core.c:6911
 __schedule_loop kernel/sched/core.c:6993 [inline]
 schedule+0x164/0x360 kernel/sched/core.c:7008
 schedule_timeout+0xc3/0x2c0 kernel/time/sleep_timeout.c:75
 do_wait_for_common kernel/sched/completion.c:100 [inline]
 __wait_for_common kernel/sched/completion.c:121 [inline]
 wait_for_common kernel/sched/completion.c:132 [inline]
 wait_for_completion+0x2cc/0x5e0 kernel/sched/completion.c:153
 __flush_workqueue+0x6f6/0x14f0 kernel/workqueue.c:4096
 xlog_cil_push_now fs/xfs/xfs_log_cil.c:1725 [inline]
 xlog_cil_force_seq+0x262/0x930 fs/xfs/xfs_log_cil.c:1927
 xfs_log_force_seq+0x196/0x440 fs/xfs/xfs_log.c:3000
 __xfs_trans_commit+0x7d3/0xc20 fs/xfs/xfs_trans.c:877
 xfs_trans_commit+0x13e/0x1c0 fs/xfs/xfs_trans.c:926
 xfs_sync_sb_buf+0x13f/0x230 fs/xfs/libxfs/xfs_sb.c:1490
 xfs_ioc_setlabel+0x1de/0x340 fs/xfs/xfs_ioctl.c:1041
 xfs_file_ioctl+0x9c5/0x1710 fs/xfs/xfs_ioctl.c:1198
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:597 [inline]
 __se_sys_ioctl+0xff/0x170 fs/ioctl.c:583
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f6551f5a539
RSP: 002b:00007f65515be028 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007f65521c5fa0 RCX: 00007f6551f5a539
RDX: 00002000000001c0 RSI: 0000000041009432 RDI: 0000000000000004
RBP: 00007f6551fedee0 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f65521c6038 R14: 00007f65521c5fa0 R15: 00007ffeece18258
 </TASK>

Showing all locks held in the system:
2 locks held by kworker/u8:1/13:
 #0: ffff88805ab4c138 ((wq_completion)xfs-cil/loop0#15){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3263 [inline]
 #0: ffff88805ab4c138 ((wq_completion)xfs-cil/loop0#15){+.+.}-{0:0}, at: process_scheduled_works+0xa52/0x18c0 kernel/workqueue.c:3371
 #1: ffffc90000127c40 ((work_completion)(&ctx->push_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3264 [inline]
 #1: ffffc90000127c40 ((work_completion)(&ctx->push_work)){+.+.}-{0:0}, at: process_scheduled_works+0xa8d/0x18c0 kernel/workqueue.c:3371
1 lock held by khungtaskd/37:
 #0: ffffffff8dbb20c0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:312 [inline]
 #0: ffffffff8dbb20c0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:850 [inline]
 #0: ffffffff8dbb20c0 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x2e/0x180 kernel/locking/lockdep.c:6775
2 locks held by kworker/u8:2/39:
 #0: ffff88805a498138 ((wq_completion)xfs-cil/loop2#20){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3263 [inline]
 #0: ffff88805a498138 ((wq_completion)xfs-cil/loop2#20){+.+.}-{0:0}, at: process_scheduled_works+0xa52/0x18c0 kernel/workqueue.c:3371
 #1: ffffc90000b07c40 ((work_completion)(&ctx->push_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3264 [inline]
 #1: ffffc90000b07c40 ((work_completion)(&ctx->push_work)){+.+.}-{0:0}, at: process_scheduled_works+0xa8d/0x18c0 kernel/workqueue.c:3371
2 locks held by kworker/u8:3/57:
 #0: ffff8880346e0138 ((wq_completion)xfs-cil/loop3#6){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3263 [inline]
 #0: ffff8880346e0138 ((wq_completion)xfs-cil/loop3#6){+.+.}-{0:0}, at: process_scheduled_works+0xa52/0x18c0 kernel/workqueue.c:3371
 #1: ffffc9000123fc40 ((work_completion)(&ctx->push_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3264 [inline]
 #1: ffffc9000123fc40 ((work_completion)(&ctx->push_work)){+.+.}-{0:0}, at: process_scheduled_works+0xa8d/0x18c0 kernel/workqueue.c:3371
2 locks held by kworker/u8:8/1051:
 #0: ffff88805ff21938 ((wq_completion)xfs-cil/loop7#34){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3263 [inline]
 #0: ffff88805ff21938 ((wq_completion)xfs-cil/loop7#34){+.+.}-{0:0}, at: process_scheduled_works+0xa52/0x18c0 kernel/workqueue.c:3371
 #1: ffffc900057afc40 ((work_completion)(&ctx->push_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3264 [inline]
 #1: ffffc900057afc40 ((work_completion)(&ctx->push_work)){+.+.}-{0:0}, at: process_scheduled_works+0xa8d/0x18c0 kernel/workqueue.c:3371
2 locks held by kworker/u8:9/1485:
 #0: ffff88805c9f5138 ((wq_completion)xfs-cil/loop5#13){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3263 [inline]
 #0: ffff88805c9f5138 ((wq_completion)xfs-cil/loop5#13){+.+.}-{0:0}, at: process_scheduled_works+0xa52/0x18c0 kernel/workqueue.c:3371
 #1: ffffc9000667fc40 ((work_completion)(&ctx->push_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3264 [inline]
 #1: ffffc9000667fc40 ((work_completion)(&ctx->push_work)){+.+.}-{0:0}, at: process_scheduled_works+0xa8d/0x18c0 kernel/workqueue.c:3371
2 locks held by kworker/u8:10/1960:
 #0: ffff8880412e0938 ((wq_completion)xfs-cil/loop8#29){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3263 [inline]
 #0: ffff8880412e0938 ((wq_completion)xfs-cil/loop8#29){+.+.}-{0:0}, at: process_scheduled_works+0xa52/0x18c0 kernel/workqueue.c:3371
 #1: ffffc9000756fc40 ((work_completion)(&ctx->push_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3264 [inline]
 #1: ffffc9000756fc40 ((work_completion)(&ctx->push_work)){+.+.}-{0:0}, at: process_scheduled_works+0xa8d/0x18c0 kernel/workqueue.c:3371
2 locks held by kworker/u8:12/3013:
 #0: ffff88805b7bb938 ((wq_completion)xfs-cil/loop1#43){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3263 [inline]
 #0: ffff88805b7bb938 ((wq_completion)xfs-cil/loop1#43){+.+.}-{0:0}, at: process_scheduled_works+0xa52/0x18c0 kernel/workqueue.c:3371
 #1: ffffc9000eaefc40 ((work_completion)(&ctx->push_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3264 [inline]
 #1: ffffc9000eaefc40 ((work_completion)(&ctx->push_work)){+.+.}-{0:0}, at: process_scheduled_works+0xa8d/0x18c0 kernel/workqueue.c:3371
2 locks held by getty/5548:
 #0: ffff8880382bf0a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc90003e8b2e0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x462/0x13c0 drivers/tty/n_tty.c:2211
1 lock held by udevd/6541:
 #0: ffff888022331670 (&sb->s_type->i_mutex_key#9){++++}-{4:4}, at: inode_lock_shared include/linux/fs.h:1043 [inline]
 #0: ffff888022331670 (&sb->s_type->i_mutex_key#9){++++}-{4:4}, at: blkdev_read_iter+0x2ff/0x440 block/fops.c:854
2 locks held by kworker/u8:13/6857:
 #0: ffff88803a0cd938 ((wq_completion)xfs-cil/loop6#10){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3263 [inline]
 #0: ffff88803a0cd938 ((wq_completion)xfs-cil/loop6#10){+.+.}-{0:0}, at: process_scheduled_works+0xa52/0x18c0 kernel/workqueue.c:3371
 #1: ffffc90004a27c40 ((work_completion)(&ctx->push_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3264 [inline]
 #1: ffffc90004a27c40 ((work_completion)(&ctx->push_work)){+.+.}-{0:0}, at: process_scheduled_works+0xa8d/0x18c0 kernel/workqueue.c:3371
1 lock held by syz.3.50/7129:
 #0: ffff888038ffc480 (sb_writers#12){.+.+}-{0:0}, at: mnt_want_write_file+0x63/0x210 fs/namespace.c:537
1 lock held by syz.0.87/7519:
 #0: ffff88803bc02480 (sb_writers#12){.+.+}-{0:0}, at: mnt_want_write_file+0x63/0x210 fs/namespace.c:537
1 lock held by syz.2.100/7656:
 #0: ffff888034b6e480 (sb_writers#12){.+.+}-{0:0}, at: mnt_want_write_file+0x63/0x210 fs/namespace.c:537
1 lock held by syz.5.144/8136:
 #0: ffff88805764c480 (sb_writers#12){.+.+}-{0:0}, at: mnt_want_write_file+0x63/0x210 fs/namespace.c:537
1 lock held by syz.1.178/8506:
 #0: ffff88804313c480 (sb_writers#12){.+.+}-{0:0}, at: mnt_want_write_file+0x63/0x210 fs/namespace.c:537
1 lock held by syz.6.180/8535:
 #0: ffff88802a7ea480 (sb_writers#12){.+.+}-{0:0}, at: mnt_want_write_file+0x63/0x210 fs/namespace.c:537
2 locks held by kworker/u8:14/8818:
 #0: ffff888062f6b938 ((wq_completion)xfs-cil/loop9#8){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3263 [inline]
 #0: ffff888062f6b938 ((wq_completion)xfs-cil/loop9#8){+.+.}-{0:0}, at: process_scheduled_works+0xa52/0x18c0 kernel/workqueue.c:3371
 #1: ffffc9000fbdfc40 ((work_completion)(&ctx->push_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3264 [inline]
 #1: ffffc9000fbdfc40 ((work_completion)(&ctx->push_work)){+.+.}-{0:0}, at: process_scheduled_works+0xa8d/0x18c0 kernel/workqueue.c:3371
1 lock held by syz.7.266/9442:
 #0: ffff8880656c2480 (sb_writers#12){.+.+}-{0:0}, at: mnt_want_write_file+0x63/0x210 fs/namespace.c:537
1 lock held by syz.9.270/9483:
 #0: ffff888057cee480 (sb_writers#12){.+.+}-{0:0}, at: mnt_want_write_file+0x63/0x210 fs/namespace.c:537
1 lock held by syz.8.292/9652:
 #0: ffff88805f5ba480 (sb_writers#12){.+.+}-{0:0}, at: mnt_want_write_file+0x63/0x210 fs/namespace.c:537
1 lock held by syz.5.361/10079:
4 locks held by syz.0.362/10081:

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 37 Comm: khungtaskd Not tainted syzkaller #0 PREEMPT_{RT,(full)} 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/18/2026
Call Trace:
 <TASK>
 dump_stack_lvl+0xe8/0x150 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x274/0x2d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x17a/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:161 [inline]
 __sys_info lib/sys_info.c:157 [inline]
 sys_info+0x135/0x170 lib/sys_info.c:165
 check_hung_uninterruptible_tasks kernel/hung_task.c:346 [inline]
 watchdog+0xfd9/0x1030 kernel/hung_task.c:515
 kthread+0x388/0x470 kernel/kthread.c:436
 ret_from_fork+0x51e/0xb90 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 17 Comm: pr/legacy Not tainted syzkaller #0 PREEMPT_{RT,(full)} 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/18/2026
RIP: 0010:io_serial_in+0x77/0xc0 drivers/tty/serial/8250/8250_port.c:401
Code: e8 5e d3 95 fc 44 89 f9 d3 e3 49 83 ee 80 4c 89 f0 48 c1 e8 03 42 80 3c 20 00 74 08 4c 89 f7 e8 ef da fa fc 41 03 1e 89 da ec <0f> b6 c0 5b 41 5c 41 5e 41 5f c3 cc cc cc cc cc 44 89 f9 80 e1 07
RSP: 0018:ffffc900001679d0 EFLAGS: 00000202
RAX: 1ffffffff32cff00 RBX: 00000000000003fd RCX: 0000000000000000
RDX: 00000000000003fd RSI: 0000000000000000 RDI: 0000000000000000
RBP: ffffffff9967fe30 R08: 0000000000000000 R09: 0000000000000000
R10: dffffc0000000000 R11: ffffffff852d02a0 R12: dffffc0000000000
R13: 0000000000000000 R14: ffffffff9967fba0 R15: 0000000000000000
FS:  0000000000000000(0000) GS:ffff88812660f000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f098b3e1000 CR3: 000000002a706000 CR4: 00000000003526f0
Call Trace:
 <TASK>
 serial_in drivers/tty/serial/8250/8250.h:128 [inline]
 serial_lsr_in drivers/tty/serial/8250/8250.h:150 [inline]
 wait_for_lsr+0x1aa/0x2f0 drivers/tty/serial/8250/8250_port.c:1970
 serial8250_fifo_wait_for_lsr_thre drivers/tty/serial/8250/8250_port.c:3207 [inline]
 serial8250_console_fifo_write drivers/tty/serial/8250/8250_port.c:3272 [inline]
 serial8250_console_write+0x120d/0x1b90 drivers/tty/serial/8250/8250_port.c:3357
 console_emit_next_record kernel/printk/printk.c:3163 [inline]
 console_flush_one_record+0x68b/0xb90 kernel/printk/printk.c:3269
 legacy_kthread_func+0x1b6/0x250 kernel/printk/printk.c:3712
 kthread+0x388/0x470 kernel/kthread.c:436
 ret_from_fork+0x51e/0xb90 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>


Tested on:

commit:         028ef9c9 Linux 7.0
git tree:       upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=104e60ce580000
kernel config:  https://syzkaller.appspot.com/x/.config?x=9800c931612cba58
dashboard link: https://syzkaller.appspot.com/bug?extid=c27dee924f3271489c82
compiler:       Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8
patch:          https://syzkaller.appspot.com/x/patch.diff?x=11de9036580000


^ permalink raw reply	[flat|nested] 5+ messages in thread

end of thread, other threads:[~2026-04-13  8:29 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-10-10 20:49 [syzbot] [xfs?] INFO: task hung in xlog_force_lsn (2) syzbot
2026-04-13  6:19 ` Yuto Ohnuki
2026-04-13  6:38   ` syzbot
2026-04-13  8:06 ` Yuto Ohnuki
2026-04-13  8:29   ` syzbot

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox