public inbox for linux-ext4@vger.kernel.org
From: syzbot <syzbot+512459401510e2a9a39f@syzkaller.appspotmail.com>
To: brauner@kernel.org, jack@suse.cz, linux-ext4@vger.kernel.org,
	 linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
	 syzkaller-bugs@googlegroups.com, viro@zeniv.linux.org.uk
Subject: [syzbot] [ext4?] INFO: task hung in filename_rmdir
Date: Fri, 27 Feb 2026 05:49:31 -0800	[thread overview]
Message-ID: <69a1a0eb.050a0220.3a55be.0021.GAE@google.com> (raw)

Hello,

syzbot found the following issue on:

HEAD commit:    6de23f81a5e0 Linux 7.0-rc1
git tree:       upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=176ad9e6580000
kernel config:  https://syzkaller.appspot.com/x/.config?x=d91443204e48b7a1
dashboard link: https://syzkaller.appspot.com/bug?extid=512459401510e2a9a39f
compiler:       Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8
syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=14630202580000
C reproducer:   https://syzkaller.appspot.com/x/repro.c?x=11dd4006580000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/a3fcd7d8bed6/disk-6de23f81.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/4956764f8450/vmlinux-6de23f81.xz
kernel image: https://storage.googleapis.com/syzbot-assets/b9f8616ac66b/bzImage-6de23f81.xz
mounted in repro: https://storage.googleapis.com/syzbot-assets/6c600d0e3ce2/mount_0.gz
  fsck result: OK (log: https://syzkaller.appspot.com/x/fsck.log?x=1779655a580000)

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+512459401510e2a9a39f@syzkaller.appspotmail.com

INFO: task syz.0.17:5981 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.0.17        state:D stack:29024 pid:5981  tgid:5976  ppid:5922   task_flags:0x400040 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5295 [inline]
 __schedule+0x14fb/0x52c0 kernel/sched/core.c:6907
 __schedule_loop kernel/sched/core.c:6989 [inline]
 rt_mutex_schedule+0x76/0xf0 kernel/sched/core.c:7285
 rt_mutex_slowlock_block kernel/locking/rtmutex.c:1647 [inline]
 __rt_mutex_slowlock kernel/locking/rtmutex.c:1721 [inline]
 __rt_mutex_slowlock_locked+0x1f8f/0x25c0 kernel/locking/rtmutex.c:1760
 rt_mutex_slowlock+0xbd/0x170 kernel/locking/rtmutex.c:1800
 __rt_mutex_lock kernel/locking/rtmutex.c:1815 [inline]
 rwbase_write_lock+0x14d/0x730 kernel/locking/rwbase_rt.c:244
 inode_lock_nested include/linux/fs.h:1073 [inline]
 __start_dirop fs/namei.c:2923 [inline]
 start_dirop fs/namei.c:2934 [inline]
 filename_rmdir+0x1cd/0x520 fs/namei.c:5386
 __do_sys_rmdir fs/namei.c:5416 [inline]
 __se_sys_rmdir+0x2e/0x140 fs/namei.c:5413
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fd6182ac629
RSP: 002b:00007fd6178e5028 EFLAGS: 00000246 ORIG_RAX: 0000000000000054
RAX: ffffffffffffffda RBX: 00007fd618526090 RCX: 00007fd6182ac629
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 00002000000000c0
RBP: 00007fd618342b39 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fd618526128 R14: 00007fd618526090 R15: 00007fff2f2a73b8
 </TASK>

Showing all locks held in the system:
1 lock held by khungtaskd/37:
 #0: ffffffff8ddcd780 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:312 [inline]
 #0: ffffffff8ddcd780 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:850 [inline]
 #0: ffffffff8ddcd780 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x2e/0x180 kernel/locking/lockdep.c:6775
2 locks held by kworker/u8:3/58:
3 locks held by kworker/1:2/863:
 #0: ffff888019c03d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3250 [inline]
 #0: ffff888019c03d38 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x9ea/0x1830 kernel/workqueue.c:3358
 #1: ffffc90004ca7c40 ((work_completion)(&data->fib_event_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3251 [inline]
 #1: ffffc90004ca7c40 ((work_completion)(&data->fib_event_work)){+.+.}-{0:0}, at: process_scheduled_works+0xa25/0x1830 kernel/workqueue.c:3358
 #2: ffff88805aa3e260 (&data->fib_lock){+.+.}-{4:4}, at: nsim_fib_event_work+0x222/0x3e0 drivers/net/netdevsim/fib.c:1490
2 locks held by getty/5552:
 #0: ffff8880370f50a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc90003e8b2e0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x462/0x13c0 drivers/tty/n_tty.c:2211
6 locks held by syz.0.17/5977:
2 locks held by syz.0.17/5981:
 #0: ffff88803704c480 (sb_writers#4){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:493
 #1: ffff8880445acbf0 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1073 [inline]
 #1: ffff8880445acbf0 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: __start_dirop fs/namei.c:2923 [inline]
 #1: ffff8880445acbf0 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: start_dirop fs/namei.c:2934 [inline]
 #1: ffff8880445acbf0 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: filename_rmdir+0x1cd/0x520 fs/namei.c:5386
5 locks held by syz.1.18/6004:
2 locks held by syz.1.18/6008:
 #0: ffff888027ddc480 (sb_writers#4){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:493
 #1: ffff88805812b430 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1073 [inline]
 #1: ffff88805812b430 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: __start_dirop fs/namei.c:2923 [inline]
 #1: ffff88805812b430 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: start_dirop fs/namei.c:2934 [inline]
 #1: ffff88805812b430 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: filename_rmdir+0x1cd/0x520 fs/namei.c:5386
7 locks held by syz.2.19/6030:
2 locks held by syz.2.19/6034:
 #0: ffff888036f74480 (sb_writers#4){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:493
 #1: ffff8880581c6f90 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1073 [inline]
 #1: ffff8880581c6f90 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: __start_dirop fs/namei.c:2923 [inline]
 #1: ffff8880581c6f90 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: start_dirop fs/namei.c:2934 [inline]
 #1: ffff8880581c6f90 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: filename_rmdir+0x1cd/0x520 fs/namei.c:5386
8 locks held by syz.3.20/6059:
2 locks held by syz.3.20/6063:
 #0: ffff88803c04c480 (sb_writers#4){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:493
 #1: ffff88804005c010 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1073 [inline]
 #1: ffff88804005c010 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: __start_dirop fs/namei.c:2923 [inline]
 #1: ffff88804005c010 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: start_dirop fs/namei.c:2934 [inline]
 #1: ffff88804005c010 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: filename_rmdir+0x1cd/0x520 fs/namei.c:5386
7 locks held by syz.4.21/6098:
2 locks held by syz.4.21/6102:
 #0: ffff88803297c480 (sb_writers#4){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:493
 #1: ffff88805812e3b0 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1073 [inline]
 #1: ffff88805812e3b0 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: __start_dirop fs/namei.c:2923 [inline]
 #1: ffff88805812e3b0 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: start_dirop fs/namei.c:2934 [inline]
 #1: ffff88805812e3b0 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: filename_rmdir+0x1cd/0x520 fs/namei.c:5386
7 locks held by syz.5.22/6133:
2 locks held by syz.5.22/6137:
 #0: ffff88805b0a8480 (sb_writers#4){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:493
 #1: ffff888058311c70 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1073 [inline]
 #1: ffff888058311c70 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: __start_dirop fs/namei.c:2923 [inline]
 #1: ffff888058311c70 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: start_dirop fs/namei.c:2934 [inline]
 #1: ffff888058311c70 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: filename_rmdir+0x1cd/0x520 fs/namei.c:5386
2 locks held by syz.6.23/6167:
 #0: ffff88805e904480 (sb_writers#4){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:493
 #1: ffff888044774bf0 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1073 [inline]
 #1: ffff888044774bf0 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: __start_dirop fs/namei.c:2923 [inline]
 #1: ffff888044774bf0 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: start_dirop fs/namei.c:2934 [inline]
 #1: ffff888044774bf0 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: filename_rmdir+0x1cd/0x520 fs/namei.c:5386
7 locks held by syz.6.23/6172:
2 locks held by syz-executor/6180:

=============================================

NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 37 Comm: khungtaskd Not tainted syzkaller #0 PREEMPT_{RT,(full)} 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2026
Call Trace:
 <TASK>
 dump_stack_lvl+0xe8/0x150 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x274/0x2d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x17a/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:161 [inline]
 __sys_info lib/sys_info.c:157 [inline]
 sys_info+0x135/0x170 lib/sys_info.c:165
 check_hung_uninterruptible_tasks kernel/hung_task.c:346 [inline]
 watchdog+0xfd9/0x1030 kernel/hung_task.c:515
 kthread+0x388/0x470 kernel/kthread.c:467
 ret_from_fork+0x51e/0xb90 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 58 Comm: kworker/u8:3 Not tainted syzkaller #0 PREEMPT_{RT,(full)} 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2026
Workqueue: bat_events batadv_dat_purge
RIP: 0010:__lock_acquire+0x273/0x2cf0 kernel/locking/lockdep.c:-1
Code: e8 f2 93 39 03 4d 89 f1 48 8b 74 24 18 44 8b 74 24 20 48 83 bc 24 20 01 00 00 00 0f 85 ce fe ff ff 44 89 74 24 20 44 89 3c 24 <8b> bc 24 38 01 00 00 4c 8b 84 24 28 01 00 00 4c 8b 74 24 08 4d 8d
RSP: 0018:ffffc9000124f800 EFLAGS: 00000046
RAX: 0000000000000113 RBX: 0000000000000000 RCX: ffffffff92f693f0
RDX: 0000000000000005 RSI: 000000000000000b RDI: ffffffff8ddcd780
RBP: 0000000000000005 R08: 0000000000000000 R09: ffffffff8ddcd780
R10: dffffc0000000000 R11: fffffbfff1ed44b7 R12: 0000000000000000
R13: 0000000000000002 R14: 0000000000000000 R15: 0000000000000000
FS:  0000000000000000(0000) GS:ffff888126443000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fe67c399068 CR3: 0000000039862000 CR4: 00000000003526f0
Call Trace:
 <TASK>
 lock_acquire+0xf0/0x2e0 kernel/locking/lockdep.c:5868
 rcu_lock_acquire include/linux/rcupdate.h:312 [inline]
 rcu_read_lock include/linux/rcupdate.h:850 [inline]
 __rt_spin_lock kernel/locking/spinlock_rt.c:50 [inline]
 rt_spin_lock+0x1fc/0x400 kernel/locking/spinlock_rt.c:57
 spin_lock_bh include/linux/spinlock_rt.h:90 [inline]
 __batadv_dat_purge+0x131/0x400 net/batman-adv/distributed-arp-table.c:173
 batadv_dat_purge+0x20/0x70 net/batman-adv/distributed-arp-table.c:204
 process_one_work kernel/workqueue.c:3275 [inline]
 process_scheduled_works+0xb02/0x1830 kernel/workqueue.c:3358
 worker_thread+0xa50/0xfc0 kernel/workqueue.c:3439
 kthread+0x388/0x470 kernel/kthread.c:467
 ret_from_fork+0x51e/0xb90 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup


Thread overview: 4+ messages
2026-02-27 13:49 syzbot [this message]
2026-02-28  8:29 ` [syzbot] [ext4?] INFO: task hung in filename_rmdir Qing Wang
2026-02-28  9:02   ` syzbot
2026-02-28  9:44   ` Qing Wang
