From: syzbot <syzbot+5af806780f38a5fe691f@syzkaller.appspotmail.com>
To: akpm@linux-foundation.org, axelrasmussen@google.com,
david@kernel.org, hannes@cmpxchg.org,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
ljs@kernel.org, mhocko@kernel.org, shakeel.butt@linux.dev,
syzkaller-bugs@googlegroups.com, weixugc@google.com,
yuanchu@google.com, zhengqi.arch@bytedance.com
Subject: [syzbot] [mm?] possible deadlock in rhashtable_free_and_destroy
Date: Tue, 21 Apr 2026 08:34:22 -0700
Message-ID: <69e798fe.050a0220.24bfd3.0032.GAE@google.com>
Hello,
syzbot found the following issue on:
HEAD commit: 8541d8f725c6 Merge tag 'mtd/for-7.1' of git://git.kernel.o..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=15380836580000
kernel config: https://syzkaller.appspot.com/x/.config?x=7e54da1916e8d11f
dashboard link: https://syzkaller.appspot.com/bug?extid=5af806780f38a5fe691f
compiler: gcc (Debian 14.2.0-19) 14.2.0, GNU ld (GNU Binutils for Debian) 2.44
Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image (non-bootable): https://storage.googleapis.com/syzbot-assets/d900f083ada3/non_bootable_disk-8541d8f7.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/22dfea2c37c2/vmlinux-8541d8f7.xz
kernel image: https://storage.googleapis.com/syzbot-assets/e2f93ad68fe3/bzImage-8541d8f7.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+5af806780f38a5fe691f@syzkaller.appspotmail.com
======================================================
WARNING: possible circular locking dependency detected
syzkaller #0 Tainted: G L
------------------------------------------------------
kswapd0/108 is trying to acquire lock:
ffff888056f3c4e8 (&ht->mutex){+.+.}-{4:4}, at: rhashtable_free_and_destroy+0x3d/0x9b0 lib/rhashtable.c:1154
but task is already holding lock:
ffffffff8e9b0800 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat+0xb5d/0x1ac0 mm/vmscan.c:7102
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #1 (fs_reclaim){+.+.}-{0:0}:
__fs_reclaim_acquire mm/page_alloc.c:4327 [inline]
fs_reclaim_acquire+0xc4/0x100 mm/page_alloc.c:4341
might_alloc include/linux/sched/mm.h:317 [inline]
slab_pre_alloc_hook mm/slub.c:4520 [inline]
slab_alloc_node mm/slub.c:4875 [inline]
__do_kmalloc_node mm/slub.c:5294 [inline]
__kvmalloc_node_noprof+0xcc/0xa00 mm/slub.c:6828
bucket_table_alloc.isra.0+0x88/0x460 lib/rhashtable.c:186
rhashtable_rehash_alloc+0x68/0x110 lib/rhashtable.c:368
rht_deferred_worker+0x1d9/0x1fd0 lib/rhashtable.c:429
process_one_work+0xa0e/0x1980 kernel/workqueue.c:3302
process_scheduled_works kernel/workqueue.c:3385 [inline]
worker_thread+0x5ef/0xe50 kernel/workqueue.c:3466
kthread+0x370/0x450 kernel/kthread.c:436
ret_from_fork+0x72b/0xd50 arch/x86/kernel/process.c:158
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
-> #0 (&ht->mutex){+.+.}-{4:4}:
check_prev_add kernel/locking/lockdep.c:3165 [inline]
check_prevs_add kernel/locking/lockdep.c:3284 [inline]
validate_chain kernel/locking/lockdep.c:3908 [inline]
__lock_acquire+0x14b8/0x2630 kernel/locking/lockdep.c:5237
lock_acquire kernel/locking/lockdep.c:5868 [inline]
lock_acquire+0x1b1/0x370 kernel/locking/lockdep.c:5825
__mutex_lock_common kernel/locking/mutex.c:632 [inline]
__mutex_lock+0x1a4/0x1b10 kernel/locking/mutex.c:806
rhashtable_free_and_destroy+0x3d/0x9b0 lib/rhashtable.c:1154
shmem_evict_inode+0x1ae/0xc40 mm/shmem.c:1429
evict+0x3c2/0xad0 fs/inode.c:841
iput_final fs/inode.c:1960 [inline]
iput.part.0+0x605/0xf50 fs/inode.c:2009
iput+0x35/0x40 fs/inode.c:1975
dentry_unlink_inode+0x2a1/0x490 fs/dcache.c:467
__dentry_kill+0x1d0/0x600 fs/dcache.c:670
finish_dput+0x76/0x480 fs/dcache.c:879
dput.part.0+0x456/0x570 fs/dcache.c:928
dput+0x1f/0x30 fs/dcache.c:920
ovl_destroy_inode+0x3e/0x190 fs/overlayfs/super.c:217
destroy_inode+0xcb/0x1c0 fs/inode.c:394
evict+0x599/0xad0 fs/inode.c:865
iput_final fs/inode.c:1960 [inline]
iput.part.0+0x605/0xf50 fs/inode.c:2009
iput+0x35/0x40 fs/inode.c:1975
dentry_unlink_inode+0x2a1/0x490 fs/dcache.c:467
__dentry_kill+0x1d0/0x600 fs/dcache.c:670
shrink_kill fs/dcache.c:1147 [inline]
shrink_dentry_list+0x180/0x5e0 fs/dcache.c:1174
prune_dcache_sb+0xea/0x150 fs/dcache.c:1256
super_cache_scan+0x328/0x550 fs/super.c:223
do_shrink_slab+0x416/0x1240 mm/shrinker.c:440
shrink_slab_memcg mm/shrinker.c:557 [inline]
shrink_slab+0xa7d/0x12e0 mm/shrinker.c:635
shrink_one+0x398/0x7f0 mm/vmscan.c:4932
shrink_many mm/vmscan.c:4993 [inline]
lru_gen_shrink_node mm/vmscan.c:5071 [inline]
shrink_node+0x2673/0x3dc0 mm/vmscan.c:6059
kswapd_shrink_node mm/vmscan.c:6913 [inline]
balance_pgdat+0xaaf/0x1ac0 mm/vmscan.c:7089
kswapd+0x557/0xb60 mm/vmscan.c:7362
kthread+0x370/0x450 kernel/kthread.c:436
ret_from_fork+0x72b/0xd50 arch/x86/kernel/process.c:158
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
other info that might help us debug this:
Possible unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  lock(fs_reclaim);
                               lock(&ht->mutex);
                               lock(fs_reclaim);
  lock(&ht->mutex);
*** DEADLOCK ***
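The cycle lockdep flags above can be restated compactly: chain #1 records "held &ht->mutex while entering fs_reclaim" (the rehash worker allocating), and chain #0 records the reverse (kswapd in reclaim evicting a shmem inode). A minimal sketch of the kind of dependency-graph cycle check lockdep performs, using the two lock names from this report as plain strings (the graph, function names, and edge recording here are illustrative, not kernel code):

```python
from collections import defaultdict

# Edge A -> B means "B was acquired while A was held".
edges = defaultdict(set)

def record(held, acquiring):
    edges[held].add(acquiring)

def has_cycle():
    """DFS over the held-while-acquiring graph; a back edge means
    a circular lock dependency, i.e. a possible deadlock."""
    seen, stack = set(), set()
    def dfs(node):
        seen.add(node)
        stack.add(node)
        for nxt in edges.get(node, ()):
            if nxt in stack or (nxt not in seen and dfs(nxt)):
                return True
        stack.discard(node)
        return False
    return any(dfs(n) for n in list(edges) if n not in seen)

# Chain #1: rht_deferred_worker holds ht->mutex, then the bucket-table
# allocation may enter reclaim (might_alloc -> fs_reclaim).
record("ht->mutex", "fs_reclaim")
# Chain #0: kswapd holds fs_reclaim, then shmem_evict_inode takes
# ht->mutex via rhashtable_free_and_destroy.
record("fs_reclaim", "ht->mutex")

print(has_cycle())  # True: the circular dependency reported above
```

The usual fixes break one of the two edges, e.g. performing the rhashtable allocation with reclaim suppressed or deferring the destroy out of the eviction path.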
2 locks held by kswapd0/108:
#0: ffffffff8e9b0800 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat+0xb5d/0x1ac0 mm/vmscan.c:7102
#1: ffff88801347c0d8 (&type->s_umount_key#76){++++}-{4:4}, at: super_trylock_shared fs/super.c:565 [inline]
#1: ffff88801347c0d8 (&type->s_umount_key#76){++++}-{4:4}, at: super_cache_scan+0x98/0x550 fs/super.c:198
stack backtrace:
CPU: 2 UID: 0 PID: 108 Comm: kswapd0 Tainted: G L syzkaller #0 PREEMPT(full)
Tainted: [L]=SOFTLOCKUP
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:94 [inline]
dump_stack_lvl+0x100/0x190 lib/dump_stack.c:120
print_circular_bug.cold+0x178/0x1c7 kernel/locking/lockdep.c:2043
check_noncircular+0x146/0x160 kernel/locking/lockdep.c:2175
check_prev_add kernel/locking/lockdep.c:3165 [inline]
check_prevs_add kernel/locking/lockdep.c:3284 [inline]
validate_chain kernel/locking/lockdep.c:3908 [inline]
__lock_acquire+0x14b8/0x2630 kernel/locking/lockdep.c:5237
lock_acquire kernel/locking/lockdep.c:5868 [inline]
lock_acquire+0x1b1/0x370 kernel/locking/lockdep.c:5825
__mutex_lock_common kernel/locking/mutex.c:632 [inline]
__mutex_lock+0x1a4/0x1b10 kernel/locking/mutex.c:806
rhashtable_free_and_destroy+0x3d/0x9b0 lib/rhashtable.c:1154
shmem_evict_inode+0x1ae/0xc40 mm/shmem.c:1429
evict+0x3c2/0xad0 fs/inode.c:841
iput_final fs/inode.c:1960 [inline]
iput.part.0+0x605/0xf50 fs/inode.c:2009
iput+0x35/0x40 fs/inode.c:1975
dentry_unlink_inode+0x2a1/0x490 fs/dcache.c:467
__dentry_kill+0x1d0/0x600 fs/dcache.c:670
finish_dput+0x76/0x480 fs/dcache.c:879
dput.part.0+0x456/0x570 fs/dcache.c:928
dput+0x1f/0x30 fs/dcache.c:920
ovl_destroy_inode+0x3e/0x190 fs/overlayfs/super.c:217
destroy_inode+0xcb/0x1c0 fs/inode.c:394
evict+0x599/0xad0 fs/inode.c:865
iput_final fs/inode.c:1960 [inline]
iput.part.0+0x605/0xf50 fs/inode.c:2009
iput+0x35/0x40 fs/inode.c:1975
dentry_unlink_inode+0x2a1/0x490 fs/dcache.c:467
__dentry_kill+0x1d0/0x600 fs/dcache.c:670
shrink_kill fs/dcache.c:1147 [inline]
shrink_dentry_list+0x180/0x5e0 fs/dcache.c:1174
prune_dcache_sb+0xea/0x150 fs/dcache.c:1256
super_cache_scan+0x328/0x550 fs/super.c:223
do_shrink_slab+0x416/0x1240 mm/shrinker.c:440
shrink_slab_memcg mm/shrinker.c:557 [inline]
shrink_slab+0xa7d/0x12e0 mm/shrinker.c:635
shrink_one+0x398/0x7f0 mm/vmscan.c:4932
shrink_many mm/vmscan.c:4993 [inline]
lru_gen_shrink_node mm/vmscan.c:5071 [inline]
shrink_node+0x2673/0x3dc0 mm/vmscan.c:6059
kswapd_shrink_node mm/vmscan.c:6913 [inline]
balance_pgdat+0xaaf/0x1ac0 mm/vmscan.c:7089
kswapd+0x557/0xb60 mm/vmscan.c:7362
kthread+0x370/0x450 kernel/kthread.c:436
ret_from_fork+0x72b/0xd50 arch/x86/kernel/process.c:158
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
</TASK>
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@googlegroups.com.
syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title
If you want to overwrite report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)
If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report
If you want to undo deduplication, reply with:
#syz undup
Thread overview: 4+ messages
2026-04-21 15:34 syzbot [this message]
2026-04-21 21:27 ` [syzbot] [mm?] possible deadlock in rhashtable_free_and_destroy Shakeel Butt
2026-04-22  1:21 ` Hillf Danton
2026-04-27  6:59 ` Michal Hocko