From: syzbot <syzbot+33e571025d88efd1312c@syzkaller.appspotmail.com>
To: eadavis@qq.com, linux-kernel@vger.kernel.org,
syzkaller-bugs@googlegroups.com
Subject: Re: [syzbot] [cgroups?] KASAN: slab-use-after-free Read in pressure_write
Date: Thu, 09 Apr 2026 06:50:04 -0700
Message-ID: <69d7ae8c.a00a0220.468cb.001b.GAE@google.com>
In-Reply-To: <tencent_AD26DBB362E367A96DC3A8C454F760576705@qq.com>
Hello,
syzbot has tested the proposed patch but the reproducer is still triggering an issue:
possible deadlock in __kernfs_remove
======================================================
WARNING: possible circular locking dependency detected
syzkaller #0 Not tainted
------------------------------------------------------
syz.1.18/6463 is trying to acquire lock:
ffff888032ccf968 (kn->active#59){++++}-{0:0}, at: __kernfs_remove+0x3cf/0x660 fs/kernfs/dir.c:1513
but task is already holding lock:
ffffffff8de0ad98 (cgroup_mutex){+.+.}-{4:4}, at: cgroup_lock include/linux/cgroup.h:394 [inline]
ffffffff8de0ad98 (cgroup_mutex){+.+.}-{4:4}, at: cgroup_kn_lock_live+0x13c/0x230 kernel/cgroup/cgroup.c:1732
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #3 (cgroup_mutex){+.+.}-{4:4}:
__mutex_lock_common kernel/locking/rtmutex_api.c:533 [inline]
mutex_lock_nested+0x5a/0x1d0 kernel/locking/rtmutex_api.c:552
cgroup_lock include/linux/cgroup.h:394 [inline]
cgroup_lock_and_drain_offline+0x8d/0x4a0 kernel/cgroup/cgroup.c:3265
cgroup_kn_lock_live+0x120/0x230 kernel/cgroup/cgroup.c:1730
cgroup_subtree_control_write+0x4b3/0x10a0 kernel/cgroup/cgroup.c:3580
cgroup_file_write+0x36f/0x790 kernel/cgroup/cgroup.c:4313
kernfs_fop_write_iter+0x3b0/0x540 fs/kernfs/file.c:352
new_sync_write fs/read_write.c:595 [inline]
vfs_write+0x629/0xba0 fs/read_write.c:688
ksys_write+0x156/0x270 fs/read_write.c:740
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
-> #2 (&of->mutex){+.+.}-{4:4}:
__mutex_lock_common kernel/locking/rtmutex_api.c:533 [inline]
mutex_lock_nested+0x5a/0x1d0 kernel/locking/rtmutex_api.c:552
cgroup_file_release+0xdd/0x110 kernel/cgroup/cgroup.c:4283
kernfs_release_file fs/kernfs/file.c:764 [inline]
kernfs_fop_release+0x21d/0x450 fs/kernfs/file.c:779
__fput+0x461/0xa90 fs/file_table.c:469
fput_close_sync+0x11f/0x240 fs/file_table.c:574
__do_sys_close fs/open.c:1509 [inline]
__se_sys_close fs/open.c:1494 [inline]
__x64_sys_close+0x7e/0x110 fs/open.c:1494
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
-> #1 (&kernfs_locks->open_file_mutex[count]){+.+.}-{4:4}:
__mutex_lock_common kernel/locking/rtmutex_api.c:533 [inline]
mutex_lock_nested+0x5a/0x1d0 kernel/locking/rtmutex_api.c:552
kernfs_open_file_mutex_lock fs/kernfs/file.c:56 [inline]
kernfs_get_open_node fs/kernfs/file.c:538 [inline]
kernfs_fop_open+0x6e6/0xcb0 fs/kernfs/file.c:718
do_dentry_open+0x83d/0x13e0 fs/open.c:949
vfs_open+0x3b/0x350 fs/open.c:1081
do_open fs/namei.c:4677 [inline]
path_openat+0x2e43/0x38a0 fs/namei.c:4836
do_file_open+0x23e/0x4a0 fs/namei.c:4865
do_sys_openat2+0x113/0x200 fs/open.c:1366
do_sys_open fs/open.c:1372 [inline]
__do_sys_openat fs/open.c:1388 [inline]
__se_sys_openat fs/open.c:1383 [inline]
__x64_sys_openat+0x138/0x170 fs/open.c:1383
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
-> #0 (kn->active#59){++++}-{0:0}:
check_prev_add kernel/locking/lockdep.c:3165 [inline]
check_prevs_add kernel/locking/lockdep.c:3284 [inline]
validate_chain kernel/locking/lockdep.c:3908 [inline]
__lock_acquire+0x15a5/0x2cf0 kernel/locking/lockdep.c:5237
lock_acquire+0xf0/0x2e0 kernel/locking/lockdep.c:5868
kernfs_drain+0x284/0x600 fs/kernfs/dir.c:511
__kernfs_remove+0x3cf/0x660 fs/kernfs/dir.c:1513
kernfs_remove_by_name_ns+0xaf/0x130 fs/kernfs/dir.c:1722
kernfs_remove_by_name include/linux/kernfs.h:633 [inline]
cgroup_rm_file kernel/cgroup/cgroup.c:1758 [inline]
cgroup_addrm_files+0x684/0xc30 kernel/cgroup/cgroup.c:4485
cgroup_destroy_locked+0x321/0x630 kernel/cgroup/cgroup.c:6199
cgroup_rmdir+0x3e8/0x710 kernel/cgroup/cgroup.c:6313
kernfs_iop_rmdir+0x203/0x350 fs/kernfs/dir.c:1291
vfs_rmdir+0x400/0x6f0 fs/namei.c:5344
filename_rmdir+0x292/0x520 fs/namei.c:5399
__do_sys_rmdir fs/namei.c:5422 [inline]
__se_sys_rmdir+0x2e/0x140 fs/namei.c:5419
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
other info that might help us debug this:
Chain exists of:
kn->active#59 --> &of->mutex --> cgroup_mutex
Possible unsafe locking scenario:
CPU0 CPU1
---- ----
lock(cgroup_mutex);
lock(&of->mutex);
lock(cgroup_mutex);
lock(kn->active#59);
*** DEADLOCK ***
4 locks held by syz.1.18/6463:
#0: ffff88803ad2a480 (sb_writers#9){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:493
#1: ffff88804012b2f8 (&type->i_mutex_dir_key#6/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1073 [inline]
#1: ffff88804012b2f8 (&type->i_mutex_dir_key#6/1){+.+.}-{4:4}, at: __start_dirop fs/namei.c:2929 [inline]
#1: ffff88804012b2f8 (&type->i_mutex_dir_key#6/1){+.+.}-{4:4}, at: start_dirop fs/namei.c:2940 [inline]
#1: ffff88804012b2f8 (&type->i_mutex_dir_key#6/1){+.+.}-{4:4}, at: filename_rmdir+0x1cd/0x520 fs/namei.c:5392
#2: ffff88805a98f4f8 (&type->i_mutex_dir_key#6){++++}-{4:4}, at: inode_lock include/linux/fs.h:1028 [inline]
#2: ffff88805a98f4f8 (&type->i_mutex_dir_key#6){++++}-{4:4}, at: vfs_rmdir+0x109/0x6f0 fs/namei.c:5329
#3: ffffffff8de0ad98 (cgroup_mutex){+.+.}-{4:4}, at: cgroup_lock include/linux/cgroup.h:394 [inline]
#3: ffffffff8de0ad98 (cgroup_mutex){+.+.}-{4:4}, at: cgroup_kn_lock_live+0x13c/0x230 kernel/cgroup/cgroup.c:1732
stack backtrace:
CPU: 0 UID: 0 PID: 6463 Comm: syz.1.18 Not tainted syzkaller #0 PREEMPT_{RT,(full)}
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/18/2026
Call Trace:
<TASK>
dump_stack_lvl+0xe8/0x150 lib/dump_stack.c:120
print_circular_bug+0x2e1/0x300 kernel/locking/lockdep.c:2043
check_noncircular+0x12e/0x150 kernel/locking/lockdep.c:2175
check_prev_add kernel/locking/lockdep.c:3165 [inline]
check_prevs_add kernel/locking/lockdep.c:3284 [inline]
validate_chain kernel/locking/lockdep.c:3908 [inline]
__lock_acquire+0x15a5/0x2cf0 kernel/locking/lockdep.c:5237
lock_acquire+0xf0/0x2e0 kernel/locking/lockdep.c:5868
kernfs_drain+0x284/0x600 fs/kernfs/dir.c:511
__kernfs_remove+0x3cf/0x660 fs/kernfs/dir.c:1513
kernfs_remove_by_name_ns+0xaf/0x130 fs/kernfs/dir.c:1722
kernfs_remove_by_name include/linux/kernfs.h:633 [inline]
cgroup_rm_file kernel/cgroup/cgroup.c:1758 [inline]
cgroup_addrm_files+0x684/0xc30 kernel/cgroup/cgroup.c:4485
cgroup_destroy_locked+0x321/0x630 kernel/cgroup/cgroup.c:6199
cgroup_rmdir+0x3e8/0x710 kernel/cgroup/cgroup.c:6313
kernfs_iop_rmdir+0x203/0x350 fs/kernfs/dir.c:1291
vfs_rmdir+0x400/0x6f0 fs/namei.c:5344
filename_rmdir+0x292/0x520 fs/namei.c:5399
__do_sys_rmdir fs/namei.c:5422 [inline]
__se_sys_rmdir+0x2e/0x140 fs/namei.c:5419
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7ff79991c819
Code: ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 e8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007ff798f76028 EFLAGS: 00000246 ORIG_RAX: 0000000000000054
RAX: ffffffffffffffda RBX: 00007ff799b95fa0 RCX: 00007ff79991c819
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000200000000100
RBP: 00007ff7999b2c91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007ff799b96038 R14: 00007ff799b95fa0 R15: 00007ffce7cda7c8
</TASK>
Tested on:
commit: 7f87a5ea Merge tag 'hid-for-linus-2026040801' of git:/..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=10b05e06580000
kernel config: https://syzkaller.appspot.com/x/.config?x=45cb3c58fd963c27
dashboard link: https://syzkaller.appspot.com/bug?extid=33e571025d88efd1312c
compiler: Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8
patch: https://syzkaller.appspot.com/x/patch.diff?x=155d9316580000
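The "possible unsafe locking scenario" box above is the classic two-lock inversion: the rmdir path holds cgroup_mutex and then waits in kernfs_drain() for the kernfs active reference (kn->active) to drop, while the file open/release path already holds the file-side lock (&of->mutex, reached only after pinning kn->active) and is waiting for cgroup_mutex. A minimal userspace sketch of that shape, with hypothetical pthread mutexes standing in for the kernel locks (it is an illustration of the pattern lockdep flags, not the kernel code itself), looks like:

/*
 * Illustrative AB-BA inversion sketch. lock_a and lock_b are hypothetical
 * stand-ins for cgroup_mutex and kn->active; they are not the kernel locks.
 * Run as-is, the two threads block each other and the program never exits.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER; /* "cgroup_mutex" */
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER; /* "kn->active"   */

static void *rmdir_like_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&lock_a);   /* holds "cgroup_mutex" ...            */
	sleep(1);                      /* widen the race window               */
	pthread_mutex_lock(&lock_b);   /* ... then waits for "kn->active"     */
	pthread_mutex_unlock(&lock_b);
	pthread_mutex_unlock(&lock_a);
	return NULL;
}

static void *release_like_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&lock_b);   /* holds the file-side lock ...        */
	sleep(1);
	pthread_mutex_lock(&lock_a);   /* ... then tries to take "cgroup_mutex" */
	pthread_mutex_unlock(&lock_a);
	pthread_mutex_unlock(&lock_b);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, rmdir_like_path, NULL);
	pthread_create(&t2, NULL, release_like_path, NULL);
	pthread_join(t1, NULL);        /* never returns: both threads are stuck */
	pthread_join(t2, NULL);
	puts("no deadlock this run");
	return 0;
}

Lockdep reports the cycle as soon as both acquisition orders have been observed, even if the timing never actually lines up at runtime; that is why the report appears without the machine visibly hanging.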
Thread overview: 12+ messages
2026-04-09 10:04 [syzbot] [cgroups?] KASAN: slab-use-after-free Read in pressure_write syzbot
2026-04-09 13:06 ` Edward Adam Davis
2026-04-09 13:27 ` syzbot
2026-04-09 13:35 ` Edward Adam Davis
2026-04-09 13:50 ` syzbot [this message]
2026-04-09 14:02 ` Edward Adam Davis
2026-04-09 14:28 ` syzbot
2026-04-10 0:56 ` Edward Adam Davis
2026-04-10 1:19 ` syzbot
2026-04-10 4:00 ` Hillf Danton
2026-04-10 4:17 ` syzbot
2026-04-10 4:00 ` [PATCH] sched/psi: fix race between file release and pressure write Edward Adam Davis