From: syzbot <syzbot+33e571025d88efd1312c@syzkaller.appspotmail.com>
To: hdanton@sina.com, linux-kernel@vger.kernel.org,
syzkaller-bugs@googlegroups.com
Subject: Re: [syzbot] [cgroups?] KASAN: slab-use-after-free Read in pressure_write
Date: Fri, 10 Apr 2026 20:26:02 -0700
Message-ID: <69d9bf4a.050a0220.3030df.003c.GAE@google.com>
In-Reply-To: <20260411031045.1713-1-hdanton@sina.com>

Hello,

syzbot has tested the proposed patch but the reproducer is still triggering an issue:

possible deadlock in kernfs_drain_open_files
======================================================
WARNING: possible circular locking dependency detected
syzkaller #0 Not tainted
------------------------------------------------------
syz.3.20/6665 is trying to acquire lock:
ffffffff8df818f8 (drain_mutex){+.+.}-{4:4}, at: kernfs_drain_open_files+0x32/0x730 fs/kernfs/file.c:823
but task is already holding lock:
ffffffff8de0ad98 (cgroup_mutex){+.+.}-{4:4}, at: cgroup_lock include/linux/cgroup.h:394 [inline]
ffffffff8de0ad98 (cgroup_mutex){+.+.}-{4:4}, at: cgroup_kn_lock_live+0x13c/0x230 kernel/cgroup/cgroup.c:1732
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #2 (cgroup_mutex){+.+.}-{4:4}:
__mutex_lock_common kernel/locking/rtmutex_api.c:533 [inline]
mutex_lock_nested+0x5a/0x1d0 kernel/locking/rtmutex_api.c:552
cgroup_lock include/linux/cgroup.h:394 [inline]
cgroup_lock_and_drain_offline+0x8d/0x4a0 kernel/cgroup/cgroup.c:3265
cgroup_kn_lock_live+0x120/0x230 kernel/cgroup/cgroup.c:1730
cgroup_subtree_control_write+0x4b3/0x10a0 kernel/cgroup/cgroup.c:3580
cgroup_file_write+0x36f/0x790 kernel/cgroup/cgroup.c:4311
kernfs_fop_write_iter+0x3e4/0x580 fs/kernfs/file.c:357
new_sync_write fs/read_write.c:595 [inline]
vfs_write+0x629/0xba0 fs/read_write.c:688
ksys_write+0x156/0x270 fs/read_write.c:740
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
-> #1 (&of->mutex){+.+.}-{4:4}:
__mutex_lock_common kernel/locking/rtmutex_api.c:533 [inline]
mutex_lock_nested+0x5a/0x1d0 kernel/locking/rtmutex_api.c:552
kernfs_fop_write_iter+0x213/0x580 fs/kernfs/file.c:348
new_sync_write fs/read_write.c:595 [inline]
vfs_write+0x629/0xba0 fs/read_write.c:688
ksys_write+0x156/0x270 fs/read_write.c:740
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
-> #0 (drain_mutex){+.+.}-{4:4}:
check_prev_add kernel/locking/lockdep.c:3165 [inline]
check_prevs_add kernel/locking/lockdep.c:3284 [inline]
validate_chain kernel/locking/lockdep.c:3908 [inline]
__lock_acquire+0x15a5/0x2cf0 kernel/locking/lockdep.c:5237
lock_acquire+0xf0/0x2e0 kernel/locking/lockdep.c:5868
__mutex_lock_common kernel/locking/rtmutex_api.c:533 [inline]
mutex_lock_nested+0x5a/0x1d0 kernel/locking/rtmutex_api.c:552
kernfs_drain_open_files+0x32/0x730 fs/kernfs/file.c:823
kernfs_drain+0x470/0x600 fs/kernfs/dir.c:542
__kernfs_remove+0x3cf/0x660 fs/kernfs/dir.c:1533
kernfs_remove_by_name_ns+0xaf/0x130 fs/kernfs/dir.c:1742
kernfs_remove_by_name include/linux/kernfs.h:641 [inline]
cgroup_rm_file kernel/cgroup/cgroup.c:1758 [inline]
cgroup_addrm_files+0x684/0xc30 kernel/cgroup/cgroup.c:4483
cgroup_destroy_locked+0x321/0x630 kernel/cgroup/cgroup.c:6197
cgroup_rmdir+0x3e8/0x710 kernel/cgroup/cgroup.c:6311
kernfs_iop_rmdir+0x203/0x350 fs/kernfs/dir.c:1311
vfs_rmdir+0x400/0x6f0 fs/namei.c:5344
filename_rmdir+0x292/0x520 fs/namei.c:5399
__do_sys_rmdir fs/namei.c:5422 [inline]
__se_sys_rmdir+0x2e/0x140 fs/namei.c:5419
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
other info that might help us debug this:
Chain exists of:
drain_mutex --> &of->mutex --> cgroup_mutex
Possible unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  lock(cgroup_mutex);
                               lock(&of->mutex);
                               lock(cgroup_mutex);
  lock(drain_mutex);
*** DEADLOCK ***
4 locks held by syz.3.20/6665:
#0: ffff88803bdc6480 (sb_writers#9){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:493
#1: ffff888043d2ef78 (&type->i_mutex_dir_key#6/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1073 [inline]
#1: ffff888043d2ef78 (&type->i_mutex_dir_key#6/1){+.+.}-{4:4}, at: __start_dirop fs/namei.c:2929 [inline]
#1: ffff888043d2ef78 (&type->i_mutex_dir_key#6/1){+.+.}-{4:4}, at: start_dirop fs/namei.c:2940 [inline]
#1: ffff888043d2ef78 (&type->i_mutex_dir_key#6/1){+.+.}-{4:4}, at: filename_rmdir+0x1cd/0x520 fs/namei.c:5392
#2: ffff888043f66478 (&type->i_mutex_dir_key#6){++++}-{4:4}, at: inode_lock include/linux/fs.h:1028 [inline]
#2: ffff888043f66478 (&type->i_mutex_dir_key#6){++++}-{4:4}, at: vfs_rmdir+0x109/0x6f0 fs/namei.c:5329
#3: ffffffff8de0ad98 (cgroup_mutex){+.+.}-{4:4}, at: cgroup_lock include/linux/cgroup.h:394 [inline]
#3: ffffffff8de0ad98 (cgroup_mutex){+.+.}-{4:4}, at: cgroup_kn_lock_live+0x13c/0x230 kernel/cgroup/cgroup.c:1732
stack backtrace:
CPU: 0 UID: 0 PID: 6665 Comm: syz.3.20 Not tainted syzkaller #0 PREEMPT_{RT,(full)}
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/18/2026
Call Trace:
<TASK>
dump_stack_lvl+0xe8/0x150 lib/dump_stack.c:120
print_circular_bug+0x2e1/0x300 kernel/locking/lockdep.c:2043
check_noncircular+0x12e/0x150 kernel/locking/lockdep.c:2175
check_prev_add kernel/locking/lockdep.c:3165 [inline]
check_prevs_add kernel/locking/lockdep.c:3284 [inline]
validate_chain kernel/locking/lockdep.c:3908 [inline]
__lock_acquire+0x15a5/0x2cf0 kernel/locking/lockdep.c:5237
lock_acquire+0xf0/0x2e0 kernel/locking/lockdep.c:5868
__mutex_lock_common kernel/locking/rtmutex_api.c:533 [inline]
mutex_lock_nested+0x5a/0x1d0 kernel/locking/rtmutex_api.c:552
kernfs_drain_open_files+0x32/0x730 fs/kernfs/file.c:823
kernfs_drain+0x470/0x600 fs/kernfs/dir.c:542
__kernfs_remove+0x3cf/0x660 fs/kernfs/dir.c:1533
kernfs_remove_by_name_ns+0xaf/0x130 fs/kernfs/dir.c:1742
kernfs_remove_by_name include/linux/kernfs.h:641 [inline]
cgroup_rm_file kernel/cgroup/cgroup.c:1758 [inline]
cgroup_addrm_files+0x684/0xc30 kernel/cgroup/cgroup.c:4483
cgroup_destroy_locked+0x321/0x630 kernel/cgroup/cgroup.c:6197
cgroup_rmdir+0x3e8/0x710 kernel/cgroup/cgroup.c:6311
kernfs_iop_rmdir+0x203/0x350 fs/kernfs/dir.c:1311
vfs_rmdir+0x400/0x6f0 fs/namei.c:5344
filename_rmdir+0x292/0x520 fs/namei.c:5399
__do_sys_rmdir fs/namei.c:5422 [inline]
__se_sys_rmdir+0x2e/0x140 fs/namei.c:5419
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f178b75c819
Code: ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 e8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f178adb6028 EFLAGS: 00000246 ORIG_RAX: 0000000000000054
RAX: ffffffffffffffda RBX: 00007f178b9d5fa0 RCX: 00007f178b75c819
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000200000000100
RBP: 00007f178b7f2c91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f178b9d6038 R14: 00007f178b9d5fa0 R15: 00007fff5fda5b58
</TASK>

Tested on:
commit: 7c6c4ed8 Merge tag 'vfs-7.0-rc8.fixes' of git://git.ke..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=141c7f4e580000
kernel config: https://syzkaller.appspot.com/x/.config?x=45cb3c58fd963c27
dashboard link: https://syzkaller.appspot.com/bug?extid=33e571025d88efd1312c
compiler: Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8
patch: https://syzkaller.appspot.com/x/patch.diff?x=169a1106580000
Thread overview: 28+ messages
2026-04-09 10:04 [syzbot] [cgroups?] KASAN: slab-use-after-free Read in pressure_write syzbot
2026-04-09 13:06 ` Edward Adam Davis
2026-04-09 13:27 ` syzbot
2026-04-09 13:35 ` Edward Adam Davis
2026-04-09 13:50 ` syzbot
2026-04-09 14:02 ` Edward Adam Davis
2026-04-09 14:28 ` syzbot
2026-04-10 0:56 ` Edward Adam Davis
2026-04-10 1:19 ` syzbot
2026-04-10 4:00 ` Hillf Danton
2026-04-10 4:17 ` syzbot
2026-04-10 4:00 ` [PATCH] sched/psi: fix race between file release and pressure write Edward Adam Davis
2026-04-10 9:00 ` Chen Ridong
2026-04-10 9:45 ` Edward Adam Davis
2026-04-10 12:39 ` [PATCH v2] " Edward Adam Davis
2026-04-10 19:14 ` Tejun Heo
2026-04-11 4:25 ` Edward Adam Davis
2026-04-11 7:39 ` Tejun Heo
2026-04-10 10:00 ` [syzbot] [cgroups?] KASAN: slab-use-after-free Read in pressure_write Edward Adam Davis
2026-04-10 10:25 ` syzbot
2026-04-10 10:48 ` Edward Adam Davis
2026-04-10 11:10 ` syzbot
2026-04-10 12:02 ` Hillf Danton
2026-04-10 12:27 ` syzbot
2026-04-10 12:13 ` Edward Adam Davis
2026-04-10 12:40 ` syzbot
2026-04-11 3:10 ` Hillf Danton
2026-04-11 3:26 ` syzbot [this message]