* Re: [syzbot] [cgroups?] KASAN: slab-use-after-free Read in pressure_write
2026-04-09 10:04 [syzbot] [cgroups?] KASAN: slab-use-after-free Read in pressure_write syzbot
@ 2026-04-09 13:06 ` Edward Adam Davis
2026-04-09 13:27 ` syzbot
2026-04-09 13:35 ` Edward Adam Davis
` (4 subsequent siblings)
5 siblings, 1 reply; 12+ messages in thread
From: Edward Adam Davis @ 2026-04-09 13:06 UTC (permalink / raw)
To: syzbot+33e571025d88efd1312c; +Cc: linux-kernel, syzkaller-bugs
#syz test
diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index 4ca3cb993da2..c42e4d9930fa 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -4281,7 +4281,9 @@ static void cgroup_file_release(struct kernfs_open_file *of)
cft->release(of);
put_cgroup_ns(ctx->ns);
kfree(ctx);
+ mutex_lock(&of->mutex);
of->priv = NULL;
+ mutex_unlock(&of->mutex);
}
static ssize_t cgroup_file_write(struct kernfs_open_file *of, char *buf,
* Re: [syzbot] [cgroups?] KASAN: slab-use-after-free Read in pressure_write
2026-04-09 13:06 ` Edward Adam Davis
@ 2026-04-09 13:27 ` syzbot
0 siblings, 0 replies; 12+ messages in thread
From: syzbot @ 2026-04-09 13:27 UTC (permalink / raw)
To: eadavis, linux-kernel, syzkaller-bugs
Hello,
syzbot has tested the proposed patch but the reproducer is still triggering an issue:
possible deadlock in __kernfs_remove
======================================================
WARNING: possible circular locking dependency detected
syzkaller #0 Not tainted
------------------------------------------------------
syz.2.19/6647 is trying to acquire lock:
ffff8880573dce18 (kn->active#59){++++}-{0:0}, at: __kernfs_remove+0x3cf/0x660 fs/kernfs/dir.c:1513
but task is already holding lock:
ffffffff8de0ad98 (cgroup_mutex){+.+.}-{4:4}, at: cgroup_lock include/linux/cgroup.h:394 [inline]
ffffffff8de0ad98 (cgroup_mutex){+.+.}-{4:4}, at: cgroup_kn_lock_live+0x13c/0x230 kernel/cgroup/cgroup.c:1732
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #3 (cgroup_mutex){+.+.}-{4:4}:
__mutex_lock_common kernel/locking/rtmutex_api.c:533 [inline]
mutex_lock_nested+0x5a/0x1d0 kernel/locking/rtmutex_api.c:552
cgroup_lock include/linux/cgroup.h:394 [inline]
cgroup_lock_and_drain_offline+0x8d/0x4a0 kernel/cgroup/cgroup.c:3265
cgroup_kn_lock_live+0x120/0x230 kernel/cgroup/cgroup.c:1730
cgroup_subtree_control_write+0x4b3/0x10a0 kernel/cgroup/cgroup.c:3580
cgroup_file_write+0x36f/0x790 kernel/cgroup/cgroup.c:4313
kernfs_fop_write_iter+0x3b0/0x540 fs/kernfs/file.c:352
new_sync_write fs/read_write.c:595 [inline]
vfs_write+0x629/0xba0 fs/read_write.c:688
ksys_write+0x156/0x270 fs/read_write.c:740
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
-> #2 (&of->mutex){+.+.}-{4:4}:
__mutex_lock_common kernel/locking/rtmutex_api.c:533 [inline]
mutex_lock_nested+0x5a/0x1d0 kernel/locking/rtmutex_api.c:552
cgroup_file_release+0xe5/0x110 kernel/cgroup/cgroup.c:4284
kernfs_release_file fs/kernfs/file.c:764 [inline]
kernfs_fop_release+0x21d/0x450 fs/kernfs/file.c:779
__fput+0x461/0xa90 fs/file_table.c:469
fput_close_sync+0x11f/0x240 fs/file_table.c:574
__do_sys_close fs/open.c:1509 [inline]
__se_sys_close fs/open.c:1494 [inline]
__x64_sys_close+0x7e/0x110 fs/open.c:1494
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
-> #1 (&kernfs_locks->open_file_mutex[count]){+.+.}-{4:4}:
__mutex_lock_common kernel/locking/rtmutex_api.c:533 [inline]
mutex_lock_nested+0x5a/0x1d0 kernel/locking/rtmutex_api.c:552
kernfs_open_file_mutex_lock fs/kernfs/file.c:56 [inline]
kernfs_get_open_node fs/kernfs/file.c:538 [inline]
kernfs_fop_open+0x6e6/0xcb0 fs/kernfs/file.c:718
do_dentry_open+0x83d/0x13e0 fs/open.c:949
vfs_open+0x3b/0x350 fs/open.c:1081
do_open fs/namei.c:4677 [inline]
path_openat+0x2e43/0x38a0 fs/namei.c:4836
do_file_open+0x23e/0x4a0 fs/namei.c:4865
do_sys_openat2+0x113/0x200 fs/open.c:1366
do_sys_open fs/open.c:1372 [inline]
__do_sys_openat fs/open.c:1388 [inline]
__se_sys_openat fs/open.c:1383 [inline]
__x64_sys_openat+0x138/0x170 fs/open.c:1383
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
-> #0 (kn->active#59){++++}-{0:0}:
check_prev_add kernel/locking/lockdep.c:3165 [inline]
check_prevs_add kernel/locking/lockdep.c:3284 [inline]
validate_chain kernel/locking/lockdep.c:3908 [inline]
__lock_acquire+0x15a5/0x2cf0 kernel/locking/lockdep.c:5237
lock_acquire+0xf0/0x2e0 kernel/locking/lockdep.c:5868
kernfs_drain+0x284/0x600 fs/kernfs/dir.c:511
__kernfs_remove+0x3cf/0x660 fs/kernfs/dir.c:1513
kernfs_remove_by_name_ns+0xaf/0x130 fs/kernfs/dir.c:1722
kernfs_remove_by_name include/linux/kernfs.h:633 [inline]
cgroup_rm_file kernel/cgroup/cgroup.c:1758 [inline]
cgroup_addrm_files+0x684/0xc30 kernel/cgroup/cgroup.c:4485
cgroup_destroy_locked+0x321/0x630 kernel/cgroup/cgroup.c:6199
cgroup_rmdir+0x3e8/0x710 kernel/cgroup/cgroup.c:6313
kernfs_iop_rmdir+0x203/0x350 fs/kernfs/dir.c:1291
vfs_rmdir+0x400/0x6f0 fs/namei.c:5344
filename_rmdir+0x292/0x520 fs/namei.c:5399
__do_sys_rmdir fs/namei.c:5422 [inline]
__se_sys_rmdir+0x2e/0x140 fs/namei.c:5419
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
other info that might help us debug this:
Chain exists of:
kn->active#59 --> &of->mutex --> cgroup_mutex
Possible unsafe locking scenario:
CPU0 CPU1
---- ----
lock(cgroup_mutex);
lock(&of->mutex);
lock(cgroup_mutex);
lock(kn->active#59);
*** DEADLOCK ***
4 locks held by syz.2.19/6647:
#0: ffff88803b86e480 (sb_writers#9){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:493
#1: ffff8880441e1778 (&type->i_mutex_dir_key#6/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1073 [inline]
#1: ffff8880441e1778 (&type->i_mutex_dir_key#6/1){+.+.}-{4:4}, at: __start_dirop fs/namei.c:2929 [inline]
#1: ffff8880441e1778 (&type->i_mutex_dir_key#6/1){+.+.}-{4:4}, at: start_dirop fs/namei.c:2940 [inline]
#1: ffff8880441e1778 (&type->i_mutex_dir_key#6/1){+.+.}-{4:4}, at: filename_rmdir+0x1cd/0x520 fs/namei.c:5392
#2: ffff88804b42c8f8 (&type->i_mutex_dir_key#6){++++}-{4:4}, at: inode_lock include/linux/fs.h:1028 [inline]
#2: ffff88804b42c8f8 (&type->i_mutex_dir_key#6){++++}-{4:4}, at: vfs_rmdir+0x109/0x6f0 fs/namei.c:5329
#3: ffffffff8de0ad98 (cgroup_mutex){+.+.}-{4:4}, at: cgroup_lock include/linux/cgroup.h:394 [inline]
#3: ffffffff8de0ad98 (cgroup_mutex){+.+.}-{4:4}, at: cgroup_kn_lock_live+0x13c/0x230 kernel/cgroup/cgroup.c:1732
stack backtrace:
CPU: 0 UID: 0 PID: 6647 Comm: syz.2.19 Not tainted syzkaller #0 PREEMPT_{RT,(full)}
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/18/2026
Call Trace:
<TASK>
dump_stack_lvl+0xe8/0x150 lib/dump_stack.c:120
print_circular_bug+0x2e1/0x300 kernel/locking/lockdep.c:2043
check_noncircular+0x12e/0x150 kernel/locking/lockdep.c:2175
check_prev_add kernel/locking/lockdep.c:3165 [inline]
check_prevs_add kernel/locking/lockdep.c:3284 [inline]
validate_chain kernel/locking/lockdep.c:3908 [inline]
__lock_acquire+0x15a5/0x2cf0 kernel/locking/lockdep.c:5237
lock_acquire+0xf0/0x2e0 kernel/locking/lockdep.c:5868
kernfs_drain+0x284/0x600 fs/kernfs/dir.c:511
__kernfs_remove+0x3cf/0x660 fs/kernfs/dir.c:1513
kernfs_remove_by_name_ns+0xaf/0x130 fs/kernfs/dir.c:1722
kernfs_remove_by_name include/linux/kernfs.h:633 [inline]
cgroup_rm_file kernel/cgroup/cgroup.c:1758 [inline]
cgroup_addrm_files+0x684/0xc30 kernel/cgroup/cgroup.c:4485
cgroup_destroy_locked+0x321/0x630 kernel/cgroup/cgroup.c:6199
cgroup_rmdir+0x3e8/0x710 kernel/cgroup/cgroup.c:6313
kernfs_iop_rmdir+0x203/0x350 fs/kernfs/dir.c:1291
vfs_rmdir+0x400/0x6f0 fs/namei.c:5344
filename_rmdir+0x292/0x520 fs/namei.c:5399
__do_sys_rmdir fs/namei.c:5422 [inline]
__se_sys_rmdir+0x2e/0x140 fs/namei.c:5419
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f38b3fcc819
Code: ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 e8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f38b362e028 EFLAGS: 00000246 ORIG_RAX: 0000000000000054
RAX: ffffffffffffffda RBX: 00007f38b4245fa0 RCX: 00007f38b3fcc819
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000200000000100
RBP: 00007f38b4062c91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f38b4246038 R14: 00007f38b4245fa0 R15: 00007ffef200b028
</TASK>
Tested on:
commit: 7f87a5ea Merge tag 'hid-for-linus-2026040801' of git:/..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=16bd8cd2580000
kernel config: https://syzkaller.appspot.com/x/.config?x=45cb3c58fd963c27
dashboard link: https://syzkaller.appspot.com/bug?extid=33e571025d88efd1312c
compiler: Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8
patch: https://syzkaller.appspot.com/x/patch.diff?x=12ddbbd6580000
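[Editorial sketch, not part of the thread: the splat above lists four lock classes whose dependencies close into a cycle. Modelling the report's chain as a directed graph, where an edge A -> B means "B was acquired while A was held", recovers that cycle mechanically. The edge list is read directly from the #3..#0 stacks; everything else here is illustration.]

```python
# Lock dependencies taken from the lockdep report above.
# Edge A -> B means "B was acquired while A was held".
deps = {
    # open path: kernfs_fop_open takes open_file_mutex under kn->active
    "kn->active#59": {"open_file_mutex"},
    # release path: cgroup_file_release takes of->mutex under open_file_mutex
    "open_file_mutex": {"of->mutex"},
    # write path: cgroup_file_write takes cgroup_mutex under of->mutex
    "of->mutex": {"cgroup_mutex"},
    # rmdir path: cgroup_destroy_locked drains kn->active under cgroup_mutex
    "cgroup_mutex": {"kn->active#59"},
}

def find_cycle(graph):
    """Return one lock-ordering cycle as a list of nodes, or None."""
    state = {}  # node -> "visiting" | "done"

    def dfs(node, path):
        state[node] = "visiting"
        path.append(node)
        for nxt in graph.get(node, ()):
            if state.get(nxt) == "visiting":
                # Back edge found: the cycle is the path from nxt onward.
                return path[path.index(nxt):]
            if nxt not in state:
                found = dfs(nxt, path)
                if found:
                    return found
        path.pop()
        state[node] = "done"
        return None

    for node in graph:
        if node not in state:
            cycle = dfs(node, [])
            if cycle:
                return cycle
    return None

cycle = find_cycle(deps)
print(cycle)
```

Breaking any one of the four edges would dissolve the cycle; the test patches in this thread each change which edge exists rather than removing one outright, which is why lockdep keeps reporting a (shorter or reshuffled) circular chain.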
* Re: [syzbot] [cgroups?] KASAN: slab-use-after-free Read in pressure_write
2026-04-09 10:04 [syzbot] [cgroups?] KASAN: slab-use-after-free Read in pressure_write syzbot
2026-04-09 13:06 ` Edward Adam Davis
@ 2026-04-09 13:35 ` Edward Adam Davis
2026-04-09 13:50 ` syzbot
2026-04-09 14:02 ` Edward Adam Davis
` (3 subsequent siblings)
5 siblings, 1 reply; 12+ messages in thread
From: Edward Adam Davis @ 2026-04-09 13:35 UTC (permalink / raw)
To: syzbot+33e571025d88efd1312c; +Cc: linux-kernel, syzkaller-bugs
#syz test
diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index 4ca3cb993da2..5a34bd70ef7b 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -4280,8 +4280,10 @@ static void cgroup_file_release(struct kernfs_open_file *of)
if (cft->release)
cft->release(of);
put_cgroup_ns(ctx->ns);
+ mutex_lock(&of->mutex);
kfree(ctx);
of->priv = NULL;
+ mutex_unlock(&of->mutex);
}
static ssize_t cgroup_file_write(struct kernfs_open_file *of, char *buf,
* Re: [syzbot] [cgroups?] KASAN: slab-use-after-free Read in pressure_write
2026-04-09 13:35 ` Edward Adam Davis
@ 2026-04-09 13:50 ` syzbot
0 siblings, 0 replies; 12+ messages in thread
From: syzbot @ 2026-04-09 13:50 UTC (permalink / raw)
To: eadavis, linux-kernel, syzkaller-bugs
Hello,
syzbot has tested the proposed patch but the reproducer is still triggering an issue:
possible deadlock in __kernfs_remove
======================================================
WARNING: possible circular locking dependency detected
syzkaller #0 Not tainted
------------------------------------------------------
syz.1.18/6463 is trying to acquire lock:
ffff888032ccf968 (kn->active#59){++++}-{0:0}, at: __kernfs_remove+0x3cf/0x660 fs/kernfs/dir.c:1513
but task is already holding lock:
ffffffff8de0ad98 (cgroup_mutex){+.+.}-{4:4}, at: cgroup_lock include/linux/cgroup.h:394 [inline]
ffffffff8de0ad98 (cgroup_mutex){+.+.}-{4:4}, at: cgroup_kn_lock_live+0x13c/0x230 kernel/cgroup/cgroup.c:1732
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #3 (cgroup_mutex){+.+.}-{4:4}:
__mutex_lock_common kernel/locking/rtmutex_api.c:533 [inline]
mutex_lock_nested+0x5a/0x1d0 kernel/locking/rtmutex_api.c:552
cgroup_lock include/linux/cgroup.h:394 [inline]
cgroup_lock_and_drain_offline+0x8d/0x4a0 kernel/cgroup/cgroup.c:3265
cgroup_kn_lock_live+0x120/0x230 kernel/cgroup/cgroup.c:1730
cgroup_subtree_control_write+0x4b3/0x10a0 kernel/cgroup/cgroup.c:3580
cgroup_file_write+0x36f/0x790 kernel/cgroup/cgroup.c:4313
kernfs_fop_write_iter+0x3b0/0x540 fs/kernfs/file.c:352
new_sync_write fs/read_write.c:595 [inline]
vfs_write+0x629/0xba0 fs/read_write.c:688
ksys_write+0x156/0x270 fs/read_write.c:740
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
-> #2 (&of->mutex){+.+.}-{4:4}:
__mutex_lock_common kernel/locking/rtmutex_api.c:533 [inline]
mutex_lock_nested+0x5a/0x1d0 kernel/locking/rtmutex_api.c:552
cgroup_file_release+0xdd/0x110 kernel/cgroup/cgroup.c:4283
kernfs_release_file fs/kernfs/file.c:764 [inline]
kernfs_fop_release+0x21d/0x450 fs/kernfs/file.c:779
__fput+0x461/0xa90 fs/file_table.c:469
fput_close_sync+0x11f/0x240 fs/file_table.c:574
__do_sys_close fs/open.c:1509 [inline]
__se_sys_close fs/open.c:1494 [inline]
__x64_sys_close+0x7e/0x110 fs/open.c:1494
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
-> #1 (&kernfs_locks->open_file_mutex[count]){+.+.}-{4:4}:
__mutex_lock_common kernel/locking/rtmutex_api.c:533 [inline]
mutex_lock_nested+0x5a/0x1d0 kernel/locking/rtmutex_api.c:552
kernfs_open_file_mutex_lock fs/kernfs/file.c:56 [inline]
kernfs_get_open_node fs/kernfs/file.c:538 [inline]
kernfs_fop_open+0x6e6/0xcb0 fs/kernfs/file.c:718
do_dentry_open+0x83d/0x13e0 fs/open.c:949
vfs_open+0x3b/0x350 fs/open.c:1081
do_open fs/namei.c:4677 [inline]
path_openat+0x2e43/0x38a0 fs/namei.c:4836
do_file_open+0x23e/0x4a0 fs/namei.c:4865
do_sys_openat2+0x113/0x200 fs/open.c:1366
do_sys_open fs/open.c:1372 [inline]
__do_sys_openat fs/open.c:1388 [inline]
__se_sys_openat fs/open.c:1383 [inline]
__x64_sys_openat+0x138/0x170 fs/open.c:1383
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
-> #0 (kn->active#59){++++}-{0:0}:
check_prev_add kernel/locking/lockdep.c:3165 [inline]
check_prevs_add kernel/locking/lockdep.c:3284 [inline]
validate_chain kernel/locking/lockdep.c:3908 [inline]
__lock_acquire+0x15a5/0x2cf0 kernel/locking/lockdep.c:5237
lock_acquire+0xf0/0x2e0 kernel/locking/lockdep.c:5868
kernfs_drain+0x284/0x600 fs/kernfs/dir.c:511
__kernfs_remove+0x3cf/0x660 fs/kernfs/dir.c:1513
kernfs_remove_by_name_ns+0xaf/0x130 fs/kernfs/dir.c:1722
kernfs_remove_by_name include/linux/kernfs.h:633 [inline]
cgroup_rm_file kernel/cgroup/cgroup.c:1758 [inline]
cgroup_addrm_files+0x684/0xc30 kernel/cgroup/cgroup.c:4485
cgroup_destroy_locked+0x321/0x630 kernel/cgroup/cgroup.c:6199
cgroup_rmdir+0x3e8/0x710 kernel/cgroup/cgroup.c:6313
kernfs_iop_rmdir+0x203/0x350 fs/kernfs/dir.c:1291
vfs_rmdir+0x400/0x6f0 fs/namei.c:5344
filename_rmdir+0x292/0x520 fs/namei.c:5399
__do_sys_rmdir fs/namei.c:5422 [inline]
__se_sys_rmdir+0x2e/0x140 fs/namei.c:5419
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
other info that might help us debug this:
Chain exists of:
kn->active#59 --> &of->mutex --> cgroup_mutex
Possible unsafe locking scenario:
CPU0 CPU1
---- ----
lock(cgroup_mutex);
lock(&of->mutex);
lock(cgroup_mutex);
lock(kn->active#59);
*** DEADLOCK ***
4 locks held by syz.1.18/6463:
#0: ffff88803ad2a480 (sb_writers#9){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:493
#1: ffff88804012b2f8 (&type->i_mutex_dir_key#6/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1073 [inline]
#1: ffff88804012b2f8 (&type->i_mutex_dir_key#6/1){+.+.}-{4:4}, at: __start_dirop fs/namei.c:2929 [inline]
#1: ffff88804012b2f8 (&type->i_mutex_dir_key#6/1){+.+.}-{4:4}, at: start_dirop fs/namei.c:2940 [inline]
#1: ffff88804012b2f8 (&type->i_mutex_dir_key#6/1){+.+.}-{4:4}, at: filename_rmdir+0x1cd/0x520 fs/namei.c:5392
#2: ffff88805a98f4f8 (&type->i_mutex_dir_key#6){++++}-{4:4}, at: inode_lock include/linux/fs.h:1028 [inline]
#2: ffff88805a98f4f8 (&type->i_mutex_dir_key#6){++++}-{4:4}, at: vfs_rmdir+0x109/0x6f0 fs/namei.c:5329
#3: ffffffff8de0ad98 (cgroup_mutex){+.+.}-{4:4}, at: cgroup_lock include/linux/cgroup.h:394 [inline]
#3: ffffffff8de0ad98 (cgroup_mutex){+.+.}-{4:4}, at: cgroup_kn_lock_live+0x13c/0x230 kernel/cgroup/cgroup.c:1732
stack backtrace:
CPU: 0 UID: 0 PID: 6463 Comm: syz.1.18 Not tainted syzkaller #0 PREEMPT_{RT,(full)}
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/18/2026
Call Trace:
<TASK>
dump_stack_lvl+0xe8/0x150 lib/dump_stack.c:120
print_circular_bug+0x2e1/0x300 kernel/locking/lockdep.c:2043
check_noncircular+0x12e/0x150 kernel/locking/lockdep.c:2175
check_prev_add kernel/locking/lockdep.c:3165 [inline]
check_prevs_add kernel/locking/lockdep.c:3284 [inline]
validate_chain kernel/locking/lockdep.c:3908 [inline]
__lock_acquire+0x15a5/0x2cf0 kernel/locking/lockdep.c:5237
lock_acquire+0xf0/0x2e0 kernel/locking/lockdep.c:5868
kernfs_drain+0x284/0x600 fs/kernfs/dir.c:511
__kernfs_remove+0x3cf/0x660 fs/kernfs/dir.c:1513
kernfs_remove_by_name_ns+0xaf/0x130 fs/kernfs/dir.c:1722
kernfs_remove_by_name include/linux/kernfs.h:633 [inline]
cgroup_rm_file kernel/cgroup/cgroup.c:1758 [inline]
cgroup_addrm_files+0x684/0xc30 kernel/cgroup/cgroup.c:4485
cgroup_destroy_locked+0x321/0x630 kernel/cgroup/cgroup.c:6199
cgroup_rmdir+0x3e8/0x710 kernel/cgroup/cgroup.c:6313
kernfs_iop_rmdir+0x203/0x350 fs/kernfs/dir.c:1291
vfs_rmdir+0x400/0x6f0 fs/namei.c:5344
filename_rmdir+0x292/0x520 fs/namei.c:5399
__do_sys_rmdir fs/namei.c:5422 [inline]
__se_sys_rmdir+0x2e/0x140 fs/namei.c:5419
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7ff79991c819
Code: ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 e8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007ff798f76028 EFLAGS: 00000246 ORIG_RAX: 0000000000000054
RAX: ffffffffffffffda RBX: 00007ff799b95fa0 RCX: 00007ff79991c819
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000200000000100
RBP: 00007ff7999b2c91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007ff799b96038 R14: 00007ff799b95fa0 R15: 00007ffce7cda7c8
</TASK>
Tested on:
commit: 7f87a5ea Merge tag 'hid-for-linus-2026040801' of git:/..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=10b05e06580000
kernel config: https://syzkaller.appspot.com/x/.config?x=45cb3c58fd963c27
dashboard link: https://syzkaller.appspot.com/bug?extid=33e571025d88efd1312c
compiler: Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8
patch: https://syzkaller.appspot.com/x/patch.diff?x=155d9316580000
* Re: [syzbot] [cgroups?] KASAN: slab-use-after-free Read in pressure_write
2026-04-09 10:04 [syzbot] [cgroups?] KASAN: slab-use-after-free Read in pressure_write syzbot
2026-04-09 13:06 ` Edward Adam Davis
2026-04-09 13:35 ` Edward Adam Davis
@ 2026-04-09 14:02 ` Edward Adam Davis
2026-04-09 14:28 ` syzbot
2026-04-10 0:56 ` Edward Adam Davis
` (2 subsequent siblings)
5 siblings, 1 reply; 12+ messages in thread
From: Edward Adam Davis @ 2026-04-09 14:02 UTC (permalink / raw)
To: syzbot+33e571025d88efd1312c; +Cc: linux-kernel, syzkaller-bugs
#syz test
diff --git a/fs/kernfs/file.c b/fs/kernfs/file.c
index e32406d62c0d..f76e2b9452d0 100644
--- a/fs/kernfs/file.c
+++ b/fs/kernfs/file.c
@@ -348,8 +348,12 @@ static ssize_t kernfs_fop_write_iter(struct kiocb *iocb, struct iov_iter *iter)
}
ops = kernfs_ops(of->kn);
- if (ops->write)
+ if (ops->write) {
+ struct mutex *mutex;
+ mutex = kernfs_open_file_mutex_lock(of->kn);
len = ops->write(of, buf, len, iocb->ki_pos);
+ mutex_unlock(mutex);
+ }
else
len = -EINVAL;
* Re: [syzbot] [cgroups?] KASAN: slab-use-after-free Read in pressure_write
2026-04-09 14:02 ` Edward Adam Davis
@ 2026-04-09 14:28 ` syzbot
0 siblings, 0 replies; 12+ messages in thread
From: syzbot @ 2026-04-09 14:28 UTC (permalink / raw)
To: eadavis, linux-kernel, syzkaller-bugs
Hello,
syzbot has tested the proposed patch but the reproducer is still triggering an issue:
possible deadlock in __kernfs_remove
======================================================
WARNING: possible circular locking dependency detected
syzkaller #0 Not tainted
------------------------------------------------------
syz.1.18/6652 is trying to acquire lock:
ffff88803634de18 (kn->active#59){++++}-{0:0}, at: __kernfs_remove+0x3cf/0x660 fs/kernfs/dir.c:1513
but task is already holding lock:
ffffffff8de0ad98 (cgroup_mutex){+.+.}-{4:4}, at: cgroup_lock include/linux/cgroup.h:394 [inline]
ffffffff8de0ad98 (cgroup_mutex){+.+.}-{4:4}, at: cgroup_kn_lock_live+0x13c/0x230 kernel/cgroup/cgroup.c:1732
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #2 (cgroup_mutex){+.+.}-{4:4}:
__mutex_lock_common kernel/locking/rtmutex_api.c:533 [inline]
mutex_lock_nested+0x5a/0x1d0 kernel/locking/rtmutex_api.c:552
cgroup_lock include/linux/cgroup.h:394 [inline]
cgroup_lock_and_drain_offline+0x8d/0x4a0 kernel/cgroup/cgroup.c:3265
cgroup_kn_lock_live+0x120/0x230 kernel/cgroup/cgroup.c:1730
cgroup_subtree_control_write+0x4b3/0x10a0 kernel/cgroup/cgroup.c:3580
cgroup_file_write+0x36f/0x790 kernel/cgroup/cgroup.c:4311
kernfs_fop_write_iter+0x43e/0x5e0 fs/kernfs/file.c:354
new_sync_write fs/read_write.c:595 [inline]
vfs_write+0x629/0xba0 fs/read_write.c:688
ksys_write+0x156/0x270 fs/read_write.c:740
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
-> #1 (&kernfs_locks->open_file_mutex[count]){+.+.}-{4:4}:
__mutex_lock_common kernel/locking/rtmutex_api.c:533 [inline]
mutex_lock_nested+0x5a/0x1d0 kernel/locking/rtmutex_api.c:552
kernfs_open_file_mutex_lock fs/kernfs/file.c:56 [inline]
kernfs_get_open_node fs/kernfs/file.c:542 [inline]
kernfs_fop_open+0x6e6/0xcb0 fs/kernfs/file.c:722
do_dentry_open+0x83d/0x13e0 fs/open.c:949
vfs_open+0x3b/0x350 fs/open.c:1081
do_open fs/namei.c:4677 [inline]
path_openat+0x2e43/0x38a0 fs/namei.c:4836
do_file_open+0x23e/0x4a0 fs/namei.c:4865
do_sys_openat2+0x113/0x200 fs/open.c:1366
do_sys_open fs/open.c:1372 [inline]
__do_sys_openat fs/open.c:1388 [inline]
__se_sys_openat fs/open.c:1383 [inline]
__x64_sys_openat+0x138/0x170 fs/open.c:1383
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
-> #0 (kn->active#59){++++}-{0:0}:
check_prev_add kernel/locking/lockdep.c:3165 [inline]
check_prevs_add kernel/locking/lockdep.c:3284 [inline]
validate_chain kernel/locking/lockdep.c:3908 [inline]
__lock_acquire+0x15a5/0x2cf0 kernel/locking/lockdep.c:5237
lock_acquire+0xf0/0x2e0 kernel/locking/lockdep.c:5868
kernfs_drain+0x284/0x600 fs/kernfs/dir.c:511
__kernfs_remove+0x3cf/0x660 fs/kernfs/dir.c:1513
kernfs_remove_by_name_ns+0xaf/0x130 fs/kernfs/dir.c:1722
kernfs_remove_by_name include/linux/kernfs.h:633 [inline]
cgroup_rm_file kernel/cgroup/cgroup.c:1758 [inline]
cgroup_addrm_files+0x684/0xc30 kernel/cgroup/cgroup.c:4483
cgroup_destroy_locked+0x321/0x630 kernel/cgroup/cgroup.c:6197
cgroup_rmdir+0x3e8/0x710 kernel/cgroup/cgroup.c:6311
kernfs_iop_rmdir+0x203/0x350 fs/kernfs/dir.c:1291
vfs_rmdir+0x400/0x6f0 fs/namei.c:5344
filename_rmdir+0x292/0x520 fs/namei.c:5399
__do_sys_rmdir fs/namei.c:5422 [inline]
__se_sys_rmdir+0x2e/0x140 fs/namei.c:5419
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
other info that might help us debug this:
Chain exists of:
kn->active#59 --> &kernfs_locks->open_file_mutex[count] --> cgroup_mutex
Possible unsafe locking scenario:
CPU0 CPU1
---- ----
lock(cgroup_mutex);
lock(&kernfs_locks->open_file_mutex[count]);
lock(cgroup_mutex);
lock(kn->active#59);
*** DEADLOCK ***
4 locks held by syz.1.18/6652:
#0: ffff888024604480 (sb_writers#9){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:493
#1: ffff888044184378 (&type->i_mutex_dir_key#6/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1073 [inline]
#1: ffff888044184378 (&type->i_mutex_dir_key#6/1){+.+.}-{4:4}, at: __start_dirop fs/namei.c:2929 [inline]
#1: ffff888044184378 (&type->i_mutex_dir_key#6/1){+.+.}-{4:4}, at: start_dirop fs/namei.c:2940 [inline]
#1: ffff888044184378 (&type->i_mutex_dir_key#6/1){+.+.}-{4:4}, at: filename_rmdir+0x1cd/0x520 fs/namei.c:5392
#2: ffff888054c08c78 (&type->i_mutex_dir_key#6){++++}-{4:4}, at: inode_lock include/linux/fs.h:1028 [inline]
#2: ffff888054c08c78 (&type->i_mutex_dir_key#6){++++}-{4:4}, at: vfs_rmdir+0x109/0x6f0 fs/namei.c:5329
#3: ffffffff8de0ad98 (cgroup_mutex){+.+.}-{4:4}, at: cgroup_lock include/linux/cgroup.h:394 [inline]
#3: ffffffff8de0ad98 (cgroup_mutex){+.+.}-{4:4}, at: cgroup_kn_lock_live+0x13c/0x230 kernel/cgroup/cgroup.c:1732
stack backtrace:
CPU: 0 UID: 0 PID: 6652 Comm: syz.1.18 Not tainted syzkaller #0 PREEMPT_{RT,(full)}
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/18/2026
Call Trace:
<TASK>
dump_stack_lvl+0xe8/0x150 lib/dump_stack.c:120
print_circular_bug+0x2e1/0x300 kernel/locking/lockdep.c:2043
check_noncircular+0x12e/0x150 kernel/locking/lockdep.c:2175
check_prev_add kernel/locking/lockdep.c:3165 [inline]
check_prevs_add kernel/locking/lockdep.c:3284 [inline]
validate_chain kernel/locking/lockdep.c:3908 [inline]
__lock_acquire+0x15a5/0x2cf0 kernel/locking/lockdep.c:5237
lock_acquire+0xf0/0x2e0 kernel/locking/lockdep.c:5868
kernfs_drain+0x284/0x600 fs/kernfs/dir.c:511
__kernfs_remove+0x3cf/0x660 fs/kernfs/dir.c:1513
kernfs_remove_by_name_ns+0xaf/0x130 fs/kernfs/dir.c:1722
kernfs_remove_by_name include/linux/kernfs.h:633 [inline]
cgroup_rm_file kernel/cgroup/cgroup.c:1758 [inline]
cgroup_addrm_files+0x684/0xc30 kernel/cgroup/cgroup.c:4483
cgroup_destroy_locked+0x321/0x630 kernel/cgroup/cgroup.c:6197
cgroup_rmdir+0x3e8/0x710 kernel/cgroup/cgroup.c:6311
kernfs_iop_rmdir+0x203/0x350 fs/kernfs/dir.c:1291
vfs_rmdir+0x400/0x6f0 fs/namei.c:5344
filename_rmdir+0x292/0x520 fs/namei.c:5399
__do_sys_rmdir fs/namei.c:5422 [inline]
__se_sys_rmdir+0x2e/0x140 fs/namei.c:5419
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f4ba117c819
Code: ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 e8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f4ba07d6028 EFLAGS: 00000246 ORIG_RAX: 0000000000000054
RAX: ffffffffffffffda RBX: 00007f4ba13f5fa0 RCX: 00007f4ba117c819
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000200000000100
RBP: 00007f4ba1212c91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f4ba13f6038 R14: 00007f4ba13f5fa0 R15: 00007fff23e2bb58
</TASK>
Tested on:
commit: 7f87a5ea Merge tag 'hid-for-linus-2026040801' of git:/..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=14138cd2580000
kernel config: https://syzkaller.appspot.com/x/.config?x=45cb3c58fd963c27
dashboard link: https://syzkaller.appspot.com/bug?extid=33e571025d88efd1312c
compiler: Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8
patch: https://syzkaller.appspot.com/x/patch.diff?x=17beaeba580000
* Re: [syzbot] [cgroups?] KASAN: slab-use-after-free Read in pressure_write
2026-04-09 10:04 [syzbot] [cgroups?] KASAN: slab-use-after-free Read in pressure_write syzbot
` (2 preceding siblings ...)
2026-04-09 14:02 ` Edward Adam Davis
@ 2026-04-10 0:56 ` Edward Adam Davis
2026-04-10 1:19 ` syzbot
2026-04-10 4:00 ` Hillf Danton
2026-04-10 4:00 ` [PATCH] sched/psi: fix race between file release and pressure write Edward Adam Davis
5 siblings, 1 reply; 12+ messages in thread
From: Edward Adam Davis @ 2026-04-10 0:56 UTC (permalink / raw)
To: syzbot+33e571025d88efd1312c; +Cc: linux-kernel, syzkaller-bugs
#syz test
diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index 4ca3cb993da2..c0cfe91c2991 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -4005,11 +4005,11 @@ static ssize_t pressure_write(struct kernfs_open_file *of, char *buf,
return -ENODEV;
cgroup_get(cgrp);
- cgroup_kn_unlock(of->kn);
/* Allow only one trigger per file descriptor */
if (ctx->psi.trigger) {
cgroup_put(cgrp);
+ cgroup_kn_unlock(of->kn);
return -EBUSY;
}
@@ -4017,12 +4017,14 @@ static ssize_t pressure_write(struct kernfs_open_file *of, char *buf,
new = psi_trigger_create(psi, buf, res, of->file, of);
if (IS_ERR(new)) {
cgroup_put(cgrp);
+ cgroup_kn_unlock(of->kn);
return PTR_ERR(new);
}
smp_store_release(&ctx->psi.trigger, new);
cgroup_put(cgrp);
+ cgroup_kn_unlock(of->kn);
return nbytes;
}
* Re: [syzbot] [cgroups?] KASAN: slab-use-after-free Read in pressure_write
2026-04-09 10:04 [syzbot] [cgroups?] KASAN: slab-use-after-free Read in pressure_write syzbot
` (3 preceding siblings ...)
2026-04-10 0:56 ` Edward Adam Davis
@ 2026-04-10 4:00 ` Hillf Danton
2026-04-10 4:17 ` syzbot
2026-04-10 4:00 ` [PATCH] sched/psi: fix race between file release and pressure write Edward Adam Davis
5 siblings, 1 reply; 12+ messages in thread
From: Hillf Danton @ 2026-04-10 4:00 UTC (permalink / raw)
To: syzbot; +Cc: linux-kernel, syzkaller-bugs
> Date: Thu, 09 Apr 2026 03:04:32 -0700 [thread overview]
> Hello,
>
> syzbot found the following issue on:
>
> HEAD commit: 591cd656a1bf Linux 7.0-rc7
> git tree: upstream
> console output: https://syzkaller.appspot.com/x/log.txt?x=114a36ba580000
> kernel config: https://syzkaller.appspot.com/x/.config?x=45cb3c58fd963c27
> dashboard link: https://syzkaller.appspot.com/bug?extid=33e571025d88efd1312c
> compiler: Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8
> syz repro: https://syzkaller.appspot.com/x/repro.syz?x=16cb33da580000
> C reproducer: https://syzkaller.appspot.com/x/repro.c?x=12648bd6580000
#syz test
--- x/fs/kernfs/file.c
+++ y/fs/kernfs/file.c
@@ -756,6 +756,7 @@ static void kernfs_release_file(struct k
lockdep_assert_held(kernfs_open_file_mutex_ptr(kn));
if (!of->released) {
+ mutex_lock(&of->mutex);
/*
* A file is never detached without being released and we
* need to be able to release files which are deactivated
@@ -764,6 +765,7 @@ static void kernfs_release_file(struct k
kn->attr.ops->release(of);
of->released = true;
of_on(of)->nr_to_release--;
+ mutex_unlock(&of->mutex);
}
}
--
* Re: [syzbot] [cgroups?] KASAN: slab-use-after-free Read in pressure_write
2026-04-10 4:00 ` Hillf Danton
@ 2026-04-10 4:17 ` syzbot
0 siblings, 0 replies; 12+ messages in thread
From: syzbot @ 2026-04-10 4:17 UTC (permalink / raw)
To: hdanton, linux-kernel, syzkaller-bugs
Hello,
syzbot has tested the proposed patch but the reproducer is still triggering an issue:
possible deadlock in __kernfs_remove
======================================================
WARNING: possible circular locking dependency detected
syzkaller #0 Not tainted
------------------------------------------------------
syz.0.17/6575 is trying to acquire lock:
ffff88803630b1e8 (kn->active#59){++++}-{0:0}, at: __kernfs_remove+0x3cf/0x660 fs/kernfs/dir.c:1513
but task is already holding lock:
ffffffff8de0ad98 (cgroup_mutex){+.+.}-{4:4}, at: cgroup_lock include/linux/cgroup.h:394 [inline]
ffffffff8de0ad98 (cgroup_mutex){+.+.}-{4:4}, at: cgroup_kn_lock_live+0x13c/0x230 kernel/cgroup/cgroup.c:1732
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #3 (cgroup_mutex){+.+.}-{4:4}:
__mutex_lock_common kernel/locking/rtmutex_api.c:533 [inline]
mutex_lock_nested+0x5a/0x1d0 kernel/locking/rtmutex_api.c:552
cgroup_lock include/linux/cgroup.h:394 [inline]
cgroup_lock_and_drain_offline+0x8d/0x4a0 kernel/cgroup/cgroup.c:3265
cgroup_kn_lock_live+0x120/0x230 kernel/cgroup/cgroup.c:1730
cgroup_subtree_control_write+0x4b3/0x10a0 kernel/cgroup/cgroup.c:3580
cgroup_file_write+0x36f/0x790 kernel/cgroup/cgroup.c:4311
kernfs_fop_write_iter+0x3b0/0x540 fs/kernfs/file.c:352
new_sync_write fs/read_write.c:595 [inline]
vfs_write+0x629/0xba0 fs/read_write.c:688
ksys_write+0x156/0x270 fs/read_write.c:740
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
-> #2 (&of->mutex){+.+.}-{4:4}:
__mutex_lock_common kernel/locking/rtmutex_api.c:533 [inline]
mutex_lock_nested+0x5a/0x1d0 kernel/locking/rtmutex_api.c:552
kernfs_release_file+0xfc/0x300 fs/kernfs/file.c:759
kernfs_fop_release+0x111/0x190 fs/kernfs/file.c:781
__fput+0x461/0xa90 fs/file_table.c:469
fput_close_sync+0x11f/0x240 fs/file_table.c:574
__do_sys_close fs/open.c:1509 [inline]
__se_sys_close fs/open.c:1494 [inline]
__x64_sys_close+0x7e/0x110 fs/open.c:1494
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
-> #1 (&kernfs_locks->open_file_mutex[count]){+.+.}-{4:4}:
__mutex_lock_common kernel/locking/rtmutex_api.c:533 [inline]
mutex_lock_nested+0x5a/0x1d0 kernel/locking/rtmutex_api.c:552
kernfs_open_file_mutex_lock fs/kernfs/file.c:56 [inline]
kernfs_get_open_node fs/kernfs/file.c:538 [inline]
kernfs_fop_open+0x6e6/0xcb0 fs/kernfs/file.c:718
do_dentry_open+0x83d/0x13e0 fs/open.c:949
vfs_open+0x3b/0x350 fs/open.c:1081
do_open fs/namei.c:4677 [inline]
path_openat+0x2e43/0x38a0 fs/namei.c:4836
do_file_open+0x23e/0x4a0 fs/namei.c:4865
do_sys_openat2+0x113/0x200 fs/open.c:1366
do_sys_open fs/open.c:1372 [inline]
__do_sys_openat fs/open.c:1388 [inline]
__se_sys_openat fs/open.c:1383 [inline]
__x64_sys_openat+0x138/0x170 fs/open.c:1383
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
-> #0 (kn->active#59){++++}-{0:0}:
check_prev_add kernel/locking/lockdep.c:3165 [inline]
check_prevs_add kernel/locking/lockdep.c:3284 [inline]
validate_chain kernel/locking/lockdep.c:3908 [inline]
__lock_acquire+0x15a5/0x2cf0 kernel/locking/lockdep.c:5237
lock_acquire+0xf0/0x2e0 kernel/locking/lockdep.c:5868
kernfs_drain+0x284/0x600 fs/kernfs/dir.c:511
__kernfs_remove+0x3cf/0x660 fs/kernfs/dir.c:1513
kernfs_remove_by_name_ns+0xaf/0x130 fs/kernfs/dir.c:1722
kernfs_remove_by_name include/linux/kernfs.h:633 [inline]
cgroup_rm_file kernel/cgroup/cgroup.c:1758 [inline]
cgroup_addrm_files+0x684/0xc30 kernel/cgroup/cgroup.c:4483
cgroup_destroy_locked+0x321/0x630 kernel/cgroup/cgroup.c:6197
cgroup_rmdir+0x3e8/0x710 kernel/cgroup/cgroup.c:6311
kernfs_iop_rmdir+0x203/0x350 fs/kernfs/dir.c:1291
vfs_rmdir+0x400/0x6f0 fs/namei.c:5344
filename_rmdir+0x292/0x520 fs/namei.c:5399
__do_sys_rmdir fs/namei.c:5422 [inline]
__se_sys_rmdir+0x2e/0x140 fs/namei.c:5419
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
other info that might help us debug this:
Chain exists of:
kn->active#59 --> &of->mutex --> cgroup_mutex
Possible unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  lock(cgroup_mutex);
                               lock(&of->mutex);
                               lock(cgroup_mutex);
  lock(kn->active#59);
*** DEADLOCK ***
4 locks held by syz.0.17/6575:
#0: ffff88803b560480 (sb_writers#9){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:493
#1: ffff88804413d3f8 (&type->i_mutex_dir_key#6/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1073 [inline]
#1: ffff88804413d3f8 (&type->i_mutex_dir_key#6/1){+.+.}-{4:4}, at: __start_dirop fs/namei.c:2929 [inline]
#1: ffff88804413d3f8 (&type->i_mutex_dir_key#6/1){+.+.}-{4:4}, at: start_dirop fs/namei.c:2940 [inline]
#1: ffff88804413d3f8 (&type->i_mutex_dir_key#6/1){+.+.}-{4:4}, at: filename_rmdir+0x1cd/0x520 fs/namei.c:5392
#2: ffff88805f899cf8 (&type->i_mutex_dir_key#6){++++}-{4:4}, at: inode_lock include/linux/fs.h:1028 [inline]
#2: ffff88805f899cf8 (&type->i_mutex_dir_key#6){++++}-{4:4}, at: vfs_rmdir+0x109/0x6f0 fs/namei.c:5329
#3: ffffffff8de0ad98 (cgroup_mutex){+.+.}-{4:4}, at: cgroup_lock include/linux/cgroup.h:394 [inline]
#3: ffffffff8de0ad98 (cgroup_mutex){+.+.}-{4:4}, at: cgroup_kn_lock_live+0x13c/0x230 kernel/cgroup/cgroup.c:1732
stack backtrace:
CPU: 0 UID: 0 PID: 6575 Comm: syz.0.17 Not tainted syzkaller #0 PREEMPT_{RT,(full)}
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/18/2026
Call Trace:
<TASK>
dump_stack_lvl+0xe8/0x150 lib/dump_stack.c:120
print_circular_bug+0x2e1/0x300 kernel/locking/lockdep.c:2043
check_noncircular+0x12e/0x150 kernel/locking/lockdep.c:2175
check_prev_add kernel/locking/lockdep.c:3165 [inline]
check_prevs_add kernel/locking/lockdep.c:3284 [inline]
validate_chain kernel/locking/lockdep.c:3908 [inline]
__lock_acquire+0x15a5/0x2cf0 kernel/locking/lockdep.c:5237
lock_acquire+0xf0/0x2e0 kernel/locking/lockdep.c:5868
kernfs_drain+0x284/0x600 fs/kernfs/dir.c:511
__kernfs_remove+0x3cf/0x660 fs/kernfs/dir.c:1513
kernfs_remove_by_name_ns+0xaf/0x130 fs/kernfs/dir.c:1722
kernfs_remove_by_name include/linux/kernfs.h:633 [inline]
cgroup_rm_file kernel/cgroup/cgroup.c:1758 [inline]
cgroup_addrm_files+0x684/0xc30 kernel/cgroup/cgroup.c:4483
cgroup_destroy_locked+0x321/0x630 kernel/cgroup/cgroup.c:6197
cgroup_rmdir+0x3e8/0x710 kernel/cgroup/cgroup.c:6311
kernfs_iop_rmdir+0x203/0x350 fs/kernfs/dir.c:1291
vfs_rmdir+0x400/0x6f0 fs/namei.c:5344
filename_rmdir+0x292/0x520 fs/namei.c:5399
__do_sys_rmdir fs/namei.c:5422 [inline]
__se_sys_rmdir+0x2e/0x140 fs/namei.c:5419
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f5f367fc819
Code: ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 e8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f5f35e5e028 EFLAGS: 00000246 ORIG_RAX: 0000000000000054
RAX: ffffffffffffffda RBX: 00007f5f36a75fa0 RCX: 00007f5f367fc819
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000200000000100
RBP: 00007f5f36892c91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f5f36a76038 R14: 00007f5f36a75fa0 R15: 00007ffe927de3a8
</TASK>
Tested on:
commit: 9a9c8ce3 Merge tag 'kbuild-fixes-7.0-4' of git://git.k..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=10f69cd2580000
kernel config: https://syzkaller.appspot.com/x/.config?x=45cb3c58fd963c27
dashboard link: https://syzkaller.appspot.com/bug?extid=33e571025d88efd1312c
compiler: Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8
patch: https://syzkaller.appspot.com/x/patch.diff?x=11047e06580000
^ permalink raw reply [flat|nested] 12+ messages in thread
* [PATCH] sched/psi: fix race between file release and pressure write
2026-04-09 10:04 [syzbot] [cgroups?] KASAN: slab-use-after-free Read in pressure_write syzbot
` (4 preceding siblings ...)
2026-04-10 4:00 ` Hillf Danton
@ 2026-04-10 4:00 ` Edward Adam Davis
5 siblings, 0 replies; 12+ messages in thread
From: Edward Adam Davis @ 2026-04-10 4:00 UTC (permalink / raw)
To: syzbot+33e571025d88efd1312c
Cc: cgroups, hannes, linux-kernel, mkoutny, syzkaller-bugs, tj
A race exists between pressure_write() and cgroup_file_release() on the
priv member of struct kernfs_open_file: the release path frees the
cgroup_file_ctx while pressure_write() may still be dereferencing it,
triggering the use-after-free reported in [1].
Fix this by extending the cgroup_mutex critical section in
pressure_write() to cover every access to the per-open-file context, so
the file cannot be released while a trigger is being installed.
[1]
BUG: KASAN: slab-use-after-free in pressure_write+0xa4/0x210 kernel/cgroup/cgroup.c:4011
Call Trace:
pressure_write+0xa4/0x210 kernel/cgroup/cgroup.c:4011
cgroup_file_write+0x36f/0x790 kernel/cgroup/cgroup.c:4311
kernfs_fop_write_iter+0x3b0/0x540 fs/kernfs/file.c:352
Allocated by task 9352:
cgroup_file_open+0x90/0x3a0 kernel/cgroup/cgroup.c:4256
kernfs_fop_open+0x9eb/0xcb0 fs/kernfs/file.c:724
do_dentry_open+0x83d/0x13e0 fs/open.c:949
Freed by task 9353:
cgroup_file_release+0xd6/0x100 kernel/cgroup/cgroup.c:4283
kernfs_release_file fs/kernfs/file.c:764 [inline]
kernfs_drain_open_files+0x392/0x720 fs/kernfs/file.c:834
kernfs_drain+0x470/0x600 fs/kernfs/dir.c:525
Fixes: 0e94682b73bf ("psi: introduce psi monitor")
Reported-by: syzbot+33e571025d88efd1312c@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=33e571025d88efd1312c
Tested-by: syzbot+33e571025d88efd1312c@syzkaller.appspotmail.com
Signed-off-by: Edward Adam Davis <eadavis@qq.com>
---
kernel/cgroup/cgroup.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index 4ca3cb993da2..c0cfe91c2991 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -4005,11 +4005,11 @@ static ssize_t pressure_write(struct kernfs_open_file *of, char *buf,
return -ENODEV;
cgroup_get(cgrp);
- cgroup_kn_unlock(of->kn);
/* Allow only one trigger per file descriptor */
if (ctx->psi.trigger) {
cgroup_put(cgrp);
+ cgroup_kn_unlock(of->kn);
return -EBUSY;
}
@@ -4017,12 +4017,14 @@ static ssize_t pressure_write(struct kernfs_open_file *of, char *buf,
new = psi_trigger_create(psi, buf, res, of->file, of);
if (IS_ERR(new)) {
cgroup_put(cgrp);
+ cgroup_kn_unlock(of->kn);
return PTR_ERR(new);
}
smp_store_release(&ctx->psi.trigger, new);
cgroup_put(cgrp);
+ cgroup_kn_unlock(of->kn);
return nbytes;
}
--
2.43.0
^ permalink raw reply related [flat|nested] 12+ messages in thread