* [syzbot] [squashfs?] possible deadlock in fsnotify_destroy_mark
@ 2024-08-22 8:32 syzbot
2024-09-27 9:12 ` [PATCH] inotify: Fix " Lizhi Xu
0 siblings, 1 reply; 9+ messages in thread
From: syzbot @ 2024-08-22 8:32 UTC (permalink / raw)
To: amir73il, jack, linux-fsdevel, linux-kernel, phillip,
squashfs-devel, syzkaller-bugs
Hello,
syzbot found the following issue on:
HEAD commit: b311c1b497e5 Merge tag '6.11-rc4-server-fixes' of git://gi..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=10351047980000
kernel config: https://syzkaller.appspot.com/x/.config?x=df2f0ed7e30a639d
dashboard link: https://syzkaller.appspot.com/bug?extid=c679f13773f295d2da53
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=1247776b980000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=121bb693980000
Downloadable assets:
disk image (non-bootable): https://storage.googleapis.com/syzbot-assets/7bc7510fe41f/non_bootable_disk-b311c1b4.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/1c99fa48192f/vmlinux-b311c1b4.xz
kernel image: https://storage.googleapis.com/syzbot-assets/16d5710a012a/bzImage-b311c1b4.xz
mounted in repro: https://storage.googleapis.com/syzbot-assets/d59520377407/mount_0.gz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+c679f13773f295d2da53@syzkaller.appspotmail.com
======================================================
WARNING: possible circular locking dependency detected
6.11.0-rc4-syzkaller-00019-gb311c1b497e5 #0 Not tainted
------------------------------------------------------
kswapd0/78 is trying to acquire lock:
ffff88801b8d8930 (&group->mark_mutex){+.+.}-{3:3}, at: fsnotify_group_lock include/linux/fsnotify_backend.h:270 [inline]
ffff88801b8d8930 (&group->mark_mutex){+.+.}-{3:3}, at: fsnotify_destroy_mark+0x38/0x3c0 fs/notify/mark.c:578
but task is already holding lock:
ffffffff8ea2fd60 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat mm/vmscan.c:6841 [inline]
ffffffff8ea2fd60 (fs_reclaim){+.+.}-{0:0}, at: kswapd+0xbb4/0x35a0 mm/vmscan.c:7223
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #1 (fs_reclaim){+.+.}-{0:0}:
lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5759
__fs_reclaim_acquire mm/page_alloc.c:3818 [inline]
fs_reclaim_acquire+0x88/0x140 mm/page_alloc.c:3832
might_alloc include/linux/sched/mm.h:334 [inline]
slab_pre_alloc_hook mm/slub.c:3939 [inline]
slab_alloc_node mm/slub.c:4017 [inline]
kmem_cache_alloc_noprof+0x3d/0x2a0 mm/slub.c:4044
inotify_new_watch fs/notify/inotify/inotify_user.c:599 [inline]
inotify_update_watch fs/notify/inotify/inotify_user.c:647 [inline]
__do_sys_inotify_add_watch fs/notify/inotify/inotify_user.c:786 [inline]
__se_sys_inotify_add_watch+0x72e/0x1070 fs/notify/inotify/inotify_user.c:729
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x77/0x7f
-> #0 (&group->mark_mutex){+.+.}-{3:3}:
check_prev_add kernel/locking/lockdep.c:3133 [inline]
check_prevs_add kernel/locking/lockdep.c:3252 [inline]
validate_chain+0x18e0/0x5900 kernel/locking/lockdep.c:3868
__lock_acquire+0x137a/0x2040 kernel/locking/lockdep.c:5142
lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5759
__mutex_lock_common kernel/locking/mutex.c:608 [inline]
__mutex_lock+0x136/0xd70 kernel/locking/mutex.c:752
fsnotify_group_lock include/linux/fsnotify_backend.h:270 [inline]
fsnotify_destroy_mark+0x38/0x3c0 fs/notify/mark.c:578
fsnotify_destroy_marks+0x14a/0x660 fs/notify/mark.c:934
fsnotify_inoderemove include/linux/fsnotify.h:264 [inline]
dentry_unlink_inode+0x2e0/0x430 fs/dcache.c:403
__dentry_kill+0x20d/0x630 fs/dcache.c:610
shrink_kill+0xa9/0x2c0 fs/dcache.c:1055
shrink_dentry_list+0x2c0/0x5b0 fs/dcache.c:1082
prune_dcache_sb+0x10f/0x180 fs/dcache.c:1163
super_cache_scan+0x34f/0x4b0 fs/super.c:221
do_shrink_slab+0x701/0x1160 mm/shrinker.c:435
shrink_slab+0x1093/0x14d0 mm/shrinker.c:662
shrink_one+0x43b/0x850 mm/vmscan.c:4815
shrink_many mm/vmscan.c:4876 [inline]
lru_gen_shrink_node mm/vmscan.c:4954 [inline]
shrink_node+0x3799/0x3de0 mm/vmscan.c:5934
kswapd_shrink_node mm/vmscan.c:6762 [inline]
balance_pgdat mm/vmscan.c:6954 [inline]
kswapd+0x1bcd/0x35a0 mm/vmscan.c:7223
kthread+0x2f0/0x390 kernel/kthread.c:389
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
other info that might help us debug this:
Possible unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  lock(fs_reclaim);
                               lock(&group->mark_mutex);
                               lock(fs_reclaim);
  lock(&group->mark_mutex);
*** DEADLOCK ***
2 locks held by kswapd0/78:
#0: ffffffff8ea2fd60 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat mm/vmscan.c:6841 [inline]
#0: ffffffff8ea2fd60 (fs_reclaim){+.+.}-{0:0}, at: kswapd+0xbb4/0x35a0 mm/vmscan.c:7223
#1: ffff8880385580e0 (&type->s_umount_key#44){++++}-{3:3}, at: super_trylock_shared fs/super.c:562 [inline]
#1: ffff8880385580e0 (&type->s_umount_key#44){++++}-{3:3}, at: super_cache_scan+0x94/0x4b0 fs/super.c:196
stack backtrace:
CPU: 0 UID: 0 PID: 78 Comm: kswapd0 Not tainted 6.11.0-rc4-syzkaller-00019-gb311c1b497e5 #0
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2~bpo12+1 04/01/2014
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:93 [inline]
dump_stack_lvl+0x241/0x360 lib/dump_stack.c:119
check_noncircular+0x36a/0x4a0 kernel/locking/lockdep.c:2186
check_prev_add kernel/locking/lockdep.c:3133 [inline]
check_prevs_add kernel/locking/lockdep.c:3252 [inline]
validate_chain+0x18e0/0x5900 kernel/locking/lockdep.c:3868
__lock_acquire+0x137a/0x2040 kernel/locking/lockdep.c:5142
lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5759
__mutex_lock_common kernel/locking/mutex.c:608 [inline]
__mutex_lock+0x136/0xd70 kernel/locking/mutex.c:752
fsnotify_group_lock include/linux/fsnotify_backend.h:270 [inline]
fsnotify_destroy_mark+0x38/0x3c0 fs/notify/mark.c:578
fsnotify_destroy_marks+0x14a/0x660 fs/notify/mark.c:934
fsnotify_inoderemove include/linux/fsnotify.h:264 [inline]
dentry_unlink_inode+0x2e0/0x430 fs/dcache.c:403
__dentry_kill+0x20d/0x630 fs/dcache.c:610
shrink_kill+0xa9/0x2c0 fs/dcache.c:1055
shrink_dentry_list+0x2c0/0x5b0 fs/dcache.c:1082
prune_dcache_sb+0x10f/0x180 fs/dcache.c:1163
super_cache_scan+0x34f/0x4b0 fs/super.c:221
do_shrink_slab+0x701/0x1160 mm/shrinker.c:435
shrink_slab+0x1093/0x14d0 mm/shrinker.c:662
shrink_one+0x43b/0x850 mm/vmscan.c:4815
shrink_many mm/vmscan.c:4876 [inline]
lru_gen_shrink_node mm/vmscan.c:4954 [inline]
shrink_node+0x3799/0x3de0 mm/vmscan.c:5934
kswapd_shrink_node mm/vmscan.c:6762 [inline]
balance_pgdat mm/vmscan.c:6954 [inline]
kswapd+0x1bcd/0x35a0 mm/vmscan.c:7223
kthread+0x2f0/0x390 kernel/kthread.c:389
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
</TASK>
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@googlegroups.com.
syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title
If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.
If you want to overwrite report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)
If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report
If you want to undo deduplication, reply with:
#syz undup
^ permalink raw reply [flat|nested] 9+ messages in thread
* [PATCH] inotify: Fix possible deadlock in fsnotify_destroy_mark
2024-08-22 8:32 [syzbot] [squashfs?] possible deadlock in fsnotify_destroy_mark syzbot
@ 2024-09-27 9:12 ` Lizhi Xu
2024-09-27 9:31 ` Amir Goldstein
0 siblings, 1 reply; 9+ messages in thread
From: Lizhi Xu @ 2024-09-27 9:12 UTC (permalink / raw)
To: syzbot+c679f13773f295d2da53
Cc: amir73il, jack, linux-fsdevel, linux-kernel, phillip,
squashfs-devel, syzkaller-bugs
[Syzbot reported]
WARNING: possible circular locking dependency detected
6.11.0-rc4-syzkaller-00019-gb311c1b497e5 #0 Not tainted
------------------------------------------------------
kswapd0/78 is trying to acquire lock:
ffff88801b8d8930 (&group->mark_mutex){+.+.}-{3:3}, at: fsnotify_group_lock include/linux/fsnotify_backend.h:270 [inline]
ffff88801b8d8930 (&group->mark_mutex){+.+.}-{3:3}, at: fsnotify_destroy_mark+0x38/0x3c0 fs/notify/mark.c:578
but task is already holding lock:
ffffffff8ea2fd60 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat mm/vmscan.c:6841 [inline]
ffffffff8ea2fd60 (fs_reclaim){+.+.}-{0:0}, at: kswapd+0xbb4/0x35a0 mm/vmscan.c:7223
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #1 (fs_reclaim){+.+.}-{0:0}:
lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5759
__fs_reclaim_acquire mm/page_alloc.c:3818 [inline]
fs_reclaim_acquire+0x88/0x140 mm/page_alloc.c:3832
might_alloc include/linux/sched/mm.h:334 [inline]
slab_pre_alloc_hook mm/slub.c:3939 [inline]
slab_alloc_node mm/slub.c:4017 [inline]
kmem_cache_alloc_noprof+0x3d/0x2a0 mm/slub.c:4044
inotify_new_watch fs/notify/inotify/inotify_user.c:599 [inline]
inotify_update_watch fs/notify/inotify/inotify_user.c:647 [inline]
__do_sys_inotify_add_watch fs/notify/inotify/inotify_user.c:786 [inline]
__se_sys_inotify_add_watch+0x72e/0x1070 fs/notify/inotify/inotify_user.c:729
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x77/0x7f
-> #0 (&group->mark_mutex){+.+.}-{3:3}:
check_prev_add kernel/locking/lockdep.c:3133 [inline]
check_prevs_add kernel/locking/lockdep.c:3252 [inline]
validate_chain+0x18e0/0x5900 kernel/locking/lockdep.c:3868
__lock_acquire+0x137a/0x2040 kernel/locking/lockdep.c:5142
lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5759
__mutex_lock_common kernel/locking/mutex.c:608 [inline]
__mutex_lock+0x136/0xd70 kernel/locking/mutex.c:752
fsnotify_group_lock include/linux/fsnotify_backend.h:270 [inline]
fsnotify_destroy_mark+0x38/0x3c0 fs/notify/mark.c:578
fsnotify_destroy_marks+0x14a/0x660 fs/notify/mark.c:934
fsnotify_inoderemove include/linux/fsnotify.h:264 [inline]
dentry_unlink_inode+0x2e0/0x430 fs/dcache.c:403
__dentry_kill+0x20d/0x630 fs/dcache.c:610
shrink_kill+0xa9/0x2c0 fs/dcache.c:1055
shrink_dentry_list+0x2c0/0x5b0 fs/dcache.c:1082
prune_dcache_sb+0x10f/0x180 fs/dcache.c:1163
super_cache_scan+0x34f/0x4b0 fs/super.c:221
do_shrink_slab+0x701/0x1160 mm/shrinker.c:435
shrink_slab+0x1093/0x14d0 mm/shrinker.c:662
shrink_one+0x43b/0x850 mm/vmscan.c:4815
shrink_many mm/vmscan.c:4876 [inline]
lru_gen_shrink_node mm/vmscan.c:4954 [inline]
shrink_node+0x3799/0x3de0 mm/vmscan.c:5934
kswapd_shrink_node mm/vmscan.c:6762 [inline]
balance_pgdat mm/vmscan.c:6954 [inline]
kswapd+0x1bcd/0x35a0 mm/vmscan.c:7223
kthread+0x2f0/0x390 kernel/kthread.c:389
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
other info that might help us debug this:
Possible unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  lock(fs_reclaim);
                               lock(&group->mark_mutex);
                               lock(fs_reclaim);
  lock(&group->mark_mutex);
*** DEADLOCK ***
[Analysis]
inotify_new_watch() allocates with GFP_KERNEL while group->mark_mutex is held;
use memalloc_nofs_save()/memalloc_nofs_restore() to make sure we don't end up
with the fs reclaim dependency.
Reported-and-tested-by: syzbot+c679f13773f295d2da53@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=c679f13773f295d2da53
Signed-off-by: Lizhi Xu <lizhi.xu@windriver.com>
---
fs/notify/inotify/inotify_user.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/fs/notify/inotify/inotify_user.c b/fs/notify/inotify/inotify_user.c
index c7e451d5bd51..70b77b6186a6 100644
--- a/fs/notify/inotify/inotify_user.c
+++ b/fs/notify/inotify/inotify_user.c
@@ -643,8 +643,13 @@ static int inotify_update_watch(struct fsnotify_group *group, struct inode *inod
/* try to update and existing watch with the new arg */
ret = inotify_update_existing_watch(group, inode, arg);
/* no mark present, try to add a new one */
- if (ret == -ENOENT)
+ if (ret == -ENOENT) {
+ unsigned int nofs_flag;
+
+ nofs_flag = memalloc_nofs_save();
ret = inotify_new_watch(group, inode, arg);
+ memalloc_nofs_restore(nofs_flag);
+ }
fsnotify_group_unlock(group);
return ret;
--
2.43.0
^ permalink raw reply related [flat|nested] 9+ messages in thread
* Re: [PATCH] inotify: Fix possible deadlock in fsnotify_destroy_mark
2024-09-27 9:12 ` [PATCH] inotify: Fix " Lizhi Xu
@ 2024-09-27 9:31 ` Amir Goldstein
2024-09-27 10:20 ` Jan Kara
0 siblings, 1 reply; 9+ messages in thread
From: Amir Goldstein @ 2024-09-27 9:31 UTC (permalink / raw)
To: Lizhi Xu
Cc: syzbot+c679f13773f295d2da53, jack, linux-fsdevel, linux-kernel,
phillip, squashfs-devel, syzkaller-bugs
On Fri, Sep 27, 2024 at 11:12 AM Lizhi Xu <lizhi.xu@windriver.com> wrote:
>
> [Syzbot reported]
> WARNING: possible circular locking dependency detected
> 6.11.0-rc4-syzkaller-00019-gb311c1b497e5 #0 Not tainted
> ------------------------------------------------------
> kswapd0/78 is trying to acquire lock:
> ffff88801b8d8930 (&group->mark_mutex){+.+.}-{3:3}, at: fsnotify_group_lock include/linux/fsnotify_backend.h:270 [inline]
> ffff88801b8d8930 (&group->mark_mutex){+.+.}-{3:3}, at: fsnotify_destroy_mark+0x38/0x3c0 fs/notify/mark.c:578
>
> but task is already holding lock:
> ffffffff8ea2fd60 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat mm/vmscan.c:6841 [inline]
> ffffffff8ea2fd60 (fs_reclaim){+.+.}-{0:0}, at: kswapd+0xbb4/0x35a0 mm/vmscan.c:7223
>
> which lock already depends on the new lock.
>
>
> the existing dependency chain (in reverse order) is:
>
> -> #1 (fs_reclaim){+.+.}-{0:0}:
> lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5759
> __fs_reclaim_acquire mm/page_alloc.c:3818 [inline]
> fs_reclaim_acquire+0x88/0x140 mm/page_alloc.c:3832
> might_alloc include/linux/sched/mm.h:334 [inline]
> slab_pre_alloc_hook mm/slub.c:3939 [inline]
> slab_alloc_node mm/slub.c:4017 [inline]
> kmem_cache_alloc_noprof+0x3d/0x2a0 mm/slub.c:4044
> inotify_new_watch fs/notify/inotify/inotify_user.c:599 [inline]
> inotify_update_watch fs/notify/inotify/inotify_user.c:647 [inline]
> __do_sys_inotify_add_watch fs/notify/inotify/inotify_user.c:786 [inline]
> __se_sys_inotify_add_watch+0x72e/0x1070 fs/notify/inotify/inotify_user.c:729
> do_syscall_x64 arch/x86/entry/common.c:52 [inline]
> do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
> entry_SYSCALL_64_after_hwframe+0x77/0x7f
>
> -> #0 (&group->mark_mutex){+.+.}-{3:3}:
> check_prev_add kernel/locking/lockdep.c:3133 [inline]
> check_prevs_add kernel/locking/lockdep.c:3252 [inline]
> validate_chain+0x18e0/0x5900 kernel/locking/lockdep.c:3868
> __lock_acquire+0x137a/0x2040 kernel/locking/lockdep.c:5142
> lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5759
> __mutex_lock_common kernel/locking/mutex.c:608 [inline]
> __mutex_lock+0x136/0xd70 kernel/locking/mutex.c:752
> fsnotify_group_lock include/linux/fsnotify_backend.h:270 [inline]
> fsnotify_destroy_mark+0x38/0x3c0 fs/notify/mark.c:578
> fsnotify_destroy_marks+0x14a/0x660 fs/notify/mark.c:934
> fsnotify_inoderemove include/linux/fsnotify.h:264 [inline]
> dentry_unlink_inode+0x2e0/0x430 fs/dcache.c:403
> __dentry_kill+0x20d/0x630 fs/dcache.c:610
> shrink_kill+0xa9/0x2c0 fs/dcache.c:1055
> shrink_dentry_list+0x2c0/0x5b0 fs/dcache.c:1082
> prune_dcache_sb+0x10f/0x180 fs/dcache.c:1163
> super_cache_scan+0x34f/0x4b0 fs/super.c:221
> do_shrink_slab+0x701/0x1160 mm/shrinker.c:435
> shrink_slab+0x1093/0x14d0 mm/shrinker.c:662
> shrink_one+0x43b/0x850 mm/vmscan.c:4815
> shrink_many mm/vmscan.c:4876 [inline]
> lru_gen_shrink_node mm/vmscan.c:4954 [inline]
> shrink_node+0x3799/0x3de0 mm/vmscan.c:5934
> kswapd_shrink_node mm/vmscan.c:6762 [inline]
> balance_pgdat mm/vmscan.c:6954 [inline]
> kswapd+0x1bcd/0x35a0 mm/vmscan.c:7223
> kthread+0x2f0/0x390 kernel/kthread.c:389
> ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
> ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
>
> other info that might help us debug this:
>
> Possible unsafe locking scenario:
>
>        CPU0                    CPU1
>        ----                    ----
>   lock(fs_reclaim);
>                                lock(&group->mark_mutex);
>                                lock(fs_reclaim);
>   lock(&group->mark_mutex);
>
> *** DEADLOCK ***
>
> [Analysis]
> inotify_new_watch() allocates with GFP_KERNEL while group->mark_mutex is held;
> use memalloc_nofs_save()/memalloc_nofs_restore() to make sure we don't end up
> with the fs reclaim dependency.
I don't think this can actually happen, because an inode with
an inotify mark cannot get evicted, but I cannot think of a way to annotate
this to lockdep, so if we need to silence lockdep, this is what
FSNOTIFY_GROUP_NOFS was created for.
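For context, FSNOTIFY_GROUP_NOFS gates the NOFS protection inside the
group lock helpers; as of this kernel they look roughly like this
(reconstructed from include/linux/fsnotify_backend.h, also visible in
the diffs later in this thread):

static inline void fsnotify_group_lock(struct fsnotify_group *group)
{
	mutex_lock(&group->mark_mutex);
	if (group->flags & FSNOTIFY_GROUP_NOFS)
		group->owner_flags = memalloc_nofs_save();
}

static inline void fsnotify_group_unlock(struct fsnotify_group *group)
{
	if (group->flags & FSNOTIFY_GROUP_NOFS)
		memalloc_nofs_restore(group->owner_flags);
	mutex_unlock(&group->mark_mutex);
}

While PF_MEMALLOC_NOFS is set, GFP_KERNEL allocations behave like
GFP_NOFS, so nothing allocated under the mutex can recurse into fs
reclaim and back into mark_mutex.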
Thanks,
Amir.
>
> Reported-and-tested-by: syzbot+c679f13773f295d2da53@syzkaller.appspotmail.com
> Closes: https://syzkaller.appspot.com/bug?extid=c679f13773f295d2da53
> Signed-off-by: Lizhi Xu <lizhi.xu@windriver.com>
> ---
> fs/notify/inotify/inotify_user.c | 7 ++++++-
> 1 file changed, 6 insertions(+), 1 deletion(-)
>
> diff --git a/fs/notify/inotify/inotify_user.c b/fs/notify/inotify/inotify_user.c
> index c7e451d5bd51..70b77b6186a6 100644
> --- a/fs/notify/inotify/inotify_user.c
> +++ b/fs/notify/inotify/inotify_user.c
> @@ -643,8 +643,13 @@ static int inotify_update_watch(struct fsnotify_group *group, struct inode *inod
> /* try to update and existing watch with the new arg */
> ret = inotify_update_existing_watch(group, inode, arg);
> /* no mark present, try to add a new one */
> - if (ret == -ENOENT)
> + if (ret == -ENOENT) {
> + unsigned int nofs_flag;
> +
> + nofs_flag = memalloc_nofs_save();
> ret = inotify_new_watch(group, inode, arg);
> + memalloc_nofs_restore(nofs_flag);
> + }
> fsnotify_group_unlock(group);
>
> return ret;
> --
> 2.43.0
>
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH] inotify: Fix possible deadlock in fsnotify_destroy_mark
2024-09-27 9:31 ` Amir Goldstein
@ 2024-09-27 10:20 ` Jan Kara
2024-09-27 13:38 ` [PATCH V2] " Lizhi Xu
0 siblings, 1 reply; 9+ messages in thread
From: Jan Kara @ 2024-09-27 10:20 UTC (permalink / raw)
To: Amir Goldstein
Cc: Lizhi Xu, syzbot+c679f13773f295d2da53, jack, linux-fsdevel,
linux-kernel, phillip, squashfs-devel, syzkaller-bugs
On Fri 27-09-24 11:31:50, Amir Goldstein wrote:
> On Fri, Sep 27, 2024 at 11:12 AM Lizhi Xu <lizhi.xu@windriver.com> wrote:
> >
> > [Syzbot reported]
> > WARNING: possible circular locking dependency detected
> > 6.11.0-rc4-syzkaller-00019-gb311c1b497e5 #0 Not tainted
> > ------------------------------------------------------
> > kswapd0/78 is trying to acquire lock:
> > ffff88801b8d8930 (&group->mark_mutex){+.+.}-{3:3}, at: fsnotify_group_lock include/linux/fsnotify_backend.h:270 [inline]
> > ffff88801b8d8930 (&group->mark_mutex){+.+.}-{3:3}, at: fsnotify_destroy_mark+0x38/0x3c0 fs/notify/mark.c:578
> >
> > but task is already holding lock:
> > ffffffff8ea2fd60 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat mm/vmscan.c:6841 [inline]
> > ffffffff8ea2fd60 (fs_reclaim){+.+.}-{0:0}, at: kswapd+0xbb4/0x35a0 mm/vmscan.c:7223
> >
> > which lock already depends on the new lock.
> >
> >
> > the existing dependency chain (in reverse order) is:
> >
> > -> #1 (fs_reclaim){+.+.}-{0:0}:
> > lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5759
> > __fs_reclaim_acquire mm/page_alloc.c:3818 [inline]
> > fs_reclaim_acquire+0x88/0x140 mm/page_alloc.c:3832
> > might_alloc include/linux/sched/mm.h:334 [inline]
> > slab_pre_alloc_hook mm/slub.c:3939 [inline]
> > slab_alloc_node mm/slub.c:4017 [inline]
> > kmem_cache_alloc_noprof+0x3d/0x2a0 mm/slub.c:4044
> > inotify_new_watch fs/notify/inotify/inotify_user.c:599 [inline]
> > inotify_update_watch fs/notify/inotify/inotify_user.c:647 [inline]
> > __do_sys_inotify_add_watch fs/notify/inotify/inotify_user.c:786 [inline]
> > __se_sys_inotify_add_watch+0x72e/0x1070 fs/notify/inotify/inotify_user.c:729
> > do_syscall_x64 arch/x86/entry/common.c:52 [inline]
> > do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
> > entry_SYSCALL_64_after_hwframe+0x77/0x7f
> >
> > -> #0 (&group->mark_mutex){+.+.}-{3:3}:
> > check_prev_add kernel/locking/lockdep.c:3133 [inline]
> > check_prevs_add kernel/locking/lockdep.c:3252 [inline]
> > validate_chain+0x18e0/0x5900 kernel/locking/lockdep.c:3868
> > __lock_acquire+0x137a/0x2040 kernel/locking/lockdep.c:5142
> > lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5759
> > __mutex_lock_common kernel/locking/mutex.c:608 [inline]
> > __mutex_lock+0x136/0xd70 kernel/locking/mutex.c:752
> > fsnotify_group_lock include/linux/fsnotify_backend.h:270 [inline]
> > fsnotify_destroy_mark+0x38/0x3c0 fs/notify/mark.c:578
> > fsnotify_destroy_marks+0x14a/0x660 fs/notify/mark.c:934
> > fsnotify_inoderemove include/linux/fsnotify.h:264 [inline]
> > dentry_unlink_inode+0x2e0/0x430 fs/dcache.c:403
> > __dentry_kill+0x20d/0x630 fs/dcache.c:610
> > shrink_kill+0xa9/0x2c0 fs/dcache.c:1055
> > shrink_dentry_list+0x2c0/0x5b0 fs/dcache.c:1082
> > prune_dcache_sb+0x10f/0x180 fs/dcache.c:1163
> > super_cache_scan+0x34f/0x4b0 fs/super.c:221
> > do_shrink_slab+0x701/0x1160 mm/shrinker.c:435
> > shrink_slab+0x1093/0x14d0 mm/shrinker.c:662
> > shrink_one+0x43b/0x850 mm/vmscan.c:4815
> > shrink_many mm/vmscan.c:4876 [inline]
> > lru_gen_shrink_node mm/vmscan.c:4954 [inline]
> > shrink_node+0x3799/0x3de0 mm/vmscan.c:5934
> > kswapd_shrink_node mm/vmscan.c:6762 [inline]
> > balance_pgdat mm/vmscan.c:6954 [inline]
> > kswapd+0x1bcd/0x35a0 mm/vmscan.c:7223
> > kthread+0x2f0/0x390 kernel/kthread.c:389
> > ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
> > ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
> >
> > other info that might help us debug this:
> >
> > Possible unsafe locking scenario:
> >
> >        CPU0                    CPU1
> >        ----                    ----
> >   lock(fs_reclaim);
> >                                lock(&group->mark_mutex);
> >                                lock(fs_reclaim);
> >   lock(&group->mark_mutex);
> >
> > *** DEADLOCK ***
> >
> > [Analysis]
> > inotify_new_watch() allocates with GFP_KERNEL while group->mark_mutex is held;
> > use memalloc_nofs_save()/memalloc_nofs_restore() to make sure we don't end up
> > with the fs reclaim dependency.
>
> I don't think this can actually happen, because an inode with
> an inotify mark cannot get evicted,
Well, in the trace above dentry reclaim apparently raced with unlink and so
ended up going to dentry_unlink_inode() -> fsnotify_inoderemove() which
does indeed end up grabbing group->mark_mutex.
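So the two edges of the cycle are:

  inotify_add_watch():  mark_mutex -> fs_reclaim
                        (GFP_KERNEL allocation under the mutex)
  kswapd reclaim:       fs_reclaim -> mark_mutex
                        (__dentry_kill() -> fsnotify_inoderemove())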
> but I cannot think of a way to annotate
> this to lockdep, so if we need to silence lockdep, this is what
> FSNOTIFY_GROUP_NOFS was created for.
So yes, inotify needs FSNOTIFY_GROUP_NOFS as well. In fact this trace shows
that any notification group needs to use NOFS allocations to be safe against
this race, so we can just remove FSNOTIFY_GROUP_NOFS and unconditionally do
memalloc_nofs_save() in fsnotify_group_lock(). Lizhi, will you send a patch
please?
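To be explicit, the end result should look something like this (a sketch
only; the flag definition and its users need to go too):

static inline void fsnotify_group_lock(struct fsnotify_group *group)
{
	mutex_lock(&group->mark_mutex);
	group->owner_flags = memalloc_nofs_save();
}

static inline void fsnotify_group_unlock(struct fsnotify_group *group)
{
	memalloc_nofs_restore(group->owner_flags);
	mutex_unlock(&group->mark_mutex);
}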
Honza
> > Reported-and-tested-by: syzbot+c679f13773f295d2da53@syzkaller.appspotmail.com
> > Closes: https://syzkaller.appspot.com/bug?extid=c679f13773f295d2da53
> > Signed-off-by: Lizhi Xu <lizhi.xu@windriver.com>
> > ---
> > fs/notify/inotify/inotify_user.c | 7 ++++++-
> > 1 file changed, 6 insertions(+), 1 deletion(-)
> >
> > diff --git a/fs/notify/inotify/inotify_user.c b/fs/notify/inotify/inotify_user.c
> > index c7e451d5bd51..70b77b6186a6 100644
> > --- a/fs/notify/inotify/inotify_user.c
> > +++ b/fs/notify/inotify/inotify_user.c
> > @@ -643,8 +643,13 @@ static int inotify_update_watch(struct fsnotify_group *group, struct inode *inod
> > /* try to update and existing watch with the new arg */
> > ret = inotify_update_existing_watch(group, inode, arg);
> > /* no mark present, try to add a new one */
> > - if (ret == -ENOENT)
> > + if (ret == -ENOENT) {
> > + unsigned int nofs_flag;
> > +
> > + nofs_flag = memalloc_nofs_save();
> > ret = inotify_new_watch(group, inode, arg);
> > + memalloc_nofs_restore(nofs_flag);
> > + }
> > fsnotify_group_unlock(group);
> >
> > return ret;
> > --
> > 2.43.0
> >
--
Jan Kara <jack@suse.com>
SUSE Labs, CR
^ permalink raw reply [flat|nested] 9+ messages in thread
* [PATCH V2] inotify: Fix possible deadlock in fsnotify_destroy_mark
2024-09-27 10:20 ` Jan Kara
@ 2024-09-27 13:38 ` Lizhi Xu
2024-09-27 13:59 ` Amir Goldstein
0 siblings, 1 reply; 9+ messages in thread
From: Lizhi Xu @ 2024-09-27 13:38 UTC (permalink / raw)
To: jack
Cc: amir73il, linux-fsdevel, linux-kernel, lizhi.xu, phillip,
squashfs-devel, syzbot+c679f13773f295d2da53, syzkaller-bugs
[Syzbot reported]
WARNING: possible circular locking dependency detected
6.11.0-rc4-syzkaller-00019-gb311c1b497e5 #0 Not tainted
------------------------------------------------------
kswapd0/78 is trying to acquire lock:
ffff88801b8d8930 (&group->mark_mutex){+.+.}-{3:3}, at: fsnotify_group_lock include/linux/fsnotify_backend.h:270 [inline]
ffff88801b8d8930 (&group->mark_mutex){+.+.}-{3:3}, at: fsnotify_destroy_mark+0x38/0x3c0 fs/notify/mark.c:578
but task is already holding lock:
ffffffff8ea2fd60 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat mm/vmscan.c:6841 [inline]
ffffffff8ea2fd60 (fs_reclaim){+.+.}-{0:0}, at: kswapd+0xbb4/0x35a0 mm/vmscan.c:7223
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #1 (fs_reclaim){+.+.}-{0:0}:
lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5759
__fs_reclaim_acquire mm/page_alloc.c:3818 [inline]
fs_reclaim_acquire+0x88/0x140 mm/page_alloc.c:3832
might_alloc include/linux/sched/mm.h:334 [inline]
slab_pre_alloc_hook mm/slub.c:3939 [inline]
slab_alloc_node mm/slub.c:4017 [inline]
kmem_cache_alloc_noprof+0x3d/0x2a0 mm/slub.c:4044
inotify_new_watch fs/notify/inotify/inotify_user.c:599 [inline]
inotify_update_watch fs/notify/inotify/inotify_user.c:647 [inline]
__do_sys_inotify_add_watch fs/notify/inotify/inotify_user.c:786 [inline]
__se_sys_inotify_add_watch+0x72e/0x1070 fs/notify/inotify/inotify_user.c:729
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x77/0x7f
-> #0 (&group->mark_mutex){+.+.}-{3:3}:
check_prev_add kernel/locking/lockdep.c:3133 [inline]
check_prevs_add kernel/locking/lockdep.c:3252 [inline]
validate_chain+0x18e0/0x5900 kernel/locking/lockdep.c:3868
__lock_acquire+0x137a/0x2040 kernel/locking/lockdep.c:5142
lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5759
__mutex_lock_common kernel/locking/mutex.c:608 [inline]
__mutex_lock+0x136/0xd70 kernel/locking/mutex.c:752
fsnotify_group_lock include/linux/fsnotify_backend.h:270 [inline]
fsnotify_destroy_mark+0x38/0x3c0 fs/notify/mark.c:578
fsnotify_destroy_marks+0x14a/0x660 fs/notify/mark.c:934
fsnotify_inoderemove include/linux/fsnotify.h:264 [inline]
dentry_unlink_inode+0x2e0/0x430 fs/dcache.c:403
__dentry_kill+0x20d/0x630 fs/dcache.c:610
shrink_kill+0xa9/0x2c0 fs/dcache.c:1055
shrink_dentry_list+0x2c0/0x5b0 fs/dcache.c:1082
prune_dcache_sb+0x10f/0x180 fs/dcache.c:1163
super_cache_scan+0x34f/0x4b0 fs/super.c:221
do_shrink_slab+0x701/0x1160 mm/shrinker.c:435
shrink_slab+0x1093/0x14d0 mm/shrinker.c:662
shrink_one+0x43b/0x850 mm/vmscan.c:4815
shrink_many mm/vmscan.c:4876 [inline]
lru_gen_shrink_node mm/vmscan.c:4954 [inline]
shrink_node+0x3799/0x3de0 mm/vmscan.c:5934
kswapd_shrink_node mm/vmscan.c:6762 [inline]
balance_pgdat mm/vmscan.c:6954 [inline]
kswapd+0x1bcd/0x35a0 mm/vmscan.c:7223
kthread+0x2f0/0x390 kernel/kthread.c:389
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
other info that might help us debug this:
Possible unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  lock(fs_reclaim);
                               lock(&group->mark_mutex);
                               lock(fs_reclaim);
  lock(&group->mark_mutex);
*** DEADLOCK ***
[Analysis]
inotify_new_watch() allocates with GFP_KERNEL while group->mark_mutex is held;
use memalloc_nofs_save()/memalloc_nofs_restore() to make sure we don't end up
with the fs reclaim dependency.
This trace shows that any notification group needs to use NOFS allocations to
be safe against this race, so we can just remove FSNOTIFY_GROUP_NOFS and
unconditionally do memalloc_nofs_save() in fsnotify_group_lock().
Reported-and-tested-by: syzbot+c679f13773f295d2da53@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=c679f13773f295d2da53
Signed-off-by: Lizhi Xu <lizhi.xu@windriver.com>
---
V1 -> V2: remove FSNOTIFY_GROUP_NOFS in fsnotify_group_lock and unlock
include/linux/fsnotify_backend.h | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/include/linux/fsnotify_backend.h b/include/linux/fsnotify_backend.h
index 8be029bc50b1..7b0a2809fc2d 100644
--- a/include/linux/fsnotify_backend.h
+++ b/include/linux/fsnotify_backend.h
@@ -268,14 +268,12 @@ struct fsnotify_group {
static inline void fsnotify_group_lock(struct fsnotify_group *group)
{
mutex_lock(&group->mark_mutex);
- if (group->flags & FSNOTIFY_GROUP_NOFS)
- group->owner_flags = memalloc_nofs_save();
+ group->owner_flags = memalloc_nofs_save();
}
static inline void fsnotify_group_unlock(struct fsnotify_group *group)
{
- if (group->flags & FSNOTIFY_GROUP_NOFS)
- memalloc_nofs_restore(group->owner_flags);
+ memalloc_nofs_restore(group->owner_flags);
mutex_unlock(&group->mark_mutex);
}
--
2.43.0
^ permalink raw reply related [flat|nested] 9+ messages in thread
* Re: [PATCH V2] inotify: Fix possible deadlock in fsnotify_destroy_mark
2024-09-27 13:38 ` [PATCH V2] " Lizhi Xu
@ 2024-09-27 13:59 ` Amir Goldstein
2024-09-27 14:36 ` [PATCH V3] " Lizhi Xu
0 siblings, 1 reply; 9+ messages in thread
From: Amir Goldstein @ 2024-09-27 13:59 UTC (permalink / raw)
To: Lizhi Xu
Cc: jack, linux-fsdevel, linux-kernel, phillip, squashfs-devel,
syzbot+c679f13773f295d2da53, syzkaller-bugs
On Fri, Sep 27, 2024 at 3:38 PM Lizhi Xu <lizhi.xu@windriver.com> wrote:
>
> [Syzbot reported]
> WARNING: possible circular locking dependency detected
> 6.11.0-rc4-syzkaller-00019-gb311c1b497e5 #0 Not tainted
> ------------------------------------------------------
> kswapd0/78 is trying to acquire lock:
> ffff88801b8d8930 (&group->mark_mutex){+.+.}-{3:3}, at: fsnotify_group_lock include/linux/fsnotify_backend.h:270 [inline]
> ffff88801b8d8930 (&group->mark_mutex){+.+.}-{3:3}, at: fsnotify_destroy_mark+0x38/0x3c0 fs/notify/mark.c:578
>
> but task is already holding lock:
> ffffffff8ea2fd60 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat mm/vmscan.c:6841 [inline]
> ffffffff8ea2fd60 (fs_reclaim){+.+.}-{0:0}, at: kswapd+0xbb4/0x35a0 mm/vmscan.c:7223
>
> which lock already depends on the new lock.
>
>
> the existing dependency chain (in reverse order) is:
>
> -> #1 (fs_reclaim){+.+.}-{0:0}:
> lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5759
> __fs_reclaim_acquire mm/page_alloc.c:3818 [inline]
> fs_reclaim_acquire+0x88/0x140 mm/page_alloc.c:3832
> might_alloc include/linux/sched/mm.h:334 [inline]
> slab_pre_alloc_hook mm/slub.c:3939 [inline]
> slab_alloc_node mm/slub.c:4017 [inline]
> kmem_cache_alloc_noprof+0x3d/0x2a0 mm/slub.c:4044
> inotify_new_watch fs/notify/inotify/inotify_user.c:599 [inline]
> inotify_update_watch fs/notify/inotify/inotify_user.c:647 [inline]
> __do_sys_inotify_add_watch fs/notify/inotify/inotify_user.c:786 [inline]
> __se_sys_inotify_add_watch+0x72e/0x1070 fs/notify/inotify/inotify_user.c:729
> do_syscall_x64 arch/x86/entry/common.c:52 [inline]
> do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
> entry_SYSCALL_64_after_hwframe+0x77/0x7f
>
> -> #0 (&group->mark_mutex){+.+.}-{3:3}:
> check_prev_add kernel/locking/lockdep.c:3133 [inline]
> check_prevs_add kernel/locking/lockdep.c:3252 [inline]
> validate_chain+0x18e0/0x5900 kernel/locking/lockdep.c:3868
> __lock_acquire+0x137a/0x2040 kernel/locking/lockdep.c:5142
> lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5759
> __mutex_lock_common kernel/locking/mutex.c:608 [inline]
> __mutex_lock+0x136/0xd70 kernel/locking/mutex.c:752
> fsnotify_group_lock include/linux/fsnotify_backend.h:270 [inline]
> fsnotify_destroy_mark+0x38/0x3c0 fs/notify/mark.c:578
> fsnotify_destroy_marks+0x14a/0x660 fs/notify/mark.c:934
> fsnotify_inoderemove include/linux/fsnotify.h:264 [inline]
> dentry_unlink_inode+0x2e0/0x430 fs/dcache.c:403
> __dentry_kill+0x20d/0x630 fs/dcache.c:610
> shrink_kill+0xa9/0x2c0 fs/dcache.c:1055
> shrink_dentry_list+0x2c0/0x5b0 fs/dcache.c:1082
> prune_dcache_sb+0x10f/0x180 fs/dcache.c:1163
> super_cache_scan+0x34f/0x4b0 fs/super.c:221
> do_shrink_slab+0x701/0x1160 mm/shrinker.c:435
> shrink_slab+0x1093/0x14d0 mm/shrinker.c:662
> shrink_one+0x43b/0x850 mm/vmscan.c:4815
> shrink_many mm/vmscan.c:4876 [inline]
> lru_gen_shrink_node mm/vmscan.c:4954 [inline]
> shrink_node+0x3799/0x3de0 mm/vmscan.c:5934
> kswapd_shrink_node mm/vmscan.c:6762 [inline]
> balance_pgdat mm/vmscan.c:6954 [inline]
> kswapd+0x1bcd/0x35a0 mm/vmscan.c:7223
> kthread+0x2f0/0x390 kernel/kthread.c:389
> ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
> ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
>
> other info that might help us debug this:
>
> Possible unsafe locking scenario:
>
>        CPU0                    CPU1
>        ----                    ----
>   lock(fs_reclaim);
>                                lock(&group->mark_mutex);
>                                lock(fs_reclaim);
>   lock(&group->mark_mutex);
>
> *** DEADLOCK ***
>
> [Analysis]
> inotify_new_watch() allocates with GFP_KERNEL while group->mark_mutex is held;
> use memalloc_nofs_save()/memalloc_nofs_restore() to make sure we don't end up
> with the fs reclaim dependency.
>
> This trace shows that any notification group needs to use NOFS allocations to
> be safe against this race, so we can just remove FSNOTIFY_GROUP_NOFS and
> unconditionally do memalloc_nofs_save() in fsnotify_group_lock().
>
> Reported-and-tested-by: syzbot+c679f13773f295d2da53@syzkaller.appspotmail.com
> Closes: https://syzkaller.appspot.com/bug?extid=c679f13773f295d2da53
> Signed-off-by: Lizhi Xu <lizhi.xu@windriver.com>
> ---
> V1 -> V2: remove FSNOTIFY_GROUP_NOFS in fsnotify_group_lock and unlock
>
> include/linux/fsnotify_backend.h | 6 ++----
> 1 file changed, 2 insertions(+), 4 deletions(-)
>
> diff --git a/include/linux/fsnotify_backend.h b/include/linux/fsnotify_backend.h
> index 8be029bc50b1..7b0a2809fc2d 100644
> --- a/include/linux/fsnotify_backend.h
> +++ b/include/linux/fsnotify_backend.h
> @@ -268,14 +268,12 @@ struct fsnotify_group {
> static inline void fsnotify_group_lock(struct fsnotify_group *group)
> {
> mutex_lock(&group->mark_mutex);
> - if (group->flags & FSNOTIFY_GROUP_NOFS)
> - group->owner_flags = memalloc_nofs_save();
> + group->owner_flags = memalloc_nofs_save();
> }
>
> static inline void fsnotify_group_unlock(struct fsnotify_group *group)
> {
> - if (group->flags & FSNOTIFY_GROUP_NOFS)
> - memalloc_nofs_restore(group->owner_flags);
> + memalloc_nofs_restore(group->owner_flags);
> mutex_unlock(&group->mark_mutex);
> }
>
You missed several more instances of FSNOTIFY_GROUP_NOFS
that need to be removed, and please also remove the definition of the
flag as well as the nofs_marks_lock lockdep key that is no longer needed.
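(A quick 'git grep -n FSNOTIFY_GROUP_NOFS' should list the remaining
users in fs/nfsd/filecache.c, fs/notify/dnotify/dnotify.c and
fs/notify/fanotify/fanotify_user.c, plus the flag definition in
include/linux/fsnotify_backend.h and the nofs_marks_lock key in
fs/notify/group.c.)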
Thanks,
Amir.
^ permalink raw reply [flat|nested] 9+ messages in thread
* [PATCH V3] inotify: Fix possible deadlock in fsnotify_destroy_mark
2024-09-27 13:59 ` Amir Goldstein
@ 2024-09-27 14:36 ` Lizhi Xu
2024-09-27 14:58 ` Amir Goldstein
2024-10-02 13:23 ` Jan Kara
0 siblings, 2 replies; 9+ messages in thread
From: Lizhi Xu @ 2024-09-27 14:36 UTC (permalink / raw)
To: amir73il
Cc: jack, linux-fsdevel, linux-kernel, lizhi.xu, phillip,
squashfs-devel, syzbot+c679f13773f295d2da53, syzkaller-bugs
[Syzbot reported]
WARNING: possible circular locking dependency detected
6.11.0-rc4-syzkaller-00019-gb311c1b497e5 #0 Not tainted
------------------------------------------------------
kswapd0/78 is trying to acquire lock:
ffff88801b8d8930 (&group->mark_mutex){+.+.}-{3:3}, at: fsnotify_group_lock include/linux/fsnotify_backend.h:270 [inline]
ffff88801b8d8930 (&group->mark_mutex){+.+.}-{3:3}, at: fsnotify_destroy_mark+0x38/0x3c0 fs/notify/mark.c:578
but task is already holding lock:
ffffffff8ea2fd60 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat mm/vmscan.c:6841 [inline]
ffffffff8ea2fd60 (fs_reclaim){+.+.}-{0:0}, at: kswapd+0xbb4/0x35a0 mm/vmscan.c:7223
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #1 (fs_reclaim){+.+.}-{0:0}:
lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5759
__fs_reclaim_acquire mm/page_alloc.c:3818 [inline]
fs_reclaim_acquire+0x88/0x140 mm/page_alloc.c:3832
might_alloc include/linux/sched/mm.h:334 [inline]
slab_pre_alloc_hook mm/slub.c:3939 [inline]
slab_alloc_node mm/slub.c:4017 [inline]
kmem_cache_alloc_noprof+0x3d/0x2a0 mm/slub.c:4044
inotify_new_watch fs/notify/inotify/inotify_user.c:599 [inline]
inotify_update_watch fs/notify/inotify/inotify_user.c:647 [inline]
__do_sys_inotify_add_watch fs/notify/inotify/inotify_user.c:786 [inline]
__se_sys_inotify_add_watch+0x72e/0x1070 fs/notify/inotify/inotify_user.c:729
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x77/0x7f
-> #0 (&group->mark_mutex){+.+.}-{3:3}:
check_prev_add kernel/locking/lockdep.c:3133 [inline]
check_prevs_add kernel/locking/lockdep.c:3252 [inline]
validate_chain+0x18e0/0x5900 kernel/locking/lockdep.c:3868
__lock_acquire+0x137a/0x2040 kernel/locking/lockdep.c:5142
lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5759
__mutex_lock_common kernel/locking/mutex.c:608 [inline]
__mutex_lock+0x136/0xd70 kernel/locking/mutex.c:752
fsnotify_group_lock include/linux/fsnotify_backend.h:270 [inline]
fsnotify_destroy_mark+0x38/0x3c0 fs/notify/mark.c:578
fsnotify_destroy_marks+0x14a/0x660 fs/notify/mark.c:934
fsnotify_inoderemove include/linux/fsnotify.h:264 [inline]
dentry_unlink_inode+0x2e0/0x430 fs/dcache.c:403
__dentry_kill+0x20d/0x630 fs/dcache.c:610
shrink_kill+0xa9/0x2c0 fs/dcache.c:1055
shrink_dentry_list+0x2c0/0x5b0 fs/dcache.c:1082
prune_dcache_sb+0x10f/0x180 fs/dcache.c:1163
super_cache_scan+0x34f/0x4b0 fs/super.c:221
do_shrink_slab+0x701/0x1160 mm/shrinker.c:435
shrink_slab+0x1093/0x14d0 mm/shrinker.c:662
shrink_one+0x43b/0x850 mm/vmscan.c:4815
shrink_many mm/vmscan.c:4876 [inline]
lru_gen_shrink_node mm/vmscan.c:4954 [inline]
shrink_node+0x3799/0x3de0 mm/vmscan.c:5934
kswapd_shrink_node mm/vmscan.c:6762 [inline]
balance_pgdat mm/vmscan.c:6954 [inline]
kswapd+0x1bcd/0x35a0 mm/vmscan.c:7223
kthread+0x2f0/0x390 kernel/kthread.c:389
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
other info that might help us debug this:
Possible unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  lock(fs_reclaim);
                               lock(&group->mark_mutex);
                               lock(fs_reclaim);
  lock(&group->mark_mutex);
*** DEADLOCK ***
[Analysis]
inotify_new_watch() allocates with GFP_KERNEL while group->mark_mutex is held;
use memalloc_nofs_save()/memalloc_nofs_restore() to make sure we don't end up
with the fs reclaim dependency.
This trace shows that any notification group needs to use NOFS allocations to
be safe against this race, so we can just remove FSNOTIFY_GROUP_NOFS and
unconditionally do memalloc_nofs_save() in fsnotify_group_lock().
Reported-and-tested-by: syzbot+c679f13773f295d2da53@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=c679f13773f295d2da53
Signed-off-by: Lizhi Xu <lizhi.xu@windriver.com>
---
V1 -> V2: remove FSNOTIFY_GROUP_NOFS in fsnotify_group_lock and unlock
V2 -> V3: remove nofs_marks_lock and FSNOTIFY_GROUP_NOFS
fs/nfsd/filecache.c | 2 +-
fs/notify/dnotify/dnotify.c | 3 +--
fs/notify/fanotify/fanotify_user.c | 2 +-
fs/notify/group.c | 11 -----------
include/linux/fsnotify_backend.h | 10 +++-------
5 files changed, 6 insertions(+), 22 deletions(-)
diff --git a/fs/nfsd/filecache.c b/fs/nfsd/filecache.c
index 24e8f1fbcebb..2bb8465474dc 100644
--- a/fs/nfsd/filecache.c
+++ b/fs/nfsd/filecache.c
@@ -764,7 +764,7 @@ nfsd_file_cache_init(void)
}
nfsd_file_fsnotify_group = fsnotify_alloc_group(&nfsd_file_fsnotify_ops,
- FSNOTIFY_GROUP_NOFS);
+ 0);
if (IS_ERR(nfsd_file_fsnotify_group)) {
pr_err("nfsd: unable to create fsnotify group: %ld\n",
PTR_ERR(nfsd_file_fsnotify_group));
diff --git a/fs/notify/dnotify/dnotify.c b/fs/notify/dnotify/dnotify.c
index 46440fbb8662..d5dbef7f5c95 100644
--- a/fs/notify/dnotify/dnotify.c
+++ b/fs/notify/dnotify/dnotify.c
@@ -406,8 +406,7 @@ static int __init dnotify_init(void)
SLAB_PANIC|SLAB_ACCOUNT);
dnotify_mark_cache = KMEM_CACHE(dnotify_mark, SLAB_PANIC|SLAB_ACCOUNT);
- dnotify_group = fsnotify_alloc_group(&dnotify_fsnotify_ops,
- FSNOTIFY_GROUP_NOFS);
+ dnotify_group = fsnotify_alloc_group(&dnotify_fsnotify_ops, 0);
if (IS_ERR(dnotify_group))
panic("unable to allocate fsnotify group for dnotify\n");
dnotify_sysctl_init();
diff --git a/fs/notify/fanotify/fanotify_user.c b/fs/notify/fanotify/fanotify_user.c
index 13454e5fd3fb..9644bc72e457 100644
--- a/fs/notify/fanotify/fanotify_user.c
+++ b/fs/notify/fanotify/fanotify_user.c
@@ -1480,7 +1480,7 @@ SYSCALL_DEFINE2(fanotify_init, unsigned int, flags, unsigned int, event_f_flags)
/* fsnotify_alloc_group takes a ref. Dropped in fanotify_release */
group = fsnotify_alloc_group(&fanotify_fsnotify_ops,
- FSNOTIFY_GROUP_USER | FSNOTIFY_GROUP_NOFS);
+ FSNOTIFY_GROUP_USER);
if (IS_ERR(group)) {
return PTR_ERR(group);
}
diff --git a/fs/notify/group.c b/fs/notify/group.c
index 1de6631a3925..18446b7b0d49 100644
--- a/fs/notify/group.c
+++ b/fs/notify/group.c
@@ -115,7 +115,6 @@ static struct fsnotify_group *__fsnotify_alloc_group(
const struct fsnotify_ops *ops,
int flags, gfp_t gfp)
{
- static struct lock_class_key nofs_marks_lock;
struct fsnotify_group *group;
group = kzalloc(sizeof(struct fsnotify_group), gfp);
@@ -136,16 +135,6 @@ static struct fsnotify_group *__fsnotify_alloc_group(
group->ops = ops;
group->flags = flags;
- /*
- * For most backends, eviction of inode with a mark is not expected,
- * because marks hold a refcount on the inode against eviction.
- *
- * Use a different lockdep class for groups that support evictable
- * inode marks, because with evictable marks, mark_mutex is NOT
- * fs-reclaim safe - the mutex is taken when evicting inodes.
- */
- if (flags & FSNOTIFY_GROUP_NOFS)
- lockdep_set_class(&group->mark_mutex, &nofs_marks_lock);
return group;
}
diff --git a/include/linux/fsnotify_backend.h b/include/linux/fsnotify_backend.h
index 8be029bc50b1..3ecf7768e577 100644
--- a/include/linux/fsnotify_backend.h
+++ b/include/linux/fsnotify_backend.h
@@ -217,7 +217,6 @@ struct fsnotify_group {
#define FSNOTIFY_GROUP_USER 0x01 /* user allocated group */
#define FSNOTIFY_GROUP_DUPS 0x02 /* allow multiple marks per object */
-#define FSNOTIFY_GROUP_NOFS 0x04 /* group lock is not direct reclaim safe */
int flags;
unsigned int owner_flags; /* stored flags of mark_mutex owner */
@@ -268,22 +267,19 @@ struct fsnotify_group {
static inline void fsnotify_group_lock(struct fsnotify_group *group)
{
mutex_lock(&group->mark_mutex);
- if (group->flags & FSNOTIFY_GROUP_NOFS)
- group->owner_flags = memalloc_nofs_save();
+ group->owner_flags = memalloc_nofs_save();
}
static inline void fsnotify_group_unlock(struct fsnotify_group *group)
{
- if (group->flags & FSNOTIFY_GROUP_NOFS)
- memalloc_nofs_restore(group->owner_flags);
+ memalloc_nofs_restore(group->owner_flags);
mutex_unlock(&group->mark_mutex);
}
static inline void fsnotify_group_assert_locked(struct fsnotify_group *group)
{
WARN_ON_ONCE(!mutex_is_locked(&group->mark_mutex));
- if (group->flags & FSNOTIFY_GROUP_NOFS)
- WARN_ON_ONCE(!(current->flags & PF_MEMALLOC_NOFS));
+ WARN_ON_ONCE(!(current->flags & PF_MEMALLOC_NOFS));
}
/* When calling fsnotify tell it if the data is a path or inode */
--
2.43.0
^ permalink raw reply related [flat|nested] 9+ messages in thread
* Re: [PATCH V3] inotify: Fix possible deadlock in fsnotify_destroy_mark
2024-09-27 14:36 ` [PATCH V3] " Lizhi Xu
@ 2024-09-27 14:58 ` Amir Goldstein
2024-10-02 13:23 ` Jan Kara
1 sibling, 0 replies; 9+ messages in thread
From: Amir Goldstein @ 2024-09-27 14:58 UTC (permalink / raw)
To: Lizhi Xu
Cc: jack, linux-fsdevel, linux-kernel, phillip, squashfs-devel,
syzbot+c679f13773f295d2da53, syzkaller-bugs
On Fri, Sep 27, 2024 at 4:36 PM Lizhi Xu <lizhi.xu@windriver.com> wrote:
>
> [Syzbot reported]
> WARNING: possible circular locking dependency detected
> 6.11.0-rc4-syzkaller-00019-gb311c1b497e5 #0 Not tainted
> ------------------------------------------------------
> kswapd0/78 is trying to acquire lock:
> ffff88801b8d8930 (&group->mark_mutex){+.+.}-{3:3}, at: fsnotify_group_lock include/linux/fsnotify_backend.h:270 [inline]
> ffff88801b8d8930 (&group->mark_mutex){+.+.}-{3:3}, at: fsnotify_destroy_mark+0x38/0x3c0 fs/notify/mark.c:578
>
> but task is already holding lock:
> ffffffff8ea2fd60 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat mm/vmscan.c:6841 [inline]
> ffffffff8ea2fd60 (fs_reclaim){+.+.}-{0:0}, at: kswapd+0xbb4/0x35a0 mm/vmscan.c:7223
>
> which lock already depends on the new lock.
>
>
> the existing dependency chain (in reverse order) is:
>
> -> #1 (fs_reclaim){+.+.}-{0:0}:
> lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5759
> __fs_reclaim_acquire mm/page_alloc.c:3818 [inline]
> fs_reclaim_acquire+0x88/0x140 mm/page_alloc.c:3832
> might_alloc include/linux/sched/mm.h:334 [inline]
> slab_pre_alloc_hook mm/slub.c:3939 [inline]
> slab_alloc_node mm/slub.c:4017 [inline]
> kmem_cache_alloc_noprof+0x3d/0x2a0 mm/slub.c:4044
> inotify_new_watch fs/notify/inotify/inotify_user.c:599 [inline]
> inotify_update_watch fs/notify/inotify/inotify_user.c:647 [inline]
> __do_sys_inotify_add_watch fs/notify/inotify/inotify_user.c:786 [inline]
> __se_sys_inotify_add_watch+0x72e/0x1070 fs/notify/inotify/inotify_user.c:729
> do_syscall_x64 arch/x86/entry/common.c:52 [inline]
> do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
> entry_SYSCALL_64_after_hwframe+0x77/0x7f
>
> -> #0 (&group->mark_mutex){+.+.}-{3:3}:
> check_prev_add kernel/locking/lockdep.c:3133 [inline]
> check_prevs_add kernel/locking/lockdep.c:3252 [inline]
> validate_chain+0x18e0/0x5900 kernel/locking/lockdep.c:3868
> __lock_acquire+0x137a/0x2040 kernel/locking/lockdep.c:5142
> lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5759
> __mutex_lock_common kernel/locking/mutex.c:608 [inline]
> __mutex_lock+0x136/0xd70 kernel/locking/mutex.c:752
> fsnotify_group_lock include/linux/fsnotify_backend.h:270 [inline]
> fsnotify_destroy_mark+0x38/0x3c0 fs/notify/mark.c:578
> fsnotify_destroy_marks+0x14a/0x660 fs/notify/mark.c:934
> fsnotify_inoderemove include/linux/fsnotify.h:264 [inline]
> dentry_unlink_inode+0x2e0/0x430 fs/dcache.c:403
> __dentry_kill+0x20d/0x630 fs/dcache.c:610
> shrink_kill+0xa9/0x2c0 fs/dcache.c:1055
> shrink_dentry_list+0x2c0/0x5b0 fs/dcache.c:1082
> prune_dcache_sb+0x10f/0x180 fs/dcache.c:1163
> super_cache_scan+0x34f/0x4b0 fs/super.c:221
> do_shrink_slab+0x701/0x1160 mm/shrinker.c:435
> shrink_slab+0x1093/0x14d0 mm/shrinker.c:662
> shrink_one+0x43b/0x850 mm/vmscan.c:4815
> shrink_many mm/vmscan.c:4876 [inline]
> lru_gen_shrink_node mm/vmscan.c:4954 [inline]
> shrink_node+0x3799/0x3de0 mm/vmscan.c:5934
> kswapd_shrink_node mm/vmscan.c:6762 [inline]
> balance_pgdat mm/vmscan.c:6954 [inline]
> kswapd+0x1bcd/0x35a0 mm/vmscan.c:7223
> kthread+0x2f0/0x390 kernel/kthread.c:389
> ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
> ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
>
> other info that might help us debug this:
>
> Possible unsafe locking scenario:
>
>        CPU0                    CPU1
>        ----                    ----
>   lock(fs_reclaim);
>                                lock(&group->mark_mutex);
>                                lock(fs_reclaim);
>   lock(&group->mark_mutex);
>
> *** DEADLOCK ***
>
> [Analysis]
> inotify_new_watch() allocates with GFP_KERNEL while group->mark_mutex is held;
> use memalloc_nofs_save()/memalloc_nofs_restore() to make sure we don't end up
> with the fs reclaim dependency.
>
> This trace shows that any notification group needs to use NOFS allocations to
> be safe against this race, so we can just remove FSNOTIFY_GROUP_NOFS and
> unconditionally do memalloc_nofs_save() in fsnotify_group_lock().
>
> Reported-and-tested-by: syzbot+c679f13773f295d2da53@syzkaller.appspotmail.com
> Closes: https://syzkaller.appspot.com/bug?extid=c679f13773f295d2da53
> Signed-off-by: Lizhi Xu <lizhi.xu@windriver.com>
> ---
> V1 -> V2: remove FSNOTIFY_GROUP_NOFS in fsnotify_group_lock and unlock
> V2 -> V3: remove nofs_marks_lock and FSNOTIFY_GROUP_NOFS
Reviewed-by: Amir Goldstein <amir73il@gmail.com>
Thanks,
Amir.
>
> fs/nfsd/filecache.c | 2 +-
> fs/notify/dnotify/dnotify.c | 3 +--
> fs/notify/fanotify/fanotify_user.c | 2 +-
> fs/notify/group.c | 11 -----------
> include/linux/fsnotify_backend.h | 10 +++-------
> 5 files changed, 6 insertions(+), 22 deletions(-)
>
> diff --git a/fs/nfsd/filecache.c b/fs/nfsd/filecache.c
> index 24e8f1fbcebb..2bb8465474dc 100644
> --- a/fs/nfsd/filecache.c
> +++ b/fs/nfsd/filecache.c
> @@ -764,7 +764,7 @@ nfsd_file_cache_init(void)
> }
>
> nfsd_file_fsnotify_group = fsnotify_alloc_group(&nfsd_file_fsnotify_ops,
> - FSNOTIFY_GROUP_NOFS);
> + 0);
> if (IS_ERR(nfsd_file_fsnotify_group)) {
> pr_err("nfsd: unable to create fsnotify group: %ld\n",
> PTR_ERR(nfsd_file_fsnotify_group));
> diff --git a/fs/notify/dnotify/dnotify.c b/fs/notify/dnotify/dnotify.c
> index 46440fbb8662..d5dbef7f5c95 100644
> --- a/fs/notify/dnotify/dnotify.c
> +++ b/fs/notify/dnotify/dnotify.c
> @@ -406,8 +406,7 @@ static int __init dnotify_init(void)
> SLAB_PANIC|SLAB_ACCOUNT);
> dnotify_mark_cache = KMEM_CACHE(dnotify_mark, SLAB_PANIC|SLAB_ACCOUNT);
>
> - dnotify_group = fsnotify_alloc_group(&dnotify_fsnotify_ops,
> - FSNOTIFY_GROUP_NOFS);
> + dnotify_group = fsnotify_alloc_group(&dnotify_fsnotify_ops, 0);
> if (IS_ERR(dnotify_group))
> panic("unable to allocate fsnotify group for dnotify\n");
> dnotify_sysctl_init();
> diff --git a/fs/notify/fanotify/fanotify_user.c b/fs/notify/fanotify/fanotify_user.c
> index 13454e5fd3fb..9644bc72e457 100644
> --- a/fs/notify/fanotify/fanotify_user.c
> +++ b/fs/notify/fanotify/fanotify_user.c
> @@ -1480,7 +1480,7 @@ SYSCALL_DEFINE2(fanotify_init, unsigned int, flags, unsigned int, event_f_flags)
>
> /* fsnotify_alloc_group takes a ref. Dropped in fanotify_release */
> group = fsnotify_alloc_group(&fanotify_fsnotify_ops,
> - FSNOTIFY_GROUP_USER | FSNOTIFY_GROUP_NOFS);
> + FSNOTIFY_GROUP_USER);
> if (IS_ERR(group)) {
> return PTR_ERR(group);
> }
> diff --git a/fs/notify/group.c b/fs/notify/group.c
> index 1de6631a3925..18446b7b0d49 100644
> --- a/fs/notify/group.c
> +++ b/fs/notify/group.c
> @@ -115,7 +115,6 @@ static struct fsnotify_group *__fsnotify_alloc_group(
> const struct fsnotify_ops *ops,
> int flags, gfp_t gfp)
> {
> - static struct lock_class_key nofs_marks_lock;
> struct fsnotify_group *group;
>
> group = kzalloc(sizeof(struct fsnotify_group), gfp);
> @@ -136,16 +135,6 @@ static struct fsnotify_group *__fsnotify_alloc_group(
>
> group->ops = ops;
> group->flags = flags;
> - /*
> - * For most backends, eviction of inode with a mark is not expected,
> - * because marks hold a refcount on the inode against eviction.
> - *
> - * Use a different lockdep class for groups that support evictable
> - * inode marks, because with evictable marks, mark_mutex is NOT
> - * fs-reclaim safe - the mutex is taken when evicting inodes.
> - */
> - if (flags & FSNOTIFY_GROUP_NOFS)
> - lockdep_set_class(&group->mark_mutex, &nofs_marks_lock);
>
> return group;
> }
> diff --git a/include/linux/fsnotify_backend.h b/include/linux/fsnotify_backend.h
> index 8be029bc50b1..3ecf7768e577 100644
> --- a/include/linux/fsnotify_backend.h
> +++ b/include/linux/fsnotify_backend.h
> @@ -217,7 +217,6 @@ struct fsnotify_group {
>
> #define FSNOTIFY_GROUP_USER 0x01 /* user allocated group */
> #define FSNOTIFY_GROUP_DUPS 0x02 /* allow multiple marks per object */
> -#define FSNOTIFY_GROUP_NOFS 0x04 /* group lock is not direct reclaim safe */
> int flags;
> unsigned int owner_flags; /* stored flags of mark_mutex owner */
>
> @@ -268,22 +267,19 @@ struct fsnotify_group {
> static inline void fsnotify_group_lock(struct fsnotify_group *group)
> {
> mutex_lock(&group->mark_mutex);
> - if (group->flags & FSNOTIFY_GROUP_NOFS)
> - group->owner_flags = memalloc_nofs_save();
> + group->owner_flags = memalloc_nofs_save();
> }
>
> static inline void fsnotify_group_unlock(struct fsnotify_group *group)
> {
> - if (group->flags & FSNOTIFY_GROUP_NOFS)
> - memalloc_nofs_restore(group->owner_flags);
> + memalloc_nofs_restore(group->owner_flags);
> mutex_unlock(&group->mark_mutex);
> }
>
> static inline void fsnotify_group_assert_locked(struct fsnotify_group *group)
> {
> WARN_ON_ONCE(!mutex_is_locked(&group->mark_mutex));
> - if (group->flags & FSNOTIFY_GROUP_NOFS)
> - WARN_ON_ONCE(!(current->flags & PF_MEMALLOC_NOFS));
> + WARN_ON_ONCE(!(current->flags & PF_MEMALLOC_NOFS));
> }
>
> /* When calling fsnotify tell it if the data is a path or inode */
> --
> 2.43.0
>
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH V3] inotify: Fix possible deadlock in fsnotify_destroy_mark
2024-09-27 14:36 ` [PATCH V3] " Lizhi Xu
2024-09-27 14:58 ` Amir Goldstein
@ 2024-10-02 13:23 ` Jan Kara
1 sibling, 0 replies; 9+ messages in thread
From: Jan Kara @ 2024-10-02 13:23 UTC (permalink / raw)
To: Lizhi Xu
Cc: amir73il, jack, linux-fsdevel, linux-kernel, phillip,
squashfs-devel, syzbot+c679f13773f295d2da53, syzkaller-bugs
On Fri 27-09-24 22:36:42, Lizhi Xu wrote:
> [Syzbot reported]
> WARNING: possible circular locking dependency detected
> 6.11.0-rc4-syzkaller-00019-gb311c1b497e5 #0 Not tainted
> ------------------------------------------------------
> kswapd0/78 is trying to acquire lock:
> ffff88801b8d8930 (&group->mark_mutex){+.+.}-{3:3}, at: fsnotify_group_lock include/linux/fsnotify_backend.h:270 [inline]
> ffff88801b8d8930 (&group->mark_mutex){+.+.}-{3:3}, at: fsnotify_destroy_mark+0x38/0x3c0 fs/notify/mark.c:578
>
> but task is already holding lock:
> ffffffff8ea2fd60 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat mm/vmscan.c:6841 [inline]
> ffffffff8ea2fd60 (fs_reclaim){+.+.}-{0:0}, at: kswapd+0xbb4/0x35a0 mm/vmscan.c:7223
>
> which lock already depends on the new lock.
>
>
> the existing dependency chain (in reverse order) is:
>
> -> #1 (fs_reclaim){+.+.}-{0:0}:
> lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5759
> __fs_reclaim_acquire mm/page_alloc.c:3818 [inline]
> fs_reclaim_acquire+0x88/0x140 mm/page_alloc.c:3832
> might_alloc include/linux/sched/mm.h:334 [inline]
> slab_pre_alloc_hook mm/slub.c:3939 [inline]
> slab_alloc_node mm/slub.c:4017 [inline]
> kmem_cache_alloc_noprof+0x3d/0x2a0 mm/slub.c:4044
> inotify_new_watch fs/notify/inotify/inotify_user.c:599 [inline]
> inotify_update_watch fs/notify/inotify/inotify_user.c:647 [inline]
> __do_sys_inotify_add_watch fs/notify/inotify/inotify_user.c:786 [inline]
> __se_sys_inotify_add_watch+0x72e/0x1070 fs/notify/inotify/inotify_user.c:729
> do_syscall_x64 arch/x86/entry/common.c:52 [inline]
> do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
> entry_SYSCALL_64_after_hwframe+0x77/0x7f
>
> -> #0 (&group->mark_mutex){+.+.}-{3:3}:
> check_prev_add kernel/locking/lockdep.c:3133 [inline]
> check_prevs_add kernel/locking/lockdep.c:3252 [inline]
> validate_chain+0x18e0/0x5900 kernel/locking/lockdep.c:3868
> __lock_acquire+0x137a/0x2040 kernel/locking/lockdep.c:5142
> lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5759
> __mutex_lock_common kernel/locking/mutex.c:608 [inline]
> __mutex_lock+0x136/0xd70 kernel/locking/mutex.c:752
> fsnotify_group_lock include/linux/fsnotify_backend.h:270 [inline]
> fsnotify_destroy_mark+0x38/0x3c0 fs/notify/mark.c:578
> fsnotify_destroy_marks+0x14a/0x660 fs/notify/mark.c:934
> fsnotify_inoderemove include/linux/fsnotify.h:264 [inline]
> dentry_unlink_inode+0x2e0/0x430 fs/dcache.c:403
> __dentry_kill+0x20d/0x630 fs/dcache.c:610
> shrink_kill+0xa9/0x2c0 fs/dcache.c:1055
> shrink_dentry_list+0x2c0/0x5b0 fs/dcache.c:1082
> prune_dcache_sb+0x10f/0x180 fs/dcache.c:1163
> super_cache_scan+0x34f/0x4b0 fs/super.c:221
> do_shrink_slab+0x701/0x1160 mm/shrinker.c:435
> shrink_slab+0x1093/0x14d0 mm/shrinker.c:662
> shrink_one+0x43b/0x850 mm/vmscan.c:4815
> shrink_many mm/vmscan.c:4876 [inline]
> lru_gen_shrink_node mm/vmscan.c:4954 [inline]
> shrink_node+0x3799/0x3de0 mm/vmscan.c:5934
> kswapd_shrink_node mm/vmscan.c:6762 [inline]
> balance_pgdat mm/vmscan.c:6954 [inline]
> kswapd+0x1bcd/0x35a0 mm/vmscan.c:7223
> kthread+0x2f0/0x390 kernel/kthread.c:389
> ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
> ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
>
> other info that might help us debug this:
>
> Possible unsafe locking scenario:
>
>        CPU0                    CPU1
>        ----                    ----
>   lock(fs_reclaim);
>                                lock(&group->mark_mutex);
>                                lock(fs_reclaim);
>   lock(&group->mark_mutex);
>
> *** DEADLOCK ***
>
> [Analysis]
> inotify_new_watch() allocates with GFP_KERNEL while group->mark_mutex is held;
> use memalloc_nofs_save()/memalloc_nofs_restore() to make sure we don't end up
> with the fs reclaim dependency.
>
> This trace shows that any notification group needs to use NOFS allocations to
> be safe against this race, so we can just remove FSNOTIFY_GROUP_NOFS and
> unconditionally do memalloc_nofs_save() in fsnotify_group_lock().
>
> Reported-and-tested-by: syzbot+c679f13773f295d2da53@syzkaller.appspotmail.com
> Closes: https://syzkaller.appspot.com/bug?extid=c679f13773f295d2da53
> Signed-off-by: Lizhi Xu <lizhi.xu@windriver.com>
Thanks. I've added the patch to my tree.
Honza
--
Jan Kara <jack@suse.com>
SUSE Labs, CR
^ permalink raw reply [flat|nested] 9+ messages in thread
Thread overview: 9+ messages
2024-08-22 8:32 [syzbot] [squashfs?] possible deadlock in fsnotify_destroy_mark syzbot
2024-09-27 9:12 ` [PATCH] inotify: Fix " Lizhi Xu
2024-09-27 9:31 ` Amir Goldstein
2024-09-27 10:20 ` Jan Kara
2024-09-27 13:38 ` [PATCH V2] " Lizhi Xu
2024-09-27 13:59 ` Amir Goldstein
2024-09-27 14:36 ` [PATCH V3] " Lizhi Xu
2024-09-27 14:58 ` Amir Goldstein
2024-10-02 13:23 ` Jan Kara