* [syzbot] [mm?] possible deadlock in lock_next_vma
@ 2025-07-12 5:57 syzbot
2025-07-14 16:38 ` syzbot
2025-07-15 9:33 ` Lorenzo Stoakes
0 siblings, 2 replies; 4+ messages in thread
From: syzbot @ 2025-07-12 5:57 UTC (permalink / raw)
To: Liam.Howlett, akpm, linux-kernel, linux-mm, lorenzo.stoakes,
shakeel.butt, surenb, syzkaller-bugs, vbabka
Hello,
syzbot found the following issue on:
HEAD commit: 26ffb3d6f02c Add linux-next specific files for 20250704
git tree: linux-next
console output: https://syzkaller.appspot.com/x/log.txt?x=12d4df70580000
kernel config: https://syzkaller.appspot.com/x/.config?x=1e4f88512ae53408
dashboard link: https://syzkaller.appspot.com/bug?extid=159a3ef1894076a6a6e9
compiler: Debian clang version 20.1.7 (++20250616065708+6146a88f6049-1~exp1~20250616065826.132), Debian LLD 20.1.7
Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/fd5569903143/disk-26ffb3d6.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/1b0c9505c543/vmlinux-26ffb3d6.xz
kernel image: https://storage.googleapis.com/syzbot-assets/9d864c72bed1/bzImage-26ffb3d6.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+159a3ef1894076a6a6e9@syzkaller.appspotmail.com
======================================================
WARNING: possible circular locking dependency detected
6.16.0-rc4-next-20250704-syzkaller #0 Not tainted
------------------------------------------------------
syz.4.1737/14243 is trying to acquire lock:
ffff88807634d1e0 (&mm->mmap_lock){++++}-{4:4}, at: mmap_read_lock_killable include/linux/mmap_lock.h:432 [inline]
ffff88807634d1e0 (&mm->mmap_lock){++++}-{4:4}, at: lock_vma_under_mmap_lock mm/mmap_lock.c:189 [inline]
ffff88807634d1e0 (&mm->mmap_lock){++++}-{4:4}, at: lock_next_vma+0x802/0xdc0 mm/mmap_lock.c:264
but task is already holding lock:
ffff888020b36a88 (vm_lock){++++}-{0:0}, at: lock_next_vma+0x146/0xdc0 mm/mmap_lock.c:220
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #1 (vm_lock){++++}-{0:0}:
lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5871
__vma_enter_locked+0x182/0x380 mm/mmap_lock.c:63
__vma_start_write+0x1e/0x120 mm/mmap_lock.c:87
vma_start_write include/linux/mmap_lock.h:267 [inline]
mprotect_fixup+0x571/0x9b0 mm/mprotect.c:670
setup_arg_pages+0x53a/0xaa0 fs/exec.c:670
load_elf_binary+0xb9f/0x2730 fs/binfmt_elf.c:1013
search_binary_handler fs/exec.c:1670 [inline]
exec_binprm fs/exec.c:1702 [inline]
bprm_execve+0x99c/0x1450 fs/exec.c:1754
kernel_execve+0x8f0/0x9f0 fs/exec.c:1920
try_to_run_init_process+0x13/0x60 init/main.c:1397
kernel_init+0xad/0x1d0 init/main.c:1525
ret_from_fork+0x3fc/0x770 arch/x86/kernel/process.c:148
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
-> #0 (&mm->mmap_lock){++++}-{4:4}:
check_prev_add kernel/locking/lockdep.c:3168 [inline]
check_prevs_add kernel/locking/lockdep.c:3287 [inline]
validate_chain+0xb9b/0x2140 kernel/locking/lockdep.c:3911
__lock_acquire+0xab9/0xd20 kernel/locking/lockdep.c:5240
lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5871
down_read_killable+0x50/0x350 kernel/locking/rwsem.c:1547
mmap_read_lock_killable include/linux/mmap_lock.h:432 [inline]
lock_vma_under_mmap_lock mm/mmap_lock.c:189 [inline]
lock_next_vma+0x802/0xdc0 mm/mmap_lock.c:264
get_next_vma fs/proc/task_mmu.c:182 [inline]
query_vma_find_by_addr fs/proc/task_mmu.c:516 [inline]
query_matching_vma+0x28f/0x4b0 fs/proc/task_mmu.c:545
do_procmap_query fs/proc/task_mmu.c:637 [inline]
procfs_procmap_ioctl+0x406/0xce0 fs/proc/task_mmu.c:748
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:598 [inline]
__se_sys_ioctl+0xf9/0x170 fs/ioctl.c:584
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
other info that might help us debug this:
Possible unsafe locking scenario:
CPU0 CPU1
---- ----
rlock(vm_lock);
lock(&mm->mmap_lock);
lock(vm_lock);
rlock(&mm->mmap_lock);
*** DEADLOCK ***
2 locks held by syz.4.1737/14243:
#0: ffff888020b36e48 (vm_lock){++++}-{0:0}, at: lock_next_vma+0x146/0xdc0 mm/mmap_lock.c:220
#1: ffff888020b36a88 (vm_lock){++++}-{0:0}, at: lock_next_vma+0x146/0xdc0 mm/mmap_lock.c:220
stack backtrace:
CPU: 1 UID: 0 PID: 14243 Comm: syz.4.1737 Not tainted 6.16.0-rc4-next-20250704-syzkaller #0 PREEMPT(full)
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
Call Trace:
<TASK>
dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
print_circular_bug+0x2ee/0x310 kernel/locking/lockdep.c:2046
check_noncircular+0x134/0x160 kernel/locking/lockdep.c:2178
check_prev_add kernel/locking/lockdep.c:3168 [inline]
check_prevs_add kernel/locking/lockdep.c:3287 [inline]
validate_chain+0xb9b/0x2140 kernel/locking/lockdep.c:3911
__lock_acquire+0xab9/0xd20 kernel/locking/lockdep.c:5240
lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5871
down_read_killable+0x50/0x350 kernel/locking/rwsem.c:1547
mmap_read_lock_killable include/linux/mmap_lock.h:432 [inline]
lock_vma_under_mmap_lock mm/mmap_lock.c:189 [inline]
lock_next_vma+0x802/0xdc0 mm/mmap_lock.c:264
get_next_vma fs/proc/task_mmu.c:182 [inline]
query_vma_find_by_addr fs/proc/task_mmu.c:516 [inline]
query_matching_vma+0x28f/0x4b0 fs/proc/task_mmu.c:545
do_procmap_query fs/proc/task_mmu.c:637 [inline]
procfs_procmap_ioctl+0x406/0xce0 fs/proc/task_mmu.c:748
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:598 [inline]
__se_sys_ioctl+0xf9/0x170 fs/ioctl.c:584
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f79bc78e929
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f79bd5c8038 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007f79bc9b6080 RCX: 00007f79bc78e929
RDX: 0000200000000180 RSI: 00000000c0686611 RDI: 0000000000000006
RBP: 00007f79bc810b39 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007f79bc9b6080 R15: 00007ffcdd82ae18
</TASK>
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@googlegroups.com.
syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title
If you want to overwrite report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)
If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report
If you want to undo deduplication, reply with:
#syz undup
* Re: [syzbot] [mm?] possible deadlock in lock_next_vma
2025-07-12 5:57 [syzbot] [mm?] possible deadlock in lock_next_vma syzbot
@ 2025-07-14 16:38 ` syzbot
2025-07-15 9:33 ` Lorenzo Stoakes
1 sibling, 0 replies; 4+ messages in thread
From: syzbot @ 2025-07-14 16:38 UTC (permalink / raw)
To: Liam.Howlett, akpm, liam.howlett, linux-kernel, linux-mm,
lorenzo.stoakes, shakeel.butt, surenb, syzkaller-bugs, vbabka
syzbot has found a reproducer for the following issue on:
HEAD commit: 0be23810e32e Add linux-next specific files for 20250714
git tree: linux-next
console output: https://syzkaller.appspot.com/x/log.txt?x=15cfb0f0580000
kernel config: https://syzkaller.appspot.com/x/.config?x=be9e2082003f81ff
dashboard link: https://syzkaller.appspot.com/bug?extid=159a3ef1894076a6a6e9
compiler: Debian clang version 20.1.7 (++20250616065708+6146a88f6049-1~exp1~20250616065826.132), Debian LLD 20.1.7
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=1003b18c580000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=11437d82580000
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/69e6cc49d526/disk-0be23810.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/71d1bab88eaa/vmlinux-0be23810.xz
kernel image: https://storage.googleapis.com/syzbot-assets/5a516dc7bb0d/bzImage-0be23810.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+159a3ef1894076a6a6e9@syzkaller.appspotmail.com
======================================================
WARNING: possible circular locking dependency detected
6.16.0-rc6-next-20250714-syzkaller #0 Not tainted
------------------------------------------------------
syz.2.103/6308 is trying to acquire lock:
ffff88807d33bde0 (&mm->mmap_lock){++++}-{4:4}, at: mmap_read_lock_killable include/linux/mmap_lock.h:432 [inline]
ffff88807d33bde0 (&mm->mmap_lock){++++}-{4:4}, at: lock_vma_under_mmap_lock mm/mmap_lock.c:189 [inline]
ffff88807d33bde0 (&mm->mmap_lock){++++}-{4:4}, at: lock_next_vma+0x802/0xdc0 mm/mmap_lock.c:264
but task is already holding lock:
ffff8880338c6948 (vm_lock){++++}-{0:0}, at: lock_next_vma+0x146/0xdc0 mm/mmap_lock.c:220
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #1 (vm_lock){++++}-{0:0}:
lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5871
__vma_enter_locked+0x182/0x380 mm/mmap_lock.c:63
__vma_start_write+0x1e/0x120 mm/mmap_lock.c:87
vma_start_write include/linux/mmap_lock.h:267 [inline]
mprotect_fixup+0x571/0x9b0 mm/mprotect.c:670
setup_arg_pages+0x53a/0xaa0 fs/exec.c:670
load_elf_binary+0xb9f/0x2730 fs/binfmt_elf.c:1013
search_binary_handler fs/exec.c:1670 [inline]
exec_binprm fs/exec.c:1702 [inline]
bprm_execve+0x999/0x1450 fs/exec.c:1754
kernel_execve+0x8f0/0x9f0 fs/exec.c:1920
try_to_run_init_process+0x13/0x60 init/main.c:1397
kernel_init+0xad/0x1d0 init/main.c:1525
ret_from_fork+0x3f9/0x770 arch/x86/kernel/process.c:148
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
-> #0 (&mm->mmap_lock){++++}-{4:4}:
check_prev_add kernel/locking/lockdep.c:3168 [inline]
check_prevs_add kernel/locking/lockdep.c:3287 [inline]
validate_chain+0xb9b/0x2140 kernel/locking/lockdep.c:3911
__lock_acquire+0xab9/0xd20 kernel/locking/lockdep.c:5240
lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5871
down_read_killable+0x50/0x350 kernel/locking/rwsem.c:1562
mmap_read_lock_killable include/linux/mmap_lock.h:432 [inline]
lock_vma_under_mmap_lock mm/mmap_lock.c:189 [inline]
lock_next_vma+0x802/0xdc0 mm/mmap_lock.c:264
get_next_vma fs/proc/task_mmu.c:182 [inline]
query_vma_find_by_addr fs/proc/task_mmu.c:512 [inline]
query_matching_vma+0x319/0x5c0 fs/proc/task_mmu.c:544
do_procmap_query fs/proc/task_mmu.c:629 [inline]
procfs_procmap_ioctl+0x3f9/0xd50 fs/proc/task_mmu.c:747
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:598 [inline]
__se_sys_ioctl+0xfc/0x170 fs/ioctl.c:584
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
other info that might help us debug this:
Possible unsafe locking scenario:
CPU0 CPU1
---- ----
rlock(vm_lock);
lock(&mm->mmap_lock);
lock(vm_lock);
rlock(&mm->mmap_lock);
*** DEADLOCK ***
1 lock held by syz.2.103/6308:
#0: ffff8880338c6948 (vm_lock){++++}-{0:0}, at: lock_next_vma+0x146/0xdc0 mm/mmap_lock.c:220
stack backtrace:
CPU: 0 UID: 0 PID: 6308 Comm: syz.2.103 Not tainted 6.16.0-rc6-next-20250714-syzkaller #0 PREEMPT(full)
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
Call Trace:
<TASK>
dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
print_circular_bug+0x2ee/0x310 kernel/locking/lockdep.c:2046
check_noncircular+0x134/0x160 kernel/locking/lockdep.c:2178
check_prev_add kernel/locking/lockdep.c:3168 [inline]
check_prevs_add kernel/locking/lockdep.c:3287 [inline]
validate_chain+0xb9b/0x2140 kernel/locking/lockdep.c:3911
__lock_acquire+0xab9/0xd20 kernel/locking/lockdep.c:5240
lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5871
down_read_killable+0x50/0x350 kernel/locking/rwsem.c:1562
mmap_read_lock_killable include/linux/mmap_lock.h:432 [inline]
lock_vma_under_mmap_lock mm/mmap_lock.c:189 [inline]
lock_next_vma+0x802/0xdc0 mm/mmap_lock.c:264
get_next_vma fs/proc/task_mmu.c:182 [inline]
query_vma_find_by_addr fs/proc/task_mmu.c:512 [inline]
query_matching_vma+0x319/0x5c0 fs/proc/task_mmu.c:544
do_procmap_query fs/proc/task_mmu.c:629 [inline]
procfs_procmap_ioctl+0x3f9/0xd50 fs/proc/task_mmu.c:747
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:598 [inline]
__se_sys_ioctl+0xfc/0x170 fs/ioctl.c:584
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fa51ab8e929
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fa51b99b038 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007fa51adb5fa0 RCX: 00007fa51ab8e929
RDX: 0000200000000180 RSI: 00000000c0686611 RDI: 0000000000000003
RBP: 00007fa51ac10b39 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007fa51adb5fa0 R15: 00007ffdecbd3a88
</TASK>
---
If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.
* Re: [syzbot] [mm?] possible deadlock in lock_next_vma
2025-07-12 5:57 [syzbot] [mm?] possible deadlock in lock_next_vma syzbot
2025-07-14 16:38 ` syzbot
@ 2025-07-15 9:33 ` Lorenzo Stoakes
1 sibling, 0 replies; 4+ messages in thread
From: Lorenzo Stoakes @ 2025-07-15 9:33 UTC (permalink / raw)
To: syzbot
Cc: Liam.Howlett, akpm, linux-kernel, linux-mm, shakeel.butt, surenb,
syzkaller-bugs, vbabka
So (as others have also mentioned elsewhere) this all seems to be a product of
ioctl()s not being synchronised at all: when proc_maps_open() is called, we set
up the struct proc_maps_private structure once for that open file.
Then in procfs_procmap_ioctl():
struct seq_file *seq = file->private_data;
struct proc_maps_private *priv = seq->private;
And that'll be the same proc_maps_private for all threads running ioctl()s on
the fd...
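To make the sharing concrete, here's a minimal userspace sketch (my own
illustration, not the syzbot reproducer) of two threads issuing PROCMAP_QUERY
on one shared fd - only the struct procmap_query uAPI from <linux/fs.h> is
assumed, the threading scaffolding is mine:

	/* Two threads racing PROCMAP_QUERY on one shared /proc fd. Build: gcc -pthread */
	#include <fcntl.h>
	#include <pthread.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <linux/fs.h>	/* struct procmap_query, PROCMAP_QUERY */

	static int maps_fd;

	static void *query_loop(void *arg)
	{
		struct procmap_query q;

		for (int i = 0; i < 100000; i++) {
			memset(&q, 0, sizeof(q));
			q.size = sizeof(q);	/* required by the ioctl */
			q.query_flags = PROCMAP_QUERY_COVERING_OR_NEXT_VMA;
			q.query_addr = 0;	/* walk from the bottom of the address space */
			/* both threads end up using the same proc_maps_private */
			ioctl(maps_fd, PROCMAP_QUERY, &q);
		}
		return NULL;
	}

	int main(void)
	{
		pthread_t t1, t2;

		maps_fd = open("/proc/self/maps", O_RDONLY);
		pthread_create(&t1, NULL, query_loop, NULL);
		pthread_create(&t2, NULL, query_loop, NULL);
		pthread_join(t1, NULL);
		pthread_join(t2, NULL);
		return 0;
	}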
So both of these fields:
struct proc_maps_private {
...
bool mmap_locked;
struct vm_area_struct *locked_vma;
...
};
are problematic here - they implicitly assume that only one operation is in
flight per fd at a time... and ioctl()s make this not the case.
So you'll get the imbalanced VMA locking you're seeing here, as well as NULL
pointer derefs, in particular because of:
static void unlock_vma(struct proc_maps_private *priv)
{
if (priv->locked_vma) {
vma_end_read(priv->locked_vma);
priv->locked_vma = NULL;
}
}
This will just race on setting the field to NULL; then something else touches
it and kaboom.
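Concretely, the interleaving looks to be something like this (my
reconstruction, consistent with the NULL vm_mm deref in the stack below):

	/* thread A                          thread B (same fd, same priv)      */
	unlock_vma()
	  if (priv->locked_vma)              /* non-NULL, proceed */
	                                     unlock_vma()
	                                       vma_end_read(priv->locked_vma);
	                                       priv->locked_vma = NULL;
	  vma_end_read(priv->locked_vma);    /* reloads NULL, vma->vm_mm deref -> boom */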
A stack I observed locally (the repro triggers extremely reliably) was:
[access NULL vma->vm_mm -> boom]
vma_refcount_put()
unlock_vma()
get_next_vma()
query_vma_find_by_addr()
query_matching_vma()
do_procmap_query()
It seemed to be racing with query_vma_teardown().
So I think we need to either:
a. Acquire a lock before invoking do_procmap_query() (rough sketch of this below), or
b. Find some other means of storing per-ioctl state.
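Something like the below is a minimal sketch of option (a) - to be clear, the
query_lock field, where it's initialised, and the exact do_procmap_query()
signature are my assumptions for illustration, not the actual fix:

	struct proc_maps_private {
		...
		struct mutex query_lock;	/* hypothetical: serialises per-fd ioctl state */
		bool mmap_locked;
		struct vm_area_struct *locked_vma;
		...
	};

	/* In proc_maps_open(), after priv is allocated: */
		mutex_init(&priv->query_lock);

	static long procfs_procmap_ioctl(struct file *file, unsigned int cmd,
					 unsigned long arg)
	{
		struct seq_file *seq = file->private_data;
		struct proc_maps_private *priv = seq->private;
		long ret;

		switch (cmd) {
		case PROCMAP_QUERY:
			/* serialise all queries on this fd so priv state can't race */
			if (mutex_lock_killable(&priv->query_lock))
				return -EINTR;
			ret = do_procmap_query(priv, (void __user *)arg);
			mutex_unlock(&priv->query_lock);
			return ret;
		default:
			return -ENOIOCTLCMD;
		}
	}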
As a result, the problem reported here AFAICT relates only to
"fs/proc/task_mmu: execute PROCMAP_QUERY ioctl under per-vma locks".
Any issues that might/might not relate to the previous commit will have to be
considered separately :P
On Fri, Jul 11, 2025 at 10:57:31PM -0700, syzbot wrote:
> Hello,
>
> syzbot found the following issue on:
>
> HEAD commit: 26ffb3d6f02c Add linux-next specific files for 20250704
>
> [snip - full report quoted verbatim; see the first message in this thread]
* Re: [syzbot] [mm?] possible deadlock in lock_next_vma
[not found] <20250717023444.2281-1-hdanton@sina.com>
@ 2025-07-17 4:53 ` syzbot
0 siblings, 0 replies; 4+ messages in thread
From: syzbot @ 2025-07-17 4:53 UTC (permalink / raw)
To: hdanton, linux-kernel, linux-mm, lorenzo.stoakes, surenb,
syzkaller-bugs
Hello,
syzbot has tested the proposed patch and the reproducer did not trigger any issue:
Reported-by: syzbot+159a3ef1894076a6a6e9@syzkaller.appspotmail.com
Tested-by: syzbot+159a3ef1894076a6a6e9@syzkaller.appspotmail.com
Tested on:
commit: 760b462b mm: add zblock allocator
git tree: git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-new
console output: https://syzkaller.appspot.com/x/log.txt?x=106c2d8c580000
kernel config: https://syzkaller.appspot.com/x/.config?x=c39d2ebefebaec5f
dashboard link: https://syzkaller.appspot.com/bug?extid=159a3ef1894076a6a6e9
compiler: Debian clang version 20.1.7 (++20250616065708+6146a88f6049-1~exp1~20250616065826.132), Debian LLD 20.1.7
Note: no patches were applied.
Note: testing is done by a robot and is best-effort only.
end of thread, other threads:[~2025-07-17 4:53 UTC | newest]
Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
2025-07-12 5:57 [syzbot] [mm?] possible deadlock in lock_next_vma syzbot
2025-07-14 16:38 ` syzbot
2025-07-15 9:33 ` Lorenzo Stoakes
[not found] <20250717023444.2281-1-hdanton@sina.com>
2025-07-17 4:53 ` syzbot