public inbox for linux-kernel@vger.kernel.org
* [syzbot] [kvm?] INFO: task hung in kvm_swap_active_memslots (2)
@ 2025-11-17 10:44 syzbot
  2025-11-17 11:06 ` Alexander Potapenko
  2026-04-06 19:53 ` syzbot
  0 siblings, 2 replies; 4+ messages in thread
From: syzbot @ 2025-11-17 10:44 UTC (permalink / raw)
  To: kvm, linux-kernel, pbonzini, syzkaller-bugs

Hello,

syzbot found the following issue on:

HEAD commit:    3a8660878839 Linux 6.18-rc1
git tree:       upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=160a05e2580000
kernel config:  https://syzkaller.appspot.com/x/.config?x=e854293d7f44b5a5
dashboard link: https://syzkaller.appspot.com/bug?extid=5c566b850d6ab6f0427a
compiler:       gcc (Debian 12.2.0-14+deb12u1) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/87a66406ce1a/disk-3a866087.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/7c3300da5269/vmlinux-3a866087.xz
kernel image: https://storage.googleapis.com/syzbot-assets/b4fcefdaf57b/bzImage-3a866087.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+5c566b850d6ab6f0427a@syzkaller.appspotmail.com

INFO: task syz.2.1185:11790 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.2.1185      state:D stack:25976 pid:11790 tgid:11789 ppid:5836   task_flags:0x400140 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5325 [inline]
 __schedule+0x1190/0x5de0 kernel/sched/core.c:6929
 __schedule_loop kernel/sched/core.c:7011 [inline]
 schedule+0xe7/0x3a0 kernel/sched/core.c:7026
 kvm_swap_active_memslots+0x2ea/0x7d0 virt/kvm/kvm_main.c:1642
 kvm_activate_memslot virt/kvm/kvm_main.c:1786 [inline]
 kvm_create_memslot virt/kvm/kvm_main.c:1852 [inline]
 kvm_set_memslot+0xd3b/0x1380 virt/kvm/kvm_main.c:1964
 kvm_set_memory_region+0xe53/0x1610 virt/kvm/kvm_main.c:2120
 kvm_set_internal_memslot+0x9f/0xe0 virt/kvm/kvm_main.c:2143
 __x86_set_memory_region+0x2f6/0x740 arch/x86/kvm/x86.c:13242
 kvm_alloc_apic_access_page+0xc5/0x140 arch/x86/kvm/lapic.c:2788
 vmx_vcpu_create+0x503/0xbd0 arch/x86/kvm/vmx/vmx.c:7599
 kvm_arch_vcpu_create+0x688/0xb20 arch/x86/kvm/x86.c:12706
 kvm_vm_ioctl_create_vcpu virt/kvm/kvm_main.c:4207 [inline]
 kvm_vm_ioctl+0xfec/0x3fd0 virt/kvm/kvm_main.c:5158
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:597 [inline]
 __se_sys_ioctl fs/ioctl.c:583 [inline]
 __x64_sys_ioctl+0x18e/0x210 fs/ioctl.c:583
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xcd/0xfa0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f3c9978eec9
RSP: 002b:00007f3c9a676038 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007f3c999e5fa0 RCX: 00007f3c9978eec9
RDX: 0000000000000000 RSI: 000000000000ae41 RDI: 0000000000000003
RBP: 00007f3c99811f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f3c999e6038 R14: 00007f3c999e5fa0 R15: 00007ffda33577e8
 </TASK>

Showing all locks held in the system:
1 lock held by khungtaskd/31:
 #0: ffffffff8e3c42e0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8e3c42e0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:867 [inline]
 #0: ffffffff8e3c42e0 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x36/0x1c0 kernel/locking/lockdep.c:6775
2 locks held by getty/8058:
 #0: ffff88803440d0a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x24/0x80 drivers/tty/tty_ldisc.c:243
 #1: ffffc9000e0cd2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x41b/0x14f0 drivers/tty/n_tty.c:2222
2 locks held by syz.2.1185/11790:
 #0: ffff888032d640a8 (&kvm->slots_lock){+.+.}-{4:4}, at: class_mutex_constructor include/linux/mutex.h:228 [inline]
 #0: ffff888032d640a8 (&kvm->slots_lock){+.+.}-{4:4}, at: kvm_alloc_apic_access_page+0x27/0x140 arch/x86/kvm/lapic.c:2782
 #1: ffff888032d64138 (&kvm->slots_arch_lock){+.+.}-{4:4}, at: kvm_set_memslot+0x34/0x1380 virt/kvm/kvm_main.c:1915
4 locks held by kworker/u8:44/13722:
 #0: ffff88801ba9f148 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x12a2/0x1b70 kernel/workqueue.c:3238
 #1: ffffc9000b6cfd00 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x929/0x1b70 kernel/workqueue.c:3239
 #2: ffffffff900e8630 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0xad/0x8b0 net/core/net_namespace.c:669
 #3: ffffffff8e3cf878 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock+0x1a3/0x3c0 kernel/rcu/tree_exp.h:343
4 locks held by syz-executor/14037:
 #0: ffff888056654dc8 (&hdev->req_lock){+.+.}-{4:4}, at: hci_dev_do_close+0x26/0x90 net/bluetooth/hci_core.c:499
 #1: ffff8880566540b8 (&hdev->lock){+.+.}-{4:4}, at: hci_dev_close_sync+0x3ae/0x11d0 net/bluetooth/hci_sync.c:5291
 #2: ffffffff90371248 (hci_cb_list_lock){+.+.}-{4:4}, at: hci_disconn_cfm include/net/bluetooth/hci_core.h:2118 [inline]
 #2: ffffffff90371248 (hci_cb_list_lock){+.+.}-{4:4}, at: hci_conn_hash_flush+0xbb/0x260 net/bluetooth/hci_conn.c:2602
 #3: ffff88803179c338 (&conn->lock#2){+.+.}-{4:4}, at: l2cap_conn_del+0x80/0x730 net/bluetooth/l2cap_core.c:1762
1 lock held by syz.0.1905/15421:
 #0: ffffffff900e8630 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x2d6/0x690 net/core/net_namespace.c:576
1 lock held by syz.0.1905/15423:
 #0: ffffffff900e8630 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x2d6/0x690 net/core/net_namespace.c:576
2 locks held by syz.1.1908/15436:
 #0: ffff88807a444dc8 (&hdev->req_lock){+.+.}-{4:4}, at: hci_dev_do_close+0x26/0x90 net/bluetooth/hci_core.c:499
 #1: ffff88807a4440b8 (&hdev->lock){+.+.}-{4:4}, at: hci_dev_close_sync+0x3ae/0x11d0 net/bluetooth/hci_sync.c:5291

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 31 Comm: khungtaskd Not tainted syzkaller #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x27b/0x390 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x29c/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:332 [inline]
 watchdog+0xf3f/0x1170 kernel/hung_task.c:495
 kthread+0x3c5/0x780 kernel/kthread.c:463
 ret_from_fork+0x675/0x7d0 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup


* Re: [syzbot] [kvm?] INFO: task hung in kvm_swap_active_memslots (2)
  2025-11-17 10:44 [syzbot] [kvm?] INFO: task hung in kvm_swap_active_memslots (2) syzbot
@ 2025-11-17 11:06 ` Alexander Potapenko
  2025-11-17 16:54   ` Sean Christopherson
  2026-04-06 19:53 ` syzbot
  1 sibling, 1 reply; 4+ messages in thread
From: Alexander Potapenko @ 2025-11-17 11:06 UTC (permalink / raw)
  To: syzbot; +Cc: kvm, linux-kernel, pbonzini, syzkaller-bugs

On Mon, Nov 17, 2025 at 11:44 AM syzbot
<syzbot+5c566b850d6ab6f0427a@syzkaller.appspotmail.com> wrote:
> [...]
> 2 locks held by syz.2.1185/11790:
>  #0: ffff888032d640a8 (&kvm->slots_lock){+.+.}-{4:4}, at: class_mutex_constructor include/linux/mutex.h:228 [inline]
>  #0: ffff888032d640a8 (&kvm->slots_lock){+.+.}-{4:4}, at: kvm_alloc_apic_access_page+0x27/0x140 arch/x86/kvm/lapic.c:2782
>  #1: ffff888032d64138 (&kvm->slots_arch_lock){+.+.}-{4:4}, at: kvm_set_memslot+0x34/0x1380 virt/kvm/kvm_main.c:1915

It's worth noting that, in addition to upstream, this bug has been
reported by syzkaller on several other kernel branches.
In each report, the task was shown to be holding the same pair of
locks: &kvm->slots_arch_lock and &kvm->slots_lock.


* Re: [syzbot] [kvm?] INFO: task hung in kvm_swap_active_memslots (2)
  2025-11-17 11:06 ` Alexander Potapenko
@ 2025-11-17 16:54   ` Sean Christopherson
  0 siblings, 0 replies; 4+ messages in thread
From: Sean Christopherson @ 2025-11-17 16:54 UTC (permalink / raw)
  To: Alexander Potapenko; +Cc: syzbot, kvm, linux-kernel, pbonzini, syzkaller-bugs

On Mon, Nov 17, 2025, Alexander Potapenko wrote:
> On Mon, Nov 17, 2025 at 11:44 AM syzbot
> <syzbot+5c566b850d6ab6f0427a@syzkaller.appspotmail.com> wrote:
> > [...]
> > 2 locks held by syz.2.1185/11790:
> >  #0: ffff888032d640a8 (&kvm->slots_lock){+.+.}-{4:4}, at: class_mutex_constructor include/linux/mutex.h:228 [inline]
> >  #0: ffff888032d640a8 (&kvm->slots_lock){+.+.}-{4:4}, at: kvm_alloc_apic_access_page+0x27/0x140 arch/x86/kvm/lapic.c:2782
> >  #1: ffff888032d64138 (&kvm->slots_arch_lock){+.+.}-{4:4}, at: kvm_set_memslot+0x34/0x1380 virt/kvm/kvm_main.c:1915
> 
> It's worth noting that in addition to upstream this bug has been
> reported by syzkaller on several other kernel branches.
> In each report, the task was shown to be holding the same pair of
> locks: &kvm->slots_arch_lock and &kvm->slots_lock.

Ya, though that's not terribly interesting because kvm_swap_active_memslots()
holds those locks, and the issue is specifically that kvm_swap_active_memslots()
is waiting on kvm->mn_active_invalidate_count to go to zero.

Paolo even called out this possibility in commit 52ac8b358b0c ("KVM: Block memslot
updates across range_start() and range_end()"):

 : Losing the rwsem fairness does theoretically allow MMU notifiers to
 : block install_new_memslots forever.  Note that mm/mmu_notifier.c's own
 : retry scheme in mmu_interval_read_begin also uses wait/wake_up
 : and is likewise not fair.

In every reproducer, the "VMM" process is either getting thrashed by reclaim, or
the process itself is generating a constant stream of mmu_notifier invalidations.

I don't see an easy, or even decent, solution for this.  Forcing new invalidations
to wait isn't really an option because in-flight invalidations may be sleepable
(and KVM has zero visibility into the behavior of the invalidator), while new
invalidations may not be sleepable.

And on the KVM side, bailing from kvm_activate_memslot() on a pending signal
isn't an option, because kvm_activate_memslot() must not fail.  Hmm, at least,
not without terminating the VM.  I guess maybe that's an option?  Add a timeout
(maybe with a module param) to the kvm_swap_active_memslots() loop, and then WARN
and kill the VM if the timeout is hit.


* Re: [syzbot] [kvm?] INFO: task hung in kvm_swap_active_memslots (2)
  2025-11-17 10:44 [syzbot] [kvm?] INFO: task hung in kvm_swap_active_memslots (2) syzbot
  2025-11-17 11:06 ` Alexander Potapenko
@ 2026-04-06 19:53 ` syzbot
  1 sibling, 0 replies; 4+ messages in thread
From: syzbot @ 2026-04-06 19:53 UTC (permalink / raw)
  To: glider, kvm, linux-kernel, pbonzini, seanjc, syzkaller-bugs

syzbot has found a reproducer for the following issue on:

HEAD commit:    591cd656a1bf Linux 7.0-rc7
git tree:       upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=1480b3da580000
kernel config:  https://syzkaller.appspot.com/x/.config?x=64e78d99d9bf8b4c
dashboard link: https://syzkaller.appspot.com/bug?extid=5c566b850d6ab6f0427a
compiler:       gcc (Debian 14.2.0-19) 14.2.0, GNU ld (GNU Binutils for Debian) 2.44
syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=15a9a302580000
C reproducer:   https://syzkaller.appspot.com/x/repro.c?x=131136ba580000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/bf0f027c93ec/disk-591cd656.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/60d829be17f4/vmlinux-591cd656.xz
kernel image: https://storage.googleapis.com/syzbot-assets/673c009c7550/bzImage-591cd656.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+5c566b850d6ab6f0427a@syzkaller.appspotmail.com

INFO: task syz.0.17:6023 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.0.17        state:D stack:25752 pid:6023  tgid:6023  ppid:5958   task_flags:0x400140 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5298 [inline]
 __schedule+0xfee/0x6120 kernel/sched/core.c:6911
 __schedule_loop kernel/sched/core.c:6993 [inline]
 schedule+0xdd/0x390 kernel/sched/core.c:7008
 kvm_swap_active_memslots+0x2e0/0x7c0 virt/kvm/kvm_main.c:1627
 kvm_activate_memslot virt/kvm/kvm_main.c:1786 [inline]
 kvm_create_memslot virt/kvm/kvm_main.c:1852 [inline]
 kvm_set_memslot+0xbde/0x1740 virt/kvm/kvm_main.c:1964
 kvm_set_memory_region+0xe1c/0x1570 virt/kvm/kvm_main.c:2120
 kvm_set_internal_memslot+0x9f/0xf0 virt/kvm/kvm_main.c:2143
 __x86_set_memory_region+0x2f6/0x730 arch/x86/kvm/x86.c:13355
 kvm_alloc_apic_access_page+0xc5/0x140 arch/x86/kvm/lapic.c:2861
 vmx_vcpu_create+0x79b/0xb90 arch/x86/kvm/vmx/vmx.c:7830
 kvm_arch_vcpu_create+0x683/0xac0 arch/x86/kvm/x86.c:12803
 kvm_vm_ioctl_create_vcpu virt/kvm/kvm_main.c:4207 [inline]
 kvm_vm_ioctl+0x756/0x4080 virt/kvm/kvm_main.c:5165
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:597 [inline]
 __se_sys_ioctl fs/ioctl.c:583 [inline]
 __x64_sys_ioctl+0x18e/0x210 fs/ioctl.c:583
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0x106/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fbd53f9c819
RSP: 002b:00007fffae0c01c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007fbd54215fa0 RCX: 00007fbd53f9c819
RDX: 0000000000000004 RSI: 000000000000ae41 RDI: 0000000000000003
RBP: 00007fbd54032c91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fbd54215fac R14: 00007fbd54215fa0 R15: 00007fbd54215fa0
 </TASK>

Showing all locks held in the system:
1 lock held by khungtaskd/31:
 #0: ffffffff8e7e7760 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:312 [inline]
 #0: ffffffff8e7e7760 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:850 [inline]
 #0: ffffffff8e7e7760 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x3d/0x184 kernel/locking/lockdep.c:6775
2 locks held by getty/5584:
 #0: ffff8880394da0a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x24/0x80 drivers/tty/tty_ldisc.c:243
 #1: ffffc9000332b2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x419/0x1500 drivers/tty/n_tty.c:2211
2 locks held by syz.0.17/6023:
 #0: ffff8880785f00a8 (&kvm->slots_lock){+.+.}-{4:4}, at: class_mutex_constructor include/linux/mutex.h:253 [inline]
 #0: ffff8880785f00a8 (&kvm->slots_lock){+.+.}-{4:4}, at: kvm_alloc_apic_access_page+0x27/0x140 arch/x86/kvm/lapic.c:2855
 #1: ffff8880785f0138 (&kvm->slots_arch_lock){+.+.}-{4:4}, at: kvm_set_memslot+0x34/0x1740 virt/kvm/kvm_main.c:1915
2 locks held by syz.1.18/6062:
 #0: ffff88807cae00a8 (&kvm->slots_lock){+.+.}-{4:4}, at: class_mutex_constructor include/linux/mutex.h:253 [inline]
 #0: ffff88807cae00a8 (&kvm->slots_lock){+.+.}-{4:4}, at: kvm_alloc_apic_access_page+0x27/0x140 arch/x86/kvm/lapic.c:2855
 #1: ffff88807cae0138 (&kvm->slots_arch_lock){+.+.}-{4:4}, at: kvm_set_memslot+0x34/0x1740 virt/kvm/kvm_main.c:1915
2 locks held by syz.2.19/6085:
 #0: ffff8880790a00a8 (&kvm->slots_lock){+.+.}-{4:4}, at: class_mutex_constructor include/linux/mutex.h:253 [inline]
 #0: ffff8880790a00a8 (&kvm->slots_lock){+.+.}-{4:4}, at: kvm_alloc_apic_access_page+0x27/0x140 arch/x86/kvm/lapic.c:2855
 #1: ffff8880790a0138 (&kvm->slots_arch_lock){+.+.}-{4:4}, at: kvm_set_memslot+0x34/0x1740 virt/kvm/kvm_main.c:1915
2 locks held by syz.3.20/6109:
 #0: ffff88807b3340a8 (&kvm->slots_lock){+.+.}-{4:4}, at: class_mutex_constructor include/linux/mutex.h:253 [inline]
 #0: ffff88807b3340a8 (&kvm->slots_lock){+.+.}-{4:4}, at: kvm_alloc_apic_access_page+0x27/0x140 arch/x86/kvm/lapic.c:2855
 #1: ffff88807b334138 (&kvm->slots_arch_lock){+.+.}-{4:4}, at: kvm_set_memslot+0x34/0x1740 virt/kvm/kvm_main.c:1915
2 locks held by syz.4.21/6138:
 #0: ffff8880343100a8 (&kvm->slots_lock){+.+.}-{4:4}, at: class_mutex_constructor include/linux/mutex.h:253 [inline]
 #0: ffff8880343100a8 (&kvm->slots_lock){+.+.}-{4:4}, at: kvm_alloc_apic_access_page+0x27/0x140 arch/x86/kvm/lapic.c:2855
 #1: ffff888034310138 (&kvm->slots_arch_lock){+.+.}-{4:4}, at: kvm_set_memslot+0x34/0x1740 virt/kvm/kvm_main.c:1915
2 locks held by syz.5.22/6172:
 #0: ffff88802b0e40a8 (&kvm->slots_lock){+.+.}-{4:4}, at: class_mutex_constructor include/linux/mutex.h:253 [inline]
 #0: ffff88802b0e40a8 (&kvm->slots_lock){+.+.}-{4:4}, at: kvm_alloc_apic_access_page+0x27/0x140 arch/x86/kvm/lapic.c:2855
 #1: ffff88802b0e4138 (&kvm->slots_arch_lock){+.+.}-{4:4}, at: kvm_set_memslot+0x34/0x1740 virt/kvm/kvm_main.c:1915
2 locks held by syz.6.23/6201:
 #0: ffff88805ead00a8 (&kvm->slots_lock){+.+.}-{4:4}, at: class_mutex_constructor include/linux/mutex.h:253 [inline]
 #0: ffff88805ead00a8 (&kvm->slots_lock){+.+.}-{4:4}, at: kvm_alloc_apic_access_page+0x27/0x140 arch/x86/kvm/lapic.c:2855
 #1: ffff88805ead0138 (&kvm->slots_arch_lock){+.+.}-{4:4}, at: kvm_set_memslot+0x34/0x1740 virt/kvm/kvm_main.c:1915
2 locks held by syz.7.24/6230:
 #0: ffff8880582a40a8 (&kvm->slots_lock){+.+.}-{4:4}, at: class_mutex_constructor include/linux/mutex.h:253 [inline]
 #0: ffff8880582a40a8 (&kvm->slots_lock){+.+.}-{4:4}, at: kvm_alloc_apic_access_page+0x27/0x140 arch/x86/kvm/lapic.c:2855
 #1: ffff8880582a4138 (&kvm->slots_arch_lock){+.+.}-{4:4}, at: kvm_set_memslot+0x34/0x1740 virt/kvm/kvm_main.c:1915
2 locks held by syz.8.25/6259:
 #0: ffff8880556000a8 (&kvm->slots_lock){+.+.}-{4:4}, at: class_mutex_constructor include/linux/mutex.h:253 [inline]
 #0: ffff8880556000a8 (&kvm->slots_lock){+.+.}-{4:4}, at: kvm_alloc_apic_access_page+0x27/0x140 arch/x86/kvm/lapic.c:2855
 #1: ffff888055600138 (&kvm->slots_arch_lock){+.+.}-{4:4}, at: kvm_set_memslot+0x34/0x1740 virt/kvm/kvm_main.c:1915

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 31 Comm: khungtaskd Not tainted syzkaller #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/18/2026
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x100/0x190 lib/dump_stack.c:120
 nmi_cpu_backtrace.cold+0x12d/0x151 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x1d7/0x230 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:161 [inline]
 __sys_info lib/sys_info.c:157 [inline]
 sys_info+0x141/0x190 lib/sys_info.c:165
 check_hung_uninterruptible_tasks kernel/hung_task.c:346 [inline]
 watchdog+0xd25/0x1050 kernel/hung_task.c:515
 kthread+0x370/0x450 kernel/kthread.c:436
 ret_from_fork+0x754/0xd80 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 175 Comm: kworker/u8:6 Not tainted syzkaller #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/18/2026
Workqueue: events_unbound nsim_dev_trap_report_work
RIP: 0010:chacha_block_generic+0x102/0x360 lib/crypto/chacha-block-generic.c:80
Code: df 48 89 44 24 38 49 8b 45 08 48 89 44 24 40 49 8b 45 10 48 89 44 24 48 49 8b 45 18 48 89 44 24 50 49 8b 45 20 48 89 44 24 58 <49> 8b 45 28 48 89 44 24 60 49 8b 45 30 48 89 44 24 68 49 8b 45 38
RSP: 0018:ffffc90003067820 EFLAGS: 00000046
RAX: 21eb668f03ac7932 RBX: ffffc90003067940 RCX: 0000000000000001
RDX: 0000000000000000 RSI: 0000000000000014 RDI: ffffc90003067a70
RBP: ffffc90003067a70 R08: 0000000000000001 R09: 0000000000000000
R10: ffffc90003067aa0 R11: 3a60345ebc2c5805 R12: dffffc0000000000
R13: ffffc90003067a70 R14: 0000000000000000 R15: dffffc0000000000
FS:  0000000000000000(0000) GS:ffff888124340000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000563c2aaa9660 CR3: 000000000e598000 CR4: 00000000003526f0
Call Trace:
 <TASK>
 chacha20_block include/crypto/chacha.h:45 [inline]
 crng_fast_key_erasure+0x1a5/0x260 drivers/char/random.c:319
 crng_make_state+0x1c2/0x6c0 drivers/char/random.c:385
 _get_random_bytes+0x11c/0x220 drivers/char/random.c:399
 nsim_dev_trap_skb_build drivers/net/netdevsim/dev.c:846 [inline]
 nsim_dev_trap_report drivers/net/netdevsim/dev.c:876 [inline]
 nsim_dev_trap_report_work+0x79b/0xd10 drivers/net/netdevsim/dev.c:922
 process_one_work+0xa23/0x19a0 kernel/workqueue.c:3276
 process_scheduled_works kernel/workqueue.c:3359 [inline]
 worker_thread+0x5ef/0xe50 kernel/workqueue.c:3440
 kthread+0x370/0x450 kernel/kthread.c:436
 ret_from_fork+0x754/0xd80 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>


---
If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.

