* [Qemu-devel] kernel 4.4.2: kvm_irq_delivery_to_api / rwsem_down_read_failed
From: Stefan Priebe @ 2016-02-20 10:44 UTC
To: qemu-devel, kvm
Hi,
While testing kernel 4.4.2 and starting 20 Qemu 2.4.1 virtual machines,
I got the traces below and a load of 500 on the system. I was only able
to recover via sysrq-trigger.
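For reference, the sysrq facility used for the recovery is driven through
/proc/sysrq-trigger. A minimal sketch of the relevant commands, assuming
CONFIG_MAGIC_SYSRQ is enabled (the sysctl value is a bitmask; 1 enables
everything):

  # allow all sysrq functions
  echo 1 > /proc/sys/kernel/sysrq

  # dump blocked (D-state) tasks to the kernel log on demand,
  # without waiting for the 120-second hung-task watchdog
  echo w > /proc/sysrq-trigger

  # dump all tasks, or backtraces of all active CPUs
  echo t > /proc/sysrq-trigger
  echo l > /proc/sysrq-trigger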
All traces:
INFO: task pvedaemon worke:7470 blocked for more than 120 seconds.
Not tainted 4.4.2+1-ph #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
pvedaemon worke D ffff88239c367ca0 0 7470 7468 0x00080000
ffff88239c367ca0 ffff8840a6232500 ffff8823ed83a500 ffff88239c367c90
ffff88239c368000 ffff8845f5f070e8 ffff8845f5f07100 0000000000000000
00007ffc73b48e58 ffff88239c367cc0 ffffffffb66a4d89 ffff88239c367cf0
Call Trace:
[<ffffffffb66a4d89>] schedule+0x39/0x80
[<ffffffffb66a7447>] rwsem_down_read_failed+0xc7/0x120
[<ffffffffb63cb594>] call_rwsem_down_read_failed+0x14/0x30
[<ffffffffb66a6af7>] ? down_read+0x17/0x20
[<ffffffffb617b99e>] __access_remote_vm+0x3e/0x1c0
[<ffffffffb63cb594>] ? call_rwsem_down_read_failed+0x14/0x30
[<ffffffffb6181d5f>] access_remote_vm+0x1f/0x30
[<ffffffffb623212e>] proc_pid_cmdline_read+0x16e/0x4f0
[<ffffffffb611a4cc>] ? acct_account_cputime+0x1c/0x20
[<ffffffffb61c8348>] __vfs_read+0x18/0x40
[<ffffffffb61c94fe>] vfs_read+0x8e/0x140
[<ffffffffb61c95ff>] SyS_read+0x4f/0xa0
[<ffffffffb66a892e>] entry_SYSCALL_64_fastpath+0x12/0x71
INFO: task pvestatd:7633 blocked for more than 120 seconds.
Not tainted 4.4.2+1-ph #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
pvestatd D ffff88239f16fd40 0 7633 1 0x00080000
ffff88239f16fd40 ffff8824e76a8000 ffff8823e5fc2500 ffff8823e5fc2500
ffff88239f170000 ffff8845f5f070e8 ffff8845f5f07100 ffff8845f5f07080
000000000341bf10 ffff88239f16fd60 ffffffffb66a4d89 024000d000000058
Call Trace:
[<ffffffffb66a4d89>] schedule+0x39/0x80
[<ffffffffb66a7447>] rwsem_down_read_failed+0xc7/0x120
[<ffffffffb63cb594>] call_rwsem_down_read_failed+0x14/0x30
[<ffffffffb66a6af7>] ? down_read+0x17/0x20
[<ffffffffb623206c>] proc_pid_cmdline_read+0xac/0x4f0
[<ffffffffb611a4cc>] ? acct_account_cputime+0x1c/0x20
[<ffffffffb60b1f23>] ? account_user_time+0x73/0x80
[<ffffffffb60b249e>] ? vtime_account_user+0x4e/0x70
[<ffffffffb61c8348>] __vfs_read+0x18/0x40
[<ffffffffb61c94fe>] vfs_read+0x8e/0x140
[<ffffffffb61c95ff>] SyS_read+0x4f/0xa0
[<ffffffffb66a892e>] entry_SYSCALL_64_fastpath+0x12/0x71
INFO: task kvm:11766 blocked for more than 120 seconds.
Not tainted 4.4.2+1-ph #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
kvm D ffff88452a2d3dd0 0 11766 1 0x00080000
ffff88452a2d3dd0 ffff880166c74a00 ffff8845b7354a00 ffffffffb617fc8e
ffff88452a2d4000 ffff8845f5f070e8 ffff8845f5f07100 ffff88452a2d3f58
ffff8845b7354a00 ffff88452a2d3df0 ffffffffb66a4d89 00007fa807abbf80
Call Trace:
[<ffffffffb617fc8e>] ? __handle_mm_fault+0xd1e/0x1260
[<ffffffffb66a4d89>] schedule+0x39/0x80
[<ffffffffb66a7447>] rwsem_down_read_failed+0xc7/0x120
[<ffffffffb63cb594>] call_rwsem_down_read_failed+0x14/0x30
[<ffffffffb66a6af7>] ? down_read+0x17/0x20
[<ffffffffb604f477>] __do_page_fault+0x2b7/0x380
[<ffffffffb60b1f23>] ? account_user_time+0x73/0x80
[<ffffffffb60b249e>] ? vtime_account_user+0x4e/0x70
[<ffffffffb604f5a7>] do_page_fault+0x37/0x90
[<ffffffffb66aa7b8>] page_fault+0x28/0x30
INFO: task kvm:11824 blocked for more than 120 seconds.
Not tainted 4.4.2+1-ph #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
kvm D ffff8840a867faa0 0 11824 1 0x00080000
ffff8840a867faa0 ffff8845866a4a00 ffff8840a6232500 0000000000000001
ffff8840a8680000 ffff8845f5f070e8 ffff8845f5f07100 ffff8840a867fc0e
0000000000000000 ffff8840a867fac0 ffffffffb66a4d89 ffffffffc0606a06
Call Trace:
[<ffffffffb66a4d89>] schedule+0x39/0x80
[<ffffffffc0606a06>] ? kvm_irq_delivery_to_apic+0x56/0x220 [kvm]
[<ffffffffb66a7447>] rwsem_down_read_failed+0xc7/0x120
[<ffffffffb63cb594>] call_rwsem_down_read_failed+0x14/0x30
[<ffffffffb66a6af7>] ? down_read+0x17/0x20
[<ffffffffc05d1480>] kvm_host_page_size+0x60/0xa0 [kvm]
[<ffffffffc05ea9bc>] mapping_level+0x5c/0x130 [kvm]
[<ffffffffc05f1b1b>] tdp_page_fault+0x9b/0x260 [kvm]
[<ffffffffc05d70ed>] ? kernel_pio+0x2d/0x40 [kvm]
[<ffffffffc05eba21>] kvm_mmu_page_fault+0x31/0x120 [kvm]
[<ffffffffc0678db4>] handle_ept_violation+0xa4/0x170 [kvm_intel]
[<ffffffffc067fd07>] vmx_handle_exit+0x257/0x490 [kvm_intel]
[<ffffffffb60b2081>] ? __vtime_account_system+0x31/0x40
[<ffffffffc05e662f>] vcpu_enter_guest+0x6af/0xff0 [kvm]
[<ffffffffc06034ad>] ? kvm_apic_local_deliver+0x5d/0x60 [kvm]
[<ffffffffc05e8564>] kvm_arch_vcpu_ioctl_run+0xc4/0x3c0 [kvm]
[<ffffffffc05cf844>] kvm_vcpu_ioctl+0x324/0x5d0 [kvm]
[<ffffffffb611a4cc>] ? acct_account_cputime+0x1c/0x20
[<ffffffffb60b1f23>] ? account_user_time+0x73/0x80
[<ffffffffb61da203>] do_vfs_ioctl+0x83/0x4e0
[<ffffffffb600261f>] ? enter_from_user_mode+0x1f/0x50
[<ffffffffb6002711>] ? syscall_trace_enter_phase1+0xc1/0x110
[<ffffffffb61da6ac>] SyS_ioctl+0x4c/0x80
[<ffffffffb66a892e>] entry_SYSCALL_64_fastpath+0x12/0x71
INFO: task kvm:11825 blocked for more than 120 seconds.
Not tainted 4.4.2+1-ph #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
kvm D ffff88458d6a3aa0 0 11825 1 0x00080002
ffff88458d6a3aa0 ffff880167302500 ffff8840a6234a00 0000000000000001
ffff88458d6a4000 ffff8845f5f070e8 ffff8845f5f07100 ffff88458d6a3c0e
0000000000000000 ffff88458d6a3ac0 ffffffffb66a4d89 ffffffffc0606a06
Call Trace:
[<ffffffffb66a4d89>] schedule+0x39/0x80
[<ffffffffc0606a06>] ? kvm_irq_delivery_to_apic+0x56/0x220 [kvm]
[<ffffffffb66a7447>] rwsem_down_read_failed+0xc7/0x120
[<ffffffffb63cb594>] call_rwsem_down_read_failed+0x14/0x30
[<ffffffffb66a6af7>] ? down_read+0x17/0x20
[<ffffffffc05d1480>] kvm_host_page_size+0x60/0xa0 [kvm]
[<ffffffffc05ea9bc>] mapping_level+0x5c/0x130 [kvm]
[<ffffffffc05f1b1b>] tdp_page_fault+0x9b/0x260 [kvm]
[<ffffffffc05eba21>] kvm_mmu_page_fault+0x31/0x120 [kvm]
[<ffffffffc0678db4>] handle_ept_violation+0xa4/0x170 [kvm_intel]
[<ffffffffc067fd07>] vmx_handle_exit+0x257/0x490 [kvm_intel]
[<ffffffffb60b2081>] ? __vtime_account_system+0x31/0x40
[<ffffffffc05e662f>] vcpu_enter_guest+0x6af/0xff0 [kvm]
[<ffffffffc06034ad>] ? kvm_apic_local_deliver+0x5d/0x60 [kvm]
[<ffffffffc05e8564>] kvm_arch_vcpu_ioctl_run+0xc4/0x3c0 [kvm]
[<ffffffffc05cf844>] kvm_vcpu_ioctl+0x324/0x5d0 [kvm]
[<ffffffffb611a4cc>] ? acct_account_cputime+0x1c/0x20
[<ffffffffb60b1f23>] ? account_user_time+0x73/0x80
[<ffffffffb61da203>] do_vfs_ioctl+0x83/0x4e0
[<ffffffffb600261f>] ? enter_from_user_mode+0x1f/0x50
[<ffffffffb6002711>] ? syscall_trace_enter_phase1+0xc1/0x110
[<ffffffffb61da6ac>] SyS_ioctl+0x4c/0x80
[<ffffffffb66a892e>] entry_SYSCALL_64_fastpath+0x12/0x71
INFO: task kvm:14910 blocked for more than 120 seconds.
Not tainted 4.4.2+1-ph #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
kvm D ffff8838aee4fdd0 0 14910 1 0x00080000
ffff8838aee4fdd0 ffff882279b04a00 ffff883a20a5ca00 ffffffffb617facf
ffff8838aee50000 ffff8845f5f070e8 ffff8845f5f07100 ffff8838aee4ff58
ffff883a20a5ca00 ffff8838aee4fdf0 ffffffffb66a4d89 ffff880000000040
Call Trace:
[<ffffffffb617facf>] ? __handle_mm_fault+0xb5f/0x1260
[<ffffffffb66a4d89>] schedule+0x39/0x80
[<ffffffffb66a7447>] rwsem_down_read_failed+0xc7/0x120
[<ffffffffb63cb594>] call_rwsem_down_read_failed+0x14/0x30
[<ffffffffb66a6af7>] ? down_read+0x17/0x20
[<ffffffffb604f477>] __do_page_fault+0x2b7/0x380
[<ffffffffb60b1f23>] ? account_user_time+0x73/0x80
[<ffffffffb60b249e>] ? vtime_account_user+0x4e/0x70
[<ffffffffb604f5a7>] do_page_fault+0x37/0x90
[<ffffffffb66aa7b8>] page_fault+0x28/0x30
INFO: task kvm:14912 blocked for more than 120 seconds.
Not tainted 4.4.2+1-ph #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
kvm D ffff8845683efdf8 0 14912 1 0x00080000
ffff8845683efdf8 ffffffffb6c0f4c0 ffff883998edca00 ffff883998edca00
ffff8845683f0000 ffff8845f5f07100 ffff8845f5f070e8 ffffffff00000000
ffffffff0000000a ffff8845683efe18 ffffffffb66a4d89 0000000000000000
Call Trace:
[<ffffffffb66a4d89>] schedule+0x39/0x80
[<ffffffffb66a7237>] rwsem_down_write_failed+0x1b7/0x300
[<ffffffffb63cb5c3>] call_rwsem_down_write_failed+0x13/0x20
[<ffffffffb66a6b24>] ? down_write+0x24/0x40
[<ffffffffb6188901>] SyS_mprotect+0xc1/0x210
[<ffffffffb66a892e>] entry_SYSCALL_64_fastpath+0x12/0x71
INFO: task kvm:15177 blocked for more than 120 seconds.
Not tainted 4.4.2+1-ph #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
kvm D ffff8845eb73fdd0 0 15177 1 0x00080000
ffff8845eb73fdd0 ffff8824e76aca00 ffff883994e20000 ffffffffb617facf
ffff8845eb740000 ffff8845f5f070e8 ffff8845f5f07100 ffff8845eb73ff58
ffff883994e20000 ffff8845eb73fdf0 ffffffffb66a4d89 ffff880000000038
Call Trace:
[<ffffffffb617facf>] ? __handle_mm_fault+0xb5f/0x1260
[<ffffffffb66a4d89>] schedule+0x39/0x80
[<ffffffffb66a7447>] rwsem_down_read_failed+0xc7/0x120
[<ffffffffb60cbc11>] ? rwsem_wake+0x71/0xb0
[<ffffffffb63cb594>] call_rwsem_down_read_failed+0x14/0x30
[<ffffffffb66a6af7>] ? down_read+0x17/0x20
[<ffffffffb604f477>] __do_page_fault+0x2b7/0x380
[<ffffffffb60b1f23>] ? account_user_time+0x73/0x80
[<ffffffffb60b249e>] ? vtime_account_user+0x4e/0x70
[<ffffffffb604f5a7>] do_page_fault+0x37/0x90
[<ffffffffb66aa7b8>] page_fault+0x28/0x30
INFO: task iotop:14292 blocked for more than 120 seconds.
Not tainted 4.4.2+1-ph #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
iotop D ffff88239130bca0 0 14292 14290 0x00080000
ffff88239130bca0 ffff8824e7670000 ffff88228bf70000 ffff88239130bc90
ffff88239130c000 ffff8845f5f070e8 ffff8845f5f07100 0000000000000000
00007ffc73b48e58 ffff88239130bcc0 ffffffffb66a4d89 ffff88239130bcf0
Call Trace:
[<ffffffffb66a4d89>] schedule+0x39/0x80
[<ffffffffb66a7447>] rwsem_down_read_failed+0xc7/0x120
[<ffffffffb63cb594>] call_rwsem_down_read_failed+0x14/0x30
[<ffffffffb66a6af7>] ? down_read+0x17/0x20
[<ffffffffb617b99e>] __access_remote_vm+0x3e/0x1c0
[<ffffffffb63cb594>] ? call_rwsem_down_read_failed+0x14/0x30
[<ffffffffb6181d5f>] access_remote_vm+0x1f/0x30
[<ffffffffb623212e>] proc_pid_cmdline_read+0x16e/0x4f0
[<ffffffffb611a4cc>] ? acct_account_cputime+0x1c/0x20
[<ffffffffb61c8348>] __vfs_read+0x18/0x40
[<ffffffffb61c94fe>] vfs_read+0x8e/0x140
[<ffffffffb61c95ff>] SyS_read+0x4f/0xa0
[<ffffffffb66a892e>] entry_SYSCALL_64_fastpath+0x12/0x71
INFO: task top:14293 blocked for more than 120 seconds.
Not tainted 4.4.2+1-ph #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
top D ffff8845ed873ca0 0 14293 14291 0x00080000
ffff8845ed873ca0 ffff8823ed83a500 ffff8836ab6dca00 ffff8845ed873c90
ffff8845ed874000 ffff8845f5f070e8 ffff8845f5f07100 0000000000000000
00007ffc73b48e58 ffff8845ed873cc0 ffffffffb66a4d89 ffff8845ed873cf0
Call Trace:
[<ffffffffb66a4d89>] schedule+0x39/0x80
[<ffffffffb66a7447>] rwsem_down_read_failed+0xc7/0x120
[<ffffffffb63cb594>] call_rwsem_down_read_failed+0x14/0x30
[<ffffffffb66a6af7>] ? down_read+0x17/0x20
[<ffffffffb617b99e>] __access_remote_vm+0x3e/0x1c0
[<ffffffffb63cb594>] ? call_rwsem_down_read_failed+0x14/0x30
[<ffffffffb6181d5f>] access_remote_vm+0x1f/0x30
[<ffffffffb623212e>] proc_pid_cmdline_read+0x16e/0x4f0
[<ffffffffb611a4cc>] ? acct_account_cputime+0x1c/0x20
[<ffffffffb61c8348>] __vfs_read+0x18/0x40
[<ffffffffb61c94fe>] vfs_read+0x8e/0x140
[<ffffffffb61c95ff>] SyS_read+0x4f/0xa0
[<ffffffffb66a892e>] entry_SYSCALL_64_fastpath+0x12/0x71
vmbr0: port 32(tap111i0) entered disabled state
vmbr0: port 32(tap111i0) entered disabled state
Greets,
Stefan
* Re: [Qemu-devel] kernel 4.4.2: kvm_irq_delivery_to_api / rwsem_down_read_failed
From: Paolo Bonzini @ 2016-02-22 17:36 UTC
To: Stefan Priebe, qemu-devel, kvm
On 20/02/2016 11:44, Stefan Priebe wrote:
> Hi,
>
> While testing kernel 4.4.2 and starting 20 Qemu 2.4.1 virtual machines,
> I got the traces below and a load of 500 on the system. I was only able
> to recover via sysrq-trigger.
It seems like something is happening at the VM (virtual memory) level: a
task took the mm semaphore and hung everyone else. This is difficult to
debug without a core (and without knowing who held the semaphore). Sorry.
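One way to identify the holder if it happens again is to make the
hung-task detector panic the machine, so a crash dump captures the full
state. A sketch, assuming the hung-task watchdog
(CONFIG_DETECT_HUNG_TASK) is built in:

  # turn the next 120-second hang into a panic (which kdump can catch)
  echo 1 > /proc/sys/kernel/hung_task_panic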
Paolo
> All traces:
>
> INFO: task pvedaemon worke:7470 blocked for more than 120 seconds.
> Not tainted 4.4.2+1-ph #1
> "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> pvedaemon worke D ffff88239c367ca0 0 7470 7468 0x00080000
> ffff88239c367ca0 ffff8840a6232500 ffff8823ed83a500 ffff88239c367c90
> ffff88239c368000 ffff8845f5f070e8 ffff8845f5f07100 0000000000000000
> 00007ffc73b48e58 ffff88239c367cc0 ffffffffb66a4d89 ffff88239c367cf0
> Call Trace:
> [<ffffffffb66a4d89>] schedule+0x39/0x80
> [<ffffffffb66a7447>] rwsem_down_read_failed+0xc7/0x120
> [<ffffffffb63cb594>] call_rwsem_down_read_failed+0x14/0x30
> [<ffffffffb66a6af7>] ? down_read+0x17/0x20
> [<ffffffffb617b99e>] __access_remote_vm+0x3e/0x1c0
> [<ffffffffb63cb594>] ? call_rwsem_down_read_failed+0x14/0x30
> [<ffffffffb6181d5f>] access_remote_vm+0x1f/0x30
> [<ffffffffb623212e>] proc_pid_cmdline_read+0x16e/0x4f0
> [<ffffffffb611a4cc>] ? acct_account_cputime+0x1c/0x20
> [<ffffffffb61c8348>] __vfs_read+0x18/0x40
> [<ffffffffb61c94fe>] vfs_read+0x8e/0x140
> [<ffffffffb61c95ff>] SyS_read+0x4f/0xa0
> [<ffffffffb66a892e>] entry_SYSCALL_64_fastpath+0x12/0x71
> INFO: task pvestatd:7633 blocked for more than 120 seconds.
> Not tainted 4.4.2+1-ph #1
> "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> pvestatd D ffff88239f16fd40 0 7633 1 0x00080000
> ffff88239f16fd40 ffff8824e76a8000 ffff8823e5fc2500 ffff8823e5fc2500
> ffff88239f170000 ffff8845f5f070e8 ffff8845f5f07100 ffff8845f5f07080
> 000000000341bf10 ffff88239f16fd60 ffffffffb66a4d89 024000d000000058
> Call Trace:
> [<ffffffffb66a4d89>] schedule+0x39/0x80
> [<ffffffffb66a7447>] rwsem_down_read_failed+0xc7/0x120
> [<ffffffffb63cb594>] call_rwsem_down_read_failed+0x14/0x30
> [<ffffffffb66a6af7>] ? down_read+0x17/0x20
> [<ffffffffb623206c>] proc_pid_cmdline_read+0xac/0x4f0
> [<ffffffffb611a4cc>] ? acct_account_cputime+0x1c/0x20
> [<ffffffffb60b1f23>] ? account_user_time+0x73/0x80
> [<ffffffffb60b249e>] ? vtime_account_user+0x4e/0x70
> [<ffffffffb61c8348>] __vfs_read+0x18/0x40
> [<ffffffffb61c94fe>] vfs_read+0x8e/0x140
> [<ffffffffb61c95ff>] SyS_read+0x4f/0xa0
> [<ffffffffb66a892e>] entry_SYSCALL_64_fastpath+0x12/0x71
* Re: [Qemu-devel] kernel 4.4.2: kvm_irq_delivery_to_api / rwsem_down_read_failed
From: Stefan Priebe @ 2016-02-22 19:35 UTC
To: Paolo Bonzini, qemu-devel, kvm
On 22.02.2016 at 18:36, Paolo Bonzini wrote:
>
>
> On 20/02/2016 11:44, Stefan Priebe wrote:
>> Hi,
>>
>> While testing kernel 4.4.2 and starting 20 Qemu 2.4.1 virtual machines,
>> I got the traces below and a load of 500 on the system. I was only able
>> to recover via sysrq-trigger.
>
> It seems like something is happening at the VM (virtual memory) level: a
> task took the mm semaphore and hung everyone else. This is difficult to
> debug without a core (and without knowing who held the semaphore). Sorry.
OK, thank you anyway. Is there anything I can do if this happens again?
Stefan
> Paolo
>
>
>> All traces:
>>
>> INFO: task pvedaemon worke:7470 blocked for more than 120 seconds.
>> Not tainted 4.4.2+1-ph #1
>> "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>> pvedaemon worke D ffff88239c367ca0 0 7470 7468 0x00080000
>> ffff88239c367ca0 ffff8840a6232500 ffff8823ed83a500 ffff88239c367c90
>> ffff88239c368000 ffff8845f5f070e8 ffff8845f5f07100 0000000000000000
>> 00007ffc73b48e58 ffff88239c367cc0 ffffffffb66a4d89 ffff88239c367cf0
>> Call Trace:
>> [<ffffffffb66a4d89>] schedule+0x39/0x80
>> [<ffffffffb66a7447>] rwsem_down_read_failed+0xc7/0x120
>> [<ffffffffb63cb594>] call_rwsem_down_read_failed+0x14/0x30
>> [<ffffffffb66a6af7>] ? down_read+0x17/0x20
>> [<ffffffffb617b99e>] __access_remote_vm+0x3e/0x1c0
>> [<ffffffffb63cb594>] ? call_rwsem_down_read_failed+0x14/0x30
>> [<ffffffffb6181d5f>] access_remote_vm+0x1f/0x30
>> [<ffffffffb623212e>] proc_pid_cmdline_read+0x16e/0x4f0
>> [<ffffffffb611a4cc>] ? acct_account_cputime+0x1c/0x20
>> [<ffffffffb61c8348>] __vfs_read+0x18/0x40
>> [<ffffffffb61c94fe>] vfs_read+0x8e/0x140
>> [<ffffffffb61c95ff>] SyS_read+0x4f/0xa0
>> [<ffffffffb66a892e>] entry_SYSCALL_64_fastpath+0x12/0x71
>> INFO: task pvestatd:7633 blocked for more than 120 seconds.
>> Not tainted 4.4.2+1-ph #1
>> "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>> pvestatd D ffff88239f16fd40 0 7633 1 0x00080000
>> ffff88239f16fd40 ffff8824e76a8000 ffff8823e5fc2500 ffff8823e5fc2500
>> ffff88239f170000 ffff8845f5f070e8 ffff8845f5f07100 ffff8845f5f07080
>> 000000000341bf10 ffff88239f16fd60 ffffffffb66a4d89 024000d000000058
>> Call Trace:
>> [<ffffffffb66a4d89>] schedule+0x39/0x80
>> [<ffffffffb66a7447>] rwsem_down_read_failed+0xc7/0x120
>> [<ffffffffb63cb594>] call_rwsem_down_read_failed+0x14/0x30
>> [<ffffffffb66a6af7>] ? down_read+0x17/0x20
>> [<ffffffffb623206c>] proc_pid_cmdline_read+0xac/0x4f0
>> [<ffffffffb611a4cc>] ? acct_account_cputime+0x1c/0x20
>> [<ffffffffb60b1f23>] ? account_user_time+0x73/0x80
>> [<ffffffffb60b249e>] ? vtime_account_user+0x4e/0x70
>> [<ffffffffb61c8348>] __vfs_read+0x18/0x40
>> [<ffffffffb61c94fe>] vfs_read+0x8e/0x140
>> [<ffffffffb61c95ff>] SyS_read+0x4f/0xa0
>> [<ffffffffb66a892e>] entry_SYSCALL_64_fastpath+0x12/0x71
* Re: [Qemu-devel] kernel 4.4.2: kvm_irq_delivery_to_api / rwsem_down_read_failed
From: Paolo Bonzini @ 2016-02-22 19:44 UTC
To: Stefan Priebe; +Cc: qemu-devel, kvm
----- Original Message -----
> From: "Stefan Priebe" <s.priebe@profihost.ag>
> To: "Paolo Bonzini" <pbonzini@redhat.com>, "qemu-devel" <qemu-devel@nongnu.org>, kvm@vger.kernel.org
> Sent: Monday, February 22, 2016 8:35:41 PM
> Subject: Re: kernel 4.4.2: kvm_irq_delivery_to_api / rwsem_down_read_failed
>
>
> On 22.02.2016 at 18:36, Paolo Bonzini wrote:
> >
> >
> > On 20/02/2016 11:44, Stefan Priebe wrote:
> >> Hi,
> >>
> >> While testing kernel 4.4.2 and starting 20 Qemu 2.4.1 virtual machines,
> >> I got the traces below and a load of 500 on the system. I was only able
> >> to recover via sysrq-trigger.
> >
> > It seems like something is happening at the VM (virtual memory) level: a
> > task took the mm semaphore and hung everyone else. This is difficult to
> > debug without a core (and without knowing who held the semaphore). Sorry.
>
> OK, thank you anyway. Is there anything I can do if this happens again?
Try grabbing a vmcore with sysrq-c, if you have kdump configured.
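A sketch of that workflow, assuming kexec-tools/kdump is installed and a
crashkernel= memory reservation is configured on the kernel command line
(service names and vmcore paths vary by distribution):

  # 1. reserve memory for the capture kernel at boot, e.g.:
  #      crashkernel=256M

  # 2. while the system is wedged, force the crash dump
  echo c > /proc/sysrq-trigger

  # 3. after reboot, analyze the vmcore with the crash utility
  crash /usr/lib/debug/lib/modules/$(uname -r)/vmlinux \
        /var/crash/<timestamp>/vmcore

  # inside crash, list D-state (uninterruptible) tasks and walk their
  # stacks to find which one holds mmap_sem:
  #   ps | grep UN
  #   foreach UN bt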
Paolo