* [Bug 191451] New: Host hangs when hyperv/pvspinlock are disabled
@ 2016-12-29 6:06 bugzilla-daemon
2017-02-13 5:31 ` [Bug 191451] " bugzilla-daemon
` (4 more replies)
0 siblings, 5 replies; 6+ messages in thread
From: bugzilla-daemon @ 2016-12-29 6:06 UTC (permalink / raw)
To: kvm
https://bugzilla.kernel.org/show_bug.cgi?id=191451
Bug ID: 191451
Summary: Host hangs when hyperv/pvspinlock are disabled
Product: Virtualization
Version: unspecified
Kernel Version: 4.7.3-4.9.0
Hardware: All
OS: Linux
Tree: Mainline
Status: NEW
Severity: normal
Priority: P1
Component: kvm
Assignee: virtualization_kvm@kernel-bugs.osdl.org
Reporter: uzytkownik2@gmail.com
Regression: No
Created attachment 249011
--> https://bugzilla.kernel.org/attachment.cgi?id=249011&action=edit
Screen photo shot
I haven't figured out what exactly causes the problem (I've seen file corruption
and other nasty issues), but if I disable hyperv/pvspinlock the host hangs.
It might be connected to VFIO.
Sometimes RCU stall warnings are displayed (see the attached photo).
--
You are receiving this mail because:
You are watching the assignee of the bug.
^ permalink raw reply [flat|nested] 6+ messages in thread
* [Bug 191451] Host hangs when hyperv/pvspinlock are disabled
2016-12-29 6:06 [Bug 191451] New: Host hangs when hyperv/pvspinlock are disabled bugzilla-daemon
@ 2017-02-13 5:31 ` bugzilla-daemon
2017-02-23 22:42 ` bugzilla-daemon
` (3 subsequent siblings)
4 siblings, 0 replies; 6+ messages in thread
From: bugzilla-daemon @ 2017-02-13 5:31 UTC (permalink / raw)
To: kvm
https://bugzilla.kernel.org/show_bug.cgi?id=191451
--- Comment #1 from uzytkownik2@gmail.com (uzytkownik2@gmail.com) ---
VFIO was the only thing I seemed to be turning on and off. I managed to get the
host working again by powering the computer off and on, as opposed to
rebooting; I'm not sure why that would matter.
* [Bug 191451] Host hangs when hyperv/pvspinlock are disabled
2016-12-29 6:06 [Bug 191451] New: Host hangs when hyperv/pvspinlock are disabled bugzilla-daemon
2017-02-13 5:31 ` [Bug 191451] " bugzilla-daemon
@ 2017-02-23 22:42 ` bugzilla-daemon
2017-03-16 6:44 ` bugzilla-daemon
` (2 subsequent siblings)
4 siblings, 0 replies; 6+ messages in thread
From: bugzilla-daemon @ 2017-02-23 22:42 UTC (permalink / raw)
To: kvm
https://bugzilla.kernel.org/show_bug.cgi?id=191451
--- Comment #2 from uzytkownik2@gmail.com (uzytkownik2@gmail.com) ---
While debugging an unrelated issue, I noticed that many processes are in
uninterruptible sleep:
# ps aux | awk '$8 ~ /D/{print $0}'
root 742 0.0 0.0 0 0 ? D Feb20 0:00
[kworker/5:255]
root 913 0.0 0.0 0 0 ? D Feb20 0:15
[kworker/3:173]
root 4875 0.0 0.0 0 0 ? Ds Feb21 0:00
[systemd-logind]
root 4890 0.0 0.0 0 0 ? D Feb21 0:00
[kworker/u24:3]
root 5048 0.0 0.0 0 0 ? D Feb21 0:00 [kworker/6:1]
root 5238 0.0 0.0 62512 3376 ? DNs Feb21 0:00 (coredump)
root 5593 0.0 0.0 62512 3376 ? DNs Feb22 0:00 (coredump)
root 5715 0.0 0.0 62512 3376 ? DNs Feb22 0:00 (coredump)
root 24545 0.0 0.0 0 0 ? DN Feb22 0:00 [scanelf]
root 24556 0.0 0.0 62512 3504 ? DNs Feb22 0:00 (coredump)
root 27852 0.0 0.0 0 0 ? D Feb20 0:02 [kworker/9:4]
root 32672 0.0 0.0 0 0 ? D Feb20 0:20
[kworker/7:232]
# uptime
14:40:15 up 7 days, 15:26, 2 users, load average: 0.00, 0.00, 0.00
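The ad-hoc `ps aux | awk` filter above can also be expressed by reading /proc directly; a minimal sketch (Linux-only, and the hung-task list will of course vary from run to run):

```python
import os

def tasks_in_d_state():
    """List (pid, comm) pairs for tasks in uninterruptible sleep ('D').

    Same idea as `ps aux | awk '$8 ~ /D/'`, but parsing /proc/<pid>/stat.
    """
    if not os.path.isdir("/proc"):
        return []  # not a Linux procfs environment
    hung = []
    for pid in os.listdir("/proc"):
        if not pid.isdigit():
            continue
        try:
            with open("/proc/%s/stat" % pid) as f:
                stat = f.read()
        except OSError:
            continue  # process exited while we were scanning
        # Field 2 (comm) is parenthesised and may itself contain ')',
        # so split around the LAST ')'; the state letter follows it.
        comm = stat[stat.index("(") + 1:stat.rindex(")")]
        state = stat[stat.rindex(")") + 2:].split()[0]
        if state == "D":
            hung.append((int(pid), comm))
    return hung

print(tasks_in_d_state())
```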
VFIO might be a misleading hint that just makes the problem appear sooner.
In the kernel log I have messages such as:
[660318.170055] rcu_sched kthread starved for 6305 jiffies! g933470 c933469
f0x0 RCU_GP_WAIT_FQS(3) ->state=0x1
[660318.170055] rcu_sched S14864 7 2 0x00000000
[660318.170055] ffff970f5bf43500 ffff970f5f5162c0 ffff970b17a16cc0
ffff970f5bef4f80
[660318.170055] ffff970f5f5162c0 ffffa66283197d88 ffffffffa1e673d8
ffffa66283197dd0
[660318.170055] 000000005f50eb00 0000000000000286 ffff970f5bef4f80
ffffa66283197dd0
[660318.170055] Call Trace:
[660318.170055] [<ffffffffa1e673d8>] ? __schedule+0x1f8/0x650
[660318.170055] [<ffffffffa1e67861>] schedule+0x31/0x80
[660318.170055] [<ffffffffa1e6a6d5>] schedule_timeout+0x155/0x2f0
[660318.170055] [<ffffffffa18da01d>] ? rcu_report_qs_rnp+0xed/0x180
[660318.170055] [<ffffffffa18e0580>] ? del_timer_sync+0x50/0x50
[660318.170055] [<ffffffffa18bf2dc>] ? prepare_to_swait+0x5c/0x90
[660318.170055] [<ffffffffa18dc4d3>] rcu_gp_kthread+0x473/0x7f0
[660318.170055] [<ffffffffa18dc060>] ? call_rcu_sched+0x20/0x20
[660318.170055] [<ffffffffa189fe85>] kthread+0xc5/0xe0
[660318.170055] [<ffffffffa189fdc0>] ? kthread_park+0x60/0x60
[660318.170055] [<ffffffffa1e6bc92>] ret_from_fork+0x22/0x30
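The "starved for 6305 jiffies" figure above only translates to wall-clock time once CONFIG_HZ is known; the reporter's HZ is not stated, so the value below assumes a common HZ=250 purely for illustration:

```python
def jiffies_to_seconds(jiffies, hz):
    """Convert a jiffy count to seconds; hz is the kernel's CONFIG_HZ."""
    return jiffies / hz

# At an assumed HZ=250, a 6305-jiffy starvation of the RCU
# grace-period kthread is on the order of 25 seconds.
print(jiffies_to_seconds(6305, 250))
```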
* [Bug 191451] Host hangs when hyperv/pvspinlock are disabled
2016-12-29 6:06 [Bug 191451] New: Host hangs when hyperv/pvspinlock are disabled bugzilla-daemon
2017-02-13 5:31 ` [Bug 191451] " bugzilla-daemon
2017-02-23 22:42 ` bugzilla-daemon
@ 2017-03-16 6:44 ` bugzilla-daemon
2017-09-14 1:52 ` bugzilla-daemon
2017-10-07 2:04 ` bugzilla-daemon
4 siblings, 0 replies; 6+ messages in thread
From: bugzilla-daemon @ 2017-03-16 6:44 UTC (permalink / raw)
To: kvm
https://bugzilla.kernel.org/show_bug.cgi?id=191451
--- Comment #3 from uzytkownik2@gmail.com (uzytkownik2@gmail.com) ---
Reproduced without VFIO:
Mar 15 22:19:52 serenity kernel: ------------[ cut here ]------------
Mar 15 22:19:52 serenity kernel: WARNING: CPU: 0 PID: 3 at
net/sched/sch_generic.c:316 dev_watchdog+0x259/0x260
Mar 15 22:19:52 serenity kernel: NETDEV WATCHDOG: eno1 (e1000e): transmit queue
0 timed out
Mar 15 22:19:52 serenity kernel: Modules linked in: ipt_MASQUERADE
nf_nat_masquerade_ipv4 xfrm_user xfrm_algo iptable_nat nf_conntrack_ipv4
nf_defrag_ipv4 nf_nat_ipv4 xt_addrtype iptable_filter ip_tables xt_conn
Mar 15 22:19:52 serenity kernel: CPU: 0 PID: 3 Comm: ksoftirqd/0 Tainted: P
O 4.9.13-hardened #2
Mar 15 22:19:52 serenity kernel: Hardware name: Gigabyte Technology Co., Ltd.
To be filled by O.E.M./X99-SLI-CF, BIOS F1 04/15/2015
Mar 15 22:19:52 serenity kernel: ffffc900031d3c88 ffffffff812f1f7c
ffffffff81108c71 0000000000000000
Mar 15 22:19:52 serenity kernel: ffffc900031d3cd8 0000000000000000
ffffc900031d3cc8 ffffffff81097781
Mar 15 22:19:52 serenity kernel: 0000013c1166b180 0000000000000000
ffff880408448000 0000000000000000
Mar 15 22:19:52 serenity kernel: Call Trace:
Mar 15 22:19:52 serenity kernel: [<ffffffff812f1f7c>] dump_stack+0x6a/0x9e
Mar 15 22:19:52 serenity kernel: [<ffffffff81108c71>] ?
print_modules+0x61/0xc0
Mar 15 22:19:52 serenity kernel: [<ffffffff81097781>] __warn+0xc1/0xe0
Mar 15 22:19:52 serenity kernel: [<ffffffff810977fa>]
warn_slowpath_fmt+0x5a/0x80
Mar 15 22:19:52 serenity kernel: [<ffffffff815ef0f0>] ?
tcp_write_timer_handler+0x200/0x200
Mar 15 22:19:52 serenity kernel: [<ffffffff815a7a19>] dev_watchdog+0x259/0x260
Mar 15 22:19:52 serenity kernel: [<ffffffff815a77c0>] ?
dev_deactivate_queue.constprop.28+0x90/0x90
Mar 15 22:19:52 serenity kernel: [<ffffffff810eef27>]
call_timer_fn.isra.25+0x17/0x70
Mar 15 22:19:52 serenity kernel: [<ffffffff810ef10a>]
run_timer_softirq+0x18a/0x1d0
Mar 15 22:19:52 serenity kernel: [<ffffffff810cefc7>] ?
pick_next_task_fair+0x417/0x4e0
Mar 15 22:19:52 serenity kernel: [<ffffffff8109c4b2>] __do_softirq+0xd2/0x1d0
Mar 15 22:19:52 serenity kernel: [<ffffffff8109c5c7>] run_ksoftirqd+0x17/0x30
Mar 15 22:19:52 serenity kernel: [<ffffffff810b9975>]
smpboot_thread_fn+0x105/0x160
Mar 15 22:19:52 serenity kernel: [<ffffffff810b9870>] ? sort_range+0x20/0x20
Mar 15 22:19:52 serenity kernel: [<ffffffff810b5b84>] kthread+0xd4/0xf0
Mar 15 22:19:52 serenity kernel: [<ffffffff810b5ab0>] ?
kthread_create_on_node+0x60/0x60
Mar 15 22:19:52 serenity kernel: [<ffffffff816884a2>] ret_from_fork+0x22/0x30
Mar 15 22:19:52 serenity kernel: ---[ end trace 17d22798f082457b ]---
Mar 15 22:19:52 serenity kernel: e1000e 0000:00:19.0 eno1: Reset adapter
unexpectedly
Mar 15 22:20:20 serenity kernel: e1000e: eno1 NIC Link is Up 1000 Mbps Full
Duplex, Flow Control: None
Mar 15 22:30:36 serenity kernel: INFO: rcu_sched self-detected stall on CPU
Mar 15 22:30:36 serenity kernel: 4-...: (281 GPs behind)
idle=5ab/140000000000001/0 softirq=153767/153767 fqs=2
Mar 15 22:30:36 serenity kernel: (t=2101 jiffies g=36284 c=36283
q=51633)
Mar 15 22:30:36 serenity kernel: rcu_sched kthread starved for 2097 jiffies!
g36284 c36283 f0x0 RCU_GP_WAIT_FQS(3) ->state=0x0
Mar 15 22:30:36 serenity kernel: rcu_sched R running task 15288 7
2 0x00000000
Mar 15 22:30:36 serenity kernel: ffffffff81a06180 ffff88087fc13140
ffff88040cbe12c0 ffff88085c2d3c00
Mar 15 22:30:36 serenity kernel: ffff88087fc13140 ffffc900031f3d98
ffffffff816838e8 ffffffff00000000
Mar 15 22:30:37 serenity kernel: 0000000000000000 0000000000000000
ffff88085c2d3c00 ffffffff81ac02e0
Mar 15 22:30:37 serenity kernel: Call Trace:
Mar 15 22:30:37 serenity kernel: [<ffffffff816838e8>] ? __schedule+0x1f8/0x5d0
Mar 15 22:30:37 serenity kernel: [<ffffffff81683cf1>] schedule+0x31/0x80
Mar 15 22:30:37 serenity kernel: [<ffffffff816871c5>]
schedule_timeout+0xa5/0x120
Mar 15 22:30:37 serenity kernel: [<ffffffff810eef00>] ?
del_timer_sync+0x50/0x50
Mar 15 22:30:37 serenity kernel: [<ffffffff810eb918>]
rcu_gp_kthread+0x2a8/0x5b0
Mar 15 22:30:37 serenity kernel: [<ffffffff810eb670>] ?
rcu_gp_init+0x380/0x380
Mar 15 22:30:37 serenity kernel: [<ffffffff810b5b84>] kthread+0xd4/0xf0
Mar 15 22:30:37 serenity kernel: [<ffffffff810b5ab0>] ?
kthread_create_on_node+0x60/0x60
Mar 15 22:30:37 serenity kernel: [<ffffffff816884a2>] ret_from_fork+0x22/0x30
Mar 15 22:30:37 serenity kernel: Task dump for CPU 4:
Mar 15 22:30:37 serenity kernel: cc1plus R running task 13704 10989
10945 0x00000000
Mar 15 22:30:37 serenity kernel: ffffc90000023d98 ffffffff810c080b
0000000000000004 ffffffff81ac0080
Mar 15 22:30:37 serenity kernel: ffffc90000023db0 ffffffff810c2fb2
0000000000000004 ffffc90000023de0
Mar 15 22:30:37 serenity kernel: ffffffff8113dfc8 ffff88087fc93e40
ffffffff81ac0080 0000000000000001
Mar 15 22:30:37 serenity kernel: Call Trace:
Mar 15 22:30:37 serenity kernel: <IRQ>
Mar 15 22:30:37 serenity kernel: [<ffffffff810c080b>]
sched_show_task+0xdb/0x140
Mar 15 22:30:37 serenity kernel: [<ffffffff810c2fb2>] dump_cpu_task+0x32/0x40
Mar 15 22:30:37 serenity kernel: [<ffffffff8113dfc8>]
rcu_dump_cpu_stacks+0x8d/0xb1
Mar 15 22:30:37 serenity kernel: [<ffffffff810ed138>]
rcu_check_callbacks+0x6a8/0x800
Mar 15 22:30:37 serenity kernel: [<ffffffff81120dbc>] ?
__acct_update_integrals+0x2c/0xb0
Mar 15 22:30:37 serenity kernel: [<ffffffff810efaaa>]
update_process_times+0x2a/0x50
Mar 15 22:30:37 serenity kernel: [<ffffffff810fe4bb>]
tick_sched_timer+0x5b/0x1a0
Mar 15 22:30:37 serenity kernel: [<ffffffff810f02be>]
__hrtimer_run_queues+0xde/0x1b0
Mar 15 22:30:37 serenity kernel: [<ffffffff810f0832>]
hrtimer_interrupt+0xb2/0x1b0
Mar 15 22:30:37 serenity kernel: [<ffffffff81039073>]
smp_trace_apic_timer_interrupt+0x63/0x90
Mar 15 22:30:37 serenity kernel: [<ffffffff810390a9>]
smp_apic_timer_interrupt+0x9/0x10
Mar 15 22:30:37 serenity kernel: [<ffffffff81688df1>]
apic_timer_interrupt+0x81/0x90
Mar 15 22:30:37 serenity kernel: <EOI>
Mar 15 22:30:37 serenity kernel: e1000e 0000:00:19.0 eno1: Reset adapter
unexpectedly
Mar 15 22:30:39 serenity kernel: e1000e: eno1 NIC Link is Up 1000 Mbps Full
Duplex, Flow Control: None
Mar 15 22:32:20 serenity kernel: INFO: rcu_sched self-detected stall on CPU
Mar 15 22:32:20 serenity kernel: INFO: rcu_sched self-detected stall on CPU
Mar 15 22:32:20 serenity kernel: 9-...: (13 GPs behind)
idle=11f/140000000000001/0 softirq=171795/171795 fqs=3
Mar 15 22:32:20 serenity kernel: (t=2100 jiffies g=36550 c=36549
q=44075)
Mar 15 22:32:20 serenity kernel: rcu_sched kthread starved for 2053 jiffies!
g36550 c36549 f0x0 RCU_GP_WAIT_FQS(3) ->state=0x0
Mar 15 22:32:20 serenity kernel: rcu_sched R running task 15288 7
2 0x00000000
Mar 15 22:32:20 serenity kernel: ffff8803c548bc00 ffff88087fc13140
ffff8804195b1a40 ffff88085c2d3c00
Mar 15 22:32:20 serenity kernel: ffff88087fc13140 ffffc900031f3d98
ffffffff816838e8 ffffffff00000000
Mar 15 22:32:20 serenity kernel: 0000000000000000 0000000000000000
ffff88085c2d3c00 ffffffff81ac02e0
Mar 15 22:32:20 serenity kernel: Call Trace:
Mar 15 22:32:20 serenity kernel: [<ffffffff816838e8>] ? __schedule+0x1f8/0x5d0
Mar 15 22:32:20 serenity kernel: [<ffffffff81683cf1>] schedule+0x31/0x80
Mar 15 22:32:20 serenity kernel: [<ffffffff816871c5>]
schedule_timeout+0xa5/0x120
Mar 15 22:32:20 serenity kernel: [<ffffffff810eef00>] ?
del_timer_sync+0x50/0x50
Mar 15 22:32:20 serenity kernel: [<ffffffff810eb918>]
rcu_gp_kthread+0x2a8/0x5b0
Mar 15 22:32:20 serenity kernel: [<ffffffff810eb670>] ?
rcu_gp_init+0x380/0x380
Mar 15 22:32:20 serenity kernel: [<ffffffff810b5b84>] kthread+0xd4/0xf0
Mar 15 22:32:20 serenity kernel: [<ffffffff810b5ab0>] ?
kthread_create_on_node+0x60/0x60
Mar 15 22:32:20 serenity kernel: [<ffffffff816884a2>] ret_from_fork+0x22/0x30
Mar 15 22:32:20 serenity kernel: Task dump for CPU 8:
Mar 15 22:32:20 serenity kernel: cc1plus R running task 13944 11343
11324 0x00000000
Mar 15 22:32:20 serenity kernel: ffffc9000efafc28 ffffffff813112e0
ffffea000a9bc2a0 1488f1d5fc5cb49b
Mar 15 22:32:20 serenity kernel: ffffea000439a5a0 ffffc9000efafc40
ffffffff8131132d ffffc9000efafd50
Mar 15 22:32:20 serenity kernel: ffffc9000efafd38 ffffffff811473bb
0000000000000000 ffff88087fd16b20
Mar 15 22:32:20 serenity kernel: Call Trace:
Mar 15 22:32:20 serenity kernel: [<ffffffff813112e0>] ?
__list_del_entry+0x20/0x60
Mar 15 22:32:20 serenity kernel: [<ffffffff8131132d>] ? list_del+0xd/0x30
Mar 15 22:32:20 serenity kernel: [<ffffffff811473bb>]
get_page_from_freelist+0x2db/0xa70
Mar 15 22:32:20 serenity kernel: [<ffffffff81148b89>]
__alloc_pages_nodemask+0xd9/0x1f0
Mar 15 22:32:20 serenity kernel: [<ffffffff811acafb>] ?
mem_cgroup_commit_charge+0x7b/0x480
Mar 15 22:32:20 serenity kernel: [<ffffffff81192e7f>]
alloc_pages_vma+0x9f/0x270
Mar 15 22:32:20 serenity kernel: [<ffffffff8117bedd>] ?
page_add_new_anon_rmap+0x9d/0xe0
Mar 15 22:32:20 serenity kernel: [<ffffffff8114fa21>] ?
lru_cache_add_active_or_unevictable+0x31/0xa0
Mar 15 22:32:20 serenity kernel: [<ffffffff81171493>] ?
handle_mm_fault+0xfa3/0x1070
Mar 15 22:32:20 serenity kernel: [<ffffffff81044efa>] ?
__do_page_fault+0x21a/0x430
Mar 15 22:32:20 serenity kernel: [<ffffffff8104514c>] ? do_page_fault+0xc/0x10
Mar 15 22:32:20 serenity kernel: [<ffffffff81688bdd>] ? retint_user+0x8/0xd
Mar 15 22:32:20 serenity kernel: Task dump for CPU 9:
Mar 15 22:32:20 serenity kernel: cc1plus R running task 14184 11389
11388 0x00000000
Mar 15 22:32:20 serenity kernel: ffffc9000004bd98 ffffffff810c080b
0000000000000009 ffffffff81ac0080
Mar 15 22:32:20 serenity kernel: ffffc9000004bdb0 ffffffff810c2fb2
0000000000000009 ffffc9000004bde0
Mar 15 22:32:20 serenity kernel: ffffffff8113dfc8 ffff88087fd33e40
ffffffff81ac0080 0000000000000001
Mar 15 22:32:20 serenity kernel: Call Trace:
Mar 15 22:32:20 serenity kernel: <IRQ>
Mar 15 22:32:20 serenity kernel: [<ffffffff810c080b>]
sched_show_task+0xdb/0x140
Mar 15 22:32:20 serenity kernel: [<ffffffff810c2fb2>] dump_cpu_task+0x32/0x40
Mar 15 22:32:20 serenity kernel: [<ffffffff8113dfc8>]
rcu_dump_cpu_stacks+0x8d/0xb1
Mar 15 22:32:20 serenity kernel: [<ffffffff810ed138>]
rcu_check_callbacks+0x6a8/0x800
Mar 15 22:32:20 serenity kernel: 8-...: (14 GPs behind)
idle=01d/140000000000001/0 softirq=95511/95520 fqs=3
Mar 15 22:32:20 serenity kernel: [<ffffffff81120dbc>] ?
__acct_update_integrals+0x2c/0xb0
Mar 15 22:32:20 serenity kernel: [<ffffffff810efaaa>]
update_process_times+0x2a/0x50
Mar 15 22:32:20 serenity kernel: [<ffffffff810fe4bb>]
tick_sched_timer+0x5b/0x1a0
Mar 15 22:32:20 serenity kernel: [<ffffffff810f02be>]
__hrtimer_run_queues+0xde/0x1b0
Mar 15 22:32:20 serenity kernel: [<ffffffff810f0832>]
hrtimer_interrupt+0xb2/0x1b0
Mar 15 22:32:20 serenity kernel: [<ffffffff81039073>]
smp_trace_apic_timer_interrupt+0x63/0x90
Mar 15 22:32:20 serenity kernel: [<ffffffff810390a9>]
smp_apic_timer_interrupt+0x9/0x10
Mar 15 22:32:20 serenity kernel: [<ffffffff81688df1>]
apic_timer_interrupt+0x81/0x90
Mar 15 22:32:20 serenity kernel: <EOI>
Mar 15 22:32:20 serenity kernel: (t=2100 jiffies g=36550 c=36549
q=44075)
(...)
I've run Memtest86 on the system and it detected no errors.
* [Bug 191451] Host hangs when hyperv/pvspinlock are disabled
2016-12-29 6:06 [Bug 191451] New: Host hangs when hyperv/pvspinlock are disabled bugzilla-daemon
` (2 preceding siblings ...)
2017-03-16 6:44 ` bugzilla-daemon
@ 2017-09-14 1:52 ` bugzilla-daemon
2017-10-07 2:04 ` bugzilla-daemon
4 siblings, 0 replies; 6+ messages in thread
From: bugzilla-daemon @ 2017-09-14 1:52 UTC (permalink / raw)
To: kvm
https://bugzilla.kernel.org/show_bug.cgi?id=191451
uzytkownik2@gmail.com (uzytkownik2@gmail.com) changed:
What |Removed |Added
----------------------------------------------------------------------------
Kernel Version|4.7.3-4.9.0 |4.7.3-4.12.1
--- Comment #4 from uzytkownik2@gmail.com (uzytkownik2@gmail.com) ---
Keeping bug alive:
[62880.118108] INFO: rcu_sched detected stalls on CPUs/tasks:
[62880.118392] 11-...: (32521 GPs behind) idle=870/0/0 softirq=1901624/1901625
fqs=1
[62880.118691] (detected by 6, t=2102 jiffies, g=329804, c=329803, q=811)
[62880.119019] Sending NMI from CPU 6 to CPUs 11:
[62880.119045] NMI backtrace for cpu 11 skipped: idling at pc
0xffffffffae71669a
[62880.120028] rcu_sched kthread starved for 2100 jiffies! g329804 c329803 f0x0
RCU_GP_WAIT_FQS(3) ->state=0x1
[62880.120416] rcu_sched S11456 8 2 0x00000000
[62880.120422] Call Trace:
[62880.120431] __schedule+0x2c8/0x620
[62880.120434] schedule+0x44/0x90
[62880.120437] schedule_timeout+0x110/0x1a0
[62880.120442] ? del_timer_sync+0x50/0x50
[62880.120445] ? prepare_to_swait+0x62/0x70
[62880.120450] rcu_gp_kthread+0x4c8/0x7f0
[62880.120453] ? rcu_gp_kthread+0x4c8/0x7f0
[62880.120456] kthread+0x10d/0x140
[62880.120460] ? _synchronize_rcu_expedited.constprop.78+0x380/0x380
[62880.120462] ? kthread_park+0x70/0x70
[62880.120466] ret_from_fork+0x22/0x30
[63134.340912] INFO: rcu_sched detected stalls on CPUs/tasks:
[63134.341355] 11-...: (0 ticks this GP) idle=df0/0/0 softirq=1901625/1901625
fqs=0
[63134.341813] (detected by 4, t=2102 jiffies, g=330302, c=330301, q=1024)
[63134.342302] Sending NMI from CPU 4 to CPUs 11:
[63134.342326] NMI backtrace for cpu 11 skipped: idling at pc
0xffffffffae71669a
[63134.343310] rcu_sched kthread starved for 2102 jiffies! g330302 c330301 f0x0
RCU_GP_WAIT_FQS(3) ->state=0x1
[63134.343859] rcu_sched S11456 8 2 0x00000000
[63134.343865] Call Trace:
[63134.343874] __schedule+0x2c8/0x620
[63134.343876] schedule+0x44/0x90
[63134.343879] schedule_timeout+0x110/0x1a0
[63134.343884] ? del_timer_sync+0x50/0x50
[63134.343888] ? prepare_to_swait+0x62/0x70
[63134.343893] rcu_gp_kthread+0x4c8/0x7f0
[63134.343896] ? rcu_gp_kthread+0x4c8/0x7f0
[63134.343899] kthread+0x10d/0x140
[63134.343903] ? _synchronize_rcu_expedited.constprop.78+0x380/0x380
[63134.343905] ? kthread_park+0x70/0x70
[63134.343909] ret_from_fork+0x22/0x30
[64670.826943] INFO: rcu_sched detected stalls on CPUs/tasks:
[64670.827547] 11-...: (3203 GPs behind) idle=8d4/0/0 softirq=1901625/1901625
fqs=1
[64670.828165] (detected by 7, t=2102 jiffies, g=333505, c=333504, q=1024)
[64670.828813] Sending NMI from CPU 7 to CPUs 11:
[64670.828845] NMI backtrace for cpu 11 skipped: idling at pc
0xffffffffae71669a
[64670.829821] rcu_sched kthread starved for 2099 jiffies! g333505 c333504 f0x0
RCU_GP_WAIT_FQS(3) ->state=0x1
[64670.830534] rcu_sched S11456 8 2 0x00000000
[64670.830540] Call Trace:
[64670.830548] __schedule+0x2c8/0x620
[64670.830551] schedule+0x44/0x90
[64670.830554] schedule_timeout+0x110/0x1a0
[64670.830559] ? del_timer_sync+0x50/0x50
[64670.830562] ? prepare_to_swait+0x62/0x70
[64670.830566] rcu_gp_kthread+0x4c8/0x7f0
[64670.830570] ? rcu_gp_kthread+0x4c8/0x7f0
[64670.830572] kthread+0x10d/0x140
[64670.830576] ? _synchronize_rcu_expedited.constprop.78+0x380/0x380
[64670.830578] ? kthread_park+0x70/0x70
[64670.830582] ret_from_fork+0x22/0x30
[66220.692488] INFO: rcu_sched detected stalls on CPUs/tasks:
[66220.693249] 11-...: (0 ticks this GP) idle=f24/0/0 softirq=1901625/1901625
fqs=0
[66220.694020] (detected by 10, t=2102 jiffies, g=336717, c=336716, q=1140)
[66220.694827] Sending NMI from CPU 10 to CPUs 11:
[66220.694852] NMI backtrace for cpu 11 skipped: idling at pc
0xffffffffae71669a
[66220.695836] rcu_sched kthread starved for 2102 jiffies! g336717 c336716 f0x0
RCU_GP_WAIT_FQS(3) ->state=0x1
[66220.696699] rcu_sched S11456 8 2 0x00000000
[66220.696705] Call Trace:
[66220.696713] __schedule+0x2c8/0x620
[66220.696718] ? put_prev_entity+0x39/0x540
[66220.696720] schedule+0x44/0x90
[66220.696723] schedule_timeout+0x110/0x1a0
[66220.696727] ? del_timer_sync+0x50/0x50
[66220.696730] ? prepare_to_swait+0x62/0x70
[66220.696735] rcu_gp_kthread+0x4c8/0x7f0
[66220.696738] ? rcu_gp_kthread+0x4c8/0x7f0
[66220.696741] kthread+0x10d/0x140
[66220.696745] ? _synchronize_rcu_expedited.constprop.78+0x380/0x380
[66220.696747] ? kthread_park+0x70/0x70
[66220.696751] ret_from_fork+0x22/0x30
[66283.738241] INFO: rcu_sched detected stalls on CPUs/tasks:
[66283.739155] 11-...: (0 ticks this GP) idle=f34/0/0 softirq=1901625/1901625
fqs=1
[66283.740087] (detected by 4, t=8407 jiffies, g=336717, c=336716, q=3093)
[66283.741048] Sending NMI from CPU 4 to CPUs 11:
[66283.741069] NMI backtrace for cpu 11 skipped: idling at pc
0xffffffffae71669a
[66283.742056] rcu_sched kthread starved for 6305 jiffies! g336717 c336716 f0x0
RCU_GP_WAIT_FQS(3) ->state=0x1
[66283.743074] rcu_sched S11456 8 2 0x00000000
[66283.743079] Call Trace:
[66283.743085] __schedule+0x2c8/0x620
[66283.743088] schedule+0x44/0x90
[66283.743091] schedule_timeout+0x110/0x1a0
[66283.743094] ? del_timer_sync+0x50/0x50
[66283.743098] ? prepare_to_swait+0x62/0x70
[66283.743102] rcu_gp_kthread+0x4c8/0x7f0
[66283.743105] ? rcu_gp_kthread+0x4c8/0x7f0
[66283.743108] kthread+0x10d/0x140
[66283.743112] ? _synchronize_rcu_expedited.constprop.78+0x380/0x380
[66283.743114] ? kthread_park+0x70/0x70
[66283.743118] ret_from_fork+0x22/0x30
[66902.906415] INFO: rcu_sched detected stalls on CPUs/tasks:
[66902.907481] 11-...: (1255 GPs behind) idle=f54/0/0 softirq=1901625/1901625
fqs=1
[66902.908568] (detected by 5, t=2102 jiffies, g=337972, c=337971, q=1121)
[66902.909689] Sending NMI from CPU 5 to CPUs 11:
[66902.909702] NMI backtrace for cpu 11 skipped: idling at pc
0xffffffffae71669a
[66902.910697] rcu_sched kthread starved for 2100 jiffies! g337972 c337971 f0x0
RCU_GP_WAIT_FQS(3) ->state=0x1
[66902.911876] rcu_sched S11456 8 2 0x00000000
[66902.911882] Call Trace:
[66902.911889] __schedule+0x2c8/0x620
[66902.911892] schedule+0x44/0x90
[66902.911895] schedule_timeout+0x110/0x1a0
[66902.911899] ? del_timer_sync+0x50/0x50
[66902.911902] ? prepare_to_swait+0x62/0x70
[66902.911907] rcu_gp_kthread+0x4c8/0x7f0
[66902.911910] ? rcu_gp_kthread+0x4c8/0x7f0
[66902.911913] kthread+0x10d/0x140
[66902.911916] ? _synchronize_rcu_expedited.constprop.78+0x380/0x380
[66902.911918] ? kthread_park+0x70/0x70
[66902.911922] ret_from_fork+0x22/0x30
[67123.131539] INFO: rcu_sched detected stalls on CPUs/tasks:
[67123.132769] 11-...: (0 ticks this GP) idle=ff8/0/0 softirq=1901625/1901625
fqs=0
[67123.134016] (detected by 10, t=2102 jiffies, g=338391, c=338390, q=1258)
[67123.135295] Sending NMI from CPU 10 to CPUs 11:
[67123.135320] NMI backtrace for cpu 11 skipped: idling at pc
0xffffffffae71669a
[67123.136304] rcu_sched kthread starved for 2102 jiffies! g338391 c338390 f0x0
RCU_GP_WAIT_FQS(3) ->state=0x1
[67123.137627] rcu_sched S11456 8 2 0x00000000
[67123.137632] Call Trace:
[67123.137639] __schedule+0x2c8/0x620
[67123.137644] ? put_prev_entity+0x39/0x540
[67123.137646] schedule+0x44/0x90
[67123.137649] schedule_timeout+0x110/0x1a0
[67123.137653] ? del_timer_sync+0x50/0x50
[67123.137656] ? prepare_to_swait+0x62/0x70
[67123.137661] rcu_gp_kthread+0x4c8/0x7f0
[67123.137664] ? rcu_gp_kthread+0x4c8/0x7f0
[67123.137667] kthread+0x10d/0x140
[67123.137671] ? _synchronize_rcu_expedited.constprop.78+0x380/0x380
[67123.137673] ? kthread_park+0x70/0x70
[67123.137676] ret_from_fork+0x22/0x30
[67186.177259] INFO: rcu_sched detected stalls on CPUs/tasks:
[67186.178640] 11-...: (0 ticks this GP) idle=008/0/0 softirq=1901625/1901625
fqs=1
[67186.180047] (detected by 5, t=8407 jiffies, g=338391, c=338390, q=3456)
[67186.181485] Sending NMI from CPU 5 to CPUs 11:
[67186.181497] NMI backtrace for cpu 11 skipped: idling at pc
0xffffffffae71669a
[67186.182493] rcu_sched kthread starved for 6305 jiffies! g338391 c338390 f0x0
RCU_GP_WAIT_FQS(3) ->state=0x1
[67186.183988] rcu_sched S11456 8 2 0x00000000
[67186.183993] Call Trace:
[67186.183999] __schedule+0x2c8/0x620
[67186.184001] schedule+0x44/0x90
[67186.184004] schedule_timeout+0x110/0x1a0
[67186.184008] ? del_timer_sync+0x50/0x50
[67186.184011] ? prepare_to_swait+0x62/0x70
[67186.184015] rcu_gp_kthread+0x4c8/0x7f0
[67186.184018] ? rcu_gp_kthread+0x4c8/0x7f0
[67186.184021] kthread+0x10d/0x140
[67186.184025] ? _synchronize_rcu_expedited.constprop.78+0x380/0x380
[67186.184026] ? kthread_park+0x70/0x70
[67186.184030] ret_from_fork+0x22/0x30
[67244.753305] INFO: rcu_sched detected stalls on CPUs/tasks:
[67244.754844] 11-...: (88 GPs behind) idle=018/0/0 softirq=1901625/1901625
fqs=1
[67244.756406] (detected by 5, t=2102 jiffies, g=338479, c=338478, q=1121)
[67244.758001] Sending NMI from CPU 5 to CPUs 11:
[67244.758013] NMI backtrace for cpu 11 skipped: idling at pc
0xffffffffae71669a
[67244.759008] rcu_sched kthread starved for 2099 jiffies! g338479 c338478 f0x0
RCU_GP_WAIT_FQS(3) ->state=0x1
[67244.760662] rcu_sched S11456 8 2 0x00000000
[67244.760667] Call Trace:
[67244.760673] __schedule+0x2c8/0x620
[67244.760676] schedule+0x44/0x90
[67244.760679] schedule_timeout+0x110/0x1a0
[67244.760683] ? del_timer_sync+0x50/0x50
[67244.760686] ? prepare_to_swait+0x62/0x70
[67244.760690] rcu_gp_kthread+0x4c8/0x7f0
[67244.760694] ? rcu_gp_kthread+0x4c8/0x7f0
[67244.760696] kthread+0x10d/0x140
[67244.760700] ? _synchronize_rcu_expedited.constprop.78+0x380/0x380
[67244.760702] ? kthread_park+0x70/0x70
[67244.760706] ret_from_fork+0x22/0x30
[68476.790067] INFO: rcu_sched detected stalls on CPUs/tasks:
[68476.791773] 11-...: (0 ticks this GP) idle=164/0/0 softirq=1901625/1901625
fqs=0
[68476.793489] (detected by 7, t=2102 jiffies, g=341009, c=341008, q=1380)
[68476.795244] Sending NMI from CPU 7 to CPUs 11:
[68476.795276] NMI backtrace for cpu 11 skipped: idling at pc
0xffffffffae71669a
[68476.796252] rcu_sched kthread starved for 2102 jiffies! g341009 c341008 f0x0
RCU_GP_WAIT_FQS(3) ->state=0x1
[68476.798063] rcu_sched S11456 8 2 0x00000000
[68476.798069] Call Trace:
[68476.798076] __schedule+0x2c8/0x620
[68476.798081] ? put_prev_entity+0x39/0x540
[68476.798083] schedule+0x44/0x90
[68476.798086] schedule_timeout+0x110/0x1a0
[68476.798089] ? del_timer_sync+0x50/0x50
[68476.798092] ? prepare_to_swait+0x62/0x70
[68476.798096] rcu_gp_kthread+0x4c8/0x7f0
[68476.798100] ? rcu_gp_kthread+0x4c8/0x7f0
[68476.798102] kthread+0x10d/0x140
[68476.798106] ? _synchronize_rcu_expedited.constprop.78+0x380/0x380
[68476.798108] ? kthread_park+0x70/0x70
[68476.798112] ret_from_fork+0x22/0x30
[68539.835809] INFO: rcu_sched detected stalls on CPUs/tasks:
[68539.837664] 11-...: (0 ticks this GP) idle=174/0/0 softirq=1901625/1901625
fqs=1
[68539.839539] (detected by 5, t=8407 jiffies, g=341009, c=341008, q=3363)
[68539.841447] Sending NMI from CPU 5 to CPUs 11:
[68539.841459] NMI backtrace for cpu 11 skipped: idling at pc
0xffffffffae71669a
[68539.842454] rcu_sched kthread starved for 6305 jiffies! g341009 c341008 f0x0
RCU_GP_WAIT_FQS(3) ->state=0x1
[68539.844425] rcu_sched S11456 8 2 0x00000000
[68539.844430] Call Trace:
[68539.844436] __schedule+0x2c8/0x620
[68539.844439] schedule+0x44/0x90
[68539.844442] schedule_timeout+0x110/0x1a0
[68539.844446] ? del_timer_sync+0x50/0x50
[68539.844449] ? prepare_to_swait+0x62/0x70
[68539.844453] rcu_gp_kthread+0x4c8/0x7f0
[68539.844456] ? rcu_gp_kthread+0x4c8/0x7f0
[68539.844459] kthread+0x10d/0x140
[68539.844462] ? _synchronize_rcu_expedited.constprop.78+0x380/0x380
[68539.844464] ? kthread_park+0x70/0x70
[68539.844468] ret_from_fork+0x22/0x30
[72010.480535] INFO: rcu_sched detected stalls on CPUs/tasks:
[72010.482562] 11-...: (0 ticks this GP) idle=4e0/0/0 softirq=1901625/1901625
fqs=0
[72010.484604] (detected by 10, t=2102 jiffies, g=348218, c=348217, q=1624)
[72010.486678] Sending NMI from CPU 10 to CPUs 11:
[72010.486702] NMI backtrace for cpu 11 skipped: idling at pc
0xffffffffae71669a
[72010.487686] rcu_sched kthread starved for 2102 jiffies! g348218 c348217 f0x0
RCU_GP_WAIT_FQS(3) ->state=0x1
[72010.489781] rcu_sched S11456 8 2 0x00000000
[72010.489787] Call Trace:
[72010.489794] __schedule+0x2c8/0x620
[72010.489800] ? put_prev_entity+0x39/0x540
[72010.489802] schedule+0x44/0x90
[72010.489805] schedule_timeout+0x110/0x1a0
[72010.489809] ? del_timer_sync+0x50/0x50
[72010.489812] ? prepare_to_swait+0x62/0x70
[72010.489817] rcu_gp_kthread+0x4c8/0x7f0
[72010.489820] ? rcu_gp_kthread+0x4c8/0x7f0
[72010.489823] kthread+0x10d/0x140
[72010.489827] ? _synchronize_rcu_expedited.constprop.78+0x380/0x380
[72010.489829] ? kthread_park+0x70/0x70
[72010.489833] ret_from_fork+0x22/0x30
[88198.215374] INFO: rcu_sched detected stalls on CPUs/tasks:
[88198.217491] 11-...: (34112 GPs behind) idle=2d4/0/0 softirq=1901625/1901625
fqs=1
[88198.219592] (detected by 6, t=2102 jiffies, g=382330, c=382329, q=1979)
[88198.221683] Sending NMI from CPU 6 to CPUs 11:
[88198.221695] NMI backtrace for cpu 11 skipped: idling at pc
0xffffffffae71669a
[88198.222692] rcu_sched kthread starved for 2100 jiffies! g382330 c382329 f0x0
RCU_GP_WAIT_FQS(3) ->state=0x1
[88198.224789] rcu_sched S11456 8 2 0x00000000
[88198.224795] Call Trace:
[88198.224803] __schedule+0x2c8/0x620
[88198.224806] schedule+0x44/0x90
[88198.224809] schedule_timeout+0x110/0x1a0
[88198.224813] ? del_timer_sync+0x50/0x50
[88198.224817] ? prepare_to_swait+0x62/0x70
[88198.224822] rcu_gp_kthread+0x4c8/0x7f0
[88198.224825] ? rcu_gp_kthread+0x4c8/0x7f0
[88198.224829] kthread+0x10d/0x140
[88198.224832] ? _synchronize_rcu_expedited.constprop.78+0x380/0x380
[88198.224834] ? kthread_park+0x70/0x70
[88198.224838] ret_from_fork+0x22/0x30
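When chasing recurring stalls like these, it can help to make them fire earlier and fail loudly; a sketch of kernel boot parameters that may apply here (availability depends on the kernel being built with RCU stall diagnostics and the soft-lockup detector):

```
rcupdate.rcu_cpu_stall_timeout=10 softlockup_panic=1
```

The first shortens the RCU stall-warning timeout from its default (21 seconds on recent kernels) so reports appear sooner; the second turns a detected soft lockup into a panic so a full dump is captured.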
* [Bug 191451] Host hangs when hyperv/pvspinlock are disabled
2016-12-29 6:06 [Bug 191451] New: Host hangs when hyperv/pvspinlock are disabled bugzilla-daemon
` (3 preceding siblings ...)
2017-09-14 1:52 ` bugzilla-daemon
@ 2017-10-07 2:04 ` bugzilla-daemon
4 siblings, 0 replies; 6+ messages in thread
From: bugzilla-daemon @ 2017-10-07 2:04 UTC (permalink / raw)
To: kvm
https://bugzilla.kernel.org/show_bug.cgi?id=191451
uzytkownik2@gmail.com (uzytkownik2@gmail.com) changed:
What |Removed |Added
----------------------------------------------------------------------------
Kernel Version|4.7.3-4.12.1 |4.7.3-4.13.1
--- Comment #5 from uzytkownik2@gmail.com (uzytkownik2@gmail.com) ---
Slightly different information after I enabled panic on lockup:
Panic#1 Part1
<4>[408949.271662] R13: ffffbcf243a33bbc R14: 0000000000000001 R15:
ffffbcf243a33b94
<4>[408949.271698] </IRQ>
<4>[408949.271711] do_sys_poll+0x2a2/0x5d0
<4>[408949.271731] ? __enqueue_entity+0x7a/0x80
<4>[408949.271753] ? enqueue_entity+0x2e2/0x9d0
<4>[408949.271775] ? compat_poll_select_copy_remaining+0x130/0x130
<4>[408949.271805] ? compat_poll_select_copy_remaining+0x130/0x130
<4>[408949.271835] ? compat_poll_select_copy_remaining+0x130/0x130
<4>[408949.271864] ? compat_poll_select_copy_remaining+0x130/0x130
<4>[408949.271894] ? compat_poll_select_copy_remaining+0x130/0x130
<4>[408949.271923] ? compat_poll_select_copy_remaining+0x130/0x130
<4>[408949.271953] ? compat_poll_select_copy_remaining+0x130/0x130
<4>[408949.271982] ? compat_poll_select_copy_remaining+0x130/0x130
<4>[408949.272012] ? compat_poll_select_copy_remaining+0x130/0x130
<4>[408949.272042] SyS_ppoll+0x176/0x190
<4>[408949.272061] ? SyS_ppoll+0x176/0x190
<4>[408949.272080] ? exit_to_usermode_loop+0x8a/0xa0
<4>[408949.272104] entry_SYSCALL_64_fastpath+0x13/0x94
<4>[408949.272127] RIP: 0033:0x7f4013925edb
<4>[408949.272146] RSP: 002b:00007fff5744d720 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
<4>[408949.273003] RAX: ffffffffffffffda RBX: 00005566c67d1710 RCX: 00007f4013925edb
<4>[408949.273866] RDX: 00007fff5744d740 RSI: 0000000000000014 RDI: 00005566c8376460
<4>[408949.274722] RBP: 00007fff5744d7a4 R08: 0000000000000008 R09: 0000000000000000
<4>[408949.275565] R10: 0000000000000000 R11: 0000000000000293 R12: 00000000000a3104
<4>[408949.276392] R13: 00005566c67d1710 R14: 0000000000000000 R15: 0000000000000000
<0>[408949.277228] Kernel Offset: 0x3a200000 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffffbfffffff)
Panic#1 Part2
<4>[408949.271093] RDX: 00007fff5744d740 RSI: 0000000000000014 RDI: 00005566c8376460
<4>[408949.271093] RBP: 00007fff5744d7a4 R08: 0000000000000008 R09: 0000000000000000
<4>[408949.271094] R10: 0000000000000000 R11: 0000000000000293 R12: 00000000000a3104
<4>[408949.271095] R13: 00005566c67d1710 R14: 0000000000000000 R15: 0000000000000000
<4>[408949.271095] Code: 85 f6 49 89 fc 48 89 f2 74 13 48 8b 06 48 85 c0 74 0b 48 8d 73 58 48 85 f6 74 02 ff d0 31 c0 41 f6 44 24 44 02 74 13 48 8b 53 38 <48> 8d 4b 38 48 39 d1 ba 04 01 00 00 0f 45 c2 48 8b 13 48 39 d3
<0>[408949.271119] Kernel panic - not syncing: softlockup: hung tasks
<4>[408949.271150] CPU: 7 PID: 7050 Comm: qemu-system-x86 Tainted: P O L 4.13.2-gentoo #1
<4>[408949.271197] Hardware name: Gigabyte Technology Co., Ltd. To be filled by O.E.M./X99-SLI-CF, BIOS F1 04/15/2015
<4>[408949.271246] Call Trace:
<4>[408949.271260] <IRQ>
<4>[408949.271274] dump_stack+0x60/0x7f
<4>[408949.271293] panic+0xe8/0x238
<4>[408949.271311] watchdog_timer_fn+0x208/0x210
<4>[408949.271334] __hrtimer_run_queues+0xcd/0x130
<4>[408949.271357] hrtimer_interrupt+0xad/0x1e0
<4>[408949.271380] smp_trace_apic_timer_interrupt+0x71/0xa0
<4>[408949.271407] smp_apic_timer_interrupt+0x1c/0x20
<4>[408949.271431] apic_timer_interrupt+0x86/0x90
<4>[408949.271456] RIP: 0010:usbdev_poll+0x4d/0x90 [usbcore]
<4>[408949.271482] RSP: 0018:ffffbcf243a33a90 EFLAGS: 00000202 ORIG_RAX: ffffffffffffff10
<4>[408949.271520] RAX: 0000000000000000 RBX: ffff9d4308a98b40 RCX: ffff9d43c1bab701
<4>[408949.271555] RDX: ffff9d4308a98b78 RSI: ffffbcf243a33c10 RDI: ffff9d43c1bab700
<4>[408949.271591] RBP: ffffbcf243a33aa0 R08: ffff9d43c1bab700 R09: 0000000000000370
<4>[408949.271626] R10: ffff9d43b6ab4000 R11: 00000000000002bb R12: ffff9d43c1bab700
Panic#1 Part3
<4>[408949.271048] RDX: ffff9d4308a98b78 RSI: ffffbcf243a33c10 RDI: ffff9d43c1bab700
<4>[408949.271048] RBP: ffffbcf243a33aa0 R08: ffff9d43c1bab700 R09: 0000000000000370
<4>[408949.271049] R10: ffff9d43b6ab4000 R11: 00000000000002bb R12: ffff9d43c1bab700
<4>[408949.271050] R13: ffffbcf243a33bbc R14: 0000000000000001 R15: ffffbcf243a33b94
<4>[408949.271051] FS: 00007f4016deab00(0000) GS:ffff9d481f5c0000(0000) knlGS:0000000000000000
<4>[408949.271052] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[408949.271052] CR2: 000002a18d051000 CR3: 0000000436859000 CR4: 00000000001426e0
<4>[408949.271053] Call Trace:
<4>[408949.271059] do_sys_poll+0x2a2/0x5d0
<4>[408949.271063] ? __enqueue_entity+0x7a/0x80
<4>[408949.271065] ? enqueue_entity+0x2e2/0x9d0
<4>[408949.271067] ? compat_poll_select_copy_remaining+0x130/0x130
<4>[408949.271069] ? compat_poll_select_copy_remaining+0x130/0x130
<4>[408949.271071] ? compat_poll_select_copy_remaining+0x130/0x130
<4>[408949.271072] ? compat_poll_select_copy_remaining+0x130/0x130
<4>[408949.271074] ? compat_poll_select_copy_remaining+0x130/0x130
<4>[408949.271076] ? compat_poll_select_copy_remaining+0x130/0x130
<4>[408949.271077] ? compat_poll_select_copy_remaining+0x130/0x130
<4>[408949.271079] ? compat_poll_select_copy_remaining+0x130/0x130
<4>[408949.271081] ? compat_poll_select_copy_remaining+0x130/0x130
<4>[408949.271083] SyS_ppoll+0x176/0x190
<4>[408949.271085] ? SyS_ppoll+0x176/0x190
<4>[408949.271086] ? exit_to_usermode_loop+0x8a/0xa0
<4>[408949.271089] entry_SYSCALL_64_fastpath+0x13/0x94
<4>[408949.271090] RIP: 0033:0x7f4013925edb
<4>[408949.271091] RSP: 002b:00007fff5744d720 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
<4>[408949.271092] RAX: ffffffffffffffda RBX: 00005566c67d1710 RCX: 00007f4013925edb
Panic#1 Part4
<4>[408907.138887] rcu_gp_kthread+0x502/0x840
<4>[408907.138890] ? rcu_gp_kthread+0x502/0x840
<4>[408907.138894] kthread+0x10d/0x140
<4>[408907.138897] ? call_rcu_sched+0x30/0x30
<4>[408907.138900] ? kthread_park+0x70/0x70
<4>[408907.138903] ret_from_fork+0x22/0x30
<0>[408949.270946] watchdog: BUG: soft lockup - CPU#7 stuck for 81s! [qemu-system-x86:7050]
<4>[408949.270992] Modules linked in: macvtap ipt_MASQUERADE nf_nat_masquerade_ipv4 nf_conntrack_netlink xfrm_user xfrm_algo iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 xt_addrtype iptable_filter ip_tables xt_conntrack nf_nat nf_conntrack br_netfilter bridge stp llc nvidia_uvm(PO) af_packet tcm_loop target_core_pscsi target_core_file target_core_iblock iscsi_target_mod macvlan snd_hda_codec_hdmi btrfs nls_iso8859_1 nls_cp437 vfat coretemp fat kvm_intel kvm crc32c_intel mousedev i2c_i801 snd_hda_codec_realtek snd_hda_codec_generic nvidia_drm(PO) nvidia_modeset(PO) nvidia(PO) snd_hda_intel snd_hda_codec snd_hwdep snd_hda_core snd_pcm button snd_timer snd soundcore e1000e shpchp hid_logitech_hidpp hid_logitech_dj hid_generic usbhid hid xhci_pci ehci_pci xhci_hcd ehci_hcd usbcore usb_common sr_mod
<4>[408949.271028] cdrom vhost_net tun tap vhost_scsi vhost target_core_mod efivarfs
<4>[408949.271034] CPU: 7 PID: 7050 Comm: qemu-system-x86 Tainted: P O 4.13.2-gentoo #1
<4>[408949.271035] Hardware name: Gigabyte Technology Co., Ltd. To be filled by O.E.M./X99-SLI-CF, BIOS F1 04/15/2015
<4>[408949.271037] task: ffff9d43b6ab8000 task.stack: ffffbcf243a30000
<4>[408949.271045] RIP: 0010:usbdev_poll+0x4d/0x90 [usbcore]
<4>[408949.271046] RSP: 0018:ffffbcf243a33a90 EFLAGS: 00000202 ORIG_RAX: ffffffffffffff10
<4>[408949.271047] RAX: 0000000000000000 RBX: ffff9d4308a98b40 RCX: ffff9d43c1bab701
Panic#1 Part5
<6>[ 823.455297] usb 3-9.2: reset full-speed USB device number 3 using xhci_hcd
<6>[ 823.765321] usb 3-9.2: reset full-speed USB device number 3 using xhci_hcd
<6>[ 824.155267] usb 3-9.3: reset full-speed USB device number 4 using xhci_hcd
<6>[ 824.435267] usb 3-9.3: reset full-speed USB device number 4 using xhci_hcd
<6>[ 999.039964] worker (30163) used greatest stack depth: 8552 bytes left
<6>[ 4880.736677] perf: interrupt took too long (2515 > 2500), lowering kernel.perf_event_max_sample_rate to 79500
<6>[ 6097.254169] perf: interrupt took too long (3152 > 3143), lowering kernel.perf_event_max_sample_rate to 63400
<6>[ 8062.800175] perf: interrupt took too long (3948 > 3940), lowering kernel.perf_event_max_sample_rate to 50600
<6>[21236.525379] perf: interrupt took too long (4938 > 4935), lowering kernel.perf_event_max_sample_rate to 40500
<6>[251807.087841] perf: interrupt took too long (6177 > 6172), lowering kernel.perf_event_max_sample_rate to 32300
<3>[408907.137636] INFO: rcu_sched detected stalls on CPUs/tasks:
<3>[408907.137690] 7-...: (109 GPs behind) idle=f74/0/0 softirq=11032837/11032839 fqs=1
<3>[408907.137738] (detected by 9, t=2102 jiffies, g=1840140, c=1840139, q=578)
<6>[408907.137785] Sending NMI from CPU 9 to CPUs 7:
<4>[408907.137796] NMI backtrace for cpu 7 skipped: idling at pc 0xffffffffbb937c6a
<3>[408907.138794] rcu_sched kthread starved for 2100 jiffies! g1840140 c1840139 f0x0 RCU_GP_WAIT_FQS(3) ->state=0x1
<6>[408907.138856] rcu_sched S11432 8 2 0x00000000
<4>[408907.138862] Call Trace:
<4>[408907.138870] __schedule+0x2d1/0x640
<4>[408907.138873] schedule+0x44/0x90
<4>[408907.138876] schedule_timeout+0x110/0x1a0
<4>[408907.138880] ? del_timer_sync+0x50/0x50
<4>[408907.138883] ? prepare_to_swait+0x62/0x70
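For reference, the "panic on lockup" behaviour the reporter enabled for comment #5 is normally controlled through the lockup-watchdog sysctls. A minimal sketch, assuming a mainline kernel of roughly this vintage (4.13) with the watchdog and RCU stall detector built in; run as root:

```shell
# Turn a soft-lockup warning into a panic instead of just a log message
# (same effect as booting with softlockup_panic=1).
sysctl -w kernel.softlockup_panic=1

# Optionally panic on RCU stall warnings and hard lockups as well,
# which matches the rcu_sched stall messages seen in this report.
sysctl -w kernel.panic_on_rcu_stall=1
sysctl -w kernel.hardlockup_panic=1

# Reboot automatically 10 seconds after the panic so the host
# does not stay wedged at the console.
sysctl -w kernel.panic=10
```

With these set, a stuck CPU produces the "Kernel panic - not syncing: softlockup: hung tasks" output captured above rather than a silently hung host.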
end of thread, other threads:[~2017-10-07 2:04 UTC | newest]
Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-12-29 6:06 [Bug 191451] New: Host hangs when hyperv/pvspinlock are disabled bugzilla-daemon
2017-02-13 5:31 ` [Bug 191451] " bugzilla-daemon
2017-02-23 22:42 ` bugzilla-daemon
2017-03-16 6:44 ` bugzilla-daemon
2017-09-14 1:52 ` bugzilla-daemon
2017-10-07 2:04 ` bugzilla-daemon