* [Bug 191451] Host hangs when hyperv/pvspinlock are disabled
2016-12-29 6:06 [Bug 191451] New: Host hangs when hyperv/pvspinlock are disabled bugzilla-daemon
From: bugzilla-daemon @ 2017-09-14 1:52 UTC (permalink / raw)
To: kvm
https://bugzilla.kernel.org/show_bug.cgi?id=191451
uzytkownik2@gmail.com (uzytkownik2@gmail.com) changed:
What |Removed |Added
----------------------------------------------------------------------------
Kernel Version|4.7.3-4.9.0 |4.7.3-4.12.1
--- Comment #4 from uzytkownik2@gmail.com (uzytkownik2@gmail.com) ---
Keeping bug alive:
[62880.118108] INFO: rcu_sched detected stalls on CPUs/tasks:
[62880.118392] 11-...: (32521 GPs behind) idle=870/0/0 softirq=1901624/1901625 fqs=1
[62880.118691] (detected by 6, t=2102 jiffies, g=329804, c=329803, q=811)
[62880.119019] Sending NMI from CPU 6 to CPUs 11:
[62880.119045] NMI backtrace for cpu 11 skipped: idling at pc 0xffffffffae71669a
[62880.120028] rcu_sched kthread starved for 2100 jiffies! g329804 c329803 f0x0 RCU_GP_WAIT_FQS(3) ->state=0x1
[62880.120416] rcu_sched S11456 8 2 0x00000000
[62880.120422] Call Trace:
[62880.120431] __schedule+0x2c8/0x620
[62880.120434] schedule+0x44/0x90
[62880.120437] schedule_timeout+0x110/0x1a0
[62880.120442] ? del_timer_sync+0x50/0x50
[62880.120445] ? prepare_to_swait+0x62/0x70
[62880.120450] rcu_gp_kthread+0x4c8/0x7f0
[62880.120453] ? rcu_gp_kthread+0x4c8/0x7f0
[62880.120456] kthread+0x10d/0x140
[62880.120460] ? _synchronize_rcu_expedited.constprop.78+0x380/0x380
[62880.120462] ? kthread_park+0x70/0x70
[62880.120466] ret_from_fork+0x22/0x30
[63134.340912] INFO: rcu_sched detected stalls on CPUs/tasks:
[63134.341355] 11-...: (0 ticks this GP) idle=df0/0/0 softirq=1901625/1901625 fqs=0
[63134.341813] (detected by 4, t=2102 jiffies, g=330302, c=330301, q=1024)
[63134.342302] Sending NMI from CPU 4 to CPUs 11:
[63134.342326] NMI backtrace for cpu 11 skipped: idling at pc 0xffffffffae71669a
[63134.343310] rcu_sched kthread starved for 2102 jiffies! g330302 c330301 f0x0 RCU_GP_WAIT_FQS(3) ->state=0x1
[63134.343859] rcu_sched S11456 8 2 0x00000000
[63134.343865] Call Trace:
[63134.343874] __schedule+0x2c8/0x620
[63134.343876] schedule+0x44/0x90
[63134.343879] schedule_timeout+0x110/0x1a0
[63134.343884] ? del_timer_sync+0x50/0x50
[63134.343888] ? prepare_to_swait+0x62/0x70
[63134.343893] rcu_gp_kthread+0x4c8/0x7f0
[63134.343896] ? rcu_gp_kthread+0x4c8/0x7f0
[63134.343899] kthread+0x10d/0x140
[63134.343903] ? _synchronize_rcu_expedited.constprop.78+0x380/0x380
[63134.343905] ? kthread_park+0x70/0x70
[63134.343909] ret_from_fork+0x22/0x30
[64670.826943] INFO: rcu_sched detected stalls on CPUs/tasks:
[64670.827547] 11-...: (3203 GPs behind) idle=8d4/0/0 softirq=1901625/1901625 fqs=1
[64670.828165] (detected by 7, t=2102 jiffies, g=333505, c=333504, q=1024)
[64670.828813] Sending NMI from CPU 7 to CPUs 11:
[64670.828845] NMI backtrace for cpu 11 skipped: idling at pc 0xffffffffae71669a
[64670.829821] rcu_sched kthread starved for 2099 jiffies! g333505 c333504 f0x0 RCU_GP_WAIT_FQS(3) ->state=0x1
[64670.830534] rcu_sched S11456 8 2 0x00000000
[64670.830540] Call Trace:
[64670.830548] __schedule+0x2c8/0x620
[64670.830551] schedule+0x44/0x90
[64670.830554] schedule_timeout+0x110/0x1a0
[64670.830559] ? del_timer_sync+0x50/0x50
[64670.830562] ? prepare_to_swait+0x62/0x70
[64670.830566] rcu_gp_kthread+0x4c8/0x7f0
[64670.830570] ? rcu_gp_kthread+0x4c8/0x7f0
[64670.830572] kthread+0x10d/0x140
[64670.830576] ? _synchronize_rcu_expedited.constprop.78+0x380/0x380
[64670.830578] ? kthread_park+0x70/0x70
[64670.830582] ret_from_fork+0x22/0x30
[66220.692488] INFO: rcu_sched detected stalls on CPUs/tasks:
[66220.693249] 11-...: (0 ticks this GP) idle=f24/0/0 softirq=1901625/1901625 fqs=0
[66220.694020] (detected by 10, t=2102 jiffies, g=336717, c=336716, q=1140)
[66220.694827] Sending NMI from CPU 10 to CPUs 11:
[66220.694852] NMI backtrace for cpu 11 skipped: idling at pc 0xffffffffae71669a
[66220.695836] rcu_sched kthread starved for 2102 jiffies! g336717 c336716 f0x0 RCU_GP_WAIT_FQS(3) ->state=0x1
[66220.696699] rcu_sched S11456 8 2 0x00000000
[66220.696705] Call Trace:
[66220.696713] __schedule+0x2c8/0x620
[66220.696718] ? put_prev_entity+0x39/0x540
[66220.696720] schedule+0x44/0x90
[66220.696723] schedule_timeout+0x110/0x1a0
[66220.696727] ? del_timer_sync+0x50/0x50
[66220.696730] ? prepare_to_swait+0x62/0x70
[66220.696735] rcu_gp_kthread+0x4c8/0x7f0
[66220.696738] ? rcu_gp_kthread+0x4c8/0x7f0
[66220.696741] kthread+0x10d/0x140
[66220.696745] ? _synchronize_rcu_expedited.constprop.78+0x380/0x380
[66220.696747] ? kthread_park+0x70/0x70
[66220.696751] ret_from_fork+0x22/0x30
[66283.738241] INFO: rcu_sched detected stalls on CPUs/tasks:
[66283.739155] 11-...: (0 ticks this GP) idle=f34/0/0 softirq=1901625/1901625 fqs=1
[66283.740087] (detected by 4, t=8407 jiffies, g=336717, c=336716, q=3093)
[66283.741048] Sending NMI from CPU 4 to CPUs 11:
[66283.741069] NMI backtrace for cpu 11 skipped: idling at pc 0xffffffffae71669a
[66283.742056] rcu_sched kthread starved for 6305 jiffies! g336717 c336716 f0x0 RCU_GP_WAIT_FQS(3) ->state=0x1
[66283.743074] rcu_sched S11456 8 2 0x00000000
[66283.743079] Call Trace:
[66283.743085] __schedule+0x2c8/0x620
[66283.743088] schedule+0x44/0x90
[66283.743091] schedule_timeout+0x110/0x1a0
[66283.743094] ? del_timer_sync+0x50/0x50
[66283.743098] ? prepare_to_swait+0x62/0x70
[66283.743102] rcu_gp_kthread+0x4c8/0x7f0
[66283.743105] ? rcu_gp_kthread+0x4c8/0x7f0
[66283.743108] kthread+0x10d/0x140
[66283.743112] ? _synchronize_rcu_expedited.constprop.78+0x380/0x380
[66283.743114] ? kthread_park+0x70/0x70
[66283.743118] ret_from_fork+0x22/0x30
[66902.906415] INFO: rcu_sched detected stalls on CPUs/tasks:
[66902.907481] 11-...: (1255 GPs behind) idle=f54/0/0 softirq=1901625/1901625 fqs=1
[66902.908568] (detected by 5, t=2102 jiffies, g=337972, c=337971, q=1121)
[66902.909689] Sending NMI from CPU 5 to CPUs 11:
[66902.909702] NMI backtrace for cpu 11 skipped: idling at pc 0xffffffffae71669a
[66902.910697] rcu_sched kthread starved for 2100 jiffies! g337972 c337971 f0x0 RCU_GP_WAIT_FQS(3) ->state=0x1
[66902.911876] rcu_sched S11456 8 2 0x00000000
[66902.911882] Call Trace:
[66902.911889] __schedule+0x2c8/0x620
[66902.911892] schedule+0x44/0x90
[66902.911895] schedule_timeout+0x110/0x1a0
[66902.911899] ? del_timer_sync+0x50/0x50
[66902.911902] ? prepare_to_swait+0x62/0x70
[66902.911907] rcu_gp_kthread+0x4c8/0x7f0
[66902.911910] ? rcu_gp_kthread+0x4c8/0x7f0
[66902.911913] kthread+0x10d/0x140
[66902.911916] ? _synchronize_rcu_expedited.constprop.78+0x380/0x380
[66902.911918] ? kthread_park+0x70/0x70
[66902.911922] ret_from_fork+0x22/0x30
[67123.131539] INFO: rcu_sched detected stalls on CPUs/tasks:
[67123.132769] 11-...: (0 ticks this GP) idle=ff8/0/0 softirq=1901625/1901625 fqs=0
[67123.134016] (detected by 10, t=2102 jiffies, g=338391, c=338390, q=1258)
[67123.135295] Sending NMI from CPU 10 to CPUs 11:
[67123.135320] NMI backtrace for cpu 11 skipped: idling at pc 0xffffffffae71669a
[67123.136304] rcu_sched kthread starved for 2102 jiffies! g338391 c338390 f0x0 RCU_GP_WAIT_FQS(3) ->state=0x1
[67123.137627] rcu_sched S11456 8 2 0x00000000
[67123.137632] Call Trace:
[67123.137639] __schedule+0x2c8/0x620
[67123.137644] ? put_prev_entity+0x39/0x540
[67123.137646] schedule+0x44/0x90
[67123.137649] schedule_timeout+0x110/0x1a0
[67123.137653] ? del_timer_sync+0x50/0x50
[67123.137656] ? prepare_to_swait+0x62/0x70
[67123.137661] rcu_gp_kthread+0x4c8/0x7f0
[67123.137664] ? rcu_gp_kthread+0x4c8/0x7f0
[67123.137667] kthread+0x10d/0x140
[67123.137671] ? _synchronize_rcu_expedited.constprop.78+0x380/0x380
[67123.137673] ? kthread_park+0x70/0x70
[67123.137676] ret_from_fork+0x22/0x30
[67186.177259] INFO: rcu_sched detected stalls on CPUs/tasks:
[67186.178640] 11-...: (0 ticks this GP) idle=008/0/0 softirq=1901625/1901625 fqs=1
[67186.180047] (detected by 5, t=8407 jiffies, g=338391, c=338390, q=3456)
[67186.181485] Sending NMI from CPU 5 to CPUs 11:
[67186.181497] NMI backtrace for cpu 11 skipped: idling at pc 0xffffffffae71669a
[67186.182493] rcu_sched kthread starved for 6305 jiffies! g338391 c338390 f0x0 RCU_GP_WAIT_FQS(3) ->state=0x1
[67186.183988] rcu_sched S11456 8 2 0x00000000
[67186.183993] Call Trace:
[67186.183999] __schedule+0x2c8/0x620
[67186.184001] schedule+0x44/0x90
[67186.184004] schedule_timeout+0x110/0x1a0
[67186.184008] ? del_timer_sync+0x50/0x50
[67186.184011] ? prepare_to_swait+0x62/0x70
[67186.184015] rcu_gp_kthread+0x4c8/0x7f0
[67186.184018] ? rcu_gp_kthread+0x4c8/0x7f0
[67186.184021] kthread+0x10d/0x140
[67186.184025] ? _synchronize_rcu_expedited.constprop.78+0x380/0x380
[67186.184026] ? kthread_park+0x70/0x70
[67186.184030] ret_from_fork+0x22/0x30
[67244.753305] INFO: rcu_sched detected stalls on CPUs/tasks:
[67244.754844] 11-...: (88 GPs behind) idle=018/0/0 softirq=1901625/1901625 fqs=1
[67244.756406] (detected by 5, t=2102 jiffies, g=338479, c=338478, q=1121)
[67244.758001] Sending NMI from CPU 5 to CPUs 11:
[67244.758013] NMI backtrace for cpu 11 skipped: idling at pc 0xffffffffae71669a
[67244.759008] rcu_sched kthread starved for 2099 jiffies! g338479 c338478 f0x0 RCU_GP_WAIT_FQS(3) ->state=0x1
[67244.760662] rcu_sched S11456 8 2 0x00000000
[67244.760667] Call Trace:
[67244.760673] __schedule+0x2c8/0x620
[67244.760676] schedule+0x44/0x90
[67244.760679] schedule_timeout+0x110/0x1a0
[67244.760683] ? del_timer_sync+0x50/0x50
[67244.760686] ? prepare_to_swait+0x62/0x70
[67244.760690] rcu_gp_kthread+0x4c8/0x7f0
[67244.760694] ? rcu_gp_kthread+0x4c8/0x7f0
[67244.760696] kthread+0x10d/0x140
[67244.760700] ? _synchronize_rcu_expedited.constprop.78+0x380/0x380
[67244.760702] ? kthread_park+0x70/0x70
[67244.760706] ret_from_fork+0x22/0x30
[68476.790067] INFO: rcu_sched detected stalls on CPUs/tasks:
[68476.791773] 11-...: (0 ticks this GP) idle=164/0/0 softirq=1901625/1901625 fqs=0
[68476.793489] (detected by 7, t=2102 jiffies, g=341009, c=341008, q=1380)
[68476.795244] Sending NMI from CPU 7 to CPUs 11:
[68476.795276] NMI backtrace for cpu 11 skipped: idling at pc 0xffffffffae71669a
[68476.796252] rcu_sched kthread starved for 2102 jiffies! g341009 c341008 f0x0 RCU_GP_WAIT_FQS(3) ->state=0x1
[68476.798063] rcu_sched S11456 8 2 0x00000000
[68476.798069] Call Trace:
[68476.798076] __schedule+0x2c8/0x620
[68476.798081] ? put_prev_entity+0x39/0x540
[68476.798083] schedule+0x44/0x90
[68476.798086] schedule_timeout+0x110/0x1a0
[68476.798089] ? del_timer_sync+0x50/0x50
[68476.798092] ? prepare_to_swait+0x62/0x70
[68476.798096] rcu_gp_kthread+0x4c8/0x7f0
[68476.798100] ? rcu_gp_kthread+0x4c8/0x7f0
[68476.798102] kthread+0x10d/0x140
[68476.798106] ? _synchronize_rcu_expedited.constprop.78+0x380/0x380
[68476.798108] ? kthread_park+0x70/0x70
[68476.798112] ret_from_fork+0x22/0x30
[68539.835809] INFO: rcu_sched detected stalls on CPUs/tasks:
[68539.837664] 11-...: (0 ticks this GP) idle=174/0/0 softirq=1901625/1901625 fqs=1
[68539.839539] (detected by 5, t=8407 jiffies, g=341009, c=341008, q=3363)
[68539.841447] Sending NMI from CPU 5 to CPUs 11:
[68539.841459] NMI backtrace for cpu 11 skipped: idling at pc 0xffffffffae71669a
[68539.842454] rcu_sched kthread starved for 6305 jiffies! g341009 c341008 f0x0 RCU_GP_WAIT_FQS(3) ->state=0x1
[68539.844425] rcu_sched S11456 8 2 0x00000000
[68539.844430] Call Trace:
[68539.844436] __schedule+0x2c8/0x620
[68539.844439] schedule+0x44/0x90
[68539.844442] schedule_timeout+0x110/0x1a0
[68539.844446] ? del_timer_sync+0x50/0x50
[68539.844449] ? prepare_to_swait+0x62/0x70
[68539.844453] rcu_gp_kthread+0x4c8/0x7f0
[68539.844456] ? rcu_gp_kthread+0x4c8/0x7f0
[68539.844459] kthread+0x10d/0x140
[68539.844462] ? _synchronize_rcu_expedited.constprop.78+0x380/0x380
[68539.844464] ? kthread_park+0x70/0x70
[68539.844468] ret_from_fork+0x22/0x30
[72010.480535] INFO: rcu_sched detected stalls on CPUs/tasks:
[72010.482562] 11-...: (0 ticks this GP) idle=4e0/0/0 softirq=1901625/1901625 fqs=0
[72010.484604] (detected by 10, t=2102 jiffies, g=348218, c=348217, q=1624)
[72010.486678] Sending NMI from CPU 10 to CPUs 11:
[72010.486702] NMI backtrace for cpu 11 skipped: idling at pc 0xffffffffae71669a
[72010.487686] rcu_sched kthread starved for 2102 jiffies! g348218 c348217 f0x0 RCU_GP_WAIT_FQS(3) ->state=0x1
[72010.489781] rcu_sched S11456 8 2 0x00000000
[72010.489787] Call Trace:
[72010.489794] __schedule+0x2c8/0x620
[72010.489800] ? put_prev_entity+0x39/0x540
[72010.489802] schedule+0x44/0x90
[72010.489805] schedule_timeout+0x110/0x1a0
[72010.489809] ? del_timer_sync+0x50/0x50
[72010.489812] ? prepare_to_swait+0x62/0x70
[72010.489817] rcu_gp_kthread+0x4c8/0x7f0
[72010.489820] ? rcu_gp_kthread+0x4c8/0x7f0
[72010.489823] kthread+0x10d/0x140
[72010.489827] ? _synchronize_rcu_expedited.constprop.78+0x380/0x380
[72010.489829] ? kthread_park+0x70/0x70
[72010.489833] ret_from_fork+0x22/0x30
[88198.215374] INFO: rcu_sched detected stalls on CPUs/tasks:
[88198.217491] 11-...: (34112 GPs behind) idle=2d4/0/0 softirq=1901625/1901625 fqs=1
[88198.219592] (detected by 6, t=2102 jiffies, g=382330, c=382329, q=1979)
[88198.221683] Sending NMI from CPU 6 to CPUs 11:
[88198.221695] NMI backtrace for cpu 11 skipped: idling at pc 0xffffffffae71669a
[88198.222692] rcu_sched kthread starved for 2100 jiffies! g382330 c382329 f0x0 RCU_GP_WAIT_FQS(3) ->state=0x1
[88198.224789] rcu_sched S11456 8 2 0x00000000
[88198.224795] Call Trace:
[88198.224803] __schedule+0x2c8/0x620
[88198.224806] schedule+0x44/0x90
[88198.224809] schedule_timeout+0x110/0x1a0
[88198.224813] ? del_timer_sync+0x50/0x50
[88198.224817] ? prepare_to_swait+0x62/0x70
[88198.224822] rcu_gp_kthread+0x4c8/0x7f0
[88198.224825] ? rcu_gp_kthread+0x4c8/0x7f0
[88198.224829] kthread+0x10d/0x140
[88198.224832] ? _synchronize_rcu_expedited.constprop.78+0x380/0x380
[88198.224834] ? kthread_park+0x70/0x70
[88198.224838] ret_from_fork+0x22/0x30
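[Editorial note: the stall reports above all share the same "(detected by …, t=…, g=…, c=…, q=…)" line. A small hedged sketch of how such lines can be extracted and compared, e.g. to check whether the grace-period counter `g` is still advancing between reports; the function name `parse_stalls` is illustrative, and the sample lines are copied from the log above.]

```python
import re

# Each RCU stall report carries a "(detected by CPU, t=... jiffies,
# g=..., c=..., q=...)" line; parsing those makes it easy to see how
# often stalls recur and whether the grace-period counter (g) advances.
STALL_RE = re.compile(
    r"\(detected by (?P<cpu>\d+), t=(?P<t>\d+) jiffies, "
    r"g=(?P<g>\d+), c=(?P<c>\d+), q=(?P<q>\d+)\)"
)

def parse_stalls(log_lines):
    """Return one dict of ints per stall-detection line found."""
    reports = []
    for line in log_lines:
        m = STALL_RE.search(line)
        if m:
            reports.append({k: int(v) for k, v in m.groupdict().items()})
    return reports

# Sample lines copied verbatim from the report above.
sample = [
    "[62880.118691] (detected by 6, t=2102 jiffies, g=329804, c=329803, q=811)",
    "[63134.341813] (detected by 4, t=2102 jiffies, g=330302, c=330301, q=1024)",
]
stalls = parse_stalls(sample)
```

In this log, `g` keeps advancing between reports while CPU 11 is always the stalled one, which is consistent with the kthread-starvation messages rather than a single wedged grace period.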
--
You are receiving this mail because:
You are watching the assignee of the bug.
* [Bug 191451] Host hangs when hyperv/pvspinlock are disabled
From: bugzilla-daemon @ 2017-10-07 2:04 UTC (permalink / raw)
To: kvm
https://bugzilla.kernel.org/show_bug.cgi?id=191451
uzytkownik2@gmail.com (uzytkownik2@gmail.com) changed:
What |Removed |Added
----------------------------------------------------------------------------
Kernel Version|4.7.3-4.12.1 |4.7.3-4.13.1
--- Comment #5 from uzytkownik2@gmail.com (uzytkownik2@gmail.com) ---
Slightly different information when I enabled panic on lockup:
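[Editorial note: a sketch of the usual knobs for the panic-on-lockup setup mentioned here, assuming a kernel built with the lockup detector; `kernel.panic_on_rcu_stall` is only available on v4.7+ kernels.]

```shell
# Panic when the soft-lockup watchdog fires (the panic below reports
# "not syncing: softlockup: hung tasks", i.e. this detector):
sysctl -w kernel.softlockup_panic=1

# Optionally also panic on hard lockups and on RCU stall warnings:
sysctl -w kernel.hardlockup_panic=1
sysctl -w kernel.panic_on_rcu_stall=1

# Boot-parameter equivalent for the first knob: softlockup_panic=1
```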
Panic#1 Part1
<4>[408949.271662] R13: ffffbcf243a33bbc R14: 0000000000000001 R15: ffffbcf243a33b94
<4>[408949.271698] </IRQ>
<4>[408949.271711] do_sys_poll+0x2a2/0x5d0
<4>[408949.271731] ? __enqueue_entity+0x7a/0x80
<4>[408949.271753] ? enqueue_entity+0x2e2/0x9d0
<4>[408949.271775] ? compat_poll_select_copy_remaining+0x130/0x130
<4>[408949.271805] ? compat_poll_select_copy_remaining+0x130/0x130
<4>[408949.271835] ? compat_poll_select_copy_remaining+0x130/0x130
<4>[408949.271864] ? compat_poll_select_copy_remaining+0x130/0x130
<4>[408949.271894] ? compat_poll_select_copy_remaining+0x130/0x130
<4>[408949.271923] ? compat_poll_select_copy_remaining+0x130/0x130
<4>[408949.271953] ? compat_poll_select_copy_remaining+0x130/0x130
<4>[408949.271982] ? compat_poll_select_copy_remaining+0x130/0x130
<4>[408949.272012] ? compat_poll_select_copy_remaining+0x130/0x130
<4>[408949.272042] SyS_ppoll+0x176/0x190
<4>[408949.272061] ? SyS_ppoll+0x176/0x190
<4>[408949.272080] ? exit_to_usermode_loop+0x8a/0xa0
<4>[408949.272104] entry_SYSCALL_64_fastpath+0x13/0x94
<4>[408949.272127] RIP: 0033:0x7f4013925edb
<4>[408949.272146] RSP: 002b:00007fff5744d720 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
<4>[408949.273003] RAX: ffffffffffffffda RBX: 00005566c67d1710 RCX: 00007f4013925edb
<4>[408949.273866] RDX: 00007fff5744d740 RSI: 0000000000000014 RDI: 00005566c8376460
<4>[408949.274722] RBP: 00007fff5744d7a4 R08: 0000000000000008 R09: 0000000000000000
<4>[408949.275565] R10: 0000000000000000 R11: 0000000000000293 R12: 00000000000a3104
<4>[408949.276392] R13: 00005566c67d1710 R14: 0000000000000000 R15: 0000000000000000
<0>[408949.277228] Kernel Offset: 0x3a200000 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffffbfffffff)
Panic#1 Part2
<4>[408949.271093] RDX: 00007fff5744d740 RSI: 0000000000000014 RDI: 00005566c8376460
<4>[408949.271093] RBP: 00007fff5744d7a4 R08: 0000000000000008 R09: 0000000000000000
<4>[408949.271094] R10: 0000000000000000 R11: 0000000000000293 R12: 00000000000a3104
<4>[408949.271095] R13: 00005566c67d1710 R14: 0000000000000000 R15: 0000000000000000
<4>[408949.271095] Code: 85 f6 49 89 fc 48 89 f2 74 13 48 8b 06 48 85 c0 74 0b 48 8d 73 58 48 85 f6 74 02 ff d0 31 c0 41 f6 44 24 44 02 74 13 48 8b 53 38 <48> 8d 4b 38 48 39 d1 ba 04 01 00 00 0f 45 c2 48 8b 13 48 39 d3
<0>[408949.271119] Kernel panic - not syncing: softlockup: hung tasks
<4>[408949.271150] CPU: 7 PID: 7050 Comm: qemu-system-x86 Tainted: P O L 4.13.2-gentoo #1
<4>[408949.271197] Hardware name: Gigabyte Technology Co., Ltd. To be filled by O.E.M./X99-SLI-CF, BIOS F1 04/15/2015
<4>[408949.271246] Call Trace:
<4>[408949.271260] <IRQ>
<4>[408949.271274] dump_stack+0x60/0x7f
<4>[408949.271293] panic+0xe8/0x238
<4>[408949.271311] watchdog_timer_fn+0x208/0x210
<4>[408949.271334] __hrtimer_run_queues+0xcd/0x130
<4>[408949.271357] hrtimer_interrupt+0xad/0x1e0
<4>[408949.271380] smp_trace_apic_timer_interrupt+0x71/0xa0
<4>[408949.271407] smp_apic_timer_interrupt+0x1c/0x20
<4>[408949.271431] apic_timer_interrupt+0x86/0x90
<4>[408949.271456] RIP: 0010:usbdev_poll+0x4d/0x90 [usbcore]
<4>[408949.271482] RSP: 0018:ffffbcf243a33a90 EFLAGS: 00000202 ORIG_RAX: ffffffffffffff10
<4>[408949.271520] RAX: 0000000000000000 RBX: ffff9d4308a98b40 RCX: ffff9d43c1bab701
<4>[408949.271555] RDX: ffff9d4308a98b78 RSI: ffffbcf243a33c10 RDI: ffff9d43c1bab700
<4>[408949.271591] RBP: ffffbcf243a33aa0 R08: ffff9d43c1bab700 R09: 0000000000000370
<4>[408949.271626] R10: ffff9d43b6ab4000 R11: 00000000000002bb R12: ffff9d43c1bab700
Panic#1 Part3
<4>[408949.271048] RDX: ffff9d4308a98b78 RSI: ffffbcf243a33c10 RDI: ffff9d43c1bab700
<4>[408949.271048] RBP: ffffbcf243a33aa0 R08: ffff9d43c1bab700 R09: 0000000000000370
<4>[408949.271049] R10: ffff9d43b6ab4000 R11: 00000000000002bb R12: ffff9d43c1bab700
<4>[408949.271050] R13: ffffbcf243a33bbc R14: 0000000000000001 R15: ffffbcf243a33b94
<4>[408949.271051] FS: 00007f4016deab00(0000) GS:ffff9d481f5c0000(0000) knlGS:0000000000000000
<4>[408949.271052] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
<4>[408949.271052] CR2: 000002a18d051000 CR3: 0000000436859000 CR4: 00000000001426e0
<4>[408949.271053] Call Trace:
<4>[408949.271059] do_sys_poll+0x2a2/0x5d0
<4>[408949.271063] ? __enqueue_entity+0x7a/0x80
<4>[408949.271065] ? enqueue_entity+0x2e2/0x9d0
<4>[408949.271067] ? compat_poll_select_copy_remaining+0x130/0x130
<4>[408949.271069] ? compat_poll_select_copy_remaining+0x130/0x130
<4>[408949.271071] ? compat_poll_select_copy_remaining+0x130/0x130
<4>[408949.271072] ? compat_poll_select_copy_remaining+0x130/0x130
<4>[408949.271074] ? compat_poll_select_copy_remaining+0x130/0x130
<4>[408949.271076] ? compat_poll_select_copy_remaining+0x130/0x130
<4>[408949.271077] ? compat_poll_select_copy_remaining+0x130/0x130
<4>[408949.271079] ? compat_poll_select_copy_remaining+0x130/0x130
<4>[408949.271081] ? compat_poll_select_copy_remaining+0x130/0x130
<4>[408949.271083] SyS_ppoll+0x176/0x190
<4>[408949.271085] ? SyS_ppoll+0x176/0x190
<4>[408949.271086] ? exit_to_usermode_loop+0x8a/0xa0
<4>[408949.271089] entry_SYSCALL_64_fastpath+0x13/0x94
<4>[408949.271090] RIP: 0033:0x7f4013925edb
<4>[408949.271091] RSP: 002b:00007fff5744d720 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
<4>[408949.271092] RAX: ffffffffffffffda RBX: 00005566c67d1710 RCX: 00007f4013925edb
Panic#1 Part4
<4>[408907.138887] rcu_gp_kthread+0x502/0x840
<4>[408907.138890] ? rcu_gp_kthread+0x502/0x840
<4>[408907.138894] kthread+0x10d/0x140
<4>[408907.138897] ? call_rcu_sched+0x30/0x30
<4>[408907.138900] ? kthread_park+0x70/0x70
<4>[408907.138903] ret_from_fork+0x22/0x30
<0>[408949.270946] watchdog: BUG: soft lockup - CPU#7 stuck for 81s! [qemu-system-x86:7050]
<4>[408949.270992] Modules linked in: macvtap ipt_MASQUERADE nf_nat_masquerade_ipv4 nf_conntrack_netlink xfrm_user xfrm_algo iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 xt_addrtype iptable_filter ip_tables xt_conntrack nf_nat nf_conntrack br_netfilter bridge stp llc nvidia_uvm(PO) af_packet tcm_loop target_core_pscsi target_core_file target_core_iblock iscsi_target_mod macvlan snd_hda_codec_hdmi btrfs nls_iso8859_1 nls_cp437 vfat coretemp fat kvm_intel kvm crc32c_intel mousedev i2c_i801 snd_hda_codec_realtek snd_hda_codec_generic nvidia_drm(PO) nvidia_modeset(PO) nvidia(PO) snd_hda_intel snd_hda_codec snd_hwdep snd_hda_core snd_pcm button snd_timer snd soundcore e1000e shpchp hid_logitech_hidpp hid_logitech_dj hid_generic usbhid hid xhci_pci ehci_pci xhci_hcd ehci_hcd usbcore usb_common sr_mod
<4>[408949.271028] cdrom vhost_net tun tap vhost_scsi vhost target_core_mod efivarfs
<4>[408949.271034] CPU: 7 PID: 7050 Comm: qemu-system-x86 Tainted: P O 4.13.2-gentoo #1
<4>[408949.271035] Hardware name: Gigabyte Technology Co., Ltd. To be filled by O.E.M./X99-SLI-CF, BIOS F1 04/15/2015
<4>[408949.271037] task: ffff9d43b6ab8000 task.stack: ffffbcf243a30000
<4>[408949.271045] RIP: 0010:usbdev_poll+0x4d/0x90 [usbcore]
<4>[408949.271046] RSP: 0018:ffffbcf243a33a90 EFLAGS: 00000202 ORIG_RAX: ffffffffffffff10
<4>[408949.271047] RAX: 0000000000000000 RBX: ffff9d4308a98b40 RCX: ffff9d43c1bab701
Panic#1 Part5
<6>[ 823.455297] usb 3-9.2: reset full-speed USB device number 3 using xhci_hcd
<6>[ 823.765321] usb 3-9.2: reset full-speed USB device number 3 using xhci_hcd
<6>[ 824.155267] usb 3-9.3: reset full-speed USB device number 4 using xhci_hcd
<6>[ 824.435267] usb 3-9.3: reset full-speed USB device number 4 using xhci_hcd
<6>[ 999.039964] worker (30163) used greatest stack depth: 8552 bytes left
<6>[ 4880.736677] perf: interrupt took too long (2515 > 2500), lowering kernel.perf_event_max_sample_rate to 79500
<6>[ 6097.254169] perf: interrupt took too long (3152 > 3143), lowering kernel.perf_event_max_sample_rate to 63400
<6>[ 8062.800175] perf: interrupt took too long (3948 > 3940), lowering kernel.perf_event_max_sample_rate to 50600
<6>[21236.525379] perf: interrupt took too long (4938 > 4935), lowering kernel.perf_event_max_sample_rate to 40500
<6>[251807.087841] perf: interrupt took too long (6177 > 6172), lowering kernel.perf_event_max_sample_rate to 32300
<3>[408907.137636] INFO: rcu_sched detected stalls on CPUs/tasks:
<3>[408907.137690] 7-...: (109 GPs behind) idle=f74/0/0 softirq=11032837/11032839 fqs=1
<3>[408907.137738] (detected by 9, t=2102 jiffies, g=1840140, c=1840139, q=578)
<6>[408907.137785] Sending NMI from CPU 9 to CPUs 7:
<4>[408907.137796] NMI backtrace for cpu 7 skipped: idling at pc 0xffffffffbb937c6a
<3>[408907.138794] rcu_sched kthread starved for 2100 jiffies! g1840140 c1840139 f0x0 RCU_GP_WAIT_FQS(3) ->state=0x1
<6>[408907.138856] rcu_sched S11432 8 2 0x00000000
<4>[408907.138862] Call Trace:
<4>[408907.138870] __schedule+0x2d1/0x640
<4>[408907.138873] schedule+0x44/0x90
<4>[408907.138876] schedule_timeout+0x110/0x1a0
<4>[408907.138880] ? del_timer_sync+0x50/0x50
<4>[408907.138883] ? prepare_to_swait+0x62/0x70