public inbox for linux-kernel@vger.kernel.org
* do_IRQ: 0.126 No irq handler for vector (irq -1)
@ 2015-07-20 10:22 Stefan Priebe - Profihost AG
  2015-07-20 10:53 ` Thomas Gleixner
  0 siblings, 1 reply; 8+ messages in thread
From: Stefan Priebe - Profihost AG @ 2015-07-20 10:22 UTC (permalink / raw)
  To: x86; +Cc: tglx, Ingo Molnar, JBeulich, linux-kernel

Hello list,

I have 36 servers, all running the vanilla 3.18.18 kernel, which see very
high disk and network load.

For a few days now I have regularly encountered the following error
messages, and quite often completely hanging disk I/O:
[535040.439859] do_IRQ: 0.126 No irq handler for vector (irq -1)
[548400.353679] do_IRQ: 2.109 No irq handler for vector (irq -1)
[551624.894507] do_IRQ: 4.84 No irq handler for vector (irq -1)
[557524.288691] do_IRQ: 1.158 No irq handler for vector (irq -1)
[559786.928441] do_IRQ: 3.172 No irq handler for vector (irq -1)
[572906.281394] do_IRQ: 3.72 No irq handler for vector (irq -1)
[576611.808128] do_IRQ: 3.118 No irq handler for vector (irq -1)
[577242.682643] do_IRQ: 2.45 No irq handler for vector (irq -1)
[578524.584545] do_IRQ: 5.190 No irq handler for vector (irq -1)
[602109.548268] do_IRQ: 3.101 No irq handler for vector (irq -1)
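[Archive note: for readers decoding these lines, the two numbers after `do_IRQ:` appear to be the CPU that received the interrupt and the vector number (`cpu.vector`), as printed by the x86 `do_IRQ()` path in arch/x86/kernel/irq.c, and `irq -1` indicates that no IRQ was mapped to that vector. A small illustrative parser, not part of the original thread, that tallies the affected (cpu, vector) pairs from dmesg-style lines:

```python
import re
from collections import Counter

# Kernel message format (arch/x86/kernel/irq.c):
#   do_IRQ: <cpu>.<vector> No irq handler for vector (irq <irq>)
LINE_RE = re.compile(
    r"do_IRQ: (?P<cpu>\d+)\.(?P<vector>\d+) "
    r"No irq handler for vector \(irq (?P<irq>-?\d+)\)"
)

def parse_do_irq(lines):
    """Tally (cpu, vector) pairs seen in do_IRQ error lines."""
    hits = Counter()
    for line in lines:
        m = LINE_RE.search(line)
        if m:
            hits[(int(m.group("cpu")), int(m.group("vector")))] += 1
    return hits

# Sample lines taken from the report above.
log = [
    "[535040.439859] do_IRQ: 0.126 No irq handler for vector (irq -1)",
    "[548400.353679] do_IRQ: 2.109 No irq handler for vector (irq -1)",
    "[551624.894507] do_IRQ: 4.84 No irq handler for vector (irq -1)",
]

for (cpu, vector), count in sorted(parse_do_irq(log).items()):
    print(f"CPU {cpu}: vector {vector} x{count}")
```

On the full log above this shows the errors scattered across CPUs 0-5 and many different vectors, which suggests a systemic vector-allocation issue rather than a single misbehaving device.]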

All systems have a single E5 Xeon, and I'm running irqbalance on them.

Chipset:
Intel C602J

Is there anything I can do to fix this? Is there maybe a kernel patch
available?

Thanks!

Greets,
Stefan



Thread overview: 8+ messages
2015-07-20 10:22 do_IRQ: 0.126 No irq handler for vector (irq -1) Stefan Priebe - Profihost AG
2015-07-20 10:53 ` Thomas Gleixner
2015-07-20 10:59   ` Stefan Priebe - Profihost AG
2015-07-21 18:12   ` Stefan Priebe
2015-07-21 21:15     ` Thomas Gleixner
2015-07-22  7:23       ` Stefan Priebe - Profihost AG
2015-07-23 15:59         ` Stefan Priebe
2015-07-26 19:42           ` Thomas Gleixner
