public inbox for linux-kernel@vger.kernel.org
* contention on long-held spinlock
@ 2011-08-19  9:21 Ortwin Glück
  2011-08-19 19:25 ` Bryan Donlan
  2011-08-19 23:30 ` Andi Kleen
  0 siblings, 2 replies; 5+ messages in thread
From: Ortwin Glück @ 2011-08-19  9:21 UTC (permalink / raw)
  To: linux-kernel

Hi,

I have observed a bad behaviour that is likely caused by spinlocks in 
the qla2xxx driver. This is a QLogic Fibre Channel storage driver.

Somehow the attached SAN had a problem and became unresponsive. Many 
processes queued up waiting to write to the device. The processes were 
doing nothing but wait, yet system load increased to insane values (40 
and above on a 4-core machine). The system was very sluggish and 
unresponsive, making it very hard and slow to see what the problem 
actually was.

I didn't run an in-depth analysis, but this is my guess: I see that 
qla2xxx uses spinlocks to guard the HW against concurrent access. So if 
the HW becomes unresponsive, all waiters busy-spin and burn 
resources, right? Those spinlocks are very fast as long as the HW 
responds well, but become a CPU burner once the HW becomes slow.

I wonder if spinlocks could be made aware of such a situation and relax. 
Something like: if spinning for more than 1000 iterations, perform a 
simple backoff and sleep. A spinlock should never busy-spin for several 
seconds, right?

Thanks,

Ortwin


end of thread, other threads:[~2011-08-23 16:24 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2011-08-19  9:21 contention on long-held spinlock Ortwin Glück
2011-08-19 19:25 ` Bryan Donlan
     [not found]   ` <5E4F49720D0BAD499EE1F01232234BA873C669E4D7@AVEXMB1.qlogic.org>
2011-08-23 15:07     ` Bryan Donlan
2011-08-23 16:24   ` Arnd Bergmann
2011-08-19 23:30 ` Andi Kleen
