Subject: Question on -rt synchronize_irq()
From: Paul E. McKenney @ 2007-09-21  1:12 UTC
  To: linux-rt-users; +Cc: mingo, tglx, dvhltc, tytso

Hello!

Color me blind, but I don't see how the following race is avoided:

CPU 0:	A hardware interrupt is received for a threaded irq, which
	eventually results in do_hardirq() being invoked and the
	descriptor lock being acquired.  Because the IRQ_INPROGRESS
	status bit is set, execution continues.  Once the handler
	returns, having already cleared the IRQ_INPROGRESS status bit,
	the descriptor lock is released.

CPU 1:	A second hardware interrupt is received for the same threaded
	irq, which also wends its way to do_hardirq() with the
	IRQ_INPROGRESS status bit set.  It enters the handler (having
	released the descriptor lock) and accesses some data structure
	that CPU 2 now wants to get rid of.

CPU 2:	A synchronize_irq() is executed, again for this same irq.
	Because the descriptor status does not have the IRQ_NODELAY
	bit set, and because the IRQ_INPROGRESS status bit is set,
	this task blocks.

CPU 0:	Execution continues near the end of do_hardirq(), which notices
	that the descriptor wait_for_handler queue is non-empty,
	and therefore wakes up CPU 2's task.

CPU 2:	The task starts running, and proceeds to clean up the data
	structures that CPU 1 is still using.

CPU 1:	This second handler is suddenly and fatally disappointed by
	the disappearance of its data structures.
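
To make the window concrete, here is a toy, single-threaded replay of
that interleaving.  It is only a model: IRQ_INPROGRESS, IRQ_NODELAY,
the descriptor status word, and the wait_for_handler queue are the
names used above, but the types, field names, and sequencing are
invented for illustration -- this is not the -rt source, just the
shape I am assuming when I describe the race.

/*
 * race_model.c -- toy, single-threaded replay of the interleaving above.
 * IRQ_INPROGRESS, IRQ_NODELAY, desc.status, and the wait_for_handler
 * queue are the names used in this mail; everything else is invented
 * for illustration and is NOT the actual -rt code.
 */
#include <stdio.h>
#include <stdlib.h>

#define IRQ_INPROGRESS	0x01
#define IRQ_NODELAY	0x02

struct irq_desc_model {
	unsigned int status;	/* IRQ_* bits */
	int waiters;		/* stand-in for the wait_for_handler queue */
};

static struct irq_desc_model desc = { .status = IRQ_INPROGRESS, .waiters = 0 };
static int *handler_data;	/* the structure CPU 1's handler is using */

int main(void)
{
	handler_data = malloc(sizeof(*handler_data));
	if (!handler_data)
		return 1;
	*handler_data = 42;

	/* CPU 0: the first handler returns; IRQ_INPROGRESS has already
	 * been cleared and the descriptor lock dropped. */
	desc.status &= ~IRQ_INPROGRESS;

	/* CPU 1: the second handler is now running (descriptor lock
	 * released) and is in the middle of using handler_data.
	 * Nothing below accounts for it -- that is the question. */

	/* CPU 2: synchronize_irq() for the same irq; IRQ_NODELAY is
	 * clear, so it queues on wait_for_handler and goes to sleep. */
	if (!(desc.status & IRQ_NODELAY))
		desc.waiters++;

	/* CPU 0: the tail of do_hardirq() sees a non-empty
	 * wait_for_handler queue and wakes the sleeper... */
	if (desc.waiters) {
		desc.waiters--;
		/* ...so CPU 2 resumes and tears the data down: */
		free(handler_data);
		handler_data = NULL;
	}

	/* CPU 1: the still-running second handler finds its data gone. */
	if (!handler_data)
		printf("second handler's data structure has been freed\n");

	return 0;
}

Compiled and run, the model simply shows the data being freed while
the second handler is conceptually still in flight; the real question
is whether the actual code has an ordering or check that this sketch
is missing.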


So what am I missing here?

							Thanx, Paul
