Subject: might_sleep oops in irq_set_affinity_notifier users
From: Joe Korty
Date: 2013-08-19 16:25 UTC
To: linux-rt-users
Hi,
The sfc, mellanox, and infiniband drivers use the
irq_set_affinity_notifier() service.  This causes these
drivers to emit a might_sleep oops on driver load, because
schedule_work() is called while a raw spin lock is held:
int __irq_set_affinity_locked(struct irq_data *data, const struct cpumask *mask)
{
	...
	if (desc->affinity_notify) {
		kref_get(&desc->affinity_notify->kref);
		schedule_work(&desc->affinity_notify->work);
	}
	...
}
int irq_set_affinity(unsigned int irq, const struct cpumask *mask)
{
	...
	raw_spin_lock_irqsave(&desc->lock, flags);
	ret = __irq_set_affinity_locked(irq_desc_get_irq_data(desc), mask);
	raw_spin_unlock_irqrestore(&desc->lock, flags);
	...
}
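For illustration, here is a minimal sketch of the offending
pattern (hypothetical driver code, not taken from any of the
drivers named above).  On an -rt kernel the spinlock_t taken
inside schedule_work()'s queuing path becomes a sleeping
lock, so calling it inside a raw_spinlock_t critical section
trips the might_sleep() check:

#include <linux/spinlock.h>
#include <linux/workqueue.h>

static DEFINE_RAW_SPINLOCK(demo_lock);
static struct work_struct demo_work;

static void demo_trigger(void)
{
	unsigned long flags;

	raw_spin_lock_irqsave(&demo_lock, flags);
	/*
	 * BUG on -rt: schedule_work() internally takes a
	 * spinlock_t, which is a sleeping lock there, while
	 * this raw lock keeps us in atomic context.
	 */
	schedule_work(&demo_work);
	raw_spin_unlock_irqrestore(&demo_lock, flags);
}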
I suppose this could be fixed by using tasklets instead of
schedule_work().  But perhaps it would be better to modify
workqueues to use raw locks, rather than sleepy locks, for
the queuing / dequeueing of work; that way, work could be
queued from both sleepy and atomic contexts under the -rt
kernel, just as it can be today under the non-rt kernel
without any problems.
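As a sketch of the tasklet variant (hypothetical names; the
kref_get() and lifetime handling the real code would need
are elided): tasklet_schedule() only sets a bit and raises a
softirq, so it is legal under the raw desc->lock, and the
tasklet body then runs in softirq context (a thread on -rt),
where schedule_work() is allowed:

#include <linux/interrupt.h>
#include <linux/workqueue.h>

static struct work_struct notify_work;	/* the affinity_notify work */

static void notify_tasklet_fn(unsigned long data)
{
	/* Now outside the raw desc->lock: safe to queue work. */
	schedule_work(&notify_work);
}

static DECLARE_TASKLET(notify_tasklet, notify_tasklet_fn, 0);

__irq_set_affinity_locked() would then call
tasklet_schedule(&notify_tasklet) in place of the direct
schedule_work().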
Joe
PS: failure paths: both sfc and mellanox invoke
irq_cpu_rmap_add(), which in turn reaches the
__irq_set_affinity_locked() path shown above, which oopses.
The infiniband driver reaches that path directly, via
irq_set_affinity_notifier().
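For reference, the rmap registration path looks like this
(abridged from lib/cpu_rmap.c in the 3.x kernels; error
handling trimmed), which is how sfc and mellanox end up with
an affinity notifier installed:

int irq_cpu_rmap_add(struct cpu_rmap *rmap, int irq)
{
	struct irq_glue *glue = kzalloc(sizeof(*glue), GFP_KERNEL);
	...
	glue->notify.notify = irq_cpu_rmap_notify;
	glue->notify.release = irq_cpu_rmap_release;
	glue->rmap = rmap;
	glue->index = cpu_rmap_add(rmap, glue);
	return irq_set_affinity_notifier(irq, &glue->notify);
}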
PPS: the oops was observed under 3.6.11.6-rt38, but a
perusal of the 3.10.6-rt3 sources shows it has the same
problem.