public inbox for linux-kernel@vger.kernel.org
* [patch 0/5] genirq: Forced threaded interrupt handlers
@ 2011-02-23 23:52 Thomas Gleixner
  2011-02-23 23:52 ` [patch 1/5] genirq: Prepare the handling of shared oneshot interrupts Thomas Gleixner
                   ` (5 more replies)
  0 siblings, 6 replies; 14+ messages in thread
From: Thomas Gleixner @ 2011-02-23 23:52 UTC (permalink / raw)
  To: LKML; +Cc: Linus Torvalds, Andrew Morton, Ingo Molnar, Peter Zijlstra

Some time ago when the threaded interrupt handlers infrastructure was
about to be merged, Andrew asked me where that command line switch was
which magically runs all interrupt handlers and the softirqs in
threads.

While we had been doing that brute force in preempt-rt for quite a
while, it took some time to come up with a reasonable, non-intrusive
implementation for mainline. We also had to find a solution which fits
Linus' recently issued "palatable Trojan horse" requirement (see:
https://lwn.net/Articles/370998/).

The gift of this patch series is the ability to add "threadirqs" to
the kernel command line and magically (almost) all interrupt handlers
- except those which are explicitly marked IRQF_NO_THREAD - are
confined into threads along with all soft interrupts.

That enhances the debuggability of the kernel, as a bug in an
interrupt handler does not necessarily take the whole machine
down. It's just the particular irq thread which goes into nirvana. Bad
luck if that's the one which is crucial to retrieve the bug report,
but in most cases - yes, I analysed quite a lot of bugzilla reports -
it will be helpful for reporters not to be forced to transcribe the
bug from the screen.

An architecture has to enable that feature in Kconfig via a selectable
option which says: Yes, we marked all interrupts which can never be
threaded - like IPIs etc. - as IRQF_NO_THREAD. All interrupts marked
IRQF_TIMER or IRQF_PER_CPU are automatically excluded from threading.
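
For illustration only, the architecture opt-in would be a plain
Kconfig select; "YOURARCH" is a made-up placeholder and the config
symbol is the one introduced later in this series:

```kconfig
# arch/yourarch/Kconfig - illustrative sketch, not from this series
config YOURARCH
	def_bool y
	select IRQ_FORCED_THREADING
```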

A side effect of this is that the long-standing request for supporting
oneshot threaded handlers on shared interrupt lines (given that all
drivers agree) is now fulfilled. I know that this is nuts, but
unfortunately common sense and a basic understanding of the problem
do not seem to be required when HW folks set out to save a gate.

Thanks,

	tglx
---
 Documentation/kernel-parameters.txt |    4 +
 include/linux/interrupt.h           |   13 +++
 include/linux/irqdesc.h             |    2 
 kernel/irq/Kconfig                  |    3 
 kernel/irq/handle.c                 |   34 +++++----
 kernel/irq/internals.h              |    2 
 kernel/irq/manage.c                 |  128 +++++++++++++++++++++++++++++++-----
 kernel/sched.c                      |    5 +
 kernel/softirq.c                    |   16 +++-
 9 files changed, 175 insertions(+), 32 deletions(-)


^ permalink raw reply	[flat|nested] 14+ messages in thread

* [patch 1/5] genirq: Prepare the handling of shared oneshot interrupts
  2011-02-23 23:52 [patch 0/5] genirq: Forced threaded interrupt handlers Thomas Gleixner
@ 2011-02-23 23:52 ` Thomas Gleixner
  2011-02-24  2:30   ` Linus Torvalds
  2011-02-26 16:22   ` [tip:irq/core] " tip-bot for Thomas Gleixner
  2011-02-23 23:52 ` [patch 2/5] genirq: Allow " Thomas Gleixner
                   ` (4 subsequent siblings)
  5 siblings, 2 replies; 14+ messages in thread
From: Thomas Gleixner @ 2011-02-23 23:52 UTC (permalink / raw)
  To: LKML; +Cc: Linus Torvalds, Andrew Morton, Ingo Molnar, Peter Zijlstra

[-- Attachment #1: genirq-prepare-handling-shared-oneshot-interrupts.patch --]
[-- Type: text/plain, Size: 8656 bytes --]

For level type interrupts we need to track how many threads are in
flight to avoid useless interrupt storms when not all thread handlers
have finished yet. Keep track of the woken threads and only unmask
when there are no more threads in flight.

Yes, I'm lazy and using a bitfield. But not only because I'm lazy: the
main reason is that it's way simpler than using a refcount. A
refcount-based solution would need to keep track of various things
like the irq thread crashing, spurious interrupts coming in,
disables/enables, free_irq() and some more. The bitfield keeps the
tracking simple and makes things just work. It's also nicely confined
to the thread code paths and does not require additional checks all
over the place.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 include/linux/interrupt.h |    2 +
 include/linux/irqdesc.h   |    2 +
 kernel/irq/handle.c       |   34 ++++++++++++++++++-----------
 kernel/irq/manage.c       |   53 ++++++++++++++++++++++++++++++++++++++++------
 4 files changed, 72 insertions(+), 19 deletions(-)

Index: linux-2.6-tip/include/linux/interrupt.h
===================================================================
--- linux-2.6-tip.orig/include/linux/interrupt.h
+++ linux-2.6-tip/include/linux/interrupt.h
@@ -99,6 +99,7 @@ typedef irqreturn_t (*irq_handler_t)(int
  * @thread_fn:	interupt handler function for threaded interrupts
  * @thread:	thread pointer for threaded interrupts
  * @thread_flags:	flags related to @thread
+ * @thread_mask:	bitmask for keeping track of @thread activity
  */
 struct irqaction {
 	irq_handler_t handler;
@@ -109,6 +110,7 @@ struct irqaction {
 	irq_handler_t thread_fn;
 	struct task_struct *thread;
 	unsigned long thread_flags;
+	unsigned long thread_mask;
 	const char *name;
 	struct proc_dir_entry *dir;
 } ____cacheline_internodealigned_in_smp;
Index: linux-2.6-tip/include/linux/irqdesc.h
===================================================================
--- linux-2.6-tip.orig/include/linux/irqdesc.h
+++ linux-2.6-tip/include/linux/irqdesc.h
@@ -28,6 +28,7 @@ struct timer_rand_state;
  * @lock:		locking for SMP
  * @affinity_notify:	context for notification of affinity changes
  * @pending_mask:	pending rebalanced interrupts
+ * @threads_oneshot:	bitfield to handle shared oneshot threads
  * @threads_active:	number of irqaction threads currently running
  * @wait_for_threads:	wait queue for sync_irq to wait for threaded handlers
  * @dir:		/proc/irq/ procfs entry
@@ -86,6 +87,7 @@ struct irq_desc {
 	cpumask_var_t		pending_mask;
 #endif
 #endif
+	unsigned long		threads_oneshot;
 	atomic_t		threads_active;
 	wait_queue_head_t       wait_for_threads;
 #ifdef CONFIG_PROC_FS
Index: linux-2.6-tip/kernel/irq/handle.c
===================================================================
--- linux-2.6-tip.orig/kernel/irq/handle.c
+++ linux-2.6-tip/kernel/irq/handle.c
@@ -51,6 +51,26 @@ static void warn_no_thread(unsigned int 
 	       "but no thread function available.", irq, action->name);
 }
 
+static void irq_wake_thread(struct irq_desc *desc, struct irqaction *action)
+{
+	/*
+	 * Wake up the handler thread for this action. In case the
+	 * thread crashed and was killed we just pretend that we
+	 * handled the interrupt. The hardirq handler has disabled the
+	 * device interrupt, so no irq storm is lurking.
+	 */
+	if (!test_bit(IRQTF_DIED, &action->thread_flags) &&
+	    !test_and_set_bit(IRQTF_RUNTHREAD, &action->thread_flags)) {
+		/*
+		 * We or the mask lockless. Safe because the code
+		 * which clears the mask is serialized
+		 * vs. IRQ_INPROGRESS.
+		 */
+		desc->threads_oneshot |= action->thread_mask;
+		wake_up_process(action->thread);
+	}
+}
+
 irqreturn_t
 handle_irq_event_percpu(struct irq_desc *desc, struct irqaction *action)
 {
@@ -84,19 +104,7 @@ handle_irq_event_percpu(struct irq_desc 
 				break;
 			}
 
-			/*
-			 * Wake up the handler thread for this
-			 * action. In case the thread crashed and was
-			 * killed we just pretend that we handled the
-			 * interrupt. The hardirq handler above has
-			 * disabled the device interrupt, so no irq
-			 * storm is lurking.
-			 */
-			if (likely(!test_bit(IRQTF_DIED,
-					     &action->thread_flags))) {
-				set_bit(IRQTF_RUNTHREAD, &action->thread_flags);
-				wake_up_process(action->thread);
-			}
+			irq_wake_thread(desc, action);
 
 			/* Fall through to add to randomness */
 		case IRQ_HANDLED:
Index: linux-2.6-tip/kernel/irq/manage.c
===================================================================
--- linux-2.6-tip.orig/kernel/irq/manage.c
+++ linux-2.6-tip/kernel/irq/manage.c
@@ -617,8 +617,11 @@ static int irq_wait_for_interrupt(struct
  * handler finished. unmask if the interrupt has not been disabled and
  * is marked MASKED.
  */
-static void irq_finalize_oneshot(unsigned int irq, struct irq_desc *desc)
+static void irq_finalize_oneshot(struct irq_desc *desc,
+				 struct irqaction *action, bool force)
 {
+	if (!(desc->istate & IRQS_ONESHOT))
+		return;
 again:
 	chip_bus_lock(desc);
 	raw_spin_lock_irq(&desc->lock);
@@ -631,6 +634,10 @@ again:
 	 * on the other CPU. If we unmask the irq line then the
 	 * interrupt can come in again and masks the line, leaves due
 	 * to IRQS_INPROGRESS and the irq line is masked forever.
+	 *
+	 * This also serializes the state of shared oneshot handlers
+	 * versus "desc->threads_oneshot |= action->thread_mask;" in
+	 * handle_irq_event().
 	 */
 	if (unlikely(desc->istate & IRQS_INPROGRESS)) {
 		raw_spin_unlock_irq(&desc->lock);
@@ -639,11 +646,23 @@ again:
 		goto again;
 	}
 
-	if (!(desc->istate & IRQS_DISABLED) && (desc->istate & IRQS_MASKED)) {
+	/*
+	 * Now check again, whether the thread should run. Otherwise
+	 * we would clear the threads_oneshot bit of this thread which
+	 * was just set.
+	 */
+	if (!force && test_bit(IRQTF_RUNTHREAD, &action->thread_flags))
+		goto out_unlock;
+
+	desc->threads_oneshot &= ~action->thread_mask;
+
+	if (!desc->threads_oneshot && !(desc->istate & IRQS_DISABLED) &&
+	    (desc->istate & IRQS_MASKED)) {
 		irq_compat_clr_masked(desc);
 		desc->istate &= ~IRQS_MASKED;
 		desc->irq_data.chip->irq_unmask(&desc->irq_data);
 	}
+out_unlock:
 	raw_spin_unlock_irq(&desc->lock);
 	chip_bus_sync_unlock(desc);
 }
@@ -691,7 +710,7 @@ static int irq_thread(void *data)
 	};
 	struct irqaction *action = data;
 	struct irq_desc *desc = irq_to_desc(action->irq);
-	int wake, oneshot = desc->istate & IRQS_ONESHOT;
+	int wake;
 
 	sched_setscheduler(current, SCHED_FIFO, &param);
 	current->irqaction = action;
@@ -719,8 +738,7 @@ static int irq_thread(void *data)
 
 			action->thread_fn(action->irq, action->dev_id);
 
-			if (oneshot)
-				irq_finalize_oneshot(action->irq, desc);
+			irq_finalize_oneshot(desc, action, false);
 		}
 
 		wake = atomic_dec_and_test(&desc->threads_active);
@@ -729,6 +747,9 @@ static int irq_thread(void *data)
 			wake_up(&desc->wait_for_threads);
 	}
 
+	/* Prevent a stale desc->threads_oneshot */
+	irq_finalize_oneshot(desc, action, true);
+
 	/*
 	 * Clear irqaction. Otherwise exit_irq_thread() would make
 	 * fuzz about an active irq thread going into nirvana.
@@ -743,6 +764,7 @@ static int irq_thread(void *data)
 void exit_irq_thread(void)
 {
 	struct task_struct *tsk = current;
+	struct irq_desc *desc;
 
 	if (!tsk->irqaction)
 		return;
@@ -751,6 +773,14 @@ void exit_irq_thread(void)
 	       "exiting task \"%s\" (%d) is an active IRQ thread (irq %d)\n",
 	       tsk->comm ? tsk->comm : "", tsk->pid, tsk->irqaction->irq);
 
+	desc = irq_to_desc(tsk->irqaction->irq);
+
+	/*
+	 * Prevent a stale desc->threads_oneshot. Must be called
+	 * before setting the IRQTF_DIED flag.
+	 */
+	irq_finalize_oneshot(desc, tsk->irqaction, true);
+
 	/*
 	 * Set the THREAD DIED flag to prevent further wakeups of the
 	 * soon to be gone threaded handler.
@@ -767,7 +797,7 @@ __setup_irq(unsigned int irq, struct irq
 {
 	struct irqaction *old, **old_ptr;
 	const char *old_name = NULL;
-	unsigned long flags;
+	unsigned long flags, thread_mask = 0;
 	int ret, nested, shared = 0;
 	cpumask_var_t mask;
 
@@ -865,12 +895,23 @@ __setup_irq(unsigned int irq, struct irq
 
 		/* add new interrupt at end of irq queue */
 		do {
+			thread_mask |= old->thread_mask;
 			old_ptr = &old->next;
 			old = *old_ptr;
 		} while (old);
 		shared = 1;
 	}
 
+	/*
+	 * Set up the thread mask for this irqaction. Unlikely to have
+	 * 32 or 64 irqs sharing one line, but who knows.
+	 */
+	if ((new->flags & IRQF_ONESHOT) && thread_mask == ~0UL) {
+		ret = -EBUSY;
+		goto out_mask;
+	}
+	new->thread_mask = 1UL << ffz(thread_mask);
+
 	if (!shared) {
 		irq_chip_set_defaults(desc->irq_data.chip);
 




* [patch 2/5] genirq: Allow shared oneshot interrupts
  2011-02-23 23:52 [patch 0/5] genirq: Forced threaded interrupt handlers Thomas Gleixner
  2011-02-23 23:52 ` [patch 1/5] genirq: Prepare the handling of shared oneshot interrupts Thomas Gleixner
@ 2011-02-23 23:52 ` Thomas Gleixner
  2011-02-26 16:22   ` [tip:irq/core] " tip-bot for Thomas Gleixner
  2011-02-23 23:52 ` [patch 3/5] genirq: Add IRQF_NO_THREAD Thomas Gleixner
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 14+ messages in thread
From: Thomas Gleixner @ 2011-02-23 23:52 UTC (permalink / raw)
  To: LKML; +Cc: Linus Torvalds, Andrew Morton, Ingo Molnar, Peter Zijlstra

[-- Attachment #1: genirq-allow-shared-oneshot-interrupts.patch --]
[-- Type: text/plain, Size: 1354 bytes --]

Support ONESHOT on shared interrupts, if all drivers agree on it.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 kernel/irq/manage.c |   10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

Index: linux-2.6-tip/kernel/irq/manage.c
===================================================================
--- linux-2.6-tip.orig/kernel/irq/manage.c
+++ linux-2.6-tip/kernel/irq/manage.c
@@ -823,10 +823,6 @@ __setup_irq(unsigned int irq, struct irq
 		rand_initialize_irq(irq);
 	}
 
-	/* Oneshot interrupts are not allowed with shared */
-	if ((new->flags & IRQF_ONESHOT) && (new->flags & IRQF_SHARED))
-		return -EINVAL;
-
 	/*
 	 * Check whether the interrupt nests into another interrupt
 	 * thread.
@@ -880,10 +876,12 @@ __setup_irq(unsigned int irq, struct irq
 		 * Can't share interrupts unless both agree to and are
 		 * the same type (level, edge, polarity). So both flag
 		 * fields must have IRQF_SHARED set and the bits which
-		 * set the trigger type must match.
+		 * set the trigger type must match. Also all must
+		 * agree on ONESHOT.
 		 */
 		if (!((old->flags & new->flags) & IRQF_SHARED) ||
-		    ((old->flags ^ new->flags) & IRQF_TRIGGER_MASK)) {
+		    ((old->flags ^ new->flags) & IRQF_TRIGGER_MASK) ||
+		    ((old->flags ^ new->flags) & IRQF_ONESHOT)) {
 			old_name = old->name;
 			goto mismatch;
 		}




* [patch 3/5] genirq: Add IRQF_NO_THREAD
  2011-02-23 23:52 [patch 0/5] genirq: Forced threaded interrupt handlers Thomas Gleixner
  2011-02-23 23:52 ` [patch 1/5] genirq: Prepare the handling of shared oneshot interrupts Thomas Gleixner
  2011-02-23 23:52 ` [patch 2/5] genirq: Allow " Thomas Gleixner
@ 2011-02-23 23:52 ` Thomas Gleixner
  2011-02-26 16:22   ` [tip:irq/core] " tip-bot for Thomas Gleixner
  2011-02-23 23:52 ` [patch 4/5] sched: Switch wait_task_inactive to schedule_hrtimeout() Thomas Gleixner
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 14+ messages in thread
From: Thomas Gleixner @ 2011-02-23 23:52 UTC (permalink / raw)
  To: LKML; +Cc: Linus Torvalds, Andrew Morton, Ingo Molnar, Peter Zijlstra

[-- Attachment #1: genirq-add-nothread-flag-for-interrupts.patch --]
[-- Type: text/plain, Size: 1273 bytes --]

Some low level interrupts cannot be threaded even when we force-thread
all interrupt handlers. Add a flag to annotate such interrupts, and
put all timer interrupts into this category by default.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 include/linux/interrupt.h |    4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

Index: linux-2.6-tip/include/linux/interrupt.h
===================================================================
--- linux-2.6-tip.orig/include/linux/interrupt.h
+++ linux-2.6-tip/include/linux/interrupt.h
@@ -58,6 +58,7 @@
  *                irq line disabled until the threaded handler has been run.
  * IRQF_NO_SUSPEND - Do not disable this IRQ during suspend
  * IRQF_FORCE_RESUME - Force enable it on resume even if IRQF_NO_SUSPEND is set
+ * IRQF_NO_THREAD - Interrupt cannot be threaded
  */
 #define IRQF_DISABLED		0x00000020
 #define IRQF_SAMPLE_RANDOM	0x00000040
@@ -70,8 +71,9 @@
 #define IRQF_ONESHOT		0x00002000
 #define IRQF_NO_SUSPEND		0x00004000
 #define IRQF_FORCE_RESUME	0x00008000
+#define IRQF_NO_THREAD		0x00010000
 
-#define IRQF_TIMER		(__IRQF_TIMER | IRQF_NO_SUSPEND)
+#define IRQF_TIMER		(__IRQF_TIMER | IRQF_NO_SUSPEND | IRQF_NO_THREAD)
 
 /*
  * These values can be returned by request_any_context_irq() and




* [patch 4/5] sched: Switch wait_task_inactive to schedule_hrtimeout()
  2011-02-23 23:52 [patch 0/5] genirq: Forced threaded interrupt handlers Thomas Gleixner
                   ` (2 preceding siblings ...)
  2011-02-23 23:52 ` [patch 3/5] genirq: Add IRQF_NO_THREAD Thomas Gleixner
@ 2011-02-23 23:52 ` Thomas Gleixner
  2011-02-26 16:23   ` [tip:irq/core] " tip-bot for Thomas Gleixner
  2011-02-23 23:52 ` [patch 5/5] genirq: Provide forced interrupt threading Thomas Gleixner
  2011-02-24  7:12 ` [patch 0/5] genirq: Forced threaded interrupt handlers Ingo Molnar
  5 siblings, 1 reply; 14+ messages in thread
From: Thomas Gleixner @ 2011-02-23 23:52 UTC (permalink / raw)
  To: LKML; +Cc: Linus Torvalds, Andrew Morton, Ingo Molnar, Peter Zijlstra

[-- Attachment #1: sched-switch-to-hrtimer-for-wait-task-inactive.patch --]
[-- Type: text/plain, Size: 870 bytes --]

When we force-thread hard and soft interrupts, the startup of
ksoftirqd hangs in kthread_bind() when wait_task_inactive() calls
schedule_timeout_uninterruptible(), because there is no softirq
running yet which could wake us up.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 kernel/sched.c |    5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

Index: linux-2.6-tip/kernel/sched.c
===================================================================
--- linux-2.6-tip.orig/kernel/sched.c
+++ linux-2.6-tip/kernel/sched.c
@@ -2224,7 +2224,10 @@ unsigned long wait_task_inactive(struct 
 		 * yield - it could be a while.
 		 */
 		if (unlikely(on_rq)) {
-			schedule_timeout_uninterruptible(1);
+			ktime_t to = ktime_set(0, NSEC_PER_SEC/HZ);
+
+			set_current_state(TASK_UNINTERRUPTIBLE);
+			schedule_hrtimeout(&to, HRTIMER_MODE_REL);
 			continue;
 		}
 




* [patch 5/5] genirq: Provide forced interrupt threading
  2011-02-23 23:52 [patch 0/5] genirq: Forced threaded interrupt handlers Thomas Gleixner
                   ` (3 preceding siblings ...)
  2011-02-23 23:52 ` [patch 4/5] sched: Switch wait_task_inactive to schedule_hrtimeout() Thomas Gleixner
@ 2011-02-23 23:52 ` Thomas Gleixner
  2011-02-26 16:23   ` [tip:irq/core] " tip-bot for Thomas Gleixner
  2011-02-24  7:12 ` [patch 0/5] genirq: Forced threaded interrupt handlers Ingo Molnar
  5 siblings, 1 reply; 14+ messages in thread
From: Thomas Gleixner @ 2011-02-23 23:52 UTC (permalink / raw)
  To: LKML; +Cc: Linus Torvalds, Andrew Morton, Ingo Molnar, Peter Zijlstra

[-- Attachment #1: genirq-provide-forced-interrupt-threading.patch --]
[-- Type: text/plain, Size: 6999 bytes --]

Add a command line parameter "threadirqs" which forces all interrupts except
those marked IRQF_NO_THREAD to run threaded. That's mostly a debug option to
allow retrieving better debug data from crashing interrupt handlers. If
"threadirqs" is not enabled on the kernel command line, then there is no
impact in the interrupt hot path.

Architecture code needs to select CONFIG_IRQ_FORCED_THREADING after
marking the interrupts which can't be threaded IRQF_NO_THREAD. All
interrupts which have IRQF_TIMER set are implicitly marked
IRQF_NO_THREAD. Also all PER_CPU interrupts are excluded.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 Documentation/kernel-parameters.txt |    4 ++
 include/linux/interrupt.h           |    7 +++
 kernel/irq/Kconfig                  |    3 +
 kernel/irq/internals.h              |    2 +
 kernel/irq/manage.c                 |   67 +++++++++++++++++++++++++++++++++---
 kernel/softirq.c                    |   16 +++++++-
 6 files changed, 93 insertions(+), 6 deletions(-)

Index: linux-2.6-tip/Documentation/kernel-parameters.txt
===================================================================
--- linux-2.6-tip.orig/Documentation/kernel-parameters.txt
+++ linux-2.6-tip/Documentation/kernel-parameters.txt
@@ -2436,6 +2436,10 @@ and is between 256 and 4096 characters. 
 			<deci-seconds>: poll all this frequency
 			0: no polling (default)
 
+	threadirqs	[KNL]
+			Force threading of all interrupt handlers except those
+			marked explicitly IRQF_NO_THREAD.
+
 	topology=	[S390]
 			Format: {off | on}
 			Specify if the kernel should make use of the cpu
Index: linux-2.6-tip/include/linux/interrupt.h
===================================================================
--- linux-2.6-tip.orig/include/linux/interrupt.h
+++ linux-2.6-tip/include/linux/interrupt.h
@@ -383,6 +383,13 @@ static inline int disable_irq_wake(unsig
 }
 #endif /* CONFIG_GENERIC_HARDIRQS */
 
+
+#ifdef CONFIG_IRQ_FORCED_THREADING
+extern bool force_irqthreads;
+#else
+#define force_irqthreads	(0)
+#endif
+
 #ifndef __ARCH_SET_SOFTIRQ_PENDING
 #define set_softirq_pending(x) (local_softirq_pending() = (x))
 #define or_softirq_pending(x)  (local_softirq_pending() |= (x))
Index: linux-2.6-tip/kernel/irq/Kconfig
===================================================================
--- linux-2.6-tip.orig/kernel/irq/Kconfig
+++ linux-2.6-tip/kernel/irq/Kconfig
@@ -38,6 +38,9 @@ config HARDIRQS_SW_RESEND
 config IRQ_PREFLOW_FASTEOI
        bool
 
+config IRQ_FORCED_THREADING
+       bool
+
 config SPARSE_IRQ
 	bool "Support sparse irq numbering"
 	depends on HAVE_SPARSE_IRQ
Index: linux-2.6-tip/kernel/irq/internals.h
===================================================================
--- linux-2.6-tip.orig/kernel/irq/internals.h
+++ linux-2.6-tip/kernel/irq/internals.h
@@ -27,12 +27,14 @@ extern int noirqdebug;
  * IRQTF_DIED      - handler thread died
  * IRQTF_WARNED    - warning "IRQ_WAKE_THREAD w/o thread_fn" has been printed
  * IRQTF_AFFINITY  - irq thread is requested to adjust affinity
+ * IRQTF_FORCED_THREAD  - irq action is force threaded
  */
 enum {
 	IRQTF_RUNTHREAD,
 	IRQTF_DIED,
 	IRQTF_WARNED,
 	IRQTF_AFFINITY,
+	IRQTF_FORCED_THREAD,
 };
 
 /*
Index: linux-2.6-tip/kernel/irq/manage.c
===================================================================
--- linux-2.6-tip.orig/kernel/irq/manage.c
+++ linux-2.6-tip/kernel/irq/manage.c
@@ -17,6 +17,17 @@
 
 #include "internals.h"
 
+#ifdef CONFIG_IRQ_FORCED_THREADING
+__read_mostly bool force_irqthreads;
+
+static int __init setup_forced_irqthreads(char *arg)
+{
+	force_irqthreads = true;
+	return 0;
+}
+early_param("threadirqs", setup_forced_irqthreads);
+#endif
+
 /**
  *	synchronize_irq - wait for pending IRQ handlers (on other CPUs)
  *	@irq: interrupt number to wait for
@@ -701,6 +712,32 @@ irq_thread_check_affinity(struct irq_des
 #endif
 
 /*
+ * Interrupts which are not explicitly requested as threaded
+ * interrupts rely on the implicit bh/preempt disable of the hard irq
+ * context. So we need to disable bh here to avoid deadlocks and other
+ * side effects.
+ */
+static void
+irq_forced_thread_fn(struct irq_desc *desc, struct irqaction *action)
+{
+	local_bh_disable();
+	action->thread_fn(action->irq, action->dev_id);
+	irq_finalize_oneshot(desc, action, false);
+	local_bh_enable();
+}
+
+/*
+ * Interrupts explicitly requested as threaded interrupts want to be
+ * preemptible - many of them need to sleep and wait for slow buses to
+ * complete.
+ */
+static void irq_thread_fn(struct irq_desc *desc, struct irqaction *action)
+{
+	action->thread_fn(action->irq, action->dev_id);
+	irq_finalize_oneshot(desc, action, false);
+}
+
+/*
  * Interrupt handler thread
  */
 static int irq_thread(void *data)
@@ -710,8 +747,15 @@ static int irq_thread(void *data)
 	};
 	struct irqaction *action = data;
 	struct irq_desc *desc = irq_to_desc(action->irq);
+	void (*handler_fn)(struct irq_desc *desc, struct irqaction *action);
 	int wake;
 
+	if (force_irqthreads && test_bit(IRQTF_FORCED_THREAD,
+					 &action->thread_flags))
+		handler_fn = irq_forced_thread_fn;
+	else
+		handler_fn = irq_thread_fn;
+
 	sched_setscheduler(current, SCHED_FIFO, &param);
 	current->irqaction = action;
 
@@ -735,10 +779,7 @@ static int irq_thread(void *data)
 			raw_spin_unlock_irq(&desc->lock);
 		} else {
 			raw_spin_unlock_irq(&desc->lock);
-
-			action->thread_fn(action->irq, action->dev_id);
-
-			irq_finalize_oneshot(desc, action, false);
+			handler_fn(desc, action);
 		}
 
 		wake = atomic_dec_and_test(&desc->threads_active);
@@ -788,6 +829,22 @@ void exit_irq_thread(void)
 	set_bit(IRQTF_DIED, &tsk->irqaction->thread_flags);
 }
 
+static void irq_setup_forced_threading(struct irqaction *new)
+{
+	if (!force_irqthreads)
+		return;
+	if (new->flags & (IRQF_NO_THREAD | IRQF_PERCPU | IRQF_ONESHOT))
+		return;
+
+	new->flags |= IRQF_ONESHOT;
+
+	if (!new->thread_fn) {
+		set_bit(IRQTF_FORCED_THREAD, &new->thread_flags);
+		new->thread_fn = new->handler;
+		new->handler = irq_default_primary_handler;
+	}
+}
+
 /*
  * Internal function to register an irqaction - typically used to
  * allocate special interrupts that are part of the architecture.
@@ -837,6 +894,8 @@ __setup_irq(unsigned int irq, struct irq
 		 * dummy function which warns when called.
 		 */
 		new->handler = irq_nested_primary_handler;
+	} else {
+		irq_setup_forced_threading(new);
 	}
 
 	/*
Index: linux-2.6-tip/kernel/softirq.c
===================================================================
--- linux-2.6-tip.orig/kernel/softirq.c
+++ linux-2.6-tip/kernel/softirq.c
@@ -311,9 +311,21 @@ void irq_enter(void)
 }
 
 #ifdef __ARCH_IRQ_EXIT_IRQS_DISABLED
-# define invoke_softirq()	__do_softirq()
+static inline void invoke_softirq(void)
+{
+	if (!force_irqthreads)
+		__do_softirq();
+	else
+		wakeup_softirqd();
+}
 #else
-# define invoke_softirq()	do_softirq()
+static inline void invoke_softirq(void)
+{
+	if (!force_irqthreads)
+		do_softirq();
+	else
+		wakeup_softirqd();
+}
 #endif
 
 /*




* Re: [patch 1/5] genirq: Prepare the handling of shared oneshot interrupts
  2011-02-23 23:52 ` [patch 1/5] genirq: Prepare the handling of shared oneshot interrupts Thomas Gleixner
@ 2011-02-24  2:30   ` Linus Torvalds
  2011-02-24 17:56     ` Thomas Gleixner
  2011-02-26 16:22   ` [tip:irq/core] " tip-bot for Thomas Gleixner
  1 sibling, 1 reply; 14+ messages in thread
From: Linus Torvalds @ 2011-02-24  2:30 UTC (permalink / raw)
  To: Thomas Gleixner; +Cc: LKML, Andrew Morton, Ingo Molnar, Peter Zijlstra

On Wed, Feb 23, 2011 at 3:52 PM, Thomas Gleixner <tglx@linutronix.de> wrote:
>
> +               /*
> +                * We or the mask lockless. Safe because the code
> +                * which clears the mask is serialized
> +                * vs. IRQ_INPROGRESS.
> +                */
> +               desc->threads_oneshot |= action->thread_mask;
> +               wake_up_process(action->thread);

That comment makes no sense.

What has "code which clears the mask" to do with anything?

Even if another CPU _sets_ a bit, doing so in parallel will lose bits
if it's not locked. One CPU will read the old value, set its bit, and
store the new value: with the race, only one of the bits will be set.

So maybe the code is safe, but the comment is still totally wrong. You
had better be safe not just against people clearing bits, you need to
be safe against ANY OTHER WRITER (clear or set), including very much
OTHER BITS in the same word.

So if it really is safe, comment on ALL the cases.

                     Linus


* Re: [patch 0/5] genirq: Forced threaded interrupt handlers
  2011-02-23 23:52 [patch 0/5] genirq: Forced threaded interrupt handlers Thomas Gleixner
                   ` (4 preceding siblings ...)
  2011-02-23 23:52 ` [patch 5/5] genirq: Provide forced interrupt threading Thomas Gleixner
@ 2011-02-24  7:12 ` Ingo Molnar
  5 siblings, 0 replies; 14+ messages in thread
From: Ingo Molnar @ 2011-02-24  7:12 UTC (permalink / raw)
  To: Thomas Gleixner; +Cc: LKML, Linus Torvalds, Andrew Morton, Peter Zijlstra


* Thomas Gleixner <tglx@linutronix.de> wrote:

> Some time ago when the threaded interrupt handlers infrastructure was
> about to be merged, Andrew asked me where that command line switch was
> which magically runs all interrupt handlers and the softirqs in
> threads.
> 
> While we had been doing that brute force in preempt-rt for quite a
> while, it took some time to come up with a reasonable, non-intrusive
> implementation for mainline. We also had to find a solution which fits
> Linus' recently issued "palatable Trojan horse" requirement (see:
> https://lwn.net/Articles/370998/).
> 
> The gift of this patch series is the ability to add "threadirqs" to
> the kernel command line and magically (almost) all interrupt handlers
> - except those which are explicitly marked IRQF_NO_THREAD - are
> confined into threads along with all soft interrupts.
> 
> That enhances the debuggability of the kernel, as a bug in an
> interrupt handler does not necessarily take the whole machine
> down. It's just the particular irq thread which goes into nirvana. Bad
> luck if that's the one which is crucial to retrieve the bug report,
> but in most cases - yes, I analysed quite a lot of bugzilla reports -
> it will be helpful for reporters not to be forced to transcribe the
> bug from the screen.

Just a quick bike shed painting suggestion: could we please name it anything but 
'forced' threaded irqs? Something like 'irqthread debugging' or 'full irqthreads'?

Thanks,

	Ingo


* Re: [patch 1/5] genirq: Prepare the handling of shared oneshot interrupts
  2011-02-24  2:30   ` Linus Torvalds
@ 2011-02-24 17:56     ` Thomas Gleixner
  0 siblings, 0 replies; 14+ messages in thread
From: Thomas Gleixner @ 2011-02-24 17:56 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: LKML, Andrew Morton, Ingo Molnar, Peter Zijlstra

[-- Attachment #1: Type: TEXT/PLAIN, Size: 4554 bytes --]

On Wed, 23 Feb 2011, Linus Torvalds wrote:
> On Wed, Feb 23, 2011 at 3:52 PM, Thomas Gleixner <tglx@linutronix.de> wrote:
> >
> > +               /*
> > +                * We or the mask lockless. Safe because the code
> > +                * which clears the mask is serialized
> > +                * vs. IRQ_INPROGRESS.
> > +                */
> > +               desc->threads_oneshot |= action->thread_mask;
> > +               wake_up_process(action->thread);
> 
> That comment makes no sense.
> 
> What has "code which clears the mask" to do with anything?
> 
> Even if another CPU _sets_ a bit, doing so in parallel will lose bits
> if it's not locked. One CPU will read the old value, set its bit, and
> store the new value: with the race, only one of the bits will be set.
> 
> So maybe the code is safe, but the comment is still totally wrong. You
> had better be safe not just against people clearing bits, you need to
> be safe against ANY OTHER WRITER (clear or set), including very much
> OTHER BITS in the same word.
> 
> So if it really is safe, comment on ALL the cases.

Fair enough, that comment really sucks if you are not familiar with
the gory details of that code.

What about the following replacement, which might be overly detailed,
but that's exactly how it works.

Thanks,

	tglx

Index: linux-2.6-tip/kernel/irq/handle.c
===================================================================
--- linux-2.6-tip.orig/kernel/irq/handle.c
+++ linux-2.6-tip/kernel/irq/handle.c
@@ -57,18 +57,60 @@ static void irq_wake_thread(struct irq_d
 	 * Wake up the handler thread for this action. In case the
 	 * thread crashed and was killed we just pretend that we
 	 * handled the interrupt. The hardirq handler has disabled the
-	 * device interrupt, so no irq storm is lurking.
+	 * device interrupt, so no irq storm is lurking. If the
+	 * RUNTHREAD bit is already set, nothing to do.
 	 */
-	if (!test_bit(IRQTF_DIED, &action->thread_flags) &&
-	    !test_and_set_bit(IRQTF_RUNTHREAD, &action->thread_flags)) {
-		/*
-		 * We or the mask lockless. Safe because the code
-		 * which clears the mask is serialized
-		 * vs. IRQ_INPROGRESS.
-		 */
-		desc->threads_oneshot |= action->thread_mask;
-		wake_up_process(action->thread);
-	}
+	if (test_bit(IRQTF_DIED, &action->thread_flags) ||
+	    test_and_set_bit(IRQTF_RUNTHREAD, &action->thread_flags))
+		return;
+
+	/*
+	 * It's safe to OR the mask lockless here. We have only two
+	 * places which write to threads_oneshot: This code and the
+	 * irq thread.
+	 *
+	 * This code is the hard irq context and can never run on two
+	 * cpus in parallel. If it ever does we have more serious
+	 * problems than this bitmask.
+	 *
+	 * The irq threads of this irq which clear their "running" bit
+	 * in threads_oneshot are serialized via desc->lock against
+	 * each other and they are serialized against this code by
+	 * IRQS_INPROGRESS.
+	 *
+	 * Hard irq handler:
+	 *
+	 *	spin_lock(desc->lock);
+	 *	desc->state |= IRQS_INPROGRESS;
+	 *	spin_unlock(desc->lock);
+	 *	set_bit(IRQTF_RUNTHREAD, &action->thread_flags);
+	 *	desc->threads_oneshot |= mask;
+	 *	spin_lock(desc->lock);
+	 *	desc->state &= ~IRQS_INPROGRESS;
+	 *	spin_unlock(desc->lock);
+	 *
+	 * irq thread:
+	 *
+	 * again:
+	 *	spin_lock(desc->lock);
+	 *	if (desc->state & IRQS_INPROGRESS) {
+	 *		spin_unlock(desc->lock);
+	 *		while(desc->state & IRQS_INPROGRESS)
+	 *			cpu_relax();
+	 *		goto again;
+	 *	}
+	 *	if (!test_bit(IRQTF_RUNTHREAD, &action->thread_flags))
+	 *		desc->threads_oneshot &= ~mask;
+	 *	spin_unlock(desc->lock);
+	 *
+	 * So either the thread waits for us to clear IRQS_INPROGRESS
+	 * or we are waiting in the flow handler for desc->lock to be
+	 * released before we reach this point. The thread also checks
+	 * IRQTF_RUNTHREAD under desc->lock. If set it leaves
+	 * threads_oneshot untouched and runs the thread another time.
+	 */
+	desc->threads_oneshot |= action->thread_mask;
+	wake_up_process(action->thread);
 }
 
 irqreturn_t
Index: linux-2.6-tip/kernel/irq/manage.c
===================================================================
--- linux-2.6-tip.orig/kernel/irq/manage.c
+++ linux-2.6-tip/kernel/irq/manage.c
@@ -637,7 +637,8 @@ again:
 	 *
 	 * This also serializes the state of shared oneshot handlers
 	 * versus "desc->threads_oneshot |= action->thread_mask;" in
-	 * handle_irq_event().
+	 * irq_wake_thread(). See the comment there which explains the
+	 * serialization.
 	 */
 	if (unlikely(desc->istate & IRQS_INPROGRESS)) {
 		raw_spin_unlock_irq(&desc->lock);


* [tip:irq/core] genirq: Prepare the handling of shared oneshot interrupts
  2011-02-23 23:52 ` [patch 1/5] genirq: Prepare the handling of shared oneshot interrupts Thomas Gleixner
  2011-02-24  2:30   ` Linus Torvalds
@ 2011-02-26 16:22   ` tip-bot for Thomas Gleixner
  1 sibling, 0 replies; 14+ messages in thread
From: tip-bot for Thomas Gleixner @ 2011-02-26 16:22 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, peterz, tglx

Commit-ID:  b5faba21a6805c33b40e258d36f57997ee1de131
Gitweb:     http://git.kernel.org/tip/b5faba21a6805c33b40e258d36f57997ee1de131
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Wed, 23 Feb 2011 23:52:13 +0000
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Fri, 25 Feb 2011 20:24:21 +0100

genirq: Prepare the handling of shared oneshot interrupts

For level type interrupts we need to track how many threads are in
flight to avoid useless interrupt storms when not all thread handlers
have finished yet. Keep track of the woken threads and only unmask
when there are no more threads in flight.

Yes, I'm lazy and using a bitfield. But not only because I'm lazy, the
main reason is that it's way simpler than using a refcount. A refcount
based solution would need to keep track of various things like
crashing the irq thread, spurious interrupts coming in,
disables/enables, free_irq() and some more. The bitfield keeps the
tracking simple and makes things just work. It's also nicely confined
to the thread code paths and does not require additional checks all
over the place.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
LKML-Reference: <20110223234956.388095876@linutronix.de>
---
 include/linux/interrupt.h |    2 +
 include/linux/irqdesc.h   |    2 +
 kernel/irq/handle.c       |   76 +++++++++++++++++++++++++++++++++++++--------
 kernel/irq/manage.c       |   54 ++++++++++++++++++++++++++++---
 4 files changed, 115 insertions(+), 19 deletions(-)

diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
index 8da6643..e116fef 100644
--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
@@ -99,6 +99,7 @@ typedef irqreturn_t (*irq_handler_t)(int, void *);
  * @thread_fn:	interupt handler function for threaded interrupts
  * @thread:	thread pointer for threaded interrupts
  * @thread_flags:	flags related to @thread
+ * @thread_mask:	bitmask for keeping track of @thread activity
  */
 struct irqaction {
 	irq_handler_t handler;
@@ -109,6 +110,7 @@ struct irqaction {
 	irq_handler_t thread_fn;
 	struct task_struct *thread;
 	unsigned long thread_flags;
+	unsigned long thread_mask;
 	const char *name;
 	struct proc_dir_entry *dir;
 } ____cacheline_internodealigned_in_smp;
diff --git a/include/linux/irqdesc.h b/include/linux/irqdesc.h
index 2f87d64..9eb9cd3 100644
--- a/include/linux/irqdesc.h
+++ b/include/linux/irqdesc.h
@@ -28,6 +28,7 @@ struct timer_rand_state;
  * @lock:		locking for SMP
  * @affinity_notify:	context for notification of affinity changes
  * @pending_mask:	pending rebalanced interrupts
+ * @threads_oneshot:	bitfield to handle shared oneshot threads
  * @threads_active:	number of irqaction threads currently running
  * @wait_for_threads:	wait queue for sync_irq to wait for threaded handlers
  * @dir:		/proc/irq/ procfs entry
@@ -86,6 +87,7 @@ struct irq_desc {
 	cpumask_var_t		pending_mask;
 #endif
 #endif
+	unsigned long		threads_oneshot;
 	atomic_t		threads_active;
 	wait_queue_head_t       wait_for_threads;
 #ifdef CONFIG_PROC_FS
diff --git a/kernel/irq/handle.c b/kernel/irq/handle.c
index b110c83..517561f 100644
--- a/kernel/irq/handle.c
+++ b/kernel/irq/handle.c
@@ -51,6 +51,68 @@ static void warn_no_thread(unsigned int irq, struct irqaction *action)
 	       "but no thread function available.", irq, action->name);
 }
 
+static void irq_wake_thread(struct irq_desc *desc, struct irqaction *action)
+{
+	/*
+	 * Wake up the handler thread for this action. In case the
+	 * thread crashed and was killed we just pretend that we
+	 * handled the interrupt. The hardirq handler has disabled the
+	 * device interrupt, so no irq storm is lurking. If the
+	 * RUNTHREAD bit is already set, nothing to do.
+	 */
+	if (test_bit(IRQTF_DIED, &action->thread_flags) ||
+	    test_and_set_bit(IRQTF_RUNTHREAD, &action->thread_flags))
+		return;
+
+	/*
+	 * It's safe to OR the mask lockless here. We have only two
+	 * places which write to threads_oneshot: This code and the
+	 * irq thread.
+	 *
+	 * This code is the hard irq context and can never run on two
+	 * cpus in parallel. If it ever does we have more serious
+	 * problems than this bitmask.
+	 *
+	 * The irq threads of this irq which clear their "running" bit
+	 * in threads_oneshot are serialized via desc->lock against
+	 * each other and they are serialized against this code by
+	 * IRQS_INPROGRESS.
+	 *
+	 * Hard irq handler:
+	 *
+	 *	spin_lock(desc->lock);
+	 *	desc->state |= IRQS_INPROGRESS;
+	 *	spin_unlock(desc->lock);
+	 *	set_bit(IRQTF_RUNTHREAD, &action->thread_flags);
+	 *	desc->threads_oneshot |= mask;
+	 *	spin_lock(desc->lock);
+	 *	desc->state &= ~IRQS_INPROGRESS;
+	 *	spin_unlock(desc->lock);
+	 *
+	 * irq thread:
+	 *
+	 * again:
+	 *	spin_lock(desc->lock);
+	 *	if (desc->state & IRQS_INPROGRESS) {
+	 *		spin_unlock(desc->lock);
+	 *		while(desc->state & IRQS_INPROGRESS)
+	 *			cpu_relax();
+	 *		goto again;
+	 *	}
+	 *	if (!test_bit(IRQTF_RUNTHREAD, &action->thread_flags))
+	 *		desc->threads_oneshot &= ~mask;
+	 *	spin_unlock(desc->lock);
+	 *
+	 * So either the thread waits for us to clear IRQS_INPROGRESS
+	 * or we are waiting in the flow handler for desc->lock to be
+	 * released before we reach this point. The thread also checks
+	 * IRQTF_RUNTHREAD under desc->lock. If set it leaves
+	 * threads_oneshot untouched and runs the thread another time.
+	 */
+	desc->threads_oneshot |= action->thread_mask;
+	wake_up_process(action->thread);
+}
+
 irqreturn_t
 handle_irq_event_percpu(struct irq_desc *desc, struct irqaction *action)
 {
@@ -85,19 +147,7 @@ handle_irq_event_percpu(struct irq_desc *desc, struct irqaction *action)
 				break;
 			}
 
-			/*
-			 * Wake up the handler thread for this
-			 * action. In case the thread crashed and was
-			 * killed we just pretend that we handled the
-			 * interrupt. The hardirq handler above has
-			 * disabled the device interrupt, so no irq
-			 * storm is lurking.
-			 */
-			if (likely(!test_bit(IRQTF_DIED,
-					     &action->thread_flags))) {
-				set_bit(IRQTF_RUNTHREAD, &action->thread_flags);
-				wake_up_process(action->thread);
-			}
+			irq_wake_thread(desc, action);
 
 			/* Fall through to add to randomness */
 		case IRQ_HANDLED:
diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index 01f8a95..2301de1 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -617,8 +617,11 @@ static int irq_wait_for_interrupt(struct irqaction *action)
  * handler finished. unmask if the interrupt has not been disabled and
  * is marked MASKED.
  */
-static void irq_finalize_oneshot(unsigned int irq, struct irq_desc *desc)
+static void irq_finalize_oneshot(struct irq_desc *desc,
+				 struct irqaction *action, bool force)
 {
+	if (!(desc->istate & IRQS_ONESHOT))
+		return;
 again:
 	chip_bus_lock(desc);
 	raw_spin_lock_irq(&desc->lock);
@@ -631,6 +634,11 @@ again:
 	 * on the other CPU. If we unmask the irq line then the
 	 * interrupt can come in again and masks the line, leaves due
 	 * to IRQS_INPROGRESS and the irq line is masked forever.
+	 *
+	 * This also serializes the state of shared oneshot handlers
+	 * versus "desc->threads_oneshot |= action->thread_mask;" in
+	 * irq_wake_thread(). See the comment there which explains the
+	 * serialization.
 	 */
 	if (unlikely(desc->istate & IRQS_INPROGRESS)) {
 		raw_spin_unlock_irq(&desc->lock);
@@ -639,11 +647,23 @@ again:
 		goto again;
 	}
 
-	if (!(desc->istate & IRQS_DISABLED) && (desc->istate & IRQS_MASKED)) {
+	/*
+	 * Now check again, whether the thread should run. Otherwise
+	 * we would clear the threads_oneshot bit of this thread which
+	 * was just set.
+	 */
+	if (!force && test_bit(IRQTF_RUNTHREAD, &action->thread_flags))
+		goto out_unlock;
+
+	desc->threads_oneshot &= ~action->thread_mask;
+
+	if (!desc->threads_oneshot && !(desc->istate & IRQS_DISABLED) &&
+	    (desc->istate & IRQS_MASKED)) {
 		irq_compat_clr_masked(desc);
 		desc->istate &= ~IRQS_MASKED;
 		desc->irq_data.chip->irq_unmask(&desc->irq_data);
 	}
+out_unlock:
 	raw_spin_unlock_irq(&desc->lock);
 	chip_bus_sync_unlock(desc);
 }
@@ -691,7 +711,7 @@ static int irq_thread(void *data)
 	};
 	struct irqaction *action = data;
 	struct irq_desc *desc = irq_to_desc(action->irq);
-	int wake, oneshot = desc->istate & IRQS_ONESHOT;
+	int wake;
 
 	sched_setscheduler(current, SCHED_FIFO, &param);
 	current->irqaction = action;
@@ -719,8 +739,7 @@ static int irq_thread(void *data)
 
 			action->thread_fn(action->irq, action->dev_id);
 
-			if (oneshot)
-				irq_finalize_oneshot(action->irq, desc);
+			irq_finalize_oneshot(desc, action, false);
 		}
 
 		wake = atomic_dec_and_test(&desc->threads_active);
@@ -729,6 +748,9 @@ static int irq_thread(void *data)
 			wake_up(&desc->wait_for_threads);
 	}
 
+	/* Prevent a stale desc->threads_oneshot */
+	irq_finalize_oneshot(desc, action, true);
+
 	/*
 	 * Clear irqaction. Otherwise exit_irq_thread() would make
 	 * fuzz about an active irq thread going into nirvana.
@@ -743,6 +765,7 @@ static int irq_thread(void *data)
 void exit_irq_thread(void)
 {
 	struct task_struct *tsk = current;
+	struct irq_desc *desc;
 
 	if (!tsk->irqaction)
 		return;
@@ -751,6 +774,14 @@ void exit_irq_thread(void)
 	       "exiting task \"%s\" (%d) is an active IRQ thread (irq %d)\n",
 	       tsk->comm ? tsk->comm : "", tsk->pid, tsk->irqaction->irq);
 
+	desc = irq_to_desc(tsk->irqaction->irq);
+
+	/*
+	 * Prevent a stale desc->threads_oneshot. Must be called
+	 * before setting the IRQTF_DIED flag.
+	 */
+	irq_finalize_oneshot(desc, tsk->irqaction, true);
+
 	/*
 	 * Set the THREAD DIED flag to prevent further wakeups of the
 	 * soon to be gone threaded handler.
@@ -767,7 +798,7 @@ __setup_irq(unsigned int irq, struct irq_desc *desc, struct irqaction *new)
 {
 	struct irqaction *old, **old_ptr;
 	const char *old_name = NULL;
-	unsigned long flags;
+	unsigned long flags, thread_mask = 0;
 	int ret, nested, shared = 0;
 	cpumask_var_t mask;
 
@@ -865,12 +896,23 @@ __setup_irq(unsigned int irq, struct irq_desc *desc, struct irqaction *new)
 
 		/* add new interrupt at end of irq queue */
 		do {
+			thread_mask |= old->thread_mask;
 			old_ptr = &old->next;
 			old = *old_ptr;
 		} while (old);
 		shared = 1;
 	}
 
+	/*
+	 * Setup the thread mask for this irqaction. Unlikely to have
+	 * 32 or 64 irqs sharing one line, but who knows.
+	 */
+	if (new->flags & IRQF_ONESHOT && thread_mask == ~0UL) {
+		ret = -EBUSY;
+		goto out_mask;
+	}
+	new->thread_mask = 1 << ffz(thread_mask);
+
 	if (!shared) {
 		irq_chip_set_defaults(desc->irq_data.chip);
 


* [tip:irq/core] genirq: Allow shared oneshot interrupts
  2011-02-23 23:52 ` [patch 2/5] genirq: Allow " Thomas Gleixner
@ 2011-02-26 16:22   ` tip-bot for Thomas Gleixner
  0 siblings, 0 replies; 14+ messages in thread
From: tip-bot for Thomas Gleixner @ 2011-02-26 16:22 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, peterz, tglx

Commit-ID:  9d591edd02a245305b1b9379e4c5571bad4d2774
Gitweb:     http://git.kernel.org/tip/9d591edd02a245305b1b9379e4c5571bad4d2774
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Wed, 23 Feb 2011 23:52:16 +0000
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Fri, 25 Feb 2011 20:24:21 +0100

genirq: Allow shared oneshot interrupts

Support ONESHOT on shared interrupts, if all drivers agree on it.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
LKML-Reference: <20110223234956.483640430@linutronix.de>
---
 kernel/irq/manage.c |   10 ++++------
 1 files changed, 4 insertions(+), 6 deletions(-)

diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index 2301de1..58c8613 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -824,10 +824,6 @@ __setup_irq(unsigned int irq, struct irq_desc *desc, struct irqaction *new)
 		rand_initialize_irq(irq);
 	}
 
-	/* Oneshot interrupts are not allowed with shared */
-	if ((new->flags & IRQF_ONESHOT) && (new->flags & IRQF_SHARED))
-		return -EINVAL;
-
 	/*
 	 * Check whether the interrupt nests into another interrupt
 	 * thread.
@@ -881,10 +877,12 @@ __setup_irq(unsigned int irq, struct irq_desc *desc, struct irqaction *new)
 		 * Can't share interrupts unless both agree to and are
 		 * the same type (level, edge, polarity). So both flag
 		 * fields must have IRQF_SHARED set and the bits which
-		 * set the trigger type must match.
+		 * set the trigger type must match. Also all must
+		 * agree on ONESHOT.
 		 */
 		if (!((old->flags & new->flags) & IRQF_SHARED) ||
-		    ((old->flags ^ new->flags) & IRQF_TRIGGER_MASK)) {
+		    ((old->flags ^ new->flags) & IRQF_TRIGGER_MASK) ||
+		    ((old->flags ^ new->flags) & IRQF_ONESHOT)) {
 			old_name = old->name;
 			goto mismatch;
 		}


* [tip:irq/core] genirq: Add IRQF_NO_THREAD
  2011-02-23 23:52 ` [patch 3/5] genirq: Add IRQF_NO_THREAD Thomas Gleixner
@ 2011-02-26 16:22   ` tip-bot for Thomas Gleixner
  0 siblings, 0 replies; 14+ messages in thread
From: tip-bot for Thomas Gleixner @ 2011-02-26 16:22 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, peterz, tglx

Commit-ID:  0c4602ff88d6d6ef0ee6d228ee9acaa6448ff6f5
Gitweb:     http://git.kernel.org/tip/0c4602ff88d6d6ef0ee6d228ee9acaa6448ff6f5
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Wed, 23 Feb 2011 23:52:18 +0000
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Fri, 25 Feb 2011 20:24:22 +0100

genirq: Add IRQF_NO_THREAD

Some low level interrupts cannot be threaded even when we force-thread
all interrupt handlers. Add a flag to annotate such interrupts. Add
all timer interrupts to this category by default.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
LKML-Reference: <20110223234956.578893460@linutronix.de>
---
 include/linux/interrupt.h |    4 +++-
 1 files changed, 3 insertions(+), 1 deletions(-)

diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
index e116fef..0fc3eb9 100644
--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
@@ -58,6 +58,7 @@
  *                irq line disabled until the threaded handler has been run.
  * IRQF_NO_SUSPEND - Do not disable this IRQ during suspend
  * IRQF_FORCE_RESUME - Force enable it on resume even if IRQF_NO_SUSPEND is set
+ * IRQF_NO_THREAD - Interrupt cannot be threaded
  */
 #define IRQF_DISABLED		0x00000020
 #define IRQF_SAMPLE_RANDOM	0x00000040
@@ -70,8 +71,9 @@
 #define IRQF_ONESHOT		0x00002000
 #define IRQF_NO_SUSPEND		0x00004000
 #define IRQF_FORCE_RESUME	0x00008000
+#define IRQF_NO_THREAD		0x00010000
 
-#define IRQF_TIMER		(__IRQF_TIMER | IRQF_NO_SUSPEND)
+#define IRQF_TIMER		(__IRQF_TIMER | IRQF_NO_SUSPEND | IRQF_NO_THREAD)
 
 /*
  * These values can be returned by request_any_context_irq() and


* [tip:irq/core] sched: Switch wait_task_inactive to schedule_hrtimeout()
  2011-02-23 23:52 ` [patch 4/5] sched: Switch wait_task_inactive to schedule_hrtimeout() Thomas Gleixner
@ 2011-02-26 16:23   ` tip-bot for Thomas Gleixner
  0 siblings, 0 replies; 14+ messages in thread
From: tip-bot for Thomas Gleixner @ 2011-02-26 16:23 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, peterz, tglx

Commit-ID:  8eb90c30e0e815a1308828352eabd03ca04229dd
Gitweb:     http://git.kernel.org/tip/8eb90c30e0e815a1308828352eabd03ca04229dd
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Wed, 23 Feb 2011 23:52:21 +0000
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Fri, 25 Feb 2011 20:24:22 +0100

sched: Switch wait_task_inactive to schedule_hrtimeout()

When we force-thread hard and soft interrupts the startup of ksoftirqd
would hang in kthread_bind() when wait_task_inactive() calls
schedule_timeout_uninterruptible() because there is no softirq yet
which will wake us up.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
LKML-Reference: <20110223234956.677109139@linutronix.de>
---
 kernel/sched.c |    5 ++++-
 1 files changed, 4 insertions(+), 1 deletions(-)

diff --git a/kernel/sched.c b/kernel/sched.c
index 18d38e4..66ca5d9 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -2224,7 +2224,10 @@ unsigned long wait_task_inactive(struct task_struct *p, long match_state)
 		 * yield - it could be a while.
 		 */
 		if (unlikely(on_rq)) {
-			schedule_timeout_uninterruptible(1);
+			ktime_t to = ktime_set(0, NSEC_PER_SEC/HZ);
+
+			set_current_state(TASK_UNINTERRUPTIBLE);
+			schedule_hrtimeout(&to, HRTIMER_MODE_REL);
 			continue;
 		}
 


* [tip:irq/core] genirq: Provide forced interrupt threading
  2011-02-23 23:52 ` [patch 5/5] genirq: Provide forced interrupt threading Thomas Gleixner
@ 2011-02-26 16:23   ` tip-bot for Thomas Gleixner
  0 siblings, 0 replies; 14+ messages in thread
From: tip-bot for Thomas Gleixner @ 2011-02-26 16:23 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, peterz, tglx

Commit-ID:  8d32a307e4faa8b123dc8a9cd56d1a7525f69ad3
Gitweb:     http://git.kernel.org/tip/8d32a307e4faa8b123dc8a9cd56d1a7525f69ad3
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Wed, 23 Feb 2011 23:52:23 +0000
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Sat, 26 Feb 2011 11:57:18 +0100

genirq: Provide forced interrupt threading

Add a commandline parameter "threadirqs" which forces all interrupts except
those marked IRQF_NO_THREAD to run threaded. That's mostly a debug option to
allow retrieving better debug data from crashing interrupt handlers. If
"threadirqs" is not enabled on the kernel command line, then there is no
impact in the interrupt hotpath.

Architecture code needs to select CONFIG_IRQ_FORCED_THREADING after
marking the interrupts which can't be threaded IRQF_NO_THREAD. All
interrupts which have IRQF_TIMER set are implicitly marked
IRQF_NO_THREAD. Also all PER_CPU interrupts are excluded.

Forced threading of hard interrupts also forces all soft interrupt
handling into thread context.

When enabled it might slow things down a bit, but for debugging
problems in interrupt code it's a reasonable penalty as it does not
immediately crash and burn the machine when an interrupt handler is
buggy.

Some test results on a Core2Duo machine:

Cache cold run of:
 # time git grep irq_desc

      non-threaded       threaded
 real 1m18.741s          1m19.061s
 user 0m1.874s           0m1.757s
 sys  0m5.843s           0m5.427s

 # iperf -c server
non-threaded
[  3]  0.0-10.0 sec  1.09 GBytes   933 Mbits/sec
[  3]  0.0-10.0 sec  1.09 GBytes   934 Mbits/sec
[  3]  0.0-10.0 sec  1.09 GBytes   933 Mbits/sec
threaded
[  3]  0.0-10.0 sec  1.09 GBytes   939 Mbits/sec
[  3]  0.0-10.0 sec  1.09 GBytes   934 Mbits/sec
[  3]  0.0-10.0 sec  1.09 GBytes   937 Mbits/sec

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
LKML-Reference: <20110223234956.772668648@linutronix.de>
---
 Documentation/kernel-parameters.txt |    4 ++
 include/linux/interrupt.h           |    7 ++++
 kernel/irq/Kconfig                  |    3 ++
 kernel/irq/internals.h              |    2 +
 kernel/irq/manage.c                 |   67 ++++++++++++++++++++++++++++++++--
 kernel/softirq.c                    |   16 +++++++-
 6 files changed, 93 insertions(+), 6 deletions(-)

diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt
index 89835a4..cac6cf9 100644
--- a/Documentation/kernel-parameters.txt
+++ b/Documentation/kernel-parameters.txt
@@ -2436,6 +2436,10 @@ and is between 256 and 4096 characters. It is defined in the file
 			<deci-seconds>: poll all this frequency
 			0: no polling (default)
 
+	threadirqs	[KNL]
+			Force threading of all interrupt handlers except those
+			marked explicitly IRQF_NO_THREAD.
+
 	topology=	[S390]
 			Format: {off | on}
 			Specify if the kernel should make use of the cpu
diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
index 0fc3eb9..f8a8af1 100644
--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
@@ -383,6 +383,13 @@ static inline int disable_irq_wake(unsigned int irq)
 }
 #endif /* CONFIG_GENERIC_HARDIRQS */
 
+
+#ifdef CONFIG_IRQ_FORCED_THREADING
+extern bool force_irqthreads;
+#else
+#define force_irqthreads	(0)
+#endif
+
 #ifndef __ARCH_SET_SOFTIRQ_PENDING
 #define set_softirq_pending(x) (local_softirq_pending() = (x))
 #define or_softirq_pending(x)  (local_softirq_pending() |= (x))
diff --git a/kernel/irq/Kconfig b/kernel/irq/Kconfig
index 355b8c7..144db9d 100644
--- a/kernel/irq/Kconfig
+++ b/kernel/irq/Kconfig
@@ -38,6 +38,9 @@ config HARDIRQS_SW_RESEND
 config IRQ_PREFLOW_FASTEOI
        bool
 
+config IRQ_FORCED_THREADING
+       bool
+
 config SPARSE_IRQ
 	bool "Support sparse irq numbering"
 	depends on HAVE_SPARSE_IRQ
diff --git a/kernel/irq/internals.h b/kernel/irq/internals.h
index 935bec4..6c6ec9a 100644
--- a/kernel/irq/internals.h
+++ b/kernel/irq/internals.h
@@ -27,12 +27,14 @@ extern int noirqdebug;
  * IRQTF_DIED      - handler thread died
  * IRQTF_WARNED    - warning "IRQ_WAKE_THREAD w/o thread_fn" has been printed
  * IRQTF_AFFINITY  - irq thread is requested to adjust affinity
+ * IRQTF_FORCED_THREAD  - irq action is force threaded
  */
 enum {
 	IRQTF_RUNTHREAD,
 	IRQTF_DIED,
 	IRQTF_WARNED,
 	IRQTF_AFFINITY,
+	IRQTF_FORCED_THREAD,
 };
 
 /*
diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index 58c8613..acd599a 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -17,6 +17,17 @@
 
 #include "internals.h"
 
+#ifdef CONFIG_IRQ_FORCED_THREADING
+__read_mostly bool force_irqthreads;
+
+static int __init setup_forced_irqthreads(char *arg)
+{
+	force_irqthreads = true;
+	return 0;
+}
+early_param("threadirqs", setup_forced_irqthreads);
+#endif
+
 /**
  *	synchronize_irq - wait for pending IRQ handlers (on other CPUs)
  *	@irq: interrupt number to wait for
@@ -702,6 +713,32 @@ irq_thread_check_affinity(struct irq_desc *desc, struct irqaction *action) { }
 #endif
 
 /*
+ * Interrupts which are not explicitly requested as threaded
+ * interrupts rely on the implicit bh/preempt disable of the hard irq
+ * context. So we need to disable bh here to avoid deadlocks and other
+ * side effects.
+ */
+static void
+irq_forced_thread_fn(struct irq_desc *desc, struct irqaction *action)
+{
+	local_bh_disable();
+	action->thread_fn(action->irq, action->dev_id);
+	irq_finalize_oneshot(desc, action, false);
+	local_bh_enable();
+}
+
+/*
+ * Interrupts explicitly requested as threaded interrupts want to be
+ * preemptible - many of them need to sleep and wait for slow buses to
+ * complete.
+ */
+static void irq_thread_fn(struct irq_desc *desc, struct irqaction *action)
+{
+	action->thread_fn(action->irq, action->dev_id);
+	irq_finalize_oneshot(desc, action, false);
+}
+
+/*
  * Interrupt handler thread
  */
 static int irq_thread(void *data)
@@ -711,8 +748,15 @@ static int irq_thread(void *data)
 	};
 	struct irqaction *action = data;
 	struct irq_desc *desc = irq_to_desc(action->irq);
+	void (*handler_fn)(struct irq_desc *desc, struct irqaction *action);
 	int wake;
 
+	if (force_irqthreads && test_bit(IRQTF_FORCED_THREAD,
+					&action->thread_flags))
+		handler_fn = irq_forced_thread_fn;
+	else
+		handler_fn = irq_thread_fn;
+
 	sched_setscheduler(current, SCHED_FIFO, &param);
 	current->irqaction = action;
 
@@ -736,10 +780,7 @@ static int irq_thread(void *data)
 			raw_spin_unlock_irq(&desc->lock);
 		} else {
 			raw_spin_unlock_irq(&desc->lock);
-
-			action->thread_fn(action->irq, action->dev_id);
-
-			irq_finalize_oneshot(desc, action, false);
+			handler_fn(desc, action);
 		}
 
 		wake = atomic_dec_and_test(&desc->threads_active);
@@ -789,6 +830,22 @@ void exit_irq_thread(void)
 	set_bit(IRQTF_DIED, &tsk->irqaction->thread_flags);
 }
 
+static void irq_setup_forced_threading(struct irqaction *new)
+{
+	if (!force_irqthreads)
+		return;
+	if (new->flags & (IRQF_NO_THREAD | IRQF_PERCPU | IRQF_ONESHOT))
+		return;
+
+	new->flags |= IRQF_ONESHOT;
+
+	if (!new->thread_fn) {
+		set_bit(IRQTF_FORCED_THREAD, &new->thread_flags);
+		new->thread_fn = new->handler;
+		new->handler = irq_default_primary_handler;
+	}
+}
+
 /*
  * Internal function to register an irqaction - typically used to
  * allocate special interrupts that are part of the architecture.
@@ -838,6 +895,8 @@ __setup_irq(unsigned int irq, struct irq_desc *desc, struct irqaction *new)
 		 * dummy function which warns when called.
 		 */
 		new->handler = irq_nested_primary_handler;
+	} else {
+		irq_setup_forced_threading(new);
 	}
 
 	/*
diff --git a/kernel/softirq.c b/kernel/softirq.c
index c049046..a33fb29 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -311,9 +311,21 @@ void irq_enter(void)
 }
 
 #ifdef __ARCH_IRQ_EXIT_IRQS_DISABLED
-# define invoke_softirq()	__do_softirq()
+static inline void invoke_softirq(void)
+{
+	if (!force_irqthreads)
+		__do_softirq();
+	else
+		wakeup_softirqd();
+}
 #else
-# define invoke_softirq()	do_softirq()
+static inline void invoke_softirq(void)
+{
+	if (!force_irqthreads)
+		do_softirq();
+	else
+		wakeup_softirqd();
+}
 #endif
 
 /*


end of thread, other threads:[~2011-02-26 16:23 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2011-02-23 23:52 [patch 0/5] genirq: Forced threaded interrupt handlers Thomas Gleixner
2011-02-23 23:52 ` [patch 1/5] genirq: Prepare the handling of shared oneshot interrupts Thomas Gleixner
2011-02-24  2:30   ` Linus Torvalds
2011-02-24 17:56     ` Thomas Gleixner
2011-02-26 16:22   ` [tip:irq/core] " tip-bot for Thomas Gleixner
2011-02-23 23:52 ` [patch 2/5] genirq: Allow " Thomas Gleixner
2011-02-26 16:22   ` [tip:irq/core] " tip-bot for Thomas Gleixner
2011-02-23 23:52 ` [patch 3/5] genirq: Add IRQF_NO_THREAD Thomas Gleixner
2011-02-26 16:22   ` [tip:irq/core] " tip-bot for Thomas Gleixner
2011-02-23 23:52 ` [patch 4/5] sched: Switch wait_task_inactive to schedule_hrtimeout() Thomas Gleixner
2011-02-26 16:23   ` [tip:irq/core] " tip-bot for Thomas Gleixner
2011-02-23 23:52 ` [patch 5/5] genirq: Provide forced interrupt threading Thomas Gleixner
2011-02-26 16:23   ` [tip:irq/core] " tip-bot for Thomas Gleixner
2011-02-24  7:12 ` [patch 0/5] genirq: Forced threaded interrupt handlers Ingo Molnar
