* Patch "x86/irq: Check vector allocation early" has been added to the 4.4-stable tree
@ 2016-03-01 22:42 gregkh
From: gregkh @ 2016-03-01 22:42 UTC
To: tglx, bp, gregkh, jiang.liu, jmmahler, joe.lawrence, linux
Cc: stable, stable-commits
This is a note to let you know that I've just added the patch titled
x86/irq: Check vector allocation early
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary
The filename of the patch is:
x86-irq-check-vector-allocation-early.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
From 3716fd27a604d61a91cda47083504971486b80f1 Mon Sep 17 00:00:00 2001
From: Thomas Gleixner <tglx@linutronix.de>
Date: Thu, 31 Dec 2015 16:30:48 +0000
Subject: x86/irq: Check vector allocation early
From: Thomas Gleixner <tglx@linutronix.de>
commit 3716fd27a604d61a91cda47083504971486b80f1 upstream.
__assign_irq_vector() uses the vector_cpumask which is assigned by
apic->vector_allocation_domain() without doing basic sanity checks. That can
result in a situation where the final assignment of a newly found vector
fails in apic->cpu_mask_to_apicid_and(). So we have to do rollbacks for no
reason.
apic->cpu_mask_to_apicid_and() only fails if
vector_cpumask & requested_cpumask & cpu_online_mask
is empty.
Check for this condition right away and, if the result is empty, immediately try
the next possible cpu in the requested mask. That way the old setting is left
unchanged in case of a failure and the rollback code can be removed.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Borislav Petkov <bp@alien8.de>
Tested-by: Joe Lawrence <joe.lawrence@stratus.com>
Cc: Jiang Liu <jiang.liu@linux.intel.com>
Cc: Jeremiah Mahler <jmmahler@gmail.com>
Cc: andy.shevchenko@gmail.com
Cc: Guenter Roeck <linux@roeck-us.net>
Link: http://lkml.kernel.org/r/20151231160106.561877324@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
arch/x86/kernel/apic/vector.c | 38 +++++++++++++++++++++++++-------------
1 file changed, 25 insertions(+), 13 deletions(-)
--- a/arch/x86/kernel/apic/vector.c
+++ b/arch/x86/kernel/apic/vector.c
@@ -30,7 +30,7 @@ struct apic_chip_data {
struct irq_domain *x86_vector_domain;
static DEFINE_RAW_SPINLOCK(vector_lock);
-static cpumask_var_t vector_cpumask, searched_cpumask;
+static cpumask_var_t vector_cpumask, vector_searchmask, searched_cpumask;
static struct irq_chip lapic_controller;
#ifdef CONFIG_X86_IO_APIC
static struct apic_chip_data *legacy_irq_data[NR_IRQS_LEGACY];
@@ -128,8 +128,20 @@ static int __assign_irq_vector(int irq,
while (cpu < nr_cpu_ids) {
int new_cpu, vector, offset;
+ /* Get the possible target cpus for @mask/@cpu from the apic */
apic->vector_allocation_domain(cpu, vector_cpumask, mask);
+ /*
+ * Clear the offline cpus from @vector_cpumask for searching
+ * and verify whether the result overlaps with @mask. If true,
+ * then the call to apic->cpu_mask_to_apicid_and() will
+ * succeed as well. If not, no point in trying to find a
+ * vector in this mask.
+ */
+ cpumask_and(vector_searchmask, vector_cpumask, cpu_online_mask);
+ if (!cpumask_intersects(vector_searchmask, mask))
+ goto next_cpu;
+
if (cpumask_subset(vector_cpumask, d->domain)) {
if (cpumask_equal(vector_cpumask, d->domain))
goto success;
@@ -162,7 +174,7 @@ next:
if (test_bit(vector, used_vectors))
goto next;
- for_each_cpu_and(new_cpu, vector_cpumask, cpu_online_mask) {
+ for_each_cpu(new_cpu, vector_searchmask) {
if (!IS_ERR_OR_NULL(per_cpu(vector_irq, new_cpu)[vector]))
goto next;
}
@@ -174,7 +186,7 @@ next:
d->move_in_progress =
cpumask_intersects(d->old_domain, cpu_online_mask);
}
- for_each_cpu_and(new_cpu, vector_cpumask, cpu_online_mask)
+ for_each_cpu(new_cpu, vector_searchmask)
per_cpu(vector_irq, new_cpu)[vector] = irq_to_desc(irq);
d->cfg.vector = vector;
cpumask_copy(d->domain, vector_cpumask);
@@ -196,8 +208,14 @@ next_cpu:
return -ENOSPC;
success:
- /* cache destination APIC IDs into cfg->dest_apicid */
- return apic->cpu_mask_to_apicid_and(mask, d->domain, &d->cfg.dest_apicid);
+ /*
+ * Cache destination APIC IDs into cfg->dest_apicid. This cannot fail
+ * as we already established, that mask & d->domain & cpu_online_mask
+ * is not empty.
+ */
+ BUG_ON(apic->cpu_mask_to_apicid_and(mask, d->domain,
+ &d->cfg.dest_apicid));
+ return 0;
}
static int assign_irq_vector(int irq, struct apic_chip_data *data,
@@ -407,6 +425,7 @@ int __init arch_early_irq_init(void)
arch_init_htirq_domain(x86_vector_domain);
BUG_ON(!alloc_cpumask_var(&vector_cpumask, GFP_KERNEL));
+ BUG_ON(!alloc_cpumask_var(&vector_searchmask, GFP_KERNEL));
BUG_ON(!alloc_cpumask_var(&searched_cpumask, GFP_KERNEL));
return arch_early_ioapic_init();
@@ -496,14 +515,7 @@ static int apic_set_affinity(struct irq_
return -EINVAL;
err = assign_irq_vector(irq, data, dest);
- if (err) {
- if (assign_irq_vector(irq, data,
- irq_data_get_affinity_mask(irq_data)))
- pr_err("Failed to recover vector for irq %d\n", irq);
- return err;
- }
-
- return IRQ_SET_MASK_OK;
+ return err ? err : IRQ_SET_MASK_OK;
}
static struct irq_chip lapic_controller = {
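To make the new control flow easier to follow outside the kernel, here is a
minimal userspace sketch of the early check the patch adds. It models CPU masks
as plain 64-bit words instead of cpumask_var_t, and try_assign_vector() is a
hypothetical stand-in for __assign_irq_vector(); this is an illustration of the
idea, not code from the patch.

/*
 * Simplified userspace model of the early check: clear offline CPUs from
 * the candidate mask, and bail out before searching for a vector if the
 * result no longer overlaps the requested mask.
 */
#include <stdint.h>
#include <stdio.h>

typedef uint64_t cpumask_t;	/* one bit per CPU, at most 64 CPUs here */

static int try_assign_vector(cpumask_t vector_cpumask,
			     cpumask_t requested_mask,
			     cpumask_t online_mask)
{
	/* Drop offline CPUs from the candidate mask first ... */
	cpumask_t searchmask = vector_cpumask & online_mask;

	/*
	 * ... and verify the result still overlaps the requested mask.
	 * If not, the later APIC-ID lookup would fail anyway, so skip
	 * the vector search entirely (the kernel moves on to the next
	 * possible cpu instead of rolling back a partial assignment).
	 */
	if (!(searchmask & requested_mask))
		return -1;	/* caller tries the next possible cpu */

	/* ... the actual vector search would walk searchmask here ... */
	return 0;
}

int main(void)
{
	cpumask_t online    = 0x3;	/* CPUs 0 and 1 online */
	cpumask_t domain    = 0xc;	/* candidate mask offers CPUs 2 and 3 */
	cpumask_t requested = 0xf;	/* irq may go to any CPU */

	/* No online candidate CPU: fails early, prints -1. */
	printf("%d\n", try_assign_vector(domain, requested, online));
	return 0;
}

In the actual patch the same intersection is kept in vector_searchmask, so the
later for_each_cpu() loops only ever visit online candidate CPUs.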
Patches currently in stable-queue which might be from tglx@linutronix.de are
queue-4.4/x86-irq-validate-that-irq-descriptor-is-still-active.patch
queue-4.4/x86-irq-remove-outgoing-cpu-from-vector-cleanup-mask.patch
queue-4.4/x86-irq-get-rid-of-code-duplication.patch
queue-4.4/x86-entry-compat-add-missing-clac-to-entry_int80_32.patch
queue-4.4/tick-nohz-set-the-correct-expiry-when-switching-to-nohz-lowres-mode.patch
queue-4.4/irqchip-mxs-add-missing-set_handle_irq.patch
queue-4.4/genirq-validate-action-before-dereferencing-it-in-handle_irq_event_percpu.patch
queue-4.4/x86-irq-remove-offline-cpus-from-vector-cleanup.patch
queue-4.4/posix-clock-fix-return-code-on-the-poll-method-s-error-path.patch
queue-4.4/x86-irq-reorganize-the-return-path-in-assign_irq_vector.patch
queue-4.4/cputime-prevent-32bit-overflow-in-time_to_cputime.patch
queue-4.4/x86-irq-copy-vectormask-instead-of-an-and-operation.patch
queue-4.4/x86-irq-call-irq_force_move_complete-with-irq-descriptor.patch
queue-4.4/x86-irq-call-chip-irq_set_affinity-in-proper-context.patch
queue-4.4/x86-irq-plug-vector-cleanup-race.patch
queue-4.4/x86-irq-do-not-use-apic_chip_data.old_domain-as-temporary-buffer.patch
queue-4.4/clockevents-tcb_clksrc-prevent-disabling-an-already-disabled-clock.patch
queue-4.4/x86-irq-reorganize-the-search-in-assign_irq_vector.patch
queue-4.4/x86-irq-remove-the-cpumask-allocation-from-send_cleanup_vector.patch
queue-4.4/x86-irq-fix-a-race-in-x86_vector_free_irqs.patch
queue-4.4/x86-irq-check-vector-allocation-early.patch
queue-4.4/irqchip-omap-intc-add-support-for-spurious-irq-handling.patch
queue-4.4/x86-irq-clear-move_in_progress-before-sending-cleanup-ipi.patch
queue-4.4/x86-mpx-fix-off-by-one-comparison-with-nr_registers.patch
queue-4.4/irqchip-atmel-aic-fix-wrong-bit-operation-for-irq-priority.patch
queue-4.4/revert-workqueue-make-sure-delayed-work-run-in-local-cpu.patch