public inbox for linux-kernel@vger.kernel.org
* [PATCH -v2] x86: allocate cpumask during check irq vectors
@ 2014-01-25  0:59 Yinghai Lu
  2014-01-25  8:02 ` Ingo Molnar
  0 siblings, 1 reply; 11+ messages in thread
From: Yinghai Lu @ 2014-01-25  0:59 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: Thomas Gleixner, Ingo Molnar, linux-kernel, Yinghai Lu,
	Prarit Bhargava

Fix warning:
arch/x86/kernel/irq.c: In function check_irq_vectors_for_cpu_disable:
arch/x86/kernel/irq.c:337:1: warning: the frame size of 2052 bytes is larger than 2048 bytes

when NR_CPUS=8192

Use a dynamically allocated cpumask_var_t (via alloc_cpumask_var()) instead of the on-stack struct cpumask variables.

-v2: switch the allocations to GFP_ATOMIC and free the allocated cpumasks before returning.

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Cc: Prarit Bhargava <prarit@redhat.com>

---
 arch/x86/kernel/irq.c |   24 +++++++++++++++++-------
 1 file changed, 17 insertions(+), 7 deletions(-)

Index: linux-2.6/arch/x86/kernel/irq.c
===================================================================
--- linux-2.6.orig/arch/x86/kernel/irq.c
+++ linux-2.6/arch/x86/kernel/irq.c
@@ -277,11 +277,18 @@ int check_irq_vectors_for_cpu_disable(vo
 	unsigned int this_cpu, vector, this_count, count;
 	struct irq_desc *desc;
 	struct irq_data *data;
-	struct cpumask affinity_new, online_new;
+	cpumask_var_t affinity_new, online_new;
+
+	if (!alloc_cpumask_var(&affinity_new, GFP_ATOMIC))
+		return -ENOMEM;
+	if (!alloc_cpumask_var(&online_new, GFP_ATOMIC)) {
+		free_cpumask_var(affinity_new);
+		return -ENOMEM;
+	}
 
 	this_cpu = smp_processor_id();
-	cpumask_copy(&online_new, cpu_online_mask);
-	cpu_clear(this_cpu, online_new);
+	cpumask_copy(online_new, cpu_online_mask);
+	cpumask_clear_cpu(this_cpu, online_new);
 
 	this_count = 0;
 	for (vector = FIRST_EXTERNAL_VECTOR; vector < NR_VECTORS; vector++) {
@@ -289,8 +296,8 @@ int check_irq_vectors_for_cpu_disable(vo
 		if (irq >= 0) {
 			desc = irq_to_desc(irq);
 			data = irq_desc_get_irq_data(desc);
-			cpumask_copy(&affinity_new, data->affinity);
-			cpu_clear(this_cpu, affinity_new);
+			cpumask_copy(affinity_new, data->affinity);
+			cpumask_clear_cpu(this_cpu, affinity_new);
 
 			/* Do not count inactive or per-cpu irqs. */
 			if (!irq_has_action(irq) || irqd_is_per_cpu(data))
@@ -311,12 +318,15 @@ int check_irq_vectors_for_cpu_disable(vo
 			 * mask is not zero; that is the down'd cpu is the
 			 * last online cpu in a user set affinity mask.
 			 */
-			if (cpumask_empty(&affinity_new) ||
-			    !cpumask_subset(&affinity_new, &online_new))
+			if (cpumask_empty(affinity_new) ||
+			    !cpumask_subset(affinity_new, online_new))
 				this_count++;
 		}
 	}
 
+	free_cpumask_var(affinity_new);
+	free_cpumask_var(online_new);
+
 	count = 0;
 	for_each_online_cpu(cpu) {
 		if (cpu == this_cpu)

