public inbox for linux-kernel@vger.kernel.org
* [RFC patch] Use IPI_shortcut for lapic timer broadcast
@ 2009-06-29  6:47 Luming Yu
  2009-06-29  7:20 ` Ingo Molnar
  2009-06-29 20:34 ` Pallipadi, Venkatesh
  0 siblings, 2 replies; 13+ messages in thread
From: Luming Yu @ 2009-06-29  6:47 UTC (permalink / raw)
  To: LKML; +Cc: suresh.b.siddha, venkatesh.pallipadi


Hello,

We need to use an IPI shortcut to send the lapic timer broadcast,
to avoid the latency of sending IPIs one by one on systems with many
logical processors when NO_HZ is disabled.
Without this patch, I have seen an upstream kernel built with a RHEL 5
kernel config hang at boot.

The patch also changes physflat_send_IPI_all to IPI shortcut mode.

Please review, and apply.

* The patch is enclosed in a text attachment.
* I am using a web client to send the patch.
* The version below is for review; please apply the attached patch.

Thanks,
Luming


Signed-off-by: Yu Luming <luming.yu@intel.com>

 apic.c         |    4 +++-
 apic_flat_64.c |    7 ++++++-
 2 files changed, 9 insertions(+), 2 deletions(-)

--- linux-2.6.30-rc6/arch/x86/kernel/apic/apic.c.0	2009-06-28 20:22:55.000000000 -0600
+++ linux-2.6.30-rc6/arch/x86/kernel/apic/apic.c	2009-06-29 00:21:44.000000000 -0600
@@ -419,7 +419,9 @@
 static void lapic_timer_broadcast(const struct cpumask *mask)
 {
 #ifdef CONFIG_SMP
-	apic->send_IPI_mask(mask, LOCAL_TIMER_VECTOR);
+	if (cpus_empty(*mask))
+		return;
+	apic->send_IPI_all(LOCAL_TIMER_VECTOR);
 #endif
 }

--- linux-2.6.30-rc6/arch/x86/kernel/apic/apic_flat_64.c.0	2009-06-29 00:13:26.000000000 -0600
+++ linux-2.6.30-rc6/arch/x86/kernel/apic/apic_flat_64.c	2009-06-29 00:11:23.000000000 -0600
@@ -274,7 +274,12 @@

 static void physflat_send_IPI_all(int vector)
 {
-	physflat_send_IPI_mask(cpu_online_mask, vector);
+	if (vector == NMI_VECTOR) {
+		physflat_send_IPI_mask(cpu_online_mask, vector);
+	} else {
+		__default_send_IPI_shortcut(APIC_DEST_ALLINC,
+					    vector, apic->dest_logical);
+	}
 }

 static unsigned int physflat_cpu_mask_to_apicid(const struct cpumask *cpumask)


Thread overview: 13+ messages
2009-06-29  6:47 [RFC patch] Use IPI_shortcut for lapic timer broadcast Luming Yu
2009-06-29  7:20 ` Ingo Molnar
2009-06-29  8:04   ` Luming Yu
2009-06-29  8:16     ` Ingo Molnar
2009-06-29  8:21       ` Luming Yu
2009-06-29  8:30         ` Ingo Molnar
2009-06-29  8:43           ` Luming Yu
2009-06-29  9:15             ` Ingo Molnar
2009-06-29 14:01             ` Arjan van de Ven
2009-06-29 20:34 ` Pallipadi, Venkatesh
2009-06-30  7:01   ` Luming Yu
2009-07-03  0:23     ` Pallipadi, Venkatesh
2009-07-03  2:04       ` Luming Yu
