public inbox for linux-kernel@vger.kernel.org
* [PATCH] Fix sched_clock_cpu for systems with unsynchronized TSC
@ 2010-02-25 18:33 Dimitri Sivanich
  0 siblings, 0 replies; 2+ messages in thread
From: Dimitri Sivanich @ 2010-02-25 18:33 UTC (permalink / raw)
  To: linux-kernel; +Cc: Pallipadi, Venkatesh, Thomas Gleixner, Ingo Molnar

On UV systems, the TSC is not synchronized across blades.  The
sched_clock_cpu() function returns values that can go backwards
(I've seen jumps of as much as 8 seconds) when switching between cpus.

As each cpu comes up, early_init_intel() currently sets the
sched_clock_stable flag to true.  When mark_tsc_unstable() runs, it
clears the flag, but this happens only once (the first time a cpu comes
up whose TSC is not synchronized with cpu 0).  After that,
early_init_intel() sets the flag again as each subsequent cpu comes up.

This patch changes the logic to assume that sched_clock is stable from
boot.  From then on, the flag can only be cleared, never re-set.

Signed-off-by: Dimitri Sivanich <sivanich@sgi.com>

---

 arch/x86/kernel/cpu/intel.c |    4 ++--
 kernel/sched_clock.c        |    2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

Index: linux/arch/x86/kernel/cpu/intel.c
===================================================================
--- linux.orig/arch/x86/kernel/cpu/intel.c
+++ linux/arch/x86/kernel/cpu/intel.c
@@ -70,8 +70,8 @@ static void __cpuinit early_init_intel(s
 	if (c->x86_power & (1 << 8)) {
 		set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
 		set_cpu_cap(c, X86_FEATURE_NONSTOP_TSC);
-		sched_clock_stable = 1;
-	}
+	} else
+		sched_clock_stable = 0;
 
 	/*
 	 * There is a known erratum on Pentium III and Core Solo
Index: linux/kernel/sched_clock.c
===================================================================
--- linux.orig/kernel/sched_clock.c
+++ linux/kernel/sched_clock.c
@@ -45,7 +45,7 @@ unsigned long long __attribute__((weak))
 static __read_mostly int sched_clock_running;
 
 #ifdef CONFIG_HAVE_UNSTABLE_SCHED_CLOCK
-__read_mostly int sched_clock_stable;
+__read_mostly int sched_clock_stable = 1;
 
 struct sched_clock_data {
 	u64			tick_raw;

^ permalink raw reply	[flat|nested] 2+ messages in thread

