public inbox for linux-kernel@vger.kernel.org
* [PATCH] new TSC based delay_tsc()
@ 2007-11-18  9:20 Marin Mitov
  2007-11-19  8:54 ` [PATCH] new TSC based delay_tsc() Ingo Molnar
  0 siblings, 1 reply; 4+ messages in thread
From: Marin Mitov @ 2007-11-18  9:20 UTC (permalink / raw)
  To: linux-kernel

Hi all,

This is a patch based on Ingo's idea/patch to track
delay_tsc() migration to another cpu by comparing
smp_processor_id(). It is against kernel 2.6.24-rc3.

What is different:
1. Using unsigned (instead of long) to unify for i386/x86_64.
2. Minimal preempt_disable/enable() critical sections
   (more room for preemption).
3. Some statements have been rearranged to account for
   possible under/overflow of left/TSC.
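
A side note on point 3: the accounting relies on C's modular arithmetic
for unsigned types, so now - prev yields the right cycle count even if the
low 32 bits of the TSC wrap between the two reads. A minimal user-space
illustration (plain C with made-up values, not kernel code):

#include <stdio.h>

int main(void)
{
	/* pretend the low 32 bits of the TSC wrapped between two reads */
	unsigned prev = 0xfffffff0u;	/* shortly before the 32-bit wrap */
	unsigned now  = 0x00000010u;	/* 0x20 cycles later, after wrap  */

	/* unsigned subtraction is modulo 2^32: the delta is still 0x20 */
	printf("elapsed = 0x%x\n", now - prev);
	return 0;
}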

Tested on both 32- and 64-bit SMP PREEMPT kernel 2.6.24-rc3

Hope it is correct. Comments, please.

Signed-off-by: Marin Mitov <mitov@issp.bas.bg>
=========================================
--- a/arch/x86/lib/delay_32.c	2007-11-18 10:20:45.000000000 +0200
+++ b/arch/x86/lib/delay_32.c	2007-11-18 10:20:44.000000000 +0200
@@ -38,18 +38,41 @@
 		:"0" (loops));
 }
 
-/* TSC based delay: */
+/* TSC based delay:
+ *
+ * We are careful about preemption as TSC's are per-CPU.
+ */
 static void delay_tsc(unsigned long loops)
 {
-	unsigned long bclock, now;
+	unsigned prev, now;
+	unsigned left = loops;
+	unsigned prev_cpu, cpu;
+
+	preempt_disable();
+	rdtscl(prev);
+	prev_cpu = smp_processor_id();
+	preempt_enable();
+	now = prev;
 
-	preempt_disable();		/* TSC's are per-cpu */
-	rdtscl(bclock);
 	do {
 		rep_nop();
+
+		left -= now - prev;
+		prev = now;
+
+		preempt_disable();
 		rdtscl(now);
-	} while ((now-bclock) < loops);
-	preempt_enable();
+		cpu = smp_processor_id();
+		preempt_enable();
+
+		if (prev_cpu != cpu){
+			/*
+			 * We have migrated, we skip this small amount of time:
+			 */
+			prev = now;
+			prev_cpu = cpu;
+		}
+	} while ((now-prev) < left);
 }
 
 /*
--- a/arch/x86/lib/delay_64.c	2007-11-18 10:20:44.000000000 +0200
+++ b/arch/x86/lib/delay_64.c	2007-11-18 10:20:44.000000000 +0200
@@ -26,18 +26,42 @@
 	return 0;
 }
 
+/* TSC based delay:
+ *
+ * We are careful about preemption as TSC's are per-CPU.
+ */
 void __delay(unsigned long loops)
 {
-	unsigned bclock, now;
+	unsigned prev, now;
+	unsigned left = loops;
+	unsigned prev_cpu, cpu;
+
+	preempt_disable();
+	rdtscl(prev);
+	prev_cpu = smp_processor_id();
+	preempt_enable();
+	now = prev;
 
-	preempt_disable();		/* TSC's are pre-cpu */
-	rdtscl(bclock);
 	do {
-		rep_nop(); 
+		rep_nop();
+
+		left -= now - prev;
+		prev = now;
+
+		preempt_disable();
 		rdtscl(now);
+		cpu = smp_processor_id();
+		preempt_enable();
+
+		if (prev_cpu != cpu){
+			/*
+			 * We have migrated, we skip this small amount of time:
+			 */
+			 prev = now;
+			 prev_cpu = cpu;
+		}
 	}
-	while ((now-bclock) < loops);
-	preempt_enable();
+	while ((now-prev) < left);
 }
 EXPORT_SYMBOL(__delay);
 

 


* Re: [PATCH] new TSC based delay_tsc()
  2007-11-18  9:20 [PATCH] new TSC based delay_tsc() Marin Mitov
@ 2007-11-19  8:54 ` Ingo Molnar
  0 siblings, 0 replies; 4+ messages in thread
From: Ingo Molnar @ 2007-11-19  8:54 UTC (permalink / raw)
  To: Marin Mitov; +Cc: linux-kernel, Thomas Gleixner


* Marin Mitov <mitov@issp.bas.bg> wrote:

> Hi all,
> 
> This is a patch based on Ingo's idea/patch to track delay_tsc() 
> migration to another cpu by comparing smp_processor_id(). It is 
> against kernel 2.6.24-rc3.
> 
> What is different:
> 1. Using unsigned (instead of long) to unify for i386/x86_64.
> 2. Minimal preempt_disable/enable() critical sections
>    (more room for preemption).
> 3. Some statements have been rearranged to account for
>    possible under/overflow of left/TSC.
> 
> Tested on both 32- and 64-bit SMP PREEMPT kernel 2.6.24-rc3
> 
> Hope it is correct. Comments, please.

thanks! The changes certainly look good to me.

> Signed-off-by: Marin Mitov <mitov@issp.bas.bg>

Signed-off-by: Ingo Molnar <mingo@elte.hu>

	Ingo


* [PATCH] new TSC based delay_tsc()
@ 2007-11-20 19:32 Marin Mitov
  2007-11-20 21:37 ` [PATCH] new TSC based delay_tsc() Ingo Molnar
  0 siblings, 1 reply; 4+ messages in thread
From: Marin Mitov @ 2007-11-20 19:32 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar

Hi all,

Please ignore the previous patch with the same subject.
It has a bug that can manifest itself in the very exotic case
when each do {} while() iteration executes on a different cpu,
leading to a potentially infinite loop (sketched below).
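
To make the failure concrete, here is a hypothetical user-space simulation
of the previous patch's accounting (the fixed 5-cycle step stands in for
whatever time passes per iteration; this is not kernel code). Because that
version reset prev to now on every detected migration, left was never
decremented, so a migration on every single iteration would spin forever:

/* hypothetical simulation of the previous (buggy) accounting */
#include <stdio.h>

int main(void)
{
	unsigned left = 100;	/* cycles still to burn */
	unsigned prev = 0, now = 0;
	int i;

	for (i = 0; i < 10; i++) {	/* bounded here; the real loop spins */
		left -= now - prev;	/* time credited from the last pass  */
		prev = now;

		now += 5;		/* pretend 5 cycles pass              */

		/* previous patch: on migration, skip the whole interval */
		prev = now;		/* ...so those 5 cycles never count   */

		printf("pass %d: left = %u\n", i, left);	/* stays 100 */
	}
	return 0;
}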

This is a patch based on Ingo's idea/patch to track
delay_tsc() migration to another cpu by comparing
smp_processor_id(). It is against kernel 2.6.24-rc3.

What is different:
1. Using unsigned (instead of long) to unify for i386/x86_64.
2. Minimal preempt_disable/enable() critical sections
   (more room for preemption).
3. Some statements have been rearranged to account for
   possible under/overflow of left/TSC.

Tested on both 32- and 64-bit SMP PREEMPT kernel 2.6.24-rc3

Comments, please.
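
For contrast with the failure sketched in the previous message, the same
worst-case schedule with the corrected accounting does terminate. This is
again a hypothetical user-space simulation, not the kernel code: prev_1
mirrors the new variable in the patch, and the 5-cycle step is made up.
On migration, prev is rewound only to prev_1, the earliest reading taken
on the current cpu, so the cycles measured on that cpu still count:

/* hypothetical simulation of the corrected accounting */
#include <stdio.h>

int main(void)
{
	unsigned left = 100;	/* cycles still to burn */
	unsigned prev = 0, prev_1, now = 0;
	int passes = 0;

	while ((now - prev) < left) {
		left -= now - prev;
		prev = now;

		prev_1 = now;	/* first reading on the (new) cpu       */
		now += 5;	/* pretend 5 cycles pass on this cpu    */

		/* migration detected on every pass, as in the worst case */
		prev = prev_1;	/* keep the cycles measured on this cpu */
		passes++;
	}
	printf("terminated after %d passes, left = %u\n", passes, left);
	return 0;
}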

Signed-off-by: Marin Mitov <mitov@issp.bas.bg>
=========================================
--- a/arch/x86/lib/delay_32.c	2007-11-18 08:14:05.000000000 +0200
+++ b/arch/x86/lib/delay_32.c	2007-11-20 19:03:52.000000000 +0200
@@ -38,18 +38,42 @@
 		:"0" (loops));
 }
 
-/* TSC based delay: */
+/* TSC based delay:
+ *
+ * We are careful about preemption as TSC's are per-CPU.
+ */
 static void delay_tsc(unsigned long loops)
 {
-	unsigned long bclock, now;
+	unsigned prev, prev_1, now;
+	unsigned left = loops;
+	unsigned prev_cpu, cpu;
+
+	preempt_disable();
+	rdtscl(prev);
+	prev_cpu = smp_processor_id();
+	preempt_enable();
+	now = prev;
 
-	preempt_disable();		/* TSC's are per-cpu */
-	rdtscl(bclock);
 	do {
 		rep_nop();
+
+		left -= now - prev;
+		prev = now;
+
+		preempt_disable();
+		rdtscl(prev_1);
+		cpu = smp_processor_id();
 		rdtscl(now);
-	} while ((now-bclock) < loops);
-	preempt_enable();
+		preempt_enable();
+
+		if (prev_cpu != cpu){
+			/*
+			 * We have migrated, forget prev_cpu's tsc reading
+			 */
+			prev = prev_1;
+			prev_cpu = cpu;
+		}
+	} while ((now-prev) < left);
 }
 
 /*
--- a/arch/x86/lib/delay_64.c	2007-11-18 08:14:40.000000000 +0200
+++ b/arch/x86/lib/delay_64.c	2007-11-20 19:47:29.000000000 +0200
@@ -26,18 +26,42 @@
 	return 0;
 }
 
+/* TSC based delay:
+ *
+ * We are careful about preemption as TSC's are per-CPU.
+ */
 void __delay(unsigned long loops)
 {
-	unsigned bclock, now;
+	unsigned prev, prev_1, now;
+	unsigned left = loops;
+	unsigned prev_cpu, cpu;
+
+	preempt_disable();
+	rdtscl(prev);
+	prev_cpu = smp_processor_id();
+	preempt_enable();
+	now = prev;
 
-	preempt_disable();		/* TSC's are pre-cpu */
-	rdtscl(bclock);
 	do {
-		rep_nop(); 
+		rep_nop();
+
+		left -= now - prev;
+		prev = now;
+
+		preempt_disable();
+		rdtscl(prev_1);
+		cpu = smp_processor_id();
 		rdtscl(now);
-	}
-	while ((now-bclock) < loops);
-	preempt_enable();
+		preempt_enable();
+
+		if (prev_cpu != cpu){
+			/*
+			 * We have migrated, forget prev_cpu's tsc reading
+			 */
+			 prev = prev_1;
+			 prev_cpu = cpu;
+		}
+	} while ((now-prev) < left);
 }
 EXPORT_SYMBOL(__delay);
 


* Re: [PATCH] new TSC based delay_tsc()
  2007-11-20 19:32 [PATCH] new TSC based delay_tsc() Marin Mitov
@ 2007-11-20 21:37 ` Ingo Molnar
  0 siblings, 0 replies; 4+ messages in thread
From: Ingo Molnar @ 2007-11-20 21:37 UTC (permalink / raw)
  To: Marin Mitov; +Cc: linux-kernel, Thomas Gleixner


hi Marin,

here's the patch we are carrying in x86.git at the moment - could you
please update it with v3 of your code and send us the patch (with the
patch metadata kept intact, as you see it below)? Thanks,

	Ingo

----------------->
From: Marin Mitov <mitov@issp.bas.bg>
Subject: new TSC based delay_tsc()

This is a patch based on Ingo's idea/patch to track
delay_tsc() migration to another cpu by comparing
smp_processor_id().

What is different:
1. Using unsigned (instead of long) to unify for i386/x86_64.
2. Minimal preempt_disable/enable() critical sections
   (more room for preemption).
3. Some statements have been rearranged to account for
   possible under/overflow of left/TSC.

Tested on both 32- and 64-bit SMP PREEMPT kernel 2.6.24-rc3

Signed-off-by: Marin Mitov <mitov@issp.bas.bg>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/lib/delay_32.c |   35 +++++++++++++++++++++++++++++------
 arch/x86/lib/delay_64.c |   36 ++++++++++++++++++++++++++++++------
 2 files changed, 59 insertions(+), 12 deletions(-)

Index: linux/arch/x86/lib/delay_32.c
===================================================================
--- linux.orig/arch/x86/lib/delay_32.c
+++ linux/arch/x86/lib/delay_32.c
@@ -38,18 +38,41 @@ static void delay_loop(unsigned long loo
 		:"0" (loops));
 }
 
-/* TSC based delay: */
+/* TSC based delay:
+ *
+ * We are careful about preemption as TSC's are per-CPU.
+ */
 static void delay_tsc(unsigned long loops)
 {
-	unsigned long bclock, now;
+	unsigned prev, now;
+	unsigned left = loops;
+	unsigned prev_cpu, cpu;
+
+	preempt_disable();
+	rdtscl(prev);
+	prev_cpu = smp_processor_id();
+	preempt_enable();
+	now = prev;
 
-	preempt_disable();		/* TSC's are per-cpu */
-	rdtscl(bclock);
 	do {
 		rep_nop();
+
+		left -= now - prev;
+		prev = now;
+
+		preempt_disable();
 		rdtscl(now);
-	} while ((now-bclock) < loops);
-	preempt_enable();
+		cpu = smp_processor_id();
+		preempt_enable();
+
+		if (prev_cpu != cpu) {
+			/*
+			 * We have migrated, we skip this small amount of time:
+			 */
+			prev = now;
+			prev_cpu = cpu;
+		}
+	} while ((now-prev) < left);
 }
 
 /*
Index: linux/arch/x86/lib/delay_64.c
===================================================================
--- linux.orig/arch/x86/lib/delay_64.c
+++ linux/arch/x86/lib/delay_64.c
@@ -26,18 +26,42 @@ int read_current_timer(unsigned long *ti
 	return 0;
 }
 
+/* TSC based delay:
+ *
+ * We are careful about preemption as TSC's are per-CPU.
+ */
 void __delay(unsigned long loops)
 {
-	unsigned bclock, now;
+	unsigned prev, now;
+	unsigned left = loops;
+	unsigned prev_cpu, cpu;
+
+	preempt_disable();
+	rdtscl(prev);
+	prev_cpu = smp_processor_id();
+	preempt_enable();
+	now = prev;
 
-	preempt_disable();		/* TSC's are pre-cpu */
-	rdtscl(bclock);
 	do {
-		rep_nop(); 
+		rep_nop();
+
+		left -= now - prev;
+		prev = now;
+
+		preempt_disable();
 		rdtscl(now);
+		cpu = smp_processor_id();
+		preempt_enable();
+
+		if (prev_cpu != cpu) {
+			/*
+			 * We have migrated, we skip this small amount of time:
+			 */
+			 prev = now;
+			 prev_cpu = cpu;
+		}
 	}
-	while ((now-bclock) < loops);
-	preempt_enable();
+	while ((now-prev) < left);
 }
 EXPORT_SYMBOL(__delay);
 

