From: Ingo Molnar <mingo@elte.hu>
To: Rob Hussey <robjhussey@gmail.com>
Cc: efault@gmx.de, a.p.zijlstra@chello.nl,
	linux-kernel@vger.kernel.org, ck@vds.kolivas.org
Subject: Re: Scheduler benchmarks - a follow-up
Date: Mon, 17 Sep 2007 13:27:07 +0200	[thread overview]
Message-ID: <20070917112707.GA24583@elte.hu> (raw)
In-Reply-To: <6b8cef970709170221s4301e896x2ee123a149c05c3a@mail.gmail.com>


* Rob Hussey <robjhussey@gmail.com> wrote:

> Hi all,
> 
> After posting some benchmarks involving cfs 
> (http://lkml.org/lkml/2007/9/13/385), I got some feedback, so I 
> decided to do a follow-up that'll hopefully fill in the gaps many 
> people wanted to see filled.

thanks for the update!

> I'll start with some selected numbers, which are preceded by the 
> command used for the benchmark.
> 
> for((i=2; i < 201; i++)); do lat_ctx -s 0 $i; done:
> (the leftmost column is the number of processes ($i))
> 
> 	2.6.21		2.6.22-ck1	2.6.23-rc6-cfs-devel
> 
> 15	5.88		4.85		5.14
> 16	5.80		4.77		4.76

the unbound results are harder to compare because CFS changed SMP 
balancing to saturate multiple cores better - but this can result in a 
micro-benchmark slowdown if the other core is idle (the two benchmark 
tasks get spread across both cores instead of sharing one, so every 
wakeup goes cross-core). This affects lat_ctx and pipe-test. (I'll have 
a look at the hackbench behavior.)
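
(for completeness - a minimal illustrative sketch, not taken from 
lmbench or from this thread - of how a benchmark task can bind itself 
to a single CPU before measuring; re-running it with CPU 1 instead of 
CPU 0 is also one way to check for the cache-layout spread mentioned 
further below:)

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
	cpu_set_t mask;
	int cpu = (argc > 1) ? atoi(argv[1]) : 0;	/* CPU to bind to */

	CPU_ZERO(&mask);
	CPU_SET(cpu, &mask);

	/* pid 0 == current task; after this the scheduler keeps us on 'cpu' */
	if (sched_setaffinity(0, sizeof(mask), &mask) < 0) {
		perror("sched_setaffinity");
		exit(1);
	}

	/* ... run the benchmark loop here, e.g. the pipe ping-pong ... */
	return 0;
}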

> Bound to Single core:

these are the more comparable (apples to apples) tests. Usually the most 
stable of them is pipe-test:

> pipe-test:
> 
> 	2.6.21	2.6.22-ck1	2.6.23-rc6-cfs-devel
> 
> 1  	9.27	8.50 		8.55
> 2  	9.27	8.47 		8.55
> 3  	9.28	8.47 		8.54
> 4  	9.28	8.48 		8.54
> 5  	9.28	8.48 		8.54

so -ck1 is 0.8% faster in this particular test. (but still, there can be 
caching effects in either direction - so i usually run the test on both 
cores/CPUs to see whether there's any systematic spread in the results. 
The cache-layout related random spread can be as high as 10% on some 
systems!)

many things happened between 2.6.22-ck1 and 2.6.23-cfs-devel that could 
affect the performance of this test. My initial guess would be 
sched_clock() overhead. Could you send me your system's 'dmesg' output 
when running a 2.6.22 (or -ck1) kernel? Chances are that your TSC got 
marked unstable; that turns on a much less precise but also faster 
sched_clock() implementation. CFS uses the TSC even if the time-of-day 
code marked it as unstable - going for the more precise but slightly 
slower variant.
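
(to put a rough number on 'much less precise': the fallback only 
advances once per timer tick, i.e. in steps of 1000000000/HZ 
nanoseconds - with HZ=250, which is just an assumed example config 
here, that is 4,000,000 ns or 4 msec per step, versus the 
near-nanosecond resolution of the TSC path. Trivial userspace 
arithmetic, nothing kernel-specific:)

#include <stdio.h>

int main(void)
{
	const long hz = 250;	/* assumed CONFIG_HZ value - adjust to match yours */
	const long ns_per_tick = 1000000000L / hz;

	printf("jiffies-based sched_clock() granularity: %ld ns (%.1f msec) per tick\n",
	       ns_per_tick, ns_per_tick / 1e6);
	return 0;
}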

To test this theory, could you apply the patch below to cfs-devel (if 
you are interested in testing this further)? It changes the cfs-devel 
version of sched_clock() to use a low-resolution fallback like v2.6.22 
does. Does this result in any measurable increase in performance? 

(there's also a new sched-devel.git tree out there - if you update to it 
you'll need to re-pull it against a pristine Linus git head.)

	Ingo

---
 arch/i386/kernel/tsc.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

Index: linux/arch/i386/kernel/tsc.c
===================================================================
--- linux.orig/arch/i386/kernel/tsc.c
+++ linux/arch/i386/kernel/tsc.c
@@ -110,9 +110,9 @@ unsigned long long native_sched_clock(vo
 	 *   very important for it to be as fast as the platform
 	 *   can achive it. )
 	 */
-	if (unlikely(!tsc_enabled && !tsc_unstable))
+	if (1 || unlikely(!tsc_enabled && !tsc_unstable))
 		/* No locking but a rare wrong value is not a big deal: */
-		return (jiffies_64 - INITIAL_JIFFIES) * (1000000000 / HZ);
+		return jiffies_64 * (1000000000 / HZ);
 
 	/* read the Time Stamp Counter: */
 	rdtscll(this_offset);

Thread overview: 23+ messages
2007-09-17  9:21 Scheduler benchmarks - a follow-up Rob Hussey
2007-09-17 11:12 ` Ed Tomlinson
2007-09-17 11:47   ` Ingo Molnar
2007-09-17 20:22   ` Ingo Molnar
2007-09-17 11:27 ` Ingo Molnar [this message]
     [not found]   ` <E1IXMXf-0000uG-ID@flower>
2007-09-17 19:43     ` Willy Tarreau
2007-09-17 20:01       ` Ingo Molnar
2007-09-17 20:06       ` Oleg Verych
2007-09-17 20:05         ` Ingo Molnar
2007-09-17 20:42         ` Willy Tarreau
2007-09-17 13:05 ` Ingo Molnar
2007-09-17 14:01   ` [ck] " Jos Poortvliet
2007-09-17 14:12     ` Ingo Molnar
2007-09-17 20:36   ` Ingo Molnar
2007-09-18  4:30     ` Rob Hussey
2007-09-18  4:53       ` Willy Tarreau
2007-09-18  4:58         ` Rob Hussey
2007-09-18  6:40       ` Ingo Molnar
2007-09-18  8:23         ` Rob Hussey
2007-09-18  8:48       ` Ingo Molnar
2007-09-18  9:45         ` Rob Hussey
2007-09-18  9:48           ` Ingo Molnar
2007-09-18  1:44   ` Rob Hussey
