public inbox for linux-kernel@vger.kernel.org
* Re: [ck] [REPORT] 2.6.21.1 vs 2.6.21-sd046 vs 2.6.21-cfs-v6
@ 2007-05-02 23:11 Al Boldi
  2007-05-03  0:49 ` William Lee Irwin III
  0 siblings, 1 reply; 8+ messages in thread
From: Al Boldi @ 2007-05-02 23:11 UTC (permalink / raw)
  To: linux-kernel; +Cc: ck

Con Kolivas wrote:
> On Monday 30 April 2007 18:05, Michael Gerdau wrote:
> > meanwhile I've redone my numbercrunching tests with the following
> > kernels: 2.6.21.1 (mainline)
> >     2.6.21-sd046
> >     2.6.21-cfs-v6
> > running on a dualcore x86_64.
> > [I will run the same test with 2.6.21.1-cfs-v7 over the next days,
> > likely tonight]
:
:
> > However, from these figures it seems that sd provides the fairest
> > (as in equal share for all) scheduling among the 3 schedulers tested.
>
> Looks good, thanks. Ingo's been hard at work since then and has v8 out by
> now. SD has not changed so you wouldn't need to do the whole lot of tests
> on SD again unless you don't trust some of the results.

Well, I tried cfs-v8, and it still shows some nice-level regressions wrt 
mainline/sd.  SD's nice levels look rather solid, implying fairness.
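
For anyone who wants to reproduce the nice-level check, something along
these lines should do (a minimal sketch, not the exact test I ran; it
assumes taskset from util-linux and GNU time are available):

    # Pin two identical busy loops to one CPU so they actually compete,
    # then compare finish times per nice level.
    taskset -c 0 nice -n 0  /usr/bin/time -f 'nice 0:  %e s wall' \
        sh -c 'i=0; while [ $i -lt 20000000 ]; do i=$((i+1)); done' &
    taskset -c 0 nice -n 10 /usr/bin/time -f 'nice 10: %e s wall' \
        sh -c 'i=0; while [ $i -lt 20000000 ]; do i=$((i+1)); done' &
    wait

With solid nice handling the nice-0 loop should finish markedly sooner,
roughly in proportion to the nice weighting.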


Thanks!

--
Al


* [REPORT] 2.6.21.1 vs 2.6.21-sd046 vs 2.6.21-cfs-v6
@ 2007-04-30  8:05 Michael Gerdau
  2007-05-02 12:11 ` [ck] " Con Kolivas
  0 siblings, 1 reply; 8+ messages in thread
From: Michael Gerdau @ 2007-04-30  8:05 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ingo Molnar, Linus Torvalds, Nick Piggin, Gene Heskett,
	Juliusz Chroboczek, Mike Galbraith, Peter Williams, ck list,
	Thomas Gleixner, William Lee Irwin III, Andrew Morton,
	Bill Davidsen, Willy Tarreau, Arjan van de Ven


Hi list,

meanwhile I've redone my numbercrunching tests with the following kernels:
    2.6.21.1 (mainline)
    2.6.21-sd046
    2.6.21-cfs-v6
running on a dualcore x86_64.
[I will run the same test with 2.6.21.1-cfs-v7 over the next days,
likely tonight]

The tests consist of 3 tasks (named LTMM, LTMB and LTBM). The only
I/O they do is during init and for logging the results; the rest
is just floating point math.
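
To give an idea of the workload shape, here is a purely illustrative
stand-in (not one of the real tasks; name and iteration count are made
up): all floating point, output only at the end:

    # hypothetical FP burner standing in for one crunching task
    awk 'BEGIN { s = 0
                 for (i = 0; i < 100000000; i++) s += sin(i) * cos(i)
                 print "result:", s }' > ltxx.log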

There are 3 scenarios:
    j1 - all 3 tasks run sequentially
	 /proc/sys/kernel/sched_granularity_ns=4000000
	 /proc/sys/kernel/rr_interval=16
    j3 - all 3 tasks run in parallel
	 /proc/sys/kernel/sched_granularity_ns=4000000
	 /proc/sys/kernel/rr_interval=16
    j3big - all 3 tasks run in parallel with the timeslice extended
            by two orders of magnitude (not run for mainline)
            /proc/sys/kernel/sched_granularity_ns=400000000
            /proc/sys/kernel/rr_interval=400
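
The knobs were set the usual way, e.g. for j3big (presumably
sched_granularity_ns only exists on the cfs kernels and rr_interval
only on sd; mainline has neither):

    echo 400000000 > /proc/sys/kernel/sched_granularity_ns   # cfs
    echo 400 > /proc/sys/kernel/rr_interval                  # sd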

All 3 tasks are run while the system does nothing else except for
the "normal" (KDE) daemons. The system had not been used for
interactive work during the tests.

I'm giving user time as provided by the "time" command, followed by
wallclock time (all values in seconds).
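A typical invocation (the binary name here is just illustrative) would be

    /usr/bin/time -p ./ltmm    # prints real (wallclock), user and sys

where the user and real values are what's tabulated below.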

                LTMM
                j1              j3              j3big
2.6.21-cfs-v6    5655.07/ 5682   5437.84/ 5531   5434.04/ 8072
2.6.21-sd-046    5556.44/ 5583   5446.86/ 8037   5407.50/ 8274
2.6.21.1         5417.62/ 5439   5422.37/ 7132   na     /na

                LTMB
                j1              j3              j3big
2.6.21-cfs-v6    7729.81/ 7755   7470.10/10244   7449.16/10186
2.6.21-sd-046    7611.00/ 7626   7573.16/10109   7458.10/10316
2.6.21.1         7438.31/ 7461   7620.72/11087   na     /na

                LTBM
                j1              j3              j3big
2.6.21-cfs-v6    7720.70/ 7746   7567.09/10362   7464.17/10335
2.6.21-sd-046    7431.06/ 7452   7539.83/10600   7474.20/10222
2.6.21.1         7452.80/ 7481   7484.19/ 9570   na     /na

                LTMM+LTMB+LTBM
                j1              j3              j3big
2.6.21-cfs-v6   21105.58/21183  20475.03/26137  20347.37/28593
2.6.21-sd-046   20598.50/20661  20559.85/28746  20339.80/28812
2.6.21.1        20308.73/20381  20527.28/27789  na      /na

User time apparently is subject to some variance. I'm particularly surprised
by the wallclock times of scenarios j1 and j3 for case LTMM with 2.6.21-cfs-v6.
I'm not sure what to make of this, i.e. whether something else was happening
on my machine during j1 of LTMM -- that was always the first test I ran,
and it may be that some other jobs were still running after the initial boot.

Assuming scenario j1 constitutes the "true" time each task requires, and
also assuming each scheduler makes maximum use of the available CPUs (the
tests involve very little I/O), one could compute the expected wallclock
time. Since I suspect the j1 figures of LTMM are somewhat "dirty", though,
I'll refrain from doing so.
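Just to sketch the method anyway, taking the 2.6.21-sd-046 column at
face value:

    expected j3 wall  ~ (u_LTMM + u_LTMB + u_LTBM) / 2 cores
                      = (5556.44 + 7611.00 + 7431.06) / 2
                      ~ 10299 s

which is at least in the ballpark of the observed j3 wallclock times of
the two longer tasks (~10109-10600 s).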

However, from these figures it seems that sd provides the fairest
(as in equal share for all) scheduling among the 3 schedulers tested.

Best,
Michael
-- 
 Technosis GmbH, Geschäftsführer: Michael Gerdau, Tobias Dittmar
 Sitz Hamburg; HRB 89145 Amtsgericht Hamburg
 Vote against SPAM - see http://www.politik-digital.de/spam/
 Michael Gerdau       email: mgd@technosis.de
 GPG-keys available on request or at public keyserver




Thread overview: 8+ messages
2007-05-02 23:11 [ck] [REPORT] 2.6.21.1 vs 2.6.21-sd046 vs 2.6.21-cfs-v6 Al Boldi
2007-05-03  0:49 ` William Lee Irwin III
2007-05-03  3:51   ` Al Boldi
2007-05-03  4:35     ` William Lee Irwin III
2007-05-03  6:42       ` Al Boldi
2007-05-03  7:23         ` William Lee Irwin III
2007-05-03  8:01           ` Al Boldi
  -- strict thread matches above, loose matches on Subject: below --
2007-04-30  8:05 Michael Gerdau
2007-05-02 12:11 ` [ck] " Con Kolivas
