public inbox for linux-kernel@vger.kernel.org
From: Ingo Molnar <mingo@elte.hu>
To: Michael Gerdau <mgd@technosis.de>
Cc: linux-kernel@vger.kernel.org,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Nick Piggin <npiggin@suse.de>,
	Gene Heskett <gene.heskett@gmail.com>,
	Juliusz Chroboczek <jch@pps.jussieu.fr>,
	Mike Galbraith <efault@gmx.de>,
	Peter Williams <pwil3058@bigpond.net.au>,
	ck list <ck@vds.kolivas.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	William Lee Irwin III <wli@holomorphy.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Bill Davidsen <davidsen@tmr.com>, Willy Tarreau <w@1wt.eu>,
	Arjan van de Ven <arjan@infradead.org>
Subject: Re: [REPORT] 2.6.21.1 vs 2.6.21-sd046 vs 2.6.21-cfs-v6
Date: Thu, 3 May 2007 14:28:51 +0200
Message-ID: <20070503122851.GA32222@elte.hu>
In-Reply-To: <200704301005.33884.mgd@technosis.de>


* Michael Gerdau <mgd@technosis.de> wrote:

> There are 3 scenarios:
>     j1 - all 3 tasks run sequentially
> 	 /proc/sys/kernel/sched_granularity_ns=4000000
> 	 /proc/sys/kernel/rr_interval=16
>     j3 - all 3 tasks run in parallel
> 	 /proc/sys/kernel/sched_granularity_ns=4000000
> 	 /proc/sys/kernel/rr_interval=16
>     j3big - all 3 tasks run in parallel with the timeslice extended
>             by two orders of magnitude (not run for mainline)
>             /proc/sys/kernel/sched_granularity_ns=400000000
>             /proc/sys/kernel/rr_interval=400
> 
> All 3 tasks are run while the system does nothing else except for
> the "normal" (KDE) daemons. The system had not been used for
> interactive work during the tests.
> 
> I'm giving user time as provided by the "time" cmd followed by wallclock time
> (all values in seconds).
> 
>                 LTMM
>                 j1              j3              j3big
> 2.6.21-cfs-v6    5655.07/ 5682   5437.84/ 5531   5434.04/ 8072
>                 LTMB
> 2.6.21-cfs-v6    7729.81/ 7755   7470.10/10244   7449.16/10186
>                 LTBM
> 2.6.21-cfs-v6    7720.70/ 7746   7567.09/10362   7464.17/10335
>                 LTMM+LTMB+LTBM
2.6.21-cfs-v6   21105.58/21183  20475.03/26137  20347.37/28593

> User time apparently is subject to some variance. I'm particularly 
> surprised by the wallclock time of scenarios j1 and j3 for case LTMM 
> with 2.6.21-cfs-v6. I'm not sure what to make of this, i.e. whether 
> something else was running on my machine during j1 of LTMM -- that 
> was always the first test I ran, and some other jobs may still have 
> been running after the initial boot.
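(for reference, the tunables quoted above are runtime-switchable; a 
minimal sketch with the j3big values, assuming a root shell and that 
only the booted scheduler's knob exists -- sched_granularity_ns under 
CFS, rr_interval under SD:)

```shell
# j3big settings: 400 ms granularity/timeslice (2 orders of magnitude
# above the 4 ms / 16 ms defaults used in j1 and j3)
echo 400000000 > /proc/sys/kernel/sched_granularity_ns   # CFS, in ns
echo 400 > /proc/sys/kernel/rr_interval                  # SD, in ms
```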

thanks for the testing!

regarding the fairness of the different schedulers, please note the 
different runtimes for each component of the workload:

     LTMM:   5655.07/ 5682
     LTMB:   7729.81/ 7755
     LTBM:   7720.70/ 7746

this means that a fair scheduler would _not_ be the one that finishes 
them first on wall-clock time (!). A fair scheduler would run each of 
them at 33% capacity until the fastest one (LTMM) reaches ~5650 seconds 
runtime and finishes, and _then_ the remaining ~2050 seconds of runtime 
would be done at 50%/50% capacity between the remaining two jobs. I.e. 
the fair wall-clock results should be around:

    LTMM:  ~8500 seconds
    LTMB: ~10600 seconds 
    LTBM: ~10600 seconds

(but the IO portion of the workloads and other scheduling effects could 
easily shift these numbers by a few minutes.)

regarding the results: it seems the wallclock portion of LTMM/j3 is too 
small - even though the 3 tasks ran in parallel, in the CFS test LTMM 
finished just as fast as if it were running alone, right? That does not 
seem to be logical and indeed suggests some sort of testing artifact.

That makes it hard to judge which scheduler achieved the above 'ideal 
fair distribution' of the workloads better - for some of the results it 
was SD, for some it was CFS - but the missing LTMM/j3 number makes it 
hard to decide it conclusively. They are certainly both close enough and 
the noise of the results seems quite high.

	Ingo


Thread overview: 4+ messages
2007-04-30  8:05 [REPORT] 2.6.21.1 vs 2.6.21-sd046 vs 2.6.21-cfs-v6 Michael Gerdau
2007-05-02 12:11 ` [ck] " Con Kolivas
2007-05-03 12:28 ` Ingo Molnar [this message]
2007-05-03 12:45   ` Michael Gerdau
