public inbox for linux-kernel@vger.kernel.org
From: Peter Williams <pwil3058@bigpond.net.au>
To: Andi Kleen <ak@muc.de>
Cc: linux-kernel@vger.kernel.org
Subject: Re: [PATCH] V-3.0 Single Priority Array O(1) CPU Scheduler Evaluation
Date: Tue, 03 Aug 2004 10:27:33 +1000
Message-ID: <410EDBF5.40205@bigpond.net.au> (raw)
In-Reply-To: <m3isc1smag.fsf@averell.firstfloor.org>

Andi Kleen wrote:
> Peter Williams <pwil3058@bigpond.net.au> writes:
> 
> 
>>Version 3 of the various single priority array scheduler patches for
>>2.6.7, 2.6.8-rc2 and 2.6.8-rc2-mm1 kernels are now available for
>>download and evaluation:
> 
> 
> [...] So many schedulers. It is hard to choose one for testing.
> 
> Do you have one you prefer and would like to be tested especially? 

I think that ZAPHOD in "pb" mode is probably the best, but I'm hoping for 
input from a wider audience.  That's why I built HYDRA: so that the 
various options can be tried without having to rebuild and reboot.

The questions I'm trying to answer are:

1.  Are "priority" based ("pb") semantics for "nice" better than 
"entitlement" based ("eb") semantics?  Does it depend on what the 
system's being used for?

2. What are the appropriate values for controlling interactive bonuses 
for "pb" and "eb"?

3. Are interactive bonuses even necessary for "eb"?

4. What's the best time slice size for interactive use?

5. What's the best time slice size for a server?

6. Does the throughput bonus help?  In particular, does it reduce the 
amount of queuing when the system is not fully loaded?

7. Should any (or all) of the scheduling knobs in 
/proc/sys/kernel/cpusched/ be retained for a production scheduler?
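For anyone wanting to experiment, the knobs behave like ordinary sysctl 
files.  A minimal sketch of inspecting them (only the directory name comes 
from the patch; the individual knob names depend on which ZAPHOD/HYDRA 
build is running, so none are hard-coded here):

```shell
#!/bin/sh
# List every scheduler tunable under the patch's sysctl directory and
# print its current value.  Knob names are whatever the running build
# exports; none are assumed.
DIR=/proc/sys/kernel/cpusched
if [ -d "$DIR" ]; then
    for knob in "$DIR"/*; do
        printf '%s = %s\n' "${knob##*/}" "$(cat "$knob")"
    done
    # To change one (as root):  echo VALUE > "$DIR"/KNOB_NAME
else
    echo "cpusched sysctls not present (patched kernel not running?)"
fi
```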

> 
> Perhaps a few standard benchmark numbers for the different ones 
> (e.g. volanomark or hackbench or the OSDL SDL tests) would make it 
> easier to preselect.

OK, I'll do that. But some of the questions for which I am seeking 
answers are more subjective.  In particular, interactive responsiveness 
is hard to test automatically.

Also, these tests are no good for evaluating the throughput bonus 
because when the system is heavily loaded there's going to be queuing 
anyway.  Where this bonus is expected to be most helpful is at 
intermediate system loads where there is still significant queuing (due 
to lack of serendipity, i.e. the tasks all sleep at the same time and 
try to run at the same time instead of slotting in nicely between each 
other) even though there are (theoretically) enough CPU resources to 
handle the load without any queuing.  Tests with an earlier scheduler 
showed that (where such queuing was occurring) the techniques used in 
the throughput bonus could significantly reduce it.  These tests were 
conducted with artificial loads, and confirmation from the real world 
would be helpful.  (Of import here is whether there are significant 
queuing problems at intermediate loads in the real world, because if 
there aren't then there is nothing for the throughput bonus to fix.  
This is possible because the more random a system's load is, the better 
the serendipity will be.)

The scheduling statistics that are included in my patches measure time 
spent on the run queue, so that measurements of improvements due to the 
throughput bonus are possible.  Another way that such improvements would 
show up is as increased responsiveness from servers.
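The schedstats work circulating for mainline exposes an analogous 
per-task figure via /proc/<pid>/schedstat (three fields: time on CPU, 
time spent waiting on a run queue, and timeslice count).  That file is 
not part of my patch, but it measures the same thing; a sketch of 
reading it, assuming a kernel built with schedstats:

```shell
#!/bin/sh
# Read this shell's own run-queue statistics from per-task schedstats.
# Fields: time on CPU, time waiting on a run queue, number of
# timeslices received on this CPU.
FILE=/proc/self/schedstat
if [ -r "$FILE" ]; then
    read -r on_cpu wait_rq slices < "$FILE"
    echo "on-cpu time:    $on_cpu"
    echo "run-queue wait: $wait_rq"
    echo "timeslices:     $slices"
else
    echo "schedstats not enabled (CONFIG_SCHEDSTATS)"
fi
```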

> 
> Have you considered submitting one to -mm* for wider testing?

I've made patches available for 2.6.8-rc2-mm1 and I'll provide them for 
mm2 as soon as possible.  Is there something else I should be doing?

Peter
-- 
Peter Williams                                   pwil3058@bigpond.net.au

"Learning, n. The kind of ignorance distinguishing the studious."
  -- Ambrose Bierce



Thread overview: 22+ messages
     [not found] <2oEEn-197-9@gated-at.bofh.it>
2004-08-02 13:27 ` [PATCH] V-3.0 Single Priority Array O(1) CPU Scheduler Evaluation Andi Kleen
2004-08-03  0:27   ` Peter Williams [this message]
2004-08-03  3:53     ` Andrew Morton
2004-08-03  4:38       ` Peter Williams
2004-08-03  6:51       ` Andi Kleen
2004-08-07  1:44 Peter Williams
  -- strict thread matches above, loose matches on Subject: below --
2004-08-02  6:31 Peter Williams
2004-08-02 13:42 ` William Lee Irwin III
2004-08-03  0:33   ` Peter Williams
2004-08-03  2:03     ` William Lee Irwin III
2004-08-03  3:39       ` Peter Williams
2004-08-03 10:49         ` William Lee Irwin III
2004-08-04  0:37           ` Peter Williams
2004-08-04  0:50             ` William Lee Irwin III
2004-08-04  1:36               ` Peter Williams
2004-08-04  1:51                 ` William Lee Irwin III
2004-08-04  2:40                   ` Peter Williams
2004-08-04  7:05                     ` Ingo Molnar
2004-08-04  7:44                     ` William Lee Irwin III
2004-08-05  1:06                       ` Peter Williams
2004-08-05  2:00                         ` William Lee Irwin III
2004-08-05  2:12                           ` Peter Williams
