public inbox for linux-kernel@vger.kernel.org
From: Peter Williams <pwil3058@bigpond.net.au>
To: Al Boldi <a1426z@gawab.com>
Cc: linux-kernel@vger.kernel.org
Subject: Re: [ANNOUNCE][RFC] PlugSched-6.3.1 for  2.6.16-rc5
Date: Sun, 09 Apr 2006 12:58:29 +1000	[thread overview]
Message-ID: <44387855.30004@bigpond.net.au> (raw)
In-Reply-To: <200604082331.56715.a1426z@gawab.com>

Al Boldi wrote:
> Peter Williams wrote:
>> Al Boldi wrote:
>>> Can you try the attached mem-eater passing it the number of kb to be
>>> eaten.
>>>
>>> 	i.e. '# while :; do ./eatm 9999 ; done'
>>>
>>> This will print the number of bytes eaten and the timing in ms.
>>>
>>> Adjust the number of kb to be eaten such that the timing will be less
>>> than timeslice (120ms by default for spa).  Switch to another vt and
>>> start pressing enter.  A console lockup should follow within seconds for
>>> all spas except ebs.
>> This doesn't seem to present a problem (other than the eatme loop being
>> hard to kill with control-C) on my system using spa_ws with standard
>> settings.  I tried both UP and SMP.  I may be doing something wrong or
>> perhaps don't understand what you mean by a console lock up.
> 
> Switching from one vt to another receives hardly any response.

Aah.  Virtual terminals.  I was using Gnome terminals under X.

> 
> This is especially visible in spa_no_frills, and spa_ws recovers from this 
> lockup somewhat and starts exhibiting this problem as a choking behavior.
> 
> Running '# top d.1 (then shift T)' on another vt shows this choking behavior 
> as the proc gets boosted.
> 
>> When you say "less than the timeslice" how much smaller do you mean?
> 
> This depends on your machine's performance.  On my 400MHz P2 UP with 128MB, 
> w/ spa_no_frills default settings, looping eatm 9999 takes 63ms per eat and 
> causes the rest of the system to be starved.  Raising kb to 19999 takes 
> 126ms, which is greater than the default 120ms timeslice and causes no 
> system starvation.
> 
> What numbers do you get?

For 9999 I get 20ms.  I have 1GB of memory and no swapping is taking 
place, but with only 128MB it's possible that your system is swapping, 
which could make the effect more pronounced.

But anyway, based on the evidence, I think the problem is caused by the 
fact that the eatm tasks run to completion in less than one time slice 
without sleeping, which means they never have their priorities 
reassessed.  The reason spa_ebs doesn't demonstrate the problem is that 
it uses a smaller time slice for the first time slice a task gets.  It 
does this because it gives newly forked processes a fairly high 
priority, and if they're left to run for a full 120 msecs at that high 
priority they can hose the system.  A shorter first time slice gives 
the scheduler a chance to reassess the task's priority before it does 
much damage.
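(As a rough illustration -- this is not the actual PlugSched code, and the 
10ms first slice is an assumed value for the sketch, not a number stated in 
this thread -- the policy amounts to something like:)

```c
#include <assert.h>

#define DEFAULT_TIME_SLICE_MS 120   /* spa default, per this thread */
#define FIRST_TIME_SLICE_MS    10   /* assumed value for illustration */

struct task_info {
	int has_run_before;  /* cleared on fork, set after first slice */
};

/* A newly forked task gets a deliberately short first time slice so
 * its (optimistically high) initial priority is reassessed before it
 * can monopolize the CPU. */
static int time_slice_ms(const struct task_info *t)
{
	return t->has_run_before ? DEFAULT_TIME_SLICE_MS
	                         : FIRST_TIME_SLICE_MS;
}
```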

The reason the other schedulers don't have this strategy is that I 
didn't think it was necessary.  Obviously I was wrong and should 
extend it to the other schedulers.  It's doubtful whether this will 
help a great deal with spa_no_frills, as it is pure round robin and 
doesn't reassess priorities except when nice is changed or the task 
changes policy.  This is one good reason not to use spa_no_frills on 
production systems.  Perhaps you should consider creating a child 
scheduler on top of it that meets your needs?

Anyway, an alternative (and safer) way to reduce the effects of this 
problem (while you're waiting for me to make the above change) is to 
reduce the size of the time slice.  The only bad effect of doing this 
is that you'll do slightly worse (less than 1%) on kernbench.

Peter
-- 
Peter Williams                                   pwil3058@bigpond.net.au

"Learning, n. The kind of ignorance distinguishing the studious."
  -- Ambrose Bierce


Thread overview: 28+ messages
2006-04-03 11:59 [ANNOUNCE][RFC] PlugSched-6.3.1 for 2.6.16-rc5 Al Boldi
2006-04-03 12:13 ` Paolo Ornati
2006-04-03 23:04 ` Peter Williams
2006-04-03 23:29   ` Con Kolivas
2006-04-04  0:01     ` Peter Williams
2006-04-04  0:12       ` Con Kolivas
2006-04-04  1:29         ` Peter Williams
2006-04-04 13:27           ` Al Boldi
2006-04-04 13:27   ` Al Boldi
2006-04-04 23:17     ` Peter Williams
2006-04-05  8:16       ` Al Boldi
2006-04-05 22:53         ` Peter Williams
2006-04-07 21:32           ` Al Boldi
2006-04-08  1:29             ` Peter Williams
2006-04-08 20:31               ` Al Boldi
2006-04-09  2:58                 ` Peter Williams [this message]
2006-04-09  5:04                   ` Al Boldi
2006-04-09 23:53                     ` Peter Williams
2006-04-10 14:43                       ` Al Boldi
2006-04-11  2:07                         ` Peter Williams
2006-04-03 23:27 ` Peter Williams
2006-04-04 13:27   ` Al Boldi
2006-04-04 23:20     ` Peter Williams
2006-04-05  8:16       ` Al Boldi
  -- strict thread matches above, loose matches on Subject: below --
2006-02-28 22:32 Peter Williams
2006-03-01  2:36 ` Peter Williams
2006-04-02  2:04   ` Peter Williams
2006-04-02  6:02     ` Con Kolivas
