From: Daniel Hazelton <dhazelton@enter.net>
To: Ingo Molnar <mingo@elte.hu>
Cc: William Lee Irwin III <wli@holomorphy.com>,
Srivatsa Vaddagiri <vatsa@in.ibm.com>,
efault@gmx.de, tingy@cs.umass.edu, linux-kernel@vger.kernel.org
Subject: Re: fair clock use in CFS
Date: Mon, 14 May 2007 10:31:13 -0400 [thread overview]
Message-ID: <200705141031.13528.dhazelton@enter.net> (raw)
In-Reply-To: <20070514115049.GA28721@elte.hu>
On Monday 14 May 2007 07:50:49 Ingo Molnar wrote:
> * William Lee Irwin III <wli@holomorphy.com> wrote:
> > On Mon, May 14, 2007 at 12:31:20PM +0200, Ingo Molnar wrote:
> > > please clarify - exactly what is a mistake? Thanks,
> >
> > The variability in ->fair_clock advancement rate was the mistake, at
> > least according to my way of thinking. [...]
>
> you are quite wrong. Lets consider the following example:
>
> we have 10 tasks running (all at nice 0). The current task spends 20
> msecs on the CPU and a new task is picked. How much CPU time did that
> waiting task get entitled to during its 20 msecs wait? If fair_clock was
> constant as you suggest then we'd give it 20 msecs - but its true 'fair
> expectation' of CPU time was only 20/10 == 2 msecs!
Either you have a strange definition of fairness or you chose an extremely
poor example, Ingo. In a fair scheduler I'd expect all tasks to get the exact
same amount of time on the processor. So if there are 10 tasks running at
nice 0 and the current task has run for 20msecs before a new task is swapped
onto the CPU, the new task and *all* other tasks waiting to get onto the CPU
should get the same 20msecs. What you've described above is fundamentally
unfair - one process runs for 20msecs while each of the 10 waiting processes
gets a slice that starts small and grows at a predictable rate.
Some numbers based on your above description:
Process 1 runs for 20msecs
Process 2 runs for 2msecs (20/10 == 2msecs)
Process 3 runs for 2.2msecs (has waited 22msecs, 22/10 == 2.2)
Process 4 runs for 2.4msecs (has waited 24.2msecs - rounded for brevity)
Process 5 runs for 2.7msecs (has waited 26.6msecs - rounded for brevity)
Process 6 runs for 2.9msecs (has waited approx. 29.3msecs)
Process 7 runs for 3.2msecs (has waited approx. 32.2msecs)
Process 8 runs for 3.5msecs (has waited approx. 35.4msecs)
Process 9 runs for 3.9msecs (has waited approx. 39msecs)
Process 10 runs for 4.3msecs (has waited approx. 42.9msecs)
Now if the "process time" isn't scaled to match how long the process has
spent waiting to get on the CPU, you get some measure of fairness back, but
even then the description of CFS you've given shows a fundamental unfairness.
However, if you meant that "the new process has spent 20msecs waiting to get
on the CPU", then the rest of your description does show what I'd expect from
a fair scheduler. If not, then I guess that CFS is only "Completely Fair" for
significantly large values of "fair".
(I will not, however, argue that CFS isn't a damned good scheduler that has
improved interactivity on the systems of the people who have tested it)
> So a 'constant' fair_clock would turn the whole equilibrium upside down
> (it would inflate p->wait_runtime values and the global sum would not be
> roughly constant anymore but would run up very fast), especially during
> fluctuating loads.
Hrm... Okay, so you're saying that "fair_clock" runs slower the more
processes there are running, to prevent the run-up in "time spent on CPU" I
noticed based solely on your initial example? If that is the case, then I can
see the fairness - it's just not visible from a really quick look at the code
and the simplified description you gave earlier.
DRH