public inbox for linux-kernel@vger.kernel.org
From: Ingo Molnar <mingo@elte.hu>
To: Tong Li <tong.n.li@intel.com>
Cc: Roman Zippel <zippel@linux-m68k.org>,
	linux-kernel@vger.kernel.org, peterz@infradead.org,
	Mike Galbraith <efault@gmx.de>
Subject: Re: [ANNOUNCE/RFC] Really Simple Really Fair Scheduler
Date: Sun, 2 Sep 2007 21:44:47 +0200
Message-ID: <20070902194447.GA30724@elte.hu>
In-Reply-To: <Pine.LNX.4.64.0709021106230.30970@tongli.jf.intel.com>


* Tong Li <tong.n.li@intel.com> wrote:

> I like this patch since it's really simple. CFS does provide a nice 
> infrastructure to enable new algorithmic changes/extensions. My only 
> concern was the O(log N) complexity under heavy load, but I'm willing 
> to agree that it's OK in the common case. [...]

yeah. Note that, just in case I wasn't clear enough: my patch attempts to 
be an adaptation of the core fairness math algorithm Roman suggested - so 
it is not my idea and I don't want to take credit for it. (If it were to 
go upstream it would of course carry a prominent "fairness math 
rewritten by Roman Zippel" credit.)

about O(log N) complexity: the "timeline" nature of the CFS rbtree 
(which Roman's patch preserves) guarantees a good level of cache 
locality. We generally insert tasks at the "right side" of the tree and 
remove them from the "left side" of the tree (not always, of course, but 
for most workloads), so in practice, on a reasonable CPU, there's no 
difference to the cache-miss patterns of pure O(1) algorithms. And in 
terms of cycle overhead, let's consider something really extreme: _one 
million runnable tasks on a single CPU_ (which is clearly silly and 
unrealistic), which has a worst-case rbtree depth of ~31. A modern CPU 
can walk a 31-deep binary tree in the neighborhood of 100 cycles 
(cached). That makes it comparable to the O(1) scheduler's reliance on 
the BSF instruction on x86 (which costs a few dozen cycles, last I 
remember). In practice, O(log N) algorithms are really equivalent to 
O(1) algorithms. The big thing about the "O(1) scheduler" was that the 
scheduler it replaced was O(N). Now an O(N) algorithm _does_ hurt.

No doubt, people _will_ play with CFS and will try to implement its 
timeline data structure using O(1) algorithms (or improved tree 
algorithms). It's doable and it will certainly be interesting to see the 
results of such experiments. The rbtree was simply the most natural 
choice of an already existing, lightweight in-kernel tree data 
structure. [ It's also used by the MM so continued sanity and 
performance of that code is guaranteed by the MM hackers ;-) ]

> [...] Some comments on the code:

> >+	key = se->exec_runtime;
> >
> >	se->fair_key = key;
> >}
> 
> Should we use the weighted fair clock exec_runtime as the key? This 
> way tasks with larger weights will have their keys incremented more 
> slowly and thus be given more CPU time. This is what other 
> virtual-clock based fair scheduling algorithms commonly do.

yep. The code I posted treats all tasks as nice-0. I suspect that by 
adding a calc_weighted() transformation to the key calculation above 
we'd get most of the nice level support. (But I haven't tried that yet - 
I was mainly interested in a simple expression of Roman's ideas.)

> >+	se->exec_runtime = avg_exec_runtime(cfs_rq);
> >	__enqueue_entity(cfs_rq, se);
> >}
> 
> What's the intuition behind avg_exec_runtime? I thought the original 
> CFS approach, i.e., setting a newly arriving task's key to be the 
> current fair clock, adjusted by wait_runtime, was good. It matches 
> other fair queuing algorithms and thus has provably good properties.

it's, I think, what Roman's algorithm does, and I wanted to implement 
that and only that, to be able to review the effects of a much simpler 
yet behaviorally equivalent patch. Perhaps Roman can shed some light on 
the thinking behind that average.

	Ingo

Thread overview: 12+ messages
2007-09-02 12:01 [ANNOUNCE/RFC] Really Simple Really Fair Scheduler Ingo Molnar
2007-09-02 19:12 ` Tong Li
2007-09-02 19:44   ` Ingo Molnar [this message]
2007-09-03 18:38 ` Roman Zippel
2007-09-03 18:54   ` Ingo Molnar
2007-09-03 19:13     ` Roman Zippel
2007-09-03 19:20       ` Ingo Molnar
2007-09-03 19:55         ` Roman Zippel
2007-09-03 20:04           ` Ingo Molnar
2007-09-04  2:50             ` Roman Zippel
2007-09-04  6:29               ` Ingo Molnar
2007-09-04 11:21                 ` Roman Zippel
