From: Rusty Russell <rusty@rustcorp.com.au>
To: Alan Cox <alan@lxorguk.ukuu.org.uk>
Cc: anton@samba.org, davej@suse.de, marcelo@conectiva.com.br,
linux-kernel@vger.kernel.org, torvalds@transmeta.com
Subject: Re: Linux 2.4.17-pre5
Date: Sun, 09 Dec 2001 12:58:43 +1100
Message-ID: <E16CtEp-0007Jb-00@wagner>
In-Reply-To: Your message of "Sun, 09 Dec 2001 00:31:13 -0000." <E16Crs9-0003Gc-00@the-village.bc.nu>
In message <E16Crs9-0003Gc-00@the-village.bc.nu> you write:
> > The sched.c change is also useless (ie. only harmful). Anton and I looked
> > at adapting the scheduler for hyperthreading, but it looks like the recent
> > changes have had the side effect of making hyperthreading + the current
>
> I trust Intels own labs over you on this one.
This is voodoo optimization. I don't care WHO did it.
Marcelo, drop the patch. Please delay scheduler hacks until they can
be verified to actually do something.
Given another chip with similar technology (eg. PPC's Hardware Multi
Threading) and the same patch, dbench runs 1 - 10 on a 4-way show NO
POSITIVE DIFFERENCE:
http://samba.org/~anton/linux/HMT/
> I suspect they know what their chip needs.
I find your faith in J. Random Intel Engineer fascinating.
================
The current scheduler actually works quite well if you number your
CPUs right, and to fix the corner cases takes more than this change.
First some simple terminology: let's assume we have two "sides" to
each CPU (ie. each CPU has two IDs, smp_num_cpus()/2 apart):
0 1 2 3
4 5 6 7
The current scheduler code reschedule_idle()s (pushes) from 0 to 3
first anyway, so if we're less than 50% utilized it tends to "just
work". Note that it doesn't stop the schedule() (pulls) on 4 - 7 from
grabbing a process to run even if there is a fully idle CPU, so it's
far from perfect.
Now let's look at the performance-problematic case: dbench 5.
Without HMT/hyperthread:
Fifth process not scheduled at all.
When any of the first four processes schedule(), the fifth
process is pulled onto that processor.
With HMT/hyperthread:
Fifth process scheduled on 4 (shared with 0).
When processes on 1, 2, or 3 schedule(), that processor sits
idle, while processor 0/4 is doing double work (ie. only 2 in
5 chance that the right process will schedule() first).
Finally, 0 or 4 will schedule() then wakeup, and be pulled
onto another CPU (unless they are all busy again).
The result is that dbench 5 runs significantly SLOWER with
hyperthreading than without. We really want to pull a process off the
CPU it is running on, if we are completely idle and it is running on a
double-used CPU. Note that dbench 6 is almost back to normal
performance, since the probability of the right process scheduling
first becomes 4 in 6.
Now, the Intel hack changes reschedule_idle() to push onto the first
completely idle CPU above all others. Nice idea: the only problem is
finding a load where that actually happens, since we push onto low
numbers first anyway. If we have an average of <= 4 running
processes, they spread out nicely, and if we have an average of > 4
then there are no fully idle CPUs and this hack is useless.
Clear?
Rusty.
--
Anyone who quotes me in their sig is an idiot. -- Rusty Russell.