From: Bill Davidsen <davidsen@tmr.com>
To: Con Kolivas <kernel@kolivas.org>
Cc: linux kernel mailing list <linux-kernel@vger.kernel.org>,
Nick Piggin <piggin@cyberone.com.au>, Ingo Molnar <mingo@elte.hu>
Subject: Re: HT schedulers' performance on single HT processor
Date: Sat, 03 Jan 2004 12:56:17 -0500
Message-ID: <3FF70241.1030307@tmr.com>
In-Reply-To: <200312130157.36843.kernel@kolivas.org>
Con Kolivas wrote:
> I set out to find how the hyper-thread schedulers would affect the
> all-important kernel compile benchmark on a machine most of us are likely to
> encounter soon: the single-processor HT machine.
>
> Usual benchmark precautions taken; best of five runs (curiously, the fastest
> was almost always the second run). For confirmation I ran the whole thing
> twice.
>
> Tested a kernel compile with make vmlinux, make -j2 and make -j8.
>
> make vmlinux - tests to ensure the sequential single-threaded make doesn't
> suffer as a result of these tweaks
>
> make -j2 vmlinux - tests to see how well wasted idle time is avoided
>
> make -j8 vmlinux - maximum throughput test (4x nr_cpus seems to be the
> ceiling for this).
>
> Hardware: P4 HT 3.066GHz
>
> Legend:
> UP - Uniprocessor 2.6.0-test11 kernel
> SMP - SMP kernel
> C1 - With Ingo's C1 hyperthread patch
> w26 - With Nick's w26 sched-rollup (hyperthread included)
>
> make vmlinux
> kernel time
> UP 65.96
> SMP 65.80
> C1 66.54
> w26 66.25
>
> I was concerned this might happen, and indeed the sequential single-threaded
> compile is slightly worse on both HT schedulers. (1)
>
> make -j2 vmlinux
> kernel time
> UP 65.17
> SMP 57.77
> C1 66.01
> w26 57.94
>
> This shows the SMP kernel nicely utilises HT whereas the UP kernel doesn't.
> The C1 result was very repeatable and I was unable to get it lower than this. (2)
>
> make -j8 vmlinux
> kernel time
> UP 65.00
> SMP 57.85
> C1 58.25
> w26 57.94
If you could run one more test, please do the compile with -pipe set in the
top-level Makefile. I don't have play access to an HT uni; the only
machines available to me at the moment are SMP, and production at that.
I did try it just for grins on a non-HT uni and saw this:
  opt        real   user   sys   idle
  -j1        406.2  308.1  19.0  79.1
  -j1 -pipe  398.6  308.2  19.0  71.4
  -j3        391.6  308.3  19.0  64.3
  -j3 -pipe  388.7  308.4  19.0  61.3
P4-2.4GHz, 256MB, compiling 2.5.47-ac6 with just "make". Using -pipe
*may* allow both siblings to cooperate better.
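For reference, the change I mean is just appending -pipe to the compiler
flags in the top-level Makefile and re-running the timed builds. The flag
list below is illustrative only; the actual CFLAGS line in your tree will
differ, so treat this as a sketch:

  # top-level Makefile: add -pipe so gcc passes data between its stages
  # through pipes rather than temporary files
  CFLAGS := $(CPPFLAGS) -Wall -O2 -fomit-frame-pointer -pipe

  # then repeat the timed runs
  make clean
  time make -j2 vmlinux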
I assume that CPU affinity should apply to all siblings in a package?
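If you want to check that by hand, something like taskset from schedutils
should do; the masks below assume logical CPUs 0 and 1 are the two siblings
of the one package, which may not match every box:

  # bind the whole build (make and its child gccs) to both siblings
  taskset 0x3 make -j2 vmlinux

  # bind it to one sibling only, for comparison
  taskset 0x1 make -j2 vmlinux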
--
bill davidsen <davidsen@tmr.com>
CTO TMR Associates, Inc
Doing interesting things with small computers since 1979