public inbox for linux-kernel@vger.kernel.org
From: Con Kolivas <kernel@kolivas.org>
To: Nathan Fredrickson <8nrf@qlink.queensu.ca>
Cc: Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Nick Piggin <piggin@cyberone.com.au>, Ingo Molnar <mingo@elte.hu>,
	Adam Kropelin <akropel1@rochester.rr.com>
Subject: Re: HT schedulers' performance on single HT processor
Date: Mon, 15 Dec 2003 21:11:52 +1100
Message-ID: <200312152111.52949.kernel@kolivas.org>
In-Reply-To: <1071431363.19011.64.camel@rocky>

On Mon, 15 Dec 2003 06:49, Nathan Fredrickson wrote:
> On Fri, 2003-12-12 at 09:57, Con Kolivas wrote:
> > I set out to find how the hyper-thread schedulers would affect the all
> > important kernel compile benchmark on machines that most of us are likely
> > to encounter soon. The single processor HT machine.
>
> I ran some further tests since I have access to some SMP systems with HT
> (1, 2 and 4 physical processors).

> I can also run the same on four physical processors if there is
> interest.

>              j =  1     2     3     4     8
> 1phys (uniproc)  1.00  1.00  1.00  1.00  1.00
> 1phys w/HT       1.02  1.02  0.87  0.87  0.87
> 1phys w/HT (w26) 1.02  1.02  0.87  0.87  0.88
> 1phys w/HT (C1)  1.03  1.02  0.88  0.88  0.88
> 2phys            1.00  1.00  0.53  0.53  0.53
> 2phys w/HT       1.01  1.01  0.64  0.50  0.48
> 2phys w/HT (w26) 1.02  1.01  0.55  0.49  0.47
> 2phys w/HT (C1)  1.02  1.01  0.53  0.50  0.48

The benefits specific to the HT schedulers only start appearing with more 
physical cpus, which is to be expected. Just for demonstration, the four 
processor run would be nice (and would obviously take you less time to do ;). 
I think it will demonstrate the effect even more. It would be nice to help 
the most common case of one HT cpu, though, instead of hindering it.

Adam already pointed out that your -j2 didn't really get you two jobs. I was 
using a 2.4 kernel tree for the benchmarks and -j2 was giving me two jobs, 
although perhaps something about the C1 patch was preventing the second job 
from ever taking off, which is why the result is the same as one job in my 
benches. Curious.
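[As an aside, one quick way to sanity-check whether `make -jN` really runs N 
jobs in parallel is a dummy Makefile with sleeping targets. This is my own 
illustration, not something from the thread; the /tmp path and target names 
are arbitrary:]

```shell
# Hypothetical check: two 1-second targets under -j2 should finish in
# about 1s of wall time if they really run in parallel, ~2s if serial.
mkdir -p /tmp/jtest && cd /tmp/jtest
printf 'all: a b\na:\n\tsleep 1\nb:\n\tsleep 1\n' > Makefile
start=$(date +%s)
make -j2 >/dev/null
end=$(date +%s)
echo "elapsed: $((end - start))s"
```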

> > Conclusion?
> > If you run nothing but kernel compiles all day on a P4 HT, make sure you
> > compile it for SMP ;-)
>
> And make sure you compile with the -jX option with X >= logical_procs+1

Of course. For now, on the uniprocessor HT setup, I'd recommend the 
unmodified scheduler in SMP mode.
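[To make the "X >= logical_procs+1" rule above concrete, here is a small 
sketch of deriving the -j value from the online logical CPU count. This is 
my addition, not from the thread; `getconf _NPROCESSORS_ONLN` counts logical 
(i.e. HT) CPUs on Linux, and `bzImage` is just an example target:]

```shell
# Sketch: pick make's job count as logical CPUs + 1, per the rule quoted
# above. On a single P4 with HT this yields -j3 (2 logical CPUs + 1).
cpus=$(getconf _NPROCESSORS_ONLN)
jobs=$((cpus + 1))
echo "running: make -j${jobs}"
# make -j"${jobs}" bzImage
```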

Con


Thread overview: 9+ messages
2003-12-12 14:57 HT schedulers' performance on single HT processor Con Kolivas
2003-12-14 19:49 ` Nathan Fredrickson
2003-12-14 20:35   ` Adam Kropelin
2003-12-14 21:15     ` Nathan Fredrickson
2003-12-15 10:11   ` Con Kolivas [this message]
2003-12-16  0:16     ` Nathan Fredrickson
2003-12-16  0:55       ` Con Kolivas
2003-12-16  3:57         ` Nathan Fredrickson
2004-01-03 17:56 ` Bill Davidsen
