From: Nick Piggin <piggin@cyberone.com.au>
To: Bill Davidsen <davidsen@tmr.com>
Cc: linux-kernel@vger.kernel.org, Con Kolivas <kernel@kolivas.org>
Subject: Re: [BENCHMARK] 2.6.3-rc2 v 2.6.3-rc3-mm1 kernbench
Date: Wed, 18 Feb 2004 03:45:35 +1100 [thread overview]
Message-ID: <4032452F.2090001@cyberone.com.au> (raw)
In-Reply-To: <Pine.LNX.3.96.1040217110549.8209A-100000@gatekeeper.tmr.com>
Bill Davidsen wrote:
>On Tue, 17 Feb 2004, Nick Piggin wrote:
>
>
>
>>Bill, I have CC'ed your message without modification because Con is
>>not subscribed to the list. Even for people who are subscribed, the
>>convention on lkml is to reply to all.
>>
>>Anyway, the "no SMT" run is with CONFIG_SCHED_SMT turned off, P4 HT
>>is still on. This was my fault because I didn't specify clearly that
>>I wanted to see a run with hardware HT turned off, although these
>>numbers are still interesting.
>>
>>Con hasn't tried HT off AFAIK because we couldn't work out how to
>>turn it off at boot time! :(
>>
>
>The curse of the brain-dead BIOS :-(
>
>So does CONFIG_SCHED_SMT turned off mean not using more than one sibling
>per package, or just going back to using them poorly? Yes, I should go
>root through the code.
>
>
It just goes back to treating them the same as physical CPUs.
The option will eventually be removed.
>Clearly it would be good to get one more data point with HT off in BIOS,
>but from this data it looks as if the SMT stuff really helps little when
>the system is very heavily loaded (Nproc>=Nsibs), and does best when the
>load is around Nproc==Ncpu. At least as I read the data. The really
>interesting data would be the -j64 load without HT, using both schedulers.
>
>
The biggest problems with SMT happen when 1 < Nproc < Nsibs,
because any two processes that end up on siblings of the same
physical CPU leave another physical CPU idle, and the non-HT
scheduler can't detect or correct this.
At higher numbers of processes, you fill all virtual CPUs,
so physical CPUs don't become idle. You can still be smarter
about cache and migration costs though.
>I just got done looking at a mail server with HT, kept the load avg 40-70
>for a week. Speaks highly for the stability of RHEL-3.0, but I wouldn't
>mind a little more performance for free.
>
>
Not sure if they have any sort of HT-aware scheduler or not.
If they do, it is probably a shared-runqueues type, which is
much the same as sched domains in terms of functionality.
I don't think it would help much here, though.
Thread overview: 8+ messages
2004-02-16 12:59 [BENCHMARK] 2.6.3-rc2 v 2.6.3-rc3-mm1 kernbench Con Kolivas
2004-02-16 13:20 ` Nick Piggin
2004-02-16 14:30 ` Con Kolivas
2004-02-17 0:42 ` bill davidsen
2004-02-17 4:22 ` Nick Piggin
2004-02-17 16:19 ` Bill Davidsen
2004-02-17 16:45 ` Nick Piggin [this message]
2004-02-18 0:25 ` Con Kolivas