public inbox for linux-kernel@vger.kernel.org
From: "J.A. Magallon" <jamagallon@able.es>
To: Scott Robert Ladd <scott@coyotegulch.com>
Cc: Linux Kernel Mailing List <linux-kernel@vger.kernel.org>
Subject: Re: HT Benchmarks (was: /proc/cpuinfo and hyperthreading)
Date: Mon, 16 Dec 2002 23:38:48 +0100	[thread overview]
Message-ID: <20021216223848.GA2994@werewolf.able.es> (raw)
In-Reply-To: <FKEAJLBKJCGBDJJIPJLJKEMCDLAA.scott@coyotegulch.com>; from scott@coyotegulch.com on Mon, Dec 16, 2002 at 16:44:34 +0100


On 2002.12.16 Scott Robert Ladd wrote:
>Måns Rullgård wrote:
>> It's easy to write a program that displays any number of graphs
>> vaguely related to the system load.  How do we know that the
>> performance meter isn't lying?
>
>We don't.
>
>All I can say is that the performance meter seems (note the weasel-word)
>proper when running Win2K SMP on a dual PIII-933 box at one of my client
>sites. However, such experience does *not* guarantee that WinXP is reporting
>valid numbers for a P4 with HT.
>
>Here's a little test I ran this morning, now that my new system is
>operational. My benchmark is a full "make bootstrap" compile of gcc-3.2.1,
>with and without the -j 2 make switch that enables two threads of
>compilation. Using the 2.5.51 SMP kernel, I see the following compile times:
>
>  SMP     w/o  -j 2: 28m11s
>  "nosmp" with -j 2: 27m32s
>  SMP     with -j 2: 24m21s
>
>HT appears to give a very tiny benefit even without an SMP kernel -- and
>*with* an SMP kernel, I get a 16% improvement in my compile time. That
>pretty much matches my expectation (i.e., a HT processor is *not* equal to
>dual processor, but it *is* better than a non-HT processor).
>

HT can give no benefit in the UP case: nothing knows the sibling exists,
and the P4 does not parallelize itself. The gain you see is due to
computation-I/O overlap.

This is my render code, implemented with POSIX threads, running on a dual
P4-Xeon@1.8GHz. The work is just a walk through dynamic structures plus
floating-point calculation, no I/O. In this example the database is tiny,
so there is no swapping, and the box is 'all mine', with no other process
eating CPU.

Processes do not bounce between CPUs, and the HT-aware scheduler prefers
a processor in a different physical package when two CPU-intensive
threads are running, so in the 2-thread case they run on different
packages:

Number of threads   Elapsed time   User time   System time
1                      53:216        53:220       00:000
2                      29:272        58:180       00:320
3                      27:162      1:21:450       00:540
4                      25:094      1:41:080       01:250

Elapsed time is measured by the parent thread, which does nothing but
wait on pthread_join. User and system times are the sums over all the
child threads, which do the real work.

The jump from 1->2 threads is fine; the one from 2->4 is ridiculous...
I have doubled my CPUs, but each one gets half the floating-point
pipeline... see how the user CPU time increases due to 'worse'
processors and cache pollution within each package.
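One way to double-check which logical CPUs share a package (a sketch that
assumes a Linux /proc/cpuinfo with these fields and the taskset(1) tool;
'./render' is a hypothetical name for the test binary):

```shell
# List each logical CPU together with its physical package id;
# HT siblings share the same 'physical id'.
grep -E '^(processor|physical id)' /proc/cpuinfo

# Pin a 2-thread run to logical CPUs with different 'physical id'
# values (0 and 2 here are just an example layout):
# taskset -c 0,2 ./render
```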

So, IMHO and for my apps, HyperThreading is just a bad joke.

-- 
J.A. Magallon <jamagallon@able.es>      \                 Software is like sex:
werewolf.able.es                         \           It's better when it's free
Mandrake Linux release 9.1 (Cooker) for i586
Linux 2.4.20-jam1 (gcc 3.2 (Mandrake Linux 9.1 3.2-4mdk))
