From: Andi Kleen <ak@muc.de>
To: Darren Hart <dvhltc@us.ibm.com>
Cc: lkml <linux-kernel@vger.kernel.org>, Andi Kleen <ak@muc.de>
Subject: Re: sched_domains and Stream benchmark
Date: Tue, 27 Apr 2004 23:03:52 +0200
Message-ID: <20040427210352.GA53718@colin2.muc.de>
In-Reply-To: <1083084439.2733.34.camel@farah>

On Tue, Apr 27, 2004 at 09:47:19AM -0700, Darren Hart wrote:
> On Mon, 2004-04-26 at 19:33, Andi Kleen wrote:
> > > I noticed your binary ran with N=2000000 which is only sufficient for a
> > > 2 proc 1 MB cache opteron box according to the documentation on the
> > 
> > It does not seem to make any difference. 
> 
> I was under the impression you didn't change the N value (array size)
> and ran the benchmark with someone else's precompiled binaries (the ones
> you sent me).  Did you have two binaries with different array sizes

Correct.  I always used N=2000000, but it did not seem to make any
difference in exposing the scheduler issues, even when going up to
4 CPUs.

(there were some fluctuations, but much less than the 25% you reported)
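
For reference, here is a back-of-the-envelope version of that sizing
check.  It is only a sketch, assuming 8-byte doubles, 1 MB of L2 per
Opteron, and the usual STREAM rule of thumb that each array should be
at least ~4x the total last-level cache:

	#include <stdio.h>

	int main(void)
	{
		long n = 2000000;                /* elements per array */
		long bytes = n * sizeof(double); /* ~15.3 MB per array */
		long cache = 4L << 20;           /* 4 CPUs x 1 MB L2 */
		double mb = 1 << 20;

		printf("array: %.1f MB, cache: %.1f MB, ratio: %.1fx\n",
		       bytes / mb, cache / mb, (double)bytes / cache);
		return 0;
	}

Each array comes out at about 15 MB, nearly 4x the combined 4 MB of L2
on a 4-way box, so an undersized array alone is unlikely to explain a
25% swing.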

> > > stream faq.  I also noticed wide variation in results (25% or so) when
> > > running with 4 threads on a 4 proc opteron on linux-2.6.5-mm5.  Can you
> > > provide me with the specs of the system you ran your tests on?
> > 
> > Yes, mm5 is still broken because it has the "tuned to numasaurus" numa
> > scheduler. Run it on a standard (non mm*) kernel or with Ingo's early 
> > load balance patch.
> 
> I ran it on 2.6.5, 2.6.5-mm5, and 2.6.5-mm5-flat-domains trying to
> reproduce the results you found (including the poor performance of
> virgin and mm) so that I can have some context while analyzing the
> sched_domains topology on x86_64 and its effects on performance.  So
> that I can see where the differences lie in our tests, could you please
> provide some of the specs of the system you ran on, such as number of
> procs, cache size, and amount of RAM?

I saw the issue on a range of systems, from 2 CPUs up to 4 CPUs.
All had enough per-CPU memory to fit the benchmark.
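
One way to take thread placement out of the picture when comparing
kernels is to pin each benchmark thread to a fixed CPU and rerun; if
the run-to-run variation disappears, the load balancer was migrating
threads off their node.  A minimal sketch using sched_setaffinity(2)
(not from this thread; pin_to_cpu is a made-up helper):

	#define _GNU_SOURCE
	#include <sched.h>
	#include <stdio.h>

	/* Pin the calling thread to a single CPU. */
	static int pin_to_cpu(int cpu)
	{
		cpu_set_t set;

		CPU_ZERO(&set);
		CPU_SET(cpu, &set);
		return sched_setaffinity(0, sizeof(set), &set); /* 0 = self */
	}

	int main(void)
	{
		if (pin_to_cpu(0))
			perror("sched_setaffinity");
		else
			printf("pinned to CPU 0\n");
		return 0;
	}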

-Andi

Thread overview: 11+ messages
     [not found] <1N7xQ-7fh-29@gated-at.bofh.it>
2004-04-20 18:58 ` sched_domains and Stream benchmark Andi Kleen
2004-04-26 22:30   ` Darren Hart
2004-04-27  2:33     ` Andi Kleen
2004-04-27  2:44       ` Nick Piggin
2004-04-27  2:48         ` Andi Kleen
2004-04-27  2:54           ` Ingo Molnar
2004-04-27 16:47       ` Darren Hart
2004-04-27 21:03         ` Andi Kleen [this message]
2004-04-28  9:47     ` Zoltan Menyhart
2004-04-28 17:20       ` Andi Kleen
2004-04-20 17:43 Darren Hart
