From: Erich Focht <efocht@hpce.nec.com>
To: Andrew Morton <akpm@osdl.org>
Cc: nickpiggin@yahoo.com.au, mbligh@aracnet.com, mingo@elte.hu,
ak@suse.de, jun.nakajima@intel.com, ricklind@us.ibm.com,
linux-kernel@vger.kernel.org, kernel@kolivas.org,
rusty@rustcorp.com.au, anton@samba.org,
lse-tech@lists.sourceforge.net
Subject: Re: [Lse-tech] [patch] sched-domain cleanups, sched-2.6.5-rc2-mm2-A3
Date: Wed, 31 Mar 2004 20:59:23 +0200
Message-ID: <200403312059.23287.efocht@hpce.nec.com>
In-Reply-To: <20040330030242.56221bcf.akpm@osdl.org>
On Tuesday 30 March 2004 13:02, Andrew Morton wrote:
> Erich Focht <efocht@hpce.nec.com> wrote:
> > > And finally, HPC
> > > applications are the very ones that should be using CPU
> > > affinities because they are usually tuned quite tightly to the
> > > specific architecture.
> >
> > There are companies mainly selling NUMA machines for HPC (SGI?), so
> > this is not a niche market.
>
> It is niche in terms of number of machines and in terms of affected users.
> And the people who provide these machines have the resources to patch the
> scheduler if needs be.
Uhm, that depends on which CPUs you think of. I bet well more than half
of the Opterons and Itanium2 CPUs sold last year went into HPC. Certainly
not that many IA64s went into NUMA machines. But almost all Opterons ;-)
IBM's NUMA machines with Power CPUs are mainly sold with AIX into the
HPC market, and I don't recall having seen big HPC installations with HP
Superdome under Linux, not yet...? IBM sells x86-NUMA mostly into the
commercial market; the only big visible Linux-NUMA presence in HPC is SGI's
Altix. Most of the other NUMA machines go into HPC with other OSes, and
we don't care about them (yet?). So you're probably right about the
number of Linux-NUMA-HPC users, but this actually shows that
Linux-NUMA is currently not the ideal choice. We're working on it,
right?
> Correct me if I'm wrong, but what we have here is a situation where if we
> design the scheduler around the HPC requirement, it will work poorly in a
> significant number of other applications. And we don't see a way of fixing
> this without either a /proc/i-am-doing-hpc, or a config option, or
> requiring someone to carry an external patch, yes?
>
> If so then all of those seem reasonable options to me. We should optimise
> the scheduler for the common case, and that ain't HPC.
Yes! A per-process flag would be enough to give us the choice.
> If we agree that architecturally sched-domains _can_ satisfy the HPC
> requirement then I think that's good enough for now. I'd prefer that Ingo
> and Nick not have to bust a gut trying to get optimum HPC performance
> before the code is even merged up.
Sure. On the other hand, the benchmark Andi brought into the discussion
is very easy to understand, much easier than any Java monster. If the
scheduler doesn't have a knob for running it optimally, that's
disappointing.
> Do you agree that sched-domains is architected appropriately?
My current impression is: YES. My testing experience with it is
still very limited...
Regards,
Erich