public inbox for linux-kernel@vger.kernel.org
From: "Martin J. Bligh" <mbligh@aracnet.com>
To: Erich Focht <efocht@hpce.nec.com>, Ingo Molnar <mingo@elte.hu>
Cc: Andi Kleen <ak@suse.de>, "Nakajima, Jun" <jun.nakajima@intel.com>,
	Rick Lindsley <ricklind@us.ibm.com>,
	piggin@cyberone.com.au, linux-kernel@vger.kernel.org,
	akpm@osdl.org, kernel@kolivas.org, rusty@rustcorp.com.au,
	anton@samba.org, lse-tech@lists.sourceforge.net
Subject: Re: [Lse-tech] [patch] sched-domain cleanups, sched-2.6.5-rc2-mm2-A3
Date: Wed, 31 Mar 2004 13:33:02 -0800	[thread overview]
Message-ID: <289170000.1080768782@flay> (raw)
In-Reply-To: <200403312323.00944.efocht@hpce.nec.com>

> On Tuesday 30 March 2004 17:01, Martin J. Bligh wrote:
>> > I don't think it's worth waiting and hoping that somebody will show up
>> > with a magic algorithm that balances every kind of job optimally.
>> 
>> Especially as I don't believe that exists ;-) It's not deterministic.
> 
> Right, so let's choose the initial balancing policy on a per process
> basis.

Yup, that seems like a reasonable thing to do. That way you can override
it for things that fork and never exec, if they're performance critical
(like HPC maybe).
 
>> > Benchmarks simulating "user work" like SPECsdet, kernel compile, AIM7
>> > are not relevant for HPC. In a compute center it actually doesn't
>> > matter much whether some shell command returns 10% faster, it just
>> > shouldn't disturb my super simulation code for which I bought an
>> > expensive NUMA box.
>> 
>> OK, but the scheduler can't know the difference automatically, I don't
>> think ... and whether we should tune the scheduler for "user work" or
>> HPC is going to be a hotly contested point ;-) We need to try to find
>> something that works for both. And suppose you have a 4 node system,
>> with 4 HPC apps running? Surely you want each app to have one node to
>> itself?
> 
> If the machine is 100% full all the time and all apps demand the same
> amount of bandwidth, yes, I want 1 job per node. If the average load is
> less than 100% (sometimes only 2-3 jobs are running) then I'd prefer to
> spread the processes of a job across the machine. The average bandwidth
> per process will be higher. Modern NUMA machines have high bandwidth to
> neighboring nodes and only modest latency penalties for remote accesses.

In theory at least, doing the rebalance_on_clone if and only if there are
idle CPUs on another node sounds reasonable. In practice, I'm not sure
how well that will work, since one app may well start wholly before another,
but maybe we can figure out something smart to do.

M.


