public inbox for linux-kernel@vger.kernel.org
From: "Martin J. Bligh" <Martin.Bligh@us.ibm.com>
To: Erich Focht <focht@ess.nec.de>, Mike Kravetz <kravetz@us.ibm.com>
Cc: Jesse Barnes <jbarnes@sgi.com>, Peter Rival <frival@zk3.dec.com>,
	lse-tech@lists.sourceforge.net, linux-kernel@vger.kernel.org
Subject: Re: [Lse-tech] NUMA scheduling
Date: Mon, 25 Feb 2002 10:55:03 -0800	[thread overview]
Message-ID: <20940000.1014663303@flay> (raw)
In-Reply-To: <Pine.LNX.4.21.0202251737420.30318-100000@sx6.ess.nec.de>

> - The load_balancing() concept is different:
> 	- there are no special time intervals for balancing across pool
> 	boundaries, the need for this can occur very quickly and I
> 	have the feeling that 2*250ms is a long time for keeping the 
> 	nodes unbalanced. This means: each time load_balance() is called
> 	it _can_ balance across pool boundaries (but doesn't have to).

Imagine for a moment that there's a short spike in workload on one node.
By aggressively balancing across nodes, won't you incur a high cost
in terms of migrating all the cache data to the remote node (destroying
the cache on both the remote and local nodes), when it would be cheaper
to wait a few more ms and run on the local node? This is a
non-trivial problem to solve, and I'm not saying either approach is
correct, just that there are disadvantages to being too aggressive.
Perhaps it's architecture-dependent (I'm used to NUMA-Q, which has
caches on the interconnect, and a remote:local cache-miss access
speed ratio of about 20:1).
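
To put rough numbers on that trade-off, here's a back-of-envelope sketch.
All figures are invented for illustration (loosely shaped by a 20:1
remote:local miss ratio like NUMA-Q's), not measurements from any patch:

```python
# Cost of migrating a cache-warm task across nodes versus letting it
# wait out a short load spike on its local runqueue.
# All numbers are hypothetical.

LOCAL_MISS_NS = 200                    # local memory miss latency
REMOTE_MISS_NS = 20 * LOCAL_MISS_NS    # ~20:1 across the interconnect
WARM_LINES = 16_384                    # warm cache footprint: 16K lines (~1 MB)

def migrate_cost_ms(warm_lines=WARM_LINES):
    """After migration, every warm line is refetched from the now-remote node."""
    return warm_lines * REMOTE_MISS_NS / 1e6

def wait_cost_ms(spike_ms=5):
    """Cost of just riding out the spike on the busy local runqueue."""
    return spike_ms

print(f"migrate: {migrate_cost_ms():.1f} ms, wait: {wait_cost_ms():.1f} ms")
```

With those invented numbers, refilling the cache remotely costs an order
of magnitude more than waiting out a 5 ms spike, which is the worry
about balancing across nodes too eagerly.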

> Would be interesting to hear opinions on initial balancing. What are the
> pros and cons of balancing at do_fork() or do_execve()? And it would be
> interesting to learn about other approaches, too...

Presumably exec-time balancing is cheaper, since there are fewer shared
pages to be bounced around between nodes, but it's less effective if the
main load on the machine is one large daemon app that just forks a few
copies of itself ... I would have thought that'd get sorted out a little
later by the background rebalancing anyway?
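
As a toy illustration of why exec-time is the cheaper balance point
(the page counts are invented, just to make the argument concrete):

```python
# Toy model: at fork() the child still shares the parent's address
# space copy-on-write, so moving it means its shared text/data start
# faulting in remotely; at execve() the old mappings are discarded,
# so there is almost nothing warm to drag across the interconnect.

def pages_to_migrate(balance_point, parent_pages=10_000):
    """Rough count of pages that would fault remotely after a move."""
    if balance_point == "fork":
        # COW-shared pages from the parent would all refill remotely.
        return parent_pages
    if balance_point == "exec":
        # The old address space is thrown away; the fresh image
        # faults in wherever the task lands anyway.
        return 0
    raise ValueError(balance_point)

print(pages_to_migrate("fork"), pages_to_migrate("exec"))
```

For a daemon that forks without ever exec'ing, only the expensive
fork-time option exists, which is why background rebalancing would have
to clean up after it later.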

M.



Thread overview: 19+ messages
2002-02-22 18:56 NUMA scheduling Mike Kravetz
2002-02-22 19:14 ` [Lse-tech] " Jesse Barnes
2002-02-22 19:29   ` Peter Rival
2002-02-22 23:59 ` Mike Kravetz
2002-02-25 18:32 ` Erich Focht
2002-02-25 18:55   ` Martin J. Bligh [this message]
2002-02-25 19:03     ` Larry McVoy
2002-02-25 19:28       ` Davide Libenzi
2002-02-25 19:45         ` Davide Libenzi
2002-02-25 19:35       ` Timothy D. Witham
2002-02-25 19:49       ` Bill Davidsen
2002-02-25 20:02         ` Larry McVoy
2002-02-25 20:18           ` Davide Libenzi
2002-02-26  5:14           ` Bill Davidsen
2002-02-25 23:35     ` [Lse-tech] [rebalance at: do_fork() vs. do_execve()] " Andy Pfiffer
2002-02-26 10:33     ` [Lse-tech] " Erich Focht
2002-02-26 15:30       ` Martin J. Bligh
2002-02-27 16:56         ` Erich Focht
2002-02-26 19:03       ` Mike Kravetz
