public inbox for linux-ia64@vger.kernel.org
From: Erich Focht <efocht@hpce.nec.com>
To: Mark Goodwin <markgw@sgi.com>
Cc: Matthew Dobson <colpatch@us.ibm.com>,
	Jack Steiner <steiner@sgi.com>,
	Takayoshi Kochi <t-kochi@bq.jp.nec.com>,
	linux-ia64@vger.kernel.org, LKML <linux-kernel@vger.kernel.org>
Subject: Re: Externalize SLIT table
Date: Wed, 10 Nov 2004 18:45:33 +0000	[thread overview]
Message-ID: <200411101945.34003.efocht@hpce.nec.com> (raw)
In-Reply-To: <Pine.LNX.4.61.0411101532350.15897@woolami.melbourne.sgi.com>

On Wednesday 10 November 2004 06:05, Mark Goodwin wrote:
> 
> On Tue, 9 Nov 2004, Matthew Dobson wrote:
> > On Tue, 2004-11-09 at 12:34, Mark Goodwin wrote:
> >> Once again however, it depends on the definition of distance. For nodes,
> >> we've established it's the ACPI SLIT (relative distance to memory). For
> >> cpus, should it be distance to memory? Distance to cache? Registers? Or
> >> what?
> >>
> > That's the real issue.  We need to agree upon a meaningful definition of
> > CPU-to-CPU "distance".  As Jesse mentioned in a follow-up, we can all
> > agree on what Node-to-Node "distance" means, but there doesn't appear to
> > be much consensus on what CPU "distance" means.
> 
> How about we define cpu-distance to be "relative distance to the
> lowest level cache on another CPU".

Several definitions are possible, which is really a source of
confusion. Any of them can be reconstructed if one has access to the
constituents: node-to-node latency (SLIT) and cache-to-cache
latencies. The latter aren't available and would anyhow be better
placed in something like /proc/cpuinfo: they are CPU or package
specific and have nothing to do with NUMA.

> On a system that has nodes with multiple sockets (each supporting
> multiple cores or HT "CPUs" sharing some level of cache), when the
> scheduler needs to migrate a task it would first choose a CPU
> sharing the same cache, then a CPU on the same node, then an
> off-node CPU (i.e. falling back to node distance).

This should be done by correctly setting up the sched domains. It's
not a question of exporting useless or redundant information to user
space.

The need for some (any) cpu-to-cpu metric, initially brought up by
Jack, seemed mainly motivated by existing user-space tools for
constructing cpusets (maybe in PBS). I think it is a tolerable effort
to introduce in user space an inlined function or macro doing
something like
   cpu_metric(i,j) := node_metric(cpu_node(i),cpu_node(j))

This keeps the kernel free of misleading information which would at
best make cpuset construction slightly more comfortable. In user
space you have the full freedom to refine your metric when you get
more details about the next generation of CPUs.

Regards,
Erich



Thread overview: 28+ messages
2004-11-03 20:56 Externalize SLIT table Jack Steiner
2004-11-04  1:59 ` Takayoshi Kochi
2004-11-04  4:07   ` Andi Kleen
2004-11-04  4:57     ` Takayoshi Kochi
2004-11-04  6:37       ` Andi Kleen
2004-11-05 16:08       ` Jack Steiner
2004-11-05 16:26         ` Andreas Schwab
2004-11-05 16:44           ` Jack Steiner
2004-11-06 11:50             ` Christoph Hellwig
2004-11-06 12:48               ` Andi Kleen
2004-11-06 13:07                 ` Christoph Hellwig
2004-11-05 17:13         ` Erich Focht
2004-11-05 19:13           ` Jack Steiner
2004-11-09 19:23     ` Matthew Dobson
2004-11-04 14:13   ` Jack Steiner
2004-11-04 14:29     ` Andi Kleen
2004-11-04 15:31     ` Erich Focht
2004-11-04 17:04       ` Andi Kleen
2004-11-04 19:36         ` Jack Steiner
2004-11-09 19:45         ` Matthew Dobson
2004-11-09 19:43       ` Matthew Dobson
2004-11-09 20:34         ` Mark Goodwin
2004-11-09 22:00           ` Jesse Barnes
2004-11-09 23:58           ` Matthew Dobson
2004-11-10  5:05             ` Mark Goodwin
2004-11-10 18:45               ` Erich Focht [this message]
2004-11-10 22:09                 ` Matthew Dobson
2004-11-18 16:39 ` Jack Steiner
