public inbox for linux-kernel@vger.kernel.org
From: Andi Kleen <andi@firstfloor.org>
To: Andreas Herrmann <andreas.herrmann3@amd.com>
Cc: Andi Kleen <andi@firstfloor.org>, Ingo Molnar <mingo@elte.hu>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH 0/3] x86: adapt CPU topology detection for AMD Magny-Cours
Date: Tue, 5 May 2009 19:54:34 +0200	[thread overview]
Message-ID: <20090505175434.GP23223@one.firstfloor.org> (raw)
In-Reply-To: <20090505164733.GA2868@alberich.amd.com>

On Tue, May 05, 2009 at 06:47:33PM +0200, Andreas Herrmann wrote:
> > > > First I must say it's unclear to me if CPU topology is really generally
> > > > useful to export to the user.
> > > 
> > > I think it is useful.
> > 
> > You forgot to state for what?
> 
> You are kidding, aren't you? Isn't this obvious?
> Why shouldn't a user be interested in stuff like core_siblings and
> thread_siblings? Maybe a user wants to make scheduling decisions based
> on that and pin tasks accordingly?

My earlier point was that in this case they should pin based on cache topology
(= cost of cache line transfers within the cache hierarchy) or on memory
topology (cost of sharing data out of cache). CPU topology seems comparatively
unimportant next to the first two.
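Pinning based on cache topology can be done from userspace today by reading
the standard Linux sysfs cacheinfo files. A minimal sketch (the paths are the
usual cacheinfo layout; the function simply returns an empty dict on systems
where sysfs is unavailable):

```python
from pathlib import Path

def caches_shared_with(cpu=0):
    """Map each cache level of `cpu` to the list of CPUs sharing it."""
    base = Path(f"/sys/devices/system/cpu/cpu{cpu}/cache")
    shared = {}
    # Each indexN directory describes one cache (L1d, L1i, L2, L3, ...).
    for index in (sorted(base.glob("index*")) if base.is_dir() else []):
        try:
            level = (index / "level").read_text().strip()
            cpus = (index / "shared_cpu_list").read_text().strip()
        except OSError:
            continue  # file missing on this kernel; skip that cache
        shared[f"L{level}:{index.name}"] = cpus
    return shared

print(caches_shared_with(0))
```

A pinning tool would then bind a task to one `shared_cpu_list` so that
communicating threads stay within a shared cache.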

> > > (a) Are you saying that users have to check NUMA distances when they
> > >     want to pin tasks on certain CPUs?
> > 
> > CPU == core ? No you just bind to that CPU. Was that a trick question?
> 
> Of course that was no trick question -- at most a stupid typo. (Sometimes
> in Linux CPU == core.) So, no, sorry I meant "certain cores".
> (And I meant not pinning to one core but to a set of cores).

I suspect you meant pinning to a node :) numactl --cpunodebind=...
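`numactl --cpunodebind` restricts a task to all cores of a node; binding to an
arbitrary set of cores uses the same underlying sched_setaffinity(2) syscall.
A sketch via Python's `os` wrapper (the core numbers are illustrative; cores
the process is not allowed to use are filtered out first):

```python
import os

def pin_to_cores(cores):
    """Pin the current process to `cores`, returning the resulting mask."""
    available = os.sched_getaffinity(0)   # cores we may legally use
    target = set(cores) & available       # drop cores this machine lacks
    if target:
        os.sched_setaffinity(0, target)   # pid 0 == current process
    return os.sched_getaffinity(0)

print(pin_to_cores({0}))
```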

> 
> > > SLIT and SRAT are not sufficient.
> > > 
> > > The kernel 
> > 
> > Which part of the kernel?
> 
> I provided this info in my first reply to you. Here it is again:
> 
>   "The kernel needs to know this when accessing processor
>    configuration space, when accessing shared MSRs or for counting
>    northbridge specific events."

Wait, but only an "internal node" shares MSRs, and that is just
a node anyway, isn't it? So that code just needs to look for nodes.

OK, I think you were worried about NUMA being off, but even then providing
a "fake mini NUMA" seems inferior to just always providing
basic NUMA (even if it's all the same distance) for this case.

Anyway, if it really doesn't work to use nodes for this internally
(although I must admit it's not fully clear to me why not),
I think it's OK to add this information internally; the part
I object to is extending the already stretched cpuinfo interface
for it and exporting such information without designing a proper,
flexible, future-proof interface.
 
> To translate this for you. Potential users are
> - EDAC ;-)
> - other MCA related stuff (e.g. L3 cache index disable)

Surely that's handled with the existing cache topology.

> - performance monitoring
> - most probably everything that accesses processor configuration
>   space and shared MSRs

It's just the same as a NUMA node. Not different from old systems.
The code can just look that up.

> You didn't read all my mails regarding this topic.
> The patches fixup sibling information for Magny-Cours. This info is
> not only exposed to /proc/cpuinfo but also with cpu-topology
> information in sysfs. I don't see why
> /sys/devices/system/cpu/cpuX/topology is an old ad-hoc hack.

OK, I need to check that. I hope it doesn't do the graph hardcoding
like your cpuinfo patch, though. After all, sysfs is flexible
enough to express arbitrary graphs, and if something is moved
there it should be a flexible interface. Again, I'm not sure
it's really needed, though. Having three different topologies
for scheduling is likely not a very good idea in any case.
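For reference, the sysfs interface under discussion can be inspected directly.
A sketch that dumps the per-CPU topology files (directory layout from the
Linux cpu topology sysfs ABI; returns an empty dict where sysfs is absent):

```python
from pathlib import Path

def cpu_topology(cpu=0):
    """Read every file under cpuN/topology into a {name: value} dict."""
    base = Path(f"/sys/devices/system/cpu/cpu{cpu}/topology")
    if not base.is_dir():
        return {}
    topo = {}
    # Typical entries: physical_package_id, core_id, core_siblings,
    # thread_siblings (exact set depends on kernel version).
    for f in sorted(base.iterdir()):
        try:
            topo[f.name] = f.read_text().strip()
        except OSError:
            continue
    return topo

print(cpu_topology(0))
```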

-Andi
-- 
ak@linux.intel.com -- Speaking for myself only.


Thread overview: 17+ messages
2009-05-04 17:33 [PATCH 0/3] x86: adapt CPU topology detection for AMD Magny-Cours Andreas Herrmann
2009-05-04 17:34 ` [PATCH 1/3] x86: introduce cpuinfo->cpu_node_id to reflect topology of multi-node CPU Andreas Herrmann
2009-05-06 11:44   ` Ingo Molnar
2009-05-06 16:14     ` Andreas Herrmann
2009-05-04 17:36 ` [PATCH 2/3] x86: fixup topology detection for AMD " Andreas Herrmann
2009-05-04 17:37 ` [PATCH 3/3] x86: cacheinfo: fixup L3 cache information " Andreas Herrmann
2009-05-04 17:44 ` [PATCH 0/3] x86: adapt CPU topology detection for AMD Magny-Cours Andreas Herrmann
2009-05-04 20:16 ` Andi Kleen
2009-05-05  9:22   ` Andreas Herrmann
2009-05-05  9:35     ` Andi Kleen
2009-05-05 10:48       ` Andreas Herrmann
2009-05-05 12:02         ` Andi Kleen
2009-05-05 14:40           ` Andreas Herrmann
2009-05-05 15:31             ` Andi Kleen
2009-05-05 16:47               ` Andreas Herrmann
2009-05-05 17:54                 ` Andi Kleen [this message]
2009-05-08 14:28 ` Andreas Herrmann
