public inbox for linux-kernel@vger.kernel.org
From: Mike Galbraith <efault@gmx.de>
To: Linda Walsh <lkml@tlinx.org>
Cc: Linux-Kernel <linux-kernel@vger.kernel.org>
Subject: Re: 2 physical-cpu (like 2x6core) config and NUMA?
Date: Tue, 18 Sep 2012 07:05:51 +0200	[thread overview]
Message-ID: <1347944751.7002.30.camel@marge.simpson.net> (raw)
In-Reply-To: <5057654A.803@tlinx.org>

On Mon, 2012-09-17 at 11:00 -0700, Linda Walsh wrote: 
> I was wondering: on dual-processor motherboards, Intel uses dedicated memory
> for each cpu (6 memchips in the X5XXX series), and to access the memory
> attached to the other chip's cores, the data has to be transferred over
> the QPI bus.
> 
> So wouldn't it be of benefit if such dual-chip configurations were to
> be set up as 'NUMA', since there is a higher cost to migrating
> memory/processes between cores on different chips vs. on the same chip?
> 
> I note from 'cpupower -c all frequency-info' that the "odd" cpu cores
> all have to run at the same clock frequency, and the "even" ones all have
> to run together, which I take to mean that the odd-numbered cores are
> on 1 chip and the even-numbered cores are on the other chip.
> 
> Since the QPI path is limited and appears to be slower than the local memory
> access rate, wouldn't it be appropriate if 2-cpu-chip setups were configured
> as 2 NUMA nodes?
> 
> Although -- I have no clue how the memory space is divided between the
> two chips -- i.e. I don't know, if say I have 24G on each, whether they
> alternate 4G in the physical address space or what (that would all be
> handled (or mapped) before the chips come up, so it could be contiguous).
> 
> 
> Does the kernel support scheduling based on the different speeds of
> memory access "on die" vs. "off die"?   I was surprised to see
> that it viewed my system as 1 NUMA node with all 12 cores on 1 node -- when
> I know that it is physically organized as 2x6.

Yeah, the scheduler will set up for NUMA if the ACPI SRAT says the box
is NUMA.
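
A quick way to check is to see whether the firmware actually handed the
kernel an SRAT, and how many nodes the kernel built from it.  Something
along these lines should do it (assuming the boot messages are still in
the dmesg buffer and numactl is installed):

  dmesg | grep -i -e SRAT -e NUMA   # did ACPI publish an SRAT, and did the kernel use it?
  numactl --hardware                # nodes, cpus and ram per node, as the kernel sees them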

I have a 64 core DL980 box that numactl --hardware says is a single
node, but that's due to ram truly _existing_ only on one node.   Not a
wonderful (or even supported) setup.

If ram isn't physically plugged into the right spots, or some BIOS
option makes the box appear to be a single node, that's what you'll see
too: (maybe SIBLING,) MC and CPU scheduler domains, but no NUMA domain.
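
If you want to see which domains the scheduler actually built, and which
cores sit on which package, something like this is a reasonable sanity
check (the sched_domain names are only exported when the kernel is built
with CONFIG_SCHED_DEBUG, so the second path may not exist on your kernel):

  grep . /sys/devices/system/cpu/cpu*/topology/physical_package_id  # core -> socket mapping
  grep . /proc/sys/kernel/sched_domain/cpu0/domain*/name            # e.g. SIBLING, MC, CPU, NUMA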

-Mike



Thread overview: 4+ messages
2012-09-17 18:00 2 physical-cpu (like 2x6core) config and NUMA? Linda Walsh
2012-09-18  5:05 ` Mike Galbraith [this message]
2012-09-18  6:55 ` Jike Song
2012-09-18 11:04   ` Linda Walsh
