From: Dave Hansen <dave@sr71.net>
To: Peter Zijlstra <peterz@infradead.org>, Ingo Molnar <mingo@kernel.org>
Cc: Chuck Ebbert <cebbert.lkml@gmail.com>,
linux-kernel@vger.kernel.org, borislav.petkov@amd.com,
andreas.herrmann3@amd.com, hpa@linux.intel.com,
ak@linux.intel.com
Subject: Re: [PATCH] x86: Consider multiple nodes in a single socket to be "sane"
Date: Tue, 16 Sep 2014 09:36:02 -0700 [thread overview]
Message-ID: <541866F2.4020108@sr71.net> (raw)
In-Reply-To: <20140916155928.GA2848@worktop.localdomain>
On 09/16/2014 08:59 AM, Peter Zijlstra wrote:
> On Tue, Sep 16, 2014 at 08:44:03AM +0200, Ingo Molnar wrote:
>> Note that that's not really a 'NUMA node' in the way lots of
>> places in the kernel assume it: permanent placement asymmetry
>> (and access cost asymmetry) of RAM.
>
> Agreed, that is not NUMA, both groups will have the exact same local
> DRAM latency (unlike the AMD thing which has two memory busses on the
> single package, and therefore really has two nodes on a single chip).
I don't think this is correct.
From my testing, each ring of CPUs has a "close" and "far" memory
controller in the socket.
> This also means the CoD thing sets up the NUMA masks incorrectly.
I used this publicly-available Intel tool:
https://software.intel.com/en-us/articles/intelr-memory-latency-checker
and ran various combinations, pinning the latency checker to different
CPUs and NUMA nodes.
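(As an aside, not part of the original measurements: the kernel exposes the distance table it parsed from the SLIT via sysfs, in /sys/devices/system/node/nodeN/distance, so you can dump what the running kernel believes without any special tooling. A minimal sketch:)

```python
import glob

def read_slit():
    """Return the kernel's node distance matrix, one list of ints per node.

    Each /sys/devices/system/node/nodeN/distance file holds row N of the
    SLIT-derived distance matrix.  Returns an empty list on systems where
    the sysfs nodes are not present.
    """
    rows = []
    for path in sorted(glob.glob("/sys/devices/system/node/node*/distance")):
        with open(path) as f:
            rows.append([int(x) for x in f.read().split()])
    return rows

print(read_slit())
```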
Here's what I think the SLIT table should look like with cluster-on-die
disabled. There is one node per socket and the latency to the other
node is 1.5x the latency to the local node:

  *   0   1
  0  10  15
  1  15  10

or, measured in ns:

  *    0    1
  0   76  119
  1  114   76
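(The ~1.5x claim can be sanity-checked directly from the measured numbers; a trivial sketch, using the ns values above:)

```python
# Remote-to-local latency ratio from the measured single-node-per-socket
# numbers above: local ~76 ns, remote 119 ns / 114 ns.
local_ns = 76.0
remote_ns = {"node0 -> node1": 119.0, "node1 -> node0": 114.0}

for path, ns in remote_ns.items():
    # Both directions come out close to the 1.5x encoded in the SLIT.
    print(path, round(ns / local_ns, 2))
```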
Enabling cluster-on-die, we get 4 nodes. The local memory in the same
socket gets faster, and remote memory in the same socket gets both
absolutely and relatively slower:

  *   0   1   2   3
  0  10  20  26  26
  1  20  10  26  26
  2  26  26  10  20
  3  26  26  20  10

and in ns:

  *      0      1      2      3
  0   74.8  152.3  190.6  200.4
  1  146.2   75.6  190.8  200.6
  2  185.1  195.5   74.5  150.1
  3  186.6  195.6  147.3   75.6
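(To make the relationship between the two tables explicit, here's a small sketch, not from the original mail, that rescales each measured row so the local entry becomes the SLIT convention of 10. The results land near, but not exactly on, the firmware's 20/26 entries, which is consistent with the numbers being approximate:)

```python
# Cluster-on-die latency matrix (ns) from the measurements above.
lat_ns = [
    [ 74.8, 152.3, 190.6, 200.4],
    [146.2,  75.6, 190.8, 200.6],
    [185.1, 195.5,  74.5, 150.1],
    [186.6, 195.6, 147.3,  75.6],
]

def slit_from_latency(lat):
    """Scale each row so the local (diagonal) entry is 10, SLIT-style."""
    return [[round(10 * cell / row[i]) for cell in row]
            for i, row in enumerate(lat)]

for row in slit_from_latency(lat_ns):
    print(row)
```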
So I think it really is reasonable to say that there are 2 NUMA nodes in
a socket.
BTW, these numbers are only approximate. They were not run under
particularly controlled conditions and I don't even remember what kernel
they were under.
Thread overview: 19+ messages
2014-09-15 22:26 [PATCH] x86: Consider multiple nodes in a single socket to be "sane" Dave Hansen
2014-09-16 3:29 ` Peter Zijlstra
2014-09-16 6:38 ` Chuck Ebbert
2014-09-16 6:44 ` Ingo Molnar
2014-09-16 7:03 ` Chuck Ebbert
2014-09-16 7:05 ` Ingo Molnar
2014-09-16 16:01 ` Peter Zijlstra
2014-09-16 16:46 ` Dave Hansen
2014-09-16 15:59 ` Peter Zijlstra
2014-09-16 16:36 ` Dave Hansen [this message]
2014-09-16 8:17 ` Dave Hansen
2014-09-16 10:07 ` Heiko Carstens
2014-09-16 17:58 ` Peter Zijlstra
2014-09-16 23:49 ` Dave Hansen
2014-09-17 22:57 ` Peter Zijlstra
2014-09-18 0:33 ` Dave Hansen
2014-09-17 12:55 ` Borislav Petkov
2014-09-18 7:32 ` Borislav Petkov
2014-09-16 16:59 ` Brice Goglin