From: Chuck Ebbert <cebbert.lkml@gmail.com>
To: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
Dave Hansen <dave@sr71.net>,
linux-kernel@vger.kernel.org, borislav.petkov@amd.com,
andreas.herrmann3@amd.com, hpa@linux.intel.com,
ak@linux.intel.com
Subject: Re: [PATCH] x86: Consider multiple nodes in a single socket to be "sane"
Date: Tue, 16 Sep 2014 02:03:00 -0500 [thread overview]
Message-ID: <20140916020300.5013b8f0@as> (raw)
In-Reply-To: <20140916064403.GC14807@gmail.com>
On Tue, 16 Sep 2014 08:44:03 +0200
Ingo Molnar <mingo@kernel.org> wrote:
>
> * Chuck Ebbert <cebbert.lkml@gmail.com> wrote:
>
> > On Tue, 16 Sep 2014 05:29:20 +0200
> > Peter Zijlstra <peterz@infradead.org> wrote:
> >
> > > On Mon, Sep 15, 2014 at 03:26:41PM -0700, Dave Hansen wrote:
> > > >
> > > > I'm getting the spew below when booting with Haswell (Xeon
> > > > E5-2699) CPUs and the "Cluster-on-Die" (CoD) feature
> > > > enabled in the BIOS.
> > >
> > > What is that cluster-on-die thing? I've heard it before but
> > > never could find anything on it.
> >
> > Each CPU has 2.5MB of L3 connected together in a ring that
> > makes it all act like a single shared cache. The HW tries to
> > place the data so it's closest to the CPU that uses it. On the
> > larger processors there are two rings with an interconnect
> > between them that adds latency if a cache fetch has to cross
> > that. CoD breaks that connection and effectively gives you two
> > nodes on one die.
>
> Note that that's not really a 'NUMA node' in the way lots of
> places in the kernel assume it: permanent placement asymmetry
> (and access cost asymmetry) of RAM.
>
> It's a new topology construct that needs new handling (and
> probably a new mask): Non Uniform Cache Architecture (NUCA)
> or so.
Hmm, looking closer at the diagram, each ring has its own memory
controller, so it really is NUMA if you break the interconnect
between the caches.
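As an aside, not from the thread itself: the situation being discussed, one physical socket whose cores end up in two different NUMA nodes when Cluster-on-Die is enabled, can be sketched with a small illustrative helper. The mapping below is hypothetical; on a real Linux system the package id comes from /sys/devices/system/cpu/cpu*/topology/physical_package_id and the node assignment from /sys/devices/system/node/.

```python
# Illustrative sketch (assumed, not from the thread): detect sockets
# that span more than one NUMA node, which is what Cluster-on-Die
# produces when the on-die interconnect between the two rings is cut.

from collections import defaultdict

def nodes_per_package(cpu_topology):
    """cpu_topology: dict mapping cpu id -> (package_id, node_id)."""
    pkgs = defaultdict(set)
    for cpu, (pkg, node) in cpu_topology.items():
        pkgs[pkg].add(node)
    return dict(pkgs)

def cod_packages(cpu_topology):
    """Return the set of package ids whose CPUs sit in more than one
    NUMA node -- i.e. sockets that look 'insane' to the old check."""
    return {pkg for pkg, nodes in nodes_per_package(cpu_topology).items()
            if len(nodes) > 1}

# Hypothetical example: one socket, CoD on, CPUs 0-1 on node 0 and
# CPUs 2-3 on node 1 -> two NUMA nodes on a single die.
topo = {0: (0, 0), 1: (0, 0), 2: (0, 1), 3: (0, 1)}
print(cod_packages(topo))  # -> {0}
```

This is only a model of the topology, not kernel code; the patch under discussion changes the kernel's own sanity check so that such a layout is accepted rather than warned about.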
Thread overview: 19+ messages
2014-09-15 22:26 [PATCH] x86: Consider multiple nodes in a single socket to be "sane" Dave Hansen
2014-09-16 3:29 ` Peter Zijlstra
2014-09-16 6:38 ` Chuck Ebbert
2014-09-16 6:44 ` Ingo Molnar
2014-09-16 7:03 ` Chuck Ebbert [this message]
2014-09-16 7:05 ` Ingo Molnar
2014-09-16 16:01 ` Peter Zijlstra
2014-09-16 16:46 ` Dave Hansen
2014-09-16 15:59 ` Peter Zijlstra
2014-09-16 16:36 ` Dave Hansen
2014-09-16 8:17 ` Dave Hansen
2014-09-16 10:07 ` Heiko Carstens
2014-09-16 17:58 ` Peter Zijlstra
2014-09-16 23:49 ` Dave Hansen
2014-09-17 22:57 ` Peter Zijlstra
2014-09-18 0:33 ` Dave Hansen
2014-09-17 12:55 ` Borislav Petkov
2014-09-18 7:32 ` Borislav Petkov
2014-09-16 16:59 ` Brice Goglin