From: Michael Neuling <mikey@neuling.org>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Matt Fleming <matt@codeblueprint.co.uk>,
mingo@kernel.org, linux-kernel@vger.kernel.org, clm@fb.com,
mgalbraith@suse.de, tglx@linutronix.de, fweisbec@gmail.com,
srikar@linux.vnet.ibm.com, anton@samba.org,
oliver <oohall@gmail.com>,
"Shreyas B. Prabhu" <shreyas@linux.vnet.ibm.com>
Subject: Re: [RFC][PATCH 4/7] sched: Replace sd_busy/nr_busy_cpus with sched_domain_shared
Date: Thu, 12 May 2016 21:07:52 +1000
Message-ID: <1463051272.28449.59.camel@neuling.org>
In-Reply-To: <20160512050750.GK3192@twins.programming.kicks-ass.net>
On Thu, 2016-05-12 at 07:07 +0200, Peter Zijlstra wrote:
> On Thu, May 12, 2016 at 12:05:37PM +1000, Michael Neuling wrote:
> >
> > On Wed, 2016-05-11 at 20:24 +0200, Peter Zijlstra wrote:
> > >
> > > On Wed, May 11, 2016 at 02:33:45PM +0200, Peter Zijlstra wrote:
> > > >
> > > >
> > > > Hmm, PPC folks; what does your topology look like?
> > > >
> > > > Currently your sched_domain_topology, as per arch/powerpc/kernel/smp.c
> > > > seems to suggest your cores do not share cache at all.
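For reference, IIRC the topology table in arch/powerpc/kernel/smp.c
currently looks roughly like the sketch below (simplified, omitting the
asym-SMT handling), so only the SMT level carries SD_SHARE_PKG_RESOURCES
and nothing above the core is described as sharing cache:

/* Simplified sketch of the current powerpc_topology[]; only the SMT
 * level is flagged as sharing package resources (i.e. cache). */
static int powerpc_smt_flags(void)
{
	return SD_SHARE_CPUCAPACITY | SD_SHARE_PKG_RESOURCES;
}

static struct sched_domain_topology_level powerpc_topology[] = {
#ifdef CONFIG_SCHED_SMT
	{ cpu_smt_mask, powerpc_smt_flags, SD_INIT_NAME(SMT) },
#endif
	{ cpu_cpu_mask, SD_INIT_NAME(DIE) },
	{ NULL, },
};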
> > > >
> > > > https://en.wikipedia.org/wiki/POWER7 seems to agree and states
> > > >
> > > > "4 MB L3 cache per C1 core"
> > > >
> > > > And http://www-03.ibm.com/systems/resources/systems_power_software_i_perfmgmt_underthehood.pdf
> > > > also explicitly draws pictures with the L3 per core.
> > > >
> > > > _however_, that same document describes L3 inter-core fill and
> > > > lateral cast-out, which sounds like the L3s work together to form a
> > > > node-wide caching system.
> > > >
> > > > Do we want to model these co-operating L3 slices as a sort of
> > > > node-wide LLC for the purposes of the scheduler?
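One way to do that (purely a sketch on my part; cpu_chip_mask() is a
made-up name standing in for "all cores on this chip") would be an extra
topology level between SMT and DIE that spans the chip and carries
SD_SHARE_PKG_RESOURCES:

/* Hypothetical sketch only, not a real patch. */
static int powerpc_shared_cache_flags(void)
{
	return SD_SHARE_PKG_RESOURCES;
}

static struct sched_domain_topology_level powerpc_topology[] = {
#ifdef CONFIG_SCHED_SMT
	{ cpu_smt_mask, powerpc_smt_flags, SD_INIT_NAME(SMT) },
#endif
	{ cpu_chip_mask, powerpc_shared_cache_flags, SD_INIT_NAME(CACHE) },
	{ cpu_cpu_mask, SD_INIT_NAME(DIE) },
	{ NULL, },
};

That would make sd_llc (the scheduler's idea of the last-level cache
domain) span the whole chip rather than a single core.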
> > > Going back a generation; Power6 seems to have a shared L3 (off
> > > package) between the two cores on the package. The current topology
> > > does not reflect that at all.
> > >
> > > And going forward a generation; Power8 seems to share the per-core
> > > (chiplet) L3 amongst all cores (chiplets) + it has the Centaur
> > > (memory controller) 16M L4.
> > Yep, L1/L2/L3 are per core on POWER8 and POWER7. POWER6 and POWER5
> > (both dual-core chips) had a shared off-chip cache.
> But as per the above, Power7 and Power8 have explicit logic to share the
> per-core L3 with the other cores.
>
> How effective is that? From some of the slides/documents I've looked at,
> the L3s are connected with a high-speed fabric, suggesting that the
> cross-core sharing should be fairly efficient.
I'm not sure. I thought it was mostly private, but if a core was sleeping
or not under much cache pressure, another core could use its L3 for some
things. But I'm fuzzy on the exact properties, sorry.
> In which case it would make sense to treat/model the combined L3 as a
> single large LLC covering all cores.
Are you thinking it would be much cheaper to migrate a task to another core
inside this chip than to a core on another chip?
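(My understanding of why the LLC span matters, as a heavily simplified
sketch with a made-up helper name rather than the real select_idle_sibling()
code: the wakeup idle-CPU search is bounded by sd_llc's span, so a chip-wide
LLC would let a waking task land on any idle core of the chip.)

static int pick_idle_cpu_in_llc(struct task_struct *p, int target)
{
	struct sched_domain *sd = rcu_dereference(per_cpu(sd_llc, target));
	int cpu;

	if (!sd)
		return target;

	/* Only CPUs inside the LLC domain's span are considered. */
	for_each_cpu_and(cpu, sched_domain_span(sd), tsk_cpus_allowed(p)) {
		if (idle_cpu(cpu))
			return cpu;
	}

	return target;
}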
Mikey