From: Mel Gorman <mgorman@suse.de>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Valentin Schneider <valentin.schneider@arm.com>,
Barry Song <song.bao.hua@hisilicon.com>,
catalin.marinas@arm.com, will@kernel.org, rjw@rjwysocki.net,
lenb@kernel.org, gregkh@linuxfoundation.org,
Jonathan.Cameron@huawei.com, mingo@redhat.com,
juri.lelli@redhat.com, vincent.guittot@linaro.org,
dietmar.eggemann@arm.com, rostedt@goodmis.org,
bsegall@google.com, mark.rutland@arm.com,
linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, linux-acpi@vger.kernel.org,
linuxarm@huawei.com, xuwei5@huawei.com, prime.zeng@hisilicon.com
Subject: Re: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
Date: Thu, 3 Dec 2020 09:49:14 +0000
Message-ID: <20201203094914.GE3306@suse.de>
In-Reply-To: <20201203092831.GH2414@hirez.programming.kicks-ass.net>
On Thu, Dec 03, 2020 at 10:28:31AM +0100, Peter Zijlstra wrote:
> On Tue, Dec 01, 2020 at 04:04:04PM +0000, Valentin Schneider wrote:
> >
> > Gating this behind this new config only leveraged by arm64 doesn't make it
> > very generic. Note that powerpc also has this newish "CACHE" level which
> > seems to overlap in function with your "CLUSTER" one (both are arch
> > specific, though).
> >
> > I think what you are after here is an SD_SHARE_PKG_RESOURCES domain walk,
> > i.e. scan CPUs by increasing cache "distance". We already have it in some
> > form, as we scan SMT & LLC domains; AFAICT LLC always maps to MC, except
> > for said powerpc's CACHE thingie.
>
> There's some intel chips with a smaller L2, but I don't think we ever
> bothered.
>
> There's also the extended topology stuff from Intel: SMT, Core, Module,
> Tile, Die, of which we've only partially used Die I think.
>
> Whatever we do, it might make sense to not all use different names.
>
> Also, I think Mel said he was cooking something for
> select_idle_balance().
>
> Also, I've previously posted patches that fold all the iterations into
> one, so it might make sense to revisit some of that if Mel also already
> didn't.
I didn't. The NUMA/lb reconciliation took most of my attention, and
right now I'm looking at select_idle_sibling again in preparation for
tracking the idle cpumask in a sensible fashion. The main idea I had in
mind was special-casing EPYC, as it has multiple small L3 caches that
may benefit from select_idle_sibling looking slightly beyond the LLC as
a search domain, but that work has not even started yet.
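
As an illustration only, here is a minimal user-space sketch (plain C,
not kernel code) of the "scan by increasing cache distance" idea being
discussed: try the SMT siblings first, then the cluster sharing a small
L2/L3, then the whole LLC, and pick the first idle CPU found. All of
the structures, names and the CPU layout below are made up for the
example and are not the kernel's actual select_idle_sibling code.

#include <stdbool.h>
#include <stdio.h>

#define MAX_CPUS 8

/* Hypothetical stand-in for one scheduling-domain level (SMT/CLUSTER/MC). */
struct level {
	const char *name;
	int cpus[MAX_CPUS + 1];		/* CPUs spanned, -1 terminated */
};

/* Toy idle state: CPUs 2, 4 and 7 are idle. */
static bool cpu_idle[MAX_CPUS] = {
	false, false, true, false, true, false, false, true,
};

/* Walk the levels from smallest to largest, return the first idle CPU. */
static int find_idle_cpu(const struct level *levels, int nr_levels)
{
	for (int d = 0; d < nr_levels; d++)
		for (int i = 0; levels[d].cpus[i] >= 0; i++) {
			int cpu = levels[d].cpus[i];

			if (cpu_idle[cpu]) {
				printf("idle CPU %d found at %s level\n",
				       cpu, levels[d].name);
				return cpu;
			}
		}

	return -1;	/* nothing idle anywhere: caller falls back */
}

int main(void)
{
	/* CPU 0's view: SMT sibling, then its cluster, then the whole LLC. */
	const struct level levels[] = {
		{ "SMT",     { 0, 1, -1 } },
		{ "CLUSTER", { 0, 1, 2, 3, -1 } },
		{ "MC",      { 0, 1, 2, 3, 4, 5, 6, 7, -1 } },
	};

	find_idle_cpu(levels, 3);
	return 0;
}

In the kernel the equivalent walk would be over the sched_domain
hierarchy built from the topology levels, which is roughly where the
patch under discussion wants to slot a CLUSTER level.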
--
Mel Gorman
SUSE Labs
Thread overview: 24+ messages
2020-12-01 2:59 [RFC PATCH v2 0/2] scheduler: expose the topology of clusters and add cluster scheduler Barry Song
2020-12-01 2:59 ` [RFC PATCH v2 1/2] topology: Represent clusters of CPUs within a die Barry Song
2020-12-01 16:03 ` Valentin Schneider
2020-12-02 9:55 ` Sudeep Holla
2020-12-01 2:59 ` [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters Barry Song
2020-12-01 16:04 ` Valentin Schneider
2020-12-03 9:28 ` Peter Zijlstra
2020-12-03 9:49 ` Mel Gorman [this message]
2020-12-03 9:57 ` Song Bao Hua (Barry Song)
2020-12-03 10:07 ` Peter Zijlstra
2020-12-02 8:27 ` Vincent Guittot
2020-12-02 9:20 ` Song Bao Hua (Barry Song)
2020-12-02 10:16 ` Vincent Guittot
2020-12-02 10:45 ` Song Bao Hua (Barry Song)
2020-12-02 10:48 ` Song Bao Hua (Barry Song)
2020-12-02 20:58 ` Song Bao Hua (Barry Song)
2020-12-03 9:03 ` Vincent Guittot
2020-12-03 9:11 ` Song Bao Hua (Barry Song)
2020-12-03 9:39 ` Vincent Guittot
2020-12-03 9:54 ` Vincent Guittot
2020-12-07 9:59 ` Song Bao Hua (Barry Song)
2020-12-07 15:29 ` Vincent Guittot
2020-12-09 11:35 ` Song Bao Hua (Barry Song)
2020-12-01 10:46 ` [RFC PATCH v2 0/2] scheduler: expose the topology of clusters and add cluster scheduler Dietmar Eggemann