From: "Song Bao Hua (Barry Song)" <song.bao.hua@hisilicon.com>
To: Tim Chen <tim.c.chen@linux.intel.com>,
Peter Zijlstra <peterz@infradead.org>
Cc: "catalin.marinas@arm.com" <catalin.marinas@arm.com>,
"will@kernel.org" <will@kernel.org>,
"rjw@rjwysocki.net" <rjw@rjwysocki.net>,
"vincent.guittot@linaro.org" <vincent.guittot@linaro.org>,
"bp@alien8.de" <bp@alien8.de>,
"tglx@linutronix.de" <tglx@linutronix.de>,
"mingo@redhat.com" <mingo@redhat.com>,
"lenb@kernel.org" <lenb@kernel.org>,
"dietmar.eggemann@arm.com" <dietmar.eggemann@arm.com>,
"rostedt@goodmis.org" <rostedt@goodmis.org>,
"bsegall@google.com" <bsegall@google.com>,
"mgorman@suse.de" <mgorman@suse.de>,
"msys.mizuma@gmail.com" <msys.mizuma@gmail.com>,
"valentin.schneider@arm.com" <valentin.schneider@arm.com>,
"gregkh@linuxfoundation.org" <gregkh@linuxfoundation.org>,
Jonathan Cameron <jonathan.cameron@huawei.com>,
"juri.lelli@redhat.com" <juri.lelli@redhat.com>,
"mark.rutland@arm.com" <mark.rutland@arm.com>,
"sudeep.holla@arm.com" <sudeep.holla@arm.com>,
"aubrey.li@linux.intel.com" <aubrey.li@linux.intel.com>,
"linux-arm-kernel@lists.infradead.org"
<linux-arm-kernel@lists.infradead.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"linux-acpi@vger.kernel.org" <linux-acpi@vger.kernel.org>,
"x86@kernel.org" <x86@kernel.org>,
"xuwei (O)" <xuwei5@huawei.com>,
"Zengtao (B)" <prime.zeng@hisilicon.com>,
"guodong.xu@linaro.org" <guodong.xu@linaro.org>,
yangyicong <yangyicong@huawei.com>,
"Liguozhu (Kenneth)" <liguozhu@hisilicon.com>,
"linuxarm@openeuler.org" <linuxarm@openeuler.org>,
"hpa@zytor.com" <hpa@zytor.com>
Subject: RE: [Linuxarm] Re: [RFC PATCH v4 3/3] scheduler: Add cluster scheduler level for x86
Date: Mon, 8 Mar 2021 22:30:33 +0000 [thread overview]
Message-ID: <6d8940e227324c2c88474d9d0769c001@hisilicon.com> (raw)
In-Reply-To: <a8474bae-5d9a-8c0b-766a-7188ed71320b@linux.intel.com>
> -----Original Message-----
> From: Tim Chen [mailto:tim.c.chen@linux.intel.com]
> Sent: Thursday, March 4, 2021 7:34 AM
> To: Peter Zijlstra <peterz@infradead.org>; Song Bao Hua (Barry Song)
> <song.bao.hua@hisilicon.com>
> Cc: catalin.marinas@arm.com; will@kernel.org; rjw@rjwysocki.net;
> vincent.guittot@linaro.org; bp@alien8.de; tglx@linutronix.de;
> mingo@redhat.com; lenb@kernel.org; dietmar.eggemann@arm.com;
> rostedt@goodmis.org; bsegall@google.com; mgorman@suse.de;
> msys.mizuma@gmail.com; valentin.schneider@arm.com;
> gregkh@linuxfoundation.org; Jonathan Cameron <jonathan.cameron@huawei.com>;
> juri.lelli@redhat.com; mark.rutland@arm.com; sudeep.holla@arm.com;
> aubrey.li@linux.intel.com; linux-arm-kernel@lists.infradead.org;
> linux-kernel@vger.kernel.org; linux-acpi@vger.kernel.org; x86@kernel.org;
> xuwei (O) <xuwei5@huawei.com>; Zengtao (B) <prime.zeng@hisilicon.com>;
> guodong.xu@linaro.org; yangyicong <yangyicong@huawei.com>; Liguozhu (Kenneth)
> <liguozhu@hisilicon.com>; linuxarm@openeuler.org; hpa@zytor.com
> Subject: [Linuxarm] Re: [RFC PATCH v4 3/3] scheduler: Add cluster scheduler
> level for x86
>
>
>
> On 3/2/21 2:30 AM, Peter Zijlstra wrote:
> > On Tue, Mar 02, 2021 at 11:59:40AM +1300, Barry Song wrote:
> >> From: Tim Chen <tim.c.chen@linux.intel.com>
> >>
> >> There are x86 CPU architectures (e.g. Jacobsville) where L2 cache
> >> is shared among a cluster of cores instead of being exclusive
> >> to one single core.
> >
> > Isn't that most atoms one way or another? Tremont seems to have it per 4
> > cores, but earlier it was per 2 cores.
> >
>
> Yes, older Atoms have 2 cores sharing L2. I probably should
> rephrase my comments to not leave the impression that sharing
> L2 among cores is new for Atoms.
>
> Tremont-based Atom CPUs increase the possible load imbalance further,
> with 4 cores per L2 instead of 2. And with more cores overall on a die,
> the chance increases of packing running tasks onto a few clusters while
> leaving others empty on lightly/moderately loaded systems. We did see
> this effect on Jacobsville.
>
> So load balancing between the L2 clusters is more useful on
> Tremont-based Atom CPUs than on the older Atoms.
It seems sensible that the more CPUs we get in a cluster, the more
we need the kernel to be aware of its existence.

Tim, is it possible for you to bring up cpu_cluster_mask and
cluster_sibling for x86 so that the topology can be represented
in sysfs and used by the scheduler? It seems your patch lacks this
part.
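Once the topology is exposed, userspace sees it as the usual sysfs
cpulist strings. A minimal sketch of consuming them, assuming the
cluster_cpus_list attribute name from patch 1/3 of this series (the
path and the fallback sample string are illustrative, not guaranteed
to exist on a given kernel):

```python
def parse_cpulist(cpulist: str) -> list[int]:
    """Parse a sysfs-style cpulist such as "0-3,8" into sorted CPU ids."""
    cpus = []
    for chunk in cpulist.strip().split(","):
        if not chunk:
            continue
        if "-" in chunk:
            lo, hi = chunk.split("-")
            cpus.extend(range(int(lo), int(hi) + 1))
        else:
            cpus.append(int(chunk))
    return sorted(cpus)

# Hypothetical path based on patch 1/3 of this series; fall back to a
# sample string so the sketch runs on kernels without cluster support.
try:
    with open("/sys/devices/system/cpu/cpu0/topology/cluster_cpus_list") as f:
        cluster = parse_cpulist(f.read())
except OSError:
    cluster = parse_cpulist("0-3")  # e.g. one 4-core Tremont L2 cluster

print(cluster)
```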
BTW, I wonder if x86 could also improve your KMP_AFFINITY by
leveraging the cluster topology level:
https://software.intel.com/content/www/us/en/develop/documentation/cpp-compiler-developer-guide-and-reference/top/optimization-and-programming-guide/openmp-support/openmp-library-support/thread-affinity-interface-linux-and-windows.html

KMP_AFFINITY has thread affinity modes like compact and scatter;
it seems "compact" and "scatter" could also use the cluster
information, as we are struggling with the same "compact" vs.
"scatter" issues here in this patchset :-)
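To make the trade-off concrete, here is a toy sketch of what
"compact" vs. "scatter" placement would mean at the cluster level.
The two-cluster layout and the function names are made up for
illustration (Jacobsville-like: two 4-CPU L2 clusters); this is not
how the OpenMP runtime is actually implemented:

```python
# Illustrative topology: two L2 clusters of 4 CPUs each.
clusters = [[0, 1, 2, 3], [4, 5, 6, 7]]

def place_compact(clusters, nthreads):
    """Fill one cluster completely before spilling into the next,
    maximizing L2 sharing between sibling threads."""
    flat = [cpu for cluster in clusters for cpu in cluster]
    return flat[:nthreads]

def place_scatter(clusters, nthreads):
    """Round-robin threads across clusters, so each L2 serves as
    few threads as possible."""
    nthreads = min(nthreads, sum(len(c) for c in clusters))
    placement = []
    i = 0
    while len(placement) < nthreads:
        cluster = clusters[i % len(clusters)]
        idx = i // len(clusters)
        if idx < len(cluster):
            placement.append(cluster[idx])
        i += 1
    return placement

print(place_compact(clusters, 4))  # [0, 1, 2, 3]: one busy cluster
print(place_scatter(clusters, 4))  # [0, 4, 1, 5]: both clusters half-loaded
```

"compact" wins when the threads share data through the L2; "scatter"
wins when each thread wants its own slice of cache and bandwidth,
which is exactly the tension the scheduler faces in this patchset.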
Thanks
Barry
Thread overview: 14+ messages
2021-03-01 22:59 [RFC PATCH v4 0/3] scheduler: expose the topology of clusters and add cluster scheduler Barry Song
2021-03-01 22:59 ` [RFC PATCH v4 1/3] topology: Represent clusters of CPUs within a die Barry Song
2021-03-15 3:11 ` Song Bao Hua (Barry Song)
2021-03-15 10:52 ` Jonathan Cameron
2021-03-01 22:59 ` [RFC PATCH v4 2/3] scheduler: add scheduler level for clusters Barry Song
2021-03-02 10:43 ` Peter Zijlstra
2021-03-16 7:33 ` Song Bao Hua (Barry Song)
2021-03-08 11:25 ` Vincent Guittot
2021-03-08 22:15 ` Song Bao Hua (Barry Song)
2021-03-01 22:59 ` [RFC PATCH v4 3/3] scheduler: Add cluster scheduler level for x86 Barry Song
2021-03-02 10:30 ` Peter Zijlstra
2021-03-03 18:34 ` Tim Chen
2021-03-08 22:30 ` Song Bao Hua (Barry Song) [this message]
2021-03-15 20:53 ` [Linuxarm] " Tim Chen