From: Zhao Liu <zhao1.liu@intel.com>
To: "Daniel P. Berrangé" <berrange@redhat.com>
Cc: "Eduardo Habkost" <eduardo@habkost.net>,
"Marcel Apfelbaum" <marcel.apfelbaum@gmail.com>,
"Philippe Mathieu-Daudé" <philmd@linaro.org>,
"Yanan Wang" <wangyanan55@huawei.com>,
"Michael S . Tsirkin" <mst@redhat.com>,
"Paolo Bonzini" <pbonzini@redhat.com>,
"Richard Henderson" <richard.henderson@linaro.org>,
"Eric Blake" <eblake@redhat.com>,
"Markus Armbruster" <armbru@redhat.com>,
"Marcelo Tosatti" <mtosatti@redhat.com>,
"Alex Bennée" <alex.bennee@linaro.org>,
"Peter Maydell" <peter.maydell@linaro.org>,
"Jonathan Cameron" <Jonathan.Cameron@huawei.com>,
"Sia Jee Heng" <jeeheng.sia@starfivetech.com>,
qemu-devel@nongnu.org, kvm@vger.kernel.org,
qemu-riscv@nongnu.org, qemu-arm@nongnu.org,
"Zhenyu Wang" <zhenyu.z.wang@intel.com>,
"Dapeng Mi" <dapeng1.mi@linux.intel.com>,
"Yongwei Ma" <yongwei.ma@intel.com>
Subject: Re: [RFC v2 0/7] Introduce SMP Cache Topology
Date: Tue, 4 Jun 2024 23:31:11 +0800 [thread overview]
Message-ID: <Zl8zP4pXjRmVs3VP@intel.com> (raw)
In-Reply-To: <Zl7ea2o2aaxXMj9X@redhat.com>
Hi Daniel,
On Tue, Jun 04, 2024 at 10:29:15AM +0100, Daniel P. Berrangé wrote:
> Date: Tue, 4 Jun 2024 10:29:15 +0100
> From: "Daniel P. Berrangé" <berrange@redhat.com>
> Subject: Re: [RFC v2 0/7] Introduce SMP Cache Topology
>
> On Thu, May 30, 2024 at 06:15:32PM +0800, Zhao Liu wrote:
> > Hi,
> >
> > Now that the i386 cache model has been able to define the topology
> > clearly, it's time to move on to discussing/advancing the feature of
> > configuring the cache topology via -smp, as in the following example:
> >
> > -smp 32,sockets=2,dies=2,modules=2,cores=2,threads=2,maxcpus=32,\
> > l1d-cache=core,l1i-cache=core,l2-cache=core,l3-cache=die
> >
> > With the new cache topology options ("l1d-cache", "l1i-cache",
> > "l2-cache" and "l3-cache"), we could adjust the cache topology via -smp.
>
> Switching to QAPI for a second, your proposal is effectively
>
> { 'enum': 'SMPCacheTopo',
>   'data': [ 'default', 'socket', 'die', 'cluster', 'module', 'core', 'thread' ] }
>
> { 'struct': 'SMPConfiguration',
>   'data': {
>     '*cpus': 'int',
>     '*drawers': 'int',
>     '*books': 'int',
>     '*sockets': 'int',
>     '*dies': 'int',
>     '*clusters': 'int',
>     '*modules': 'int',
>     '*cores': 'int',
>     '*threads': 'int',
>     '*maxcpus': 'int',
>     '*l1d-cache': 'SMPCacheTopo',
>     '*l1i-cache': 'SMPCacheTopo',
>     '*l2-cache': 'SMPCacheTopo',
>     '*l3-cache': 'SMPCacheTopo' } }
>
> I think that would be more natural to express as an array of structs
> thus:
>
> { 'enum': 'SMPCacheTopo',
>   'data': [ 'default', 'socket', 'die', 'cluster', 'module', 'core', 'thread' ] }
>
> { 'enum': 'SMPCacheType',
>   'data': [ 'l1d', 'l1i', 'l2', 'l3' ] }
>
> { 'struct': 'SMPCache',
>   'data': {
>     'type': 'SMPCacheType',
>     'topo': 'SMPCacheTopo' } }
>
> { 'struct': 'SMPConfiguration',
>   'data': {
>     '*cpus': 'int',
>     '*drawers': 'int',
>     '*books': 'int',
>     '*sockets': 'int',
>     '*dies': 'int',
>     '*clusters': 'int',
>     '*modules': 'int',
>     '*cores': 'int',
>     '*threads': 'int',
>     '*maxcpus': 'int',
>     'caches': [ 'SMPCache' ] } }
>
> Giving an example in (hypothetical) JSON cli syntax of:
>
> -smp "{ 'cpus': 32, 'sockets': 2, 'dies': 2, 'modules': 2,
>         'cores': 2, 'threads': 2, 'maxcpus': 32,
>         'caches': [
>           { 'type': 'l1d', 'topo': 'core' },
>           { 'type': 'l1i', 'topo': 'core' },
>           { 'type': 'l2',  'topo': 'core' },
>           { 'type': 'l3',  'topo': 'die' } ] }"
Thanks! This looks clean to me and I think it works.

Just one further question: for a case like this, where the configuration
must be expressed as a raw JSON string, does QEMU require that a simple,
command-line friendly variant also be provided to correspond to it?
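(For illustration only: QEMU's keyval grammar commonly flattens nested
structures into dotted keys on the command line, so one possible friendly
spelling of the caches array is sketched below. The "caches.N.type" /
"caches.N.topo" key names are hypothetical, not an existing QEMU
interface.)

```python
# Sketch: flatten the nested -smp JSON above into dotted keyval pairs,
# in the style QEMU uses for nested options elsewhere.  Not QEMU code;
# the dotted cache key names are an assumption for illustration.

def flatten_keyval(obj, prefix=""):
    """Recursively flatten dicts/lists into dotted key=value strings."""
    pairs = []
    if isinstance(obj, dict):
        items = [(str(k), v) for k, v in obj.items()]
    elif isinstance(obj, list):
        items = [(str(i), v) for i, v in enumerate(obj)]
    else:
        return [f"{prefix}={obj}"]
    for key, val in items:
        new_prefix = f"{prefix}.{key}" if prefix else key
        pairs.extend(flatten_keyval(val, new_prefix))
    return pairs

smp = {
    "cpus": 32, "sockets": 2, "dies": 2, "modules": 2,
    "cores": 2, "threads": 2, "maxcpus": 32,
    "caches": [
        {"type": "l1d", "topo": "core"},
        {"type": "l1i", "topo": "core"},
        {"type": "l2", "topo": "core"},
        {"type": "l3", "topo": "die"},
    ],
}

# Prints a single flat option string, e.g.
# -smp cpus=32,...,caches.0.type=l1d,caches.0.topo=core,...
print("-smp " + ",".join(flatten_keyval(smp)))
```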
> > Open about How to Handle the Default Options
> > ============================================
> >
> > (For the detailed description of this series, please skip this "long"
> > section and review the subsequent content.)
> >
> >
> > Background of OPEN
> > ------------------
> >
> > Daniel and I discussed initial thoughts on cache topology, and there was
> > an idea that the default *cache_topo is on the CORE level [3]:
> >
> > > simply preferring "cores" for everything is a reasonable
> > > default long term plan for everything, unless the specific
> > > architecture target has no concept of "cores".
>
> FYI, when I wrote that I wasn't specifically thinking about cache
> mappings. I just meant that when exposing SMP topology to guests,
> 'cores' is a good default, compared to 'sockets', 'threads', etc.
>
> Defaults for cache <-> topology mappings should be whatever makes
> most sense to the architecture target/cpu.
Thank you for the additional clarification!
> > Problem with this OPEN
> > ----------------------
> >
> > Some arches have their own arch-specific cache topology, such as l1
> > per core, l2 per core and l3 per die for i386. And as Jee Heng
> > proposed for RISC-V, the cache topology is l1/l2 per core and l3 per
> > cluster.
> >
> > Taking L3 as an example, logically there is a difference between a
> > user-specified core level and the default core level, even though the
> > two look identical on the command line.
> >
> > For example,
> >
> > "(user-specified) l3-cache-topo=core" should override i386's default
> > l3 per die, but i386's default l3 per die should also override
> > "(default) l3-cache-topo=core", because this default value is like a
> > placeholder that specifies nothing.
> >
> > However, from a command line parsing perspective, it's impossible to
> > tell what the “l3-cache-topo=core” setting is for...
>
> Yes, we need to explicitly distinguish built-in defaults from
> user specified data, otherwise we risk too many mistakes.
>
> > Options to solve OPEN
> > ---------------------
> >
> > So, I think we have the following options:
> >
> >
> > 1. Can we avoid such default parameters?
> > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> >
> > This would reduce the pain in QEMU, but I'm not sure whether it
> > would make libvirt happy.
>
> I think having an explicit "default" value is inevitable, not simply
> because of libvirt. Long experience with QEMU shows that we need to
> be able to reliably distinguish direct user input from built-in
> defaults in cases like this.
Thanks, that gives me an answer to that question!
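(To make that concrete, here is a small sketch, not QEMU code, of how an
explicit "default" placeholder lets the machine tell user input apart
from built-in defaults. The `ARCH_CACHE_DEFAULTS` table and
`resolve_cache_topo` helper are invented for illustration; the per-arch
values only echo the defaults mentioned in this thread.)

```python
# Sketch (not QEMU code): resolve the effective topology level for one
# cache.  The "default" enum value acts as a placeholder that specifies
# nothing, so the arch/CPU-model default applies; any other value is
# explicit user input and wins.  The table echoes the defaults discussed
# in this thread: i386 l1/l2 per core and l3 per die, RISC-V l1/l2 per
# core and l3 per cluster.

ARCH_CACHE_DEFAULTS = {
    "i386":  {"l1d": "core", "l1i": "core", "l2": "core", "l3": "die"},
    "riscv": {"l1d": "core", "l1i": "core", "l2": "core", "l3": "cluster"},
}

def resolve_cache_topo(arch, cache, user_value="default"):
    """Return the effective topology level for one cache."""
    if user_value != "default":
        return user_value                      # explicit user input wins
    return ARCH_CACHE_DEFAULTS[arch][cache]    # placeholder: arch default

# A user-specified "core" overrides i386's default l3 per die:
print(resolve_cache_topo("i386", "l3", "core"))
# Nothing specified: the placeholder resolves to the arch default:
print(resolve_cache_topo("riscv", "l3"))
```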
> >
> > It is also possible to expose Cache topology information as the CPU
> > properties in “query-cpu-model-expansion type=full”, but that adds
> > arch-specific work.
> >
> > If omitted, I think it's just like omitting “cores”/“sockets”,
> > leaving it up to the machine to decide based on the specific CPU model
> > (and now the cache topology is indeed determined by the CPU model as
> > well).
> >
> >
> > 2. If default is required, can we use a specific abstract word?
> > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> >
> > That is, is it possible to use a specific word like “auto”/“invalid”
> > /“default” and avoid a specific topology level?
>
> "invalid" feels a bit weird, but 'auto' or 'default' are fine, and
> possibly "unspecified".
I prefer "default", like you listed in your QAPI example. :)
> > Like setting “l3-cache-topo=invalid” (since I've only added the invalid
> > hierarchy so far ;-) ).
> >
> > I found that the cache topology varies so much across arches that,
> > I'm sorry to say, it's hard to have a uniform default cache topology.
> >
> >
> > I apologize for the very lengthy note and appreciate you reviewing it
> > here as well as your time!
Thanks,
Zhao
Thread overview: 20+ messages
2024-05-30 10:15 [RFC v2 0/7] Introduce SMP Cache Topology Zhao Liu
2024-05-30 10:15 ` [RFC v2 1/7] hw/core: Make CPU topology enumeration arch-agnostic Zhao Liu
2024-06-03 12:19 ` Markus Armbruster
2024-06-04 8:39 ` Zhao Liu
2024-06-04 8:44 ` Markus Armbruster
2024-06-03 12:25 ` Markus Armbruster
2024-06-04 8:33 ` Zhao Liu
2024-06-04 8:47 ` Markus Armbruster
2024-06-04 9:06 ` Zhao Liu
2024-05-30 10:15 ` [RFC v2 2/7] hw/core: Define cache topology for machine Zhao Liu
2024-05-30 10:15 ` [RFC v2 3/7] hw/core: Add cache topology options in -smp Zhao Liu
2024-06-04 8:54 ` Markus Armbruster
2024-06-04 9:32 ` Daniel P. Berrangé
2024-06-04 16:08 ` Zhao Liu
2024-05-30 10:15 ` [RFC v2 4/7] i386/cpu: Support thread and module level cache topology Zhao Liu
2024-05-30 10:15 ` [RFC v2 5/7] i386/cpu: Update cache topology with machine's configuration Zhao Liu
2024-05-30 10:15 ` [RFC v2 6/7] i386/pc: Support cache topology in -smp for PC machine Zhao Liu
2024-05-30 10:15 ` [RFC v2 7/7] qemu-options: Add the cache topology description of -smp Zhao Liu
2024-06-04 9:29 ` [RFC v2 0/7] Introduce SMP Cache Topology Daniel P. Berrangé
2024-06-04 15:31 ` Zhao Liu [this message]