From: Zhao Liu <zhao1.liu@intel.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Daniel P . Berrangé" <berrange@redhat.com>,
"Igor Mammedov" <imammedo@redhat.com>,
"Eduardo Habkost" <eduardo@habkost.net>,
"Marcel Apfelbaum" <marcel.apfelbaum@gmail.com>,
"Philippe Mathieu-Daudé" <philmd@linaro.org>,
"Yanan Wang" <wangyanan55@huawei.com>,
"Michael S . Tsirkin" <mst@redhat.com>,
"Richard Henderson" <richard.henderson@linaro.org>,
"Jonathan Cameron" <Jonathan.Cameron@huawei.com>,
"Alireza Sanaee" <alireza.sanaee@huawei.com>,
"Sia Jee Heng" <jeeheng.sia@starfivetech.com>,
qemu-devel@nongnu.org, kvm@vger.kernel.org
Subject: Re: [PATCH v6 0/4] i386: Support SMP Cache Topology
Date: Wed, 25 Dec 2024 11:03:42 +0800
Message-ID: <Z2t2DuMBYb2mioB0@intel.com>
In-Reply-To: <44212226-3692-488b-8694-935bd5c3a333@redhat.com>
> > About smp-cache
> > ===============
> >
> > The API design has been discussed heavily in [3].
> >
> > Now, smp-cache is implemented as an array integrated into -machine. Though
> > -machine currently can't support JSON format, that is one of the
> > directions for the future.
> >
> > An example is as follows:
> >
> > smp_cache=smp-cache.0.cache=l1i,smp-cache.0.topology=core,smp-cache.1.cache=l1d,smp-cache.1.topology=core,smp-cache.2.cache=l2,smp-cache.2.topology=module,smp-cache.3.cache=l3,smp-cache.3.topology=die
> >
> > "cache" specifies the cache that the properties will be applied on. This
> > field is the combination of cache level and cache type. Now it supports
> > "l1d" (L1 data cache), "l1i" (L1 instruction cache), "l2" (L2 unified
> > cache) and "l3" (L3 unified cache).
> >
> > "topology" field accepts CPU topology levels including "thread", "core",
> > "module", "cluster", "die", "socket", "book", "drawer" and a special
> > value "default".
>
> Looks good; just one thing, does "thread" make sense? I think that it's
> almost by definition that threads within a core share all caches, but maybe
> I'm missing some hardware configurations.
Hi Paolo, merry Christmas.

Yes, AFAIK, there's no hardware that has thread-level caches.
The case I considered for thread-level topology is that it could be used
for vCPU scheduling optimization (although I haven't rigorously tested
the actual impact). Without CPU affinity, tasks in Linux are generally
distributed evenly across different cores (for example, vCPU0 on Core 0
and vCPU1 on Core 1). In this case, a thread-level cache setting is
closer to the actual situation, with vCPU0 occupying the L1/L2 of host
core 0 and vCPU1 occupying the L1/L2 of host core 1:
 ┌─────┐        ┌─────┐
 │vCPU0│        │vCPU1│
 └─────┘        └─────┘
┌┌───┐┌───┐┐   ┌┌───┐┌───┐┐
││T0 ││T1 ││   ││T2 ││T3 ││
│└───┘└───┘│   │└───┘└───┘│
└────C0────┘   └────C1────┘
The L2 cache topology affects performance, and the cluster-aware
scheduling feature in the Linux kernel will try to schedule tasks onto
CPUs that share the same L2 cache. So, in cases like the figure above,
setting the L2 cache to be per thread should, in principle, be better.
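
As a rough sketch of how that could be spelled (the machine type and the
-smp values below are only made up for illustration; the smp-cache
properties follow the format quoted above), the per-thread L2 case in
the figure could be requested with something like:

  qemu-system-x86_64 \
      -machine q35,smp-cache.0.cache=l2,smp-cache.0.topology=thread \
      -smp 2,sockets=1,cores=1,threads=2

i.e. vCPU0 and vCPU1 are SMT siblings in the guest, but each is shown
its own L2, which matches the host placement above; the other caches
would stay at their default topology.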
Thanks,
Zhao