From: Zhao Liu <zhao1.liu@linux.intel.com>
To: Eduardo Habkost <eduardo@habkost.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	"Michael S . Tsirkin" <mst@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Marcelo Tosatti <mtosatti@redhat.com>
Cc: qemu-devel@nongnu.org, kvm@vger.kernel.org,
	Zhenyu Wang <zhenyu.z.wang@intel.com>,
	Zhuocheng Ding <zhuocheng.ding@intel.com>,
	Babu Moger <babu.moger@amd.com>,
	Yongwei Ma <yongwei.ma@intel.com>, Zhao Liu <zhao1.liu@intel.com>
Subject: [PATCH v6 06/16] i386: Introduce module-level cpu topology to CPUX86State
Date: Fri, 17 Nov 2023 15:50:56 +0800	[thread overview]
Message-ID: <20231117075106.432499-7-zhao1.liu@linux.intel.com> (raw)
In-Reply-To: <20231117075106.432499-1-zhao1.liu@linux.intel.com>

From: Zhuocheng Ding <zhuocheng.ding@intel.com>

The smp command line has a "clusters" parameter, but x86 does not
support that level yet. "cluster" is a CPU topology level above cores,
at which the cores may share some resources (the L2 cache, or something
else such as L3 cache tags, depending on the architecture) [1][2]. For
x86, the resource shared by cores at the cluster level is mainly the
L2 cache.

However, using "cluster" to define x86's L2 cache topology would cause
a compatibility problem:

Currently, x86 defaults to the L2 cache being shared within one core,
which actually implies a default setting of "1 core per L2 cache" and
therefore implicitly defaults to having as many L2 caches as cores.

For example (i386 PC machine):
-smp 16,sockets=2,dies=2,cores=2,threads=2,maxcpus=16 (*)

Considering the topology of the L2 cache, this (*) implicitly means "1
core per L2 cache" and "2 L2 caches per die".

If we used "cluster" to configure the L2 cache topology, with a new
default setting of "1 cluster per L2 cache", the above semantics would
change to "2 cores per cluster" and "1 cluster per L2 cache", that is,
"2 cores per L2 cache".

So the same command (*) would change the L2 cache topology and, in
turn, the performance of the virtual machine (see the sketch below).
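
For illustration only (not part of this patch; the variable names
below are made up), a minimal C sketch of the arithmetic behind (*):

    #include <stdio.h>

    int main(void)
    {
        /* -smp 16,sockets=2,dies=2,cores=2,threads=2 (*) */
        unsigned cores_per_die = 2;

        /* Current semantics: 1 core per L2 cache. */
        unsigned l2_per_die_current = cores_per_die;    /* 2 */

        /*
         * Cluster-based semantics: cores=2 would mean "2 cores per
         * cluster", and a default of "1 cluster per L2 cache" would
         * make the whole cluster share a single L2 cache.
         */
        unsigned clusters_per_die = 1;
        unsigned l2_per_die_cluster = clusters_per_die; /* 1 */

        printf("current: %u L2 caches per die, cluster-based: %u\n",
               l2_per_die_current, l2_per_die_cluster);
        return 0;
    }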

Therefore, x86 should treat "cluster" only as a CPU topology level
and, for compatibility, avoid using it to change the L2 cache topology
by default.

"cluster" in smp is the CPU topology level which is between "core" and
die.

For x86, the "cluster" in smp corresponds to the module level [2],
which is above the core level. So use "module" rather than "cluster"
in the i386 code.

Please note that x86 already has a CPU topology level also named
"cluster" [3]; that level sits above the package. The "cluster" in
the x86 CPU topology is thus completely different from the "clusters"
smp parameter. Once the module level is introduced, the "clusters"
smp parameter will actually refer to x86's module level.
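
For orientation (a summary sketch of the description above, not an
authoritative definition), the levels line up as follows after this
series:

    package/socket   (-smp sockets=N)
      die            (-smp dies=N)
        module       (-smp clusters=N; named "module" in i386 code)
          core       (-smp cores=N)
            thread   (-smp threads=N)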

[1]: 864c3b5c32f0 ("hw/core/machine: Introduce CPU cluster topology support")
[2]: Yanan's comment about "cluster",
     https://lists.gnu.org/archive/html/qemu-devel/2023-02/msg04051.html
[3]: SDM, vol.3, ch.9, 9.9.1 Hierarchical Mapping of Shared Resources.

Signed-off-by: Zhuocheng Ding <zhuocheng.ding@intel.com>
Co-developed-by: Zhao Liu <zhao1.liu@intel.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Tested-by: Babu Moger <babu.moger@amd.com>
Tested-by: Yongwei Ma <yongwei.ma@intel.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
---
Changes since v1:
 * The background of the introduction of the "cluster" parameter and its
   exact meaning were revised according to Yanan's explanation. (Yanan)
---
 hw/i386/x86.c     | 1 +
 target/i386/cpu.c | 1 +
 target/i386/cpu.h | 5 +++++
 3 files changed, 7 insertions(+)

diff --git a/hw/i386/x86.c b/hw/i386/x86.c
index b3d054889bba..24628c1d2f73 100644
--- a/hw/i386/x86.c
+++ b/hw/i386/x86.c
@@ -306,6 +306,7 @@ void x86_cpu_pre_plug(HotplugHandler *hotplug_dev,
     init_topo_info(&topo_info, x86ms);
 
     env->nr_dies = ms->smp.dies;
+    env->nr_modules = ms->smp.clusters;
 
     /*
      * If APIC ID is not set,
diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index 1713499c44cd..f600c0ee9df1 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -7698,6 +7698,7 @@ static void x86_cpu_initfn(Object *obj)
     CPUX86State *env = &cpu->env;
 
     env->nr_dies = 1;
+    env->nr_modules = 1;
 
     object_property_add(obj, "feature-words", "X86CPUFeatureWordInfo",
                         x86_cpu_get_feature_words,
diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index a214d056ac4b..da58d41c9969 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -1904,6 +1904,11 @@ typedef struct CPUArchState {
 
     /* Number of dies within this CPU package. */
     unsigned nr_dies;
+    /*
+     * Number of modules within this CPU package.
+     * The module level in x86 CPU topology corresponds to smp.clusters.
+     */
+    unsigned nr_modules;
 } CPUX86State;
 
 struct kvm_msrs;
-- 
2.34.1


