From: Zhao Liu <zhao1.liu@intel.com>
To: "Paolo Bonzini" <pbonzini@redhat.com>,
	"Daniel P . Berrangé" <berrange@redhat.com>,
	"Igor Mammedov" <imammedo@redhat.com>,
	"Eduardo Habkost" <eduardo@habkost.net>
Cc: Ewan Hai <ewanhai-oc@zhaoxin.com>,
	Jason Zeng <jason.zeng@intel.com>,
	Xiaoyao Li <xiaoyao.li@intel.com>, Tao Su <tao1.su@intel.com>,
	Yi Lai <yi1.lai@intel.com>, Dapeng Mi <dapeng1.mi@intel.com>,
	Tejus GK <tejus.gk@nutanix.com>,
	Manish Mishra <manish.mishra@nutanix.com>,
	qemu-devel@nongnu.org, Zhao Liu <zhao1.liu@intel.com>
Subject: [PATCH v2 4/9] i386/cpu: Introduce cache model for YongFeng
Date: Fri, 11 Jul 2025 18:45:58 +0800	[thread overview]
Message-ID: <20250711104603.1634832-5-zhao1.liu@intel.com> (raw)
In-Reply-To: <20250711104603.1634832-1-zhao1.liu@intel.com>

From: Ewan Hai <ewanhai-oc@zhaoxin.com>

Add the cache model to YongFeng (v3) so that guests see cache
information that matches the real hardware.

Note: although YongFeng v2 was added after v10.0, it was also
backported to v10.0.2. Therefore, a new version (v3) is needed to
avoid conflicts.

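With this in place, the cache model can be requested explicitly via the
versioned CPU model name (e.g. "-cpu YongFeng-v3"), while the bare
"YongFeng" name keeps resolving to whichever version the machine type
defaults to.
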
The cache model is as follows (a small consistency check of the
synthesized sizes is sketched after the dump):

      --- cache 0 ---
      cache type                         = data cache (1)
      cache level                        = 0x1 (1)
      self-initializing cache level      = true
      fully associative cache            = false
      maximum IDs for CPUs sharing cache = 0x0 (0)
      maximum IDs for cores in pkg       = 0x0 (0)
      system coherency line size         = 0x40 (64)
      physical line partitions           = 0x1 (1)
      ways of associativity              = 0x8 (8)
      number of sets                     = 0x40 (64)
      WBINVD/INVD acts on lower caches   = false
      inclusive to lower caches          = false
      complex cache indexing             = false
      number of sets (s)                 = 64
      (size synth)                       = 32768 (32 KB)
      --- cache 1 ---
      cache type                         = instruction cache (2)
      cache level                        = 0x1 (1)
      self-initializing cache level      = true
      fully associative cache            = false
      maximum IDs for CPUs sharing cache = 0x0 (0)
      maximum IDs for cores in pkg       = 0x0 (0)
      system coherency line size         = 0x40 (64)
      physical line partitions           = 0x1 (1)
      ways of associativity              = 0x10 (16)
      number of sets                     = 0x40 (64)
      WBINVD/INVD acts on lower caches   = false
      inclusive to lower caches          = false
      complex cache indexing             = false
      number of sets (s)                 = 64
      (size synth)                       = 65536 (64 KB)
      --- cache 2 ---
      cache type                         = unified cache (3)
      cache level                        = 0x2 (2)
      self-initializing cache level      = true
      fully associative cache            = false
      maximum IDs for CPUs sharing cache = 0x0 (0)
      maximum IDs for cores in pkg       = 0x0 (0)
      system coherency line size         = 0x40 (64)
      physical line partitions           = 0x1 (1)
      ways of associativity              = 0x8 (8)
      number of sets                     = 0x200 (512)
      WBINVD/INVD acts on lower caches   = false
      inclusive to lower caches          = true
      complex cache indexing             = false
      number of sets (s)                 = 512
      (size synth)                       = 262144 (256 KB)
      --- cache 3 ---
      cache type                         = unified cache (3)
      cache level                        = 0x3 (3)
      self-initializing cache level      = true
      fully associative cache            = false
      maximum IDs for CPUs sharing cache = 0x0 (0)
      maximum IDs for cores in pkg       = 0x0 (0)
      system coherency line size         = 0x40 (64)
      physical line partitions           = 0x1 (1)
      ways of associativity              = 0x10 (16)
      number of sets                     = 0x2000 (8192)
      WBINVD/INVD acts on lower caches   = true
      inclusive to lower caches          = true
      complex cache indexing             = false
      number of sets (s)                 = 8192
      (size synth)                       = 8388608 (8 MB)
      --- cache 4 ---
      cache type                         = no more caches (0)
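
For anyone cross-checking the dump, here is a minimal standalone sketch
(not part of the patch) that recomputes the "(size synth)" values above
from the leaf-4 style fields, i.e. size = line_size * partitions * ways
* sets:

    /*
     * Standalone consistency check: recompute the "(size synth)" values
     * from the dump as line_size * partitions * ways * sets.
     */
    #include <assert.h>
    #include <stdio.h>

    int main(void)
    {
        static const struct {
            const char *name;
            unsigned line_size, partitions, ways, sets, size;
        } caches[] = {
            { "L1d", 64, 1,  8,   64,       32 * 1024 },
            { "L1i", 64, 1, 16,   64,       64 * 1024 },
            { "L2",  64, 1,  8,  512,      256 * 1024 },
            { "L3",  64, 1, 16, 8192, 8 * 1024 * 1024 },
        };

        for (unsigned i = 0; i < sizeof(caches) / sizeof(caches[0]); i++) {
            unsigned synth = caches[i].line_size * caches[i].partitions *
                             caches[i].ways * caches[i].sets;
            assert(synth == caches[i].size);
            printf("%s: %u bytes\n", caches[i].name, synth);
        }
        return 0;
    }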

Tested-by: Yi Lai <yi1.lai@intel.com>
Signed-off-by: Ewan Hai <ewanhai-oc@zhaoxin.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
Changes since v1:
 * Polish the note.

Changes to the original code:
 * Rearrange the cache model fields to make them easier to check.
 * Add an explanation of why v3 is needed.
 * Drop the lines_per_tag field for L2 & L3.
---
 target/i386/cpu.c | 104 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 104 insertions(+)

diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index e9f3434b096c..5676e5526f40 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -3154,6 +3154,105 @@ static const CPUCaches xeon_srf_cache_info = {
     },
 };
 
+static const CPUCaches yongfeng_cache_info = {
+    .l1d_cache = &(CPUCacheInfo) {
+        /* CPUID 0x4.0x0.EAX */
+        .type = DATA_CACHE,
+        .level = 1,
+        .self_init = true,
+
+        /* CPUID 0x4.0x0.EBX */
+        .line_size = 64,
+        .partitions = 1,
+        .associativity = 8,
+
+        /* CPUID 0x4.0x0.ECX */
+        .sets = 64,
+
+        /* CPUID 0x4.0x0.EDX */
+        .no_invd_sharing = false,
+        .inclusive = false,
+        .complex_indexing = false,
+
+        /* CPUID 0x80000005.ECX */
+        .lines_per_tag = 1,
+        .size = 32 * KiB,
+
+        .share_level = CPU_TOPOLOGY_LEVEL_CORE,
+    },
+    .l1i_cache = &(CPUCacheInfo) {
+        /* CPUID 0x4.0x1.EAX */
+        .type = INSTRUCTION_CACHE,
+        .level = 1,
+        .self_init = true,
+
+        /* CPUID 0x4.0x1.EBX */
+        .line_size = 64,
+        .partitions = 1,
+        .associativity = 16,
+
+        /* CPUID 0x4.0x1.ECX */
+        .sets = 64,
+
+        /* CPUID 0x4.0x1.EDX */
+        .no_invd_sharing = false,
+        .inclusive = false,
+        .complex_indexing = false,
+
+        /* CPUID 0x80000005.EDX */
+        .lines_per_tag = 1,
+        .size = 64 * KiB,
+
+        .share_level = CPU_TOPOLOGY_LEVEL_CORE,
+    },
+    .l2_cache = &(CPUCacheInfo) {
+        /* CPUID 0x4.0x2.EAX */
+        .type = UNIFIED_CACHE,
+        .level = 2,
+        .self_init = true,
+
+        /* CPUID 0x4.0x2.EBX */
+        .line_size = 64,
+        .partitions = 1,
+        .associativity = 8,
+
+        /* CPUID 0x4.0x2.ECX */
+        .sets = 512,
+
+        /* CPUID 0x4.0x2.EDX */
+        .no_invd_sharing = false,
+        .inclusive = true,
+        .complex_indexing = false,
+
+        /* CPUID 0x80000006.ECX */
+        .size = 256 * KiB,
+
+        .share_level = CPU_TOPOLOGY_LEVEL_CORE,
+    },
+    .l3_cache = &(CPUCacheInfo) {
+        /* CPUID 0x4.0x3.EAX */
+        .type = UNIFIED_CACHE,
+        .level = 3,
+        .self_init = true,
+
+        /* CPUID 0x4.0x3.EBX */
+        .line_size = 64,
+        .partitions = 1,
+        .associativity = 16,
+
+        /* CPUID 0x4.0x3.ECX */
+        .sets = 8192,
+
+        /* CPUID 0x4.0x3.EDX */
+        .no_invd_sharing = true,
+        .inclusive = true,
+        .complex_indexing = false,
+
+        .size = 8 * MiB,
+        .share_level = CPU_TOPOLOGY_LEVEL_DIE,
+    },
+};
+
 /* The following VMX features are not supported by KVM and are left out in the
  * CPU definitions:
  *
@@ -6433,6 +6532,11 @@ static const X86CPUDefinition builtin_x86_defs[] = {
                     { /* end of list */ }
                 }
             },
+            {
+                .version = 3,
+                .note = "with the cache model",
+                .cache_info = &yongfeng_cache_info,
+            },
             { /* end of list */ }
         }
     },
-- 
2.34.1



Thread overview: 11+ messages
2025-07-11 10:45 [PATCH v2 0/9] i386/cpu: Intel cache model & topo CPUID enhencement Zhao Liu
2025-07-11 10:45 ` [PATCH v2 1/9] i386/cpu: Introduce cache model for SierraForest Zhao Liu
2025-07-11 10:45 ` [PATCH v2 2/9] i386/cpu: Introduce cache model for GraniteRapids Zhao Liu
2025-07-11 10:45 ` [PATCH v2 3/9] i386/cpu: Introduce cache model for SapphireRapids Zhao Liu
2025-07-11 10:45 ` Zhao Liu [this message]
2025-07-11 10:45 ` [PATCH v2 5/9] i386/cpu: Add a "x-force-cpuid-0x1f" property Zhao Liu
2025-07-11 10:46 ` [PATCH v2 6/9] i386/cpu: Enable 0x1f leaf for SierraForest by default Zhao Liu
2025-07-11 10:46 ` [PATCH v2 7/9] i386/cpu: Enable 0x1f leaf for GraniteRapids " Zhao Liu
2025-07-11 10:46 ` [PATCH v2 8/9] i386/cpu: Enable 0x1f leaf for SapphireRapids " Zhao Liu
2025-07-11 10:46 ` [PATCH v2 9/9] i386/cpu: Enable 0x1f leaf for YongFeng " Zhao Liu
2025-07-11 16:51 ` [PATCH v2 0/9] i386/cpu: Intel cache model & topo CPUID enhencement Paolo Bonzini
