* [Qemu-devel] [PATCH v10 0/5] i386: Enable TOPOEXT to support hyperthreading on AMD CPU
@ 2018-05-22 0:41 Babu Moger
2018-05-22 0:41 ` [Qemu-devel] [PATCH v10 1/5] i386: Clean up cache CPUID code Babu Moger
` (4 more replies)
0 siblings, 5 replies; 12+ messages in thread
From: Babu Moger @ 2018-05-22 0:41 UTC (permalink / raw)
To: mst, marcel.apfelbaum, pbonzini, rth, ehabkost, mtosatti
Cc: qemu-devel, kvm, babu.moger, kash, geoff
This series enables the TOPOEXT feature for AMD CPUs. This is required to
support hyperthreading on kvm guests.
This addresses the issues reported in these bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1481253
https://bugs.launchpad.net/qemu/+bug/1703506
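For illustration, a guest topology that exercises this feature can be
started with something like the following command line (the CPU model and
-smp options are standard QEMU; the exact values are just an example):

  qemu-system-x86_64 -enable-kvm -cpu EPYC \
      -smp 16,sockets=1,cores=8,threads=2 ...

With TOPOEXT exposed, the guest sees 8 cores with 2 threads each instead
of QEMU warning that the AMD CPU doesn't support hyperthreading.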
v10:
Based the patches on Eduardo's git://github.com/ehabkost/qemu.git x86-next
Some of the earlier patches are already queued, so only the rest of the
series is submitted here. This series adds a complete redesign of the CPU
topology: based on the user-given parameters, we try to build a topology
as close to the hardware as possible, maintaining symmetry as much as we
can. Added a new function, epyc_build_topology, to build the topology from
the user-given nr_cores and nr_threads.
Summary of changes:
1. Build the topology dynamically based on nr_cores and nr_threads
2. Added new epyc_build_topology to build the new topology.
3. Added new function num_sharing_l3_cache to calculate the L3 sharing
   (see the worked example after this list)
4. Added a check to verify the topology. Disabled TOPOEXT if the
   topology cannot be built.
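As a worked example of the new calculation (values hand-traced from the
code in this series, for illustration): with -smp cores=12,threads=2, no
single CCX fits 12 cores, so two nodes are used (12 <= 2 * 8) and the
cores are spread over 2 nodes * 2 CCXs = 4 core complexes, i.e.
ceil(12 / 4) = 3 cores per CCX sharing one L3. CPUID 0x8000001D then
reports ((3 - 1) * 2) + 1 = 5 in EAX[25:14], meaning 6 logical processors
share each L3.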
v9:
Based the patches on Eduardo's git://github.com/ehabkost/qemu.git x86-next
tree. The following 3 patches from v8 are already queued:
i386: Add cache information in X86CPUDefinition
i386: Initialize cache information for EPYC family processors
i386: Helpers to encode cache information consistently
So, submitting the rest of the series here.
Changes:
1. Included Eduardo's clean up patch
2. Added 2.13 machine types
3. Disabled topoext for machine types 2.12 and older.
4. Added the assert to core_id as discussed.
v8:
Addressed feedback from Eduardo. Thanks Eduardo for being patient with me.
Tested on an AMD EPYC server and also did some basic testing on an Intel box.
Summary of changes.
1. Reverted the L2 cache associativity change; kept it the same as legacy.
2. Changed cache_info structure in X86CPUDefinition and CPUX86State to pointers.
3. Added legacy_cache property in PC_COMPAT_2_12 and initialized legacy_cache
based on static cache_info availability.
4. Squashed patches 4 and 5 and applied them before patch 3.
5. Added legacy cache checks for cpuid[2] and cpuid[4] for consistency.
6. Simplified the NUM_SHARING_CACHE definition for readability.
7. Removed assert for core_id as it appeared redundant.
8. Simplified encode_cache_cpuid8000001d a little.
9. A few more minor changes.
v7:
Rebased on top of the latest tree after the 2.12 release and ran a few basic
tests. There are no changes except for a few minor hunks. Hopefully this gets
pulled into the 2.13 release. Please review and let me know of any feedback.
v6:
1. Fixed a problem with patch#4 (Add new property to control cache info): the
parameter legacy_cache should be "on" by default on machine type "pc-q35-2.10".
This was found by Alexandr Iarygin.
2. Fixed the L3 cache size for EPYC-based machines (patch#3). Also fixed the
number of logical processors sharing the cache (patch#6): only the L3 cache is
shared by multiple cores, not L1 or L2. This was a decoding bug, found by
Geoffrey McRae, who also verified the fix.
v5:
In this series I tried to address the feedback from Eduardo Habkost.
The discussion thread is here.
https://patchwork.kernel.org/patch/10299745/
The previous thread is here.
http://patchwork.ozlabs.org/cover/884885/
Reason for these changes: the cache properties for the AMD family of
processors have changed from previous releases. We don't want to expose the
new information on the older processor families, as this might cause
compatibility issues.
Changes:
1. Based the patches on top of Eduardo's patch (patch#1).
Changed a few things:
Moved the cache definitions to the cpu.h file.
Changed the CPUID_4 names to generic names.
2. Added a new property "legacy-cache" in the cpu object (patch#2). This can be
used to display the old properties even if the host supports the new cache
properties.
3. Added cache information in X86CPUDefinition and CPUX86State.
4. Patches 6-7 changed quite a bit from the previous version due to the new
approach.
5. Addressed a few issues with CPUID_8000_001D and CPUID_8000_001E.
v4:
1. Removed the checks under the cpuid 0x8000001D leaf (patch #2). These checks
are not necessary. Found this during internal review.
2. Added the CPUID_EXT3_TOPOEXT feature for the entire family 17h (patch #4).
This was found by Kash Pande during his testing.
3. Removed the hardcoded cpuid xlevel and dynamically extended it if
CPUID_EXT3_TOPOEXT is supported (suggested by Brijesh Singh).
v3:
1. Removed patch #1. Radim mentioned that the original typo problem is in the
Linux kernel headers; QEMU is just copying those files.
2. In the previous version, I used the cpuid 4 definitions for AMD's cpuid leaf
0x8000001D. CPUID leaf 4 is very Intel-specific and we don't want to expose
those details under AMD. I have renamed some of these definitions as generic.
These changes are in patch#1. Radim, let me know if this is what you intended.
3. Added an assert for core_id (suggested by Radim Krcmár).
4. Changed the if condition under "L3 cache info" (suggested by Gary Hook).
5. Addressed a few more text corrections and code cleanups (suggested by
Thomas Lendacky).
v2:
Fixed a few more minor issues per Gary Hook's comments. Thank you, Gary.
Removed patch#1; we need to handle the instruction cache associativity
separately, as it varies based on the CPU family. I will come back to that
later. Added two more typo corrections in patch#1 and patch#5.
v1:
Stanislav Lanci posted a few patches earlier.
https://patchwork.kernel.org/patch/10040903/
Rebased his patches with a few changes:
1. Split the patches into two, separating cpuid functions
0x8000001D and 0x8000001E (Patch 2 and 3).
2. Removed the generic non-Intel check and made a separate patch
with some changes (Patch 5).
3. Fixed L3_N_SETS_AMD (from 4096 to 8192) based on CPUID_Fn8000001D_ECX_x03.
Added 2 more patches.
Patch 1. Fixes cache associativity.
Patch 4. Adds TOPOEXT feature on AMD EPYC CPU.
Babu Moger (4):
i386: Populate AMD Processor Cache Information for cpuid 0x8000001D
i386: Add support for CPUID_8000_001E for AMD
i386: Enable TOPOEXT feature on AMD EPYC CPU
i386: Remove generic SMT thread check
Eduardo Habkost (1):
i386: Clean up cache CPUID code
include/hw/i386/pc.h | 4 +
target/i386/cpu.c | 357 +++++++++++++++++++++++++++++++++++++++++----------
target/i386/cpu.h | 14 +-
target/i386/kvm.c | 29 ++++-
4 files changed, 329 insertions(+), 75 deletions(-)
--
1.8.3.1
^ permalink raw reply [flat|nested] 12+ messages in thread
* [Qemu-devel] [PATCH v10 1/5] i386: Clean up cache CPUID code
2018-05-22 0:41 [Qemu-devel] [PATCH v10 0/5] i386: Enable TOPOEXT to support hyperthreading on AMD CPU Babu Moger
@ 2018-05-22 0:41 ` Babu Moger
2018-05-22 0:41 ` [Qemu-devel] [PATCH v10 2/5] i386: Populate AMD Processor Cache Information for cpuid 0x8000001D Babu Moger
` (3 subsequent siblings)
4 siblings, 0 replies; 12+ messages in thread
From: Babu Moger @ 2018-05-22 0:41 UTC (permalink / raw)
To: mst, marcel.apfelbaum, pbonzini, rth, ehabkost, mtosatti
Cc: qemu-devel, kvm, babu.moger, kash, geoff
From: Eduardo Habkost <ehabkost@redhat.com>
Always initialize CPUCaches structs with cache information, even
if legacy_cache=true. Use a different CPUCaches struct for
CPUID[2], CPUID[4], and the AMD CPUID leaves.
This greatly simplifies the logic inside cpu_x86_cpuid().
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Babu Moger <babu.moger@amd.com>
---
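For illustration, the resulting behavior of the property (command lines
are examples, not part of the patch):

  -cpu EPYC                     # default: cache info comes from the model
  -cpu EPYC,legacy-cache=on     # force the old hardcoded cache values
  -cpu qemu64,legacy-cache=off  # fails: the model carries no cache info

The last case is rejected in x86_cpu_realizefn() with "CPU model 'qemu64'
doesn't support legacy-cache=off".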
target/i386/cpu.c | 117 +++++++++++++++++++++++++++---------------------------
target/i386/cpu.h | 14 ++++---
2 files changed, 67 insertions(+), 64 deletions(-)
diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index e5e66a7..d9773b6 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -1114,7 +1114,7 @@ struct X86CPUDefinition {
};
static CPUCaches epyc_cache_info = {
- .l1d_cache = {
+ .l1d_cache = &(CPUCacheInfo) {
.type = DCACHE,
.level = 1,
.size = 32 * KiB,
@@ -1126,7 +1126,7 @@ static CPUCaches epyc_cache_info = {
.self_init = 1,
.no_invd_sharing = true,
},
- .l1i_cache = {
+ .l1i_cache = &(CPUCacheInfo) {
.type = ICACHE,
.level = 1,
.size = 64 * KiB,
@@ -1138,7 +1138,7 @@ static CPUCaches epyc_cache_info = {
.self_init = 1,
.no_invd_sharing = true,
},
- .l2_cache = {
+ .l2_cache = &(CPUCacheInfo) {
.type = UNIFIED_CACHE,
.level = 2,
.size = 512 * KiB,
@@ -1148,7 +1148,7 @@ static CPUCaches epyc_cache_info = {
.sets = 1024,
.lines_per_tag = 1,
},
- .l3_cache = {
+ .l3_cache = &(CPUCacheInfo) {
.type = UNIFIED_CACHE,
.level = 3,
.size = 8 * MiB,
@@ -3342,9 +3342,8 @@ static void x86_cpu_load_def(X86CPU *cpu, X86CPUDefinition *def, Error **errp)
env->features[w] = def->features[w];
}
- /* Store Cache information from the X86CPUDefinition if available */
- env->cache_info = def->cache_info;
- cpu->legacy_cache = def->cache_info ? 0 : 1;
+ /* legacy-cache defaults to 'off' if CPU model provides cache info */
+ cpu->legacy_cache = !def->cache_info;
/* Special cases not set in the X86CPUDefinition structs: */
/* TODO: in-kernel irqchip for hvf */
@@ -3695,21 +3694,11 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, uint32_t count,
if (!cpu->enable_l3_cache) {
*ecx = 0;
} else {
- if (env->cache_info && !cpu->legacy_cache) {
- *ecx = cpuid2_cache_descriptor(&env->cache_info->l3_cache);
- } else {
- *ecx = cpuid2_cache_descriptor(&legacy_l3_cache);
- }
- }
- if (env->cache_info && !cpu->legacy_cache) {
- *edx = (cpuid2_cache_descriptor(&env->cache_info->l1d_cache) << 16) |
- (cpuid2_cache_descriptor(&env->cache_info->l1i_cache) << 8) |
- (cpuid2_cache_descriptor(&env->cache_info->l2_cache));
- } else {
- *edx = (cpuid2_cache_descriptor(&legacy_l1d_cache) << 16) |
- (cpuid2_cache_descriptor(&legacy_l1i_cache) << 8) |
- (cpuid2_cache_descriptor(&legacy_l2_cache_cpuid2));
+ *ecx = cpuid2_cache_descriptor(env->cache_info_cpuid2.l3_cache);
}
+ *edx = (cpuid2_cache_descriptor(env->cache_info_cpuid2.l1d_cache) << 16) |
+ (cpuid2_cache_descriptor(env->cache_info_cpuid2.l1i_cache) << 8) |
+ (cpuid2_cache_descriptor(env->cache_info_cpuid2.l2_cache));
break;
case 4:
/* cache info: needed for Core compatibility */
@@ -3722,35 +3711,27 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, uint32_t count,
}
} else {
*eax = 0;
- CPUCacheInfo *l1d, *l1i, *l2, *l3;
- if (env->cache_info && !cpu->legacy_cache) {
- l1d = &env->cache_info->l1d_cache;
- l1i = &env->cache_info->l1i_cache;
- l2 = &env->cache_info->l2_cache;
- l3 = &env->cache_info->l3_cache;
- } else {
- l1d = &legacy_l1d_cache;
- l1i = &legacy_l1i_cache;
- l2 = &legacy_l2_cache;
- l3 = &legacy_l3_cache;
- }
switch (count) {
case 0: /* L1 dcache info */
- encode_cache_cpuid4(l1d, 1, cs->nr_cores,
+ encode_cache_cpuid4(env->cache_info_cpuid4.l1d_cache,
+ 1, cs->nr_cores,
eax, ebx, ecx, edx);
break;
case 1: /* L1 icache info */
- encode_cache_cpuid4(l1i, 1, cs->nr_cores,
+ encode_cache_cpuid4(env->cache_info_cpuid4.l1i_cache,
+ 1, cs->nr_cores,
eax, ebx, ecx, edx);
break;
case 2: /* L2 cache info */
- encode_cache_cpuid4(l2, cs->nr_threads, cs->nr_cores,
+ encode_cache_cpuid4(env->cache_info_cpuid4.l2_cache,
+ cs->nr_threads, cs->nr_cores,
eax, ebx, ecx, edx);
break;
case 3: /* L3 cache info */
pkg_offset = apicid_pkg_offset(cs->nr_cores, cs->nr_threads);
if (cpu->enable_l3_cache) {
- encode_cache_cpuid4(l3, (1 << pkg_offset), cs->nr_cores,
+ encode_cache_cpuid4(env->cache_info_cpuid4.l3_cache,
+ (1 << pkg_offset), cs->nr_cores,
eax, ebx, ecx, edx);
break;
}
@@ -3963,13 +3944,8 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, uint32_t count,
(L1_ITLB_2M_ASSOC << 8) | (L1_ITLB_2M_ENTRIES);
*ebx = (L1_DTLB_4K_ASSOC << 24) | (L1_DTLB_4K_ENTRIES << 16) | \
(L1_ITLB_4K_ASSOC << 8) | (L1_ITLB_4K_ENTRIES);
- if (env->cache_info && !cpu->legacy_cache) {
- *ecx = encode_cache_cpuid80000005(&env->cache_info->l1d_cache);
- *edx = encode_cache_cpuid80000005(&env->cache_info->l1i_cache);
- } else {
- *ecx = encode_cache_cpuid80000005(&legacy_l1d_cache_amd);
- *edx = encode_cache_cpuid80000005(&legacy_l1i_cache_amd);
- }
+ *ecx = encode_cache_cpuid80000005(env->cache_info_amd.l1d_cache);
+ *edx = encode_cache_cpuid80000005(env->cache_info_amd.l1i_cache);
break;
case 0x80000006:
/* cache info (L2 cache) */
@@ -3985,17 +3961,10 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, uint32_t count,
(L2_DTLB_4K_ENTRIES << 16) | \
(AMD_ENC_ASSOC(L2_ITLB_4K_ASSOC) << 12) | \
(L2_ITLB_4K_ENTRIES);
- if (env->cache_info && !cpu->legacy_cache) {
- encode_cache_cpuid80000006(&env->cache_info->l2_cache,
- cpu->enable_l3_cache ?
- &env->cache_info->l3_cache : NULL,
- ecx, edx);
- } else {
- encode_cache_cpuid80000006(&legacy_l2_cache_amd,
- cpu->enable_l3_cache ?
- &legacy_l3_cache : NULL,
- ecx, edx);
- }
+ encode_cache_cpuid80000006(env->cache_info_amd.l2_cache,
+ cpu->enable_l3_cache ?
+ env->cache_info_amd.l3_cache : NULL,
+ ecx, edx);
break;
case 0x80000007:
*eax = 0;
@@ -4692,6 +4661,37 @@ static void x86_cpu_realizefn(DeviceState *dev, Error **errp)
cpu->phys_bits = 32;
}
}
+
+ /* Cache information initialization */
+ if (!cpu->legacy_cache) {
+ if (!xcc->cpu_def || !xcc->cpu_def->cache_info) {
+ char *name = x86_cpu_class_get_model_name(xcc);
+ error_setg(errp,
+ "CPU model '%s' doesn't support legacy-cache=off", name);
+ g_free(name);
+ return;
+ }
+ env->cache_info_cpuid2 = env->cache_info_cpuid4 = env->cache_info_amd =
+ *xcc->cpu_def->cache_info;
+ } else {
+ /* Build legacy cache information */
+ env->cache_info_cpuid2.l1d_cache = &legacy_l1d_cache;
+ env->cache_info_cpuid2.l1i_cache = &legacy_l1i_cache;
+ env->cache_info_cpuid2.l2_cache = &legacy_l2_cache_cpuid2;
+ env->cache_info_cpuid2.l3_cache = &legacy_l3_cache;
+
+ env->cache_info_cpuid4.l1d_cache = &legacy_l1d_cache;
+ env->cache_info_cpuid4.l1i_cache = &legacy_l1i_cache;
+ env->cache_info_cpuid4.l2_cache = &legacy_l2_cache;
+ env->cache_info_cpuid4.l3_cache = &legacy_l3_cache;
+
+ env->cache_info_amd.l1d_cache = &legacy_l1d_cache_amd;
+ env->cache_info_amd.l1i_cache = &legacy_l1i_cache_amd;
+ env->cache_info_amd.l2_cache = &legacy_l2_cache_amd;
+ env->cache_info_amd.l3_cache = &legacy_l3_cache;
+ }
+
+
cpu_exec_realizefn(cs, &local_err);
if (local_err != NULL) {
error_propagate(errp, local_err);
@@ -5175,11 +5175,10 @@ static Property x86_cpu_properties[] = {
DEFINE_PROP_BOOL("vmware-cpuid-freq", X86CPU, vmware_cpuid_freq, true),
DEFINE_PROP_BOOL("tcg-cpuid", X86CPU, expose_tcg, true),
/*
- * lecacy_cache defaults to CPU model being chosen. This is set in
- * x86_cpu_load_def based on cache_info which is initialized in
- * builtin_x86_defs
+ * legacy_cache defaults to true unless the CPU model provides its
+ * own cache information (see x86_cpu_load_def()).
*/
- DEFINE_PROP_BOOL("legacy-cache", X86CPU, legacy_cache, false),
+ DEFINE_PROP_BOOL("legacy-cache", X86CPU, legacy_cache, true),
/*
* From "Requirements for Implementing the Microsoft
diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index 8bc54d7..5098a12 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -1098,10 +1098,10 @@ typedef struct CPUCacheInfo {
typedef struct CPUCaches {
- CPUCacheInfo l1d_cache;
- CPUCacheInfo l1i_cache;
- CPUCacheInfo l2_cache;
- CPUCacheInfo l3_cache;
+ CPUCacheInfo *l1d_cache;
+ CPUCacheInfo *l1i_cache;
+ CPUCacheInfo *l2_cache;
+ CPUCacheInfo *l3_cache;
} CPUCaches;
typedef struct CPUX86State {
@@ -1292,7 +1292,11 @@ typedef struct CPUX86State {
/* Features that were explicitly enabled/disabled */
FeatureWordArray user_features;
uint32_t cpuid_model[12];
- CPUCaches *cache_info;
+ /* Cache information for CPUID. When legacy-cache=on, the cache data
+ * on each CPUID leaf will be different, because we keep compatibility
+ * with old QEMU versions.
+ */
+ CPUCaches cache_info_cpuid2, cache_info_cpuid4, cache_info_amd;
/* MTRRs */
uint64_t mtrr_fixed[11];
--
1.8.3.1
^ permalink raw reply related [flat|nested] 12+ messages in thread
* [Qemu-devel] [PATCH v10 2/5] i386: Populate AMD Processor Cache Information for cpuid 0x8000001D
2018-05-22 0:41 [Qemu-devel] [PATCH v10 0/5] i386: Enable TOPOEXT to support hyperthreading on AMD CPU Babu Moger
2018-05-22 0:41 ` [Qemu-devel] [PATCH v10 1/5] i386: Clean up cache CPUID code Babu Moger
@ 2018-05-22 0:41 ` Babu Moger
2018-05-22 1:32 ` Duran, Leo
2018-05-22 13:54 ` Eduardo Habkost
2018-05-22 0:41 ` [Qemu-devel] [PATCH v10 3/5] i386: Add support for CPUID_8000_001E for AMD Babu Moger
` (2 subsequent siblings)
4 siblings, 2 replies; 12+ messages in thread
From: Babu Moger @ 2018-05-22 0:41 UTC (permalink / raw)
To: mst, marcel.apfelbaum, pbonzini, rth, ehabkost, mtosatti
Cc: qemu-devel, kvm, babu.moger, kash, geoff
Add information for the cpuid 0x8000001D leaf. Populate cache topology
information for the different cache types (data cache, instruction cache, L2
and L3) supported by the 0x8000001D leaf. Please refer to the Processor
Programming Reference (PPR) for AMD Family 17h Models for more details.
Signed-off-by: Babu Moger <babu.moger@amd.com>
---
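For illustration, a guest can enumerate this leaf with a sketch like the one
below (not part of the patch; uses GCC's <cpuid.h>, with the field layout as
encoded by encode_cache_cpuid8000001d() in this patch):

#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx, i;

    /* Sub-leaves end when EAX[4:0] (cache type) reads 0 */
    for (i = 0; ; i++) {
        if (!__get_cpuid_count(0x8000001D, i, &eax, &ebx, &ecx, &edx) ||
            (eax & 0x1f) == 0) {
            break;
        }
        printf("L%u %s cache: line size %u, assoc %u, sets %u\n",
               (eax >> 5) & 0x7,              /* EAX[7:5] = cache level */
               (eax & 0x1f) == 1 ? "data" :
               (eax & 0x1f) == 2 ? "instruction" : "unified",
               (ebx & 0xfff) + 1,             /* EBX[11:0] = line size - 1 */
               ((ebx >> 22) & 0x3ff) + 1,     /* EBX[31:22] = assoc - 1 */
               ecx + 1);                      /* ECX = sets - 1 */
    }
    return 0;
}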
target/i386/cpu.c | 103 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
target/i386/kvm.c | 29 +++++++++++++--
2 files changed, 129 insertions(+), 3 deletions(-)
diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index d9773b6..1dd060a 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -336,6 +336,85 @@ static void encode_cache_cpuid80000006(CPUCacheInfo *l2,
}
}
+/* Definitions used for building CPUID Leaf 0x8000001D and 0x8000001E */
+/* Please refer AMD64 Architecture Programmer’s Manual Volume 3 */
+#define MAX_CCX 2
+#define MAX_CORES_IN_CCX 4
+#define MAX_NODES_EPYC 4
+#define MAX_CORES_IN_NODE 8
+
+/* Number of logical processors sharing L3 cache */
+#define NUM_SHARING_CACHE(threads, num_sharing) ((threads > 1) ? \
+ (((num_sharing - 1) * threads) + 1) : \
+ (num_sharing - 1))
+/*
+ * L3 Cache is shared between all the cores in a core complex.
+ * Maximum cores that can share L3 is 4.
+ */
+static int num_sharing_l3_cache(int nr_cores)
+{
+ int i, nodes = 1;
+
+ /* Check if we can fit all the cores in one CCX */
+ if (nr_cores <= MAX_CORES_IN_CCX) {
+ return nr_cores;
+ }
+ /*
+ * Figure out the number of nodes(or dies) required to build
+ * this config. Max cores in a node is 8
+ */
+ for (i = nodes; i <= MAX_NODES_EPYC; i++) {
+ if (nr_cores <= (i * MAX_CORES_IN_NODE)) {
+ nodes = i;
+ break;
+ }
+ /* We support nodes 1, 2, 4 */
+ if (i == 3) {
+ continue;
+ }
+ }
+ /* Spread the cores across all the CCXs and return max cores in a ccx */
+ return (nr_cores / (nodes * MAX_CCX)) +
+ ((nr_cores % (nodes * MAX_CCX)) ? 1 : 0);
+}
+
+/* Encode cache info for CPUID[8000001D] */
+static void encode_cache_cpuid8000001d(CPUCacheInfo *cache, CPUState *cs,
+ uint32_t *eax, uint32_t *ebx,
+ uint32_t *ecx, uint32_t *edx)
+{
+ uint32_t num_share_l3;
+ assert(cache->size == cache->line_size * cache->associativity *
+ cache->partitions * cache->sets);
+
+ *eax = CACHE_TYPE(cache->type) | CACHE_LEVEL(cache->level) |
+ (cache->self_init ? CACHE_SELF_INIT_LEVEL : 0);
+
+ /* L3 is shared among multiple cores */
+ if (cache->level == 3) {
+ num_share_l3 = num_sharing_l3_cache(cs->nr_cores);
+ *eax |= (NUM_SHARING_CACHE(cs->nr_threads, num_share_l3) << 14);
+ } else {
+ *eax |= ((cs->nr_threads - 1) << 14);
+ }
+
+ assert(cache->line_size > 0);
+ assert(cache->partitions > 0);
+ assert(cache->associativity > 0);
+ /* We don't implement fully-associative caches */
+ assert(cache->associativity < cache->sets);
+ *ebx = (cache->line_size - 1) |
+ ((cache->partitions - 1) << 12) |
+ ((cache->associativity - 1) << 22);
+
+ assert(cache->sets > 0);
+ *ecx = cache->sets - 1;
+
+ *edx = (cache->no_invd_sharing ? CACHE_NO_INVD_SHARING : 0) |
+ (cache->inclusive ? CACHE_INCLUSIVE : 0) |
+ (cache->complex_indexing ? CACHE_COMPLEX_IDX : 0);
+}
+
/*
* Definitions of the hardcoded cache entries we expose:
* These are legacy cache values. If there is a need to change any
@@ -4005,6 +4084,30 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, uint32_t count,
*edx = 0;
}
break;
+ case 0x8000001D:
+ *eax = 0;
+ switch (count) {
+ case 0: /* L1 dcache info */
+ encode_cache_cpuid8000001d(env->cache_info_amd.l1d_cache, cs,
+ eax, ebx, ecx, edx);
+ break;
+ case 1: /* L1 icache info */
+ encode_cache_cpuid8000001d(env->cache_info_amd.l1i_cache, cs,
+ eax, ebx, ecx, edx);
+ break;
+ case 2: /* L2 cache info */
+ encode_cache_cpuid8000001d(env->cache_info_amd.l2_cache, cs,
+ eax, ebx, ecx, edx);
+ break;
+ case 3: /* L3 cache info */
+ encode_cache_cpuid8000001d(env->cache_info_amd.l3_cache, cs,
+ eax, ebx, ecx, edx);
+ break;
+ default: /* end of info */
+ *eax = *ebx = *ecx = *edx = 0;
+ break;
+ }
+ break;
case 0xC0000000:
*eax = env->cpuid_xlevel2;
*ebx = 0;
diff --git a/target/i386/kvm.c b/target/i386/kvm.c
index d6666a4..a8bf7eb 100644
--- a/target/i386/kvm.c
+++ b/target/i386/kvm.c
@@ -979,9 +979,32 @@ int kvm_arch_init_vcpu(CPUState *cs)
}
c = &cpuid_data.entries[cpuid_i++];
- c->function = i;
- c->flags = 0;
- cpu_x86_cpuid(env, i, 0, &c->eax, &c->ebx, &c->ecx, &c->edx);
+ switch (i) {
+ case 0x8000001d:
+ /* Query for all AMD cache information leaves */
+ for (j = 0; ; j++) {
+ c->function = i;
+ c->flags = KVM_CPUID_FLAG_SIGNIFCANT_INDEX;
+ c->index = j;
+ cpu_x86_cpuid(env, i, j, &c->eax, &c->ebx, &c->ecx, &c->edx);
+
+ if (c->eax == 0) {
+ break;
+ }
+ if (cpuid_i == KVM_MAX_CPUID_ENTRIES) {
+ fprintf(stderr, "cpuid_data is full, no space for "
+ "cpuid(eax:0x%x,ecx:0x%x)\n", i, j);
+ abort();
+ }
+ c = &cpuid_data.entries[cpuid_i++];
+ }
+ break;
+ default:
+ c->function = i;
+ c->flags = 0;
+ cpu_x86_cpuid(env, i, 0, &c->eax, &c->ebx, &c->ecx, &c->edx);
+ break;
+ }
}
/* Call Centaur's CPUID instructions they are supported. */
--
1.8.3.1
^ permalink raw reply related [flat|nested] 12+ messages in thread
* [Qemu-devel] [PATCH v10 3/5] i386: Add support for CPUID_8000_001E for AMD
2018-05-22 0:41 [Qemu-devel] [PATCH v10 0/5] i386: Enable TOPOEXT to support hyperthreading on AMD CPU Babu Moger
2018-05-22 0:41 ` [Qemu-devel] [PATCH v10 1/5] i386: Clean up cache CPUID code Babu Moger
2018-05-22 0:41 ` [Qemu-devel] [PATCH v10 2/5] i386: Populate AMD Processor Cache Information for cpuid 0x8000001D Babu Moger
@ 2018-05-22 0:41 ` Babu Moger
2018-05-22 0:41 ` [Qemu-devel] [PATCH v10 4/5] i386: Enable TOPOEXT feature on AMD EPYC CPU Babu Moger
2018-05-22 0:41 ` [Qemu-devel] [PATCH v10 5/5] i386: Remove generic SMT thread check Babu Moger
4 siblings, 0 replies; 12+ messages in thread
From: Babu Moger @ 2018-05-22 0:41 UTC (permalink / raw)
To: mst, marcel.apfelbaum, pbonzini, rth, ehabkost, mtosatti
Cc: qemu-devel, kvm, babu.moger, kash, geoff
Add support for cpuid leaf CPUID_8000_001E. Build a config that closely
matches the underlying hardware. Please refer to the Processor Programming
Reference (PPR) for AMD Family 17h Models for more details.
Signed-off-by: Babu Moger <babu.moger@amd.com>
---
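As a worked example (hand-traced from the code in this patch, for
illustration): with -smp 16,sockets=1,cores=8,threads=2 the 8 cores land in
one node as two CCXs of 4 cores each. For the vCPU with core_id=5,
epyc_build_topology() yields node_id=0, ccx_id=1, core_id=1, so
EBX = (1 << 8) | (1 << 2) | 1 = 0x105 (threads - 1 in bits 15:8; node, ccx
and core ids below) and ECX = 0 (one node, socket 0).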
target/i386/cpu.c | 85 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 85 insertions(+)
diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index 1dd060a..d9ccaad 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -415,6 +415,86 @@ static void encode_cache_cpuid8000001d(CPUCacheInfo *cache, CPUState *cs,
(cache->complex_indexing ? CACHE_COMPLEX_IDX : 0);
}
+/* Data structure to hold the configuration info for a given core index */
+struct epyc_topo {
+ /* core complex id of the current core index */
+ int ccx_id;
+ /* new core id for this core index in the topology */
+ int core_id;
+ /* Node (or die) id of this core index */
+ int node_id;
+ /* Number of nodes(or dies) in this config, 0 based */
+ int num_nodes;
+};
+
+/*
+ * Build a configuration that closely matches the EPYC hardware
+ * nr_cores : Total number of cores in the config
+ * core_id : Core index of the current CPU
+ * topo : Data structure to hold all the config info for this core index
+ * Rules
+ * Max ccx in a node(die) = 2
+ * Max cores in a ccx = 4
+ * Max nodes(dies) = 4 (1, 2, 4)
+ * Max sockets = 2
+ * Maintain symmetry as much as possible
+ */
+static void epyc_build_topology(int nr_cores, int core_id,
+ struct epyc_topo *topo)
+{
+ int nodes = 1, cores_in_ccx;
+ int i;
+
+ /* Let's see if we can fit all the cores in one ccx */
+ if (nr_cores <= MAX_CORES_IN_CCX) {
+ cores_in_ccx = nr_cores;
+ goto topo;
+ }
+ /*
+ * Figure out the number of nodes(or dies) required to build
+ * this config. Max cores in a node is 8
+ */
+ for (i = nodes; i <= MAX_NODES_EPYC; i++) {
+ if (nr_cores <= (i * MAX_CORES_IN_NODE)) {
+ nodes = i;
+ break;
+ }
+ /* We support nodes 1, 2, 4 */
+ if (i == 3) {
+ continue;
+ }
+ }
+ /* Spread the cores across all the CCXs and return max cores in a ccx */
+ cores_in_ccx = (nr_cores / (nodes * MAX_CCX)) +
+ ((nr_cores % (nodes * MAX_CCX)) ? 1 : 0);
+
+topo:
+ topo->node_id = core_id / (cores_in_ccx * MAX_CCX);
+ topo->ccx_id = (core_id % (cores_in_ccx * MAX_CCX)) / cores_in_ccx;
+ topo->core_id = core_id % cores_in_ccx;
+ /* num_nodes is 0 based, return n - 1 */
+ topo->num_nodes = nodes - 1;
+}
+
+/* Encode topology info for CPUID[8000001E] */
+static void encode_topo_cpuid8000001e(CPUState *cs, X86CPU *cpu,
+ uint32_t *eax, uint32_t *ebx,
+ uint32_t *ecx, uint32_t *edx)
+{
+ struct epyc_topo topo = {0};
+
+ *eax = cpu->apic_id;
+ epyc_build_topology(cs->nr_cores, cpu->core_id, &topo);
+ if (cs->nr_threads - 1) {
+ *ebx = ((cs->nr_threads - 1) << 8) | (topo.node_id << 3) |
+ (topo.ccx_id << 2) | topo.core_id;
+ } else {
+ *ebx = (topo.node_id << 4) | (topo.ccx_id << 3) | topo.core_id;
+ }
+ *ecx = (topo.num_nodes << 8) | (cpu->socket_id << 2) | topo.node_id;
+ *edx = 0;
+}
+
/*
* Definitions of the hardcoded cache entries we expose:
* These are legacy cache values. If there is a need to change any
@@ -4108,6 +4188,11 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, uint32_t count,
break;
}
break;
+ case 0x8000001E:
+ assert(cpu->core_id <= 255);
+ encode_topo_cpuid8000001e(cs, cpu,
+ eax, ebx, ecx, edx);
+ break;
case 0xC0000000:
*eax = env->cpuid_xlevel2;
*ebx = 0;
--
1.8.3.1
^ permalink raw reply related [flat|nested] 12+ messages in thread
* [Qemu-devel] [PATCH v10 4/5] i386: Enable TOPOEXT feature on AMD EPYC CPU
2018-05-22 0:41 [Qemu-devel] [PATCH v10 0/5] i386: Enable TOPOEXT to support hyperthreading on AMD CPU Babu Moger
` (2 preceding siblings ...)
2018-05-22 0:41 ` [Qemu-devel] [PATCH v10 3/5] i386: Add support for CPUID_8000_001E for AMD Babu Moger
@ 2018-05-22 0:41 ` Babu Moger
2018-05-22 0:41 ` [Qemu-devel] [PATCH v10 5/5] i386: Remove generic SMT thread check Babu Moger
4 siblings, 0 replies; 12+ messages in thread
From: Babu Moger @ 2018-05-22 0:41 UTC (permalink / raw)
To: mst, marcel.apfelbaum, pbonzini, rth, ehabkost, mtosatti
Cc: qemu-devel, kvm, babu.moger, kash, geoff
Enable the TOPOEXT feature on the EPYC CPU model. This is required to
support hyperthreading on VM guests. Also extend xlevel to 0x8000001E.
Disable the TOPOEXT feature for legacy machine types, and also disable
it if the configured topology cannot be supported.
Signed-off-by: Babu Moger <babu.moger@amd.com>
---
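For illustration, the feature can be checked from inside a running guest,
e.g. (flag name as reported by Linux; output abbreviated):

  $ grep -c topoext /proc/cpuinfo
  16
  $ lscpu | grep 'Thread(s) per core'
  Thread(s) per core:    2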
include/hw/i386/pc.h | 4 ++++
target/i386/cpu.c | 37 +++++++++++++++++++++++++++++++++++--
2 files changed, 39 insertions(+), 2 deletions(-)
diff --git a/include/hw/i386/pc.h b/include/hw/i386/pc.h
index a0c269f..9c8db3d 100644
--- a/include/hw/i386/pc.h
+++ b/include/hw/i386/pc.h
@@ -302,6 +302,10 @@ bool e820_get_entry(int, uint32_t, uint64_t *, uint64_t *);
.driver = TYPE_X86_CPU,\
.property = "legacy-cache",\
.value = "on",\
+ },{\
+ .driver = "EPYC-" TYPE_X86_CPU,\
+ .property = "topoext",\
+ .value = "off",\
},
#define PC_COMPAT_2_11 \
diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index d9ccaad..d20b305 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -496,6 +496,20 @@ static void encode_topo_cpuid8000001e(CPUState *cs, X86CPU *cpu,
}
/*
+ * Check if we can support this topology
+ * Fail if the number of cores is beyond the supported config
+ * or nr_threads is more than 2
+ */
+static int verify_topology(int nr_cores, int nr_threads)
+{
+ if ((nr_cores > (MAX_CORES_IN_NODE * MAX_NODES_EPYC)) ||
+ (nr_threads > 2)) {
+ return 0;
+ }
+ return 1;
+}
+
+/*
* Definitions of the hardcoded cache entries we expose:
* These are legacy cache values. If there is a need to change any
* of these values please use builtin_x86_defs
@@ -2541,7 +2555,8 @@ static X86CPUDefinition builtin_x86_defs[] = {
.features[FEAT_8000_0001_ECX] =
CPUID_EXT3_OSVW | CPUID_EXT3_3DNOWPREFETCH |
CPUID_EXT3_MISALIGNSSE | CPUID_EXT3_SSE4A | CPUID_EXT3_ABM |
- CPUID_EXT3_CR8LEG | CPUID_EXT3_SVM | CPUID_EXT3_LAHF_LM,
+ CPUID_EXT3_CR8LEG | CPUID_EXT3_SVM | CPUID_EXT3_LAHF_LM |
+ CPUID_EXT3_TOPOEXT,
.features[FEAT_7_0_EBX] =
CPUID_7_0_EBX_FSGSBASE | CPUID_7_0_EBX_BMI1 | CPUID_7_0_EBX_AVX2 |
CPUID_7_0_EBX_SMEP | CPUID_7_0_EBX_BMI2 | CPUID_7_0_EBX_RDSEED |
@@ -2586,7 +2601,8 @@ static X86CPUDefinition builtin_x86_defs[] = {
.features[FEAT_8000_0001_ECX] =
CPUID_EXT3_OSVW | CPUID_EXT3_3DNOWPREFETCH |
CPUID_EXT3_MISALIGNSSE | CPUID_EXT3_SSE4A | CPUID_EXT3_ABM |
- CPUID_EXT3_CR8LEG | CPUID_EXT3_SVM | CPUID_EXT3_LAHF_LM,
+ CPUID_EXT3_CR8LEG | CPUID_EXT3_SVM | CPUID_EXT3_LAHF_LM |
+ CPUID_EXT3_TOPOEXT,
.features[FEAT_8000_0008_EBX] =
CPUID_8000_0008_EBX_IBPB,
.features[FEAT_7_0_EBX] =
@@ -4166,6 +4182,12 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, uint32_t count,
break;
case 0x8000001D:
*eax = 0;
+ /* Check if we can support this topology */
+ if (!verify_topology(cs->nr_cores, cs->nr_threads)) {
+ /* Disable the topology extension */
+ env->features[FEAT_8000_0001_ECX] &= ~CPUID_EXT3_TOPOEXT;
+ break;
+ }
switch (count) {
case 0: /* L1 dcache info */
encode_cache_cpuid8000001d(env->cache_info_amd.l1d_cache, cs,
@@ -4190,6 +4212,12 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, uint32_t count,
break;
case 0x8000001E:
assert(cpu->core_id <= 255);
+ /* Check if we can support this topology */
+ if (!verify_topology(cs->nr_cores, cs->nr_threads)) {
+ /* Disable the topology extension */
+ env->features[FEAT_8000_0001_ECX] &= ~CPUID_EXT3_TOPOEXT;
+ break;
+ }
encode_topo_cpuid8000001e(cs, cpu,
eax, ebx, ecx, edx);
break;
@@ -4654,6 +4682,11 @@ static void x86_cpu_expand_features(X86CPU *cpu, Error **errp)
x86_cpu_adjust_level(cpu, &env->cpuid_min_xlevel, 0x8000000A);
}
+ /* TOPOEXT feature requires 0x8000001E */
+ if (env->features[FEAT_8000_0001_ECX] & CPUID_EXT3_TOPOEXT) {
+ x86_cpu_adjust_level(cpu, &env->cpuid_min_xlevel, 0x8000001E);
+ }
+
/* SEV requires CPUID[0x8000001F] */
if (sev_enabled()) {
x86_cpu_adjust_level(cpu, &env->cpuid_min_xlevel, 0x8000001F);
--
1.8.3.1
^ permalink raw reply related [flat|nested] 12+ messages in thread
* [Qemu-devel] [PATCH v10 5/5] i386: Remove generic SMT thread check
2018-05-22 0:41 [Qemu-devel] [PATCH v10 0/5] i386: Enable TOPOEXT to support hyperthreading on AMD CPU Babu Moger
` (3 preceding siblings ...)
2018-05-22 0:41 ` [Qemu-devel] [PATCH v10 4/5] i386: Enable TOPOEXT feature on AMD EPYC CPU Babu Moger
@ 2018-05-22 0:41 ` Babu Moger
4 siblings, 0 replies; 12+ messages in thread
From: Babu Moger @ 2018-05-22 0:41 UTC (permalink / raw)
To: mst, marcel.apfelbaum, pbonzini, rth, ehabkost, mtosatti
Cc: qemu-devel, kvm, babu.moger, kash, geoff
Remove the generic non-Intel check while validating hyperthreading support.
Certain AMD CPUs can support hyperthreading now: any CPU family with the
TOPOEXT feature can support it.
Signed-off-by: Babu Moger <babu.moger@amd.com>
Tested-by: Geoffrey McRae <geoff@hostfission.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
---
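For illustration, after this patch the warning only fires for AMD models
without TOPOEXT (output paraphrased from the error_report below):

  $ qemu-system-x86_64 -enable-kvm -cpu Opteron_G5 -smp 4,cores=2,threads=2 ...
  qemu-system-x86_64: This family of AMD CPU doesn't support
  hyperthreading(2). Please configure -smp options properly.

The same -smp with -cpu EPYC (which now sets TOPOEXT) produces no warning.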
target/i386/cpu.c | 15 +++++++++------
1 file changed, 9 insertions(+), 6 deletions(-)
diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index d20b305..7eba8cc 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -4961,17 +4961,20 @@ static void x86_cpu_realizefn(DeviceState *dev, Error **errp)
qemu_init_vcpu(cs);
- /* Only Intel CPUs support hyperthreading. Even though QEMU fixes this
- * issue by adjusting CPUID_0000_0001_EBX and CPUID_8000_0008_ECX
- * based on inputs (sockets,cores,threads), it is still better to gives
+ /* Most Intel and certain AMD CPUs support hyperthreading. Even though QEMU
+ * fixes this issue by adjusting CPUID_0000_0001_EBX and CPUID_8000_0008_ECX
+ * based on inputs (sockets,cores,threads), it is still better to give
* users a warning.
*
* NOTE: the following code has to follow qemu_init_vcpu(). Otherwise
* cs->nr_threads hasn't be populated yet and the checking is incorrect.
*/
- if (!IS_INTEL_CPU(env) && cs->nr_threads > 1 && !ht_warned) {
- error_report("AMD CPU doesn't support hyperthreading. Please configure"
- " -smp options properly.");
+ if (IS_AMD_CPU(env) &&
+ !(env->features[FEAT_8000_0001_ECX] & CPUID_EXT3_TOPOEXT) &&
+ cs->nr_threads > 1 && !ht_warned) {
+ error_report("This family of AMD CPU doesn't support "
+ "hyperthreading(%d). Please configure -smp "
+ "options properly.", cs->nr_threads);
ht_warned = true;
}
--
1.8.3.1
^ permalink raw reply related [flat|nested] 12+ messages in thread
* Re: [Qemu-devel] [PATCH v10 2/5] i386: Populate AMD Processor Cache Information for cpuid 0x8000001D
2018-05-22 0:41 ` [Qemu-devel] [PATCH v10 2/5] i386: Populate AMD Processor Cache Information for cpuid 0x8000001D Babu Moger
@ 2018-05-22 1:32 ` Duran, Leo
2018-05-22 13:32 ` Moger, Babu
2018-05-22 13:54 ` Eduardo Habkost
1 sibling, 1 reply; 12+ messages in thread
From: Duran, Leo @ 2018-05-22 1:32 UTC (permalink / raw)
To: Moger, Babu, mst@redhat.com, marcel.apfelbaum@gmail.com,
pbonzini@redhat.com, rth@twiddle.net, ehabkost@redhat.com,
mtosatti@redhat.com
Cc: qemu-devel@nongnu.org, kvm@vger.kernel.org, kash@tripleback.net,
geoff@hostfission.com
Babu,
If num_sharing_l3_cache() uses MAX_NODES_EPYC, then that function is EPYC-specific.
An alternative would be to use a data member (e.g., max_nodes_per_socket) that gets initialized (via another helper function) to MAX_NODES_EPYC.
Basically, ideally the functions that return CPUID information should *not* use EPYC-specific macros, like MAX_NODES_EPYC.
Leo.
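A minimal sketch of that alternative (names invented here for illustration,
not existing QEMU code):

/* Per-model topology limits, initialized by a model-specific helper */
typedef struct X86TopoLimits {
    int max_nodes_per_socket;
    int max_cores_in_ccx;
    int max_cores_in_node;
} X86TopoLimits;

static void epyc_init_topo_limits(X86TopoLimits *limits)
{
    limits->max_nodes_per_socket = 4;  /* today's MAX_NODES_EPYC */
    limits->max_cores_in_ccx = 4;
    limits->max_cores_in_node = 8;
}

num_sharing_l3_cache() would then take the limits (or the CPU object) as a
parameter instead of referencing EPYC-specific macros directly.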
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [Qemu-devel] [PATCH v10 2/5] i386: Populate AMD Processor Cache Information for cpuid 0x8000001D
2018-05-22 1:32 ` Duran, Leo
@ 2018-05-22 13:32 ` Moger, Babu
2018-05-22 14:03 ` Eduardo Habkost
0 siblings, 1 reply; 12+ messages in thread
From: Moger, Babu @ 2018-05-22 13:32 UTC (permalink / raw)
To: Duran, Leo, mst@redhat.com, marcel.apfelbaum@gmail.com,
pbonzini@redhat.com, rth@twiddle.net, ehabkost@redhat.com,
mtosatti@redhat.com
Cc: qemu-devel@nongnu.org, kvm@vger.kernel.org, kash@tripleback.net,
geoff@hostfission.com
> -----Original Message-----
> From: Duran, Leo
> Sent: Monday, May 21, 2018 8:32 PM
> To: Moger, Babu <Babu.Moger@amd.com>; mst@redhat.com;
> marcel.apfelbaum@gmail.com; pbonzini@redhat.com; rth@twiddle.net;
> ehabkost@redhat.com; mtosatti@redhat.com
> Cc: qemu-devel@nongnu.org; kvm@vger.kernel.org; kash@tripleback.net;
> geoff@hostfission.com
> Subject: RE: [PATCH v10 2/5] i386: Populate AMD Processor Cache
> Information for cpuid 0x8000001D
>
> Babu,
>
> If num_sharing_l3_cache() uses MAX_NODES_EPYC, then that function is
> EPYC-specific.
>
> An alternative would be to use a data member (e.g., max_nodes_per_socket)
> that gets initialized (via another helper function) to MAX_NODES_EPYC.
Thanks Leo. Let me see how we can handle this. This requires changes in the
generic data structure, which I tried to avoid here. I will wait for all the
comments on the whole series before making this change. Note that right now
this feature is only enabled for EPYC. Yes, I know this could change in the
future.
> Basically, ideally the functions that return CPUID information do *not* use
> EPYC-specific macros, like MAX_NODES_EPYC.
>
> Leo.
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [Qemu-devel] [PATCH v10 2/5] i386: Populate AMD Processor Cache Information for cpuid 0x8000001D
2018-05-22 0:41 ` [Qemu-devel] [PATCH v10 2/5] i386: Populate AMD Processor Cache Information for cpuid 0x8000001D Babu Moger
2018-05-22 1:32 ` Duran, Leo
@ 2018-05-22 13:54 ` Eduardo Habkost
2018-05-23 18:16 ` Moger, Babu
1 sibling, 1 reply; 12+ messages in thread
From: Eduardo Habkost @ 2018-05-22 13:54 UTC (permalink / raw)
To: Babu Moger
Cc: mst, marcel.apfelbaum, pbonzini, rth, mtosatti, qemu-devel, kvm,
kash, geoff
On Mon, May 21, 2018 at 08:41:12PM -0400, Babu Moger wrote:
> Add information for cpuid 0x8000001D leaf. Populate cache topology information
> for different cache types(Data Cache, Instruction Cache, L2 and L3) supported
> by 0x8000001D leaf. Please refer Processor Programming Reference (PPR) for AMD
> Family 17h Model for more details.
>
> Signed-off-by: Babu Moger <babu.moger@amd.com>
> ---
> target/i386/cpu.c | 103 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
> target/i386/kvm.c | 29 +++++++++++++--
> 2 files changed, 129 insertions(+), 3 deletions(-)
>
> diff --git a/target/i386/cpu.c b/target/i386/cpu.c
> index d9773b6..1dd060a 100644
> --- a/target/i386/cpu.c
> +++ b/target/i386/cpu.c
> @@ -336,6 +336,85 @@ static void encode_cache_cpuid80000006(CPUCacheInfo *l2,
> }
> }
>
The number of variables here is large, so maybe we should
document what each one means so it's easier to review:
> +/* Definitions used for building CPUID Leaf 0x8000001D and 0x8000001E */
> +/* Please refer AMD64 Architecture Programmer’s Manual Volume 3 */
> +#define MAX_CCX 2
CCX is "core complex", right? A comment would be useful here.
> +#define MAX_CORES_IN_CCX 4
> +#define MAX_NODES_EPYC 4
A comment explaining why it's OK to use an EPYC-specific constant
here would be useful.
> +#define MAX_CORES_IN_NODE 8
> +
> +/* Number of logical processors sharing L3 cache */
> +#define NUM_SHARING_CACHE(threads, num_sharing) ((threads > 1) ? \
> + (((num_sharing - 1) * threads) + 1) : \
> + (num_sharing - 1))
This formula is confusing to me. If 4 cores are sharing the
cache and threads==1, 4 logical processors share the cache, and
we return 3. Sounds OK.
But, if 4 cores are sharing the cache and threads==2, the number
of logical processors sharing the cache is 8. We should return
7. The formula above returns (((4 - 1) * 2) + 1), which is
correct.
But isn't it simpler to write this as:
#define NUM_SHARING_CACHE(threads, num_sharing) \
(((num_sharing) * (threads)) - 1)
(Maybe the "- 1" part could be moved outside the macro for
clarity. See below.)
> +/*
> + * L3 Cache is shared between all the cores in a core complex.
> + * Maximum cores that can share L3 is 4.
> + */
> +static int num_sharing_l3_cache(int nr_cores)
Can we document what exactly this function is going to return?
This returns the number of cores sharing l3 cache, not the number
of logical processors, correct?
> +{
> + int i, nodes = 1;
> +
> + /* Check if we can fit all the cores in one CCX */
> + if (nr_cores <= MAX_CORES_IN_CCX) {
> + return nr_cores;
> + }
> + /*
> + * Figure out the number of nodes(or dies) required to build
> + * this config. Max cores in a node is 8
> + */
> + for (i = nodes; i <= MAX_NODES_EPYC; i++) {
> + if (nr_cores <= (i * MAX_CORES_IN_NODE)) {
> + nodes = i;
> + break;
> + }
> + /* We support nodes 1, 2, 4 */
> + if (i == 3) {
> + continue;
> + }
> + }
"continue" as the very last statement of a for loop does nothing,
so it looks like this could be written as:
for (i = nodes; i <= MAX_NODES_EPYC; i++) {
if (nr_cores <= (i * MAX_CORES_IN_NODE)) {
nodes = i;
break;
}
}
which in turn seems to be the same as:
nodes = DIV_ROUND_UP(nr_cores, MAX_CORES_IN_NODE);
nodes = MIN(nodes, MAX_NODES_EPYC)
But, is this really what we want here?
> + /* Spread the cores accros all the CCXs and return max cores in a ccx */
> + return (nr_cores / (nodes * MAX_CCX)) +
> + ((nr_cores % (nodes * MAX_CCX)) ? 1 : 0);
This also seems to be the same as DIV_ROUND_UP?
return DIV_ROUND_UP(nr_cores, nodes * MAX_CCX);
I didn't confirm the logic is valid, though, because I don't know
what we should expect. What is the expected return value of this
function in the following cases?
-smp 24,sockets=2,cores=12,threads=1
-smp 64,sockets=2,cores=32,threads=1
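(For reference, hand-tracing the posted code: cores=12 picks nodes=2
(12 <= 2 * 8) and returns 12 / (2 * 2) + (12 % 4 ? 1 : 0) = 3;
cores=32 picks nodes=4 and returns 32 / (4 * 2) = 4.)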
> +}
> +
> +/* Encode cache info for CPUID[8000001D] */
> +static void encode_cache_cpuid8000001d(CPUCacheInfo *cache, CPUState *cs,
> + uint32_t *eax, uint32_t *ebx,
> + uint32_t *ecx, uint32_t *edx)
> +{
> + uint32_t num_share_l3;
> + assert(cache->size == cache->line_size * cache->associativity *
> + cache->partitions * cache->sets);
> +
> + *eax = CACHE_TYPE(cache->type) | CACHE_LEVEL(cache->level) |
> + (cache->self_init ? CACHE_SELF_INIT_LEVEL : 0);
> +
> + /* L3 is shared among multiple cores */
> + if (cache->level == 3) {
> + num_share_l3 = num_sharing_l3_cache(cs->nr_cores);
> + *eax |= (NUM_SHARING_CACHE(cs->nr_threads, num_share_l3) << 14);
Considering that the line below has an explicit "- 1", I think
the "- 1" part could be moved outside the NUM_SHARING_CACHE
macro, and used explicitly here.
But then the NUM_SHARING_CACHE would be just a simple
multiplication, so this could be simply written as:
/* num_sharing_l3_cache() renamed to cores_sharing_l3_cache() */
uint32_t l3_cores = cores_sharing_l3_cache(cs->nr_cores);
uint32_t l3_logical_processors = l3_cores * cs->nr_threads;
*eax |= (l3_logical_processors - 1) << 14;
> + } else {
> + *eax |= ((cs->nr_threads - 1) << 14);
> + }
> +
> + assert(cache->line_size > 0);
> + assert(cache->partitions > 0);
> + assert(cache->associativity > 0);
> + /* We don't implement fully-associative caches */
> + assert(cache->associativity < cache->sets);
> + *ebx = (cache->line_size - 1) |
> + ((cache->partitions - 1) << 12) |
> + ((cache->associativity - 1) << 22);
> +
> + assert(cache->sets > 0);
> + *ecx = cache->sets - 1;
> +
> + *edx = (cache->no_invd_sharing ? CACHE_NO_INVD_SHARING : 0) |
> + (cache->inclusive ? CACHE_INCLUSIVE : 0) |
> + (cache->complex_indexing ? CACHE_COMPLEX_IDX : 0);
> +}
> +
> /*
> * Definitions of the hardcoded cache entries we expose:
> * These are legacy cache values. If there is a need to change any
> @@ -4005,6 +4084,30 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, uint32_t count,
> *edx = 0;
> }
> break;
> + case 0x8000001D:
> + *eax = 0;
> + switch (count) {
> + case 0: /* L1 dcache info */
> + encode_cache_cpuid8000001d(env->cache_info_amd.l1d_cache, cs,
> + eax, ebx, ecx, edx);
> + break;
> + case 1: /* L1 icache info */
> + encode_cache_cpuid8000001d(env->cache_info_amd.l1i_cache, cs,
> + eax, ebx, ecx, edx);
> + break;
> + case 2: /* L2 cache info */
> + encode_cache_cpuid8000001d(env->cache_info_amd.l2_cache, cs,
> + eax, ebx, ecx, edx);
> + break;
> + case 3: /* L3 cache info */
> + encode_cache_cpuid8000001d(env->cache_info_amd.l3_cache, cs,
> + eax, ebx, ecx, edx);
> + break;
> + default: /* end of info */
> + *eax = *ebx = *ecx = *edx = 0;
> + break;
> + }
> + break;
> case 0xC0000000:
> *eax = env->cpuid_xlevel2;
> *ebx = 0;
> diff --git a/target/i386/kvm.c b/target/i386/kvm.c
> index d6666a4..a8bf7eb 100644
> --- a/target/i386/kvm.c
> +++ b/target/i386/kvm.c
> @@ -979,9 +979,32 @@ int kvm_arch_init_vcpu(CPUState *cs)
> }
> c = &cpuid_data.entries[cpuid_i++];
>
> - c->function = i;
> - c->flags = 0;
> - cpu_x86_cpuid(env, i, 0, &c->eax, &c->ebx, &c->ecx, &c->edx);
> + switch (i) {
> + case 0x8000001d:
> + /* Query for all AMD cache information leaves */
> + for (j = 0; ; j++) {
> + c->function = i;
> + c->flags = KVM_CPUID_FLAG_SIGNIFCANT_INDEX;
> + c->index = j;
> + cpu_x86_cpuid(env, i, j, &c->eax, &c->ebx, &c->ecx, &c->edx);
> +
> + if (c->eax == 0) {
> + break;
> + }
> + if (cpuid_i == KVM_MAX_CPUID_ENTRIES) {
> + fprintf(stderr, "cpuid_data is full, no space for "
> + "cpuid(eax:0x%x,ecx:0x%x)\n", i, j);
> + abort();
> + }
> + c = &cpuid_data.entries[cpuid_i++];
> + }
> + break;
> + default:
> + c->function = i;
> + c->flags = 0;
> + cpu_x86_cpuid(env, i, 0, &c->eax, &c->ebx, &c->ecx, &c->edx);
> + break;
> + }
> }
>
> /* Call Centaur's CPUID instructions they are supported. */
> --
> 1.8.3.1
>
--
Eduardo
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [Qemu-devel] [PATCH v10 2/5] i386: Populate AMD Processor Cache Information for cpuid 0x8000001D
2018-05-22 13:32 ` Moger, Babu
@ 2018-05-22 14:03 ` Eduardo Habkost
2018-05-23 16:18 ` Moger, Babu
0 siblings, 1 reply; 12+ messages in thread
From: Eduardo Habkost @ 2018-05-22 14:03 UTC (permalink / raw)
To: Moger, Babu
Cc: Duran, Leo, mst@redhat.com, marcel.apfelbaum@gmail.com,
pbonzini@redhat.com, rth@twiddle.net, mtosatti@redhat.com,
qemu-devel@nongnu.org, kvm@vger.kernel.org, kash@tripleback.net,
geoff@hostfission.com
On Tue, May 22, 2018 at 01:32:52PM +0000, Moger, Babu wrote:
>
> > -----Original Message-----
> > From: Duran, Leo
> > Sent: Monday, May 21, 2018 8:32 PM
> > To: Moger, Babu <Babu.Moger@amd.com>; mst@redhat.com;
> > marcel.apfelbaum@gmail.com; pbonzini@redhat.com; rth@twiddle.net;
> > ehabkost@redhat.com; mtosatti@redhat.com
> > Cc: qemu-devel@nongnu.org; kvm@vger.kernel.org; kash@tripleback.net;
> > geoff@hostfission.com
> > Subject: RE: [PATCH v10 2/5] i386: Populate AMD Processor Cache
> > Information for cpuid 0x8000001D
> >
> > Babu,
> >
> > If num_sharing_l3_cache() uses MAX_NODES_EPYC, then that function is
> > EPYC-specific.
> >
> > An alternative would be to use a data member (e.g.,
> > max_nodes_per_socket) that gets initialized (via another helper function) to
> > MAX_NODES_EPYC.
>
> Thanks Leo. Let me see how we can handle this. This requires changes in the
> generic data structure, which I tried to avoid here. I will wait for all the
> comments on the whole series before making this change. Note that right now,
> this feature is only enabled for EPYC. Yes, I know this could change in the future.
We just need a reasonable default for now, and it can even be the
same value used on EPYC. This default just needs to generate
reasonable results for other cases that don't match real hardware
(like cores=32 or cores=12).
--
Eduardo
* Re: [Qemu-devel] [PATCH v10 2/5] i386: Populate AMD Processor Cache Information for cpuid 0x8000001D
2018-05-22 14:03 ` Eduardo Habkost
@ 2018-05-23 16:18 ` Moger, Babu
0 siblings, 0 replies; 12+ messages in thread
From: Moger, Babu @ 2018-05-23 16:18 UTC (permalink / raw)
To: Eduardo Habkost
Cc: Duran, Leo, mst@redhat.com, marcel.apfelbaum@gmail.com,
pbonzini@redhat.com, rth@twiddle.net, mtosatti@redhat.com,
qemu-devel@nongnu.org, kvm@vger.kernel.org, kash@tripleback.net,
geoff@hostfission.com
> -----Original Message-----
> From: Eduardo Habkost [mailto:ehabkost@redhat.com]
> Sent: Tuesday, May 22, 2018 9:04 AM
> To: Moger, Babu <Babu.Moger@amd.com>
> Cc: Duran, Leo <leo.duran@amd.com>; mst@redhat.com;
> marcel.apfelbaum@gmail.com; pbonzini@redhat.com; rth@twiddle.net;
> mtosatti@redhat.com; qemu-devel@nongnu.org; kvm@vger.kernel.org;
> kash@tripleback.net; geoff@hostfission.com
> Subject: Re: [PATCH v10 2/5] i386: Populate AMD Processor Cache
> Information for cpuid 0x8000001D
>
> On Tue, May 22, 2018 at 01:32:52PM +0000, Moger, Babu wrote:
> >
> > > -----Original Message-----
> > > From: Duran, Leo
> > > Sent: Monday, May 21, 2018 8:32 PM
> > > To: Moger, Babu <Babu.Moger@amd.com>; mst@redhat.com;
> > > marcel.apfelbaum@gmail.com; pbonzini@redhat.com; rth@twiddle.net;
> > > ehabkost@redhat.com; mtosatti@redhat.com
> > > Cc: qemu-devel@nongnu.org; kvm@vger.kernel.org; kash@tripleback.net;
> > > geoff@hostfission.com
> > > Subject: RE: [PATCH v10 2/5] i386: Populate AMD Processor Cache
> > > Information for cpuid 0x8000001D
> > >
> > > Babu,
> > >
> > > If num_sharing_l3_cache() uses MAX_NODES_EPYC, then that function is
> > > EPYC-specific.
> > >
> > > An alternative would be to use a data member (e.g.,
> > > max_nodes_per_socket) that gets initialized (via another helper
> > > function) to MAX_NODES_EPYC.
> >
> > Thanks Leo. Let me see how we can handle this. This requires changes in the
> > generic data structure, which I tried to avoid here. I will wait for all the
> > comments on the whole series before making this change. Note that right now,
> > this feature is only enabled for EPYC. Yes, I know this could change in the
> > future.
>
> We just need a reasonable default for now, and it can even be the
> same value used on EPYC. This default just needs to generate
> reasonable results for other cases that don't match real hardware
> (like cores=32 or cores=12).
Ok. Will change the name to be a bit more generic for now and keep the EPYC defaults.
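
A minimal sketch of the more generic shape Leo describes, assuming a
hypothetical struct and helper name (illustrative only, not from the series):

    /* Hypothetical names; only the pattern matters here. */
    typedef struct CpuTopoDefaults {
        int max_nodes_per_socket;
    } CpuTopoDefaults;

    static void cpu_topo_defaults_init(CpuTopoDefaults *info)
    {
        /* Keep the EPYC value as the generic default until another
         * CPU model needs something different. */
        info->max_nodes_per_socket = MAX_NODES_EPYC;
    }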
>
> --
> Eduardo
* Re: [Qemu-devel] [PATCH v10 2/5] i386: Populate AMD Processor Cache Information for cpuid 0x8000001D
2018-05-22 13:54 ` Eduardo Habkost
@ 2018-05-23 18:16 ` Moger, Babu
0 siblings, 0 replies; 12+ messages in thread
From: Moger, Babu @ 2018-05-23 18:16 UTC (permalink / raw)
To: Eduardo Habkost
Cc: mst@redhat.com, marcel.apfelbaum@gmail.com, pbonzini@redhat.com,
rth@twiddle.net, mtosatti@redhat.com, qemu-devel@nongnu.org,
kvm@vger.kernel.org, kash@tripleback.net, geoff@hostfission.com
Hi Eduardo, please see my comments below.
> -----Original Message-----
> From: Eduardo Habkost [mailto:ehabkost@redhat.com]
> Sent: Tuesday, May 22, 2018 8:54 AM
> To: Moger, Babu <Babu.Moger@amd.com>
> Cc: mst@redhat.com; marcel.apfelbaum@gmail.com; pbonzini@redhat.com;
> rth@twiddle.net; mtosatti@redhat.com; qemu-devel@nongnu.org;
> kvm@vger.kernel.org; kash@tripleback.net; geoff@hostfission.com
> Subject: Re: [PATCH v10 2/5] i386: Populate AMD Processor Cache
> Information for cpuid 0x8000001D
>
> On Mon, May 21, 2018 at 08:41:12PM -0400, Babu Moger wrote:
> > Add information for cpuid 0x8000001D leaf. Populate cache topology information
> > for different cache types (Data Cache, Instruction Cache, L2 and L3) supported
> > by the 0x8000001D leaf. Please refer to the Processor Programming Reference (PPR)
> > for AMD Family 17h Models for more details.
> >
> > Signed-off-by: Babu Moger <babu.moger@amd.com>
> > ---
> > target/i386/cpu.c | 103 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
> > target/i386/kvm.c | 29 +++++++++++++--
> > 2 files changed, 129 insertions(+), 3 deletions(-)
> >
> > diff --git a/target/i386/cpu.c b/target/i386/cpu.c
> > index d9773b6..1dd060a 100644
> > --- a/target/i386/cpu.c
> > +++ b/target/i386/cpu.c
> > @@ -336,6 +336,85 @@ static void encode_cache_cpuid80000006(CPUCacheInfo *l2,
> > }
> > }
> >
>
> The number of variables here is large, so maybe we should
> document what each one means so it's easier to review:
>
Sure. Will add more comments.
>
> > +/* Definitions used for building CPUID Leaf 0x8000001D and 0x8000001E */
> > +/* Please refer to the AMD64 Architecture Programmer's Manual Volume 3 */
> > +#define MAX_CCX 2
>
> CCX is "core complex", right? A comment would be useful here.
Yes. It is the core complex. Will add comments.
>
> > +#define MAX_CORES_IN_CCX 4
> > +#define MAX_NODES_EPYC 4
>
> A comment explaining why it's OK to use a EPYC-specific constant
> here would be useful.
Sure.
>
>
> > +#define MAX_CORES_IN_NODE 8
> > +
> > +/* Number of logical processors sharing L3 cache */
> > +#define NUM_SHARING_CACHE(threads, num_sharing) ((threads > 1) ? \
> > + (((num_sharing - 1) * threads) + 1) : \
> > + (num_sharing - 1))
>
> This formula is confusing to me. If 4 cores are sharing the
> cache and threads==1, 4 logical processors share the cache, and
> we return 3. Sounds OK.
>
> But, if 4 cores are sharing the cache and threads==2, the number
> of logical processors sharing the cache is 8. We should return
> 7. The formula above returns (((4 - 1) * 2) + 1), which is
> correct.
>
> But isn't it simpler to write this as:
>
> #define NUM_SHARING_CACHE(threads, num_sharing) \
> (((num_sharing) * (threads)) - 1)
>
>
> (Maybe the "- 1" part could be moved outside the macro for
> clarity. See below.)
Yes. If we move the -1 outside, then we can simplify it and we don't need this macro. Will change it.
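
A quick worked check of the equivalence, assuming the NUM_SHARING_CACHE
macro from the patch above is in scope (illustrative sketch, not patch
content):

    /* threads=1, 4 cores sharing: macro gives 4 - 1 = 3,
     * simplified form gives (4 * 1) - 1 = 3 */
    assert(NUM_SHARING_CACHE(1, 4) == (4 * 1) - 1);
    /* threads=2, 4 cores sharing: macro gives ((4 - 1) * 2) + 1 = 7,
     * simplified form gives (4 * 2) - 1 = 7 */
    assert(NUM_SHARING_CACHE(2, 4) == (4 * 2) - 1);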
>
>
> > +/*
> > + * L3 Cache is shared between all the cores in a core complex.
> > + * Maximum cores that can share L3 is 4.
> > + */
> > +static int num_sharing_l3_cache(int nr_cores)
>
> Can we document what exactly this function is going to return?
> This returns the number of cores sharing l3 cache, not the number
> of logical processors, correct?
Yes. It is the number of cores. Will fix it.
>
>
> > +{
> > + int i, nodes = 1;
> > +
> > + /* Check if we can fit all the cores in one CCX */
> > + if (nr_cores <= MAX_CORES_IN_CCX) {
> > + return nr_cores;
> > + }
> > + /*
> > + * Figure out the number of nodes (or dies) required to build
> > + * this config. Max cores in a node is 8
> > + */
> > + for (i = nodes; i <= MAX_NODES_EPYC; i++) {
> > + if (nr_cores <= (i * MAX_CORES_IN_NODE)) {
> > + nodes = i;
> > + break;
> > + }
> > + /* We support nodes 1, 2, 4 */
> > + if (i == 3) {
> > + continue;
> > + }
> > + }
>
> "continue" as the very last statement of a for loop does nothing,
> so it looks like this could be written as:
In real hardware, 3 nodes is not a valid configuration. I was trying to avoid 3 there. Yes, we can achieve this with DIV_ROUND_UP as below.
>
> for (i = nodes; i <= MAX_NODES_EPYC; i++) {
> if (nr_cores <= (i * MAX_CORES_IN_NODE)) {
> nodes = i;
> break;
> }
> }
>
> which in turn seems to be the same as:
>
> nodes = DIV_ROUND_UP(nr_cores, MAX_CORES_IN_NODE);
> nodes = MIN(nodes, MAX_NODES_EPYC)
>
> But, is this really what we want here?
The supported node counts are 1, 2 and 4. Hardware does not support 3. That is what I was trying to achieve there.
DIV_ROUND_UP will work with a check for 3: if it comes out as 3, bump nodes to 4. Will change it.
MIN(nodes, MAX_NODES_EPYC) is not required, as I have added a check in patch 4/5 to verify the topology (function verify_topology).
If we go beyond 4 nodes, I disable the topoext feature.
>
>
> > + /* Spread the cores across all the CCXs and return max cores in a ccx */
> > + return (nr_cores / (nodes * MAX_CCX)) +
> > + ((nr_cores % (nodes * MAX_CCX)) ? 1 : 0);
>
> This also seems to be the same as DIV_ROUND_UP?
>
> return DIV_ROUND_UP(nr_cores, nodes * MAX_CCX);
>
Yes. DIV_ROUND_UP will work.
> I didn't confirm the logic is valid, though, because I don't know
> what we should expect. What is the expected return value of this
> function in the following cases?
>
> -smp 24,sockets=2,cores=12,threads=1
This should return 3 (DIV_ROUND_UP(12, 2 * 2)). We can fit this in 2 nodes, with 4 core complexes. There will be 3 cores in each core complex.
> -smp 64,sockets=2,cores=32,threads=1
This should return 4 (DIV_ROUND_UP(32, 4 * 2)). We can fit it in 4 nodes with 8 core complexes in total. There will be 4 cores in each core complex.
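
Putting the pieces together, a sketch of the reworked helper implied by
this exchange (the DIV_ROUND_UP form and the 3 -> 4 node fixup are
assumptions about the next revision; configs needing more than 4 nodes
are rejected separately by verify_topology in patch 4/5):

    static int num_sharing_l3_cache(int nr_cores)
    {
        int nodes;

        /* All the cores fit in one core complex */
        if (nr_cores <= MAX_CORES_IN_CCX) {
            return nr_cores;
        }
        nodes = DIV_ROUND_UP(nr_cores, MAX_CORES_IN_NODE);
        if (nodes == 3) {
            nodes = 4; /* hardware supports 1, 2 or 4 nodes, never 3 */
        }
        /* Spread the cores across all CCXs; max cores in a CCX */
        return DIV_ROUND_UP(nr_cores, nodes * MAX_CCX);
    }

This reproduces both answers above: cores=12 gives nodes=2 and returns
DIV_ROUND_UP(12, 4) = 3; cores=32 gives nodes=4 and returns
DIV_ROUND_UP(32, 8) = 4.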
>
>
> > +}
> > +
> > +/* Encode cache info for CPUID[8000001D] */
> > +static void encode_cache_cpuid8000001d(CPUCacheInfo *cache, CPUState *cs,
> > + uint32_t *eax, uint32_t *ebx,
> > + uint32_t *ecx, uint32_t *edx)
> > +{
> > + uint32_t num_share_l3;
> > + assert(cache->size == cache->line_size * cache->associativity *
> > + cache->partitions * cache->sets);
> > +
> > + *eax = CACHE_TYPE(cache->type) | CACHE_LEVEL(cache->level) |
> > + (cache->self_init ? CACHE_SELF_INIT_LEVEL : 0);
> > +
> > + /* L3 is shared among multiple cores */
> > + if (cache->level == 3) {
> > + num_share_l3 = num_sharing_l3_cache(cs->nr_cores);
> > + *eax |= (NUM_SHARING_CACHE(cs->nr_threads, num_share_l3) << 14);
>
> Considering that the line below has an explicit "- 1", I think
> the "- 1" part could be moved outside the NUM_SHARING_CACHE
> macro, and used explicitly here.
>
> But then the NUM_SHARING_CACHE would be just a simple
> multiplication, so this could be simply written as:
>
> /* num_sharing_l3_cache() renamed to cores_sharing_l3_cache() */
> uint32_t l3_cores = cores_sharing_l3_cache(cs->nr_cores);
> uint32_t l3_logical_processors = l3_cores * cs->nr_threads;
> *eax |= (l3_logical_processors - 1) << 14;
Yes. Will make these changes.
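
Worked through for one concrete case (illustrative values, not from the
patch): with 4 cores sharing the L3 and 2 threads per core, 8 logical
processors share the cache, so EAX[25:14] encodes 7:

    uint32_t eax = 0;
    uint32_t l3_cores = 4;                   /* cores sharing the L3 */
    uint32_t nr_threads = 2;                 /* threads per core */
    uint32_t l3_lps = l3_cores * nr_threads; /* 8 logical processors */
    eax |= (l3_lps - 1) << 14;               /* EAX[25:14] = 7 */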
>
> > + } else {
> > + *eax |= ((cs->nr_threads - 1) << 14);
> > + }
> > +
> > + assert(cache->line_size > 0);
> > + assert(cache->partitions > 0);
> > + assert(cache->associativity > 0);
> > + /* We don't implement fully-associative caches */
> > + assert(cache->associativity < cache->sets);
> > + *ebx = (cache->line_size - 1) |
> > + ((cache->partitions - 1) << 12) |
> > + ((cache->associativity - 1) << 22);
> > +
> > + assert(cache->sets > 0);
> > + *ecx = cache->sets - 1;
> > +
> > + *edx = (cache->no_invd_sharing ? CACHE_NO_INVD_SHARING : 0) |
> > + (cache->inclusive ? CACHE_INCLUSIVE : 0) |
> > + (cache->complex_indexing ? CACHE_COMPLEX_IDX : 0);
> > +}
> > +
> > /*
> > * Definitions of the hardcoded cache entries we expose:
> > * These are legacy cache values. If there is a need to change any
> > @@ -4005,6 +4084,30 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, uint32_t count,
> > *edx = 0;
> > }
> > break;
> > + case 0x8000001D:
> > + *eax = 0;
> > + switch (count) {
> > + case 0: /* L1 dcache info */
> > + encode_cache_cpuid8000001d(env->cache_info_amd.l1d_cache, cs,
> > + eax, ebx, ecx, edx);
> > + break;
> > + case 1: /* L1 icache info */
> > + encode_cache_cpuid8000001d(env->cache_info_amd.l1i_cache, cs,
> > + eax, ebx, ecx, edx);
> > + break;
> > + case 2: /* L2 cache info */
> > + encode_cache_cpuid8000001d(env->cache_info_amd.l2_cache, cs,
> > + eax, ebx, ecx, edx);
> > + break;
> > + case 3: /* L3 cache info */
> > + encode_cache_cpuid8000001d(env->cache_info_amd.l3_cache, cs,
> > + eax, ebx, ecx, edx);
> > + break;
> > + default: /* end of info */
> > + *eax = *ebx = *ecx = *edx = 0;
> > + break;
> > + }
> > + break;
> > case 0xC0000000:
> > *eax = env->cpuid_xlevel2;
> > *ebx = 0;
> > diff --git a/target/i386/kvm.c b/target/i386/kvm.c
> > index d6666a4..a8bf7eb 100644
> > --- a/target/i386/kvm.c
> > +++ b/target/i386/kvm.c
> > @@ -979,9 +979,32 @@ int kvm_arch_init_vcpu(CPUState *cs)
> > }
> > c = &cpuid_data.entries[cpuid_i++];
> >
> > - c->function = i;
> > - c->flags = 0;
> > - cpu_x86_cpuid(env, i, 0, &c->eax, &c->ebx, &c->ecx, &c->edx);
> > + switch (i) {
> > + case 0x8000001d:
> > + /* Query for all AMD cache information leaves */
> > + for (j = 0; ; j++) {
> > + c->function = i;
> > + c->flags = KVM_CPUID_FLAG_SIGNIFCANT_INDEX;
> > + c->index = j;
> > + cpu_x86_cpuid(env, i, j, &c->eax, &c->ebx, &c->ecx, &c->edx);
> > +
> > + if (c->eax == 0) {
> > + break;
> > + }
> > + if (cpuid_i == KVM_MAX_CPUID_ENTRIES) {
> > + fprintf(stderr, "cpuid_data is full, no space for "
> > + "cpuid(eax:0x%x,ecx:0x%x)\n", i, j);
> > + abort();
> > + }
> > + c = &cpuid_data.entries[cpuid_i++];
> > + }
> > + break;
> > + default:
> > + c->function = i;
> > + c->flags = 0;
> > + cpu_x86_cpuid(env, i, 0, &c->eax, &c->ebx, &c->ecx, &c->edx);
> > + break;
> > + }
> > }
> >
> > /* Call Centaur's CPUID instructions if they are supported. */
> > --
> > 1.8.3.1
> >
>
> --
> Eduardo
Thread overview: 12+ messages
2018-05-22 0:41 [Qemu-devel] [PATCH v10 0/5] i386: Enable TOPOEXT to support hyperthreading on AMD CPU Babu Moger
2018-05-22 0:41 ` [Qemu-devel] [PATCH v10 1/5] i386: Clean up cache CPUID code Babu Moger
2018-05-22 0:41 ` [Qemu-devel] [PATCH v10 2/5] i386: Populate AMD Processor Cache Information for cpuid 0x8000001D Babu Moger
2018-05-22 1:32 ` Duran, Leo
2018-05-22 13:32 ` Moger, Babu
2018-05-22 14:03 ` Eduardo Habkost
2018-05-23 16:18 ` Moger, Babu
2018-05-22 13:54 ` Eduardo Habkost
2018-05-23 18:16 ` Moger, Babu
2018-05-22 0:41 ` [Qemu-devel] [PATCH v10 3/5] i386: Add support for CPUID_8000_001E for AMD Babu Moger
2018-05-22 0:41 ` [Qemu-devel] [PATCH v10 4/5] i386: Enable TOPOEXT feature on AMD EPYC CPU Babu Moger
2018-05-22 0:41 ` [Qemu-devel] [PATCH v10 5/5] i386: Remove generic SMT thread check Babu Moger