* [PATCH V3 00/13] target/i386: Misc PMU fixes and enabling
@ 2026-03-04 18:06 Zide Chen
2026-03-04 18:07 ` [PATCH V3 01/13] target/i386: Disable unsupported BTS for guest Zide Chen
` (12 more replies)
0 siblings, 13 replies; 25+ messages in thread
From: Zide Chen @ 2026-03-04 18:06 UTC (permalink / raw)
To: qemu-devel, kvm, Paolo Bonzini, Zhao Liu, Peter Xu, Fabiano Rosas,
Sandipan Das
Cc: Xiaoyao Li, Dongli Zhang, Dapeng Mi, Zide Chen
This series contains a set of fixes, cleanups, and improvements in
target/i386 PMU: legacy PEBS, Topdown metrics, and generic MSR
handling.
The patches are grouped into a single series for review convenience;
some are not tightly coupled and can be reviewed and applied
individually. For example, the PEBS-related changes and the Topdown
metrics patch could each be posted separately. However, they touch
closely related PMU and MSR code paths, and keeping them together
makes review easier and avoids potential merge conflicts.
Patch series overview:
Patches 1-6: Miscellaneous PMU/MSR fixes and cleanups.
Patches 7-8, 11-12: Complete legacy PEBS support in QEMU.
Patches 9-10: Refactoring in preparation for pebs-fmt support.
Patch 13: Add Topdown metrics feature support.
The KVM patch series for Topdown metrics support:
https://lore.kernel.org/kvm/20260226230606.146532-1-zide.chen@intel.com/T/#t
Changes since v2:
- Add new patch 13/13 to support Topdown metrics.
- Separate the adjustment of maximum PMU counters to patch 4/13, in
order not to bump PMU migration version_id twice.
- Rebase on top of the most recent mainline QEMU: d8a9d97317d0
- Remove MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR in patch 2/13.
- Do not support pebs-fmt=0.
- Fix the vmstate name of msr_ds_pebs.
- Misc fixes and cleanup.
Changes since v1:
- Add two new patches to clean up and refactor LBR format handling.
- Introduce a new pebs-fmt command-line option.
- Add a patch to avoid exposing PEBS capabilities when not enabled.
- Trivial fixes and cleanups.
v1: https://lore.kernel.org/qemu-devel/20260117011053.80723-1-zide.chen@intel.com/
v2: https://lore.kernel.org/qemu-devel/20260128231003.268981-1-zide.chen@intel.com/T/#t
Dapeng Mi (4):
target/i386: Don't save/restore PERF_GLOBAL_OVF_CTRL MSRs
target/i386: Support full-width writes for perf counters
target/i386: Add get/set/migrate support for legacy PEBS MSRs
target/i386: Add Topdown metrics feature support
Zide Chen (9):
target/i386: Disable unsupported BTS for guest
target/i386: Gate enable_pmu on kvm_enabled()
target/i386: Adjust maximum number of PMU counters
target/i386: Increase MSR_BUF_SIZE and split KVM_[GET/SET]_MSRS calls
target/i386: Make some PEBS features user-visible
target/i386: Clean up LBR format handling
target/i386: Refactor LBR format handling
target/i386: Add pebs-fmt CPU option
target/i386: Clean up Intel Debug Store feature dependencies
target/i386/cpu.c | 140 ++++++++++++++++++++---------
target/i386/cpu.h | 42 ++++++---
target/i386/kvm/kvm-cpu.c | 3 +
target/i386/kvm/kvm.c | 179 +++++++++++++++++++++++++++++++-------
target/i386/machine.c | 54 ++++++++++--
5 files changed, 328 insertions(+), 90 deletions(-)
--
2.53.0
^ permalink raw reply [flat|nested] 25+ messages in thread
* [PATCH V3 01/13] target/i386: Disable unsupported BTS for guest
2026-03-04 18:06 [PATCH V3 00/13] target/i386: Misc PMU fixes and enabling Zide Chen
@ 2026-03-04 18:07 ` Zide Chen
2026-03-04 18:07 ` [PATCH V3 02/13] target/i386: Don't save/restore PERF_GLOBAL_OVF_CTRL MSRs Zide Chen
` (11 subsequent siblings)
12 siblings, 0 replies; 25+ messages in thread
From: Zide Chen @ 2026-03-04 18:07 UTC (permalink / raw)
To: qemu-devel, kvm, Paolo Bonzini, Zhao Liu, Peter Xu, Fabiano Rosas,
Sandipan Das
Cc: Xiaoyao Li, Dongli Zhang, Dapeng Mi, Zide Chen
BTS (Branch Trace Store), enumerated by IA32_MISC_ENABLE.BTS_UNAVAILABLE
(bit 11), is deprecated and has been superseded by LBR and Intel PT.
KVM has yielded control of this bit to userspace since KVM commit
9fc222967a39 ("KVM: x86: Give host userspace full control of
MSR_IA32_MISC_ENABLES").
However, QEMU does not set this bit, which allows guests to write the
BTS and BTINT bits in IA32_DEBUGCTL. Since KVM doesn't support BTS,
this may lead to unexpected MSR access errors.
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
---
V3:
- Add two Reviewed-by.
V2:
- Address review comments.
- Remove mention of VMState version_id from the commit message.
---
target/i386/cpu.h | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index 9f222a0c9fe0..016fb1b30bbd 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -474,8 +474,11 @@ typedef enum X86Seg {
#define MSR_IA32_MISC_ENABLE 0x1a0
/* Indicates good rep/movs microcode on some processors: */
-#define MSR_IA32_MISC_ENABLE_DEFAULT 1
+#define MSR_IA32_MISC_ENABLE_FASTSTRING (1ULL << 0)
+#define MSR_IA32_MISC_ENABLE_BTS_UNAVAIL (1ULL << 11)
#define MSR_IA32_MISC_ENABLE_MWAIT (1ULL << 18)
+#define MSR_IA32_MISC_ENABLE_DEFAULT (MSR_IA32_MISC_ENABLE_FASTSTRING | \
+ MSR_IA32_MISC_ENABLE_BTS_UNAVAIL)
#define MSR_MTRRphysBase(reg) (0x200 + 2 * (reg))
#define MSR_MTRRphysMask(reg) (0x200 + 2 * (reg) + 1)
--
2.53.0
^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH V3 02/13] target/i386: Don't save/restore PERF_GLOBAL_OVF_CTRL MSRs
2026-03-04 18:06 [PATCH V3 00/13] target/i386: Misc PMU fixes and enabling Zide Chen
2026-03-04 18:07 ` [PATCH V3 01/13] target/i386: Disable unsupported BTS for guest Zide Chen
@ 2026-03-04 18:07 ` Zide Chen
2026-03-04 18:07 ` [PATCH V3 03/13] target/i386: Gate enable_pmu on kvm_enabled() Zide Chen
` (10 subsequent siblings)
12 siblings, 0 replies; 25+ messages in thread
From: Zide Chen @ 2026-03-04 18:07 UTC (permalink / raw)
To: qemu-devel, kvm, Paolo Bonzini, Zhao Liu, Peter Xu, Fabiano Rosas,
Sandipan Das
Cc: Xiaoyao Li, Dongli Zhang, Dapeng Mi, Zide Chen
From: Dapeng Mi <dapeng1.mi@linux.intel.com>
MSR_CORE_PERF_GLOBAL_OVF_CTRL and MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR
are write-only MSRs; reads of them always return zero.
Saving and restoring these MSRs is therefore unnecessary. Replace
VMSTATE_UINT64 with VMSTATE_UNUSED in the VMStateDescription to ignore
env.msr_global_ovf_ctrl during migration. This avoids the need to bump
version_id and does not introduce any migration incompatibility.
cc: Dongli Zhang <dongli.zhang@oracle.com>
cc: Sandipan Das <sandipan.das@amd.com>
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Co-developed-by: Zide Chen <zide.chen@intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
---
V3:
- Remove MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR.
---
target/i386/cpu.h | 3 ---
target/i386/kvm/kvm.c | 10 ----------
target/i386/machine.c | 4 ++--
3 files changed, 2 insertions(+), 15 deletions(-)
diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index 016fb1b30bbd..6d3e70395dbd 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -507,11 +507,9 @@ typedef enum X86Seg {
#define MSR_CORE_PERF_FIXED_CTR_CTRL 0x38d
#define MSR_CORE_PERF_GLOBAL_STATUS 0x38e
#define MSR_CORE_PERF_GLOBAL_CTRL 0x38f
-#define MSR_CORE_PERF_GLOBAL_OVF_CTRL 0x390
#define MSR_AMD64_PERF_CNTR_GLOBAL_STATUS 0xc0000300
#define MSR_AMD64_PERF_CNTR_GLOBAL_CTL 0xc0000301
-#define MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR 0xc0000302
#define MSR_K7_EVNTSEL0 0xc0010000
#define MSR_K7_PERFCTR0 0xc0010004
@@ -2102,7 +2100,6 @@ typedef struct CPUArchState {
uint64_t msr_fixed_ctr_ctrl;
uint64_t msr_global_ctrl;
uint64_t msr_global_status;
- uint64_t msr_global_ovf_ctrl;
uint64_t msr_fixed_counters[MAX_FIXED_COUNTERS];
uint64_t msr_gp_counters[MAX_GP_COUNTERS];
uint64_t msr_gp_evtsel[MAX_GP_COUNTERS];
diff --git a/target/i386/kvm/kvm.c b/target/i386/kvm/kvm.c
index 3b66ec8c42b2..1131c350d352 100644
--- a/target/i386/kvm/kvm.c
+++ b/target/i386/kvm/kvm.c
@@ -4207,8 +4207,6 @@ static int kvm_put_msrs(X86CPU *cpu, KvmPutState level)
if (pmu_version > 1) {
kvm_msr_entry_add(cpu, MSR_CORE_PERF_GLOBAL_STATUS,
env->msr_global_status);
- kvm_msr_entry_add(cpu, MSR_CORE_PERF_GLOBAL_OVF_CTRL,
- env->msr_global_ovf_ctrl);
/* Now start the PMU. */
kvm_msr_entry_add(cpu, MSR_CORE_PERF_FIXED_CTR_CTRL,
@@ -4252,8 +4250,6 @@ static int kvm_put_msrs(X86CPU *cpu, KvmPutState level)
if (pmu_version > 1) {
kvm_msr_entry_add(cpu, MSR_AMD64_PERF_CNTR_GLOBAL_STATUS,
env->msr_global_status);
- kvm_msr_entry_add(cpu, MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR,
- env->msr_global_ovf_ctrl);
kvm_msr_entry_add(cpu, MSR_AMD64_PERF_CNTR_GLOBAL_CTL,
env->msr_global_ctrl);
}
@@ -4769,7 +4765,6 @@ static int kvm_get_msrs(X86CPU *cpu)
kvm_msr_entry_add(cpu, MSR_CORE_PERF_FIXED_CTR_CTRL, 0);
kvm_msr_entry_add(cpu, MSR_CORE_PERF_GLOBAL_CTRL, 0);
kvm_msr_entry_add(cpu, MSR_CORE_PERF_GLOBAL_STATUS, 0);
- kvm_msr_entry_add(cpu, MSR_CORE_PERF_GLOBAL_OVF_CTRL, 0);
}
for (i = 0; i < num_pmu_fixed_counters; i++) {
kvm_msr_entry_add(cpu, MSR_CORE_PERF_FIXED_CTR0 + i, 0);
@@ -4812,7 +4807,6 @@ static int kvm_get_msrs(X86CPU *cpu)
if (pmu_version > 1) {
kvm_msr_entry_add(cpu, MSR_AMD64_PERF_CNTR_GLOBAL_CTL, 0);
kvm_msr_entry_add(cpu, MSR_AMD64_PERF_CNTR_GLOBAL_STATUS, 0);
- kvm_msr_entry_add(cpu, MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR, 0);
}
}
@@ -5135,10 +5129,6 @@ static int kvm_get_msrs(X86CPU *cpu)
case MSR_AMD64_PERF_CNTR_GLOBAL_STATUS:
env->msr_global_status = msrs[i].data;
break;
- case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
- case MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR:
- env->msr_global_ovf_ctrl = msrs[i].data;
- break;
case MSR_CORE_PERF_FIXED_CTR0 ... MSR_CORE_PERF_FIXED_CTR0 + MAX_FIXED_COUNTERS - 1:
env->msr_fixed_counters[index - MSR_CORE_PERF_FIXED_CTR0] = msrs[i].data;
break;
diff --git a/target/i386/machine.c b/target/i386/machine.c
index c9139612813b..1125c8a64ec5 100644
--- a/target/i386/machine.c
+++ b/target/i386/machine.c
@@ -666,7 +666,7 @@ static bool pmu_enable_needed(void *opaque)
int i;
if (env->msr_fixed_ctr_ctrl || env->msr_global_ctrl ||
- env->msr_global_status || env->msr_global_ovf_ctrl) {
+ env->msr_global_status) {
return true;
}
for (i = 0; i < MAX_FIXED_COUNTERS; i++) {
@@ -692,7 +692,7 @@ static const VMStateDescription vmstate_msr_architectural_pmu = {
VMSTATE_UINT64(env.msr_fixed_ctr_ctrl, X86CPU),
VMSTATE_UINT64(env.msr_global_ctrl, X86CPU),
VMSTATE_UINT64(env.msr_global_status, X86CPU),
- VMSTATE_UINT64(env.msr_global_ovf_ctrl, X86CPU),
+ VMSTATE_UNUSED(sizeof(uint64_t)),
VMSTATE_UINT64_ARRAY(env.msr_fixed_counters, X86CPU, MAX_FIXED_COUNTERS),
VMSTATE_UINT64_ARRAY(env.msr_gp_counters, X86CPU, MAX_GP_COUNTERS),
VMSTATE_UINT64_ARRAY(env.msr_gp_evtsel, X86CPU, MAX_GP_COUNTERS),
--
2.53.0
^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH V3 03/13] target/i386: Gate enable_pmu on kvm_enabled()
2026-03-04 18:06 [PATCH V3 00/13] target/i386: Misc PMU fixes and enabling Zide Chen
2026-03-04 18:07 ` [PATCH V3 01/13] target/i386: Disable unsupported BTS for guest Zide Chen
2026-03-04 18:07 ` [PATCH V3 02/13] target/i386: Don't save/restore PERF_GLOBAL_OVF_CTRL MSRs Zide Chen
@ 2026-03-04 18:07 ` Zide Chen
2026-03-04 18:07 ` [PATCH V3 04/13] target/i386: Adjust maximum number of PMU counters Zide Chen
` (9 subsequent siblings)
12 siblings, 0 replies; 25+ messages in thread
From: Zide Chen @ 2026-03-04 18:07 UTC (permalink / raw)
To: qemu-devel, kvm, Paolo Bonzini, Zhao Liu, Peter Xu, Fabiano Rosas,
Sandipan Das
Cc: Xiaoyao Li, Dongli Zhang, Dapeng Mi, Zide Chen
Guest PMU support requires KVM. Clear cpu->enable_pmu when KVM is not
enabled, so PMU-related code can rely solely on cpu->enable_pmu.
This reduces duplication and avoids bugs where one of the checks is
missed. For example, cpu_x86_cpuid() enables CPUID.0AH when
cpu->enable_pmu is set but does not check kvm_enabled(). This is
implicitly fixed by this patch:
if (cpu->enable_pmu) {
x86_cpu_get_supported_cpuid(0xA, count, eax, ebx, ecx, edx);
}
Also fix two places that check kvm_enabled() but not cpu->enable_pmu.
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
---
V2:
- Replace a tab with spaces.
---
target/i386/cpu.c | 9 ++++++---
target/i386/kvm/kvm.c | 4 ++--
2 files changed, 8 insertions(+), 5 deletions(-)
diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index 9b9ed2d1e38e..a69c3108f64b 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -8661,7 +8661,7 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, uint32_t count,
*ecx = 0;
*edx = 0;
if (!(env->features[FEAT_7_0_EBX] & CPUID_7_0_EBX_INTEL_PT) ||
- !kvm_enabled()) {
+ !cpu->enable_pmu) {
break;
}
@@ -9008,7 +9008,7 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, uint32_t count,
case 0x80000022:
*eax = *ebx = *ecx = *edx = 0;
/* AMD Extended Performance Monitoring and Debug */
- if (kvm_enabled() && cpu->enable_pmu &&
+ if (cpu->enable_pmu &&
(env->features[FEAT_8000_0022_EAX] & CPUID_8000_0022_EAX_PERFMON_V2)) {
*eax |= CPUID_8000_0022_EAX_PERFMON_V2;
*ebx |= kvm_arch_get_supported_cpuid(cs->kvm_state, index, count,
@@ -9630,7 +9630,7 @@ static bool x86_cpu_filter_features(X86CPU *cpu, bool verbose)
* are advertised by cpu_x86_cpuid(). Keep these two in sync.
*/
if ((env->features[FEAT_7_0_EBX] & CPUID_7_0_EBX_INTEL_PT) &&
- kvm_enabled()) {
+ cpu->enable_pmu) {
x86_cpu_get_supported_cpuid(0x14, 0,
&eax_0, &ebx_0, &ecx_0, &edx_0);
x86_cpu_get_supported_cpuid(0x14, 1,
@@ -9778,6 +9778,9 @@ static void x86_cpu_realizefn(DeviceState *dev, Error **errp)
Error *local_err = NULL;
unsigned requested_lbr_fmt;
+ if (!kvm_enabled())
+ cpu->enable_pmu = false;
+
#if defined(CONFIG_TCG) && !defined(CONFIG_USER_ONLY)
/* Use pc-relative instructions in system-mode */
tcg_cflags_set(cs, CF_PCREL);
diff --git a/target/i386/kvm/kvm.c b/target/i386/kvm/kvm.c
index 1131c350d352..144585df5ba6 100644
--- a/target/i386/kvm/kvm.c
+++ b/target/i386/kvm/kvm.c
@@ -4400,7 +4400,7 @@ static int kvm_put_msrs(X86CPU *cpu, KvmPutState level)
env->msr_xfd_err);
}
- if (kvm_enabled() && cpu->enable_pmu &&
+ if (cpu->enable_pmu &&
(env->features[FEAT_7_0_EDX] & CPUID_7_0_EDX_ARCH_LBR)) {
uint64_t depth;
int ret;
@@ -4912,7 +4912,7 @@ static int kvm_get_msrs(X86CPU *cpu)
kvm_msr_entry_add(cpu, MSR_IA32_XFD_ERR, 0);
}
- if (kvm_enabled() && cpu->enable_pmu &&
+ if (cpu->enable_pmu &&
(env->features[FEAT_7_0_EDX] & CPUID_7_0_EDX_ARCH_LBR)) {
uint64_t depth;
--
2.53.0
^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH V3 04/13] target/i386: Adjust maximum number of PMU counters
2026-03-04 18:06 [PATCH V3 00/13] target/i386: Misc PMU fixes and enabling Zide Chen
` (2 preceding siblings ...)
2026-03-04 18:07 ` [PATCH V3 03/13] target/i386: Gate enable_pmu on kvm_enabled() Zide Chen
@ 2026-03-04 18:07 ` Zide Chen
2026-03-06 3:02 ` Mi, Dapeng
2026-03-04 18:07 ` [PATCH V3 05/13] target/i386: Support full-width writes for perf counters Zide Chen
` (8 subsequent siblings)
12 siblings, 1 reply; 25+ messages in thread
From: Zide Chen @ 2026-03-04 18:07 UTC (permalink / raw)
To: qemu-devel, kvm, Paolo Bonzini, Zhao Liu, Peter Xu, Fabiano Rosas,
Sandipan Das
Cc: Xiaoyao Li, Dongli Zhang, Dapeng Mi, Zide Chen
Changing either MAX_GP_COUNTERS or MAX_FIXED_COUNTERS affects the
VMState layout and therefore requires bumping the migration version
IDs. Adjust both limits together to avoid repeated VMState version
bumps in follow-up patches.
To support full-width writes, QEMU needs to handle the alias MSRs
starting at 0x4c1. With the current limits, the alias range can
extend into MSR_MCG_EXT_CTL (0x4d0). Reducing MAX_GP_COUNTERS from 18
to 15 avoids the overlap while still leaving room for future expansion
beyond current hardware (which supports at most 10 GP counters).
Increase MAX_FIXED_COUNTERS to 7 to support additional fixed counters
(e.g. Topdown metric events).
With these changes, bump version_id to prevent migration to older
QEMU, and bump minimum_version_id to prevent migration from older
QEMU, which could otherwise result in VMState overflows.
Signed-off-by: Zide Chen <zide.chen@intel.com>
---
target/i386/cpu.h | 8 ++------
target/i386/machine.c | 4 ++--
2 files changed, 4 insertions(+), 8 deletions(-)
diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index 6d3e70395dbd..23d4ee13abfa 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -1749,12 +1749,8 @@ typedef struct {
#define CPU_NB_REGS CPU_NB_REGS32
#endif
-#define MAX_FIXED_COUNTERS 3
-/*
- * This formula is based on Intel's MSR. The current size also meets AMD's
- * needs.
- */
-#define MAX_GP_COUNTERS (MSR_IA32_PERF_STATUS - MSR_P6_EVNTSEL0)
+#define MAX_FIXED_COUNTERS 7
+#define MAX_GP_COUNTERS 15
#define NB_OPMASK_REGS 8
diff --git a/target/i386/machine.c b/target/i386/machine.c
index 1125c8a64ec5..7d08a05835fc 100644
--- a/target/i386/machine.c
+++ b/target/i386/machine.c
@@ -685,8 +685,8 @@ static bool pmu_enable_needed(void *opaque)
static const VMStateDescription vmstate_msr_architectural_pmu = {
.name = "cpu/msr_architectural_pmu",
- .version_id = 1,
- .minimum_version_id = 1,
+ .version_id = 2,
+ .minimum_version_id = 2,
.needed = pmu_enable_needed,
.fields = (const VMStateField[]) {
VMSTATE_UINT64(env.msr_fixed_ctr_ctrl, X86CPU),
--
2.53.0
^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH V3 05/13] target/i386: Support full-width writes for perf counters
2026-03-04 18:06 [PATCH V3 00/13] target/i386: Misc PMU fixes and enabling Zide Chen
` (3 preceding siblings ...)
2026-03-04 18:07 ` [PATCH V3 04/13] target/i386: Adjust maximum number of PMU counters Zide Chen
@ 2026-03-04 18:07 ` Zide Chen
2026-03-04 18:07 ` [PATCH V3 06/13] target/i386: Increase MSR_BUF_SIZE and split KVM_[GET/SET]_MSRS calls Zide Chen
` (7 subsequent siblings)
12 siblings, 0 replies; 25+ messages in thread
From: Zide Chen @ 2026-03-04 18:07 UTC (permalink / raw)
To: qemu-devel, kvm, Paolo Bonzini, Zhao Liu, Peter Xu, Fabiano Rosas,
Sandipan Das
Cc: Xiaoyao Li, Dongli Zhang, Dapeng Mi, Zide Chen
From: Dapeng Mi <dapeng1.mi@linux.intel.com>
If IA32_PERF_CAPABILITIES.FW_WRITE (bit 13) is set, each general-
purpose counter IA32_PMCi (starting at 0xc1) is accompanied by a
corresponding 64-bit alias MSR starting at 0x4c1 (IA32_A_PMC0).
The legacy IA32_PMCi MSRs are not full-width and their effective width
is determined by CPUID.0AH:EAX[23:16].
Since IA32_A_PMCi are architectural aliases, they can safely be used
for save/restore instead of the legacy IA32_PMCi MSRs when they are
supported.
Full-width write is a user-visible feature and can be disabled
individually.
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
---
V3:
- Move the MAX_GP_COUNTERS change and migrate version ID code to
[patch v3 4/13] to avoid bumping version IDs twice in one patch
series.
V2:
- Slightly improve the commit message wording.
- Update the comment for MSR_IA32_PMC0 definition.
---
target/i386/cpu.h | 3 +++
target/i386/kvm/kvm.c | 18 ++++++++++++++++--
2 files changed, 19 insertions(+), 2 deletions(-)
diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index 23d4ee13abfa..7c241a20420c 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -421,6 +421,7 @@ typedef enum X86Seg {
#define MSR_IA32_PERF_CAPABILITIES 0x345
#define PERF_CAP_LBR_FMT 0x3f
+#define PERF_CAP_FULL_WRITE (1U << 13)
#define MSR_IA32_TSX_CTRL 0x122
#define MSR_IA32_TSCDEADLINE 0x6e0
@@ -448,6 +449,8 @@ typedef enum X86Seg {
#define MSR_IA32_SGXLEPUBKEYHASH3 0x8f
#define MSR_P6_PERFCTR0 0xc1
+/* Alias MSR range for full-width general-purpose performance counters */
+#define MSR_IA32_PMC0 0x4c1
#define MSR_IA32_SMBASE 0x9e
#define MSR_SMI_COUNT 0x34
diff --git a/target/i386/kvm/kvm.c b/target/i386/kvm/kvm.c
index 144585df5ba6..39a67c58ac22 100644
--- a/target/i386/kvm/kvm.c
+++ b/target/i386/kvm/kvm.c
@@ -4187,6 +4187,12 @@ static int kvm_put_msrs(X86CPU *cpu, KvmPutState level)
}
if ((IS_INTEL_CPU(env) || IS_ZHAOXIN_CPU(env)) && pmu_version > 0) {
+ uint32_t perf_cntr_base = MSR_P6_PERFCTR0;
+
+ if (env->features[FEAT_PERF_CAPABILITIES] & PERF_CAP_FULL_WRITE) {
+ perf_cntr_base = MSR_IA32_PMC0;
+ }
+
if (pmu_version > 1) {
/* Stop the counter. */
kvm_msr_entry_add(cpu, MSR_CORE_PERF_FIXED_CTR_CTRL, 0);
@@ -4199,7 +4205,7 @@ static int kvm_put_msrs(X86CPU *cpu, KvmPutState level)
env->msr_fixed_counters[i]);
}
for (i = 0; i < num_pmu_gp_counters; i++) {
- kvm_msr_entry_add(cpu, MSR_P6_PERFCTR0 + i,
+ kvm_msr_entry_add(cpu, perf_cntr_base + i,
env->msr_gp_counters[i]);
kvm_msr_entry_add(cpu, MSR_P6_EVNTSEL0 + i,
env->msr_gp_evtsel[i]);
@@ -4761,6 +4767,11 @@ static int kvm_get_msrs(X86CPU *cpu)
}
if ((IS_INTEL_CPU(env) || IS_ZHAOXIN_CPU(env)) && pmu_version > 0) {
+ uint32_t perf_cntr_base = MSR_P6_PERFCTR0;
+
+ if (env->features[FEAT_PERF_CAPABILITIES] & PERF_CAP_FULL_WRITE) {
+ perf_cntr_base = MSR_IA32_PMC0;
+ }
if (pmu_version > 1) {
kvm_msr_entry_add(cpu, MSR_CORE_PERF_FIXED_CTR_CTRL, 0);
kvm_msr_entry_add(cpu, MSR_CORE_PERF_GLOBAL_CTRL, 0);
@@ -4770,7 +4781,7 @@ static int kvm_get_msrs(X86CPU *cpu)
kvm_msr_entry_add(cpu, MSR_CORE_PERF_FIXED_CTR0 + i, 0);
}
for (i = 0; i < num_pmu_gp_counters; i++) {
- kvm_msr_entry_add(cpu, MSR_P6_PERFCTR0 + i, 0);
+ kvm_msr_entry_add(cpu, perf_cntr_base + i, 0);
kvm_msr_entry_add(cpu, MSR_P6_EVNTSEL0 + i, 0);
}
}
@@ -5135,6 +5146,9 @@ static int kvm_get_msrs(X86CPU *cpu)
case MSR_P6_PERFCTR0 ... MSR_P6_PERFCTR0 + MAX_GP_COUNTERS - 1:
env->msr_gp_counters[index - MSR_P6_PERFCTR0] = msrs[i].data;
break;
+ case MSR_IA32_PMC0 ... MSR_IA32_PMC0 + MAX_GP_COUNTERS - 1:
+ env->msr_gp_counters[index - MSR_IA32_PMC0] = msrs[i].data;
+ break;
case MSR_P6_EVNTSEL0 ... MSR_P6_EVNTSEL0 + MAX_GP_COUNTERS - 1:
env->msr_gp_evtsel[index - MSR_P6_EVNTSEL0] = msrs[i].data;
break;
--
2.53.0
^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH V3 06/13] target/i386: Increase MSR_BUF_SIZE and split KVM_[GET/SET]_MSRS calls
2026-03-04 18:06 [PATCH V3 00/13] target/i386: Misc PMU fixes and enabling Zide Chen
` (4 preceding siblings ...)
2026-03-04 18:07 ` [PATCH V3 05/13] target/i386: Support full-width writes for perf counters Zide Chen
@ 2026-03-04 18:07 ` Zide Chen
2026-03-06 3:09 ` Mi, Dapeng
2026-03-04 18:07 ` [PATCH V3 07/13] target/i386: Add get/set/migrate support for legacy PEBS MSRs Zide Chen
` (6 subsequent siblings)
12 siblings, 1 reply; 25+ messages in thread
From: Zide Chen @ 2026-03-04 18:07 UTC (permalink / raw)
To: qemu-devel, kvm, Paolo Bonzini, Zhao Liu, Peter Xu, Fabiano Rosas,
Sandipan Das
Cc: Xiaoyao Li, Dongli Zhang, Dapeng Mi, Zide Chen
Newer Intel server CPUs support a large number of PMU MSRs. Currently,
QEMU allocates cpu->kvm_msr_buf as a single-page buffer, which is not
sufficient to hold all possible MSRs.
Increase MSR_BUF_SIZE to 8192 bytes, providing space for up to 511 MSRs.
This is sufficient even for the theoretical worst case, such as
architectural LBR with a depth of 64.
KVM_[GET/SET]_MSRS is limited to 255 MSRs per call. Raising this limit
to 511 would require changes in KVM and would introduce backward
compatibility issues. Instead, split requests into multiple
KVM_[GET/SET]_MSRS calls when the number of MSRs exceeds the API limit.
Signed-off-by: Zide Chen <zide.chen@intel.com>
---
v3:
- Address Dapeng's comments.
---
target/i386/kvm/kvm.c | 110 +++++++++++++++++++++++++++++++++++-------
1 file changed, 92 insertions(+), 18 deletions(-)
diff --git a/target/i386/kvm/kvm.c b/target/i386/kvm/kvm.c
index 39a67c58ac22..4ba54151320f 100644
--- a/target/i386/kvm/kvm.c
+++ b/target/i386/kvm/kvm.c
@@ -97,9 +97,12 @@
#define KVM_APIC_BUS_CYCLE_NS 1
#define KVM_APIC_BUS_FREQUENCY (1000000000ULL / KVM_APIC_BUS_CYCLE_NS)
-/* A 4096-byte buffer can hold the 8-byte kvm_msrs header, plus
- * 255 kvm_msr_entry structs */
-#define MSR_BUF_SIZE 4096
+/* An 8192-byte buffer can hold the 8-byte kvm_msrs header, plus
+ * 511 kvm_msr_entry structs */
+#define MSR_BUF_SIZE 8192
+
+/* Maximum number of MSRs in a single KVM_[GET/SET]_MSRS call. */
+#define KVM_MAX_IO_MSRS 255
typedef bool QEMURDMSRHandler(X86CPU *cpu, uint32_t msr, uint64_t *val);
typedef bool QEMUWRMSRHandler(X86CPU *cpu, uint32_t msr, uint64_t val);
@@ -4016,21 +4019,99 @@ static void kvm_msr_entry_add_perf(X86CPU *cpu, FeatureWordArray f)
}
}
-static int kvm_buf_set_msrs(X86CPU *cpu)
+static int __kvm_buf_set_msrs(X86CPU *cpu, struct kvm_msrs *msrs)
{
- int ret = kvm_vcpu_ioctl(CPU(cpu), KVM_SET_MSRS, cpu->kvm_msr_buf);
+ int ret = kvm_vcpu_ioctl(CPU(cpu), KVM_SET_MSRS, msrs);
if (ret < 0) {
return ret;
}
- if (ret < cpu->kvm_msr_buf->nmsrs) {
- struct kvm_msr_entry *e = &cpu->kvm_msr_buf->entries[ret];
+ if (ret < msrs->nmsrs) {
+ struct kvm_msr_entry *e = &msrs->entries[ret];
error_report("error: failed to set MSR 0x%" PRIx32 " to 0x%" PRIx64,
(uint32_t)e->index, (uint64_t)e->data);
}
- assert(ret == cpu->kvm_msr_buf->nmsrs);
- return 0;
+ assert(ret == msrs->nmsrs);
+ return ret;
+}
+
+static int __kvm_buf_get_msrs(X86CPU *cpu, struct kvm_msrs *msrs)
+{
+ int ret;
+
+ ret = kvm_vcpu_ioctl(CPU(cpu), KVM_GET_MSRS, msrs);
+ if (ret < 0) {
+ return ret;
+ }
+
+ if (ret < msrs->nmsrs) {
+ struct kvm_msr_entry *e = &msrs->entries[ret];
+ error_report("error: failed to get MSR 0x%" PRIx32,
+ (uint32_t)e->index);
+ }
+
+ assert(ret == msrs->nmsrs);
+ return ret;
+}
+
+static int kvm_buf_set_or_get_msrs(X86CPU *cpu, bool is_write)
+{
+ struct kvm_msr_entry *entries = cpu->kvm_msr_buf->entries;
+ struct kvm_msrs *buf = NULL;
+ int current, remaining, ret = 0;
+ size_t buf_size;
+
+ buf_size = KVM_MAX_IO_MSRS * sizeof(struct kvm_msr_entry) +
+ sizeof(struct kvm_msrs);
+ buf = g_malloc(buf_size);
+
+ remaining = cpu->kvm_msr_buf->nmsrs;
+ current = 0;
+ while (remaining) {
+ size_t size;
+
+ memset(buf, 0, buf_size);
+
+ if (remaining > KVM_MAX_IO_MSRS) {
+ buf->nmsrs = KVM_MAX_IO_MSRS;
+ } else {
+ buf->nmsrs = remaining;
+ }
+
+ size = buf->nmsrs * sizeof(entries[0]);
+ memcpy(buf->entries, &entries[current], size);
+
+ if (is_write) {
+ ret = __kvm_buf_set_msrs(cpu, buf);
+ } else {
+ ret = __kvm_buf_get_msrs(cpu, buf);
+ }
+
+ if (ret < 0) {
+ goto out;
+ }
+
+ if (!is_write)
+ memcpy(&entries[current], buf->entries, size);
+
+ current += buf->nmsrs;
+ remaining -= buf->nmsrs;
+ }
+
+out:
+ g_free(buf);
+ return ret < 0 ? ret : cpu->kvm_msr_buf->nmsrs;
+}
+
+static inline int kvm_buf_set_msrs(X86CPU *cpu)
+{
+ return kvm_buf_set_or_get_msrs(cpu, true);
+}
+
+static inline int kvm_buf_get_msrs(X86CPU *cpu)
+{
+ return kvm_buf_set_or_get_msrs(cpu, false);
}
static void kvm_init_msrs(X86CPU *cpu)
@@ -4066,7 +4147,7 @@ static void kvm_init_msrs(X86CPU *cpu)
if (has_msr_ucode_rev) {
kvm_msr_entry_add(cpu, MSR_IA32_UCODE_REV, cpu->ucode_rev);
}
- assert(kvm_buf_set_msrs(cpu) == 0);
+ kvm_buf_set_msrs(cpu);
}
static int kvm_put_msrs(X86CPU *cpu, KvmPutState level)
@@ -4959,18 +5040,11 @@ static int kvm_get_msrs(X86CPU *cpu)
}
}
- ret = kvm_vcpu_ioctl(CPU(cpu), KVM_GET_MSRS, cpu->kvm_msr_buf);
+ ret = kvm_buf_get_msrs(cpu);
if (ret < 0) {
return ret;
}
- if (ret < cpu->kvm_msr_buf->nmsrs) {
- struct kvm_msr_entry *e = &cpu->kvm_msr_buf->entries[ret];
- error_report("error: failed to get MSR 0x%" PRIx32,
- (uint32_t)e->index);
- }
-
- assert(ret == cpu->kvm_msr_buf->nmsrs);
/*
* MTRR masks: Each mask consists of 5 parts
* a 10..0: must be zero
--
2.53.0
^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH V3 07/13] target/i386: Add get/set/migrate support for legacy PEBS MSRs
2026-03-04 18:06 [PATCH V3 00/13] target/i386: Misc PMU fixes and enabling Zide Chen
` (5 preceding siblings ...)
2026-03-04 18:07 ` [PATCH V3 06/13] target/i386: Increase MSR_BUF_SIZE and split KVM_[GET/SET]_MSRS calls Zide Chen
@ 2026-03-04 18:07 ` Zide Chen
2026-03-06 3:17 ` Mi, Dapeng
2026-03-04 18:07 ` [PATCH V3 08/13] target/i386: Make some PEBS features user-visible Zide Chen
` (5 subsequent siblings)
12 siblings, 1 reply; 25+ messages in thread
From: Zide Chen @ 2026-03-04 18:07 UTC (permalink / raw)
To: qemu-devel, kvm, Paolo Bonzini, Zhao Liu, Peter Xu, Fabiano Rosas,
Sandipan Das
Cc: Xiaoyao Li, Dongli Zhang, Dapeng Mi, Zide Chen
From: Dapeng Mi <dapeng1.mi@linux.intel.com>
Legacy DS-based PEBS relies on IA32_DS_AREA and IA32_PEBS_ENABLE to
take snapshots of a subset of the machine registers into the Intel
Debug Store area.
Adaptive PEBS introduces MSR_PEBS_DATA_CFG to be able to capture only
the data of interest, which is enumerated via bit 14 (PEBS_BASELINE)
of IA32_PERF_CAPABILITIES.
QEMU must save, restore, and migrate these MSRs when legacy PEBS is
enabled. Although the three MSRs are not all available under the same
conditions, keeping them in a single vmstate subsection is still valid
and simplifies the implementation.
Originally-by: Luwei Kang <luwei.kang@intel.com>
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Co-developed-by: Zide Chen <zide.chen@intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
---
V3:
- Add the missing Originally-by tag to credit Luwei.
- Fix the vmstate name of msr_ds_pebs.
- Fix the criteria for determining availability of IA32_PEBS_ENABLE
and MSR_PEBS_DATA_CFG.
- Change title to cover all aspects of what this patch does.
- Re-work the commit messages.
---
target/i386/cpu.h | 10 ++++++++++
target/i386/kvm/kvm.c | 29 +++++++++++++++++++++++++++++
target/i386/machine.c | 27 ++++++++++++++++++++++++++-
3 files changed, 65 insertions(+), 1 deletion(-)
diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index 7c241a20420c..3a10f3242329 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -422,6 +422,7 @@ typedef enum X86Seg {
#define MSR_IA32_PERF_CAPABILITIES 0x345
#define PERF_CAP_LBR_FMT 0x3f
#define PERF_CAP_FULL_WRITE (1U << 13)
+#define PERF_CAP_PEBS_BASELINE (1U << 14)
#define MSR_IA32_TSX_CTRL 0x122
#define MSR_IA32_TSCDEADLINE 0x6e0
@@ -479,6 +480,7 @@ typedef enum X86Seg {
/* Indicates good rep/movs microcode on some processors: */
#define MSR_IA32_MISC_ENABLE_FASTSTRING (1ULL << 0)
#define MSR_IA32_MISC_ENABLE_BTS_UNAVAIL (1ULL << 11)
+#define MSR_IA32_MISC_ENABLE_PEBS_UNAVAIL (1ULL << 12)
#define MSR_IA32_MISC_ENABLE_MWAIT (1ULL << 18)
#define MSR_IA32_MISC_ENABLE_DEFAULT (MSR_IA32_MISC_ENABLE_FASTSTRING | \
MSR_IA32_MISC_ENABLE_BTS_UNAVAIL)
@@ -514,6 +516,11 @@ typedef enum X86Seg {
#define MSR_AMD64_PERF_CNTR_GLOBAL_STATUS 0xc0000300
#define MSR_AMD64_PERF_CNTR_GLOBAL_CTL 0xc0000301
+/* Legacy DS based PEBS MSRs */
+#define MSR_IA32_PEBS_ENABLE 0x3f1
+#define MSR_PEBS_DATA_CFG 0x3f2
+#define MSR_IA32_DS_AREA 0x600
+
#define MSR_K7_EVNTSEL0 0xc0010000
#define MSR_K7_PERFCTR0 0xc0010004
#define MSR_F15H_PERF_CTL0 0xc0010200
@@ -2099,6 +2106,9 @@ typedef struct CPUArchState {
uint64_t msr_fixed_ctr_ctrl;
uint64_t msr_global_ctrl;
uint64_t msr_global_status;
+ uint64_t msr_ds_area;
+ uint64_t msr_pebs_data_cfg;
+ uint64_t msr_pebs_enable;
uint64_t msr_fixed_counters[MAX_FIXED_COUNTERS];
uint64_t msr_gp_counters[MAX_GP_COUNTERS];
uint64_t msr_gp_evtsel[MAX_GP_COUNTERS];
diff --git a/target/i386/kvm/kvm.c b/target/i386/kvm/kvm.c
index 4ba54151320f..8c4564bcbb9e 100644
--- a/target/i386/kvm/kvm.c
+++ b/target/i386/kvm/kvm.c
@@ -4280,6 +4280,16 @@ static int kvm_put_msrs(X86CPU *cpu, KvmPutState level)
kvm_msr_entry_add(cpu, MSR_CORE_PERF_GLOBAL_CTRL, 0);
}
+ if (env->features[FEAT_1_EDX] & CPUID_DTS) {
+ kvm_msr_entry_add(cpu, MSR_IA32_DS_AREA, env->msr_ds_area);
+ }
+ if (!(env->msr_ia32_misc_enable & MSR_IA32_MISC_ENABLE_PEBS_UNAVAIL)) {
+ kvm_msr_entry_add(cpu, MSR_IA32_PEBS_ENABLE, env->msr_pebs_enable);
+ }
+ if (env->features[FEAT_PERF_CAPABILITIES] & PERF_CAP_PEBS_BASELINE) {
+ kvm_msr_entry_add(cpu, MSR_PEBS_DATA_CFG, env->msr_pebs_data_cfg);
+ }
+
/* Set the counter values. */
for (i = 0; i < num_pmu_fixed_counters; i++) {
kvm_msr_entry_add(cpu, MSR_CORE_PERF_FIXED_CTR0 + i,
@@ -4900,6 +4910,16 @@ static int kvm_get_msrs(X86CPU *cpu)
kvm_msr_entry_add(cpu, MSR_AMD64_PERF_CNTR_GLOBAL_CTL, 0);
kvm_msr_entry_add(cpu, MSR_AMD64_PERF_CNTR_GLOBAL_STATUS, 0);
}
+
+ if (env->features[FEAT_1_EDX] & CPUID_DTS) {
+ kvm_msr_entry_add(cpu, MSR_IA32_DS_AREA, 0);
+ }
+ if (!(env->msr_ia32_misc_enable & MSR_IA32_MISC_ENABLE_PEBS_UNAVAIL)) {
+ kvm_msr_entry_add(cpu, MSR_IA32_PEBS_ENABLE, 0);
+ }
+ if (env->features[FEAT_PERF_CAPABILITIES] & PERF_CAP_PEBS_BASELINE) {
+ kvm_msr_entry_add(cpu, MSR_PEBS_DATA_CFG, 0);
+ }
}
if (env->mcg_cap) {
@@ -5241,6 +5261,15 @@ static int kvm_get_msrs(X86CPU *cpu)
env->msr_gp_evtsel[index] = msrs[i].data;
}
break;
+ case MSR_IA32_DS_AREA:
+ env->msr_ds_area = msrs[i].data;
+ break;
+ case MSR_PEBS_DATA_CFG:
+ env->msr_pebs_data_cfg = msrs[i].data;
+ break;
+ case MSR_IA32_PEBS_ENABLE:
+ env->msr_pebs_enable = msrs[i].data;
+ break;
case HV_X64_MSR_HYPERCALL:
env->msr_hv_hypercall = msrs[i].data;
break;
diff --git a/target/i386/machine.c b/target/i386/machine.c
index 7d08a05835fc..5cff5d5a9db5 100644
--- a/target/i386/machine.c
+++ b/target/i386/machine.c
@@ -659,6 +659,27 @@ static const VMStateDescription vmstate_msr_ia32_feature_control = {
}
};
+static bool ds_pebs_enabled(void *opaque)
+{
+ X86CPU *cpu = opaque;
+ CPUX86State *env = &cpu->env;
+
+ return (env->msr_ds_area || env->msr_pebs_enable ||
+ env->msr_pebs_data_cfg);
+}
+
+static const VMStateDescription vmstate_msr_ds_pebs = {
+ .name = "cpu/msr_architectural_pmu/msr_ds_pebs",
+ .version_id = 1,
+ .minimum_version_id = 1,
+ .needed = ds_pebs_enabled,
+ .fields = (const VMStateField[]){
+ VMSTATE_UINT64(env.msr_ds_area, X86CPU),
+ VMSTATE_UINT64(env.msr_pebs_data_cfg, X86CPU),
+ VMSTATE_UINT64(env.msr_pebs_enable, X86CPU),
+ VMSTATE_END_OF_LIST()}
+};
+
static bool pmu_enable_needed(void *opaque)
{
X86CPU *cpu = opaque;
@@ -697,7 +718,11 @@ static const VMStateDescription vmstate_msr_architectural_pmu = {
VMSTATE_UINT64_ARRAY(env.msr_gp_counters, X86CPU, MAX_GP_COUNTERS),
VMSTATE_UINT64_ARRAY(env.msr_gp_evtsel, X86CPU, MAX_GP_COUNTERS),
VMSTATE_END_OF_LIST()
- }
+ },
+ .subsections = (const VMStateDescription * const []) {
+ &vmstate_msr_ds_pebs,
+ NULL,
+ },
};
static bool mpx_needed(void *opaque)
--
2.53.0
* [PATCH V3 08/13] target/i386: Make some PEBS features user-visible
2026-03-04 18:06 [PATCH V3 00/13] target/i386: Misc PMU fixes and enabling Zide Chen
` (6 preceding siblings ...)
2026-03-04 18:07 ` [PATCH V3 07/13] target/i386: Add get/set/migrate support for legacy PEBS MSRs Zide Chen
@ 2026-03-04 18:07 ` Zide Chen
2026-03-06 3:25 ` Mi, Dapeng
2026-03-04 18:07 ` [PATCH V3 09/13] target/i386: Clean up LBR format handling Zide Chen
` (4 subsequent siblings)
12 siblings, 1 reply; 25+ messages in thread
From: Zide Chen @ 2026-03-04 18:07 UTC (permalink / raw)
To: qemu-devel, kvm, Paolo Bonzini, Zhao Liu, Peter Xu, Fabiano Rosas,
Sandipan Das
Cc: Xiaoyao Li, Dongli Zhang, Dapeng Mi, Zide Chen
Populate selected PEBS feature names in FEAT_PERF_CAPABILITIES to make
the corresponding bits user-visible CPU feature knobs, allowing them to
be explicitly enabled or disabled via -cpu +/-<feature>.
Once named, these bits become part of the guest CPU configuration
contract. If a VM is configured with such a feature enabled, migration
to a destination that does not support the feature may fail, as the
destination cannot honor the guest-visible CPU model.
The PEBS_FMT bits are not exposed, as target/i386 currently does not
support multi-bit CPU properties.
Co-developed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
---
V2:
- Add the missing comma after "pebs-arch-reg".
- Simplify the PEBS_FMT description in the commit message.
---
target/i386/cpu.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index a69c3108f64b..89691fba45e1 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -1618,10 +1618,10 @@ FeatureWordInfo feature_word_info[FEATURE_WORDS] = {
.type = MSR_FEATURE_WORD,
.feat_names = {
NULL, NULL, NULL, NULL,
+ NULL, NULL, "pebs-trap", "pebs-arch-reg",
NULL, NULL, NULL, NULL,
- NULL, NULL, NULL, NULL,
- NULL, "full-width-write", NULL, NULL,
- NULL, NULL, NULL, NULL,
+ NULL, "full-width-write", "pebs-baseline", NULL,
+ NULL, "pebs-timing-info", NULL, NULL,
NULL, NULL, NULL, NULL,
NULL, NULL, NULL, NULL,
NULL, NULL, NULL, NULL,
--
2.53.0
* [PATCH V3 09/13] target/i386: Clean up LBR format handling
2026-03-04 18:06 [PATCH V3 00/13] target/i386: Misc PMU fixes and enabling Zide Chen
` (7 preceding siblings ...)
2026-03-04 18:07 ` [PATCH V3 08/13] target/i386: Make some PEBS features user-visible Zide Chen
@ 2026-03-04 18:07 ` Zide Chen
2026-03-04 18:07 ` [PATCH V3 10/13] target/i386: Refactor " Zide Chen
` (3 subsequent siblings)
12 siblings, 0 replies; 25+ messages in thread
From: Zide Chen @ 2026-03-04 18:07 UTC (permalink / raw)
To: qemu-devel, kvm, Paolo Bonzini, Zhao Liu, Peter Xu, Fabiano Rosas,
Sandipan Das
Cc: Xiaoyao Li, Dongli Zhang, Dapeng Mi, Zide Chen
Since the lbr-fmt property is masked with PERF_CAP_LBR_FMT in
DEFINE_PROP_UINT64_CHECKMASK(), there is no need to explicitly validate
user-requested lbr-fmt values.
The PMU feature is only supported when running under KVM, so initialize
cpu->lbr_fmt in kvm_cpu_instance_init(). Use -1 as the default lbr-fmt,
rather than initializing it with ~PERF_CAP_LBR_FMT, which is misleading
as it suggests a semantic relationship that does not exist.
Rename requested_lbr_fmt to a more generic guest_fmt. When lbr-fmt is
not specified and cpu->migratable is false, the guest lbr_fmt value is
not user-requested.
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
---
V2: New patch.
---
target/i386/cpu.c | 18 ++++++------------
target/i386/kvm/kvm-cpu.c | 2 ++
2 files changed, 8 insertions(+), 12 deletions(-)
diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index 89691fba45e1..da2e67ca1faf 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -9776,7 +9776,7 @@ static void x86_cpu_realizefn(DeviceState *dev, Error **errp)
X86CPUClass *xcc = X86_CPU_GET_CLASS(dev);
CPUX86State *env = &cpu->env;
Error *local_err = NULL;
- unsigned requested_lbr_fmt;
+ unsigned guest_fmt;
if (!kvm_enabled())
cpu->enable_pmu = false;
@@ -9816,11 +9816,7 @@ static void x86_cpu_realizefn(DeviceState *dev, Error **errp)
* Override env->features[FEAT_PERF_CAPABILITIES].LBR_FMT
* with user-provided setting.
*/
- if (cpu->lbr_fmt != ~PERF_CAP_LBR_FMT) {
- if ((cpu->lbr_fmt & PERF_CAP_LBR_FMT) != cpu->lbr_fmt) {
- error_setg(errp, "invalid lbr-fmt");
- return;
- }
+ if (cpu->lbr_fmt != -1) {
env->features[FEAT_PERF_CAPABILITIES] &= ~PERF_CAP_LBR_FMT;
env->features[FEAT_PERF_CAPABILITIES] |= cpu->lbr_fmt;
}
@@ -9829,9 +9825,8 @@ static void x86_cpu_realizefn(DeviceState *dev, Error **errp)
* vPMU LBR is supported when 1) KVM is enabled 2) Option pmu=on and
* 3)vPMU LBR format matches that of host setting.
*/
- requested_lbr_fmt =
- env->features[FEAT_PERF_CAPABILITIES] & PERF_CAP_LBR_FMT;
- if (requested_lbr_fmt && kvm_enabled()) {
+ guest_fmt = env->features[FEAT_PERF_CAPABILITIES] & PERF_CAP_LBR_FMT;
+ if (guest_fmt) {
uint64_t host_perf_cap =
x86_cpu_get_supported_feature_word(NULL, FEAT_PERF_CAPABILITIES);
unsigned host_lbr_fmt = host_perf_cap & PERF_CAP_LBR_FMT;
@@ -9840,10 +9835,10 @@ static void x86_cpu_realizefn(DeviceState *dev, Error **errp)
error_setg(errp, "vPMU: LBR is unsupported without pmu=on");
return;
}
- if (requested_lbr_fmt != host_lbr_fmt) {
+ if (guest_fmt != host_lbr_fmt) {
error_setg(errp, "vPMU: the lbr-fmt value (0x%x) does not match "
"the host value (0x%x).",
- requested_lbr_fmt, host_lbr_fmt);
+ guest_fmt, host_lbr_fmt);
return;
}
}
@@ -10264,7 +10259,6 @@ static void x86_cpu_initfn(Object *obj)
object_property_add_alias(obj, "sse4_2", obj, "sse4.2");
object_property_add_alias(obj, "hv-apicv", obj, "hv-avic");
- cpu->lbr_fmt = ~PERF_CAP_LBR_FMT;
object_property_add_alias(obj, "lbr_fmt", obj, "lbr-fmt");
if (xcc->model) {
diff --git a/target/i386/kvm/kvm-cpu.c b/target/i386/kvm/kvm-cpu.c
index c34d9f15c7e8..1d0047d037c7 100644
--- a/target/i386/kvm/kvm-cpu.c
+++ b/target/i386/kvm/kvm-cpu.c
@@ -230,6 +230,8 @@ static void kvm_cpu_instance_init(CPUState *cs)
kvm_cpu_max_instance_init(cpu);
}
+ cpu->lbr_fmt = -1;
+
kvm_cpu_xsave_init();
}
--
2.53.0
* [PATCH V3 10/13] target/i386: Refactor LBR format handling
2026-03-04 18:06 [PATCH V3 00/13] target/i386: Misc PMU fixes and enabling Zide Chen
` (8 preceding siblings ...)
2026-03-04 18:07 ` [PATCH V3 09/13] target/i386: Clean up LBR format handling Zide Chen
@ 2026-03-04 18:07 ` Zide Chen
2026-03-04 18:07 ` [PATCH V3 11/13] target/i386: Add pebs-fmt CPU option Zide Chen
` (2 subsequent siblings)
12 siblings, 0 replies; 25+ messages in thread
From: Zide Chen @ 2026-03-04 18:07 UTC (permalink / raw)
To: qemu-devel, kvm, Paolo Bonzini, Zhao Liu, Peter Xu, Fabiano Rosas,
Sandipan Das
Cc: Xiaoyao Li, Dongli Zhang, Dapeng Mi, Zide Chen
Detach x86_cpu_pmu_realize() from x86_cpu_realizefn() to keep the latter
focused and easier to follow. Introduce a dedicated helper,
x86_cpu_apply_lbr_pebs_fmt(), in preparation for adding PEBS format
support without duplicating code.
Convert PERF_CAP_LBR_FMT into separate mask and shift macros to allow
x86_cpu_apply_lbr_pebs_fmt() to be shared with PEBS format handling.
No functional change intended.
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
---
V2: New patch.
---
target/i386/cpu.c | 93 +++++++++++++++++++++++++++++++----------------
target/i386/cpu.h | 3 +-
2 files changed, 64 insertions(+), 32 deletions(-)
diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index da2e67ca1faf..d5e00b41fb04 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -9769,6 +9769,65 @@ static bool x86_cpu_update_smp_cache_topo(MachineState *ms, X86CPU *cpu,
}
#endif
+static bool x86_cpu_apply_lbr_pebs_fmt(X86CPU *cpu, uint64_t host_perf_cap,
+ uint64_t user_req, bool is_lbr_fmt,
+ Error **errp)
+{
+ CPUX86State *env = &cpu->env;
+ uint64_t mask;
+ unsigned shift;
+ unsigned user_fmt;
+ const char *name;
+
+ if (is_lbr_fmt) {
+ mask = PERF_CAP_LBR_FMT_MASK;
+ shift = PERF_CAP_LBR_FMT_SHIFT;
+ name = "lbr";
+ } else {
+ return false;
+ }
+
+ if (user_req != -1) {
+ env->features[FEAT_PERF_CAPABILITIES] &= ~(mask << shift);
+ env->features[FEAT_PERF_CAPABILITIES] |= (user_req << shift);
+ }
+
+ user_fmt = (env->features[FEAT_PERF_CAPABILITIES] >> shift) & mask;
+ if (user_fmt) {
+ unsigned host_fmt = (host_perf_cap >> shift) & mask;
+
+ if (!cpu->enable_pmu) {
+ error_setg(errp, "vPMU: %s is unsupported without pmu=on", name);
+ return false;
+ }
+ if (user_fmt != host_fmt) {
+ error_setg(errp, "vPMU: the %s-fmt value (0x%x) does not match "
+ "the host value (0x%x).",
+ name, user_fmt, host_fmt);
+ return false;
+ }
+ }
+
+ return true;
+}
+
+static int x86_cpu_pmu_realize(X86CPU *cpu, Error **errp)
+{
+ uint64_t host_perf_cap =
+ x86_cpu_get_supported_feature_word(NULL, FEAT_PERF_CAPABILITIES);
+
+ /*
+ * Override env->features[FEAT_PERF_CAPABILITIES].LBR_FMT
+ * with user-provided setting.
+ */
+ if (!x86_cpu_apply_lbr_pebs_fmt(cpu, host_perf_cap,
+ cpu->lbr_fmt, true, errp)) {
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
static void x86_cpu_realizefn(DeviceState *dev, Error **errp)
{
CPUState *cs = CPU(dev);
@@ -9776,7 +9835,6 @@ static void x86_cpu_realizefn(DeviceState *dev, Error **errp)
X86CPUClass *xcc = X86_CPU_GET_CLASS(dev);
CPUX86State *env = &cpu->env;
Error *local_err = NULL;
- unsigned guest_fmt;
if (!kvm_enabled())
cpu->enable_pmu = false;
@@ -9812,35 +9870,8 @@ static void x86_cpu_realizefn(DeviceState *dev, Error **errp)
goto out;
}
- /*
- * Override env->features[FEAT_PERF_CAPABILITIES].LBR_FMT
- * with user-provided setting.
- */
- if (cpu->lbr_fmt != -1) {
- env->features[FEAT_PERF_CAPABILITIES] &= ~PERF_CAP_LBR_FMT;
- env->features[FEAT_PERF_CAPABILITIES] |= cpu->lbr_fmt;
- }
-
- /*
- * vPMU LBR is supported when 1) KVM is enabled 2) Option pmu=on and
- * 3)vPMU LBR format matches that of host setting.
- */
- guest_fmt = env->features[FEAT_PERF_CAPABILITIES] & PERF_CAP_LBR_FMT;
- if (guest_fmt) {
- uint64_t host_perf_cap =
- x86_cpu_get_supported_feature_word(NULL, FEAT_PERF_CAPABILITIES);
- unsigned host_lbr_fmt = host_perf_cap & PERF_CAP_LBR_FMT;
-
- if (!cpu->enable_pmu) {
- error_setg(errp, "vPMU: LBR is unsupported without pmu=on");
- return;
- }
- if (guest_fmt != host_lbr_fmt) {
- error_setg(errp, "vPMU: the lbr-fmt value (0x%x) does not match "
- "the host value (0x%x).",
- guest_fmt, host_lbr_fmt);
- return;
- }
+ if (x86_cpu_pmu_realize(cpu, errp)) {
+ return;
}
if (x86_cpu_filter_features(cpu, cpu->check_cpuid || cpu->enforce_cpuid)) {
@@ -10430,7 +10461,7 @@ static const Property x86_cpu_properties[] = {
#endif
DEFINE_PROP_INT32("node-id", X86CPU, node_id, CPU_UNSET_NUMA_NODE_ID),
DEFINE_PROP_BOOL("pmu", X86CPU, enable_pmu, false),
- DEFINE_PROP_UINT64_CHECKMASK("lbr-fmt", X86CPU, lbr_fmt, PERF_CAP_LBR_FMT),
+ DEFINE_PROP_UINT64_CHECKMASK("lbr-fmt", X86CPU, lbr_fmt, PERF_CAP_LBR_FMT_MASK),
DEFINE_PROP_UINT32("hv-spinlocks", X86CPU, hyperv_spinlock_attempts,
HYPERV_SPINLOCK_NEVER_NOTIFY),
diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index 3a10f3242329..a064bf8ab17e 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -420,7 +420,8 @@ typedef enum X86Seg {
#define ARCH_CAP_TSX_CTRL_MSR (1<<7)
#define MSR_IA32_PERF_CAPABILITIES 0x345
-#define PERF_CAP_LBR_FMT 0x3f
+#define PERF_CAP_LBR_FMT_MASK 0x3f
+#define PERF_CAP_LBR_FMT_SHIFT 0x0
#define PERF_CAP_FULL_WRITE (1U << 13)
#define PERF_CAP_PEBS_BASELINE (1U << 14)
--
2.53.0
* [PATCH V3 11/13] target/i386: Add pebs-fmt CPU option
2026-03-04 18:06 [PATCH V3 00/13] target/i386: Misc PMU fixes and enabling Zide Chen
` (9 preceding siblings ...)
2026-03-04 18:07 ` [PATCH V3 10/13] target/i386: Refactor " Zide Chen
@ 2026-03-04 18:07 ` Zide Chen
2026-03-06 5:23 ` Mi, Dapeng
2026-03-04 18:07 ` [PATCH V3 12/13] target/i386: Clean up Intel Debug Store feature dependencies Zide Chen
2026-03-04 18:07 ` [PATCH V3 13/13] target/i386: Add Topdown metrics feature support Zide Chen
12 siblings, 1 reply; 25+ messages in thread
From: Zide Chen @ 2026-03-04 18:07 UTC (permalink / raw)
To: qemu-devel, kvm, Paolo Bonzini, Zhao Liu, Peter Xu, Fabiano Rosas,
Sandipan Das
Cc: Xiaoyao Li, Dongli Zhang, Dapeng Mi, Zide Chen
As with lbr-fmt, target/i386 does not support multi-bit CPU
properties, so the PEBS record format cannot be exposed as a
user-visible CPU feature.
Add a pebs-fmt option to allow users to specify the PEBS format via the
command line. Since the PEBS state is part of the vmstate, this option
is considered migratable.
We do not support PEBS record format 0. Although it is a valid format
on some very old CPUs, it is unlikely to be used in practice. This
allows pebs-fmt=0 to be used to explicitly disable PEBS in the case of
migratable=off.
If PEBS is not enabled, mark it as unavailable in IA32_MISC_ENABLE and
clear the PEBS-related bits in IA32_PERF_CAPABILITIES.
If migratable=on on a PEBS-capable host and the PMU is enabled:
- PEBS is disabled if pebs-fmt is not specified or pebs-fmt=0.
- PEBS is enabled if pebs-fmt is set to the same value as the host.
When migratable=off, the behavior is similar, except that omitting
the pebs-fmt option does not disable PEBS.
Signed-off-by: Zide Chen <zide.chen@intel.com>
---
V3:
- If DS is not available, make this option invalid.
- If pebs_fmt is 0, mark PEBS unavailable.
- Move MSR_IA32_MISC_ENABLE_PEBS_UNAVAIL code from [patch v2 11/11] to
this patch for tighter logic.
- Add option usage to commit message.
V2: New patch.
---
target/i386/cpu.c | 23 ++++++++++++++++++++++-
target/i386/cpu.h | 7 +++++++
target/i386/kvm/kvm-cpu.c | 1 +
3 files changed, 30 insertions(+), 1 deletion(-)
diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index d5e00b41fb04..2e1dea65d708 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -9170,6 +9170,13 @@ static void x86_cpu_reset_hold(Object *obj, ResetType type)
env->msr_ia32_misc_enable |= MSR_IA32_MISC_ENABLE_MWAIT;
}
+ if (!(env->features[FEAT_1_EDX] & CPUID_DTS) ||
+ !(env->features[FEAT_PERF_CAPABILITIES] & PERF_CAP_PEBS_FORMAT)) {
+ /* Mark PEBS unavailable and clear all PEBS related bits. */
+ env->msr_ia32_misc_enable |= MSR_IA32_MISC_ENABLE_PEBS_UNAVAIL;
+ env->features[FEAT_PERF_CAPABILITIES] &= ~0x34fc0ull;
+ }
+
memset(env->dr, 0, sizeof(env->dr));
env->dr[6] = DR6_FIXED_1;
env->dr[7] = DR7_FIXED_1;
@@ -9784,10 +9791,17 @@ static bool x86_cpu_apply_lbr_pebs_fmt(X86CPU *cpu, uint64_t host_perf_cap,
shift = PERF_CAP_LBR_FMT_SHIFT;
name = "lbr";
} else {
- return false;
+ mask = PERF_CAP_PEBS_FMT_MASK;
+ shift = PERF_CAP_PEBS_FMT_SHIFT;
+ name = "pebs";
}
if (user_req != -1) {
+ if (!is_lbr_fmt && !(env->features[FEAT_1_EDX] & CPUID_DTS)) {
+ error_setg(errp, "vPMU: %s is unsupported without Debug Store", name);
+ return false;
+ }
+
env->features[FEAT_PERF_CAPABILITIES] &= ~(mask << shift);
env->features[FEAT_PERF_CAPABILITIES] |= (user_req << shift);
}
@@ -9825,6 +9839,11 @@ static int x86_cpu_pmu_realize(X86CPU *cpu, Error **errp)
return -EINVAL;
}
+ if (!x86_cpu_apply_lbr_pebs_fmt(cpu, host_perf_cap,
+ cpu->pebs_fmt, false, errp)) {
+ return -EINVAL;
+ }
+
return 0;
}
@@ -10291,6 +10310,7 @@ static void x86_cpu_initfn(Object *obj)
object_property_add_alias(obj, "hv-apicv", obj, "hv-avic");
object_property_add_alias(obj, "lbr_fmt", obj, "lbr-fmt");
+ object_property_add_alias(obj, "pebs_fmt", obj, "pebs-fmt");
if (xcc->model) {
x86_cpu_load_model(cpu, xcc->model);
@@ -10462,6 +10482,7 @@ static const Property x86_cpu_properties[] = {
DEFINE_PROP_INT32("node-id", X86CPU, node_id, CPU_UNSET_NUMA_NODE_ID),
DEFINE_PROP_BOOL("pmu", X86CPU, enable_pmu, false),
DEFINE_PROP_UINT64_CHECKMASK("lbr-fmt", X86CPU, lbr_fmt, PERF_CAP_LBR_FMT_MASK),
+ DEFINE_PROP_UINT64_CHECKMASK("pebs-fmt", X86CPU, pebs_fmt, PERF_CAP_PEBS_FMT_MASK),
DEFINE_PROP_UINT32("hv-spinlocks", X86CPU, hyperv_spinlock_attempts,
HYPERV_SPINLOCK_NEVER_NOTIFY),
diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index a064bf8ab17e..6a9820c4041a 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -422,6 +422,10 @@ typedef enum X86Seg {
#define MSR_IA32_PERF_CAPABILITIES 0x345
#define PERF_CAP_LBR_FMT_MASK 0x3f
#define PERF_CAP_LBR_FMT_SHIFT 0x0
+#define PERF_CAP_PEBS_FMT_MASK 0xf
+#define PERF_CAP_PEBS_FMT_SHIFT 0x8
+#define PERF_CAP_PEBS_FORMAT (PERF_CAP_PEBS_FMT_MASK << \
+ PERF_CAP_PEBS_FMT_SHIFT)
#define PERF_CAP_FULL_WRITE (1U << 13)
#define PERF_CAP_PEBS_BASELINE (1U << 14)
@@ -2410,6 +2414,9 @@ struct ArchCPU {
*/
uint64_t lbr_fmt;
+ /* PEBS_FMT bits in IA32_PERF_CAPABILITIES MSR. */
+ uint64_t pebs_fmt;
+
/* LMCE support can be enabled/disabled via cpu option 'lmce=on/off'. It is
* disabled by default to avoid breaking migration between QEMU with
* different LMCE configurations.
diff --git a/target/i386/kvm/kvm-cpu.c b/target/i386/kvm/kvm-cpu.c
index 1d0047d037c7..60bf3899852a 100644
--- a/target/i386/kvm/kvm-cpu.c
+++ b/target/i386/kvm/kvm-cpu.c
@@ -231,6 +231,7 @@ static void kvm_cpu_instance_init(CPUState *cs)
}
cpu->lbr_fmt = -1;
+ cpu->pebs_fmt = -1;
kvm_cpu_xsave_init();
}
--
2.53.0
* [PATCH V3 12/13] target/i386: Clean up Intel Debug Store feature dependencies
2026-03-04 18:06 [PATCH V3 00/13] target/i386: Misc PMU fixes and enabling Zide Chen
` (10 preceding siblings ...)
2026-03-04 18:07 ` [PATCH V3 11/13] target/i386: Add pebs-fmt CPU option Zide Chen
@ 2026-03-04 18:07 ` Zide Chen
2026-03-06 5:34 ` Mi, Dapeng
2026-03-16 3:21 ` Chenyi Qiang
2026-03-04 18:07 ` [PATCH V3 13/13] target/i386: Add Topdown metrics feature support Zide Chen
12 siblings, 2 replies; 25+ messages in thread
From: Zide Chen @ 2026-03-04 18:07 UTC (permalink / raw)
To: qemu-devel, kvm, Paolo Bonzini, Zhao Liu, Peter Xu, Fabiano Rosas,
Sandipan Das
Cc: Xiaoyao Li, Dongli Zhang, Dapeng Mi, Zide Chen
- 64-bit DS Area (CPUID.01H:ECX[2]) depends on DS (CPUID.01H:EDX[21]).
- When PMU is disabled, Debug Store must not be exposed to the guest,
which implicitly disables legacy DS-based PEBS.
Signed-off-by: Zide Chen <zide.chen@intel.com>
---
V3:
- Update title to be more accurate.
- Make DTES64 depend on DS.
- Mark MSR_IA32_MISC_ENABLE_PEBS_UNAVAIL in previous patch.
- Clean up the commit message.
V2: New patch.
---
target/i386/cpu.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index 2e1dea65d708..3ff9f76cf7da 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -1899,6 +1899,10 @@ static FeatureDep feature_dependencies[] = {
.from = { FEAT_1_ECX, CPUID_EXT_PDCM },
.to = { FEAT_PERF_CAPABILITIES, ~0ull },
},
+ {
+ .from = { FEAT_1_EDX, CPUID_DTS},
+ .to = { FEAT_1_ECX, CPUID_EXT_DTES64},
+ },
{
.from = { FEAT_1_ECX, CPUID_EXT_VMX },
.to = { FEAT_VMX_PROCBASED_CTLS, ~0ull },
@@ -9471,6 +9475,7 @@ void x86_cpu_expand_features(X86CPU *cpu, Error **errp)
env->features[FEAT_1_ECX] &= ~CPUID_EXT_PDCM;
}
+ env->features[FEAT_1_EDX] &= ~CPUID_DTS;
env->features[FEAT_7_0_EDX] &= ~CPUID_7_0_EDX_ARCH_LBR;
}
--
2.53.0
* [PATCH V3 13/13] target/i386: Add Topdown metrics feature support
2026-03-04 18:06 [PATCH V3 00/13] target/i386: Misc PMU fixes and enabling Zide Chen
` (11 preceding siblings ...)
2026-03-04 18:07 ` [PATCH V3 12/13] target/i386: Clean up Intel Debug Store feature dependencies Zide Chen
@ 2026-03-04 18:07 ` Zide Chen
2026-03-06 5:37 ` Mi, Dapeng
12 siblings, 1 reply; 25+ messages in thread
From: Zide Chen @ 2026-03-04 18:07 UTC (permalink / raw)
To: qemu-devel, kvm, Paolo Bonzini, Zhao Liu, Peter Xu, Fabiano Rosas,
Sandipan Das
Cc: Xiaoyao Li, Dongli Zhang, Dapeng Mi, Zide Chen
From: Dapeng Mi <dapeng1.mi@linux.intel.com>
IA32_PERF_CAPABILITIES.PERF_METRICS_AVAILABLE (bit 15) indicates that
the CPU provides built-in support for TMA L1 metrics through
the PERF_METRICS MSR. Expose it as a user-visible CPU feature
("perf-metrics"), allowing it to be explicitly enabled or disabled and
used with migratable guests.
Plumb IA32_PERF_METRICS through the KVM MSR get/put paths to be able
to save and restore this MSR.
Migrate IA32_PERF_METRICS MSR using a new subsection of
vmstate_msr_architectural_pmu.
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Co-developed-by: Zide Chen <zide.chen@intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
---
V3: New patch
---
target/i386/cpu.c | 2 +-
target/i386/cpu.h | 3 +++
target/i386/kvm/kvm.c | 10 ++++++++++
target/i386/machine.c | 19 +++++++++++++++++++
4 files changed, 33 insertions(+), 1 deletion(-)
diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index 3ff9f76cf7da..88cfd3529851 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -1620,7 +1620,7 @@ FeatureWordInfo feature_word_info[FEATURE_WORDS] = {
NULL, NULL, NULL, NULL,
NULL, NULL, "pebs-trap", "pebs-arch-reg",
NULL, NULL, NULL, NULL,
- NULL, "full-width-write", "pebs-baseline", NULL,
+ NULL, "full-width-write", "pebs-baseline", "perf-metrics",
NULL, "pebs-timing-info", NULL, NULL,
NULL, NULL, NULL, NULL,
NULL, NULL, NULL, NULL,
diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index 6a9820c4041a..5d0ed692ae06 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -428,6 +428,7 @@ typedef enum X86Seg {
PERF_CAP_PEBS_FMT_SHIFT)
#define PERF_CAP_FULL_WRITE (1U << 13)
#define PERF_CAP_PEBS_BASELINE (1U << 14)
+#define PERF_CAP_TOPDOWN (1U << 15)
#define MSR_IA32_TSX_CTRL 0x122
#define MSR_IA32_TSCDEADLINE 0x6e0
@@ -514,6 +515,7 @@ typedef enum X86Seg {
#define MSR_CORE_PERF_FIXED_CTR0 0x309
#define MSR_CORE_PERF_FIXED_CTR1 0x30a
#define MSR_CORE_PERF_FIXED_CTR2 0x30b
+#define MSR_PERF_METRICS 0x329
#define MSR_CORE_PERF_FIXED_CTR_CTRL 0x38d
#define MSR_CORE_PERF_GLOBAL_STATUS 0x38e
#define MSR_CORE_PERF_GLOBAL_CTRL 0x38f
@@ -2111,6 +2113,7 @@ typedef struct CPUArchState {
uint64_t msr_fixed_ctr_ctrl;
uint64_t msr_global_ctrl;
uint64_t msr_global_status;
+ uint64_t msr_perf_metrics;
uint64_t msr_ds_area;
uint64_t msr_pebs_data_cfg;
uint64_t msr_pebs_enable;
diff --git a/target/i386/kvm/kvm.c b/target/i386/kvm/kvm.c
index 8c4564bcbb9e..3f533cd65708 100644
--- a/target/i386/kvm/kvm.c
+++ b/target/i386/kvm/kvm.c
@@ -4295,6 +4295,10 @@ static int kvm_put_msrs(X86CPU *cpu, KvmPutState level)
kvm_msr_entry_add(cpu, MSR_CORE_PERF_FIXED_CTR0 + i,
env->msr_fixed_counters[i]);
}
+ /* SDM: Write IA32_PERF_METRICS after fixed counter 3. */
+ if (env->features[FEAT_PERF_CAPABILITIES] & PERF_CAP_TOPDOWN) {
+ kvm_msr_entry_add(cpu, MSR_PERF_METRICS, env->msr_perf_metrics);
+ }
for (i = 0; i < num_pmu_gp_counters; i++) {
kvm_msr_entry_add(cpu, perf_cntr_base + i,
env->msr_gp_counters[i]);
@@ -4868,6 +4872,9 @@ static int kvm_get_msrs(X86CPU *cpu)
kvm_msr_entry_add(cpu, MSR_CORE_PERF_GLOBAL_CTRL, 0);
kvm_msr_entry_add(cpu, MSR_CORE_PERF_GLOBAL_STATUS, 0);
}
+ if (env->features[FEAT_PERF_CAPABILITIES] & PERF_CAP_TOPDOWN) {
+ kvm_msr_entry_add(cpu, MSR_PERF_METRICS, 0);
+ }
for (i = 0; i < num_pmu_fixed_counters; i++) {
kvm_msr_entry_add(cpu, MSR_CORE_PERF_FIXED_CTR0 + i, 0);
}
@@ -5234,6 +5241,9 @@ static int kvm_get_msrs(X86CPU *cpu)
case MSR_AMD64_PERF_CNTR_GLOBAL_STATUS:
env->msr_global_status = msrs[i].data;
break;
+ case MSR_PERF_METRICS:
+ env->msr_perf_metrics = msrs[i].data;
+ break;
case MSR_CORE_PERF_FIXED_CTR0 ... MSR_CORE_PERF_FIXED_CTR0 + MAX_FIXED_COUNTERS - 1:
env->msr_fixed_counters[index - MSR_CORE_PERF_FIXED_CTR0] = msrs[i].data;
break;
diff --git a/target/i386/machine.c b/target/i386/machine.c
index 5cff5d5a9db5..6b7141cfead7 100644
--- a/target/i386/machine.c
+++ b/target/i386/machine.c
@@ -680,6 +680,24 @@ static const VMStateDescription vmstate_msr_ds_pebs = {
VMSTATE_END_OF_LIST()}
};
+static bool perf_metrics_enabled(void *opaque)
+{
+ X86CPU *cpu = opaque;
+ CPUX86State *env = &cpu->env;
+
+ return !!env->msr_perf_metrics;
+}
+
+static const VMStateDescription vmstate_msr_perf_metrics = {
+ .name = "cpu/msr_architectural_pmu/msr_perf_metrics",
+ .version_id = 1,
+ .minimum_version_id = 1,
+ .needed = perf_metrics_enabled,
+ .fields = (const VMStateField[]){
+ VMSTATE_UINT64(env.msr_perf_metrics, X86CPU),
+ VMSTATE_END_OF_LIST()}
+};
+
static bool pmu_enable_needed(void *opaque)
{
X86CPU *cpu = opaque;
@@ -721,6 +739,7 @@ static const VMStateDescription vmstate_msr_architectural_pmu = {
},
.subsections = (const VMStateDescription * const []) {
&vmstate_msr_ds_pebs,
+ &vmstate_msr_perf_metrics,
NULL,
},
};
--
2.53.0
* Re: [PATCH V3 04/13] target/i386: Adjust maximum number of PMU counters
2026-03-04 18:07 ` [PATCH V3 04/13] target/i386: Adjust maximum number of PMU counters Zide Chen
@ 2026-03-06 3:02 ` Mi, Dapeng
0 siblings, 0 replies; 25+ messages in thread
From: Mi, Dapeng @ 2026-03-06 3:02 UTC (permalink / raw)
To: Zide Chen, qemu-devel, kvm, Paolo Bonzini, Zhao Liu, Peter Xu,
Fabiano Rosas, Sandipan Das
Cc: Xiaoyao Li, Dongli Zhang
On 3/5/2026 2:07 AM, Zide Chen wrote:
> Changing either MAX_GP_COUNTERS or MAX_FIXED_COUNTERS affects the
> VMState layout and therefore requires bumping the migration version
> IDs. Adjust both limits together to avoid repeated VMState version
> bumps in follow-up patches.
>
> To support full-width writes, QEMU needs to handle the alias MSRs
> starting at 0x4c1. With the current limits, the alias range can
> extend into MSR_MCG_EXT_CTL (0x4d0). Reducing MAX_GP_COUNTERS from 18
> to 15 avoids the overlap while still leaving room for future expansion
> beyond current hardware (which supports at most 10 GP counters).
>
> Increase MAX_FIXED_COUNTERS to 7 to support additional fixed counters
> (e.g. Topdown metric events).
>
> With these changes, bump version_id to prevent migration to older
> QEMU, and bump minimum_version_id to prevent migration from older
> QEMU, which could otherwise result in VMState overflows.
>
> Signed-off-by: Zide Chen <zide.chen@intel.com>
> ---
> target/i386/cpu.h | 8 ++------
> target/i386/machine.c | 4 ++--
> 2 files changed, 4 insertions(+), 8 deletions(-)
>
> diff --git a/target/i386/cpu.h b/target/i386/cpu.h
> index 6d3e70395dbd..23d4ee13abfa 100644
> --- a/target/i386/cpu.h
> +++ b/target/i386/cpu.h
> @@ -1749,12 +1749,8 @@ typedef struct {
> #define CPU_NB_REGS CPU_NB_REGS32
> #endif
>
> -#define MAX_FIXED_COUNTERS 3
> -/*
> - * This formula is based on Intel's MSR. The current size also meets AMD's
> - * needs.
> - */
> -#define MAX_GP_COUNTERS (MSR_IA32_PERF_STATUS - MSR_P6_EVNTSEL0)
> +#define MAX_FIXED_COUNTERS 7
> +#define MAX_GP_COUNTERS 15
I suppose it's good enough to reduce MAX_GP_COUNTERS to 10. I don't think
there will be 10+ GP counters on Intel platforms in the near future, but the
AMD maintainers would need to confirm whether that is enough for AMD
platforms.
Of course, shrinking MAX_GP_COUNTERS to 15 is fine for me as well.
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
>
> #define NB_OPMASK_REGS 8
>
> diff --git a/target/i386/machine.c b/target/i386/machine.c
> index 1125c8a64ec5..7d08a05835fc 100644
> --- a/target/i386/machine.c
> +++ b/target/i386/machine.c
> @@ -685,8 +685,8 @@ static bool pmu_enable_needed(void *opaque)
>
> static const VMStateDescription vmstate_msr_architectural_pmu = {
> .name = "cpu/msr_architectural_pmu",
> - .version_id = 1,
> - .minimum_version_id = 1,
> + .version_id = 2,
> + .minimum_version_id = 2,
> .needed = pmu_enable_needed,
> .fields = (const VMStateField[]) {
> VMSTATE_UINT64(env.msr_fixed_ctr_ctrl, X86CPU),
* Re: [PATCH V3 06/13] target/i386: Increase MSR_BUF_SIZE and split KVM_[GET/SET]_MSRS calls
2026-03-04 18:07 ` [PATCH V3 06/13] target/i386: Increase MSR_BUF_SIZE and split KVM_[GET/SET]_MSRS calls Zide Chen
@ 2026-03-06 3:09 ` Mi, Dapeng
0 siblings, 0 replies; 25+ messages in thread
From: Mi, Dapeng @ 2026-03-06 3:09 UTC (permalink / raw)
To: Zide Chen, qemu-devel, kvm, Paolo Bonzini, Zhao Liu, Peter Xu,
Fabiano Rosas, Sandipan Das
Cc: Xiaoyao Li, Dongli Zhang
LGTM. Thanks.
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
On 3/5/2026 2:07 AM, Zide Chen wrote:
> Newer Intel server CPUs support a large number of PMU MSRs. Currently,
> QEMU allocates cpu->kvm_msr_buf as a single-page buffer, which is not
> sufficient to hold all possible MSRs.
>
> Increase MSR_BUF_SIZE to 8192 bytes, providing space for up to 511 MSRs.
> This is sufficient even for the theoretical worst case, such as
> architectural LBR with a depth of 64.
>
> KVM_[GET/SET]_MSRS is limited to 255 MSRs per call. Raising this limit
> to 511 would require changes in KVM and would introduce backward
> compatibility issues. Instead, split requests into multiple
> KVM_[GET/SET]_MSRS calls when the number of MSRs exceeds the API limit.
>
> Signed-off-by: Zide Chen <zide.chen@intel.com>
> ---
> v3:
> - Address Dapeng's comments.
> ---
> target/i386/kvm/kvm.c | 110 +++++++++++++++++++++++++++++++++++-------
> 1 file changed, 92 insertions(+), 18 deletions(-)
>
> diff --git a/target/i386/kvm/kvm.c b/target/i386/kvm/kvm.c
> index 39a67c58ac22..4ba54151320f 100644
> --- a/target/i386/kvm/kvm.c
> +++ b/target/i386/kvm/kvm.c
> @@ -97,9 +97,12 @@
> #define KVM_APIC_BUS_CYCLE_NS 1
> #define KVM_APIC_BUS_FREQUENCY (1000000000ULL / KVM_APIC_BUS_CYCLE_NS)
>
> -/* A 4096-byte buffer can hold the 8-byte kvm_msrs header, plus
> - * 255 kvm_msr_entry structs */
> -#define MSR_BUF_SIZE 4096
> +/* A 8192-byte buffer can hold the 8-byte kvm_msrs header, plus
> + * 511 kvm_msr_entry structs */
> +#define MSR_BUF_SIZE 8192
> +
> +/* Maximum number of MSRs in one single KVM_[GET/SET]_MSRS call. */
> +#define KVM_MAX_IO_MSRS 255
>
> typedef bool QEMURDMSRHandler(X86CPU *cpu, uint32_t msr, uint64_t *val);
> typedef bool QEMUWRMSRHandler(X86CPU *cpu, uint32_t msr, uint64_t val);
> @@ -4016,21 +4019,99 @@ static void kvm_msr_entry_add_perf(X86CPU *cpu, FeatureWordArray f)
> }
> }
>
> -static int kvm_buf_set_msrs(X86CPU *cpu)
> +static int __kvm_buf_set_msrs(X86CPU *cpu, struct kvm_msrs *msrs)
> {
> - int ret = kvm_vcpu_ioctl(CPU(cpu), KVM_SET_MSRS, cpu->kvm_msr_buf);
> + int ret = kvm_vcpu_ioctl(CPU(cpu), KVM_SET_MSRS, msrs);
> if (ret < 0) {
> return ret;
> }
>
> - if (ret < cpu->kvm_msr_buf->nmsrs) {
> - struct kvm_msr_entry *e = &cpu->kvm_msr_buf->entries[ret];
> + if (ret < msrs->nmsrs) {
> + struct kvm_msr_entry *e = &msrs->entries[ret];
> error_report("error: failed to set MSR 0x%" PRIx32 " to 0x%" PRIx64,
> (uint32_t)e->index, (uint64_t)e->data);
> }
>
> - assert(ret == cpu->kvm_msr_buf->nmsrs);
> - return 0;
> + assert(ret == msrs->nmsrs);
> + return ret;
> +}
> +
> +static int __kvm_buf_get_msrs(X86CPU *cpu, struct kvm_msrs *msrs)
> +{
> + int ret;
> +
> + ret = kvm_vcpu_ioctl(CPU(cpu), KVM_GET_MSRS, msrs);
> + if (ret < 0) {
> + return ret;
> + }
> +
> + if (ret < msrs->nmsrs) {
> + struct kvm_msr_entry *e = &msrs->entries[ret];
> + error_report("error: failed to get MSR 0x%" PRIx32,
> + (uint32_t)e->index);
> + }
> +
> + assert(ret == msrs->nmsrs);
> + return ret;
> +}
> +
> +static int kvm_buf_set_or_get_msrs(X86CPU *cpu, bool is_write)
> +{
> + struct kvm_msr_entry *entries = cpu->kvm_msr_buf->entries;
> + struct kvm_msrs *buf = NULL;
> + int current, remaining, ret = 0;
> + size_t buf_size;
> +
> + buf_size = KVM_MAX_IO_MSRS * sizeof(struct kvm_msr_entry) +
> + sizeof(struct kvm_msrs);
> + buf = g_malloc(buf_size);
> +
> + remaining = cpu->kvm_msr_buf->nmsrs;
> + current = 0;
> + while (remaining) {
> + size_t size;
> +
> + memset(buf, 0, buf_size);
> +
> + if (remaining > KVM_MAX_IO_MSRS) {
> + buf->nmsrs = KVM_MAX_IO_MSRS;
> + } else {
> + buf->nmsrs = remaining;
> + }
> +
> + size = buf->nmsrs * sizeof(entries[0]);
> + memcpy(buf->entries, &entries[current], size);
> +
> + if (is_write) {
> + ret = __kvm_buf_set_msrs(cpu, buf);
> + } else {
> + ret = __kvm_buf_get_msrs(cpu, buf);
> + }
> +
> + if (ret < 0) {
> + goto out;
> + }
> +
> + if (!is_write)
> + memcpy(&entries[current], buf->entries, size);
> +
> + current += buf->nmsrs;
> + remaining -= buf->nmsrs;
> + }
> +
> +out:
> + g_free(buf);
> + return ret < 0 ? ret : cpu->kvm_msr_buf->nmsrs;
> +}
> +
> +static inline int kvm_buf_set_msrs(X86CPU *cpu)
> +{
> + return kvm_buf_set_or_get_msrs(cpu, true);
> +}
> +
> +static inline int kvm_buf_get_msrs(X86CPU *cpu)
> +{
> + return kvm_buf_set_or_get_msrs(cpu, false);
> }
>
> static void kvm_init_msrs(X86CPU *cpu)
> @@ -4066,7 +4147,7 @@ static void kvm_init_msrs(X86CPU *cpu)
> if (has_msr_ucode_rev) {
> kvm_msr_entry_add(cpu, MSR_IA32_UCODE_REV, cpu->ucode_rev);
> }
> - assert(kvm_buf_set_msrs(cpu) == 0);
> + kvm_buf_set_msrs(cpu);
> }
>
> static int kvm_put_msrs(X86CPU *cpu, KvmPutState level)
> @@ -4959,18 +5040,11 @@ static int kvm_get_msrs(X86CPU *cpu)
> }
> }
>
> - ret = kvm_vcpu_ioctl(CPU(cpu), KVM_GET_MSRS, cpu->kvm_msr_buf);
> + ret = kvm_buf_get_msrs(cpu);
> if (ret < 0) {
> return ret;
> }
>
> - if (ret < cpu->kvm_msr_buf->nmsrs) {
> - struct kvm_msr_entry *e = &cpu->kvm_msr_buf->entries[ret];
> - error_report("error: failed to get MSR 0x%" PRIx32,
> - (uint32_t)e->index);
> - }
> -
> - assert(ret == cpu->kvm_msr_buf->nmsrs);
> /*
> * MTRR masks: Each mask consists of 5 parts
> * a 10..0: must be zero
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH V3 07/13] target/i386: Add get/set/migrate support for legacy PEBS MSRs
2026-03-04 18:07 ` [PATCH V3 07/13] target/i386: Add get/set/migrate support for legacy PEBS MSRs Zide Chen
@ 2026-03-06 3:17 ` Mi, Dapeng
0 siblings, 0 replies; 25+ messages in thread
From: Mi, Dapeng @ 2026-03-06 3:17 UTC (permalink / raw)
To: Zide Chen, qemu-devel, kvm, Paolo Bonzini, Zhao Liu, Peter Xu,
Fabiano Rosas, Sandipan Das
Cc: Xiaoyao Li, Dongli Zhang
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
On 3/5/2026 2:07 AM, Zide Chen wrote:
> From: Dapeng Mi <dapeng1.mi@linux.intel.com>
>
> The legacy DS-based PEBS relies on IA32_DS_AREA and IA32_PEBS_ENABLE
> to take snapshots of a subset of the machine registers into the Intel
> Debug-Store.
>
> Adaptive PEBS introduces MSR_PEBS_DATA_CFG to be able to capture only
> the data of interest, which is enumerated via bit 14 (PEBS_BASELINE)
> of IA32_PERF_CAPABILITIES.
>
> QEMU must save, restore and migrate these MSRs when legacy PEBS is
> enabled. Though the availability of these MSRs may not be the same,
> it's still valid to put them in the same vmstate subsection for
> implementation simplicity.
>
> Originally-by: Luwei Kang <luwei.kang@intel.com>
> Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
> Co-developed-by: Zide Chen <zide.chen@intel.com>
> Signed-off-by: Zide Chen <zide.chen@intel.com>
> ---
> V3:
> - Add the missing Originally-by tag to credit Luwei.
> - Fix the vmstate name of msr_ds_pebs.
> - Fix the criteria for determining availability of IA32_PEBS_ENABLE
> and MSR_PEBS_DATA_CFG.
> - Change title to cover all aspects of what this patch does.
> - Re-work the commit messages.
> ---
> target/i386/cpu.h | 10 ++++++++++
> target/i386/kvm/kvm.c | 29 +++++++++++++++++++++++++++++
> target/i386/machine.c | 27 ++++++++++++++++++++++++++-
> 3 files changed, 65 insertions(+), 1 deletion(-)
>
> diff --git a/target/i386/cpu.h b/target/i386/cpu.h
> index 7c241a20420c..3a10f3242329 100644
> --- a/target/i386/cpu.h
> +++ b/target/i386/cpu.h
> @@ -422,6 +422,7 @@ typedef enum X86Seg {
> #define MSR_IA32_PERF_CAPABILITIES 0x345
> #define PERF_CAP_LBR_FMT 0x3f
> #define PERF_CAP_FULL_WRITE (1U << 13)
> +#define PERF_CAP_PEBS_BASELINE (1U << 14)
>
> #define MSR_IA32_TSX_CTRL 0x122
> #define MSR_IA32_TSCDEADLINE 0x6e0
> @@ -479,6 +480,7 @@ typedef enum X86Seg {
> /* Indicates good rep/movs microcode on some processors: */
> #define MSR_IA32_MISC_ENABLE_FASTSTRING (1ULL << 0)
> #define MSR_IA32_MISC_ENABLE_BTS_UNAVAIL (1ULL << 11)
> +#define MSR_IA32_MISC_ENABLE_PEBS_UNAVAIL (1ULL << 12)
> #define MSR_IA32_MISC_ENABLE_MWAIT (1ULL << 18)
> #define MSR_IA32_MISC_ENABLE_DEFAULT (MSR_IA32_MISC_ENABLE_FASTSTRING | \
> MSR_IA32_MISC_ENABLE_BTS_UNAVAIL)
> @@ -514,6 +516,11 @@ typedef enum X86Seg {
> #define MSR_AMD64_PERF_CNTR_GLOBAL_STATUS 0xc0000300
> #define MSR_AMD64_PERF_CNTR_GLOBAL_CTL 0xc0000301
>
> +/* Legacy DS based PEBS MSRs */
> +#define MSR_IA32_PEBS_ENABLE 0x3f1
> +#define MSR_PEBS_DATA_CFG 0x3f2
> +#define MSR_IA32_DS_AREA 0x600
> +
> #define MSR_K7_EVNTSEL0 0xc0010000
> #define MSR_K7_PERFCTR0 0xc0010004
> #define MSR_F15H_PERF_CTL0 0xc0010200
> @@ -2099,6 +2106,9 @@ typedef struct CPUArchState {
> uint64_t msr_fixed_ctr_ctrl;
> uint64_t msr_global_ctrl;
> uint64_t msr_global_status;
> + uint64_t msr_ds_area;
> + uint64_t msr_pebs_data_cfg;
> + uint64_t msr_pebs_enable;
> uint64_t msr_fixed_counters[MAX_FIXED_COUNTERS];
> uint64_t msr_gp_counters[MAX_GP_COUNTERS];
> uint64_t msr_gp_evtsel[MAX_GP_COUNTERS];
> diff --git a/target/i386/kvm/kvm.c b/target/i386/kvm/kvm.c
> index 4ba54151320f..8c4564bcbb9e 100644
> --- a/target/i386/kvm/kvm.c
> +++ b/target/i386/kvm/kvm.c
> @@ -4280,6 +4280,16 @@ static int kvm_put_msrs(X86CPU *cpu, KvmPutState level)
> kvm_msr_entry_add(cpu, MSR_CORE_PERF_GLOBAL_CTRL, 0);
> }
>
> + if (env->features[FEAT_1_EDX] & CPUID_DTS) {
> + kvm_msr_entry_add(cpu, MSR_IA32_DS_AREA, env->msr_ds_area);
> + }
> + if (!(env->msr_ia32_misc_enable & MSR_IA32_MISC_ENABLE_PEBS_UNAVAIL)) {
> + kvm_msr_entry_add(cpu, MSR_IA32_PEBS_ENABLE, env->msr_pebs_enable);
> + }
> + if (env->features[FEAT_PERF_CAPABILITIES] & PERF_CAP_PEBS_BASELINE) {
> + kvm_msr_entry_add(cpu, MSR_PEBS_DATA_CFG, env->msr_pebs_data_cfg);
> + }
> +
> /* Set the counter values. */
> for (i = 0; i < num_pmu_fixed_counters; i++) {
> kvm_msr_entry_add(cpu, MSR_CORE_PERF_FIXED_CTR0 + i,
> @@ -4900,6 +4910,16 @@ static int kvm_get_msrs(X86CPU *cpu)
> kvm_msr_entry_add(cpu, MSR_AMD64_PERF_CNTR_GLOBAL_CTL, 0);
> kvm_msr_entry_add(cpu, MSR_AMD64_PERF_CNTR_GLOBAL_STATUS, 0);
> }
> +
> + if (env->features[FEAT_1_EDX] & CPUID_DTS) {
> + kvm_msr_entry_add(cpu, MSR_IA32_DS_AREA, 0);
> + }
> + if (!(env->msr_ia32_misc_enable & MSR_IA32_MISC_ENABLE_PEBS_UNAVAIL)) {
> + kvm_msr_entry_add(cpu, MSR_IA32_PEBS_ENABLE, 0);
> + }
> + if (env->features[FEAT_PERF_CAPABILITIES] & PERF_CAP_PEBS_BASELINE) {
> + kvm_msr_entry_add(cpu, MSR_PEBS_DATA_CFG, 0);
> + }
> }
>
> if (env->mcg_cap) {
> @@ -5241,6 +5261,15 @@ static int kvm_get_msrs(X86CPU *cpu)
> env->msr_gp_evtsel[index] = msrs[i].data;
> }
> break;
> + case MSR_IA32_DS_AREA:
> + env->msr_ds_area = msrs[i].data;
> + break;
> + case MSR_PEBS_DATA_CFG:
> + env->msr_pebs_data_cfg = msrs[i].data;
> + break;
> + case MSR_IA32_PEBS_ENABLE:
> + env->msr_pebs_enable = msrs[i].data;
> + break;
> case HV_X64_MSR_HYPERCALL:
> env->msr_hv_hypercall = msrs[i].data;
> break;
> diff --git a/target/i386/machine.c b/target/i386/machine.c
> index 7d08a05835fc..5cff5d5a9db5 100644
> --- a/target/i386/machine.c
> +++ b/target/i386/machine.c
> @@ -659,6 +659,27 @@ static const VMStateDescription vmstate_msr_ia32_feature_control = {
> }
> };
>
> +static bool ds_pebs_enabled(void *opaque)
> +{
> + X86CPU *cpu = opaque;
> + CPUX86State *env = &cpu->env;
> +
> + return (env->msr_ds_area || env->msr_pebs_enable ||
> + env->msr_pebs_data_cfg);
> +}
> +
> +static const VMStateDescription vmstate_msr_ds_pebs = {
> + .name = "cpu/msr_architectural_pmu/msr_ds_pebs",
> + .version_id = 1,
> + .minimum_version_id = 1,
> + .needed = ds_pebs_enabled,
> + .fields = (const VMStateField[]){
> + VMSTATE_UINT64(env.msr_ds_area, X86CPU),
> + VMSTATE_UINT64(env.msr_pebs_data_cfg, X86CPU),
> + VMSTATE_UINT64(env.msr_pebs_enable, X86CPU),
> + VMSTATE_END_OF_LIST()}
> +};
> +
> static bool pmu_enable_needed(void *opaque)
> {
> X86CPU *cpu = opaque;
> @@ -697,7 +718,11 @@ static const VMStateDescription vmstate_msr_architectural_pmu = {
> VMSTATE_UINT64_ARRAY(env.msr_gp_counters, X86CPU, MAX_GP_COUNTERS),
> VMSTATE_UINT64_ARRAY(env.msr_gp_evtsel, X86CPU, MAX_GP_COUNTERS),
> VMSTATE_END_OF_LIST()
> - }
> + },
> + .subsections = (const VMStateDescription * const []) {
> + &vmstate_msr_ds_pebs,
> + NULL,
> + },
> };
>
> static bool mpx_needed(void *opaque)
* Re: [PATCH V3 08/13] target/i386: Make some PEBS features user-visible
2026-03-04 18:07 ` [PATCH V3 08/13] target/i386: Make some PEBS features user-visible Zide Chen
@ 2026-03-06 3:25 ` Mi, Dapeng
0 siblings, 0 replies; 25+ messages in thread
From: Mi, Dapeng @ 2026-03-06 3:25 UTC (permalink / raw)
To: Zide Chen, qemu-devel, kvm, Paolo Bonzini, Zhao Liu, Peter Xu,
Fabiano Rosas, Sandipan Das
Cc: Xiaoyao Li, Dongli Zhang
LGTM.
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
On 3/5/2026 2:07 AM, Zide Chen wrote:
> Populate selected PEBS feature names in FEAT_PERF_CAPABILITIES to make
> the corresponding bits user-visible CPU feature knobs, allowing them to
> be explicitly enabled or disabled via -cpu +/-<feature>.
>
> Once named, these bits become part of the guest CPU configuration
> contract. If a VM is configured with such a feature enabled, migration
> to a destination that does not support the feature may fail, as the
> destination cannot honor the guest-visible CPU model.
>
> The PEBS_FMT bits are not exposed, as target/i386 currently does not
> support multi-bit CPU properties.
>
> Co-developed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
> Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
> Signed-off-by: Zide Chen <zide.chen@intel.com>
> ---
> V2:
> - Add the missing comma after "pebs-arch-reg".
> - Simplify the PEBS_FMT description in the commit message.
> ---
> target/i386/cpu.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/target/i386/cpu.c b/target/i386/cpu.c
> index a69c3108f64b..89691fba45e1 100644
> --- a/target/i386/cpu.c
> +++ b/target/i386/cpu.c
> @@ -1618,10 +1618,10 @@ FeatureWordInfo feature_word_info[FEATURE_WORDS] = {
> .type = MSR_FEATURE_WORD,
> .feat_names = {
> NULL, NULL, NULL, NULL,
> + NULL, NULL, "pebs-trap", "pebs-arch-reg",
> NULL, NULL, NULL, NULL,
> - NULL, NULL, NULL, NULL,
> - NULL, "full-width-write", NULL, NULL,
> - NULL, NULL, NULL, NULL,
> + NULL, "full-width-write", "pebs-baseline", NULL,
> + NULL, "pebs-timing-info", NULL, NULL,
> NULL, NULL, NULL, NULL,
> NULL, NULL, NULL, NULL,
> NULL, NULL, NULL, NULL,
* Re: [PATCH V3 11/13] target/i386: Add pebs-fmt CPU option
2026-03-04 18:07 ` [PATCH V3 11/13] target/i386: Add pebs-fmt CPU option Zide Chen
@ 2026-03-06 5:23 ` Mi, Dapeng
0 siblings, 0 replies; 25+ messages in thread
From: Mi, Dapeng @ 2026-03-06 5:23 UTC (permalink / raw)
To: Zide Chen, qemu-devel, kvm, Paolo Bonzini, Zhao Liu, Peter Xu,
Fabiano Rosas, Sandipan Das
Cc: Xiaoyao Li, Dongli Zhang
On 3/5/2026 2:07 AM, Zide Chen wrote:
> Similar to lbr-fmt, target/i386 does not support multi-bit CPU
> properties, so the PEBS record format cannot be exposed as a
> user-visible CPU feature.
>
> Add a pebs-fmt option to allow users to specify the PEBS format via the
> command line. Since the PEBS state is part of the vmstate, this option
> is considered migratable.
>
> We do not support PEBS record format 0. Although it is a valid format
> on some very old CPUs, it is unlikely to be used in practice. This
> allows pebs-fmt=0 to be used to explicitly disable PEBS in the case of
> migratable=off.
>
> If PEBS is not enabled, mark it as unavailable in IA32_MISC_ENABLE and
> clear the PEBS-related bits in IA32_PERF_CAPABILITIES.
>
> If migratable=on on a PEBS-capable host and the PMU is enabled:
> - PEBS is disabled if pebs-fmt is not specified or pebs-fmt=0.
> - PEBS is enabled if pebs-fmt is set to the same value as the host.
>
> When migratable=off, the behavior is similar, except that omitting
> the pebs-fmt option does not disable PEBS.
>
> Signed-off-by: Zide Chen <zide.chen@intel.com>
> ---
> V3:
> - If DS is not available, make this option invalid.
> - If pebs_fmt is 0, mark PEBS unavailable.
> - Move MSR_IA32_MISC_ENABLE_PEBS_UNAVAIL code from [patch v2 11/11] to
> this patch for tighter logic.
> - Add option usage to commit message.
>
> V2: New patch.
> ---
> target/i386/cpu.c | 23 ++++++++++++++++++++++-
> target/i386/cpu.h | 7 +++++++
> target/i386/kvm/kvm-cpu.c | 1 +
> 3 files changed, 30 insertions(+), 1 deletion(-)
>
> diff --git a/target/i386/cpu.c b/target/i386/cpu.c
> index d5e00b41fb04..2e1dea65d708 100644
> --- a/target/i386/cpu.c
> +++ b/target/i386/cpu.c
> @@ -9170,6 +9170,13 @@ static void x86_cpu_reset_hold(Object *obj, ResetType type)
> env->msr_ia32_misc_enable |= MSR_IA32_MISC_ENABLE_MWAIT;
> }
>
> + if (!(env->features[FEAT_1_EDX] & CPUID_DTS) ||
> + !(env->features[FEAT_PERF_CAPABILITIES] & PERF_CAP_PEBS_FORMAT)) {
> + /* Mark PEBS unavailable and clear all PEBS related bits. */
> + env->msr_ia32_misc_enable |= MSR_IA32_MISC_ENABLE_PEBS_UNAVAIL;
> + env->features[FEAT_PERF_CAPABILITIES] &= ~0x34fc0ull;
Better to use a combined macro for the PEBS bits instead of a magic
number; 0x34fc0 is hard to read and verify.
> + }
> +
> memset(env->dr, 0, sizeof(env->dr));
> env->dr[6] = DR6_FIXED_1;
> env->dr[7] = DR7_FIXED_1;
> @@ -9784,10 +9791,17 @@ static bool x86_cpu_apply_lbr_pebs_fmt(X86CPU *cpu, uint64_t host_perf_cap,
> shift = PERF_CAP_LBR_FMT_SHIFT;
> name = "lbr";
> } else {
> - return false;
> + mask = PERF_CAP_PEBS_FMT_MASK;
> + shift = PERF_CAP_PEBS_FMT_SHIFT;
> + name = "pebs";
> }
>
> if (user_req != -1) {
> + if (!is_lbr_fmt && !(env->features[FEAT_1_EDX] & CPUID_DTS)) {
> + error_setg(errp, "vPMU: %s is unsupported without Debug Store", name);
Better to change the name to the more precise "ds pebs", since arch-PEBS
doesn't depend on DS. Thanks.
> + return false;
> + }
> +
> env->features[FEAT_PERF_CAPABILITIES] &= ~(mask << shift);
> env->features[FEAT_PERF_CAPABILITIES] |= (user_req << shift);
> }
> @@ -9825,6 +9839,11 @@ static int x86_cpu_pmu_realize(X86CPU *cpu, Error **errp)
> return -EINVAL;
> }
>
> + if (!x86_cpu_apply_lbr_pebs_fmt(cpu, host_perf_cap,
> + cpu->pebs_fmt, false, errp)) {
> + return -EINVAL;
> + }
> +
> return 0;
> }
>
> @@ -10291,6 +10310,7 @@ static void x86_cpu_initfn(Object *obj)
>
> object_property_add_alias(obj, "hv-apicv", obj, "hv-avic");
> object_property_add_alias(obj, "lbr_fmt", obj, "lbr-fmt");
> + object_property_add_alias(obj, "pebs_fmt", obj, "pebs-fmt");
>
> if (xcc->model) {
> x86_cpu_load_model(cpu, xcc->model);
> @@ -10462,6 +10482,7 @@ static const Property x86_cpu_properties[] = {
> DEFINE_PROP_INT32("node-id", X86CPU, node_id, CPU_UNSET_NUMA_NODE_ID),
> DEFINE_PROP_BOOL("pmu", X86CPU, enable_pmu, false),
> DEFINE_PROP_UINT64_CHECKMASK("lbr-fmt", X86CPU, lbr_fmt, PERF_CAP_LBR_FMT_MASK),
> + DEFINE_PROP_UINT64_CHECKMASK("pebs-fmt", X86CPU, pebs_fmt, PERF_CAP_PEBS_FMT_MASK),
>
> DEFINE_PROP_UINT32("hv-spinlocks", X86CPU, hyperv_spinlock_attempts,
> HYPERV_SPINLOCK_NEVER_NOTIFY),
> diff --git a/target/i386/cpu.h b/target/i386/cpu.h
> index a064bf8ab17e..6a9820c4041a 100644
> --- a/target/i386/cpu.h
> +++ b/target/i386/cpu.h
> @@ -422,6 +422,10 @@ typedef enum X86Seg {
> #define MSR_IA32_PERF_CAPABILITIES 0x345
> #define PERF_CAP_LBR_FMT_MASK 0x3f
> #define PERF_CAP_LBR_FMT_SHIFT 0x0
> +#define PERF_CAP_PEBS_FMT_MASK 0xf
> +#define PERF_CAP_PEBS_FMT_SHIFT 0x8
> +#define PERF_CAP_PEBS_FORMAT (PERF_CAP_PEBS_FMT_MASK << \
> + PERF_CAP_PEBS_FMT_SHIFT)
> #define PERF_CAP_FULL_WRITE (1U << 13)
> #define PERF_CAP_PEBS_BASELINE (1U << 14)
>
> @@ -2410,6 +2414,9 @@ struct ArchCPU {
> */
> uint64_t lbr_fmt;
>
> + /* PEBS_FMT bits in IA32_PERF_CAPABILITIES MSR. */
> + uint64_t pebs_fmt;
> +
> /* LMCE support can be enabled/disabled via cpu option 'lmce=on/off'. It is
> * disabled by default to avoid breaking migration between QEMU with
> * different LMCE configurations.
> diff --git a/target/i386/kvm/kvm-cpu.c b/target/i386/kvm/kvm-cpu.c
> index 1d0047d037c7..60bf3899852a 100644
> --- a/target/i386/kvm/kvm-cpu.c
> +++ b/target/i386/kvm/kvm-cpu.c
> @@ -231,6 +231,7 @@ static void kvm_cpu_instance_init(CPUState *cs)
> }
>
> cpu->lbr_fmt = -1;
> + cpu->pebs_fmt = -1;
>
> kvm_cpu_xsave_init();
> }
* Re: [PATCH V3 12/13] target/i386: Clean up Intel Debug Store feature dependencies
2026-03-04 18:07 ` [PATCH V3 12/13] target/i386: Clean up Intel Debug Store feature dependencies Zide Chen
@ 2026-03-06 5:34 ` Mi, Dapeng
2026-03-16 3:21 ` Chenyi Qiang
1 sibling, 0 replies; 25+ messages in thread
From: Mi, Dapeng @ 2026-03-06 5:34 UTC (permalink / raw)
To: Zide Chen, qemu-devel, kvm, Paolo Bonzini, Zhao Liu, Peter Xu,
Fabiano Rosas, Sandipan Das
Cc: Xiaoyao Li, Dongli Zhang
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
On 3/5/2026 2:07 AM, Zide Chen wrote:
> - 64-bit DS Area (CPUID.01H:ECX[2]) depends on DS (CPUID.01H:EDX[21]).
> - When PMU is disabled, Debug Store must not be exposed to the guest,
> which implicitly disables legacy DS-based PEBS.
>
> Signed-off-by: Zide Chen <zide.chen@intel.com>
> ---
> V3:
> - Update title to be more accurate.
> - Make DTES64 depend on DS.
> - Mark MSR_IA32_MISC_ENABLE_PEBS_UNAVAIL in previous patch.
> - Clean up the commit message.
>
> V2: New patch.
> ---
> target/i386/cpu.c | 5 +++++
> 1 file changed, 5 insertions(+)
>
> diff --git a/target/i386/cpu.c b/target/i386/cpu.c
> index 2e1dea65d708..3ff9f76cf7da 100644
> --- a/target/i386/cpu.c
> +++ b/target/i386/cpu.c
> @@ -1899,6 +1899,10 @@ static FeatureDep feature_dependencies[] = {
> .from = { FEAT_1_ECX, CPUID_EXT_PDCM },
> .to = { FEAT_PERF_CAPABILITIES, ~0ull },
> },
> + {
> + .from = { FEAT_1_EDX, CPUID_DTS},
> + .to = { FEAT_1_ECX, CPUID_EXT_DTES64},
> + },
> {
> .from = { FEAT_1_ECX, CPUID_EXT_VMX },
> .to = { FEAT_VMX_PROCBASED_CTLS, ~0ull },
> @@ -9471,6 +9475,7 @@ void x86_cpu_expand_features(X86CPU *cpu, Error **errp)
> env->features[FEAT_1_ECX] &= ~CPUID_EXT_PDCM;
> }
>
> + env->features[FEAT_1_EDX] &= ~CPUID_DTS;
> env->features[FEAT_7_0_EDX] &= ~CPUID_7_0_EDX_ARCH_LBR;
> }
>
* Re: [PATCH V3 13/13] target/i386: Add Topdown metrics feature support
2026-03-04 18:07 ` [PATCH V3 13/13] target/i386: Add Topdown metrics feature support Zide Chen
@ 2026-03-06 5:37 ` Mi, Dapeng
0 siblings, 0 replies; 25+ messages in thread
From: Mi, Dapeng @ 2026-03-06 5:37 UTC (permalink / raw)
To: Zide Chen, qemu-devel, kvm, Paolo Bonzini, Zhao Liu, Peter Xu,
Fabiano Rosas, Sandipan Das
Cc: Xiaoyao Li, Dongli Zhang
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
On 3/5/2026 2:07 AM, Zide Chen wrote:
> From: Dapeng Mi <dapeng1.mi@linux.intel.com>
>
> IA32_PERF_CAPABILITIES.PERF_METRICS_AVAILABLE (bit 15) indicates that
> the CPU provides built-in support for TMA L1 metrics through
> the PERF_METRICS MSR. Expose it as a user-visible CPU feature
> ("perf-metrics"), allowing it to be explicitly enabled or disabled and
> used with migratable guests.
>
> Plumb IA32_PERF_METRICS through the KVM MSR get/put paths to be able
> to save and restore this MSR.
>
> Migrate IA32_PERF_METRICS MSR using a new subsection of
> vmstate_msr_architectural_pmu.
>
> Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
> Co-developed-by: Zide Chen <zide.chen@intel.com>
> Signed-off-by: Zide Chen <zide.chen@intel.com>
> ---
> V3: New patch
> ---
> target/i386/cpu.c | 2 +-
> target/i386/cpu.h | 3 +++
> target/i386/kvm/kvm.c | 10 ++++++++++
> target/i386/machine.c | 19 +++++++++++++++++++
> 4 files changed, 33 insertions(+), 1 deletion(-)
>
> diff --git a/target/i386/cpu.c b/target/i386/cpu.c
> index 3ff9f76cf7da..88cfd3529851 100644
> --- a/target/i386/cpu.c
> +++ b/target/i386/cpu.c
> @@ -1620,7 +1620,7 @@ FeatureWordInfo feature_word_info[FEATURE_WORDS] = {
> NULL, NULL, NULL, NULL,
> NULL, NULL, "pebs-trap", "pebs-arch-reg",
> NULL, NULL, NULL, NULL,
> - NULL, "full-width-write", "pebs-baseline", NULL,
> + NULL, "full-width-write", "pebs-baseline", "perf-metrics",
> NULL, "pebs-timing-info", NULL, NULL,
> NULL, NULL, NULL, NULL,
> NULL, NULL, NULL, NULL,
> diff --git a/target/i386/cpu.h b/target/i386/cpu.h
> index 6a9820c4041a..5d0ed692ae06 100644
> --- a/target/i386/cpu.h
> +++ b/target/i386/cpu.h
> @@ -428,6 +428,7 @@ typedef enum X86Seg {
> PERF_CAP_PEBS_FMT_SHIFT)
> #define PERF_CAP_FULL_WRITE (1U << 13)
> #define PERF_CAP_PEBS_BASELINE (1U << 14)
> +#define PERF_CAP_TOPDOWN (1U << 15)
>
> #define MSR_IA32_TSX_CTRL 0x122
> #define MSR_IA32_TSCDEADLINE 0x6e0
> @@ -514,6 +515,7 @@ typedef enum X86Seg {
> #define MSR_CORE_PERF_FIXED_CTR0 0x309
> #define MSR_CORE_PERF_FIXED_CTR1 0x30a
> #define MSR_CORE_PERF_FIXED_CTR2 0x30b
> +#define MSR_PERF_METRICS 0x329
> #define MSR_CORE_PERF_FIXED_CTR_CTRL 0x38d
> #define MSR_CORE_PERF_GLOBAL_STATUS 0x38e
> #define MSR_CORE_PERF_GLOBAL_CTRL 0x38f
> @@ -2111,6 +2113,7 @@ typedef struct CPUArchState {
> uint64_t msr_fixed_ctr_ctrl;
> uint64_t msr_global_ctrl;
> uint64_t msr_global_status;
> + uint64_t msr_perf_metrics;
> uint64_t msr_ds_area;
> uint64_t msr_pebs_data_cfg;
> uint64_t msr_pebs_enable;
> diff --git a/target/i386/kvm/kvm.c b/target/i386/kvm/kvm.c
> index 8c4564bcbb9e..3f533cd65708 100644
> --- a/target/i386/kvm/kvm.c
> +++ b/target/i386/kvm/kvm.c
> @@ -4295,6 +4295,10 @@ static int kvm_put_msrs(X86CPU *cpu, KvmPutState level)
> kvm_msr_entry_add(cpu, MSR_CORE_PERF_FIXED_CTR0 + i,
> env->msr_fixed_counters[i]);
> }
> + /* SDM: Write IA32_PERF_METRICS after fixed counter 3. */
> + if (env->features[FEAT_PERF_CAPABILITIES] & PERF_CAP_TOPDOWN) {
> + kvm_msr_entry_add(cpu, MSR_PERF_METRICS, env->msr_perf_metrics);
> + }
> for (i = 0; i < num_pmu_gp_counters; i++) {
> kvm_msr_entry_add(cpu, perf_cntr_base + i,
> env->msr_gp_counters[i]);
> @@ -4868,6 +4872,9 @@ static int kvm_get_msrs(X86CPU *cpu)
> kvm_msr_entry_add(cpu, MSR_CORE_PERF_GLOBAL_CTRL, 0);
> kvm_msr_entry_add(cpu, MSR_CORE_PERF_GLOBAL_STATUS, 0);
> }
> + if (env->features[FEAT_PERF_CAPABILITIES] & PERF_CAP_TOPDOWN) {
> + kvm_msr_entry_add(cpu, MSR_PERF_METRICS, 0);
> + }
> for (i = 0; i < num_pmu_fixed_counters; i++) {
> kvm_msr_entry_add(cpu, MSR_CORE_PERF_FIXED_CTR0 + i, 0);
> }
> @@ -5234,6 +5241,9 @@ static int kvm_get_msrs(X86CPU *cpu)
> case MSR_AMD64_PERF_CNTR_GLOBAL_STATUS:
> env->msr_global_status = msrs[i].data;
> break;
> + case MSR_PERF_METRICS:
> + env->msr_perf_metrics = msrs[i].data;
> + break;
> case MSR_CORE_PERF_FIXED_CTR0 ... MSR_CORE_PERF_FIXED_CTR0 + MAX_FIXED_COUNTERS - 1:
> env->msr_fixed_counters[index - MSR_CORE_PERF_FIXED_CTR0] = msrs[i].data;
> break;
> diff --git a/target/i386/machine.c b/target/i386/machine.c
> index 5cff5d5a9db5..6b7141cfead7 100644
> --- a/target/i386/machine.c
> +++ b/target/i386/machine.c
> @@ -680,6 +680,24 @@ static const VMStateDescription vmstate_msr_ds_pebs = {
> VMSTATE_END_OF_LIST()}
> };
>
> +static bool perf_metrics_enabled(void *opaque)
> +{
> + X86CPU *cpu = opaque;
> + CPUX86State *env = &cpu->env;
> +
> + return !!env->msr_perf_metrics;
> +}
> +
> +static const VMStateDescription vmstate_msr_perf_metrics = {
> + .name = "cpu/msr_architectural_pmu/msr_perf_metrics",
> + .version_id = 1,
> + .minimum_version_id = 1,
> + .needed = perf_metrics_enabled,
> + .fields = (const VMStateField[]){
> + VMSTATE_UINT64(env.msr_perf_metrics, X86CPU),
> + VMSTATE_END_OF_LIST()}
> +};
> +
> static bool pmu_enable_needed(void *opaque)
> {
> X86CPU *cpu = opaque;
> @@ -721,6 +739,7 @@ static const VMStateDescription vmstate_msr_architectural_pmu = {
> },
> .subsections = (const VMStateDescription * const []) {
> &vmstate_msr_ds_pebs,
> + &vmstate_msr_perf_metrics,
> NULL,
> },
> };
* Re: [PATCH V3 12/13] target/i386: Clean up Intel Debug Store feature dependencies
2026-03-04 18:07 ` [PATCH V3 12/13] target/i386: Clean up Intel Debug Store feature dependencies Zide Chen
2026-03-06 5:34 ` Mi, Dapeng
@ 2026-03-16 3:21 ` Chenyi Qiang
2026-03-16 6:57 ` Xiaoyao Li
2026-03-16 18:17 ` Chen, Zide
1 sibling, 2 replies; 25+ messages in thread
From: Chenyi Qiang @ 2026-03-16 3:21 UTC (permalink / raw)
To: Zide Chen, qemu-devel, kvm, Paolo Bonzini, Zhao Liu, Peter Xu,
Fabiano Rosas, Sandipan Das, Xiaoyao Li
Cc: Dongli Zhang, Dapeng Mi
On 3/5/2026 2:07 AM, Zide Chen wrote:
> - 64-bit DS Area (CPUID.01H:ECX[2]) depends on DS (CPUID.01H:EDX[21]).
> - When PMU is disabled, Debug Store must not be exposed to the guest,
> which implicitly disables legacy DS-based PEBS.
>
> Signed-off-by: Zide Chen <zide.chen@intel.com>
> ---
> V3:
> - Update title to be more accurate.
> - Make DTES64 depend on DS.
> - Mark MSR_IA32_MISC_ENABLE_PEBS_UNAVAIL in previous patch.
> - Clean up the commit message.
>
> V2: New patch.
> ---
> target/i386/cpu.c | 5 +++++
> 1 file changed, 5 insertions(+)
>
> diff --git a/target/i386/cpu.c b/target/i386/cpu.c
> index 2e1dea65d708..3ff9f76cf7da 100644
> --- a/target/i386/cpu.c
> +++ b/target/i386/cpu.c
> @@ -1899,6 +1899,10 @@ static FeatureDep feature_dependencies[] = {
> .from = { FEAT_1_ECX, CPUID_EXT_PDCM },
> .to = { FEAT_PERF_CAPABILITIES, ~0ull },
> },
> + {
> + .from = { FEAT_1_EDX, CPUID_DTS},
> + .to = { FEAT_1_ECX, CPUID_EXT_DTES64},
> + },
> {
> .from = { FEAT_1_ECX, CPUID_EXT_VMX },
> .to = { FEAT_VMX_PROCBASED_CTLS, ~0ull },
> @@ -9471,6 +9475,7 @@ void x86_cpu_expand_features(X86CPU *cpu, Error **errp)
> env->features[FEAT_1_ECX] &= ~CPUID_EXT_PDCM;
> }
>
> + env->features[FEAT_1_EDX] &= ~CPUID_DTS;
> env->features[FEAT_7_0_EDX] &= ~CPUID_7_0_EDX_ARCH_LBR;
This change, along with the original CPUID_7_0_EDX_ARCH_LBR clear, will also affect the configuration for TD VMs.
For a TD VM, enable_pmu controls TDX_TD_ATTRIBUTES_PERFMON, CPUID_DTS is fixed to 1, and arch_lbr is controlled by XFAM[15].
These features are configured independently of each other. So how about
skipping the TD VM case, like:
diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index 98e95d0842..458bfb07b9 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -9736,8 +9736,10 @@ void x86_cpu_expand_features(X86CPU *cpu, Error **errp)
env->features[FEAT_1_ECX] &= ~CPUID_EXT_PDCM;
}
- env->features[FEAT_1_EDX] &= ~CPUID_DTS;
- env->features[FEAT_7_0_EDX] &= ~CPUID_7_0_EDX_ARCH_LBR;
+ if (!is_tdx_vm()) {
+ env->features[FEAT_1_EDX] &= ~CPUID_DTS;
+ env->features[FEAT_7_0_EDX] &= ~CPUID_7_0_EDX_ARCH_LBR;
+ }
}
for (i = 0; i < ARRAY_SIZE(feature_dependencies); i++) {
> }
>
* Re: [PATCH V3 12/13] target/i386: Clean up Intel Debug Store feature dependencies
2026-03-16 3:21 ` Chenyi Qiang
@ 2026-03-16 6:57 ` Xiaoyao Li
2026-03-16 18:17 ` Chen, Zide
2026-03-16 18:17 ` Chen, Zide
1 sibling, 1 reply; 25+ messages in thread
From: Xiaoyao Li @ 2026-03-16 6:57 UTC (permalink / raw)
To: Chenyi Qiang, Zide Chen, qemu-devel, kvm, Paolo Bonzini, Zhao Liu,
Peter Xu, Fabiano Rosas, Sandipan Das
Cc: Dongli Zhang, Dapeng Mi
On 3/16/2026 11:21 AM, Chenyi Qiang wrote:
>
>
> On 3/5/2026 2:07 AM, Zide Chen wrote:
>> - 64-bit DS Area (CPUID.01H:ECX[2]) depends on DS (CPUID.01H:EDX[21]).
>> - When PMU is disabled, Debug Store must not be exposed to the guest,
>> which implicitly disables legacy DS-based PEBS.
>>
>> Signed-off-by: Zide Chen <zide.chen@intel.com>
>> ---
>> V3:
>> - Update title to be more accurate.
>> - Make DTES64 depend on DS.
>> - Mark MSR_IA32_MISC_ENABLE_PEBS_UNAVAIL in previous patch.
>> - Clean up the commit message.
>>
>> V2: New patch.
>> ---
>> target/i386/cpu.c | 5 +++++
>> 1 file changed, 5 insertions(+)
>>
>> diff --git a/target/i386/cpu.c b/target/i386/cpu.c
>> index 2e1dea65d708..3ff9f76cf7da 100644
>> --- a/target/i386/cpu.c
>> +++ b/target/i386/cpu.c
>> @@ -1899,6 +1899,10 @@ static FeatureDep feature_dependencies[] = {
>> .from = { FEAT_1_ECX, CPUID_EXT_PDCM },
>> .to = { FEAT_PERF_CAPABILITIES, ~0ull },
>> },
>> + {
>> + .from = { FEAT_1_EDX, CPUID_DTS},
>> + .to = { FEAT_1_ECX, CPUID_EXT_DTES64},
>> + },
>> {
>> .from = { FEAT_1_ECX, CPUID_EXT_VMX },
>> .to = { FEAT_VMX_PROCBASED_CTLS, ~0ull },
>> @@ -9471,6 +9475,7 @@ void x86_cpu_expand_features(X86CPU *cpu, Error **errp)
>> env->features[FEAT_1_ECX] &= ~CPUID_EXT_PDCM;
>> }
>>
>> + env->features[FEAT_1_EDX] &= ~CPUID_DTS;
>> env->features[FEAT_7_0_EDX] &= ~CPUID_7_0_EDX_ARCH_LBR;
>
> This change, along with the original CPUID_7_0_EDX_ARCH_LBR clear, will also affect the configuration for TD VMs.
> For a TD VM, enable_pmu controls TDX_TD_ATTRIBUTES_PERFMON, CPUID_DTS is fixed to 1, and arch_lbr is controlled by XFAM[15].
> These features' configuration do not have dependencies on each other. So how about skipping the TD VM case like:
I think the dependency between enable_pmu and ARCH_LBR still applies to
TDX. This dependency is defined by QEMU: enable_pmu controls all the
PMU features, including ARCH_LBR. So we can enforce the rule that
"XFAM[15] cannot be 1 when enable_pmu == 0".
For CPUID_DTS, it seems OK to expose it even when PMU is disabled?
I sort of disagree with the statement in the changelog:
- When PMU is disabled, Debug Store must not be exposed to the guest,
which implicitly disables legacy DS-based PEBS
If I read the SDM correctly, the availability of legacy PEBS is
enumerated by the MSR_IA32_MISC_ENABLE.PEBS_UNAVAILABLE bit. And the
Linux code also shows that DTS can be 1 while PEBS is 0:
if (boot_cpu_has(X86_FEATURE_DS)) {
unsigned int l1, l2;
rdmsr(MSR_IA32_MISC_ENABLE, l1, l2);
if (!(l1 & MSR_IA32_MISC_ENABLE_BTS_UNAVAIL))
set_cpu_cap(c, X86_FEATURE_BTS);
if (!(l1 & MSR_IA32_MISC_ENABLE_PEBS_UNAVAIL))
set_cpu_cap(c, X86_FEATURE_PEBS);
}
Since it would need nasty special-casing for TDX, I would vote for not
clearing CPUID_DTS here when !PMU, unless a strong reason is provided.
> diff --git a/target/i386/cpu.c b/target/i386/cpu.c
> index 98e95d0842..458bfb07b9 100644
> --- a/target/i386/cpu.c
> +++ b/target/i386/cpu.c
> @@ -9736,8 +9736,10 @@ void x86_cpu_expand_features(X86CPU *cpu, Error **errp)
> env->features[FEAT_1_ECX] &= ~CPUID_EXT_PDCM;
> }
>
> - env->features[FEAT_1_EDX] &= ~CPUID_DTS;
> - env->features[FEAT_7_0_EDX] &= ~CPUID_7_0_EDX_ARCH_LBR;
> + if (!is_tdx_vm()) {
> + env->features[FEAT_1_EDX] &= ~CPUID_DTS;
> + env->features[FEAT_7_0_EDX] &= ~CPUID_7_0_EDX_ARCH_LBR;
> + }
> }
>
> for (i = 0; i < ARRAY_SIZE(feature_dependencies); i++) {
>
>
>
>> }
>>
>
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH V3 12/13] target/i386: Clean up Intel Debug Store feature dependencies
2026-03-16 6:57 ` Xiaoyao Li
@ 2026-03-16 18:17 ` Chen, Zide
0 siblings, 0 replies; 25+ messages in thread
From: Chen, Zide @ 2026-03-16 18:17 UTC (permalink / raw)
To: Xiaoyao Li, Chenyi Qiang, qemu-devel, kvm, Paolo Bonzini,
Zhao Liu, Peter Xu, Fabiano Rosas, Sandipan Das
Cc: Dongli Zhang, Dapeng Mi, Hector Cao
On 3/15/2026 11:57 PM, Xiaoyao Li wrote:
> On 3/16/2026 11:21 AM, Chenyi Qiang wrote:
>>
>>
>> On 3/5/2026 2:07 AM, Zide Chen wrote:
>>> - 64-bit DS Area (CPUID.01H:ECX[2]) depends on DS (CPUID.01H:EDX[21]).
>>> - When PMU is disabled, Debug Store must not be exposed to the guest,
>>> which implicitly disables legacy DS-based PEBS.
>>>
>>> Signed-off-by: Zide Chen <zide.chen@intel.com>
>>> ---
>>> V3:
>>> - Update title to be more accurate.
>>> - Make DTES64 depend on DS.
>>> - Mark MSR_IA32_MISC_ENABLE_PEBS_UNAVAIL in previous patch.
>>> - Clean up the commit message.
>>>
>>> V2: New patch.
>>> ---
>>> target/i386/cpu.c | 5 +++++
>>> 1 file changed, 5 insertions(+)
>>>
>>> diff --git a/target/i386/cpu.c b/target/i386/cpu.c
>>> index 2e1dea65d708..3ff9f76cf7da 100644
>>> --- a/target/i386/cpu.c
>>> +++ b/target/i386/cpu.c
>>> @@ -1899,6 +1899,10 @@ static FeatureDep feature_dependencies[] = {
>>> .from = { FEAT_1_ECX, CPUID_EXT_PDCM },
>>> .to = { FEAT_PERF_CAPABILITIES, ~0ull },
>>> },
>>> + {
>>> + .from = { FEAT_1_EDX, CPUID_DTS},
>>> + .to = { FEAT_1_ECX, CPUID_EXT_DTES64},
>>> + },
>>> {
>>> .from = { FEAT_1_ECX, CPUID_EXT_VMX },
>>> .to = { FEAT_VMX_PROCBASED_CTLS, ~0ull },
>>> @@ -9471,6 +9475,7 @@ void x86_cpu_expand_features(X86CPU *cpu, Error
>>> **errp)
>>> env->features[FEAT_1_ECX] &= ~CPUID_EXT_PDCM;
>>> }
>>> + env->features[FEAT_1_EDX] &= ~CPUID_DTS;
>>> env->features[FEAT_7_0_EDX] &= ~CPUID_7_0_EDX_ARCH_LBR;
>>
>> This change, along with the original CPUID_7_0_EDX_ARCH_LBR clear,
>> will also affect the configuration for TD VMs.
>> For a TD VM, enable_pmu controls TDX_TD_ATTRIBUTES_PERFMON, CPUID_DTS
>> is fixed to 1, and arch_lbr is controlled by XFAM[15].
>> These features' configuration do not have dependencies on each other.
>> So how about skipping the TD VM case like:
>
> I think the dependency between enable_pmu and ARCH_LBR still applies to
> TDX. This dependency is defined by QEMU that enable_pmu controls all the
> PMU features, including ARCH_LBR. So we can enforce the rule of
> "XFAM[15] cannot be 1 when enable_pmu == 0"
Yes, when !enable_pmu, XSTATE_ARCH_LBR_MASK will be cleared in
FEAT_XSAVE_XSS_LO, and therefore the ARCH_LBR bit in XFAM will also be
cleared. As a result, CPUID_7_0_EDX_ARCH_LBR will ultimately be cleared
from the guest CPUID. However, it is better to follow the spec and leave
it unchanged here.
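That clearing chain can be sketched as follows. This is a hypothetical Python model of the behavior described above, not QEMU's actual code; the constant names merely mirror QEMU's:

```python
# Hypothetical model of the !enable_pmu -> XSS -> XFAM -> CPUID chain.
XSTATE_ARCH_LBR_BIT = 15
XSTATE_ARCH_LBR_MASK = 1 << XSTATE_ARCH_LBR_BIT

def resolve_arch_lbr(enable_pmu: bool, xss_lo: int, cpuid_arch_lbr: bool) -> bool:
    """With the PMU disabled, QEMU clears the ARCH_LBR state bit in
    FEAT_XSAVE_XSS_LO; the XFAM bit and the guest CPUID bit then follow."""
    if not enable_pmu:
        xss_lo &= ~XSTATE_ARCH_LBR_MASK
    xfam_arch_lbr = bool(xss_lo & XSTATE_ARCH_LBR_MASK)
    # CPUID.(EAX=07H,ECX=0):EDX ARCH_LBR cannot survive without XFAM[15]
    return xfam_arch_lbr and cpuid_arch_lbr
```

So even without the explicit clear in x86_cpu_expand_features(), the guest-visible ARCH_LBR bit ends up 0 when enable_pmu == 0 in this model.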
> For CPUID_DTS, it seems OK to expose it even when PMU is disabled?
> I sort of disagree with the statement in the changelog:
>
> - When PMU is disabled, Debug Store must not be exposed to the guest,
> which implicitly disables legacy DS-based PEBS
As you mentioned, the dependencies of enable_pmu are defined by QEMU.
If the DS bit remains set when !enable_pmu, it looks inconsistent.
> If I read SDM correctly, the availability of legacy PEBS can be
> enumerated by MSR_IA32_MISC_ENABLE.PEBS_UNAVAILABLE bit. And from the
> linux code, it also proves that DTS can be 1 while PEBS is 0:
>
> if (boot_cpu_has(X86_FEATURE_DS)) {
> unsigned int l1, l2;
>
> rdmsr(MSR_IA32_MISC_ENABLE, l1, l2);
> if (!(l1 & MSR_IA32_MISC_ENABLE_BTS_UNAVAIL))
> set_cpu_cap(c, X86_FEATURE_BTS);
> if (!(l1 & MSR_IA32_MISC_ENABLE_PEBS_UNAVAIL))
> set_cpu_cap(c, X86_FEATURE_PEBS);
> }
MSR_IA32_MISC_ENABLE_PEBS_UNAVAIL is sufficient to prevent the
guest from enumerating the PEBS feature.
However, keeping CPUID_DTS set has some negative impacts. For example,
the guest may incorrectly conclude that IA32_DS_AREA is available.
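To illustrate the concern, here is a hypothetical guest-side model (not Linux or QEMU code; only the bit positions come from the SDM): a guest keys DS-area usage off CPUID.01H:EDX[21] alone, so leaving DTS set while the PMU is disabled invites it to program IA32_DS_AREA:

```python
# Hypothetical guest enumeration model; the function is illustrative.
CPUID_DTS = 1 << 21                  # CPUID.01H:EDX[21], Debug Store
MISC_ENABLE_PEBS_UNAVAIL = 1 << 12   # IA32_MISC_ENABLE[12], PEBS unavailable

def guest_programs_ds_area(cpuid_1_edx: int, misc_enable: int) -> bool:
    # The DS save area (IA32_DS_AREA) is gated on the CPUID DTS bit only;
    # IA32_MISC_ENABLE merely tells the guest whether PEBS/BTS may use it.
    return bool(cpuid_1_edx & CPUID_DTS)
```

In this model, with the PMU disabled but DTS still set, `guest_programs_ds_area(CPUID_DTS, MISC_ENABLE_PEBS_UNAVAIL)` is still True, which is the inconsistency being argued about.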
> Since it will need nasty handling for TDX case, I would vote not
> clearing CPUID_DTS here when !PMU, unless a strong reason is provided.
The "nasty handling for TDX" code is already there; adding CPUID_DTS to
it may not make it worse. :) Accepting Chenyi's suggested code seems
reasonable and clean.
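For readers following along, the feature_dependencies table in the patch resolves roughly like this. This is a minimal Python model of the .from/.to propagation, not QEMU's implementation; only the register/bit names mirror the diff:

```python
# Minimal model: if a dependency's 'frm' bits are absent, clear its 'to' bits.
CPUID_DTS = 1 << 21         # FEAT_1_EDX
CPUID_EXT_DTES64 = 1 << 2   # FEAT_1_ECX

feature_dependencies = [
    {"frm": ("FEAT_1_EDX", CPUID_DTS), "to": ("FEAT_1_ECX", CPUID_EXT_DTES64)},
]

def expand_features(features: dict) -> dict:
    for dep in feature_dependencies:
        frm_reg, frm_mask = dep["frm"]
        to_reg, to_mask = dep["to"]
        if not (features.get(frm_reg, 0) & frm_mask):
            # Dependency not satisfied: drop the dependent feature bits.
            features[to_reg] = features.get(to_reg, 0) & ~to_mask
    return features
```

In this sketch, clearing DTS implicitly clears DTES64, which is what the new dependency entry in the patch achieves.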
>
>> diff --git a/target/i386/cpu.c b/target/i386/cpu.c
>> index 98e95d0842..458bfb07b9 100644
>> --- a/target/i386/cpu.c
>> +++ b/target/i386/cpu.c
>> @@ -9736,8 +9736,10 @@ void x86_cpu_expand_features(X86CPU *cpu, Error
>> **errp)
>> env->features[FEAT_1_ECX] &= ~CPUID_EXT_PDCM;
>> }
>>
>> - env->features[FEAT_1_EDX] &= ~CPUID_DTS;
>> - env->features[FEAT_7_0_EDX] &= ~CPUID_7_0_EDX_ARCH_LBR;
>> + if (!is_tdx_vm()) {
>> + env->features[FEAT_1_EDX] &= ~CPUID_DTS;
>> + env->features[FEAT_7_0_EDX] &= ~CPUID_7_0_EDX_ARCH_LBR;
>> + }
>> }
>>
>> for (i = 0; i < ARRAY_SIZE(feature_dependencies); i++) {
>>
>>
>>
>>> }
>>>
>>
>
* Re: [PATCH V3 12/13] target/i386: Clean up Intel Debug Store feature dependencies
2026-03-16 3:21 ` Chenyi Qiang
2026-03-16 6:57 ` Xiaoyao Li
@ 2026-03-16 18:17 ` Chen, Zide
1 sibling, 0 replies; 25+ messages in thread
From: Chen, Zide @ 2026-03-16 18:17 UTC (permalink / raw)
To: Chenyi Qiang, qemu-devel, kvm, Paolo Bonzini, Zhao Liu, Peter Xu,
Fabiano Rosas, Sandipan Das, Xiaoyao Li
Cc: Dongli Zhang, Dapeng Mi, Hector Cao
On 3/15/2026 8:21 PM, Chenyi Qiang wrote:
>
>
> On 3/5/2026 2:07 AM, Zide Chen wrote:
>> - 64-bit DS Area (CPUID.01H:ECX[2]) depends on DS (CPUID.01H:EDX[21]).
>> - When PMU is disabled, Debug Store must not be exposed to the guest,
>> which implicitly disables legacy DS-based PEBS.
>>
>> Signed-off-by: Zide Chen <zide.chen@intel.com>
>> ---
>> V3:
>> - Update title to be more accurate.
>> - Make DTES64 depend on DS.
>> - Mark MSR_IA32_MISC_ENABLE_PEBS_UNAVAIL in previous patch.
>> - Clean up the commit message.
>>
>> V2: New patch.
>> ---
>> target/i386/cpu.c | 5 +++++
>> 1 file changed, 5 insertions(+)
>>
>> diff --git a/target/i386/cpu.c b/target/i386/cpu.c
>> index 2e1dea65d708..3ff9f76cf7da 100644
>> --- a/target/i386/cpu.c
>> +++ b/target/i386/cpu.c
>> @@ -1899,6 +1899,10 @@ static FeatureDep feature_dependencies[] = {
>> .from = { FEAT_1_ECX, CPUID_EXT_PDCM },
>> .to = { FEAT_PERF_CAPABILITIES, ~0ull },
>> },
>> + {
>> + .from = { FEAT_1_EDX, CPUID_DTS},
>> + .to = { FEAT_1_ECX, CPUID_EXT_DTES64},
>> + },
>> {
>> .from = { FEAT_1_ECX, CPUID_EXT_VMX },
>> .to = { FEAT_VMX_PROCBASED_CTLS, ~0ull },
>> @@ -9471,6 +9475,7 @@ void x86_cpu_expand_features(X86CPU *cpu, Error **errp)
>> env->features[FEAT_1_ECX] &= ~CPUID_EXT_PDCM;
>> }
>>
>> + env->features[FEAT_1_EDX] &= ~CPUID_DTS;
>> env->features[FEAT_7_0_EDX] &= ~CPUID_7_0_EDX_ARCH_LBR;
>
> This change, along with the original CPUID_7_0_EDX_ARCH_LBR clear, will also affect the configuration for TD VMs.
> For a TD VM, enable_pmu controls TDX_TD_ATTRIBUTES_PERFMON, CPUID_DTS is fixed to 1, and arch_lbr is controlled by XFAM[15].
Yes, I agree. In the TDX case, neither the DTS nor the arch_lbr bit
should be cleared.
> These features' configuration do not have dependencies on each other. So how about skipping the TD VM case like:
>
> diff --git a/target/i386/cpu.c b/target/i386/cpu.c
> index 98e95d0842..458bfb07b9 100644
> --- a/target/i386/cpu.c
> +++ b/target/i386/cpu.c
> @@ -9736,8 +9736,10 @@ void x86_cpu_expand_features(X86CPU *cpu, Error **errp)
> env->features[FEAT_1_ECX] &= ~CPUID_EXT_PDCM;
> }
>
> - env->features[FEAT_1_EDX] &= ~CPUID_DTS;
> - env->features[FEAT_7_0_EDX] &= ~CPUID_7_0_EDX_ARCH_LBR;
> + if (!is_tdx_vm()) {
> + env->features[FEAT_1_EDX] &= ~CPUID_DTS;
> + env->features[FEAT_7_0_EDX] &= ~CPUID_7_0_EDX_ARCH_LBR;
> + }
> }
>
> for (i = 0; i < ARRAY_SIZE(feature_dependencies); i++) {
>
>
>
>> }
>>
>
end of thread, other threads:[~2026-03-16 18:18 UTC | newest]
Thread overview: 25+ messages
2026-03-04 18:06 [PATCH V3 00/13] target/i386: Misc PMU fixes and enabling Zide Chen
2026-03-04 18:07 ` [PATCH V3 01/13] target/i386: Disable unsupported BTS for guest Zide Chen
2026-03-04 18:07 ` [PATCH V3 02/13] target/i386: Don't save/restore PERF_GLOBAL_OVF_CTRL MSRs Zide Chen
2026-03-04 18:07 ` [PATCH V3 03/13] target/i386: Gate enable_pmu on kvm_enabled() Zide Chen
2026-03-04 18:07 ` [PATCH V3 04/13] target/i386: Adjust maximum number of PMU counters Zide Chen
2026-03-06 3:02 ` Mi, Dapeng
2026-03-04 18:07 ` [PATCH V3 05/13] target/i386: Support full-width writes for perf counters Zide Chen
2026-03-04 18:07 ` [PATCH V3 06/13] target/i386: Increase MSR_BUF_SIZE and split KVM_[GET/SET]_MSRS calls Zide Chen
2026-03-06 3:09 ` Mi, Dapeng
2026-03-04 18:07 ` [PATCH V3 07/13] target/i386: Add get/set/migrate support for legacy PEBS MSRs Zide Chen
2026-03-06 3:17 ` Mi, Dapeng
2026-03-04 18:07 ` [PATCH V3 08/13] target/i386: Make some PEBS features user-visible Zide Chen
2026-03-06 3:25 ` Mi, Dapeng
2026-03-04 18:07 ` [PATCH V3 09/13] target/i386: Clean up LBR format handling Zide Chen
2026-03-04 18:07 ` [PATCH V3 10/13] target/i386: Refactor " Zide Chen
2026-03-04 18:07 ` [PATCH V3 11/13] target/i386: Add pebs-fmt CPU option Zide Chen
2026-03-06 5:23 ` Mi, Dapeng
2026-03-04 18:07 ` [PATCH V3 12/13] target/i386: Clean up Intel Debug Store feature dependencies Zide Chen
2026-03-06 5:34 ` Mi, Dapeng
2026-03-16 3:21 ` Chenyi Qiang
2026-03-16 6:57 ` Xiaoyao Li
2026-03-16 18:17 ` Chen, Zide
2026-03-16 18:17 ` Chen, Zide
2026-03-04 18:07 ` [PATCH V3 13/13] target/i386: Add Topdown metrics feature support Zide Chen
2026-03-06 5:37 ` Mi, Dapeng