From: Sean Christopherson <seanjc@google.com>
To: Marc Zyngier <maz@kernel.org>, Oliver Upton <oupton@kernel.org>,
Tianrui Zhao <zhaotianrui@loongson.cn>,
Bibo Mao <maobibo@loongson.cn>,
Huacai Chen <chenhuacai@kernel.org>,
Anup Patel <anup@brainfault.org>, Paul Walmsley <pjw@kernel.org>,
Palmer Dabbelt <palmer@dabbelt.com>,
Albert Ou <aou@eecs.berkeley.edu>, Xin Li <xin@zytor.com>,
"H. Peter Anvin" <hpa@zytor.com>,
Andy Lutomirski <luto@kernel.org>,
Peter Zijlstra <peterz@infradead.org>,
Ingo Molnar <mingo@redhat.com>,
Arnaldo Carvalho de Melo <acme@kernel.org>,
Namhyung Kim <namhyung@kernel.org>,
Sean Christopherson <seanjc@google.com>,
Paolo Bonzini <pbonzini@redhat.com>
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
kvm@vger.kernel.org, loongarch@lists.linux.dev,
kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org,
Mingwei Zhang <mizhang@google.com>,
Xudong Hao <xudong.hao@intel.com>,
Sandipan Das <sandipan.das@amd.com>,
Dapeng Mi <dapeng1.mi@linux.intel.com>,
Xiong Zhang <xiong.y.zhang@linux.intel.com>,
Manali Shukla <manali.shukla@amd.com>,
Jim Mattson <jmattson@google.com>
Subject: [PATCH v6 21/44] KVM: x86/pmu: Load/save GLOBAL_CTRL via entry/exit fields for mediated PMU
Date: Fri, 5 Dec 2025 16:16:57 -0800
Message-ID: <20251206001720.468579-22-seanjc@google.com>
In-Reply-To: <20251206001720.468579-1-seanjc@google.com>
From: Dapeng Mi <dapeng1.mi@linux.intel.com>
When running a guest with a mediated PMU, context switch PERF_GLOBAL_CTRL
via the dedicated VMCS fields for both host and guest. For the host,
always zero GLOBAL_CTRL on exit as the guest's state will still be loaded
in hardware (KVM will context switch the bulk of PMU state outside of the
inner run loop). For the guest, use the dedicated fields to atomically
load and save PERF_GLOBAL_CTRL on all entry/exits.
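For reference, the resulting VMCS programming boils down to the following
(a condensed sketch of the diff below, using only the helpers it touches;
not verbatim code):

	/* Constant host state: zero GLOBAL_CTRL on exit so that host
	 * activity doesn't bleed into the still-loaded guest counters. */
	if (enable_mediated_pmu)
		vmcs_write64(HOST_IA32_PERF_GLOBAL_CTRL, 0);

	/* Per-vCPU: atomically load/save the guest's PERF_GLOBAL_CTRL
	 * via the dedicated entry/exit controls. */
	if (enable_mediated_pmu) {
		bool is_mediated_pmu = kvm_vcpu_has_mediated_pmu(vcpu);

		vm_entry_controls_changebit(vmx,
			VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL, is_mediated_pmu);
		vm_exit_controls_changebit(vmx,
			VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL |
			VM_EXIT_SAVE_IA32_PERF_GLOBAL_CTRL, is_mediated_pmu);
	}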
For now, require VM_EXIT_SAVE_IA32_PERF_GLOBAL_CTRL support (introduced by
Sapphire Rapids). KVM can support CPUs that lack that exit control by saving
PERF_GLOBAL_CTRL via the MSR save list, a.k.a. the MSR auto-store list, but
defer that support as it adds a small amount of complexity and is somewhat
unique.
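Concretely, the Intel mediated PMU capability check below gates on both
dedicated fields being available (illustrative condensation of the
pmu_intel.c hunk, not the full change):

	if (WARN_ON_ONCE(!cpu_has_load_perf_global_ctrl()))
		return false;

	/* No VM_EXIT_SAVE_IA32_PERF_GLOBAL_CTRL => no mediated PMU, for now. */
	if (!cpu_has_save_perf_global_ctrl())
		return false;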
To minimize VM-Entry latency, propagate IA32_PERF_GLOBAL_CTRL to the VMCS
on-demand. But to minimize complexity, read IA32_PERF_GLOBAL_CTRL out of
the VMCS on all non-failing VM-Exits. I.e. partially cache the MSR.
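I.e. the cached value is kept coherent roughly as follows (sketch drawn
from the changes below, not verbatim; emulated/host-initiated writes take
the first leg, every non-failing VM-Exit takes the second):

	/* WRMSR emulation: propagate the new value to the VMCS on-demand. */
	if (kvm_vcpu_has_mediated_pmu(vcpu))
		kvm_pmu_call(write_global_ctrl)(data);

	/* VM-Exit: refresh the cache, as the guest may have written the
	 * pass-through MSR directly. */
	if (!msr_write_intercepted(vmx, MSR_CORE_PERF_GLOBAL_CTRL))
		vcpu_to_pmu(vcpu)->global_ctrl =
			vmcs_read64(GUEST_IA32_PERF_GLOBAL_CTRL);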
KVM could track GLOBAL_CTRL as an EXREG and defer all reads, but writes
are rare, i.e. the dirty tracking for an EXREG is unnecessary, and it's
not obvious that shaving ~15-20 cycles per exit is meaningful given the
total overhead associated with mediated PMU context switches.
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Co-developed-by: Mingwei Zhang <mizhang@google.com>
Signed-off-by: Mingwei Zhang <mizhang@google.com>
Tested-by: Xudong Hao <xudong.hao@intel.com>
Co-developed-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/include/asm/kvm-x86-pmu-ops.h | 2 ++
arch/x86/include/asm/vmx.h | 1 +
arch/x86/kvm/pmu.c | 13 +++++++++--
arch/x86/kvm/pmu.h | 3 ++-
arch/x86/kvm/vmx/capabilities.h | 6 +++++
arch/x86/kvm/vmx/pmu_intel.c | 25 ++++++++++++++++++++-
arch/x86/kvm/vmx/vmx.c | 31 +++++++++++++++++++++++++-
arch/x86/kvm/vmx/vmx.h | 3 ++-
8 files changed, 78 insertions(+), 6 deletions(-)
diff --git a/arch/x86/include/asm/kvm-x86-pmu-ops.h b/arch/x86/include/asm/kvm-x86-pmu-ops.h
index 9159bf1a4730..ad2cc82abf79 100644
--- a/arch/x86/include/asm/kvm-x86-pmu-ops.h
+++ b/arch/x86/include/asm/kvm-x86-pmu-ops.h
@@ -23,5 +23,7 @@ KVM_X86_PMU_OP_OPTIONAL(reset)
KVM_X86_PMU_OP_OPTIONAL(deliver_pmi)
KVM_X86_PMU_OP_OPTIONAL(cleanup)
+KVM_X86_PMU_OP_OPTIONAL(write_global_ctrl)
+
#undef KVM_X86_PMU_OP
#undef KVM_X86_PMU_OP_OPTIONAL
diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
index c85c50019523..b92ff87e3560 100644
--- a/arch/x86/include/asm/vmx.h
+++ b/arch/x86/include/asm/vmx.h
@@ -107,6 +107,7 @@
#define VM_EXIT_PT_CONCEAL_PIP 0x01000000
#define VM_EXIT_CLEAR_IA32_RTIT_CTL 0x02000000
#define VM_EXIT_LOAD_CET_STATE 0x10000000
+#define VM_EXIT_SAVE_IA32_PERF_GLOBAL_CTRL 0x40000000
#define VM_EXIT_ALWAYSON_WITHOUT_TRUE_MSR 0x00036dff
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 182ff2d8d119..c4a32bfb26f5 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -103,7 +103,7 @@ void kvm_pmu_ops_update(const struct kvm_pmu_ops *pmu_ops)
#undef __KVM_X86_PMU_OP
}
-void kvm_init_pmu_capability(const struct kvm_pmu_ops *pmu_ops)
+void kvm_init_pmu_capability(struct kvm_pmu_ops *pmu_ops)
{
bool is_intel = boot_cpu_data.x86_vendor == X86_VENDOR_INTEL;
int min_nr_gp_ctrs = pmu_ops->MIN_NR_GP_COUNTERS;
@@ -141,6 +141,9 @@ void kvm_init_pmu_capability(const struct kvm_pmu_ops *pmu_ops)
!pmu_ops->is_mediated_pmu_supported(&kvm_host_pmu))
enable_mediated_pmu = false;
+ if (!enable_mediated_pmu)
+ pmu_ops->write_global_ctrl = NULL;
+
if (!enable_pmu) {
memset(&kvm_pmu_cap, 0, sizeof(kvm_pmu_cap));
return;
@@ -836,6 +839,9 @@ int kvm_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
diff = pmu->global_ctrl ^ data;
pmu->global_ctrl = data;
reprogram_counters(pmu, diff);
+
+ if (kvm_vcpu_has_mediated_pmu(vcpu))
+ kvm_pmu_call(write_global_ctrl)(data);
}
break;
case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
@@ -930,8 +936,11 @@ void kvm_pmu_refresh(struct kvm_vcpu *vcpu)
* in the global controls). Emulate that behavior when refreshing the
* PMU so that userspace doesn't need to manually set PERF_GLOBAL_CTRL.
*/
- if (kvm_pmu_has_perf_global_ctrl(pmu) && pmu->nr_arch_gp_counters)
+ if (kvm_pmu_has_perf_global_ctrl(pmu) && pmu->nr_arch_gp_counters) {
pmu->global_ctrl = GENMASK_ULL(pmu->nr_arch_gp_counters - 1, 0);
+ if (kvm_vcpu_has_mediated_pmu(vcpu))
+ kvm_pmu_call(write_global_ctrl)(pmu->global_ctrl);
+ }
bitmap_set(pmu->all_valid_pmc_idx, 0, pmu->nr_arch_gp_counters);
bitmap_set(pmu->all_valid_pmc_idx, KVM_FIXED_PMC_BASE_IDX,
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 506c203587ea..2ff469334c1a 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -38,6 +38,7 @@ struct kvm_pmu_ops {
void (*cleanup)(struct kvm_vcpu *vcpu);
bool (*is_mediated_pmu_supported)(struct x86_pmu_capability *host_pmu);
+ void (*write_global_ctrl)(u64 global_ctrl);
const u64 EVENTSEL_EVENT;
const int MAX_NR_GP_COUNTERS;
@@ -183,7 +184,7 @@ static inline bool pmc_is_locally_enabled(struct kvm_pmc *pmc)
extern struct x86_pmu_capability kvm_pmu_cap;
-void kvm_init_pmu_capability(const struct kvm_pmu_ops *pmu_ops);
+void kvm_init_pmu_capability(struct kvm_pmu_ops *pmu_ops);
void kvm_pmu_recalc_pmc_emulation(struct kvm_pmu *pmu, struct kvm_pmc *pmc);
diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h
index 26302fd6dd9c..4e371c93ae16 100644
--- a/arch/x86/kvm/vmx/capabilities.h
+++ b/arch/x86/kvm/vmx/capabilities.h
@@ -109,6 +109,12 @@ static inline bool cpu_has_load_cet_ctrl(void)
{
return (vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_CET_STATE);
}
+
+static inline bool cpu_has_save_perf_global_ctrl(void)
+{
+ return vmcs_config.vmexit_ctrl & VM_EXIT_SAVE_IA32_PERF_GLOBAL_CTRL;
+}
+
static inline bool cpu_has_vmx_mpx(void)
{
return vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_BNDCFGS;
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 050c21298213..dbab7cca7a62 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -778,7 +778,29 @@ static bool intel_pmu_is_mediated_pmu_supported(struct x86_pmu_capability *host_
* Require v4+ for MSR_CORE_PERF_GLOBAL_STATUS_SET, and full-width
* writes so that KVM can precisely load guest counter values.
*/
- return host_pmu->version >= 4 && host_perf_cap & PERF_CAP_FW_WRITES;
+ if (host_pmu->version < 4 || !(host_perf_cap & PERF_CAP_FW_WRITES))
+ return false;
+
+ /*
+ * All CPUs that support a mediated PMU are expected to support loading
+ * PERF_GLOBAL_CTRL via dedicated VMCS fields.
+ */
+ if (WARN_ON_ONCE(!cpu_has_load_perf_global_ctrl()))
+ return false;
+
+ /*
+ * KVM doesn't yet support mediated PMU on CPUs without support for
+ * saving PERF_GLOBAL_CTRL via a dedicated VMCS field.
+ */
+ if (!cpu_has_save_perf_global_ctrl())
+ return false;
+
+ return true;
+}
+
+static void intel_pmu_write_global_ctrl(u64 global_ctrl)
+{
+ vmcs_write64(GUEST_IA32_PERF_GLOBAL_CTRL, global_ctrl);
}
struct kvm_pmu_ops intel_pmu_ops __initdata = {
@@ -794,6 +816,7 @@ struct kvm_pmu_ops intel_pmu_ops __initdata = {
.cleanup = intel_pmu_cleanup,
.is_mediated_pmu_supported = intel_pmu_is_mediated_pmu_supported,
+ .write_global_ctrl = intel_pmu_write_global_ctrl,
.EVENTSEL_EVENT = ARCH_PERFMON_EVENTSEL_EVENT,
.MAX_NR_GP_COUNTERS = KVM_MAX_NR_INTEL_GP_COUNTERS,
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 9f71ba99cf70..72b92cea9d72 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -4294,6 +4294,18 @@ static void vmx_recalc_msr_intercepts(struct kvm_vcpu *vcpu)
vmx_set_intercept_for_msr(vcpu, MSR_IA32_S_CET, MSR_TYPE_RW, intercept);
}
+ if (enable_mediated_pmu) {
+ bool is_mediated_pmu = kvm_vcpu_has_mediated_pmu(vcpu);
+ struct vcpu_vmx *vmx = to_vmx(vcpu);
+
+ vm_entry_controls_changebit(vmx,
+ VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL, is_mediated_pmu);
+
+ vm_exit_controls_changebit(vmx,
+ VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL |
+ VM_EXIT_SAVE_IA32_PERF_GLOBAL_CTRL, is_mediated_pmu);
+ }
+
/*
* x2APIC and LBR MSR intercepts are modified on-demand and cannot be
* filtered by userspace.
@@ -4476,6 +4488,16 @@ void vmx_set_constant_host_state(struct vcpu_vmx *vmx)
vmcs_writel(HOST_SSP, 0);
vmcs_writel(HOST_INTR_SSP_TABLE, 0);
}
+
+ /*
+ * When running a guest with a mediated PMU, guest state is resident in
+ * hardware after VM-Exit. Zero PERF_GLOBAL_CTRL on exit so that host
+ * activity doesn't bleed into the guest counters. When running with
+ * an emulated PMU, PERF_GLOBAL_CTRL is dynamically computed on every
+ * entry/exit to merge guest and host PMU usage.
+ */
+ if (enable_mediated_pmu)
+ vmcs_write64(HOST_IA32_PERF_GLOBAL_CTRL, 0);
}
void set_cr4_guest_host_mask(struct vcpu_vmx *vmx)
@@ -4543,7 +4565,8 @@ static u32 vmx_get_initial_vmexit_ctrl(void)
VM_EXIT_CLEAR_IA32_RTIT_CTL);
/* Loading of EFER and PERF_GLOBAL_CTRL are toggled dynamically */
return vmexit_ctrl &
- ~(VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL | VM_EXIT_LOAD_IA32_EFER);
+ ~(VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL | VM_EXIT_LOAD_IA32_EFER |
+ VM_EXIT_SAVE_IA32_PERF_GLOBAL_CTRL);
}
void vmx_refresh_apicv_exec_ctrl(struct kvm_vcpu *vcpu)
@@ -7270,6 +7293,9 @@ static void atomic_switch_perf_msrs(struct vcpu_vmx *vmx)
struct perf_guest_switch_msr *msrs;
struct kvm_pmu *pmu = vcpu_to_pmu(&vmx->vcpu);
+ if (kvm_vcpu_has_mediated_pmu(&vmx->vcpu))
+ return;
+
pmu->host_cross_mapped_mask = 0;
if (pmu->pebs_enable & pmu->global_ctrl)
intel_pmu_cross_mapped_check(pmu);
@@ -7572,6 +7598,9 @@ fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu, u64 run_flags)
vmx->loaded_vmcs->launched = 1;
+ if (!msr_write_intercepted(vmx, MSR_CORE_PERF_GLOBAL_CTRL))
+ vcpu_to_pmu(vcpu)->global_ctrl = vmcs_read64(GUEST_IA32_PERF_GLOBAL_CTRL);
+
vmx_recover_nmi_blocking(vmx);
vmx_complete_interrupts(vmx);
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index bc3ed3145d7e..d7a96c84371f 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -510,7 +510,8 @@ static inline u8 vmx_get_rvi(void)
VM_EXIT_CLEAR_BNDCFGS | \
VM_EXIT_PT_CONCEAL_PIP | \
VM_EXIT_CLEAR_IA32_RTIT_CTL | \
- VM_EXIT_LOAD_CET_STATE)
+ VM_EXIT_LOAD_CET_STATE | \
+ VM_EXIT_SAVE_IA32_PERF_GLOBAL_CTRL)
#define KVM_REQUIRED_VMX_PIN_BASED_VM_EXEC_CONTROL \
(PIN_BASED_EXT_INTR_MASK | \
--
2.52.0.223.gf5cc29aaa4-goog